Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'linus' into locking/core, to pick up fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>

+3440 -1545
+4 -4
CREDITS
···
 Linus
 ----------

-M: Matt Mackal
+N: Matt Mackal
 E: mpm@selenic.com
 D: SLOB slab allocator
···
 N: Andi Kleen
 E: andi@firstfloor.org
-U: http://www.halobates.de
+W: http://www.halobates.de
 D: network, x86, NUMA, various hacks
 S: Schwalbenstr. 96
 S: 85551 Ottobrunn
···
 D: Synopsys Designware PCI host bridge driver

 N: Gabor Kuti
-M: seasons@falcon.sch.bme.hu
-M: seasons@makosteszta.sote.hu
+E: seasons@falcon.sch.bme.hu
+E: seasons@makosteszta.sote.hu
 D: Original author of software suspend

 N: Jaroslav Kysela
+20 -4
Documentation/devicetree/bindings/net/ethernet.txt
···
 - max-speed: number, specifies maximum speed in Mbit/s supported by the device;
 - max-frame-size: number, maximum transfer unit (IEEE defined MTU), rather than
   the maximum frame size (there's contradiction in ePAPR).
-- phy-mode: string, operation mode of the PHY interface; supported values are
-  "mii", "gmii", "sgmii", "qsgmii", "tbi", "rev-mii", "rmii", "rgmii", "rgmii-id",
-  "rgmii-rxid", "rgmii-txid", "rtbi", "smii", "xgmii", "trgmii"; this is now a
-  de-facto standard property;
+- phy-mode: string, operation mode of the PHY interface. This is now a de-facto
+  standard property; supported values are:
+  * "mii"
+  * "gmii"
+  * "sgmii"
+  * "qsgmii"
+  * "tbi"
+  * "rev-mii"
+  * "rmii"
+  * "rgmii" (RX and TX delays are added by the MAC when required)
+  * "rgmii-id" (RGMII with internal RX and TX delays provided by the PHY, the
+    MAC should not add the RX or TX delays in this case)
+  * "rgmii-rxid" (RGMII with internal RX delay provided by the PHY, the MAC
+    should not add an RX delay in this case)
+  * "rgmii-txid" (RGMII with internal TX delay provided by the PHY, the MAC
+    should not add a TX delay in this case)
+  * "rtbi"
+  * "smii"
+  * "xgmii"
+  * "trgmii"
 - phy-connection-type: the same as "phy-mode" property but described in ePAPR;
 - phy-handle: phandle, specifies a reference to a node representing a PHY
   device; this property is described in ePAPR and so preferred;
+5 -2
Documentation/networking/nf_conntrack-sysctl.txt
···
 protocols.

 nf_conntrack_helper - BOOLEAN
-	0 - disabled
-	not 0 - enabled (default)
+	0 - disabled (default)
+	not 0 - enabled

 	Enable automatic conntrack helper assignment.
+	If disabled it is required to set up iptables rules to assign
+	helpers to connections. See the CT target description in the
+	iptables-extensions(8) man page for further information.

 nf_conntrack_icmp_timeout - INTEGER (seconds)
 	default 30
+14 -9
MAINTAINERS
···
 Q: Patchwork web based patch tracking system site
 T: SCM tree type and location.
    Type is one of: git, hg, quilt, stgit, topgit
+B: Bug tracking system location.
 S: Status, one of the following:
    Supported: Someone is actually paid to look after this.
    Maintained: Someone actually looks after it.
···
 W: https://01.org/linux-acpi
 Q: https://patchwork.kernel.org/project/linux-acpi/list/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
+B: https://bugzilla.kernel.org
 S: Supported
 F: drivers/acpi/
 F: drivers/pnp/pnpacpi/
···
 W: https://github.com/acpica/acpica/
 Q: https://patchwork.kernel.org/project/linux-acpi/list/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
+B: https://bugzilla.kernel.org
+B: https://bugs.acpica.org
 S: Supported
 F: drivers/acpi/acpica/
 F: include/acpi/
···
 M: Zhang Rui <rui.zhang@intel.com>
 L: linux-acpi@vger.kernel.org
 W: https://01.org/linux-acpi
+B: https://bugzilla.kernel.org
 S: Supported
 F: drivers/acpi/fan.c
···
 M: Zhang Rui <rui.zhang@intel.com>
 L: linux-acpi@vger.kernel.org
 W: https://01.org/linux-acpi
+B: https://bugzilla.kernel.org
 S: Supported
 F: drivers/acpi/*thermal*
···
 M: Zhang Rui <rui.zhang@intel.com>
 L: linux-acpi@vger.kernel.org
 W: https://01.org/linux-acpi
+B: https://bugzilla.kernel.org
 S: Supported
 F: drivers/acpi/acpi_video.c
···
 M: "Rafael J. Wysocki" <rjw@rjwysocki.net>
 M: Pavel Machek <pavel@ucw.cz>
 L: linux-pm@vger.kernel.org
+B: https://bugzilla.kernel.org
 S: Supported
 F: arch/x86/power/
 F: drivers/base/power/
···
 F: drivers/pci/host/*layerscape*

 PCI DRIVER FOR IMX6
-M: Richard Zhu <Richard.Zhu@freescale.com>
+M: Richard Zhu <hongxing.zhu@nxp.com>
 M: Lucas Stach <l.stach@pengutronix.de>
 L: linux-pci@vger.kernel.org
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
+F: Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
 F: drivers/pci/host/*imx6*

 PCI DRIVER FOR TI KEYSTONE
···
 PCI DRIVER FOR SYNOPSIS DESIGNWARE
 M: Jingoo Han <jingoohan1@gmail.com>
-M: Pratyush Anand <pratyush.anand@gmail.com>
-L: linux-pci@vger.kernel.org
-S: Maintained
-F: drivers/pci/host/*designware*
-
-PCI DRIVER FOR SYNOPSYS PROTOTYPING DEVICE
-M: Jose Abreu <Jose.Abreu@synopsys.com>
+M: Joao Pinto <Joao.Pinto@synopsys.com>
 L: linux-pci@vger.kernel.org
 S: Maintained
 F: Documentation/devicetree/bindings/pci/designware-pcie.txt
-F: drivers/pci/host/pcie-designware-plat.c
+F: drivers/pci/host/*designware*

 PCI DRIVER FOR GENERIC OF HOSTS
 M: Will Deacon <will.deacon@arm.com>
···
 M: "Rafael J. Wysocki" <rjw@rjwysocki.net>
 L: linux-pm@vger.kernel.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
+B: https://bugzilla.kernel.org
 S: Supported
 F: drivers/base/power/
 F: include/linux/pm.h
···
 M: Len Brown <len.brown@intel.com>
 M: Pavel Machek <pavel@ucw.cz>
 L: linux-pm@vger.kernel.org
+B: https://bugzilla.kernel.org
 S: Supported
 F: Documentation/power/
 F: arch/x86/kernel/acpi/
+9 -4
Makefile
···
 VERSION = 4
 PATCHLEVEL = 9
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc8
 NAME = Psychotic Stoned Sheep

 # *DOCUMENTATION*
···
 include/config/auto.conf: ;
 endif # $(dot-config)

+# For the kernel to actually contain only the needed exported symbols,
+# we have to build modules as well to determine what those symbols are.
+# (this can be evaluated only once include/config/auto.conf has been included)
+ifdef CONFIG_TRIM_UNUSED_KSYMS
+  KBUILD_MODULES := 1
+endif
+
 # The all: target is the default when no target is given on the
 # command line.
 # This allow a user to issue only 'make' to build a kernel including modules
···
 endif
 ifdef CONFIG_TRIM_UNUSED_KSYMS
 	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/adjust_autoksyms.sh \
-	  "$(MAKE) KBUILD_MODULES=1 -f $(srctree)/Makefile vmlinux_prereq"
+	  "$(MAKE) -f $(srctree)/Makefile vmlinux"
 endif

 # standalone target for easier testing
···
 prepare1: prepare2 $(version_h) include/generated/utsrelease.h \
                    include/config/auto.conf
 	$(cmd_crmodverdir)
-	$(Q)test -e include/generated/autoksyms.h || \
-	    touch include/generated/autoksyms.h

 archprepare: archheaders archscripts prepare1 scripts_basic
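The CONFIG_TRIM_UNUSED_KSYMS hunks above hinge on one idea: modules must be built alongside vmlinux so the build can see which exported symbols modules actually reference, and trim the rest. A minimal sketch of that selection step, in Python with purely illustrative module and symbol names (this is a toy model, not the real kbuild/adjust_autoksyms.sh logic):

```python
# Toy model of symbol trimming: keep only exported symbols that at least one
# module references. Module names and symbol sets below are made up.

def needed_ksyms(module_undefined, kernel_exports):
    """Return the exported symbols referenced by at least one module."""
    used = set()
    for undefined in module_undefined.values():
        used |= undefined & kernel_exports
    return used

modules = {
    "mod_a.ko": {"memcpy", "csum_partial"},
    "mod_b.ko": {"memcpy", "local_helper"},  # local_helper is not exported
}
exports = {"memcpy", "csum_partial", "__do_div64"}

kept = needed_ksyms(modules, exports)
print(sorted(kept))  # __do_div64 is referenced by no module, so it is trimmed
```

In the real build, adjust_autoksyms.sh re-invokes make until this symbol set stops changing, which is why the hunk above drops the separate `vmlinux_prereq` pass and simply re-runs `make vmlinux`.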
+5 -4
arch/arc/include/asm/delay.h
···
 static inline void __delay(unsigned long loops)
 {
 	__asm__ __volatile__(
-	"	lp  1f	\n"
-	"	nop	\n"
-	"1:		\n"
-	: "+l"(loops));
+	"	mov lp_count, %0	\n"
+	"	lp  1f			\n"
+	"	nop			\n"
+	"1:				\n"
+	: : "r"(loops));
 }

 extern void __bad_udelay(void);
+1 -1
arch/arc/include/asm/pgtable.h
···
 #define pte_page(pte)		pfn_to_page(pte_pfn(pte))
 #define mk_pte(page, prot)	pfn_pte(page_to_pfn(page), prot)
-#define pfn_pte(pfn, prot)	__pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+#define pfn_pte(pfn, prot)	__pte(__pfn_to_phys(pfn) | pgprot_val(prot))

 /* Don't use virt_to_pfn for macros below: could cause truncations for PAE40*/
 #define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)
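The pfn_pte() change above matters for ARC PAE40, where the physical address is wider than a 32-bit `unsigned long`: shifting the pfn in 32-bit arithmetic silently drops the high bits, while `__pfn_to_phys()` widens to `phys_addr_t` first. A small Python sketch of the failure mode, using a mask to stand in for 32-bit arithmetic (a toy model, not kernel code):

```python
# Toy model of the pfn_pte() truncation bug: 32-bit math wraps past 4 GiB,
# widened (phys_addr_t-style) math keeps all address bits.

PAGE_SHIFT = 12
U32_MASK = 0xFFFFFFFF

def pfn_to_pte_old(pfn):
    # (pfn << PAGE_SHIFT) evaluated in 32-bit arithmetic: wraps past 4 GiB
    return (pfn << PAGE_SHIFT) & U32_MASK

def pfn_to_pte_new(pfn):
    # widen first (phys_addr_t is 64-bit with PAE40), keep all address bits
    return pfn << PAGE_SHIFT

pfn = 0x100000                   # page frame sitting at the 4 GiB boundary
print(hex(pfn_to_pte_old(pfn)))  # 0x0 -- high bits truncated
print(hex(pfn_to_pte_new(pfn)))  # 0x100000000 -- correct wide address
```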
+1 -1
arch/arc/mm/cache.c
···
 static int l2_line_sz;
 static int ioc_exists;
-int slc_enable = 1, ioc_enable = 1;
+int slc_enable = 1, ioc_enable = 0;
 unsigned long perip_base = ARC_UNCACHED_ADDR_SPACE; /* legacy value for boot */
 unsigned long perip_end = 0xFFFFFFFF; /* legacy value */
+1 -1
arch/arm/boot/dts/Makefile
···
 	sun4i-a10-pcduino2.dtb \
 	sun4i-a10-pov-protab2-ips9.dtb
 dtb-$(CONFIG_MACH_SUN5I) += \
-	ntc-gr8-evb.dtb \
 	sun5i-a10s-auxtek-t003.dtb \
 	sun5i-a10s-auxtek-t004.dtb \
 	sun5i-a10s-mk802.dtb \
···
 	sun5i-a13-olinuxino-micro.dtb \
 	sun5i-a13-q8-tablet.dtb \
 	sun5i-a13-utoo-p66.dtb \
+	sun5i-gr8-evb.dtb \
 	sun5i-r8-chip.dtb
 dtb-$(CONFIG_MACH_SUN6I) += \
 	sun6i-a31-app4-evb1.dtb \
+2 -3
arch/arm/boot/dts/imx7s.dtsi
···
 	reg = <0x30730000 0x10000>;
 	interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
 	clocks = <&clks IMX7D_LCDIF_PIXEL_ROOT_CLK>,
-		<&clks IMX7D_CLK_DUMMY>,
-		<&clks IMX7D_CLK_DUMMY>;
-	clock-names = "pix", "axi", "disp_axi";
+		<&clks IMX7D_LCDIF_PIXEL_ROOT_CLK>;
+	clock-names = "pix", "axi";
 	status = "disabled";
 };
+1 -1
arch/arm/boot/dts/ntc-gr8-evb.dts → arch/arm/boot/dts/sun5i-gr8-evb.dts
···
 */

 /dts-v1/;
-#include "ntc-gr8.dtsi"
+#include "sun5i-gr8.dtsi"
 #include "sunxi-common-regulators.dtsi"

 #include <dt-bindings/gpio/gpio.h>
arch/arm/boot/dts/ntc-gr8.dtsi → arch/arm/boot/dts/sun5i-gr8.dtsi
+4
arch/arm/boot/dts/orion5x-linkstation-lsgl.dts
···
 	gpios = <&gpio0 9 GPIO_ACTIVE_HIGH>;
 };

+&sata {
+	nr-ports = <2>;
+};
+
 &ehci1 {
 	status = "okay";
 };
+16
arch/arm/boot/dts/stih407-family.dtsi
···
 	clock-frequency = <400000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_i2c0_default>;
+	#address-cells = <1>;
+	#size-cells = <0>;

 	status = "disabled";
 };
···
 	clock-frequency = <400000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_i2c1_default>;
+	#address-cells = <1>;
+	#size-cells = <0>;

 	status = "disabled";
 };
···
 	clock-frequency = <400000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_i2c2_default>;
+	#address-cells = <1>;
+	#size-cells = <0>;

 	status = "disabled";
 };
···
 	clock-frequency = <400000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_i2c3_default>;
+	#address-cells = <1>;
+	#size-cells = <0>;

 	status = "disabled";
 };
···
 	clock-frequency = <400000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_i2c4_default>;
+	#address-cells = <1>;
+	#size-cells = <0>;

 	status = "disabled";
 };
···
 	clock-frequency = <400000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_i2c5_default>;
+	#address-cells = <1>;
+	#size-cells = <0>;

 	status = "disabled";
 };
···
 	clock-frequency = <400000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_i2c10_default>;
+	#address-cells = <1>;
+	#size-cells = <0>;

 	status = "disabled";
 };
···
 	clock-frequency = <400000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_i2c11_default>;
+	#address-cells = <1>;
+	#size-cells = <0>;

 	status = "disabled";
 };
+1 -1
arch/arm/boot/dts/sun8i-h3.dtsi
···
 };

 uart3_pins: uart3 {
-	allwinner,pins = "PG13", "PG14";
+	allwinner,pins = "PA13", "PA14";
 	allwinner,function = "uart3";
 	allwinner,drive = <SUN4I_PINCTRL_10_MA>;
 	allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
-1
arch/arm/include/asm/Kbuild
···
 generic-y += emergency-restart.h
 generic-y += errno.h
 generic-y += exec.h
-generic-y += export.h
 generic-y += ioctl.h
 generic-y += ipcbuf.h
 generic-y += irq_regs.h
+1 -1
arch/arm/kernel/Makefile
···
 obj-$(CONFIG_CPU_IDLE)		+= cpuidle.o
 obj-$(CONFIG_ISA_DMA_API)	+= dma.o
 obj-$(CONFIG_FIQ)		+= fiq.o fiqasm.o
-obj-$(CONFIG_MODULES)		+= module.o
+obj-$(CONFIG_MODULES)		+= armksyms.o module.o
 obj-$(CONFIG_ARM_MODULE_PLTS)	+= module-plts.o
 obj-$(CONFIG_ISA_DMA)		+= dma-isa.o
 obj-$(CONFIG_PCI)		+= bios32.o isa.o
+183
arch/arm/kernel/armksyms.c
+/*
+ * linux/arch/arm/kernel/armksyms.c
+ *
+ * Copyright (C) 2000 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/export.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/cryptohash.h>
+#include <linux/delay.h>
+#include <linux/in6.h>
+#include <linux/syscalls.h>
+#include <linux/uaccess.h>
+#include <linux/io.h>
+#include <linux/arm-smccc.h>
+
+#include <asm/checksum.h>
+#include <asm/ftrace.h>
+
+/*
+ * libgcc functions - functions that are used internally by the
+ * compiler... (prototypes are not correct though, but that
+ * doesn't really matter since they're not versioned).
+ */
+extern void __ashldi3(void);
+extern void __ashrdi3(void);
+extern void __divsi3(void);
+extern void __lshrdi3(void);
+extern void __modsi3(void);
+extern void __muldi3(void);
+extern void __ucmpdi2(void);
+extern void __udivsi3(void);
+extern void __umodsi3(void);
+extern void __do_div64(void);
+extern void __bswapsi2(void);
+extern void __bswapdi2(void);
+
+extern void __aeabi_idiv(void);
+extern void __aeabi_idivmod(void);
+extern void __aeabi_lasr(void);
+extern void __aeabi_llsl(void);
+extern void __aeabi_llsr(void);
+extern void __aeabi_lmul(void);
+extern void __aeabi_uidiv(void);
+extern void __aeabi_uidivmod(void);
+extern void __aeabi_ulcmp(void);
+
+extern void fpundefinstr(void);
+
+void mmioset(void *, unsigned int, size_t);
+void mmiocpy(void *, const void *, size_t);
+
+/* platform dependent support */
+EXPORT_SYMBOL(arm_delay_ops);
+
+/* networking */
+EXPORT_SYMBOL(csum_partial);
+EXPORT_SYMBOL(csum_partial_copy_from_user);
+EXPORT_SYMBOL(csum_partial_copy_nocheck);
+EXPORT_SYMBOL(__csum_ipv6_magic);
+
+/* io */
+#ifndef __raw_readsb
+EXPORT_SYMBOL(__raw_readsb);
+#endif
+#ifndef __raw_readsw
+EXPORT_SYMBOL(__raw_readsw);
+#endif
+#ifndef __raw_readsl
+EXPORT_SYMBOL(__raw_readsl);
+#endif
+#ifndef __raw_writesb
+EXPORT_SYMBOL(__raw_writesb);
+#endif
+#ifndef __raw_writesw
+EXPORT_SYMBOL(__raw_writesw);
+#endif
+#ifndef __raw_writesl
+EXPORT_SYMBOL(__raw_writesl);
+#endif
+
+/* string / mem functions */
+EXPORT_SYMBOL(strchr);
+EXPORT_SYMBOL(strrchr);
+EXPORT_SYMBOL(memset);
+EXPORT_SYMBOL(memcpy);
+EXPORT_SYMBOL(memmove);
+EXPORT_SYMBOL(memchr);
+EXPORT_SYMBOL(__memzero);
+
+EXPORT_SYMBOL(mmioset);
+EXPORT_SYMBOL(mmiocpy);
+
+#ifdef CONFIG_MMU
+EXPORT_SYMBOL(copy_page);
+
+EXPORT_SYMBOL(arm_copy_from_user);
+EXPORT_SYMBOL(arm_copy_to_user);
+EXPORT_SYMBOL(arm_clear_user);
+
+EXPORT_SYMBOL(__get_user_1);
+EXPORT_SYMBOL(__get_user_2);
+EXPORT_SYMBOL(__get_user_4);
+EXPORT_SYMBOL(__get_user_8);
+
+#ifdef __ARMEB__
+EXPORT_SYMBOL(__get_user_64t_1);
+EXPORT_SYMBOL(__get_user_64t_2);
+EXPORT_SYMBOL(__get_user_64t_4);
+EXPORT_SYMBOL(__get_user_32t_8);
+#endif
+
+EXPORT_SYMBOL(__put_user_1);
+EXPORT_SYMBOL(__put_user_2);
+EXPORT_SYMBOL(__put_user_4);
+EXPORT_SYMBOL(__put_user_8);
+#endif
+
+/* gcc lib functions */
+EXPORT_SYMBOL(__ashldi3);
+EXPORT_SYMBOL(__ashrdi3);
+EXPORT_SYMBOL(__divsi3);
+EXPORT_SYMBOL(__lshrdi3);
+EXPORT_SYMBOL(__modsi3);
+EXPORT_SYMBOL(__muldi3);
+EXPORT_SYMBOL(__ucmpdi2);
+EXPORT_SYMBOL(__udivsi3);
+EXPORT_SYMBOL(__umodsi3);
+EXPORT_SYMBOL(__do_div64);
+EXPORT_SYMBOL(__bswapsi2);
+EXPORT_SYMBOL(__bswapdi2);
+
+#ifdef CONFIG_AEABI
+EXPORT_SYMBOL(__aeabi_idiv);
+EXPORT_SYMBOL(__aeabi_idivmod);
+EXPORT_SYMBOL(__aeabi_lasr);
+EXPORT_SYMBOL(__aeabi_llsl);
+EXPORT_SYMBOL(__aeabi_llsr);
+EXPORT_SYMBOL(__aeabi_lmul);
+EXPORT_SYMBOL(__aeabi_uidiv);
+EXPORT_SYMBOL(__aeabi_uidivmod);
+EXPORT_SYMBOL(__aeabi_ulcmp);
+#endif
+
+/* bitops */
+EXPORT_SYMBOL(_set_bit);
+EXPORT_SYMBOL(_test_and_set_bit);
+EXPORT_SYMBOL(_clear_bit);
+EXPORT_SYMBOL(_test_and_clear_bit);
+EXPORT_SYMBOL(_change_bit);
+EXPORT_SYMBOL(_test_and_change_bit);
+EXPORT_SYMBOL(_find_first_zero_bit_le);
+EXPORT_SYMBOL(_find_next_zero_bit_le);
+EXPORT_SYMBOL(_find_first_bit_le);
+EXPORT_SYMBOL(_find_next_bit_le);
+
+#ifdef __ARMEB__
+EXPORT_SYMBOL(_find_first_zero_bit_be);
+EXPORT_SYMBOL(_find_next_zero_bit_be);
+EXPORT_SYMBOL(_find_first_bit_be);
+EXPORT_SYMBOL(_find_next_bit_be);
+#endif
+
+#ifdef CONFIG_FUNCTION_TRACER
+#ifdef CONFIG_OLD_MCOUNT
+EXPORT_SYMBOL(mcount);
+#endif
+EXPORT_SYMBOL(__gnu_mcount_nc);
+#endif
+
+#ifdef CONFIG_ARM_PATCH_PHYS_VIRT
+EXPORT_SYMBOL(__pv_phys_pfn_offset);
+EXPORT_SYMBOL(__pv_offset);
+#endif
+
+#ifdef CONFIG_HAVE_ARM_SMCCC
+EXPORT_SYMBOL(arm_smccc_smc);
+EXPORT_SYMBOL(arm_smccc_hvc);
+#endif
-3
arch/arm/kernel/entry-ftrace.S
···
 #include <asm/assembler.h>
 #include <asm/ftrace.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 #include "entry-header.S"
···
 __mcount _old
 #endif
 ENDPROC(mcount)
-EXPORT_SYMBOL(mcount)

 #ifdef CONFIG_DYNAMIC_FTRACE
 ENTRY(ftrace_caller_old)
···
 #endif
 UNWIND(.fnend)
 ENDPROC(__gnu_mcount_nc)
-EXPORT_SYMBOL(__gnu_mcount_nc)

 #ifdef CONFIG_DYNAMIC_FTRACE
 ENTRY(ftrace_caller)
-3
arch/arm/kernel/head.S
···
 #include <asm/memory.h>
 #include <asm/thread_info.h>
 #include <asm/pgtable.h>
-#include <asm/export.h>

 #if defined(CONFIG_DEBUG_LL) && !defined(CONFIG_DEBUG_SEMIHOSTING)
 #include CONFIG_DEBUG_LL_INCLUDE
···
 __pv_offset:
 	.quad	0
 	.size	__pv_offset, . -__pv_offset
-EXPORT_SYMBOL(__pv_phys_pfn_offset)
-EXPORT_SYMBOL(__pv_offset)
 #endif

 #include "head-common.S"
-3
arch/arm/kernel/smccc-call.S
···
 #include <asm/opcodes-sec.h>
 #include <asm/opcodes-virt.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 /*
  * Wrap c macros in asm macros to delay expansion until after the
···
 ENTRY(arm_smccc_smc)
 	SMCCC SMCCC_SMC
 ENDPROC(arm_smccc_smc)
-EXPORT_SYMBOL(arm_smccc_smc)

 /*
  * void smccc_hvc(unsigned long a0, unsigned long a1, unsigned long a2,
···
 ENTRY(arm_smccc_hvc)
 	SMCCC SMCCC_HVC
 ENDPROC(arm_smccc_hvc)
-EXPORT_SYMBOL(arm_smccc_hvc)
-3
arch/arm/lib/ashldi3.S
···
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 #ifdef __ARMEB__
 #define al r1
···
 ENDPROC(__ashldi3)
 ENDPROC(__aeabi_llsl)
-EXPORT_SYMBOL(__ashldi3)
-EXPORT_SYMBOL(__aeabi_llsl)
-3
arch/arm/lib/ashrdi3.S
···
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 #ifdef __ARMEB__
 #define al r1
···
 ENDPROC(__ashrdi3)
 ENDPROC(__aeabi_lasr)
-EXPORT_SYMBOL(__ashrdi3)
-EXPORT_SYMBOL(__aeabi_lasr)
-5
arch/arm/lib/bitops.h
···
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 #if __LINUX_ARM_ARCH__ >= 6
 	.macro	bitop, name, instr
···
 	bx	lr
 UNWIND(	.fnend		)
 ENDPROC(\name		)
-EXPORT_SYMBOL(\name		)
 	.endm

 	.macro	testop, name, instr, store
···
 2:	bx	lr
 UNWIND(	.fnend		)
 ENDPROC(\name		)
-EXPORT_SYMBOL(\name		)
 	.endm
 #else
 	.macro	bitop, name, instr
···
 	ret	lr
 UNWIND(	.fnend		)
 ENDPROC(\name		)
-EXPORT_SYMBOL(\name		)
 	.endm

 /**
···
 	ret	lr
 UNWIND(	.fnend		)
 ENDPROC(\name		)
-EXPORT_SYMBOL(\name		)
 	.endm
 #endif
-3
arch/arm/lib/bswapsdi2.S
···
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 #if __LINUX_ARM_ARCH__ >= 6
 ENTRY(__bswapsi2)
···
 	ret	lr
 ENDPROC(__bswapdi2)
 #endif
-EXPORT_SYMBOL(__bswapsi2)
-EXPORT_SYMBOL(__bswapdi2)
-4
arch/arm/lib/clear_user.S
···
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 	.text
···
 UNWIND(.fnend)
 ENDPROC(arm_clear_user)
 ENDPROC(__clear_user_std)
-#ifndef CONFIG_UACCESS_WITH_MEMCPY
-EXPORT_SYMBOL(arm_clear_user)
-#endif

 	.pushsection .text.fixup,"ax"
 	.align	0
-2
arch/arm/lib/copy_from_user.S
···
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 /*
  * Prototype:
···
 #include "copy_template.S"

 ENDPROC(arm_copy_from_user)
-EXPORT_SYMBOL(arm_copy_from_user)

 	.pushsection .fixup,"ax"
 	.align 0
-2
arch/arm/lib/copy_page.S
···
 #include <asm/assembler.h>
 #include <asm/asm-offsets.h>
 #include <asm/cache.h>
-#include <asm/export.h>

 #define COPY_COUNT (PAGE_SZ / (2 * L1_CACHE_BYTES) PLD( -1 ))
···
 	PLD(	beq	2b		)
 	ldmfd	sp!, {r4, pc}		@ 3
 ENDPROC(copy_page)
-EXPORT_SYMBOL(copy_page)
-4
arch/arm/lib/copy_to_user.S
···
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 /*
  * Prototype:
···
 ENDPROC(arm_copy_to_user)
 ENDPROC(__copy_to_user_std)
-#ifndef CONFIG_UACCESS_WITH_MEMCPY
-EXPORT_SYMBOL(arm_copy_to_user)
-#endif

 	.pushsection .text.fixup,"ax"
 	.align 0
+1 -2
arch/arm/lib/csumipv6.S
···
 */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 	.text
···
 	adcs	r0, r0, #0
 	ldmfd	sp!, {pc}
 ENDPROC(__csum_ipv6_magic)
-EXPORT_SYMBOL(__csum_ipv6_magic)
+
-2
arch/arm/lib/csumpartial.S
···
 */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 	.text
···
 	bne	4b
 	b	.Lless4
 ENDPROC(csum_partial)
-EXPORT_SYMBOL(csum_partial)
-1
arch/arm/lib/csumpartialcopy.S
···

 #define FN_ENTRY	ENTRY(csum_partial_copy_nocheck)
 #define FN_EXIT		ENDPROC(csum_partial_copy_nocheck)
-#define FN_EXPORT	EXPORT_SYMBOL(csum_partial_copy_nocheck)

 #include "csumpartialcopygeneric.S"
-2
arch/arm/lib/csumpartialcopygeneric.S
···
 * published by the Free Software Foundation.
 */
 #include <asm/assembler.h>
-#include <asm/export.h>

 /*
  * unsigned int
···
 	mov	r5, r4, get_byte_1
 	b	.Lexit
 FN_EXIT
-FN_EXPORT
-1
arch/arm/lib/csumpartialcopyuser.S
···

 #define FN_ENTRY	ENTRY(csum_partial_copy_from_user)
 #define FN_EXIT		ENDPROC(csum_partial_copy_from_user)
-#define FN_EXPORT	EXPORT_SYMBOL(csum_partial_copy_from_user)

 #include "csumpartialcopygeneric.S"
-2
arch/arm/lib/delay.c
···
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/export.h>
 #include <linux/timex.h>

 /*
···
 	.const_udelay	= __loop_const_udelay,
 	.udelay		= __loop_udelay,
 };
-EXPORT_SYMBOL(arm_delay_ops);

 static const struct delay_timer *delay_timer;
 static bool delay_calibrated;
-2
arch/arm/lib/div64.S
···
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 #ifdef __ARMEB__
 #define xh r0
···

 UNWIND(.fnend)
 ENDPROC(__do_div64)
-EXPORT_SYMBOL(__do_div64)
-9
arch/arm/lib/findbit.S
···
 */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>
 	.text

 /*
···
 3:	mov	r0, r1			@ no free bits
 	ret	lr
 ENDPROC(_find_first_zero_bit_le)
-EXPORT_SYMBOL(_find_first_zero_bit_le)

 /*
  * Purpose  : Find next 'zero' bit
···
 	add	r2, r2, #1		@ align bit pointer
 	b	2b			@ loop for next bit
 ENDPROC(_find_next_zero_bit_le)
-EXPORT_SYMBOL(_find_next_zero_bit_le)

 /*
  * Purpose  : Find a 'one' bit
···
 3:	mov	r0, r1			@ no free bits
 	ret	lr
 ENDPROC(_find_first_bit_le)
-EXPORT_SYMBOL(_find_first_bit_le)

 /*
  * Purpose  : Find next 'one' bit
···
 	add	r2, r2, #1		@ align bit pointer
 	b	2b			@ loop for next bit
 ENDPROC(_find_next_bit_le)
-EXPORT_SYMBOL(_find_next_bit_le)

 #ifdef __ARMEB__

···
 3:	mov	r0, r1			@ no free bits
 	ret	lr
 ENDPROC(_find_first_zero_bit_be)
-EXPORT_SYMBOL(_find_first_zero_bit_be)

 ENTRY(_find_next_zero_bit_be)
 	teq	r1, #0
···
 	add	r2, r2, #1		@ align bit pointer
 	b	2b			@ loop for next bit
 ENDPROC(_find_next_zero_bit_be)
-EXPORT_SYMBOL(_find_next_zero_bit_be)

 ENTRY(_find_first_bit_be)
 	teq	r1, #0
···
 3:	mov	r0, r1			@ no free bits
 	ret	lr
 ENDPROC(_find_first_bit_be)
-EXPORT_SYMBOL(_find_first_bit_be)

 ENTRY(_find_next_bit_be)
 	teq	r1, #0
···
 	add	r2, r2, #1		@ align bit pointer
 	b	2b			@ loop for next bit
 ENDPROC(_find_next_bit_be)
-EXPORT_SYMBOL(_find_next_bit_be)

 #endif
-9
arch/arm/lib/getuser.S
···
 #include <asm/assembler.h>
 #include <asm/errno.h>
 #include <asm/domain.h>
-#include <asm/export.h>

 ENTRY(__get_user_1)
 	check_uaccess r0, 1, r1, r2, __get_user_bad
···
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_1)
-EXPORT_SYMBOL(__get_user_1)

 ENTRY(__get_user_2)
 	check_uaccess r0, 2, r1, r2, __get_user_bad
···
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_2)
-EXPORT_SYMBOL(__get_user_2)

 ENTRY(__get_user_4)
 	check_uaccess r0, 4, r1, r2, __get_user_bad
···
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_4)
-EXPORT_SYMBOL(__get_user_4)

 ENTRY(__get_user_8)
 	check_uaccess r0, 8, r1, r2, __get_user_bad
···
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_8)
-EXPORT_SYMBOL(__get_user_8)

 #ifdef __ARMEB__
 ENTRY(__get_user_32t_8)
···
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_32t_8)
-EXPORT_SYMBOL(__get_user_32t_8)

 ENTRY(__get_user_64t_1)
 	check_uaccess r0, 1, r1, r2, __get_user_bad8
···
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_64t_1)
-EXPORT_SYMBOL(__get_user_64t_1)

 ENTRY(__get_user_64t_2)
 	check_uaccess r0, 2, r1, r2, __get_user_bad8
···
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_64t_2)
-EXPORT_SYMBOL(__get_user_64t_2)

 ENTRY(__get_user_64t_4)
 	check_uaccess r0, 4, r1, r2, __get_user_bad8
···
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_64t_4)
-EXPORT_SYMBOL(__get_user_64t_4)
 #endif

 __get_user_bad8:
-2
arch/arm/lib/io-readsb.S
···
 */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 .Linsb_align:	rsb	ip, ip, #4
 		cmp	ip, r2
···
 		ldmfd	sp!, {r4 - r6, pc}
 ENDPROC(__raw_readsb)
-EXPORT_SYMBOL(__raw_readsb)
-2
arch/arm/lib/io-readsl.S
···
 */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 ENTRY(__raw_readsl)
 	teq	r2, #0		@ do we have to check for the zero len?
···
 	strb	r3, [r1, #0]
 	ret	lr
 ENDPROC(__raw_readsl)
-EXPORT_SYMBOL(__raw_readsl)
+1 -2
arch/arm/lib/io-readsw-armv3.S
···
 */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 .Linsw_bad_alignment:
 		adr	r0, .Linsw_bad_align_msg
···
 		ldmfd	sp!, {r4, r5, r6, pc}

-EXPORT_SYMBOL(__raw_readsw)
+
-2
arch/arm/lib/io-readsw-armv4.S
···
 */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 		.macro	pack, rd, hw1, hw2
 #ifndef __ARMEB__
···
 		strneb	ip, [r1]
 		ldmfd	sp!, {r4, pc}
 ENDPROC(__raw_readsw)
-EXPORT_SYMBOL(__raw_readsw)
-2
arch/arm/lib/io-writesb.S
···
 */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 		.macro	outword, rd
 #ifndef __ARMEB__
···

 		ldmfd	sp!, {r4, r5, pc}
 ENDPROC(__raw_writesb)
-EXPORT_SYMBOL(__raw_writesb)
-2
arch/arm/lib/io-writesl.S
···
 */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 ENTRY(__raw_writesl)
 	teq	r2, #0		@ do we have to check for the zero len?
···
 	bne	6b
 	ret	lr
 ENDPROC(__raw_writesl)
-EXPORT_SYMBOL(__raw_writesl)
-2
arch/arm/lib/io-writesw-armv3.S
···
 */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 .Loutsw_bad_alignment:
 		adr	r0, .Loutsw_bad_align_msg
···
 		strne	ip, [r0]

 		ldmfd	sp!, {r4, r5, r6, pc}
-EXPORT_SYMBOL(__raw_writesw)
-2
arch/arm/lib/io-writesw-armv4.S
···
 */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 		.macro	outword, rd
 #ifndef __ARMEB__
···
 		strneh	ip, [r0]
 		ret	lr
 ENDPROC(__raw_writesw)
-EXPORT_SYMBOL(__raw_writesw)
-9
arch/arm/lib/lib1funcs.S
···
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 .macro ARM_DIV_BODY dividend, divisor, result, curbit
···
 UNWIND(.fnend)
 ENDPROC(__udivsi3)
 ENDPROC(__aeabi_uidiv)
-EXPORT_SYMBOL(__udivsi3)
-EXPORT_SYMBOL(__aeabi_uidiv)

 ENTRY(__umodsi3)
 UNWIND(.fnstart)
···

 UNWIND(.fnend)
 ENDPROC(__umodsi3)
-EXPORT_SYMBOL(__umodsi3)

 #ifdef CONFIG_ARM_PATCH_IDIV
 	.align 3
···
 UNWIND(.fnend)
 ENDPROC(__divsi3)
 ENDPROC(__aeabi_idiv)
-EXPORT_SYMBOL(__divsi3)
-EXPORT_SYMBOL(__aeabi_idiv)

 ENTRY(__modsi3)
 UNWIND(.fnstart)
···

 UNWIND(.fnend)
 ENDPROC(__modsi3)
-EXPORT_SYMBOL(__modsi3)

 #ifdef CONFIG_AEABI

···

 UNWIND(.fnend)
 ENDPROC(__aeabi_uidivmod)
-EXPORT_SYMBOL(__aeabi_uidivmod)

 ENTRY(__aeabi_idivmod)
 UNWIND(.fnstart)
···

 UNWIND(.fnend)
 ENDPROC(__aeabi_idivmod)
-EXPORT_SYMBOL(__aeabi_idivmod)

 #endif
-3
arch/arm/lib/lshrdi3.S
··· 28 28 29 29 #include <linux/linkage.h> 30 30 #include <asm/assembler.h> 31 - #include <asm/export.h> 32 31 33 32 #ifdef __ARMEB__ 34 33 #define al r1 ··· 52 53 53 54 ENDPROC(__lshrdi3) 54 55 ENDPROC(__aeabi_llsr) 55 - EXPORT_SYMBOL(__lshrdi3) 56 - EXPORT_SYMBOL(__aeabi_llsr)
-2
arch/arm/lib/memchr.S
··· 11 11 */ 12 12 #include <linux/linkage.h> 13 13 #include <asm/assembler.h> 14 - #include <asm/export.h> 15 14 16 15 .text 17 16 .align 5 ··· 24 25 2: movne r0, #0 25 26 ret lr 26 27 ENDPROC(memchr) 27 - EXPORT_SYMBOL(memchr)
-3
arch/arm/lib/memcpy.S
··· 13 13 #include <linux/linkage.h> 14 14 #include <asm/assembler.h> 15 15 #include <asm/unwind.h> 16 - #include <asm/export.h> 17 16 18 17 #define LDR1W_SHIFT 0 19 18 #define STR1W_SHIFT 0 ··· 68 69 69 70 ENDPROC(memcpy) 70 71 ENDPROC(mmiocpy) 71 - EXPORT_SYMBOL(memcpy) 72 - EXPORT_SYMBOL(mmiocpy)
-2
arch/arm/lib/memmove.S
··· 13 13 #include <linux/linkage.h> 14 14 #include <asm/assembler.h> 15 15 #include <asm/unwind.h> 16 - #include <asm/export.h> 17 16 18 17 .text 19 18 ··· 225 226 18: backward_copy_shift push=24 pull=8 226 227 227 228 ENDPROC(memmove) 228 - EXPORT_SYMBOL(memmove)
-3
arch/arm/lib/memset.S
··· 12 12 #include <linux/linkage.h> 13 13 #include <asm/assembler.h> 14 14 #include <asm/unwind.h> 15 - #include <asm/export.h> 16 15 17 16 .text 18 17 .align 5 ··· 135 136 UNWIND( .fnend ) 136 137 ENDPROC(memset) 137 138 ENDPROC(mmioset) 138 - EXPORT_SYMBOL(memset) 139 - EXPORT_SYMBOL(mmioset)
-2
arch/arm/lib/memzero.S
··· 10 10 #include <linux/linkage.h> 11 11 #include <asm/assembler.h> 12 12 #include <asm/unwind.h> 13 - #include <asm/export.h> 14 13 15 14 .text 16 15 .align 5 ··· 135 136 ret lr @ 1 136 137 UNWIND( .fnend ) 137 138 ENDPROC(__memzero) 138 - EXPORT_SYMBOL(__memzero)
-3
arch/arm/lib/muldi3.S
··· 12 12 13 13 #include <linux/linkage.h> 14 14 #include <asm/assembler.h> 15 - #include <asm/export.h> 16 15 17 16 #ifdef __ARMEB__ 18 17 #define xh r0 ··· 46 47 47 48 ENDPROC(__muldi3) 48 49 ENDPROC(__aeabi_lmul) 49 - EXPORT_SYMBOL(__muldi3) 50 - EXPORT_SYMBOL(__aeabi_lmul)
-5
arch/arm/lib/putuser.S
··· 31 31 #include <asm/assembler.h> 32 32 #include <asm/errno.h> 33 33 #include <asm/domain.h> 34 - #include <asm/export.h> 35 34 36 35 ENTRY(__put_user_1) 37 36 check_uaccess r0, 1, r1, ip, __put_user_bad ··· 38 39 mov r0, #0 39 40 ret lr 40 41 ENDPROC(__put_user_1) 41 - EXPORT_SYMBOL(__put_user_1) 42 42 43 43 ENTRY(__put_user_2) 44 44 check_uaccess r0, 2, r1, ip, __put_user_bad ··· 62 64 mov r0, #0 63 65 ret lr 64 66 ENDPROC(__put_user_2) 65 - EXPORT_SYMBOL(__put_user_2) 66 67 67 68 ENTRY(__put_user_4) 68 69 check_uaccess r0, 4, r1, ip, __put_user_bad ··· 69 72 mov r0, #0 70 73 ret lr 71 74 ENDPROC(__put_user_4) 72 - EXPORT_SYMBOL(__put_user_4) 73 75 74 76 ENTRY(__put_user_8) 75 77 check_uaccess r0, 8, r1, ip, __put_user_bad ··· 82 86 mov r0, #0 83 87 ret lr 84 88 ENDPROC(__put_user_8) 85 - EXPORT_SYMBOL(__put_user_8) 86 89 87 90 __put_user_bad: 88 91 mov r0, #-EFAULT
-2
arch/arm/lib/strchr.S
··· 11 11 */ 12 12 #include <linux/linkage.h> 13 13 #include <asm/assembler.h> 14 - #include <asm/export.h> 15 14 16 15 .text 17 16 .align 5 ··· 25 26 subeq r0, r0, #1 26 27 ret lr 27 28 ENDPROC(strchr) 28 - EXPORT_SYMBOL(strchr)
-2
arch/arm/lib/strrchr.S
··· 11 11 */ 12 12 #include <linux/linkage.h> 13 13 #include <asm/assembler.h> 14 - #include <asm/export.h> 15 14 16 15 .text 17 16 .align 5 ··· 24 25 mov r0, r3 25 26 ret lr 26 27 ENDPROC(strrchr) 27 - EXPORT_SYMBOL(strrchr)
-3
arch/arm/lib/uaccess_with_memcpy.c
··· 19 19 #include <linux/gfp.h> 20 20 #include <linux/highmem.h> 21 21 #include <linux/hugetlb.h> 22 - #include <linux/export.h> 23 22 #include <asm/current.h> 24 23 #include <asm/page.h> 25 24 ··· 156 157 } 157 158 return n; 158 159 } 159 - EXPORT_SYMBOL(arm_copy_to_user); 160 160 161 161 static unsigned long noinline 162 162 __clear_user_memset(void __user *addr, unsigned long n) ··· 213 215 } 214 216 return n; 215 217 } 216 - EXPORT_SYMBOL(arm_clear_user); 217 218 218 219 #if 0 219 220
-3
arch/arm/lib/ucmpdi2.S
··· 12 12 13 13 #include <linux/linkage.h> 14 14 #include <asm/assembler.h> 15 - #include <asm/export.h> 16 15 17 16 #ifdef __ARMEB__ 18 17 #define xh r0 ··· 35 36 ret lr 36 37 37 38 ENDPROC(__ucmpdi2) 38 - EXPORT_SYMBOL(__ucmpdi2) 39 39 40 40 #ifdef CONFIG_AEABI 41 41 ··· 48 50 ret lr 49 51 50 52 ENDPROC(__aeabi_ulcmp) 51 - EXPORT_SYMBOL(__aeabi_ulcmp) 52 53 53 54 #endif 54 55
+1
arch/arm/mach-imx/Makefile
··· 32 32 33 33 ifdef CONFIG_SND_IMX_SOC 34 34 obj-y += ssi-fiq.o 35 + obj-y += ssi-fiq-ksym.o 35 36 endif 36 37 37 38 # i.MX21 based machines
+20
arch/arm/mach-imx/ssi-fiq-ksym.c
··· 1 + /* 2 + * Exported ksyms for the SSI FIQ handler 3 + * 4 + * Copyright (C) 2009, Sascha Hauer <s.hauer@pengutronix.de> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + */ 10 + 11 + #include <linux/module.h> 12 + 13 + #include <linux/platform_data/asoc-imx-ssi.h> 14 + 15 + EXPORT_SYMBOL(imx_ssi_fiq_tx_buffer); 16 + EXPORT_SYMBOL(imx_ssi_fiq_rx_buffer); 17 + EXPORT_SYMBOL(imx_ssi_fiq_start); 18 + EXPORT_SYMBOL(imx_ssi_fiq_end); 19 + EXPORT_SYMBOL(imx_ssi_fiq_base); 20 +
+1 -6
arch/arm/mach-imx/ssi-fiq.S
··· 8 8 9 9 #include <linux/linkage.h> 10 10 #include <asm/assembler.h> 11 - #include <asm/export.h> 12 11 13 12 /* 14 13 * r8 = bit 0-15: tx offset, bit 16-31: tx buffer size ··· 144 145 .word 0x0 145 146 .L_imx_ssi_fiq_end: 146 147 imx_ssi_fiq_end: 147 - EXPORT_SYMBOL(imx_ssi_fiq_tx_buffer) 148 - EXPORT_SYMBOL(imx_ssi_fiq_rx_buffer) 149 - EXPORT_SYMBOL(imx_ssi_fiq_start) 150 - EXPORT_SYMBOL(imx_ssi_fiq_end) 151 - EXPORT_SYMBOL(imx_ssi_fiq_base) 148 +
+1 -1
arch/arm64/boot/dts/arm/juno-base.dtsi
··· 393 393 #address-cells = <3>; 394 394 #size-cells = <2>; 395 395 dma-coherent; 396 - ranges = <0x01000000 0x00 0x5f800000 0x00 0x5f800000 0x0 0x00800000>, 396 + ranges = <0x01000000 0x00 0x00000000 0x00 0x5f800000 0x0 0x00800000>, 397 397 <0x02000000 0x00 0x50000000 0x00 0x50000000 0x0 0x08000000>, 398 398 <0x42000000 0x40 0x00000000 0x40 0x00000000 0x1 0x00000000>; 399 399 #interrupt-cells = <1>;
+1 -1
arch/arm64/boot/dts/arm/juno-r1.dts
··· 76 76 compatible = "arm,idle-state"; 77 77 arm,psci-suspend-param = <0x1010000>; 78 78 local-timer-stop; 79 - entry-latency-us = <300>; 79 + entry-latency-us = <400>; 80 80 exit-latency-us = <1200>; 81 81 min-residency-us = <2500>; 82 82 };
+1 -1
arch/arm64/boot/dts/arm/juno-r2.dts
··· 76 76 compatible = "arm,idle-state"; 77 77 arm,psci-suspend-param = <0x1010000>; 78 78 local-timer-stop; 79 - entry-latency-us = <300>; 79 + entry-latency-us = <400>; 80 80 exit-latency-us = <1200>; 81 81 min-residency-us = <2500>; 82 82 };
+1 -1
arch/arm64/boot/dts/arm/juno.dts
··· 76 76 compatible = "arm,idle-state"; 77 77 arm,psci-suspend-param = <0x1010000>; 78 78 local-timer-stop; 79 - entry-latency-us = <300>; 79 + entry-latency-us = <400>; 80 80 exit-latency-us = <1200>; 81 81 min-residency-us = <2500>; 82 82 };
+6 -2
arch/m68k/configs/amiga_defconfig
··· 95 95 CONFIG_NF_TABLES_NETDEV=m 96 96 CONFIG_NFT_EXTHDR=m 97 97 CONFIG_NFT_META=m 98 + CONFIG_NFT_NUMGEN=m 98 99 CONFIG_NFT_CT=m 99 - CONFIG_NFT_RBTREE=m 100 - CONFIG_NFT_HASH=m 100 + CONFIG_NFT_SET_RBTREE=m 101 + CONFIG_NFT_SET_HASH=m 101 102 CONFIG_NFT_COUNTER=m 102 103 CONFIG_NFT_LOG=m 103 104 CONFIG_NFT_LIMIT=m ··· 106 105 CONFIG_NFT_REDIR=m 107 106 CONFIG_NFT_NAT=m 108 107 CONFIG_NFT_QUEUE=m 108 + CONFIG_NFT_QUOTA=m 109 109 CONFIG_NFT_REJECT=m 110 110 CONFIG_NFT_COMPAT=m 111 + CONFIG_NFT_HASH=m 111 112 CONFIG_NFT_DUP_NETDEV=m 112 113 CONFIG_NFT_FWD_NETDEV=m 113 114 CONFIG_NETFILTER_XT_SET=m ··· 369 366 CONFIG_NETCONSOLE_DYNAMIC=y 370 367 CONFIG_VETH=m 371 368 # CONFIG_NET_VENDOR_3COM is not set 369 + # CONFIG_NET_VENDOR_AMAZON is not set 372 370 CONFIG_A2065=y 373 371 CONFIG_ARIADNE=y 374 372 # CONFIG_NET_VENDOR_ARC is not set
+6 -2
arch/m68k/configs/apollo_defconfig
··· 93 93 CONFIG_NF_TABLES_NETDEV=m 94 94 CONFIG_NFT_EXTHDR=m 95 95 CONFIG_NFT_META=m 96 + CONFIG_NFT_NUMGEN=m 96 97 CONFIG_NFT_CT=m 97 - CONFIG_NFT_RBTREE=m 98 - CONFIG_NFT_HASH=m 98 + CONFIG_NFT_SET_RBTREE=m 99 + CONFIG_NFT_SET_HASH=m 99 100 CONFIG_NFT_COUNTER=m 100 101 CONFIG_NFT_LOG=m 101 102 CONFIG_NFT_LIMIT=m ··· 104 103 CONFIG_NFT_REDIR=m 105 104 CONFIG_NFT_NAT=m 106 105 CONFIG_NFT_QUEUE=m 106 + CONFIG_NFT_QUOTA=m 107 107 CONFIG_NFT_REJECT=m 108 108 CONFIG_NFT_COMPAT=m 109 + CONFIG_NFT_HASH=m 109 110 CONFIG_NFT_DUP_NETDEV=m 110 111 CONFIG_NFT_FWD_NETDEV=m 111 112 CONFIG_NETFILTER_XT_SET=m ··· 350 347 CONFIG_NETCONSOLE=m 351 348 CONFIG_NETCONSOLE_DYNAMIC=y 352 349 CONFIG_VETH=m 350 + # CONFIG_NET_VENDOR_AMAZON is not set 353 351 # CONFIG_NET_VENDOR_ARC is not set 354 352 # CONFIG_NET_CADENCE is not set 355 353 # CONFIG_NET_VENDOR_BROADCOM is not set
+6 -2
arch/m68k/configs/atari_defconfig
··· 93 93 CONFIG_NF_TABLES_NETDEV=m 94 94 CONFIG_NFT_EXTHDR=m 95 95 CONFIG_NFT_META=m 96 + CONFIG_NFT_NUMGEN=m 96 97 CONFIG_NFT_CT=m 97 - CONFIG_NFT_RBTREE=m 98 - CONFIG_NFT_HASH=m 98 + CONFIG_NFT_SET_RBTREE=m 99 + CONFIG_NFT_SET_HASH=m 99 100 CONFIG_NFT_COUNTER=m 100 101 CONFIG_NFT_LOG=m 101 102 CONFIG_NFT_LIMIT=m ··· 104 103 CONFIG_NFT_REDIR=m 105 104 CONFIG_NFT_NAT=m 106 105 CONFIG_NFT_QUEUE=m 106 + CONFIG_NFT_QUOTA=m 107 107 CONFIG_NFT_REJECT=m 108 108 CONFIG_NFT_COMPAT=m 109 + CONFIG_NFT_HASH=m 109 110 CONFIG_NFT_DUP_NETDEV=m 110 111 CONFIG_NFT_FWD_NETDEV=m 111 112 CONFIG_NETFILTER_XT_SET=m ··· 359 356 CONFIG_NETCONSOLE=m 360 357 CONFIG_NETCONSOLE_DYNAMIC=y 361 358 CONFIG_VETH=m 359 + # CONFIG_NET_VENDOR_AMAZON is not set 362 360 CONFIG_ATARILANCE=y 363 361 # CONFIG_NET_VENDOR_ARC is not set 364 362 # CONFIG_NET_CADENCE is not set
+6 -2
arch/m68k/configs/bvme6000_defconfig
··· 91 91 CONFIG_NF_TABLES_NETDEV=m 92 92 CONFIG_NFT_EXTHDR=m 93 93 CONFIG_NFT_META=m 94 + CONFIG_NFT_NUMGEN=m 94 95 CONFIG_NFT_CT=m 95 - CONFIG_NFT_RBTREE=m 96 - CONFIG_NFT_HASH=m 96 + CONFIG_NFT_SET_RBTREE=m 97 + CONFIG_NFT_SET_HASH=m 97 98 CONFIG_NFT_COUNTER=m 98 99 CONFIG_NFT_LOG=m 99 100 CONFIG_NFT_LIMIT=m ··· 102 101 CONFIG_NFT_REDIR=m 103 102 CONFIG_NFT_NAT=m 104 103 CONFIG_NFT_QUEUE=m 104 + CONFIG_NFT_QUOTA=m 105 105 CONFIG_NFT_REJECT=m 106 106 CONFIG_NFT_COMPAT=m 107 + CONFIG_NFT_HASH=m 107 108 CONFIG_NFT_DUP_NETDEV=m 108 109 CONFIG_NFT_FWD_NETDEV=m 109 110 CONFIG_NETFILTER_XT_SET=m ··· 349 346 CONFIG_NETCONSOLE=m 350 347 CONFIG_NETCONSOLE_DYNAMIC=y 351 348 CONFIG_VETH=m 349 + # CONFIG_NET_VENDOR_AMAZON is not set 352 350 # CONFIG_NET_VENDOR_ARC is not set 353 351 # CONFIG_NET_CADENCE is not set 354 352 # CONFIG_NET_VENDOR_BROADCOM is not set
+6 -2
arch/m68k/configs/hp300_defconfig
··· 93 93 CONFIG_NF_TABLES_NETDEV=m 94 94 CONFIG_NFT_EXTHDR=m 95 95 CONFIG_NFT_META=m 96 + CONFIG_NFT_NUMGEN=m 96 97 CONFIG_NFT_CT=m 97 - CONFIG_NFT_RBTREE=m 98 - CONFIG_NFT_HASH=m 98 + CONFIG_NFT_SET_RBTREE=m 99 + CONFIG_NFT_SET_HASH=m 99 100 CONFIG_NFT_COUNTER=m 100 101 CONFIG_NFT_LOG=m 101 102 CONFIG_NFT_LIMIT=m ··· 104 103 CONFIG_NFT_REDIR=m 105 104 CONFIG_NFT_NAT=m 106 105 CONFIG_NFT_QUEUE=m 106 + CONFIG_NFT_QUOTA=m 107 107 CONFIG_NFT_REJECT=m 108 108 CONFIG_NFT_COMPAT=m 109 + CONFIG_NFT_HASH=m 109 110 CONFIG_NFT_DUP_NETDEV=m 110 111 CONFIG_NFT_FWD_NETDEV=m 111 112 CONFIG_NETFILTER_XT_SET=m ··· 350 347 CONFIG_NETCONSOLE=m 351 348 CONFIG_NETCONSOLE_DYNAMIC=y 352 349 CONFIG_VETH=m 350 + # CONFIG_NET_VENDOR_AMAZON is not set 353 351 CONFIG_HPLANCE=y 354 352 # CONFIG_NET_VENDOR_ARC is not set 355 353 # CONFIG_NET_CADENCE is not set
+6 -2
arch/m68k/configs/mac_defconfig
··· 92 92 CONFIG_NF_TABLES_NETDEV=m 93 93 CONFIG_NFT_EXTHDR=m 94 94 CONFIG_NFT_META=m 95 + CONFIG_NFT_NUMGEN=m 95 96 CONFIG_NFT_CT=m 96 - CONFIG_NFT_RBTREE=m 97 - CONFIG_NFT_HASH=m 97 + CONFIG_NFT_SET_RBTREE=m 98 + CONFIG_NFT_SET_HASH=m 98 99 CONFIG_NFT_COUNTER=m 99 100 CONFIG_NFT_LOG=m 100 101 CONFIG_NFT_LIMIT=m ··· 103 102 CONFIG_NFT_REDIR=m 104 103 CONFIG_NFT_NAT=m 105 104 CONFIG_NFT_QUEUE=m 105 + CONFIG_NFT_QUOTA=m 106 106 CONFIG_NFT_REJECT=m 107 107 CONFIG_NFT_COMPAT=m 108 + CONFIG_NFT_HASH=m 108 109 CONFIG_NFT_DUP_NETDEV=m 109 110 CONFIG_NFT_FWD_NETDEV=m 110 111 CONFIG_NETFILTER_XT_SET=m ··· 366 363 CONFIG_NETCONSOLE=m 367 364 CONFIG_NETCONSOLE_DYNAMIC=y 368 365 CONFIG_VETH=m 366 + # CONFIG_NET_VENDOR_AMAZON is not set 369 367 CONFIG_MACMACE=y 370 368 # CONFIG_NET_VENDOR_ARC is not set 371 369 # CONFIG_NET_CADENCE is not set
+6 -2
arch/m68k/configs/multi_defconfig
··· 102 102 CONFIG_NF_TABLES_NETDEV=m 103 103 CONFIG_NFT_EXTHDR=m 104 104 CONFIG_NFT_META=m 105 + CONFIG_NFT_NUMGEN=m 105 106 CONFIG_NFT_CT=m 106 - CONFIG_NFT_RBTREE=m 107 - CONFIG_NFT_HASH=m 107 + CONFIG_NFT_SET_RBTREE=m 108 + CONFIG_NFT_SET_HASH=m 108 109 CONFIG_NFT_COUNTER=m 109 110 CONFIG_NFT_LOG=m 110 111 CONFIG_NFT_LIMIT=m ··· 113 112 CONFIG_NFT_REDIR=m 114 113 CONFIG_NFT_NAT=m 115 114 CONFIG_NFT_QUEUE=m 115 + CONFIG_NFT_QUOTA=m 116 116 CONFIG_NFT_REJECT=m 117 117 CONFIG_NFT_COMPAT=m 118 + CONFIG_NFT_HASH=m 118 119 CONFIG_NFT_DUP_NETDEV=m 119 120 CONFIG_NFT_FWD_NETDEV=m 120 121 CONFIG_NETFILTER_XT_SET=m ··· 400 397 CONFIG_NETCONSOLE_DYNAMIC=y 401 398 CONFIG_VETH=m 402 399 # CONFIG_NET_VENDOR_3COM is not set 400 + # CONFIG_NET_VENDOR_AMAZON is not set 403 401 CONFIG_A2065=y 404 402 CONFIG_ARIADNE=y 405 403 CONFIG_ATARILANCE=y
+6 -2
arch/m68k/configs/mvme147_defconfig
··· 90 90 CONFIG_NF_TABLES_NETDEV=m 91 91 CONFIG_NFT_EXTHDR=m 92 92 CONFIG_NFT_META=m 93 + CONFIG_NFT_NUMGEN=m 93 94 CONFIG_NFT_CT=m 94 - CONFIG_NFT_RBTREE=m 95 - CONFIG_NFT_HASH=m 95 + CONFIG_NFT_SET_RBTREE=m 96 + CONFIG_NFT_SET_HASH=m 96 97 CONFIG_NFT_COUNTER=m 97 98 CONFIG_NFT_LOG=m 98 99 CONFIG_NFT_LIMIT=m ··· 101 100 CONFIG_NFT_REDIR=m 102 101 CONFIG_NFT_NAT=m 103 102 CONFIG_NFT_QUEUE=m 103 + CONFIG_NFT_QUOTA=m 104 104 CONFIG_NFT_REJECT=m 105 105 CONFIG_NFT_COMPAT=m 106 + CONFIG_NFT_HASH=m 106 107 CONFIG_NFT_DUP_NETDEV=m 107 108 CONFIG_NFT_FWD_NETDEV=m 108 109 CONFIG_NETFILTER_XT_SET=m ··· 348 345 CONFIG_NETCONSOLE=m 349 346 CONFIG_NETCONSOLE_DYNAMIC=y 350 347 CONFIG_VETH=m 348 + # CONFIG_NET_VENDOR_AMAZON is not set 351 349 CONFIG_MVME147_NET=y 352 350 # CONFIG_NET_VENDOR_ARC is not set 353 351 # CONFIG_NET_CADENCE is not set
+6 -2
arch/m68k/configs/mvme16x_defconfig
··· 91 91 CONFIG_NF_TABLES_NETDEV=m 92 92 CONFIG_NFT_EXTHDR=m 93 93 CONFIG_NFT_META=m 94 + CONFIG_NFT_NUMGEN=m 94 95 CONFIG_NFT_CT=m 95 - CONFIG_NFT_RBTREE=m 96 - CONFIG_NFT_HASH=m 96 + CONFIG_NFT_SET_RBTREE=m 97 + CONFIG_NFT_SET_HASH=m 97 98 CONFIG_NFT_COUNTER=m 98 99 CONFIG_NFT_LOG=m 99 100 CONFIG_NFT_LIMIT=m ··· 102 101 CONFIG_NFT_REDIR=m 103 102 CONFIG_NFT_NAT=m 104 103 CONFIG_NFT_QUEUE=m 104 + CONFIG_NFT_QUOTA=m 105 105 CONFIG_NFT_REJECT=m 106 106 CONFIG_NFT_COMPAT=m 107 + CONFIG_NFT_HASH=m 107 108 CONFIG_NFT_DUP_NETDEV=m 108 109 CONFIG_NFT_FWD_NETDEV=m 109 110 CONFIG_NETFILTER_XT_SET=m ··· 349 346 CONFIG_NETCONSOLE=m 350 347 CONFIG_NETCONSOLE_DYNAMIC=y 351 348 CONFIG_VETH=m 349 + # CONFIG_NET_VENDOR_AMAZON is not set 352 350 # CONFIG_NET_VENDOR_ARC is not set 353 351 # CONFIG_NET_CADENCE is not set 354 352 # CONFIG_NET_VENDOR_BROADCOM is not set
+6 -2
arch/m68k/configs/q40_defconfig
··· 91 91 CONFIG_NF_TABLES_NETDEV=m 92 92 CONFIG_NFT_EXTHDR=m 93 93 CONFIG_NFT_META=m 94 + CONFIG_NFT_NUMGEN=m 94 95 CONFIG_NFT_CT=m 95 - CONFIG_NFT_RBTREE=m 96 - CONFIG_NFT_HASH=m 96 + CONFIG_NFT_SET_RBTREE=m 97 + CONFIG_NFT_SET_HASH=m 97 98 CONFIG_NFT_COUNTER=m 98 99 CONFIG_NFT_LOG=m 99 100 CONFIG_NFT_LIMIT=m ··· 102 101 CONFIG_NFT_REDIR=m 103 102 CONFIG_NFT_NAT=m 104 103 CONFIG_NFT_QUEUE=m 104 + CONFIG_NFT_QUOTA=m 105 105 CONFIG_NFT_REJECT=m 106 106 CONFIG_NFT_COMPAT=m 107 + CONFIG_NFT_HASH=m 107 108 CONFIG_NFT_DUP_NETDEV=m 108 109 CONFIG_NFT_FWD_NETDEV=m 109 110 CONFIG_NETFILTER_XT_SET=m ··· 356 353 CONFIG_NETCONSOLE_DYNAMIC=y 357 354 CONFIG_VETH=m 358 355 # CONFIG_NET_VENDOR_3COM is not set 356 + # CONFIG_NET_VENDOR_AMAZON is not set 359 357 # CONFIG_NET_VENDOR_AMD is not set 360 358 # CONFIG_NET_VENDOR_ARC is not set 361 359 # CONFIG_NET_CADENCE is not set
+6 -2
arch/m68k/configs/sun3_defconfig
··· 88 88 CONFIG_NF_TABLES_NETDEV=m 89 89 CONFIG_NFT_EXTHDR=m 90 90 CONFIG_NFT_META=m 91 + CONFIG_NFT_NUMGEN=m 91 92 CONFIG_NFT_CT=m 92 - CONFIG_NFT_RBTREE=m 93 - CONFIG_NFT_HASH=m 93 + CONFIG_NFT_SET_RBTREE=m 94 + CONFIG_NFT_SET_HASH=m 94 95 CONFIG_NFT_COUNTER=m 95 96 CONFIG_NFT_LOG=m 96 97 CONFIG_NFT_LIMIT=m ··· 99 98 CONFIG_NFT_REDIR=m 100 99 CONFIG_NFT_NAT=m 101 100 CONFIG_NFT_QUEUE=m 101 + CONFIG_NFT_QUOTA=m 102 102 CONFIG_NFT_REJECT=m 103 103 CONFIG_NFT_COMPAT=m 104 + CONFIG_NFT_HASH=m 104 105 CONFIG_NFT_DUP_NETDEV=m 105 106 CONFIG_NFT_FWD_NETDEV=m 106 107 CONFIG_NETFILTER_XT_SET=m ··· 346 343 CONFIG_NETCONSOLE=m 347 344 CONFIG_NETCONSOLE_DYNAMIC=y 348 345 CONFIG_VETH=m 346 + # CONFIG_NET_VENDOR_AMAZON is not set 349 347 CONFIG_SUN3LANCE=y 350 348 # CONFIG_NET_VENDOR_ARC is not set 351 349 # CONFIG_NET_CADENCE is not set
+6 -2
arch/m68k/configs/sun3x_defconfig
··· 88 88 CONFIG_NF_TABLES_NETDEV=m 89 89 CONFIG_NFT_EXTHDR=m 90 90 CONFIG_NFT_META=m 91 + CONFIG_NFT_NUMGEN=m 91 92 CONFIG_NFT_CT=m 92 - CONFIG_NFT_RBTREE=m 93 - CONFIG_NFT_HASH=m 93 + CONFIG_NFT_SET_RBTREE=m 94 + CONFIG_NFT_SET_HASH=m 94 95 CONFIG_NFT_COUNTER=m 95 96 CONFIG_NFT_LOG=m 96 97 CONFIG_NFT_LIMIT=m ··· 99 98 CONFIG_NFT_REDIR=m 100 99 CONFIG_NFT_NAT=m 101 100 CONFIG_NFT_QUEUE=m 101 + CONFIG_NFT_QUOTA=m 102 102 CONFIG_NFT_REJECT=m 103 103 CONFIG_NFT_COMPAT=m 104 + CONFIG_NFT_HASH=m 104 105 CONFIG_NFT_DUP_NETDEV=m 105 106 CONFIG_NFT_FWD_NETDEV=m 106 107 CONFIG_NETFILTER_XT_SET=m ··· 346 343 CONFIG_NETCONSOLE=m 347 344 CONFIG_NETCONSOLE_DYNAMIC=y 348 345 CONFIG_VETH=m 346 + # CONFIG_NET_VENDOR_AMAZON is not set 349 347 CONFIG_SUN3LANCE=y 350 348 # CONFIG_NET_VENDOR_ARC is not set 351 349 # CONFIG_NET_CADENCE is not set
+1 -1
arch/m68k/include/asm/delay.h
··· 114 114 */ 115 115 #define HZSCALE (268435456 / (1000000 / HZ)) 116 116 117 - #define ndelay(n) __delay(DIV_ROUND_UP((n) * ((((HZSCALE) >> 11) * (loops_per_jiffy >> 11)) >> 6), 1000)); 117 + #define ndelay(n) __delay(DIV_ROUND_UP((n) * ((((HZSCALE) >> 11) * (loops_per_jiffy >> 11)) >> 6), 1000)) 118 118 119 119 #endif /* defined(_M68K_DELAY_H) */
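An aside on why the trailing semicolon removed from `ndelay()` above matters: a statement-like macro that ends in `;` expands to two statements, which silently breaks `if/else` callers. A minimal sketch, with a hypothetical stand-in macro and delay function (not the kernel's):

```c
/* Fixed form: no trailing semicolon inside the macro, so "FAKE_NDELAY(5);"
 * is exactly one statement and an if/else around it still parses.
 * With the old trailing semicolon, the caller's own ";" would create an
 * empty second statement and the following "else" would fail to compile. */
#define FAKE_NDELAY(n) fake_delay((n))

static unsigned long total;

static void fake_delay(unsigned long n)
{
	total += n;	/* stand-in for the real busy-wait */
}

/* exercise the macro in the if/else position that the bug broke */
static unsigned long run_once(int busy)
{
	total = 0;
	if (busy)
		FAKE_NDELAY(5);
	else
		total = 1;
	return total;
}
```
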
+6
arch/mips/include/asm/mipsregs.h
··· 215 215 #endif 216 216 217 217 /* 218 + * Wired register bits 219 + */ 220 + #define MIPSR6_WIRED_LIMIT (_ULCAST_(0xffff) << 16) 221 + #define MIPSR6_WIRED_WIRED (_ULCAST_(0xffff) << 0) 222 + 223 + /* 218 224 * Values used for computation of new tlb entries 219 225 */ 220 226 #define PL_4K 12
+13
arch/mips/include/asm/tlb.h
··· 1 1 #ifndef __ASM_TLB_H 2 2 #define __ASM_TLB_H 3 3 4 + #include <asm/cpu-features.h> 5 + #include <asm/mipsregs.h> 6 + 4 7 /* 5 8 * MIPS doesn't need any special per-pte or per-vma handling, except 6 9 * we need to flush cache for area to be unmapped. ··· 24 21 #define UNIQUE_ENTRYHI(idx) \ 25 22 ((CKSEG0 + ((idx) << (PAGE_SHIFT + 1))) | \ 26 23 (cpu_has_tlbinv ? MIPS_ENTRYHI_EHINV : 0)) 24 + 25 + static inline unsigned int num_wired_entries(void) 26 + { 27 + unsigned int wired = read_c0_wired(); 28 + 29 + if (cpu_has_mips_r6) 30 + wired &= MIPSR6_WIRED_WIRED; 31 + 32 + return wired; 33 + } 27 34 28 35 #include <asm-generic/tlb.h> 29 36
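The masking done by the new `num_wired_entries()` helper above can be illustrated in plain C: on MIPSr6 the upper half of the Wired register is a limit field, so the raw value must be masked before being used as an entry count. This is a sketch with a plain function argument standing in for the `read_c0_wired()` coprocessor read:

```c
#include <stdint.h>

#define MIPSR6_WIRED_WIRED 0xffffu	/* low 16 bits hold the wired count */

/* Sketch of num_wired_entries(): mask off the MIPSr6 limit field when
 * the CPU is r6; pre-r6 CPUs use the whole register as the count. */
static unsigned int wired_count(uint32_t wired_reg, int is_r6)
{
	if (is_r6)
		wired_reg &= MIPSR6_WIRED_WIRED;
	return wired_reg;
}
```
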
+5 -4
arch/mips/mm/fault.c
··· 209 209 if (show_unhandled_signals && 210 210 unhandled_signal(tsk, SIGSEGV) && 211 211 __ratelimit(&ratelimit_state)) { 212 - pr_info("\ndo_page_fault(): sending SIGSEGV to %s for invalid %s %0*lx", 212 + pr_info("do_page_fault(): sending SIGSEGV to %s for invalid %s %0*lx\n", 213 213 tsk->comm, 214 214 write ? "write access to" : "read access from", 215 215 field, address); 216 216 pr_info("epc = %0*lx in", field, 217 217 (unsigned long) regs->cp0_epc); 218 - print_vma_addr(" ", regs->cp0_epc); 218 + print_vma_addr(KERN_CONT " ", regs->cp0_epc); 219 + pr_cont("\n"); 219 220 pr_info("ra = %0*lx in", field, 220 221 (unsigned long) regs->regs[31]); 221 - print_vma_addr(" ", regs->regs[31]); 222 - pr_info("\n"); 222 + print_vma_addr(KERN_CONT " ", regs->regs[31]); 223 + pr_cont("\n"); 223 224 } 224 225 current->thread.trap_nr = (regs->cp0_cause >> 2) & 0x1f; 225 226 info.si_signo = SIGSEGV;
+2 -2
arch/mips/mm/init.c
··· 118 118 writex_c0_entrylo1(entrylo); 119 119 } 120 120 #endif 121 - tlbidx = read_c0_wired(); 121 + tlbidx = num_wired_entries(); 122 122 write_c0_wired(tlbidx + 1); 123 123 write_c0_index(tlbidx); 124 124 mtc0_tlbw_hazard(); ··· 147 147 148 148 local_irq_save(flags); 149 149 old_ctx = read_c0_entryhi(); 150 - wired = read_c0_wired() - 1; 150 + wired = num_wired_entries() - 1; 151 151 write_c0_wired(wired); 152 152 write_c0_index(wired); 153 153 write_c0_entryhi(UNIQUE_ENTRYHI(wired));
+3 -3
arch/mips/mm/tlb-r4k.c
··· 65 65 write_c0_entrylo0(0); 66 66 write_c0_entrylo1(0); 67 67 68 - entry = read_c0_wired(); 68 + entry = num_wired_entries(); 69 69 70 70 /* 71 71 * Blast 'em all away. ··· 385 385 old_ctx = read_c0_entryhi(); 386 386 htw_stop(); 387 387 old_pagemask = read_c0_pagemask(); 388 - wired = read_c0_wired(); 388 + wired = num_wired_entries(); 389 389 write_c0_wired(wired + 1); 390 390 write_c0_index(wired); 391 391 tlbw_use_hazard(); /* What is the hazard here? */ ··· 449 449 htw_stop(); 450 450 old_ctx = read_c0_entryhi(); 451 451 old_pagemask = read_c0_pagemask(); 452 - wired = read_c0_wired(); 452 + wired = num_wired_entries(); 453 453 if (--temp_tlb_entry < wired) { 454 454 printk(KERN_WARNING 455 455 "No TLB space left for add_temporary_entry\n");
+3 -1
arch/parisc/Kconfig
··· 34 34 select HAVE_ARCH_HASH 35 35 select HAVE_ARCH_SECCOMP_FILTER 36 36 select HAVE_ARCH_TRACEHOOK 37 - select HAVE_UNSTABLE_SCHED_CLOCK if (SMP || !64BIT) 37 + select GENERIC_SCHED_CLOCK 38 + select HAVE_UNSTABLE_SCHED_CLOCK if SMP 39 + select GENERIC_CLOCKEVENTS 38 40 select ARCH_NO_COHERENT_DMA_MMAP 39 41 select CPU_NO_EFFICIENT_FFS 40 42
+4 -4
arch/parisc/include/asm/pgtable.h
··· 65 65 unsigned long flags; \ 66 66 spin_lock_irqsave(&pa_tlb_lock, flags); \ 67 67 old_pte = *ptep; \ 68 - set_pte(ptep, pteval); \ 69 68 if (pte_inserted(old_pte)) \ 70 69 purge_tlb_entries(mm, addr); \ 70 + set_pte(ptep, pteval); \ 71 71 spin_unlock_irqrestore(&pa_tlb_lock, flags); \ 72 72 } while (0) 73 73 ··· 478 478 spin_unlock_irqrestore(&pa_tlb_lock, flags); 479 479 return 0; 480 480 } 481 - set_pte(ptep, pte_mkold(pte)); 482 481 purge_tlb_entries(vma->vm_mm, addr); 482 + set_pte(ptep, pte_mkold(pte)); 483 483 spin_unlock_irqrestore(&pa_tlb_lock, flags); 484 484 return 1; 485 485 } ··· 492 492 493 493 spin_lock_irqsave(&pa_tlb_lock, flags); 494 494 old_pte = *ptep; 495 - set_pte(ptep, __pte(0)); 496 495 if (pte_inserted(old_pte)) 497 496 purge_tlb_entries(mm, addr); 497 + set_pte(ptep, __pte(0)); 498 498 spin_unlock_irqrestore(&pa_tlb_lock, flags); 499 499 500 500 return old_pte; ··· 504 504 { 505 505 unsigned long flags; 506 506 spin_lock_irqsave(&pa_tlb_lock, flags); 507 - set_pte(ptep, pte_wrprotect(*ptep)); 508 507 purge_tlb_entries(mm, addr); 508 + set_pte(ptep, pte_wrprotect(*ptep)); 509 509 spin_unlock_irqrestore(&pa_tlb_lock, flags); 510 510 } 511 511
+22 -18
arch/parisc/kernel/cache.c
··· 369 369 { 370 370 unsigned long rangetime, alltime; 371 371 unsigned long size, start; 372 + unsigned long threshold; 372 373 373 374 alltime = mfctl(16); 374 375 flush_data_cache(); ··· 383 382 printk(KERN_DEBUG "Whole cache flush %lu cycles, flushing %lu bytes %lu cycles\n", 384 383 alltime, size, rangetime); 385 384 386 - /* Racy, but if we see an intermediate value, it's ok too... */ 387 - parisc_cache_flush_threshold = size * alltime / rangetime; 388 - 389 - parisc_cache_flush_threshold = L1_CACHE_ALIGN(parisc_cache_flush_threshold); 390 - if (!parisc_cache_flush_threshold) 391 - parisc_cache_flush_threshold = FLUSH_THRESHOLD; 392 - 393 - if (parisc_cache_flush_threshold > cache_info.dc_size) 394 - parisc_cache_flush_threshold = cache_info.dc_size; 395 - 396 - printk(KERN_INFO "Setting cache flush threshold to %lu kB\n", 385 + threshold = L1_CACHE_ALIGN(size * alltime / rangetime); 386 + if (threshold > cache_info.dc_size) 387 + threshold = cache_info.dc_size; 388 + if (threshold) 389 + parisc_cache_flush_threshold = threshold; 390 + printk(KERN_INFO "Cache flush threshold set to %lu KiB\n", 397 391 parisc_cache_flush_threshold/1024); 398 392 399 393 /* calculate TLB flush threshold */ 394 + 395 + /* On SMP machines, skip the TLB measure of kernel text which 396 + * has been mapped as huge pages.
*/ 397 + if (num_online_cpus() > 1 && !parisc_requires_coherency()) { 398 + threshold = max(cache_info.it_size, cache_info.dt_size); 399 + threshold *= PAGE_SIZE; 400 + threshold /= num_online_cpus(); 401 + goto set_tlb_threshold; 402 + } 400 403 401 404 alltime = mfctl(16); 402 405 flush_tlb_all(); 403 406 alltime = mfctl(16) - alltime; 404 407 405 - size = PAGE_SIZE; 408 + size = 0; 406 409 start = (unsigned long) _text; 407 410 rangetime = mfctl(16); 408 411 while (start < (unsigned long) _end) { ··· 419 414 printk(KERN_DEBUG "Whole TLB flush %lu cycles, flushing %lu bytes %lu cycles\n", 420 415 alltime, size, rangetime); 421 416 422 - parisc_tlb_flush_threshold = size * alltime / rangetime; 423 - parisc_tlb_flush_threshold *= num_online_cpus(); 424 - parisc_tlb_flush_threshold = PAGE_ALIGN(parisc_tlb_flush_threshold); 425 - if (!parisc_tlb_flush_threshold) 426 - parisc_tlb_flush_threshold = FLUSH_TLB_THRESHOLD; 417 + threshold = PAGE_ALIGN(num_online_cpus() * size * alltime / rangetime); 427 418 428 - printk(KERN_INFO "Setting TLB flush threshold to %lu kB\n", 419 + set_tlb_threshold: 420 + if (threshold) 421 + parisc_tlb_flush_threshold = threshold; 422 + printk(KERN_INFO "TLB flush threshold set to %lu KiB\n", 429 423 parisc_tlb_flush_threshold/1024); 430 424 } 431 425
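The threshold computation rewritten in the hunk above (cache-line align the measured ratio, clamp to the data-cache size, and keep the previous value when the measurement comes out zero) can be sketched as standalone C. `L1_CACHE_BYTES` and the function name below are assumed stand-ins, not the kernel's:

```c
#define L1_CACHE_BYTES 64
#define L1_CACHE_ALIGN(x) \
	(((x) + (L1_CACHE_BYTES - 1)) & ~((unsigned long)L1_CACHE_BYTES - 1))

/* Sketch of the flush-threshold logic: align, clamp to the cache size,
 * and fall back to the old threshold if the measurement is degenerate. */
static unsigned long compute_flush_threshold(unsigned long size,
					     unsigned long alltime,
					     unsigned long rangetime,
					     unsigned long dcache_size,
					     unsigned long old_threshold)
{
	unsigned long threshold = L1_CACHE_ALIGN(size * alltime / rangetime);

	if (threshold > dcache_size)
		threshold = dcache_size;
	return threshold ? threshold : old_threshold;
}
```
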
+4 -4
arch/parisc/kernel/inventory.c
··· 58 58 status = pdc_system_map_find_mods(&module_result, &module_path, 0); 59 59 if (status == PDC_OK) { 60 60 pdc_type = PDC_TYPE_SYSTEM_MAP; 61 - printk("System Map.\n"); 61 + pr_cont("System Map.\n"); 62 62 return; 63 63 } 64 64 ··· 77 77 status = pdc_pat_cell_get_number(&cell_info); 78 78 if (status == PDC_OK) { 79 79 pdc_type = PDC_TYPE_PAT; 80 - printk("64 bit PAT.\n"); 80 + pr_cont("64 bit PAT.\n"); 81 81 return; 82 82 } 83 83 #endif ··· 97 97 case 0xC: /* 715/64, at least */ 98 98 99 99 pdc_type = PDC_TYPE_SNAKE; 100 - printk("Snake.\n"); 100 + pr_cont("Snake.\n"); 101 101 return; 102 102 103 103 default: /* Everything else */ 104 104 105 - printk("Unsupported.\n"); 105 + pr_cont("Unsupported.\n"); 106 106 panic("If this is a 64-bit machine, please try a 64-bit kernel.\n"); 107 107 } 108 108 }
+18 -31
arch/parisc/kernel/pacache.S
··· 96 96 97 97 fitmanymiddle: /* Loop if LOOP >= 2 */ 98 98 addib,COND(>) -1, %r31, fitmanymiddle /* Adjusted inner loop decr */ 99 - pitlbe 0(%sr1, %r28) 99 + pitlbe %r0(%sr1, %r28) 100 100 pitlbe,m %arg1(%sr1, %r28) /* Last pitlbe and addr adjust */ 101 101 addib,COND(>) -1, %r29, fitmanymiddle /* Middle loop decr */ 102 102 copy %arg3, %r31 /* Re-init inner loop count */ ··· 139 139 140 140 fdtmanymiddle: /* Loop if LOOP >= 2 */ 141 141 addib,COND(>) -1, %r31, fdtmanymiddle /* Adjusted inner loop decr */ 142 - pdtlbe 0(%sr1, %r28) 142 + pdtlbe %r0(%sr1, %r28) 143 143 pdtlbe,m %arg1(%sr1, %r28) /* Last pdtlbe and addr adjust */ 144 144 addib,COND(>) -1, %r29, fdtmanymiddle /* Middle loop decr */ 145 145 copy %arg3, %r31 /* Re-init inner loop count */ ··· 626 626 /* Purge any old translations */ 627 627 628 628 #ifdef CONFIG_PA20 629 - pdtlb,l 0(%r28) 630 - pdtlb,l 0(%r29) 629 + pdtlb,l %r0(%r28) 630 + pdtlb,l %r0(%r29) 631 631 #else 632 632 tlb_lock %r20,%r21,%r22 633 - pdtlb 0(%r28) 634 - pdtlb 0(%r29) 633 + pdtlb %r0(%r28) 634 + pdtlb %r0(%r29) 635 635 tlb_unlock %r20,%r21,%r22 636 636 #endif 637 637 ··· 774 774 /* Purge any old translation */ 775 775 776 776 #ifdef CONFIG_PA20 777 - pdtlb,l 0(%r28) 777 + pdtlb,l %r0(%r28) 778 778 #else 779 779 tlb_lock %r20,%r21,%r22 780 - pdtlb 0(%r28) 780 + pdtlb %r0(%r28) 781 781 tlb_unlock %r20,%r21,%r22 782 782 #endif 783 783 ··· 858 858 /* Purge any old translation */ 859 859 860 860 #ifdef CONFIG_PA20 861 - pdtlb,l 0(%r28) 861 + pdtlb,l %r0(%r28) 862 862 #else 863 863 tlb_lock %r20,%r21,%r22 864 - pdtlb 0(%r28) 864 + pdtlb %r0(%r28) 865 865 tlb_unlock %r20,%r21,%r22 866 866 #endif 867 867 ··· 892 892 fdc,m r31(%r28) 893 893 fdc,m r31(%r28) 894 894 fdc,m r31(%r28) 895 - cmpb,COND(<<) %r28, %r25,1b 895 + cmpb,COND(<<) %r28, %r25,1b 896 896 fdc,m r31(%r28) 897 897 898 898 sync 899 - 900 - #ifdef CONFIG_PA20 901 - pdtlb,l 0(%r25) 902 - #else 903 - tlb_lock %r20,%r21,%r22 904 - pdtlb 0(%r25) 905 - tlb_unlock %r20,%r21,%r22
906 - #endif 907 - 908 899 bv %r0(%r2) 909 900 nop 910 901 .exit ··· 922 931 depwi 0, 31,PAGE_SHIFT, %r28 /* Clear any offset bits */ 923 932 #endif 924 933 925 - /* Purge any old translation */ 934 + /* Purge any old translation. Note that the FIC instruction 935 + * may use either the instruction or data TLB. Given that we 936 + * have a flat address space, it's not clear which TLB will be 937 + * used. So, we purge both entries. */ 926 938 927 939 #ifdef CONFIG_PA20 940 + pdtlb,l %r0(%r28) 928 941 pitlb,l %r0(%sr4,%r28) 929 942 #else 930 943 tlb_lock %r20,%r21,%r22 931 - pitlb (%sr4,%r28) 944 + pdtlb %r0(%r28) 945 + pitlb %r0(%sr4,%r28) 932 946 tlb_unlock %r20,%r21,%r22 933 947 #endif 934 948 ··· 970 974 fic,m %r31(%sr4,%r28) 971 975 972 976 sync 973 - 974 - #ifdef CONFIG_PA20 975 - pitlb,l %r0(%sr4,%r25) 976 - #else 977 - tlb_lock %r20,%r21,%r22 978 - pitlb (%sr4,%r25) 979 - tlb_unlock %r20,%r21,%r22 980 - #endif 981 - 982 977 bv %r0(%r2) 983 978 nop 984 979 .exit
+1 -1
arch/parisc/kernel/pci-dma.c
··· 95 95 96 96 if (!pte_none(*pte)) 97 97 printk(KERN_ERR "map_pte_uncached: page already exists\n"); 98 - set_pte(pte, __mk_pte(*paddr_ptr, PAGE_KERNEL_UNC)); 99 98 purge_tlb_start(flags); 99 + set_pte(pte, __mk_pte(*paddr_ptr, PAGE_KERNEL_UNC)); 100 100 pdtlb_kernel(orig_vaddr); 101 101 purge_tlb_end(flags); 102 102 vaddr += PAGE_SIZE;
+4
arch/parisc/kernel/setup.c
··· 334 334 /* tell PDC we're Linux. Nevermind failure. */ 335 335 pdc_stable_write(0x40, &osid, sizeof(osid)); 336 336 337 + /* start with known state */ 338 + flush_cache_all_local(); 339 + flush_tlb_all_local(NULL); 340 + 337 341 processor_init(); 338 342 #ifdef CONFIG_SMP 339 343 pr_info("CPU(s): %d out of %d %s at %d.%06d MHz online\n",
+11 -46
arch/parisc/kernel/time.c
··· 14 14 #include <linux/module.h> 15 15 #include <linux/rtc.h> 16 16 #include <linux/sched.h> 17 + #include <linux/sched_clock.h> 17 18 #include <linux/kernel.h> 18 19 #include <linux/param.h> 19 20 #include <linux/string.h> ··· 39 38 #include <linux/timex.h> 40 39 41 40 static unsigned long clocktick __read_mostly; /* timer cycles per tick */ 42 - 43 - #ifndef CONFIG_64BIT 44 - /* 45 - * The processor-internal cycle counter (Control Register 16) is used as time 46 - * source for the sched_clock() function. This register is 64bit wide on a 47 - * 64-bit kernel and 32bit on a 32-bit kernel. Since sched_clock() always 48 - * requires a 64bit counter we emulate on the 32-bit kernel the higher 32bits 49 - * with a per-cpu variable which we increase every time the counter 50 - * wraps-around (which happens every ~4 secounds). 51 - */ 52 - static DEFINE_PER_CPU(unsigned long, cr16_high_32_bits); 53 - #endif 54 41 55 42 /* 56 43 * We keep time on PA-RISC Linux by using the Interval Timer which is ··· 109 120 * Only bottom 32-bits of next_tick are writable in CR16! 110 121 */ 111 122 mtctl(next_tick, 16); 112 - 113 - #if !defined(CONFIG_64BIT) 114 - /* check for overflow on a 32bit kernel (every ~4 seconds). */ 115 - if (unlikely(next_tick < now)) 116 - this_cpu_inc(cr16_high_32_bits); 117 - #endif 118 123 119 124 /* Skip one clocktick on purpose if we missed next_tick.
120 125 * The new CR16 must be "later" than current CR16 otherwise ··· 191 208 192 209 /* clock source code */ 193 210 194 - static cycle_t read_cr16(struct clocksource *cs) 211 + static cycle_t notrace read_cr16(struct clocksource *cs) 195 212 { 196 213 return get_cycles(); 197 214 } ··· 270 287 } 271 288 272 289 273 - /* 274 - * sched_clock() framework 275 - */ 276 - 277 - static u32 cyc2ns_mul __read_mostly; 278 - static u32 cyc2ns_shift __read_mostly; 279 - 280 - u64 sched_clock(void) 290 + static u64 notrace read_cr16_sched_clock(void) 281 291 { 282 - u64 now; 283 - 284 - /* Get current cycle counter (Control Register 16). */ 285 - #ifdef CONFIG_64BIT 286 - now = mfctl(16); 287 - #else 288 - now = mfctl(16) + (((u64) this_cpu_read(cr16_high_32_bits)) << 32); 289 - #endif 290 - 291 - /* return the value in ns (cycles_2_ns) */ 292 - return mul_u64_u32_shr(now, cyc2ns_mul, cyc2ns_shift); 292 + return get_cycles(); 293 293 } 294 294 295 295 ··· 282 316 283 317 void __init time_init(void) 284 318 { 285 - unsigned long current_cr16_khz; 319 + unsigned long cr16_hz; 286 320 287 - current_cr16_khz = PAGE0->mem_10msec/10; /* kHz */ 288 321 clocktick = (100 * PAGE0->mem_10msec) / HZ; 289 - 290 - /* calculate mult/shift values for cr16 */ 291 - clocks_calc_mult_shift(&cyc2ns_mul, &cyc2ns_shift, current_cr16_khz, 292 - NSEC_PER_MSEC, 0); 293 - 294 322 start_cpu_itimer(); /* get CPU 0 started */ 295 323 324 + cr16_hz = 100 * PAGE0->mem_10msec; /* Hz */ 325 + 296 326 /* register at clocksource framework */ 297 - clocksource_register_khz(&clocksource_cr16, current_cr16_khz); 327 + clocksource_register_hz(&clocksource_cr16, cr16_hz); 328 + 329 + /* register as sched_clock source */ 330 + sched_clock_register(read_cr16_sched_clock, BITS_PER_LONG, cr16_hz); 298 331 }
+2 -1
arch/powerpc/boot/Makefile
··· 100 100 ns16550.c serial.c simple_alloc.c div64.S util.S \ 101 101 elf_util.c $(zlib-y) devtree.c stdlib.c \ 102 102 oflib.c ofconsole.c cuboot.c mpsc.c cpm-serial.c \ 103 - uartlite.c mpc52xx-psc.c opal.c opal-calls.S 103 + uartlite.c mpc52xx-psc.c opal.c 104 + src-wlib-$(CONFIG_PPC64_BOOT_WRAPPER) += opal-calls.S 104 105 src-wlib-$(CONFIG_40x) += 4xx.c planetcore.c 105 106 src-wlib-$(CONFIG_44x) += 4xx.c ebony.c bamboo.c 106 107 src-wlib-$(CONFIG_8xx) += mpc8xx.c planetcore.c fsl-soc.c
+6 -2
arch/powerpc/boot/main.c
··· 232 232 console_ops.close(); 233 233 234 234 kentry = (kernel_entry_t) vmlinux.addr; 235 - if (ft_addr) 236 - kentry(ft_addr, 0, NULL); 235 + if (ft_addr) { 236 + if(platform_ops.kentry) 237 + platform_ops.kentry(ft_addr, vmlinux.addr); 238 + else 239 + kentry(ft_addr, 0, NULL); 240 + } 237 241 else 238 242 kentry((unsigned long)initrd.addr, initrd.size, 239 243 loader_info.promptr);
+13
arch/powerpc/boot/opal-calls.S
··· 12 12 13 13 .text 14 14 15 + .globl opal_kentry 16 + opal_kentry: 17 + /* r3 is the fdt ptr */ 18 + mtctr r4 19 + li r4, 0 20 + li r5, 0 21 + li r6, 0 22 + li r7, 0 23 + ld r11,opal@got(r2) 24 + ld r8,0(r11) 25 + ld r9,8(r11) 26 + bctr 27 + 15 28 #define OPAL_CALL(name, token) \ 16 29 .globl name; \ 17 30 name: \
+12 -1
arch/powerpc/boot/opal.c
··· 13 13 #include <libfdt.h> 14 14 #include "../include/asm/opal-api.h" 15 15 16 - #ifdef __powerpc64__ 16 + #ifdef CONFIG_PPC64_BOOT_WRAPPER 17 17 18 18 /* Global OPAL struct used by opal-call.S */ 19 19 struct opal { ··· 23 23 24 24 static u32 opal_con_id; 25 25 26 + /* see opal-wrappers.S */ 26 27 int64_t opal_console_write(int64_t term_number, u64 *length, const u8 *buffer); 27 28 int64_t opal_console_read(int64_t term_number, uint64_t *length, u8 *buffer); 28 29 int64_t opal_console_write_buffer_space(uint64_t term_number, uint64_t *length); 29 30 int64_t opal_console_flush(uint64_t term_number); 30 31 int64_t opal_poll_events(uint64_t *outstanding_event_mask); 31 32 33 + void opal_kentry(unsigned long fdt_addr, void *vmlinux_addr); 34 + 32 35 static int opal_con_open(void) 33 36 { 37 + /* 38 + * When OPAL loads the boot kernel it stashes the OPAL base and entry 39 + * address in r8 and r9 so the kernel can use the OPAL console 40 + * before unflattening the devicetree. While executing the wrapper will 41 + * probably trash r8 and r9 so this kentry hook restores them before 42 + * entering the decompressed kernel. 43 + */ 44 + platform_ops.kentry = opal_kentry; 34 45 return 0; 35 46 } 36 47
+1
arch/powerpc/boot/ops.h
··· 30 30 void * (*realloc)(void *ptr, unsigned long size); 31 31 void (*exit)(void); 32 32 void * (*vmlinux_alloc)(unsigned long size); 33 + void (*kentry)(unsigned long fdt_addr, void *vmlinux_addr); 33 34 }; 34 35 extern struct platform_ops platform_ops; 35 36
+12
arch/powerpc/include/asm/asm-prototypes.h
··· 14 14 15 15 #include <linux/threads.h> 16 16 #include <linux/kprobes.h> 17 + #include <asm/cacheflush.h> 18 + #include <asm/checksum.h> 19 + #include <asm/uaccess.h> 20 + #include <asm/epapr_hcalls.h> 17 21 18 22 #include <uapi/asm/ucontext.h> 19 23 ··· 112 108 113 109 /* time */ 114 110 void accumulate_stolen_time(void); 111 + 112 + /* misc runtime */ 113 + extern u64 __bswapdi2(u64); 114 + extern s64 __lshrdi3(s64, int); 115 + extern s64 __ashldi3(s64, int); 116 + extern s64 __ashrdi3(s64, int); 117 + extern int __cmpdi2(s64, s64); 118 + extern int __ucmpdi2(u64, u64); 115 119 116 120 #endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
+10 -4
arch/powerpc/include/asm/mmu.h
··· 29 29 */ 30 30 31 31 /* 32 + * Kernel read only support. 33 + * We added the ppp value 0b110 in ISA 2.04. 34 + */ 35 + #define MMU_FTR_KERNEL_RO ASM_CONST(0x00004000) 36 + 37 + /* 32 38 * We need to clear top 16bits of va (from the remaining 64 bits )in 33 39 * tlbie* instructions 34 40 */ ··· 109 103 #define MMU_FTRS_POWER4 MMU_FTRS_DEFAULT_HPTE_ARCH_V2 110 104 #define MMU_FTRS_PPC970 MMU_FTRS_POWER4 | MMU_FTR_TLBIE_CROP_VA 111 105 #define MMU_FTRS_POWER5 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE 112 - #define MMU_FTRS_POWER6 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE 113 - #define MMU_FTRS_POWER7 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE 114 - #define MMU_FTRS_POWER8 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE 115 - #define MMU_FTRS_POWER9 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE 106 + #define MMU_FTRS_POWER6 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_KERNEL_RO 107 + #define MMU_FTRS_POWER7 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_KERNEL_RO 108 + #define MMU_FTRS_POWER8 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_KERNEL_RO 109 + #define MMU_FTRS_POWER9 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_KERNEL_RO 116 110 #define MMU_FTRS_CELL MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \ 117 111 MMU_FTR_CI_LARGE_PAGE 118 112 #define MMU_FTRS_PA6T MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \
+1
arch/powerpc/include/asm/reg.h
··· 355 355 #define LPCR_PECE0 ASM_CONST(0x0000000000004000) /* ext. exceptions can cause exit */ 356 356 #define LPCR_PECE1 ASM_CONST(0x0000000000002000) /* decrementer can cause exit */ 357 357 #define LPCR_PECE2 ASM_CONST(0x0000000000001000) /* machine check etc can cause exit */ 358 + #define LPCR_PECE_HVEE ASM_CONST(0x0000400000000000) /* P9 Wakeup on HV interrupts */ 358 359 #define LPCR_MER ASM_CONST(0x0000000000000800) /* Mediated External Exception */ 359 360 #define LPCR_MER_SH 11 360 361 #define LPCR_TC ASM_CONST(0x0000000000000200) /* Translation control */
+4 -4
arch/powerpc/kernel/cpu_setup_power.S
··· 98 98 li r0,0 99 99 mtspr SPRN_LPID,r0 100 100 mfspr r3,SPRN_LPCR 101 - ori r3, r3, LPCR_PECEDH 102 - ori r3, r3, LPCR_HVICE 101 + LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE) 102 + or r3, r3, r4 103 103 bl __init_LPCR 104 104 bl __init_HFSCR 105 105 bl __init_tlb_power9 ··· 118 118 li r0,0 119 119 mtspr SPRN_LPID,r0 120 120 mfspr r3,SPRN_LPCR 121 - ori r3, r3, LPCR_PECEDH 122 - ori r3, r3, LPCR_HVICE 121 + LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE) 122 + or r3, r3, r4 123 123 bl __init_LPCR 124 124 bl __init_HFSCR 125 125 bl __init_tlb_power9
+3 -1
arch/powerpc/kernel/eeh_driver.c
··· 671 671 672 672 /* Clear frozen state */ 673 673 rc = eeh_clear_pe_frozen_state(pe, false); 674 - if (rc) 674 + if (rc) { 675 + pci_unlock_rescan_remove(); 675 676 return rc; 677 + } 676 678 677 679 /* Give the system 5 seconds to finish running the user-space 678 680 * hotplug shutdown scripts, e.g. ifdown for ethernet. Yes,
+9
arch/powerpc/kernel/vmlinux.lds.S
··· 94 94 * detected, and will result in a crash at boot due to offsets being 95 95 * wrong. 96 96 */ 97 + #ifdef CONFIG_PPC64 98 + /* 99 + * BLOCK(0) overrides the default output section alignment because 100 + * this needs to start right after .head.text in order for fixed 101 + * section placement to work. 102 + */ 103 + .text BLOCK(0) : AT(ADDR(.text) - LOAD_OFFSET) { 104 + #else 97 105 .text : AT(ADDR(.text) - LOAD_OFFSET) { 98 106 ALIGN_FUNCTION(); 107 + #endif 99 108 /* careful! __ftr_alt_* sections need to be close to .text */ 100 109 *(.text .fixup __ftr_alt_* .ref.text) 101 110 SCHED_TEXT
+1 -1
arch/powerpc/mm/hash64_4k.c
··· 55 55 */ 56 56 rflags = htab_convert_pte_flags(new_pte); 57 57 58 - if (!cpu_has_feature(CPU_FTR_NOEXECUTE) && 58 + if (cpu_has_feature(CPU_FTR_NOEXECUTE) && 59 59 !cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) 60 60 rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap); 61 61
+2 -2
arch/powerpc/mm/hash64_64k.c
··· 87 87 subpg_pte = new_pte & ~subpg_prot; 88 88 rflags = htab_convert_pte_flags(subpg_pte); 89 89 90 - if (!cpu_has_feature(CPU_FTR_NOEXECUTE) && 90 + if (cpu_has_feature(CPU_FTR_NOEXECUTE) && 91 91 !cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) { 92 92 93 93 /* ··· 258 258 259 259 rflags = htab_convert_pte_flags(new_pte); 260 260 261 - if (!cpu_has_feature(CPU_FTR_NOEXECUTE) && 261 + if (cpu_has_feature(CPU_FTR_NOEXECUTE) && 262 262 !cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) 263 263 rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap); 264 264
+6 -2
arch/powerpc/mm/hash_utils_64.c
··· 193 193 /* 194 194 * Kernel read only mapped with ppp bits 0b110 195 195 */ 196 - if (!(pteflags & _PAGE_WRITE)) 197 - rflags |= (HPTE_R_PP0 | 0x2); 196 + if (!(pteflags & _PAGE_WRITE)) { 197 + if (mmu_has_feature(MMU_FTR_KERNEL_RO)) 198 + rflags |= (HPTE_R_PP0 | 0x2); 199 + else 200 + rflags |= 0x3; 201 + } 198 202 } else { 199 203 if (pteflags & _PAGE_RWX) 200 204 rflags |= 0x2;
+1 -1
arch/x86/events/core.c
··· 69 69 int shift = 64 - x86_pmu.cntval_bits; 70 70 u64 prev_raw_count, new_raw_count; 71 71 int idx = hwc->idx; 72 - s64 delta; 72 + u64 delta; 73 73 74 74 if (idx == INTEL_PMC_IDX_FIXED_BTS) 75 75 return 0;
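The s64 to u64 change is subtle: `delta >>= shift` sign-extends a signed value but zero-fills an unsigned one, and together with the max_period halving in the intel/core.c hunk further down it keeps a large-but-valid counter delta from being misread as negative. A sketch with a hypothetical 48-bit counter (shift = 16); the arithmetic right shift of a negative signed value is implementation-defined in C, shown here with the semantics gcc and clang implement:

```c
#include <stdint.h>

/* A delta of >= 2^47 on a 48-bit counter sets bit 63 after the left
 * shifts; the final right shift then decides the sign. */
static int64_t delta_signed(uint64_t prev, uint64_t now, int shift)
{
    int64_t delta = (int64_t)((now << shift) - (prev << shift));
    delta >>= shift;                /* arithmetic: sign-extends bit 63 */
    return delta;
}

static uint64_t delta_unsigned(uint64_t prev, uint64_t now, int shift)
{
    uint64_t delta = (now << shift) - (prev << shift);
    delta >>= shift;                /* logical: zero-fills */
    return delta;
}
```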
+1 -1
arch/x86/events/intel/core.c
··· 4034 4034 4035 4035 /* Support full width counters using alternative MSR range */ 4036 4036 if (x86_pmu.intel_cap.full_width_write) { 4037 - x86_pmu.max_period = x86_pmu.cntval_mask; 4037 + x86_pmu.max_period = x86_pmu.cntval_mask >> 1; 4038 4038 x86_pmu.perfctr = MSR_IA32_PMC0; 4039 4039 pr_cont("full-width counters, "); 4040 4040 }
+1
arch/x86/events/intel/cstate.c
··· 540 540 X86_CSTATES_MODEL(INTEL_FAM6_SKYLAKE_DESKTOP, snb_cstates), 541 541 542 542 X86_CSTATES_MODEL(INTEL_FAM6_XEON_PHI_KNL, knl_cstates), 543 + X86_CSTATES_MODEL(INTEL_FAM6_XEON_PHI_KNM, knl_cstates), 543 544 { }, 544 545 }; 545 546 MODULE_DEVICE_TABLE(x86cpu, intel_cstates_match);
+1 -3
arch/x86/include/asm/compat.h
··· 272 272 /* 273 273 * The type of struct elf_prstatus.pr_reg in compatible core dumps. 274 274 */ 275 - #ifdef CONFIG_X86_X32_ABI 276 275 typedef struct user_regs_struct compat_elf_gregset_t; 277 276 278 277 /* Full regset -- prstatus on x32, otherwise on ia32 */ ··· 280 281 do { *(int *) (((void *) &((S)->pr_reg)) + R) = (V); } \ 281 282 while (0) 282 283 284 + #ifdef CONFIG_X86_X32_ABI 283 285 #define COMPAT_USE_64BIT_TIME \ 284 286 (!!(task_pt_regs(current)->orig_ax & __X32_SYSCALL_BIT)) 285 - #else 286 - typedef struct user_regs_struct32 compat_elf_gregset_t; 287 287 #endif 288 288 289 289 /*
+2 -2
arch/x86/kernel/apic/x2apic_uv_x.c
··· 815 815 l = li; 816 816 } 817 817 addr1 = (base << shift) + 818 - f * (unsigned long)(1 << m_io); 818 + f * (1ULL << m_io); 819 819 addr2 = (base << shift) + 820 - (l + 1) * (unsigned long)(1 << m_io); 820 + (l + 1) * (1ULL << m_io); 821 821 pr_info("UV: %s[%03d..%03d] NASID 0x%04x ADDR 0x%016lx - 0x%016lx\n", 822 822 id, fi, li, lnasid, addr1, addr2); 823 823 if (max_io < l)
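The `1ULL` change fixes a classic widening-after-the-fact bug: in `(unsigned long)(1 << m_io)` the shift is performed in 32-bit int, so the cast only sees a value that is already truncated or sign-extended. A sketch (`1u` is used so the 32-bit shift itself stays well defined; the int conversion shown relies on the usual two's-complement behavior of gcc/clang on LP64):

```c
#include <stdint.h>

static uint64_t io_span_bad(unsigned m_io)
{
    /* shift performed in 32 bits, then sign-extended by the widening */
    return (uint64_t)(long)(int)(1u << m_io);
}

static uint64_t io_span_good(unsigned m_io)
{
    return 1ULL << m_io;    /* shift performed in 64 bits throughout */
}
```

For m_io below 31 the two agree; at 31 the narrow shift produces a negative int and the widened result differs by the sign extension.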
+6 -3
arch/x86/kernel/unwind_guess.c
··· 7 7 8 8 unsigned long unwind_get_return_address(struct unwind_state *state) 9 9 { 10 - unsigned long addr = READ_ONCE_NOCHECK(*state->sp); 10 + unsigned long addr; 11 11 12 12 if (unwind_done(state)) 13 13 return 0; 14 + 15 + addr = READ_ONCE_NOCHECK(*state->sp); 14 16 15 17 return ftrace_graph_ret_addr(state->task, &state->graph_idx, 16 18 addr, state->sp); ··· 27 25 return false; 28 26 29 27 do { 30 - unsigned long addr = READ_ONCE_NOCHECK(*state->sp); 28 + for (state->sp++; state->sp < info->end; state->sp++) { 29 + unsigned long addr = READ_ONCE_NOCHECK(*state->sp); 31 30 32 - for (state->sp++; state->sp < info->end; state->sp++) 33 31 if (__kernel_text_address(addr)) 34 32 return true; 33 + } 35 34 36 35 state->sp = info->next_sp; 37 36
+11 -25
arch/x86/kvm/emulate.c
··· 2105 2105 static int em_jmp_far(struct x86_emulate_ctxt *ctxt) 2106 2106 { 2107 2107 int rc; 2108 - unsigned short sel, old_sel; 2109 - struct desc_struct old_desc, new_desc; 2110 - const struct x86_emulate_ops *ops = ctxt->ops; 2108 + unsigned short sel; 2109 + struct desc_struct new_desc; 2111 2110 u8 cpl = ctxt->ops->cpl(ctxt); 2112 - 2113 - /* Assignment of RIP may only fail in 64-bit mode */ 2114 - if (ctxt->mode == X86EMUL_MODE_PROT64) 2115 - ops->get_segment(ctxt, &old_sel, &old_desc, NULL, 2116 - VCPU_SREG_CS); 2117 2111 2118 2112 memcpy(&sel, ctxt->src.valptr + ctxt->op_bytes, 2); 2119 2113 ··· 2118 2124 return rc; 2119 2125 2120 2126 rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc); 2121 - if (rc != X86EMUL_CONTINUE) { 2122 - WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64); 2123 - /* assigning eip failed; restore the old cs */ 2124 - ops->set_segment(ctxt, old_sel, &old_desc, 0, VCPU_SREG_CS); 2125 - return rc; 2126 - } 2127 + /* Error handling is not implemented. */ 2128 + if (rc != X86EMUL_CONTINUE) 2129 + return X86EMUL_UNHANDLEABLE; 2130 + 2127 2131 return rc; 2128 2132 } 2129 2133 ··· 2181 2189 { 2182 2190 int rc; 2183 2191 unsigned long eip, cs; 2184 - u16 old_cs; 2185 2192 int cpl = ctxt->ops->cpl(ctxt); 2186 - struct desc_struct old_desc, new_desc; 2187 - const struct x86_emulate_ops *ops = ctxt->ops; 2188 - 2189 - if (ctxt->mode == X86EMUL_MODE_PROT64) 2190 - ops->get_segment(ctxt, &old_cs, &old_desc, NULL, 2191 - VCPU_SREG_CS); 2193 + struct desc_struct new_desc; 2192 2194 2193 2195 rc = emulate_pop(ctxt, &eip, ctxt->op_bytes); 2194 2196 if (rc != X86EMUL_CONTINUE) ··· 2199 2213 if (rc != X86EMUL_CONTINUE) 2200 2214 return rc; 2201 2215 rc = assign_eip_far(ctxt, eip, &new_desc); 2202 - if (rc != X86EMUL_CONTINUE) { 2203 - WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64); 2204 - ops->set_segment(ctxt, old_cs, &old_desc, 0, VCPU_SREG_CS); 2205 - } 2216 + /* Error handling is not implemented. 
··· 2105 2105 static int em_jmp_far(struct x86_emulate_ctxt *ctxt) 2106 2106 { 2107 2107 int rc; 2108 - unsigned short sel, old_sel; 2109 - struct desc_struct old_desc, new_desc; 2110 - const struct x86_emulate_ops *ops = ctxt->ops; 2108 + unsigned short sel; 2109 + struct desc_struct new_desc; 2111 2110 u8 cpl = ctxt->ops->cpl(ctxt); 2112 - 2113 - /* Assignment of RIP may only fail in 64-bit mode */ 2114 - if (ctxt->mode == X86EMUL_MODE_PROT64) 2115 - ops->get_segment(ctxt, &old_sel, &old_desc, NULL, 2116 - VCPU_SREG_CS); 2117 2111 2118 2112 memcpy(&sel, ctxt->src.valptr + ctxt->op_bytes, 2); 2119 2113 ··· 2118 2124 return rc; 2119 2125 2120 2126 rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc); 2121 - if (rc != X86EMUL_CONTINUE) { 2122 - WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64); 2123 - /* assigning eip failed; restore the old cs */ 2124 - ops->set_segment(ctxt, old_sel, &old_desc, 0, VCPU_SREG_CS); 2125 - return rc; 2126 - } 2127 + /* Error handling is not implemented. */ 2128 + if (rc != X86EMUL_CONTINUE) 2129 + return X86EMUL_UNHANDLEABLE; 2130 + 2127 2131 return rc; 2128 2132 } 2129 2133 ··· 2181 2189 { 2182 2190 int rc; 2183 2191 unsigned long eip, cs; 2184 - u16 old_cs; 2185 2192 int cpl = ctxt->ops->cpl(ctxt); 2186 - struct desc_struct old_desc, new_desc; 2187 - const struct x86_emulate_ops *ops = ctxt->ops; 2188 - 2189 - if (ctxt->mode == X86EMUL_MODE_PROT64) 2190 - ops->get_segment(ctxt, &old_cs, &old_desc, NULL, 2191 - VCPU_SREG_CS); 2193 + struct desc_struct new_desc; 2192 2194 2193 2195 rc = emulate_pop(ctxt, &eip, ctxt->op_bytes); 2194 2196 if (rc != X86EMUL_CONTINUE) ··· 2199 2213 if (rc != X86EMUL_CONTINUE) 2200 2214 return rc; 2201 2215 rc = assign_eip_far(ctxt, eip, &new_desc); 2202 - if (rc != X86EMUL_CONTINUE) { 2203 - WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64); 2204 - ops->set_segment(ctxt, old_cs, &old_desc, 0, VCPU_SREG_CS); 2205 - } 2216 + /* Error handling is not implemented. */ 2217 + if (rc != X86EMUL_CONTINUE) 2218 + return X86EMUL_UNHANDLEABLE; 2219 + 2206 2220 return rc; 2207 2221 } 2208 2222
*/ 2217 + if (rc != X86EMUL_CONTINUE) 2218 + return X86EMUL_UNHANDLEABLE; 2219 + 2206 2220 return rc; 2207 2221 } 2208 2222
+1 -1
arch/x86/kvm/ioapic.c
··· 94 94 static void rtc_irq_eoi_tracking_reset(struct kvm_ioapic *ioapic) 95 95 { 96 96 ioapic->rtc_status.pending_eoi = 0; 97 - bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPUS); 97 + bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID); 98 98 } 99 99 100 100 static void kvm_rtc_eoi_tracking_restore_all(struct kvm_ioapic *ioapic);
+2 -2
arch/x86/kvm/ioapic.h
··· 42 42 43 43 struct dest_map { 44 44 /* vcpu bitmap where IRQ has been sent */ 45 - DECLARE_BITMAP(map, KVM_MAX_VCPUS); 45 + DECLARE_BITMAP(map, KVM_MAX_VCPU_ID); 46 46 47 47 /* 48 48 * Vector sent to a given vcpu, only valid when 49 49 * the vcpu's bit in map is set 50 50 */ 51 - u8 vectors[KVM_MAX_VCPUS]; 51 + u8 vectors[KVM_MAX_VCPU_ID]; 52 52 }; 53 53 54 54
+13
arch/x86/kvm/irq_comm.c
··· 41 41 bool line_status) 42 42 { 43 43 struct kvm_pic *pic = pic_irqchip(kvm); 44 + 45 + /* 46 + * XXX: rejecting pic routes when pic isn't in use would be better, 47 + * but the default routing table is installed while kvm->arch.vpic is 48 + * NULL and KVM_CREATE_IRQCHIP can race with KVM_IRQ_LINE. 49 + */ 50 + if (!pic) 51 + return -1; 52 + 44 53 return kvm_pic_set_irq(pic, e->irqchip.pin, irq_source_id, level); 45 54 } 46 55 ··· 58 49 bool line_status) 59 50 { 60 51 struct kvm_ioapic *ioapic = kvm->arch.vioapic; 52 + 53 + if (!ioapic) 54 + return -1; 55 + 61 56 return kvm_ioapic_set_irq(ioapic, e->irqchip.pin, irq_source_id, level, 62 57 line_status); 63 58 }
+1 -1
arch/x86/kvm/lapic.c
··· 138 138 *mask = dest_id & 0xff; 139 139 return true; 140 140 case KVM_APIC_MODE_XAPIC_CLUSTER: 141 - *cluster = map->xapic_cluster_map[dest_id >> 4]; 141 + *cluster = map->xapic_cluster_map[(dest_id >> 4) & 0xf]; 142 142 *mask = dest_id & 0xf; 143 143 return true; 144 144 default:
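The added `& 0xf` is a bounds mask: xapic_cluster_map has 16 entries, and without the mask an oversized (guest-controlled) dest_id would index past the end of the array. A minimal model of the masked lookup:

```c
#include <stdint.h>

#define XAPIC_CLUSTERS 16

/* cluster index derived from an untrusted dest_id; the mask guarantees
 * the result is a valid index into the 16-entry map */
static unsigned xapic_cluster(uint32_t dest_id)
{
    return (dest_id >> 4) & 0xf;
}
```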
+2
arch/x86/platform/olpc/olpc-xo15-sci.c
··· 196 196 return 0; 197 197 } 198 198 199 + #ifdef CONFIG_PM_SLEEP 199 200 static int xo15_sci_resume(struct device *dev) 200 201 { 201 202 /* Enable all EC events */ ··· 208 207 209 208 return 0; 210 209 } 210 + #endif 211 211 212 212 static SIMPLE_DEV_PM_OPS(xo15_sci_pm, NULL, xo15_sci_resume); 213 213
+1 -1
arch/x86/tools/relocs.h
··· 16 16 #include <regex.h> 17 17 #include <tools/le_byteshift.h> 18 18 19 - void die(char *fmt, ...); 19 + void die(char *fmt, ...) __attribute__((noreturn)); 20 20 21 21 #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) 22 22
+4
block/blk-map.c
··· 118 118 struct iov_iter i; 119 119 int ret; 120 120 121 + if (!iter_is_iovec(iter)) 122 + goto fail; 123 + 121 124 if (map_data) 122 125 copy = true; 123 126 else if (iov_iter_alignment(iter) & align) ··· 143 140 144 141 unmap_rq: 145 142 __blk_rq_unmap_user(bio); 143 + fail: 146 144 rq->bio = NULL; 147 145 return -EINVAL; 148 146 }
+1
crypto/Makefile
··· 40 40 41 41 $(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h 42 42 $(obj)/rsaprivkey-asn1.o: $(obj)/rsaprivkey-asn1.c $(obj)/rsaprivkey-asn1.h 43 + $(obj)/rsa_helper.o: $(obj)/rsapubkey-asn1.h $(obj)/rsaprivkey-asn1.h 43 44 clean-files += rsapubkey-asn1.c rsapubkey-asn1.h 44 45 clean-files += rsaprivkey-asn1.c rsaprivkey-asn1.h 45 46
+37 -22
crypto/algif_aead.c
··· 81 81 { 82 82 unsigned as = crypto_aead_authsize(crypto_aead_reqtfm(&ctx->aead_req)); 83 83 84 - return ctx->used >= ctx->aead_assoclen + as; 84 + /* 85 + * The minimum amount of memory needed for an AEAD cipher is 86 + * the AAD and in case of decryption the tag. 87 + */ 88 + return ctx->used >= ctx->aead_assoclen + (ctx->enc ? 0 : as); 85 89 } 86 90 87 91 static void aead_reset_ctx(struct aead_ctx *ctx) ··· 420 416 unsigned int i, reqlen = GET_REQ_SIZE(tfm); 421 417 int err = -ENOMEM; 422 418 unsigned long used; 423 - size_t outlen; 419 + size_t outlen = 0; 424 420 size_t usedpages = 0; 425 421 426 422 lock_sock(sk); ··· 430 426 goto unlock; 431 427 } 432 428 433 - used = ctx->used; 434 - outlen = used; 435 - 436 429 if (!aead_sufficient_data(ctx)) 437 430 goto unlock; 431 + 432 + used = ctx->used; 433 + if (ctx->enc) 434 + outlen = used + as; 435 + else 436 + outlen = used - as; 438 437 439 438 req = sock_kmalloc(sk, reqlen, GFP_KERNEL); 440 439 if (unlikely(!req)) ··· 452 445 aead_request_set_ad(req, ctx->aead_assoclen); 453 446 aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, 454 447 aead_async_cb, sk); 455 - used -= ctx->aead_assoclen + (ctx->enc ? 
as : 0); 448 + used -= ctx->aead_assoclen; 456 449 457 450 /* take over all tx sgls from ctx */ 458 451 areq->tsgl = sock_kmalloc(sk, sizeof(*areq->tsgl) * sgl->cur, ··· 468 461 areq->tsgls = sgl->cur; 469 462 470 463 /* create rx sgls */ 471 - while (iov_iter_count(&msg->msg_iter)) { 464 + while (outlen > usedpages && iov_iter_count(&msg->msg_iter)) { 472 465 size_t seglen = min_t(size_t, iov_iter_count(&msg->msg_iter), 473 466 (outlen - usedpages)); 474 467 ··· 498 491 499 492 last_rsgl = rsgl; 500 493 501 - /* we do not need more iovecs as we have sufficient memory */ 502 - if (outlen <= usedpages) 503 - break; 504 - 505 494 iov_iter_advance(&msg->msg_iter, err); 506 495 } 507 - err = -EINVAL; 496 + 508 497 /* ensure output buffer is sufficiently large */ 509 - if (usedpages < outlen) 510 - goto free; 498 + if (usedpages < outlen) { 499 + err = -EINVAL; 500 + goto unlock; 501 + } 511 502 512 503 aead_request_set_crypt(req, areq->tsgl, areq->first_rsgl.sgl.sg, used, 513 504 areq->iv); ··· 576 571 goto unlock; 577 572 } 578 573 574 + /* data length provided by caller via sendmsg/sendpage */ 579 575 used = ctx->used; 580 576 581 577 /* ··· 591 585 if (!aead_sufficient_data(ctx)) 592 586 goto unlock; 593 587 594 - outlen = used; 588 + /* 589 + * Calculate the minimum output buffer size holding the result of the 590 + * cipher operation. When encrypting data, the receiving buffer is 591 + * larger by the tag length compared to the input buffer as the 592 + * encryption operation generates the tag. For decryption, the input 593 + * buffer provides the tag which is consumed resulting in only the 594 + * plaintext without a buffer for the tag returned to the caller. 595 + */ 596 + if (ctx->enc) 597 + outlen = used + as; 598 + else 599 + outlen = used - as; 595 600 596 601 /* 597 602 * The cipher operation input data is reduced by the associated data 598 603 * length as this data is processed separately later on. 599 604 */ 600 - used -= ctx->aead_assoclen + (ctx->enc ? 
as : 0); 605 + used -= ctx->aead_assoclen; 601 606 602 607 /* convert iovecs of output buffers into scatterlists */ 603 - while (iov_iter_count(&msg->msg_iter)) { 608 + while (outlen > usedpages && iov_iter_count(&msg->msg_iter)) { 604 609 size_t seglen = min_t(size_t, iov_iter_count(&msg->msg_iter), 605 610 (outlen - usedpages)); 606 611 ··· 638 621 639 622 last_rsgl = rsgl; 640 623 641 - /* we do not need more iovecs as we have sufficient memory */ 642 - if (outlen <= usedpages) 643 - break; 644 624 iov_iter_advance(&msg->msg_iter, err); 645 625 } 646 626 647 - err = -EINVAL; 648 627 /* ensure output buffer is sufficiently large */ 649 - if (usedpages < outlen) 628 + if (usedpages < outlen) { 629 + err = -EINVAL; 650 630 goto unlock; 631 + } 651 632 652 633 sg_mark_end(sgl->sg + sgl->cur - 1); 653 634 aead_request_set_crypt(&ctx->aead_req, sgl->sg, ctx->first_rsgl.sgl.sg,
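The recurring `outlen` computation in this hunk encodes the AEAD size relationship: encryption emits the ciphertext plus the authentication tag, decryption consumes the tag, and the AAD is processed separately so it never counts toward either. A sketch of the arithmetic (the helper names here are mine, not from the patch):

```c
#include <stddef.h>

/* minimum input: the AAD, plus the tag when decrypting */
static int aead_input_sufficient(size_t used, size_t assoclen,
                                 size_t taglen, int enc)
{
    return used >= assoclen + (enc ? 0 : taglen);
}

/* output buffer needed: the input grows by the tag on encrypt and
 * shrinks by it on decrypt */
static size_t aead_outlen(size_t used, size_t taglen, int enc)
{
    return enc ? used + taglen : used - taglen;
}
```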
-1
crypto/asymmetric_keys/x509_cert_parser.c
··· 133 133 return cert; 134 134 135 135 error_decode: 136 - kfree(cert->pub->key); 137 136 kfree(ctx); 138 137 error_no_ctx: 139 138 x509_free_certificate(cert);
+24 -5
crypto/drbg.c
··· 262 262 u8 *inbuf, u32 inbuflen, 263 263 u8 *outbuf, u32 outlen); 264 264 #define DRBG_CTR_NULL_LEN 128 265 + #define DRBG_OUTSCRATCHLEN DRBG_CTR_NULL_LEN 265 266 266 267 /* BCC function for CTR DRBG as defined in 10.4.3 */ 267 268 static int drbg_ctr_bcc(struct drbg_state *drbg, ··· 1645 1644 kfree(drbg->ctr_null_value_buf); 1646 1645 drbg->ctr_null_value = NULL; 1647 1646 1647 + kfree(drbg->outscratchpadbuf); 1648 + drbg->outscratchpadbuf = NULL; 1649 + 1648 1650 return 0; 1649 1651 } 1650 1652 ··· 1712 1708 drbg->ctr_null_value = (u8 *)PTR_ALIGN(drbg->ctr_null_value_buf, 1713 1709 alignmask + 1); 1714 1710 1711 + drbg->outscratchpadbuf = kmalloc(DRBG_OUTSCRATCHLEN + alignmask, 1712 + GFP_KERNEL); 1713 + if (!drbg->outscratchpadbuf) { 1714 + drbg_fini_sym_kernel(drbg); 1715 + return -ENOMEM; 1716 + } 1717 + drbg->outscratchpad = (u8 *)PTR_ALIGN(drbg->outscratchpadbuf, 1718 + alignmask + 1); 1719 + 1715 1720 return alignmask; 1716 1721 } 1717 1722 ··· 1750 1737 u8 *outbuf, u32 outlen) 1751 1738 { 1752 1739 struct scatterlist sg_in; 1740 + int ret; 1753 1741 1754 1742 sg_init_one(&sg_in, inbuf, inlen); 1755 1743 1756 1744 while (outlen) { 1757 - u32 cryptlen = min_t(u32, inlen, outlen); 1745 + u32 cryptlen = min3(inlen, outlen, (u32)DRBG_OUTSCRATCHLEN); 1758 1746 struct scatterlist sg_out; 1759 - int ret; 1760 1747 1761 - sg_init_one(&sg_out, outbuf, cryptlen); 1748 + /* Output buffer may not be valid for SGL, use scratchpad */ 1749 + sg_init_one(&sg_out, drbg->outscratchpad, cryptlen); 1762 1750 skcipher_request_set_crypt(drbg->ctr_req, &sg_in, &sg_out, 1763 1751 cryptlen, drbg->V); 1764 1752 ret = crypto_skcipher_encrypt(drbg->ctr_req); ··· 1775 1761 break; 1776 1762 } 1777 1763 default: 1778 - return ret; 1764 + goto out; 1779 1765 } 1780 1766 init_completion(&drbg->ctr_completion); 1781 1767 1768 + memcpy(outbuf, drbg->outscratchpad, cryptlen); 1769 + 1782 1770 outlen -= cryptlen; 1783 1771 } 1772 + ret = 0; 1784 1773 1785 - return 0; 1774 + out: 1775 + memzero_explicit(drbg->outscratchpad, DRBG_OUTSCRATCHLEN); 1776 + return ret; 1786 1777 } 1787 1778 #endif /* CONFIG_CRYPTO_DRBG_CTR */ 1788 1779
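The outscratchpadbuf/outscratchpad pair above uses the kernel's standard over-allocate-and-align idiom: allocate `alignmask` spare bytes, then round the pointer up with PTR_ALIGN() so the usable region meets the cipher's alignment requirement. A userspace analogue (PTR_ALIGN re-implemented here to mirror the kernel macro):

```c
#include <stdint.h>
#include <stdlib.h>

#define PTR_ALIGN(p, a) \
    ((void *)(((uintptr_t)(p) + ((uintptr_t)(a) - 1)) & \
              ~((uintptr_t)(a) - 1)))

struct scratch {
    void *buf;      /* what was allocated (and must be freed) */
    void *aligned;  /* what the consumer actually uses */
};

static struct scratch scratch_alloc(size_t len, size_t alignmask)
{
    struct scratch s;

    s.buf = malloc(len + alignmask);   /* kmalloc() in the driver */
    s.aligned = s.buf ? PTR_ALIGN(s.buf, alignmask + 1) : NULL;
    return s;
}
```

The alignmask extra bytes guarantee that rounding the pointer up never pushes the `len`-byte region past the end of the allocation.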
+12 -7
crypto/mcryptd.c
··· 254 254 goto out; 255 255 } 256 256 257 - static inline void mcryptd_check_internal(struct rtattr **tb, u32 *type, 257 + static inline bool mcryptd_check_internal(struct rtattr **tb, u32 *type, 258 258 u32 *mask) 259 259 { 260 260 struct crypto_attr_type *algt; 261 261 262 262 algt = crypto_get_attr_type(tb); 263 263 if (IS_ERR(algt)) 264 - return; 265 - if ((algt->type & CRYPTO_ALG_INTERNAL)) 266 - *type |= CRYPTO_ALG_INTERNAL; 267 - if ((algt->mask & CRYPTO_ALG_INTERNAL)) 268 - *mask |= CRYPTO_ALG_INTERNAL; 264 + return false; 265 + 266 + *type |= algt->type & CRYPTO_ALG_INTERNAL; 267 + *mask |= algt->mask & CRYPTO_ALG_INTERNAL; 268 + 269 + if (*type & *mask & CRYPTO_ALG_INTERNAL) 270 + return true; 271 + else 272 + return false; 269 273 } 270 274 271 275 static int mcryptd_hash_init_tfm(struct crypto_tfm *tfm) ··· 496 492 u32 mask = 0; 497 493 int err; 498 494 499 - mcryptd_check_internal(tb, &type, &mask); 495 + if (!mcryptd_check_internal(tb, &type, &mask)) 496 + return -EINVAL; 500 497 501 498 halg = ahash_attr_alg(tb[1], type, mask); 502 499 if (IS_ERR(halg))
+40 -15
drivers/acpi/nfit/core.c
··· 94 94 return to_acpi_device(acpi_desc->dev); 95 95 } 96 96 97 - static int xlat_status(void *buf, unsigned int cmd, u32 status) 97 + static int xlat_bus_status(void *buf, unsigned int cmd, u32 status) 98 98 { 99 99 struct nd_cmd_clear_error *clear_err; 100 100 struct nd_cmd_ars_status *ars_status; ··· 113 113 flags = ND_ARS_PERSISTENT | ND_ARS_VOLATILE; 114 114 if ((status >> 16 & flags) == 0) 115 115 return -ENOTTY; 116 - break; 116 + return 0; 117 117 case ND_CMD_ARS_START: 118 118 /* ARS is in progress */ 119 119 if ((status & 0xffff) == NFIT_ARS_START_BUSY) ··· 122 122 /* Command failed */ 123 123 if (status & 0xffff) 124 124 return -EIO; 125 - break; 125 + return 0; 126 126 case ND_CMD_ARS_STATUS: 127 127 ars_status = buf; 128 128 /* Command failed */ ··· 146 146 * then just continue with the returned results. 147 147 */ 148 148 if (status == NFIT_ARS_STATUS_INTR) { 149 - if (ars_status->flags & NFIT_ARS_F_OVERFLOW) 149 + if (ars_status->out_length >= 40 && (ars_status->flags 150 + & NFIT_ARS_F_OVERFLOW)) 150 151 return -ENOSPC; 151 152 return 0; 152 153 } ··· 155 154 /* Unknown status */ 156 155 if (status >> 16) 157 156 return -EIO; 158 - break; 157 + return 0; 159 158 case ND_CMD_CLEAR_ERROR: 160 159 clear_err = buf; 161 160 if (status & 0xffff) ··· 164 163 return -EIO; 165 164 if (clear_err->length > clear_err->cleared) 166 165 return clear_err->cleared; 167 - break; 166 + return 0; 168 167 default: 169 168 break; 170 169 } ··· 175 174 return 0; 176 175 } 177 176 178 - static int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, 179 - struct nvdimm *nvdimm, unsigned int cmd, void *buf, 180 - unsigned int buf_len, int *cmd_rc) 177 + static int xlat_status(struct nvdimm *nvdimm, void *buf, unsigned int cmd, 178 + u32 status) 179 + { 180 + if (!nvdimm) 181 + return xlat_bus_status(buf, cmd, status); 182 + if (status) 183 + return -EIO; 184 + return 0; 185 + } 186 + 187 + int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, 
188 + unsigned int cmd, void *buf, unsigned int buf_len, int *cmd_rc) 181 189 { 182 190 struct acpi_nfit_desc *acpi_desc = to_acpi_nfit_desc(nd_desc); 183 191 union acpi_object in_obj, in_buf, *out_obj; ··· 308 298 309 299 for (i = 0, offset = 0; i < desc->out_num; i++) { 310 300 u32 out_size = nd_cmd_out_size(nvdimm, cmd, desc, i, buf, 311 - (u32 *) out_obj->buffer.pointer); 301 + (u32 *) out_obj->buffer.pointer, 302 + out_obj->buffer.length - offset); 312 303 313 304 if (offset + out_size > out_obj->buffer.length) { 314 305 dev_dbg(dev, "%s:%s output object underflow cmd: %s field: %d\n", ··· 344 333 */ 345 334 rc = buf_len - offset - in_buf.buffer.length; 346 335 if (cmd_rc) 347 - *cmd_rc = xlat_status(buf, cmd, fw_status); 336 + *cmd_rc = xlat_status(nvdimm, buf, cmd, 337 + fw_status); 348 338 } else { 349 339 dev_err(dev, "%s:%s underrun cmd: %s buf_len: %d out_len: %d\n", 350 340 __func__, dimm_name, cmd_name, buf_len, ··· 355 343 } else { 356 344 rc = 0; 357 345 if (cmd_rc) 358 - *cmd_rc = xlat_status(buf, cmd, fw_status); 346 + *cmd_rc = xlat_status(nvdimm, buf, cmd, fw_status); 359 347 } 360 348 361 349 out: ··· 363 351 364 352 return rc; 365 353 } 354 + EXPORT_SYMBOL_GPL(acpi_nfit_ctl); 366 355 367 356 static const char *spa_type_name(u16 type) 368 357 { ··· 2014 2001 return cmd_rc; 2015 2002 } 2016 2003 2017 - static int ars_status_process_records(struct nvdimm_bus *nvdimm_bus, 2004 + static int ars_status_process_records(struct acpi_nfit_desc *acpi_desc, 2018 2005 struct nd_cmd_ars_status *ars_status) 2019 2006 { 2007 + struct nvdimm_bus *nvdimm_bus = acpi_desc->nvdimm_bus; 2020 2008 int rc; 2021 2009 u32 i; 2022 2010 2011 + /* 2012 + * First record starts at 44 byte offset from the start of the 2013 + * payload. 
2014 + */ 2015 + if (ars_status->out_length < 44) 2016 + return 0; 2023 2017 for (i = 0; i < ars_status->num_records; i++) { 2018 + /* only process full records */ 2019 + if (ars_status->out_length 2020 + < 44 + sizeof(struct nd_ars_record) * (i + 1)) 2021 + break; 2024 2022 rc = nvdimm_bus_add_poison(nvdimm_bus, 2025 2023 ars_status->records[i].err_address, 2026 2024 ars_status->records[i].length); 2027 2025 if (rc) 2028 2026 return rc; 2029 2027 } 2028 + if (i < ars_status->num_records) 2029 + dev_warn(acpi_desc->dev, "detected truncated ars results\n"); 2030 2030 2031 2031 return 0; 2032 2032 } ··· 2292 2266 if (rc < 0 && rc != -ENOSPC) 2293 2267 return rc; 2294 2268 2295 - if (ars_status_process_records(acpi_desc->nvdimm_bus, 2296 - acpi_desc->ars_status)) 2269 + if (ars_status_process_records(acpi_desc, acpi_desc->ars_status)) 2297 2270 return -ENOMEM; 2298 2271 2299 2272 return 0;
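The out_length checks added here follow a general rule for firmware-filled payloads: num_records comes from the same, possibly short, buffer, so only records that lie entirely within out_length may be processed. A sketch of the clamping (the 44-byte header offset matches the patch; the 24-byte record size is illustrative):

```c
#include <stdint.h>
#include <stddef.h>

#define ARS_HDR_LEN 44

/* how many full records the payload actually contains */
static uint32_t usable_records(uint32_t out_length, uint32_t num_records,
                               size_t recsz)
{
    uint32_t fit;

    if (out_length < ARS_HDR_LEN)
        return 0;
    fit = (uint32_t)((out_length - ARS_HDR_LEN) / recsz);
    return fit < num_records ? fit : num_records;
}
```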
+2
drivers/acpi/nfit/nfit.h
··· 240 240 int acpi_nfit_init(struct acpi_nfit_desc *acpi_desc, void *nfit, acpi_size sz); 241 241 void __acpi_nfit_notify(struct device *dev, acpi_handle handle, u32 event); 242 242 void __acpi_nvdimm_notify(struct device *dev, u32 event); 243 + int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, 244 + unsigned int cmd, void *buf, unsigned int buf_len, int *cmd_rc); 243 245 void acpi_nfit_desc_init(struct acpi_nfit_desc *acpi_desc, struct device *dev); 244 246 #endif /* __NFIT_H__ */
+6 -23
drivers/acpi/sleep.c
··· 47 47 } 48 48 } 49 49 50 - static void acpi_sleep_pts_switch(u32 acpi_state) 51 - { 52 - acpi_status status; 53 - 54 - status = acpi_execute_simple_method(NULL, "\\_PTS", acpi_state); 55 - if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) { 56 - /* 57 - * OS can't evaluate the _PTS object correctly. Some warning 58 - * message will be printed. But it won't break anything. 59 - */ 60 - printk(KERN_NOTICE "Failure in evaluating _PTS object\n"); 61 - } 62 - } 63 - 64 - static int sleep_notify_reboot(struct notifier_block *this, 50 + static int tts_notify_reboot(struct notifier_block *this, 65 51 unsigned long code, void *x) 66 52 { 67 53 acpi_sleep_tts_switch(ACPI_STATE_S5); 68 - 69 - acpi_sleep_pts_switch(ACPI_STATE_S5); 70 - 71 54 return NOTIFY_DONE; 72 55 } 73 56 74 - static struct notifier_block sleep_notifier = { 75 - .notifier_call = sleep_notify_reboot, 57 + static struct notifier_block tts_notifier = { 58 + .notifier_call = tts_notify_reboot, 76 59 .next = NULL, 77 60 .priority = 0, 78 61 }; ··· 899 916 pr_info(PREFIX "(supports%s)\n", supported); 900 917 901 918 /* 902 - * Register the sleep_notifier to reboot notifier list so that the _TTS 903 - * and _PTS object can also be evaluated when the system enters S5. 919 + * Register the tts_notifier to reboot notifier list so that the _TTS 920 + * object can also be evaluated when the system enters S5. 904 921 */ 905 - register_reboot_notifier(&sleep_notifier); 922 + register_reboot_notifier(&tts_notifier); 906 923 return 0; 907 924 }
-7
drivers/ata/ahci.c
··· 1436 1436 "ahci: MRSM is on, fallback to single MSI\n"); 1437 1437 pci_free_irq_vectors(pdev); 1438 1438 } 1439 - 1440 - /* 1441 - * -ENOSPC indicated we don't have enough vectors. Don't bother 1442 - * trying a single vectors for any other error: 1443 - */ 1444 - if (nvec < 0 && nvec != -ENOSPC) 1445 - return nvec; 1446 1439 } 1447 1440 1448 1441 /*
+2 -1
drivers/ata/libata-scsi.c
··· 1088 1088 desc[1] = tf->command; /* status */ 1089 1089 desc[2] = tf->device; 1090 1090 desc[3] = tf->nsect; 1091 - desc[0] = 0; 1091 + desc[7] = 0; 1092 1092 if (tf->flags & ATA_TFLAG_LBA48) { 1093 1093 desc[8] |= 0x80; 1094 1094 if (tf->hob_nsect) ··· 1159 1159 { 1160 1160 sdev->use_10_for_rw = 1; 1161 1161 sdev->use_10_for_ms = 1; 1162 + sdev->no_write_same = 1; 1162 1163 1163 1164 /* Schedule policy is determined by ->qc_defer() callback and 1164 1165 * it needs to see every deferred qc. Set dev_blocked to 1 to
+14 -1
drivers/ata/sata_mv.c
··· 4090 4090 4091 4091 /* allocate host */ 4092 4092 if (pdev->dev.of_node) { 4093 - of_property_read_u32(pdev->dev.of_node, "nr-ports", &n_ports); 4093 + rc = of_property_read_u32(pdev->dev.of_node, "nr-ports", 4094 + &n_ports); 4095 + if (rc) { 4096 + dev_err(&pdev->dev, 4097 + "error parsing nr-ports property: %d\n", rc); 4098 + return rc; 4099 + } 4100 + 4101 + if (n_ports <= 0) { 4102 + dev_err(&pdev->dev, "nr-ports must be positive: %d\n", 4103 + n_ports); 4104 + return -EINVAL; 4105 + } 4106 + 4094 4107 irq = irq_of_parse_and_map(pdev->dev.of_node, 0); 4095 4108 } else { 4096 4109 mv_platform_data = dev_get_platdata(&pdev->dev);
+1 -1
drivers/atm/eni.c
··· 1727 1727 printk("\n"); 1728 1728 printk(KERN_ERR DEV_LABEL "(itf %d): can't set up page " 1729 1729 "mapping\n",dev->number); 1730 - return error; 1730 + return -ENOMEM; 1731 1731 } 1732 1732 eni_dev->ioaddr = base; 1733 1733 eni_dev->base_diff = real_base - (unsigned long) base;
+1
drivers/atm/lanai.c
··· 2143 2143 lanai->base = (bus_addr_t) ioremap(raw_base, LANAI_MAPPING_SIZE); 2144 2144 if (lanai->base == NULL) { 2145 2145 printk(KERN_ERR DEV_LABEL ": couldn't remap I/O space\n"); 2146 + result = -ENOMEM; 2146 2147 goto error_pci; 2147 2148 } 2148 2149 /* 3.3: Reset lanai and PHY */
+9 -2
drivers/block/zram/zram_drv.c
··· 1403 1403 zram = idr_find(&zram_index_idr, dev_id); 1404 1404 if (zram) { 1405 1405 ret = zram_remove(zram); 1406 - idr_remove(&zram_index_idr, dev_id); 1406 + if (!ret) 1407 + idr_remove(&zram_index_idr, dev_id); 1407 1408 } else { 1408 1409 ret = -ENODEV; 1409 1410 } ··· 1413 1412 return ret ? ret : count; 1414 1413 } 1415 1414 1415 + /* 1416 + * NOTE: hot_add attribute is not the usual read-only sysfs attribute. In a 1417 + * sense that reading from this file does alter the state of your system -- it 1418 + * creates a new un-initialized zram device and returns back this device's 1419 + * device_id (or an error code if it fails to create a new device). 1420 + */ 1416 1421 static struct class_attribute zram_control_class_attrs[] = { 1417 - __ATTR_RO(hot_add), 1422 + __ATTR(hot_add, 0400, hot_add_show, NULL), 1418 1423 __ATTR_WO(hot_remove), 1419 1424 __ATTR_NULL, 1420 1425 };
+1 -1
drivers/clk/bcm/Kconfig
··· 20 20 21 21 config COMMON_CLK_IPROC 22 22 bool "Broadcom iProc clock support" 23 - depends on ARCH_BCM_IPROC || COMPILE_TEST 23 + depends on ARCH_BCM_IPROC || ARCH_BCM_63XX || COMPILE_TEST 24 24 depends on COMMON_CLK 25 25 default ARCH_BCM_IPROC 26 26 help
+1 -1
drivers/clk/sunxi-ng/ccu-sun6i-a31.c
··· 143 143 4, 2, /* K */ 144 144 0, 4, /* M */ 145 145 21, 0, /* mux */ 146 - BIT(31), /* gate */ 146 + BIT(31) | BIT(23) | BIT(22), /* gate */ 147 147 BIT(28), /* lock */ 148 148 CLK_SET_RATE_UNGATE); 149 149
+1 -1
drivers/clk/sunxi-ng/ccu-sun8i-a33.c
··· 131 131 8, 4, /* N */ 132 132 4, 2, /* K */ 133 133 0, 4, /* M */ 134 - BIT(31), /* gate */ 134 + BIT(31) | BIT(23) | BIT(22), /* gate */ 135 135 BIT(28), /* lock */ 136 136 CLK_SET_RATE_UNGATE); 137 137
+3 -2
drivers/crypto/caam/ctrl.c
··· 558 558 * Enable DECO watchdogs and, if this is a PHYS_ADDR_T_64BIT kernel, 559 559 * long pointers in master configuration register 560 560 */ 561 - clrsetbits_32(&ctrl->mcr, MCFGR_AWCACHE_MASK, MCFGR_AWCACHE_CACH | 562 - MCFGR_AWCACHE_BUFF | MCFGR_WDENABLE | MCFGR_LARGE_BURST | 561 + clrsetbits_32(&ctrl->mcr, MCFGR_AWCACHE_MASK | MCFGR_LONG_PTR, 562 + MCFGR_AWCACHE_CACH | MCFGR_AWCACHE_BUFF | 563 + MCFGR_WDENABLE | MCFGR_LARGE_BURST | 563 564 (sizeof(dma_addr_t) == sizeof(u64) ? MCFGR_LONG_PTR : 0)); 564 565 565 566 /*
+2 -1
drivers/crypto/chelsio/chcr_algo.h
··· 422 422 { 423 423 u32 temp; 424 424 u32 w_ring[MAX_NK]; 425 - int i, j, k = 0; 425 + int i, j, k; 426 426 u8 nr, nk; 427 427 428 428 switch (keylength) { ··· 460 460 temp = w_ring[i % nk]; 461 461 i++; 462 462 } 463 + i--; 463 464 for (k = 0, j = i % nk; k < nk; k++) { 464 465 *((u32 *)dec_key + k) = htonl(w_ring[j]); 465 466 j--;
+5 -6
drivers/crypto/marvell/hash.c
··· 168 168 mv_cesa_adjust_op(engine, &creq->op_tmpl); 169 169 memcpy_toio(engine->sram, &creq->op_tmpl, sizeof(creq->op_tmpl)); 170 170 171 - digsize = crypto_ahash_digestsize(crypto_ahash_reqtfm(req)); 172 - for (i = 0; i < digsize / 4; i++) 173 - writel_relaxed(creq->state[i], engine->regs + CESA_IVDIG(i)); 174 - 175 - mv_cesa_adjust_op(engine, &creq->op_tmpl); 176 - memcpy_toio(engine->sram, &creq->op_tmpl, sizeof(creq->op_tmpl)); 171 + if (!sreq->offset) { 172 + digsize = crypto_ahash_digestsize(crypto_ahash_reqtfm(req)); 173 + for (i = 0; i < digsize / 4; i++) 174 + writel_relaxed(creq->state[i], engine->regs + CESA_IVDIG(i)); 175 + } 177 176 178 177 if (creq->cache_ptr) 179 178 memcpy_toio(engine->sram + CESA_SA_DATA_SRAM_OFFSET,
+2 -2
drivers/dax/dax.c
··· 270 270 if (!dax_dev->alive) 271 271 return -ENXIO; 272 272 273 - /* prevent private / writable mappings from being established */ 274 - if ((vma->vm_flags & (VM_NORESERVE|VM_SHARED|VM_WRITE)) == VM_WRITE) { 273 + /* prevent private mappings from being established */ 274 + if ((vma->vm_flags & VM_MAYSHARE) != VM_MAYSHARE) { 275 275 dev_info(dev, "%s: %s: fail, attempted private mapping\n", 276 276 current->comm, func); 277 277 return -EINVAL;
+3 -1
drivers/dax/pmem.c
··· 78 78 nsio = to_nd_namespace_io(&ndns->dev); 79 79 80 80 /* parse the 'pfn' info block via ->rw_bytes */ 81 - devm_nsio_enable(dev, nsio); 81 + rc = devm_nsio_enable(dev, nsio); 82 + if (rc) 83 + return rc; 82 84 altmap = nvdimm_setup_pfn(nd_pfn, &res, &__altmap); 83 85 if (IS_ERR(altmap)) 84 86 return PTR_ERR(altmap);
+1
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 2472 2472 struct drm_file *file_priv); 2473 2473 void amdgpu_driver_preclose_kms(struct drm_device *dev, 2474 2474 struct drm_file *file_priv); 2475 + int amdgpu_suspend(struct amdgpu_device *adev); 2475 2476 int amdgpu_device_suspend(struct drm_device *dev, bool suspend, bool fbcon); 2476 2477 int amdgpu_device_resume(struct drm_device *dev, bool resume, bool fbcon); 2477 2478 u32 amdgpu_get_vblank_counter_kms(struct drm_device *dev, unsigned int pipe);
+15 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
··· 34 34 35 35 static struct amdgpu_atpx_priv { 36 36 bool atpx_detected; 37 + bool bridge_pm_usable; 37 38 /* handle for device - and atpx */ 38 39 acpi_handle dhandle; 39 40 acpi_handle other_handle; ··· 206 205 atpx->is_hybrid = false; 207 206 if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) { 208 207 printk("ATPX Hybrid Graphics\n"); 209 - atpx->functions.power_cntl = false; 208 + /* 209 + * Disable legacy PM methods only when pcie port PM is usable, 210 + * otherwise the device might fail to power off or power on. 211 + */ 212 + atpx->functions.power_cntl = !amdgpu_atpx_priv.bridge_pm_usable; 210 213 atpx->is_hybrid = true; 211 214 } 212 215 ··· 560 555 struct pci_dev *pdev = NULL; 561 556 bool has_atpx = false; 562 557 int vga_count = 0; 558 + bool d3_supported = false; 559 + struct pci_dev *parent_pdev; 563 560 564 561 while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) { 565 562 vga_count++; 566 563 567 564 has_atpx |= (amdgpu_atpx_pci_probe_handle(pdev) == true); 565 + 566 + parent_pdev = pci_upstream_bridge(pdev); 567 + d3_supported |= parent_pdev && parent_pdev->bridge_d3; 568 568 } 569 569 570 570 while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) { 571 571 vga_count++; 572 572 573 573 has_atpx |= (amdgpu_atpx_pci_probe_handle(pdev) == true); 574 + 575 + parent_pdev = pci_upstream_bridge(pdev); 576 + d3_supported |= parent_pdev && parent_pdev->bridge_d3; 574 577 } 575 578 576 579 if (has_atpx && vga_count == 2) { ··· 586 573 printk(KERN_INFO "vga_switcheroo: detected switching method %s handle\n", 587 574 acpi_method_name); 588 575 amdgpu_atpx_priv.atpx_detected = true; 576 + amdgpu_atpx_priv.bridge_pm_usable = d3_supported; 589 577 amdgpu_atpx_init(); 590 578 return true; 591 579 }
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 1493 1493 return 0; 1494 1494 } 1495 1495 1496 - static int amdgpu_suspend(struct amdgpu_device *adev) 1496 + int amdgpu_suspend(struct amdgpu_device *adev) 1497 1497 { 1498 1498 int i, r; 1499 1499
+4 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 479 479 static void 480 480 amdgpu_pci_shutdown(struct pci_dev *pdev) 481 481 { 482 + struct drm_device *dev = pci_get_drvdata(pdev); 483 + struct amdgpu_device *adev = dev->dev_private; 484 + 482 485 /* if we are running in a VM, make sure the device 483 486 * torn down properly on reboot/shutdown. 484 487 * unfortunately we can't detect certain 485 488 * hypervisors so just do this all the time. 486 489 */ 487 - amdgpu_pci_remove(pdev); 490 + amdgpu_suspend(adev); 488 491 } 489 492 490 493 static int amdgpu_pmops_suspend(struct device *dev)
+6 -6
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
··· 2984 2984 if (!(data->mc_micro_code_feature & DISABLE_MC_LOADMICROCODE) && memory_clock > data->highest_mclk) 2985 2985 data->highest_mclk = memory_clock; 2986 2986 2987 - performance_level = &(ps->performance_levels 2988 - [ps->performance_level_count++]); 2989 - 2990 2987 PP_ASSERT_WITH_CODE( 2991 2988 (ps->performance_level_count < smum_get_mac_definition(hwmgr->smumgr, SMU_MAX_LEVELS_GRAPHICS)), 2992 2989 "Performance levels exceeds SMC limit!", 2993 2990 return -EINVAL); 2994 2991 2995 2992 PP_ASSERT_WITH_CODE( 2996 - (ps->performance_level_count <= 2993 + (ps->performance_level_count < 2997 2994 hwmgr->platform_descriptor.hardwareActivityPerformanceLevels), 2998 - "Performance levels exceeds Driver limit!", 2999 - return -EINVAL); 2995 + "Performance levels exceeds Driver limit, Skip!", 2996 + return 0); 2997 + 2998 + performance_level = &(ps->performance_levels 2999 + [ps->performance_level_count++]); 3000 3000 3001 3001 /* Performance levels are arranged from low to high. */ 3002 3002 performance_level->memory_clock = memory_clock;
+4 -1
drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smc.c
··· 2214 2214 int polaris10_process_firmware_header(struct pp_hwmgr *hwmgr) 2215 2215 { 2216 2216 struct polaris10_smumgr *smu_data = (struct polaris10_smumgr *)(hwmgr->smumgr->backend); 2217 + struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend); 2217 2218 uint32_t tmp; 2218 2219 int result; 2219 2220 bool error = false; ··· 2234 2233 offsetof(SMU74_Firmware_Header, SoftRegisters), 2235 2234 &tmp, SMC_RAM_END); 2236 2235 2237 - if (!result) 2236 + if (!result) { 2237 + data->soft_regs_start = tmp; 2238 2238 smu_data->smu7_data.soft_regs_start = tmp; 2239 + } 2239 2240 2240 2241 error |= (0 != result); 2241 2242
+2 -3
drivers/gpu/drm/arm/hdlcd_crtc.c
··· 150 150 clk_prepare_enable(hdlcd->clk); 151 151 hdlcd_crtc_mode_set_nofb(crtc); 152 152 hdlcd_write(hdlcd, HDLCD_REG_COMMAND, 1); 153 + drm_crtc_vblank_on(crtc); 153 154 } 154 155 155 156 static void hdlcd_crtc_disable(struct drm_crtc *crtc) 156 157 { 157 158 struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc); 158 159 159 - if (!crtc->state->active) 160 - return; 161 - 160 + drm_crtc_vblank_off(crtc); 162 161 hdlcd_write(hdlcd, HDLCD_REG_COMMAND, 0); 163 162 clk_disable_unprepare(hdlcd->clk); 164 163 }
+1 -1
drivers/gpu/drm/arm/hdlcd_drv.c
··· 375 375 376 376 err_fbdev: 377 377 drm_kms_helper_poll_fini(drm); 378 - drm_mode_config_cleanup(drm); 379 378 drm_vblank_cleanup(drm); 380 379 err_vblank: 381 380 pm_runtime_disable(drm->dev); ··· 386 387 drm_irq_uninstall(drm); 387 388 of_reserved_mem_device_release(drm->dev); 388 389 err_free: 390 + drm_mode_config_cleanup(drm); 389 391 dev_set_drvdata(dev, NULL); 390 392 drm_dev_unref(drm); 391 393
+6 -4
drivers/gpu/drm/drm_ioctl.c
··· 254 254 req->value = dev->mode_config.async_page_flip; 255 255 break; 256 256 case DRM_CAP_PAGE_FLIP_TARGET: 257 - req->value = 1; 258 - drm_for_each_crtc(crtc, dev) { 259 - if (!crtc->funcs->page_flip_target) 260 - req->value = 0; 257 + if (drm_core_check_feature(dev, DRIVER_MODESET)) { 258 + req->value = 1; 259 + drm_for_each_crtc(crtc, dev) { 260 + if (!crtc->funcs->page_flip_target) 261 + req->value = 0; 262 + } 261 263 } 262 264 break; 263 265 case DRM_CAP_CURSOR_WIDTH:
+5
drivers/gpu/drm/exynos/exynos_hdmi.c
··· 1907 1907 err_hdmiphy: 1908 1908 if (hdata->hdmiphy_port) 1909 1909 put_device(&hdata->hdmiphy_port->dev); 1910 + if (hdata->regs_hdmiphy) 1911 + iounmap(hdata->regs_hdmiphy); 1910 1912 err_ddc: 1911 1913 put_device(&hdata->ddc_adpt->dev); 1912 1914 ··· 1930 1928 1931 1929 if (hdata->hdmiphy_port) 1932 1930 put_device(&hdata->hdmiphy_port->dev); 1931 + 1932 + if (hdata->regs_hdmiphy) 1933 + iounmap(hdata->regs_hdmiphy); 1933 1934 1934 1935 put_device(&hdata->ddc_adpt->dev); 1935 1936
+3 -2
drivers/gpu/drm/i915/i915_gem.c
··· 2268 2268 page = shmem_read_mapping_page(mapping, i); 2269 2269 if (IS_ERR(page)) { 2270 2270 ret = PTR_ERR(page); 2271 - goto err_pages; 2271 + goto err_sg; 2272 2272 } 2273 2273 } 2274 2274 #ifdef CONFIG_SWIOTLB ··· 2311 2311 2312 2312 return 0; 2313 2313 2314 - err_pages: 2314 + err_sg: 2315 2315 sg_mark_end(sg); 2316 + err_pages: 2316 2317 for_each_sgt_page(page, sgt_iter, st) 2317 2318 put_page(page); 2318 2319 sg_free_table(st);
+2 -1
drivers/gpu/drm/i915/intel_display.c
··· 12260 12260 intel_crtc->reset_count = i915_reset_count(&dev_priv->gpu_error); 12261 12261 if (i915_reset_in_progress_or_wedged(&dev_priv->gpu_error)) { 12262 12262 ret = -EIO; 12263 - goto cleanup; 12263 + goto unlock; 12264 12264 } 12265 12265 12266 12266 atomic_inc(&intel_crtc->unpin_work_count); ··· 12352 12352 intel_unpin_fb_obj(fb, crtc->primary->state->rotation); 12353 12353 cleanup_pending: 12354 12354 atomic_dec(&intel_crtc->unpin_work_count); 12355 + unlock: 12355 12356 mutex_unlock(&dev->struct_mutex); 12356 12357 cleanup: 12357 12358 crtc->primary->fb = old_fb;
+7 -7
drivers/gpu/drm/mediatek/mtk_disp_ovl.c
··· 251 251 if (irq < 0) 252 252 return irq; 253 253 254 - ret = devm_request_irq(dev, irq, mtk_disp_ovl_irq_handler, 255 - IRQF_TRIGGER_NONE, dev_name(dev), priv); 256 - if (ret < 0) { 257 - dev_err(dev, "Failed to request irq %d: %d\n", irq, ret); 258 - return ret; 259 - } 260 - 261 254 comp_id = mtk_ddp_comp_get_id(dev->of_node, MTK_DISP_OVL); 262 255 if (comp_id < 0) { 263 256 dev_err(dev, "Failed to identify by alias: %d\n", comp_id); ··· 265 272 } 266 273 267 274 platform_set_drvdata(pdev, priv); 275 + 276 + ret = devm_request_irq(dev, irq, mtk_disp_ovl_irq_handler, 277 + IRQF_TRIGGER_NONE, dev_name(dev), priv); 278 + if (ret < 0) { 279 + dev_err(dev, "Failed to request irq %d: %d\n", irq, ret); 280 + return ret; 281 + } 268 282 269 283 ret = component_add(dev, &mtk_disp_ovl_component_ops); 270 284 if (ret)
+1 -1
drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
··· 123 123 unsigned int bpc) 124 124 { 125 125 writel(w << 16 | h, comp->regs + DISP_OD_SIZE); 126 - writel(OD_RELAYMODE, comp->regs + OD_RELAYMODE); 126 + writel(OD_RELAYMODE, comp->regs + DISP_OD_CFG); 127 127 mtk_dither_set(comp, bpc, DISP_OD_CFG); 128 128 } 129 129
+50 -18
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 86 86 87 87 #define DSI_PHY_TIMECON0 0x110 88 88 #define LPX (0xff << 0) 89 - #define HS_PRPR (0xff << 8) 89 + #define HS_PREP (0xff << 8) 90 90 #define HS_ZERO (0xff << 16) 91 91 #define HS_TRAIL (0xff << 24) 92 92 ··· 102 102 #define CLK_TRAIL (0xff << 24) 103 103 104 104 #define DSI_PHY_TIMECON3 0x11c 105 - #define CLK_HS_PRPR (0xff << 0) 105 + #define CLK_HS_PREP (0xff << 0) 106 106 #define CLK_HS_POST (0xff << 8) 107 107 #define CLK_HS_EXIT (0xff << 16) 108 + 109 + #define T_LPX 5 110 + #define T_HS_PREP 6 111 + #define T_HS_TRAIL 8 112 + #define T_HS_EXIT 7 113 + #define T_HS_ZERO 10 108 114 109 115 #define NS_TO_CYCLE(n, c) ((n) / (c) + (((n) % (c)) ? 1 : 0)) 110 116 ··· 167 161 static void dsi_phy_timconfig(struct mtk_dsi *dsi) 168 162 { 169 163 u32 timcon0, timcon1, timcon2, timcon3; 170 - unsigned int ui, cycle_time; 171 - unsigned int lpx; 164 + u32 ui, cycle_time; 172 165 173 166 ui = 1000 / dsi->data_rate + 0x01; 174 167 cycle_time = 8000 / dsi->data_rate + 0x01; 175 - lpx = 5; 176 168 177 - timcon0 = (8 << 24) | (0xa << 16) | (0x6 << 8) | lpx; 178 - timcon1 = (7 << 24) | (5 * lpx << 16) | ((3 * lpx) / 2) << 8 | 179 - (4 * lpx); 169 + timcon0 = T_LPX | T_HS_PREP << 8 | T_HS_ZERO << 16 | T_HS_TRAIL << 24; 170 + timcon1 = 4 * T_LPX | (3 * T_LPX / 2) << 8 | 5 * T_LPX << 16 | 171 + T_HS_EXIT << 24; 180 172 timcon2 = ((NS_TO_CYCLE(0x64, cycle_time) + 0xa) << 24) | 181 173 (NS_TO_CYCLE(0x150, cycle_time) << 16); 182 - timcon3 = (2 * lpx) << 16 | NS_TO_CYCLE(80 + 52 * ui, cycle_time) << 8 | 183 - NS_TO_CYCLE(0x40, cycle_time); 174 + timcon3 = NS_TO_CYCLE(0x40, cycle_time) | (2 * T_LPX) << 16 | 175 + NS_TO_CYCLE(80 + 52 * ui, cycle_time) << 8; 184 176 185 177 writel(timcon0, dsi->regs + DSI_PHY_TIMECON0); 186 178 writel(timcon1, dsi->regs + DSI_PHY_TIMECON1); ··· 206 202 { 207 203 struct device *dev = dsi->dev; 208 204 int ret; 205 + u64 pixel_clock, total_bits; 206 + u32 htotal, htotal_bits, bit_per_pixel, overhead_cycles, overhead_bits; 209 207 210 208
if (++dsi->refcount != 1) 211 209 return 0; 212 210 213 - /** 214 - * data_rate = (pixel_clock / 1000) * pixel_dipth * mipi_ratio; 215 - * pixel_clock unit is Khz, data_rata unit is MHz, so need divide 1000. 216 - * mipi_ratio is mipi clk coefficient for balance the pixel clk in mipi. 217 - * we set mipi_ratio is 1.05. 218 - */ 219 - dsi->data_rate = dsi->vm.pixelclock * 3 * 21 / (1 * 1000 * 10); 211 + switch (dsi->format) { 212 + case MIPI_DSI_FMT_RGB565: 213 + bit_per_pixel = 16; 214 + break; 215 + case MIPI_DSI_FMT_RGB666_PACKED: 216 + bit_per_pixel = 18; 217 + break; 218 + case MIPI_DSI_FMT_RGB666: 219 + case MIPI_DSI_FMT_RGB888: 220 + default: 221 + bit_per_pixel = 24; 222 + break; 223 + } 220 224 221 - ret = clk_set_rate(dsi->hs_clk, dsi->data_rate * 1000000); 225 + /** 226 + * vm.pixelclock is in kHz, pixel_clock unit is Hz, so multiply by 1000 227 + * htotal_time = htotal * byte_per_pixel / num_lanes 228 + * overhead_time = lpx + hs_prepare + hs_zero + hs_trail + hs_exit 229 + * mipi_ratio = (htotal_time + overhead_time) / htotal_time 230 + * data_rate = pixel_clock * bit_per_pixel * mipi_ratio / num_lanes; 231 + */ 232 + pixel_clock = dsi->vm.pixelclock * 1000; 233 + htotal = dsi->vm.hactive + dsi->vm.hback_porch + dsi->vm.hfront_porch + 234 + dsi->vm.hsync_len; 235 + htotal_bits = htotal * bit_per_pixel; 236 + 237 + overhead_cycles = T_LPX + T_HS_PREP + T_HS_ZERO + T_HS_TRAIL + 238 + T_HS_EXIT; 239 + overhead_bits = overhead_cycles * dsi->lanes * 8; 240 + total_bits = htotal_bits + overhead_bits; 241 + 242 + dsi->data_rate = DIV_ROUND_UP_ULL(pixel_clock * total_bits, 243 + htotal * dsi->lanes); 244 + 245 + ret = clk_set_rate(dsi->hs_clk, dsi->data_rate); 222 246 if (ret < 0) { 223 247 dev_err(dev, "Failed to set data rate: %d\n", ret); 224 248 goto err_refcount;
+15 -1
drivers/gpu/drm/radeon/radeon_atpx_handler.c
··· 34 34 35 35 static struct radeon_atpx_priv { 36 36 bool atpx_detected; 37 + bool bridge_pm_usable; 37 38 /* handle for device - and atpx */ 38 39 acpi_handle dhandle; 39 40 struct radeon_atpx atpx; ··· 204 203 atpx->is_hybrid = false; 205 204 if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) { 206 205 printk("ATPX Hybrid Graphics\n"); 207 - atpx->functions.power_cntl = false; 206 + /* 207 + * Disable legacy PM methods only when pcie port PM is usable, 208 + * otherwise the device might fail to power off or power on. 209 + */ 210 + atpx->functions.power_cntl = !radeon_atpx_priv.bridge_pm_usable; 208 211 atpx->is_hybrid = true; 209 212 } 210 213 ··· 553 548 struct pci_dev *pdev = NULL; 554 549 bool has_atpx = false; 555 550 int vga_count = 0; 551 + bool d3_supported = false; 552 + struct pci_dev *parent_pdev; 556 553 557 554 while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) { 558 555 vga_count++; 559 556 560 557 has_atpx |= (radeon_atpx_pci_probe_handle(pdev) == true); 558 + 559 + parent_pdev = pci_upstream_bridge(pdev); 560 + d3_supported |= parent_pdev && parent_pdev->bridge_d3; 561 561 } 562 562 563 563 /* some newer PX laptops mark the dGPU as a non-VGA display device */ ··· 570 560 vga_count++; 571 561 572 562 has_atpx |= (radeon_atpx_pci_probe_handle(pdev) == true); 563 + 564 + parent_pdev = pci_upstream_bridge(pdev); 565 + d3_supported |= parent_pdev && parent_pdev->bridge_d3; 573 566 } 574 567 575 568 if (has_atpx && vga_count == 2) { ··· 580 567 printk(KERN_INFO "vga_switcheroo: detected switching method %s handle\n", 581 568 acpi_method_name); 582 569 radeon_atpx_priv.atpx_detected = true; 570 + radeon_atpx_priv.bridge_pm_usable = d3_supported; 583 571 radeon_atpx_init(); 584 572 return true; 585 573 }
+79 -36
drivers/hid/hid-cp2112.c
··· 32 32 #include <linux/usb/ch9.h> 33 33 #include "hid-ids.h" 34 34 35 + #define CP2112_REPORT_MAX_LENGTH 64 36 + #define CP2112_GPIO_CONFIG_LENGTH 5 37 + #define CP2112_GPIO_GET_LENGTH 2 38 + #define CP2112_GPIO_SET_LENGTH 3 39 + 35 40 enum { 36 41 CP2112_GPIO_CONFIG = 0x02, 37 42 CP2112_GPIO_GET = 0x03, ··· 166 161 atomic_t read_avail; 167 162 atomic_t xfer_avail; 168 163 struct gpio_chip gc; 164 + u8 *in_out_buffer; 165 + spinlock_t lock; 169 166 }; 170 167 171 168 static int gpio_push_pull = 0xFF; ··· 178 171 { 179 172 struct cp2112_device *dev = gpiochip_get_data(chip); 180 173 struct hid_device *hdev = dev->hdev; 181 - u8 buf[5]; 174 + u8 *buf = dev->in_out_buffer; 175 + unsigned long flags; 182 176 int ret; 183 177 178 + spin_lock_irqsave(&dev->lock, flags); 179 + 184 180 ret = hid_hw_raw_request(hdev, CP2112_GPIO_CONFIG, buf, 185 - sizeof(buf), HID_FEATURE_REPORT, 186 - HID_REQ_GET_REPORT); 187 - if (ret != sizeof(buf)) { 181 + CP2112_GPIO_CONFIG_LENGTH, HID_FEATURE_REPORT, 182 + HID_REQ_GET_REPORT); 183 + if (ret != CP2112_GPIO_CONFIG_LENGTH) { 188 184 hid_err(hdev, "error requesting GPIO config: %d\n", ret); 189 - return ret; 185 + goto exit; 190 186 } 191 187 192 188 buf[1] &= ~(1 << offset); 193 189 buf[2] = gpio_push_pull; 194 190 195 - ret = hid_hw_raw_request(hdev, CP2112_GPIO_CONFIG, buf, sizeof(buf), 196 - HID_FEATURE_REPORT, HID_REQ_SET_REPORT); 191 + ret = hid_hw_raw_request(hdev, CP2112_GPIO_CONFIG, buf, 192 + CP2112_GPIO_CONFIG_LENGTH, HID_FEATURE_REPORT, 193 + HID_REQ_SET_REPORT); 197 194 if (ret < 0) { 198 195 hid_err(hdev, "error setting GPIO config: %d\n", ret); 199 - return ret; 196 + goto exit; 200 197 } 201 198 202 - return 0; 199 + ret = 0; 200 + 201 + exit: 202 + spin_unlock_irqrestore(&dev->lock, flags); 203 + return ret <= 0 ?
ret : -EIO; 203 204 } 204 205 205 206 static void cp2112_gpio_set(struct gpio_chip *chip, unsigned offset, int value) 206 207 { 207 208 struct cp2112_device *dev = gpiochip_get_data(chip); 208 209 struct hid_device *hdev = dev->hdev; 209 - u8 buf[3]; 210 + u8 *buf = dev->in_out_buffer; 211 + unsigned long flags; 210 212 int ret; 213 + 214 + spin_lock_irqsave(&dev->lock, flags); 211 215 212 216 buf[0] = CP2112_GPIO_SET; 213 217 buf[1] = value ? 0xff : 0; 214 218 buf[2] = 1 << offset; 215 219 216 - ret = hid_hw_raw_request(hdev, CP2112_GPIO_SET, buf, sizeof(buf), 217 - HID_FEATURE_REPORT, HID_REQ_SET_REPORT); 220 + ret = hid_hw_raw_request(hdev, CP2112_GPIO_SET, buf, 221 + CP2112_GPIO_SET_LENGTH, HID_FEATURE_REPORT, 222 + HID_REQ_SET_REPORT); 218 223 if (ret < 0) 219 224 hid_err(hdev, "error setting GPIO values: %d\n", ret); 225 + 226 + spin_unlock_irqrestore(&dev->lock, flags); 220 227 } 221 228 222 229 static int cp2112_gpio_get(struct gpio_chip *chip, unsigned offset) 223 230 { 224 231 struct cp2112_device *dev = gpiochip_get_data(chip); 225 232 struct hid_device *hdev = dev->hdev; 226 - u8 buf[2]; 233 + u8 *buf = dev->in_out_buffer; 234 + unsigned long flags; 227 235 int ret; 228 236 229 - ret = hid_hw_raw_request(hdev, CP2112_GPIO_GET, buf, sizeof(buf), 230 - HID_FEATURE_REPORT, HID_REQ_GET_REPORT); 231 - if (ret != sizeof(buf)) { 237 + spin_lock_irqsave(&dev->lock, flags); 238 + 239 + ret = hid_hw_raw_request(hdev, CP2112_GPIO_GET, buf, 240 + CP2112_GPIO_GET_LENGTH, HID_FEATURE_REPORT, 241 + HID_REQ_GET_REPORT); 242 + if (ret != CP2112_GPIO_GET_LENGTH) { 232 243 hid_err(hdev, "error requesting GPIO values: %d\n", ret); 233 - return ret; 244 + ret = ret < 0 ?
ret : -EIO; 245 + goto exit; 234 246 } 235 247 236 - return (buf[1] >> offset) & 1; 248 + ret = (buf[1] >> offset) & 1; 249 + 250 + exit: 251 + spin_unlock_irqrestore(&dev->lock, flags); 252 + 253 + return ret; 237 254 } 238 255 239 256 static int cp2112_gpio_direction_output(struct gpio_chip *chip, ··· 265 234 { 266 235 struct cp2112_device *dev = gpiochip_get_data(chip); 267 236 struct hid_device *hdev = dev->hdev; 268 - u8 buf[5]; 237 + u8 *buf = dev->in_out_buffer; 238 + unsigned long flags; 269 239 int ret; 270 240 241 + spin_lock_irqsave(&dev->lock, flags); 242 + 271 243 ret = hid_hw_raw_request(hdev, CP2112_GPIO_CONFIG, buf, 272 - sizeof(buf), HID_FEATURE_REPORT, 273 - HID_REQ_GET_REPORT); 274 - if (ret != sizeof(buf)) { 244 + CP2112_GPIO_CONFIG_LENGTH, HID_FEATURE_REPORT, 245 + HID_REQ_GET_REPORT); 246 + if (ret != CP2112_GPIO_CONFIG_LENGTH) { 275 247 hid_err(hdev, "error requesting GPIO config: %d\n", ret); 276 - return ret; 248 + goto fail; 277 249 } 278 250 279 251 buf[1] |= 1 << offset; 280 252 buf[2] = gpio_push_pull; 281 253 282 - ret = hid_hw_raw_request(hdev, CP2112_GPIO_CONFIG, buf, sizeof(buf), 283 - HID_FEATURE_REPORT, HID_REQ_SET_REPORT); 254 + ret = hid_hw_raw_request(hdev, CP2112_GPIO_CONFIG, buf, 255 + CP2112_GPIO_CONFIG_LENGTH, HID_FEATURE_REPORT, 256 + HID_REQ_SET_REPORT); 284 257 if (ret < 0) { 285 258 hid_err(hdev, "error setting GPIO config: %d\n", ret); 286 - return ret; 259 + goto fail; 287 260 } 261 + 262 + spin_unlock_irqrestore(&dev->lock, flags); 288 263 289 264 /* 290 265 * Set gpio value when output direction is already set, ··· 299 262 cp2112_gpio_set(chip, offset, value); 300 263 301 264 return 0; 265 + 266 + fail: 267 + spin_unlock_irqrestore(&dev->lock, flags); 268 + return ret < 0 ?
ret : -EIO; 302 269 } 303 270 304 271 static int cp2112_hid_get(struct hid_device *hdev, unsigned char report_number, ··· 1048 1007 struct cp2112_smbus_config_report config; 1049 1008 int ret; 1050 1009 1010 + dev = devm_kzalloc(&hdev->dev, sizeof(*dev), GFP_KERNEL); 1011 + if (!dev) 1012 + return -ENOMEM; 1013 + 1014 + dev->in_out_buffer = devm_kzalloc(&hdev->dev, CP2112_REPORT_MAX_LENGTH, 1015 + GFP_KERNEL); 1016 + if (!dev->in_out_buffer) 1017 + return -ENOMEM; 1018 + 1019 + spin_lock_init(&dev->lock); 1020 + 1051 1021 ret = hid_parse(hdev); 1052 1022 if (ret) { 1053 1023 hid_err(hdev, "parse failed\n"); ··· 1115 1063 goto err_power_normal; 1116 1064 } 1117 1065 1118 - dev = kzalloc(sizeof(*dev), GFP_KERNEL); 1119 - if (!dev) { 1120 - ret = -ENOMEM; 1121 - goto err_power_normal; 1122 - } 1123 - 1124 1066 hid_set_drvdata(hdev, (void *)dev); 1125 1067 dev->hdev = hdev; 1126 1068 dev->adap.owner = THIS_MODULE; ··· 1133 1087 1134 1088 if (ret) { 1135 1089 hid_err(hdev, "error registering i2c adapter\n"); 1136 - goto err_free_dev; 1090 + goto err_power_normal; 1137 1091 } 1138 1092 1139 1093 hid_dbg(hdev, "adapter registered\n"); ··· 1169 1123 gpiochip_remove(&dev->gc); 1170 1124 err_free_i2c: 1171 1125 i2c_del_adapter(&dev->adap); 1172 - err_free_dev: 1173 - kfree(dev); 1174 1126 err_power_normal: 1175 1127 hid_hw_power(hdev, PM_HINT_NORMAL); 1176 1128 err_hid_close: ··· 1193 1149 */ 1194 1150 hid_hw_close(hdev); 1195 1151 hid_hw_stop(hdev); 1196 - kfree(dev); 1197 1152 } 1198 1153 1199 1154 static int cp2112_raw_event(struct hid_device *hdev, struct hid_report *report,
+10 -4
drivers/hid/hid-lg.c
··· 756 756 757 757 /* Setup wireless link with Logitech Wii wheel */ 758 758 if (hdev->product == USB_DEVICE_ID_LOGITECH_WII_WHEEL) { 759 - unsigned char buf[] = { 0x00, 0xAF, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }; 759 + const unsigned char cbuf[] = { 0x00, 0xAF, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }; 760 + u8 *buf = kmemdup(cbuf, sizeof(cbuf), GFP_KERNEL); 760 761 761 - ret = hid_hw_raw_request(hdev, buf[0], buf, sizeof(buf), 762 + if (!buf) { 763 + ret = -ENOMEM; 764 + goto err_free; 765 + } 766 + 767 + ret = hid_hw_raw_request(hdev, buf[0], buf, sizeof(cbuf), 762 768 HID_FEATURE_REPORT, HID_REQ_SET_REPORT); 763 - 764 769 if (ret >= 0) { 765 770 /* insert a little delay of 10 jiffies ~ 40ms */ 766 771 wait_queue_head_t wait; ··· 777 772 buf[1] = 0xB2; 778 773 get_random_bytes(&buf[2], 2); 779 774 780 - ret = hid_hw_raw_request(hdev, buf[0], buf, sizeof(buf), 775 + ret = hid_hw_raw_request(hdev, buf[0], buf, sizeof(cbuf), 781 776 HID_FEATURE_REPORT, HID_REQ_SET_REPORT); 782 777 } 778 + kfree(buf); 783 779 } 784 780 785 781 if (drv_data->quirks & LG_FF)
+10 -2
drivers/hid/hid-magicmouse.c
··· 493 493 static int magicmouse_probe(struct hid_device *hdev, 494 494 const struct hid_device_id *id) 495 495 { 496 - __u8 feature[] = { 0xd7, 0x01 }; 496 + const u8 feature[] = { 0xd7, 0x01 }; 497 + u8 *buf; 497 498 struct magicmouse_sc *msc; 498 499 struct hid_report *report; 499 500 int ret; ··· 545 544 } 546 545 report->size = 6; 547 546 547 + buf = kmemdup(feature, sizeof(feature), GFP_KERNEL); 548 + if (!buf) { 549 + ret = -ENOMEM; 550 + goto err_stop_hw; 551 + } 552 + 548 553 /* 549 554 * Some devices repond with 'invalid report id' when feature 550 555 * report switching it into multitouch mode is sent to it. ··· 559 552 * but there seems to be no other way of switching the mode. 560 553 * Thus the super-ugly hacky success check below. 561 554 */ 562 - ret = hid_hw_raw_request(hdev, feature[0], feature, sizeof(feature), 555 + ret = hid_hw_raw_request(hdev, buf[0], buf, sizeof(feature), 563 556 HID_FEATURE_REPORT, HID_REQ_SET_REPORT); 557 + kfree(buf); 564 558 if (ret != -EIO && ret != sizeof(feature)) { 565 559 hid_err(hdev, "unable to request touch data (%d)\n", ret); 566 560 goto err_stop_hw;
+8 -2
drivers/hid/hid-rmi.c
··· 188 188 static int rmi_set_mode(struct hid_device *hdev, u8 mode) 189 189 { 190 190 int ret; 191 - u8 txbuf[2] = {RMI_SET_RMI_MODE_REPORT_ID, mode}; 191 + const u8 txbuf[2] = {RMI_SET_RMI_MODE_REPORT_ID, mode}; 192 + u8 *buf; 192 193 193 - ret = hid_hw_raw_request(hdev, RMI_SET_RMI_MODE_REPORT_ID, txbuf, 194 + buf = kmemdup(txbuf, sizeof(txbuf), GFP_KERNEL); 195 + if (!buf) 196 + return -ENOMEM; 197 + 198 + ret = hid_hw_raw_request(hdev, RMI_SET_RMI_MODE_REPORT_ID, buf, 194 199 sizeof(txbuf), HID_FEATURE_REPORT, HID_REQ_SET_REPORT); 200 + kfree(buf); 195 201 if (ret < 0) { 196 202 dev_err(&hdev->dev, "unable to set rmi mode to %d (%d)\n", mode, 197 203 ret);
+1
drivers/hid/hid-sensor-hub.c
··· 212 212 __s32 value; 213 213 int ret = 0; 214 214 215 + memset(buffer, 0, buffer_size); 215 216 mutex_lock(&data->mutex); 216 217 report = sensor_hub_report(report_id, hsdev->hdev, HID_FEATURE_REPORT); 217 218 if (!report || (field_index >= report->maxfield)) {
+25 -39
drivers/i2c/busses/i2c-designware-core.c
··· 91 91 DW_IC_INTR_TX_ABRT | \ 92 92 DW_IC_INTR_STOP_DET) 93 93 94 - #define DW_IC_STATUS_ACTIVITY 0x1 95 - #define DW_IC_STATUS_TFE BIT(2) 96 - #define DW_IC_STATUS_MST_ACTIVITY BIT(5) 94 + #define DW_IC_STATUS_ACTIVITY 0x1 97 95 98 96 #define DW_IC_SDA_HOLD_RX_SHIFT 16 99 97 #define DW_IC_SDA_HOLD_RX_MASK GENMASK(23, DW_IC_SDA_HOLD_RX_SHIFT) ··· 476 478 { 477 479 struct i2c_msg *msgs = dev->msgs; 478 480 u32 ic_tar = 0; 479 - bool enabled; 480 481 481 - enabled = dw_readl(dev, DW_IC_ENABLE_STATUS) & 1; 482 - 483 - if (enabled) { 484 - u32 ic_status; 485 - 486 - /* 487 - * Only disable adapter if ic_tar and ic_con can't be 488 - * dynamically updated 489 - */ 490 - ic_status = dw_readl(dev, DW_IC_STATUS); 491 - if (!dev->dynamic_tar_update_enabled || 492 - (ic_status & DW_IC_STATUS_MST_ACTIVITY) || 493 - !(ic_status & DW_IC_STATUS_TFE)) { 494 - __i2c_dw_enable_and_wait(dev, false); 495 - enabled = false; 496 - } 497 - } 482 + /* Disable the adapter */ 483 + __i2c_dw_enable_and_wait(dev, false); 498 484 499 485 /* if the slave address is ten bit address, enable 10BITADDR */ 500 486 if (dev->dynamic_tar_update_enabled) { ··· 508 526 /* enforce disabled interrupts (due to HW issues) */ 509 527 i2c_dw_disable_int(dev); 510 528 511 - if (!enabled) 512 - __i2c_dw_enable(dev, true); 529 + /* Enable the adapter */ 530 + __i2c_dw_enable(dev, true); 513 531 514 532 /* Clear and enable interrupts */ 515 533 dw_readl(dev, DW_IC_CLR_INTR); ··· 593 611 if (msgs[dev->msg_write_idx].flags & I2C_M_RD) { 594 612 595 613 /* avoid rx buffer overrun */ 596 - if (rx_limit - dev->rx_outstanding <= 0) 614 + if (dev->rx_outstanding >= dev->rx_fifo_depth) 597 615 break; 598 616 599 617 dw_writel(dev, cmd | 0x100, DW_IC_DATA_CMD); ··· 690 708 } 691 709 692 710 /* 693 - * Prepare controller for a transaction and start transfer by calling 694 - * i2c_dw_xfer_init() 711 + * Prepare controller for a transaction and call i2c_dw_xfer_msg 695 712 */ 696 713 static int 697 714 i2c_dw_xfer(struct
i2c_adapter *adap, struct i2c_msg msgs[], int num) ··· 733 752 goto done; 734 753 } 735 754 755 + /* 756 + * We must disable the adapter before returning and signaling the end 757 + * of the current transfer. Otherwise the hardware might continue 758 + * generating interrupts which in turn causes a race condition with 759 + * the following transfer. Needs some more investigation if the 760 + * additional interrupts are a hardware bug or this driver doesn't 761 + * handle them correctly yet. 762 + */ 763 + __i2c_dw_enable(dev, false); 764 + 736 765 if (dev->msg_err) { 737 766 ret = dev->msg_err; 738 767 goto done; 739 768 } 740 769 741 770 /* no error */ 742 - if (likely(!dev->cmd_err)) { 771 + if (likely(!dev->cmd_err && !dev->status)) { 743 772 ret = num; 744 773 goto done; 745 774 } ··· 759 768 ret = i2c_dw_handle_tx_abort(dev); 760 769 goto done; 761 770 } 771 + 772 + if (dev->status) 773 + dev_err(dev->dev, 774 + "transfer terminated early - interrupt latency too high?\n"); 775 + 762 776 ret = -EIO; 763 777 764 778 done: ··· 884 888 */ 885 889 886 890 tx_aborted: 887 - if ((stat & (DW_IC_INTR_TX_ABRT | DW_IC_INTR_STOP_DET)) 888 - || dev->msg_err) { 889 - /* 890 - * We must disable interruts before returning and signaling 891 - * the end of the current transfer. Otherwise the hardware 892 - * might continue generating interrupts for non-existent 893 - * transfers. 894 - */ 895 - i2c_dw_disable_int(dev); 896 - dw_readl(dev, DW_IC_CLR_INTR); 897 - 891 + if ((stat & (DW_IC_INTR_TX_ABRT | DW_IC_INTR_STOP_DET)) || dev->msg_err) 898 892 complete(&dev->cmd_complete); 899 - } else if (unlikely(dev->accessor_flags & ACCESS_INTR_MASK)) { 893 + else if (unlikely(dev->accessor_flags & ACCESS_INTR_MASK)) { 900 894 /* workaround to trigger pending interrupt */ 901 895 stat = dw_readl(dev, DW_IC_INTR_MASK); 902 896 i2c_dw_disable_int(dev);
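Editorial note: the rx-limit change in the i2c-designware hunk above bounds the number of outstanding read commands by the RX FIFO depth instead of deriving it from the TX limit. A minimal, self-contained sketch of that accounting, with hypothetical names standing in for the driver's `dev` state:

```c
#include <stddef.h>

/* Hypothetical mirror of the driver state behind the fix: every read
 * command queued eventually produces one RX byte, so commands in flight
 * must not exceed the RX FIFO depth or received data can be dropped. */
struct xfer_state {
	size_t rx_fifo_depth;   /* hardware RX FIFO entries */
	size_t rx_outstanding;  /* read commands issued, data not yet drained */
};

/* Mirrors the new check `dev->rx_outstanding >= dev->rx_fifo_depth`
 * (inverted: returns nonzero while another read may be queued). */
static int can_issue_read(const struct xfer_state *st)
{
	return st->rx_outstanding < st->rx_fifo_depth;
}

static void issue_read(struct xfer_state *st)
{
	st->rx_outstanding++;
}

static void drain_rx_byte(struct xfer_state *st)
{
	if (st->rx_outstanding)
		st->rx_outstanding--;
}
```

This is an illustration of the invariant only, not the driver's actual code path.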
+1 -3
drivers/i2c/busses/i2c-octeon-core.c
··· 381 381 if (result) 382 382 return result; 383 383 384 - data[i] = octeon_i2c_data_read(i2c, &result); 385 - if (result) 386 - return result; 384 + data[i] = octeon_i2c_data_read(i2c); 387 385 if (recv_len && i == 0) { 388 386 if (data[i] > I2C_SMBUS_BLOCK_MAX + 1) 389 387 return -EPROTO;
+11 -16
drivers/i2c/busses/i2c-octeon-core.h
··· 5 5 #include <linux/i2c.h> 6 6 #include <linux/i2c-smbus.h> 7 7 #include <linux/io.h> 8 - #include <linux/iopoll.h> 9 8 #include <linux/kernel.h> 10 9 #include <linux/pci.h> 11 10 ··· 144 145 u64 tmp; 145 146 146 147 __raw_writeq(SW_TWSI_V | eop_reg | data, i2c->twsi_base + SW_TWSI(i2c)); 147 - 148 - readq_poll_timeout(i2c->twsi_base + SW_TWSI(i2c), tmp, tmp & SW_TWSI_V, 149 - I2C_OCTEON_EVENT_WAIT, i2c->adap.timeout); 148 + do { 149 + tmp = __raw_readq(i2c->twsi_base + SW_TWSI(i2c)); 150 + } while ((tmp & SW_TWSI_V) != 0); 150 151 } 151 152 152 153 #define octeon_i2c_ctl_write(i2c, val) \ ··· 163 164 * 164 165 * The I2C core registers are accessed indirectly via the SW_TWSI CSR. 165 166 */ 166 - static inline int octeon_i2c_reg_read(struct octeon_i2c *i2c, u64 eop_reg, 167 - int *error) 167 + static inline u8 octeon_i2c_reg_read(struct octeon_i2c *i2c, u64 eop_reg) 168 168 { 169 169 u64 tmp; 170 - int ret; 171 170 172 171 __raw_writeq(SW_TWSI_V | eop_reg | SW_TWSI_R, i2c->twsi_base + SW_TWSI(i2c)); 172 + do { 173 + tmp = __raw_readq(i2c->twsi_base + SW_TWSI(i2c)); 174 + } while ((tmp & SW_TWSI_V) != 0); 173 175 174 - ret = readq_poll_timeout(i2c->twsi_base + SW_TWSI(i2c), tmp, 175 - tmp & SW_TWSI_V, I2C_OCTEON_EVENT_WAIT, 176 - i2c->adap.timeout); 177 - if (error) 178 - *error = ret; 179 176 return tmp & 0xFF; 180 177 } 181 178 182 179 #define octeon_i2c_ctl_read(i2c) \ 183 - octeon_i2c_reg_read(i2c, SW_TWSI_EOP_TWSI_CTL, NULL) 184 - #define octeon_i2c_data_read(i2c, error) \ 185 - octeon_i2c_reg_read(i2c, SW_TWSI_EOP_TWSI_DATA, error) 180 + octeon_i2c_reg_read(i2c, SW_TWSI_EOP_TWSI_CTL) 181 + #define octeon_i2c_data_read(i2c) \ 182 + octeon_i2c_reg_read(i2c, SW_TWSI_EOP_TWSI_DATA) 186 183 #define octeon_i2c_stat_read(i2c) \ 187 - octeon_i2c_reg_read(i2c, SW_TWSI_EOP_TWSI_STAT, NULL) 184 + octeon_i2c_reg_read(i2c, SW_TWSI_EOP_TWSI_STAT) 188 185 189 186 /** 190 187 * octeon_i2c_read_int - read the TWSI_INT register
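Editorial note: the octeon hunks above replace `readq_poll_timeout()` with an open-coded busy-wait that spins until the hardware clears the SW_TWSI valid bit. A sketch of that polling shape, modeling successive register reads as an array; the bit position and names here are assumptions for illustration, not taken from the driver's header:

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed "valid" bit position for illustration only. */
#define SW_TWSI_V (1ULL << 63)

/* Spin until a read no longer has the valid bit set, mirroring
 *   do { tmp = __raw_readq(...); } while ((tmp & SW_TWSI_V) != 0);
 * Hardware is modeled as a sequence of values returned by successive
 * reads; the last value repeats, like a settled register. */
static uint64_t poll_valid_clear(const uint64_t *reads, size_t n)
{
	uint64_t tmp;
	size_t i = 0;

	do {
		tmp = reads[i < n - 1 ? i : n - 1];
		i++;
	} while ((tmp & SW_TWSI_V) != 0);

	return tmp;
}
```

Note the trade-off the patch makes: unlike `readq_poll_timeout()`, this loop has no timeout and relies on the hardware always completing.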
-4
drivers/input/mouse/psmouse-base.c
··· 1115 1115 if (psmouse_try_protocol(psmouse, PSMOUSE_TOUCHKIT_PS2, 1116 1116 &max_proto, set_properties, true)) 1117 1117 return PSMOUSE_TOUCHKIT_PS2; 1118 - 1119 - if (psmouse_try_protocol(psmouse, PSMOUSE_BYD, 1120 - &max_proto, set_properties, true)) 1121 - return PSMOUSE_BYD; 1122 1118 } 1123 1119 1124 1120 /*
+3 -1
drivers/iommu/dmar.c
··· 338 338 struct pci_dev *pdev = to_pci_dev(data); 339 339 struct dmar_pci_notify_info *info; 340 340 341 - /* Only care about add/remove events for physical functions */ 341 + /* Only care about add/remove events for physical functions. 342 + * For VFs we actually do the lookup based on the corresponding 343 + * PF in device_to_iommu() anyway. */ 342 344 if (pdev->is_virtfn) 343 345 return NOTIFY_DONE; 344 346 if (action != BUS_NOTIFY_ADD_DEVICE &&
+13
drivers/iommu/intel-iommu.c
··· 892 892 return NULL; 893 893 894 894 if (dev_is_pci(dev)) { 895 + struct pci_dev *pf_pdev; 896 + 895 897 pdev = to_pci_dev(dev); 898 + /* VFs aren't listed in scope tables; we need to look up 899 + * the PF instead to find the IOMMU. */ 900 + pf_pdev = pci_physfn(pdev); 901 + dev = &pf_pdev->dev; 896 902 segment = pci_domain_nr(pdev->bus); 897 903 } else if (has_acpi_companion(dev)) 898 904 dev = &ACPI_COMPANION(dev)->dev; ··· 911 905 for_each_active_dev_scope(drhd->devices, 912 906 drhd->devices_cnt, i, tmp) { 913 907 if (tmp == dev) { 908 + /* For a VF use its original BDF# not that of the PF 909 + * which we used for the IOMMU lookup. Strictly speaking 910 + * we could do this for all PCI devices; we only need to 911 + * get the BDF# from the scope table for ACPI matches. */ 912 + if (pdev->is_virtfn) 913 + goto got_pdev; 914 + 914 915 *bus = drhd->devices[i].bus; 915 916 *devfn = drhd->devices[i].devfn; 916 917 goto out;
+16 -10
drivers/iommu/intel-svm.c
··· 39 39 struct page *pages; 40 40 int order; 41 41 42 - order = ecap_pss(iommu->ecap) + 7 - PAGE_SHIFT; 43 - if (order < 0) 44 - order = 0; 42 + /* Start at 2 because it's defined as 2^(1+PSS) */ 43 + iommu->pasid_max = 2 << ecap_pss(iommu->ecap); 45 44 45 + /* Eventually I'm promised we will get a multi-level PASID table 46 + * and it won't have to be physically contiguous. Until then, 47 + * limit the size because 8MiB contiguous allocations can be hard 48 + * to come by. The limit of 0x20000, which is 1MiB for each of 49 + * the PASID and PASID-state tables, is somewhat arbitrary. */ 50 + if (iommu->pasid_max > 0x20000) 51 + iommu->pasid_max = 0x20000; 52 + 53 + order = get_order(sizeof(struct pasid_entry) * iommu->pasid_max); 46 54 pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order); 47 55 if (!pages) { 48 56 pr_warn("IOMMU: %s: Failed to allocate PASID table\n", ··· 61 53 pr_info("%s: Allocated order %d PASID table.\n", iommu->name, order); 62 54 63 55 if (ecap_dis(iommu->ecap)) { 56 + /* Just making it explicit... */ 57 + BUILD_BUG_ON(sizeof(struct pasid_entry) != sizeof(struct pasid_state_entry)); 64 58 pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order); 65 59 if (pages) 66 60 iommu->pasid_state_table = page_address(pages); ··· 78 68 79 69 int intel_svm_free_pasid_tables(struct intel_iommu *iommu) 80 70 { 81 - int order; 82 - 83 - order = ecap_pss(iommu->ecap) + 7 - PAGE_SHIFT; 84 - if (order < 0) 85 - order = 0; 71 + int order = get_order(sizeof(struct pasid_entry) * iommu->pasid_max); 86 72 87 73 if (iommu->pasid_table) { 88 74 free_pages((unsigned long)iommu->pasid_table, order); ··· 377 371 } 378 372 svm->iommu = iommu; 379 373 380 - if (pasid_max > 2 << ecap_pss(iommu->ecap)) 381 - pasid_max = 2 << ecap_pss(iommu->ecap); 374 + if (pasid_max > iommu->pasid_max) 375 + pasid_max = iommu->pasid_max; 382 376 383 377 /* Do not use PASID 0 in caching mode (virtualised IOMMU) */ 384 378 ret = idr_alloc(&iommu->pasid_idr, svm,
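Editorial note: the intel-svm hunk above derives the PASID table capacity as 2^(1+PSS), caps it at 0x20000 entries, and sizes the allocation with `get_order()`. A standalone sketch of that arithmetic; the page size and the 8-byte entry layout are assumptions for illustration:

```c
#include <stdint.h>

#define PAGE_SHIFT 12                    /* assumed 4KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

struct pasid_entry { uint64_t val; };    /* 8-byte entries, as on x86 */

/* Minimal stand-in for the kernel's get_order(): smallest order such
 * that (PAGE_SIZE << order) >= size. */
static int order_for_size(unsigned long size)
{
	unsigned long pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
	int order = 0;

	while ((1UL << order) < pages)
		order++;
	return order;
}

/* Capacity is 2^(1+PSS), capped at 0x20000 entries (1MiB of 8-byte
 * entries) to keep the contiguous allocation reasonable. */
static unsigned int pasid_max_from_pss(unsigned int pss)
{
	unsigned int pasid_max = 2u << pss;

	if (pasid_max > 0x20000)
		pasid_max = 0x20000;
	return pasid_max;
}
```

With the cap in effect, the table allocation is order 8 (256 pages, 1MiB) rather than the 8MiB a large PSS would otherwise demand.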
+3 -1
drivers/isdn/gigaset/ser-gigaset.c
··· 755 755 driver = gigaset_initdriver(GIGASET_MINOR, GIGASET_MINORS, 756 756 GIGASET_MODULENAME, GIGASET_DEVNAME, 757 757 &ops, THIS_MODULE); 758 - if (!driver) 758 + if (!driver) { 759 + rc = -ENOMEM; 759 760 goto error; 761 + } 760 762 761 763 rc = tty_register_ldisc(N_GIGASET_M101, &gigaset_ldisc); 762 764 if (rc != 0) {
+1
drivers/isdn/hisax/hfc4s8s_l1.c
··· 1499 1499 printk(KERN_INFO 1500 1500 "HFC-4S/8S: failed to request address space at 0x%04x\n", 1501 1501 hw->iobase); 1502 + err = -EBUSY; 1502 1503 goto out; 1503 1504 } 1504 1505
+16 -21
drivers/media/tuners/tuner-xc2028.c
··· 281 281 int i;
 282 282 tuner_dbg("%s called\n", __func__);
 283 283 
 284 + /* free allocated f/w string */
 285 + if (priv->fname != firmware_name)
 286 + kfree(priv->fname);
 287 + priv->fname = NULL;
 288 + 
 289 + priv->state = XC2028_NO_FIRMWARE;
 290 + memset(&priv->cur_fw, 0, sizeof(priv->cur_fw));
 291 + 
 284 292 if (!priv->firm)
 285 293 return;
 286 294 
··· 299 291 
 300 292 priv->firm = NULL;
 301 293 priv->firm_size = 0;
 302 - priv->state = XC2028_NO_FIRMWARE;
 303 - 
 304 - memset(&priv->cur_fw, 0, sizeof(priv->cur_fw));
 305 294 
 306 295 static int load_all_firmwares(struct dvb_frontend *fe,
··· 889 884 return 0;
 890 885 
 891 886 fail:
 892 - priv->state = XC2028_NO_FIRMWARE;
 887 + free_firmware(priv);
 893 888 
 894 - memset(&priv->cur_fw, 0, sizeof(priv->cur_fw));
 895 889 if (retry_count < 8) {
 896 890 msleep(50);
 897 891 retry_count++;
··· 1336 1332 mutex_lock(&xc2028_list_mutex);
 1337 1333 
 1338 1334 /* only perform final cleanup if this is the last instance */
 1339 - if (hybrid_tuner_report_instance_count(priv) == 1) {
 1335 + if (hybrid_tuner_report_instance_count(priv) == 1)
 1340 1336 free_firmware(priv);
 1341 - kfree(priv->ctrl.fname);
 1342 - priv->ctrl.fname = NULL;
 1343 - }
 1344 1337 
 1345 1338 if (priv)
 1346 1339 hybrid_tuner_release_state(priv);
··· 1400 1399 
 1401 1400 /*
 1402 1401 * Copy the config data.
 1403 - * For the firmware name, keep a local copy of the string,
 1404 - * in order to avoid troubles during device release.
 1405 1402 */
 1406 - kfree(priv->ctrl.fname);
 1407 - priv->ctrl.fname = NULL;
 1408 1403 memcpy(&priv->ctrl, p, sizeof(priv->ctrl));
 1409 - if (p->fname) {
 1410 - priv->ctrl.fname = kstrdup(p->fname, GFP_KERNEL);
 1411 - if (priv->ctrl.fname == NULL) {
 1412 - rc = -ENOMEM;
 1413 - goto unlock;
 1414 - }
 1415 - }
 1416 1404 
 1417 1405 /*
 1418 1406 * If firmware name changed, frees firmware. As free_firmware will
··· 1416 1426 
 1417 1427 if (priv->state == XC2028_NO_FIRMWARE) {
 1418 1428 if (!firmware_name[0])
 1419 - priv->fname = priv->ctrl.fname;
 1429 + priv->fname = kstrdup(p->fname, GFP_KERNEL);
 1420 1430 else
 1421 1431 priv->fname = firmware_name;
 1432 + 
 1433 + if (!priv->fname) {
 1434 + rc = -ENOMEM;
 1435 + goto unlock;
 1436 + }
 1422 1437 
 1423 1438 rc = request_firmware_nowait(THIS_MODULE, 1,
 1424 1439 priv->fname,
+3 -1
drivers/mfd/syscon.c
··· 73 73 /* Parse the device's DT node for an endianness specification */ 74 74 if (of_property_read_bool(np, "big-endian")) 75 75 syscon_config.val_format_endian = REGMAP_ENDIAN_BIG; 76 - else if (of_property_read_bool(np, "little-endian")) 76 + else if (of_property_read_bool(np, "little-endian")) 77 77 syscon_config.val_format_endian = REGMAP_ENDIAN_LITTLE; 78 + else if (of_property_read_bool(np, "native-endian")) 79 + syscon_config.val_format_endian = REGMAP_ENDIAN_NATIVE; 78 80 79 81 /* 80 82 * search for reg-io-width property in DT. If it is not provided,
+12 -4
drivers/mfd/wm8994-core.c
··· 393 393 BUG(); 394 394 goto err; 395 395 } 396 - 397 - ret = devm_regulator_bulk_get(wm8994->dev, wm8994->num_supplies, 396 + 397 + /* 398 + * Can't use devres helper here as some of the supplies are provided by 399 + * wm8994->dev's children (regulators) and those regulators are 400 + * unregistered by the devres core before the supplies are freed. 401 + */ 402 + ret = regulator_bulk_get(wm8994->dev, wm8994->num_supplies, 398 403 wm8994->supplies); 399 404 if (ret != 0) { 400 405 dev_err(wm8994->dev, "Failed to get supplies: %d\n", ret); ··· 410 405 wm8994->supplies); 411 406 if (ret != 0) { 412 407 dev_err(wm8994->dev, "Failed to enable supplies: %d\n", ret); 413 - goto err; 408 + goto err_regulator_free; 414 409 } 415 410 416 411 ret = wm8994_reg_read(wm8994, WM8994_SOFTWARE_RESET); ··· 601 596 err_enable: 602 597 regulator_bulk_disable(wm8994->num_supplies, 603 598 wm8994->supplies); 599 + err_regulator_free: 600 + regulator_bulk_free(wm8994->num_supplies, wm8994->supplies); 604 601 err: 605 602 mfd_remove_devices(wm8994->dev); 606 603 return ret; ··· 611 604 static void wm8994_device_exit(struct wm8994 *wm8994) 612 605 { 613 606 pm_runtime_disable(wm8994->dev); 614 - mfd_remove_devices(wm8994->dev); 615 607 wm8994_irq_exit(wm8994); 616 608 regulator_bulk_disable(wm8994->num_supplies, 617 609 wm8994->supplies); 610 + regulator_bulk_free(wm8994->num_supplies, wm8994->supplies); 611 + mfd_remove_devices(wm8994->dev); 618 612 } 619 613 620 614 static const struct of_device_id wm8994_of_match[] = {
+1
drivers/mmc/host/dw_mmc.c
··· 1058 1058 spin_unlock_irqrestore(&host->irq_lock, irqflags); 1059 1059 1060 1060 if (host->dma_ops->start(host, sg_len)) { 1061 + host->dma_ops->stop(host); 1061 1062 /* We can't do DMA, try PIO for this one */ 1062 1063 dev_dbg(host->dev, 1063 1064 "%s: fall back to PIO mode for current transfer\n",
+14
drivers/mmc/host/sdhci-of-esdhc.c
··· 66 66 return ret; 67 67 } 68 68 } 69 + /* 70 + * The DAT[3:0] line signal levels and the CMD line signal level are 71 + * not compatible with standard SDHC register. The line signal levels 72 + * DAT[7:0] are at bits 31:24 and the command line signal level is at 73 + * bit 23. All other bits are the same as in the standard SDHC 74 + * register. 75 + */ 76 + if (spec_reg == SDHCI_PRESENT_STATE) { 77 + ret = value & 0x000fffff; 78 + ret |= (value >> 4) & SDHCI_DATA_LVL_MASK; 79 + ret |= (value << 1) & SDHCI_CMD_LVL; 80 + return ret; 81 + } 82 + 69 83 ret = value; 70 84 return ret; 71 85 }
+1
drivers/mmc/host/sdhci.h
··· 73 73 #define SDHCI_DATA_LVL_MASK 0x00F00000 74 74 #define SDHCI_DATA_LVL_SHIFT 20 75 75 #define SDHCI_DATA_0_LVL_MASK 0x00100000 76 + #define SDHCI_CMD_LVL 0x01000000 76 77 77 78 #define SDHCI_HOST_CONTROL 0x28 78 79 #define SDHCI_CTRL_LED 0x01
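Editorial note: the eSDHC comment above describes a non-standard present-state layout, with DAT[7:0] line levels at bits 31:24 and the CMD level at bit 23. The bit remapping in that hunk can be exercised standalone, using the mask values from the two hunks above:

```c
#include <stdint.h>

/* Constants from the hunks above. */
#define SDHCI_DATA_LVL_MASK 0x00F00000u  /* standard DAT[3:0] levels, bits 23:20 */
#define SDHCI_CMD_LVL       0x01000000u  /* standard CMD level, bit 24 */

/* Remap eSDHC's layout (DAT[7:0] at bits 31:24, CMD at bit 23) to the
 * standard SDHC layout, mirroring the SDHCI_PRESENT_STATE branch above. */
static uint32_t esdhc_present_state(uint32_t value)
{
	uint32_t ret = value & 0x000fffffu;         /* bits shared by both layouts */

	ret |= (value >> 4) & SDHCI_DATA_LVL_MASK;  /* DAT[3:0]: bits 27:24 -> 23:20 */
	ret |= (value << 1) & SDHCI_CMD_LVL;        /* CMD: bit 23 -> bit 24 */
	return ret;
}
```

The low 20 bits pass through unchanged; only the line-level fields move.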
+29 -8
drivers/net/can/usb/peak_usb/pcan_ucan.h
··· 43 43 u16 args[3];
 44 44 };
 45 45 
 46 + #define PUCAN_TSLOW_BRP_BITS 10
 47 + #define PUCAN_TSLOW_TSGEG1_BITS 8
 48 + #define PUCAN_TSLOW_TSGEG2_BITS 7
 49 + #define PUCAN_TSLOW_SJW_BITS 7
 50 + 
 51 + #define PUCAN_TSLOW_BRP_MASK ((1 << PUCAN_TSLOW_BRP_BITS) - 1)
 52 + #define PUCAN_TSLOW_TSEG1_MASK ((1 << PUCAN_TSLOW_TSGEG1_BITS) - 1)
 53 + #define PUCAN_TSLOW_TSEG2_MASK ((1 << PUCAN_TSLOW_TSGEG2_BITS) - 1)
 54 + #define PUCAN_TSLOW_SJW_MASK ((1 << PUCAN_TSLOW_SJW_BITS) - 1)
 55 + 
 46 56 /* uCAN TIMING_SLOW command fields */
 47 - #define PUCAN_TSLOW_SJW_T(s, t) (((s) & 0xf) | ((!!(t)) << 7))
 48 - #define PUCAN_TSLOW_TSEG2(t) ((t) & 0xf)
 49 - #define PUCAN_TSLOW_TSEG1(t) ((t) & 0x3f)
 50 - #define PUCAN_TSLOW_BRP(b) ((b) & 0x3ff)
 57 + #define PUCAN_TSLOW_SJW_T(s, t) (((s) & PUCAN_TSLOW_SJW_MASK) | \
 58 + ((!!(t)) << 7))
 59 + #define PUCAN_TSLOW_TSEG2(t) ((t) & PUCAN_TSLOW_TSEG2_MASK)
 60 + #define PUCAN_TSLOW_TSEG1(t) ((t) & PUCAN_TSLOW_TSEG1_MASK)
 61 + #define PUCAN_TSLOW_BRP(b) ((b) & PUCAN_TSLOW_BRP_MASK)
 51 62 
 52 63 struct __packed pucan_timing_slow {
 53 64 __le16 opcode_channel;
··· 71 60 __le16 brp; /* BaudRate Prescaler */
 72 61 };
 73 62 
 63 + #define PUCAN_TFAST_BRP_BITS 10
 64 + #define PUCAN_TFAST_TSGEG1_BITS 5
 65 + #define PUCAN_TFAST_TSGEG2_BITS 4
 66 + #define PUCAN_TFAST_SJW_BITS 4
 67 + 
 68 + #define PUCAN_TFAST_BRP_MASK ((1 << PUCAN_TFAST_BRP_BITS) - 1)
 69 + #define PUCAN_TFAST_TSEG1_MASK ((1 << PUCAN_TFAST_TSGEG1_BITS) - 1)
 70 + #define PUCAN_TFAST_TSEG2_MASK ((1 << PUCAN_TFAST_TSGEG2_BITS) - 1)
 71 + #define PUCAN_TFAST_SJW_MASK ((1 << PUCAN_TFAST_SJW_BITS) - 1)
 72 + 
 74 73 /* uCAN TIMING_FAST command fields */
 75 - #define PUCAN_TFAST_SJW(s) ((s) & 0x3)
 76 - #define PUCAN_TFAST_TSEG2(t) ((t) & 0x7)
 77 - #define PUCAN_TFAST_TSEG1(t) ((t) & 0xf)
 78 - #define PUCAN_TFAST_BRP(b) ((b) & 0x3ff)
 74 + #define PUCAN_TFAST_SJW(s) ((s) & PUCAN_TFAST_SJW_MASK)
 75 + #define PUCAN_TFAST_TSEG2(t) ((t) & PUCAN_TFAST_TSEG2_MASK)
 76 + #define PUCAN_TFAST_TSEG1(t) ((t) & PUCAN_TFAST_TSEG1_MASK)
 77 + #define PUCAN_TFAST_BRP(b) ((b) & PUCAN_TFAST_BRP_MASK)
 79 78 
 80 79 struct __packed pucan_timing_fast {
 81 80 __le16 opcode_channel;
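Editorial note: the pcan_ucan.h hunk above swaps literal masks (0x3ff, 0xf, ...) for masks derived from field-width constants. The pattern, generalized; the helper names here are illustrative and not part of the patch:

```c
#include <stdint.h>

/* Field widths copied from the hunk above. */
#define PUCAN_TSLOW_BRP_BITS 10
#define PUCAN_TFAST_SJW_BITS 4

/* Generic ((1 << bits) - 1) mask, the pattern behind the new *_MASK
 * macros (this helper itself is illustrative, not in the patch). */
#define FIELD_MASK(bits) ((1u << (bits)) - 1u)

/* Truncate a value to a field's width before packing it. */
static uint32_t field_trunc(uint32_t value, unsigned int bits)
{
	return value & FIELD_MASK(bits);
}
```

Deriving masks from the width constants keeps them in sync with the bittiming limits computed later as `(1 << *_BITS)`.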
+6 -2
drivers/net/can/usb/peak_usb/pcan_usb_core.c
··· 39 39 {USB_DEVICE(PCAN_USB_VENDOR_ID, PCAN_USBPRO_PRODUCT_ID)}, 40 40 {USB_DEVICE(PCAN_USB_VENDOR_ID, PCAN_USBFD_PRODUCT_ID)}, 41 41 {USB_DEVICE(PCAN_USB_VENDOR_ID, PCAN_USBPROFD_PRODUCT_ID)}, 42 + {USB_DEVICE(PCAN_USB_VENDOR_ID, PCAN_USBX6_PRODUCT_ID)}, 42 43 {} /* Terminating entry */ 43 44 }; 44 45 ··· 51 50 &pcan_usb_pro, 52 51 &pcan_usb_fd, 53 52 &pcan_usb_pro_fd, 53 + &pcan_usb_x6, 54 54 }; 55 55 56 56 /* ··· 870 868 static void peak_usb_disconnect(struct usb_interface *intf) 871 869 { 872 870 struct peak_usb_device *dev; 871 + struct peak_usb_device *dev_prev_siblings; 873 872 874 873 /* unregister as many netdev devices as siblings */ 875 - for (dev = usb_get_intfdata(intf); dev; dev = dev->prev_siblings) { 874 + for (dev = usb_get_intfdata(intf); dev; dev = dev_prev_siblings) { 876 875 struct net_device *netdev = dev->netdev; 877 876 char name[IFNAMSIZ]; 878 877 878 + dev_prev_siblings = dev->prev_siblings; 879 879 dev->state &= ~PCAN_USB_STATE_CONNECTED; 880 880 strncpy(name, netdev->name, IFNAMSIZ); 881 881 882 882 unregister_netdev(netdev); 883 - free_candev(netdev); 884 883 885 884 kfree(dev->cmd_buf); 886 885 dev->next_siblings = NULL; 887 886 if (dev->adapter->dev_free) 888 887 dev->adapter->dev_free(dev); 889 888 889 + free_candev(netdev); 890 890 dev_info(&intf->dev, "%s removed\n", name); 891 891 } 892 892
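Editorial note: the `peak_usb_disconnect()` fix above captures `dev->prev_siblings` before the node may be freed, so the loop never reads a link out of freed memory. The same pattern on a minimal singly linked list:

```c
#include <stdlib.h>

/* Minimal stand-in for the sibling chain walked in peak_usb_disconnect(). */
struct node {
	struct node *prev_siblings;
	int payload;
};

/* Free every node in the chain. The successor pointer is captured
 * *before* free(); reading n->prev_siblings after free(n) would be
 * exactly the use-after-free the hunk above removes. */
static int free_chain(struct node *head)
{
	struct node *n = head;
	struct node *prev;
	int freed = 0;

	while (n) {
		prev = n->prev_siblings;  /* save the link first */
		free(n);
		n = prev;
		freed++;
	}
	return freed;
}
```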
+2
drivers/net/can/usb/peak_usb/pcan_usb_core.h
··· 27 27 #define PCAN_USBPRO_PRODUCT_ID 0x000d 28 28 #define PCAN_USBPROFD_PRODUCT_ID 0x0011 29 29 #define PCAN_USBFD_PRODUCT_ID 0x0012 30 + #define PCAN_USBX6_PRODUCT_ID 0x0014 30 31 31 32 #define PCAN_USB_DRIVER_NAME "peak_usb" 32 33 ··· 91 90 extern const struct peak_usb_adapter pcan_usb_pro; 92 91 extern const struct peak_usb_adapter pcan_usb_fd; 93 92 extern const struct peak_usb_adapter pcan_usb_pro_fd; 93 + extern const struct peak_usb_adapter pcan_usb_x6; 94 94 95 95 struct peak_time_ref { 96 96 struct timeval tv_host_0, tv_host;
+88 -16
drivers/net/can/usb/peak_usb/pcan_usb_fd.c
··· 993 993 static const struct can_bittiming_const pcan_usb_fd_const = {
 994 994 .name = "pcan_usb_fd",
 995 995 .tseg1_min = 1,
 996 - .tseg1_max = 64,
 996 + .tseg1_max = (1 << PUCAN_TSLOW_TSGEG1_BITS),
 997 997 .tseg2_min = 1,
 998 - .tseg2_max = 16,
 999 - .sjw_max = 16,
 998 + .tseg2_max = (1 << PUCAN_TSLOW_TSGEG2_BITS),
 999 + .sjw_max = (1 << PUCAN_TSLOW_SJW_BITS),
 1000 1000 .brp_min = 1,
 1001 - .brp_max = 1024,
 1001 + .brp_max = (1 << PUCAN_TSLOW_BRP_BITS),
 1002 1002 .brp_inc = 1,
 1003 1003 };
 1004 1004 
 1005 1005 static const struct can_bittiming_const pcan_usb_fd_data_const = {
 1006 1006 .name = "pcan_usb_fd",
 1007 1007 .tseg1_min = 1,
 1008 - .tseg1_max = 16,
 1008 + .tseg1_max = (1 << PUCAN_TFAST_TSGEG1_BITS),
 1009 1009 .tseg2_min = 1,
 1010 - .tseg2_max = 8,
 1011 - .sjw_max = 4,
 1010 + .tseg2_max = (1 << PUCAN_TFAST_TSGEG2_BITS),
 1011 + .sjw_max = (1 << PUCAN_TFAST_SJW_BITS),
 1012 1012 .brp_min = 1,
 1013 - .brp_max = 1024,
 1013 + .brp_max = (1 << PUCAN_TFAST_BRP_BITS),
 1014 1014 .brp_inc = 1,
 1015 1015 };
 1016 1016 
··· 1065 1065 static const struct can_bittiming_const pcan_usb_pro_fd_const = {
 1066 1066 .name = "pcan_usb_pro_fd",
 1067 1067 .tseg1_min = 1,
 1068 - .tseg1_max = 64,
 1068 + .tseg1_max = (1 << PUCAN_TSLOW_TSGEG1_BITS),
 1069 1069 .tseg2_min = 1,
 1070 - .tseg2_max = 16,
 1071 - .sjw_max = 16,
 1070 + .tseg2_max = (1 << PUCAN_TSLOW_TSGEG2_BITS),
 1071 + .sjw_max = (1 << PUCAN_TSLOW_SJW_BITS),
 1072 1072 .brp_min = 1,
 1073 - .brp_max = 1024,
 1073 + .brp_max = (1 << PUCAN_TSLOW_BRP_BITS),
 1074 1074 .brp_inc = 1,
 1075 1075 };
 1076 1076 
 1077 1077 static const struct can_bittiming_const pcan_usb_pro_fd_data_const = {
 1078 1078 .name = "pcan_usb_pro_fd",
 1079 1079 .tseg1_min = 1,
 1080 - .tseg1_max = 16,
 1080 + .tseg1_max = (1 << PUCAN_TFAST_TSGEG1_BITS),
 1081 1081 .tseg2_min = 1,
 1082 - .tseg2_max = 8,
 1083 - .sjw_max = 4,
 1082 + .tseg2_max = (1 << PUCAN_TFAST_TSGEG2_BITS),
 1083 + .sjw_max = (1 << PUCAN_TFAST_SJW_BITS),
 1084 1084 .brp_min = 1,
 1085 - .brp_max = 1024,
 1085 + .brp_max = (1 << PUCAN_TFAST_BRP_BITS),
 1086 1086 .brp_inc = 1,
 1087 1087 };
 1088 1088 
··· 1097 1097 },
 1098 1098 .bittiming_const = &pcan_usb_pro_fd_const,
 1099 1099 .data_bittiming_const = &pcan_usb_pro_fd_data_const,
 1100 + 
 1101 + /* size of device private data */
 1102 + .sizeof_dev_private = sizeof(struct pcan_usb_fd_device),
 1103 + 
 1104 + /* timestamps usage */
 1105 + .ts_used_bits = 32,
 1106 + .ts_period = 1000000, /* calibration period in ts. */
 1107 + .us_per_ts_scale = 1, /* us = (ts * scale) >> shift */
 1108 + .us_per_ts_shift = 0,
 1109 + 
 1110 + /* give here messages in/out endpoints */
 1111 + .ep_msg_in = PCAN_USBPRO_EP_MSGIN,
 1112 + .ep_msg_out = {PCAN_USBPRO_EP_MSGOUT_0, PCAN_USBPRO_EP_MSGOUT_1},
 1113 + 
 1114 + /* size of rx/tx usb buffers */
 1115 + .rx_buffer_size = PCAN_UFD_RX_BUFFER_SIZE,
 1116 + .tx_buffer_size = PCAN_UFD_TX_BUFFER_SIZE,
 1117 + 
 1118 + /* device callbacks */
 1119 + .intf_probe = pcan_usb_pro_probe, /* same as PCAN-USB Pro */
 1120 + .dev_init = pcan_usb_fd_init,
 1121 + 
 1122 + .dev_exit = pcan_usb_fd_exit,
 1123 + .dev_free = pcan_usb_fd_free,
 1124 + .dev_set_bus = pcan_usb_fd_set_bus,
 1125 + .dev_set_bittiming = pcan_usb_fd_set_bittiming_slow,
 1126 + .dev_set_data_bittiming = pcan_usb_fd_set_bittiming_fast,
 1127 + .dev_decode_buf = pcan_usb_fd_decode_buf,
 1128 + .dev_start = pcan_usb_fd_start,
 1129 + .dev_stop = pcan_usb_fd_stop,
 1130 + .dev_restart_async = pcan_usb_fd_restart_async,
 1131 + .dev_encode_msg = pcan_usb_fd_encode_msg,
 1132 + 
 1133 + .do_get_berr_counter = pcan_usb_fd_get_berr_counter,
 1134 + };
 1135 + 
 1136 + /* describes the PCAN-USB X6 adapter */
 1137 + static const struct can_bittiming_const pcan_usb_x6_const = {
 1138 + .name = "pcan_usb_x6",
 1139 + .tseg1_min = 1,
 1140 + .tseg1_max = (1 << PUCAN_TSLOW_TSGEG1_BITS),
 1141 + .tseg2_min = 1,
 1142 + .tseg2_max = (1 << PUCAN_TSLOW_TSGEG2_BITS),
 1143 + .sjw_max = (1 << PUCAN_TSLOW_SJW_BITS),
 1144 + .brp_min = 1,
 1145 + .brp_max = (1 << PUCAN_TSLOW_BRP_BITS),
 1146 + .brp_inc = 1,
 1147 + };
 1148 + 
 1149 + static const struct can_bittiming_const pcan_usb_x6_data_const = {
 1150 + .name = "pcan_usb_x6",
 1151 + .tseg1_min = 1,
 1152 + .tseg1_max = (1 << PUCAN_TFAST_TSGEG1_BITS),
 1153 + .tseg2_min = 1,
 1154 + .tseg2_max = (1 << PUCAN_TFAST_TSGEG2_BITS),
 1155 + .sjw_max = (1 << PUCAN_TFAST_SJW_BITS),
 1156 + .brp_min = 1,
 1157 + .brp_max = (1 << PUCAN_TFAST_BRP_BITS),
 1158 + .brp_inc = 1,
 1159 + };
 1160 + 
 1161 + const struct peak_usb_adapter pcan_usb_x6 = {
 1162 + .name = "PCAN-USB X6",
 1163 + .device_id = PCAN_USBX6_PRODUCT_ID,
 1164 + .ctrl_count = PCAN_USBPROFD_CHANNEL_COUNT,
 1165 + .ctrlmode_supported = CAN_CTRLMODE_FD |
 1166 + CAN_CTRLMODE_3_SAMPLES | CAN_CTRLMODE_LISTENONLY,
 1167 + .clock = {
 1168 + .freq = PCAN_UFD_CRYSTAL_HZ,
 1169 + },
 1170 + .bittiming_const = &pcan_usb_x6_const,
 1171 + .data_bittiming_const = &pcan_usb_x6_data_const,
 1100 1172 
 1101 1173 /* size of device private data */
 1102 1174 .sizeof_dev_private = sizeof(struct pcan_usb_fd_device),
+4
drivers/net/dsa/bcm_sf2.c
··· 588 588 struct phy_device *phydev) 589 589 { 590 590 struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); 591 + struct ethtool_eee *p = &priv->port_sts[port].eee; 591 592 u32 id_mode_dis = 0, port_mode; 592 593 const char *str = NULL; 593 594 u32 reg; ··· 663 662 reg |= DUPLX_MODE; 664 663 665 664 core_writel(priv, reg, CORE_STS_OVERRIDE_GMIIP_PORT(port)); 665 + 666 + if (!phydev->is_pseudo_fixed_link) 667 + p->eee_enabled = bcm_sf2_eee_init(ds, port, phydev); 666 668 } 667 669 668 670 static void bcm_sf2_sw_fixed_link_update(struct dsa_switch *ds, int port,
+8 -13
drivers/net/ethernet/altera/altera_tse_main.c
··· 400 400 401 401 skb_put(skb, pktlength); 402 402 403 - /* make cache consistent with receive packet buffer */ 404 - dma_sync_single_for_cpu(priv->device, 405 - priv->rx_ring[entry].dma_addr, 406 - priv->rx_ring[entry].len, 407 - DMA_FROM_DEVICE); 408 - 409 403 dma_unmap_single(priv->device, priv->rx_ring[entry].dma_addr, 410 404 priv->rx_ring[entry].len, DMA_FROM_DEVICE); 411 405 ··· 463 469 464 470 if (unlikely(netif_queue_stopped(priv->dev) && 465 471 tse_tx_avail(priv) > TSE_TX_THRESH(priv))) { 466 - netif_tx_lock(priv->dev); 467 472 if (netif_queue_stopped(priv->dev) && 468 473 tse_tx_avail(priv) > TSE_TX_THRESH(priv)) { 469 474 if (netif_msg_tx_done(priv)) ··· 470 477 __func__); 471 478 netif_wake_queue(priv->dev); 472 479 } 473 - netif_tx_unlock(priv->dev); 474 480 } 475 481 476 482 spin_unlock(&priv->tx_lock); ··· 583 591 buffer->skb = skb; 584 592 buffer->dma_addr = dma_addr; 585 593 buffer->len = nopaged_len; 586 - 587 - /* Push data out of the cache hierarchy into main memory */ 588 - dma_sync_single_for_device(priv->device, buffer->dma_addr, 589 - buffer->len, DMA_TO_DEVICE); 590 594 591 595 priv->dmaops->tx_buffer(priv, buffer); 592 596 ··· 807 819 808 820 if (!phydev) { 809 821 netdev_err(dev, "Could not find the PHY\n"); 822 + if (fixed_link) 823 + of_phy_deregister_fixed_link(priv->device->of_node); 810 824 return -ENODEV; 811 825 } 812 826 ··· 1535 1545 static int altera_tse_remove(struct platform_device *pdev) 1536 1546 { 1537 1547 struct net_device *ndev = platform_get_drvdata(pdev); 1548 + struct altera_tse_private *priv = netdev_priv(ndev); 1538 1549 1539 - if (ndev->phydev) 1550 + if (ndev->phydev) { 1540 1551 phy_disconnect(ndev->phydev); 1552 + 1553 + if (of_phy_is_fixed_link(priv->device->of_node)) 1554 + of_phy_deregister_fixed_link(priv->device->of_node); 1555 + } 1541 1556 1542 1557 platform_set_drvdata(pdev, NULL); 1543 1558 altera_tse_mdio_destroy(ndev);
+2 -2
drivers/net/ethernet/amd/xgbe/xgbe-main.c
··· 829 829 return 0; 830 830 } 831 831 832 - #ifdef CONFIG_PM 832 + #ifdef CONFIG_PM_SLEEP 833 833 static int xgbe_suspend(struct device *dev) 834 834 { 835 835 struct net_device *netdev = dev_get_drvdata(dev); ··· 874 874 875 875 return ret; 876 876 } 877 - #endif /* CONFIG_PM */ 877 + #endif /* CONFIG_PM_SLEEP */ 878 878 879 879 #ifdef CONFIG_ACPI 880 880 static const struct acpi_device_id xgbe_acpi_match[] = {
+7 -2
drivers/net/ethernet/aurora/nb8800.c
··· 1466 1466 1467 1467 ret = nb8800_hw_init(dev); 1468 1468 if (ret) 1469 - goto err_free_bus; 1469 + goto err_deregister_fixed_link; 1470 1470 1471 1471 if (ops && ops->init) { 1472 1472 ret = ops->init(dev); 1473 1473 if (ret) 1474 - goto err_free_bus; 1474 + goto err_deregister_fixed_link; 1475 1475 } 1476 1476 1477 1477 dev->netdev_ops = &nb8800_netdev_ops; ··· 1504 1504 1505 1505 err_free_dma: 1506 1506 nb8800_dma_free(dev); 1507 + err_deregister_fixed_link: 1508 + if (of_phy_is_fixed_link(pdev->dev.of_node)) 1509 + of_phy_deregister_fixed_link(pdev->dev.of_node); 1507 1510 err_free_bus: 1508 1511 of_node_put(priv->phy_node); 1509 1512 mdiobus_unregister(bus); ··· 1524 1521 struct nb8800_priv *priv = netdev_priv(ndev); 1525 1522 1526 1523 unregister_netdev(ndev); 1524 + if (of_phy_is_fixed_link(pdev->dev.of_node)) 1525 + of_phy_deregister_fixed_link(pdev->dev.of_node); 1527 1526 of_node_put(priv->phy_node); 1528 1527 1529 1528 mdiobus_unregister(priv->mii_bus);
+12 -5
drivers/net/ethernet/broadcom/bcmsysport.c
··· 1755 1755 if (priv->irq0 <= 0 || priv->irq1 <= 0) { 1756 1756 dev_err(&pdev->dev, "invalid interrupts\n"); 1757 1757 ret = -EINVAL; 1758 - goto err; 1758 + goto err_free_netdev; 1759 1759 } 1760 1760 1761 1761 priv->base = devm_ioremap_resource(&pdev->dev, r); 1762 1762 if (IS_ERR(priv->base)) { 1763 1763 ret = PTR_ERR(priv->base); 1764 - goto err; 1764 + goto err_free_netdev; 1765 1765 } 1766 1766 1767 1767 priv->netdev = dev; ··· 1779 1779 ret = of_phy_register_fixed_link(dn); 1780 1780 if (ret) { 1781 1781 dev_err(&pdev->dev, "failed to register fixed PHY\n"); 1782 - goto err; 1782 + goto err_free_netdev; 1783 1783 } 1784 1784 1785 1785 priv->phy_dn = dn; ··· 1821 1821 ret = register_netdev(dev); 1822 1822 if (ret) { 1823 1823 dev_err(&pdev->dev, "failed to register net_device\n"); 1824 - goto err; 1824 + goto err_deregister_fixed_link; 1825 1825 } 1826 1826 1827 1827 priv->rev = topctrl_readl(priv, REV_CNTL) & REV_MASK; ··· 1832 1832 priv->base, priv->irq0, priv->irq1, txq, rxq); 1833 1833 1834 1834 return 0; 1835 - err: 1835 + 1836 + err_deregister_fixed_link: 1837 + if (of_phy_is_fixed_link(dn)) 1838 + of_phy_deregister_fixed_link(dn); 1839 + err_free_netdev: 1836 1840 free_netdev(dev); 1837 1841 return ret; 1838 1842 } ··· 1844 1840 static int bcm_sysport_remove(struct platform_device *pdev) 1845 1841 { 1846 1842 struct net_device *dev = dev_get_drvdata(&pdev->dev); 1843 + struct device_node *dn = pdev->dev.of_node; 1847 1844 1848 1845 /* Not much to do, ndo_close has been called 1849 1846 * and we use managed allocations 1850 1847 */ 1851 1848 unregister_netdev(dev); 1849 + if (of_phy_is_fixed_link(dn)) 1850 + of_phy_deregister_fixed_link(dn); 1852 1851 free_netdev(dev); 1853 1852 dev_set_drvdata(&pdev->dev, NULL); 1854 1853
+8
drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
··· 1872 1872 1873 1873 ering->rx_max_pending = MAX_RX_AVAIL; 1874 1874 1875 + /* If size isn't already set, we give an estimation of the number 1876 + * of buffers we'll have. We're neglecting some possible conditions 1877 + * [we couldn't know for certain at this point if number of queues 1878 + * might shrink] but the number would be correct for the likely 1879 + * scenario. 1880 + */ 1875 1881 if (bp->rx_ring_size) 1876 1882 ering->rx_pending = bp->rx_ring_size; 1883 + else if (BNX2X_NUM_RX_QUEUES(bp)) 1884 + ering->rx_pending = MAX_RX_AVAIL / BNX2X_NUM_RX_QUEUES(bp); 1877 1885 else 1878 1886 ering->rx_pending = MAX_RX_AVAIL; 1879 1887
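Editorial note: the bnx2x ethtool change above estimates `rx_pending` by spreading the global budget across RX queues when no explicit ring size is configured. A sketch of that decision; the real `MAX_RX_AVAIL` is a driver constant, and 4096 here is only a stand-in:

```c
/* Stand-in for the driver constant; the real MAX_RX_AVAIL is defined
 * elsewhere in bnx2x. */
#define MAX_RX_AVAIL 4096u

/* Mirrors the reporting logic in the hunk above: an explicit ring size
 * wins; otherwise estimate by dividing the budget across RX queues;
 * with no queue count known, report the full budget. */
static unsigned int estimate_rx_pending(unsigned int rx_ring_size,
					unsigned int num_rx_queues)
{
	if (rx_ring_size)
		return rx_ring_size;
	if (num_rx_queues)
		return MAX_RX_AVAIL / num_rx_queues;
	return MAX_RX_AVAIL;
}
```

As the patch comment notes, this is an estimate: the queue count may still shrink later, so the per-queue figure is only correct for the likely configuration.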
+3 -2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 10138 10138 { 10139 10139 struct bnx2x_udp_tunnel *udp_port = &bp->udp_tunnel_ports[type]; 10140 10140 10141 - if (!netif_running(bp->dev) || !IS_PF(bp)) 10141 + if (!netif_running(bp->dev) || !IS_PF(bp) || CHIP_IS_E1x(bp)) 10142 10142 return; 10143 10143 10144 10144 if (udp_port->count && udp_port->dst_port == port) { ··· 10163 10163 { 10164 10164 struct bnx2x_udp_tunnel *udp_port = &bp->udp_tunnel_ports[type]; 10165 10165 10166 - if (!IS_PF(bp)) 10166 + if (!IS_PF(bp) || CHIP_IS_E1x(bp)) 10167 10167 return; 10168 10168 10169 10169 if (!udp_port->count || udp_port->dst_port != port) { ··· 13505 13505 13506 13506 /* Initialize the pointers to the init arrays */ 13507 13507 /* Blob */ 13508 + rc = -ENOMEM; 13508 13509 BNX2X_ALLOC_AND_SET(init_data, request_firmware_exit, be32_to_cpu_n); 13509 13510 13510 13511 /* Opcodes */
+13 -4
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 1811 1811 if (atomic_read(&bp->intr_sem) != 0) 1812 1812 return LL_FLUSH_FAILED; 1813 1813 1814 + if (!bp->link_info.link_up) 1815 + return LL_FLUSH_FAILED; 1816 + 1814 1817 if (!bnxt_lock_poll(bnapi)) 1815 1818 return LL_FLUSH_BUSY; 1816 1819 ··· 3213 3210 goto err_out; 3214 3211 } 3215 3212 3216 - if (tunnel_type & TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN) 3213 + switch (tunnel_type) { 3214 + case TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN: 3217 3215 bp->vxlan_fw_dst_port_id = resp->tunnel_dst_port_id; 3218 - 3219 - else if (tunnel_type & TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_GENEVE) 3216 + break; 3217 + case TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_GENEVE: 3220 3218 bp->nge_fw_dst_port_id = resp->tunnel_dst_port_id; 3219 + break; 3220 + default: 3221 + break; 3222 + } 3223 + 3221 3224 err_out: 3222 3225 mutex_unlock(&bp->hwrm_cmd_lock); 3223 3226 return rc; ··· 4120 4111 bp->grp_info[i].fw_stats_ctx = cpr->hw_stats_ctx_id; 4121 4112 } 4122 4113 mutex_unlock(&bp->hwrm_cmd_lock); 4123 - return 0; 4114 + return rc; 4124 4115 } 4125 4116 4126 4117 static int bnxt_hwrm_func_qcfg(struct bnxt *bp)
+5 -3
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 1172 1172 struct bcmgenet_tx_ring *ring) 1173 1173 { 1174 1174 struct bcmgenet_priv *priv = netdev_priv(dev); 1175 + struct device *kdev = &priv->pdev->dev; 1175 1176 struct enet_cb *tx_cb_ptr; 1176 1177 struct netdev_queue *txq; 1177 1178 unsigned int pkts_compl = 0; ··· 1200 1199 if (tx_cb_ptr->skb) { 1201 1200 pkts_compl++; 1202 1201 bytes_compl += GENET_CB(tx_cb_ptr->skb)->bytes_sent; 1203 - dma_unmap_single(&dev->dev, 1202 + dma_unmap_single(kdev, 1204 1203 dma_unmap_addr(tx_cb_ptr, dma_addr), 1205 1204 dma_unmap_len(tx_cb_ptr, dma_len), 1206 1205 DMA_TO_DEVICE); 1207 1206 bcmgenet_free_cb(tx_cb_ptr); 1208 1207 } else if (dma_unmap_addr(tx_cb_ptr, dma_addr)) { 1209 - dma_unmap_page(&dev->dev, 1208 + dma_unmap_page(kdev, 1210 1209 dma_unmap_addr(tx_cb_ptr, dma_addr), 1211 1210 dma_unmap_len(tx_cb_ptr, dma_len), 1212 1211 DMA_TO_DEVICE); ··· 1776 1775 1777 1776 static void bcmgenet_free_rx_buffers(struct bcmgenet_priv *priv) 1778 1777 { 1778 + struct device *kdev = &priv->pdev->dev; 1779 1779 struct enet_cb *cb; 1780 1780 int i; 1781 1781 ··· 1784 1782 cb = &priv->rx_cbs[i]; 1785 1783 1786 1784 if (dma_unmap_addr(cb, dma_addr)) { 1787 - dma_unmap_single(&priv->dev->dev, 1785 + dma_unmap_single(kdev, 1788 1786 dma_unmap_addr(cb, dma_addr), 1789 1787 priv->rx_buf_len, DMA_FROM_DEVICE); 1790 1788 dma_unmap_addr_set(cb, dma_addr, 0);
+9 -1
drivers/net/ethernet/broadcom/genet/bcmmii.c
··· 542 542 /* Make sure we initialize MoCA PHYs with a link down */ 543 543 if (phy_mode == PHY_INTERFACE_MODE_MOCA) { 544 544 phydev = of_phy_find_device(dn); 545 - if (phydev) 545 + if (phydev) { 546 546 phydev->link = 0; 547 + put_device(&phydev->mdio.dev); 548 + } 547 549 } 548 550 549 551 return 0; ··· 627 625 int bcmgenet_mii_init(struct net_device *dev) 628 626 { 629 627 struct bcmgenet_priv *priv = netdev_priv(dev); 628 + struct device_node *dn = priv->pdev->dev.of_node; 630 629 int ret; 631 630 632 631 ret = bcmgenet_mii_alloc(priv); ··· 641 638 return 0; 642 639 643 640 out: 641 + if (of_phy_is_fixed_link(dn)) 642 + of_phy_deregister_fixed_link(dn); 644 643 of_node_put(priv->phy_dn); 645 644 mdiobus_unregister(priv->mii_bus); 646 645 mdiobus_free(priv->mii_bus); ··· 652 647 void bcmgenet_mii_exit(struct net_device *dev) 653 648 { 654 649 struct bcmgenet_priv *priv = netdev_priv(dev); 650 + struct device_node *dn = priv->pdev->dev.of_node; 655 651 652 + if (of_phy_is_fixed_link(dn)) 653 + of_phy_deregister_fixed_link(dn); 656 654 of_node_put(priv->phy_dn); 657 655 mdiobus_unregister(priv->mii_bus); 658 656 mdiobus_free(priv->mii_bus);
+3 -2
drivers/net/ethernet/cadence/macb.c
··· 975 975 addr += bp->rx_buffer_size; 976 976 } 977 977 bp->rx_ring[RX_RING_SIZE - 1].addr |= MACB_BIT(RX_WRAP); 978 + bp->rx_tail = 0; 978 979 } 979 980 980 981 static int macb_rx(struct macb *bp, int budget) ··· 1157 1156 if (status & MACB_BIT(RXUBR)) { 1158 1157 ctrl = macb_readl(bp, NCR); 1159 1158 macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE)); 1159 + wmb(); 1160 1160 macb_writel(bp, NCR, ctrl | MACB_BIT(RE)); 1161 1161 1162 1162 if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) ··· 1618 1616 bp->queues[0].tx_head = 0; 1619 1617 bp->queues[0].tx_tail = 0; 1620 1618 bp->queues[0].tx_ring[TX_RING_SIZE - 1].ctrl |= MACB_BIT(TX_WRAP); 1621 - 1622 - bp->rx_tail = 0; 1623 1619 } 1624 1620 1625 1621 static void macb_reset_hw(struct macb *bp) ··· 2770 2770 if (intstatus & MACB_BIT(RXUBR)) { 2771 2771 ctl = macb_readl(lp, NCR); 2772 2772 macb_writel(lp, NCR, ctl & ~MACB_BIT(RE)); 2773 + wmb(); 2773 2774 macb_writel(lp, NCR, ctl | MACB_BIT(RE)); 2774 2775 } 2775 2776
+1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 4931 4931 */ 4932 4932 for_each_port(adapter, i) { 4933 4933 pi = adap2pinfo(adapter, i); 4934 + adapter->port[i]->dev_port = pi->lport; 4934 4935 netif_set_real_num_tx_queues(adapter->port[i], pi->nqsets); 4935 4936 netif_set_real_num_rx_queues(adapter->port[i], pi->nqsets); 4936 4937
-1
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 7851 7851 return ret; 7852 7852 7853 7853 memcpy(adap->port[i]->dev_addr, addr, ETH_ALEN); 7854 - adap->port[i]->dev_port = j; 7855 7854 j++; 7856 7855 } 7857 7856 return 0;
+1
drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h
··· 168 168 CH_PCI_ID_TABLE_FENTRY(0x509a), /* Custom T520-CR */ 169 169 CH_PCI_ID_TABLE_FENTRY(0x509b), /* Custom T540-CR LOM */ 170 170 CH_PCI_ID_TABLE_FENTRY(0x509c), /* Custom T520-CR*/ 171 + CH_PCI_ID_TABLE_FENTRY(0x509d), /* Custom T540-CR*/ 171 172 172 173 /* T6 adapters: 173 174 */
+1
drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
··· 2969 2969 2970 2970 netdev->netdev_ops = &cxgb4vf_netdev_ops; 2971 2971 netdev->ethtool_ops = &cxgb4vf_ethtool_ops; 2972 + netdev->dev_port = pi->port_id; 2972 2973 2973 2974 /* 2974 2975 * Initialize the hardware/software state for the port.
+4
drivers/net/ethernet/cirrus/ep93xx_eth.c
··· 468 468 struct device *dev = ep->dev->dev.parent; 469 469 int i; 470 470 471 + if (!ep->descs) 472 + return; 473 + 471 474 for (i = 0; i < RX_QUEUE_ENTRIES; i++) { 472 475 dma_addr_t d; 473 476 ··· 493 490 494 491 dma_free_coherent(dev, sizeof(struct ep93xx_descs), ep->descs, 495 492 ep->descs_dma_addr); 493 + ep->descs = NULL; 496 494 } 497 495 498 496 static int ep93xx_alloc_buffers(struct ep93xx_priv *ep)
+2 -1
drivers/net/ethernet/emulex/benet/be_cmds.c
··· 90 90 { 91 91 OPCODE_COMMON_SET_HSW_CONFIG, 92 92 CMD_SUBSYSTEM_COMMON, 93 - BE_PRIV_DEVCFG | BE_PRIV_VHADM 93 + BE_PRIV_DEVCFG | BE_PRIV_VHADM | 94 + BE_PRIV_DEVSEC 94 95 }, 95 96 { 96 97 OPCODE_COMMON_GET_EXT_FAT_CAPABILITIES,
+2
drivers/net/ethernet/freescale/fec.h
··· 574 574 unsigned int reload_period; 575 575 int pps_enable; 576 576 unsigned int next_counter; 577 + 578 + u64 ethtool_stats[0]; 577 579 }; 578 580 579 581 void fec_ptp_init(struct platform_device *pdev);
+32 -5
drivers/net/ethernet/freescale/fec_main.c
··· 2313 2313 { "IEEE_rx_octets_ok", IEEE_R_OCTETS_OK }, 2314 2314 }; 2315 2315 2316 - static void fec_enet_get_ethtool_stats(struct net_device *dev, 2317 - struct ethtool_stats *stats, u64 *data) 2316 + #define FEC_STATS_SIZE (ARRAY_SIZE(fec_stats) * sizeof(u64)) 2317 + 2318 + static void fec_enet_update_ethtool_stats(struct net_device *dev) 2318 2319 { 2319 2320 struct fec_enet_private *fep = netdev_priv(dev); 2320 2321 int i; 2321 2322 2322 2323 for (i = 0; i < ARRAY_SIZE(fec_stats); i++) 2323 - data[i] = readl(fep->hwp + fec_stats[i].offset); 2324 + fep->ethtool_stats[i] = readl(fep->hwp + fec_stats[i].offset); 2325 + } 2326 + 2327 + static void fec_enet_get_ethtool_stats(struct net_device *dev, 2328 + struct ethtool_stats *stats, u64 *data) 2329 + { 2330 + struct fec_enet_private *fep = netdev_priv(dev); 2331 + 2332 + if (netif_running(dev)) 2333 + fec_enet_update_ethtool_stats(dev); 2334 + 2335 + memcpy(data, fep->ethtool_stats, FEC_STATS_SIZE); 2324 2336 } 2325 2337 2326 2338 static void fec_enet_get_strings(struct net_device *netdev, ··· 2356 2344 default: 2357 2345 return -EOPNOTSUPP; 2358 2346 } 2347 + } 2348 + 2349 + #else /* !defined(CONFIG_M5272) */ 2350 + #define FEC_STATS_SIZE 0 2351 + static inline void fec_enet_update_ethtool_stats(struct net_device *dev) 2352 + { 2359 2353 } 2360 2354 #endif /* !defined(CONFIG_M5272) */ 2361 2355 ··· 2892 2874 if (fep->quirks & FEC_QUIRK_ERR006687) 2893 2875 imx6q_cpuidle_fec_irqs_unused(); 2894 2876 2877 + fec_enet_update_ethtool_stats(ndev); 2878 + 2895 2879 fec_enet_clk_enable(ndev, false); 2896 2880 pinctrl_pm_select_sleep_state(&fep->pdev->dev); 2897 2881 pm_runtime_mark_last_busy(&fep->pdev->dev); ··· 3200 3180 3201 3181 fec_restart(ndev); 3202 3182 3183 + fec_enet_update_ethtool_stats(ndev); 3184 + 3203 3185 return 0; 3204 3186 } 3205 3187 ··· 3300 3278 fec_enet_get_queue_num(pdev, &num_tx_qs, &num_rx_qs); 3301 3279 3302 3280 /* Init network device */ 3303 - ndev = alloc_etherdev_mqs(sizeof(struct 
fec_enet_private), 3304 - num_tx_qs, num_rx_qs); 3281 + ndev = alloc_etherdev_mqs(sizeof(struct fec_enet_private) + 3282 + FEC_STATS_SIZE, num_tx_qs, num_rx_qs); 3305 3283 if (!ndev) 3306 3284 return -ENOMEM; 3307 3285 ··· 3497 3475 failed_clk_ipg: 3498 3476 fec_enet_clk_enable(ndev, false); 3499 3477 failed_clk: 3478 + if (of_phy_is_fixed_link(np)) 3479 + of_phy_deregister_fixed_link(np); 3500 3480 failed_phy: 3501 3481 of_node_put(phy_node); 3502 3482 failed_ioremap: ··· 3512 3488 { 3513 3489 struct net_device *ndev = platform_get_drvdata(pdev); 3514 3490 struct fec_enet_private *fep = netdev_priv(ndev); 3491 + struct device_node *np = pdev->dev.of_node; 3515 3492 3516 3493 cancel_work_sync(&fep->tx_timeout_work); 3517 3494 fec_ptp_stop(pdev); ··· 3520 3495 fec_enet_mii_remove(fep); 3521 3496 if (fep->reg_phy) 3522 3497 regulator_disable(fep->reg_phy); 3498 + if (of_phy_is_fixed_link(np)) 3499 + of_phy_deregister_fixed_link(np); 3523 3500 of_node_put(fep->phy_node); 3524 3501 free_netdev(ndev); 3525 3502
+3
drivers/net/ethernet/freescale/fman/fman_memac.c
··· 1107 1107 { 1108 1108 free_init_resources(memac); 1109 1109 1110 + if (memac->pcsphy) 1111 + put_device(&memac->pcsphy->mdio.dev); 1112 + 1110 1113 kfree(memac->memac_drv_param); 1111 1114 kfree(memac); 1112 1115
-3
drivers/net/ethernet/freescale/fman/fman_tgec.c
··· 722 722 { 723 723 free_init_resources(tgec); 724 724 725 - if (tgec->cfg) 726 - tgec->cfg = NULL; 727 - 728 725 kfree(tgec->cfg); 729 726 kfree(tgec); 730 727
+2
drivers/net/ethernet/freescale/fman/mac.c
··· 892 892 priv->fixed_link->duplex = phy->duplex; 893 893 priv->fixed_link->pause = phy->pause; 894 894 priv->fixed_link->asym_pause = phy->asym_pause; 895 + 896 + put_device(&phy->mdio.dev); 895 897 } 896 898 897 899 err = mac_dev->init(mac_dev);
+6 -1
drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
··· 980 980 err = clk_prepare_enable(clk); 981 981 if (err) { 982 982 ret = err; 983 - goto out_free_fpi; 983 + goto out_deregister_fixed_link; 984 984 } 985 985 fpi->clk_per = clk; 986 986 } ··· 1061 1061 of_node_put(fpi->phy_node); 1062 1062 if (fpi->clk_per) 1063 1063 clk_disable_unprepare(fpi->clk_per); 1064 + out_deregister_fixed_link: 1065 + if (of_phy_is_fixed_link(ofdev->dev.of_node)) 1066 + of_phy_deregister_fixed_link(ofdev->dev.of_node); 1064 1067 out_free_fpi: 1065 1068 kfree(fpi); 1066 1069 return ret; ··· 1082 1079 of_node_put(fep->fpi->phy_node); 1083 1080 if (fep->fpi->clk_per) 1084 1081 clk_disable_unprepare(fep->fpi->clk_per); 1082 + if (of_phy_is_fixed_link(ofdev->dev.of_node)) 1083 + of_phy_deregister_fixed_link(ofdev->dev.of_node); 1085 1084 free_netdev(ndev); 1086 1085 return 0; 1087 1086 }
+8
drivers/net/ethernet/freescale/gianfar.c
··· 1312 1312 */ 1313 1313 static int gfar_probe(struct platform_device *ofdev) 1314 1314 { 1315 + struct device_node *np = ofdev->dev.of_node; 1315 1316 struct net_device *dev = NULL; 1316 1317 struct gfar_private *priv = NULL; 1317 1318 int err = 0, i; ··· 1463 1462 return 0; 1464 1463 1465 1464 register_fail: 1465 + if (of_phy_is_fixed_link(np)) 1466 + of_phy_deregister_fixed_link(np); 1466 1467 unmap_group_regs(priv); 1467 1468 gfar_free_rx_queues(priv); 1468 1469 gfar_free_tx_queues(priv); ··· 1477 1474 static int gfar_remove(struct platform_device *ofdev) 1478 1475 { 1479 1476 struct gfar_private *priv = platform_get_drvdata(ofdev); 1477 + struct device_node *np = ofdev->dev.of_node; 1480 1478 1481 1479 of_node_put(priv->phy_node); 1482 1480 of_node_put(priv->tbi_node); 1483 1481 1484 1482 unregister_netdev(priv->ndev); 1483 + 1484 + if (of_phy_is_fixed_link(np)) 1485 + of_phy_deregister_fixed_link(np); 1486 + 1485 1487 unmap_group_regs(priv); 1486 1488 gfar_free_rx_queues(priv); 1487 1489 gfar_free_tx_queues(priv);
+16 -7
drivers/net/ethernet/freescale/ucc_geth.c
··· 3868 3868 dev = alloc_etherdev(sizeof(*ugeth)); 3869 3869 3870 3870 if (dev == NULL) { 3871 - of_node_put(ug_info->tbi_node); 3872 - of_node_put(ug_info->phy_node); 3873 - return -ENOMEM; 3871 + err = -ENOMEM; 3872 + goto err_deregister_fixed_link; 3874 3873 } 3875 3874 3876 3875 ugeth = netdev_priv(dev); ··· 3906 3907 if (netif_msg_probe(ugeth)) 3907 3908 pr_err("%s: Cannot register net device, aborting\n", 3908 3909 dev->name); 3909 - free_netdev(dev); 3910 - of_node_put(ug_info->tbi_node); 3911 - of_node_put(ug_info->phy_node); 3912 - return err; 3910 + goto err_free_netdev; 3913 3911 } 3914 3912 3915 3913 mac_addr = of_get_mac_address(np); ··· 3919 3923 ugeth->node = np; 3920 3924 3921 3925 return 0; 3926 + 3927 + err_free_netdev: 3928 + free_netdev(dev); 3929 + err_deregister_fixed_link: 3930 + if (of_phy_is_fixed_link(np)) 3931 + of_phy_deregister_fixed_link(np); 3932 + of_node_put(ug_info->tbi_node); 3933 + of_node_put(ug_info->phy_node); 3934 + 3935 + return err; 3922 3936 } 3923 3937 3924 3938 static int ucc_geth_remove(struct platform_device* ofdev) 3925 3939 { 3926 3940 struct net_device *dev = platform_get_drvdata(ofdev); 3927 3941 struct ucc_geth_private *ugeth = netdev_priv(dev); 3942 + struct device_node *np = ofdev->dev.of_node; 3928 3943 3929 3944 unregister_netdev(dev); 3930 3945 free_netdev(dev); 3931 3946 ucc_geth_memclean(ugeth); 3947 + if (of_phy_is_fixed_link(np)) 3948 + of_phy_deregister_fixed_link(np); 3932 3949 of_node_put(ugeth->ug_info->tbi_node); 3933 3950 of_node_put(ugeth->ug_info->phy_node); 3934 3951
+63 -2
drivers/net/ethernet/ibm/ibmveth.c
··· 58 58 59 59 static const char ibmveth_driver_name[] = "ibmveth"; 60 60 static const char ibmveth_driver_string[] = "IBM Power Virtual Ethernet Driver"; 61 - #define ibmveth_driver_version "1.05" 61 + #define ibmveth_driver_version "1.06" 62 62 63 63 MODULE_AUTHOR("Santiago Leon <santil@linux.vnet.ibm.com>"); 64 64 MODULE_DESCRIPTION("IBM Power Virtual Ethernet Driver"); ··· 135 135 static inline int ibmveth_rxq_frame_offset(struct ibmveth_adapter *adapter) 136 136 { 137 137 return ibmveth_rxq_flags(adapter) & IBMVETH_RXQ_OFF_MASK; 138 + } 139 + 140 + static inline int ibmveth_rxq_large_packet(struct ibmveth_adapter *adapter) 141 + { 142 + return ibmveth_rxq_flags(adapter) & IBMVETH_RXQ_LRG_PKT; 138 143 } 139 144 140 145 static inline int ibmveth_rxq_frame_length(struct ibmveth_adapter *adapter) ··· 1179 1174 goto retry_bounce; 1180 1175 } 1181 1176 1177 + static void ibmveth_rx_mss_helper(struct sk_buff *skb, u16 mss, int lrg_pkt) 1178 + { 1179 + int offset = 0; 1180 + 1181 + /* only TCP packets will be aggregated */ 1182 + if (skb->protocol == htons(ETH_P_IP)) { 1183 + struct iphdr *iph = (struct iphdr *)skb->data; 1184 + 1185 + if (iph->protocol == IPPROTO_TCP) { 1186 + offset = iph->ihl * 4; 1187 + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4; 1188 + } else { 1189 + return; 1190 + } 1191 + } else if (skb->protocol == htons(ETH_P_IPV6)) { 1192 + struct ipv6hdr *iph6 = (struct ipv6hdr *)skb->data; 1193 + 1194 + if (iph6->nexthdr == IPPROTO_TCP) { 1195 + offset = sizeof(struct ipv6hdr); 1196 + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6; 1197 + } else { 1198 + return; 1199 + } 1200 + } else { 1201 + return; 1202 + } 1203 + /* if mss is not set through Large Packet bit/mss in rx buffer, 1204 + * expect that the mss will be written to the tcp header checksum. 
1205 + */ 1206 + if (lrg_pkt) { 1207 + skb_shinfo(skb)->gso_size = mss; 1208 + } else if (offset) { 1209 + struct tcphdr *tcph = (struct tcphdr *)(skb->data + offset); 1210 + 1211 + skb_shinfo(skb)->gso_size = ntohs(tcph->check); 1212 + tcph->check = 0; 1213 + } 1214 + } 1215 + 1182 1216 static int ibmveth_poll(struct napi_struct *napi, int budget) 1183 1217 { 1184 1218 struct ibmveth_adapter *adapter = ··· 1226 1182 int frames_processed = 0; 1227 1183 unsigned long lpar_rc; 1228 1184 struct iphdr *iph; 1185 + u16 mss = 0; 1229 1186 1230 1187 restart_poll: 1231 1188 while (frames_processed < budget) { ··· 1244 1199 int length = ibmveth_rxq_frame_length(adapter); 1245 1200 int offset = ibmveth_rxq_frame_offset(adapter); 1246 1201 int csum_good = ibmveth_rxq_csum_good(adapter); 1202 + int lrg_pkt = ibmveth_rxq_large_packet(adapter); 1247 1203 1248 1204 skb = ibmveth_rxq_get_buffer(adapter); 1205 + 1206 + /* if the large packet bit is set in the rx queue 1207 + * descriptor, the mss will be written by PHYP eight 1208 + * bytes from the start of the rx buffer, which is 1209 + * skb->data at this stage 1210 + */ 1211 + if (lrg_pkt) { 1212 + __be64 *rxmss = (__be64 *)(skb->data + 8); 1213 + 1214 + mss = (u16)be64_to_cpu(*rxmss); 1215 + } 1249 1216 1250 1217 new_skb = NULL; 1251 1218 if (length < rx_copybreak) ··· 1292 1235 if (iph->check == 0xffff) { 1293 1236 iph->check = 0; 1294 1237 iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl); 1295 - adapter->rx_large_packets++; 1296 1238 } 1297 1239 } 1240 + } 1241 + 1242 + if (length > netdev->mtu + ETH_HLEN) { 1243 + ibmveth_rx_mss_helper(skb, mss, lrg_pkt); 1244 + adapter->rx_large_packets++; 1298 1245 } 1299 1246 1300 1247 napi_gro_receive(napi, skb); /* send it up */
+1
drivers/net/ethernet/ibm/ibmveth.h
··· 209 209 #define IBMVETH_RXQ_TOGGLE 0x80000000 210 210 #define IBMVETH_RXQ_TOGGLE_SHIFT 31 211 211 #define IBMVETH_RXQ_VALID 0x40000000 212 + #define IBMVETH_RXQ_LRG_PKT 0x04000000 212 213 #define IBMVETH_RXQ_NO_CSUM 0x02000000 213 214 #define IBMVETH_RXQ_CSUM_GOOD 0x01000000 214 215 #define IBMVETH_RXQ_OFF_MASK 0x0000FFFF
-1
drivers/net/ethernet/ibm/ibmvnic.c
··· 74 74 #include <asm/iommu.h> 75 75 #include <linux/uaccess.h> 76 76 #include <asm/firmware.h> 77 - #include <linux/seq_file.h> 78 77 #include <linux/workqueue.h> 79 78 80 79 #include "ibmvnic.h"
+6 -2
drivers/net/ethernet/intel/igb/igb_main.c
··· 4931 4931 4932 4932 /* initialize outer IP header fields */ 4933 4933 if (ip.v4->version == 4) { 4934 + unsigned char *csum_start = skb_checksum_start(skb); 4935 + unsigned char *trans_start = ip.hdr + (ip.v4->ihl * 4); 4936 + 4934 4937 /* IP header will have to cancel out any data that 4935 4938 * is not a part of the outer IP header 4936 4939 */ 4937 - ip.v4->check = csum_fold(csum_add(lco_csum(skb), 4938 - csum_unfold(l4.tcp->check))); 4940 + ip.v4->check = csum_fold(csum_partial(trans_start, 4941 + csum_start - trans_start, 4942 + 0)); 4939 4943 type_tucmd |= E1000_ADVTXD_TUCMD_IPV4; 4940 4944 4941 4945 ip.v4->tot_len = 0;
+6 -2
drivers/net/ethernet/intel/igbvf/netdev.c
··· 1965 1965 1966 1966 /* initialize outer IP header fields */ 1967 1967 if (ip.v4->version == 4) { 1968 + unsigned char *csum_start = skb_checksum_start(skb); 1969 + unsigned char *trans_start = ip.hdr + (ip.v4->ihl * 4); 1970 + 1968 1971 /* IP header will have to cancel out any data that 1969 1972 * is not a part of the outer IP header 1970 1973 */ 1971 - ip.v4->check = csum_fold(csum_add(lco_csum(skb), 1972 - csum_unfold(l4.tcp->check))); 1974 + ip.v4->check = csum_fold(csum_partial(trans_start, 1975 + csum_start - trans_start, 1976 + 0)); 1973 1977 type_tucmd |= E1000_ADVTXD_TUCMD_IPV4; 1974 1978 1975 1979 ip.v4->tot_len = 0;
+6 -2
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 7277 7277 7278 7278 /* initialize outer IP header fields */ 7279 7279 if (ip.v4->version == 4) { 7280 + unsigned char *csum_start = skb_checksum_start(skb); 7281 + unsigned char *trans_start = ip.hdr + (ip.v4->ihl * 4); 7282 + 7280 7283 /* IP header will have to cancel out any data that 7281 7284 * is not a part of the outer IP header 7282 7285 */ 7283 - ip.v4->check = csum_fold(csum_add(lco_csum(skb), 7284 - csum_unfold(l4.tcp->check))); 7286 + ip.v4->check = csum_fold(csum_partial(trans_start, 7287 + csum_start - trans_start, 7288 + 0)); 7285 7289 type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4; 7286 7290 7287 7291 ip.v4->tot_len = 0;
+6 -2
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
··· 3329 3329 3330 3330 /* initialize outer IP header fields */ 3331 3331 if (ip.v4->version == 4) { 3332 + unsigned char *csum_start = skb_checksum_start(skb); 3333 + unsigned char *trans_start = ip.hdr + (ip.v4->ihl * 4); 3334 + 3332 3335 /* IP header will have to cancel out any data that 3333 3336 * is not a part of the outer IP header 3334 3337 */ 3335 - ip.v4->check = csum_fold(csum_add(lco_csum(skb), 3336 - csum_unfold(l4.tcp->check))); 3338 + ip.v4->check = csum_fold(csum_partial(trans_start, 3339 + csum_start - trans_start, 3340 + 0)); 3337 3341 type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4; 3338 3342 3339 3343 ip.v4->tot_len = 0;
+1
drivers/net/ethernet/lantiq_etop.c
··· 704 704 priv->pldata = dev_get_platdata(&pdev->dev); 705 705 priv->netdev = dev; 706 706 spin_lock_init(&priv->lock); 707 + SET_NETDEV_DEV(dev, &pdev->dev); 707 708 708 709 for (i = 0; i < MAX_DMA_CHAN; i++) { 709 710 if (IS_TX(i))
+6 -1
drivers/net/ethernet/marvell/mvneta.c
··· 4151 4151 dev->features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO; 4152 4152 dev->hw_features |= dev->features; 4153 4153 dev->vlan_features |= dev->features; 4154 - dev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE; 4154 + dev->priv_flags |= IFF_LIVE_ADDR_CHANGE; 4155 4155 dev->gso_max_segs = MVNETA_MAX_TSO_SEGS; 4156 4156 4157 4157 err = register_netdev(dev); ··· 4191 4191 clk_disable_unprepare(pp->clk); 4192 4192 err_put_phy_node: 4193 4193 of_node_put(phy_node); 4194 + if (of_phy_is_fixed_link(dn)) 4195 + of_phy_deregister_fixed_link(dn); 4194 4196 err_free_irq: 4195 4197 irq_dispose_mapping(dev->irq); 4196 4198 err_free_netdev: ··· 4204 4202 static int mvneta_remove(struct platform_device *pdev) 4205 4203 { 4206 4204 struct net_device *dev = platform_get_drvdata(pdev); 4205 + struct device_node *dn = pdev->dev.of_node; 4207 4206 struct mvneta_port *pp = netdev_priv(dev); 4208 4207 4209 4208 unregister_netdev(dev); ··· 4212 4209 clk_disable_unprepare(pp->clk); 4213 4210 free_percpu(pp->ports); 4214 4211 free_percpu(pp->stats); 4212 + if (of_phy_is_fixed_link(dn)) 4213 + of_phy_deregister_fixed_link(dn); 4215 4214 irq_dispose_mapping(dev->irq); 4216 4215 of_node_put(pp->phy_node); 4217 4216 free_netdev(dev);
+1 -1
drivers/net/ethernet/marvell/mvpp2.c
··· 3293 3293 mvpp2_write(priv, MVPP2_CLS_MODE_REG, MVPP2_CLS_MODE_ACTIVE_MASK); 3294 3294 3295 3295 /* Clear classifier flow table */ 3296 - memset(&fe.data, 0, MVPP2_CLS_FLOWS_TBL_DATA_WORDS); 3296 + memset(&fe.data, 0, sizeof(fe.data)); 3297 3297 for (index = 0; index < MVPP2_CLS_FLOWS_TBL_SIZE; index++) { 3298 3298 fe.index = index; 3299 3299 mvpp2_cls_flow_write(priv, &fe);
+4
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 318 318 return 0; 319 319 320 320 err_phy: 321 + if (of_phy_is_fixed_link(mac->of_node)) 322 + of_phy_deregister_fixed_link(mac->of_node); 321 323 of_node_put(np); 322 324 dev_err(eth->dev, "%s: invalid phy\n", __func__); 323 325 return -EINVAL; ··· 1925 1923 struct mtk_eth *eth = mac->hw; 1926 1924 1927 1925 phy_disconnect(dev->phydev); 1926 + if (of_phy_is_fixed_link(mac->of_node)) 1927 + of_phy_deregister_fixed_link(mac->of_node); 1928 1928 mtk_irq_disable(eth, MTK_QDMA_INT_MASK, ~0); 1929 1929 mtk_irq_disable(eth, MTK_PDMA_INT_MASK, ~0); 1930 1930 }
+6 -16
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 129 129 } 130 130 }; 131 131 132 + /* Must not acquire state_lock, as its corresponding work_sync 133 + * is done under it. 134 + */ 132 135 static void mlx4_en_filter_work(struct work_struct *work) 133 136 { 134 137 struct mlx4_en_filter *filter = container_of(work, ··· 2079 2076 return -ENOMEM; 2080 2077 } 2081 2078 2082 - static void mlx4_en_shutdown(struct net_device *dev) 2083 - { 2084 - rtnl_lock(); 2085 - netif_device_detach(dev); 2086 - mlx4_en_close(dev); 2087 - rtnl_unlock(); 2088 - } 2089 2079 2090 2080 static int mlx4_en_copy_priv(struct mlx4_en_priv *dst, 2091 2081 struct mlx4_en_priv *src, ··· 2155 2159 { 2156 2160 struct mlx4_en_priv *priv = netdev_priv(dev); 2157 2161 struct mlx4_en_dev *mdev = priv->mdev; 2158 - bool shutdown = mdev->dev->persist->interface_state & 2159 - MLX4_INTERFACE_STATE_SHUTDOWN; 2160 2162 2161 2163 en_dbg(DRV, priv, "Destroying netdev on port:%d\n", priv->port); 2162 2164 ··· 2162 2168 if (priv->registered) { 2163 2169 devlink_port_type_clear(mlx4_get_devlink_port(mdev->dev, 2164 2170 priv->port)); 2165 - if (shutdown) 2166 - mlx4_en_shutdown(dev); 2167 - else 2168 - unregister_netdev(dev); 2171 + unregister_netdev(dev); 2169 2172 } 2170 2173 2171 2174 if (priv->allocated) ··· 2180 2189 mutex_lock(&mdev->state_lock); 2181 2190 mdev->pndev[priv->port] = NULL; 2182 2191 mdev->upper[priv->port] = NULL; 2183 - mutex_unlock(&mdev->state_lock); 2184 2192 2185 2193 #ifdef CONFIG_RFS_ACCEL 2186 2194 mlx4_en_cleanup_filters(priv); 2187 2195 #endif 2188 2196 2189 2197 mlx4_en_free_resources(priv); 2198 + mutex_unlock(&mdev->state_lock); 2190 2199 2191 2200 kfree(priv->tx_ring); 2192 2201 kfree(priv->tx_cq); 2193 2202 2194 - if (!shutdown) 2195 - free_netdev(dev); 2203 + free_netdev(dev); 2196 2204 } 2197 2205 2198 2206 static int mlx4_en_change_mtu(struct net_device *dev, int new_mtu)
+1 -4
drivers/net/ethernet/mellanox/mlx4/main.c
··· 4147 4147 4148 4148 mlx4_info(persist->dev, "mlx4_shutdown was called\n"); 4149 4149 mutex_lock(&persist->interface_state_mutex); 4150 - if (persist->interface_state & MLX4_INTERFACE_STATE_UP) { 4151 - /* Notify mlx4 clients that the kernel is being shut down */ 4152 - persist->interface_state |= MLX4_INTERFACE_STATE_SHUTDOWN; 4150 + if (persist->interface_state & MLX4_INTERFACE_STATE_UP) 4153 4151 mlx4_unload_one(pdev); 4154 - } 4155 4152 mutex_unlock(&persist->interface_state_mutex); 4156 4153 } 4157 4154
+6 -1
drivers/net/ethernet/mellanox/mlx4/mcg.c
··· 1457 1457 int mlx4_flow_steer_promisc_add(struct mlx4_dev *dev, u8 port, 1458 1458 u32 qpn, enum mlx4_net_trans_promisc_mode mode) 1459 1459 { 1460 - struct mlx4_net_trans_rule rule; 1460 + struct mlx4_net_trans_rule rule = { 1461 + .queue_mode = MLX4_NET_TRANS_Q_FIFO, 1462 + .exclusive = 0, 1463 + .allow_loopback = 1, 1464 + }; 1465 + 1461 1466 u64 *regid_p; 1462 1467 1463 1468 switch (mode) {
-2
drivers/net/ethernet/mellanox/mlx5/core/Kconfig
··· 18 18 default n 19 19 ---help--- 20 20 Ethernet support in Mellanox Technologies ConnectX-4 NIC. 21 - Ethernet and Infiniband support in ConnectX-4 are currently mutually 22 - exclusive. 23 21 24 22 config MLX5_CORE_EN_DCB 25 23 bool "Data Center Bridging (DCB) Support"
-5
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 268 268 pr_debug("\n"); 269 269 } 270 270 271 - enum { 272 - MLX5_DRIVER_STATUS_ABORTED = 0xfe, 273 - MLX5_DRIVER_SYND = 0xbadd00de, 274 - }; 275 - 276 271 static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op, 277 272 u32 *synd, u8 *status) 278 273 {
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 241 241 }; 242 242 243 243 enum { 244 - MLX5E_RQ_STATE_FLUSH, 244 + MLX5E_RQ_STATE_ENABLED, 245 245 MLX5E_RQ_STATE_UMR_WQE_IN_PROGRESS, 246 246 MLX5E_RQ_STATE_AM, 247 247 }; ··· 394 394 }; 395 395 396 396 enum { 397 - MLX5E_SQ_STATE_FLUSH, 397 + MLX5E_SQ_STATE_ENABLED, 398 398 MLX5E_SQ_STATE_BF_ENABLE, 399 399 }; 400 400
+9 -6
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 759 759 if (err) 760 760 goto err_destroy_rq; 761 761 762 + set_bit(MLX5E_RQ_STATE_ENABLED, &rq->state); 762 763 err = mlx5e_modify_rq_state(rq, MLX5_RQC_STATE_RST, MLX5_RQC_STATE_RDY); 763 764 if (err) 764 765 goto err_disable_rq; ··· 774 773 return 0; 775 774 776 775 err_disable_rq: 776 + clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state); 777 777 mlx5e_disable_rq(rq); 778 778 err_destroy_rq: 779 779 mlx5e_destroy_rq(rq); ··· 784 782 785 783 static void mlx5e_close_rq(struct mlx5e_rq *rq) 786 784 { 787 - set_bit(MLX5E_RQ_STATE_FLUSH, &rq->state); 785 + clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state); 788 786 napi_synchronize(&rq->channel->napi); /* prevent mlx5e_post_rx_wqes */ 789 787 cancel_work_sync(&rq->am.work); 790 788 ··· 1008 1006 MLX5_SET(sqc, sqc, min_wqe_inline_mode, sq->min_inline_mode); 1009 1007 MLX5_SET(sqc, sqc, state, MLX5_SQC_STATE_RST); 1010 1008 MLX5_SET(sqc, sqc, tis_lst_sz, param->type == MLX5E_SQ_ICO ? 0 : 1); 1011 - MLX5_SET(sqc, sqc, flush_in_error_en, 1); 1012 1009 1013 1010 MLX5_SET(wq, wq, wq_type, MLX5_WQ_TYPE_CYCLIC); 1014 1011 MLX5_SET(wq, wq, uar_page, sq->uar.index); ··· 1084 1083 if (err) 1085 1084 goto err_destroy_sq; 1086 1085 1086 + set_bit(MLX5E_SQ_STATE_ENABLED, &sq->state); 1087 1087 err = mlx5e_modify_sq(sq, MLX5_SQC_STATE_RST, MLX5_SQC_STATE_RDY, 1088 1088 false, 0); 1089 1089 if (err) ··· 1098 1096 return 0; 1099 1097 1100 1098 err_disable_sq: 1099 + clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state); 1101 1100 mlx5e_disable_sq(sq); 1102 1101 err_destroy_sq: 1103 1102 mlx5e_destroy_sq(sq); ··· 1115 1112 1116 1113 static void mlx5e_close_sq(struct mlx5e_sq *sq) 1117 1114 { 1118 - set_bit(MLX5E_SQ_STATE_FLUSH, &sq->state); 1115 + clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state); 1119 1116 /* prevent netif_tx_wake_queue */ 1120 1117 napi_synchronize(&sq->channel->napi); 1121 1118 ··· 3095 3092 if (!netif_xmit_stopped(netdev_get_tx_queue(dev, i))) 3096 3093 continue; 3097 3094 sched_work = true; 3098 - set_bit(MLX5E_SQ_STATE_FLUSH, 
&sq->state); 3095 + clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state); 3099 3096 netdev_err(dev, "TX timeout on queue: %d, SQ: 0x%x, CQ: 0x%x, SQ Cons: 0x%x SQ Prod: 0x%x\n", 3100 3097 i, sq->sqn, sq->cq.mcq.cqn, sq->cc, sq->pc); 3101 3098 } ··· 3150 3147 for (i = 0; i < priv->params.num_channels; i++) { 3151 3148 struct mlx5e_channel *c = priv->channel[i]; 3152 3149 3153 - set_bit(MLX5E_RQ_STATE_FLUSH, &c->rq.state); 3150 + clear_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state); 3154 3151 napi_synchronize(&c->napi); 3155 3152 /* prevent mlx5e_poll_rx_cq from accessing rq->xdp_prog */ 3156 3153 3157 3154 old_prog = xchg(&c->rq.xdp_prog, prog); 3158 3155 3159 - clear_bit(MLX5E_RQ_STATE_FLUSH, &c->rq.state); 3156 + set_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state); 3160 3157 /* napi_schedule in case we have missed anything */ 3161 3158 set_bit(MLX5E_CHANNEL_NAPI_SCHED, &c->flags); 3162 3159 napi_schedule(&c->napi);
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 340 340 while ((pi = (sq->pc & wq->sz_m1)) > sq->edge) { 341 341 sq->db.ico_wqe[pi].opcode = MLX5_OPCODE_NOP; 342 342 sq->db.ico_wqe[pi].num_wqebbs = 1; 343 - mlx5e_send_nop(sq, true); 343 + mlx5e_send_nop(sq, false); 344 344 } 345 345 346 346 wqe = mlx5_wq_cyc_get_wqe(wq, pi); ··· 412 412 413 413 clear_bit(MLX5E_RQ_STATE_UMR_WQE_IN_PROGRESS, &rq->state); 414 414 415 - if (unlikely(test_bit(MLX5E_RQ_STATE_FLUSH, &rq->state))) { 415 + if (unlikely(!test_bit(MLX5E_RQ_STATE_ENABLED, &rq->state))) { 416 416 mlx5e_free_rx_mpwqe(rq, &rq->mpwqe.info[wq->head]); 417 417 return; 418 418 } ··· 445 445 } 446 446 447 447 #define RQ_CANNOT_POST(rq) \ 448 - (test_bit(MLX5E_RQ_STATE_FLUSH, &rq->state) || \ 448 + (!test_bit(MLX5E_RQ_STATE_ENABLED, &rq->state) || \ 449 449 test_bit(MLX5E_RQ_STATE_UMR_WQE_IN_PROGRESS, &rq->state)) 450 450 451 451 bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq) ··· 924 924 struct mlx5e_sq *xdp_sq = &rq->channel->xdp_sq; 925 925 int work_done = 0; 926 926 927 - if (unlikely(test_bit(MLX5E_RQ_STATE_FLUSH, &rq->state))) 927 + if (unlikely(!test_bit(MLX5E_RQ_STATE_ENABLED, &rq->state))) 928 928 return 0; 929 929 930 930 if (cq->decmprs_left)
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
··· 409 409 410 410 sq = container_of(cq, struct mlx5e_sq, cq); 411 411 412 - if (unlikely(test_bit(MLX5E_SQ_STATE_FLUSH, &sq->state))) 412 + if (unlikely(!test_bit(MLX5E_SQ_STATE_ENABLED, &sq->state))) 413 413 return false; 414 414 415 415 npkts = 0;
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
··· 56 56 struct mlx5_cqe64 *cqe; 57 57 u16 sqcc; 58 58 59 - if (unlikely(test_bit(MLX5E_SQ_STATE_FLUSH, &sq->state))) 59 + if (unlikely(!test_bit(MLX5E_SQ_STATE_ENABLED, &sq->state))) 60 60 return; 61 61 62 62 cqe = mlx5e_get_cqe(cq); ··· 113 113 114 114 sq = container_of(cq, struct mlx5e_sq, cq); 115 115 116 - if (unlikely(test_bit(MLX5E_SQ_STATE_FLUSH, &sq->state))) 116 + if (unlikely(!test_bit(MLX5E_SQ_STATE_ENABLED, &sq->state))) 117 117 return false; 118 118 119 119 /* sq->cc must be updated only after mlx5_cqwq_update_db_record(),
+25 -18
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 46 46 #include <linux/mlx5/srq.h> 47 47 #include <linux/debugfs.h> 48 48 #include <linux/kmod.h> 49 - #include <linux/delay.h> 50 49 #include <linux/mlx5/mlx5_ifc.h> 51 50 #ifdef CONFIG_RFS_ACCEL 52 51 #include <linux/cpu_rmap.h> ··· 62 63 MODULE_LICENSE("Dual BSD/GPL"); 63 64 MODULE_VERSION(DRIVER_VERSION); 64 65 65 - int mlx5_core_debug_mask; 66 - module_param_named(debug_mask, mlx5_core_debug_mask, int, 0644); 66 + unsigned int mlx5_core_debug_mask; 67 + module_param_named(debug_mask, mlx5_core_debug_mask, uint, 0644); 67 68 MODULE_PARM_DESC(debug_mask, "debug mask: 1 = dump cmd data, 2 = dump cmd exec time, 3 = both. Default=0"); 68 69 69 70 #define MLX5_DEFAULT_PROF 2 70 - static int prof_sel = MLX5_DEFAULT_PROF; 71 - module_param_named(prof_sel, prof_sel, int, 0444); 71 + static unsigned int prof_sel = MLX5_DEFAULT_PROF; 72 + module_param_named(prof_sel, prof_sel, uint, 0444); 72 73 MODULE_PARM_DESC(prof_sel, "profile selector. Valid range 0 - 2"); 73 74 74 75 enum { ··· 732 733 u8 status; 733 734 734 735 mlx5_cmd_mbox_status(query_out, &status, &syndrome); 735 - if (status == MLX5_CMD_STAT_BAD_OP_ERR) { 736 - pr_debug("Only ISSI 0 is supported\n"); 737 - return 0; 736 + if (!status || syndrome == MLX5_DRIVER_SYND) { 737 + mlx5_core_err(dev, "Failed to query ISSI err(%d) status(%d) synd(%d)\n", 738 + err, status, syndrome); 739 + return err; 738 740 } 739 741 740 - pr_err("failed to query ISSI err(%d)\n", err); 741 - return err; 742 + mlx5_core_warn(dev, "Query ISSI is not supported by FW, ISSI is 0\n"); 743 + dev->issi = 0; 744 + return 0; 742 745 } 743 746 744 747 sup_issi = MLX5_GET(query_issi_out, query_out, supported_issi_dw0); ··· 754 753 err = mlx5_cmd_exec(dev, set_in, sizeof(set_in), 755 754 set_out, sizeof(set_out)); 756 755 if (err) { 757 - pr_err("failed to set ISSI=1 err(%d)\n", err); 756 + mlx5_core_err(dev, "Failed to set ISSI to 1 err(%d)\n", 757 + err); 758 758 return err; 759 759 } 760 760 ··· 1230 1228 1231 1229 dev->pdev = pdev; 1232 
1230 dev->event = mlx5_core_event; 1233 - 1234 - if (prof_sel < 0 || prof_sel >= ARRAY_SIZE(profile)) { 1235 - mlx5_core_warn(dev, 1236 - "selected profile out of range, selecting default (%d)\n", 1237 - MLX5_DEFAULT_PROF); 1238 - prof_sel = MLX5_DEFAULT_PROF; 1239 - } 1240 1231 dev->profile = &profile[prof_sel]; 1241 1232 1242 1233 INIT_LIST_HEAD(&priv->ctx_list); ··· 1446 1451 .sriov_configure = mlx5_core_sriov_configure, 1447 1452 }; 1448 1453 1454 + static void mlx5_core_verify_params(void) 1455 + { 1456 + if (prof_sel >= ARRAY_SIZE(profile)) { 1457 + pr_warn("mlx5_core: WARNING: Invalid module parameter prof_sel %d, valid range 0-%zu, changing back to default(%d)\n", 1458 + prof_sel, 1459 + ARRAY_SIZE(profile) - 1, 1460 + MLX5_DEFAULT_PROF); 1461 + prof_sel = MLX5_DEFAULT_PROF; 1462 + } 1463 + } 1464 + 1449 1465 static int __init init(void) 1450 1466 { 1451 1467 int err; 1452 1468 1469 + mlx5_core_verify_params(); 1453 1470 mlx5_register_debugfs(); 1454 1471 1455 1472 err = pci_register_driver(&mlx5_core_driver);
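The mlx5 change above switches `prof_sel` to unsigned and validates it once at module init: a negative value passed from userspace wraps to a large unsigned number, so a single `>=` bound check catches both directions. A minimal standalone sketch of that clamp (`N_PROFILES` and `DEFAULT_PROF` are illustrative stand-ins for `ARRAY_SIZE(profile)` and `MLX5_DEFAULT_PROF`, not kernel symbols):

```c
#include <stdio.h>

#define N_PROFILES	3u	/* stand-in for ARRAY_SIZE(profile) */
#define DEFAULT_PROF	2u	/* stand-in for MLX5_DEFAULT_PROF */

/* Clamp an out-of-range selector back to the default, warning once.
 * Because the parameter is unsigned, a "negative" input wraps and is
 * caught by the same upper-bound check. Returns the validated value. */
static unsigned int verify_prof_sel(unsigned int prof_sel)
{
	if (prof_sel >= N_PROFILES) {
		fprintf(stderr,
			"invalid prof_sel %u, valid range 0-%u, using default %u\n",
			prof_sel, N_PROFILES - 1, DEFAULT_PROF);
		return DEFAULT_PROF;
	}
	return prof_sel;
}
```

Doing this once in `init()` rather than in probe also avoids re-warning (and re-clamping a shared variable) for every device.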
+10 -5
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
··· 44 44 45 45 #define MLX5_TOTAL_VPORTS(mdev) (1 + pci_sriov_get_totalvfs(mdev->pdev)) 46 46 47 - extern int mlx5_core_debug_mask; 47 + extern uint mlx5_core_debug_mask; 48 48 49 49 #define mlx5_core_dbg(__dev, format, ...) \ 50 - dev_dbg(&(__dev)->pdev->dev, "%s:%s:%d:(pid %d): " format, \ 51 - (__dev)->priv.name, __func__, __LINE__, current->pid, \ 50 + dev_dbg(&(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \ 51 + __func__, __LINE__, current->pid, \ 52 52 ##__VA_ARGS__) 53 53 54 54 #define mlx5_core_dbg_mask(__dev, mask, format, ...) \ ··· 63 63 ##__VA_ARGS__) 64 64 65 65 #define mlx5_core_warn(__dev, format, ...) \ 66 - dev_warn(&(__dev)->pdev->dev, "%s:%s:%d:(pid %d): " format, \ 67 - (__dev)->priv.name, __func__, __LINE__, current->pid, \ 66 + dev_warn(&(__dev)->pdev->dev, "%s:%d:(pid %d): " format, \ 67 + __func__, __LINE__, current->pid, \ 68 68 ##__VA_ARGS__) 69 69 70 70 #define mlx5_core_info(__dev, format, ...) \ ··· 73 73 enum { 74 74 MLX5_CMD_DATA, /* print command payload only */ 75 75 MLX5_CMD_TIME, /* print command execution time */ 76 + }; 77 + 78 + enum { 79 + MLX5_DRIVER_STATUS_ABORTED = 0xfe, 80 + MLX5_DRIVER_SYND = 0xbadd00de, 76 81 }; 77 82 78 83 int mlx5_query_hca_caps(struct mlx5_core_dev *dev);
+1
drivers/net/ethernet/qlogic/qed/qed_ll2.c
··· 1730 1730 mapping))) { 1731 1731 DP_NOTICE(cdev, 1732 1732 "Unable to map frag - dropping packet\n"); 1733 + rc = -ENOMEM; 1733 1734 goto err; 1734 1735 } 1735 1736 } else {
+1
drivers/net/ethernet/qualcomm/emac/emac-phy.c
··· 212 212 213 213 phy_np = of_parse_phandle(np, "phy-handle", 0); 214 214 adpt->phydev = of_phy_find_device(phy_np); 215 + of_node_put(phy_np); 215 216 } 216 217 217 218 if (!adpt->phydev) {
+4
drivers/net/ethernet/qualcomm/emac/emac.c
··· 711 711 err_undo_napi: 712 712 netif_napi_del(&adpt->rx_q.napi); 713 713 err_undo_mdiobus: 714 + if (!has_acpi_companion(&pdev->dev)) 715 + put_device(&adpt->phydev->mdio.dev); 714 716 mdiobus_unregister(adpt->mii_bus); 715 717 err_undo_clocks: 716 718 emac_clks_teardown(adpt); ··· 732 730 733 731 emac_clks_teardown(adpt); 734 732 733 + if (!has_acpi_companion(&pdev->dev)) 734 + put_device(&adpt->phydev->mdio.dev); 735 735 mdiobus_unregister(adpt->mii_bus); 736 736 free_netdev(netdev); 737 737
+14 -5
drivers/net/ethernet/renesas/ravb_main.c
··· 1008 1008 of_node_put(pn); 1009 1009 if (!phydev) { 1010 1010 netdev_err(ndev, "failed to connect PHY\n"); 1011 - return -ENOENT; 1011 + err = -ENOENT; 1012 + goto err_deregister_fixed_link; 1012 1013 } 1013 1014 1014 1015 /* This driver only support 10/100Mbit speeds on Gen3 1015 1016 * at this time. 1016 1017 */ 1017 1018 if (priv->chip_id == RCAR_GEN3) { 1018 - int err; 1019 - 1020 1019 err = phy_set_max_speed(phydev, SPEED_100); 1021 1020 if (err) { 1022 1021 netdev_err(ndev, "failed to limit PHY to 100Mbit/s\n"); 1023 - phy_disconnect(phydev); 1024 - return err; 1022 + goto err_phy_disconnect; 1025 1023 } 1026 1024 1027 1025 netdev_info(ndev, "limited PHY to 100Mbit/s\n"); ··· 1031 1033 phy_attached_info(phydev); 1032 1034 1033 1035 return 0; 1036 + 1037 + err_phy_disconnect: 1038 + phy_disconnect(phydev); 1039 + err_deregister_fixed_link: 1040 + if (of_phy_is_fixed_link(np)) 1041 + of_phy_deregister_fixed_link(np); 1042 + 1043 + return err; 1034 1044 } 1035 1045 1036 1046 /* PHY control start function */ ··· 1640 1634 /* Device close function for Ethernet AVB */ 1641 1635 static int ravb_close(struct net_device *ndev) 1642 1636 { 1637 + struct device_node *np = ndev->dev.parent->of_node; 1643 1638 struct ravb_private *priv = netdev_priv(ndev); 1644 1639 struct ravb_tstamp_skb *ts_skb, *ts_skb2; 1645 1640 ··· 1670 1663 if (ndev->phydev) { 1671 1664 phy_stop(ndev->phydev); 1672 1665 phy_disconnect(ndev->phydev); 1666 + if (of_phy_is_fixed_link(np)) 1667 + of_phy_deregister_fixed_link(np); 1673 1668 } 1674 1669 1675 1670 if (priv->chip_id != RCAR_GEN2) {
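The ravb fix above converts early returns into goto-based unwinding so the fixed-link PHY registered earlier is always deregistered on failure. The general shape, one label per acquired resource, jumped to in reverse order of acquisition, can be sketched as a standalone fragment (all names here are illustrative, not kernel APIs):

```c
/* Flags standing in for "resource is currently held". */
static int acquired_a, acquired_b;

static int acquire_a(void) { acquired_a = 1; return 0; }
static void release_a(void) { acquired_a = 0; }
static int acquire_b(int fail) { if (fail) return -1; acquired_b = 1; return 0; }
static void release_b(void) { acquired_b = 0; }

/* Each failure jumps to the label that releases everything acquired
 * so far; the labels run in reverse order of acquisition. */
static int open_device(int fail_b)
{
	int err;

	err = acquire_a();
	if (err)
		return err;

	err = acquire_b(fail_b);
	if (err)
		goto err_release_a;

	return 0;

err_release_a:
	release_a();
	return err;
}

/* Normal teardown mirrors the error path, in the same reverse order. */
static void close_device(void)
{
	release_b();
	release_a();
}
```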
+1 -1
drivers/net/ethernet/renesas/sh_eth.c
··· 518 518 519 519 .ecsr_value = ECSR_ICD, 520 520 .ecsipr_value = ECSIPR_ICDIP, 521 - .eesipr_value = 0xff7f009f, 521 + .eesipr_value = 0xe77f009f, 522 522 523 523 .tx_check = EESR_TC1 | EESR_FTC, 524 524 .eesr_err_check = EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT |
+8 -1
drivers/net/ethernet/smsc/smsc911x.c
··· 438 438 ret = regulator_bulk_get(&pdev->dev, 439 439 ARRAY_SIZE(pdata->supplies), 440 440 pdata->supplies); 441 - if (ret) 441 + if (ret) { 442 + /* 443 + * Retry on deferrals, else just report the error 444 + * and try to continue. 445 + */ 446 + if (ret == -EPROBE_DEFER) 447 + return ret; 442 448 netdev_err(ndev, "couldn't get regulators %d\n", 443 449 ret); 450 + } 444 451 445 452 /* Request optional RESET GPIO */ 446 453 pdata->reset_gpiod = devm_gpiod_get_optional(&pdev->dev,
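The smsc911x hunk above distinguishes probe deferral from other regulator errors: `-EPROBE_DEFER` must be propagated so the driver core retries the probe once the regulator provider appears, while other errors are merely logged and probing continues. A sketch of that decision (the `handle_regulator_err()` helper and the `logged` flag are illustrative, not driver code):

```c
#define EPROBE_DEFER	517	/* same value as the kernel's errno */

/* Returns -EPROBE_DEFER unchanged so the caller can bail out and be
 * reprobed later; any other error is recorded as logged and swallowed. */
static int handle_regulator_err(int ret, int *logged)
{
	if (ret) {
		if (ret == -EPROBE_DEFER)
			return ret;	/* retry the whole probe later */
		*logged = 1;		/* report the error, keep going */
	}
	return 0;
}
```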
+15 -2
drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
··· 50 50 if (plat_dat->init) { 51 51 ret = plat_dat->init(pdev, plat_dat->bsp_priv); 52 52 if (ret) 53 - return ret; 53 + goto err_remove_config_dt; 54 54 } 55 55 56 - return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 56 + ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 57 + if (ret) 58 + goto err_exit; 59 + 60 + return 0; 61 + 62 + err_exit: 63 + if (plat_dat->exit) 64 + plat_dat->exit(pdev, plat_dat->bsp_priv); 65 + err_remove_config_dt: 66 + if (pdev->dev.of_node) 67 + stmmac_remove_config_dt(pdev, plat_dat); 68 + 69 + return ret; 57 70 } 58 71 59 72 static const struct of_device_id dwmac_generic_match[] = {
+19 -6
drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
··· 271 271 return PTR_ERR(plat_dat); 272 272 273 273 gmac = devm_kzalloc(dev, sizeof(*gmac), GFP_KERNEL); 274 - if (!gmac) 275 - return -ENOMEM; 274 + if (!gmac) { 275 + err = -ENOMEM; 276 + goto err_remove_config_dt; 277 + } 276 278 277 279 gmac->pdev = pdev; 278 280 279 281 err = ipq806x_gmac_of_parse(gmac); 280 282 if (err) { 281 283 dev_err(dev, "device tree parsing error\n"); 282 - return err; 284 + goto err_remove_config_dt; 283 285 } 284 286 285 287 regmap_write(gmac->qsgmii_csr, QSGMII_PCS_CAL_LCKDT_CTL, ··· 302 300 default: 303 301 dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n", 304 302 phy_modes(gmac->phy_mode)); 305 - return -EINVAL; 303 + err = -EINVAL; 304 + goto err_remove_config_dt; 306 305 } 307 306 regmap_write(gmac->nss_common, NSS_COMMON_GMAC_CTL(gmac->id), val); 308 307 ··· 322 319 default: 323 320 dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n", 324 321 phy_modes(gmac->phy_mode)); 325 - return -EINVAL; 322 + err = -EINVAL; 323 + goto err_remove_config_dt; 326 324 } 327 325 regmap_write(gmac->nss_common, NSS_COMMON_CLK_SRC_CTRL, val); 328 326 ··· 350 346 plat_dat->bsp_priv = gmac; 351 347 plat_dat->fix_mac_speed = ipq806x_gmac_fix_mac_speed; 352 348 353 - return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 349 + err = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 350 + if (err) 351 + goto err_remove_config_dt; 352 + 353 + return 0; 354 + 355 + err_remove_config_dt: 356 + stmmac_remove_config_dt(pdev, plat_dat); 357 + 358 + return err; 354 359 } 355 360 356 361 static const struct of_device_id ipq806x_gmac_dwmac_match[] = {
+14 -3
drivers/net/ethernet/stmicro/stmmac/dwmac-lpc18xx.c
··· 46 46 reg = syscon_regmap_lookup_by_compatible("nxp,lpc1850-creg"); 47 47 if (IS_ERR(reg)) { 48 48 dev_err(&pdev->dev, "syscon lookup failed\n"); 49 - return PTR_ERR(reg); 49 + ret = PTR_ERR(reg); 50 + goto err_remove_config_dt; 50 51 } 51 52 52 53 if (plat_dat->interface == PHY_INTERFACE_MODE_MII) { ··· 56 55 ethmode = LPC18XX_CREG_CREG6_ETHMODE_RMII; 57 56 } else { 58 57 dev_err(&pdev->dev, "Only MII and RMII mode supported\n"); 59 - return -EINVAL; 58 + ret = -EINVAL; 59 + goto err_remove_config_dt; 60 60 } 61 61 62 62 regmap_update_bits(reg, LPC18XX_CREG_CREG6, 63 63 LPC18XX_CREG_CREG6_ETHMODE_MASK, ethmode); 64 64 65 - return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 65 + ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 66 + if (ret) 67 + goto err_remove_config_dt; 68 + 69 + return 0; 70 + 71 + err_remove_config_dt: 72 + stmmac_remove_config_dt(pdev, plat_dat); 73 + 74 + return ret; 66 75 } 67 76 68 77 static const struct of_device_id lpc18xx_dwmac_match[] = {
+18 -5
drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c
··· 64 64 return PTR_ERR(plat_dat); 65 65 66 66 dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL); 67 - if (!dwmac) 68 - return -ENOMEM; 67 + if (!dwmac) { 68 + ret = -ENOMEM; 69 + goto err_remove_config_dt; 70 + } 69 71 70 72 res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 71 73 dwmac->reg = devm_ioremap_resource(&pdev->dev, res); 72 - if (IS_ERR(dwmac->reg)) 73 - return PTR_ERR(dwmac->reg); 74 + if (IS_ERR(dwmac->reg)) { 75 + ret = PTR_ERR(dwmac->reg); 76 + goto err_remove_config_dt; 77 + } 74 78 75 79 plat_dat->bsp_priv = dwmac; 76 80 plat_dat->fix_mac_speed = meson6_dwmac_fix_mac_speed; 77 81 78 - return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 82 + ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 83 + if (ret) 84 + goto err_remove_config_dt; 85 + 86 + return 0; 87 + 88 + err_remove_config_dt: 89 + stmmac_remove_config_dt(pdev, plat_dat); 90 + 91 + return ret; 79 92 } 80 93 81 94 static const struct of_device_id meson6_dwmac_match[] = {
+24 -8
drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
··· 264 264 return PTR_ERR(plat_dat); 265 265 266 266 dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL); 267 - if (!dwmac) 268 - return -ENOMEM; 267 + if (!dwmac) { 268 + ret = -ENOMEM; 269 + goto err_remove_config_dt; 270 + } 269 271 270 272 res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 271 273 dwmac->regs = devm_ioremap_resource(&pdev->dev, res); 272 - if (IS_ERR(dwmac->regs)) 273 - return PTR_ERR(dwmac->regs); 274 + if (IS_ERR(dwmac->regs)) { 275 + ret = PTR_ERR(dwmac->regs); 276 + goto err_remove_config_dt; 277 + } 274 278 275 279 dwmac->pdev = pdev; 276 280 dwmac->phy_mode = of_get_phy_mode(pdev->dev.of_node); 277 281 if (dwmac->phy_mode < 0) { 278 282 dev_err(&pdev->dev, "missing phy-mode property\n"); 279 - return -EINVAL; 283 + ret = -EINVAL; 284 + goto err_remove_config_dt; 280 285 } 281 286 282 287 ret = meson8b_init_clk(dwmac); 283 288 if (ret) 284 - return ret; 289 + goto err_remove_config_dt; 285 290 286 291 ret = meson8b_init_prg_eth(dwmac); 287 292 if (ret) 288 - return ret; 293 + goto err_remove_config_dt; 289 294 290 295 plat_dat->bsp_priv = dwmac; 291 296 292 - return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 297 + ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 298 + if (ret) 299 + goto err_clk_disable; 300 + 301 + return 0; 302 + 303 + err_clk_disable: 304 + clk_disable_unprepare(dwmac->m25_div_clk); 305 + err_remove_config_dt: 306 + stmmac_remove_config_dt(pdev, plat_dat); 307 + 308 + return ret; 293 309 } 294 310 295 311 static int meson8b_dwmac_remove(struct platform_device *pdev)
+17 -4
drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
··· 981 981 plat_dat->resume = rk_gmac_resume; 982 982 983 983 plat_dat->bsp_priv = rk_gmac_setup(pdev, data); 984 - if (IS_ERR(plat_dat->bsp_priv)) 985 - return PTR_ERR(plat_dat->bsp_priv); 984 + if (IS_ERR(plat_dat->bsp_priv)) { 985 + ret = PTR_ERR(plat_dat->bsp_priv); 986 + goto err_remove_config_dt; 987 + } 986 988 987 989 ret = rk_gmac_init(pdev, plat_dat->bsp_priv); 988 990 if (ret) 989 - return ret; 991 + goto err_remove_config_dt; 990 992 991 - return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 993 + ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 994 + if (ret) 995 + goto err_gmac_exit; 996 + 997 + return 0; 998 + 999 + err_gmac_exit: 1000 + rk_gmac_exit(pdev, plat_dat->bsp_priv); 1001 + err_remove_config_dt: 1002 + stmmac_remove_config_dt(pdev, plat_dat); 1003 + 1004 + return ret; 992 1005 } 993 1006 994 1007 static const struct of_device_id rk_gmac_dwmac_match[] = {
+26 -13
drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
··· 304 304 struct device *dev = &pdev->dev; 305 305 int ret; 306 306 struct socfpga_dwmac *dwmac; 307 + struct net_device *ndev; 308 + struct stmmac_priv *stpriv; 307 309 308 310 ret = stmmac_get_platform_resources(pdev, &stmmac_res); 309 311 if (ret) ··· 316 314 return PTR_ERR(plat_dat); 317 315 318 316 dwmac = devm_kzalloc(dev, sizeof(*dwmac), GFP_KERNEL); 319 - if (!dwmac) 320 - return -ENOMEM; 317 + if (!dwmac) { 318 + ret = -ENOMEM; 319 + goto err_remove_config_dt; 320 + } 321 321 322 322 ret = socfpga_dwmac_parse_data(dwmac, dev); 323 323 if (ret) { 324 324 dev_err(dev, "Unable to parse OF data\n"); 325 - return ret; 325 + goto err_remove_config_dt; 326 326 } 327 327 328 328 plat_dat->bsp_priv = dwmac; 329 329 plat_dat->fix_mac_speed = socfpga_dwmac_fix_mac_speed; 330 330 331 331 ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 332 + if (ret) 333 + goto err_remove_config_dt; 332 334 333 - if (!ret) { 334 - struct net_device *ndev = platform_get_drvdata(pdev); 335 - struct stmmac_priv *stpriv = netdev_priv(ndev); 335 + ndev = platform_get_drvdata(pdev); 336 + stpriv = netdev_priv(ndev); 336 337 337 - /* The socfpga driver needs to control the stmmac reset to 338 - * set the phy mode. Create a copy of the core reset handel 339 - * so it can be used by the driver later. 340 - */ 341 - dwmac->stmmac_rst = stpriv->stmmac_rst; 338 + /* The socfpga driver needs to control the stmmac reset to set the phy 339 + * mode. Create a copy of the core reset handle so it can be used by 340 + * the driver later. 341 + */ 342 + dwmac->stmmac_rst = stpriv->stmmac_rst; 342 343 343 - ret = socfpga_dwmac_set_phy_mode(dwmac); 344 - } 344 + ret = socfpga_dwmac_set_phy_mode(dwmac); 345 + if (ret) 346 + goto err_dvr_remove; 347 + 348 + return 0; 349 + 350 + err_dvr_remove: 351 + stmmac_dvr_remove(&pdev->dev); 352 + err_remove_config_dt: 353 + stmmac_remove_config_dt(pdev, plat_dat); 345 354 346 355 return ret; 347 356 }
+18 -5
drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c
··· 345 345 return PTR_ERR(plat_dat); 346 346 347 347 dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL); 348 - if (!dwmac) 349 - return -ENOMEM; 348 + if (!dwmac) { 349 + ret = -ENOMEM; 350 + goto err_remove_config_dt; 351 + } 350 352 351 353 ret = sti_dwmac_parse_data(dwmac, pdev); 352 354 if (ret) { 353 355 dev_err(&pdev->dev, "Unable to parse OF data\n"); 354 - return ret; 356 + goto err_remove_config_dt; 355 357 } 356 358 357 359 dwmac->fix_retime_src = data->fix_retime_src; ··· 365 363 366 364 ret = sti_dwmac_init(pdev, plat_dat->bsp_priv); 367 365 if (ret) 368 - return ret; 366 + goto err_remove_config_dt; 369 367 370 - return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 368 + ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 369 + if (ret) 370 + goto err_dwmac_exit; 371 + 372 + return 0; 373 + 374 + err_dwmac_exit: 375 + sti_dwmac_exit(pdev, plat_dat->bsp_priv); 376 + err_remove_config_dt: 377 + stmmac_remove_config_dt(pdev, plat_dat); 378 + 379 + return ret; 371 380 } 372 381 373 382 static const struct sti_dwmac_of_data stih4xx_dwmac_data = {
+14 -5
drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c
··· 107 107 return PTR_ERR(plat_dat); 108 108 109 109 dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL); 110 - if (!dwmac) 111 - return -ENOMEM; 110 + if (!dwmac) { 111 + ret = -ENOMEM; 112 + goto err_remove_config_dt; 113 + } 112 114 113 115 ret = stm32_dwmac_parse_data(dwmac, &pdev->dev); 114 116 if (ret) { 115 117 dev_err(&pdev->dev, "Unable to parse OF data\n"); 116 - return ret; 118 + goto err_remove_config_dt; 117 119 } 118 120 119 121 plat_dat->bsp_priv = dwmac; 120 122 121 123 ret = stm32_dwmac_init(plat_dat); 122 124 if (ret) 123 - return ret; 125 + goto err_remove_config_dt; 124 126 125 127 ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 126 128 if (ret) 127 - stm32_dwmac_clk_disable(dwmac); 129 + goto err_clk_disable; 130 + 131 + return 0; 132 + 133 + err_clk_disable: 134 + stm32_dwmac_clk_disable(dwmac); 135 + err_remove_config_dt: 136 + stmmac_remove_config_dt(pdev, plat_dat); 128 137 129 138 return ret; 130 139 }
+19 -7
drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
··· 120 120 return PTR_ERR(plat_dat); 121 121 122 122 gmac = devm_kzalloc(dev, sizeof(*gmac), GFP_KERNEL); 123 - if (!gmac) 124 - return -ENOMEM; 123 + if (!gmac) { 124 + ret = -ENOMEM; 125 + goto err_remove_config_dt; 126 + } 125 127 126 128 gmac->interface = of_get_phy_mode(dev->of_node); 127 129 128 130 gmac->tx_clk = devm_clk_get(dev, "allwinner_gmac_tx"); 129 131 if (IS_ERR(gmac->tx_clk)) { 130 132 dev_err(dev, "could not get tx clock\n"); 131 - return PTR_ERR(gmac->tx_clk); 133 + ret = PTR_ERR(gmac->tx_clk); 134 + goto err_remove_config_dt; 132 135 } 133 136 134 137 /* Optional regulator for PHY */ 135 138 gmac->regulator = devm_regulator_get_optional(dev, "phy"); 136 139 if (IS_ERR(gmac->regulator)) { 137 - if (PTR_ERR(gmac->regulator) == -EPROBE_DEFER) 138 - return -EPROBE_DEFER; 140 + if (PTR_ERR(gmac->regulator) == -EPROBE_DEFER) { 141 + ret = -EPROBE_DEFER; 142 + goto err_remove_config_dt; 143 + } 139 144 dev_info(dev, "no regulator found\n"); 140 145 gmac->regulator = NULL; 141 146 } ··· 156 151 157 152 ret = sun7i_gmac_init(pdev, plat_dat->bsp_priv); 158 153 if (ret) 159 - return ret; 154 + goto err_remove_config_dt; 160 155 161 156 ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 162 157 if (ret) 163 - sun7i_gmac_exit(pdev, plat_dat->bsp_priv); 158 + goto err_gmac_exit; 159 + 160 + return 0; 161 + 162 + err_gmac_exit: 163 + sun7i_gmac_exit(pdev, plat_dat->bsp_priv); 164 + err_remove_config_dt: 165 + stmmac_remove_config_dt(pdev, plat_dat); 164 166 165 167 return ret; 166 168 }
+2
drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
··· 43 43 if (axi->axi_xit_frm) 44 44 value |= DMA_AXI_LPI_XIT_FRM; 45 45 46 + value &= ~DMA_AXI_WR_OSR_LMT; 46 47 value |= (axi->axi_wr_osr_lmt & DMA_AXI_WR_OSR_LMT_MASK) << 47 48 DMA_AXI_WR_OSR_LMT_SHIFT; 48 49 50 + value &= ~DMA_AXI_RD_OSR_LMT; 49 51 value |= (axi->axi_rd_osr_lmt & DMA_AXI_RD_OSR_LMT_MASK) << 50 52 DMA_AXI_RD_OSR_LMT_SHIFT; 51 53
+2
drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
··· 30 30 if (axi->axi_xit_frm) 31 31 value |= DMA_AXI_LPI_XIT_FRM; 32 32 33 + value &= ~DMA_AXI_WR_OSR_LMT; 33 34 value |= (axi->axi_wr_osr_lmt & DMA_AXI_OSR_MAX) << 34 35 DMA_AXI_WR_OSR_LMT_SHIFT; 35 36 37 + value &= ~DMA_AXI_RD_OSR_LMT; 36 38 value |= (axi->axi_rd_osr_lmt & DMA_AXI_OSR_MAX) << 37 39 DMA_AXI_RD_OSR_LMT_SHIFT; 38 40
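Both DMA hunks above fix the same read-modify-write bug: OR-ing a new field value into a register without first clearing the field keeps stale bits from the previous value set. A sketch of the corrected pattern (the mask and shift values here are illustrative, not the dwmac register layout):

```c
#include <stdint.h>

#define WR_OSR_LMT_MASK		0xFu	/* field width, illustrative */
#define WR_OSR_LMT_SHIFT	20	/* field position, illustrative */

/* Update a multi-bit register field: clear the field first, then OR in
 * the new value. Skipping the clear (the bug fixed above) would merge
 * the new value with whatever was previously programmed. */
static uint32_t set_wr_osr_lmt(uint32_t reg, uint32_t lmt)
{
	reg &= ~(WR_OSR_LMT_MASK << WR_OSR_LMT_SHIFT);
	reg |= (lmt & WR_OSR_LMT_MASK) << WR_OSR_LMT_SHIFT;
	return reg;
}
```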
-1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 3416 3416 stmmac_set_mac(priv->ioaddr, false); 3417 3417 netif_carrier_off(ndev); 3418 3418 unregister_netdev(ndev); 3419 - of_node_put(priv->plat->phy_node); 3420 3419 if (priv->stmmac_rst) 3421 3420 reset_control_assert(priv->stmmac_rst); 3422 3421 clk_disable_unprepare(priv->pclk);
+33 -6
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 126 126 axi->axi_mb = of_property_read_bool(np, "snps,axi_mb"); 127 127 axi->axi_rb = of_property_read_bool(np, "snps,axi_rb"); 128 128 129 - of_property_read_u32(np, "snps,wr_osr_lmt", &axi->axi_wr_osr_lmt); 130 - of_property_read_u32(np, "snps,rd_osr_lmt", &axi->axi_rd_osr_lmt); 129 + if (of_property_read_u32(np, "snps,wr_osr_lmt", &axi->axi_wr_osr_lmt)) 130 + axi->axi_wr_osr_lmt = 1; 131 + if (of_property_read_u32(np, "snps,rd_osr_lmt", &axi->axi_rd_osr_lmt)) 132 + axi->axi_rd_osr_lmt = 1; 131 133 of_property_read_u32_array(np, "snps,blen", axi->axi_blen, AXI_BLEN); 132 134 of_node_put(np); 133 135 ··· 202 200 /** 203 201 * stmmac_probe_config_dt - parse device-tree driver parameters 204 202 * @pdev: platform_device structure 205 - * @plat: driver data platform structure 206 203 * @mac: MAC address to use 207 204 * Description: 208 205 * this function is to read the driver parameters from device-tree and ··· 307 306 dma_cfg = devm_kzalloc(&pdev->dev, sizeof(*dma_cfg), 308 307 GFP_KERNEL); 309 308 if (!dma_cfg) { 310 - of_node_put(plat->phy_node); 309 + stmmac_remove_config_dt(pdev, plat); 311 310 return ERR_PTR(-ENOMEM); 312 311 } 313 312 plat->dma_cfg = dma_cfg; ··· 330 329 331 330 return plat; 332 331 } 332 + 333 + /** 334 + * stmmac_remove_config_dt - undo the effects of stmmac_probe_config_dt() 335 + * @pdev: platform_device structure 336 + * @plat: driver data platform structure 337 + * 338 + * Release resources claimed by stmmac_probe_config_dt(). 
339 + */ 340 + void stmmac_remove_config_dt(struct platform_device *pdev, 341 + struct plat_stmmacenet_data *plat) 342 + { 343 + struct device_node *np = pdev->dev.of_node; 344 + 345 + if (of_phy_is_fixed_link(np)) 346 + of_phy_deregister_fixed_link(np); 347 + of_node_put(plat->phy_node); 348 + } 333 349 #else 334 350 struct plat_stmmacenet_data * 335 351 stmmac_probe_config_dt(struct platform_device *pdev, const char **mac) 336 352 { 337 353 return ERR_PTR(-ENOSYS); 338 354 } 355 + 356 + void stmmac_remove_config_dt(struct platform_device *pdev, 357 + struct plat_stmmacenet_data *plat) 358 + { 359 + } 339 360 #endif /* CONFIG_OF */ 340 361 EXPORT_SYMBOL_GPL(stmmac_probe_config_dt); 362 + EXPORT_SYMBOL_GPL(stmmac_remove_config_dt); 341 363 342 364 int stmmac_get_platform_resources(struct platform_device *pdev, 343 365 struct stmmac_resources *stmmac_res) ··· 416 392 { 417 393 struct net_device *ndev = platform_get_drvdata(pdev); 418 394 struct stmmac_priv *priv = netdev_priv(ndev); 395 + struct plat_stmmacenet_data *plat = priv->plat; 419 396 int ret = stmmac_dvr_remove(&pdev->dev); 420 397 421 - if (priv->plat->exit) 422 - priv->plat->exit(pdev, priv->plat->bsp_priv); 398 + if (plat->exit) 399 + plat->exit(pdev, plat->bsp_priv); 400 + 401 + stmmac_remove_config_dt(pdev, plat); 423 402 424 403 return ret; 425 404 }
+2
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.h
··· 23 23 24 24 struct plat_stmmacenet_data * 25 25 stmmac_probe_config_dt(struct platform_device *pdev, const char **mac); 26 + void stmmac_remove_config_dt(struct platform_device *pdev, 27 + struct plat_stmmacenet_data *plat); 26 28 27 29 int stmmac_get_platform_resources(struct platform_device *pdev, 28 30 struct stmmac_resources *stmmac_res);
+13 -9
drivers/net/ethernet/synopsys/dwc_eth_qos.c
··· 33 33 #include <linux/stat.h> 34 34 #include <linux/types.h> 35 35 36 - #include <linux/types.h> 37 36 #include <linux/slab.h> 38 37 #include <linux/delay.h> 39 38 #include <linux/mm.h> ··· 42 43 43 44 #include <linux/phy.h> 44 45 #include <linux/mii.h> 45 - #include <linux/delay.h> 46 46 #include <linux/dma-mapping.h> 47 47 #include <linux/vmalloc.h> 48 48 ··· 2881 2883 ret = of_get_phy_mode(lp->pdev->dev.of_node); 2882 2884 if (ret < 0) { 2883 2885 dev_err(&lp->pdev->dev, "error in getting phy i/f\n"); 2884 - goto err_out_clk_dis_phy; 2886 + goto err_out_deregister_fixed_link; 2885 2887 } 2886 2888 2887 2889 lp->phy_interface = ret; ··· 2889 2891 ret = dwceqos_mii_init(lp); 2890 2892 if (ret) { 2891 2893 dev_err(&lp->pdev->dev, "error in dwceqos_mii_init\n"); 2892 - goto err_out_clk_dis_phy; 2894 + goto err_out_deregister_fixed_link; 2893 2895 } 2894 2896 2895 2897 ret = dwceqos_mii_probe(ndev); 2896 2898 if (ret != 0) { 2897 2899 netdev_err(ndev, "mii_probe fail.\n"); 2898 2900 ret = -ENXIO; 2899 - goto err_out_clk_dis_phy; 2901 + goto err_out_deregister_fixed_link; 2900 2902 } 2901 2903 2902 2904 dwceqos_set_umac_addr(lp, lp->ndev->dev_addr, 0); ··· 2914 2916 if (ret) { 2915 2917 dev_err(&lp->pdev->dev, "Unable to retrieve DT, error %d\n", 2916 2918 ret); 2917 - goto err_out_clk_dis_phy; 2919 + goto err_out_deregister_fixed_link; 2918 2920 } 2919 2921 dev_info(&lp->pdev->dev, "pdev->id %d, baseaddr 0x%08lx, irq %d\n", 2920 2922 pdev->id, ndev->base_addr, ndev->irq); ··· 2924 2926 if (ret) { 2925 2927 dev_err(&lp->pdev->dev, "Unable to request IRQ %d, error %d\n", 2926 2928 ndev->irq, ret); 2927 - goto err_out_clk_dis_phy; 2929 + goto err_out_deregister_fixed_link; 2928 2930 } 2929 2931 2930 2932 if (netif_msg_probe(lp)) ··· 2935 2937 ret = register_netdev(ndev); 2936 2938 if (ret) { 2937 2939 dev_err(&pdev->dev, "Cannot register net device, aborting.\n"); 2938 - goto err_out_clk_dis_phy; 2940 + goto err_out_deregister_fixed_link; 2939 2941 } 2940 2942 2941 
2943 return 0; 2942 2944 2945 + err_out_deregister_fixed_link: 2946 + if (of_phy_is_fixed_link(pdev->dev.of_node)) 2947 + of_phy_deregister_fixed_link(pdev->dev.of_node); 2943 2948 err_out_clk_dis_phy: 2944 2949 clk_disable_unprepare(lp->phy_ref_clk); 2945 2950 err_out_clk_dis_aper: ··· 2962 2961 if (ndev) { 2963 2962 lp = netdev_priv(ndev); 2964 2963 2965 - if (ndev->phydev) 2964 + if (ndev->phydev) { 2966 2965 phy_disconnect(ndev->phydev); 2966 + if (of_phy_is_fixed_link(pdev->dev.of_node)) 2967 + of_phy_deregister_fixed_link(pdev->dev.of_node); 2968 + } 2967 2969 mdiobus_unregister(lp->mii_bus); 2968 2970 mdiobus_free(lp->mii_bus); 2969 2971
+1
drivers/net/ethernet/ti/cpmac.c
··· 1113 1113 if (!dev) 1114 1114 return -ENOMEM; 1115 1115 1116 + SET_NETDEV_DEV(dev, &pdev->dev); 1116 1117 platform_set_drvdata(pdev, dev); 1117 1118 priv = netdev_priv(dev); 1118 1119
+1
drivers/net/ethernet/ti/cpsw-phy-sel.c
··· 81 81 }; 82 82 83 83 mask = GMII_SEL_MODE_MASK << (slave * 2) | BIT(slave + 6); 84 + mask |= BIT(slave + 4); 84 85 mode <<= slave * 2; 85 86 86 87 if (priv->rmii_clock_external) {
+6 -14
drivers/net/ethernet/ti/cpsw.c
··· 2459 2459 if (strcmp(slave_node->name, "slave")) 2460 2460 continue; 2461 2461 2462 - if (of_phy_is_fixed_link(slave_node)) { 2463 - struct phy_device *phydev; 2464 - 2465 - phydev = of_phy_find_device(slave_node); 2466 - if (phydev) { 2467 - fixed_phy_unregister(phydev); 2468 - /* Put references taken by 2469 - * of_phy_find_device() and 2470 - * of_phy_register_fixed_link(). 2471 - */ 2472 - phy_device_free(phydev); 2473 - phy_device_free(phydev); 2474 - } 2475 - } 2462 + if (of_phy_is_fixed_link(slave_node)) 2463 + of_phy_deregister_fixed_link(slave_node); 2476 2464 2477 2465 of_node_put(slave_data->phy_node); 2478 2466 ··· 2930 2942 /* Select default pin state */ 2931 2943 pinctrl_pm_select_default_state(dev); 2932 2944 2945 + /* shut up ASSERT_RTNL() warning in netif_set_real_num_tx/rx_queues */ 2946 + rtnl_lock(); 2933 2947 if (cpsw->data.dual_emac) { 2934 2948 int i; 2935 2949 ··· 2943 2953 if (netif_running(ndev)) 2944 2954 cpsw_ndo_open(ndev); 2945 2955 } 2956 + rtnl_unlock(); 2957 + 2946 2958 return 0; 2947 2959 } 2948 2960 #endif
+9 -1
drivers/net/ethernet/ti/davinci_emac.c
··· 1767 1767 */ 1768 1768 static int davinci_emac_probe(struct platform_device *pdev) 1769 1769 { 1770 + struct device_node *np = pdev->dev.of_node; 1770 1771 int rc = 0; 1771 1772 struct resource *res, *res_ctrl; 1772 1773 struct net_device *ndev; ··· 1806 1805 if (!pdata) { 1807 1806 dev_err(&pdev->dev, "no platform data\n"); 1808 1807 rc = -ENODEV; 1809 - goto no_pdata; 1808 + goto err_free_netdev; 1810 1809 } 1811 1810 1812 1811 /* MAC addr and PHY mask , RMII enable info from platform_data */ ··· 1942 1941 cpdma_chan_destroy(priv->rxchan); 1943 1942 cpdma_ctlr_destroy(priv->dma); 1944 1943 no_pdata: 1944 + if (of_phy_is_fixed_link(np)) 1945 + of_phy_deregister_fixed_link(np); 1946 + of_node_put(priv->phy_node); 1947 + err_free_netdev: 1945 1948 free_netdev(ndev); 1946 1949 return rc; 1947 1950 } ··· 1961 1956 { 1962 1957 struct net_device *ndev = platform_get_drvdata(pdev); 1963 1958 struct emac_priv *priv = netdev_priv(ndev); 1959 + struct device_node *np = pdev->dev.of_node; 1964 1960 1965 1961 dev_notice(&ndev->dev, "DaVinci EMAC: davinci_emac_remove()\n"); 1966 1962 ··· 1974 1968 unregister_netdev(ndev); 1975 1969 of_node_put(priv->phy_node); 1976 1970 pm_runtime_disable(&pdev->dev); 1971 + if (of_phy_is_fixed_link(np)) 1972 + of_phy_deregister_fixed_link(np); 1977 1973 free_netdev(ndev); 1978 1974 1979 1975 return 0;
+4 -10
drivers/net/geneve.c
··· 859 859 struct geneve_dev *geneve = netdev_priv(dev); 860 860 struct geneve_sock *gs4; 861 861 struct rtable *rt = NULL; 862 - const struct iphdr *iip; /* interior IP header */ 863 862 int err = -EINVAL; 864 863 struct flowi4 fl4; 865 864 __u8 tos, ttl; ··· 889 890 sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true); 890 891 skb_reset_mac_header(skb); 891 892 892 - iip = ip_hdr(skb); 893 - 894 893 if (info) { 895 894 const struct ip_tunnel_key *key = &info->key; 896 895 u8 *opts = NULL; ··· 908 911 if (unlikely(err)) 909 912 goto tx_error; 910 913 911 - tos = ip_tunnel_ecn_encap(key->tos, iip, skb); 914 + tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb); 912 915 ttl = key->ttl; 913 916 df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0; 914 917 } else { ··· 917 920 if (unlikely(err)) 918 921 goto tx_error; 919 922 920 - tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, iip, skb); 923 + tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, ip_hdr(skb), skb); 921 924 ttl = geneve->ttl; 922 925 if (!ttl && IN_MULTICAST(ntohl(fl4.daddr))) 923 926 ttl = 1; ··· 949 952 { 950 953 struct geneve_dev *geneve = netdev_priv(dev); 951 954 struct dst_entry *dst = NULL; 952 - const struct iphdr *iip; /* interior IP header */ 953 955 struct geneve_sock *gs6; 954 956 int err = -EINVAL; 955 957 struct flowi6 fl6; ··· 978 982 sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true); 979 983 skb_reset_mac_header(skb); 980 984 981 - iip = ip_hdr(skb); 982 - 983 985 if (info) { 984 986 const struct ip_tunnel_key *key = &info->key; 985 987 u8 *opts = NULL; ··· 998 1004 if (unlikely(err)) 999 1005 goto tx_error; 1000 1006 1001 - prio = ip_tunnel_ecn_encap(key->tos, iip, skb); 1007 + prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb); 1002 1008 ttl = key->ttl; 1003 1009 label = info->key.label; 1004 1010 } else { ··· 1008 1014 goto tx_error; 1009 1015 1010 1016 prio = ip_tunnel_ecn_encap(ip6_tclass(fl6.flowlabel), 1011 - iip, skb); 1017 + ip_hdr(skb), skb); 1012 1018 
ttl = geneve->ttl; 1013 1019 if (!ttl && ipv6_addr_is_multicast(&fl6.daddr)) 1014 1020 ttl = 1;
+5
drivers/net/hyperv/netvsc_drv.c
··· 47 47 NETIF_F_TSO | \ 48 48 NETIF_F_TSO6 | \ 49 49 NETIF_F_HW_CSUM) 50 + 51 + /* Restrict GSO size to account for NVGRE */ 52 + #define NETVSC_GSO_MAX_SIZE 62768 53 + 50 54 static int ring_size = 128; 51 55 module_param(ring_size, int, S_IRUGO); 52 56 MODULE_PARM_DESC(ring_size, "Ring buffer size (# of pages)"); ··· 1404 1400 nvdev = net_device_ctx->nvdev; 1405 1401 netif_set_real_num_tx_queues(net, nvdev->num_chn); 1406 1402 netif_set_real_num_rx_queues(net, nvdev->num_chn); 1403 + netif_set_gso_max_size(net, NETVSC_GSO_MAX_SIZE); 1407 1404 1408 1405 ret = register_netdev(net); 1409 1406 if (ret != 0) {
-1
drivers/net/ieee802154/adf7242.c
··· 20 20 #include <linux/skbuff.h> 21 21 #include <linux/of.h> 22 22 #include <linux/irq.h> 23 - #include <linux/delay.h> 24 23 #include <linux/debugfs.h> 25 24 #include <linux/bitops.h> 26 25 #include <linux/ieee802154.h>
+14 -5
drivers/net/ipvlan/ipvlan_main.c
··· 497 497 struct net_device *phy_dev; 498 498 int err; 499 499 u16 mode = IPVLAN_MODE_L3; 500 + bool create = false; 500 501 501 502 if (!tb[IFLA_LINK]) 502 503 return -EINVAL; ··· 514 513 err = ipvlan_port_create(phy_dev); 515 514 if (err < 0) 516 515 return err; 516 + create = true; 517 517 } 518 518 519 519 if (data && data[IFLA_IPVLAN_MODE]) ··· 538 536 539 537 err = register_netdevice(dev); 540 538 if (err < 0) 541 - return err; 539 + goto destroy_ipvlan_port; 542 540 543 541 err = netdev_upper_dev_link(phy_dev, dev); 544 542 if (err) { 545 - unregister_netdevice(dev); 546 - return err; 543 + goto unregister_netdev; 547 544 } 548 545 err = ipvlan_set_port_mode(port, mode); 549 546 if (err) { 550 - unregister_netdevice(dev); 551 - return err; 547 + goto unlink_netdev; 552 548 } 553 549 554 550 list_add_tail_rcu(&ipvlan->pnode, &port->ipvlans); 555 551 netif_stacked_transfer_operstate(phy_dev, dev); 556 552 return 0; 553 + 554 + unlink_netdev: 555 + netdev_upper_dev_unlink(phy_dev, dev); 556 + unregister_netdev: 557 + unregister_netdevice(dev); 558 + destroy_ipvlan_port: 559 + if (create) 560 + ipvlan_port_destroy(phy_dev); 561 + return err; 557 562 } 558 563 559 564 static void ipvlan_link_delete(struct net_device *dev, struct list_head *head)
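The ipvlan fix above records whether this link call created the port so that the error path destroys only what it owns; a pre-existing port shared with other devices is left intact. A minimal sketch of that ownership flag (all names are illustrative):

```c
/* Stand-in for the per-lowerdev port object. */
static int port_exists;

static void port_create(void)  { port_exists = 1; }
static void port_destroy(void) { port_exists = 0; }

/* Create the port only if it does not already exist, and remember
 * whether we did; the error path tears down only what this call made. */
static int link_new(int register_fails)
{
	int create = 0;

	if (!port_exists) {
		port_create();
		create = 1;
	}

	if (register_fails) {
		if (create)
			port_destroy();
		return -1;
	}
	return 0;
}
```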
+1
drivers/net/irda/irda-usb.c
··· 1723 1723 /* Don't change this buffer size and allocation without doing 1724 1724 * some heavy and complete testing. Don't ask why :-( 1725 1725 * Jean II */ 1726 + ret = -ENOMEM; 1726 1727 self->speed_buff = kzalloc(IRDA_USB_SPEED_MTU, GFP_KERNEL); 1727 1728 if (!self->speed_buff) 1728 1729 goto err_out_3;
+3 -1
drivers/net/irda/w83977af_ir.c
··· 518 518 519 519 mtt = irda_get_mtt(skb); 520 520 pr_debug("%s(%ld), mtt=%d\n", __func__ , jiffies, mtt); 521 - if (mtt) 521 + if (mtt > 1000) 522 + mdelay(mtt/1000); 523 + else if (mtt) 522 524 udelay(mtt); 523 525 524 526 /* Enable DMA interrupt */
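The w83977af_ir hunk above stops busy-waiting in udelay() for long media turnaround times and switches to mdelay() past a threshold. A sketch of just the selection logic, assuming the same 1000 µs cutoff as the hunk; the enum and function name are illustrative:

```c
#include <assert.h>

/* Pick a delay primitive for a turnaround time given in microseconds:
 * waits longer than 1000 us use millisecond granularity (mdelay),
 * short non-zero waits stay on udelay, zero means no wait at all. */
enum delay_kind { DELAY_NONE, DELAY_UDELAY, DELAY_MDELAY };

static enum delay_kind pick_delay(unsigned int mtt_us, unsigned int *arg)
{
    if (mtt_us > 1000) {
        *arg = mtt_us / 1000;   /* argument for mdelay(), in ms */
        return DELAY_MDELAY;
    }
    if (mtt_us) {
        *arg = mtt_us;          /* argument for udelay(), in us */
        return DELAY_UDELAY;
    }
    *arg = 0;
    return DELAY_NONE;
}
```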
+2 -1
drivers/net/macvlan.c
··· 623 623 return 0; 624 624 625 625 clear_multi: 626 - dev_set_allmulti(lowerdev, -1); 626 + if (dev->flags & IFF_ALLMULTI) 627 + dev_set_allmulti(lowerdev, -1); 627 628 del_unicast: 628 629 dev_uc_del(lowerdev, dev->dev_addr); 629 630 out:
+12 -7
drivers/net/macvtap.c
··· 491 491 /* Don't put anything that may fail after macvlan_common_newlink 492 492 * because we can't undo what it does. 493 493 */ 494 - return macvlan_common_newlink(src_net, dev, tb, data); 494 + err = macvlan_common_newlink(src_net, dev, tb, data); 495 + if (err) { 496 + netdev_rx_handler_unregister(dev); 497 + return err; 498 + } 499 + 500 + return 0; 495 501 } 496 502 497 503 static void macvtap_dellink(struct net_device *dev, ··· 742 736 743 737 if (zerocopy) 744 738 err = zerocopy_sg_from_iter(skb, from); 745 - else { 739 + else 746 740 err = skb_copy_datagram_from_iter(skb, 0, from, len); 747 - if (!err && m && m->msg_control) { 748 - struct ubuf_info *uarg = m->msg_control; 749 - uarg->callback(uarg, false); 750 - } 751 - } 752 741 753 742 if (err) 754 743 goto err_kfree; ··· 774 773 skb_shinfo(skb)->destructor_arg = m->msg_control; 775 774 skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY; 776 775 skb_shinfo(skb)->tx_flags |= SKBTX_SHARED_FRAG; 776 + } else if (m && m->msg_control) { 777 + struct ubuf_info *uarg = m->msg_control; 778 + uarg->callback(uarg, false); 777 779 } 780 + 778 781 if (vlan) { 779 782 skb->dev = vlan->dev; 780 783 dev_queue_xmit(skb);
+4 -4
drivers/net/phy/micrel.c
··· 318 318 /* Limit supported and advertised modes in fiber mode */ 319 319 if (of_property_read_bool(of_node, "micrel,fiber-mode")) { 320 320 phydev->dev_flags |= MICREL_PHY_FXEN; 321 - phydev->supported &= SUPPORTED_FIBRE | 322 - SUPPORTED_100baseT_Full | 321 + phydev->supported &= SUPPORTED_100baseT_Full | 323 322 SUPPORTED_100baseT_Half; 324 - phydev->advertising &= ADVERTISED_FIBRE | 325 - ADVERTISED_100baseT_Full | 323 + phydev->supported |= SUPPORTED_FIBRE; 324 + phydev->advertising &= ADVERTISED_100baseT_Full | 326 325 ADVERTISED_100baseT_Half; 326 + phydev->advertising |= ADVERTISED_FIBRE; 327 327 phydev->autoneg = AUTONEG_DISABLE; 328 328 } 329 329
+13 -3
drivers/net/phy/phy_device.c
··· 857 857 int phy_attach_direct(struct net_device *dev, struct phy_device *phydev, 858 858 u32 flags, phy_interface_t interface) 859 859 { 860 + struct module *ndev_owner = dev->dev.parent->driver->owner; 860 861 struct mii_bus *bus = phydev->mdio.bus; 861 862 struct device *d = &phydev->mdio.dev; 862 863 int err; 863 864 864 - if (!try_module_get(bus->owner)) { 865 + /* For Ethernet device drivers that register their own MDIO bus, we 866 + * will have bus->owner match ndev_mod, so we do not want to increment 867 + * our own module->refcnt here, otherwise we would not be able to 868 + * unload later on. 869 + */ 870 + if (ndev_owner != bus->owner && !try_module_get(bus->owner)) { 865 871 dev_err(&dev->dev, "failed to get the bus module\n"); 866 872 return -EIO; 867 873 } ··· 927 921 928 922 error: 929 923 put_device(d); 930 - module_put(bus->owner); 924 + if (ndev_owner != bus->owner) 925 + module_put(bus->owner); 931 926 return err; 932 927 } 933 928 EXPORT_SYMBOL(phy_attach_direct); ··· 978 971 */ 979 972 void phy_detach(struct phy_device *phydev) 980 973 { 974 + struct net_device *dev = phydev->attached_dev; 975 + struct module *ndev_owner = dev->dev.parent->driver->owner; 981 976 struct mii_bus *bus; 982 977 int i; 983 978 ··· 1007 998 bus = phydev->mdio.bus; 1008 999 1009 1000 put_device(&phydev->mdio.dev); 1010 - module_put(bus->owner); 1001 + if (ndev_owner != bus->owner) 1002 + module_put(bus->owner); 1011 1003 } 1012 1004 EXPORT_SYMBOL(phy_detach); 1013 1005
+12 -8
drivers/net/phy/realtek.c
··· 102 102 if (ret < 0) 103 103 return ret; 104 104 105 - if (phydev->interface == PHY_INTERFACE_MODE_RGMII) { 106 - /* enable TXDLY */ 107 - phy_write(phydev, RTL8211F_PAGE_SELECT, 0xd08); 108 - reg = phy_read(phydev, 0x11); 105 + phy_write(phydev, RTL8211F_PAGE_SELECT, 0xd08); 106 + reg = phy_read(phydev, 0x11); 107 + 108 + /* enable TX-delay for rgmii-id and rgmii-txid, otherwise disable it */ 109 + if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID || 110 + phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID) 109 111 reg |= RTL8211F_TX_DELAY; 110 - phy_write(phydev, 0x11, reg); 111 - /* restore to default page 0 */ 112 - phy_write(phydev, RTL8211F_PAGE_SELECT, 0x0); 113 - } 112 + else 113 + reg &= ~RTL8211F_TX_DELAY; 114 + 115 + phy_write(phydev, 0x11, reg); 116 + /* restore to default page 0 */ 117 + phy_write(phydev, RTL8211F_PAGE_SELECT, 0x0); 114 118 115 119 return 0; 116 120 }
+4 -6
drivers/net/tun.c
··· 1246 1246 1247 1247 if (zerocopy) 1248 1248 err = zerocopy_sg_from_iter(skb, from); 1249 - else { 1249 + else 1250 1250 err = skb_copy_datagram_from_iter(skb, 0, from, len); 1251 - if (!err && msg_control) { 1252 - struct ubuf_info *uarg = msg_control; 1253 - uarg->callback(uarg, false); 1254 - } 1255 - } 1256 1251 1257 1252 if (err) { 1258 1253 this_cpu_inc(tun->pcpu_stats->rx_dropped); ··· 1293 1298 skb_shinfo(skb)->destructor_arg = msg_control; 1294 1299 skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY; 1295 1300 skb_shinfo(skb)->tx_flags |= SKBTX_SHARED_FRAG; 1301 + } else if (msg_control) { 1302 + struct ubuf_info *uarg = msg_control; 1303 + uarg->callback(uarg, false); 1296 1304 } 1297 1305 1298 1306 skb_reset_network_header(skb);
+3 -3
drivers/net/usb/asix_devices.c
··· 603 603 u16 medium; 604 604 605 605 /* Stop MAC operation */ 606 - medium = asix_read_medium_status(dev, 0); 606 + medium = asix_read_medium_status(dev, 1); 607 607 medium &= ~AX_MEDIUM_RE; 608 - asix_write_medium_mode(dev, medium, 0); 608 + asix_write_medium_mode(dev, medium, 1); 609 609 610 610 netdev_dbg(dev->net, "ax88772_suspend: medium=0x%04x\n", 611 - asix_read_medium_status(dev, 0)); 611 + asix_read_medium_status(dev, 1)); 612 612 613 613 /* Preserve BMCR for restoring */ 614 614 priv->presvd_phy_bmcr =
+31 -7
drivers/net/usb/cdc_ether.c
··· 388 388 case USB_CDC_NOTIFY_NETWORK_CONNECTION: 389 389 netif_dbg(dev, timer, dev->net, "CDC: carrier %s\n", 390 390 event->wValue ? "on" : "off"); 391 - 392 - /* Work-around for devices with broken off-notifications */ 393 - if (event->wValue && 394 - !test_bit(__LINK_STATE_NOCARRIER, &dev->net->state)) 395 - usbnet_link_change(dev, 0, 0); 396 - 397 391 usbnet_link_change(dev, !!event->wValue, 0); 398 392 break; 399 393 case USB_CDC_NOTIFY_SPEED_CHANGE: /* tx/rx rates */ ··· 460 466 return 1; 461 467 } 462 468 469 + /* Ensure correct link state 470 + * 471 + * Some devices (ZTE MF823/831/910) export two carrier on notifications when 472 + * connected. This causes the link state to be incorrect. Work around this by 473 + * always setting the state to off, then on. 474 + */ 475 + void usbnet_cdc_zte_status(struct usbnet *dev, struct urb *urb) 476 + { 477 + struct usb_cdc_notification *event; 478 + 479 + if (urb->actual_length < sizeof(*event)) 480 + return; 481 + 482 + event = urb->transfer_buffer; 483 + 484 + if (event->bNotificationType != USB_CDC_NOTIFY_NETWORK_CONNECTION) { 485 + usbnet_cdc_status(dev, urb); 486 + return; 487 + } 488 + 489 + netif_dbg(dev, timer, dev->net, "CDC: carrier %s\n", 490 + event->wValue ? "on" : "off"); 491 + 492 + if (event->wValue && 493 + netif_carrier_ok(dev->net)) 494 + netif_carrier_off(dev->net); 495 + 496 + usbnet_link_change(dev, !!event->wValue, 0); 497 + } 498 + 463 499 static const struct driver_info cdc_info = { 464 500 .description = "CDC Ethernet Device", 465 501 .flags = FLAG_ETHER | FLAG_POINTTOPOINT, ··· 505 481 .flags = FLAG_ETHER | FLAG_POINTTOPOINT, 506 482 .bind = usbnet_cdc_zte_bind, 507 483 .unbind = usbnet_cdc_unbind, 508 - .status = usbnet_cdc_status, 484 + .status = usbnet_cdc_zte_status, 509 485 .set_rx_mode = usbnet_cdc_update_filter, 510 486 .manage_power = usbnet_manage_power, 511 487 .rx_fixup = usbnet_cdc_zte_rx_fixup,
+21
drivers/net/usb/cdc_mbim.c
··· 602 602 .data = CDC_NCM_FLAG_NDP_TO_END, 603 603 }; 604 604 605 + /* Some modems (e.g. Telit LE922A6) do not work properly with altsetting 606 + * toggle done in cdc_ncm_bind_common. CDC_MBIM_FLAG_AVOID_ALTSETTING_TOGGLE 607 + * flag is used to avoid this procedure. 608 + */ 609 + static const struct driver_info cdc_mbim_info_avoid_altsetting_toggle = { 610 + .description = "CDC MBIM", 611 + .flags = FLAG_NO_SETINT | FLAG_MULTI_PACKET | FLAG_WWAN, 612 + .bind = cdc_mbim_bind, 613 + .unbind = cdc_mbim_unbind, 614 + .manage_power = cdc_mbim_manage_power, 615 + .rx_fixup = cdc_mbim_rx_fixup, 616 + .tx_fixup = cdc_mbim_tx_fixup, 617 + .data = CDC_MBIM_FLAG_AVOID_ALTSETTING_TOGGLE, 618 + }; 619 + 605 620 static const struct usb_device_id mbim_devs[] = { 606 621 /* This duplicate NCM entry is intentional. MBIM devices can 607 622 * be disguised as NCM by default, and this is necessary to ··· 641 626 { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE), 642 627 .driver_info = (unsigned long)&cdc_mbim_info_ndp_to_end, 643 628 }, 629 + 630 + /* Telit LE922A6 in MBIM composition */ 631 + { USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x1041, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE), 632 + .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle, 633 + }, 634 + 644 635 /* default entry */ 645 636 { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE), 646 637 .driver_info = (unsigned long)&cdc_mbim_info_zlp,
+9 -5
drivers/net/usb/cdc_ncm.c
··· 839 839 840 840 iface_no = ctx->data->cur_altsetting->desc.bInterfaceNumber; 841 841 842 + /* Device-specific flags */ 843 + ctx->drvflags = drvflags; 844 + 842 845 /* Reset data interface. Some devices will not reset properly 843 846 * unless they are configured first. Toggle the altsetting to 844 - * force a reset 847 + * force a reset. 848 + * Some other devices do not work properly with this procedure 849 + * that can be avoided using quirk CDC_MBIM_FLAG_AVOID_ALTSETTING_TOGGLE 845 850 */ 846 - usb_set_interface(dev->udev, iface_no, data_altsetting); 851 + if (!(ctx->drvflags & CDC_MBIM_FLAG_AVOID_ALTSETTING_TOGGLE)) 852 + usb_set_interface(dev->udev, iface_no, data_altsetting); 853 + 847 854 temp = usb_set_interface(dev->udev, iface_no, 0); 848 855 if (temp) { 849 856 dev_dbg(&intf->dev, "set interface failed\n"); ··· 896 889 897 890 /* finish setting up the device specific data */ 898 891 cdc_ncm_setup(dev); 899 - 900 - /* Device-specific flags */ 901 - ctx->drvflags = drvflags; 902 892 903 893 /* Allocate the delayed NDP if needed. */ 904 894 if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) {
+1
drivers/net/usb/lan78xx.c
··· 3395 3395 if (buf) { 3396 3396 dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL); 3397 3397 if (!dev->urb_intr) { 3398 + ret = -ENOMEM; 3398 3399 kfree(buf); 3399 3400 goto out3; 3400 3401 } else {
+1
drivers/net/usb/qmi_wwan.c
··· 894 894 {QMI_FIXED_INTF(0x1bbb, 0x0203, 2)}, /* Alcatel L800MA */ 895 895 {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */ 896 896 {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */ 897 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */ 897 898 {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */ 898 899 {QMI_FIXED_INTF(0x1bc7, 0x1201, 2)}, /* Telit LE920 */ 899 900 {QMI_FIXED_INTF(0x1c9e, 0x9b01, 3)}, /* XS Stick W100-2 from 4G Systems */
+14 -5
drivers/net/virtio_net.c
··· 969 969 struct virtnet_info *vi = netdev_priv(dev); 970 970 struct virtio_device *vdev = vi->vdev; 971 971 int ret; 972 - struct sockaddr *addr = p; 972 + struct sockaddr *addr; 973 973 struct scatterlist sg; 974 974 975 - ret = eth_prepare_mac_addr_change(dev, p); 975 + addr = kmalloc(sizeof(*addr), GFP_KERNEL); 976 + if (!addr) 977 + return -ENOMEM; 978 + memcpy(addr, p, sizeof(*addr)); 979 + 980 + ret = eth_prepare_mac_addr_change(dev, addr); 976 981 if (ret) 977 - return ret; 982 + goto out; 978 983 979 984 if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_MAC_ADDR)) { 980 985 sg_init_one(&sg, addr->sa_data, dev->addr_len); ··· 987 982 VIRTIO_NET_CTRL_MAC_ADDR_SET, &sg)) { 988 983 dev_warn(&vdev->dev, 989 984 "Failed to set mac address by vq command.\n"); 990 - return -EINVAL; 985 + ret = -EINVAL; 986 + goto out; 991 987 } 992 988 } else if (virtio_has_feature(vdev, VIRTIO_NET_F_MAC) && 993 989 !virtio_has_feature(vdev, VIRTIO_F_VERSION_1)) { ··· 1002 996 } 1003 997 1004 998 eth_commit_mac_addr_change(dev, p); 999 + ret = 0; 1005 1000 1006 - return 0; 1001 + out: 1002 + kfree(addr); 1003 + return ret; 1007 1004 } 1008 1005 1009 1006 static struct rtnl_link_stats64 *virtnet_stats(struct net_device *dev,
+7 -3
drivers/net/vxlan.c
··· 611 611 struct vxlan_rdst *rd = NULL; 612 612 struct vxlan_fdb *f; 613 613 int notify = 0; 614 + int rc; 614 615 615 616 f = __vxlan_find_mac(vxlan, mac); 616 617 if (f) { ··· 642 641 if ((flags & NLM_F_APPEND) && 643 642 (is_multicast_ether_addr(f->eth_addr) || 644 643 is_zero_ether_addr(f->eth_addr))) { 645 - int rc = vxlan_fdb_append(f, ip, port, vni, ifindex, 646 - &rd); 644 + rc = vxlan_fdb_append(f, ip, port, vni, ifindex, &rd); 647 645 648 646 if (rc < 0) 649 647 return rc; ··· 673 673 INIT_LIST_HEAD(&f->remotes); 674 674 memcpy(f->eth_addr, mac, ETH_ALEN); 675 675 676 - vxlan_fdb_append(f, ip, port, vni, ifindex, &rd); 676 + rc = vxlan_fdb_append(f, ip, port, vni, ifindex, &rd); 677 + if (rc < 0) { 678 + kfree(f); 679 + return rc; 680 + } 677 681 678 682 ++vxlan->addrcnt; 679 683 hlist_add_head_rcu(&f->hlist,
+7 -6
drivers/net/wireless/marvell/mwifiex/cfg80211.c
··· 2222 2222 is_scanning_required = 1; 2223 2223 } else { 2224 2224 mwifiex_dbg(priv->adapter, MSG, 2225 - "info: trying to associate to '%s' bssid %pM\n", 2226 - (char *)req_ssid.ssid, bss->bssid); 2225 + "info: trying to associate to '%.*s' bssid %pM\n", 2226 + req_ssid.ssid_len, (char *)req_ssid.ssid, 2227 + bss->bssid); 2227 2228 memcpy(&priv->cfg_bssid, bss->bssid, ETH_ALEN); 2228 2229 break; 2229 2230 } ··· 2284 2283 } 2285 2284 2286 2285 mwifiex_dbg(adapter, INFO, 2287 - "info: Trying to associate to %s and bssid %pM\n", 2288 - (char *)sme->ssid, sme->bssid); 2286 + "info: Trying to associate to %.*s and bssid %pM\n", 2287 + (int)sme->ssid_len, (char *)sme->ssid, sme->bssid); 2289 2288 2290 2289 if (!mwifiex_stop_bg_scan(priv)) 2291 2290 cfg80211_sched_scan_stopped_rtnl(priv->wdev.wiphy); ··· 2418 2417 } 2419 2418 2420 2419 mwifiex_dbg(priv->adapter, MSG, 2421 - "info: trying to join to %s and bssid %pM\n", 2422 - (char *)params->ssid, params->bssid); 2420 + "info: trying to join to %.*s and bssid %pM\n", 2421 + params->ssid_len, (char *)params->ssid, params->bssid); 2423 2422 2424 2423 mwifiex_set_ibss_params(priv, params); 2425 2424
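The mwifiex hunk above switches the log lines to the "%.*s" precision form because an SSID is a length-delimited byte buffer with no guaranteed NUL terminator, so plain "%s" could read past its end. A user-space sketch of the same technique; the function name is illustrative:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* "%.*s" caps the number of bytes the printf family reads from the
 * string argument, so a non-NUL-terminated buffer is printed safely:
 * at most 'ssid_len' bytes are consumed. */
static int format_ssid(char *out, size_t outsz,
                       const unsigned char *ssid, int ssid_len)
{
    return snprintf(out, outsz, "trying to associate to '%.*s'",
                    ssid_len, (const char *)ssid);
}
```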
+20 -5
drivers/nvdimm/bus.c
··· 715 715 716 716 u32 nd_cmd_out_size(struct nvdimm *nvdimm, int cmd, 717 717 const struct nd_cmd_desc *desc, int idx, const u32 *in_field, 718 - const u32 *out_field) 718 + const u32 *out_field, unsigned long remainder) 719 719 { 720 720 if (idx >= desc->out_num) 721 721 return UINT_MAX; ··· 727 727 return in_field[1]; 728 728 else if (nvdimm && cmd == ND_CMD_VENDOR && idx == 2) 729 729 return out_field[1]; 730 - else if (!nvdimm && cmd == ND_CMD_ARS_STATUS && idx == 2) 731 - return out_field[1] - 8; 732 - else if (cmd == ND_CMD_CALL) { 730 + else if (!nvdimm && cmd == ND_CMD_ARS_STATUS && idx == 2) { 731 + /* 732 + * Per table 9-276 ARS Data in ACPI 6.1, out_field[1] is 733 + * "Size of Output Buffer in bytes, including this 734 + * field." 735 + */ 736 + if (out_field[1] < 4) 737 + return 0; 738 + /* 739 + * ACPI 6.1 is ambiguous if 'status' is included in the 740 + * output size. If we encounter an output size that 741 + * overshoots the remainder by 4 bytes, assume it was 742 + * including 'status'. 743 + */ 744 + if (out_field[1] - 8 == remainder) 745 + return remainder; 746 + return out_field[1] - 4; 747 + } else if (cmd == ND_CMD_CALL) { 733 748 struct nd_cmd_pkg *pkg = (struct nd_cmd_pkg *) in_field; 734 749 735 750 return pkg->nd_size_out; ··· 891 876 /* process an output envelope */ 892 877 for (i = 0; i < desc->out_num; i++) { 893 878 u32 out_size = nd_cmd_out_size(nvdimm, cmd, desc, i, 894 - (u32 *) in_env, (u32 *) out_env); 879 + (u32 *) in_env, (u32 *) out_env, 0); 895 880 u32 copy; 896 881 897 882 if (out_size == UINT_MAX) {
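The nvdimm hunk above works around firmware that is inconsistent about whether the reported ARS Status output size includes the 4-byte 'status' field in addition to the 4-byte size field. A user-space sketch of just that clamping logic, with an illustrative function name:

```c
#include <assert.h>
#include <stdint.h>

/* 'reported' is the size the firmware put in out_field[1] (which per
 * ACPI includes the 4-byte size field itself); 'remainder' is the
 * space actually left in the caller's buffer. */
static uint32_t ars_status_out_size(uint32_t reported, uint32_t remainder)
{
    if (reported < 4)               /* must at least cover the size field */
        return 0;
    if (reported - 8 == remainder)  /* firmware counted 'status' as well */
        return remainder;
    return reported - 4;            /* strip only the size field */
}
```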
+15
drivers/of/of_mdio.c
··· 490 490 return -ENODEV; 491 491 } 492 492 EXPORT_SYMBOL(of_phy_register_fixed_link); 493 + 494 + void of_phy_deregister_fixed_link(struct device_node *np) 495 + { 496 + struct phy_device *phydev; 497 + 498 + phydev = of_phy_find_device(np); 499 + if (!phydev) 500 + return; 501 + 502 + fixed_phy_unregister(phydev); 503 + 504 + put_device(&phydev->mdio.dev); /* of_phy_find_device() */ 505 + phy_device_free(phydev); /* fixed_phy_register() */ 506 + } 507 + EXPORT_SYMBOL(of_phy_deregister_fixed_link);
+1 -1
drivers/pci/host/pcie-designware-plat.c
··· 3 3 * 4 4 * Copyright (C) 2015-2016 Synopsys, Inc. (www.synopsys.com) 5 5 * 6 - * Authors: Joao Pinto <jpmpinto@gmail.com> 6 + * Authors: Joao Pinto <Joao.Pinto@synopsys.com> 7 7 * 8 8 * This program is free software; you can redistribute it and/or modify 9 9 * it under the terms of the GNU General Public License version 2 as
-14
drivers/pci/pcie/aer/aer_inject.c
··· 307 307 return 0; 308 308 } 309 309 310 - static struct pci_dev *pcie_find_root_port(struct pci_dev *dev) 311 - { 312 - while (1) { 313 - if (!pci_is_pcie(dev)) 314 - break; 315 - if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) 316 - return dev; 317 - if (!dev->bus->self) 318 - break; 319 - dev = dev->bus->self; 320 - } 321 - return NULL; 322 - } 323 - 324 310 static int find_aer_device_iter(struct device *device, void *data) 325 311 { 326 312 struct pcie_device **result = data;
+27 -1
drivers/pci/probe.c
··· 1439 1439 dev_warn(&dev->dev, "PCI-X settings not supported\n"); 1440 1440 } 1441 1441 1442 + static bool pcie_root_rcb_set(struct pci_dev *dev) 1443 + { 1444 + struct pci_dev *rp = pcie_find_root_port(dev); 1445 + u16 lnkctl; 1446 + 1447 + if (!rp) 1448 + return false; 1449 + 1450 + pcie_capability_read_word(rp, PCI_EXP_LNKCTL, &lnkctl); 1451 + if (lnkctl & PCI_EXP_LNKCTL_RCB) 1452 + return true; 1453 + 1454 + return false; 1455 + } 1456 + 1442 1457 static void program_hpp_type2(struct pci_dev *dev, struct hpp_type2 *hpp) 1443 1458 { 1444 1459 int pos; ··· 1483 1468 ~hpp->pci_exp_devctl_and, hpp->pci_exp_devctl_or); 1484 1469 1485 1470 /* Initialize Link Control Register */ 1486 - if (pcie_cap_has_lnkctl(dev)) 1471 + if (pcie_cap_has_lnkctl(dev)) { 1472 + 1473 + /* 1474 + * If the Root Port supports Read Completion Boundary of 1475 + * 128, set RCB to 128. Otherwise, clear it. 1476 + */ 1477 + hpp->pci_exp_lnkctl_and |= PCI_EXP_LNKCTL_RCB; 1478 + hpp->pci_exp_lnkctl_or &= ~PCI_EXP_LNKCTL_RCB; 1479 + if (pcie_root_rcb_set(dev)) 1480 + hpp->pci_exp_lnkctl_or |= PCI_EXP_LNKCTL_RCB; 1481 + 1487 1482 pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL, 1488 1483 ~hpp->pci_exp_lnkctl_and, hpp->pci_exp_lnkctl_or); 1484 + } 1489 1485 1490 1486 /* Find Advanced Error Reporting Enhanced Capability */ 1491 1487 pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
+2 -2
drivers/phy/phy-twl4030-usb.c
··· 459 459 struct twl4030_usb *twl = phy_get_drvdata(phy); 460 460 461 461 dev_dbg(twl->dev, "%s\n", __func__); 462 - pm_runtime_mark_last_busy(twl->dev); 463 - pm_runtime_put_autosuspend(twl->dev); 464 462 465 463 return 0; 466 464 } ··· 470 472 dev_dbg(twl->dev, "%s\n", __func__); 471 473 pm_runtime_get_sync(twl->dev); 472 474 schedule_delayed_work(&twl->id_workaround_work, HZ); 475 + pm_runtime_mark_last_busy(twl->dev); 476 + pm_runtime_put_autosuspend(twl->dev); 473 477 474 478 return 0; 475 479 }
+1
drivers/pwm/pwm-meson.c
··· 474 474 if (IS_ERR(meson->base)) 475 475 return PTR_ERR(meson->base); 476 476 477 + spin_lock_init(&meson->lock); 477 478 meson->chip.dev = &pdev->dev; 478 479 meson->chip.ops = &meson_pwm_ops; 479 480 meson->chip.base = -1;
+2
drivers/pwm/sysfs.c
··· 425 425 if (test_bit(PWMF_EXPORTED, &pwm->flags)) 426 426 pwm_unexport_child(parent, pwm); 427 427 } 428 + 429 + put_device(parent); 428 430 } 429 431 430 432 static int __init pwm_sysfs_init(void)
+1 -1
drivers/scsi/be2iscsi/be_mgmt.c
··· 1083 1083 nonemb_cmd = &phba->boot_struct.nonemb_cmd; 1084 1084 nonemb_cmd->size = sizeof(*resp); 1085 1085 nonemb_cmd->va = pci_alloc_consistent(phba->ctrl.pdev, 1086 - sizeof(nonemb_cmd->size), 1086 + nonemb_cmd->size, 1087 1087 &nonemb_cmd->dma); 1088 1088 if (!nonemb_cmd->va) { 1089 1089 mutex_unlock(&ctrl->mbox_lock);
+11 -5
drivers/scsi/hpsa.c
··· 2009 2009 2010 2010 static int hpsa_slave_alloc(struct scsi_device *sdev) 2011 2011 { 2012 - struct hpsa_scsi_dev_t *sd; 2012 + struct hpsa_scsi_dev_t *sd = NULL; 2013 2013 unsigned long flags; 2014 2014 struct ctlr_info *h; 2015 2015 ··· 2026 2026 sd->target = sdev_id(sdev); 2027 2027 sd->lun = sdev->lun; 2028 2028 } 2029 - } else 2029 + } 2030 + if (!sd) 2030 2031 sd = lookup_hpsa_scsi_dev(h, sdev_channel(sdev), 2031 2032 sdev_id(sdev), sdev->lun); 2032 2033 ··· 3841 3840 sizeof(this_device->vendor)); 3842 3841 memcpy(this_device->model, &inq_buff[16], 3843 3842 sizeof(this_device->model)); 3843 + this_device->rev = inq_buff[2]; 3844 3844 memset(this_device->device_id, 0, 3845 3845 sizeof(this_device->device_id)); 3846 3846 if (hpsa_get_device_id(h, scsi3addr, this_device->device_id, 8, ··· 3931 3929 3932 3930 if (!is_logical_dev_addr_mode(lunaddrbytes)) { 3933 3931 /* physical device, target and lun filled in later */ 3934 - if (is_hba_lunid(lunaddrbytes)) 3932 + if (is_hba_lunid(lunaddrbytes)) { 3933 + int bus = HPSA_HBA_BUS; 3934 + 3935 + if (!device->rev) 3936 + bus = HPSA_LEGACY_HBA_BUS; 3935 3937 hpsa_set_bus_target_lun(device, 3936 - HPSA_HBA_BUS, 0, lunid & 0x3fff); 3937 - else 3938 + bus, 0, lunid & 0x3fff); 3939 + } else 3938 3940 /* defer target, lun assignment for physical devices */ 3939 3941 hpsa_set_bus_target_lun(device, 3940 3942 HPSA_PHYSICAL_DEVICE_BUS, -1, -1);
+2
drivers/scsi/hpsa.h
··· 69 69 u64 sas_address; 70 70 unsigned char vendor[8]; /* bytes 8-15 of inquiry data */ 71 71 unsigned char model[16]; /* bytes 16-31 of inquiry data */ 72 + unsigned char rev; /* byte 2 of inquiry data */ 72 73 unsigned char raid_level; /* from inquiry page 0xC1 */ 73 74 unsigned char volume_offline; /* discovered via TUR or VPD */ 74 75 u16 queue_depth; /* max queue_depth for this device */ ··· 403 402 #define HPSA_RAID_VOLUME_BUS 1 404 403 #define HPSA_EXTERNAL_RAID_VOLUME_BUS 2 405 404 #define HPSA_HBA_BUS 0 405 + #define HPSA_LEGACY_HBA_BUS 3 406 406 407 407 /* 408 408 Send the command to the hardware
+1 -1
drivers/scsi/libfc/fc_lport.c
··· 308 308 fc_stats = &lport->host_stats; 309 309 memset(fc_stats, 0, sizeof(struct fc_host_statistics)); 310 310 311 - fc_stats->seconds_since_last_reset = (lport->boot_time - jiffies) / HZ; 311 + fc_stats->seconds_since_last_reset = (jiffies - lport->boot_time) / HZ; 312 312 313 313 for_each_possible_cpu(cpu) { 314 314 struct fc_stats *stats;
+8 -6
drivers/scsi/lpfc/lpfc_sli.c
··· 1323 1323 { 1324 1324 lockdep_assert_held(&phba->hbalock); 1325 1325 1326 - BUG_ON(!piocb || !piocb->vport); 1326 + BUG_ON(!piocb); 1327 1327 1328 1328 list_add_tail(&piocb->list, &pring->txcmplq); 1329 1329 piocb->iocb_flag |= LPFC_IO_ON_TXCMPLQ; 1330 1330 1331 1331 if ((unlikely(pring->ringno == LPFC_ELS_RING)) && 1332 1332 (piocb->iocb.ulpCommand != CMD_ABORT_XRI_CN) && 1333 - (piocb->iocb.ulpCommand != CMD_CLOSE_XRI_CN) && 1334 - (!(piocb->vport->load_flag & FC_UNLOADING))) 1335 - mod_timer(&piocb->vport->els_tmofunc, 1336 - jiffies + 1337 - msecs_to_jiffies(1000 * (phba->fc_ratov << 1))); 1333 + (piocb->iocb.ulpCommand != CMD_CLOSE_XRI_CN)) { 1334 + BUG_ON(!piocb->vport); 1335 + if (!(piocb->vport->load_flag & FC_UNLOADING)) 1336 + mod_timer(&piocb->vport->els_tmofunc, 1337 + jiffies + 1338 + msecs_to_jiffies(1000 * (phba->fc_ratov << 1))); 1339 + } 1338 1340 1339 1341 return 0; 1340 1342 }
+8 -5
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 3885 3885 } 3886 3886 } 3887 3887 3888 + static inline bool ata_12_16_cmd(struct scsi_cmnd *scmd) 3889 + { 3890 + return (scmd->cmnd[0] == ATA_12 || scmd->cmnd[0] == ATA_16); 3891 + } 3892 + 3888 3893 /** 3889 3894 * _scsih_flush_running_cmds - completing outstanding commands. 3890 3895 * @ioc: per adapter object ··· 3911 3906 if (!scmd) 3912 3907 continue; 3913 3908 count++; 3909 + if (ata_12_16_cmd(scmd)) 3910 + scsi_internal_device_unblock(scmd->device, 3911 + SDEV_RUNNING); 3914 3912 mpt3sas_base_free_smid(ioc, smid); 3915 3913 scsi_dma_unmap(scmd); 3916 3914 if (ioc->pci_error_recovery) ··· 4016 4008 ascq); 4017 4009 scmd->result = DRIVER_SENSE << 24 | (DID_ABORT << 16) | 4018 4010 SAM_STAT_CHECK_CONDITION; 4019 - } 4020 - 4021 - static inline bool ata_12_16_cmd(struct scsi_cmnd *scmd) 4022 - { 4023 - return (scmd->cmnd[0] == ATA_12 || scmd->cmnd[0] == ATA_16); 4024 4011 } 4025 4012 4026 4013 /**
+3 -1
drivers/scsi/mvsas/mv_sas.c
··· 791 791 slot->slot_tag = tag; 792 792 793 793 slot->buf = pci_pool_alloc(mvi->dma_pool, GFP_ATOMIC, &slot->buf_dma); 794 - if (!slot->buf) 794 + if (!slot->buf) { 795 + rc = -ENOMEM; 795 796 goto err_out_tag; 797 + } 796 798 memset(slot->buf, 0, MVS_SLOT_BUF_SZ); 797 799 798 800 tei.task = task;
+2 -2
drivers/scsi/qlogicpti.h
··· 356 356 357 357 /* The rest of the elements are unimportant for performance. */ 358 358 struct qlogicpti *next; 359 - __u32 res_dvma; /* Ptr to RESPONSE bufs (DVMA)*/ 360 - __u32 req_dvma; /* Ptr to REQUEST bufs (DVMA) */ 359 + dma_addr_t res_dvma; /* Ptr to RESPONSE bufs (DVMA)*/ 360 + dma_addr_t req_dvma; /* Ptr to REQUEST bufs (DVMA) */ 361 361 u_char fware_majrev, fware_minrev, fware_micrev; 362 362 struct Scsi_Host *qhost; 363 363 int qpti_id;
+1
drivers/usb/chipidea/core.c
··· 914 914 if (!ci) 915 915 return -ENOMEM; 916 916 917 + spin_lock_init(&ci->lock); 917 918 ci->dev = dev; 918 919 ci->platdata = dev_get_platdata(dev); 919 920 ci->imx28_write_fix = !!(ci->platdata->flags &
-2
drivers/usb/chipidea/udc.c
··· 1889 1889 struct usb_otg_caps *otg_caps = &ci->platdata->ci_otg_caps; 1890 1890 int retval = 0; 1891 1891 1892 - spin_lock_init(&ci->lock); 1893 - 1894 1892 ci->gadget.ops = &usb_gadget_ops; 1895 1893 ci->gadget.speed = USB_SPEED_UNKNOWN; 1896 1894 ci->gadget.max_speed = USB_SPEED_HIGH;
+4 -4
drivers/usb/gadget/function/f_fs.c
··· 3225 3225 3226 3226 switch (creq->bRequestType & USB_RECIP_MASK) { 3227 3227 case USB_RECIP_INTERFACE: 3228 - return ffs_func_revmap_intf(func, 3229 - le16_to_cpu(creq->wIndex) >= 0); 3228 + return (ffs_func_revmap_intf(func, 3229 + le16_to_cpu(creq->wIndex)) >= 0); 3230 3230 case USB_RECIP_ENDPOINT: 3231 - return ffs_func_revmap_ep(func, 3232 - le16_to_cpu(creq->wIndex) >= 0); 3231 + return (ffs_func_revmap_ep(func, 3232 + le16_to_cpu(creq->wIndex)) >= 0); 3233 3233 default: 3234 3234 return (bool) (func->ffs->user_flags & 3235 3235 FUNCTIONFS_ALL_CTRL_RECIP);
+129 -18
drivers/usb/musb/musb_core.c
··· 986 986 } 987 987 #endif 988 988 989 - schedule_work(&musb->irq_work); 989 + schedule_delayed_work(&musb->irq_work, 0); 990 990 991 991 return handled; 992 992 } ··· 1855 1855 MUSB_DEVCTL_HR; 1856 1856 switch (devctl & ~s) { 1857 1857 case MUSB_QUIRK_B_INVALID_VBUS_91: 1858 - if (!musb->session && !musb->quirk_invalid_vbus) { 1859 - musb->quirk_invalid_vbus = true; 1858 + if (musb->quirk_retries--) { 1860 1859 musb_dbg(musb, 1861 - "First invalid vbus, assume no session"); 1860 + "Poll devctl on invalid vbus, assume no session"); 1861 + schedule_delayed_work(&musb->irq_work, 1862 + msecs_to_jiffies(1000)); 1863 + 1862 1864 return; 1863 1865 } 1864 - break; 1865 1866 case MUSB_QUIRK_A_DISCONNECT_19: 1867 + if (musb->quirk_retries--) { 1868 + musb_dbg(musb, 1869 + "Poll devctl on possible host mode disconnect"); 1870 + schedule_delayed_work(&musb->irq_work, 1871 + msecs_to_jiffies(1000)); 1872 + 1873 + return; 1874 + } 1866 1875 if (!musb->session) 1867 1876 break; 1868 1877 musb_dbg(musb, "Allow PM on possible host mode disconnect"); ··· 1895 1886 if (error < 0) 1896 1887 dev_err(musb->controller, "Could not enable: %i\n", 1897 1888 error); 1889 + musb->quirk_retries = 3; 1898 1890 } else { 1899 1891 musb_dbg(musb, "Allow PM with no session: %02x", devctl); 1900 - musb->quirk_invalid_vbus = false; 1901 1892 pm_runtime_mark_last_busy(musb->controller); 1902 1893 pm_runtime_put_autosuspend(musb->controller); 1903 1894 } ··· 1908 1899 /* Only used to provide driver mode change events */ 1909 1900 static void musb_irq_work(struct work_struct *data) 1910 1901 { 1911 - struct musb *musb = container_of(data, struct musb, irq_work); 1902 + struct musb *musb = container_of(data, struct musb, irq_work.work); 1912 1903 1913 1904 musb_pm_runtime_check_session(musb); 1914 1905 ··· 1978 1969 INIT_LIST_HEAD(&musb->control); 1979 1970 INIT_LIST_HEAD(&musb->in_bulk); 1980 1971 INIT_LIST_HEAD(&musb->out_bulk); 1972 + INIT_LIST_HEAD(&musb->pending_list); 1981 1973 1982 1974 musb->vbuserr_retry = VBUSERR_RETRY_COUNT; 1983 1975 musb->a_wait_bcon = OTG_TIME_A_WAIT_BCON; ··· 2028 2018 musb_host_free(musb); 2029 2019 } 2021 + struct musb_pending_work { 2022 + int (*callback)(struct musb *musb, void *data); 2023 + void *data; 2024 + struct list_head node; 2025 + }; 2026 + 2027 + /* 2028 + * Called from musb_runtime_resume(), musb_resume(), and 2029 + * musb_queue_resume_work(). Callers must take musb->lock. 2030 + */ 2031 + static int musb_run_resume_work(struct musb *musb) 2032 + { 2033 + struct musb_pending_work *w, *_w; 2034 + unsigned long flags; 2035 + int error = 0; 2036 + 2037 + spin_lock_irqsave(&musb->list_lock, flags); 2038 + list_for_each_entry_safe(w, _w, &musb->pending_list, node) { 2039 + if (w->callback) { 2040 + error = w->callback(musb, w->data); 2041 + if (error < 0) { 2042 + dev_err(musb->controller, 2043 + "resume callback %p failed: %i\n", 2044 + w->callback, error); 2045 + } 2046 + } 2047 + list_del(&w->node); 2048 + devm_kfree(musb->controller, w); 2049 + } 2050 + spin_unlock_irqrestore(&musb->list_lock, flags); 2051 + 2052 + return error; 2053 + } 2054 + 2055 + /* 2056 + * Called to run work if device is active or else queue the work to happen 2057 + * on resume. Caller must take musb->lock and must hold an RPM reference. 2058 + * 2059 + * Note that we cowardly refuse queuing work after musb PM runtime 2060 + * resume is done calling musb_run_resume_work() and return -EINPROGRESS 2061 + * instead. 2062 + */ 2063 + int musb_queue_resume_work(struct musb *musb, 2064 + int (*callback)(struct musb *musb, void *data), 2065 + void *data) 2066 + { 2067 + struct musb_pending_work *w; 2068 + unsigned long flags; 2069 + int error; 2070 + 2071 + if (WARN_ON(!callback)) 2072 + return -EINVAL; 2073 + 2074 + if (pm_runtime_active(musb->controller)) 2075 + return callback(musb, data); 2076 + 2077 + w = devm_kzalloc(musb->controller, sizeof(*w), GFP_ATOMIC); 2078 + if (!w) 2079 + return -ENOMEM; 2080 + 2081 + w->callback = callback; 2082 + w->data = data; 2083 + spin_lock_irqsave(&musb->list_lock, flags); 2084 + if (musb->is_runtime_suspended) { 2085 + list_add_tail(&w->node, &musb->pending_list); 2086 + error = 0; 2087 + } else { 2088 + dev_err(musb->controller, "could not add resume work %p\n", 2089 + callback); 2090 + devm_kfree(musb->controller, w); 2091 + error = -EINPROGRESS; 2092 + } 2093 + spin_unlock_irqrestore(&musb->list_lock, flags); 2094 + 2095 + return error; 2096 + } 2097 + EXPORT_SYMBOL_GPL(musb_queue_resume_work); 2098 + 2031 2099 static void musb_deassert_reset(struct work_struct *work) 2032 2100 { 2033 2101 struct musb *musb; ··· 2153 2065 } 2154 2066 2155 2067 spin_lock_init(&musb->lock); 2068 + spin_lock_init(&musb->list_lock); 2156 2069 musb->board_set_power = plat->set_power; 2157 2070 musb->min_power = plat->min_power; 2158 2071 musb->ops = plat->platform_ops; ··· 2297 2208 musb_generic_disable(musb); 2298 2209 2299 2210 /* Init IRQ workqueue before request_irq */ 2300 - INIT_WORK(&musb->irq_work, musb_irq_work); 2211 + INIT_DELAYED_WORK(&musb->irq_work, musb_irq_work); 2301 2212 INIT_DELAYED_WORK(&musb->deassert_reset_work, musb_deassert_reset); 2302 2213 INIT_DELAYED_WORK(&musb->finish_resume_work, musb_host_finish_resume); ··· 2380 2291 if (status) 2381 2292 goto fail5; 2382 2293 2294 + musb->is_initialized = 1; 2383 2295 pm_runtime_mark_last_busy(musb->controller); 2384 2296 pm_runtime_put_autosuspend(musb->controller); 2385 2297 ··· 2394 2304 musb_host_cleanup(musb); 2395 2305 2396 2306 fail3: 2397 - cancel_work_sync(&musb->irq_work); 2307 + cancel_delayed_work_sync(&musb->irq_work); 2398 2308 cancel_delayed_work_sync(&musb->finish_resume_work); 2399 2309 cancel_delayed_work_sync(&musb->deassert_reset_work); 2400 2310 if (musb->dma_controller) ··· 2461 2371 */ 2462 2372 musb_exit_debugfs(musb); 2463 2373 2464 - cancel_work_sync(&musb->irq_work); 2374 + cancel_delayed_work_sync(&musb->irq_work); 2465 2375 cancel_delayed_work_sync(&musb->finish_resume_work); 2466 2376 cancel_delayed_work_sync(&musb->deassert_reset_work); 2467 2377 pm_runtime_get_sync(musb->controller); ··· 2647 2557 2648 2558 musb_platform_disable(musb); 2649 2559 musb_generic_disable(musb); 2560 + WARN_ON(!list_empty(&musb->pending_list)); 2650 2561 2651 2562 spin_lock_irqsave(&musb->lock, flags); ··· 2669 2578 2670 2579 static int musb_resume(struct device *dev) 2671 2580 { 2672 - struct musb *musb = dev_to_musb(dev); 2673 - u8 devctl; 2674 - u8 mask; 2581 + struct musb *musb = dev_to_musb(dev); 2582 + unsigned long flags; 2583 + int error; 2584 + u8 devctl; 2585 + u8 mask; 2675 2586 /* 2676 2587 * For static cmos like DaVinci, register values were preserved ··· 2707 2614 2708 2615 musb_start(musb); 2709 2616 2617 + spin_lock_irqsave(&musb->lock, flags); 2618 + error = musb_run_resume_work(musb); 2619 + if (error) 2620 + dev_err(musb->controller, "resume work failed with %i\n", 2621 + error); 2622 + spin_unlock_irqrestore(&musb->lock, flags); 2623 + 2710 2624 return 0; 2711 2625 } ··· 2722 2622 struct musb *musb = dev_to_musb(dev); 2723 2623 2724 2624 musb_save_context(musb); 2625 + musb->is_runtime_suspended = 1; 2725 2626 2726 2627 return 0; 2727 2628 } 2728 2629 2729 2630 static int musb_runtime_resume(struct device *dev) 2730 2631 { 2731 - struct musb *musb = dev_to_musb(dev); 2732 - static int first = 1; 2632 + struct musb *musb = dev_to_musb(dev); 2633 + unsigned long flags; 2634 + int error; 2733 2635 /* 2734 2636 * When pm_runtime_get_sync called for the first time in driver ··· 2742 2640 * Also context restore without save does not make 2743 2641 * any sense 2744 2642 */ 2745 - if (!first) 2746 - musb_restore_context(musb); 2747 - first = 0; 2643 + if (!musb->is_initialized) 2644 + return 0; 2645 + 2646 + musb_restore_context(musb); 2748 2647 2749 2648 if (musb->need_finish_resume) { 2750 2649 musb->need_finish_resume = 0; 2751 2650 schedule_delayed_work(&musb->finish_resume_work, 2752 2651 msecs_to_jiffies(USB_RESUME_TIMEOUT)); 2753 2652 } 2653 + 2654 + spin_lock_irqsave(&musb->lock, flags); 2655 + error = musb_run_resume_work(musb); 2656 + if (error) 2657 + dev_err(musb->controller, "resume work failed with %i\n", 2658 + error); 2659 + musb->is_runtime_suspended = 0; 2660 + spin_unlock_irqrestore(&musb->lock, flags); 2754 2661 2755 2662 return 0; 2756 2663 }
+11 -2
drivers/usb/musb/musb_core.h
··· 303 303 struct musb { 304 304 /* device lock */ 305 305 spinlock_t lock; 306 + spinlock_t list_lock; /* resume work list lock */ 306 307 307 308 struct musb_io io; 308 309 const struct musb_platform_ops *ops; 309 310 struct musb_context_registers context; 310 311 311 312 irqreturn_t (*isr)(int, void *); 312 - struct work_struct irq_work; 313 + struct delayed_work irq_work; 313 314 struct delayed_work deassert_reset_work; 314 315 struct delayed_work finish_resume_work; 315 316 struct delayed_work gadget_work; ··· 338 337 struct list_head control; /* of musb_qh */ 339 338 struct list_head in_bulk; /* of musb_qh */ 340 339 struct list_head out_bulk; /* of musb_qh */ 340 + struct list_head pending_list; /* pending work list */ 341 341 342 342 struct timer_list otg_timer; 343 343 struct notifier_block nb; ··· 381 379 382 380 int port_mode; /* MUSB_PORT_MODE_* */ 383 381 bool session; 384 - bool quirk_invalid_vbus; 382 + unsigned long quirk_retries; 385 383 bool is_host; 386 384 387 385 int a_wait_bcon; /* VBUS timeout in msecs */ 388 386 unsigned long idle_timeout; /* Next timeout in jiffies */ 387 + 388 + unsigned is_initialized:1; 389 + unsigned is_runtime_suspended:1; 389 390 390 391 /* active means connected and not suspended */ 391 392 unsigned is_active:1; ··· 544 539 extern irqreturn_t musb_interrupt(struct musb *); 545 540 546 541 extern void musb_hnp_stop(struct musb *musb); 542 + 543 + int musb_queue_resume_work(struct musb *musb, 544 + int (*callback)(struct musb *musb, void *data), 545 + void *data); 547 546 548 547 static inline void musb_platform_set_vbus(struct musb *musb, int is_on) 549 548 {
+28 -30
drivers/usb/musb/musb_dsps.c
··· 185 185 musb_writel(reg_base, wrp->coreintr_clear, wrp->usb_bitmap); 186 186 musb_writel(reg_base, wrp->epintr_clear, 187 187 wrp->txep_bitmap | wrp->rxep_bitmap); 188 + del_timer_sync(&glue->timer); 188 189 musb_writeb(musb->mregs, MUSB_DEVCTL, 0); 189 190 } 190 191 191 - static void otg_timer(unsigned long _musb) 192 + /* Caller must take musb->lock */ 193 + static int dsps_check_status(struct musb *musb, void *unused) 192 194 { 193 - struct musb *musb = (void *)_musb; 194 195 void __iomem *mregs = musb->mregs; 195 196 struct device *dev = musb->controller; 196 197 struct dsps_glue *glue = dev_get_drvdata(dev->parent); 197 198 const struct dsps_musb_wrapper *wrp = glue->wrp; 198 199 u8 devctl; 199 - unsigned long flags; 200 200 int skip_session = 0; 201 - int err; 202 - 203 - err = pm_runtime_get_sync(dev); 204 - if (err < 0) 205 - dev_err(dev, "Poll could not pm_runtime_get: %i\n", err); 206 201 207 202 /* 208 203 * We poll because DSPS IP's won't expose several OTG-critical ··· 207 212 dev_dbg(musb->controller, "Poll devctl %02x (%s)\n", devctl, 208 213 usb_otg_state_string(musb->xceiv->otg->state)); 209 214 210 - spin_lock_irqsave(&musb->lock, flags); 211 215 switch (musb->xceiv->otg->state) { 212 216 case OTG_STATE_A_WAIT_VRISE: 213 217 mod_timer(&glue->timer, jiffies + ··· 239 245 default: 240 246 break; 241 247 } 242 - spin_unlock_irqrestore(&musb->lock, flags); 243 248 249 + return 0; 250 + } 251 + 252 + static void otg_timer(unsigned long _musb) 253 + { 254 + struct musb *musb = (void *)_musb; 255 + struct device *dev = musb->controller; 256 + unsigned long flags; 257 + int err; 258 + 259 + err = pm_runtime_get(dev); 260 + if ((err != -EINPROGRESS) && err < 0) { 261 + dev_err(dev, "Poll could not pm_runtime_get: %i\n", err); 262 + pm_runtime_put_noidle(dev); 263 + 264 + return; 265 + } 266 + 267 + spin_lock_irqsave(&musb->lock, flags); 268 + err = musb_queue_resume_work(musb, dsps_check_status, NULL); 269 + if (err < 0) 270 + dev_err(dev, "%s resume 
work: %i\n", __func__, err); 271 + spin_unlock_irqrestore(&musb->lock, flags); 244 272 pm_runtime_mark_last_busy(dev); 245 273 pm_runtime_put_autosuspend(dev); 246 274 } ··· 783 767 784 768 platform_set_drvdata(pdev, glue); 785 769 pm_runtime_enable(&pdev->dev); 786 - pm_runtime_use_autosuspend(&pdev->dev); 787 - pm_runtime_set_autosuspend_delay(&pdev->dev, 200); 788 - 789 - ret = pm_runtime_get_sync(&pdev->dev); 790 - if (ret < 0) { 791 - dev_err(&pdev->dev, "pm_runtime_get_sync FAILED"); 792 - goto err2; 793 - } 794 - 795 770 ret = dsps_create_musb_pdev(glue, pdev); 796 771 if (ret) 797 - goto err3; 798 - 799 - pm_runtime_mark_last_busy(&pdev->dev); 800 - pm_runtime_put_autosuspend(&pdev->dev); 772 + goto err; 801 773 802 774 return 0; 803 775 804 - err3: 805 - pm_runtime_put_sync(&pdev->dev); 806 - err2: 807 - pm_runtime_dont_use_autosuspend(&pdev->dev); 776 + err: 808 777 pm_runtime_disable(&pdev->dev); 809 778 return ret; 810 779 } ··· 800 799 801 800 platform_device_unregister(glue->musb); 802 801 803 - /* disable usbss clocks */ 804 - pm_runtime_dont_use_autosuspend(&pdev->dev); 805 - pm_runtime_put_sync(&pdev->dev); 806 802 pm_runtime_disable(&pdev->dev); 807 803 808 804 return 0;
+32 -7
drivers/usb/musb/musb_gadget.c
··· 1114 1114 musb_ep->dma ? "dma, " : "", 1115 1115 musb_ep->packet_sz); 1116 1116 1117 - schedule_work(&musb->irq_work); 1117 + schedule_delayed_work(&musb->irq_work, 0); 1118 1118 1119 1119 fail: 1120 1120 spin_unlock_irqrestore(&musb->lock, flags); ··· 1158 1158 musb_ep->desc = NULL; 1159 1159 musb_ep->end_point.desc = NULL; 1160 1160 1161 - schedule_work(&musb->irq_work); 1161 + schedule_delayed_work(&musb->irq_work, 0); 1162 1162 1163 1163 spin_unlock_irqrestore(&(musb->lock), flags); 1164 1164 ··· 1222 1222 rxstate(musb, req); 1223 1223 } 1224 1224 1225 + static int musb_ep_restart_resume_work(struct musb *musb, void *data) 1226 + { 1227 + struct musb_request *req = data; 1228 + 1229 + musb_ep_restart(musb, req); 1230 + 1231 + return 0; 1232 + } 1233 + 1225 1234 static int musb_gadget_queue(struct usb_ep *ep, struct usb_request *req, 1226 1235 gfp_t gfp_flags) 1227 1236 { 1228 1237 struct musb_ep *musb_ep; 1229 1238 struct musb_request *request; 1230 1239 struct musb *musb; 1231 - int status = 0; 1240 + int status; 1232 1241 unsigned long lockflags; 1233 1242 1234 1243 if (!ep || !req) ··· 1254 1245 if (request->ep != musb_ep) 1255 1246 return -EINVAL; 1256 1247 1248 + status = pm_runtime_get(musb->controller); 1249 + if ((status != -EINPROGRESS) && status < 0) { 1250 + dev_err(musb->controller, 1251 + "pm runtime get failed in %s\n", 1252 + __func__); 1253 + pm_runtime_put_noidle(musb->controller); 1254 + 1255 + return status; 1256 + } 1257 + status = 0; 1258 + 1257 1259 trace_musb_req_enq(request); 1258 1260 1259 1261 /* request is mine now... */ ··· 1275 1255 1276 1256 map_dma_buffer(request, musb, musb_ep); 1277 1257 1278 - pm_runtime_get_sync(musb->controller); 1279 1258 spin_lock_irqsave(&musb->lock, lockflags); 1280 1259 1281 1260 /* don't queue if the ep is down */ ··· 1290 1271 list_add_tail(&request->list, &musb_ep->req_list); 1291 1272 1292 1273 /* it this is the head of the queue, start i/o ... 
*/ 1293 - if (!musb_ep->busy && &request->list == musb_ep->req_list.next) 1294 - musb_ep_restart(musb, request); 1274 + if (!musb_ep->busy && &request->list == musb_ep->req_list.next) { 1275 + status = musb_queue_resume_work(musb, 1276 + musb_ep_restart_resume_work, 1277 + request); 1278 + if (status < 0) 1279 + dev_err(musb->controller, "%s resume work: %i\n", 1280 + __func__, status); 1281 + } 1295 1282 1296 1283 unlock: 1297 1284 spin_unlock_irqrestore(&musb->lock, lockflags); ··· 1994 1969 */ 1995 1970 1996 1971 /* Force check of devctl register for PM runtime */ 1997 - schedule_work(&musb->irq_work); 1972 + schedule_delayed_work(&musb->irq_work, 0); 1998 1973 1999 1974 pm_runtime_mark_last_busy(musb->controller); 2000 1975 pm_runtime_put_autosuspend(musb->controller);
+4 -6
drivers/usb/musb/omap2430.c
··· 513 513 } 514 514 515 515 pm_runtime_enable(glue->dev); 516 - pm_runtime_use_autosuspend(glue->dev); 517 - pm_runtime_set_autosuspend_delay(glue->dev, 100); 518 516 519 517 ret = platform_device_add(musb); 520 518 if (ret) { 521 519 dev_err(&pdev->dev, "failed to register musb device\n"); 522 - goto err2; 520 + goto err3; 523 521 } 524 522 525 523 return 0; 524 + 525 + err3: 526 + pm_runtime_disable(glue->dev); 526 527 527 528 err2: 528 529 platform_device_put(musb); ··· 536 535 { 537 536 struct omap2430_glue *glue = platform_get_drvdata(pdev); 538 537 539 - pm_runtime_get_sync(glue->dev); 540 538 platform_device_unregister(glue->musb); 541 - pm_runtime_put_sync(glue->dev); 542 - pm_runtime_dont_use_autosuspend(glue->dev); 543 539 pm_runtime_disable(glue->dev); 544 540 545 541 return 0;
+3 -3
drivers/usb/musb/tusb6010.c
··· 724 724 dev_dbg(musb->controller, "vbus change, %s, otg %03x\n", 725 725 usb_otg_state_string(musb->xceiv->otg->state), otg_stat); 726 726 idle_timeout = jiffies + (1 * HZ); 727 - schedule_work(&musb->irq_work); 727 + schedule_delayed_work(&musb->irq_work, 0); 728 728 729 729 } else /* A-dev state machine */ { 730 730 dev_dbg(musb->controller, "vbus change, %s, otg %03x\n", ··· 814 814 break; 815 815 } 816 816 } 817 - schedule_work(&musb->irq_work); 817 + schedule_delayed_work(&musb->irq_work, 0); 818 818 819 819 return idle_timeout; 820 820 } ··· 864 864 musb_writel(tbase, TUSB_PRCM_WAKEUP_CLEAR, reg); 865 865 if (reg & ~TUSB_PRCM_WNORCS) { 866 866 musb->is_active = 1; 867 - schedule_work(&musb->irq_work); 867 + schedule_delayed_work(&musb->irq_work, 0); 868 868 } 869 869 dev_dbg(musb->controller, "wake %sactive %02x\n", 870 870 musb->is_active ? "" : "in", reg);
+1
drivers/usb/serial/cp210x.c
··· 131 131 { USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */ 132 132 { USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */ 133 133 { USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */ 134 + { USB_DEVICE(0x10C4, 0x8962) }, /* Brim Brothers charging dock */ 134 135 { USB_DEVICE(0x10C4, 0x8977) }, /* CEL MeshWorks DevKit Device */ 135 136 { USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */ 136 137 { USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */
+2
drivers/usb/serial/ftdi_sio.c
··· 1012 1012 { USB_DEVICE(ICPDAS_VID, ICPDAS_I7561U_PID) }, 1013 1013 { USB_DEVICE(ICPDAS_VID, ICPDAS_I7563U_PID) }, 1014 1014 { USB_DEVICE(WICED_VID, WICED_USB20706V2_PID) }, 1015 + { USB_DEVICE(TI_VID, TI_CC3200_LAUNCHPAD_PID), 1016 + .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 1015 1017 { } /* Terminating entry */ 1016 1018 }; 1017 1019
+6
drivers/usb/serial/ftdi_sio_ids.h
··· 596 596 #define STK541_PID 0x2109 /* Zigbee Controller */ 597 597 598 598 /* 599 + * Texas Instruments 600 + */ 601 + #define TI_VID 0x0451 602 + #define TI_CC3200_LAUNCHPAD_PID 0xC32A /* SimpleLink Wi-Fi CC3200 LaunchPad */ 603 + 604 + /* 599 605 * Blackfin gnICE JTAG 600 606 * http://docs.blackfin.uclinux.org/doku.php?id=hw:jtag:gnice 601 607 */
+6 -1
drivers/usb/storage/transport.c
··· 954 954 955 955 /* COMMAND STAGE */ 956 956 /* let's send the command via the control pipe */ 957 + /* 958 + * Command is sometime (f.e. after scsi_eh_prep_cmnd) on the stack. 959 + * Stack may be vmallocated. So no DMA for us. Make a copy. 960 + */ 961 + memcpy(us->iobuf, srb->cmnd, srb->cmd_len); 957 962 result = usb_stor_ctrl_transfer(us, us->send_ctrl_pipe, 958 963 US_CBI_ADSC, 959 964 USB_TYPE_CLASS | USB_RECIP_INTERFACE, 0, 960 - us->ifnum, srb->cmnd, srb->cmd_len); 965 + us->ifnum, us->iobuf, srb->cmd_len); 961 966 962 967 /* check the return code for the command */ 963 968 usb_stor_dbg(us, "Call to usb_stor_ctrl_transfer() returned %d\n",
+1 -1
drivers/vhost/vsock.c
··· 506 506 * executing. 507 507 */ 508 508 509 - if (!vhost_vsock_get(vsk->local_addr.svm_cid)) { 509 + if (!vhost_vsock_get(vsk->remote_addr.svm_cid)) { 510 510 sock_set_flag(sk, SOCK_DONE); 511 511 vsk->peer_shutdown = SHUTDOWN_MASK; 512 512 sk->sk_state = SS_UNCONNECTED;
+1
drivers/watchdog/Kconfig
··· 155 155 config WDAT_WDT 156 156 tristate "ACPI Watchdog Action Table (WDAT)" 157 157 depends on ACPI 158 + select WATCHDOG_CORE 158 159 select ACPI_WATCHDOG 159 160 help 160 161 This driver adds support for systems with ACPI Watchdog Action
+14 -10
fs/ceph/dir.c
··· 1261 1261 return -ECHILD; 1262 1262 1263 1263 op = ceph_snap(dir) == CEPH_SNAPDIR ? 1264 - CEPH_MDS_OP_LOOKUPSNAP : CEPH_MDS_OP_LOOKUP; 1264 + CEPH_MDS_OP_LOOKUPSNAP : CEPH_MDS_OP_GETATTR; 1265 1265 req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS); 1266 1266 if (!IS_ERR(req)) { 1267 1267 req->r_dentry = dget(dentry); 1268 - req->r_num_caps = 2; 1268 + req->r_num_caps = op == CEPH_MDS_OP_GETATTR ? 1 : 2; 1269 1269 1270 1270 mask = CEPH_STAT_CAP_INODE | CEPH_CAP_AUTH_SHARED; 1271 1271 if (ceph_security_xattr_wanted(dir)) 1272 1272 mask |= CEPH_CAP_XATTR_SHARED; 1273 1273 req->r_args.getattr.mask = mask; 1274 1274 1275 - req->r_locked_dir = dir; 1276 1275 err = ceph_mdsc_do_request(mdsc, NULL, req); 1277 - if (err == 0 || err == -ENOENT) { 1278 - if (dentry == req->r_dentry) { 1279 - valid = !d_unhashed(dentry); 1280 - } else { 1281 - d_invalidate(req->r_dentry); 1282 - err = -EAGAIN; 1283 - } 1276 + switch (err) { 1277 + case 0: 1278 + if (d_really_is_positive(dentry) && 1279 + d_inode(dentry) == req->r_target_inode) 1280 + valid = 1; 1281 + break; 1282 + case -ENOENT: 1283 + if (d_really_is_negative(dentry)) 1284 + valid = 1; 1285 + /* Fallthrough */ 1286 + default: 1287 + break; 1284 1288 } 1285 1289 ceph_mdsc_put_request(req); 1286 1290 dout("d_revalidate %p lookup result=%d\n",
+8 -3
fs/cifs/cifsencrypt.c
··· 808 808 struct crypto_skcipher *tfm_arc4; 809 809 struct scatterlist sgin, sgout; 810 810 struct skcipher_request *req; 811 - unsigned char sec_key[CIFS_SESS_KEY_SIZE]; /* a nonce */ 811 + unsigned char *sec_key; 812 + 813 + sec_key = kmalloc(CIFS_SESS_KEY_SIZE, GFP_KERNEL); 814 + if (sec_key == NULL) 815 + return -ENOMEM; 812 816 813 817 get_random_bytes(sec_key, CIFS_SESS_KEY_SIZE); 814 818 ··· 820 816 if (IS_ERR(tfm_arc4)) { 821 817 rc = PTR_ERR(tfm_arc4); 822 818 cifs_dbg(VFS, "could not allocate crypto API arc4\n"); 823 - return rc; 819 + goto out; 824 820 } 825 821 826 822 rc = crypto_skcipher_setkey(tfm_arc4, ses->auth_key.response, ··· 858 854 859 855 out_free_cipher: 860 856 crypto_free_skcipher(tfm_arc4); 861 - 857 + out: 858 + kfree(sec_key); 862 859 return rc; 863 860 } 864 861
+2 -2
fs/cifs/cifssmb.c
··· 3427 3427 __u16 rc = 0; 3428 3428 struct cifs_posix_acl *cifs_acl = (struct cifs_posix_acl *)parm_data; 3429 3429 struct posix_acl_xattr_header *local_acl = (void *)pACL; 3430 + struct posix_acl_xattr_entry *ace = (void *)(local_acl + 1); 3430 3431 int count; 3431 3432 int i; 3432 3433 ··· 3454 3453 return 0; 3455 3454 } 3456 3455 for (i = 0; i < count; i++) { 3457 - rc = convert_ace_to_cifs_ace(&cifs_acl->ace_array[i], 3458 - (struct posix_acl_xattr_entry *)(local_acl + 1)); 3456 + rc = convert_ace_to_cifs_ace(&cifs_acl->ace_array[i], &ace[i]); 3459 3457 if (rc != 0) { 3460 3458 /* ACE not converted */ 3461 3459 break;
+18 -7
fs/cifs/connect.c
··· 412 412 } 413 413 } while (server->tcpStatus == CifsNeedReconnect); 414 414 415 + if (server->tcpStatus == CifsNeedNegotiate) 416 + mod_delayed_work(cifsiod_wq, &server->echo, 0); 417 + 415 418 return rc; 416 419 } 417 420 ··· 424 421 int rc; 425 422 struct TCP_Server_Info *server = container_of(work, 426 423 struct TCP_Server_Info, echo.work); 427 - unsigned long echo_interval = server->echo_interval; 424 + unsigned long echo_interval; 428 425 429 426 /* 430 - * We cannot send an echo if it is disabled or until the 431 - * NEGOTIATE_PROTOCOL request is done, which is indicated by 432 - * server->ops->need_neg() == true. Also, no need to ping if 433 - * we got a response recently. 427 + * If we need to renegotiate, set echo interval to zero to 428 + * immediately call echo service where we can renegotiate. 429 + */ 430 + if (server->tcpStatus == CifsNeedNegotiate) 431 + echo_interval = 0; 432 + else 433 + echo_interval = server->echo_interval; 434 + 435 + /* 436 + * We cannot send an echo if it is disabled. 437 + * Also, no need to ping if we got a response recently. 434 438 */ 435 439 436 440 if (server->tcpStatus == CifsNeedReconnect || 437 - server->tcpStatus == CifsExiting || server->tcpStatus == CifsNew || 441 + server->tcpStatus == CifsExiting || 442 + server->tcpStatus == CifsNew || 438 443 (server->ops->can_echo && !server->ops->can_echo(server)) || 439 444 time_before(jiffies, server->lstrp + echo_interval - HZ)) 440 445 goto requeue_echo; ··· 453 442 server->hostname); 454 443 455 444 requeue_echo: 456 - queue_delayed_work(cifsiod_wq, &server->echo, echo_interval); 445 + queue_delayed_work(cifsiod_wq, &server->echo, server->echo_interval); 457 446 } 458 447 459 448 static bool
+2 -5
fs/fuse/dir.c
··· 1739 1739 * This should be done on write(), truncate() and chown(). 1740 1740 */ 1741 1741 if (!fc->handle_killpriv) { 1742 - int kill; 1743 - 1744 1742 /* 1745 1743 * ia_mode calculation may have used stale i_mode. 1746 1744 * Refresh and recalculate. ··· 1748 1750 return ret; 1749 1751 1750 1752 attr->ia_mode = inode->i_mode; 1751 - kill = should_remove_suid(entry); 1752 - if (kill & ATTR_KILL_SUID) { 1753 + if (inode->i_mode & S_ISUID) { 1753 1754 attr->ia_valid |= ATTR_MODE; 1754 1755 attr->ia_mode &= ~S_ISUID; 1755 1756 } 1756 - if (kill & ATTR_KILL_SGID) { 1757 + if ((inode->i_mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP)) { 1757 1758 attr->ia_valid |= ATTR_MODE; 1758 1759 attr->ia_mode &= ~S_ISGID; 1759 1760 }
+2 -2
fs/isofs/rock.c
··· 377 377 { 378 378 int p; 379 379 for (p = 0; p < rr->u.ER.len_id; p++) 380 - printk("%c", rr->u.ER.data[p]); 380 + printk(KERN_CONT "%c", rr->u.ER.data[p]); 381 381 } 382 - printk("\n"); 382 + printk(KERN_CONT "\n"); 383 383 break; 384 384 case SIG('P', 'X'): 385 385 inode->i_mode = isonum_733(rr->u.PX.mode);
+3 -3
fs/overlayfs/super.c
··· 328 328 if (!real) 329 329 goto bug; 330 330 331 + /* Handle recursion */ 332 + real = d_real(real, inode, open_flags); 333 + 331 334 if (!inode || inode == d_inode(real)) 332 335 return real; 333 - 334 - /* Handle recursion */ 335 - return d_real(real, inode, open_flags); 336 336 bug: 337 337 WARN(1, "ovl_d_real(%pd4, %s:%lu): real dentry not found\n", dentry, 338 338 inode ? inode->i_sb->s_id : "NULL", inode ? inode->i_ino : 0);
+2 -1
fs/splice.c
··· 408 408 if (res <= 0) 409 409 return -ENOMEM; 410 410 411 - nr_pages = res / PAGE_SIZE; 411 + BUG_ON(dummy); 412 + nr_pages = DIV_ROUND_UP(res, PAGE_SIZE); 412 413 413 414 vec = __vec; 414 415 if (nr_pages > PIPE_DEF_BUFFERS) {
+2
include/crypto/drbg.h
··· 124 124 struct skcipher_request *ctr_req; /* CTR mode request handle */ 125 125 __u8 *ctr_null_value_buf; /* CTR mode unaligned buffer */ 126 126 __u8 *ctr_null_value; /* CTR mode aligned zero buf */ 127 + __u8 *outscratchpadbuf; /* CTR mode output scratchpad */ 128 + __u8 *outscratchpad; /* CTR mode aligned outbuf */ 127 129 struct completion ctr_completion; /* CTR mode async handler */ 128 130 int ctr_async_err; /* CTR mode async error */ 129 131
+3 -1
include/linux/compiler-gcc.h
··· 263 263 #endif 264 264 #endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP && !__CHECKER__ */ 265 265 266 - #if GCC_VERSION >= 50000 266 + #if GCC_VERSION >= 70000 267 + #define KASAN_ABI_VERSION 5 268 + #elif GCC_VERSION >= 50000 267 269 #define KASAN_ABI_VERSION 4 268 270 #elif GCC_VERSION >= 40902 269 271 #define KASAN_ABI_VERSION 3
+1
include/linux/intel-iommu.h
··· 429 429 struct page_req_dsc *prq; 430 430 unsigned char prq_name[16]; /* Name for PRQ interrupt */ 431 431 struct idr pasid_idr; 432 + u32 pasid_max; 432 433 #endif 433 434 struct q_inval *qi; /* Queued invalidation info */ 434 435 u32 *iommu_state; /* Store iommu states between suspend and resume.*/
+1 -1
include/linux/libnvdimm.h
··· 143 143 const struct nd_cmd_desc *desc, int idx, void *buf); 144 144 u32 nd_cmd_out_size(struct nvdimm *nvdimm, int cmd, 145 145 const struct nd_cmd_desc *desc, int idx, const u32 *in_field, 146 - const u32 *out_field); 146 + const u32 *out_field, unsigned long remainder); 147 147 int nvdimm_bus_check_dimm_count(struct nvdimm_bus *nvdimm_bus, int dimm_count); 148 148 struct nd_region *nvdimm_pmem_region_create(struct nvdimm_bus *nvdimm_bus, 149 149 struct nd_region_desc *ndr_desc);
-1
include/linux/mlx4/device.h
··· 476 476 enum { 477 477 MLX4_INTERFACE_STATE_UP = 1 << 0, 478 478 MLX4_INTERFACE_STATE_DELETION = 1 << 1, 479 - MLX4_INTERFACE_STATE_SHUTDOWN = 1 << 2, 480 479 }; 481 480 482 481 #define MSTR_SM_CHANGE_MASK (MLX4_EQ_PORT_INFO_MSTR_SM_SL_CHANGE_MASK | \
+1 -1
include/linux/netdevice.h
··· 1619 1619 * @dcbnl_ops: Data Center Bridging netlink ops 1620 1620 * @num_tc: Number of traffic classes in the net device 1621 1621 * @tc_to_txq: XXX: need comments on this one 1622 - * @prio_tc_map XXX: need comments on this one 1622 + * @prio_tc_map: XXX: need comments on this one 1623 1623 * 1624 1624 * @fcoe_ddp_xid: Max exchange id for FCoE LRO by ddp 1625 1625 *
+4
include/linux/of_mdio.h
··· 29 29 extern struct mii_bus *of_mdio_find_bus(struct device_node *mdio_np); 30 30 extern int of_mdio_parse_addr(struct device *dev, const struct device_node *np); 31 31 extern int of_phy_register_fixed_link(struct device_node *np); 32 + extern void of_phy_deregister_fixed_link(struct device_node *np); 32 33 extern bool of_phy_is_fixed_link(struct device_node *np); 33 34 34 35 #else /* CONFIG_OF */ ··· 83 82 static inline int of_phy_register_fixed_link(struct device_node *np) 84 83 { 85 84 return -ENOSYS; 85 + } 86 + static inline void of_phy_deregister_fixed_link(struct device_node *np) 87 + { 86 88 } 87 89 static inline bool of_phy_is_fixed_link(struct device_node *np) 88 90 {
+15 -6
include/linux/pagemap.h
··· 374 374 } 375 375 376 376 /* 377 - * Get the offset in PAGE_SIZE. 378 - * (TODO: hugepage should have ->index in PAGE_SIZE) 377 + * Get index of the page with in radix-tree 378 + * (TODO: remove once hugetlb pages will have ->index in PAGE_SIZE) 379 379 */ 380 - static inline pgoff_t page_to_pgoff(struct page *page) 380 + static inline pgoff_t page_to_index(struct page *page) 381 381 { 382 382 pgoff_t pgoff; 383 - 384 - if (unlikely(PageHeadHuge(page))) 385 - return page->index << compound_order(page); 386 383 387 384 if (likely(!PageTransTail(page))) 388 385 return page->index; ··· 391 394 pgoff = compound_head(page)->index; 392 395 pgoff += page - compound_head(page); 393 396 return pgoff; 397 + } 398 + 399 + /* 400 + * Get the offset in PAGE_SIZE. 401 + * (TODO: hugepage should have ->index in PAGE_SIZE) 402 + */ 403 + static inline pgoff_t page_to_pgoff(struct page *page) 404 + { 405 + if (unlikely(PageHeadHuge(page))) 406 + return page->index << compound_order(page); 407 + 408 + return page_to_index(page); 394 409 } 395 410 396 411 /*
+14
include/linux/pci.h
··· 1928 1928 return (pcie_caps_reg(dev) & PCI_EXP_FLAGS_TYPE) >> 4; 1929 1929 } 1930 1930 1931 + static inline struct pci_dev *pcie_find_root_port(struct pci_dev *dev) 1932 + { 1933 + while (1) { 1934 + if (!pci_is_pcie(dev)) 1935 + break; 1936 + if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) 1937 + return dev; 1938 + if (!dev->bus->self) 1939 + break; 1940 + dev = dev->bus->self; 1941 + } 1942 + return NULL; 1943 + } 1944 + 1931 1945 void pci_request_acs(void); 1932 1946 bool pci_acs_enabled(struct pci_dev *pdev, u16 acs_flags); 1933 1947 bool pci_acs_path_enabled(struct pci_dev *start,
+2 -1
include/linux/usb/cdc_ncm.h
··· 81 81 #define CDC_NCM_TIMER_INTERVAL_MAX (U32_MAX / NSEC_PER_USEC) 82 82 83 83 /* Driver flags */ 84 - #define CDC_NCM_FLAG_NDP_TO_END 0x02 /* NDP is placed at end of frame */ 84 + #define CDC_NCM_FLAG_NDP_TO_END 0x02 /* NDP is placed at end of frame */ 85 + #define CDC_MBIM_FLAG_AVOID_ALTSETTING_TOGGLE 0x04 /* Avoid altsetting toggle during init */ 85 86 86 87 #define cdc_ncm_comm_intf_is_mbim(x) ((x)->desc.bInterfaceSubClass == USB_CDC_SUBCLASS_MBIM && \ 87 88 (x)->desc.bInterfaceProtocol == USB_CDC_PROTO_NONE)
+1 -1
include/net/bluetooth/hci_core.h
··· 1018 1018 } 1019 1019 1020 1020 struct hci_dev *hci_dev_get(int index); 1021 - struct hci_dev *hci_get_route(bdaddr_t *dst, bdaddr_t *src); 1021 + struct hci_dev *hci_get_route(bdaddr_t *dst, bdaddr_t *src, u8 src_type); 1022 1022 1023 1023 struct hci_dev *hci_alloc_dev(void); 1024 1024 void hci_free_dev(struct hci_dev *hdev);
+2
include/net/ipv6.h
··· 970 970 int compat_ipv6_getsockopt(struct sock *sk, int level, int optname, 971 971 char __user *optval, int __user *optlen); 972 972 973 + int __ip6_datagram_connect(struct sock *sk, struct sockaddr *addr, 974 + int addr_len); 973 975 int ip6_datagram_connect(struct sock *sk, struct sockaddr *addr, int addr_len); 974 976 int ip6_datagram_connect_v6_only(struct sock *sk, struct sockaddr *addr, 975 977 int addr_len);
+3 -3
include/net/netfilter/nf_conntrack.h
··· 100 100 101 101 possible_net_t ct_net; 102 102 103 + #if IS_ENABLED(CONFIG_NF_NAT) 104 + struct rhlist_head nat_bysource; 105 + #endif 103 106 /* all members below initialized via memset */ 104 107 u8 __nfct_init_offset[0]; 105 108 ··· 120 117 /* Extensions */ 121 118 struct nf_ct_ext *ext; 122 119 123 - #if IS_ENABLED(CONFIG_NF_NAT) 124 - struct rhash_head nat_bysource; 125 - #endif 126 120 /* Storage reserved for other modules, must be the last member */ 127 121 union nf_conntrack_proto proto; 128 122 };
+1 -1
include/net/netfilter/nf_tables.h
··· 313 313 * @size: maximum set size 314 314 * @nelems: number of elements 315 315 * @ndeact: number of deactivated elements queued for removal 316 - * @timeout: default timeout value in msecs 316 + * @timeout: default timeout value in jiffies 317 317 * @gc_int: garbage collection interval in msecs 318 318 * @policy: set parameterization (see enum nft_set_policies) 319 319 * @udlen: user data length
+1
include/uapi/linux/can.h
··· 196 196 }; 197 197 198 198 #define CAN_INV_FILTER 0x20000000U /* to be set in can_filter.can_id */ 199 + #define CAN_RAW_FILTER_MAX 512 /* maximum number of can_filter set via setsockopt() */ 199 200 200 201 #endif /* !_UAPI_CAN_H */
+2 -2
include/uapi/linux/if.h
··· 31 31 #include <linux/hdlc/ioctl.h> 32 32 33 33 /* For glibc compatibility. An empty enum does not compile. */ 34 - #if __UAPI_DEF_IF_NET_DEVICE_FLAGS_LOWER_UP_DORMANT_ECHO != 0 && \ 34 + #if __UAPI_DEF_IF_NET_DEVICE_FLAGS_LOWER_UP_DORMANT_ECHO != 0 || \ 35 35 __UAPI_DEF_IF_NET_DEVICE_FLAGS != 0 36 36 /** 37 37 * enum net_device_flags - &struct net_device flags ··· 99 99 IFF_ECHO = 1<<18, /* volatile */ 100 100 #endif /* __UAPI_DEF_IF_NET_DEVICE_FLAGS_LOWER_UP_DORMANT_ECHO */ 101 101 }; 102 - #endif /* __UAPI_DEF_IF_NET_DEVICE_FLAGS_LOWER_UP_DORMANT_ECHO != 0 && __UAPI_DEF_IF_NET_DEVICE_FLAGS != 0 */ 102 + #endif /* __UAPI_DEF_IF_NET_DEVICE_FLAGS_LOWER_UP_DORMANT_ECHO != 0 || __UAPI_DEF_IF_NET_DEVICE_FLAGS != 0 */ 103 103 104 104 /* for compatibility with glibc net/if.h */ 105 105 #if __UAPI_DEF_IF_NET_DEVICE_FLAGS
+1 -1
include/uapi/linux/input-event-codes.h
··· 640 640 * Control a data application associated with the currently viewed channel, 641 641 * e.g. teletext or data broadcast application (MHEG, MHP, HbbTV, etc.) 642 642 */ 643 - #define KEY_DATA 0x275 643 + #define KEY_DATA 0x277 644 644 645 645 #define BTN_TRIGGER_HAPPY 0x2c0 646 646 #define BTN_TRIGGER_HAPPY1 0x2c0
+1
include/uapi/linux/netfilter/Kbuild
··· 5 5 header-y += nf_conntrack_sctp.h 6 6 header-y += nf_conntrack_tcp.h 7 7 header-y += nf_conntrack_tuple_common.h 8 + header-y += nf_log.h 8 9 header-y += nf_tables.h 9 10 header-y += nf_tables_compat.h 10 11 header-y += nf_nat.h
+2
include/uapi/linux/tc_act/Kbuild
··· 11 11 header-y += tc_bpf.h 12 12 header-y += tc_connmark.h 13 13 header-y += tc_ife.h 14 + header-y += tc_tunnel_key.h 15 + header-y += tc_skbmod.h
+1 -1
init/do_mounts_rd.c
··· 272 272 sys_write(out_fd, buf, BLOCK_SIZE); 273 273 #if !defined(CONFIG_S390) 274 274 if (!(i % 16)) { 275 - printk("%c\b", rotator[rotate & 0x3]); 275 + pr_cont("%c\b", rotator[rotate & 0x3]); 276 276 rotate++; 277 277 } 278 278 #endif
+8 -2
kernel/bpf/verifier.c
··· 2454 2454 struct bpf_verifier_state *old, 2455 2455 struct bpf_verifier_state *cur) 2456 2456 { 2457 + bool varlen_map_access = env->varlen_map_value_access; 2457 2458 struct bpf_reg_state *rold, *rcur; 2458 2459 int i; 2459 2460 ··· 2468 2467 /* If the ranges were not the same, but everything else was and 2469 2468 * we didn't do a variable access into a map then we are a-ok. 2470 2469 */ 2471 - if (!env->varlen_map_value_access && 2470 + if (!varlen_map_access && 2472 2471 rold->type == rcur->type && rold->imm == rcur->imm) 2473 2472 continue; 2474 2473 2474 + /* If we didn't map access then again we don't care about the 2475 + * mismatched range values and it's ok if our old type was 2476 + * UNKNOWN and we didn't go to a NOT_INIT'ed reg. 2477 + */ 2475 2478 if (rold->type == NOT_INIT || 2476 - (rold->type == UNKNOWN_VALUE && rcur->type != NOT_INIT)) 2479 + (!varlen_map_access && rold->type == UNKNOWN_VALUE && 2480 + rcur->type != NOT_INIT)) 2477 2481 continue; 2478 2482 2479 2483 if (rold->type == PTR_TO_PACKET && rcur->type == PTR_TO_PACKET &&
+8 -11
kernel/events/core.c
··· 903 903 */ 904 904 cpuctx = __get_cpu_context(ctx); 905 905 906 - /* Only set/clear cpuctx->cgrp if current task uses event->cgrp. */ 907 - if (perf_cgroup_from_task(current, ctx) != event->cgrp) { 908 - /* 909 - * We are removing the last cpu event in this context. 910 - * If that event is not active in this cpu, cpuctx->cgrp 911 - * should've been cleared by perf_cgroup_switch. 912 - */ 913 - WARN_ON_ONCE(!add && cpuctx->cgrp); 914 - return; 915 - } 916 - cpuctx->cgrp = add ? event->cgrp : NULL; 906 + /* 907 + * cpuctx->cgrp is NULL until a cgroup event is sched in or 908 + * ctx->nr_cgroup == 0 . 909 + */ 910 + if (add && perf_cgroup_from_task(current, ctx) == event->cgrp) 911 + cpuctx->cgrp = event->cgrp; 912 + else if (!add) 913 + cpuctx->cgrp = NULL; 917 914 } 918 915 919 916 #else /* !CONFIG_CGROUP_PERF */
+1
kernel/kcov.c
··· 7 7 #include <linux/fs.h> 8 8 #include <linux/mm.h> 9 9 #include <linux/printk.h> 10 + #include <linux/sched.h> 10 11 #include <linux/slab.h> 11 12 #include <linux/spinlock.h> 12 13 #include <linux/vmalloc.h>
+57 -54
kernel/locking/lockdep.c
··· 506 506 name = class->name; 507 507 if (!name) { 508 508 name = __get_key_name(class->key, str); 509 - printk("%s", name); 509 + printk(KERN_CONT "%s", name); 510 510 } else { 511 - printk("%s", name); 511 + printk(KERN_CONT "%s", name); 512 512 if (class->name_version > 1) 513 - printk("#%d", class->name_version); 513 + printk(KERN_CONT "#%d", class->name_version); 514 514 if (class->subclass) 515 - printk("/%d", class->subclass); 515 + printk(KERN_CONT "/%d", class->subclass); 516 516 } 517 517 } 518 518 ··· 522 522 523 523 get_usage_chars(class, usage); 524 524 525 - printk(" ("); 525 + printk(KERN_CONT " ("); 526 526 __print_lock_name(class); 527 - printk("){%s}", usage); 527 + printk(KERN_CONT "){%s}", usage); 528 528 } 529 529 530 530 static void print_lockdep_cache(struct lockdep_map *lock) ··· 536 536 if (!name) 537 537 name = __get_key_name(lock->key->subkeys, str); 538 538 539 - printk("%s", name); 539 + printk(KERN_CONT "%s", name); 540 540 } 541 541 542 542 static void print_lock(struct held_lock *hlock) ··· 551 551 barrier(); 552 552 553 553 if (!class_idx || (class_idx - 1) >= MAX_LOCKDEP_KEYS) { 554 - printk("<RELEASED>\n"); 554 + printk(KERN_CONT "<RELEASED>\n"); 555 555 return; 556 556 } 557 557 558 558 print_lock_name(lock_classes + class_idx - 1); 559 - printk(", at: "); 560 - print_ip_sym(hlock->acquire_ip); 559 + printk(KERN_CONT ", at: [<%p>] %pS\n", 560 + (void *)hlock->acquire_ip, (void *)hlock->acquire_ip); 561 561 } 562 562 563 563 static void lockdep_print_held_locks(struct task_struct *curr) ··· 792 792 793 793 printk("\nnew class %p: %s", class->key, class->name); 794 794 if (class->name_version > 1) 795 - printk("#%d", class->name_version); 796 - printk("\n"); 795 + printk(KERN_CONT "#%d", class->name_version); 796 + printk(KERN_CONT "\n"); 797 797 dump_stack(); 798 798 799 799 if (!graph_lock()) { ··· 1071 1071 return 0; 1072 1072 printk("\n-> #%u", depth); 1073 1073 print_lock_name(target->class); 1074 - printk(":\n"); 1074 + printk(KERN_CONT ":\n"); 1075 1075 print_stack_trace(&target->trace, 6); 1076 1076 1077 1077 return 0;
··· 1102 1102 if (parent != source) { 1103 1103 printk("Chain exists of:\n "); 1104 1104 __print_lock_name(source); 1105 - printk(" --> "); 1105 + printk(KERN_CONT " --> "); 1106 1106 __print_lock_name(parent); 1107 - printk(" --> "); 1107 + printk(KERN_CONT " --> "); 1108 1108 __print_lock_name(target); 1109 - printk("\n\n"); 1109 + printk(KERN_CONT "\n\n"); 1110 1110 } 1111 1111 1112 1112 printk(" Possible unsafe locking scenario:\n\n"); ··· 1114 1114 printk(" ---- ----\n"); 1115 1115 printk(" lock("); 1116 1116 __print_lock_name(target); 1117 - printk(");\n"); 1117 + printk(KERN_CONT ");\n"); 1118 1118 printk(" lock("); 1119 1119 __print_lock_name(parent); 1120 - printk(");\n"); 1120 + printk(KERN_CONT ");\n"); 1121 1121 printk(" lock("); 1122 1122 __print_lock_name(target); 1123 - printk(");\n"); 1123 + printk(KERN_CONT ");\n"); 1124 1124 printk(" lock("); 1125 1125 __print_lock_name(source); 1126 - printk(");\n"); 1126 + printk(KERN_CONT ");\n"); 1127 1127 printk("\n *** DEADLOCK ***\n\n"); 1128 1128 } 1129 1129 ··· 1359 1359 1360 1360 printk("%*s->", depth, ""); 1361 1361 print_lock_name(class); 1362 - printk(" ops: %lu", class->ops); 1363 - printk(" {\n"); 1362 + printk(KERN_CONT " ops: %lu", class->ops); 1363 + printk(KERN_CONT " {\n"); 1364 1364 1365 1365 for (bit = 0; bit < LOCK_USAGE_STATES; bit++) { 1366 1366 if (class->usage_mask & (1 << bit)) { 1367 1367 int len = depth; 1368 1368 1369 1369 len += printk("%*s %s", depth, "", usage_str[bit]); 1370 - len += printk(" at:\n"); 1370 + len += printk(KERN_CONT " at:\n"); 1371 1371 print_stack_trace(class->usage_traces + bit, len); 1372 1372 } 1373 1373 } 1374 1374 printk("%*s }\n", depth, ""); 1375 1375 1376 - printk("%*s ... key at: ",depth,""); 1377 - print_ip_sym((unsigned long)class->key); 1376 + printk("%*s ... key at: [<%p>] %pS\n", 1377 + depth, "", class->key, class->key); 1378 1378 } 1379 1379 1380 1380 /*
··· 1437 1437 if (middle_class != unsafe_class) { 1438 1438 printk("Chain exists of:\n "); 1439 1439 __print_lock_name(safe_class); 1440 - printk(" --> "); 1440 + printk(KERN_CONT " --> "); 1441 1441 __print_lock_name(middle_class); 1442 - printk(" --> "); 1442 + printk(KERN_CONT " --> "); 1443 1443 __print_lock_name(unsafe_class); 1444 - printk("\n\n"); 1444 + printk(KERN_CONT "\n\n"); 1445 1445 } 1446 1446 1447 1447 printk(" Possible interrupt unsafe locking scenario:\n\n"); ··· 1449 1449 printk(" ---- ----\n"); 1450 1450 printk(" lock("); 1451 1451 __print_lock_name(unsafe_class); 1452 - printk(");\n"); 1452 + printk(KERN_CONT ");\n"); 1453 1453 printk(" local_irq_disable();\n"); 1454 1454 printk(" lock("); 1455 1455 __print_lock_name(safe_class); 1456 - printk(");\n"); 1456 + printk(KERN_CONT ");\n"); 1457 1457 printk(" lock("); 1458 1458 __print_lock_name(middle_class); 1459 - printk(");\n"); 1459 + printk(KERN_CONT ");\n"); 1460 1460 printk(" <Interrupt>\n"); 1461 1461 printk(" lock("); 1462 1462 __print_lock_name(safe_class); 1463 - printk(");\n"); 1463 + printk(KERN_CONT ");\n"); 1464 1464 printk("\n *** DEADLOCK ***\n\n"); 1465 1465 } 1466 1466 ··· 1497 1497 print_lock(prev); 1498 1498 printk("which would create a new lock dependency:\n"); 1499 1499 print_lock_name(hlock_class(prev)); 1500 - printk(" ->"); 1500 + printk(KERN_CONT " ->"); 1501 1501 print_lock_name(hlock_class(next)); 1502 - printk("\n"); 1502 + printk(KERN_CONT "\n"); 1503 1503 1504 1504 printk("\nbut this new dependency connects a %s-irq-safe lock:\n", 1505 1505 irqclass); ··· 1521 1521 1522 1522 lockdep_print_held_locks(curr); 1523 1523 1524 - printk("\nthe dependencies between %s-irq-safe lock", irqclass); 1525 - printk(" and the holding lock:\n"); 1524 + printk("\nthe dependencies between %s-irq-safe lock and the holding lock:\n", irqclass); 1526 1525 if (!save_trace(&prev_root->trace))
1527 1526 return 0; 1528 1527 print_shortest_lock_dependencies(backwards_entry, prev_root); ··· 1693 1694 printk(" ----\n"); 1694 1695 printk(" lock("); 1695 1696 __print_lock_name(prev); 1696 - printk(");\n"); 1697 + printk(KERN_CONT ");\n"); 1697 1698 printk(" lock("); 1698 1699 __print_lock_name(next); 1699 - printk(");\n"); 1700 + printk(KERN_CONT ");\n"); 1700 1701 printk("\n *** DEADLOCK ***\n\n"); 1701 1702 printk(" May be due to missing lock nesting notation\n\n"); 1702 1703 } ··· 1890 1891 graph_unlock(); 1891 1892 printk("\n new dependency: "); 1892 1893 print_lock_name(hlock_class(prev)); 1893 - printk(" => "); 1894 + printk(KERN_CONT " => "); 1894 1895 print_lock_name(hlock_class(next)); 1895 - printk("\n"); 1896 + printk(KERN_CONT "\n"); 1896 1897 dump_stack(); 1897 1898 return graph_lock(); 1898 1899 } ··· 2342 2343 printk(" ----\n"); 2343 2344 printk(" lock("); 2344 2345 __print_lock_name(class); 2345 - printk(");\n"); 2346 + printk(KERN_CONT ");\n"); 2346 2347 printk(" <Interrupt>\n"); 2347 2348 printk(" lock("); 2348 2349 __print_lock_name(class); 2349 - printk(");\n"); 2350 + printk(KERN_CONT ");\n"); 2350 2351 printk("\n *** DEADLOCK ***\n\n"); 2351 2352 } 2352 2353
··· 2521 2522 void print_irqtrace_events(struct task_struct *curr) 2522 2523 { 2523 2524 printk("irq event stamp: %u\n", curr->irq_events); 2524 - printk("hardirqs last enabled at (%u): ", curr->hardirq_enable_event); 2525 - print_ip_sym(curr->hardirq_enable_ip); 2526 - printk("hardirqs last disabled at (%u): ", curr->hardirq_disable_event); 2527 - print_ip_sym(curr->hardirq_disable_ip); 2528 - printk("softirqs last enabled at (%u): ", curr->softirq_enable_event); 2529 - print_ip_sym(curr->softirq_enable_ip); 2530 - printk("softirqs last disabled at (%u): ", curr->softirq_disable_event); 2531 - print_ip_sym(curr->softirq_disable_ip); 2525 + printk("hardirqs last enabled at (%u): [<%p>] %pS\n", 2526 + curr->hardirq_enable_event, (void *)curr->hardirq_enable_ip, 2527 + (void *)curr->hardirq_enable_ip); 2528 + printk("hardirqs last disabled at (%u): [<%p>] %pS\n", 2529 + curr->hardirq_disable_event, (void *)curr->hardirq_disable_ip, 2530 + (void *)curr->hardirq_disable_ip); 2531 + printk("softirqs last enabled at (%u): [<%p>] %pS\n", 2532 + curr->softirq_enable_event, (void *)curr->softirq_enable_ip, 2533 + (void *)curr->softirq_enable_ip); 2534 + printk("softirqs last disabled at (%u): [<%p>] %pS\n", 2535 + curr->softirq_disable_event, (void *)curr->softirq_disable_ip, 2536 + (void *)curr->softirq_disable_ip); 2532 2537 } 2533 2538 2534 2539 static int HARDIRQ_verbose(struct lock_class *class) ··· 3238 3235 if (very_verbose(class)) { 3239 3236 printk("\nacquire class [%p] %s", class->key, class->name); 3240 3237 if (class->name_version > 1) 3241 - printk("#%d", class->name_version); 3242 - printk("\n"); 3238 + printk(KERN_CONT "#%d", class->name_version); 3239 + printk(KERN_CONT "\n"); 3243 3240 dump_stack(); 3244 3241 } 3245 3242 ··· 3381 3378 printk("%s/%d is trying to release lock (", 3382 3379 curr->comm, task_pid_nr(curr)); 3383 3380 print_lockdep_cache(lock); 3384 - printk(") at:\n"); 3381 + printk(KERN_CONT ") at:\n"); 3385 3382 print_ip_sym(ip); 3386 3383 printk("but there are no more locks to release!\n"); 3387 3384 printk("\nother info that might help us debug this:\n"); ··· 3874 3871 printk("%s/%d is trying to contend lock (", 3875 3872 curr->comm, task_pid_nr(curr)); 3876 3873 print_lockdep_cache(lock); 3877 - printk(") at:\n"); 3874 + printk(KERN_CONT ") at:\n"); 3878 3875 print_ip_sym(ip); 3879 3876 printk("but there are no locks held!\n"); 3880 3877 printk("\nother info that might help us debug this:\n");
+3 -2
kernel/module.c
··· 1301 1301 goto bad_version; 1302 1302 } 1303 1303 1304 - pr_warn("%s: no symbol version for %s\n", mod->name, symname); 1305 - return 0; 1304 + /* Broken toolchain. Warn once, then let it go.. */ 1305 + pr_warn_once("%s: no symbol version for %s\n", mod->name, symname); 1306 + return 1; 1306 1307 1307 1308 bad_version: 1308 1309 pr_warn("%s: disagrees about version of symbol %s\n",
+3 -1
kernel/sched/auto_group.c
··· 212 212 { 213 213 static unsigned long next = INITIAL_JIFFIES; 214 214 struct autogroup *ag; 215 + unsigned long shares; 215 216 int err; 216 217 217 218 if (nice < MIN_NICE || nice > MAX_NICE) ··· 231 230 232 231 next = HZ / 10 + jiffies; 233 232 ag = autogroup_task_get(p); 233 + shares = scale_load(sched_prio_to_weight[nice + 20]); 234 234 235 235 down_write(&ag->lock); 236 - err = sched_group_set_shares(ag->tg, sched_prio_to_weight[nice + 20]); 236 + err = sched_group_set_shares(ag->tg, shares); 237 237 if (!err) 238 238 ag->nice = nice; 239 239 up_write(&ag->lock);
+8
lib/debugobjects.c
··· 362 362 363 363 __debug_object_init(addr, descr, 0); 364 364 } 365 + EXPORT_SYMBOL_GPL(debug_object_init); 365 366 366 367 /** 367 368 * debug_object_init_on_stack - debug checks when an object on stack is ··· 377 376 378 377 __debug_object_init(addr, descr, 1); 379 378 } 379 + EXPORT_SYMBOL_GPL(debug_object_init_on_stack); 380 380 381 381 /** 382 382 * debug_object_activate - debug checks when an object is activated ··· 451 449 } 452 450 return 0; 453 451 } 452 + EXPORT_SYMBOL_GPL(debug_object_activate); 454 453 455 454 /** 456 455 * debug_object_deactivate - debug checks when an object is deactivated ··· 499 496 500 497 raw_spin_unlock_irqrestore(&db->lock, flags); 501 498 } 499 + EXPORT_SYMBOL_GPL(debug_object_deactivate); 502 500 503 501 /** 504 502 * debug_object_destroy - debug checks when an object is destroyed ··· 546 542 out_unlock: 547 543 raw_spin_unlock_irqrestore(&db->lock, flags); 548 544 } 545 + EXPORT_SYMBOL_GPL(debug_object_destroy); 549 546 550 547 /** 551 548 * debug_object_free - debug checks when an object is freed ··· 587 582 out_unlock: 588 583 raw_spin_unlock_irqrestore(&db->lock, flags); 589 584 } 585 + EXPORT_SYMBOL_GPL(debug_object_free); 590 586 591 587 /** 592 588 * debug_object_assert_init - debug checks when object should be init-ed ··· 632 626 633 627 raw_spin_unlock_irqrestore(&db->lock, flags); 634 628 } 629 + EXPORT_SYMBOL_GPL(debug_object_assert_init); 635 630 636 631 /** 637 632 * debug_object_active_state - debug checks object usage state machine ··· 680 673 681 674 raw_spin_unlock_irqrestore(&db->lock, flags); 682 675 } 676 + EXPORT_SYMBOL_GPL(debug_object_active_state); 683 677 684 678 #ifdef CONFIG_DEBUG_OBJECTS_FREE 685 679 static void __debug_check_no_obj_freed(const void *address, unsigned long size)
+6 -1
lib/mpi/mpi-pow.c
··· 64 64 if (!esize) { 65 65 /* Exponent is zero, result is 1 mod MOD, i.e., 1 or 0 66 66 * depending on if MOD equals 1. */ 67 - rp[0] = 1; 68 67 res->nlimbs = (msize == 1 && mod->d[0] == 1) ? 0 : 1; 68 + if (res->nlimbs) { 69 + if (mpi_resize(res, 1) < 0) 70 + goto enomem; 71 + rp = res->d; 72 + rp[0] = 1; 73 + } 69 74 res->sign = 0; 70 75 goto leave; 71 76 }
+29
lib/test_kasan.c
··· 20 20 #include <linux/uaccess.h> 21 21 #include <linux/module.h> 22 22 23 + /* 24 + * Note: test functions are marked noinline so that their names appear in 25 + * reports. 26 + */ 27 + 23 28 static noinline void __init kmalloc_oob_right(void) 24 29 { 25 30 char *ptr; ··· 416 411 kfree(kmem); 417 412 } 418 413 414 + static noinline void __init use_after_scope_test(void) 415 + { 416 + volatile char *volatile p; 417 + 418 + pr_info("use-after-scope on int\n"); 419 + { 420 + int local = 0; 421 + 422 + p = (char *)&local; 423 + } 424 + p[0] = 1; 425 + p[3] = 1; 426 + 427 + pr_info("use-after-scope on array\n"); 428 + { 429 + char local[1024] = {0}; 430 + 431 + p = local; 432 + } 433 + p[0] = 1; 434 + p[1023] = 1; 435 + } 436 + 419 437 static int __init kmalloc_tests_init(void) 420 438 { 421 439 kmalloc_oob_right(); ··· 464 436 kasan_global_oob(); 465 437 ksize_unpoisons_memory(); 466 438 copy_user_test(); 439 + use_after_scope_test(); 467 440 return -EAGAIN; 468 441 } 469 442
+2 -2
mm/huge_memory.c
··· 1456 1456 new_ptl = pmd_lockptr(mm, new_pmd); 1457 1457 if (new_ptl != old_ptl) 1458 1458 spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING); 1459 - if (pmd_present(*old_pmd) && pmd_dirty(*old_pmd)) 1460 - force_flush = true; 1461 1459 pmd = pmdp_huge_get_and_clear(mm, old_addr, old_pmd); 1460 + if (pmd_present(pmd) && pmd_dirty(pmd)) 1461 + force_flush = true; 1462 1462 VM_BUG_ON(!pmd_none(*new_pmd)); 1463 1463 1464 1464 if (pmd_move_must_withdraw(new_ptl, old_ptl) &&
+19
mm/kasan/kasan.c
··· 764 764 void __asan_handle_no_return(void) {} 765 765 EXPORT_SYMBOL(__asan_handle_no_return); 766 766 767 + /* Emitted by compiler to poison large objects when they go out of scope. */ 768 + void __asan_poison_stack_memory(const void *addr, size_t size) 769 + { 770 + /* 771 + * Addr is KASAN_SHADOW_SCALE_SIZE-aligned and the object is surrounded 772 + * by redzones, so we simply round up size to simplify logic. 773 + */ 774 + kasan_poison_shadow(addr, round_up(size, KASAN_SHADOW_SCALE_SIZE), 775 + KASAN_USE_AFTER_SCOPE); 776 + } 777 + EXPORT_SYMBOL(__asan_poison_stack_memory); 778 + 779 + /* Emitted by compiler to unpoison large objects when they go into scope. */ 780 + void __asan_unpoison_stack_memory(const void *addr, size_t size) 781 + { 782 + kasan_unpoison_shadow(addr, size); 783 + } 784 + EXPORT_SYMBOL(__asan_unpoison_stack_memory); 785 + 767 786 #ifdef CONFIG_MEMORY_HOTPLUG 768 787 static int kasan_mem_notifier(struct notifier_block *nb, 769 788 unsigned long action, void *data)
+4
mm/kasan/kasan.h
··· 21 21 #define KASAN_STACK_MID 0xF2 22 22 #define KASAN_STACK_RIGHT 0xF3 23 23 #define KASAN_STACK_PARTIAL 0xF4 24 + #define KASAN_USE_AFTER_SCOPE 0xF8 24 25 25 26 /* Don't break randconfig/all*config builds */ 26 27 #ifndef KASAN_ABI_VERSION ··· 53 52 unsigned long has_dynamic_init; /* This needed for C++ */ 54 53 #if KASAN_ABI_VERSION >= 4 55 54 struct kasan_source_location *location; 55 + #endif 56 + #if KASAN_ABI_VERSION >= 5 57 + char *odr_indicator; 56 58 #endif 57 59 }; 58 60
+3
mm/kasan/report.c
··· 90 90 case KASAN_KMALLOC_FREE: 91 91 bug_type = "use-after-free"; 92 92 break; 93 + case KASAN_USE_AFTER_SCOPE: 94 + bug_type = "use-after-scope"; 95 + break; 93 96 } 94 97 95 98 pr_err("BUG: KASAN: %s in %pS at addr %p\n",
+2
mm/khugepaged.c
··· 103 103 .mm_head = LIST_HEAD_INIT(khugepaged_scan.mm_head), 104 104 }; 105 105 106 + #ifdef CONFIG_SYSFS 106 107 static ssize_t scan_sleep_millisecs_show(struct kobject *kobj, 107 108 struct kobj_attribute *attr, 108 109 char *buf) ··· 296 295 .attrs = khugepaged_attr, 297 296 .name = "khugepaged", 298 297 }; 298 + #endif /* CONFIG_SYSFS */ 299 299 300 300 #define VM_NO_KHUGEPAGED (VM_SPECIAL | VM_HUGETLB) 301 301
+5 -2
mm/mlock.c
··· 190 190 */ 191 191 spin_lock_irq(zone_lru_lock(zone)); 192 192 193 - nr_pages = hpage_nr_pages(page); 194 - if (!TestClearPageMlocked(page)) 193 + if (!TestClearPageMlocked(page)) { 194 + /* Potentially, PTE-mapped THP: do not skip the rest PTEs */ 195 + nr_pages = 1; 195 196 goto unlock_out; 197 + } 196 198 199 + nr_pages = hpage_nr_pages(page); 197 200 __mod_zone_page_state(zone, NR_MLOCK, -nr_pages); 198 201 199 202 if (__munlock_isolate_lru_page(page, true)) {
+11 -7
mm/mremap.c
··· 149 149 if (pte_none(*old_pte)) 150 150 continue; 151 151 152 - /* 153 - * We are remapping a dirty PTE, make sure to 154 - * flush TLB before we drop the PTL for the 155 - * old PTE or we may race with page_mkclean(). 156 - */ 157 - if (pte_present(*old_pte) && pte_dirty(*old_pte)) 158 - force_flush = true; 159 152 pte = ptep_get_and_clear(mm, old_addr, old_pte); 153 + /* 154 + * If we are remapping a dirty PTE, make sure 155 + * to flush TLB before we drop the PTL for the 156 + * old PTE or we may race with page_mkclean(). 157 + * 158 + * This check has to be done after we removed the 159 + * old PTE from page tables or another thread may 160 + * dirty it after the check and before the removal. 161 + */ 162 + if (pte_present(pte) && pte_dirty(pte)) 163 + force_flush = true; 160 164 pte = move_pte(pte, new_vma->vm_page_prot, old_addr, new_addr); 161 165 pte = move_soft_dirty_pte(pte); 162 166 set_pte_at(mm, new_addr, new_pte, pte);
+14 -1
mm/shmem.c
··· 1848 1848 return error; 1849 1849 } 1850 1850 1851 + /* 1852 + * This is like autoremove_wake_function, but it removes the wait queue 1853 + * entry unconditionally - even if something else had already woken the 1854 + * target. 1855 + */ 1856 + static int synchronous_wake_function(wait_queue_t *wait, unsigned mode, int sync, void *key) 1857 + { 1858 + int ret = default_wake_function(wait, mode, sync, key); 1859 + list_del_init(&wait->task_list); 1860 + return ret; 1861 + } 1862 + 1851 1863 static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) 1852 1864 { 1853 1865 struct inode *inode = file_inode(vma->vm_file); ··· 1895 1883 vmf->pgoff >= shmem_falloc->start && 1896 1884 vmf->pgoff < shmem_falloc->next) { 1897 1885 wait_queue_head_t *shmem_falloc_waitq; 1898 - DEFINE_WAIT(shmem_fault_wait); 1886 + DEFINE_WAIT_FUNC(shmem_fault_wait, synchronous_wake_function); 1899 1887 1900 1888 ret = VM_FAULT_NOPAGE; 1901 1889 if ((vmf->flags & FAULT_FLAG_ALLOW_RETRY) && ··· 2677 2665 spin_lock(&inode->i_lock); 2678 2666 inode->i_private = NULL; 2679 2667 wake_up_all(&shmem_falloc_waitq); 2668 + WARN_ON_ONCE(!list_empty(&shmem_falloc_waitq.task_list)); 2680 2669 spin_unlock(&inode->i_lock); 2681 2670 error = 0; 2682 2671 goto out;
+4 -4
mm/truncate.c
··· 283 283 284 284 if (!trylock_page(page)) 285 285 continue; 286 - WARN_ON(page_to_pgoff(page) != index); 286 + WARN_ON(page_to_index(page) != index); 287 287 if (PageWriteback(page)) { 288 288 unlock_page(page); 289 289 continue; ··· 371 371 } 372 372 373 373 lock_page(page); 374 - WARN_ON(page_to_pgoff(page) != index); 374 + WARN_ON(page_to_index(page) != index); 375 375 wait_on_page_writeback(page); 376 376 truncate_inode_page(mapping, page); 377 377 unlock_page(page); ··· 492 492 if (!trylock_page(page)) 493 493 continue; 494 494 495 - WARN_ON(page_to_pgoff(page) != index); 495 + WARN_ON(page_to_index(page) != index); 496 496 497 497 /* Middle of THP: skip */ 498 498 if (PageTransTail(page)) { ··· 612 612 } 613 613 614 614 lock_page(page); 615 - WARN_ON(page_to_pgoff(page) != index); 615 + WARN_ON(page_to_index(page) != index); 616 616 if (page->mapping != mapping) { 617 617 unlock_page(page); 618 618 continue;
+2
mm/vmscan.c
··· 2354 2354 } 2355 2355 } 2356 2356 2357 + cond_resched(); 2358 + 2357 2359 if (nr_reclaimed < nr_to_reclaim || scan_adjusted) 2358 2360 continue; 2359 2361
+1 -1
mm/workingset.c
··· 348 348 shadow_nodes = list_lru_shrink_count(&workingset_shadow_nodes, sc); 349 349 local_irq_enable(); 350 350 351 - if (memcg_kmem_enabled()) { 351 + if (sc->memcg) { 352 352 pages = mem_cgroup_node_nr_lru_pages(sc->memcg, sc->nid, 353 353 LRU_ALL_FILE); 354 354 } else {
+2 -2
net/batman-adv/translation-table.c
··· 3282 3282 &tvlv_tt_data, 3283 3283 &tt_change, 3284 3284 &tt_len); 3285 - if (!tt_len) 3285 + if (!tt_len || !tvlv_len) 3286 3286 goto unlock; 3287 3287 3288 3288 /* Copy the last orig_node's OGM buffer */ ··· 3300 3300 &tvlv_tt_data, 3301 3301 &tt_change, 3302 3302 &tt_len); 3303 - if (!tt_len) 3303 + if (!tt_len || !tvlv_len) 3304 3304 goto out; 3305 3305 3306 3306 /* fill the rest of the tvlv with the real TT entries */
+2 -2
net/bluetooth/6lowpan.c
··· 1090 1090 { 1091 1091 struct hci_conn *hcon; 1092 1092 struct hci_dev *hdev; 1093 - bdaddr_t *src = BDADDR_ANY; 1094 1093 int n; 1095 1094 1096 1095 n = sscanf(buf, "%hhx:%hhx:%hhx:%hhx:%hhx:%hhx %hhu", ··· 1100 1101 if (n < 7) 1101 1102 return -EINVAL; 1102 1103 1103 - hdev = hci_get_route(addr, src); 1104 + /* The LE_PUBLIC address type is ignored because of BDADDR_ANY */ 1105 + hdev = hci_get_route(addr, BDADDR_ANY, BDADDR_LE_PUBLIC); 1104 1106 if (!hdev) 1105 1107 return -ENOENT; 1106 1108
+24 -2
net/bluetooth/hci_conn.c
··· 613 613 return 0; 614 614 } 615 615 616 - struct hci_dev *hci_get_route(bdaddr_t *dst, bdaddr_t *src) 616 + struct hci_dev *hci_get_route(bdaddr_t *dst, bdaddr_t *src, uint8_t src_type) 617 617 { 618 618 int use_src = bacmp(src, BDADDR_ANY); 619 619 struct hci_dev *hdev = NULL, *d; ··· 634 634 */ 635 635 636 636 if (use_src) { 637 - if (!bacmp(&d->bdaddr, src)) { 637 + bdaddr_t id_addr; 638 + u8 id_addr_type; 639 + 640 + if (src_type == BDADDR_BREDR) { 641 + if (!lmp_bredr_capable(d)) 642 + continue; 643 + bacpy(&id_addr, &d->bdaddr); 644 + id_addr_type = BDADDR_BREDR; 645 + } else { 646 + if (!lmp_le_capable(d)) 647 + continue; 648 + 649 + hci_copy_identity_address(d, &id_addr, 650 + &id_addr_type); 651 + 652 + /* Convert from HCI to three-value type */ 653 + if (id_addr_type == ADDR_LE_DEV_PUBLIC) 654 + id_addr_type = BDADDR_LE_PUBLIC; 655 + else 656 + id_addr_type = BDADDR_LE_RANDOM; 657 + } 658 + 659 + if (!bacmp(&id_addr, src) && id_addr_type == src_type) { 638 660 hdev = d; break; 639 661 } 640 662 } else {
+1 -1
net/bluetooth/l2cap_core.c
··· 7060 7060 BT_DBG("%pMR -> %pMR (type %u) psm 0x%2.2x", &chan->src, dst, 7061 7061 dst_type, __le16_to_cpu(psm)); 7062 7062 7063 - hdev = hci_get_route(dst, &chan->src); 7063 + hdev = hci_get_route(dst, &chan->src, chan->src_type); 7064 7064 if (!hdev) 7065 7065 return -EHOSTUNREACH; 7066 7066
+1 -1
net/bluetooth/rfcomm/tty.c
··· 178 178 struct hci_dev *hdev; 179 179 struct hci_conn *conn; 180 180 181 - hdev = hci_get_route(&dev->dst, &dev->src); 181 + hdev = hci_get_route(&dev->dst, &dev->src, BDADDR_BREDR); 182 182 if (!hdev) 183 183 return; 184 184
+1 -1
net/bluetooth/sco.c
··· 219 219 220 220 BT_DBG("%pMR -> %pMR", &sco_pi(sk)->src, &sco_pi(sk)->dst); 221 221 222 - hdev = hci_get_route(&sco_pi(sk)->dst, &sco_pi(sk)->src); 222 + hdev = hci_get_route(&sco_pi(sk)->dst, &sco_pi(sk)->src, BDADDR_BREDR); 223 223 if (!hdev) 224 224 return -EHOSTUNREACH; 225 225
+1
net/bridge/br_sysfs_br.c
··· 898 898 if (!br->ifobj) { 899 899 pr_info("%s: can't add kobject (directory) %s/%s\n", 900 900 __func__, dev->name, SYSFS_BRIDGE_PORT_SUBDIR); 901 + err = -ENOMEM; 901 902 goto out3; 902 903 } 903 904 return 0;
+1 -4
net/caif/caif_socket.c
··· 1107 1107 1108 1108 static int __init caif_sktinit_module(void) 1109 1109 { 1110 - int err = sock_register(&caif_family_ops); 1111 - if (!err) 1112 - return err; 1113 - return 0; 1110 + return sock_register(&caif_family_ops); 1114 1111 } 1115 1112 1116 1113 static void __exit caif_sktexit_module(void)
+10 -8
net/can/bcm.c
··· 77 77 (CAN_EFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG) : \ 78 78 (CAN_SFF_MASK | CAN_EFF_FLAG | CAN_RTR_FLAG)) 79 79 80 - #define CAN_BCM_VERSION "20160617" 80 + #define CAN_BCM_VERSION "20161123" 81 81 82 82 MODULE_DESCRIPTION("PF_CAN broadcast manager protocol"); 83 83 MODULE_LICENSE("Dual BSD/GPL"); ··· 109 109 u32 count; 110 110 u32 nframes; 111 111 u32 currframe; 112 - struct canfd_frame *frames; 113 - struct canfd_frame *last_frames; 112 + /* void pointers to arrays of struct can[fd]_frame */ 113 + void *frames; 114 + void *last_frames; 114 115 struct canfd_frame sframe; 115 116 struct canfd_frame last_sframe; 116 117 struct sock *sk; ··· 682 681 683 682 if (op->flags & RX_FILTER_ID) { 684 683 /* the easiest case */ 685 - bcm_rx_update_and_send(op, &op->last_frames[0], rxframe); 684 + bcm_rx_update_and_send(op, op->last_frames, rxframe); 686 685 goto rx_starttimer; 687 686 } 688 687 ··· 1069 1068 1070 1069 if (msg_head->nframes) { 1071 1070 /* update CAN frames content */ 1072 - err = memcpy_from_msg((u8 *)op->frames, msg, 1071 + err = memcpy_from_msg(op->frames, msg, 1073 1072 msg_head->nframes * op->cfsiz); 1074 1073 if (err < 0) 1075 1074 return err; ··· 1119 1118 } 1120 1119 1121 1120 if (msg_head->nframes) { 1122 - err = memcpy_from_msg((u8 *)op->frames, msg, 1121 + err = memcpy_from_msg(op->frames, msg, 1123 1122 msg_head->nframes * op->cfsiz); 1124 1123 if (err < 0) { 1125 1124 if (op->frames != &op->sframe)
··· 1164 1163 /* check flags */ 1165 1164 1166 1165 if (op->flags & RX_RTR_FRAME) { 1166 + struct canfd_frame *frame0 = op->frames; 1167 1167 1168 1168 /* no timers in RTR-mode */ 1169 1169 hrtimer_cancel(&op->thrtimer); ··· 1176 1174 * prevent a full-load-loopback-test ... ;-] 1177 1175 */ 1178 1176 if ((op->flags & TX_CP_CAN_ID) || 1179 - (op->frames[0].can_id == op->can_id)) 1180 - op->frames[0].can_id = op->can_id & ~CAN_RTR_FLAG; 1177 + (frame0->can_id == op->can_id)) 1178 + frame0->can_id = op->can_id & ~CAN_RTR_FLAG; 1181 1179 1182 1180 } else { 1183 1181 if (op->flags & SETTIMER) {
+3
net/can/raw.c
··· 499 499 if (optlen % sizeof(struct can_filter) != 0) 500 500 return -EINVAL; 501 501 502 + if (optlen > CAN_RAW_FILTER_MAX * sizeof(struct can_filter)) 503 + return -EINVAL; 504 + 502 505 count = optlen / sizeof(struct can_filter); 503 506 504 507 if (count > 1) {
+1
net/core/ethtool.c
··· 2479 2479 case ETHTOOL_GET_TS_INFO: 2480 2480 case ETHTOOL_GEEE: 2481 2481 case ETHTOOL_GTUNABLE: 2482 + case ETHTOOL_GLINKSETTINGS: 2482 2483 break; 2483 2484 default: 2484 2485 if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
+2 -4
net/core/flow.c
··· 95 95 list_for_each_entry_safe(fce, n, &gc_list, u.gc_list) { 96 96 flow_entry_kill(fce, xfrm); 97 97 atomic_dec(&xfrm->flow_cache_gc_count); 98 - WARN_ON(atomic_read(&xfrm->flow_cache_gc_count) < 0); 99 98 } 100 99 } 101 100 ··· 235 236 if (fcp->hash_count > fc->high_watermark) 236 237 flow_cache_shrink(fc, fcp); 237 238 238 - if (fcp->hash_count > 2 * fc->high_watermark || 239 - atomic_read(&net->xfrm.flow_cache_gc_count) > fc->high_watermark) { 240 - atomic_inc(&net->xfrm.flow_cache_genid); 239 + if (atomic_read(&net->xfrm.flow_cache_gc_count) > 240 + 2 * num_online_cpus() * fc->high_watermark) { 241 241 flo = ERR_PTR(-ENOBUFS); 242 242 goto ret_object; 243 243 }
+1 -1
net/core/flow_dissector.c
··· 1013 1013 return 0; 1014 1014 } 1015 1015 1016 - late_initcall_sync(init_default_flow_dissectors); 1016 + core_initcall(init_default_flow_dissectors);
+3 -3
net/core/rtnetlink.c
··· 931 931 + nla_total_size(4) /* IFLA_PROMISCUITY */ 932 932 + nla_total_size(4) /* IFLA_NUM_TX_QUEUES */ 933 933 + nla_total_size(4) /* IFLA_NUM_RX_QUEUES */ 934 - + nla_total_size(4) /* IFLA_MAX_GSO_SEGS */ 935 - + nla_total_size(4) /* IFLA_MAX_GSO_SIZE */ 934 + + nla_total_size(4) /* IFLA_GSO_MAX_SEGS */ 935 + + nla_total_size(4) /* IFLA_GSO_MAX_SIZE */ 936 936 + nla_total_size(1) /* IFLA_OPERSTATE */ 937 937 + nla_total_size(1) /* IFLA_LINKMODE */ 938 938 + nla_total_size(4) /* IFLA_CARRIER_CHANGES */ ··· 2737 2737 ext_filter_mask)); 2738 2738 } 2739 2739 2740 - return min_ifinfo_dump_size; 2740 + return nlmsg_total_size(min_ifinfo_dump_size); 2741 2741 } 2742 2742 2743 2743 static int rtnl_dump_all(struct sk_buff *skb, struct netlink_callback *cb)
+2 -2
net/core/sock.c
··· 715 715 val = min_t(u32, val, sysctl_wmem_max); 716 716 set_sndbuf: 717 717 sk->sk_userlocks |= SOCK_SNDBUF_LOCK; 718 - sk->sk_sndbuf = max_t(u32, val * 2, SOCK_MIN_SNDBUF); 718 + sk->sk_sndbuf = max_t(int, val * 2, SOCK_MIN_SNDBUF); 719 719 /* Wake up sending tasks if we upped the value. */ 720 720 sk->sk_write_space(sk); 721 721 break; ··· 751 751 * returning the value we actually used in getsockopt 752 752 * is the most desirable behavior. 753 753 */ 754 - sk->sk_rcvbuf = max_t(u32, val * 2, SOCK_MIN_RCVBUF); 754 + sk->sk_rcvbuf = max_t(int, val * 2, SOCK_MIN_RCVBUF); 755 755 break; 756 756 757 757 case SO_RCVBUFFORCE:
+1
net/dcb/dcbnl.c
··· 1353 1353 dcb_unlock: 1354 1354 spin_unlock_bh(&dcb_lock); 1355 1355 nla_put_failure: 1356 + err = -EMSGSIZE; 1356 1357 return err; 1357 1358 } 1358 1359
+7 -5
net/dccp/ipv4.c
··· 700 700 { 701 701 const struct dccp_hdr *dh; 702 702 unsigned int cscov; 703 + u8 dccph_doff; 703 704 704 705 if (skb->pkt_type != PACKET_HOST) 705 706 return 1; ··· 722 721 /* 723 722 * If P.Data Offset is too small for packet type, drop packet and return 724 723 */ 725 - if (dh->dccph_doff < dccp_hdr_len(skb) / sizeof(u32)) { 726 - DCCP_WARN("P.Data Offset(%u) too small\n", dh->dccph_doff); 724 + dccph_doff = dh->dccph_doff; 725 + if (dccph_doff < dccp_hdr_len(skb) / sizeof(u32)) { 726 + DCCP_WARN("P.Data Offset(%u) too small\n", dccph_doff); 727 727 return 1; 728 728 } 729 729 /* 730 730 * If P.Data Offset is too too large for packet, drop packet and return 731 731 */ 732 - if (!pskb_may_pull(skb, dh->dccph_doff * sizeof(u32))) { 733 - DCCP_WARN("P.Data Offset(%u) too large\n", dh->dccph_doff); 732 + if (!pskb_may_pull(skb, dccph_doff * sizeof(u32))) { 733 + DCCP_WARN("P.Data Offset(%u) too large\n", dccph_doff); 734 734 return 1; 735 735 } 736 - 736 + dh = dccp_hdr(skb); 737 737 /* 738 738 * If P.type is not Data, Ack, or DataAck and P.X == 0 (the packet 739 739 * has short sequence numbers), drop packet and return
+4 -9
net/dsa/dsa.c
··· 233 233 genphy_read_status(phydev); 234 234 if (ds->ops->adjust_link) 235 235 ds->ops->adjust_link(ds, port, phydev); 236 + 237 + put_device(&phydev->mdio.dev); 236 238 } 237 239 238 240 return 0; ··· 506 504 507 505 void dsa_cpu_dsa_destroy(struct device_node *port_dn) 508 506 { 509 - struct phy_device *phydev; 510 - 511 - if (of_phy_is_fixed_link(port_dn)) { 512 - phydev = of_phy_find_device(port_dn); 513 - if (phydev) { 514 - phy_device_free(phydev); 515 - fixed_phy_unregister(phydev); 516 - } 517 - } 507 + if (of_phy_is_fixed_link(port_dn)) 508 + of_phy_deregister_fixed_link(port_dn); 518 509 } 519 510 520 511 static void dsa_switch_destroy(struct dsa_switch *ds)
+3 -1
net/dsa/dsa2.c
··· 28 28 struct dsa_switch_tree *dst; 29 29 30 30 list_for_each_entry(dst, &dsa_switch_trees, list) 31 - if (dst->tree == tree) 31 + if (dst->tree == tree) { 32 + kref_get(&dst->refcount); 32 33 return dst; 34 + } 33 35 return NULL; 34 36 } 35 37
+16 -3
net/dsa/slave.c
··· 1125 1125 p->phy_interface = mode; 1126 1126 1127 1127 phy_dn = of_parse_phandle(port_dn, "phy-handle", 0); 1128 - if (of_phy_is_fixed_link(port_dn)) { 1128 + if (!phy_dn && of_phy_is_fixed_link(port_dn)) { 1129 1129 /* In the case of a fixed PHY, the DT node associated 1130 1130 * to the fixed PHY is the Port DT node 1131 1131 */ ··· 1135 1135 return ret; 1136 1136 } 1137 1137 phy_is_fixed = true; 1138 - phy_dn = port_dn; 1138 + phy_dn = of_node_get(port_dn); 1139 1139 } 1140 1140 1141 1141 if (ds->ops->get_phy_flags) ··· 1154 1154 ret = dsa_slave_phy_connect(p, slave_dev, phy_id); 1155 1155 if (ret) { 1156 1156 netdev_err(slave_dev, "failed to connect to phy%d: %d\n", phy_id, ret); 1157 + of_node_put(phy_dn); 1157 1158 return ret; 1158 1159 } 1159 1160 } else { ··· 1163 1162 phy_flags, 1164 1163 p->phy_interface); 1165 1164 } 1165 + 1166 + of_node_put(phy_dn); 1166 1167 } 1167 1168 1168 1169 if (p->phy && phy_is_fixed) ··· 1177 1174 ret = dsa_slave_phy_connect(p, slave_dev, p->port); 1178 1175 if (ret) { 1179 1176 netdev_err(slave_dev, "failed to connect to port %d: %d\n", p->port, ret); 1177 + if (phy_is_fixed) 1178 + of_phy_deregister_fixed_link(port_dn); 1180 1179 return ret; 1181 1180 } 1182 1181 } ··· 1294 1289 void dsa_slave_destroy(struct net_device *slave_dev) 1295 1290 { 1296 1291 struct dsa_slave_priv *p = netdev_priv(slave_dev); 1292 + struct dsa_switch *ds = p->parent; 1293 + struct device_node *port_dn; 1294 + 1295 + port_dn = ds->ports[p->port].dn; 1297 1296 1298 1297 netif_carrier_off(slave_dev); 1299 - if (p->phy) 1298 + if (p->phy) { 1300 1299 phy_disconnect(p->phy); 1300 + 1301 + if (of_phy_is_fixed_link(port_dn)) 1302 + of_phy_deregister_fixed_link(port_dn); 1303 + } 1301 1304 unregister_netdev(slave_dev); 1302 1305 free_netdev(slave_dev); 1303 1306 }
+1
net/ipv4/Kconfig
··· 715 715 default "reno" if DEFAULT_RENO 716 716 default "dctcp" if DEFAULT_DCTCP 717 717 default "cdg" if DEFAULT_CDG 718 + default "bbr" if DEFAULT_BBR 718 719 default "cubic" 719 720 720 721 config TCP_MD5SIG
+1 -1
net/ipv4/af_inet.c
··· 1233 1233 fixedid = !!(skb_shinfo(skb)->gso_type & SKB_GSO_TCP_FIXEDID); 1234 1234 1235 1235 /* fixed ID is invalid if DF bit is not set */ 1236 - if (fixedid && !(iph->frag_off & htons(IP_DF))) 1236 + if (fixedid && !(ip_hdr(skb)->frag_off & htons(IP_DF))) 1237 1237 goto out; 1238 1238 } 1239 1239
+1 -1
net/ipv4/esp4.c
··· 476 476 esph = (void *)skb_push(skb, 4); 477 477 *seqhi = esph->spi; 478 478 esph->spi = esph->seq_no; 479 - esph->seq_no = htonl(XFRM_SKB_CB(skb)->seq.input.hi); 479 + esph->seq_no = XFRM_SKB_CB(skb)->seq.input.hi; 480 480 aead_request_set_callback(req, 0, esp_input_done_esn, skb); 481 481 } 482 482
+35 -33
net/ipv4/fib_trie.c
··· 719 719 { 720 720 unsigned char slen = tn->pos; 721 721 unsigned long stride, i; 722 + unsigned char slen_max; 723 + 724 + /* only vector 0 can have a suffix length greater than or equal to 725 + * tn->pos + tn->bits, the second highest node will have a suffix 726 + * length at most of tn->pos + tn->bits - 1 727 + */ 728 + slen_max = min_t(unsigned char, tn->pos + tn->bits - 1, tn->slen); 722 729 723 730 /* search though the list of children looking for nodes that might 724 731 * have a suffix greater than the one we currently have. This is ··· 743 736 slen = n->slen; 744 737 i &= ~(stride - 1); 745 738 746 - /* if slen covers all but the last bit we can stop here 747 - * there will be nothing longer than that since only node 748 - * 0 and 1 << (bits - 1) could have that as their suffix 749 - * length. 750 - */ 751 - if ((slen + 1) >= (tn->pos + tn->bits)) 739 + /* stop searching if we have hit the maximum possible value */ 740 + if (slen >= slen_max) 752 741 break; 753 742 } ··· 916 913 return collapse(t, tn); 917 914 918 915 /* update parent in case halve failed */ 919 - tp = node_parent(tn); 920 - 921 - /* Return if at least one deflate was run */ 922 - if (max_work != MAX_WORK) 923 - return tp; 924 - 925 - /* push the suffix length to the parent node */ 926 - if (tn->slen > tn->pos) { 927 - unsigned char slen = update_suffix(tn); 928 - 929 - if (slen > tp->slen) 930 - tp->slen = slen; 931 - } 932 - 933 - return tp; 916 + return node_parent(tn); 934 917 } 935 918 936 - static void leaf_pull_suffix(struct key_vector *tp, struct key_vector *l) 919 + static void node_pull_suffix(struct key_vector *tn, unsigned char slen) 937 920 { 938 - while ((tp->slen > tp->pos) && (tp->slen > l->slen)) { 939 - if (update_suffix(tp) > l->slen) 921 + unsigned char node_slen = tn->slen; 922 + 923 + while ((node_slen > tn->pos) && (node_slen > slen)) { 924 + slen = update_suffix(tn); 925 + if (node_slen == slen) 940 926 break; 941 - tp = node_parent(tp); 927 + 928 + tn = node_parent(tn); 929 + node_slen = tn->slen; 942 930 } 943 931 } 944 932 945 - static void leaf_push_suffix(struct key_vector *tn, struct key_vector *l) 933 + static void node_push_suffix(struct key_vector *tn, unsigned char slen) 946 934 { 947 - /* if this is a new leaf then tn will be NULL and we can sort 948 - * out parent suffix lengths as a part of trie_rebalance 949 - */ 950 - while (tn->slen < l->slen) { 951 - tn->slen = l->slen; 935 + while (tn->slen < slen) { 936 + tn->slen = slen; 952 937 tn = node_parent(tn); 953 938 } 954 939 } ··· 1057 1066 } 1058 1067 1059 1068 /* Case 3: n is NULL, and will just insert a new leaf */ 1069 + node_push_suffix(tp, new->fa_slen); 1060 1070 NODE_INIT_PARENT(l, tp); 1061 1071 put_child_root(tp, key, l); 1062 1072 trie_rebalance(t, tp); ··· 1099 1107 /* if we added to the tail node then we need to update slen */ 1100 1108 if (l->slen < new->fa_slen) { 1101 1109 l->slen = new->fa_slen; 1102 - leaf_push_suffix(tp, l); 1110 + node_push_suffix(tp, new->fa_slen); 1103 1111 } 1104 1112 1105 1113 return 0; ··· 1491 1499 * out parent suffix lengths as a part of trie_rebalance 1492 1500 */ 1493 1501 if (hlist_empty(&l->leaf)) { 1502 + if (tp->slen == l->slen) 1503 + node_pull_suffix(tp, tp->pos); 1494 1504 put_child_root(tp, l->key, NULL); 1495 1505 node_free(l); 1496 1506 trie_rebalance(t, tp); ··· 1505 1511 1506 1512 /* update the trie with the latest suffix length */ 1507 1513 l->slen = fa->fa_slen; 1508 - leaf_pull_suffix(tp, l); 1514 + node_pull_suffix(tp, fa->fa_slen); 1509 1515 } 1510 1516 1511 1517 /* Caller must hold RTNL. */ ··· 1777 1783 if (IS_TRIE(pn)) 1778 1784 break; 1779 1785 1786 + /* update the suffix to address pulled leaves */ 1787 + if (pn->slen > pn->pos) 1788 + update_suffix(pn); 1789 + 1780 1790 /* resize completed node */ 1781 1791 pn = resize(t, pn); 1782 1792 cindex = get_index(pkey, pn); ··· 1846 1848 /* cannot resize the trie vector */ 1847 1849 if (IS_TRIE(pn)) 1848 1850 break; 1851 + 1852 + /* update the suffix to address pulled leaves */ 1853 + if (pn->slen > pn->pos) 1854 + update_suffix(pn); 1849 1855 1850 1856 /* resize completed node */ 1851 1857 pn = resize(t, pn);
+2
net/ipv4/ip_output.c
··· 107 107 if (unlikely(!skb)) 108 108 return 0; 109 109 110 + skb->protocol = htons(ETH_P_IP); 111 + 110 112 return nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT, 111 113 net, sk, skb, NULL, skb_dst(skb)->dev, 112 114 dst_output);
+4 -1
net/ipv4/netfilter.c
··· 24 24 struct flowi4 fl4 = {}; 25 25 __be32 saddr = iph->saddr; 26 26 __u8 flags = skb->sk ? inet_sk_flowi_flags(skb->sk) : 0; 27 + struct net_device *dev = skb_dst(skb)->dev; 27 28 unsigned int hh_len; 28 29 29 30 if (addr_type == RTN_UNSPEC) 30 - addr_type = inet_addr_type(net, saddr); 31 + addr_type = inet_addr_type_dev_table(net, dev, saddr); 31 32 if (addr_type == RTN_LOCAL || addr_type == RTN_UNICAST) 32 33 flags |= FLOWI_FLAG_ANYSRC; 33 34 else ··· 41 40 fl4.saddr = saddr; 42 41 fl4.flowi4_tos = RT_TOS(iph->tos); 43 42 fl4.flowi4_oif = skb->sk ? skb->sk->sk_bound_dev_if : 0; 43 + if (!fl4.flowi4_oif) 44 + fl4.flowi4_oif = l3mdev_master_ifindex(dev); 44 45 fl4.flowi4_mark = skb->mark; 45 46 fl4.flowi4_flags = flags; 46 47 rt = ip_route_output_key(net, &fl4);
+2 -2
net/ipv4/netfilter/arp_tables.c
··· 1201 1201 1202 1202 newinfo->number = compatr->num_entries; 1203 1203 for (i = 0; i < NF_ARP_NUMHOOKS; i++) { 1204 - newinfo->hook_entry[i] = info->hook_entry[i]; 1205 - newinfo->underflow[i] = info->underflow[i]; 1204 + newinfo->hook_entry[i] = compatr->hook_entry[i]; 1205 + newinfo->underflow[i] = compatr->underflow[i]; 1206 1206 } 1207 1207 entry1 = newinfo->entries; 1208 1208 pos = entry1;
+4
net/ipv4/ping.c
··· 657 657 if (len > 0xFFFF) 658 658 return -EMSGSIZE; 659 659 660 + /* Must have at least a full ICMP header. */ 661 + if (len < icmph_len) 662 + return -EINVAL; 663 + 660 664 /* 661 665 * Check the flags. 662 666 */
+21 -1
net/ipv4/tcp_input.c
··· 128 128 #define REXMIT_LOST 1 /* retransmit packets marked lost */ 129 129 #define REXMIT_NEW 2 /* FRTO-style transmit of unsent/new packets */ 130 130 131 + static void tcp_gro_dev_warn(struct sock *sk, const struct sk_buff *skb) 132 + { 133 + static bool __once __read_mostly; 134 + 135 + if (!__once) { 136 + struct net_device *dev; 137 + 138 + __once = true; 139 + 140 + rcu_read_lock(); 141 + dev = dev_get_by_index_rcu(sock_net(sk), skb->skb_iif); 142 + pr_warn("%s: Driver has suspect GRO implementation, TCP performance may be compromised.\n", 143 + dev ? dev->name : "Unknown driver"); 144 + rcu_read_unlock(); 145 + } 146 + } 147 + 131 148 /* Adapt the MSS value used to make delayed ack decision to the 132 149 * real world. 133 150 */ ··· 161 144 */ 162 145 len = skb_shinfo(skb)->gso_size ? : skb->len; 163 146 if (len >= icsk->icsk_ack.rcv_mss) { 164 - icsk->icsk_ack.rcv_mss = len; 147 + icsk->icsk_ack.rcv_mss = min_t(unsigned int, len, 148 + tcp_sk(sk)->advmss); 149 + if (unlikely(icsk->icsk_ack.rcv_mss != len)) 150 + tcp_gro_dev_warn(sk, skb); 165 151 } else { 166 152 /* Otherwise, we make more careful check taking into account, 167 153 * that SACKs block is variable.
+1 -1
net/ipv4/udp.c
··· 1455 1455 udp_lib_rehash(sk, new_hash); 1456 1456 } 1457 1457 1458 - static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) 1458 + int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) 1459 1459 { 1460 1460 int rc; 1461 1461
+1 -1
net/ipv4/udp_impl.h
··· 25 25 int flags, int *addr_len); 26 26 int udp_sendpage(struct sock *sk, struct page *page, int offset, size_t size, 27 27 int flags); 28 - int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb); 28 + int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb); 29 29 void udp_destroy_sock(struct sock *sk); 30 30 31 31 #ifdef CONFIG_PROC_FS
+1 -1
net/ipv4/udplite.c
··· 50 50 .sendmsg = udp_sendmsg, 51 51 .recvmsg = udp_recvmsg, 52 52 .sendpage = udp_sendpage, 53 - .backlog_rcv = udp_queue_rcv_skb, 53 + .backlog_rcv = __udp_queue_rcv_skb, 54 54 .hash = udp_lib_hash, 55 55 .unhash = udp_lib_unhash, 56 56 .get_port = udp_v4_get_port,
+12 -6
net/ipv6/addrconf.c
··· 183 183 184 184 static void addrconf_dad_start(struct inet6_ifaddr *ifp); 185 185 static void addrconf_dad_work(struct work_struct *w); 186 - static void addrconf_dad_completed(struct inet6_ifaddr *ifp); 186 + static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id); 187 187 static void addrconf_dad_run(struct inet6_dev *idev); 188 188 static void addrconf_rs_timer(unsigned long data); 189 189 static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifa); ··· 2898 2898 spin_lock_bh(&ifp->lock); 2899 2899 ifp->flags &= ~IFA_F_TENTATIVE; 2900 2900 spin_unlock_bh(&ifp->lock); 2901 + rt_genid_bump_ipv6(dev_net(idev->dev)); 2901 2902 ipv6_ifa_notify(RTM_NEWADDR, ifp); 2902 2903 in6_ifa_put(ifp); 2903 2904 } ··· 3741 3740 { 3742 3741 struct inet6_dev *idev = ifp->idev; 3743 3742 struct net_device *dev = idev->dev; 3744 - bool notify = false; 3743 + bool bump_id, notify = false; 3745 3744 3746 3745 addrconf_join_solict(dev, &ifp->addr); 3747 3746 ··· 3756 3755 idev->cnf.accept_dad < 1 || 3757 3756 !(ifp->flags&IFA_F_TENTATIVE) || 3758 3757 ifp->flags & IFA_F_NODAD) { 3758 + bump_id = ifp->flags & IFA_F_TENTATIVE; 3759 3759 ifp->flags &= ~(IFA_F_TENTATIVE|IFA_F_OPTIMISTIC|IFA_F_DADFAILED); 3760 3760 spin_unlock(&ifp->lock); 3761 3761 read_unlock_bh(&idev->lock); 3762 3762 3763 - addrconf_dad_completed(ifp); 3763 + addrconf_dad_completed(ifp, bump_id); 3764 3764 return; 3765 3765 } 3766 3766 ··· 3821 3819 struct inet6_ifaddr, 3822 3820 dad_work); 3823 3821 struct inet6_dev *idev = ifp->idev; 3822 + bool bump_id, disable_ipv6 = false; 3824 3823 struct in6_addr mcaddr; 3825 - bool disable_ipv6 = false; 3826 3824 3827 3825 enum { 3828 3826 DAD_PROCESS, ··· 3892 3890 * DAD was successful 3893 3891 */ 3894 3892 3893 + bump_id = ifp->flags & IFA_F_TENTATIVE; 3895 3894 ifp->flags &= ~(IFA_F_TENTATIVE|IFA_F_OPTIMISTIC|IFA_F_DADFAILED); 3896 3895 spin_unlock(&ifp->lock); 3897 3896 write_unlock_bh(&idev->lock); 3898 3897 3899 - addrconf_dad_completed(ifp); 3898 + addrconf_dad_completed(ifp, bump_id); 3900 3899 3901 3900 goto out; 3902 3901 } ··· 3934 3931 return true; 3935 3932 } 3936 3933 3937 - static void addrconf_dad_completed(struct inet6_ifaddr *ifp) 3934 + static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id) 3938 3935 { 3939 3936 struct net_device *dev = ifp->idev->dev; 3940 3937 struct in6_addr lladdr; ··· 3986 3983 spin_unlock(&ifp->lock); 3987 3984 write_unlock_bh(&ifp->idev->lock); 3988 3985 } 3986 + 3987 + if (bump_id) 3988 + rt_genid_bump_ipv6(dev_net(dev)); 3989 3989 } 3990 3990 3991 3991 static void addrconf_dad_run(struct inet6_dev *idev)
+3 -1
net/ipv6/datagram.c
··· 139 139 } 140 140 EXPORT_SYMBOL_GPL(ip6_datagram_release_cb); 141 141 142 - static int __ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) 142 + int __ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr, 143 + int addr_len) 143 144 { 144 145 struct sockaddr_in6 *usin = (struct sockaddr_in6 *) uaddr; 145 146 struct inet_sock *inet = inet_sk(sk); ··· 253 252 out: 254 253 return err; 255 254 } 255 + EXPORT_SYMBOL_GPL(__ip6_datagram_connect); 256 256 257 257 int ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) 258 258 {
+1 -1
net/ipv6/esp6.c
··· 418 418 esph = (void *)skb_push(skb, 4); 419 419 *seqhi = esph->spi; 420 420 esph->spi = esph->seq_no; 421 - esph->seq_no = htonl(XFRM_SKB_CB(skb)->seq.input.hi); 421 + esph->seq_no = XFRM_SKB_CB(skb)->seq.input.hi; 422 422 aead_request_set_callback(req, 0, esp_input_done_esn, skb); 423 423 } 424 424
+4 -2
net/ipv6/icmp.c
··· 447 447 448 448 if (__ipv6_addr_needs_scope_id(addr_type)) 449 449 iif = skb->dev->ifindex; 450 - else 451 - iif = l3mdev_master_ifindex(skb_dst(skb)->dev); 450 + else { 451 + dst = skb_dst(skb); 452 + iif = l3mdev_master_ifindex(dst ? dst->dev : skb->dev); 453 + } 452 454 453 455 /* 454 456 * Must not send error if the source does not uniquely
+1 -1
net/ipv6/ip6_offload.c
··· 99 99 segs = ops->callbacks.gso_segment(skb, features); 100 100 } 101 101 102 - if (IS_ERR(segs)) 102 + if (IS_ERR_OR_NULL(segs)) 103 103 goto out; 104 104 105 105 gso_partial = !!(skb_shinfo(segs)->gso_type & SKB_GSO_PARTIAL);
-1
net/ipv6/ip6_tunnel.c
··· 1181 1181 if (err) 1182 1182 return err; 1183 1183 1184 - skb->protocol = htons(ETH_P_IPV6); 1185 1184 skb_push(skb, sizeof(struct ipv6hdr)); 1186 1185 skb_reset_network_header(skb); 1187 1186 ipv6h = ipv6_hdr(skb);
+31
net/ipv6/ip6_vti.c
··· 1138 1138 .priority = 100, 1139 1139 }; 1140 1140 1141 + static bool is_vti6_tunnel(const struct net_device *dev) 1142 + { 1143 + return dev->netdev_ops == &vti6_netdev_ops; 1144 + } 1145 + 1146 + static int vti6_device_event(struct notifier_block *unused, 1147 + unsigned long event, void *ptr) 1148 + { 1149 + struct net_device *dev = netdev_notifier_info_to_dev(ptr); 1150 + struct ip6_tnl *t = netdev_priv(dev); 1151 + 1152 + if (!is_vti6_tunnel(dev)) 1153 + return NOTIFY_DONE; 1154 + 1155 + switch (event) { 1156 + case NETDEV_DOWN: 1157 + if (!net_eq(t->net, dev_net(dev))) 1158 + xfrm_garbage_collect(t->net); 1159 + break; 1160 + } 1161 + return NOTIFY_DONE; 1162 + } 1163 + 1164 + static struct notifier_block vti6_notifier_block __read_mostly = { 1165 + .notifier_call = vti6_device_event, 1166 + }; 1167 + 1141 1168 /** 1142 1169 * vti6_tunnel_init - register protocol and reserve needed resources 1143 1170 * ··· 1174 1147 { 1175 1148 const char *msg; 1176 1149 int err; 1150 + 1151 + register_netdevice_notifier(&vti6_notifier_block); 1177 1152 1178 1153 msg = "tunnel device"; 1179 1154 err = register_pernet_device(&vti6_net_ops); ··· 1209 1180 xfrm_proto_esp_failed: 1210 1181 unregister_pernet_device(&vti6_net_ops); 1211 1182 pernet_dev_failed: 1183 + unregister_netdevice_notifier(&vti6_notifier_block); 1212 1184 pr_err("vti6 init: failed to register %s\n", msg); 1213 1185 return err; 1214 1186 } ··· 1224 1194 xfrm6_protocol_deregister(&vti_ah6_protocol, IPPROTO_AH); 1225 1195 xfrm6_protocol_deregister(&vti_esp6_protocol, IPPROTO_ESP); 1226 1196 unregister_pernet_device(&vti6_net_ops); 1197 + unregister_netdevice_notifier(&vti6_notifier_block); 1227 1198 } 1228 1199 1229 1200 module_init(vti6_tunnel_init);
+2 -2
net/ipv6/netfilter/nf_conntrack_reasm.c
··· 576 576 /* Jumbo payload inhibits frag. header */ 577 577 if (ipv6_hdr(skb)->payload_len == 0) { 578 578 pr_debug("payload len = 0\n"); 579 - return -EINVAL; 579 + return 0; 580 580 } 581 581 582 582 if (find_prev_fhdr(skb, &prevhdr, &nhoff, &fhoff) < 0) 583 - return -EINVAL; 583 + return 0; 584 584 585 585 if (!pskb_may_pull(skb, fhoff + sizeof(*fhdr))) 586 586 return -ENOMEM;
+1 -1
net/ipv6/netfilter/nf_defrag_ipv6_hooks.c
··· 69 69 if (err == -EINPROGRESS) 70 70 return NF_STOLEN; 71 71 72 - return NF_ACCEPT; 72 + return err == 0 ? NF_ACCEPT : NF_DROP; 73 73 } 74 74 75 75 static struct nf_hook_ops ipv6_defrag_ops[] = {
+1
net/ipv6/netfilter/nf_reject_ipv6.c
··· 156 156 fl6.daddr = oip6h->saddr; 157 157 fl6.fl6_sport = otcph->dest; 158 158 fl6.fl6_dport = otcph->source; 159 + fl6.flowi6_oif = l3mdev_master_ifindex(skb_dst(oldskb)->dev); 159 160 security_skb_classify_flow(oldskb, flowi6_to_flowi(&fl6)); 160 161 dst = ip6_route_output(net, NULL, &fl6); 161 162 if (dst->error) {
+2
net/ipv6/output_core.c
··· 155 155 if (unlikely(!skb)) 156 156 return 0; 157 157 158 + skb->protocol = htons(ETH_P_IPV6); 159 + 158 160 return nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT, 159 161 net, sk, skb, NULL, skb_dst(skb)->dev, 160 162 dst_output);
+1 -1
net/ipv6/udp.c
··· 514 514 return; 515 515 } 516 516 517 - static int __udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) 517 + int __udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) 518 518 { 519 519 int rc; 520 520
+1 -1
net/ipv6/udp_impl.h
··· 26 26 int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len); 27 27 int udpv6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int noblock, 28 28 int flags, int *addr_len); 29 - int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb); 29 + int __udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb); 30 30 void udpv6_destroy_sock(struct sock *sk); 31 31 32 32 #ifdef CONFIG_PROC_FS
+1 -1
net/ipv6/udplite.c
··· 45 45 .getsockopt = udpv6_getsockopt, 46 46 .sendmsg = udpv6_sendmsg, 47 47 .recvmsg = udpv6_recvmsg, 48 - .backlog_rcv = udpv6_queue_rcv_skb, 48 + .backlog_rcv = __udpv6_queue_rcv_skb, 49 49 .hash = udp_lib_hash, 50 50 .unhash = udp_lib_unhash, 51 51 .get_port = udp_v6_get_port,
+1 -1
net/l2tp/l2tp_eth.c
··· 97 97 unsigned int len = skb->len; 98 98 int ret = l2tp_xmit_skb(session, skb, session->hdr_len); 99 99 100 - if (likely(ret == NET_XMIT_SUCCESS || ret == NET_XMIT_CN)) { 100 + if (likely(ret == NET_XMIT_SUCCESS)) { 101 101 atomic_long_add(len, &priv->tx_bytes); 102 102 atomic_long_inc(&priv->tx_packets); 103 103 } else {
+35 -30
net/l2tp/l2tp_ip.c
··· 61 61 if ((l2tp->conn_id == tunnel_id) && 62 62 net_eq(sock_net(sk), net) && 63 63 !(inet->inet_rcv_saddr && inet->inet_rcv_saddr != laddr) && 64 - !(sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif)) 64 + (!sk->sk_bound_dev_if || !dif || 65 + sk->sk_bound_dev_if == dif)) 65 66 goto found; 66 67 } 67 68 ··· 183 182 struct iphdr *iph = (struct iphdr *) skb_network_header(skb); 184 183 185 184 read_lock_bh(&l2tp_ip_lock); 186 - sk = __l2tp_ip_bind_lookup(net, iph->daddr, 0, tunnel_id); 185 + sk = __l2tp_ip_bind_lookup(net, iph->daddr, inet_iif(skb), 186 + tunnel_id); 187 + if (!sk) { 188 + read_unlock_bh(&l2tp_ip_lock); 189 + goto discard; 190 + } 191 + 192 + sock_hold(sk); 187 193 read_unlock_bh(&l2tp_ip_lock); 188 194 } 189 - 190 - if (sk == NULL) 191 - goto discard; 192 - 193 - sock_hold(sk); 194 195 195 196 if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb)) 196 197 goto discard_put; ··· 259 256 if (addr->l2tp_family != AF_INET) 260 257 return -EINVAL; 261 258 262 - ret = -EADDRINUSE; 263 - read_lock_bh(&l2tp_ip_lock); 264 - if (__l2tp_ip_bind_lookup(net, addr->l2tp_addr.s_addr, 265 - sk->sk_bound_dev_if, addr->l2tp_conn_id)) 266 - goto out_in_use; 267 - 268 - read_unlock_bh(&l2tp_ip_lock); 269 - 270 259 lock_sock(sk); 260 + 261 + ret = -EINVAL; 271 262 if (!sock_flag(sk, SOCK_ZAPPED)) 272 263 goto out; 273 264 ··· 278 281 inet->inet_rcv_saddr = inet->inet_saddr = addr->l2tp_addr.s_addr; 279 282 if (chk_addr_ret == RTN_MULTICAST || chk_addr_ret == RTN_BROADCAST) 280 283 inet->inet_saddr = 0; /* Use device */ 281 284 - sk_dst_reset(sk); 282 - 283 - l2tp_ip_sk(sk)->conn_id = addr->l2tp_conn_id; 284 284 285 285 write_lock_bh(&l2tp_ip_lock); 286 + if (__l2tp_ip_bind_lookup(net, addr->l2tp_addr.s_addr, 287 + sk->sk_bound_dev_if, addr->l2tp_conn_id)) { 288 + write_unlock_bh(&l2tp_ip_lock); 289 + ret = -EADDRINUSE; 290 + goto out; 291 + } 292 + 293 + sk_dst_reset(sk); 294 + l2tp_ip_sk(sk)->conn_id = addr->l2tp_conn_id; 295 + 286 296 sk_add_bind_node(sk, &l2tp_ip_bind_table); 287 297 sk_del_node_init(sk); 288 298 write_unlock_bh(&l2tp_ip_lock); 299 + 289 300 ret = 0; 290 301 sock_reset_flag(sk, SOCK_ZAPPED); 291 302 292 303 out: 293 304 release_sock(sk); 294 - 295 - return ret; 296 - 297 - out_in_use: 298 - read_unlock_bh(&l2tp_ip_lock); 299 305 300 306 return ret; 301 307 } ··· 308 308 struct sockaddr_l2tpip *lsa = (struct sockaddr_l2tpip *) uaddr; 309 309 int rc; 310 310 311 - if (sock_flag(sk, SOCK_ZAPPED)) /* Must bind first - autobinding does not work */ 312 - return -EINVAL; 313 - 314 311 if (addr_len < sizeof(*lsa)) 315 312 return -EINVAL; 316 313 317 314 if (ipv4_is_multicast(lsa->l2tp_addr.s_addr)) 318 315 return -EINVAL; 319 316 320 - rc = ip4_datagram_connect(sk, uaddr, addr_len); 321 - if (rc < 0) 322 - return rc; 323 - 324 317 lock_sock(sk); 318 + 319 + /* Must bind first - autobinding does not work */ 320 + if (sock_flag(sk, SOCK_ZAPPED)) { 321 + rc = -EINVAL; 322 + goto out_sk; 323 + } 324 + 325 + rc = __ip4_datagram_connect(sk, uaddr, addr_len); 326 + if (rc < 0) 327 + goto out_sk; 325 328 326 329 l2tp_ip_sk(sk)->peer_conn_id = lsa->l2tp_conn_id; 327 330 ··· 333 330 sk_add_bind_node(sk, &l2tp_ip_bind_table); 334 331 write_unlock_bh(&l2tp_ip_lock); 335 332 333 + out_sk: 336 334 release_sock(sk); 335 + 337 336 return rc; 338 337 } 339 338
+42 -37
net/l2tp/l2tp_ip6.c
··· 72 72 73 73 if ((l2tp->conn_id == tunnel_id) && 74 74 net_eq(sock_net(sk), net) && 75 - !(addr && ipv6_addr_equal(addr, laddr)) && 76 - !(sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif)) 75 + (!addr || ipv6_addr_equal(addr, laddr)) && 76 + (!sk->sk_bound_dev_if || !dif || 77 + sk->sk_bound_dev_if == dif)) 77 78 goto found; 78 79 } 79 80 ··· 197 196 struct ipv6hdr *iph = ipv6_hdr(skb); 198 197 199 198 read_lock_bh(&l2tp_ip6_lock); 200 - sk = __l2tp_ip6_bind_lookup(net, &iph->daddr, 201 - 0, tunnel_id); 199 + sk = __l2tp_ip6_bind_lookup(net, &iph->daddr, inet6_iif(skb), 200 + tunnel_id); 201 + if (!sk) { 202 + read_unlock_bh(&l2tp_ip6_lock); 203 + goto discard; 204 + } 205 + 206 + sock_hold(sk); 202 207 read_unlock_bh(&l2tp_ip6_lock); 203 208 } 204 - 205 - if (sk == NULL) 206 - goto discard; 207 - 208 - sock_hold(sk); 209 209 210 210 if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb)) 211 211 goto discard_put; ··· 268 266 struct sockaddr_l2tpip6 *addr = (struct sockaddr_l2tpip6 *) uaddr; 269 267 struct net *net = sock_net(sk); 270 268 __be32 v4addr = 0; 269 + int bound_dev_if; 271 270 int addr_type; 272 271 int err; 273 272 ··· 287 284 if (addr_type & IPV6_ADDR_MULTICAST) 288 285 return -EADDRNOTAVAIL; 289 286 290 - err = -EADDRINUSE; 291 - read_lock_bh(&l2tp_ip6_lock); 292 - if (__l2tp_ip6_bind_lookup(net, &addr->l2tp_addr, 293 - sk->sk_bound_dev_if, addr->l2tp_conn_id)) 294 - goto out_in_use; 295 - read_unlock_bh(&l2tp_ip6_lock); 296 - 297 287 lock_sock(sk); 298 288 299 289 err = -EINVAL; ··· 296 300 if (sk->sk_state != TCP_CLOSE) 297 301 goto out_unlock; 298 302 303 + bound_dev_if = sk->sk_bound_dev_if; 304 + 299 305 /* Check if the address belongs to the host. */ 300 306 rcu_read_lock(); 301 307 if (addr_type != IPV6_ADDR_ANY) { 302 308 struct net_device *dev = NULL; 303 309 304 310 if (addr_type & IPV6_ADDR_LINKLOCAL) { 305 - if (addr_len >= sizeof(struct sockaddr_in6) && 306 - addr->l2tp_scope_id) { 307 - /* Override any existing binding, if another 308 - * one is supplied by user. 309 - */ 310 - sk->sk_bound_dev_if = addr->l2tp_scope_id; 311 - } 311 + if (addr->l2tp_scope_id) 312 + bound_dev_if = addr->l2tp_scope_id; 312 313 313 314 /* Binding to link-local address requires an 314 - interface */ 315 - if (!sk->sk_bound_dev_if) 315 + * interface. 316 + */ 317 + if (!bound_dev_if) 316 318 goto out_unlock_rcu; 317 319 318 320 err = -ENODEV; 319 - dev = dev_get_by_index_rcu(sock_net(sk), 320 - sk->sk_bound_dev_if); 321 + dev = dev_get_by_index_rcu(sock_net(sk), bound_dev_if); 321 322 if (!dev) 322 323 goto out_unlock_rcu; 323 324 } ··· 329 336 } 330 337 rcu_read_unlock(); 331 338 332 - inet->inet_rcv_saddr = inet->inet_saddr = v4addr; 339 + write_lock_bh(&l2tp_ip6_lock); 340 + if (__l2tp_ip6_bind_lookup(net, &addr->l2tp_addr, bound_dev_if, 341 + addr->l2tp_conn_id)) { 342 + write_unlock_bh(&l2tp_ip6_lock); 343 + err = -EADDRINUSE; 344 + goto out_unlock; 345 + } 346 + 347 + inet->inet_saddr = v4addr; 348 + inet->inet_rcv_saddr = v4addr; 349 + sk->sk_bound_dev_if = bound_dev_if; 333 350 sk->sk_v6_rcv_saddr = addr->l2tp_addr; 334 351 np->saddr = addr->l2tp_addr; 335 352 336 353 l2tp_ip6_sk(sk)->conn_id = addr->l2tp_conn_id; 337 354 338 - write_lock_bh(&l2tp_ip6_lock); 339 355 sk_add_bind_node(sk, &l2tp_ip6_bind_table); 340 356 sk_del_node_init(sk); 341 357 write_unlock_bh(&l2tp_ip6_lock); ··· 357 355 rcu_read_unlock(); 358 356 out_unlock: 359 357 release_sock(sk); 360 - return err; 361 358 362 - out_in_use: 363 - read_unlock_bh(&l2tp_ip6_lock); 364 359 return err; 365 360 } ··· 369 370 struct in6_addr *daddr; 370 371 int addr_type; 371 372 int rc; 372 - 373 - if (sock_flag(sk, SOCK_ZAPPED)) /* Must bind first - autobinding does not work */ 374 - return -EINVAL; 375 373 376 374 if (addr_len < sizeof(*lsa)) 377 375 return -EINVAL; ··· 386 390 return -EINVAL; 387 391 } 388 392 389 - rc = ip6_datagram_connect(sk, uaddr, addr_len); 390 - 391 393 lock_sock(sk); 394 + 395 + /* Must bind first - autobinding does not work */ 396 + if (sock_flag(sk, SOCK_ZAPPED)) { 397 + rc = -EINVAL; 398 + goto out_sk; 399 + } 400 + 401 + rc = __ip6_datagram_connect(sk, uaddr, addr_len); 402 + if (rc < 0) 403 + goto out_sk; 392 404 393 405 l2tp_ip6_sk(sk)->peer_conn_id = lsa->l2tp_conn_id; 394 406 ··· 405 401 sk_add_bind_node(sk, &l2tp_ip6_bind_table); 406 402 write_unlock_bh(&l2tp_ip6_lock); 407 403 404 + out_sk: 408 405 release_sock(sk); 409 406 410 407 return rc;
+1 -1
net/mpls/af_mpls.c
··· 1252 1252 if (!nla) 1253 1253 continue; 1254 1254 1255 - switch(index) { 1255 + switch (index) { 1256 1256 case RTA_OIF: 1257 1257 cfg->rc_ifindex = nla_get_u32(nla); 1258 1258 break;
+30 -19
net/netfilter/nf_nat_core.c
··· 42 42 const struct nf_conntrack_zone *zone; 43 43 }; 44 44 45 - static struct rhashtable nf_nat_bysource_table; 45 + static struct rhltable nf_nat_bysource_table; 46 46 47 47 inline const struct nf_nat_l3proto * 48 48 __nf_nat_l3proto_find(u8 family) ··· 193 193 const struct nf_nat_conn_key *key = arg->key; 194 194 const struct nf_conn *ct = obj; 195 195 196 - return same_src(ct, key->tuple) && 197 - net_eq(nf_ct_net(ct), key->net) && 198 - nf_ct_zone_equal(ct, key->zone, IP_CT_DIR_ORIGINAL); 196 + if (!same_src(ct, key->tuple) || 197 + !net_eq(nf_ct_net(ct), key->net) || 198 + !nf_ct_zone_equal(ct, key->zone, IP_CT_DIR_ORIGINAL)) 199 + return 1; 200 + 201 + return 0; 199 202 } 200 203 201 204 static struct rhashtable_params nf_nat_bysource_params = { ··· 207 204 .obj_cmpfn = nf_nat_bysource_cmp, 208 205 .nelem_hint = 256, 209 206 .min_size = 1024, 210 - .nulls_base = (1U << RHT_BASE_SHIFT), 211 207 }; 212 208 213 209 /* Only called for SRC manip */ ··· 225 223 .tuple = tuple, 226 224 .zone = zone 227 225 }; 226 + struct rhlist_head *hl; 228 227 229 - ct = rhashtable_lookup_fast(&nf_nat_bysource_table, &key, 230 - nf_nat_bysource_params); 231 - if (!ct) 228 + hl = rhltable_lookup(&nf_nat_bysource_table, &key, 229 + nf_nat_bysource_params); 230 + if (!hl) 232 231 return 0; 232 + 233 + ct = container_of(hl, typeof(*ct), nat_bysource); 233 234 234 235 nf_ct_invert_tuplepr(result, 235 236 &ct->tuplehash[IP_CT_DIR_REPLY].tuple); ··· 451 446 } 452 447 453 448 if (maniptype == NF_NAT_MANIP_SRC) { 449 + struct nf_nat_conn_key key = { 450 + .net = nf_ct_net(ct), 451 + .tuple = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, 452 + .zone = nf_ct_zone(ct), 453 + }; 454 454 int err; 455 455 456 - err = rhashtable_insert_fast(&nf_nat_bysource_table, 457 - &ct->nat_bysource, 458 - nf_nat_bysource_params); 456 + err = rhltable_insert_key(&nf_nat_bysource_table, 457 + &key, 458 + &ct->nat_bysource, 459 + nf_nat_bysource_params); 459 460 if (err) 460 461 return NF_DROP; 461 462 } ··· 578 567 * will delete entry from already-freed table. 579 568 */ 580 569 ct->status &= ~IPS_NAT_DONE_MASK; 581 - rhashtable_remove_fast(&nf_nat_bysource_table, &ct->nat_bysource, 582 - nf_nat_bysource_params); 570 + rhltable_remove(&nf_nat_bysource_table, &ct->nat_bysource, 571 + nf_nat_bysource_params); 583 572 584 573 /* don't delete conntrack. Although that would make things a lot 585 574 * simpler, we'd end up flushing all conntracks on nat rmmod. ··· 709 698 if (!nat) 710 699 return; 711 700 712 - rhashtable_remove_fast(&nf_nat_bysource_table, &ct->nat_bysource, 713 - nf_nat_bysource_params); 701 + rhltable_remove(&nf_nat_bysource_table, &ct->nat_bysource, 702 + nf_nat_bysource_params); 714 703 } 715 704 716 705 static struct nf_ct_ext_type nat_extend __read_mostly = { ··· 845 834 { 846 835 int ret; 847 836 848 - ret = rhashtable_init(&nf_nat_bysource_table, &nf_nat_bysource_params); 837 + ret = rhltable_init(&nf_nat_bysource_table, &nf_nat_bysource_params); 849 838 if (ret) 850 839 return ret; 851 840 852 841 ret = nf_ct_extend_register(&nat_extend); 853 842 if (ret < 0) { 854 - rhashtable_destroy(&nf_nat_bysource_table); 843 + rhltable_destroy(&nf_nat_bysource_table); 855 844 printk(KERN_ERR "nf_nat_core: Unable to register extension\n"); 856 845 return ret; 857 846 } ··· 875 864 return 0; 876 865 877 866 cleanup_extend: 878 - rhashtable_destroy(&nf_nat_bysource_table); 867 - rhltable_destroy(&nf_nat_bysource_table); 879 868 nf_ct_extend_unregister(&nat_extend); 880 869 return ret; 881 870 } ··· 894 883 for (i = 0; i < NFPROTO_NUMPROTO; i++) 895 884 kfree(nf_nat_l4protos[i]); 896 885 897 - rhashtable_destroy(&nf_nat_bysource_table); 886 + rhltable_destroy(&nf_nat_bysource_table); 898 887 } 899 888 900 889 MODULE_LICENSE("GPL");
+9 -5
net/netfilter/nf_tables_api.c
··· 2570 2570 } 2571 2571 2572 2572 if (set->timeout && 2573 - nla_put_be64(skb, NFTA_SET_TIMEOUT, cpu_to_be64(set->timeout), 2573 + nla_put_be64(skb, NFTA_SET_TIMEOUT, 2574 + cpu_to_be64(jiffies_to_msecs(set->timeout)), 2574 2575 NFTA_SET_PAD)) 2575 2576 goto nla_put_failure; 2576 2577 if (set->gc_int && ··· 2860 2859 if (nla[NFTA_SET_TIMEOUT] != NULL) { 2861 2860 if (!(flags & NFT_SET_TIMEOUT)) 2862 2861 return -EINVAL; 2863 - timeout = be64_to_cpu(nla_get_be64(nla[NFTA_SET_TIMEOUT])); 2862 + timeout = msecs_to_jiffies(be64_to_cpu(nla_get_be64( 2863 + nla[NFTA_SET_TIMEOUT]))); 2864 2864 } 2865 2865 gc_int = 0; 2866 2866 if (nla[NFTA_SET_GC_INTERVAL] != NULL) { ··· 3180 3178 3181 3179 if (nft_set_ext_exists(ext, NFT_SET_EXT_TIMEOUT) && 3182 3180 nla_put_be64(skb, NFTA_SET_ELEM_TIMEOUT, 3183 - cpu_to_be64(*nft_set_ext_timeout(ext)), 3181 + cpu_to_be64(jiffies_to_msecs( 3182 + *nft_set_ext_timeout(ext))), 3184 3183 NFTA_SET_ELEM_PAD)) 3185 3184 goto nla_put_failure; 3186 3185 ··· 3450 3447 memcpy(nft_set_ext_data(ext), data, set->dlen); 3451 3448 if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPIRATION)) 3452 3449 *nft_set_ext_expiration(ext) = 3453 - jiffies + msecs_to_jiffies(timeout); 3450 + jiffies + timeout; 3454 3451 if (nft_set_ext_exists(ext, NFT_SET_EXT_TIMEOUT)) 3455 3452 *nft_set_ext_timeout(ext) = timeout; 3456 3453 ··· 3538 3535 if (nla[NFTA_SET_ELEM_TIMEOUT] != NULL) { 3539 3536 if (!(set->flags & NFT_SET_TIMEOUT)) 3540 3537 return -EINVAL; 3541 - timeout = be64_to_cpu(nla_get_be64(nla[NFTA_SET_ELEM_TIMEOUT])); 3538 + timeout = msecs_to_jiffies(be64_to_cpu(nla_get_be64( 3539 + nla[NFTA_SET_ELEM_TIMEOUT]))); 3542 3540 } else if (set->flags & NFT_SET_TIMEOUT) { 3543 3541 timeout = set->timeout; 3544 3542 }
+5 -2
net/netfilter/nft_hash.c
··· 53 53 { 54 54 struct nft_hash *priv = nft_expr_priv(expr); 55 55 u32 len; 56 + int err; 56 57 57 58 if (!tb[NFTA_HASH_SREG] || 58 59 !tb[NFTA_HASH_DREG] || ··· 68 67 priv->sreg = nft_parse_register(tb[NFTA_HASH_SREG]); 69 68 priv->dreg = nft_parse_register(tb[NFTA_HASH_DREG]); 70 69 71 - len = ntohl(nla_get_be32(tb[NFTA_HASH_LEN])); 72 - if (len == 0 || len > U8_MAX) 70 + err = nft_parse_u32_check(tb[NFTA_HASH_LEN], U8_MAX, &len); 71 + if (err < 0) 72 + return err; 73 + if (len == 0) 73 74 return -ERANGE; 74 75 75 76 priv->len = len;
+6
net/netfilter/nft_range.c
··· 59 59 int err; 60 60 u32 op; 61 61 62 + if (!tb[NFTA_RANGE_SREG] || 63 + !tb[NFTA_RANGE_OP] || 64 + !tb[NFTA_RANGE_FROM_DATA] || 65 + !tb[NFTA_RANGE_TO_DATA]) 66 + return -EINVAL; 67 + 62 68 err = nft_data_init(NULL, &priv->data_from, sizeof(priv->data_from), 63 69 &desc_from, tb[NFTA_RANGE_FROM_DATA]); 64 70 if (err < 0)
+19 -2
net/netlink/af_netlink.c
··· 329 329 if (nlk->cb_running) { 330 330 if (nlk->cb.done) 331 331 nlk->cb.done(&nlk->cb); 332 - 333 332 module_put(nlk->cb.module); 334 333 kfree_skb(nlk->cb.skb); 335 334 } ··· 343 344 WARN_ON(atomic_read(&sk->sk_rmem_alloc)); 344 345 WARN_ON(atomic_read(&sk->sk_wmem_alloc)); 345 346 WARN_ON(nlk_sk(sk)->groups); 347 + } 348 + 349 + static void netlink_sock_destruct_work(struct work_struct *work) 350 + { 351 + struct netlink_sock *nlk = container_of(work, struct netlink_sock, 352 + work); 353 + 354 + sk_free(&nlk->sk); 346 355 } 347 356 348 357 /* This lock without WQ_FLAG_EXCLUSIVE is good on UP and it is _very_ bad on ··· 655 648 static void deferred_put_nlk_sk(struct rcu_head *head) 656 649 { 657 650 struct netlink_sock *nlk = container_of(head, struct netlink_sock, rcu); 651 + struct sock *sk = &nlk->sk; 658 652 659 - sock_put(&nlk->sk); 653 + if (!atomic_dec_and_test(&sk->sk_refcnt)) 654 + return; 655 + 656 + if (nlk->cb_running && nlk->cb.done) { 657 + INIT_WORK(&nlk->work, netlink_sock_destruct_work); 658 + schedule_work(&nlk->work); 659 + return; 660 + } 661 + 662 + sk_free(sk); 660 663 } 661 664 662 665 static int netlink_release(struct socket *sock)
+2
net/netlink/af_netlink.h
··· 3 3 4 4 #include <linux/rhashtable.h> 5 5 #include <linux/atomic.h> 6 + #include <linux/workqueue.h> 6 7 #include <net/sock.h> 7 8 8 9 #define NLGRPSZ(x) (ALIGN(x, sizeof(unsigned long) * 8) / 8) ··· 34 33 35 34 struct rhash_head node; 36 35 struct rcu_head rcu; 36 + struct work_struct work; 37 37 }; 38 38 39 39 static inline struct netlink_sock *nlk_sk(struct sock *sk)
+4 -1
net/openvswitch/conntrack.c
··· 370 370 skb_orphan(skb); 371 371 memset(IP6CB(skb), 0, sizeof(struct inet6_skb_parm)); 372 372 err = nf_ct_frag6_gather(net, skb, user); 373 - if (err) 373 + if (err) { 374 + if (err != -EINPROGRESS) 375 + kfree_skb(skb); 374 376 return err; 377 + } 375 378 376 379 key->ip.proto = ipv6_hdr(skb)->nexthdr; 377 380 ovs_cb.mru = IP6CB(skb)->frag_max_size;
+12 -6
net/packet/af_packet.c
··· 3648 3648 3649 3649 if (optlen != sizeof(val)) 3650 3650 return -EINVAL; 3651 - if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) 3652 - return -EBUSY; 3653 3651 if (copy_from_user(&val, optval, sizeof(val))) 3654 3652 return -EFAULT; 3655 3653 switch (val) { 3656 3654 case TPACKET_V1: 3657 3655 case TPACKET_V2: 3658 3656 case TPACKET_V3: 3659 - po->tp_version = val; 3660 - return 0; 3657 + break; 3661 3658 default: 3662 3659 return -EINVAL; 3663 3660 } 3661 + lock_sock(sk); 3662 + if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) { 3663 + ret = -EBUSY; 3664 + } else { 3665 + po->tp_version = val; 3666 + ret = 0; 3667 + } 3668 + release_sock(sk); 3669 + return ret; 3664 3670 } 3665 3671 case PACKET_RESERVE: 3666 3672 { ··· 4170 4164 /* Added to avoid minimal code churn */ 4171 4165 struct tpacket_req *req = &req_u->req; 4172 4166 4167 + lock_sock(sk); 4173 4168 /* Opening a Tx-ring is NOT supported in TPACKET_V3 */ 4174 4169 if (!closing && tx_ring && (po->tp_version > TPACKET_V2)) { 4175 4170 net_warn_ratelimited("Tx-ring is not supported.\n"); ··· 4252 4245 goto out; 4253 4246 } 4254 4247 4255 - lock_sock(sk); 4256 4248 4257 4249 /* Detach socket from network */ 4258 4250 spin_lock(&po->bind_lock); ··· 4300 4294 if (!tx_ring) 4301 4295 prb_shutdown_retire_blk_timer(po, rb_queue); 4302 4296 } 4303 - release_sock(sk); 4304 4297 4305 4298 if (pg_vec) 4306 4299 free_pg_vec(pg_vec, order, req->tp_block_nr); 4307 4300 out: 4301 + release_sock(sk); 4308 4302 return err; 4309 4303 } 4310 4304
+2
net/rds/tcp.c
··· 659 659 out_pernet: 660 660 unregister_pernet_subsys(&rds_tcp_net_ops); 661 661 out_slab: 662 + if (unregister_netdevice_notifier(&rds_tcp_dev_notifier)) 663 + pr_warn("could not unregister rds_tcp_dev_notifier\n"); 662 664 kmem_cache_destroy(rds_tcp_conn_slab); 663 665 out: 664 666 return ret;
+20 -4
net/sched/act_pedit.c
··· 108 108 kfree(keys); 109 109 } 110 110 111 + static bool offset_valid(struct sk_buff *skb, int offset) 112 + { 113 + if (offset > 0 && offset > skb->len) 114 + return false; 115 + 116 + if (offset < 0 && -offset > skb_headroom(skb)) 117 + return false; 118 + 119 + return true; 120 + } 121 + 111 122 static int tcf_pedit(struct sk_buff *skb, const struct tc_action *a, 112 123 struct tcf_result *res) 113 124 { ··· 145 134 if (tkey->offmask) { 146 135 char *d, _d; 147 136 137 + if (!offset_valid(skb, off + tkey->at)) { 138 + pr_info("tc filter pedit 'at' offset %d out of bounds\n", 139 + off + tkey->at); 140 + goto bad; 141 + } 148 142 d = skb_header_pointer(skb, off + tkey->at, 1, 149 143 &_d); 150 144 if (!d) ··· 162 146 " offset must be on 32 bit boundaries\n"); 163 147 goto bad; 164 148 } 165 - if (offset > 0 && offset > skb->len) { 166 - pr_info("tc filter pedit" 167 - " offset %d can't exceed pkt length %d\n", 168 - offset, skb->len); 149 + 150 + if (!offset_valid(skb, off + offset)) { 151 + pr_info("tc filter pedit offset %d out of bounds\n", 152 + offset); 169 153 goto bad; 170 154 } 171 155
+1 -1
net/sched/cls_api.c
··· 112 112 113 113 for (it_chain = chain; (tp = rtnl_dereference(*it_chain)) != NULL; 114 114 it_chain = &tp->next) 115 - tfilter_notify(net, oskb, n, tp, n->nlmsg_flags, event, false); 115 + tfilter_notify(net, oskb, n, tp, 0, event, false); 116 116 } 117 117 118 118 /* Select new prio value from the range, managed by kernel. */
-4
net/sched/cls_basic.c
··· 62 62 struct basic_head *head = rtnl_dereference(tp->root); 63 63 struct basic_filter *f; 64 64 65 - if (head == NULL) 66 - return 0UL; 67 - 68 65 list_for_each_entry(f, &head->flist, link) { 69 66 if (f->handle == handle) { 70 67 l = (unsigned long) f; ··· 106 109 tcf_unbind_filter(tp, &f->res); 107 110 call_rcu(&f->rcu, basic_delete_filter); 108 111 } 109 - RCU_INIT_POINTER(tp->root, NULL); 110 112 kfree_rcu(head, rcu); 111 113 return true; 112 114 }
-4
net/sched/cls_bpf.c
··· 292 292 call_rcu(&prog->rcu, __cls_bpf_delete_prog); 293 293 } 294 294 295 - RCU_INIT_POINTER(tp->root, NULL); 296 295 kfree_rcu(head, rcu); 297 296 return true; 298 297 } ··· 301 302 struct cls_bpf_head *head = rtnl_dereference(tp->root); 302 303 struct cls_bpf_prog *prog; 303 304 unsigned long ret = 0UL; 304 - 305 - if (head == NULL) 306 - return 0UL; 307 305 308 306 list_for_each_entry(prog, &head->plist, link) { 309 307 if (prog->handle == handle) {
+3 -4
net/sched/cls_cgroup.c
··· 137 137 138 138 if (!force) 139 139 return false; 140 - 141 - if (head) { 142 - RCU_INIT_POINTER(tp->root, NULL); 140 + /* Head can still be NULL due to cls_cgroup_init(). */ 141 + if (head) 143 142 call_rcu(&head->rcu, cls_cgroup_destroy_rcu); 144 - } 143 + 145 144 return true; 146 145 } 147 146
-1
net/sched/cls_flow.c
··· 596 596 list_del_rcu(&f->list); 597 597 call_rcu(&f->rcu, flow_destroy_filter); 598 598 } 599 - RCU_INIT_POINTER(tp->root, NULL); 600 599 kfree_rcu(head, rcu); 601 600 return true; 602 601 }
+32 -9
net/sched/cls_flower.c
··· 13 13 #include <linux/init.h> 14 14 #include <linux/module.h> 15 15 #include <linux/rhashtable.h> 16 + #include <linux/workqueue.h> 16 17 17 18 #include <linux/if_ether.h> 18 19 #include <linux/in6.h> ··· 65 64 bool mask_assigned; 66 65 struct list_head filters; 67 66 struct rhashtable_params ht_params; 68 - struct rcu_head rcu; 67 + union { 68 + struct work_struct work; 69 + struct rcu_head rcu; 70 + }; 69 71 }; 70 72 71 73 struct cls_fl_filter { ··· 273 269 dev->netdev_ops->ndo_setup_tc(dev, tp->q->handle, tp->protocol, &tc); 274 270 } 275 271 272 + static void fl_destroy_sleepable(struct work_struct *work) 273 + { 274 + struct cls_fl_head *head = container_of(work, struct cls_fl_head, 275 + work); 276 + if (head->mask_assigned) 277 + rhashtable_destroy(&head->ht); 278 + kfree(head); 279 + module_put(THIS_MODULE); 280 + } 281 + 282 + static void fl_destroy_rcu(struct rcu_head *rcu) 283 + { 284 + struct cls_fl_head *head = container_of(rcu, struct cls_fl_head, rcu); 285 + 286 + INIT_WORK(&head->work, fl_destroy_sleepable); 287 + schedule_work(&head->work); 288 + } 289 + 276 290 static bool fl_destroy(struct tcf_proto *tp, bool force) 277 291 { 278 292 struct cls_fl_head *head = rtnl_dereference(tp->root); ··· 304 282 list_del_rcu(&f->list); 305 283 call_rcu(&f->rcu, fl_destroy_filter); 306 284 } 307 - RCU_INIT_POINTER(tp->root, NULL); 308 - if (head->mask_assigned) 309 - rhashtable_destroy(&head->ht); 310 - kfree_rcu(head, rcu); 285 + 286 + __module_get(THIS_MODULE); 287 + call_rcu(&head->rcu, fl_destroy_rcu); 311 288 return true; 312 289 } ··· 732 711 goto errout; 733 712 734 713 if (fold) { 735 - rhashtable_remove_fast(&head->ht, &fold->ht_node, 736 - head->ht_params); 714 + if (!tc_skip_sw(fold->flags)) 715 + rhashtable_remove_fast(&head->ht, &fold->ht_node, 716 + head->ht_params); 737 717 fl_hw_destroy_filter(tp, (unsigned long)fold); 738 718 } ··· 761 739 struct cls_fl_head *head = rtnl_dereference(tp->root); 762 740 struct cls_fl_filter *f = (struct cls_fl_filter *) arg; 763 741 764 742 if (!tc_skip_sw(f->flags)) 765 743 rhashtable_remove_fast(&head->ht, &f->ht_node, 766 744 head->ht_params); 767 745 list_del_rcu(&f->list); 768 746 fl_hw_destroy_filter(tp, (unsigned long)f); 769 747 tcf_unbind_filter(tp, &f->res);
-1
net/sched/cls_matchall.c
··· 114 114 115 115 call_rcu(&f->rcu, mall_destroy_filter); 116 116 } 117 - RCU_INIT_POINTER(tp->root, NULL); 118 117 kfree_rcu(head, rcu); 119 118 return true; 120 119 }
+2 -1
net/sched/cls_rsvp.h
··· 152 152 return -1; 153 153 nhptr = ip_hdr(skb); 154 154 #endif 155 - 155 + if (unlikely(!head)) 156 + return -1; 156 157 restart: 157 158 158 159 #if RSVP_DST_LEN == 4
-1
net/sched/cls_tcindex.c
··· 543 543 walker.fn = tcindex_destroy_element; 544 544 tcindex_walk(tp, &walker); 545 545 546 - RCU_INIT_POINTER(tp->root, NULL); 547 546 call_rcu(&p->rcu, __tcindex_destroy); 548 547 return true; 549 548 }
+9 -2
net/tipc/bearer.c
··· 421 421 dev = dev_get_by_name(net, driver_name); 422 422 if (!dev) 423 423 return -ENODEV; 424 + if (tipc_mtu_bad(dev, 0)) { 425 + dev_put(dev); 426 + return -EINVAL; 427 + } 424 428 425 429 /* Associate TIPC bearer with L2 bearer */ 426 430 rcu_assign_pointer(b->media_ptr, dev); ··· 614 610 if (!b) 615 611 return NOTIFY_DONE; 616 612 617 - b->mtu = dev->mtu; 618 - 619 613 switch (evt) { 620 614 case NETDEV_CHANGE: 621 615 if (netif_carrier_ok(dev)) ··· 626 624 tipc_reset_bearer(net, b); 627 625 break; 628 626 case NETDEV_CHANGEMTU: 627 + if (tipc_mtu_bad(dev, 0)) { 628 + bearer_disable(net, b); 629 + break; 630 + } 631 + b->mtu = dev->mtu; 629 632 tipc_reset_bearer(net, b); 630 633 break; 631 634 case NETDEV_CHANGEADDR:
+13
net/tipc/bearer.h
··· 39 39 40 40 #include "netlink.h" 41 41 #include "core.h" 42 + #include "msg.h" 42 43 #include <net/genetlink.h> 43 44 44 45 #define MAX_MEDIA 3 ··· 59 58 #define TIPC_MEDIA_TYPE_ETH 1 60 59 #define TIPC_MEDIA_TYPE_IB 2 61 60 #define TIPC_MEDIA_TYPE_UDP 3 61 + 62 + /* minimum bearer MTU */ 63 + #define TIPC_MIN_BEARER_MTU (MAX_H_SIZE + INT_H_SIZE) 62 64 63 65 /** 64 66 * struct tipc_media_addr - destination address used by TIPC bearers ··· 218 214 struct tipc_media_addr *dst); 219 215 void tipc_bearer_bc_xmit(struct net *net, u32 bearer_id, 220 216 struct sk_buff_head *xmitq); 217 + 218 + /* check if device MTU is too low for tipc headers */ 219 + static inline bool tipc_mtu_bad(struct net_device *dev, unsigned int reserve) 220 + { 221 + if (dev->mtu >= TIPC_MIN_BEARER_MTU + reserve) 222 + return false; 223 + netdev_warn(dev, "MTU too low for tipc bearer\n"); 224 + return true; 225 + } 221 226 222 227 #endif /* _TIPC_BEARER_H */
+22 -18
net/tipc/link.c
··· 47 47 #include <linux/pkt_sched.h> 48 48 49 49 struct tipc_stats { 50 - u32 sent_info; /* used in counting # sent packets */ 51 - u32 recv_info; /* used in counting # recv'd packets */ 50 + u32 sent_pkts; 51 + u32 recv_pkts; 52 52 u32 sent_states; 53 53 u32 recv_states; 54 54 u32 sent_probes; ··· 857 857 l->acked = 0; 858 858 l->silent_intv_cnt = 0; 859 859 l->rst_cnt = 0; 860 - l->stats.recv_info = 0; 861 860 l->stale_count = 0; 862 861 l->bc_peer_is_up = false; 863 862 memset(&l->mon_state, 0, sizeof(l->mon_state)); ··· 887 888 struct sk_buff_head *transmq = &l->transmq; 888 889 struct sk_buff_head *backlogq = &l->backlogq; 889 890 struct sk_buff *skb, *_skb, *bskb; 891 + int pkt_cnt = skb_queue_len(list); 890 892 891 893 /* Match msg importance against this and all higher backlog limits: */ 892 894 if (!skb_queue_empty(backlogq)) { ··· 899 899 if (unlikely(msg_size(hdr) > mtu)) { 900 900 skb_queue_purge(list); 901 901 return -EMSGSIZE; 902 + } 903 + 904 + if (pkt_cnt > 1) { 905 + l->stats.sent_fragmented++; 906 + l->stats.sent_fragments += pkt_cnt; 902 907 } 903 908 904 909 /* Prepare each packet for sending, and add to relevant queue: */ ··· 925 920 __skb_queue_tail(xmitq, _skb); 926 921 TIPC_SKB_CB(skb)->ackers = l->ackers; 927 922 l->rcv_unacked = 0; 923 + l->stats.sent_pkts++; 928 924 seqno++; 929 925 continue; 930 926 } ··· 974 968 msg_set_ack(hdr, ack); 975 969 msg_set_bcast_ack(hdr, bc_ack); 976 970 l->rcv_unacked = 0; 971 + l->stats.sent_pkts++; 977 972 seqno++; 978 973 } 979 974 l->snd_nxt = seqno; ··· 1267 1260 1268 1261 /* Deliver packet */ 1269 1262 l->rcv_nxt++; 1270 - l->stats.recv_info++; 1263 + l->stats.recv_pkts++; 1271 1264 if (!tipc_data_input(l, skb, l->inputq)) 1272 1265 rc |= tipc_link_input(l, skb, l->inputq); 1273 1266 if (unlikely(++l->rcv_unacked >= TIPC_MIN_LINK_WIN)) ··· 1499 1492 if (in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) 1500 1493 l->tolerance = peers_tol; 1501 1494 1502 - if (peers_prio && in_range(peers_prio, TIPC_MIN_LINK_PRI, 1503 - TIPC_MAX_LINK_PRI)) { 1495 + /* Update own prio if peer indicates a different value */ 1496 + if ((peers_prio != l->priority) && 1497 + in_range(peers_prio, 1, TIPC_MAX_LINK_PRI)) { 1504 1498 l->priority = peers_prio; 1505 1499 rc = tipc_link_fsm_evt(l, LINK_FAILURE_EVT); 1506 1500 } ··· 1807 1799 void tipc_link_reset_stats(struct tipc_link *l) 1808 1800 { 1809 1801 memset(&l->stats, 0, sizeof(l->stats)); 1810 - if (!link_is_bc_sndlink(l)) { 1811 - l->stats.sent_info = l->snd_nxt; 1812 - l->stats.recv_info = l->rcv_nxt; 1813 - } 1814 1802 } 1815 1803 1816 1804 static void link_print(struct tipc_link *l, const char *str) ··· 1870 1866 }; 1871 1867 1872 1868 struct nla_map map[] = { 1873 - {TIPC_NLA_STATS_RX_INFO, s->recv_info}, 1869 + {TIPC_NLA_STATS_RX_INFO, 0}, 1874 1870 {TIPC_NLA_STATS_RX_FRAGMENTS, s->recv_fragments}, 1875 1871 {TIPC_NLA_STATS_RX_FRAGMENTED, s->recv_fragmented}, 1876 1872 {TIPC_NLA_STATS_RX_BUNDLES, s->recv_bundles}, 1877 1873 {TIPC_NLA_STATS_RX_BUNDLED, s->recv_bundled}, 1878 - {TIPC_NLA_STATS_TX_INFO, s->sent_info}, 1874 + {TIPC_NLA_STATS_TX_INFO, 0}, 1879 1875 {TIPC_NLA_STATS_TX_FRAGMENTS, s->sent_fragments}, 1880 1876 {TIPC_NLA_STATS_TX_FRAGMENTED, s->sent_fragmented}, 1881 1877 {TIPC_NLA_STATS_TX_BUNDLES, s->sent_bundles}, ··· 1950 1946 goto attr_msg_full; 1951 1947 if (nla_put_u32(msg->skb, TIPC_NLA_LINK_MTU, link->mtu)) 1952 1948 goto attr_msg_full; 1953 - if (nla_put_u32(msg->skb, TIPC_NLA_LINK_RX, link->rcv_nxt)) 1949 + if (nla_put_u32(msg->skb, TIPC_NLA_LINK_RX, link->stats.recv_pkts)) 1954 1950 goto attr_msg_full; 1955 - if (nla_put_u32(msg->skb, TIPC_NLA_LINK_TX, link->snd_nxt)) 1951 + if (nla_put_u32(msg->skb, TIPC_NLA_LINK_TX, link->stats.sent_pkts)) 1956 1952 goto attr_msg_full; 1957 1953 1958 1954 if (tipc_link_is_up(link)) ··· 2007 2003 }; 2008 2004 2009 2005 struct nla_map map[] = { 2010 - {TIPC_NLA_STATS_RX_INFO, stats->recv_info}, 2006 + {TIPC_NLA_STATS_RX_INFO, stats->recv_pkts}, 2011 2007 {TIPC_NLA_STATS_RX_FRAGMENTS, stats->recv_fragments}, 2012 2008 {TIPC_NLA_STATS_RX_FRAGMENTED, stats->recv_fragmented}, 2013 2009 {TIPC_NLA_STATS_RX_BUNDLES, stats->recv_bundles}, 2014 2010 {TIPC_NLA_STATS_RX_BUNDLED, stats->recv_bundled}, 2015 - {TIPC_NLA_STATS_TX_INFO, stats->sent_info}, 2011 + {TIPC_NLA_STATS_TX_INFO, stats->sent_pkts}, 2016 2012 {TIPC_NLA_STATS_TX_FRAGMENTS, stats->sent_fragments}, 2017 2013 {TIPC_NLA_STATS_TX_FRAGMENTED, stats->sent_fragmented}, 2018 2014 {TIPC_NLA_STATS_TX_BUNDLES, stats->sent_bundles}, ··· 2079 2075 goto attr_msg_full; 2080 2076 if (nla_put_string(msg->skb, TIPC_NLA_LINK_NAME, bcl->name)) 2081 2077 goto attr_msg_full; 2082 - if (nla_put_u32(msg->skb, TIPC_NLA_LINK_RX, bcl->rcv_nxt)) 2078 + if (nla_put_u32(msg->skb, TIPC_NLA_LINK_RX, 0)) 2083 2079 goto attr_msg_full; 2084 - if (nla_put_u32(msg->skb, TIPC_NLA_LINK_TX, bcl->snd_nxt)) 2080 + if (nla_put_u32(msg->skb, TIPC_NLA_LINK_TX, 0)) 2085 2081 goto attr_msg_full; 2086 2082 2087 2083 prop = nla_nest_start(msg->skb, TIPC_NLA_LINK_PROP);
+5 -5
net/tipc/monitor.c
··· 455 455 int i, applied_bef; 456 456 457 457 state->probing = false; 458 - if (!dlen) 459 - return; 460 458 461 459 /* Sanity check received domain record */ 462 - if ((dlen < new_dlen) || ntohs(arrv_dom->len) != new_dlen) { 463 - pr_warn_ratelimited("Received illegal domain record\n"); 460 + if (dlen < dom_rec_len(arrv_dom, 0)) 464 461 return; 465 - } 462 + if (dlen != dom_rec_len(arrv_dom, new_member_cnt)) 463 + return; 464 + if ((dlen < new_dlen) || ntohs(arrv_dom->len) != new_dlen) 465 + return; 466 466 467 467 /* Synch generation numbers with peer if link just came up */ 468 468 if (!state->synched) {
+1 -1
net/tipc/socket.c
··· 186 186 187 187 static bool tsk_conn_cong(struct tipc_sock *tsk) 188 188 { 189 - return tsk->snt_unacked >= tsk->snd_win; 189 + return tsk->snt_unacked > tsk->snd_win; 190 190 } 191 191 192 192 /* tsk_blocks(): translate a buffer size in bytes to number of
+5
net/tipc/udp_media.c
··· 697 697 udp_conf.local_ip.s_addr = htonl(INADDR_ANY); 698 698 udp_conf.use_udp_checksums = false; 699 699 ub->ifindex = dev->ifindex; 700 + if (tipc_mtu_bad(dev, sizeof(struct iphdr) + 701 + sizeof(struct udphdr))) { 702 + err = -EINVAL; 703 + goto err; 704 + } 700 705 b->mtu = dev->mtu - sizeof(struct iphdr) 701 706 - sizeof(struct udphdr); 702 707 #if IS_ENABLED(CONFIG_IPV6)
+6 -4
net/xfrm/xfrm_policy.c
··· 1268 1268 err = security_xfrm_policy_lookup(pol->security, 1269 1269 fl->flowi_secid, 1270 1270 policy_to_flow_dir(dir)); 1271 - if (!err && !xfrm_pol_hold_rcu(pol)) 1272 - goto again; 1273 - else if (err == -ESRCH) 1271 + if (!err) { 1272 + if (!xfrm_pol_hold_rcu(pol)) 1273 + goto again; 1274 + } else if (err == -ESRCH) { 1274 1275 pol = NULL; 1275 - else 1276 + } else { 1276 1277 pol = ERR_PTR(err); 1278 + } 1277 1279 } else 1278 1280 pol = NULL; 1279 1281 }
+1 -1
net/xfrm/xfrm_user.c
··· 2450 2450 2451 2451 #ifdef CONFIG_COMPAT 2452 2452 if (in_compat_syscall()) 2453 - return -ENOTSUPP; 2453 + return -EOPNOTSUPP; 2454 2454 #endif 2455 2455 2456 2456 type = nlh->nlmsg_type;
+1 -1
samples/bpf/bpf_helpers.h
··· 113 113 #define PT_REGS_FP(x) ((x)->gprs[11]) /* Works only with CONFIG_FRAME_POINTER */ 114 114 #define PT_REGS_RC(x) ((x)->gprs[2]) 115 115 #define PT_REGS_SP(x) ((x)->gprs[15]) 116 - #define PT_REGS_IP(x) ((x)->ip) 116 + #define PT_REGS_IP(x) ((x)->psw.addr) 117 117 118 118 #elif defined(__aarch64__) 119 119
+1 -1
samples/bpf/sampleip_kern.c
··· 25 25 u64 ip; 26 26 u32 *value, init_val = 1; 27 27 28 - ip = ctx->regs.ip; 28 + ip = PT_REGS_IP(&ctx->regs); 29 29 value = bpf_map_lookup_elem(&ip_map, &ip); 30 30 if (value) 31 31 *value += 1;
+1 -1
samples/bpf/trace_event_kern.c
··· 50 50 key.userstack = bpf_get_stackid(ctx, &stackmap, USER_STACKID_FLAGS); 51 51 if ((int)key.kernstack < 0 && (int)key.userstack < 0) { 52 52 bpf_trace_printk(fmt, sizeof(fmt), cpu, ctx->sample_period, 53 - ctx->regs.ip); 53 + PT_REGS_IP(&ctx->regs)); 54 54 return 0; 55 55 } 56 56
+2
scripts/kconfig/Makefile
··· 35 35 36 36 silentoldconfig: $(obj)/conf 37 37 $(Q)mkdir -p include/config include/generated 38 + $(Q)test -e include/generated/autoksyms.h || \ 39 + touch include/generated/autoksyms.h 38 40 $< $(silent) --$@ $(Kconfig) 39 41 40 42 localyesconfig localmodconfig: $(obj)/streamline_config.pl $(obj)/conf
+17 -10
sound/sparc/dbri.c
··· 304 304 spinlock_t lock; 305 305 306 306 struct dbri_dma *dma; /* Pointer to our DMA block */ 307 - u32 dma_dvma; /* DBRI visible DMA address */ 307 + dma_addr_t dma_dvma; /* DBRI visible DMA address */ 308 308 309 309 void __iomem *regs; /* dbri HW regs */ 310 310 int dbri_irqp; /* intr queue pointer */ ··· 657 657 */ 658 658 static s32 *dbri_cmdlock(struct snd_dbri *dbri, int len) 659 659 { 660 + u32 dvma_addr = (u32)dbri->dma_dvma; 661 + 660 662 /* Space for 2 WAIT cmds (replaced later by 1 JUMP cmd) */ 661 663 len += 2; 662 664 spin_lock(&dbri->cmdlock); 663 665 if (dbri->cmdptr - dbri->dma->cmd + len < DBRI_NO_CMDS - 2) 664 666 return dbri->cmdptr + 2; 665 - else if (len < sbus_readl(dbri->regs + REG8) - dbri->dma_dvma) 667 + else if (len < sbus_readl(dbri->regs + REG8) - dvma_addr) 666 668 return dbri->dma->cmd; 667 669 else 668 670 printk(KERN_ERR "DBRI: no space for commands."); ··· 682 680 */ 683 681 static void dbri_cmdsend(struct snd_dbri *dbri, s32 *cmd, int len) 684 682 { 683 + u32 dvma_addr = (u32)dbri->dma_dvma; 685 684 s32 tmp, addr; 686 685 static int wait_id = 0; 687 686 ··· 692 689 *(cmd+1) = DBRI_CMD(D_WAIT, 1, wait_id); 693 690 694 691 /* Replace the last command with JUMP */ 695 - addr = dbri->dma_dvma + (cmd - len - dbri->dma->cmd) * sizeof(s32); 692 + addr = dvma_addr + (cmd - len - dbri->dma->cmd) * sizeof(s32); 696 693 *(dbri->cmdptr+1) = addr; 697 694 *(dbri->cmdptr) = DBRI_CMD(D_JUMP, 0, 0); 698 695 ··· 750 747 /* Lock must not be held before calling this */ 751 748 static void dbri_initialize(struct snd_dbri *dbri) 752 749 { 750 + u32 dvma_addr = (u32)dbri->dma_dvma; 753 751 s32 *cmd; 754 752 u32 dma_addr; 755 753 unsigned long flags; ··· 768 764 /* 769 765 * Initialize the interrupt ring buffer. 
770 766 */ 771 767 dma_addr = dvma_addr + dbri_dma_off(intr, 0); 772 768 dbri->dma->intr[0] = dma_addr; 773 769 dbri->dbri_irqp = 1; 774 770 /* ··· 782 778 dbri->cmdptr = cmd; 783 779 *(cmd++) = DBRI_CMD(D_WAIT, 1, 0); 784 780 *(cmd++) = DBRI_CMD(D_WAIT, 1, 0); 785 781 dma_addr = dvma_addr + dbri_dma_off(cmd, 0); 786 782 sbus_writel(dma_addr, dbri->regs + REG8); 787 783 spin_unlock(&dbri->cmdlock); ··· 1081 1077 static int setup_descs(struct snd_dbri *dbri, int streamno, unsigned int period) 1082 1078 { 1083 1079 struct dbri_streaminfo *info = &dbri->stream_info[streamno]; 1080 + u32 dvma_addr = (u32)dbri->dma_dvma; 1084 1081 __u32 dvma_buffer; 1085 1082 int desc; 1086 1083 int len; ··· 1182 1177 else { 1183 1178 dbri->next_desc[last_desc] = desc; 1184 1179 dbri->dma->desc[last_desc].nda = 1185 - dbri->dma_dvma + dbri_dma_off(desc, desc); 1180 + dvma_addr + dbri_dma_off(desc, desc); 1186 1181 } 1187 1182 1188 1183 last_desc = desc; ··· 1197 1192 } 1198 1193 1199 1194 dbri->dma->desc[last_desc].nda = 1200 - dbri->dma_dvma + dbri_dma_off(desc, first_desc); 1195 + dvma_addr + dbri_dma_off(desc, first_desc); 1201 1196 dbri->next_desc[last_desc] = first_desc; 1202 1197 dbri->pipes[info->pipe].first_desc = first_desc; 1203 1198 dbri->pipes[info->pipe].desc = first_desc; ··· 1702 1697 static void xmit_descs(struct snd_dbri *dbri) 1703 1698 { 1704 1699 struct dbri_streaminfo *info; 1700 + u32 dvma_addr; 1705 1701 s32 *cmd; 1706 1702 unsigned long flags; 1707 1703 int first_td; ··· 1710 1704 if (dbri == NULL) 1711 1705 return; /* Disabled */ 1712 1706 1707 + dvma_addr = (u32)dbri->dma_dvma; 1713 1708 info = &dbri->stream_info[DBRI_REC]; 1714 1709 spin_lock_irqsave(&dbri->lock, flags); 1715 1710 ··· 1725 1718 *(cmd++) = DBRI_CMD(D_SDP, 0, 1726 1719 dbri->pipes[info->pipe].sdp 1727 1720 | D_SDP_P | D_SDP_EVERY | D_SDP_C); 1728 - *(cmd++) = dbri->dma_dvma + 1721 + *(cmd++) = dvma_addr + 1729 1722 dbri_dma_off(desc, first_td); 1730 1723 dbri_cmdsend(dbri, cmd, 2); ··· 1747 1740 *(cmd++) = DBRI_CMD(D_SDP, 0, 1748 1741 dbri->pipes[info->pipe].sdp 1749 1742 | D_SDP_P | D_SDP_EVERY | D_SDP_C); 1750 - *(cmd++) = dbri->dma_dvma + 1743 + *(cmd++) = dvma_addr + 1751 1744 dbri_dma_off(desc, first_td); 1752 1745 dbri_cmdsend(dbri, cmd, 2); ··· 2546 2539 if (!dbri->dma) 2547 2540 return -ENOMEM; 2548 2541 2549 - dprintk(D_GEN, "DMA Cmd Block 0x%p (0x%08x)\n", 2542 + dprintk(D_GEN, "DMA Cmd Block 0x%p (%pad)\n", 2550 2543 dbri->dma, dbri->dma_dvma); 2551 2544 2552 2545 /* Map the registers into memory. */
+1 -1
tools/objtool/arch/x86/decode.c
··· 99 99 break; 100 100 101 101 case 0x8d: 102 - if (insn.rex_prefix.bytes && 102 + if (insn.rex_prefix.nbytes && 103 103 insn.rex_prefix.bytes[0] == 0x48 && 104 104 insn.modrm.nbytes && insn.modrm.bytes[0] == 0x2c && 105 105 insn.sib.nbytes && insn.sib.bytes[0] == 0x24)
+1
tools/testing/nvdimm/Kbuild
··· 14 14 ldflags-y += --wrap=insert_resource 15 15 ldflags-y += --wrap=remove_resource 16 16 ldflags-y += --wrap=acpi_evaluate_object 17 + ldflags-y += --wrap=acpi_evaluate_dsm 17 18 18 19 DRIVERS := ../../../drivers 19 20 NVDIMM_SRC := $(DRIVERS)/nvdimm
+22 -1
tools/testing/nvdimm/test/iomap.c
··· 26 26 27 27 static struct iomap_ops { 28 28 nfit_test_lookup_fn nfit_test_lookup; 29 + nfit_test_evaluate_dsm_fn evaluate_dsm; 29 30 struct list_head list; 30 31 } iomap_ops = { 31 32 .list = LIST_HEAD_INIT(iomap_ops.list), 32 33 }; 33 34 34 - void nfit_test_setup(nfit_test_lookup_fn lookup) 35 + void nfit_test_setup(nfit_test_lookup_fn lookup, 36 + nfit_test_evaluate_dsm_fn evaluate) 35 37 { 36 38 iomap_ops.nfit_test_lookup = lookup; 39 + iomap_ops.evaluate_dsm = evaluate; 37 40 list_add_rcu(&iomap_ops.list, &iomap_head); 38 41 } 39 42 EXPORT_SYMBOL(nfit_test_setup); ··· 369 366 return AE_OK; 370 367 } 371 368 EXPORT_SYMBOL(__wrap_acpi_evaluate_object); 369 + 370 + union acpi_object * __wrap_acpi_evaluate_dsm(acpi_handle handle, const u8 *uuid, 371 + u64 rev, u64 func, union acpi_object *argv4) 372 + { 373 + union acpi_object *obj = ERR_PTR(-ENXIO); 374 + struct iomap_ops *ops; 375 + 376 + rcu_read_lock(); 377 + ops = list_first_or_null_rcu(&iomap_head, typeof(*ops), list); 378 + if (ops) 379 + obj = ops->evaluate_dsm(handle, uuid, rev, func, argv4); 380 + rcu_read_unlock(); 381 + 382 + if (IS_ERR(obj)) 383 + return acpi_evaluate_dsm(handle, uuid, rev, func, argv4); 384 + return obj; 385 + } 386 + EXPORT_SYMBOL(__wrap_acpi_evaluate_dsm); 372 387 373 388 MODULE_LICENSE("GPL v2");
+232 -4
tools/testing/nvdimm/test/nfit.c
··· 23 23 #include <linux/sizes.h> 24 24 #include <linux/list.h> 25 25 #include <linux/slab.h> 26 + #include <nd-core.h> 26 27 #include <nfit.h> 27 28 #include <nd.h> 28 29 #include "nfit_test.h" ··· 1507 1506 return 0; 1508 1507 } 1509 1508 + static unsigned long nfit_ctl_handle; 1510 + 1511 + union acpi_object *result; 1512 + 1513 + static union acpi_object *nfit_test_evaluate_dsm(acpi_handle handle, 1514 + const u8 *uuid, u64 rev, u64 func, union acpi_object *argv4) 1515 + { 1516 + if (handle != &nfit_ctl_handle) 1517 + return ERR_PTR(-ENXIO); 1518 + 1519 + return result; 1520 + } 1521 + 1522 + static int setup_result(void *buf, size_t size) 1523 + { 1524 + result = kmalloc(sizeof(union acpi_object) + size, GFP_KERNEL); 1525 + if (!result) 1526 + return -ENOMEM; 1527 + result->package.type = ACPI_TYPE_BUFFER, 1528 + result->buffer.pointer = (void *) (result + 1); 1529 + result->buffer.length = size; 1530 + memcpy(result->buffer.pointer, buf, size); 1531 + memset(buf, 0, size); 1532 + return 0; 1533 + } 1534 + 1535 + static int nfit_ctl_test(struct device *dev) 1536 + { 1537 + int rc, cmd_rc; 1538 + struct nvdimm *nvdimm; 1539 + struct acpi_device *adev; 1540 + struct nfit_mem *nfit_mem; 1541 + struct nd_ars_record *record; 1542 + struct acpi_nfit_desc *acpi_desc; 1543 + const u64 test_val = 0x0123456789abcdefULL; 1544 + unsigned long mask, cmd_size, offset; 1545 + union { 1546 + struct nd_cmd_get_config_size cfg_size; 1547 + struct nd_cmd_ars_status ars_stat; 1548 + struct nd_cmd_ars_cap ars_cap; 1549 + char buf[sizeof(struct nd_cmd_ars_status) 1550 + + sizeof(struct nd_ars_record)]; 1551 + } cmds; 1552 + 1553 + adev = devm_kzalloc(dev, sizeof(*adev), GFP_KERNEL); 1554 + if (!adev) 1555 + return -ENOMEM; 1556 + *adev = (struct acpi_device) { 1557 + .handle = &nfit_ctl_handle, 1558 + .dev = { 1559 + .init_name = "test-adev", 1560 + }, 1561 + }; 1562 + 1563 + acpi_desc = devm_kzalloc(dev, sizeof(*acpi_desc), GFP_KERNEL); 1564 + if (!acpi_desc) 1565 + return -ENOMEM; 1566 + *acpi_desc = (struct acpi_nfit_desc) { 1567 + .nd_desc = { 1568 + .cmd_mask = 1UL << ND_CMD_ARS_CAP 1569 + | 1UL << ND_CMD_ARS_START 1570 + | 1UL << ND_CMD_ARS_STATUS 1571 + | 1UL << ND_CMD_CLEAR_ERROR, 1572 + .module = THIS_MODULE, 1573 + .provider_name = "ACPI.NFIT", 1574 + .ndctl = acpi_nfit_ctl, 1575 + }, 1576 + .dev = &adev->dev, 1577 + }; 1578 + 1579 + nfit_mem = devm_kzalloc(dev, sizeof(*nfit_mem), GFP_KERNEL); 1580 + if (!nfit_mem) 1581 + return -ENOMEM; 1582 + 1583 + mask = 1UL << ND_CMD_SMART | 1UL << ND_CMD_SMART_THRESHOLD 1584 + | 1UL << ND_CMD_DIMM_FLAGS | 1UL << ND_CMD_GET_CONFIG_SIZE 1585 + | 1UL << ND_CMD_GET_CONFIG_DATA | 1UL << ND_CMD_SET_CONFIG_DATA 1586 + | 1UL << ND_CMD_VENDOR; 1587 + *nfit_mem = (struct nfit_mem) { 1588 + .adev = adev, 1589 + .family = NVDIMM_FAMILY_INTEL, 1590 + .dsm_mask = mask, 1591 + }; 1592 + 1593 + nvdimm = devm_kzalloc(dev, sizeof(*nvdimm), GFP_KERNEL); 1594 + if (!nvdimm) 1595 + return -ENOMEM; 1596 + *nvdimm = (struct nvdimm) { 1597 + .provider_data = nfit_mem, 1598 + .cmd_mask = mask, 1599 + .dev = { 1600 + .init_name = "test-dimm", 1601 + }, 1602 + }; 1603 + 1604 + 1605 + /* basic checkout of a typical 'get config size' command */ 1606 + cmd_size = sizeof(cmds.cfg_size); 1607 + cmds.cfg_size = (struct nd_cmd_get_config_size) { 1608 + .status = 0, 1609 + .config_size = SZ_128K, 1610 + .max_xfer = SZ_4K, 1611 + }; 1612 + rc = setup_result(cmds.buf, cmd_size); 1613 + if (rc) 1614 + return rc; 1615 + rc = acpi_nfit_ctl(&acpi_desc->nd_desc, nvdimm, ND_CMD_GET_CONFIG_SIZE, 1616 + cmds.buf, cmd_size, &cmd_rc); 1617 + 1618 + if (rc < 0 || cmd_rc || cmds.cfg_size.status != 0 1619 + || cmds.cfg_size.config_size != SZ_128K 1620 + || cmds.cfg_size.max_xfer != SZ_4K) { 1621 + dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n", 1622 + __func__, __LINE__, rc, cmd_rc); 1623 + return -EIO; 1624 + } 1625 + 1626 + 1627 + /* test ars_status with zero output */ 1628 + cmd_size = offsetof(struct nd_cmd_ars_status, address); 1629 + cmds.ars_stat = (struct nd_cmd_ars_status) { 1630 + .out_length = 0, 1631 + }; 1632 + rc = setup_result(cmds.buf, cmd_size); 1633 + if (rc) 1634 + return rc; 1635 + rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_ARS_STATUS, 1636 + cmds.buf, cmd_size, &cmd_rc); 1637 + 1638 + if (rc < 0 || cmd_rc) { 1639 + dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n", 1640 + __func__, __LINE__, rc, cmd_rc); 1641 + return -EIO; 1642 + } 1643 + 1644 + 1645 + /* test ars_cap with benign extended status */ 1646 + cmd_size = sizeof(cmds.ars_cap); 1647 + cmds.ars_cap = (struct nd_cmd_ars_cap) { 1648 + .status = ND_ARS_PERSISTENT << 16, 1649 + }; 1650 + offset = offsetof(struct nd_cmd_ars_cap, status); 1651 + rc = setup_result(cmds.buf + offset, cmd_size - offset); 1652 + if (rc) 1653 + return rc; 1654 + rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_ARS_CAP, 1655 + cmds.buf, cmd_size, &cmd_rc); 1656 + 1657 + if (rc < 0 || cmd_rc) { 1658 + dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n", 1659 + __func__, __LINE__, rc, cmd_rc); 1660 + return -EIO; 1661 + } 1662 + 1663 + 1664 + /* test ars_status with 'status' trimmed from 'out_length' */ 1665 + cmd_size = sizeof(cmds.ars_stat) + sizeof(struct nd_ars_record); 1666 + cmds.ars_stat = (struct nd_cmd_ars_status) { 1667 + .out_length = cmd_size - 4, 1668 + }; 1669 + record = &cmds.ars_stat.records[0]; 1670 + *record = (struct nd_ars_record) { 1671 + .length = test_val, 1672 + }; 1673 + rc = setup_result(cmds.buf, cmd_size); 1674 + if (rc) 1675 + return rc; 1676 + rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_ARS_STATUS, 1677 + cmds.buf, cmd_size, &cmd_rc); 1678 + 1679 + if (rc < 0 || cmd_rc || record->length != test_val) { 1680 + dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n", 1681 + __func__, __LINE__, rc, cmd_rc); 1682 + return -EIO; 1683 + } 1684 + 1685 + 1686 + /* test ars_status with 'Output (Size)' including 'status' */ 1687 + cmd_size = sizeof(cmds.ars_stat) + sizeof(struct nd_ars_record); 1688 + cmds.ars_stat = (struct nd_cmd_ars_status) { 1689 + .out_length = cmd_size, 1690 + }; 1691 + record = &cmds.ars_stat.records[0]; 1692 + *record = (struct nd_ars_record) { 1693 + .length = test_val, 1694 + }; 1695 + rc = setup_result(cmds.buf, cmd_size); 1696 + if (rc) 1697 + return rc; 1698 + rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_ARS_STATUS, 1699 + cmds.buf, cmd_size, &cmd_rc); 1700 + 1701 + if (rc < 0 || cmd_rc || record->length != test_val) { 1702 + dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n", 1703 + __func__, __LINE__, rc, cmd_rc); 1704 + return -EIO; 1705 + } 1706 + 1707 + 1708 + /* test extended status for get_config_size results in failure */ 1709 + cmd_size = sizeof(cmds.cfg_size); 1710 + cmds.cfg_size = (struct nd_cmd_get_config_size) { 1711 + .status = 1 << 16, 1712 + }; 1713 + rc = setup_result(cmds.buf, cmd_size); 1714 + if (rc) 1715 + return rc; 1716 + rc = acpi_nfit_ctl(&acpi_desc->nd_desc, nvdimm, ND_CMD_GET_CONFIG_SIZE, 1717 + cmds.buf, cmd_size, &cmd_rc); 1718 + 1719 + if (rc < 0 || cmd_rc >= 0) { 1720 + dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n", 1721 + __func__, __LINE__, rc, cmd_rc); 1722 + return -EIO; 1723 + } 1724 + 1725 + return 0; 1726 + } 1727 + 1510 1728 static int nfit_test_probe(struct platform_device *pdev) 1511 1729 { 1512 1730 struct nvdimm_bus_descriptor *nd_desc; ··· 1735 1515 struct nfit_mem *nfit_mem; 1736 1516 union acpi_object *obj; 1737 1517 int rc; 1518 + 1519 + if (strcmp(dev_name(&pdev->dev), "nfit_test.0") == 0) { 1520 + rc = nfit_ctl_test(&pdev->dev); 1521 + if (rc) 1522 + return rc; 1523 + } 1738 1524 1739 1525 nfit_test = to_nfit_test(&pdev->dev); ··· 1865 1639 { 1866 1640 int rc, i; 1867 1641 1868 - nfit_test_dimm = class_create(THIS_MODULE, "nfit_test_dimm"); 1869 - if (IS_ERR(nfit_test_dimm)) 1870 - return PTR_ERR(nfit_test_dimm); 1642 + nfit_test_setup(nfit_test_lookup, nfit_test_evaluate_dsm); 1871 - 1872 - nfit_test_setup(nfit_test_lookup); 1644
+ nfit_test_dimm = class_create(THIS_MODULE, "nfit_test_dimm"); 1645 + if (IS_ERR(nfit_test_dimm)) { 1646 + rc = PTR_ERR(nfit_test_dimm); 1647 + goto err_register; 1648 + } 1873 1649 1874 1650 for (i = 0; i < NUM_NFITS; i++) { 1875 1651 struct nfit_test *nfit_test;
+7 -1
tools/testing/nvdimm/test/nfit_test.h
···
31 31 	void *buf;
32 32 };
33 33 
34 + union acpi_object;
35 + typedef void *acpi_handle;
36 + 
34 37 typedef struct nfit_test_resource *(*nfit_test_lookup_fn)(resource_size_t);
38 + typedef union acpi_object *(*nfit_test_evaluate_dsm_fn)(acpi_handle handle,
39 + 		const u8 *uuid, u64 rev, u64 func, union acpi_object *argv4);
35 40 void __iomem *__wrap_ioremap_nocache(resource_size_t offset,
36 41 		unsigned long size);
37 42 void __wrap_iounmap(volatile void __iomem *addr);
38 - void nfit_test_setup(nfit_test_lookup_fn lookup);
43 + void nfit_test_setup(nfit_test_lookup_fn lookup,
44 + 		nfit_test_evaluate_dsm_fn evaluate);
39 45 void nfit_test_teardown(void);
40 46 struct nfit_test_resource *get_nfit_res(resource_size_t resource);
41 47 #endif
+4 -2
virt/kvm/arm/vgic/vgic-v2.c
···
50 50 
51 51 		WARN_ON(cpuif->vgic_lr[lr] & GICH_LR_STATE);
52 52 
53 - 		kvm_notify_acked_irq(vcpu->kvm, 0,
54 - 				     intid - VGIC_NR_PRIVATE_IRQS);
53 + 		/* Only SPIs require notification */
54 + 		if (vgic_valid_spi(vcpu->kvm, intid))
55 + 			kvm_notify_acked_irq(vcpu->kvm, 0,
56 + 					     intid - VGIC_NR_PRIVATE_IRQS);
55 57 	}
56 58 }
57 59 
+4 -2
virt/kvm/arm/vgic/vgic-v3.c
···
41 41 
42 42 		WARN_ON(cpuif->vgic_lr[lr] & ICH_LR_STATE);
43 43 
44 - 		kvm_notify_acked_irq(vcpu->kvm, 0,
45 - 				     intid - VGIC_NR_PRIVATE_IRQS);
44 + 		/* Only SPIs require notification */
45 + 		if (vgic_valid_spi(vcpu->kvm, intid))
46 + 			kvm_notify_acked_irq(vcpu->kvm, 0,
47 + 					     intid - VGIC_NR_PRIVATE_IRQS);
46 48 	}
47 49 
48 50 /*
+1 -1
virt/kvm/kvm_main.c
···
2897 2897 
2898 2898 	ret = anon_inode_getfd(ops->name, &kvm_device_fops, dev, O_RDWR | O_CLOEXEC);
2899 2899 	if (ret < 0) {
2900 - 		ops->destroy(dev);
2901 2900 		mutex_lock(&kvm->lock);
2902 2901 		list_del(&dev->vm_node);
2903 2902 		mutex_unlock(&kvm->lock);
2903 + 		ops->destroy(dev);
2904 2904 		return ret;
2905 2905 	}
2906 2906 