Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'perf/urgent' into perf/core, to pick up fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>

+2128 -1040
+1
.mailmap
··· 69 69 Jeff Garzik <jgarzik@pretzel.yyz.us> 70 70 Jens Axboe <axboe@suse.de> 71 71 Jens Osterkamp <Jens.Osterkamp@de.ibm.com> 72 + John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> 72 73 John Stultz <johnstul@us.ibm.com> 73 74 <josh@joshtriplett.org> <josh@freedesktop.org> 74 75 <josh@joshtriplett.org> <josh@kernel.org>
+4
Documentation/devicetree/bindings/ata/ahci-platform.txt
··· 32 32 - target-supply : regulator for SATA target power 33 33 - phys : reference to the SATA PHY node 34 34 - phy-names : must be "sata-phy" 35 + - ports-implemented : Mask that indicates which ports that the HBA supports 36 + are available for software to use. Useful if PORTS_IMPL 37 + is not programmed by the BIOS, which is true with 38 + some embedded SOC's. 35 39 36 40 Required properties when using sub-nodes: 37 41 - #address-cells : number of cells to encode an address
+3 -3
Documentation/devicetree/bindings/net/cpsw.txt
··· 45 45 Optional properties: 46 46 - dual_emac_res_vlan : Specifies VID to be used to segregate the ports 47 47 - mac-address : See ethernet.txt file in the same directory 48 - - phy_id : Specifies slave phy id 48 + - phy_id : Specifies slave phy id (deprecated, use phy-handle) 49 49 - phy-handle : See ethernet.txt file in the same directory 50 50 51 51 Slave sub-nodes: 52 52 - fixed-link : See fixed-link.txt file in the same directory 53 - Either the property phy_id, or the sub-node 54 - fixed-link can be specified 53 + 54 + Note: Exactly one of phy_id, phy-handle, or fixed-link must be specified. 55 55 56 56 Note: "ti,hwmods" field is used to fetch the base address and irq 57 57 resources from TI, omap hwmod data base during device registration.
+3 -3
Documentation/networking/altera_tse.txt
··· 6 6 using the SGDMA and MSGDMA soft DMA IP components. The driver uses the 7 7 platform bus to obtain component resources. The designs used to test this 8 8 driver were built for a Cyclone(R) V SOC FPGA board, a Cyclone(R) V FPGA board, 9 - and tested with ARM and NIOS processor hosts seperately. The anticipated use 9 + and tested with ARM and NIOS processor hosts separately. The anticipated use 10 10 cases are simple communications between an embedded system and an external peer 11 11 for status and simple configuration of the embedded system. 12 12 ··· 65 65 4.1) Transmit process 66 66 When the driver's transmit routine is called by the kernel, it sets up a 67 67 transmit descriptor by calling the underlying DMA transmit routine (SGDMA or 68 - MSGDMA), and initites a transmit operation. Once the transmit is complete, an 68 + MSGDMA), and initiates a transmit operation. Once the transmit is complete, an 69 69 interrupt is driven by the transmit DMA logic. The driver handles the transmit 70 70 completion in the context of the interrupt handling chain by recycling 71 71 resource required to send and track the requested transmit operation. 72 72 73 73 4.2) Receive process 74 74 The driver will post receive buffers to the receive DMA logic during driver 75 - intialization. Receive buffers may or may not be queued depending upon the 75 + initialization. Receive buffers may or may not be queued depending upon the 76 76 underlying DMA logic (MSGDMA is able queue receive buffers, SGDMA is not able 77 77 to queue receive buffers to the SGDMA receive logic). When a packet is 78 78 received, the DMA logic generates an interrupt. The driver handles a receive
+7 -7
Documentation/networking/checksum-offloads.txt
··· 69 69 LCO is a technique for efficiently computing the outer checksum of an 70 70 encapsulated datagram when the inner checksum is due to be offloaded. 71 71 The ones-complement sum of a correctly checksummed TCP or UDP packet is 72 - equal to the sum of the pseudo header, because everything else gets 73 - 'cancelled out' by the checksum field. This is because the sum was 72 + equal to the complement of the sum of the pseudo header, because everything 73 + else gets 'cancelled out' by the checksum field. This is because the sum was 74 74 complemented before being written to the checksum field. 75 75 More generally, this holds in any case where the 'IP-style' ones complement 76 76 checksum is used, and thus any checksum that TX Checksum Offload supports. 77 77 That is, if we have set up TX Checksum Offload with a start/offset pair, we 78 - know that _after the device has filled in that checksum_, the ones 78 + know that after the device has filled in that checksum, the ones 79 79 complement sum from csum_start to the end of the packet will be equal to 80 - _whatever value we put in the checksum field beforehand_. This allows us 81 - to compute the outer checksum without looking at the payload: we simply 82 - stop summing when we get to csum_start, then add the 16-bit word at 83 - (csum_start + csum_offset). 80 + the complement of whatever value we put in the checksum field beforehand. 81 + This allows us to compute the outer checksum without looking at the payload: 82 + we simply stop summing when we get to csum_start, then add the complement of 83 + the 16-bit word at (csum_start + csum_offset). 84 84 Then, when the true inner checksum is filled in (either by hardware or by 85 85 skb_checksum_help()), the outer checksum will become correct by virtue of 86 86 the arithmetic.
+3 -3
Documentation/networking/ipvlan.txt
··· 8 8 This is conceptually very similar to the macvlan driver with one major 9 9 exception of using L3 for mux-ing /demux-ing among slaves. This property makes 10 10 the master device share the L2 with it's slave devices. I have developed this 11 - driver in conjuntion with network namespaces and not sure if there is use case 11 + driver in conjunction with network namespaces and not sure if there is use case 12 12 outside of it. 13 13 14 14 ··· 42 42 as well. 43 43 44 44 4.2 L3 mode: 45 - In this mode TX processing upto L3 happens on the stack instance attached 45 + In this mode TX processing up to L3 happens on the stack instance attached 46 46 to the slave device and packets are switched to the stack instance of the 47 47 master device for the L2 processing and routing from that instance will be 48 48 used before packets are queued on the outbound device. In this mode the slaves ··· 56 56 (a) The Linux host that is connected to the external switch / router has 57 57 policy configured that allows only one mac per port. 58 58 (b) No of virtual devices created on a master exceed the mac capacity and 59 - puts the NIC in promiscous mode and degraded performance is a concern. 59 + puts the NIC in promiscuous mode and degraded performance is a concern. 60 60 (c) If the slave device is to be put into the hostile / untrusted network 61 61 namespace where L2 on the slave could be changed / misused. 62 62
+3 -3
Documentation/networking/pktgen.txt
··· 67 67 * add_device DEVICE@NAME -- adds a single device 68 68 * rem_device_all -- remove all associated devices 69 69 70 - When adding a device to a thread, a corrosponding procfile is created 70 + When adding a device to a thread, a corresponding procfile is created 71 71 which is used for configuring this device. Thus, device names need to 72 72 be unique. 73 73 74 74 To support adding the same device to multiple threads, which is useful 75 - with multi queue NICs, a the device naming scheme is extended with "@": 75 + with multi queue NICs, the device naming scheme is extended with "@": 76 76 device@something 77 77 78 78 The part after "@" can be anything, but it is custom to use the thread ··· 221 221 222 222 A collection of tutorial scripts and helpers for pktgen is in the 223 223 samples/pktgen directory. The helper parameters.sh file support easy 224 - and consistant parameter parsing across the sample scripts. 224 + and consistent parameter parsing across the sample scripts. 225 225 226 226 Usage example and help: 227 227 ./pktgen_sample01_simple.sh -i eth4 -m 00:1B:21:3C:9D:F8 -d 192.168.8.2
+1 -1
Documentation/networking/vrf.txt
··· 41 41 the VRF device. Similarly on egress routing rules are used to send packets 42 42 to the VRF device driver before getting sent out the actual interface. This 43 43 allows tcpdump on a VRF device to capture all packets into and out of the 44 - VRF as a whole.[1] Similiarly, netfilter [2] and tc rules can be applied 44 + VRF as a whole.[1] Similarly, netfilter [2] and tc rules can be applied 45 45 using the VRF device to specify rules that apply to the VRF domain as a whole. 46 46 47 47 [1] Packets in the forwarded state do not flow through the device, so those
+3 -3
Documentation/networking/xfrm_sync.txt
··· 4 4 from Jamal <hadi@cyberus.ca>. 5 5 6 6 The end goal for syncing is to be able to insert attributes + generate 7 - events so that the an SA can be safely moved from one machine to another 7 + events so that the SA can be safely moved from one machine to another 8 8 for HA purposes. 9 9 The idea is to synchronize the SA so that the takeover machine can do 10 10 the processing of the SA as accurate as possible if it has access to it. ··· 13 13 These patches add ability to sync and have accurate lifetime byte (to 14 14 ensure proper decay of SAs) and replay counters to avoid replay attacks 15 15 with as minimal loss at failover time. 16 - This way a backup stays as closely uptodate as an active member. 16 + This way a backup stays as closely up-to-date as an active member. 17 17 18 18 Because the above items change for every packet the SA receives, 19 19 it is possible for a lot of the events to be generated. ··· 163 163 there is a period where the timer threshold expires with no packets 164 164 seen, then an odd behavior is seen as follows: 165 165 The first packet arrival after a timer expiry will trigger a timeout 166 - aevent; i.e we dont wait for a timeout period or a packet threshold 166 + event; i.e we don't wait for a timeout period or a packet threshold 167 167 to be reached. This is done for simplicity and efficiency reasons. 168 168 169 169 -JHS
+1 -1
Documentation/sysctl/kernel.txt
··· 646 646 perf_event_paranoid: 647 647 648 648 Controls use of the performance events system by unprivileged 649 - users (without CAP_SYS_ADMIN). The default value is 1. 649 + users (without CAP_SYS_ADMIN). The default value is 2. 650 650 651 651 -1: Allow use of (almost) all events by all users 652 652 >=0: Disallow raw tracepoint access by users without CAP_IOC_LOCK
+45 -30
MAINTAINERS
··· 872 872 F: include/linux/perf/arm_pmu.h 873 873 874 874 ARM PORT 875 - M: Russell King <linux@arm.linux.org.uk> 875 + M: Russell King <linux@armlinux.org.uk> 876 876 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 877 - W: http://www.arm.linux.org.uk/ 877 + W: http://www.armlinux.org.uk/ 878 878 S: Maintained 879 879 F: arch/arm/ 880 880 ··· 886 886 T: git git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc.git 887 887 888 888 ARM PRIMECELL AACI PL041 DRIVER 889 - M: Russell King <linux@arm.linux.org.uk> 889 + M: Russell King <linux@armlinux.org.uk> 890 890 S: Maintained 891 891 F: sound/arm/aaci.* 892 892 893 893 ARM PRIMECELL CLCD PL110 DRIVER 894 - M: Russell King <linux@arm.linux.org.uk> 894 + M: Russell King <linux@armlinux.org.uk> 895 895 S: Maintained 896 896 F: drivers/video/fbdev/amba-clcd.* 897 897 898 898 ARM PRIMECELL KMI PL050 DRIVER 899 - M: Russell King <linux@arm.linux.org.uk> 899 + M: Russell King <linux@armlinux.org.uk> 900 900 S: Maintained 901 901 F: drivers/input/serio/ambakmi.* 902 902 F: include/linux/amba/kmi.h 903 903 904 904 ARM PRIMECELL MMCI PL180/1 DRIVER 905 - M: Russell King <linux@arm.linux.org.uk> 905 + M: Russell King <linux@armlinux.org.uk> 906 906 S: Maintained 907 907 F: drivers/mmc/host/mmci.* 908 908 F: include/linux/amba/mmci.h 909 909 910 910 ARM PRIMECELL UART PL010 AND PL011 DRIVERS 911 - M: Russell King <linux@arm.linux.org.uk> 911 + M: Russell King <linux@armlinux.org.uk> 912 912 S: Maintained 913 913 F: drivers/tty/serial/amba-pl01*.c 914 914 F: include/linux/amba/serial.h 915 915 916 916 ARM PRIMECELL BUS SUPPORT 917 - M: Russell King <linux@arm.linux.org.uk> 917 + M: Russell King <linux@armlinux.org.uk> 918 918 S: Maintained 919 919 F: drivers/amba/ 920 920 F: include/linux/amba/bus.h ··· 1036 1036 S: Maintained 1037 1037 1038 1038 ARM/CLKDEV SUPPORT 1039 - M: Russell King <linux@arm.linux.org.uk> 1039 + M: Russell King <linux@armlinux.org.uk> 1040 1040 L: 
linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1041 1041 S: Maintained 1042 1042 F: arch/arm/include/asm/clkdev.h ··· 1093 1093 N: digicolor 1094 1094 1095 1095 ARM/EBSA110 MACHINE SUPPORT 1096 - M: Russell King <linux@arm.linux.org.uk> 1096 + M: Russell King <linux@armlinux.org.uk> 1097 1097 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1098 - W: http://www.arm.linux.org.uk/ 1098 + W: http://www.armlinux.org.uk/ 1099 1099 S: Maintained 1100 1100 F: arch/arm/mach-ebsa110/ 1101 1101 F: drivers/net/ethernet/amd/am79c961a.* ··· 1124 1124 F: arch/arm/mm/*-fa* 1125 1125 1126 1126 ARM/FOOTBRIDGE ARCHITECTURE 1127 - M: Russell King <linux@arm.linux.org.uk> 1127 + M: Russell King <linux@armlinux.org.uk> 1128 1128 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1129 - W: http://www.arm.linux.org.uk/ 1129 + W: http://www.armlinux.org.uk/ 1130 1130 S: Maintained 1131 1131 F: arch/arm/include/asm/hardware/dec21285.h 1132 1132 F: arch/arm/mach-footbridge/ ··· 1457 1457 ARM/PT DIGITAL BOARD PORT 1458 1458 M: Stefan Eletzhofer <stefan.eletzhofer@eletztrick.de> 1459 1459 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1460 - W: http://www.arm.linux.org.uk/ 1460 + W: http://www.armlinux.org.uk/ 1461 1461 S: Maintained 1462 1462 1463 1463 ARM/QUALCOMM SUPPORT ··· 1493 1493 F: arch/arm64/boot/dts/renesas/ 1494 1494 1495 1495 ARM/RISCPC ARCHITECTURE 1496 - M: Russell King <linux@arm.linux.org.uk> 1496 + M: Russell King <linux@armlinux.org.uk> 1497 1497 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1498 - W: http://www.arm.linux.org.uk/ 1498 + W: http://www.armlinux.org.uk/ 1499 1499 S: Maintained 1500 1500 F: arch/arm/include/asm/hardware/entry-macro-iomd.S 1501 1501 F: arch/arm/include/asm/hardware/ioc.h ··· 1773 1773 F: drivers/clocksource/versatile.c 1774 1774 1775 1775 ARM/VFP SUPPORT 1776 - M: Russell King <linux@arm.linux.org.uk> 1776 + M: Russell King 
<linux@armlinux.org.uk> 1777 1777 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1778 - W: http://www.arm.linux.org.uk/ 1778 + W: http://www.armlinux.org.uk/ 1779 1779 S: Maintained 1780 1780 F: arch/arm/vfp/ 1781 1781 ··· 2921 2921 F: include/linux/cleancache.h 2922 2922 2923 2923 CLK API 2924 - M: Russell King <linux@arm.linux.org.uk> 2924 + M: Russell King <linux@armlinux.org.uk> 2925 2925 L: linux-clk@vger.kernel.org 2926 2926 S: Maintained 2927 2927 F: include/linux/clk.h ··· 3354 3354 F: drivers/net/ethernet/stmicro/stmmac/ 3355 3355 3356 3356 CYBERPRO FB DRIVER 3357 - M: Russell King <linux@arm.linux.org.uk> 3357 + M: Russell King <linux@armlinux.org.uk> 3358 3358 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 3359 - W: http://www.arm.linux.org.uk/ 3359 + W: http://www.armlinux.org.uk/ 3360 3360 S: Maintained 3361 3361 F: drivers/video/fbdev/cyber2000fb.* 3362 3362 ··· 3881 3881 3882 3882 DRM DRIVERS FOR VIVANTE GPU IP 3883 3883 M: Lucas Stach <l.stach@pengutronix.de> 3884 - R: Russell King <linux+etnaviv@arm.linux.org.uk> 3884 + R: Russell King <linux+etnaviv@armlinux.org.uk> 3885 3885 R: Christian Gmeiner <christian.gmeiner@gmail.com> 3886 3886 L: dri-devel@lists.freedesktop.org 3887 3887 S: Maintained ··· 4223 4223 F: arch/ia64/kernel/efi.c 4224 4224 F: arch/x86/boot/compressed/eboot.[ch] 4225 4225 F: arch/x86/include/asm/efi.h 4226 - F: arch/x86/platform/efi/* 4227 - F: drivers/firmware/efi/* 4226 + F: arch/x86/platform/efi/ 4227 + F: drivers/firmware/efi/ 4228 4228 F: include/linux/efi*.h 4229 4229 4230 4230 EFI VARIABLE FILESYSTEM ··· 4744 4744 4745 4745 FUSE: FILESYSTEM IN USERSPACE 4746 4746 M: Miklos Szeredi <miklos@szeredi.hu> 4747 - L: fuse-devel@lists.sourceforge.net 4747 + L: linux-fsdevel@vger.kernel.org 4748 4748 W: http://fuse.sourceforge.net/ 4749 4749 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse.git 4750 4750 S: Maintained ··· 4903 4903 F: include/net/gre.h 4904 4904 
4905 4905 GRETH 10/100/1G Ethernet MAC device driver 4906 - M: Kristoffer Glembo <kristoffer@gaisler.com> 4906 + M: Andreas Larsson <andreas@gaisler.com> 4907 4907 L: netdev@vger.kernel.org 4908 4908 S: Maintained 4909 4909 F: drivers/net/ethernet/aeroflex/ ··· 6905 6905 S: Maintained 6906 6906 6907 6907 MARVELL ARMADA DRM SUPPORT 6908 - M: Russell King <rmk+kernel@arm.linux.org.uk> 6908 + M: Russell King <rmk+kernel@armlinux.org.uk> 6909 6909 S: Maintained 6910 6910 F: drivers/gpu/drm/armada/ 6911 6911 ··· 7905 7905 F: drivers/nfc/nxp-nci 7906 7906 7907 7907 NXP TDA998X DRM DRIVER 7908 - M: Russell King <rmk+kernel@arm.linux.org.uk> 7908 + M: Russell King <rmk+kernel@armlinux.org.uk> 7909 7909 S: Supported 7910 7910 F: drivers/gpu/drm/i2c/tda998x_drv.c 7911 7911 F: include/drm/i2c/tda998x.h ··· 7978 7978 F: drivers/cpufreq/omap-cpufreq.c 7979 7979 7980 7980 OMAP POWERDOMAIN SOC ADAPTATION LAYER SUPPORT 7981 - M: Rajendra Nayak <rnayak@ti.com> 7981 + M: Rajendra Nayak <rnayak@codeaurora.org> 7982 7982 M: Paul Walmsley <paul@pwsan.com> 7983 7983 L: linux-omap@vger.kernel.org 7984 7984 S: Maintained ··· 10014 10014 10015 10015 SFC NETWORK DRIVER 10016 10016 M: Solarflare linux maintainers <linux-net-drivers@solarflare.com> 10017 - M: Shradha Shah <sshah@solarflare.com> 10017 + M: Edward Cree <ecree@solarflare.com> 10018 + M: Bert Kenward <bkenward@solarflare.com> 10018 10019 L: netdev@vger.kernel.org 10019 10020 S: Supported 10020 10021 F: drivers/net/ethernet/sfc/ ··· 11317 11316 F: include/trace/ 11318 11317 F: kernel/trace/ 11319 11318 F: tools/testing/selftests/ftrace/ 11319 + 11320 + TRACING MMIO ACCESSES (MMIOTRACE) 11321 + M: Steven Rostedt <rostedt@goodmis.org> 11322 + M: Ingo Molnar <mingo@kernel.org> 11323 + R: Karol Herbst <karolherbst@gmail.com> 11324 + R: Pekka Paalanen <ppaalanen@gmail.com> 11325 + S: Maintained 11326 + L: linux-kernel@vger.kernel.org 11327 + L: nouveau@lists.freedesktop.org 11328 + F: kernel/trace/trace_mmiotrace.c 11329 + F: 
include/linux/mmiotrace.h 11330 + F: arch/x86/mm/kmmio.c 11331 + F: arch/x86/mm/mmio-mod.c 11332 + F: arch/x86/mm/testmmiotrace.c 11320 11333 11321 11334 TRIVIAL PATCHES 11322 11335 M: Jiri Kosina <trivial@kernel.org>
+1 -1
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 6 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc6 4 + EXTRAVERSION = -rc7 5 5 NAME = Charred Weasel 6 6 7 7 # *DOCUMENTATION*
+13
arch/arc/Kconfig
··· 58 58 config RWSEM_GENERIC_SPINLOCK 59 59 def_bool y 60 60 61 + config ARCH_DISCONTIGMEM_ENABLE 62 + def_bool y 63 + 61 64 config ARCH_FLATMEM_ENABLE 62 65 def_bool y 63 66 ··· 350 347 351 348 endchoice 352 349 350 + config NODES_SHIFT 351 + int "Maximum NUMA Nodes (as a power of 2)" 352 + default "1" if !DISCONTIGMEM 353 + default "2" if DISCONTIGMEM 354 + depends on NEED_MULTIPLE_NODES 355 + ---help--- 356 + Accessing memory beyond 1GB (with or w/o PAE) requires 2 memory 357 + zones. 358 + 353 359 if ISA_ARCOMPACT 354 360 355 361 config ARC_COMPACT_IRQ_LEVELS ··· 467 455 468 456 config HIGHMEM 469 457 bool "High Memory Support" 458 + select DISCONTIGMEM 470 459 help 471 460 With ARC 2G:2G address split, only upper 2G is directly addressable by 472 461 kernel. Enable this to potentially allow access to rest of 2G and PAE
+18 -9
arch/arc/include/asm/io.h
··· 13 13 #include <asm/byteorder.h> 14 14 #include <asm/page.h> 15 15 16 + #ifdef CONFIG_ISA_ARCV2 17 + #include <asm/barrier.h> 18 + #define __iormb() rmb() 19 + #define __iowmb() wmb() 20 + #else 21 + #define __iormb() do { } while (0) 22 + #define __iowmb() do { } while (0) 23 + #endif 24 + 16 25 extern void __iomem *ioremap(phys_addr_t paddr, unsigned long size); 17 26 extern void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size, 18 27 unsigned long flags); ··· 39 30 #define ioremap_nocache(phy, sz) ioremap(phy, sz) 40 31 #define ioremap_wc(phy, sz) ioremap(phy, sz) 41 32 #define ioremap_wt(phy, sz) ioremap(phy, sz) 33 + 34 + /* 35 + * io{read,write}{16,32}be() macros 36 + */ 37 + #define ioread16be(p) ({ u16 __v = be16_to_cpu((__force __be16)__raw_readw(p)); __iormb(); __v; }) 38 + #define ioread32be(p) ({ u32 __v = be32_to_cpu((__force __be32)__raw_readl(p)); __iormb(); __v; }) 39 + 40 + #define iowrite16be(v,p) ({ __iowmb(); __raw_writew((__force u16)cpu_to_be16(v), p); }) 41 + #define iowrite32be(v,p) ({ __iowmb(); __raw_writel((__force u32)cpu_to_be32(v), p); }) 42 42 43 43 /* Change struct page to physical address */ 44 44 #define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT) ··· 125 107 : "memory"); 126 108 127 109 } 128 - 129 - #ifdef CONFIG_ISA_ARCV2 130 - #include <asm/barrier.h> 131 - #define __iormb() rmb() 132 - #define __iowmb() wmb() 133 - #else 134 - #define __iormb() do { } while (0) 135 - #define __iowmb() do { } while (0) 136 - #endif 137 110 138 111 /* 139 112 * MMIO can also get buffered/optimized in micro-arch, so barriers needed
+43
arch/arc/include/asm/mmzone.h
··· 1 + /* 2 + * Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com) 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + */ 8 + 9 + #ifndef _ASM_ARC_MMZONE_H 10 + #define _ASM_ARC_MMZONE_H 11 + 12 + #ifdef CONFIG_DISCONTIGMEM 13 + 14 + extern struct pglist_data node_data[]; 15 + #define NODE_DATA(nid) (&node_data[nid]) 16 + 17 + static inline int pfn_to_nid(unsigned long pfn) 18 + { 19 + int is_end_low = 1; 20 + 21 + if (IS_ENABLED(CONFIG_ARC_HAS_PAE40)) 22 + is_end_low = pfn <= virt_to_pfn(0xFFFFFFFFUL); 23 + 24 + /* 25 + * node 0: lowmem: 0x8000_0000 to 0xFFFF_FFFF 26 + * node 1: HIGHMEM w/o PAE40: 0x0 to 0x7FFF_FFFF 27 + * HIGHMEM with PAE40: 0x1_0000_0000 to ... 28 + */ 29 + if (pfn >= ARCH_PFN_OFFSET && is_end_low) 30 + return 0; 31 + 32 + return 1; 33 + } 34 + 35 + static inline int pfn_valid(unsigned long pfn) 36 + { 37 + int nid = pfn_to_nid(pfn); 38 + 39 + return (pfn <= node_end_pfn(nid)); 40 + } 41 + #endif /* CONFIG_DISCONTIGMEM */ 42 + 43 + #endif
+11 -4
arch/arc/include/asm/page.h
··· 72 72 73 73 typedef pte_t * pgtable_t; 74 74 75 + /* 76 + * Use virt_to_pfn with caution: 77 + * If used in pte or paddr related macros, it could cause truncation 78 + * in PAE40 builds 79 + * As a rule of thumb, only use it in helpers starting with virt_ 80 + * You have been warned ! 81 + */ 75 82 #define virt_to_pfn(kaddr) (__pa(kaddr) >> PAGE_SHIFT) 76 83 77 84 #define ARCH_PFN_OFFSET virt_to_pfn(CONFIG_LINUX_LINK_BASE) 78 85 86 + #ifdef CONFIG_FLATMEM 79 87 #define pfn_valid(pfn) (((pfn) - ARCH_PFN_OFFSET) < max_mapnr) 88 + #endif 80 89 81 90 /* 82 91 * __pa, __va, virt_to_page (ALERT: deprecated, don't use them) ··· 94 85 * virt here means link-address/program-address as embedded in object code. 95 86 * And for ARC, link-addr = physical address 96 87 */ 97 - #define __pa(vaddr) ((unsigned long)vaddr) 88 + #define __pa(vaddr) ((unsigned long)(vaddr)) 98 89 #define __va(paddr) ((void *)((unsigned long)(paddr))) 99 90 100 - #define virt_to_page(kaddr) \ 101 - (mem_map + virt_to_pfn((kaddr) - CONFIG_LINUX_LINK_BASE)) 102 - 91 + #define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr)) 103 92 #define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr)) 104 93 105 94 /* Default Permissions for stack/heaps pages (Non Executable) */
+6 -7
arch/arc/include/asm/pgtable.h
··· 278 278 #define pmd_present(x) (pmd_val(x)) 279 279 #define pmd_clear(xp) do { pmd_val(*(xp)) = 0; } while (0) 280 280 281 - #define pte_page(pte) \ 282 - (mem_map + virt_to_pfn(pte_val(pte) - CONFIG_LINUX_LINK_BASE)) 283 - 281 + #define pte_page(pte) pfn_to_page(pte_pfn(pte)) 284 282 #define mk_pte(page, prot) pfn_pte(page_to_pfn(page), prot) 285 - #define pte_pfn(pte) virt_to_pfn(pte_val(pte)) 286 - #define pfn_pte(pfn, prot) (__pte(((pte_t)(pfn) << PAGE_SHIFT) | \ 287 - pgprot_val(prot))) 288 - #define __pte_index(addr) (virt_to_pfn(addr) & (PTRS_PER_PTE - 1)) 283 + #define pfn_pte(pfn, prot) (__pte(((pte_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))) 284 + 285 + /* Don't use virt_to_pfn for macros below: could cause truncations for PAE40*/ 286 + #define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT) 287 + #define __pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) 289 288 290 289 /* 291 290 * pte_offset gets a @ptr to PMD entry (PGD in our 2-tier paging system)
+39 -15
arch/arc/mm/init.c
··· 30 30 static unsigned long low_mem_sz; 31 31 32 32 #ifdef CONFIG_HIGHMEM 33 - static unsigned long min_high_pfn; 33 + static unsigned long min_high_pfn, max_high_pfn; 34 34 static u64 high_mem_start; 35 35 static u64 high_mem_sz; 36 + #endif 37 + 38 + #ifdef CONFIG_DISCONTIGMEM 39 + struct pglist_data node_data[MAX_NUMNODES] __read_mostly; 40 + EXPORT_SYMBOL(node_data); 36 41 #endif 37 42 38 43 /* User can over-ride above with "mem=nnn[KkMm]" in cmdline */ ··· 114 109 /* Last usable page of low mem */ 115 110 max_low_pfn = max_pfn = PFN_DOWN(low_mem_start + low_mem_sz); 116 111 117 - #ifdef CONFIG_HIGHMEM 118 - min_high_pfn = PFN_DOWN(high_mem_start); 119 - max_pfn = PFN_DOWN(high_mem_start + high_mem_sz); 112 + #ifdef CONFIG_FLATMEM 113 + /* pfn_valid() uses this */ 114 + max_mapnr = max_low_pfn - min_low_pfn; 120 115 #endif 121 - 122 - max_mapnr = max_pfn - min_low_pfn; 123 116 124 117 /*------------- bootmem allocator setup -----------------------*/ 125 118 ··· 132 129 * the crash 133 130 */ 134 131 135 - memblock_add(low_mem_start, low_mem_sz); 132 + memblock_add_node(low_mem_start, low_mem_sz, 0); 136 133 memblock_reserve(low_mem_start, __pa(_end) - low_mem_start); 137 134 138 135 #ifdef CONFIG_BLK_DEV_INITRD ··· 152 149 zones_size[ZONE_NORMAL] = max_low_pfn - min_low_pfn; 153 150 zones_holes[ZONE_NORMAL] = 0; 154 151 155 - #ifdef CONFIG_HIGHMEM 156 - zones_size[ZONE_HIGHMEM] = max_pfn - max_low_pfn; 157 - 158 - /* This handles the peripheral address space hole */ 159 - zones_holes[ZONE_HIGHMEM] = min_high_pfn - max_low_pfn; 160 - #endif 161 - 162 152 /* 163 153 * We can't use the helper free_area_init(zones[]) because it uses 164 154 * PAGE_OFFSET to compute the @min_low_pfn which would be wrong ··· 164 168 zones_holes); /* holes */ 165 169 166 170 #ifdef CONFIG_HIGHMEM 171 + /* 172 + * Populate a new node with highmem 173 + * 174 + * On ARC (w/o PAE) HIGHMEM addresses are actually smaller (0 based) 175 + * than addresses in normal ala low memory 
(0x8000_0000 based). 176 + * Even with PAE, the huge peripheral space hole would waste a lot of 177 + * mem with single mem_map[]. This warrants a mem_map per region design. 178 + * Thus HIGHMEM on ARC is imlemented with DISCONTIGMEM. 179 + * 180 + * DISCONTIGMEM in turns requires multiple nodes. node 0 above is 181 + * populated with normal memory zone while node 1 only has highmem 182 + */ 183 + node_set_online(1); 184 + 185 + min_high_pfn = PFN_DOWN(high_mem_start); 186 + max_high_pfn = PFN_DOWN(high_mem_start + high_mem_sz); 187 + 188 + zones_size[ZONE_NORMAL] = 0; 189 + zones_holes[ZONE_NORMAL] = 0; 190 + 191 + zones_size[ZONE_HIGHMEM] = max_high_pfn - min_high_pfn; 192 + zones_holes[ZONE_HIGHMEM] = 0; 193 + 194 + free_area_init_node(1, /* node-id */ 195 + zones_size, /* num pages per zone */ 196 + min_high_pfn, /* first pfn of node */ 197 + zones_holes); /* holes */ 198 + 167 199 high_memory = (void *)(min_high_pfn << PAGE_SHIFT); 168 200 kmap_init(); 169 201 #endif ··· 209 185 unsigned long tmp; 210 186 211 187 reset_all_zones_managed_pages(); 212 - for (tmp = min_high_pfn; tmp < max_pfn; tmp++) 188 + for (tmp = min_high_pfn; tmp < max_high_pfn; tmp++) 213 189 free_highmem_page(pfn_to_page(tmp)); 214 190 #endif 215 191
+9
arch/arm/boot/dts/omap3-n900.dts
··· 329 329 regulator-name = "V28"; 330 330 regulator-min-microvolt = <2800000>; 331 331 regulator-max-microvolt = <2800000>; 332 + regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */ 332 333 regulator-always-on; /* due to battery cover sensor */ 333 334 }; 334 335 ··· 337 336 regulator-name = "VCSI"; 338 337 regulator-min-microvolt = <1800000>; 339 338 regulator-max-microvolt = <1800000>; 339 + regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */ 340 340 }; 341 341 342 342 &vaux3 { 343 343 regulator-name = "VMMC2_30"; 344 344 regulator-min-microvolt = <2800000>; 345 345 regulator-max-microvolt = <3000000>; 346 + regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */ 346 347 }; 347 348 348 349 &vaux4 { 349 350 regulator-name = "VCAM_ANA_28"; 350 351 regulator-min-microvolt = <2800000>; 351 352 regulator-max-microvolt = <2800000>; 353 + regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */ 352 354 }; 353 355 354 356 &vmmc1 { 355 357 regulator-name = "VMMC1"; 356 358 regulator-min-microvolt = <1850000>; 357 359 regulator-max-microvolt = <3150000>; 360 + regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */ 358 361 }; 359 362 360 363 &vmmc2 { 361 364 regulator-name = "V28_A"; 362 365 regulator-min-microvolt = <2800000>; 363 366 regulator-max-microvolt = <3000000>; 367 + regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */ 364 368 regulator-always-on; /* due VIO leak to AIC34 VDDs */ 365 369 }; 366 370 ··· 373 367 regulator-name = "VPLL"; 374 368 regulator-min-microvolt = <1800000>; 375 369 regulator-max-microvolt = <1800000>; 370 + regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */ 376 371 regulator-always-on; 377 372 }; 378 373 ··· 381 374 regulator-name = "VSDI_CSI"; 382 375 regulator-min-microvolt = <1800000>; 383 376 regulator-max-microvolt = <1800000>; 377 + regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */ 384 378 regulator-always-on; 385 379 }; 386 380 ··· 389 381 regulator-name = "VMMC2_IO_18"; 390 382 regulator-min-microvolt = 
<1800000>; 391 383 regulator-max-microvolt = <1800000>; 384 + regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */ 392 385 }; 393 386 394 387 &vio {
+1 -1
arch/arm/boot/dts/omap34xx.dtsi
··· 46 46 0x480bd800 0x017c>; 47 47 interrupts = <24>; 48 48 iommus = <&mmu_isp>; 49 - syscon = <&scm_conf 0xdc>; 49 + syscon = <&scm_conf 0x6c>; 50 50 ti,phy-type = <OMAP3ISP_PHY_TYPE_COMPLEX_IO>; 51 51 #clock-cells = <1>; 52 52 ports {
+2 -2
arch/arm/boot/dts/omap5-board-common.dtsi
··· 472 472 ldo1_reg: ldo1 { 473 473 /* VDDAPHY_CAM: vdda_csiport */ 474 474 regulator-name = "ldo1"; 475 - regulator-min-microvolt = <1500000>; 475 + regulator-min-microvolt = <1800000>; 476 476 regulator-max-microvolt = <1800000>; 477 477 }; 478 478 ··· 498 498 ldo4_reg: ldo4 { 499 499 /* VDDAPHY_DISP: vdda_dsiport/hdmi */ 500 500 regulator-name = "ldo4"; 501 - regulator-min-microvolt = <1500000>; 501 + regulator-min-microvolt = <1800000>; 502 502 regulator-max-microvolt = <1800000>; 503 503 }; 504 504
+2 -2
arch/arm/boot/dts/omap5-cm-t54.dts
··· 513 513 ldo1_reg: ldo1 { 514 514 /* VDDAPHY_CAM: vdda_csiport */ 515 515 regulator-name = "ldo1"; 516 - regulator-min-microvolt = <1500000>; 516 + regulator-min-microvolt = <1800000>; 517 517 regulator-max-microvolt = <1800000>; 518 518 }; 519 519 ··· 537 537 ldo4_reg: ldo4 { 538 538 /* VDDAPHY_DISP: vdda_dsiport/hdmi */ 539 539 regulator-name = "ldo4"; 540 - regulator-min-microvolt = <1500000>; 540 + regulator-min-microvolt = <1800000>; 541 541 regulator-max-microvolt = <1800000>; 542 542 }; 543 543
+1 -1
arch/arm/boot/dts/omap5.dtsi
··· 269 269 omap5_pmx_wkup: pinmux@c840 { 270 270 compatible = "ti,omap5-padconf", 271 271 "pinctrl-single"; 272 - reg = <0xc840 0x0038>; 272 + reg = <0xc840 0x003c>; 273 273 #address-cells = <1>; 274 274 #size-cells = <0>; 275 275 #interrupt-cells = <1>;
+2 -1
arch/arm/boot/dts/qcom-apq8064.dtsi
··· 666 666 }; 667 667 668 668 sata0: sata@29000000 { 669 - compatible = "generic-ahci"; 669 + compatible = "qcom,apq8064-ahci", "generic-ahci"; 670 670 status = "disabled"; 671 671 reg = <0x29000000 0x180>; 672 672 interrupts = <GIC_SPI 209 IRQ_TYPE_NONE>; ··· 688 688 689 689 phys = <&sata_phy0>; 690 690 phy-names = "sata-phy"; 691 + ports-implemented = <0x1>; 691 692 }; 692 693 693 694 /* Temporary fixed regulator */
-2
arch/arm/boot/dts/sun8i-q8-common.dtsi
··· 125 125 }; 126 126 127 127 &reg_dc1sw { 128 - regulator-min-microvolt = <3000000>; 129 - regulator-max-microvolt = <3000000>; 130 128 regulator-name = "vcc-lcd"; 131 129 }; 132 130
+11
arch/arm/include/asm/domain.h
··· 84 84 85 85 #ifndef __ASSEMBLY__ 86 86 87 + #ifdef CONFIG_CPU_CP15_MMU 87 88 static inline unsigned int get_domain(void) 88 89 { 89 90 unsigned int domain; ··· 104 103 : : "r" (val) : "memory"); 105 104 isb(); 106 105 } 106 + #else 107 + static inline unsigned int get_domain(void) 108 + { 109 + return 0; 110 + } 111 + 112 + static inline void set_domain(unsigned val) 113 + { 114 + } 115 + #endif 107 116 108 117 #ifdef CONFIG_CPU_USE_DOMAINS 109 118 #define modify_domain(dom,type) \
+1 -1
arch/arm/kernel/head-nommu.S
··· 236 236 mov r0, #CONFIG_VECTORS_BASE @ Cover from VECTORS_BASE 237 237 ldr r5,=(MPU_AP_PL1RW_PL0NA | MPU_RGN_NORMAL) 238 238 /* Writing N to bits 5:1 (RSR_SZ) --> region size 2^N+1 */ 239 - mov r6, #(((PAGE_SHIFT - 1) << MPU_RSR_SZ) | 1 << MPU_RSR_EN) 239 + mov r6, #(((2 * PAGE_SHIFT - 1) << MPU_RSR_SZ) | 1 << MPU_RSR_EN) 240 240 241 241 setup_region r0, r5, r6, MPU_DATA_SIDE @ VECTORS_BASE, PL0 NA, enabled 242 242 beq 3f @ Memory-map not unified
+1 -1
arch/arm/kvm/mmu.c
··· 1004 1004 kvm_pfn_t pfn = *pfnp; 1005 1005 gfn_t gfn = *ipap >> PAGE_SHIFT; 1006 1006 1007 - if (PageTransCompound(pfn_to_page(pfn))) { 1007 + if (PageTransCompoundMap(pfn_to_page(pfn))) { 1008 1008 unsigned long mask; 1009 1009 /* 1010 1010 * The address we faulted on is backed by a transparent huge
+5
arch/arm/mach-davinci/board-mityomapl138.c
··· 121 121 const char *partnum = NULL; 122 122 struct davinci_soc_info *soc_info = &davinci_soc_info; 123 123 124 + if (!IS_BUILTIN(CONFIG_NVMEM)) { 125 + pr_warn("Factory Config not available without CONFIG_NVMEM\n"); 126 + goto bad_config; 127 + } 128 + 124 129 ret = nvmem_device_read(nvmem, 0, sizeof(factory_config), 125 130 &factory_config); 126 131 if (ret != sizeof(struct factory_config)) {
+5
arch/arm/mach-davinci/common.c
··· 33 33 char *mac_addr = davinci_soc_info.emac_pdata->mac_addr; 34 34 off_t offset = (off_t)context; 35 35 36 + if (!IS_BUILTIN(CONFIG_NVMEM)) { 37 + pr_warn("Cannot read MAC addr from EEPROM without CONFIG_NVMEM\n"); 38 + return; 39 + } 40 + 36 41 /* Read MAC addr from EEPROM */ 37 42 if (nvmem_device_read(nvmem, offset, ETH_ALEN, mac_addr) == ETH_ALEN) 38 43 pr_info("Read MAC addr from EEPROM: %pM\n", mac_addr);
+1 -1
arch/arm/mach-exynos/pm_domains.c
··· 92 92 if (IS_ERR(pd->clk[i])) 93 93 break; 94 94 95 - if (IS_ERR(pd->clk[i])) 95 + if (IS_ERR(pd->pclk[i])) 96 96 continue; /* Skip on first power up */ 97 97 if (clk_set_parent(pd->clk[i], pd->pclk[i])) 98 98 pr_err("%s: error setting parent to clock%d\n",
+1
arch/arm/mach-socfpga/headsmp.S
··· 13 13 #include <asm/assembler.h> 14 14 15 15 .arch armv7-a 16 + .arm 16 17 17 18 ENTRY(secondary_trampoline) 18 19 /* CPU1 will always fetch from 0x0 when it is brought out of reset.
+8 -7
arch/arm/mm/nommu.c
··· 87 87 /* MPU initialisation functions */ 88 88 void __init sanity_check_meminfo_mpu(void) 89 89 { 90 - int i; 91 90 phys_addr_t phys_offset = PHYS_OFFSET; 92 91 phys_addr_t aligned_region_size, specified_mem_size, rounded_mem_size; 93 92 struct memblock_region *reg; ··· 109 110 } else { 110 111 /* 111 112 * memblock auto merges contiguous blocks, remove 112 - * all blocks afterwards 113 + * all blocks afterwards in one go (we can't remove 114 + * blocks separately while iterating) 113 115 */ 114 116 pr_notice("Ignoring RAM after %pa, memory at %pa ignored\n", 115 - &mem_start, &reg->base); 116 - memblock_remove(reg->base, reg->size); 117 + &mem_end, &reg->base); 118 + memblock_remove(reg->base, 0 - reg->base); 119 + break; 117 120 } 118 121 } 119 122 ··· 145 144 pr_warn("Truncating memory from %pa to %pa (MPU region constraints)", 146 145 &specified_mem_size, &aligned_region_size); 147 146 memblock_remove(mem_start + aligned_region_size, 148 - specified_mem_size - aligned_round_size); 147 + specified_mem_size - aligned_region_size); 149 148 150 149 mem_end = mem_start + aligned_region_size; 151 150 } ··· 262 261 return; 263 262 264 263 region_err = mpu_setup_region(MPU_RAM_REGION, PHYS_OFFSET, 265 - ilog2(meminfo.bank[0].size), 264 + ilog2(memblock.memory.regions[0].size), 266 265 MPU_AP_PL1RW_PL0RW | MPU_RGN_NORMAL); 267 266 if (region_err) { 268 267 panic("MPU region initialization failure! %d", region_err); ··· 286 285 * some architectures which the DRAM is the exception vector to trap, 287 286 * alloc_page breaks with error, although it is not NULL, but "0." 288 287 */ 289 - memblock_reserve(CONFIG_VECTORS_BASE, PAGE_SIZE); 288 + memblock_reserve(CONFIG_VECTORS_BASE, 2 * PAGE_SIZE); 290 289 #else /* ifndef CONFIG_CPU_V7M */ 291 290 /* 292 291 * There is no dedicated vector page on V7-M. So nothing needs to be
-1
arch/arm64/boot/dts/renesas/r8a7795.dtsi
··· 120 120 compatible = "fixed-clock"; 121 121 #clock-cells = <0>; 122 122 clock-frequency = <0>; 123 - status = "disabled"; 124 123 }; 125 124 126 125 soc {
+1 -1
arch/parisc/kernel/syscall.S
··· 344 344 #endif 345 345 346 346 cmpib,COND(=),n -1,%r20,tracesys_exit /* seccomp may have returned -1 */ 347 - comiclr,>>= __NR_Linux_syscalls, %r20, %r0 347 + comiclr,>> __NR_Linux_syscalls, %r20, %r0 348 348 b,n .Ltracesys_nosys 349 349 350 350 LDREGX %r20(%r19), %r19
+1 -1
arch/powerpc/include/asm/word-at-a-time.h
··· 82 82 "andc %1,%1,%2\n\t" 83 83 "popcntd %0,%1" 84 84 : "=r" (leading_zero_bits), "=&r" (trailing_zero_bit_mask) 85 - : "r" (bits)); 85 + : "b" (bits)); 86 86 87 87 return leading_zero_bits; 88 88 }
-1
arch/sparc/configs/sparc32_defconfig
··· 24 24 CONFIG_INET_ESP=y 25 25 CONFIG_INET_IPCOMP=y 26 26 # CONFIG_INET_LRO is not set 27 - CONFIG_IPV6_PRIVACY=y 28 27 CONFIG_INET6_AH=m 29 28 CONFIG_INET6_ESP=m 30 29 CONFIG_INET6_IPCOMP=m
-1
arch/sparc/configs/sparc64_defconfig
··· 48 48 CONFIG_INET_AH=y 49 49 CONFIG_INET_ESP=y 50 50 CONFIG_INET_IPCOMP=y 51 - CONFIG_IPV6_PRIVACY=y 52 51 CONFIG_IPV6_ROUTER_PREF=y 53 52 CONFIG_IPV6_ROUTE_INFO=y 54 53 CONFIG_IPV6_OPTIMISTIC_DAD=y
+1
arch/sparc/include/asm/spitfire.h
··· 48 48 #define SUN4V_CHIP_SPARC_M6 0x06 49 49 #define SUN4V_CHIP_SPARC_M7 0x07 50 50 #define SUN4V_CHIP_SPARC64X 0x8a 51 + #define SUN4V_CHIP_SPARC_SN 0x8b 51 52 #define SUN4V_CHIP_UNKNOWN 0xff 52 53 53 54 #ifndef __ASSEMBLY__
+3 -1
arch/sparc/include/uapi/asm/unistd.h
··· 423 423 #define __NR_setsockopt 355 424 424 #define __NR_mlock2 356 425 425 #define __NR_copy_file_range 357 426 + #define __NR_preadv2 358 427 + #define __NR_pwritev2 359 426 428 427 - #define NR_syscalls 358 429 + #define NR_syscalls 360 428 430 429 431 /* Bitmask values returned from kern_features system call. */ 430 432 #define KERN_FEATURE_MIXED_MODE_STACK 0x00000001
+5 -9
arch/sparc/kernel/cherrs.S
··· 214 214 subcc %g1, %g2, %g1 ! Next cacheline 215 215 bge,pt %icc, 1b 216 216 nop 217 - ba,pt %xcc, dcpe_icpe_tl1_common 218 - nop 217 + ba,a,pt %xcc, dcpe_icpe_tl1_common 219 218 220 219 do_dcpe_tl1_fatal: 221 220 sethi %hi(1f), %g7 ··· 223 224 mov 0x2, %o0 224 225 call cheetah_plus_parity_error 225 226 add %sp, PTREGS_OFF, %o1 226 - ba,pt %xcc, rtrap 227 - nop 227 + ba,a,pt %xcc, rtrap 228 228 .size do_dcpe_tl1,.-do_dcpe_tl1 229 229 230 230 .globl do_icpe_tl1 ··· 257 259 subcc %g1, %g2, %g1 258 260 bge,pt %icc, 1b 259 261 nop 260 - ba,pt %xcc, dcpe_icpe_tl1_common 261 - nop 262 + ba,a,pt %xcc, dcpe_icpe_tl1_common 262 263 263 264 do_icpe_tl1_fatal: 264 265 sethi %hi(1f), %g7 ··· 266 269 mov 0x3, %o0 267 270 call cheetah_plus_parity_error 268 271 add %sp, PTREGS_OFF, %o1 269 - ba,pt %xcc, rtrap 270 - nop 272 + ba,a,pt %xcc, rtrap 271 273 .size do_icpe_tl1,.-do_icpe_tl1 272 274 273 275 .type dcpe_icpe_tl1_common,#function ··· 452 456 cmp %g2, 0x63 453 457 be c_cee 454 458 nop 455 - ba,pt %xcc, c_deferred 459 + ba,a,pt %xcc, c_deferred 456 460 .size __cheetah_log_error,.-__cheetah_log_error 457 461 458 462 /* Cheetah FECC trap handling, we get here from tl{0,1}_fecc
+6
arch/sparc/kernel/cpu.c
··· 506 506 sparc_pmu_type = "sparc-m7"; 507 507 break; 508 508 509 + case SUN4V_CHIP_SPARC_SN: 510 + sparc_cpu_type = "SPARC-SN"; 511 + sparc_fpu_type = "SPARC-SN integrated FPU"; 512 + sparc_pmu_type = "sparc-sn"; 513 + break; 514 + 509 515 case SUN4V_CHIP_SPARC64X: 510 516 sparc_cpu_type = "SPARC64-X"; 511 517 sparc_fpu_type = "SPARC64-X integrated FPU";
+1
arch/sparc/kernel/cpumap.c
··· 328 328 case SUN4V_CHIP_NIAGARA5: 329 329 case SUN4V_CHIP_SPARC_M6: 330 330 case SUN4V_CHIP_SPARC_M7: 331 + case SUN4V_CHIP_SPARC_SN: 331 332 case SUN4V_CHIP_SPARC64X: 332 333 rover_inc_table = niagara_iterate_method; 333 334 break;
+5 -6
arch/sparc/kernel/fpu_traps.S
··· 100 100 fmuld %f0, %f2, %f26 101 101 faddd %f0, %f2, %f28 102 102 fmuld %f0, %f2, %f30 103 - b,pt %xcc, fpdis_exit 104 - nop 103 + ba,a,pt %xcc, fpdis_exit 104 + 105 105 2: andcc %g5, FPRS_DU, %g0 106 106 bne,pt %icc, 3f 107 107 fzero %f32 ··· 144 144 fmuld %f32, %f34, %f58 145 145 faddd %f32, %f34, %f60 146 146 fmuld %f32, %f34, %f62 147 - ba,pt %xcc, fpdis_exit 148 - nop 147 + ba,a,pt %xcc, fpdis_exit 148 + 149 149 3: mov SECONDARY_CONTEXT, %g3 150 150 add %g6, TI_FPREGS, %g1 151 151 ··· 197 197 fp_other_bounce: 198 198 call do_fpother 199 199 add %sp, PTREGS_OFF, %o0 200 - ba,pt %xcc, rtrap 201 - nop 200 + ba,a,pt %xcc, rtrap 202 201 .size fp_other_bounce,.-fp_other_bounce 203 202 204 203 .align 32
+16 -16
arch/sparc/kernel/head_64.S
··· 414 414 cmp %g2, 'T' 415 415 be,pt %xcc, 70f 416 416 cmp %g2, 'M' 417 + be,pt %xcc, 70f 418 + cmp %g2, 'S' 417 419 bne,pn %xcc, 49f 418 420 nop 419 421 ··· 435 433 cmp %g2, '7' 436 434 be,pt %xcc, 5f 437 435 mov SUN4V_CHIP_SPARC_M7, %g4 436 + cmp %g2, 'N' 437 + be,pt %xcc, 5f 438 + mov SUN4V_CHIP_SPARC_SN, %g4 438 439 ba,pt %xcc, 49f 439 440 nop 440 441 ··· 466 461 subcc %g3, 1, %g3 467 462 bne,pt %xcc, 41b 468 463 add %g1, 1, %g1 469 - mov SUN4V_CHIP_SPARC64X, %g4 470 464 ba,pt %xcc, 5f 471 - nop 465 + mov SUN4V_CHIP_SPARC64X, %g4 472 466 473 467 49: 474 468 mov SUN4V_CHIP_UNKNOWN, %g4 ··· 552 548 stxa %g0, [%g7] ASI_DMMU 553 549 membar #Sync 554 550 555 - ba,pt %xcc, sun4u_continue 556 - nop 551 + ba,a,pt %xcc, sun4u_continue 557 552 558 553 sun4v_init: 559 554 /* Set ctx 0 */ ··· 563 560 mov SECONDARY_CONTEXT, %g7 564 561 stxa %g0, [%g7] ASI_MMU 565 562 membar #Sync 566 - ba,pt %xcc, niagara_tlb_fixup 567 - nop 563 + ba,a,pt %xcc, niagara_tlb_fixup 568 564 569 565 sun4u_continue: 570 566 BRANCH_IF_ANY_CHEETAH(g1, g7, cheetah_tlb_fixup) 571 567 572 - ba,pt %xcc, spitfire_tlb_fixup 573 - nop 568 + ba,a,pt %xcc, spitfire_tlb_fixup 574 569 575 570 niagara_tlb_fixup: 576 571 mov 3, %g2 /* Set TLB type to hypervisor. */ ··· 596 595 be,pt %xcc, niagara4_patch 597 596 nop 598 597 cmp %g1, SUN4V_CHIP_SPARC_M7 598 + be,pt %xcc, niagara4_patch 599 + nop 600 + cmp %g1, SUN4V_CHIP_SPARC_SN 599 601 be,pt %xcc, niagara4_patch 600 602 nop 601 603 ··· 643 639 call hypervisor_patch_cachetlbops 644 640 nop 645 641 646 - ba,pt %xcc, tlb_fixup_done 647 - nop 642 + ba,a,pt %xcc, tlb_fixup_done 648 643 649 644 cheetah_tlb_fixup: 650 645 mov 2, %g2 /* Set TLB type to cheetah+. */ ··· 662 659 call cheetah_patch_cachetlbops 663 660 nop 664 661 665 - ba,pt %xcc, tlb_fixup_done 666 - nop 662 + ba,a,pt %xcc, tlb_fixup_done 667 663 668 664 spitfire_tlb_fixup: 669 665 /* Set TLB type to spitfire. */
··· 776 774 call %o1 777 775 add %sp, (2047 + 128), %o0 778 776 779 - ba,pt %xcc, 2f 780 - nop 777 + ba,a,pt %xcc, 2f 781 778 782 779 1: sethi %hi(sparc64_ttable_tl0), %o0 783 780 set prom_set_trap_table_name, %g2 ··· 815 814 816 815 BRANCH_IF_ANY_CHEETAH(o2, o3, 1f) 817 816 818 - ba,pt %xcc, 2f 819 - nop 817 + ba,a,pt %xcc, 2f 820 818 821 819 /* Disable STICK_INT interrupts. */ 822 820 1:
+4 -8
arch/sparc/kernel/misctrap.S
··· 18 18 109: or %g7, %lo(109b), %g7 19 19 call do_privact 20 20 add %sp, PTREGS_OFF, %o0 21 - ba,pt %xcc, rtrap 22 - nop 21 + ba,a,pt %xcc, rtrap 23 22 .size __do_privact,.-__do_privact 24 23 25 24 .type do_mna,#function ··· 45 46 mov %l5, %o2 46 47 call mem_address_unaligned 47 48 add %sp, PTREGS_OFF, %o0 48 - ba,pt %xcc, rtrap 49 - nop 49 + ba,a,pt %xcc, rtrap 50 50 .size do_mna,.-do_mna 51 51 52 52 .type do_lddfmna,#function ··· 63 65 mov %l5, %o2 64 66 call handle_lddfmna 65 67 add %sp, PTREGS_OFF, %o0 66 - ba,pt %xcc, rtrap 67 - nop 68 + ba,a,pt %xcc, rtrap 68 69 .size do_lddfmna,.-do_lddfmna 69 70 70 71 .type do_stdfmna,#function ··· 81 84 mov %l5, %o2 82 85 call handle_stdfmna 83 86 add %sp, PTREGS_OFF, %o0 84 - ba,pt %xcc, rtrap 85 - nop 87 + ba,a,pt %xcc, rtrap 86 88 .size do_stdfmna,.-do_stdfmna 87 89 88 90 .type breakpoint_trap,#function
+36 -6
arch/sparc/kernel/pci.c
··· 245 245 } 246 246 } 247 247 248 + static void pci_init_dev_archdata(struct dev_archdata *sd, void *iommu, 249 + void *stc, void *host_controller, 250 + struct platform_device *op, 251 + int numa_node) 252 + { 253 + sd->iommu = iommu; 254 + sd->stc = stc; 255 + sd->host_controller = host_controller; 256 + sd->op = op; 257 + sd->numa_node = numa_node; 258 + } 259 + 248 260 static struct pci_dev *of_create_pci_dev(struct pci_pbm_info *pbm, 249 261 struct device_node *node, 250 262 struct pci_bus *bus, int devfn) ··· 271 259 if (!dev) 272 260 return NULL; 273 261 262 + op = of_find_device_by_node(node); 274 263 sd = &dev->dev.archdata; 275 - sd->iommu = pbm->iommu; 276 - sd->stc = &pbm->stc; 277 - sd->host_controller = pbm; 278 - sd->op = op = of_find_device_by_node(node); 279 - sd->numa_node = pbm->numa_node; 280 - 264 + pci_init_dev_archdata(sd, pbm->iommu, &pbm->stc, pbm, op, 265 + pbm->numa_node); 281 266 sd = &op->dev.archdata; 282 267 sd->iommu = pbm->iommu; 283 268 sd->stc = &pbm->stc; ··· 1002 993 { 1003 994 /* No special bus mastering setup handling */ 1004 995 } 996 + 997 + #ifdef CONFIG_PCI_IOV 998 + int pcibios_add_device(struct pci_dev *dev) 999 + { 1000 + struct pci_dev *pdev; 1001 + 1002 + /* Add sriov arch specific initialization here. 1003 + * Copy dev_archdata from PF to VF 1004 + */ 1005 + if (dev->is_virtfn) { 1006 + struct dev_archdata *psd; 1007 + 1008 + pdev = dev->physfn; 1009 + psd = &pdev->dev.archdata; 1010 + pci_init_dev_archdata(&dev->dev.archdata, psd->iommu, 1011 + psd->stc, psd->host_controller, NULL, 1012 + psd->numa_node); 1013 + } 1014 + return 0; 1015 + } 1016 + #endif /* CONFIG_PCI_IOV */ 1005 1017 1006 1018 static int __init pcibios_init(void) 1007 1019 {
+6 -1
arch/sparc/kernel/setup_64.c
··· 285 285 286 286 sun4v_patch_2insn_range(&__sun4v_2insn_patch, 287 287 &__sun4v_2insn_patch_end); 288 - if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7) 288 + if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 || 289 + sun4v_chip_type == SUN4V_CHIP_SPARC_SN) 289 290 sun_m7_patch_2insn_range(&__sun_m7_2insn_patch, 290 291 &__sun_m7_2insn_patch_end); 291 292 ··· 525 524 sun4v_chip_type == SUN4V_CHIP_NIAGARA5 || 526 525 sun4v_chip_type == SUN4V_CHIP_SPARC_M6 || 527 526 sun4v_chip_type == SUN4V_CHIP_SPARC_M7 || 527 + sun4v_chip_type == SUN4V_CHIP_SPARC_SN || 528 528 sun4v_chip_type == SUN4V_CHIP_SPARC64X) 529 529 cap |= HWCAP_SPARC_BLKINIT; 530 530 if (sun4v_chip_type == SUN4V_CHIP_NIAGARA2 || ··· 534 532 sun4v_chip_type == SUN4V_CHIP_NIAGARA5 || 535 533 sun4v_chip_type == SUN4V_CHIP_SPARC_M6 || 536 534 sun4v_chip_type == SUN4V_CHIP_SPARC_M7 || 535 + sun4v_chip_type == SUN4V_CHIP_SPARC_SN || 537 536 sun4v_chip_type == SUN4V_CHIP_SPARC64X) 538 537 cap |= HWCAP_SPARC_N2; 539 538 } ··· 564 561 sun4v_chip_type == SUN4V_CHIP_NIAGARA5 || 565 562 sun4v_chip_type == SUN4V_CHIP_SPARC_M6 || 566 563 sun4v_chip_type == SUN4V_CHIP_SPARC_M7 || 564 + sun4v_chip_type == SUN4V_CHIP_SPARC_SN || 567 565 sun4v_chip_type == SUN4V_CHIP_SPARC64X) 568 566 cap |= (AV_SPARC_VIS | AV_SPARC_VIS2 | 569 567 AV_SPARC_ASI_BLK_INIT | ··· 574 570 sun4v_chip_type == SUN4V_CHIP_NIAGARA5 || 575 571 sun4v_chip_type == SUN4V_CHIP_SPARC_M6 || 576 572 sun4v_chip_type == SUN4V_CHIP_SPARC_M7 || 573 + sun4v_chip_type == SUN4V_CHIP_SPARC_SN || 577 574 sun4v_chip_type == SUN4V_CHIP_SPARC64X) 578 575 cap |= (AV_SPARC_VIS3 | AV_SPARC_HPC | 579 576 AV_SPARC_FMAF);
+6 -12
arch/sparc/kernel/spiterrs.S
··· 85 85 ba,pt %xcc, etraptl1 86 86 rd %pc, %g7 87 87 88 - ba,pt %xcc, 2f 89 - nop 88 + ba,a,pt %xcc, 2f 90 89 91 90 1: ba,pt %xcc, etrap_irq 92 91 rd %pc, %g7 ··· 99 100 mov %l5, %o2 100 101 call spitfire_access_error 101 102 add %sp, PTREGS_OFF, %o0 102 - ba,pt %xcc, rtrap 103 - nop 103 + ba,a,pt %xcc, rtrap 104 104 .size __spitfire_access_error,.-__spitfire_access_error 105 105 106 106 /* This is the trap handler entry point for ECC correctable ··· 177 179 mov %l5, %o2 178 180 call spitfire_data_access_exception_tl1 179 181 add %sp, PTREGS_OFF, %o0 180 - ba,pt %xcc, rtrap 181 - nop 182 + ba,a,pt %xcc, rtrap 182 183 .size __spitfire_data_access_exception_tl1,.-__spitfire_data_access_exception_tl1 183 184 184 185 .type __spitfire_data_access_exception,#function ··· 197 200 mov %l5, %o2 198 201 call spitfire_data_access_exception 199 202 add %sp, PTREGS_OFF, %o0 200 - ba,pt %xcc, rtrap 201 - nop 203 + ba,a,pt %xcc, rtrap 202 204 .size __spitfire_data_access_exception,.-__spitfire_data_access_exception 203 205 204 206 .type __spitfire_insn_access_exception_tl1,#function ··· 216 220 mov %l5, %o2 217 221 call spitfire_insn_access_exception_tl1 218 222 add %sp, PTREGS_OFF, %o0 219 - ba,pt %xcc, rtrap 220 - nop 223 + ba,a,pt %xcc, rtrap 221 224 .size __spitfire_insn_access_exception_tl1,.-__spitfire_insn_access_exception_tl1 222 225 223 226 .type __spitfire_insn_access_exception,#function ··· 235 240 mov %l5, %o2 236 241 call spitfire_insn_access_exception 237 242 add %sp, PTREGS_OFF, %o0 238 - ba,pt %xcc, rtrap 239 - nop 243 + ba,a,pt %xcc, rtrap 240 244 .size __spitfire_insn_access_exception,.-__spitfire_insn_access_exception
+1 -1
arch/sparc/kernel/systbls_32.S
··· 88 88 /*340*/ .long sys_ni_syscall, sys_kcmp, sys_finit_module, sys_sched_setattr, sys_sched_getattr 89 89 /*345*/ .long sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf 90 90 /*350*/ .long sys_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen 91 - /*355*/ .long sys_setsockopt, sys_mlock2, sys_copy_file_range 91 + /*355*/ .long sys_setsockopt, sys_mlock2, sys_copy_file_range, sys_preadv2, sys_pwritev2
+2 -2
arch/sparc/kernel/systbls_64.S
··· 89 89 /*340*/ .word sys_kern_features, sys_kcmp, sys_finit_module, sys_sched_setattr, sys_sched_getattr 90 90 .word sys32_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf 91 91 /*350*/ .word sys32_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen 92 - .word compat_sys_setsockopt, sys_mlock2, sys_copy_file_range 92 + .word compat_sys_setsockopt, sys_mlock2, sys_copy_file_range, compat_sys_preadv2, compat_sys_pwritev2 93 93 94 94 #endif /* CONFIG_COMPAT */ 95 95 ··· 170 170 /*340*/ .word sys_kern_features, sys_kcmp, sys_finit_module, sys_sched_setattr, sys_sched_getattr 171 171 .word sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf 172 172 /*350*/ .word sys64_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen 173 - .word sys_setsockopt, sys_mlock2, sys_copy_file_range 173 + .word sys_setsockopt, sys_mlock2, sys_copy_file_range, sys_preadv2, sys_pwritev2
+1 -2
arch/sparc/kernel/utrap.S
··· 11 11 mov %l4, %o1 12 12 call bad_trap 13 13 add %sp, PTREGS_OFF, %o0 14 - ba,pt %xcc, rtrap 15 - nop 14 + ba,a,pt %xcc, rtrap 16 15 17 16 invoke_utrap: 18 17 sllx %g3, 3, %g3
+18
arch/sparc/kernel/vio.c
··· 45 45 return NULL; 46 46 } 47 47 48 + static int vio_hotplug(struct device *dev, struct kobj_uevent_env *env) 49 + { 50 + const struct vio_dev *vio_dev = to_vio_dev(dev); 51 + 52 + add_uevent_var(env, "MODALIAS=vio:T%sS%s", vio_dev->type, vio_dev->compat); 53 + return 0; 54 + } 55 + 48 56 static int vio_bus_match(struct device *dev, struct device_driver *drv) 49 57 { 50 58 struct vio_dev *vio_dev = to_vio_dev(dev); ··· 113 105 return sprintf(buf, "%s\n", vdev->type); 114 106 } 115 107 108 + static ssize_t modalias_show(struct device *dev, struct device_attribute *attr, 109 + char *buf) 110 + { 111 + const struct vio_dev *vdev = to_vio_dev(dev); 112 + 113 + return sprintf(buf, "vio:T%sS%s\n", vdev->type, vdev->compat); 114 + } 115 + 116 116 static struct device_attribute vio_dev_attrs[] = { 117 117 __ATTR_RO(devspec), 118 118 __ATTR_RO(type), 119 + __ATTR_RO(modalias), 119 120 __ATTR_NULL 120 121 }; 121 122 122 123 static struct bus_type vio_bus_type = { 123 124 .name = "vio", 124 125 .dev_attrs = vio_dev_attrs, 126 + .uevent = vio_hotplug, 125 127 .match = vio_bus_match, 126 128 .probe = vio_device_probe, 127 129 .remove = vio_device_remove,
+4
arch/sparc/kernel/vmlinux.lds.S
··· 33 33 jiffies = jiffies_64; 34 34 #endif 35 35 36 + #ifdef CONFIG_SPARC64 37 + ASSERT((swapper_tsb == 0x0000000000408000), "Error: sparc64 early assembler too large") 38 + #endif 39 + 36 40 SECTIONS 37 41 { 38 42 #ifdef CONFIG_SPARC64
+1 -2
arch/sparc/kernel/winfixup.S
··· 32 32 rd %pc, %g7 33 33 call do_sparc64_fault 34 34 add %sp, PTREGS_OFF, %o0 35 - ba,pt %xcc, rtrap 36 - nop 35 + ba,a,pt %xcc, rtrap 37 36 38 37 /* Be very careful about usage of the trap globals here. 39 38 * You cannot touch %g5 as that has the fault information.
+3
arch/sparc/mm/init_64.c
··· 1769 1769 max_phys_bits = 47; 1770 1770 break; 1771 1771 case SUN4V_CHIP_SPARC_M7: 1772 + case SUN4V_CHIP_SPARC_SN: 1772 1773 default: 1773 1774 /* M7 and later support 52-bit virtual addresses. */ 1774 1775 sparc64_va_hole_top = 0xfff8000000000000UL; ··· 1987 1986 */ 1988 1987 switch (sun4v_chip_type) { 1989 1988 case SUN4V_CHIP_SPARC_M7: 1989 + case SUN4V_CHIP_SPARC_SN: 1990 1990 pagecv_flag = 0x00; 1991 1991 break; 1992 1992 default: ··· 2140 2138 */ 2141 2139 switch (sun4v_chip_type) { 2142 2140 case SUN4V_CHIP_SPARC_M7: 2141 + case SUN4V_CHIP_SPARC_SN: 2143 2142 page_cache4v_flag = _PAGE_CP_4V; 2144 2143 break; 2145 2144 default:
+1 -3
arch/x86/kernel/apic/x2apic_uv_x.c
··· 891 891 } 892 892 pr_info("UV: Found %s hub\n", hub); 893 893 894 - /* We now only need to map the MMRs on UV1 */ 895 - if (is_uv1_hub()) 896 - map_low_mmrs(); 894 + map_low_mmrs(); 897 895 898 896 m_n_config.v = uv_read_local_mmr(UVH_RH_GAM_CONFIG_MMR ); 899 897 m_val = m_n_config.s.m_skt;
+1 -1
arch/x86/kernel/cpu/intel.c
··· 336 336 { 337 337 unsigned int eax, ebx, ecx, edx; 338 338 339 - if (c->cpuid_level < 4) 339 + if (!IS_ENABLED(CONFIG_SMP) || c->cpuid_level < 4) 340 340 return 1; 341 341 342 342 /* Intel has a non-standard dependency on %ecx for this CPUID level. */
+5
arch/x86/kernel/smpboot.c
··· 332 332 * primary cores. 333 333 */ 334 334 ncpus = boot_cpu_data.x86_max_cores; 335 + if (!ncpus) { 336 + pr_warn("x86_max_cores == zero !?!?"); 337 + ncpus = 1; 338 + } 339 + 335 340 __max_logical_packages = DIV_ROUND_UP(total_cpus, ncpus); 336 341 337 342 /*
+12 -2
arch/x86/kernel/sysfb_efi.c
··· 106 106 continue; 107 107 for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 108 108 resource_size_t start, end; 109 + unsigned long flags; 110 + 111 + flags = pci_resource_flags(dev, i); 112 + if (!(flags & IORESOURCE_MEM)) 113 + continue; 114 + 115 + if (flags & IORESOURCE_UNSET) 116 + continue; 117 + 118 + if (pci_resource_len(dev, i) == 0) 119 + continue; 109 120 110 121 start = pci_resource_start(dev, i); 111 - if (start == 0) 112 - break; 113 122 end = pci_resource_end(dev, i); 114 123 if (screen_info.lfb_base >= start && 115 124 screen_info.lfb_base < end) { 116 125 found_bar = 1; 126 + break; 117 127 } 118 128 } 119 129 }
+1 -1
arch/x86/kernel/tsc_msr.c
··· 92 92 93 93 if (freq_desc_tables[cpu_index].msr_plat) { 94 94 rdmsr(MSR_PLATFORM_INFO, lo, hi); 95 - ratio = (lo >> 8) & 0x1f; 95 + ratio = (lo >> 8) & 0xff; 96 96 } else { 97 97 rdmsr(MSR_IA32_PERF_STATUS, lo, hi); 98 98 ratio = (hi >> 8) & 0x1f;
+2 -2
arch/x86/kvm/mmu.c
··· 2823 2823 */ 2824 2824 if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) && 2825 2825 level == PT_PAGE_TABLE_LEVEL && 2826 - PageTransCompound(pfn_to_page(pfn)) && 2826 + PageTransCompoundMap(pfn_to_page(pfn)) && 2827 2827 !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) { 2828 2828 unsigned long mask; 2829 2829 /* ··· 4785 4785 */ 4786 4786 if (sp->role.direct && 4787 4787 !kvm_is_reserved_pfn(pfn) && 4788 - PageTransCompound(pfn_to_page(pfn))) { 4788 + PageTransCompoundMap(pfn_to_page(pfn))) { 4789 4789 drop_spte(kvm, sptep); 4790 4790 need_tlb_flush = 1; 4791 4791 goto restart;
+9 -9
arch/x86/platform/efi/efi-bgrt.c
··· 43 43 return; 44 44 45 45 if (bgrt_tab->header.length < sizeof(*bgrt_tab)) { 46 - pr_err("Ignoring BGRT: invalid length %u (expected %zu)\n", 46 + pr_notice("Ignoring BGRT: invalid length %u (expected %zu)\n", 47 47 bgrt_tab->header.length, sizeof(*bgrt_tab)); 48 48 return; 49 49 } 50 50 if (bgrt_tab->version != 1) { 51 - pr_err("Ignoring BGRT: invalid version %u (expected 1)\n", 51 + pr_notice("Ignoring BGRT: invalid version %u (expected 1)\n", 52 52 bgrt_tab->version); 53 53 return; 54 54 } 55 55 if (bgrt_tab->status & 0xfe) { 56 - pr_err("Ignoring BGRT: reserved status bits are non-zero %u\n", 56 + pr_notice("Ignoring BGRT: reserved status bits are non-zero %u\n", 57 57 bgrt_tab->status); 58 58 return; 59 59 } 60 60 if (bgrt_tab->image_type != 0) { 61 - pr_err("Ignoring BGRT: invalid image type %u (expected 0)\n", 61 + pr_notice("Ignoring BGRT: invalid image type %u (expected 0)\n", 62 62 bgrt_tab->image_type); 63 63 return; 64 64 } 65 65 if (!bgrt_tab->image_address) { 66 - pr_err("Ignoring BGRT: null image address\n"); 66 + pr_notice("Ignoring BGRT: null image address\n"); 67 67 return; 68 68 } 69 69 70 70 image = memremap(bgrt_tab->image_address, sizeof(bmp_header), MEMREMAP_WB); 71 71 if (!image) { 72 - pr_err("Ignoring BGRT: failed to map image header memory\n"); 72 + pr_notice("Ignoring BGRT: failed to map image header memory\n"); 73 73 return; 74 74 } 75 75 76 76 memcpy(&bmp_header, image, sizeof(bmp_header)); 77 77 memunmap(image); 78 78 if (bmp_header.id != 0x4d42) { 79 - pr_err("Ignoring BGRT: Incorrect BMP magic number 0x%x (expected 0x4d42)\n", 79 + pr_notice("Ignoring BGRT: Incorrect BMP magic number 0x%x (expected 0x4d42)\n", 80 80 bmp_header.id); 81 81 return; 82 82 } ··· 84 84 85 85 bgrt_image = kmalloc(bgrt_image_size, GFP_KERNEL | __GFP_NOWARN); 86 86 if (!bgrt_image) { 87 - pr_err("Ignoring BGRT: failed to allocate memory for image (wanted %zu bytes)\n", 87 + pr_notice("Ignoring BGRT: failed to allocate memory for image (wanted %zu bytes)\n",
88 88 bgrt_image_size); 89 89 return; 90 90 } 91 91 92 92 image = memremap(bgrt_tab->image_address, bmp_header.size, MEMREMAP_WB); 93 93 if (!image) { 94 - pr_err("Ignoring BGRT: failed to map image memory\n"); 94 + pr_notice("Ignoring BGRT: failed to map image memory\n"); 95 95 kfree(bgrt_image); 96 96 bgrt_image = NULL; 97 97 return;
+1
crypto/Kconfig
··· 96 96 config CRYPTO_RSA 97 97 tristate "RSA algorithm" 98 98 select CRYPTO_AKCIPHER 99 + select CRYPTO_MANAGER 99 100 select MPILIB 100 101 select ASN1 101 102 help
+2 -1
crypto/ahash.c
··· 69 69 struct scatterlist *sg; 70 70 71 71 sg = walk->sg; 72 - walk->pg = sg_page(sg); 73 72 walk->offset = sg->offset; 73 + walk->pg = sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT); 74 + walk->offset = offset_in_page(walk->offset); 74 75 walk->entrylen = sg->length; 75 76 76 77 if (walk->entrylen > walk->total)
+3
drivers/acpi/acpica/dsmethod.c
··· 428 428 obj_desc->method.mutex->mutex. 429 429 original_sync_level = 430 430 obj_desc->method.mutex->mutex.sync_level; 431 + 432 + obj_desc->method.mutex->mutex.thread_id = 433 + acpi_os_get_thread_id(); 431 434 } 432 435 } 433 436
+4 -1
drivers/acpi/nfit.c
··· 287 287 offset); 288 288 rc = -ENXIO; 289 289 } 290 - } else 290 + } else { 291 291 rc = 0; 292 + if (cmd_rc) 293 + *cmd_rc = xlat_status(buf, cmd); 294 + } 292 295 293 296 out: 294 297 ACPI_FREE(out_obj);
+8
drivers/ata/Kconfig
··· 202 202 203 203 If unsure, say N. 204 204 205 + config SATA_AHCI_SEATTLE 206 + tristate "AMD Seattle 6.0Gbps AHCI SATA host controller support" 207 + depends on ARCH_SEATTLE 208 + help 209 + This option enables support for AMD Seattle SATA host controller. 210 + 211 + If unsure, say N 212 + 205 213 config SATA_INIC162X 206 214 tristate "Initio 162x SATA support (Very Experimental)" 207 215 depends on PCI
+1
drivers/ata/Makefile
··· 4 4 # non-SFF interface 5 5 obj-$(CONFIG_SATA_AHCI) += ahci.o libahci.o 6 6 obj-$(CONFIG_SATA_ACARD_AHCI) += acard-ahci.o libahci.o 7 + obj-$(CONFIG_SATA_AHCI_SEATTLE) += ahci_seattle.o libahci.o libahci_platform.o 7 8 obj-$(CONFIG_SATA_AHCI_PLATFORM) += ahci_platform.o libahci.o libahci_platform.o 8 9 obj-$(CONFIG_SATA_FSL) += sata_fsl.o 9 10 obj-$(CONFIG_SATA_INIC162X) += sata_inic162x.o
+3
drivers/ata/ahci_platform.c
··· 51 51 if (rc) 52 52 return rc; 53 53 54 + of_property_read_u32(dev->of_node, 55 + "ports-implemented", &hpriv->force_port_map); 56 + 54 57 if (of_device_is_compatible(dev->of_node, "hisilicon,hisi-ahci")) 55 58 hpriv->flags |= AHCI_HFLAG_NO_FBS | AHCI_HFLAG_NO_NCQ; 56 59
+210
drivers/ata/ahci_seattle.c
··· 1 + /* 2 + * AMD Seattle AHCI SATA driver 3 + * 4 + * Copyright (c) 2015, Advanced Micro Devices 5 + * Author: Brijesh Singh <brijesh.singh@amd.com> 6 + * 7 + * based on the AHCI SATA platform driver by Jeff Garzik and Anton Vorontsov 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License as published by 11 + * the Free Software Foundation; either version 2 of the License. 12 + * 13 + * This program is distributed in the hope that it will be useful, 14 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 + * GNU General Public License for more details. 17 + */ 18 + 19 + #include <linux/kernel.h> 20 + #include <linux/module.h> 21 + #include <linux/pm.h> 22 + #include <linux/device.h> 23 + #include <linux/of_device.h> 24 + #include <linux/platform_device.h> 25 + #include <linux/libata.h> 26 + #include <linux/ahci_platform.h> 27 + #include <linux/acpi.h> 28 + #include <linux/pci_ids.h> 29 + #include "ahci.h" 30 + 31 + /* SGPIO Control Register definition 32 + * 33 + * Bit Type Description 34 + * 31 RW OD7.2 (activity) 35 + * 30 RW OD7.1 (locate) 36 + * 29 RW OD7.0 (fault) 37 + * 28...8 RW OD6.2...OD0.0 (3bits per port, 1 bit per LED) 38 + * 7 RO SGPIO feature flag 39 + * 6:4 RO Reserved 40 + * 3:0 RO Number of ports (0 means no port supported) 41 + */ 42 + #define ACTIVITY_BIT_POS(x) (8 + (3 * x)) 43 + #define LOCATE_BIT_POS(x) (ACTIVITY_BIT_POS(x) + 1) 44 + #define FAULT_BIT_POS(x) (LOCATE_BIT_POS(x) + 1) 45 + 46 + #define ACTIVITY_MASK 0x00010000 47 + #define LOCATE_MASK 0x00080000 48 + #define FAULT_MASK 0x00400000 49 + 50 + #define DRV_NAME "ahci-seattle" 51 + 52 + static ssize_t seattle_transmit_led_message(struct ata_port *ap, u32 state, 53 + ssize_t size); 54 + 55 + struct seattle_plat_data { 56 + void __iomem *sgpio_ctrl; 57 + }; 58 + 59 + static struct ata_port_operations ahci_port_ops = 
{ 60 + .inherits = &ahci_ops, 61 + }; 62 + 63 + static const struct ata_port_info ahci_port_info = { 64 + .flags = AHCI_FLAG_COMMON, 65 + .pio_mask = ATA_PIO4, 66 + .udma_mask = ATA_UDMA6, 67 + .port_ops = &ahci_port_ops, 68 + }; 69 + 70 + static struct ata_port_operations ahci_seattle_ops = { 71 + .inherits = &ahci_ops, 72 + .transmit_led_message = seattle_transmit_led_message, 73 + }; 74 + 75 + static const struct ata_port_info ahci_port_seattle_info = { 76 + .flags = AHCI_FLAG_COMMON | ATA_FLAG_EM | ATA_FLAG_SW_ACTIVITY, 77 + .link_flags = ATA_LFLAG_SW_ACTIVITY, 78 + .pio_mask = ATA_PIO4, 79 + .udma_mask = ATA_UDMA6, 80 + .port_ops = &ahci_seattle_ops, 81 + }; 82 + 83 + static struct scsi_host_template ahci_platform_sht = { 84 + AHCI_SHT(DRV_NAME), 85 + }; 86 + 87 + static ssize_t seattle_transmit_led_message(struct ata_port *ap, u32 state, 88 + ssize_t size) 89 + { 90 + struct ahci_host_priv *hpriv = ap->host->private_data; 91 + struct ahci_port_priv *pp = ap->private_data; 92 + struct seattle_plat_data *plat_data = hpriv->plat_data; 93 + unsigned long flags; 94 + int pmp; 95 + struct ahci_em_priv *emp; 96 + u32 val; 97 + 98 + /* get the slot number from the message */ 99 + pmp = (state & EM_MSG_LED_PMP_SLOT) >> 8; 100 + if (pmp >= EM_MAX_SLOTS) 101 + return -EINVAL; 102 + emp = &pp->em_priv[pmp]; 103 + 104 + val = ioread32(plat_data->sgpio_ctrl); 105 + if (state & ACTIVITY_MASK) 106 + val |= 1 << ACTIVITY_BIT_POS((ap->port_no)); 107 + else 108 + val &= ~(1 << ACTIVITY_BIT_POS((ap->port_no))); 109 + 110 + if (state & LOCATE_MASK) 111 + val |= 1 << LOCATE_BIT_POS((ap->port_no)); 112 + else 113 + val &= ~(1 << LOCATE_BIT_POS((ap->port_no))); 114 + 115 + if (state & FAULT_MASK) 116 + val |= 1 << FAULT_BIT_POS((ap->port_no)); 117 + else 118 + val &= ~(1 << FAULT_BIT_POS((ap->port_no))); 119 + 120 + iowrite32(val, plat_data->sgpio_ctrl); 121 + 122 + spin_lock_irqsave(ap->lock, flags); 123 + 124 + /* save off new led state for port/slot */ 125 + emp->led_state = 
state;
 126 + 
 127 + 	spin_unlock_irqrestore(ap->lock, flags);
 128 + 
 129 + 	return size;
 130 + }
 131 + 
 132 + static const struct ata_port_info *ahci_seattle_get_port_info(
 133 + 		struct platform_device *pdev, struct ahci_host_priv *hpriv)
 134 + {
 135 + 	struct device *dev = &pdev->dev;
 136 + 	struct seattle_plat_data *plat_data;
 137 + 	u32 val;
 138 + 
 139 + 	plat_data = devm_kzalloc(dev, sizeof(*plat_data), GFP_KERNEL);
 140 + 	if (!plat_data)
 141 + 		return &ahci_port_info;
 142 + 
 143 + 	plat_data->sgpio_ctrl = devm_ioremap_resource(dev,
 144 + 			platform_get_resource(pdev, IORESOURCE_MEM, 1));
 145 + 	if (IS_ERR(plat_data->sgpio_ctrl))
 146 + 		return &ahci_port_info;
 147 + 
 148 + 	val = ioread32(plat_data->sgpio_ctrl);
 149 + 
 150 + 	if (!(val & 0xf))
 151 + 		return &ahci_port_info;
 152 + 
 153 + 	hpriv->em_loc = 0;
 154 + 	hpriv->em_buf_sz = 4;
 155 + 	hpriv->em_msg_type = EM_MSG_TYPE_LED;
 156 + 	hpriv->plat_data = plat_data;
 157 + 
 158 + 	dev_info(dev, "SGPIO LED control is enabled.\n");
 159 + 	return &ahci_port_seattle_info;
 160 + }
 161 + 
 162 + static int ahci_seattle_probe(struct platform_device *pdev)
 163 + {
 164 + 	int rc;
 165 + 	struct ahci_host_priv *hpriv;
 166 + 
 167 + 	hpriv = ahci_platform_get_resources(pdev);
 168 + 	if (IS_ERR(hpriv))
 169 + 		return PTR_ERR(hpriv);
 170 + 
 171 + 	rc = ahci_platform_enable_resources(hpriv);
 172 + 	if (rc)
 173 + 		return rc;
 174 + 
 175 + 	rc = ahci_platform_init_host(pdev, hpriv,
 176 + 				     ahci_seattle_get_port_info(pdev, hpriv),
 177 + 				     &ahci_platform_sht);
 178 + 	if (rc)
 179 + 		goto disable_resources;
 180 + 
 181 + 	return 0;
 182 + disable_resources:
 183 + 	ahci_platform_disable_resources(hpriv);
 184 + 	return rc;
 185 + }
 186 + 
 187 + static SIMPLE_DEV_PM_OPS(ahci_pm_ops, ahci_platform_suspend,
 188 + 			 ahci_platform_resume);
 189 + 
 190 + static const struct acpi_device_id ahci_acpi_match[] = {
 191 + 	{ "AMDI0600", 0 },
 192 + 	{}
 193 + };
 194 + MODULE_DEVICE_TABLE(acpi, ahci_acpi_match);
 195 + 
 196 + static struct platform_driver ahci_seattle_driver = {
 197 + 	.probe = 
ahci_seattle_probe, 198 + .remove = ata_platform_remove_one, 199 + .driver = { 200 + .name = DRV_NAME, 201 + .acpi_match_table = ahci_acpi_match, 202 + .pm = &ahci_pm_ops, 203 + }, 204 + }; 205 + module_platform_driver(ahci_seattle_driver); 206 + 207 + MODULE_DESCRIPTION("Seattle AHCI SATA platform driver"); 208 + MODULE_AUTHOR("Brijesh Singh <brijesh.singh@amd.com>"); 209 + MODULE_LICENSE("GPL"); 210 + MODULE_ALIAS("platform:" DRV_NAME);
+1
drivers/ata/libahci.c
··· 507 507 dev_info(dev, "forcing port_map 0x%x -> 0x%x\n", 508 508 port_map, hpriv->force_port_map); 509 509 port_map = hpriv->force_port_map; 510 + hpriv->saved_port_map = port_map; 510 511 } 511 512 512 513 if (hpriv->mask_port_map) {
-3
drivers/base/power/opp/core.c
··· 259 259 reg = opp_table->regulator; 260 260 if (IS_ERR(reg)) { 261 261 /* Regulator may not be required for device */ 262 - if (reg) 263 - dev_err(dev, "%s: Invalid regulator (%ld)\n", __func__, 264 - PTR_ERR(reg)); 265 262 rcu_read_unlock(); 266 263 return 0; 267 264 }
+1 -1
drivers/base/property.c
··· 21 21 22 22 static inline bool is_pset_node(struct fwnode_handle *fwnode) 23 23 { 24 - return fwnode && fwnode->type == FWNODE_PDATA; 24 + return !IS_ERR_OR_NULL(fwnode) && fwnode->type == FWNODE_PDATA; 25 25 } 26 26 27 27 static inline struct property_set *to_pset_node(struct fwnode_handle *fwnode)
+1 -1
drivers/clk/imx/clk-imx6q.c
··· 394 394 clk[IMX6QDL_CLK_LDB_DI1_DIV_3_5] = imx_clk_fixed_factor("ldb_di1_div_3_5", "ldb_di1", 2, 7); 395 395 } else { 396 396 clk[IMX6QDL_CLK_ECSPI_ROOT] = imx_clk_divider("ecspi_root", "pll3_60m", base + 0x38, 19, 6); 397 - clk[IMX6QDL_CLK_CAN_ROOT] = imx_clk_divider("can_root", "pll3_60", base + 0x20, 2, 6); 397 + clk[IMX6QDL_CLK_CAN_ROOT] = imx_clk_divider("can_root", "pll3_60m", base + 0x20, 2, 6); 398 398 clk[IMX6QDL_CLK_IPG_PER] = imx_clk_fixup_divider("ipg_per", "ipg", base + 0x1c, 0, 6, imx_cscmr1_fixup); 399 399 clk[IMX6QDL_CLK_UART_SERIAL_PODF] = imx_clk_divider("uart_serial_podf", "pll3_80m", base + 0x24, 0, 6); 400 400 clk[IMX6QDL_CLK_LDB_DI0_DIV_3_5] = imx_clk_fixed_factor("ldb_di0_div_3_5", "ldb_di0_sel", 2, 7);
+15 -11
drivers/cpufreq/cpufreq.c
··· 1557 1557 if (!cpufreq_driver) 1558 1558 return; 1559 1559 1560 - if (!has_target()) 1560 + if (!has_target() && !cpufreq_driver->suspend) 1561 1561 goto suspend; 1562 1562 1563 1563 pr_debug("%s: Suspending Governors\n", __func__); 1564 1564 1565 1565 for_each_active_policy(policy) { 1566 - down_write(&policy->rwsem); 1567 - ret = cpufreq_governor(policy, CPUFREQ_GOV_STOP); 1568 - up_write(&policy->rwsem); 1566 + if (has_target()) { 1567 + down_write(&policy->rwsem); 1568 + ret = cpufreq_governor(policy, CPUFREQ_GOV_STOP); 1569 + up_write(&policy->rwsem); 1569 1570 1570 - if (ret) 1571 - pr_err("%s: Failed to stop governor for policy: %p\n", 1572 - __func__, policy); 1573 - else if (cpufreq_driver->suspend 1574 - && cpufreq_driver->suspend(policy)) 1571 + if (ret) { 1572 + pr_err("%s: Failed to stop governor for policy: %p\n", 1573 + __func__, policy); 1574 + continue; 1575 + } 1576 + } 1577 + 1578 + if (cpufreq_driver->suspend && cpufreq_driver->suspend(policy)) 1575 1579 pr_err("%s: Failed to suspend driver: %p\n", __func__, 1576 1580 policy); 1577 1581 } ··· 1600 1596 1601 1597 cpufreq_suspended = false; 1602 1598 1603 - if (!has_target()) 1599 + if (!has_target() && !cpufreq_driver->resume) 1604 1600 return; 1605 1601 1606 1602 pr_debug("%s: Resuming Governors\n", __func__); ··· 1609 1605 if (cpufreq_driver->resume && cpufreq_driver->resume(policy)) { 1610 1606 pr_err("%s: Failed to resume driver: %p\n", __func__, 1611 1607 policy); 1612 - } else { 1608 + } else if (has_target()) { 1613 1609 down_write(&policy->rwsem); 1614 1610 ret = cpufreq_start_governor(policy); 1615 1611 up_write(&policy->rwsem);
+18 -8
drivers/cpufreq/intel_pstate.c
··· 453 453 } 454 454 } 455 455 456 + static int intel_pstate_hwp_set_policy(struct cpufreq_policy *policy) 457 + { 458 + if (hwp_active) 459 + intel_pstate_hwp_set(policy->cpus); 460 + 461 + return 0; 462 + } 463 + 456 464 static void intel_pstate_hwp_set_online_cpus(void) 457 465 { 458 466 get_online_cpus(); ··· 1070 1062 1071 1063 static inline int32_t get_avg_frequency(struct cpudata *cpu) 1072 1064 { 1073 - return div64_u64(cpu->pstate.max_pstate_physical * cpu->sample.aperf * 1074 - cpu->pstate.scaling, cpu->sample.mperf); 1065 + return fp_toint(mul_fp(cpu->sample.core_pct_busy, 1066 + int_tofp(cpu->pstate.max_pstate_physical * 1067 + cpu->pstate.scaling / 100))); 1075 1068 } 1076 1069 1077 1070 static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu) ··· 1114 1105 { 1115 1106 int32_t core_busy, max_pstate, current_pstate, sample_ratio; 1116 1107 u64 duration_ns; 1117 - 1118 - intel_pstate_calc_busy(cpu); 1119 1108 1120 1109 /* 1121 1110 * core_busy is the ratio of actual performance to max ··· 1198 1191 if ((s64)delta_ns >= pid_params.sample_rate_ns) { 1199 1192 bool sample_taken = intel_pstate_sample(cpu, time); 1200 1193 1201 - if (sample_taken && !hwp_active) 1202 - intel_pstate_adjust_busy_pstate(cpu); 1194 + if (sample_taken) { 1195 + intel_pstate_calc_busy(cpu); 1196 + if (!hwp_active) 1197 + intel_pstate_adjust_busy_pstate(cpu); 1198 + } 1203 1199 } 1204 1200 } 1205 1201 ··· 1356 1346 out: 1357 1347 intel_pstate_set_update_util_hook(policy->cpu); 1358 1348 1359 - if (hwp_active) 1360 - intel_pstate_hwp_set(policy->cpus); 1349 + intel_pstate_hwp_set_policy(policy); 1361 1350 1362 1351 return 0; 1363 1352 } ··· 1420 1411 .flags = CPUFREQ_CONST_LOOPS, 1421 1412 .verify = intel_pstate_verify_policy, 1422 1413 .setpolicy = intel_pstate_set_policy, 1414 + .resume = intel_pstate_hwp_set_policy, 1423 1415 .get = intel_pstate_get, 1424 1416 .init = intel_pstate_cpu_init, 1425 1417 .stop_cpu = intel_pstate_stop_cpu,
+4
drivers/cpufreq/sti-cpufreq.c
··· 259 259 { 260 260 int ret; 261 261 262 + if ((!of_machine_is_compatible("st,stih407")) && 263 + (!of_machine_is_compatible("st,stih410"))) 264 + return -ENODEV; 265 + 262 266 ddata.cpu = get_cpu_device(0); 263 267 if (!ddata.cpu) { 264 268 dev_err(ddata.cpu, "Failed to get device for CPU0\n");
+1 -1
drivers/cpuidle/cpuidle-arm.c
··· 50 50 * call the CPU ops suspend protocol with idle index as a 51 51 * parameter. 52 52 */ 53 - arm_cpuidle_suspend(idx); 53 + ret = arm_cpuidle_suspend(idx); 54 54 55 55 cpu_pm_exit(); 56 56 }
+11
drivers/crypto/qat/qat_common/adf_common_drv.h
··· 236 236 uint32_t vf_mask); 237 237 void adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev); 238 238 void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev); 239 + int adf_init_pf_wq(void); 240 + void adf_exit_pf_wq(void); 239 241 #else 240 242 static inline int adf_sriov_configure(struct pci_dev *pdev, int numvfs) 241 243 { ··· 253 251 } 254 252 255 253 static inline void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev) 254 + { 255 + } 256 + 257 + static inline int adf_init_pf_wq(void) 258 + { 259 + return 0; 260 + } 261 + 262 + static inline void adf_exit_pf_wq(void) 256 263 { 257 264 } 258 265 #endif
+6
drivers/crypto/qat/qat_common/adf_ctl_drv.c
··· 462 462 if (adf_init_aer()) 463 463 goto err_aer; 464 464 465 + if (adf_init_pf_wq()) 466 + goto err_pf_wq; 467 + 465 468 if (qat_crypto_register()) 466 469 goto err_crypto_register; 467 470 468 471 return 0; 469 472 470 473 err_crypto_register: 474 + adf_exit_pf_wq(); 475 + err_pf_wq: 471 476 adf_exit_aer(); 472 477 err_aer: 473 478 adf_chr_drv_destroy(); ··· 485 480 { 486 481 adf_chr_drv_destroy(); 487 482 adf_exit_aer(); 483 + adf_exit_pf_wq(); 488 484 qat_crypto_unregister(); 489 485 adf_clean_vf_map(false); 490 486 mutex_destroy(&adf_ctl_lock);
+16 -10
drivers/crypto/qat/qat_common/adf_sriov.c
··· 119 119 int i; 120 120 u32 reg; 121 121 122 - /* Workqueue for PF2VF responses */ 123 - pf2vf_resp_wq = create_workqueue("qat_pf2vf_resp_wq"); 124 - if (!pf2vf_resp_wq) 125 - return -ENOMEM; 126 - 127 122 for (i = 0, vf_info = accel_dev->pf.vf_info; i < totalvfs; 128 123 i++, vf_info++) { 129 124 /* This ptr will be populated when VFs will be created */ ··· 211 216 212 217 kfree(accel_dev->pf.vf_info); 213 218 accel_dev->pf.vf_info = NULL; 214 - 215 - if (pf2vf_resp_wq) { 216 - destroy_workqueue(pf2vf_resp_wq); 217 - pf2vf_resp_wq = NULL; 218 - } 219 219 } 220 220 EXPORT_SYMBOL_GPL(adf_disable_sriov); 221 221 ··· 294 304 return numvfs; 295 305 } 296 306 EXPORT_SYMBOL_GPL(adf_sriov_configure); 307 + 308 + int __init adf_init_pf_wq(void) 309 + { 310 + /* Workqueue for PF2VF responses */ 311 + pf2vf_resp_wq = create_workqueue("qat_pf2vf_resp_wq"); 312 + 313 + return !pf2vf_resp_wq ? -ENOMEM : 0; 314 + } 315 + 316 + void adf_exit_pf_wq(void) 317 + { 318 + if (pf2vf_resp_wq) { 319 + destroy_workqueue(pf2vf_resp_wq); 320 + pf2vf_resp_wq = NULL; 321 + } 322 + }
+1 -1
drivers/firmware/qemu_fw_cfg.c
··· 77 77 static inline void fw_cfg_read_blob(u16 key, 78 78 void *buf, loff_t pos, size_t count) 79 79 { 80 - u32 glk; 80 + u32 glk = -1U; 81 81 acpi_status status; 82 82 83 83 /* If we have ACPI, ensure mutual exclusion against any potential
+6 -59
drivers/gpio/gpio-rcar.c
··· 196 196 return 0; 197 197 } 198 198 199 - static void gpio_rcar_irq_bus_lock(struct irq_data *d) 200 - { 201 - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 202 - struct gpio_rcar_priv *p = gpiochip_get_data(gc); 203 - 204 - pm_runtime_get_sync(&p->pdev->dev); 205 - } 206 - 207 - static void gpio_rcar_irq_bus_sync_unlock(struct irq_data *d) 208 - { 209 - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 210 - struct gpio_rcar_priv *p = gpiochip_get_data(gc); 211 - 212 - pm_runtime_put(&p->pdev->dev); 213 - } 214 - 215 - 216 - static int gpio_rcar_irq_request_resources(struct irq_data *d) 217 - { 218 - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 219 - struct gpio_rcar_priv *p = gpiochip_get_data(gc); 220 - int error; 221 - 222 - error = pm_runtime_get_sync(&p->pdev->dev); 223 - if (error < 0) 224 - return error; 225 - 226 - return 0; 227 - } 228 - 229 - static void gpio_rcar_irq_release_resources(struct irq_data *d) 230 - { 231 - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 232 - struct gpio_rcar_priv *p = gpiochip_get_data(gc); 233 - 234 - pm_runtime_put(&p->pdev->dev); 235 - } 236 - 237 199 static irqreturn_t gpio_rcar_irq_handler(int irq, void *dev_id) 238 200 { 239 201 struct gpio_rcar_priv *p = dev_id; ··· 242 280 243 281 static int gpio_rcar_request(struct gpio_chip *chip, unsigned offset) 244 282 { 245 - struct gpio_rcar_priv *p = gpiochip_get_data(chip); 246 - int error; 247 - 248 - error = pm_runtime_get_sync(&p->pdev->dev); 249 - if (error < 0) 250 - return error; 251 - 252 - error = pinctrl_request_gpio(chip->base + offset); 253 - if (error) 254 - pm_runtime_put(&p->pdev->dev); 255 - 256 - return error; 283 + return pinctrl_request_gpio(chip->base + offset); 257 284 } 258 285 259 286 static void gpio_rcar_free(struct gpio_chip *chip, unsigned offset) 260 287 { 261 - struct gpio_rcar_priv *p = gpiochip_get_data(chip); 262 - 263 288 pinctrl_free_gpio(chip->base + offset); 264 289 265 - /* Set the GPIO as an input to ensure 
that the next GPIO request won't 290 + /* 291 + * Set the GPIO as an input to ensure that the next GPIO request won't 266 292 * drive the GPIO pin as an output. 267 293 */ 268 294 gpio_rcar_config_general_input_output_mode(chip, offset, false); 269 - 270 - pm_runtime_put(&p->pdev->dev); 271 295 } 272 296 273 297 static int gpio_rcar_direction_input(struct gpio_chip *chip, unsigned offset) ··· 400 452 } 401 453 402 454 pm_runtime_enable(dev); 455 + pm_runtime_get_sync(dev); 403 456 404 457 io = platform_get_resource(pdev, IORESOURCE_MEM, 0); 405 458 irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0); ··· 437 488 irq_chip->irq_unmask = gpio_rcar_irq_enable; 438 489 irq_chip->irq_set_type = gpio_rcar_irq_set_type; 439 490 irq_chip->irq_set_wake = gpio_rcar_irq_set_wake; 440 - irq_chip->irq_bus_lock = gpio_rcar_irq_bus_lock; 441 - irq_chip->irq_bus_sync_unlock = gpio_rcar_irq_bus_sync_unlock; 442 - irq_chip->irq_request_resources = gpio_rcar_irq_request_resources; 443 - irq_chip->irq_release_resources = gpio_rcar_irq_release_resources; 444 491 irq_chip->flags = IRQCHIP_SET_TYPE_MASKED | IRQCHIP_MASK_ON_SUSPEND; 445 492 446 493 ret = gpiochip_add_data(gpio_chip, p); ··· 467 522 err1: 468 523 gpiochip_remove(gpio_chip); 469 524 err0: 525 + pm_runtime_put(dev); 470 526 pm_runtime_disable(dev); 471 527 return ret; 472 528 } ··· 478 532 479 533 gpiochip_remove(&p->gpio_chip); 480 534 535 + pm_runtime_put(&pdev->dev); 481 536 pm_runtime_disable(&pdev->dev); 482 537 return 0; 483 538 }
+1 -1
drivers/gpio/gpiolib-acpi.c
··· 977 977 lookup = kmalloc(sizeof(*lookup), GFP_KERNEL); 978 978 if (lookup) { 979 979 lookup->adev = adev; 980 - lookup->con_id = con_id; 980 + lookup->con_id = kstrdup(con_id, GFP_KERNEL); 981 981 list_add_tail(&lookup->node, &acpi_crs_lookup_list); 982 982 } 983 983 }
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 541 541 if (!metadata_size) { 542 542 if (bo->metadata_size) { 543 543 kfree(bo->metadata); 544 + bo->metadata = NULL; 544 545 bo->metadata_size = 0; 545 546 } 546 547 return 0;
+4
drivers/gpu/drm/amd/amdgpu/atombios_encoders.c
··· 298 298 && (mode->crtc_vsync_start < (mode->crtc_vdisplay + 2))) 299 299 adjusted_mode->crtc_vsync_start = adjusted_mode->crtc_vdisplay + 2; 300 300 301 + /* vertical FP must be at least 1 */ 302 + if (mode->crtc_vsync_start == mode->crtc_vdisplay) 303 + adjusted_mode->crtc_vsync_start++; 304 + 301 305 /* get the native mode for scaling */ 302 306 if (amdgpu_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT)) 303 307 amdgpu_panel_mode_fixup(encoder, adjusted_mode);
+31 -1
drivers/gpu/drm/i915/i915_drv.c
··· 792 792 static int i915_drm_resume_early(struct drm_device *dev)
 793 793 {
 794 794 	struct drm_i915_private *dev_priv = dev->dev_private;
 795 	- 	int ret = 0;
 795 + 	int ret;
 796 796 
 797 797 	/*
 798 798 	 * We have a resume ordering issue with the snd-hda driver also
··· 802 802 	 *
 803 803 	 * FIXME: This should be solved with a special hdmi sink device or
 804 804 	 * similar so that power domains can be employed.
 805 + 	 */
 806 + 
 807 + 	/*
 808 + 	 * Note that we need to set the power state explicitly, since we
 809 + 	 * powered off the device during freeze and the PCI core won't power
 810 + 	 * it back up for us during thaw. Powering off the device during
 811 + 	 * freeze is not a hard requirement though, and during the
 812 + 	 * suspend/resume phases the PCI core makes sure we get here with the
 813 + 	 * device powered on. So in case we change our freeze logic and keep
 814 + 	 * the device powered we can also remove the following set power state
 815 + 	 * call.
 816 + 	 */
 817 + 	ret = pci_set_power_state(dev->pdev, PCI_D0);
 818 + 	if (ret) {
 819 + 		DRM_ERROR("failed to set PCI D0 power state (%d)\n", ret);
 820 + 		goto out;
 821 + 	}
 822 + 
 823 + 	/*
 824 + 	 * Note that pci_enable_device() first enables any parent bridge
 825 + 	 * device and only then sets the power state for this device. The
 826 + 	 * bridge enabling is a nop though, since bridge devices are resumed
 827 + 	 * first. The order of enabling power and enabling the device is
 828 + 	 * imposed by the PCI core as described above, so here we preserve the
 829 + 	 * same order for the freeze/thaw phases.
 830 + 	 *
 831 + 	 * TODO: eventually we should remove pci_disable_device() /
 832 + 	 * pci_enable_device() from suspend/resume. Due to how they
 833 + 	 * depend on the device enable refcount we can't anyway depend on them
 834 + 	 * disabling/enabling the device.
 835 + 	 */
 805 835 	if (pci_enable_device(dev->pdev)) {
 806 836 		ret = -EIO;
+8 -1
drivers/gpu/drm/i915/i915_reg.h
··· 2907 2907 #define GEN6_RP_STATE_CAP _MMIO(MCHBAR_MIRROR_BASE_SNB + 0x5998) 2908 2908 #define BXT_RP_STATE_CAP _MMIO(0x138170) 2909 2909 2910 - #define INTERVAL_1_28_US(us) (((us) * 100) >> 7) 2910 + /* 2911 + * Make these a multiple of magic 25 to avoid SNB (eg. Dell XPS 2912 + * 8300) freezing up around GPU hangs. Looks as if even 2913 + * scheduling/timer interrupts start misbehaving if the RPS 2914 + * EI/thresholds are "bad", leading to a very sluggish or even 2915 + * frozen machine. 2916 + */ 2917 + #define INTERVAL_1_28_US(us) roundup(((us) * 100) >> 7, 25) 2911 2918 #define INTERVAL_1_33_US(us) (((us) * 3) >> 2) 2912 2919 #define INTERVAL_0_833_US(us) (((us) * 6) / 5) 2913 2920 #define GT_INTERVAL_FROM_US(dev_priv, us) (IS_GEN9(dev_priv) ? \
+13 -9
drivers/gpu/drm/i915/intel_ddi.c
··· 443 443 } else if (IS_BROADWELL(dev_priv)) { 444 444 ddi_translations_fdi = bdw_ddi_translations_fdi; 445 445 ddi_translations_dp = bdw_ddi_translations_dp; 446 - ddi_translations_edp = bdw_ddi_translations_edp; 446 + 447 + if (dev_priv->edp_low_vswing) { 448 + ddi_translations_edp = bdw_ddi_translations_edp; 449 + n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp); 450 + } else { 451 + ddi_translations_edp = bdw_ddi_translations_dp; 452 + n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_dp); 453 + } 454 + 447 455 ddi_translations_hdmi = bdw_ddi_translations_hdmi; 448 - n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp); 456 + 449 457 n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp); 450 458 n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi); 451 459 hdmi_default_entry = 7; ··· 3209 3201 intel_ddi_clock_get(encoder, pipe_config); 3210 3202 } 3211 3203 3212 - static void intel_ddi_destroy(struct drm_encoder *encoder) 3213 - { 3214 - /* HDMI has nothing special to destroy, so we can go with this. */ 3215 - intel_dp_encoder_destroy(encoder); 3216 - } 3217 - 3218 3204 static bool intel_ddi_compute_config(struct intel_encoder *encoder, 3219 3205 struct intel_crtc_state *pipe_config) 3220 3206 { ··· 3227 3225 } 3228 3226 3229 3227 static const struct drm_encoder_funcs intel_ddi_funcs = { 3230 - .destroy = intel_ddi_destroy, 3228 + .reset = intel_dp_encoder_reset, 3229 + .destroy = intel_dp_encoder_destroy, 3231 3230 }; 3232 3231 3233 3232 static struct intel_connector * ··· 3327 3324 intel_encoder->post_disable = intel_ddi_post_disable; 3328 3325 intel_encoder->get_hw_state = intel_ddi_get_hw_state; 3329 3326 intel_encoder->get_config = intel_ddi_get_config; 3327 + intel_encoder->suspend = intel_dp_encoder_suspend; 3330 3328 3331 3329 intel_dig_port->port = port; 3332 3330 intel_dig_port->saved_port_bits = I915_READ(DDI_BUF_CTL(port)) &
+3
drivers/gpu/drm/i915/intel_display.c
··· 13351 13351 } 13352 13352 13353 13353 for_each_crtc_in_state(state, crtc, crtc_state, i) { 13354 + if (state->legacy_cursor_update) 13355 + continue; 13356 + 13354 13357 ret = intel_crtc_wait_for_pending_flips(crtc); 13355 13358 if (ret) 13356 13359 return ret;
+2 -2
drivers/gpu/drm/i915/intel_dp.c
··· 4898 4898 kfree(intel_dig_port); 4899 4899 } 4900 4900 4901 - static void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder) 4901 + void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder) 4902 4902 { 4903 4903 struct intel_dp *intel_dp = enc_to_intel_dp(&intel_encoder->base); 4904 4904 ··· 4940 4940 edp_panel_vdd_schedule_off(intel_dp); 4941 4941 } 4942 4942 4943 - static void intel_dp_encoder_reset(struct drm_encoder *encoder) 4943 + void intel_dp_encoder_reset(struct drm_encoder *encoder) 4944 4944 { 4945 4945 struct intel_dp *intel_dp; 4946 4946
+2
drivers/gpu/drm/i915/intel_drv.h
··· 1238 1238 void intel_dp_start_link_train(struct intel_dp *intel_dp); 1239 1239 void intel_dp_stop_link_train(struct intel_dp *intel_dp); 1240 1240 void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode); 1241 + void intel_dp_encoder_reset(struct drm_encoder *encoder); 1242 + void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder); 1241 1243 void intel_dp_encoder_destroy(struct drm_encoder *encoder); 1242 1244 int intel_dp_sink_crc(struct intel_dp *intel_dp, u8 *crc); 1243 1245 bool intel_dp_compute_config(struct intel_encoder *encoder,
+10 -2
drivers/gpu/drm/i915/intel_hdmi.c
··· 1415 1415 				hdmi_to_dig_port(intel_hdmi));
 1416 1416 	}
 1417 1417 
 1418 	- 	if (!live_status)
 1419 	- 		DRM_DEBUG_KMS("Live status not up!");
 1418 + 	if (!live_status) {
 1419 + 		DRM_DEBUG_KMS("HDMI live status down\n");
 1420 + 		/*
 1421 + 		 * The live status register is not reliable on all Intel
 1422 + 		 * platforms, so trust it only on certain platforms; for
 1423 + 		 * others, read the EDID to determine presence of a sink.
 1424 + 		 */
 1425 + 		if (INTEL_INFO(dev_priv)->gen < 7 || IS_IVYBRIDGE(dev_priv))
 1426 + 			live_status = true;
 1427 + 	}
 1420 1428 
 1421 1429 	intel_hdmi_unset_edid(connector);
 1422 1430 
+4
drivers/gpu/drm/radeon/atombios_encoders.c
··· 310 310 && (mode->crtc_vsync_start < (mode->crtc_vdisplay + 2))) 311 311 adjusted_mode->crtc_vsync_start = adjusted_mode->crtc_vdisplay + 2; 312 312 313 + /* vertical FP must be at least 1 */ 314 + if (mode->crtc_vsync_start == mode->crtc_vdisplay) 315 + adjusted_mode->crtc_vsync_start++; 316 + 313 317 /* get the native mode for scaling */ 314 318 if (radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT)) { 315 319 radeon_panel_mode_fixup(encoder, adjusted_mode);
+6 -1
drivers/gpu/ipu-v3/ipu-common.c
··· 1068 1068 goto err_register; 1069 1069 } 1070 1070 1071 - pdev->dev.of_node = of_node; 1072 1071 pdev->dev.parent = dev; 1073 1072 1074 1073 ret = platform_device_add_data(pdev, &reg->pdata, ··· 1078 1079 platform_device_put(pdev); 1079 1080 goto err_register; 1080 1081 } 1082 + 1083 + /* 1084 + * Set of_node only after calling platform_device_add. Otherwise 1085 + * the platform:imx-ipuv3-crtc modalias won't be used. 1086 + */ 1087 + pdev->dev.of_node = of_node; 1081 1088 } 1082 1089 1083 1090 return 0;
+1
drivers/hid/hid-ids.h
··· 259 259 #define USB_DEVICE_ID_CORSAIR_K90 0x1b02 260 260 261 261 #define USB_VENDOR_ID_CREATIVELABS 0x041e 262 + #define USB_DEVICE_ID_CREATIVE_SB_OMNI_SURROUND_51 0x322c 262 263 #define USB_DEVICE_ID_PRODIKEYS_PCMIDI 0x2801 263 264 264 265 #define USB_VENDOR_ID_CVTOUCH 0x1ff7
+1
drivers/hid/usbhid/hid-quirks.c
··· 71 71 { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_3AXIS_5BUTTON_STICK, HID_QUIRK_NOGET }, 72 72 { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_AXIS_295, HID_QUIRK_NOGET }, 73 73 { USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL }, 74 + { USB_VENDOR_ID_CREATIVELABS, USB_DEVICE_ID_CREATIVE_SB_OMNI_SURROUND_51, HID_QUIRK_NOGET }, 74 75 { USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC, HID_QUIRK_NOGET }, 75 76 { USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_WIIU, HID_QUIRK_MULTI_INPUT }, 76 77 { USB_VENDOR_ID_ELAN, HID_ANY_ID, HID_QUIRK_ALWAYS_POLL },
+6
drivers/hid/wacom_wac.c
··· 684 684 685 685 wacom->tool[idx] = wacom_intuos_get_tool_type(wacom->id[idx]); 686 686 687 + wacom->shared->stylus_in_proximity = true; 687 688 return 1; 688 689 } 689 690 ··· 3396 3395 { "Wacom Intuos PT M 2", 21600, 13500, 2047, 63, 3397 3396 INTUOSHT2, WACOM_INTUOS_RES, WACOM_INTUOS_RES, .touch_max = 16, 3398 3397 .check_for_hid_type = true, .hid_type = HID_TYPE_USBNONE }; 3398 + static const struct wacom_features wacom_features_0x343 = 3399 + { "Wacom DTK1651", 34616, 19559, 1023, 0, 3400 + DTUS, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4, 3401 + WACOM_DTU_OFFSET, WACOM_DTU_OFFSET }; 3399 3402 3400 3403 static const struct wacom_features wacom_features_HID_ANY_ID = 3401 3404 { "Wacom HID", .type = HID_GENERIC }; ··· 3565 3560 { USB_DEVICE_WACOM(0x33C) }, 3566 3561 { USB_DEVICE_WACOM(0x33D) }, 3567 3562 { USB_DEVICE_WACOM(0x33E) }, 3563 + { USB_DEVICE_WACOM(0x343) }, 3568 3564 { USB_DEVICE_WACOM(0x4001) }, 3569 3565 { USB_DEVICE_WACOM(0x4004) }, 3570 3566 { USB_DEVICE_WACOM(0x5000) },
+20 -6
drivers/hv/ring_buffer.c
··· 103 103 	 * there is room for the producer to send the pending packet.
 104 104 	 */
 105 105 
 106 	- static bool hv_need_to_signal_on_read(u32 prev_write_sz,
 107 	- 				      struct hv_ring_buffer_info *rbi)
 106 + static bool hv_need_to_signal_on_read(struct hv_ring_buffer_info *rbi)
 108 107 {
 109 108 	u32 cur_write_sz;
 110 109 	u32 r_size;
 111 	- 	u32 write_loc = rbi->ring_buffer->write_index;
 110 + 	u32 write_loc;
 112 111 	u32 read_loc = rbi->ring_buffer->read_index;
 113 	- 	u32 pending_sz = rbi->ring_buffer->pending_send_sz;
 112 + 	u32 pending_sz;
 114 113 
 114 + 	/*
 115 + 	 * Issue a full memory barrier before making the signaling decision.
 116 + 	 * Here is the reason for having this barrier:
 117 + 	 * If the read of pending_sz (in this function)
 118 + 	 * were to be reordered before we commit the new read
 119 + 	 * index (in the calling function), we could
 120 + 	 * have a problem. If the host were to set pending_sz after we
 121 + 	 * have sampled it and we were to go to sleep before we commit the
 122 + 	 * read index, we could miss sending the interrupt. Issue a full
 123 + 	 * memory barrier to address this.
 124 + 	 */
 125 + 	mb();
 126 + 
 127 + 	pending_sz = rbi->ring_buffer->pending_send_sz;
 128 + 	write_loc = rbi->ring_buffer->write_index;
 115 129 	/* If the other end is not blocked on write don't bother. */
 116 130 	if (pending_sz == 0)
 117 131 		return false;
··· 134 120 	cur_write_sz = write_loc >= read_loc ? r_size - (write_loc - read_loc) :
 135 121 		read_loc - write_loc;
 136 122 
 137 	- 	if ((prev_write_sz < pending_sz) && (cur_write_sz >= pending_sz))
 123 + 	if (cur_write_sz >= pending_sz)
 138 124 		return true;
 139 125 
 140 126 	return false;
··· 469 455 	/* Update the read index */
 470 456 	hv_set_next_read_location(inring_info, next_read_location);
 471 457 
 472 	- 	*signal = hv_need_to_signal_on_read(bytes_avail_towrite, inring_info);
 458 + 	*signal = hv_need_to_signal_on_read(inring_info);
 473 459 
 474 460 	return ret;
 475 461 }
+2
drivers/iio/adc/at91-sama5d2_adc.c
··· 451 451 if (ret) 452 452 goto vref_disable; 453 453 454 + platform_set_drvdata(pdev, indio_dev); 455 + 454 456 ret = iio_device_register(indio_dev); 455 457 if (ret < 0) 456 458 goto per_clk_disable_unprepare;
+27 -3
drivers/iio/imu/inv_mpu6050/inv_mpu_i2c.c
··· 104 104 return 0; 105 105 } 106 106 107 + static const char *inv_mpu_match_acpi_device(struct device *dev, int *chip_id) 108 + { 109 + const struct acpi_device_id *id; 110 + 111 + id = acpi_match_device(dev->driver->acpi_match_table, dev); 112 + if (!id) 113 + return NULL; 114 + 115 + *chip_id = (int)id->driver_data; 116 + 117 + return dev_name(dev); 118 + } 119 + 107 120 /** 108 121 * inv_mpu_probe() - probe function. 109 122 * @client: i2c client. ··· 128 115 const struct i2c_device_id *id) 129 116 { 130 117 struct inv_mpu6050_state *st; 131 - int result; 132 - const char *name = id ? id->name : NULL; 118 + int result, chip_type; 133 119 struct regmap *regmap; 120 + const char *name; 134 121 135 122 if (!i2c_check_functionality(client->adapter, 136 123 I2C_FUNC_SMBUS_I2C_BLOCK)) 137 124 return -EOPNOTSUPP; 125 + 126 + if (id) { 127 + chip_type = (int)id->driver_data; 128 + name = id->name; 129 + } else if (ACPI_HANDLE(&client->dev)) { 130 + name = inv_mpu_match_acpi_device(&client->dev, &chip_type); 131 + if (!name) 132 + return -ENODEV; 133 + } else { 134 + return -ENOSYS; 135 + } 138 136 139 137 regmap = devm_regmap_init_i2c(client, &inv_mpu_regmap_config); 140 138 if (IS_ERR(regmap)) { ··· 155 131 } 156 132 157 133 result = inv_mpu_core_probe(regmap, client->irq, name, 158 - NULL, id->driver_data); 134 + NULL, chip_type); 159 135 if (result < 0) 160 136 return result; 161 137
+2 -1
drivers/iio/imu/inv_mpu6050/inv_mpu_spi.c
··· 46 46 struct regmap *regmap; 47 47 const struct spi_device_id *id = spi_get_device_id(spi); 48 48 const char *name = id ? id->name : NULL; 49 + const int chip_type = id ? id->driver_data : 0; 49 50 50 51 regmap = devm_regmap_init_spi(spi, &inv_mpu_regmap_config); 51 52 if (IS_ERR(regmap)) { ··· 56 55 } 57 56 58 57 return inv_mpu_core_probe(regmap, spi->irq, name, 59 - inv_mpu_i2c_disable, id->driver_data); 58 + inv_mpu_i2c_disable, chip_type); 60 59 } 61 60 62 61 static int inv_mpu_remove(struct spi_device *spi)
+3 -3
drivers/iio/magnetometer/ak8975.c
··· 462 462 int rc; 463 463 int irq; 464 464 465 + init_waitqueue_head(&data->data_ready_queue); 466 + clear_bit(0, &data->flags); 465 467 if (client->irq) 466 468 irq = client->irq; 467 469 else ··· 479 477 return rc; 480 478 } 481 479 482 - init_waitqueue_head(&data->data_ready_queue); 483 - clear_bit(0, &data->flags); 484 480 data->eoc_irq = irq; 485 481 486 482 return rc; ··· 732 732 int eoc_gpio; 733 733 int err; 734 734 const char *name = NULL; 735 - enum asahi_compass_chipset chipset; 735 + enum asahi_compass_chipset chipset = AK_MAX_TYPE; 736 736 737 737 /* Grab and set up the supplied GPIO. */ 738 738 if (client->dev.platform_data)
+10 -4
drivers/infiniband/ulp/iser/iscsi_iser.c
··· 612 612 	struct Scsi_Host *shost;
 613 613 	struct iser_conn *iser_conn = NULL;
 614 614 	struct ib_conn *ib_conn;
 615 + 	u32 max_fr_sectors;
 615 616 	u16 max_cmds;
 616 617 
 617 618 	shost = iscsi_host_alloc(&iscsi_iser_sht, 0, 0);
··· 633 632 	iser_conn = ep->dd_data;
 634 633 	max_cmds = iser_conn->max_cmds;
 635 634 	shost->sg_tablesize = iser_conn->scsi_sg_tablesize;
 636 	- 	shost->max_sectors = iser_conn->scsi_max_sectors;
 637 635 
 638 636 	mutex_lock(&iser_conn->state_mutex);
 639 637 	if (iser_conn->state != ISER_CONN_UP) {
··· 657 657 	 */
 658 658 	shost->sg_tablesize = min_t(unsigned short, shost->sg_tablesize,
 659 659 		ib_conn->device->ib_device->attrs.max_fast_reg_page_list_len);
 660 	- 	shost->max_sectors = min_t(unsigned int,
 661 	- 		1024, (shost->sg_tablesize * PAGE_SIZE) >> 9);
 662 660 
 663 661 	if (iscsi_host_add(shost,
 664 662 			ib_conn->device->ib_device->dma_device)) {
··· 669 671 	if (iscsi_host_add(shost, NULL))
 670 672 		goto free_host;
 671 673 	}
 674 + 
 675 + 	/*
 676 + 	 * FRs or FMRs can only map up to a (device) page per entry, but if the
 677 + 	 * first entry is misaligned we'll end up using two entries
 678 + 	 * (head and tail) for a single page worth of data, so we have to drop
 679 + 	 * one segment from the calculation.
 680 + 	 */
 681 + 	max_fr_sectors = ((shost->sg_tablesize - 1) * PAGE_SIZE) >> 9;
 682 + 	shost->max_sectors = min(iser_max_sectors, max_fr_sectors);
 672 684 
 673 685 	if (cmds_max > max_cmds) {
··· 996 989 	.queuecommand = iscsi_queuecommand,
 997 990 	.change_queue_depth = scsi_change_queue_depth,
 998 991 	.sg_tablesize = ISCSI_ISER_DEF_SG_TABLESIZE,
 999 	- 	.max_sectors = ISER_DEF_MAX_SECTORS,
 1000 992 	.cmd_per_lun = ISER_DEF_CMD_PER_LUN,
 1001 993 	.eh_abort_handler = iscsi_eh_abort,
 1002 994 	.eh_device_reset_handler= iscsi_eh_device_reset,
+8 -8
drivers/input/misc/twl6040-vibra.c
··· 181 181 { 182 182 struct vibra_info *info = container_of(work, 183 183 struct vibra_info, play_work); 184 + int ret; 185 + 186 + /* Do not allow effect, while the routing is set to use audio */ 187 + ret = twl6040_get_vibralr_status(info->twl6040); 188 + if (ret & TWL6040_VIBSEL) { 189 + dev_info(info->dev, "Vibra is configured for audio\n"); 190 + return; 191 + } 184 192 185 193 mutex_lock(&info->mutex); 186 194 ··· 207 199 struct ff_effect *effect) 208 200 { 209 201 struct vibra_info *info = input_get_drvdata(input); 210 - int ret; 211 - 212 - /* Do not allow effect, while the routing is set to use audio */ 213 - ret = twl6040_get_vibralr_status(info->twl6040); 214 - if (ret & TWL6040_VIBSEL) { 215 - dev_info(&input->dev, "Vibra is configured for audio\n"); 216 - return -EBUSY; 217 - } 218 202 219 203 info->weak_speed = effect->u.rumble.weak_magnitude; 220 204 info->strong_speed = effect->u.rumble.strong_magnitude;
+14 -14
drivers/input/touchscreen/atmel_mxt_ts.c
··· 1093 1093 return 0; 1094 1094 } 1095 1095 1096 + static int mxt_acquire_irq(struct mxt_data *data) 1097 + { 1098 + int error; 1099 + 1100 + enable_irq(data->irq); 1101 + 1102 + error = mxt_process_messages_until_invalid(data); 1103 + if (error) 1104 + return error; 1105 + 1106 + return 0; 1107 + } 1108 + 1096 1109 static int mxt_soft_reset(struct mxt_data *data) 1097 1110 { 1098 1111 struct device *dev = &data->client->dev; ··· 1124 1111 /* Ignore CHG line for 100ms after reset */ 1125 1112 msleep(100); 1126 1113 1127 - enable_irq(data->irq); 1114 + mxt_acquire_irq(data); 1128 1115 1129 1116 ret = mxt_wait_for_completion(data, &data->reset_completion, 1130 1117 MXT_RESET_TIMEOUT); ··· 1477 1464 release_mem: 1478 1465 kfree(config_mem); 1479 1466 return ret; 1480 - } 1481 - 1482 - static int mxt_acquire_irq(struct mxt_data *data) 1483 - { 1484 - int error; 1485 - 1486 - enable_irq(data->irq); 1487 - 1488 - error = mxt_process_messages_until_invalid(data); 1489 - if (error) 1490 - return error; 1491 - 1492 - return 0; 1493 1467 } 1494 1468 1495 1469 static int mxt_get_info(struct mxt_data *data)
+2 -2
drivers/input/touchscreen/zforce_ts.c
··· 370 370 point.coord_x = point.coord_y = 0; 371 371 } 372 372 373 - point.state = payload[9 * i + 5] & 0x03; 374 - point.id = (payload[9 * i + 5] & 0xfc) >> 2; 373 + point.state = payload[9 * i + 5] & 0x0f; 374 + point.id = (payload[9 * i + 5] & 0xf0) >> 4; 375 375 376 376 /* determine touch major, minor and orientation */ 377 377 point.area_major = max(payload[9 * i + 6],
+2
drivers/md/md.c
··· 284 284 * go away inside make_request 285 285 */ 286 286 sectors = bio_sectors(bio); 287 + /* bio could be mergeable after passing to the underlying layer */ 288 + bio->bi_rw &= ~REQ_NOMERGE; 287 289 mddev->pers->make_request(mddev, bio); 288 290 289 291 cpu = part_stat_lock();
+1 -1
drivers/md/raid0.c
··· 70 70 (unsigned long long)zone_size>>1); 71 71 zone_start = conf->strip_zone[j].zone_end; 72 72 } 73 - printk(KERN_INFO "\n"); 74 73 } 75 74 76 75 static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf) ··· 84 85 struct r0conf *conf = kzalloc(sizeof(*conf), GFP_KERNEL); 85 86 unsigned short blksize = 512; 86 87 88 + *private_conf = ERR_PTR(-ENOMEM); 87 89 if (!conf) 88 90 return -ENOMEM; 89 91 rdev_for_each(rdev1, mddev) {
-2
drivers/md/raid5.c
··· 3502 3502 dev = &sh->dev[i]; 3503 3503 } else if (test_bit(R5_Discard, &dev->flags)) 3504 3504 discard_pending = 1; 3505 - WARN_ON(test_bit(R5_SkipCopy, &dev->flags)); 3506 - WARN_ON(dev->page != dev->orig_page); 3507 3505 } 3508 3506 3509 3507 r5l_stripe_write_finished(sh);
+4 -4
drivers/media/media-device.c
··· 846 846 } 847 847 EXPORT_SYMBOL_GPL(media_device_find_devres); 848 848 849 + #if IS_ENABLED(CONFIG_PCI) 849 850 void media_device_pci_init(struct media_device *mdev, 850 851 struct pci_dev *pci_dev, 851 852 const char *name) 852 853 { 853 - #ifdef CONFIG_PCI 854 854 mdev->dev = &pci_dev->dev; 855 855 856 856 if (name) ··· 866 866 mdev->driver_version = LINUX_VERSION_CODE; 867 867 868 868 media_device_init(mdev); 869 - #endif 870 869 } 871 870 EXPORT_SYMBOL_GPL(media_device_pci_init); 871 + #endif 872 872 873 + #if IS_ENABLED(CONFIG_USB) 873 874 void __media_device_usb_init(struct media_device *mdev, 874 875 struct usb_device *udev, 875 876 const char *board_name, 876 877 const char *driver_name) 877 878 { 878 - #ifdef CONFIG_USB 879 879 mdev->dev = &udev->dev; 880 880 881 881 if (driver_name) ··· 895 895 mdev->driver_version = LINUX_VERSION_CODE; 896 896 897 897 media_device_init(mdev); 898 - #endif 899 898 } 900 899 EXPORT_SYMBOL_GPL(__media_device_usb_init); 900 + #endif 901 901 902 902 903 903 #endif /* CONFIG_MEDIA_CONTROLLER */
+2 -11
drivers/media/platform/exynos4-is/media-dev.c
··· 1446 1446 1447 1447 platform_set_drvdata(pdev, fmd); 1448 1448 1449 - /* Protect the media graph while we're registering entities */ 1450 - mutex_lock(&fmd->media_dev.graph_mutex); 1451 - 1452 1449 ret = fimc_md_register_platform_entities(fmd, dev->of_node); 1453 - if (ret) { 1454 - mutex_unlock(&fmd->media_dev.graph_mutex); 1450 + if (ret) 1455 1451 goto err_clk; 1456 - } 1457 1452 1458 1453 ret = fimc_md_register_sensor_entities(fmd); 1459 - if (ret) { 1460 - mutex_unlock(&fmd->media_dev.graph_mutex); 1454 + if (ret) 1461 1455 goto err_m_ent; 1462 - } 1463 - 1464 - mutex_unlock(&fmd->media_dev.graph_mutex); 1465 1456 1466 1457 ret = device_create_file(&pdev->dev, &dev_attr_subdev_conf_mode); 1467 1458 if (ret)
+3 -9
drivers/media/platform/s3c-camif/camif-core.c
··· 493 493 if (ret < 0) 494 494 goto err_sens; 495 495 496 - mutex_lock(&camif->media_dev.graph_mutex); 497 - 498 496 ret = v4l2_device_register_subdev_nodes(&camif->v4l2_dev); 499 497 if (ret < 0) 500 - goto err_unlock; 498 + goto err_sens; 501 499 502 500 ret = camif_register_video_nodes(camif); 503 501 if (ret < 0) 504 - goto err_unlock; 502 + goto err_sens; 505 503 506 504 ret = camif_create_media_links(camif); 507 505 if (ret < 0) 508 - goto err_unlock; 509 - 510 - mutex_unlock(&camif->media_dev.graph_mutex); 506 + goto err_sens; 511 507 512 508 ret = media_device_register(&camif->media_dev); 513 509 if (ret < 0) ··· 512 516 pm_runtime_put(dev); 513 517 return 0; 514 518 515 - err_unlock: 516 - mutex_unlock(&camif->media_dev.graph_mutex); 517 519 err_sens: 518 520 v4l2_device_unregister(&camif->v4l2_dev); 519 521 media_device_unregister(&camif->media_dev);
+5
drivers/misc/mic/vop/vop_vringh.c
··· 945 945 ret = -EFAULT; 946 946 goto free_ret; 947 947 } 948 + /* Ensure desc has not changed between the two reads */ 949 + if (memcmp(&dd, dd_config, sizeof(dd))) { 950 + ret = -EINVAL; 951 + goto free_ret; 952 + } 948 953 mutex_lock(&vdev->vdev_mutex); 949 954 mutex_lock(&vi->vop_mutex); 950 955 ret = vop_virtio_add_device(vdev, dd_config);
+1 -1
drivers/net/dsa/mv88e6xxx.c
··· 2181 2181 struct net_device *bridge) 2182 2182 { 2183 2183 struct mv88e6xxx_priv_state *ps = ds_to_priv(ds); 2184 - int i, err; 2184 + int i, err = 0; 2185 2185 2186 2186 mutex_lock(&ps->smi_mutex); 2187 2187
+4 -3
drivers/net/ethernet/apm/xgene/xgene_enet_main.c
··· 1595 1595 1596 1596 ret = xgene_enet_init_hw(pdata); 1597 1597 if (ret) 1598 - goto err; 1598 + goto err_netdev; 1599 1599 1600 1600 mac_ops = pdata->mac_ops; 1601 1601 if (pdata->phy_mode == PHY_INTERFACE_MODE_RGMII) { 1602 1602 ret = xgene_enet_mdio_config(pdata); 1603 1603 if (ret) 1604 - goto err; 1604 + goto err_netdev; 1605 1605 } else { 1606 1606 INIT_DELAYED_WORK(&pdata->link_work, mac_ops->link_state); 1607 1607 } 1608 1608 1609 1609 xgene_enet_napi_add(pdata); 1610 1610 return 0; 1611 - err: 1611 + err_netdev: 1612 1612 unregister_netdev(ndev); 1613 + err: 1613 1614 free_netdev(ndev); 1614 1615 return ret; 1615 1616 }
+58 -18
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 581 581 struct page *page; 582 582 dma_addr_t mapping; 583 583 u16 sw_prod = rxr->rx_sw_agg_prod; 584 + unsigned int offset = 0; 584 585 585 - page = alloc_page(gfp); 586 - if (!page) 587 - return -ENOMEM; 586 + if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) { 587 + page = rxr->rx_page; 588 + if (!page) { 589 + page = alloc_page(gfp); 590 + if (!page) 591 + return -ENOMEM; 592 + rxr->rx_page = page; 593 + rxr->rx_page_offset = 0; 594 + } 595 + offset = rxr->rx_page_offset; 596 + rxr->rx_page_offset += BNXT_RX_PAGE_SIZE; 597 + if (rxr->rx_page_offset == PAGE_SIZE) 598 + rxr->rx_page = NULL; 599 + else 600 + get_page(page); 601 + } else { 602 + page = alloc_page(gfp); 603 + if (!page) 604 + return -ENOMEM; 605 + } 588 606 589 - mapping = dma_map_page(&pdev->dev, page, 0, PAGE_SIZE, 607 + mapping = dma_map_page(&pdev->dev, page, offset, BNXT_RX_PAGE_SIZE, 590 608 PCI_DMA_FROMDEVICE); 591 609 if (dma_mapping_error(&pdev->dev, mapping)) { 592 610 __free_page(page); ··· 619 601 rxr->rx_sw_agg_prod = NEXT_RX_AGG(sw_prod); 620 602 621 603 rx_agg_buf->page = page; 604 + rx_agg_buf->offset = offset; 622 605 rx_agg_buf->mapping = mapping; 623 606 rxbd->rx_bd_haddr = cpu_to_le64(mapping); 624 607 rxbd->rx_bd_opaque = sw_prod; ··· 661 642 page = cons_rx_buf->page; 662 643 cons_rx_buf->page = NULL; 663 644 prod_rx_buf->page = page; 645 + prod_rx_buf->offset = cons_rx_buf->offset; 664 646 665 647 prod_rx_buf->mapping = cons_rx_buf->mapping; 666 648 ··· 729 709 RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT; 730 710 731 711 cons_rx_buf = &rxr->rx_agg_ring[cons]; 732 - skb_fill_page_desc(skb, i, cons_rx_buf->page, 0, frag_len); 712 + skb_fill_page_desc(skb, i, cons_rx_buf->page, 713 + cons_rx_buf->offset, frag_len); 733 714 __clear_bit(cons, rxr->rx_agg_bmap); 734 715 735 716 /* It is possible for bnxt_alloc_rx_page() to allocate ··· 761 740 return NULL; 762 741 } 763 742 764 - dma_unmap_page(&pdev->dev, mapping, PAGE_SIZE, 743 + dma_unmap_page(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE, 765 744 
PCI_DMA_FROMDEVICE); 766 745 767 746 skb->data_len += frag_len; ··· 1388 1367 if (!TX_CMP_VALID(txcmp, raw_cons)) 1389 1368 break; 1390 1369 1370 + /* The valid test of the entry must be done first before 1371 + * reading any further. 1372 + */ 1373 + rmb(); 1391 1374 if (TX_CMP_TYPE(txcmp) == CMP_TYPE_TX_L2_CMP) { 1392 1375 tx_pkts++; 1393 1376 /* return full budget so NAPI will complete. */ ··· 1609 1584 1610 1585 dma_unmap_page(&pdev->dev, 1611 1586 dma_unmap_addr(rx_agg_buf, mapping), 1612 - PAGE_SIZE, PCI_DMA_FROMDEVICE); 1587 + BNXT_RX_PAGE_SIZE, PCI_DMA_FROMDEVICE); 1613 1588 1614 1589 rx_agg_buf->page = NULL; 1615 1590 __clear_bit(j, rxr->rx_agg_bmap); 1616 1591 1617 1592 __free_page(page); 1593 + } 1594 + if (rxr->rx_page) { 1595 + __free_page(rxr->rx_page); 1596 + rxr->rx_page = NULL; 1618 1597 } 1619 1598 } 1620 1599 } ··· 2002 1973 if (!(bp->flags & BNXT_FLAG_AGG_RINGS)) 2003 1974 return 0; 2004 1975 2005 - type = ((u32)PAGE_SIZE << RX_BD_LEN_SHIFT) | 1976 + type = ((u32)BNXT_RX_PAGE_SIZE << RX_BD_LEN_SHIFT) | 2006 1977 RX_BD_TYPE_RX_AGG_BD | RX_BD_FLAGS_SOP; 2007 1978 2008 1979 bnxt_init_rxbd_pages(ring, type); ··· 2193 2164 bp->rx_agg_nr_pages = 0; 2194 2165 2195 2166 if (bp->flags & BNXT_FLAG_TPA) 2196 - agg_factor = 4; 2167 + agg_factor = min_t(u32, 4, 65536 / BNXT_RX_PAGE_SIZE); 2197 2168 2198 2169 bp->flags &= ~BNXT_FLAG_JUMBO; 2199 2170 if (rx_space > PAGE_SIZE) { ··· 3049 3020 /* Number of segs are log2 units, and first packet is not 3050 3021 * included as part of this units. 
3051 3022 */ 3052 - if (mss <= PAGE_SIZE) { 3053 - n = PAGE_SIZE / mss; 3023 + if (mss <= BNXT_RX_PAGE_SIZE) { 3024 + n = BNXT_RX_PAGE_SIZE / mss; 3054 3025 nsegs = (MAX_SKB_FRAGS - 1) * n; 3055 3026 } else { 3056 - n = mss / PAGE_SIZE; 3057 - if (mss & (PAGE_SIZE - 1)) 3027 + n = mss / BNXT_RX_PAGE_SIZE; 3028 + if (mss & (BNXT_RX_PAGE_SIZE - 1)) 3058 3029 n++; 3059 3030 nsegs = (MAX_SKB_FRAGS - n) / n; 3060 3031 } ··· 4042 4013 } 4043 4014 4044 4015 static int bnxt_cfg_rx_mode(struct bnxt *); 4016 + static bool bnxt_mc_list_updated(struct bnxt *, u32 *); 4045 4017 4046 4018 static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init) 4047 4019 { 4020 + struct bnxt_vnic_info *vnic = &bp->vnic_info[0]; 4048 4021 int rc = 0; 4049 4022 4050 4023 if (irq_re_init) { ··· 4102 4071 netdev_err(bp->dev, "HWRM vnic filter failure rc: %x\n", rc); 4103 4072 goto err_out; 4104 4073 } 4105 - bp->vnic_info[0].uc_filter_count = 1; 4074 + vnic->uc_filter_count = 1; 4106 4075 4107 - bp->vnic_info[0].rx_mask = CFA_L2_SET_RX_MASK_REQ_MASK_BCAST; 4076 + vnic->rx_mask = CFA_L2_SET_RX_MASK_REQ_MASK_BCAST; 4108 4077 4109 4078 if ((bp->dev->flags & IFF_PROMISC) && BNXT_PF(bp)) 4110 - bp->vnic_info[0].rx_mask |= 4111 - CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS; 4079 + vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS; 4080 + 4081 + if (bp->dev->flags & IFF_ALLMULTI) { 4082 + vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST; 4083 + vnic->mc_list_count = 0; 4084 + } else { 4085 + u32 mask = 0; 4086 + 4087 + bnxt_mc_list_updated(bp, &mask); 4088 + vnic->rx_mask |= mask; 4089 + } 4112 4090 4113 4091 rc = bnxt_cfg_rx_mode(bp); 4114 4092 if (rc) ··· 4349 4309 if (bp->flags & BNXT_FLAG_MSIX_CAP) 4350 4310 rc = bnxt_setup_msix(bp); 4351 4311 4352 - if (!(bp->flags & BNXT_FLAG_USING_MSIX)) { 4312 + if (!(bp->flags & BNXT_FLAG_USING_MSIX) && BNXT_PF(bp)) { 4353 4313 /* fallback to INTA */ 4354 4314 rc = bnxt_setup_inta(bp); 4355 4315 }
+13
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 407 407 408 408 #define BNXT_PAGE_SIZE (1 << BNXT_PAGE_SHIFT) 409 409 410 + /* The RXBD length is 16-bit so we can only support page sizes < 64K */ 411 + #if (PAGE_SHIFT > 15) 412 + #define BNXT_RX_PAGE_SHIFT 15 413 + #else 414 + #define BNXT_RX_PAGE_SHIFT PAGE_SHIFT 415 + #endif 416 + 417 + #define BNXT_RX_PAGE_SIZE (1 << BNXT_RX_PAGE_SHIFT) 418 + 410 419 #define BNXT_MIN_PKT_SIZE 45 411 420 412 421 #define BNXT_NUM_TESTS(bp) 0 ··· 515 506 516 507 struct bnxt_sw_rx_agg_bd { 517 508 struct page *page; 509 + unsigned int offset; 518 510 dma_addr_t mapping; 519 511 }; 520 512 ··· 595 585 596 586 unsigned long *rx_agg_bmap; 597 587 u16 rx_agg_bmap_size; 588 + 589 + struct page *rx_page; 590 + unsigned int rx_page_offset; 598 591 599 592 dma_addr_t rx_desc_mapping[MAX_RX_PAGES]; 600 593 dma_addr_t rx_agg_desc_mapping[MAX_RX_AGG_PAGES];
+21 -13
drivers/net/ethernet/cadence/macb.c
··· 441 441 snprintf(bp->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x", 442 442 bp->pdev->name, bp->pdev->id); 443 443 bp->mii_bus->priv = bp; 444 - bp->mii_bus->parent = &bp->dev->dev; 444 + bp->mii_bus->parent = &bp->pdev->dev; 445 445 pdata = dev_get_platdata(&bp->pdev->dev); 446 446 447 447 dev_set_drvdata(&bp->dev->dev, bp->mii_bus); ··· 458 458 struct phy_device *phydev; 459 459 460 460 phydev = mdiobus_scan(bp->mii_bus, i); 461 - if (IS_ERR(phydev)) { 461 + if (IS_ERR(phydev) && 462 + PTR_ERR(phydev) != -ENODEV) { 462 463 err = PTR_ERR(phydev); 463 464 break; 464 465 } ··· 3020 3019 if (err) 3021 3020 goto err_out_free_netdev; 3022 3021 3022 + err = macb_mii_init(bp); 3023 + if (err) 3024 + goto err_out_free_netdev; 3025 + 3026 + phydev = bp->phy_dev; 3027 + 3028 + netif_carrier_off(dev); 3029 + 3023 3030 err = register_netdev(dev); 3024 3031 if (err) { 3025 3032 dev_err(&pdev->dev, "Cannot register net device, aborting.\n"); 3026 - goto err_out_unregister_netdev; 3033 + goto err_out_unregister_mdio; 3027 3034 } 3028 3035 3029 - err = macb_mii_init(bp); 3030 - if (err) 3031 - goto err_out_unregister_netdev; 3032 - 3033 - netif_carrier_off(dev); 3036 + phy_attached_info(phydev); 3034 3037 3035 3038 netdev_info(dev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n", 3036 3039 macb_is_gem(bp) ? "GEM" : "MACB", macb_readl(bp, MID), 3037 3040 dev->base_addr, dev->irq, dev->dev_addr); 3038 3041 3039 - phydev = bp->phy_dev; 3040 - phy_attached_info(phydev); 3041 - 3042 3042 return 0; 3043 3043 3044 - err_out_unregister_netdev: 3045 - unregister_netdev(dev); 3044 + err_out_unregister_mdio: 3045 + phy_disconnect(bp->phy_dev); 3046 + mdiobus_unregister(bp->mii_bus); 3047 + mdiobus_free(bp->mii_bus); 3048 + 3049 + /* Shutdown the PHY if there is a GPIO reset */ 3050 + if (bp->reset_gpio) 3051 + gpiod_set_value(bp->reset_gpio, 0); 3046 3052 3047 3053 err_out_free_netdev: 3048 3054 free_netdev(dev);
+2 -1
drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
··· 576 576 unsigned int nq0 = adap2pinfo(adap, 0)->nqsets; 577 577 unsigned int nq1 = adap->port[1] ? adap2pinfo(adap, 1)->nqsets : 1; 578 578 u8 cpus[SGE_QSETS + 1]; 579 - u16 rspq_map[RSS_TABLE_SIZE]; 579 + u16 rspq_map[RSS_TABLE_SIZE + 1]; 580 580 581 581 for (i = 0; i < SGE_QSETS; ++i) 582 582 cpus[i] = i; ··· 586 586 rspq_map[i] = i % nq0; 587 587 rspq_map[i + RSS_TABLE_SIZE / 2] = (i % nq1) + nq0; 588 588 } 589 + rspq_map[RSS_TABLE_SIZE] = 0xffff; /* terminator */ 589 590 590 591 t3_config_rss(adap, F_RQFEEDBACKENABLE | F_TNLLKPEN | F_TNLMAPEN | 591 592 F_TNLPRTEN | F_TNL2TUPEN | F_TNL4TUPEN |
+8 -2
drivers/net/ethernet/freescale/fec_main.c
··· 1521 1521 struct fec_enet_private *fep = netdev_priv(ndev); 1522 1522 1523 1523 for_each_set_bit(queue_id, &fep->work_rx, FEC_ENET_MAX_RX_QS) { 1524 - clear_bit(queue_id, &fep->work_rx); 1525 - pkt_received += fec_enet_rx_queue(ndev, 1524 + int ret; 1525 + 1526 + ret = fec_enet_rx_queue(ndev, 1526 1527 budget - pkt_received, queue_id); 1528 + 1529 + if (ret < budget - pkt_received) 1530 + clear_bit(queue_id, &fep->work_rx); 1531 + 1532 + pkt_received += ret; 1527 1533 } 1528 1534 return pkt_received; 1529 1535 }
+2 -4
drivers/net/ethernet/marvell/mvneta.c
··· 3354 3354 /* Enable per-CPU interrupts on the CPU that is 3355 3355 * brought up. 3356 3356 */ 3357 - smp_call_function_single(cpu, mvneta_percpu_enable, 3358 - pp, true); 3357 + mvneta_percpu_enable(pp); 3359 3358 3360 3359 /* Enable per-CPU interrupt on the one CPU we care 3361 3360 * about. ··· 3386 3387 /* Disable per-CPU interrupts on the CPU that is 3387 3388 * brought down. 3388 3389 */ 3389 - smp_call_function_single(cpu, mvneta_percpu_disable, 3390 - pp, true); 3390 + mvneta_percpu_disable(pp); 3391 3391 3392 3392 break; 3393 3393 case CPU_DEAD:
+2
drivers/net/ethernet/marvell/pxa168_eth.c
··· 979 979 return 0; 980 980 981 981 pep->phy = mdiobus_scan(pep->smi_bus, pep->phy_addr); 982 + if (IS_ERR(pep->phy)) 983 + return PTR_ERR(pep->phy); 982 984 if (!pep->phy) 983 985 return -ENODEV; 984 986
+1 -1
drivers/net/ethernet/mellanox/mlx4/en_rx.c
··· 707 707 708 708 if (ipv6h->nexthdr == IPPROTO_FRAGMENT || ipv6h->nexthdr == IPPROTO_HOPOPTS) 709 709 return -1; 710 - hw_checksum = csum_add(hw_checksum, (__force __wsum)(ipv6h->nexthdr << 8)); 710 + hw_checksum = csum_add(hw_checksum, (__force __wsum)htons(ipv6h->nexthdr)); 711 711 712 712 csum_pseudo_hdr = csum_partial(&ipv6h->saddr, 713 713 sizeof(ipv6h->saddr) + sizeof(ipv6h->daddr), 0);
+7
drivers/net/ethernet/mellanox/mlx5/core/Kconfig
··· 31 31 This flag is depended on the kernel's DCB support. 32 32 33 33 If unsure, set to Y 34 + 35 + config MLX5_CORE_EN_VXLAN 36 + bool "VXLAN offloads Support" 37 + default y 38 + depends on MLX5_CORE_EN && VXLAN && !(MLX5_CORE=y && VXLAN=m) 39 + ---help--- 40 + Say Y here if you want to use VXLAN offloads in the driver.
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/Makefile
··· 6 6 7 7 mlx5_core-$(CONFIG_MLX5_CORE_EN) += wq.o eswitch.o \ 8 8 en_main.o en_fs.o en_ethtool.o en_tx.o en_rx.o \ 9 - en_txrx.o en_clock.o vxlan.o en_tc.o 9 + en_txrx.o en_clock.o en_tc.o 10 10 11 + mlx5_core-$(CONFIG_MLX5_CORE_EN_VXLAN) += vxlan.o 11 12 mlx5_core-$(CONFIG_MLX5_CORE_EN_DCB) += en_dcbnl.o
+3
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 564 564 struct mlx5e_flow_tables fts; 565 565 struct mlx5e_eth_addr_db eth_addr; 566 566 struct mlx5e_vlan_db vlan; 567 + #ifdef CONFIG_MLX5_CORE_EN_VXLAN 567 568 struct mlx5e_vxlan_db vxlan; 569 + #endif 568 570 569 571 struct mlx5e_params params; 572 + struct workqueue_struct *wq; 570 573 struct work_struct update_carrier_work; 571 574 struct work_struct set_rx_mode_work; 572 575 struct delayed_work update_stats_work;
+25 -13
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 262 262 mutex_lock(&priv->state_lock); 263 263 if (test_bit(MLX5E_STATE_OPENED, &priv->state)) { 264 264 mlx5e_update_stats(priv); 265 - schedule_delayed_work(dwork, 266 - msecs_to_jiffies( 267 - MLX5E_UPDATE_STATS_INTERVAL)); 265 + queue_delayed_work(priv->wq, dwork, 266 + msecs_to_jiffies(MLX5E_UPDATE_STATS_INTERVAL)); 268 267 } 269 268 mutex_unlock(&priv->state_lock); 270 269 } ··· 279 280 switch (event) { 280 281 case MLX5_DEV_EVENT_PORT_UP: 281 282 case MLX5_DEV_EVENT_PORT_DOWN: 282 - schedule_work(&priv->update_carrier_work); 283 + queue_work(priv->wq, &priv->update_carrier_work); 283 284 break; 284 285 285 286 default: ··· 1504 1505 mlx5e_update_carrier(priv); 1505 1506 mlx5e_timestamp_init(priv); 1506 1507 1507 - schedule_delayed_work(&priv->update_stats_work, 0); 1508 + queue_delayed_work(priv->wq, &priv->update_stats_work, 0); 1508 1509 1509 1510 return 0; 1510 1511 ··· 1960 1961 { 1961 1962 struct mlx5e_priv *priv = netdev_priv(dev); 1962 1963 1963 - schedule_work(&priv->set_rx_mode_work); 1964 + queue_work(priv->wq, &priv->set_rx_mode_work); 1964 1965 } 1965 1966 1966 1967 static int mlx5e_set_mac(struct net_device *netdev, void *addr) ··· 1975 1976 ether_addr_copy(netdev->dev_addr, saddr->sa_data); 1976 1977 netif_addr_unlock_bh(netdev); 1977 1978 1978 - schedule_work(&priv->set_rx_mode_work); 1979 + queue_work(priv->wq, &priv->set_rx_mode_work); 1979 1980 1980 1981 return 0; 1981 1982 } ··· 2149 2150 vf_stats); 2150 2151 } 2151 2152 2153 + #if IS_ENABLED(CONFIG_MLX5_CORE_EN_VXLAN) 2152 2154 static void mlx5e_add_vxlan_port(struct net_device *netdev, 2153 2155 sa_family_t sa_family, __be16 port) 2154 2156 { ··· 2158 2158 if (!mlx5e_vxlan_allowed(priv->mdev)) 2159 2159 return; 2160 2160 2161 - mlx5e_vxlan_add_port(priv, be16_to_cpu(port)); 2161 + mlx5e_vxlan_queue_work(priv, sa_family, be16_to_cpu(port), 1); 2162 2162 } 2163 2163 2164 2164 static void mlx5e_del_vxlan_port(struct net_device *netdev, ··· 2169 2169 if (!mlx5e_vxlan_allowed(priv->mdev)) 
2170 2170 return; 2171 2171 2172 - mlx5e_vxlan_del_port(priv, be16_to_cpu(port)); 2172 + mlx5e_vxlan_queue_work(priv, sa_family, be16_to_cpu(port), 0); 2173 2173 } 2174 2174 2175 2175 static netdev_features_t mlx5e_vxlan_features_check(struct mlx5e_priv *priv, ··· 2221 2221 2222 2222 return features; 2223 2223 } 2224 + #endif 2224 2225 2225 2226 static const struct net_device_ops mlx5e_netdev_ops_basic = { 2226 2227 .ndo_open = mlx5e_open, ··· 2253 2252 .ndo_set_features = mlx5e_set_features, 2254 2253 .ndo_change_mtu = mlx5e_change_mtu, 2255 2254 .ndo_do_ioctl = mlx5e_ioctl, 2255 + #ifdef CONFIG_MLX5_CORE_EN_VXLAN 2256 2256 .ndo_add_vxlan_port = mlx5e_add_vxlan_port, 2257 2257 .ndo_del_vxlan_port = mlx5e_del_vxlan_port, 2258 2258 .ndo_features_check = mlx5e_features_check, 2259 + #endif 2259 2260 .ndo_set_vf_mac = mlx5e_set_vf_mac, 2260 2261 .ndo_set_vf_vlan = mlx5e_set_vf_vlan, 2261 2262 .ndo_get_vf_config = mlx5e_get_vf_config, ··· 2501 2498 2502 2499 priv = netdev_priv(netdev); 2503 2500 2501 + priv->wq = create_singlethread_workqueue("mlx5e"); 2502 + if (!priv->wq) 2503 + goto err_free_netdev; 2504 + 2504 2505 err = mlx5_alloc_map_uar(mdev, &priv->cq_uar, false); 2505 2506 if (err) { 2506 2507 mlx5_core_err(mdev, "alloc_map uar failed, %d\n", err); 2507 - goto err_free_netdev; 2508 + goto err_destroy_wq; 2508 2509 } 2509 2510 2510 2511 err = mlx5_core_alloc_pd(mdev, &priv->pdn); ··· 2587 2580 vxlan_get_rx_port(netdev); 2588 2581 2589 2582 mlx5e_enable_async_events(priv); 2590 - schedule_work(&priv->set_rx_mode_work); 2583 + queue_work(priv->wq, &priv->set_rx_mode_work); 2591 2584 2592 2585 return priv; 2593 2586 ··· 2624 2617 err_unmap_free_uar: 2625 2618 mlx5_unmap_free_uar(mdev, &priv->cq_uar); 2626 2619 2620 + err_destroy_wq: 2621 + destroy_workqueue(priv->wq); 2622 + 2627 2623 err_free_netdev: 2628 2624 free_netdev(netdev); 2629 2625 ··· 2640 2630 2641 2631 set_bit(MLX5E_STATE_DESTROYING, &priv->state); 2642 2632 2643 - 
schedule_work(&priv->set_rx_mode_work); 2633 + queue_work(priv->wq, &priv->set_rx_mode_work); 2644 2634 mlx5e_disable_async_events(priv); 2645 - flush_scheduled_work(); 2635 + flush_workqueue(priv->wq); 2646 2636 if (test_bit(MLX5_INTERFACE_STATE_SHUTDOWN, &mdev->intf_state)) { 2647 2637 netif_device_detach(netdev); 2648 2638 mutex_lock(&priv->state_lock); ··· 2665 2655 mlx5_core_dealloc_transport_domain(priv->mdev, priv->tdn); 2666 2656 mlx5_core_dealloc_pd(priv->mdev, priv->pdn); 2667 2657 mlx5_unmap_free_uar(priv->mdev, &priv->cq_uar); 2658 + cancel_delayed_work_sync(&priv->update_stats_work); 2659 + destroy_workqueue(priv->wq); 2668 2660 2669 2661 if (!test_bit(MLX5_INTERFACE_STATE_SHUTDOWN, &mdev->intf_state)) 2670 2662 free_netdev(netdev);
+4 -2
drivers/net/ethernet/mellanox/mlx5/core/uar.c
··· 269 269 270 270 void mlx5_unmap_free_uar(struct mlx5_core_dev *mdev, struct mlx5_uar *uar) 271 271 { 272 - iounmap(uar->map); 273 - iounmap(uar->bf_map); 272 + if (uar->map) 273 + iounmap(uar->map); 274 + else 275 + iounmap(uar->bf_map); 274 276 mlx5_cmd_free_uar(mdev, uar->index); 275 277 } 276 278 EXPORT_SYMBOL(mlx5_unmap_free_uar);
+38 -12
drivers/net/ethernet/mellanox/mlx5/core/vxlan.c
··· 95 95 return vxlan; 96 96 } 97 97 98 - int mlx5e_vxlan_add_port(struct mlx5e_priv *priv, u16 port) 98 + static void mlx5e_vxlan_add_port(struct work_struct *work) 99 99 { 100 + struct mlx5e_vxlan_work *vxlan_work = 101 + container_of(work, struct mlx5e_vxlan_work, work); 102 + struct mlx5e_priv *priv = vxlan_work->priv; 100 103 struct mlx5e_vxlan_db *vxlan_db = &priv->vxlan; 104 + u16 port = vxlan_work->port; 101 105 struct mlx5e_vxlan *vxlan; 102 106 int err; 103 107 104 - err = mlx5e_vxlan_core_add_port_cmd(priv->mdev, port); 105 - if (err) 106 - return err; 108 + if (mlx5e_vxlan_core_add_port_cmd(priv->mdev, port)) 109 + goto free_work; 107 110 108 111 vxlan = kzalloc(sizeof(*vxlan), GFP_KERNEL); 109 - if (!vxlan) { 110 - err = -ENOMEM; 112 + if (!vxlan) 111 113 goto err_delete_port; 112 - } 113 114 114 115 vxlan->udp_port = port; 115 116 ··· 120 119 if (err) 121 120 goto err_free; 122 121 123 - return 0; 122 + goto free_work; 124 123 125 124 err_free: 126 125 kfree(vxlan); 127 126 err_delete_port: 128 127 mlx5e_vxlan_core_del_port_cmd(priv->mdev, port); 129 - return err; 128 + free_work: 129 + kfree(vxlan_work); 130 130 } 131 131 132 132 static void __mlx5e_vxlan_core_del_port(struct mlx5e_priv *priv, u16 port) ··· 147 145 kfree(vxlan); 148 146 } 149 147 150 - void mlx5e_vxlan_del_port(struct mlx5e_priv *priv, u16 port) 148 + static void mlx5e_vxlan_del_port(struct work_struct *work) 151 149 { 152 - if (!mlx5e_vxlan_lookup_port(priv, port)) 153 - return; 150 + struct mlx5e_vxlan_work *vxlan_work = 151 + container_of(work, struct mlx5e_vxlan_work, work); 152 + struct mlx5e_priv *priv = vxlan_work->priv; 153 + u16 port = vxlan_work->port; 154 154 155 155 __mlx5e_vxlan_core_del_port(priv, port); 156 + 157 + kfree(vxlan_work); 158 + } 159 + 160 + void mlx5e_vxlan_queue_work(struct mlx5e_priv *priv, sa_family_t sa_family, 161 + u16 port, int add) 162 + { 163 + struct mlx5e_vxlan_work *vxlan_work; 164 + 165 + vxlan_work = kmalloc(sizeof(*vxlan_work), GFP_ATOMIC); 
166 + if (!vxlan_work) 167 + return; 168 + 169 + if (add) 170 + INIT_WORK(&vxlan_work->work, mlx5e_vxlan_add_port); 171 + else 172 + INIT_WORK(&vxlan_work->work, mlx5e_vxlan_del_port); 173 + 174 + vxlan_work->priv = priv; 175 + vxlan_work->port = port; 176 + vxlan_work->sa_family = sa_family; 177 + queue_work(priv->wq, &vxlan_work->work); 156 178 } 157 179 158 180 void mlx5e_vxlan_cleanup(struct mlx5e_priv *priv)
+18 -4
drivers/net/ethernet/mellanox/mlx5/core/vxlan.h
··· 39 39 u16 udp_port; 40 40 }; 41 41 42 + struct mlx5e_vxlan_work { 43 + struct work_struct work; 44 + struct mlx5e_priv *priv; 45 + sa_family_t sa_family; 46 + u16 port; 47 + }; 48 + 42 49 static inline bool mlx5e_vxlan_allowed(struct mlx5_core_dev *mdev) 43 50 { 44 - return (MLX5_CAP_ETH(mdev, tunnel_stateless_vxlan) && 51 + return IS_ENABLED(CONFIG_MLX5_CORE_EN_VXLAN) && 52 + (MLX5_CAP_ETH(mdev, tunnel_stateless_vxlan) && 45 53 mlx5_core_is_pf(mdev)); 46 54 } 47 55 56 + #ifdef CONFIG_MLX5_CORE_EN_VXLAN 48 57 void mlx5e_vxlan_init(struct mlx5e_priv *priv); 49 - int mlx5e_vxlan_add_port(struct mlx5e_priv *priv, u16 port); 50 - void mlx5e_vxlan_del_port(struct mlx5e_priv *priv, u16 port); 51 - struct mlx5e_vxlan *mlx5e_vxlan_lookup_port(struct mlx5e_priv *priv, u16 port); 52 58 void mlx5e_vxlan_cleanup(struct mlx5e_priv *priv); 59 + #else 60 + static inline void mlx5e_vxlan_init(struct mlx5e_priv *priv) {} 61 + static inline void mlx5e_vxlan_cleanup(struct mlx5e_priv *priv) {} 62 + #endif 63 + 64 + void mlx5e_vxlan_queue_work(struct mlx5e_priv *priv, sa_family_t sa_family, 65 + u16 port, int add); 66 + struct mlx5e_vxlan *mlx5e_vxlan_lookup_port(struct mlx5e_priv *priv, u16 port); 53 67 54 68 #endif /* __MLX5_VXLAN_H__ */
+2 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
··· 2541 2541 lag->ref_count++; 2542 2542 return 0; 2543 2543 2544 + err_col_port_enable: 2545 + mlxsw_sp_lag_col_port_remove(mlxsw_sp_port, lag_id); 2544 2546 err_col_port_add: 2545 2547 if (!lag->ref_count) 2546 2548 mlxsw_sp_lag_destroy(mlxsw_sp, lag_id); 2547 - err_col_port_enable: 2548 - mlxsw_sp_lag_col_port_remove(mlxsw_sp_port, lag_id); 2549 2549 return err; 2550 2550 } 2551 2551
+8
drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
··· 214 214 mlxsw_reg_sftr_pack(sftr_pl, MLXSW_SP_FLOOD_TABLE_BM, idx_begin, 215 215 table_type, range, local_port, set); 216 216 err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sftr), sftr_pl); 217 + if (err) 218 + goto err_flood_bm_set; 219 + else 220 + goto buffer_out; 217 221 222 + err_flood_bm_set: 223 + mlxsw_reg_sftr_pack(sftr_pl, MLXSW_SP_FLOOD_TABLE_UC, idx_begin, 224 + table_type, range, local_port, !set); 225 + mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sftr), sftr_pl); 218 226 buffer_out: 219 227 kfree(sftr_pl); 220 228 return err;
+2 -2
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
··· 2668 2668 2669 2669 del_timer_sync(&mgp->watchdog_timer); 2670 2670 mgp->running = MYRI10GE_ETH_STOPPING; 2671 - local_bh_disable(); /* myri10ge_ss_lock_napi needs bh disabled */ 2672 2671 for (i = 0; i < mgp->num_slices; i++) { 2673 2672 napi_disable(&mgp->ss[i].napi); 2673 + local_bh_disable(); /* myri10ge_ss_lock_napi needs this */ 2674 2674 /* Lock the slice to prevent the busy_poll handler from 2675 2675 * accessing it. Later when we bring the NIC up, myri10ge_open 2676 2676 * resets the slice including this lock. ··· 2679 2679 pr_info("Slice %d locked\n", i); 2680 2680 mdelay(1); 2681 2681 } 2682 + local_bh_enable(); 2682 2683 } 2683 - local_bh_enable(); 2684 2684 netif_carrier_off(dev); 2685 2685 2686 2686 netif_tx_stop_all_queues(dev);
+9 -5
drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c
··· 1015 1015 { 1016 1016 int i, v, addr; 1017 1017 __le32 *ptr32; 1018 + int ret; 1018 1019 1019 1020 addr = base; 1020 1021 ptr32 = buf; 1021 1022 for (i = 0; i < size / sizeof(u32); i++) { 1022 - if (netxen_rom_fast_read(adapter, addr, &v) == -1) 1023 - return -1; 1023 + ret = netxen_rom_fast_read(adapter, addr, &v); 1024 + if (ret) 1025 + return ret; 1026 + 1024 1027 *ptr32 = cpu_to_le32(v); 1025 1028 ptr32++; 1026 1029 addr += sizeof(u32); 1027 1030 } 1028 1031 if ((char *)buf + size > (char *)ptr32) { 1029 1032 __le32 local; 1030 - if (netxen_rom_fast_read(adapter, addr, &v) == -1) 1031 - return -1; 1033 + ret = netxen_rom_fast_read(adapter, addr, &v); 1034 + if (ret) 1035 + return ret; 1032 1036 local = cpu_to_le32(v); 1033 1037 memcpy(ptr32, &local, (char *)buf + size - (char *)ptr32); 1034 1038 } ··· 1944 1940 if (adapter->phy_read && 1945 1941 adapter->phy_read(adapter, 1946 1942 NETXEN_NIU_GB_MII_MGMT_ADDR_AUTONEG, 1947 - &autoneg) != 0) 1943 + &autoneg) == 0) 1948 1944 adapter->link_autoneg = autoneg; 1949 1945 } else 1950 1946 goto link_down;
+2 -1
drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
··· 852 852 ptr32 = (__le32 *)&serial_num; 853 853 offset = NX_FW_SERIAL_NUM_OFFSET; 854 854 for (i = 0; i < 8; i++) { 855 - if (netxen_rom_fast_read(adapter, offset, &val) == -1) { 855 + err = netxen_rom_fast_read(adapter, offset, &val); 856 + if (err) { 856 857 dev_err(&pdev->dev, "error reading board info\n"); 857 858 adapter->driver_mismatch = 1; 858 859 return;
+3 -5
drivers/net/ethernet/qlogic/qede/qede_main.c
··· 421 421 u8 xmit_type; 422 422 u16 idx; 423 423 u16 hlen; 424 - bool data_split; 424 + bool data_split = false; 425 425 426 426 /* Get tx-queue context and netdev index */ 427 427 txq_index = skb_get_queue_mapping(skb); ··· 1938 1938 edev->q_num_rx_buffers = NUM_RX_BDS_DEF; 1939 1939 edev->q_num_tx_buffers = NUM_TX_BDS_DEF; 1940 1940 1941 - DP_INFO(edev, "Allocated netdev with 64 tx queues and 64 rx queues\n"); 1942 - 1943 1941 SET_NETDEV_DEV(ndev, &pdev->dev); 1944 1942 1945 1943 memset(&edev->stats, 0, sizeof(edev->stats)); ··· 2088 2090 { 2089 2091 struct qed_pf_params pf_params; 2090 2092 2091 - /* 16 rx + 16 tx */ 2093 + /* 64 rx + 64 tx */ 2092 2094 memset(&pf_params, 0, sizeof(struct qed_pf_params)); 2093 - pf_params.eth_pf_params.num_cons = 32; 2095 + pf_params.eth_pf_params.num_cons = 128; 2094 2096 qed_ops->common->update_pf_params(cdev, &pf_params); 2095 2097 } 2096 2098
+13 -2
drivers/net/ethernet/sfc/ef10.c
··· 1920 1920 return 0; 1921 1921 } 1922 1922 1923 + if (nic_data->datapath_caps & 1924 + 1 << MC_CMD_GET_CAPABILITIES_OUT_RX_RSS_LIMITED_LBN) 1925 + return -EOPNOTSUPP; 1926 + 1923 1927 MCDI_SET_DWORD(inbuf, RSS_CONTEXT_ALLOC_IN_UPSTREAM_PORT_ID, 1924 1928 nic_data->vport_id); 1925 1929 MCDI_SET_DWORD(inbuf, RSS_CONTEXT_ALLOC_IN_TYPE, alloc_type); ··· 2927 2923 bool replacing) 2928 2924 { 2929 2925 struct efx_ef10_nic_data *nic_data = efx->nic_data; 2926 + u32 flags = spec->flags; 2930 2927 2931 2928 memset(inbuf, 0, MC_CMD_FILTER_OP_IN_LEN); 2929 + 2930 + /* Remove RSS flag if we don't have an RSS context. */ 2931 + if (flags & EFX_FILTER_FLAG_RX_RSS && 2932 + spec->rss_context == EFX_FILTER_RSS_CONTEXT_DEFAULT && 2933 + nic_data->rx_rss_context == EFX_EF10_RSS_CONTEXT_INVALID) 2934 + flags &= ~EFX_FILTER_FLAG_RX_RSS; 2932 2935 2933 2936 if (replacing) { 2934 2937 MCDI_SET_DWORD(inbuf, FILTER_OP_IN_OP, ··· 2996 2985 spec->dmaq_id == EFX_FILTER_RX_DMAQ_ID_DROP ? 2997 2986 0 : spec->dmaq_id); 2998 2987 MCDI_SET_DWORD(inbuf, FILTER_OP_IN_RX_MODE, 2999 - (spec->flags & EFX_FILTER_FLAG_RX_RSS) ? 2988 + (flags & EFX_FILTER_FLAG_RX_RSS) ? 3000 2989 MC_CMD_FILTER_OP_IN_RX_MODE_RSS : 3001 2990 MC_CMD_FILTER_OP_IN_RX_MODE_SIMPLE); 3002 - if (spec->flags & EFX_FILTER_FLAG_RX_RSS) 2991 + if (flags & EFX_FILTER_FLAG_RX_RSS) 3003 2992 MCDI_SET_DWORD(inbuf, FILTER_OP_IN_RX_CONTEXT, 3004 2993 spec->rss_context != 3005 2994 EFX_FILTER_RSS_CONTEXT_DEFAULT ?
+37 -32
drivers/net/ethernet/ti/cpsw.c
··· 367 367 spinlock_t lock;
368 368 struct platform_device *pdev;
369 369 struct net_device *ndev;
370 - struct device_node *phy_node;
371 370 struct napi_struct napi_rx;
372 371 struct napi_struct napi_tx;
373 372 struct device *dev;
··· 1147 1148 cpsw_ale_add_mcast(priv->ale, priv->ndev->broadcast,
1148 1149 1 << slave_port, 0, 0, ALE_MCAST_FWD_2);
1149 1150
1150 - if (priv->phy_node)
1151 - slave->phy = of_phy_connect(priv->ndev, priv->phy_node,
1151 + if (slave->data->phy_node) {
1152 + slave->phy = of_phy_connect(priv->ndev, slave->data->phy_node,
1152 1153 &cpsw_adjust_link, 0, slave->data->phy_if);
1153 - else
1154 + if (!slave->phy) {
1155 + dev_err(priv->dev, "phy \"%s\" not found on slave %d\n",
1156 + slave->data->phy_node->full_name,
1157 + slave->slave_num);
1158 + return;
1159 + }
1160 + } else {
1154 1161 slave->phy = phy_connect(priv->ndev, slave->data->phy_id,
1155 1162 &cpsw_adjust_link, slave->data->phy_if);
1156 - if (IS_ERR(slave->phy)) {
1157 - dev_err(priv->dev, "phy %s not found on slave %d\n",
1158 - slave->data->phy_id, slave->slave_num);
1159 - slave->phy = NULL;
1160 - } else {
1161 - phy_attached_info(slave->phy);
1162 -
1163 - phy_start(slave->phy);
1164 -
1165 - /* Configure GMII_SEL register */
1166 - cpsw_phy_sel(&priv->pdev->dev, slave->phy->interface,
1167 - slave->slave_num);
1163 + if (IS_ERR(slave->phy)) {
1164 + dev_err(priv->dev,
1165 + "phy \"%s\" not found on slave %d, err %ld\n",
1166 + slave->data->phy_id, slave->slave_num,
1167 + PTR_ERR(slave->phy));
1168 + slave->phy = NULL;
1169 + return;
1170 + }
1168 1171 }
1172 +
1173 + phy_attached_info(slave->phy);
1174 +
1175 + phy_start(slave->phy);
1176 +
1177 + /* Configure GMII_SEL register */
1178 + cpsw_phy_sel(&priv->pdev->dev, slave->phy->interface, slave->slave_num);
1169 1179 }
1170 1180
1171 1181 static inline void cpsw_add_default_vlan(struct cpsw_priv *priv)
··· 1948 1940 slave->port_vlan = data->dual_emac_res_vlan;
1949 1941 }
1950 1942
1951 - static int cpsw_probe_dt(struct cpsw_priv *priv,
1943 + static int cpsw_probe_dt(struct cpsw_platform_data *data,
1952 1944 struct platform_device *pdev)
1953 1945 {
1954 1946 struct device_node *node = pdev->dev.of_node;
1955 1947 struct device_node *slave_node;
1956 - struct cpsw_platform_data *data = &priv->data;
1957 1948 int i = 0, ret;
1958 1949 u32 prop;
··· 2040 2033 if (strcmp(slave_node->name, "slave"))
2041 2034 continue;
2042 2035
2043 - priv->phy_node = of_parse_phandle(slave_node, "phy-handle", 0);
2036 + slave_data->phy_node = of_parse_phandle(slave_node,
2037 + "phy-handle", 0);
2044 2038 parp = of_get_property(slave_node, "phy_id", &lenp);
2045 - if (of_phy_is_fixed_link(slave_node)) {
2046 - struct device_node *phy_node;
2047 - struct phy_device *phy_dev;
2048 -
2039 + if (slave_data->phy_node) {
2040 + dev_dbg(&pdev->dev,
2041 + "slave[%d] using phy-handle=\"%s\"\n",
2042 + i, slave_data->phy_node->full_name);
2043 + } else if (of_phy_is_fixed_link(slave_node)) {
2049 2044 /* In the case of a fixed PHY, the DT node associated
2050 2045 * to the PHY is the Ethernet MAC DT node.
2051 2046 */ 2052 2047 ret = of_phy_register_fixed_link(slave_node); 2053 2048 if (ret) 2054 2049 return ret; 2055 - phy_node = of_node_get(slave_node); 2056 - phy_dev = of_phy_find_device(phy_node); 2057 - if (!phy_dev) 2058 - return -ENODEV; 2059 - snprintf(slave_data->phy_id, sizeof(slave_data->phy_id), 2060 - PHY_ID_FMT, phy_dev->mdio.bus->id, 2061 - phy_dev->mdio.addr); 2050 + slave_data->phy_node = of_node_get(slave_node); 2062 2051 } else if (parp) { 2063 2052 u32 phyid; 2064 2053 struct device_node *mdio_node; ··· 2075 2072 snprintf(slave_data->phy_id, sizeof(slave_data->phy_id), 2076 2073 PHY_ID_FMT, mdio->name, phyid); 2077 2074 } else { 2078 - dev_err(&pdev->dev, "No slave[%d] phy_id or fixed-link property\n", i); 2075 + dev_err(&pdev->dev, 2076 + "No slave[%d] phy_id, phy-handle, or fixed-link property\n", 2077 + i); 2079 2078 goto no_phy_slave; 2080 2079 } 2081 2080 slave_data->phy_if = of_get_phy_mode(slave_node); ··· 2280 2275 /* Select default pin state */ 2281 2276 pinctrl_pm_select_default_state(&pdev->dev); 2282 2277 2283 - if (cpsw_probe_dt(priv, pdev)) { 2278 + if (cpsw_probe_dt(&priv->data, pdev)) { 2284 2279 dev_err(&pdev->dev, "cpsw: platform data missing\n"); 2285 2280 ret = -ENODEV; 2286 2281 goto clean_runtime_disable_ret;
+1
drivers/net/ethernet/ti/cpsw.h
··· 18 18 #include <linux/phy.h> 19 19 20 20 struct cpsw_slave_data { 21 + struct device_node *phy_node; 21 22 char phy_id[MII_BUS_ID_SIZE]; 22 23 int phy_if; 23 24 u8 mac_addr[ETH_ALEN];
+4 -1
drivers/net/ethernet/ti/davinci_emac.c
··· 1512 1512 1513 1513 /* TODO: Add phy read and write and private statistics get feature */ 1514 1514 1515 - return phy_mii_ioctl(priv->phydev, ifrq, cmd); 1515 + if (priv->phydev) 1516 + return phy_mii_ioctl(priv->phydev, ifrq, cmd); 1517 + else 1518 + return -EOPNOTSUPP; 1516 1519 } 1517 1520 1518 1521 static int match_first_device(struct device *dev, void *data)
+1 -1
drivers/net/ethernet/toshiba/ps3_gelic_wireless.c
··· 1622 1622 continue; 1623 1623 1624 1624 /* copy hw scan info */ 1625 - memcpy(target->hwinfo, scan_info, scan_info->size); 1625 + memcpy(target->hwinfo, scan_info, be16_to_cpu(scan_info->size)); 1626 1626 target->essid_len = strnlen(scan_info->essid, 1627 1627 sizeof(scan_info->essid)); 1628 1628 target->rate_len = 0;
+3 -2
drivers/net/geneve.c
··· 504 504 int gh_len; 505 505 int err = -ENOSYS; 506 506 507 - udp_tunnel_gro_complete(skb, nhoff); 508 - 509 507 gh = (struct genevehdr *)(skb->data + nhoff); 510 508 gh_len = geneve_hlen(gh); 511 509 type = gh->proto_type; ··· 514 516 err = ptype->callbacks.gro_complete(skb, nhoff + gh_len); 515 517 516 518 rcu_read_unlock(); 519 + 520 + skb_set_inner_mac_header(skb, nhoff + gh_len); 521 + 517 522 return err; 518 523 } 519 524
+13 -6
drivers/net/macsec.c
··· 85 85 * @tfm: crypto struct, key storage
86 86 */
87 87 struct macsec_key {
88 - u64 id;
88 + u8 id[MACSEC_KEYID_LEN];
89 89 struct crypto_aead *tfm;
90 90 };
91 91
··· 1529 1529 [MACSEC_SA_ATTR_AN] = { .type = NLA_U8 },
1530 1530 [MACSEC_SA_ATTR_ACTIVE] = { .type = NLA_U8 },
1531 1531 [MACSEC_SA_ATTR_PN] = { .type = NLA_U32 },
1532 - [MACSEC_SA_ATTR_KEYID] = { .type = NLA_U64 },
1532 + [MACSEC_SA_ATTR_KEYID] = { .type = NLA_BINARY,
1533 + .len = MACSEC_KEYID_LEN, },
1533 1534 [MACSEC_SA_ATTR_KEY] = { .type = NLA_BINARY,
1534 1535 .len = MACSEC_MAX_KEY_LEN, },
1535 1536 };
··· 1576 1575 if (nla_get_u8(attrs[MACSEC_SA_ATTR_ACTIVE]) > 1)
1577 1576 return false;
1578 1577 }
1578 +
1579 + if (nla_len(attrs[MACSEC_SA_ATTR_KEYID]) != MACSEC_KEYID_LEN)
1580 + return false;
1579 1581
1580 1582 return true;
1581 1583 }
··· 1645 1641 if (tb_sa[MACSEC_SA_ATTR_ACTIVE])
1646 1642 rx_sa->active = !!nla_get_u8(tb_sa[MACSEC_SA_ATTR_ACTIVE]);
1647 1643
1648 - rx_sa->key.id = nla_get_u64(tb_sa[MACSEC_SA_ATTR_KEYID]);
1644 + nla_memcpy(rx_sa->key.id, tb_sa[MACSEC_SA_ATTR_KEY], MACSEC_KEYID_LEN);
1649 1645 rx_sa->sc = rx_sc;
1650 1646 rcu_assign_pointer(rx_sc->sa[assoc_num], rx_sa);
··· 1726 1722 return false;
1727 1723 }
1728 1724
1725 + if (nla_len(attrs[MACSEC_SA_ATTR_KEYID]) != MACSEC_KEYID_LEN)
1726 + return false;
1727 +
1729 1728 return true;
1730 1729 }
··· 1784 1777 return -ENOMEM;
1785 1778 }
1786 1779
1787 - tx_sa->key.id = nla_get_u64(tb_sa[MACSEC_SA_ATTR_KEYID]);
1780 + nla_memcpy(tx_sa->key.id, tb_sa[MACSEC_SA_ATTR_KEY], MACSEC_KEYID_LEN);
1788 1781
1789 1782 spin_lock_bh(&tx_sa->lock);
1790 1783 tx_sa->next_pn = nla_get_u32(tb_sa[MACSEC_SA_ATTR_PN]);
··· 2325 2318
2326 2319 if (nla_put_u8(skb, MACSEC_SA_ATTR_AN, i) ||
2327 2320 nla_put_u32(skb, MACSEC_SA_ATTR_PN, tx_sa->next_pn) ||
2328 - nla_put_u64(skb, MACSEC_SA_ATTR_KEYID, tx_sa->key.id) ||
2321 + nla_put(skb, MACSEC_SA_ATTR_KEYID, MACSEC_KEYID_LEN, tx_sa->key.id) ||
2329 2322 nla_put_u8(skb, MACSEC_SA_ATTR_ACTIVE, tx_sa->active)) {
2330 2323 nla_nest_cancel(skb, txsa_nest);
2331 2324 nla_nest_cancel(skb, txsa_list);
··· 2426 2419
2427 2420 if (nla_put_u8(skb, MACSEC_SA_ATTR_AN, i) ||
2428 2421 nla_put_u32(skb, MACSEC_SA_ATTR_PN, rx_sa->next_pn) ||
2429 - nla_put_u64(skb, MACSEC_SA_ATTR_KEYID, rx_sa->key.id) ||
2422 + nla_put(skb, MACSEC_SA_ATTR_KEYID, MACSEC_KEYID_LEN, rx_sa->key.id) ||
2430 2423 nla_put_u8(skb, MACSEC_SA_ATTR_ACTIVE, rx_sa->active)) {
2431 2424 nla_nest_cancel(skb, rxsa_nest);
2432 2425 nla_nest_cancel(skb, rxsc_nest);
+1 -1
drivers/net/macvtap.c
··· 373 373 goto wake_up; 374 374 } 375 375 376 - kfree_skb(skb); 376 + consume_skb(skb); 377 377 while (segs) { 378 378 struct sk_buff *nskb = segs->next; 379 379
+14 -18
drivers/net/phy/at803x.c
··· 359 359 * in the FIFO. In such cases, the FIFO enters an error mode it 360 360 * cannot recover from by software. 361 361 */ 362 - if (phydev->drv->phy_id == ATH8030_PHY_ID) { 363 - if (phydev->state == PHY_NOLINK) { 364 - if (priv->gpiod_reset && !priv->phy_reset) { 365 - struct at803x_context context; 362 + if (phydev->state == PHY_NOLINK) { 363 + if (priv->gpiod_reset && !priv->phy_reset) { 364 + struct at803x_context context; 366 365 367 - at803x_context_save(phydev, &context); 366 + at803x_context_save(phydev, &context); 368 367 369 - gpiod_set_value(priv->gpiod_reset, 1); 370 - msleep(1); 371 - gpiod_set_value(priv->gpiod_reset, 0); 372 - msleep(1); 368 + gpiod_set_value(priv->gpiod_reset, 1); 369 + msleep(1); 370 + gpiod_set_value(priv->gpiod_reset, 0); 371 + msleep(1); 373 372 374 - at803x_context_restore(phydev, &context); 373 + at803x_context_restore(phydev, &context); 375 374 376 - phydev_dbg(phydev, "%s(): phy was reset\n", 377 - __func__); 378 - priv->phy_reset = true; 379 - } 380 - } else { 381 - priv->phy_reset = false; 375 + phydev_dbg(phydev, "%s(): phy was reset\n", 376 + __func__); 377 + priv->phy_reset = true; 382 378 } 379 + } else { 380 + priv->phy_reset = false; 383 381 } 384 382 } 385 383 ··· 389 391 .phy_id_mask = 0xffffffef, 390 392 .probe = at803x_probe, 391 393 .config_init = at803x_config_init, 392 - .link_change_notify = at803x_link_change_notify, 393 394 .set_wol = at803x_set_wol, 394 395 .get_wol = at803x_get_wol, 395 396 .suspend = at803x_suspend, ··· 424 427 .phy_id_mask = 0xffffffef, 425 428 .probe = at803x_probe, 426 429 .config_init = at803x_config_init, 427 - .link_change_notify = at803x_link_change_notify, 428 430 .set_wol = at803x_set_wol, 429 431 .get_wol = at803x_get_wol, 430 432 .suspend = at803x_suspend,
+38 -6
drivers/net/usb/lan78xx.c
··· 269 269 struct lan78xx_net *dev; 270 270 enum skb_state state; 271 271 size_t length; 272 + int num_of_packet; 272 273 }; 273 274 274 275 struct usb_context { ··· 1804 1803 1805 1804 static void lan78xx_link_status_change(struct net_device *net) 1806 1805 { 1807 - /* nothing to do */ 1806 + struct phy_device *phydev = net->phydev; 1807 + int ret, temp; 1808 + 1809 + /* At forced 100 F/H mode, chip may fail to set mode correctly 1810 + * when cable is switched between long(~50+m) and short one. 1811 + * As workaround, set to 10 before setting to 100 1812 + * at forced 100 F/H mode. 1813 + */ 1814 + if (!phydev->autoneg && (phydev->speed == 100)) { 1815 + /* disable phy interrupt */ 1816 + temp = phy_read(phydev, LAN88XX_INT_MASK); 1817 + temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_; 1818 + ret = phy_write(phydev, LAN88XX_INT_MASK, temp); 1819 + 1820 + temp = phy_read(phydev, MII_BMCR); 1821 + temp &= ~(BMCR_SPEED100 | BMCR_SPEED1000); 1822 + phy_write(phydev, MII_BMCR, temp); /* set to 10 first */ 1823 + temp |= BMCR_SPEED100; 1824 + phy_write(phydev, MII_BMCR, temp); /* set to 100 later */ 1825 + 1826 + /* clear pending interrupt generated while workaround */ 1827 + temp = phy_read(phydev, LAN88XX_INT_STS); 1828 + 1829 + /* enable phy interrupt back */ 1830 + temp = phy_read(phydev, LAN88XX_INT_MASK); 1831 + temp |= LAN88XX_INT_MASK_MDINTPIN_EN_; 1832 + ret = phy_write(phydev, LAN88XX_INT_MASK, temp); 1833 + } 1808 1834 } 1809 1835 1810 1836 static int lan78xx_phy_init(struct lan78xx_net *dev) ··· 2492 2464 struct lan78xx_net *dev = entry->dev; 2493 2465 2494 2466 if (urb->status == 0) { 2495 - dev->net->stats.tx_packets++; 2467 + dev->net->stats.tx_packets += entry->num_of_packet; 2496 2468 dev->net->stats.tx_bytes += entry->length; 2497 2469 } else { 2498 2470 dev->net->stats.tx_errors++; ··· 2709 2681 return; 2710 2682 } 2711 2683 2712 - skb->protocol = eth_type_trans(skb, dev->net); 2713 2684 dev->net->stats.rx_packets++; 2714 2685 dev->net->stats.rx_bytes += 
skb->len; 2686 + 2687 + skb->protocol = eth_type_trans(skb, dev->net); 2715 2688 2716 2689 netif_dbg(dev, rx_status, dev->net, "< rx, len %zu, type 0x%x\n", 2717 2690 skb->len + sizeof(struct ethhdr), skb->protocol); ··· 2963 2934 2964 2935 skb_totallen = 0; 2965 2936 pkt_cnt = 0; 2937 + count = 0; 2938 + length = 0; 2966 2939 for (skb = tqp->next; pkt_cnt < tqp->qlen; skb = skb->next) { 2967 2940 if (skb_is_gso(skb)) { 2968 2941 if (pkt_cnt) { 2969 2942 /* handle previous packets first */ 2970 2943 break; 2971 2944 } 2972 - length = skb->len; 2945 + count = 1; 2946 + length = skb->len - TX_OVERHEAD; 2973 2947 skb2 = skb_dequeue(tqp); 2974 2948 goto gso_skb; 2975 2949 } ··· 2993 2961 for (count = pos = 0; count < pkt_cnt; count++) { 2994 2962 skb2 = skb_dequeue(tqp); 2995 2963 if (skb2) { 2964 + length += (skb2->len - TX_OVERHEAD); 2996 2965 memcpy(skb->data + pos, skb2->data, skb2->len); 2997 2966 pos += roundup(skb2->len, sizeof(u32)); 2998 2967 dev_kfree_skb(skb2); 2999 2968 } 3000 2969 } 3001 - 3002 - length = skb_totallen; 3003 2970 3004 2971 gso_skb: 3005 2972 urb = usb_alloc_urb(0, GFP_ATOMIC); ··· 3011 2980 entry->urb = urb; 3012 2981 entry->dev = dev; 3013 2982 entry->length = length; 2983 + entry->num_of_packet = count; 3014 2984 3015 2985 spin_lock_irqsave(&dev->txq.lock, flags); 3016 2986 ret = usb_autopm_get_interface_async(dev->intf);
+5 -5
drivers/net/usb/pegasus.c
··· 411 411 int ret; 412 412 413 413 read_mii_word(pegasus, pegasus->phy, MII_LPA, &linkpart); 414 - data[0] = 0xc9; 414 + data[0] = 0xc8; /* TX & RX enable, append status, no CRC */ 415 415 data[1] = 0; 416 416 if (linkpart & (ADVERTISE_100FULL | ADVERTISE_10FULL)) 417 417 data[1] |= 0x20; /* set full duplex */ ··· 497 497 pkt_len = buf[count - 3] << 8; 498 498 pkt_len += buf[count - 4]; 499 499 pkt_len &= 0xfff; 500 - pkt_len -= 8; 500 + pkt_len -= 4; 501 501 } 502 502 503 503 /* ··· 528 528 goon: 529 529 usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb, 530 530 usb_rcvbulkpipe(pegasus->usb, 1), 531 - pegasus->rx_skb->data, PEGASUS_MTU + 8, 531 + pegasus->rx_skb->data, PEGASUS_MTU, 532 532 read_bulk_callback, pegasus); 533 533 rx_status = usb_submit_urb(pegasus->rx_urb, GFP_ATOMIC); 534 534 if (rx_status == -ENODEV) ··· 569 569 } 570 570 usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb, 571 571 usb_rcvbulkpipe(pegasus->usb, 1), 572 - pegasus->rx_skb->data, PEGASUS_MTU + 8, 572 + pegasus->rx_skb->data, PEGASUS_MTU, 573 573 read_bulk_callback, pegasus); 574 574 try_again: 575 575 status = usb_submit_urb(pegasus->rx_urb, GFP_ATOMIC); ··· 823 823 824 824 usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb, 825 825 usb_rcvbulkpipe(pegasus->usb, 1), 826 - pegasus->rx_skb->data, PEGASUS_MTU + 8, 826 + pegasus->rx_skb->data, PEGASUS_MTU, 827 827 read_bulk_callback, pegasus); 828 828 if ((res = usb_submit_urb(pegasus->rx_urb, GFP_KERNEL))) { 829 829 if (res == -ENODEV)
+11 -1
drivers/net/usb/smsc75xx.c
··· 29 29 #include <linux/crc32.h> 30 30 #include <linux/usb/usbnet.h> 31 31 #include <linux/slab.h> 32 + #include <linux/of_net.h> 32 33 #include "smsc75xx.h" 33 34 34 35 #define SMSC_CHIPNAME "smsc75xx" ··· 762 761 763 762 static void smsc75xx_init_mac_address(struct usbnet *dev) 764 763 { 764 + const u8 *mac_addr; 765 + 766 + /* maybe the boot loader passed the MAC address in devicetree */ 767 + mac_addr = of_get_mac_address(dev->udev->dev.of_node); 768 + if (mac_addr) { 769 + memcpy(dev->net->dev_addr, mac_addr, ETH_ALEN); 770 + return; 771 + } 772 + 765 773 /* try reading mac address from EEPROM */ 766 774 if (smsc75xx_read_eeprom(dev, EEPROM_MAC_OFFSET, ETH_ALEN, 767 775 dev->net->dev_addr) == 0) { ··· 782 772 } 783 773 } 784 774 785 - /* no eeprom, or eeprom values are invalid. generate random MAC */ 775 + /* no useful static MAC address found. generate a random one */ 786 776 eth_hw_addr_random(dev->net); 787 777 netif_dbg(dev, ifup, dev->net, "MAC address set to eth_random_addr\n"); 788 778 }
+11 -1
drivers/net/usb/smsc95xx.c
··· 29 29 #include <linux/crc32.h> 30 30 #include <linux/usb/usbnet.h> 31 31 #include <linux/slab.h> 32 + #include <linux/of_net.h> 32 33 #include "smsc95xx.h" 33 34 34 35 #define SMSC_CHIPNAME "smsc95xx" ··· 766 765 767 766 static void smsc95xx_init_mac_address(struct usbnet *dev) 768 767 { 768 + const u8 *mac_addr; 769 + 770 + /* maybe the boot loader passed the MAC address in devicetree */ 771 + mac_addr = of_get_mac_address(dev->udev->dev.of_node); 772 + if (mac_addr) { 773 + memcpy(dev->net->dev_addr, mac_addr, ETH_ALEN); 774 + return; 775 + } 776 + 769 777 /* try reading mac address from EEPROM */ 770 778 if (smsc95xx_read_eeprom(dev, EEPROM_MAC_OFFSET, ETH_ALEN, 771 779 dev->net->dev_addr) == 0) { ··· 785 775 } 786 776 } 787 777 788 - /* no eeprom, or eeprom values are invalid. generate random MAC */ 778 + /* no useful static MAC address found. generate a random one */ 789 779 eth_hw_addr_random(dev->net); 790 780 netif_dbg(dev, ifup, dev->net, "MAC address set to eth_random_addr\n"); 791 781 }
+3 -2
drivers/net/vxlan.c
··· 616 616 static int vxlan_gro_complete(struct sk_buff *skb, int nhoff, 617 617 struct udp_offload *uoff) 618 618 { 619 - udp_tunnel_gro_complete(skb, nhoff); 620 - 619 + /* Sets 'skb->inner_mac_header' since we are always called with 620 + * 'skb->encapsulation' set. 621 + */ 621 622 return eth_gro_complete(skb, nhoff + sizeof(struct vxlanhdr)); 622 623 } 623 624
+3 -5
drivers/net/wireless/ath/ath9k/ar5008_phy.c
··· 274 274 }; 275 275 static const int inc[4] = { 0, 100, 0, 0 }; 276 276 277 + memset(&mask_m, 0, sizeof(int8_t) * 123); 278 + memset(&mask_p, 0, sizeof(int8_t) * 123); 279 + 277 280 cur_bin = -6000; 278 281 upper = bin + 100; 279 282 lower = bin - 100; ··· 427 424 int tmp, new; 428 425 int i; 429 426 430 - int8_t mask_m[123]; 431 - int8_t mask_p[123]; 432 427 int cur_bb_spur; 433 428 bool is2GHz = IS_CHAN_2GHZ(chan); 434 - 435 - memset(&mask_m, 0, sizeof(int8_t) * 123); 436 - memset(&mask_p, 0, sizeof(int8_t) * 123); 437 429 438 430 for (i = 0; i < AR_EEPROM_MODAL_SPURS; i++) { 439 431 cur_bb_spur = ah->eep_ops->get_spur_channel(ah, i, is2GHz);
-5
drivers/net/wireless/ath/ath9k/ar9002_phy.c
··· 178 178 int i; 179 179 struct chan_centers centers; 180 180 181 - int8_t mask_m[123]; 182 - int8_t mask_p[123]; 183 181 int cur_bb_spur; 184 182 bool is2GHz = IS_CHAN_2GHZ(chan); 185 - 186 - memset(&mask_m, 0, sizeof(int8_t) * 123); 187 - memset(&mask_p, 0, sizeof(int8_t) * 123); 188 183 189 184 ath9k_hw_get_channel_centers(ah, chan, &centers); 190 185 freq = centers.synth_center;
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-8000.c
··· 93 93 #define IWL8260_SMEM_OFFSET 0x400000 94 94 #define IWL8260_SMEM_LEN 0x68000 95 95 96 - #define IWL8000_FW_PRE "iwlwifi-8000" 96 + #define IWL8000_FW_PRE "iwlwifi-8000C-" 97 97 #define IWL8000_MODULE_FIRMWARE(api) \ 98 98 IWL8000_FW_PRE "-" __stringify(api) ".ucode" 99 99
+10 -16
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
··· 238 238 snprintf(drv->firmware_name, sizeof(drv->firmware_name), "%s%s.ucode", 239 239 name_pre, tag); 240 240 241 - /* 242 - * Starting 8000B - FW name format has changed. This overwrites the 243 - * previous name and uses the new format. 244 - */ 245 - if (drv->trans->cfg->device_family == IWL_DEVICE_FAMILY_8000) { 246 - char rev_step = 'A' + CSR_HW_REV_STEP(drv->trans->hw_rev); 247 - 248 - if (rev_step != 'A') 249 - snprintf(drv->firmware_name, 250 - sizeof(drv->firmware_name), "%s%c-%s.ucode", 251 - name_pre, rev_step, tag); 252 - } 253 - 254 241 IWL_DEBUG_INFO(drv, "attempting to load firmware %s'%s'\n", 255 242 (drv->fw_index == UCODE_EXPERIMENTAL_INDEX) 256 243 ? "EXPERIMENTAL " : "", ··· 1047 1060 return -EINVAL; 1048 1061 } 1049 1062 1050 - if (WARN(fw_has_capa(capa, IWL_UCODE_TLV_CAPA_GSCAN_SUPPORT) && 1051 - !gscan_capa, 1052 - "GSCAN is supported but capabilities TLV is unavailable\n")) 1063 + /* 1064 + * If ucode advertises that it supports GSCAN but GSCAN 1065 + * capabilities TLV is not present, or if it has an old format, 1066 + * warn and continue without GSCAN. 1067 + */ 1068 + if (fw_has_capa(capa, IWL_UCODE_TLV_CAPA_GSCAN_SUPPORT) && 1069 + !gscan_capa) { 1070 + IWL_DEBUG_INFO(drv, 1071 + "GSCAN is supported but capabilities TLV is unavailable\n"); 1053 1072 __clear_bit((__force long)IWL_UCODE_TLV_CAPA_GSCAN_SUPPORT, 1054 1073 capa->_capa); 1074 + } 1055 1075 1056 1076 return 0; 1057 1077
+4 -2
drivers/net/wireless/intel/iwlwifi/mvm/fw-dbg.c
··· 526 526 file_len += sizeof(*dump_data) + sizeof(*dump_mem) + sram2_len; 527 527 528 528 /* Make room for fw's virtual image pages, if it exists */ 529 - if (mvm->fw->img[mvm->cur_ucode].paging_mem_size) 529 + if (mvm->fw->img[mvm->cur_ucode].paging_mem_size && 530 + mvm->fw_paging_db[0].fw_paging_block) 530 531 file_len += mvm->num_of_paging_blk * 531 532 (sizeof(*dump_data) + 532 533 sizeof(struct iwl_fw_error_dump_paging) + ··· 644 643 } 645 644 646 645 /* Dump fw's virtual image */ 647 - if (mvm->fw->img[mvm->cur_ucode].paging_mem_size) { 646 + if (mvm->fw->img[mvm->cur_ucode].paging_mem_size && 647 + mvm->fw_paging_db[0].fw_paging_block) { 648 648 for (i = 1; i < mvm->num_of_paging_blk + 1; i++) { 649 649 struct iwl_fw_error_dump_paging *paging; 650 650 struct page *pages =
+2
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 144 144 145 145 __free_pages(mvm->fw_paging_db[i].fw_paging_block, 146 146 get_order(mvm->fw_paging_db[i].fw_paging_size)); 147 + mvm->fw_paging_db[i].fw_paging_block = NULL; 147 148 } 148 149 kfree(mvm->trans->paging_download_buf); 149 150 mvm->trans->paging_download_buf = NULL; 151 + mvm->trans->paging_db = NULL; 150 152 151 153 memset(mvm->fw_paging_db, 0, sizeof(mvm->fw_paging_db)); 152 154 }
+10
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 479 479 {IWL_PCI_DEVICE(0x24F3, 0x0930, iwl8260_2ac_cfg)}, 480 480 {IWL_PCI_DEVICE(0x24F3, 0x0000, iwl8265_2ac_cfg)}, 481 481 {IWL_PCI_DEVICE(0x24FD, 0x0010, iwl8265_2ac_cfg)}, 482 + {IWL_PCI_DEVICE(0x24FD, 0x0110, iwl8265_2ac_cfg)}, 483 + {IWL_PCI_DEVICE(0x24FD, 0x1110, iwl8265_2ac_cfg)}, 484 + {IWL_PCI_DEVICE(0x24FD, 0x1010, iwl8265_2ac_cfg)}, 485 + {IWL_PCI_DEVICE(0x24FD, 0x0050, iwl8265_2ac_cfg)}, 486 + {IWL_PCI_DEVICE(0x24FD, 0x0150, iwl8265_2ac_cfg)}, 487 + {IWL_PCI_DEVICE(0x24FD, 0x9010, iwl8265_2ac_cfg)}, 488 + {IWL_PCI_DEVICE(0x24FD, 0x8110, iwl8265_2ac_cfg)}, 489 + {IWL_PCI_DEVICE(0x24FD, 0x8050, iwl8265_2ac_cfg)}, 482 490 {IWL_PCI_DEVICE(0x24FD, 0x8010, iwl8265_2ac_cfg)}, 483 491 {IWL_PCI_DEVICE(0x24FD, 0x0810, iwl8265_2ac_cfg)}, 492 + {IWL_PCI_DEVICE(0x24FD, 0x9110, iwl8265_2ac_cfg)}, 493 + {IWL_PCI_DEVICE(0x24FD, 0x8130, iwl8265_2ac_cfg)}, 484 494 485 495 /* 9000 Series */ 486 496 {IWL_PCI_DEVICE(0x9DF0, 0x2A10, iwl5165_2ac_cfg)},
+10 -3
drivers/nvdimm/pmem.c
··· 397 397 */ 398 398 start += start_pad; 399 399 npfns = (pmem->size - start_pad - end_trunc - SZ_8K) / SZ_4K; 400 - if (nd_pfn->mode == PFN_MODE_PMEM) 401 - offset = ALIGN(start + SZ_8K + 64 * npfns, nd_pfn->align) 400 + if (nd_pfn->mode == PFN_MODE_PMEM) { 401 + unsigned long memmap_size; 402 + 403 + /* 404 + * vmemmap_populate_hugepages() allocates the memmap array in 405 + * PMD_SIZE chunks. 406 + */ 407 + memmap_size = ALIGN(64 * npfns, PMD_SIZE); 408 + offset = ALIGN(start + SZ_8K + memmap_size, nd_pfn->align) 402 409 - start; 403 - else if (nd_pfn->mode == PFN_MODE_RAM) 410 + } else if (nd_pfn->mode == PFN_MODE_RAM) 404 411 offset = ALIGN(start + SZ_8K, nd_pfn->align) - start; 405 412 else 406 413 goto err;
+2 -2
drivers/nvmem/mxs-ocotp.c
··· 94 94 if (ret) 95 95 goto close_banks; 96 96 97 - while (val_size) { 97 + while (val_size >= reg_size) { 98 98 if ((offset < OCOTP_DATA_OFFSET) || (offset % 16)) { 99 99 /* fill up non-data register */ 100 100 *buf = 0; ··· 103 103 } 104 104 105 105 buf++; 106 - val_size--; 106 + val_size -= reg_size; 107 107 offset += reg_size; 108 108 } 109 109
+4 -2
drivers/pci/bus.c
··· 294 294 295 295 dev->match_driver = true; 296 296 retval = device_attach(&dev->dev); 297 - if (retval < 0) { 297 + if (retval < 0 && retval != -EPROBE_DEFER) { 298 298 dev_warn(&dev->dev, "device attach failed (%d)\n", retval); 299 299 pci_proc_detach_device(dev); 300 300 pci_remove_sysfs_dev_files(dev); ··· 324 324 } 325 325 326 326 list_for_each_entry(dev, &bus->devices, bus_list) { 327 - BUG_ON(!dev->is_added); 327 + /* Skip if device attach failed */ 328 + if (!dev->is_added) 329 + continue; 328 330 child = dev->subordinate; 329 331 if (child) 330 332 pci_bus_add_devices(child);
+64 -51
drivers/rapidio/devices/rio_mport_cdev.c
··· 126 126 struct list_head node; 127 127 struct mport_dev *md; 128 128 enum rio_mport_map_dir dir; 129 - u32 rioid; 129 + u16 rioid; 130 130 u64 rio_addr; 131 131 dma_addr_t phys_addr; /* for mmap */ 132 132 void *virt_addr; /* kernel address, for dma_free_coherent */ ··· 137 137 138 138 struct rio_mport_dma_map { 139 139 int valid; 140 - uint64_t length; 140 + u64 length; 141 141 void *vaddr; 142 142 dma_addr_t paddr; 143 143 }; ··· 208 208 struct kfifo event_fifo; 209 209 wait_queue_head_t event_rx_wait; 210 210 spinlock_t fifo_lock; 211 - unsigned int event_mask; /* RIO_DOORBELL, RIO_PORTWRITE */ 211 + u32 event_mask; /* RIO_DOORBELL, RIO_PORTWRITE */ 212 212 #ifdef CONFIG_RAPIDIO_DMA_ENGINE 213 213 struct dma_chan *dmach; 214 214 struct list_head async_list; ··· 276 276 return -EFAULT; 277 277 278 278 if ((maint_io.offset % 4) || 279 - (maint_io.length == 0) || (maint_io.length % 4)) 279 + (maint_io.length == 0) || (maint_io.length % 4) || 280 + (maint_io.length + maint_io.offset) > RIO_MAINT_SPACE_SZ) 280 281 return -EINVAL; 281 282 282 283 buffer = vmalloc(maint_io.length); ··· 299 298 offset += 4; 300 299 } 301 300 302 - if (unlikely(copy_to_user(maint_io.buffer, buffer, maint_io.length))) 301 + if (unlikely(copy_to_user((void __user *)(uintptr_t)maint_io.buffer, 302 + buffer, maint_io.length))) 303 303 ret = -EFAULT; 304 304 out: 305 305 vfree(buffer); ··· 321 319 return -EFAULT; 322 320 323 321 if ((maint_io.offset % 4) || 324 - (maint_io.length == 0) || (maint_io.length % 4)) 322 + (maint_io.length == 0) || (maint_io.length % 4) || 323 + (maint_io.length + maint_io.offset) > RIO_MAINT_SPACE_SZ) 325 324 return -EINVAL; 326 325 327 326 buffer = vmalloc(maint_io.length); ··· 330 327 return -ENOMEM; 331 328 length = maint_io.length; 332 329 333 - if (unlikely(copy_from_user(buffer, maint_io.buffer, length))) { 330 + if (unlikely(copy_from_user(buffer, 331 + (void __user *)(uintptr_t)maint_io.buffer, length))) { 334 332 ret = -EFAULT; 335 333 goto out; 336 
334 } ··· 364 360 */ 365 361 static int 366 362 rio_mport_create_outbound_mapping(struct mport_dev *md, struct file *filp, 367 - u32 rioid, u64 raddr, u32 size, 363 + u16 rioid, u64 raddr, u32 size, 368 364 dma_addr_t *paddr) 369 365 { 370 366 struct rio_mport *mport = md->mport; ··· 373 369 374 370 rmcd_debug(OBW, "did=%d ra=0x%llx sz=0x%x", rioid, raddr, size); 375 371 376 - map = kzalloc(sizeof(struct rio_mport_mapping), GFP_KERNEL); 372 + map = kzalloc(sizeof(*map), GFP_KERNEL); 377 373 if (map == NULL) 378 374 return -ENOMEM; 379 375 ··· 398 394 399 395 static int 400 396 rio_mport_get_outbound_mapping(struct mport_dev *md, struct file *filp, 401 - u32 rioid, u64 raddr, u32 size, 397 + u16 rioid, u64 raddr, u32 size, 402 398 dma_addr_t *paddr) 403 399 { 404 400 struct rio_mport_mapping *map; ··· 437 433 dma_addr_t paddr; 438 434 int ret; 439 435 440 - if (unlikely(copy_from_user(&map, arg, sizeof(struct rio_mmap)))) 436 + if (unlikely(copy_from_user(&map, arg, sizeof(map)))) 441 437 return -EFAULT; 442 438 443 439 rmcd_debug(OBW, "did=%d ra=0x%llx sz=0x%llx", ··· 452 448 453 449 map.handle = paddr; 454 450 455 - if (unlikely(copy_to_user(arg, &map, sizeof(struct rio_mmap)))) 451 + if (unlikely(copy_to_user(arg, &map, sizeof(map)))) 456 452 return -EFAULT; 457 453 return 0; 458 454 } ··· 473 469 if (!md->mport->ops->unmap_outb) 474 470 return -EPROTONOSUPPORT; 475 471 476 - if (copy_from_user(&handle, arg, sizeof(u64))) 472 + if (copy_from_user(&handle, arg, sizeof(handle))) 477 473 return -EFAULT; 478 474 479 475 rmcd_debug(OBW, "h=0x%llx", handle); ··· 502 498 static int maint_hdid_set(struct mport_cdev_priv *priv, void __user *arg) 503 499 { 504 500 struct mport_dev *md = priv->md; 505 - uint16_t hdid; 501 + u16 hdid; 506 502 507 - if (copy_from_user(&hdid, arg, sizeof(uint16_t))) 503 + if (copy_from_user(&hdid, arg, sizeof(hdid))) 508 504 return -EFAULT; 509 505 510 506 md->mport->host_deviceid = hdid; ··· 524 520 static int maint_comptag_set(struct 
mport_cdev_priv *priv, void __user *arg) 525 521 { 526 522 struct mport_dev *md = priv->md; 527 - uint32_t comptag; 523 + u32 comptag; 528 524 529 - if (copy_from_user(&comptag, arg, sizeof(uint32_t))) 525 + if (copy_from_user(&comptag, arg, sizeof(comptag))) 530 526 return -EFAULT; 531 527 532 528 rio_local_write_config_32(md->mport, RIO_COMPONENT_TAG_CSR, comptag); ··· 841 837 * @xfer: data transfer descriptor structure 842 838 */ 843 839 static int 844 - rio_dma_transfer(struct file *filp, uint32_t transfer_mode, 840 + rio_dma_transfer(struct file *filp, u32 transfer_mode, 845 841 enum rio_transfer_sync sync, enum dma_data_direction dir, 846 842 struct rio_transfer_io *xfer) 847 843 { ··· 879 875 unsigned long offset; 880 876 long pinned; 881 877 882 - offset = (unsigned long)xfer->loc_addr & ~PAGE_MASK; 878 + offset = (unsigned long)(uintptr_t)xfer->loc_addr & ~PAGE_MASK; 883 879 nr_pages = PAGE_ALIGN(xfer->length + offset) >> PAGE_SHIFT; 884 880 885 881 page_list = kmalloc_array(nr_pages, ··· 1019 1015 if (unlikely(copy_from_user(&transaction, arg, sizeof(transaction)))) 1020 1016 return -EFAULT; 1021 1017 1022 - if (transaction.count != 1) 1018 + if (transaction.count != 1) /* only single transfer for now */ 1023 1019 return -EINVAL; 1024 1020 1025 1021 if ((transaction.transfer_mode & 1026 1022 priv->md->properties.transfer_mode) == 0) 1027 1023 return -ENODEV; 1028 1024 1029 - transfer = vmalloc(transaction.count * sizeof(struct rio_transfer_io)); 1025 + transfer = vmalloc(transaction.count * sizeof(*transfer)); 1030 1026 if (!transfer) 1031 1027 return -ENOMEM; 1032 1028 1033 - if (unlikely(copy_from_user(transfer, transaction.block, 1034 - transaction.count * sizeof(struct rio_transfer_io)))) { 1029 + if (unlikely(copy_from_user(transfer, 1030 + (void __user *)(uintptr_t)transaction.block, 1031 + transaction.count * sizeof(*transfer)))) { 1035 1032 ret = -EFAULT; 1036 1033 goto out_free; 1037 1034 } ··· 1043 1038 ret = rio_dma_transfer(filp, 
transaction.transfer_mode, 1044 1039 transaction.sync, dir, &transfer[i]); 1045 1040 1046 - if (unlikely(copy_to_user(transaction.block, transfer, 1047 - transaction.count * sizeof(struct rio_transfer_io)))) 1041 + if (unlikely(copy_to_user((void __user *)(uintptr_t)transaction.block, 1042 + transfer, 1043 + transaction.count * sizeof(*transfer)))) 1048 1044 ret = -EFAULT; 1049 1045 1050 1046 out_free: ··· 1135 1129 } 1136 1130 1137 1131 static int rio_mport_create_dma_mapping(struct mport_dev *md, struct file *filp, 1138 - uint64_t size, struct rio_mport_mapping **mapping) 1132 + u64 size, struct rio_mport_mapping **mapping) 1139 1133 { 1140 1134 struct rio_mport_mapping *map; 1141 1135 1142 - map = kzalloc(sizeof(struct rio_mport_mapping), GFP_KERNEL); 1136 + map = kzalloc(sizeof(*map), GFP_KERNEL); 1143 1137 if (map == NULL) 1144 1138 return -ENOMEM; 1145 1139 ··· 1171 1165 struct rio_mport_mapping *mapping = NULL; 1172 1166 int ret; 1173 1167 1174 - if (unlikely(copy_from_user(&map, arg, sizeof(struct rio_dma_mem)))) 1168 + if (unlikely(copy_from_user(&map, arg, sizeof(map)))) 1175 1169 return -EFAULT; 1176 1170 1177 1171 ret = rio_mport_create_dma_mapping(md, filp, map.length, &mapping); ··· 1180 1174 1181 1175 map.dma_handle = mapping->phys_addr; 1182 1176 1183 - if (unlikely(copy_to_user(arg, &map, sizeof(struct rio_dma_mem)))) { 1177 + if (unlikely(copy_to_user(arg, &map, sizeof(map)))) { 1184 1178 mutex_lock(&md->buf_mutex); 1185 1179 kref_put(&mapping->ref, mport_release_mapping); 1186 1180 mutex_unlock(&md->buf_mutex); ··· 1198 1192 int ret = -EFAULT; 1199 1193 struct rio_mport_mapping *map, *_map; 1200 1194 1201 - if (copy_from_user(&handle, arg, sizeof(u64))) 1195 + if (copy_from_user(&handle, arg, sizeof(handle))) 1202 1196 return -EFAULT; 1203 1197 rmcd_debug(EXIT, "filp=%p", filp); 1204 1198 ··· 1248 1242 1249 1243 static int 1250 1244 rio_mport_create_inbound_mapping(struct mport_dev *md, struct file *filp, 1251 - u64 raddr, u32 size, 1245 + u64 
raddr, u64 size, 1252 1246 struct rio_mport_mapping **mapping) 1253 1247 { 1254 1248 struct rio_mport *mport = md->mport; 1255 1249 struct rio_mport_mapping *map; 1256 1250 int ret; 1257 1251 1258 - map = kzalloc(sizeof(struct rio_mport_mapping), GFP_KERNEL); 1252 + /* rio_map_inb_region() accepts u32 size */ 1253 + if (size > 0xffffffff) 1254 + return -EINVAL; 1255 + 1256 + map = kzalloc(sizeof(*map), GFP_KERNEL); 1259 1257 if (map == NULL) 1260 1258 return -ENOMEM; 1261 1259 ··· 1272 1262 1273 1263 if (raddr == RIO_MAP_ANY_ADDR) 1274 1264 raddr = map->phys_addr; 1275 - ret = rio_map_inb_region(mport, map->phys_addr, raddr, size, 0); 1265 + ret = rio_map_inb_region(mport, map->phys_addr, raddr, (u32)size, 0); 1276 1266 if (ret < 0) 1277 1267 goto err_map_inb; 1278 1268 ··· 1298 1288 1299 1289 static int 1300 1290 rio_mport_get_inbound_mapping(struct mport_dev *md, struct file *filp, 1301 - u64 raddr, u32 size, 1291 + u64 raddr, u64 size, 1302 1292 struct rio_mport_mapping **mapping) 1303 1293 { 1304 1294 struct rio_mport_mapping *map; ··· 1341 1331 1342 1332 if (!md->mport->ops->map_inb) 1343 1333 return -EPROTONOSUPPORT; 1344 - if (unlikely(copy_from_user(&map, arg, sizeof(struct rio_mmap)))) 1334 + if (unlikely(copy_from_user(&map, arg, sizeof(map)))) 1345 1335 return -EFAULT; 1346 1336 1347 1337 rmcd_debug(IBW, "%s filp=%p", dev_name(&priv->md->dev), filp); ··· 1354 1344 map.handle = mapping->phys_addr; 1355 1345 map.rio_addr = mapping->rio_addr; 1356 1346 1357 - if (unlikely(copy_to_user(arg, &map, sizeof(struct rio_mmap)))) { 1347 + if (unlikely(copy_to_user(arg, &map, sizeof(map)))) { 1358 1348 /* Delete mapping if it was created by this request */ 1359 1349 if (ret == 0 && mapping->filp == filp) { 1360 1350 mutex_lock(&md->buf_mutex); ··· 1385 1375 if (!md->mport->ops->unmap_inb) 1386 1376 return -EPROTONOSUPPORT; 1387 1377 1388 - if (copy_from_user(&handle, arg, sizeof(u64))) 1378 + if (copy_from_user(&handle, arg, sizeof(handle))) 1389 1379 return 
-EFAULT; 1390 1380 1391 1381 mutex_lock(&md->buf_mutex); ··· 1411 1401 static int maint_port_idx_get(struct mport_cdev_priv *priv, void __user *arg) 1412 1402 { 1413 1403 struct mport_dev *md = priv->md; 1414 - uint32_t port_idx = md->mport->index; 1404 + u32 port_idx = md->mport->index; 1415 1405 1416 1406 rmcd_debug(MPORT, "port_index=%d", port_idx); 1417 1407 ··· 1461 1451 handled = 0; 1462 1452 spin_lock(&data->db_lock); 1463 1453 list_for_each_entry(db_filter, &data->doorbells, data_node) { 1464 - if (((db_filter->filter.rioid == 0xffffffff || 1454 + if (((db_filter->filter.rioid == RIO_INVALID_DESTID || 1465 1455 db_filter->filter.rioid == src)) && 1466 1456 info >= db_filter->filter.low && 1467 1457 info <= db_filter->filter.high) { ··· 1534 1524 1535 1525 if (copy_from_user(&filter, arg, sizeof(filter))) 1536 1526 return -EFAULT; 1527 + 1528 + if (filter.low > filter.high) 1529 + return -EINVAL; 1537 1530 1538 1531 spin_lock_irqsave(&priv->md->db_lock, flags); 1539 1532 list_for_each_entry(db_filter, &priv->db_filters, priv_node) { ··· 1750 1737 return -EEXIST; 1751 1738 } 1752 1739 1753 - size = sizeof(struct rio_dev); 1740 + size = sizeof(*rdev); 1754 1741 mport = md->mport; 1755 - destid = (u16)dev_info.destid; 1756 - hopcount = (u8)dev_info.hopcount; 1742 + destid = dev_info.destid; 1743 + hopcount = dev_info.hopcount; 1757 1744 1758 1745 if (rio_mport_read_config_32(mport, destid, hopcount, 1759 1746 RIO_PEF_CAR, &rval)) ··· 1885 1872 do { 1886 1873 rdev = rio_get_comptag(dev_info.comptag, rdev); 1887 1874 if (rdev && rdev->dev.parent == &mport->net->dev && 1888 - rdev->destid == (u16)dev_info.destid && 1889 - rdev->hopcount == (u8)dev_info.hopcount) 1875 + rdev->destid == dev_info.destid && 1876 + rdev->hopcount == dev_info.hopcount) 1890 1877 break; 1891 1878 } while (rdev); 1892 1879 } ··· 2159 2146 return maint_port_idx_get(data, (void __user *)arg); 2160 2147 case RIO_MPORT_GET_PROPERTIES: 2161 2148 md->properties.hdid = md->mport->host_deviceid; 
2162 - if (copy_to_user((void __user *)arg, &(data->md->properties), 2163 - sizeof(data->md->properties))) 2149 + if (copy_to_user((void __user *)arg, &(md->properties), 2150 + sizeof(md->properties))) 2164 2151 return -EFAULT; 2165 2152 return 0; 2166 2153 case RIO_ENABLE_DOORBELL_RANGE: ··· 2172 2159 case RIO_DISABLE_PORTWRITE_RANGE: 2173 2160 return rio_mport_remove_pw_filter(data, (void __user *)arg); 2174 2161 case RIO_SET_EVENT_MASK: 2175 - data->event_mask = arg; 2162 + data->event_mask = (u32)arg; 2176 2163 return 0; 2177 2164 case RIO_GET_EVENT_MASK: 2178 2165 if (copy_to_user((void __user *)arg, &data->event_mask, 2179 - sizeof(data->event_mask))) 2166 + sizeof(u32))) 2180 2167 return -EFAULT; 2181 2168 return 0; 2182 2169 case RIO_MAP_OUTBOUND: ··· 2387 2374 return -EINVAL; 2388 2375 2389 2376 ret = rio_mport_send_doorbell(mport, 2390 - (u16)event.u.doorbell.rioid, 2377 + event.u.doorbell.rioid, 2391 2378 event.u.doorbell.payload); 2392 2379 if (ret < 0) 2393 2380 return ret; ··· 2434 2421 struct mport_dev *md; 2435 2422 struct rio_mport_attr attr; 2436 2423 2437 - md = kzalloc(sizeof(struct mport_dev), GFP_KERNEL); 2424 + md = kzalloc(sizeof(*md), GFP_KERNEL); 2438 2425 if (!md) { 2439 2426 rmcd_error("Unable allocate a device object"); 2440 2427 return NULL; ··· 2483 2470 /* The transfer_mode property will be returned through mport query 2484 2471 * interface 2485 2472 */ 2486 - #ifdef CONFIG_PPC /* for now: only on Freescale's SoCs */ 2473 + #ifdef CONFIG_FSL_RIO /* for now: only on Freescale's SoCs */ 2487 2474 md->properties.transfer_mode |= RIO_TRANSFER_MODE_MAPPED; 2488 2475 #else 2489 2476 md->properties.transfer_mode |= RIO_TRANSFER_MODE_TRANSFER;
-6
drivers/usb/core/port.c
···	249	249	
	250	250		return retval;
	251	251	}
	252	-	
	253	-	static int usb_port_prepare(struct device *dev)
	254	-	{
	255	-		return 1;
	256	-	}
	257	252	#endif
	258	253	
	259	254	static const struct dev_pm_ops usb_port_pm_ops = {
	260	255	#ifdef CONFIG_PM
	261	256		.runtime_suspend = usb_port_runtime_suspend,
	262	257		.runtime_resume = usb_port_runtime_resume,
	263	-		.prepare = usb_port_prepare,
	264	258	#endif
	265	259	};
	266	260	
+1 -7
drivers/usb/core/usb.c
···	312	312	
	313	313	static int usb_dev_prepare(struct device *dev)
	314	314	{
	315	-		struct usb_device *udev = to_usb_device(dev);
	316	-	
	317	-		/* Return 0 if the current wakeup setting is wrong, otherwise 1 */
	318	-		if (udev->do_remote_wakeup != device_may_wakeup(dev))
	319	-			return 0;
	320	-	
	321	-		return 1;
	315	+		return 0;		/* Implement eventually? */
	322	316	}
	323	317	
	324	318	static void usb_dev_complete(struct device *dev)
+2 -2
drivers/usb/musb/jz4740.c
···	83	83	{
	84	84		usb_phy_generic_register();
	85	85		musb->xceiv = usb_get_phy(USB_PHY_TYPE_USB2);
	86	-		if (!musb->xceiv) {
	86	+		if (IS_ERR(musb->xceiv)) {
	87	87			pr_err("HS UDC: no transceiver configured\n");
	88	-			return -ENODEV;
	88	+			return PTR_ERR(musb->xceiv);
	89	89		}
	90	90	
	91	91		/* Silicon does not implement ConfigData register.
+3 -3
drivers/usb/musb/musb_gadget.c
···	1164	1164			musb_writew(epio, MUSB_RXMAXP, 0);
	1165	1165		}
	1166	1166	
	1167	-		musb_ep->desc = NULL;
	1168	-		musb_ep->end_point.desc = NULL;
	1169	-	
	1170	1167		/* abort all pending DMA and requests */
	1171	1168		nuke(musb_ep, -ESHUTDOWN);
	1169	+	
	1170	+		musb_ep->desc = NULL;
	1171	+		musb_ep->end_point.desc = NULL;
	1172	1172	
	1173	1173		schedule_work(&musb->irq_work);
	1174	1174	
+1 -1
drivers/usb/musb/musb_host.c
···	2735	2735		.description = "musb-hcd",
	2736	2736		.product_desc = "MUSB HDRC host driver",
	2737	2737		.hcd_priv_size = sizeof(struct musb *),
	2738	-		.flags = HCD_USB2 | HCD_MEMORY | HCD_BH,
	2738	+		.flags = HCD_USB2 | HCD_MEMORY,
	2739	2739	
	2740	2740		/* not using irq handler or reset hooks from usbcore, since
	2741	2741		 * those must be shared with peripheral code for OTG configs
+4
drivers/usb/serial/cp210x.c
··· 109 109 { USB_DEVICE(0x10C4, 0x826B) }, /* Cygnal Integrated Products, Inc., Fasttrax GPS demonstration module */ 110 110 { USB_DEVICE(0x10C4, 0x8281) }, /* Nanotec Plug & Drive */ 111 111 { USB_DEVICE(0x10C4, 0x8293) }, /* Telegesis ETRX2USB */ 112 + { USB_DEVICE(0x10C4, 0x82F4) }, /* Starizona MicroTouch */ 112 113 { USB_DEVICE(0x10C4, 0x82F9) }, /* Procyon AVS */ 113 114 { USB_DEVICE(0x10C4, 0x8341) }, /* Siemens MC35PU GPRS Modem */ 114 115 { USB_DEVICE(0x10C4, 0x8382) }, /* Cygnal Integrated Products, Inc. */ ··· 119 118 { USB_DEVICE(0x10C4, 0x8418) }, /* IRZ Automation Teleport SG-10 GSM/GPRS Modem */ 120 119 { USB_DEVICE(0x10C4, 0x846E) }, /* BEI USB Sensor Interface (VCP) */ 121 120 { USB_DEVICE(0x10C4, 0x8477) }, /* Balluff RFID */ 121 + { USB_DEVICE(0x10C4, 0x84B6) }, /* Starizona Hyperion */ 122 122 { USB_DEVICE(0x10C4, 0x85EA) }, /* AC-Services IBUS-IF */ 123 123 { USB_DEVICE(0x10C4, 0x85EB) }, /* AC-Services CIS-IBUS */ 124 124 { USB_DEVICE(0x10C4, 0x85F8) }, /* Virtenio Preon32 */ ··· 143 141 { USB_DEVICE(0x10C4, 0xF004) }, /* Elan Digital Systems USBcount50 */ 144 142 { USB_DEVICE(0x10C5, 0xEA61) }, /* Silicon Labs MobiData GPRS USB Modem */ 145 143 { USB_DEVICE(0x10CE, 0xEA6A) }, /* Silicon Labs MobiData GPRS USB Modem 100EU */ 144 + { USB_DEVICE(0x12B8, 0xEC60) }, /* Link G4 ECU */ 145 + { USB_DEVICE(0x12B8, 0xEC62) }, /* Link G4+ ECU */ 146 146 { USB_DEVICE(0x13AD, 0x9999) }, /* Baltech card reader */ 147 147 { USB_DEVICE(0x1555, 0x0004) }, /* Owen AC4 USB-RS485 Converter */ 148 148 { USB_DEVICE(0x166A, 0x0201) }, /* Clipsal 5500PACA C-Bus Pascal Automation Controller */
+1 -1
drivers/virtio/virtio_ring.c
···	1006	1006					const char *name)
	1007	1007	{
	1008	1008		struct virtqueue *vq;
	1009	-		void *queue;
	1009	+		void *queue = NULL;
	1010	1010		dma_addr_t dma_addr;
	1011	1011		size_t queue_size_in_bytes;
	1012	1012		struct vring vring;
+16
drivers/xen/balloon.c
···	151	151	static void balloon_process(struct work_struct *work);
	152	152	static DECLARE_DELAYED_WORK(balloon_worker, balloon_process);
	153	153	
	154	+	static void release_memory_resource(struct resource *resource);
	155	+	
	154	156	/* When ballooning out (allocating memory to return to Xen) we don't really
	155	157	   want the kernel to try too hard since that can trigger the oom killer. */
	156	158	#define GFP_BALLOON \
···	268	266			kfree(res);
	269	267			return NULL;
	270	268		}
	269	+	
	270	+	#ifdef CONFIG_SPARSEMEM
	271	+		{
	272	+			unsigned long limit = 1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT);
	273	+			unsigned long pfn = res->start >> PAGE_SHIFT;
	274	+	
	275	+			if (pfn > limit) {
	276	+				pr_err("New System RAM resource outside addressable RAM (%lu > %lu)\n",
	277	+				       pfn, limit);
	278	+				release_memory_resource(res);
	279	+				return NULL;
	280	+			}
	281	+		}
	282	+	#endif
	271	283	
	272	284		return res;
	273	285	}
+8 -12
drivers/xen/evtchn.c
···	316	316	{
	317	317		unsigned int new_size;
	318	318		evtchn_port_t *new_ring, *old_ring;
	319	-		unsigned int p, c;
	320	319	
	321	320		/*
	322	321		 * Ensure the ring is large enough to capture all possible
···	345	346		/*
	346	347		 * Copy the old ring contents to the new ring.
	347	348		 *
	348	-		 * If the ring contents crosses the end of the current ring,
	349	-		 * it needs to be copied in two chunks.
	349	+		 * To take care of wrapping, a full ring, and the new index
	350	+		 * pointing into the second half, simply copy the old contents
	351	+		 * twice.
	350	352		 *
	351	353		 * +---------+    +------------------+
	352	-		 * |34567  12| -> |    1234567       |
	353	-		 * +-----p-c-+    +------------------+
	354	+		 * |34567  12| -> |34567  1234567  12|
	355	+		 * +-----p-c-+    +-------c------p---+
	354	356		 */
	355	-		p = evtchn_ring_offset(u, u->ring_prod);
	356	-		c = evtchn_ring_offset(u, u->ring_cons);
	357	-		if (p < c) {
	358	-			memcpy(new_ring + c, u->ring + c, (u->ring_size - c) * sizeof(*u->ring));
	359	-			memcpy(new_ring + u->ring_size, u->ring, p * sizeof(*u->ring));
	360	-		} else
	361	-			memcpy(new_ring + c, u->ring + c, (p - c) * sizeof(*u->ring));
	357	+		memcpy(new_ring, old_ring, u->ring_size * sizeof(*u->ring));
	358	+		memcpy(new_ring + u->ring_size, old_ring,
	359	+		       u->ring_size * sizeof(*u->ring));
	362	360	
	363	361		u->ring = new_ring;
	364	362		u->ring_size = new_size;
+1 -1
fs/fuse/file.c
···	1295	1295	
	1296	1296		*nbytesp = nbytes;
	1297	1297	
	1298	-		return ret;
	1298	+		return ret < 0 ? ret : 0;
	1299	1299	}
	1300	1300	
	1301	1301	static inline int fuse_iter_npages(const struct iov_iter *ii_p)
+14 -11
fs/pnode.c
··· 198 198 199 199 /* all accesses are serialized by namespace_sem */ 200 200 static struct user_namespace *user_ns; 201 - static struct mount *last_dest, *last_source, *dest_master; 201 + static struct mount *last_dest, *first_source, *last_source, *dest_master; 202 202 static struct mountpoint *mp; 203 203 static struct hlist_head *list; 204 204 ··· 221 221 type = CL_MAKE_SHARED; 222 222 } else { 223 223 struct mount *n, *p; 224 + bool done; 224 225 for (n = m; ; n = p) { 225 226 p = n->mnt_master; 226 - if (p == dest_master || IS_MNT_MARKED(p)) { 227 - while (last_dest->mnt_master != p) { 228 - last_source = last_source->mnt_master; 229 - last_dest = last_source->mnt_parent; 230 - } 231 - if (!peers(n, last_dest)) { 232 - last_source = last_source->mnt_master; 233 - last_dest = last_source->mnt_parent; 234 - } 227 + if (p == dest_master || IS_MNT_MARKED(p)) 235 228 break; 236 - } 237 229 } 230 + do { 231 + struct mount *parent = last_source->mnt_parent; 232 + if (last_source == first_source) 233 + break; 234 + done = parent->mnt_master == p; 235 + if (done && peers(n, parent)) 236 + break; 237 + last_source = last_source->mnt_master; 238 + } while (!done); 239 + 238 240 type = CL_SLAVE; 239 241 /* beginning of peer group among the slaves? */ 240 242 if (IS_MNT_SHARED(m)) ··· 288 286 */ 289 287 user_ns = current->nsproxy->mnt_ns->user_ns; 290 288 last_dest = dest_mnt; 289 + first_source = source_mnt; 291 290 last_source = source_mnt; 292 291 mp = dest_mp; 293 292 list = tree_list;
+3 -2
fs/proc/base.c
··· 434 434 && !lookup_symbol_name(wchan, symname)) 435 435 seq_printf(m, "%s", symname); 436 436 else 437 - seq_puts(m, "0\n"); 437 + seq_putc(m, '0'); 438 438 439 439 return 0; 440 440 } ··· 955 955 struct mm_struct *mm = file->private_data; 956 956 unsigned long env_start, env_end; 957 957 958 - if (!mm) 958 + /* Ensure the process spawned far enough to have an environment. */ 959 + if (!mm || !mm->env_end) 959 960 return 0; 960 961 961 962 page = (char *)__get_free_page(GFP_TEMPORARY);
+2 -2
fs/udf/super.c
···	919	919	#endif
	920	920		}
	921	921	
	922	-		ret = udf_CS0toUTF8(outstr, 31, pvoldesc->volIdent, 32);
	922	+		ret = udf_dstrCS0toUTF8(outstr, 31, pvoldesc->volIdent, 32);
	923	923		if (ret < 0)
	924	924			goto out_bh;
	925	925	
	926	926		strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret);
	927	927		udf_debug("volIdent[] = '%s'\n", UDF_SB(sb)->s_volume_ident);
	928	928	
	929	-		ret = udf_CS0toUTF8(outstr, 127, pvoldesc->volSetIdent, 128);
	929	+		ret = udf_dstrCS0toUTF8(outstr, 127, pvoldesc->volSetIdent, 128);
	930	930		if (ret < 0)
	931	931			goto out_bh;
	932	932	
+1 -1
fs/udf/udfdecl.h
···	212	212				uint8_t *, int);
	213	213	extern int udf_put_filename(struct super_block *, const uint8_t *, int,
	214	214				uint8_t *, int);
	215	-	extern int udf_CS0toUTF8(uint8_t *, int, const uint8_t *, int);
	215	+	extern int udf_dstrCS0toUTF8(uint8_t *, int, const uint8_t *, int);
	216	216	
	217	217	/* ialloc.c */
	218	218	extern void udf_free_inode(struct inode *);
+14 -2
fs/udf/unicode.c
···	335	335		return u_len;
	336	336	}
	337	337	
	338	-	int udf_CS0toUTF8(uint8_t *utf_o, int o_len, const uint8_t *ocu_i, int i_len)
	338	+	int udf_dstrCS0toUTF8(uint8_t *utf_o, int o_len,
	339	+			      const uint8_t *ocu_i, int i_len)
	339	340	{
	340	-		return udf_name_from_CS0(utf_o, o_len, ocu_i, i_len,
	341	+		int s_len = 0;
	342	+	
	343	+		if (i_len > 0) {
	344	+			s_len = ocu_i[i_len - 1];
	345	+			if (s_len >= i_len) {
	346	+				pr_err("incorrect dstring lengths (%d/%d)\n",
	347	+				       s_len, i_len);
	348	+				return -EINVAL;
	349	+			}
	350	+		}
	351	+	
	352	+		return udf_name_from_CS0(utf_o, o_len, ocu_i, s_len,
	341	353				udf_uni2char_utf8, 0);
	342	354	}
	343	355	
+2 -2
include/acpi/acpi_bus.h
···	394	394	
	395	395	static inline bool is_acpi_node(struct fwnode_handle *fwnode)
	396	396	{
	397	-		return fwnode && (fwnode->type == FWNODE_ACPI
	397	+		return !IS_ERR_OR_NULL(fwnode) && (fwnode->type == FWNODE_ACPI
	398	398			|| fwnode->type == FWNODE_ACPI_DATA);
	399	399	}
	400	400	
	401	401	static inline bool is_acpi_device_node(struct fwnode_handle *fwnode)
	402	402	{
	403	-		return fwnode && fwnode->type == FWNODE_ACPI;
	403	+		return !IS_ERR_OR_NULL(fwnode) && fwnode->type == FWNODE_ACPI;
	404	404	}
	405	405	
	406	406	static inline struct acpi_device *to_acpi_device_node(struct fwnode_handle *fwnode)
+2 -1
include/linux/bpf.h
···	171	171	void bpf_register_map_type(struct bpf_map_type_list *tl);
	172	172	
	173	173	struct bpf_prog *bpf_prog_get(u32 ufd);
	174	+	struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog);
	174	175	void bpf_prog_put(struct bpf_prog *prog);
	175	176	void bpf_prog_put_rcu(struct bpf_prog *prog);
	176	177	
	177	178	struct bpf_map *bpf_map_get_with_uref(u32 ufd);
	178	179	struct bpf_map *__bpf_map_get(struct fd f);
	179	-	void bpf_map_inc(struct bpf_map *map, bool uref);
	180	+	struct bpf_map *bpf_map_inc(struct bpf_map *map, bool uref);
	180	181	void bpf_map_put_with_uref(struct bpf_map *map);
	181	182	void bpf_map_put(struct bpf_map *map);
	182	183	int bpf_map_precharge_memlock(u32 pages);
+1 -1
include/linux/compiler-gcc.h
···	246	246	#define __HAVE_BUILTIN_BSWAP32__
	247	247	#define __HAVE_BUILTIN_BSWAP64__
	248	248	#endif
	249	-	#if GCC_VERSION >= 40800 || (defined(__powerpc__) && GCC_VERSION >= 40600)
	249	+	#if GCC_VERSION >= 40800
	250	250	#define __HAVE_BUILTIN_BSWAP16__
	251	251	#endif
	252	252	#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
+18 -2
include/linux/hash.h
···	32	32	#error Wordsize not 32 or 64
	33	33	#endif
	34	34	
	35	+	/*
	36	+	 * The above primes are actively bad for hashing, since they are
	37	+	 * too sparse. The 32-bit one is mostly ok, the 64-bit one causes
	38	+	 * real problems. Besides, the "prime" part is pointless for the
	39	+	 * multiplicative hash.
	40	+	 *
	41	+	 * Although a random odd number will do, it turns out that the golden
	42	+	 * ratio phi = (sqrt(5)-1)/2, or its negative, has particularly nice
	43	+	 * properties.
	44	+	 *
	45	+	 * These are the negative, (1 - phi) = (phi^2) = (3 - sqrt(5))/2.
	46	+	 * (See Knuth vol 3, section 6.4, exercise 9.)
	47	+	 */
	48	+	#define GOLDEN_RATIO_32 0x61C88647
	49	+	#define GOLDEN_RATIO_64 0x61C8864680B583EBull
	50	+	
	35	51	static __always_inline u64 hash_64(u64 val, unsigned int bits)
	36	52	{
	37	53		u64 hash = val;
	38	54	
	39	-	#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
	40	-		hash = hash * GOLDEN_RATIO_PRIME_64;
	55	+	#if BITS_PER_LONG == 64
	56	+		hash = hash * GOLDEN_RATIO_64;
	41	57	#else
	42	58		/* Sigh, gcc can't optimise this alone like it does for 32 bits. */
	43	59		u64 n = hash;
+5
include/linux/if_ether.h
···	28	28		return (struct ethhdr *)skb_mac_header(skb);
	29	29	}
	30	30	
	31	+	static inline struct ethhdr *inner_eth_hdr(const struct sk_buff *skb)
	32	+	{
	33	+		return (struct ethhdr *)skb_inner_mac_header(skb);
	34	+	}
	35	+	
	31	36	int eth_header_parse(const struct sk_buff *skb, unsigned char *haddr);
	32	37	
	33	38	extern ssize_t sysfs_format_mac(char *buf, const unsigned char *addr, int len);
+9 -1
include/linux/net.h
··· 246 246 net_ratelimited_function(pr_warn, fmt, ##__VA_ARGS__) 247 247 #define net_info_ratelimited(fmt, ...) \ 248 248 net_ratelimited_function(pr_info, fmt, ##__VA_ARGS__) 249 - #if defined(DEBUG) 249 + #if defined(CONFIG_DYNAMIC_DEBUG) 250 + #define net_dbg_ratelimited(fmt, ...) \ 251 + do { \ 252 + DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt); \ 253 + if (unlikely(descriptor.flags & _DPRINTK_FLAGS_PRINT) && \ 254 + net_ratelimit()) \ 255 + __dynamic_pr_debug(&descriptor, fmt, ##__VA_ARGS__); \ 256 + } while (0) 257 + #elif defined(DEBUG) 250 258 #define net_dbg_ratelimited(fmt, ...) \ 251 259 net_ratelimited_function(pr_debug, fmt, ##__VA_ARGS__) 252 260 #else
+4 -1
include/linux/netdevice.h
···	2164	2164	
	2165	2165	struct udp_offload;
	2166	2166	
	2167	+	/* 'skb->encapsulation' is set before gro_complete() is called.  gro_complete()
	2168	+	 * must set 'skb->inner_mac_header' to the beginning of tunnel payload.
	2169	+	 */
	2167	2170	struct udp_offload_callbacks {
	2168	2171		struct sk_buff **(*gro_receive)(struct sk_buff **head,
	2169	2172						struct sk_buff *skb,
···	4007	4004	
	4008	4005	static inline bool net_gso_ok(netdev_features_t features, int gso_type)
	4009	4006	{
	4010	-		netdev_features_t feature = gso_type << NETIF_F_GSO_SHIFT;
	4007	+		netdev_features_t feature = (netdev_features_t)gso_type << NETIF_F_GSO_SHIFT;
	4011	4008	
	4012	4009		/* check flags correspondence */
	4013	4010		BUILD_BUG_ON(SKB_GSO_TCPV4 != (NETIF_F_TSO >> NETIF_F_GSO_SHIFT));
+1 -1
include/linux/of.h
···	133	133	
	134	134	static inline bool is_of_node(struct fwnode_handle *fwnode)
	135	135	{
	136	-		return fwnode && fwnode->type == FWNODE_OF;
	136	+		return !IS_ERR_OR_NULL(fwnode) && fwnode->type == FWNODE_OF;
	137	137	}
	138	138	
	139	139	static inline struct device_node *to_of_node(struct fwnode_handle *fwnode)
+22
include/linux/page-flags.h
··· 517 517 } 518 518 519 519 /* 520 + * PageTransCompoundMap is the same as PageTransCompound, but it also 521 + * guarantees the primary MMU has the entire compound page mapped 522 + * through pmd_trans_huge, which in turn guarantees the secondary MMUs 523 + * can also map the entire compound page. This allows the secondary 524 + * MMUs to call get_user_pages() only once for each compound page and 525 + * to immediately map the entire compound page with a single secondary 526 + * MMU fault. If there will be a pmd split later, the secondary MMUs 527 + * will get an update through the MMU notifier invalidation through 528 + * split_huge_pmd(). 529 + * 530 + * Unlike PageTransCompound, this is safe to be called only while 531 + * split_huge_pmd() cannot run from under us, like if protected by the 532 + * MMU notifier, otherwise it may result in page->_mapcount < 0 false 533 + * positives. 534 + */ 535 + static inline int PageTransCompoundMap(struct page *page) 536 + { 537 + return PageTransCompound(page) && atomic_read(&page->_mapcount) < 0; 538 + } 539 + 540 + /* 520 541 * PageTransTail returns true for both transparent huge pages 521 542 * and hugetlbfs pages, so it should only be called when it's known 522 543 * that hugetlbfs pages aren't involved. ··· 580 559 #else 581 560 TESTPAGEFLAG_FALSE(TransHuge) 582 561 TESTPAGEFLAG_FALSE(TransCompound) 562 + TESTPAGEFLAG_FALSE(TransCompoundMap) 583 563 TESTPAGEFLAG_FALSE(TransTail) 584 564 TESTPAGEFLAG_FALSE(DoubleMap) 585 565 TESTSETFLAG_FALSE(DoubleMap)
+75 -69
include/linux/rio_mport_cdev.h include/uapi/linux/rio_mport_cdev.h
··· 39 39 #ifndef _RIO_MPORT_CDEV_H_ 40 40 #define _RIO_MPORT_CDEV_H_ 41 41 42 - #ifndef __user 43 - #define __user 44 - #endif 42 + #include <linux/ioctl.h> 43 + #include <linux/types.h> 45 44 46 45 struct rio_mport_maint_io { 47 - uint32_t rioid; /* destID of remote device */ 48 - uint32_t hopcount; /* hopcount to remote device */ 49 - uint32_t offset; /* offset in register space */ 50 - size_t length; /* length in bytes */ 51 - void __user *buffer; /* data buffer */ 46 + __u16 rioid; /* destID of remote device */ 47 + __u8 hopcount; /* hopcount to remote device */ 48 + __u8 pad0[5]; 49 + __u32 offset; /* offset in register space */ 50 + __u32 length; /* length in bytes */ 51 + __u64 buffer; /* pointer to data buffer */ 52 52 }; 53 53 54 54 /* ··· 66 66 #define RIO_CAP_MAP_INB (1 << 7) 67 67 68 68 struct rio_mport_properties { 69 - uint16_t hdid; 70 - uint8_t id; /* Physical port ID */ 71 - uint8_t index; 72 - uint32_t flags; 73 - uint32_t sys_size; /* Default addressing size */ 74 - uint8_t port_ok; 75 - uint8_t link_speed; 76 - uint8_t link_width; 77 - uint32_t dma_max_sge; 78 - uint32_t dma_max_size; 79 - uint32_t dma_align; 80 - uint32_t transfer_mode; /* Default transfer mode */ 81 - uint32_t cap_sys_size; /* Capable system sizes */ 82 - uint32_t cap_addr_size; /* Capable addressing sizes */ 83 - uint32_t cap_transfer_mode; /* Capable transfer modes */ 84 - uint32_t cap_mport; /* Mport capabilities */ 69 + __u16 hdid; 70 + __u8 id; /* Physical port ID */ 71 + __u8 index; 72 + __u32 flags; 73 + __u32 sys_size; /* Default addressing size */ 74 + __u8 port_ok; 75 + __u8 link_speed; 76 + __u8 link_width; 77 + __u8 pad0; 78 + __u32 dma_max_sge; 79 + __u32 dma_max_size; 80 + __u32 dma_align; 81 + __u32 transfer_mode; /* Default transfer mode */ 82 + __u32 cap_sys_size; /* Capable system sizes */ 83 + __u32 cap_addr_size; /* Capable addressing sizes */ 84 + __u32 cap_transfer_mode; /* Capable transfer modes */ 85 + __u32 cap_mport; /* Mport capabilities */ 85 86 }; 
86 87 87 88 /* ··· 94 93 #define RIO_PORTWRITE (1 << 1) 95 94 96 95 struct rio_doorbell { 97 - uint32_t rioid; 98 - uint16_t payload; 96 + __u16 rioid; 97 + __u16 payload; 99 98 }; 100 99 101 100 struct rio_doorbell_filter { 102 - uint32_t rioid; /* 0xffffffff to match all ids */ 103 - uint16_t low; 104 - uint16_t high; 101 + __u16 rioid; /* Use RIO_INVALID_DESTID to match all ids */ 102 + __u16 low; 103 + __u16 high; 104 + __u16 pad0; 105 105 }; 106 106 107 107 108 108 struct rio_portwrite { 109 - uint32_t payload[16]; 109 + __u32 payload[16]; 110 110 }; 111 111 112 112 struct rio_pw_filter { 113 - uint32_t mask; 114 - uint32_t low; 115 - uint32_t high; 113 + __u32 mask; 114 + __u32 low; 115 + __u32 high; 116 + __u32 pad0; 116 117 }; 117 118 118 119 /* RapidIO base address for inbound requests set to value defined below 119 120 * indicates that no specific RIO-to-local address translation is requested 120 121 * and driver should use direct (one-to-one) address mapping. 121 122 */ 122 - #define RIO_MAP_ANY_ADDR (uint64_t)(~((uint64_t) 0)) 123 + #define RIO_MAP_ANY_ADDR (__u64)(~((__u64) 0)) 123 124 124 125 struct rio_mmap { 125 - uint32_t rioid; 126 - uint64_t rio_addr; 127 - uint64_t length; 128 - uint64_t handle; 129 - void *address; 126 + __u16 rioid; 127 + __u16 pad0[3]; 128 + __u64 rio_addr; 129 + __u64 length; 130 + __u64 handle; 131 + __u64 address; 130 132 }; 131 133 132 134 struct rio_dma_mem { 133 - uint64_t length; /* length of DMA memory */ 134 - uint64_t dma_handle; /* handle associated with this memory */ 135 - void *buffer; /* pointer to this memory */ 135 + __u64 length; /* length of DMA memory */ 136 + __u64 dma_handle; /* handle associated with this memory */ 137 + __u64 address; 136 138 }; 137 139 138 - 139 140 struct rio_event { 140 - unsigned int header; /* event type RIO_DOORBELL or RIO_PORTWRITE */ 141 + __u32 header; /* event type RIO_DOORBELL or RIO_PORTWRITE */ 141 142 union { 142 143 struct rio_doorbell doorbell; /* header for 
RIO_DOORBELL */ 143 144 struct rio_portwrite portwrite; /* header for RIO_PORTWRITE */ 144 145 } u; 146 + __u32 pad0; 145 147 }; 146 148 147 149 enum rio_transfer_sync { ··· 188 184 }; 189 185 190 186 struct rio_transfer_io { 191 - uint32_t rioid; /* Target destID */ 192 - uint64_t rio_addr; /* Address in target's RIO mem space */ 193 - enum rio_exchange method; /* Data exchange method */ 194 - void __user *loc_addr; 195 - uint64_t handle; 196 - uint64_t offset; /* Offset in buffer */ 197 - uint64_t length; /* Length in bytes */ 198 - uint32_t completion_code; /* Completion code for this transfer */ 187 + __u64 rio_addr; /* Address in target's RIO mem space */ 188 + __u64 loc_addr; 189 + __u64 handle; 190 + __u64 offset; /* Offset in buffer */ 191 + __u64 length; /* Length in bytes */ 192 + __u16 rioid; /* Target destID */ 193 + __u16 method; /* Data exchange method, one of rio_exchange enum */ 194 + __u32 completion_code; /* Completion code for this transfer */ 199 195 }; 200 196 201 197 struct rio_transaction { 202 - uint32_t transfer_mode; /* Data transfer mode */ 203 - enum rio_transfer_sync sync; /* Synchronization method */ 204 - enum rio_transfer_dir dir; /* Transfer direction */ 205 - size_t count; /* Number of transfers */ 206 - struct rio_transfer_io __user *block; /* Array of <count> transfers */ 198 + __u64 block; /* Pointer to array of <count> transfers */ 199 + __u32 count; /* Number of transfers */ 200 + __u32 transfer_mode; /* Data transfer mode */ 201 + __u16 sync; /* Synch method, one of rio_transfer_sync enum */ 202 + __u16 dir; /* Transfer direction, one of rio_transfer_dir enum */ 203 + __u32 pad0; 207 204 }; 208 205 209 206 struct rio_async_tx_wait { 210 - uint32_t token; /* DMA transaction ID token */ 211 - uint32_t timeout; /* Wait timeout in msec, if 0 use default TO */ 207 + __u32 token; /* DMA transaction ID token */ 208 + __u32 timeout; /* Wait timeout in msec, if 0 use default TO */ 212 209 }; 213 210 214 211 #define RIO_MAX_DEVNAME_SZ 
20 215 212 216 213 struct rio_rdev_info { 217 - uint32_t destid; 218 - uint8_t hopcount; 219 - uint32_t comptag; 214 + __u16 destid; 215 + __u8 hopcount; 216 + __u8 pad0; 217 + __u32 comptag; 220 218 char name[RIO_MAX_DEVNAME_SZ + 1]; 221 219 }; 222 220 ··· 226 220 #define RIO_MPORT_DRV_MAGIC 'm' 227 221 228 222 #define RIO_MPORT_MAINT_HDID_SET \ 229 - _IOW(RIO_MPORT_DRV_MAGIC, 1, uint16_t) 223 + _IOW(RIO_MPORT_DRV_MAGIC, 1, __u16) 230 224 #define RIO_MPORT_MAINT_COMPTAG_SET \ 231 - _IOW(RIO_MPORT_DRV_MAGIC, 2, uint32_t) 225 + _IOW(RIO_MPORT_DRV_MAGIC, 2, __u32) 232 226 #define RIO_MPORT_MAINT_PORT_IDX_GET \ 233 - _IOR(RIO_MPORT_DRV_MAGIC, 3, uint32_t) 227 + _IOR(RIO_MPORT_DRV_MAGIC, 3, __u32) 234 228 #define RIO_MPORT_GET_PROPERTIES \ 235 229 _IOR(RIO_MPORT_DRV_MAGIC, 4, struct rio_mport_properties) 236 230 #define RIO_MPORT_MAINT_READ_LOCAL \ ··· 250 244 #define RIO_DISABLE_PORTWRITE_RANGE \ 251 245 _IOW(RIO_MPORT_DRV_MAGIC, 12, struct rio_pw_filter) 252 246 #define RIO_SET_EVENT_MASK \ 253 - _IOW(RIO_MPORT_DRV_MAGIC, 13, unsigned int) 247 + _IOW(RIO_MPORT_DRV_MAGIC, 13, __u32) 254 248 #define RIO_GET_EVENT_MASK \ 255 - _IOR(RIO_MPORT_DRV_MAGIC, 14, unsigned int) 249 + _IOR(RIO_MPORT_DRV_MAGIC, 14, __u32) 256 250 #define RIO_MAP_OUTBOUND \ 257 251 _IOWR(RIO_MPORT_DRV_MAGIC, 15, struct rio_mmap) 258 252 #define RIO_UNMAP_OUTBOUND \ ··· 260 254 #define RIO_MAP_INBOUND \ 261 255 _IOWR(RIO_MPORT_DRV_MAGIC, 17, struct rio_mmap) 262 256 #define RIO_UNMAP_INBOUND \ 263 - _IOW(RIO_MPORT_DRV_MAGIC, 18, uint64_t) 257 + _IOW(RIO_MPORT_DRV_MAGIC, 18, __u64) 264 258 #define RIO_ALLOC_DMA \ 265 259 _IOWR(RIO_MPORT_DRV_MAGIC, 19, struct rio_dma_mem) 266 260 #define RIO_FREE_DMA \ 267 - _IOW(RIO_MPORT_DRV_MAGIC, 20, uint64_t) 261 + _IOW(RIO_MPORT_DRV_MAGIC, 20, __u64) 268 262 #define RIO_TRANSFER \ 269 263 _IOWR(RIO_MPORT_DRV_MAGIC, 21, struct rio_transaction) 270 264 #define RIO_WAIT_FOR_ASYNC \
+4
include/linux/swap.h
··· 533 533 #ifdef CONFIG_MEMCG 534 534 static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg) 535 535 { 536 + /* Cgroup2 doesn't have per-cgroup swappiness */ 537 + if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) 538 + return vm_swappiness; 539 + 536 540 /* root ? */ 537 541 if (mem_cgroup_disabled() || !memcg->css.parent) 538 542 return vm_swappiness;
+1
include/net/netns/xfrm.h
··· 80 80 struct flow_cache flow_cache_global; 81 81 atomic_t flow_cache_genid; 82 82 struct list_head flow_cache_gc_list; 83 + atomic_t flow_cache_gc_count; 83 84 spinlock_t flow_cache_gc_lock; 84 85 struct work_struct flow_cache_gc_work; 85 86 struct work_struct flow_cache_flush_work;
-9
include/net/udp_tunnel.h
··· 106 106 return iptunnel_handle_offloads(skb, type); 107 107 } 108 108 109 - static inline void udp_tunnel_gro_complete(struct sk_buff *skb, int nhoff) 110 - { 111 - struct udphdr *uh; 112 - 113 - uh = (struct udphdr *)(skb->data + nhoff - sizeof(struct udphdr)); 114 - skb_shinfo(skb)->gso_type |= uh->check ? 115 - SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL; 116 - } 117 - 118 109 static inline void udp_tunnel_encap_enable(struct socket *sock) 119 110 { 120 111 #if IS_ENABLED(CONFIG_IPV6)
+3 -1
include/net/vxlan.h
··· 252 252 (skb->inner_protocol_type != ENCAP_TYPE_ETHER || 253 253 skb->inner_protocol != htons(ETH_P_TEB) || 254 254 (skb_inner_mac_header(skb) - skb_transport_header(skb) != 255 - sizeof(struct udphdr) + sizeof(struct vxlanhdr)))) 255 + sizeof(struct udphdr) + sizeof(struct vxlanhdr)) || 256 + (skb->ip_summed != CHECKSUM_NONE && 257 + !can_checksum_protocol(features, inner_eth_hdr(skb)->h_proto)))) 256 258 return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK); 257 259 258 260 return features;
+2 -2
include/uapi/asm-generic/unistd.h
··· 718 718 #define __NR_copy_file_range 285 719 719 __SYSCALL(__NR_copy_file_range, sys_copy_file_range) 720 720 #define __NR_preadv2 286 721 - __SYSCALL(__NR_preadv2, sys_preadv2) 721 + __SC_COMP(__NR_preadv2, sys_preadv2, compat_sys_preadv2) 722 722 #define __NR_pwritev2 287 723 - __SYSCALL(__NR_pwritev2, sys_pwritev2) 723 + __SC_COMP(__NR_pwritev2, sys_pwritev2, compat_sys_pwritev2) 724 724 725 725 #undef __NR_syscalls 726 726 #define __NR_syscalls 288
+3 -1
include/uapi/linux/if_macsec.h
··· 19 19 20 20 #define MACSEC_MAX_KEY_LEN 128 21 21 22 + #define MACSEC_KEYID_LEN 16 23 + 22 24 #define MACSEC_DEFAULT_CIPHER_ID 0x0080020001000001ULL 23 25 #define MACSEC_DEFAULT_CIPHER_ALT 0x0080C20001000001ULL 24 26 ··· 79 77 MACSEC_SA_ATTR_ACTIVE, /* config/dump, u8 0..1 */ 80 78 MACSEC_SA_ATTR_PN, /* config/dump, u32 */ 81 79 MACSEC_SA_ATTR_KEY, /* config, data */ 82 - MACSEC_SA_ATTR_KEYID, /* config/dump, u64 */ 80 + MACSEC_SA_ATTR_KEYID, /* config/dump, 128-bit */ 83 81 MACSEC_SA_ATTR_STATS, /* dump, nested, macsec_sa_stats_attr */ 84 82 __MACSEC_SA_ATTR_END, 85 83 NUM_MACSEC_SA_ATTR = __MACSEC_SA_ATTR_END,
+15 -9
include/uapi/linux/swab.h
··· 45 45
46 46 static inline __attribute_const__ __u16 __fswab16(__u16 val)
47 47 {
48 - #ifdef __HAVE_BUILTIN_BSWAP16__
49 - return __builtin_bswap16(val);
50 - #elif defined (__arch_swab16)
48 + #if defined (__arch_swab16)
51 49 return __arch_swab16(val);
52 50 #else
53 51 return ___constant_swab16(val);
··· 54 56
55 57 static inline __attribute_const__ __u32 __fswab32(__u32 val)
56 58 {
57 - #ifdef __HAVE_BUILTIN_BSWAP32__
58 - return __builtin_bswap32(val);
59 - #elif defined(__arch_swab32)
59 + #if defined(__arch_swab32)
60 60 return __arch_swab32(val);
61 61 #else
62 62 return ___constant_swab32(val);
··· 63 67
64 68 static inline __attribute_const__ __u64 __fswab64(__u64 val)
65 69 {
66 - #ifdef __HAVE_BUILTIN_BSWAP64__
67 - return __builtin_bswap64(val);
68 - #elif defined (__arch_swab64)
70 + #if defined (__arch_swab64)
69 71 return __arch_swab64(val);
70 72 #elif defined(__SWAB_64_THRU_32__)
71 73 __u32 h = val >> 32;
··· 96 102 * __swab16 - return a byteswapped 16-bit value
97 103 * @x: value to byteswap
98 104 */
105 + #ifdef __HAVE_BUILTIN_BSWAP16__
106 + #define __swab16(x) (__u16)__builtin_bswap16((__u16)(x))
107 + #else
99 108 #define __swab16(x) \
100 109 (__builtin_constant_p((__u16)(x)) ? \
101 110 ___constant_swab16(x) : \
102 111 __fswab16(x))
112 + #endif
103 113
104 114 /**
105 115 * __swab32 - return a byteswapped 32-bit value
106 116 * @x: value to byteswap
107 117 */
118 + #ifdef __HAVE_BUILTIN_BSWAP32__
119 + #define __swab32(x) (__u32)__builtin_bswap32((__u32)(x))
120 + #else
108 121 #define __swab32(x) \
109 122 (__builtin_constant_p((__u32)(x)) ? \
110 123 ___constant_swab32(x) : \
111 124 __fswab32(x))
125 + #endif
112 126
113 127 /**
114 128 * __swab64 - return a byteswapped 64-bit value
115 129 * @x: value to byteswap
116 130 */
131 + #ifdef __HAVE_BUILTIN_BSWAP64__
132 + #define __swab64(x) (__u64)__builtin_bswap64((__u64)(x))
133 + #else
117 134 #define __swab64(x) \
118 135 (__builtin_constant_p((__u64)(x)) ? \
119 136 ___constant_swab64(x) : \
120 137 __fswab64(x))
138 + #endif
121 139
122 140 /**
123 141 * __swahw32 - return a word-swapped 32-bit value
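The swab.h hunk above moves the `__builtin_bswap*()` dispatch out of the out-of-line `__fswab*()` helpers and into the `__swab*()` macros themselves, so that compilers providing the builtins can constant-fold byteswaps of compile-time constants. A standalone sketch of the resulting dispatch; the `my_`/`constant_` names are illustrative, not the kernel macros:

```c
#include <assert.h>
#include <stdint.h>

/* Portable fallback, mirroring the kernel's ___constant_swab32(). */
static inline uint32_t constant_swab32(uint32_t x)
{
	return ((x & 0x000000ffu) << 24) |
	       ((x & 0x0000ff00u) <<  8) |
	       ((x & 0x00ff0000u) >>  8) |
	       ((x & 0xff000000u) >> 24);
}

/* Dispatch like the patched __swab32(): prefer the compiler builtin,
 * which GCC/clang can fold at compile time for constant arguments;
 * otherwise fall back to the open-coded shift/mask version. */
#if defined(__GNUC__) || defined(__clang__)
#define my_swab32(x) ((uint32_t)__builtin_bswap32((uint32_t)(x)))
#else
#define my_swab32(x) constant_swab32(x)
#endif
```

Both paths must agree on every input, which is what lets the macro pick whichever form the compiler handles best.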
+2 -2
include/xen/page.h
··· 15 15 */ 16 16 17 17 #define xen_pfn_to_page(xen_pfn) \ 18 - ((pfn_to_page(((unsigned long)(xen_pfn) << XEN_PAGE_SHIFT) >> PAGE_SHIFT))) 18 + (pfn_to_page((unsigned long)(xen_pfn) >> (PAGE_SHIFT - XEN_PAGE_SHIFT))) 19 19 #define page_to_xen_pfn(page) \ 20 - (((page_to_pfn(page)) << PAGE_SHIFT) >> XEN_PAGE_SHIFT) 20 + ((page_to_pfn(page)) << (PAGE_SHIFT - XEN_PAGE_SHIFT)) 21 21 22 22 #define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE) 23 23
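The xen/page.h change replaces a left-then-right shift with a single right shift by the difference of the two page shifts: shifting the frame number left first can overflow its type and silently drop high bits whenever PAGE_SHIFT > XEN_PAGE_SHIFT. A minimal sketch of the failure mode, using a 32-bit frame number and illustrative shift values (16 vs 12, as with 64 KiB kernel pages over 4 KiB Xen pages):

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_PAGE_SHIFT     16 /* illustrative: 64 KiB kernel page */
#define DEMO_XEN_PAGE_SHIFT 12 /* illustrative: 4 KiB Xen page */

/* Old macro body: the left shift is performed first, so high bits of a
 * large frame number are discarded before the right shift. */
static unsigned long old_xen_pfn_to_pfn(uint32_t xen_pfn)
{
	return (uint32_t)(xen_pfn << DEMO_XEN_PAGE_SHIFT) >> DEMO_PAGE_SHIFT;
}

/* Patched macro body: one right shift by the difference; nothing can
 * overflow, and the result is identical for small frame numbers. */
static unsigned long new_xen_pfn_to_pfn(uint32_t xen_pfn)
{
	return (unsigned long)xen_pfn >> (DEMO_PAGE_SHIFT - DEMO_XEN_PAGE_SHIFT);
}
```

For small frame numbers the two forms agree; for a frame number with bits in the top nibble the old form loses them.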
+4 -3
kernel/bpf/inode.c
··· 31 31 { 32 32 switch (type) { 33 33 case BPF_TYPE_PROG: 34 - atomic_inc(&((struct bpf_prog *)raw)->aux->refcnt); 34 + raw = bpf_prog_inc(raw); 35 35 break; 36 36 case BPF_TYPE_MAP: 37 - bpf_map_inc(raw, true); 37 + raw = bpf_map_inc(raw, true); 38 38 break; 39 39 default: 40 40 WARN_ON_ONCE(1); ··· 297 297 goto out; 298 298 299 299 raw = bpf_any_get(inode->i_private, *type); 300 - touch_atime(&path); 300 + if (!IS_ERR(raw)) 301 + touch_atime(&path); 301 302 302 303 path_put(&path); 303 304 return raw;
+20 -4
kernel/bpf/syscall.c
··· 218 218 return f.file->private_data; 219 219 } 220 220 221 - void bpf_map_inc(struct bpf_map *map, bool uref) 221 + /* prog's and map's refcnt limit */ 222 + #define BPF_MAX_REFCNT 32768 223 + 224 + struct bpf_map *bpf_map_inc(struct bpf_map *map, bool uref) 222 225 { 223 - atomic_inc(&map->refcnt); 226 + if (atomic_inc_return(&map->refcnt) > BPF_MAX_REFCNT) { 227 + atomic_dec(&map->refcnt); 228 + return ERR_PTR(-EBUSY); 229 + } 224 230 if (uref) 225 231 atomic_inc(&map->usercnt); 232 + return map; 226 233 } 227 234 228 235 struct bpf_map *bpf_map_get_with_uref(u32 ufd) ··· 241 234 if (IS_ERR(map)) 242 235 return map; 243 236 244 - bpf_map_inc(map, true); 237 + map = bpf_map_inc(map, true); 245 238 fdput(f); 246 239 247 240 return map; ··· 665 658 return f.file->private_data; 666 659 } 667 660 661 + struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog) 662 + { 663 + if (atomic_inc_return(&prog->aux->refcnt) > BPF_MAX_REFCNT) { 664 + atomic_dec(&prog->aux->refcnt); 665 + return ERR_PTR(-EBUSY); 666 + } 667 + return prog; 668 + } 669 + 668 670 /* called by sockets/tracing/seccomp before attaching program to an event 669 671 * pairs with bpf_prog_put() 670 672 */ ··· 686 670 if (IS_ERR(prog)) 687 671 return prog; 688 672 689 - atomic_inc(&prog->aux->refcnt); 673 + prog = bpf_prog_inc(prog); 690 674 fdput(f); 691 675 692 676 return prog;
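The bpf_map_inc()/bpf_prog_inc() changes above cap both refcounts at BPF_MAX_REFCNT so user space cannot walk a counter up to overflow: the increment is performed first and backed out if the cap was exceeded. The same pattern in miniature (the `demo_` names are illustrative; the kernel uses atomic_inc_return() and returns ERR_PTR(-EBUSY)):

```c
#include <assert.h>
#include <errno.h>

#define DEMO_MAX_REFCNT 32768 /* mirrors BPF_MAX_REFCNT in the patch */

struct demo_obj {
	int refcnt; /* stands in for the kernel's atomic_t */
};

/* Increment-then-check: bump the counter, and if that pushed it past
 * the cap, undo the bump and report failure. Returns 0 on success,
 * -EBUSY when the reference cap is hit. */
static int demo_obj_inc(struct demo_obj *obj)
{
	if (++obj->refcnt > DEMO_MAX_REFCNT) {
		--obj->refcnt;
		return -EBUSY;
	}
	return 0;
}
```

The point of incrementing first is that, with a real atomic, the check and the update stay one operation; a separate read-then-increment would race.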
+47 -29
kernel/bpf/verifier.c
··· 239 239 [CONST_IMM] = "imm",
240 240 };
241 241
242 - static const struct {
243 - int map_type;
244 - int func_id;
245 - } func_limit[] = {
246 - {BPF_MAP_TYPE_PROG_ARRAY, BPF_FUNC_tail_call},
247 - {BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_read},
248 - {BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_output},
249 - {BPF_MAP_TYPE_STACK_TRACE, BPF_FUNC_get_stackid},
250 - };
251 -
252 242 static void print_verifier_state(struct verifier_env *env)
253 243 {
254 244 enum bpf_reg_type t;
··· 911 921
912 922 static int check_map_func_compatibility(struct bpf_map *map, int func_id)
913 923 {
914 - bool bool_map, bool_func;
915 - int i;
916 -
917 924 if (!map)
918 925 return 0;
919 926
920 - for (i = 0; i < ARRAY_SIZE(func_limit); i++) {
921 - bool_map = (map->map_type == func_limit[i].map_type);
922 - bool_func = (func_id == func_limit[i].func_id);
923 - /* only when map & func pair match it can continue.
924 - * don't allow any other map type to be passed into
925 - * the special func;
926 - */
927 - if (bool_func && bool_map != bool_func) {
928 - verbose("cannot pass map_type %d into func %d\n",
929 - map->map_type, func_id);
930 - return -EINVAL;
931 - }
927 + /* We need a two way check, first is from map perspective ... */
928 + switch (map->map_type) {
929 + case BPF_MAP_TYPE_PROG_ARRAY:
930 + if (func_id != BPF_FUNC_tail_call)
931 + goto error;
932 + break;
933 + case BPF_MAP_TYPE_PERF_EVENT_ARRAY:
934 + if (func_id != BPF_FUNC_perf_event_read &&
935 + func_id != BPF_FUNC_perf_event_output)
936 + goto error;
937 + break;
938 + case BPF_MAP_TYPE_STACK_TRACE:
939 + if (func_id != BPF_FUNC_get_stackid)
940 + goto error;
941 + break;
942 + default:
943 + break;
944 + }
945 +
946 + /* ... and second from the function itself. */
947 + switch (func_id) {
948 + case BPF_FUNC_tail_call:
949 + if (map->map_type != BPF_MAP_TYPE_PROG_ARRAY)
950 + goto error;
951 + break;
952 + case BPF_FUNC_perf_event_read:
953 + case BPF_FUNC_perf_event_output:
954 + if (map->map_type != BPF_MAP_TYPE_PERF_EVENT_ARRAY)
955 + goto error;
956 + break;
957 + case BPF_FUNC_get_stackid:
958 + if (map->map_type != BPF_MAP_TYPE_STACK_TRACE)
959 + goto error;
960 + break;
961 + default:
962 + break;
932 963 }
933 964
934 965 return 0;
966 + error:
967 + verbose("cannot pass map_type %d into func %d\n",
968 + map->map_type, func_id);
969 + return -EINVAL;
935 970 }
936 971
937 972 static int check_call(struct verifier_env *env, int func_id)
··· 2064 2049 return -E2BIG;
2065 2050 }
2066 2051
2067 - /* remember this map */
2068 - env->used_maps[env->used_map_cnt++] = map;
2069 -
2070 2052 /* hold the map. If the program is rejected by verifier,
2071 2053 * the map will be released by release_maps() or it
2072 2054 * will be used by the valid program until it's unloaded
2073 2055 * and all maps are released in free_bpf_prog_info()
2074 2056 */
2075 - bpf_map_inc(map, false);
2057 + map = bpf_map_inc(map, false);
2058 + if (IS_ERR(map)) {
2059 + fdput(f);
2060 + return PTR_ERR(map);
2061 + }
2062 + env->used_maps[env->used_map_cnt++] = map;
2063 +
2076 2064 fdput(f);
2077 2065 next_insn:
2078 2066 insn++;
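check_map_func_compatibility() now checks both directions: a restricted map type accepts only its dedicated helpers, and a dedicated helper accepts only its map type. A toy model of the same two-way check; the `DEMO_` enum values are illustrative, not the real BPF identifiers:

```c
#include <assert.h>

/* Two map types and two helpers: one unrestricted pair, and one pair
 * (prog-array / tail-call) that must only be used together. */
enum demo_map_type { DEMO_MAP_ARRAY, DEMO_MAP_PROG_ARRAY };
enum demo_func_id  { DEMO_FUNC_LOOKUP, DEMO_FUNC_TAIL_CALL };

static int demo_check_compat(enum demo_map_type map, enum demo_func_id func)
{
	/* first direction: from the map's perspective */
	if (map == DEMO_MAP_PROG_ARRAY && func != DEMO_FUNC_TAIL_CALL)
		return -1;
	/* second direction: from the helper's perspective */
	if (func == DEMO_FUNC_TAIL_CALL && map != DEMO_MAP_PROG_ARRAY)
		return -1;
	return 0;
}
```

The old table-driven loop only rejected a wrong map for a restricted helper; the second switch is what additionally rejects a restricted map passed to an ordinary helper.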
+1 -1
kernel/events/core.c
··· 353 353 * 1 - disallow cpu events for unpriv 354 354 * 2 - disallow kernel profiling for unpriv 355 355 */ 356 - int sysctl_perf_event_paranoid __read_mostly = 1; 356 + int sysctl_perf_event_paranoid __read_mostly = 2; 357 357 358 358 /* Minimum for 512 kiB + 1 user control page */ 359 359 int sysctl_perf_event_mlock __read_mostly = 512 + (PAGE_SIZE / 1024); /* 'free' kiB per user */
+16 -13
kernel/sched/core.c
··· 596 596 return false; 597 597 598 598 /* 599 - * FIFO realtime policy runs the highest priority task (after DEADLINE). 600 - * Other runnable tasks are of a lower priority. The scheduler tick 601 - * isn't needed. 602 - */ 603 - fifo_nr_running = rq->rt.rt_nr_running - rq->rt.rr_nr_running; 604 - if (fifo_nr_running) 605 - return true; 606 - 607 - /* 608 - * Round-robin realtime tasks time slice with other tasks at the same 609 - * realtime priority. 599 + * If there are more than one RR tasks, we need the tick to effect the 600 + * actual RR behaviour. 610 601 */ 611 602 if (rq->rt.rr_nr_running) { 612 603 if (rq->rt.rr_nr_running == 1) ··· 606 615 return false; 607 616 } 608 617 609 - /* Normal multitasking need periodic preemption checks */ 610 - if (rq->cfs.nr_running > 1) 618 + /* 619 + * If there's no RR tasks, but FIFO tasks, we can skip the tick, no 620 + * forced preemption between FIFO tasks. 621 + */ 622 + fifo_nr_running = rq->rt.rt_nr_running - rq->rt.rr_nr_running; 623 + if (fifo_nr_running) 624 + return true; 625 + 626 + /* 627 + * If there are no DL,RR/FIFO tasks, there must only be CFS tasks left; 628 + * if there's more than one we need the tick for involuntary 629 + * preemption. 630 + */ 631 + if (rq->nr_running > 1) 611 632 return false; 612 633 613 634 return true;
+1
kernel/sched/deadline.c
··· 1394 1394 !cpumask_test_cpu(later_rq->cpu, 1395 1395 &task->cpus_allowed) || 1396 1396 task_running(rq, task) || 1397 + !dl_task(task) || 1397 1398 !task_on_rq_queued(task))) { 1398 1399 double_unlock_balance(rq, later_rq); 1399 1400 later_rq = NULL;
+8 -1
kernel/sched/fair.c
··· 3030 3030 3031 3031 #else /* CONFIG_SMP */ 3032 3032 3033 - static inline void update_load_avg(struct sched_entity *se, int update_tg) {} 3033 + static inline void update_load_avg(struct sched_entity *se, int not_used) 3034 + { 3035 + struct cfs_rq *cfs_rq = cfs_rq_of(se); 3036 + struct rq *rq = rq_of(cfs_rq); 3037 + 3038 + cpufreq_trigger_update(rq_clock(rq)); 3039 + } 3040 + 3034 3041 static inline void 3035 3042 enqueue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {} 3036 3043 static inline void
+1
kernel/sched/rt.c
··· 1729 1729 !cpumask_test_cpu(lowest_rq->cpu, 1730 1730 tsk_cpus_allowed(task)) || 1731 1731 task_running(rq, task) || 1732 + !rt_task(task) || 1732 1733 !task_on_rq_queued(task))) { 1733 1734 1734 1735 double_unlock_balance(rq, lowest_rq);
+7 -2
kernel/trace/trace_events.c
··· 2095 2095 trace_create_file("filter", 0644, file->dir, file, 2096 2096 &ftrace_event_filter_fops); 2097 2097 2098 - trace_create_file("trigger", 0644, file->dir, file, 2099 - &event_trigger_fops); 2098 + /* 2099 + * Only event directories that can be enabled should have 2100 + * triggers. 2101 + */ 2102 + if (!(call->flags & TRACE_EVENT_FL_IGNORE_ENABLE)) 2103 + trace_create_file("trigger", 0644, file->dir, file, 2104 + &event_trigger_fops); 2100 2105 2101 2106 trace_create_file("format", 0444, file->dir, call, 2102 2107 &ftrace_event_format_fops);
+5 -1
lib/stackdepot.c
··· 42 42 43 43 #define DEPOT_STACK_BITS (sizeof(depot_stack_handle_t) * 8) 44 44 45 + #define STACK_ALLOC_NULL_PROTECTION_BITS 1 45 46 #define STACK_ALLOC_ORDER 2 /* 'Slab' size order for stack depot, 4 pages */ 46 47 #define STACK_ALLOC_SIZE (1LL << (PAGE_SHIFT + STACK_ALLOC_ORDER)) 47 48 #define STACK_ALLOC_ALIGN 4 48 49 #define STACK_ALLOC_OFFSET_BITS (STACK_ALLOC_ORDER + PAGE_SHIFT - \ 49 50 STACK_ALLOC_ALIGN) 50 - #define STACK_ALLOC_INDEX_BITS (DEPOT_STACK_BITS - STACK_ALLOC_OFFSET_BITS) 51 + #define STACK_ALLOC_INDEX_BITS (DEPOT_STACK_BITS - \ 52 + STACK_ALLOC_NULL_PROTECTION_BITS - STACK_ALLOC_OFFSET_BITS) 51 53 #define STACK_ALLOC_SLABS_CAP 1024 52 54 #define STACK_ALLOC_MAX_SLABS \ 53 55 (((1LL << (STACK_ALLOC_INDEX_BITS)) < STACK_ALLOC_SLABS_CAP) ? \ ··· 61 59 struct { 62 60 u32 slabindex : STACK_ALLOC_INDEX_BITS; 63 61 u32 offset : STACK_ALLOC_OFFSET_BITS; 62 + u32 valid : STACK_ALLOC_NULL_PROTECTION_BITS; 64 63 }; 65 64 }; 66 65 ··· 139 136 stack->size = size; 140 137 stack->handle.slabindex = depot_index; 141 138 stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN; 139 + stack->handle.valid = 1; 142 140 memcpy(stack->entries, entries, size * sizeof(unsigned long)); 143 141 depot_offset += required_size; 144 142
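The stackdepot change reserves one bit of the 32-bit handle as a "valid" flag, so a real handle can never be the all-zero value that callers use to mean failure. A sketch of the handle layout with the same bit budget as the header's defines for 4 KiB pages (order 2 slabs, 4-byte alignment give 10 offset bits, leaving 21 index bits); the `demo_` names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* 32-bit handle packed as bitfields, mirroring union handle_parts:
 * 21 slab-index bits + 10 offset bits + 1 null-protection bit = 32.
 * Anonymous struct members require C11 (or GNU C). */
union demo_handle {
	uint32_t raw;
	struct {
		uint32_t slabindex : 21; /* which depot slab */
		uint32_t offset    : 10; /* record offset within the slab */
		uint32_t valid     : 1;  /* always set for a real record */
	};
};
```

With `valid` always set on allocation, even a record stored at slab 0, offset 0 yields a nonzero raw handle, so 0 is unambiguously "no stack recorded".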
+4 -10
mm/compaction.c
··· 852 852 pfn = isolate_migratepages_block(cc, pfn, block_end_pfn, 853 853 ISOLATE_UNEVICTABLE); 854 854 855 - /* 856 - * In case of fatal failure, release everything that might 857 - * have been isolated in the previous iteration, and signal 858 - * the failure back to caller. 859 - */ 860 - if (!pfn) { 861 - putback_movable_pages(&cc->migratepages); 862 - cc->nr_migratepages = 0; 855 + if (!pfn) 863 856 break; 864 - } 865 857 866 858 if (cc->nr_migratepages == COMPACT_CLUSTER_MAX) 867 859 break; ··· 1733 1741 1734 1742 static inline bool kcompactd_work_requested(pg_data_t *pgdat) 1735 1743 { 1736 - return pgdat->kcompactd_max_order > 0; 1744 + return pgdat->kcompactd_max_order > 0 || kthread_should_stop(); 1737 1745 } 1738 1746 1739 1747 static bool kcompactd_node_suitable(pg_data_t *pgdat) ··· 1797 1805 INIT_LIST_HEAD(&cc.freepages); 1798 1806 INIT_LIST_HEAD(&cc.migratepages); 1799 1807 1808 + if (kthread_should_stop()) 1809 + return; 1800 1810 status = compact_zone(zone, &cc); 1801 1811 1802 1812 if (zone_watermark_ok(zone, cc.order, low_wmark_pages(zone),
+2 -2
mm/huge_memory.c
··· 3452 3452 } 3453 3453 } 3454 3454 3455 - pr_info("%lu of %lu THP split", split, total); 3455 + pr_info("%lu of %lu THP split\n", split, total); 3456 3456 3457 3457 return 0; 3458 3458 } ··· 3463 3463 { 3464 3464 void *ret; 3465 3465 3466 - ret = debugfs_create_file("split_huge_pages", 0644, NULL, NULL, 3466 + ret = debugfs_create_file("split_huge_pages", 0200, NULL, NULL, 3467 3467 &split_huge_pages_fops); 3468 3468 if (!ret) 3469 3469 pr_warn("Failed to create split_huge_pages in debugfs");
+2 -9
mm/memory.c
··· 1222 1222 next = pmd_addr_end(addr, end); 1223 1223 if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) { 1224 1224 if (next - addr != HPAGE_PMD_SIZE) { 1225 - #ifdef CONFIG_DEBUG_VM 1226 - if (!rwsem_is_locked(&tlb->mm->mmap_sem)) { 1227 - pr_err("%s: mmap_sem is unlocked! addr=0x%lx end=0x%lx vma->vm_start=0x%lx vma->vm_end=0x%lx\n", 1228 - __func__, addr, end, 1229 - vma->vm_start, 1230 - vma->vm_end); 1231 - BUG(); 1232 - } 1233 - #endif 1225 + VM_BUG_ON_VMA(vma_is_anonymous(vma) && 1226 + !rwsem_is_locked(&tlb->mm->mmap_sem), vma); 1234 1227 split_huge_pmd(vma, pmd, addr); 1235 1228 } else if (zap_huge_pmd(tlb, vma, pmd, addr)) 1236 1229 goto next;
+4 -2
mm/page-writeback.c
··· 1910 1910 if (gdtc->dirty > gdtc->bg_thresh) 1911 1911 return true; 1912 1912 1913 - if (wb_stat(wb, WB_RECLAIMABLE) > __wb_calc_thresh(gdtc)) 1913 + if (wb_stat(wb, WB_RECLAIMABLE) > 1914 + wb_calc_thresh(gdtc->wb, gdtc->bg_thresh)) 1914 1915 return true; 1915 1916 1916 1917 if (mdtc) { ··· 1925 1924 if (mdtc->dirty > mdtc->bg_thresh) 1926 1925 return true; 1927 1926 1928 - if (wb_stat(wb, WB_RECLAIMABLE) > __wb_calc_thresh(mdtc)) 1927 + if (wb_stat(wb, WB_RECLAIMABLE) > 1928 + wb_calc_thresh(mdtc->wb, mdtc->bg_thresh)) 1929 1929 return true; 1930 1930 } 1931 1931
+1 -1
mm/page_alloc.c
··· 6485 6485 setup_per_zone_inactive_ratio(); 6486 6486 return 0; 6487 6487 } 6488 - module_init(init_per_zone_wmark_min) 6488 + core_initcall(init_per_zone_wmark_min) 6489 6489 6490 6490 /* 6491 6491 * min_free_kbytes_sysctl_handler - just a wrapper around proc_dointvec() so
+5 -2
mm/zsmalloc.c
··· 1735 1735 static unsigned long zs_can_compact(struct size_class *class) 1736 1736 { 1737 1737 unsigned long obj_wasted; 1738 + unsigned long obj_allocated = zs_stat_get(class, OBJ_ALLOCATED); 1739 + unsigned long obj_used = zs_stat_get(class, OBJ_USED); 1738 1740 1739 - obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) - 1740 - zs_stat_get(class, OBJ_USED); 1741 + if (obj_allocated <= obj_used) 1742 + return 0; 1741 1743 1744 + obj_wasted = obj_allocated - obj_used; 1742 1745 obj_wasted /= get_maxobj_per_zspage(class->size, 1743 1746 class->pages_per_zspage); 1744 1747
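zs_can_compact() subtracts two unsigned counters that are sampled without synchronization, so OBJ_ALLOCATED can transiently be observed below OBJ_USED; without the new guard the subtraction would wrap around to a huge "wasted" value. The guard in miniature:

```c
#include <assert.h>

/* Unsigned-underflow guard: if the allocated count was sampled lower
 * than the used count (a transient race, as in zs_can_compact()),
 * report zero waste instead of letting the subtraction wrap. */
static unsigned long demo_wasted(unsigned long allocated, unsigned long used)
{
	if (allocated <= used)
		return 0;
	return allocated - used;
}
```

Without the early return, `demo_wasted(3, 10)` would yield `(unsigned long)-7`, which upstream code would read as an enormous amount of reclaimable space.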
+7 -1
mm/zswap.c
··· 170 170 static LIST_HEAD(zswap_pools); 171 171 /* protects zswap_pools list modification */ 172 172 static DEFINE_SPINLOCK(zswap_pools_lock); 173 + /* pool counter to provide unique names to zpool */ 174 + static atomic_t zswap_pools_count = ATOMIC_INIT(0); 173 175 174 176 /* used by param callback function */ 175 177 static bool zswap_init_started; ··· 567 565 static struct zswap_pool *zswap_pool_create(char *type, char *compressor) 568 566 { 569 567 struct zswap_pool *pool; 568 + char name[38]; /* 'zswap' + 32 char (max) num + \0 */ 570 569 gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM; 571 570 572 571 pool = kzalloc(sizeof(*pool), GFP_KERNEL); ··· 576 573 return NULL; 577 574 } 578 575 579 - pool->zpool = zpool_create_pool(type, "zswap", gfp, &zswap_zpool_ops); 576 + /* unique name for each pool specifically required by zsmalloc */ 577 + snprintf(name, 38, "zswap%x", atomic_inc_return(&zswap_pools_count)); 578 + 579 + pool->zpool = zpool_create_pool(type, name, gfp, &zswap_zpool_ops); 580 580 if (!pool->zpool) { 581 581 pr_err("%s zpool not available\n", type); 582 582 goto error;
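The zswap fix derives a unique zpool name from a global counter instead of passing the fixed string "zswap", since zsmalloc requires distinct per-pool names for its statistics. A sketch of the naming scheme (the `demo_` names are illustrative; the kernel uses atomic_inc_return() on zswap_pools_count):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Global pool counter; a plain int stands in for the kernel's
 * atomic_t, which makes the increment safe under concurrency. */
static int demo_pool_count;

/* Format "zswap<hex id>" into buf; 38 bytes covers "zswap" plus a
 * 32-character number and the terminating NUL, as in the patch. */
static void demo_pool_name(char *buf, size_t len)
{
	snprintf(buf, len, "zswap%x", ++demo_pool_count);
}
```

Two successive pools now get distinct names ("zswap1", "zswap2", ...), so creating a second zsmalloc-backed pool no longer collides with the first.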
+12
net/batman-adv/bat_v.c
··· 32 32 33 33 #include "bat_v_elp.h" 34 34 #include "bat_v_ogm.h" 35 + #include "hard-interface.h" 35 36 #include "hash.h" 36 37 #include "originator.h" 37 38 #include "packet.h" 39 + 40 + static void batadv_v_iface_activate(struct batadv_hard_iface *hard_iface) 41 + { 42 + /* B.A.T.M.A.N. V does not use any queuing mechanism, therefore it can 43 + * set the interface as ACTIVE right away, without any risk of race 44 + * condition 45 + */ 46 + if (hard_iface->if_status == BATADV_IF_TO_BE_ACTIVATED) 47 + hard_iface->if_status = BATADV_IF_ACTIVE; 48 + } 38 49 39 50 static int batadv_v_iface_enable(struct batadv_hard_iface *hard_iface) 40 51 { ··· 285 274 286 275 static struct batadv_algo_ops batadv_batman_v __read_mostly = { 287 276 .name = "BATMAN_V", 277 + .bat_iface_activate = batadv_v_iface_activate, 288 278 .bat_iface_enable = batadv_v_iface_enable, 289 279 .bat_iface_disable = batadv_v_iface_disable, 290 280 .bat_iface_update_mac = batadv_v_iface_update_mac,
+10 -7
net/batman-adv/distributed-arp-table.c
··· 568 568 * be sent to 569 569 * @bat_priv: the bat priv with all the soft interface information 570 570 * @ip_dst: ipv4 to look up in the DHT 571 + * @vid: VLAN identifier 571 572 * 572 573 * An originator O is selected if and only if its DHT_ID value is one of three 573 574 * closest values (from the LEFT, with wrap around if needed) then the hash ··· 577 576 * Return: the candidate array of size BATADV_DAT_CANDIDATE_NUM. 578 577 */ 579 578 static struct batadv_dat_candidate * 580 - batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst) 579 + batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst, 580 + unsigned short vid) 581 581 { 582 582 int select; 583 583 batadv_dat_addr_t last_max = BATADV_DAT_ADDR_MAX, ip_key; ··· 594 592 return NULL; 595 593 596 594 dat.ip = ip_dst; 597 - dat.vid = 0; 595 + dat.vid = vid; 598 596 ip_key = (batadv_dat_addr_t)batadv_hash_dat(&dat, 599 597 BATADV_DAT_ADDR_MAX); 600 598 ··· 614 612 * @bat_priv: the bat priv with all the soft interface information 615 613 * @skb: payload to send 616 614 * @ip: the DHT key 615 + * @vid: VLAN identifier 617 616 * @packet_subtype: unicast4addr packet subtype to use 618 617 * 619 618 * This function copies the skb with pskb_copy() and is sent as unicast packet ··· 625 622 */ 626 623 static bool batadv_dat_send_data(struct batadv_priv *bat_priv, 627 624 struct sk_buff *skb, __be32 ip, 628 - int packet_subtype) 625 + unsigned short vid, int packet_subtype) 629 626 { 630 627 int i; 631 628 bool ret = false; ··· 634 631 struct sk_buff *tmp_skb; 635 632 struct batadv_dat_candidate *cand; 636 633 637 - cand = batadv_dat_select_candidates(bat_priv, ip); 634 + cand = batadv_dat_select_candidates(bat_priv, ip, vid); 638 635 if (!cand) 639 636 goto out; 640 637 ··· 1025 1022 ret = true; 1026 1023 } else { 1027 1024 /* Send the request to the DHT */ 1028 - ret = batadv_dat_send_data(bat_priv, skb, ip_dst, 1025 + ret = batadv_dat_send_data(bat_priv, skb, ip_dst, vid, 
1029 1026 BATADV_P_DAT_DHT_GET); 1030 1027 } 1031 1028 out: ··· 1153 1150 /* Send the ARP reply to the candidates for both the IP addresses that 1154 1151 * the node obtained from the ARP reply 1155 1152 */ 1156 - batadv_dat_send_data(bat_priv, skb, ip_src, BATADV_P_DAT_DHT_PUT); 1157 - batadv_dat_send_data(bat_priv, skb, ip_dst, BATADV_P_DAT_DHT_PUT); 1153 + batadv_dat_send_data(bat_priv, skb, ip_src, vid, BATADV_P_DAT_DHT_PUT); 1154 + batadv_dat_send_data(bat_priv, skb, ip_dst, vid, BATADV_P_DAT_DHT_PUT); 1158 1155 } 1159 1156 1160 1157 /**
+4 -2
net/batman-adv/hard-interface.c
··· 407 407 408 408 batadv_update_min_mtu(hard_iface->soft_iface); 409 409 410 + if (bat_priv->bat_algo_ops->bat_iface_activate) 411 + bat_priv->bat_algo_ops->bat_iface_activate(hard_iface); 412 + 410 413 out: 411 414 if (primary_if) 412 415 batadv_hardif_put(primary_if); ··· 575 572 struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface); 576 573 struct batadv_hard_iface *primary_if = NULL; 577 574 578 - if (hard_iface->if_status == BATADV_IF_ACTIVE) 579 - batadv_hardif_deactivate_interface(hard_iface); 575 + batadv_hardif_deactivate_interface(hard_iface); 580 576 581 577 if (hard_iface->if_status != BATADV_IF_INACTIVE) 582 578 goto out;
+6 -11
net/batman-adv/originator.c
··· 250 250 { 251 251 struct hlist_node *node_tmp; 252 252 struct batadv_neigh_node *neigh_node; 253 - struct batadv_hardif_neigh_node *hardif_neigh; 254 253 struct batadv_neigh_ifinfo *neigh_ifinfo; 255 254 struct batadv_algo_ops *bao; 256 255 ··· 261 262 batadv_neigh_ifinfo_put(neigh_ifinfo); 262 263 } 263 264 264 - hardif_neigh = batadv_hardif_neigh_get(neigh_node->if_incoming, 265 - neigh_node->addr); 266 - if (hardif_neigh) { 267 - /* batadv_hardif_neigh_get() increases refcount too */ 268 - batadv_hardif_neigh_put(hardif_neigh); 269 - batadv_hardif_neigh_put(hardif_neigh); 270 - } 265 + batadv_hardif_neigh_put(neigh_node->hardif_neigh); 271 266 272 267 if (bao->bat_neigh_free) 273 268 bao->bat_neigh_free(neigh_node); ··· 656 663 ether_addr_copy(neigh_node->addr, neigh_addr); 657 664 neigh_node->if_incoming = hard_iface; 658 665 neigh_node->orig_node = orig_node; 666 + neigh_node->last_seen = jiffies; 667 + 668 + /* increment unique neighbor refcount */ 669 + kref_get(&hardif_neigh->refcount); 670 + neigh_node->hardif_neigh = hardif_neigh; 659 671 660 672 /* extra reference for return */ 661 673 kref_init(&neigh_node->refcount); ··· 669 671 spin_lock_bh(&orig_node->neigh_list_lock); 670 672 hlist_add_head_rcu(&neigh_node->list, &orig_node->neigh_list); 671 673 spin_unlock_bh(&orig_node->neigh_list_lock); 672 - 673 - /* increment unique neighbor refcount */ 674 - kref_get(&hardif_neigh->refcount); 675 674 676 675 batadv_dbg(BATADV_DBG_BATMAN, orig_node->bat_priv, 677 676 "Creating new neighbor %pM for orig_node %pM on interface %s\n",
+9
net/batman-adv/routing.c
··· 105 105 neigh_node = NULL; 106 106 107 107 spin_lock_bh(&orig_node->neigh_list_lock); 108 + /* curr_router used earlier may not be the current orig_ifinfo->router 109 + * anymore because it was dereferenced outside of the neigh_list_lock 110 + * protected region. After the new best neighbor has replace the current 111 + * best neighbor the reference counter needs to decrease. Consequently, 112 + * the code needs to ensure the curr_router variable contains a pointer 113 + * to the replaced best neighbor. 114 + */ 115 + curr_router = rcu_dereference_protected(orig_ifinfo->router, true); 116 + 108 117 rcu_assign_pointer(orig_ifinfo->router, neigh_node); 109 118 spin_unlock_bh(&orig_node->neigh_list_lock); 110 119 batadv_orig_ifinfo_put(orig_ifinfo);
+6
net/batman-adv/send.c
··· 675 675 676 676 if (pending) { 677 677 hlist_del(&forw_packet->list); 678 + if (!forw_packet->own) 679 + atomic_inc(&bat_priv->bcast_queue_left); 680 + 678 681 batadv_forw_packet_free(forw_packet); 679 682 } 680 683 } ··· 705 702 706 703 if (pending) { 707 704 hlist_del(&forw_packet->list); 705 + if (!forw_packet->own) 706 + atomic_inc(&bat_priv->batman_queue_left); 707 + 708 708 batadv_forw_packet_free(forw_packet); 709 709 } 710 710 }
+6 -2
net/batman-adv/soft-interface.c
··· 408 408 */ 409 409 nf_reset(skb); 410 410 411 + if (unlikely(!pskb_may_pull(skb, ETH_HLEN))) 412 + goto dropped; 413 + 411 414 vid = batadv_get_vid(skb, 0); 412 415 ethhdr = eth_hdr(skb); 413 416 414 417 switch (ntohs(ethhdr->h_proto)) { 415 418 case ETH_P_8021Q: 419 + if (!pskb_may_pull(skb, VLAN_ETH_HLEN)) 420 + goto dropped; 421 + 416 422 vhdr = (struct vlan_ethhdr *)skb->data; 417 423 418 424 if (vhdr->h_vlan_encapsulated_proto != ethertype) ··· 430 424 } 431 425 432 426 /* skb->dev & skb->pkt_type are set here */ 433 - if (unlikely(!pskb_may_pull(skb, ETH_HLEN))) 434 - goto dropped; 435 427 skb->protocol = eth_type_trans(skb, soft_iface); 436 428 437 429 /* should not be necessary anymore as we use skb_pull_rcsum()
+4 -38
net/batman-adv/translation-table.c
··· 215 215 tt_local_entry = container_of(ref, struct batadv_tt_local_entry, 216 216 common.refcount); 217 217 218 + batadv_softif_vlan_put(tt_local_entry->vlan); 219 + 218 220 kfree_rcu(tt_local_entry, common.rcu); 219 221 } 220 222 ··· 675 673 kref_get(&tt_local->common.refcount); 676 674 tt_local->last_seen = jiffies; 677 675 tt_local->common.added_at = tt_local->last_seen; 676 + tt_local->vlan = vlan; 678 677 679 678 /* the batman interface mac and multicast addresses should never be 680 679 * purged ··· 994 991 struct batadv_tt_common_entry *tt_common_entry; 995 992 struct batadv_tt_local_entry *tt_local; 996 993 struct batadv_hard_iface *primary_if; 997 - struct batadv_softif_vlan *vlan; 998 994 struct hlist_head *head; 999 995 unsigned short vid; 1000 996 u32 i; ··· 1029 1027 last_seen_msecs = last_seen_msecs % 1000; 1030 1028 1031 1029 no_purge = tt_common_entry->flags & np_flag; 1032 - 1033 - vlan = batadv_softif_vlan_get(bat_priv, vid); 1034 - if (!vlan) { 1035 - seq_printf(seq, "Cannot retrieve VLAN %d\n", 1036 - BATADV_PRINT_VID(vid)); 1037 - continue; 1038 - } 1039 - 1040 1030 seq_printf(seq, 1041 1031 " * %pM %4i [%c%c%c%c%c%c] %3u.%03u (%#.8x)\n", 1042 1032 tt_common_entry->addr, ··· 1046 1052 BATADV_TT_CLIENT_ISOLA) ? 'I' : '.'), 1047 1053 no_purge ? 0 : last_seen_secs, 1048 1054 no_purge ? 
0 : last_seen_msecs, 1049 - vlan->tt.crc); 1050 - 1051 - batadv_softif_vlan_put(vlan); 1055 + tt_local->vlan->tt.crc); 1052 1056 } 1053 1057 rcu_read_unlock(); 1054 1058 } ··· 1091 1099 { 1092 1100 struct batadv_tt_local_entry *tt_local_entry; 1093 1101 u16 flags, curr_flags = BATADV_NO_FLAGS; 1094 - struct batadv_softif_vlan *vlan; 1095 1102 void *tt_entry_exists; 1096 1103 1097 1104 tt_local_entry = batadv_tt_local_hash_find(bat_priv, addr, vid); ··· 1129 1138 1130 1139 /* extra call to free the local tt entry */ 1131 1140 batadv_tt_local_entry_put(tt_local_entry); 1132 - 1133 - /* decrease the reference held for this vlan */ 1134 - vlan = batadv_softif_vlan_get(bat_priv, vid); 1135 - if (!vlan) 1136 - goto out; 1137 - 1138 - batadv_softif_vlan_put(vlan); 1139 - batadv_softif_vlan_put(vlan); 1140 1141 1141 1142 out: 1142 1143 if (tt_local_entry) ··· 1202 1219 spinlock_t *list_lock; /* protects write access to the hash lists */ 1203 1220 struct batadv_tt_common_entry *tt_common_entry; 1204 1221 struct batadv_tt_local_entry *tt_local; 1205 - struct batadv_softif_vlan *vlan; 1206 1222 struct hlist_node *node_tmp; 1207 1223 struct hlist_head *head; 1208 1224 u32 i; ··· 1222 1240 tt_local = container_of(tt_common_entry, 1223 1241 struct batadv_tt_local_entry, 1224 1242 common); 1225 - 1226 - /* decrease the reference held for this vlan */ 1227 - vlan = batadv_softif_vlan_get(bat_priv, 1228 - tt_common_entry->vid); 1229 - if (vlan) { 1230 - batadv_softif_vlan_put(vlan); 1231 - batadv_softif_vlan_put(vlan); 1232 - } 1233 1243 1234 1244 batadv_tt_local_entry_put(tt_local); 1235 1245 } ··· 3283 3309 struct batadv_hashtable *hash = bat_priv->tt.local_hash; 3284 3310 struct batadv_tt_common_entry *tt_common; 3285 3311 struct batadv_tt_local_entry *tt_local; 3286 - struct batadv_softif_vlan *vlan; 3287 3312 struct hlist_node *node_tmp; 3288 3313 struct hlist_head *head; 3289 3314 spinlock_t *list_lock; /* protects write access to the hash lists */ ··· 3311 3338 tt_local = 
container_of(tt_common, 3312 3339 struct batadv_tt_local_entry, 3313 3340 common); 3314 - 3315 - /* decrease the reference held for this vlan */ 3316 - vlan = batadv_softif_vlan_get(bat_priv, tt_common->vid); 3317 - if (vlan) { 3318 - batadv_softif_vlan_put(vlan); 3319 - batadv_softif_vlan_put(vlan); 3320 - } 3321 3341 3322 3342 batadv_tt_local_entry_put(tt_local); 3323 3343 }
+7
net/batman-adv/types.h
···
433 433 * @ifinfo_lock: lock protecting private ifinfo members and list
434 434 * @if_incoming: pointer to incoming hard-interface
435 435 * @last_seen: when last packet via this neighbor was received
436 + * @hardif_neigh: hardif_neigh of this neighbor
436 437 * @refcount: number of contexts the object is used
437 438 * @rcu: struct used for freeing in an RCU-safe manner
438 439 */
···
445 444 spinlock_t ifinfo_lock; /* protects ifinfo_list and its members */
446 445 struct batadv_hard_iface *if_incoming;
447 446 unsigned long last_seen;
447 + struct batadv_hardif_neigh_node *hardif_neigh;
448 448 struct kref refcount;
449 449 struct rcu_head rcu;
450 450 };
···
1075 1073 * struct batadv_tt_local_entry - translation table local entry data
1076 1074 * @common: general translation table data
1077 1075 * @last_seen: timestamp used for purging stale tt local entries
1076 + * @vlan: soft-interface vlan of the entry
1078 1077 */
1079 1078 struct batadv_tt_local_entry {
1080 1079 struct batadv_tt_common_entry common;
1081 1080 unsigned long last_seen;
1081 + struct batadv_softif_vlan *vlan;
1082 1082 };
1083 1083
1084 1084 /**
···
1254 1250 * struct batadv_algo_ops - mesh algorithm callbacks
1255 1251 * @list: list node for the batadv_algo_list
1256 1252 * @name: name of the algorithm
1253 + * @bat_iface_activate: start routing mechanisms when hard-interface is brought
1254 + * up
1257 1255 * @bat_iface_enable: init routing info when hard-interface is enabled
1258 1256 * @bat_iface_disable: de-init routing info when hard-interface is disabled
1259 1257 * @bat_iface_update_mac: (re-)init mac addresses of the protocol information
···
1283 1277 struct batadv_algo_ops {
1284 1278 struct hlist_node list;
1285 1279 char *name;
1280 + void (*bat_iface_activate)(struct batadv_hard_iface *hard_iface);
1286 1281 int (*bat_iface_enable)(struct batadv_hard_iface *hard_iface);
1287 1282 void (*bat_iface_disable)(struct batadv_hard_iface *hard_iface);
1288 1283 void (*bat_iface_update_mac)(struct batadv_hard_iface *hard_iface);
+3 -2
net/bridge/br_ioctl.c
···
21 21 #include <asm/uaccess.h>
22 22 #include "br_private.h"
23 23
24 - /* called with RTNL */
25 24 static int get_bridge_ifindices(struct net *net, int *indices, int num)
26 25 {
27 26 struct net_device *dev;
28 27 int i = 0;
29 28
30 - for_each_netdev(net, dev) {
29 + rcu_read_lock();
30 + for_each_netdev_rcu(net, dev) {
31 31 if (i >= num)
32 32 break;
33 33 if (dev->priv_flags & IFF_EBRIDGE)
34 34 indices[i++] = dev->ifindex;
35 35 }
36 + rcu_read_unlock();
36 37
37 38 return i;
38 39 }
+7 -5
net/bridge/br_multicast.c
···
1279 1279 struct br_ip saddr;
1280 1280 unsigned long max_delay;
1281 1281 unsigned long now = jiffies;
1282 + unsigned int offset = skb_transport_offset(skb);
1282 1283 __be32 group;
1283 1284 int err = 0;
···
1290 1289
1291 1290 group = ih->group;
1292 1291
1293 - if (skb->len == sizeof(*ih)) {
1292 + if (skb->len == offset + sizeof(*ih)) {
1294 1293 max_delay = ih->code * (HZ / IGMP_TIMER_SCALE);
1295 1294
1296 1295 if (!max_delay) {
1297 1296 max_delay = 10 * HZ;
1298 1297 group = 0;
1299 1298 }
1300 - } else if (skb->len >= sizeof(*ih3)) {
1299 + } else if (skb->len >= offset + sizeof(*ih3)) {
1301 1300 ih3 = igmpv3_query_hdr(skb);
1302 1301 if (ih3->nsrcs)
1303 1302 goto out;
···
1358 1357 struct br_ip saddr;
1359 1358 unsigned long max_delay;
1360 1359 unsigned long now = jiffies;
1360 + unsigned int offset = skb_transport_offset(skb);
1361 1361 const struct in6_addr *group = NULL;
1362 1362 bool is_general_query;
1363 1363 int err = 0;
···
1368 1366 (port && port->state == BR_STATE_DISABLED))
1369 1367 goto out;
1370 1368
1371 - if (skb->len == sizeof(*mld)) {
1372 - if (!pskb_may_pull(skb, sizeof(*mld))) {
1369 + if (skb->len == offset + sizeof(*mld)) {
1370 + if (!pskb_may_pull(skb, offset + sizeof(*mld))) {
1373 1371 err = -EINVAL;
1374 1372 goto out;
1375 1373 }
···
1378 1376 if (max_delay)
1379 1377 group = &mld->mld_mca;
1380 1378 } else {
1381 - if (!pskb_may_pull(skb, sizeof(*mld2q))) {
1379 + if (!pskb_may_pull(skb, offset + sizeof(*mld2q))) {
1382 1380 err = -EINVAL;
1383 1381 goto out;
1384 1382 }
+1 -1
net/core/dev.c
···
2802 2802
2803 2803 if (skb->ip_summed != CHECKSUM_NONE &&
2804 2804 !can_checksum_protocol(features, type)) {
2805 - features &= ~NETIF_F_CSUM_MASK;
2805 + features &= ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
2806 2806 } else if (illegal_highdma(skb->dev, skb)) {
2807 2807 features &= ~NETIF_F_SG;
2808 2808 }
+13 -1
net/core/flow.c
···
92 92 list_splice_tail_init(&xfrm->flow_cache_gc_list, &gc_list);
93 93 spin_unlock_bh(&xfrm->flow_cache_gc_lock);
94 94
95 - list_for_each_entry_safe(fce, n, &gc_list, u.gc_list)
95 + list_for_each_entry_safe(fce, n, &gc_list, u.gc_list) {
96 96 flow_entry_kill(fce, xfrm);
97 + atomic_dec(&xfrm->flow_cache_gc_count);
98 + WARN_ON(atomic_read(&xfrm->flow_cache_gc_count) < 0);
99 + }
97 100 }
98 101
99 102 static void flow_cache_queue_garbage(struct flow_cache_percpu *fcp,
···
104 101 struct netns_xfrm *xfrm)
105 102 {
106 103 if (deleted) {
104 + atomic_add(deleted, &xfrm->flow_cache_gc_count);
107 105 fcp->hash_count -= deleted;
108 106 spin_lock_bh(&xfrm->flow_cache_gc_lock);
109 107 list_splice_tail(gc_list, &xfrm->flow_cache_gc_list);
···
235 231 if (unlikely(!fle)) {
236 232 if (fcp->hash_count > fc->high_watermark)
237 233 flow_cache_shrink(fc, fcp);
234 +
235 + if (fcp->hash_count > 2 * fc->high_watermark ||
236 + atomic_read(&net->xfrm.flow_cache_gc_count) > fc->high_watermark) {
237 + atomic_inc(&net->xfrm.flow_cache_genid);
238 + flo = ERR_PTR(-ENOBUFS);
239 + goto ret_object;
240 + }
238 241
239 242 fle = kmem_cache_alloc(flow_cachep, GFP_ATOMIC);
240 243 if (fle) {
···
457 446 INIT_WORK(&net->xfrm.flow_cache_gc_work, flow_cache_gc_task);
458 447 INIT_WORK(&net->xfrm.flow_cache_flush_work, flow_cache_flush_task);
459 448 mutex_init(&net->xfrm.flow_flush_sem);
449 + atomic_set(&net->xfrm.flow_cache_gc_count, 0);
460 450
461 451 fc->hash_shift = 10;
462 452 fc->low_watermark = 2 * flow_cache_hash_size(fc);
+10 -8
net/core/rtnetlink.c
···
1180 1180
1181 1181 static int rtnl_fill_link_ifmap(struct sk_buff *skb, struct net_device *dev)
1182 1182 {
1183 - struct rtnl_link_ifmap map = {
1184 - .mem_start = dev->mem_start,
1185 - .mem_end = dev->mem_end,
1186 - .base_addr = dev->base_addr,
1187 - .irq = dev->irq,
1188 - .dma = dev->dma,
1189 - .port = dev->if_port,
1190 - };
1183 + struct rtnl_link_ifmap map;
1184 +
1185 + memset(&map, 0, sizeof(map));
1186 + map.mem_start = dev->mem_start;
1187 + map.mem_end = dev->mem_end;
1188 + map.base_addr = dev->base_addr;
1189 + map.irq = dev->irq;
1190 + map.dma = dev->dma;
1191 + map.port = dev->if_port;
1192 +
1191 1193 if (nla_put(skb, IFLA_MAP, sizeof(map), &map))
1192 1194 return -EMSGSIZE;
1193 1195
+4 -2
net/ipv4/fou.c
···
228 228 int err = -ENOSYS;
229 229 const struct net_offload **offloads;
230 230
231 - udp_tunnel_gro_complete(skb, nhoff);
232 -
233 231 rcu_read_lock();
234 232 offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
235 233 ops = rcu_dereference(offloads[proto]);
···
235 237 goto out_unlock;
236 238
237 239 err = ops->callbacks.gro_complete(skb, nhoff);
240 +
241 + skb_set_inner_mac_header(skb, nhoff);
238 242
239 243 out_unlock:
240 244 rcu_read_unlock();
···
413 413 goto out_unlock;
414 414
415 415 err = ops->callbacks.gro_complete(skb, nhoff + guehlen);
416 +
417 + skb_set_inner_mac_header(skb, nhoff + guehlen);
416 418
417 419 out_unlock:
418 420 rcu_read_unlock();
+2
net/ipv4/inet_hashtables.c
···
470 470 const struct sock *sk2,
471 471 bool match_wildcard))
472 472 {
473 + struct inet_bind_bucket *tb = inet_csk(sk)->icsk_bind_hash;
473 474 struct sock *sk2;
474 475 struct hlist_nulls_node *node;
475 476 kuid_t uid = sock_i_uid(sk);
···
480 479 sk2->sk_family == sk->sk_family &&
481 480 ipv6_only_sock(sk2) == ipv6_only_sock(sk) &&
482 481 sk2->sk_bound_dev_if == sk->sk_bound_dev_if &&
482 + inet_csk(sk2)->icsk_bind_hash == tb &&
483 483 sk2->sk_reuseport && uid_eq(uid, sock_i_uid(sk2)) &&
484 484 saddr_same(sk, sk2, false))
485 485 return reuseport_add_sock(sk, sk2);
+21 -9
net/ipv4/ip_gre.c
···
179 179 return flags;
180 180 }
181 181
182 + /* Fills in tpi and returns header length to be pulled. */
182 183 static int parse_gre_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
183 184 bool *csum_err)
184 185 {
···
239 238 return -EINVAL;
240 239 }
241 240 }
242 - return iptunnel_pull_header(skb, hdr_len, tpi->proto, false);
241 + return hdr_len;
243 242 }
244 243
245 244 static void ipgre_err(struct sk_buff *skb, u32 info,
···
342 341 struct tnl_ptk_info tpi;
343 342 bool csum_err = false;
344 343
345 - if (parse_gre_header(skb, &tpi, &csum_err)) {
344 + if (parse_gre_header(skb, &tpi, &csum_err) < 0) {
346 345 if (!csum_err) /* ignore csum errors. */
347 346 return;
348 347 }
···
420 419 {
421 420 struct tnl_ptk_info tpi;
422 421 bool csum_err = false;
422 + int hdr_len;
423 423
424 424 #ifdef CONFIG_NET_IPGRE_BROADCAST
425 425 if (ipv4_is_multicast(ip_hdr(skb)->daddr)) {
···
430 428 }
431 429 #endif
432 430
433 - if (parse_gre_header(skb, &tpi, &csum_err) < 0)
431 + hdr_len = parse_gre_header(skb, &tpi, &csum_err);
432 + if (hdr_len < 0)
433 + goto drop;
434 + if (iptunnel_pull_header(skb, hdr_len, tpi.proto, false) < 0)
434 435 goto drop;
435 436
436 437 if (ipgre_rcv(skb, &tpi) == PACKET_RCVD)
···
528 523 return ip_route_output_key(net, fl);
529 524 }
530 525
531 - static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev)
526 + static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev,
527 + __be16 proto)
532 528 {
533 529 struct ip_tunnel_info *tun_info;
534 530 const struct ip_tunnel_key *key;
···
581 575 }
582 576
583 577 flags = tun_info->key.tun_flags & (TUNNEL_CSUM | TUNNEL_KEY);
584 - build_header(skb, tunnel_hlen, flags, htons(ETH_P_TEB),
578 + build_header(skb, tunnel_hlen, flags, proto,
585 579 tunnel_id_to_key(tun_info->key.tun_id), 0);
586 580
587 581 df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0;
···
622 616 const struct iphdr *tnl_params;
623 617
624 618 if (tunnel->collect_md) {
625 - gre_fb_xmit(skb, dev);
619 + gre_fb_xmit(skb, dev, skb->protocol);
626 620 return NETDEV_TX_OK;
627 621 }
628 622
···
666 660 struct ip_tunnel *tunnel = netdev_priv(dev);
667 661
668 662 if (tunnel->collect_md) {
669 - gre_fb_xmit(skb, dev);
663 + gre_fb_xmit(skb, dev, htons(ETH_P_TEB));
670 664 return NETDEV_TX_OK;
671 665 }
672 666
···
899 893 netif_keep_dst(dev);
900 894 dev->addr_len = 4;
901 895
902 - if (iph->daddr) {
896 + if (iph->daddr && !tunnel->collect_md) {
903 897 #ifdef CONFIG_NET_IPGRE_BROADCAST
904 898 if (ipv4_is_multicast(iph->daddr)) {
905 899 if (!iph->saddr)
···
908 902 dev->header_ops = &ipgre_header_ops;
909 903 }
910 904 #endif
911 - } else
905 + } else if (!tunnel->collect_md) {
912 906 dev->header_ops = &ipgre_header_ops;
907 + }
913 908
914 909 return ip_tunnel_init(dev);
915 910 }
···
951 944 if (data[IFLA_GRE_OFLAGS])
952 945 flags |= nla_get_be16(data[IFLA_GRE_OFLAGS]);
953 946 if (flags & (GRE_VERSION|GRE_ROUTING))
947 + return -EINVAL;
948 +
949 + if (data[IFLA_GRE_COLLECT_METADATA] &&
950 + data[IFLA_GRE_ENCAP_TYPE] &&
951 + nla_get_u16(data[IFLA_GRE_ENCAP_TYPE]) != TUNNEL_ENCAP_NONE)
954 952 return -EINVAL;
955 953
956 954 return 0;
+2 -2
net/ipv4/ip_tunnel.c
···
326 326
327 327 if (!IS_ERR(rt)) {
328 328 tdev = rt->dst.dev;
329 - dst_cache_set_ip4(&tunnel->dst_cache, &rt->dst,
330 - fl4.saddr);
331 329 ip_rt_put(rt);
332 330 }
333 331 if (dev->type != ARPHRD_ETHER)
334 332 dev->flags |= IFF_POINTOPOINT;
333 +
334 + dst_cache_reset(&tunnel->dst_cache);
335 335 }
336 336
337 337 if (!tdev && tunnel->parms.link)
+18
net/ipv4/ip_vti.c
···
156 156 struct dst_entry *dst = skb_dst(skb);
157 157 struct net_device *tdev; /* Device to other host */
158 158 int err;
159 + int mtu;
159 160
160 161 if (!dst) {
161 162 dev->stats.tx_carrier_errors++;
···
191 190 dst_link_failure(skb);
192 191 } else
193 192 tunnel->err_count = 0;
193 + }
194 +
195 + mtu = dst_mtu(dst);
196 + if (skb->len > mtu) {
197 + skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu);
198 + if (skb->protocol == htons(ETH_P_IP)) {
199 + icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
200 + htonl(mtu));
201 + } else {
202 + if (mtu < IPV6_MIN_MTU)
203 + mtu = IPV6_MIN_MTU;
204 +
205 + icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
206 + }
207 +
208 + dst_release(dst);
209 + goto tx_error;
194 210 }
195 211
196 212 skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(dev)));
+5 -3
net/ipv4/udp_offload.c
···
399 399
400 400 uh->len = newlen;
401 401
402 + /* Set encapsulation before calling into inner gro_complete() functions
403 + * to make them set up the inner offsets.
404 + */
405 + skb->encapsulation = 1;
406 +
402 407 rcu_read_lock();
403 408
404 409 uo_priv = rcu_dereference(udp_offload_base);
···
425 420
426 421 if (skb->remcsum_offload)
427 422 skb_shinfo(skb)->gso_type |= SKB_GSO_TUNNEL_REMCSUM;
428 -
429 - skb->encapsulation = 1;
430 - skb_set_inner_mac_header(skb, nhoff + sizeof(struct udphdr));
431 423
432 424 return err;
433 425 }
+2 -3
net/ipv6/icmp.c
···
445 445
446 446 if (__ipv6_addr_needs_scope_id(addr_type))
447 447 iif = skb->dev->ifindex;
448 + else
449 + iif = l3mdev_master_ifindex(skb->dev);
448 450
449 451 /*
450 452 * Must not send error if the source does not uniquely
···
500 498 fl6.flowi6_oif = np->mcast_oif;
501 499 else if (!fl6.flowi6_oif)
502 500 fl6.flowi6_oif = np->ucast_oif;
503 -
504 - if (!fl6.flowi6_oif)
505 - fl6.flowi6_oif = l3mdev_master_ifindex(skb->dev);
506 501
507 502 dst = icmpv6_route_lookup(net, skb, sk, &fl6);
508 503 if (IS_ERR(dst))
+1 -2
net/ipv6/ila/ila_lwt.c
···
120 120
121 121 static int ila_encap_nlsize(struct lwtunnel_state *lwtstate)
122 122 {
123 - /* No encapsulation overhead */
124 - return 0;
123 + return nla_total_size(sizeof(u64)); /* ILA_ATTR_LOCATOR */
125 124 }
126 125
127 126 static int ila_encap_cmp(struct lwtunnel_state *a, struct lwtunnel_state *b)
+6 -1
net/ipv6/tcp_ipv6.c
···
810 810 fl6.flowi6_proto = IPPROTO_TCP;
811 811 if (rt6_need_strict(&fl6.daddr) && !oif)
812 812 fl6.flowi6_oif = tcp_v6_iif(skb);
813 - else
813 + else {
814 + if (!oif && netif_index_is_l3_master(net, skb->skb_iif))
815 + oif = skb->skb_iif;
816 +
814 817 fl6.flowi6_oif = oif;
818 + }
819 +
815 820 fl6.flowi6_mark = IP6_REPLY_MARK(net, skb->mark);
816 821 fl6.fl6_dport = t1->dest;
817 822 fl6.fl6_sport = t1->source;
+2 -2
net/l2tp/l2tp_core.c
···
1376 1376 memcpy(&udp_conf.peer_ip6, cfg->peer_ip6,
1377 1377 sizeof(udp_conf.peer_ip6));
1378 1378 udp_conf.use_udp6_tx_checksums =
1379 - cfg->udp6_zero_tx_checksums;
1379 + ! cfg->udp6_zero_tx_checksums;
1380 1380 udp_conf.use_udp6_rx_checksums =
1381 - cfg->udp6_zero_rx_checksums;
1381 + ! cfg->udp6_zero_rx_checksums;
1382 1382 } else
1383 1383 #endif
1384 1384 {
+1
net/llc/af_llc.c
···
626 626 if (llc->cmsg_flags & LLC_CMSG_PKTINFO) {
627 627 struct llc_pktinfo info;
628 628
629 + memset(&info, 0, sizeof(info));
629 630 info.lpi_ifindex = llc_sk(skb->sk)->dev->ifindex;
630 631 llc_pdu_decode_dsap(skb, &info.lpi_sap);
631 632 llc_pdu_decode_da(skb, info.lpi_mac);
+2 -2
net/mac80211/iface.c
···
1761 1761
1762 1762 ret = dev_alloc_name(ndev, ndev->name);
1763 1763 if (ret < 0) {
1764 - free_netdev(ndev);
1764 + ieee80211_if_free(ndev);
1765 1765 return ret;
1766 1766 }
···
1847 1847
1848 1848 ret = register_netdevice(ndev);
1849 1849 if (ret) {
1850 - free_netdev(ndev);
1850 + ieee80211_if_free(ndev);
1851 1851 return ret;
1852 1852 }
1853 1853 }
+2 -1
net/rds/tcp.c
···
127 127
128 128 /*
129 129 * This is the only path that sets tc->t_sock. Send and receive trust that
130 - * it is set. The RDS_CONN_CONNECTED bit protects those paths from being
130 + * it is set. The RDS_CONN_UP bit protects those paths from being
131 131 * called while it isn't set.
132 132 */
133 133 void rds_tcp_set_callbacks(struct socket *sock, struct rds_connection *conn)
···
216 216 if (!tc)
217 217 return -ENOMEM;
218 218
219 + mutex_init(&tc->t_conn_lock);
219 220 tc->t_sock = NULL;
220 221 tc->t_tinc = NULL;
221 222 tc->t_tinc_hdr_rem = sizeof(struct rds_header);
+4
net/rds/tcp.h
···
12 12
13 13 struct list_head t_tcp_node;
14 14 struct rds_connection *conn;
15 + /* t_conn_lock synchronizes the connection establishment between
16 + * rds_tcp_accept_one and rds_tcp_conn_connect
17 + */
18 + struct mutex t_conn_lock;
15 19 struct socket *t_sock;
16 20 void *t_orig_write_space;
17 21 void *t_orig_data_ready;
+8
net/rds/tcp_connect.c
···
78 78 struct socket *sock = NULL;
79 79 struct sockaddr_in src, dest;
80 80 int ret;
81 + struct rds_tcp_connection *tc = conn->c_transport_data;
81 82
83 + mutex_lock(&tc->t_conn_lock);
84 +
85 + if (rds_conn_up(conn)) {
86 + mutex_unlock(&tc->t_conn_lock);
87 + return 0;
88 + }
82 89 ret = sock_create_kern(rds_conn_net(conn), PF_INET,
83 90 SOCK_STREAM, IPPROTO_TCP, &sock);
84 91 if (ret < 0)
···
127 120 }
128 121
129 122 out:
123 + mutex_unlock(&tc->t_conn_lock);
130 124 if (sock)
131 125 sock_release(sock);
132 126 return ret;
+36 -18
net/rds/tcp_listen.c
···
76 76 struct rds_connection *conn;
77 77 int ret;
78 78 struct inet_sock *inet;
79 - struct rds_tcp_connection *rs_tcp;
79 + struct rds_tcp_connection *rs_tcp = NULL;
80 + int conn_state;
81 + struct sock *nsk;
80 82
81 83 ret = sock_create_kern(sock_net(sock->sk), sock->sk->sk_family,
82 84 sock->sk->sk_type, sock->sk->sk_protocol,
···
117 115 * rds_tcp_state_change() will do that cleanup
118 116 */
119 117 rs_tcp = (struct rds_tcp_connection *)conn->c_transport_data;
120 - if (rs_tcp->t_sock &&
121 - ntohl(inet->inet_saddr) < ntohl(inet->inet_daddr)) {
122 - struct sock *nsk = new_sock->sk;
123 -
124 - nsk->sk_user_data = NULL;
125 - nsk->sk_prot->disconnect(nsk, 0);
126 - tcp_done(nsk);
127 - new_sock = NULL;
128 - ret = 0;
129 - goto out;
130 - } else if (rs_tcp->t_sock) {
131 - rds_tcp_restore_callbacks(rs_tcp->t_sock, rs_tcp);
132 - conn->c_outgoing = 0;
133 - }
134 -
135 118 rds_conn_transition(conn, RDS_CONN_DOWN, RDS_CONN_CONNECTING);
119 + mutex_lock(&rs_tcp->t_conn_lock);
120 + conn_state = rds_conn_state(conn);
121 + if (conn_state != RDS_CONN_CONNECTING && conn_state != RDS_CONN_UP)
122 + goto rst_nsk;
123 + if (rs_tcp->t_sock) {
124 + /* Need to resolve a duelling SYN between peers.
125 + * We have an outstanding SYN to this peer, which may
126 + * potentially have transitioned to the RDS_CONN_UP state,
127 + * so we must quiesce any send threads before resetting
128 + * c_transport_data.
129 + */
130 + wait_event(conn->c_waitq,
131 + !test_bit(RDS_IN_XMIT, &conn->c_flags));
132 + if (ntohl(inet->inet_saddr) < ntohl(inet->inet_daddr)) {
133 + goto rst_nsk;
134 + } else if (rs_tcp->t_sock) {
135 + rds_tcp_restore_callbacks(rs_tcp->t_sock, rs_tcp);
136 + conn->c_outgoing = 0;
137 + }
138 + }
136 139 rds_tcp_set_callbacks(new_sock, conn);
137 140 rds_connect_complete(conn); /* marks RDS_CONN_UP */
138 141 new_sock = NULL;
139 142 ret = 0;
140 - 143 + goto out;
144 + rst_nsk:
145 + /* reset the newly returned accept sock and bail */
146 + nsk = new_sock->sk;
147 + rds_tcp_stats_inc(s_tcp_listen_closed_stale);
148 + nsk->sk_user_data = NULL;
149 + nsk->sk_prot->disconnect(nsk, 0);
150 + tcp_done(nsk);
151 + new_sock = NULL;
152 + ret = 0;
141 153 out:
154 + if (rs_tcp)
155 + mutex_unlock(&rs_tcp->t_conn_lock);
142 156 if (new_sock)
143 157 sock_release(new_sock);
144 158 return ret;
+59 -2
net/sched/sch_netem.c
···
395 395 sch->q.qlen++;
396 396 }
397 397
398 + /* netem can't properly corrupt a megapacket (like we get from GSO), so instead
399 + * when we statistically choose to corrupt one, we instead segment it, returning
400 + * the first packet to be corrupted, and re-enqueue the remaining frames
401 + */
402 + static struct sk_buff *netem_segment(struct sk_buff *skb, struct Qdisc *sch)
403 + {
404 + struct sk_buff *segs;
405 + netdev_features_t features = netif_skb_features(skb);
406 +
407 + segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
408 +
409 + if (IS_ERR_OR_NULL(segs)) {
410 + qdisc_reshape_fail(skb, sch);
411 + return NULL;
412 + }
413 + consume_skb(skb);
414 + return segs;
415 + }
416 +
398 417 /*
399 418 * Insert one skb into qdisc.
400 419 * Note: parent depends on return value to account for queue length.
···
426 407 /* We don't fill cb now as skb_unshare() may invalidate it */
427 408 struct netem_skb_cb *cb;
428 409 struct sk_buff *skb2;
410 + struct sk_buff *segs = NULL;
411 + unsigned int len = 0, last_len, prev_len = qdisc_pkt_len(skb);
412 + int nb = 0;
429 413 int count = 1;
414 + int rc = NET_XMIT_SUCCESS;
430 415
431 416 /* Random duplication */
432 417 if (q->duplicate && q->duplicate >= get_crandom(&q->dup_cor))
···
476 453 * do it now in software before we mangle it.
477 454 */
478 455 if (q->corrupt && q->corrupt >= get_crandom(&q->corrupt_cor)) {
456 + if (skb_is_gso(skb)) {
457 + segs = netem_segment(skb, sch);
458 + if (!segs)
459 + return NET_XMIT_DROP;
460 + } else {
461 + segs = skb;
462 + }
463 +
464 + skb = segs;
465 + segs = segs->next;
466 +
479 467 if (!(skb = skb_unshare(skb, GFP_ATOMIC)) ||
480 468 (skb->ip_summed == CHECKSUM_PARTIAL &&
481 - skb_checksum_help(skb)))
482 - return qdisc_drop(skb, sch);
469 + skb_checksum_help(skb))) {
470 + rc = qdisc_drop(skb, sch);
471 + goto finish_segs;
472 + }
483 473
484 474 skb->data[prandom_u32() % skb_headlen(skb)] ^=
485 475 1<<(prandom_u32() % 8);
···
552 516 sch->qstats.requeues++;
553 517 }
554 518
519 + finish_segs:
520 + if (segs) {
521 + while (segs) {
522 + skb2 = segs->next;
523 + segs->next = NULL;
524 + qdisc_skb_cb(segs)->pkt_len = segs->len;
525 + last_len = segs->len;
526 + rc = qdisc_enqueue(segs, sch);
527 + if (rc != NET_XMIT_SUCCESS) {
528 + if (net_xmit_drop_count(rc))
529 + qdisc_qstats_drop(sch);
530 + } else {
531 + nb++;
532 + len += last_len;
533 + }
534 + segs = skb2;
535 + }
536 + sch->q.qlen += nb;
537 + if (nb > 1)
538 + qdisc_tree_reduce_backlog(sch, 1 - nb, prev_len - len);
539 + }
555 540 return NET_XMIT_SUCCESS;
556 541 }
557 542
+5
net/tipc/node.c
···
1444 1444 int bearer_id = b->identity;
1445 1445 struct tipc_link_entry *le;
1446 1446 u16 bc_ack = msg_bcast_ack(hdr);
1447 + u32 self = tipc_own_addr(net);
1447 1448 int rc = 0;
1448 1449
1449 1450 __skb_queue_head_init(&xmitq);
···
1460 1459 else
1461 1460 return tipc_node_bc_rcv(net, skb, bearer_id);
1462 1461 }
1462 +
1463 + /* Discard unicast link messages destined for another node */
1464 + if (unlikely(!msg_short(hdr) && (msg_destnode(hdr) != self)))
1465 + goto discard;
1463 1466
1464 1467 /* Locate neighboring node that sent packet */
1465 1468 n = tipc_node_find(net, msg_prevnode(hdr));
+1 -20
net/vmw_vsock/af_vsock.c
···
1808 1808 else if (sk->sk_shutdown & RCV_SHUTDOWN)
1809 1809 err = 0;
1810 1810
1811 - if (copied > 0) {
1812 - /* We only do these additional bookkeeping/notification steps
1813 - * if we actually copied something out of the queue pair
1814 - * instead of just peeking ahead.
1815 - */
1816 -
1817 - if (!(flags & MSG_PEEK)) {
1818 - /* If the other side has shutdown for sending and there
1819 - * is nothing more to read, then modify the socket
1820 - * state.
1821 - */
1822 - if (vsk->peer_shutdown & SEND_SHUTDOWN) {
1823 - if (vsock_stream_has_data(vsk) <= 0) {
1824 - sk->sk_state = SS_UNCONNECTED;
1825 - sock_set_flag(sk, SOCK_DONE);
1826 - sk->sk_state_change(sk);
1827 - }
1828 - }
1829 - }
1811 + if (copied > 0)
1830 1812 err = copied;
1831 - }
1832 1813
1833 1814 out:
1834 1815 release_sock(sk);
+3
net/xfrm/xfrm_output.c
···
99 99
100 100 skb_dst_force(skb);
101 101
102 + /* Inner headers are invalid now. */
103 + skb->encapsulation = 0;
104 +
102 105 err = x->type->output(x, skb);
103 106 if (err == -EINPROGRESS)
104 107 goto out;
-1
samples/bpf/trace_output_kern.c
···
18 18 u64 cookie;
19 19 } data;
20 20
21 - memset(&data, 0, sizeof(data));
22 21 data.pid = bpf_get_current_pid_tgid();
23 22 data.cookie = 0x12345678;
24 23
+45 -24
scripts/mod/file2alias.c
···
371 371 do_usb_entry_multi(symval + i, mod);
372 372 }
373 373
374 + static void do_of_entry_multi(void *symval, struct module *mod)
375 + {
376 + char alias[500];
377 + int len;
378 + char *tmp;
379 +
380 + DEF_FIELD_ADDR(symval, of_device_id, name);
381 + DEF_FIELD_ADDR(symval, of_device_id, type);
382 + DEF_FIELD_ADDR(symval, of_device_id, compatible);
383 +
384 + len = sprintf(alias, "of:N%sT%s", (*name)[0] ? *name : "*",
385 + (*type)[0] ? *type : "*");
386 +
387 + if (compatible[0])
388 + sprintf(&alias[len], "%sC%s", (*type)[0] ? "*" : "",
389 + *compatible);
390 +
391 + /* Replace all whitespace with underscores */
392 + for (tmp = alias; tmp && *tmp; tmp++)
393 + if (isspace(*tmp))
394 + *tmp = '_';
395 +
396 + buf_printf(&mod->dev_table_buf, "MODULE_ALIAS(\"%s\");\n", alias);
397 + strcat(alias, "C");
398 + add_wildcard(alias);
399 + buf_printf(&mod->dev_table_buf, "MODULE_ALIAS(\"%s\");\n", alias);
400 + }
401 +
402 + static void do_of_table(void *symval, unsigned long size,
403 + struct module *mod)
404 + {
405 + unsigned int i;
406 + const unsigned long id_size = SIZE_of_device_id;
407 +
408 + device_id_check(mod->name, "of", size, id_size, symval);
409 +
410 + /* Leave last one: it's the terminator. */
411 + size -= id_size;
412 +
413 + for (i = 0; i < size; i += id_size)
414 + do_of_entry_multi(symval + i, mod);
415 + }
416 +
374 417 /* Looks like: hid:bNvNpN */
375 418 static int do_hid_entry(const char *filename,
376 419 void *symval, char *alias)
···
726 683 return 1;
727 684 }
728 685 ADD_TO_DEVTABLE("pcmcia", pcmcia_device_id, do_pcmcia_entry);
729 686
730 - static int do_of_entry (const char *filename, void *symval, char *alias)
731 - {
732 - int len;
733 - char *tmp;
734 - DEF_FIELD_ADDR(symval, of_device_id, name);
735 - DEF_FIELD_ADDR(symval, of_device_id, type);
736 - DEF_FIELD_ADDR(symval, of_device_id, compatible);
737 -
738 - len = sprintf(alias, "of:N%sT%s", (*name)[0] ? *name : "*",
739 - (*type)[0] ? *type : "*");
740 -
741 - if (compatible[0])
742 - sprintf(&alias[len], "%sC%s", (*type)[0] ? "*" : "",
743 - *compatible);
744 -
745 - /* Replace all whitespace with underscores */
746 - for (tmp = alias; tmp && *tmp; tmp++)
747 - if (isspace (*tmp))
748 - *tmp = '_';
749 -
750 - return 1;
751 - }
752 - ADD_TO_DEVTABLE("of", of_device_id, do_of_entry);
753 -
754 687 static int do_vio_entry(const char *filename, void *symval,
755 688 char *alias)
···
1367 1348 /* First handle the "special" cases */
1368 1349 if (sym_is(name, namelen, "usb"))
1369 1350 do_usb_table(symval, sym->st_size, mod);
1351 + if (sym_is(name, namelen, "of"))
1352 + do_of_table(symval, sym->st_size, mod);
1370 1353 else if (sym_is(name, namelen, "pnp"))
1371 1354 do_pnp_device_entry(symval, sym->st_size, mod);
1372 1355 else if (sym_is(name, namelen, "pnp_card"))
+2 -2
security/integrity/ima/ima_policy.c
···
884 884 "BPRM_CHECK",
885 885 "MODULE_CHECK",
886 886 "FIRMWARE_CHECK",
887 + "POST_SETATTR",
887 888 "KEXEC_KERNEL_CHECK",
888 889 "KEXEC_INITRAMFS_CHECK",
889 - "POLICY_CHECK",
890 - "POST_SETATTR"
890 + "POLICY_CHECK"
891 891 };
892 892
893 893 void *ima_policy_start(struct seq_file *m, loff_t *pos)
+3
tools/net/bpf_jit_disasm.c
···
98 98 char *buff;
99 99
100 100 len = klogctl(CMD_ACTION_SIZE_BUFFER, NULL, 0);
101 + if (len < 0)
102 + return NULL;
103 +
101 104 buff = malloc(len);
102 105 if (!buff)
103 106 return NULL;
+3
tools/perf/util/sort.c
···
2438 2438
2439 2439 static char *setup_overhead(char *keys)
2440 2440 {
2441 + if (sort__mode == SORT_MODE__DIFF)
2442 + return keys;
2443 +
2441 2444 keys = prefix_if_not_in("overhead", keys);
2442 2445
2443 2446 if (symbol_conf.cumulate_callchain)