
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/ethernet/emulex/benet/be.h
drivers/net/netconsole.c
net/bridge/br_private.h

Three mostly trivial conflicts.

The net/bridge/br_private.h conflict was a function signature (argument
addition) change overlapping with the extern removals from Joe Perches.

In drivers/net/netconsole.c we had one change adjusting a printk message
whilst another changed "printk(KERN_INFO" into "pr_info(".

Lastly, the emulex change was a new inline function addition overlapping
with Joe Perches's extern removals.

Signed-off-by: David S. Miller <davem@davemloft.net>

+1496 -1169
+2 -2
Documentation/networking/dccp.txt
··· 18 18 Datagram Congestion Control Protocol (DCCP) is an unreliable, connection 19 19 oriented protocol designed to solve issues present in UDP and TCP, particularly 20 20 for real-time and multimedia (streaming) traffic. 21 - It divides into a base protocol (RFC 4340) and plugable congestion control 22 - modules called CCIDs. Like plugable TCP congestion control, at least one CCID 21 + It divides into a base protocol (RFC 4340) and pluggable congestion control 22 + modules called CCIDs. Like pluggable TCP congestion control, at least one CCID 23 23 needs to be enabled in order for the protocol to function properly. In the Linux 24 24 implementation, this is the TCP-like CCID2 (RFC 4341). Additional CCIDs, such as 25 25 the TCP-friendly CCID3 (RFC 4342), are optional.
+1 -1
Documentation/networking/e100.txt
··· 103 103 PRO/100 Family of Adapters is e100. 104 104 105 105 As an example, if you install the e100 driver for two PRO/100 adapters 106 - (eth0 and eth1), add the following to a configuraton file in /etc/modprobe.d/ 106 + (eth0 and eth1), add the following to a configuration file in /etc/modprobe.d/ 107 107 108 108 alias eth0 e100 109 109 alias eth1 e100
+2 -2
Documentation/networking/ieee802154.txt
··· 4 4 5 5 Introduction 6 6 ============ 7 - The IEEE 802.15.4 working group focuses on standartization of bottom 7 + The IEEE 802.15.4 working group focuses on standardization of bottom 8 8 two layers: Medium Access Control (MAC) and Physical (PHY). And there 9 9 are mainly two options available for upper layers: 10 10 - ZigBee - proprietary protocol from ZigBee Alliance ··· 66 66 code via plain sk_buffs. On skb reception skb->cb must contain additional 67 67 info as described in the struct ieee802154_mac_cb. During packet transmission 68 68 the skb->cb is used to provide additional data to device's header_ops->create 69 - function. Be aware, that this data can be overriden later (when socket code 69 + function. Be aware that this data can be overridden later (when socket code 70 70 submits skb to qdisc), so if you need something from that cb later, you should 71 71 store info in the skb->data on your own. 72 72
+1 -1
Documentation/networking/l2tp.txt
··· 197 197 implemented to provide extra debug information to help diagnose 198 198 problems.) Users should use the netlink API. 199 199 200 - /proc/net/pppol2tp is also provided for backwards compaibility with 200 + /proc/net/pppol2tp is also provided for backwards compatibility with 201 201 the original pppol2tp driver. It lists information about L2TPv2 202 202 tunnels and sessions only. Its use is discouraged. 203 203
+12 -12
Documentation/networking/netdev-FAQ.txt
··· 4 4 5 5 Q: What is netdev? 6 6 7 - A: It is a mailing list for all network related linux stuff. This includes 7 + A: It is a mailing list for all network-related Linux stuff. This includes 8 8 anything found under net/ (i.e. core code like IPv6) and drivers/net 9 - (i.e. hardware specific drivers) in the linux source tree. 9 + (i.e. hardware specific drivers) in the Linux source tree. 10 10 11 11 Note that some subsystems (e.g. wireless drivers) which have a high volume 12 12 of traffic have their own specific mailing lists. 13 13 14 - The netdev list is managed (like many other linux mailing lists) through 14 + The netdev list is managed (like many other Linux mailing lists) through 15 15 VGER ( http://vger.kernel.org/ ) and archives can be found below: 16 16 17 17 http://marc.info/?l=linux-netdev 18 18 http://www.spinics.net/lists/netdev/ 19 19 20 - Aside from subsystems like that mentioned above, all network related linux 21 - development (i.e. RFC, review, comments, etc) takes place on netdev. 20 + Aside from subsystems like that mentioned above, all network-related Linux 21 + development (i.e. RFC, review, comments, etc.) takes place on netdev. 22 22 23 - Q: How do the changes posted to netdev make their way into linux? 23 + Q: How do the changes posted to netdev make their way into Linux? 24 24 25 25 A: There are always two trees (git repositories) in play. Both are driven 26 26 by David Miller, the main network maintainer. There is the "net" tree, ··· 35 35 Q: How often do changes from these trees make it to the mainline Linus tree? 36 36 37 37 A: To understand this, you need to know a bit of background information 38 - on the cadence of linux development. Each new release starts off with 38 + on the cadence of Linux development. Each new release starts off with 39 39 a two week "merge window" where the main maintainers feed their new 40 40 stuff to Linus for merging into the mainline tree. After the two weeks, 41 41 the merge window is closed, and it is called/tagged "-rc1". No new ··· 46 46 things are in a state of churn), and a week after the last vX.Y-rcN 47 47 was done, the official "vX.Y" is released. 48 48 49 - Relating that to netdev: At the beginning of the 2 week merge window, 49 + Relating that to netdev: At the beginning of the 2-week merge window, 50 50 the net-next tree will be closed - no new changes/features. The 51 51 accumulated new content of the past ~10 weeks will be passed onto 52 52 mainline/Linus via a pull request for vX.Y -- at the same time, ··· 59 59 IMPORTANT: Do not send new net-next content to netdev during the 60 60 period during which net-next tree is closed. 61 61 62 - Shortly after the two weeks have passed, (and vX.Y-rc1 is released) the 62 + Shortly after the two weeks have passed (and vX.Y-rc1 is released), the 63 63 tree for net-next reopens to collect content for the next (vX.Y+1) release. 64 64 65 65 If you aren't subscribed to netdev and/or are simply unsure if net-next 66 66 has re-opened yet, simply check the net-next git repository link above for 67 - any new networking related commits. 67 + any new networking-related commits. 68 68 69 69 The "net" tree continues to collect fixes for the vX.Y content, and 70 70 is fed back to Linus at regular (~weekly) intervals. Meaning that the 71 - focus for "net" is on stablilization and bugfixes. 71 + focus for "net" is on stabilization and bugfixes. 72 72 73 73 Finally, the vX.Y gets released, and the whole cycle starts over. 74 74 ··· 217 217 to why it happens, and then if necessary, explain why the fix proposed 218 218 is the best way to get things done. Don't mangle whitespace, and as 219 219 is common, don't mis-indent function arguments that span multiple lines. 220 - If it is your 1st patch, mail it to yourself so you can test apply 220 + If it is your first patch, mail it to yourself so you can test apply 221 221 it to an unpatched tree to confirm infrastructure didn't mangle it. 222 222 223 223 Finally, go back and read Documentation/SubmittingPatches to be
+2 -2
Documentation/networking/operstates.txt
··· 89 89 it as lower layer. 90 90 91 91 Note that for certain kind of soft-devices, which are not managing any 92 - real hardware, there is possible to set this bit from userpsace. 93 - One should use TVL IFLA_CARRIER to do so. 92 + real hardware, it is possible to set this bit from userspace. One 93 + should use TVL IFLA_CARRIER to do so. 94 94 95 95 netif_carrier_ok() can be used to query that bit. 96 96
+1 -1
Documentation/networking/rxrpc.txt
··· 144 144 (*) Calls use ACK packets to handle reliability. Data packets are also 145 145 explicitly sequenced per call. 146 146 147 - (*) There are two types of positive acknowledgement: hard-ACKs and soft-ACKs. 147 + (*) There are two types of positive acknowledgment: hard-ACKs and soft-ACKs. 148 148 A hard-ACK indicates to the far side that all the data received to a point 149 149 has been received and processed; a soft-ACK indicates that the data has 150 150 been received but may yet be discarded and re-requested. The sender may
+4 -4
Documentation/networking/stmmac.txt
··· 160 160 o pmt: core has the embedded power module (optional). 161 161 o force_sf_dma_mode: force DMA to use the Store and Forward mode 162 162 instead of the Threshold. 163 - o force_thresh_dma_mode: force DMA to use the Shreshold mode other than 163 + o force_thresh_dma_mode: force DMA to use the Threshold mode other than 164 164 the Store and Forward mode. 165 165 o riwt_off: force to disable the RX watchdog feature and switch to NAPI mode. 166 166 o fix_mac_speed: this callback is used for modifying some syscfg registers ··· 175 175 registers. 176 176 o custom_cfg/custom_data: this is a custom configuration that can be passed 177 177 while initializing the resources. 178 - o bsp_priv: another private poiter. 178 + o bsp_priv: another private pointer. 179 179 180 180 For MDIO bus The we have: 181 181 ··· 271 271 o dwmac1000_dma.c: dma functions for the GMAC chip; 272 272 o dwmac1000.h: specific header file for the GMAC; 273 273 o dwmac100_core: MAC 100 core and dma code; 274 - o dwmac100_dma.c: dma funtions for the MAC chip; 274 + o dwmac100_dma.c: dma functions for the MAC chip; 275 275 o dwmac1000.h: specific header file for the MAC; 276 276 o dwmac_lib.c: generic DMA functions shared among chips; 277 277 o enh_desc.c: functions for handling enhanced descriptors; ··· 364 364 10) TODO: 365 365 o XGMAC is not supported. 366 366 o Complete the TBI & RTBI support. 367 - o extened VLAN support for 3.70a SYNP GMAC. 367 + o extend VLAN support for 3.70a SYNP GMAC.
+2 -2
Documentation/networking/vortex.txt
··· 68 68 69 69 There are several parameters which may be provided to the driver when 70 70 its module is loaded. These are usually placed in /etc/modprobe.d/*.conf 71 - configuretion files. Example: 71 + configuration files. Example: 72 72 73 73 options 3c59x debug=3 rx_copybreak=300 74 74 ··· 178 178 179 179 The driver's interrupt service routine can handle many receive and 180 180 transmit packets in a single invocation. It does this in a loop. 181 - The value of max_interrupt_work governs how mnay times the interrupt 181 + The value of max_interrupt_work governs how many times the interrupt 182 182 service routine will loop. The default value is 32 loops. If this 183 183 is exceeded the interrupt service routine gives up and generates a 184 184 warning message "eth0: Too much work in interrupt".
+1 -1
Documentation/networking/x25-iface.txt
··· 105 105 later. 106 106 The lapb module interface was modified to support this. Its 107 107 data_indication() method should now transparently pass the 108 - netif_rx() return value to the (lapb mopdule) caller. 108 + netif_rx() return value to the (lapb module) caller. 109 109 (2) Drivers for kernel versions 2.2.x should always check the global 110 110 variable netdev_dropping when a new frame is received. The driver 111 111 should only call netif_rx() if netdev_dropping is zero. Otherwise
+78 -53
MAINTAINERS
··· 1009 1009 M: Jason Cooper <jason@lakedaemon.net> 1010 1010 M: Andrew Lunn <andrew@lunn.ch> 1011 1011 M: Gregory Clement <gregory.clement@free-electrons.com> 1012 + M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> 1012 1013 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1013 1014 S: Maintained 1014 1015 F: arch/arm/mach-mvebu/ ··· 1017 1016 ARM/Marvell Dove/Kirkwood/MV78xx0/Orion SOC support 1018 1017 M: Jason Cooper <jason@lakedaemon.net> 1019 1018 M: Andrew Lunn <andrew@lunn.ch> 1019 + M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> 1020 1020 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1021 1021 S: Maintained 1022 1022 F: arch/arm/mach-dove/ ··· 1149 1147 F: drivers/net/ethernet/i825xx/ether1* 1150 1148 F: drivers/net/ethernet/seeq/ether3* 1151 1149 F: drivers/scsi/arm/ 1150 + 1151 + ARM/Rockchip SoC support 1152 + M: Heiko Stuebner <heiko@sntech.de> 1153 + L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1154 + S: Maintained 1155 + F: arch/arm/mach-rockchip/ 1156 + F: drivers/*/*rockchip* 1152 1157 1153 1158 ARM/SHARK MACHINE SUPPORT 1154 1159 M: Alexander Schulz <alex@shark-linux.de> ··· 2728 2719 DMA GENERIC OFFLOAD ENGINE SUBSYSTEM 2729 2720 M: Vinod Koul <vinod.koul@intel.com> 2730 2721 M: Dan Williams <dan.j.williams@intel.com> 2722 + L: dmaengine@vger.kernel.org 2723 + Q: https://patchwork.kernel.org/project/linux-dmaengine/list/ 2731 2724 S: Supported 2732 2725 F: drivers/dma/ 2733 2726 F: include/linux/dma* ··· 2833 2822 L: dri-devel@lists.freedesktop.org 2834 2823 L: linux-tegra@vger.kernel.org 2835 2824 T: git git://anongit.freedesktop.org/tegra/linux.git 2836 - S: Maintained 2825 + S: Supported 2837 2826 F: drivers/gpu/host1x/ 2838 2827 F: include/uapi/drm/tegra_drm.h 2839 2828 F: Documentation/devicetree/bindings/gpu/nvidia,tegra20-host1x.txt ··· 4369 4358 4370 4359 INTEL I/OAT DMA DRIVER 4371 4360 M: Dan Williams <dan.j.williams@intel.com> 4372 - S: Maintained 4361 + M: Dave Jiang <dave.jiang@intel.com> 4362 + L: dmaengine@vger.kernel.org 4363 + Q: https://patchwork.kernel.org/project/linux-dmaengine/list/ 4364 + S: Supported 4373 4365 F: drivers/dma/ioat* 4374 4366 4375 4367 INTEL IOMMU (VT-d) ··· 8324 8310 S: Maintained 8325 8311 F: drivers/media/rc/ttusbir.c 8326 8312 8327 - TEGRA SUPPORT 8313 + TEGRA ARCHITECTURE SUPPORT 8328 8314 M: Stephen Warren <swarren@wwwdotorg.org> 8315 + M: Thierry Reding <thierry.reding@gmail.com> 8329 8316 L: linux-tegra@vger.kernel.org 8330 8317 Q: http://patchwork.ozlabs.org/project/linux-tegra/list/ 8331 8318 T: git git://git.kernel.org/pub/scm/linux/kernel/git/swarren/linux-tegra.git 8332 8319 S: Supported 8333 8320 N: [^a-z]tegra 8321 + 8322 + TEGRA ASOC DRIVER 8323 + M: Stephen Warren <swarren@wwwdotorg.org> 8324 + S: Supported 8325 + F: sound/soc/tegra/ 8326 + 8327 + TEGRA CLOCK DRIVER 8328 + M: Peter De Schrijver <pdeschrijver@nvidia.com> 8329 + M: Prashant Gaikwad <pgaikwad@nvidia.com> 8330 + S: Supported 8331 + F: drivers/clk/tegra/ 8332 + 8333 + TEGRA DMA DRIVER 8334 + M: Laxman Dewangan <ldewangan@nvidia.com> 8335 + S: Supported 8336 + F: drivers/dma/tegra20-apb-dma.c 8337 + 8338 + TEGRA GPIO DRIVER 8339 + M: Stephen Warren <swarren@wwwdotorg.org> 8340 + S: Supported 8341 + F: drivers/gpio/gpio-tegra.c 8342 + 8343 + TEGRA I2C DRIVER 8344 + M: Laxman Dewangan <ldewangan@nvidia.com> 8345 + S: Supported 8346 + F: drivers/i2c/busses/i2c-tegra.c 8347 + 8348 + TEGRA IOMMU DRIVERS 8349 + M: Hiroshi Doyu <hdoyu@nvidia.com> 8350 + S: Supported 8351 + F: drivers/iommu/tegra* 8352 + 8353 + TEGRA KBC DRIVER 8354 + M: Rakesh Iyer <riyer@nvidia.com> 8355 + M: Laxman Dewangan <ldewangan@nvidia.com> 8356 + S: Supported 8357 + F: drivers/input/keyboard/tegra-kbc.c 8358 + 8359 + TEGRA PINCTRL DRIVER 8360 + M: Stephen Warren <swarren@wwwdotorg.org> 8361 + S: Supported 8362 + F: drivers/pinctrl/pinctrl-tegra* 8363 + 8364 + TEGRA PWM DRIVER 8365 + M: Thierry Reding <thierry.reding@gmail.com> 8366 + S: Supported 8367 + F: drivers/pwm/pwm-tegra.c 8368 + 8369 + TEGRA SERIAL DRIVER 8370 + M: Laxman Dewangan <ldewangan@nvidia.com> 8371 + S: Supported 8372 + F: drivers/tty/serial/serial-tegra.c 8373 + 8374 + TEGRA SPI DRIVER 8375 + M: Laxman Dewangan <ldewangan@nvidia.com> 8376 + S: Supported 8377 + F: drivers/spi/spi-tegra* 8334 8378 8335 8379 TEHUTI ETHERNET DRIVER 8336 8380 M: Andy Gospodarek <andy@greyhouse.net> ··· 8925 8853 S: Maintained 8926 8854 F: drivers/net/usb/rtl8150.c 8927 8855 8928 - USB SERIAL BELKIN F5U103 DRIVER 8929 - M: William Greathouse <wgreathouse@smva.com> 8856 + USB SERIAL SUBSYSTEM 8857 + M: Johan Hovold <jhovold@gmail.com> 8930 8858 L: linux-usb@vger.kernel.org 8931 8859 S: Maintained 8932 - F: drivers/usb/serial/belkin_sa.* 8933 - 8934 - USB SERIAL CYPRESS M8 DRIVER 8935 - M: Lonnie Mendez <dignome@gmail.com> 8936 - L: linux-usb@vger.kernel.org 8937 - S: Maintained 8938 - W: http://geocities.com/i0xox0i 8939 - W: http://firstlight.net/cvs 8940 - F: drivers/usb/serial/cypress_m8.* 8941 - 8942 - USB SERIAL CYBERJACK DRIVER 8943 - M: Matthias Bruestle and Harald Welte <support@reiner-sct.com> 8944 - W: http://www.reiner-sct.de/support/treiber_cyberjack.php 8945 - S: Maintained 8946 - F: drivers/usb/serial/cyberjack.c 8947 - 8948 - USB SERIAL DIGI ACCELEPORT DRIVER 8949 - M: Peter Berger <pberger@brimson.com> 8950 - M: Al Borchers <alborchers@steinerpoint.com> 8951 - L: linux-usb@vger.kernel.org 8952 - S: Maintained 8953 - F: drivers/usb/serial/digi_acceleport.c 8954 - 8955 - USB SERIAL DRIVER 8956 - M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 8957 - L: linux-usb@vger.kernel.org 8958 - S: Supported 8959 8860 F: Documentation/usb/usb-serial.txt 8960 - F: drivers/usb/serial/generic.c 8961 - F: drivers/usb/serial/usb-serial.c 8861 + F: drivers/usb/serial/ 8962 8862 F: include/linux/usb/serial.h 8963 - 8964 - USB SERIAL EMPEG EMPEG-CAR MARK I/II DRIVER 8965 - M: Gary Brubaker <xavyer@ix.netcom.com> 8966 - L: linux-usb@vger.kernel.org 8967 - S: Maintained 8968 - F: drivers/usb/serial/empeg.c 8969 - 8970 - USB SERIAL KEYSPAN DRIVER 8971 - M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 8972 - L: linux-usb@vger.kernel.org 8973 - S: Maintained 8974 - F: drivers/usb/serial/*keyspan* 8975 - 8976 - USB SERIAL WHITEHEAT DRIVER 8977 - M: Support Department <support@connecttech.com> 8978 - L: linux-usb@vger.kernel.org 8979 - W: http://www.connecttech.com 8980 - S: Supported 8981 - F: drivers/usb/serial/whiteheat* 8982 8863 8983 8864 USB SMSC75XX ETHERNET DRIVER 8984 8865 M: Steve Glendinning <steve.glendinning@shawell.net>
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 12 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc6 4 + EXTRAVERSION = 5 5 NAME = One Giant Leap for Frogkind 6 6 7 7 # *DOCUMENTATION*
+3 -3
arch/arc/mm/fault.c
··· 17 17 #include <asm/pgalloc.h> 18 18 #include <asm/mmu.h> 19 19 20 - static int handle_vmalloc_fault(struct mm_struct *mm, unsigned long address) 20 + static int handle_vmalloc_fault(unsigned long address) 21 21 { 22 22 /* 23 23 * Synchronize this task's top level page-table ··· 27 27 pud_t *pud, *pud_k; 28 28 pmd_t *pmd, *pmd_k; 29 29 30 - pgd = pgd_offset_fast(mm, address); 30 + pgd = pgd_offset_fast(current->active_mm, address); 31 31 pgd_k = pgd_offset_k(address); 32 32 33 33 if (!pgd_present(*pgd_k)) ··· 72 72 * nothing more. 73 73 */ 74 74 if (address >= VMALLOC_START && address <= VMALLOC_END) { 75 - ret = handle_vmalloc_fault(mm, address); 75 + ret = handle_vmalloc_fault(address); 76 76 if (unlikely(ret)) 77 77 goto bad_area_nosemaphore; 78 78 else
+4 -5
arch/arm/boot/dts/integratorcp.dts
··· 9 9 model = "ARM Integrator/CP"; 10 10 compatible = "arm,integrator-cp"; 11 11 12 - aliases { 13 - arm,timer-primary = &timer2; 14 - arm,timer-secondary = &timer1; 15 - }; 16 - 17 12 chosen { 18 13 bootargs = "root=/dev/ram0 console=ttyAMA0,38400n8 earlyprintk"; 19 14 }; ··· 19 24 }; 20 25 21 26 timer0: timer@13000000 { 27 + /* TIMER0 runs @ 25MHz */ 22 28 compatible = "arm,integrator-cp-timer"; 29 + status = "disabled"; 23 30 }; 24 31 25 32 timer1: timer@13000100 { 33 + /* TIMER1 runs @ 1MHz */ 26 34 compatible = "arm,integrator-cp-timer"; 27 35 }; 28 36 29 37 timer2: timer@13000200 { 38 + /* TIMER2 runs @ 1MHz */ 30 39 compatible = "arm,integrator-cp-timer"; 31 40 }; 32 41
+2 -2
arch/mips/kernel/perf_event_mipsxx.c
··· 971 971 [C(LL)] = { 972 972 [C(OP_READ)] = { 973 973 [C(RESULT_ACCESS)] = { 0x1c, CNTR_ODD, P }, 974 - [C(RESULT_MISS)] = { 0x1d, CNTR_EVEN | CNTR_ODD, P }, 974 + [C(RESULT_MISS)] = { 0x1d, CNTR_EVEN, P }, 975 975 }, 976 976 [C(OP_WRITE)] = { 977 977 [C(RESULT_ACCESS)] = { 0x1c, CNTR_ODD, P }, 978 - [C(RESULT_MISS)] = { 0x1d, CNTR_EVEN | CNTR_ODD, P }, 978 + [C(RESULT_MISS)] = { 0x1d, CNTR_EVEN, P }, 979 979 }, 980 980 }, 981 981 [C(ITLB)] = {
+5 -4
arch/mips/mti-malta/malta-int.c
··· 473 473 { 474 474 int cpu; 475 475 476 - for (cpu = 0; cpu < NR_CPUS; cpu++) { 476 + for (cpu = 0; cpu < nr_cpu_ids; cpu++) { 477 477 fill_ipi_map1(gic_resched_int_base, cpu, GIC_CPU_INT1); 478 478 fill_ipi_map1(gic_call_int_base, cpu, GIC_CPU_INT2); 479 479 } ··· 574 574 /* FIXME */ 575 575 int i; 576 576 #if defined(CONFIG_MIPS_MT_SMP) 577 - gic_call_int_base = GIC_NUM_INTRS - NR_CPUS; 578 - gic_resched_int_base = gic_call_int_base - NR_CPUS; 577 + gic_call_int_base = GIC_NUM_INTRS - 578 + (NR_CPUS - nr_cpu_ids) * 2 - nr_cpu_ids; 579 + gic_resched_int_base = gic_call_int_base - nr_cpu_ids; 579 580 fill_ipi_map(); 580 581 #endif 581 582 gic_init(GIC_BASE_ADDR, GIC_ADDRSPACE_SZ, gic_intr_map, ··· 600 599 printk("CPU%d: status register now %08x\n", smp_processor_id(), read_c0_status()); 601 600 write_c0_status(0x1100dc00); 602 601 printk("CPU%d: status register frc %08x\n", smp_processor_id(), read_c0_status()); 603 - for (i = 0; i < NR_CPUS; i++) { 602 + for (i = 0; i < nr_cpu_ids; i++) { 604 603 arch_init_ipiirq(MIPS_GIC_IRQ_BASE + 605 604 GIC_RESCHED_INT(i), &irq_resched); 606 605 arch_init_ipiirq(MIPS_GIC_IRQ_BASE +
+1 -1
arch/mips/ralink/timer.c
··· 126 126 return -ENOENT; 127 127 } 128 128 129 - rt->membase = devm_request_and_ioremap(&pdev->dev, res); 129 + rt->membase = devm_ioremap_resource(&pdev->dev, res); 130 130 if (IS_ERR(rt->membase)) 131 131 return PTR_ERR(rt->membase); 132 132
+4
arch/parisc/kernel/head.S
··· 195 195 ldw MEM_PDC_HI(%r0),%r6 196 196 depd %r6, 31, 32, %r3 /* move to upper word */ 197 197 198 + mfctl %cr30,%r6 /* PCX-W2 firmware bug */ 199 + 198 200 ldo PDC_PSW(%r0),%arg0 /* 21 */ 199 201 ldo PDC_PSW_SET_DEFAULTS(%r0),%arg1 /* 2 */ 200 202 ldo PDC_PSW_WIDE_BIT(%r0),%arg2 /* 2 */ ··· 205 203 copy %r0,%arg3 206 204 207 205 stext_pdc_ret: 206 + mtctl %r6,%cr30 /* restore task thread info */ 207 + 208 208 /* restore rfi target address*/ 209 209 ldd TI_TASK-THREAD_SZ_ALGN(%sp), %r10 210 210 tophys_r1 %r10
+3 -1
arch/um/kernel/exitcode.c
··· 40 40 const char __user *buffer, size_t count, loff_t *pos) 41 41 { 42 42 char *end, buf[sizeof("nnnnn\0")]; 43 + size_t size; 43 44 int tmp; 44 45 45 - if (copy_from_user(buf, buffer, count)) 46 + size = min(count, sizeof(buf)); 47 + if (copy_from_user(buf, buffer, size)) 46 48 return -EFAULT; 47 49 48 50 tmp = simple_strtol(buf, &end, 0);
+2 -1
arch/x86/include/asm/percpu.h
··· 128 128 do { \ 129 129 typedef typeof(var) pao_T__; \ 130 130 const int pao_ID__ = (__builtin_constant_p(val) && \ 131 - ((val) == 1 || (val) == -1)) ? (val) : 0; \ 131 + ((val) == 1 || (val) == -1)) ? \ 132 + (int)(val) : 0; \ 132 133 if (0) { \ 133 134 pao_T__ pao_tmp__; \ 134 135 pao_tmp__ = (val); \
+3 -3
arch/x86/kernel/cpu/perf_event.c
··· 1276 1276 static int __kprobes 1277 1277 perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs) 1278 1278 { 1279 - int ret; 1280 1279 u64 start_clock; 1281 1280 u64 finish_clock; 1281 + int ret; 1282 1282 1283 1283 if (!atomic_read(&active_events)) 1284 1284 return NMI_DONE; 1285 1285 1286 - start_clock = local_clock(); 1286 + start_clock = sched_clock(); 1287 1287 ret = x86_pmu.handle_irq(regs); 1288 - finish_clock = local_clock(); 1288 + finish_clock = sched_clock(); 1289 1289 1290 1290 perf_sample_event_took(finish_clock - start_clock); 1291 1291
+1 -1
arch/x86/kernel/kvm.c
··· 609 609 610 610 struct dentry *kvm_init_debugfs(void) 611 611 { 612 - d_kvm_debug = debugfs_create_dir("kvm", NULL); 612 + d_kvm_debug = debugfs_create_dir("kvm-guest", NULL); 613 613 if (!d_kvm_debug) 614 614 printk(KERN_WARNING "Could not create 'kvm' debugfs directory\n"); 615 615
+2 -2
arch/x86/kernel/nmi.c
··· 113 113 u64 before, delta, whole_msecs; 114 114 int remainder_ns, decimal_msecs, thishandled; 115 115 116 - before = local_clock(); 116 + before = sched_clock(); 117 117 thishandled = a->handler(type, regs); 118 118 handled += thishandled; 119 - delta = local_clock() - before; 119 + delta = sched_clock() - before; 120 120 trace_nmi_handler(a->handler, (int)delta, thishandled); 121 121 122 122 if (delta < nmi_longest_ns)
+30 -19
arch/xtensa/kernel/entry.S
··· 1122 1122 * a3: exctable, original value in excsave1 1123 1123 */ 1124 1124 1125 - fast_syscall_spill_registers_fixup: 1125 + ENTRY(fast_syscall_spill_registers_fixup) 1126 1126 1127 1127 rsr a2, windowbase # get current windowbase (a2 is saved) 1128 1128 xsr a0, depc # restore depc and a0 ··· 1134 1134 */ 1135 1135 1136 1136 xsr a3, excsave1 # get spill-mask 1137 - slli a2, a3, 1 # shift left by one 1137 + slli a3, a3, 1 # shift left by one 1138 1138 1139 - slli a3, a2, 32-WSBITS 1140 - src a2, a2, a3 # a1 = xxwww1yyxxxwww1yy...... 1139 + slli a2, a3, 32-WSBITS 1140 + src a2, a3, a2 # a2 = xxwww1yyxxxwww1yy...... 1141 1141 wsr a2, windowstart # set corrected windowstart 1142 1142 1143 - rsr a3, excsave1 1144 - l32i a2, a3, EXC_TABLE_DOUBLE_SAVE # restore a2 1145 - l32i a3, a3, EXC_TABLE_PARAM # original WB (in user task) 1143 + srli a3, a3, 1 1144 + rsr a2, excsave1 1145 + l32i a2, a2, EXC_TABLE_DOUBLE_SAVE # restore a2 1146 + xsr a2, excsave1 1147 + s32i a3, a2, EXC_TABLE_DOUBLE_SAVE # save a3 1148 + l32i a3, a2, EXC_TABLE_PARAM # original WB (in user task) 1149 + xsr a2, excsave1 1146 1150 1147 1151 /* Return to the original (user task) WINDOWBASE. 1148 1152 * We leave the following frame behind: 1149 1153 * a0, a1, a2 same 1150 - * a3: trashed (saved in excsave_1) 1154 + * a3: trashed (saved in EXC_TABLE_DOUBLE_SAVE) 1151 1155 * depc: depc (we have to return to that address) 1152 - * excsave_1: a3 1156 + * excsave_1: exctable 1153 1157 */ 1154 1158 1155 1159 wsr a3, windowbase ··· 1163 1159 * a0: return address 1164 1160 * a1: used, stack pointer 1165 1161 * a2: kernel stack pointer 1166 - * a3: available, saved in EXCSAVE_1 1162 + * a3: available 1167 1163 * depc: exception address 1168 - * excsave: a3 1164 + * excsave: exctable 1169 1165 * Note: This frame might be the same as above. 
1170 1166 */ 1171 1167 ··· 1185 1181 rsr a0, exccause 1186 1182 addx4 a0, a0, a3 # find entry in table 1187 1183 l32i a0, a0, EXC_TABLE_FAST_USER # load handler 1184 + l32i a3, a3, EXC_TABLE_DOUBLE_SAVE 1188 1185 jx a0 1189 1186 1190 - fast_syscall_spill_registers_fixup_return: 1187 + ENDPROC(fast_syscall_spill_registers_fixup) 1188 + 1189 + ENTRY(fast_syscall_spill_registers_fixup_return) 1191 1190 1192 1191 /* When we return here, all registers have been restored (a2: DEPC) */ 1193 1192 ··· 1198 1191 1199 1192 /* Restore fixup handler. */ 1200 1193 1201 - xsr a3, excsave1 1202 - movi a2, fast_syscall_spill_registers_fixup 1203 - s32i a2, a3, EXC_TABLE_FIXUP 1204 - s32i a0, a3, EXC_TABLE_DOUBLE_SAVE 1205 - rsr a2, windowbase 1206 - s32i a2, a3, EXC_TABLE_PARAM 1207 - l32i a2, a3, EXC_TABLE_KSTK 1194 + rsr a2, excsave1 1195 + s32i a3, a2, EXC_TABLE_DOUBLE_SAVE 1196 + movi a3, fast_syscall_spill_registers_fixup 1197 + s32i a3, a2, EXC_TABLE_FIXUP 1198 + rsr a3, windowbase 1199 + s32i a3, a2, EXC_TABLE_PARAM 1200 + l32i a2, a2, EXC_TABLE_KSTK 1208 1201 1209 1202 /* Load WB at the time the exception occurred. */ 1210 1203 ··· 1213 1206 wsr a3, windowbase 1214 1207 rsync 1215 1208 1209 + rsr a3, excsave1 1210 + l32i a3, a3, EXC_TABLE_DOUBLE_SAVE 1211 + 1216 1212 rfde 1217 1213 1214 + ENDPROC(fast_syscall_spill_registers_fixup_return) 1218 1215 1219 1216 /* 1220 1217 * spill all registers.
+1 -1
arch/xtensa/kernel/signal.c
··· 341 341 342 342 sp = regs->areg[1]; 343 343 344 - if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && ! on_sig_stack(sp)) { 344 + if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && sas_ss_flags(sp) == 0) { 345 345 sp = current->sas_ss_sp + current->sas_ss_size; 346 346 } 347 347
+2 -1
arch/xtensa/platforms/iss/network.c
··· 737 737 return 1; 738 738 } 739 739 740 - if ((new = alloc_bootmem(sizeof new)) == NULL) { 740 + new = alloc_bootmem(sizeof(*new)); 741 + if (new == NULL) { 741 742 printk("Alloc_bootmem failed\n"); 742 743 return 1; 743 744 }
+21
drivers/clk/clk-nomadik.c
··· 27 27 */ 28 28 29 29 #define SRC_CR 0x00U 30 + #define SRC_CR_T0_ENSEL BIT(15) 31 + #define SRC_CR_T1_ENSEL BIT(17) 32 + #define SRC_CR_T2_ENSEL BIT(19) 33 + #define SRC_CR_T3_ENSEL BIT(21) 34 + #define SRC_CR_T4_ENSEL BIT(23) 35 + #define SRC_CR_T5_ENSEL BIT(25) 36 + #define SRC_CR_T6_ENSEL BIT(27) 37 + #define SRC_CR_T7_ENSEL BIT(29) 30 38 #define SRC_XTALCR 0x0CU 31 39 #define SRC_XTALCR_XTALTIMEN BIT(20) 32 40 #define SRC_XTALCR_SXTALDIS BIT(19) ··· 551 543 __func__, np->name); 552 544 return; 553 545 } 546 + 547 + /* Set all timers to use the 2.4 MHz TIMCLK */ 548 + val = readl(src_base + SRC_CR); 549 + val |= SRC_CR_T0_ENSEL; 550 + val |= SRC_CR_T1_ENSEL; 551 + val |= SRC_CR_T2_ENSEL; 552 + val |= SRC_CR_T3_ENSEL; 553 + val |= SRC_CR_T4_ENSEL; 554 + val |= SRC_CR_T5_ENSEL; 555 + val |= SRC_CR_T6_ENSEL; 556 + val |= SRC_CR_T7_ENSEL; 557 + writel(val, src_base + SRC_CR); 558 + 554 559 val = readl(src_base + SRC_XTALCR); 555 560 pr_info("SXTALO is %s\n", 556 561 (val & SRC_XTALCR_SXTALDIS) ? "disabled" : "enabled");
+2 -2
drivers/clk/mvebu/armada-370.c
··· 39 39 }; 40 40 41 41 static const u32 a370_tclk_freqs[] __initconst = { 42 - 16600000, 43 - 20000000, 42 + 166000000, 43 + 200000000, 44 44 }; 45 45 46 46 static u32 __init a370_get_tclk_freq(void __iomem *sar)
+1 -1
drivers/clk/socfpga/clk.c
··· 49 49 #define SOCFPGA_L4_SP_CLK "l4_sp_clk" 50 50 #define SOCFPGA_NAND_CLK "nand_clk" 51 51 #define SOCFPGA_NAND_X_CLK "nand_x_clk" 52 - #define SOCFPGA_MMC_CLK "mmc_clk" 52 + #define SOCFPGA_MMC_CLK "sdmmc_clk" 53 53 #define SOCFPGA_DB_CLK "gpio_db_clk" 54 54 55 55 #define div_mask(width) ((1 << (width)) - 1)
+1 -1
drivers/clk/versatile/clk-icst.c
··· 107 107 108 108 vco = icst_hz_to_vco(icst->params, rate); 109 109 icst->rate = icst_hz(icst->params, vco); 110 - vco_set(icst->vcoreg, icst->lockreg, vco); 110 + vco_set(icst->lockreg, icst->vcoreg, vco); 111 111 return 0; 112 112 } 113 113
+4 -4
drivers/cpufreq/acpi-cpufreq.c
··· 986 986 { 987 987 int ret; 988 988 989 + if (acpi_disabled) 990 + return -ENODEV; 991 + 989 992 /* don't keep reloading if cpufreq_driver exists */ 990 993 if (cpufreq_get_current_driver()) 991 - return 0; 992 - 993 - if (acpi_disabled) 994 - return 0; 994 + return -EEXIST; 995 995 996 996 pr_debug("acpi_cpufreq_init\n"); 997 997
+18 -20
drivers/cpufreq/intel_pstate.c
··· 48 48 } 49 49 50 50 struct sample { 51 - int core_pct_busy; 51 + int32_t core_pct_busy; 52 52 u64 aperf; 53 53 u64 mperf; 54 54 int freq; ··· 68 68 int32_t i_gain; 69 69 int32_t d_gain; 70 70 int deadband; 71 - int last_err; 71 + int32_t last_err; 72 72 }; 73 73 74 74 struct cpudata { ··· 153 153 pid->d_gain = div_fp(int_tofp(percent), int_tofp(100)); 154 154 } 155 155 156 - static signed int pid_calc(struct _pid *pid, int busy) 156 + static signed int pid_calc(struct _pid *pid, int32_t busy) 157 157 { 158 - signed int err, result; 158 + signed int result; 159 159 int32_t pterm, dterm, fp_error; 160 160 int32_t integral_limit; 161 161 162 - err = pid->setpoint - busy; 163 - fp_error = int_tofp(err); 162 + fp_error = int_tofp(pid->setpoint) - busy; 164 163 165 - if (abs(err) <= pid->deadband) 164 + if (abs(fp_error) <= int_tofp(pid->deadband)) 166 165 return 0; 167 166 168 167 pterm = mul_fp(pid->p_gain, fp_error); ··· 175 176 if (pid->integral < -integral_limit) 176 177 pid->integral = -integral_limit; 177 178 178 - dterm = mul_fp(pid->d_gain, (err - pid->last_err)); 179 - pid->last_err = err; 179 + dterm = mul_fp(pid->d_gain, fp_error - pid->last_err); 180 + pid->last_err = fp_error; 180 181 181 182 result = pterm + mul_fp(pid->integral, pid->i_gain) + dterm; 182 183 ··· 366 367 static void intel_pstate_get_min_max(struct cpudata *cpu, int *min, int *max) 367 368 { 368 369 int max_perf = cpu->pstate.turbo_pstate; 370 + int max_perf_adj; 369 371 int min_perf; 370 372 if (limits.no_turbo) 371 373 max_perf = cpu->pstate.max_pstate; 372 374 373 - max_perf = fp_toint(mul_fp(int_tofp(max_perf), limits.max_perf)); 374 - *max = clamp_t(int, max_perf, 375 + max_perf_adj = fp_toint(mul_fp(int_tofp(max_perf), limits.max_perf)); 376 + *max = clamp_t(int, max_perf_adj, 375 377 cpu->pstate.min_pstate, cpu->pstate.turbo_pstate); 376 378 377 379 min_perf = fp_toint(mul_fp(int_tofp(max_perf), limits.min_perf)); ··· 436 436 struct sample *sample) 437 437 { 438 438 u64 core_pct; 
439 - core_pct = div64_u64(sample->aperf * 100, sample->mperf); 440 - sample->freq = cpu->pstate.max_pstate * core_pct * 1000; 439 + core_pct = div64_u64(int_tofp(sample->aperf * 100), 440 + sample->mperf); 441 + sample->freq = fp_toint(cpu->pstate.max_pstate * core_pct * 1000); 441 442 442 443 sample->core_pct_busy = core_pct; 443 444 } ··· 470 469 mod_timer_pinned(&cpu->timer, jiffies + delay); 471 470 } 472 471 473 - static inline int intel_pstate_get_scaled_busy(struct cpudata *cpu) 472 + static inline int32_t intel_pstate_get_scaled_busy(struct cpudata *cpu) 474 473 { 475 - int32_t busy_scaled; 476 474 int32_t core_busy, max_pstate, current_pstate; 477 475 478 - core_busy = int_tofp(cpu->samples[cpu->sample_ptr].core_pct_busy); 476 + core_busy = cpu->samples[cpu->sample_ptr].core_pct_busy; 479 477 max_pstate = int_tofp(cpu->pstate.max_pstate); 480 478 current_pstate = int_tofp(cpu->pstate.current_pstate); 481 - busy_scaled = mul_fp(core_busy, div_fp(max_pstate, current_pstate)); 482 - 483 - return fp_toint(busy_scaled); 479 + return mul_fp(core_busy, div_fp(max_pstate, current_pstate)); 484 480 } 485 481 486 482 static inline void intel_pstate_adjust_busy_pstate(struct cpudata *cpu) 487 483 { 488 - int busy_scaled; 484 + int32_t busy_scaled; 489 485 struct _pid *pid; 490 486 signed int ctl = 0; 491 487 int steps;
+2
drivers/dma/edma.c
··· 305 305 edma_alloc_slot(EDMA_CTLR(echan->ch_num), 306 306 EDMA_SLOT_ANY); 307 307 if (echan->slot[i] < 0) { 308 + kfree(edesc); 308 309 dev_err(dev, "Failed to allocate slot\n"); 309 310 return NULL; ··· 347 346 ccnt = sg_dma_len(sg) / (acnt * bcnt); 348 347 if (ccnt > (SZ_64K - 1)) { 349 348 dev_err(dev, "Exceeded max SG segment size\n"); 349 + kfree(edesc); 350 350 return NULL; 351 351 } 352 352 cidx = acnt * bcnt;
+1 -1
drivers/gpu/drm/drm_drv.c
··· 61 61 62 62 /** Ioctl table */ 63 63 static const struct drm_ioctl_desc drm_ioctls[] = { 64 - DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version, DRM_UNLOCKED), 64 + DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version, DRM_UNLOCKED|DRM_RENDER_ALLOW), 65 65 DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_getunique, 0), 66 66 DRM_IOCTL_DEF(DRM_IOCTL_GET_MAGIC, drm_getmagic, 0), 67 67 DRM_IOCTL_DEF(DRM_IOCTL_IRQ_BUSID, drm_irq_by_busid, DRM_MASTER|DRM_ROOT_ONLY),
+24 -4
drivers/gpu/drm/i915/intel_crt.c
··· 83 83 return true; 84 84 } 85 85 86 - static void intel_crt_get_config(struct intel_encoder *encoder, 87 - struct intel_crtc_config *pipe_config) 86 + static unsigned int intel_crt_get_flags(struct intel_encoder *encoder) 88 87 { 89 88 struct drm_i915_private *dev_priv = encoder->base.dev->dev_private; 90 89 struct intel_crt *crt = intel_encoder_to_crt(encoder); ··· 101 102 else 102 103 flags |= DRM_MODE_FLAG_NVSYNC; 103 104 104 - pipe_config->adjusted_mode.flags |= flags; 105 + return flags; 106 + } 107 + 108 + static void intel_crt_get_config(struct intel_encoder *encoder, 109 + struct intel_crtc_config *pipe_config) 110 + { 111 + pipe_config->adjusted_mode.flags |= intel_crt_get_flags(encoder); 112 + } 113 + 114 + static void hsw_crt_get_config(struct intel_encoder *encoder, 115 + struct intel_crtc_config *pipe_config) 116 + { 117 + intel_ddi_get_config(encoder, pipe_config); 118 + 119 + pipe_config->adjusted_mode.flags &= ~(DRM_MODE_FLAG_PHSYNC | 120 + DRM_MODE_FLAG_NHSYNC | 121 + DRM_MODE_FLAG_PVSYNC | 122 + DRM_MODE_FLAG_NVSYNC); 123 + pipe_config->adjusted_mode.flags |= intel_crt_get_flags(encoder); 105 124 } 106 125 107 126 /* Note: The caller is required to filter out dpms modes not supported by the ··· 816 799 crt->base.mode_set = intel_crt_mode_set; 817 800 crt->base.disable = intel_disable_crt; 818 801 crt->base.enable = intel_enable_crt; 819 - crt->base.get_config = intel_crt_get_config; 802 + if (IS_HASWELL(dev)) 803 + crt->base.get_config = hsw_crt_get_config; 804 + else 805 + crt->base.get_config = intel_crt_get_config; 820 806 if (I915_HAS_HOTPLUG(dev)) 821 807 crt->base.hpd_pin = HPD_CRT; 822 808 if (HAS_DDI(dev))
+19 -2
drivers/gpu/drm/i915/intel_ddi.c
··· 1249 1249 intel_dp_check_link_status(intel_dp); 1250 1250 } 1251 1251 1252 - static void intel_ddi_get_config(struct intel_encoder *encoder, 1253 - struct intel_crtc_config *pipe_config) 1252 + void intel_ddi_get_config(struct intel_encoder *encoder, 1253 + struct intel_crtc_config *pipe_config) 1254 1254 { 1255 1255 struct drm_i915_private *dev_priv = encoder->base.dev->dev_private; 1256 1256 struct intel_crtc *intel_crtc = to_intel_crtc(encoder->base.crtc); ··· 1268 1268 flags |= DRM_MODE_FLAG_NVSYNC; 1269 1269 1270 1270 pipe_config->adjusted_mode.flags |= flags; 1271 + 1272 + switch (temp & TRANS_DDI_BPC_MASK) { 1273 + case TRANS_DDI_BPC_6: 1274 + pipe_config->pipe_bpp = 18; 1275 + break; 1276 + case TRANS_DDI_BPC_8: 1277 + pipe_config->pipe_bpp = 24; 1278 + break; 1279 + case TRANS_DDI_BPC_10: 1280 + pipe_config->pipe_bpp = 30; 1281 + break; 1282 + case TRANS_DDI_BPC_12: 1283 + pipe_config->pipe_bpp = 36; 1284 + break; 1285 + default: 1286 + break; 1287 + } 1271 1288 } 1272 1289 1273 1290 static void intel_ddi_destroy(struct drm_encoder *encoder)
+84 -47
drivers/gpu/drm/i915/intel_display.c
··· 2327 2327 FDI_FE_ERRC_ENABLE); 2328 2328 } 2329 2329 2330 - static bool pipe_has_enabled_pch(struct intel_crtc *intel_crtc) 2330 + static bool pipe_has_enabled_pch(struct intel_crtc *crtc) 2331 2331 { 2332 - return intel_crtc->base.enabled && intel_crtc->config.has_pch_encoder; 2332 + return crtc->base.enabled && crtc->active && 2333 + crtc->config.has_pch_encoder; 2333 2334 } 2334 2335 2335 2336 static void ivb_modeset_global_resources(struct drm_device *dev) ··· 2980 2979 I915_READ(VSYNCSHIFT(cpu_transcoder))); 2981 2980 } 2982 2981 2982 + static void cpt_enable_fdi_bc_bifurcation(struct drm_device *dev) 2983 + { 2984 + struct drm_i915_private *dev_priv = dev->dev_private; 2985 + uint32_t temp; 2986 + 2987 + temp = I915_READ(SOUTH_CHICKEN1); 2988 + if (temp & FDI_BC_BIFURCATION_SELECT) 2989 + return; 2990 + 2991 + WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE); 2992 + WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE); 2993 + 2994 + temp |= FDI_BC_BIFURCATION_SELECT; 2995 + DRM_DEBUG_KMS("enabling fdi C rx\n"); 2996 + I915_WRITE(SOUTH_CHICKEN1, temp); 2997 + POSTING_READ(SOUTH_CHICKEN1); 2998 + } 2999 + 3000 + static void ivybridge_update_fdi_bc_bifurcation(struct intel_crtc *intel_crtc) 3001 + { 3002 + struct drm_device *dev = intel_crtc->base.dev; 3003 + struct drm_i915_private *dev_priv = dev->dev_private; 3004 + 3005 + switch (intel_crtc->pipe) { 3006 + case PIPE_A: 3007 + break; 3008 + case PIPE_B: 3009 + if (intel_crtc->config.fdi_lanes > 2) 3010 + WARN_ON(I915_READ(SOUTH_CHICKEN1) & FDI_BC_BIFURCATION_SELECT); 3011 + else 3012 + cpt_enable_fdi_bc_bifurcation(dev); 3013 + 3014 + break; 3015 + case PIPE_C: 3016 + cpt_enable_fdi_bc_bifurcation(dev); 3017 + 3018 + break; 3019 + default: 3020 + BUG(); 3021 + } 3022 + } 3023 + 2983 3024 /* 2984 3025 * Enable PCH resources required for PCH ports: 2985 3026 * - PCH PLLs ··· 3039 2996 u32 reg, temp; 3040 2997 3041 2998 assert_pch_transcoder_disabled(dev_priv, pipe); 2999 + 3000 + if 
(IS_IVYBRIDGE(dev)) 3001 + ivybridge_update_fdi_bc_bifurcation(intel_crtc); 3042 3002 3043 3003 /* Write the TU size bits before fdi link training, so that error 3044 3004 * detection works. */ ··· 5029 4983 if (!(tmp & PIPECONF_ENABLE)) 5030 4984 return false; 5031 4985 4986 + if (IS_G4X(dev) || IS_VALLEYVIEW(dev)) { 4987 + switch (tmp & PIPECONF_BPC_MASK) { 4988 + case PIPECONF_6BPC: 4989 + pipe_config->pipe_bpp = 18; 4990 + break; 4991 + case PIPECONF_8BPC: 4992 + pipe_config->pipe_bpp = 24; 4993 + break; 4994 + case PIPECONF_10BPC: 4995 + pipe_config->pipe_bpp = 30; 4996 + break; 4997 + default: 4998 + break; 4999 + } 5000 + } 5001 + 5032 5002 intel_get_pipe_timings(crtc, pipe_config); 5033 5003 5034 5004 i9xx_get_pfit_config(crtc, pipe_config); ··· 5638 5576 return true; 5639 5577 } 5640 5578 5641 - static void cpt_enable_fdi_bc_bifurcation(struct drm_device *dev) 5642 - { 5643 - struct drm_i915_private *dev_priv = dev->dev_private; 5644 - uint32_t temp; 5645 - 5646 - temp = I915_READ(SOUTH_CHICKEN1); 5647 - if (temp & FDI_BC_BIFURCATION_SELECT) 5648 - return; 5649 - 5650 - WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE); 5651 - WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE); 5652 - 5653 - temp |= FDI_BC_BIFURCATION_SELECT; 5654 - DRM_DEBUG_KMS("enabling fdi C rx\n"); 5655 - I915_WRITE(SOUTH_CHICKEN1, temp); 5656 - POSTING_READ(SOUTH_CHICKEN1); 5657 - } 5658 - 5659 - static void ivybridge_update_fdi_bc_bifurcation(struct intel_crtc *intel_crtc) 5660 - { 5661 - struct drm_device *dev = intel_crtc->base.dev; 5662 - struct drm_i915_private *dev_priv = dev->dev_private; 5663 - 5664 - switch (intel_crtc->pipe) { 5665 - case PIPE_A: 5666 - break; 5667 - case PIPE_B: 5668 - if (intel_crtc->config.fdi_lanes > 2) 5669 - WARN_ON(I915_READ(SOUTH_CHICKEN1) & FDI_BC_BIFURCATION_SELECT); 5670 - else 5671 - cpt_enable_fdi_bc_bifurcation(dev); 5672 - 5673 - break; 5674 - case PIPE_C: 5675 - cpt_enable_fdi_bc_bifurcation(dev); 5676 - 5677 - break; 5678 - 
default: 5679 - BUG(); 5680 - } 5681 - } 5682 - 5683 5579 int ironlake_get_lanes_required(int target_clock, int link_bw, int bpp) 5684 5580 { 5685 5581 /* ··· 5831 5811 &intel_crtc->config.fdi_m_n); 5832 5812 } 5833 5813 5834 - if (IS_IVYBRIDGE(dev)) 5835 - ivybridge_update_fdi_bc_bifurcation(intel_crtc); 5836 - 5837 5814 ironlake_set_pipeconf(crtc); 5838 5815 5839 5816 /* Set up the display plane register */ ··· 5897 5880 tmp = I915_READ(PIPECONF(crtc->pipe)); 5898 5881 if (!(tmp & PIPECONF_ENABLE)) 5899 5882 return false; 5883 + 5884 + switch (tmp & PIPECONF_BPC_MASK) { 5885 + case PIPECONF_6BPC: 5886 + pipe_config->pipe_bpp = 18; 5887 + break; 5888 + case PIPECONF_8BPC: 5889 + pipe_config->pipe_bpp = 24; 5890 + break; 5891 + case PIPECONF_10BPC: 5892 + pipe_config->pipe_bpp = 30; 5893 + break; 5894 + case PIPECONF_12BPC: 5895 + pipe_config->pipe_bpp = 36; 5896 + break; 5897 + default: 5898 + break; 5899 + } 5900 5900 5901 5901 if (I915_READ(PCH_TRANSCONF(crtc->pipe)) & TRANS_ENABLE) { 5902 5902 struct intel_shared_dpll *pll; ··· 8645 8611 PIPE_CONF_CHECK_X(dpll_hw_state.dpll_md); 8646 8612 PIPE_CONF_CHECK_X(dpll_hw_state.fp0); 8647 8613 PIPE_CONF_CHECK_X(dpll_hw_state.fp1); 8614 + 8615 + if (IS_G4X(dev) || INTEL_INFO(dev)->gen >= 5) 8616 + PIPE_CONF_CHECK_I(pipe_bpp); 8648 8617 8649 8618 #undef PIPE_CONF_CHECK_X 8650 8619 #undef PIPE_CONF_CHECK_I
+20
drivers/gpu/drm/i915/intel_dp.c
··· 1401 1401 else 1402 1402 pipe_config->port_clock = 270000; 1403 1403 } 1404 + 1405 + if (is_edp(intel_dp) && dev_priv->vbt.edp_bpp && 1406 + pipe_config->pipe_bpp > dev_priv->vbt.edp_bpp) { 1407 + /* 1408 + * This is a big fat ugly hack. 1409 + * 1410 + * Some machines in UEFI boot mode provide us a VBT that has 18 1411 + * bpp and 1.62 GHz link bandwidth for eDP, which for reasons 1412 + * unknown we fail to light up. Yet the same BIOS boots up with 1413 + * 24 bpp and 2.7 GHz link. Use the same bpp as the BIOS uses as 1414 + * max, not what it tells us to use. 1415 + * 1416 + * Note: This will still be broken if the eDP panel is not lit 1417 + * up by the BIOS, and thus we can't get the mode at module 1418 + * load. 1419 + */ 1420 + DRM_DEBUG_KMS("pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n", 1421 + pipe_config->pipe_bpp, dev_priv->vbt.edp_bpp); 1422 + dev_priv->vbt.edp_bpp = pipe_config->pipe_bpp; 1423 + } 1404 1424 } 1405 1425 1406 1426 static bool is_edp_psr(struct intel_dp *intel_dp)
+2
drivers/gpu/drm/i915/intel_drv.h
··· 765 765 extern bool 766 766 intel_ddi_connector_get_hw_state(struct intel_connector *intel_connector); 767 767 extern void intel_ddi_fdi_disable(struct drm_crtc *crtc); 768 + extern void intel_ddi_get_config(struct intel_encoder *encoder, 769 + struct intel_crtc_config *pipe_config); 768 770 769 771 extern void intel_display_handle_reset(struct drm_device *dev); 770 772 extern bool intel_set_cpu_fifo_underrun_reporting(struct drm_device *dev,
+16
drivers/gpu/drm/i915/intel_lvds.c
··· 700 700 }, 701 701 { 702 702 .callback = intel_no_lvds_dmi_callback, 703 + .ident = "Intel D410PT", 704 + .matches = { 705 + DMI_MATCH(DMI_BOARD_VENDOR, "Intel"), 706 + DMI_MATCH(DMI_BOARD_NAME, "D410PT"), 707 + }, 708 + }, 709 + { 710 + .callback = intel_no_lvds_dmi_callback, 711 + .ident = "Intel D425KT", 712 + .matches = { 713 + DMI_MATCH(DMI_BOARD_VENDOR, "Intel"), 714 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "D425KT"), 715 + }, 716 + }, 717 + { 718 + .callback = intel_no_lvds_dmi_callback, 703 719 .ident = "Intel D510MO", 704 720 .matches = { 705 721 DMI_MATCH(DMI_BOARD_VENDOR, "Intel"),
+1
drivers/gpu/drm/radeon/evergreen_hdmi.c
··· 291 291 /* fglrx clears sth in AFMT_AUDIO_PACKET_CONTROL2 here */ 292 292 293 293 WREG32(HDMI_ACR_PACKET_CONTROL + offset, 294 + HDMI_ACR_SOURCE | /* select SW CTS value */ 294 295 HDMI_ACR_AUTO_SEND); /* allow hw to sent ACR packets when required */ 295 296 296 297 evergreen_hdmi_update_ACR(encoder, mode->clock);
+1 -1
drivers/gpu/drm/radeon/kv_dpm.c
··· 2635 2635 pi->caps_sclk_ds = true; 2636 2636 pi->enable_auto_thermal_throttling = true; 2637 2637 pi->disable_nb_ps3_in_battery = false; 2638 - pi->bapm_enable = true; 2638 + pi->bapm_enable = false; 2639 2639 pi->voltage_drop_t = 0; 2640 2640 pi->caps_sclk_throttle_low_notification = false; 2641 2641 pi->caps_fps = false; /* true? */
+2 -2
drivers/gpu/drm/radeon/radeon.h
··· 1272 1272 struct radeon_clock_and_voltage_limits { 1273 1273 u32 sclk; 1274 1274 u32 mclk; 1275 - u32 vddc; 1276 - u32 vddci; 1275 + u16 vddc; 1276 + u16 vddci; 1277 1277 }; 1278 1278 1279 1279 struct radeon_clock_array {
+1 -1
drivers/infiniband/ulp/isert/ib_isert.c
··· 594 594 595 595 pr_debug("Entering isert_connect_release(): >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n"); 596 596 597 - if (device->use_frwr) 597 + if (device && device->use_frwr) 598 598 isert_conn_free_frwr_pool(isert_conn); 599 599 600 600 if (isert_conn->conn_qp) {
+5 -5
drivers/input/input.c
··· 1734 1734 */ 1735 1735 struct input_dev *input_allocate_device(void) 1736 1736 { 1737 + static atomic_t input_no = ATOMIC_INIT(0); 1737 1738 struct input_dev *dev; 1738 1739 1739 1740 dev = kzalloc(sizeof(struct input_dev), GFP_KERNEL); ··· 1744 1743 device_initialize(&dev->dev); 1745 1744 mutex_init(&dev->mutex); 1746 1745 spin_lock_init(&dev->event_lock); 1746 + init_timer(&dev->timer); 1747 1747 INIT_LIST_HEAD(&dev->h_list); 1748 1748 INIT_LIST_HEAD(&dev->node); 1749 + 1750 + dev_set_name(&dev->dev, "input%ld", 1751 + (unsigned long) atomic_inc_return(&input_no) - 1); 1749 1752 1750 1753 __module_get(THIS_MODULE); 1751 1754 } ··· 2024 2019 */ 2025 2020 int input_register_device(struct input_dev *dev) 2026 2021 { 2027 - static atomic_t input_no = ATOMIC_INIT(0); 2028 2022 struct input_devres *devres = NULL; 2029 2023 struct input_handler *handler; 2030 2024 unsigned int packet_size; ··· 2063 2059 * If delay and period are pre-set by the driver, then autorepeating 2064 2060 * is handled by the driver itself and we don't do it in input.c. 2065 2061 */ 2066 - init_timer(&dev->timer); 2067 2062 if (!dev->rep[REP_DELAY] && !dev->rep[REP_PERIOD]) { 2068 2063 dev->timer.data = (long) dev; 2069 2064 dev->timer.function = input_repeat_key; ··· 2075 2072 2076 2073 if (!dev->setkeycode) 2077 2074 dev->setkeycode = input_default_setkeycode; 2078 - 2079 - dev_set_name(&dev->dev, "input%ld", 2080 - (unsigned long) atomic_inc_return(&input_no) - 1); 2081 2075 2082 2076 error = device_add(&dev->dev); 2083 2077 if (error)
+9 -2
drivers/input/keyboard/pxa27x_keypad.c
··· 786 786 input_dev->evbit[0] = BIT_MASK(EV_KEY) | BIT_MASK(EV_REP); 787 787 input_set_capability(input_dev, EV_MSC, MSC_SCAN); 788 788 789 - if (pdata) 789 + if (pdata) { 790 790 error = pxa27x_keypad_build_keycode(keypad); 791 - else 791 + } else { 792 792 error = pxa27x_keypad_build_keycode_from_dt(keypad); 793 + /* 794 + * Data that we get from DT resides in dynamically 795 + * allocated memory so we need to update our pdata 796 + * pointer. 797 + */ 798 + pdata = keypad->pdata; 799 + } 793 800 if (error) { 794 801 dev_err(&pdev->dev, "failed to build keycode\n"); 795 802 goto failed_put_clk;
+10 -4
drivers/input/misc/cm109.c
··· 351 351 if (status) { 352 352 if (status == -ESHUTDOWN) 353 353 return; 354 - dev_err(&dev->intf->dev, "%s: urb status %d\n", __func__, status); 354 + dev_err_ratelimited(&dev->intf->dev, "%s: urb status %d\n", 355 + __func__, status); 356 + goto out; 355 357 } 356 358 357 359 /* Special keys */ ··· 420 418 dev->ctl_data->byte[2], 421 419 dev->ctl_data->byte[3]); 422 420 423 - if (status) 424 - dev_err(&dev->intf->dev, "%s: urb status %d\n", __func__, status); 421 + if (status) { 422 + if (status == -ESHUTDOWN) 423 + return; 424 + dev_err_ratelimited(&dev->intf->dev, "%s: urb status %d\n", 425 + __func__, status); 426 + } 425 427 426 428 spin_lock(&dev->ctl_submit_lock); 427 429 ··· 433 427 434 428 if (likely(!dev->shutdown)) { 435 429 436 - if (dev->buzzer_pending) { 430 + if (dev->buzzer_pending || status) { 437 431 dev->buzzer_pending = 0; 438 432 dev->ctl_urb_pending = 1; 439 433 cm109_submit_buzz_toggle(dev);
+1
drivers/input/mouse/alps.c
··· 103 103 /* Dell Latitude E5500, E6400, E6500, Precision M4400 */ 104 104 { { 0x62, 0x02, 0x14 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf, 105 105 ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED }, 106 + { { 0x73, 0x00, 0x14 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf, ALPS_DUALPOINT }, /* Dell XT2 */ 106 107 { { 0x73, 0x02, 0x50 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf, ALPS_FOUR_BUTTONS }, /* Dell Vostro 1400 */ 107 108 { { 0x52, 0x01, 0x14 }, 0x00, ALPS_PROTO_V2, 0xff, 0xff, 108 109 ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED }, /* Toshiba Tecra A11-11L */
+14 -9
drivers/input/serio/i8042.c
··· 223 223 { 224 224 unsigned long flags; 225 225 unsigned char data, str; 226 - int i = 0; 226 + int count = 0; 227 + int retval = 0; 227 228 228 229 spin_lock_irqsave(&i8042_lock, flags); 229 230 230 - while (((str = i8042_read_status()) & I8042_STR_OBF) && (i < I8042_BUFFER_SIZE)) { 231 - udelay(50); 232 - data = i8042_read_data(); 233 - i++; 234 - dbg("%02x <- i8042 (flush, %s)\n", 235 - data, str & I8042_STR_AUXDATA ? "aux" : "kbd"); 231 + while ((str = i8042_read_status()) & I8042_STR_OBF) { 232 + if (count++ < I8042_BUFFER_SIZE) { 233 + udelay(50); 234 + data = i8042_read_data(); 235 + dbg("%02x <- i8042 (flush, %s)\n", 236 + data, str & I8042_STR_AUXDATA ? "aux" : "kbd"); 237 + } else { 238 + retval = -EIO; 239 + break; 240 + } 236 241 } 237 242 238 243 spin_unlock_irqrestore(&i8042_lock, flags); 239 244 240 - return i; 245 + return retval; 241 246 } 242 247 243 248 /* ··· 854 849 855 850 static int i8042_controller_check(void) 856 851 { 857 - if (i8042_flush() == I8042_BUFFER_SIZE) { 852 + if (i8042_flush()) { 858 853 pr_err("No controller found\n"); 859 854 return -ENODEV; 860 855 }
+4
drivers/input/tablet/wacom_sys.c
··· 1031 1031 } 1032 1032 1033 1033 static enum power_supply_property wacom_battery_props[] = { 1034 + POWER_SUPPLY_PROP_SCOPE, 1034 1035 POWER_SUPPLY_PROP_CAPACITY 1035 1036 }; 1036 1037 ··· 1043 1042 int ret = 0; 1044 1043 1045 1044 switch (psp) { 1045 + case POWER_SUPPLY_PROP_SCOPE: 1046 + val->intval = POWER_SUPPLY_SCOPE_DEVICE; 1047 + break; 1046 1048 case POWER_SUPPLY_PROP_CAPACITY: 1047 1049 val->intval = 1048 1050 wacom->wacom_wac.battery_capacity * 100 / 31;
+8
drivers/input/tablet/wacom_wac.c
··· 2054 2054 static const struct wacom_features wacom_features_0x10D = 2055 2055 { "Wacom ISDv4 10D", WACOM_PKGLEN_MTTPC, 26202, 16325, 255, 2056 2056 0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES }; 2057 + static const struct wacom_features wacom_features_0x10E = 2058 + { "Wacom ISDv4 10E", WACOM_PKGLEN_MTTPC, 27760, 15694, 255, 2059 + 0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES }; 2060 + static const struct wacom_features wacom_features_0x10F = 2061 + { "Wacom ISDv4 10F", WACOM_PKGLEN_MTTPC, 27760, 15694, 255, 2062 + 0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES }; 2057 2063 static const struct wacom_features wacom_features_0x4001 = 2058 2064 { "Wacom ISDv4 4001", WACOM_PKGLEN_MTTPC, 26202, 16325, 255, 2059 2065 0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES }; ··· 2254 2248 { USB_DEVICE_WACOM(0x100) }, 2255 2249 { USB_DEVICE_WACOM(0x101) }, 2256 2250 { USB_DEVICE_WACOM(0x10D) }, 2251 + { USB_DEVICE_WACOM(0x10E) }, 2252 + { USB_DEVICE_WACOM(0x10F) }, 2257 2253 { USB_DEVICE_WACOM(0x300) }, 2258 2254 { USB_DEVICE_WACOM(0x301) }, 2259 2255 { USB_DEVICE_WACOM(0x304) },
+3 -2
drivers/md/md.c
··· 8111 8111 u64 *p; 8112 8112 int lo, hi; 8113 8113 int rv = 1; 8114 + unsigned long flags; 8114 8115 8115 8116 if (bb->shift < 0) 8116 8117 /* badblocks are disabled */ ··· 8126 8125 sectors = next - s; 8127 8126 } 8128 8127 8129 - write_seqlock_irq(&bb->lock); 8128 + write_seqlock_irqsave(&bb->lock, flags); 8130 8129 8131 8130 p = bb->page; 8132 8131 lo = 0; ··· 8242 8241 bb->changed = 1; 8243 8242 if (!acknowledged) 8244 8243 bb->unacked_exist = 1; 8245 - write_sequnlock_irq(&bb->lock); 8244 + write_sequnlock_irqrestore(&bb->lock, flags); 8246 8245 8247 8246 return rv; 8248 8247 }
+1
drivers/md/raid1.c
··· 1479 1479 } 1480 1480 } 1481 1481 if (rdev 1482 + && rdev->recovery_offset == MaxSector 1482 1483 && !test_bit(Faulty, &rdev->flags) 1483 1484 && !test_and_set_bit(In_sync, &rdev->flags)) { 1484 1485 count++;
+1
drivers/md/raid10.c
··· 1782 1782 } 1783 1783 sysfs_notify_dirent_safe(tmp->replacement->sysfs_state); 1784 1784 } else if (tmp->rdev 1785 + && tmp->rdev->recovery_offset == MaxSector 1785 1786 && !test_bit(Faulty, &tmp->rdev->flags) 1786 1787 && !test_and_set_bit(In_sync, &tmp->rdev->flags)) { 1787 1788 count++;
+20
drivers/md/raid5.c
··· 778 778 bi->bi_io_vec[0].bv_len = STRIPE_SIZE; 779 779 bi->bi_io_vec[0].bv_offset = 0; 780 780 bi->bi_size = STRIPE_SIZE; 781 + /* 782 + * If this is discard request, set bi_vcnt 0. We don't 783 + * want to confuse SCSI because SCSI will replace payload 784 + */ 785 + if (rw & REQ_DISCARD) 786 + bi->bi_vcnt = 0; 781 787 if (rrdev) 782 788 set_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags); 783 789 ··· 822 816 rbi->bi_io_vec[0].bv_len = STRIPE_SIZE; 823 817 rbi->bi_io_vec[0].bv_offset = 0; 824 818 rbi->bi_size = STRIPE_SIZE; 819 + /* 820 + * If this is discard request, set bi_vcnt 0. We don't 821 + * want to confuse SCSI because SCSI will replace payload 822 + */ 823 + if (rw & REQ_DISCARD) 824 + rbi->bi_vcnt = 0; 825 825 if (conf->mddev->gendisk) 826 826 trace_block_bio_remap(bdev_get_queue(rbi->bi_bdev), 827 827 rbi, disk_devt(conf->mddev->gendisk), ··· 2922 2910 } 2923 2911 /* now that discard is done we can proceed with any sync */ 2924 2912 clear_bit(STRIPE_DISCARD, &sh->state); 2913 + /* 2914 + * SCSI discard will change some bio fields and the stripe has 2915 + * no updated data, so remove it from hash list and the stripe 2916 + * will be reinitialized 2917 + */ 2918 + spin_lock_irq(&conf->device_lock); 2919 + remove_hash(sh); 2920 + spin_unlock_irq(&conf->device_lock); 2925 2921 if (test_bit(STRIPE_SYNC_REQUESTED, &sh->state)) 2926 2922 set_bit(STRIPE_HANDLE, &sh->state); 2927 2923
+1 -1
drivers/mtd/nand/gpmi-nand/gpmi-nand.c
··· 349 349 350 350 int common_nfc_set_geometry(struct gpmi_nand_data *this) 351 351 { 352 - return set_geometry_by_ecc_info(this) ? 0 : legacy_set_geometry(this); 352 + return legacy_set_geometry(this); 353 353 } 354 354 355 355 struct dma_chan *get_dma_chan(struct gpmi_nand_data *this)
+6 -1
drivers/mtd/nand/pxa3xx_nand.c
··· 1320 1320 for (cs = 0; cs < pdata->num_cs; cs++) { 1321 1321 struct mtd_info *mtd = info->host[cs]->mtd; 1322 1322 1323 - mtd->name = pdev->name; 1323 + /* 1324 + * The mtd name matches the one used in 'mtdparts' kernel 1325 + * parameter. This name cannot be changed or otherwise 1326 + * user's mtd partitions configuration would get broken. 1327 + */ 1328 + mtd->name = "pxa3xx_nand-0"; 1324 1329 info->cs = cs; 1325 1330 ret = pxa3xx_nand_scan(mtd); 1326 1331 if (ret) {
+3 -3
drivers/net/can/c_can/c_can.c
··· 814 814 msg_ctrl_save = priv->read_reg(priv, 815 815 C_CAN_IFACE(MSGCTRL_REG, 0)); 816 816 817 - if (msg_ctrl_save & IF_MCONT_EOB) 818 - return num_rx_pkts; 819 - 820 817 if (msg_ctrl_save & IF_MCONT_MSGLST) { 821 818 c_can_handle_lost_msg_obj(dev, 0, msg_obj); 822 819 num_rx_pkts++; 823 820 quota--; 824 821 continue; 825 822 } 823 + 824 + if (msg_ctrl_save & IF_MCONT_EOB) 825 + return num_rx_pkts; 826 826 827 827 if (!(msg_ctrl_save & IF_MCONT_NEWDAT)) 828 828 continue;
+13 -7
drivers/net/can/usb/kvaser_usb.c
··· 1544 1544 return 0; 1545 1545 } 1546 1546 1547 - static void kvaser_usb_get_endpoints(const struct usb_interface *intf, 1548 - struct usb_endpoint_descriptor **in, 1549 - struct usb_endpoint_descriptor **out) 1547 + static int kvaser_usb_get_endpoints(const struct usb_interface *intf, 1548 + struct usb_endpoint_descriptor **in, 1549 + struct usb_endpoint_descriptor **out) 1550 1550 { 1551 1551 const struct usb_host_interface *iface_desc; 1552 1552 struct usb_endpoint_descriptor *endpoint; ··· 1557 1557 for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) { 1558 1558 endpoint = &iface_desc->endpoint[i].desc; 1559 1559 1560 - if (usb_endpoint_is_bulk_in(endpoint)) 1560 + if (!*in && usb_endpoint_is_bulk_in(endpoint)) 1561 1561 *in = endpoint; 1562 1562 1563 - if (usb_endpoint_is_bulk_out(endpoint)) 1563 + if (!*out && usb_endpoint_is_bulk_out(endpoint)) 1564 1564 *out = endpoint; 1565 + 1566 + /* use first bulk endpoint for in and out */ 1567 + if (*in && *out) 1568 + return 0; 1565 1569 } 1570 + 1571 + return -ENODEV; 1566 1572 } 1567 1573 1568 1574 static int kvaser_usb_probe(struct usb_interface *intf, ··· 1582 1576 if (!dev) 1583 1577 return -ENOMEM; 1584 1578 1585 - kvaser_usb_get_endpoints(intf, &dev->bulk_in, &dev->bulk_out); 1586 - if (!dev->bulk_in || !dev->bulk_out) { 1579 + err = kvaser_usb_get_endpoints(intf, &dev->bulk_in, &dev->bulk_out); 1580 + if (err) { 1587 1581 dev_err(&intf->dev, "Cannot get usb endpoint(s)"); 1588 1582 return err; 1589 1583 }
+14 -6
drivers/net/ethernet/broadcom/bgmac.c
··· 252 252 struct bgmac_slot_info *slot) 253 253 { 254 254 struct device *dma_dev = bgmac->core->dma_dev; 255 + struct sk_buff *skb; 256 + dma_addr_t dma_addr; 255 257 struct bgmac_rx_header *rx; 256 258 257 259 /* Alloc skb */ 258 - slot->skb = netdev_alloc_skb(bgmac->net_dev, BGMAC_RX_BUF_SIZE); 259 - if (!slot->skb) 260 + skb = netdev_alloc_skb(bgmac->net_dev, BGMAC_RX_BUF_SIZE); 261 + if (!skb) 260 262 return -ENOMEM; 261 263 262 264 /* Poison - if everything goes fine, hardware will overwrite it */ 263 - rx = (struct bgmac_rx_header *)slot->skb->data; 265 + rx = (struct bgmac_rx_header *)skb->data; 264 266 rx->len = cpu_to_le16(0xdead); 265 267 rx->flags = cpu_to_le16(0xbeef); 266 268 267 269 /* Map skb for the DMA */ 268 - slot->dma_addr = dma_map_single(dma_dev, slot->skb->data, 269 - BGMAC_RX_BUF_SIZE, DMA_FROM_DEVICE); 270 - if (dma_mapping_error(dma_dev, slot->dma_addr)) { 270 + dma_addr = dma_map_single(dma_dev, skb->data, 271 + BGMAC_RX_BUF_SIZE, DMA_FROM_DEVICE); 272 + if (dma_mapping_error(dma_dev, dma_addr)) { 271 273 bgmac_err(bgmac, "DMA mapping error\n"); 274 + dev_kfree_skb(skb); 272 275 return -ENOMEM; 273 276 } 277 + 278 + /* Update the slot */ 279 + slot->skb = skb; 280 + slot->dma_addr = dma_addr; 281 + 274 282 if (slot->dma_addr & 0xC0000000) 275 283 bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n"); 276 284
+5 -5
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 2545 2545 } 2546 2546 } 2547 2547 2548 - /* Allocated memory for FW statistics */ 2549 - if (bnx2x_alloc_fw_stats_mem(bp)) 2550 - LOAD_ERROR_EXIT(bp, load_error0); 2551 - 2552 2548 /* need to be done after alloc mem, since it's self adjusting to amount 2553 2549 * of memory available for RSS queues 2554 2550 */ ··· 2553 2557 BNX2X_ERR("Unable to allocate memory for fps\n"); 2554 2558 LOAD_ERROR_EXIT(bp, load_error0); 2555 2559 } 2560 + 2561 + /* Allocated memory for FW statistics */ 2562 + if (bnx2x_alloc_fw_stats_mem(bp)) 2563 + LOAD_ERROR_EXIT(bp, load_error0); 2556 2564 2557 2565 /* request pf to initialize status blocks */ 2558 2566 if (IS_VF(bp)) { ··· 2812 2812 if (IS_PF(bp)) 2813 2813 bnx2x_clear_pf_load(bp); 2814 2814 load_error0: 2815 - bnx2x_free_fp_mem(bp); 2816 2815 bnx2x_free_fw_stats_mem(bp); 2816 + bnx2x_free_fp_mem(bp); 2817 2817 bnx2x_free_mem(bp); 2818 2818 2819 2819 return rc;
+15 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
··· 2018 2018 2019 2019 void bnx2x_iov_remove_one(struct bnx2x *bp) 2020 2020 { 2021 + int vf_idx; 2022 + 2021 2023 /* if SRIOV is not enabled there's nothing to do */ 2022 2024 if (!IS_SRIOV(bp)) 2023 2025 return; ··· 2027 2025 DP(BNX2X_MSG_IOV, "about to call disable sriov\n"); 2028 2026 pci_disable_sriov(bp->pdev); 2029 2027 DP(BNX2X_MSG_IOV, "sriov disabled\n"); 2028 + 2029 + /* disable access to all VFs */ 2030 + for (vf_idx = 0; vf_idx < bp->vfdb->sriov.total; vf_idx++) { 2031 + bnx2x_pretend_func(bp, 2032 + HW_VF_HANDLE(bp, 2033 + bp->vfdb->sriov.first_vf_in_pf + 2034 + vf_idx)); 2035 + DP(BNX2X_MSG_IOV, "disabling internal access for vf %d\n", 2036 + bp->vfdb->sriov.first_vf_in_pf + vf_idx); 2037 + bnx2x_vf_enable_internal(bp, 0); 2038 + bnx2x_pretend_func(bp, BP_ABS_FUNC(bp)); 2039 + } 2030 2040 2031 2041 /* free vf database */ 2032 2042 __bnx2x_iov_free_vfdb(bp); ··· 3211 3197 * the "acquire" messages to appear on the VF PF channel. 3212 3198 */ 3213 3199 DP(BNX2X_MSG_IOV, "about to call enable sriov\n"); 3214 - pci_disable_sriov(bp->pdev); 3200 + bnx2x_disable_sriov(bp); 3215 3201 rc = pci_enable_sriov(bp->pdev, req_vfs); 3216 3202 if (rc) { 3217 3203 BNX2X_ERR("pci_enable_sriov failed with %d\n", rc);
+2 -1
drivers/net/ethernet/chelsio/cxgb3/sge.c
··· 1599 1599 flits = skb_transport_offset(skb) / 8; 1600 1600 sgp = ndesc == 1 ? (struct sg_ent *)&d->flit[flits] : sgl; 1601 1601 sgl_flits = make_sgl(skb, sgp, skb_transport_header(skb), 1602 - skb->tail - skb->transport_header, 1602 + skb_tail_pointer(skb) - 1603 + skb_transport_header(skb), 1603 1604 adap->pdev); 1604 1605 if (need_skb_unmap()) { 1605 1606 setup_deferred_unmapping(skb, adap->pdev, sgp, sgl_flits);
+10
drivers/net/ethernet/emulex/benet/be.h
··· 840 840 bool be_is_wol_supported(struct be_adapter *adapter); 841 841 bool be_pause_supported(struct be_adapter *adapter); 842 842 u32 be_get_fw_log_level(struct be_adapter *adapter); 843 + 844 + static inline int fw_major_num(const char *fw_ver) 845 + { 846 + int fw_major = 0; 847 + 848 + sscanf(fw_ver, "%d.", &fw_major); 849 + 850 + return fw_major; 851 + } 852 + 843 853 int be_update_queues(struct be_adapter *adapter); 844 854 int be_poll(struct napi_struct *napi, int budget); 845 855
+6
drivers/net/ethernet/emulex/benet/be_main.c
··· 3379 3379 3380 3380 be_cmd_get_fw_ver(adapter, adapter->fw_ver, adapter->fw_on_flash); 3381 3381 3382 + if (BE2_chip(adapter) && fw_major_num(adapter->fw_ver) < 4) { 3383 + dev_err(dev, "Firmware on card is old(%s), IRQs may not work.", 3384 + adapter->fw_ver); 3385 + dev_err(dev, "Please upgrade firmware to version >= 4.0\n"); 3386 + } 3387 + 3382 3388 if (adapter->vlans_added) 3383 3389 be_vid_config(adapter); 3384 3390
+8 -8
drivers/net/ethernet/ibm/emac/mal.c
··· 263 263 { 264 264 if (likely(napi_schedule_prep(&mal->napi))) { 265 265 MAL_DBG2(mal, "schedule_poll" NL); 266 + spin_lock(&mal->lock); 266 267 mal_disable_eob_irq(mal); 268 + spin_unlock(&mal->lock); 267 269 __napi_schedule(&mal->napi); 268 270 } else 269 271 MAL_DBG2(mal, "already in poll" NL); ··· 444 442 if (unlikely(mc->ops->peek_rx(mc->dev) || 445 443 test_bit(MAL_COMMAC_RX_STOPPED, &mc->flags))) { 446 444 MAL_DBG2(mal, "rotting packet" NL); 447 - if (napi_reschedule(napi)) 448 - mal_disable_eob_irq(mal); 449 - else 450 - MAL_DBG2(mal, "already in poll list" NL); 451 - 452 - if (budget > 0) 453 - goto again; 454 - else 445 + if (!napi_reschedule(napi)) 455 446 goto more_work; 447 + 448 + spin_lock_irqsave(&mal->lock, flags); 449 + mal_disable_eob_irq(mal); 450 + spin_unlock_irqrestore(&mal->lock, flags); 451 + goto again; 456 452 } 457 453 mc->ops->poll_tx(mc->dev); 458 454 }
+1 -1
drivers/net/ethernet/mellanox/mlx4/cmd.c
··· 1691 1691 vp_oper->vlan_idx = NO_INDX; 1692 1692 } 1693 1693 if (NO_INDX != vp_oper->mac_idx) { 1694 - __mlx4_unregister_mac(&priv->dev, port, vp_oper->mac_idx); 1694 + __mlx4_unregister_mac(&priv->dev, port, vp_oper->state.mac); 1695 1695 vp_oper->mac_idx = NO_INDX; 1696 1696 } 1697 1697 }
+3 -3
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
··· 2276 2276 temp = (cmd.rsp.arg[8] & 0x7FFE0000) >> 17; 2277 2277 npar_info->max_linkspeed_reg_offset = temp; 2278 2278 } 2279 - if (npar_info->capabilities & QLCNIC_FW_CAPABILITY_MORE_CAPS) 2280 - memcpy(ahw->extra_capability, &cmd.rsp.arg[16], 2281 - sizeof(ahw->extra_capability)); 2279 + 2280 + memcpy(ahw->extra_capability, &cmd.rsp.arg[16], 2281 + sizeof(ahw->extra_capability)); 2282 2282 2283 2283 out: 2284 2284 qlcnic_free_mbx_args(&cmd);
+2 -5
drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.c
··· 785 785 786 786 #define QLCNIC_ENABLE_IPV4_LRO 1 787 787 #define QLCNIC_ENABLE_IPV6_LRO 2 788 - #define QLCNIC_NO_DEST_IPV4_CHECK (1 << 8) 789 - #define QLCNIC_NO_DEST_IPV6_CHECK (2 << 8) 790 788 791 789 int qlcnic_82xx_config_hw_lro(struct qlcnic_adapter *adapter, int enable) 792 790 { ··· 804 806 805 807 word = 0; 806 808 if (enable) { 807 - word = QLCNIC_ENABLE_IPV4_LRO | QLCNIC_NO_DEST_IPV4_CHECK; 809 + word = QLCNIC_ENABLE_IPV4_LRO; 808 810 if (adapter->ahw->extra_capability[0] & 809 811 QLCNIC_FW_CAP2_HW_LRO_IPV6) 810 - word |= QLCNIC_ENABLE_IPV6_LRO | 811 - QLCNIC_NO_DEST_IPV6_CHECK; 812 + word |= QLCNIC_ENABLE_IPV6_LRO; 812 813 } 813 814 814 815 req.words[0] = cpu_to_le64(word);
+4 -2
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
··· 1133 1133 if (err == -EIO) 1134 1134 return err; 1135 1135 adapter->ahw->extra_capability[0] = temp; 1136 + } else { 1137 + adapter->ahw->extra_capability[0] = 0; 1136 1138 } 1139 + 1137 1140 adapter->ahw->max_mac_filters = nic_info.max_mac_filters; 1138 1141 adapter->ahw->max_mtu = nic_info.max_mtu; 1139 1142 ··· 2164 2161 else if (qlcnic_83xx_check(adapter)) 2165 2162 fw_cmd = QLCNIC_CMD_83XX_SET_DRV_VER; 2166 2163 2167 - if ((ahw->capabilities & QLCNIC_FW_CAPABILITY_MORE_CAPS) && 2168 - (ahw->extra_capability[0] & QLCNIC_FW_CAPABILITY_SET_DRV_VER)) 2164 + if (ahw->extra_capability[0] & QLCNIC_FW_CAPABILITY_SET_DRV_VER) 2169 2165 qlcnic_fw_cmd_set_drv_version(adapter, fw_cmd); 2170 2166 } 2171 2167
+12 -8
drivers/net/netconsole.c
··· 312 312 const char *buf, 313 313 size_t count) 314 314 { 315 + unsigned long flags; 315 316 int enabled; 316 317 int err; 317 318 ··· 327 326 return -EINVAL; 328 327 } 329 328 330 - mutex_lock(&nt->mutex); 331 329 if (enabled) { /* 1 */ 332 - 333 330 /* 334 331 * Skip netpoll_parse_options() -- all the attributes are 335 332 * already configured via configfs. Just print them out. ··· 335 336 netpoll_print_options(&nt->np); 336 337 337 338 err = netpoll_setup(&nt->np); 338 - if (err) { 339 - mutex_unlock(&nt->mutex); 339 + if (err) 340 340 return err; 341 - } 342 341 343 - pr_info("network logging started\n"); 344 - 342 + pr_info("netconsole: network logging started\n"); 345 343 } else { /* 0 */ 344 + /* We need to disable the netconsole before cleaning it up 345 + * otherwise we might end up in write_msg() with 346 + * nt->np.dev == NULL and nt->enabled == 1 347 + */ 348 + spin_lock_irqsave(&target_list_lock, flags); 349 + nt->enabled = 0; 350 + spin_unlock_irqrestore(&target_list_lock, flags); 346 351 netpoll_cleanup(&nt->np); 347 352 } 348 353 349 354 nt->enabled = enabled; 350 - mutex_unlock(&nt->mutex); 351 355 352 356 return strnlen(buf, count); 353 357 } ··· 561 559 struct netconsole_target_attr *na = 562 560 container_of(attr, struct netconsole_target_attr, attr); 563 561 562 + mutex_lock(&nt->mutex); 564 563 if (na->store) 565 564 ret = na->store(nt, buf, count); 565 + mutex_unlock(&nt->mutex); 566 566 567 567 return ret; 568 568 }
+5 -6
drivers/net/usb/ax88179_178a.c
··· 78 78 #define AX_MEDIUM_STATUS_MODE 0x22 79 79 #define AX_MEDIUM_GIGAMODE 0x01 80 80 #define AX_MEDIUM_FULL_DUPLEX 0x02 81 - #define AX_MEDIUM_ALWAYS_ONE 0x04 82 81 #define AX_MEDIUM_EN_125MHZ 0x08 83 82 #define AX_MEDIUM_RXFLOW_CTRLEN 0x10 84 83 #define AX_MEDIUM_TXFLOW_CTRLEN 0x20 ··· 1064 1065 1065 1066 /* Configure default medium type => giga */ 1066 1067 *tmp16 = AX_MEDIUM_RECEIVE_EN | AX_MEDIUM_TXFLOW_CTRLEN | 1067 - AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_ALWAYS_ONE | 1068 - AX_MEDIUM_FULL_DUPLEX | AX_MEDIUM_GIGAMODE; 1068 + AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_FULL_DUPLEX | 1069 + AX_MEDIUM_GIGAMODE; 1069 1070 ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_MEDIUM_STATUS_MODE, 1070 1071 2, 2, tmp16); 1071 1072 ··· 1224 1225 } 1225 1226 1226 1227 mode = AX_MEDIUM_RECEIVE_EN | AX_MEDIUM_TXFLOW_CTRLEN | 1227 - AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_ALWAYS_ONE; 1228 + AX_MEDIUM_RXFLOW_CTRLEN; 1228 1229 1229 1230 ax88179_read_cmd(dev, AX_ACCESS_MAC, PHYSICAL_LINK_STATUS, 1230 1231 1, 1, &link_sts); ··· 1338 1339 1339 1340 /* Configure default medium type => giga */ 1340 1341 *tmp16 = AX_MEDIUM_RECEIVE_EN | AX_MEDIUM_TXFLOW_CTRLEN | 1341 - AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_ALWAYS_ONE | 1342 - AX_MEDIUM_FULL_DUPLEX | AX_MEDIUM_GIGAMODE; 1342 + AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_FULL_DUPLEX | 1343 + AX_MEDIUM_GIGAMODE; 1343 1344 ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_MEDIUM_STATUS_MODE, 1344 1345 2, 2, tmp16); 1345 1346
+6 -7
drivers/net/virtio_net.c
··· 1160 1160 { 1161 1161 struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb); 1162 1162 1163 - mutex_lock(&vi->config_lock); 1164 - 1165 - if (!vi->config_enable) 1166 - goto done; 1167 - 1168 1163 switch(action & ~CPU_TASKS_FROZEN) { 1169 1164 case CPU_ONLINE: 1170 1165 case CPU_DOWN_FAILED: ··· 1173 1178 break; 1174 1179 } 1175 1180 1176 - done: 1177 - mutex_unlock(&vi->config_lock); 1178 1181 return NOTIFY_OK; 1179 1182 } 1180 1183 ··· 1740 1747 struct virtnet_info *vi = vdev->priv; 1741 1748 int i; 1742 1749 1750 + unregister_hotcpu_notifier(&vi->nb); 1751 + 1743 1752 /* Prevent config work handler from accessing the device */ 1744 1753 mutex_lock(&vi->config_lock); 1745 1754 vi->config_enable = false; ··· 1789 1794 rtnl_lock(); 1790 1795 virtnet_set_queues(vi, vi->curr_queue_pairs); 1791 1796 rtnl_unlock(); 1797 + 1798 + err = register_hotcpu_notifier(&vi->nb); 1799 + if (err) 1800 + return err; 1792 1801 1793 1802 return 0; 1794 1803 }
-89
drivers/net/wan/sbni.c
··· 148 148 static int emancipate( struct net_device * ); 149 149 #endif 150 150 151 - #ifdef __i386__ 152 - #define ASM_CRC 1 153 - #endif 154 - 155 151 static const char version[] = 156 152 "Granch SBNI12 driver ver 5.0.1 Jun 22 2001 Denis I.Timofeev.\n"; 157 153 ··· 1547 1551 1548 1552 /* -------------------------------------------------------------------------- */ 1549 1553 1550 - #ifdef ASM_CRC 1551 - 1552 - static u32 1553 - calc_crc32( u32 crc, u8 *p, u32 len ) 1554 - { 1555 - register u32 _crc; 1556 - _crc = crc; 1557 - 1558 - __asm__ __volatile__ ( 1559 - "xorl %%ebx, %%ebx\n" 1560 - "movl %2, %%esi\n" 1561 - "movl %3, %%ecx\n" 1562 - "movl $crc32tab, %%edi\n" 1563 - "shrl $2, %%ecx\n" 1564 - "jz 1f\n" 1565 - 1566 - ".align 4\n" 1567 - "0:\n" 1568 - "movb %%al, %%bl\n" 1569 - "movl (%%esi), %%edx\n" 1570 - "shrl $8, %%eax\n" 1571 - "xorb %%dl, %%bl\n" 1572 - "shrl $8, %%edx\n" 1573 - "xorl (%%edi,%%ebx,4), %%eax\n" 1574 - 1575 - "movb %%al, %%bl\n" 1576 - "shrl $8, %%eax\n" 1577 - "xorb %%dl, %%bl\n" 1578 - "shrl $8, %%edx\n" 1579 - "xorl (%%edi,%%ebx,4), %%eax\n" 1580 - 1581 - "movb %%al, %%bl\n" 1582 - "shrl $8, %%eax\n" 1583 - "xorb %%dl, %%bl\n" 1584 - "movb %%dh, %%dl\n" 1585 - "xorl (%%edi,%%ebx,4), %%eax\n" 1586 - 1587 - "movb %%al, %%bl\n" 1588 - "shrl $8, %%eax\n" 1589 - "xorb %%dl, %%bl\n" 1590 - "addl $4, %%esi\n" 1591 - "xorl (%%edi,%%ebx,4), %%eax\n" 1592 - 1593 - "decl %%ecx\n" 1594 - "jnz 0b\n" 1595 - 1596 - "1:\n" 1597 - "movl %3, %%ecx\n" 1598 - "andl $3, %%ecx\n" 1599 - "jz 2f\n" 1600 - 1601 - "movb %%al, %%bl\n" 1602 - "shrl $8, %%eax\n" 1603 - "xorb (%%esi), %%bl\n" 1604 - "xorl (%%edi,%%ebx,4), %%eax\n" 1605 - 1606 - "decl %%ecx\n" 1607 - "jz 2f\n" 1608 - 1609 - "movb %%al, %%bl\n" 1610 - "shrl $8, %%eax\n" 1611 - "xorb 1(%%esi), %%bl\n" 1612 - "xorl (%%edi,%%ebx,4), %%eax\n" 1613 - 1614 - "decl %%ecx\n" 1615 - "jz 2f\n" 1616 - 1617 - "movb %%al, %%bl\n" 1618 - "shrl $8, %%eax\n" 1619 - "xorb 2(%%esi), %%bl\n" 1620 - "xorl (%%edi,%%ebx,4), %%eax\n" 1621 - "2:\n" 1622 - : "=a" (_crc) 1623 - : "0" (_crc), "g" (p), "g" (len) 1624 - : "bx", "cx", "dx", "si", "di" 1625 - ); 1626 - 1627 - return _crc; 1628 - } 1629 - 1630 - #else /* ASM_CRC */ 1631 - 1632 1554 static u32 1633 1555 calc_crc32( u32 crc, u8 *p, u32 len ) 1634 1556 { ··· 1555 1641 1556 1642 return crc; 1557 1643 } 1558 - 1559 - #endif /* ASM_CRC */ 1560 - 1561 1644 1562 1645 static u32 crc32tab[] __attribute__ ((aligned(8))) = { 1563 1646 0xD202EF8D, 0xA505DF1B, 0x3C0C8EA1, 0x4B0BBE37,
+1
drivers/net/xen-netback/common.h
··· 169 169 unsigned long credit_usec; 170 170 unsigned long remaining_credit; 171 171 struct timer_list credit_timeout; 172 + u64 credit_window_start; 172 173 173 174 /* Statistics */ 174 175 unsigned long rx_gso_checksum_fixup;
+1 -2
drivers/net/xen-netback/interface.c
··· 316 316 vif->credit_bytes = vif->remaining_credit = ~0UL; 317 317 vif->credit_usec = 0UL; 318 318 init_timer(&vif->credit_timeout); 319 - /* Initialize 'expires' now: it's used to track the credit window. */ 320 - vif->credit_timeout.expires = jiffies; 319 + vif->credit_window_start = get_jiffies_64(); 321 320 322 321 dev->netdev_ops = &xenvif_netdev_ops; 323 322 dev->hw_features = NETIF_F_SG |
+5 -5
drivers/net/xen-netback/netback.c
··· 1380 1380 1381 1381 static bool tx_credit_exceeded(struct xenvif *vif, unsigned size) 1382 1382 { 1383 - unsigned long now = jiffies; 1384 - unsigned long next_credit = 1385 - vif->credit_timeout.expires + 1383 + u64 now = get_jiffies_64(); 1384 + u64 next_credit = vif->credit_window_start + 1386 1385 msecs_to_jiffies(vif->credit_usec / 1000); 1387 1386 1388 1387 /* Timer could already be pending in rare cases. */ ··· 1389 1390 return true; 1390 1391 1391 1392 /* Passed the point where we can replenish credit? */ 1392 - if (time_after_eq(now, next_credit)) { 1393 - vif->credit_timeout.expires = now; 1393 + if (time_after_eq64(now, next_credit)) { 1394 + vif->credit_window_start = now; 1394 1395 tx_add_credit(vif); 1395 1396 } 1396 1397 ··· 1402 1403 tx_credit_callback; 1403 1404 mod_timer(&vif->credit_timeout, 1404 1405 next_credit); 1406 + vif->credit_window_start = next_credit; 1405 1407 1406 1408 return true; 1407 1409 }
+1 -5
drivers/pci/hotplug/acpiphp_glue.c
··· 552 552 struct acpiphp_func *func; 553 553 int max, pass; 554 554 LIST_HEAD(add_list); 555 - int nr_found; 556 555 557 - nr_found = acpiphp_rescan_slot(slot); 556 + acpiphp_rescan_slot(slot); 558 557 max = acpiphp_max_busnr(bus); 559 558 for (pass = 0; pass < 2; pass++) { 560 559 list_for_each_entry(dev, &bus->devices, bus_list) { ··· 573 574 } 574 575 } 575 576 __pci_bus_assign_resources(bus, &add_list, NULL); 576 - /* Nothing more to do here if there are no new devices on this bus. */ 577 - if (!nr_found && (slot->flags & SLOT_ENABLED)) 578 - return; 579 577 580 578 acpiphp_sanitize_bus(bus); 581 579 acpiphp_set_hpp_values(bus);
+8 -8
drivers/scsi/BusLogic.c
··· 696 696 while ((pci_device = pci_get_device(PCI_VENDOR_ID_BUSLOGIC, 697 697 PCI_DEVICE_ID_BUSLOGIC_MULTIMASTER, 698 698 pci_device)) != NULL) { 699 - struct blogic_adapter *adapter = adapter; 699 + struct blogic_adapter *host_adapter = adapter; 700 700 struct blogic_adapter_info adapter_info; 701 701 enum blogic_isa_ioport mod_ioaddr_req; 702 702 unsigned char bus; ··· 744 744 known and enabled, note that the particular Standard ISA I/O 745 745 Address should not be probed. 746 746 */ 747 - adapter->io_addr = io_addr; 748 - blogic_intreset(adapter); 749 - if (blogic_cmd(adapter, BLOGIC_INQ_PCI_INFO, NULL, 0, 747 + host_adapter->io_addr = io_addr; 748 + blogic_intreset(host_adapter); 749 + if (blogic_cmd(host_adapter, BLOGIC_INQ_PCI_INFO, NULL, 0, 750 750 &adapter_info, sizeof(adapter_info)) == 751 751 sizeof(adapter_info)) { 752 752 if (adapter_info.isa_port < 6) ··· 762 762 I/O Address assigned at system initialization. 763 763 */ 764 764 mod_ioaddr_req = BLOGIC_IO_DISABLE; 765 - blogic_cmd(adapter, BLOGIC_MOD_IOADDR, &mod_ioaddr_req, 765 + blogic_cmd(host_adapter, BLOGIC_MOD_IOADDR, &mod_ioaddr_req, 766 766 sizeof(mod_ioaddr_req), NULL, 0); 767 767 /* 768 768 For the first MultiMaster Host Adapter enumerated, ··· 779 779 780 780 fetch_localram.offset = BLOGIC_AUTOSCSI_BASE + 45; 781 781 fetch_localram.count = sizeof(autoscsi_byte45); 782 - blogic_cmd(adapter, BLOGIC_FETCH_LOCALRAM, 782 + blogic_cmd(host_adapter, BLOGIC_FETCH_LOCALRAM, 783 783 &fetch_localram, sizeof(fetch_localram), 784 784 &autoscsi_byte45, 785 785 sizeof(autoscsi_byte45)); 786 - blogic_cmd(adapter, BLOGIC_GET_BOARD_ID, NULL, 0, &id, 787 - sizeof(id)); 786 + blogic_cmd(host_adapter, BLOGIC_GET_BOARD_ID, NULL, 0, 787 + &id, sizeof(id)); 788 788 if (id.fw_ver_digit1 == '5') 789 789 force_scan_order = 790 790 autoscsi_byte45.force_scan_order;
+2
drivers/scsi/aacraid/linit.c
··· 771 771 static int aac_compat_ioctl(struct scsi_device *sdev, int cmd, void __user *arg) 772 772 { 773 773 struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata; 774 + if (!capable(CAP_SYS_RAWIO)) 775 + return -EPERM; 774 776 return aac_compat_do_ioctl(dev, cmd, (unsigned long)arg); 775 777 } 776 778
+1 -1
drivers/scsi/qla2xxx/qla_dbg.c
··· 20 20 * | Device Discovery | 0x2095 | 0x2020-0x2022, | 21 21 * | | | 0x2011-0x2012, | 22 22 * | | | 0x2016 | 23 - * | Queue Command and IO tracing | 0x3058 | 0x3006-0x300b | 23 + * | Queue Command and IO tracing | 0x3059 | 0x3006-0x300b | 24 24 * | | | 0x3027-0x3028 | 25 25 * | | | 0x303d-0x3041 | 26 26 * | | | 0x302d,0x3033 |
+9
drivers/scsi/qla2xxx/qla_isr.c
··· 1957 1957 que = MSW(sts->handle); 1958 1958 req = ha->req_q_map[que]; 1959 1959 1960 + /* Check for invalid queue pointer */ 1961 + if (req == NULL || 1962 + que >= find_first_zero_bit(ha->req_qid_map, ha->max_req_queues)) { 1963 + ql_dbg(ql_dbg_io, vha, 0x3059, 1964 + "Invalid status handle (0x%x): Bad req pointer. req=%p, " 1965 + "que=%u.\n", sts->handle, req, que); 1966 + return; 1967 + } 1968 + 1960 1969 /* Validate handle. */ 1961 1970 if (handle < req->num_outstanding_cmds) 1962 1971 sp = req->outstanding_cmds[handle];
+1 -1
drivers/scsi/sd.c
··· 2854 2854 gd->events |= DISK_EVENT_MEDIA_CHANGE; 2855 2855 } 2856 2856 2857 + blk_pm_runtime_init(sdp->request_queue, dev); 2857 2858 add_disk(gd); 2858 2859 if (sdkp->capacity) 2859 2860 sd_dif_config_host(sdkp); ··· 2863 2862 2864 2863 sd_printk(KERN_NOTICE, sdkp, "Attached SCSI %sdisk\n", 2865 2864 sdp->removable ? "removable " : ""); 2866 - blk_pm_runtime_init(sdp->request_queue, dev); 2867 2865 scsi_autopm_put_device(sdp); 2868 2866 put_device(&sdkp->dev); 2869 2867 }
+95 -81
drivers/scsi/sg.c
··· 105 105 static int sg_add(struct device *, struct class_interface *); 106 106 static void sg_remove(struct device *, struct class_interface *); 107 107 108 + static DEFINE_SPINLOCK(sg_open_exclusive_lock); 109 + 108 110 static DEFINE_IDR(sg_index_idr); 109 - static DEFINE_RWLOCK(sg_index_lock); 111 + static DEFINE_RWLOCK(sg_index_lock); /* Also used to lock 112 + file descriptor list for device */ 110 113 111 114 static struct class_interface sg_interface = { 112 115 .add_dev = sg_add, ··· 146 143 } Sg_request; 147 144 148 145 typedef struct sg_fd { /* holds the state of a file descriptor */ 149 - struct list_head sfd_siblings; /* protected by sfd_lock of device */ 146 + /* sfd_siblings is protected by sg_index_lock */ 147 + struct list_head sfd_siblings; 150 148 struct sg_device *parentdp; /* owning device */ 151 149 wait_queue_head_t read_wait; /* queue read until command done */ 152 150 rwlock_t rq_list_lock; /* protect access to list in req_arr */ ··· 170 166 171 167 typedef struct sg_device { /* holds the state of each scsi generic device */ 172 168 struct scsi_device *device; 169 + wait_queue_head_t o_excl_wait; /* queue open() when O_EXCL in use */ 173 170 int sg_tablesize; /* adapter's max scatter-gather table size */ 174 171 u32 index; /* device index number */ 175 - spinlock_t sfd_lock; /* protect file descriptor list for device */ 172 + /* sfds is protected by sg_index_lock */ 176 173 struct list_head sfds; 177 - struct rw_semaphore o_sem; /* exclude open should hold this rwsem */ 178 174 volatile char detached; /* 0->attached, 1->detached pending removal */ 175 + /* exclude protected by sg_open_exclusive_lock */ 179 176 char exclude; /* opened for exclusive access */ 180 177 char sgdebug; /* 0->off, 1->sense, 9->dump dev, 10-> all devs */ 181 178 struct gendisk *disk; ··· 225 220 return blk_verify_command(cmd, filp->f_mode & FMODE_WRITE); 226 221 } 227 222 223 + static int get_exclude(Sg_device *sdp) 224 + { 225 + unsigned long flags; 226 + int ret; 227 + 228 + spin_lock_irqsave(&sg_open_exclusive_lock, flags); 229 + ret = sdp->exclude; 230 + spin_unlock_irqrestore(&sg_open_exclusive_lock, flags); 231 + return ret; 232 + } 233 + 234 + static int set_exclude(Sg_device *sdp, char val) 235 + { 236 + unsigned long flags; 237 + 238 + spin_lock_irqsave(&sg_open_exclusive_lock, flags); 239 + sdp->exclude = val; 240 + spin_unlock_irqrestore(&sg_open_exclusive_lock, flags); 241 + return val; 242 + } 243 + 228 244 static int sfds_list_empty(Sg_device *sdp) 229 245 { 230 246 unsigned long flags; 231 247 int ret; 232 248 233 - spin_lock_irqsave(&sdp->sfd_lock, flags); 249 + read_lock_irqsave(&sg_index_lock, flags); 234 250 ret = list_empty(&sdp->sfds); 235 - spin_unlock_irqrestore(&sdp->sfd_lock, flags); 251 + read_unlock_irqrestore(&sg_index_lock, flags); 236 252 return ret; 237 253 } 238 254 ··· 265 239 struct request_queue *q; 266 240 Sg_device *sdp; 267 241 Sg_fd *sfp; 242 + int res; 268 243 int retval; 269 244 270 245 nonseekable_open(inode, filp); ··· 294 267 goto error_out; 295 268 } 296 269 297 - if ((flags & O_EXCL) && (O_RDONLY == (flags & O_ACCMODE))) { 298 - retval = -EPERM; /* Can't lock it with read only access */ 270 + if (flags & O_EXCL) { 271 + if (O_RDONLY == (flags & O_ACCMODE)) { 272 + retval = -EPERM; /* Can't lock it with read only access */ 273 + goto error_out; 274 + } 275 + if (!sfds_list_empty(sdp) && (flags & O_NONBLOCK)) { 276 + retval = -EBUSY; 277 + goto error_out; 278 + } 279 + res = wait_event_interruptible(sdp->o_excl_wait, 280 + ((!sfds_list_empty(sdp) || get_exclude(sdp)) ? 0 : set_exclude(sdp, 1))); 281 + if (res) { 282 + retval = res; /* -ERESTARTSYS because signal hit process */ 283 + goto error_out; 284 + } 285 + } else if (get_exclude(sdp)) { /* some other fd has an exclusive lock on dev */ 286 + if (flags & O_NONBLOCK) { 287 + retval = -EBUSY; 288 + goto error_out; 289 + } 290 + res = wait_event_interruptible(sdp->o_excl_wait, !get_exclude(sdp)); 291 + if (res) { 292 + retval = res; /* -ERESTARTSYS because signal hit process */ 293 + goto error_out; 294 + } 295 + } 296 + if (sdp->detached) { 297 + retval = -ENODEV; 298 + 299 298 goto error_out; 300 299 } 301 - if (flags & O_NONBLOCK) { 302 - if (flags & O_EXCL) { 303 - if (!down_write_trylock(&sdp->o_sem)) { 304 - retval = -EBUSY; 305 - goto error_out; 306 - } 307 - } else { 308 - if (!down_read_trylock(&sdp->o_sem)) { 309 - retval = -EBUSY; 310 - goto error_out; 311 - } 312 - } 313 - } else { 314 - if (flags & O_EXCL) 315 - down_write(&sdp->o_sem); 316 - else 317 - down_read(&sdp->o_sem); 318 - } 319 - /* Since write lock is held, no need to check sfd_list */ 320 - if (flags & O_EXCL) 321 - sdp->exclude = 1; /* used by release lock */ 322 - 323 300 if (sfds_list_empty(sdp)) { /* no existing opens on this device */ 324 301 sdp->sgdebug = 0; 325 302 q = sdp->device->request_queue; 326 303 sdp->sg_tablesize = queue_max_segments(q); 327 304 } 328 - sfp = sg_add_sfp(sdp, dev); 329 - if (!IS_ERR(sfp)) 305 + if ((sfp = sg_add_sfp(sdp, dev))) 330 306 filp->private_data = sfp; 331 - /* retval is already provably zero at this point because of the 332 - * check after retval = scsi_autopm_get_device(sdp->device)) 333 - */ 334 307 else { 335 - retval = PTR_ERR(sfp); 336 - 337 308 if (flags & O_EXCL) { 338 - sdp->exclude = 0; /* undo if error */ 339 - up_write(&sdp->o_sem); 340 - } else 341 - up_read(&sdp->o_sem); 309 + set_exclude(sdp, 0); /* undo if error */ 310 + wake_up_interruptible(&sdp->o_excl_wait); 311 + } 312 + retval = -ENOMEM; 313 + goto error_out; 314 + } 315 + retval = 0; 342 316 error_out: 317 + if (retval) { 343 318 scsi_autopm_put_device(sdp->device); 344 319 sdp_put: 345 320 scsi_device_put(sdp->device); ··· 358 329 { 359 330 Sg_device *sdp; 360 331 Sg_fd *sfp; 361 - int excl; 362 332 363 333 if ((!(sfp = (Sg_fd *) filp->private_data)) || (!(sdp = sfp->parentdp))) 364 334 return -ENXIO; 365 335 SCSI_LOG_TIMEOUT(3, printk("sg_release: %s\n", sdp->disk->disk_name)); 366 336 367 - excl = sdp->exclude; 368 - sdp->exclude = 0; 369 - if (excl) 370 - up_write(&sdp->o_sem); 371 - else 372 - up_read(&sdp->o_sem); 337 + set_exclude(sdp, 0); 338 + wake_up_interruptible(&sdp->o_excl_wait); 373 339 374 340 scsi_autopm_put_device(sdp->device); 375 341 kref_put(&sfp->f_ref, sg_remove_sfp); ··· 1415 1391 disk->first_minor = k; 1416 1392 sdp->disk = disk; 1417 1393 sdp->device = scsidp; 1418 - spin_lock_init(&sdp->sfd_lock); 1419 1394 INIT_LIST_HEAD(&sdp->sfds); 1420 - init_rwsem(&sdp->o_sem); 1395 + init_waitqueue_head(&sdp->o_excl_wait); 1421 1396 sdp->sg_tablesize = queue_max_segments(q); 1422 1397 sdp->index = k; 1423 1398 kref_init(&sdp->d_ref); ··· 1549 1526 1550 1527 /* Need a write lock to set sdp->detached. */ 1551 1528 write_lock_irqsave(&sg_index_lock, iflags); 1552 - spin_lock(&sdp->sfd_lock); 1553 1529 sdp->detached = 1; 1554 1530 list_for_each_entry(sfp, &sdp->sfds, sfd_siblings) { 1555 1531 wake_up_interruptible(&sfp->read_wait); 1556 1532 kill_fasync(&sfp->async_qp, SIGPOLL, POLL_HUP); 1557 1533 } 1558 - spin_unlock(&sdp->sfd_lock); 1559 1534 write_unlock_irqrestore(&sg_index_lock, iflags); 1560 1535 1561 1536 sysfs_remove_link(&scsidp->sdev_gendev.kobj, "generic"); ··· 2064 2043 2065 2044 sfp = kzalloc(sizeof(*sfp), GFP_ATOMIC | __GFP_NOWARN); 2066 2045 if (!sfp) 2067 - return ERR_PTR(-ENOMEM); 2046 + return NULL; 2068 2047 2069 2048 init_waitqueue_head(&sfp->read_wait); 2070 2049 rwlock_init(&sfp->rq_list_lock); ··· 2078 2057 sfp->cmd_q = SG_DEF_COMMAND_Q; 2079 2058 sfp->keep_orphan = SG_DEF_KEEP_ORPHAN; 2080 2059 sfp->parentdp = sdp; 2081 - spin_lock_irqsave(&sdp->sfd_lock, iflags); 2082 - if (sdp->detached) { 2083 - spin_unlock_irqrestore(&sdp->sfd_lock, iflags); 2084 - return ERR_PTR(-ENODEV); 2085 - } 2060 + write_lock_irqsave(&sg_index_lock, iflags); 2086 2061 list_add_tail(&sfp->sfd_siblings, &sdp->sfds); 2087 - spin_unlock_irqrestore(&sdp->sfd_lock, iflags); 2062 + write_unlock_irqrestore(&sg_index_lock, iflags); 2088 2063 SCSI_LOG_TIMEOUT(3, printk("sg_add_sfp: sfp=0x%p\n", sfp)); 2089 2064 if (unlikely(sg_big_buff != def_reserved_size)) 2090 2065 sg_big_buff = def_reserved_size; ··· 2130 2113 struct sg_device *sdp = sfp->parentdp; 2131 2114 unsigned long iflags; 2132 2115 2133 - spin_lock_irqsave(&sdp->sfd_lock, iflags); 2116 + write_lock_irqsave(&sg_index_lock, iflags); 2134 2117 list_del(&sfp->sfd_siblings); 2135 - spin_unlock_irqrestore(&sdp->sfd_lock, iflags); 2118 + write_unlock_irqrestore(&sg_index_lock, iflags); 2119 + wake_up_interruptible(&sdp->o_excl_wait); 2136 2120 2137 2121 INIT_WORK(&sfp->ew.work, sg_remove_sfp_usercontext); 2138 2122 schedule_work(&sfp->ew.work); ··· 2520 2502 return 0; 2521 2503 } 2522 2504 2523 - /* must be called while holding sg_index_lock and sfd_lock */ 2505 + /* must be called while holding sg_index_lock */ 2524 2506 static void sg_proc_debug_helper(struct seq_file *s, Sg_device * sdp) 2525 2507 { 2526 2508 int k, m, new_interface, blen, usg; ··· 2605 2587 2606 2588 read_lock_irqsave(&sg_index_lock, iflags); 2607 2589 sdp = it ? sg_lookup_dev(it->index) : NULL; 2608 - if (sdp) { 2609 - spin_lock(&sdp->sfd_lock); 2610 - if (!list_empty(&sdp->sfds)) { 2611 - struct scsi_device *scsidp = sdp->device; 2590 + if (sdp && !list_empty(&sdp->sfds)) { 2591 + struct scsi_device *scsidp = sdp->device; 2612 2592 2613 - seq_printf(s, " >>> device=%s ", sdp->disk->disk_name); 2614 - if (sdp->detached) 2615 - seq_printf(s, "detached pending close "); 2616 - else 2617 - seq_printf 2618 - (s, "scsi%d chan=%d id=%d lun=%d em=%d", 2619 - scsidp->host->host_no, 2620 - scsidp->channel, scsidp->id, 2621 - scsidp->lun, 2622 - scsidp->host->hostt->emulated); 2623 - seq_printf(s, " sg_tablesize=%d excl=%d\n", 2624 - sdp->sg_tablesize, sdp->exclude); 2625 - sg_proc_debug_helper(s, sdp); 2626 - } 2627 - spin_unlock(&sdp->sfd_lock); 2593 + seq_printf(s, " >>> device=%s ", sdp->disk->disk_name); 2594 + if (sdp->detached) 2595 + seq_printf(s, "detached pending close "); 2596 + else 2597 + seq_printf 2598 + (s, "scsi%d chan=%d id=%d lun=%d em=%d", 2599 + scsidp->host->host_no, 2600 + scsidp->channel, scsidp->id, 2601 + scsidp->lun, 2602 + scsidp->host->hostt->emulated); 2603 + seq_printf(s, " sg_tablesize=%d excl=%d\n", 2604 + sdp->sg_tablesize, get_exclude(sdp)); 2605 + sg_proc_debug_helper(s, sdp); 2628 2606 } 2629 2607 read_unlock_irqrestore(&sg_index_lock, iflags); 2630 2608 return 0;
+1
drivers/staging/bcm/Bcmchar.c
··· 1960 1960 1961 1961 BCM_DEBUG_PRINT(Adapter, DBG_TYPE_OTHERS, OSAL_DBG, DBG_LVL_ALL, "Called IOCTL_BCM_GET_DEVICE_DRIVER_INFO\n"); 1962 1962 1963 + memset(&DevInfo, 0, sizeof(DevInfo)); 1963 1964 DevInfo.MaxRDMBufferSize = BUFFER_4K; 1964 1965 DevInfo.u32DSDStartOffset = EEPROM_CALPARAM_START; 1965 1966 DevInfo.u32RxAlignmentCorrection = 0;
+3
drivers/staging/ozwpan/ozcdev.c
··· 155 155 struct oz_app_hdr *app_hdr; 156 156 struct oz_serial_ctx *ctx; 157 157 158 + if (count > sizeof(ei->data) - sizeof(*elt) - sizeof(*app_hdr)) 159 + return -EINVAL; 160 + 158 161 spin_lock_bh(&g_cdev.lock); 159 162 pd = g_cdev.active_pd; 160 163 if (pd)
+1 -1
drivers/staging/sb105x/sb_pci_mp.c
··· 1063 1063 1064 1064 static int mp_get_count(struct sb_uart_state *state, struct serial_icounter_struct *icnt) 1065 1065 { 1066 - struct serial_icounter_struct icount; 1066 + struct serial_icounter_struct icount = {}; 1067 1067 struct sb_uart_icount cnow; 1068 1068 struct sb_uart_port *port = state->port; 1069 1069
+6 -3
drivers/staging/wlags49_h2/wl_priv.c
··· 570 570 ltv_t *pLtv; 571 571 bool_t ltvAllocated = FALSE; 572 572 ENCSTRCT sEncryption; 573 + size_t len; 573 574 574 575 #ifdef USE_WDS 575 576 hcf_16 hcfPort = HCF_PORT_0; ··· 687 686 break; 688 687 case CFG_CNF_OWN_NAME: 689 688 memset(lp->StationName, 0, sizeof(lp->StationName)); 690 - memcpy((void *)lp->StationName, (void *)&pLtv->u.u8[2], (size_t)pLtv->u.u16[0]); 689 + len = min_t(size_t, pLtv->u.u16[0], sizeof(lp->StationName)); 690 + strlcpy(lp->StationName, &pLtv->u.u8[2], len); 691 691 pLtv->u.u16[0] = CNV_INT_TO_LITTLE(pLtv->u.u16[0]); 692 692 break; 693 693 case CFG_CNF_LOAD_BALANCING: ··· 1785 1783 { 1786 1784 struct wl_private *lp = wl_priv(dev); 1787 1785 unsigned long flags; 1786 + size_t len; 1788 1787 int ret = 0; 1789 1788 /*------------------------------------------------------------------------*/ 1790 1789 ··· 1796 1793 wl_lock(lp, &flags); 1797 1794 1798 1795 memset(lp->StationName, 0, sizeof(lp->StationName)); 1799 - 1800 - memcpy(lp->StationName, extra, wrqu->data.length); 1796 + len = min_t(size_t, wrqu->data.length, sizeof(lp->StationName)); 1797 + strlcpy(lp->StationName, extra, len); 1801 1798 1802 1799 /* Commit the adapter parameters */ 1803 1800 wl_apply(lp);
+4 -4
drivers/target/target_core_pscsi.c
··· 134 134 * pSCSI Host ID and enable for phba mode 135 135 */ 136 136 sh = scsi_host_lookup(phv->phv_host_id); 137 - if (IS_ERR(sh)) { 137 + if (!sh) { 138 138 pr_err("pSCSI: Unable to locate SCSI Host for" 139 139 " phv_host_id: %d\n", phv->phv_host_id); 140 - return PTR_ERR(sh); 140 + return -EINVAL; 141 141 } 142 142 143 143 phv->phv_lld_host = sh; ··· 515 515 sh = phv->phv_lld_host; 516 516 } else { 517 517 sh = scsi_host_lookup(pdv->pdv_host_id); 518 - if (IS_ERR(sh)) { 518 + if (!sh) { 519 519 pr_err("pSCSI: Unable to locate" 520 520 " pdv_host_id: %d\n", pdv->pdv_host_id); 521 - return PTR_ERR(sh); 521 + return -EINVAL; 522 522 } 523 523 } 524 524 } else {
+5
drivers/target/target_core_sbc.c
··· 263 263 sectors, cmd->se_dev->dev_attrib.max_write_same_len); 264 264 return TCM_INVALID_CDB_FIELD; 265 265 } 266 + /* We always have ANC_SUP == 0 so setting ANCHOR is always an error */ 267 + if (flags[0] & 0x10) { 268 + pr_warn("WRITE SAME with ANCHOR not supported\n"); 269 + return TCM_INVALID_CDB_FIELD; 270 + } 266 271 /* 267 272 * Special case for WRITE_SAME w/ UNMAP=1 that ends up getting 268 273 * translated into block discard requests within backend code.
+37 -16
drivers/target/target_core_xcopy.c
··· 82 82 mutex_lock(&g_device_mutex); 83 83 list_for_each_entry(se_dev, &g_device_list, g_dev_node) { 84 84 85 + if (!se_dev->dev_attrib.emulate_3pc) 86 + continue; 87 + 85 88 memset(&tmp_dev_wwn[0], 0, XCOPY_NAA_IEEE_REGEX_LEN); 86 89 target_xcopy_gen_naa_ieee(se_dev, &tmp_dev_wwn[0]); 87 90 ··· 360 357 struct se_cmd se_cmd; 361 358 struct xcopy_op *xcopy_op; 362 359 struct completion xpt_passthrough_sem; 360 + unsigned char sense_buffer[TRANSPORT_SENSE_BUFFER]; 363 361 }; 364 362 365 363 static struct se_port xcopy_pt_port; ··· 679 675 680 676 pr_debug("target_xcopy_issue_pt_cmd(): SCSI status: 0x%02x\n", 681 677 se_cmd->scsi_status); 682 - return 0; 678 + 679 + return (se_cmd->scsi_status) ? -EINVAL : 0; 683 680 } 684 681 685 682 static int target_xcopy_read_source( ··· 713 708 (unsigned long long)src_lba, src_sectors, length); 714 709 715 710 transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, NULL, length, 716 - DMA_FROM_DEVICE, 0, NULL); 711 + DMA_FROM_DEVICE, 0, &xpt_cmd->sense_buffer[0]); 717 712 xop->src_pt_cmd = xpt_cmd; 718 713 719 714 rc = target_xcopy_setup_pt_cmd(xpt_cmd, xop, src_dev, &cdb[0], ··· 773 768 (unsigned long long)dst_lba, dst_sectors, length); 774 769 775 770 transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, NULL, length, 776 - DMA_TO_DEVICE, 0, NULL); 771 + DMA_TO_DEVICE, 0, &xpt_cmd->sense_buffer[0]); 777 772 xop->dst_pt_cmd = xpt_cmd; 778 773 779 774 rc = target_xcopy_setup_pt_cmd(xpt_cmd, xop, dst_dev, &cdb[0], ··· 889 884 890 885 sense_reason_t target_do_xcopy(struct se_cmd *se_cmd) 891 886 { 887 + struct se_device *dev = se_cmd->se_dev; 892 888 struct xcopy_op *xop = NULL; 893 889 unsigned char *p = NULL, *seg_desc; 894 890 unsigned int list_id, list_id_usage, sdll, inline_dl, sa; 891 + sense_reason_t ret = TCM_INVALID_PARAMETER_LIST; 895 892 int rc; 896 893 unsigned short tdll; 894 + 895 + if (!dev->dev_attrib.emulate_3pc) { 896 + pr_err("EXTENDED_COPY operation explicitly disabled\n"); 897 + return TCM_UNSUPPORTED_SCSI_OPCODE; 898 + } 897 899 898 900 sa = se_cmd->t_task_cdb[1] & 0x1f; 899 901 if (sa != 0x00) { ··· 908 896 return TCM_UNSUPPORTED_SCSI_OPCODE; 909 897 } 910 898 899 + xop = kzalloc(sizeof(struct xcopy_op), GFP_KERNEL); 900 + if (!xop) { 901 + pr_err("Unable to allocate xcopy_op\n"); 902 + return TCM_OUT_OF_RESOURCES; 903 + } 904 + xop->xop_se_cmd = se_cmd; 905 + 911 906 p = transport_kmap_data_sg(se_cmd); 912 907 if (!p) { 913 908 pr_err("transport_kmap_data_sg() failed in target_do_xcopy\n"); 909 + kfree(xop); 914 910 return TCM_OUT_OF_RESOURCES; 915 911 } 916 912 917 913 list_id = p[0]; 918 - if (list_id != 0x00) { 919 - pr_err("XCOPY with non zero list_id: 0x%02x\n", list_id); 920 - goto out; 921 - } 922 - list_id_usage = (p[1] & 0x18); 914 + list_id_usage = (p[1] & 0x18) >> 3; 915 + 923 916 /* 924 917 * Determine TARGET DESCRIPTOR LIST LENGTH + SEGMENT DESCRIPTOR LIST LENGTH 925 918 */ ··· 937 920 goto out; 938 921 } 939 922 940 - xop = kzalloc(sizeof(struct xcopy_op), GFP_KERNEL); 941 - if (!xop) { 942 - pr_err("Unable to allocate xcopy_op\n"); 943 - goto out; 944 - } 945 - xop->xop_se_cmd = se_cmd; 946 - 947 923 pr_debug("Processing XCOPY with list_id: 0x%02x list_id_usage: 0x%02x" 948 924 " tdll: %hu sdll: %u inline_dl: %u\n", list_id, list_id_usage, 949 925 tdll, sdll, inline_dl); ··· 944 934 rc = target_xcopy_parse_target_descriptors(se_cmd, xop, &p[16], tdll); 945 935 if (rc <= 0) 946 936 goto out; 937 + 938 + if (xop->src_dev->dev_attrib.block_size != 939 + xop->dst_dev->dev_attrib.block_size) { 940 + pr_err("XCOPY: Non matching src_dev block_size: %u + dst_dev" 941 + " block_size: %u currently unsupported\n", 942 + xop->src_dev->dev_attrib.block_size, 943 + xop->dst_dev->dev_attrib.block_size); 944 + xcopy_pt_undepend_remotedev(xop); 945 + ret = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 946 + goto out; 947 + } 948 949 pr_debug("XCOPY: Processed %d target descriptors, length: %u\n", rc, 949 950 rc * XCOPY_TARGET_DESC_LEN); ··· 978 957 if (p) 979 958 transport_kunmap_data_sg(se_cmd); 980 959 kfree(xop); 981 - return TCM_INVALID_CDB_FIELD; 960 + return ret; 982 961 } 983 962 984 963 static sense_reason_t target_rcr_operating_parameters(struct se_cmd *se_cmd)
+2 -7
drivers/tty/serial/atmel_serial.c
··· 1499 1499 /* 1500 1500 * Get ip name usart or uart 1501 1501 */ 1502 - static int atmel_get_ip_name(struct uart_port *port) 1502 + static void atmel_get_ip_name(struct uart_port *port) 1503 1503 { 1504 1504 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 1505 1505 int name = UART_GET_IP_NAME(port); ··· 1518 1518 atmel_port->is_usart = false; 1519 1519 } else { 1520 1520 dev_err(port->dev, "Not supported ip name, set to uart\n"); 1521 - return -EINVAL; 1522 1521 } 1523 - 1524 - return 0; 1525 1522 } 1526 1523 1527 1524 /* ··· 2402 2405 /* 2403 2406 * Get port name of usart or uart 2404 2407 */ 2405 - ret = atmel_get_ip_name(&port->uart); 2406 - if (ret < 0) 2407 - goto err_add_port; 2408 + atmel_get_ip_name(&port->uart); 2408 2409 2409 2410 return 0; 2410 2411
+15 -2
drivers/uio/uio.c
··· 642 642 { 643 643 struct uio_device *idev = vma->vm_private_data; 644 644 int mi = uio_find_mem_index(vma); 645 + struct uio_mem *mem; 645 646 if (mi < 0) 647 + return -EINVAL; 648 + mem = idev->info->mem + mi; 649 + 650 + if (vma->vm_end - vma->vm_start > mem->size) 646 651 return -EINVAL; 647 652 648 653 vma->vm_ops = &uio_physical_vm_ops; 649 - 650 654 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 651 655 656 + /* 657 + * We cannot use the vm_iomap_memory() helper here, 658 + * because vma->vm_pgoff is the map index we looked 659 + * up above in uio_find_mem_index(), rather than an 660 + * actual page offset into the mmap. 661 + * 662 + * So we just do the physical mmap without a page 663 + * offset. 664 + */ 652 665 return remap_pfn_range(vma, 653 666 vma->vm_start, 654 - idev->info->mem[mi].addr >> PAGE_SHIFT, 667 + mem->addr >> PAGE_SHIFT, 655 668 vma->vm_end - vma->vm_start, 656 669 vma->vm_page_prot); 657 670 }
+1
drivers/usb/serial/ftdi_sio.c
··· 904 904 { USB_DEVICE(FTDI_VID, FTDI_LUMEL_PD12_PID) }, 905 905 /* Crucible Devices */ 906 906 { USB_DEVICE(FTDI_VID, FTDI_CT_COMET_PID) }, 907 + { USB_DEVICE(FTDI_VID, FTDI_Z3X_PID) }, 907 908 { } /* Terminating entry */ 908 909 }; 909 910
+6
drivers/usb/serial/ftdi_sio_ids.h
··· 1307 1307 * Manufacturer: Crucible Technologies 1308 1308 */ 1309 1309 #define FTDI_CT_COMET_PID 0x8e08 1310 + 1311 + /* 1312 + * Product: Z3X Box 1313 + * Manufacturer: Smart GSM Team 1314 + */ 1315 + #define FTDI_Z3X_PID 0x0011
+57 -219
drivers/usb/serial/pl2303.c
··· 4 4 * Copyright (C) 2001-2007 Greg Kroah-Hartman (greg@kroah.com) 5 5 * Copyright (C) 2003 IBM Corp. 6 6 * 7 - * Copyright (C) 2009, 2013 Frank Schäfer <fschaefer.oss@googlemail.com> 8 - * - fixes, improvements and documentation for the baud rate encoding methods 9 - * Copyright (C) 2013 Reinhard Max <max@suse.de> 10 - * - fixes and improvements for the divisor based baud rate encoding method 11 - * 12 7 * Original driver for 2.2.x by anonymous 13 8 * 14 9 * This program is free software; you can redistribute it and/or ··· 129 134 130 135 131 136 enum pl2303_type { 132 - type_0, /* H version ? */ 133 - type_1, /* H version ? */ 134 - HX_TA, /* HX(A) / X(A) / TA version */ /* TODO: improve */ 135 - HXD_EA_RA_SA, /* HXD / EA / RA / SA version */ /* TODO: improve */ 136 - TB, /* TB version */ 137 - HX_CLONE, /* Cheap and less functional clone of the HX chip */ 137 + type_0, /* don't know the difference between type 0 and */ 138 + type_1, /* type 1, until someone from prolific tells us... */ 139 + HX, /* HX version of the pl2303 chip */ 138 140 }; 139 - /* 140 - * NOTE: don't know the difference between type 0 and type 1, 141 - * until someone from Prolific tells us... 
142 - * TODO: distinguish between X/HX, TA and HXD, EA, RA, SA variants 143 - */ 144 141 145 142 struct pl2303_serial_private { 146 143 enum pl2303_type type; ··· 172 185 { 173 186 struct pl2303_serial_private *spriv; 174 187 enum pl2303_type type = type_0; 175 - char *type_str = "unknown (treating as type_0)"; 176 188 unsigned char *buf; 177 189 178 190 spriv = kzalloc(sizeof(*spriv), GFP_KERNEL); ··· 184 198 return -ENOMEM; 185 199 } 186 200 187 - if (serial->dev->descriptor.bDeviceClass == 0x02) { 201 + if (serial->dev->descriptor.bDeviceClass == 0x02) 188 202 type = type_0; 189 - type_str = "type_0"; 190 - } else if (serial->dev->descriptor.bMaxPacketSize0 == 0x40) { 191 - /* 192 - * NOTE: The bcdDevice version is the only difference between 193 - * the device descriptors of the X/HX, HXD, EA, RA, SA, TA, TB 194 - */ 195 - if (le16_to_cpu(serial->dev->descriptor.bcdDevice) == 0x300) { 196 - /* Check if the device is a clone */ 197 - pl2303_vendor_read(0x9494, 0, serial, buf); 198 - /* 199 - * NOTE: Not sure if this read is really needed. 200 - * The HX returns 0x00, the clone 0x02, but the Windows 201 - * driver seems to ignore the value and continues. 
202 - */ 203 - pl2303_vendor_write(0x0606, 0xaa, serial); 204 - pl2303_vendor_read(0x8686, 0, serial, buf); 205 - if (buf[0] != 0xaa) { 206 - type = HX_CLONE; 207 - type_str = "X/HX clone (limited functionality)"; 208 - } else { 209 - type = HX_TA; 210 - type_str = "X/HX/TA"; 211 - } 212 - pl2303_vendor_write(0x0606, 0x00, serial); 213 - } else if (le16_to_cpu(serial->dev->descriptor.bcdDevice) 214 - == 0x400) { 215 - type = HXD_EA_RA_SA; 216 - type_str = "HXD/EA/RA/SA"; 217 - } else if (le16_to_cpu(serial->dev->descriptor.bcdDevice) 218 - == 0x500) { 219 - type = TB; 220 - type_str = "TB"; 221 - } else { 222 - dev_info(&serial->interface->dev, 223 - "unknown/unsupported device type\n"); 224 - kfree(spriv); 225 - kfree(buf); 226 - return -ENODEV; 227 - } 228 - } else if (serial->dev->descriptor.bDeviceClass == 0x00 229 - || serial->dev->descriptor.bDeviceClass == 0xFF) { 203 + else if (serial->dev->descriptor.bMaxPacketSize0 == 0x40) 204 + type = HX; 205 + else if (serial->dev->descriptor.bDeviceClass == 0x00) 230 206 type = type_1; 231 - type_str = "type_1"; 232 - } 233 - dev_dbg(&serial->interface->dev, "device type: %s\n", type_str); 207 + else if (serial->dev->descriptor.bDeviceClass == 0xFF) 208 + type = type_1; 209 + dev_dbg(&serial->interface->dev, "device type: %d\n", type); 234 210 235 211 spriv->type = type; 236 212 usb_set_serial_data(serial, spriv); ··· 207 259 pl2303_vendor_read(0x8383, 0, serial, buf); 208 260 pl2303_vendor_write(0, 1, serial); 209 261 pl2303_vendor_write(1, 0, serial); 210 - if (type == type_0 || type == type_1) 211 - pl2303_vendor_write(2, 0x24, serial); 212 - else 262 + if (type == HX) 213 263 pl2303_vendor_write(2, 0x44, serial); 264 + else 265 + pl2303_vendor_write(2, 0x24, serial); 214 266 215 267 kfree(buf); 216 268 return 0; ··· 264 316 return retval; 265 317 } 266 318 267 - static int pl2303_baudrate_encode_direct(int baud, enum pl2303_type type, 268 - u8 buf[4]) 319 + static void pl2303_encode_baudrate(struct tty_struct 
*tty, 320 + struct usb_serial_port *port, 321 + u8 buf[4]) 269 322 { 270 - /* 271 - * NOTE: Only the values defined in baud_sup are supported ! 272 - * => if unsupported values are set, the PL2303 uses 9600 baud instead 273 - * => HX clones just don't work at unsupported baud rates < 115200 baud, 274 - * for baud rates > 115200 they run at 115200 baud 275 - */ 276 323 const int baud_sup[] = { 75, 150, 300, 600, 1200, 1800, 2400, 3600, 277 - 4800, 7200, 9600, 14400, 19200, 28800, 38400, 278 - 57600, 115200, 230400, 460800, 614400, 921600, 279 - 1228800, 2457600, 3000000, 6000000, 12000000 }; 280 - /* 281 - * NOTE: With the exception of type_0/1 devices, the following 282 - * additional baud rates are supported (tested with HX rev. 3A only): 283 - * 110*, 56000*, 128000, 134400, 161280, 201600, 256000*, 268800, 284 - * 403200, 806400. (*: not HX and HX clones) 285 - * 286 - * Maximum values: HXD, TB: 12000000; HX, TA: 6000000; 287 - * type_0+1: 1228800; RA: 921600; HX clones, SA: 115200 288 - * 289 - * As long as we are not using this encoding method for anything else 290 - * than the type_0+1, HX and HX clone chips, there is no point in 291 - * complicating the code to support them. 292 - */ 324 + 4800, 7200, 9600, 14400, 19200, 28800, 38400, 325 + 57600, 115200, 230400, 460800, 500000, 614400, 326 + 921600, 1228800, 2457600, 3000000, 6000000 }; 327 + 328 + struct usb_serial *serial = port->serial; 329 + struct pl2303_serial_private *spriv = usb_get_serial_data(serial); 330 + int baud; 293 331 int i; 332 + 333 + /* 334 + * NOTE: Only the values defined in baud_sup are supported! 
335 + * => if unsupported values are set, the PL2303 seems to use 336 + * 9600 baud (at least my PL2303X always does) 337 + */ 338 + baud = tty_get_baud_rate(tty); 339 + dev_dbg(&port->dev, "baud requested = %d\n", baud); 340 + if (!baud) 341 + return; 294 342 295 343 /* Set baudrate to nearest supported value */ 296 344 for (i = 0; i < ARRAY_SIZE(baud_sup); ++i) { 297 345 if (baud_sup[i] > baud) 298 346 break; 299 347 } 348 + 300 349 if (i == ARRAY_SIZE(baud_sup)) 301 350 baud = baud_sup[i - 1]; 302 351 else if (i > 0 && (baud_sup[i] - baud) > (baud - baud_sup[i - 1])) 303 352 baud = baud_sup[i - 1]; 304 353 else 305 354 baud = baud_sup[i]; 306 - /* Respect the chip type specific baud rate limits */ 307 - /* 308 - * FIXME: as long as we don't know how to distinguish between the 309 - * HXD, EA, RA, and SA chip variants, allow the max. value of 12M. 310 - */ 311 - if (type == HX_TA) 312 - baud = min_t(int, baud, 6000000); 313 - else if (type == type_0 || type == type_1) 355 + 356 + /* type_0, type_1 only support up to 1228800 baud */ 357 + if (spriv->type != HX) 314 358 baud = min_t(int, baud, 1228800); 315 - else if (type == HX_CLONE) 316 - baud = min_t(int, baud, 115200); 317 - /* Direct (standard) baud rate encoding method */ 318 - put_unaligned_le32(baud, buf); 319 359 320 - return baud; 321 - } 322 - 323 - static int pl2303_baudrate_encode_divisor(int baud, enum pl2303_type type, 324 - u8 buf[4]) 325 - { 326 - /* 327 - * Divisor based baud rate encoding method 328 - * 329 - * NOTE: HX clones do NOT support this method. 330 - * It's not clear if the type_0/1 chips support it. 331 - * 332 - * divisor = 12MHz * 32 / baudrate = 2^A * B 333 - * 334 - * with 335 - * 336 - * A = buf[1] & 0x0e 337 - * B = buf[0] + (buf[1] & 0x01) << 8 338 - * 339 - * Special cases: 340 - * => 8 < B < 16: device seems to work not properly 341 - * => B <= 8: device uses the max. 
value B = 512 instead 342 - */ 343 - unsigned int A, B; 344 - 345 - /* 346 - * NOTE: The Windows driver allows maximum baud rates of 110% of the 347 - * specified maximium value. 348 - * Quick tests with early (2004) HX (rev. A) chips suggest, that even 349 - * higher baud rates (up to the maximum of 24M baud !) are working fine, 350 - * but that should really be tested carefully in "real life" scenarios 351 - * before removing the upper limit completely. 352 - * Baud rates smaller than the specified 75 baud are definitely working 353 - * fine. 354 - */ 355 - if (type == type_0 || type == type_1) 356 - baud = min_t(int, baud, 1228800 * 1.1); 357 - else if (type == HX_TA) 358 - baud = min_t(int, baud, 6000000 * 1.1); 359 - else if (type == HXD_EA_RA_SA) 360 - /* HXD, EA: 12Mbps; RA: 1Mbps; SA: 115200 bps */ 361 - /* 362 - * FIXME: as long as we don't know how to distinguish between 363 - * these chip variants, allow the max. of these values 364 - */ 365 - baud = min_t(int, baud, 12000000 * 1.1); 366 - else if (type == TB) 367 - baud = min_t(int, baud, 12000000 * 1.1); 368 - /* Determine factors A and B */ 369 - A = 0; 370 - B = 12000000 * 32 / baud; /* 12MHz */ 371 - B <<= 1; /* Add one bit for rounding */ 372 - while (B > (512 << 1) && A <= 14) { 373 - A += 2; 374 - B >>= 2; 375 - } 376 - if (A > 14) { /* max. divisor = min. baudrate reached */ 377 - A = 14; 378 - B = 512; 379 - /* => ~45.78 baud */ 360 + if (baud <= 115200) { 361 + put_unaligned_le32(baud, buf); 380 362 } else { 381 - B = (B + 1) >> 1; /* Round the last bit */ 382 - } 383 - /* Handle special cases */ 384 - if (B == 512) 385 - B = 0; /* also: 1 to 8 */ 386 - else if (B < 16) 387 363 /* 388 - * NOTE: With the current algorithm this happens 389 - * only for A=0 and means that the min. divisor 390 - * (respectively: the max. baudrate) is reached. 
364 + * Apparently the formula for higher speeds is: 365 + * baudrate = 12M * 32 / (2^buf[1]) / buf[0] 391 366 */ 392 - B = 16; /* => 24 MBaud */ 393 - /* Encode the baud rate */ 394 - buf[3] = 0x80; /* Select divisor encoding method */ 395 - buf[2] = 0; 396 - buf[1] = (A & 0x0e); /* A */ 397 - buf[1] |= ((B & 0x100) >> 8); /* MSB of B */ 398 - buf[0] = B & 0xff; /* 8 LSBs of B */ 399 - /* Calculate the actual/resulting baud rate */ 400 - if (B <= 8) 401 - B = 512; 402 - baud = 12000000 * 32 / ((1 << A) * B); 367 + unsigned tmp = 12000000 * 32 / baud; 368 + buf[3] = 0x80; 369 + buf[2] = 0; 370 + buf[1] = (tmp >= 256); 371 + while (tmp >= 256) { 372 + tmp >>= 2; 373 + buf[1] <<= 1; 374 + } 375 + buf[0] = tmp; 376 + } 403 377 404 - return baud; 405 - } 406 - 407 - static void pl2303_encode_baudrate(struct tty_struct *tty, 408 - struct usb_serial_port *port, 409 - enum pl2303_type type, 410 - u8 buf[4]) 411 - { 412 - int baud; 413 - 414 - baud = tty_get_baud_rate(tty); 415 - dev_dbg(&port->dev, "baud requested = %d\n", baud); 416 - if (!baud) 417 - return; 418 - /* 419 - * There are two methods for setting/encoding the baud rate 420 - * 1) Direct method: encodes the baud rate value directly 421 - * => supported by all chip types 422 - * 2) Divisor based method: encodes a divisor to a base value (12MHz*32) 423 - * => not supported by HX clones (and likely type_0/1 chips) 424 - * 425 - * NOTE: Although the divisor based baud rate encoding method is much 426 - * more flexible, some of the standard baud rate values can not be 427 - * realized exactly. But the difference is very small (max. 0.2%) and 428 - * the device likely uses the same baud rate generator for both methods 429 - * so that there is likley no difference. 
430 - */ 431 - if (type == type_0 || type == type_1 || type == HX_CLONE) 432 - baud = pl2303_baudrate_encode_direct(baud, type, buf); 433 - else 434 - baud = pl2303_baudrate_encode_divisor(baud, type, buf); 435 378 /* Save resulting baud rate */ 436 379 tty_encode_baud_rate(tty, baud, baud); 437 380 dev_dbg(&port->dev, "baud set = %d\n", baud); ··· 379 540 dev_dbg(&port->dev, "data bits = %d\n", buf[6]); 380 541 } 381 542 382 - /* For reference: buf[0]:buf[3] baud rate value */ 383 - pl2303_encode_baudrate(tty, port, spriv->type, buf); 543 + /* For reference buf[0]:buf[3] baud rate value */ 544 + pl2303_encode_baudrate(tty, port, &buf[0]); 384 545 385 546 /* For reference buf[4]=0 is 1 stop bits */ 386 547 /* For reference buf[4]=1 is 1.5 stop bits */ ··· 457 618 dev_dbg(&port->dev, "0xa1:0x21:0:0 %d - %7ph\n", i, buf); 458 619 459 620 if (C_CRTSCTS(tty)) { 460 - if (spriv->type == type_0 || spriv->type == type_1) 461 - pl2303_vendor_write(0x0, 0x41, serial); 462 - else 621 + if (spriv->type == HX) 463 622 pl2303_vendor_write(0x0, 0x61, serial); 623 + else 624 + pl2303_vendor_write(0x0, 0x41, serial); 464 625 } else { 465 626 pl2303_vendor_write(0x0, 0x0, serial); 466 627 } ··· 497 658 struct pl2303_serial_private *spriv = usb_get_serial_data(serial); 498 659 int result; 499 660 500 - if (spriv->type == type_0 || spriv->type == type_1) { 661 + if (spriv->type != HX) { 501 662 usb_clear_halt(serial->dev, port->write_urb->pipe); 502 663 usb_clear_halt(serial->dev, port->read_urb->pipe); 503 664 } else { ··· 672 833 result = usb_control_msg(serial->dev, usb_sndctrlpipe(serial->dev, 0), 673 834 BREAK_REQUEST, BREAK_REQUEST_TYPE, state, 674 835 0, NULL, 0, 100); 675 - /* NOTE: HX clones don't support sending breaks, -EPIPE is returned */ 676 836 if (result) 677 837 dev_err(&port->dev, "error sending break = %d\n", result); 678 838 }
+1 -1
drivers/vhost/scsi.c
··· 1056 1056 if (data_direction != DMA_NONE) { 1057 1057 ret = vhost_scsi_map_iov_to_sgl(cmd, 1058 1058 &vq->iov[data_first], data_num, 1059 - data_direction == DMA_TO_DEVICE); 1059 + data_direction == DMA_FROM_DEVICE); 1060 1060 if (unlikely(ret)) { 1061 1061 vq_err(vq, "Failed to map iov to sgl\n"); 1062 1062 goto err_free;
+1 -25
drivers/video/au1100fb.c
··· 361 361 int au1100fb_fb_mmap(struct fb_info *fbi, struct vm_area_struct *vma) 362 362 { 363 363 struct au1100fb_device *fbdev; 364 - unsigned int len; 365 - unsigned long start=0, off; 366 364 367 365 fbdev = to_au1100fb_device(fbi); 368 - 369 - if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT)) { 370 - return -EINVAL; 371 - } 372 - 373 - start = fbdev->fb_phys & PAGE_MASK; 374 - len = PAGE_ALIGN((start & ~PAGE_MASK) + fbdev->fb_len); 375 - 376 - off = vma->vm_pgoff << PAGE_SHIFT; 377 - 378 - if ((vma->vm_end - vma->vm_start + off) > len) { 379 - return -EINVAL; 380 - } 381 - 382 - off += start; 383 - vma->vm_pgoff = off >> PAGE_SHIFT; 384 366 385 367 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 386 368 pgprot_val(vma->vm_page_prot) |= (6 << 9); //CCA=6 387 369 388 - if (io_remap_pfn_range(vma, vma->vm_start, off >> PAGE_SHIFT, 389 - vma->vm_end - vma->vm_start, 390 - vma->vm_page_prot)) { 391 - return -EAGAIN; 392 - } 393 - 394 - return 0; 370 + return vm_iomap_memory(vma, fbdev->fb_phys, fbdev->fb_len); 395 371 } 396 372 397 373 static struct fb_ops au1100fb_ops =
+1 -22
drivers/video/au1200fb.c
··· 1233 1233 * method mainly to allow the use of the TLB streaming flag (CCA=6) 1234 1234 */ 1235 1235 static int au1200fb_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) 1236 - 1237 1236 { 1238 - unsigned int len; 1239 - unsigned long start=0, off; 1240 1237 struct au1200fb_device *fbdev = info->par; 1241 - 1242 - if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT)) { 1243 - return -EINVAL; 1244 - } 1245 - 1246 - start = fbdev->fb_phys & PAGE_MASK; 1247 - len = PAGE_ALIGN((start & ~PAGE_MASK) + fbdev->fb_len); 1248 - 1249 - off = vma->vm_pgoff << PAGE_SHIFT; 1250 - 1251 - if ((vma->vm_end - vma->vm_start + off) > len) { 1252 - return -EINVAL; 1253 - } 1254 - 1255 - off += start; 1256 - vma->vm_pgoff = off >> PAGE_SHIFT; 1257 1238 1258 1239 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 1259 1240 pgprot_val(vma->vm_page_prot) |= _CACHE_MASK; /* CCA=7 */ 1260 1241 1261 - return io_remap_pfn_range(vma, vma->vm_start, off >> PAGE_SHIFT, 1262 - vma->vm_end - vma->vm_start, 1263 - vma->vm_page_prot); 1242 + return vm_iomap_memory(vma, fbdev->fb_phys, fbdev->fb_len); 1264 1243 } 1265 1244 1266 1245 static void set_global(u_int cmd, struct au1200_lcd_global_regs_t *pdata)
+3 -2
fs/dcache.c
··· 542 542 * If ref is non-zero, then decrement the refcount too. 543 543 * Returns dentry requiring refcount drop, or NULL if we're done. 544 544 */ 545 - static inline struct dentry * 545 + static struct dentry * 546 546 dentry_kill(struct dentry *dentry, int unlock_on_failure) 547 547 __releases(dentry->d_lock) 548 548 { ··· 630 630 goto kill_it; 631 631 } 632 632 633 - dentry->d_flags |= DCACHE_REFERENCED; 633 + if (!(dentry->d_flags & DCACHE_REFERENCED)) 634 + dentry->d_flags |= DCACHE_REFERENCED; 634 635 dentry_lru_add(dentry); 635 636 636 637 dentry->d_lockref.count--;
+1 -1
fs/ecryptfs/crypto.c
··· 408 408 struct page *page) 409 409 { 410 410 return ecryptfs_lower_header_size(crypt_stat) + 411 - (page->index << PAGE_CACHE_SHIFT); 411 + ((loff_t)page->index << PAGE_CACHE_SHIFT); 412 412 } 413 413 414 414 /**
+2 -1
fs/ecryptfs/keystore.c
··· 1149 1149 struct ecryptfs_msg_ctx *msg_ctx; 1150 1150 struct ecryptfs_message *msg = NULL; 1151 1151 char *auth_tok_sig; 1152 - char *payload; 1152 + char *payload = NULL; 1153 1153 size_t payload_len = 0; 1154 1154 int rc; 1155 1155 ··· 1203 1203 } 1204 1204 out: 1205 1205 kfree(msg); 1206 + kfree(payload); 1206 1207 return rc; 1207 1208 } 1208 1209
+1 -3
fs/eventpoll.c
··· 34 34 #include <linux/mutex.h> 35 35 #include <linux/anon_inodes.h> 36 36 #include <linux/device.h> 37 - #include <linux/freezer.h> 38 37 #include <asm/uaccess.h> 39 38 #include <asm/io.h> 40 39 #include <asm/mman.h> ··· 1604 1605 } 1605 1606 1606 1607 spin_unlock_irqrestore(&ep->lock, flags); 1607 - if (!freezable_schedule_hrtimeout_range(to, slack, 1608 - HRTIMER_MODE_ABS)) 1608 + if (!schedule_hrtimeout_range(to, slack, HRTIMER_MODE_ABS)) 1609 1609 timed_out = 1; 1610 1610 1611 1611 spin_lock_irqsave(&ep->lock, flags);
+2 -2
fs/file_table.c
··· 297 297 delayed_fput(NULL); 298 298 } 299 299 300 - static DECLARE_WORK(delayed_fput_work, delayed_fput); 300 + static DECLARE_DELAYED_WORK(delayed_fput_work, delayed_fput); 301 301 302 302 void fput(struct file *file) 303 303 { ··· 317 317 } 318 318 319 319 if (llist_add(&file->f_u.fu_llist, &delayed_fput_list)) 320 - schedule_work(&delayed_fput_work); 320 + schedule_delayed_work(&delayed_fput_work, 1); 321 321 } 322 322 } 323 323
+1 -2
fs/select.c
··· 238 238 239 239 set_current_state(state); 240 240 if (!pwq->triggered) 241 - rc = freezable_schedule_hrtimeout_range(expires, slack, 242 - HRTIMER_MODE_ABS); 241 + rc = schedule_hrtimeout_range(expires, slack, HRTIMER_MODE_ABS); 243 242 __set_current_state(TASK_RUNNING); 244 243 245 244 /*
+2
fs/seq_file.c
··· 328 328 m->read_pos = offset; 329 329 retval = file->f_pos = offset; 330 330 } 331 + } else { 332 + file->f_pos = offset; 331 333 } 332 334 } 333 335 file->f_version = m->version;
+3 -3
include/linux/ipc_namespace.h
··· 34 34 int sem_ctls[4]; 35 35 int used_sems; 36 36 37 - int msg_ctlmax; 38 - int msg_ctlmnb; 39 - int msg_ctlmni; 37 + unsigned int msg_ctlmax; 38 + unsigned int msg_ctlmnb; 39 + unsigned int msg_ctlmni; 40 40 atomic_t msg_bytes; 41 41 atomic_t msg_hdrs; 42 42 int auto_msgmni;
+3 -2
include/linux/netpoll.h
··· 24 24 struct net_device *dev; 25 25 char dev_name[IFNAMSIZ]; 26 26 const char *name; 27 - void (*rx_hook)(struct netpoll *, int, char *, int); 27 + void (*rx_skb_hook)(struct netpoll *np, int source, struct sk_buff *skb, 28 + int offset, int len); 28 29 29 30 union inet_addr local_ip, remote_ip; 30 31 bool ipv6; ··· 42 41 unsigned long rx_flags; 43 42 spinlock_t rx_lock; 44 43 struct semaphore dev_lock; 45 - struct list_head rx_np; /* netpolls that registered an rx_hook */ 44 + struct list_head rx_np; /* netpolls that registered an rx_skb_hook */ 46 45 47 46 struct sk_buff_head neigh_tx; /* list of neigh requests to reply to */ 48 47 struct sk_buff_head txq;
+4 -4
include/linux/percpu.h
··· 332 332 #endif 333 333 334 334 #ifndef this_cpu_sub 335 - # define this_cpu_sub(pcp, val) this_cpu_add((pcp), -(val)) 335 + # define this_cpu_sub(pcp, val) this_cpu_add((pcp), -(typeof(pcp))(val)) 336 336 #endif 337 337 338 338 #ifndef this_cpu_inc ··· 418 418 # define this_cpu_add_return(pcp, val) __pcpu_size_call_return2(this_cpu_add_return_, pcp, val) 419 419 #endif 420 420 421 - #define this_cpu_sub_return(pcp, val) this_cpu_add_return(pcp, -(val)) 421 + #define this_cpu_sub_return(pcp, val) this_cpu_add_return(pcp, -(typeof(pcp))(val)) 422 422 #define this_cpu_inc_return(pcp) this_cpu_add_return(pcp, 1) 423 423 #define this_cpu_dec_return(pcp) this_cpu_add_return(pcp, -1) 424 424 ··· 586 586 #endif 587 587 588 588 #ifndef __this_cpu_sub 589 - # define __this_cpu_sub(pcp, val) __this_cpu_add((pcp), -(val)) 589 + # define __this_cpu_sub(pcp, val) __this_cpu_add((pcp), -(typeof(pcp))(val)) 590 590 #endif 591 591 592 592 #ifndef __this_cpu_inc ··· 668 668 __pcpu_size_call_return2(__this_cpu_add_return_, pcp, val) 669 669 #endif 670 670 671 - #define __this_cpu_sub_return(pcp, val) __this_cpu_add_return(pcp, -(val)) 671 + #define __this_cpu_sub_return(pcp, val) __this_cpu_add_return(pcp, -(typeof(pcp))(val)) 672 672 #define __this_cpu_inc_return(pcp) __this_cpu_add_return(pcp, 1) 673 673 #define __this_cpu_dec_return(pcp) __this_cpu_add_return(pcp, -1) 674 674
+1
include/net/ip6_fib.h
··· 165 165 static inline void rt6_clean_expires(struct rt6_info *rt) 166 166 { 167 167 rt->rt6i_flags &= ~RTF_EXPIRES; 168 + rt->dst.expires = 0; 168 169 } 169 170 170 171 static inline void rt6_set_expires(struct rt6_info *rt, unsigned long expires)
+2 -2
include/trace/events/target.h
··· 144 144 ), 145 145 146 146 TP_fast_assign( 147 - __entry->unpacked_lun = cmd->se_lun->unpacked_lun; 147 + __entry->unpacked_lun = cmd->orig_fe_lun; 148 148 __entry->opcode = cmd->t_task_cdb[0]; 149 149 __entry->data_length = cmd->data_length; 150 150 __entry->task_attribute = cmd->sam_task_attr; ··· 182 182 ), 183 183 184 184 TP_fast_assign( 185 - __entry->unpacked_lun = cmd->se_lun->unpacked_lun; 185 + __entry->unpacked_lun = cmd->orig_fe_lun; 186 186 __entry->opcode = cmd->t_task_cdb[0]; 187 187 __entry->data_length = cmd->data_length; 188 188 __entry->task_attribute = cmd->sam_task_attr;
+7 -5
include/uapi/linux/perf_event.h
··· 456 456 /* 457 457 * Control data for the mmap() data buffer. 458 458 * 459 - * User-space reading the @data_head value should issue an rmb(), on 460 - * SMP capable platforms, after reading this value -- see 461 - * perf_event_wakeup(). 459 + * User-space reading the @data_head value should issue an smp_rmb(), 460 + * after reading this value. 462 461 * 463 462 * When the mapping is PROT_WRITE the @data_tail value should be 464 463 * written by userspace to reflect the last read data, after issuing 465 464 * an smp_mb() to separate the data read from the ->data_tail store. 466 465 * In this case the kernel will not over-write unread data. 467 466 * 468 + * See perf_output_put_handle() for the data ordering. 466 468 */ 467 469 __u64 data_head; /* head in the data section */ 468 470 __u64 data_tail; /* user-space written tail */
+12 -8
ipc/ipc_sysctl.c
··· 62 62 return err; 63 63 } 64 64 65 - static int proc_ipc_callback_dointvec(ctl_table *table, int write, 65 + static int proc_ipc_callback_dointvec_minmax(ctl_table *table, int write, 66 66 void __user *buffer, size_t *lenp, loff_t *ppos) 67 67 { 68 68 struct ctl_table ipc_table; ··· 72 72 memcpy(&ipc_table, table, sizeof(ipc_table)); 73 73 ipc_table.data = get_ipc(table); 74 74 75 - rc = proc_dointvec(&ipc_table, write, buffer, lenp, ppos); 75 + rc = proc_dointvec_minmax(&ipc_table, write, buffer, lenp, ppos); 76 76 77 77 if (write && !rc && lenp_bef == *lenp) 78 78 /* ··· 152 152 #define proc_ipc_dointvec NULL 153 153 #define proc_ipc_dointvec_minmax NULL 154 154 #define proc_ipc_dointvec_minmax_orphans NULL 155 - #define proc_ipc_callback_dointvec NULL 155 + #define proc_ipc_callback_dointvec_minmax NULL 156 156 #define proc_ipcauto_dointvec_minmax NULL 157 157 #endif 158 158 159 159 static int zero; 160 160 static int one = 1; 161 - #ifdef CONFIG_CHECKPOINT_RESTORE 162 161 static int int_max = INT_MAX; 163 - #endif 164 162 165 163 static struct ctl_table ipc_kern_table[] = { 166 164 { ··· 196 198 .data = &init_ipc_ns.msg_ctlmax, 197 199 .maxlen = sizeof (init_ipc_ns.msg_ctlmax), 198 200 .mode = 0644, 199 - .proc_handler = proc_ipc_dointvec, 201 + .proc_handler = proc_ipc_dointvec_minmax, 202 + .extra1 = &zero, 203 + .extra2 = &int_max, 200 204 }, 201 205 { 202 206 .procname = "msgmni", 203 207 .data = &init_ipc_ns.msg_ctlmni, 204 208 .maxlen = sizeof (init_ipc_ns.msg_ctlmni), 205 209 .mode = 0644, 206 - .proc_handler = proc_ipc_callback_dointvec, 210 + .proc_handler = proc_ipc_callback_dointvec_minmax, 211 + .extra1 = &zero, 212 + .extra2 = &int_max, 207 213 }, 208 214 { 209 215 .procname = "msgmnb", 210 216 .data = &init_ipc_ns.msg_ctlmnb, 211 217 .maxlen = sizeof (init_ipc_ns.msg_ctlmnb), 212 218 .mode = 0644, 213 - .proc_handler = proc_ipc_dointvec, 219 + .proc_handler = proc_ipc_dointvec_minmax, 220 + .extra1 = &zero, 221 + .extra2 = &int_max, 214 222 }, 
215 223 { 216 224 .procname = "sem",
+4
kernel/events/core.c
··· 6767 6767 if (ret) 6768 6768 return -EFAULT; 6769 6769 6770 + /* disabled for now */ 6771 + if (attr->mmap2) 6772 + return -EINVAL; 6773 + 6770 6774 if (attr->__reserved_1) 6771 6775 return -EINVAL; 6772 6776
+27 -4
kernel/events/ring_buffer.c
··· 87 87 goto out; 88 88 89 89 /* 90 - * Publish the known good head. Rely on the full barrier implied 91 - * by atomic_dec_and_test() order the rb->head read and this 92 - * write. 90 + * Since the mmap() consumer (userspace) can run on a different CPU: 91 + * 92 + * kernel user 93 + * 94 + * READ ->data_tail READ ->data_head 95 + * smp_mb() (A) smp_rmb() (C) 96 + * WRITE $data READ $data 97 + * smp_wmb() (B) smp_mb() (D) 98 + * STORE ->data_head WRITE ->data_tail 99 + * 100 + * Where A pairs with D, and B pairs with C. 101 + * 102 + * I don't think A needs to be a full barrier because we won't in fact 103 + * write data until we see the store from userspace. So we simply don't 104 + * issue the data WRITE until we observe it. Be conservative for now. 105 + * 106 + * OTOH, D needs to be a full barrier since it separates the data READ 107 + * from the tail WRITE. 108 + * 109 + * For B a WMB is sufficient since it separates two WRITEs, and for C 110 + * an RMB is sufficient since it separates two READs. 111 + * 112 + * See perf_output_begin(). 93 113 */ 114 + smp_wmb(); 94 115 rb->user_page->data_head = head; 95 116 96 117 /* ··· 175 154 * Userspace could choose to issue a mb() before updating the 176 155 * tail pointer. So that all reads will be completed before the 177 156 * write is issued. 157 + * 158 + * See perf_output_put_handle(). 178 159 */ 179 160 tail = ACCESS_ONCE(rb->user_page->data_tail); 180 - smp_rmb(); 161 + smp_mb(); 181 162 offset = head = local_read(&rb->head); 182 163 head += size; 183 164 if (unlikely(!perf_output_space(rb, tail, offset, head)))
+16 -16
kernel/mutex.c
··· 410 410 static __always_inline int __sched 411 411 __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, 412 412 struct lockdep_map *nest_lock, unsigned long ip, 413 - struct ww_acquire_ctx *ww_ctx) 413 + struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx) 414 414 { 415 415 struct task_struct *task = current; 416 416 struct mutex_waiter waiter; ··· 450 450 struct task_struct *owner; 451 451 struct mspin_node node; 452 452 453 - if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) { 453 + if (use_ww_ctx && ww_ctx->acquired > 0) { 454 454 struct ww_mutex *ww; 455 455 456 456 ww = container_of(lock, struct ww_mutex, base); ··· 480 480 if ((atomic_read(&lock->count) == 1) && 481 481 (atomic_cmpxchg(&lock->count, 1, 0) == 1)) { 482 482 lock_acquired(&lock->dep_map, ip); 483 - if (!__builtin_constant_p(ww_ctx == NULL)) { 483 + if (use_ww_ctx) { 484 484 struct ww_mutex *ww; 485 485 ww = container_of(lock, struct ww_mutex, base); 486 486 ··· 551 551 goto err; 552 552 } 553 553 554 - if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) { 554 + if (use_ww_ctx && ww_ctx->acquired > 0) { 555 555 ret = __mutex_lock_check_stamp(lock, ww_ctx); 556 556 if (ret) 557 557 goto err; ··· 575 575 lock_acquired(&lock->dep_map, ip); 576 576 mutex_set_owner(lock); 577 577 578 - if (!__builtin_constant_p(ww_ctx == NULL)) { 578 + if (use_ww_ctx) { 579 579 struct ww_mutex *ww = container_of(lock, struct ww_mutex, base); 580 580 struct mutex_waiter *cur; 581 581 ··· 615 615 { 616 616 might_sleep(); 617 617 __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 618 - subclass, NULL, _RET_IP_, NULL); 618 + subclass, NULL, _RET_IP_, NULL, 0); 619 619 } 620 620 621 621 EXPORT_SYMBOL_GPL(mutex_lock_nested); ··· 625 625 { 626 626 might_sleep(); 627 627 __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 628 - 0, nest, _RET_IP_, NULL); 628 + 0, nest, _RET_IP_, NULL, 0); 629 629 } 630 630 631 631 EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock); ··· 635 635 { 636 
636 might_sleep(); 637 637 return __mutex_lock_common(lock, TASK_KILLABLE, 638 - subclass, NULL, _RET_IP_, NULL); 638 + subclass, NULL, _RET_IP_, NULL, 0); 639 639 } 640 640 EXPORT_SYMBOL_GPL(mutex_lock_killable_nested); 641 641 ··· 644 644 { 645 645 might_sleep(); 646 646 return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 647 - subclass, NULL, _RET_IP_, NULL); 647 + subclass, NULL, _RET_IP_, NULL, 0); 648 648 } 649 649 650 650 EXPORT_SYMBOL_GPL(mutex_lock_interruptible_nested); ··· 682 682 683 683 might_sleep(); 684 684 ret = __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 685 - 0, &ctx->dep_map, _RET_IP_, ctx); 685 + 0, &ctx->dep_map, _RET_IP_, ctx, 1); 686 686 if (!ret && ctx->acquired > 1) 687 687 return ww_mutex_deadlock_injection(lock, ctx); 688 688 ··· 697 697 698 698 might_sleep(); 699 699 ret = __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 700 - 0, &ctx->dep_map, _RET_IP_, ctx); 700 + 0, &ctx->dep_map, _RET_IP_, ctx, 1); 701 701 702 702 if (!ret && ctx->acquired > 1) 703 703 return ww_mutex_deadlock_injection(lock, ctx); ··· 809 809 struct mutex *lock = container_of(lock_count, struct mutex, count); 810 810 811 811 __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, 812 - NULL, _RET_IP_, NULL); 812 + NULL, _RET_IP_, NULL, 0); 813 813 } 814 814 815 815 static noinline int __sched 816 816 __mutex_lock_killable_slowpath(struct mutex *lock) 817 817 { 818 818 return __mutex_lock_common(lock, TASK_KILLABLE, 0, 819 - NULL, _RET_IP_, NULL); 819 + NULL, _RET_IP_, NULL, 0); 820 820 } 821 821 822 822 static noinline int __sched 823 823 __mutex_lock_interruptible_slowpath(struct mutex *lock) 824 824 { 825 825 return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, 826 - NULL, _RET_IP_, NULL); 826 + NULL, _RET_IP_, NULL, 0); 827 827 } 828 828 829 829 static noinline int __sched 830 830 __ww_mutex_lock_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx) 831 831 { 832 832 return __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 0, 833 - NULL, _RET_IP_, ctx); 833 + NULL, _RET_IP_, ctx, 1); 834 834 } 835 835 836 836 static noinline int __sched ··· 838 838 struct ww_acquire_ctx *ctx) 839 839 { 840 840 return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 0, 841 - NULL, _RET_IP_, ctx); 841 + NULL, _RET_IP_, ctx, 1); 842 842 } 843 843 844 844 #endif
+1 -1
kernel/power/hibernate.c
··· 846 846 goto Finish; 847 847 } 848 848 849 - late_initcall(software_resume); 849 + late_initcall_sync(software_resume); 850 850 851 851 852 852 static const char * const hibernation_modes[] = {
+51 -16
kernel/time/clockevents.c
··· 33 33 int res; 34 34 }; 35 35 36 + static u64 cev_delta2ns(unsigned long latch, struct clock_event_device *evt, 37 + bool ismax) 38 + { 39 + u64 clc = (u64) latch << evt->shift; 40 + u64 rnd; 41 + 42 + if (unlikely(!evt->mult)) { 43 + evt->mult = 1; 44 + WARN_ON(1); 45 + } 46 + rnd = (u64) evt->mult - 1; 47 + 48 + /* 49 + * Upper bound sanity check. If the backwards conversion is 50 + * not equal latch, we know that the above shift overflowed. 51 + */ 52 + if ((clc >> evt->shift) != (u64)latch) 53 + clc = ~0ULL; 54 + 55 + /* 56 + * Scaled math oddities: 57 + * 58 + * For mult <= (1 << shift) we can safely add mult - 1 to 59 + * prevent integer rounding loss. So the backwards conversion 60 + * from nsec to device ticks will be correct. 61 + * 62 + * For mult > (1 << shift), i.e. device frequency is > 1GHz we 63 + * need to be careful. Adding mult - 1 will result in a value 64 + * which when converted back to device ticks can be larger 65 + * than latch by up to (mult - 1) >> shift. For the min_delta 66 + * calculation we still want to apply this in order to stay 67 + * above the minimum device ticks limit. For the upper limit 68 + * we would end up with a latch value larger than the upper 69 + * limit of the device, so we omit the add to stay below the 70 + * device upper boundary. 71 + * 72 + * Also omit the add if it would overflow the u64 boundary. 73 + */ 74 + if ((~0ULL - clc > rnd) && 75 + (!ismax || evt->mult <= (1U << evt->shift))) 76 + clc += rnd; 77 + 78 + do_div(clc, evt->mult); 79 + 80 + /* Deltas less than 1usec are pointless noise */ 81 + return clc > 1000 ? clc : 1000; 82 + } 83 + 36 84 /** 37 85 * clockevents_delta2ns - Convert a latch value (device ticks) to nanoseconds 38 86 * @latch: value to convert ··· 90 42 */ 91 43 u64 clockevent_delta2ns(unsigned long latch, struct clock_event_device *evt) 92 44 { 93 - u64 clc = (u64) latch << evt->shift; 94 - 95 - if (unlikely(!evt->mult)) { 96 - evt->mult = 1; 97 - WARN_ON(1); 98 - } 99 - 100 - do_div(clc, evt->mult); 101 - if (clc < 1000) 102 - clc = 1000; 103 - if (clc > KTIME_MAX) 104 - clc = KTIME_MAX; 105 - 106 - return clc; 45 + return cev_delta2ns(latch, evt, false); 107 46 } 108 47 EXPORT_SYMBOL_GPL(clockevent_delta2ns); 109 48 ··· 415 380 sec = 600; 416 381 417 382 clockevents_calc_mult_shift(dev, freq, sec); 418 - dev->min_delta_ns = clockevent_delta2ns(dev->min_delta_ticks, dev); 419 - dev->max_delta_ns = clockevent_delta2ns(dev->max_delta_ticks, dev); 383 + dev->min_delta_ns = cev_delta2ns(dev->min_delta_ticks, dev, false); 384 + dev->max_delta_ns = cev_delta2ns(dev->max_delta_ticks, dev, true); 420 385 } 421 386 422 387 /**
+1 -1
lib/Kconfig.debug
··· 983 983 984 984 config DEBUG_KOBJECT_RELEASE 985 985 bool "kobject release debugging" 986 - depends on DEBUG_KERNEL 986 + depends on DEBUG_OBJECTS_TIMERS 987 987 help 988 988 kobjects are reference counted objects. This means that their 989 989 last reference count put is not predictable, and the kobject can
+2 -1
lib/scatterlist.c
··· 577 577 miter->__offset += miter->consumed; 578 578 miter->__remaining -= miter->consumed; 579 579 580 - if (miter->__flags & SG_MITER_TO_SG) 580 + if ((miter->__flags & SG_MITER_TO_SG) && 581 + !PageSlab(miter->page)) 581 582 flush_kernel_dcache_page(miter->page); 582 583 583 584 if (miter->__flags & SG_MITER_ATOMIC) {
+48 -22
mm/huge_memory.c
··· 1278 1278 int do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma, 1279 1279 unsigned long addr, pmd_t pmd, pmd_t *pmdp) 1280 1280 { 1281 + struct anon_vma *anon_vma = NULL; 1281 1282 struct page *page; 1282 1283 unsigned long haddr = addr & HPAGE_PMD_MASK; 1284 + int page_nid = -1, this_nid = numa_node_id(); 1283 1285 int target_nid; 1284 - int current_nid = -1; 1285 - bool migrated; 1286 + bool page_locked; 1287 + bool migrated = false; 1286 1288 1287 1289 spin_lock(&mm->page_table_lock); 1288 1290 if (unlikely(!pmd_same(pmd, *pmdp))) 1289 1291 goto out_unlock; 1290 1292 1291 1293 page = pmd_page(pmd); 1292 - get_page(page); 1293 - current_nid = page_to_nid(page); 1294 + page_nid = page_to_nid(page); 1294 1295 count_vm_numa_event(NUMA_HINT_FAULTS); 1295 - if (current_nid == numa_node_id()) 1296 + if (page_nid == this_nid) 1296 1297 count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL); 1297 1298 1299 + /* 1300 + * Acquire the page lock to serialise THP migrations but avoid dropping 1301 + * page_table_lock if at all possible 1302 + */ 1303 + page_locked = trylock_page(page); 1298 1304 target_nid = mpol_misplaced(page, vma, haddr); 1299 1305 if (target_nid == -1) { 1300 - put_page(page); 1301 - goto clear_pmdnuma; 1306 + /* If the page was locked, there are no parallel migrations */ 1307 + if (page_locked) 1308 + goto clear_pmdnuma; 1309 + 1310 + /* 1311 + * Otherwise wait for potential migrations and retry. We do 1312 + * relock and check_same as the page may no longer be mapped. 1313 + * As the fault is being retried, do not account for it. 
1314 + */ 1315 + spin_unlock(&mm->page_table_lock); 1316 + wait_on_page_locked(page); 1317 + page_nid = -1; 1318 + goto out; 1302 1319 } 1303 1320 1304 - /* Acquire the page lock to serialise THP migrations */ 1321 + /* Page is misplaced, serialise migrations and parallel THP splits */ 1322 + get_page(page); 1305 1323 spin_unlock(&mm->page_table_lock); 1306 - lock_page(page); 1324 + if (!page_locked) 1325 + lock_page(page); 1326 + anon_vma = page_lock_anon_vma_read(page); 1307 1327 1308 1328 /* Confirm the PTE did not while locked */ 1309 1329 spin_lock(&mm->page_table_lock); 1310 1330 if (unlikely(!pmd_same(pmd, *pmdp))) { 1311 1331 unlock_page(page); 1312 1332 put_page(page); 1333 + page_nid = -1; 1313 1334 goto out_unlock; 1314 1335 } 1315 - spin_unlock(&mm->page_table_lock); 1316 1336 1317 - /* Migrate the THP to the requested node */ 1337 + /* 1338 + * Migrate the THP to the requested node, returns with page unlocked 1339 + * and pmd_numa cleared. 1340 + */ 1341 + spin_unlock(&mm->page_table_lock); 1318 1342 migrated = migrate_misplaced_transhuge_page(mm, vma, 1319 1343 pmdp, pmd, addr, page, target_nid); 1320 - if (!migrated) 1321 - goto check_same; 1344 + if (migrated) 1345 + page_nid = target_nid; 1322 1346 1323 - task_numa_fault(target_nid, HPAGE_PMD_NR, true); 1324 - return 0; 1325 - 1326 - check_same: 1327 - spin_lock(&mm->page_table_lock); 1328 - if (unlikely(!pmd_same(pmd, *pmdp))) 1329 - goto out_unlock; 1347 + goto out; 1330 1348 clear_pmdnuma: 1349 + BUG_ON(!PageLocked(page)); 1331 1350 pmd = pmd_mknonnuma(pmd); 1332 1351 set_pmd_at(mm, haddr, pmdp, pmd); 1333 1352 VM_BUG_ON(pmd_numa(*pmdp)); 1334 1353 update_mmu_cache_pmd(vma, addr, pmdp); 1354 + unlock_page(page); 1335 1355 out_unlock: 1336 1356 spin_unlock(&mm->page_table_lock); 1337 - if (current_nid != -1) 1338 - task_numa_fault(current_nid, HPAGE_PMD_NR, false); 1357 + 1358 + out: 1359 + if (anon_vma) 1360 + page_unlock_anon_vma_read(anon_vma); 1361 + 1362 + if (page_nid != -1) 1363 + task_numa_fault(page_nid, HPAGE_PMD_NR, migrated); 1364 + 1339 1365 return 0; 1340 1366 } 1341 1367
+2 -1
mm/list_lru.c
··· 81 81 * decrement nr_to_walk first so that we don't livelock if we 82 82 * get stuck on large numbesr of LRU_RETRY items 83 83 */ 84 - if (--(*nr_to_walk) == 0) 84 + if (!*nr_to_walk) 85 85 break; 86 + --*nr_to_walk; 86 87 87 88 ret = isolate(item, &nlru->lock, cb_arg); 88 89 switch (ret) {
+26 -31
mm/memcontrol.c
··· 54 54 #include <linux/page_cgroup.h> 55 55 #include <linux/cpu.h> 56 56 #include <linux/oom.h> 57 + #include <linux/lockdep.h> 57 58 #include "internal.h" 58 59 #include <net/sock.h> 59 60 #include <net/ip.h> ··· 2047 2046 return total; 2048 2047 } 2049 2048 2049 + #ifdef CONFIG_LOCKDEP 2050 + static struct lockdep_map memcg_oom_lock_dep_map = { 2051 + .name = "memcg_oom_lock", 2052 + }; 2053 + #endif 2054 + 2050 2055 static DEFINE_SPINLOCK(memcg_oom_lock); 2051 2056 2052 2057 /* ··· 2090 2083 } 2091 2084 iter->oom_lock = false; 2092 2085 } 2093 - } 2086 + } else 2087 + mutex_acquire(&memcg_oom_lock_dep_map, 0, 1, _RET_IP_); 2094 2088 2095 2089 spin_unlock(&memcg_oom_lock); 2096 2090 ··· 2103 2095 struct mem_cgroup *iter; 2104 2096 2105 2097 spin_lock(&memcg_oom_lock); 2098 + mutex_release(&memcg_oom_lock_dep_map, 1, _RET_IP_); 2106 2099 for_each_mem_cgroup_tree(iter, memcg) 2107 2100 iter->oom_lock = false; 2108 2101 spin_unlock(&memcg_oom_lock); ··· 2774 2765 *ptr = memcg; 2775 2766 return 0; 2776 2767 nomem: 2777 - *ptr = NULL; 2778 - if (gfp_mask & __GFP_NOFAIL) 2779 - return 0; 2780 - return -ENOMEM; 2768 + if (!(gfp_mask & __GFP_NOFAIL)) { 2769 + *ptr = NULL; 2770 + return -ENOMEM; 2771 + } 2781 2772 bypass: 2782 2773 *ptr = root_mem_cgroup; 2783 2774 return -EINTR; ··· 3782 3773 { 3783 3774 /* Update stat data for mem_cgroup */ 3784 3775 preempt_disable(); 3785 - WARN_ON_ONCE(from->stat->count[idx] < nr_pages); 3786 - __this_cpu_add(from->stat->count[idx], -nr_pages); 3776 + __this_cpu_sub(from->stat->count[idx], nr_pages); 3787 3777 __this_cpu_add(to->stat->count[idx], nr_pages); 3788 3778 preempt_enable(); 3789 3779 } ··· 4958 4950 } while (usage > 0); 4959 4951 } 4960 4952 4961 - /* 4962 - * This mainly exists for tests during the setting of set of use_hierarchy. 
4963 - * Since this is the very setting we are changing, the current hierarchy value 4964 - * is meaningless 4965 - */ 4966 - static inline bool __memcg_has_children(struct mem_cgroup *memcg) 4967 - { 4968 - struct cgroup_subsys_state *pos; 4969 - 4970 - /* bounce at first found */ 4971 - css_for_each_child(pos, &memcg->css) 4972 - return true; 4973 - return false; 4974 - } 4975 - 4976 - /* 4977 - * Must be called with memcg_create_mutex held, unless the cgroup is guaranteed 4978 - * to be already dead (as in mem_cgroup_force_empty, for instance). This is 4979 - * from mem_cgroup_count_children(), in the sense that we don't really care how 4980 - * many children we have; we only need to know if we have any. It also counts 4981 - * any memcg without hierarchy as infertile. 4982 - */ 4983 4953 static inline bool memcg_has_children(struct mem_cgroup *memcg) 4984 4954 { 4985 - return memcg->use_hierarchy && __memcg_has_children(memcg); 4955 + lockdep_assert_held(&memcg_create_mutex); 4956 + /* 4957 + * The lock does not prevent addition or deletion to the list 4958 + * of children, but it prevents a new child from being 4959 + * initialized based on this parent in css_online(), so it's 4960 + * enough to decide whether hierarchically inherited 4961 + * attributes can still be changed or not. 4962 + */ 4963 + return memcg->use_hierarchy && 4964 + !list_empty(&memcg->css.cgroup->children); 4986 4965 } 4987 4966 4988 4967 /* ··· 5049 5054 */ 5050 5055 if ((!parent_memcg || !parent_memcg->use_hierarchy) && 5051 5056 (val == 1 || val == 0)) { 5052 - if (!__memcg_has_children(memcg)) 5057 + if (list_empty(&memcg->css.cgroup->children)) 5053 5058 memcg->use_hierarchy = val; 5054 5059 else 5055 5060 retval = -EBUSY;
+21 -32
mm/memory.c
··· 3521 3521 } 3522 3522 3523 3523 int numa_migrate_prep(struct page *page, struct vm_area_struct *vma, 3524 - unsigned long addr, int current_nid) 3524 + unsigned long addr, int page_nid) 3525 3525 { 3526 3526 get_page(page); 3527 3527 3528 3528 count_vm_numa_event(NUMA_HINT_FAULTS); 3529 - if (current_nid == numa_node_id()) 3529 + if (page_nid == numa_node_id()) 3530 3530 count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL); 3531 3531 3532 3532 return mpol_misplaced(page, vma, addr); ··· 3537 3537 { 3538 3538 struct page *page = NULL; 3539 3539 spinlock_t *ptl; 3540 - int current_nid = -1; 3540 + int page_nid = -1; 3541 3541 int target_nid; 3542 3542 bool migrated = false; 3543 3543 ··· 3567 3567 return 0; 3568 3568 } 3569 3569 3570 - current_nid = page_to_nid(page); 3571 - target_nid = numa_migrate_prep(page, vma, addr, current_nid); 3570 + page_nid = page_to_nid(page); 3571 + target_nid = numa_migrate_prep(page, vma, addr, page_nid); 3572 3572 pte_unmap_unlock(ptep, ptl); 3573 3573 if (target_nid == -1) { 3574 - /* 3575 - * Account for the fault against the current node if it not 3576 - * being replaced regardless of where the page is located. 
3577 - */ 3578 - current_nid = numa_node_id(); 3579 3574 put_page(page); 3580 3575 goto out; 3581 3576 } ··· 3578 3583 /* Migrate to the requested node */ 3579 3584 migrated = migrate_misplaced_page(page, target_nid); 3580 3585 if (migrated) 3581 - current_nid = target_nid; 3586 + page_nid = target_nid; 3582 3587 3583 3588 out: 3584 - if (current_nid != -1) 3585 - task_numa_fault(current_nid, 1, migrated); 3589 + if (page_nid != -1) 3590 + task_numa_fault(page_nid, 1, migrated); 3586 3591 return 0; 3587 3592 } 3588 3593 ··· 3597 3602 unsigned long offset; 3598 3603 spinlock_t *ptl; 3599 3604 bool numa = false; 3600 - int local_nid = numa_node_id(); 3601 3605 3602 3606 spin_lock(&mm->page_table_lock); 3603 3607 pmd = *pmdp; ··· 3619 3625 for (addr = _addr + offset; addr < _addr + PMD_SIZE; pte++, addr += PAGE_SIZE) { 3620 3626 pte_t pteval = *pte; 3621 3627 struct page *page; 3622 - int curr_nid = local_nid; 3628 + int page_nid = -1; 3623 3629 int target_nid; 3624 - bool migrated; 3630 + bool migrated = false; 3631 + 3625 3632 if (!pte_present(pteval)) 3626 3633 continue; 3627 3634 if (!pte_numa(pteval)) ··· 3644 3649 if (unlikely(page_mapcount(page) != 1)) 3645 3650 continue; 3646 3651 3647 - /* 3648 - * Note that the NUMA fault is later accounted to either 3649 - * the node that is currently running or where the page is 3650 - * migrated to. 
3651 - */ 3652 - curr_nid = local_nid; 3653 - target_nid = numa_migrate_prep(page, vma, addr, 3654 - page_to_nid(page)); 3655 - if (target_nid == -1) { 3652 + page_nid = page_to_nid(page); 3653 + target_nid = numa_migrate_prep(page, vma, addr, page_nid); 3654 + pte_unmap_unlock(pte, ptl); 3655 + if (target_nid != -1) { 3656 + migrated = migrate_misplaced_page(page, target_nid); 3657 + if (migrated) 3658 + page_nid = target_nid; 3659 + } else { 3656 3660 put_page(page); 3657 - continue; 3658 3661 } 3659 3662 3660 - /* Migrate to the requested node */ 3661 - pte_unmap_unlock(pte, ptl); 3662 - migrated = migrate_misplaced_page(page, target_nid); 3663 - if (migrated) 3664 - curr_nid = target_nid; 3665 - task_numa_fault(curr_nid, 1, migrated); 3663 + if (page_nid != -1) 3664 + task_numa_fault(page_nid, 1, migrated); 3666 3665 3667 3666 pte = pte_offset_map_lock(mm, pmdp, addr, &ptl); 3668 3667 }
+11 -8
mm/migrate.c
··· 1715 1715 unlock_page(new_page); 1716 1716 put_page(new_page); /* Free it */ 1717 1717 1718 - unlock_page(page); 1718 + /* Retake the callers reference and putback on LRU */ 1719 + get_page(page); 1719 1720 putback_lru_page(page); 1720 - 1721 - count_vm_events(PGMIGRATE_FAIL, HPAGE_PMD_NR); 1722 - isolated = 0; 1723 - goto out; 1721 + mod_zone_page_state(page_zone(page), 1722 + NR_ISOLATED_ANON + page_lru, -HPAGE_PMD_NR); 1723 + goto out_fail; 1724 1724 } 1725 1725 1726 1726 /* ··· 1737 1737 entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); 1738 1738 entry = pmd_mkhuge(entry); 1739 1739 1740 - page_add_new_anon_rmap(new_page, vma, haddr); 1741 - 1740 + pmdp_clear_flush(vma, haddr, pmd); 1742 1741 set_pmd_at(mm, haddr, pmd, entry); 1742 + page_add_new_anon_rmap(new_page, vma, haddr); 1743 1743 update_mmu_cache_pmd(vma, address, &entry); 1744 1744 page_remove_rmap(page); 1745 1745 /* ··· 1758 1758 count_vm_events(PGMIGRATE_SUCCESS, HPAGE_PMD_NR); 1759 1759 count_vm_numa_events(NUMA_PAGE_MIGRATE, HPAGE_PMD_NR); 1760 1760 1761 - out: 1762 1761 mod_zone_page_state(page_zone(page), 1763 1762 NR_ISOLATED_ANON + page_lru, 1764 1763 -HPAGE_PMD_NR); ··· 1766 1767 out_fail: 1767 1768 count_vm_events(PGMIGRATE_FAIL, HPAGE_PMD_NR); 1768 1769 out_dropref: 1770 + entry = pmd_mknonnuma(entry); 1771 + set_pmd_at(mm, haddr, pmd, entry); 1772 + update_mmu_cache_pmd(vma, address, &entry); 1773 + 1769 1774 unlock_page(page); 1770 1775 put_page(page); 1771 1776 return 0;
+1 -1
mm/mprotect.c
··· 148 148 split_huge_page_pmd(vma, addr, pmd); 149 149 else if (change_huge_pmd(vma, pmd, addr, newprot, 150 150 prot_numa)) { 151 - pages += HPAGE_PMD_NR; 151 + pages++; 152 152 continue; 153 153 } 154 154 /* fall through */
+1 -1
mm/pagewalk.c
··· 242 242 if (err) 243 243 break; 244 244 pgd++; 245 - } while (addr = next, addr != end); 245 + } while (addr = next, addr < end); 246 246 247 247 return err; 248 248 }
+1 -1
net/bridge/br_device.c
··· 64 64 br_flood_deliver(br, skb, false); 65 65 goto out; 66 66 } 67 - if (br_multicast_rcv(br, NULL, skb)) { 67 + if (br_multicast_rcv(br, NULL, skb, vid)) { 68 68 kfree_skb(skb); 69 69 goto out; 70 70 }
+1 -1
net/bridge/br_input.c
··· 80 80 br_fdb_update(br, p, eth_hdr(skb)->h_source, vid); 81 81 82 82 if (!is_broadcast_ether_addr(dest) && is_multicast_ether_addr(dest) && 83 - br_multicast_rcv(br, p, skb)) 83 + br_multicast_rcv(br, p, skb, vid)) 84 84 goto drop; 85 85 86 86 if (p->state == BR_STATE_LEARNING)
+19 -25
net/bridge/br_multicast.c
··· 947 947 948 948 static int br_ip4_multicast_igmp3_report(struct net_bridge *br, 949 949 struct net_bridge_port *port, 950 - struct sk_buff *skb) 950 + struct sk_buff *skb, 951 + u16 vid) 951 952 { 952 953 struct igmpv3_report *ih; 953 954 struct igmpv3_grec *grec; ··· 958 957 int type; 959 958 int err = 0; 960 959 __be32 group; 961 - u16 vid = 0; 962 960 963 961 if (!pskb_may_pull(skb, sizeof(*ih))) 964 962 return -EINVAL; 965 963 966 - br_vlan_get_tag(skb, &vid); 967 964 ih = igmpv3_report_hdr(skb); 968 965 num = ntohs(ih->ngrec); 969 966 len = sizeof(*ih); ··· 1004 1005 #if IS_ENABLED(CONFIG_IPV6) 1005 1006 static int br_ip6_multicast_mld2_report(struct net_bridge *br, 1006 1007 struct net_bridge_port *port, 1007 - struct sk_buff *skb) 1008 + struct sk_buff *skb, 1009 + u16 vid) 1008 1010 { 1009 1011 struct icmp6hdr *icmp6h; 1010 1012 struct mld2_grec *grec; ··· 1013 1013 int len; 1014 1014 int num; 1015 1015 int err = 0; 1016 - u16 vid = 0; 1017 1016 1018 1017 if (!pskb_may_pull(skb, sizeof(*icmp6h))) 1019 1018 return -EINVAL; 1020 1019 1021 - br_vlan_get_tag(skb, &vid); 1022 1020 icmp6h = icmp6_hdr(skb); 1023 1021 num = ntohs(icmp6h->icmp6_dataun.un_data16[1]); 1024 1022 len = sizeof(*icmp6h); ··· 1139 1141 1140 1142 static int br_ip4_multicast_query(struct net_bridge *br, 1141 1143 struct net_bridge_port *port, 1142 - struct sk_buff *skb) 1144 + struct sk_buff *skb, 1145 + u16 vid) 1143 1146 { 1144 1147 const struct iphdr *iph = ip_hdr(skb); 1145 1148 struct igmphdr *ih = igmp_hdr(skb); ··· 1152 1153 unsigned long now = jiffies; 1153 1154 __be32 group; 1154 1155 int err = 0; 1155 - u16 vid = 0; 1156 1156 1157 1157 spin_lock(&br->multicast_lock); 1158 1158 if (!netif_running(br->dev) || ··· 1187 1189 if (!group) 1188 1190 goto out; 1189 1191 1190 - br_vlan_get_tag(skb, &vid); 1191 1192 mp = br_mdb_ip4_get(mlock_dereference(br->mdb, br), group, vid); 1192 1193 if (!mp) 1193 1194 goto out; ··· 1216 1219 #if IS_ENABLED(CONFIG_IPV6) 1217 1220 static int br_ip6_multicast_query(struct net_bridge *br, 1218 1221 struct net_bridge_port *port, 1219 - struct sk_buff *skb) 1222 + struct sk_buff *skb, 1223 + u16 vid) 1220 1224 { 1221 1225 const struct ipv6hdr *ip6h = ipv6_hdr(skb); 1222 1226 struct mld_msg *mld; ··· 1229 1231 unsigned long now = jiffies; 1230 1232 const struct in6_addr *group = NULL; 1231 1233 int err = 0; 1232 - u16 vid = 0; 1233 1234 1234 1235 spin_lock(&br->multicast_lock); 1235 1236 if (!netif_running(br->dev) || ··· 1262 1265 if (!group) 1263 1266 goto out; 1264 1267 1265 - br_vlan_get_tag(skb, &vid); 1266 1268 mp = br_mdb_ip6_get(mlock_dereference(br->mdb, br), group, vid); 1267 1269 if (!mp) 1268 1270 goto out; ··· 1435 1439 1436 1440 static int br_multicast_ipv4_rcv(struct net_bridge *br, 1437 1441 struct net_bridge_port *port, 1438 - struct sk_buff *skb) 1442 + struct sk_buff *skb, 1443 + u16 vid) 1439 1444 { 1440 1445 struct sk_buff *skb2 = skb; 1441 1446 const struct iphdr *iph; ··· 1444 1447 unsigned int len; 1445 1448 unsigned int offset; 1446 1449 int err; 1447 - u16 vid = 0; 1448 1450 1449 1451 /* We treat OOM as packet loss for now. */ 1450 1452 if (!pskb_may_pull(skb, sizeof(*iph))) ··· 1504 1508 1505 1509 err = 0; 1506 1510 1507 - br_vlan_get_tag(skb2, &vid); 1508 1511 BR_INPUT_SKB_CB(skb)->igmp = 1; 1509 1512 ih = igmp_hdr(skb2); ··· 1514 1519 err = br_ip4_multicast_add_group(br, port, ih->group, vid); 1515 1520 break; 1516 1521 case IGMPV3_HOST_MEMBERSHIP_REPORT: 1517 - err = br_ip4_multicast_igmp3_report(br, port, skb2); 1522 + err = br_ip4_multicast_igmp3_report(br, port, skb2, vid); 1518 1523 break; 1519 1524 case IGMP_HOST_MEMBERSHIP_QUERY: 1520 - err = br_ip4_multicast_query(br, port, skb2); 1525 + err = br_ip4_multicast_query(br, port, skb2, vid); 1521 1526 break; 1522 1527 case IGMP_HOST_LEAVE_MESSAGE: 1523 1528 br_ip4_multicast_leave_group(br, port, ih->group, vid); ··· 1535 1540 #if IS_ENABLED(CONFIG_IPV6) 1536 1541 static int br_multicast_ipv6_rcv(struct net_bridge *br, 1537 1542 struct net_bridge_port *port, 1538 - struct sk_buff *skb) 1543 + struct sk_buff *skb, 1544 + u16 vid) 1539 1545 { 1540 1546 struct sk_buff *skb2; 1541 1547 const struct ipv6hdr *ip6h; ··· 1546 1550 unsigned int len; 1547 1551 int offset; 1548 1552 int err; 1549 - u16 vid = 0; 1550 1553 1551 1554 if (!pskb_may_pull(skb, sizeof(*ip6h))) 1552 1555 return -EINVAL; ··· 1635 1640 1636 1641 err = 0; 1637 1642 1638 - br_vlan_get_tag(skb, &vid); 1639 1643 BR_INPUT_SKB_CB(skb)->igmp = 1; 1640 1644 1641 1645 switch (icmp6_type) { ··· 1651 1657 break; 1652 1658 } 1653 1659 case ICMPV6_MLD2_REPORT: 1654 - err = br_ip6_multicast_mld2_report(br, port, skb2); 1660 + err = br_ip6_multicast_mld2_report(br, port, skb2, vid); 1655 1661 break; 1656 1662 case ICMPV6_MGM_QUERY: 1657 - err = br_ip6_multicast_query(br, port, skb2); 1663 + err = br_ip6_multicast_query(br, port, skb2, vid); 1658 1664 break; 1659 1665 case ICMPV6_MGM_REDUCTION: 1660 1666 { ··· 1675 1681 #endif 1676 1682 1677 1683 int br_multicast_rcv(struct net_bridge *br, struct net_bridge_port *port, 1678 - struct sk_buff *skb) 1684 + struct sk_buff *skb, u16 vid) 1679 1685 { 1680 1686 BR_INPUT_SKB_CB(skb)->igmp = 0; 1681 1687 BR_INPUT_SKB_CB(skb)->mrouters_only = 0; ··· 1685 1691 1686 1692 switch (skb->protocol) { 1687 1693 case htons(ETH_P_IP): 1688 - return br_multicast_ipv4_rcv(br, port, skb); 1694 + return br_multicast_ipv4_rcv(br, port, skb, vid); 1689 1695 #if IS_ENABLED(CONFIG_IPV6) 1690 1696 case htons(ETH_P_IPV6): 1691 - return br_multicast_ipv6_rcv(br, port, skb); 1697 + return br_multicast_ipv6_rcv(br, port, skb, vid); 1692 1698 #endif 1693 1699 }
+3 -2
net/bridge/br_private.h
··· 435 435 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING 436 436 extern unsigned int br_mdb_rehash_seq; 437 437 int br_multicast_rcv(struct net_bridge *br, struct net_bridge_port *port, 438 - struct sk_buff *skb); 438 + struct sk_buff *skb, u16 vid); 439 439 struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge *br, 440 440 struct sk_buff *skb, u16 vid); 441 441 void br_multicast_add_port(struct net_bridge_port *port); ··· 504 504 #else 505 505 static inline int br_multicast_rcv(struct net_bridge *br, 506 506 struct net_bridge_port *port, 507 - struct sk_buff *skb) 507 + struct sk_buff *skb, 508 + u16 vid) 508 509 { 509 510 return 0; 510 511 }
+3 -6
net/bridge/netfilter/ebt_ulog.c
··· 181 181 ub->qlen++; 182 182 183 183 pm = nlmsg_data(nlh); 184 + memset(pm, 0, sizeof(*pm)); 184 185 185 186 /* Fill in the ulog data */ 186 187 pm->version = EBT_ULOG_VERSION; ··· 194 193 pm->hook = hooknr; 195 194 if (uloginfo->prefix != NULL) 196 195 strcpy(pm->prefix, uloginfo->prefix); 197 - else 198 - *(pm->prefix) = '\0'; 199 196 200 197 if (in) { 201 198 strcpy(pm->physindev, in->name); ··· 203 204 strcpy(pm->indev, br_port_get_rcu(in)->br->dev->name); 204 205 else 205 206 strcpy(pm->indev, in->name); 206 - } else 207 - pm->indev[0] = pm->physindev[0] = '\0'; 207 + } 208 208 209 209 if (out) { 210 210 /* If out exists, then out is a bridge port */ 211 211 strcpy(pm->physoutdev, out->name); 212 212 /* rcu_read_lock()ed by nf_hook_slow */ 213 213 strcpy(pm->outdev, br_port_get_rcu(out)->br->dev->name); 214 - } else 215 - pm->outdev[0] = pm->physoutdev[0] = '\0'; 214 + } 216 215 217 216 if (skb_copy_bits(skb, -ETH_HLEN, pm->data, copy_len) < 0) 218 217 BUG();
+1 -1
net/core/flow_dissector.c
··· 66 66 struct iphdr _iph; 67 67 ip: 68 68 iph = skb_header_pointer(skb, nhoff, sizeof(_iph), &_iph); 69 - if (!iph) 69 + if (!iph || iph->ihl < 5) 70 70 return false; 71 71 72 72 if (ip_is_fragment(iph))
+18 -13
net/core/netpoll.c
··· 636 636 637 637 netpoll_send_skb(np, send_skb); 638 638 639 - /* If there are several rx_hooks for the same address, 640 - we're fine by sending a single reply */ 639 + /* If there are several rx_skb_hooks for the same 640 + * address we're fine by sending a single reply 641 + */ 641 642 break; 642 643 } 643 644 spin_unlock_irqrestore(&npinfo->rx_lock, flags); ··· 720 719 721 720 netpoll_send_skb(np, send_skb); 722 721 723 - /* If there are several rx_hooks for the same address, 724 - we're fine by sending a single reply */ 722 + /* If there are several rx_skb_hooks for the same 723 + * address, we're fine by sending a single reply 724 + */ 725 725 break; 726 726 } 727 727 spin_unlock_irqrestore(&npinfo->rx_lock, flags); ··· 758 756 759 757 int __netpoll_rx(struct sk_buff *skb, struct netpoll_info *npinfo) 760 758 { 761 - int proto, len, ulen; 762 - int hits = 0; 759 + int proto, len, ulen, data_len; 760 + int hits = 0, offset; 763 761 const struct iphdr *iph; 764 762 struct udphdr *uh; 765 763 struct netpoll *np, *tmp; 764 + uint16_t source; 766 765 767 766 if (list_empty(&npinfo->rx_np)) 768 767 goto out; ··· 823 820 824 821 len -= iph->ihl*4; 825 822 uh = (struct udphdr *)(((char *)iph) + iph->ihl*4); 823 + offset = (unsigned char *)(uh + 1) - skb->data; 826 824 ulen = ntohs(uh->len); 825 + data_len = skb->len - offset; 826 + source = ntohs(uh->source); 827 827 828 828 if (ulen != len) 829 829 goto out; ··· 840 834 if (np->local_port && np->local_port != ntohs(uh->dest)) 841 835 continue; 842 836 843 - np->rx_hook(np, ntohs(uh->source), 844 - (char *)(uh+1), 845 - ulen - sizeof(struct udphdr)); 837 + np->rx_skb_hook(np, source, skb, offset, data_len); 846 838 hits++; 847 839 } 848 840 } else { ··· 863 859 if (!pskb_may_pull(skb, sizeof(struct udphdr))) 864 860 goto out; 865 861 uh = udp_hdr(skb); 862 + offset = (unsigned char *)(uh + 1) - skb->data; 866 863 ulen = ntohs(uh->len); 864 + data_len = skb->len - offset; 865 + source = ntohs(uh->source); 867 866 if (ulen != skb->len) 868 867 goto out; 869 868 if (udp6_csum_init(skb, uh, IPPROTO_UDP)) ··· 879 872 if (np->local_port && np->local_port != ntohs(uh->dest)) 880 873 continue; 881 874 882 - np->rx_hook(np, ntohs(uh->source), 883 - (char *)(uh+1), 884 - ulen - sizeof(struct udphdr)); 875 + np->rx_skb_hook(np, source, skb, offset, data_len); 885 876 hits++; 886 877 } 887 878 #endif ··· 1067 1062 1068 1063 npinfo->netpoll = np; 1069 1064 1070 - if (np->rx_hook) { 1065 + if (np->rx_skb_hook) { 1071 1066 spin_lock_irqsave(&npinfo->rx_lock, flags); 1072 1067 npinfo->rx_flags |= NETPOLL_RX_ENABLED; 1073 1068 list_add_tail(&np->rx, &npinfo->rx_np);
+5
net/ipv4/netfilter/arp_tables.c
··· 271 271 local_bh_disable(); 272 272 addend = xt_write_recseq_begin(); 273 273 private = table->private; 274 + /* 275 + * Ensure we load private-> members after we've fetched the base 276 + * pointer. 277 + */ 278 + smp_read_barrier_depends(); 274 279 table_base = private->entries[smp_processor_id()]; 275 280 276 281 e = get_entry(table_base, private->hook_entry[hook]);
+5
net/ipv4/netfilter/ip_tables.c
··· 327 327 addend = xt_write_recseq_begin(); 328 328 private = table->private; 329 329 cpu = smp_processor_id(); 330 + /* 331 + * Ensure we load private-> members after we've fetched the base 332 + * pointer. 333 + */ 334 + smp_read_barrier_depends(); 330 335 table_base = private->entries[cpu]; 331 336 jumpstack = (struct ipt_entry **)private->jumpstack[cpu]; 332 337 stackptr = per_cpu_ptr(private->stackptr, cpu);
+1 -6
net/ipv4/netfilter/ipt_ULOG.c
··· 220 220 ub->qlen++; 221 221 222 222 pm = nlmsg_data(nlh); 223 + memset(pm, 0, sizeof(*pm)); 223 224 224 225 /* We might not have a timestamp, get one */ 225 226 if (skb->tstamp.tv64 == 0) ··· 239 238 } 240 239 else if (loginfo->prefix[0] != '\0') 241 240 strncpy(pm->prefix, loginfo->prefix, sizeof(pm->prefix)); 242 - else 243 - *(pm->prefix) = '\0'; 244 241 245 242 if (in && in->hard_header_len > 0 && 246 243 skb->mac_header != skb->network_header && ··· 250 251 251 252 if (in) 252 253 strncpy(pm->indev_name, in->name, sizeof(pm->indev_name)); 253 - else 254 - pm->indev_name[0] = '\0'; 255 254 256 255 if (out) 257 256 strncpy(pm->outdev_name, out->name, sizeof(pm->outdev_name)); 258 - else 259 - pm->outdev_name[0] = '\0'; 260 257 261 258 /* copy_len <= skb->len, so can't fail. */ 262 259 if (skb_copy_bits(skb, 0, pm->payload, copy_len) < 0)
+25 -9
net/ipv4/tcp_input.c
··· 2903 2903 * left edge of the send window. 2904 2904 * See draft-ietf-tcplw-high-performance-00, section 3.3. 2905 2905 */ 2906 - if (seq_rtt < 0 && tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr) 2906 + if (seq_rtt < 0 && tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr && 2907 + flag & FLAG_ACKED) 2907 2908 seq_rtt = tcp_time_stamp - tp->rx_opt.rcv_tsecr; 2908 2909 2909 2910 if (seq_rtt < 0) ··· 2919 2918 } 2920 2919 2921 2920 /* Compute time elapsed between (last) SYNACK and the ACK completing 3WHS. */ 2922 - static void tcp_synack_rtt_meas(struct sock *sk, struct request_sock *req) 2921 + static void tcp_synack_rtt_meas(struct sock *sk, const u32 synack_stamp) 2923 2922 { 2924 2923 struct tcp_sock *tp = tcp_sk(sk); 2925 2924 s32 seq_rtt = -1; 2926 2925 2927 - if (tp->lsndtime && !tp->total_retrans) 2928 - seq_rtt = tcp_time_stamp - tp->lsndtime; 2929 - tcp_ack_update_rtt(sk, FLAG_SYN_ACKED, seq_rtt, -1); 2926 + if (synack_stamp && !tp->total_retrans) 2927 + seq_rtt = tcp_time_stamp - synack_stamp; 2928 + 2929 + /* If the ACK acks both the SYNACK and the (Fast Open'd) data packets 2930 + * sent in SYN_RECV, SYNACK RTT is the smooth RTT computed in tcp_ack() 2931 + */ 2932 + if (!tp->srtt) 2933 + tcp_ack_update_rtt(sk, FLAG_SYN_ACKED, seq_rtt, -1); 2930 2934 } 2931 2935 2932 2936 static void tcp_cong_avoid(struct sock *sk, u32 ack, u32 in_flight) ··· 3034 3028 s32 seq_rtt = -1; 3035 3029 s32 ca_seq_rtt = -1; 3036 3030 ktime_t last_ackt = net_invalid_timestamp(); 3031 + bool rtt_update; 3037 3032 3038 3033 while ((skb = tcp_write_queue_head(sk)) && skb != tcp_send_head(sk)) { 3039 3034 struct tcp_skb_cb *scb = TCP_SKB_CB(skb); ··· 3111 3104 if (skb && (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED)) 3112 3105 flag |= FLAG_SACK_RENEGING; 3113 3106 3114 - if (tcp_ack_update_rtt(sk, flag, seq_rtt, sack_rtt) || 3115 - (flag & FLAG_ACKED)) 3116 - tcp_rearm_rto(sk); 3107 + rtt_update = tcp_ack_update_rtt(sk, flag, seq_rtt, sack_rtt); 3117 3108 3118 3109 if (flag & FLAG_ACKED) { 3119 3110 const struct tcp_congestion_ops *ca_ops 3120 3111 = inet_csk(sk)->icsk_ca_ops; 3121 3112 3113 + tcp_rearm_rto(sk); 3122 3114 if (unlikely(icsk->icsk_mtup.probe_size && 3123 3115 !after(tp->mtu_probe.probe_seq_end, tp->snd_una))) { 3124 3116 tcp_mtup_probe_success(sk); ··· 3156 3150 3157 3151 ca_ops->pkts_acked(sk, pkts_acked, rtt_us); 3158 3152 } 3153 + } else if (skb && rtt_update && sack_rtt >= 0 && 3154 + sack_rtt > (s32)(now - TCP_SKB_CB(skb)->when)) { 3155 + /* Do not re-arm RTO if the sack RTT is measured from data sent 3156 + * after when the head was last (re)transmitted. Otherwise the 3157 + * timeout may continue to extend in loss recovery. 3158 + */ 3159 + tcp_rearm_rto(sk); 3159 3160 } 3160 3161 3161 3162 #if FASTRETRANS_DEBUG > 0 ··· 5639 5626 struct request_sock *req; 5640 5627 int queued = 0; 5641 5628 bool acceptable; 5629 + u32 synack_stamp; 5642 5630 5643 5631 tp->rx_opt.saw_tstamp = 0; 5644 5632 ··· 5722 5708 * so release it. 5723 5709 */ 5724 5710 if (req) { 5711 + synack_stamp = tcp_rsk(req)->snt_synack; 5725 5712 tp->total_retrans = req->num_retrans; 5726 5713 reqsk_fastopen_remove(sk, req, false); 5727 5714 } else { 5715 + synack_stamp = tp->lsndtime; 5728 5716 /* Make sure socket is routed, for correct metrics. */ 5729 5717 icsk->icsk_af_ops->rebuild_header(sk); 5730 5718 tcp_init_congestion_control(sk); ··· 5749 5733 tp->snd_una = TCP_SKB_CB(skb)->ack_seq; 5750 5734 tp->snd_wnd = ntohs(th->window) << tp->rx_opt.snd_wscale; 5751 5735 tcp_init_wl(tp, TCP_SKB_CB(skb)->seq); 5752 - tcp_synack_rtt_meas(sk, req); 5736 + tcp_synack_rtt_meas(sk, synack_stamp); 5753 5737 5754 5738 if (tp->rx_opt.tstamp_ok) 5755 5739 tp->advmss -= TCPOLEN_TSTAMP_ALIGNED;
+5 -8
net/ipv4/tcp_offload.c
··· 18 18 netdev_features_t features) 19 19 { 20 20 struct sk_buff *segs = ERR_PTR(-EINVAL); 21 + unsigned int sum_truesize = 0; 21 22 struct tcphdr *th; 22 23 unsigned int thlen; 23 24 unsigned int seq; ··· 105 104 if (copy_destructor) { 106 105 skb->destructor = gso_skb->destructor; 107 106 skb->sk = gso_skb->sk; 108 - /* {tcp|sock}_wfree() use exact truesize accounting : 109 - * sum(skb->truesize) MUST be exactly be gso_skb->truesize 110 - * So we account mss bytes of 'true size' for each segment. 111 - * The last segment will contain the remaining. 112 - */ 113 - skb->truesize = mss; 114 - gso_skb->truesize -= mss; 107 + sum_truesize += skb->truesize; 115 108 } 116 109 skb = skb->next; 117 110 th = tcp_hdr(skb); ··· 122 127 if (copy_destructor) { 123 128 swap(gso_skb->sk, skb->sk); 124 129 swap(gso_skb->destructor, skb->destructor); 125 - swap(gso_skb->truesize, skb->truesize); 130 + sum_truesize += skb->truesize; 131 + atomic_add(sum_truesize - gso_skb->truesize, 132 + &skb->sk->sk_wmem_alloc); 126 133 } 127 134 128 135 delta = htonl(oldlen + (skb_tail_pointer(skb) -
+6 -2
net/ipv4/xfrm4_policy.c
··· 104 104 const struct iphdr *iph = ip_hdr(skb); 105 105 u8 *xprth = skb_network_header(skb) + iph->ihl * 4; 106 106 struct flowi4 *fl4 = &fl->u.ip4; 107 + int oif = 0; 108 + 109 + if (skb_dst(skb)) 110 + oif = skb_dst(skb)->dev->ifindex; 107 111 108 112 memset(fl4, 0, sizeof(struct flowi4)); 109 113 fl4->flowi4_mark = skb->mark; 110 - fl4->flowi4_oif = skb_dst(skb)->dev->ifindex; 114 + fl4->flowi4_oif = reverse ? skb->skb_iif : oif; 111 115 112 116 if (!ip_is_fragment(iph)) { 113 117 switch (iph->protocol) { ··· 240 236 .destroy = xfrm4_dst_destroy, 241 237 .ifdown = xfrm4_dst_ifdown, 242 238 .local_out = __ip_local_out, 243 - .gc_thresh = 1024, 239 + .gc_thresh = 32768, 244 240 }; 245 241 246 242 static struct xfrm_policy_afinfo xfrm4_policy_afinfo = {
+5
net/ipv6/netfilter/ip6_tables.c
··· 349 349 local_bh_disable(); 350 350 addend = xt_write_recseq_begin(); 351 351 private = table->private; 352 + /* 353 + * Ensure we load private-> members after we've fetched the base 354 + * pointer. 355 + */ 356 + smp_read_barrier_depends(); 352 357 cpu = smp_processor_id(); 353 358 table_base = private->entries[cpu]; 354 359 jumpstack = (struct ip6t_entry **)private->jumpstack[cpu];
+6 -3
net/ipv6/route.c
··· 1087 1087 if (rt->rt6i_genid != rt_genid_ipv6(dev_net(rt->dst.dev))) 1088 1088 return NULL; 1089 1089 1090 - if (rt->rt6i_node && (rt->rt6i_node->fn_sernum == cookie)) 1091 - return dst; 1090 + if (!rt->rt6i_node || (rt->rt6i_node->fn_sernum != cookie)) 1091 + return NULL; 1092 1092 1093 - return NULL; 1093 + if (rt6_check_expired(rt)) 1094 + return NULL; 1095 + 1096 + return dst; 1094 1097 } 1095 1098 1096 1099 static struct dst_entry *ip6_negative_advice(struct dst_entry *dst)
+6 -2
net/ipv6/xfrm6_policy.c
··· 135 135 struct ipv6_opt_hdr *exthdr; 136 136 const unsigned char *nh = skb_network_header(skb); 137 137 u8 nexthdr = nh[IP6CB(skb)->nhoff]; 138 + int oif = 0; 139 + 140 + if (skb_dst(skb)) 141 + oif = skb_dst(skb)->dev->ifindex; 138 142 139 143 memset(fl6, 0, sizeof(struct flowi6)); 140 144 fl6->flowi6_mark = skb->mark; 141 - fl6->flowi6_oif = skb_dst(skb)->dev->ifindex; 145 + fl6->flowi6_oif = reverse ? skb->skb_iif : oif; 142 146 143 147 fl6->daddr = reverse ? hdr->saddr : hdr->daddr; 144 148 fl6->saddr = reverse ? hdr->daddr : hdr->saddr; ··· 289 285 .destroy = xfrm6_dst_destroy, 290 286 .ifdown = xfrm6_dst_ifdown, 291 287 .local_out = __ip6_local_out, 292 - .gc_thresh = 1024, 288 + .gc_thresh = 32768, 293 289 }; 294 290 295 291 static struct xfrm_policy_afinfo xfrm6_policy_afinfo = {
+6 -1
net/netfilter/x_tables.c
··· 845 845 return NULL; 846 846 } 847 847 848 - table->private = newinfo; 849 848 newinfo->initial_entries = private->initial_entries; 849 + /* 850 + * Ensure contents of newinfo are visible before assigning to 851 + * private. 852 + */ 853 + smp_wmb(); 854 + table->private = newinfo; 850 855 851 856 /* 852 857 * Even though table entries have now been swapped, other CPU's
+6 -1
net/netfilter/xt_NFQUEUE.c
··· 147 147 { 148 148 const struct xt_NFQ_info_v3 *info = par->targinfo; 149 149 u32 queue = info->queuenum; 150 + int ret; 150 151 151 152 if (info->queues_total > 1) { 152 153 if (info->flags & NFQ_FLAG_CPU_FANOUT) { ··· 158 157 queue = nfqueue_hash(skb, par); 159 158 } 160 159 161 - return NF_QUEUE_NR(queue); 160 + ret = NF_QUEUE_NR(queue); 161 + if (info->flags & NFQ_FLAG_BYPASS) 162 + ret |= NF_VERDICT_FLAG_QUEUE_BYPASS; 163 + 164 + return ret; 162 165 } 163 166 164 167 static struct xt_target nfqueue_tg_reg[] __read_mostly = {
+5 -2
net/openvswitch/dp_notify.c
··· 65 65 continue; 66 66 67 67 netdev_vport = netdev_vport_priv(vport); 68 - if (netdev_vport->dev->reg_state == NETREG_UNREGISTERED || 69 - netdev_vport->dev->reg_state == NETREG_UNREGISTERING) 68 + if (!(netdev_vport->dev->priv_flags & IFF_OVS_DATAPATH)) 70 69 dp_detach_port_notify(vport); 71 70 } 72 71 } ··· 87 88 return NOTIFY_DONE; 88 89 89 90 if (event == NETDEV_UNREGISTER) { 91 + /* upper_dev_unlink and decrement promisc immediately */ 92 + ovs_netdev_detach_dev(vport); 93 + 94 + /* schedule vport destroy, dev_put and genl notification */ 90 95 ovs_net = net_generic(dev_net(dev), ovs_net_id); 91 96 queue_work(system_wq, &ovs_net->dp_notify_work); 92 97 }
+14 -4
net/openvswitch/vport-netdev.c
··· 150 150 ovs_vport_free(vport_from_priv(netdev_vport)); 151 151 } 152 152 153 + void ovs_netdev_detach_dev(struct vport *vport) 154 + { 155 + struct netdev_vport *netdev_vport = netdev_vport_priv(vport); 156 + 157 + ASSERT_RTNL(); 158 + netdev_vport->dev->priv_flags &= ~IFF_OVS_DATAPATH; 159 + netdev_rx_handler_unregister(netdev_vport->dev); 160 + netdev_upper_dev_unlink(netdev_vport->dev, 161 + netdev_master_upper_dev_get(netdev_vport->dev)); 162 + dev_set_promiscuity(netdev_vport->dev, -1); 163 + } 164 + 153 165 static void netdev_destroy(struct vport *vport) 154 166 { 155 167 struct netdev_vport *netdev_vport = netdev_vport_priv(vport); 156 168 157 169 rtnl_lock(); 158 - netdev_vport->dev->priv_flags &= ~IFF_OVS_DATAPATH; 159 - netdev_rx_handler_unregister(netdev_vport->dev); 160 - netdev_upper_dev_unlink(netdev_vport->dev, get_dpdev(vport->dp)); 161 - dev_set_promiscuity(netdev_vport->dev, -1); 170 + if (netdev_vport->dev->priv_flags & IFF_OVS_DATAPATH) 171 + ovs_netdev_detach_dev(vport); 162 172 rtnl_unlock(); 163 173 164 174 call_rcu(&netdev_vport->rcu, free_port_rcu);
+1
net/openvswitch/vport-netdev.h
··· 39 39 } 40 40 41 41 const char *ovs_netdev_get_name(const struct vport *); 42 + void ovs_netdev_detach_dev(struct vport *); 42 43 43 44 #endif /* vport_netdev.h */
+1
net/sched/sch_fq.c
··· 255 255 f->socket_hash != sk->sk_hash)) { 256 256 f->credit = q->initial_quantum; 257 257 f->socket_hash = sk->sk_hash; 258 + f->time_next_packet = 0ULL; 258 259 } 259 260 return f; 260 261 }
+3 -1
net/sctp/ipv6.c
··· 279 279 sctp_v6_to_addr(&dst_saddr, &fl6->saddr, htons(bp->port)); 280 280 rcu_read_lock(); 281 281 list_for_each_entry_rcu(laddr, &bp->address_list, list) { 282 - if (!laddr->valid || (laddr->state != SCTP_ADDR_SRC)) 282 + if (!laddr->valid || laddr->state == SCTP_ADDR_DEL || 283 + (laddr->state != SCTP_ADDR_SRC && 284 + !asoc->src_out_of_asoc_ok)) 283 285 continue; 284 286 285 287 /* Do not compare against v4 addrs */
-1
net/sctp/sm_sideeffect.c
··· 860 860 (!asoc->temp) && (sk->sk_shutdown != SHUTDOWN_MASK)) 861 861 return; 862 862 863 - BUG_ON(asoc->peer.primary_path == NULL); 864 863 sctp_unhash_established(asoc); 865 864 sctp_association_free(asoc); 866 865 }
+2 -2
net/x25/Kconfig
··· 16 16 if you want that) and the lower level data link layer protocol LAPB 17 17 (say Y to "LAPB Data Link Driver" below if you want that). 18 18 19 - You can read more about X.25 at <http://www.sangoma.com/x25.htm> and 20 - <http://www.cisco.com/univercd/cc/td/doc/product/software/ios11/cbook/cx25.htm>. 19 + You can read more about X.25 at <http://www.sangoma.com/tutorials/x25/> and 20 + <http://docwiki.cisco.com/wiki/X.25>. 21 21 Information about X.25 for Linux is contained in the files 22 22 <file:Documentation/networking/x25.txt> and 23 23 <file:Documentation/networking/x25-iface.txt>.
+6 -6
net/xfrm/xfrm_ipcomp.c
··· 141 141 const int plen = skb->len; 142 142 int dlen = IPCOMP_SCRATCH_SIZE; 143 143 u8 *start = skb->data; 144 - const int cpu = get_cpu(); 145 - u8 *scratch = *per_cpu_ptr(ipcomp_scratches, cpu); 146 - struct crypto_comp *tfm = *per_cpu_ptr(ipcd->tfms, cpu); 144 + struct crypto_comp *tfm; 145 + u8 *scratch; 147 146 int err; 148 147 149 148 local_bh_disable(); 149 + scratch = *this_cpu_ptr(ipcomp_scratches); 150 + tfm = *this_cpu_ptr(ipcd->tfms); 150 151 err = crypto_comp_compress(tfm, start, plen, scratch, &dlen); 151 - local_bh_enable(); 152 152 if (err) 153 153 goto out; 154 154 ··· 158 158 } 159 159 160 160 memcpy(start + sizeof(struct ip_comp_hdr), scratch, dlen); 161 - put_cpu(); 161 + local_bh_enable(); 162 162 163 163 pskb_trim(skb, dlen + sizeof(struct ip_comp_hdr)); 164 164 return 0; 165 165 166 166 out: 167 - put_cpu(); 167 + local_bh_enable(); 168 168 return err; 169 169 } 170 170
+11 -1
scripts/kallsyms.c
··· 55 55 static unsigned int table_size, table_cnt; 56 56 static int all_symbols = 0; 57 57 static char symbol_prefix_char = '\0'; 58 + static unsigned long long kernel_start_addr = 0; 58 59 59 60 int token_profit[0x10000]; 60 61 ··· 66 65 67 66 static void usage(void) 68 67 { 69 - fprintf(stderr, "Usage: kallsyms [--all-symbols] [--symbol-prefix=<prefix char>] < in.map > out.S\n"); 68 + fprintf(stderr, "Usage: kallsyms [--all-symbols] " 69 + "[--symbol-prefix=<prefix char>] " 70 + "[--page-offset=<CONFIG_PAGE_OFFSET>] " 71 + "< in.map > out.S\n"); 70 72 exit(1); 71 73 } 72 74 ··· 197 193 NULL }; 198 194 int i; 199 195 int offset = 1; 196 + 197 + if (s->addr < kernel_start_addr) 198 + return 0; 200 199 201 200 /* skip prefix char */ 202 201 if (symbol_prefix_char && *(s->sym + 1) == symbol_prefix_char) ··· 653 646 if ((*p == '"' && *(p+2) == '"') || (*p == '\'' && *(p+2) == '\'')) 654 647 p++; 655 648 symbol_prefix_char = *p; 649 + } else if (strncmp(argv[i], "--page-offset=", 14) == 0) { 650 + const char *p = &argv[i][14]; 651 + kernel_start_addr = strtoull(p, NULL, 16); 656 652 } else 657 653 usage(); 658 654 }
+4
sound/core/pcm.c
··· 49 49 struct snd_pcm *pcm; 50 50 51 51 list_for_each_entry(pcm, &snd_pcm_devices, list) { 52 + if (pcm->internal) 53 + continue; 52 54 if (pcm->card == card && pcm->device == device) 53 55 return pcm; 54 56 } ··· 62 60 struct snd_pcm *pcm; 63 61 64 62 list_for_each_entry(pcm, &snd_pcm_devices, list) { 63 + if (pcm->internal) 64 + continue; 65 65 if (pcm->card == card && pcm->device > device) 66 66 return pcm->device; 67 67 else if (pcm->card->number > card->number)
+2 -2
sound/pci/hda/hda_codec.c
··· 4864 4864 spin_unlock(&codec->power_lock); 4865 4865 4866 4866 state = hda_call_codec_suspend(codec, true); 4867 - codec->pm_down_notified = 0; 4868 - if (!bus->power_keep_link_on && (state & AC_PWRST_CLK_STOP_OK)) { 4867 + if (!codec->pm_down_notified && 4868 + !bus->power_keep_link_on && (state & AC_PWRST_CLK_STOP_OK)) { 4869 4869 codec->pm_down_notified = 1; 4870 4870 hda_call_pm_notify(bus, false); 4871 4871 }
+3 -1
sound/pci/hda/hda_generic.c
··· 4475 4475 true, &spec->vmaster_mute.sw_kctl); 4476 4476 if (err < 0) 4477 4477 return err; 4478 - if (spec->vmaster_mute.hook) 4478 + if (spec->vmaster_mute.hook) { 4479 4479 snd_hda_add_vmaster_hook(codec, &spec->vmaster_mute, 4480 4480 spec->vmaster_mute_enum); 4481 + snd_hda_sync_vmaster_hook(&spec->vmaster_mute); 4482 + } 4481 4483 } 4482 4484 4483 4485 free_kctls(spec); /* no longer needed */
+17 -1
sound/pci/hda/patch_analog.c
··· 968 968 } 969 969 } 970 970 971 + static void ad1884_fixup_thinkpad(struct hda_codec *codec, 972 + const struct hda_fixup *fix, int action) 973 + { 974 + struct ad198x_spec *spec = codec->spec; 975 + 976 + if (action == HDA_FIXUP_ACT_PRE_PROBE) 977 + spec->gen.keep_eapd_on = 1; 978 + } 979 + 971 980 /* set magic COEFs for dmic */ 972 981 static const struct hda_verb ad1884_dmic_init_verbs[] = { 973 982 {0x01, AC_VERB_SET_COEF_INDEX, 0x13f7}, ··· 988 979 AD1884_FIXUP_AMP_OVERRIDE, 989 980 AD1884_FIXUP_HP_EAPD, 990 981 AD1884_FIXUP_DMIC_COEF, 982 + AD1884_FIXUP_THINKPAD, 991 983 AD1884_FIXUP_HP_TOUCHSMART, 992 984 }; 993 985 ··· 1007 997 .type = HDA_FIXUP_VERBS, 1008 998 .v.verbs = ad1884_dmic_init_verbs, 1009 999 }, 1000 + [AD1884_FIXUP_THINKPAD] = { 1001 + .type = HDA_FIXUP_FUNC, 1002 + .v.func = ad1884_fixup_thinkpad, 1003 + .chained = true, 1004 + .chain_id = AD1884_FIXUP_DMIC_COEF, 1005 + }, 1010 1006 [AD1884_FIXUP_HP_TOUCHSMART] = { 1011 1007 .type = HDA_FIXUP_VERBS, 1012 1008 .v.verbs = ad1884_dmic_init_verbs, ··· 1024 1008 static const struct snd_pci_quirk ad1884_fixup_tbl[] = { 1025 1009 SND_PCI_QUIRK(0x103c, 0x2a82, "HP Touchsmart", AD1884_FIXUP_HP_TOUCHSMART), 1026 1010 SND_PCI_QUIRK_VENDOR(0x103c, "HP", AD1884_FIXUP_HP_EAPD), 1027 - SND_PCI_QUIRK_VENDOR(0x17aa, "Lenovo Thinkpad", AD1884_FIXUP_DMIC_COEF), 1011 + SND_PCI_QUIRK_VENDOR(0x17aa, "Lenovo Thinkpad", AD1884_FIXUP_THINKPAD), 1028 1012 {} 1029 1013 }; 1030 1014
+1
sound/pci/hda/patch_realtek.c
··· 4623 4623 SND_PCI_QUIRK(0x1028, 0x05db, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 4624 4624 SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800), 4625 4625 SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_ASUS_MODE4), 4626 + SND_PCI_QUIRK(0x1043, 0x1bf3, "ASUS N76VZ", ALC662_FIXUP_ASUS_MODE4), 4626 4627 SND_PCI_QUIRK(0x1043, 0x8469, "ASUS mobo", ALC662_FIXUP_NO_JACK_DETECT), 4627 4628 SND_PCI_QUIRK(0x105b, 0x0cd6, "Foxconn", ALC662_FIXUP_ASUS_MODE2), 4628 4629 SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD),
+1
sound/soc/codecs/wm_hubs.c
··· 530 530 hubs->hp_startup_mode); 531 531 break; 532 532 } 533 + break; 533 534 534 535 case SND_SOC_DAPM_PRE_PMD: 535 536 snd_soc_update_bits(codec, WM8993_CHARGE_PUMP_1,
+3 -1
sound/soc/soc-dapm.c
··· 1949 1949 w->active ? "active" : "inactive"); 1950 1950 1951 1951 list_for_each_entry(p, &w->sources, list_sink) { 1952 - if (p->connected && !p->connected(w, p->sink)) 1952 + if (p->connected && !p->connected(w, p->source)) 1953 1953 continue; 1954 1954 1955 1955 if (p->connect) ··· 3495 3495 if (!w) { 3496 3496 dev_err(dapm->dev, "ASoC: Failed to create %s widget\n", 3497 3497 dai->driver->playback.stream_name); 3498 + return -ENOMEM; 3498 3499 } 3499 3500 3500 3501 w->priv = dai; ··· 3514 3513 if (!w) { 3515 3514 dev_err(dapm->dev, "ASoC: Failed to create %s widget\n", 3516 3515 dai->driver->capture.stream_name); 3516 + return -ENOMEM; 3517 3517 } 3518 3518 3519 3519 w->priv = dai;
+13 -1
tools/perf/Documentation/perf-record.txt
··· 90 90 Number of mmap data pages. Must be a power of two. 91 91 92 92 -g:: 93 + Enables call-graph (stack chain/backtrace) recording. 94 + 93 95 --call-graph:: 94 - Do call-graph (stack chain/backtrace) recording. 96 + Setup and enable call-graph (stack chain/backtrace) recording, 97 + implies -g. 98 + 99 + Allows specifying "fp" (frame pointer) or "dwarf" 100 + (DWARF's CFI - Call Frame Information) as the method to collect 101 + the information used to show the call graphs. 102 + 103 + In some systems, where binaries are build with gcc 104 + --fomit-frame-pointer, using the "fp" method will produce bogus 105 + call graphs, using "dwarf", if available (perf tools linked to 106 + the libunwind library) should be used instead. 95 107 96 108 -q:: 97 109 --quiet::
+5 -13
tools/perf/Documentation/perf-top.txt
··· 140 140 --asm-raw:: 141 141 Show raw instruction encoding of assembly instructions. 142 142 143 - -G [type,min,order]:: 143 + -G:: 144 + Enables call-graph (stack chain/backtrace) recording. 145 + 144 146 --call-graph:: 145 - Display call chains using type, min percent threshold and order. 146 - type can be either: 147 - - flat: single column, linear exposure of call chains. 148 - - graph: use a graph tree, displaying absolute overhead rates. 149 - - fractal: like graph, but displays relative rates. Each branch of 150 - the tree is considered as a new profiled object. 151 - 152 - order can be either: 153 - - callee: callee based call graph. 154 - - caller: inverted caller based call graph. 155 - 156 - Default: fractal,0.5,callee. 147 + Setup and enable call-graph (stack chain/backtrace) recording, 148 + implies -G. 157 149 158 150 --ignore-callees=<regex>:: 159 151 Ignore callees of the function(s) matching the given regex.
+7
tools/perf/builtin-kvm.c
··· 888 888 while ((event = perf_evlist__mmap_read(kvm->evlist, idx)) != NULL) { 889 889 err = perf_evlist__parse_sample(kvm->evlist, event, &sample); 890 890 if (err) { 891 + perf_evlist__mmap_consume(kvm->evlist, idx); 891 892 pr_err("Failed to parse sample\n"); 892 893 return -1; 893 894 } 894 895 895 896 err = perf_session_queue_event(kvm->session, event, &sample, 0); 897 + /* 898 + * FIXME: Here we can't consume the event, as perf_session_queue_event will 899 + * point to it, and it'll get possibly overwritten by the kernel. 900 + */ 901 + perf_evlist__mmap_consume(kvm->evlist, idx); 902 + 896 903 if (err) { 897 904 pr_err("Failed to enqueue sample: %d\n", err); 898 905 return -1;
+51 -22
tools/perf/builtin-record.c
··· 712 712 } 713 713 #endif /* LIBUNWIND_SUPPORT */ 714 714 715 - int record_parse_callchain_opt(const struct option *opt, 716 - const char *arg, int unset) 715 + int record_parse_callchain(const char *arg, struct perf_record_opts *opts) 717 716 { 718 - struct perf_record_opts *opts = opt->value; 719 717 char *tok, *name, *saveptr = NULL; 720 718 char *buf; 721 719 int ret = -1; 722 - 723 - /* --no-call-graph */ 724 - if (unset) 725 - return 0; 726 - 727 - /* We specified default option if none is provided. */ 728 - BUG_ON(!arg); 729 720 730 721 /* We need buffer that we know we can write to. */ 731 722 buf = malloc(strlen(arg) + 1); ··· 755 764 ret = get_stack_size(tok, &size); 756 765 opts->stack_dump_size = size; 757 766 } 758 - 759 - if (!ret) 760 - pr_debug("callchain: stack dump size %d\n", 761 - opts->stack_dump_size); 762 767 #endif /* LIBUNWIND_SUPPORT */ 763 768 } else { 764 - pr_err("callchain: Unknown -g option " 769 + pr_err("callchain: Unknown --call-graph option " 765 770 "value: %s\n", arg); 766 771 break; 767 772 } ··· 765 778 } while (0); 766 779 767 780 free(buf); 781 + return ret; 782 + } 768 783 784 + static void callchain_debug(struct perf_record_opts *opts) 785 + { 786 + pr_debug("callchain: type %d\n", opts->call_graph); 787 + 788 + if (opts->call_graph == CALLCHAIN_DWARF) 789 + pr_debug("callchain: stack dump size %d\n", 790 + opts->stack_dump_size); 791 + } 792 + 793 + int record_parse_callchain_opt(const struct option *opt, 794 + const char *arg, 795 + int unset) 796 + { 797 + struct perf_record_opts *opts = opt->value; 798 + int ret; 799 + 800 + /* --no-call-graph */ 801 + if (unset) { 802 + opts->call_graph = CALLCHAIN_NONE; 803 + pr_debug("callchain: disabled\n"); 804 + return 0; 805 + } 806 + 807 + ret = record_parse_callchain(arg, opts); 769 808 if (!ret) 770 - pr_debug("callchain: type %d\n", opts->call_graph); 809 + callchain_debug(opts); 771 810 772 811 return ret; 812 + } 813 + 814 + int record_callchain_opt(const struct option 
*opt, 815 + const char *arg __maybe_unused, 816 + int unset __maybe_unused) 817 + { 818 + struct perf_record_opts *opts = opt->value; 819 + 820 + if (opts->call_graph == CALLCHAIN_NONE) 821 + opts->call_graph = CALLCHAIN_FP; 822 + 823 + callchain_debug(opts); 824 + return 0; 773 825 } 774 826 775 827 static const char * const record_usage[] = { ··· 839 813 }, 840 814 }; 841 815 842 - #define CALLCHAIN_HELP "do call-graph (stack chain/backtrace) recording: " 816 + #define CALLCHAIN_HELP "setup and enables call-graph (stack chain/backtrace) recording: " 843 817 844 818 #ifdef LIBUNWIND_SUPPORT 845 - const char record_callchain_help[] = CALLCHAIN_HELP "[fp] dwarf"; 819 + const char record_callchain_help[] = CALLCHAIN_HELP "fp dwarf"; 846 820 #else 847 - const char record_callchain_help[] = CALLCHAIN_HELP "[fp]"; 821 + const char record_callchain_help[] = CALLCHAIN_HELP "fp"; 848 822 #endif 849 823 850 824 /* ··· 884 858 "number of mmap data pages"), 885 859 OPT_BOOLEAN(0, "group", &record.opts.group, 886 860 "put the counters into a counter group"), 887 - OPT_CALLBACK_DEFAULT('g', "call-graph", &record.opts, 888 - "mode[,dump_size]", record_callchain_help, 889 - &record_parse_callchain_opt, "fp"), 861 + OPT_CALLBACK_NOOPT('g', NULL, &record.opts, 862 + NULL, "enables call-graph recording" , 863 + &record_callchain_opt), 864 + OPT_CALLBACK(0, "call-graph", &record.opts, 865 + "mode[,dump_size]", record_callchain_help, 866 + &record_parse_callchain_opt), 890 867 OPT_INCR('v', "verbose", &verbose, 891 868 "be more verbose (show counter open errors, etc)"), 892 869 OPT_BOOLEAN('q', "quiet", &quiet, "don't print any message"),
+19 -14
tools/perf/builtin-top.c
··· 810 810 ret = perf_evlist__parse_sample(top->evlist, event, &sample); 811 811 if (ret) { 812 812 pr_err("Can't parse sample, err = %d\n", ret); 813 - continue; 813 + goto next_event; 814 814 } 815 815 816 816 evsel = perf_evlist__id2evsel(session->evlist, sample.id); ··· 825 825 case PERF_RECORD_MISC_USER: 826 826 ++top->us_samples; 827 827 if (top->hide_user_symbols) 828 - continue; 828 + goto next_event; 829 829 machine = &session->machines.host; 830 830 break; 831 831 case PERF_RECORD_MISC_KERNEL: 832 832 ++top->kernel_samples; 833 833 if (top->hide_kernel_symbols) 834 - continue; 834 + goto next_event; 835 835 machine = &session->machines.host; 836 836 break; 837 837 case PERF_RECORD_MISC_GUEST_KERNEL: ··· 847 847 */ 848 848 /* Fall thru */ 849 849 default: 850 - continue; 850 + goto next_event; 851 851 } 852 852 853 853 ··· 859 859 machine__process_event(machine, event); 860 860 } else 861 861 ++session->stats.nr_unknown_events; 862 + next_event: 863 + perf_evlist__mmap_consume(top->evlist, idx); 862 864 } 863 865 } 864 866 ··· 1018 1016 } 1019 1017 1020 1018 static int 1019 + callchain_opt(const struct option *opt, const char *arg, int unset) 1020 + { 1021 + symbol_conf.use_callchain = true; 1022 + return record_callchain_opt(opt, arg, unset); 1023 + } 1024 + 1025 + static int 1021 1026 parse_callchain_opt(const struct option *opt, const char *arg, int unset) 1022 1027 { 1023 - /* 1024 - * --no-call-graph 1025 - */ 1026 - if (unset) 1027 - return 0; 1028 - 1029 1028 symbol_conf.use_callchain = true; 1030 - 1031 1029 return record_parse_callchain_opt(opt, arg, unset); 1032 1030 } 1033 1031 ··· 1108 1106 "sort by key(s): pid, comm, dso, symbol, parent, weight, local_weight"), 1109 1107 OPT_BOOLEAN('n', "show-nr-samples", &symbol_conf.show_nr_samples, 1110 1108 "Show a column with the number of samples"), 1111 - OPT_CALLBACK_DEFAULT('G', "call-graph", &top.record_opts, 1112 - "mode[,dump_size]", record_callchain_help, 1113 - &parse_callchain_opt, "fp"), 1109 
+ OPT_CALLBACK_NOOPT('G', NULL, &top.record_opts, 1110 + NULL, "enables call-graph recording", 1111 + &callchain_opt), 1112 + OPT_CALLBACK(0, "call-graph", &top.record_opts, 1113 + "mode[,dump_size]", record_callchain_help, 1114 + &parse_callchain_opt), 1114 1115 OPT_CALLBACK(0, "ignore-callees", NULL, "regex", 1115 1116 "ignore callees of these functions in call graphs", 1116 1117 report_parse_ignore_callees_opt),
+5 -3
tools/perf/builtin-trace.c
··· 987 987 err = perf_evlist__parse_sample(evlist, event, &sample); 988 988 if (err) { 989 989 fprintf(trace->output, "Can't parse sample, err = %d, skipping...\n", err); 990 - continue; 990 + goto next_event; 991 991 } 992 992 993 993 if (trace->base_time == 0) ··· 1001 1001 evsel = perf_evlist__id2evsel(evlist, sample.id); 1002 1002 if (evsel == NULL) { 1003 1003 fprintf(trace->output, "Unknown tp ID %" PRIu64 ", skipping...\n", sample.id); 1004 - continue; 1004 + goto next_event; 1005 1005 } 1006 1006 1007 1007 if (sample.raw_data == NULL) { 1008 1008 fprintf(trace->output, "%s sample with no payload for tid: %d, cpu %d, raw_size=%d, skipping...\n", 1009 1009 perf_evsel__name(evsel), sample.tid, 1010 1010 sample.cpu, sample.raw_size); 1011 - continue; 1011 + goto next_event; 1012 1012 } 1013 1013 1014 1014 handler = evsel->handler.func; 1015 1015 handler(trace, evsel, &sample); 1016 + next_event: 1017 + perf_evlist__mmap_consume(evlist, i); 1016 1018 1017 1019 if (done) 1018 1020 goto out_unmap_evlist;
+1
tools/perf/tests/code-reading.c
··· 290 290 for (i = 0; i < evlist->nr_mmaps; i++) { 291 291 while ((event = perf_evlist__mmap_read(evlist, i)) != NULL) { 292 292 ret = process_event(machine, evlist, event, state); 293 + perf_evlist__mmap_consume(evlist, i); 293 294 if (ret < 0) 294 295 return ret; 295 296 }
+1
tools/perf/tests/keep-tracking.c
··· 36 36 (pid_t)event->comm.tid == getpid() && 37 37 strcmp(event->comm.comm, comm) == 0) 38 38 found += 1; 39 + perf_evlist__mmap_consume(evlist, i); 39 40 } 40 41 } 41 42 return found;
+1
tools/perf/tests/mmap-basic.c
··· 122 122 goto out_munmap; 123 123 } 124 124 nr_events[evsel->idx]++; 125 + perf_evlist__mmap_consume(evlist, 0); 125 126 } 126 127 127 128 err = 0;
+3 -1
tools/perf/tests/open-syscall-tp-fields.c
··· 77 77 78 78 ++nr_events; 79 79 80 - if (type != PERF_RECORD_SAMPLE) 80 + if (type != PERF_RECORD_SAMPLE) { 81 + perf_evlist__mmap_consume(evlist, i); 81 82 continue; 83 + } 82 84 83 85 err = perf_evsel__parse_sample(evsel, event, &sample); 84 86 if (err) {
+2
tools/perf/tests/perf-record.c
··· 263 263 type); 264 264 ++errs; 265 265 } 266 + 267 + perf_evlist__mmap_consume(evlist, i); 266 268 } 267 269 } 268 270
+3 -1
tools/perf/tests/perf-time-to-tsc.c
··· 122 122 if (event->header.type != PERF_RECORD_COMM || 123 123 (pid_t)event->comm.pid != getpid() || 124 124 (pid_t)event->comm.tid != getpid()) 125 - continue; 125 + goto next_event; 126 126 127 127 if (strcmp(event->comm.comm, comm1) == 0) { 128 128 CHECK__(perf_evsel__parse_sample(evsel, event, ··· 134 134 &sample)); 135 135 comm2_time = sample.time; 136 136 } 137 + next_event: 138 + perf_evlist__mmap_consume(evlist, i); 137 139 } 138 140 } 139 141
+3 -1
tools/perf/tests/sw-clock.c
··· 78 78 struct perf_sample sample; 79 79 80 80 if (event->header.type != PERF_RECORD_SAMPLE) 81 - continue; 81 + goto next_event; 82 82 83 83 err = perf_evlist__parse_sample(evlist, event, &sample); 84 84 if (err < 0) { ··· 88 88 89 89 total_periods += sample.period; 90 90 nr_samples++; 91 + next_event: 92 + perf_evlist__mmap_consume(evlist, 0); 91 93 } 92 94 93 95 if ((u64) nr_samples == total_periods) {
+3 -3
tools/perf/tests/task-exit.c
··· 96 96 97 97 retry: 98 98 while ((event = perf_evlist__mmap_read(evlist, 0)) != NULL) { 99 - if (event->header.type != PERF_RECORD_EXIT) 100 - continue; 99 + if (event->header.type == PERF_RECORD_EXIT) 100 + nr_exit++; 101 101 102 - nr_exit++; 102 + perf_evlist__mmap_consume(evlist, 0); 103 103 } 104 104 105 105 if (!exited || !nr_exit) {
+4 -5
tools/perf/ui/stdio/hist.c
··· 315 315 } 316 316 317 317 static int hist_entry__period_snprintf(struct perf_hpp *hpp, 318 - struct hist_entry *he, 319 - bool color) 318 + struct hist_entry *he) 320 319 { 321 320 const char *sep = symbol_conf.field_sep; 322 321 struct perf_hpp_fmt *fmt; ··· 337 338 } else 338 339 first = false; 339 340 340 - if (color && fmt->color) 341 + if (perf_hpp__use_color() && fmt->color) 341 342 ret = fmt->color(fmt, hpp, he); 342 343 else 343 344 ret = fmt->entry(fmt, hpp, he); ··· 357 358 .buf = bf, 358 359 .size = size, 359 360 }; 360 - bool color = !symbol_conf.field_sep; 361 361 362 362 if (size == 0 || size > bfsz) 363 363 size = hpp.size = bfsz; 364 364 365 - ret = hist_entry__period_snprintf(&hpp, he, color); 365 + ret = hist_entry__period_snprintf(&hpp, he); 366 366 hist_entry__sort_snprintf(he, bf + ret, size - ret, hists); 367 367 368 368 ret = fprintf(fp, "%s\n", bf); ··· 480 482 481 483 print_entries: 482 484 linesz = hists__sort_list_width(hists) + 3 + 1; 485 + linesz += perf_hpp__color_overhead(); 483 486 line = malloc(linesz); 484 487 if (line == NULL) { 485 488 ret = -1;
+3
tools/perf/util/callchain.h
··· 147 147 148 148 struct option; 149 149 150 + int record_parse_callchain(const char *arg, struct perf_record_opts *opts); 150 151 int record_parse_callchain_opt(const struct option *opt, const char *arg, int unset); 152 + int record_callchain_opt(const struct option *opt, const char *arg, int unset); 153 + 151 154 extern const char record_callchain_help[]; 152 155 #endif /* __PERF_CALLCHAIN_H */
+14 -18
tools/perf/util/event.c
··· 187 187 return -1; 188 188 } 189 189 190 - event->header.type = PERF_RECORD_MMAP2; 190 + event->header.type = PERF_RECORD_MMAP; 191 191 /* 192 192 * Just like the kernel, see __perf_event_mmap in kernel/perf_event.c 193 193 */ ··· 198 198 char prot[5]; 199 199 char execname[PATH_MAX]; 200 200 char anonstr[] = "//anon"; 201 - unsigned int ino; 202 201 size_t size; 203 202 ssize_t n; 204 203 ··· 208 209 strcpy(execname, ""); 209 210 210 211 /* 00400000-0040c000 r-xp 00000000 fd:01 41038 /bin/cat */ 211 - n = sscanf(bf, "%"PRIx64"-%"PRIx64" %s %"PRIx64" %x:%x %u %s\n", 212 - &event->mmap2.start, &event->mmap2.len, prot, 213 - &event->mmap2.pgoff, &event->mmap2.maj, 214 - &event->mmap2.min, 215 - &ino, execname); 212 + n = sscanf(bf, "%"PRIx64"-%"PRIx64" %s %"PRIx64" %*x:%*x %*u %s\n", 213 + &event->mmap.start, &event->mmap.len, prot, 214 + &event->mmap.pgoff, 215 + execname); 216 216 217 - event->mmap2.ino = (u64)ino; 218 - 219 - if (n != 8) 217 + if (n != 5) 220 218 continue; 221 219 222 220 if (prot[2] != 'x') ··· 223 227 strcpy(execname, anonstr); 224 228 225 229 size = strlen(execname) + 1; 226 - memcpy(event->mmap2.filename, execname, size); 230 + memcpy(event->mmap.filename, execname, size); 227 231 size = PERF_ALIGN(size, sizeof(u64)); 228 - event->mmap2.len -= event->mmap.start; 229 - event->mmap2.header.size = (sizeof(event->mmap2) - 230 - (sizeof(event->mmap2.filename) - size)); 231 - memset(event->mmap2.filename + size, 0, machine->id_hdr_size); 232 - event->mmap2.header.size += machine->id_hdr_size; 233 - event->mmap2.pid = tgid; 234 - event->mmap2.tid = pid; 232 + event->mmap.len -= event->mmap.start; 233 + event->mmap.header.size = (sizeof(event->mmap) - 234 + (sizeof(event->mmap.filename) - size)); 235 + memset(event->mmap.filename + size, 0, machine->id_hdr_size); 236 + event->mmap.header.size += machine->id_hdr_size; 237 + event->mmap.pid = tgid; 238 + event->mmap.tid = pid; 235 239 236 240 if (process(tool, event, &synth_sample, machine) != 0) { 
237 241 rc = -1;
+10 -3
tools/perf/util/evlist.c
··· 545 545 546 546 md->prev = old; 547 547 548 - if (!evlist->overwrite) 549 - perf_mmap__write_tail(md, old); 550 - 551 548 return event; 549 + } 550 + 551 + void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx) 552 + { 553 + if (!evlist->overwrite) { 554 + struct perf_mmap *md = &evlist->mmap[idx]; 555 + unsigned int old = md->prev; 556 + 557 + perf_mmap__write_tail(md, old); 558 + } 552 559 } 553 560 554 561 static void __perf_evlist__munmap(struct perf_evlist *evlist, int idx)
+2
tools/perf/util/evlist.h
··· 89 89 90 90 union perf_event *perf_evlist__mmap_read(struct perf_evlist *self, int idx); 91 91 92 + void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx); 93 + 92 94 int perf_evlist__open(struct perf_evlist *evlist); 93 95 void perf_evlist__close(struct perf_evlist *evlist); 94 96
-1
tools/perf/util/evsel.c
··· 678 678 attr->sample_type |= PERF_SAMPLE_WEIGHT; 679 679 680 680 attr->mmap = track; 681 - attr->mmap2 = track && !perf_missing_features.mmap2; 682 681 attr->comm = track; 683 682 684 683 /*
+13
tools/perf/util/hist.h
··· 5 5 #include <pthread.h> 6 6 #include "callchain.h" 7 7 #include "header.h" 8 + #include "color.h" 8 9 9 10 extern struct callchain_param callchain_param; 10 11 ··· 175 174 void perf_hpp__init(void); 176 175 void perf_hpp__column_register(struct perf_hpp_fmt *format); 177 176 void perf_hpp__column_enable(unsigned col); 177 + 178 + static inline size_t perf_hpp__use_color(void) 179 + { 180 + return !symbol_conf.field_sep; 181 + } 182 + 183 + static inline size_t perf_hpp__color_overhead(void) 184 + { 185 + return perf_hpp__use_color() ? 186 + (COLOR_MAXLEN + sizeof(PERF_COLOR_RESET)) * PERF_HPP__MAX_INDEX 187 + : 0; 188 + } 178 189 179 190 struct perf_evlist; 180 191
+1 -1
tools/perf/util/probe-finder.c
··· 1357 1357 goto post; 1358 1358 } 1359 1359 1360 + fname = dwarf_decl_file(&spdie); 1360 1361 if (addr == (unsigned long)baseaddr) { 1361 1362 /* Function entry - Relative line number is 0 */ 1362 1363 lineno = baseline; 1363 - fname = dwarf_decl_file(&spdie); 1364 1364 goto post; 1365 1365 } 1366 1366
+2
tools/perf/util/python.c
··· 822 822 PyObject *pyevent = pyrf_event__new(event); 823 823 struct pyrf_event *pevent = (struct pyrf_event *)pyevent; 824 824 825 + perf_evlist__mmap_consume(evlist, cpu); 826 + 825 827 if (pyevent == NULL) 826 828 return PyErr_NoMemory(); 827 829
+1 -1
tools/perf/util/scripting-engines/trace-event-perl.c
··· 282 282 283 283 event = find_cache_event(evsel); 284 284 if (!event) 285 - die("ug! no event found for type %" PRIu64, evsel->attr.config); 285 + die("ug! no event found for type %" PRIu64, (u64)evsel->attr.config); 286 286 287 287 pid = raw_field_value(event, "common_pid", data); 288 288
+24 -13
tools/perf/util/scripting-engines/trace-event-python.c
··· 56 56 Py_FatalError("problem in Python trace event handler"); 57 57 } 58 58 59 + /* 60 + * Insert val into the dictionary and decrement the reference counter. 61 + * This is necessary for dictionaries since PyDict_SetItemString() does not 62 + * steal a reference, as opposed to PyTuple_SetItem(). 63 + */ 64 + static void pydict_set_item_string_decref(PyObject *dict, const char *key, PyObject *val) 65 + { 66 + PyDict_SetItemString(dict, key, val); 67 + Py_DECREF(val); 68 + } 69 + 59 70 static void define_value(enum print_arg_type field_type, 60 71 const char *ev_name, 61 72 const char *field_name, ··· 290 279 PyTuple_SetItem(t, n++, PyInt_FromLong(pid)); 291 280 PyTuple_SetItem(t, n++, PyString_FromString(comm)); 292 281 } else { 293 - PyDict_SetItemString(dict, "common_cpu", PyInt_FromLong(cpu)); 294 - PyDict_SetItemString(dict, "common_s", PyInt_FromLong(s)); 295 - PyDict_SetItemString(dict, "common_ns", PyInt_FromLong(ns)); 296 - PyDict_SetItemString(dict, "common_pid", PyInt_FromLong(pid)); 297 - PyDict_SetItemString(dict, "common_comm", PyString_FromString(comm)); 282 + pydict_set_item_string_decref(dict, "common_cpu", PyInt_FromLong(cpu)); 283 + pydict_set_item_string_decref(dict, "common_s", PyInt_FromLong(s)); 284 + pydict_set_item_string_decref(dict, "common_ns", PyInt_FromLong(ns)); 285 + pydict_set_item_string_decref(dict, "common_pid", PyInt_FromLong(pid)); 286 + pydict_set_item_string_decref(dict, "common_comm", PyString_FromString(comm)); 298 287 } 299 288 for (field = event->format.fields; field; field = field->next) { 300 289 if (field->flags & FIELD_IS_STRING) { ··· 324 313 if (handler) 325 314 PyTuple_SetItem(t, n++, obj); 326 315 else 327 - PyDict_SetItemString(dict, field->name, obj); 316 + pydict_set_item_string_decref(dict, field->name, obj); 328 317 329 318 } 330 319 if (!handler) ··· 381 370 if (!handler || !PyCallable_Check(handler)) 382 371 goto exit; 383 372 384 - PyDict_SetItemString(dict, "ev_name", 
PyString_FromString(perf_evsel__name(evsel))); 385 - PyDict_SetItemString(dict, "attr", PyString_FromStringAndSize( 373 + pydict_set_item_string_decref(dict, "ev_name", PyString_FromString(perf_evsel__name(evsel))); 374 + pydict_set_item_string_decref(dict, "attr", PyString_FromStringAndSize( 386 375 (const char *)&evsel->attr, sizeof(evsel->attr))); 387 - PyDict_SetItemString(dict, "sample", PyString_FromStringAndSize( 376 + pydict_set_item_string_decref(dict, "sample", PyString_FromStringAndSize( 388 377 (const char *)sample, sizeof(*sample))); 389 - PyDict_SetItemString(dict, "raw_buf", PyString_FromStringAndSize( 378 + pydict_set_item_string_decref(dict, "raw_buf", PyString_FromStringAndSize( 390 379 (const char *)sample->raw_data, sample->raw_size)); 391 - PyDict_SetItemString(dict, "comm", 380 + pydict_set_item_string_decref(dict, "comm", 392 381 PyString_FromString(thread->comm)); 393 382 if (al->map) { 394 - PyDict_SetItemString(dict, "dso", 383 + pydict_set_item_string_decref(dict, "dso", 395 384 PyString_FromString(al->map->dso->name)); 396 385 } 397 386 if (al->sym) { 398 - PyDict_SetItemString(dict, "symbol", 387 + pydict_set_item_string_decref(dict, "symbol", 399 388 PyString_FromString(al->sym->name)); 400 389 } 401 390
+1 -1
virt/kvm/kvm_main.c
··· 3091 3091 3092 3092 static int kvm_init_debug(void) 3093 3093 { 3094 - int r = -EFAULT; 3094 + int r = -EEXIST; 3095 3095 struct kvm_stats_debugfs_item *p; 3096 3096 3097 3097 kvm_debugfs_dir = debugfs_create_dir("kvm", NULL);