Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/usb/r8152.c
drivers/net/xen-netback/netback.c

Both the r8152 and netback conflicts were simple overlapping
changes.

Signed-off-by: David S. Miller <davem@davemloft.net>

+2205 -1640
+5 -6
Documentation/device-mapper/cache.txt
··· 124 124 Updating on-disk metadata 125 125 ------------------------- 126 126 127 - On-disk metadata is committed every time a REQ_SYNC or REQ_FUA bio is 128 - written. If no such requests are made then commits will occur every 129 - second. This means the cache behaves like a physical disk that has a 130 - write cache (the same is true of the thin-provisioning target). If 131 - power is lost you may lose some recent writes. The metadata should 132 - always be consistent in spite of any crash. 127 + On-disk metadata is committed every time a FLUSH or FUA bio is written. 128 + If no such requests are made then commits will occur every second. This 129 + means the cache behaves like a physical disk that has a volatile write 130 + cache. If power is lost you may lose some recent writes. The metadata 131 + should always be consistent in spite of any crash. 133 132 134 133 The 'dirty' state for a cache block changes far too frequently for us 135 134 to keep updating it on the fly. So we treat it as a hint. In normal
+31 -3
Documentation/device-mapper/thin-provisioning.txt
··· 116 116 userspace daemon can use this to detect a situation where a new table 117 117 already exceeds the threshold. 118 118 119 + A low water mark for the metadata device is maintained in the kernel and 120 + will trigger a dm event if free space on the metadata device drops below 121 + it. 122 + 123 + Updating on-disk metadata 124 + ------------------------- 125 + 126 + On-disk metadata is committed every time a FLUSH or FUA bio is written. 127 + If no such requests are made then commits will occur every second. This 128 + means the thin-provisioning target behaves like a physical disk that has 129 + a volatile write cache. If power is lost you may lose some recent 130 + writes. The metadata should always be consistent in spite of any crash. 131 + 132 + If data space is exhausted the pool will either error or queue IO 133 + according to the configuration (see: error_if_no_space). If metadata 134 + space is exhausted or a metadata operation fails: the pool will error IO 135 + until the pool is taken offline and repair is performed to 1) fix any 136 + potential inconsistencies and 2) clear the flag that imposes repair. 137 + Once the pool's metadata device is repaired it may be resized, which 138 + will allow the pool to return to normal operation. Note that if a pool 139 + is flagged as needing repair, the pool's data and metadata devices 140 + cannot be resized until repair is performed. It should also be noted 141 + that when the pool's metadata space is exhausted the current metadata 142 + transaction is aborted. Given that the pool will cache IO whose 143 + completion may have already been acknowledged to upper IO layers 144 + (e.g. filesystem) it is strongly suggested that consistency checks 145 + (e.g. fsck) be performed on those layers when repair of the pool is 146 + required. 147 + 119 148 Thin provisioning 120 149 ----------------- 121 150 ··· 287 258 should register for the event and then check the target's status. 
288 259 289 260 held metadata root: 290 - The location, in sectors, of the metadata root that has been 261 + The location, in blocks, of the metadata root that has been 291 262 'held' for userspace read access. '-' indicates there is no 292 - held root. This feature is not yet implemented so '-' is 293 - always returned. 263 + held root. 294 264 295 265 discard_passdown|no_discard_passdown 296 266 Whether or not discards are actually being passed down to the
+4 -4
Documentation/devicetree/bindings/pinctrl/brcm,capri-pinctrl.txt → Documentation/devicetree/bindings/pinctrl/brcm,bcm11351-pinctrl.txt
··· 1 - Broadcom Capri Pin Controller 1 + Broadcom BCM281xx Pin Controller 2 2 3 3 This is a pin controller for the Broadcom BCM281xx SoC family, which includes 4 4 BCM11130, BCM11140, BCM11351, BCM28145, and BCM28155 SoCs. ··· 7 7 8 8 Required Properties: 9 9 10 - - compatible: Must be "brcm,capri-pinctrl". 10 + - compatible: Must be "brcm,bcm11351-pinctrl" 11 11 - reg: Base address of the PAD Controller register block and the size 12 12 of the block. 13 13 14 14 For example, the following is the bare minimum node: 15 15 16 16 pinctrl@35004800 { 17 - compatible = "brcm,capri-pinctrl"; 17 + compatible = "brcm,bcm11351-pinctrl"; 18 18 reg = <0x35004800 0x430>; 19 19 }; 20 20 ··· 119 119 Example: 120 120 // pin controller node 121 121 pinctrl@35004800 { 122 - compatible = "brcm,capri-pinctrl"; 122 + compatible = "brcm,bcm11351-pinctrl"; 123 123 reg = <0x35004800 0x430>; 124 124 125 125 // pin configuration node
+1 -1
Documentation/networking/packet_mmap.txt
··· 453 453 enabled previously with setsockopt() and 454 454 the PACKET_COPY_THRESH option. 455 455 456 - The number of frames than can be buffered to 456 + The number of frames that can be buffered to 457 457 be read with recvfrom is limited like a normal socket. 458 458 See the SO_RCVBUF option in the socket (7) man page. 459 459
+30 -18
Documentation/networking/timestamping.txt
··· 21 21 22 22 SO_TIMESTAMPING: 23 23 24 - Instructs the socket layer which kind of information is wanted. The 25 - parameter is an integer with some of the following bits set. Setting 26 - other bits is an error and doesn't change the current state. 24 + Instructs the socket layer which kind of information should be collected 25 + and/or reported. The parameter is an integer with some of the following 26 + bits set. Setting other bits is an error and doesn't change the current 27 + state. 27 28 28 - SOF_TIMESTAMPING_TX_HARDWARE: try to obtain send time stamp in hardware 29 - SOF_TIMESTAMPING_TX_SOFTWARE: if SOF_TIMESTAMPING_TX_HARDWARE is off or 30 - fails, then do it in software 31 - SOF_TIMESTAMPING_RX_HARDWARE: return the original, unmodified time stamp 32 - as generated by the hardware 33 - SOF_TIMESTAMPING_RX_SOFTWARE: if SOF_TIMESTAMPING_RX_HARDWARE is off or 34 - fails, then do it in software 35 - SOF_TIMESTAMPING_RAW_HARDWARE: return original raw hardware time stamp 36 - SOF_TIMESTAMPING_SYS_HARDWARE: return hardware time stamp transformed to 37 - the system time base 38 - SOF_TIMESTAMPING_SOFTWARE: return system time stamp generated in 39 - software 29 + Four of the bits are requests to the stack to try to generate 30 + timestamps. Any combination of them is valid. 40 31 41 - SOF_TIMESTAMPING_TX/RX determine how time stamps are generated. 42 - SOF_TIMESTAMPING_RAW/SYS determine how they are reported in the 43 - following control message: 32 + SOF_TIMESTAMPING_TX_HARDWARE: try to obtain send time stamps in hardware 33 + SOF_TIMESTAMPING_TX_SOFTWARE: try to obtain send time stamps in software 34 + SOF_TIMESTAMPING_RX_HARDWARE: try to obtain receive time stamps in hardware 35 + SOF_TIMESTAMPING_RX_SOFTWARE: try to obtain receive time stamps in software 36 + 37 + The other three bits control which timestamps will be reported in a 38 + generated control message. 
If none of these bits are set or if none of 39 + the set bits correspond to data that is available, then the control 40 + message will not be generated: 41 + 42 + SOF_TIMESTAMPING_SOFTWARE: report systime if available 43 + SOF_TIMESTAMPING_SYS_HARDWARE: report hwtimetrans if available 44 + SOF_TIMESTAMPING_RAW_HARDWARE: report hwtimeraw if available 45 + 46 + It is worth noting that timestamps may be collected for reasons other 47 + than being requested by a particular socket with 48 + SOF_TIMESTAMPING_[TR]X_(HARD|SOFT)WARE. For example, most drivers that 49 + can generate hardware receive timestamps ignore 50 + SOF_TIMESTAMPING_RX_HARDWARE. It is still a good idea to set that flag 51 + in case future drivers pay attention. 52 + 53 + If timestamps are reported, they will appear in a control message with 54 + cmsg_level==SOL_SOCKET, cmsg_type==SO_TIMESTAMPING, and a payload like 55 + this: 44 56 45 57 struct scm_timestamping { 46 58 struct timespec systime;
+10 -1
MAINTAINERS
··· 474 474 475 475 AGPGART DRIVER 476 476 M: David Airlie <airlied@linux.ie> 477 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6.git 477 + T: git git://people.freedesktop.org/~airlied/linux (part of drm maint) 478 478 S: Maintained 479 479 F: drivers/char/agp/ 480 480 F: include/linux/agp* ··· 1738 1738 BLACKFIN ARCHITECTURE 1739 1739 M: Steven Miao <realmz6@gmail.com> 1740 1740 L: adi-buildroot-devel@lists.sourceforge.net 1741 + T: git git://git.code.sf.net/p/adi-linux/code 1741 1742 W: http://blackfin.uclinux.org 1742 1743 S: Supported 1743 1744 F: arch/blackfin/ ··· 6009 6008 F: include/uapi/linux/in.h 6010 6009 F: include/uapi/linux/net.h 6011 6010 F: include/uapi/linux/netdevice.h 6011 + F: tools/net/ 6012 + F: tools/testing/selftests/net/ 6012 6013 6013 6014 NETWORKING [IPv4/IPv6] 6014 6015 M: "David S. Miller" <davem@davemloft.net> ··· 6184 6181 S: Supported 6185 6182 F: drivers/block/nvme* 6186 6183 F: include/linux/nvme.h 6184 + 6185 + NXP TDA998X DRM DRIVER 6186 + M: Russell King <rmk+kernel@arm.linux.org.uk> 6187 + S: Supported 6188 + F: drivers/gpu/drm/i2c/tda998x_drv.c 6189 + F: include/drm/i2c/tda998x.h 6187 6190 6188 6191 OMAP SUPPORT 6189 6192 M: Tony Lindgren <tony@atomide.com>
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 14 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc5 4 + EXTRAVERSION = -rc6 5 5 NAME = Shuffling Zombie Juror 6 6 7 7 # *DOCUMENTATION*
+2 -2
arch/arc/mm/cache_arc700.c
··· 282 282 #else 283 283 /* if V-P const for loop, PTAG can be written once outside loop */ 284 284 if (full_page_op) 285 - write_aux_reg(ARC_REG_DC_PTAG, paddr); 285 + write_aux_reg(aux_tag, paddr); 286 286 #endif 287 287 288 288 while (num_lines-- > 0) { ··· 296 296 write_aux_reg(aux_cmd, vaddr); 297 297 vaddr += L1_CACHE_BYTES; 298 298 #else 299 - write_aux_reg(aux, paddr); 299 + write_aux_reg(aux_cmd, paddr); 300 300 paddr += L1_CACHE_BYTES; 301 301 #endif 302 302 }
+3
arch/arm/Kconfig
··· 1578 1578 1579 1579 choice 1580 1580 prompt "Memory split" 1581 + depends on MMU 1581 1582 default VMSPLIT_3G 1582 1583 help 1583 1584 Select the desired split between kernel and user memory. ··· 1596 1595 1597 1596 config PAGE_OFFSET 1598 1597 hex 1598 + default PHYS_OFFSET if !MMU 1599 1599 default 0x40000000 if VMSPLIT_1G 1600 1600 default 0x80000000 if VMSPLIT_2G 1601 1601 default 0xC0000000 ··· 1905 1903 depends on ARM && AEABI && OF 1906 1904 depends on CPU_V7 && !CPU_V6 1907 1905 depends on !GENERIC_ATOMIC64 1906 + depends on MMU 1908 1907 select ARM_PSCI 1909 1908 select SWIOTLB_XEN 1910 1909 select ARCH_DMA_ADDR_T_64BIT
+1
arch/arm/boot/compressed/.gitignore
··· 1 1 ashldi3.S 2 + bswapsdi2.S 2 3 font.c 3 4 lib1funcs.S 4 5 hyp-stub.S
+1 -1
arch/arm/boot/dts/bcm11351.dtsi
··· 147 147 }; 148 148 149 149 pinctrl@35004800 { 150 - compatible = "brcm,capri-pinctrl"; 150 + compatible = "brcm,bcm11351-pinctrl"; 151 151 reg = <0x35004800 0x430>; 152 152 }; 153 153
+1 -1
arch/arm/boot/dts/omap3-gta04.dts
··· 13 13 14 14 / { 15 15 model = "OMAP3 GTA04"; 16 - compatible = "ti,omap3-gta04", "ti,omap3"; 16 + compatible = "ti,omap3-gta04", "ti,omap36xx", "ti,omap3"; 17 17 18 18 cpus { 19 19 cpu@0 {
+1 -1
arch/arm/boot/dts/omap3-igep0020.dts
··· 14 14 15 15 / { 16 16 model = "IGEPv2 (TI OMAP AM/DM37x)"; 17 - compatible = "isee,omap3-igep0020", "ti,omap3"; 17 + compatible = "isee,omap3-igep0020", "ti,omap36xx", "ti,omap3"; 18 18 19 19 leds { 20 20 pinctrl-names = "default";
+1 -1
arch/arm/boot/dts/omap3-igep0030.dts
··· 13 13 14 14 / { 15 15 model = "IGEP COM MODULE (TI OMAP AM/DM37x)"; 16 - compatible = "isee,omap3-igep0030", "ti,omap3"; 16 + compatible = "isee,omap3-igep0030", "ti,omap36xx", "ti,omap3"; 17 17 18 18 leds { 19 19 pinctrl-names = "default";
+1 -1
arch/arm/boot/dts/sun4i-a10.dtsi
··· 426 426 }; 427 427 428 428 rtp: rtp@01c25000 { 429 - compatible = "allwinner,sun4i-ts"; 429 + compatible = "allwinner,sun4i-a10-ts"; 430 430 reg = <0x01c25000 0x100>; 431 431 interrupts = <29>; 432 432 };
+1 -1
arch/arm/boot/dts/sun5i-a10s.dtsi
··· 383 383 }; 384 384 385 385 rtp: rtp@01c25000 { 386 - compatible = "allwinner,sun4i-ts"; 386 + compatible = "allwinner,sun4i-a10-ts"; 387 387 reg = <0x01c25000 0x100>; 388 388 interrupts = <29>; 389 389 };
+1 -1
arch/arm/boot/dts/sun5i-a13.dtsi
··· 346 346 }; 347 347 348 348 rtp: rtp@01c25000 { 349 - compatible = "allwinner,sun4i-ts"; 349 + compatible = "allwinner,sun4i-a10-ts"; 350 350 reg = <0x01c25000 0x100>; 351 351 interrupts = <29>; 352 352 };
+6 -6
arch/arm/boot/dts/sun7i-a20.dtsi
··· 454 454 rtc: rtc@01c20d00 { 455 455 compatible = "allwinner,sun7i-a20-rtc"; 456 456 reg = <0x01c20d00 0x20>; 457 - interrupts = <0 24 1>; 457 + interrupts = <0 24 4>; 458 458 }; 459 459 460 460 sid: eeprom@01c23800 { ··· 463 463 }; 464 464 465 465 rtp: rtp@01c25000 { 466 - compatible = "allwinner,sun4i-ts"; 466 + compatible = "allwinner,sun4i-a10-ts"; 467 467 reg = <0x01c25000 0x100>; 468 468 interrupts = <0 29 4>; 469 469 }; ··· 596 596 hstimer@01c60000 { 597 597 compatible = "allwinner,sun7i-a20-hstimer"; 598 598 reg = <0x01c60000 0x1000>; 599 - interrupts = <0 81 1>, 600 - <0 82 1>, 601 - <0 83 1>, 602 - <0 84 1>; 599 + interrupts = <0 81 4>, 600 + <0 82 4>, 601 + <0 83 4>, 602 + <0 84 4>; 603 603 clocks = <&ahb_gates 28>; 604 604 }; 605 605
+3
arch/arm/configs/tegra_defconfig
··· 204 204 CONFIG_MMC_SDHCI=y 205 205 CONFIG_MMC_SDHCI_PLTFM=y 206 206 CONFIG_MMC_SDHCI_TEGRA=y 207 + CONFIG_NEW_LEDS=y 208 + CONFIG_LEDS_CLASS=y 207 209 CONFIG_LEDS_GPIO=y 210 + CONFIG_LEDS_TRIGGERS=y 208 211 CONFIG_LEDS_TRIGGER_TIMER=y 209 212 CONFIG_LEDS_TRIGGER_ONESHOT=y 210 213 CONFIG_LEDS_TRIGGER_HEARTBEAT=y
+3 -6
arch/arm/include/asm/memory.h
··· 30 30 */ 31 31 #define UL(x) _AC(x, UL) 32 32 33 + /* PAGE_OFFSET - the virtual address of the start of the kernel image */ 34 + #define PAGE_OFFSET UL(CONFIG_PAGE_OFFSET) 35 + 33 36 #ifdef CONFIG_MMU 34 37 35 38 /* 36 - * PAGE_OFFSET - the virtual address of the start of the kernel image 37 39 * TASK_SIZE - the maximum size of a user space task. 38 40 * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area 39 41 */ 40 - #define PAGE_OFFSET UL(CONFIG_PAGE_OFFSET) 41 42 #define TASK_SIZE (UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M)) 42 43 #define TASK_UNMAPPED_BASE ALIGN(TASK_SIZE / 3, SZ_16M) 43 44 ··· 103 102 104 103 #ifndef END_MEM 105 104 #define END_MEM (UL(CONFIG_DRAM_BASE) + CONFIG_DRAM_SIZE) 106 - #endif 107 - 108 - #ifndef PAGE_OFFSET 109 - #define PAGE_OFFSET PLAT_PHYS_OFFSET 110 105 #endif 111 106 112 107 /*
+12
arch/arm/kernel/head-common.S
··· 177 177 .long __proc_info_end 178 178 .size __lookup_processor_type_data, . - __lookup_processor_type_data 179 179 180 + __error_lpae: 181 + #ifdef CONFIG_DEBUG_LL 182 + adr r0, str_lpae 183 + bl printascii 184 + b __error 185 + str_lpae: .asciz "\nError: Kernel with LPAE support, but CPU does not support LPAE.\n" 186 + #else 187 + b __error 188 + #endif 189 + .align 190 + ENDPROC(__error_lpae) 191 + 180 192 __error_p: 181 193 #ifdef CONFIG_DEBUG_LL 182 194 adr r0, str_p1
+1 -1
arch/arm/kernel/head.S
··· 102 102 and r3, r3, #0xf @ extract VMSA support 103 103 cmp r3, #5 @ long-descriptor translation table format? 104 104 THUMB( it lo ) @ force fixup-able long branch encoding 105 - blo __error_p @ only classic page table format 105 + blo __error_lpae @ only classic page table format 106 106 #endif 107 107 108 108 #ifndef CONFIG_XIP_KERNEL
+2
arch/arm/mach-omap2/cclock3xxx_data.c
··· 433 433 .enable = &omap2_dflt_clk_enable, 434 434 .disable = &omap2_dflt_clk_disable, 435 435 .is_enabled = &omap2_dflt_clk_is_enabled, 436 + .set_rate = &omap3_clkoutx2_set_rate, 436 437 .recalc_rate = &omap3_clkoutx2_recalc, 438 + .round_rate = &omap3_clkoutx2_round_rate, 437 439 }; 438 440 439 441 static const struct clk_ops dpll4_m5x2_ck_3630_ops = {
+5 -3
arch/arm/mach-omap2/cpuidle44xx.c
··· 23 23 #include "prm.h" 24 24 #include "clockdomain.h" 25 25 26 + #define MAX_CPUS 2 27 + 26 28 /* Machine specific information */ 27 29 struct idle_statedata { 28 30 u32 cpu_state; ··· 50 48 }, 51 49 }; 52 50 53 - static struct powerdomain *mpu_pd, *cpu_pd[NR_CPUS]; 54 - static struct clockdomain *cpu_clkdm[NR_CPUS]; 51 + static struct powerdomain *mpu_pd, *cpu_pd[MAX_CPUS]; 52 + static struct clockdomain *cpu_clkdm[MAX_CPUS]; 55 53 56 54 static atomic_t abort_barrier; 57 - static bool cpu_done[NR_CPUS]; 55 + static bool cpu_done[MAX_CPUS]; 58 56 static struct idle_statedata *state_ptr = &omap4_idle_data[0]; 59 57 60 58 /* Private functions */
+78 -16
arch/arm/mach-omap2/dpll3xxx.c
··· 623 623 624 624 /* Clock control for DPLL outputs */ 625 625 626 - /** 627 - * omap3_clkoutx2_recalc - recalculate DPLL X2 output virtual clock rate 628 - * @clk: DPLL output struct clk 629 - * 630 - * Using parent clock DPLL data, look up DPLL state. If locked, set our 631 - * rate to the dpll_clk * 2; otherwise, just use dpll_clk. 632 - */ 633 - unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw, 634 - unsigned long parent_rate) 626 + /* Find the parent DPLL for the given clkoutx2 clock */ 627 + static struct clk_hw_omap *omap3_find_clkoutx2_dpll(struct clk_hw *hw) 635 628 { 636 - const struct dpll_data *dd; 637 - unsigned long rate; 638 - u32 v; 639 629 struct clk_hw_omap *pclk = NULL; 640 630 struct clk *parent; 641 - 642 - if (!parent_rate) 643 - return 0; 644 631 645 632 /* Walk up the parents of clk, looking for a DPLL */ 646 633 do { ··· 643 656 /* clk does not have a DPLL as a parent? error in the clock data */ 644 657 if (!pclk) { 645 658 WARN_ON(1); 646 - return 0; 659 + return NULL; 647 660 } 661 + 662 + return pclk; 663 + } 664 + 665 + /** 666 + * omap3_clkoutx2_recalc - recalculate DPLL X2 output virtual clock rate 667 + * @clk: DPLL output struct clk 668 + * 669 + * Using parent clock DPLL data, look up DPLL state. If locked, set our 670 + * rate to the dpll_clk * 2; otherwise, just use dpll_clk. 
671 + */ 672 + unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw, 673 + unsigned long parent_rate) 674 + { 675 + const struct dpll_data *dd; 676 + unsigned long rate; 677 + u32 v; 678 + struct clk_hw_omap *pclk = NULL; 679 + 680 + if (!parent_rate) 681 + return 0; 682 + 683 + pclk = omap3_find_clkoutx2_dpll(hw); 684 + 685 + if (!pclk) 686 + return 0; 648 687 649 688 dd = pclk->dpll_data; 650 689 ··· 683 670 else 684 671 rate = parent_rate * 2; 685 672 return rate; 673 + } 674 + 675 + int omap3_clkoutx2_set_rate(struct clk_hw *hw, unsigned long rate, 676 + unsigned long parent_rate) 677 + { 678 + return 0; 679 + } 680 + 681 + long omap3_clkoutx2_round_rate(struct clk_hw *hw, unsigned long rate, 682 + unsigned long *prate) 683 + { 684 + const struct dpll_data *dd; 685 + u32 v; 686 + struct clk_hw_omap *pclk = NULL; 687 + 688 + if (!*prate) 689 + return 0; 690 + 691 + pclk = omap3_find_clkoutx2_dpll(hw); 692 + 693 + if (!pclk) 694 + return 0; 695 + 696 + dd = pclk->dpll_data; 697 + 698 + /* TYPE J does not have a clkoutx2 */ 699 + if (dd->flags & DPLL_J_TYPE) { 700 + *prate = __clk_round_rate(__clk_get_parent(pclk->hw.clk), rate); 701 + return *prate; 702 + } 703 + 704 + WARN_ON(!dd->enable_mask); 705 + 706 + v = omap2_clk_readl(pclk, dd->control_reg) & dd->enable_mask; 707 + v >>= __ffs(dd->enable_mask); 708 + 709 + /* If in bypass, the rate is fixed to the bypass rate*/ 710 + if (v != OMAP3XXX_EN_DPLL_LOCKED) 711 + return *prate; 712 + 713 + if (__clk_get_flags(hw->clk) & CLK_SET_RATE_PARENT) { 714 + unsigned long best_parent; 715 + 716 + best_parent = (rate / 2); 717 + *prate = __clk_round_rate(__clk_get_parent(hw->clk), 718 + best_parent); 719 + } 720 + 721 + return *prate * 2; 686 722 } 687 723 688 724 /* OMAP3/4 non-CORE DPLL clkops */
+14 -12
arch/arm/mach-omap2/omap_hwmod.c
··· 1947 1947 goto dis_opt_clks; 1948 1948 1949 1949 _write_sysconfig(v, oh); 1950 + 1951 + if (oh->class->sysc->srst_udelay) 1952 + udelay(oh->class->sysc->srst_udelay); 1953 + 1954 + c = _wait_softreset_complete(oh); 1955 + if (c == MAX_MODULE_SOFTRESET_WAIT) { 1956 + pr_warning("omap_hwmod: %s: softreset failed (waited %d usec)\n", 1957 + oh->name, MAX_MODULE_SOFTRESET_WAIT); 1958 + ret = -ETIMEDOUT; 1959 + goto dis_opt_clks; 1960 + } else { 1961 + pr_debug("omap_hwmod: %s: softreset in %d usec\n", oh->name, c); 1962 + } 1963 + 1950 1964 ret = _clear_softreset(oh, &v); 1951 1965 if (ret) 1952 1966 goto dis_opt_clks; 1953 1967 1954 1968 _write_sysconfig(v, oh); 1955 1969 1956 - if (oh->class->sysc->srst_udelay) 1957 - udelay(oh->class->sysc->srst_udelay); 1958 - 1959 - c = _wait_softreset_complete(oh); 1960 - if (c == MAX_MODULE_SOFTRESET_WAIT) 1961 - pr_warning("omap_hwmod: %s: softreset failed (waited %d usec)\n", 1962 - oh->name, MAX_MODULE_SOFTRESET_WAIT); 1963 - else 1964 - pr_debug("omap_hwmod: %s: softreset in %d usec\n", oh->name, c); 1965 - 1966 1970 /* 1967 1971 * XXX add _HWMOD_STATE_WEDGED for modules that don't come back from 1968 1972 * _wait_target_ready() or _reset() 1969 1973 */ 1970 - 1971 - ret = (c == MAX_MODULE_SOFTRESET_WAIT) ? -ETIMEDOUT : 0; 1972 1974 1973 1975 dis_opt_clks: 1974 1976 if (oh->flags & HWMOD_CONTROL_OPT_CLKS_IN_RESET)
+4 -5
arch/arm/mach-omap2/omap_hwmod_7xx_data.c
··· 1365 1365 .rev_offs = 0x0000, 1366 1366 .sysc_offs = 0x0010, 1367 1367 .syss_offs = 0x0014, 1368 - .sysc_flags = (SYSC_HAS_AUTOIDLE | SYSC_HAS_CLOCKACTIVITY | 1369 - SYSC_HAS_ENAWAKEUP | SYSC_HAS_SIDLEMODE | 1370 - SYSC_HAS_SOFTRESET | SYSS_HAS_RESET_STATUS), 1371 - .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART | 1372 - SIDLE_SMART_WKUP), 1368 + .sysc_flags = (SYSC_HAS_AUTOIDLE | SYSC_HAS_ENAWAKEUP | 1369 + SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET | 1370 + SYSS_HAS_RESET_STATUS), 1371 + .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART), 1373 1372 .sysc_fields = &omap_hwmod_sysc_type1, 1374 1373 }; 1375 1374
+20 -1
arch/arm/mach-omap2/pdata-quirks.c
··· 22 22 #include "common-board-devices.h" 23 23 #include "dss-common.h" 24 24 #include "control.h" 25 + #include "omap-secure.h" 26 + #include "soc.h" 25 27 26 28 struct pdata_init { 27 29 const char *compatible; ··· 171 169 omap_ctrl_writel(v, AM35XX_CONTROL_IP_SW_RESET); 172 170 omap_ctrl_readl(AM35XX_CONTROL_IP_SW_RESET); /* OCP barrier */ 173 171 } 172 + 173 + static void __init nokia_n900_legacy_init(void) 174 + { 175 + hsmmc2_internal_input_clk(); 176 + 177 + if (omap_type() == OMAP2_DEVICE_TYPE_SEC) { 178 + if (IS_ENABLED(CONFIG_ARM_ERRATA_430973)) { 179 + pr_info("RX-51: Enabling ARM errata 430973 workaround\n"); 180 + /* set IBE to 1 */ 181 + rx51_secure_update_aux_cr(BIT(6), 0); 182 + } else { 183 + pr_warning("RX-51: Not enabling ARM errata 430973 workaround\n"); 184 + pr_warning("Thumb binaries may crash randomly without this workaround\n"); 185 + } 186 + } 187 + } 174 188 #endif /* CONFIG_ARCH_OMAP3 */ 175 189 176 190 #ifdef CONFIG_ARCH_OMAP4 ··· 257 239 #endif 258 240 #ifdef CONFIG_ARCH_OMAP3 259 241 OF_DEV_AUXDATA("ti,omap3-padconf", 0x48002030, "48002030.pinmux", &pcs_pdata), 242 + OF_DEV_AUXDATA("ti,omap3-padconf", 0x480025a0, "480025a0.pinmux", &pcs_pdata), 260 243 OF_DEV_AUXDATA("ti,omap3-padconf", 0x48002a00, "48002a00.pinmux", &pcs_pdata), 261 244 /* Only on am3517 */ 262 245 OF_DEV_AUXDATA("ti,davinci_mdio", 0x5c030000, "davinci_mdio.0", NULL), ··· 278 259 static struct pdata_init pdata_quirks[] __initdata = { 279 260 #ifdef CONFIG_ARCH_OMAP3 280 261 { "compulab,omap3-sbc-t3730", omap3_sbc_t3730_legacy_init, }, 281 - { "nokia,omap3-n900", hsmmc2_internal_input_clk, }, 262 + { "nokia,omap3-n900", nokia_n900_legacy_init, }, 282 263 { "nokia,omap3-n9", hsmmc2_internal_input_clk, }, 283 264 { "nokia,omap3-n950", hsmmc2_internal_input_clk, }, 284 265 { "isee,omap3-igep0020", omap3_igep0020_legacy_init, },
+2 -2
arch/arm/mach-omap2/prminst44xx.c
··· 183 183 OMAP4_PRM_RSTCTRL_OFFSET); 184 184 v |= OMAP4430_RST_GLOBAL_WARM_SW_MASK; 185 185 omap4_prminst_write_inst_reg(v, OMAP4430_PRM_PARTITION, 186 - OMAP4430_PRM_DEVICE_INST, 186 + dev_inst, 187 187 OMAP4_PRM_RSTCTRL_OFFSET); 188 188 189 189 /* OCP barrier */ 190 190 v = omap4_prminst_read_inst_reg(OMAP4430_PRM_PARTITION, 191 - OMAP4430_PRM_DEVICE_INST, 191 + dev_inst, 192 192 OMAP4_PRM_RSTCTRL_OFFSET); 193 193 }
+2
arch/arm/mach-sa1100/include/mach/collie.h
··· 13 13 #ifndef __ASM_ARCH_COLLIE_H 14 14 #define __ASM_ARCH_COLLIE_H 15 15 16 + #include "hardware.h" /* Gives GPIO_MAX */ 17 + 16 18 extern void locomolcd_power(int on); 17 19 18 20 #define COLLIE_SCOOP_GPIO_BASE (GPIO_MAX + 1)
+3
arch/arm/mm/dump.c
··· 264 264 note_page(st, addr, 3, pmd_val(*pmd)); 265 265 else 266 266 walk_pte(st, pmd, addr); 267 + 268 + if (SECTION_SIZE < PMD_SIZE && pmd_large(pmd[1])) 269 + note_page(st, addr + SECTION_SIZE, 3, pmd_val(pmd[1])); 267 270 } 268 271 } 269 272
+1
arch/c6x/include/asm/cache.h
··· 12 12 #define _ASM_C6X_CACHE_H 13 13 14 14 #include <linux/irqflags.h> 15 + #include <linux/init.h> 15 16 16 17 /* 17 18 * Cache line size
+1 -1
arch/cris/include/asm/bitops.h
··· 144 144 * definition, which doesn't have the same semantics. We don't want to 145 145 * use -fno-builtin, so just hide the name ffs. 146 146 */ 147 - #define ffs kernel_ffs 147 + #define ffs(x) kernel_ffs(x) 148 148 149 149 #include <asm-generic/bitops/fls.h> 150 150 #include <asm-generic/bitops/__fls.h>
+1 -1
arch/ia64/kernel/uncached.c
··· 98 98 /* attempt to allocate a granule's worth of cached memory pages */ 99 99 100 100 page = alloc_pages_exact_node(nid, 101 - GFP_KERNEL | __GFP_ZERO | GFP_THISNODE, 101 + GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE, 102 102 IA64_GRANULE_SHIFT-PAGE_SHIFT); 103 103 if (!page) { 104 104 mutex_unlock(&uc_pool->add_chunk_mutex);
+9
arch/powerpc/kernel/process.c
··· 1048 1048 flush_altivec_to_thread(src); 1049 1049 flush_vsx_to_thread(src); 1050 1050 flush_spe_to_thread(src); 1051 + /* 1052 + * Flush TM state out so we can copy it. __switch_to_tm() does this 1053 + * flush but it removes the checkpointed state from the current CPU and 1054 + * transitions the CPU out of TM mode. Hence we need to call 1055 + * tm_recheckpoint_new_task() (on the same task) to restore the 1056 + * checkpointed state back and the TM mode. 1057 + */ 1058 + __switch_to_tm(src); 1059 + tm_recheckpoint_new_task(src); 1051 1060 1052 1061 *dst = *src; 1053 1062
+1
arch/powerpc/kernel/reloc_64.S
··· 81 81 82 82 6: blr 83 83 84 + .balign 8 84 85 p_dyn: .llong __dynamic_start - 0b 85 86 p_rela: .llong __rela_dyn_start - 0b 86 87 p_st: .llong _stext - 0b
+2 -1
arch/powerpc/platforms/cell/ras.c
··· 123 123 124 124 area->nid = nid; 125 125 area->order = order; 126 - area->pages = alloc_pages_exact_node(area->nid, GFP_KERNEL|GFP_THISNODE, 126 + area->pages = alloc_pages_exact_node(area->nid, 127 + GFP_KERNEL|__GFP_THISNODE, 127 128 area->order); 128 129 129 130 if (!area->pages) {
-4
arch/x86/Kconfig.cpu
··· 341 341 def_bool y 342 342 depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML 343 343 344 - config X86_OOSTORE 345 - def_bool y 346 - depends on (MWINCHIP3D || MWINCHIPC6) && MTRR 347 - 348 344 # 349 345 # P6_NOPs are a relatively minor optimization that require a family >= 350 346 # 6 processor, except that it is broken on certain VIA chips.
+2 -6
arch/x86/include/asm/barrier.h
··· 85 85 #else 86 86 # define smp_rmb() barrier() 87 87 #endif 88 - #ifdef CONFIG_X86_OOSTORE 89 - # define smp_wmb() wmb() 90 - #else 91 - # define smp_wmb() barrier() 92 - #endif 88 + #define smp_wmb() barrier() 93 89 #define smp_read_barrier_depends() read_barrier_depends() 94 90 #define set_mb(var, value) do { (void)xchg(&var, value); } while (0) 95 91 #else /* !SMP */ ··· 96 100 #define set_mb(var, value) do { var = value; barrier(); } while (0) 97 101 #endif /* SMP */ 98 102 99 - #if defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE) 103 + #if defined(CONFIG_X86_PPRO_FENCE) 100 104 101 105 /* 102 106 * For either of these options x86 doesn't have a strong TSO memory
+1
arch/x86/include/asm/efi.h
··· 134 134 extern void __init old_map_region(efi_memory_desc_t *md); 135 135 extern void __init runtime_code_page_mkexec(void); 136 136 extern void __init efi_runtime_mkexec(void); 137 + extern void __init efi_apply_memmap_quirks(void); 137 138 138 139 struct efi_setup_data { 139 140 u64 fw_vendor;
+1 -1
arch/x86/include/asm/io.h
··· 237 237 238 238 static inline void flush_write_buffers(void) 239 239 { 240 - #if defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE) 240 + #if defined(CONFIG_X86_PPRO_FENCE) 241 241 asm volatile("lock; addl $0,0(%%esp)": : :"memory"); 242 242 #endif 243 243 }
+2 -3
arch/x86/include/asm/spinlock.h
··· 26 26 # define LOCK_PTR_REG "D" 27 27 #endif 28 28 29 - #if defined(CONFIG_X86_32) && \ 30 - (defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE)) 29 + #if defined(CONFIG_X86_32) && (defined(CONFIG_X86_PPRO_FENCE)) 31 30 /* 32 - * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock 31 + * On PPro SMP, we use a locked operation to unlock 33 32 * (PPro errata 66, 92) 34 33 */ 35 34 # define UNLOCK_LOCK_PREFIX LOCK_PREFIX
-272
arch/x86/kernel/cpu/centaur.c
··· 8 8 9 9 #include "cpu.h" 10 10 11 - #ifdef CONFIG_X86_OOSTORE 12 - 13 - static u32 power2(u32 x) 14 - { 15 - u32 s = 1; 16 - 17 - while (s <= x) 18 - s <<= 1; 19 - 20 - return s >>= 1; 21 - } 22 - 23 - 24 - /* 25 - * Set up an actual MCR 26 - */ 27 - static void centaur_mcr_insert(int reg, u32 base, u32 size, int key) 28 - { 29 - u32 lo, hi; 30 - 31 - hi = base & ~0xFFF; 32 - lo = ~(size-1); /* Size is a power of 2 so this makes a mask */ 33 - lo &= ~0xFFF; /* Remove the ctrl value bits */ 34 - lo |= key; /* Attribute we wish to set */ 35 - wrmsr(reg+MSR_IDT_MCR0, lo, hi); 36 - mtrr_centaur_report_mcr(reg, lo, hi); /* Tell the mtrr driver */ 37 - } 38 - 39 - /* 40 - * Figure what we can cover with MCR's 41 - * 42 - * Shortcut: We know you can't put 4Gig of RAM on a winchip 43 - */ 44 - static u32 ramtop(void) 45 - { 46 - u32 clip = 0xFFFFFFFFUL; 47 - u32 top = 0; 48 - int i; 49 - 50 - for (i = 0; i < e820.nr_map; i++) { 51 - unsigned long start, end; 52 - 53 - if (e820.map[i].addr > 0xFFFFFFFFUL) 54 - continue; 55 - /* 56 - * Don't MCR over reserved space. Ignore the ISA hole 57 - * we frob around that catastrophe already 58 - */ 59 - if (e820.map[i].type == E820_RESERVED) { 60 - if (e820.map[i].addr >= 0x100000UL && 61 - e820.map[i].addr < clip) 62 - clip = e820.map[i].addr; 63 - continue; 64 - } 65 - start = e820.map[i].addr; 66 - end = e820.map[i].addr + e820.map[i].size; 67 - if (start >= end) 68 - continue; 69 - if (end > top) 70 - top = end; 71 - } 72 - /* 73 - * Everything below 'top' should be RAM except for the ISA hole. 74 - * Because of the limited MCR's we want to map NV/ACPI into our 75 - * MCR range for gunk in RAM 76 - * 77 - * Clip might cause us to MCR insufficient RAM but that is an 78 - * acceptable failure mode and should only bite obscure boxes with 79 - * a VESA hole at 15Mb 80 - * 81 - * The second case Clip sometimes kicks in is when the EBDA is marked 82 - * as reserved. 
Again we fail safe with reasonable results 83 - */ 84 - if (top > clip) 85 - top = clip; 86 - 87 - return top; 88 - } 89 - 90 - /* 91 - * Compute a set of MCR's to give maximum coverage 92 - */ 93 - static int centaur_mcr_compute(int nr, int key) 94 - { 95 - u32 mem = ramtop(); 96 - u32 root = power2(mem); 97 - u32 base = root; 98 - u32 top = root; 99 - u32 floor = 0; 100 - int ct = 0; 101 - 102 - while (ct < nr) { 103 - u32 fspace = 0; 104 - u32 high; 105 - u32 low; 106 - 107 - /* 108 - * Find the largest block we will fill going upwards 109 - */ 110 - high = power2(mem-top); 111 - 112 - /* 113 - * Find the largest block we will fill going downwards 114 - */ 115 - low = base/2; 116 - 117 - /* 118 - * Don't fill below 1Mb going downwards as there 119 - * is an ISA hole in the way. 120 - */ 121 - if (base <= 1024*1024) 122 - low = 0; 123 - 124 - /* 125 - * See how much space we could cover by filling below 126 - * the ISA hole 127 - */ 128 - 129 - if (floor == 0) 130 - fspace = 512*1024; 131 - else if (floor == 512*1024) 132 - fspace = 128*1024; 133 - 134 - /* And forget ROM space */ 135 - 136 - /* 137 - * Now install the largest coverage we get 138 - */ 139 - if (fspace > high && fspace > low) { 140 - centaur_mcr_insert(ct, floor, fspace, key); 141 - floor += fspace; 142 - } else if (high > low) { 143 - centaur_mcr_insert(ct, top, high, key); 144 - top += high; 145 - } else if (low > 0) { 146 - base -= low; 147 - centaur_mcr_insert(ct, base, low, key); 148 - } else 149 - break; 150 - ct++; 151 - } 152 - /* 153 - * We loaded ct values. We now need to set the mask. The caller 154 - * must do this bit. 155 - */ 156 - return ct; 157 - } 158 - 159 - static void centaur_create_optimal_mcr(void) 160 - { 161 - int used; 162 - int i; 163 - 164 - /* 165 - * Allocate up to 6 mcrs to mark as much of ram as possible 166 - * as write combining and weak write ordered. 
167 - * 168 - * To experiment with: Linux never uses stack operations for 169 - * mmio spaces so we could globally enable stack operation wc 170 - * 171 - * Load the registers with type 31 - full write combining, all 172 - * writes weakly ordered. 173 - */ 174 - used = centaur_mcr_compute(6, 31); 175 - 176 - /* 177 - * Wipe unused MCRs 178 - */ 179 - for (i = used; i < 8; i++) 180 - wrmsr(MSR_IDT_MCR0+i, 0, 0); 181 - } 182 - 183 - static void winchip2_create_optimal_mcr(void) 184 - { 185 - u32 lo, hi; 186 - int used; 187 - int i; 188 - 189 - /* 190 - * Allocate up to 6 mcrs to mark as much of ram as possible 191 - * as write combining, weak store ordered. 192 - * 193 - * Load the registers with type 25 194 - * 8 - weak write ordering 195 - * 16 - weak read ordering 196 - * 1 - write combining 197 - */ 198 - used = centaur_mcr_compute(6, 25); 199 - 200 - /* 201 - * Mark the registers we are using. 202 - */ 203 - rdmsr(MSR_IDT_MCR_CTRL, lo, hi); 204 - for (i = 0; i < used; i++) 205 - lo |= 1<<(9+i); 206 - wrmsr(MSR_IDT_MCR_CTRL, lo, hi); 207 - 208 - /* 209 - * Wipe unused MCRs 210 - */ 211 - 212 - for (i = used; i < 8; i++) 213 - wrmsr(MSR_IDT_MCR0+i, 0, 0); 214 - } 215 - 216 - /* 217 - * Handle the MCR key on the Winchip 2. 
218 - */ 219 - static void winchip2_unprotect_mcr(void) 220 - { 221 - u32 lo, hi; 222 - u32 key; 223 - 224 - rdmsr(MSR_IDT_MCR_CTRL, lo, hi); 225 - lo &= ~0x1C0; /* blank bits 8-6 */ 226 - key = (lo>>17) & 7; 227 - lo |= key<<6; /* replace with unlock key */ 228 - wrmsr(MSR_IDT_MCR_CTRL, lo, hi); 229 - } 230 - 231 - static void winchip2_protect_mcr(void) 232 - { 233 - u32 lo, hi; 234 - 235 - rdmsr(MSR_IDT_MCR_CTRL, lo, hi); 236 - lo &= ~0x1C0; /* blank bits 8-6 */ 237 - wrmsr(MSR_IDT_MCR_CTRL, lo, hi); 238 - } 239 - #endif /* CONFIG_X86_OOSTORE */ 240 - 241 11 #define ACE_PRESENT (1 << 6) 242 12 #define ACE_ENABLED (1 << 7) 243 13 #define ACE_FCR (1 << 28) /* MSR_VIA_FCR */ ··· 132 362 fcr_clr = DPDC; 133 363 printk(KERN_NOTICE "Disabling bugged TSC.\n"); 134 364 clear_cpu_cap(c, X86_FEATURE_TSC); 135 - #ifdef CONFIG_X86_OOSTORE 136 - centaur_create_optimal_mcr(); 137 - /* 138 - * Enable: 139 - * write combining on non-stack, non-string 140 - * write combining on string, all types 141 - * weak write ordering 142 - * 143 - * The C6 original lacks weak read order 144 - * 145 - * Note 0x120 is write only on Winchip 1 146 - */ 147 - wrmsr(MSR_IDT_MCR_CTRL, 0x01F0001F, 0); 148 - #endif 149 365 break; 150 366 case 8: 151 367 switch (c->x86_mask) { ··· 148 392 fcr_set = ECX8|DSMC|DTLOCK|EMMX|EBRPRED|ERETSTK| 149 393 E2MMX|EAMD3D; 150 394 fcr_clr = DPDC; 151 - #ifdef CONFIG_X86_OOSTORE 152 - winchip2_unprotect_mcr(); 153 - winchip2_create_optimal_mcr(); 154 - rdmsr(MSR_IDT_MCR_CTRL, lo, hi); 155 - /* 156 - * Enable: 157 - * write combining on non-stack, non-string 158 - * write combining on string, all types 159 - * weak write ordering 160 - */ 161 - lo |= 31; 162 - wrmsr(MSR_IDT_MCR_CTRL, lo, hi); 163 - winchip2_protect_mcr(); 164 - #endif 165 395 break; 166 396 case 9: 167 397 name = "3"; 168 398 fcr_set = ECX8|DSMC|DTLOCK|EMMX|EBRPRED|ERETSTK| 169 399 E2MMX|EAMD3D; 170 400 fcr_clr = DPDC; 171 - #ifdef CONFIG_X86_OOSTORE 172 - winchip2_unprotect_mcr(); 173 - 
winchip2_create_optimal_mcr(); 174 - rdmsr(MSR_IDT_MCR_CTRL, lo, hi); 175 - /* 176 - * Enable: 177 - * write combining on non-stack, non-string 178 - * write combining on string, all types 179 - * weak write ordering 180 - */ 181 - lo |= 31; 182 - wrmsr(MSR_IDT_MCR_CTRL, lo, hi); 183 - winchip2_protect_mcr(); 184 - #endif 185 401 break; 186 402 default: 187 403 name = "??";
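The deleted MCR-sizing code above is built around a round-down-to-power-of-two helper. As a standalone userspace sketch (hypothetical rendition, with an overflow guard the original 32-bit-only code did not need):

```c
#include <stdint.h>

/* Largest power of two <= x (0 for x == 0), as used by the removed
 * centaur_mcr_compute() to pick MCR block sizes.  The extra bound on
 * s avoids wrapping for x >= 2^31. */
static uint32_t power2(uint32_t x)
{
    uint32_t s = 1;

    while (s <= x && s <= UINT32_MAX / 2)
        s <<= 1;
    /* If s grew past x we went one doubling too far. */
    return (s > x) ? s >> 1 : s;
}
```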
+6 -1
arch/x86/kernel/head_32.S
··· 544 544 /* This is global to keep gas from relaxing the jumps */ 545 545 ENTRY(early_idt_handler) 546 546 cld 547 + 548 + cmpl $2,(%esp) # X86_TRAP_NMI 549 + je is_nmi # Ignore NMI 550 + 547 551 cmpl $2,%ss:early_recursion_flag 548 552 je hlt_loop 549 553 incl %ss:early_recursion_flag ··· 598 594 pop %edx 599 595 pop %ecx 600 596 pop %eax 601 - addl $8,%esp /* drop vector number and error code */ 602 597 decl %ss:early_recursion_flag 598 + is_nmi: 599 + addl $8,%esp /* drop vector number and error code */ 603 600 iret 604 601 ENDPROC(early_idt_handler) 605 602
+5 -1
arch/x86/kernel/head_64.S
··· 343 343 ENTRY(early_idt_handler) 344 344 cld 345 345 346 + cmpl $2,(%rsp) # X86_TRAP_NMI 347 + je is_nmi # Ignore NMI 348 + 346 349 cmpl $2,early_recursion_flag(%rip) 347 350 jz 1f 348 351 incl early_recursion_flag(%rip) ··· 408 405 popq %rdx 409 406 popq %rcx 410 407 popq %rax 411 - addq $16,%rsp # drop vector number and error code 412 408 decl early_recursion_flag(%rip) 409 + is_nmi: 410 + addq $16,%rsp # drop vector number and error code 413 411 INTERRUPT_RETURN 414 412 ENDPROC(early_idt_handler) 415 413
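Both the head_32.S and head_64.S hunks make the early IDT handler return immediately for vector 2 (NMI) instead of treating it as an unexpected trap, and move the stack cleanup so the ignored path still drops the vector number and error code before `iret`. A C-level sketch of that control flow (the real code is assembly; names here are illustrative):

```c
#include <stdbool.h>

#define X86_TRAP_NMI 2

/* Returns true if the early handler should service the trap, false
 * if it is an NMI that must simply be ignored.  Either way the
 * caller pops vector + error code before returning, mirroring how
 * the patch moved "addl $8,%esp" below the is_nmi label. */
static bool early_trap_should_handle(int vector)
{
    if (vector == X86_TRAP_NMI)
        return false;   /* is_nmi: skip straight to stack cleanup */
    return true;
}
```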
+12 -3
arch/x86/kernel/i387.c
··· 86 86 87 87 void __kernel_fpu_end(void) 88 88 { 89 - if (use_eager_fpu()) 90 - math_state_restore(); 91 - else 89 + if (use_eager_fpu()) { 90 + /* 91 + * For eager fpu, most of the time, tsk_used_math() is true. 92 + * Restore the user math as we are done with the kernel usage. 93 + * At a few instances during thread exit, signal handling etc., 94 + * tsk_used_math() is false. Those few places will take proper 95 + * actions, so we don't need to restore the math here. 96 + */ 97 + if (likely(tsk_used_math(current))) 98 + math_state_restore(); 99 + } else { 92 100 stts(); 101 + } 93 102 } 94 103 EXPORT_SYMBOL(__kernel_fpu_end); 95 104
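The new branch only restores user FPU state when the task actually owns some; otherwise it either skips the restore (eager FPU) or falls back to setting CR0.TS. A hypothetical userspace model of that decision:

```c
#include <stdbool.h>

enum fpu_action { FPU_RESTORE, FPU_SKIP, FPU_SET_TS };

/* Decision taken by __kernel_fpu_end() after the patch: with eager
 * FPU, restore user math only if the task has used math; without
 * eager FPU, set CR0.TS via stts() as before. */
static enum fpu_action kernel_fpu_end_action(bool eager_fpu, bool used_math)
{
    if (eager_fpu)
        return used_math ? FPU_RESTORE : FPU_SKIP;
    return FPU_SET_TS;
}
```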
+1 -1
arch/x86/kernel/quirks.c
··· 529 529 return; 530 530 531 531 pci_read_config_dword(nb_ht, 0x60, &val); 532 - node = val & 7; 532 + node = pcibus_to_node(dev->bus) | (val & 7); 533 533 /* 534 534 * Some hardware may return an invalid node ID, 535 535 * so check it first:
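The quirks.c one-liner combines the NUMA node of the PCI bus with the low three bits read from the northbridge register, instead of trusting the register alone. The bit arithmetic, with hypothetical values:

```c
#include <stdint.h>

/* After the patch: upper node bits come from pcibus_to_node(), while
 * the hardware register contributes only its low 3 bits (val & 7). */
static int amd_quirk_node(int bus_node, uint32_t val)
{
    return bus_node | (int)(val & 7);
}
```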
+2 -8
arch/x86/kernel/setup.c
··· 1239 1239 register_refined_jiffies(CLOCK_TICK_RATE); 1240 1240 1241 1241 #ifdef CONFIG_EFI 1242 - /* Once setup is done above, unmap the EFI memory map on 1243 - * mismatched firmware/kernel archtectures since there is no 1244 - * support for runtime services. 1245 - */ 1246 - if (efi_enabled(EFI_BOOT) && !efi_is_native()) { 1247 - pr_info("efi: Setup done, disabling due to 32/64-bit mismatch\n"); 1248 - efi_unmap_memmap(); 1249 - } 1242 + if (efi_enabled(EFI_BOOT)) 1243 + efi_apply_memmap_quirks(); 1250 1244 #endif 1251 1245 } 1252 1246
+3 -3
arch/x86/kvm/svm.c
··· 3002 3002 u8 cr8_prev = kvm_get_cr8(&svm->vcpu); 3003 3003 /* instruction emulation calls kvm_set_cr8() */ 3004 3004 r = cr_interception(svm); 3005 - if (irqchip_in_kernel(svm->vcpu.kvm)) { 3006 - clr_cr_intercept(svm, INTERCEPT_CR8_WRITE); 3005 + if (irqchip_in_kernel(svm->vcpu.kvm)) 3007 3006 return r; 3008 - } 3009 3007 if (cr8_prev <= kvm_get_cr8(&svm->vcpu)) 3010 3008 return r; 3011 3009 kvm_run->exit_reason = KVM_EXIT_SET_TPR; ··· 3564 3566 3565 3567 if (is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK)) 3566 3568 return; 3569 + 3570 + clr_cr_intercept(svm, INTERCEPT_CR8_WRITE); 3567 3571 3568 3572 if (irr == -1) 3569 3573 return;
+33 -14
arch/x86/mm/fault.c
··· 1020 1020 * This routine handles page faults. It determines the address, 1021 1021 * and the problem, and then passes it off to one of the appropriate 1022 1022 * routines. 1023 + * 1024 + * This function must have noinline because both callers 1025 + * {,trace_}do_page_fault() have notrace on. Having this an actual function 1026 + * guarantees there's a function trace entry. 1023 1027 */ 1024 - static void __kprobes 1025 - __do_page_fault(struct pt_regs *regs, unsigned long error_code) 1028 + static void __kprobes noinline 1029 + __do_page_fault(struct pt_regs *regs, unsigned long error_code, 1030 + unsigned long address) 1026 1031 { 1027 1032 struct vm_area_struct *vma; 1028 1033 struct task_struct *tsk; 1029 - unsigned long address; 1030 1034 struct mm_struct *mm; 1031 1035 int fault; 1032 1036 unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE; 1033 1037 1034 1038 tsk = current; 1035 1039 mm = tsk->mm; 1036 - 1037 - /* Get the faulting address: */ 1038 - address = read_cr2(); 1039 1040 1040 1041 /* 1041 1042 * Detect and handle instructions that would cause a page fault for ··· 1249 1248 up_read(&mm->mmap_sem); 1250 1249 } 1251 1250 1252 - dotraplinkage void __kprobes 1251 + dotraplinkage void __kprobes notrace 1253 1252 do_page_fault(struct pt_regs *regs, unsigned long error_code) 1254 1253 { 1254 + unsigned long address = read_cr2(); /* Get the faulting address */ 1255 1255 enum ctx_state prev_state; 1256 1256 1257 + /* 1258 + * We must have this function tagged with __kprobes, notrace and call 1259 + * read_cr2() before calling anything else. To avoid calling any kind 1260 + * of tracing machinery before we've observed the CR2 value. 1261 + * 1262 + * exception_{enter,exit}() contain all sorts of tracepoints. 
1263 + */ 1264 + 1257 1265 prev_state = exception_enter(); 1258 - __do_page_fault(regs, error_code); 1266 + __do_page_fault(regs, error_code, address); 1259 1267 exception_exit(prev_state); 1260 1268 } 1261 1269 1262 - static void trace_page_fault_entries(struct pt_regs *regs, 1270 + #ifdef CONFIG_TRACING 1271 + static void trace_page_fault_entries(unsigned long address, struct pt_regs *regs, 1263 1272 unsigned long error_code) 1264 1273 { 1265 1274 if (user_mode(regs)) 1266 - trace_page_fault_user(read_cr2(), regs, error_code); 1275 + trace_page_fault_user(address, regs, error_code); 1267 1276 else 1268 - trace_page_fault_kernel(read_cr2(), regs, error_code); 1277 + trace_page_fault_kernel(address, regs, error_code); 1269 1278 } 1270 1279 1271 - dotraplinkage void __kprobes 1280 + dotraplinkage void __kprobes notrace 1272 1281 trace_do_page_fault(struct pt_regs *regs, unsigned long error_code) 1273 1282 { 1283 + /* 1284 + * The exception_enter and tracepoint processing could 1285 + * trigger another page faults (user space callchain 1286 + * reading) and destroy the original cr2 value, so read 1287 + * the faulting address now. 1288 + */ 1289 + unsigned long address = read_cr2(); 1274 1290 enum ctx_state prev_state; 1275 1291 1276 1292 prev_state = exception_enter(); 1277 - trace_page_fault_entries(regs, error_code); 1278 - __do_page_fault(regs, error_code); 1293 + trace_page_fault_entries(address, regs, error_code); 1294 + __do_page_fault(regs, error_code, address); 1279 1295 exception_exit(prev_state); 1280 1296 } 1297 + #endif /* CONFIG_TRACING */
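The fault.c restructuring exists to read CR2 exactly once, before exception_enter() or any tracepoint can itself fault and clobber it. A sketch of the snapshot-then-trace pattern, with a simulated CR2 register standing in for the real one:

```c
/* Simulated CR2: a tracing hook may itself "fault" and overwrite it,
 * which is exactly the hazard the patch guards against. */
static unsigned long fake_cr2;

static unsigned long read_cr2_sim(void) { return fake_cr2; }

/* Stands in for exception_enter()/tracepoints that can page-fault. */
static void clobbering_trace_hook(void) { fake_cr2 = 0xdeadbeef; }

/* Mirrors trace_do_page_fault(): snapshot the faulting address first,
 * then run tracing, then handle the fault with the saved address. */
static unsigned long handle_fault_traced(void)
{
    unsigned long address = read_cr2_sim(); /* read before any tracing */

    clobbering_trace_hook();
    return address; /* passed on as __do_page_fault(..., address) */
}
```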
+1 -1
arch/x86/net/bpf_jit.S
··· 140 140 push %r9; \ 141 141 push SKBDATA; \ 142 142 /* rsi already has offset */ \ 143 - mov $SIZE,%ecx; /* size */ \ 143 + mov $SIZE,%edx; /* size */ \ 144 144 call bpf_internal_load_pointer_neg_helper; \ 145 145 test %rax,%rax; \ 146 146 pop SKBDATA; \
+20
arch/x86/platform/efi/efi.c
··· 52 52 #include <asm/tlbflush.h> 53 53 #include <asm/x86_init.h> 54 54 #include <asm/rtc.h> 55 + #include <asm/uv/uv.h> 55 56 56 57 #define EFI_DEBUG 57 58 ··· 1211 1210 return 0; 1212 1211 } 1213 1212 early_param("efi", parse_efi_cmdline); 1213 + 1214 + void __init efi_apply_memmap_quirks(void) 1215 + { 1216 + /* 1217 + * Once setup is done earlier, unmap the EFI memory map on mismatched 1218 + * firmware/kernel architectures since there is no support for runtime 1219 + * services. 1220 + */ 1221 + if (!efi_is_native()) { 1222 + pr_info("efi: Setup done, disabling due to 32/64-bit mismatch\n"); 1223 + efi_unmap_memmap(); 1224 + } 1225 + 1226 + /* 1227 + * UV doesn't support the new EFI pagetable mapping yet. 1228 + */ 1229 + if (is_uv_system()) 1230 + set_bit(EFI_OLD_MEMMAP, &x86_efi_facility); 1231 + }
-4
arch/x86/um/asm/barrier.h
··· 40 40 #define smp_rmb() barrier() 41 41 #endif /* CONFIG_X86_PPRO_FENCE */ 42 42 43 - #ifdef CONFIG_X86_OOSTORE 44 - #define smp_wmb() wmb() 45 - #else /* CONFIG_X86_OOSTORE */ 46 43 #define smp_wmb() barrier() 47 - #endif /* CONFIG_X86_OOSTORE */ 48 44 49 45 #define smp_read_barrier_depends() read_barrier_depends() 50 46 #define set_mb(var, value) do { (void)xchg(&var, value); } while (0)
+1 -1
block/blk-exec.c
··· 65 65 * be resued after dying flag is set 66 66 */ 67 67 if (q->mq_ops) { 68 - blk_mq_insert_request(q, rq, at_head, true); 68 + blk_mq_insert_request(rq, at_head, true, false); 69 69 return; 70 70 } 71 71
+2 -2
block/blk-flush.c
··· 137 137 rq = container_of(work, struct request, mq_flush_work); 138 138 139 139 memset(&rq->csd, 0, sizeof(rq->csd)); 140 - blk_mq_run_request(rq, true, false); 140 + blk_mq_insert_request(rq, false, true, false); 141 141 } 142 142 143 143 static bool blk_flush_queue_rq(struct request *rq) ··· 411 411 if ((policy & REQ_FSEQ_DATA) && 412 412 !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) { 413 413 if (q->mq_ops) { 414 - blk_mq_run_request(rq, false, true); 414 + blk_mq_insert_request(rq, false, false, true); 415 415 } else 416 416 list_add_tail(&rq->queuelist, &q->queue_head); 417 417 return;
+7 -7
block/blk-mq-cpu.c
··· 11 11 #include "blk-mq.h" 12 12 13 13 static LIST_HEAD(blk_mq_cpu_notify_list); 14 - static DEFINE_SPINLOCK(blk_mq_cpu_notify_lock); 14 + static DEFINE_RAW_SPINLOCK(blk_mq_cpu_notify_lock); 15 15 16 16 static int blk_mq_main_cpu_notify(struct notifier_block *self, 17 17 unsigned long action, void *hcpu) ··· 19 19 unsigned int cpu = (unsigned long) hcpu; 20 20 struct blk_mq_cpu_notifier *notify; 21 21 22 - spin_lock(&blk_mq_cpu_notify_lock); 22 + raw_spin_lock(&blk_mq_cpu_notify_lock); 23 23 24 24 list_for_each_entry(notify, &blk_mq_cpu_notify_list, list) 25 25 notify->notify(notify->data, action, cpu); 26 26 27 - spin_unlock(&blk_mq_cpu_notify_lock); 27 + raw_spin_unlock(&blk_mq_cpu_notify_lock); 28 28 return NOTIFY_OK; 29 29 } 30 30 ··· 32 32 { 33 33 BUG_ON(!notifier->notify); 34 34 35 - spin_lock(&blk_mq_cpu_notify_lock); 35 + raw_spin_lock(&blk_mq_cpu_notify_lock); 36 36 list_add_tail(&notifier->list, &blk_mq_cpu_notify_list); 37 - spin_unlock(&blk_mq_cpu_notify_lock); 37 + raw_spin_unlock(&blk_mq_cpu_notify_lock); 38 38 } 39 39 40 40 void blk_mq_unregister_cpu_notifier(struct blk_mq_cpu_notifier *notifier) 41 41 { 42 - spin_lock(&blk_mq_cpu_notify_lock); 42 + raw_spin_lock(&blk_mq_cpu_notify_lock); 43 43 list_del(&notifier->list); 44 - spin_unlock(&blk_mq_cpu_notify_lock); 44 + raw_spin_unlock(&blk_mq_cpu_notify_lock); 45 45 } 46 46 47 47 void blk_mq_init_cpu_notifier(struct blk_mq_cpu_notifier *notifier,
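The blk-mq-cpu.c hunk only swaps spinlock_t for raw_spinlock_t (on PREEMPT_RT the former becomes a sleeping lock, which is not allowed in CPU-notifier context); the register/notify/unregister pattern itself is unchanged. A userspace analogue of that pattern using a pthread mutex in place of the raw spinlock (names are illustrative):

```c
#include <pthread.h>
#include <stddef.h>

struct notifier {
    struct notifier *next;
    void (*notify)(void *data, int action);
    void *data;
};

static struct notifier *notify_list;
static pthread_mutex_t notify_lock = PTHREAD_MUTEX_INITIALIZER;

/* Add a notifier under the lock, as blk_mq_register_cpu_notifier(). */
static void notifier_register(struct notifier *n)
{
    pthread_mutex_lock(&notify_lock);
    n->next = notify_list;
    notify_list = n;
    pthread_mutex_unlock(&notify_lock);
}

/* Walk the list under the lock, as blk_mq_main_cpu_notify() does. */
static void notifier_call_chain(int action)
{
    pthread_mutex_lock(&notify_lock);
    for (struct notifier *n = notify_list; n != NULL; n = n->next)
        n->notify(n->data, action);
    pthread_mutex_unlock(&notify_lock);
}
```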
+22 -86
block/blk-mq.c
··· 73 73 set_bit(ctx->index_hw, hctx->ctx_map); 74 74 } 75 75 76 - static struct request *blk_mq_alloc_rq(struct blk_mq_hw_ctx *hctx, gfp_t gfp, 77 - bool reserved) 76 + static struct request *__blk_mq_alloc_request(struct blk_mq_hw_ctx *hctx, 77 + gfp_t gfp, bool reserved) 78 78 { 79 79 struct request *rq; 80 80 unsigned int tag; ··· 193 193 ctx->rq_dispatched[rw_is_sync(rw_flags)]++; 194 194 } 195 195 196 - static struct request *__blk_mq_alloc_request(struct blk_mq_hw_ctx *hctx, 197 - gfp_t gfp, bool reserved) 198 - { 199 - return blk_mq_alloc_rq(hctx, gfp, reserved); 200 - } 201 - 202 196 static struct request *blk_mq_alloc_request_pinned(struct request_queue *q, 203 197 int rw, gfp_t gfp, 204 198 bool reserved) ··· 283 289 __blk_mq_free_request(hctx, ctx, rq); 284 290 } 285 291 286 - static void blk_mq_bio_endio(struct request *rq, struct bio *bio, int error) 292 + bool blk_mq_end_io_partial(struct request *rq, int error, unsigned int nr_bytes) 287 293 { 288 - if (error) 289 - clear_bit(BIO_UPTODATE, &bio->bi_flags); 290 - else if (!test_bit(BIO_UPTODATE, &bio->bi_flags)) 291 - error = -EIO; 292 - 293 - if (unlikely(rq->cmd_flags & REQ_QUIET)) 294 - set_bit(BIO_QUIET, &bio->bi_flags); 295 - 296 - /* don't actually finish bio if it's part of flush sequence */ 297 - if (!(rq->cmd_flags & REQ_FLUSH_SEQ)) 298 - bio_endio(bio, error); 299 - } 300 - 301 - void blk_mq_end_io(struct request *rq, int error) 302 - { 303 - struct bio *bio = rq->bio; 304 - unsigned int bytes = 0; 305 - 306 - trace_block_rq_complete(rq->q, rq); 307 - 308 - while (bio) { 309 - struct bio *next = bio->bi_next; 310 - 311 - bio->bi_next = NULL; 312 - bytes += bio->bi_iter.bi_size; 313 - blk_mq_bio_endio(rq, bio, error); 314 - bio = next; 315 - } 316 - 317 - blk_account_io_completion(rq, bytes); 294 + if (blk_update_request(rq, error, blk_rq_bytes(rq))) 295 + return true; 318 296 319 297 blk_account_io_done(rq); 320 298 ··· 294 328 rq->end_io(rq, error); 295 329 else 296 330 
blk_mq_free_request(rq); 331 + return false; 297 332 } 298 - EXPORT_SYMBOL(blk_mq_end_io); 333 + EXPORT_SYMBOL(blk_mq_end_io_partial); 299 334 300 335 static void __blk_mq_complete_request_remote(void *data) 301 336 { ··· 697 730 blk_mq_add_timer(rq); 698 731 } 699 732 700 - void blk_mq_insert_request(struct request_queue *q, struct request *rq, 701 - bool at_head, bool run_queue) 702 - { 703 - struct blk_mq_hw_ctx *hctx; 704 - struct blk_mq_ctx *ctx, *current_ctx; 705 - 706 - ctx = rq->mq_ctx; 707 - hctx = q->mq_ops->map_queue(q, ctx->cpu); 708 - 709 - if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA)) { 710 - blk_insert_flush(rq); 711 - } else { 712 - current_ctx = blk_mq_get_ctx(q); 713 - 714 - if (!cpu_online(ctx->cpu)) { 715 - ctx = current_ctx; 716 - hctx = q->mq_ops->map_queue(q, ctx->cpu); 717 - rq->mq_ctx = ctx; 718 - } 719 - spin_lock(&ctx->lock); 720 - __blk_mq_insert_request(hctx, rq, at_head); 721 - spin_unlock(&ctx->lock); 722 - 723 - blk_mq_put_ctx(current_ctx); 724 - } 725 - 726 - if (run_queue) 727 - __blk_mq_run_hw_queue(hctx); 728 - } 729 - EXPORT_SYMBOL(blk_mq_insert_request); 730 - 731 - /* 732 - * This is a special version of blk_mq_insert_request to bypass FLUSH request 733 - * check. Should only be used internally. 
734 - */ 735 - void blk_mq_run_request(struct request *rq, bool run_queue, bool async) 733 + void blk_mq_insert_request(struct request *rq, bool at_head, bool run_queue, 734 + bool async) 736 735 { 737 736 struct request_queue *q = rq->q; 738 737 struct blk_mq_hw_ctx *hctx; 739 - struct blk_mq_ctx *ctx, *current_ctx; 738 + struct blk_mq_ctx *ctx = rq->mq_ctx, *current_ctx; 740 739 741 740 current_ctx = blk_mq_get_ctx(q); 741 + if (!cpu_online(ctx->cpu)) 742 + rq->mq_ctx = ctx = current_ctx; 742 743 743 - ctx = rq->mq_ctx; 744 - if (!cpu_online(ctx->cpu)) { 745 - ctx = current_ctx; 746 - rq->mq_ctx = ctx; 747 - } 748 744 hctx = q->mq_ops->map_queue(q, ctx->cpu); 749 745 750 - /* ctx->cpu might be offline */ 751 - spin_lock(&ctx->lock); 752 - __blk_mq_insert_request(hctx, rq, false); 753 - spin_unlock(&ctx->lock); 746 + if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA) && 747 + !(rq->cmd_flags & (REQ_FLUSH_SEQ))) { 748 + blk_insert_flush(rq); 749 + } else { 750 + spin_lock(&ctx->lock); 751 + __blk_mq_insert_request(hctx, rq, at_head); 752 + spin_unlock(&ctx->lock); 753 + } 754 754 755 755 blk_mq_put_ctx(current_ctx); 756 756 ··· 860 926 ctx = blk_mq_get_ctx(q); 861 927 hctx = q->mq_ops->map_queue(q, ctx->cpu); 862 928 929 + if (is_sync) 930 + rw |= REQ_SYNC; 863 931 trace_block_getrq(q, bio, rw); 864 932 rq = __blk_mq_alloc_request(hctx, GFP_ATOMIC, false); 865 933 if (likely(rq))
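The open-coded bio walk in blk_mq_end_io() is rebuilt on top of blk_update_request(), which gives partial completions for free: it returns true while bytes remain outstanding. A minimal model of that contract (hypothetical, not the real block-layer API):

```c
#include <stdbool.h>

struct req { unsigned int bytes_left; };

/* Models blk_update_request(): consume up to nr_bytes and return true
 * if the request still has bytes outstanding. */
static bool update_request(struct req *rq, unsigned int nr_bytes)
{
    if (nr_bytes >= rq->bytes_left)
        nr_bytes = rq->bytes_left;
    rq->bytes_left -= nr_bytes;
    return rq->bytes_left > 0;
}

/* Models blk_mq_end_io_partial(): only finish accounting and free the
 * request once nothing is left; a true return means "still alive". */
static bool end_io_partial(struct req *rq, unsigned int nr_bytes)
{
    if (update_request(rq, nr_bytes))
        return true;    /* partial completion: caller must not free */
    /* blk_account_io_done() + end_io/free would happen here */
    return false;
}
```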
-1
block/blk-mq.h
··· 23 23 }; 24 24 25 25 void __blk_mq_complete_request(struct request *rq); 26 - void blk_mq_run_request(struct request *rq, bool run_queue, bool async); 27 26 void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async); 28 27 void blk_mq_init_flush(struct request_queue *q); 29 28 void blk_mq_drain_queue(struct request_queue *q);
+64
drivers/acpi/ec.c
··· 67 67 #define ACPI_EC_DELAY 500 /* Wait 500ms max. during EC ops */ 68 68 #define ACPI_EC_UDELAY_GLK 1000 /* Wait 1ms max. to get global lock */ 69 69 #define ACPI_EC_MSI_UDELAY 550 /* Wait 550us for MSI EC */ 70 + #define ACPI_EC_CLEAR_MAX 100 /* Maximum number of events to query 71 + * when trying to clear the EC */ 70 72 71 73 enum { 72 74 EC_FLAGS_QUERY_PENDING, /* Query is pending */ ··· 118 116 static int EC_FLAGS_MSI; /* Out-of-spec MSI controller */ 119 117 static int EC_FLAGS_VALIDATE_ECDT; /* ASUStec ECDTs need to be validated */ 120 118 static int EC_FLAGS_SKIP_DSDT_SCAN; /* Not all BIOS survive early DSDT scan */ 119 + static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */ 121 120 122 121 /* -------------------------------------------------------------------------- 123 122 Transaction Management ··· 443 440 444 441 EXPORT_SYMBOL(ec_get_handle); 445 442 443 + static int acpi_ec_query_unlocked(struct acpi_ec *ec, u8 *data); 444 + 445 + /* 446 + * Clears stale _Q events that might have accumulated in the EC. 447 + * Run with locked ec mutex. 
448 + */ 449 + static void acpi_ec_clear(struct acpi_ec *ec) 450 + { 451 + int i, status; 452 + u8 value = 0; 453 + 454 + for (i = 0; i < ACPI_EC_CLEAR_MAX; i++) { 455 + status = acpi_ec_query_unlocked(ec, &value); 456 + if (status || !value) 457 + break; 458 + } 459 + 460 + if (unlikely(i == ACPI_EC_CLEAR_MAX)) 461 + pr_warn("Warning: Maximum of %d stale EC events cleared\n", i); 462 + else 463 + pr_info("%d stale EC events cleared\n", i); 464 + } 465 + 446 466 void acpi_ec_block_transactions(void) 447 467 { 448 468 struct acpi_ec *ec = first_ec; ··· 489 463 mutex_lock(&ec->mutex); 490 464 /* Allow transactions to be carried out again */ 491 465 clear_bit(EC_FLAGS_BLOCKED, &ec->flags); 466 + 467 + if (EC_FLAGS_CLEAR_ON_RESUME) 468 + acpi_ec_clear(ec); 469 + 492 470 mutex_unlock(&ec->mutex); 493 471 } 494 472 ··· 851 821 852 822 /* EC is fully operational, allow queries */ 853 823 clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags); 824 + 825 + /* Clear stale _Q events if hardware might require that */ 826 + if (EC_FLAGS_CLEAR_ON_RESUME) { 827 + mutex_lock(&ec->mutex); 828 + acpi_ec_clear(ec); 829 + mutex_unlock(&ec->mutex); 830 + } 854 831 return ret; 855 832 } 856 833 ··· 959 922 return 0; 960 923 } 961 924 925 + /* 926 + * On some hardware it is necessary to clear events accumulated by the EC during 927 + * sleep. These ECs stop reporting GPEs until they are manually polled, if too 928 + * many events are accumulated. (e.g. Samsung Series 5/9 notebooks) 929 + * 930 + * https://bugzilla.kernel.org/show_bug.cgi?id=44161 931 + * 932 + * Ideally, the EC should also be instructed NOT to accumulate events during 933 + * sleep (which Windows seems to do somehow), but the interface to control this 934 + * behaviour is not known at this time. 935 + * 936 + * Models known to be affected are Samsung 530Uxx/535Uxx/540Uxx/550Pxx/900Xxx, 937 + * however it is very likely that other Samsung models are affected. 
938 + * 939 + * On systems which don't accumulate _Q events during sleep, this extra check 940 + * should be harmless. 941 + */ 942 + static int ec_clear_on_resume(const struct dmi_system_id *id) 943 + { 944 + pr_debug("Detected system needing EC poll on resume.\n"); 945 + EC_FLAGS_CLEAR_ON_RESUME = 1; 946 + return 0; 947 + } 948 + 962 949 static struct dmi_system_id ec_dmi_table[] __initdata = { 963 950 { 964 951 ec_skip_dsdt_scan, "Compal JFL92", { ··· 1026 965 ec_validate_ecdt, "ASUS hardware", { 1027 966 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTek Computer Inc."), 1028 967 DMI_MATCH(DMI_PRODUCT_NAME, "L4R"),}, NULL}, 968 + { 969 + ec_clear_on_resume, "Samsung hardware", { 970 + DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD.")}, NULL}, 1029 971 {}, 1030 972 }; 1031 973
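acpi_ec_clear() drains stale _Q events with a bounded query loop so a misbehaving EC cannot stall resume forever. The shape of that loop, modeled against a hypothetical in-memory event queue:

```c
#include <stddef.h>

#define EC_CLEAR_MAX 100

/* Models acpi_ec_query_unlocked(): pop one pending event into *value,
 * leaving *value == 0 when the queue is empty. */
static int query_event(unsigned char *queue, size_t *len,
                       unsigned char *value)
{
    if (*len == 0) {
        *value = 0;
        return 0;
    }
    *value = queue[--(*len)];
    return 0;
}

/* Drain pending events, but never loop more than EC_CLEAR_MAX times,
 * mirroring the ACPI_EC_CLEAR_MAX cap in acpi_ec_clear(). */
static int ec_clear(unsigned char *queue, size_t *len)
{
    unsigned char value = 0;
    int i;

    for (i = 0; i < EC_CLEAR_MAX; i++) {
        if (query_event(queue, len, &value) || !value)
            break;
    }
    return i;   /* iterations spent clearing events */
}
```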
+10
drivers/acpi/resource.c
··· 77 77 switch (ares->type) { 78 78 case ACPI_RESOURCE_TYPE_MEMORY24: 79 79 memory24 = &ares->data.memory24; 80 + if (!memory24->address_length) 81 + return false; 80 82 acpi_dev_get_memresource(res, memory24->minimum, 81 83 memory24->address_length, 82 84 memory24->write_protect); 83 85 break; 84 86 case ACPI_RESOURCE_TYPE_MEMORY32: 85 87 memory32 = &ares->data.memory32; 88 + if (!memory32->address_length) 89 + return false; 86 90 acpi_dev_get_memresource(res, memory32->minimum, 87 91 memory32->address_length, 88 92 memory32->write_protect); 89 93 break; 90 94 case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: 91 95 fixed_memory32 = &ares->data.fixed_memory32; 96 + if (!fixed_memory32->address_length) 97 + return false; 92 98 acpi_dev_get_memresource(res, fixed_memory32->address, 93 99 fixed_memory32->address_length, 94 100 fixed_memory32->write_protect); ··· 150 144 switch (ares->type) { 151 145 case ACPI_RESOURCE_TYPE_IO: 152 146 io = &ares->data.io; 147 + if (!io->address_length) 148 + return false; 153 149 acpi_dev_get_ioresource(res, io->minimum, 154 150 io->address_length, 155 151 io->io_decode); 156 152 break; 157 153 case ACPI_RESOURCE_TYPE_FIXED_IO: 158 154 fixed_io = &ares->data.fixed_io; 155 + if (!fixed_io->address_length) 156 + return false; 159 157 acpi_dev_get_ioresource(res, fixed_io->address, 160 158 fixed_io->address_length, 161 159 ACPI_DECODE_10);
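Each resource.c hunk inserts the same guard: a memory or I/O descriptor with a zero address_length is invalid, and the parser now reports it as such instead of registering an empty resource. As a sketch with a hypothetical descriptor type:

```c
#include <stdbool.h>
#include <stdint.h>

struct acpi_mem_desc { uint32_t minimum; uint32_t address_length; };

/* After the patch: descriptors with address_length == 0 are rejected
 * before any resource is built from them. */
static bool parse_mem_resource(const struct acpi_mem_desc *d,
                               uint32_t *start, uint32_t *end)
{
    if (!d->address_length)
        return false;
    *start = d->minimum;
    *end = d->minimum + d->address_length - 1;
    return true;
}
```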
+15 -17
drivers/acpi/sleep.c
··· 71 71 return 0; 72 72 } 73 73 74 + static bool acpi_sleep_state_supported(u8 sleep_state) 75 + { 76 + acpi_status status; 77 + u8 type_a, type_b; 78 + 79 + status = acpi_get_sleep_type_data(sleep_state, &type_a, &type_b); 80 + return ACPI_SUCCESS(status) && (!acpi_gbl_reduced_hardware 81 + || (acpi_gbl_FADT.sleep_control.address 82 + && acpi_gbl_FADT.sleep_status.address)); 83 + } 84 + 74 85 #ifdef CONFIG_ACPI_SLEEP 75 86 static u32 acpi_target_sleep_state = ACPI_STATE_S0; 76 87 ··· 615 604 { 616 605 int i; 617 606 618 - for (i = ACPI_STATE_S1; i < ACPI_STATE_S4; i++) { 619 - acpi_status status; 620 - u8 type_a, type_b; 621 - 622 - status = acpi_get_sleep_type_data(i, &type_a, &type_b); 623 - if (ACPI_SUCCESS(status)) { 607 + for (i = ACPI_STATE_S1; i < ACPI_STATE_S4; i++) 608 + if (acpi_sleep_state_supported(i)) 624 609 sleep_states[i] = 1; 625 - } 626 - } 627 610 628 611 suspend_set_ops(old_suspend_ordering ? 629 612 &acpi_suspend_ops_old : &acpi_suspend_ops); ··· 745 740 746 741 static void acpi_sleep_hibernate_setup(void) 747 742 { 748 - acpi_status status; 749 - u8 type_a, type_b; 750 - 751 - status = acpi_get_sleep_type_data(ACPI_STATE_S4, &type_a, &type_b); 752 - if (ACPI_FAILURE(status)) 743 + if (!acpi_sleep_state_supported(ACPI_STATE_S4)) 753 744 return; 754 745 755 746 hibernation_set_ops(old_suspend_ordering ? ··· 794 793 795 794 int __init acpi_sleep_init(void) 796 795 { 797 - acpi_status status; 798 - u8 type_a, type_b; 799 796 char supported[ACPI_S_STATE_COUNT * 3 + 1]; 800 797 char *pos = supported; 801 798 int i; ··· 805 806 acpi_sleep_suspend_setup(); 806 807 acpi_sleep_hibernate_setup(); 807 808 808 - status = acpi_get_sleep_type_data(ACPI_STATE_S5, &type_a, &type_b); 809 - if (ACPI_SUCCESS(status)) { 809 + if (acpi_sleep_state_supported(ACPI_STATE_S5)) { 810 810 sleep_states[ACPI_STATE_S5] = 1; 811 811 pm_power_off_prepare = acpi_power_off_prepare; 812 812 pm_power_off = acpi_power_off;
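Three duplicated acpi_get_sleep_type_data() checks collapse into one acpi_sleep_state_supported() helper that also accounts for hardware-reduced platforms, which additionally need valid sleep-control and sleep-status register addresses. A stubbed model of the predicate:

```c
#include <stdbool.h>

/* Inputs mirror acpi_sleep_state_supported(): the _Sx type-data lookup
 * result plus the reduced-hardware register checks. */
static bool sleep_state_supported(bool type_data_ok, bool reduced_hw,
                                  bool sleep_ctl_addr, bool sleep_sts_addr)
{
    return type_data_ok &&
           (!reduced_hw || (sleep_ctl_addr && sleep_sts_addr));
}
```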
+2 -1
drivers/ata/libata-core.c
··· 4175 4175 4176 4176 /* Seagate Momentus SpinPoint M8 seem to have FPMDA_AA issues */ 4177 4177 { "ST1000LM024 HN-M101MBB", "2AR10001", ATA_HORKAGE_BROKEN_FPDMA_AA }, 4178 + { "ST1000LM024 HN-M101MBB", "2BA30001", ATA_HORKAGE_BROKEN_FPDMA_AA }, 4178 4179 4179 4180 /* Blacklist entries taken from Silicon Image 3124/3132 4180 4181 Windows driver .inf file - also several Linux problem reports */ ··· 4225 4224 4226 4225 /* devices that don't properly handle queued TRIM commands */ 4227 4226 { "Micron_M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, 4228 - { "Crucial_CT???M500SSD1", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, 4227 + { "Crucial_CT???M500SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, 4229 4228 4230 4229 /* 4231 4230 * Some WD SATA-I drives spin up and down erratically when the link
+1 -1
drivers/block/mtip32xx/mtip32xx.h
··· 53 53 #define MTIP_FTL_REBUILD_TIMEOUT_MS 2400000 54 54 55 55 /* unaligned IO handling */ 56 - #define MTIP_MAX_UNALIGNED_SLOTS 8 56 + #define MTIP_MAX_UNALIGNED_SLOTS 2 57 57 58 58 /* Macro to extract the tag bit number from a tag value. */ 59 59 #define MTIP_TAG_BIT(tag) (tag & 0x1F)
+34 -2
drivers/clk/shmobile/clk-rcar-gen2.c
··· 26 26 void __iomem *reg; 27 27 }; 28 28 29 + #define CPG_FRQCRB 0x00000004 30 + #define CPG_FRQCRB_KICK BIT(31) 29 31 #define CPG_SDCKCR 0x00000074 30 32 #define CPG_PLL0CR 0x000000d8 31 33 #define CPG_FRQCRC 0x000000e0 ··· 47 45 struct cpg_z_clk { 48 46 struct clk_hw hw; 49 47 void __iomem *reg; 48 + void __iomem *kick_reg; 50 49 }; 51 50 52 51 #define to_z_clk(_hw) container_of(_hw, struct cpg_z_clk, hw) ··· 86 83 { 87 84 struct cpg_z_clk *zclk = to_z_clk(hw); 88 85 unsigned int mult; 89 - u32 val; 86 + u32 val, kick; 87 + unsigned int i; 90 88 91 89 mult = div_u64((u64)rate * 32, parent_rate); 92 90 mult = clamp(mult, 1U, 32U); 91 + 92 + if (clk_readl(zclk->kick_reg) & CPG_FRQCRB_KICK) 93 + return -EBUSY; 93 94 94 95 val = clk_readl(zclk->reg); 95 96 val &= ~CPG_FRQCRC_ZFC_MASK; 96 97 val |= (32 - mult) << CPG_FRQCRC_ZFC_SHIFT; 97 98 clk_writel(val, zclk->reg); 98 99 99 - return 0; 100 + /* 101 + * Set KICK bit in FRQCRB to update hardware setting and wait for 102 + * clock change completion. 103 + */ 104 + kick = clk_readl(zclk->kick_reg); 105 + kick |= CPG_FRQCRB_KICK; 106 + clk_writel(kick, zclk->kick_reg); 107 + 108 + /* 109 + * Note: There is no HW information about the worst case latency. 110 + * 111 + * Using experimental measurements, it seems that no more than 112 + * ~10 iterations are needed, independently of the CPU rate. 113 + * Since this value might be dependant of external xtal rate, pll1 114 + * rate or even the other emulation clocks rate, use 1000 as a 115 + * "super" safe value. 116 + */ 117 + for (i = 1000; i; i--) { 118 + if (!(clk_readl(zclk->kick_reg) & CPG_FRQCRB_KICK)) 119 + return 0; 120 + 121 + cpu_relax(); 122 + } 123 + 124 + return -ETIMEDOUT; 100 125 } 101 126 102 127 static const struct clk_ops cpg_z_clk_ops = { ··· 151 120 init.num_parents = 1; 152 121 153 122 zclk->reg = cpg->reg + CPG_FRQCRC; 123 + zclk->kick_reg = cpg->reg + CPG_FRQCRB; 154 124 zclk->hw.init = &init; 155 125 156 126 clk = clk_register(NULL, &zclk->hw);
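The Z clock now latches its new divider by setting the FRQCRB KICK bit and polling until hardware clears it, bailing out after a bounded number of iterations since no worst-case latency is documented. The poll-with-timeout pattern against a simulated register:

```c
#include <stdint.h>

#define FRQCRB_KICK (1u << 31)
#define KICK_TIMEOUT 1000

/* Simulated FRQCRB: "hardware" clears KICK after a few read cycles. */
static uint32_t frqcrb;
static int cycles_until_done;

static uint32_t reg_read(void)
{
    if ((frqcrb & FRQCRB_KICK) && cycles_until_done-- <= 0)
        frqcrb &= ~FRQCRB_KICK;
    return frqcrb;
}

/* Mirrors cpg_z_clk_set_rate(): refuse if a transition is already in
 * flight, set KICK, then poll with a generous bound. */
static int kick_and_wait(void)
{
    if (reg_read() & FRQCRB_KICK)
        return -1;              /* -EBUSY */

    frqcrb |= FRQCRB_KICK;

    for (int i = KICK_TIMEOUT; i; i--)
        if (!(reg_read() & FRQCRB_KICK))
            return 0;

    return -2;                  /* -ETIMEDOUT */
}
```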
+25 -30
drivers/cpufreq/cpufreq.c
··· 1109 1109 goto err_set_policy_cpu; 1110 1110 } 1111 1111 1112 + /* related cpus should atleast have policy->cpus */ 1113 + cpumask_or(policy->related_cpus, policy->related_cpus, policy->cpus); 1114 + 1115 + /* 1116 + * affected cpus must always be the one, which are online. We aren't 1117 + * managing offline cpus here. 1118 + */ 1119 + cpumask_and(policy->cpus, policy->cpus, cpu_online_mask); 1120 + 1121 + if (!frozen) { 1122 + policy->user_policy.min = policy->min; 1123 + policy->user_policy.max = policy->max; 1124 + } 1125 + 1126 + down_write(&policy->rwsem); 1112 1127 write_lock_irqsave(&cpufreq_driver_lock, flags); 1113 1128 for_each_cpu(j, policy->cpus) 1114 1129 per_cpu(cpufreq_cpu_data, j) = policy; 1115 1130 write_unlock_irqrestore(&cpufreq_driver_lock, flags); 1116 1131 1117 - if (cpufreq_driver->get) { 1132 + if (cpufreq_driver->get && !cpufreq_driver->setpolicy) { 1118 1133 policy->cur = cpufreq_driver->get(policy->cpu); 1119 1134 if (!policy->cur) { 1120 1135 pr_err("%s: ->get() failed\n", __func__); ··· 1177 1162 } 1178 1163 } 1179 1164 1180 - /* related cpus should atleast have policy->cpus */ 1181 - cpumask_or(policy->related_cpus, policy->related_cpus, policy->cpus); 1182 - 1183 - /* 1184 - * affected cpus must always be the one, which are online. We aren't 1185 - * managing offline cpus here. 
1186 - */ 1187 - cpumask_and(policy->cpus, policy->cpus, cpu_online_mask); 1188 - 1189 - if (!frozen) { 1190 - policy->user_policy.min = policy->min; 1191 - policy->user_policy.max = policy->max; 1192 - } 1193 - 1194 1165 blocking_notifier_call_chain(&cpufreq_policy_notifier_list, 1195 1166 CPUFREQ_START, policy); 1196 1167 ··· 1207 1206 policy->user_policy.policy = policy->policy; 1208 1207 policy->user_policy.governor = policy->governor; 1209 1208 } 1209 + up_write(&policy->rwsem); 1210 1210 1211 1211 kobject_uevent(&policy->kobj, KOBJ_ADD); 1212 1212 up_read(&cpufreq_rwsem); ··· 1548 1546 */ 1549 1547 unsigned int cpufreq_get(unsigned int cpu) 1550 1548 { 1551 - struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu); 1549 + struct cpufreq_policy *policy = cpufreq_cpu_get(cpu); 1552 1550 unsigned int ret_freq = 0; 1553 1551 1554 - if (cpufreq_disabled() || !cpufreq_driver) 1555 - return -ENOENT; 1552 + if (policy) { 1553 + down_read(&policy->rwsem); 1554 + ret_freq = __cpufreq_get(cpu); 1555 + up_read(&policy->rwsem); 1556 1556 1557 - BUG_ON(!policy); 1558 - 1559 - if (!down_read_trylock(&cpufreq_rwsem)) 1560 - return 0; 1561 - 1562 - down_read(&policy->rwsem); 1563 - 1564 - ret_freq = __cpufreq_get(cpu); 1565 - 1566 - up_read(&policy->rwsem); 1567 - up_read(&cpufreq_rwsem); 1557 + cpufreq_cpu_put(policy); 1558 + } 1568 1559 1569 1560 return ret_freq; 1570 1561 } ··· 2143 2148 * BIOS might change freq behind our back 2144 2149 * -> ask driver for current freq and notify governors about a change 2145 2150 */ 2146 - if (cpufreq_driver->get) { 2151 + if (cpufreq_driver->get && !cpufreq_driver->setpolicy) { 2147 2152 new_policy.cur = cpufreq_driver->get(cpu); 2148 2153 if (!policy->cur) { 2149 2154 pr_debug("Driver did not initialize current freq");
+15 -7
drivers/firewire/core-device.c
··· 916 916 old->config_rom_retries = 0; 917 917 fw_notice(card, "rediscovered device %s\n", dev_name(dev)); 918 918 919 - PREPARE_DELAYED_WORK(&old->work, fw_device_update); 919 + old->workfn = fw_device_update; 920 920 fw_schedule_device_work(old, 0); 921 921 922 922 if (current_node == card->root_node) ··· 1075 1075 if (atomic_cmpxchg(&device->state, 1076 1076 FW_DEVICE_INITIALIZING, 1077 1077 FW_DEVICE_RUNNING) == FW_DEVICE_GONE) { 1078 - PREPARE_DELAYED_WORK(&device->work, fw_device_shutdown); 1078 + device->workfn = fw_device_shutdown; 1079 1079 fw_schedule_device_work(device, SHUTDOWN_DELAY); 1080 1080 } else { 1081 1081 fw_notice(card, "created device %s: GUID %08x%08x, S%d00\n", ··· 1196 1196 dev_name(&device->device), fw_rcode_string(ret)); 1197 1197 gone: 1198 1198 atomic_set(&device->state, FW_DEVICE_GONE); 1199 - PREPARE_DELAYED_WORK(&device->work, fw_device_shutdown); 1199 + device->workfn = fw_device_shutdown; 1200 1200 fw_schedule_device_work(device, SHUTDOWN_DELAY); 1201 1201 out: 1202 1202 if (node_id == card->root_node->node_id) 1203 1203 fw_schedule_bm_work(card, 0); 1204 + } 1205 + 1206 + static void fw_device_workfn(struct work_struct *work) 1207 + { 1208 + struct fw_device *device = container_of(to_delayed_work(work), 1209 + struct fw_device, work); 1210 + device->workfn(work); 1204 1211 } 1205 1212 1206 1213 void fw_node_event(struct fw_card *card, struct fw_node *node, int event) ··· 1259 1252 * power-up after getting plugged in. We schedule the 1260 1253 * first config rom scan half a second after bus reset. 
1261 1254 */ 1262 - INIT_DELAYED_WORK(&device->work, fw_device_init); 1255 + device->workfn = fw_device_init; 1256 + INIT_DELAYED_WORK(&device->work, fw_device_workfn); 1263 1257 fw_schedule_device_work(device, INITIAL_DELAY); 1264 1258 break; 1265 1259 ··· 1276 1268 if (atomic_cmpxchg(&device->state, 1277 1269 FW_DEVICE_RUNNING, 1278 1270 FW_DEVICE_INITIALIZING) == FW_DEVICE_RUNNING) { 1279 - PREPARE_DELAYED_WORK(&device->work, fw_device_refresh); 1271 + device->workfn = fw_device_refresh; 1280 1272 fw_schedule_device_work(device, 1281 1273 device->is_local ? 0 : INITIAL_DELAY); 1282 1274 } ··· 1291 1283 smp_wmb(); /* update node_id before generation */ 1292 1284 device->generation = card->generation; 1293 1285 if (atomic_read(&device->state) == FW_DEVICE_RUNNING) { 1294 - PREPARE_DELAYED_WORK(&device->work, fw_device_update); 1286 + device->workfn = fw_device_update; 1295 1287 fw_schedule_device_work(device, 0); 1296 1288 } 1297 1289 break; ··· 1316 1308 device = node->data; 1317 1309 if (atomic_xchg(&device->state, 1318 1310 FW_DEVICE_GONE) == FW_DEVICE_RUNNING) { 1319 - PREPARE_DELAYED_WORK(&device->work, fw_device_shutdown); 1311 + device->workfn = fw_device_shutdown; 1320 1312 fw_schedule_device_work(device, 1321 1313 list_empty(&card->link) ? 0 : SHUTDOWN_DELAY); 1322 1314 }
+3 -3
drivers/firewire/net.c
··· 929 929 if (rcode == RCODE_COMPLETE) { 930 930 fwnet_transmit_packet_done(ptask); 931 931 } else { 932 - fwnet_transmit_packet_failed(ptask); 933 - 934 932 if (printk_timed_ratelimit(&j, 1000) || rcode != last_rcode) { 935 933 dev_err(&ptask->dev->netdev->dev, 936 934 "fwnet_write_complete failed: %x (skipped %d)\n", ··· 936 938 937 939 errors_skipped = 0; 938 940 last_rcode = rcode; 939 - } else 941 + } else { 940 942 errors_skipped++; 943 + } 944 + fwnet_transmit_packet_failed(ptask); 941 945 } 942 946 } 943 947
+2 -13
drivers/firewire/ohci.c
··· 290 290 #define QUIRK_NO_MSI 0x10 291 291 #define QUIRK_TI_SLLZ059 0x20 292 292 #define QUIRK_IR_WAKE 0x40 293 - #define QUIRK_PHY_LCTRL_TIMEOUT 0x80 294 293 295 294 /* In case of multiple matches in ohci_quirks[], only the first one is used. */ 296 295 static const struct { ··· 302 303 QUIRK_BE_HEADERS}, 303 304 304 305 {PCI_VENDOR_ID_ATT, PCI_DEVICE_ID_AGERE_FW643, 6, 305 - QUIRK_PHY_LCTRL_TIMEOUT | QUIRK_NO_MSI}, 306 - 307 - {PCI_VENDOR_ID_ATT, PCI_ANY_ID, PCI_ANY_ID, 308 - QUIRK_PHY_LCTRL_TIMEOUT}, 306 + QUIRK_NO_MSI}, 309 307 310 308 {PCI_VENDOR_ID_CREATIVE, PCI_DEVICE_ID_CREATIVE_SB1394, PCI_ANY_ID, 311 309 QUIRK_RESET_PACKET}, ··· 349 353 ", disable MSI = " __stringify(QUIRK_NO_MSI) 350 354 ", TI SLLZ059 erratum = " __stringify(QUIRK_TI_SLLZ059) 351 355 ", IR wake unreliable = " __stringify(QUIRK_IR_WAKE) 352 - ", phy LCtrl timeout = " __stringify(QUIRK_PHY_LCTRL_TIMEOUT) 353 356 ")"); 354 357 355 358 #define OHCI_PARAM_DEBUG_AT_AR 1 ··· 2294 2299 * TI TSB82AA2 + TSB81BA3(A) cards signal LPS enabled early but 2295 2300 * cannot actually use the phy at that time. These need tens of 2296 2301 * millisecods pause between LPS write and first phy access too. 2297 - * 2298 - * But do not wait for 50msec on Agere/LSI cards. Their phy 2299 - * arbitration state machine may time out during such a long wait. 2300 2302 */ 2301 2303 2302 2304 reg_write(ohci, OHCI1394_HCControlSet, ··· 2301 2309 OHCI1394_HCControl_postedWriteEnable); 2302 2310 flush_writes(ohci); 2303 2311 2304 - if (!(ohci->quirks & QUIRK_PHY_LCTRL_TIMEOUT)) 2312 + for (lps = 0, i = 0; !lps && i < 3; i++) { 2305 2313 msleep(50); 2306 - 2307 - for (lps = 0, i = 0; !lps && i < 150; i++) { 2308 - msleep(1); 2309 2314 lps = reg_read(ohci, OHCI1394_HCControlSet) & 2310 2315 OHCI1394_HCControl_LPS; 2311 2316 }
+13 -4
drivers/firewire/sbp2.c
··· 146 146 */ 147 147 int generation; 148 148 int retries; 149 + work_func_t workfn; 149 150 struct delayed_work work; 150 151 bool has_sdev; 151 152 bool blocked; ··· 865 864 /* set appropriate retry limit(s) in BUSY_TIMEOUT register */ 866 865 sbp2_set_busy_timeout(lu); 867 866 868 - PREPARE_DELAYED_WORK(&lu->work, sbp2_reconnect); 867 + lu->workfn = sbp2_reconnect; 869 868 sbp2_agent_reset(lu); 870 869 871 870 /* This was a re-login. */ ··· 919 918 * If a bus reset happened, sbp2_update will have requeued 920 919 * lu->work already. Reset the work from reconnect to login. 921 920 */ 922 - PREPARE_DELAYED_WORK(&lu->work, sbp2_login); 921 + lu->workfn = sbp2_login; 923 922 } 924 923 925 924 static void sbp2_reconnect(struct work_struct *work) ··· 953 952 lu->retries++ >= 5) { 954 953 dev_err(tgt_dev(tgt), "failed to reconnect\n"); 955 954 lu->retries = 0; 956 - PREPARE_DELAYED_WORK(&lu->work, sbp2_login); 955 + lu->workfn = sbp2_login; 957 956 } 958 957 sbp2_queue_work(lu, DIV_ROUND_UP(HZ, 5)); 959 958 ··· 971 970 sbp2_agent_reset(lu); 972 971 sbp2_cancel_orbs(lu); 973 972 sbp2_conditionally_unblock(lu); 973 + } 974 + 975 + static void sbp2_lu_workfn(struct work_struct *work) 976 + { 977 + struct sbp2_logical_unit *lu = container_of(to_delayed_work(work), 978 + struct sbp2_logical_unit, work); 979 + lu->workfn(work); 974 980 } 975 981 976 982 static int sbp2_add_logical_unit(struct sbp2_target *tgt, int lun_entry) ··· 1006 998 lu->blocked = false; 1007 999 ++tgt->dont_block; 1008 1000 INIT_LIST_HEAD(&lu->orb_list); 1009 - INIT_DELAYED_WORK(&lu->work, sbp2_login); 1001 + lu->workfn = sbp2_login; 1002 + INIT_DELAYED_WORK(&lu->work, sbp2_lu_workfn); 1010 1003 1011 1004 list_add_tail(&lu->link, &tgt->lu_list); 1012 1005 return 0;
+1 -9
drivers/gpu/drm/armada/armada_drv.c
··· 68 68 { 69 69 struct armada_private *priv = dev->dev_private; 70 70 71 - /* 72 - * Yes, we really must jump through these hoops just to store a 73 - * _pointer_ to something into the kfifo. This is utterly insane 74 - * and idiotic, because it kfifo requires the _data_ pointed to by 75 - * the pointer const, not the pointer itself. Not only that, but 76 - * you have to pass a pointer _to_ the pointer you want stored. 77 - */ 78 - const struct drm_framebuffer *silly_api_alert = fb; 79 - WARN_ON(!kfifo_put(&priv->fb_unref, &silly_api_alert)); 71 + WARN_ON(!kfifo_put(&priv->fb_unref, fb)); 80 72 schedule_work(&priv->fb_unref_work); 81 73 } 82 74
+1
drivers/gpu/drm/bochs/Kconfig
··· 2 2 tristate "DRM Support for bochs dispi vga interface (qemu stdvga)" 3 3 depends on DRM && PCI 4 4 select DRM_KMS_HELPER 5 + select DRM_KMS_FB_HELPER 5 6 select FB_SYS_FILLRECT 6 7 select FB_SYS_COPYAREA 7 8 select FB_SYS_IMAGEBLIT
+9 -14
drivers/gpu/drm/i915/i915_drv.c
··· 403 403 void intel_detect_pch(struct drm_device *dev) 404 404 { 405 405 struct drm_i915_private *dev_priv = dev->dev_private; 406 - struct pci_dev *pch; 406 + struct pci_dev *pch = NULL; 407 407 408 408 /* In all current cases, num_pipes is equivalent to the PCH_NOP setting 409 409 * (which really amounts to a PCH but no South Display). ··· 424 424 * all the ISA bridge devices and check for the first match, instead 425 425 * of only checking the first one. 426 426 */ 427 - pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL); 428 - while (pch) { 429 - struct pci_dev *curr = pch; 427 + while ((pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, pch))) { 430 428 if (pch->vendor == PCI_VENDOR_ID_INTEL) { 431 - unsigned short id; 432 - id = pch->device & INTEL_PCH_DEVICE_ID_MASK; 429 + unsigned short id = pch->device & INTEL_PCH_DEVICE_ID_MASK; 433 430 dev_priv->pch_id = id; 434 431 435 432 if (id == INTEL_PCH_IBX_DEVICE_ID_TYPE) { ··· 458 461 DRM_DEBUG_KMS("Found LynxPoint LP PCH\n"); 459 462 WARN_ON(!IS_HASWELL(dev)); 460 463 WARN_ON(!IS_ULT(dev)); 461 - } else { 462 - goto check_next; 463 - } 464 - pci_dev_put(pch); 464 + } else 465 + continue; 466 + 465 467 break; 466 468 } 467 - check_next: 468 - pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, curr); 469 - pci_dev_put(curr); 470 469 } 471 470 if (!pch) 472 - DRM_DEBUG_KMS("No PCH found?\n"); 471 + DRM_DEBUG_KMS("No PCH found.\n"); 472 + 473 + pci_dev_put(pch); 473 474 } 474 475 475 476 bool i915_semaphore_is_enabled(struct drm_device *dev)
+16 -3
drivers/gpu/drm/i915/i915_gem_stolen.c
··· 82 82 r = devm_request_mem_region(dev->dev, base, dev_priv->gtt.stolen_size, 83 83 "Graphics Stolen Memory"); 84 84 if (r == NULL) { 85 - DRM_ERROR("conflict detected with stolen region: [0x%08x - 0x%08x]\n", 86 - base, base + (uint32_t)dev_priv->gtt.stolen_size); 87 - base = 0; 85 + /* 86 + * One more attempt but this time requesting region from 87 + * base + 1, as we have seen that this resolves the region 88 + * conflict with the PCI Bus. 89 + * This is a BIOS w/a: Some BIOS wrap stolen in the root 90 + * PCI bus, but have an off-by-one error. Hence retry the 91 + * reservation starting from 1 instead of 0. 92 + */ 93 + r = devm_request_mem_region(dev->dev, base + 1, 94 + dev_priv->gtt.stolen_size - 1, 95 + "Graphics Stolen Memory"); 96 + if (r == NULL) { 97 + DRM_ERROR("conflict detected with stolen region: [0x%08x - 0x%08x]\n", 98 + base, base + (uint32_t)dev_priv->gtt.stolen_size); 99 + base = 0; 100 + } 88 101 } 89 102 90 103 return base;
+4 -4
drivers/gpu/drm/i915/intel_display.c
··· 1092 1092 struct drm_device *dev = dev_priv->dev; 1093 1093 bool cur_state; 1094 1094 1095 - if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) 1096 - cur_state = I915_READ(CURCNTR_IVB(pipe)) & CURSOR_MODE; 1097 - else if (IS_845G(dev) || IS_I865G(dev)) 1095 + if (IS_845G(dev) || IS_I865G(dev)) 1098 1096 cur_state = I915_READ(_CURACNTR) & CURSOR_ENABLE; 1099 - else 1097 + else if (INTEL_INFO(dev)->gen <= 6 || IS_VALLEYVIEW(dev)) 1100 1098 cur_state = I915_READ(CURCNTR(pipe)) & CURSOR_MODE; 1099 + else 1100 + cur_state = I915_READ(CURCNTR_IVB(pipe)) & CURSOR_MODE; 1101 1101 1102 1102 WARN(cur_state != state, 1103 1103 "cursor on pipe %c assertion failure (expected %s, current %s)\n",
+3 -3
drivers/gpu/drm/i915/intel_hdmi.c
··· 845 845 { 846 846 struct drm_device *dev = intel_hdmi_to_dev(hdmi); 847 847 848 - if (IS_G4X(dev)) 848 + if (!hdmi->has_hdmi_sink || IS_G4X(dev)) 849 849 return 165000; 850 850 else if (IS_HASWELL(dev) || INTEL_INFO(dev)->gen >= 8) 851 851 return 300000; ··· 899 899 * outputs. We also need to check that the higher clock still fits 900 900 * within limits. 901 901 */ 902 - if (pipe_config->pipe_bpp > 8*3 && clock_12bpc <= portclock_limit 903 - && HAS_PCH_SPLIT(dev)) { 902 + if (pipe_config->pipe_bpp > 8*3 && intel_hdmi->has_hdmi_sink && 903 + clock_12bpc <= portclock_limit && HAS_PCH_SPLIT(dev)) { 904 904 DRM_DEBUG_KMS("picking bpc to 12 for HDMI output\n"); 905 905 desired_bpp = 12*3; 906 906
+2 -2
drivers/gpu/drm/i915/intel_panel.c
··· 698 698 freq /= 0xff; 699 699 700 700 ctl = freq << 17; 701 - if (IS_GEN2(dev) && panel->backlight.combination_mode) 701 + if (panel->backlight.combination_mode) 702 702 ctl |= BLM_LEGACY_MODE; 703 703 if (IS_PINEVIEW(dev) && panel->backlight.active_low_pwm) 704 704 ctl |= BLM_POLARITY_PNV; ··· 979 979 980 980 ctl = I915_READ(BLC_PWM_CTL); 981 981 982 - if (IS_GEN2(dev)) 982 + if (IS_GEN2(dev) || IS_I915GM(dev) || IS_I945GM(dev)) 983 983 panel->backlight.combination_mode = ctl & BLM_LEGACY_MODE; 984 984 985 985 if (IS_PINEVIEW(dev))
+4 -2
drivers/gpu/drm/i915/intel_pm.c
··· 3493 3493 u32 pcbr; 3494 3494 int pctx_size = 24*1024; 3495 3495 3496 + WARN_ON(!mutex_is_locked(&dev->struct_mutex)); 3497 + 3496 3498 pcbr = I915_READ(VLV_PCBR); 3497 3499 if (pcbr) { 3498 3500 /* BIOS set it up already, grab the pre-alloc'd space */ ··· 3543 3541 gtfifodbg); 3544 3542 I915_WRITE(GTFIFODBG, gtfifodbg); 3545 3543 } 3546 - 3547 - valleyview_setup_pctx(dev); 3548 3544 3549 3545 /* If VLV, Forcewake all wells, else re-direct to regular path */ 3550 3546 gen6_gt_force_wake_get(dev_priv, FORCEWAKE_ALL); ··· 4395 4395 ironlake_enable_rc6(dev); 4396 4396 intel_init_emon(dev); 4397 4397 } else if (IS_GEN6(dev) || IS_GEN7(dev)) { 4398 + if (IS_VALLEYVIEW(dev)) 4399 + valleyview_setup_pctx(dev); 4398 4400 /* 4399 4401 * PCU communication is slow and this doesn't need to be 4400 4402 * done at any specific time, so do this out of our fast path
+1 -1
drivers/gpu/drm/radeon/atombios_encoders.c
··· 1314 1314 } 1315 1315 if (is_dp) 1316 1316 args.v5.ucLaneNum = dp_lane_count; 1317 - else if (radeon_encoder->pixel_clock > 165000) 1317 + else if (radeon_dig_monitor_is_duallink(encoder, radeon_encoder->pixel_clock)) 1318 1318 args.v5.ucLaneNum = 8; 1319 1319 else 1320 1320 args.v5.ucLaneNum = 4;
+7 -3
drivers/gpu/drm/radeon/cik.c
··· 3046 3046 } 3047 3047 3048 3048 /** 3049 - * cik_select_se_sh - select which SE, SH to address 3049 + * cik_get_rb_disabled - computes the mask of disabled RBs 3050 3050 * 3051 3051 * @rdev: radeon_device pointer 3052 3052 * @max_rb_num: max RBs (render backends) for the asic ··· 4134 4134 { 4135 4135 if (enable) 4136 4136 WREG32(CP_MEC_CNTL, 0); 4137 - else 4137 + else { 4138 4138 WREG32(CP_MEC_CNTL, (MEC_ME1_HALT | MEC_ME2_HALT)); 4139 + rdev->ring[CAYMAN_RING_TYPE_CP1_INDEX].ready = false; 4140 + rdev->ring[CAYMAN_RING_TYPE_CP2_INDEX].ready = false; 4141 + } 4139 4142 udelay(50); 4140 4143 } 4141 4144 ··· 7905 7902 /* init golden registers */ 7906 7903 cik_init_golden_registers(rdev); 7907 7904 7908 - radeon_pm_resume(rdev); 7905 + if (rdev->pm.pm_method == PM_METHOD_DPM) 7906 + radeon_pm_resume(rdev); 7909 7907 7910 7908 rdev->accel_working = true; 7911 7909 r = cik_startup(rdev);
+7 -7
drivers/gpu/drm/radeon/cik_sdma.c
··· 264 264 WREG32(SDMA0_GFX_RB_CNTL + reg_offset, rb_cntl); 265 265 WREG32(SDMA0_GFX_IB_CNTL + reg_offset, 0); 266 266 } 267 + rdev->ring[R600_RING_TYPE_DMA_INDEX].ready = false; 268 + rdev->ring[CAYMAN_RING_TYPE_DMA1_INDEX].ready = false; 267 269 } 268 270 269 271 /** ··· 292 290 { 293 291 u32 me_cntl, reg_offset; 294 292 int i; 293 + 294 + if (enable == false) { 295 + cik_sdma_gfx_stop(rdev); 296 + cik_sdma_rlc_stop(rdev); 297 + } 295 298 296 299 for (i = 0; i < 2; i++) { 297 300 if (i == 0) ··· 427 420 if (!rdev->sdma_fw) 428 421 return -EINVAL; 429 422 430 - /* stop the gfx rings and rlc compute queues */ 431 - cik_sdma_gfx_stop(rdev); 432 - cik_sdma_rlc_stop(rdev); 433 - 434 423 /* halt the MEs */ 435 424 cik_sdma_enable(rdev, false); 436 425 ··· 495 492 */ 496 493 void cik_sdma_fini(struct radeon_device *rdev) 497 494 { 498 - /* stop the gfx rings and rlc compute queues */ 499 - cik_sdma_gfx_stop(rdev); 500 - cik_sdma_rlc_stop(rdev); 501 495 /* halt the MEs */ 502 496 cik_sdma_enable(rdev, false); 503 497 radeon_ring_fini(rdev, &rdev->ring[R600_RING_TYPE_DMA_INDEX]);
+2 -1
drivers/gpu/drm/radeon/evergreen.c
··· 5299 5299 /* init golden registers */ 5300 5300 evergreen_init_golden_registers(rdev); 5301 5301 5302 - radeon_pm_resume(rdev); 5302 + if (rdev->pm.pm_method == PM_METHOD_DPM) 5303 + radeon_pm_resume(rdev); 5303 5304 5304 5305 rdev->accel_working = true; 5305 5306 r = evergreen_startup(rdev);
+1 -1
drivers/gpu/drm/radeon/evergreen_smc.h
··· 57 57 58 58 #define EVERGREEN_SMC_FIRMWARE_HEADER_LOCATION 0x100 59 59 60 - #define EVERGREEN_SMC_FIRMWARE_HEADER_softRegisters 0x0 60 + #define EVERGREEN_SMC_FIRMWARE_HEADER_softRegisters 0x8 61 61 #define EVERGREEN_SMC_FIRMWARE_HEADER_stateTable 0xC 62 62 #define EVERGREEN_SMC_FIRMWARE_HEADER_mcRegisterTable 0x20 63 63
+2 -1
drivers/gpu/drm/radeon/ni.c
··· 2105 2105 /* init golden registers */ 2106 2106 ni_init_golden_registers(rdev); 2107 2107 2108 - radeon_pm_resume(rdev); 2108 + if (rdev->pm.pm_method == PM_METHOD_DPM) 2109 + radeon_pm_resume(rdev); 2109 2110 2110 2111 rdev->accel_working = true; 2111 2112 r = cayman_startup(rdev);
-2
drivers/gpu/drm/radeon/r100.c
··· 3942 3942 /* Initialize surface registers */ 3943 3943 radeon_surface_init(rdev); 3944 3944 3945 - radeon_pm_resume(rdev); 3946 - 3947 3945 rdev->accel_working = true; 3948 3946 r = r100_startup(rdev); 3949 3947 if (r) {
-2
drivers/gpu/drm/radeon/r300.c
··· 1430 1430 /* Initialize surface registers */ 1431 1431 radeon_surface_init(rdev); 1432 1432 1433 - radeon_pm_resume(rdev); 1434 - 1435 1433 rdev->accel_working = true; 1436 1434 r = r300_startup(rdev); 1437 1435 if (r) {
-2
drivers/gpu/drm/radeon/r420.c
··· 325 325 /* Initialize surface registers */ 326 326 radeon_surface_init(rdev); 327 327 328 - radeon_pm_resume(rdev); 329 - 330 328 rdev->accel_working = true; 331 329 r = r420_startup(rdev); 332 330 if (r) {
-2
drivers/gpu/drm/radeon/r520.c
··· 240 240 /* Initialize surface registers */ 241 241 radeon_surface_init(rdev); 242 242 243 - radeon_pm_resume(rdev); 244 - 245 243 rdev->accel_working = true; 246 244 r = r520_startup(rdev); 247 245 if (r) {
+2 -1
drivers/gpu/drm/radeon/r600.c
··· 2968 2968 /* post card */ 2969 2969 atom_asic_init(rdev->mode_info.atom_context); 2970 2970 2971 - radeon_pm_resume(rdev); 2971 + if (rdev->pm.pm_method == PM_METHOD_DPM) 2972 + radeon_pm_resume(rdev); 2972 2973 2973 2974 rdev->accel_working = true; 2974 2975 r = r600_startup(rdev);
+4 -1
drivers/gpu/drm/radeon/radeon_device.c
··· 1521 1521 if (r) 1522 1522 DRM_ERROR("ib ring test failed (%d).\n", r); 1523 1523 1524 - if (rdev->pm.dpm_enabled) { 1524 + if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled) { 1525 1525 /* do dpm late init */ 1526 1526 r = radeon_pm_late_init(rdev); 1527 1527 if (r) { 1528 1528 rdev->pm.dpm_enabled = false; 1529 1529 DRM_ERROR("radeon_pm_late_init failed, disabling dpm\n"); 1530 1530 } 1531 + } else { 1532 + /* resume old pm late */ 1533 + radeon_pm_resume(rdev); 1531 1534 } 1532 1535 1533 1536 radeon_restore_bios_scratch_regs(rdev);
+9 -1
drivers/gpu/drm/radeon/radeon_kms.c
··· 33 33 #include <linux/vga_switcheroo.h> 34 34 #include <linux/slab.h> 35 35 #include <linux/pm_runtime.h> 36 + 37 + #if defined(CONFIG_VGA_SWITCHEROO) 38 + bool radeon_is_px(void); 39 + #else 40 + static inline bool radeon_is_px(void) { return false; } 41 + #endif 42 + 36 43 /** 37 44 * radeon_driver_unload_kms - Main unload function for KMS. 38 45 * ··· 137 130 "Error during ACPI methods call\n"); 138 131 } 139 132 140 - if (radeon_runtime_pm != 0) { 133 + if ((radeon_runtime_pm == 1) || 134 + ((radeon_runtime_pm == -1) && radeon_is_px())) { 141 135 pm_runtime_use_autosuspend(dev->dev); 142 136 pm_runtime_set_autosuspend_delay(dev->dev, 5000); 143 137 pm_runtime_set_active(dev->dev);
+4 -1
drivers/gpu/drm/radeon/radeon_ttm.c
··· 714 714 DRM_ERROR("Failed initializing VRAM heap.\n"); 715 715 return r; 716 716 } 717 + /* Change the size here instead of the init above so only lpfn is affected */ 718 + radeon_ttm_set_active_vram_size(rdev, rdev->mc.visible_vram_size); 719 + 717 720 r = radeon_bo_create(rdev, 256 * 1024, PAGE_SIZE, true, 718 721 RADEON_GEM_DOMAIN_VRAM, 719 722 NULL, &rdev->stollen_vga_memory); ··· 938 935 while (size) { 939 936 loff_t p = *pos / PAGE_SIZE; 940 937 unsigned off = *pos & ~PAGE_MASK; 941 - ssize_t cur_size = min(size, PAGE_SIZE - off); 938 + size_t cur_size = min_t(size_t, size, PAGE_SIZE - off); 942 939 struct page *page; 943 940 void *ptr; 944 941
-2
drivers/gpu/drm/radeon/rs400.c
··· 474 474 /* Initialize surface registers */ 475 475 radeon_surface_init(rdev); 476 476 477 - radeon_pm_resume(rdev); 478 - 479 477 rdev->accel_working = true; 480 478 r = rs400_startup(rdev); 481 479 if (r) {
-2
drivers/gpu/drm/radeon/rs600.c
··· 1048 1048 /* Initialize surface registers */ 1049 1049 radeon_surface_init(rdev); 1050 1050 1051 - radeon_pm_resume(rdev); 1052 - 1053 1051 rdev->accel_working = true; 1054 1052 r = rs600_startup(rdev); 1055 1053 if (r) {
-2
drivers/gpu/drm/radeon/rs690.c
··· 756 756 /* Initialize surface registers */ 757 757 radeon_surface_init(rdev); 758 758 759 - radeon_pm_resume(rdev); 760 - 761 759 rdev->accel_working = true; 762 760 r = rs690_startup(rdev); 763 761 if (r) {
-2
drivers/gpu/drm/radeon/rv515.c
··· 586 586 /* Initialize surface registers */ 587 587 radeon_surface_init(rdev); 588 588 589 - radeon_pm_resume(rdev); 590 - 591 589 rdev->accel_working = true; 592 590 r = rv515_startup(rdev); 593 591 if (r) {
+2 -1
drivers/gpu/drm/radeon/rv770.c
··· 1811 1811 /* init golden registers */ 1812 1812 rv770_init_golden_registers(rdev); 1813 1813 1814 - radeon_pm_resume(rdev); 1814 + if (rdev->pm.pm_method == PM_METHOD_DPM) 1815 + radeon_pm_resume(rdev); 1815 1816 1816 1817 rdev->accel_working = true; 1817 1818 r = rv770_startup(rdev);
+2 -1
drivers/gpu/drm/radeon/si.c
··· 6618 6618 /* init golden registers */ 6619 6619 si_init_golden_registers(rdev); 6620 6620 6621 - radeon_pm_resume(rdev); 6621 + if (rdev->pm.pm_method == PM_METHOD_DPM) 6622 + radeon_pm_resume(rdev); 6622 6623 6623 6624 rdev->accel_working = true; 6624 6625 r = si_startup(rdev);
+5 -3
drivers/gpu/drm/ttm/ttm_bo.c
··· 351 351 352 352 moved: 353 353 if (bo->evicted) { 354 - ret = bdev->driver->invalidate_caches(bdev, bo->mem.placement); 355 - if (ret) 356 - pr_err("Can not flush read caches\n"); 354 + if (bdev->driver->invalidate_caches) { 355 + ret = bdev->driver->invalidate_caches(bdev, bo->mem.placement); 356 + if (ret) 357 + pr_err("Can not flush read caches\n"); 358 + } 357 359 bo->evicted = false; 358 360 } 359 361
+7 -5
drivers/gpu/drm/ttm/ttm_bo_vm.c
··· 339 339 vma->vm_private_data = bo; 340 340 341 341 /* 342 - * PFNMAP is faster than MIXEDMAP due to reduced page 343 - * administration. So use MIXEDMAP only if private VMA, where 344 - * we need to support COW. 342 + * We'd like to use VM_PFNMAP on shared mappings, where 343 + * (vma->vm_flags & VM_SHARED) != 0, for performance reasons, 344 + * but for some reason VM_PFNMAP + x86 PAT + write-combine is very 345 + * bad for performance. Until that has been sorted out, use 346 + * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719 345 347 */ 346 - vma->vm_flags |= (vma->vm_flags & VM_SHARED) ? VM_PFNMAP : VM_MIXEDMAP; 348 + vma->vm_flags |= VM_MIXEDMAP; 347 349 vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP; 348 350 return 0; 349 351 out_unref: ··· 361 359 362 360 vma->vm_ops = &ttm_bo_vm_ops; 363 361 vma->vm_private_data = ttm_bo_reference(bo); 364 - vma->vm_flags |= (vma->vm_flags & VM_SHARED) ? VM_PFNMAP : VM_MIXEDMAP; 362 + vma->vm_flags |= VM_MIXEDMAP; 365 363 vma->vm_flags |= VM_IO | VM_DONTEXPAND; 366 364 return 0; 367 365 }
+18
drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
··· 830 830 if (unlikely(ret != 0)) 831 831 goto out_unlock; 832 832 833 + /* 834 + * A gb-aware client referencing a shared surface will 835 + * expect a backup buffer to be present. 836 + */ 837 + if (dev_priv->has_mob && req->shareable) { 838 + uint32_t backup_handle; 839 + 840 + ret = vmw_user_dmabuf_alloc(dev_priv, tfile, 841 + res->backup_size, 842 + true, 843 + &backup_handle, 844 + &res->backup); 845 + if (unlikely(ret != 0)) { 846 + vmw_resource_unreference(&res); 847 + goto out_unlock; 848 + } 849 + } 850 + 833 851 tmp = vmw_resource_reference(&srf->res); 834 852 ret = ttm_prime_object_init(tfile, res->backup_size, &user_srf->prime, 835 853 req->shareable, VMW_RES_SURFACE,
+1 -1
drivers/i2c/busses/Kconfig
··· 387 387 388 388 config I2C_CPM 389 389 tristate "Freescale CPM1 or CPM2 (MPC8xx/826x)" 390 - depends on (CPM1 || CPM2) && OF_I2C 390 + depends on CPM1 || CPM2 391 391 help 392 392 This supports the use of the I2C interface on Freescale 393 393 processors with CPM1 or CPM2.
+109 -73
drivers/infiniband/ulp/isert/ib_isert.c
··· 492 492 isert_conn->state = ISER_CONN_INIT; 493 493 INIT_LIST_HEAD(&isert_conn->conn_accept_node); 494 494 init_completion(&isert_conn->conn_login_comp); 495 - init_waitqueue_head(&isert_conn->conn_wait); 496 - init_waitqueue_head(&isert_conn->conn_wait_comp_err); 495 + init_completion(&isert_conn->conn_wait); 496 + init_completion(&isert_conn->conn_wait_comp_err); 497 497 kref_init(&isert_conn->conn_kref); 498 498 kref_get(&isert_conn->conn_kref); 499 499 mutex_init(&isert_conn->conn_mutex); 500 - mutex_init(&isert_conn->conn_comp_mutex); 501 500 spin_lock_init(&isert_conn->conn_lock); 502 501 503 502 cma_id->context = isert_conn; ··· 687 688 688 689 pr_debug("isert_disconnect_work(): >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n"); 689 690 mutex_lock(&isert_conn->conn_mutex); 690 - isert_conn->state = ISER_CONN_DOWN; 691 + if (isert_conn->state == ISER_CONN_UP) 692 + isert_conn->state = ISER_CONN_TERMINATING; 691 693 692 694 if (isert_conn->post_recv_buf_count == 0 && 693 695 atomic_read(&isert_conn->post_send_buf_count) == 0) { 694 - pr_debug("Calling wake_up(&isert_conn->conn_wait);\n"); 695 696 mutex_unlock(&isert_conn->conn_mutex); 696 697 goto wake_up; 697 698 } ··· 711 712 mutex_unlock(&isert_conn->conn_mutex); 712 713 713 714 wake_up: 714 - wake_up(&isert_conn->conn_wait); 715 + complete(&isert_conn->conn_wait); 715 716 isert_put_conn(isert_conn); 716 717 } 717 718 ··· 887 888 * Coalesce send completion interrupts by only setting IB_SEND_SIGNALED 888 889 * bit for every ISERT_COMP_BATCH_COUNT number of ib_post_send() calls. 
889 890 */ 890 - mutex_lock(&isert_conn->conn_comp_mutex); 891 - if (coalesce && 891 + mutex_lock(&isert_conn->conn_mutex); 892 + if (coalesce && isert_conn->state == ISER_CONN_UP && 892 893 ++isert_conn->conn_comp_batch < ISERT_COMP_BATCH_COUNT) { 894 + tx_desc->llnode_active = true; 893 895 llist_add(&tx_desc->comp_llnode, &isert_conn->conn_comp_llist); 894 - mutex_unlock(&isert_conn->conn_comp_mutex); 896 + mutex_unlock(&isert_conn->conn_mutex); 895 897 return; 896 898 } 897 899 isert_conn->conn_comp_batch = 0; 898 900 tx_desc->comp_llnode_batch = llist_del_all(&isert_conn->conn_comp_llist); 899 - mutex_unlock(&isert_conn->conn_comp_mutex); 901 + mutex_unlock(&isert_conn->conn_mutex); 900 902 901 903 send_wr->send_flags = IB_SEND_SIGNALED; 902 904 } ··· 1464 1464 case ISCSI_OP_SCSI_CMD: 1465 1465 spin_lock_bh(&conn->cmd_lock); 1466 1466 if (!list_empty(&cmd->i_conn_node)) 1467 - list_del(&cmd->i_conn_node); 1467 + list_del_init(&cmd->i_conn_node); 1468 1468 spin_unlock_bh(&conn->cmd_lock); 1469 1469 1470 1470 if (cmd->data_direction == DMA_TO_DEVICE) ··· 1476 1476 case ISCSI_OP_SCSI_TMFUNC: 1477 1477 spin_lock_bh(&conn->cmd_lock); 1478 1478 if (!list_empty(&cmd->i_conn_node)) 1479 - list_del(&cmd->i_conn_node); 1479 + list_del_init(&cmd->i_conn_node); 1480 1480 spin_unlock_bh(&conn->cmd_lock); 1481 1481 1482 1482 transport_generic_free_cmd(&cmd->se_cmd, 0); ··· 1486 1486 case ISCSI_OP_TEXT: 1487 1487 spin_lock_bh(&conn->cmd_lock); 1488 1488 if (!list_empty(&cmd->i_conn_node)) 1489 - list_del(&cmd->i_conn_node); 1489 + list_del_init(&cmd->i_conn_node); 1490 1490 spin_unlock_bh(&conn->cmd_lock); 1491 1491 1492 1492 /* ··· 1549 1549 iscsit_stop_dataout_timer(cmd); 1550 1550 device->unreg_rdma_mem(isert_cmd, isert_conn); 1551 1551 cmd->write_data_done = wr->cur_rdma_length; 1552 + wr->send_wr_num = 0; 1552 1553 1553 1554 pr_debug("Cmd: %p RDMA_READ comp calling execute_cmd\n", isert_cmd); 1554 1555 spin_lock_bh(&cmd->istate_lock); ··· 1590 1589 pr_debug("Calling iscsit_logout_post_handler >>>>>>>>>>>>>>\n"); 1591 1590 /* 1592 1591 * Call atomic_dec(&isert_conn->post_send_buf_count) 1593 - * from isert_free_conn() 1592 + * from isert_wait_conn() 1594 1593 */ 1595 1594 isert_conn->logout_posted = true; 1596 1595 iscsit_logout_post_handler(cmd, cmd->conn); ··· 1614 1613 struct ib_device *ib_dev) 1615 1614 { 1616 1615 struct iscsi_cmd *cmd = isert_cmd->iscsi_cmd; 1616 + struct isert_rdma_wr *wr = &isert_cmd->rdma_wr; 1617 1617 1618 1618 if (cmd->i_state == ISTATE_SEND_TASKMGTRSP || 1619 1619 cmd->i_state == ISTATE_SEND_LOGOUTRSP || ··· 1626 1624 queue_work(isert_comp_wq, &isert_cmd->comp_work); 1627 1625 return; 1628 1626 } 1629 - atomic_dec(&isert_conn->post_send_buf_count); 1627 + atomic_sub(wr->send_wr_num + 1, &isert_conn->post_send_buf_count); 1630 1628 1631 1629 cmd->i_state = ISTATE_SENT_STATUS; 1632 1630 isert_completion_put(tx_desc, isert_cmd, ib_dev); ··· 1664 1662 case ISER_IB_RDMA_READ: 1665 1663 pr_debug("isert_send_completion: Got ISER_IB_RDMA_READ:\n"); 1666 1664 1667 - atomic_dec(&isert_conn->post_send_buf_count); 1665 + atomic_sub(wr->send_wr_num, &isert_conn->post_send_buf_count); 1668 1666 isert_completion_rdma_read(tx_desc, isert_cmd); 1669 1667 break; 1670 1668 default: ··· 1693 1691 } 1694 1692 1695 1693 static void 1696 - isert_cq_comp_err(struct iser_tx_desc *tx_desc, struct isert_conn *isert_conn) 1694 + isert_cq_drain_comp_llist(struct isert_conn *isert_conn, struct ib_device *ib_dev) 1695 + { 1696 + struct llist_node *llnode; 1697 + struct isert_rdma_wr *wr; 1698 + struct iser_tx_desc *t; 1699 + 1700 + mutex_lock(&isert_conn->conn_mutex); 1701 + llnode = llist_del_all(&isert_conn->conn_comp_llist); 1702 + isert_conn->conn_comp_batch = 0; 1703 + mutex_unlock(&isert_conn->conn_mutex); 1704 + 1705 + while (llnode) { 1706 + t = llist_entry(llnode, struct iser_tx_desc, comp_llnode); 1707 + llnode = llist_next(llnode); 1708 + wr = &t->isert_cmd->rdma_wr; 1709 + 1710 + atomic_sub(wr->send_wr_num + 1, &isert_conn->post_send_buf_count); 1711 + isert_completion_put(t, t->isert_cmd, ib_dev); 1712 + } 1713 + } 1714 + 1715 + static void 1716 + isert_cq_tx_comp_err(struct iser_tx_desc *tx_desc, struct isert_conn *isert_conn) 1697 1717 { 1698 1718 struct ib_device *ib_dev = isert_conn->conn_cm_id->device; 1719 + struct isert_cmd *isert_cmd = tx_desc->isert_cmd; 1720 + struct llist_node *llnode = tx_desc->comp_llnode_batch; 1721 + struct isert_rdma_wr *wr; 1722 + struct iser_tx_desc *t; 1699 1723 1700 - if (tx_desc) { 1701 - struct isert_cmd *isert_cmd = tx_desc->isert_cmd; 1724 + while (llnode) { 1725 + t = llist_entry(llnode, struct iser_tx_desc, comp_llnode); 1726 + llnode = llist_next(llnode); 1727 + wr = &t->isert_cmd->rdma_wr; 1702 1728 1703 - if (!isert_cmd) 1704 - isert_unmap_tx_desc(tx_desc, ib_dev); 1705 - else 1706 - isert_completion_put(tx_desc, isert_cmd, ib_dev); 1729 + atomic_sub(wr->send_wr_num + 1, &isert_conn->post_send_buf_count); 1730 + isert_completion_put(t, t->isert_cmd, ib_dev); 1731 + } 1732 + tx_desc->comp_llnode_batch = NULL; 1733 + 1734 + if (!isert_cmd) 1735 + isert_unmap_tx_desc(tx_desc, ib_dev); 1736 + else 1737 + isert_completion_put(tx_desc, isert_cmd, ib_dev); 1738 + } 1739 + 1740 + static void 1741 + isert_cq_rx_comp_err(struct isert_conn *isert_conn) 1742 + { 1743 + struct ib_device *ib_dev = isert_conn->conn_cm_id->device; 1744 + struct iscsi_conn *conn = isert_conn->conn; 1745 + 1746 + if (isert_conn->post_recv_buf_count) 1747 + return; 1748 + 1749 + isert_cq_drain_comp_llist(isert_conn, ib_dev); 1750 + 1751 + if (conn->sess) { 1752 + target_sess_cmd_list_set_waiting(conn->sess->se_sess); 1753 + target_wait_for_sess_cmds(conn->sess->se_sess); 1707 1754 } 1708 1755 1709 - if (isert_conn->post_recv_buf_count == 0 && 1710 - atomic_read(&isert_conn->post_send_buf_count) == 0) { 1711 - pr_debug("isert_cq_comp_err >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n"); 1712 - pr_debug("Calling wake_up from isert_cq_comp_err\n"); 1756 + while (atomic_read(&isert_conn->post_send_buf_count)) 1757 + msleep(3000); 1713 1758 1714 - mutex_lock(&isert_conn->conn_mutex); 1715 - if (isert_conn->state != ISER_CONN_DOWN) 1716 - isert_conn->state = ISER_CONN_TERMINATING; 1717 - mutex_unlock(&isert_conn->conn_mutex); 1759 + mutex_lock(&isert_conn->conn_mutex); 1760 + isert_conn->state = ISER_CONN_DOWN; 1761 + mutex_unlock(&isert_conn->conn_mutex); 1718 1762 1719 - wake_up(&isert_conn->conn_wait_comp_err); 1720 - } 1763 + complete(&isert_conn->conn_wait_comp_err); 1721 1764 } 1722 1765 1723 1766 static void ··· 1787 1740 pr_debug("TX wc.status != IB_WC_SUCCESS >>>>>>>>>>>>>>\n"); 1788 1741 pr_debug("TX wc.status: 0x%08x\n", wc.status); 1789 1742 pr_debug("TX wc.vendor_err: 0x%08x\n", wc.vendor_err); 1790 - atomic_dec(&isert_conn->post_send_buf_count); 1791 - isert_cq_comp_err(tx_desc, isert_conn); 1743 + 1744 + if (wc.wr_id != ISER_FASTREG_LI_WRID) { 1745 + if (tx_desc->llnode_active) 1746 + continue; 1747 + 1748 + atomic_dec(&isert_conn->post_send_buf_count); 1749 + isert_cq_tx_comp_err(tx_desc, isert_conn); 1750 + } 1792 1751 } 1793 1752 } ··· 1837 1784 wc.vendor_err); 1838 1785 } 1839 1786 isert_conn->post_recv_buf_count--; 1840 - isert_cq_comp_err(NULL, isert_conn); 1787 + isert_cq_rx_comp_err(isert_conn); 1841 1788 } 1842 1789 } ··· 2255 2202 2256 2203 if (!fr_desc->valid) { 2257 2204 memset(&inv_wr, 0, sizeof(inv_wr)); 2205 + inv_wr.wr_id = ISER_FASTREG_LI_WRID; 2258 2206 inv_wr.opcode = IB_WR_LOCAL_INV; 2259 2207 inv_wr.ex.invalidate_rkey = fr_desc->data_mr->rkey; 2260 2208 wr = &inv_wr; ··· 2266 2212 2267 2213 /* Prepare FASTREG WR */ 2268 2214 memset(&fr_wr, 0, sizeof(fr_wr)); 2215 + fr_wr.wr_id = ISER_FASTREG_LI_WRID; 2269 2216 fr_wr.opcode = IB_WR_FAST_REG_MR; 2270 2217 fr_wr.wr.fast_reg.iova_start = 2271 2218 fr_desc->data_frpl->page_list[0] + page_off; ··· 2432 2377 isert_init_send_wr(isert_conn, isert_cmd, 2433 2378 &isert_cmd->tx_desc.send_wr, true); 2434 2379 2435 - atomic_inc(&isert_conn->post_send_buf_count); 2380 + atomic_add(wr->send_wr_num + 1, &isert_conn->post_send_buf_count); 2436 2381 2437 2382 rc = ib_post_send(isert_conn->conn_qp, wr->send_wr, &wr_failed); 2438 2383 if (rc) { 2439 2384 pr_warn("ib_post_send() failed for IB_WR_RDMA_WRITE\n"); 2440 - atomic_dec(&isert_conn->post_send_buf_count); 2385 + atomic_sub(wr->send_wr_num + 1, &isert_conn->post_send_buf_count); 2441 2386 } 2442 2387 pr_debug("Cmd: %p posted RDMA_WRITE + Response for iSER Data READ\n", 2443 2388 isert_cmd); ··· 2465 2410 return rc; 2466 2411 } 2467 2412 2468 - atomic_inc(&isert_conn->post_send_buf_count); 2413 + atomic_add(wr->send_wr_num, &isert_conn->post_send_buf_count); 2469 2414 2470 2415 rc = ib_post_send(isert_conn->conn_qp, wr->send_wr, &wr_failed); 2471 2416 if (rc) { 2472 2417 pr_warn("ib_post_send() failed for IB_WR_RDMA_READ\n"); 2473 - atomic_dec(&isert_conn->post_send_buf_count); 2418 + atomic_sub(wr->send_wr_num, &isert_conn->post_send_buf_count); 2474 2419 } 2475 2420 pr_debug("Cmd: %p posted RDMA_READ memory for ISER Data WRITE\n", 2476 2421 isert_cmd); ··· 2757 2702 kfree(isert_np); 2758 2703 } 2759 2704 2760 - static int isert_check_state(struct isert_conn *isert_conn, int state) 2761 - { 2762 - int ret; 2763 - 2764 - mutex_lock(&isert_conn->conn_mutex); 2765 - ret = (isert_conn->state == state); 2766 - mutex_unlock(&isert_conn->conn_mutex); 2767 - 2768 - return ret; 2769 - } 2770 - 2771 - static void isert_free_conn(struct iscsi_conn *conn) 2705 + static void isert_wait_conn(struct iscsi_conn *conn) 2772 2706 { 2773 2707 struct isert_conn *isert_conn = conn->context; 2774 2708 2775 - pr_debug("isert_free_conn: Starting \n"); 2709 + pr_debug("isert_wait_conn: Starting \n"); 2776 2710 /* 2777 2711 * Decrement post_send_buf_count for special case when called 2778 2712 * from isert_do_control_comp() -> iscsit_logout_post_handler() ··· 2771 2727 atomic_dec(&isert_conn->post_send_buf_count); 2772 2728 2773 if 
(isert_conn->conn_cm_id && isert_conn->state != ISER_CONN_DOWN) { 2774 - pr_debug("Calling rdma_disconnect from isert_free_conn\n"); 2730 + pr_debug("Calling rdma_disconnect from isert_wait_conn\n"); 2775 2731 rdma_disconnect(isert_conn->conn_cm_id); 2776 2732 } 2777 2733 /* 2778 2734 * Only wait for conn_wait_comp_err if the isert_conn made it 2779 2735 * into full feature phase.. 2780 2736 */ 2781 - if (isert_conn->state == ISER_CONN_UP) { 2782 - pr_debug("isert_free_conn: Before wait_event comp_err %d\n", 2783 - isert_conn->state); 2784 - mutex_unlock(&isert_conn->conn_mutex); 2785 - 2786 - wait_event(isert_conn->conn_wait_comp_err, 2787 - (isert_check_state(isert_conn, ISER_CONN_TERMINATING))); 2788 - 2789 - wait_event(isert_conn->conn_wait, 2790 - (isert_check_state(isert_conn, ISER_CONN_DOWN))); 2791 - 2792 - isert_put_conn(isert_conn); 2793 - return; 2794 - } 2795 2737 if (isert_conn->state == ISER_CONN_INIT) { 2796 2738 mutex_unlock(&isert_conn->conn_mutex); 2797 - isert_put_conn(isert_conn); 2798 2739 return; 2799 2740 } 2800 - pr_debug("isert_free_conn: wait_event conn_wait %d\n", 2801 - isert_conn->state); 2741 + if (isert_conn->state == ISER_CONN_UP) 2742 + isert_conn->state = ISER_CONN_TERMINATING; 2802 2743 mutex_unlock(&isert_conn->conn_mutex); 2803 2744 2804 - wait_event(isert_conn->conn_wait, 2805 - (isert_check_state(isert_conn, ISER_CONN_DOWN))); 2745 + wait_for_completion(&isert_conn->conn_wait_comp_err); 2746 + 2747 + wait_for_completion(&isert_conn->conn_wait); 2748 + } 2749 + 2750 + static void isert_free_conn(struct iscsi_conn *conn) 2751 + { 2752 + struct isert_conn *isert_conn = conn->context; 2806 2753 2807 2754 isert_put_conn(isert_conn); 2808 2755 } ··· 2806 2771 .iscsit_setup_np = isert_setup_np, 2807 2772 .iscsit_accept_np = isert_accept_np, 2808 2773 .iscsit_free_np = isert_free_np, 2774 + .iscsit_wait_conn = isert_wait_conn, 2809 2775 .iscsit_free_conn = isert_free_conn, 2810 2776 .iscsit_get_login_rx = isert_get_login_rx, 2811 2777 
.iscsit_put_login_tx = isert_put_login_tx,
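A recurring change in the hunks above is that the `post_send_buf_count` accounting moves from `atomic_inc()`/`atomic_dec()` to `atomic_add()`/`atomic_sub()` of `wr->send_wr_num` (plus one for the trailing response WR), so the in-flight counter reflects every work request in a chained post and can actually drain to zero. A minimal userspace sketch of that pattern, using C11 atomics instead of the kernel's `atomic_t` and hypothetical `post_chain`/`complete_chain` names:

```c
#include <stdatomic.h>

/* In-flight work-request counter, adjusted by the size of each chained
 * post rather than by 1 per post. */
static atomic_int in_flight;

/* The "+ 1" accounts for the extra response WR chained onto the data
 * WRs, mirroring the isert change above. */
static void post_chain(int send_wr_num)
{
	atomic_fetch_add(&in_flight, send_wr_num + 1);
}

static void complete_chain(int send_wr_num)
{
	atomic_fetch_sub(&in_flight, send_wr_num + 1);
}
```

With per-post counting, a chain of 3 data WRs plus its response adds 4 and removes 4, so error paths that wait for the counter to hit zero terminate.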
+4 -3
drivers/infiniband/ulp/isert/ib_isert.h
··· 6 6 7 7 #define ISERT_RDMA_LISTEN_BACKLOG 10 8 8 #define ISCSI_ISER_SG_TABLESIZE 256 9 + #define ISER_FASTREG_LI_WRID 0xffffffffffffffffULL 9 10 10 11 enum isert_desc_type { 11 12 ISCSI_TX_CONTROL, ··· 46 45 struct isert_cmd *isert_cmd; 47 46 struct llist_node *comp_llnode_batch; 48 47 struct llist_node comp_llnode; 48 + bool llnode_active; 49 49 struct ib_send_wr send_wr; 50 50 } __packed; 51 51 ··· 118 116 struct isert_device *conn_device; 119 117 struct work_struct conn_logout_work; 120 118 struct mutex conn_mutex; 121 - wait_queue_head_t conn_wait; 122 - wait_queue_head_t conn_wait_comp_err; 119 + struct completion conn_wait; 120 + struct completion conn_wait_comp_err; 123 121 struct kref conn_kref; 124 122 struct list_head conn_fr_pool; 125 123 int conn_fr_pool_size; ··· 128 126 #define ISERT_COMP_BATCH_COUNT 8 129 127 int conn_comp_batch; 130 128 struct llist_head conn_comp_llist; 131 - struct mutex conn_comp_mutex; 132 129 }; 133 130 134 131 #define ISERT_MAX_CQ 64
-10
drivers/md/Kconfig
··· 254 254 ---help--- 255 255 Provides thin provisioning and snapshots that share a data store. 256 256 257 - config DM_DEBUG_BLOCK_STACK_TRACING 258 - boolean "Keep stack trace of persistent data block lock holders" 259 - depends on STACKTRACE_SUPPORT && DM_PERSISTENT_DATA 260 - select STACKTRACE 261 - ---help--- 262 - Enable this for messages that may help debug problems with the 263 - block manager locking used by thin provisioning and caching. 264 - 265 - If unsure, say N. 266 - 267 257 config DM_CACHE 268 258 tristate "Cache target (EXPERIMENTAL)" 269 259 depends on BLK_DEV_DM
+2 -2
drivers/md/dm-cache-policy-mq.c
··· 872 872 { 873 873 struct mq_policy *mq = to_mq_policy(p); 874 874 875 - kfree(mq->table); 875 + vfree(mq->table); 876 876 epool_exit(&mq->cache_pool); 877 877 epool_exit(&mq->pre_cache_pool); 878 878 kfree(mq); ··· 1245 1245 1246 1246 mq->nr_buckets = next_power(from_cblock(cache_size) / 2, 16); 1247 1247 mq->hash_bits = ffs(mq->nr_buckets) - 1; 1248 - mq->table = kzalloc(sizeof(*mq->table) * mq->nr_buckets, GFP_KERNEL); 1248 + mq->table = vzalloc(sizeof(*mq->table) * mq->nr_buckets); 1249 1249 if (!mq->table) 1250 1250 goto bad_alloc_table; 1251 1251
+5 -6
drivers/md/dm-cache-target.c
··· 979 979 int r; 980 980 struct dm_io_region o_region, c_region; 981 981 struct cache *cache = mg->cache; 982 + sector_t cblock = from_cblock(mg->cblock); 982 983 983 984 o_region.bdev = cache->origin_dev->bdev; 984 985 o_region.count = cache->sectors_per_block; 985 986 986 987 c_region.bdev = cache->cache_dev->bdev; 987 - c_region.sector = from_cblock(mg->cblock) * cache->sectors_per_block; 988 + c_region.sector = cblock * cache->sectors_per_block; 988 989 c_region.count = cache->sectors_per_block; 989 990 990 991 if (mg->writeback || mg->demote) { ··· 2465 2464 bool discarded_block; 2466 2465 struct dm_bio_prison_cell *cell; 2467 2466 struct policy_result lookup_result; 2468 - struct per_bio_data *pb; 2467 + struct per_bio_data *pb = init_per_bio_data(bio, pb_data_size); 2469 2468 2470 - if (from_oblock(block) > from_oblock(cache->origin_blocks)) { 2469 + if (unlikely(from_oblock(block) >= from_oblock(cache->origin_blocks))) { 2471 2470 /* 2472 2471 * This can only occur if the io goes to a partial block at 2473 2472 * the end of the origin device. We don't cache these. 2474 2473 * Just remap to the origin and carry on. 2475 2474 */ 2476 - remap_to_origin_clear_discard(cache, bio, block); 2475 + remap_to_origin(cache, bio); 2477 2476 return DM_MAPIO_REMAPPED; 2478 2477 } 2479 - 2480 - pb = init_per_bio_data(bio, pb_data_size); 2481 2478 2482 2479 if (bio->bi_rw & (REQ_FLUSH | REQ_FUA | REQ_DISCARD)) { 2483 2480 defer_bio(cache, bio);
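The map-path hunk above tightens the origin bounds test from `>` to `>=` (and remaps such bios straight to the origin). The reason is the usual zero-based indexing argument: with `origin_blocks` blocks numbered `0..origin_blocks-1`, the index `origin_blocks` itself is already past the end. A tiny illustrative sketch (hypothetical helper name, not the kernel function):

```c
#include <stdbool.h>
#include <stdint.h>

/* Blocks are numbered 0..origin_blocks-1, so the first out-of-range
 * index is origin_blocks itself; the check must be >=, not >. */
static bool block_is_past_origin(uint64_t block, uint64_t origin_blocks)
{
	return block >= origin_blocks;
}
```

With the old `>` comparison, a bio landing exactly on block `origin_blocks` (a partial block at the end of the origin device) would have been treated as cacheable.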
+3
drivers/md/dm-snap-persistent.c
··· 546 546 r = insert_exceptions(ps, area, callback, callback_context, 547 547 &full); 548 548 549 + if (!full) 550 + memcpy(ps->area, area, ps->store->chunk_size << SECTOR_SHIFT); 551 + 549 552 dm_bufio_release(bp); 550 553 551 554 dm_bufio_forget(client, chunk);
+36 -1
drivers/md/dm-thin-metadata.c
··· 76 76 77 77 #define THIN_SUPERBLOCK_MAGIC 27022010 78 78 #define THIN_SUPERBLOCK_LOCATION 0 79 - #define THIN_VERSION 1 79 + #define THIN_VERSION 2 80 80 #define THIN_METADATA_CACHE_SIZE 64 81 81 #define SECTOR_TO_BLOCK_SHIFT 3 82 82 ··· 1754 1754 up_write(&pmd->root_lock); 1755 1755 1756 1756 return r; 1757 + } 1758 + 1759 + int dm_pool_metadata_set_needs_check(struct dm_pool_metadata *pmd) 1760 + { 1761 + int r; 1762 + struct dm_block *sblock; 1763 + struct thin_disk_superblock *disk_super; 1764 + 1765 + down_write(&pmd->root_lock); 1766 + pmd->flags |= THIN_METADATA_NEEDS_CHECK_FLAG; 1767 + 1768 + r = superblock_lock(pmd, &sblock); 1769 + if (r) { 1770 + DMERR("couldn't read superblock"); 1771 + goto out; 1772 + } 1773 + 1774 + disk_super = dm_block_data(sblock); 1775 + disk_super->flags = cpu_to_le32(pmd->flags); 1776 + 1777 + dm_bm_unlock(sblock); 1778 + out: 1779 + up_write(&pmd->root_lock); 1780 + return r; 1781 + } 1782 + 1783 + bool dm_pool_metadata_needs_check(struct dm_pool_metadata *pmd) 1784 + { 1785 + bool needs_check; 1786 + 1787 + down_read(&pmd->root_lock); 1788 + needs_check = pmd->flags & THIN_METADATA_NEEDS_CHECK_FLAG; 1789 + up_read(&pmd->root_lock); 1790 + 1791 + return needs_check; 1757 1792 }
+11
drivers/md/dm-thin-metadata.h
··· 25 25 26 26 /*----------------------------------------------------------------*/ 27 27 28 + /* 29 + * Thin metadata superblock flags. 30 + */ 31 + #define THIN_METADATA_NEEDS_CHECK_FLAG (1 << 0) 32 + 28 33 struct dm_pool_metadata; 29 34 struct dm_thin_device; 30 35 ··· 206 201 dm_block_t threshold, 207 202 dm_sm_threshold_fn fn, 208 203 void *context); 204 + 205 + /* 206 + * Updates the superblock immediately. 207 + */ 208 + int dm_pool_metadata_set_needs_check(struct dm_pool_metadata *pmd); 209 + bool dm_pool_metadata_needs_check(struct dm_pool_metadata *pmd); 209 210 210 211 /*----------------------------------------------------------------*/ 211 212
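The new `THIN_METADATA_NEEDS_CHECK_FLAG` is an ordinary superblock flag bit, so setting and querying it reduces to bit operations on the flags word. A userspace sketch of just that plumbing (the kernel version additionally writes the superblock and serializes on `pmd->root_lock`):

```c
#include <stdbool.h>
#include <stdint.h>

#define THIN_METADATA_NEEDS_CHECK_FLAG (1 << 0)

/* In-core copy of the superblock flags word. */
static uint32_t sb_flags;

static void set_needs_check(void)
{
	sb_flags |= THIN_METADATA_NEEDS_CHECK_FLAG;
}

static bool needs_check(void)
{
	return sb_flags & THIN_METADATA_NEEDS_CHECK_FLAG;
}
```

Because the flag is persisted, it survives reboot and keeps the pool out of write mode until a repair tool clears it.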
+235 -69
drivers/md/dm-thin.c
··· 130 130 struct dm_thin_new_mapping; 131 131 132 132 /* 133 - * The pool runs in 3 modes. Ordered in degraded order for comparisons. 133 + * The pool runs in 4 modes. Ordered in degraded order for comparisons. 134 134 */ 135 135 enum pool_mode { 136 136 PM_WRITE, /* metadata may be changed */ 137 + PM_OUT_OF_DATA_SPACE, /* metadata may be changed, though data may not be allocated */ 137 138 PM_READ_ONLY, /* metadata may not be changed */ 138 139 PM_FAIL, /* all I/O fails */ 139 140 };
··· 199 198 }; 200 199 201 200 static enum pool_mode get_pool_mode(struct pool *pool); 202 - static void out_of_data_space(struct pool *pool); 203 201 static void metadata_operation_failed(struct pool *pool, const char *op, int r); 204 202 205 203 /*
··· 226 226 227 227 struct pool *pool; 228 228 struct dm_thin_device *td; 229 + bool requeue_mode:1; 229 230 }; 230 231 231 232 /*----------------------------------------------------------------*/
··· 370 369 struct dm_thin_new_mapping *overwrite_mapping; 371 370 }; 372 371 373 - static void __requeue_bio_list(struct thin_c *tc, struct bio_list *master) 372 + static void requeue_bio_list(struct thin_c *tc, struct bio_list *master) 374 373 { 375 374 struct bio *bio; 376 375 struct bio_list bios; 376 + unsigned long flags; 377 377 378 378 bio_list_init(&bios); 379 + 380 + spin_lock_irqsave(&tc->pool->lock, flags); 379 381 bio_list_merge(&bios, master); 380 382 bio_list_init(master); 383 + spin_unlock_irqrestore(&tc->pool->lock, flags); 381 384 382 385 while ((bio = bio_list_pop(&bios))) { 383 386 struct dm_thin_endio_hook *h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook));
··· 396 391 static void requeue_io(struct thin_c *tc) 397 392 { 398 393 struct pool *pool = tc->pool; 394 + 395 + requeue_bio_list(tc, &pool->deferred_bios); 396 + requeue_bio_list(tc, &pool->retry_on_resume_list); 397 + } 398 + 399 + static void error_retry_list(struct pool *pool) 400 + { 401 + struct bio *bio; 399 402 unsigned long flags; 403 + struct bio_list bios; 404 + 405 + bio_list_init(&bios); 400 406 401 407 spin_lock_irqsave(&pool->lock, flags); 402 - __requeue_bio_list(tc, &pool->deferred_bios); 403 - __requeue_bio_list(tc, &pool->retry_on_resume_list); 408 + bio_list_merge(&bios, &pool->retry_on_resume_list); 409 + bio_list_init(&pool->retry_on_resume_list); 404 410 spin_unlock_irqrestore(&pool->lock, flags); 411 + 412 + while ((bio = bio_list_pop(&bios))) 413 + bio_io_error(bio); 405 414 } 406 415 407 416 /*
··· 944 925 } 945 926 } 946 927 928 + static void set_pool_mode(struct pool *pool, enum pool_mode new_mode); 929 + 947 930 static int alloc_data_block(struct thin_c *tc, dm_block_t *result) 948 931 { 949 932 int r; 950 933 dm_block_t free_blocks; 951 934 struct pool *pool = tc->pool; 952 935 953 - if (get_pool_mode(pool) != PM_WRITE) 936 + if (WARN_ON(get_pool_mode(pool) != PM_WRITE)) 954 937 return -EINVAL; 955 938 956 939 r = dm_pool_get_free_block_count(pool->pmd, &free_blocks);
··· 979 958 } 980 959 981 960 if (!free_blocks) { 982 - out_of_data_space(pool); 961 + set_pool_mode(pool, PM_OUT_OF_DATA_SPACE); 983 962 return -ENOSPC; 984 963 } 985 964 }
··· 1009 988 spin_unlock_irqrestore(&pool->lock, flags); 1010 989 } 1011 990 991 + static bool should_error_unserviceable_bio(struct pool *pool) 992 + { 993 + enum pool_mode m = get_pool_mode(pool); 994 + 995 + switch (m) { 996 + case PM_WRITE: 997 + /* Shouldn't get here */ 998 + DMERR_LIMIT("bio unserviceable, yet pool is in PM_WRITE mode"); 999 + return true; 1000 + 1001 + case PM_OUT_OF_DATA_SPACE: 1002 + return pool->pf.error_if_no_space; 1003 + 1004 + case PM_READ_ONLY: 1005 + case PM_FAIL: 1006 + return true; 1007 + default: 1008 + /* Shouldn't get here */ 1009 + DMERR_LIMIT("bio unserviceable, yet pool has an unknown mode"); 1010 + return true; 1011 + } 1012 + } 1013 + 1012 1014 static void handle_unserviceable_bio(struct pool *pool, struct bio *bio) 1013 1015 { 1014 - /* 1015 - * When pool is read-only, no cell locking is needed because
1016 - * nothing is changing. 1017 - */ 1018 - WARN_ON_ONCE(get_pool_mode(pool) != PM_READ_ONLY); 1019 - 1020 - if (pool->pf.error_if_no_space) 1016 + if (should_error_unserviceable_bio(pool)) 1021 1017 bio_io_error(bio); 1022 1018 else 1023 1019 retry_on_resume(bio); ··· 1045 1007 struct bio *bio; 1046 1008 struct bio_list bios; 1047 1009 1010 + if (should_error_unserviceable_bio(pool)) { 1011 + cell_error(pool, cell); 1012 + return; 1013 + } 1014 + 1048 1015 bio_list_init(&bios); 1049 1016 cell_release(pool, cell, &bios); 1050 1017 1051 - while ((bio = bio_list_pop(&bios))) 1052 - handle_unserviceable_bio(pool, bio); 1018 + if (should_error_unserviceable_bio(pool)) 1019 + while ((bio = bio_list_pop(&bios))) 1020 + bio_io_error(bio); 1021 + else 1022 + while ((bio = bio_list_pop(&bios))) 1023 + retry_on_resume(bio); 1053 1024 } 1054 1025 1055 1026 static void process_discard(struct thin_c *tc, struct bio *bio) ··· 1343 1296 } 1344 1297 } 1345 1298 1299 + static void process_bio_success(struct thin_c *tc, struct bio *bio) 1300 + { 1301 + bio_endio(bio, 0); 1302 + } 1303 + 1346 1304 static void process_bio_fail(struct thin_c *tc, struct bio *bio) 1347 1305 { 1348 1306 bio_io_error(bio); ··· 1379 1327 while ((bio = bio_list_pop(&bios))) { 1380 1328 struct dm_thin_endio_hook *h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook)); 1381 1329 struct thin_c *tc = h->tc; 1330 + 1331 + if (tc->requeue_mode) { 1332 + bio_endio(bio, DM_ENDIO_REQUEUE); 1333 + continue; 1334 + } 1382 1335 1383 1336 /* 1384 1337 * If we've got no free new_mapping structs, and processing ··· 1451 1394 1452 1395 /*----------------------------------------------------------------*/ 1453 1396 1397 + struct noflush_work { 1398 + struct work_struct worker; 1399 + struct thin_c *tc; 1400 + 1401 + atomic_t complete; 1402 + wait_queue_head_t wait; 1403 + }; 1404 + 1405 + static void complete_noflush_work(struct noflush_work *w) 1406 + { 1407 + atomic_set(&w->complete, 1); 1408 + wake_up(&w->wait); 
1409 + } 1410 + 1411 + static void do_noflush_start(struct work_struct *ws) 1412 + { 1413 + struct noflush_work *w = container_of(ws, struct noflush_work, worker); 1414 + w->tc->requeue_mode = true; 1415 + requeue_io(w->tc); 1416 + complete_noflush_work(w); 1417 + } 1418 + 1419 + static void do_noflush_stop(struct work_struct *ws) 1420 + { 1421 + struct noflush_work *w = container_of(ws, struct noflush_work, worker); 1422 + w->tc->requeue_mode = false; 1423 + complete_noflush_work(w); 1424 + } 1425 + 1426 + static void noflush_work(struct thin_c *tc, void (*fn)(struct work_struct *)) 1427 + { 1428 + struct noflush_work w; 1429 + 1430 + INIT_WORK(&w.worker, fn); 1431 + w.tc = tc; 1432 + atomic_set(&w.complete, 0); 1433 + init_waitqueue_head(&w.wait); 1434 + 1435 + queue_work(tc->pool->wq, &w.worker); 1436 + 1437 + wait_event(w.wait, atomic_read(&w.complete)); 1438 + } 1439 + 1440 + /*----------------------------------------------------------------*/ 1441 + 1454 1442 static enum pool_mode get_pool_mode(struct pool *pool) 1455 1443 { 1456 1444 return pool->pf.mode; 1457 1445 } 1458 1446 1447 + static void notify_of_pool_mode_change(struct pool *pool, const char *new_mode) 1448 + { 1449 + dm_table_event(pool->ti->table); 1450 + DMINFO("%s: switching pool to %s mode", 1451 + dm_device_name(pool->pool_md), new_mode); 1452 + } 1453 + 1459 1454 static void set_pool_mode(struct pool *pool, enum pool_mode new_mode) 1460 1455 { 1461 - int r; 1462 - enum pool_mode old_mode = pool->pf.mode; 1456 + struct pool_c *pt = pool->ti->private; 1457 + bool needs_check = dm_pool_metadata_needs_check(pool->pmd); 1458 + enum pool_mode old_mode = get_pool_mode(pool); 1459 + 1460 + /* 1461 + * Never allow the pool to transition to PM_WRITE mode if user 1462 + * intervention is required to verify metadata and data consistency. 
1463 + */ 1464 + if (new_mode == PM_WRITE && needs_check) { 1465 + DMERR("%s: unable to switch pool to write mode until repaired.", 1466 + dm_device_name(pool->pool_md)); 1467 + if (old_mode != new_mode) 1468 + new_mode = old_mode; 1469 + else 1470 + new_mode = PM_READ_ONLY; 1471 + } 1472 + /* 1473 + * If we were in PM_FAIL mode, rollback of metadata failed. We're 1474 + * not going to recover without a thin_repair. So we never let the 1475 + * pool move out of the old mode. 1476 + */ 1477 + if (old_mode == PM_FAIL) 1478 + new_mode = old_mode; 1463 1479 1464 1480 switch (new_mode) { 1465 1481 case PM_FAIL: 1466 1482 if (old_mode != new_mode) 1467 - DMERR("%s: switching pool to failure mode", 1468 - dm_device_name(pool->pool_md)); 1483 + notify_of_pool_mode_change(pool, "failure"); 1469 1484 dm_pool_metadata_read_only(pool->pmd); 1470 1485 pool->process_bio = process_bio_fail; 1471 1486 pool->process_discard = process_bio_fail; 1472 1487 pool->process_prepared_mapping = process_prepared_mapping_fail; 1473 1488 pool->process_prepared_discard = process_prepared_discard_fail; 1489 + 1490 + error_retry_list(pool); 1474 1491 break; 1475 1492 1476 1493 case PM_READ_ONLY: 1477 1494 if (old_mode != new_mode) 1478 - DMERR("%s: switching pool to read-only mode", 1479 - dm_device_name(pool->pool_md)); 1480 - r = dm_pool_abort_metadata(pool->pmd); 1481 - if (r) { 1482 - DMERR("%s: aborting transaction failed", 1483 - dm_device_name(pool->pool_md)); 1484 - new_mode = PM_FAIL; 1485 - set_pool_mode(pool, new_mode); 1486 - } else { 1487 - dm_pool_metadata_read_only(pool->pmd); 1488 - pool->process_bio = process_bio_read_only; 1489 - pool->process_discard = process_discard; 1490 - pool->process_prepared_mapping = process_prepared_mapping_fail; 1491 - pool->process_prepared_discard = process_prepared_discard_passdown; 1492 - } 1495 + notify_of_pool_mode_change(pool, "read-only"); 1496 + dm_pool_metadata_read_only(pool->pmd); 1497 + pool->process_bio = process_bio_read_only; 1498 + pool->process_discard = process_bio_success; 1499 + pool->process_prepared_mapping = process_prepared_mapping_fail; 1500 + pool->process_prepared_discard = process_prepared_discard_passdown; 1501 + 1502 + error_retry_list(pool); 1503 + break; 1504 + 1505 + case PM_OUT_OF_DATA_SPACE: 1506 + /* 1507 + * Ideally we'd never hit this state; the low water mark 1508 + * would trigger userland to extend the pool before we 1509 + * completely run out of data space. However, many small 1510 + * IOs to unprovisioned space can consume data space at an 1511 + * alarming rate. Adjust your low water mark if you're 1512 + * frequently seeing this mode. 1513 + */ 1514 + if (old_mode != new_mode) 1515 + notify_of_pool_mode_change(pool, "out-of-data-space"); 1516 + pool->process_bio = process_bio_read_only; 1517 + pool->process_discard = process_discard; 1518 + pool->process_prepared_mapping = process_prepared_mapping; 1519 + pool->process_prepared_discard = process_prepared_discard_passdown; 1493 1520 break; 1494 1521 1495 1522 case PM_WRITE: 1496 1523 if (old_mode != new_mode) 1497 - DMINFO("%s: switching pool to write mode", 1498 - dm_device_name(pool->pool_md)); 1524 + notify_of_pool_mode_change(pool, "write"); 1499 1525 dm_pool_metadata_read_write(pool->pmd); 1500 1526 pool->process_bio = process_bio; 1501 1527 pool->process_discard = process_discard;
··· 1588 1448 } 1589 1449 1590 1450 pool->pf.mode = new_mode; 1451 + /* 1452 + * The pool mode may have changed, sync it so bind_control_target() 1453 + * doesn't cause an unexpected mode transition on resume. 1454 + */ 1455 + pt->adjusted_pf.mode = new_mode; 1591 1456 } 1592 1457 1593 - /* 1594 - * Rather than calling set_pool_mode directly, use these which describe the 1595 - * reason for mode degradation.
1596 - */ 1597 - static void out_of_data_space(struct pool *pool) 1458 + static void abort_transaction(struct pool *pool) 1598 1459 { 1599 - DMERR_LIMIT("%s: no free data space available.", 1600 - dm_device_name(pool->pool_md)); 1601 - set_pool_mode(pool, PM_READ_ONLY); 1460 + const char *dev_name = dm_device_name(pool->pool_md); 1461 + 1462 + DMERR_LIMIT("%s: aborting current metadata transaction", dev_name); 1463 + if (dm_pool_abort_metadata(pool->pmd)) { 1464 + DMERR("%s: failed to abort metadata transaction", dev_name); 1465 + set_pool_mode(pool, PM_FAIL); 1466 + } 1467 + 1468 + if (dm_pool_metadata_set_needs_check(pool->pmd)) { 1469 + DMERR("%s: failed to set 'needs_check' flag in metadata", dev_name); 1470 + set_pool_mode(pool, PM_FAIL); 1471 + } 1602 1472 } 1603 1473 1604 1474 static void metadata_operation_failed(struct pool *pool, const char *op, int r) 1605 1475 { 1606 - dm_block_t free_blocks; 1607 - 1608 1476 DMERR_LIMIT("%s: metadata operation '%s' failed: error = %d", 1609 1477 dm_device_name(pool->pool_md), op, r); 1610 1478 1611 - if (r == -ENOSPC && 1612 - !dm_pool_get_free_metadata_block_count(pool->pmd, &free_blocks) && 1613 - !free_blocks) 1614 - DMERR_LIMIT("%s: no free metadata space available.", 1615 - dm_device_name(pool->pool_md)); 1616 - 1479 + abort_transaction(pool); 1617 1480 set_pool_mode(pool, PM_READ_ONLY); 1618 1481 } 1619 1482 ··· 1666 1523 struct dm_cell_key key; 1667 1524 1668 1525 thin_hook_bio(tc, bio); 1526 + 1527 + if (tc->requeue_mode) { 1528 + bio_endio(bio, DM_ENDIO_REQUEUE); 1529 + return DM_MAPIO_SUBMITTED; 1530 + } 1669 1531 1670 1532 if (get_pool_mode(tc->pool) == PM_FAIL) { 1671 1533 bio_io_error(bio); ··· 1835 1687 /* 1836 1688 * We want to make sure that a pool in PM_FAIL mode is never upgraded. 
1837 1689 */ 1838 - enum pool_mode old_mode = pool->pf.mode; 1690 + enum pool_mode old_mode = get_pool_mode(pool); 1839 1691 enum pool_mode new_mode = pt->adjusted_pf.mode; 1840 1692 1841 1693 /* ··· 1848 1700 pool->ti = ti; 1849 1701 pool->pf = pt->adjusted_pf; 1850 1702 pool->low_water_blocks = pt->low_water_blocks; 1851 - 1852 - /* 1853 - * If we were in PM_FAIL mode, rollback of metadata failed. We're 1854 - * not going to recover without a thin_repair. So we never let the 1855 - * pool move out of the old mode. On the other hand a PM_READ_ONLY 1856 - * may have been due to a lack of metadata or data space, and may 1857 - * now work (ie. if the underlying devices have been resized). 1858 - */ 1859 - if (old_mode == PM_FAIL) 1860 - new_mode = old_mode; 1861 1703 1862 1704 set_pool_mode(pool, new_mode); 1863 1705 ··· 2391 2253 return -EINVAL; 2392 2254 2393 2255 } else if (data_size > sb_data_size) { 2256 + if (dm_pool_metadata_needs_check(pool->pmd)) { 2257 + DMERR("%s: unable to grow the data device until repaired.", 2258 + dm_device_name(pool->pool_md)); 2259 + return 0; 2260 + } 2261 + 2394 2262 if (sb_data_size) 2395 2263 DMINFO("%s: growing the data device from %llu to %llu blocks", 2396 2264 dm_device_name(pool->pool_md), ··· 2438 2294 return -EINVAL; 2439 2295 2440 2296 } else if (metadata_dev_size > sb_metadata_dev_size) { 2297 + if (dm_pool_metadata_needs_check(pool->pmd)) { 2298 + DMERR("%s: unable to grow the metadata device until repaired.", 2299 + dm_device_name(pool->pool_md)); 2300 + return 0; 2301 + } 2302 + 2441 2303 warn_if_metadata_device_too_big(pool->md_dev); 2442 2304 DMINFO("%s: growing the metadata device from %llu to %llu blocks", 2443 2305 dm_device_name(pool->pool_md), ··· 2831 2681 else 2832 2682 DMEMIT("- "); 2833 2683 2834 - if (pool->pf.mode == PM_READ_ONLY) 2684 + if (pool->pf.mode == PM_OUT_OF_DATA_SPACE) 2685 + DMEMIT("out_of_data_space "); 2686 + else if (pool->pf.mode == PM_READ_ONLY) 2835 2687 DMEMIT("ro "); 2836 2688 else 
2837 2689 DMEMIT("rw "); ··· 2947 2795 .name = "thin-pool", 2948 2796 .features = DM_TARGET_SINGLETON | DM_TARGET_ALWAYS_WRITEABLE | 2949 2797 DM_TARGET_IMMUTABLE, 2950 - .version = {1, 10, 0}, 2798 + .version = {1, 11, 0}, 2951 2799 .module = THIS_MODULE, 2952 2800 .ctr = pool_ctr, 2953 2801 .dtr = pool_dtr, ··· 3149 2997 return 0; 3150 2998 } 3151 2999 3000 + static void thin_presuspend(struct dm_target *ti) 3001 + { 3002 + struct thin_c *tc = ti->private; 3003 + 3004 + if (dm_noflush_suspending(ti)) 3005 + noflush_work(tc, do_noflush_start); 3006 + } 3007 + 3152 3008 static void thin_postsuspend(struct dm_target *ti) 3153 3009 { 3154 - if (dm_noflush_suspending(ti)) 3155 - requeue_io((struct thin_c *)ti->private); 3010 + struct thin_c *tc = ti->private; 3011 + 3012 + /* 3013 + * The dm_noflush_suspending flag has been cleared by now, so 3014 + * unfortunately we must always run this. 3015 + */ 3016 + noflush_work(tc, do_noflush_stop); 3156 3017 } 3157 3018 3158 3019 /* ··· 3250 3085 3251 3086 static struct target_type thin_target = { 3252 3087 .name = "thin", 3253 - .version = {1, 10, 0}, 3088 + .version = {1, 11, 0}, 3254 3089 .module = THIS_MODULE, 3255 3090 .ctr = thin_ctr, 3256 3091 .dtr = thin_dtr, 3257 3092 .map = thin_map, 3258 3093 .end_io = thin_endio, 3094 + .presuspend = thin_presuspend, 3259 3095 .postsuspend = thin_postsuspend, 3260 3096 .status = thin_status, 3261 3097 .iterate_devices = thin_iterate_devices,
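The dm-thin changes above hinge on the comment "Ordered in degraded order for comparisons": because the `pool_mode` enumerators run from healthiest (`PM_WRITE`) to most degraded (`PM_FAIL`), a plain numeric comparison of two modes says which one is worse, which is what lets `PM_OUT_OF_DATA_SPACE` slot in between write and read-only. A small sketch of that convention (the comparison helper is a hypothetical name, not a kernel function):

```c
#include <stdbool.h>

/* Enumerators declared from healthiest to most degraded, mirroring the
 * ordering the dm-thin comment relies on. */
enum pool_mode {
	PM_WRITE,
	PM_OUT_OF_DATA_SPACE,
	PM_READ_ONLY,
	PM_FAIL,
};

/* With that declaration order, "more degraded" is just numerically
 * greater. */
static bool mode_is_more_degraded(enum pool_mode a, enum pool_mode b)
{
	return a > b;
}
```

Encoding the severity in the declaration order keeps mode-transition checks to single comparisons instead of per-pair switch statements.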
+10
drivers/md/persistent-data/Kconfig
··· 6 6 ---help--- 7 7 Library providing immutable on-disk data structure support for 8 8 device-mapper targets such as the thin provisioning target. 9 + 10 + config DM_DEBUG_BLOCK_STACK_TRACING 11 + boolean "Keep stack trace of persistent data block lock holders" 12 + depends on STACKTRACE_SUPPORT && DM_PERSISTENT_DATA 13 + select STACKTRACE 14 + ---help--- 15 + Enable this for messages that may help debug problems with the 16 + block manager locking used by thin provisioning and caching. 17 + 18 + If unsure, say N.
+92 -21
drivers/md/persistent-data/dm-space-map-metadata.c
··· 91 91 dm_block_t block; 92 92 }; 93 93 94 + struct bop_ring_buffer { 95 + unsigned begin; 96 + unsigned end; 97 + struct block_op bops[MAX_RECURSIVE_ALLOCATIONS + 1]; 98 + }; 99 + 100 + static void brb_init(struct bop_ring_buffer *brb) 101 + { 102 + brb->begin = 0; 103 + brb->end = 0; 104 + } 105 + 106 + static bool brb_empty(struct bop_ring_buffer *brb) 107 + { 108 + return brb->begin == brb->end; 109 + } 110 + 111 + static unsigned brb_next(struct bop_ring_buffer *brb, unsigned old) 112 + { 113 + unsigned r = old + 1; 114 + return (r >= (sizeof(brb->bops) / sizeof(*brb->bops))) ? 0 : r; 115 + } 116 + 117 + static int brb_push(struct bop_ring_buffer *brb, 118 + enum block_op_type type, dm_block_t b) 119 + { 120 + struct block_op *bop; 121 + unsigned next = brb_next(brb, brb->end); 122 + 123 + /* 124 + * We don't allow the last bop to be filled, this way we can 125 + * differentiate between full and empty. 126 + */ 127 + if (next == brb->begin) 128 + return -ENOMEM; 129 + 130 + bop = brb->bops + brb->end; 131 + bop->type = type; 132 + bop->block = b; 133 + 134 + brb->end = next; 135 + 136 + return 0; 137 + } 138 + 139 + static int brb_pop(struct bop_ring_buffer *brb, struct block_op *result) 140 + { 141 + struct block_op *bop; 142 + 143 + if (brb_empty(brb)) 144 + return -ENODATA; 145 + 146 + bop = brb->bops + brb->begin; 147 + result->type = bop->type; 148 + result->block = bop->block; 149 + 150 + brb->begin = brb_next(brb, brb->begin); 151 + 152 + return 0; 153 + } 154 + 155 + /*----------------------------------------------------------------*/ 156 + 94 157 struct sm_metadata { 95 158 struct dm_space_map sm; 96 159
··· 164 101 165 102 unsigned recursion_count; 166 103 unsigned allocated_this_transaction; 167 - unsigned nr_uncommitted; 168 - struct block_op uncommitted[MAX_RECURSIVE_ALLOCATIONS]; 104 + struct bop_ring_buffer uncommitted; 169 105 170 106 struct threshold threshold; 171 107 }; 172 108 173 109 static int add_bop(struct sm_metadata *smm, enum block_op_type type, dm_block_t b) 174 110 { 175 - struct block_op *op; 111 + int r = brb_push(&smm->uncommitted, type, b); 176 112 177 - if (smm->nr_uncommitted == MAX_RECURSIVE_ALLOCATIONS) { 113 + if (r) { 178 114 DMERR("too many recursive allocations"); 179 115 return -ENOMEM; 180 116 } 181 - 182 - op = smm->uncommitted + smm->nr_uncommitted++; 183 - op->type = type; 184 - op->block = b; 185 117 186 118 return 0; 187 119 }
··· 216 158 return -ENOMEM; 217 159 } 218 160 219 - if (smm->recursion_count == 1 && smm->nr_uncommitted) { 220 - while (smm->nr_uncommitted && !r) { 221 - smm->nr_uncommitted--; 222 - r = commit_bop(smm, smm->uncommitted + 223 - smm->nr_uncommitted); 161 + if (smm->recursion_count == 1) { 162 + while (!brb_empty(&smm->uncommitted)) { 163 + struct block_op bop; 164 + 165 + r = brb_pop(&smm->uncommitted, &bop); 166 + if (r) { 167 + DMERR("bug in bop ring buffer"); 168 + break; 169 + } 170 + 171 + r = commit_bop(smm, &bop); 224 172 if (r) 225 173 break; 226 174 }
··· 281 217 static int sm_metadata_get_count(struct dm_space_map *sm, dm_block_t b, 282 218 uint32_t *result) 283 219 { 284 - int r, i; 220 + int r; 221 + unsigned i; 285 222 struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm); 286 223 unsigned adjustment = 0; 287 224
··· 290 225 * We may have some uncommitted adjustments to add. This list 291 226 * should always be really short.
292 227 */ 293 - for (i = 0; i < smm->nr_uncommitted; i++) { 294 - struct block_op *op = smm->uncommitted + i; 228 + for (i = smm->uncommitted.begin; 229 + i != smm->uncommitted.end; 230 + i = brb_next(&smm->uncommitted, i)) { 231 + struct block_op *op = smm->uncommitted.bops + i; 295 232 296 233 if (op->block != b) 297 234 continue; ··· 321 254 static int sm_metadata_count_is_more_than_one(struct dm_space_map *sm, 322 255 dm_block_t b, int *result) 323 256 { 324 - int r, i, adjustment = 0; 257 + int r, adjustment = 0; 258 + unsigned i; 325 259 struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm); 326 260 uint32_t rc; 327 261 ··· 330 262 * We may have some uncommitted adjustments to add. This list 331 263 * should always be really short. 332 264 */ 333 - for (i = 0; i < smm->nr_uncommitted; i++) { 334 - struct block_op *op = smm->uncommitted + i; 265 + for (i = smm->uncommitted.begin; 266 + i != smm->uncommitted.end; 267 + i = brb_next(&smm->uncommitted, i)) { 268 + 269 + struct block_op *op = smm->uncommitted.bops + i; 335 270 336 271 if (op->block != b) 337 272 continue; ··· 742 671 smm->begin = superblock + 1; 743 672 smm->recursion_count = 0; 744 673 smm->allocated_this_transaction = 0; 745 - smm->nr_uncommitted = 0; 674 + brb_init(&smm->uncommitted); 746 675 threshold_init(&smm->threshold); 747 676 748 677 memcpy(&smm->sm, &bootstrap_ops, sizeof(smm->sm)); ··· 786 715 smm->begin = 0; 787 716 smm->recursion_count = 0; 788 717 smm->allocated_this_transaction = 0; 789 - smm->nr_uncommitted = 0; 718 + brb_init(&smm->uncommitted); 790 719 threshold_init(&smm->threshold); 791 720 792 721 memcpy(&smm->old_ll, &smm->ll, sizeof(smm->old_ll));
+1 -1
drivers/misc/sgi-xp/xpc_uv.c
··· 240 240 241 241 nid = cpu_to_node(cpu); 242 242 page = alloc_pages_exact_node(nid, 243 - GFP_KERNEL | __GFP_ZERO | GFP_THISNODE, 243 + GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE, 244 244 pg_order); 245 245 if (page == NULL) { 246 246 dev_err(xpc_part, "xpc_create_gru_mq_uv() failed to alloc %d "
+1 -1
drivers/net/bonding/bond_alb.c
··· 722 722 client_info->ntt = 0; 723 723 } 724 724 725 - if (!vlan_get_tag(skb, &client_info->vlan_id)) 725 + if (vlan_get_tag(skb, &client_info->vlan_id)) 726 726 client_info->vlan_id = 0; 727 727 728 728 if (!client_info->assigned) {
+1
drivers/net/bonding/bond_options.c
··· 176 176 static const struct bond_opt_value bond_lp_interval_tbl[] = { 177 177 { "minval", 1, BOND_VALFLAG_MIN | BOND_VALFLAG_DEFAULT}, 178 178 { "maxval", INT_MAX, BOND_VALFLAG_MAX}, 179 + { NULL, -1, 0}, 179 180 }; 180 181 181 182 static const struct bond_option bond_opts[] = {
+33 -4
drivers/net/ethernet/broadcom/bnx2.c
··· 2507 2507 2508 2508 bp->fw_wr_seq++; 2509 2509 msg_data |= bp->fw_wr_seq; 2510 + bp->fw_last_msg = msg_data; 2510 2511 2511 2512 bnx2_shmem_wr(bp, BNX2_DRV_MB, msg_data); 2512 2513 ··· 4004 4003 wol_msg = BNX2_DRV_MSG_CODE_SUSPEND_NO_WOL; 4005 4004 } 4006 4005 4007 - if (!(bp->flags & BNX2_FLAG_NO_WOL)) 4008 - bnx2_fw_sync(bp, BNX2_DRV_MSG_DATA_WAIT3 | wol_msg, 1, 0); 4006 + if (!(bp->flags & BNX2_FLAG_NO_WOL)) { 4007 + u32 val; 4008 + 4009 + wol_msg |= BNX2_DRV_MSG_DATA_WAIT3; 4010 + if (bp->fw_last_msg || BNX2_CHIP(bp) != BNX2_CHIP_5709) { 4011 + bnx2_fw_sync(bp, wol_msg, 1, 0); 4012 + return; 4013 + } 4014 + /* Tell firmware not to power down the PHY yet, otherwise 4015 + * the chip will take a long time to respond to MMIO reads. 4016 + */ 4017 + val = bnx2_shmem_rd(bp, BNX2_PORT_FEATURE); 4018 + bnx2_shmem_wr(bp, BNX2_PORT_FEATURE, 4019 + val | BNX2_PORT_FEATURE_ASF_ENABLED); 4020 + bnx2_fw_sync(bp, wol_msg, 1, 0); 4021 + bnx2_shmem_wr(bp, BNX2_PORT_FEATURE, val); 4022 + } 4009 4023 4010 4024 } 4011 4025 ··· 4052 4036 4053 4037 if (bp->wol) 4054 4038 pci_set_power_state(bp->pdev, PCI_D3hot); 4055 - } else { 4056 - pci_set_power_state(bp->pdev, PCI_D3hot); 4039 + break; 4040 + 4057 4041 } 4042 + if (!bp->fw_last_msg && BNX2_CHIP(bp) == BNX2_CHIP_5709) { 4043 + u32 val; 4044 + 4045 + /* Tell firmware not to power down the PHY yet, 4046 + * otherwise the other port may not respond to 4047 + * MMIO reads. 4048 + */ 4049 + val = bnx2_shmem_rd(bp, BNX2_BC_STATE_CONDITION); 4050 + val &= ~BNX2_CONDITION_PM_STATE_MASK; 4051 + val |= BNX2_CONDITION_PM_STATE_UNPREP; 4052 + bnx2_shmem_wr(bp, BNX2_BC_STATE_CONDITION, val); 4053 + } 4054 + pci_set_power_state(bp->pdev, PCI_D3hot); 4058 4055 4059 4056 /* No more memory access after this point until 4060 4057 * device is brought back to D0.
+5
drivers/net/ethernet/broadcom/bnx2.h
··· 6900 6900 6901 6901 u16 fw_wr_seq; 6902 6902 u16 fw_drv_pulse_wr_seq; 6903 + u32 fw_last_msg; 6903 6904 6904 6905 int rx_max_ring; 6905 6906 int rx_ring_size; ··· 7407 7406 #define BNX2_CONDITION_MFW_RUN_NCSI 0x00006000 7408 7407 #define BNX2_CONDITION_MFW_RUN_NONE 0x0000e000 7409 7408 #define BNX2_CONDITION_MFW_RUN_MASK 0x0000e000 7409 + #define BNX2_CONDITION_PM_STATE_MASK 0x00030000 7410 + #define BNX2_CONDITION_PM_STATE_FULL 0x00030000 7411 + #define BNX2_CONDITION_PM_STATE_PREP 0x00020000 7412 + #define BNX2_CONDITION_PM_STATE_UNPREP 0x00010000 7410 7413 7411 7414 #define BNX2_BC_STATE_DEBUG_CMD 0x1dc 7412 7415 #define BNX2_BC_STATE_BC_DBG_CMD_SIGNATURE 0x42440000
+1 -1
drivers/net/ethernet/brocade/bna/bfa_ioc.c
··· 1704 1704 while (!bfa_raw_sem_get(bar)) { 1705 1705 if (--n <= 0) 1706 1706 return BFA_STATUS_BADFLASH; 1707 - udelay(10000); 1707 + mdelay(10); 1708 1708 } 1709 1709 return BFA_STATUS_OK; 1710 1710 }
+13 -3
drivers/net/ethernet/cadence/macb.c
··· 632 632 "Unable to allocate sk_buff\n"); 633 633 break; 634 634 } 635 - bp->rx_skbuff[entry] = skb; 636 635 637 636 /* now fill corresponding descriptor entry */ 638 637 paddr = dma_map_single(&bp->pdev->dev, skb->data, 639 638 bp->rx_buffer_size, DMA_FROM_DEVICE); 639 + if (dma_mapping_error(&bp->pdev->dev, paddr)) { 640 + dev_kfree_skb(skb); 641 + break; 642 + } 643 + 644 + bp->rx_skbuff[entry] = skb; 640 645 641 646 if (entry == RX_RING_SIZE - 1) 642 647 paddr |= MACB_BIT(RX_WRAP); ··· 730 725 skb_put(skb, len); 731 726 addr = MACB_BF(RX_WADDR, MACB_BFEXT(RX_WADDR, addr)); 732 727 dma_unmap_single(&bp->pdev->dev, addr, 733 - len, DMA_FROM_DEVICE); 728 + bp->rx_buffer_size, DMA_FROM_DEVICE); 734 729 735 730 skb->protocol = eth_type_trans(skb, bp->dev); 736 731 skb_checksum_none_assert(skb); ··· 1041 1036 } 1042 1037 1043 1038 entry = macb_tx_ring_wrap(bp->tx_head); 1044 - bp->tx_head++; 1045 1039 netdev_vdbg(bp->dev, "Allocated ring entry %u\n", entry); 1046 1040 mapping = dma_map_single(&bp->pdev->dev, skb->data, 1047 1041 len, DMA_TO_DEVICE); 1042 + if (dma_mapping_error(&bp->pdev->dev, mapping)) { 1043 + kfree_skb(skb); 1044 + goto unlock; 1045 + } 1048 1046 1047 + bp->tx_head++; 1049 1048 tx_skb = &bp->tx_skb[entry]; 1050 1049 tx_skb->skb = skb; 1051 1050 tx_skb->mapping = mapping; ··· 1075 1066 if (CIRC_SPACE(bp->tx_head, bp->tx_tail, TX_RING_SIZE) < 1) 1076 1067 netif_stop_queue(dev); 1077 1068 1069 + unlock: 1078 1070 spin_unlock_irqrestore(&bp->lock, flags); 1079 1071 1080 1072 return NETDEV_TX_OK;
+7 -7
drivers/net/ethernet/freescale/fec_main.c
··· 528 528 /* Clear any outstanding interrupt. */ 529 529 writel(0xffc00000, fep->hwp + FEC_IEVENT); 530 530 531 - /* Setup multicast filter. */ 532 - set_multicast_list(ndev); 533 - #ifndef CONFIG_M5272 534 - writel(0, fep->hwp + FEC_HASH_TABLE_HIGH); 535 - writel(0, fep->hwp + FEC_HASH_TABLE_LOW); 536 - #endif 537 - 538 531 /* Set maximum receive buffer size. */ 539 532 writel(PKT_MAXBLR_SIZE, fep->hwp + FEC_R_BUFF_SIZE); 540 533 ··· 647 654 #endif /* !defined(CONFIG_M5272) */ 648 655 649 656 writel(rcntl, fep->hwp + FEC_R_CNTRL); 657 + 658 + /* Setup multicast filter. */ 659 + set_multicast_list(ndev); 660 + #ifndef CONFIG_M5272 661 + writel(0, fep->hwp + FEC_HASH_TABLE_HIGH); 662 + writel(0, fep->hwp + FEC_HASH_TABLE_LOW); 663 + #endif 650 664 651 665 if (id_entry->driver_data & FEC_QUIRK_ENET_MAC) { 652 666 /* enable ENET endian swap */
+16 -9
drivers/net/ethernet/ibm/ibmveth.c
··· 522 522 return rc; 523 523 } 524 524 525 + static u64 ibmveth_encode_mac_addr(u8 *mac) 526 + { 527 + int i; 528 + u64 encoded = 0; 529 + 530 + for (i = 0; i < ETH_ALEN; i++) 531 + encoded = (encoded << 8) | mac[i]; 532 + 533 + return encoded; 534 + } 535 + 525 536 static int ibmveth_open(struct net_device *netdev) 526 537 { 527 538 struct ibmveth_adapter *adapter = netdev_priv(netdev); 528 - u64 mac_address = 0; 539 + u64 mac_address; 529 540 int rxq_entries = 1; 530 541 unsigned long lpar_rc; 531 542 int rc; ··· 590 579 adapter->rx_queue.num_slots = rxq_entries; 591 580 adapter->rx_queue.toggle = 1; 592 581 593 - memcpy(&mac_address, netdev->dev_addr, netdev->addr_len); 594 - mac_address = mac_address >> 16; 582 + mac_address = ibmveth_encode_mac_addr(netdev->dev_addr); 595 583 596 584 rxq_desc.fields.flags_len = IBMVETH_BUF_VALID | 597 585 adapter->rx_queue.queue_len; ··· 1193 1183 /* add the addresses to the filter table */ 1194 1184 netdev_for_each_mc_addr(ha, netdev) { 1195 1185 /* add the multicast address to the filter table */ 1196 - unsigned long mcast_addr = 0; 1197 - memcpy(((char *)&mcast_addr)+2, ha->addr, ETH_ALEN); 1186 + u64 mcast_addr; 1187 + mcast_addr = ibmveth_encode_mac_addr(ha->addr); 1198 1188 lpar_rc = h_multicast_ctrl(adapter->vdev->unit_address, 1199 1189 IbmVethMcastAddFilter, 1200 1190 mcast_addr); ··· 1382 1372 1383 1373 netif_napi_add(netdev, &adapter->napi, ibmveth_poll, 16); 1384 1374 1385 - adapter->mac_addr = 0; 1386 - memcpy(&adapter->mac_addr, mac_addr_p, ETH_ALEN); 1387 - 1388 1375 netdev->irq = dev->irq; 1389 1376 netdev->netdev_ops = &ibmveth_netdev_ops; 1390 1377 netdev->ethtool_ops = &netdev_ethtool_ops; ··· 1390 1383 NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM; 1391 1384 netdev->features |= netdev->hw_features; 1392 1385 1393 - memcpy(netdev->dev_addr, &adapter->mac_addr, netdev->addr_len); 1386 + memcpy(netdev->dev_addr, mac_addr_p, ETH_ALEN); 1394 1387 1395 1388 for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) { 1396 1389 struct 
kobject *kobj = &adapter->rx_buff_pool[i].kobj;
-1
drivers/net/ethernet/ibm/ibmveth.h
··· 138 138 struct napi_struct napi; 139 139 struct net_device_stats stats; 140 140 unsigned int mcastFilterSize; 141 - unsigned long mac_addr; 142 141 void * buffer_list_addr; 143 142 void * filter_list_addr; 144 143 dma_addr_t buffer_list_dma;
+10
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 742 742 err = mlx4_en_uc_steer_add(priv, new_mac, 743 743 &qpn, 744 744 &entry->reg_id); 745 + if (err) 746 + return err; 747 + if (priv->tunnel_reg_id) { 748 + mlx4_flow_detach(priv->mdev->dev, priv->tunnel_reg_id); 749 + priv->tunnel_reg_id = 0; 750 + } 751 + err = mlx4_en_tunnel_steer_add(priv, new_mac, qpn, 752 + &priv->tunnel_reg_id); 745 753 return err; 746 754 } 747 755 } ··· 1788 1780 mc_list[5] = priv->port; 1789 1781 mlx4_multicast_detach(mdev->dev, &priv->rss_map.indir_qp, 1790 1782 mc_list, MLX4_PROT_ETH, mclist->reg_id); 1783 + if (mclist->tunnel_reg_id) 1784 + mlx4_flow_detach(mdev->dev, mclist->tunnel_reg_id); 1791 1785 } 1792 1786 mlx4_en_clear_list(dev); 1793 1787 list_for_each_entry_safe(mclist, tmp, &priv->curr_list, list) {
+6 -5
drivers/net/ethernet/mellanox/mlx4/fw.c
··· 129 129 [0] = "RSS support", 130 130 [1] = "RSS Toeplitz Hash Function support", 131 131 [2] = "RSS XOR Hash Function support", 132 - [3] = "Device manage flow steering support", 132 + [3] = "Device managed flow steering support", 133 133 [4] = "Automatic MAC reassignment support", 134 134 [5] = "Time stamping support", 135 135 [6] = "VST (control vlan insertion/stripping) support", 136 136 [7] = "FSM (MAC anti-spoofing) support", 137 137 [8] = "Dynamic QP updates support", 138 - [9] = "TCP/IP offloads/flow-steering for VXLAN support" 138 + [9] = "Device managed flow steering IPoIB support", 139 + [10] = "TCP/IP offloads/flow-steering for VXLAN support" 139 140 }; 140 141 int i; 141 142 ··· 860 859 MLX4_PUT(outbox->buf, field, QUERY_DEV_CAP_CQ_TS_SUPPORT_OFFSET); 861 860 862 861 /* For guests, disable vxlan tunneling */ 863 - MLX4_GET(field, outbox, QUERY_DEV_CAP_VXLAN); 862 + MLX4_GET(field, outbox->buf, QUERY_DEV_CAP_VXLAN); 864 863 field &= 0xf7; 865 864 MLX4_PUT(outbox->buf, field, QUERY_DEV_CAP_VXLAN); 866 865 ··· 870 869 MLX4_PUT(outbox->buf, field, QUERY_DEV_CAP_BF_OFFSET); 871 870 872 871 /* For guests, disable mw type 2 */ 873 - MLX4_GET(bmme_flags, outbox, QUERY_DEV_CAP_BMME_FLAGS_OFFSET); 872 + MLX4_GET(bmme_flags, outbox->buf, QUERY_DEV_CAP_BMME_FLAGS_OFFSET); 874 873 bmme_flags &= ~MLX4_BMME_FLAG_TYPE_2_WIN; 875 874 MLX4_PUT(outbox->buf, bmme_flags, QUERY_DEV_CAP_BMME_FLAGS_OFFSET); 876 875 ··· 884 883 } 885 884 886 885 /* turn off ipoib managed steering for guests */ 887 - MLX4_GET(field, outbox, QUERY_DEV_CAP_FLOW_STEERING_IPOIB_OFFSET); 886 + MLX4_GET(field, outbox->buf, QUERY_DEV_CAP_FLOW_STEERING_IPOIB_OFFSET); 888 887 field &= ~0x80; 889 888 MLX4_PUT(outbox->buf, field, QUERY_DEV_CAP_FLOW_STEERING_IPOIB_OFFSET); 890 889
+13 -1
drivers/net/ethernet/mellanox/mlx4/main.c
··· 149 149 struct pci_dev *pdev; 150 150 }; 151 151 152 + static atomic_t pf_loading = ATOMIC_INIT(0); 153 + 152 154 int mlx4_check_port_params(struct mlx4_dev *dev, 153 155 enum mlx4_port_type *port_type) 154 156 { ··· 750 748 has_eth_port = true; 751 749 } 752 750 753 - if (has_ib_port) 751 + if (has_ib_port || (dev->caps.flags & MLX4_DEV_CAP_FLAG_IBOE)) 754 752 request_module_nowait(IB_DRV_NAME); 755 753 if (has_eth_port) 756 754 request_module_nowait(EN_DRV_NAME); ··· 1407 1405 int ret_from_reset = 0; 1408 1406 u32 slave_read; 1409 1407 u32 cmd_channel_ver; 1408 + 1409 + if (atomic_read(&pf_loading)) { 1410 + mlx4_warn(dev, "PF is not ready. Deferring probe\n"); 1411 + return -EPROBE_DEFER; 1412 + } 1410 1413 1411 1414 mutex_lock(&priv->cmd.slave_cmd_mutex); 1412 1415 priv->cmd.max_cmds = 1; ··· 2318 2311 2319 2312 if (num_vfs) { 2320 2313 mlx4_warn(dev, "Enabling SR-IOV with %d VFs\n", num_vfs); 2314 + 2315 + atomic_inc(&pf_loading); 2321 2316 err = pci_enable_sriov(pdev, num_vfs); 2317 + atomic_dec(&pf_loading); 2318 + 2322 2319 if (err) { 2323 2320 mlx4_err(dev, "Failed to enable SR-IOV, continuing without SR-IOV (err = %d).\n", 2324 2321 err); ··· 2687 2676 .name = DRV_NAME, 2688 2677 .id_table = mlx4_pci_table, 2689 2678 .probe = mlx4_init_one, 2679 + .shutdown = mlx4_remove_one, 2690 2680 .remove = mlx4_remove_one, 2691 2681 .err_handler = &mlx4_err_handler, 2692 2682 };
+1 -1
drivers/net/ethernet/realtek/r8169.c
··· 209 209 [RTL_GIGA_MAC_VER_16] = 210 210 _R("RTL8101e", RTL_TD_0, NULL, JUMBO_1K, true), 211 211 [RTL_GIGA_MAC_VER_17] = 212 - _R("RTL8168b/8111b", RTL_TD_1, NULL, JUMBO_4K, false), 212 + _R("RTL8168b/8111b", RTL_TD_0, NULL, JUMBO_4K, false), 213 213 [RTL_GIGA_MAC_VER_18] = 214 214 _R("RTL8168cp/8111cp", RTL_TD_1, NULL, JUMBO_6K, false), 215 215 [RTL_GIGA_MAC_VER_19] =
+1 -1
drivers/net/ethernet/stmicro/stmmac/chain_mode.c
··· 151 151 sizeof(struct dma_desc))); 152 152 } 153 153 154 - const struct stmmac_chain_mode_ops chain_mode_ops = { 154 + const struct stmmac_mode_ops chain_mode_ops = { 155 155 .init = stmmac_init_dma_chain, 156 156 .is_jumbo_frm = stmmac_is_jumbo_frm, 157 157 .jumbo_frm = stmmac_jumbo_frm,
+6 -14
drivers/net/ethernet/stmicro/stmmac/common.h
··· 419 419 unsigned int data; /* MII Data */ 420 420 }; 421 421 422 - struct stmmac_ring_mode_ops { 423 - unsigned int (*is_jumbo_frm) (int len, int ehn_desc); 424 - unsigned int (*jumbo_frm) (void *priv, struct sk_buff *skb, int csum); 425 - void (*refill_desc3) (void *priv, struct dma_desc *p); 426 - void (*init_desc3) (struct dma_desc *p); 427 - void (*clean_desc3) (void *priv, struct dma_desc *p); 428 - int (*set_16kib_bfsize) (int mtu); 429 - }; 430 - 431 - struct stmmac_chain_mode_ops { 422 + struct stmmac_mode_ops { 432 423 void (*init) (void *des, dma_addr_t phy_addr, unsigned int size, 433 424 unsigned int extend_desc); 434 425 unsigned int (*is_jumbo_frm) (int len, int ehn_desc); 435 426 unsigned int (*jumbo_frm) (void *priv, struct sk_buff *skb, int csum); 427 + int (*set_16kib_bfsize)(int mtu); 428 + void (*init_desc3)(struct dma_desc *p); 436 429 void (*refill_desc3) (void *priv, struct dma_desc *p); 437 430 void (*clean_desc3) (void *priv, struct dma_desc *p); 438 431 }; ··· 434 441 const struct stmmac_ops *mac; 435 442 const struct stmmac_desc_ops *desc; 436 443 const struct stmmac_dma_ops *dma; 437 - const struct stmmac_ring_mode_ops *ring; 438 - const struct stmmac_chain_mode_ops *chain; 444 + const struct stmmac_mode_ops *mode; 439 445 const struct stmmac_hwtimestamp *ptp; 440 446 struct mii_regs mii; /* MII register Addresses */ 441 447 struct mac_link link; ··· 452 460 void stmmac_set_mac(void __iomem *ioaddr, bool enable); 453 461 454 462 void dwmac_dma_flush_tx_fifo(void __iomem *ioaddr); 455 - extern const struct stmmac_ring_mode_ops ring_mode_ops; 456 - extern const struct stmmac_chain_mode_ops chain_mode_ops; 463 + extern const struct stmmac_mode_ops ring_mode_ops; 464 + extern const struct stmmac_mode_ops chain_mode_ops; 457 465 458 466 #endif /* __COMMON_H__ */
+4 -5
drivers/net/ethernet/stmicro/stmmac/ring_mode.c
··· 100 100 { 101 101 struct stmmac_priv *priv = (struct stmmac_priv *)priv_ptr; 102 102 103 - if (unlikely(priv->plat->has_gmac)) 104 - /* Fill DES3 in case of RING mode */ 105 - if (priv->dma_buf_sz >= BUF_SIZE_8KiB) 106 - p->des3 = p->des2 + BUF_SIZE_8KiB; 103 + /* Fill DES3 in case of RING mode */ 104 + if (priv->dma_buf_sz >= BUF_SIZE_8KiB) 105 + p->des3 = p->des2 + BUF_SIZE_8KiB; 107 106 } 108 107 109 108 /* In ring mode we need to fill the desc3 because it is used as buffer */ ··· 125 126 return ret; 126 127 } 127 128 128 - const struct stmmac_ring_mode_ops ring_mode_ops = { 129 + const struct stmmac_mode_ops ring_mode_ops = { 129 130 .is_jumbo_frm = stmmac_is_jumbo_frm, 130 131 .jumbo_frm = stmmac_jumbo_frm, 131 132 .refill_desc3 = stmmac_refill_desc3,
+49 -44
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 92 92 module_param(tc, int, S_IRUGO | S_IWUSR); 93 93 MODULE_PARM_DESC(tc, "DMA threshold control value"); 94 94 95 - #define DMA_BUFFER_SIZE BUF_SIZE_4KiB 96 - static int buf_sz = DMA_BUFFER_SIZE; 95 + #define DEFAULT_BUFSIZE 1536 96 + static int buf_sz = DEFAULT_BUFSIZE; 97 97 module_param(buf_sz, int, S_IRUGO | S_IWUSR); 98 98 MODULE_PARM_DESC(buf_sz, "DMA buffer size"); 99 99 ··· 136 136 dma_rxsize = DMA_RX_SIZE; 137 137 if (unlikely(dma_txsize < 0)) 138 138 dma_txsize = DMA_TX_SIZE; 139 - if (unlikely((buf_sz < DMA_BUFFER_SIZE) || (buf_sz > BUF_SIZE_16KiB))) 140 - buf_sz = DMA_BUFFER_SIZE; 139 + if (unlikely((buf_sz < DEFAULT_BUFSIZE) || (buf_sz > BUF_SIZE_16KiB))) 140 + buf_sz = DEFAULT_BUFSIZE; 141 141 if (unlikely(flow_ctrl > 1)) 142 142 flow_ctrl = FLOW_AUTO; 143 143 else if (likely(flow_ctrl < 0)) ··· 286 286 287 287 /* MAC core supports the EEE feature. */ 288 288 if (priv->dma_cap.eee) { 289 - /* Check if the PHY supports EEE */ 290 - if (phy_init_eee(priv->phydev, 1)) 291 - goto out; 289 + int tx_lpi_timer = priv->tx_lpi_timer; 292 290 291 + /* Check if the PHY supports EEE */ 292 + if (phy_init_eee(priv->phydev, 1)) { 293 + /* To manage at run-time if the EEE cannot be supported 294 + * anymore (for example because the lp caps have been 295 + * changed). 296 + * In that case the driver disable own timers. 
297 + */ 298 + if (priv->eee_active) { 299 + pr_debug("stmmac: disable EEE\n"); 300 + del_timer_sync(&priv->eee_ctrl_timer); 301 + priv->hw->mac->set_eee_timer(priv->ioaddr, 0, 302 + tx_lpi_timer); 303 + } 304 + priv->eee_active = 0; 305 + goto out; 306 + } 307 + /* Activate the EEE and start timers */ 293 308 if (!priv->eee_active) { 294 309 priv->eee_active = 1; 295 310 init_timer(&priv->eee_ctrl_timer); ··· 315 300 316 301 priv->hw->mac->set_eee_timer(priv->ioaddr, 317 302 STMMAC_DEFAULT_LIT_LS, 318 - priv->tx_lpi_timer); 303 + tx_lpi_timer); 319 304 } else 320 305 /* Set HW EEE according to the speed */ 321 306 priv->hw->mac->set_eee_pls(priv->ioaddr, 322 307 priv->phydev->link); 323 308 324 - pr_info("stmmac: Energy-Efficient Ethernet initialized\n"); 309 + pr_debug("stmmac: Energy-Efficient Ethernet initialized\n"); 325 310 326 311 ret = true; 327 312 } ··· 901 886 ret = BUF_SIZE_8KiB; 902 887 else if (mtu >= BUF_SIZE_2KiB) 903 888 ret = BUF_SIZE_4KiB; 904 - else if (mtu >= DMA_BUFFER_SIZE) 889 + else if (mtu > DEFAULT_BUFSIZE) 905 890 ret = BUF_SIZE_2KiB; 906 891 else 907 - ret = DMA_BUFFER_SIZE; 892 + ret = DEFAULT_BUFSIZE; 908 893 909 894 return ret; 910 895 } ··· 966 951 967 952 p->des2 = priv->rx_skbuff_dma[i]; 968 953 969 - if ((priv->mode == STMMAC_RING_MODE) && 954 + if ((priv->hw->mode->init_desc3) && 970 955 (priv->dma_buf_sz == BUF_SIZE_16KiB)) 971 - priv->hw->ring->init_desc3(p); 956 + priv->hw->mode->init_desc3(p); 972 957 973 958 return 0; 974 959 } ··· 999 984 unsigned int bfsize = 0; 1000 985 int ret = -ENOMEM; 1001 986 1002 - /* Set the max buffer size according to the DESC mode 1003 - * and the MTU. Note that RING mode allows 16KiB bsize. 
1004 - */ 1005 - if (priv->mode == STMMAC_RING_MODE) 1006 - bfsize = priv->hw->ring->set_16kib_bfsize(dev->mtu); 987 + if (priv->hw->mode->set_16kib_bfsize) 988 + bfsize = priv->hw->mode->set_16kib_bfsize(dev->mtu); 1007 989 1008 990 if (bfsize < BUF_SIZE_16KiB) 1009 991 bfsize = stmmac_set_bfsize(dev->mtu, priv->dma_buf_sz); ··· 1041 1029 /* Setup the chained descriptor addresses */ 1042 1030 if (priv->mode == STMMAC_CHAIN_MODE) { 1043 1031 if (priv->extend_desc) { 1044 - priv->hw->chain->init(priv->dma_erx, priv->dma_rx_phy, 1045 - rxsize, 1); 1046 - priv->hw->chain->init(priv->dma_etx, priv->dma_tx_phy, 1047 - txsize, 1); 1032 + priv->hw->mode->init(priv->dma_erx, priv->dma_rx_phy, 1033 + rxsize, 1); 1034 + priv->hw->mode->init(priv->dma_etx, priv->dma_tx_phy, 1035 + txsize, 1); 1048 1036 } else { 1049 - priv->hw->chain->init(priv->dma_rx, priv->dma_rx_phy, 1050 - rxsize, 0); 1051 - priv->hw->chain->init(priv->dma_tx, priv->dma_tx_phy, 1052 - txsize, 0); 1037 + priv->hw->mode->init(priv->dma_rx, priv->dma_rx_phy, 1038 + rxsize, 0); 1039 + priv->hw->mode->init(priv->dma_tx, priv->dma_tx_phy, 1040 + txsize, 0); 1053 1041 } 1054 1042 } 1055 1043 ··· 1300 1288 DMA_TO_DEVICE); 1301 1289 priv->tx_skbuff_dma[entry] = 0; 1302 1290 } 1303 - priv->hw->ring->clean_desc3(priv, p); 1291 + priv->hw->mode->clean_desc3(priv, p); 1304 1292 1305 1293 if (likely(skb != NULL)) { 1306 1294 dev_kfree_skb(skb); ··· 1856 1844 int nfrags = skb_shinfo(skb)->nr_frags; 1857 1845 struct dma_desc *desc, *first; 1858 1846 unsigned int nopaged_len = skb_headlen(skb); 1847 + unsigned int enh_desc = priv->plat->enh_desc; 1859 1848 1860 1849 if (unlikely(stmmac_tx_avail(priv) < nfrags + 1)) { 1861 1850 if (!netif_queue_stopped(dev)) { ··· 1884 1871 first = desc; 1885 1872 1886 1873 /* To program the descriptors according to the size of the frame */ 1887 - if (priv->mode == STMMAC_RING_MODE) { 1888 - is_jumbo = priv->hw->ring->is_jumbo_frm(skb->len, 1889 - priv->plat->enh_desc); 1890 - if 
(unlikely(is_jumbo)) 1891 - entry = priv->hw->ring->jumbo_frm(priv, skb, 1892 - csum_insertion); 1893 - } else { 1894 - is_jumbo = priv->hw->chain->is_jumbo_frm(skb->len, 1895 - priv->plat->enh_desc); 1896 - if (unlikely(is_jumbo)) 1897 - entry = priv->hw->chain->jumbo_frm(priv, skb, 1898 - csum_insertion); 1899 - } 1874 + if (enh_desc) 1875 + is_jumbo = priv->hw->mode->is_jumbo_frm(skb->len, enh_desc); 1876 + 1900 1877 if (likely(!is_jumbo)) { 1901 1878 desc->des2 = dma_map_single(priv->device, skb->data, 1902 1879 nopaged_len, DMA_TO_DEVICE); 1903 1880 priv->tx_skbuff_dma[entry] = desc->des2; 1904 1881 priv->hw->desc->prepare_tx_desc(desc, 1, nopaged_len, 1905 1882 csum_insertion, priv->mode); 1906 - } else 1883 + } else { 1907 1884 desc = first; 1885 + entry = priv->hw->mode->jumbo_frm(priv, skb, csum_insertion); 1886 + } 1908 1887 1909 1888 for (i = 0; i < nfrags; i++) { 1910 1889 const skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; ··· 2034 2029 2035 2030 p->des2 = priv->rx_skbuff_dma[entry]; 2036 2031 2037 - priv->hw->ring->refill_desc3(priv, p); 2032 + priv->hw->mode->refill_desc3(priv, p); 2038 2033 2039 2034 if (netif_msg_rx_status(priv)) 2040 2035 pr_debug("\trefill entry #%d\n", entry); ··· 2638 2633 2639 2634 /* To use the chained or ring mode */ 2640 2635 if (chain_mode) { 2641 - priv->hw->chain = &chain_mode_ops; 2636 + priv->hw->mode = &chain_mode_ops; 2642 2637 pr_info(" Chain mode enabled\n"); 2643 2638 priv->mode = STMMAC_CHAIN_MODE; 2644 2639 } else { 2645 - priv->hw->ring = &ring_mode_ops; 2640 + priv->hw->mode = &ring_mode_ops; 2646 2641 pr_info(" Ring mode enabled\n"); 2647 2642 priv->mode = STMMAC_RING_MODE; 2648 2643 }
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 36 36 #ifdef CONFIG_DWMAC_STI 37 37 { .compatible = "st,stih415-dwmac", .data = &sti_gmac_data}, 38 38 { .compatible = "st,stih416-dwmac", .data = &sti_gmac_data}, 39 - { .compatible = "st,stih127-dwmac", .data = &sti_gmac_data}, 39 + { .compatible = "st,stid127-dwmac", .data = &sti_gmac_data}, 40 40 #endif 41 41 /* SoC specific glue layers should come before generic bindings */ 42 42 { .compatible = "st,spear600-gmac"},
+4
drivers/net/hyperv/netvsc_drv.c
··· 674 674 if (!net) 675 675 return -ENOMEM; 676 676 677 + netif_carrier_off(net); 678 + 677 679 net_device_ctx = netdev_priv(net); 678 680 net_device_ctx->device_ctx = dev; 679 681 hv_set_drvdata(dev, net); ··· 708 706 pr_err("Unable to register netdev.\n"); 709 707 rndis_filter_device_remove(dev); 710 708 free_netdev(net); 709 + } else { 710 + schedule_delayed_work(&net_device_ctx->dwork, 0); 711 711 } 712 712 713 713 return ret;
+20 -1
drivers/net/hyperv/rndis_filter.c
··· 240 240 return ret; 241 241 } 242 242 243 + static void rndis_set_link_state(struct rndis_device *rdev, 244 + struct rndis_request *request) 245 + { 246 + u32 link_status; 247 + struct rndis_query_complete *query_complete; 248 + 249 + query_complete = &request->response_msg.msg.query_complete; 250 + 251 + if (query_complete->status == RNDIS_STATUS_SUCCESS && 252 + query_complete->info_buflen == sizeof(u32)) { 253 + memcpy(&link_status, (void *)((unsigned long)query_complete + 254 + query_complete->info_buf_offset), sizeof(u32)); 255 + rdev->link_state = link_status != 0; 256 + } 257 + } 258 + 243 259 static void rndis_filter_receive_response(struct rndis_device *dev, 244 260 struct rndis_message *resp) 245 261 { ··· 285 269 sizeof(struct rndis_message) + RNDIS_EXT_LEN) { 286 270 memcpy(&request->response_msg, resp, 287 271 resp->msg_len); 272 + if (request->request_msg.ndis_msg_type == 273 + RNDIS_MSG_QUERY && request->request_msg.msg. 274 + query_req.oid == RNDIS_OID_GEN_MEDIA_CONNECT_STATUS) 275 + rndis_set_link_state(dev, request); 288 276 } else { 289 277 netdev_err(ndev, 290 278 "rndis response buffer overflow " ··· 694 674 ret = rndis_filter_query_device(dev, 695 675 RNDIS_OID_GEN_MEDIA_CONNECT_STATUS, 696 676 &link_status, &size); 697 - dev->link_state = (link_status != 0) ? true : false; 698 677 699 678 return ret; 700 679 }
+6 -5
drivers/net/ieee802154/at86rf230.c
··· 655 655 int rc; 656 656 unsigned long flags; 657 657 658 - spin_lock(&lp->lock); 658 + spin_lock_irqsave(&lp->lock, flags); 659 659 if (lp->irq_busy) { 660 - spin_unlock(&lp->lock); 660 + spin_unlock_irqrestore(&lp->lock, flags); 661 661 return -EBUSY; 662 662 } 663 - spin_unlock(&lp->lock); 663 + spin_unlock_irqrestore(&lp->lock, flags); 664 664 665 665 might_sleep(); 666 666 ··· 944 944 static irqreturn_t at86rf230_isr(int irq, void *data) 945 945 { 946 946 struct at86rf230_local *lp = data; 947 + unsigned long flags; 947 948 948 - spin_lock(&lp->lock); 949 + spin_lock_irqsave(&lp->lock, flags); 949 950 lp->irq_busy = 1; 950 - spin_unlock(&lp->lock); 951 + spin_unlock_irqrestore(&lp->lock, flags); 951 952 952 953 schedule_work(&lp->irqwork); 953 954
+6 -5
drivers/net/phy/phy.c
··· 186 186 * of that setting. Returns the index of the last setting if 187 187 * none of the others match. 188 188 */ 189 - static inline int phy_find_setting(int speed, int duplex) 189 + static inline unsigned int phy_find_setting(int speed, int duplex) 190 190 { 191 - int idx = 0; 191 + unsigned int idx = 0; 192 192 193 193 while (idx < ARRAY_SIZE(settings) && 194 194 (settings[idx].speed != speed || settings[idx].duplex != duplex)) ··· 207 207 * the mask in features. Returns the index of the last setting 208 208 * if nothing else matches. 209 209 */ 210 - static inline int phy_find_valid(int idx, u32 features) 210 + static inline unsigned int phy_find_valid(unsigned int idx, u32 features) 211 211 { 212 212 while (idx < MAX_NUM_SETTINGS && !(settings[idx].setting & features)) 213 213 idx++; ··· 226 226 static void phy_sanitize_settings(struct phy_device *phydev) 227 227 { 228 228 u32 features = phydev->supported; 229 - int idx; 229 + unsigned int idx; 230 230 231 231 /* Sanitize settings based on PHY capabilities */ 232 232 if ((features & SUPPORTED_Autoneg) == 0) ··· 979 979 (phydev->interface == PHY_INTERFACE_MODE_RGMII))) { 980 980 int eee_lp, eee_cap, eee_adv; 981 981 u32 lp, cap, adv; 982 - int idx, status; 982 + int status; 983 + unsigned int idx; 983 984 984 985 /* Read phy status to properly get the right settings */ 985 986 status = phy_read_status(phydev);
+1 -1
drivers/net/usb/Makefile
··· 11 11 obj-$(CONFIG_USB_NET_AX8817X) += asix.o 12 12 asix-y := asix_devices.o asix_common.o ax88172a.o 13 13 obj-$(CONFIG_USB_NET_AX88179_178A) += ax88179_178a.o 14 - obj-$(CONFIG_USB_NET_CDCETHER) += cdc_ether.o r815x.o 14 + obj-$(CONFIG_USB_NET_CDCETHER) += cdc_ether.o 15 15 obj-$(CONFIG_USB_NET_CDC_EEM) += cdc_eem.o 16 16 obj-$(CONFIG_USB_NET_DM9601) += dm9601.o 17 17 obj-$(CONFIG_USB_NET_SR9700) += sr9700.o
-8
drivers/net/usb/ax88179_178a.c
··· 1029 1029 dev->mii.phy_id = 0x03; 1030 1030 dev->mii.supports_gmii = 1; 1031 1031 1032 - if (usb_device_no_sg_constraint(dev->udev)) 1033 - dev->can_dma_sg = 1; 1034 - 1035 1032 dev->net->features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | 1036 1033 NETIF_F_RXCSUM; 1037 1034 1038 1035 dev->net->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | 1039 1036 NETIF_F_RXCSUM; 1040 - 1041 - if (dev->can_dma_sg) { 1042 - dev->net->features |= NETIF_F_SG | NETIF_F_TSO; 1043 - dev->net->hw_features |= NETIF_F_SG | NETIF_F_TSO; 1044 - } 1045 1037 1046 1038 /* Enable checksum offload */ 1047 1039 *tmp = AX_RXCOE_IP | AX_RXCOE_TCP | AX_RXCOE_UDP |
+7
drivers/net/usb/cdc_ether.c
··· 652 652 .driver_info = 0, 653 653 }, 654 654 655 + /* Samsung USB Ethernet Adapters */ 656 + { 657 + USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, 0xa101, USB_CLASS_COMM, 658 + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 659 + .driver_info = 0, 660 + }, 661 + 655 662 /* WHITELIST!!! 656 663 * 657 664 * CDC Ether uses two interfaces, not necessarily consecutive.
+9 -3
drivers/net/usb/r8152.c
··· 3367 3367 struct net_device *netdev; 3368 3368 int ret; 3369 3369 3370 + if (udev->actconfig->desc.bConfigurationValue != 1) { 3371 + usb_driver_set_configuration(udev, 1); 3372 + return -ENODEV; 3373 + } 3374 + 3375 + usb_reset_device(udev); 3370 3376 netdev = alloc_etherdev(sizeof(struct r8152)); 3371 3377 if (!netdev) { 3372 3378 dev_err(&intf->dev, "Out of memory\n"); ··· 3462 3456 3463 3457 /* table of devices that work with this driver */ 3464 3458 static struct usb_device_id rtl8152_table[] = { 3465 - {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, PRODUCT_ID_RTL8152)}, 3466 - {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, PRODUCT_ID_RTL8153)}, 3467 - {REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, PRODUCT_ID_SAMSUNG)}, 3459 + {USB_DEVICE(VENDOR_ID_REALTEK, PRODUCT_ID_RTL8152)}, 3460 + {USB_DEVICE(VENDOR_ID_REALTEK, PRODUCT_ID_RTL8153)}, 3461 + {USB_DEVICE(VENDOR_ID_SAMSUNG, PRODUCT_ID_SAMSUNG)}, 3468 3462 {} 3469 3463 }; 3470 3464
-248
drivers/net/usb/r815x.c
··· 1 - #include <linux/module.h>
2 - #include <linux/netdevice.h>
3 - #include <linux/mii.h>
4 - #include <linux/usb.h>
5 - #include <linux/usb/cdc.h>
6 - #include <linux/usb/usbnet.h>
7 -
8 - #define RTL815x_REQT_READ 0xc0
9 - #define RTL815x_REQT_WRITE 0x40
10 - #define RTL815x_REQ_GET_REGS 0x05
11 - #define RTL815x_REQ_SET_REGS 0x05
12 -
13 - #define MCU_TYPE_PLA 0x0100
14 - #define OCP_BASE 0xe86c
15 - #define BASE_MII 0xa400
16 -
17 - #define BYTE_EN_DWORD 0xff
18 - #define BYTE_EN_WORD 0x33
19 - #define BYTE_EN_BYTE 0x11
20 -
21 - #define R815x_PHY_ID 32
22 - #define REALTEK_VENDOR_ID 0x0bda
23 -
24 -
25 - static int pla_read_word(struct usb_device *udev, u16 index)
26 - {
27 - int ret;
28 - u8 shift = index & 2;
29 - __le32 *tmp;
30 -
31 - tmp = kmalloc(sizeof(*tmp), GFP_KERNEL);
32 - if (!tmp)
33 - return -ENOMEM;
34 -
35 - index &= ~3;
36 -
37 - ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
38 - RTL815x_REQ_GET_REGS, RTL815x_REQT_READ,
39 - index, MCU_TYPE_PLA, tmp, sizeof(*tmp), 500);
40 - if (ret < 0)
41 - goto out2;
42 -
43 - ret = __le32_to_cpu(*tmp);
44 - ret >>= (shift * 8);
45 - ret &= 0xffff;
46 -
47 - out2:
48 - kfree(tmp);
49 - return ret;
50 - }
51 -
52 - static int pla_write_word(struct usb_device *udev, u16 index, u32 data)
53 - {
54 - __le32 *tmp;
55 - u32 mask = 0xffff;
56 - u16 byen = BYTE_EN_WORD;
57 - u8 shift = index & 2;
58 - int ret;
59 -
60 - tmp = kmalloc(sizeof(*tmp), GFP_KERNEL);
61 - if (!tmp)
62 - return -ENOMEM;
63 -
64 - data &= mask;
65 -
66 - if (shift) {
67 - byen <<= shift;
68 - mask <<= (shift * 8);
69 - data <<= (shift * 8);
70 - index &= ~3;
71 - }
72 -
73 - ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
74 - RTL815x_REQ_GET_REGS, RTL815x_REQT_READ,
75 - index, MCU_TYPE_PLA, tmp, sizeof(*tmp), 500);
76 - if (ret < 0)
77 - goto out3;
78 -
79 - data |= __le32_to_cpu(*tmp) & ~mask;
80 - *tmp = __cpu_to_le32(data);
81 -
82 - ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
83 - RTL815x_REQ_SET_REGS, RTL815x_REQT_WRITE,
84 - index, MCU_TYPE_PLA | byen, tmp, sizeof(*tmp),
85 - 500);
86 -
87 - out3:
88 - kfree(tmp);
89 - return ret;
90 - }
91 -
92 - static int ocp_reg_read(struct usbnet *dev, u16 addr)
93 - {
94 - u16 ocp_base, ocp_index;
95 - int ret;
96 -
97 - ocp_base = addr & 0xf000;
98 - ret = pla_write_word(dev->udev, OCP_BASE, ocp_base);
99 - if (ret < 0)
100 - goto out;
101 -
102 - ocp_index = (addr & 0x0fff) | 0xb000;
103 - ret = pla_read_word(dev->udev, ocp_index);
104 -
105 - out:
106 - return ret;
107 - }
108 -
109 - static int ocp_reg_write(struct usbnet *dev, u16 addr, u16 data)
110 - {
111 - u16 ocp_base, ocp_index;
112 - int ret;
113 -
114 - ocp_base = addr & 0xf000;
115 - ret = pla_write_word(dev->udev, OCP_BASE, ocp_base);
116 - if (ret < 0)
117 - goto out1;
118 -
119 - ocp_index = (addr & 0x0fff) | 0xb000;
120 - ret = pla_write_word(dev->udev, ocp_index, data);
121 -
122 - out1:
123 - return ret;
124 - }
125 -
126 - static int r815x_mdio_read(struct net_device *netdev, int phy_id, int reg)
127 - {
128 - struct usbnet *dev = netdev_priv(netdev);
129 - int ret;
130 -
131 - if (phy_id != R815x_PHY_ID)
132 - return -EINVAL;
133 -
134 - if (usb_autopm_get_interface(dev->intf) < 0)
135 - return -ENODEV;
136 -
137 - ret = ocp_reg_read(dev, BASE_MII + reg * 2);
138 -
139 - usb_autopm_put_interface(dev->intf);
140 - return ret;
141 - }
142 -
143 - static
144 - void r815x_mdio_write(struct net_device *netdev, int phy_id, int reg, int val)
145 - {
146 - struct usbnet *dev = netdev_priv(netdev);
147 -
148 - if (phy_id != R815x_PHY_ID)
149 - return;
150 -
151 - if (usb_autopm_get_interface(dev->intf) < 0)
152 - return;
153 -
154 - ocp_reg_write(dev, BASE_MII + reg * 2, val);
155 -
156 - usb_autopm_put_interface(dev->intf);
157 - }
158 -
159 - static int r8153_bind(struct usbnet *dev, struct usb_interface *intf)
160 - {
161 - int status;
162 -
163 - status = usbnet_cdc_bind(dev, intf);
164 - if (status < 0)
165 - return status;
166 -
167 - dev->mii.dev = dev->net;
168 - dev->mii.mdio_read = r815x_mdio_read;
169 - dev->mii.mdio_write = r815x_mdio_write;
170 - dev->mii.phy_id_mask = 0x3f;
171 - dev->mii.reg_num_mask = 0x1f;
172 - dev->mii.phy_id = R815x_PHY_ID;
173 - dev->mii.supports_gmii = 1;
174 -
175 - return status;
176 - }
177 -
178 - static int r8152_bind(struct usbnet *dev, struct usb_interface *intf)
179 - {
180 - int status;
181 -
182 - status = usbnet_cdc_bind(dev, intf);
183 - if (status < 0)
184 - return status;
185 -
186 - dev->mii.dev = dev->net;
187 - dev->mii.mdio_read = r815x_mdio_read;
188 - dev->mii.mdio_write = r815x_mdio_write;
189 - dev->mii.phy_id_mask = 0x3f;
190 - dev->mii.reg_num_mask = 0x1f;
191 - dev->mii.phy_id = R815x_PHY_ID;
192 - dev->mii.supports_gmii = 0;
193 -
194 - return status;
195 - }
196 -
197 - static const struct driver_info r8152_info = {
198 - .description = "RTL8152 ECM Device",
199 - .flags = FLAG_ETHER | FLAG_POINTTOPOINT,
200 - .bind = r8152_bind,
201 - .unbind = usbnet_cdc_unbind,
202 - .status = usbnet_cdc_status,
203 - .manage_power = usbnet_manage_power,
204 - };
205 -
206 - static const struct driver_info r8153_info = {
207 - .description = "RTL8153 ECM Device",
208 - .flags = FLAG_ETHER | FLAG_POINTTOPOINT,
209 - .bind = r8153_bind,
210 - .unbind = usbnet_cdc_unbind,
211 - .status = usbnet_cdc_status,
212 - .manage_power = usbnet_manage_power,
213 - };
214 -
215 - static const struct usb_device_id products[] = {
216 - {
217 - USB_DEVICE_AND_INTERFACE_INFO(REALTEK_VENDOR_ID, 0x8152, USB_CLASS_COMM,
218 - USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
219 - .driver_info = (unsigned long) &r8152_info,
220 - },
221 -
222 - {
223 - USB_DEVICE_AND_INTERFACE_INFO(REALTEK_VENDOR_ID, 0x8153, USB_CLASS_COMM,
224 - USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
225 - .driver_info = (unsigned long) &r8153_info,
226 - },
227 -
228 - { }, /* END */
229 - };
230 - MODULE_DEVICE_TABLE(usb, products);
231 -
232 - static struct usb_driver r815x_driver = {
233 - .name = "r815x",
234 - .id_table = products,
235 - .probe = usbnet_probe,
236 - .disconnect = usbnet_disconnect,
237 - .suspend = usbnet_suspend,
238 - .resume = usbnet_resume,
239 - .reset_resume = usbnet_resume,
240 - .supports_autosuspend = 1,
241 - .disable_hub_initiated_lpm = 1,
242 - };
243 -
244 - module_usb_driver(r815x_driver);
245 -
246 - MODULE_AUTHOR("Hayes Wang");
247 - MODULE_DESCRIPTION("Realtek USB ECM device");
248 - MODULE_LICENSE("GPL");
+14 -5
drivers/net/vmxnet3/vmxnet3_drv.c
··· 1762 1762 {
1763 1763 struct vmxnet3_adapter *adapter = netdev_priv(netdev);
1764 1764
1765 - if (adapter->intr.mask_mode == VMXNET3_IMM_ACTIVE)
1766 - vmxnet3_disable_all_intrs(adapter);
1767 -
1768 - vmxnet3_do_poll(adapter, adapter->rx_queue[0].rx_ring[0].size);
1769 - vmxnet3_enable_all_intrs(adapter);
1765 + switch (adapter->intr.type) {
1766 + #ifdef CONFIG_PCI_MSI
1767 + case VMXNET3_IT_MSIX: {
1768 + int i;
1769 + for (i = 0; i < adapter->num_rx_queues; i++)
1770 + vmxnet3_msix_rx(0, &adapter->rx_queue[i]);
1771 + break;
1772 + }
1773 + #endif
1774 + case VMXNET3_IT_MSI:
1775 + default:
1776 + vmxnet3_intr(0, adapter->netdev);
1777 + break;
1778 + }
1770 1779
1771 1780 }
1772 1781 #endif /* CONFIG_NET_POLL_CONTROLLER */
+5 -2
drivers/net/wireless/iwlwifi/mvm/bt-coex.c
··· 906 906
907 907 lockdep_assert_held(&mvm->mutex);
908 908
909 - /* Rssi update while not associated ?! */
910 - if (WARN_ON_ONCE(mvmvif->ap_sta_id == IWL_MVM_STATION_COUNT))
909 + /*
910 + * Rssi update while not associated - can happen since the statistics
911 + * are handled asynchronously
912 + */
913 + if (mvmvif->ap_sta_id == IWL_MVM_STATION_COUNT)
911 914 return;
912 915
913 916 /* No BT - reports should be disabled */
+2 -3
drivers/net/wireless/iwlwifi/pcie/drv.c
··· 360 360 /* 7265 Series */
361 361 {IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)},
362 362 {IWL_PCI_DEVICE(0x095A, 0x5110, iwl7265_2ac_cfg)},
363 - {IWL_PCI_DEVICE(0x095A, 0x5112, iwl7265_2ac_cfg)},
364 363 {IWL_PCI_DEVICE(0x095A, 0x5100, iwl7265_2ac_cfg)},
365 - {IWL_PCI_DEVICE(0x095A, 0x510A, iwl7265_2ac_cfg)},
366 364 {IWL_PCI_DEVICE(0x095B, 0x5310, iwl7265_2ac_cfg)},
367 - {IWL_PCI_DEVICE(0x095B, 0x5302, iwl7265_2ac_cfg)},
365 + {IWL_PCI_DEVICE(0x095B, 0x5302, iwl7265_n_cfg)},
368 366 {IWL_PCI_DEVICE(0x095B, 0x5210, iwl7265_2ac_cfg)},
369 367 {IWL_PCI_DEVICE(0x095A, 0x5012, iwl7265_2ac_cfg)},
368 + {IWL_PCI_DEVICE(0x095A, 0x5412, iwl7265_2ac_cfg)},
370 369 {IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)},
371 370 {IWL_PCI_DEVICE(0x095A, 0x5400, iwl7265_2ac_cfg)},
372 371 {IWL_PCI_DEVICE(0x095A, 0x1010, iwl7265_2ac_cfg)},
+1 -2
drivers/net/wireless/mwifiex/11ac.c
··· 192 192 vht_cap->header.len =
193 193 cpu_to_le16(sizeof(struct ieee80211_vht_cap));
194 194 memcpy((u8 *)vht_cap + sizeof(struct mwifiex_ie_types_header),
195 - (u8 *)bss_desc->bcn_vht_cap +
196 - sizeof(struct ieee_types_header),
195 + (u8 *)bss_desc->bcn_vht_cap,
197 196 le16_to_cpu(vht_cap->header.len));
198 197
199 198 mwifiex_fill_vht_cap_tlv(priv, &vht_cap->vht_cap,
+1 -2
drivers/net/wireless/mwifiex/11n.c
··· 317 317 ht_cap->header.len =
318 318 cpu_to_le16(sizeof(struct ieee80211_ht_cap));
319 319 memcpy((u8 *) ht_cap + sizeof(struct mwifiex_ie_types_header),
320 - (u8 *) bss_desc->bcn_ht_cap +
321 - sizeof(struct ieee_types_header),
320 + (u8 *)bss_desc->bcn_ht_cap,
322 321 le16_to_cpu(ht_cap->header.len));
323 322
324 323 mwifiex_fill_cap_info(priv, radio_type, &ht_cap->ht_cap);
+4 -4
drivers/net/wireless/mwifiex/scan.c
··· 2311 2311 curr_bss->ht_info_offset);
2312 2312
2313 2313 if (curr_bss->bcn_vht_cap)
2314 - curr_bss->bcn_ht_cap = (void *)(curr_bss->beacon_buf +
2315 - curr_bss->vht_cap_offset);
2314 + curr_bss->bcn_vht_cap = (void *)(curr_bss->beacon_buf +
2315 + curr_bss->vht_cap_offset);
2316 2316
2317 2317 if (curr_bss->bcn_vht_oper)
2318 - curr_bss->bcn_ht_oper = (void *)(curr_bss->beacon_buf +
2319 - curr_bss->vht_info_offset);
2318 + curr_bss->bcn_vht_oper = (void *)(curr_bss->beacon_buf +
2319 + curr_bss->vht_info_offset);
2320 2320
2321 2321 if (curr_bss->bcn_bss_co_2040)
2322 2322 curr_bss->bcn_bss_co_2040 =
+1 -2
drivers/net/xen-netback/interface.c
··· 148 148 /* If the skb is GSO then we'll also need an extra slot for the
149 149 * metadata.
150 150 */
151 - if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 ||
152 - skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
151 + if (skb_is_gso(skb))
153 152 min_slots_needed++;
154 153
155 154 /* If the skb can't possibly fit in the remaining slots
+18 -21
drivers/net/xen-netback/netback.c
··· 243 243 struct gnttab_copy *copy_gop;
244 244 struct xenvif_rx_meta *meta;
245 245 unsigned long bytes;
246 - int gso_type;
246 + int gso_type = XEN_NETIF_GSO_TYPE_NONE;
247 247
248 248 /* Data must not cross a page boundary. */
249 249 BUG_ON(size + offset > PAGE_SIZE<<compound_order(page));
··· 309 309 }
310 310
311 311 /* Leave a gap for the GSO descriptor. */
312 - if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
313 - gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
314 - else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
315 - gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
316 - else
317 - gso_type = XEN_NETIF_GSO_TYPE_NONE;
312 + if (skb_is_gso(skb)) {
313 + if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
314 + gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
315 + else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
316 + gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
317 + }
318 318
319 319 if (*head && ((1 << gso_type) & vif->gso_mask))
320 320 vif->rx.req_cons++;
··· 348 348 int head = 1;
349 349 int old_meta_prod;
350 350 int gso_type;
351 - int gso_size;
352 351 struct ubuf_info *ubuf = skb_shinfo(skb)->destructor_arg;
353 352 grant_ref_t foreign_grefs[MAX_SKB_FRAGS];
354 353 struct xenvif *foreign_vif = NULL;
355 354
356 355 old_meta_prod = npo->meta_prod;
357 356
358 - if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4) {
359 - gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
360 - gso_size = skb_shinfo(skb)->gso_size;
361 - } else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) {
362 - gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
363 - gso_size = skb_shinfo(skb)->gso_size;
364 - } else {
365 - gso_type = XEN_NETIF_GSO_TYPE_NONE;
366 - gso_size = 0;
357 + gso_type = XEN_NETIF_GSO_TYPE_NONE;
358 + if (skb_is_gso(skb)) {
359 + if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
360 + gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
361 + else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
362 + gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
367 363 }
368 364
369 365 /* Set up a GSO prefix descriptor, if necessary */
··· 367 371 req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
368 372 meta = npo->meta + npo->meta_prod++;
369 373 meta->gso_type = gso_type;
370 - meta->gso_size = gso_size;
374 + meta->gso_size = skb_shinfo(skb)->gso_size;
371 375 meta->size = 0;
372 376 meta->id = req->id;
373 377 }
··· 377 381
378 382 if ((1 << gso_type) & vif->gso_mask) {
379 383 meta->gso_type = gso_type;
380 - meta->gso_size = gso_size;
384 + meta->gso_size = skb_shinfo(skb)->gso_size;
381 385 } else {
382 386 meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
383 387 meta->gso_size = 0;
··· 527 531 size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
528 532 max_slots_needed += DIV_ROUND_UP(size, PAGE_SIZE);
529 533 }
530 - if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 ||
531 - skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
534 + if (skb_is_gso(skb) &&
535 + (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 ||
536 + skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6))
532 537 max_slots_needed++;
533 538
534 539 /* If the skb may not fit then bail out now */
-2
drivers/pci/bus.c
··· 162 162
163 163 avail = *r;
164 164 pci_clip_resource_to_region(bus, &avail, region);
165 - if (!resource_size(&avail))
166 - continue;
167 165
168 166 /*
169 167 * "min" is typically PCIBIOS_MIN_IO or PCIBIOS_MIN_MEM to
+3
drivers/pci/pci.c
··· 1192 1192 return err;
1193 1193 pci_fixup_device(pci_fixup_enable, dev);
1194 1194
1195 + if (dev->msi_enabled || dev->msix_enabled)
1196 + return 0;
1197 +
1195 1198 pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin);
1196 1199 if (pin) {
1197 1200 pci_read_config_word(dev, PCI_COMMAND, &cmd);
+1 -1
drivers/pinctrl/Kconfig
··· 217 217 select PINCTRL_MXS
218 218
219 219 config PINCTRL_MSM
220 - tristate
220 + bool
221 221 select PINMUX
222 222 select PINCONF
223 223 select GENERIC_PINCONF
+1 -1
drivers/pinctrl/pinctrl-capri.c
··· 1435 1435 }
1436 1436
1437 1437 static struct of_device_id capri_pinctrl_of_match[] = {
1438 - { .compatible = "brcm,capri-pinctrl", },
1438 + { .compatible = "brcm,bcm11351-pinctrl", },
1439 1439 { },
1440 1440 };
1441 1441
+5 -1
drivers/pinctrl/pinctrl-sunxi.c
··· 14 14 #include <linux/clk.h>
15 15 #include <linux/gpio.h>
16 16 #include <linux/irqdomain.h>
17 + #include <linux/irqchip/chained_irq.h>
17 18 #include <linux/module.h>
18 19 #include <linux/of.h>
19 20 #include <linux/of_address.h>
··· 585 584 spin_lock_irqsave(&pctl->lock, flags);
586 585
587 586 regval = readl(pctl->membase + reg);
588 - regval &= ~IRQ_CFG_IRQ_MASK;
587 + regval &= ~(IRQ_CFG_IRQ_MASK << index);
589 588 writel(regval | (mode << index), pctl->membase + reg);
590 589
591 590 spin_unlock_irqrestore(&pctl->lock, flags);
··· 666 665
667 666 static void sunxi_pinctrl_irq_handler(unsigned irq, struct irq_desc *desc)
668 667 {
668 + struct irq_chip *chip = irq_get_chip(irq);
669 669 struct sunxi_pinctrl *pctl = irq_get_handler_data(irq);
670 670 const unsigned long reg = readl(pctl->membase + IRQ_STATUS_REG);
671 671
··· 676 674 if (reg) {
677 675 int irqoffset;
678 676
677 + chained_irq_enter(chip, desc);
679 678 for_each_set_bit(irqoffset, &reg, SUNXI_IRQ_NUMBER) {
680 679 int pin_irq = irq_find_mapping(pctl->domain, irqoffset);
681 680 generic_handle_irq(pin_irq);
682 681 }
682 + chained_irq_exit(chip, desc);
683 683 }
684 684 }
685 685
+3 -3
drivers/pinctrl/pinctrl-sunxi.h
··· 511 511
512 512 static inline u32 sunxi_irq_cfg_reg(u16 irq)
513 513 {
514 - u8 reg = irq / IRQ_CFG_IRQ_PER_REG;
514 + u8 reg = irq / IRQ_CFG_IRQ_PER_REG * 0x04;
515 515 return reg + IRQ_CFG_REG;
516 516 }
517 517
··· 523 523
524 524 static inline u32 sunxi_irq_ctrl_reg(u16 irq)
525 525 {
526 - u8 reg = irq / IRQ_CTRL_IRQ_PER_REG;
526 + u8 reg = irq / IRQ_CTRL_IRQ_PER_REG * 0x04;
527 527 return reg + IRQ_CTRL_REG;
528 528 }
529 529
··· 535 535
536 536 static inline u32 sunxi_irq_status_reg(u16 irq)
537 537 {
538 - u8 reg = irq / IRQ_STATUS_IRQ_PER_REG;
538 + u8 reg = irq / IRQ_STATUS_IRQ_PER_REG * 0x04;
539 539 return reg + IRQ_STATUS_REG;
540 540 }
541 541
+4 -2
drivers/pinctrl/sh-pfc/pfc-r8a7791.c
··· 89 89
90 90 /* GPSR6 */
91 91 FN_IP13_10, FN_IP13_11, FN_IP13_12, FN_IP13_13, FN_IP13_14,
92 - FN_IP13_15, FN_IP13_18_16, FN_IP13_21_19, FN_IP13_22, FN_IP13_24_23,
92 + FN_IP13_15, FN_IP13_18_16, FN_IP13_21_19,
93 + FN_IP13_22, FN_IP13_24_23, FN_SD1_CLK,
93 94 FN_IP13_25, FN_IP13_26, FN_IP13_27, FN_IP13_30_28, FN_IP14_1_0,
94 95 FN_IP14_2, FN_IP14_3, FN_IP14_4, FN_IP14_5, FN_IP14_6, FN_IP14_7,
95 96 FN_IP14_10_8, FN_IP14_13_11, FN_IP14_16_14, FN_IP14_19_17,
··· 789 788 PINMUX_DATA(USB1_PWEN_MARK, FN_USB1_PWEN),
790 789 PINMUX_DATA(USB1_OVC_MARK, FN_USB1_OVC),
791 790 PINMUX_DATA(DU0_DOTCLKIN_MARK, FN_DU0_DOTCLKIN),
791 + PINMUX_DATA(SD1_CLK_MARK, FN_SD1_CLK),
792 792
793 793 /* IPSR0 */
794 794 PINMUX_IPSR_DATA(IP0_0, D0),
··· 3827 3825 GP_6_11_FN, FN_IP13_25,
3828 3826 GP_6_10_FN, FN_IP13_24_23,
3829 3827 GP_6_9_FN, FN_IP13_22,
3830 - 0, 0,
3828 + GP_6_8_FN, FN_SD1_CLK,
3831 3829 GP_6_7_FN, FN_IP13_21_19,
3832 3830 GP_6_6_FN, FN_IP13_18_16,
3833 3831 GP_6_5_FN, FN_IP13_15,
+2 -2
drivers/pinctrl/sirf/pinctrl-sirf.c
··· 598 598 {
599 599 struct sirfsoc_gpio_bank *bank = irq_data_get_irq_chip_data(d);
600 600
601 - if (gpio_lock_as_irq(&bank->chip.gc, d->hwirq))
601 + if (gpio_lock_as_irq(&bank->chip.gc, d->hwirq % SIRFSOC_GPIO_BANK_SIZE))
602 602 dev_err(bank->chip.gc.dev,
603 603 "unable to lock HW IRQ %lu for IRQ\n",
604 604 d->hwirq);
··· 611 611 struct sirfsoc_gpio_bank *bank = irq_data_get_irq_chip_data(d);
612 612
613 613 sirfsoc_gpio_irq_mask(d);
614 - gpio_unlock_as_irq(&bank->chip.gc, d->hwirq);
614 + gpio_unlock_as_irq(&bank->chip.gc, d->hwirq % SIRFSOC_GPIO_BANK_SIZE);
615 615
616 616
617 617 static struct irq_chip sirfsoc_irq_chip = {
+12 -3
drivers/pnp/pnpacpi/rsparser.c
··· 183 183 struct resource r = {0};
184 184 int i, flags;
185 185
186 - if (acpi_dev_resource_memory(res, &r)
187 - || acpi_dev_resource_io(res, &r)
188 - || acpi_dev_resource_address_space(res, &r)
186 + if (acpi_dev_resource_address_space(res, &r)
189 187 || acpi_dev_resource_ext_address_space(res, &r)) {
190 188 pnp_add_resource(dev, &r);
191 189 return AE_OK;
··· 215 217 }
216 218
217 219 switch (res->type) {
220 + case ACPI_RESOURCE_TYPE_MEMORY24:
221 + case ACPI_RESOURCE_TYPE_MEMORY32:
222 + case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:
223 + if (acpi_dev_resource_memory(res, &r))
224 + pnp_add_resource(dev, &r);
225 + break;
226 + case ACPI_RESOURCE_TYPE_IO:
227 + case ACPI_RESOURCE_TYPE_FIXED_IO:
228 + if (acpi_dev_resource_io(res, &r))
229 + pnp_add_resource(dev, &r);
230 + break;
218 231 case ACPI_RESOURCE_TYPE_DMA:
219 232 dma = &res->data.dma;
220 233 if (dma->channel_count > 0 && dma->channels[0] != (u8) -1)
+2 -2
drivers/spi/spi-ath79.c
··· 132 132
133 133 flags = GPIOF_DIR_OUT;
134 134 if (spi->mode & SPI_CS_HIGH)
135 - flags |= GPIOF_INIT_HIGH;
136 - else
137 135 flags |= GPIOF_INIT_LOW;
136 + else
137 + flags |= GPIOF_INIT_HIGH;
138 138
139 139 status = gpio_request_one(cdata->gpio, flags,
140 140 dev_name(&spi->dev));
+16 -1
drivers/spi/spi-atmel.c
··· 1455 1455 {
1456 1456 struct spi_master *master = dev_get_drvdata(dev);
1457 1457 struct atmel_spi *as = spi_master_get_devdata(master);
1458 + int ret;
1459 +
1460 + /* Stop the queue running */
1461 + ret = spi_master_suspend(master);
1462 + if (ret) {
1463 + dev_warn(dev, "cannot suspend master\n");
1464 + return ret;
1465 + }
1458 1466
1459 1467 clk_disable_unprepare(as->clk);
1460 1468 return 0;
··· 1472 1464 {
1473 1465 struct spi_master *master = dev_get_drvdata(dev);
1474 1466 struct atmel_spi *as = spi_master_get_devdata(master);
1467 + int ret;
1475 1468
1476 1469 clk_prepare_enable(as->clk);
1477 - return 0;
1470 +
1471 + /* Start the queue running */
1472 + ret = spi_master_resume(master);
1473 + if (ret)
1474 + dev_err(dev, "problem starting queue (%d)\n", ret);
1475 +
1476 + return ret;
1478 1477 }
1479 1478
1480 1479 static SIMPLE_DEV_PM_OPS(atmel_spi_pm_ops, atmel_spi_suspend, atmel_spi_resume);
+4 -2
drivers/spi/spi-coldfire-qspi.c
··· 514 514 #ifdef CONFIG_PM_RUNTIME
515 515 static int mcfqspi_runtime_suspend(struct device *dev)
516 516 {
517 - struct mcfqspi *mcfqspi = dev_get_drvdata(dev);
517 + struct spi_master *master = dev_get_drvdata(dev);
518 + struct mcfqspi *mcfqspi = spi_master_get_devdata(master);
518 519
519 520 clk_disable(mcfqspi->clk);
520 521
··· 524 523
525 524 static int mcfqspi_runtime_resume(struct device *dev)
526 525 {
527 - struct mcfqspi *mcfqspi = dev_get_drvdata(dev);
526 + struct spi_master *master = dev_get_drvdata(dev);
527 + struct mcfqspi *mcfqspi = spi_master_get_devdata(master);
528 528
529 529 clk_enable(mcfqspi->clk);
530 530
+3 -3
drivers/spi/spi-fsl-dspi.c
··· 420 420
421 421 static int dspi_resume(struct device *dev)
422 422 {
423 -
424 423 struct spi_master *master = dev_get_drvdata(dev);
425 424 struct fsl_dspi *dspi = spi_master_get_devdata(master);
426 425
··· 503 504 clk_prepare_enable(dspi->clk);
504 505
505 506 init_waitqueue_head(&dspi->waitq);
506 - platform_set_drvdata(pdev, dspi);
507 + platform_set_drvdata(pdev, master);
507 508
508 509 ret = spi_bitbang_start(&dspi->bitbang);
509 510 if (ret != 0) {
··· 524 525
525 526 static int dspi_remove(struct platform_device *pdev)
526 527 {
527 - struct fsl_dspi *dspi = platform_get_drvdata(pdev);
528 + struct spi_master *master = platform_get_drvdata(pdev);
529 + struct fsl_dspi *dspi = spi_master_get_devdata(master);
528 530
529 531 /* Disconnect from the SPI framework */
530 532 spi_bitbang_stop(&dspi->bitbang);
+2 -2
drivers/spi/spi-imx.c
··· 948 948 spi_bitbang_stop(&spi_imx->bitbang);
949 949
950 950 writel(0, spi_imx->base + MXC_CSPICTRL);
951 - clk_disable_unprepare(spi_imx->clk_ipg);
952 - clk_disable_unprepare(spi_imx->clk_per);
951 + clk_unprepare(spi_imx->clk_ipg);
952 + clk_unprepare(spi_imx->clk_per);
953 953 spi_master_put(master);
954 954
955 955 return 0;
+8 -7
drivers/spi/spi-topcliff-pch.c
··· 915 915 /* Set Tx DMA */
916 916 param = &dma->param_tx;
917 917 param->dma_dev = &dma_dev->dev;
918 - param->chan_id = data->master->bus_num * 2; /* Tx = 0, 2 */
918 + param->chan_id = data->ch * 2; /* Tx = 0, 2 */;
919 919 param->tx_reg = data->io_base_addr + PCH_SPDWR;
920 920 param->width = width;
921 921 chan = dma_request_channel(mask, pch_spi_filter, param);
··· 930 930 /* Set Rx DMA */
931 931 param = &dma->param_rx;
932 932 param->dma_dev = &dma_dev->dev;
933 - param->chan_id = data->master->bus_num * 2 + 1; /* Rx = Tx + 1 */
933 + param->chan_id = data->ch * 2 + 1; /* Rx = Tx + 1 */;
934 934 param->rx_reg = data->io_base_addr + PCH_SPDRR;
935 935 param->width = width;
936 936 chan = dma_request_channel(mask, pch_spi_filter, param);
··· 1452 1452
1453 1453 pch_spi_set_master_mode(master);
1454 1454
1455 + if (use_dma) {
1456 + dev_info(&plat_dev->dev, "Use DMA for data transfers\n");
1457 + pch_alloc_dma_buf(board_dat, data);
1458 + }
1459 +
1455 1460 ret = spi_register_master(master);
1456 1461 if (ret != 0) {
1457 1462 dev_err(&plat_dev->dev,
··· 1464 1459 goto err_spi_register_master;
1465 1460 }
1466 1461
1467 - if (use_dma) {
1468 - dev_info(&plat_dev->dev, "Use DMA for data transfers\n");
1469 - pch_alloc_dma_buf(board_dat, data);
1470 - }
1471 -
1472 1462 return 0;
1473 1463
1474 1464 err_spi_register_master:
1465 + pch_free_dma_buf(board_dat, data);
1475 1466 free_irq(board_dat->pdev->irq, data);
1476 1467 err_request_irq:
1477 1468 pch_spi_free_resources(board_dat, data);
+2
drivers/staging/cxt1e1/linux.c
··· 866 866 _IOC_SIZE (iocmd));
867 867 #endif
868 868 iolen = _IOC_SIZE (iocmd);
869 + if (iolen > sizeof(arg))
870 + return -EFAULT;
869 871 data = ifr->ifr_data + sizeof (iocmd);
870 872 if (copy_from_user (&arg, data, iolen))
871 873 return -EFAULT;
+7 -3
drivers/target/iscsi/iscsi_target.c
··· 785 785 spin_unlock_bh(&conn->cmd_lock);
786 786
787 787 list_for_each_entry_safe(cmd, cmd_p, &ack_list, i_conn_node) {
788 - list_del(&cmd->i_conn_node);
788 + list_del_init(&cmd->i_conn_node);
789 789 iscsit_free_cmd(cmd, false);
790 790 }
791 791 }
··· 3708 3708 break;
3709 3709 case ISTATE_REMOVE:
3710 3710 spin_lock_bh(&conn->cmd_lock);
3711 - list_del(&cmd->i_conn_node);
3711 + list_del_init(&cmd->i_conn_node);
3712 3712 spin_unlock_bh(&conn->cmd_lock);
3713 3713
3714 3714 iscsit_free_cmd(cmd, false);
··· 4151 4151 spin_lock_bh(&conn->cmd_lock);
4152 4152 list_for_each_entry_safe(cmd, cmd_tmp, &conn->conn_cmd_list, i_conn_node) {
4153 4153
4154 - list_del(&cmd->i_conn_node);
4154 + list_del_init(&cmd->i_conn_node);
4155 4155 spin_unlock_bh(&conn->cmd_lock);
4156 4156
4157 4157 iscsit_increment_maxcmdsn(cmd, sess);
··· 4196 4196 iscsit_stop_timers_for_cmds(conn);
4197 4197 iscsit_stop_nopin_response_timer(conn);
4198 4198 iscsit_stop_nopin_timer(conn);
4199 +
4200 + if (conn->conn_transport->iscsit_wait_conn)
4201 + conn->conn_transport->iscsit_wait_conn(conn);
4202 +
4199 4203 iscsit_free_queue_reqs_for_conn(conn);
4200 4204
4201 4205 /*
+8 -8
drivers/target/iscsi/iscsi_target_erl2.c
··· 138 138 list_for_each_entry_safe(cmd, cmd_tmp,
139 139 &cr->conn_recovery_cmd_list, i_conn_node) {
140 140
141 - list_del(&cmd->i_conn_node);
141 + list_del_init(&cmd->i_conn_node);
142 142 cmd->conn = NULL;
143 143 spin_unlock(&cr->conn_recovery_cmd_lock);
144 144 iscsit_free_cmd(cmd, true);
··· 160 160 list_for_each_entry_safe(cmd, cmd_tmp,
161 161 &cr->conn_recovery_cmd_list, i_conn_node) {
162 162
163 - list_del(&cmd->i_conn_node);
163 + list_del_init(&cmd->i_conn_node);
164 164 cmd->conn = NULL;
165 165 spin_unlock(&cr->conn_recovery_cmd_lock);
166 166 iscsit_free_cmd(cmd, true);
··· 216 216 }
217 217 cr = cmd->cr;
218 218
219 - list_del(&cmd->i_conn_node);
219 + list_del_init(&cmd->i_conn_node);
220 220 return --cr->cmd_count;
221 221 }
222 222
··· 297 297 if (!(cmd->cmd_flags & ICF_OOO_CMDSN))
298 298 continue;
299 299
300 - list_del(&cmd->i_conn_node);
300 + list_del_init(&cmd->i_conn_node);
301 301
302 302 spin_unlock_bh(&conn->cmd_lock);
303 303 iscsit_free_cmd(cmd, true);
··· 335 335 /*
336 336 * Only perform connection recovery on ISCSI_OP_SCSI_CMD or
337 337 * ISCSI_OP_NOOP_OUT opcodes. For all other opcodes call
338 - * list_del(&cmd->i_conn_node); to release the command to the
338 + * list_del_init(&cmd->i_conn_node); to release the command to the
339 339 * session pool and remove it from the connection's list.
340 340 *
341 341 * Also stop the DataOUT timer, which will be restarted after
··· 351 351 " CID: %hu\n", cmd->iscsi_opcode,
352 352 cmd->init_task_tag, cmd->cmd_sn, conn->cid);
353 353
354 - list_del(&cmd->i_conn_node);
354 + list_del_init(&cmd->i_conn_node);
355 355 spin_unlock_bh(&conn->cmd_lock);
356 356 iscsit_free_cmd(cmd, true);
357 357 spin_lock_bh(&conn->cmd_lock);
··· 371 371 */
372 372 if (!(cmd->cmd_flags & ICF_OOO_CMDSN) && !cmd->immediate_cmd &&
373 373 iscsi_sna_gte(cmd->cmd_sn, conn->sess->exp_cmd_sn)) {
374 - list_del(&cmd->i_conn_node);
374 + list_del_init(&cmd->i_conn_node);
375 375 spin_unlock_bh(&conn->cmd_lock);
376 376 iscsit_free_cmd(cmd, true);
377 377 spin_lock_bh(&conn->cmd_lock);
··· 393 393
394 394 cmd->sess = conn->sess;
395 395
396 - list_del(&cmd->i_conn_node);
396 + list_del_init(&cmd->i_conn_node);
397 397 spin_unlock_bh(&conn->cmd_lock);
398 398
399 399 iscsit_free_all_datain_reqs(cmd);
+1 -1
drivers/target/iscsi/iscsi_target_tpg.c
··· 137 137 list_for_each_entry(tpg, &tiqn->tiqn_tpg_list, tpg_list) {
138 138
139 139 spin_lock(&tpg->tpg_state_lock);
140 - if (tpg->tpg_state == TPG_STATE_FREE) {
140 + if (tpg->tpg_state != TPG_STATE_ACTIVE) {
141 141 spin_unlock(&tpg->tpg_state_lock);
142 142 continue;
143 143 }
+20 -14
drivers/target/target_core_sbc.c
··· 1079 1079 left = sectors * dev->prot_length;
1080 1080
1081 1081 for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
1082 -
1083 - len = min(psg->length, left);
1084 - if (offset >= sg->length) {
1085 - sg = sg_next(sg);
1086 - offset = 0;
1087 - }
1082 + unsigned int psg_len, copied = 0;
1088 1083
1089 1084 paddr = kmap_atomic(sg_page(psg)) + psg->offset;
1090 - addr = kmap_atomic(sg_page(sg)) + sg->offset + offset;
1085 + psg_len = min(left, psg->length);
1086 + while (psg_len) {
1087 + len = min(psg_len, sg->length - offset);
1088 + addr = kmap_atomic(sg_page(sg)) + sg->offset + offset;
1091 1089
1092 - if (read)
1093 - memcpy(paddr, addr, len);
1094 - else
1095 - memcpy(addr, paddr, len);
1090 + if (read)
1091 + memcpy(paddr + copied, addr, len);
1092 + else
1093 + memcpy(addr, paddr + copied, len);
1096 1094
1097 - left -= len;
1098 - offset += len;
1095 + left -= len;
1096 + offset += len;
1097 + copied += len;
1098 + psg_len -= len;
1099 +
1100 + if (offset >= sg->length) {
1101 + sg = sg_next(sg);
1102 + offset = 0;
1103 + }
1104 + kunmap_atomic(addr);
1105 + }
1099 1106 kunmap_atomic(paddr);
1100 - kunmap_atomic(addr);
1101 1107 }
1102 1108 }
1103 1109
+11 -2
drivers/thermal/Kconfig
··· 136 136 config RCAR_THERMAL
137 137 tristate "Renesas R-Car thermal driver"
138 138 depends on ARCH_SHMOBILE || COMPILE_TEST
139 + depends on HAS_IOMEM
139 140 help
140 141 Enable this to plug the R-Car thermal sensor driver into the Linux
141 142 thermal framework.
··· 211 210 tristate "ACPI INT3403 thermal driver"
212 211 depends on X86 && ACPI
213 212 help
214 - This driver uses ACPI INT3403 device objects. If present, it will
215 - register each INT3403 thermal sensor as a thermal zone.
213 + Newer laptops and tablets that use ACPI may have thermal sensors
214 + outside the core CPU/SOC for thermal safety reasons. These
215 + temperature sensors are also exposed for the OS to use via the so
216 + called INT3403 ACPI object. This driver will, on devices that have
217 + such sensors, expose the temperature information from these sensors
218 + to userspace via the normal thermal framework. This means that a wide
219 + range of applications and GUI widgets can show this information to
220 + the user or use this information for making decisions. For example,
221 + the Intel Thermal Daemon can use this information to allow the user
222 + to select his laptop to run without turning on the fans.
216 223
217 224 menu "Texas Instruments thermal drivers"
218 225 source "drivers/thermal/ti-soc-thermal/Kconfig"
+19 -8
drivers/thermal/thermal_core.c
··· 56 56 static DEFINE_MUTEX(thermal_list_lock);
57 57 static DEFINE_MUTEX(thermal_governor_lock);
58 58
59 + static struct thermal_governor *def_governor;
60 +
59 61 static struct thermal_governor *__find_governor(const char *name)
60 62 {
61 63 struct thermal_governor *pos;
64 +
65 + if (!name || !name[0])
66 + return def_governor;
62 67
63 68 list_for_each_entry(pos, &thermal_governor_list, governor_list)
64 69 if (!strnicmp(name, pos->name, THERMAL_NAME_LENGTH))
··· 87 82 if (__find_governor(governor->name) == NULL) {
88 83 err = 0;
89 84 list_add(&governor->governor_list, &thermal_governor_list);
85 + if (!def_governor && !strncmp(governor->name,
86 + DEFAULT_THERMAL_GOVERNOR, THERMAL_NAME_LENGTH))
87 + def_governor = governor;
90 88 }
91 89
92 90 mutex_lock(&thermal_list_lock);
93 91
94 92 list_for_each_entry(pos, &thermal_tz_list, node) {
93 + /*
94 + * only thermal zones with specified tz->tzp->governor_name
95 + * may run with tz->govenor unset
96 + */
95 97 if (pos->governor)
96 98 continue;
97 - if (pos->tzp)
98 - name = pos->tzp->governor_name;
99 - else
100 - name = DEFAULT_THERMAL_GOVERNOR;
99 +
100 + name = pos->tzp->governor_name;
101 +
101 102 if (!strnicmp(name, governor->name, THERMAL_NAME_LENGTH))
102 103 pos->governor = governor;
103 104 }
··· 353 342 static void handle_non_critical_trips(struct thermal_zone_device *tz,
354 343 int trip, enum thermal_trip_type trip_type)
355 344 {
356 - if (tz->governor)
357 - tz->governor->throttle(tz, trip);
345 + tz->governor ? tz->governor->throttle(tz, trip) :
346 + def_governor->throttle(tz, trip);
358 347 }
359 348
360 349 static void handle_critical_trips(struct thermal_zone_device *tz,
··· 1118 1107 INIT_LIST_HEAD(&cdev->thermal_instances);
1119 1108 cdev->np = np;
1120 1109 cdev->ops = ops;
1121 - cdev->updated = true;
1110 + cdev->updated = false;
1122 1111 cdev->device.class = &thermal_class;
1123 1112 cdev->devdata = devdata;
1124 1113 dev_set_name(&cdev->device, "cooling_device%d", cdev->id);
··· 1544 1533 if (tz->tzp)
1545 1534 tz->governor = __find_governor(tz->tzp->governor_name);
1546 1535 else
1547 - tz->governor = __find_governor(DEFAULT_THERMAL_GOVERNOR);
1536 + tz->governor = def_governor;
1548 1537
1549 1538 mutex_unlock(&thermal_governor_lock);
+6 -5
drivers/thermal/x86_pkg_temp_thermal.c
··· 68 68 struct thermal_zone_device *tzone;
69 69 };
70 70
71 + static const struct thermal_zone_params pkg_temp_tz_params = {
72 + .no_hwmon = true,
73 + };
74 +
71 75 /* List maintaining number of package instances */
72 76 static LIST_HEAD(phy_dev_list);
73 77 static DEFINE_MUTEX(phy_dev_list_mutex);
··· 398 394 int err;
399 395 u32 tj_max;
400 396 struct phy_dev_entry *phy_dev_entry;
401 - char buffer[30];
402 397 int thres_count;
403 398 u32 eax, ebx, ecx, edx;
404 399 u8 *temp;
··· 443 440 phy_dev_entry->first_cpu = cpu;
444 441 phy_dev_entry->tj_max = tj_max;
445 442 phy_dev_entry->ref_cnt = 1;
446 - snprintf(buffer, sizeof(buffer), "pkg-temp-%d\n",
447 - phy_dev_entry->phys_proc_id);
448 - phy_dev_entry->tzone = thermal_zone_device_register(buffer,
443 + phy_dev_entry->tzone = thermal_zone_device_register("x86_pkg_temp",
449 444 thres_count,
450 445 (thres_count == MAX_NUMBER_OF_TRIPS) ?
451 446 0x03 : 0x01,
452 - phy_dev_entry, &tzone_ops, NULL, 0, 0);
447 + phy_dev_entry, &tzone_ops, &pkg_temp_tz_params, 0, 0);
453 448 if (IS_ERR(phy_dev_entry->tzone)) {
454 449 err = PTR_ERR(phy_dev_entry->tzone);
455 450 goto err_ret_free;
+4
drivers/usb/core/config.c
··· 717 717 result = -ENOMEM; 718 718 goto err; 719 719 } 720 + 721 + if (dev->quirks & USB_QUIRK_DELAY_INIT) 722 + msleep(100); 723 + 720 724 result = usb_get_descriptor(dev, USB_DT_CONFIG, cfgno, 721 725 bigbuffer, length); 722 726 if (result < 0) {
+4
drivers/usb/core/quirks.c
··· 47 47 /* Microsoft LifeCam-VX700 v2.0 */ 48 48 { USB_DEVICE(0x045e, 0x0770), .driver_info = USB_QUIRK_RESET_RESUME }, 49 49 50 + /* Logitech HD Pro Webcams C920 and C930e */ 51 + { USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT }, 52 + { USB_DEVICE(0x046d, 0x0843), .driver_info = USB_QUIRK_DELAY_INIT }, 53 + 50 54 /* Logitech Quickcam Fusion */ 51 55 { USB_DEVICE(0x046d, 0x08c1), .driver_info = USB_QUIRK_RESET_RESUME }, 52 56
+3 -11
drivers/usb/host/xhci.c
··· 4733 4733 /* Accept arbitrarily long scatter-gather lists */ 4734 4734 hcd->self.sg_tablesize = ~0; 4735 4735 4736 + /* support to build packet from discontinuous buffers */ 4737 + hcd->self.no_sg_constraint = 1; 4738 + 4736 4739 /* XHCI controllers don't stop the ep queue on short packets :| */ 4737 4740 hcd->self.no_stop_on_short = 1; 4738 4741 ··· 4760 4757 /* xHCI private pointer was set in xhci_pci_probe for the second 4761 4758 * registered roothub. 4762 4759 */ 4763 - xhci = hcd_to_xhci(hcd); 4764 - /* 4765 - * Support arbitrarily aligned sg-list entries on hosts without 4766 - * TD fragment rules (which are currently unsupported). 4767 - */ 4768 - if (xhci->hci_version < 0x100) 4769 - hcd->self.no_sg_constraint = 1; 4770 - 4771 4760 return 0; 4772 4761 } 4773 4762 ··· 4787 4792 */ 4788 4793 if (xhci->hci_version > 0x96) 4789 4794 xhci->quirks |= XHCI_SPURIOUS_SUCCESS; 4790 - 4791 - if (xhci->hci_version < 0x100) 4792 - hcd->self.no_sg_constraint = 1; 4793 4795 4794 4796 /* Make sure the HC is halted. */ 4795 4797 retval = xhci_halt(xhci);
+1 -4
fs/bio-integrity.c
··· 458 458 struct blk_integrity_exchg bix; 459 459 struct bio_vec *bv; 460 460 sector_t sector = bio->bi_integrity->bip_iter.bi_sector; 461 - unsigned int sectors, total, ret; 461 + unsigned int sectors, ret = 0; 462 462 void *prot_buf = bio->bi_integrity->bip_buf; 463 463 int i; 464 464 465 - ret = total = 0; 466 465 bix.disk_name = bio->bi_bdev->bd_disk->disk_name; 467 466 bix.sector_size = bi->sector_size; 468 467 ··· 483 484 sectors = bv->bv_len / bi->sector_size; 484 485 sector += sectors; 485 486 prot_buf += sectors * bi->tuple_size; 486 - total += sectors * bi->tuple_size; 487 - BUG_ON(total > bio->bi_integrity->bip_iter.bi_size); 488 487 489 488 kunmap_atomic(kaddr); 490 489 }
+1 -1
fs/cifs/cifsglob.h
··· 513 513 static inline unsigned int 514 514 get_rfc1002_length(void *buf) 515 515 { 516 - return be32_to_cpu(*((__be32 *)buf)); 516 + return be32_to_cpu(*((__be32 *)buf)) & 0xffffff; 517 517 } 518 518 519 519 static inline void
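The `get_rfc1002_length()` fix above masks the 32-bit header word down to 24 bits. In an RFC 1002 session packet the high byte of the 4-byte header is the message type, and only the low 24 bits carry the payload length; without the mask, a non-zero type byte inflates the computed length. A hypothetical userspace sketch of the same extraction (function and parameter names are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Extract the payload length from a 4-byte RFC 1002 session header.
 * The header is big-endian: byte 0 is the message type, bytes 1-3 are
 * the length. Masking with 0xffffff mirrors the cifsglob.h fix above. */
static uint32_t rfc1002_length(const uint8_t hdr[4])
{
    uint32_t raw = ((uint32_t)hdr[0] << 24) | ((uint32_t)hdr[1] << 16) |
                   ((uint32_t)hdr[2] << 8)  |  (uint32_t)hdr[3];
    return raw & 0xffffff;   /* drop the message-type byte */
}
```

With the mask, a header of `85 00 00 10` (type 0x85, length 16) yields 16 rather than the bogus 0x85000010 the unmasked read would produce.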
+6 -18
fs/cifs/file.c
··· 2579 2579 struct cifsInodeInfo *cinode = CIFS_I(inode); 2580 2580 struct TCP_Server_Info *server = tlink_tcon(cfile->tlink)->ses->server; 2581 2581 ssize_t rc = -EACCES; 2582 + loff_t lock_pos = pos; 2582 2583 2583 - BUG_ON(iocb->ki_pos != pos); 2584 - 2584 + if (file->f_flags & O_APPEND) 2585 + lock_pos = i_size_read(inode); 2585 2586 /* 2586 2587 * We need to hold the sem to be sure nobody modifies lock list 2587 2588 * with a brlock that prevents writing. 2588 2589 */ 2589 2590 down_read(&cinode->lock_sem); 2590 - if (!cifs_find_lock_conflict(cfile, pos, iov_length(iov, nr_segs), 2591 + if (!cifs_find_lock_conflict(cfile, lock_pos, iov_length(iov, nr_segs), 2591 2592 server->vals->exclusive_lock_type, NULL, 2592 - CIFS_WRITE_OP)) { 2593 - mutex_lock(&inode->i_mutex); 2594 - rc = __generic_file_aio_write(iocb, iov, nr_segs, 2595 - &iocb->ki_pos); 2596 - mutex_unlock(&inode->i_mutex); 2597 - } 2598 - 2599 - if (rc > 0) { 2600 - ssize_t err; 2601 - 2602 - err = generic_write_sync(file, iocb->ki_pos - rc, rc); 2603 - if (err < 0) 2604 - rc = err; 2605 - } 2606 - 2593 + CIFS_WRITE_OP)) 2594 + rc = generic_file_aio_write(iocb, iov, nr_segs, pos); 2607 2595 up_read(&cinode->lock_sem); 2608 2596 return rc; 2609 2597 }
+29
fs/cifs/transport.c
··· 270 270 iov->iov_len = rqst->rq_pagesz; 271 271 } 272 272 273 + static unsigned long 274 + rqst_len(struct smb_rqst *rqst) 275 + { 276 + unsigned int i; 277 + struct kvec *iov = rqst->rq_iov; 278 + unsigned long buflen = 0; 279 + 280 + /* total up iov array first */ 281 + for (i = 0; i < rqst->rq_nvec; i++) 282 + buflen += iov[i].iov_len; 283 + 284 + /* add in the page array if there is one */ 285 + if (rqst->rq_npages) { 286 + buflen += rqst->rq_pagesz * (rqst->rq_npages - 1); 287 + buflen += rqst->rq_tailsz; 288 + } 289 + 290 + return buflen; 291 + } 292 + 273 293 static int 274 294 smb_send_rqst(struct TCP_Server_Info *server, struct smb_rqst *rqst) 275 295 { ··· 297 277 struct kvec *iov = rqst->rq_iov; 298 278 int n_vec = rqst->rq_nvec; 299 279 unsigned int smb_buf_length = get_rfc1002_length(iov[0].iov_base); 280 + unsigned long send_length; 300 281 unsigned int i; 301 282 size_t total_len = 0, sent; 302 283 struct socket *ssocket = server->ssocket; ··· 305 284 306 285 if (ssocket == NULL) 307 286 return -ENOTSOCK; 287 + 288 + /* sanity check send length */ 289 + send_length = rqst_len(rqst); 290 + if (send_length != smb_buf_length + 4) { 291 + WARN(1, "Send length mismatch(send_length=%lu smb_buf_length=%u)\n", 292 + send_length, smb_buf_length); 293 + return -EIO; 294 + } 308 295 309 296 cifs_dbg(FYI, "Sending smb: smb_len=%u\n", smb_buf_length); 310 297 dump_smb(iov[0].iov_base, iov[0].iov_len);
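The new `rqst_len()` helper totals a request as the sum of its kvec array plus a page array expressed as (npages − 1) full pages and a tail fragment; `smb_send_rqst()` then compares that against the RFC1002 length plus the 4-byte header. A minimal userspace analogue of the same arithmetic, with hypothetical type and parameter names standing in for the kernel's `smb_rqst` fields:

```c
#include <stddef.h>

/* Stand-in for struct kvec. */
struct kv {
    void   *base;
    size_t  len;
};

/* Total length of a request: sum the vector entries, then add the page
 * array as (npages - 1) full pages plus the partial tail page. Mirrors
 * the rqst_len() sanity helper added in fs/cifs/transport.c above. */
static unsigned long total_len(const struct kv *iov, unsigned int nvec,
                               unsigned int npages, unsigned int pagesz,
                               unsigned int tailsz)
{
    unsigned long buflen = 0;
    unsigned int i;

    /* total up the iov array first */
    for (i = 0; i < nvec; i++)
        buflen += iov[i].len;

    /* add in the page array if there is one */
    if (npages) {
        buflen += (unsigned long)pagesz * (npages - 1);
        buflen += tailsz;
    }
    return buflen;
}
```

For example, two kvecs of 10 and 20 bytes plus three 4096-byte pages ending in a 100-byte tail total 30 + 2·4096 + 100 = 8322 bytes.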
+43 -13
fs/file.c
··· 683 683 * The fput_needed flag returned by fget_light should be passed to the 684 684 * corresponding fput_light. 685 685 */ 686 - struct file *__fget_light(unsigned int fd, fmode_t mask, int *fput_needed) 686 + static unsigned long __fget_light(unsigned int fd, fmode_t mask) 687 687 { 688 688 struct files_struct *files = current->files; 689 689 struct file *file; 690 690 691 - *fput_needed = 0; 692 691 if (atomic_read(&files->count) == 1) { 693 692 file = __fcheck_files(files, fd); 694 - if (file && (file->f_mode & mask)) 695 - file = NULL; 693 + if (!file || unlikely(file->f_mode & mask)) 694 + return 0; 695 + return (unsigned long)file; 696 696 } else { 697 697 file = __fget(fd, mask); 698 - if (file) 699 - *fput_needed = 1; 698 + if (!file) 699 + return 0; 700 + return FDPUT_FPUT | (unsigned long)file; 700 701 } 701 - 702 - return file; 703 702 } 704 - struct file *fget_light(unsigned int fd, int *fput_needed) 703 + unsigned long __fdget(unsigned int fd) 705 704 { 706 - return __fget_light(fd, FMODE_PATH, fput_needed); 705 + return __fget_light(fd, FMODE_PATH); 707 706 } 708 - EXPORT_SYMBOL(fget_light); 707 + EXPORT_SYMBOL(__fdget); 709 708 710 - struct file *fget_raw_light(unsigned int fd, int *fput_needed) 709 + unsigned long __fdget_raw(unsigned int fd) 711 710 { 712 - return __fget_light(fd, 0, fput_needed); 711 + return __fget_light(fd, 0); 713 712 } 713 + 714 + unsigned long __fdget_pos(unsigned int fd) 715 + { 716 + struct files_struct *files = current->files; 717 + struct file *file; 718 + unsigned long v; 719 + 720 + if (atomic_read(&files->count) == 1) { 721 + file = __fcheck_files(files, fd); 722 + v = 0; 723 + } else { 724 + file = __fget(fd, 0); 725 + v = FDPUT_FPUT; 726 + } 727 + if (!file) 728 + return 0; 729 + 730 + if (file->f_mode & FMODE_ATOMIC_POS) { 731 + if (file_count(file) > 1) { 732 + v |= FDPUT_POS_UNLOCK; 733 + mutex_lock(&file->f_pos_lock); 734 + } 735 + } 736 + return v | (unsigned long)file; 737 + } 738 + 739 + /* 740 + * We 
only lock f_pos if we have threads or if the file might be 741 + * shared with another process. In both cases we'll have an elevated 742 + * file count (done either by fdget() or by fork()). 743 + */ 714 744 715 745 void set_close_on_exec(unsigned int fd, int flag) 716 746 {
+1
fs/file_table.c
··· 135 135 atomic_long_set(&f->f_count, 1); 136 136 rwlock_init(&f->f_owner.lock); 137 137 spin_lock_init(&f->f_lock); 138 + mutex_init(&f->f_pos_lock); 138 139 eventpoll_init_file(f); 139 140 /* f->f_version: 0 */ 140 141 return f;
+41
fs/hfsplus/catalog.c
··· 103 103 folder = &entry->folder; 104 104 memset(folder, 0, sizeof(*folder)); 105 105 folder->type = cpu_to_be16(HFSPLUS_FOLDER); 106 + if (test_bit(HFSPLUS_SB_HFSX, &sbi->flags)) 107 + folder->flags |= cpu_to_be16(HFSPLUS_HAS_FOLDER_COUNT); 106 108 folder->id = cpu_to_be32(inode->i_ino); 107 109 HFSPLUS_I(inode)->create_date = 108 110 folder->create_date = ··· 205 203 return hfs_brec_find(fd, hfs_find_rec_by_key); 206 204 } 207 205 206 + static void hfsplus_subfolders_inc(struct inode *dir) 207 + { 208 + struct hfsplus_sb_info *sbi = HFSPLUS_SB(dir->i_sb); 209 + 210 + if (test_bit(HFSPLUS_SB_HFSX, &sbi->flags)) { 211 + /* 212 + * Increment subfolder count. Note, the value is only meaningful 213 + * for folders with HFSPLUS_HAS_FOLDER_COUNT flag set. 214 + */ 215 + HFSPLUS_I(dir)->subfolders++; 216 + } 217 + } 218 + 219 + static void hfsplus_subfolders_dec(struct inode *dir) 220 + { 221 + struct hfsplus_sb_info *sbi = HFSPLUS_SB(dir->i_sb); 222 + 223 + if (test_bit(HFSPLUS_SB_HFSX, &sbi->flags)) { 224 + /* 225 + * Decrement subfolder count. Note, the value is only meaningful 226 + * for folders with HFSPLUS_HAS_FOLDER_COUNT flag set. 227 + * 228 + * Check for zero. Some subfolders may have been created 229 + * by an implementation ignorant of this counter. 
230 + */ 231 + if (HFSPLUS_I(dir)->subfolders) 232 + HFSPLUS_I(dir)->subfolders--; 233 + } 234 + } 235 + 208 236 int hfsplus_create_cat(u32 cnid, struct inode *dir, 209 237 struct qstr *str, struct inode *inode) 210 238 { ··· 279 247 goto err1; 280 248 281 249 dir->i_size++; 250 + if (S_ISDIR(inode->i_mode)) 251 + hfsplus_subfolders_inc(dir); 282 252 dir->i_mtime = dir->i_ctime = CURRENT_TIME_SEC; 283 253 hfsplus_mark_inode_dirty(dir, HFSPLUS_I_CAT_DIRTY); 284 254 ··· 370 336 goto out; 371 337 372 338 dir->i_size--; 339 + if (type == HFSPLUS_FOLDER) 340 + hfsplus_subfolders_dec(dir); 373 341 dir->i_mtime = dir->i_ctime = CURRENT_TIME_SEC; 374 342 hfsplus_mark_inode_dirty(dir, HFSPLUS_I_CAT_DIRTY); 375 343 ··· 416 380 417 381 hfs_bnode_read(src_fd.bnode, &entry, src_fd.entryoffset, 418 382 src_fd.entrylength); 383 + type = be16_to_cpu(entry.type); 419 384 420 385 /* create new dir entry with the data from the old entry */ 421 386 hfsplus_cat_build_key(sb, dst_fd.search_key, dst_dir->i_ino, dst_name); ··· 431 394 if (err) 432 395 goto out; 433 396 dst_dir->i_size++; 397 + if (type == HFSPLUS_FOLDER) 398 + hfsplus_subfolders_inc(dst_dir); 434 399 dst_dir->i_mtime = dst_dir->i_ctime = CURRENT_TIME_SEC; 435 400 436 401 /* finally remove the old entry */ ··· 444 405 if (err) 445 406 goto out; 446 407 src_dir->i_size--; 408 + if (type == HFSPLUS_FOLDER) 409 + hfsplus_subfolders_dec(src_dir); 447 410 src_dir->i_mtime = src_dir->i_ctime = CURRENT_TIME_SEC; 448 411 449 412 /* remove old thread entry */
+1
fs/hfsplus/hfsplus_fs.h
··· 242 242 */ 243 243 sector_t fs_blocks; 244 244 u8 userflags; /* BSD user file flags */ 245 + u32 subfolders; /* Subfolder count (HFSX only) */ 245 246 struct list_head open_dir_list; 246 247 loff_t phys_size; 247 248
+4 -2
fs/hfsplus/hfsplus_raw.h
··· 261 261 struct DInfo user_info; 262 262 struct DXInfo finder_info; 263 263 __be32 text_encoding; 264 - u32 reserved; 264 + __be32 subfolders; /* Subfolder count in HFSX. Reserved in HFS+. */ 265 265 } __packed; 266 266 267 267 /* HFS file info (stolen from hfs.h) */ ··· 301 301 struct hfsplus_fork_raw rsrc_fork; 302 302 } __packed; 303 303 304 - /* File attribute bits */ 304 + /* File and folder flag bits */ 305 305 #define HFSPLUS_FILE_LOCKED 0x0001 306 306 #define HFSPLUS_FILE_THREAD_EXISTS 0x0002 307 307 #define HFSPLUS_XATTR_EXISTS 0x0004 308 308 #define HFSPLUS_ACL_EXISTS 0x0008 309 + #define HFSPLUS_HAS_FOLDER_COUNT 0x0010 /* Folder has subfolder count 310 + * (HFSX only) */ 309 311 310 312 /* HFS+ catalog thread (part of a cat_entry) */ 311 313 struct hfsplus_cat_thread {
+9
fs/hfsplus/inode.c
··· 375 375 hip->extent_state = 0; 376 376 hip->flags = 0; 377 377 hip->userflags = 0; 378 + hip->subfolders = 0; 378 379 memset(hip->first_extents, 0, sizeof(hfsplus_extent_rec)); 379 380 memset(hip->cached_extents, 0, sizeof(hfsplus_extent_rec)); 380 381 hip->alloc_blocks = 0; ··· 495 494 inode->i_ctime = hfsp_mt2ut(folder->attribute_mod_date); 496 495 HFSPLUS_I(inode)->create_date = folder->create_date; 497 496 HFSPLUS_I(inode)->fs_blocks = 0; 497 + if (folder->flags & cpu_to_be16(HFSPLUS_HAS_FOLDER_COUNT)) { 498 + HFSPLUS_I(inode)->subfolders = 499 + be32_to_cpu(folder->subfolders); 500 + } 498 501 inode->i_op = &hfsplus_dir_inode_operations; 499 502 inode->i_fop = &hfsplus_dir_operations; 500 503 } else if (type == HFSPLUS_FILE) { ··· 571 566 folder->content_mod_date = hfsp_ut2mt(inode->i_mtime); 572 567 folder->attribute_mod_date = hfsp_ut2mt(inode->i_ctime); 573 568 folder->valence = cpu_to_be32(inode->i_size - 2); 569 + if (folder->flags & cpu_to_be16(HFSPLUS_HAS_FOLDER_COUNT)) { 570 + folder->subfolders = 571 + cpu_to_be32(HFSPLUS_I(inode)->subfolders); 572 + } 574 573 hfs_bnode_write(fd.bnode, &entry, fd.entryoffset, 575 574 sizeof(struct hfsplus_cat_folder)); 576 575 } else if (HFSPLUS_IS_RSRC(inode)) {
+1 -1
fs/namei.c
··· 1884 1884 1885 1885 nd->path = f.file->f_path; 1886 1886 if (flags & LOOKUP_RCU) { 1887 - if (f.need_put) 1887 + if (f.flags & FDPUT_FPUT) 1888 1888 *fp = f.file; 1889 1889 nd->seq = __read_seqcount_begin(&nd->path.dentry->d_seq); 1890 1890 rcu_read_lock();
+7 -4
fs/nfs/delegation.c
··· 659 659 660 660 rcu_read_lock(); 661 661 delegation = rcu_dereference(NFS_I(inode)->delegation); 662 + if (delegation == NULL) 663 + goto out_enoent; 662 664 663 - if (!clp->cl_mvops->match_stateid(&delegation->stateid, stateid)) { 664 - rcu_read_unlock(); 665 - return -ENOENT; 666 - } 665 + if (!clp->cl_mvops->match_stateid(&delegation->stateid, stateid)) 666 + goto out_enoent; 667 667 nfs_mark_return_delegation(server, delegation); 668 668 rcu_read_unlock(); 669 669 670 670 nfs_delegation_run_state_manager(clp); 671 671 return 0; 672 + out_enoent: 673 + rcu_read_unlock(); 674 + return -ENOENT; 672 675 } 673 676 674 677 static struct inode *
+6 -4
fs/nfs/nfs4filelayout.c
··· 324 324 &rdata->res.seq_res, 325 325 task)) 326 326 return; 327 - nfs4_set_rw_stateid(&rdata->args.stateid, rdata->args.context, 328 - rdata->args.lock_context, FMODE_READ); 327 + if (nfs4_set_rw_stateid(&rdata->args.stateid, rdata->args.context, 328 + rdata->args.lock_context, FMODE_READ) == -EIO) 329 + rpc_exit(task, -EIO); /* lost lock, terminate I/O */ 329 330 } 330 331 331 332 static void filelayout_read_call_done(struct rpc_task *task, void *data) ··· 436 435 &wdata->res.seq_res, 437 436 task)) 438 437 return; 439 - nfs4_set_rw_stateid(&wdata->args.stateid, wdata->args.context, 440 - wdata->args.lock_context, FMODE_WRITE); 438 + if (nfs4_set_rw_stateid(&wdata->args.stateid, wdata->args.context, 439 + wdata->args.lock_context, FMODE_WRITE) == -EIO) 440 + rpc_exit(task, -EIO); /* lost lock, terminate I/O */ 441 441 } 442 442 443 443 static void filelayout_write_call_done(struct rpc_task *task, void *data)
+14 -10
fs/nfs/nfs4proc.c
··· 2398 2398 2399 2399 if (nfs4_copy_delegation_stateid(&arg.stateid, inode, fmode)) { 2400 2400 /* Use that stateid */ 2401 - } else if (truncate && state != NULL && nfs4_valid_open_stateid(state)) { 2401 + } else if (truncate && state != NULL) { 2402 2402 struct nfs_lockowner lockowner = { 2403 2403 .l_owner = current->files, 2404 2404 .l_pid = current->tgid, 2405 2405 }; 2406 - nfs4_select_rw_stateid(&arg.stateid, state, FMODE_WRITE, 2407 - &lockowner); 2406 + if (!nfs4_valid_open_stateid(state)) 2407 + return -EBADF; 2408 + if (nfs4_select_rw_stateid(&arg.stateid, state, FMODE_WRITE, 2409 + &lockowner) == -EIO) 2410 + return -EBADF; 2408 2411 } else 2409 2412 nfs4_stateid_copy(&arg.stateid, &zero_stateid); 2410 2413 ··· 4014 4011 { 4015 4012 nfs4_stateid current_stateid; 4016 4013 4017 - if (nfs4_set_rw_stateid(&current_stateid, ctx, l_ctx, fmode)) 4018 - return false; 4014 + /* If the current stateid represents a lost lock, then exit */ 4015 + if (nfs4_set_rw_stateid(&current_stateid, ctx, l_ctx, fmode) == -EIO) 4016 + return true; 4019 4017 return nfs4_stateid_match(stateid, &current_stateid); 4020 4018 } 4021 4019 ··· 5832 5828 struct nfs4_lock_state *lsp; 5833 5829 struct nfs_server *server; 5834 5830 struct nfs_release_lockowner_args args; 5835 - struct nfs4_sequence_args seq_args; 5836 - struct nfs4_sequence_res seq_res; 5831 + struct nfs_release_lockowner_res res; 5837 5832 unsigned long timestamp; 5838 5833 }; 5839 5834 ··· 5840 5837 { 5841 5838 struct nfs_release_lockowner_data *data = calldata; 5842 5839 nfs40_setup_sequence(data->server, 5843 - &data->seq_args, &data->seq_res, task); 5840 + &data->args.seq_args, &data->res.seq_res, task); 5844 5841 data->timestamp = jiffies; 5845 5842 } 5846 5843 ··· 5849 5846 struct nfs_release_lockowner_data *data = calldata; 5850 5847 struct nfs_server *server = data->server; 5851 5848 5852 - nfs40_sequence_done(task, &data->seq_res); 5849 + nfs40_sequence_done(task, &data->res.seq_res); 5853 5850 5854 5851 
switch (task->tk_status) { 5855 5852 case 0: ··· 5890 5887 data = kmalloc(sizeof(*data), GFP_NOFS); 5891 5888 if (!data) 5892 5889 return -ENOMEM; 5893 - nfs4_init_sequence(&data->seq_args, &data->seq_res, 0); 5894 5890 data->lsp = lsp; 5895 5891 data->server = server; 5896 5892 data->args.lock_owner.clientid = server->nfs_client->cl_clientid; ··· 5897 5895 data->args.lock_owner.s_dev = server->s_dev; 5898 5896 5899 5897 msg.rpc_argp = &data->args; 5898 + msg.rpc_resp = &data->res; 5899 + nfs4_init_sequence(&data->args.seq_args, &data->res.seq_res, 0); 5900 5900 rpc_call_async(server->client, &msg, 0, &nfs4_release_lockowner_ops, data); 5901 5901 return 0; 5902 5902 }
+3 -11
fs/nfs/nfs4state.c
··· 974 974 else if (lsp != NULL && test_bit(NFS_LOCK_INITIALIZED, &lsp->ls_flags) != 0) { 975 975 nfs4_stateid_copy(dst, &lsp->ls_stateid); 976 976 ret = 0; 977 - smp_rmb(); 978 - if (!list_empty(&lsp->ls_seqid.list)) 979 - ret = -EWOULDBLOCK; 980 977 } 981 978 spin_unlock(&state->state_lock); 982 979 nfs4_put_lock_state(lsp); ··· 981 984 return ret; 982 985 } 983 986 984 - static int nfs4_copy_open_stateid(nfs4_stateid *dst, struct nfs4_state *state) 987 + static void nfs4_copy_open_stateid(nfs4_stateid *dst, struct nfs4_state *state) 985 988 { 986 989 const nfs4_stateid *src; 987 - int ret; 988 990 int seq; 989 991 990 992 do { ··· 992 996 if (test_bit(NFS_OPEN_STATE, &state->flags)) 993 997 src = &state->open_stateid; 994 998 nfs4_stateid_copy(dst, src); 995 - ret = 0; 996 - smp_rmb(); 997 - if (!list_empty(&state->owner->so_seqid.list)) 998 - ret = -EWOULDBLOCK; 999 999 } while (read_seqretry(&state->seqlock, seq)); 1000 - return ret; 1001 1000 } 1002 1001 1003 1002 /* ··· 1017 1026 * choose to use. 1018 1027 */ 1019 1028 goto out; 1020 - ret = nfs4_copy_open_stateid(dst, state); 1029 + nfs4_copy_open_stateid(dst, state); 1030 + ret = 0; 1021 1031 out: 1022 1032 if (nfs_server_capable(state->inode, NFS_CAP_STATEID_NFSV41)) 1023 1033 dst->seqid = 0;
+4 -4
fs/ocfs2/file.c
··· 2393 2393 2394 2394 if (((file->f_flags & O_DSYNC) && !direct_io) || IS_SYNC(inode) || 2395 2395 ((file->f_flags & O_DIRECT) && !direct_io)) { 2396 - ret = filemap_fdatawrite_range(file->f_mapping, pos, 2397 - pos + count - 1); 2396 + ret = filemap_fdatawrite_range(file->f_mapping, *ppos, 2397 + *ppos + count - 1); 2398 2398 if (ret < 0) 2399 2399 written = ret; 2400 2400 ··· 2407 2407 } 2408 2408 2409 2409 if (!ret) 2410 - ret = filemap_fdatawait_range(file->f_mapping, pos, 2411 - pos + count - 1); 2410 + ret = filemap_fdatawait_range(file->f_mapping, *ppos, 2411 + *ppos + count - 1); 2412 2412 } 2413 2413 2414 2414 /*
+4
fs/open.c
··· 705 705 return 0; 706 706 } 707 707 708 + /* POSIX.1-2008/SUSv4 Section XSI 2.9.7 */ 709 + if (S_ISREG(inode->i_mode)) 710 + f->f_mode |= FMODE_ATOMIC_POS; 711 + 708 712 f->f_op = fops_get(inode->i_fop); 709 713 if (unlikely(WARN_ON(!f->f_op))) { 710 714 error = -ENODEV;
+1
fs/proc/base.c
··· 1824 1824 if (rc) 1825 1825 goto out_mmput; 1826 1826 1827 + rc = -ENOENT; 1827 1828 down_read(&mm->mmap_sem); 1828 1829 vma = find_exact_vma(mm, vm_start, vm_end); 1829 1830 if (vma && vma->vm_file) {
+26 -14
fs/read_write.c
··· 264 264 } 265 265 EXPORT_SYMBOL(vfs_llseek); 266 266 267 + static inline struct fd fdget_pos(int fd) 268 + { 269 + return __to_fd(__fdget_pos(fd)); 270 + } 271 + 272 + static inline void fdput_pos(struct fd f) 273 + { 274 + if (f.flags & FDPUT_POS_UNLOCK) 275 + mutex_unlock(&f.file->f_pos_lock); 276 + fdput(f); 277 + } 278 + 267 279 SYSCALL_DEFINE3(lseek, unsigned int, fd, off_t, offset, unsigned int, whence) 268 280 { 269 281 off_t retval; 270 - struct fd f = fdget(fd); 282 + struct fd f = fdget_pos(fd); 271 283 if (!f.file) 272 284 return -EBADF; 273 285 ··· 290 278 if (res != (loff_t)retval) 291 279 retval = -EOVERFLOW; /* LFS: should only happen on 32 bit platforms */ 292 280 } 293 - fdput(f); 281 + fdput_pos(f); 294 282 return retval; 295 283 } 296 284 ··· 510 498 511 499 SYSCALL_DEFINE3(read, unsigned int, fd, char __user *, buf, size_t, count) 512 500 { 513 - struct fd f = fdget(fd); 501 + struct fd f = fdget_pos(fd); 514 502 ssize_t ret = -EBADF; 515 503 516 504 if (f.file) { ··· 518 506 ret = vfs_read(f.file, buf, count, &pos); 519 507 if (ret >= 0) 520 508 file_pos_write(f.file, pos); 521 - fdput(f); 509 + fdput_pos(f); 522 510 } 523 511 return ret; 524 512 } ··· 526 514 SYSCALL_DEFINE3(write, unsigned int, fd, const char __user *, buf, 527 515 size_t, count) 528 516 { 529 - struct fd f = fdget(fd); 517 + struct fd f = fdget_pos(fd); 530 518 ssize_t ret = -EBADF; 531 519 532 520 if (f.file) { ··· 534 522 ret = vfs_write(f.file, buf, count, &pos); 535 523 if (ret >= 0) 536 524 file_pos_write(f.file, pos); 537 - fdput(f); 525 + fdput_pos(f); 538 526 } 539 527 540 528 return ret; ··· 809 797 SYSCALL_DEFINE3(readv, unsigned long, fd, const struct iovec __user *, vec, 810 798 unsigned long, vlen) 811 799 { 812 - struct fd f = fdget(fd); 800 + struct fd f = fdget_pos(fd); 813 801 ssize_t ret = -EBADF; 814 802 815 803 if (f.file) { ··· 817 805 ret = vfs_readv(f.file, vec, vlen, &pos); 818 806 if (ret >= 0) 819 807 file_pos_write(f.file, pos); 820 - fdput(f); 
808 + fdput_pos(f); 821 809 } 822 810 823 811 if (ret > 0) ··· 829 817 SYSCALL_DEFINE3(writev, unsigned long, fd, const struct iovec __user *, vec, 830 818 unsigned long, vlen) 831 819 { 832 - struct fd f = fdget(fd); 820 + struct fd f = fdget_pos(fd); 833 821 ssize_t ret = -EBADF; 834 822 835 823 if (f.file) { ··· 837 825 ret = vfs_writev(f.file, vec, vlen, &pos); 838 826 if (ret >= 0) 839 827 file_pos_write(f.file, pos); 840 - fdput(f); 828 + fdput_pos(f); 841 829 } 842 830 843 831 if (ret > 0) ··· 980 968 const struct compat_iovec __user *,vec, 981 969 compat_ulong_t, vlen) 982 970 { 983 - struct fd f = fdget(fd); 971 + struct fd f = fdget_pos(fd); 984 972 ssize_t ret; 985 973 loff_t pos; 986 974 ··· 990 978 ret = compat_readv(f.file, vec, vlen, &pos); 991 979 if (ret >= 0) 992 980 f.file->f_pos = pos; 993 - fdput(f); 981 + fdput_pos(f); 994 982 return ret; 995 983 } 996 984 ··· 1047 1035 const struct compat_iovec __user *, vec, 1048 1036 compat_ulong_t, vlen) 1049 1037 { 1050 - struct fd f = fdget(fd); 1038 + struct fd f = fdget_pos(fd); 1051 1039 ssize_t ret; 1052 1040 loff_t pos; 1053 1041 ··· 1057 1045 ret = compat_writev(f.file, vec, vlen, &pos); 1058 1046 if (ret >= 0) 1059 1047 f.file->f_pos = pos; 1060 - fdput(f); 1048 + fdput_pos(f); 1061 1049 return ret; 1062 1050 } 1063 1051
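The `fdget_pos()`/`fdput_pos()` pair above only takes `f_pos_lock` when a race on the file position is actually possible: the file must have `FMODE_ATOMIC_POS` (regular files, per POSIX) and an elevated reference count (shared by threads or across fork). A flattened-out sketch of that decision, with the count and threading state passed in as plain parameters purely for illustration:

```c
#include <assert.h>

#define FDPUT_FPUT       1
#define FDPUT_POS_UNLOCK 2
#define FMODE_ATOMIC_POS 0x8000

/* Hypothetical condensation of __fdget_pos(): decide which flags the
 * caller gets back. FDPUT_FPUT means an fput() is owed (the fd table is
 * shared, so a reference was taken); FDPUT_POS_UNLOCK means f_pos_lock
 * was taken and fdput_pos() must release it. */
static unsigned int fdget_pos_flags(unsigned int f_mode, long file_count,
                                    int single_threaded)
{
    unsigned int v = single_threaded ? 0 : FDPUT_FPUT;

    /* lock f_pos only for regular files that might be shared */
    if ((f_mode & FMODE_ATOMIC_POS) && file_count > 1)
        v |= FDPUT_POS_UNLOCK;
    return v;
}
```

So a single-threaded process with an unshared regular file pays nothing, while a threaded process or a fork-shared descriptor serializes its position updates.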
+5
include/kvm/arm_vgic.h
··· 171 171 return 0; 172 172 } 173 173 174 + static inline int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write) 175 + { 176 + return -ENXIO; 177 + } 178 + 174 179 static inline int kvm_vgic_init(struct kvm *kvm) 175 180 { 176 181 return 0;
+2 -1
include/linux/audit.h
··· 43 43 struct mqstat; 44 44 struct audit_watch; 45 45 struct audit_tree; 46 + struct sk_buff; 46 47 47 48 struct audit_krule { 48 49 int vers_ops; ··· 464 463 extern int audit_filter_type(int type); 465 464 extern int audit_rule_change(int type, __u32 portid, int seq, 466 465 void *data, size_t datasz); 467 - extern int audit_list_rules_send(__u32 portid, int seq); 466 + extern int audit_list_rules_send(struct sk_buff *request_skb, int seq); 468 467 469 468 extern u32 audit_enabled; 470 469 #else /* CONFIG_AUDIT */
+8 -3
include/linux/blk-mq.h
··· 121 121 122 122 void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule); 123 123 124 - void blk_mq_insert_request(struct request_queue *, struct request *, 125 - bool, bool); 124 + void blk_mq_insert_request(struct request *, bool, bool, bool); 126 125 void blk_mq_run_queues(struct request_queue *q, bool async); 127 126 void blk_mq_free_request(struct request *rq); 128 127 bool blk_mq_can_queue(struct blk_mq_hw_ctx *); ··· 133 134 struct blk_mq_hw_ctx *blk_mq_alloc_single_hw_queue(struct blk_mq_reg *, unsigned int); 134 135 void blk_mq_free_single_hw_queue(struct blk_mq_hw_ctx *, unsigned int); 135 136 136 - void blk_mq_end_io(struct request *rq, int error); 137 + bool blk_mq_end_io_partial(struct request *rq, int error, 138 + unsigned int nr_bytes); 139 + static inline void blk_mq_end_io(struct request *rq, int error) 140 + { 141 + bool done = !blk_mq_end_io_partial(rq, error, blk_rq_bytes(rq)); 142 + BUG_ON(!done); 143 + } 137 144 138 145 void blk_mq_complete_request(struct request *rq); 139 146
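The blk-mq change above rebases full completion on a partial-completion primitive: `blk_mq_end_io_partial()` returns true while the request still has bytes outstanding, and the `blk_mq_end_io()` wrapper feeds it the whole byte count and insists it finished. A toy sketch of that wrapper pattern, using a hypothetical `struct req` in place of `struct request`:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for a request with outstanding bytes. */
struct req {
    unsigned int bytes;
};

/* Complete nr_bytes of the request; return true while work remains.
 * This mirrors the shape of blk_mq_end_io_partial() above, not its
 * internals. */
static bool end_io_partial(struct req *rq, unsigned int nr_bytes)
{
    if (nr_bytes > rq->bytes)
        nr_bytes = rq->bytes;
    rq->bytes -= nr_bytes;
    return rq->bytes != 0;
}

/* Full completion: hand the primitive everything and require that
 * nothing is left, as the inline blk_mq_end_io() wrapper does. */
static void end_io(struct req *rq)
{
    bool done = !end_io_partial(rq, rq->bytes);
    assert(done);
}
```

Keeping one primitive and deriving the common case as an inline wrapper avoids duplicating the completion path for drivers that complete requests piecewise.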
+4
include/linux/clk/ti.h
··· 245 245 void omap2_init_clk_clkdm(struct clk_hw *clk); 246 246 unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw, 247 247 unsigned long parent_rate); 248 + int omap3_clkoutx2_set_rate(struct clk_hw *hw, unsigned long rate, 249 + unsigned long parent_rate); 250 + long omap3_clkoutx2_round_rate(struct clk_hw *hw, unsigned long rate, 251 + unsigned long *prate); 248 252 int omap2_clkops_enable_clkdm(struct clk_hw *hw); 249 253 void omap2_clkops_disable_clkdm(struct clk_hw *hw); 250 254 int omap2_clk_disable_autoidle_all(void);
+15 -12
include/linux/file.h
··· 28 28 29 29 struct fd { 30 30 struct file *file; 31 - int need_put; 31 + unsigned int flags; 32 32 }; 33 + #define FDPUT_FPUT 1 34 + #define FDPUT_POS_UNLOCK 2 33 35 34 36 static inline void fdput(struct fd fd) 35 37 { 36 - if (fd.need_put) 38 + if (fd.flags & FDPUT_FPUT) 37 39 fput(fd.file); 38 40 } 39 41 40 42 extern struct file *fget(unsigned int fd); 41 - extern struct file *fget_light(unsigned int fd, int *fput_needed); 43 + extern struct file *fget_raw(unsigned int fd); 44 + extern unsigned long __fdget(unsigned int fd); 45 + extern unsigned long __fdget_raw(unsigned int fd); 46 + extern unsigned long __fdget_pos(unsigned int fd); 47 + 48 + static inline struct fd __to_fd(unsigned long v) 49 + { 50 + return (struct fd){(struct file *)(v & ~3),v & 3}; 51 + } 42 52 43 53 static inline struct fd fdget(unsigned int fd) 44 54 { 45 - int b; 46 - struct file *f = fget_light(fd, &b); 47 - return (struct fd){f,b}; 55 + return __to_fd(__fdget(fd)); 48 56 } 49 - 50 - extern struct file *fget_raw(unsigned int fd); 51 - extern struct file *fget_raw_light(unsigned int fd, int *fput_needed); 52 57 53 58 static inline struct fd fdget_raw(unsigned int fd) 54 59 { 55 - int b; 56 - struct file *f = fget_raw_light(fd, &b); 57 - return (struct fd){f,b}; 60 + return __to_fd(__fdget_raw(fd)); 58 61 } 59 62 60 63 extern int f_dupfd(unsigned int from, struct file *file, unsigned flags);
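The `__to_fd()` conversion above relies on `struct file` being at least 4-byte aligned (hence the `aligned(4)` attribute added in fs.h later in this series): the two low bits of the pointer are always zero, so `__fdget()` can return the pointer and the `FDPUT_FPUT`/`FDPUT_POS_UNLOCK` flags packed into one `unsigned long`. A self-contained sketch of the trick with illustrative names:

```c
#include <stdint.h>

#define FDPUT_FPUT       1
#define FDPUT_POS_UNLOCK 2

/* Unpacked form, analogous to struct fd. */
struct fd_like {
    void        *file;
    unsigned int flags;
};

/* Pack an aligned pointer and up-to-2-bit flags into one word. The
 * pointer must be at least 4-byte aligned so the flag bits are free. */
static unsigned long pack(void *file, unsigned int flags)
{
    return (unsigned long)(uintptr_t)file | flags;
}

/* Split the word back apart, mirroring __to_fd() above. */
static struct fd_like to_fd(unsigned long v)
{
    struct fd_like f = { (void *)(uintptr_t)(v & ~3UL),
                         (unsigned int)(v & 3) };
    return f;
}
```

Returning one scalar instead of a two-member struct keeps the hot `fdget()` path in registers on every architecture's calling convention.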
+1
include/linux/firewire.h
··· 200 200 unsigned irmc:1; 201 201 unsigned bc_implemented:2; 202 202 203 + work_func_t workfn; 203 204 struct delayed_work work; 204 205 struct fw_attribute_group attribute_group; 205 206 };
+6 -2
include/linux/fs.h
··· 123 123 /* File is opened with O_PATH; almost nothing can be done with it */ 124 124 #define FMODE_PATH ((__force fmode_t)0x4000) 125 125 126 + /* File needs atomic accesses to f_pos */ 127 + #define FMODE_ATOMIC_POS ((__force fmode_t)0x8000) 128 + 126 129 /* File was opened by fanotify and shouldn't generate fanotify events */ 127 130 #define FMODE_NONOTIFY ((__force fmode_t)0x1000000) 128 131 ··· 783 780 const struct file_operations *f_op; 784 781 785 782 /* 786 - * Protects f_ep_links, f_flags, f_pos vs i_size in lseek SEEK_CUR. 783 + * Protects f_ep_links, f_flags. 787 784 * Must not be taken from IRQ context. 788 785 */ 789 786 spinlock_t f_lock; 790 787 atomic_long_t f_count; 791 788 unsigned int f_flags; 792 789 fmode_t f_mode; 790 + struct mutex f_pos_lock; 793 791 loff_t f_pos; 794 792 struct fown_struct f_owner; 795 793 const struct cred *f_cred; ··· 812 808 #ifdef CONFIG_DEBUG_WRITECOUNT 813 809 unsigned long f_mnt_write_state; 814 810 #endif 815 - }; 811 + } __attribute__((aligned(4))); /* lest something weird decides that 2 is OK */ 816 812 817 813 struct file_handle { 818 814 __u32 handle_bytes;
+4
include/linux/gfp.h
··· 123 123 __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN | \ 124 124 __GFP_NO_KSWAPD) 125 125 126 + /* 127 + * GFP_THISNODE does not perform any reclaim, you most likely want to 128 + * use __GFP_THISNODE to allocate from a given node without fallback! 129 + */ 126 130 #ifdef CONFIG_NUMA 127 131 #define GFP_THISNODE (__GFP_THISNODE | __GFP_NOWARN | __GFP_NORETRY) 128 132 #else
+2 -2
include/linux/mmzone.h
··· 590 590 591 591 /* 592 592 * The NUMA zonelists are doubled because we need zonelists that restrict the 593 - * allocations to a single node for GFP_THISNODE. 593 + * allocations to a single node for __GFP_THISNODE. 594 594 * 595 595 * [0] : Zonelist with fallback 596 - * [1] : No fallback (GFP_THISNODE) 596 + * [1] : No fallback (__GFP_THISNODE) 597 597 */ 598 598 #define MAX_ZONELISTS 2 599 599
+5
include/linux/nfs_xdr.h
··· 467 467 }; 468 468 469 469 struct nfs_release_lockowner_args { 470 + struct nfs4_sequence_args seq_args; 470 471 struct nfs_lowner lock_owner; 472 + }; 473 + 474 + struct nfs_release_lockowner_res { 475 + struct nfs4_sequence_res seq_res; 471 476 }; 472 477 473 478 struct nfs4_delegreturnargs {
+1 -1
include/linux/slab.h
··· 410 410 * 411 411 * %GFP_NOWAIT - Allocation will not sleep. 412 412 * 413 - * %GFP_THISNODE - Allocate node-local memory only. 413 + * %__GFP_THISNODE - Allocate node-local memory only. 414 414 * 415 415 * %GFP_DMA - Allocation suitable for DMA. 416 416 * Should only be used for kmalloc() caches. Otherwise, use a
+6
include/linux/tracepoint.h
··· 60 60 unsigned int num_tracepoints; 61 61 struct tracepoint * const *tracepoints_ptrs; 62 62 }; 63 + bool trace_module_has_bad_taint(struct module *mod); 64 + #else 65 + static inline bool trace_module_has_bad_taint(struct module *mod) 66 + { 67 + return false; 68 + } 63 69 #endif /* CONFIG_MODULES */ 64 70 65 71 struct tracepoint_iter {
+5 -1
include/net/sock.h
··· 1488 1488 */ 1489 1489 #define sock_owned_by_user(sk) ((sk)->sk_lock.owned) 1490 1490 1491 + static inline void sock_release_ownership(struct sock *sk) 1492 + { 1493 + sk->sk_lock.owned = 0; 1494 + } 1495 + 1491 1496 /* 1492 1497 * Macro so as to not evaluate some arguments when 1493 1498 * lockdep is not enabled. ··· 2191 2186 { 2192 2187 #define FLAGS_TS_OR_DROPS ((1UL << SOCK_RXQ_OVFL) | \ 2193 2188 (1UL << SOCK_RCVTSTAMP) | \ 2194 - (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE) | \ 2195 2189 (1UL << SOCK_TIMESTAMPING_SOFTWARE) | \ 2196 2190 (1UL << SOCK_TIMESTAMPING_RAW_HARDWARE) | \ 2197 2191 (1UL << SOCK_TIMESTAMPING_SYS_HARDWARE))
+1
include/target/iscsi/iscsi_transport.h
··· 12 12 int (*iscsit_setup_np)(struct iscsi_np *, struct __kernel_sockaddr_storage *); 13 13 int (*iscsit_accept_np)(struct iscsi_np *, struct iscsi_conn *); 14 14 void (*iscsit_free_np)(struct iscsi_np *); 15 + void (*iscsit_wait_conn)(struct iscsi_conn *); 15 16 void (*iscsit_free_conn)(struct iscsi_conn *); 16 17 int (*iscsit_get_login_rx)(struct iscsi_conn *, struct iscsi_login *); 17 18 int (*iscsit_put_login_tx)(struct iscsi_conn *, struct iscsi_login *, u32);
+2 -2
include/trace/events/sunrpc.h
··· 83 83 ), 84 84 85 85 TP_fast_assign( 86 - __entry->client_id = clnt->cl_clid; 86 + __entry->client_id = clnt ? clnt->cl_clid : -1; 87 87 __entry->task_id = task->tk_pid; 88 88 __entry->action = action; 89 89 __entry->runstate = task->tk_runstate; ··· 91 91 __entry->flags = task->tk_flags; 92 92 ), 93 93 94 - TP_printk("task:%u@%u flags=%4.4x state=%4.4lx status=%d action=%pf", 94 + TP_printk("task:%u@%d flags=%4.4x state=%4.4lx status=%d action=%pf", 95 95 __entry->task_id, __entry->client_id, 96 96 __entry->flags, 97 97 __entry->runstate,
+1 -1
init/main.c
··· 561 561 init_timers(); 562 562 hrtimers_init(); 563 563 softirq_init(); 564 - acpi_early_init(); 565 564 timekeeping_init(); 566 565 time_init(); 567 566 sched_clock_postinit(); ··· 612 613 calibrate_delay(); 613 614 pidmap_init(); 614 615 anon_vma_init(); 616 + acpi_early_init(); 615 617 #ifdef CONFIG_X86 616 618 if (efi_enabled(EFI_RUNTIME_SERVICES)) 617 619 efi_enter_virtual_mode();
+16 -15
kernel/audit.c
··· 182 182 183 183 struct audit_reply { 184 184 __u32 portid; 185 - pid_t pid; 185 + struct net *net; 186 186 struct sk_buff *skb; 187 187 }; 188 188 ··· 500 500 { 501 501 struct audit_netlink_list *dest = _dest; 502 502 struct sk_buff *skb; 503 - struct net *net = get_net_ns_by_pid(dest->pid); 503 + struct net *net = dest->net; 504 504 struct audit_net *aunet = net_generic(net, audit_net_id); 505 505 506 506 /* wait for parent to finish and send an ACK */ ··· 510 510 while ((skb = __skb_dequeue(&dest->q)) != NULL) 511 511 netlink_unicast(aunet->nlsk, skb, dest->portid, 0); 512 512 513 + put_net(net); 513 514 kfree(dest); 514 515 515 516 return 0; ··· 544 543 static int audit_send_reply_thread(void *arg) 545 544 { 546 545 struct audit_reply *reply = (struct audit_reply *)arg; 547 - struct net *net = get_net_ns_by_pid(reply->pid); 546 + struct net *net = reply->net; 548 547 struct audit_net *aunet = net_generic(net, audit_net_id); 549 548 550 549 mutex_lock(&audit_cmd_mutex); ··· 553 552 /* Ignore failure. It'll only happen if the sender goes away, 554 553 because our timeout is set to infinite. */ 555 554 netlink_unicast(aunet->nlsk , reply->skb, reply->portid, 0); 555 + put_net(net); 556 556 kfree(reply); 557 557 return 0; 558 558 } 559 559 /** 560 560 * audit_send_reply - send an audit reply message via netlink 561 - * @portid: netlink port to which to send reply 561 + * @request_skb: skb of request we are replying to (used to target the reply) 562 562 * @seq: sequence number 563 563 * @type: audit message type 564 564 * @done: done (last) flag ··· 570 568 * Allocates an skb, builds the netlink message, and sends it to the port id. 571 569 * No failure notifications. 
572 570 */ 573 - static void audit_send_reply(__u32 portid, int seq, int type, int done, 571 + static void audit_send_reply(struct sk_buff *request_skb, int seq, int type, int done, 574 572 int multi, const void *payload, int size) 575 573 { 574 + u32 portid = NETLINK_CB(request_skb).portid; 575 + struct net *net = sock_net(NETLINK_CB(request_skb).sk); 576 576 struct sk_buff *skb; 577 577 struct task_struct *tsk; 578 578 struct audit_reply *reply = kmalloc(sizeof(struct audit_reply), ··· 587 583 if (!skb) 588 584 goto out; 589 585 586 + reply->net = get_net(net); 590 587 reply->portid = portid; 591 - reply->pid = task_pid_vnr(current); 592 588 reply->skb = skb; 593 589 594 590 tsk = kthread_run(audit_send_reply_thread, reply, "audit_send_reply"); ··· 677 673 678 674 seq = nlmsg_hdr(skb)->nlmsg_seq; 679 675 680 - audit_send_reply(NETLINK_CB(skb).portid, seq, AUDIT_GET, 0, 0, 681 - &af, sizeof(af)); 676 + audit_send_reply(skb, seq, AUDIT_GET, 0, 0, &af, sizeof(af)); 682 677 683 678 return 0; 684 679 } ··· 797 794 s.backlog = skb_queue_len(&audit_skb_queue); 798 795 s.version = AUDIT_VERSION_LATEST; 799 796 s.backlog_wait_time = audit_backlog_wait_time; 800 - audit_send_reply(NETLINK_CB(skb).portid, seq, AUDIT_GET, 0, 0, 801 - &s, sizeof(s)); 797 + audit_send_reply(skb, seq, AUDIT_GET, 0, 0, &s, sizeof(s)); 802 798 break; 803 799 } 804 800 case AUDIT_SET: { ··· 907 905 seq, data, nlmsg_len(nlh)); 908 906 break; 909 907 case AUDIT_LIST_RULES: 910 - err = audit_list_rules_send(NETLINK_CB(skb).portid, seq); 908 + err = audit_list_rules_send(skb, seq); 911 909 break; 912 910 case AUDIT_TRIM: 913 911 audit_trim_trees(); ··· 972 970 memcpy(sig_data->ctx, ctx, len); 973 971 security_release_secctx(ctx, len); 974 972 } 975 - audit_send_reply(NETLINK_CB(skb).portid, seq, AUDIT_SIGNAL_INFO, 976 - 0, 0, sig_data, sizeof(*sig_data) + len); 973 + audit_send_reply(skb, seq, AUDIT_SIGNAL_INFO, 0, 0, 974 + sig_data, sizeof(*sig_data) + len); 977 975 kfree(sig_data); 978 976 break; 
979 977 case AUDIT_TTY_GET: { ··· 985 983 s.log_passwd = tsk->signal->audit_tty_log_passwd; 986 984 spin_unlock(&tsk->sighand->siglock); 987 985 988 - audit_send_reply(NETLINK_CB(skb).portid, seq, 989 - AUDIT_TTY_GET, 0, 0, &s, sizeof(s)); 986 + audit_send_reply(skb, seq, AUDIT_TTY_GET, 0, 0, &s, sizeof(s)); 990 987 break; 991 988 } 992 989 case AUDIT_TTY_SET: {
+1 -1
kernel/audit.h
··· 247 247 248 248 struct audit_netlink_list { 249 249 __u32 portid; 250 - pid_t pid; 250 + struct net *net; 251 251 struct sk_buff_head q; 252 252 }; 253 253
+7 -3
kernel/auditfilter.c
··· 29 29 #include <linux/sched.h> 30 30 #include <linux/slab.h> 31 31 #include <linux/security.h> 32 + #include <net/net_namespace.h> 33 + #include <net/sock.h> 32 34 #include "audit.h" 33 35 34 36 /* ··· 1067 1065 1068 1066 /** 1069 1067 * audit_list_rules_send - list the audit rules 1070 - * @portid: target portid for netlink audit messages 1068 + * @request_skb: skb of request we are replying to (used to target the reply) 1071 1069 * @seq: netlink audit message sequence (serial) number 1072 1070 */ 1073 - int audit_list_rules_send(__u32 portid, int seq) 1071 + int audit_list_rules_send(struct sk_buff *request_skb, int seq) 1074 1072 { 1073 + u32 portid = NETLINK_CB(request_skb).portid; 1074 + struct net *net = sock_net(NETLINK_CB(request_skb).sk); 1075 1075 struct task_struct *tsk; 1076 1076 struct audit_netlink_list *dest; 1077 1077 int err = 0; ··· 1087 1083 dest = kmalloc(sizeof(struct audit_netlink_list), GFP_KERNEL); 1088 1084 if (!dest) 1089 1085 return -ENOMEM; 1086 + dest->net = get_net(net); 1090 1087 dest->portid = portid; 1091 - dest->pid = task_pid_vnr(current); 1092 1088 skb_queue_head_init(&dest->q); 1093 1089 1094 1090 mutex_lock(&audit_filter_mutex);
+3 -7
kernel/cpuset.c
··· 974 974 * Temporarilly set tasks mems_allowed to target nodes of migration, 975 975 * so that the migration code can allocate pages on these nodes. 976 976 * 977 - * Call holding cpuset_mutex, so current's cpuset won't change 978 - * during this call, as manage_mutex holds off any cpuset_attach() 979 - * calls. Therefore we don't need to take task_lock around the 980 - * call to guarantee_online_mems(), as we know no one is changing 981 - * our task's cpuset. 982 - * 983 977 * While the mm_struct we are migrating is typically from some 984 978 * other task, the task_struct mems_allowed that we are hacking 985 979 * is for our current task, which must allocate new pages for that ··· 990 996 991 997 do_migrate_pages(mm, from, to, MPOL_MF_MOVE_ALL); 992 998 999 + rcu_read_lock(); 993 1000 mems_cs = effective_nodemask_cpuset(task_cs(tsk)); 994 1001 guarantee_online_mems(mems_cs, &tsk->mems_allowed); 1002 + rcu_read_unlock(); 995 1003 } 996 1004 997 1005 /* ··· 2482 2486 2483 2487 task_lock(current); 2484 2488 cs = nearest_hardwall_ancestor(task_cs(current)); 2489 + allowed = node_isset(node, cs->mems_allowed); 2485 2490 task_unlock(current); 2486 2491 2487 - allowed = node_isset(node, cs->mems_allowed); 2488 2492 mutex_unlock(&callback_mutex); 2489 2493 return allowed; 2490 2494 }
+1
kernel/irq/irqdomain.c
··· 10 10 #include <linux/mutex.h> 11 11 #include <linux/of.h> 12 12 #include <linux/of_address.h> 13 + #include <linux/of_irq.h> 13 14 #include <linux/topology.h> 14 15 #include <linux/seq_file.h> 15 16 #include <linux/slab.h>
+1 -2
kernel/irq/manage.c
··· 802 802 803 803 static void wake_threads_waitq(struct irq_desc *desc) 804 804 { 805 - if (atomic_dec_and_test(&desc->threads_active) && 806 - waitqueue_active(&desc->wait_for_threads)) 805 + if (atomic_dec_and_test(&desc->threads_active)) 807 806 wake_up(&desc->wait_for_threads); 808 807 } 809 808
+2 -2
kernel/profile.c
··· 549 549 struct page *page; 550 550 551 551 page = alloc_pages_exact_node(node, 552 - GFP_KERNEL | __GFP_ZERO | GFP_THISNODE, 552 + GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE, 553 553 0); 554 554 if (!page) 555 555 goto out_cleanup; 556 556 per_cpu(cpu_profile_hits, cpu)[1] 557 557 = (struct profile_hit *)page_address(page); 558 558 page = alloc_pages_exact_node(node, 559 - GFP_KERNEL | __GFP_ZERO | GFP_THISNODE, 559 + GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE, 560 560 0); 561 561 if (!page) 562 562 goto out_cleanup;
+10
kernel/trace/trace_events.c
··· 1777 1777 { 1778 1778 struct ftrace_event_call **call, **start, **end; 1779 1779 1780 + if (!mod->num_trace_events) 1781 + return; 1782 + 1783 + /* Don't add infrastructure for mods without tracepoints */ 1784 + if (trace_module_has_bad_taint(mod)) { 1785 + pr_err("%s: module has bad taint, not creating trace events\n", 1786 + mod->name); 1787 + return; 1788 + } 1789 + 1780 1790 start = mod->trace_events; 1781 1791 end = mod->trace_events + mod->num_trace_events; 1782 1792
+6 -1
kernel/tracepoint.c
··· 631 631 EXPORT_SYMBOL_GPL(tracepoint_iter_reset); 632 632 633 633 #ifdef CONFIG_MODULES 634 + bool trace_module_has_bad_taint(struct module *mod) 635 + { 636 + return mod->taints & ~((1 << TAINT_OOT_MODULE) | (1 << TAINT_CRAP)); 637 + } 638 + 634 639 static int tracepoint_module_coming(struct module *mod) 635 640 { 636 641 struct tp_module *tp_mod, *iter; ··· 646 641 * module headers (for forced load), to make sure we don't cause a crash. 647 642 * Staging and out-of-tree GPL modules are fine. 648 643 */ 649 - if (mod->taints & ~((1 << TAINT_OOT_MODULE) | (1 << TAINT_CRAP))) 644 + if (trace_module_has_bad_taint(mod)) 650 645 return 0; 651 646 mutex_lock(&tracepoints_mutex); 652 647 tp_mod = kmalloc(sizeof(struct tp_module), GFP_KERNEL);
+2 -2
mm/Kconfig
··· 575 575 then you should select this. This causes zsmalloc to use page table 576 576 mapping rather than copying for object mapping. 577 577 578 - You can check speed with zsmalloc benchmark[1]. 579 - [1] https://github.com/spartacus06/zsmalloc 578 + You can check speed with zsmalloc benchmark: 579 + https://github.com/spartacus06/zsmapbench
+13 -7
mm/compaction.c
··· 251 251 { 252 252 int nr_scanned = 0, total_isolated = 0; 253 253 struct page *cursor, *valid_page = NULL; 254 - unsigned long nr_strict_required = end_pfn - blockpfn; 255 254 unsigned long flags; 256 255 bool locked = false; 257 256 ··· 263 264 264 265 nr_scanned++; 265 266 if (!pfn_valid_within(blockpfn)) 266 - continue; 267 + goto isolate_fail; 268 + 267 269 if (!valid_page) 268 270 valid_page = page; 269 271 if (!PageBuddy(page)) 270 - continue; 272 + goto isolate_fail; 271 273 272 274 /* 273 275 * The zone lock must be held to isolate freepages. ··· 289 289 290 290 /* Recheck this is a buddy page under lock */ 291 291 if (!PageBuddy(page)) 292 - continue; 292 + goto isolate_fail; 293 293 294 294 /* Found a free page, break it into order-0 pages */ 295 295 isolated = split_free_page(page); 296 - if (!isolated && strict) 297 - break; 298 296 total_isolated += isolated; 299 297 for (i = 0; i < isolated; i++) { 300 298 list_add(&page->lru, freelist); ··· 303 305 if (isolated) { 304 306 blockpfn += isolated - 1; 305 307 cursor += isolated - 1; 308 + continue; 306 309 } 310 + 311 + isolate_fail: 312 + if (strict) 313 + break; 314 + else 315 + continue; 316 + 307 317 } 308 318 309 319 trace_mm_compaction_isolate_freepages(nr_scanned, total_isolated); ··· 321 315 * pages requested were isolated. If there were any failures, 0 is 322 316 * returned and CMA will fail. 323 317 */ 324 - if (strict && nr_strict_required > total_isolated) 318 + if (strict && blockpfn < end_pfn) 325 319 total_isolated = 0; 326 320 327 321 if (locked)
+6 -5
mm/migrate.c
··· 1158 1158 pm->node); 1159 1159 else 1160 1160 return alloc_pages_exact_node(pm->node, 1161 - GFP_HIGHUSER_MOVABLE | GFP_THISNODE, 0); 1161 + GFP_HIGHUSER_MOVABLE | __GFP_THISNODE, 0); 1162 1162 } 1163 1163 1164 1164 /* ··· 1544 1544 struct page *newpage; 1545 1545 1546 1546 newpage = alloc_pages_exact_node(nid, 1547 - (GFP_HIGHUSER_MOVABLE | GFP_THISNODE | 1548 - __GFP_NOMEMALLOC | __GFP_NORETRY | 1549 - __GFP_NOWARN) & 1547 + (GFP_HIGHUSER_MOVABLE | 1548 + __GFP_THISNODE | __GFP_NOMEMALLOC | 1549 + __GFP_NORETRY | __GFP_NOWARN) & 1550 1550 ~GFP_IOFS, 0); 1551 1551 1552 1552 return newpage; ··· 1747 1747 goto out_dropref; 1748 1748 1749 1749 new_page = alloc_pages_node(node, 1750 - (GFP_TRANSHUGE | GFP_THISNODE) & ~__GFP_WAIT, HPAGE_PMD_ORDER); 1750 + (GFP_TRANSHUGE | __GFP_THISNODE) & ~__GFP_WAIT, 1751 + HPAGE_PMD_ORDER); 1751 1752 if (!new_page) 1752 1753 goto out_fail; 1753 1754
+3
net/8021q/vlan_dev.c
··· 538 538 struct vlan_dev_priv *vlan = vlan_dev_priv(dev); 539 539 struct net_device *real_dev = vlan->real_dev; 540 540 541 + if (saddr == NULL) 542 + saddr = dev->dev_addr; 543 + 541 544 return dev_hard_header(skb, real_dev, type, daddr, saddr, len); 542 545 } 543 546
+30 -3
net/bridge/br_multicast.c
··· 1127 1127 struct net_bridge_port *port, 1128 1128 struct bridge_mcast_querier *querier, 1129 1129 int saddr, 1130 + bool is_general_query, 1130 1131 unsigned long max_delay) 1131 1132 { 1132 - if (saddr) 1133 + if (saddr && is_general_query) 1133 1134 br_multicast_update_querier_timer(br, querier, max_delay); 1134 1135 else if (timer_pending(&querier->timer)) 1135 1136 return; ··· 1182 1181 IGMPV3_MRC(ih3->code) * (HZ / IGMP_TIMER_SCALE) : 1; 1183 1182 } 1184 1183 1184 + /* RFC2236+RFC3376 (IGMPv2+IGMPv3) require the multicast link layer 1185 + * all-systems destination addresses (224.0.0.1) for general queries 1186 + */ 1187 + if (!group && iph->daddr != htonl(INADDR_ALLHOSTS_GROUP)) { 1188 + err = -EINVAL; 1189 + goto out; 1190 + } 1191 + 1185 1192 br_multicast_query_received(br, port, &br->ip4_querier, !!iph->saddr, 1186 - max_delay); 1193 + !group, max_delay); 1187 1194 1188 1195 if (!group) 1189 1196 goto out; ··· 1237 1228 unsigned long max_delay; 1238 1229 unsigned long now = jiffies; 1239 1230 const struct in6_addr *group = NULL; 1231 + bool is_general_query; 1240 1232 int err = 0; 1241 1233 1242 1234 spin_lock(&br->multicast_lock); 1243 1235 if (!netif_running(br->dev) || 1244 1236 (port && port->state == BR_STATE_DISABLED)) 1245 1237 goto out; 1238 + 1239 + /* RFC2710+RFC3810 (MLDv1+MLDv2) require link-local source addresses */ 1240 + if (!(ipv6_addr_type(&ip6h->saddr) & IPV6_ADDR_LINKLOCAL)) { 1241 + err = -EINVAL; 1242 + goto out; 1243 + } 1246 1244 1247 1245 if (skb->len == sizeof(*mld)) { 1248 1246 if (!pskb_may_pull(skb, sizeof(*mld))) { ··· 1272 1256 max_delay = max(msecs_to_jiffies(mldv2_mrc(mld2q)), 1UL); 1273 1257 } 1274 1258 1259 + is_general_query = group && ipv6_addr_any(group); 1260 + 1261 + /* RFC2710+RFC3810 (MLDv1+MLDv2) require the multicast link layer 1262 + * all-nodes destination address (ff02::1) for general queries 1263 + */ 1264 + if (is_general_query && !ipv6_addr_is_ll_all_nodes(&ip6h->daddr)) { 1265 + err = -EINVAL; 1266 + 
goto out; 1267 + } 1268 + 1275 1269 br_multicast_query_received(br, port, &br->ip6_querier, 1276 - !ipv6_addr_any(&ip6h->saddr), max_delay); 1270 + !ipv6_addr_any(&ip6h->saddr), 1271 + is_general_query, max_delay); 1277 1272 1278 1273 if (!group) 1279 1274 goto out;
+54 -46
net/core/skbuff.c
··· 2838 2838 2839 2839 /** 2840 2840 * skb_segment - Perform protocol segmentation on skb. 2841 - * @skb: buffer to segment 2841 + * @head_skb: buffer to segment 2842 2842 * @features: features for the output path (see dev->features) 2843 2843 * 2844 2844 * This function performs segmentation on the given skb. It returns 2845 2845 * a pointer to the first in a list of new skbs for the segments. 2846 2846 * In case of error it returns ERR_PTR(err). 2847 2847 */ 2848 - struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features) 2848 + struct sk_buff *skb_segment(struct sk_buff *head_skb, 2849 + netdev_features_t features) 2849 2850 { 2850 2851 struct sk_buff *segs = NULL; 2851 2852 struct sk_buff *tail = NULL; 2852 - struct sk_buff *fskb = skb_shinfo(skb)->frag_list; 2853 - skb_frag_t *skb_frag = skb_shinfo(skb)->frags; 2854 - unsigned int mss = skb_shinfo(skb)->gso_size; 2855 - unsigned int doffset = skb->data - skb_mac_header(skb); 2853 + struct sk_buff *list_skb = skb_shinfo(head_skb)->frag_list; 2854 + skb_frag_t *frag = skb_shinfo(head_skb)->frags; 2855 + unsigned int mss = skb_shinfo(head_skb)->gso_size; 2856 + unsigned int doffset = head_skb->data - skb_mac_header(head_skb); 2857 + struct sk_buff *frag_skb = head_skb; 2856 2858 unsigned int offset = doffset; 2857 - unsigned int tnl_hlen = skb_tnl_header_len(skb); 2859 + unsigned int tnl_hlen = skb_tnl_header_len(head_skb); 2858 2860 unsigned int headroom; 2859 2861 unsigned int len; 2860 2862 __be16 proto; 2861 2863 bool csum; 2862 2864 int sg = !!(features & NETIF_F_SG); 2863 - int nfrags = skb_shinfo(skb)->nr_frags; 2865 + int nfrags = skb_shinfo(head_skb)->nr_frags; 2864 2866 int err = -ENOMEM; 2865 2867 int i = 0; 2866 2868 int pos; 2867 2869 2868 - proto = skb_network_protocol(skb); 2870 + proto = skb_network_protocol(head_skb); 2869 2871 if (unlikely(!proto)) 2870 2872 return ERR_PTR(-EINVAL); 2871 2873 2872 2874 csum = !!can_checksum_protocol(features, proto); 2873 - __skb_push(skb, 
doffset); 2874 - headroom = skb_headroom(skb); 2875 - pos = skb_headlen(skb); 2875 + __skb_push(head_skb, doffset); 2876 + headroom = skb_headroom(head_skb); 2877 + pos = skb_headlen(head_skb); 2876 2878 2877 2879 do { 2878 2880 struct sk_buff *nskb; 2879 - skb_frag_t *frag; 2881 + skb_frag_t *nskb_frag; 2880 2882 int hsize; 2881 2883 int size; 2882 2884 2883 - len = skb->len - offset; 2885 + len = head_skb->len - offset; 2884 2886 if (len > mss) 2885 2887 len = mss; 2886 2888 2887 - hsize = skb_headlen(skb) - offset; 2889 + hsize = skb_headlen(head_skb) - offset; 2888 2890 if (hsize < 0) 2889 2891 hsize = 0; 2890 2892 if (hsize > len || !sg) 2891 2893 hsize = len; 2892 2894 2893 - if (!hsize && i >= nfrags && skb_headlen(fskb) && 2894 - (skb_headlen(fskb) == len || sg)) { 2895 - BUG_ON(skb_headlen(fskb) > len); 2895 + if (!hsize && i >= nfrags && skb_headlen(list_skb) && 2896 + (skb_headlen(list_skb) == len || sg)) { 2897 + BUG_ON(skb_headlen(list_skb) > len); 2896 2898 2897 2899 i = 0; 2898 - nfrags = skb_shinfo(fskb)->nr_frags; 2899 - skb_frag = skb_shinfo(fskb)->frags; 2900 - pos += skb_headlen(fskb); 2900 + nfrags = skb_shinfo(list_skb)->nr_frags; 2901 + frag = skb_shinfo(list_skb)->frags; 2902 + frag_skb = list_skb; 2903 + pos += skb_headlen(list_skb); 2901 2904 2902 2905 while (pos < offset + len) { 2903 2906 BUG_ON(i >= nfrags); 2904 2907 2905 - size = skb_frag_size(skb_frag); 2908 + size = skb_frag_size(frag); 2906 2909 if (pos + size > offset + len) 2907 2910 break; 2908 2911 2909 2912 i++; 2910 2913 pos += size; 2911 - skb_frag++; 2914 + frag++; 2912 2915 } 2913 2916 2914 - nskb = skb_clone(fskb, GFP_ATOMIC); 2915 - fskb = fskb->next; 2917 + nskb = skb_clone(list_skb, GFP_ATOMIC); 2918 + list_skb = list_skb->next; 2916 2919 2917 2920 if (unlikely(!nskb)) 2918 2921 goto err; ··· 2936 2933 __skb_push(nskb, doffset); 2937 2934 } else { 2938 2935 nskb = __alloc_skb(hsize + doffset + headroom, 2939 - GFP_ATOMIC, skb_alloc_rx_flag(skb), 2936 + GFP_ATOMIC, 
skb_alloc_rx_flag(head_skb), 2940 2937 NUMA_NO_NODE); 2941 2938 2942 2939 if (unlikely(!nskb)) ··· 2952 2949 segs = nskb; 2953 2950 tail = nskb; 2954 2951 2955 - __copy_skb_header(nskb, skb); 2956 - nskb->mac_len = skb->mac_len; 2952 + __copy_skb_header(nskb, head_skb); 2953 + nskb->mac_len = head_skb->mac_len; 2957 2954 2958 2955 skb_headers_offset_update(nskb, skb_headroom(nskb) - headroom); 2959 2956 2960 - skb_copy_from_linear_data_offset(skb, -tnl_hlen, 2957 + skb_copy_from_linear_data_offset(head_skb, -tnl_hlen, 2961 2958 nskb->data - tnl_hlen, 2962 2959 doffset + tnl_hlen); 2963 2960 ··· 2966 2963 2967 2964 if (!sg) { 2968 2965 nskb->ip_summed = CHECKSUM_NONE; 2969 - nskb->csum = skb_copy_and_csum_bits(skb, offset, 2966 + nskb->csum = skb_copy_and_csum_bits(head_skb, offset, 2970 2967 skb_put(nskb, len), 2971 2968 len, 0); 2972 2969 continue; 2973 2970 } 2974 2971 2975 - frag = skb_shinfo(nskb)->frags; 2972 + nskb_frag = skb_shinfo(nskb)->frags; 2976 2973 2977 - skb_copy_from_linear_data_offset(skb, offset, 2974 + skb_copy_from_linear_data_offset(head_skb, offset, 2978 2975 skb_put(nskb, hsize), hsize); 2979 2976 2980 - skb_shinfo(nskb)->tx_flags = skb_shinfo(skb)->tx_flags & SKBTX_SHARED_FRAG; 2977 + skb_shinfo(nskb)->tx_flags = skb_shinfo(head_skb)->tx_flags & 2978 + SKBTX_SHARED_FRAG; 2981 2979 2982 2980 while (pos < offset + len) { 2983 2981 if (i >= nfrags) { 2984 - BUG_ON(skb_headlen(fskb)); 2982 + BUG_ON(skb_headlen(list_skb)); 2985 2983 2986 2984 i = 0; 2987 - nfrags = skb_shinfo(fskb)->nr_frags; 2988 - skb_frag = skb_shinfo(fskb)->frags; 2985 + nfrags = skb_shinfo(list_skb)->nr_frags; 2986 + frag = skb_shinfo(list_skb)->frags; 2987 + frag_skb = list_skb; 2989 2988 2990 2989 BUG_ON(!nfrags); 2991 2990 2992 - fskb = fskb->next; 2991 + list_skb = list_skb->next; 2993 2992 } 2994 2993 2995 2994 if (unlikely(skb_shinfo(nskb)->nr_frags >= ··· 3002 2997 goto err; 3003 2998 } 3004 2999 3005 - *frag = *skb_frag; 3006 - __skb_frag_ref(frag); 3007 - size = 
skb_frag_size(frag); 3000 + if (unlikely(skb_orphan_frags(frag_skb, GFP_ATOMIC))) 3001 + goto err; 3002 + 3003 + *nskb_frag = *frag; 3004 + __skb_frag_ref(nskb_frag); 3005 + size = skb_frag_size(nskb_frag); 3008 3006 3009 3007 if (pos < offset) { 3010 - frag->page_offset += offset - pos; 3011 - skb_frag_size_sub(frag, offset - pos); 3008 + nskb_frag->page_offset += offset - pos; 3009 + skb_frag_size_sub(nskb_frag, offset - pos); 3012 3010 } 3013 3011 3014 3012 skb_shinfo(nskb)->nr_frags++; 3015 3013 3016 3014 if (pos + size <= offset + len) { 3017 3015 i++; 3018 - skb_frag++; 3016 + frag++; 3019 3017 pos += size; 3020 3018 } else { 3021 - skb_frag_size_sub(frag, pos + size - (offset + len)); 3019 + skb_frag_size_sub(nskb_frag, pos + size - (offset + len)); 3022 3020 goto skip_fraglist; 3023 3021 } 3024 3022 3025 - frag++; 3023 + nskb_frag++; 3026 3024 } 3027 3025 3028 3026 skip_fraglist: ··· 3039 3031 nskb->len - doffset, 0); 3040 3032 nskb->ip_summed = CHECKSUM_NONE; 3041 3033 } 3042 - } while ((offset += len) < skb->len); 3034 + } while ((offset += len) < head_skb->len); 3043 3035 3044 3036 return segs; 3045 3037
+4 -1
net/core/sock.c
··· 2357 2357 if (sk->sk_backlog.tail) 2358 2358 __release_sock(sk); 2359 2359 2360 + /* Warning : release_cb() might need to release sk ownership, 2361 + * ie call sock_release_ownership(sk) before us. 2362 + */ 2360 2363 if (sk->sk_prot->release_cb) 2361 2364 sk->sk_prot->release_cb(sk); 2362 2365 2363 - sk->sk_lock.owned = 0; 2366 + sock_release_ownership(sk); 2364 2367 if (waitqueue_active(&sk->sk_lock.wq)) 2365 2368 wake_up(&sk->sk_lock.wq); 2366 2369 spin_unlock_bh(&sk->sk_lock.slock);
+3 -2
net/ipv4/inet_fragment.c
··· 208 208 } 209 209 210 210 work = frag_mem_limit(nf) - nf->low_thresh; 211 - while (work > 0) { 211 + while (work > 0 || force) { 212 212 spin_lock(&nf->lru_lock); 213 213 214 214 if (list_empty(&nf->lru_list)) { ··· 278 278 279 279 atomic_inc(&qp->refcnt); 280 280 hlist_add_head(&qp->list, &hb->chain); 281 + inet_frag_lru_add(nf, qp); 281 282 spin_unlock(&hb->chain_lock); 282 283 read_unlock(&f->lock); 283 - inet_frag_lru_add(nf, qp); 284 + 284 285 return qp; 285 286 } 286 287
+11
net/ipv4/tcp_output.c
··· 780 780 if (flags & (1UL << TCP_TSQ_DEFERRED)) 781 781 tcp_tsq_handler(sk); 782 782 783 + /* Here begins the tricky part : 784 + * We are called from release_sock() with : 785 + * 1) BH disabled 786 + * 2) sk_lock.slock spinlock held 787 + * 3) socket owned by us (sk->sk_lock.owned == 1) 788 + * 789 + * But following code is meant to be called from BH handlers, 790 + * so we should keep BH disabled, but early release socket ownership 791 + */ 792 + sock_release_ownership(sk); 793 + 783 794 if (flags & (1UL << TCP_WRITE_TIMER_DEFERRED)) { 784 795 tcp_write_timer_handler(sk); 785 796 __sock_put(sk);
+4 -1
net/ipv6/addrconf.c
··· 1103 1103 * Lifetime is greater than REGEN_ADVANCE time units. In particular, 1104 1104 * an implementation must not create a temporary address with a zero 1105 1105 * Preferred Lifetime. 1106 + * Use age calculation as in addrconf_verify to avoid unnecessary 1107 + * temporary addresses being generated. 1106 1108 */ 1107 - if (tmp_prefered_lft <= regen_advance) { 1109 + age = (now - tmp_tstamp + ADDRCONF_TIMER_FUZZ_MINUS) / HZ; 1110 + if (tmp_prefered_lft <= regen_advance + age) { 1108 1111 in6_ifa_put(ifp); 1109 1112 in6_dev_put(idev); 1110 1113 ret = -1;
+2 -2
net/ipv6/exthdrs_offload.c
··· 25 25 int ret; 26 26 27 27 ret = inet6_add_offload(&rthdr_offload, IPPROTO_ROUTING); 28 - if (!ret) 28 + if (ret) 29 29 goto out; 30 30 31 31 ret = inet6_add_offload(&dstopt_offload, IPPROTO_DSTOPTS); 32 - if (!ret) 32 + if (ret) 33 33 goto out_rt; 34 34 35 35 out:
+1 -1
net/ipv6/route.c
··· 1513 1513 if (!table) 1514 1514 goto out; 1515 1515 1516 - rt = ip6_dst_alloc(net, NULL, DST_NOCOUNT, table); 1516 + rt = ip6_dst_alloc(net, NULL, (cfg->fc_flags & RTF_ADDRCONF) ? 0 : DST_NOCOUNT, table); 1517 1517 1518 1518 if (!rt) { 1519 1519 err = -ENOMEM;
+2 -2
net/l2tp/l2tp_core.c
··· 112 112 spinlock_t l2tp_session_hlist_lock; 113 113 }; 114 114 115 - static void l2tp_session_set_header_len(struct l2tp_session *session, int version); 116 115 static void l2tp_tunnel_free(struct l2tp_tunnel *tunnel); 117 116 118 117 static inline struct l2tp_tunnel *l2tp_tunnel(struct sock *sk) ··· 1840 1841 /* We come here whenever a session's send_seq, cookie_len or 1841 1842 * l2specific_len parameters are set. 1842 1843 */ 1843 - static void l2tp_session_set_header_len(struct l2tp_session *session, int version) 1844 + void l2tp_session_set_header_len(struct l2tp_session *session, int version) 1844 1845 { 1845 1846 if (version == L2TP_HDR_VER_2) { 1846 1847 session->hdr_len = 6; ··· 1853 1854 } 1854 1855 1855 1856 } 1857 + EXPORT_SYMBOL_GPL(l2tp_session_set_header_len); 1856 1858 1857 1859 struct l2tp_session *l2tp_session_create(int priv_size, struct l2tp_tunnel *tunnel, u32 session_id, u32 peer_session_id, struct l2tp_session_cfg *cfg) 1858 1860 {
+1
net/l2tp/l2tp_core.h
··· 263 263 int length, int (*payload_hook)(struct sk_buff *skb)); 264 264 int l2tp_session_queue_purge(struct l2tp_session *session); 265 265 int l2tp_udp_encap_recv(struct sock *sk, struct sk_buff *skb); 266 + void l2tp_session_set_header_len(struct l2tp_session *session, int version); 266 267 267 268 int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, 268 269 int hdr_len);
+3 -1
net/l2tp/l2tp_netlink.c
··· 578 578 if (info->attrs[L2TP_ATTR_RECV_SEQ]) 579 579 session->recv_seq = nla_get_u8(info->attrs[L2TP_ATTR_RECV_SEQ]); 580 580 581 - if (info->attrs[L2TP_ATTR_SEND_SEQ]) 581 + if (info->attrs[L2TP_ATTR_SEND_SEQ]) { 582 582 session->send_seq = nla_get_u8(info->attrs[L2TP_ATTR_SEND_SEQ]); 583 + l2tp_session_set_header_len(session, session->tunnel->version); 584 + } 583 585 584 586 if (info->attrs[L2TP_ATTR_LNS_MODE]) 585 587 session->lns_mode = nla_get_u8(info->attrs[L2TP_ATTR_LNS_MODE]);
+8 -5
net/l2tp/l2tp_ppp.c
··· 254 254 po = pppox_sk(sk); 255 255 ppp_input(&po->chan, skb); 256 256 } else { 257 - l2tp_info(session, PPPOL2TP_MSG_DATA, "%s: socket not bound\n", 258 - session->name); 257 + l2tp_dbg(session, PPPOL2TP_MSG_DATA, 258 + "%s: recv %d byte data frame, passing to L2TP socket\n", 259 + session->name, data_len); 259 260 260 - /* Not bound. Nothing we can do, so discard. */ 261 - atomic_long_inc(&session->stats.rx_errors); 262 - kfree_skb(skb); 261 + if (sock_queue_rcv_skb(sk, skb) < 0) { 262 + atomic_long_inc(&session->stats.rx_errors); 263 + kfree_skb(skb); 264 + } 263 265 } 264 266 265 267 return; ··· 1311 1309 po->chan.hdrlen = val ? PPPOL2TP_L2TP_HDR_SIZE_SEQ : 1312 1310 PPPOL2TP_L2TP_HDR_SIZE_NOSEQ; 1313 1311 } 1312 + l2tp_session_set_header_len(session, session->tunnel->version); 1314 1313 l2tp_info(session, PPPOL2TP_MSG_CONTROL, 1315 1314 "%s: set send_seq=%d\n", 1316 1315 session->name, session->send_seq);
+6
net/mac80211/chan.c
··· 100 100 } 101 101 max_bw = max(max_bw, width); 102 102 } 103 + 104 + /* use the configured bandwidth in case of monitor interface */ 105 + sdata = rcu_dereference(local->monitor_sdata); 106 + if (sdata && rcu_access_pointer(sdata->vif.chanctx_conf) == conf) 107 + max_bw = max(max_bw, conf->def.width); 108 + 103 109 rcu_read_unlock(); 104 110 105 111 return max_bw;
+1
net/mac80211/mesh_ps.c
··· 36 36 sdata->vif.addr); 37 37 nullfunc->frame_control = fc; 38 38 nullfunc->duration_id = 0; 39 + nullfunc->seq_ctrl = 0; 39 40 /* no address resolution for this frame -> set addr 1 immediately */ 40 41 memcpy(nullfunc->addr1, sta->sta.addr, ETH_ALEN); 41 42 memset(skb_put(skb, 2), 0, 2); /* append QoS control field */
+1
net/mac80211/sta_info.c
··· 1206 1206 memcpy(nullfunc->addr1, sta->sta.addr, ETH_ALEN); 1207 1207 memcpy(nullfunc->addr2, sdata->vif.addr, ETH_ALEN); 1208 1208 memcpy(nullfunc->addr3, sdata->vif.addr, ETH_ALEN); 1209 + nullfunc->seq_ctrl = 0; 1209 1210 1210 1211 skb->priority = tid; 1211 1212 skb_set_queue_mapping(skb, ieee802_1d_to_ac[tid]);
+4 -3
net/sched/sch_api.c
··· 273 273 274 274 void qdisc_list_add(struct Qdisc *q) 275 275 { 276 - struct Qdisc *root = qdisc_dev(q)->qdisc; 276 + if ((q->parent != TC_H_ROOT) && !(q->flags & TCQ_F_INGRESS)) { 277 + struct Qdisc *root = qdisc_dev(q)->qdisc; 277 278 278 - WARN_ON_ONCE(root == &noop_qdisc); 279 - if ((q->parent != TC_H_ROOT) && !(q->flags & TCQ_F_INGRESS)) 279 + WARN_ON_ONCE(root == &noop_qdisc); 280 280 list_add_tail(&q->list, &root->list); 281 + } 281 282 } 282 283 EXPORT_SYMBOL(qdisc_list_add); 283 284
+2 -2
net/sctp/sm_make_chunk.c
··· 1421 1421 BUG_ON(!list_empty(&chunk->list)); 1422 1422 list_del_init(&chunk->transmitted_list); 1423 1423 1424 - /* Free the chunk skb data and the SCTP_chunk stub itself. */ 1425 - dev_kfree_skb(chunk->skb); 1424 + consume_skb(chunk->skb); 1425 + consume_skb(chunk->auth_chunk); 1426 1426 1427 1427 SCTP_DBG_OBJCNT_DEC(chunk); 1428 1428 kmem_cache_free(sctp_chunk_cachep, chunk);
-5
net/sctp/sm_statefuns.c
··· 760 760 761 761 /* Make sure that we and the peer are AUTH capable */ 762 762 if (!net->sctp.auth_enable || !new_asoc->peer.auth_capable) { 763 - kfree_skb(chunk->auth_chunk); 764 763 sctp_association_free(new_asoc); 765 764 return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); 766 765 } ··· 774 775 auth.transport = chunk->transport; 775 776 776 777 ret = sctp_sf_authenticate(net, ep, new_asoc, type, &auth); 777 - 778 - /* We can now safely free the auth_chunk clone */ 779 - kfree_skb(chunk->auth_chunk); 780 - 781 778 if (ret != SCTP_IERROR_NO_ERROR) { 782 779 sctp_association_free(new_asoc); 783 780 return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+11 -6
net/socket.c
··· 450 450 451 451 static struct socket *sockfd_lookup_light(int fd, int *err, int *fput_needed) 452 452 { 453 - struct file *file; 453 + struct fd f = fdget(fd); 454 454 struct socket *sock; 455 455 456 456 *err = -EBADF; 457 - file = fget_light(fd, fput_needed); 458 - if (file) { 459 - sock = sock_from_file(file, err); 460 - if (sock) 457 + if (f.file) { 458 + sock = sock_from_file(f.file, err); 459 + if (likely(sock)) { 460 + *fput_needed = f.flags; 461 461 return sock; 462 - fput_light(file, *fput_needed); 462 + } 463 + fdput(f); 463 464 } 464 465 return NULL; 465 466 } ··· 1986 1985 { 1987 1986 if (copy_from_user(kmsg, umsg, sizeof(struct msghdr))) 1988 1987 return -EFAULT; 1988 + 1989 + if (kmsg->msg_namelen < 0) 1990 + return -EINVAL; 1991 + 1989 1992 if (kmsg->msg_namelen > sizeof(struct sockaddr_storage)) 1990 1993 kmsg->msg_namelen = sizeof(struct sockaddr_storage); 1991 1994 return 0;
+2 -7
net/tipc/config.c
··· 376 376 struct tipc_cfg_msg_hdr *req_hdr; 377 377 struct tipc_cfg_msg_hdr *rep_hdr; 378 378 struct sk_buff *rep_buf; 379 - int ret; 380 379 381 380 /* Validate configuration message header (ignore invalid message) */ 382 381 req_hdr = (struct tipc_cfg_msg_hdr *)buf; ··· 397 398 memcpy(rep_hdr, req_hdr, sizeof(*rep_hdr)); 398 399 rep_hdr->tcm_len = htonl(rep_buf->len); 399 400 rep_hdr->tcm_flags &= htons(~TCM_F_REQUEST); 400 - 401 - ret = tipc_conn_sendmsg(&cfgsrv, conid, addr, rep_buf->data, 402 - rep_buf->len); 403 - if (ret < 0) 404 - pr_err("Sending cfg reply message failed, no memory\n"); 405 - 401 + tipc_conn_sendmsg(&cfgsrv, conid, addr, rep_buf->data, 402 + rep_buf->len); 406 403 kfree_skb(rep_buf); 407 404 } 408 405 }
-1
net/tipc/handler.c
··· 58 58 59 59 spin_lock_bh(&qitem_lock); 60 60 if (!handler_enabled) { 61 - pr_err("Signal request ignored by handler\n"); 62 61 spin_unlock_bh(&qitem_lock); 63 62 return -ENOPROTOOPT; 64 63 }
+34 -3
net/tipc/name_table.c
··· 941 941 return 0; 942 942 } 943 943 944 + /** 945 + * tipc_purge_publications - remove all publications for a given type 946 + * 947 + * tipc_nametbl_lock must be held when calling this function 948 + */ 949 + static void tipc_purge_publications(struct name_seq *seq) 950 + { 951 + struct publication *publ, *safe; 952 + struct sub_seq *sseq; 953 + struct name_info *info; 954 + 955 + if (!seq->sseqs) { 956 + nameseq_delete_empty(seq); 957 + return; 958 + } 959 + sseq = seq->sseqs; 960 + info = sseq->info; 961 + list_for_each_entry_safe(publ, safe, &info->zone_list, zone_list) { 962 + tipc_nametbl_remove_publ(publ->type, publ->lower, publ->node, 963 + publ->ref, publ->key); 964 + } 965 + } 966 + 944 967 void tipc_nametbl_stop(void) 945 968 { 946 969 u32 i; 970 + struct name_seq *seq; 971 + struct hlist_head *seq_head; 972 + struct hlist_node *safe; 947 973 948 - /* Verify name table is empty, then release it */ 974 + /* Verify name table is empty and purge any lingering 975 + * publications, then release the name table 976 + */ 949 977 write_lock_bh(&tipc_nametbl_lock); 950 978 for (i = 0; i < TIPC_NAMETBL_SIZE; i++) { 951 979 if (hlist_empty(&table.types[i])) 952 980 continue; 953 - pr_err("nametbl_stop(): orphaned hash chain detected\n"); 954 - break; 981 + seq_head = &table.types[i]; 982 + hlist_for_each_entry_safe(seq, safe, seq_head, ns_list) { 983 + tipc_purge_publications(seq); 984 + } 985 + continue; 955 986 } 956 987 kfree(table.types); 957 988 table.types = NULL;
+7 -7
net/tipc/server.c
··· 87 87 static void tipc_conn_kref_release(struct kref *kref) 88 88 { 89 89 struct tipc_conn *con = container_of(kref, struct tipc_conn, kref); 90 - struct tipc_server *s = con->server; 91 90 92 91 if (con->sock) { 93 92 tipc_sock_release_local(con->sock); ··· 94 95 } 95 96 96 97 tipc_clean_outqueues(con); 97 - 98 - if (con->conid) 99 - s->tipc_conn_shutdown(con->conid, con->usr_data); 100 - 101 98 kfree(con); 102 99 } 103 100 ··· 176 181 struct tipc_server *s = con->server; 177 182 178 183 if (test_and_clear_bit(CF_CONNECTED, &con->flags)) { 184 + if (con->conid) 185 + s->tipc_conn_shutdown(con->conid, con->usr_data); 186 + 179 187 spin_lock_bh(&s->idr_lock); 180 188 idr_remove(&s->conn_idr, con->conid); 181 189 s->idr_in_use--; ··· 427 429 list_add_tail(&e->list, &con->outqueue); 428 430 spin_unlock_bh(&con->outqueue_lock); 429 431 430 - if (test_bit(CF_CONNECTED, &con->flags)) 432 + if (test_bit(CF_CONNECTED, &con->flags)) { 431 433 if (!queue_work(s->send_wq, &con->swork)) 432 434 conn_put(con); 433 - 435 + } else { 436 + conn_put(con); 437 + } 434 438 return 0; 435 439 } 436 440
+2 -2
net/tipc/socket.c
··· 992 992 993 993 for (;;) { 994 994 prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); 995 - if (skb_queue_empty(&sk->sk_receive_queue)) { 995 + if (timeo && skb_queue_empty(&sk->sk_receive_queue)) { 996 996 if (sock->state == SS_DISCONNECTING) { 997 997 err = -ENOTCONN; 998 998 break; ··· 1609 1609 for (;;) { 1610 1610 prepare_to_wait_exclusive(sk_sleep(sk), &wait, 1611 1611 TASK_INTERRUPTIBLE); 1612 - if (skb_queue_empty(&sk->sk_receive_queue)) { 1612 + if (timeo && skb_queue_empty(&sk->sk_receive_queue)) { 1613 1613 release_sock(sk); 1614 1614 timeo = schedule_timeout(timeo); 1615 1615 lock_sock(sk);
+2 -17
net/tipc/subscr.c
··· 96 96 { 97 97 struct tipc_subscriber *subscriber = sub->subscriber; 98 98 struct kvec msg_sect; 99 - int ret; 100 99 101 100 msg_sect.iov_base = (void *)&sub->evt; 102 101 msg_sect.iov_len = sizeof(struct tipc_event); 103 - 104 102 sub->evt.event = htohl(event, sub->swap); 105 103 sub->evt.found_lower = htohl(found_lower, sub->swap); 106 104 sub->evt.found_upper = htohl(found_upper, sub->swap); 107 105 sub->evt.port.ref = htohl(port_ref, sub->swap); 108 106 sub->evt.port.node = htohl(node, sub->swap); 109 - ret = tipc_conn_sendmsg(&topsrv, subscriber->conid, NULL, 110 - msg_sect.iov_base, msg_sect.iov_len); 111 - if (ret < 0) 112 - pr_err("Sending subscription event failed, no memory\n"); 107 + tipc_conn_sendmsg(&topsrv, subscriber->conid, NULL, msg_sect.iov_base, 108 + msg_sect.iov_len); 113 109 } 114 110 115 111 /** ··· 148 152 149 153 /* The spin lock per subscriber is used to protect its members */ 150 154 spin_lock_bh(&subscriber->lock); 151 - 152 - /* Validate if the connection related to the subscriber is 153 - * closed (in case subscriber is terminating) 154 - */ 155 - if (subscriber->conid == 0) { 156 - spin_unlock_bh(&subscriber->lock); 157 - return; 158 - } 159 155 160 156 /* Validate timeout (in case subscription is being cancelled) */ 161 157 if (sub->timeout == TIPC_WAIT_FOREVER) { ··· 202 214 struct tipc_subscription *sub_temp; 203 215 204 216 spin_lock_bh(&subscriber->lock); 205 - 206 - /* Invalidate subscriber reference */ 207 - subscriber->conid = 0; 208 217 209 218 /* Destroy any existing subscriptions for subscriber */ 210 219 list_for_each_entry_safe(sub, sub_temp, &subscriber->subscription_list,
+1 -2
net/unix/af_unix.c
··· 163 163 164 164 static inline unsigned int unix_hash_fold(__wsum n) 165 165 { 166 - unsigned int hash = (__force unsigned int)n; 166 + unsigned int hash = (__force unsigned int)csum_fold(n); 167 167 168 - hash ^= hash>>16; 169 168 hash ^= hash>>8; 170 169 return hash&(UNIX_HASH_SIZE-1); 171 170 }
-2
net/wireless/core.c
··· 788 788 default: 789 789 break; 790 790 } 791 - 792 - wdev->beacon_interval = 0; 793 791 } 794 792 795 793 static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
+2 -1
scripts/kallsyms.c
··· 330 330 printf("\tPTR\t_text + %#llx\n", 331 331 table[i].addr - _text); 332 332 else 333 - printf("\tPTR\t%#llx\n", table[i].addr); 333 + printf("\tPTR\t_text - %#llx\n", 334 + _text - table[i].addr); 334 335 } else { 335 336 printf("\tPTR\t%#llx\n", table[i].addr); 336 337 }
+13
scripts/mod/modpost.c
··· 1502 1502 #define R_ARM_JUMP24 29 1503 1503 #endif 1504 1504 1505 + #ifndef R_ARM_THM_CALL 1506 + #define R_ARM_THM_CALL 10 1507 + #endif 1508 + #ifndef R_ARM_THM_JUMP24 1509 + #define R_ARM_THM_JUMP24 30 1510 + #endif 1511 + #ifndef R_ARM_THM_JUMP19 1512 + #define R_ARM_THM_JUMP19 51 1513 + #endif 1514 + 1505 1515 static int addend_arm_rel(struct elf_info *elf, Elf_Shdr *sechdr, Elf_Rela *r) 1506 1516 { 1507 1517 unsigned int r_typ = ELF_R_TYPE(r->r_info); ··· 1525 1515 case R_ARM_PC24: 1526 1516 case R_ARM_CALL: 1527 1517 case R_ARM_JUMP24: 1518 + case R_ARM_THM_CALL: 1519 + case R_ARM_THM_JUMP24: 1520 + case R_ARM_THM_JUMP19: 1528 1521 /* From ARM ABI: ((S + A) | T) - P */ 1529 1522 r->r_addend = (int)(long)(elf->hdr + 1530 1523 sechdr->sh_offset +
+5 -1
security/keys/keyring.c
··· 1000 1000 1001 1001 kenter("{%d}", key->serial); 1002 1002 1003 - BUG_ON(key != ctx->match_data); 1003 + /* We might get a keyring with matching index-key that is nonetheless a 1004 + * different keyring. */ 1005 + if (key != ctx->match_data) 1006 + return 0; 1007 + 1004 1008 ctx->result = ERR_PTR(-EDEADLK); 1005 1009 return 1; 1006 1010 }
+4
sound/pci/hda/patch_analog.c
··· 1026 1026 spec->gen.keep_eapd_on = 1; 1027 1027 spec->gen.vmaster_mute.hook = ad_vmaster_eapd_hook; 1028 1028 spec->eapd_nid = 0x12; 1029 + /* Analog PC Beeper - allow firmware/ACPI beeps */ 1030 + spec->beep_amp = HDA_COMPOSE_AMP_VAL(0x20, 3, 3, HDA_INPUT); 1031 + spec->gen.beep_nid = 0; /* no digital beep */ 1029 1032 } 1030 1033 } 1031 1034 ··· 1095 1092 spec = codec->spec; 1096 1093 1097 1094 spec->gen.mixer_nid = 0x20; 1095 + spec->gen.mixer_merge_nid = 0x21; 1098 1096 spec->gen.beep_nid = 0x10; 1099 1097 set_beep_amp(spec, 0x10, 0, HDA_OUTPUT); 1100 1098
+21 -1
sound/pci/hda/patch_realtek.c
··· 3616 3616 } 3617 3617 } 3618 3618 3619 + static void alc_no_shutup(struct hda_codec *codec) 3620 + { 3621 + } 3622 + 3623 + static void alc_fixup_no_shutup(struct hda_codec *codec, 3624 + const struct hda_fixup *fix, int action) 3625 + { 3626 + if (action == HDA_FIXUP_ACT_PRE_PROBE) { 3627 + struct alc_spec *spec = codec->spec; 3628 + spec->shutup = alc_no_shutup; 3629 + } 3630 + } 3631 + 3619 3632 static void alc_fixup_headset_mode_alc668(struct hda_codec *codec, 3620 3633 const struct hda_fixup *fix, int action) 3621 3634 { ··· 3857 3844 ALC269_FIXUP_HP_GPIO_LED, 3858 3845 ALC269_FIXUP_INV_DMIC, 3859 3846 ALC269_FIXUP_LENOVO_DOCK, 3847 + ALC269_FIXUP_NO_SHUTUP, 3860 3848 ALC286_FIXUP_SONY_MIC_NO_PRESENCE, 3861 3849 ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT, 3862 3850 ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, ··· 4033 4019 [ALC269_FIXUP_INV_DMIC] = { 4034 4020 .type = HDA_FIXUP_FUNC, 4035 4021 .v.func = alc_fixup_inv_dmic_0x12, 4022 + }, 4023 + [ALC269_FIXUP_NO_SHUTUP] = { 4024 + .type = HDA_FIXUP_FUNC, 4025 + .v.func = alc_fixup_no_shutup, 4036 4026 }, 4037 4027 [ALC269_FIXUP_LENOVO_DOCK] = { 4038 4028 .type = HDA_FIXUP_PINS, ··· 4271 4253 }; 4272 4254 4273 4255 static const struct snd_pci_quirk alc269_fixup_tbl[] = { 4256 + SND_PCI_QUIRK(0x1025, 0x0283, "Acer TravelMate 8371", ALC269_FIXUP_INV_DMIC), 4274 4257 SND_PCI_QUIRK(0x1025, 0x029b, "Acer 1810TZ", ALC269_FIXUP_INV_DMIC), 4275 4258 SND_PCI_QUIRK(0x1025, 0x0349, "Acer AOD260", ALC269_FIXUP_INV_DMIC), 4276 4259 SND_PCI_QUIRK(0x1025, 0x047c, "Acer AC700", ALC269_FIXUP_ACER_AC700), ··· 4423 4404 SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 4424 4405 SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 4425 4406 SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 4407 + SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP), 4426 4408 SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 4427 4409 SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC), 4428 4410 SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), ··· 5183 5163 SND_PCI_QUIRK(0x1028, 0x0625, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 5184 5164 SND_PCI_QUIRK(0x1028, 0x0626, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 5185 5165 SND_PCI_QUIRK(0x1028, 0x0628, "Dell", ALC668_FIXUP_AUTO_MUTE), 5186 - SND_PCI_QUIRK(0x1028, 0x064e, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 5166 + SND_PCI_QUIRK(0x1028, 0x064e, "Dell", ALC668_FIXUP_AUTO_MUTE), 5187 5167 SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800), 5188 5168 SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_BASS_1A_CHMAP), 5189 5169 SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_CHMAP),
+3
sound/soc/codecs/88pm860x-codec.c
··· 1328 1328 pm860x->codec = codec; 1329 1329 1330 1330 codec->control_data = pm860x->regmap; 1331 + ret = snd_soc_codec_set_cache_io(codec, 0, 0, SND_SOC_REGMAP); 1332 + if (ret) 1333 + return ret; 1331 1334 1332 1335 for (i = 0; i < 4; i++) { 1333 1336 ret = request_threaded_irq(pm860x->irq[i], NULL,
+1 -1
sound/soc/codecs/si476x.c
··· 210 210 static int si476x_codec_probe(struct snd_soc_codec *codec) 211 211 { 212 212 codec->control_data = dev_get_regmap(codec->dev->parent, NULL); 213 - return 0; 213 + return snd_soc_codec_set_cache_io(codec, 0, 0, SND_SOC_REGMAP); 214 214 } 215 215 216 216 static struct snd_soc_dai_ops si476x_dai_ops = {
+3 -1
sound/soc/omap/n810.c
··· 305 305 int err; 306 306 struct device *dev; 307 307 308 - if (!(machine_is_nokia_n810() || machine_is_nokia_n810_wimax())) 308 + if (!of_have_populated_dt() || 309 + (!of_machine_is_compatible("nokia,n810") && 310 + !of_machine_is_compatible("nokia,n810-wimax"))) 309 311 return -ENODEV; 310 312 311 313 n810_snd_device = platform_device_alloc("soc-audio", -1);
+3
sound/soc/soc-pcm.c
··· 1989 1989 1990 1990 paths = dpcm_path_get(fe, SNDRV_PCM_STREAM_PLAYBACK, &list); 1991 1991 if (paths < 0) { 1992 + dpcm_path_put(&list); 1992 1993 dev_warn(fe->dev, "ASoC: %s no valid %s path\n", 1993 1994 fe->dai_link->name, "playback"); 1994 1995 mutex_unlock(&card->mutex); ··· 2019 2018 2020 2019 paths = dpcm_path_get(fe, SNDRV_PCM_STREAM_CAPTURE, &list); 2021 2020 if (paths < 0) { 2021 + dpcm_path_put(&list); 2022 2022 dev_warn(fe->dev, "ASoC: %s no valid %s path\n", 2023 2023 fe->dai_link->name, "capture"); 2024 2024 mutex_unlock(&card->mutex); ··· 2084 2082 fe->dpcm[stream].runtime = fe_substream->runtime; 2085 2083 2086 2084 if (dpcm_path_get(fe, stream, &list) <= 0) { 2085 + dpcm_path_put(&list); 2087 2086 dev_dbg(fe->dev, "ASoC: %s no valid %s route\n", 2088 2087 fe->dai_link->name, stream ? "capture" : "playback"); 2089 2088 }
+1
sound/usb/mixer.c
··· 883 883 } 884 884 break; 885 885 886 + case USB_ID(0x046d, 0x0807): /* Logitech Webcam C500 */ 886 887 case USB_ID(0x046d, 0x0808): 887 888 case USB_ID(0x046d, 0x0809): 888 889 case USB_ID(0x046d, 0x081b): /* HD Webcam c310 */
+1 -1
tools/net/Makefile
··· 12 12 13 13 all : bpf_jit_disasm bpf_dbg bpf_asm 14 14 15 - bpf_jit_disasm : CFLAGS = -Wall -O2 15 + bpf_jit_disasm : CFLAGS = -Wall -O2 -DPACKAGE='bpf_jit_disasm' 16 16 bpf_jit_disasm : LDLIBS = -lopcodes -lbfd -ldl 17 17 bpf_jit_disasm : bpf_jit_disasm.o 18 18
+1
tools/testing/selftests/ipc/msgque.c
··· 201 201 202 202 msgque.msq_id = msgget(msgque.key, IPC_CREAT | IPC_EXCL | 0666); 203 203 if (msgque.msq_id == -1) { 204 + err = -errno; 204 205 printf("Can't create queue\n"); 205 206 goto err_out; 206 207 }