Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
arch/mips/net/bpf_jit.c
drivers/net/can/flexcan.c

Both the flexcan and MIPS bpf_jit conflicts were cases of simple
overlapping changes.

Signed-off-by: David S. Miller <davem@davemloft.net>

+4650 -2444
+3 -3
Documentation/devicetree/bindings/dma/rcar-audmapp.txt
··· 16 16 * DMA client 17 17 18 18 Required properties: 19 - - dmas: a list of <[DMA multiplexer phandle] [SRS/DRS value]> pairs, 20 - where SRS/DRS values are fixed handles, specified in the SoC 21 - manual as the value that would be written into the PDMACHCR. 19 + - dmas: a list of <[DMA multiplexer phandle] [SRS << 8 | DRS]> pairs. 20 + where SRS/DRS are specified in the SoC manual. 21 + It will be written into PDMACHCR as high 16-bit parts. 22 22 - dma-names: a list of DMA channel names, one per "dmas" entry 23 23 24 24 Example:
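The updated rcar-audmapp binding above packs the SRS and DRS request-select values into one cell as SRS << 8 | DRS, to be placed in the high 16 bits of PDMACHCR. A minimal sketch of that encoding (helper names are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Pack SRS/DRS as described by the binding: second dmas cell = (SRS << 8) | DRS. */
static uint16_t audmapp_pack_chcr(uint8_t srs, uint8_t drs)
{
	return (uint16_t)((srs << 8) | drs);
}

/* The packed value occupies the high 16-bit part of the 32-bit PDMACHCR. */
static uint32_t audmapp_chcr_field(uint16_t packed)
{
	return (uint32_t)packed << 16;
}
```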
-4
Documentation/devicetree/bindings/input/atmel,maxtouch.txt
··· 11 11 12 12 Optional properties for main touchpad device: 13 13 14 - - linux,gpio-keymap: An array of up to 4 entries indicating the Linux 15 - keycode generated by each GPIO. Linux keycodes are defined in 16 - <dt-bindings/input/input.h>. 17 - 18 14 - linux,gpio-keymap: When enabled, the SPT_GPIOPWN_T19 object sends messages 19 15 on GPIO bit changes. An array of up to 8 entries can be provided 20 16 indicating the Linux keycode mapped to each bit of the status byte,
+1 -1
Documentation/devicetree/bindings/sound/rockchip-i2s.txt
··· 31 31 #address-cells = <1>; 32 32 #size-cells = <0>; 33 33 dmas = <&pdma1 0>, <&pdma1 1>; 34 - dma-names = "rx", "tx"; 34 + dma-names = "tx", "rx"; 35 35 clock-names = "i2s_hclk", "i2s_clk"; 36 36 clocks = <&cru HCLK_I2S0>, <&cru SCLK_I2S0>; 37 37 };
+6 -2
Documentation/devicetree/bindings/spi/spi-rockchip.txt
··· 16 16 - clocks: Must contain an entry for each entry in clock-names. 17 17 - clock-names: Shall be "spiclk" for the transfer-clock, and "apb_pclk" for 18 18 the peripheral clock. 19 + - #address-cells: should be 1. 20 + - #size-cells: should be 0. 21 + 22 + Optional Properties: 23 + 19 24 - dmas: DMA specifiers for tx and rx dma. See the DMA client binding, 20 25 Documentation/devicetree/bindings/dma/dma.txt 21 26 - dma-names: DMA request names should include "tx" and "rx" if present. 22 - - #address-cells: should be 1. 23 - - #size-cells: should be 0. 27 + 24 28 25 29 Example: 26 30
+1
Documentation/devicetree/bindings/usb/mxs-phy.txt
··· 5 5 * "fsl,imx23-usbphy" for imx23 and imx28 6 6 * "fsl,imx6q-usbphy" for imx6dq and imx6dl 7 7 * "fsl,imx6sl-usbphy" for imx6sl 8 + * "fsl,imx6sx-usbphy" for imx6sx 8 9 "fsl,imx23-usbphy" is still a fallback for other strings 9 10 - reg: Should contain registers location and length 10 11 - interrupts: Should contain phy interrupt
+2 -2
Documentation/devicetree/bindings/video/analog-tv-connector.txt
··· 2 2 =================== 3 3 4 4 Required properties: 5 - - compatible: "composite-connector" or "svideo-connector" 5 + - compatible: "composite-video-connector" or "svideo-connector" 6 6 7 7 Optional properties: 8 8 - label: a symbolic name for the connector ··· 14 14 ------- 15 15 16 16 tv: connector { 17 - compatible = "composite-connector"; 17 + compatible = "composite-video-connector"; 18 18 label = "tv"; 19 19 20 20 port {
+1
Documentation/kernel-parameters.txt
··· 3541 3541 bogus residue values); 3542 3542 s = SINGLE_LUN (the device has only one 3543 3543 Logical Unit); 3544 + u = IGNORE_UAS (don't bind to the uas driver); 3544 3545 w = NO_WP_DETECT (don't test whether the 3545 3546 medium is write-protected). 3546 3547 Example: quirks=0419:aaf5:rl,0421:0433:rc
+3 -3
Documentation/networking/filter.txt
··· 462 462 ------------ 463 463 464 464 The Linux kernel has a built-in BPF JIT compiler for x86_64, SPARC, PowerPC, 465 - ARM and s390 and can be enabled through CONFIG_BPF_JIT. The JIT compiler is 466 - transparently invoked for each attached filter from user space or for internal 467 - kernel users if it has been previously enabled by root: 465 + ARM, MIPS and s390 and can be enabled through CONFIG_BPF_JIT. The JIT compiler 466 + is transparently invoked for each attached filter from user space or for 467 + internal kernel users if it has been previously enabled by root: 468 468 469 469 echo 1 > /proc/sys/net/core/bpf_jit_enable 470 470
+19 -4
MAINTAINERS
··· 6425 6425 F: drivers/scsi/nsp32* 6426 6426 6427 6427 NTB DRIVER 6428 - M: Jon Mason <jon.mason@intel.com> 6428 + M: Jon Mason <jdmason@kudzu.us> 6429 + M: Dave Jiang <dave.jiang@intel.com> 6429 6430 S: Supported 6430 6431 W: https://github.com/jonmason/ntb/wiki 6431 6432 T: git git://github.com/jonmason/ntb.git ··· 6877 6876 6878 6877 PCI DRIVER FOR IMX6 6879 6878 M: Richard Zhu <r65037@freescale.com> 6880 - M: Shawn Guo <shawn.guo@freescale.com> 6879 + M: Lucas Stach <l.stach@pengutronix.de> 6881 6880 L: linux-pci@vger.kernel.org 6882 6881 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 6883 6882 S: Maintained ··· 7055 7054 F: drivers/pinctrl/sh-pfc/ 7056 7055 7057 7056 PIN CONTROLLER - SAMSUNG 7058 - M: Tomasz Figa <t.figa@samsung.com> 7057 + M: Tomasz Figa <tomasz.figa@gmail.com> 7059 7058 M: Thomas Abraham <thomas.abraham@linaro.org> 7060 7059 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 7061 7060 L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers) ··· 7901 7900 F: drivers/media/i2c/s5k5baf.c 7902 7901 7903 7902 SAMSUNG SOC CLOCK DRIVERS 7904 - M: Tomasz Figa <t.figa@samsung.com> 7903 + M: Sylwester Nawrocki <s.nawrocki@samsung.com> 7904 + M: Tomasz Figa <tomasz.figa@gmail.com> 7905 7905 S: Supported 7906 7906 L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers) 7907 7907 F: drivers/clk/samsung/ ··· 7914 7912 S: Supported 7915 7913 L: netdev@vger.kernel.org 7916 7914 F: drivers/net/ethernet/samsung/sxgbe/ 7915 + 7916 + SAMSUNG USB2 PHY DRIVER 7917 + M: Kamil Debski <k.debski@samsung.com> 7918 + L: linux-kernel@vger.kernel.org 7919 + S: Supported 7920 + F: Documentation/devicetree/bindings/phy/samsung-phy.txt 7921 + F: Documentation/phy/samsung-usb2.txt 7922 + F: drivers/phy/phy-exynos4210-usb2.c 7923 + F: drivers/phy/phy-exynos4x12-usb2.c 7924 + F: drivers/phy/phy-exynos5250-usb2.c 7925 + F: drivers/phy/phy-s5pv210-usb2.c 7926 + F: drivers/phy/phy-samsung-usb2.c 7927 + F: drivers/phy/phy-samsung-usb2.h 7917 7928 7918 7929 SERIAL DRIVERS 7919 7930 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 17 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc4 4 + EXTRAVERSION = -rc6 5 5 NAME = Shuffling Zombie Juror 6 6 7 7 # *DOCUMENTATION*
+1 -1
arch/arm/boot/dts/omap3-n900.dts
··· 93 93 }; 94 94 95 95 tv: connector { 96 - compatible = "composite-connector"; 96 + compatible = "composite-video-connector"; 97 97 label = "tv"; 98 98 99 99 port {
+62
arch/arm/include/asm/tls.h
··· 1 1 #ifndef __ASMARM_TLS_H 2 2 #define __ASMARM_TLS_H 3 3 4 + #include <linux/compiler.h> 5 + #include <asm/thread_info.h> 6 + 4 7 #ifdef __ASSEMBLY__ 5 8 #include <asm/asm-offsets.h> 6 9 .macro switch_tls_none, base, tp, tpuser, tmp1, tmp2 ··· 53 50 #endif 54 51 55 52 #ifndef __ASSEMBLY__ 53 + 54 + static inline void set_tls(unsigned long val) 55 + { 56 + struct thread_info *thread; 57 + 58 + thread = current_thread_info(); 59 + 60 + thread->tp_value[0] = val; 61 + 62 + /* 63 + * This code runs with preemption enabled and therefore must 64 + * be reentrant with respect to switch_tls. 65 + * 66 + * We need to ensure ordering between the shadow state and the 67 + * hardware state, so that we don't corrupt the hardware state 68 + * with a stale shadow state during context switch. 69 + * 70 + * If we're preempted here, switch_tls will load TPIDRURO from 71 + * thread_info upon resuming execution and the following mcr 72 + * is merely redundant. 73 + */ 74 + barrier(); 75 + 76 + if (!tls_emu) { 77 + if (has_tls_reg) { 78 + asm("mcr p15, 0, %0, c13, c0, 3" 79 + : : "r" (val)); 80 + } else { 81 + /* 82 + * User space must never try to access this 83 + * directly. Expect your app to break 84 + * eventually if you do so. The user helper 85 + * at 0xffff0fe0 must be used instead. (see 86 + * entry-armv.S for details) 87 + */ 88 + *((unsigned int *)0xffff0ff0) = val; 89 + } 90 + 91 + } 92 + } 93 + 56 94 static inline unsigned long get_tpuser(void) 57 95 { 58 96 unsigned long reg = 0; ··· 103 59 104 60 return reg; 105 61 } 62 + 63 + static inline void set_tpuser(unsigned long val) 64 + { 65 + /* Since TPIDRURW is fully context-switched (unlike TPIDRURO), 66 + * we need not update thread_info. 67 + */ 68 + if (has_tls_reg && !tls_emu) { 69 + asm("mcr p15, 0, %0, c13, c0, 2" 70 + : : "r" (val)); 71 + } 72 + } 73 + 74 + static inline void flush_tls(void) 75 + { 76 + set_tls(0); 77 + set_tpuser(0); 78 + } 79 + 106 80 #endif 107 81 #endif /* __ASMARM_TLS_H */
+39 -9
arch/arm/include/asm/uaccess.h
··· 107 107 extern int __get_user_1(void *); 108 108 extern int __get_user_2(void *); 109 109 extern int __get_user_4(void *); 110 - extern int __get_user_lo8(void *); 110 + extern int __get_user_32t_8(void *); 111 111 extern int __get_user_8(void *); 112 + extern int __get_user_64t_1(void *); 113 + extern int __get_user_64t_2(void *); 114 + extern int __get_user_64t_4(void *); 112 115 113 116 #define __GUP_CLOBBER_1 "lr", "cc" 114 117 #ifdef CONFIG_CPU_USE_DOMAINS ··· 120 117 #define __GUP_CLOBBER_2 "lr", "cc" 121 118 #endif 122 119 #define __GUP_CLOBBER_4 "lr", "cc" 123 - #define __GUP_CLOBBER_lo8 "lr", "cc" 120 + #define __GUP_CLOBBER_32t_8 "lr", "cc" 124 121 #define __GUP_CLOBBER_8 "lr", "cc" 125 122 126 123 #define __get_user_x(__r2,__p,__e,__l,__s) \ ··· 134 131 135 132 /* narrowing a double-word get into a single 32bit word register: */ 136 133 #ifdef __ARMEB__ 137 - #define __get_user_xb(__r2, __p, __e, __l, __s) \ 138 - __get_user_x(__r2, __p, __e, __l, lo8) 134 + #define __get_user_x_32t(__r2, __p, __e, __l, __s) \ 135 + __get_user_x(__r2, __p, __e, __l, 32t_8) 139 136 #else 140 - #define __get_user_xb __get_user_x 137 + #define __get_user_x_32t __get_user_x 141 138 #endif 139 + 140 + /* 141 + * storing result into proper least significant word of 64bit target var, 142 + * different only for big endian case where 64 bit __r2 lsw is r3: 143 + */ 144 + #ifdef __ARMEB__ 145 + #define __get_user_x_64t(__r2, __p, __e, __l, __s) \ 146 + __asm__ __volatile__ ( \ 147 + __asmeq("%0", "r0") __asmeq("%1", "r2") \ 148 + __asmeq("%3", "r1") \ 149 + "bl __get_user_64t_" #__s \ 150 + : "=&r" (__e), "=r" (__r2) \ 151 + : "0" (__p), "r" (__l) \ 152 + : __GUP_CLOBBER_##__s) 153 + #else 154 + #define __get_user_x_64t __get_user_x 155 + #endif 156 + 142 157 143 158 #define __get_user_check(x,p) \ 144 159 ({ \ ··· 167 146 register int __e asm("r0"); \ 168 147 switch (sizeof(*(__p))) { \ 169 148 case 1: \ 170 - __get_user_x(__r2, __p, __e, __l, 1); \ 149 + if (sizeof((x)) >= 8) \ 150 + __get_user_x_64t(__r2, __p, __e, __l, 1); \ 151 + else \ 152 + __get_user_x(__r2, __p, __e, __l, 1); \ 171 153 break; \ 172 154 case 2: \ 173 - __get_user_x(__r2, __p, __e, __l, 2); \ 155 + if (sizeof((x)) >= 8) \ 156 + __get_user_x_64t(__r2, __p, __e, __l, 2); \ 157 + else \ 158 + __get_user_x(__r2, __p, __e, __l, 2); \ 174 159 break; \ 175 160 case 4: \ 176 - __get_user_x(__r2, __p, __e, __l, 4); \ 161 + if (sizeof((x)) >= 8) \ 162 + __get_user_x_64t(__r2, __p, __e, __l, 4); \ 163 + else \ 164 + __get_user_x(__r2, __p, __e, __l, 4); \ 177 165 break; \ 178 166 case 8: \ 179 167 if (sizeof((x)) < 8) \ 180 168 __get_user_x_32t(__r2, __p, __e, __l, 4); \ 181 169 else \ 182 170 __get_user_x(__r2, __p, __e, __l, 8); \ 183 171 break; \
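The uaccess.h change above routes sub-word loads into 64-bit destinations through the new __get_user_64t_* helpers so the upper word of the target cannot keep stale register contents, and keeps __get_user_x_32t for the opposite narrowing case. A plain-C model of the intended semantics (the real code does this in assembly; names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Widening case: a 1/2/4-byte user value loaded into a 64-bit variable
 * must be zero-extended, which is what the 64t helpers guarantee by
 * clearing the high word of the destination register pair. */
static uint64_t widen_u32(uint32_t src)
{
	return (uint64_t)src;
}

/* Narrowing case: a 64-bit user value read into a 32-bit variable keeps
 * only the least significant word (on big-endian that word sits at
 * offset 4, hence the separate __get_user_32t_8 entry point). */
static uint32_t narrow_u64(uint64_t src)
{
	return (uint32_t)src;
}
```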
+7 -18
arch/arm/include/asm/xen/page-coherent.h
··· 26 26 __generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs); 27 27 } 28 28 29 - static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle, 29 + void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle, 30 30 size_t size, enum dma_data_direction dir, 31 - struct dma_attrs *attrs) 32 - { 33 - if (__generic_dma_ops(hwdev)->unmap_page) 34 - __generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs); 35 - } 31 + struct dma_attrs *attrs); 36 32 37 - static inline void xen_dma_sync_single_for_cpu(struct device *hwdev, 38 - dma_addr_t handle, size_t size, enum dma_data_direction dir) 39 - { 40 - if (__generic_dma_ops(hwdev)->sync_single_for_cpu) 41 - __generic_dma_ops(hwdev)->sync_single_for_cpu(hwdev, handle, size, dir); 42 - } 33 + void xen_dma_sync_single_for_cpu(struct device *hwdev, 34 + dma_addr_t handle, size_t size, enum dma_data_direction dir); 43 35 44 - static inline void xen_dma_sync_single_for_device(struct device *hwdev, 45 - dma_addr_t handle, size_t size, enum dma_data_direction dir) 46 - { 47 - if (__generic_dma_ops(hwdev)->sync_single_for_device) 48 - __generic_dma_ops(hwdev)->sync_single_for_device(hwdev, handle, size, dir); 49 - } 36 + void xen_dma_sync_single_for_device(struct device *hwdev, 37 + dma_addr_t handle, size_t size, enum dma_data_direction dir); 38 + 50 39 #endif /* _ASM_ARM_XEN_PAGE_COHERENT_H */
-9
arch/arm/include/asm/xen/page.h
··· 33 33 #define INVALID_P2M_ENTRY (~0UL) 34 34 35 35 unsigned long __pfn_to_mfn(unsigned long pfn); 36 - unsigned long __mfn_to_pfn(unsigned long mfn); 37 36 extern struct rb_root phys_to_mach; 38 37 39 38 static inline unsigned long pfn_to_mfn(unsigned long pfn) ··· 50 51 51 52 static inline unsigned long mfn_to_pfn(unsigned long mfn) 52 53 { 53 - unsigned long pfn; 54 - 55 - if (phys_to_mach.rb_node != NULL) { 56 - pfn = __mfn_to_pfn(mfn); 57 - if (pfn != INVALID_P2M_ENTRY) 58 - return pfn; 59 - } 60 - 61 54 return mfn; 62 55 } 63 56
+8
arch/arm/kernel/armksyms.c
··· 98 98 EXPORT_SYMBOL(__get_user_1); 99 99 EXPORT_SYMBOL(__get_user_2); 100 100 EXPORT_SYMBOL(__get_user_4); 101 + EXPORT_SYMBOL(__get_user_8); 102 + 103 + #ifdef __ARMEB__ 104 + EXPORT_SYMBOL(__get_user_64t_1); 105 + EXPORT_SYMBOL(__get_user_64t_2); 106 + EXPORT_SYMBOL(__get_user_64t_4); 107 + EXPORT_SYMBOL(__get_user_32t_8); 108 + #endif 101 109 102 110 EXPORT_SYMBOL(__put_user_1); 103 111 EXPORT_SYMBOL(__put_user_2);
+1 -1
arch/arm/kernel/irq.c
··· 175 175 c = irq_data_get_irq_chip(d); 176 176 if (!c->irq_set_affinity) 177 177 pr_debug("IRQ%u: unable to set affinity\n", d->irq); 178 - else if (c->irq_set_affinity(d, affinity, true) == IRQ_SET_MASK_OK && ret) 178 + else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret) 179 179 cpumask_copy(d->affinity, affinity); 180 180 181 181 return ret;
+4 -10
arch/arm/kernel/perf_event_cpu.c
··· 76 76 77 77 static void cpu_pmu_enable_percpu_irq(void *data) 78 78 { 79 - struct arm_pmu *cpu_pmu = data; 80 - struct platform_device *pmu_device = cpu_pmu->plat_device; 81 - int irq = platform_get_irq(pmu_device, 0); 79 + int irq = *(int *)data; 82 80 83 81 enable_percpu_irq(irq, IRQ_TYPE_NONE); 84 - cpumask_set_cpu(smp_processor_id(), &cpu_pmu->active_irqs); 85 82 } 86 83 87 84 static void cpu_pmu_disable_percpu_irq(void *data) 88 85 { 89 - struct arm_pmu *cpu_pmu = data; 90 - struct platform_device *pmu_device = cpu_pmu->plat_device; 91 - int irq = platform_get_irq(pmu_device, 0); 86 + int irq = *(int *)data; 92 87 93 - cpumask_clear_cpu(smp_processor_id(), &cpu_pmu->active_irqs); 94 88 disable_percpu_irq(irq); 95 89 } 96 90 ··· 97 103 98 104 irq = platform_get_irq(pmu_device, 0); 99 105 if (irq >= 0 && irq_is_percpu(irq)) { 100 - on_each_cpu(cpu_pmu_disable_percpu_irq, cpu_pmu, 1); 106 + on_each_cpu(cpu_pmu_disable_percpu_irq, &irq, 1); 101 107 free_percpu_irq(irq, &percpu_pmu); 102 108 } else { 103 109 for (i = 0; i < irqs; ++i) { ··· 132 138 irq); 133 139 return err; 134 140 } 135 - on_each_cpu(cpu_pmu_enable_percpu_irq, cpu_pmu, 1); 141 + on_each_cpu(cpu_pmu_enable_percpu_irq, &irq, 1); 136 142 } else { 137 143 for (i = 0; i < irqs; ++i) { 138 144 err = 0;
+2
arch/arm/kernel/process.c
··· 334 334 memset(&tsk->thread.debug, 0, sizeof(struct debug_info)); 335 335 memset(&thread->fpstate, 0, sizeof(union fp_state)); 336 336 337 + flush_tls(); 338 + 337 339 thread_notify(THREAD_NOTIFY_FLUSH, thread); 338 340 } 339 341
-15
arch/arm/kernel/swp_emulate.c
··· 142 142 while (1) { 143 143 unsigned long temp; 144 144 145 - /* 146 - * Barrier required between accessing protected resource and 147 - * releasing a lock for it. Legacy code might not have done 148 - * this, and we cannot determine that this is not the case 149 - * being emulated, so insert always. 150 - */ 151 - smp_mb(); 152 - 153 145 if (type == TYPE_SWPB) 154 146 __user_swpb_asm(*data, address, res, temp); 155 147 else ··· 154 162 } 155 163 156 164 if (res == 0) { 157 - /* 158 - * Barrier also required between acquiring a lock for a 159 - * protected resource and accessing the resource. Inserted for 160 - * same reason as above. 161 - */ 162 - smp_mb(); 163 - 164 165 if (type == TYPE_SWPB) 165 166 swpbcounter++; 166 167 else
+1 -1
arch/arm/kernel/thumbee.c
··· 45 45 46 46 switch (cmd) { 47 47 case THREAD_NOTIFY_FLUSH: 48 - thread->thumbee_state = 0; 48 + teehbr_write(0); 49 49 break; 50 50 case THREAD_NOTIFY_SWITCH: 51 51 current_thread_info()->thumbee_state = teehbr_read();
+1 -16
arch/arm/kernel/traps.c
··· 581 581 #define NR(x) ((__ARM_NR_##x) - __ARM_NR_BASE) 582 582 asmlinkage int arm_syscall(int no, struct pt_regs *regs) 583 583 { 584 - struct thread_info *thread = current_thread_info(); 585 584 siginfo_t info; 586 585 587 586 if ((no >> 16) != (__ARM_NR_BASE>> 16)) ··· 631 632 return regs->ARM_r0; 632 633 633 634 case NR(set_tls): 634 - thread->tp_value[0] = regs->ARM_r0; 635 - if (tls_emu) 636 - return 0; 637 - if (has_tls_reg) { 638 - asm ("mcr p15, 0, %0, c13, c0, 3" 639 - : : "r" (regs->ARM_r0)); 640 - } else { 641 - /* 642 - * User space must never try to access this directly. 643 - * Expect your app to break eventually if you do so. 644 - * The user helper at 0xffff0fe0 must be used instead. 645 - * (see entry-armv.S for details) 646 - */ 647 - *((unsigned int *)0xffff0ff0) = regs->ARM_r0; 648 - } 635 + set_tls(regs->ARM_r0); 649 636 return 0; 650 637 651 638 #ifdef CONFIG_NEEDS_SYSCALL_FOR_CMPXCHG
+36 -2
arch/arm/lib/getuser.S
··· 80 80 ENDPROC(__get_user_8) 81 81 82 82 #ifdef __ARMEB__ 83 - ENTRY(__get_user_lo8) 83 + ENTRY(__get_user_32t_8) 84 84 check_uaccess r0, 8, r1, r2, __get_user_bad 85 85 #ifdef CONFIG_CPU_USE_DOMAINS 86 86 add r0, r0, #4 ··· 90 90 #endif 91 91 mov r0, #0 92 92 ret lr 93 - ENDPROC(__get_user_lo8) 93 + ENDPROC(__get_user_32t_8) 94 + 95 + ENTRY(__get_user_64t_1) 96 + check_uaccess r0, 1, r1, r2, __get_user_bad8 97 + 8: TUSER(ldrb) r3, [r0] 98 + mov r0, #0 99 + ret lr 100 + ENDPROC(__get_user_64t_1) 101 + 102 + ENTRY(__get_user_64t_2) 103 + check_uaccess r0, 2, r1, r2, __get_user_bad8 104 + #ifdef CONFIG_CPU_USE_DOMAINS 105 + rb .req ip 106 + 9: ldrbt r3, [r0], #1 107 + 10: ldrbt rb, [r0], #0 108 + #else 109 + rb .req r0 110 + 9: ldrb r3, [r0] 111 + 10: ldrb rb, [r0, #1] 112 + #endif 113 + orr r3, rb, r3, lsl #8 114 + mov r0, #0 115 + ret lr 116 + ENDPROC(__get_user_64t_2) 117 + 118 + ENTRY(__get_user_64t_4) 119 + check_uaccess r0, 4, r1, r2, __get_user_bad8 120 + 11: TUSER(ldr) r3, [r0] 121 + mov r0, #0 122 + ret lr 123 + ENDPROC(__get_user_64t_4) 94 124 #endif 95 125 96 126 __get_user_bad8: ··· 141 111 .long 6b, __get_user_bad8 142 112 #ifdef __ARMEB__ 143 113 .long 7b, __get_user_bad 114 + .long 8b, __get_user_bad8 115 + .long 9b, __get_user_bad8 116 + .long 10b, __get_user_bad8 117 + .long 11b, __get_user_bad8 144 118 #endif 145 119 .popsection
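In the __get_user_64t_2 path above, two byte loads are combined with "orr r3, rb, r3, lsl #8" into one big-endian halfword. A sketch of that combining step in C (function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* b0 models the first ldrb (at [r0]), b1 the second (at [r0, #1]);
 * the orr with a left shift by 8 yields the big-endian 16-bit value. */
static uint16_t combine_be16(uint8_t b0, uint8_t b1)
{
	return (uint16_t)((b0 << 8) | b1);
}
```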
-1
arch/arm/mm/proc-v7-3level.S
··· 146 146 mov \tmp, \ttbr1, lsr #(32 - ARCH_PGD_SHIFT) @ upper bits 147 147 mov \ttbr1, \ttbr1, lsl #ARCH_PGD_SHIFT @ lower bits 148 148 addls \ttbr1, \ttbr1, #TTBR1_OFFSET 149 - adcls \tmp, \tmp, #0 150 149 mcrr p15, 1, \ttbr1, \tmp, c2 @ load TTBR1 151 150 mov \tmp, \ttbr0, lsr #(32 - ARCH_PGD_SHIFT) @ upper bits 152 151 mov \ttbr0, \ttbr0, lsl #ARCH_PGD_SHIFT @ lower bits
+1 -1
arch/arm/xen/Makefile
··· 1 - obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o 1 + obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o mm32.o
+6
arch/arm/xen/enlighten.c
··· 260 260 xen_domain_type = XEN_HVM_DOMAIN; 261 261 262 262 xen_setup_features(); 263 + 264 + if (!xen_feature(XENFEAT_grant_map_identity)) { 265 + pr_warn("Please upgrade your Xen.\n" 266 + "If your platform has any non-coherent DMA devices, they won't work properly.\n"); 267 + } 268 + 263 269 if (xen_feature(XENFEAT_dom0)) 264 270 xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED; 265 271 else
+202
arch/arm/xen/mm32.c
··· 1 + #include <linux/cpu.h> 2 + #include <linux/dma-mapping.h> 3 + #include <linux/gfp.h> 4 + #include <linux/highmem.h> 5 + 6 + #include <xen/features.h> 7 + 8 + static DEFINE_PER_CPU(unsigned long, xen_mm32_scratch_virt); 9 + static DEFINE_PER_CPU(pte_t *, xen_mm32_scratch_ptep); 10 + 11 + static int alloc_xen_mm32_scratch_page(int cpu) 12 + { 13 + struct page *page; 14 + unsigned long virt; 15 + pmd_t *pmdp; 16 + pte_t *ptep; 17 + 18 + if (per_cpu(xen_mm32_scratch_ptep, cpu) != NULL) 19 + return 0; 20 + 21 + page = alloc_page(GFP_KERNEL); 22 + if (page == NULL) { 23 + pr_warn("Failed to allocate xen_mm32_scratch_page for cpu %d\n", cpu); 24 + return -ENOMEM; 25 + } 26 + 27 + virt = (unsigned long)__va(page_to_phys(page)); 28 + pmdp = pmd_offset(pud_offset(pgd_offset_k(virt), virt), virt); 29 + ptep = pte_offset_kernel(pmdp, virt); 30 + 31 + per_cpu(xen_mm32_scratch_virt, cpu) = virt; 32 + per_cpu(xen_mm32_scratch_ptep, cpu) = ptep; 33 + 34 + return 0; 35 + } 36 + 37 + static int xen_mm32_cpu_notify(struct notifier_block *self, 38 + unsigned long action, void *hcpu) 39 + { 40 + int cpu = (long)hcpu; 41 + switch (action) { 42 + case CPU_UP_PREPARE: 43 + if (alloc_xen_mm32_scratch_page(cpu)) 44 + return NOTIFY_BAD; 45 + break; 46 + default: 47 + break; 48 + } 49 + return NOTIFY_OK; 50 + } 51 + 52 + static struct notifier_block xen_mm32_cpu_notifier = { 53 + .notifier_call = xen_mm32_cpu_notify, 54 + }; 55 + 56 + static void* xen_mm32_remap_page(dma_addr_t handle) 57 + { 58 + unsigned long virt = get_cpu_var(xen_mm32_scratch_virt); 59 + pte_t *ptep = __get_cpu_var(xen_mm32_scratch_ptep); 60 + 61 + *ptep = pfn_pte(handle >> PAGE_SHIFT, PAGE_KERNEL); 62 + local_flush_tlb_kernel_page(virt); 63 + 64 + return (void*)virt; 65 + } 66 + 67 + static void xen_mm32_unmap(void *vaddr) 68 + { 69 + put_cpu_var(xen_mm32_scratch_virt); 70 + } 71 + 72 + 73 + /* functions called by SWIOTLB */ 74 + 75 + static void dma_cache_maint(dma_addr_t handle, unsigned long offset, 76 + size_t size, enum dma_data_direction dir, 77 + void (*op)(const void *, size_t, int)) 78 + { 79 + unsigned long pfn; 80 + size_t left = size; 81 + 82 + pfn = (handle >> PAGE_SHIFT) + offset / PAGE_SIZE; 83 + offset %= PAGE_SIZE; 84 + 85 + do { 86 + size_t len = left; 87 + void *vaddr; 88 + 89 + if (!pfn_valid(pfn)) 90 + { 91 + /* Cannot map the page, we don't know its physical address. 92 + * Return and hope for the best */ 93 + if (!xen_feature(XENFEAT_grant_map_identity)) 94 + return; 95 + vaddr = xen_mm32_remap_page(handle) + offset; 96 + op(vaddr, len, dir); 97 + xen_mm32_unmap(vaddr - offset); 98 + } else { 99 + struct page *page = pfn_to_page(pfn); 100 + 101 + if (PageHighMem(page)) { 102 + if (len + offset > PAGE_SIZE) 103 + len = PAGE_SIZE - offset; 104 + 105 + if (cache_is_vipt_nonaliasing()) { 106 + vaddr = kmap_atomic(page); 107 + op(vaddr + offset, len, dir); 108 + kunmap_atomic(vaddr); 109 + } else { 110 + vaddr = kmap_high_get(page); 111 + if (vaddr) { 112 + op(vaddr + offset, len, dir); 113 + kunmap_high(page); 114 + } 115 + } 116 + } else { 117 + vaddr = page_address(page) + offset; 118 + op(vaddr, len, dir); 119 + } 120 + } 121 + 122 + offset = 0; 123 + pfn++; 124 + left -= len; 125 + } while (left); 126 + } 127 + 128 + static void __xen_dma_page_dev_to_cpu(struct device *hwdev, dma_addr_t handle, 129 + size_t size, enum dma_data_direction dir) 130 + { 131 + /* Cannot use __dma_page_dev_to_cpu because we don't have a 132 + * struct page for handle */ 133 + 134 + if (dir != DMA_TO_DEVICE) 135 + outer_inv_range(handle, handle + size); 136 + 137 + dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_unmap_area); 138 + 139 + } 140 + 141 + static void __xen_dma_page_cpu_to_dev(struct device *hwdev, dma_addr_t handle, 142 + size_t size, enum dma_data_direction dir) 143 + { 144 + 145 + dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_map_area); 146 + 147 + if (dir == DMA_FROM_DEVICE) { 148 + outer_inv_range(handle, handle + size); 149 + } else { 150 + outer_clean_range(handle, handle + size); 151 + } 152 + } 153 + 154 + void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle, 155 + size_t size, enum dma_data_direction dir, 156 + struct dma_attrs *attrs) 157 + 158 + { 159 + if (!__generic_dma_ops(hwdev)->unmap_page) 160 + return; 161 + if (dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs)) 162 + return; 163 + 164 + __xen_dma_page_dev_to_cpu(hwdev, handle, size, dir); 165 + } 166 + 167 + void xen_dma_sync_single_for_cpu(struct device *hwdev, 168 + dma_addr_t handle, size_t size, enum dma_data_direction dir) 169 + { 170 + if (!__generic_dma_ops(hwdev)->sync_single_for_cpu) 171 + return; 172 + __xen_dma_page_dev_to_cpu(hwdev, handle, size, dir); 173 + } 174 + 175 + void xen_dma_sync_single_for_device(struct device *hwdev, 176 + dma_addr_t handle, size_t size, enum dma_data_direction dir) 177 + { 178 + if (!__generic_dma_ops(hwdev)->sync_single_for_device) 179 + return; 180 + __xen_dma_page_cpu_to_dev(hwdev, handle, size, dir); 181 + } 182 + 183 + int __init xen_mm32_init(void) 184 + { 185 + int cpu; 186 + 187 + if (!xen_initial_domain()) 188 + return 0; 189 + 190 + register_cpu_notifier(&xen_mm32_cpu_notifier); 191 + get_online_cpus(); 192 + for_each_online_cpu(cpu) { 193 + if (alloc_xen_mm32_scratch_page(cpu)) { 194 + put_online_cpus(); 195 + unregister_cpu_notifier(&xen_mm32_cpu_notifier); 196 + return -ENOMEM; 197 + } 198 + } 199 + put_online_cpus(); 200 + 201 + return 0; 202 + } 203 + arch_initcall(xen_mm32_init);
+1 -65
arch/arm/xen/p2m.c
··· 21 21 unsigned long pfn; 22 22 unsigned long mfn; 23 23 unsigned long nr_pages; 24 - struct rb_node rbnode_mach; 25 24 struct rb_node rbnode_phys; 26 25 }; 27 26 28 27 static rwlock_t p2m_lock; 29 28 struct rb_root phys_to_mach = RB_ROOT; 30 29 EXPORT_SYMBOL_GPL(phys_to_mach); 31 - static struct rb_root mach_to_phys = RB_ROOT; 32 30 33 31 static int xen_add_phys_to_mach_entry(struct xen_p2m_entry *new) 34 32 { ··· 39 41 parent = *link; 40 42 entry = rb_entry(parent, struct xen_p2m_entry, rbnode_phys); 41 43 42 - if (new->mfn == entry->mfn) 43 - goto err_out; 44 44 if (new->pfn == entry->pfn) 45 45 goto err_out; 46 46 ··· 83 87 return INVALID_P2M_ENTRY; 84 88 } 85 89 EXPORT_SYMBOL_GPL(__pfn_to_mfn); 86 - 87 - static int xen_add_mach_to_phys_entry(struct xen_p2m_entry *new) 88 - { 89 - struct rb_node **link = &mach_to_phys.rb_node; 90 - struct rb_node *parent = NULL; 91 - struct xen_p2m_entry *entry; 92 - int rc = 0; 93 - 94 - while (*link) { 95 - parent = *link; 96 - entry = rb_entry(parent, struct xen_p2m_entry, rbnode_mach); 97 - 98 - if (new->mfn == entry->mfn) 99 - goto err_out; 100 - if (new->pfn == entry->pfn) 101 - goto err_out; 102 - 103 - if (new->mfn < entry->mfn) 104 - link = &(*link)->rb_left; 105 - else 106 - link = &(*link)->rb_right; 107 - } 108 - rb_link_node(&new->rbnode_mach, parent, link); 109 - rb_insert_color(&new->rbnode_mach, &mach_to_phys); 110 - goto out; 111 - 112 - err_out: 113 - rc = -EINVAL; 114 - pr_warn("%s: cannot add pfn=%pa -> mfn=%pa: pfn=%pa -> mfn=%pa already exists\n", 115 - __func__, &new->pfn, &new->mfn, &entry->pfn, &entry->mfn); 116 - out: 117 - return rc; 118 - } 119 - 120 - unsigned long __mfn_to_pfn(unsigned long mfn) 121 - { 122 - struct rb_node *n = mach_to_phys.rb_node; 123 - struct xen_p2m_entry *entry; 124 - unsigned long irqflags; 125 - 126 - read_lock_irqsave(&p2m_lock, irqflags); 127 - while (n) { 128 - entry = rb_entry(n, struct xen_p2m_entry, rbnode_mach); 129 - if (entry->mfn <= mfn && 130 - entry->mfn + entry->nr_pages > mfn) { 131 - read_unlock_irqrestore(&p2m_lock, irqflags); 132 - return entry->pfn + (mfn - entry->mfn); 133 - } 134 - if (mfn < entry->mfn) 135 - n = n->rb_left; 136 - else 137 - n = n->rb_right; 138 - } 139 - read_unlock_irqrestore(&p2m_lock, irqflags); 140 - 141 - return INVALID_P2M_ENTRY; 142 - } 143 - EXPORT_SYMBOL_GPL(__mfn_to_pfn); 144 90 145 91 int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops, 146 92 struct gnttab_map_grant_ref *kmap_ops, ··· 130 192 p2m_entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys); 131 193 if (p2m_entry->pfn <= pfn && 132 194 p2m_entry->pfn + p2m_entry->nr_pages > pfn) { 133 - rb_erase(&p2m_entry->rbnode_mach, &mach_to_phys); 134 195 rb_erase(&p2m_entry->rbnode_phys, &phys_to_mach); 135 196 write_unlock_irqrestore(&p2m_lock, irqflags); 136 197 kfree(p2m_entry); ··· 154 217 p2m_entry->mfn = mfn; 155 218 156 219 write_lock_irqsave(&p2m_lock, irqflags); 157 - if ((rc = xen_add_phys_to_mach_entry(p2m_entry) < 0) || 158 - (rc = xen_add_mach_to_phys_entry(p2m_entry) < 0)) { 220 + if ((rc = xen_add_phys_to_mach_entry(p2m_entry)) < 0) { 159 221 write_unlock_irqrestore(&p2m_lock, irqflags); 160 222 return false; 161 223 }
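The p2m code this patch keeps looks up a pfn in an rbtree of {pfn, mfn, nr_pages} ranges (the mach-to-phys tree being removed used the same containment test in the other direction). A simplified model of that lookup, using a flat array instead of the kernel's rbtree (names and layout illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define INVALID_P2M_ENTRY (~0UL)

/* Each entry maps nr_pages frames starting at pfn onto mfn. */
struct p2m_range {
	unsigned long pfn;
	unsigned long mfn;
	unsigned long nr_pages;
};

/* Same containment test as __pfn_to_mfn: pfn is inside an entry when
 * entry->pfn <= pfn < entry->pfn + entry->nr_pages. */
static unsigned long pfn_to_mfn_lookup(const struct p2m_range *tbl,
				       size_t n, unsigned long pfn)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (tbl[i].pfn <= pfn && tbl[i].pfn + tbl[i].nr_pages > pfn)
			return tbl[i].mfn + (pfn - tbl[i].pfn);
	}
	return INVALID_P2M_ENTRY;
}
```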
+4 -8
arch/arm64/kernel/irq.c
··· 97 97 if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity)) 98 98 return false; 99 99 100 - if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) 100 + if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) { 101 + affinity = cpu_online_mask; 101 102 ret = true; 103 + } 102 104 103 - /* 104 - * when using forced irq_set_affinity we must ensure that the cpu 105 - * being offlined is not present in the affinity mask, it may be 106 - * selected as the target CPU otherwise 107 - */ 108 - affinity = cpu_online_mask; 109 105 c = irq_data_get_irq_chip(d); 110 106 if (!c->irq_set_affinity) 111 107 pr_debug("IRQ%u: unable to set affinity\n", d->irq); 112 - else if (c->irq_set_affinity(d, affinity, true) == IRQ_SET_MASK_OK && ret) 108 + else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret) 113 109 cpumask_copy(d->affinity, affinity); 114 110 115 111 return ret;
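Both irq.c hunks change IRQ migration so the fallback to cpu_online_mask happens only when the affinity mask no longer intersects the online CPUs, and the "affinity changed" flag is raised only in that case. A bitmask model of that decision (a sketch; the kernel uses cpumask_t, not a single word):

```c
#include <assert.h>
#include <stdbool.h>

/* Returns true (and rewrites *affinity to the online mask) only when no
 * online CPU remains in the IRQ's affinity mask, mirroring the fixed
 * branch structure in migrate_one_irq(). */
static bool fixup_affinity(unsigned long *affinity, unsigned long online)
{
	bool changed = false;

	if ((*affinity & online) == 0) {
		*affinity = online;
		changed = true;
	}
	return changed;
}
```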
+18
arch/arm64/kernel/process.c
··· 230 230 { 231 231 } 232 232 233 + static void tls_thread_flush(void) 234 + { 235 + asm ("msr tpidr_el0, xzr"); 236 + 237 + if (is_compat_task()) { 238 + current->thread.tp_value = 0; 239 + 240 + /* 241 + * We need to ensure ordering between the shadow state and the 242 + * hardware state, so that we don't corrupt the hardware state 243 + * with a stale shadow state during context switch. 244 + */ 245 + barrier(); 246 + asm ("msr tpidrro_el0, xzr"); 247 + } 248 + } 249 + 233 250 void flush_thread(void) 234 251 { 235 252 fpsimd_flush_thread(); 253 + tls_thread_flush(); 236 254 flush_ptrace_hw_breakpoint(current); 237 255 } 238 256
+6
arch/arm64/kernel/sys_compat.c
··· 79 79 80 80 case __ARM_NR_compat_set_tls: 81 81 current->thread.tp_value = regs->regs[0]; 82 + 83 + /* 84 + * Protect against register corruption from context switch. 85 + * See comment in tls_thread_flush. 86 + */ 87 + barrier(); 82 88 asm ("msr tpidrro_el0, %0" : : "r" (regs->regs[0])); 83 89 return 0; 84 90
+1 -2
arch/arm64/mm/init.c
··· 149 149 memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start); 150 150 #endif 151 151 152 - if (!efi_enabled(EFI_MEMMAP)) 153 - early_init_fdt_scan_reserved_mem(); 152 + early_init_fdt_scan_reserved_mem(); 154 153 155 154 /* 4GB maximum for 32-bit only capable devices */ 156 155 if (IS_ENABLED(CONFIG_ZONE_DMA))
+1 -1
arch/ia64/include/uapi/asm/unistd.h
··· 329 329 #define __NR_sched_getattr 1337 330 330 #define __NR_renameat2 1338 331 331 #define __NR_getrandom 1339 332 - #define __NR_memfd_create 1339 332 + #define __NR_memfd_create 1340 333 333 334 334 #endif /* _UAPI_ASM_IA64_UNISTD_H */
+1 -23
arch/ia64/pci/fixup.c
··· 38 38 return; 39 39 /* Maybe, this machine supports legacy memory map. */ 40 40 41 - if (!vga_default_device()) { 42 - resource_size_t start, end; 43 - int i; 44 - 45 - /* Does firmware framebuffer belong to us? */ 46 - for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 47 - if (!(pci_resource_flags(pdev, i) & IORESOURCE_MEM)) 48 - continue; 49 - 50 - start = pci_resource_start(pdev, i); 51 - end = pci_resource_end(pdev, i); 52 - 53 - if (!start || !end) 54 - continue; 55 - 56 - if (screen_info.lfb_base >= start && 57 - (screen_info.lfb_base + screen_info.lfb_size) < end) 58 - vga_set_default_device(pdev); 59 - } 60 - } 61 - 62 41 /* Is VGA routed to us? */ 63 42 bus = pdev->bus; 64 43 while (bus) { ··· 62 83 pci_read_config_word(pdev, PCI_COMMAND, &config); 63 84 if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) { 64 85 pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW; 65 - dev_printk(KERN_DEBUG, &pdev->dev, "Boot video device\n"); 66 - vga_set_default_device(pdev); 86 + dev_printk(KERN_DEBUG, &pdev->dev, "Video device with shadowed ROM\n"); 67 87 } 68 88 } 69 89 }
+3 -3
arch/microblaze/Kconfig
··· 127 127 128 128 endmenu 129 129 130 - menu "Advanced setup" 130 + menu "Kernel features" 131 131 132 132 config ADVANCED_OPTIONS 133 133 bool "Prompt for advanced kernel configuration options" ··· 248 248 249 249 endchoice 250 250 251 - endmenu 252 - 253 251 source "mm/Kconfig" 252 + 253 + endmenu 254 254 255 255 menu "Executable file formats" 256 256
+1
arch/microblaze/include/asm/entry.h
··· 15 15 16 16 #include <asm/percpu.h> 17 17 #include <asm/ptrace.h> 18 + #include <linux/linkage.h> 18 19 19 20 /* 20 21 * These are per-cpu variables required in entry.S, among other
+2 -2
arch/microblaze/include/asm/uaccess.h
··· 98 98 99 99 if ((get_fs().seg < ((unsigned long)addr)) || 100 100 (get_fs().seg < ((unsigned long)addr + size - 1))) { 101 - pr_debug("ACCESS fail: %s at 0x%08x (size 0x%x), seg 0x%08x\n", 101 + pr_devel("ACCESS fail: %s at 0x%08x (size 0x%x), seg 0x%08x\n", 102 102 type ? "WRITE" : "READ ", (__force u32)addr, (u32)size, 103 103 (u32)get_fs().seg); 104 104 return 0; 105 105 } 106 106 ok: 107 - pr_debug("ACCESS OK: %s at 0x%08x (size 0x%x), seg 0x%08x\n", 107 + pr_devel("ACCESS OK: %s at 0x%08x (size 0x%x), seg 0x%08x\n", 108 108 type ? "WRITE" : "READ ", (__force u32)addr, (u32)size, 109 109 (u32)get_fs().seg); 110 110 return 1;
+1 -1
arch/microblaze/include/asm/unistd.h
··· 38 38 39 39 #endif /* __ASSEMBLY__ */ 40 40 41 - #define __NR_syscalls 381 41 + #define __NR_syscalls 387 42 42 43 43 #endif /* _ASM_MICROBLAZE_UNISTD_H */
+3
arch/mips/Kconfig
··· 546 546 # select SYS_HAS_EARLY_PRINTK 547 547 select SYS_SUPPORTS_64BIT_KERNEL 548 548 select SYS_SUPPORTS_BIG_ENDIAN 549 + select MIPS_L1_CACHE_SHIFT_7 549 550 help 550 551 This is the SGI Indigo2 with R10000 processor. To compile a Linux 551 552 kernel that runs on these, say Y here. ··· 2030 2029 bool "MIPS CMP framework support (DEPRECATED)" 2031 2030 depends on SYS_SUPPORTS_MIPS_CMP 2032 2031 select MIPS_GIC_IPI 2032 + select SMP 2033 2033 select SYNC_R4K 2034 + select SYS_SUPPORTS_SMP 2034 2035 select WEAK_ORDERING 2035 2036 default n 2036 2037 help
+10 -1
arch/mips/Makefile
··· 113 113 cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(shell $(CC) -dumpmachine |grep -q 'mips.*el-.*' && echo -EB $(undef-all) $(predef-be)) 114 114 cflags-$(CONFIG_CPU_LITTLE_ENDIAN) += $(shell $(CC) -dumpmachine |grep -q 'mips.*el-.*' || echo -EL $(undef-all) $(predef-le)) 115 115 116 - cflags-$(CONFIG_CPU_HAS_SMARTMIPS) += $(call cc-option,-msmartmips) 116 + # For smartmips configurations, there are hundreds of warnings due to ISA overrides 117 + # in assembly and header files. smartmips is only supported for MIPS32r1 onwards 118 + # and there is no support for 64-bit. Various '.set mips2' or '.set mips3' or 119 + # similar directives in the kernel will spam the build logs with the following warnings: 120 + # Warning: the `smartmips' extension requires MIPS32 revision 1 or greater 121 + # or 122 + # Warning: the 64-bit MIPS architecture does not support the `smartmips' extension 123 + # Pass -Wa,--no-warn to disable all assembler warnings until the kernel code has 124 + # been fixed properly. 125 + cflags-$(CONFIG_CPU_HAS_SMARTMIPS) += $(call cc-option,-msmartmips) -Wa,--no-warn 117 126 cflags-$(CONFIG_CPU_MICROMIPS) += $(call cc-option,-mmicromips) 118 127 119 128 cflags-$(CONFIG_SB1XXX_CORELIS) += $(call cc-option,-mno-sched-prolog) \
+2 -2
arch/mips/bcm63xx/irq.c
··· 434 434 irq_stat_addr[0] += PERF_IRQSTAT_3368_REG; 435 435 irq_mask_addr[0] += PERF_IRQMASK_3368_REG; 436 436 irq_stat_addr[1] = 0; 437 - irq_stat_addr[1] = 0; 437 + irq_mask_addr[1] = 0; 438 438 irq_bits = 32; 439 439 ext_irq_count = 4; 440 440 ext_irq_cfg_reg1 = PERF_EXTIRQ_CFG_REG_3368; ··· 443 443 irq_stat_addr[0] += PERF_IRQSTAT_6328_REG(0); 444 444 irq_mask_addr[0] += PERF_IRQMASK_6328_REG(0); 445 445 irq_stat_addr[1] += PERF_IRQSTAT_6328_REG(1); 446 - irq_stat_addr[1] += PERF_IRQMASK_6328_REG(1); 446 + irq_mask_addr[1] += PERF_IRQMASK_6328_REG(1); 447 447 irq_bits = 64; 448 448 ext_irq_count = 4; 449 449 is_ext_irq_cascaded = 1;
+1
arch/mips/boot/compressed/decompress.c
··· 13 13 14 14 #include <linux/types.h> 15 15 #include <linux/kernel.h> 16 + #include <linux/string.h> 16 17 17 18 #include <asm/addrspace.h> 18 19
+9 -9
arch/mips/include/asm/cop2.h
··· 16 16 extern void octeon_cop2_save(struct octeon_cop2_state *); 17 17 extern void octeon_cop2_restore(struct octeon_cop2_state *); 18 18 19 - #define cop2_save(r) octeon_cop2_save(r) 20 - #define cop2_restore(r) octeon_cop2_restore(r) 19 + #define cop2_save(r) octeon_cop2_save(&(r)->thread.cp2) 20 + #define cop2_restore(r) octeon_cop2_restore(&(r)->thread.cp2) 21 21 22 22 #define cop2_present 1 23 23 #define cop2_lazy_restore 1 ··· 26 26 27 27 extern void nlm_cop2_save(struct nlm_cop2_state *); 28 28 extern void nlm_cop2_restore(struct nlm_cop2_state *); 29 - #define cop2_save(r) nlm_cop2_save(r) 30 - #define cop2_restore(r) nlm_cop2_restore(r) 29 + 30 + #define cop2_save(r) nlm_cop2_save(&(r)->thread.cp2) 31 + #define cop2_restore(r) nlm_cop2_restore(&(r)->thread.cp2) 31 32 32 33 #define cop2_present 1 33 34 #define cop2_lazy_restore 0 34 35 35 36 #elif defined(CONFIG_CPU_LOONGSON3) 36 37 37 - #define cop2_save(r) 38 - #define cop2_restore(r) 39 - 40 38 #define cop2_present 1 41 39 #define cop2_lazy_restore 1 40 + #define cop2_save(r) do { (r); } while (0) 41 + #define cop2_restore(r) do { (r); } while (0) 42 42 43 43 #else 44 44 45 45 #define cop2_present 0 46 46 #define cop2_lazy_restore 0 47 - #define cop2_save(r) 48 - #define cop2_restore(r) 47 + #define cop2_save(r) do { (r); } while (0) 48 + #define cop2_restore(r) do { (r); } while (0) 49 49 #endif 50 50 51 51 enum cu2_ops {
-7
arch/mips/include/asm/mach-ip28/spaces.h
··· 11 11 #ifndef _ASM_MACH_IP28_SPACES_H 12 12 #define _ASM_MACH_IP28_SPACES_H 13 13 14 - #define CAC_BASE _AC(0xa800000000000000, UL) 15 - 16 - #define HIGHMEM_START (~0UL) 17 - 18 14 #define PHYS_OFFSET _AC(0x20000000, UL) 19 - 20 - #define UNCAC_BASE _AC(0xc0000000, UL) /* 0xa0000000 + PHYS_OFFSET */ 21 - #define IO_BASE UNCAC_BASE 22 15 23 16 #include <asm/mach-generic/spaces.h> 24 17
+3 -2
arch/mips/include/asm/page.h
··· 37 37 38 38 /* 39 39 * This is used for calculating the real page sizes 40 - * for FTLB or VTLB + FTLB confugrations. 40 + * for FTLB or VTLB + FTLB configurations. 41 41 */ 42 42 static inline unsigned int page_size_ftlb(unsigned int mmuextdef) 43 43 { ··· 223 223 224 224 #endif 225 225 226 - #define virt_to_page(kaddr) pfn_to_page(PFN_DOWN(virt_to_phys(kaddr))) 226 + #define virt_to_page(kaddr) pfn_to_page(PFN_DOWN(virt_to_phys((void *) \ 227 + (kaddr)))) 227 228 228 229 extern int __virt_addr_valid(const volatile void *kaddr); 229 230 #define virt_addr_valid(kaddr) \
-5
arch/mips/include/asm/smp.h
··· 37 37 38 38 #define NO_PROC_ID (-1) 39 39 40 - #define topology_physical_package_id(cpu) (cpu_data[cpu].package) 41 - #define topology_core_id(cpu) (cpu_data[cpu].core) 42 - #define topology_core_cpumask(cpu) (&cpu_core_map[cpu]) 43 - #define topology_thread_cpumask(cpu) (&cpu_sibling_map[cpu]) 44 - 45 40 #define SMP_RESCHEDULE_YOURSELF 0x1 /* XXX braindead */ 46 41 #define SMP_CALL_FUNCTION 0x2 47 42 /* Octeon - Tell another core to flush its icache */
+2 -2
arch/mips/include/asm/switch_to.h
··· 92 92 KSTK_STATUS(prev) &= ~ST0_CU2; \ 93 93 __c0_stat = read_c0_status(); \ 94 94 write_c0_status(__c0_stat | ST0_CU2); \ 95 - cop2_save(&prev->thread.cp2); \ 95 + cop2_save(prev); \ 96 96 write_c0_status(__c0_stat & ~ST0_CU2); \ 97 97 } \ 98 98 __clear_software_ll_bit(); \ ··· 111 111 (KSTK_STATUS(current) & ST0_CU2)) { \ 112 112 __c0_stat = read_c0_status(); \ 113 113 write_c0_status(__c0_stat | ST0_CU2); \ 114 - cop2_restore(&current->thread.cp2); \ 114 + cop2_restore(current); \ 115 115 write_c0_status(__c0_stat & ~ST0_CU2); \ 116 116 } \ 117 117 if (cpu_has_dsp) \
+8
arch/mips/include/asm/topology.h
··· 9 9 #define __ASM_TOPOLOGY_H 10 10 11 11 #include <topology.h> 12 + #include <linux/smp.h> 13 + 14 + #ifdef CONFIG_SMP 15 + #define topology_physical_package_id(cpu) (cpu_data[cpu].package) 16 + #define topology_core_id(cpu) (cpu_data[cpu].core) 17 + #define topology_core_cpumask(cpu) (&cpu_core_map[cpu]) 18 + #define topology_thread_cpumask(cpu) (&cpu_sibling_map[cpu]) 19 + #endif 12 20 13 21 #endif /* __ASM_TOPOLOGY_H */
+12 -6
arch/mips/include/uapi/asm/unistd.h
··· 373 373 #define __NR_sched_getattr (__NR_Linux + 350) 374 374 #define __NR_renameat2 (__NR_Linux + 351) 375 375 #define __NR_seccomp (__NR_Linux + 352) 376 + #define __NR_getrandom (__NR_Linux + 353) 377 + #define __NR_memfd_create (__NR_Linux + 354) 376 378 377 379 /* 378 380 * Offset of the last Linux o32 flavoured syscall 379 381 */ 380 - #define __NR_Linux_syscalls 352 382 + #define __NR_Linux_syscalls 354 381 383 382 384 #endif /* _MIPS_SIM == _MIPS_SIM_ABI32 */ 383 385 384 386 #define __NR_O32_Linux 4000 385 - #define __NR_O32_Linux_syscalls 352 387 + #define __NR_O32_Linux_syscalls 354 386 388 387 389 #if _MIPS_SIM == _MIPS_SIM_ABI64 388 390 ··· 705 703 #define __NR_sched_getattr (__NR_Linux + 310) 706 704 #define __NR_renameat2 (__NR_Linux + 311) 707 705 #define __NR_seccomp (__NR_Linux + 312) 706 + #define __NR_getrandom (__NR_Linux + 313) 707 + #define __NR_memfd_create (__NR_Linux + 314) 708 708 709 709 /* 710 710 * Offset of the last Linux 64-bit flavoured syscall 711 711 */ 712 - #define __NR_Linux_syscalls 312 712 + #define __NR_Linux_syscalls 314 713 713 714 714 #endif /* _MIPS_SIM == _MIPS_SIM_ABI64 */ 715 715 716 716 #define __NR_64_Linux 5000 717 - #define __NR_64_Linux_syscalls 312 717 + #define __NR_64_Linux_syscalls 314 718 718 719 719 #if _MIPS_SIM == _MIPS_SIM_NABI32 720 720 ··· 1041 1037 #define __NR_sched_getattr (__NR_Linux + 314) 1042 1038 #define __NR_renameat2 (__NR_Linux + 315) 1043 1039 #define __NR_seccomp (__NR_Linux + 316) 1040 + #define __NR_getrandom (__NR_Linux + 317) 1041 + #define __NR_memfd_create (__NR_Linux + 318) 1044 1042 1045 1043 /* 1046 1044 * Offset of the last N32 flavoured syscall 1047 1045 */ 1048 - #define __NR_Linux_syscalls 316 1046 + #define __NR_Linux_syscalls 318 1049 1047 1050 1048 #endif /* _MIPS_SIM == _MIPS_SIM_NABI32 */ 1051 1049 1052 1050 #define __NR_N32_Linux 6000 1053 - #define __NR_N32_Linux_syscalls 316 1051 + #define __NR_N32_Linux_syscalls 318 1054 1052 1055 1053 #endif /* _UAPI_ASM_UNISTD_H 
*/
+6 -2
arch/mips/kernel/machine_kexec.c
··· 71 71 kexec_start_address = 72 72 (unsigned long) phys_to_virt(image->start); 73 73 74 - kexec_indirection_page = 75 - (unsigned long) phys_to_virt(image->head & PAGE_MASK); 74 + if (image->type == KEXEC_TYPE_DEFAULT) { 75 + kexec_indirection_page = 76 + (unsigned long) phys_to_virt(image->head & PAGE_MASK); 77 + } else { 78 + kexec_indirection_page = (unsigned long)&image->head; 79 + } 76 80 77 81 memcpy((void*)reboot_code_buffer, relocate_new_kernel, 78 82 relocate_new_kernel_size);
+2
arch/mips/kernel/scall32-o32.S
··· 577 577 PTR sys_sched_getattr /* 4350 */ 578 578 PTR sys_renameat2 579 579 PTR sys_seccomp 580 + PTR sys_getrandom 581 + PTR sys_memfd_create
+2
arch/mips/kernel/scall64-64.S
··· 432 432 PTR sys_sched_getattr /* 5310 */ 433 433 PTR sys_renameat2 434 434 PTR sys_seccomp 435 + PTR sys_getrandom 436 + PTR sys_memfd_create 435 437 .size sys_call_table,.-sys_call_table
+2
arch/mips/kernel/scall64-n32.S
··· 425 425 PTR sys_sched_getattr 426 426 PTR sys_renameat2 /* 6315 */ 427 427 PTR sys_seccomp 428 + PTR sys_getrandom 429 + PTR sys_memfd_create 428 430 .size sysn32_call_table,.-sysn32_call_table
+2
arch/mips/kernel/scall64-o32.S
··· 562 562 PTR sys_sched_getattr /* 4350 */ 563 563 PTR sys_renameat2 564 564 PTR sys_seccomp 565 + PTR sys_getrandom 566 + PTR sys_memfd_create 565 567 .size sys32_call_table,.-sys32_call_table
+1
arch/mips/mm/init.c
··· 53 53 */ 54 54 unsigned long empty_zero_page, zero_page_mask; 55 55 EXPORT_SYMBOL_GPL(empty_zero_page); 56 + EXPORT_SYMBOL(zero_page_mask); 56 57 57 58 /* 58 59 * Not static inline because used by IP27 special magic initialization code
+1
arch/mips/net/bpf_jit.c
··· 772 772 const struct sock_filter *inst; 773 773 unsigned int i, off, load_order, condt; 774 774 u32 k, b_off __maybe_unused; 775 + int tmp; 775 776 776 777 for (i = 0; i < prog->len; i++) { 777 778 u16 code;
+16
arch/parisc/Kconfig
··· 321 321 322 322 source "arch/parisc/Kconfig.debug" 323 323 324 + config SECCOMP 325 + def_bool y 326 + prompt "Enable seccomp to safely compute untrusted bytecode" 327 + ---help--- 328 + This kernel feature is useful for number crunching applications 329 + that may need to compute untrusted bytecode during their 330 + execution. By using pipes or other transports made available to 331 + the process as file descriptors supporting the read/write 332 + syscalls, it's possible to isolate those applications in 333 + their own address space using seccomp. Once seccomp is 334 + enabled via prctl(PR_SET_SECCOMP), it cannot be disabled 335 + and the task is only allowed to execute a few safe syscalls 336 + defined by each seccomp mode. 337 + 338 + If unsure, say Y. Only embedded should say N here. 339 + 324 340 source "security/Kconfig" 325 341 326 342 source "crypto/Kconfig"
+1 -1
arch/parisc/hpux/sys_hpux.c
··· 456 456 } 457 457 458 458 /* String could be altered by userspace after strlen_user() */ 459 - fsname[len] = '\0'; 459 + fsname[len - 1] = '\0'; 460 460 461 461 printk(KERN_DEBUG "that is '%s' as (char *)\n", fsname); 462 462 if ( !strcmp(fsname, "hfs") ) {
+16
arch/parisc/include/asm/seccomp.h
··· 1 + #ifndef _ASM_PARISC_SECCOMP_H 2 + #define _ASM_PARISC_SECCOMP_H 3 + 4 + #include <linux/unistd.h> 5 + 6 + #define __NR_seccomp_read __NR_read 7 + #define __NR_seccomp_write __NR_write 8 + #define __NR_seccomp_exit __NR_exit 9 + #define __NR_seccomp_sigreturn __NR_rt_sigreturn 10 + 11 + #define __NR_seccomp_read_32 __NR_read 12 + #define __NR_seccomp_write_32 __NR_write 13 + #define __NR_seccomp_exit_32 __NR_exit 14 + #define __NR_seccomp_sigreturn_32 __NR_rt_sigreturn 15 + 16 + #endif /* _ASM_PARISC_SECCOMP_H */
+4 -1
arch/parisc/include/asm/thread_info.h
··· 60 60 #define TIF_NOTIFY_RESUME 8 /* callback before returning to user */ 61 61 #define TIF_SINGLESTEP 9 /* single stepping? */ 62 62 #define TIF_BLOCKSTEP 10 /* branch stepping? */ 63 + #define TIF_SECCOMP 11 /* secure computing */ 63 64 64 65 #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) 65 66 #define _TIF_SIGPENDING (1 << TIF_SIGPENDING) ··· 71 70 #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) 72 71 #define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP) 73 72 #define _TIF_BLOCKSTEP (1 << TIF_BLOCKSTEP) 73 + #define _TIF_SECCOMP (1 << TIF_SECCOMP) 74 74 75 75 #define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | \ 76 76 _TIF_NEED_RESCHED) 77 77 #define _TIF_SYSCALL_TRACE_MASK (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP | \ 78 - _TIF_BLOCKSTEP | _TIF_SYSCALL_AUDIT) 78 + _TIF_BLOCKSTEP | _TIF_SYSCALL_AUDIT | \ 79 + _TIF_SECCOMP) 79 80 80 81 #ifdef CONFIG_64BIT 81 82 # ifdef CONFIG_COMPAT
+4 -1
arch/parisc/include/uapi/asm/unistd.h
··· 830 830 #define __NR_sched_getattr (__NR_Linux + 335) 831 831 #define __NR_utimes (__NR_Linux + 336) 832 832 #define __NR_renameat2 (__NR_Linux + 337) 833 + #define __NR_seccomp (__NR_Linux + 338) 834 + #define __NR_getrandom (__NR_Linux + 339) 835 + #define __NR_memfd_create (__NR_Linux + 340) 833 836 834 - #define __NR_Linux_syscalls (__NR_renameat2 + 1) 837 + #define __NR_Linux_syscalls (__NR_memfd_create + 1) 835 838 836 839 837 840 #define __IGNORE_select /* newselect */
+6
arch/parisc/kernel/ptrace.c
··· 270 270 { 271 271 long ret = 0; 272 272 273 + /* Do the secure computing check first. */ 274 + if (secure_computing(regs->gr[20])) { 275 + /* seccomp failures shouldn't expose any additional code. */ 276 + return -1; 277 + } 278 + 273 279 if (test_thread_flag(TIF_SYSCALL_TRACE) && 274 280 tracehook_report_syscall_entry(regs)) 275 281 ret = -1L;
+229 -4
arch/parisc/kernel/syscall.S
··· 74 74 /* ADDRESS 0xb0 to 0xb8, lws uses two insns for entry */ 75 75 /* Light-weight-syscall entry must always be located at 0xb0 */ 76 76 /* WARNING: Keep this number updated with table size changes */ 77 - #define __NR_lws_entries (2) 77 + #define __NR_lws_entries (3) 78 78 79 79 lws_entry: 80 80 gate lws_start, %r0 /* increase privilege */ ··· 502 502 503 503 504 504 /*************************************************** 505 - Implementing CAS as an atomic operation: 505 + Implementing 32bit CAS as an atomic operation: 506 506 507 507 %r26 - Address to examine 508 508 %r25 - Old value to check (old) ··· 659 659 ASM_EXCEPTIONTABLE_ENTRY(2b-linux_gateway_page, 3b-linux_gateway_page) 660 660 661 661 662 + /*************************************************** 663 + New CAS implementation which uses pointers and variable size 664 + information. The value pointed by old and new MUST NOT change 665 + while performing CAS. The lock only protect the value at %r26. 666 + 667 + %r26 - Address to examine 668 + %r25 - Pointer to the value to check (old) 669 + %r24 - Pointer to the value to set (new) 670 + %r23 - Size of the variable (0/1/2/3 for 8/16/32/64 bit) 671 + %r28 - Return non-zero on failure 672 + %r21 - Kernel error code 673 + 674 + %r21 has the following meanings: 675 + 676 + EAGAIN - CAS is busy, ldcw failed, try again. 677 + EFAULT - Read or write failed. 
678 + 679 + Scratch: r20, r22, r28, r29, r1, fr4 (32bit for 64bit CAS only) 680 + 681 + ****************************************************/ 682 + 683 + /* ELF32 Process entry path */ 684 + lws_compare_and_swap_2: 685 + #ifdef CONFIG_64BIT 686 + /* Clip the input registers */ 687 + depdi 0, 31, 32, %r26 688 + depdi 0, 31, 32, %r25 689 + depdi 0, 31, 32, %r24 690 + depdi 0, 31, 32, %r23 691 + #endif 692 + 693 + /* Check the validity of the size pointer */ 694 + subi,>>= 4, %r23, %r0 695 + b,n lws_exit_nosys 696 + 697 + /* Jump to the functions which will load the old and new values into 698 + registers depending on the their size */ 699 + shlw %r23, 2, %r29 700 + blr %r29, %r0 701 + nop 702 + 703 + /* 8bit load */ 704 + 4: ldb 0(%sr3,%r25), %r25 705 + b cas2_lock_start 706 + 5: ldb 0(%sr3,%r24), %r24 707 + nop 708 + nop 709 + nop 710 + nop 711 + nop 712 + 713 + /* 16bit load */ 714 + 6: ldh 0(%sr3,%r25), %r25 715 + b cas2_lock_start 716 + 7: ldh 0(%sr3,%r24), %r24 717 + nop 718 + nop 719 + nop 720 + nop 721 + nop 722 + 723 + /* 32bit load */ 724 + 8: ldw 0(%sr3,%r25), %r25 725 + b cas2_lock_start 726 + 9: ldw 0(%sr3,%r24), %r24 727 + nop 728 + nop 729 + nop 730 + nop 731 + nop 732 + 733 + /* 64bit load */ 734 + #ifdef CONFIG_64BIT 735 + 10: ldd 0(%sr3,%r25), %r25 736 + 11: ldd 0(%sr3,%r24), %r24 737 + #else 738 + /* Load new value into r22/r23 - high/low */ 739 + 10: ldw 0(%sr3,%r25), %r22 740 + 11: ldw 4(%sr3,%r25), %r23 741 + /* Load new value into fr4 for atomic store later */ 742 + 12: flddx 0(%sr3,%r24), %fr4 743 + #endif 744 + 745 + cas2_lock_start: 746 + /* Load start of lock table */ 747 + ldil L%lws_lock_start, %r20 748 + ldo R%lws_lock_start(%r20), %r28 749 + 750 + /* Extract four bits from r26 and hash lock (Bits 4-7) */ 751 + extru %r26, 27, 4, %r20 752 + 753 + /* Find lock to use, the hash is either one of 0 to 754 + 15, multiplied by 16 (keep it 16-byte aligned) 755 + and add to the lock table offset. 
*/ 756 + shlw %r20, 4, %r20 757 + add %r20, %r28, %r20 758 + 759 + rsm PSW_SM_I, %r0 /* Disable interrupts */ 760 + /* COW breaks can cause contention on UP systems */ 761 + LDCW 0(%sr2,%r20), %r28 /* Try to acquire the lock */ 762 + cmpb,<>,n %r0, %r28, cas2_action /* Did we get it? */ 763 + cas2_wouldblock: 764 + ldo 2(%r0), %r28 /* 2nd case */ 765 + ssm PSW_SM_I, %r0 766 + b lws_exit /* Contended... */ 767 + ldo -EAGAIN(%r0), %r21 /* Spin in userspace */ 768 + 769 + /* 770 + prev = *addr; 771 + if ( prev == old ) 772 + *addr = new; 773 + return prev; 774 + */ 775 + 776 + /* NOTES: 777 + This all works becuse intr_do_signal 778 + and schedule both check the return iasq 779 + and see that we are on the kernel page 780 + so this process is never scheduled off 781 + or is ever sent any signal of any sort, 782 + thus it is wholly atomic from usrspaces 783 + perspective 784 + */ 785 + cas2_action: 786 + /* Jump to the correct function */ 787 + blr %r29, %r0 788 + /* Set %r28 as non-zero for now */ 789 + ldo 1(%r0),%r28 790 + 791 + /* 8bit CAS */ 792 + 13: ldb,ma 0(%sr3,%r26), %r29 793 + sub,= %r29, %r25, %r0 794 + b,n cas2_end 795 + 14: stb,ma %r24, 0(%sr3,%r26) 796 + b cas2_end 797 + copy %r0, %r28 798 + nop 799 + nop 800 + 801 + /* 16bit CAS */ 802 + 15: ldh,ma 0(%sr3,%r26), %r29 803 + sub,= %r29, %r25, %r0 804 + b,n cas2_end 805 + 16: sth,ma %r24, 0(%sr3,%r26) 806 + b cas2_end 807 + copy %r0, %r28 808 + nop 809 + nop 810 + 811 + /* 32bit CAS */ 812 + 17: ldw,ma 0(%sr3,%r26), %r29 813 + sub,= %r29, %r25, %r0 814 + b,n cas2_end 815 + 18: stw,ma %r24, 0(%sr3,%r26) 816 + b cas2_end 817 + copy %r0, %r28 818 + nop 819 + nop 820 + 821 + /* 64bit CAS */ 822 + #ifdef CONFIG_64BIT 823 + 19: ldd,ma 0(%sr3,%r26), %r29 824 + sub,= %r29, %r25, %r0 825 + b,n cas2_end 826 + 20: std,ma %r24, 0(%sr3,%r26) 827 + copy %r0, %r28 828 + #else 829 + /* Compare first word */ 830 + 19: ldw,ma 0(%sr3,%r26), %r29 831 + sub,= %r29, %r22, %r0 832 + b,n cas2_end 833 + /* Compare second word */ 
834 + 20: ldw,ma 4(%sr3,%r26), %r29 835 + sub,= %r29, %r23, %r0 836 + b,n cas2_end 837 + /* Perform the store */ 838 + 21: fstdx %fr4, 0(%sr3,%r26) 839 + copy %r0, %r28 840 + #endif 841 + 842 + cas2_end: 843 + /* Free lock */ 844 + stw,ma %r20, 0(%sr2,%r20) 845 + /* Enable interrupts */ 846 + ssm PSW_SM_I, %r0 847 + /* Return to userspace, set no error */ 848 + b lws_exit 849 + copy %r0, %r21 850 + 851 + 22: 852 + /* Error occurred on load or store */ 853 + /* Free lock */ 854 + stw %r20, 0(%sr2,%r20) 855 + ssm PSW_SM_I, %r0 856 + ldo 1(%r0),%r28 857 + b lws_exit 858 + ldo -EFAULT(%r0),%r21 /* set errno */ 859 + nop 860 + nop 861 + nop 862 + 863 + /* Exception table entries, for the load and store, return EFAULT. 864 + Each of the entries must be relocated. */ 865 + ASM_EXCEPTIONTABLE_ENTRY(4b-linux_gateway_page, 22b-linux_gateway_page) 866 + ASM_EXCEPTIONTABLE_ENTRY(5b-linux_gateway_page, 22b-linux_gateway_page) 867 + ASM_EXCEPTIONTABLE_ENTRY(6b-linux_gateway_page, 22b-linux_gateway_page) 868 + ASM_EXCEPTIONTABLE_ENTRY(7b-linux_gateway_page, 22b-linux_gateway_page) 869 + ASM_EXCEPTIONTABLE_ENTRY(8b-linux_gateway_page, 22b-linux_gateway_page) 870 + ASM_EXCEPTIONTABLE_ENTRY(9b-linux_gateway_page, 22b-linux_gateway_page) 871 + ASM_EXCEPTIONTABLE_ENTRY(10b-linux_gateway_page, 22b-linux_gateway_page) 872 + ASM_EXCEPTIONTABLE_ENTRY(11b-linux_gateway_page, 22b-linux_gateway_page) 873 + ASM_EXCEPTIONTABLE_ENTRY(13b-linux_gateway_page, 22b-linux_gateway_page) 874 + ASM_EXCEPTIONTABLE_ENTRY(14b-linux_gateway_page, 22b-linux_gateway_page) 875 + ASM_EXCEPTIONTABLE_ENTRY(15b-linux_gateway_page, 22b-linux_gateway_page) 876 + ASM_EXCEPTIONTABLE_ENTRY(16b-linux_gateway_page, 22b-linux_gateway_page) 877 + ASM_EXCEPTIONTABLE_ENTRY(17b-linux_gateway_page, 22b-linux_gateway_page) 878 + ASM_EXCEPTIONTABLE_ENTRY(18b-linux_gateway_page, 22b-linux_gateway_page) 879 + ASM_EXCEPTIONTABLE_ENTRY(19b-linux_gateway_page, 22b-linux_gateway_page) 880 + 
ASM_EXCEPTIONTABLE_ENTRY(20b-linux_gateway_page, 22b-linux_gateway_page) 881 + #ifndef CONFIG_64BIT 882 + ASM_EXCEPTIONTABLE_ENTRY(12b-linux_gateway_page, 22b-linux_gateway_page) 883 + ASM_EXCEPTIONTABLE_ENTRY(21b-linux_gateway_page, 22b-linux_gateway_page) 884 + #endif 885 + 662 886 /* Make sure nothing else is placed on this page */ 663 887 .align PAGE_SIZE 664 888 END(linux_gateway_page) ··· 899 675 /* Light-weight-syscall table */ 900 676 /* Start of lws table. */ 901 677 ENTRY(lws_table) 902 - LWS_ENTRY(compare_and_swap32) /* 0 - ELF32 Atomic compare and swap */ 903 - LWS_ENTRY(compare_and_swap64) /* 1 - ELF64 Atomic compare and swap */ 678 + LWS_ENTRY(compare_and_swap32) /* 0 - ELF32 Atomic 32bit CAS */ 679 + LWS_ENTRY(compare_and_swap64) /* 1 - ELF64 Atomic 32bit CAS */ 680 + LWS_ENTRY(compare_and_swap_2) /* 2 - ELF32 Atomic 64bit CAS */ 904 681 END(lws_table) 905 682 /* End of lws table */ 906 683
+3
arch/parisc/kernel/syscall_table.S
··· 433 433 ENTRY_SAME(sched_getattr) /* 335 */ 434 434 ENTRY_COMP(utimes) 435 435 ENTRY_SAME(renameat2) 436 + ENTRY_SAME(seccomp) 437 + ENTRY_SAME(getrandom) 438 + ENTRY_SAME(memfd_create) /* 340 */ 436 439 437 440 /* Nothing yet */ 438 441
+1
arch/powerpc/configs/cell_defconfig
··· 5 5 CONFIG_NR_CPUS=4 6 6 CONFIG_EXPERIMENTAL=y 7 7 CONFIG_SYSVIPC=y 8 + CONFIG_FHANDLE=y 8 9 CONFIG_IKCONFIG=y 9 10 CONFIG_IKCONFIG_PROC=y 10 11 CONFIG_LOG_BUF_SHIFT=15
+1
arch/powerpc/configs/celleb_defconfig
··· 5 5 CONFIG_NR_CPUS=4 6 6 CONFIG_EXPERIMENTAL=y 7 7 CONFIG_SYSVIPC=y 8 + CONFIG_FHANDLE=y 8 9 CONFIG_IKCONFIG=y 9 10 CONFIG_IKCONFIG_PROC=y 10 11 CONFIG_LOG_BUF_SHIFT=15
+1
arch/powerpc/configs/corenet64_smp_defconfig
··· 4 4 CONFIG_SMP=y 5 5 CONFIG_NR_CPUS=24 6 6 CONFIG_SYSVIPC=y 7 + CONFIG_FHANDLE=y 7 8 CONFIG_IRQ_DOMAIN_DEBUG=y 8 9 CONFIG_NO_HZ=y 9 10 CONFIG_HIGH_RES_TIMERS=y
+1
arch/powerpc/configs/g5_defconfig
··· 5 5 CONFIG_EXPERIMENTAL=y 6 6 CONFIG_SYSVIPC=y 7 7 CONFIG_POSIX_MQUEUE=y 8 + CONFIG_FHANDLE=y 8 9 CONFIG_IKCONFIG=y 9 10 CONFIG_IKCONFIG_PROC=y 10 11 CONFIG_BLK_DEV_INITRD=y
+1
arch/powerpc/configs/maple_defconfig
··· 4 4 CONFIG_EXPERIMENTAL=y 5 5 CONFIG_SYSVIPC=y 6 6 CONFIG_POSIX_MQUEUE=y 7 + CONFIG_FHANDLE=y 7 8 CONFIG_IKCONFIG=y 8 9 CONFIG_IKCONFIG_PROC=y 9 10 # CONFIG_COMPAT_BRK is not set
+1
arch/powerpc/configs/pasemi_defconfig
··· 3 3 CONFIG_SMP=y 4 4 CONFIG_NR_CPUS=2 5 5 CONFIG_SYSVIPC=y 6 + CONFIG_FHANDLE=y 6 7 CONFIG_NO_HZ=y 7 8 CONFIG_HIGH_RES_TIMERS=y 8 9 CONFIG_BLK_DEV_INITRD=y
+1
arch/powerpc/configs/ppc64_defconfig
··· 4 4 CONFIG_SMP=y 5 5 CONFIG_SYSVIPC=y 6 6 CONFIG_POSIX_MQUEUE=y 7 + CONFIG_FHANDLE=y 7 8 CONFIG_IRQ_DOMAIN_DEBUG=y 8 9 CONFIG_NO_HZ=y 9 10 CONFIG_HIGH_RES_TIMERS=y
+1
arch/powerpc/configs/ppc64e_defconfig
··· 3 3 CONFIG_SMP=y 4 4 CONFIG_SYSVIPC=y 5 5 CONFIG_POSIX_MQUEUE=y 6 + CONFIG_FHANDLE=y 6 7 CONFIG_NO_HZ=y 7 8 CONFIG_HIGH_RES_TIMERS=y 8 9 CONFIG_TASKSTATS=y
+1
arch/powerpc/configs/ps3_defconfig
··· 5 5 CONFIG_NR_CPUS=2 6 6 CONFIG_SYSVIPC=y 7 7 CONFIG_POSIX_MQUEUE=y 8 + CONFIG_FHANDLE=y 8 9 CONFIG_HIGH_RES_TIMERS=y 9 10 CONFIG_BLK_DEV_INITRD=y 10 11 CONFIG_RD_LZMA=y
+1
arch/powerpc/configs/pseries_defconfig
··· 5 5 CONFIG_NR_CPUS=2048 6 6 CONFIG_SYSVIPC=y 7 7 CONFIG_POSIX_MQUEUE=y 8 + CONFIG_FHANDLE=y 8 9 CONFIG_AUDIT=y 9 10 CONFIG_AUDITSYSCALL=y 10 11 CONFIG_IRQ_DOMAIN_DEBUG=y
+1
arch/powerpc/configs/pseries_le_defconfig
··· 6 6 CONFIG_CPU_LITTLE_ENDIAN=y 7 7 CONFIG_SYSVIPC=y 8 8 CONFIG_POSIX_MQUEUE=y 9 + CONFIG_FHANDLE=y 9 10 CONFIG_AUDIT=y 10 11 CONFIG_AUDITSYSCALL=y 11 12 CONFIG_IRQ_DOMAIN_DEBUG=y
+7
arch/powerpc/include/asm/ptrace.h
··· 47 47 STACK_FRAME_OVERHEAD + KERNEL_REDZONE_SIZE) 48 48 #define STACK_FRAME_MARKER 12 49 49 50 + #if defined(_CALL_ELF) && _CALL_ELF == 2 51 + #define STACK_FRAME_MIN_SIZE 32 52 + #else 53 + #define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD 54 + #endif 55 + 50 56 /* Size of dummy stack frame allocated when calling signal handler. */ 51 57 #define __SIGNAL_FRAMESIZE 128 52 58 #define __SIGNAL_FRAMESIZE32 64 ··· 66 60 #define STACK_FRAME_REGS_MARKER ASM_CONST(0x72656773) 67 61 #define STACK_INT_FRAME_SIZE (sizeof(struct pt_regs) + STACK_FRAME_OVERHEAD) 68 62 #define STACK_FRAME_MARKER 2 63 + #define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD 69 64 70 65 /* Size of stack frame allocated when calling signal handler. */ 71 66 #define __SIGNAL_FRAMESIZE 64
+3
arch/powerpc/include/asm/systbl.h
··· 362 362 SYSCALL_SPU(sched_setattr) 363 363 SYSCALL_SPU(sched_getattr) 364 364 SYSCALL_SPU(renameat2) 365 + SYSCALL_SPU(seccomp) 366 + SYSCALL_SPU(getrandom) 367 + SYSCALL_SPU(memfd_create)
+1 -1
arch/powerpc/include/asm/unistd.h
··· 12 12 #include <uapi/asm/unistd.h> 13 13 14 14 15 - #define __NR_syscalls 358 15 + #define __NR_syscalls 361 16 16 17 17 #define __NR__exit __NR_exit 18 18 #define NR_syscalls __NR_syscalls
+3
arch/powerpc/include/uapi/asm/unistd.h
··· 380 380 #define __NR_sched_setattr 355 381 381 #define __NR_sched_getattr 356 382 382 #define __NR_renameat2 357 383 + #define __NR_seccomp 358 384 + #define __NR_getrandom 359 385 + #define __NR_memfd_create 360 383 386 384 387 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
+1 -1
arch/powerpc/perf/callchain.c
··· 35 35 return 0; /* must be 16-byte aligned */ 36 36 if (!validate_sp(sp, current, STACK_FRAME_OVERHEAD)) 37 37 return 0; 38 - if (sp >= prev_sp + STACK_FRAME_OVERHEAD) 38 + if (sp >= prev_sp + STACK_FRAME_MIN_SIZE) 39 39 return 1; 40 40 /* 41 41 * sp could decrease when we jump off an interrupt stack
+2 -1
arch/powerpc/platforms/powernv/opal-hmi.c
··· 28 28 29 29 #include <asm/opal.h> 30 30 #include <asm/cputable.h> 31 + #include <asm/machdep.h> 31 32 32 33 static int opal_hmi_handler_nb_init; 33 34 struct OpalHmiEvtNode { ··· 186 185 } 187 186 return 0; 188 187 } 189 - subsys_initcall(opal_hmi_handler_init); 188 + machine_subsys_initcall(powernv, opal_hmi_handler_init);
+19 -17
arch/powerpc/platforms/pseries/hotplug-memory.c
··· 113 113 static int pseries_remove_mem_node(struct device_node *np) 114 114 { 115 115 const char *type; 116 - const unsigned int *regs; 116 + const __be32 *regs; 117 117 unsigned long base; 118 118 unsigned int lmb_size; 119 119 int ret = -EINVAL; ··· 132 132 if (!regs) 133 133 return ret; 134 134 135 - base = *(unsigned long *)regs; 136 - lmb_size = regs[3]; 135 + base = be64_to_cpu(*(unsigned long *)regs); 136 + lmb_size = be32_to_cpu(regs[3]); 137 137 138 138 pseries_remove_memblock(base, lmb_size); 139 139 return 0; ··· 153 153 static int pseries_add_mem_node(struct device_node *np) 154 154 { 155 155 const char *type; 156 - const unsigned int *regs; 156 + const __be32 *regs; 157 157 unsigned long base; 158 158 unsigned int lmb_size; 159 159 int ret = -EINVAL; ··· 172 172 if (!regs) 173 173 return ret; 174 174 175 - base = *(unsigned long *)regs; 176 - lmb_size = regs[3]; 175 + base = be64_to_cpu(*(unsigned long *)regs); 176 + lmb_size = be32_to_cpu(regs[3]); 177 177 178 178 /* 179 179 * Update memory region to represent the memory add ··· 187 187 struct of_drconf_cell *new_drmem, *old_drmem; 188 188 unsigned long memblock_size; 189 189 u32 entries; 190 - u32 *p; 190 + __be32 *p; 191 191 int i, rc = -EINVAL; 192 192 193 193 memblock_size = pseries_memory_block_size(); 194 194 if (!memblock_size) 195 195 return -EINVAL; 196 196 197 - p = (u32 *) pr->old_prop->value; 197 + p = (__be32 *) pr->old_prop->value; 198 198 if (!p) 199 199 return -EINVAL; 200 200 ··· 203 203 * entries. Get the niumber of entries and skip to the array of 204 204 * of_drconf_cell's. 
205 205 */ 206 - entries = *p++; 206 + entries = be32_to_cpu(*p++); 207 207 old_drmem = (struct of_drconf_cell *)p; 208 208 209 - p = (u32 *)pr->prop->value; 209 + p = (__be32 *)pr->prop->value; 210 210 p++; 211 211 new_drmem = (struct of_drconf_cell *)p; 212 212 213 213 for (i = 0; i < entries; i++) { 214 - if ((old_drmem[i].flags & DRCONF_MEM_ASSIGNED) && 215 - (!(new_drmem[i].flags & DRCONF_MEM_ASSIGNED))) { 216 - rc = pseries_remove_memblock(old_drmem[i].base_addr, 214 + if ((be32_to_cpu(old_drmem[i].flags) & DRCONF_MEM_ASSIGNED) && 215 + (!(be32_to_cpu(new_drmem[i].flags) & DRCONF_MEM_ASSIGNED))) { 216 + rc = pseries_remove_memblock( 217 + be64_to_cpu(old_drmem[i].base_addr), 217 218 memblock_size); 218 219 break; 219 - } else if ((!(old_drmem[i].flags & DRCONF_MEM_ASSIGNED)) && 220 - (new_drmem[i].flags & DRCONF_MEM_ASSIGNED)) { 221 - rc = memblock_add(old_drmem[i].base_addr, 220 + } else if ((!(be32_to_cpu(old_drmem[i].flags) & 221 + DRCONF_MEM_ASSIGNED)) && 222 + (be32_to_cpu(new_drmem[i].flags) & 223 + DRCONF_MEM_ASSIGNED)) { 224 + rc = memblock_add(be64_to_cpu(old_drmem[i].base_addr), 222 225 memblock_size); 223 226 rc = (rc < 0) ? -EINVAL : 0; 224 227 break; 225 228 } 226 229 } 227 - 228 230 return rc; 229 231 } 230 232
+4 -4
arch/s390/include/asm/ipl.h
··· 17 17 #define IPL_PARM_BLK_FCP_LEN (sizeof(struct ipl_list_hdr) + \ 18 18 sizeof(struct ipl_block_fcp)) 19 19 20 - #define IPL_PARM_BLK0_FCP_LEN (sizeof(struct ipl_block_fcp) + 8) 20 + #define IPL_PARM_BLK0_FCP_LEN (sizeof(struct ipl_block_fcp) + 16) 21 21 22 22 #define IPL_PARM_BLK_CCW_LEN (sizeof(struct ipl_list_hdr) + \ 23 23 sizeof(struct ipl_block_ccw)) 24 24 25 - #define IPL_PARM_BLK0_CCW_LEN (sizeof(struct ipl_block_ccw) + 8) 25 + #define IPL_PARM_BLK0_CCW_LEN (sizeof(struct ipl_block_ccw) + 16) 26 26 27 27 #define IPL_MAX_SUPPORTED_VERSION (0) 28 28 ··· 38 38 u8 pbt; 39 39 u8 flags; 40 40 u16 reserved2; 41 + u8 loadparm[8]; 41 42 } __attribute__((packed)); 42 43 43 44 struct ipl_block_fcp { 44 - u8 reserved1[313-1]; 45 + u8 reserved1[305-1]; 45 46 u8 opt; 46 47 u8 reserved2[3]; 47 48 u16 reserved3; ··· 63 62 offsetof(struct ipl_block_fcp, scp_data))) 64 63 65 64 struct ipl_block_ccw { 66 - u8 load_parm[8]; 67 65 u8 reserved1[84]; 68 66 u8 reserved2[2]; 69 67 u16 devno;
+85 -43
arch/s390/kernel/ipl.c
··· 455 455 DEFINE_IPL_ATTR_RO(ipl_fcp, br_lba, "%lld\n", (unsigned long long) 456 456 IPL_PARMBLOCK_START->ipl_info.fcp.br_lba); 457 457 458 - static struct attribute *ipl_fcp_attrs[] = { 459 - &sys_ipl_type_attr.attr, 460 - &sys_ipl_device_attr.attr, 461 - &sys_ipl_fcp_wwpn_attr.attr, 462 - &sys_ipl_fcp_lun_attr.attr, 463 - &sys_ipl_fcp_bootprog_attr.attr, 464 - &sys_ipl_fcp_br_lba_attr.attr, 465 - NULL, 466 - }; 467 - 468 - static struct attribute_group ipl_fcp_attr_group = { 469 - .attrs = ipl_fcp_attrs, 470 - }; 471 - 472 - /* CCW ipl device attributes */ 473 - 474 458 static ssize_t ipl_ccw_loadparm_show(struct kobject *kobj, 475 459 struct kobj_attribute *attr, char *page) 476 460 { ··· 470 486 471 487 static struct kobj_attribute sys_ipl_ccw_loadparm_attr = 472 488 __ATTR(loadparm, 0444, ipl_ccw_loadparm_show, NULL); 489 + 490 + static struct attribute *ipl_fcp_attrs[] = { 491 + &sys_ipl_type_attr.attr, 492 + &sys_ipl_device_attr.attr, 493 + &sys_ipl_fcp_wwpn_attr.attr, 494 + &sys_ipl_fcp_lun_attr.attr, 495 + &sys_ipl_fcp_bootprog_attr.attr, 496 + &sys_ipl_fcp_br_lba_attr.attr, 497 + &sys_ipl_ccw_loadparm_attr.attr, 498 + NULL, 499 + }; 500 + 501 + static struct attribute_group ipl_fcp_attr_group = { 502 + .attrs = ipl_fcp_attrs, 503 + }; 504 + 505 + /* CCW ipl device attributes */ 473 506 474 507 static struct attribute *ipl_ccw_attrs_vm[] = { 475 508 &sys_ipl_type_attr.attr, ··· 766 765 DEFINE_IPL_ATTR_RW(reipl_fcp, device, "0.0.%04llx\n", "0.0.%llx\n", 767 766 reipl_block_fcp->ipl_info.fcp.devno); 768 767 769 - static struct attribute *reipl_fcp_attrs[] = { 770 - &sys_reipl_fcp_device_attr.attr, 771 - &sys_reipl_fcp_wwpn_attr.attr, 772 - &sys_reipl_fcp_lun_attr.attr, 773 - &sys_reipl_fcp_bootprog_attr.attr, 774 - &sys_reipl_fcp_br_lba_attr.attr, 775 - NULL, 776 - }; 777 - 778 - static struct attribute_group reipl_fcp_attr_group = { 779 - .attrs = reipl_fcp_attrs, 780 - }; 781 - 782 - /* CCW reipl device attributes */ 783 - 784 - 
DEFINE_IPL_ATTR_RW(reipl_ccw, device, "0.0.%04llx\n", "0.0.%llx\n", 785 - reipl_block_ccw->ipl_info.ccw.devno); 786 - 787 768 static void reipl_get_ascii_loadparm(char *loadparm, 788 769 struct ipl_parameter_block *ibp) 789 770 { 790 - memcpy(loadparm, ibp->ipl_info.ccw.load_parm, LOADPARM_LEN); 771 + memcpy(loadparm, ibp->hdr.loadparm, LOADPARM_LEN); 791 772 EBCASC(loadparm, LOADPARM_LEN); 792 773 loadparm[LOADPARM_LEN] = 0; 793 774 strim(loadparm); ··· 804 821 return -EINVAL; 805 822 } 806 823 /* initialize loadparm with blanks */ 807 - memset(ipb->ipl_info.ccw.load_parm, ' ', LOADPARM_LEN); 824 + memset(ipb->hdr.loadparm, ' ', LOADPARM_LEN); 808 825 /* copy and convert to ebcdic */ 809 - memcpy(ipb->ipl_info.ccw.load_parm, buf, lp_len); 810 - ASCEBC(ipb->ipl_info.ccw.load_parm, LOADPARM_LEN); 826 + memcpy(ipb->hdr.loadparm, buf, lp_len); 827 + ASCEBC(ipb->hdr.loadparm, LOADPARM_LEN); 811 828 return len; 812 829 } 830 + 831 + /* FCP wrapper */ 832 + static ssize_t reipl_fcp_loadparm_show(struct kobject *kobj, 833 + struct kobj_attribute *attr, char *page) 834 + { 835 + return reipl_generic_loadparm_show(reipl_block_fcp, page); 836 + } 837 + 838 + static ssize_t reipl_fcp_loadparm_store(struct kobject *kobj, 839 + struct kobj_attribute *attr, 840 + const char *buf, size_t len) 841 + { 842 + return reipl_generic_loadparm_store(reipl_block_fcp, buf, len); 843 + } 844 + 845 + static struct kobj_attribute sys_reipl_fcp_loadparm_attr = 846 + __ATTR(loadparm, S_IRUGO | S_IWUSR, reipl_fcp_loadparm_show, 847 + reipl_fcp_loadparm_store); 848 + 849 + static struct attribute *reipl_fcp_attrs[] = { 850 + &sys_reipl_fcp_device_attr.attr, 851 + &sys_reipl_fcp_wwpn_attr.attr, 852 + &sys_reipl_fcp_lun_attr.attr, 853 + &sys_reipl_fcp_bootprog_attr.attr, 854 + &sys_reipl_fcp_br_lba_attr.attr, 855 + &sys_reipl_fcp_loadparm_attr.attr, 856 + NULL, 857 + }; 858 + 859 + static struct attribute_group reipl_fcp_attr_group = { 860 + .attrs = reipl_fcp_attrs, 861 + }; 862 + 863 + /* CCW 
reipl device attributes */ 864 + 865 + DEFINE_IPL_ATTR_RW(reipl_ccw, device, "0.0.%04llx\n", "0.0.%llx\n", 866 + reipl_block_ccw->ipl_info.ccw.devno); 813 867 814 868 /* NSS wrapper */ 815 869 static ssize_t reipl_nss_loadparm_show(struct kobject *kobj, ··· 1145 1125 /* LOADPARM */ 1146 1126 /* check if read scp info worked and set loadparm */ 1147 1127 if (sclp_ipl_info.is_valid) 1148 - memcpy(ipb->ipl_info.ccw.load_parm, 1149 - &sclp_ipl_info.loadparm, LOADPARM_LEN); 1128 + memcpy(ipb->hdr.loadparm, &sclp_ipl_info.loadparm, LOADPARM_LEN); 1150 1129 else 1151 1130 /* read scp info failed: set empty loadparm (EBCDIC blanks) */ 1152 - memset(ipb->ipl_info.ccw.load_parm, 0x40, LOADPARM_LEN); 1131 + memset(ipb->hdr.loadparm, 0x40, LOADPARM_LEN); 1153 1132 ipb->hdr.flags = DIAG308_FLAGS_LP_VALID; 1154 1133 1155 1134 /* VM PARM */ ··· 1270 1251 return rc; 1271 1252 } 1272 1253 1273 - if (ipl_info.type == IPL_TYPE_FCP) 1254 + if (ipl_info.type == IPL_TYPE_FCP) { 1274 1255 memcpy(reipl_block_fcp, IPL_PARMBLOCK_START, PAGE_SIZE); 1275 - else { 1256 + /* 1257 + * Fix loadparm: There are systems where the (SCSI) LOADPARM 1258 + * is invalid in the SCSI IPL parameter block, so take it 1259 + * always from sclp_ipl_info. 1260 + */ 1261 + memcpy(reipl_block_fcp->hdr.loadparm, sclp_ipl_info.loadparm, 1262 + LOADPARM_LEN); 1263 + } else { 1276 1264 reipl_block_fcp->hdr.len = IPL_PARM_BLK_FCP_LEN; 1277 1265 reipl_block_fcp->hdr.version = IPL_PARM_BLOCK_VERSION; 1278 1266 reipl_block_fcp->hdr.blk0_len = IPL_PARM_BLK0_FCP_LEN; ··· 1890 1864 1891 1865 static int __init s390_ipl_init(void) 1892 1866 { 1867 + char str[8] = {0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40}; 1868 + 1893 1869 sclp_get_ipl_info(&sclp_ipl_info); 1870 + /* 1871 + * Fix loadparm: There are systems where the (SCSI) LOADPARM 1872 + * returned by read SCP info is invalid (contains EBCDIC blanks) 1873 + * when the system has been booted via diag308. 
In that case we use 1874 + * the value from diag308, if available. 1875 + * 1876 + * There are also systems where diag308 store does not work in 1877 + * case the system is booted from HMC. Fortunately in this case 1878 + * READ SCP info provides the correct value. 1879 + */ 1880 + if (memcmp(sclp_ipl_info.loadparm, str, sizeof(str)) == 0 && 1881 + diag308_set_works) 1882 + memcpy(sclp_ipl_info.loadparm, ipl_block.hdr.loadparm, 1883 + LOADPARM_LEN); 1894 1884 shutdown_actions_init(); 1895 1885 shutdown_triggers_init(); 1896 1886 return 0;
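The `s390_ipl_init()` hunk decides whether READ SCP info returned a usable LOADPARM by comparing it against eight EBCDIC blanks (`0x40`). A plain-C equivalent of that guard (the function name is illustrative):

```c
#include <assert.h>
#include <string.h>

/* True if the 8-byte LOADPARM consists only of EBCDIC blanks (0x40),
 * i.e. the value READ SCP info returned carries no information and
 * the diag308 copy should be preferred, as the comment above explains. */
static int loadparm_is_blank(const unsigned char *lp)
{
    static const unsigned char ebcdic_blanks[8] = {
        0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40
    };
    return memcmp(lp, ebcdic_blanks, sizeof(ebcdic_blanks)) == 0;
}
```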
+3 -7
arch/s390/kernel/vdso32/clock_gettime.S
··· 22 22 basr %r5,0 23 23 0: al %r5,21f-0b(%r5) /* get &_vdso_data */ 24 24 chi %r2,__CLOCK_REALTIME 25 - je 10f 25 + je 11f 26 26 chi %r2,__CLOCK_MONOTONIC 27 27 jne 19f 28 28 29 29 /* CLOCK_MONOTONIC */ 30 - ltr %r3,%r3 31 - jz 9f /* tp == NULL */ 32 30 1: l %r4,__VDSO_UPD_COUNT+4(%r5) /* load update counter */ 33 31 tml %r4,0x0001 /* pending update ? loop */ 34 32 jnz 1b ··· 65 67 j 6b 66 68 8: st %r2,0(%r3) /* store tp->tv_sec */ 67 69 st %r1,4(%r3) /* store tp->tv_nsec */ 68 - 9: lhi %r2,0 70 + lhi %r2,0 69 71 br %r14 70 72 71 73 /* CLOCK_REALTIME */ 72 - 10: ltr %r3,%r3 /* tp == NULL */ 73 - jz 18f 74 74 11: l %r4,__VDSO_UPD_COUNT+4(%r5) /* load update counter */ 75 75 tml %r4,0x0001 /* pending update ? loop */ 76 76 jnz 11b ··· 107 111 j 15b 108 112 17: st %r2,0(%r3) /* store tp->tv_sec */ 109 113 st %r1,4(%r3) /* store tp->tv_nsec */ 110 - 18: lhi %r2,0 114 + lhi %r2,0 111 115 br %r14 112 116 113 117 /* Fallback to system call */
+3 -7
arch/s390/kernel/vdso64/clock_gettime.S
··· 21 21 .cfi_startproc 22 22 larl %r5,_vdso_data 23 23 cghi %r2,__CLOCK_REALTIME 24 - je 4f 24 + je 5f 25 25 cghi %r2,__CLOCK_THREAD_CPUTIME_ID 26 26 je 9f 27 27 cghi %r2,-2 /* Per-thread CPUCLOCK with PID=0, VIRT=1 */ ··· 30 30 jne 12f 31 31 32 32 /* CLOCK_MONOTONIC */ 33 - ltgr %r3,%r3 34 - jz 3f /* tp == NULL */ 35 33 0: lg %r4,__VDSO_UPD_COUNT(%r5) /* load update counter */ 36 34 tmll %r4,0x0001 /* pending update ? loop */ 37 35 jnz 0b ··· 51 53 j 1b 52 54 2: stg %r0,0(%r3) /* store tp->tv_sec */ 53 55 stg %r1,8(%r3) /* store tp->tv_nsec */ 54 - 3: lghi %r2,0 56 + lghi %r2,0 55 57 br %r14 56 58 57 59 /* CLOCK_REALTIME */ 58 - 4: ltr %r3,%r3 /* tp == NULL */ 59 - jz 8f 60 60 5: lg %r4,__VDSO_UPD_COUNT(%r5) /* load update counter */ 61 61 tmll %r4,0x0001 /* pending update ? loop */ 62 62 jnz 5b ··· 76 80 j 6b 77 81 7: stg %r0,0(%r3) /* store tp->tv_sec */ 78 82 stg %r1,8(%r3) /* store tp->tv_nsec */ 79 - 8: lghi %r2,0 83 + lghi %r2,0 80 84 br %r14 81 85 82 86 /* CLOCK_THREAD_CPUTIME_ID for this thread */
+1
arch/s390/mm/init.c
··· 43 43 44 44 unsigned long empty_zero_page, zero_page_mask; 45 45 EXPORT_SYMBOL(empty_zero_page); 46 + EXPORT_SYMBOL(zero_page_mask); 46 47 47 48 static void __init setup_zero_pages(void) 48 49 {
+2
arch/sh/mm/gup.c
··· 105 105 VM_BUG_ON(!pfn_valid(pte_pfn(pte))); 106 106 page = pte_page(pte); 107 107 get_page(page); 108 + __flush_anon_page(page, addr); 109 + flush_dcache_page(page); 108 110 pages[*nr] = page; 109 111 (*nr)++; 110 112
+18 -7
arch/sparc/net/bpf_jit_comp.c
··· 234 234 __emit_load8(BASE, STRUCT, FIELD, DEST); \ 235 235 } while (0) 236 236 237 - #define emit_ldmem(OFF, DEST) \ 238 - do { *prog++ = LD32I | RS1(FP) | S13(-(OFF)) | RD(DEST); \ 237 + #ifdef CONFIG_SPARC64 238 + #define BIAS (STACK_BIAS - 4) 239 + #else 240 + #define BIAS (-4) 241 + #endif 242 + 243 + #define emit_ldmem(OFF, DEST) \ 244 + do { *prog++ = LD32I | RS1(SP) | S13(BIAS - (OFF)) | RD(DEST); \ 239 245 } while (0) 240 246 241 - #define emit_stmem(OFF, SRC) \ 242 - do { *prog++ = LD32I | RS1(FP) | S13(-(OFF)) | RD(SRC); \ 247 + #define emit_stmem(OFF, SRC) \ 248 + do { *prog++ = ST32I | RS1(SP) | S13(BIAS - (OFF)) | RD(SRC); \ 243 249 } while (0) 244 250 245 251 #ifdef CONFIG_SMP ··· 616 610 case BPF_ANC | SKF_AD_VLAN_TAG: 617 611 case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT: 618 612 emit_skb_load16(vlan_tci, r_A); 619 - if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) { 620 - emit_andi(r_A, VLAN_VID_MASK, r_A); 613 + if (code != (BPF_ANC | SKF_AD_VLAN_TAG)) { 614 + emit_alu_K(SRL, 12); 615 + emit_andi(r_A, 1, r_A); 621 616 } else { 622 - emit_loadimm(VLAN_TAG_PRESENT, r_TMP); 617 + emit_loadimm(~VLAN_TAG_PRESENT, r_TMP); 623 618 emit_and(r_A, r_TMP, r_A); 624 619 } 625 620 break; ··· 632 625 emit_loadimm(K, r_X); 633 626 break; 634 627 case BPF_LD | BPF_MEM: 628 + seen |= SEEN_MEM; 635 629 emit_ldmem(K * 4, r_A); 636 630 break; 637 631 case BPF_LDX | BPF_MEM: 632 + seen |= SEEN_MEM | SEEN_XREG; 638 633 emit_ldmem(K * 4, r_X); 639 634 break; 640 635 case BPF_ST: 636 + seen |= SEEN_MEM; 641 637 emit_stmem(K * 4, r_A); 642 638 break; 643 639 case BPF_STX: 640 + seen |= SEEN_MEM | SEEN_XREG; 644 641 emit_stmem(K * 4, r_X); 645 642 break; 646 643
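The sparc JIT fix changes how the two VLAN ancillary loads decode the TCI: "tag present" becomes bit 12 of the loaded halfword (shift right 12, AND 1), and the tag itself is the TCI with that bit cleared rather than masked down to the VID. C equivalents of the emitted instruction sequences, assuming `VLAN_TAG_PRESENT` is `0x1000` as in the kernel's `if_vlan.h` of this era:

```c
#include <assert.h>
#include <stdint.h>

#define VLAN_TAG_PRESENT 0x1000  /* assumed bit-12 flag, per if_vlan.h */

/* emit_alu_K(SRL, 12); emit_andi(r_A, 1, r_A); */
static uint32_t vlan_tag_present(uint32_t tci)
{
    return (tci >> 12) & 1;
}

/* emit_loadimm(~VLAN_TAG_PRESENT, r_TMP); emit_and(r_A, r_TMP, r_A);
 * Keeps the PCP bits (15:13) as well as the VID, unlike the old
 * VLAN_VID_MASK version. */
static uint32_t vlan_tag(uint32_t tci)
{
    return tci & ~(uint32_t)VLAN_TAG_PRESENT;
}
```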
+1
arch/x86/Kconfig
··· 23 23 def_bool y 24 24 select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI 25 25 select ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS 26 + select ARCH_HAS_FAST_MULTIPLIER 26 27 select ARCH_MIGHT_HAVE_PC_PARPORT 27 28 select ARCH_MIGHT_HAVE_PC_SERIO 28 29 select HAVE_AOUT if X86_32
+11 -7
arch/x86/boot/compressed/eboot.c
··· 1032 1032 int i; 1033 1033 unsigned long ramdisk_addr; 1034 1034 unsigned long ramdisk_size; 1035 - unsigned long initrd_addr_max; 1036 1035 1037 1036 efi_early = c; 1038 1037 sys_table = (efi_system_table_t *)(unsigned long)efi_early->table; ··· 1094 1095 1095 1096 memset(sdt, 0, sizeof(*sdt)); 1096 1097 1097 - if (hdr->xloadflags & XLF_CAN_BE_LOADED_ABOVE_4G) 1098 - initrd_addr_max = -1UL; 1099 - else 1100 - initrd_addr_max = hdr->initrd_addr_max; 1101 - 1102 1098 status = handle_cmdline_files(sys_table, image, 1103 1099 (char *)(unsigned long)hdr->cmd_line_ptr, 1104 - "initrd=", initrd_addr_max, 1100 + "initrd=", hdr->initrd_addr_max, 1105 1101 &ramdisk_addr, &ramdisk_size); 1102 + 1103 + if (status != EFI_SUCCESS && 1104 + hdr->xloadflags & XLF_CAN_BE_LOADED_ABOVE_4G) { 1105 + efi_printk(sys_table, "Trying to load files to higher address\n"); 1106 + status = handle_cmdline_files(sys_table, image, 1107 + (char *)(unsigned long)hdr->cmd_line_ptr, 1108 + "initrd=", -1UL, 1109 + &ramdisk_addr, &ramdisk_size); 1110 + } 1111 + 1106 1112 if (status != EFI_SUCCESS) 1107 1113 goto fail2; 1108 1114 hdr->ramdisk_image = ramdisk_addr & 0xffffffff;
+40 -14
arch/x86/boot/compressed/head_32.S
··· 30 30 #include <asm/boot.h> 31 31 #include <asm/asm-offsets.h> 32 32 33 + /* 34 + * Adjust our own GOT 35 + * 36 + * The relocation base must be in %ebx 37 + * 38 + * It is safe to call this macro more than once, because in some of the 39 + * code paths multiple invocations are inevitable, e.g. via the efi* 40 + * entry points. 41 + * 42 + * Relocation is only performed the first time. 43 + */ 44 + .macro FIXUP_GOT 45 + cmpb $1, got_fixed(%ebx) 46 + je 2f 47 + 48 + leal _got(%ebx), %edx 49 + leal _egot(%ebx), %ecx 50 + 1: 51 + cmpl %ecx, %edx 52 + jae 2f 53 + addl %ebx, (%edx) 54 + addl $4, %edx 55 + jmp 1b 56 + 2: 57 + movb $1, got_fixed(%ebx) 58 + .endm 59 + 33 60 __HEAD 34 61 ENTRY(startup_32) 35 62 #ifdef CONFIG_EFI_STUB ··· 83 56 add %esi, 88(%eax) 84 57 pushl %eax 85 58 59 + movl %esi, %ebx 60 + FIXUP_GOT 61 + 86 62 call make_boot_params 87 63 cmpl $0, %eax 88 64 je fail ··· 111 81 leal efi32_config(%esi), %eax 112 82 add %esi, 88(%eax) 113 83 pushl %eax 84 + 85 + movl %esi, %ebx 86 + FIXUP_GOT 87 + 114 88 2: 115 89 call efi_main 116 90 cmpl $0, %eax ··· 224 190 shrl $2, %ecx 225 191 rep stosl 226 192 227 - /* 228 - * Adjust our own GOT 229 - */ 230 - leal _got(%ebx), %edx 231 - leal _egot(%ebx), %ecx 232 - 1: 233 - cmpl %ecx, %edx 234 - jae 2f 235 - addl %ebx, (%edx) 236 - addl $4, %edx 237 - jmp 1b 238 - 2: 239 - 193 + FIXUP_GOT 240 194 /* 241 195 * Do the decompression, and jump to the new kernel.. 242 196 */ ··· 247 225 xorl %ebx, %ebx 248 226 jmp *%eax 249 227 250 - #ifdef CONFIG_EFI_STUB 251 228 .data 229 + /* Have we relocated the GOT? */ 230 + got_fixed: 231 + .byte 0 232 + 233 + #ifdef CONFIG_EFI_STUB 252 234 efi32_config: 253 235 .fill 11,8,0 254 236 .long efi_call_phys
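The `FIXUP_GOT` macro introduced above makes the GOT relocation idempotent: both the EFI entry points and the plain boot path can reach it, so a flag byte ensures the load offset is added to each GOT slot exactly once. The same idea expressed as a C sketch (names are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static int got_fixed;  /* the got_fixed byte in .data */

/* Add the relocation base to every GOT slot, but only on the first
 * call -- later invocations see the flag and return immediately,
 * mirroring `cmpb $1, got_fixed; je 2f`. */
static void fixup_got(uintptr_t *got, size_t n, uintptr_t base)
{
    if (got_fixed)
        return;
    for (size_t i = 0; i < n; i++)
        got[i] += base;       /* addl %ebx, (%edx) / addq %rbx, (%rdx) */
    got_fixed = 1;            /* movb $1, got_fixed */
}
```

Without the flag, a path that runs the fixup twice would add the base twice and leave every GOT entry pointing past its target.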
+41 -15
arch/x86/boot/compressed/head_64.S
··· 32 32 #include <asm/processor-flags.h> 33 33 #include <asm/asm-offsets.h> 34 34 35 + /* 36 + * Adjust our own GOT 37 + * 38 + * The relocation base must be in %rbx 39 + * 40 + * It is safe to call this macro more than once, because in some of the 41 + * code paths multiple invocations are inevitable, e.g. via the efi* 42 + * entry points. 43 + * 44 + * Relocation is only performed the first time. 45 + */ 46 + .macro FIXUP_GOT 47 + cmpb $1, got_fixed(%rip) 48 + je 2f 49 + 50 + leaq _got(%rip), %rdx 51 + leaq _egot(%rip), %rcx 52 + 1: 53 + cmpq %rcx, %rdx 54 + jae 2f 55 + addq %rbx, (%rdx) 56 + addq $8, %rdx 57 + jmp 1b 58 + 2: 59 + movb $1, got_fixed(%rip) 60 + .endm 61 + 35 62 __HEAD 36 63 .code32 37 64 ENTRY(startup_32) ··· 279 252 subq $1b, %rbp 280 253 281 254 /* 282 - * Relocate efi_config->call(). 255 + * Relocate efi_config->call() and the GOT entries. 283 256 */ 284 257 addq %rbp, efi64_config+88(%rip) 258 + 259 + movq %rbp, %rbx 260 + FIXUP_GOT 285 261 286 262 movq %rax, %rdi 287 263 call make_boot_params ··· 301 271 subq $1b, %rbp 302 272 303 273 /* 304 - * Relocate efi_config->call(). 274 + * Relocate efi_config->call() and the GOT entries. 305 275 */ 306 276 movq efi_config(%rip), %rax 307 277 addq %rbp, 88(%rax) 278 + 279 + movq %rbp, %rbx 280 + FIXUP_GOT 308 281 2: 309 282 movq efi_config(%rip), %rdi 310 283 call efi_main ··· 418 385 shrq $3, %rcx 419 386 rep stosq 420 387 421 - /* 422 - * Adjust our own GOT 423 - */ 424 - leaq _got(%rip), %rdx 425 - leaq _egot(%rip), %rcx 426 - 1: 427 - cmpq %rcx, %rdx 428 - jae 2f 429 - addq %rbx, (%rdx) 430 - addq $8, %rdx 431 - jmp 1b 432 - 2: 433 - 388 + FIXUP_GOT 389 + 434 390 /* 435 391 * Do the decompression, and jump to the new kernel.. 436 392 */ ··· 458 436 .quad 0x0080890000000000 /* TS descriptor */ 459 437 .quad 0x0000000000000000 /* TS continued */ 460 438 gdt_end: 439 + 440 + /* Have we relocated the GOT? */ 441 + got_fixed: 442 + .byte 0 461 443 462 444 #ifdef CONFIG_EFI_STUB 463 445 efi_config:
-2
arch/x86/include/asm/bitops.h
··· 497 497 498 498 #include <asm-generic/bitops/sched.h> 499 499 500 - #define ARCH_HAS_FAST_MULTIPLIER 1 501 - 502 500 #include <asm/arch_hweight.h> 503 501 504 502 #include <asm-generic/bitops/const_hweight.h>
+1
arch/x86/include/asm/io_apic.h
··· 239 239 static inline u32 mp_pin_to_gsi(int ioapic, int pin) { return UINT_MAX; } 240 240 static inline int mp_map_gsi_to_irq(u32 gsi, unsigned int flags) { return gsi; } 241 241 static inline void mp_unmap_irq(int irq) { } 242 + static inline bool mp_should_keep_irq(struct device *dev) { return 1; } 242 243 243 244 static inline int save_ioapic_entries(void) 244 245 {
+1
arch/x86/include/asm/pgtable_64.h
··· 19 19 extern pmd_t level2_kernel_pgt[512]; 20 20 extern pmd_t level2_fixmap_pgt[512]; 21 21 extern pmd_t level2_ident_pgt[512]; 22 + extern pte_t level1_fixmap_pgt[512]; 22 23 extern pgd_t init_level4_pgt[]; 23 24 24 25 #define swapper_pg_dir init_level4_pgt
+3 -1
arch/x86/kernel/kprobes/opt.c
··· 338 338 * a relative jump. 339 339 */ 340 340 rel = (long)op->optinsn.insn - (long)op->kp.addr + RELATIVEJUMP_SIZE; 341 - if (abs(rel) > 0x7fffffff) 341 + if (abs(rel) > 0x7fffffff) { 342 + __arch_remove_optimized_kprobe(op, 0); 342 343 return -ERANGE; 344 + } 343 345 344 346 buf = (u8 *)op->optinsn.insn; 345 347
+4
arch/x86/mm/dump_pagetables.c
··· 48 48 LOW_KERNEL_NR, 49 49 VMALLOC_START_NR, 50 50 VMEMMAP_START_NR, 51 + # ifdef CONFIG_X86_ESPFIX64 51 52 ESPFIX_START_NR, 53 + # endif 52 54 HIGH_KERNEL_NR, 53 55 MODULES_VADDR_NR, 54 56 MODULES_END_NR, ··· 73 71 { PAGE_OFFSET, "Low Kernel Mapping" }, 74 72 { VMALLOC_START, "vmalloc() Area" }, 75 73 { VMEMMAP_START, "Vmemmap" }, 74 + # ifdef CONFIG_X86_ESPFIX64 76 75 { ESPFIX_BASE_ADDR, "ESPfix Area", 16 }, 76 + # endif 77 77 { __START_KERNEL_map, "High Kernel Mapping" }, 78 78 { MODULES_VADDR, "Modules" }, 79 79 { MODULES_END, "End Modules" },
+1 -1
arch/x86/mm/mmap.c
··· 31 31 #include <linux/sched.h> 32 32 #include <asm/elf.h> 33 33 34 - struct __read_mostly va_alignment va_align = { 34 + struct va_alignment __read_mostly va_align = { 35 35 .flags = -1, 36 36 }; 37 37
+1 -23
arch/x86/pci/fixup.c
··· 326 326 struct pci_bus *bus; 327 327 u16 config; 328 328 329 - if (!vga_default_device()) { 330 - resource_size_t start, end; 331 - int i; 332 - 333 - /* Does firmware framebuffer belong to us? */ 334 - for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 335 - if (!(pci_resource_flags(pdev, i) & IORESOURCE_MEM)) 336 - continue; 337 - 338 - start = pci_resource_start(pdev, i); 339 - end = pci_resource_end(pdev, i); 340 - 341 - if (!start || !end) 342 - continue; 343 - 344 - if (screen_info.lfb_base >= start && 345 - (screen_info.lfb_base + screen_info.lfb_size) < end) 346 - vga_set_default_device(pdev); 347 - } 348 - } 349 - 350 329 /* Is VGA routed to us? */ 351 330 bus = pdev->bus; 352 331 while (bus) { ··· 350 371 pci_read_config_word(pdev, PCI_COMMAND, &config); 351 372 if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) { 352 373 pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW; 353 - dev_printk(KERN_DEBUG, &pdev->dev, "Boot video device\n"); 354 - vga_set_default_device(pdev); 374 + dev_printk(KERN_DEBUG, &pdev->dev, "Video device with shadowed ROM\n"); 355 375 } 356 376 } 357 377 }
+12 -15
arch/x86/xen/mmu.c
··· 1866 1866 * 1867 1867 * We can construct this by grafting the Xen provided pagetable into 1868 1868 * head_64.S's preconstructed pagetables. We copy the Xen L2's into 1869 - * level2_ident_pgt, level2_kernel_pgt and level2_fixmap_pgt. This 1870 - * means that only the kernel has a physical mapping to start with - 1871 - * but that's enough to get __va working. We need to fill in the rest 1872 - * of the physical mapping once some sort of allocator has been set 1873 - * up. 1874 - * NOTE: for PVH, the page tables are native. 1869 + * level2_ident_pgt, and level2_kernel_pgt. This means that only the 1870 + * kernel has a physical mapping to start with - but that's enough to 1871 + * get __va working. We need to fill in the rest of the physical 1872 + * mapping once some sort of allocator has been set up. NOTE: for 1873 + * PVH, the page tables are native. 1875 1874 */ 1876 1875 void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn) 1877 1876 { ··· 1901 1902 /* L3_i[0] -> level2_ident_pgt */ 1902 1903 convert_pfn_mfn(level3_ident_pgt); 1903 1904 /* L3_k[510] -> level2_kernel_pgt 1904 - * L3_i[511] -> level2_fixmap_pgt */ 1905 + * L3_k[511] -> level2_fixmap_pgt */ 1905 1906 convert_pfn_mfn(level3_kernel_pgt); 1907 + 1908 + /* L3_k[511][506] -> level1_fixmap_pgt */ 1909 + convert_pfn_mfn(level2_fixmap_pgt); 1906 1910 } 1907 1911 /* We get [511][511] and have Xen's version of level2_kernel_pgt */ 1908 1912 l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd); ··· 1915 1913 addr[1] = (unsigned long)l3; 1916 1914 addr[2] = (unsigned long)l2; 1917 1915 /* Graft it onto L4[272][0]. Note that we creating an aliasing problem: 1918 - * Both L4[272][0] and L4[511][511] have entries that point to the same 1916 + * Both L4[272][0] and L4[511][510] have entries that point to the same 1919 1917 * L2 (PMD) tables. Meaning that if you modify it in __va space 1920 1918 * it will be also modified in the __ka space! 
(But if you just 1921 1919 * modify the PMD table to point to other PTE's or none, then you 1922 1920 * are OK - which is what cleanup_highmap does) */ 1923 1921 copy_page(level2_ident_pgt, l2); 1924 - /* Graft it onto L4[511][511] */ 1922 + /* Graft it onto L4[511][510] */ 1925 1923 copy_page(level2_kernel_pgt, l2); 1926 1924 1927 - /* Get [511][510] and graft that in level2_fixmap_pgt */ 1928 - l3 = m2v(pgd[pgd_index(__START_KERNEL_map + PMD_SIZE)].pgd); 1929 - l2 = m2v(l3[pud_index(__START_KERNEL_map + PMD_SIZE)].pud); 1930 - copy_page(level2_fixmap_pgt, l2); 1931 - /* Note that we don't do anything with level1_fixmap_pgt which 1932 - * we don't need. */ 1933 1925 if (!xen_feature(XENFEAT_auto_translated_physmap)) { 1934 1926 /* Make pagetable pieces RO */ 1935 1927 set_page_prot(init_level4_pgt, PAGE_KERNEL_RO); ··· 1933 1937 set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO); 1934 1938 set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO); 1935 1939 set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO); 1940 + set_page_prot(level1_fixmap_pgt, PAGE_KERNEL_RO); 1936 1941 1937 1942 /* Pin down new L4 */ 1938 1943 pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
+11 -6
block/blk-merge.c
··· 10 10 #include "blk.h" 11 11 12 12 static unsigned int __blk_recalc_rq_segments(struct request_queue *q, 13 - struct bio *bio) 13 + struct bio *bio, 14 + bool no_sg_merge) 14 15 { 15 16 struct bio_vec bv, bvprv = { NULL }; 16 - int cluster, high, highprv = 1, no_sg_merge; 17 + int cluster, high, highprv = 1; 17 18 unsigned int seg_size, nr_phys_segs; 18 19 struct bio *fbio, *bbio; 19 20 struct bvec_iter iter; ··· 36 35 cluster = blk_queue_cluster(q); 37 36 seg_size = 0; 38 37 nr_phys_segs = 0; 39 - no_sg_merge = test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags); 40 38 high = 0; 41 39 for_each_bio(bio) { 42 40 bio_for_each_segment(bv, bio, iter) { ··· 88 88 89 89 void blk_recalc_rq_segments(struct request *rq) 90 90 { 91 - rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio); 91 + bool no_sg_merge = !!test_bit(QUEUE_FLAG_NO_SG_MERGE, 92 + &rq->q->queue_flags); 93 + 94 + rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio, 95 + no_sg_merge); 92 96 } 93 97 94 98 void blk_recount_segments(struct request_queue *q, struct bio *bio) 95 99 { 96 - if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags)) 100 + if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags) && 101 + bio->bi_vcnt < queue_max_segments(q)) 97 102 bio->bi_phys_segments = bio->bi_vcnt; 98 103 else { 99 104 struct bio *nxt = bio->bi_next; 100 105 101 106 bio->bi_next = NULL; 102 - bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio); 107 + bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, false); 103 108 bio->bi_next = nxt; 104 109 } 105 110
+72 -19
block/blk-mq.c
··· 1321 1321 continue; 1322 1322 set->ops->exit_request(set->driver_data, tags->rqs[i], 1323 1323 hctx_idx, i); 1324 + tags->rqs[i] = NULL; 1324 1325 } 1325 1326 } 1326 1327 ··· 1355 1354 1356 1355 INIT_LIST_HEAD(&tags->page_list); 1357 1356 1358 - tags->rqs = kmalloc_node(set->queue_depth * sizeof(struct request *), 1359 - GFP_KERNEL, set->numa_node); 1357 + tags->rqs = kzalloc_node(set->queue_depth * sizeof(struct request *), 1358 + GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY, 1359 + set->numa_node); 1360 1360 if (!tags->rqs) { 1361 1361 blk_mq_free_tags(tags); 1362 1362 return NULL; ··· 1381 1379 this_order--; 1382 1380 1383 1381 do { 1384 - page = alloc_pages_node(set->numa_node, GFP_KERNEL, 1385 - this_order); 1382 + page = alloc_pages_node(set->numa_node, 1383 + GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY, 1384 + this_order); 1386 1385 if (page) 1387 1386 break; 1388 1387 if (!this_order--) ··· 1407 1404 if (set->ops->init_request) { 1408 1405 if (set->ops->init_request(set->driver_data, 1409 1406 tags->rqs[i], hctx_idx, i, 1410 - set->numa_node)) 1407 + set->numa_node)) { 1408 + tags->rqs[i] = NULL; 1411 1409 goto fail; 1410 + } 1412 1411 } 1413 1412 1414 1413 p += rq_size; ··· 1421 1416 return tags; 1422 1417 1423 1418 fail: 1424 - pr_warn("%s: failed to allocate requests\n", __func__); 1425 1419 blk_mq_free_rq_map(set, tags, hctx_idx); 1426 1420 return NULL; 1427 1421 } ··· 1940 1936 return NOTIFY_OK; 1941 1937 } 1942 1938 1939 + static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set) 1940 + { 1941 + int i; 1942 + 1943 + for (i = 0; i < set->nr_hw_queues; i++) { 1944 + set->tags[i] = blk_mq_init_rq_map(set, i); 1945 + if (!set->tags[i]) 1946 + goto out_unwind; 1947 + } 1948 + 1949 + return 0; 1950 + 1951 + out_unwind: 1952 + while (--i >= 0) 1953 + blk_mq_free_rq_map(set, set->tags[i], i); 1954 + 1955 + set->tags = NULL; 1956 + return -ENOMEM; 1957 + } 1958 + 1959 + /* 1960 + * Allocate the request maps associated with this tag_set. 
Note that this 1961 + * may reduce the depth asked for, if memory is tight. set->queue_depth 1962 + * will be updated to reflect the allocated depth. 1963 + */ 1964 + static int blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set) 1965 + { 1966 + unsigned int depth; 1967 + int err; 1968 + 1969 + depth = set->queue_depth; 1970 + do { 1971 + err = __blk_mq_alloc_rq_maps(set); 1972 + if (!err) 1973 + break; 1974 + 1975 + set->queue_depth >>= 1; 1976 + if (set->queue_depth < set->reserved_tags + BLK_MQ_TAG_MIN) { 1977 + err = -ENOMEM; 1978 + break; 1979 + } 1980 + } while (set->queue_depth); 1981 + 1982 + if (!set->queue_depth || err) { 1983 + pr_err("blk-mq: failed to allocate request map\n"); 1984 + return -ENOMEM; 1985 + } 1986 + 1987 + if (depth != set->queue_depth) 1988 + pr_info("blk-mq: reduced tag depth (%u -> %u)\n", 1989 + depth, set->queue_depth); 1990 + 1991 + return 0; 1992 + } 1993 + 1943 1994 /* 1944 1995 * Alloc a tag set to be associated with one or more request queues. 1945 1996 * May fail with EINVAL for various error conditions. 
May adjust the ··· 2003 1944 */ 2004 1945 int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set) 2005 1946 { 2006 - int i; 2007 - 2008 1947 if (!set->nr_hw_queues) 2009 1948 return -EINVAL; 2010 1949 if (!set->queue_depth) ··· 2023 1966 sizeof(struct blk_mq_tags *), 2024 1967 GFP_KERNEL, set->numa_node); 2025 1968 if (!set->tags) 2026 - goto out; 1969 + return -ENOMEM; 2027 1970 2028 - for (i = 0; i < set->nr_hw_queues; i++) { 2029 - set->tags[i] = blk_mq_init_rq_map(set, i); 2030 - if (!set->tags[i]) 2031 - goto out_unwind; 2032 - } 1971 + if (blk_mq_alloc_rq_maps(set)) 1972 + goto enomem; 2033 1973 2034 1974 mutex_init(&set->tag_list_lock); 2035 1975 INIT_LIST_HEAD(&set->tag_list); 2036 1976 2037 1977 return 0; 2038 - 2039 - out_unwind: 2040 - while (--i >= 0) 2041 - blk_mq_free_rq_map(set, set->tags[i], i); 2042 - out: 1978 + enomem: 1979 + kfree(set->tags); 1980 + set->tags = NULL; 2043 1981 return -ENOMEM; 2044 1982 } 2045 1983 EXPORT_SYMBOL(blk_mq_alloc_tag_set); ··· 2049 1997 } 2050 1998 2051 1999 kfree(set->tags); 2000 + set->tags = NULL; 2052 2001 } 2053 2002 EXPORT_SYMBOL(blk_mq_free_tag_set); 2054 2003
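The new `blk_mq_alloc_rq_maps()` above retries request-map allocation at progressively smaller depths when memory is tight, halving `queue_depth` until either an attempt succeeds or a floor (reserved tags plus `BLK_MQ_TAG_MIN`) is reached. A simplified analogue of that loop, with a callback standing in for `__blk_mq_alloc_rq_maps()` (all names here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Retry-with-halved-depth allocation: try_alloc stands in for the
 * real per-hw-queue map allocation. On success, *depth holds the
 * depth that was actually achieved. */
static int alloc_with_fallback(unsigned int *depth, unsigned int floor,
                               bool (*try_alloc)(unsigned int))
{
    while (*depth >= floor) {
        if (try_alloc(*depth))
            return 0;          /* success at this (possibly reduced) depth */
        *depth >>= 1;          /* memory pressure: halve and retry */
    }
    return -1;                 /* -ENOMEM analogue: hit the floor */
}

/* Example predicate: pretend memory only suffices for depth <= 16. */
static bool demo_try(unsigned int depth)
{
    return depth <= 16;
}
```

This pairing explains the `__GFP_NOWARN | __GFP_NORETRY` flags added earlier in the file: the first attempts are expected to fail cheaply and quietly so the fallback loop can react.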
+4 -2
block/blk-sysfs.c
··· 554 554 * Initialization must be complete by now. Finish the initial 555 555 * bypass from queue allocation. 556 556 */ 557 - queue_flag_set_unlocked(QUEUE_FLAG_INIT_DONE, q); 558 - blk_queue_bypass_end(q); 557 + if (!blk_queue_init_done(q)) { 558 + queue_flag_set_unlocked(QUEUE_FLAG_INIT_DONE, q); 559 + blk_queue_bypass_end(q); 560 + } 559 561 560 562 ret = blk_trace_init_sysfs(dev); 561 563 if (ret)
+14 -10
block/genhd.c
··· 28 28 /* for extended dynamic devt allocation, currently only one major is used */ 29 29 #define NR_EXT_DEVT (1 << MINORBITS) 30 30 31 - /* For extended devt allocation. ext_devt_mutex prevents look up 31 + /* For extended devt allocation. ext_devt_lock prevents look up 32 32 * results from going away underneath its user. 33 33 */ 34 - static DEFINE_MUTEX(ext_devt_mutex); 34 + static DEFINE_SPINLOCK(ext_devt_lock); 35 35 static DEFINE_IDR(ext_devt_idr); 36 36 37 37 static struct device_type disk_type; ··· 420 420 } 421 421 422 422 /* allocate ext devt */ 423 - mutex_lock(&ext_devt_mutex); 424 - idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_KERNEL); 425 - mutex_unlock(&ext_devt_mutex); 423 + idr_preload(GFP_KERNEL); 424 + 425 + spin_lock(&ext_devt_lock); 426 + idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT); 427 + spin_unlock(&ext_devt_lock); 428 + 429 + idr_preload_end(); 426 430 if (idx < 0) 427 431 return idx == -ENOSPC ? -EBUSY : idx; 428 432 ··· 451 447 return; 452 448 453 449 if (MAJOR(devt) == BLOCK_EXT_MAJOR) { 454 - mutex_lock(&ext_devt_mutex); 450 + spin_lock(&ext_devt_lock); 455 451 idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt))); 456 - mutex_unlock(&ext_devt_mutex); 452 + spin_unlock(&ext_devt_lock); 457 453 } 458 454 } 459 455 ··· 669 665 sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk))); 670 666 pm_runtime_set_memalloc_noio(disk_to_dev(disk), false); 671 667 device_del(disk_to_dev(disk)); 672 - blk_free_devt(disk_to_dev(disk)->devt); 673 668 } 674 669 EXPORT_SYMBOL(del_gendisk); 675 670 ··· 693 690 } else { 694 691 struct hd_struct *part; 695 692 696 - mutex_lock(&ext_devt_mutex); 693 + spin_lock(&ext_devt_lock); 697 694 part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt))); 698 695 if (part && get_disk(part_to_disk(part))) { 699 696 *partno = part->partno; 700 697 disk = part_to_disk(part); 701 698 } 702 - mutex_unlock(&ext_devt_mutex); 699 + spin_unlock(&ext_devt_lock); 703 700 } 704 701 705 
702 return disk; ··· 1101 1098 { 1102 1099 struct gendisk *disk = dev_to_disk(dev); 1103 1100 1101 + blk_free_devt(dev->devt); 1104 1102 disk_release_events(disk); 1105 1103 kfree(disk->random); 1106 1104 disk_replace_part_tbl(disk, NULL);
+1 -1
block/partition-generic.c
··· 211 211 static void part_release(struct device *dev) 212 212 { 213 213 struct hd_struct *p = dev_to_part(dev); 214 + blk_free_devt(dev->devt); 214 215 free_part_stats(p); 215 216 free_part_info(p); 216 217 kfree(p); ··· 254 253 rcu_assign_pointer(ptbl->last_lookup, NULL); 255 254 kobject_put(part->holder_dir); 256 255 device_del(part_to_dev(part)); 257 - blk_free_devt(part_devt(part)); 258 256 259 257 hd_struct_put(part); 260 258 }
-3
crypto/drbg.c
··· 1922 1922 /* overflow max addtllen with personalization string */ 1923 1923 ret = drbg_instantiate(drbg, &addtl, coreref, pr); 1924 1924 BUG_ON(0 == ret); 1925 - /* test uninstantated DRBG */ 1926 - len = drbg_generate(drbg, buf, (max_request_bytes + 1), NULL); 1927 - BUG_ON(0 < len); 1928 1925 /* all tests passed */ 1929 1926 rc = 0; 1930 1927
+1 -1
drivers/acpi/acpi_cmos_rtc.c
··· 33 33 void *handler_context, void *region_context) 34 34 { 35 35 int i; 36 - u8 *value = (u8 *)&value64; 36 + u8 *value = (u8 *)value64; 37 37 38 38 if (address > 0xff || !value64) 39 39 return AE_BAD_PARAMETER;
+5 -5
drivers/acpi/acpi_lpss.c
··· 610 610 return acpi_dev_suspend_late(dev); 611 611 } 612 612 613 - static int acpi_lpss_restore_early(struct device *dev) 613 + static int acpi_lpss_resume_early(struct device *dev) 614 614 { 615 615 int ret = acpi_dev_resume_early(dev); 616 616 ··· 650 650 static struct dev_pm_domain acpi_lpss_pm_domain = { 651 651 .ops = { 652 652 #ifdef CONFIG_PM_SLEEP 653 - .suspend_late = acpi_lpss_suspend_late, 654 - .restore_early = acpi_lpss_restore_early, 655 653 .prepare = acpi_subsys_prepare, 656 654 .complete = acpi_subsys_complete, 657 655 .suspend = acpi_subsys_suspend, 658 - .resume_early = acpi_subsys_resume_early, 656 + .suspend_late = acpi_lpss_suspend_late, 657 + .resume_early = acpi_lpss_resume_early, 659 658 .freeze = acpi_subsys_freeze, 660 659 .poweroff = acpi_subsys_suspend, 661 - .poweroff_late = acpi_subsys_suspend_late, 660 + .poweroff_late = acpi_lpss_suspend_late, 661 + .restore_early = acpi_lpss_resume_early, 662 662 #endif 663 663 #ifdef CONFIG_PM_RUNTIME 664 664 .runtime_suspend = acpi_lpss_runtime_suspend,
-14
drivers/acpi/battery.c
··· 534 534 " invalid.\n"); 535 535 } 536 536 537 - /* 538 - * When fully charged, some batteries wrongly report 539 - * capacity_now = design_capacity instead of = full_charge_capacity 540 - */ 541 - if (battery->capacity_now > battery->full_charge_capacity 542 - && battery->full_charge_capacity != ACPI_BATTERY_VALUE_UNKNOWN) { 543 - if (battery->capacity_now != battery->design_capacity) 544 - printk_once(KERN_WARNING FW_BUG 545 - "battery: reported current charge level (%d) " 546 - "is higher than reported maximum charge level (%d).\n", 547 - battery->capacity_now, battery->full_charge_capacity); 548 - battery->capacity_now = battery->full_charge_capacity; 549 - } 550 - 551 537 if (test_bit(ACPI_BATTERY_QUIRK_PERCENTAGE_CAPACITY, &battery->flags) 552 538 && battery->capacity_now >= 0 && battery->capacity_now <= 100) 553 539 battery->capacity_now = (battery->capacity_now *
-10
drivers/acpi/bus.c
··· 177 177 } 178 178 EXPORT_SYMBOL_GPL(acpi_bus_detach_private_data); 179 179 180 - void acpi_bus_no_hotplug(acpi_handle handle) 181 - { 182 - struct acpi_device *adev = NULL; 183 - 184 - acpi_bus_get_device(handle, &adev); 185 - if (adev) 186 - adev->flags.no_hotplug = true; 187 - } 188 - EXPORT_SYMBOL_GPL(acpi_bus_no_hotplug); 189 - 190 180 static void acpi_print_osc_error(acpi_handle handle, 191 181 struct acpi_osc_context *context, char *error) 192 182 {
+8 -1
drivers/base/regmap/regmap-debugfs.c
··· 512 512 map, &regmap_reg_ranges_fops); 513 513 514 514 if (map->max_register || regmap_readable(map, 0)) { 515 - debugfs_create_file("registers", 0400, map->debugfs, 515 + umode_t registers_mode; 516 + 517 + if (IS_ENABLED(REGMAP_ALLOW_WRITE_DEBUGFS)) 518 + registers_mode = 0600; 519 + else 520 + registers_mode = 0400; 521 + 522 + debugfs_create_file("registers", registers_mode, map->debugfs, 516 523 map, &regmap_map_fops); 517 524 debugfs_create_file("access", 0400, map->debugfs, 518 525 map, &regmap_access_fops);
-1
drivers/block/mtip32xx/mtip32xx.c
··· 3918 3918 if (rv) { 3919 3919 dev_err(&dd->pdev->dev, 3920 3920 "Unable to allocate request queue\n"); 3921 - rv = -ENOMEM; 3922 3921 goto block_queue_alloc_init_error; 3923 3922 } 3924 3923
+21 -8
drivers/block/null_blk.c
··· 462 462 struct gendisk *disk; 463 463 struct nullb *nullb; 464 464 sector_t size; 465 + int rv; 465 466 466 467 nullb = kzalloc_node(sizeof(*nullb), GFP_KERNEL, home_node); 467 - if (!nullb) 468 + if (!nullb) { 469 + rv = -ENOMEM; 468 470 goto out; 471 + } 469 472 470 473 spin_lock_init(&nullb->lock); 471 474 472 475 if (queue_mode == NULL_Q_MQ && use_per_node_hctx) 473 476 submit_queues = nr_online_nodes; 474 477 475 - if (setup_queues(nullb)) 478 + rv = setup_queues(nullb); 479 + if (rv) 476 480 goto out_free_nullb; 477 481 478 482 if (queue_mode == NULL_Q_MQ) { ··· 488 484 nullb->tag_set.flags = BLK_MQ_F_SHOULD_MERGE; 489 485 nullb->tag_set.driver_data = nullb; 490 486 491 - if (blk_mq_alloc_tag_set(&nullb->tag_set)) 487 + rv = blk_mq_alloc_tag_set(&nullb->tag_set); 488 + if (rv) 492 489 goto out_cleanup_queues; 493 490 494 491 nullb->q = blk_mq_init_queue(&nullb->tag_set); 495 - if (!nullb->q) 492 + if (!nullb->q) { 493 + rv = -ENOMEM; 496 494 goto out_cleanup_tags; 495 + } 497 496 } else if (queue_mode == NULL_Q_BIO) { 498 497 nullb->q = blk_alloc_queue_node(GFP_KERNEL, home_node); 499 - if (!nullb->q) 498 + if (!nullb->q) { 499 + rv = -ENOMEM; 500 500 goto out_cleanup_queues; 501 + } 501 502 blk_queue_make_request(nullb->q, null_queue_bio); 502 503 init_driver_queues(nullb); 503 504 } else { 504 505 nullb->q = blk_init_queue_node(null_request_fn, &nullb->lock, home_node); 505 - if (!nullb->q) 506 + if (!nullb->q) { 507 + rv = -ENOMEM; 506 508 goto out_cleanup_queues; 509 + } 507 510 blk_queue_prep_rq(nullb->q, null_rq_prep_fn); 508 511 blk_queue_softirq_done(nullb->q, null_softirq_done_fn); 509 512 init_driver_queues(nullb); ··· 520 509 queue_flag_set_unlocked(QUEUE_FLAG_NONROT, nullb->q); 521 510 522 511 disk = nullb->disk = alloc_disk_node(1, home_node); 523 - if (!disk) 512 + if (!disk) { 513 + rv = -ENOMEM; 524 514 goto out_cleanup_blk_queue; 515 + } 525 516 526 517 mutex_lock(&lock); 527 518 list_add_tail(&nullb->list, &nullb_list); ··· 557 544 
out_free_nullb: 558 545 kfree(nullb); 559 546 out: 560 - return -ENOMEM; 547 + return rv; 561 548 } 562 549 563 550 static int __init null_init(void)
+4 -2
drivers/block/rbd.c
··· 5087 5087 set_capacity(rbd_dev->disk, rbd_dev->mapping.size / SECTOR_SIZE); 5088 5088 set_disk_ro(rbd_dev->disk, rbd_dev->mapping.read_only); 5089 5089 5090 - rbd_dev->rq_wq = alloc_workqueue(rbd_dev->disk->disk_name, 0, 0); 5091 - if (!rbd_dev->rq_wq) 5090 + rbd_dev->rq_wq = alloc_workqueue("%s", 0, 0, rbd_dev->disk->disk_name); 5091 + if (!rbd_dev->rq_wq) { 5092 + ret = -ENOMEM; 5092 5093 goto err_out_mapping; 5094 + } 5093 5095 5094 5096 ret = rbd_bus_add_dev(rbd_dev); 5095 5097 if (ret)
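The rbd hunk above stops passing the disk name directly as the format argument to alloc_workqueue() and routes it through a fixed "%s" instead. The same rule applies to any printf-style API; a minimal userspace sketch of the safe pattern (plain snprintf(), not the kernel API):

```c
#include <stdio.h>
#include <string.h>

/* A device name that happens to contain a conversion specifier. Passed
 * directly as the format string, a printf-family function would try to
 * interpret the embedded "%s" and read a garbage argument. */
static const char *disk_name = "queue-%s-0";

static void make_label(char *buf, size_t len)
{
	/* Correct: the untrusted string travels through a fixed "%s". */
	snprintf(buf, len, "%s", disk_name);
}
```

The wrong form, `snprintf(buf, len, disk_name)`, compiles but is undefined whenever the name contains `%`.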
+7
drivers/char/hw_random/virtio-rng.c
··· 36 36 int index; 37 37 bool busy; 38 38 bool hwrng_register_done; 39 + bool hwrng_removed; 39 40 }; 40 41 41 42 ··· 68 67 { 69 68 int ret; 70 69 struct virtrng_info *vi = (struct virtrng_info *)rng->priv; 70 + 71 + if (vi->hwrng_removed) 72 + return -ENODEV; 71 73 72 74 if (!vi->busy) { 73 75 vi->busy = true; ··· 141 137 { 142 138 struct virtrng_info *vi = vdev->priv; 143 139 140 + vi->hwrng_removed = true; 141 + vi->data_avail = 0; 142 + complete(&vi->have_data); 144 143 vdev->config->reset(vdev); 145 144 vi->busy = false; 146 145 if (vi->hwrng_register_done)
+1 -1
drivers/clk/at91/clk-slow.c
··· 447 447 int i; 448 448 449 449 num_parents = of_count_phandle_with_args(np, "clocks", "#clock-cells"); 450 - if (num_parents <= 0 || num_parents > 1) 450 + if (num_parents != 2) 451 451 return; 452 452 453 453 for (i = 0; i < num_parents; ++i) {
+3 -3
drivers/clk/clk-efm32gg.c
··· 22 22 .clk_num = ARRAY_SIZE(clk), 23 23 }; 24 24 25 - static int __init efm32gg_cmu_init(struct device_node *np) 25 + static void __init efm32gg_cmu_init(struct device_node *np) 26 26 { 27 27 int i; 28 28 void __iomem *base; ··· 33 33 base = of_iomap(np, 0); 34 34 if (!base) { 35 35 pr_warn("Failed to map address range for efm32gg,cmu node\n"); 36 - return -EADDRNOTAVAIL; 36 + return; 37 37 } 38 38 39 39 clk[clk_HFXO] = clk_register_fixed_rate(NULL, "HFXO", NULL, ··· 76 76 clk[clk_HFPERCLKDAC0] = clk_register_gate(NULL, "HFPERCLK.DAC0", 77 77 "HFXO", 0, base + CMU_HFPERCLKEN0, 17, 0, NULL); 78 78 79 - return of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data); 79 + of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data); 80 80 } 81 81 CLK_OF_DECLARE(efm32ggcmu, "efm32gg,cmu", efm32gg_cmu_init);
+6 -1
drivers/clk/clk.c
··· 1467 1467 static void clk_change_rate(struct clk *clk) 1468 1468 { 1469 1469 struct clk *child; 1470 + struct hlist_node *tmp; 1470 1471 unsigned long old_rate; 1471 1472 unsigned long best_parent_rate = 0; 1472 1473 bool skip_set_rate = false; ··· 1503 1502 if (clk->notifier_count && old_rate != clk->rate) 1504 1503 __clk_notify(clk, POST_RATE_CHANGE, old_rate, clk->rate); 1505 1504 1506 - hlist_for_each_entry(child, &clk->children, child_node) { 1505 + /* 1506 + * Use safe iteration, as change_rate can actually swap parents 1507 + * for certain clock types. 1508 + */ 1509 + hlist_for_each_entry_safe(child, tmp, &clk->children, child_node) { 1507 1510 /* Skip children who will be reparented to another clock */ 1508 1511 if (child->new_parent && child->new_parent != clk) 1509 1512 continue;
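The clk_change_rate() hunk switches to hlist_for_each_entry_safe() because a child clock can be reparented, and therefore unlinked from this list, while the walk is in progress. The safe variant simply caches the next pointer before visiting the current node; a small userspace sketch of the same idea on a plain singly linked list (hypothetical types, not the kernel hlist API):

```c
#include <stdlib.h>

struct node {
	int val;
	struct node *next;
};

/* Visit every node, removing those that match. Because `next` is saved
 * before the current node is freed or relinked, removal mid-walk cannot
 * derail the iteration - the guarantee hlist_for_each_entry_safe gives. */
static int remove_matching(struct node **head, int val)
{
	struct node **pp = head;
	int removed = 0;

	while (*pp) {
		struct node *cur = *pp;
		struct node *next = cur->next;	/* cached before cur may die */

		if (cur->val == val) {
			*pp = next;
			free(cur);
			removed++;
		} else {
			pp = &cur->next;
		}
	}
	return removed;
}

static struct node *push(struct node *head, int val)
{
	struct node *n = malloc(sizeof(*n));

	n->val = val;
	n->next = head;
	return n;
}
```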
+1 -1
drivers/clk/qcom/gcc-ipq806x.c
··· 1095 1095 }; 1096 1096 1097 1097 static const struct freq_tbl clk_tbl_sdc[] = { 1098 - { 144000, P_PXO, 5, 18,625 }, 1098 + { 200000, P_PXO, 2, 2, 125 }, 1099 1099 { 400000, P_PLL8, 4, 1, 240 }, 1100 1100 { 16000000, P_PLL8, 4, 1, 6 }, 1101 1101 { 17070000, P_PLL8, 1, 2, 45 },
+2 -2
drivers/clk/rockchip/clk-rk3288.c
··· 545 545 GATE(PCLK_PWM, "pclk_pwm", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 0, GFLAGS), 546 546 GATE(PCLK_TIMER, "pclk_timer", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 1, GFLAGS), 547 547 GATE(PCLK_I2C0, "pclk_i2c0", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 2, GFLAGS), 548 - GATE(PCLK_I2C1, "pclk_i2c1", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 3, GFLAGS), 548 + GATE(PCLK_I2C2, "pclk_i2c2", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 3, GFLAGS), 549 549 GATE(0, "pclk_ddrupctl0", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 14, GFLAGS), 550 550 GATE(0, "pclk_publ0", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 15, GFLAGS), 551 551 GATE(0, "pclk_ddrupctl1", "pclk_cpu", 0, RK3288_CLKGATE_CON(11), 0, GFLAGS), ··· 603 603 GATE(PCLK_I2C4, "pclk_i2c4", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 15, GFLAGS), 604 604 GATE(PCLK_UART3, "pclk_uart3", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 11, GFLAGS), 605 605 GATE(PCLK_UART4, "pclk_uart4", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 12, GFLAGS), 606 - GATE(PCLK_I2C2, "pclk_i2c2", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 13, GFLAGS), 606 + GATE(PCLK_I2C1, "pclk_i2c1", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 13, GFLAGS), 607 607 GATE(PCLK_I2C3, "pclk_i2c3", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 14, GFLAGS), 608 608 GATE(PCLK_SARADC, "pclk_saradc", "pclk_peri", 0, RK3288_CLKGATE_CON(7), 1, GFLAGS), 609 609 GATE(PCLK_TSADC, "pclk_tsadc", "pclk_peri", 0, RK3288_CLKGATE_CON(7), 2, GFLAGS),
+5 -1
drivers/clk/ti/clk-dra7-atl.c
··· 139 139 static int atl_clk_set_rate(struct clk_hw *hw, unsigned long rate, 140 140 unsigned long parent_rate) 141 141 { 142 - struct dra7_atl_desc *cdesc = to_atl_desc(hw); 142 + struct dra7_atl_desc *cdesc; 143 143 u32 divider; 144 144 145 + if (!hw || !rate) 146 + return -EINVAL; 147 + 148 + cdesc = to_atl_desc(hw); 145 149 divider = ((parent_rate + rate / 2) / rate) - 1; 146 150 if (divider > DRA7_ATL_DIVIDER_MASK) 147 151 divider = DRA7_ATL_DIVIDER_MASK;
+6 -1
drivers/clk/ti/divider.c
··· 211 211 static int ti_clk_divider_set_rate(struct clk_hw *hw, unsigned long rate, 212 212 unsigned long parent_rate) 213 213 { 214 - struct clk_divider *divider = to_clk_divider(hw); 214 + struct clk_divider *divider; 215 215 unsigned int div, value; 216 216 unsigned long flags = 0; 217 217 u32 val; 218 + 219 + if (!hw || !rate) 220 + return -EINVAL; 221 + 222 + divider = to_clk_divider(hw); 218 223 219 224 div = DIV_ROUND_UP(parent_rate, rate); 220 225 value = _get_val(divider, div);
+1 -1
drivers/cpufreq/cpufreq_opp.c
··· 60 60 goto out; 61 61 } 62 62 63 - freq_table = kcalloc(sizeof(*freq_table), (max_opps + 1), GFP_ATOMIC); 63 + freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_ATOMIC); 64 64 if (!freq_table) { 65 65 ret = -ENOMEM; 66 66 goto out;
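The cpufreq_opp fix swaps the first two kcalloc() arguments into the documented (count, size) order. Userspace calloc() has the same signature, and keeping the element count first is what lets the allocator overflow-check count * size; a sketch with a hypothetical table type:

```c
#include <stdlib.h>

/* Stand-in for the cpufreq frequency-table entry. */
struct freq_entry {
	unsigned int frequency;
	unsigned int flags;
};

/* Allocate a zeroed table with one extra sentinel slot, calloc-style:
 * element count first, element size second - the order kcalloc expects. */
static struct freq_entry *alloc_freq_table(size_t max_opps)
{
	return calloc(max_opps + 1, sizeof(struct freq_entry));
}
```

Swapping the arguments happens to request the same number of bytes, which is why the bug compiles and runs, but it defeats the multiplication-overflow check built into calloc()/kcalloc().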
+2 -1
drivers/dma/dma-jz4740.c
··· 362 362 vchan_cyclic_callback(&chan->desc->vdesc); 363 363 } else { 364 364 if (chan->next_sg == chan->desc->num_sgs) { 365 - chan->desc = NULL; 365 + list_del(&chan->desc->vdesc.node); 366 366 vchan_cookie_complete(&chan->desc->vdesc); 367 + chan->desc = NULL; 367 368 } 368 369 } 369 370 }
+9 -1
drivers/firmware/efi/libstub/fdt.c
··· 22 22 unsigned long map_size, unsigned long desc_size, 23 23 u32 desc_ver) 24 24 { 25 - int node, prev; 25 + int node, prev, num_rsv; 26 26 int status; 27 27 u32 fdt_val32; 28 28 u64 fdt_val64; ··· 72 72 73 73 prev = node; 74 74 } 75 + 76 + /* 77 + * Delete all memory reserve map entries. When booting via UEFI, 78 + * kernel will use the UEFI memory map to find reserved regions. 79 + */ 80 + num_rsv = fdt_num_mem_rsv(fdt); 81 + while (num_rsv-- > 0) 82 + fdt_del_mem_rsv(fdt, num_rsv); 75 83 76 84 node = fdt_subnode_offset(fdt, 0, "chosen"); 77 85 if (node < 0) {
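The fdt hunk deletes every memory reserve map entry by counting down from fdt_num_mem_rsv(). The descending order matters: each deletion shifts the remaining entries toward index zero, so ascending indices would skip every other entry. A generic sketch of the same loop on a plain array (hypothetical helpers, not libfdt):

```c
#include <string.h>

/* Delete entry idx by shifting the tail down, as fdt_del_mem_rsv() does
 * for reserve-map entries. */
static void del_entry(int *arr, int *count, int idx)
{
	memmove(&arr[idx], &arr[idx + 1],
		(*count - idx - 1) * sizeof(int));
	(*count)--;
}

/* Counting down means a deletion never invalidates the indices still to
 * be visited; counting up from zero would skip entries as they shift. */
static void del_all(int *arr, int *count)
{
	int num = *count;

	while (num-- > 0)
		del_entry(arr, count, num);
}
```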
+2 -1
drivers/gpu/drm/ast/ast_main.c
··· 67 67 { 68 68 struct ast_private *ast = dev->dev_private; 69 69 uint32_t data, jreg; 70 + ast_open_key(ast); 70 71 71 72 if (dev->pdev->device == PCI_CHIP_AST1180) { 72 73 ast->chip = AST1100; ··· 105 104 } 106 105 ast->vga2_clone = false; 107 106 } else { 108 - ast->chip = 2000; 107 + ast->chip = AST2000; 109 108 DRM_INFO("AST 2000 detected\n"); 110 109 } 111 110 }
+1
drivers/gpu/drm/bochs/bochs_kms.c
··· 250 250 DRM_MODE_CONNECTOR_VIRTUAL); 251 251 drm_connector_helper_add(connector, 252 252 &bochs_connector_connector_helper_funcs); 253 + drm_connector_register(connector); 253 254 } 254 255 255 256
+1
drivers/gpu/drm/cirrus/cirrus_mode.c
··· 555 555 556 556 drm_connector_helper_add(connector, &cirrus_vga_connector_helper_funcs); 557 557 558 + drm_connector_register(connector); 558 559 return connector; 559 560 } 560 561
+7 -2
drivers/gpu/drm/i915/i915_dma.c
··· 1336 1336 1337 1337 intel_power_domains_init_hw(dev_priv); 1338 1338 1339 + /* 1340 + * We enable some interrupt sources in our postinstall hooks, so mark 1341 + * interrupts as enabled _before_ actually enabling them to avoid 1342 + * special cases in our ordering checks. 1343 + */ 1344 + dev_priv->pm._irqs_disabled = false; 1345 + 1339 1346 ret = drm_irq_install(dev, dev->pdev->irq); 1340 1347 if (ret) 1341 1348 goto cleanup_gem_stolen; 1342 - 1343 - dev_priv->pm._irqs_disabled = false; 1344 1349 1345 1350 /* Important: The output setup functions called by modeset_init need 1346 1351 * working irqs for e.g. gmbus and dp aux transfers. */
+5 -5
drivers/gpu/drm/i915/i915_drv.h
··· 184 184 if ((1 << (domain)) & (mask)) 185 185 186 186 struct drm_i915_private; 187 + struct i915_mm_struct; 187 188 struct i915_mmu_object; 188 189 189 190 enum intel_dpll_id { ··· 1507 1506 struct i915_gtt gtt; /* VM representing the global address space */ 1508 1507 1509 1508 struct i915_gem_mm mm; 1510 - #if defined(CONFIG_MMU_NOTIFIER) 1511 - DECLARE_HASHTABLE(mmu_notifiers, 7); 1512 - #endif 1509 + DECLARE_HASHTABLE(mm_structs, 7); 1510 + struct mutex mm_lock; 1513 1511 1514 1512 /* Kernel Modesetting */ 1515 1513 ··· 1814 1814 unsigned workers :4; 1815 1815 #define I915_GEM_USERPTR_MAX_WORKERS 15 1816 1816 1817 - struct mm_struct *mm; 1818 - struct i915_mmu_object *mn; 1817 + struct i915_mm_struct *mm; 1818 + struct i915_mmu_object *mmu_object; 1819 1819 struct work_struct *work; 1820 1820 } userptr; 1821 1821 };
+7 -4
drivers/gpu/drm/i915/i915_gem.c
··· 1590 1590 out: 1591 1591 switch (ret) { 1592 1592 case -EIO: 1593 - /* If this -EIO is due to a gpu hang, give the reset code a 1594 - * chance to clean up the mess. Otherwise return the proper 1595 - * SIGBUS. */ 1596 - if (i915_terminally_wedged(&dev_priv->gpu_error)) { 1593 + /* 1594 + * We eat errors when the gpu is terminally wedged to avoid 1595 + * userspace unduly crashing (gl has no provisions for mmaps to 1596 + * fail). But any other -EIO isn't ours (e.g. swap in failure) 1597 + * and so needs to be reported. 1598 + */ 1599 + if (!i915_terminally_wedged(&dev_priv->gpu_error)) { 1597 1600 ret = VM_FAULT_SIGBUS; 1598 1601 break; 1599 1602 }
+230 -179
drivers/gpu/drm/i915/i915_gem_userptr.c
··· 32 32 #include <linux/mempolicy.h> 33 33 #include <linux/swap.h> 34 34 35 + struct i915_mm_struct { 36 + struct mm_struct *mm; 37 + struct drm_device *dev; 38 + struct i915_mmu_notifier *mn; 39 + struct hlist_node node; 40 + struct kref kref; 41 + struct work_struct work; 42 + }; 43 + 35 44 #if defined(CONFIG_MMU_NOTIFIER) 36 45 #include <linux/interval_tree.h> 37 46 ··· 50 41 struct mmu_notifier mn; 51 42 struct rb_root objects; 52 43 struct list_head linear; 53 - struct drm_device *dev; 54 - struct mm_struct *mm; 55 - struct work_struct work; 56 - unsigned long count; 57 44 unsigned long serial; 58 45 bool has_linear; 59 46 }; 60 47 61 48 struct i915_mmu_object { 62 - struct i915_mmu_notifier *mmu; 49 + struct i915_mmu_notifier *mn; 63 50 struct interval_tree_node it; 64 51 struct list_head link; 65 52 struct drm_i915_gem_object *obj; ··· 101 96 unsigned long start, 102 97 unsigned long end) 103 98 { 104 - struct i915_mmu_object *mmu; 99 + struct i915_mmu_object *mo; 105 100 unsigned long serial; 106 101 107 102 restart: 108 103 serial = mn->serial; 109 - list_for_each_entry(mmu, &mn->linear, link) { 104 + list_for_each_entry(mo, &mn->linear, link) { 110 105 struct drm_i915_gem_object *obj; 111 106 112 - if (mmu->it.last < start || mmu->it.start > end) 107 + if (mo->it.last < start || mo->it.start > end) 113 108 continue; 114 109 115 - obj = mmu->obj; 110 + obj = mo->obj; 116 111 drm_gem_object_reference(&obj->base); 117 112 spin_unlock(&mn->lock); 118 113 ··· 165 160 }; 166 161 167 162 static struct i915_mmu_notifier * 168 - __i915_mmu_notifier_lookup(struct drm_device *dev, struct mm_struct *mm) 163 + i915_mmu_notifier_create(struct mm_struct *mm) 169 164 { 170 - struct drm_i915_private *dev_priv = to_i915(dev); 171 - struct i915_mmu_notifier *mmu; 172 - 173 - /* Protected by dev->struct_mutex */ 174 - hash_for_each_possible(dev_priv->mmu_notifiers, mmu, node, (unsigned long)mm) 175 - if (mmu->mm == mm) 176 - return mmu; 177 - 178 - return NULL; 179 - } 180 
- 181 - static struct i915_mmu_notifier * 182 - i915_mmu_notifier_get(struct drm_device *dev, struct mm_struct *mm) 183 - { 184 - struct drm_i915_private *dev_priv = to_i915(dev); 185 - struct i915_mmu_notifier *mmu; 165 + struct i915_mmu_notifier *mn; 186 166 int ret; 187 167 188 - lockdep_assert_held(&dev->struct_mutex); 189 - 190 - mmu = __i915_mmu_notifier_lookup(dev, mm); 191 - if (mmu) 192 - return mmu; 193 - 194 - mmu = kmalloc(sizeof(*mmu), GFP_KERNEL); 195 - if (mmu == NULL) 168 + mn = kmalloc(sizeof(*mn), GFP_KERNEL); 169 + if (mn == NULL) 196 170 return ERR_PTR(-ENOMEM); 197 171 198 - spin_lock_init(&mmu->lock); 199 - mmu->dev = dev; 200 - mmu->mn.ops = &i915_gem_userptr_notifier; 201 - mmu->mm = mm; 202 - mmu->objects = RB_ROOT; 203 - mmu->count = 0; 204 - mmu->serial = 1; 205 - INIT_LIST_HEAD(&mmu->linear); 206 - mmu->has_linear = false; 172 + spin_lock_init(&mn->lock); 173 + mn->mn.ops = &i915_gem_userptr_notifier; 174 + mn->objects = RB_ROOT; 175 + mn->serial = 1; 176 + INIT_LIST_HEAD(&mn->linear); 177 + mn->has_linear = false; 207 178 208 - /* Protected by mmap_sem (write-lock) */ 209 - ret = __mmu_notifier_register(&mmu->mn, mm); 179 + /* Protected by mmap_sem (write-lock) */ 180 + ret = __mmu_notifier_register(&mn->mn, mm); 210 181 if (ret) { 211 - kfree(mmu); 182 + kfree(mn); 212 183 return ERR_PTR(ret); 213 184 } 214 185 215 - /* Protected by dev->struct_mutex */ 216 - hash_add(dev_priv->mmu_notifiers, &mmu->node, (unsigned long)mm); 217 - return mmu; 186 + return mn; 218 187 } 219 188 220 - static void 221 - __i915_mmu_notifier_destroy_worker(struct work_struct *work) 189 + static void __i915_mmu_notifier_update_serial(struct i915_mmu_notifier *mn) 222 190 { 223 - struct i915_mmu_notifier *mmu = container_of(work, typeof(*mmu), work); 224 - mmu_notifier_unregister(&mmu->mn, mmu->mm); 225 - kfree(mmu); 226 - } 227 - 228 - static void 229 - __i915_mmu_notifier_destroy(struct i915_mmu_notifier *mmu) 230 - { 231 - 
lockdep_assert_held(&mmu->dev->struct_mutex); 232 - 233 - /* Protected by dev->struct_mutex */ 234 - hash_del(&mmu->node); 235 - 236 - /* Our lock ordering is: mmap_sem, mmu_notifier_scru, struct_mutex. 237 - * We enter the function holding struct_mutex, therefore we need 238 - * to drop our mutex prior to calling mmu_notifier_unregister in 239 - * order to prevent lock inversion (and system-wide deadlock) 240 - * between the mmap_sem and struct-mutex. Hence we defer the 241 - * unregistration to a workqueue where we hold no locks. 242 - */ 243 - INIT_WORK(&mmu->work, __i915_mmu_notifier_destroy_worker); 244 - schedule_work(&mmu->work); 245 - } 246 - 247 - static void __i915_mmu_notifier_update_serial(struct i915_mmu_notifier *mmu) 248 - { 249 - if (++mmu->serial == 0) 250 - mmu->serial = 1; 251 - } 252 - 253 - static bool i915_mmu_notifier_has_linear(struct i915_mmu_notifier *mmu) 254 - { 255 - struct i915_mmu_object *mn; 256 - 257 - list_for_each_entry(mn, &mmu->linear, link) 258 - if (mn->is_linear) 259 - return true; 260 - 261 - return false; 262 - } 263 - 264 - static void 265 - i915_mmu_notifier_del(struct i915_mmu_notifier *mmu, 266 - struct i915_mmu_object *mn) 267 - { 268 - lockdep_assert_held(&mmu->dev->struct_mutex); 269 - 270 - spin_lock(&mmu->lock); 271 - list_del(&mn->link); 272 - if (mn->is_linear) 273 - mmu->has_linear = i915_mmu_notifier_has_linear(mmu); 274 - else 275 - interval_tree_remove(&mn->it, &mmu->objects); 276 - __i915_mmu_notifier_update_serial(mmu); 277 - spin_unlock(&mmu->lock); 278 - 279 - /* Protected against _add() by dev->struct_mutex */ 280 - if (--mmu->count == 0) 281 - __i915_mmu_notifier_destroy(mmu); 191 + if (++mn->serial == 0) 192 + mn->serial = 1; 282 193 } 283 194 284 195 static int 285 - i915_mmu_notifier_add(struct i915_mmu_notifier *mmu, 286 - struct i915_mmu_object *mn) 196 + i915_mmu_notifier_add(struct drm_device *dev, 197 + struct i915_mmu_notifier *mn, 198 + struct i915_mmu_object *mo) 287 199 { 288 200 struct 
interval_tree_node *it; 289 201 int ret; 290 202 291 - ret = i915_mutex_lock_interruptible(mmu->dev); 203 + ret = i915_mutex_lock_interruptible(dev); 292 204 if (ret) 293 205 return ret; 294 206 ··· 213 291 * remove the objects from the interval tree) before we do 214 292 * the check for overlapping objects. 215 293 */ 216 - i915_gem_retire_requests(mmu->dev); 294 + i915_gem_retire_requests(dev); 217 295 218 - spin_lock(&mmu->lock); 219 - it = interval_tree_iter_first(&mmu->objects, 220 - mn->it.start, mn->it.last); 296 + spin_lock(&mn->lock); 297 + it = interval_tree_iter_first(&mn->objects, 298 + mo->it.start, mo->it.last); 221 299 if (it) { 222 300 struct drm_i915_gem_object *obj; 223 301 ··· 234 312 235 313 obj = container_of(it, struct i915_mmu_object, it)->obj; 236 314 if (!obj->userptr.workers) 237 - mmu->has_linear = mn->is_linear = true; 315 + mn->has_linear = mo->is_linear = true; 238 316 else 239 317 ret = -EAGAIN; 240 318 } else 241 - interval_tree_insert(&mn->it, &mmu->objects); 319 + interval_tree_insert(&mo->it, &mn->objects); 242 320 243 321 if (ret == 0) { 244 - list_add(&mn->link, &mmu->linear); 245 - __i915_mmu_notifier_update_serial(mmu); 322 + list_add(&mo->link, &mn->linear); 323 + __i915_mmu_notifier_update_serial(mn); 246 324 } 247 - spin_unlock(&mmu->lock); 248 - mutex_unlock(&mmu->dev->struct_mutex); 325 + spin_unlock(&mn->lock); 326 + mutex_unlock(&dev->struct_mutex); 249 327 250 328 return ret; 329 + } 330 + 331 + static bool i915_mmu_notifier_has_linear(struct i915_mmu_notifier *mn) 332 + { 333 + struct i915_mmu_object *mo; 334 + 335 + list_for_each_entry(mo, &mn->linear, link) 336 + if (mo->is_linear) 337 + return true; 338 + 339 + return false; 340 + } 341 + 342 + static void 343 + i915_mmu_notifier_del(struct i915_mmu_notifier *mn, 344 + struct i915_mmu_object *mo) 345 + { 346 + spin_lock(&mn->lock); 347 + list_del(&mo->link); 348 + if (mo->is_linear) 349 + mn->has_linear = i915_mmu_notifier_has_linear(mn); 350 + else 351 + 
interval_tree_remove(&mo->it, &mn->objects); 352 + __i915_mmu_notifier_update_serial(mn); 353 + spin_unlock(&mn->lock); 251 354 } 252 355 253 356 static void 254 357 i915_gem_userptr_release__mmu_notifier(struct drm_i915_gem_object *obj) 255 358 { 256 - struct i915_mmu_object *mn; 359 + struct i915_mmu_object *mo; 257 360 258 - mn = obj->userptr.mn; 259 - if (mn == NULL) 361 + mo = obj->userptr.mmu_object; 362 + if (mo == NULL) 260 363 return; 261 364 262 - i915_mmu_notifier_del(mn->mmu, mn); 263 - obj->userptr.mn = NULL; 365 + i915_mmu_notifier_del(mo->mn, mo); 366 + kfree(mo); 367 + 368 + obj->userptr.mmu_object = NULL; 369 + } 370 + 371 + static struct i915_mmu_notifier * 372 + i915_mmu_notifier_find(struct i915_mm_struct *mm) 373 + { 374 + if (mm->mn == NULL) { 375 + down_write(&mm->mm->mmap_sem); 376 + mutex_lock(&to_i915(mm->dev)->mm_lock); 377 + if (mm->mn == NULL) 378 + mm->mn = i915_mmu_notifier_create(mm->mm); 379 + mutex_unlock(&to_i915(mm->dev)->mm_lock); 380 + up_write(&mm->mm->mmap_sem); 381 + } 382 + return mm->mn; 264 383 } 265 384 266 385 static int 267 386 i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj, 268 387 unsigned flags) 269 388 { 270 - struct i915_mmu_notifier *mmu; 271 - struct i915_mmu_object *mn; 389 + struct i915_mmu_notifier *mn; 390 + struct i915_mmu_object *mo; 272 391 int ret; 273 392 274 393 if (flags & I915_USERPTR_UNSYNCHRONIZED) 275 394 return capable(CAP_SYS_ADMIN) ? 
0 : -EPERM; 276 395 277 - down_write(&obj->userptr.mm->mmap_sem); 278 - ret = i915_mutex_lock_interruptible(obj->base.dev); 279 - if (ret == 0) { 280 - mmu = i915_mmu_notifier_get(obj->base.dev, obj->userptr.mm); 281 - if (!IS_ERR(mmu)) 282 - mmu->count++; /* preemptive add to act as a refcount */ 283 - else 284 - ret = PTR_ERR(mmu); 285 - mutex_unlock(&obj->base.dev->struct_mutex); 286 - } 287 - up_write(&obj->userptr.mm->mmap_sem); 288 - if (ret) 396 + if (WARN_ON(obj->userptr.mm == NULL)) 397 + return -EINVAL; 398 + 399 + mn = i915_mmu_notifier_find(obj->userptr.mm); 400 + if (IS_ERR(mn)) 401 + return PTR_ERR(mn); 402 + 403 + mo = kzalloc(sizeof(*mo), GFP_KERNEL); 404 + if (mo == NULL) 405 + return -ENOMEM; 406 + 407 + mo->mn = mn; 408 + mo->it.start = obj->userptr.ptr; 409 + mo->it.last = mo->it.start + obj->base.size - 1; 410 + mo->obj = obj; 411 + 412 + ret = i915_mmu_notifier_add(obj->base.dev, mn, mo); 413 + if (ret) { 414 + kfree(mo); 289 415 return ret; 290 - 291 - mn = kzalloc(sizeof(*mn), GFP_KERNEL); 292 - if (mn == NULL) { 293 - ret = -ENOMEM; 294 - goto destroy_mmu; 295 416 } 296 417 297 - mn->mmu = mmu; 298 - mn->it.start = obj->userptr.ptr; 299 - mn->it.last = mn->it.start + obj->base.size - 1; 300 - mn->obj = obj; 301 - 302 - ret = i915_mmu_notifier_add(mmu, mn); 303 - if (ret) 304 - goto free_mn; 305 - 306 - obj->userptr.mn = mn; 418 + obj->userptr.mmu_object = mo; 307 419 return 0; 420 + } 308 421 309 - free_mn: 422 + static void 423 + i915_mmu_notifier_free(struct i915_mmu_notifier *mn, 424 + struct mm_struct *mm) 425 + { 426 + if (mn == NULL) 427 + return; 428 + 429 + mmu_notifier_unregister(&mn->mn, mm); 310 430 kfree(mn); 311 - destroy_mmu: 312 - mutex_lock(&obj->base.dev->struct_mutex); 313 - if (--mmu->count == 0) 314 - __i915_mmu_notifier_destroy(mmu); 315 - mutex_unlock(&obj->base.dev->struct_mutex); 316 - return ret; 317 431 } 318 432 319 433 #else ··· 371 413 372 414 return 0; 373 415 } 416 + 417 + static void 418 + 
i915_mmu_notifier_free(struct i915_mmu_notifier *mn, 419 + struct mm_struct *mm) 420 + { 421 + } 422 + 374 423 #endif 424 + 425 + static struct i915_mm_struct * 426 + __i915_mm_struct_find(struct drm_i915_private *dev_priv, struct mm_struct *real) 427 + { 428 + struct i915_mm_struct *mm; 429 + 430 + /* Protected by dev_priv->mm_lock */ 431 + hash_for_each_possible(dev_priv->mm_structs, mm, node, (unsigned long)real) 432 + if (mm->mm == real) 433 + return mm; 434 + 435 + return NULL; 436 + } 437 + 438 + static int 439 + i915_gem_userptr_init__mm_struct(struct drm_i915_gem_object *obj) 440 + { 441 + struct drm_i915_private *dev_priv = to_i915(obj->base.dev); 442 + struct i915_mm_struct *mm; 443 + int ret = 0; 444 + 445 + /* During release of the GEM object we hold the struct_mutex. This 446 + * precludes us from calling mmput() at that time as that may be 447 + * the last reference and so call exit_mmap(). exit_mmap() will 448 + * attempt to reap the vma, and if we were holding a GTT mmap 449 + * would then call drm_gem_vm_close() and attempt to reacquire 450 + * the struct mutex. So in order to avoid that recursion, we have 451 + * to defer releasing the mm reference until after we drop the 452 + * struct_mutex, i.e. we need to schedule a worker to do the clean 453 + * up. 
454 + */ 455 + mutex_lock(&dev_priv->mm_lock); 456 + mm = __i915_mm_struct_find(dev_priv, current->mm); 457 + if (mm == NULL) { 458 + mm = kmalloc(sizeof(*mm), GFP_KERNEL); 459 + if (mm == NULL) { 460 + ret = -ENOMEM; 461 + goto out; 462 + } 463 + 464 + kref_init(&mm->kref); 465 + mm->dev = obj->base.dev; 466 + 467 + mm->mm = current->mm; 468 + atomic_inc(&current->mm->mm_count); 469 + 470 + mm->mn = NULL; 471 + 472 + /* Protected by dev_priv->mm_lock */ 473 + hash_add(dev_priv->mm_structs, 474 + &mm->node, (unsigned long)mm->mm); 475 + } else 476 + kref_get(&mm->kref); 477 + 478 + obj->userptr.mm = mm; 479 + out: 480 + mutex_unlock(&dev_priv->mm_lock); 481 + return ret; 482 + } 483 + 484 + static void 485 + __i915_mm_struct_free__worker(struct work_struct *work) 486 + { 487 + struct i915_mm_struct *mm = container_of(work, typeof(*mm), work); 488 + i915_mmu_notifier_free(mm->mn, mm->mm); 489 + mmdrop(mm->mm); 490 + kfree(mm); 491 + } 492 + 493 + static void 494 + __i915_mm_struct_free(struct kref *kref) 495 + { 496 + struct i915_mm_struct *mm = container_of(kref, typeof(*mm), kref); 497 + 498 + /* Protected by dev_priv->mm_lock */ 499 + hash_del(&mm->node); 500 + mutex_unlock(&to_i915(mm->dev)->mm_lock); 501 + 502 + INIT_WORK(&mm->work, __i915_mm_struct_free__worker); 503 + schedule_work(&mm->work); 504 + } 505 + 506 + static void 507 + i915_gem_userptr_release__mm_struct(struct drm_i915_gem_object *obj) 508 + { 509 + if (obj->userptr.mm == NULL) 510 + return; 511 + 512 + kref_put_mutex(&obj->userptr.mm->kref, 513 + __i915_mm_struct_free, 514 + &to_i915(obj->base.dev)->mm_lock); 515 + obj->userptr.mm = NULL; 516 + } 375 517 376 518 struct get_pages_work { 377 519 struct work_struct work; 378 520 struct drm_i915_gem_object *obj; 379 521 struct task_struct *task; 380 522 }; 381 - 382 523 383 524 #if IS_ENABLED(CONFIG_SWIOTLB) 384 525 #define swiotlb_active() swiotlb_nr_tbl() ··· 536 479 if (pvec == NULL) 537 480 pvec = drm_malloc_ab(num_pages, sizeof(struct page *)); 
538 481 if (pvec != NULL) { 539 - struct mm_struct *mm = obj->userptr.mm; 482 + struct mm_struct *mm = obj->userptr.mm->mm; 540 483 541 484 down_read(&mm->mmap_sem); 542 485 while (pinned < num_pages) { ··· 602 545 603 546 pvec = NULL; 604 547 pinned = 0; 605 - if (obj->userptr.mm == current->mm) { 548 + if (obj->userptr.mm->mm == current->mm) { 606 549 pvec = kmalloc(num_pages*sizeof(struct page *), 607 550 GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY); 608 551 if (pvec == NULL) { ··· 708 651 i915_gem_userptr_release(struct drm_i915_gem_object *obj) 709 652 { 710 653 i915_gem_userptr_release__mmu_notifier(obj); 711 - 712 - if (obj->userptr.mm) { 713 - mmput(obj->userptr.mm); 714 - obj->userptr.mm = NULL; 715 - } 654 + i915_gem_userptr_release__mm_struct(obj); 716 655 } 717 656 718 657 static int 719 658 i915_gem_userptr_dmabuf_export(struct drm_i915_gem_object *obj) 720 659 { 721 - if (obj->userptr.mn) 660 + if (obj->userptr.mmu_object) 722 661 return 0; 723 662 724 663 return i915_gem_userptr_init__mmu_notifier(obj, 0); ··· 789 736 return -ENODEV; 790 737 } 791 738 792 - /* Allocate the new object */ 793 739 obj = i915_gem_object_alloc(dev); 794 740 if (obj == NULL) 795 741 return -ENOMEM; ··· 806 754 * at binding. This means that we need to hook into the mmu_notifier 807 755 * in order to detect if the mmu is destroyed. 808 756 */ 809 - ret = -ENOMEM; 810 - if ((obj->userptr.mm = get_task_mm(current))) 757 + ret = i915_gem_userptr_init__mm_struct(obj); 758 + if (ret == 0) 811 759 ret = i915_gem_userptr_init__mmu_notifier(obj, args->flags); 812 760 if (ret == 0) 813 761 ret = drm_gem_handle_create(file, &obj->base, &handle); ··· 824 772 int 825 773 i915_gem_init_userptr(struct drm_device *dev) 826 774 { 827 - #if defined(CONFIG_MMU_NOTIFIER) 828 775 struct drm_i915_private *dev_priv = to_i915(dev); 829 - hash_init(dev_priv->mmu_notifiers); 830 - #endif 776 + mutex_init(&dev_priv->mm_lock); 777 + hash_init(dev_priv->mm_structs); 831 778 return 0; 832 779 }
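The userptr rewrite above replaces the open-coded mm refcounting with a kref-counted i915_mm_struct released through kref_put_mutex(), with the actual kfree() deferred to a worker. The essential ordering - the final put unpublishes the object under the same lock that lookups take, so no one can find a dying object - can be sketched single-threaded with hypothetical types (the real code's lock is dev_priv->mm_lock, elided here):

```c
/* Hypothetical refcounted object published in a lookup table. This
 * single-threaded sketch only shows the ordering, not real locking. */
struct mm_entry {
	int refcount;
	int published;
};

/* Minimal kref_put_mutex() shape: decrement, and only when the count
 * reaches zero, unpublish (hash_del() under mm_lock upstream) before
 * the object is handed to the deferred-free path. */
static int put_entry(struct mm_entry *e)
{
	if (--e->refcount > 0)
		return 0;
	e->published = 0;	/* lookup can no longer find it */
	return 1;		/* caller schedules the actual free */
}
```

Deferring the free to a worker mirrors the commit's comment: the last reference may drop while struct_mutex is held, and freeing inline could recurse back into the same mutex.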
+8 -4
drivers/gpu/drm/i915/i915_reg.h
··· 334 334 #define GFX_OP_DESTBUFFER_INFO ((0x3<<29)|(0x1d<<24)|(0x8e<<16)|1) 335 335 #define GFX_OP_DRAWRECT_INFO ((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3)) 336 336 #define GFX_OP_DRAWRECT_INFO_I965 ((0x7900<<16)|0x2) 337 - #define SRC_COPY_BLT_CMD ((2<<29)|(0x43<<22)|4) 337 + 338 + #define COLOR_BLT_CMD (2<<29 | 0x40<<22 | (5-2)) 339 + #define SRC_COPY_BLT_CMD ((2<<29)|(0x43<<22)|4) 338 340 #define XY_SRC_COPY_BLT_CMD ((2<<29)|(0x53<<22)|6) 339 341 #define XY_MONO_SRC_COPY_IMM_BLT ((2<<29)|(0x71<<22)|5) 340 - #define XY_SRC_COPY_BLT_WRITE_ALPHA (1<<21) 341 - #define XY_SRC_COPY_BLT_WRITE_RGB (1<<20) 342 + #define BLT_WRITE_A (2<<20) 343 + #define BLT_WRITE_RGB (1<<20) 344 + #define BLT_WRITE_RGBA (BLT_WRITE_RGB | BLT_WRITE_A) 342 345 #define BLT_DEPTH_8 (0<<24) 343 346 #define BLT_DEPTH_16_565 (1<<24) 344 347 #define BLT_DEPTH_16_1555 (2<<24) 345 348 #define BLT_DEPTH_32 (3<<24) 346 - #define BLT_ROP_GXCOPY (0xcc<<16) 349 + #define BLT_ROP_SRC_COPY (0xcc<<16) 350 + #define BLT_ROP_COLOR_COPY (0xf0<<16) 347 351 #define XY_SRC_COPY_BLT_SRC_TILED (1<<15) /* 965+ only */ 348 352 #define XY_SRC_COPY_BLT_DST_TILED (1<<11) /* 965+ only */ 349 353 #define CMD_OP_DISPLAYBUFFER_INFO ((0x0<<29)|(0x14<<23)|2)
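The new blitter opcodes above are packed bitfields: in the 2D command layout, bits 31:29 carry the client, bits 28:22 the opcode, and the low bits the DWord length minus two; `BLT_WRITE_A` is bit 21 written as `2<<20`. A quick sanity check of that packing, with the macro values copied from the hunk (the field helpers below are our own, not i915 code):

```c
#include <stdint.h>
#include <assert.h>

/* Blitter command words from the i915_reg.h hunk above. */
#define COLOR_BLT_CMD      (2<<29 | 0x40<<22 | (5-2))
#define SRC_COPY_BLT_CMD   ((2<<29)|(0x43<<22)|4)
#define BLT_WRITE_A        (2<<20)
#define BLT_WRITE_RGB      (1<<20)
#define BLT_WRITE_RGBA     (BLT_WRITE_RGB | BLT_WRITE_A)
#define BLT_ROP_SRC_COPY   (0xcc<<16)
#define BLT_ROP_COLOR_COPY (0xf0<<16)

/* Pull the fields back out of a packed command word. */
static inline uint32_t blt_client(uint32_t cmd) { return cmd >> 29; }
static inline uint32_t blt_opcode(uint32_t cmd) { return (cmd >> 22) & 0x7f; }
static inline uint32_t blt_len(uint32_t cmd)    { return cmd & 0xff; }
```

Note the length fields: `COLOR_BLT_CMD` spells its `(5-2)` out explicitly, while `SRC_COPY_BLT_CMD` keeps the bare `4` (a 6-DWord packet) it already had.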
+4
drivers/gpu/drm/i915/intel_dp.c
··· 1631 1631 1632 1632 pipe_config->adjusted_mode.flags |= flags; 1633 1633 1634 + if (!HAS_PCH_SPLIT(dev) && !IS_VALLEYVIEW(dev) && 1635 + tmp & DP_COLOR_RANGE_16_235) 1636 + pipe_config->limited_color_range = true; 1637 + 1634 1638 pipe_config->has_dp_encoder = true; 1635 1639 1636 1640 intel_dp_get_m_n(crtc, pipe_config);
+6 -1
drivers/gpu/drm/i915/intel_hdmi.c
··· 712 712 struct intel_crtc_config *pipe_config) 713 713 { 714 714 struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base); 715 - struct drm_i915_private *dev_priv = encoder->base.dev->dev_private; 715 + struct drm_device *dev = encoder->base.dev; 716 + struct drm_i915_private *dev_priv = dev->dev_private; 716 717 u32 tmp, flags = 0; 717 718 int dotclock; 718 719 ··· 734 733 735 734 if (tmp & HDMI_MODE_SELECT_HDMI) 736 735 pipe_config->has_audio = true; 736 + 737 + if (!HAS_PCH_SPLIT(dev) && 738 + tmp & HDMI_COLOR_RANGE_16_235) 739 + pipe_config->limited_color_range = true; 737 740 738 741 pipe_config->adjusted_mode.flags |= flags; 739 742
+39 -27
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 1363 1363 1364 1364 /* Just userspace ABI convention to limit the wa batch bo to a resonable size */ 1365 1365 #define I830_BATCH_LIMIT (256*1024) 1366 + #define I830_TLB_ENTRIES (2) 1367 + #define I830_WA_SIZE max(I830_TLB_ENTRIES*4096, I830_BATCH_LIMIT) 1366 1368 static int 1367 1369 i830_dispatch_execbuffer(struct intel_engine_cs *ring, 1368 1370 u64 offset, u32 len, 1369 1371 unsigned flags) 1370 1372 { 1373 + u32 cs_offset = ring->scratch.gtt_offset; 1371 1374 int ret; 1372 1375 1373 - if (flags & I915_DISPATCH_PINNED) { 1374 - ret = intel_ring_begin(ring, 4); 1375 - if (ret) 1376 - return ret; 1376 + ret = intel_ring_begin(ring, 6); 1377 + if (ret) 1378 + return ret; 1377 1379 1378 - intel_ring_emit(ring, MI_BATCH_BUFFER); 1379 - intel_ring_emit(ring, offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE)); 1380 - intel_ring_emit(ring, offset + len - 8); 1381 - intel_ring_emit(ring, MI_NOOP); 1382 - intel_ring_advance(ring); 1383 - } else { 1384 - u32 cs_offset = ring->scratch.gtt_offset; 1380 + /* Evict the invalid PTE TLBs */ 1381 + intel_ring_emit(ring, COLOR_BLT_CMD | BLT_WRITE_RGBA); 1382 + intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | 4096); 1383 + intel_ring_emit(ring, I830_TLB_ENTRIES << 16 | 4); /* load each page */ 1384 + intel_ring_emit(ring, cs_offset); 1385 + intel_ring_emit(ring, 0xdeadbeef); 1386 + intel_ring_emit(ring, MI_NOOP); 1387 + intel_ring_advance(ring); 1385 1388 1389 + if ((flags & I915_DISPATCH_PINNED) == 0) { 1386 1390 if (len > I830_BATCH_LIMIT) 1387 1391 return -ENOSPC; 1388 1392 1389 - ret = intel_ring_begin(ring, 9+3); 1393 + ret = intel_ring_begin(ring, 6 + 2); 1390 1394 if (ret) 1391 1395 return ret; 1392 - /* Blit the batch (which has now all relocs applied) to the stable batch 1393 - * scratch bo area (so that the CS never stumbles over its tlb 1394 - * invalidation bug) ... 
*/ 1395 - intel_ring_emit(ring, XY_SRC_COPY_BLT_CMD | 1396 - XY_SRC_COPY_BLT_WRITE_ALPHA | 1397 - XY_SRC_COPY_BLT_WRITE_RGB); 1398 - intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_GXCOPY | 4096); 1399 - intel_ring_emit(ring, 0); 1400 - intel_ring_emit(ring, (DIV_ROUND_UP(len, 4096) << 16) | 1024); 1396 + 1397 + /* Blit the batch (which has now all relocs applied) to the 1398 + * stable batch scratch bo area (so that the CS never 1399 + * stumbles over its tlb invalidation bug) ... 1400 + */ 1401 + intel_ring_emit(ring, SRC_COPY_BLT_CMD | BLT_WRITE_RGBA); 1402 + intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_SRC_COPY | 4096); 1403 + intel_ring_emit(ring, DIV_ROUND_UP(len, 4096) << 16 | 4096); 1401 1404 intel_ring_emit(ring, cs_offset); 1402 - intel_ring_emit(ring, 0); 1403 1405 intel_ring_emit(ring, 4096); 1404 1406 intel_ring_emit(ring, offset); 1407 + 1405 1408 intel_ring_emit(ring, MI_FLUSH); 1409 + intel_ring_emit(ring, MI_NOOP); 1410 + intel_ring_advance(ring); 1406 1411 1407 1412 /* ... and execute it. */ 1408 - intel_ring_emit(ring, MI_BATCH_BUFFER); 1409 - intel_ring_emit(ring, cs_offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE)); 1410 - intel_ring_emit(ring, cs_offset + len - 8); 1411 - intel_ring_advance(ring); 1413 + offset = cs_offset; 1412 1414 } 1415 + 1416 + ret = intel_ring_begin(ring, 4); 1417 + if (ret) 1418 + return ret; 1419 + 1420 + intel_ring_emit(ring, MI_BATCH_BUFFER); 1421 + intel_ring_emit(ring, offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE)); 1422 + intel_ring_emit(ring, offset + len - 8); 1423 + intel_ring_emit(ring, MI_NOOP); 1424 + intel_ring_advance(ring); 1413 1425 1414 1426 return 0; 1415 1427 } ··· 2212 2200 2213 2201 /* Workaround batchbuffer to combat CS tlb bug.
*/ 2214 2202 if (HAS_BROKEN_CS_TLB(dev)) { 2215 - obj = i915_gem_alloc_object(dev, I830_BATCH_LIMIT); 2203 + obj = i915_gem_alloc_object(dev, I830_WA_SIZE); 2216 2204 if (obj == NULL) { 2217 2205 DRM_ERROR("Failed to allocate batch bo\n"); 2218 2206 return -ENOMEM;
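The rewritten i830 path sizes the workaround object as `max(I830_TLB_ENTRIES*4096, I830_BATCH_LIMIT)` and copies the batch in 4096-byte rows, so the row count packed into the blit is `DIV_ROUND_UP(len, 4096)`. A small sketch of that arithmetic (kernel macros re-declared locally; `MAX` stands in for the kernel's `max()`):

```c
#include <stddef.h>
#include <assert.h>

/* Local copies of the constants/macros used in the hunk above. */
#define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))
#define MAX(a, b)           ((a) > (b) ? (a) : (b))
#define I830_BATCH_LIMIT    (256*1024)
#define I830_TLB_ENTRIES    2
#define I830_WA_SIZE        MAX(I830_TLB_ENTRIES*4096, I830_BATCH_LIMIT)

/* Number of 4096-byte rows the SRC_COPY blit needs for a batch of
 * 'len' bytes; this is the value shifted into the height field. */
static size_t wa_blit_rows(size_t len)
{
    return DIV_ROUND_UP(len, 4096);
}
```

With these values `I830_WA_SIZE` still equals `I830_BATCH_LIMIT`; the `max()` only matters if the TLB-entry count ever grows past 64.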
+4
drivers/gpu/drm/i915/intel_tv.c
··· 854 854 struct drm_device *dev = encoder->base.dev; 855 855 struct drm_i915_private *dev_priv = dev->dev_private; 856 856 857 + /* Prevents vblank waits from timing out in intel_tv_detect_type() */ 858 + intel_wait_for_vblank(encoder->base.dev, 859 + to_intel_crtc(encoder->base.crtc)->pipe); 860 + 857 861 I915_WRITE(TV_CTL, I915_READ(TV_CTL) | TV_ENC_ENABLE); 858 862 } 859 863
+24 -22
drivers/gpu/drm/msm/hdmi/hdmi.c
··· 258 258 priv->hdmi_pdev = pdev; 259 259 } 260 260 261 + #ifdef CONFIG_OF 262 + static int get_gpio(struct device *dev, struct device_node *of_node, const char *name) 263 + { 264 + int gpio = of_get_named_gpio(of_node, name, 0); 265 + if (gpio < 0) { 266 + char name2[32]; 267 + snprintf(name2, sizeof(name2), "%s-gpio", name); 268 + gpio = of_get_named_gpio(of_node, name2, 0); 269 + if (gpio < 0) { 270 + dev_err(dev, "failed to get gpio: %s (%d)\n", 271 + name, gpio); 272 + gpio = -1; 273 + } 274 + } 275 + return gpio; 276 + } 277 + #endif 278 + 261 279 static int hdmi_bind(struct device *dev, struct device *master, void *data) 262 280 { 263 281 static struct hdmi_platform_config config = {}; 264 282 #ifdef CONFIG_OF 265 283 struct device_node *of_node = dev->of_node; 266 - 267 - int get_gpio(const char *name) 268 - { 269 - int gpio = of_get_named_gpio(of_node, name, 0); 270 - if (gpio < 0) { 271 - char name2[32]; 272 - snprintf(name2, sizeof(name2), "%s-gpio", name); 273 - gpio = of_get_named_gpio(of_node, name2, 0); 274 - if (gpio < 0) { 275 - dev_err(dev, "failed to get gpio: %s (%d)\n", 276 - name, gpio); 277 - gpio = -1; 278 - } 279 - } 280 - return gpio; 281 - } 282 284 283 285 if (of_device_is_compatible(of_node, "qcom,hdmi-tx-8074")) { 284 286 static const char *hpd_reg_names[] = {"hpd-gdsc", "hpd-5v"}; ··· 314 312 } 315 313 316 314 config.mmio_name = "core_physical"; 317 - config.ddc_clk_gpio = get_gpio("qcom,hdmi-tx-ddc-clk"); 318 - config.ddc_data_gpio = get_gpio("qcom,hdmi-tx-ddc-data"); 319 - config.hpd_gpio = get_gpio("qcom,hdmi-tx-hpd"); 320 - config.mux_en_gpio = get_gpio("qcom,hdmi-tx-mux-en"); 321 - config.mux_sel_gpio = get_gpio("qcom,hdmi-tx-mux-sel"); 322 - config.mux_lpm_gpio = get_gpio("qcom,hdmi-tx-mux-lpm"); 315 + config.ddc_clk_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-ddc-clk"); 316 + config.ddc_data_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-ddc-data"); 317 + config.hpd_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-hpd"); 318 + 
config.mux_en_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-mux-en"); 319 + config.mux_sel_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-mux-sel"); 320 + config.mux_lpm_gpio = get_gpio(dev, of_node, "qcom,hdmi-tx-mux-lpm"); 323 321 324 322 #else 325 323 static const char *hpd_clk_names[] = {
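The `get_gpio()` helper above tries the bare property name first and then falls back to a `"<name>-gpio"` suffix; the diff merely hoists it out of `hdmi_bind()` because nested functions are a GNU C extension. The two-step lookup in isolation (the table below is a hypothetical stand-in for `of_get_named_gpio()`):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Stand-in for the device tree: property name -> gpio number. */
struct dt_prop { const char *name; int gpio; };

static const struct dt_prop props[] = {
    { "qcom,hdmi-tx-ddc-clk",  70 },
    { "qcom,hdmi-tx-hpd-gpio", 72 },   /* only the suffixed form exists */
};

static int lookup(const char *name)
{
    size_t i;
    for (i = 0; i < sizeof(props)/sizeof(props[0]); i++)
        if (!strcmp(props[i].name, name))
            return props[i].gpio;
    return -1;  /* of_get_named_gpio() returns a negative errno */
}

/* Mirror of get_gpio(): try "name", then "name-gpio", else -1. */
static int get_gpio(const char *name)
{
    char name2[64];
    int gpio = lookup(name);

    if (gpio < 0) {
        snprintf(name2, sizeof(name2), "%s-gpio", name);
        gpio = lookup(name2);
    }
    return gpio < 0 ? -1 : gpio;
}
```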
+13 -2
drivers/gpu/drm/msm/hdmi/hdmi_phy_8960.c
··· 15 15 * this program. If not, see <http://www.gnu.org/licenses/>. 16 16 */ 17 17 18 + #ifdef CONFIG_COMMON_CLK 18 19 #include <linux/clk.h> 19 20 #include <linux/clk-provider.h> 21 + #endif 20 22 21 23 #include "hdmi.h" 22 24 23 25 struct hdmi_phy_8960 { 24 26 struct hdmi_phy base; 25 27 struct hdmi *hdmi; 28 + #ifdef CONFIG_COMMON_CLK 26 29 struct clk_hw pll_hw; 27 30 struct clk *pll; 28 31 unsigned long pixclk; 32 + #endif 29 33 }; 30 34 #define to_hdmi_phy_8960(x) container_of(x, struct hdmi_phy_8960, base) 35 + 36 + #ifdef CONFIG_COMMON_CLK 31 37 #define clk_to_phy(x) container_of(x, struct hdmi_phy_8960, pll_hw) 32 38 33 39 /* ··· 380 374 .parent_names = hdmi_pll_parents, 381 375 .num_parents = ARRAY_SIZE(hdmi_pll_parents), 382 376 }; 383 - 377 + #endif 384 378 385 379 /* 386 380 * HDMI Phy: ··· 486 480 { 487 481 struct hdmi_phy_8960 *phy_8960; 488 482 struct hdmi_phy *phy = NULL; 489 - int ret, i; 483 + int ret; 484 + #ifdef CONFIG_COMMON_CLK 485 + int i; 490 486 491 487 /* sanity check: */ 492 488 for (i = 0; i < (ARRAY_SIZE(freqtbl) - 1); i++) 493 489 if (WARN_ON(freqtbl[i].rate < freqtbl[i+1].rate)) 494 490 return ERR_PTR(-EINVAL); 491 + #endif 495 492 496 493 phy_8960 = kzalloc(sizeof(*phy_8960), GFP_KERNEL); 497 494 if (!phy_8960) { ··· 508 499 509 500 phy_8960->hdmi = hdmi; 510 501 502 + #ifdef CONFIG_COMMON_CLK 511 503 phy_8960->pll_hw.init = &pll_init; 512 504 phy_8960->pll = devm_clk_register(hdmi->dev->dev, &phy_8960->pll_hw); 513 505 if (IS_ERR(phy_8960->pll)) { ··· 516 506 phy_8960->pll = NULL; 517 507 goto fail; 518 508 } 509 + #endif 519 510 520 511 return phy; 521 512
+1 -1
drivers/gpu/drm/msm/msm_drv.c
··· 52 52 #define reglog 0 53 53 #endif 54 54 55 - static char *vram; 55 + static char *vram = "16m"; 56 56 MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU"); 57 57 module_param(vram, charp, 0); 58 58
-1
drivers/gpu/drm/nouveau/core/subdev/bar/nvc0.c
··· 200 200 201 201 nv_mask(priv, 0x000200, 0x00000100, 0x00000000); 202 202 nv_mask(priv, 0x000200, 0x00000100, 0x00000100); 203 - nv_mask(priv, 0x100c80, 0x00000001, 0x00000000); 204 203 205 204 nv_wr32(priv, 0x001704, 0x80000000 | priv->bar[1].mem->addr >> 12); 206 205 if (priv->bar[0].mem)
+1
drivers/gpu/drm/nouveau/core/subdev/fb/nvc0.c
··· 60 60 61 61 if (priv->r100c10_page) 62 62 nv_wr32(priv, 0x100c10, priv->r100c10 >> 8); 63 + nv_mask(priv, 0x100c80, 0x00000001, 0x00000000); /* 128KiB lpg */ 63 64 return 0; 64 65 } 65 66
+2
drivers/gpu/drm/nouveau/core/subdev/ltc/gf100.c
··· 98 98 gf100_ltc_init(struct nouveau_object *object) 99 99 { 100 100 struct nvkm_ltc_priv *priv = (void *)object; 101 + u32 lpg128 = !(nv_rd32(priv, 0x100c80) & 0x00000001); 101 102 int ret; 102 103 103 104 ret = nvkm_ltc_init(priv); ··· 108 107 nv_mask(priv, 0x17e820, 0x00100000, 0x00000000); /* INTR_EN &= ~0x10 */ 109 108 nv_wr32(priv, 0x17e8d8, priv->ltc_nr); 110 109 nv_wr32(priv, 0x17e8d4, priv->tag_base); 110 + nv_mask(priv, 0x17e8c0, 0x00000002, lpg128 ? 0x00000002 : 0x00000000); 111 111 return 0; 112 112 } 113 113
+2
drivers/gpu/drm/nouveau/core/subdev/ltc/gk104.c
··· 28 28 gk104_ltc_init(struct nouveau_object *object) 29 29 { 30 30 struct nvkm_ltc_priv *priv = (void *)object; 31 + u32 lpg128 = !(nv_rd32(priv, 0x100c80) & 0x00000001); 31 32 int ret; 32 33 33 34 ret = nvkm_ltc_init(priv); ··· 38 37 nv_wr32(priv, 0x17e8d8, priv->ltc_nr); 39 38 nv_wr32(priv, 0x17e000, priv->ltc_nr); 40 39 nv_wr32(priv, 0x17e8d4, priv->tag_base); 40 + nv_mask(priv, 0x17e8c0, 0x00000002, lpg128 ? 0x00000002 : 0x00000000); 41 41 return 0; 42 42 } 43 43
+2
drivers/gpu/drm/nouveau/core/subdev/ltc/gm107.c
··· 98 98 gm107_ltc_init(struct nouveau_object *object) 99 99 { 100 100 struct nvkm_ltc_priv *priv = (void *)object; 101 + u32 lpg128 = !(nv_rd32(priv, 0x100c80) & 0x00000001); 101 102 int ret; 102 103 103 104 ret = nvkm_ltc_init(priv); ··· 107 106 108 107 nv_wr32(priv, 0x17e27c, priv->ltc_nr); 109 108 nv_wr32(priv, 0x17e278, priv->tag_base); 109 + nv_mask(priv, 0x17e264, 0x00000002, lpg128 ? 0x00000002 : 0x00000000); 110 110 return 0; 111 111 } 112 112
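All three LTC init paths above now read the large-page bit from register `0x100c80` and mirror it into the LTC config register, instead of the nvc0 bar code force-clearing it. The read-then-mask-write, sketched against a fake MMIO backing store (register offsets as in the hunks; `rd32`/`mask32` are stand-ins for `nv_rd32()`/`nv_mask()`):

```c
#include <stdint.h>
#include <assert.h>

/* Fake MMIO space; large enough to index the registers below. */
static uint32_t regs[0x60000];

static uint32_t rd32(uint32_t addr) { return regs[addr >> 2]; }

static void mask32(uint32_t addr, uint32_t mask, uint32_t val)
{
    regs[addr >> 2] = (regs[addr >> 2] & ~mask) | (val & mask);
}

/* Mirror of the gf100_ltc_init() change: bit 0 of 0x100c80 clear means
 * 128KiB large pages, which must be reflected in bit 1 of 0x17e8c0. */
static void ltc_init_lpg(void)
{
    uint32_t lpg128 = !(rd32(0x100c80) & 0x00000001);

    mask32(0x17e8c0, 0x00000002, lpg128 ? 0x00000002 : 0x00000000);
}
```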
+2 -14
drivers/gpu/drm/nouveau/nouveau_acpi.c
··· 46 46 bool dsm_detected; 47 47 bool optimus_detected; 48 48 acpi_handle dhandle; 49 - acpi_handle other_handle; 50 49 acpi_handle rom_handle; 51 50 } nouveau_dsm_priv; 52 51 ··· 221 222 if (!dhandle) 222 223 return false; 223 224 224 - if (!acpi_has_method(dhandle, "_DSM")) { 225 - nouveau_dsm_priv.other_handle = dhandle; 225 + if (!acpi_has_method(dhandle, "_DSM")) 226 226 return false; 227 - } 227 + 228 228 if (acpi_check_dsm(dhandle, nouveau_dsm_muid, 0x00000102, 229 229 1 << NOUVEAU_DSM_POWER)) 230 230 retval |= NOUVEAU_DSM_HAS_MUX; ··· 299 301 printk(KERN_INFO "VGA switcheroo: detected DSM switching method %s handle\n", 300 302 acpi_method_name); 301 303 nouveau_dsm_priv.dsm_detected = true; 302 - /* 303 - * On some systems hotplug events are generated for the device 304 - * being switched off when _DSM is executed. They cause ACPI 305 - * hotplug to trigger and attempt to remove the device from 306 - * the system, which causes it to break down. Prevent that from 307 - * happening by setting the no_hotplug flag for the involved 308 - * ACPI device objects. 309 - */ 310 - acpi_bus_no_hotplug(nouveau_dsm_priv.dhandle); 311 - acpi_bus_no_hotplug(nouveau_dsm_priv.other_handle); 312 304 ret = true; 313 305 } 314 306
+1
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 627 627 628 628 pci_save_state(pdev); 629 629 pci_disable_device(pdev); 630 + pci_ignore_hotplug(pdev); 630 631 pci_set_power_state(pdev, PCI_D3hot); 631 632 return 0; 632 633 }
+9
drivers/gpu/drm/nouveau/nouveau_vga.c
··· 108 108 nouveau_vga_fini(struct nouveau_drm *drm) 109 109 { 110 110 struct drm_device *dev = drm->dev; 111 + bool runtime = false; 112 + 113 + if (nouveau_runtime_pm == 1) 114 + runtime = true; 115 + if ((nouveau_runtime_pm == -1) && (nouveau_is_optimus() || nouveau_is_v1_dsm())) 116 + runtime = true; 117 + 111 118 vga_switcheroo_unregister_client(dev->pdev); 119 + if (runtime && nouveau_is_v1_dsm() && !nouveau_is_optimus()) 120 + vga_switcheroo_fini_domain_pm_ops(drm->dev->dev); 112 121 vga_client_register(dev->pdev, NULL, NULL, NULL); 113 122 } 114 123
+2 -5
drivers/gpu/drm/radeon/atombios_dp.c
··· 405 405 u8 msg[DP_DPCD_SIZE]; 406 406 int ret; 407 407 408 - char dpcd_hex_dump[DP_DPCD_SIZE * 3]; 409 - 410 408 ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_DPCD_REV, msg, 411 409 DP_DPCD_SIZE); 412 410 if (ret > 0) { 413 411 memcpy(dig_connector->dpcd, msg, DP_DPCD_SIZE); 414 412 415 - hex_dump_to_buffer(dig_connector->dpcd, sizeof(dig_connector->dpcd), 416 - 32, 1, dpcd_hex_dump, sizeof(dpcd_hex_dump), false); 417 - DRM_DEBUG_KMS("DPCD: %s\n", dpcd_hex_dump); 413 + DRM_DEBUG_KMS("DPCD: %*ph\n", (int)sizeof(dig_connector->dpcd), 414 + dig_connector->dpcd); 418 415 419 416 radeon_dp_probe_oui(radeon_connector); 420 417
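The DPCD dump above drops the explicit `hex_dump_to_buffer()` in favor of printk's `%*ph` extension, which formats a small buffer as space-separated hex bytes. `%*ph` does not exist in userspace printf; an equivalent helper looks roughly like this (assumes `out` can hold at least `3*len` bytes):

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Userspace stand-in for printk's "%*ph": format 'len' bytes as
 * two-digit hex separated by spaces, e.g. "11 00 ab 05". */
static int hex_bytes(char *out, const uint8_t *buf, size_t len)
{
    size_t i, pos = 0;

    out[0] = '\0';
    for (i = 0; i < len; i++)
        pos += (size_t)sprintf(out + pos, i ? " %02x" : "%02x", buf[i]);
    return (int)pos;
}
```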
-7
drivers/gpu/drm/radeon/cik_sdma.c
··· 489 489 { 490 490 int r; 491 491 492 - /* Reset dma */ 493 - WREG32(SRBM_SOFT_RESET, SOFT_RESET_SDMA | SOFT_RESET_SDMA1); 494 - RREG32(SRBM_SOFT_RESET); 495 - udelay(50); 496 - WREG32(SRBM_SOFT_RESET, 0); 497 - RREG32(SRBM_SOFT_RESET); 498 - 499 492 r = cik_sdma_load_microcode(rdev); 500 493 if (r) 501 494 return r;
+21 -7
drivers/gpu/drm/radeon/kv_dpm.c
··· 33 33 #define KV_MINIMUM_ENGINE_CLOCK 800 34 34 #define SMC_RAM_END 0x40000 35 35 36 + static int kv_enable_nb_dpm(struct radeon_device *rdev, 37 + bool enable); 36 38 static void kv_init_graphics_levels(struct radeon_device *rdev); 37 39 static int kv_calculate_ds_divider(struct radeon_device *rdev); 38 40 static int kv_calculate_nbps_level_settings(struct radeon_device *rdev); ··· 1297 1295 { 1298 1296 kv_smc_bapm_enable(rdev, false); 1299 1297 1298 + if (rdev->family == CHIP_MULLINS) 1299 + kv_enable_nb_dpm(rdev, false); 1300 + 1300 1301 /* powerup blocks */ 1301 1302 kv_dpm_powergate_acp(rdev, false); 1302 1303 kv_dpm_powergate_samu(rdev, false); ··· 1774 1769 return ret; 1775 1770 } 1776 1771 1777 - static int kv_enable_nb_dpm(struct radeon_device *rdev) 1772 + static int kv_enable_nb_dpm(struct radeon_device *rdev, 1773 + bool enable) 1778 1774 { 1779 1775 struct kv_power_info *pi = kv_get_pi(rdev); 1780 1776 int ret = 0; 1781 1777 1782 - if (pi->enable_nb_dpm && !pi->nb_dpm_enabled) { 1783 - ret = kv_notify_message_to_smu(rdev, PPSMC_MSG_NBDPM_Enable); 1784 - if (ret == 0) 1785 - pi->nb_dpm_enabled = true; 1778 + if (enable) { 1779 + if (pi->enable_nb_dpm && !pi->nb_dpm_enabled) { 1780 + ret = kv_notify_message_to_smu(rdev, PPSMC_MSG_NBDPM_Enable); 1781 + if (ret == 0) 1782 + pi->nb_dpm_enabled = true; 1783 + } 1784 + } else { 1785 + if (pi->enable_nb_dpm && pi->nb_dpm_enabled) { 1786 + ret = kv_notify_message_to_smu(rdev, PPSMC_MSG_NBDPM_Disable); 1787 + if (ret == 0) 1788 + pi->nb_dpm_enabled = false; 1789 + } 1786 1790 } 1787 1791 1788 1792 return ret; ··· 1878 1864 } 1879 1865 kv_update_sclk_t(rdev); 1880 1866 if (rdev->family == CHIP_MULLINS) 1881 - kv_enable_nb_dpm(rdev); 1867 + kv_enable_nb_dpm(rdev, true); 1882 1868 } 1883 1869 } else { 1884 1870 if (pi->enable_dpm) { ··· 1903 1889 } 1904 1890 kv_update_acp_boot_level(rdev); 1905 1891 kv_update_sclk_t(rdev); 1906 - kv_enable_nb_dpm(rdev); 1892 + kv_enable_nb_dpm(rdev, true); 1907 1893 } 1908 1894 
} 1909 1895
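`kv_enable_nb_dpm()` now takes an `enable` flag and only messages the SMU when the cached `nb_dpm_enabled` state would actually change. That guard pattern in isolation (the counter is a stand-in for `kv_notify_message_to_smu()`):

```c
#include <stdbool.h>
#include <assert.h>

/* Counts messages "sent to the SMU". */
static int msgs_sent;

struct nb_dpm_state {
    bool supported;   /* pi->enable_nb_dpm  */
    bool enabled;     /* pi->nb_dpm_enabled */
};

/* Mirrors the reworked kv_enable_nb_dpm(): only message the SMU on a
 * real state transition, and track the new state on success. */
static int nb_dpm_set(struct nb_dpm_state *st, bool enable)
{
    if (!st->supported)
        return 0;
    if (enable == st->enabled)
        return 0;             /* nothing to do, skip the SMU message */
    msgs_sent++;              /* PPSMC_MSG_NBDPM_{Enable,Disable} */
    st->enabled = enable;
    return 0;
}
```

Keeping the guard inside the helper is what lets the Mullins disable path in `kv_dpm_disable()` call it unconditionally.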
-6
drivers/gpu/drm/radeon/ni_dma.c
··· 191 191 u32 reg_offset, wb_offset; 192 192 int i, r; 193 193 194 - /* Reset dma */ 195 - WREG32(SRBM_SOFT_RESET, SOFT_RESET_DMA | SOFT_RESET_DMA1); 196 - RREG32(SRBM_SOFT_RESET); 197 - udelay(50); 198 - WREG32(SRBM_SOFT_RESET, 0); 199 - 200 194 for (i = 0; i < 2; i++) { 201 195 if (i == 0) { 202 196 ring = &rdev->ring[R600_RING_TYPE_DMA_INDEX];
+14 -14
drivers/gpu/drm/radeon/r100.c
··· 821 821 return RREG32(RADEON_CRTC2_CRNT_FRAME); 822 822 } 823 823 824 + /** 825 + * r100_ring_hdp_flush - flush Host Data Path via the ring buffer 826 + * rdev: radeon device structure 827 + * ring: ring buffer struct for emitting packets 828 + */ 829 + static void r100_ring_hdp_flush(struct radeon_device *rdev, struct radeon_ring *ring) 830 + { 831 + radeon_ring_write(ring, PACKET0(RADEON_HOST_PATH_CNTL, 0)); 832 + radeon_ring_write(ring, rdev->config.r100.hdp_cntl | 833 + RADEON_HDP_READ_BUFFER_INVALIDATE); 834 + radeon_ring_write(ring, PACKET0(RADEON_HOST_PATH_CNTL, 0)); 835 + radeon_ring_write(ring, rdev->config.r100.hdp_cntl); 836 + } 837 + 824 838 /* Who ever call radeon_fence_emit should call ring_lock and ask 825 839 * for enough space (today caller are ib schedule and buffer move) */ 826 840 void r100_fence_ring_emit(struct radeon_device *rdev, ··· 1068 1054 { 1069 1055 WREG32(RADEON_CP_RB_WPTR, ring->wptr); 1070 1056 (void)RREG32(RADEON_CP_RB_WPTR); 1071 - } 1072 - 1073 - /** 1074 - * r100_ring_hdp_flush - flush Host Data Path via the ring buffer 1075 - * rdev: radeon device structure 1076 - * ring: ring buffer struct for emitting packets 1077 - */ 1078 - void r100_ring_hdp_flush(struct radeon_device *rdev, struct radeon_ring *ring) 1079 - { 1080 - radeon_ring_write(ring, PACKET0(RADEON_HOST_PATH_CNTL, 0)); 1081 - radeon_ring_write(ring, rdev->config.r100.hdp_cntl | 1082 - RADEON_HDP_READ_BUFFER_INVALIDATE); 1083 - radeon_ring_write(ring, PACKET0(RADEON_HOST_PATH_CNTL, 0)); 1084 - radeon_ring_write(ring, rdev->config.r100.hdp_cntl); 1085 1057 } 1086 1058 1087 1059 static void r100_cp_load_microcode(struct radeon_device *rdev)
+2 -2
drivers/gpu/drm/radeon/r600.c
··· 2769 2769 radeon_ring_write(ring, lower_32_bits(addr)); 2770 2770 radeon_ring_write(ring, (upper_32_bits(addr) & 0xff) | sel); 2771 2771 2772 - /* PFP_SYNC_ME packet only exists on 7xx+ */ 2773 - if (emit_wait && (rdev->family >= CHIP_RV770)) { 2772 + /* PFP_SYNC_ME packet only exists on 7xx+, only enable it on eg+ */ 2773 + if (emit_wait && (rdev->family >= CHIP_CEDAR)) { 2774 2774 /* Prevent the PFP from running ahead of the semaphore wait */ 2775 2775 radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0)); 2776 2776 radeon_ring_write(ring, 0x0);
-9
drivers/gpu/drm/radeon/r600_dma.c
··· 124 124 u32 rb_bufsz; 125 125 int r; 126 126 127 - /* Reset dma */ 128 - if (rdev->family >= CHIP_RV770) 129 - WREG32(SRBM_SOFT_RESET, RV770_SOFT_RESET_DMA); 130 - else 131 - WREG32(SRBM_SOFT_RESET, SOFT_RESET_DMA); 132 - RREG32(SRBM_SOFT_RESET); 133 - udelay(50); 134 - WREG32(SRBM_SOFT_RESET, 0); 135 - 136 127 WREG32(DMA_SEM_INCOMPLETE_TIMER_CNTL, 0); 137 128 WREG32(DMA_SEM_WAIT_FAIL_TIMER_CNTL, 0); 138 129
-7
drivers/gpu/drm/radeon/r600d.h
··· 44 44 #define R6XX_MAX_PIPES 8 45 45 #define R6XX_MAX_PIPES_MASK 0xff 46 46 47 - /* PTE flags */ 48 - #define PTE_VALID (1 << 0) 49 - #define PTE_SYSTEM (1 << 1) 50 - #define PTE_SNOOPED (1 << 2) 51 - #define PTE_READABLE (1 << 5) 52 - #define PTE_WRITEABLE (1 << 6) 53 - 54 47 /* tiling bits */ 55 48 #define ARRAY_LINEAR_GENERAL 0x00000000 56 49 #define ARRAY_LINEAR_ALIGNED 0x00000001
-2
drivers/gpu/drm/radeon/radeon_asic.c
··· 185 185 .get_rptr = &r100_gfx_get_rptr, 186 186 .get_wptr = &r100_gfx_get_wptr, 187 187 .set_wptr = &r100_gfx_set_wptr, 188 - .hdp_flush = &r100_ring_hdp_flush, 189 188 }; 190 189 191 190 static struct radeon_asic r100_asic = { ··· 331 332 .get_rptr = &r100_gfx_get_rptr, 332 333 .get_wptr = &r100_gfx_get_wptr, 333 334 .set_wptr = &r100_gfx_set_wptr, 334 - .hdp_flush = &r100_ring_hdp_flush, 335 335 }; 336 336 337 337 static struct radeon_asic r300_asic = {
+1 -2
drivers/gpu/drm/radeon/radeon_asic.h
··· 148 148 struct radeon_ring *ring); 149 149 void r100_gfx_set_wptr(struct radeon_device *rdev, 150 150 struct radeon_ring *ring); 151 - void r100_ring_hdp_flush(struct radeon_device *rdev, 152 - struct radeon_ring *ring); 151 + 153 152 /* 154 153 * r200,rv250,rs300,rv280 155 154 */
+26 -7
drivers/gpu/drm/radeon/radeon_atombios.c
··· 447 447 } 448 448 } 449 449 450 + /* Fujitsu D3003-S2 board lists DVI-I as DVI-I and VGA */ 451 + if ((dev->pdev->device == 0x9805) && 452 + (dev->pdev->subsystem_vendor == 0x1734) && 453 + (dev->pdev->subsystem_device == 0x11bd)) { 454 + if (*connector_type == DRM_MODE_CONNECTOR_VGA) 455 + return false; 456 + } 450 457 451 458 return true; 452 459 } ··· 2288 2281 (controller->ucFanParameters & 2289 2282 ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with"); 2290 2283 rdev->pm.int_thermal_type = THERMAL_TYPE_KV; 2291 - } else if ((controller->ucType == 2292 - ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) || 2293 - (controller->ucType == 2294 - ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) || 2295 - (controller->ucType == 2296 - ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL)) { 2297 - DRM_INFO("Special thermal controller config\n"); 2284 + } else if (controller->ucType == 2285 + ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) { 2286 + DRM_INFO("External GPIO thermal controller %s fan control\n", 2287 + (controller->ucFanParameters & 2288 + ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with"); 2289 + rdev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO; 2290 + } else if (controller->ucType == 2291 + ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) { 2292 + DRM_INFO("ADT7473 with internal thermal controller %s fan control\n", 2293 + (controller->ucFanParameters & 2294 + ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with"); 2295 + rdev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL; 2296 + } else if (controller->ucType == 2297 + ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) { 2298 + DRM_INFO("EMC2103 with internal thermal controller %s fan control\n", 2299 + (controller->ucFanParameters & 2300 + ATOM_PP_FANPARAMETERS_NOFAN) ? 
"without" : "with"); 2301 + rdev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL; 2298 2302 } else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) { 2299 2303 DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n", 2300 2304 pp_lib_thermal_controller_names[controller->ucType], 2301 2305 controller->ucI2cAddress >> 1, 2302 2306 (controller->ucFanParameters & 2303 2307 ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with"); 2308 + rdev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL; 2304 2309 i2c_bus = radeon_lookup_i2c_gpio(rdev, controller->ucI2cLine); 2305 2310 rdev->pm.i2c_bus = radeon_i2c_lookup(rdev, &i2c_bus); 2306 2311 if (rdev->pm.i2c_bus) {
+2 -14
drivers/gpu/drm/radeon/radeon_atpx_handler.c
··· 33 33 bool atpx_detected; 34 34 /* handle for device - and atpx */ 35 35 acpi_handle dhandle; 36 - acpi_handle other_handle; 37 36 struct radeon_atpx atpx; 38 37 } radeon_atpx_priv; 39 38 ··· 452 453 return false; 453 454 454 455 status = acpi_get_handle(dhandle, "ATPX", &atpx_handle); 455 - if (ACPI_FAILURE(status)) { 456 - radeon_atpx_priv.other_handle = dhandle; 456 + if (ACPI_FAILURE(status)) 457 457 return false; 458 - } 458 + 459 459 radeon_atpx_priv.dhandle = dhandle; 460 460 radeon_atpx_priv.atpx.handle = atpx_handle; 461 461 return true; ··· 538 540 printk(KERN_INFO "VGA switcheroo: detected switching method %s handle\n", 539 541 acpi_method_name); 540 542 radeon_atpx_priv.atpx_detected = true; 541 - /* 542 - * On some systems hotplug events are generated for the device 543 - * being switched off when ATPX is executed. They cause ACPI 544 - * hotplug to trigger and attempt to remove the device from 545 - * the system, which causes it to break down. Prevent that from 546 - * happening by setting the no_hotplug flag for the involved 547 - * ACPI device objects. 548 - */ 549 - acpi_bus_no_hotplug(radeon_atpx_priv.dhandle); 550 - acpi_bus_no_hotplug(radeon_atpx_priv.other_handle); 551 543 return true; 552 544 } 553 545 return false;
+9 -2
drivers/gpu/drm/radeon/radeon_device.c
··· 1393 1393 1394 1394 r = radeon_init(rdev); 1395 1395 if (r) 1396 - return r; 1396 + goto failed; 1397 1397 1398 1398 r = radeon_ib_ring_tests(rdev); 1399 1399 if (r) ··· 1413 1413 radeon_agp_disable(rdev); 1414 1414 r = radeon_init(rdev); 1415 1415 if (r) 1416 - return r; 1416 + goto failed; 1417 1417 } 1418 1418 1419 1419 if ((radeon_testing & 1)) { ··· 1435 1435 DRM_INFO("radeon: acceleration disabled, skipping benchmarks\n"); 1436 1436 } 1437 1437 return 0; 1438 + 1439 + failed: 1440 + if (runtime) 1441 + vga_switcheroo_fini_domain_pm_ops(rdev->dev); 1442 + return r; 1438 1443 } 1439 1444 1440 1445 static void radeon_debugfs_remove_files(struct radeon_device *rdev); ··· 1460 1455 radeon_bo_evict_vram(rdev); 1461 1456 radeon_fini(rdev); 1462 1457 vga_switcheroo_unregister_client(rdev->pdev); 1458 + if (rdev->flags & RADEON_IS_PX) 1459 + vga_switcheroo_fini_domain_pm_ops(rdev->dev); 1463 1460 vga_client_register(rdev->pdev, NULL, NULL, NULL); 1464 1461 if (rdev->rio_mem) 1465 1462 pci_iounmap(rdev->pdev, rdev->rio_mem);
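The `radeon_device_init()` change above converts early `return r` statements into `goto failed` so that the runtime-PM domain ops registered earlier in the function are torn down on every error path. The unwinding shape, reduced to a skeleton (resource and function names illustrative):

```c
#include <stdbool.h>
#include <assert.h>

static bool pm_ops_registered;

static void register_pm_ops(void)   { pm_ops_registered = true;  }
static void unregister_pm_ops(void) { pm_ops_registered = false; }

/* Skeleton of the error path: once the PM domain ops are registered,
 * every later failure must unwind through 'failed' instead of
 * returning directly (the bug the diff fixes). */
static int device_init(bool runtime, bool init_fails)
{
    int r = 0;

    if (runtime)
        register_pm_ops();      /* vga_switcheroo_init_domain_pm_ops() */

    if (init_fails) {           /* stands in for radeon_init() failing */
        r = -1;
        goto failed;
    }
    return 0;                   /* success: ops stay registered */

failed:
    if (runtime)
        unregister_pm_ops();    /* vga_switcheroo_fini_domain_pm_ops() */
    return r;
}
```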
+2 -1
drivers/gpu/drm/radeon/radeon_drv.c
··· 83 83 * CIK: 1D and linear tiling modes contain valid PIPE_CONFIG 84 84 * 2.39.0 - Add INFO query for number of active CUs 85 85 * 2.40.0 - Add RADEON_GEM_GTT_WC/UC, flush HDP cache before submitting 86 - * CS to GPU 86 + * CS to GPU on >= r600 87 87 */ 88 88 #define KMS_DRIVER_MAJOR 2 89 89 #define KMS_DRIVER_MINOR 40 ··· 440 440 ret = radeon_suspend_kms(drm_dev, false, false); 441 441 pci_save_state(pdev); 442 442 pci_disable_device(pdev); 443 + pci_ignore_hotplug(pdev); 443 444 pci_set_power_state(pdev, PCI_D3cold); 444 445 drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF; 445 446
+1 -1
drivers/gpu/drm/radeon/radeon_semaphore.c
··· 34 34 int radeon_semaphore_create(struct radeon_device *rdev, 35 35 struct radeon_semaphore **semaphore) 36 36 { 37 - uint32_t *cpu_addr; 37 + uint64_t *cpu_addr; 38 38 int i, r; 39 39 40 40 *semaphore = kmalloc(sizeof(struct radeon_semaphore), GFP_KERNEL);
+2 -2
drivers/gpu/drm/radeon/rs400.c
··· 221 221 entry = (lower_32_bits(addr) & PAGE_MASK) | 222 222 ((upper_32_bits(addr) & 0xff) << 4); 223 223 if (flags & RADEON_GART_PAGE_READ) 224 - addr |= RS400_PTE_READABLE; 224 + entry |= RS400_PTE_READABLE; 225 225 if (flags & RADEON_GART_PAGE_WRITE) 226 - addr |= RS400_PTE_WRITEABLE; 226 + entry |= RS400_PTE_WRITEABLE; 227 227 if (!(flags & RADEON_GART_PAGE_SNOOP)) 228 228 entry |= RS400_PTE_UNSNOOPED; 229 229 entry = cpu_to_le32(entry);
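The rs400 fix above ORs the page flags into `entry` rather than `addr`: the modified `addr` was simply discarded, so every GART PTE lost its READABLE/WRITEABLE bits. The corrected packing can be checked directly (flag values below are illustrative, not the real RS400 bit positions):

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative flag bits; the real RS400_PTE_* values live in rs400 headers. */
#define GART_PAGE_READ   (1u << 0)
#define GART_PAGE_WRITE  (1u << 1)
#define PTE_READABLE     (1u << 2)
#define PTE_WRITEABLE    (1u << 3)
#define PAGE_MASK        (~0xfffull)

/* Fixed version: flags are merged into the entry that is written out.
 * The bug OR'd them into the local 'addr' copy, which was then unused. */
static uint32_t gart_entry(uint64_t addr, uint32_t flags)
{
    uint32_t entry = (uint32_t)(addr & PAGE_MASK) |
                     (uint32_t)(((addr >> 32) & 0xff) << 4);

    if (flags & GART_PAGE_READ)
        entry |= PTE_READABLE;
    if (flags & GART_PAGE_WRITE)
        entry |= PTE_WRITEABLE;
    return entry;
}
```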
-1
drivers/gpu/drm/sti/sti_hdmi.c
··· 298 298 hdmi_write(hdmi, val, HDMI_SW_DI_N_PKT_WORD2(HDMI_IFRAME_SLOT_AVI)); 299 299 300 300 val = frame[0xC]; 301 - val |= frame[0xD] << 8; 302 301 hdmi_write(hdmi, val, HDMI_SW_DI_N_PKT_WORD3(HDMI_IFRAME_SLOT_AVI)); 303 302 304 303 /* Enable transmission slot for AVI infoframe
+6
drivers/gpu/vga/vga_switcheroo.c
··· 660 660 } 661 661 EXPORT_SYMBOL(vga_switcheroo_init_domain_pm_ops); 662 662 663 + void vga_switcheroo_fini_domain_pm_ops(struct device *dev) 664 + { 665 + dev->pm_domain = NULL; 666 + } 667 + EXPORT_SYMBOL(vga_switcheroo_fini_domain_pm_ops); 668 + 663 669 static int vga_switcheroo_runtime_resume_hdmi_audio(struct device *dev) 664 670 { 665 671 struct pci_dev *pdev = to_pci_dev(dev);
+37 -9
drivers/gpu/vga/vgaarb.c
··· 41 41 #include <linux/poll.h> 42 42 #include <linux/miscdevice.h> 43 43 #include <linux/slab.h> 44 + #include <linux/screen_info.h> 44 45 45 46 #include <linux/uaccess.h> 46 47 ··· 113 112 return 1; 114 113 } 115 114 116 - #ifndef __ARCH_HAS_VGA_DEFAULT_DEVICE 117 115 /* this is only used a cookie - it should not be dereferenced */ 118 116 static struct pci_dev *vga_default; 119 - #endif 120 117 121 118 static void vga_arb_device_card_gone(struct pci_dev *pdev); 122 119 ··· 130 131 } 131 132 132 133 /* Returns the default VGA device (vgacon's babe) */ 133 - #ifndef __ARCH_HAS_VGA_DEFAULT_DEVICE 134 134 struct pci_dev *vga_default_device(void) 135 135 { 136 136 return vga_default; ··· 145 147 pci_dev_put(vga_default); 146 148 vga_default = pci_dev_get(pdev); 147 149 } 148 - #endif 149 150 150 151 static inline void vga_irq_set_state(struct vga_device *vgadev, bool state) 151 152 { ··· 580 583 /* Deal with VGA default device. Use first enabled one 581 584 * by default if arch doesn't have it's own hook 582 585 */ 583 - #ifndef __ARCH_HAS_VGA_DEFAULT_DEVICE 584 586 if (vga_default == NULL && 585 - ((vgadev->owns & VGA_RSRC_LEGACY_MASK) == VGA_RSRC_LEGACY_MASK)) 587 + ((vgadev->owns & VGA_RSRC_LEGACY_MASK) == VGA_RSRC_LEGACY_MASK)) { 588 + pr_info("vgaarb: setting as boot device: PCI:%s\n", 589 + pci_name(pdev)); 586 590 vga_set_default_device(pdev); 587 - #endif 591 + } 588 592 589 593 vga_arbiter_check_bridge_sharing(vgadev); 590 594 ··· 619 621 goto bail; 620 622 } 621 623 622 - #ifndef __ARCH_HAS_VGA_DEFAULT_DEVICE 623 624 if (vga_default == pdev) 624 625 vga_set_default_device(NULL); 625 - #endif 626 626 627 627 if (vgadev->decodes & (VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM)) 628 628 vga_decode_count--; ··· 1316 1320 pr_info("vgaarb: loaded\n"); 1317 1321 1318 1322 list_for_each_entry(vgadev, &vga_list, list) { 1323 + #if defined(CONFIG_X86) || defined(CONFIG_IA64) 1324 + /* Override I/O based detection done by vga_arbiter_add_pci_device() 1325 + * as it may 
take the wrong device (e.g. on Apple system under EFI). 1326 + * 1327 + * Select the device owning the boot framebuffer if there is one. 1328 + */ 1329 + resource_size_t start, end; 1330 + int i; 1331 + 1332 + /* Does firmware framebuffer belong to us? */ 1333 + for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 1334 + if (!(pci_resource_flags(vgadev->pdev, i) & IORESOURCE_MEM)) 1335 + continue; 1336 + 1337 + start = pci_resource_start(vgadev->pdev, i); 1338 + end = pci_resource_end(vgadev->pdev, i); 1339 + 1340 + if (!start || !end) 1341 + continue; 1342 + 1343 + if (screen_info.lfb_base < start || 1344 + (screen_info.lfb_base + screen_info.lfb_size) >= end) 1345 + continue; 1346 + if (!vga_default_device()) 1347 + pr_info("vgaarb: setting as boot device: PCI:%s\n", 1348 + pci_name(vgadev->pdev)); 1349 + else if (vgadev->pdev != vga_default_device()) 1350 + pr_info("vgaarb: overriding boot device: PCI:%s\n", 1351 + pci_name(vgadev->pdev)); 1352 + vga_set_default_device(vgadev->pdev); 1353 + } 1354 + #endif 1319 1355 if (vgadev->bridge_has_one_vga) 1320 1356 pr_info("vgaarb: bridge control possible %s\n", pci_name(vgadev->pdev)); 1321 1357 else
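The new vgaarb boot-device override walks each PCI BAR and checks whether the firmware framebuffer (`screen_info.lfb_base`/`lfb_size`) falls inside it. The containment test in isolation (note the kernel compares against the BAR's inclusive end with `>=`, so a framebuffer running exactly to the end of the BAR is rejected; that quirk is preserved here):

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Mirrors the check added to vga_arb_device_init(): the firmware
 * framebuffer "belongs" to a BAR when its base lies at or after the
 * BAR start and its end stays below the BAR end. */
static bool fb_in_bar(uint64_t bar_start, uint64_t bar_end,
                      uint64_t lfb_base, uint64_t lfb_size)
{
    if (!bar_start || !bar_end)
        return false;               /* unimplemented BAR */
    if (lfb_base < bar_start || lfb_base + lfb_size >= bar_end)
        return false;
    return true;
}
```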
+1 -1
drivers/iio/accel/bma180.c
··· 571 571 trig->ops = &bma180_trigger_ops; 572 572 iio_trigger_set_drvdata(trig, indio_dev); 573 573 data->trig = trig; 574 - indio_dev->trig = trig; 574 + indio_dev->trig = iio_trigger_get(trig); 575 575 576 576 ret = iio_trigger_register(trig); 577 577 if (ret)
+1 -1
drivers/iio/adc/ad_sigma_delta.c
··· 472 472 goto error_free_irq; 473 473 474 474 /* select default trigger */ 475 - indio_dev->trig = sigma_delta->trig; 475 + indio_dev->trig = iio_trigger_get(sigma_delta->trig); 476 476 477 477 return 0; 478 478
+7 -5
drivers/iio/adc/at91_adc.c
··· 196 196 bool done; 197 197 int irq; 198 198 u16 last_value; 199 + int chnb; 199 200 struct mutex lock; 200 201 u8 num_channels; 201 202 void __iomem *reg_base; ··· 275 274 disable_irq_nosync(irq); 276 275 iio_trigger_poll(idev->trig); 277 276 } else { 278 - st->last_value = at91_adc_readl(st, AT91_ADC_LCDR); 277 + st->last_value = at91_adc_readl(st, AT91_ADC_CHAN(st, st->chnb)); 279 278 st->done = true; 280 279 wake_up_interruptible(&st->wq_data_avail); 281 280 } ··· 352 351 unsigned int reg; 353 352 354 353 status &= at91_adc_readl(st, AT91_ADC_IMR); 355 - if (status & st->registers->drdy_mask) 354 + if (status & GENMASK(st->num_channels - 1, 0)) 356 355 handle_adc_eoc_trigger(irq, idev); 357 356 358 357 if (status & AT91RL_ADC_IER_PEN) { ··· 419 418 AT91_ADC_IER_YRDY | 420 419 AT91_ADC_IER_PRDY; 421 420 422 - if (status & st->registers->drdy_mask) 421 + if (status & GENMASK(st->num_channels - 1, 0)) 423 422 handle_adc_eoc_trigger(irq, idev); 424 423 425 424 if (status & AT91_ADC_IER_PEN) { ··· 690 689 case IIO_CHAN_INFO_RAW: 691 690 mutex_lock(&st->lock); 692 691 692 + st->chnb = chan->channel; 693 693 at91_adc_writel(st, AT91_ADC_CHER, 694 694 AT91_ADC_CH(chan->channel)); 695 - at91_adc_writel(st, AT91_ADC_IER, st->registers->drdy_mask); 695 + at91_adc_writel(st, AT91_ADC_IER, BIT(chan->channel)); 696 696 at91_adc_writel(st, AT91_ADC_CR, AT91_ADC_START); 697 697 698 698 ret = wait_event_interruptible_timeout(st->wq_data_avail, ··· 710 708 711 709 at91_adc_writel(st, AT91_ADC_CHDR, 712 710 AT91_ADC_CH(chan->channel)); 713 - at91_adc_writel(st, AT91_ADC_IDR, st->registers->drdy_mask); 711 + at91_adc_writel(st, AT91_ADC_IDR, BIT(chan->channel)); 714 712 715 713 st->last_value = 0; 716 714 st->done = false;
+1 -1
drivers/iio/adc/xilinx-xadc-core.c
··· 1126 1126 chan->address = XADC_REG_VPVN; 1127 1127 } else { 1128 1128 chan->scan_index = 15 + reg; 1129 - chan->scan_index = XADC_REG_VAUX(reg - 1); 1129 + chan->address = XADC_REG_VAUX(reg - 1); 1130 1130 } 1131 1131 num_channels++; 1132 1132 chan++;
+2 -1
drivers/iio/common/hid-sensors/hid-sensor-trigger.c
··· 122 122 dev_err(&indio_dev->dev, "Trigger Register Failed\n"); 123 123 goto error_free_trig; 124 124 } 125 - indio_dev->trig = attrb->trigger = trig; 125 + attrb->trigger = trig; 126 + indio_dev->trig = iio_trigger_get(trig); 126 127 127 128 return ret; 128 129
+1 -1
drivers/iio/common/st_sensors/st_sensors_trigger.c
··· 49 49 dev_err(&indio_dev->dev, "failed to register iio trigger.\n"); 50 50 goto iio_trigger_register_error; 51 51 } 52 - indio_dev->trig = sdata->trig; 52 + indio_dev->trig = iio_trigger_get(sdata->trig); 53 53 54 54 return 0; 55 55
+1 -1
drivers/iio/gyro/itg3200_buffer.c
··· 132 132 goto error_free_irq; 133 133 134 134 /* select default trigger */ 135 - indio_dev->trig = st->trig; 135 + indio_dev->trig = iio_trigger_get(st->trig); 136 136 137 137 return 0; 138 138
+1 -1
drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
··· 135 135 ret = iio_trigger_register(st->trig); 136 136 if (ret) 137 137 goto error_free_irq; 138 - indio_dev->trig = st->trig; 138 + indio_dev->trig = iio_trigger_get(st->trig); 139 139 140 140 return 0; 141 141
+1 -1
drivers/iio/inkern.c
··· 178 178 index = of_property_match_string(np, "io-channel-names", 179 179 name); 180 180 chan = of_iio_channel_get(np, index); 181 - if (!IS_ERR(chan)) 181 + if (!IS_ERR(chan) || PTR_ERR(chan) == -EPROBE_DEFER) 182 182 break; 183 183 else if (name && index >= 0) { 184 184 pr_err("ERROR: could not get IIO channel %s:%s(%i)\n",
+30 -22
drivers/iio/magnetometer/st_magn_core.c
··· 42 42 #define ST_MAGN_FS_AVL_5600MG 5600 43 43 #define ST_MAGN_FS_AVL_8000MG 8000 44 44 #define ST_MAGN_FS_AVL_8100MG 8100 45 - #define ST_MAGN_FS_AVL_10000MG 10000 45 + #define ST_MAGN_FS_AVL_12000MG 12000 46 + #define ST_MAGN_FS_AVL_16000MG 16000 46 47 47 48 /* CUSTOM VALUES FOR SENSOR 1 */ 48 49 #define ST_MAGN_1_WAI_EXP 0x3c ··· 70 69 #define ST_MAGN_1_FS_AVL_4700_VAL 0x05 71 70 #define ST_MAGN_1_FS_AVL_5600_VAL 0x06 72 71 #define ST_MAGN_1_FS_AVL_8100_VAL 0x07 73 - #define ST_MAGN_1_FS_AVL_1300_GAIN_XY 1100 74 - #define ST_MAGN_1_FS_AVL_1900_GAIN_XY 855 75 - #define ST_MAGN_1_FS_AVL_2500_GAIN_XY 670 76 - #define ST_MAGN_1_FS_AVL_4000_GAIN_XY 450 77 - #define ST_MAGN_1_FS_AVL_4700_GAIN_XY 400 78 - #define ST_MAGN_1_FS_AVL_5600_GAIN_XY 330 79 - #define ST_MAGN_1_FS_AVL_8100_GAIN_XY 230 80 - #define ST_MAGN_1_FS_AVL_1300_GAIN_Z 980 81 - #define ST_MAGN_1_FS_AVL_1900_GAIN_Z 760 82 - #define ST_MAGN_1_FS_AVL_2500_GAIN_Z 600 83 - #define ST_MAGN_1_FS_AVL_4000_GAIN_Z 400 84 - #define ST_MAGN_1_FS_AVL_4700_GAIN_Z 355 85 - #define ST_MAGN_1_FS_AVL_5600_GAIN_Z 295 86 - #define ST_MAGN_1_FS_AVL_8100_GAIN_Z 205 72 + #define ST_MAGN_1_FS_AVL_1300_GAIN_XY 909 73 + #define ST_MAGN_1_FS_AVL_1900_GAIN_XY 1169 74 + #define ST_MAGN_1_FS_AVL_2500_GAIN_XY 1492 75 + #define ST_MAGN_1_FS_AVL_4000_GAIN_XY 2222 76 + #define ST_MAGN_1_FS_AVL_4700_GAIN_XY 2500 77 + #define ST_MAGN_1_FS_AVL_5600_GAIN_XY 3030 78 + #define ST_MAGN_1_FS_AVL_8100_GAIN_XY 4347 79 + #define ST_MAGN_1_FS_AVL_1300_GAIN_Z 1020 80 + #define ST_MAGN_1_FS_AVL_1900_GAIN_Z 1315 81 + #define ST_MAGN_1_FS_AVL_2500_GAIN_Z 1666 82 + #define ST_MAGN_1_FS_AVL_4000_GAIN_Z 2500 83 + #define ST_MAGN_1_FS_AVL_4700_GAIN_Z 2816 84 + #define ST_MAGN_1_FS_AVL_5600_GAIN_Z 3389 85 + #define ST_MAGN_1_FS_AVL_8100_GAIN_Z 4878 87 86 #define ST_MAGN_1_MULTIREAD_BIT false 88 87 89 88 /* CUSTOM VALUES FOR SENSOR 2 */ ··· 106 105 #define ST_MAGN_2_FS_MASK 0x60 107 106 #define ST_MAGN_2_FS_AVL_4000_VAL 0x00 108 107 #define 
ST_MAGN_2_FS_AVL_8000_VAL 0x01 109 - #define ST_MAGN_2_FS_AVL_10000_VAL 0x02 110 - #define ST_MAGN_2_FS_AVL_4000_GAIN 430 111 - #define ST_MAGN_2_FS_AVL_8000_GAIN 230 112 - #define ST_MAGN_2_FS_AVL_10000_GAIN 230 108 + #define ST_MAGN_2_FS_AVL_12000_VAL 0x02 109 + #define ST_MAGN_2_FS_AVL_16000_VAL 0x03 110 + #define ST_MAGN_2_FS_AVL_4000_GAIN 146 111 + #define ST_MAGN_2_FS_AVL_8000_GAIN 292 112 + #define ST_MAGN_2_FS_AVL_12000_GAIN 438 113 + #define ST_MAGN_2_FS_AVL_16000_GAIN 584 113 114 #define ST_MAGN_2_MULTIREAD_BIT false 114 115 #define ST_MAGN_2_OUT_X_L_ADDR 0x28 115 116 #define ST_MAGN_2_OUT_Y_L_ADDR 0x2a ··· 269 266 .gain = ST_MAGN_2_FS_AVL_8000_GAIN, 270 267 }, 271 268 [2] = { 272 - .num = ST_MAGN_FS_AVL_10000MG, 273 - .value = ST_MAGN_2_FS_AVL_10000_VAL, 274 - .gain = ST_MAGN_2_FS_AVL_10000_GAIN, 269 + .num = ST_MAGN_FS_AVL_12000MG, 270 + .value = ST_MAGN_2_FS_AVL_12000_VAL, 271 + .gain = ST_MAGN_2_FS_AVL_12000_GAIN, 272 + }, 273 + [3] = { 274 + .num = ST_MAGN_FS_AVL_16000MG, 275 + .value = ST_MAGN_2_FS_AVL_16000_VAL, 276 + .gain = ST_MAGN_2_FS_AVL_16000_GAIN, 275 277 }, 276 278 }, 277 279 },
+1 -1
drivers/infiniband/hw/mlx4/main.c
··· 1680 1680 goto unlock; 1681 1681 1682 1682 update_params.smac_index = new_smac_index; 1683 - if (mlx4_update_qp(ibdev->dev, &qp->mqp, MLX4_UPDATE_QP_SMAC, 1683 + if (mlx4_update_qp(ibdev->dev, qp->mqp.qpn, MLX4_UPDATE_QP_SMAC, 1684 1684 &update_params)) { 1685 1685 release_mac = new_smac; 1686 1686 goto unlock;
+1 -1
drivers/infiniband/hw/mlx4/qp.c
··· 1682 1682 MLX4_IB_LINK_TYPE_ETH; 1683 1683 if (dev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) { 1684 1684 /* set QP to receive both tunneled & non-tunneled packets */ 1685 - if (!(context->flags & (1 << MLX4_RSS_QPC_FLAG_OFFSET))) 1685 + if (!(context->flags & cpu_to_be32(1 << MLX4_RSS_QPC_FLAG_OFFSET))) 1686 1686 context->srqn = cpu_to_be32(7 << 28); 1687 1687 } 1688 1688 }
+6
drivers/infiniband/ulp/ipoib/ipoib.h
··· 131 131 u8 hwaddr[INFINIBAND_ALEN]; 132 132 }; 133 133 134 + static inline struct ipoib_cb *ipoib_skb_cb(const struct sk_buff *skb) 135 + { 136 + BUILD_BUG_ON(sizeof(skb->cb) < sizeof(struct ipoib_cb)); 137 + return (struct ipoib_cb *)skb->cb; 138 + } 139 + 134 140 /* Used for all multicast joins (broadcast, IPv4 mcast and IPv6 mcast) */ 135 141 struct ipoib_mcast { 136 142 struct ib_sa_mcmember_rec mcmember;
+2 -2
drivers/infiniband/ulp/ipoib/ipoib_main.c
··· 716 716 { 717 717 struct ipoib_dev_priv *priv = netdev_priv(dev); 718 718 struct ipoib_neigh *neigh; 719 - struct ipoib_cb *cb = (struct ipoib_cb *) skb->cb; 719 + struct ipoib_cb *cb = ipoib_skb_cb(skb); 720 720 struct ipoib_header *header; 721 721 unsigned long flags; 722 722 ··· 813 813 const void *daddr, const void *saddr, unsigned len) 814 814 { 815 815 struct ipoib_header *header; 816 - struct ipoib_cb *cb = (struct ipoib_cb *) skb->cb; 816 + struct ipoib_cb *cb = ipoib_skb_cb(skb); 817 817 818 818 header = (struct ipoib_header *) skb_push(skb, sizeof *header); 819 819
+11 -9
drivers/infiniband/ulp/isert/ib_isert.c
··· 586 586 init_completion(&isert_conn->conn_wait); 587 587 init_completion(&isert_conn->conn_wait_comp_err); 588 588 kref_init(&isert_conn->conn_kref); 589 - kref_get(&isert_conn->conn_kref); 590 589 mutex_init(&isert_conn->conn_mutex); 591 590 spin_lock_init(&isert_conn->conn_lock); 592 591 INIT_LIST_HEAD(&isert_conn->conn_fr_pool); 593 592 594 593 cma_id->context = isert_conn; 595 594 isert_conn->conn_cm_id = cma_id; 596 - isert_conn->responder_resources = event->param.conn.responder_resources; 597 - isert_conn->initiator_depth = event->param.conn.initiator_depth; 598 - pr_debug("Using responder_resources: %u initiator_depth: %u\n", 599 - isert_conn->responder_resources, isert_conn->initiator_depth); 600 595 601 596 isert_conn->login_buf = kzalloc(ISCSI_DEF_MAX_RECV_SEG_LEN + 602 597 ISER_RX_LOGIN_SIZE, GFP_KERNEL); ··· 637 642 ret = PTR_ERR(device); 638 643 goto out_rsp_dma_map; 639 644 } 645 + 646 + /* Set max inflight RDMA READ requests */ 647 + isert_conn->initiator_depth = min_t(u8, 648 + event->param.conn.initiator_depth, 649 + device->dev_attr.max_qp_init_rd_atom); 650 + pr_debug("Using initiator_depth: %u\n", isert_conn->initiator_depth); 640 651 641 652 isert_conn->conn_device = device; 642 653 isert_conn->conn_pd = ib_alloc_pd(isert_conn->conn_device->ib_device); ··· 747 746 static void 748 747 isert_connected_handler(struct rdma_cm_id *cma_id) 749 748 { 750 - return; 749 + struct isert_conn *isert_conn = cma_id->context; 750 + 751 + kref_get(&isert_conn->conn_kref); 751 752 } 752 753 753 754 static void ··· 801 798 802 799 wake_up: 803 800 complete(&isert_conn->conn_wait); 804 - isert_put_conn(isert_conn); 805 801 } 806 802 807 803 static void ··· 3069 3067 int ret; 3070 3068 3071 3069 memset(&cp, 0, sizeof(struct rdma_conn_param)); 3072 - cp.responder_resources = isert_conn->responder_resources; 3073 3070 cp.initiator_depth = isert_conn->initiator_depth; 3074 3071 cp.retry_count = 7; 3075 3072 cp.rnr_retry_count = 7; ··· 3216 3215 
pr_debug("isert_wait_conn: Starting \n"); 3217 3216 3218 3217 mutex_lock(&isert_conn->conn_mutex); 3219 - if (isert_conn->conn_cm_id) { 3218 + if (isert_conn->conn_cm_id && !isert_conn->disconnect) { 3220 3219 pr_debug("Calling rdma_disconnect from isert_wait_conn\n"); 3221 3220 rdma_disconnect(isert_conn->conn_cm_id); 3222 3221 } ··· 3235 3234 wait_for_completion(&isert_conn->conn_wait_comp_err); 3236 3235 3237 3236 wait_for_completion(&isert_conn->conn_wait); 3237 + isert_put_conn(isert_conn); 3238 3238 } 3239 3239 3240 3240 static void isert_free_conn(struct iscsi_conn *conn)
-8
drivers/input/keyboard/atkbd.c
··· 1791 1791 { 1792 1792 .matches = { 1793 1793 DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"), 1794 - DMI_MATCH(DMI_PRODUCT_NAME, "LW25-B7HV"), 1795 - }, 1796 - .callback = atkbd_deactivate_fixup, 1797 - }, 1798 - { 1799 - .matches = { 1800 - DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"), 1801 - DMI_MATCH(DMI_PRODUCT_NAME, "P1-J273B"), 1802 1794 }, 1803 1795 .callback = atkbd_deactivate_fixup, 1804 1796 },
+2 -2
drivers/input/keyboard/cap1106.c
··· 33 33 #define CAP1106_REG_SENSOR_CONFIG 0x22 34 34 #define CAP1106_REG_SENSOR_CONFIG2 0x23 35 35 #define CAP1106_REG_SAMPLING_CONFIG 0x24 36 - #define CAP1106_REG_CALIBRATION 0x25 37 - #define CAP1106_REG_INT_ENABLE 0x26 36 + #define CAP1106_REG_CALIBRATION 0x26 37 + #define CAP1106_REG_INT_ENABLE 0x27 38 38 #define CAP1106_REG_REPEAT_RATE 0x28 39 39 #define CAP1106_REG_MT_CONFIG 0x2a 40 40 #define CAP1106_REG_MT_PATTERN_CONFIG 0x2b
+5 -4
drivers/input/keyboard/matrix_keypad.c
··· 332 332 } 333 333 334 334 if (pdata->clustered_irq > 0) { 335 - err = request_irq(pdata->clustered_irq, 335 + err = request_any_context_irq(pdata->clustered_irq, 336 336 matrix_keypad_interrupt, 337 337 pdata->clustered_irq_flags, 338 338 "matrix-keypad", keypad); 339 - if (err) { 339 + if (err < 0) { 340 340 dev_err(&pdev->dev, 341 341 "Unable to acquire clustered interrupt\n"); 342 342 goto err_free_rows; 343 343 } 344 344 } else { 345 345 for (i = 0; i < pdata->num_row_gpios; i++) { 346 - err = request_irq(gpio_to_irq(pdata->row_gpios[i]), 346 + err = request_any_context_irq( 347 + gpio_to_irq(pdata->row_gpios[i]), 347 348 matrix_keypad_interrupt, 348 349 IRQF_TRIGGER_RISING | 349 350 IRQF_TRIGGER_FALLING, 350 351 "matrix-keypad", keypad); 351 - if (err) { 352 + if (err < 0) { 352 353 dev_err(&pdev->dev, 353 354 "Unable to acquire interrupt for GPIO line %i\n", 354 355 pdata->row_gpios[i]);
+4
drivers/input/mouse/alps.c
··· 2373 2373 dev2->keybit[BIT_WORD(BTN_LEFT)] = 2374 2374 BIT_MASK(BTN_LEFT) | BIT_MASK(BTN_MIDDLE) | BIT_MASK(BTN_RIGHT); 2375 2375 2376 + __set_bit(INPUT_PROP_POINTER, dev2->propbit); 2377 + if (priv->flags & ALPS_DUALPOINT) 2378 + __set_bit(INPUT_PROP_POINTING_STICK, dev2->propbit); 2379 + 2376 2380 if (input_register_device(priv->dev2)) 2377 2381 goto init_fail; 2378 2382
+11
drivers/input/mouse/elantech.c
··· 1331 1331 if (param[1] == 0) 1332 1332 return true; 1333 1333 1334 + /* 1335 + * Some models have a revision higher than 20, meaning param[2] may 1336 + * be 10 or 20; skip the rates check for these. 1337 + */ 1338 + if (param[0] == 0x46 && (param[1] & 0xef) == 0x0f && param[2] < 40) 1339 + return true; 1340 + 1334 1341 for (i = 0; i < ARRAY_SIZE(rates); i++) 1335 1342 if (param[2] == rates[i]) 1336 1343 return false; ··· 1614 1607 tp_dev->keybit[BIT_WORD(BTN_LEFT)] = 1615 1608 BIT_MASK(BTN_LEFT) | BIT_MASK(BTN_MIDDLE) | 1616 1609 BIT_MASK(BTN_RIGHT); 1610 + 1611 + __set_bit(INPUT_PROP_POINTER, tp_dev->propbit); 1612 + __set_bit(INPUT_PROP_POINTING_STICK, tp_dev->propbit); 1613 + 1617 1614 error = input_register_device(etd->tp_dev); 1618 1615 if (error < 0) 1619 1616 goto init_fail_tp_reg;
+2
drivers/input/mouse/psmouse-base.c
··· 670 670 __set_bit(REL_X, input_dev->relbit); 671 671 __set_bit(REL_Y, input_dev->relbit); 672 672 673 + __set_bit(INPUT_PROP_POINTER, input_dev->propbit); 674 + 673 675 psmouse->set_rate = psmouse_set_rate; 674 676 psmouse->set_resolution = psmouse_set_resolution; 675 677 psmouse->poll = psmouse_poll;
+52 -16
drivers/input/mouse/synaptics.c
··· 629 629 ((buf[0] & 0x04) >> 1) | 630 630 ((buf[3] & 0x04) >> 2)); 631 631 632 + if ((SYN_CAP_ADV_GESTURE(priv->ext_cap_0c) || 633 + SYN_CAP_IMAGE_SENSOR(priv->ext_cap_0c)) && 634 + hw->w == 2) { 635 + synaptics_parse_agm(buf, priv, hw); 636 + return 1; 637 + } 638 + 639 + hw->x = (((buf[3] & 0x10) << 8) | 640 + ((buf[1] & 0x0f) << 8) | 641 + buf[4]); 642 + hw->y = (((buf[3] & 0x20) << 7) | 643 + ((buf[1] & 0xf0) << 4) | 644 + buf[5]); 645 + hw->z = buf[2]; 646 + 632 647 hw->left = (buf[0] & 0x01) ? 1 : 0; 633 648 hw->right = (buf[0] & 0x02) ? 1 : 0; 634 649 635 - if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) { 650 + if (SYN_CAP_FORCEPAD(priv->ext_cap_0c)) { 651 + /* 652 + * ForcePads, like Clickpads, use middle button 653 + * bits to report primary button clicks. 654 + * Unfortunately they report primary button not 655 + * only when user presses on the pad above certain 656 + * threshold, but also when there are more than one 657 + * finger on the touchpad, which interferes with 658 + * out multi-finger gestures. 659 + */ 660 + if (hw->z == 0) { 661 + /* No contacts */ 662 + priv->press = priv->report_press = false; 663 + } else if (hw->w >= 4 && ((buf[0] ^ buf[3]) & 0x01)) { 664 + /* 665 + * Single-finger touch with pressure above 666 + * the threshold. If pressure stays long 667 + * enough, we'll start reporting primary 668 + * button. We rely on the device continuing 669 + * sending data even if finger does not 670 + * move. 
671 + */ 672 + if (!priv->press) { 673 + priv->press_start = jiffies; 674 + priv->press = true; 675 + } else if (time_after(jiffies, 676 + priv->press_start + 677 + msecs_to_jiffies(50))) { 678 + priv->report_press = true; 679 + } 680 + } else { 681 + priv->press = false; 682 + } 683 + 684 + hw->left = priv->report_press; 685 + 686 + } else if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) { 636 687 /* 637 688 * Clickpad's button is transmitted as middle button, 638 689 * however, since it is primary button, we will report ··· 701 650 hw->up = ((buf[0] ^ buf[3]) & 0x01) ? 1 : 0; 702 651 hw->down = ((buf[0] ^ buf[3]) & 0x02) ? 1 : 0; 703 652 } 704 - 705 - if ((SYN_CAP_ADV_GESTURE(priv->ext_cap_0c) || 706 - SYN_CAP_IMAGE_SENSOR(priv->ext_cap_0c)) && 707 - hw->w == 2) { 708 - synaptics_parse_agm(buf, priv, hw); 709 - return 1; 710 - } 711 - 712 - hw->x = (((buf[3] & 0x10) << 8) | 713 - ((buf[1] & 0x0f) << 8) | 714 - buf[4]); 715 - hw->y = (((buf[3] & 0x20) << 7) | 716 - ((buf[1] & 0xf0) << 4) | 717 - buf[5]); 718 - hw->z = buf[2]; 719 653 720 654 if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) && 721 655 ((buf[0] ^ buf[3]) & 0x02)) {
+11
drivers/input/mouse/synaptics.h
··· 78 78 * 2 0x08 image sensor image sensor tracks 5 fingers, but only 79 79 * reports 2. 80 80 * 2 0x20 report min query 0x0f gives min coord reported 81 + * 2 0x80 forcepad forcepad is a variant of clickpad that 82 + * does not have physical buttons but rather 83 + * uses pressure above a certain threshold to 84 + * report primary clicks. Forcepads also have 85 + * the clickpad bit set. 81 86 */ 82 87 #define SYN_CAP_CLICKPAD(ex0c) ((ex0c) & 0x100000) /* 1-button ClickPad */ 83 88 #define SYN_CAP_CLICKPAD2BTN(ex0c) ((ex0c) & 0x000100) /* 2-button ClickPad */ ··· 91 86 #define SYN_CAP_ADV_GESTURE(ex0c) ((ex0c) & 0x080000) 92 87 #define SYN_CAP_REDUCED_FILTERING(ex0c) ((ex0c) & 0x000400) 93 88 #define SYN_CAP_IMAGE_SENSOR(ex0c) ((ex0c) & 0x000800) 89 + #define SYN_CAP_FORCEPAD(ex0c) ((ex0c) & 0x008000) 94 90 95 91 /* synaptics modes query bits */ 96 92 #define SYN_MODE_ABSOLUTE(m) ((m) & (1 << 7)) ··· 183 177 */ 184 178 struct synaptics_hw_state agm; 185 179 bool agm_pending; /* new AGM packet received */ 180 + 181 + /* ForcePad handling */ 182 + unsigned long press_start; 183 + bool press; 184 + bool report_press; 186 185 }; 187 186 188 187 void synaptics_module_init(void);
+6
drivers/input/mouse/synaptics_usb.c
··· 387 387 __set_bit(EV_REL, input_dev->evbit); 388 388 __set_bit(REL_X, input_dev->relbit); 389 389 __set_bit(REL_Y, input_dev->relbit); 390 + __set_bit(INPUT_PROP_POINTING_STICK, input_dev->propbit); 390 391 input_set_abs_params(input_dev, ABS_PRESSURE, 0, 127, 0, 0); 391 392 } else { 392 393 input_set_abs_params(input_dev, ABS_X, ··· 401 400 __set_bit(BTN_TOOL_DOUBLETAP, input_dev->keybit); 402 401 __set_bit(BTN_TOOL_TRIPLETAP, input_dev->keybit); 403 402 } 403 + 404 + if (synusb->flags & SYNUSB_TOUCHSCREEN) 405 + __set_bit(INPUT_PROP_DIRECT, input_dev->propbit); 406 + else 407 + __set_bit(INPUT_PROP_POINTER, input_dev->propbit); 404 408 405 409 __set_bit(BTN_LEFT, input_dev->keybit); 406 410 __set_bit(BTN_RIGHT, input_dev->keybit);
+3
drivers/input/mouse/trackpoint.c
··· 393 393 if ((button_info & 0x0f) >= 3) 394 394 __set_bit(BTN_MIDDLE, psmouse->dev->keybit); 395 395 396 + __set_bit(INPUT_PROP_POINTER, psmouse->dev->propbit); 397 + __set_bit(INPUT_PROP_POINTING_STICK, psmouse->dev->propbit); 398 + 396 399 trackpoint_defaults(psmouse->private); 397 400 398 401 error = trackpoint_power_on_reset(&psmouse->ps2dev);
+15
drivers/input/serio/i8042-x86ia64io.h
··· 465 465 DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"), 466 466 }, 467 467 }, 468 + { 469 + /* Avatar AVIU-145A6 */ 470 + .matches = { 471 + DMI_MATCH(DMI_SYS_VENDOR, "Intel"), 472 + DMI_MATCH(DMI_PRODUCT_NAME, "IC4I"), 473 + }, 474 + }, 468 475 { } 469 476 }; 470 477 ··· 613 606 .matches = { 614 607 DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 615 608 DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"), 609 + }, 610 + }, 611 + { 612 + /* Fujitsu U574 laptop */ 613 + /* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */ 614 + .matches = { 615 + DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), 616 + DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U574"), 616 617 }, 617 618 }, 618 619 { }
+2
drivers/input/serio/i8042.c
··· 1254 1254 } else { 1255 1255 snprintf(serio->name, sizeof(serio->name), "i8042 AUX%d port", idx); 1256 1256 snprintf(serio->phys, sizeof(serio->phys), I8042_MUX_PHYS_DESC, idx + 1); 1257 + strlcpy(serio->firmware_id, i8042_aux_firmware_id, 1258 + sizeof(serio->firmware_id)); 1257 1259 } 1258 1260 1259 1261 port->serio = serio;
+39 -8
drivers/input/serio/serport.c
··· 21 21 #include <linux/init.h> 22 22 #include <linux/serio.h> 23 23 #include <linux/tty.h> 24 + #include <linux/compat.h> 24 25 25 26 MODULE_AUTHOR("Vojtech Pavlik <vojtech@ucw.cz>"); 26 27 MODULE_DESCRIPTION("Input device TTY line discipline"); ··· 199 198 return 0; 200 199 } 201 200 201 + static void serport_set_type(struct tty_struct *tty, unsigned long type) 202 + { 203 + struct serport *serport = tty->disc_data; 204 + 205 + serport->id.proto = type & 0x000000ff; 206 + serport->id.id = (type & 0x0000ff00) >> 8; 207 + serport->id.extra = (type & 0x00ff0000) >> 16; 208 + } 209 + 202 210 /* 203 211 * serport_ldisc_ioctl() allows to set the port protocol, and device ID 204 212 */ 205 213 206 - static int serport_ldisc_ioctl(struct tty_struct * tty, struct file * file, unsigned int cmd, unsigned long arg) 214 + static int serport_ldisc_ioctl(struct tty_struct *tty, struct file *file, 215 + unsigned int cmd, unsigned long arg) 207 216 { 208 - struct serport *serport = (struct serport*) tty->disc_data; 209 - unsigned long type; 210 - 211 217 if (cmd == SPIOCSTYPE) { 218 + unsigned long type; 219 + 212 220 if (get_user(type, (unsigned long __user *) arg)) 213 221 return -EFAULT; 214 222 215 - serport->id.proto = type & 0x000000ff; 216 - serport->id.id = (type & 0x0000ff00) >> 8; 217 - serport->id.extra = (type & 0x00ff0000) >> 16; 218 - 223 + serport_set_type(tty, type); 219 224 return 0; 220 225 } 221 226 222 227 return -EINVAL; 223 228 } 229 + 230 + #ifdef CONFIG_COMPAT 231 + #define COMPAT_SPIOCSTYPE _IOW('q', 0x01, compat_ulong_t) 232 + static long serport_ldisc_compat_ioctl(struct tty_struct *tty, 233 + struct file *file, 234 + unsigned int cmd, unsigned long arg) 235 + { 236 + if (cmd == COMPAT_SPIOCSTYPE) { 237 + void __user *uarg = compat_ptr(arg); 238 + compat_ulong_t compat_type; 239 + 240 + if (get_user(compat_type, (compat_ulong_t __user *)uarg)) 241 + return -EFAULT; 242 + 243 + serport_set_type(tty, compat_type); 244 + return 0; 245 + } 246 + 247 + 
return -EINVAL; 248 + } 249 + #endif 224 250 225 251 static void serport_ldisc_write_wakeup(struct tty_struct * tty) 226 252 { ··· 271 243 .close = serport_ldisc_close, 272 244 .read = serport_ldisc_read, 273 245 .ioctl = serport_ldisc_ioctl, 246 + #ifdef CONFIG_COMPAT 247 + .compat_ioctl = serport_ldisc_compat_ioctl, 248 + #endif 274 249 .receive_buf = serport_ldisc_receive, 275 250 .write_wakeup = serport_ldisc_write_wakeup 276 251 };
+19 -6
drivers/input/touchscreen/atmel_mxt_ts.c
··· 837 837 count = data->msg_buf[0]; 838 838 839 839 if (count == 0) { 840 - dev_warn(dev, "Interrupt triggered but zero messages\n"); 840 + /* 841 + * This condition is caused by the CHG line being configured 842 + * in Mode 0. It results in unnecessary I2C operations but it 843 + * is benign. 844 + */ 845 + dev_dbg(dev, "Interrupt triggered but zero messages\n"); 841 846 return IRQ_NONE; 842 847 } else if (count > data->max_reportid) { 843 848 dev_err(dev, "T44 count %d exceeded max report id\n", count); ··· 1379 1374 return 0; 1380 1375 } 1381 1376 1377 + static void mxt_free_input_device(struct mxt_data *data) 1378 + { 1379 + if (data->input_dev) { 1380 + input_unregister_device(data->input_dev); 1381 + data->input_dev = NULL; 1382 + } 1383 + } 1384 + 1382 1385 static void mxt_free_object_table(struct mxt_data *data) 1383 1386 { 1384 - input_unregister_device(data->input_dev); 1385 - data->input_dev = NULL; 1386 - 1387 1387 kfree(data->object_table); 1388 1388 data->object_table = NULL; 1389 1389 kfree(data->msg_buf); ··· 1967 1957 ret = mxt_lookup_bootloader_address(data, 0); 1968 1958 if (ret) 1969 1959 goto release_firmware; 1960 + 1961 + mxt_free_input_device(data); 1962 + mxt_free_object_table(data); 1970 1963 } else { 1971 1964 enable_irq(data->irq); 1972 1965 } 1973 1966 1974 - mxt_free_object_table(data); 1975 1967 reinit_completion(&data->bl_completion); 1976 1968 1977 1969 ret = mxt_check_bootloader(data, MXT_WAITING_BOOTLOAD_CMD, false); ··· 2222 2210 return 0; 2223 2211 2224 2212 err_free_object: 2213 + mxt_free_input_device(data); 2225 2214 mxt_free_object_table(data); 2226 2215 err_free_irq: 2227 2216 free_irq(client->irq, data); ··· 2237 2224 2238 2225 sysfs_remove_group(&client->dev.kobj, &mxt_attr_group); 2239 2226 free_irq(data->irq, data); 2240 - input_unregister_device(data->input_dev); 2227 + mxt_free_input_device(data); 2241 2228 mxt_free_object_table(data); 2242 2229 kfree(data); 2243 2230
+1 -1
drivers/input/touchscreen/wm9712.c
··· 41 41 */ 42 42 static int rpu = 8; 43 43 module_param(rpu, int, 0); 44 - MODULE_PARM_DESC(rpu, "Set internal pull up resitor for pen detect."); 44 + MODULE_PARM_DESC(rpu, "Set internal pull up resistor for pen detect."); 45 45 46 46 /* 47 47 * Set current used for pressure measurement.
+1 -1
drivers/input/touchscreen/wm9713.c
··· 41 41 */ 42 42 static int rpu = 8; 43 43 module_param(rpu, int, 0); 44 - MODULE_PARM_DESC(rpu, "Set internal pull up resitor for pen detect."); 44 + MODULE_PARM_DESC(rpu, "Set internal pull up resistor for pen detect."); 45 45 46 46 /* 47 47 * Set current used for pressure measurement.
+73 -54
drivers/iommu/arm-smmu.c
··· 146 146 #define ID0_CTTW (1 << 14) 147 147 #define ID0_NUMIRPT_SHIFT 16 148 148 #define ID0_NUMIRPT_MASK 0xff 149 + #define ID0_NUMSIDB_SHIFT 9 150 + #define ID0_NUMSIDB_MASK 0xf 149 151 #define ID0_NUMSMRG_SHIFT 0 150 152 #define ID0_NUMSMRG_MASK 0xff 151 153 ··· 526 524 master->of_node = masterspec->np; 527 525 master->cfg.num_streamids = masterspec->args_count; 528 526 529 - for (i = 0; i < master->cfg.num_streamids; ++i) 530 - master->cfg.streamids[i] = masterspec->args[i]; 527 + for (i = 0; i < master->cfg.num_streamids; ++i) { 528 + u16 streamid = masterspec->args[i]; 531 529 530 + if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) && 531 + (streamid >= smmu->num_mapping_groups)) { 532 + dev_err(dev, 533 + "stream ID for master device %s greater than maximum allowed (%d)\n", 534 + masterspec->np->name, smmu->num_mapping_groups); 535 + return -ERANGE; 536 + } 537 + master->cfg.streamids[i] = streamid; 538 + } 532 539 return insert_smmu_master(smmu, master); 533 540 } 534 541 ··· 634 623 635 624 if (fsr & FSR_IGN) 636 625 dev_err_ratelimited(smmu->dev, 637 - "Unexpected context fault (fsr 0x%u)\n", 626 + "Unexpected context fault (fsr 0x%x)\n", 638 627 fsr); 639 628 640 629 fsynr = readl_relaxed(cb_base + ARM_SMMU_CB_FSYNR0); ··· 763 752 reg = (TTBCR2_ADDR_36 << TTBCR2_SEP_SHIFT); 764 753 break; 765 754 case 39: 755 + case 40: 766 756 reg = (TTBCR2_ADDR_40 << TTBCR2_SEP_SHIFT); 767 757 break; 768 758 case 42: ··· 785 773 reg |= (TTBCR2_ADDR_36 << TTBCR2_PASIZE_SHIFT); 786 774 break; 787 775 case 39: 776 + case 40: 788 777 reg |= (TTBCR2_ADDR_40 << TTBCR2_PASIZE_SHIFT); 789 778 break; 790 779 case 42: ··· 856 843 reg |= TTBCR_EAE | 857 844 (TTBCR_SH_IS << TTBCR_SH0_SHIFT) | 858 845 (TTBCR_RGN_WBWA << TTBCR_ORGN0_SHIFT) | 859 - (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT) | 860 - (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT); 846 + (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT); 847 + 848 + if (!stage1) 849 + reg |= (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT); 850 + 861 851 
writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR); 862 852 863 853 /* MAIR0 (stage-1 only) */ ··· 884 868 static int arm_smmu_init_domain_context(struct iommu_domain *domain, 885 869 struct arm_smmu_device *smmu) 886 870 { 887 - int irq, ret, start; 871 + int irq, start, ret = 0; 872 + unsigned long flags; 888 873 struct arm_smmu_domain *smmu_domain = domain->priv; 889 874 struct arm_smmu_cfg *cfg = &smmu_domain->cfg; 875 + 876 + spin_lock_irqsave(&smmu_domain->lock, flags); 877 + if (smmu_domain->smmu) 878 + goto out_unlock; 890 879 891 880 if (smmu->features & ARM_SMMU_FEAT_TRANS_NESTED) { 892 881 /* ··· 911 890 ret = __arm_smmu_alloc_bitmap(smmu->context_map, start, 912 891 smmu->num_context_banks); 913 892 if (IS_ERR_VALUE(ret)) 914 - return ret; 893 + goto out_unlock; 915 894 916 895 cfg->cbndx = ret; 917 896 if (smmu->version == 1) { ··· 921 900 cfg->irptndx = cfg->cbndx; 922 901 } 923 902 903 + ACCESS_ONCE(smmu_domain->smmu) = smmu; 904 + arm_smmu_init_context_bank(smmu_domain); 905 + spin_unlock_irqrestore(&smmu_domain->lock, flags); 906 + 924 907 irq = smmu->irqs[smmu->num_global_irqs + cfg->irptndx]; 925 908 ret = request_irq(irq, arm_smmu_context_fault, IRQF_SHARED, 926 909 "arm-smmu-context-fault", domain); ··· 932 907 dev_err(smmu->dev, "failed to request context IRQ %d (%u)\n", 933 908 cfg->irptndx, irq); 934 909 cfg->irptndx = INVALID_IRPTNDX; 935 - goto out_free_context; 936 910 } 937 911 938 - smmu_domain->smmu = smmu; 939 - arm_smmu_init_context_bank(smmu_domain); 940 912 return 0; 941 913 942 - out_free_context: 943 - __arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx); 914 + out_unlock: 915 + spin_unlock_irqrestore(&smmu_domain->lock, flags); 944 916 return ret; 945 917 } 946 918 ··· 997 975 { 998 976 pgtable_t table = pmd_pgtable(*pmd); 999 977 1000 - pgtable_page_dtor(table); 1001 978 __free_page(table); 1002 979 } 1003 980 ··· 1129 1108 void __iomem *gr0_base = ARM_SMMU_GR0(smmu); 1130 1109 struct arm_smmu_smr *smrs = cfg->smrs; 1131 1110 
1111 + if (!smrs) 1112 + return; 1113 + 1132 1114 /* Invalidate the SMRs before freeing back to the allocator */ 1133 1115 for (i = 0; i < cfg->num_streamids; ++i) { 1134 1116 u8 idx = smrs[i].idx; ··· 1142 1118 1143 1119 cfg->smrs = NULL; 1144 1120 kfree(smrs); 1145 - } 1146 - 1147 - static void arm_smmu_bypass_stream_mapping(struct arm_smmu_device *smmu, 1148 - struct arm_smmu_master_cfg *cfg) 1149 - { 1150 - int i; 1151 - void __iomem *gr0_base = ARM_SMMU_GR0(smmu); 1152 - 1153 - for (i = 0; i < cfg->num_streamids; ++i) { 1154 - u16 sid = cfg->streamids[i]; 1155 - 1156 - writel_relaxed(S2CR_TYPE_BYPASS, 1157 - gr0_base + ARM_SMMU_GR0_S2CR(sid)); 1158 - } 1159 1121 } 1160 1122 1161 1123 static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain, ··· 1170 1160 static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain, 1171 1161 struct arm_smmu_master_cfg *cfg) 1172 1162 { 1163 + int i; 1173 1164 struct arm_smmu_device *smmu = smmu_domain->smmu; 1165 + void __iomem *gr0_base = ARM_SMMU_GR0(smmu); 1174 1166 1175 1167 /* 1176 1168 * We *must* clear the S2CR first, because freeing the SMR means 1177 1169 * that it can be re-allocated immediately. 1178 1170 */ 1179 - arm_smmu_bypass_stream_mapping(smmu, cfg); 1171 + for (i = 0; i < cfg->num_streamids; ++i) { 1172 + u32 idx = cfg->smrs ? 
cfg->smrs[i].idx : cfg->streamids[i]; 1173 + 1174 + writel_relaxed(S2CR_TYPE_BYPASS, 1175 + gr0_base + ARM_SMMU_GR0_S2CR(idx)); 1176 + } 1177 + 1180 1178 arm_smmu_master_free_smrs(smmu, cfg); 1181 1179 } 1182 1180 1183 1181 static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) 1184 1182 { 1185 - int ret = -EINVAL; 1183 + int ret; 1186 1184 struct arm_smmu_domain *smmu_domain = domain->priv; 1187 - struct arm_smmu_device *smmu; 1185 + struct arm_smmu_device *smmu, *dom_smmu; 1188 1186 struct arm_smmu_master_cfg *cfg; 1189 - unsigned long flags; 1190 1187 1191 1188 smmu = dev_get_master_dev(dev)->archdata.iommu; 1192 1189 if (!smmu) { ··· 1205 1188 * Sanity check the domain. We don't support domains across 1206 1189 * different SMMUs. 1207 1190 */ 1208 - spin_lock_irqsave(&smmu_domain->lock, flags); 1209 - if (!smmu_domain->smmu) { 1191 + dom_smmu = ACCESS_ONCE(smmu_domain->smmu); 1192 + if (!dom_smmu) { 1210 1193 /* Now that we have a master, we can finalise the domain */ 1211 1194 ret = arm_smmu_init_domain_context(domain, smmu); 1212 1195 if (IS_ERR_VALUE(ret)) 1213 - goto err_unlock; 1214 - } else if (smmu_domain->smmu != smmu) { 1196 + return ret; 1197 + 1198 + dom_smmu = smmu_domain->smmu; 1199 + } 1200 + 1201 + if (dom_smmu != smmu) { 1215 1202 dev_err(dev, 1216 1203 "cannot attach to SMMU %s whilst already attached to domain on SMMU %s\n", 1217 - dev_name(smmu_domain->smmu->dev), 1218 - dev_name(smmu->dev)); 1219 - goto err_unlock; 1204 + dev_name(smmu_domain->smmu->dev), dev_name(smmu->dev)); 1205 + return -EINVAL; 1220 1206 } 1221 - spin_unlock_irqrestore(&smmu_domain->lock, flags); 1222 1207 1223 1208 /* Looks ok, so add the device to the domain */ 1224 1209 cfg = find_smmu_master_cfg(smmu_domain->smmu, dev); ··· 1228 1209 return -ENODEV; 1229 1210 1230 1211 return arm_smmu_domain_add_master(smmu_domain, cfg); 1231 - 1232 - err_unlock: 1233 - spin_unlock_irqrestore(&smmu_domain->lock, flags); 1234 - return ret; 1235 1212 } 1236 
1213 1237 1214 static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev) ··· 1262 1247 return -ENOMEM; 1263 1248 1264 1249 arm_smmu_flush_pgtable(smmu, page_address(table), PAGE_SIZE); 1265 - if (!pgtable_page_ctor(table)) { 1266 - __free_page(table); 1267 - return -ENOMEM; 1268 - } 1269 1250 pmd_populate(NULL, pmd, table); 1270 1251 arm_smmu_flush_pgtable(smmu, pmd, sizeof(*pmd)); 1271 1252 } ··· 1637 1626 1638 1627 /* Mark all SMRn as invalid and all S2CRn as bypass */ 1639 1628 for (i = 0; i < smmu->num_mapping_groups; ++i) { 1640 - writel_relaxed(~SMR_VALID, gr0_base + ARM_SMMU_GR0_SMR(i)); 1629 + writel_relaxed(0, gr0_base + ARM_SMMU_GR0_SMR(i)); 1641 1630 writel_relaxed(S2CR_TYPE_BYPASS, 1642 1631 gr0_base + ARM_SMMU_GR0_S2CR(i)); 1643 1632 } ··· 1772 1761 dev_notice(smmu->dev, 1773 1762 "\tstream matching with %u register groups, mask 0x%x", 1774 1763 smmu->num_mapping_groups, mask); 1764 + } else { 1765 + smmu->num_mapping_groups = (id >> ID0_NUMSIDB_SHIFT) & 1766 + ID0_NUMSIDB_MASK; 1775 1767 } 1776 1768 1777 1769 /* ID1 */ ··· 1808 1794 * Stage-1 output limited by stage-2 input size due to pgd 1809 1795 * allocation (PTRS_PER_PGD). 
1810 1796 */ 1797 + if (smmu->features & ARM_SMMU_FEAT_TRANS_NESTED) { 1811 1798 #ifdef CONFIG_64BIT 1812 - smmu->s1_output_size = min_t(unsigned long, VA_BITS, size); 1799 + smmu->s1_output_size = min_t(unsigned long, VA_BITS, size); 1813 1800 #else 1814 - smmu->s1_output_size = min(32UL, size); 1801 + smmu->s1_output_size = min(32UL, size); 1815 1802 #endif 1803 + } else { 1804 + smmu->s1_output_size = min_t(unsigned long, PHYS_MASK_SHIFT, 1805 + size); 1806 + } 1816 1807 1817 1808 /* The stage-2 output mask is also applied for bypass */ 1818 1809 size = arm_smmu_id_size_to_bits((id >> ID2_OAS_SHIFT) & ID2_OAS_MASK); ··· 1908 1889 smmu->irqs[i] = irq; 1909 1890 } 1910 1891 1892 + err = arm_smmu_device_cfg_probe(smmu); 1893 + if (err) 1894 + return err; 1895 + 1911 1896 i = 0; 1912 1897 smmu->masters = RB_ROOT; 1913 1898 while (!of_parse_phandle_with_args(dev->of_node, "mmu-masters", ··· 1927 1904 i++; 1928 1905 } 1929 1906 dev_notice(dev, "registered %d master devices\n", i); 1930 - 1931 - err = arm_smmu_device_cfg_probe(smmu); 1932 - if (err) 1933 - goto out_put_masters; 1934 1907 1935 1908 parse_driver_options(smmu); 1936 1909
+1 -2
drivers/iommu/dmar.c
··· 678 678 andd->device_name); 679 679 continue; 680 680 } 681 - acpi_bus_get_device(h, &adev); 682 - if (!adev) { 681 + if (acpi_bus_get_device(h, &adev)) { 683 682 pr_err("Failed to get device for ACPI object %s\n", 684 683 andd->device_name); 685 684 continue;
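The dmar.c hunk above switches from testing the out-parameter (`if (!adev)`) to testing the return status of `acpi_bus_get_device()`, which is the safer pattern: on failure many lookup helpers leave the out-parameter untouched, so a stale value can masquerade as success. A minimal userspace sketch of the distinction (the `lookup_device` helper is hypothetical, not a kernel API):

```c
#include <stddef.h>
#include <assert.h>

/* Hypothetical lookup in the spirit of acpi_bus_get_device(): returns 0 on
 * success, nonzero on failure. On failure the out-parameter is deliberately
 * left untouched, as many kernel lookup helpers do. */
static int lookup_device(int handle, int **out)
{
    static int dev = 42;
    if (handle != 1)
        return -1;      /* failure path: *out is NOT written */
    *out = &dev;
    return 0;
}

/* Robust caller: branch on the status return, never on the out-pointer. */
static int get_device_checked(int handle, int **out)
{
    *out = NULL;
    if (lookup_device(handle, out))
        return -1;
    return 0;
}
```

If the caller had seeded `*out` with a stale pointer and only tested it afterwards, the failure would go unnoticed, which is exactly the bug class the hunk removes.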
+8 -2
drivers/iommu/fsl_pamu_domain.c
··· 984 984 struct iommu_group *group = ERR_PTR(-ENODEV); 985 985 struct pci_dev *pdev; 986 986 const u32 *prop; 987 - int ret, len; 987 + int ret = 0, len; 988 988 989 989 /* 990 990 * For platform devices we allocate a separate group for ··· 1007 1007 if (IS_ERR(group)) 1008 1008 return PTR_ERR(group); 1009 1009 1010 - ret = iommu_group_add_device(group, dev); 1010 + /* 1011 + * Check if device has already been added to an iommu group. 1012 + * Group could have already been created for a PCI device in 1013 + * the iommu_group_get_for_dev path. 1014 + */ 1015 + if (!dev->iommu_group) 1016 + ret = iommu_group_add_device(group, dev); 1011 1017 1012 1018 iommu_group_put(group); 1013 1019 return ret;
+5 -3
drivers/iommu/iommu.c
··· 678 678 */ 679 679 struct iommu_group *iommu_group_get_for_dev(struct device *dev) 680 680 { 681 - struct iommu_group *group = ERR_PTR(-EIO); 681 + struct iommu_group *group; 682 682 int ret; 683 683 684 684 group = iommu_group_get(dev); 685 685 if (group) 686 686 return group; 687 687 688 - if (dev_is_pci(dev)) 689 - group = iommu_group_get_for_pci_dev(to_pci_dev(dev)); 688 + if (!dev_is_pci(dev)) 689 + return ERR_PTR(-EINVAL); 690 + 691 + group = iommu_group_get_for_pci_dev(to_pci_dev(dev)); 690 692 691 693 if (IS_ERR(group)) 692 694 return group;
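The iommu.c hunk reshapes `iommu_group_get_for_dev()` to fail fast with `ERR_PTR(-EINVAL)` for non-PCI devices instead of pre-seeding `group` with an error pointer. A miniature userspace rendering of that ERR_PTR convention and control flow, under the assumption (as in the kernel) that error codes fit in the top 4095 values of the pointer range; all names here are illustrative:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Miniature of the kernel's ERR_PTR convention: a single return value
 * carries either a valid pointer or a negative errno. */
#define MAX_ERRNO 4095

static inline void *err_ptr(long err)      { return (void *)err; }
static inline long  ptr_err(const void *p) { return (long)p; }
static inline int   is_err(const void *p)
{
    return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

/* Mirrors the reshaped control flow: return an existing object if there is
 * one, fail fast for unsupported devices, only then take the costly path.
 * "is_pci" stands in for dev_is_pci(). */
static void *group_for_dev(void *cached, int is_pci)
{
    static int group;          /* stand-in for a real allocation */
    if (cached)
        return cached;
    if (!is_pci)
        return err_ptr(-22);   /* -EINVAL */
    return &group;
}
```

The early return also removes the uninformative `-EIO` default, so callers see a meaningful errno for the "not a PCI device" case.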
+1
drivers/irqchip/exynos-combiner.c
··· 15 15 #include <linux/slab.h> 16 16 #include <linux/irqdomain.h> 17 17 #include <linux/irqchip/chained_irq.h> 18 + #include <linux/interrupt.h> 18 19 #include <linux/of_address.h> 19 20 #include <linux/of_irq.h> 20 21
+2 -2
drivers/irqchip/irq-crossbar.c
··· 220 220 of_property_read_u32_index(node, 221 221 "ti,irqs-reserved", 222 222 i, &entry); 223 - if (entry > max) { 223 + if (entry >= max) { 224 224 pr_err("Invalid reserved entry\n"); 225 225 ret = -EINVAL; 226 226 goto err_irq_map; ··· 238 238 of_property_read_u32_index(node, 239 239 "ti,irqs-skip", 240 240 i, &entry); 241 - if (entry > max) { 241 + if (entry >= max) { 242 242 pr_err("Invalid skip entry\n"); 243 243 ret = -EINVAL; 244 244 goto err_irq_map;
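The irq-crossbar hunk is a classic off-by-one: with `max` slots the valid indices are 0 .. max-1, so the rejection test must be `entry >= max`; the original `entry > max` let `entry == max` index one slot past the end of the map. A minimal sketch of the corrected check (names are illustrative, not the driver's):

```c
#include <assert.h>

#define IRQ_RESERVED (-1)

/* Mark one crossbar map slot reserved, rejecting out-of-range indices.
 * The map has `max` slots, valid indices 0 .. max-1. */
static int reserve_entry(int *map, unsigned int max, unsigned int entry)
{
    if (entry >= max)   /* was: entry > max (off by one) */
        return -1;      /* -EINVAL in the driver */
    map[entry] = IRQ_RESERVED;
    return 0;
}
```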
+19 -19
drivers/irqchip/irq-gic-v3.c
··· 36 36 struct gic_chip_data { 37 37 void __iomem *dist_base; 38 38 void __iomem **redist_base; 39 - void __percpu __iomem **rdist; 39 + void __iomem * __percpu *rdist; 40 40 struct irq_domain *domain; 41 41 u64 redist_stride; 42 42 u32 redist_regions; ··· 104 104 } 105 105 106 106 /* Low level accessors */ 107 - static u64 gic_read_iar(void) 107 + static u64 __maybe_unused gic_read_iar(void) 108 108 { 109 109 u64 irqstat; 110 110 ··· 112 112 return irqstat; 113 113 } 114 114 115 - static void gic_write_pmr(u64 val) 115 + static void __maybe_unused gic_write_pmr(u64 val) 116 116 { 117 117 asm volatile("msr_s " __stringify(ICC_PMR_EL1) ", %0" : : "r" (val)); 118 118 } 119 119 120 - static void gic_write_ctlr(u64 val) 120 + static void __maybe_unused gic_write_ctlr(u64 val) 121 121 { 122 122 asm volatile("msr_s " __stringify(ICC_CTLR_EL1) ", %0" : : "r" (val)); 123 123 isb(); 124 124 } 125 125 126 - static void gic_write_grpen1(u64 val) 126 + static void __maybe_unused gic_write_grpen1(u64 val) 127 127 { 128 128 asm volatile("msr_s " __stringify(ICC_GRPEN1_EL1) ", %0" : : "r" (val)); 129 129 isb(); 130 130 } 131 131 132 - static void gic_write_sgi1r(u64 val) 132 + static void __maybe_unused gic_write_sgi1r(u64 val) 133 133 { 134 134 asm volatile("msr_s " __stringify(ICC_SGI1R_EL1) ", %0" : : "r" (val)); 135 135 } ··· 198 198 199 199 writel_relaxed(mask, base + offset + (gic_irq(d) / 32) * 4); 200 200 rwp_wait(); 201 - } 202 - 203 - static int gic_peek_irq(struct irq_data *d, u32 offset) 204 - { 205 - u32 mask = 1 << (gic_irq(d) % 32); 206 - void __iomem *base; 207 - 208 - if (gic_irq_in_rdist(d)) 209 - base = gic_data_rdist_sgi_base(); 210 - else 211 - base = gic_data.dist_base; 212 - 213 - return !!(readl_relaxed(base + offset + (gic_irq(d) / 32) * 4) & mask); 214 201 } 215 202 216 203 static void gic_mask_irq(struct irq_data *d) ··· 388 401 } 389 402 390 403 #ifdef CONFIG_SMP 404 + static int gic_peek_irq(struct irq_data *d, u32 offset) 405 + { 406 + u32 mask = 1 
<< (gic_irq(d) % 32); 407 + void __iomem *base; 408 + 409 + if (gic_irq_in_rdist(d)) 410 + base = gic_data_rdist_sgi_base(); 411 + else 412 + base = gic_data.dist_base; 413 + 414 + return !!(readl_relaxed(base + offset + (gic_irq(d) / 32) * 4) & mask); 415 + } 416 + 391 417 static int gic_secondary_init(struct notifier_block *nfb, 392 418 unsigned long action, void *hcpu) 393 419 {
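The relocated `gic_peek_irq()` above uses the GIC's standard bitmap addressing: distributor state registers pack one bit per interrupt into consecutive 32-bit words, so interrupt N lives in word N / 32 at bit N % 32. A self-contained sketch of that addressing, with a plain array standing in for the memory-mapped register bank:

```c
#include <stdint.h>
#include <assert.h>

/* Read the per-interrupt state bit for `irq` out of a bank of 32-bit
 * registers laid out one bit per interrupt, as gic_peek_irq() does. */
static int peek_irq_bit(const uint32_t *regs, unsigned int irq)
{
    uint32_t mask = 1u << (irq % 32);
    return !!(regs[irq / 32] & mask);
}
```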
+1 -1
drivers/irqchip/irq-gic.c
··· 867 867 return 0; 868 868 } 869 869 870 - const struct irq_domain_ops gic_default_routable_irq_domain_ops = { 870 + static const struct irq_domain_ops gic_default_routable_irq_domain_ops = { 871 871 .map = gic_routable_irq_domain_map, 872 872 .unmap = gic_routable_irq_domain_unmap, 873 873 .xlate = gic_routable_irq_domain_xlate,
+2 -2
drivers/md/dm-cache-target.c
··· 895 895 struct cache *cache = mg->cache; 896 896 897 897 if (mg->writeback) { 898 - cell_defer(cache, mg->old_ocell, false); 899 898 clear_dirty(cache, mg->old_oblock, mg->cblock); 899 + cell_defer(cache, mg->old_ocell, false); 900 900 cleanup_migration(mg); 901 901 return; 902 902 ··· 951 951 } 952 952 953 953 } else { 954 + clear_dirty(cache, mg->new_oblock, mg->cblock); 954 955 if (mg->requeue_holder) 955 956 cell_defer(cache, mg->new_ocell, true); 956 957 else { 957 958 bio_endio(mg->new_ocell->holder, 0); 958 959 cell_defer(cache, mg->new_ocell, false); 959 960 } 960 - clear_dirty(cache, mg->new_oblock, mg->cblock); 961 961 cleanup_migration(mg); 962 962 } 963 963 }
-1
drivers/media/Kconfig
··· 182 182 depends on HAS_IOMEM 183 183 select I2C 184 184 select I2C_MUX 185 - select SPI 186 185 default y 187 186 help 188 187 By default, a media driver auto-selects all possible ancillary
+2
drivers/media/dvb-core/dvb-usb-ids.h
··· 280 280 #define USB_PID_PCTV_400E 0x020f 281 281 #define USB_PID_PCTV_450E 0x0222 282 282 #define USB_PID_PCTV_452E 0x021f 283 + #define USB_PID_PCTV_78E 0x025a 284 + #define USB_PID_PCTV_79E 0x0262 283 285 #define USB_PID_REALTEK_RTL2831U 0x2831 284 286 #define USB_PID_REALTEK_RTL2832U 0x2832 285 287 #define USB_PID_TECHNOTREND_CONNECT_S2_3600 0x3007
+13
drivers/media/dvb-frontends/af9033.c
··· 314 314 goto err; 315 315 } 316 316 317 + /* feed clock to RF tuner */ 318 + switch (state->cfg.tuner) { 319 + case AF9033_TUNER_IT9135_38: 320 + case AF9033_TUNER_IT9135_51: 321 + case AF9033_TUNER_IT9135_52: 322 + case AF9033_TUNER_IT9135_60: 323 + case AF9033_TUNER_IT9135_61: 324 + case AF9033_TUNER_IT9135_62: 325 + ret = af9033_wr_reg(state, 0x80fba8, 0x00); 326 + if (ret < 0) 327 + goto err; 328 + } 329 + 317 330 /* settings for TS interface */ 318 331 if (state->cfg.ts_mode == AF9033_TS_MODE_USB) { 319 332 ret = af9033_wr_reg_mask(state, 0x80f9a5, 0x00, 0x01);
+9 -11
drivers/media/dvb-frontends/af9033_priv.h
··· 1418 1418 { 0x800068, 0x0a }, 1419 1419 { 0x80006a, 0x03 }, 1420 1420 { 0x800070, 0x0a }, 1421 - { 0x800071, 0x05 }, 1421 + { 0x800071, 0x0a }, 1422 1422 { 0x800072, 0x02 }, 1423 1423 { 0x800075, 0x8c }, 1424 1424 { 0x800076, 0x8c }, ··· 1484 1484 { 0x800104, 0x02 }, 1485 1485 { 0x800105, 0xbe }, 1486 1486 { 0x800106, 0x00 }, 1487 - { 0x800109, 0x02 }, 1488 1487 { 0x800115, 0x0a }, 1489 1488 { 0x800116, 0x03 }, 1490 1489 { 0x80011a, 0xbe }, ··· 1509 1510 { 0x80014b, 0x8c }, 1510 1511 { 0x80014d, 0xac }, 1511 1512 { 0x80014e, 0xc6 }, 1512 - { 0x80014f, 0x03 }, 1513 1513 { 0x800151, 0x1e }, 1514 1514 { 0x800153, 0xbc }, 1515 1515 { 0x800178, 0x09 }, ··· 1520 1522 { 0x80018d, 0x5f }, 1521 1523 { 0x80018f, 0xa0 }, 1522 1524 { 0x800190, 0x5a }, 1523 - { 0x80ed02, 0xff }, 1524 - { 0x80ee42, 0xff }, 1525 - { 0x80ee82, 0xff }, 1525 + { 0x800191, 0x00 }, 1526 + { 0x80ed02, 0x40 }, 1527 + { 0x80ee42, 0x40 }, 1528 + { 0x80ee82, 0x40 }, 1526 1529 { 0x80f000, 0x0f }, 1527 1530 { 0x80f01f, 0x8c }, 1528 1531 { 0x80f020, 0x00 }, ··· 1698 1699 { 0x800104, 0x02 }, 1699 1700 { 0x800105, 0xc8 }, 1700 1701 { 0x800106, 0x00 }, 1701 - { 0x800109, 0x02 }, 1702 1702 { 0x800115, 0x0a }, 1703 1703 { 0x800116, 0x03 }, 1704 1704 { 0x80011a, 0xc6 }, ··· 1723 1725 { 0x80014b, 0x8c }, 1724 1726 { 0x80014d, 0xa8 }, 1725 1727 { 0x80014e, 0xc6 }, 1726 - { 0x80014f, 0x03 }, 1727 1728 { 0x800151, 0x28 }, 1728 1729 { 0x800153, 0xcc }, 1729 1730 { 0x800178, 0x09 }, ··· 1734 1737 { 0x80018d, 0x5f }, 1735 1738 { 0x80018f, 0xfb }, 1736 1739 { 0x800190, 0x5c }, 1737 - { 0x80ed02, 0xff }, 1738 - { 0x80ee42, 0xff }, 1739 - { 0x80ee82, 0xff }, 1740 + { 0x800191, 0x00 }, 1741 + { 0x80ed02, 0x40 }, 1742 + { 0x80ee42, 0x40 }, 1743 + { 0x80ee82, 0x40 }, 1740 1744 { 0x80f000, 0x0f }, 1741 1745 { 0x80f01f, 0x8c }, 1742 1746 { 0x80f020, 0x00 },
+3 -10
drivers/media/i2c/smiapp/smiapp-core.c
··· 1282 1282 1283 1283 mutex_lock(&sensor->power_mutex); 1284 1284 1285 - /* 1286 - * If the power count is modified from 0 to != 0 or from != 0 1287 - * to 0, update the power state. 1288 - */ 1289 - if (!sensor->power_count == !on) 1290 - goto out; 1291 - 1292 - if (on) { 1285 + if (on && !sensor->power_count) { 1293 1286 /* Power on and perform initialisation. */ 1294 1287 ret = smiapp_power_on(sensor); 1295 1288 if (ret < 0) 1296 1289 goto out; 1297 - } else { 1290 + } else if (!on && sensor->power_count == 1) { 1298 1291 smiapp_power_off(sensor); 1299 1292 } 1300 1293 ··· 2565 2572 2566 2573 this->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE; 2567 2574 this->sd.internal_ops = &smiapp_internal_ops; 2568 - this->sd.owner = NULL; 2575 + this->sd.owner = THIS_MODULE; 2569 2576 v4l2_set_subdevdata(&this->sd, client); 2570 2577 2571 2578 rval = media_entity_init(&this->sd.entity,
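The smiapp hunk replaces the hard-to-read `!sensor->power_count == !on` transition test with explicit conditions: the hardware is touched only on the 0 -> 1 and 1 -> 0 use-count transitions. A minimal sketch of that use-count gating, assuming the count is adjusted after the transition check as in the driver (`hw_on` is an illustrative stand-in for the real power calls):

```c
#include <assert.h>

struct sensor {
    int power_count;   /* number of active users */
    int hw_on;         /* stand-in for real hardware power state */
};

/* Power the device up on the first user and down when the last user
 * leaves; intermediate calls only adjust the count. */
static void set_power(struct sensor *s, int on)
{
    if (on && !s->power_count)
        s->hw_on = 1;                  /* smiapp_power_on() */
    else if (!on && s->power_count == 1)
        s->hw_on = 0;                  /* smiapp_power_off() */

    s->power_count += on ? 1 : -1;
    if (s->power_count < 0)            /* defensive; driver WARNs here */
        s->power_count = 0;
}
```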
+1
drivers/media/pci/cx18/cx18-driver.c
··· 1091 1091 setup.addr = ADDR_UNSET; 1092 1092 setup.type = cx->options.tuner; 1093 1093 setup.mode_mask = T_ANALOG_TV; /* matches TV tuners */ 1094 + setup.config = NULL; 1094 1095 if (cx->options.radio > 0) 1095 1096 setup.mode_mask |= T_RADIO; 1096 1097 setup.tuner_callback = (setup.type == TUNER_XC2028) ?
+6
drivers/media/tuners/tuner_it913x.c
··· 396 396 struct i2c_adapter *i2c_adap, u8 i2c_addr, u8 config) 397 397 { 398 398 struct it913x_state *state = NULL; 399 + int ret; 399 400 400 401 /* allocate memory for the internal state */ 401 402 state = kzalloc(sizeof(struct it913x_state), GFP_KERNEL); ··· 425 424 426 425 state->tuner_type = config; 427 426 state->firmware_ver = 1; 427 + 428 + /* tuner RF initial */ 429 + ret = it913x_wr_reg(state, PRO_DMOD, 0xec4c, 0x68); 430 + if (ret < 0) 431 + goto error; 428 432 429 433 fe->tuner_priv = state; 430 434 memcpy(&fe->ops.tuner_ops, &it913x_tuner_ops,
+4
drivers/media/usb/dvb-usb-v2/af9035.c
··· 1575 1575 &af9035_props, "Leadtek WinFast DTV Dongle Dual", NULL) }, 1576 1576 { DVB_USB_DEVICE(USB_VID_HAUPPAUGE, 0xf900, 1577 1577 &af9035_props, "Hauppauge WinTV-MiniStick 2", NULL) }, 1578 + { DVB_USB_DEVICE(USB_VID_PCTV, USB_PID_PCTV_78E, 1579 + &af9035_props, "PCTV 78e", RC_MAP_IT913X_V1) }, 1580 + { DVB_USB_DEVICE(USB_VID_PCTV, USB_PID_PCTV_79E, 1581 + &af9035_props, "PCTV 79e", RC_MAP_IT913X_V2) }, 1578 1582 { } 1579 1583 }; 1580 1584 MODULE_DEVICE_TABLE(usb, af9035_id_table);
+1 -1
drivers/message/fusion/Kconfig
··· 29 29 config FUSION_FC 30 30 tristate "Fusion MPT ScsiHost drivers for FC" 31 31 depends on PCI && SCSI 32 - select SCSI_FC_ATTRS 32 + depends on SCSI_FC_ATTRS 33 33 ---help--- 34 34 SCSI HOST support for a Fiber Channel host adapters. 35 35
+5
drivers/misc/lattice-ecp3-config.c
··· 79 79 u32 jedec_id; 80 80 u32 status; 81 81 82 + if (fw == NULL) { 83 + dev_err(&spi->dev, "Cannot load firmware, aborting\n"); 84 + return; 85 + } 86 + 82 87 if (fw->size == 0) { 83 88 dev_err(&spi->dev, "Error: Firmware size is 0!\n"); 84 89 return;
+15 -4
drivers/net/bonding/bond_main.c
··· 175 175 "the same MAC; 0 for none (default), " 176 176 "1 for active, 2 for follow"); 177 177 module_param(all_slaves_active, int, 0); 178 - MODULE_PARM_DESC(all_slaves_active, "Keep all frames received on an interface" 178 + MODULE_PARM_DESC(all_slaves_active, "Keep all frames received on an interface " 179 179 "by setting active flag for all slaves; " 180 180 "0 for never (default), 1 for always."); 181 181 module_param(resend_igmp, int, 0); ··· 3531 3531 else 3532 3532 bond_xmit_slave_id(bond, skb, 0); 3533 3533 } else { 3534 - slave_id = bond_rr_gen_slave_id(bond); 3535 - bond_xmit_slave_id(bond, skb, slave_id % bond->slave_cnt); 3534 + int slave_cnt = ACCESS_ONCE(bond->slave_cnt); 3535 + 3536 + if (likely(slave_cnt)) { 3537 + slave_id = bond_rr_gen_slave_id(bond); 3538 + bond_xmit_slave_id(bond, skb, slave_id % slave_cnt); 3539 + } else { 3540 + dev_kfree_skb_any(skb); 3541 + } 3536 3542 } 3537 3543 3538 3544 return NETDEV_TX_OK; ··· 3568 3562 static int bond_xmit_xor(struct sk_buff *skb, struct net_device *bond_dev) 3569 3563 { 3570 3564 struct bonding *bond = netdev_priv(bond_dev); 3565 + int slave_cnt = ACCESS_ONCE(bond->slave_cnt); 3571 3566 3572 - bond_xmit_slave_id(bond, skb, bond_xmit_hash(bond, skb) % bond->slave_cnt); 3567 + if (likely(slave_cnt)) 3568 + bond_xmit_slave_id(bond, skb, 3569 + bond_xmit_hash(bond, skb) % slave_cnt); 3570 + else 3571 + dev_kfree_skb_any(skb); 3573 3572 3574 3573 return NETDEV_TX_OK; 3575 3574 }
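The bonding hunks read `bond->slave_cnt` exactly once into a local with `ACCESS_ONCE()` and check it for zero before the modulo, so a concurrent "last slave removed" can neither change the divisor mid-computation nor turn `% slave_cnt` into a division by zero. A minimal sketch of that snapshot-then-check shape (`volatile` stands in for `ACCESS_ONCE()`; names are illustrative):

```c
#include <assert.h>

static volatile unsigned int slave_cnt;   /* updated by another context */

enum tx_result { TX_OK, TX_DROPPED };

/* Pick a round-robin tx slave from a single snapshot of the count,
 * dropping the packet if no slaves are present. */
static enum tx_result rr_transmit(unsigned int rr_id, unsigned int *out_slave)
{
    unsigned int cnt = slave_cnt;   /* single snapshot */

    if (cnt == 0)
        return TX_DROPPED;          /* dev_kfree_skb_any() in the driver */
    *out_slave = rr_id % cnt;       /* safe: cnt cannot change under us */
    return TX_OK;
}
```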
+5 -3
drivers/net/can/at91_can.c
··· 1123 1123 struct at91_priv *priv = netdev_priv(dev); 1124 1124 int err; 1125 1125 1126 - clk_enable(priv->clk); 1126 + err = clk_prepare_enable(priv->clk); 1127 + if (err) 1128 + return err; 1127 1129 1128 1130 /* check or determine and set bittime */ 1129 1131 err = open_candev(dev); ··· 1151 1149 out_close: 1152 1150 close_candev(dev); 1153 1151 out: 1154 - clk_disable(priv->clk); 1152 + clk_disable_unprepare(priv->clk); 1155 1153 1156 1154 return err; 1157 1155 } ··· 1168 1166 at91_chip_stop(dev, CAN_STATE_STOPPED); 1169 1167 1170 1168 free_irq(dev->irq, dev); 1171 - clk_disable(priv->clk); 1169 + clk_disable_unprepare(priv->clk); 1172 1170 1173 1171 close_candev(dev); 1174 1172
+2 -2
drivers/net/can/c_can/c_can_platform.c
··· 97 97 ctrl |= CAN_RAMINIT_DONE_MASK(priv->instance); 98 98 writel(ctrl, priv->raminit_ctrlreg); 99 99 ctrl &= ~CAN_RAMINIT_DONE_MASK(priv->instance); 100 - c_can_hw_raminit_wait_ti(priv, ctrl, mask); 100 + c_can_hw_raminit_wait_ti(priv, mask, ctrl); 101 101 102 102 if (enable) { 103 103 /* Set start bit and wait for the done bit. */ 104 104 ctrl |= CAN_RAMINIT_START_MASK(priv->instance); 105 105 writel(ctrl, priv->raminit_ctrlreg); 106 106 ctrl |= CAN_RAMINIT_DONE_MASK(priv->instance); 107 - c_can_hw_raminit_wait_ti(priv, ctrl, mask); 107 + c_can_hw_raminit_wait_ti(priv, mask, ctrl); 108 108 } 109 109 spin_unlock(&raminit_lock); 110 110 }
+44 -10
drivers/net/can/flexcan.c
··· 62 62 #define FLEXCAN_MCR_BCC BIT(16) 63 63 #define FLEXCAN_MCR_LPRIO_EN BIT(13) 64 64 #define FLEXCAN_MCR_AEN BIT(12) 65 - #define FLEXCAN_MCR_MAXMB(x) ((x) & 0x1f) 65 + #define FLEXCAN_MCR_MAXMB(x) ((x) & 0x7f) 66 66 #define FLEXCAN_MCR_IDAM_A (0 << 8) 67 67 #define FLEXCAN_MCR_IDAM_B (1 << 8) 68 68 #define FLEXCAN_MCR_IDAM_C (2 << 8) ··· 146 146 FLEXCAN_ESR_BOFF_INT | FLEXCAN_ESR_ERR_INT) 147 147 148 148 /* FLEXCAN interrupt flag register (IFLAG) bits */ 149 - #define FLEXCAN_TX_BUF_ID 8 149 + /* Errata ERR005829 step7: Reserve first valid MB */ 150 + #define FLEXCAN_TX_BUF_RESERVED 8 151 + #define FLEXCAN_TX_BUF_ID 9 150 152 #define FLEXCAN_IFLAG_BUF(x) BIT(x) 151 153 #define FLEXCAN_IFLAG_RX_FIFO_OVERFLOW BIT(7) 152 154 #define FLEXCAN_IFLAG_RX_FIFO_WARN BIT(6) ··· 159 157 160 158 /* FLEXCAN message buffers */ 161 159 #define FLEXCAN_MB_CNT_CODE(x) (((x) & 0xf) << 24) 160 + #define FLEXCAN_MB_CODE_RX_INACTIVE (0x0 << 24) 161 + #define FLEXCAN_MB_CODE_RX_EMPTY (0x4 << 24) 162 + #define FLEXCAN_MB_CODE_RX_FULL (0x2 << 24) 163 + #define FLEXCAN_MB_CODE_RX_OVERRRUN (0x6 << 24) 164 + #define FLEXCAN_MB_CODE_RX_RANSWER (0xa << 24) 165 + 166 + #define FLEXCAN_MB_CODE_TX_INACTIVE (0x8 << 24) 167 + #define FLEXCAN_MB_CODE_TX_ABORT (0x9 << 24) 168 + #define FLEXCAN_MB_CODE_TX_DATA (0xc << 24) 169 + #define FLEXCAN_MB_CODE_TX_TANSWER (0xe << 24) 170 + 162 171 #define FLEXCAN_MB_CNT_SRR BIT(22) 163 172 #define FLEXCAN_MB_CNT_IDE BIT(21) 164 173 #define FLEXCAN_MB_CNT_RTR BIT(20) ··· 346 333 flexcan_write(reg, &regs->mcr); 347 334 348 335 while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK)) 349 - usleep_range(10, 20); 336 + udelay(10); 350 337 351 338 if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK) 352 339 return -ETIMEDOUT; ··· 365 352 flexcan_write(reg, &regs->mcr); 366 353 367 354 while (timeout-- && !(flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK)) 368 - usleep_range(10, 20); 355 + udelay(10); 369 356 370 357 if (!(flexcan_read(&regs->mcr) & 
FLEXCAN_MCR_LPM_ACK)) 371 358 return -ETIMEDOUT; ··· 384 371 flexcan_write(reg, &regs->mcr); 385 372 386 373 while (timeout-- && !(flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK)) 387 - usleep_range(100, 200); 374 + udelay(100); 388 375 389 376 if (!(flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK)) 390 377 return -ETIMEDOUT; ··· 403 390 flexcan_write(reg, &regs->mcr); 404 391 405 392 while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK)) 406 - usleep_range(10, 20); 393 + udelay(10); 407 394 408 395 if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK) 409 396 return -ETIMEDOUT; ··· 418 405 419 406 flexcan_write(FLEXCAN_MCR_SOFTRST, &regs->mcr); 420 407 while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_SOFTRST)) 421 - usleep_range(10, 20); 408 + udelay(10); 422 409 423 410 if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_SOFTRST) 424 411 return -ETIMEDOUT; ··· 499 486 500 487 flexcan_write(can_id, &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_id); 501 488 flexcan_write(ctrl, &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl); 489 + 490 + /* Errata ERR005829 step8: 491 + * Write twice INACTIVE(0x8) code to first MB. 
492 + */ 493 + flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE, 494 + &regs->cantxfg[FLEXCAN_TX_BUF_RESERVED].can_ctrl); 495 + flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE, 496 + &regs->cantxfg[FLEXCAN_TX_BUF_RESERVED].can_ctrl); 502 497 503 498 return NETDEV_TX_OK; 504 499 } ··· 824 803 stats->tx_bytes += can_get_echo_skb(dev, 0); 825 804 stats->tx_packets++; 826 805 can_led_event(dev, CAN_LED_EVENT_TX); 806 + /* after sending a RTR frame mailbox is in RX mode */ 807 + flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE, 808 + &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl); 827 809 flexcan_write((1 << FLEXCAN_TX_BUF_ID), &regs->iflag1); 828 810 netif_wake_queue(dev); 829 811 } ··· 882 858 { 883 859 struct flexcan_priv *priv = netdev_priv(dev); 884 860 struct flexcan_regs __iomem *regs = priv->base; 885 - int err; 886 861 u32 reg_mcr, reg_ctrl, reg_crl2, reg_mecr; 862 + int err, i; 887 863 888 864 /* enable module */ 889 865 err = flexcan_chip_enable(priv); ··· 950 926 netdev_dbg(dev, "%s: writing ctrl=0x%08x", __func__, reg_ctrl); 951 927 flexcan_write(reg_ctrl, &regs->ctrl); 952 928 953 - /* Abort any pending TX, mark Mailbox as INACTIVE */ 954 - flexcan_write(FLEXCAN_MB_CNT_CODE(0x4), 929 + /* clear and invalidate all mailboxes first */ 930 + for (i = FLEXCAN_TX_BUF_ID; i < ARRAY_SIZE(regs->cantxfg); i++) { 931 + flexcan_write(FLEXCAN_MB_CODE_RX_INACTIVE, 932 + &regs->cantxfg[i].can_ctrl); 933 + } 934 + 935 + /* Errata ERR005829: mark first TX mailbox as INACTIVE */ 936 + flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE, 937 + &regs->cantxfg[FLEXCAN_TX_BUF_RESERVED].can_ctrl); 938 + 939 + /* mark TX mailbox as INACTIVE */ 940 + flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE, 955 941 &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl); 956 942 957 943 /* acceptance mask/acceptance code (accept everything) */
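The flexcan hunk introduces named mailbox CODE values, each a 4-bit code placed in bits 27:24 of the mailbox control word. A short sketch of that encoding and of what "write TX_INACTIVE to can_ctrl" amounts to, mirroring the `FLEXCAN_MB_CODE_*` macros added above (the helper is illustrative, not driver code):

```c
#include <stdint.h>
#include <assert.h>

/* FlexCAN mailbox CODE field: 4 bits at positions 27:24. */
#define MB_CNT_CODE(x)        (((uint32_t)(x) & 0xf) << 24)
#define MB_CODE_RX_INACTIVE   MB_CNT_CODE(0x0)
#define MB_CODE_RX_EMPTY      MB_CNT_CODE(0x4)
#define MB_CODE_TX_INACTIVE   MB_CNT_CODE(0x8)
#define MB_CODE_TX_DATA       MB_CNT_CODE(0xc)

/* Replace a mailbox's CODE field while preserving the other control bits. */
static uint32_t mb_set_code(uint32_t ctrl, uint32_t code)
{
    return (ctrl & ~MB_CNT_CODE(0xf)) | code;
}
```

The ERR005829 workaround above is then just this INACTIVE (0x8) code written twice to the reserved first mailbox after each transmit.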
+5 -1
drivers/net/can/sja1000/peak_pci.c
··· 70 70 #define PEAK_PC_104P_DEVICE_ID 0x0006 /* PCAN-PC/104+ cards */ 71 71 #define PEAK_PCI_104E_DEVICE_ID 0x0007 /* PCAN-PCI/104 Express cards */ 72 72 #define PEAK_MPCIE_DEVICE_ID 0x0008 /* The miniPCIe slot cards */ 73 + #define PEAK_PCIE_OEM_ID 0x0009 /* PCAN-PCI Express OEM */ 74 + #define PEAK_PCIEC34_DEVICE_ID 0x000A /* PCAN-PCI Express 34 (one channel) */ 73 75 74 76 #define PEAK_PCI_CHAN_MAX 4 75 77 ··· 89 87 {PEAK_PCI_VENDOR_ID, PEAK_CPCI_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID,}, 90 88 #ifdef CONFIG_CAN_PEAK_PCIEC 91 89 {PEAK_PCI_VENDOR_ID, PEAK_PCIEC_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID,}, 90 + {PEAK_PCI_VENDOR_ID, PEAK_PCIEC34_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID,}, 92 91 #endif 93 92 {0,} 94 93 }; ··· 656 653 * This must be done *before* register_sja1000dev() but 657 654 * *after* devices linkage 658 655 */ 659 - if (pdev->device == PEAK_PCIEC_DEVICE_ID) { 656 + if (pdev->device == PEAK_PCIEC_DEVICE_ID || 657 + pdev->device == PEAK_PCIEC34_DEVICE_ID) { 660 658 err = peak_pciec_probe(pdev, dev); 661 659 if (err) { 662 660 dev_err(&pdev->dev,
+41 -9
drivers/net/ethernet/3com/3c59x.c
··· 2128 2128 int entry = vp->cur_tx % TX_RING_SIZE; 2129 2129 struct boom_tx_desc *prev_entry = &vp->tx_ring[(vp->cur_tx-1) % TX_RING_SIZE]; 2130 2130 unsigned long flags; 2131 + dma_addr_t dma_addr; 2131 2132 2132 2133 if (vortex_debug > 6) { 2133 2134 pr_debug("boomerang_start_xmit()\n"); ··· 2163 2162 vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded | AddTCPChksum | AddUDPChksum); 2164 2163 2165 2164 if (!skb_shinfo(skb)->nr_frags) { 2166 - vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, 2167 - skb->len, PCI_DMA_TODEVICE)); 2165 + dma_addr = pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, 2166 + PCI_DMA_TODEVICE); 2167 + if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr)) 2168 + goto out_dma_err; 2169 + 2170 + vp->tx_ring[entry].frag[0].addr = cpu_to_le32(dma_addr); 2168 2171 vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb->len | LAST_FRAG); 2169 2172 } else { 2170 2173 int i; 2171 2174 2172 - vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, 2173 - skb_headlen(skb), PCI_DMA_TODEVICE)); 2175 + dma_addr = pci_map_single(VORTEX_PCI(vp), skb->data, 2176 + skb_headlen(skb), PCI_DMA_TODEVICE); 2177 + if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr)) 2178 + goto out_dma_err; 2179 + 2180 + vp->tx_ring[entry].frag[0].addr = cpu_to_le32(dma_addr); 2174 2181 vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb_headlen(skb)); 2175 2182 2176 2183 for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { 2177 2184 skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; 2178 2185 2186 + dma_addr = skb_frag_dma_map(&VORTEX_PCI(vp)->dev, frag, 2187 + 0, 2188 + frag->size, 2189 + DMA_TO_DEVICE); 2190 + if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr)) { 2191 + for(i = i-1; i >= 0; i--) 2192 + dma_unmap_page(&VORTEX_PCI(vp)->dev, 2193 + le32_to_cpu(vp->tx_ring[entry].frag[i+1].addr), 2194 + le32_to_cpu(vp->tx_ring[entry].frag[i+1].length), 2195 + DMA_TO_DEVICE); 2196 + 2197 + 
pci_unmap_single(VORTEX_PCI(vp), 2198 + le32_to_cpu(vp->tx_ring[entry].frag[0].addr), 2199 + le32_to_cpu(vp->tx_ring[entry].frag[0].length), 2200 + PCI_DMA_TODEVICE); 2201 + 2202 + goto out_dma_err; 2203 + } 2204 + 2179 2205 vp->tx_ring[entry].frag[i+1].addr = 2180 - cpu_to_le32(skb_frag_dma_map( 2181 - &VORTEX_PCI(vp)->dev, 2182 - frag, 2183 - frag->page_offset, frag->size, DMA_TO_DEVICE)); 2206 + cpu_to_le32(dma_addr); 2184 2207 2185 2208 if (i == skb_shinfo(skb)->nr_frags-1) 2186 2209 vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(skb_frag_size(frag)|LAST_FRAG); ··· 2213 2188 } 2214 2189 } 2215 2190 #else 2216 - vp->tx_ring[entry].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, PCI_DMA_TODEVICE)); 2191 + dma_addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, PCI_DMA_TODEVICE)); 2192 + if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr)) 2193 + goto out_dma_err; 2194 + vp->tx_ring[entry].addr = cpu_to_le32(dma_addr); 2217 2195 vp->tx_ring[entry].length = cpu_to_le32(skb->len | LAST_FRAG); 2218 2196 vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded); 2219 2197 #endif ··· 2244 2216 skb_tx_timestamp(skb); 2245 2217 iowrite16(DownUnstall, ioaddr + EL3_CMD); 2246 2218 spin_unlock_irqrestore(&vp->lock, flags); 2219 + out: 2247 2220 return NETDEV_TX_OK; 2221 + out_dma_err: 2222 + dev_err(&VORTEX_PCI(vp)->dev, "Error mapping dma buffer\n"); 2223 + goto out; 2248 2224 } 2249 2225 2250 2226 /* The interrupt handler does all of the Rx thread work and cleans up
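The 3c59x hunk adds `dma_mapping_error()` checks after every mapping and, on a fragment failure, unwinds all mappings made so far (fragments i-1 .. 0, then the head buffer) before bailing out. A toy sketch of that map-with-rollback shape; `toy_map`/`toy_unmap` and `fail_at` are test scaffolding, not kernel APIs:

```c
#include <assert.h>

#define BAD_ADDR 0ul

static int live_mappings;   /* how many mappings are currently held */

static unsigned long toy_map(int idx, int fail_at)
{
    if (idx == fail_at)
        return BAD_ADDR;    /* the dma_mapping_error() case */
    live_mappings++;
    return 0x1000ul + (unsigned long)idx;
}

static void toy_unmap(void) { live_mappings--; }

/* Map a head buffer plus nfrags fragments; on any failure, unmap
 * everything mapped so far before reporting the error. */
static int map_skb(int nfrags, int fail_at)
{
    int i;

    if (toy_map(0, fail_at) == BAD_ADDR)
        return -1;
    for (i = 0; i < nfrags; i++) {
        if (toy_map(i + 1, fail_at) == BAD_ADDR) {
            for (i = i - 1; i >= 0; i--)   /* roll back earlier frags */
                toy_unmap();
            toy_unmap();                   /* then the head buffer */
            return -1;
        }
    }
    return 0;
}
```

Without the rollback, a mid-list failure would leak every mapping taken before it, which is why the driver's error path walks the already-filled descriptor slots backwards.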
+37 -16
drivers/net/ethernet/arc/emac_main.c
··· 28 28 29 29 30 30 /** 31 + * arc_emac_tx_avail - Return the number of available slots in the tx ring. 32 + * @priv: Pointer to ARC EMAC private data structure. 33 + * 34 + * returns: the number of slots available for transmission in tx the ring. 35 + */ 36 + static inline int arc_emac_tx_avail(struct arc_emac_priv *priv) 37 + { 38 + return (priv->txbd_dirty + TX_BD_NUM - priv->txbd_curr - 1) % TX_BD_NUM; 39 + } 40 + 41 + /** 31 42 * arc_emac_adjust_link - Adjust the PHY link duplex. 32 43 * @ndev: Pointer to the net_device structure. 33 44 * ··· 193 182 txbd->info = 0; 194 183 195 184 *txbd_dirty = (*txbd_dirty + 1) % TX_BD_NUM; 196 - 197 - if (netif_queue_stopped(ndev)) 198 - netif_wake_queue(ndev); 199 185 } 186 + 187 + /* Ensure that txbd_dirty is visible to tx() before checking 188 + * for queue stopped. 189 + */ 190 + smp_mb(); 191 + 192 + if (netif_queue_stopped(ndev) && arc_emac_tx_avail(priv)) 193 + netif_wake_queue(ndev); 200 194 } 201 195 202 196 /** ··· 316 300 work_done = arc_emac_rx(ndev, budget); 317 301 if (work_done < budget) { 318 302 napi_complete(napi); 319 - arc_reg_or(priv, R_ENABLE, RXINT_MASK); 303 + arc_reg_or(priv, R_ENABLE, RXINT_MASK | TXINT_MASK); 320 304 } 321 305 322 306 return work_done; ··· 345 329 /* Reset all flags except "MDIO complete" */ 346 330 arc_reg_set(priv, R_STATUS, status); 347 331 348 - if (status & RXINT_MASK) { 332 + if (status & (RXINT_MASK | TXINT_MASK)) { 349 333 if (likely(napi_schedule_prep(&priv->napi))) { 350 - arc_reg_clr(priv, R_ENABLE, RXINT_MASK); 334 + arc_reg_clr(priv, R_ENABLE, RXINT_MASK | TXINT_MASK); 351 335 __napi_schedule(&priv->napi); 352 336 } 353 337 } ··· 458 442 arc_reg_set(priv, R_TX_RING, (unsigned int)priv->txbd_dma); 459 443 460 444 /* Enable interrupts */ 461 - arc_reg_set(priv, R_ENABLE, RXINT_MASK | ERR_MASK); 445 + arc_reg_set(priv, R_ENABLE, RXINT_MASK | TXINT_MASK | ERR_MASK); 462 446 463 447 /* Set CONTROL */ 464 448 arc_reg_set(priv, R_CTRL, ··· 529 513 netif_stop_queue(ndev); 
530 514 531 515 /* Disable interrupts */ 532 - arc_reg_clr(priv, R_ENABLE, RXINT_MASK | ERR_MASK); 516 + arc_reg_clr(priv, R_ENABLE, RXINT_MASK | TXINT_MASK | ERR_MASK); 533 517 534 518 /* Disable EMAC */ 535 519 arc_reg_clr(priv, R_CTRL, EN_MASK); ··· 592 576 593 577 len = max_t(unsigned int, ETH_ZLEN, skb->len); 594 578 595 - /* EMAC still holds this buffer in its possession. 596 - * CPU must not modify this buffer descriptor 597 - */ 598 - if (unlikely((le32_to_cpu(*info) & OWN_MASK) == FOR_EMAC)) { 579 + if (unlikely(!arc_emac_tx_avail(priv))) { 599 580 netif_stop_queue(ndev); 581 + netdev_err(ndev, "BUG! Tx Ring full when queue awake!\n"); 600 582 return NETDEV_TX_BUSY; 601 583 } 602 584 ··· 623 609 /* Increment index to point to the next BD */ 624 610 *txbd_curr = (*txbd_curr + 1) % TX_BD_NUM; 625 611 626 - /* Get "info" of the next BD */ 627 - info = &priv->txbd[*txbd_curr].info; 612 + /* Ensure that tx_clean() sees the new txbd_curr before 613 + * checking the queue status. This prevents an unneeded wake 614 + * of the queue in tx_clean(). 615 + */ 616 + smp_mb(); 628 617 629 - /* Check if if Tx BD ring is full - next BD is still owned by EMAC */ 630 - if (unlikely((le32_to_cpu(*info) & OWN_MASK) == FOR_EMAC)) 618 + if (!arc_emac_tx_avail(priv)) { 631 619 netif_stop_queue(ndev); 620 + /* Refresh tx_dirty */ 621 + smp_mb(); 622 + if (arc_emac_tx_avail(priv)) 623 + netif_start_queue(ndev); 624 + } 632 625 633 626 arc_reg_set(priv, R_STATUS, TXPL_MASK); 634 627
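The new `arc_emac_tx_avail()` above computes free tx slots as `(txbd_dirty + TX_BD_NUM - txbd_curr - 1) % TX_BD_NUM`: with producer index `curr` and consumer index `dirty` on a ring of N descriptors, one slot is always kept unused so a full ring (0 free) is distinguishable from an empty one (N - 1 free). A standalone sketch of the arithmetic:

```c
#include <assert.h>

#define TX_BD_NUM 8   /* ring size; any power of two works the same way */

/* Free slots between consumer index `dirty` and producer index `curr`,
 * reserving one slot to disambiguate full from empty. */
static unsigned int tx_avail(unsigned int dirty, unsigned int curr)
{
    return (dirty + TX_BD_NUM - curr - 1) % TX_BD_NUM;
}
```

The paired `smp_mb()` calls in the hunk then ensure the producer's stop-queue decision and the cleaner's wake-queue decision each see the other side's latest index before acting.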
+1 -1
drivers/net/ethernet/broadcom/b44.c
··· 1697 1697 hwstat->tx_underruns + 1698 1698 hwstat->tx_excessive_cols + 1699 1699 hwstat->tx_late_cols); 1700 - nstat->multicast = hwstat->tx_multicast_pkts; 1700 + nstat->multicast = hwstat->rx_multicast_pkts; 1701 1701 nstat->collisions = hwstat->tx_total_cols; 1702 1702 1703 1703 nstat->rx_length_errors = (hwstat->rx_oversize_pkts +
+19 -12
drivers/net/ethernet/broadcom/bcmsysport.c
··· 543 543 while ((processed < to_process) && (processed < budget)) { 544 544 cb = &priv->rx_cbs[priv->rx_read_ptr]; 545 545 skb = cb->skb; 546 + 547 + processed++; 548 + priv->rx_read_ptr++; 549 + 550 + if (priv->rx_read_ptr == priv->num_rx_bds) 551 + priv->rx_read_ptr = 0; 552 + 553 + /* We do not have a backing SKB, so we do not a corresponding 554 + * DMA mapping for this incoming packet since 555 + * bcm_sysport_rx_refill always either has both skb and mapping 556 + * or none. 557 + */ 558 + if (unlikely(!skb)) { 559 + netif_err(priv, rx_err, ndev, "out of memory!\n"); 560 + ndev->stats.rx_dropped++; 561 + ndev->stats.rx_errors++; 562 + goto refill; 563 + } 564 + 546 565 dma_unmap_single(kdev, dma_unmap_addr(cb, dma_addr), 547 566 RX_BUF_LENGTH, DMA_FROM_DEVICE); 548 567 ··· 571 552 status = (rsb->rx_status_len >> DESC_STATUS_SHIFT) & 572 553 DESC_STATUS_MASK; 573 554 574 - processed++; 575 - priv->rx_read_ptr++; 576 - if (priv->rx_read_ptr == priv->num_rx_bds) 577 - priv->rx_read_ptr = 0; 578 - 579 555 netif_dbg(priv, rx_status, ndev, 580 556 "p=%d, c=%d, rd_ptr=%d, len=%d, flag=0x%04x\n", 581 557 p_index, priv->rx_c_index, priv->rx_read_ptr, 582 558 len, status); 583 - 584 - if (unlikely(!skb)) { 585 - netif_err(priv, rx_err, ndev, "out of memory!\n"); 586 - ndev->stats.rx_dropped++; 587 - ndev->stats.rx_errors++; 588 - goto refill; 589 - } 590 559 591 560 if (unlikely(!(status & DESC_EOP) || !(status & DESC_SOP))) { 592 561 netif_err(priv, rx_status, ndev, "fragmented packet!\n");
+75 -68
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 875 875 int last_tx_cn, last_c_index, num_tx_bds; 876 876 struct enet_cb *tx_cb_ptr; 877 877 struct netdev_queue *txq; 878 + unsigned int bds_compl; 878 879 unsigned int c_index; 879 880 880 881 /* Compute how many buffers are transmitted since last xmit call */ ··· 900 899 /* Reclaim transmitted buffers */ 901 900 while (last_tx_cn-- > 0) { 902 901 tx_cb_ptr = ring->cbs + last_c_index; 902 + bds_compl = 0; 903 903 if (tx_cb_ptr->skb) { 904 + bds_compl = skb_shinfo(tx_cb_ptr->skb)->nr_frags + 1; 904 905 dev->stats.tx_bytes += tx_cb_ptr->skb->len; 905 906 dma_unmap_single(&dev->dev, 906 907 dma_unmap_addr(tx_cb_ptr, dma_addr), ··· 919 916 dma_unmap_addr_set(tx_cb_ptr, dma_addr, 0); 920 917 } 921 918 dev->stats.tx_packets++; 922 - ring->free_bds += 1; 919 + ring->free_bds += bds_compl; 923 920 924 921 last_c_index++; 925 922 last_c_index &= (num_tx_bds - 1); ··· 1277 1274 1278 1275 while ((rxpktprocessed < rxpkttoprocess) && 1279 1276 (rxpktprocessed < budget)) { 1277 + cb = &priv->rx_cbs[priv->rx_read_ptr]; 1278 + skb = cb->skb; 1279 + 1280 + rxpktprocessed++; 1281 + 1282 + priv->rx_read_ptr++; 1283 + priv->rx_read_ptr &= (priv->num_rx_bds - 1); 1284 + 1285 + /* We do not have a backing SKB, so we do not have a 1286 + * corresponding DMA mapping for this incoming packet since 1287 + * bcmgenet_rx_refill always either has both skb and mapping or 1288 + * none. 
1289 + */ 1290 + if (unlikely(!skb)) { 1291 + dev->stats.rx_dropped++; 1292 + dev->stats.rx_errors++; 1293 + goto refill; 1294 + } 1295 + 1280 1296 /* Unmap the packet contents such that we can use the 1281 1297 * RSV from the 64 bytes descriptor when enabled and save 1282 1298 * a 32-bits register read 1283 1299 */ 1284 - cb = &priv->rx_cbs[priv->rx_read_ptr]; 1285 - skb = cb->skb; 1286 1300 dma_unmap_single(&dev->dev, dma_unmap_addr(cb, dma_addr), 1287 1301 priv->rx_buf_len, DMA_FROM_DEVICE); 1288 1302 ··· 1326 1306 "%s:p_ind=%d c_ind=%d read_ptr=%d len_stat=0x%08x\n", 1327 1307 __func__, p_index, priv->rx_c_index, 1328 1308 priv->rx_read_ptr, dma_length_status); 1329 - 1330 - rxpktprocessed++; 1331 - 1332 - priv->rx_read_ptr++; 1333 - priv->rx_read_ptr &= (priv->num_rx_bds - 1); 1334 - 1335 - /* out of memory, just drop packets at the hardware level */ 1336 - if (unlikely(!skb)) { 1337 - dev->stats.rx_dropped++; 1338 - dev->stats.rx_errors++; 1339 - goto refill; 1340 - } 1341 1309 1342 1310 if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) { 1343 1311 netif_err(priv, rx_status, dev, ··· 1744 1736 bcmgenet_tdma_writel(priv, reg, DMA_CTRL); 1745 1737 } 1746 1738 1739 + static int bcmgenet_dma_teardown(struct bcmgenet_priv *priv) 1740 + { 1741 + int ret = 0; 1742 + int timeout = 0; 1743 + u32 reg; 1744 + 1745 + /* Disable TDMA to stop add more frames in TX DMA */ 1746 + reg = bcmgenet_tdma_readl(priv, DMA_CTRL); 1747 + reg &= ~DMA_EN; 1748 + bcmgenet_tdma_writel(priv, reg, DMA_CTRL); 1749 + 1750 + /* Check TDMA status register to confirm TDMA is disabled */ 1751 + while (timeout++ < DMA_TIMEOUT_VAL) { 1752 + reg = bcmgenet_tdma_readl(priv, DMA_STATUS); 1753 + if (reg & DMA_DISABLED) 1754 + break; 1755 + 1756 + udelay(1); 1757 + } 1758 + 1759 + if (timeout == DMA_TIMEOUT_VAL) { 1760 + netdev_warn(priv->dev, "Timed out while disabling TX DMA\n"); 1761 + ret = -ETIMEDOUT; 1762 + } 1763 + 1764 + /* Wait 10ms for packet drain in both tx and rx dma */ 1765 + 
usleep_range(10000, 20000); 1766 + 1767 + /* Disable RDMA */ 1768 + reg = bcmgenet_rdma_readl(priv, DMA_CTRL); 1769 + reg &= ~DMA_EN; 1770 + bcmgenet_rdma_writel(priv, reg, DMA_CTRL); 1771 + 1772 + timeout = 0; 1773 + /* Check RDMA status register to confirm RDMA is disabled */ 1774 + while (timeout++ < DMA_TIMEOUT_VAL) { 1775 + reg = bcmgenet_rdma_readl(priv, DMA_STATUS); 1776 + if (reg & DMA_DISABLED) 1777 + break; 1778 + 1779 + udelay(1); 1780 + } 1781 + 1782 + if (timeout == DMA_TIMEOUT_VAL) { 1783 + netdev_warn(priv->dev, "Timed out while disabling RX DMA\n"); 1784 + ret = -ETIMEDOUT; 1785 + } 1786 + 1787 + return ret; 1788 + } 1789 + 1747 1790 static void bcmgenet_fini_dma(struct bcmgenet_priv *priv) 1748 1791 { 1749 1792 int i; 1750 1793 1751 1794 /* disable DMA */ 1752 - bcmgenet_rdma_writel(priv, 0, DMA_CTRL); 1753 - bcmgenet_tdma_writel(priv, 0, DMA_CTRL); 1795 + bcmgenet_dma_teardown(priv); 1754 1796 1755 1797 for (i = 0; i < priv->num_tx_bds; i++) { 1756 1798 if (priv->tx_cbs[i].skb != NULL) { ··· 2156 2098 err_clk_disable: 2157 2099 if (!IS_ERR(priv->clk)) 2158 2100 clk_disable_unprepare(priv->clk); 2159 - return ret; 2160 - } 2161 - 2162 - static int bcmgenet_dma_teardown(struct bcmgenet_priv *priv) 2163 - { 2164 - int ret = 0; 2165 - int timeout = 0; 2166 - u32 reg; 2167 - 2168 - /* Disable TDMA to stop add more frames in TX DMA */ 2169 - reg = bcmgenet_tdma_readl(priv, DMA_CTRL); 2170 - reg &= ~DMA_EN; 2171 - bcmgenet_tdma_writel(priv, reg, DMA_CTRL); 2172 - 2173 - /* Check TDMA status register to confirm TDMA is disabled */ 2174 - while (timeout++ < DMA_TIMEOUT_VAL) { 2175 - reg = bcmgenet_tdma_readl(priv, DMA_STATUS); 2176 - if (reg & DMA_DISABLED) 2177 - break; 2178 - 2179 - udelay(1); 2180 - } 2181 - 2182 - if (timeout == DMA_TIMEOUT_VAL) { 2183 - netdev_warn(priv->dev, "Timed out while disabling TX DMA\n"); 2184 - ret = -ETIMEDOUT; 2185 - } 2186 - 2187 - /* Wait 10ms for packet drain in both tx and rx dma */ 2188 - usleep_range(10000, 20000); 
2189 - 2190 - /* Disable RDMA */ 2191 - reg = bcmgenet_rdma_readl(priv, DMA_CTRL); 2192 - reg &= ~DMA_EN; 2193 - bcmgenet_rdma_writel(priv, reg, DMA_CTRL); 2194 - 2195 - timeout = 0; 2196 - /* Check RDMA status register to confirm RDMA is disabled */ 2197 - while (timeout++ < DMA_TIMEOUT_VAL) { 2198 - reg = bcmgenet_rdma_readl(priv, DMA_STATUS); 2199 - if (reg & DMA_DISABLED) 2200 - break; 2201 - 2202 - udelay(1); 2203 - } 2204 - 2205 - if (timeout == DMA_TIMEOUT_VAL) { 2206 - netdev_warn(priv->dev, "Timed out while disabling RX DMA\n"); 2207 - ret = -ETIMEDOUT; 2208 - } 2209 - 2210 2101 return ret; 2211 2102 } 2212 2103
+18 -2
drivers/net/ethernet/broadcom/tg3.c
··· 7914 7914 7915 7915 entry = tnapi->tx_prod; 7916 7916 base_flags = 0; 7917 - if (skb->ip_summed == CHECKSUM_PARTIAL) 7918 - base_flags |= TXD_FLAG_TCPUDP_CSUM; 7919 7917 7920 7918 mss = skb_shinfo(skb)->gso_size; 7921 7919 if (mss) { ··· 7926 7928 tcp_opt_len = tcp_optlen(skb); 7927 7929 7928 7930 hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb) - ETH_HLEN; 7931 + 7932 + /* HW/FW can not correctly segment packets that have been 7933 + * vlan encapsulated. 7934 + */ 7935 + if (skb->protocol == htons(ETH_P_8021Q) || 7936 + skb->protocol == htons(ETH_P_8021AD)) 7937 + return tg3_tso_bug(tp, tnapi, txq, skb); 7929 7938 7930 7939 if (!skb_is_gso_v6(skb)) { 7931 7940 if (unlikely((ETH_HLEN + hdr_len) > 80) && ··· 7983 7978 tsflags = (iph->ihl - 5) + (tcp_opt_len >> 2); 7984 7979 base_flags |= tsflags << 12; 7985 7980 } 7981 + } 7982 + } else if (skb->ip_summed == CHECKSUM_PARTIAL) { 7983 + /* HW/FW can not correctly checksum packets that have been 7984 + * vlan encapsulated. 7985 + */ 7986 + if (skb->protocol == htons(ETH_P_8021Q) || 7987 + skb->protocol == htons(ETH_P_8021AD)) { 7988 + if (skb_checksum_help(skb)) 7989 + goto drop; 7990 + } else { 7991 + base_flags |= TXD_FLAG_TCPUDP_CSUM; 7986 7992 } 7987 7993 } 7988 7994
+27 -22
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 6478 6478 struct port_info *pi; 6479 6479 bool highdma = false; 6480 6480 struct adapter *adapter = NULL; 6481 + void __iomem *regs; 6481 6482 6482 6483 printk_once(KERN_INFO "%s - version %s\n", DRV_DESC, DRV_VERSION); 6483 6484 ··· 6495 6494 goto out_release_regions; 6496 6495 } 6497 6496 6497 + regs = pci_ioremap_bar(pdev, 0); 6498 + if (!regs) { 6499 + dev_err(&pdev->dev, "cannot map device registers\n"); 6500 + err = -ENOMEM; 6501 + goto out_disable_device; 6502 + } 6503 + 6504 + /* We control everything through one PF */ 6505 + func = SOURCEPF_GET(readl(regs + PL_WHOAMI)); 6506 + if (func != ent->driver_data) { 6507 + iounmap(regs); 6508 + pci_disable_device(pdev); 6509 + pci_save_state(pdev); /* to restore SR-IOV later */ 6510 + goto sriov; 6511 + } 6512 + 6498 6513 if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) { 6499 6514 highdma = true; 6500 6515 err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)); 6501 6516 if (err) { 6502 6517 dev_err(&pdev->dev, "unable to obtain 64-bit DMA for " 6503 6518 "coherent allocations\n"); 6504 - goto out_disable_device; 6519 + goto out_unmap_bar0; 6505 6520 } 6506 6521 } else { 6507 6522 err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); 6508 6523 if (err) { 6509 6524 dev_err(&pdev->dev, "no usable DMA configuration\n"); 6510 - goto out_disable_device; 6525 + goto out_unmap_bar0; 6511 6526 } 6512 6527 } 6513 6528 ··· 6535 6518 adapter = kzalloc(sizeof(*adapter), GFP_KERNEL); 6536 6519 if (!adapter) { 6537 6520 err = -ENOMEM; 6538 - goto out_disable_device; 6521 + goto out_unmap_bar0; 6539 6522 } 6540 6523 6541 6524 adapter->workq = create_singlethread_workqueue("cxgb4"); ··· 6547 6530 /* PCI device has been enabled */ 6548 6531 adapter->flags |= DEV_ENABLED; 6549 6532 6550 - adapter->regs = pci_ioremap_bar(pdev, 0); 6551 - if (!adapter->regs) { 6552 - dev_err(&pdev->dev, "cannot map device registers\n"); 6553 - err = -ENOMEM; 6554 - goto out_free_adapter; 6555 - } 6556 - 6557 - /* We control everything through one PF 
*/ 6558 - func = SOURCEPF_GET(readl(adapter->regs + PL_WHOAMI)); 6559 - if (func != ent->driver_data) { 6560 - pci_save_state(pdev); /* to restore SR-IOV later */ 6561 - goto sriov; 6562 - } 6563 - 6533 + adapter->regs = regs; 6564 6534 adapter->pdev = pdev; 6565 6535 adapter->pdev_dev = &pdev->dev; 6566 6536 adapter->mbox = func; ··· 6564 6560 6565 6561 err = t4_prep_adapter(adapter); 6566 6562 if (err) 6567 - goto out_unmap_bar0; 6563 + goto out_free_adapter; 6564 + 6568 6565 6569 6566 if (!is_t4(adapter->params.chip)) { 6570 6567 s_qpp = QUEUESPERPAGEPF1 * adapter->fn; ··· 6582 6577 dev_err(&pdev->dev, 6583 6578 "Incorrect number of egress queues per page\n"); 6584 6579 err = -EINVAL; 6585 - goto out_unmap_bar0; 6580 + goto out_free_adapter; 6586 6581 } 6587 6582 adapter->bar2 = ioremap_wc(pci_resource_start(pdev, 2), 6588 6583 pci_resource_len(pdev, 2)); 6589 6584 if (!adapter->bar2) { 6590 6585 dev_err(&pdev->dev, "cannot map device bar2 region\n"); 6591 6586 err = -ENOMEM; 6592 - goto out_unmap_bar0; 6587 + goto out_free_adapter; 6593 6588 } 6594 6589 } 6595 6590 ··· 6727 6722 out_unmap_bar: 6728 6723 if (!is_t4(adapter->params.chip)) 6729 6724 iounmap(adapter->bar2); 6730 - out_unmap_bar0: 6731 - iounmap(adapter->regs); 6732 6725 out_free_adapter: 6733 6726 if (adapter->workq) 6734 6727 destroy_workqueue(adapter->workq); 6735 6728 6736 6729 kfree(adapter); 6730 + out_unmap_bar0: 6731 + iounmap(regs); 6737 6732 out_disable_device: 6738 6733 pci_disable_pcie_error_reporting(pdev); 6739 6734 pci_disable_device(pdev);
+1 -1
drivers/net/ethernet/davicom/dm9000.c
··· 1399 1399 const void *mac_addr; 1400 1400 1401 1401 if (!IS_ENABLED(CONFIG_OF) || !np) 1402 - return NULL; 1402 + return ERR_PTR(-ENXIO); 1403 1403 1404 1404 pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL); 1405 1405 if (!pdata)
+21
drivers/net/ethernet/mellanox/mlx4/cmd.c
··· 2389 2389 } 2390 2390 EXPORT_SYMBOL_GPL(mlx4_phys_to_slaves_pport_actv); 2391 2391 2392 + static int mlx4_slaves_closest_port(struct mlx4_dev *dev, int slave, int port) 2393 + { 2394 + struct mlx4_active_ports actv_ports = mlx4_get_active_ports(dev, slave); 2395 + int min_port = find_first_bit(actv_ports.ports, dev->caps.num_ports) 2396 + + 1; 2397 + int max_port = min_port + 2398 + bitmap_weight(actv_ports.ports, dev->caps.num_ports); 2399 + 2400 + if (port < min_port) 2401 + port = min_port; 2402 + else if (port >= max_port) 2403 + port = max_port - 1; 2404 + 2405 + return port; 2406 + } 2407 + 2392 2408 int mlx4_set_vf_mac(struct mlx4_dev *dev, int port, int vf, u64 mac) 2393 2409 { 2394 2410 struct mlx4_priv *priv = mlx4_priv(dev); ··· 2418 2402 if (slave < 0) 2419 2403 return -EINVAL; 2420 2404 2405 + port = mlx4_slaves_closest_port(dev, slave, port); 2421 2406 s_info = &priv->mfunc.master.vf_admin[slave].vport[port]; 2422 2407 s_info->mac = mac; 2423 2408 mlx4_info(dev, "default mac on vf %d port %d to %llX will take afect only after vf restart\n", ··· 2445 2428 if (slave < 0) 2446 2429 return -EINVAL; 2447 2430 2431 + port = mlx4_slaves_closest_port(dev, slave, port); 2448 2432 vf_admin = &priv->mfunc.master.vf_admin[slave].vport[port]; 2449 2433 2450 2434 if ((0 == vlan) && (0 == qos)) ··· 2473 2455 struct mlx4_priv *priv; 2474 2456 2475 2457 priv = mlx4_priv(dev); 2458 + port = mlx4_slaves_closest_port(dev, slave, port); 2476 2459 vp_oper = &priv->mfunc.master.vf_oper[slave].vport[port]; 2477 2460 2478 2461 if (MLX4_VGT != vp_oper->state.default_vlan) { ··· 2501 2482 if (slave < 0) 2502 2483 return -EINVAL; 2503 2484 2485 + port = mlx4_slaves_closest_port(dev, slave, port); 2504 2486 s_info = &priv->mfunc.master.vf_admin[slave].vport[port]; 2505 2487 s_info->spoofchk = setting; 2506 2488 ··· 2555 2535 if (slave < 0) 2556 2536 return -EINVAL; 2557 2537 2538 + port = mlx4_slaves_closest_port(dev, slave, port); 2558 2539 switch (link_state) { 2559 2540 
case IFLA_VF_LINK_STATE_AUTO: 2560 2541 /* get current link state */
+3
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
··· 487 487 struct mlx4_en_dev *mdev = priv->mdev; 488 488 int err; 489 489 490 + if (pause->autoneg) 491 + return -EINVAL; 492 + 490 493 priv->prof->tx_pause = pause->tx_pause != 0; 491 494 priv->prof->rx_pause = pause->rx_pause != 0; 492 495 err = mlx4_SET_PORT_general(mdev->dev, priv->port,
+11 -3
drivers/net/ethernet/mellanox/mlx4/qp.c
··· 390 390 EXPORT_SYMBOL_GPL(mlx4_qp_alloc); 391 391 392 392 #define MLX4_UPDATE_QP_SUPPORTED_ATTRS MLX4_UPDATE_QP_SMAC 393 - int mlx4_update_qp(struct mlx4_dev *dev, struct mlx4_qp *qp, 393 + int mlx4_update_qp(struct mlx4_dev *dev, u32 qpn, 394 394 enum mlx4_update_qp_attr attr, 395 395 struct mlx4_update_qp_params *params) 396 396 { 397 397 struct mlx4_cmd_mailbox *mailbox; 398 398 struct mlx4_update_qp_context *cmd; 399 399 u64 pri_addr_path_mask = 0; 400 + u64 qp_mask = 0; 400 401 int err = 0; 401 402 402 403 mailbox = mlx4_alloc_cmd_mailbox(dev); ··· 414 413 cmd->qp_context.pri_path.grh_mylmc = params->smac_index; 415 414 } 416 415 417 - cmd->primary_addr_path_mask = cpu_to_be64(pri_addr_path_mask); 416 + if (attr & MLX4_UPDATE_QP_VSD) { 417 + qp_mask |= 1ULL << MLX4_UPD_QP_MASK_VSD; 418 + if (params->flags & MLX4_UPDATE_QP_PARAMS_FLAGS_VSD_ENABLE) 419 + cmd->qp_context.param3 |= cpu_to_be32(MLX4_STRIP_VLAN); 420 + } 418 421 419 - err = mlx4_cmd(dev, mailbox->dma, qp->qpn & 0xffffff, 0, 422 + cmd->primary_addr_path_mask = cpu_to_be64(pri_addr_path_mask); 423 + cmd->qp_mask = cpu_to_be64(qp_mask); 424 + 425 + err = mlx4_cmd(dev, mailbox->dma, qpn & 0xffffff, 0, 420 426 MLX4_CMD_UPDATE_QP, MLX4_CMD_TIME_CLASS_A, 421 427 MLX4_CMD_NATIVE); 422 428
+28 -10
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 702 702 struct mlx4_qp_context *qpc = inbox->buf + 8; 703 703 struct mlx4_vport_oper_state *vp_oper; 704 704 struct mlx4_priv *priv; 705 + u32 qp_type; 705 706 int port; 706 707 707 708 port = (qpc->pri_path.sched_queue & 0x40) ? 2 : 1; 708 709 priv = mlx4_priv(dev); 709 710 vp_oper = &priv->mfunc.master.vf_oper[slave].vport[port]; 711 + qp_type = (be32_to_cpu(qpc->flags) >> 16) & 0xff; 710 712 711 713 if (MLX4_VGT != vp_oper->state.default_vlan) { 712 714 /* the reserved QPs (special, proxy, tunnel) ··· 717 715 if (mlx4_is_qp_reserved(dev, qpn)) 718 716 return 0; 719 717 720 - /* force strip vlan by clear vsd */ 721 - qpc->param3 &= ~cpu_to_be32(MLX4_STRIP_VLAN); 718 + /* force strip vlan by clear vsd, MLX QP refers to Raw Ethernet */ 719 + if (qp_type == MLX4_QP_ST_UD || 720 + (qp_type == MLX4_QP_ST_MLX && mlx4_is_eth(dev, port))) { 721 + if (dev->caps.bmme_flags & MLX4_BMME_FLAG_VSD_INIT2RTR) { 722 + *(__be32 *)inbox->buf = 723 + cpu_to_be32(be32_to_cpu(*(__be32 *)inbox->buf) | 724 + MLX4_QP_OPTPAR_VLAN_STRIPPING); 725 + qpc->param3 &= ~cpu_to_be32(MLX4_STRIP_VLAN); 726 + } else { 727 + struct mlx4_update_qp_params params = {.flags = 0}; 728 + 729 + mlx4_update_qp(dev, qpn, MLX4_UPDATE_QP_VSD, &params); 730 + } 731 + } 722 732 723 733 if (vp_oper->state.link_state == IFLA_VF_LINK_STATE_DISABLE && 724 734 dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_UPDATE_QP) { ··· 4012 3998 } 4013 3999 4014 4000 port = (rqp->sched_queue >> 6 & 1) + 1; 4015 - smac_index = cmd->qp_context.pri_path.grh_mylmc; 4016 - err = mac_find_smac_ix_in_slave(dev, slave, port, 4017 - smac_index, &mac); 4018 - if (err) { 4019 - mlx4_err(dev, "Failed to update qpn 0x%x, MAC is invalid. 
smac_ix: %d\n", 4020 - qpn, smac_index); 4021 - goto err_mac; 4001 + 4002 + if (pri_addr_path_mask & (1ULL << MLX4_UPD_QP_PATH_MASK_MAC_INDEX)) { 4003 + smac_index = cmd->qp_context.pri_path.grh_mylmc; 4004 + err = mac_find_smac_ix_in_slave(dev, slave, port, 4005 + smac_index, &mac); 4006 + 4007 + if (err) { 4008 + mlx4_err(dev, "Failed to update qpn 0x%x, MAC is invalid. smac_ix: %d\n", 4009 + qpn, smac_index); 4010 + goto err_mac; 4011 + } 4022 4012 } 4023 4013 4024 4014 err = mlx4_cmd(dev, inbox->dma, ··· 4836 4818 MLX4_VLAN_CTRL_ETH_RX_BLOCK_UNTAGGED; 4837 4819 4838 4820 upd_context = mailbox->buf; 4839 - upd_context->qp_mask = cpu_to_be64(MLX4_UPD_QP_MASK_VSD); 4821 + upd_context->qp_mask = cpu_to_be64(1ULL << MLX4_UPD_QP_MASK_VSD); 4840 4822 4841 4823 spin_lock_irq(mlx4_tlock(dev)); 4842 4824 list_for_each_entry_safe(qp, tmp, qp_list, com.list) {
+3 -1
drivers/net/ethernet/octeon/octeon_mgmt.c
··· 290 290 /* Read the hardware TX timestamp if one was recorded */ 291 291 if (unlikely(re.s.tstamp)) { 292 292 struct skb_shared_hwtstamps ts; 293 + u64 ns; 294 + 293 295 memset(&ts, 0, sizeof(ts)); 294 296 /* Read the timestamp */ 295 - u64 ns = cvmx_read_csr(CVMX_MIXX_TSTAMP(p->port)); 297 + ns = cvmx_read_csr(CVMX_MIXX_TSTAMP(p->port)); 296 298 /* Remove the timestamp from the FIFO */ 297 299 cvmx_write_csr(CVMX_MIXX_TSCTL(p->port), 0); 298 300 /* Tell the kernel about the timestamp */
+1
drivers/net/ethernet/oki-semi/pch_gbe/Kconfig
··· 7 7 depends on PCI && (X86_32 || COMPILE_TEST) 8 8 select MII 9 9 select PTP_1588_CLOCK_PCH 10 + select NET_PTP_CLASSIFY 10 11 ---help--- 11 12 This is a gigabit ethernet driver for EG20T PCH. 12 13 EG20T PCH is the platform controller hub that is used in Intel's
+33 -34
drivers/net/ethernet/realtek/r8169.c
··· 1847 1847 netdev_features_t features) 1848 1848 { 1849 1849 struct rtl8169_private *tp = netdev_priv(dev); 1850 - netdev_features_t changed = features ^ dev->features; 1851 1850 void __iomem *ioaddr = tp->mmio_addr; 1851 + u32 rx_config; 1852 1852 1853 - if (!(changed & (NETIF_F_RXALL | NETIF_F_RXCSUM | 1854 - NETIF_F_HW_VLAN_CTAG_RX))) 1855 - return; 1853 + rx_config = RTL_R32(RxConfig); 1854 + if (features & NETIF_F_RXALL) 1855 + rx_config |= (AcceptErr | AcceptRunt); 1856 + else 1857 + rx_config &= ~(AcceptErr | AcceptRunt); 1856 1858 1857 - if (changed & (NETIF_F_RXCSUM | NETIF_F_HW_VLAN_CTAG_RX)) { 1858 - if (features & NETIF_F_RXCSUM) 1859 - tp->cp_cmd |= RxChkSum; 1860 - else 1861 - tp->cp_cmd &= ~RxChkSum; 1859 + RTL_W32(RxConfig, rx_config); 1862 1860 1863 - if (dev->features & NETIF_F_HW_VLAN_CTAG_RX) 1864 - tp->cp_cmd |= RxVlan; 1865 - else 1866 - tp->cp_cmd &= ~RxVlan; 1861 + if (features & NETIF_F_RXCSUM) 1862 + tp->cp_cmd |= RxChkSum; 1863 + else 1864 + tp->cp_cmd &= ~RxChkSum; 1867 1865 1868 - RTL_W16(CPlusCmd, tp->cp_cmd); 1869 - RTL_R16(CPlusCmd); 1870 - } 1871 - if (changed & NETIF_F_RXALL) { 1872 - int tmp = (RTL_R32(RxConfig) & ~(AcceptErr | AcceptRunt)); 1873 - if (features & NETIF_F_RXALL) 1874 - tmp |= (AcceptErr | AcceptRunt); 1875 - RTL_W32(RxConfig, tmp); 1876 - } 1866 + if (features & NETIF_F_HW_VLAN_CTAG_RX) 1867 + tp->cp_cmd |= RxVlan; 1868 + else 1869 + tp->cp_cmd &= ~RxVlan; 1870 + 1871 + tp->cp_cmd |= RTL_R16(CPlusCmd) & ~(RxVlan | RxChkSum); 1872 + 1873 + RTL_W16(CPlusCmd, tp->cp_cmd); 1874 + RTL_R16(CPlusCmd); 1877 1875 } 1878 1876 1879 1877 static int rtl8169_set_features(struct net_device *dev, ··· 1879 1881 { 1880 1882 struct rtl8169_private *tp = netdev_priv(dev); 1881 1883 1884 + features &= NETIF_F_RXALL | NETIF_F_RXCSUM | NETIF_F_HW_VLAN_CTAG_RX; 1885 + 1882 1886 rtl_lock_work(tp); 1883 - __rtl8169_set_features(dev, features); 1887 + if (features ^ dev->features) 1888 + __rtl8169_set_features(dev, features); 1884 1889 
rtl_unlock_work(tp); 1885 1890 1886 1891 return 0; ··· 7532 7531 } 7533 7532 } 7534 7533 7535 - static int 7536 - rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) 7534 + static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) 7537 7535 { 7538 7536 const struct rtl_cfg_info *cfg = rtl_cfg_infos + ent->driver_data; 7539 7537 const unsigned int region = cfg->region; ··· 7607 7607 goto err_out_mwi_2; 7608 7608 } 7609 7609 7610 - tp->cp_cmd = RxChkSum; 7610 + tp->cp_cmd = 0; 7611 7611 7612 7612 if ((sizeof(dma_addr_t) > 4) && 7613 7613 !pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) && use_dac) { ··· 7647 7647 rtl_ack_events(tp, 0xffff); 7648 7648 7649 7649 pci_set_master(pdev); 7650 - 7651 - /* 7652 - * Pretend we are using VLANs; This bypasses a nasty bug where 7653 - * Interrupts stop flowing on high load on 8110SCd controllers. 7654 - */ 7655 - if (tp->mac_version == RTL_GIGA_MAC_VER_05) 7656 - tp->cp_cmd |= RxVlan; 7657 7650 7658 7651 rtl_init_mdio_ops(tp); 7659 7652 rtl_init_pll_power_ops(tp); ··· 7731 7738 dev->vlan_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO | 7732 7739 NETIF_F_HIGHDMA; 7733 7740 7741 + tp->cp_cmd |= RxChkSum | RxVlan; 7742 + 7743 + /* 7744 + * Pretend we are using VLANs; This bypasses a nasty bug where 7745 + * Interrupts stop flowing on high load on 8110SCd controllers. 7746 + */ 7734 7747 if (tp->mac_version == RTL_GIGA_MAC_VER_05) 7735 - /* 8110SCd requires hardware Rx VLAN - disallow toggling */ 7748 + /* Disallow toggling */ 7736 7749 dev->hw_features &= ~NETIF_F_HW_VLAN_CTAG_RX; 7737 7750 7738 7751 if (tp->txd_version == RTL_TD_0)
+3
drivers/net/ethernet/sfc/farch.c
··· 2933 2933 u32 crc; 2934 2934 int bit; 2935 2935 2936 + if (!efx_dev_registered(efx)) 2937 + return; 2938 + 2936 2939 netif_addr_lock_bh(net_dev); 2937 2940 2938 2941 efx->unicast_filter = !(net_dev->flags & IFF_PROMISC);
+5 -2
drivers/net/ethernet/sun/sunvnet.c
··· 360 360 if (IS_ERR(desc)) 361 361 return PTR_ERR(desc); 362 362 363 + if (desc->hdr.state != VIO_DESC_READY) 364 + return 1; 365 + 366 + rmb(); 367 + 363 368 viodbg(DATA, "vio_walk_rx_one desc[%02x:%02x:%08x:%08x:%llx:%llx]\n", 364 369 desc->hdr.state, desc->hdr.ack, 365 370 desc->size, desc->ncookies, 366 371 desc->cookies[0].cookie_addr, 367 372 desc->cookies[0].cookie_size); 368 373 369 - if (desc->hdr.state != VIO_DESC_READY) 370 - return 1; 371 374 err = vnet_rx_one(port, desc->size, desc->cookies, desc->ncookies); 372 375 if (err == -ECONNRESET) 373 376 return err;
+47 -5
drivers/net/ethernet/ti/cpsw.c
··· 701 701 cpsw_dual_emac_src_port_detect(status, priv, ndev, skb); 702 702 703 703 if (unlikely(status < 0) || unlikely(!netif_running(ndev))) { 704 + bool ndev_status = false; 705 + struct cpsw_slave *slave = priv->slaves; 706 + int n; 707 + 708 + if (priv->data.dual_emac) { 709 + /* In dual emac mode check for all interfaces */ 710 + for (n = priv->data.slaves; n; n--, slave++) 711 + if (netif_running(slave->ndev)) 712 + ndev_status = true; 713 + } 714 + 715 + if (ndev_status && (status >= 0)) { 716 + /* The packet received is for the interface which 717 + * is already down and the other interface is up 718 + * and running, intead of freeing which results 719 + * in reducing of the number of rx descriptor in 720 + * DMA engine, requeue skb back to cpdma. 721 + */ 722 + new_skb = skb; 723 + goto requeue; 724 + } 725 + 704 726 /* the interface is going down, skbs are purged */ 705 727 dev_kfree_skb_any(skb); 706 728 return; ··· 741 719 new_skb = skb; 742 720 } 743 721 722 + requeue: 744 723 ret = cpdma_chan_submit(priv->rxch, new_skb, new_skb->data, 745 724 skb_tailroom(new_skb), 0); 746 725 if (WARN_ON(ret < 0)) ··· 2377 2354 struct net_device *ndev = platform_get_drvdata(pdev); 2378 2355 struct cpsw_priv *priv = netdev_priv(ndev); 2379 2356 2380 - if (netif_running(ndev)) 2381 - cpsw_ndo_stop(ndev); 2357 + if (priv->data.dual_emac) { 2358 + int i; 2382 2359 2383 - for_each_slave(priv, soft_reset_slave); 2360 + for (i = 0; i < priv->data.slaves; i++) { 2361 + if (netif_running(priv->slaves[i].ndev)) 2362 + cpsw_ndo_stop(priv->slaves[i].ndev); 2363 + soft_reset_slave(priv->slaves + i); 2364 + } 2365 + } else { 2366 + if (netif_running(ndev)) 2367 + cpsw_ndo_stop(ndev); 2368 + for_each_slave(priv, soft_reset_slave); 2369 + } 2384 2370 2385 2371 pm_runtime_put_sync(&pdev->dev); 2386 2372 ··· 2403 2371 { 2404 2372 struct platform_device *pdev = to_platform_device(dev); 2405 2373 struct net_device *ndev = platform_get_drvdata(pdev); 2374 + struct cpsw_priv *priv = 
netdev_priv(ndev); 2406 2375 2407 2376 pm_runtime_get_sync(&pdev->dev); 2408 2377 2409 2378 /* Select default pin state */ 2410 2379 pinctrl_pm_select_default_state(&pdev->dev); 2411 2380 2412 - if (netif_running(ndev)) 2413 - cpsw_ndo_open(ndev); 2381 + if (priv->data.dual_emac) { 2382 + int i; 2383 + 2384 + for (i = 0; i < priv->data.slaves; i++) { 2385 + if (netif_running(priv->slaves[i].ndev)) 2386 + cpsw_ndo_open(priv->slaves[i].ndev); 2387 + } 2388 + } else { 2389 + if (netif_running(ndev)) 2390 + cpsw_ndo_open(ndev); 2391 + } 2414 2392 return 0; 2415 2393 } 2416 2394
+3 -1
drivers/net/macvlan.c
··· 36 36 #include <linux/netpoll.h> 37 37 38 38 #define MACVLAN_HASH_SIZE (1 << BITS_PER_BYTE) 39 + #define MACVLAN_BC_QUEUE_LEN 1000 39 40 40 41 struct macvlan_port { 41 42 struct net_device *dev; ··· 249 248 goto err; 250 249 251 250 spin_lock(&port->bc_queue.lock); 252 - if (skb_queue_len(&port->bc_queue) < skb->dev->tx_queue_len) { 251 + if (skb_queue_len(&port->bc_queue) < MACVLAN_BC_QUEUE_LEN) { 253 252 __skb_queue_tail(&port->bc_queue, nskb); 254 253 err = 0; 255 254 } ··· 807 806 features, 808 807 mask); 809 808 features |= ALWAYS_ON_FEATURES; 809 + features &= ~NETIF_F_NETNS_LOCAL; 810 810 811 811 return features; 812 812 }
+1 -2
drivers/net/phy/micrel.c
··· 592 592 .phy_id = PHY_ID_KSZ9031, 593 593 .phy_id_mask = 0x00fffff0, 594 594 .name = "Micrel KSZ9031 Gigabit PHY", 595 - .features = (PHY_GBIT_FEATURES | SUPPORTED_Pause 596 - | SUPPORTED_Asym_Pause), 595 + .features = (PHY_GBIT_FEATURES | SUPPORTED_Pause), 597 596 .flags = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT, 598 597 .config_init = ksz9031_config_init, 599 598 .config_aneg = genphy_config_aneg,
+45 -17
drivers/net/usb/r8152.c
··· 2050 2050 return rtl_enable(tp); 2051 2051 } 2052 2052 2053 - static void rtl8152_disable(struct r8152 *tp) 2053 + static void rtl_disable(struct r8152 *tp) 2054 2054 { 2055 2055 u32 ocp_data; 2056 2056 int i; ··· 2291 2291 LINKENA | DIS_SDSAVE); 2292 2292 } 2293 2293 2294 + static void rtl8152_disable(struct r8152 *tp) 2295 + { 2296 + r8152b_disable_aldps(tp); 2297 + rtl_disable(tp); 2298 + r8152b_enable_aldps(tp); 2299 + } 2300 + 2294 2301 static void r8152b_hw_phy_cfg(struct r8152 *tp) 2295 2302 { 2296 2303 u16 data; ··· 2308 2301 r8152_mdio_write(tp, MII_BMCR, data); 2309 2302 } 2310 2303 2311 - r8152b_disable_aldps(tp); 2312 - 2313 2304 rtl_clear_bp(tp); 2314 2305 2315 - r8152b_enable_aldps(tp); 2316 2306 set_bit(PHY_RESET, &tp->flags); 2317 2307 } 2318 2308 ··· 2317 2313 { 2318 2314 u32 ocp_data; 2319 2315 int i; 2320 - 2321 - if (test_bit(RTL8152_UNPLUG, &tp->flags)) 2322 - return; 2323 2316 2324 2317 ocp_data = ocp_read_dword(tp, MCU_TYPE_PLA, PLA_RCR); 2325 2318 ocp_data &= ~RCR_ACPT_ALL; ··· 2405 2404 ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RXFIFO_CTRL1, RXFIFO_THR2_OOB); 2406 2405 ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RXFIFO_CTRL2, RXFIFO_THR3_OOB); 2407 2406 2408 - rtl8152_disable(tp); 2407 + rtl_disable(tp); 2409 2408 2410 2409 for (i = 0; i < 1000; i++) { 2411 2410 ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL); ··· 2541 2540 u32 ocp_data; 2542 2541 int i; 2543 2542 2544 - if (test_bit(RTL8152_UNPLUG, &tp->flags)) 2545 - return; 2546 - 2547 2543 rxdy_gated_en(tp, true); 2548 2544 r8153_teredo_off(tp); 2549 2545 ··· 2611 2613 ocp_data &= ~NOW_IS_OOB; 2612 2614 ocp_write_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL, ocp_data); 2613 2615 2614 - rtl8152_disable(tp); 2616 + rtl_disable(tp); 2615 2617 2616 2618 for (i = 0; i < 1000; i++) { 2617 2619 ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL); ··· 2671 2673 data = ocp_reg_read(tp, OCP_POWER_CFG); 2672 2674 data |= EN_ALDPS; 2673 2675 ocp_reg_write(tp, OCP_POWER_CFG, data); 2676 + } 2677 + 
2678 + static void rtl8153_disable(struct r8152 *tp) 2679 + { 2680 + r8153_disable_aldps(tp); 2681 + rtl_disable(tp); 2682 + r8153_enable_aldps(tp); 2674 2683 } 2675 2684 2676 2685 static int rtl8152_set_speed(struct r8152 *tp, u8 autoneg, u16 speed, u8 duplex) ··· 2770 2765 return ret; 2771 2766 } 2772 2767 2768 + static void rtl8152_up(struct r8152 *tp) 2769 + { 2770 + if (test_bit(RTL8152_UNPLUG, &tp->flags)) 2771 + return; 2772 + 2773 + r8152b_disable_aldps(tp); 2774 + r8152b_exit_oob(tp); 2775 + r8152b_enable_aldps(tp); 2776 + } 2777 + 2773 2778 static void rtl8152_down(struct r8152 *tp) 2774 2779 { 2775 2780 if (test_bit(RTL8152_UNPLUG, &tp->flags)) { ··· 2791 2776 r8152b_disable_aldps(tp); 2792 2777 r8152b_enter_oob(tp); 2793 2778 r8152b_enable_aldps(tp); 2779 + } 2780 + 2781 + static void rtl8153_up(struct r8152 *tp) 2782 + { 2783 + if (test_bit(RTL8152_UNPLUG, &tp->flags)) 2784 + return; 2785 + 2786 + r8153_disable_aldps(tp); 2787 + r8153_first_init(tp); 2788 + r8153_enable_aldps(tp); 2794 2789 } 2795 2790 2796 2791 static void rtl8153_down(struct r8152 *tp) ··· 3021 2996 if (test_bit(RTL8152_UNPLUG, &tp->flags)) 3022 2997 return; 3023 2998 2999 + r8152b_disable_aldps(tp); 3000 + 3024 3001 if (tp->version == RTL_VER_01) { 3025 3002 ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_LED_FEATURE); 3026 3003 ocp_data &= ~LED_MODE_MASK; ··· 3061 3034 if (test_bit(RTL8152_UNPLUG, &tp->flags)) 3062 3035 return; 3063 3036 3037 + r8153_disable_aldps(tp); 3064 3038 r8153_u1u2en(tp, false); 3065 3039 3066 3040 for (i = 0; i < 500; i++) { ··· 3472 3444 ops->init = r8152b_init; 3473 3445 ops->enable = rtl8152_enable; 3474 3446 ops->disable = rtl8152_disable; 3475 - ops->up = r8152b_exit_oob; 3447 + ops->up = rtl8152_up; 3476 3448 ops->down = rtl8152_down; 3477 3449 ops->unload = rtl8152_unload; 3478 3450 ret = 0; ··· 3480 3452 case PRODUCT_ID_RTL8153: 3481 3453 ops->init = r8153_init; 3482 3454 ops->enable = rtl8153_enable; 3483 - ops->disable = rtl8152_disable; 3484 - 
ops->up = r8153_first_init; 3455 + ops->disable = rtl8153_disable; 3456 + ops->up = rtl8153_up; 3485 3457 ops->down = rtl8153_down; 3486 3458 ops->unload = rtl8153_unload; 3487 3459 ret = 0; ··· 3496 3468 case PRODUCT_ID_SAMSUNG: 3497 3469 ops->init = r8153_init; 3498 3470 ops->enable = rtl8153_enable; 3499 - ops->disable = rtl8152_disable; 3500 - ops->up = r8153_first_init; 3471 + ops->disable = rtl8153_disable; 3472 + ops->up = rtl8153_up; 3501 3473 ops->down = rtl8153_down; 3502 3474 ops->unload = rtl8153_unload; 3503 3475 ret = 0;
+2 -3
drivers/net/wireless/ath/ath9k/common-beacon.c
··· 57 57 struct ath9k_beacon_state *bs) 58 58 { 59 59 struct ath_common *common = ath9k_hw_common(ah); 60 - int dtim_intval, sleepduration; 60 + int dtim_intval; 61 61 u64 tsf; 62 62 63 63 /* No need to configure beacon if we are not associated */ ··· 75 75 * last beacon we received (which may be none). 76 76 */ 77 77 dtim_intval = conf->intval * conf->dtim_period; 78 - sleepduration = ah->hw->conf.listen_interval * conf->intval; 79 78 80 79 /* 81 80 * Pull nexttbtt forward to reflect the current ··· 112 113 */ 113 114 114 115 bs->bs_sleepduration = TU_TO_USEC(roundup(IEEE80211_MS_TO_TU(100), 115 - sleepduration)); 116 + conf->intval)); 116 117 if (bs->bs_sleepduration > bs->bs_dtimperiod) 117 118 bs->bs_sleepduration = bs->bs_dtimperiod; 118 119
+1 -1
drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
··· 978 978 struct ath_hw *ah = common->ah; 979 979 struct ath_htc_rx_status *rxstatus; 980 980 struct ath_rx_status rx_stats; 981 - bool decrypt_error; 981 + bool decrypt_error = false; 982 982 983 983 if (skb->len < HTC_RX_FRAME_HEADER_SIZE) { 984 984 ath_err(common, "Corrupted RX frame, dropping (len: %d)\n",
+10
drivers/net/wireless/brcm80211/Kconfig
··· 27 27 one of the bus interface support. If you choose to build a module, 28 28 it'll be called brcmfmac.ko. 29 29 30 + config BRCMFMAC_PROTO_BCDC 31 + bool 32 + 33 + config BRCMFMAC_PROTO_MSGBUF 34 + bool 35 + 30 36 config BRCMFMAC_SDIO 31 37 bool "SDIO bus interface support for FullMAC driver" 32 38 depends on (MMC = y || MMC = BRCMFMAC) 33 39 depends on BRCMFMAC 40 + select BRCMFMAC_PROTO_BCDC 34 41 select FW_LOADER 35 42 default y 36 43 ---help--- ··· 49 42 bool "USB bus interface support for FullMAC driver" 50 43 depends on (USB = y || USB = BRCMFMAC) 51 44 depends on BRCMFMAC 45 + select BRCMFMAC_PROTO_BCDC 52 46 select FW_LOADER 53 47 ---help--- 54 48 This option enables the USB bus interface support for Broadcom ··· 60 52 bool "PCIE bus interface support for FullMAC driver" 61 53 depends on BRCMFMAC 62 54 depends on PCI 55 + depends on HAS_DMA 56 + select BRCMFMAC_PROTO_MSGBUF 63 57 select FW_LOADER 64 58 ---help--- 65 59 This option enables the PCIE bus interface support for Broadcom
+6 -4
drivers/net/wireless/brcm80211/brcmfmac/Makefile
··· 30 30 fwsignal.o \ 31 31 p2p.o \ 32 32 proto.o \ 33 - bcdc.o \ 34 - commonring.o \ 35 - flowring.o \ 36 - msgbuf.o \ 37 33 dhd_common.o \ 38 34 dhd_linux.o \ 39 35 firmware.o \ 40 36 feature.o \ 41 37 btcoex.o \ 42 38 vendor.o 39 + brcmfmac-$(CONFIG_BRCMFMAC_PROTO_BCDC) += \ 40 + bcdc.o 41 + brcmfmac-$(CONFIG_BRCMFMAC_PROTO_MSGBUF) += \ 42 + commonring.o \ 43 + flowring.o \ 44 + msgbuf.o 43 45 brcmfmac-$(CONFIG_BRCMFMAC_SDIO) += \ 44 46 dhd_sdio.o \ 45 47 bcmsdh.o
+5 -2
drivers/net/wireless/brcm80211/brcmfmac/bcdc.h
··· 16 16 #ifndef BRCMFMAC_BCDC_H 17 17 #define BRCMFMAC_BCDC_H 18 18 19 - 19 + #ifdef CONFIG_BRCMFMAC_PROTO_BCDC 20 20 int brcmf_proto_bcdc_attach(struct brcmf_pub *drvr); 21 21 void brcmf_proto_bcdc_detach(struct brcmf_pub *drvr); 22 - 22 + #else 23 + static inline int brcmf_proto_bcdc_attach(struct brcmf_pub *drvr) { return 0; } 24 + static inline void brcmf_proto_bcdc_detach(struct brcmf_pub *drvr) {} 25 + #endif 23 26 24 27 #endif /* BRCMFMAC_BCDC_H */
+9 -3
drivers/net/wireless/brcm80211/brcmfmac/fweh.c
··· 185 185 ifevent->action, ifevent->ifidx, ifevent->bssidx, 186 186 ifevent->flags, ifevent->role); 187 187 188 - if (ifevent->flags & BRCMF_E_IF_FLAG_NOIF) { 188 + /* The P2P Device interface event must not be ignored 189 + * contrary to what firmware tells us. The only way to 190 + * distinguish the P2P Device is by looking at the ifidx 191 + * and bssidx received. 192 + */ 193 + if (!(ifevent->ifidx == 0 && ifevent->bssidx == 1) && 194 + (ifevent->flags & BRCMF_E_IF_FLAG_NOIF)) { 189 195 brcmf_dbg(EVENT, "event can be ignored\n"); 190 196 return; 191 197 } ··· 216 210 return; 217 211 } 218 212 219 - if (ifevent->action == BRCMF_E_IF_CHANGE) 213 + if (ifp && ifevent->action == BRCMF_E_IF_CHANGE) 220 214 brcmf_fws_reset_interface(ifp); 221 215 222 216 err = brcmf_fweh_call_event_handler(ifp, emsg->event_code, emsg, data); 223 217 224 - if (ifevent->action == BRCMF_E_IF_DEL) { 218 + if (ifp && ifevent->action == BRCMF_E_IF_DEL) { 225 219 brcmf_fws_del_interface(ifp); 226 220 brcmf_del_if(drvr, ifevent->bssidx); 227 221 }
+2
drivers/net/wireless/brcm80211/brcmfmac/fweh.h
··· 172 172 #define BRCMF_E_IF_ROLE_STA 0 173 173 #define BRCMF_E_IF_ROLE_AP 1 174 174 #define BRCMF_E_IF_ROLE_WDS 2 175 + #define BRCMF_E_IF_ROLE_P2P_GO 3 176 + #define BRCMF_E_IF_ROLE_P2P_CLIENT 4 175 177 176 178 /** 177 179 * definitions for event packet validation.
+9 -2
drivers/net/wireless/brcm80211/brcmfmac/msgbuf.h
··· 15 15 #ifndef BRCMFMAC_MSGBUF_H 16 16 #define BRCMFMAC_MSGBUF_H 17 17 18 + #ifdef CONFIG_BRCMFMAC_PROTO_MSGBUF 18 19 19 20 #define BRCMF_H2D_MSGRING_CONTROL_SUBMIT_MAX_ITEM 20 20 21 #define BRCMF_H2D_MSGRING_RXPOST_SUBMIT_MAX_ITEM 256 ··· 33 32 34 33 35 34 int brcmf_proto_msgbuf_rx_trigger(struct device *dev); 35 + void brcmf_msgbuf_delete_flowring(struct brcmf_pub *drvr, u8 flowid); 36 36 int brcmf_proto_msgbuf_attach(struct brcmf_pub *drvr); 37 37 void brcmf_proto_msgbuf_detach(struct brcmf_pub *drvr); 38 - void brcmf_msgbuf_delete_flowring(struct brcmf_pub *drvr, u8 flowid); 39 - 38 + #else 39 + static inline int brcmf_proto_msgbuf_attach(struct brcmf_pub *drvr) 40 + { 41 + return 0; 42 + } 43 + static inline void brcmf_proto_msgbuf_detach(struct brcmf_pub *drvr) {} 44 + #endif 40 45 41 46 #endif /* BRCMFMAC_MSGBUF_H */
+7 -2
drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c
··· 497 497 static void 498 498 brcmf_cfg80211_update_proto_addr_mode(struct wireless_dev *wdev) 499 499 { 500 - struct net_device *ndev = wdev->netdev; 501 - struct brcmf_if *ifp = netdev_priv(ndev); 500 + struct brcmf_cfg80211_vif *vif; 501 + struct brcmf_if *ifp; 502 + 503 + vif = container_of(wdev, struct brcmf_cfg80211_vif, wdev); 504 + ifp = vif->ifp; 502 505 503 506 if ((wdev->iftype == NL80211_IFTYPE_ADHOC) || 504 507 (wdev->iftype == NL80211_IFTYPE_AP) || ··· 5152 5149 5153 5150 ch.band = BRCMU_CHAN_BAND_2G; 5154 5151 ch.bw = BRCMU_CHAN_BW_40; 5152 + ch.sb = BRCMU_CHAN_SB_NONE; 5155 5153 ch.chnum = 0; 5156 5154 cfg->d11inf.encchspec(&ch); 5157 5155 ··· 5186 5182 5187 5183 brcmf_update_bw40_channel_flag(&band->channels[j], &ch); 5188 5184 } 5185 + kfree(pbuf); 5189 5186 } 5190 5187 return err; 5191 5188 }
+1 -1
drivers/net/wireless/iwlwifi/dvm/power.c
··· 40 40 #include "commands.h" 41 41 #include "power.h" 42 42 43 - static bool force_cam; 43 + static bool force_cam = true; 44 44 module_param(force_cam, bool, 0644); 45 45 MODULE_PARM_DESC(force_cam, "force continuously aware mode (no power saving at all)"); 46 46
+16
drivers/net/wireless/iwlwifi/iwl-7000.c
··· 85 85 #define IWL7260_TX_POWER_VERSION 0xffff /* meaningless */ 86 86 #define IWL3160_NVM_VERSION 0x709 87 87 #define IWL3160_TX_POWER_VERSION 0xffff /* meaningless */ 88 + #define IWL3165_NVM_VERSION 0x709 89 + #define IWL3165_TX_POWER_VERSION 0xffff /* meaningless */ 88 90 #define IWL7265_NVM_VERSION 0x0a1d 89 91 #define IWL7265_TX_POWER_VERSION 0xffff /* meaningless */ 90 92 ··· 95 93 96 94 #define IWL3160_FW_PRE "iwlwifi-3160-" 97 95 #define IWL3160_MODULE_FIRMWARE(api) IWL3160_FW_PRE __stringify(api) ".ucode" 96 + 97 + #define IWL3165_FW_PRE "iwlwifi-3165-" 98 + #define IWL3165_MODULE_FIRMWARE(api) IWL3165_FW_PRE __stringify(api) ".ucode" 98 99 99 100 #define IWL7265_FW_PRE "iwlwifi-7265-" 100 101 #define IWL7265_MODULE_FIRMWARE(api) IWL7265_FW_PRE __stringify(api) ".ucode" ··· 220 215 {0}, 221 216 }; 222 217 218 + const struct iwl_cfg iwl3165_2ac_cfg = { 219 + .name = "Intel(R) Dual Band Wireless AC 3165", 220 + .fw_name_pre = IWL3165_FW_PRE, 221 + IWL_DEVICE_7000, 222 + .ht_params = &iwl7000_ht_params, 223 + .nvm_ver = IWL3165_NVM_VERSION, 224 + .nvm_calib_ver = IWL3165_TX_POWER_VERSION, 225 + .pwr_tx_backoffs = iwl7265_pwr_tx_backoffs, 226 + }; 227 + 223 228 const struct iwl_cfg iwl7265_2ac_cfg = { 224 229 .name = "Intel(R) Dual Band Wireless AC 7265", 225 230 .fw_name_pre = IWL7265_FW_PRE, ··· 262 247 263 248 MODULE_FIRMWARE(IWL7260_MODULE_FIRMWARE(IWL7260_UCODE_API_OK)); 264 249 MODULE_FIRMWARE(IWL3160_MODULE_FIRMWARE(IWL3160_UCODE_API_OK)); 250 + MODULE_FIRMWARE(IWL3165_MODULE_FIRMWARE(IWL3160_UCODE_API_OK)); 265 251 MODULE_FIRMWARE(IWL7265_MODULE_FIRMWARE(IWL7260_UCODE_API_OK));
+3
drivers/net/wireless/iwlwifi/iwl-config.h
··· 120 120 #define IWL_LONG_WD_TIMEOUT 10000 121 121 #define IWL_MAX_WD_TIMEOUT 120000 122 122 123 + #define IWL_DEFAULT_MAX_TX_POWER 22 124 + 123 125 /* Antenna presence definitions */ 124 126 #define ANT_NONE 0x0 125 127 #define ANT_A BIT(0) ··· 337 335 extern const struct iwl_cfg iwl3160_2ac_cfg; 338 336 extern const struct iwl_cfg iwl3160_2n_cfg; 339 337 extern const struct iwl_cfg iwl3160_n_cfg; 338 + extern const struct iwl_cfg iwl3165_2ac_cfg; 340 339 extern const struct iwl_cfg iwl7265_2ac_cfg; 341 340 extern const struct iwl_cfg iwl7265_2n_cfg; 342 341 extern const struct iwl_cfg iwl7265_n_cfg;
+1 -3
drivers/net/wireless/iwlwifi/iwl-nvm-parse.c
··· 148 148 #define LAST_2GHZ_HT_PLUS 9 149 149 #define LAST_5GHZ_HT 161 150 150 151 - #define DEFAULT_MAX_TX_POWER 16 152 - 153 151 /* rate data (static) */ 154 152 static struct ieee80211_rate iwl_cfg80211_rates[] = { 155 153 { .bitrate = 1 * 10, .hw_value = 0, .hw_value_short = 0, }, ··· 295 297 * Default value - highest tx power value. max_power 296 298 * is not used in mvm, and is used for backwards compatibility 297 299 */ 298 - channel->max_power = DEFAULT_MAX_TX_POWER; 300 + channel->max_power = IWL_DEFAULT_MAX_TX_POWER; 299 301 is_5ghz = channel->band == IEEE80211_BAND_5GHZ; 300 302 IWL_DEBUG_EEPROM(dev, 301 303 "Ch. %d [%sGHz] %s%s%s%s%s%s%s(0x%02x %ddBm): Ad-Hoc %ssupported\n",
+3 -6
drivers/net/wireless/iwlwifi/mvm/coex.c
··· 587 587 lockdep_assert_held(&mvm->mutex); 588 588 589 589 if (unlikely(mvm->bt_force_ant_mode != BT_FORCE_ANT_DIS)) { 590 - u32 mode; 591 - 592 590 switch (mvm->bt_force_ant_mode) { 593 591 case BT_FORCE_ANT_BT: 594 592 mode = BT_COEX_BT; ··· 756 758 struct iwl_bt_iterator_data *data = _data; 757 759 struct iwl_mvm *mvm = data->mvm; 758 760 struct ieee80211_chanctx_conf *chanctx_conf; 759 - enum ieee80211_smps_mode smps_mode; 761 + /* default smps_mode is AUTOMATIC - only used for client modes */ 762 + enum ieee80211_smps_mode smps_mode = IEEE80211_SMPS_AUTOMATIC; 760 763 u32 bt_activity_grading; 761 764 int ave_rssi; 762 765 ··· 765 766 766 767 switch (vif->type) { 767 768 case NL80211_IFTYPE_STATION: 768 - /* default smps_mode for BSS / P2P client is AUTOMATIC */ 769 - smps_mode = IEEE80211_SMPS_AUTOMATIC; 770 769 break; 771 770 case NL80211_IFTYPE_AP: 772 771 if (!mvmvif->ap_ibss_active) ··· 796 799 else if (bt_activity_grading >= BT_LOW_TRAFFIC) 797 800 smps_mode = IEEE80211_SMPS_DYNAMIC; 798 801 799 - /* relax SMPS contraints for next association */ 802 + /* relax SMPS constraints for next association */ 800 803 if (!vif->bss_conf.assoc) 801 804 smps_mode = IEEE80211_SMPS_AUTOMATIC; 802 805
+1 -2
drivers/net/wireless/iwlwifi/mvm/debugfs-vif.c
··· 76 76 77 77 switch (param) { 78 78 case MVM_DEBUGFS_PM_KEEP_ALIVE: { 79 - struct ieee80211_hw *hw = mvm->hw; 80 - int dtimper = hw->conf.ps_dtim_period ?: 1; 79 + int dtimper = vif->bss_conf.dtim_period ?: 1; 81 80 int dtimper_msec = dtimper * vif->bss_conf.beacon_int; 82 81 83 82 IWL_DEBUG_POWER(mvm, "debugfs: set keep_alive= %d sec\n", val);
+2 -2
drivers/net/wireless/iwlwifi/mvm/fw-api.h
··· 1603 1603 1604 1604 /** 1605 1605 * Smart Fifo configuration command. 1606 - * @state: smart fifo state, types listed in iwl_sf_sate. 1606 + * @state: smart fifo state, types listed in enum %iwl_sf_sate. 1607 1607 * @watermark: Minimum allowed availabe free space in RXF for transient state. 1608 1608 * @long_delay_timeouts: aging and idle timer values for each scenario 1609 1609 * in long delay state. 1610 1610 * @full_on_timeouts: timer values for each scenario in full on state. 1611 1611 */ 1612 1612 struct iwl_sf_cfg_cmd { 1613 - enum iwl_sf_state state; 1613 + __le32 state; 1614 1614 __le32 watermark[SF_TRANSIENT_STATES_NUMBER]; 1615 1615 __le32 long_delay_timeouts[SF_NUM_SCENARIO][SF_NUM_TIMEOUT_TYPES]; 1616 1616 __le32 full_on_timeouts[SF_NUM_SCENARIO][SF_NUM_TIMEOUT_TYPES];
+5 -5
drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c
··· 727 727 !force_assoc_off) { 728 728 u32 dtim_offs; 729 729 730 - /* Allow beacons to pass through as long as we are not 731 - * associated, or we do not have dtim period information. 732 - */ 733 - cmd.filter_flags |= cpu_to_le32(MAC_FILTER_IN_BEACON); 734 - 735 730 /* 736 731 * The DTIM count counts down, so when it is N that means N 737 732 * more beacon intervals happen until the DTIM TBTT. Therefore ··· 760 765 ctxt_sta->is_assoc = cpu_to_le32(1); 761 766 } else { 762 767 ctxt_sta->is_assoc = cpu_to_le32(0); 768 + 769 + /* Allow beacons to pass through as long as we are not 770 + * associated, or we do not have dtim period information. 771 + */ 772 + cmd.filter_flags |= cpu_to_le32(MAC_FILTER_IN_BEACON); 763 773 } 764 774 765 775 ctxt_sta->bi = cpu_to_le32(vif->bss_conf.beacon_int);
+14 -11
drivers/net/wireless/iwlwifi/mvm/mac80211.c
··· 398 398 else 399 399 hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT; 400 400 401 - /* TODO: enable that only for firmwares that don't crash */ 402 - /* hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_SCHED_SCAN; */ 403 - hw->wiphy->max_sched_scan_ssids = PROBE_OPTION_MAX; 404 - hw->wiphy->max_match_sets = IWL_SCAN_MAX_PROFILES; 405 - /* we create the 802.11 header and zero length SSID IE. */ 406 - hw->wiphy->max_sched_scan_ie_len = SCAN_OFFLOAD_PROBE_REQ_SIZE - 24 - 2; 401 + if (IWL_UCODE_API(mvm->fw->ucode_ver) >= 10) { 402 + hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_SCHED_SCAN; 403 + hw->wiphy->max_sched_scan_ssids = PROBE_OPTION_MAX; 404 + hw->wiphy->max_match_sets = IWL_SCAN_MAX_PROFILES; 405 + /* we create the 802.11 header and zero length SSID IE. */ 406 + hw->wiphy->max_sched_scan_ie_len = 407 + SCAN_OFFLOAD_PROBE_REQ_SIZE - 24 - 2; 408 + } 407 409 408 410 hw->wiphy->features |= NL80211_FEATURE_P2P_GO_CTWIN | 409 411 NL80211_FEATURE_LOW_PRIORITY_SCAN | ··· 1546 1544 */ 1547 1545 iwl_mvm_remove_time_event(mvm, mvmvif, 1548 1546 &mvmvif->time_event_data); 1549 - } else if (changes & (BSS_CHANGED_PS | BSS_CHANGED_P2P_PS | 1550 - BSS_CHANGED_QOS)) { 1551 - ret = iwl_mvm_power_update_mac(mvm); 1552 - if (ret) 1553 - IWL_ERR(mvm, "failed to update power mode\n"); 1554 1547 } 1555 1548 1556 1549 if (changes & BSS_CHANGED_BEACON_INFO) { 1557 1550 iwl_mvm_sf_update(mvm, vif, false); 1558 1551 WARN_ON(iwl_mvm_enable_beacon_filter(mvm, vif, 0)); 1552 + } 1553 + 1554 + if (changes & (BSS_CHANGED_PS | BSS_CHANGED_P2P_PS | BSS_CHANGED_QOS)) { 1555 + ret = iwl_mvm_power_update_mac(mvm); 1556 + if (ret) 1557 + IWL_ERR(mvm, "failed to update power mode\n"); 1559 1558 } 1560 1559 1561 1560 if (changes & BSS_CHANGED_TXPOWER) {
+2 -3
drivers/net/wireless/iwlwifi/mvm/power.c
··· 290 290 struct ieee80211_vif *vif, 291 291 struct iwl_mac_power_cmd *cmd) 292 292 { 293 - struct ieee80211_hw *hw = mvm->hw; 294 293 struct ieee80211_chanctx_conf *chanctx_conf; 295 294 struct ieee80211_channel *chan; 296 295 int dtimper, dtimper_msec; ··· 300 301 301 302 cmd->id_and_color = cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id, 302 303 mvmvif->color)); 303 - dtimper = hw->conf.ps_dtim_period ?: 1; 304 + dtimper = vif->bss_conf.dtim_period; 304 305 305 306 /* 306 307 * Regardless of power management state the driver must set ··· 962 963 iwl_mvm_power_build_cmd(mvm, vif, &cmd); 963 964 if (enable) { 964 965 /* configure skip over dtim up to 300 msec */ 965 - int dtimper = mvm->hw->conf.ps_dtim_period ?: 1; 966 + int dtimper = vif->bss_conf.dtim_period ?: 1; 966 967 int dtimper_msec = dtimper * vif->bss_conf.beacon_int; 967 968 968 969 if (WARN_ON(!dtimper_msec))
+3 -3
drivers/net/wireless/iwlwifi/mvm/rx.c
··· 151 151 le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_ENERGY_ANT_ABC_IDX]); 152 152 energy_a = (val & IWL_RX_INFO_ENERGY_ANT_A_MSK) >> 153 153 IWL_RX_INFO_ENERGY_ANT_A_POS; 154 - energy_a = energy_a ? -energy_a : -256; 154 + energy_a = energy_a ? -energy_a : S8_MIN; 155 155 energy_b = (val & IWL_RX_INFO_ENERGY_ANT_B_MSK) >> 156 156 IWL_RX_INFO_ENERGY_ANT_B_POS; 157 - energy_b = energy_b ? -energy_b : -256; 157 + energy_b = energy_b ? -energy_b : S8_MIN; 158 158 energy_c = (val & IWL_RX_INFO_ENERGY_ANT_C_MSK) >> 159 159 IWL_RX_INFO_ENERGY_ANT_C_POS; 160 - energy_c = energy_c ? -energy_c : -256; 160 + energy_c = energy_c ? -energy_c : S8_MIN; 161 161 max_energy = max(energy_a, energy_b); 162 162 max_energy = max(max_energy, energy_c); 163 163
+1 -1
drivers/net/wireless/iwlwifi/mvm/sf.c
··· 174 174 enum iwl_sf_state new_state) 175 175 { 176 176 struct iwl_sf_cfg_cmd sf_cmd = { 177 - .state = new_state, 177 + .state = cpu_to_le32(new_state), 178 178 }; 179 179 struct ieee80211_sta *sta; 180 180 int ret = 0;
+6 -2
drivers/net/wireless/iwlwifi/mvm/tx.c
··· 170 170 171 171 /* 172 172 * for data packets, rate info comes from the table inside the fw. This 173 - * table is controlled by LINK_QUALITY commands 173 + * table is controlled by LINK_QUALITY commands. Exclude ctrl port 174 + * frames like EAPOLs which should be treated as mgmt frames. This 175 + * avoids them being sent initially in high rates which increases the 176 + * chances for completion of the 4-Way handshake. 174 177 */ 175 178 176 - if (ieee80211_is_data(fc) && sta) { 179 + if (ieee80211_is_data(fc) && sta && 180 + !(info->control.flags & IEEE80211_TX_CTRL_PORT_CTRL_PROTO)) { 177 181 tx_cmd->initial_rate_index = 0; 178 182 tx_cmd->tx_flags |= cpu_to_le32(TX_CMD_FLG_STA_RATE); 179 183 return;
+7
drivers/net/wireless/iwlwifi/pcie/drv.c
··· 354 354 {IWL_PCI_DEVICE(0x08B3, 0x8060, iwl3160_2n_cfg)}, 355 355 {IWL_PCI_DEVICE(0x08B3, 0x8062, iwl3160_n_cfg)}, 356 356 {IWL_PCI_DEVICE(0x08B4, 0x8270, iwl3160_2ac_cfg)}, 357 + {IWL_PCI_DEVICE(0x08B4, 0x8370, iwl3160_2ac_cfg)}, 358 + {IWL_PCI_DEVICE(0x08B4, 0x8272, iwl3160_2ac_cfg)}, 357 359 {IWL_PCI_DEVICE(0x08B3, 0x8470, iwl3160_2ac_cfg)}, 358 360 {IWL_PCI_DEVICE(0x08B3, 0x8570, iwl3160_2ac_cfg)}, 359 361 {IWL_PCI_DEVICE(0x08B3, 0x1070, iwl3160_2ac_cfg)}, 360 362 {IWL_PCI_DEVICE(0x08B3, 0x1170, iwl3160_2ac_cfg)}, 363 + 364 + /* 3165 Series */ 365 + {IWL_PCI_DEVICE(0x3165, 0x4010, iwl3165_2ac_cfg)}, 366 + {IWL_PCI_DEVICE(0x3165, 0x4210, iwl3165_2ac_cfg)}, 361 367 362 368 /* 7265 Series */ 363 369 {IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)}, ··· 386 380 {IWL_PCI_DEVICE(0x095B, 0x5202, iwl7265_n_cfg)}, 387 381 {IWL_PCI_DEVICE(0x095A, 0x9010, iwl7265_2ac_cfg)}, 388 382 {IWL_PCI_DEVICE(0x095A, 0x9012, iwl7265_2ac_cfg)}, 383 + {IWL_PCI_DEVICE(0x095A, 0x900A, iwl7265_2ac_cfg)}, 389 384 {IWL_PCI_DEVICE(0x095A, 0x9110, iwl7265_2ac_cfg)}, 390 385 {IWL_PCI_DEVICE(0x095A, 0x9112, iwl7265_2ac_cfg)}, 391 386 {IWL_PCI_DEVICE(0x095A, 0x9210, iwl7265_2ac_cfg)},
+15 -2
drivers/ntb/ntb_transport.c
··· 510 510 511 511 WARN_ON(nt->mw[mw_num].virt_addr == NULL); 512 512 513 - if (nt->max_qps % mw_max && mw_num < nt->max_qps % mw_max) 513 + if (nt->max_qps % mw_max && mw_num + 1 < nt->max_qps / mw_max) 514 514 num_qps_mw = nt->max_qps / mw_max + 1; 515 515 else 516 516 num_qps_mw = nt->max_qps / mw_max; ··· 573 573 mw->size = 0; 574 574 dev_err(&pdev->dev, "Unable to allocate MW buffer of size %d\n", 575 575 (int) mw->size); 576 + return -ENOMEM; 577 + } 578 + 579 + /* 580 + * we must ensure that the memory address allocated is BAR size 581 + * aligned in order for the XLAT register to take the value. This 582 + * is a requirement of the hardware. It is recommended to setup CMA 583 + * for BAR sizes equal or greater than 4MB. 584 + */ 585 + if (!IS_ALIGNED(mw->dma_addr, mw->size)) { 586 + dev_err(&pdev->dev, "DMA memory %pad not aligned to BAR size\n", 587 + &mw->dma_addr); 588 + ntb_free_mw(nt, num_mw); 576 589 return -ENOMEM; 577 590 } 578 591 ··· 869 856 qp->client_ready = NTB_LINK_DOWN; 870 857 qp->event_handler = NULL; 871 858 872 - if (nt->max_qps % mw_max && mw_num < nt->max_qps % mw_max) 859 + if (nt->max_qps % mw_max && mw_num + 1 < nt->max_qps / mw_max) 873 860 num_qps_mw = nt->max_qps / mw_max + 1; 874 861 else 875 862 num_qps_mw = nt->max_qps / mw_max;
+1 -1
drivers/parisc/dino.c
··· 913 913 printk("%s version %s found at 0x%lx\n", name, version, hpa); 914 914 915 915 if (!request_mem_region(hpa, PAGE_SIZE, name)) { 916 - printk(KERN_ERR "DINO: Hey! Someone took my MMIO space (0x%ld)!\n", 916 + printk(KERN_ERR "DINO: Hey! Someone took my MMIO space (0x%lx)!\n", 917 917 hpa); 918 918 return 1; 919 919 }
+38
drivers/pci/host/pci-imx6.c
··· 49 49 50 50 /* PCIe Port Logic registers (memory-mapped) */ 51 51 #define PL_OFFSET 0x700 52 + #define PCIE_PL_PFLR (PL_OFFSET + 0x08) 53 + #define PCIE_PL_PFLR_LINK_STATE_MASK (0x3f << 16) 54 + #define PCIE_PL_PFLR_FORCE_LINK (1 << 15) 52 55 #define PCIE_PHY_DEBUG_R0 (PL_OFFSET + 0x28) 53 56 #define PCIE_PHY_DEBUG_R1 (PL_OFFSET + 0x2c) 54 57 #define PCIE_PHY_DEBUG_R1_XMLH_LINK_IN_TRAINING (1 << 29) ··· 217 214 static int imx6_pcie_assert_core_reset(struct pcie_port *pp) 218 215 { 219 216 struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp); 217 + u32 val, gpr1, gpr12; 218 + 219 + /* 220 + * If the bootloader already enabled the link we need some special 221 + * handling to get the core back into a state where it is safe to 222 + * touch it for configuration. As there is no dedicated reset signal 223 + * wired up for MX6QDL, we need to manually force LTSSM into "detect" 224 + * state before completely disabling LTSSM, which is a prerequisite 225 + * for core configuration. 226 + * 227 + * If both LTSSM_ENABLE and REF_SSP_ENABLE are active we have a strong 228 + * indication that the bootloader activated the link. 
229 + */ 230 + regmap_read(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, &gpr1); 231 + regmap_read(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, &gpr12); 232 + 233 + if ((gpr1 & IMX6Q_GPR1_PCIE_REF_CLK_EN) && 234 + (gpr12 & IMX6Q_GPR12_PCIE_CTL_2)) { 235 + val = readl(pp->dbi_base + PCIE_PL_PFLR); 236 + val &= ~PCIE_PL_PFLR_LINK_STATE_MASK; 237 + val |= PCIE_PL_PFLR_FORCE_LINK; 238 + writel(val, pp->dbi_base + PCIE_PL_PFLR); 239 + 240 + regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, 241 + IMX6Q_GPR12_PCIE_CTL_2, 0 << 10); 242 + } 220 243 221 244 regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, 222 245 IMX6Q_GPR1_PCIE_TEST_PD, 1 << 18); ··· 618 589 return 0; 619 590 } 620 591 592 + static void imx6_pcie_shutdown(struct platform_device *pdev) 593 + { 594 + struct imx6_pcie *imx6_pcie = platform_get_drvdata(pdev); 595 + 596 + /* bring down link, so bootloader gets clean state in case of reboot */ 597 + imx6_pcie_assert_core_reset(&imx6_pcie->pp); 598 + } 599 + 621 600 static const struct of_device_id imx6_pcie_of_match[] = { 622 601 { .compatible = "fsl,imx6q-pcie", }, 623 602 {}, ··· 638 601 .owner = THIS_MODULE, 639 602 .of_match_table = imx6_pcie_of_match, 640 603 }, 604 + .shutdown = imx6_pcie_shutdown, 641 605 }; 642 606 643 607 /* Freescale PCIe driver does not allow module unload */
+6 -10
drivers/pci/hotplug/acpiphp_glue.c
··· 560 560 slot->flags &= (~SLOT_ENABLED); 561 561 } 562 562 563 - static bool acpiphp_no_hotplug(struct acpi_device *adev) 564 - { 565 - return adev && adev->flags.no_hotplug; 566 - } 567 - 568 563 static bool slot_no_hotplug(struct acpiphp_slot *slot) 569 564 { 570 - struct acpiphp_func *func; 565 + struct pci_bus *bus = slot->bus; 566 + struct pci_dev *dev; 571 567 572 - list_for_each_entry(func, &slot->funcs, sibling) 573 - if (acpiphp_no_hotplug(func_to_acpi_device(func))) 568 + list_for_each_entry(dev, &bus->devices, bus_list) { 569 + if (PCI_SLOT(dev->devfn) == slot->device && dev->ignore_hotplug) 574 570 return true; 575 - 571 + } 576 572 return false; 577 573 } 578 574 ··· 641 645 642 646 status = acpi_evaluate_integer(adev->handle, "_STA", NULL, &sta); 643 647 alive = (ACPI_SUCCESS(status) && device_status_valid(sta)) 644 - || acpiphp_no_hotplug(adev); 648 + || dev->ignore_hotplug; 645 649 } 646 650 if (!alive) 647 651 alive = pci_device_is_present(dev);
+12
drivers/pci/hotplug/pciehp_hpc.c
··· 506 506 { 507 507 struct controller *ctrl = (struct controller *)dev_id; 508 508 struct pci_dev *pdev = ctrl_dev(ctrl); 509 + struct pci_bus *subordinate = pdev->subordinate; 510 + struct pci_dev *dev; 509 511 struct slot *slot = ctrl->slot; 510 512 u16 detected, intr_loc; 511 513 ··· 539 537 ctrl->cmd_busy = 0; 540 538 smp_mb(); 541 539 wake_up(&ctrl->queue); 540 + } 541 + 542 + if (subordinate) { 543 + list_for_each_entry(dev, &subordinate->devices, bus_list) { 544 + if (dev->ignore_hotplug) { 545 + ctrl_dbg(ctrl, "ignoring hotplug event %#06x (%s requested no hotplug)\n", 546 + intr_loc, pci_name(dev)); 547 + return IRQ_HANDLED; 548 + } 549 + } 542 550 } 543 551 544 552 if (!(intr_loc & ~PCI_EXP_SLTSTA_CC))
+1 -5
drivers/pci/hotplug/pcihp_slot.c
··· 46 46 */ 47 47 if (pci_is_pcie(dev)) 48 48 return; 49 - dev_info(&dev->dev, "using default PCI settings\n"); 50 49 hpp = &pci_default_type0; 51 50 } 52 51 ··· 152 153 { 153 154 struct pci_dev *cdev; 154 155 struct hotplug_params hpp; 155 - int ret; 156 156 157 157 if (!(dev->hdr_type == PCI_HEADER_TYPE_NORMAL || 158 158 (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE && ··· 161 163 pcie_bus_configure_settings(dev->bus); 162 164 163 165 memset(&hpp, 0, sizeof(hpp)); 164 - ret = pci_get_hp_params(dev, &hpp); 165 - if (ret) 166 - dev_warn(&dev->dev, "no hotplug settings from platform\n"); 166 + pci_get_hp_params(dev, &hpp); 167 167 168 168 program_hpp_type2(dev, hpp.t2); 169 169 program_hpp_type1(dev, hpp.t1);
+3 -1
drivers/phy/Kconfig
··· 41 41 config PHY_MIPHY365X 42 42 tristate "STMicroelectronics MIPHY365X PHY driver for STiH41x series" 43 43 depends on ARCH_STI 44 - depends on GENERIC_PHY 45 44 depends on HAS_IOMEM 46 45 depends on OF 46 + select GENERIC_PHY 47 47 help 48 48 Enable this to support the miphy transceiver (for SATA/PCIE) 49 49 that is part of STMicroelectronics STiH41x SoC series. ··· 214 214 config PHY_ST_SPEAR1310_MIPHY 215 215 tristate "ST SPEAR1310-MIPHY driver" 216 216 select GENERIC_PHY 217 + depends on MACH_SPEAR1310 || COMPILE_TEST 217 218 help 218 219 Support for ST SPEAr1310 MIPHY which can be used for PCIe and SATA. 219 220 220 221 config PHY_ST_SPEAR1340_MIPHY 221 222 tristate "ST SPEAR1340-MIPHY driver" 222 223 select GENERIC_PHY 224 + depends on MACH_SPEAR1340 || COMPILE_TEST 223 225 help 224 226 Support for ST SPEAr1340 MIPHY which can be used for PCIe and SATA. 225 227
+1
drivers/phy/phy-exynos5-usbdrd.c
··· 542 542 }, 543 543 { }, 544 544 }; 545 + MODULE_DEVICE_TABLE(of, exynos5_usbdrd_phy_of_match); 545 546 546 547 static int exynos5_usbdrd_phy_probe(struct platform_device *pdev) 547 548 {
+1
drivers/phy/phy-miphy365x.c
··· 163 163 }; 164 164 165 165 static u8 rx_tx_spd[] = { 166 + 0, /* GEN0 doesn't exist. */ 166 167 TX_SPDSEL_GEN1_VAL | RX_SPDSEL_GEN1_VAL, 167 168 TX_SPDSEL_GEN2_VAL | RX_SPDSEL_GEN2_VAL, 168 169 TX_SPDSEL_GEN3_VAL | RX_SPDSEL_GEN3_VAL
+76 -51
drivers/phy/phy-twl4030-usb.c
··· 34 34 #include <linux/delay.h> 35 35 #include <linux/usb/otg.h> 36 36 #include <linux/phy/phy.h> 37 + #include <linux/pm_runtime.h> 37 38 #include <linux/usb/musb-omap.h> 38 39 #include <linux/usb/ulpi.h> 39 40 #include <linux/i2c/twl.h> ··· 423 422 } 424 423 } 425 424 426 - static int twl4030_phy_power_off(struct phy *phy) 425 + static int twl4030_usb_runtime_suspend(struct device *dev) 427 426 { 428 - struct twl4030_usb *twl = phy_get_drvdata(phy); 427 + struct twl4030_usb *twl = dev_get_drvdata(dev); 429 428 429 + dev_dbg(twl->dev, "%s\n", __func__); 430 430 if (twl->asleep) 431 431 return 0; 432 432 433 433 twl4030_phy_power(twl, 0); 434 434 twl->asleep = 1; 435 - dev_dbg(twl->dev, "%s\n", __func__); 435 + 436 436 return 0; 437 437 } 438 438 439 - static void __twl4030_phy_power_on(struct twl4030_usb *twl) 439 + static int twl4030_usb_runtime_resume(struct device *dev) 440 440 { 441 + struct twl4030_usb *twl = dev_get_drvdata(dev); 442 + 443 + dev_dbg(twl->dev, "%s\n", __func__); 444 + if (!twl->asleep) 445 + return 0; 446 + 441 447 twl4030_phy_power(twl, 1); 442 - twl4030_i2c_access(twl, 1); 443 - twl4030_usb_set_mode(twl, twl->usb_mode); 444 - if (twl->usb_mode == T2_USB_MODE_ULPI) 445 - twl4030_i2c_access(twl, 0); 448 + twl->asleep = 0; 449 + 450 + return 0; 451 + } 452 + 453 + static int twl4030_phy_power_off(struct phy *phy) 454 + { 455 + struct twl4030_usb *twl = phy_get_drvdata(phy); 456 + 457 + dev_dbg(twl->dev, "%s\n", __func__); 458 + pm_runtime_mark_last_busy(twl->dev); 459 + pm_runtime_put_autosuspend(twl->dev); 460 + 461 + return 0; 446 462 } 447 463 448 464 static int twl4030_phy_power_on(struct phy *phy) 449 465 { 450 466 struct twl4030_usb *twl = phy_get_drvdata(phy); 451 467 452 - if (!twl->asleep) 453 - return 0; 454 - __twl4030_phy_power_on(twl); 455 - twl->asleep = 0; 456 468 dev_dbg(twl->dev, "%s\n", __func__); 469 + pm_runtime_get_sync(twl->dev); 470 + twl4030_i2c_access(twl, 1); 471 + twl4030_usb_set_mode(twl, twl->usb_mode); 472 + if 
(twl->usb_mode == T2_USB_MODE_ULPI) 473 + twl4030_i2c_access(twl, 0); 457 474 458 475 /* 459 476 * XXX When VBUS gets driven after musb goes to A mode, ··· 577 558 * USB_LINK_VBUS state. musb_hdrc won't care until it 578 559 * starts to handle softconnect right. 579 560 */ 580 - omap_musb_mailbox(status); 581 - } 582 - sysfs_notify(&twl->dev->kobj, NULL, "vbus"); 583 - 584 - return IRQ_HANDLED; 585 - } 586 - 587 - static void twl4030_id_workaround_work(struct work_struct *work) 588 - { 589 - struct twl4030_usb *twl = container_of(work, struct twl4030_usb, 590 - id_workaround_work.work); 591 - enum omap_musb_vbus_id_status status; 592 - bool status_changed = false; 593 - 594 - status = twl4030_usb_linkstat(twl); 595 - 596 - spin_lock_irq(&twl->lock); 597 - if (status >= 0 && status != twl->linkstat) { 598 - twl->linkstat = status; 599 - status_changed = true; 600 - } 601 - spin_unlock_irq(&twl->lock); 602 - 603 - if (status_changed) { 604 - dev_dbg(twl->dev, "handle missing status change to %d\n", 605 - status); 561 + if ((status == OMAP_MUSB_VBUS_VALID) || 562 + (status == OMAP_MUSB_ID_GROUND)) { 563 + if (twl->asleep) 564 + pm_runtime_get_sync(twl->dev); 565 + } else { 566 + if (!twl->asleep) { 567 + pm_runtime_mark_last_busy(twl->dev); 568 + pm_runtime_put_autosuspend(twl->dev); 569 + } 570 + } 606 571 omap_musb_mailbox(status); 607 572 } 608 573 ··· 595 592 cancel_delayed_work(&twl->id_workaround_work); 596 593 schedule_delayed_work(&twl->id_workaround_work, HZ); 597 594 } 595 + 596 + if (irq) 597 + sysfs_notify(&twl->dev->kobj, NULL, "vbus"); 598 + 599 + return IRQ_HANDLED; 600 + } 601 + 602 + static void twl4030_id_workaround_work(struct work_struct *work) 603 + { 604 + struct twl4030_usb *twl = container_of(work, struct twl4030_usb, 605 + id_workaround_work.work); 606 + 607 + twl4030_usb_irq(0, twl); 598 608 } 599 609 600 610 static int twl4030_phy_init(struct phy *phy) ··· 615 599 struct twl4030_usb *twl = phy_get_drvdata(phy); 616 600 enum 
omap_musb_vbus_id_status status; 617 601 618 - /* 619 - * Start in sleep state, we'll get called through set_suspend() 620 - * callback when musb is runtime resumed and it's time to start. 621 - */ 622 - __twl4030_phy_power(twl, 0); 623 - twl->asleep = 1; 624 - 602 + pm_runtime_get_sync(twl->dev); 625 603 status = twl4030_usb_linkstat(twl); 626 604 twl->linkstat = status; 627 605 628 - if (status == OMAP_MUSB_ID_GROUND || status == OMAP_MUSB_VBUS_VALID) { 606 + if (status == OMAP_MUSB_ID_GROUND || status == OMAP_MUSB_VBUS_VALID) 629 607 omap_musb_mailbox(twl->linkstat); 630 - twl4030_phy_power_on(phy); 631 - } 632 608 633 609 sysfs_notify(&twl->dev->kobj, NULL, "vbus"); 610 + pm_runtime_mark_last_busy(twl->dev); 611 + pm_runtime_put_autosuspend(twl->dev); 612 + 634 613 return 0; 635 614 } 636 615 ··· 659 648 .power_on = twl4030_phy_power_on, 660 649 .power_off = twl4030_phy_power_off, 661 650 .owner = THIS_MODULE, 651 + }; 652 + 653 + static const struct dev_pm_ops twl4030_usb_pm_ops = { 654 + SET_RUNTIME_PM_OPS(twl4030_usb_runtime_suspend, 655 + twl4030_usb_runtime_resume, NULL) 662 656 }; 663 657 664 658 static int twl4030_usb_probe(struct platform_device *pdev) ··· 742 726 743 727 ATOMIC_INIT_NOTIFIER_HEAD(&twl->phy.notifier); 744 728 729 + pm_runtime_use_autosuspend(&pdev->dev); 730 + pm_runtime_set_autosuspend_delay(&pdev->dev, 2000); 731 + pm_runtime_enable(&pdev->dev); 732 + pm_runtime_get_sync(&pdev->dev); 733 + 745 734 /* Our job is to use irqs and status from the power module 746 735 * to keep the transceiver disabled when nothing's connected. 
747 736 * ··· 765 744 return status; 766 745 } 767 746 747 + pm_runtime_mark_last_busy(&pdev->dev); 748 + pm_runtime_put_autosuspend(twl->dev); 749 + 768 750 dev_info(&pdev->dev, "Initialized TWL4030 USB module\n"); 769 751 return 0; 770 752 } ··· 777 753 struct twl4030_usb *twl = platform_get_drvdata(pdev); 778 754 int val; 779 755 756 + pm_runtime_get_sync(twl->dev); 780 757 cancel_delayed_work(&twl->id_workaround_work); 781 758 device_remove_file(twl->dev, &dev_attr_vbus); 782 759 ··· 797 772 798 773 /* disable complete OTG block */ 799 774 twl4030_usb_clear_bits(twl, POWER_CTRL, POWER_CTRL_OTG_ENAB); 800 - 801 - if (!twl->asleep) 802 - twl4030_phy_power(twl, 0); 775 + pm_runtime_mark_last_busy(twl->dev); 776 + pm_runtime_put(twl->dev); 803 777 804 778 return 0; 805 779 } ··· 816 792 .remove = twl4030_usb_remove, 817 793 .driver = { 818 794 .name = "twl4030_usb", 795 + .pm = &twl4030_usb_pm_ops, 819 796 .owner = THIS_MODULE, 820 797 .of_match_table = of_match_ptr(twl4030_usb_id_table), 821 798 },
+1
drivers/pinctrl/pinctrl-baytrail.c
··· 461 461 .irq_mask = byt_irq_mask, 462 462 .irq_unmask = byt_irq_unmask, 463 463 .irq_set_type = byt_irq_type, 464 + .flags = IRQCHIP_SKIP_SET_WAKE, 464 465 }; 465 466 466 467 static void byt_gpio_irq_init_hw(struct byt_gpio *vg)
+1 -1
drivers/regulator/88pm8607.c
··· 319 319 struct regulator_config *config) 320 320 { 321 321 struct device_node *nproot, *np; 322 - nproot = of_node_get(pdev->dev.parent->of_node); 322 + nproot = pdev->dev.parent->of_node; 323 323 if (!nproot) 324 324 return -ENODEV; 325 325 nproot = of_get_child_by_name(nproot, "regulators");
+2 -2
drivers/regulator/da9052-regulator.c
··· 422 422 config.init_data = pdata->regulators[pdev->id]; 423 423 } else { 424 424 #ifdef CONFIG_OF 425 - struct device_node *nproot, *np; 425 + struct device_node *nproot = da9052->dev->of_node; 426 + struct device_node *np; 426 427 427 - nproot = of_node_get(da9052->dev->of_node); 428 428 if (!nproot) 429 429 return -ENODEV; 430 430
+1 -1
drivers/regulator/max8907-regulator.c
··· 226 226 struct device_node *np, *regulators; 227 227 int ret; 228 228 229 - np = of_node_get(pdev->dev.parent->of_node); 229 + np = pdev->dev.parent->of_node; 230 230 if (!np) 231 231 return 0; 232 232
+1 -1
drivers/regulator/max8925-regulator.c
··· 250 250 struct device_node *nproot, *np; 251 251 int rcount; 252 252 253 - nproot = of_node_get(pdev->dev.parent->of_node); 253 + nproot = pdev->dev.parent->of_node; 254 254 if (!nproot) 255 255 return -ENODEV; 256 256 np = of_get_child_by_name(nproot, "regulators");
+1 -1
drivers/regulator/max8997.c
··· 917 917 struct max8997_regulator_data *rdata; 918 918 unsigned int i, dvs_voltage_nr = 1, ret; 919 919 920 - pmic_np = of_node_get(iodev->dev->of_node); 920 + pmic_np = iodev->dev->of_node; 921 921 if (!pmic_np) { 922 922 dev_err(&pdev->dev, "could not find pmic sub-node\n"); 923 923 return -ENODEV;
-1
drivers/regulator/palmas-regulator.c
··· 1427 1427 u32 prop; 1428 1428 int idx, ret; 1429 1429 1430 - node = of_node_get(node); 1431 1430 regulators = of_get_child_by_name(node, "regulators"); 1432 1431 if (!regulators) { 1433 1432 dev_info(dev, "regulator node not found\n");
+1 -1
drivers/regulator/tps65910-regulator.c
··· 1014 1014 if (!pmic_plat_data) 1015 1015 return NULL; 1016 1016 1017 - np = of_node_get(pdev->dev.parent->of_node); 1017 + np = pdev->dev.parent->of_node; 1018 1018 regulators = of_get_child_by_name(np, "regulators"); 1019 1019 if (!regulators) { 1020 1020 dev_err(&pdev->dev, "regulator node not found\n");
+1 -1
drivers/s390/block/dasd_devmap.c
··· 77 77 * strings when running as a module. 78 78 */ 79 79 static char *dasd[256]; 80 - module_param_array(dasd, charp, NULL, 0); 80 + module_param_array(dasd, charp, NULL, S_IRUGO); 81 81 82 82 /* 83 83 * Single spinlock to protect devmap and servermap structures and lists.
+10 -10
drivers/scsi/Kconfig
··· 43 43 config SCSI_NETLINK 44 44 bool 45 45 default n 46 - select NET 46 + depends on NET 47 47 48 48 config SCSI_PROC_FS 49 49 bool "legacy /proc/scsi/ support" ··· 257 257 258 258 config SCSI_FC_ATTRS 259 259 tristate "FiberChannel Transport Attributes" 260 - depends on SCSI 260 + depends on SCSI && NET 261 261 select SCSI_NETLINK 262 262 help 263 263 If you wish to export transport-specific information about ··· 585 585 586 586 config LIBFC 587 587 tristate "LibFC module" 588 - select SCSI_FC_ATTRS 588 + depends on SCSI_FC_ATTRS 589 589 select CRC32 590 590 ---help--- 591 591 Fibre Channel library module 592 592 593 593 config LIBFCOE 594 594 tristate "LibFCoE module" 595 - select LIBFC 595 + depends on LIBFC 596 596 ---help--- 597 597 Library for Fibre Channel over Ethernet module 598 598 599 599 config FCOE 600 600 tristate "FCoE module" 601 601 depends on PCI 602 - select LIBFCOE 602 + depends on LIBFCOE 603 603 ---help--- 604 604 Fibre Channel over Ethernet module 605 605 606 606 config FCOE_FNIC 607 607 tristate "Cisco FNIC Driver" 608 608 depends on PCI && X86 609 - select LIBFCOE 609 + depends on LIBFCOE 610 610 help 611 611 This is support for the Cisco PCI-Express FCoE HBA. 
612 612 ··· 816 816 config SCSI_IBMVFC 817 817 tristate "IBM Virtual FC support" 818 818 depends on PPC_PSERIES && SCSI 819 - select SCSI_FC_ATTRS 819 + depends on SCSI_FC_ATTRS 820 820 help 821 821 This is the IBM POWER Virtual FC Client 822 822 ··· 1266 1266 config SCSI_LPFC 1267 1267 tristate "Emulex LightPulse Fibre Channel Support" 1268 1268 depends on PCI && SCSI 1269 - select SCSI_FC_ATTRS 1269 + depends on SCSI_FC_ATTRS 1270 1270 select CRC_T10DIF 1271 1271 help 1272 1272 This lpfc driver supports the Emulex LightPulse ··· 1676 1676 config ZFCP 1677 1677 tristate "FCP host bus adapter driver for IBM eServer zSeries" 1678 1678 depends on S390 && QDIO && SCSI 1679 - select SCSI_FC_ATTRS 1679 + depends on SCSI_FC_ATTRS 1680 1680 help 1681 1681 If you want to access SCSI devices attached to your IBM eServer 1682 1682 zSeries by means of Fibre Channel interfaces say Y. ··· 1704 1704 config SCSI_BFA_FC 1705 1705 tristate "Brocade BFA Fibre Channel Support" 1706 1706 depends on PCI && SCSI 1707 - select SCSI_FC_ATTRS 1707 + depends on SCSI_FC_ATTRS 1708 1708 help 1709 1709 This bfa driver supports all Brocade PCIe FC/FCOE host adapters. 1710 1710
+3 -2
drivers/scsi/bnx2fc/Kconfig
··· 1 1 config SCSI_BNX2X_FCOE 2 2 tristate "QLogic NetXtreme II FCoE support" 3 3 depends on PCI 4 + depends on (IPV6 || IPV6=n) 5 + depends on LIBFC 6 + depends on LIBFCOE 4 7 select NETDEVICES 5 8 select ETHERNET 6 9 select NET_VENDOR_BROADCOM 7 - select LIBFC 8 - select LIBFCOE 9 10 select CNIC 10 11 ---help--- 11 12 This driver supports FCoE offload for the QLogic NetXtreme II
+1
drivers/scsi/bnx2i/Kconfig
··· 2 2 tristate "QLogic NetXtreme II iSCSI support" 3 3 depends on NET 4 4 depends on PCI 5 + depends on (IPV6 || IPV6=n) 5 6 select SCSI_ISCSI_ATTRS 6 7 select NETDEVICES 7 8 select ETHERNET
+1 -1
drivers/scsi/csiostor/Kconfig
··· 1 1 config SCSI_CHELSIO_FCOE 2 2 tristate "Chelsio Communications FCoE support" 3 3 depends on PCI && SCSI 4 - select SCSI_FC_ATTRS 4 + depends on SCSI_FC_ATTRS 5 5 select FW_LOADER 6 6 help 7 7 This driver supports FCoE Offload functionality over
+10
drivers/scsi/libiscsi.c
··· 717 717 return NULL; 718 718 } 719 719 720 + if (data_size > ISCSI_DEF_MAX_RECV_SEG_LEN) { 721 + iscsi_conn_printk(KERN_ERR, conn, "Invalid buffer len of %u for login task. Max len is %u\n", data_size, ISCSI_DEF_MAX_RECV_SEG_LEN); 722 + return NULL; 723 + } 724 + 720 725 task = conn->login_task; 721 726 } else { 722 727 if (session->state != ISCSI_STATE_LOGGED_IN) 723 728 return NULL; 729 + 730 + if (data_size != 0) { 731 + iscsi_conn_printk(KERN_ERR, conn, "Can not send data buffer of len %u for op 0x%x\n", data_size, opcode); 732 + return NULL; 733 + } 724 734 725 735 BUG_ON(conn->c_stage == ISCSI_CONN_INITIAL_STAGE); 726 736 BUG_ON(conn->c_stage == ISCSI_CONN_STOPPED);
+2 -2
drivers/scsi/qla2xxx/Kconfig
··· 1 1 config SCSI_QLA_FC 2 2 tristate "QLogic QLA2XXX Fibre Channel Support" 3 3 depends on PCI && SCSI 4 - select SCSI_FC_ATTRS 4 + depends on SCSI_FC_ATTRS 5 5 select FW_LOADER 6 6 ---help--- 7 7 This qla2xxx driver supports all QLogic Fibre Channel ··· 31 31 config TCM_QLA2XXX 32 32 tristate "TCM_QLA2XXX fabric module for Qlogic 2xxx series target mode HBAs" 33 33 depends on SCSI_QLA_FC && TARGET_CORE 34 - select LIBFC 34 + depends on LIBFC 35 35 select BTREE 36 36 default n 37 37 ---help---
+3 -2
drivers/scsi/scsi_lib.c
··· 733 733 } else { 734 734 unsigned long flags; 735 735 736 + if (bidi_bytes) 737 + scsi_release_bidi_buffers(cmd); 738 + 736 739 spin_lock_irqsave(q->queue_lock, flags); 737 740 blk_finish_request(req, error); 738 741 spin_unlock_irqrestore(q->queue_lock, flags); 739 742 740 - if (bidi_bytes) 741 - scsi_release_bidi_buffers(cmd); 742 743 scsi_release_buffers(cmd); 743 744 scsi_next_command(cmd); 744 745 }
+25 -14
drivers/spi/spi-davinci.c
··· 397 397 struct spi_master *master = spi->master; 398 398 struct device_node *np = spi->dev.of_node; 399 399 bool internal_cs = true; 400 - unsigned long flags = GPIOF_DIR_OUT; 401 400 402 401 dspi = spi_master_get_devdata(spi->master); 403 402 pdata = &dspi->pdata; 404 403 405 - flags |= (spi->mode & SPI_CS_HIGH) ? GPIOF_INIT_LOW : GPIOF_INIT_HIGH; 406 - 407 404 if (!(spi->mode & SPI_NO_CS)) { 408 405 if (np && (master->cs_gpios != NULL) && (spi->cs_gpio >= 0)) { 409 - retval = gpio_request_one(spi->cs_gpio, 410 - flags, dev_name(&spi->dev)); 406 + retval = gpio_direction_output( 407 + spi->cs_gpio, !(spi->mode & SPI_CS_HIGH)); 411 408 internal_cs = false; 412 409 } else if (pdata->chip_sel && 413 410 spi->chip_select < pdata->num_chipselect && 414 411 pdata->chip_sel[spi->chip_select] != SPI_INTERN_CS) { 415 412 spi->cs_gpio = pdata->chip_sel[spi->chip_select]; 416 - retval = gpio_request_one(spi->cs_gpio, 417 - flags, dev_name(&spi->dev)); 413 + retval = gpio_direction_output( 414 + spi->cs_gpio, !(spi->mode & SPI_CS_HIGH)); 418 415 internal_cs = false; 419 416 } 420 417 ··· 434 437 clear_io_bits(dspi->base + SPIGCR1, SPIGCR1_LOOPBACK_MASK); 435 438 436 439 return retval; 437 - } 438 - 439 - static void davinci_spi_cleanup(struct spi_device *spi) 440 - { 441 - if (spi->cs_gpio >= 0) 442 - gpio_free(spi->cs_gpio); 443 440 } 444 441 445 442 static int davinci_spi_check_error(struct davinci_spi *dspi, int int_status) ··· 947 956 master->num_chipselect = pdata->num_chipselect; 948 957 master->bits_per_word_mask = SPI_BPW_RANGE_MASK(2, 16); 949 958 master->setup = davinci_spi_setup; 950 - master->cleanup = davinci_spi_cleanup; 951 959 952 960 dspi->bitbang.chipselect = davinci_spi_chipselect; 953 961 dspi->bitbang.setup_transfer = davinci_spi_setup_transfer; ··· 956 966 dspi->bitbang.flags = SPI_NO_CS | SPI_LSB_FIRST | SPI_LOOP; 957 967 if (dspi->version == SPI_VERSION_2) 958 968 dspi->bitbang.flags |= SPI_READY; 969 + 970 + if (pdev->dev.of_node) { 971 + int i; 
972 + 973 + for (i = 0; i < pdata->num_chipselect; i++) { 974 + int cs_gpio = of_get_named_gpio(pdev->dev.of_node, 975 + "cs-gpios", i); 976 + 977 + if (cs_gpio == -EPROBE_DEFER) { 978 + ret = cs_gpio; 979 + goto free_clk; 980 + } 981 + 982 + if (gpio_is_valid(cs_gpio)) { 983 + ret = devm_gpio_request(&pdev->dev, cs_gpio, 984 + dev_name(&pdev->dev)); 985 + if (ret) 986 + goto free_clk; 987 + } 988 + } 989 + } 959 990 960 991 r = platform_get_resource(pdev, IORESOURCE_DMA, 0); 961 992 if (r)
+10 -2
drivers/spi/spi-dw.c
··· 547 547 /* Only alloc on first setup */ 548 548 chip = spi_get_ctldata(spi); 549 549 if (!chip) { 550 - chip = devm_kzalloc(&spi->dev, sizeof(struct chip_data), 551 - GFP_KERNEL); 550 + chip = kzalloc(sizeof(struct chip_data), GFP_KERNEL); 552 551 if (!chip) 553 552 return -ENOMEM; 554 553 spi_set_ctldata(spi, chip); ··· 605 606 return 0; 606 607 } 607 608 609 + static void dw_spi_cleanup(struct spi_device *spi) 610 + { 611 + struct chip_data *chip = spi_get_ctldata(spi); 612 + 613 + kfree(chip); 614 + spi_set_ctldata(spi, NULL); 615 + } 616 + 608 617 /* Restart the controller, disable all interrupts, clean rx fifo */ 609 618 static void spi_hw_init(struct dw_spi *dws) 610 619 { ··· 668 661 master->bus_num = dws->bus_num; 669 662 master->num_chipselect = dws->num_cs; 670 663 master->setup = dw_spi_setup; 664 + master->cleanup = dw_spi_cleanup; 671 665 master->transfer_one_message = dw_spi_transfer_one_message; 672 666 master->max_speed_hz = dws->max_freq; 673 667
+12 -3
drivers/spi/spi-fsl-espi.c
··· 452 452 int retval; 453 453 u32 hw_mode; 454 454 u32 loop_mode; 455 - struct spi_mpc8xxx_cs *cs = spi->controller_state; 455 + struct spi_mpc8xxx_cs *cs = spi_get_ctldata(spi); 456 456 457 457 if (!spi->max_speed_hz) 458 458 return -EINVAL; 459 459 460 460 if (!cs) { 461 - cs = devm_kzalloc(&spi->dev, sizeof(*cs), GFP_KERNEL); 461 + cs = kzalloc(sizeof(*cs), GFP_KERNEL); 462 462 if (!cs) 463 463 return -ENOMEM; 464 - spi->controller_state = cs; 464 + spi_set_ctldata(spi, cs); 465 465 } 466 466 467 467 mpc8xxx_spi = spi_master_get_devdata(spi->master); ··· 494 494 return retval; 495 495 } 496 496 return 0; 497 + } 498 + 499 + static void fsl_espi_cleanup(struct spi_device *spi) 500 + { 501 + struct spi_mpc8xxx_cs *cs = spi_get_ctldata(spi); 502 + 503 + kfree(cs); 504 + spi_set_ctldata(spi, NULL); 497 505 } 498 506 499 507 void fsl_espi_cpu_irq(struct mpc8xxx_spi *mspi, u32 events) ··· 613 605 614 606 master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 16); 615 607 master->setup = fsl_espi_setup; 608 + master->cleanup = fsl_espi_cleanup; 616 609 617 610 mpc8xxx_spi = spi_master_get_devdata(master); 618 611 mpc8xxx_spi->spi_do_one_msg = fsl_espi_do_one_msg;
+7 -3
drivers/spi/spi-fsl-spi.c
··· 425 425 struct fsl_spi_reg *reg_base; 426 426 int retval; 427 427 u32 hw_mode; 428 - struct spi_mpc8xxx_cs *cs = spi->controller_state; 428 + struct spi_mpc8xxx_cs *cs = spi_get_ctldata(spi); 429 429 430 430 if (!spi->max_speed_hz) 431 431 return -EINVAL; 432 432 433 433 if (!cs) { 434 - cs = devm_kzalloc(&spi->dev, sizeof(*cs), GFP_KERNEL); 434 + cs = kzalloc(sizeof(*cs), GFP_KERNEL); 435 435 if (!cs) 436 436 return -ENOMEM; 437 - spi->controller_state = cs; 437 + spi_set_ctldata(spi, cs); 438 438 } 439 439 mpc8xxx_spi = spi_master_get_devdata(spi->master); 440 440 ··· 496 496 static void fsl_spi_cleanup(struct spi_device *spi) 497 497 { 498 498 struct mpc8xxx_spi *mpc8xxx_spi = spi_master_get_devdata(spi->master); 499 + struct spi_mpc8xxx_cs *cs = spi_get_ctldata(spi); 499 500 500 501 if (mpc8xxx_spi->type == TYPE_GRLIB && gpio_is_valid(spi->cs_gpio)) 501 502 gpio_free(spi->cs_gpio); 503 + 504 + kfree(cs); 505 + spi_set_ctldata(spi, NULL); 502 506 } 503 507 504 508 static void fsl_spi_cpu_irq(struct mpc8xxx_spi *mspi, u32 events)
+1 -1
drivers/spi/spi-pl022.c
··· 2136 2136 cs_gpio); 2137 2137 else if (gpio_direction_output(cs_gpio, 1)) 2138 2138 dev_err(&adev->dev, 2139 - "could set gpio %d as output\n", 2139 + "could not set gpio %d as output\n", 2140 2140 cs_gpio); 2141 2141 } 2142 2142 }
+3 -2
drivers/spi/spi-rockchip.c
··· 220 220 do { 221 221 if (!(readl_relaxed(rs->regs + ROCKCHIP_SPI_SR) & SR_BUSY)) 222 222 return; 223 - } while (time_before(jiffies, timeout)); 223 + } while (!time_after(jiffies, timeout)); 224 224 225 225 dev_warn(rs->dev, "spi controller is in busy state!\n"); 226 226 } ··· 529 529 int ret = 0; 530 530 struct rockchip_spi *rs = spi_master_get_devdata(master); 531 531 532 - WARN_ON((readl_relaxed(rs->regs + ROCKCHIP_SPI_SR) & SR_BUSY)); 532 + WARN_ON(readl_relaxed(rs->regs + ROCKCHIP_SPI_SSIENR) && 533 + (readl_relaxed(rs->regs + ROCKCHIP_SPI_SR) & SR_BUSY)); 533 534 534 535 if (!xfer->tx_buf && !xfer->rx_buf) { 535 536 dev_err(rs->dev, "No buffer for transfer\n");
+4 -1
drivers/spi/spi-sirf.c
··· 312 312 u32 cmd; 313 313 314 314 sspi = spi_master_get_devdata(spi->master); 315 + writel(SIRFSOC_SPI_FIFO_RESET, sspi->base + SIRFSOC_SPI_TXFIFO_OP); 316 + writel(SIRFSOC_SPI_FIFO_START, sspi->base + SIRFSOC_SPI_TXFIFO_OP); 315 317 memcpy(&cmd, sspi->tx, t->len); 316 318 if (sspi->word_width == 1 && !(spi->mode & SPI_LSB_FIRST)) 317 319 cmd = cpu_to_be32(cmd) >> ··· 440 438 sspi->tx_word(sspi); 441 439 writel(SIRFSOC_SPI_TXFIFO_EMPTY_INT_EN | 442 440 SIRFSOC_SPI_TX_UFLOW_INT_EN | 443 - SIRFSOC_SPI_RX_OFLOW_INT_EN, 441 + SIRFSOC_SPI_RX_OFLOW_INT_EN | 442 + SIRFSOC_SPI_RX_IO_DMA_INT_EN, 444 443 sspi->base + SIRFSOC_SPI_INT_EN); 445 444 writel(SIRFSOC_SPI_RX_EN | SIRFSOC_SPI_TX_EN, 446 445 sspi->base + SIRFSOC_SPI_TX_RX_EN);
-1
drivers/staging/android/sync.c
··· 199 199 fence->num_fences = 1; 200 200 atomic_set(&fence->status, 1); 201 201 202 - fence_get(&pt->base); 203 202 fence->cbs[0].sync_pt = &pt->base; 204 203 fence->cbs[0].fence = fence; 205 204 if (fence_add_callback(&pt->base, &fence->cbs[0].cb,
+1 -1
drivers/staging/iio/meter/ade7758_trigger.c
··· 85 85 ret = iio_trigger_register(st->trig); 86 86 87 87 /* select default trigger */ 88 - indio_dev->trig = st->trig; 88 + indio_dev->trig = iio_trigger_get(st->trig); 89 89 if (ret) 90 90 goto error_free_irq; 91 91
+3
drivers/staging/imx-drm/imx-ldb.c
··· 574 574 for (i = 0; i < 2; i++) { 575 575 struct imx_ldb_channel *channel = &imx_ldb->channel[i]; 576 576 577 + if (!channel->connector.funcs) 578 + continue; 579 + 577 580 channel->connector.funcs->destroy(&channel->connector); 578 581 channel->encoder.funcs->destroy(&channel->encoder); 579 582 }
+2 -1
drivers/staging/imx-drm/ipuv3-plane.c
··· 281 281 282 282 ipu_idmac_put(ipu_plane->ipu_ch); 283 283 ipu_dmfc_put(ipu_plane->dmfc); 284 - ipu_dp_put(ipu_plane->dp); 284 + if (ipu_plane->dp) 285 + ipu_dp_put(ipu_plane->dp); 285 286 } 286 287 } 287 288
+1 -1
drivers/staging/lustre/lustre/llite/llite_lib.c
··· 568 568 if (sb->s_root == NULL) { 569 569 CERROR("%s: can't make root dentry\n", 570 570 ll_get_fsname(sb, NULL, 0)); 571 - GOTO(out_root, err = -ENOMEM); 571 + GOTO(out_lock_cn_cb, err = -ENOMEM); 572 572 } 573 573 574 574 sbi->ll_sdev_orig = sb->s_dev;
+3
drivers/staging/vt6655/hostap.c
··· 350 350 { 351 351 PSMgmtObject pMgmt = pDevice->pMgmt; 352 352 353 + if (param->u.generic_elem.len > sizeof(pMgmt->abyWPAIE)) 354 + return -EINVAL; 355 + 353 356 memcpy(pMgmt->abyWPAIE, 354 357 param->u.generic_elem.data, 355 358 param->u.generic_elem.len
+3 -1
drivers/target/iscsi/iscsi_target.c
··· 4540 4540 { 4541 4541 struct iscsi_conn *l_conn; 4542 4542 struct iscsi_session *sess = conn->sess; 4543 + bool conn_found = false; 4543 4544 4544 4545 if (!sess) 4545 4546 return; ··· 4549 4548 list_for_each_entry(l_conn, &sess->sess_conn_list, conn_list) { 4550 4549 if (l_conn->cid == cid) { 4551 4550 iscsit_inc_conn_usage_count(l_conn); 4551 + conn_found = true; 4552 4552 break; 4553 4553 } 4554 4554 } 4555 4555 spin_unlock_bh(&sess->conn_lock); 4556 4556 4557 - if (!l_conn) 4557 + if (!conn_found) 4558 4558 return; 4559 4559 4560 4560 if (l_conn->sock)
+1 -1
drivers/target/iscsi/iscsi_target_parameters.c
··· 601 601 param_list = kzalloc(sizeof(struct iscsi_param_list), GFP_KERNEL); 602 602 if (!param_list) { 603 603 pr_err("Unable to allocate memory for struct iscsi_param_list.\n"); 604 - goto err_out; 604 + return -1; 605 605 } 606 606 INIT_LIST_HEAD(&param_list->param_list); 607 607 INIT_LIST_HEAD(&param_list->extra_response_list);
+2
drivers/target/iscsi/iscsi_target_util.c
··· 400 400 401 401 spin_lock_bh(&conn->cmd_lock); 402 402 list_for_each_entry(cmd, &conn->conn_cmd_list, i_conn_node) { 403 + if (cmd->cmd_flags & ICF_GOT_LAST_DATAOUT) 404 + continue; 403 405 if (cmd->init_task_tag == init_task_tag) { 404 406 spin_unlock_bh(&conn->cmd_lock); 405 407 return cmd;
+1 -1
drivers/target/target_core_configfs.c
··· 2363 2363 pr_err("Invalid value '%ld', must be '0' or '1'\n", tmp); \ 2364 2364 return -EINVAL; \ 2365 2365 } \ 2366 - if (!tmp) \ 2366 + if (tmp) \ 2367 2367 t->_var |= _bit; \ 2368 2368 else \ 2369 2369 t->_var &= ~_bit; \
+1 -1
drivers/target/target_core_spc.c
··· 664 664 buf[0] = dev->transport->get_device_type(dev); 665 665 buf[3] = 0x0c; 666 666 put_unaligned_be32(dev->t10_alua.lba_map_segment_size, &buf[8]); 667 - put_unaligned_be32(dev->t10_alua.lba_map_segment_size, &buf[12]); 667 + put_unaligned_be32(dev->t10_alua.lba_map_segment_multiplier, &buf[12]); 668 668 669 669 return 0; 670 670 }
+1
drivers/tty/serial/8250/8250_dw.c
··· 540 540 { "INT3434", 0 }, 541 541 { "INT3435", 0 }, 542 542 { "80860F0A", 0 }, 543 + { "8086228A", 0 }, 543 544 { }, 544 545 }; 545 546 MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match);
+42 -1
drivers/tty/serial/atmel_serial.c
··· 527 527 } 528 528 529 529 /* 530 + * Disable modem status interrupts 531 + */ 532 + static void atmel_disable_ms(struct uart_port *port) 533 + { 534 + struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 535 + uint32_t idr = 0; 536 + 537 + /* 538 + * Interrupt should not be disabled twice 539 + */ 540 + if (!atmel_port->ms_irq_enabled) 541 + return; 542 + 543 + atmel_port->ms_irq_enabled = false; 544 + 545 + if (atmel_port->gpio_irq[UART_GPIO_CTS] >= 0) 546 + disable_irq(atmel_port->gpio_irq[UART_GPIO_CTS]); 547 + else 548 + idr |= ATMEL_US_CTSIC; 549 + 550 + if (atmel_port->gpio_irq[UART_GPIO_DSR] >= 0) 551 + disable_irq(atmel_port->gpio_irq[UART_GPIO_DSR]); 552 + else 553 + idr |= ATMEL_US_DSRIC; 554 + 555 + if (atmel_port->gpio_irq[UART_GPIO_RI] >= 0) 556 + disable_irq(atmel_port->gpio_irq[UART_GPIO_RI]); 557 + else 558 + idr |= ATMEL_US_RIIC; 559 + 560 + if (atmel_port->gpio_irq[UART_GPIO_DCD] >= 0) 561 + disable_irq(atmel_port->gpio_irq[UART_GPIO_DCD]); 562 + else 563 + idr |= ATMEL_US_DCDIC; 564 + 565 + UART_PUT_IDR(port, idr); 566 + } 567 + 568 + /* 530 569 * Control the transmission of a break signal 531 570 */ 532 571 static void atmel_break_ctl(struct uart_port *port, int break_state) ··· 2032 1993 2033 1994 /* CTS flow-control and modem-status interrupts */ 2034 1995 if (UART_ENABLE_MS(port, termios->c_cflag)) 2035 - port->ops->enable_ms(port); 1996 + atmel_enable_ms(port); 1997 + else 1998 + atmel_disable_ms(port); 2036 1999 2037 2000 spin_unlock_irqrestore(&port->lock, flags); 2038 2001 }
+1 -1
drivers/tty/serial/xilinx_uartps.c
··· 581 581 { 582 582 unsigned int status; 583 583 584 - status = cdns_uart_readl(CDNS_UART_ISR_OFFSET) & CDNS_UART_IXR_TXEMPTY; 584 + status = cdns_uart_readl(CDNS_UART_SR_OFFSET) & CDNS_UART_SR_TXEMPTY; 585 585 return status ? TIOCSER_TEMT : 0; 586 586 } 587 587
+2 -5
drivers/usb/chipidea/ci_hdrc_msm.c
··· 20 20 static void ci_hdrc_msm_notify_event(struct ci_hdrc *ci, unsigned event) 21 21 { 22 22 struct device *dev = ci->gadget.dev.parent; 23 - int val; 24 23 25 24 switch (event) { 26 25 case CI_HDRC_CONTROLLER_RESET_EVENT: 27 26 dev_dbg(dev, "CI_HDRC_CONTROLLER_RESET_EVENT received\n"); 28 27 writel(0, USB_AHBBURST); 29 28 writel(0, USB_AHBMODE); 29 + usb_phy_init(ci->transceiver); 30 30 break; 31 31 case CI_HDRC_CONTROLLER_STOPPED_EVENT: 32 32 dev_dbg(dev, "CI_HDRC_CONTROLLER_STOPPED_EVENT received\n"); ··· 34 34 * Put the transceiver in non-driving mode. Otherwise host 35 35 * may not detect soft-disconnection. 36 36 */ 37 - val = usb_phy_io_read(ci->transceiver, ULPI_FUNC_CTRL); 38 - val &= ~ULPI_FUNC_CTRL_OPMODE_MASK; 39 - val |= ULPI_FUNC_CTRL_OPMODE_NONDRIVING; 40 - usb_phy_io_write(ci->transceiver, val, ULPI_FUNC_CTRL); 37 + usb_phy_notify_disconnect(ci->transceiver, USB_SPEED_UNKNOWN); 41 38 break; 42 39 default: 43 40 dev_dbg(dev, "unknown ci_hdrc event\n");
+3 -1
drivers/usb/core/hub.c
··· 5024 5024 5025 5025 hub = list_entry(tmp, struct usb_hub, event_list); 5026 5026 kref_get(&hub->kref); 5027 + hdev = hub->hdev; 5028 + usb_get_dev(hdev); 5027 5029 spin_unlock_irq(&hub_event_lock); 5028 5030 5029 - hdev = hub->hdev; 5030 5031 hub_dev = hub->intfdev; 5031 5032 intf = to_usb_interface(hub_dev); 5032 5033 dev_dbg(hub_dev, "state %d ports %d chg %04x evt %04x\n", ··· 5140 5139 usb_autopm_put_interface(intf); 5141 5140 loop_disconnected: 5142 5141 usb_unlock_device(hdev); 5142 + usb_put_dev(hdev); 5143 5143 kref_put(&hub->kref, hub_release); 5144 5144 5145 5145 } /* end while (1) */
+25 -27
drivers/usb/dwc2/gadget.c
··· 1649 1649 dev_err(hsotg->dev,
1650 1650 "%s: timeout flushing fifo (GRSTCTL=%08x)\n",
1651 1651 __func__, val);
1652 + break;
1652 1653 }
1653 1654 udelay(1);
··· 2748 2747
2749 2748 dev_dbg(hsotg->dev, "pdev 0x%p\n", pdev);
2750 2749
2751 - if (hsotg->phy) {
2750 + if (hsotg->uphy)
2751 + usb_phy_init(hsotg->uphy);
2752 + else if (hsotg->plat && hsotg->plat->phy_init)
2753 + hsotg->plat->phy_init(pdev, hsotg->plat->phy_type);
2754 + else {
2752 2755 phy_init(hsotg->phy);
2753 2756 phy_power_on(hsotg->phy);
2754 - } else if (hsotg->uphy)
2755 - usb_phy_init(hsotg->uphy);
2756 - else if (hsotg->plat->phy_init)
2757 - hsotg->plat->phy_init(pdev, hsotg->plat->phy_type);
2757 + }
2758 2758
2759 2759 }
2760 2760 /**
··· 2769 2767 {
2770 2768 struct platform_device *pdev = to_platform_device(hsotg->dev);
2771 2769
2772 - if (hsotg->phy) {
2770 + if (hsotg->uphy)
2771 + usb_phy_shutdown(hsotg->uphy);
2772 + else if (hsotg->plat && hsotg->plat->phy_exit)
2773 + hsotg->plat->phy_exit(pdev, hsotg->plat->phy_type);
2774 + else {
2773 2775 phy_power_off(hsotg->phy);
2774 2776 phy_exit(hsotg->phy);
2775 - } else if (hsotg->uphy)
2776 - usb_phy_shutdown(hsotg->uphy);
2777 - else if (hsotg->plat->phy_exit)
2778 - hsotg->plat->phy_exit(pdev, hsotg->plat->phy_type);
2777 + }
2778 2779 }
2779 2780 /**
··· 2895 2892 return -ENODEV;
2896 2893
2897 2894 /* all endpoints should be shutdown */
2898 - for (ep = 0; ep < hsotg->num_of_eps; ep++)
2895 + for (ep = 1; ep < hsotg->num_of_eps; ep++)
2899 2896 s3c_hsotg_ep_disable(&hsotg->eps[ep].ep);
2900 2897
2901 2898 spin_lock_irqsave(&hsotg->lock, flags);
2902 -
2903 - s3c_hsotg_phy_disable(hsotg);
2904 2899
2905 2900 if (!driver)
2906 2901 hsotg->driver = NULL;
··· 2942 2941 s3c_hsotg_phy_enable(hsotg);
2943 2942 s3c_hsotg_core_init(hsotg);
2944 2943 } else {
2945 - s3c_hsotg_disconnect(hsotg);
2946 2944 s3c_hsotg_phy_disable(hsotg);
2947 2945 }
2948 2946
··· 3441 3441
3442 3442 hsotg->irq = ret;
3443 3443
3444 - ret = devm_request_irq(&pdev->dev, hsotg->irq, s3c_hsotg_irq, 0,
3445 - dev_name(dev), hsotg);
3446 - if (ret < 0) {
3447 - dev_err(dev, "cannot claim IRQ\n");
3448 - goto err_clk;
3449 - }
3450 -
3451 3444 dev_info(dev, "regs %p, irq %d\n", hsotg->regs, hsotg->irq);
3452 3445
3453 3446 hsotg->gadget.max_speed = USB_SPEED_HIGH;
··· 3481 3488 if (hsotg->phy && (phy_get_bus_width(phy) == 8))
3482 3489 hsotg->phyif = GUSBCFG_PHYIF8;
3483 3490
3484 - if (hsotg->phy)
3485 - phy_init(hsotg->phy);
3486 -
3487 3491 /* usb phy enable */
3488 3492 s3c_hsotg_phy_enable(hsotg);
3489 3493
3490 3494 s3c_hsotg_corereset(hsotg);
3491 3495 s3c_hsotg_init(hsotg);
3492 3496 s3c_hsotg_hw_cfg(hsotg);
3497 +
3498 + ret = devm_request_irq(&pdev->dev, hsotg->irq, s3c_hsotg_irq, 0,
3499 + dev_name(dev), hsotg);
3500 + if (ret < 0) {
3501 + s3c_hsotg_phy_disable(hsotg);
3502 + clk_disable_unprepare(hsotg->clk);
3503 + regulator_bulk_disable(ARRAY_SIZE(hsotg->supplies),
3504 + hsotg->supplies);
3505 + dev_err(dev, "cannot claim IRQ\n");
3506 + goto err_clk;
3507 + }
3493 3508
3494 3509 /* hsotg->num_of_eps holds number of EPs other than ep0 */
3495 3510
··· 3583 3582 usb_gadget_unregister_driver(hsotg->driver);
3584 3583 }
3585 3584
3586 - s3c_hsotg_phy_disable(hsotg);
3587 - if (hsotg->phy)
3588 - phy_exit(hsotg->phy);
3589 3585 clk_disable_unprepare(hsotg->clk);
3590 3586
3591 3587 return 0;
+7 -6
drivers/usb/dwc3/core.c
··· 799 799 { 800 800 struct dwc3 *dwc = platform_get_drvdata(pdev); 801 801 802 + dwc3_debugfs_exit(dwc); 803 + dwc3_core_exit_mode(dwc); 804 + dwc3_event_buffers_cleanup(dwc); 805 + dwc3_free_event_buffers(dwc); 806 + 802 807 usb_phy_set_suspend(dwc->usb2_phy, 1); 803 808 usb_phy_set_suspend(dwc->usb3_phy, 1); 804 809 phy_power_off(dwc->usb2_generic_phy); 805 810 phy_power_off(dwc->usb3_generic_phy); 806 811 812 + dwc3_core_exit(dwc); 813 + 807 814 pm_runtime_put_sync(&pdev->dev); 808 815 pm_runtime_disable(&pdev->dev); 809 - 810 - dwc3_debugfs_exit(dwc); 811 - dwc3_core_exit_mode(dwc); 812 - dwc3_event_buffers_cleanup(dwc); 813 - dwc3_free_event_buffers(dwc); 814 - dwc3_core_exit(dwc); 815 816 816 817 return 0; 817 818 }
+1 -1
drivers/usb/dwc3/dwc3-omap.c
··· 576 576 if (omap->extcon_id_dev.edev) 577 577 extcon_unregister_interest(&omap->extcon_id_dev); 578 578 dwc3_omap_disable_irqs(omap); 579 + device_for_each_child(&pdev->dev, NULL, dwc3_omap_remove_core); 579 580 pm_runtime_put_sync(&pdev->dev); 580 581 pm_runtime_disable(&pdev->dev); 581 - device_for_each_child(&pdev->dev, NULL, dwc3_omap_remove_core); 582 582 583 583 return 0; 584 584 }
+3 -8
drivers/usb/dwc3/gadget.c
··· 527 527 dep->stream_capable = true; 528 528 } 529 529 530 - if (usb_endpoint_xfer_isoc(desc)) 530 + if (!usb_endpoint_xfer_control(desc)) 531 531 params.param1 |= DWC3_DEPCFG_XFER_IN_PROGRESS_EN; 532 532 533 533 /* ··· 1225 1225 1226 1226 int ret; 1227 1227 1228 + spin_lock_irqsave(&dwc->lock, flags); 1228 1229 if (!dep->endpoint.desc) { 1229 1230 dev_dbg(dwc->dev, "trying to queue request %p to disabled %s\n", 1230 1231 request, ep->name); 1232 + spin_unlock_irqrestore(&dwc->lock, flags); 1231 1233 return -ESHUTDOWN; 1232 1234 } 1233 1235 1234 1236 dev_vdbg(dwc->dev, "queing request %p to %s length %d\n", 1235 1237 request, ep->name, request->length); 1236 1238 1237 - spin_lock_irqsave(&dwc->lock, flags); 1238 1239 ret = __dwc3_gadget_ep_queue(dep, req); 1239 1240 spin_unlock_irqrestore(&dwc->lock, flags); 1240 1241 ··· 2042 2041 dwc3_endpoint_transfer_complete(dwc, dep, event); 2043 2042 break; 2044 2043 case DWC3_DEPEVT_XFERINPROGRESS: 2045 - if (!usb_endpoint_xfer_isoc(dep->endpoint.desc)) { 2046 - dev_dbg(dwc->dev, "%s is not an Isochronous endpoint\n", 2047 - dep->name); 2048 - return; 2049 - } 2050 - 2051 2044 dwc3_endpoint_transfer_complete(dwc, dep, event); 2052 2045 break; 2053 2046 case DWC3_DEPEVT_XFERNOTREADY:
+55 -11
drivers/usb/gadget/function/f_fs.c
··· 155 155 struct usb_request *req;
156 156 };
157 157
158 + struct ffs_desc_helper {
159 + struct ffs_data *ffs;
160 + unsigned interfaces_count;
161 + unsigned eps_count;
162 + };
163 +
158 164 static int __must_check ffs_epfiles_create(struct ffs_data *ffs);
159 165 static void ffs_epfiles_destroy(struct ffs_epfile *epfiles, unsigned count);
160 166
··· 1836 1830 u8 *valuep, struct usb_descriptor_header *desc,
1837 1831 void *priv)
1838 1832 {
1839 - struct ffs_data *ffs = priv;
1833 + struct ffs_desc_helper *helper = priv;
1834 + struct usb_endpoint_descriptor *d;
1840 1835
1841 1836 ENTER();
1842 1837
··· 1851 1844 * encountered interface "n" then there are at least
1852 1845 * "n+1" interfaces.
1853 1846 */
1854 - if (*valuep >= ffs->interfaces_count)
1855 - ffs->interfaces_count = *valuep + 1;
1847 + if (*valuep >= helper->interfaces_count)
1848 + helper->interfaces_count = *valuep + 1;
1856 1849 break;
1857 1850
1858 1851 case FFS_STRING:
··· 1860 1853 * Strings are indexed from 1 (0 is magic ;) reserved
1861 1854 * for languages list or some such)
1862 1855 */
1863 - if (*valuep > ffs->strings_count)
1864 - ffs->strings_count = *valuep;
1856 + if (*valuep > helper->ffs->strings_count)
1857 + helper->ffs->strings_count = *valuep;
1865 1858 break;
1866 1859
1867 1860 case FFS_ENDPOINT:
1868 - /* Endpoints are indexed from 1 as well. */
1869 - if ((*valuep & USB_ENDPOINT_NUMBER_MASK) > ffs->eps_count)
1870 - ffs->eps_count = (*valuep & USB_ENDPOINT_NUMBER_MASK);
1861 + d = (void *)desc;
1862 + helper->eps_count++;
1863 + if (helper->eps_count >= 15)
1864 + return -EINVAL;
1865 + /* Check if descriptors for any speed were already parsed */
1866 + if (!helper->ffs->eps_count && !helper->ffs->interfaces_count)
1867 + helper->ffs->eps_addrmap[helper->eps_count] =
1868 + d->bEndpointAddress;
1869 + else if (helper->ffs->eps_addrmap[helper->eps_count] !=
1870 + d->bEndpointAddress)
1871 + return -EINVAL;
1871 1872 break;
1872 1873 }
1873 1874
··· 2068 2053 char *data = _data, *raw_descs;
2069 2054 unsigned os_descs_count = 0, counts[3], flags;
2070 2055 int ret = -EINVAL, i;
2056 + struct ffs_desc_helper helper;
2071 2057
2072 2058 ENTER();
2073 2059
··· 2117 2101
2118 2102 /* Read descriptors */
2119 2103 raw_descs = data;
2104 + helper.ffs = ffs;
2120 2105 for (i = 0; i < 3; ++i) {
2121 2106 if (!counts[i])
2122 2107 continue;
2108 + helper.interfaces_count = 0;
2109 + helper.eps_count = 0;
2123 2110 ret = ffs_do_descs(counts[i], data, len,
2124 - __ffs_data_do_entity, ffs);
2111 + __ffs_data_do_entity, &helper);
2125 2112 if (ret < 0)
2126 2113 goto error;
2114 + if (!ffs->eps_count && !ffs->interfaces_count) {
2115 + ffs->eps_count = helper.eps_count;
2116 + ffs->interfaces_count = helper.interfaces_count;
2117 + } else {
2118 + if (ffs->eps_count != helper.eps_count) {
2119 + ret = -EINVAL;
2120 + goto error;
2121 + }
2122 + if (ffs->interfaces_count != helper.interfaces_count) {
2123 + ret = -EINVAL;
2124 + goto error;
2125 + }
2126 + }
2127 2127 data += ret;
2128 2128 len -= ret;
2129 2129 }
··· 2374 2342 spin_unlock_irqrestore(&ffs->ev.waitq.lock, flags);
2375 2343 }
2376 2344
2377 -
2378 2345 /* Bind/unbind USB function hooks *******************************************/
2346 +
2347 + static int ffs_ep_addr2idx(struct ffs_data *ffs, u8 endpoint_address)
2348 + {
2349 + int i;
2350 +
2351 + for (i = 1; i < ARRAY_SIZE(ffs->eps_addrmap); ++i)
2352 + if (ffs->eps_addrmap[i] == endpoint_address)
2353 + return i;
2354 + return -ENOENT;
2355 + }
2379 2356
2380 2357 static int __ffs_func_bind_do_descs(enum ffs_entity_type type, u8 *valuep,
2381 2358 struct usb_descriptor_header *desc,
··· 2419 2378 if (!desc || desc->bDescriptorType != USB_DT_ENDPOINT)
2420 2379 return 0;
2421 2380
2422 - idx = (ds->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK) - 1;
2381 + idx = ffs_ep_addr2idx(func->ffs, ds->bEndpointAddress) - 1;
2382 + if (idx < 0)
2383 + return idx;
2384 +
2423 2385 ffs_ep = func->eps + idx;
2424 2386
2425 2387 if (unlikely(ffs_ep->descs[ep_desc_id])) {
+2
drivers/usb/gadget/function/u_fs.h
··· 224 224 void *ms_os_descs_ext_prop_name_avail; 225 225 void *ms_os_descs_ext_prop_data_avail; 226 226 227 + u8 eps_addrmap[15]; 228 + 227 229 unsigned short strings_count; 228 230 unsigned short interfaces_count; 229 231 unsigned short eps_count;
+1 -1
drivers/usb/gadget/udc/fusb300_udc.h
··· 12 12 13 13 14 14 #ifndef __FUSB300_UDC_H__ 15 - #define __FUSB300_UDC_H_ 15 + #define __FUSB300_UDC_H__ 16 16 17 17 #include <linux/kernel.h> 18 18
+1 -1
drivers/usb/gadget/udc/net2280.c
··· 3320 3320 if (stat & tmp) { 3321 3321 writel(tmp, &dev->regs->irqstat1); 3322 3322 if ((((stat & BIT(ROOT_PORT_RESET_INTERRUPT)) && 3323 - (readl(&dev->usb->usbstat) & mask)) || 3323 + ((readl(&dev->usb->usbstat) & mask) == 0)) || 3324 3324 ((readl(&dev->usb->usbctl) & 3325 3325 BIT(VBUS_PIN)) == 0)) && 3326 3326 (dev->gadget.speed != USB_SPEED_UNKNOWN)) {
-2
drivers/usb/host/ehci-hcd.c
··· 965 965 } 966 966 967 967 qh->exception = 1; 968 - if (ehci->rh_state < EHCI_RH_RUNNING) 969 - qh->qh_state = QH_STATE_IDLE; 970 968 switch (qh->qh_state) { 971 969 case QH_STATE_LINKED: 972 970 WARN_ON(!list_empty(&qh->qtd_list));
+5 -3
drivers/usb/host/xhci-hub.c
··· 468 468 } 469 469 470 470 /* Updates Link Status for super Speed port */ 471 - static void xhci_hub_report_usb3_link_state(u32 *status, u32 status_reg) 471 + static void xhci_hub_report_usb3_link_state(struct xhci_hcd *xhci, 472 + u32 *status, u32 status_reg) 472 473 { 473 474 u32 pls = status_reg & PORT_PLS_MASK; 474 475 ··· 508 507 * in which sometimes the port enters compliance mode 509 508 * caused by a delay on the host-device negotiation. 510 509 */ 511 - if (pls == USB_SS_PORT_LS_COMP_MOD) 510 + if ((xhci->quirks & XHCI_COMP_MODE_QUIRK) && 511 + (pls == USB_SS_PORT_LS_COMP_MOD)) 512 512 pls |= USB_PORT_STAT_CONNECTION; 513 513 } 514 514 ··· 668 666 } 669 667 /* Update Port Link State */ 670 668 if (hcd->speed == HCD_USB3) { 671 - xhci_hub_report_usb3_link_state(&status, raw_port_status); 669 + xhci_hub_report_usb3_link_state(xhci, &status, raw_port_status); 672 670 /* 673 671 * Verify if all USB3 Ports Have entered U0 already. 674 672 * Delete Compliance Mode Timer if so.
+2 -1
drivers/usb/host/xhci-mem.c
··· 1812 1812 1813 1813 if (xhci->lpm_command) 1814 1814 xhci_free_command(xhci, xhci->lpm_command); 1815 + xhci->lpm_command = NULL; 1815 1816 if (xhci->cmd_ring) 1816 1817 xhci_ring_free(xhci, xhci->cmd_ring); 1817 1818 xhci->cmd_ring = NULL; ··· 1820 1819 xhci_cleanup_command_queue(xhci); 1821 1820 1822 1821 num_ports = HCS_MAX_PORTS(xhci->hcs_params1); 1823 - for (i = 0; i < num_ports; i++) { 1822 + for (i = 0; i < num_ports && xhci->rh_bw; i++) { 1824 1823 struct xhci_interval_bw_table *bwt = &xhci->rh_bw[i].bw_table; 1825 1824 for (j = 0; j < XHCI_MAX_INTERVAL; j++) { 1826 1825 struct list_head *ep = &bwt->interval_bw[j].endpoints;
+10 -2
drivers/usb/host/xhci.c
··· 3971 3971 int ret; 3972 3972 3973 3973 spin_lock_irqsave(&xhci->lock, flags); 3974 - if (max_exit_latency == xhci->devs[udev->slot_id]->current_mel) { 3974 + 3975 + virt_dev = xhci->devs[udev->slot_id]; 3976 + 3977 + /* 3978 + * virt_dev might not exists yet if xHC resumed from hibernate (S4) and 3979 + * xHC was re-initialized. Exit latency will be set later after 3980 + * hub_port_finish_reset() is done and xhci->devs[] are re-allocated 3981 + */ 3982 + 3983 + if (!virt_dev || max_exit_latency == virt_dev->current_mel) { 3975 3984 spin_unlock_irqrestore(&xhci->lock, flags); 3976 3985 return 0; 3977 3986 } 3978 3987 3979 3988 /* Attempt to issue an Evaluate Context command to change the MEL. */ 3980 - virt_dev = xhci->devs[udev->slot_id]; 3981 3989 command = xhci->lpm_command; 3982 3990 ctrl_ctx = xhci_get_input_control_ctx(xhci, command->in_ctx); 3983 3991 if (!ctrl_ctx) {
+15 -2
drivers/usb/musb/musb_cppi41.c
··· 39 39 u32 transferred; 40 40 u32 packet_sz; 41 41 struct list_head tx_check; 42 + int tx_zlp; 42 43 }; 43 44 44 45 #define MUSB_DMA_NUM_CHANNELS 15 ··· 123 122 { 124 123 struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep; 125 124 struct musb *musb = hw_ep->musb; 125 + void __iomem *epio = hw_ep->regs; 126 + u16 csr; 126 127 127 128 if (!cppi41_channel->prog_len || 128 129 (cppi41_channel->channel.status == MUSB_DMA_STATUS_FREE)) { ··· 134 131 cppi41_channel->transferred; 135 132 cppi41_channel->channel.status = MUSB_DMA_STATUS_FREE; 136 133 cppi41_channel->channel.rx_packet_done = true; 134 + 135 + /* 136 + * transmit ZLP using PIO mode for transfers which size is 137 + * multiple of EP packet size. 138 + */ 139 + if (cppi41_channel->tx_zlp && (cppi41_channel->transferred % 140 + cppi41_channel->packet_sz) == 0) { 141 + musb_ep_select(musb->mregs, hw_ep->epnum); 142 + csr = MUSB_TXCSR_MODE | MUSB_TXCSR_TXPKTRDY; 143 + musb_writew(epio, MUSB_TXCSR, csr); 144 + } 137 145 musb_dma_completion(musb, hw_ep->epnum, cppi41_channel->is_tx); 138 146 } else { 139 147 /* next iteration, reload */ 140 148 struct dma_chan *dc = cppi41_channel->dc; 141 149 struct dma_async_tx_descriptor *dma_desc; 142 150 enum dma_transfer_direction direction; 143 - u16 csr; 144 151 u32 remain_bytes; 145 - void __iomem *epio = cppi41_channel->hw_ep->regs; 146 152 147 153 cppi41_channel->buf_addr += cppi41_channel->packet_sz; 148 154 ··· 375 363 cppi41_channel->total_len = len; 376 364 cppi41_channel->transferred = 0; 377 365 cppi41_channel->packet_sz = packet_sz; 366 + cppi41_channel->tx_zlp = (cppi41_channel->is_tx && mode) ? 1 : 0; 378 367 379 368 /* 380 369 * Due to AM335x' Advisory 1.0.13 we are not allowed to transfer more
+7 -1
drivers/usb/phy/phy-mxs-usb.c
··· 1 1 /* 2 - * Copyright 2012-2013 Freescale Semiconductor, Inc. 2 + * Copyright 2012-2014 Freescale Semiconductor, Inc. 3 3 * Copyright (C) 2012 Marek Vasut <marex@denx.de> 4 4 * on behalf of DENX Software Engineering GmbH 5 5 * ··· 125 125 MXS_PHY_NEED_IP_FIX, 126 126 }; 127 127 128 + static const struct mxs_phy_data imx6sx_phy_data = { 129 + .flags = MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS | 130 + MXS_PHY_NEED_IP_FIX, 131 + }; 132 + 128 133 static const struct of_device_id mxs_phy_dt_ids[] = { 134 + { .compatible = "fsl,imx6sx-usbphy", .data = &imx6sx_phy_data, }, 129 135 { .compatible = "fsl,imx6sl-usbphy", .data = &imx6sl_phy_data, }, 130 136 { .compatible = "fsl,imx6q-usbphy", .data = &imx6q_phy_data, }, 131 137 { .compatible = "fsl,imx23-usbphy", .data = &imx23_phy_data, },
+2 -2
drivers/usb/phy/phy-tegra-usb.c
··· 878 878 return -ENOMEM; 879 879 } 880 880 881 - tegra_phy->config = devm_kzalloc(&pdev->dev, 882 - sizeof(*tegra_phy->config), GFP_KERNEL); 881 + tegra_phy->config = devm_kzalloc(&pdev->dev, sizeof(*config), 882 + GFP_KERNEL); 883 883 if (!tegra_phy->config) { 884 884 dev_err(&pdev->dev, 885 885 "unable to allocate memory for USB UTMIP config\n");
+66 -6
drivers/usb/renesas_usbhs/fifo.c
··· 108 108 return list_first_entry(&pipe->list, struct usbhs_pkt, node); 109 109 } 110 110 111 + static void usbhsf_fifo_clear(struct usbhs_pipe *pipe, 112 + struct usbhs_fifo *fifo); 113 + static void usbhsf_fifo_unselect(struct usbhs_pipe *pipe, 114 + struct usbhs_fifo *fifo); 115 + static struct dma_chan *usbhsf_dma_chan_get(struct usbhs_fifo *fifo, 116 + struct usbhs_pkt *pkt); 117 + #define usbhsf_dma_map(p) __usbhsf_dma_map_ctrl(p, 1) 118 + #define usbhsf_dma_unmap(p) __usbhsf_dma_map_ctrl(p, 0) 119 + static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map); 111 120 struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt) 112 121 { 113 122 struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe); 123 + struct usbhs_fifo *fifo = usbhs_pipe_to_fifo(pipe); 114 124 unsigned long flags; 115 125 116 126 /******************** spin lock ********************/ 117 127 usbhs_lock(priv, flags); 118 128 129 + usbhs_pipe_disable(pipe); 130 + 119 131 if (!pkt) 120 132 pkt = __usbhsf_pkt_get(pipe); 121 133 122 - if (pkt) 134 + if (pkt) { 135 + struct dma_chan *chan = NULL; 136 + 137 + if (fifo) 138 + chan = usbhsf_dma_chan_get(fifo, pkt); 139 + if (chan) { 140 + dmaengine_terminate_all(chan); 141 + usbhsf_fifo_clear(pipe, fifo); 142 + usbhsf_dma_unmap(pkt); 143 + } 144 + 123 145 __usbhsf_pkt_del(pkt); 146 + } 147 + 148 + if (fifo) 149 + usbhsf_fifo_unselect(pipe, fifo); 124 150 125 151 usbhs_unlock(priv, flags); 126 152 /******************** spin unlock ******************/ ··· 570 544 usbhsf_send_terminator(pipe, fifo); 571 545 572 546 usbhsf_tx_irq_ctrl(pipe, !*is_done); 547 + usbhs_pipe_running(pipe, !*is_done); 573 548 usbhs_pipe_enable(pipe); 574 549 575 550 dev_dbg(dev, " send %d (%d/ %d/ %d/ %d)\n", ··· 597 570 * retry in interrupt 598 571 */ 599 572 usbhsf_tx_irq_ctrl(pipe, 1); 573 + usbhs_pipe_running(pipe, 1); 600 574 601 575 return ret; 602 576 } 603 577 578 + static int usbhsf_pio_prepare_push(struct usbhs_pkt *pkt, int *is_done) 579 + { 
580 + if (usbhs_pipe_is_running(pkt->pipe)) 581 + return 0; 582 + 583 + return usbhsf_pio_try_push(pkt, is_done); 584 + } 585 + 604 586 struct usbhs_pkt_handle usbhs_fifo_pio_push_handler = { 605 - .prepare = usbhsf_pio_try_push, 587 + .prepare = usbhsf_pio_prepare_push, 606 588 .try_run = usbhsf_pio_try_push, 607 589 }; 608 590 ··· 625 589 if (usbhs_pipe_is_busy(pipe)) 626 590 return 0; 627 591 592 + if (usbhs_pipe_is_running(pipe)) 593 + return 0; 594 + 628 595 /* 629 596 * pipe enable to prepare packet receive 630 597 */ ··· 636 597 637 598 usbhs_pipe_set_trans_count_if_bulk(pipe, pkt->length); 638 599 usbhs_pipe_enable(pipe); 600 + usbhs_pipe_running(pipe, 1); 639 601 usbhsf_rx_irq_ctrl(pipe, 1); 640 602 641 603 return 0; ··· 682 642 (total_len < maxp)) { /* short packet */ 683 643 *is_done = 1; 684 644 usbhsf_rx_irq_ctrl(pipe, 0); 645 + usbhs_pipe_running(pipe, 0); 685 646 usbhs_pipe_disable(pipe); /* disable pipe first */ 686 647 } 687 648 ··· 804 763 usbhs_bset(priv, fifo->sel, DREQE, dreqe); 805 764 } 806 765 807 - #define usbhsf_dma_map(p) __usbhsf_dma_map_ctrl(p, 1) 808 - #define usbhsf_dma_unmap(p) __usbhsf_dma_map_ctrl(p, 0) 809 766 static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map) 810 767 { 811 768 struct usbhs_pipe *pipe = pkt->pipe; ··· 844 805 dev_dbg(dev, " %s %d (%d/ %d)\n", 845 806 fifo->name, usbhs_pipe_number(pipe), pkt->length, pkt->zero); 846 807 808 + usbhs_pipe_running(pipe, 1); 847 809 usbhs_pipe_set_trans_count_if_bulk(pipe, pkt->trans); 848 810 usbhs_pipe_enable(pipe); 849 811 usbhsf_dma_start(pipe, fifo); ··· 875 835 876 836 if ((uintptr_t)(pkt->buf + pkt->actual) & 0x7) /* 8byte alignment */ 877 837 goto usbhsf_pio_prepare_push; 838 + 839 + /* return at this time if the pipe is running */ 840 + if (usbhs_pipe_is_running(pipe)) 841 + return 0; 878 842 879 843 /* get enable DMA fifo */ 880 844 fifo = usbhsf_get_dma_fifo(priv, pkt); ··· 913 869 static int usbhsf_dma_push_done(struct usbhs_pkt *pkt, int *is_done) 914 870 { 
915 871 struct usbhs_pipe *pipe = pkt->pipe; 872 + int is_short = pkt->trans % usbhs_pipe_get_maxpacket(pipe); 916 873 917 - pkt->actual = pkt->trans; 874 + pkt->actual += pkt->trans; 918 875 919 - *is_done = !pkt->zero; /* send zero packet ? */ 876 + if (pkt->actual < pkt->length) 877 + *is_done = 0; /* there are remainder data */ 878 + else if (is_short) 879 + *is_done = 1; /* short packet */ 880 + else 881 + *is_done = !pkt->zero; /* send zero packet? */ 882 + 883 + usbhs_pipe_running(pipe, !*is_done); 920 884 921 885 usbhsf_dma_stop(pipe, pipe->fifo); 922 886 usbhsf_dma_unmap(pkt); 923 887 usbhsf_fifo_unselect(pipe, pipe->fifo); 888 + 889 + if (!*is_done) { 890 + /* change handler to PIO */ 891 + pkt->handler = &usbhs_fifo_pio_push_handler; 892 + return pkt->handler->try_run(pkt, is_done); 893 + } 924 894 925 895 return 0; 926 896 } ··· 1030 972 if ((pkt->actual == pkt->length) || /* receive all data */ 1031 973 (pkt->trans < maxp)) { /* short packet */ 1032 974 *is_done = 1; 975 + usbhs_pipe_running(pipe, 0); 1033 976 } else { 1034 977 /* re-enable */ 978 + usbhs_pipe_running(pipe, 0); 1035 979 usbhsf_prepare_pop(pkt, is_done); 1036 980 } 1037 981
+5
drivers/usb/renesas_usbhs/mod.c
··· 213 213 { 214 214 struct usbhs_mod *mod = usbhs_mod_get_current(priv); 215 215 u16 intenb0, intenb1; 216 + unsigned long flags; 216 217 218 + /******************** spin lock ********************/ 219 + usbhs_lock(priv, flags); 217 220 state->intsts0 = usbhs_read(priv, INTSTS0); 218 221 state->intsts1 = usbhs_read(priv, INTSTS1); 219 222 ··· 232 229 state->bempsts &= mod->irq_bempsts; 233 230 state->brdysts &= mod->irq_brdysts; 234 231 } 232 + usbhs_unlock(priv, flags); 233 + /******************** spin unlock ******************/ 235 234 236 235 /* 237 236 * Check whether the irq enable registers and the irq status are set
+13
drivers/usb/renesas_usbhs/pipe.c
··· 578 578 return usbhsp_flags_has(pipe, IS_DIR_HOST); 579 579 } 580 580 581 + int usbhs_pipe_is_running(struct usbhs_pipe *pipe) 582 + { 583 + return usbhsp_flags_has(pipe, IS_RUNNING); 584 + } 585 + 586 + void usbhs_pipe_running(struct usbhs_pipe *pipe, int running) 587 + { 588 + if (running) 589 + usbhsp_flags_set(pipe, IS_RUNNING); 590 + else 591 + usbhsp_flags_clr(pipe, IS_RUNNING); 592 + } 593 + 581 594 void usbhs_pipe_data_sequence(struct usbhs_pipe *pipe, int sequence) 582 595 { 583 596 u16 mask = (SQCLR | SQSET);
+4
drivers/usb/renesas_usbhs/pipe.h
··· 36 36 #define USBHS_PIPE_FLAGS_IS_USED (1 << 0) 37 37 #define USBHS_PIPE_FLAGS_IS_DIR_IN (1 << 1) 38 38 #define USBHS_PIPE_FLAGS_IS_DIR_HOST (1 << 2) 39 + #define USBHS_PIPE_FLAGS_IS_RUNNING (1 << 3) 39 40 40 41 struct usbhs_pkt_handle *handler; 41 42 ··· 81 80 void usbhs_pipe_remove(struct usbhs_priv *priv); 82 81 int usbhs_pipe_is_dir_in(struct usbhs_pipe *pipe); 83 82 int usbhs_pipe_is_dir_host(struct usbhs_pipe *pipe); 83 + int usbhs_pipe_is_running(struct usbhs_pipe *pipe); 84 + void usbhs_pipe_running(struct usbhs_pipe *pipe, int running); 85 + 84 86 void usbhs_pipe_init(struct usbhs_priv *priv, 85 87 int (*dma_map_ctrl)(struct usbhs_pkt *pkt, int map)); 86 88 int usbhs_pipe_get_maxpacket(struct usbhs_pipe *pipe);
+3
drivers/usb/serial/ftdi_sio.c
··· 728 728 { USB_DEVICE(FTDI_VID, FTDI_NDI_AURORA_SCU_PID), 729 729 .driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk }, 730 730 { USB_DEVICE(TELLDUS_VID, TELLDUS_TELLSTICK_PID) }, 731 + { USB_DEVICE(NOVITUS_VID, NOVITUS_BONO_E_PID) }, 731 732 { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_S03_PID) }, 732 733 { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_59_PID) }, 733 734 { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_57A_PID) }, ··· 940 939 { USB_DEVICE(FTDI_VID, FTDI_EKEY_CONV_USB_PID) }, 941 940 /* Infineon Devices */ 942 941 { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) }, 942 + /* GE Healthcare devices */ 943 + { USB_DEVICE(GE_HEALTHCARE_VID, GE_HEALTHCARE_NEMO_TRACKER_PID) }, 943 944 { } /* Terminating entry */ 944 945 }; 945 946
+12
drivers/usb/serial/ftdi_sio_ids.h
··· 837 837 #define TELLDUS_TELLSTICK_PID 0x0C30 /* RF control dongle 433 MHz using FT232RL */ 838 838 839 839 /* 840 + * NOVITUS printers 841 + */ 842 + #define NOVITUS_VID 0x1a28 843 + #define NOVITUS_BONO_E_PID 0x6010 844 + 845 + /* 840 846 * RT Systems programming cables for various ham radios 841 847 */ 842 848 #define RTSYSTEMS_VID 0x2100 /* Vendor ID */ ··· 1391 1385 * ekey biometric systems GmbH (http://ekey.net/) 1392 1386 */ 1393 1387 #define FTDI_EKEY_CONV_USB_PID 0xCB08 /* Converter USB */ 1388 + 1389 + /* 1390 + * GE Healthcare devices 1391 + */ 1392 + #define GE_HEALTHCARE_VID 0x1901 1393 + #define GE_HEALTHCARE_NEMO_TRACKER_PID 0x0015
+7 -2
drivers/usb/serial/sierra.c
··· 282 282 /* Sierra Wireless HSPA Non-Composite Device */ 283 283 { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x6892, 0xFF, 0xFF, 0xFF)}, 284 284 { USB_DEVICE(0x1199, 0x6893) }, /* Sierra Wireless Device */ 285 - { USB_DEVICE(0x1199, 0x68A3), /* Sierra Wireless Direct IP modems */ 285 + /* Sierra Wireless Direct IP modems */ 286 + { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x68A3, 0xFF, 0xFF, 0xFF), 287 + .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist 288 + }, 289 + { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x68AA, 0xFF, 0xFF, 0xFF), 286 290 .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist 287 291 }, 288 292 /* AT&T Direct IP LTE modems */ 289 293 { USB_DEVICE_AND_INTERFACE_INFO(0x0F3D, 0x68AA, 0xFF, 0xFF, 0xFF), 290 294 .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist 291 295 }, 292 - { USB_DEVICE(0x0f3d, 0x68A3), /* Airprime/Sierra Wireless Direct IP modems */ 296 + /* Airprime/Sierra Wireless Direct IP modems */ 297 + { USB_DEVICE_AND_INTERFACE_INFO(0x0F3D, 0x68A3, 0xFF, 0xFF, 0xFF), 293 298 .driver_info = (kernel_ulong_t)&direct_ip_interface_blacklist 294 299 }, 295 300
+8
drivers/usb/serial/zte_ev.c
··· 272 272 } 273 273 274 274 static const struct usb_device_id id_table[] = { 275 + { USB_DEVICE(0x19d2, 0xffec) }, 276 + { USB_DEVICE(0x19d2, 0xffee) }, 277 + { USB_DEVICE(0x19d2, 0xfff6) }, 278 + { USB_DEVICE(0x19d2, 0xfff7) }, 279 + { USB_DEVICE(0x19d2, 0xfff8) }, 280 + { USB_DEVICE(0x19d2, 0xfff9) }, 281 + { USB_DEVICE(0x19d2, 0xfffb) }, 282 + { USB_DEVICE(0x19d2, 0xfffc) }, 275 283 /* MG880 */ 276 284 { USB_DEVICE(0x19d2, 0xfffd) }, 277 285 { },
+23 -4
drivers/usb/storage/uas-detect.h
··· 59 59 unsigned long flags = id->driver_info; 60 60 int r, alt; 61 61 62 - usb_stor_adjust_quirks(udev, &flags); 63 - 64 - if (flags & US_FL_IGNORE_UAS) 65 - return 0; 66 62 67 63 alt = uas_find_uas_alt_setting(intf); 68 64 if (alt < 0) ··· 67 71 r = uas_find_endpoints(&intf->altsetting[alt], eps); 68 72 if (r < 0) 69 73 return 0; 74 + 75 + /* 76 + * ASM1051 and older ASM1053 devices have the same usb-id, and UAS is 77 + * broken on the ASM1051, use the number of streams to differentiate. 78 + * New ASM1053-s also support 32 streams, but have a different prod-id. 79 + */ 80 + if (le16_to_cpu(udev->descriptor.idVendor) == 0x174c && 81 + le16_to_cpu(udev->descriptor.idProduct) == 0x55aa) { 82 + if (udev->speed < USB_SPEED_SUPER) { 83 + /* No streams info, assume ASM1051 */ 84 + flags |= US_FL_IGNORE_UAS; 85 + } else if (usb_ss_max_streams(&eps[1]->ss_ep_comp) == 32) { 86 + flags |= US_FL_IGNORE_UAS; 87 + } 88 + } 89 + 90 + usb_stor_adjust_quirks(udev, &flags); 91 + 92 + if (flags & US_FL_IGNORE_UAS) { 93 + dev_warn(&udev->dev, 94 + "UAS is blacklisted for this device, using usb-storage instead\n"); 95 + return 0; 96 + } 70 97 71 98 if (udev->bus->sg_tablesize == 0) { 72 99 dev_warn(&udev->dev,
+38
drivers/usb/storage/unusual_devs.h
··· 101 101 "PhotoSmart R707",
102 102 USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_FIX_CAPACITY),
103 103
104 + UNUSUAL_DEV( 0x03f3, 0x0001, 0x0000, 0x9999,
105 + "Adaptec",
106 + "USBConnect 2000",
107 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
108 + US_FL_SCM_MULT_TARG ),
109 +
104 110 /* Reported by Sebastian Kapfer <sebastian_kapfer@gmx.net>
105 111 * and Olaf Hering <olh@suse.de> (different bcd's, same vendor/product)
106 112 * for USB floppies that need the SINGLE_LUN enforcement.
··· 747 741 USB_SC_DEVICE, USB_PR_DEVICE, NULL,
748 742 US_FL_SINGLE_LUN ),
749 743
744 + UNUSUAL_DEV( 0x059b, 0x0040, 0x0100, 0x0100,
745 + "Iomega",
746 + "Jaz USB Adapter",
747 + USB_SC_DEVICE, USB_PR_DEVICE, NULL,
748 + US_FL_SINGLE_LUN ),
749 +
750 750 /* Reported by <Hendryk.Pfeiffer@gmx.de> */
751 751 UNUSUAL_DEV( 0x059f, 0x0643, 0x0000, 0x0000,
752 752 "LaCie",
··· 1130 1118 "Photo Frame",
1131 1119 USB_SC_DEVICE, USB_PR_DEVICE, NULL,
1132 1120 US_FL_NOT_LOCKABLE),
1121 +
1122 + UNUSUAL_DEV( 0x085a, 0x0026, 0x0100, 0x0133,
1123 + "Xircom",
1124 + "PortGear USB-SCSI (Mac USB Dock)",
1125 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
1126 + US_FL_SCM_MULT_TARG ),
1127 +
1128 + UNUSUAL_DEV( 0x085a, 0x0028, 0x0100, 0x0133,
1129 + "Xircom",
1130 + "PortGear USB to SCSI Converter",
1131 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
1132 + US_FL_SCM_MULT_TARG ),
1133 1133
1134 1134 /* Submitted by Jan De Luyck <lkml@kcore.org> */
1135 1135 UNUSUAL_DEV( 0x08bd, 0x1100, 0x0000, 0x0000,
··· 1982 1958 USB_SC_DEVICE, USB_PR_DEVICE, NULL,
1983 1959 US_FL_IGNORE_RESIDUE | US_FL_SANE_SENSE ),
1984 1960
1961 + /* Entrega Technologies U1-SC25 (later Xircom PortGear PGSCSI)
1962 + * and Mac USB Dock USB-SCSI */
1963 + UNUSUAL_DEV( 0x1645, 0x0007, 0x0100, 0x0133,
1964 + "Entrega Technologies",
1965 + "USB to SCSI Converter",
1966 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
1967 + US_FL_SCM_MULT_TARG ),
1968 +
1985 1969 /* Reported by Robert Schedel <r.schedel@yahoo.de>
1986 1970 * Note: this is a 'super top' device like the above 14cd/6600 device */
1987 1971 UNUSUAL_DEV( 0x1652, 0x6600, 0x0201, 0x0201,
··· 2011 1979 "PMP400",
2012 1980 USB_SC_DEVICE, USB_PR_DEVICE, NULL,
2013 1981 US_FL_BULK_IGNORE_TAG | US_FL_MAX_SECTORS_64 ),
1982 +
1983 + UNUSUAL_DEV( 0x1822, 0x0001, 0x0000, 0x9999,
1984 + "Ariston Technologies",
1985 + "iConnect USB to SCSI adapter",
1986 + USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
1987 + US_FL_SCM_MULT_TARG ),
2014 1988
2015 1989 /* Reported by Hans de Goede <hdegoede@redhat.com>
2016 1990 * These Appotech controllers are found in Picture Frames, they provide a
+9 -4
drivers/uwb/lc-dev.c
··· 431 431 uwb_dev->mac_addr = *bce->mac_addr; 432 432 uwb_dev->dev_addr = bce->dev_addr; 433 433 dev_set_name(&uwb_dev->dev, "%s", macbuf); 434 + 435 + /* plug the beacon cache */ 436 + bce->uwb_dev = uwb_dev; 437 + uwb_dev->bce = bce; 438 + uwb_bce_get(bce); /* released in uwb_dev_sys_release() */ 439 + 434 440 result = uwb_dev_add(uwb_dev, &rc->uwb_dev.dev, rc); 435 441 if (result < 0) { 436 442 dev_err(dev, "new device %s: cannot instantiate device\n", 437 443 macbuf); 438 444 goto error_dev_add; 439 445 } 440 - /* plug the beacon cache */ 441 - bce->uwb_dev = uwb_dev; 442 - uwb_dev->bce = bce; 443 - uwb_bce_get(bce); /* released in uwb_dev_sys_release() */ 446 + 444 447 dev_info(dev, "uwb device (mac %s dev %s) connected to %s %s\n", 445 448 macbuf, devbuf, rc->uwb_dev.dev.parent->bus->name, 446 449 dev_name(rc->uwb_dev.dev.parent)); ··· 451 448 return; 452 449 453 450 error_dev_add: 451 + bce->uwb_dev = NULL; 452 + uwb_bce_put(bce); 454 453 kfree(uwb_dev); 455 454 return; 456 455 }
+1 -3
drivers/video/fbdev/amba-clcd.c
··· 639 639 if (g0 != panels[i].g0) 640 640 continue; 641 641 if (r0 == panels[i].r0 && b0 == panels[i].b0) 642 - fb->panel->caps = panels[i].caps & CLCD_CAP_RGB; 643 - if (r0 == panels[i].b0 && b0 == panels[i].r0) 644 - fb->panel->caps = panels[i].caps & CLCD_CAP_BGR; 642 + fb->panel->caps = panels[i].caps; 645 643 } 646 644 647 645 return fb->panel->caps ? 0 : -EINVAL;
+2 -2
drivers/xen/balloon.c
··· 230 230 rc = add_memory(nid, hotplug_start_paddr, balloon_hotplug << PAGE_SHIFT); 231 231 232 232 if (rc) { 233 - pr_info("%s: add_memory() failed: %i\n", __func__, rc); 234 - return BP_EAGAIN; 233 + pr_warn("Cannot add additional memory (%i)\n", rc); 234 + return BP_ECANCELED; 235 235 } 236 236 237 237 balloon_hotplug -= credit;
+7 -9
drivers/xen/gntalloc.c
··· 124 124 int i, rc, readonly; 125 125 LIST_HEAD(queue_gref); 126 126 LIST_HEAD(queue_file); 127 - struct gntalloc_gref *gref; 127 + struct gntalloc_gref *gref, *next; 128 128 129 129 readonly = !(op->flags & GNTALLOC_FLAG_WRITABLE); 130 130 rc = -ENOMEM; ··· 141 141 goto undo; 142 142 143 143 /* Grant foreign access to the page. */ 144 - gref->gref_id = gnttab_grant_foreign_access(op->domid, 144 + rc = gnttab_grant_foreign_access(op->domid, 145 145 pfn_to_mfn(page_to_pfn(gref->page)), readonly); 146 - if ((int)gref->gref_id < 0) { 147 - rc = gref->gref_id; 146 + if (rc < 0) 148 147 goto undo; 149 - } 150 - gref_ids[i] = gref->gref_id; 148 + gref_ids[i] = gref->gref_id = rc; 151 149 } 152 150 153 151 /* Add to gref lists. */ ··· 160 162 mutex_lock(&gref_mutex); 161 163 gref_size -= (op->count - i); 162 164 163 - list_for_each_entry(gref, &queue_file, next_file) { 164 - /* __del_gref does not remove from queue_file */ 165 + list_for_each_entry_safe(gref, next, &queue_file, next_file) { 166 + list_del(&gref->next_file); 165 167 __del_gref(gref); 166 168 } 167 169 ··· 191 193 192 194 gref->notify.flags = 0; 193 195 194 - if (gref->gref_id > 0) { 196 + if (gref->gref_id) { 195 197 if (gnttab_query_foreign_access(gref->gref_id)) 196 198 return; 197 199
-7
drivers/xen/manage.c
··· 103 103 104 104 shutting_down = SHUTDOWN_SUSPEND; 105 105 106 - #ifdef CONFIG_PREEMPT 107 - /* If the kernel is preemptible, we need to freeze all the processes 108 - to prevent them from being in the middle of a pagetable update 109 - during suspend. */ 110 106 err = freeze_processes(); 111 107 if (err) { 112 108 pr_err("%s: freeze failed %d\n", __func__, err); 113 109 goto out; 114 110 } 115 - #endif 116 111 117 112 err = dpm_suspend_start(PMSG_FREEZE); 118 113 if (err) { ··· 152 157 dpm_resume_end(si.cancelled ? PMSG_THAW : PMSG_RESTORE); 153 158 154 159 out_thaw: 155 - #ifdef CONFIG_PREEMPT 156 160 thaw_processes(); 157 161 out: 158 - #endif 159 162 shutting_down = SHUTDOWN_INVALID; 160 163 } 161 164 #endif /* CONFIG_HIBERNATE_CALLBACKS */
+11 -2
fs/btrfs/btrfs_inode.h
··· 234 234 BTRFS_I(inode)->last_sub_trans <= 235 235 BTRFS_I(inode)->last_log_commit && 236 236 BTRFS_I(inode)->last_sub_trans <= 237 - BTRFS_I(inode)->root->last_log_commit) 238 - return 1; 237 + BTRFS_I(inode)->root->last_log_commit) { 238 + /* 239 + * After a ranged fsync we might have left some extent maps 240 + * (that fall outside the fsync's range). So return false 241 + * here if the list isn't empty, to make sure btrfs_log_inode() 242 + * will be called and process those extent maps. 243 + */ 244 + smp_mb(); 245 + if (list_empty(&BTRFS_I(inode)->extent_tree.modified_extents)) 246 + return 1; 247 + } 239 248 return 0; 240 249 } 241 250
+1 -1
fs/btrfs/file.c
··· 1966 1966 1967 1967 btrfs_init_log_ctx(&ctx); 1968 1968 1969 - ret = btrfs_log_dentry_safe(trans, root, dentry, &ctx); 1969 + ret = btrfs_log_dentry_safe(trans, root, dentry, start, end, &ctx); 1970 1970 if (ret < 0) { 1971 1971 /* Fallthrough and commit/free transaction. */ 1972 1972 ret = 1;
+120 -71
fs/btrfs/inode.c
··· 778 778 ins.offset,
779 779 BTRFS_ORDERED_COMPRESSED,
780 780 async_extent->compress_type);
781 - if (ret)
781 + if (ret) {
782 + btrfs_drop_extent_cache(inode, async_extent->start,
783 + async_extent->start +
784 + async_extent->ram_size - 1, 0);
782 785 goto out_free_reserve;
786 + }
783 787
784 788 /*
785 789 * clear dirty, set writeback and unlock the pages.
··· 975 971 ret = btrfs_add_ordered_extent(inode, start, ins.objectid,
976 972 ram_size, cur_alloc_size, 0);
977 973 if (ret)
978 - goto out_reserve;
974 + goto out_drop_extent_cache;
979 975
980 976 if (root->root_key.objectid ==
981 977 BTRFS_DATA_RELOC_TREE_OBJECTID) {
982 978 ret = btrfs_reloc_clone_csums(inode, start,
983 979 cur_alloc_size);
984 980 if (ret)
985 - goto out_reserve;
981 + goto out_drop_extent_cache;
986 982 }
987 983
988 984 if (disk_num_bytes < cur_alloc_size)
··· 1010 1006 out:
1011 1007 return ret;
1012 1008
1009 + out_drop_extent_cache:
1010 + btrfs_drop_extent_cache(inode, start, start + ram_size - 1, 0);
1013 1011 out_reserve:
1014 1012 btrfs_free_reserved_extent(root, ins.objectid, ins.offset, 1);
1015 1013 out_unlock:
··· 4248 4242 btrfs_abort_transaction(trans, root, ret);
4249 4243 }
4250 4244 error:
4251 - if (last_size != (u64)-1)
4245 + if (last_size != (u64)-1 &&
4246 + root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID)
4252 4247 btrfs_ordered_update_i_size(inode, last_size, NULL);
4253 4248 btrfs_free_path(path);
4254 4249 return err;
··· 5634 5627 return ret;
5635 5628 }
5636 5629
5630 + static int btrfs_insert_inode_locked(struct inode *inode)
5631 + {
5632 + struct btrfs_iget_args args;
5633 + args.location = &BTRFS_I(inode)->location;
5634 + args.root = BTRFS_I(inode)->root;
5635 +
5636 + return insert_inode_locked4(inode,
5637 + btrfs_inode_hash(inode->i_ino, BTRFS_I(inode)->root),
5638 + btrfs_find_actor, &args);
5639 + }
5640 +
5637 5641 static struct inode *btrfs_new_inode(struct btrfs_trans_handle *trans,
5638 5642 struct btrfs_root *root,
5639 5643 struct inode *dir,
··· 5737 5719 sizes[1] = name_len + sizeof(*ref);
5738 5720 }
5739 5721
5722 + location = &BTRFS_I(inode)->location;
5723 + location->objectid = objectid;
5724 + location->offset = 0;
5725 + btrfs_set_key_type(location, BTRFS_INODE_ITEM_KEY);
5726 +
5727 + ret = btrfs_insert_inode_locked(inode);
5728 + if (ret < 0)
5729 + goto fail;
5730 +
5740 5731 path->leave_spinning = 1;
5741 5732 ret = btrfs_insert_empty_items(trans, root, path, key, sizes, nitems);
5742 5733 if (ret != 0)
5743 - goto fail;
5734 + goto fail_unlock;
5744 5735
5745 5736 inode_init_owner(inode, dir, mode);
5746 5737 inode_set_bytes(inode, 0);
··· 5772 5745 btrfs_mark_buffer_dirty(path->nodes[0]);
5773 5746 btrfs_free_path(path);
5774 5747
5775 - location = &BTRFS_I(inode)->location;
5776 - location->objectid = objectid;
5777 - location->offset = 0;
5778 - btrfs_set_key_type(location, BTRFS_INODE_ITEM_KEY);
5779 -
5780 5748 btrfs_inherit_iflags(inode, dir);
5781 5749
5782 5750 if (S_ISREG(mode)) {
··· 5782 5760 BTRFS_INODE_NODATASUM;
5783 5761 }
5784 5762
5785 - btrfs_insert_inode_hash(inode);
5786 5763 inode_tree_add(inode);
5787 5764
5788 5765 trace_btrfs_inode_new(inode);
··· 5796 5775 btrfs_ino(inode), root->root_key.objectid, ret);
5797 5776
5798 5777 return inode;
5778 +
5779 + fail_unlock:
5780 + unlock_new_inode(inode);
5799 5781 fail:
5800 5782 if (dir && name)
5801 5783 BTRFS_I(dir)->index_cnt--;
··· 5933 5909 goto out_unlock;
5934 5910 }
5935 5911
5936 - err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
5937 - if (err) {
5938 - drop_inode = 1;
5939 - goto out_unlock;
5940 - }
5941 -
5942 5912 /*
5943 5913 * If the active LSM wants to access the inode during
5944 5914 * d_instantiate it needs these. Smack checks to see
5945 5915 * if the filesystem supports xattrs by looking at the
5946 5916 * ops vector.
5947 5917 */
5948 -
5949 5918 inode->i_op = &btrfs_special_inode_operations;
5950 - err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index);
5919 + init_special_inode(inode, inode->i_mode, rdev);
5920 +
5921 + err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
5951 5922 if (err)
5952 - drop_inode = 1;
5953 - else {
5954 - init_special_inode(inode, inode->i_mode, rdev);
5923 + goto out_unlock_inode;
5924 +
5925 + err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index);
5926 + if (err) {
5927 + goto out_unlock_inode;
5928 + } else {
5955 5929 btrfs_update_inode(trans, root, inode);
5930 + unlock_new_inode(inode);
5956 5931 d_instantiate(dentry, inode);
5957 5932 }
5933 +
5958 5934 out_unlock:
5959 5935 btrfs_end_transaction(trans, root);
5960 5936 btrfs_balance_delayed_items(root);
··· 5964 5940 iput(inode);
5965 5941 }
5966 5942 return err;
5943 +
5944 + out_unlock_inode:
5945 + drop_inode = 1;
5946 + unlock_new_inode(inode);
5947 + goto out_unlock;
5948 +
5967 5949 }
5968 5950
5969 5951 static int btrfs_create(struct inode *dir, struct dentry *dentry,
··· 6004 5974 goto out_unlock;
6005 5975 }
6006 5976 drop_inode_on_err = 1;
6007 -
6008 - err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
6009 - if (err)
6010 - goto out_unlock;
6011 -
6012 - err = btrfs_update_inode(trans, root, inode);
6013 - if (err)
6014 - goto out_unlock;
6015 -
6016 5977 /*
6017 5978 * If the active LSM wants to access the inode during
6018 5979 * d_instantiate it needs these. Smack checks to see
··· 6012 5991 */
6013 5992 inode->i_fop = &btrfs_file_operations;
6014 5993 inode->i_op = &btrfs_file_inode_operations;
5994 + inode->i_mapping->a_ops = &btrfs_aops;
5995 + inode->i_mapping->backing_dev_info = &root->fs_info->bdi;
5996 +
5997 + err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
5998 + if (err)
5999 + goto out_unlock_inode;
6000 +
6001 + err = btrfs_update_inode(trans, root, inode);
6002 + if (err)
6003 + goto out_unlock_inode;
6015 6004
6016 6005 err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index);
6017 6006 if (err)
6018 - goto out_unlock;
6007 + goto out_unlock_inode;
6019 6008
6020 - inode->i_mapping->a_ops = &btrfs_aops;
6021 - inode->i_mapping->backing_dev_info = &root->fs_info->bdi;
6022 6009 BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops;
6010 + unlock_new_inode(inode);
6023 6011 d_instantiate(dentry, inode);
6024 6012
6025 6013 out_unlock:
··· 6040 6010 btrfs_balance_delayed_items(root);
6041 6011 btrfs_btree_balance_dirty(root);
6042 6012 return err;
6013 +
6014 + out_unlock_inode:
6015 + unlock_new_inode(inode);
6016 + goto out_unlock;
6017 +
6043 6018 }
6044 6019
6045 6020 static int btrfs_link(struct dentry *old_dentry, struct inode *dir,
··· 6152 6117 }
6153 6118
6154 6119 drop_on_err = 1;
6120 + /* these must be set before we unlock the inode */
6121 + inode->i_op = &btrfs_dir_inode_operations;
6122 + inode->i_fop = &btrfs_dir_file_operations;
6155 6123
6156 6124 err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
6157 6125 if (err)
6158 - goto out_fail;
6159 -
6160 - inode->i_op = &btrfs_dir_inode_operations;
6161 - inode->i_fop = &btrfs_dir_file_operations;
6126 + goto out_fail_inode;
6162 6127
6163 6128 btrfs_i_size_write(inode, 0);
6164 6129 err = btrfs_update_inode(trans, root, inode);
6165 6130 if (err)
6166 - goto out_fail;
6131 + goto out_fail_inode;
6167 6132
6168 6133 err = btrfs_add_link(trans, dir, inode, dentry->d_name.name,
6169 6134 dentry->d_name.len, 0, index);
6170 6135 if (err)
6171 - goto out_fail;
6136 + goto out_fail_inode;
6172 6137
6173 6138 d_instantiate(dentry, inode);
6139 + /*
6140 + * mkdir is special. We're unlocking after we call d_instantiate
6141 + * to avoid a race with nfsd calling d_instantiate.
6142 + */
6143 + unlock_new_inode(inode);
6174 6144 drop_on_err = 0;
6175 6145
6176 6146 out_fail:
··· 6185 6145 btrfs_balance_delayed_items(root);
6186 6146 btrfs_btree_balance_dirty(root);
6187 6147 return err;
6148 +
6149 + out_fail_inode:
6150 + unlock_new_inode(inode);
6151 + goto out_fail;
6188 6152 }
6189 6153
6190 6154 /* helper for btfs_get_extent. Given an existing extent in the tree,
··· 8144 8100
8145 8101 set_nlink(inode, 1);
8146 8102 btrfs_i_size_write(inode, 0);
8103 + unlock_new_inode(inode);
8147 8104
8148 8105 err = btrfs_subvol_inherit_props(trans, new_root, parent_root);
8149 8106 if (err)
··· 8805 8760 goto out_unlock;
8806 8761 }
8807 8762
8808 - err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name);
8809 - if (err) {
8810 - drop_inode = 1;
8811 - goto out_unlock;
8812 - }
8813 -
8814 8763 /*
8815 8764 * If the active LSM wants to access the inode during
8816 8765 * d_instantiate it needs these. 
Smack checks to see ··· 8813 8774 */ 8814 8775 inode->i_fop = &btrfs_file_operations; 8815 8776 inode->i_op = &btrfs_file_inode_operations; 8777 + inode->i_mapping->a_ops = &btrfs_aops; 8778 + inode->i_mapping->backing_dev_info = &root->fs_info->bdi; 8779 + BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops; 8780 + 8781 + err = btrfs_init_inode_security(trans, inode, dir, &dentry->d_name); 8782 + if (err) 8783 + goto out_unlock_inode; 8816 8784 8817 8785 err = btrfs_add_nondir(trans, dir, dentry, inode, 0, index); 8818 8786 if (err) 8819 - drop_inode = 1; 8820 - else { 8821 - inode->i_mapping->a_ops = &btrfs_aops; 8822 - inode->i_mapping->backing_dev_info = &root->fs_info->bdi; 8823 - BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops; 8824 - } 8825 - if (drop_inode) 8826 - goto out_unlock; 8787 + goto out_unlock_inode; 8827 8788 8828 8789 path = btrfs_alloc_path(); 8829 8790 if (!path) { 8830 8791 err = -ENOMEM; 8831 - drop_inode = 1; 8832 - goto out_unlock; 8792 + goto out_unlock_inode; 8833 8793 } 8834 8794 key.objectid = btrfs_ino(inode); 8835 8795 key.offset = 0; ··· 8837 8799 err = btrfs_insert_empty_item(trans, root, path, &key, 8838 8800 datasize); 8839 8801 if (err) { 8840 - drop_inode = 1; 8841 8802 btrfs_free_path(path); 8842 - goto out_unlock; 8803 + goto out_unlock_inode; 8843 8804 } 8844 8805 leaf = path->nodes[0]; 8845 8806 ei = btrfs_item_ptr(leaf, path->slots[0], ··· 8862 8825 inode_set_bytes(inode, name_len); 8863 8826 btrfs_i_size_write(inode, name_len); 8864 8827 err = btrfs_update_inode(trans, root, inode); 8865 - if (err) 8828 + if (err) { 8866 8829 drop_inode = 1; 8830 + goto out_unlock_inode; 8831 + } 8832 + 8833 + unlock_new_inode(inode); 8834 + d_instantiate(dentry, inode); 8867 8835 8868 8836 out_unlock: 8869 - if (!err) 8870 - d_instantiate(dentry, inode); 8871 8837 btrfs_end_transaction(trans, root); 8872 8838 if (drop_inode) { 8873 8839 inode_dec_link_count(inode); ··· 8878 8838 } 8879 8839 btrfs_btree_balance_dirty(root); 8880 8840 
return err; 8841 + 8842 + out_unlock_inode: 8843 + drop_inode = 1; 8844 + unlock_new_inode(inode); 8845 + goto out_unlock; 8881 8846 } 8882 8847 8883 8848 static int __btrfs_prealloc_file_range(struct inode *inode, int mode, ··· 9066 9021 goto out; 9067 9022 } 9068 9023 9069 - ret = btrfs_init_inode_security(trans, inode, dir, NULL); 9070 - if (ret) 9071 - goto out; 9072 - 9073 - ret = btrfs_update_inode(trans, root, inode); 9074 - if (ret) 9075 - goto out; 9076 - 9077 9024 inode->i_fop = &btrfs_file_operations; 9078 9025 inode->i_op = &btrfs_file_inode_operations; 9079 9026 ··· 9073 9036 inode->i_mapping->backing_dev_info = &root->fs_info->bdi; 9074 9037 BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops; 9075 9038 9039 + ret = btrfs_init_inode_security(trans, inode, dir, NULL); 9040 + if (ret) 9041 + goto out_inode; 9042 + 9043 + ret = btrfs_update_inode(trans, root, inode); 9044 + if (ret) 9045 + goto out_inode; 9076 9046 ret = btrfs_orphan_add(trans, inode); 9077 9047 if (ret) 9078 - goto out; 9048 + goto out_inode; 9079 9049 9080 9050 /* 9081 9051 * We set number of links to 0 in btrfs_new_inode(), and here we set ··· 9092 9048 * d_tmpfile() -> inode_dec_link_count() -> drop_nlink() 9093 9049 */ 9094 9050 set_nlink(inode, 1); 9051 + unlock_new_inode(inode); 9095 9052 d_tmpfile(dentry, inode); 9096 9053 mark_inode_dirty(inode); 9097 9054 ··· 9102 9057 iput(inode); 9103 9058 btrfs_balance_delayed_items(root); 9104 9059 btrfs_btree_balance_dirty(root); 9105 - 9106 9060 return ret; 9061 + 9062 + out_inode: 9063 + unlock_new_inode(inode); 9064 + goto out; 9065 + 9107 9066 } 9108 9067 9109 9068 static const struct inode_operations btrfs_dir_inode_operations = {
+19 -13
fs/btrfs/ioctl.c
··· 1019 1019 return false; 1020 1020 1021 1021 next = defrag_lookup_extent(inode, em->start + em->len); 1022 - if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE || 1023 - (em->block_start + em->block_len == next->block_start)) 1022 + if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE) 1023 + ret = false; 1024 + else if ((em->block_start + em->block_len == next->block_start) && 1025 + (em->block_len > 128 * 1024 && next->block_len > 128 * 1024)) 1024 1026 ret = false; 1025 1027 1026 1028 free_extent_map(next); ··· 1057 1055 } 1058 1056 1059 1057 next_mergeable = defrag_check_next_extent(inode, em); 1060 - 1061 1058 /* 1062 1059 * we hit a real extent, if it is big or the next extent is not a 1063 1060 * real extent, don't bother defragging it ··· 1703 1702 ~(BTRFS_SUBVOL_CREATE_ASYNC | BTRFS_SUBVOL_RDONLY | 1704 1703 BTRFS_SUBVOL_QGROUP_INHERIT)) { 1705 1704 ret = -EOPNOTSUPP; 1706 - goto out; 1705 + goto free_args; 1707 1706 } 1708 1707 1709 1708 if (vol_args->flags & BTRFS_SUBVOL_CREATE_ASYNC) ··· 1713 1712 if (vol_args->flags & BTRFS_SUBVOL_QGROUP_INHERIT) { 1714 1713 if (vol_args->size > PAGE_CACHE_SIZE) { 1715 1714 ret = -EINVAL; 1716 - goto out; 1715 + goto free_args; 1717 1716 } 1718 1717 inherit = memdup_user(vol_args->qgroup_inherit, vol_args->size); 1719 1718 if (IS_ERR(inherit)) { 1720 1719 ret = PTR_ERR(inherit); 1721 - goto out; 1720 + goto free_args; 1722 1721 } 1723 1722 } 1724 1723 1725 1724 ret = btrfs_ioctl_snap_create_transid(file, vol_args->name, 1726 1725 vol_args->fd, subvol, ptr, 1727 1726 readonly, inherit); 1727 + if (ret) 1728 + goto free_inherit; 1728 1729 1729 - if (ret == 0 && ptr && 1730 - copy_to_user(arg + 1731 - offsetof(struct btrfs_ioctl_vol_args_v2, 1732 - transid), ptr, sizeof(*ptr))) 1730 + if (ptr && copy_to_user(arg + 1731 + offsetof(struct btrfs_ioctl_vol_args_v2, 1732 + transid), 1733 + ptr, sizeof(*ptr))) 1733 1734 ret = -EFAULT; 1734 - out: 1735 - kfree(vol_args); 1735 + 1736 + free_inherit: 1736 1737 
kfree(inherit); 1738 + free_args: 1739 + kfree(vol_args); 1737 1740 return ret; 1738 1741 } 1739 1742 ··· 2657 2652 vol_args = memdup_user(arg, sizeof(*vol_args)); 2658 2653 if (IS_ERR(vol_args)) { 2659 2654 ret = PTR_ERR(vol_args); 2660 - goto out; 2655 + goto err_drop; 2661 2656 } 2662 2657 2663 2658 vol_args->name[BTRFS_PATH_NAME_MAX] = '\0'; ··· 2675 2670 2676 2671 out: 2677 2672 kfree(vol_args); 2673 + err_drop: 2678 2674 mnt_drop_write_file(file); 2679 2675 return ret; 2680 2676 }
+49 -14
fs/btrfs/tree-log.c
··· 94 94 #define LOG_WALK_REPLAY_ALL 3 95 95 96 96 static int btrfs_log_inode(struct btrfs_trans_handle *trans, 97 - struct btrfs_root *root, struct inode *inode, 98 - int inode_only); 97 + struct btrfs_root *root, struct inode *inode, 98 + int inode_only, 99 + const loff_t start, 100 + const loff_t end); 99 101 static int link_to_fixup_dir(struct btrfs_trans_handle *trans, 100 102 struct btrfs_root *root, 101 103 struct btrfs_path *path, u64 objectid); ··· 3860 3858 * This handles both files and directories. 3861 3859 */ 3862 3860 static int btrfs_log_inode(struct btrfs_trans_handle *trans, 3863 - struct btrfs_root *root, struct inode *inode, 3864 - int inode_only) 3861 + struct btrfs_root *root, struct inode *inode, 3862 + int inode_only, 3863 + const loff_t start, 3864 + const loff_t end) 3865 3865 { 3866 3866 struct btrfs_path *path; 3867 3867 struct btrfs_path *dst_path; ··· 3880 3876 int ins_nr; 3881 3877 bool fast_search = false; 3882 3878 u64 ino = btrfs_ino(inode); 3879 + struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree; 3883 3880 3884 3881 path = btrfs_alloc_path(); 3885 3882 if (!path) ··· 4054 4049 goto out_unlock; 4055 4050 } 4056 4051 } else if (inode_only == LOG_INODE_ALL) { 4057 - struct extent_map_tree *tree = &BTRFS_I(inode)->extent_tree; 4058 4052 struct extent_map *em, *n; 4059 4053 4060 - write_lock(&tree->lock); 4061 - list_for_each_entry_safe(em, n, &tree->modified_extents, list) 4062 - list_del_init(&em->list); 4063 - write_unlock(&tree->lock); 4054 + write_lock(&em_tree->lock); 4055 + /* 4056 + * We can't just remove every em if we're called for a ranged 4057 + * fsync - that is, one that doesn't cover the whole possible 4058 + * file range (0 to LLONG_MAX). This is because we can have 4059 + * em's that fall outside the range we're logging and therefore 4060 + * their ordered operations haven't completed yet 4061 + * (btrfs_finish_ordered_io() not invoked yet). 
This means we 4062 + * didn't get their respective file extent item in the fs/subvol 4063 + * tree yet, and need to let the next fast fsync (one which 4064 + * consults the list of modified extent maps) find the em so 4065 + * that it logs a matching file extent item and waits for the 4066 + * respective ordered operation to complete (if it's still 4067 + * running). 4068 + * 4069 + * Removing every em outside the range we're logging would make 4070 + * the next fast fsync not log their matching file extent items, 4071 + * therefore making us lose data after a log replay. 4072 + */ 4073 + list_for_each_entry_safe(em, n, &em_tree->modified_extents, 4074 + list) { 4075 + const u64 mod_end = em->mod_start + em->mod_len - 1; 4076 + 4077 + if (em->mod_start >= start && mod_end <= end) 4078 + list_del_init(&em->list); 4079 + } 4080 + write_unlock(&em_tree->lock); 4064 4081 } 4065 4082 4066 4083 if (inode_only == LOG_INODE_ALL && S_ISDIR(inode->i_mode)) { ··· 4092 4065 goto out_unlock; 4093 4066 } 4094 4067 } 4068 + 4095 4069 BTRFS_I(inode)->logged_trans = trans->transid; 4096 4070 BTRFS_I(inode)->last_log_commit = BTRFS_I(inode)->last_sub_trans; 4097 4071 out_unlock: ··· 4189 4161 */ 4190 4162 static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans, 4191 4163 struct btrfs_root *root, struct inode *inode, 4192 - struct dentry *parent, int exists_only, 4164 + struct dentry *parent, 4165 + const loff_t start, 4166 + const loff_t end, 4167 + int exists_only, 4193 4168 struct btrfs_log_ctx *ctx) 4194 4169 { 4195 4170 int inode_only = exists_only ? 
LOG_INODE_EXISTS : LOG_INODE_ALL; ··· 4238 4207 if (ret) 4239 4208 goto end_no_trans; 4240 4209 4241 - ret = btrfs_log_inode(trans, root, inode, inode_only); 4210 + ret = btrfs_log_inode(trans, root, inode, inode_only, start, end); 4242 4211 if (ret) 4243 4212 goto end_trans; 4244 4213 ··· 4266 4235 4267 4236 if (BTRFS_I(inode)->generation > 4268 4237 root->fs_info->last_trans_committed) { 4269 - ret = btrfs_log_inode(trans, root, inode, inode_only); 4238 + ret = btrfs_log_inode(trans, root, inode, inode_only, 4239 + 0, LLONG_MAX); 4270 4240 if (ret) 4271 4241 goto end_trans; 4272 4242 } ··· 4301 4269 */ 4302 4270 int btrfs_log_dentry_safe(struct btrfs_trans_handle *trans, 4303 4271 struct btrfs_root *root, struct dentry *dentry, 4272 + const loff_t start, 4273 + const loff_t end, 4304 4274 struct btrfs_log_ctx *ctx) 4305 4275 { 4306 4276 struct dentry *parent = dget_parent(dentry); 4307 4277 int ret; 4308 4278 4309 4279 ret = btrfs_log_inode_parent(trans, root, dentry->d_inode, parent, 4310 - 0, ctx); 4280 + start, end, 0, ctx); 4311 4281 dput(parent); 4312 4282 4313 4283 return ret; ··· 4546 4512 root->fs_info->last_trans_committed)) 4547 4513 return 0; 4548 4514 4549 - return btrfs_log_inode_parent(trans, root, inode, parent, 1, NULL); 4515 + return btrfs_log_inode_parent(trans, root, inode, parent, 0, 4516 + LLONG_MAX, 1, NULL); 4550 4517 } 4551 4518
+2
fs/btrfs/tree-log.h
··· 59 59 int btrfs_recover_log_trees(struct btrfs_root *tree_root); 60 60 int btrfs_log_dentry_safe(struct btrfs_trans_handle *trans, 61 61 struct btrfs_root *root, struct dentry *dentry, 62 + const loff_t start, 63 + const loff_t end, 62 64 struct btrfs_log_ctx *ctx); 63 65 int btrfs_del_dir_entries_in_log(struct btrfs_trans_handle *trans, 64 66 struct btrfs_root *root,
+6 -7
fs/btrfs/volumes.c
··· 529 529 */ 530 530 531 531 /* 532 - * As of now don't allow update to btrfs_fs_device through 533 - * the btrfs dev scan cli, after FS has been mounted. 532 + * For now, we do allow update to btrfs_fs_device through the 533 + * btrfs dev scan cli after FS has been mounted. We're still 534 + * tracking a problem where systems fail mount by subvolume id 535 + * when we reject replacement on a mounted FS. 534 536 */ 535 - if (fs_devices->opened) { 536 - return -EBUSY; 537 - } else { 537 + if (!fs_devices->opened && found_transid < device->generation) { 538 538 /* 539 539 * That is if the FS is _not_ mounted and if you 540 540 * are here, that means there is more than one ··· 542 542 * with larger generation number or the last-in if 543 543 * generation are equal. 544 544 */ 545 - if (found_transid < device->generation) 546 - return -EEXIST; 545 + return -EEXIST; 547 546 } 548 547 549 548 name = rcu_string_strdup(path, GFP_NOFS);
+4 -2
fs/buffer.c
··· 1022 1022 bh = page_buffers(page); 1023 1023 if (bh->b_size == size) { 1024 1024 end_block = init_page_buffers(page, bdev, 1025 - index << sizebits, size); 1025 + (sector_t)index << sizebits, 1026 + size); 1026 1027 goto done; 1027 1028 } 1028 1029 if (!try_to_free_buffers(page)) ··· 1044 1043 */ 1045 1044 spin_lock(&inode->i_mapping->private_lock); 1046 1045 link_dev_buffers(page, bh); 1047 - end_block = init_page_buffers(page, bdev, index << sizebits, size); 1046 + end_block = init_page_buffers(page, bdev, (sector_t)index << sizebits, 1047 + size); 1048 1048 spin_unlock(&inode->i_mapping->private_lock); 1049 1049 done: 1050 1050 ret = (block < end_block) ? 1 : -ENXIO;
+2 -1
fs/cachefiles/namei.c
··· 779 779 !subdir->d_inode->i_op->lookup || 780 780 !subdir->d_inode->i_op->mkdir || 781 781 !subdir->d_inode->i_op->create || 782 - !subdir->d_inode->i_op->rename || 782 + (!subdir->d_inode->i_op->rename && 783 + !subdir->d_inode->i_op->rename2) || 783 784 !subdir->d_inode->i_op->rmdir || 784 785 !subdir->d_inode->i_op->unlink) 785 786 goto check_error;
-6
fs/cachefiles/rdwr.c
··· 151 151 struct cachefiles_one_read *monitor; 152 152 struct cachefiles_object *object; 153 153 struct fscache_retrieval *op; 154 - struct pagevec pagevec; 155 154 int error, max; 156 155 157 156 op = container_of(_op, struct fscache_retrieval, op); ··· 158 159 struct cachefiles_object, fscache); 159 160 160 161 _enter("{ino=%lu}", object->backer->d_inode->i_ino); 161 - 162 - pagevec_init(&pagevec, 0); 163 162 164 163 max = 8; 165 164 spin_lock_irq(&object->work_lock); ··· 393 396 { 394 397 struct cachefiles_object *object; 395 398 struct cachefiles_cache *cache; 396 - struct pagevec pagevec; 397 399 struct inode *inode; 398 400 sector_t block0, block; 399 401 unsigned shift; ··· 422 426 op->op.flags &= FSCACHE_OP_KEEP_FLAGS; 423 427 op->op.flags |= FSCACHE_OP_ASYNC; 424 428 op->op.processor = cachefiles_read_copier; 425 - 426 - pagevec_init(&pagevec, 0); 427 429 428 430 /* we assume the absence or presence of the first block is a good 429 431 * enough indication for the page as a whole
+23 -12
fs/cifs/Kconfig
··· 22 22 support for OS/2 and Windows ME and similar servers is provided as 23 23 well. 24 24 25 + The module also provides optional support for the followon 26 + protocols for CIFS including SMB3, which enables 27 + useful performance and security features (see the description 28 + of CONFIG_CIFS_SMB2). 29 + 25 30 The cifs module provides an advanced network file system 26 31 client for mounting to CIFS compliant servers. It includes 27 32 support for DFS (hierarchical name space), secure per-user ··· 126 121 depends on CIFS_XATTR && KEYS 127 122 help 128 123 Allows fetching CIFS/NTFS ACL from the server. The DACL blob 129 - is handed over to the application/caller. 124 + is handed over to the application/caller. See the man 125 + page for getcifsacl for more information. 130 126 131 127 config CIFS_DEBUG 132 128 bool "Enable CIFS debugging routines" ··· 168 162 Allows NFS server to export a CIFS mounted share (nfsd over cifs) 169 163 170 164 config CIFS_SMB2 171 - bool "SMB2 network file system support" 165 + bool "SMB2 and SMB3 network file system support" 172 166 depends on CIFS && INET 173 167 select NLS 174 168 select KEYS ··· 176 170 select DNS_RESOLVER 177 171 178 172 help 179 - This enables experimental support for the SMB2 (Server Message Block 180 - version 2) protocol. The SMB2 protocol is the successor to the 181 - popular CIFS and SMB network file sharing protocols. SMB2 is the 182 - native file sharing mechanism for recent versions of Windows 183 - operating systems (since Vista). SMB2 enablement will eventually 184 - allow users better performance, security and features, than would be 185 - possible with cifs. Note that smb2 mount options also are simpler 186 - (compared to cifs) due to protocol improvements. 187 - 188 - Unless you are a developer or tester, say N. 173 + This enables support for the Server Message Block version 2 174 + family of protocols, including SMB3. 
SMB3 support is 175 + enabled on mount by specifying "vers=3.0" in the mount 176 + options. These protocols are the successors to the popular 177 + CIFS and SMB network file sharing protocols. SMB3 is the 178 + native file sharing mechanism for the more recent 179 + versions of Windows (Windows 8 and Windows 2012 and 180 + later) and Samba server and many others support SMB3 well. 181 + In general SMB3 enables better performance, security 182 + and features, than would be possible with CIFS (Note that 183 + when mounting to Samba, due to the CIFS POSIX extensions, 184 + CIFS mounts can provide slightly better POSIX compatibility 185 + than SMB3 mounts do though). Note that SMB2/SMB3 mount 186 + options are also slightly simpler (compared to CIFS) due 187 + to protocol improvements. 189 188 190 189 config CIFS_FSCACHE 191 190 bool "Provide CIFS client caching support"
+1 -1
fs/cifs/cifsfs.h
··· 136 136 extern const struct export_operations cifs_export_ops; 137 137 #endif /* CONFIG_CIFS_NFSD_EXPORT */ 138 138 139 - #define CIFS_VERSION "2.04" 139 + #define CIFS_VERSION "2.05" 140 140 #endif /* _CIFSFS_H */
-5
fs/cifs/cifsglob.h
··· 70 70 #define SERVER_NAME_LENGTH 40 71 71 #define SERVER_NAME_LEN_WITH_NULL (SERVER_NAME_LENGTH + 1) 72 72 73 - /* used to define string lengths for reversing unicode strings */ 74 - /* (256+1)*2 = 514 */ 75 - /* (max path length + 1 for null) * 2 for unicode */ 76 - #define MAX_NAME 514 77 - 78 73 /* SMB echo "timeout" -- FIXME: tunable? */ 79 74 #define SMB_ECHO_INTERVAL (60 * HZ) 80 75
+2
fs/cifs/connect.c
··· 1600 1600 tmp_end++; 1601 1601 if (!(tmp_end < end && tmp_end[1] == delim)) { 1602 1602 /* No it is not. Set the password to NULL */ 1603 + kfree(vol->password); 1603 1604 vol->password = NULL; 1604 1605 break; 1605 1606 } ··· 1638 1637 options = end; 1639 1638 } 1640 1639 1640 + kfree(vol->password); 1641 1641 /* Now build new password string */ 1642 1642 temp_len = strlen(value); 1643 1643 vol->password = kzalloc(temp_len+1, GFP_KERNEL);
+8
fs/cifs/dir.c
··· 497 497 goto out; 498 498 } 499 499 500 + if (file->f_flags & O_DIRECT && 501 + CIFS_SB(inode->i_sb)->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) { 502 + if (CIFS_SB(inode->i_sb)->mnt_cifs_flags & CIFS_MOUNT_NO_BRL) 503 + file->f_op = &cifs_file_direct_nobrl_ops; 504 + else 505 + file->f_op = &cifs_file_direct_ops; 506 + } 507 + 500 508 file_info = cifs_new_fileinfo(&fid, file, tlink, oplock); 501 509 if (file_info == NULL) { 502 510 if (server->ops->close)
+8
fs/cifs/file.c
··· 467 467 cifs_dbg(FYI, "inode = 0x%p file flags are 0x%x for %s\n", 468 468 inode, file->f_flags, full_path); 469 469 470 + if (file->f_flags & O_DIRECT && 471 + cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) { 472 + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_BRL) 473 + file->f_op = &cifs_file_direct_nobrl_ops; 474 + else 475 + file->f_op = &cifs_file_direct_ops; 476 + } 477 + 470 478 if (server->oplocks) 471 479 oplock = REQ_OPLOCK; 472 480 else
+4 -1
fs/cifs/inode.c
··· 1720 1720 unlink_target: 1721 1721 /* Try unlinking the target dentry if it's not negative */ 1722 1722 if (target_dentry->d_inode && (rc == -EACCES || rc == -EEXIST)) { 1723 - tmprc = cifs_unlink(target_dir, target_dentry); 1723 + if (d_is_dir(target_dentry)) 1724 + tmprc = cifs_rmdir(target_dir, target_dentry); 1725 + else 1726 + tmprc = cifs_unlink(target_dir, target_dentry); 1724 1727 if (tmprc) 1725 1728 goto cifs_rename_exit; 1726 1729 rc = cifs_do_rename(xid, source_dentry, from_name,
+9 -3
fs/cifs/link.c
··· 213 213 if (rc) 214 214 goto out; 215 215 216 - rc = tcon->ses->server->ops->create_mf_symlink(xid, tcon, cifs_sb, 217 - fromName, buf, &bytes_written); 216 + if (tcon->ses->server->ops->create_mf_symlink) 217 + rc = tcon->ses->server->ops->create_mf_symlink(xid, tcon, 218 + cifs_sb, fromName, buf, &bytes_written); 219 + else 220 + rc = -EOPNOTSUPP; 221 + 218 222 if (rc) 219 223 goto out; 220 224 ··· 343 339 if (rc) 344 340 return rc; 345 341 346 - if (file_info.EndOfFile != cpu_to_le64(CIFS_MF_SYMLINK_FILE_SIZE)) 342 + if (file_info.EndOfFile != cpu_to_le64(CIFS_MF_SYMLINK_FILE_SIZE)) { 343 + rc = -ENOENT; 347 344 /* it's not a symlink */ 348 345 goto out; 346 + } 349 347 350 348 io_parms.netfid = fid.netfid; 351 349 io_parms.pid = current->tgid;
+16 -4
fs/cifs/netmisc.c
··· 925 925 /* BB what about the timezone? BB */ 926 926 927 927 /* Subtract the NTFS time offset, then convert to 1s intervals. */ 928 - u64 t; 928 + s64 t = le64_to_cpu(ntutc) - NTFS_TIME_OFFSET; 929 929 930 - t = le64_to_cpu(ntutc) - NTFS_TIME_OFFSET; 931 - ts.tv_nsec = do_div(t, 10000000) * 100; 932 - ts.tv_sec = t; 930 + /* 931 + * Unfortunately can not use normal 64 bit division on 32 bit arch, but 932 + * the alternative, do_div, does not work with negative numbers so have 933 + * to special case them 934 + */ 935 + if (t < 0) { 936 + t = -t; 937 + ts.tv_nsec = (long)(do_div(t, 10000000) * 100); 938 + ts.tv_nsec = -ts.tv_nsec; 939 + ts.tv_sec = -t; 940 + } else { 941 + ts.tv_nsec = (long)do_div(t, 10000000) * 100; 942 + ts.tv_sec = t; 943 + } 944 + 933 945 return ts; 934 946 } 935 947
+2 -2
fs/cifs/readdir.c
··· 596 596 if (server->ops->dir_needs_close(cfile)) { 597 597 cfile->invalidHandle = true; 598 598 spin_unlock(&cifs_file_list_lock); 599 - if (server->ops->close) 600 - server->ops->close(xid, tcon, &cfile->fid); 599 + if (server->ops->close_dir) 600 + server->ops->close_dir(xid, tcon, &cfile->fid); 601 601 } else 602 602 spin_unlock(&cifs_file_list_lock); 603 603 if (cfile->srch_inf.ntwrk_buf_start) {
+4 -20
fs/cifs/sess.c
··· 243 243 kfree(ses->serverOS); 244 244 245 245 ses->serverOS = kzalloc(len + 1, GFP_KERNEL); 246 - if (ses->serverOS) 246 + if (ses->serverOS) { 247 247 strncpy(ses->serverOS, bcc_ptr, len); 248 - if (strncmp(ses->serverOS, "OS/2", 4) == 0) 249 - cifs_dbg(FYI, "OS/2 server\n"); 248 + if (strncmp(ses->serverOS, "OS/2", 4) == 0) 249 + cifs_dbg(FYI, "OS/2 server\n"); 250 + } 250 251 251 252 bcc_ptr += len + 1; 252 253 bleft -= len + 1; ··· 745 744 sess_free_buffer(sess_data); 746 745 } 747 746 748 - #else 749 - 750 - static void 751 - sess_auth_lanman(struct sess_data *sess_data) 752 - { 753 - sess_data->result = -EOPNOTSUPP; 754 - sess_data->func = NULL; 755 - } 756 747 #endif 757 748 758 749 static void ··· 1095 1102 ses->auth_key.response = NULL; 1096 1103 } 1097 1104 1098 - #else 1099 - 1100 - static void 1101 - sess_auth_kerberos(struct sess_data *sess_data) 1102 - { 1103 - cifs_dbg(VFS, "Kerberos negotiated but upcall support disabled!\n"); 1104 - sess_data->result = -ENOSYS; 1105 - sess_data->func = NULL; 1106 - } 1107 1105 #endif /* ! CONFIG_CIFS_UPCALL */ 1108 1106 1109 1107 /*
+1 -1
fs/cifs/smb2file.c
··· 50 50 goto out; 51 51 } 52 52 53 - smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + MAX_NAME * 2, 53 + smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2, 54 54 GFP_KERNEL); 55 55 if (smb2_data == NULL) { 56 56 rc = -ENOMEM;
+1 -1
fs/cifs/smb2inode.c
··· 131 131 *adjust_tz = false; 132 132 *symlink = false; 133 133 134 - smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + MAX_NAME * 2, 134 + smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2, 135 135 GFP_KERNEL); 136 136 if (smb2_data == NULL) 137 137 return -ENOMEM;
+2 -2
fs/cifs/smb2ops.c
··· 389 389 int rc; 390 390 struct smb2_file_all_info *smb2_data; 391 391 392 - smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + MAX_NAME * 2, 392 + smb2_data = kzalloc(sizeof(struct smb2_file_all_info) + PATH_MAX * 2, 393 393 GFP_KERNEL); 394 394 if (smb2_data == NULL) 395 395 return -ENOMEM; ··· 1035 1035 if (keep_size == false) 1036 1036 return -EOPNOTSUPP; 1037 1037 1038 - /* 1038 + /* 1039 1039 * Must check if file sparse since fallocate -z (zero range) assumes 1040 1040 * non-sparse allocation 1041 1041 */
+3 -4
fs/cifs/smb2pdu.c
··· 530 530 struct smb2_sess_setup_rsp *rsp = NULL; 531 531 struct kvec iov[2]; 532 532 int rc = 0; 533 - int resp_buftype; 533 + int resp_buftype = CIFS_NO_BUFFER; 534 534 __le32 phase = NtLmNegotiate; /* NTLMSSP, if needed, is multistage */ 535 535 struct TCP_Server_Info *server = ses->server; 536 536 u16 blob_length = 0; ··· 1403 1403 rsp = (struct smb2_close_rsp *)iov[0].iov_base; 1404 1404 1405 1405 if (rc != 0) { 1406 - if (tcon) 1407 - cifs_stats_fail_inc(tcon, SMB2_CLOSE_HE); 1406 + cifs_stats_fail_inc(tcon, SMB2_CLOSE_HE); 1408 1407 goto close_exit; 1409 1408 } 1410 1409 ··· 1532 1533 { 1533 1534 return query_info(xid, tcon, persistent_fid, volatile_fid, 1534 1535 FILE_ALL_INFORMATION, 1535 - sizeof(struct smb2_file_all_info) + MAX_NAME * 2, 1536 + sizeof(struct smb2_file_all_info) + PATH_MAX * 2, 1536 1537 sizeof(struct smb2_file_all_info), data); 1537 1538 } 1538 1539
+7 -4
fs/dcache.c
··· 106 106 unsigned int hash) 107 107 { 108 108 hash += (unsigned long) parent / L1_CACHE_BYTES; 109 - hash = hash + (hash >> d_hash_shift); 110 - return dentry_hashtable + (hash & d_hash_mask); 109 + return dentry_hashtable + hash_32(hash, d_hash_shift); 111 110 } 112 111 113 112 /* Statistics gathering. */ ··· 2655 2656 dentry->d_parent = dentry; 2656 2657 list_del_init(&dentry->d_u.d_child); 2657 2658 anon->d_parent = dparent; 2659 + if (likely(!d_unhashed(anon))) { 2660 + hlist_bl_lock(&anon->d_sb->s_anon); 2661 + __hlist_bl_del(&anon->d_hash); 2662 + anon->d_hash.pprev = NULL; 2663 + hlist_bl_unlock(&anon->d_sb->s_anon); 2664 + } 2658 2665 list_move(&anon->d_u.d_child, &dparent->d_subdirs); 2659 2666 2660 2667 write_seqcount_end(&dentry->d_seq); ··· 2719 2714 write_seqlock(&rename_lock); 2720 2715 __d_materialise_dentry(dentry, new); 2721 2716 write_sequnlock(&rename_lock); 2722 - __d_drop(new); 2723 2717 _d_rehash(new); 2724 2718 spin_unlock(&new->d_lock); 2725 2719 spin_unlock(&inode->i_lock); ··· 2782 2778 * could splice into our tree? */ 2783 2779 __d_materialise_dentry(dentry, alias); 2784 2780 write_sequnlock(&rename_lock); 2785 - __d_drop(alias); 2786 2781 goto found; 2787 2782 } else { 2788 2783 /* Nope, but we must(!) avoid directory
+2 -1
fs/eventpoll.c
··· 1852 1852 goto error_tgt_fput; 1853 1853 1854 1854 /* Check if EPOLLWAKEUP is allowed */ 1855 - ep_take_care_of_epollwakeup(&epds); 1855 + if (ep_op_has_event(op)) 1856 + ep_take_care_of_epollwakeup(&epds); 1856 1857 1857 1858 /* 1858 1859 * We have to check that the file structure underneath the file descriptor
+2
fs/ext4/namei.c
··· 3240 3240 &new.de, &new.inlined); 3241 3241 if (IS_ERR(new.bh)) { 3242 3242 retval = PTR_ERR(new.bh); 3243 + new.bh = NULL; 3243 3244 goto end_rename; 3244 3245 } 3245 3246 if (new.bh) { ··· 3387 3386 &new.de, &new.inlined); 3388 3387 if (IS_ERR(new.bh)) { 3389 3388 retval = PTR_ERR(new.bh); 3389 + new.bh = NULL; 3390 3390 goto end_rename; 3391 3391 } 3392 3392
+2
fs/ext4/resize.c
··· 575 575 bh = bclean(handle, sb, block); 576 576 if (IS_ERR(bh)) { 577 577 err = PTR_ERR(bh); 578 + bh = NULL; 578 579 goto out; 579 580 } 580 581 overhead = ext4_group_overhead_blocks(sb, group); ··· 604 603 bh = bclean(handle, sb, block); 605 604 if (IS_ERR(bh)) { 606 605 err = PTR_ERR(bh); 606 + bh = NULL; 607 607 goto out; 608 608 } 609 609
+1
fs/fscache/object.c
··· 982 982 submit_op_failed: 983 983 clear_bit(FSCACHE_OBJECT_IS_LIVE, &object->flags); 984 984 spin_unlock(&cookie->lock); 985 + fscache_unuse_cookie(object); 985 986 kfree(op); 986 987 _leave(" [EIO]"); 987 988 return transit_to(KILL_OBJECT);
+21 -4
fs/fscache/page.c
··· 44 44 EXPORT_SYMBOL(__fscache_wait_on_page_write); 45 45 46 46 /* 47 + * wait for a page to finish being written to the cache. Put a timeout here 48 + * since we might be called recursively via parent fs. 49 + */ 50 + static 51 + bool release_page_wait_timeout(struct fscache_cookie *cookie, struct page *page) 52 + { 53 + wait_queue_head_t *wq = bit_waitqueue(&cookie->flags, 0); 54 + 55 + return wait_event_timeout(*wq, !__fscache_check_page_write(cookie, page), 56 + HZ); 57 + } 58 + 59 + /* 47 60 * decide whether a page can be released, possibly by cancelling a store to it 48 61 * - we're allowed to sleep if __GFP_WAIT is flagged 49 62 */ ··· 128 115 } 129 116 130 117 fscache_stat(&fscache_n_store_vmscan_wait); 131 - __fscache_wait_on_page_write(cookie, page); 118 + if (!release_page_wait_timeout(cookie, page)) 119 + _debug("fscache writeout timeout page: %p{%lx}", 120 + page, page->index); 121 + 132 122 gfp &= ~__GFP_WAIT; 133 123 goto try_again; 134 124 } ··· 198 182 { 199 183 struct fscache_operation *op; 200 184 struct fscache_object *object; 201 - bool wake_cookie; 185 + bool wake_cookie = false; 202 186 203 187 _enter("%p", cookie); 204 188 ··· 228 212 229 213 __fscache_use_cookie(cookie); 230 214 if (fscache_submit_exclusive_op(object, op) < 0) 231 - goto nobufs; 215 + goto nobufs_dec; 232 216 spin_unlock(&cookie->lock); 233 217 fscache_stat(&fscache_n_attr_changed_ok); 234 218 fscache_put_operation(op); 235 219 _leave(" = 0"); 236 220 return 0; 237 221 238 - nobufs: 222 + nobufs_dec: 239 223 wake_cookie = __fscache_unuse_cookie(cookie); 224 + nobufs: 240 225 spin_unlock(&cookie->lock); 241 226 kfree(op); 242 227 if (wake_cookie)
+5 -4
fs/gfs2/bmap.c
··· 359 359 * Returns: The length of the extent (minimum of one block) 360 360 */ 361 361 362 - static inline unsigned int gfs2_extent_length(void *start, unsigned int len, __be64 *ptr, unsigned limit, int *eob) 362 + static inline unsigned int gfs2_extent_length(void *start, unsigned int len, __be64 *ptr, size_t limit, int *eob) 363 363 { 364 364 const __be64 *end = (start + len); 365 365 const __be64 *first = ptr; ··· 449 449 struct buffer_head *bh_map, struct metapath *mp, 450 450 const unsigned int sheight, 451 451 const unsigned int height, 452 - const unsigned int maxlen) 452 + const size_t maxlen) 453 453 { 454 454 struct gfs2_inode *ip = GFS2_I(inode); 455 455 struct gfs2_sbd *sdp = GFS2_SB(inode); ··· 483 483 } else { 484 484 /* Need to allocate indirect blocks */ 485 485 ptrs_per_blk = height > 1 ? sdp->sd_inptrs : sdp->sd_diptrs; 486 - dblks = min(maxlen, ptrs_per_blk - mp->mp_list[end_of_metadata]); 486 + dblks = min(maxlen, (size_t)(ptrs_per_blk - 487 + mp->mp_list[end_of_metadata])); 487 488 if (height == ip->i_height) { 488 489 /* Writing into existing tree, extend tree down */ 489 490 iblks = height - sheight; ··· 606 605 struct gfs2_inode *ip = GFS2_I(inode); 607 606 struct gfs2_sbd *sdp = GFS2_SB(inode); 608 607 unsigned int bsize = sdp->sd_sb.sb_bsize; 609 - const unsigned int maxlen = bh_map->b_size >> inode->i_blkbits; 608 + const size_t maxlen = bh_map->b_size >> inode->i_blkbits; 610 609 const u64 *arr = sdp->sd_heightsize; 611 610 __be64 *ptr; 612 611 u64 size;
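The gfs2_extent_length()/gfs2_bmap_alloc() hunks above widen maxlen from unsigned int to size_t; since the kernel's type-checked min() rejects operands of different types, the other operand must then be cast to size_t as well. A minimal sketch of the resulting clamp (clamp_blocks() is a hypothetical stand-in, not a gfs2 function):

```c
#include <assert.h>
#include <stddef.h>

static size_t clamp_blocks(size_t maxlen, unsigned int ptrs_left)
{
    /* Widen the narrower operand explicitly before comparing, as the
     * patch does with min(maxlen, (size_t)(ptrs_per_blk - ...)). */
    size_t avail = (size_t)ptrs_left;
    return maxlen < avail ? maxlen : avail;
}
```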
+12 -3
fs/gfs2/file.c
··· 26 26 #include <linux/dlm.h> 27 27 #include <linux/dlm_plock.h> 28 28 #include <linux/aio.h> 29 + #include <linux/delay.h> 29 30 30 31 #include "gfs2.h" 31 32 #include "incore.h" ··· 980 979 unsigned int state; 981 980 int flags; 982 981 int error = 0; 982 + int sleeptime; 983 983 984 984 state = (fl->fl_type == F_WRLCK) ? LM_ST_EXCLUSIVE : LM_ST_SHARED; 985 - flags = (IS_SETLKW(cmd) ? 0 : LM_FLAG_TRY) | GL_EXACT; 985 + flags = (IS_SETLKW(cmd) ? 0 : LM_FLAG_TRY_1CB) | GL_EXACT; 986 986 987 987 mutex_lock(&fp->f_fl_mutex); 988 988 ··· 1003 1001 gfs2_holder_init(gl, state, flags, fl_gh); 1004 1002 gfs2_glock_put(gl); 1005 1003 } 1006 - error = gfs2_glock_nq(fl_gh); 1004 + for (sleeptime = 1; sleeptime <= 4; sleeptime <<= 1) { 1005 + error = gfs2_glock_nq(fl_gh); 1006 + if (error != GLR_TRYFAILED) 1007 + break; 1008 + fl_gh->gh_flags = LM_FLAG_TRY | GL_EXACT; 1009 + fl_gh->gh_error = 0; 1010 + msleep(sleeptime); 1011 + } 1007 1012 if (error) { 1008 1013 gfs2_holder_uninit(fl_gh); 1009 1014 if (error == GLR_TRYFAILED) ··· 1033 1024 mutex_lock(&fp->f_fl_mutex); 1034 1025 flock_lock_file_wait(file, fl); 1035 1026 if (fl_gh->gh_gl) { 1036 - gfs2_glock_dq_wait(fl_gh); 1027 + gfs2_glock_dq(fl_gh); 1037 1028 gfs2_holder_uninit(fl_gh); 1038 1029 } 1039 1030 mutex_unlock(&fp->f_fl_mutex);
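The do_flock() hunk above turns a single gfs2_glock_nq() attempt into a retry loop with exponential backoff (sleep 1, 2, then 4 units between tries). A self-contained sketch of that loop, assuming a hypothetical attempt() callback in place of gfs2_glock_nq() and dropping the kernel's LM_FLAG_TRY_1CB to LM_FLAG_TRY flag downgrade:

```c
#include <assert.h>

#define GLR_TRYFAILED (-1)  /* stand-in for the gfs2 "try lock failed" code */

/* Retry a try-lock with backoff 1, 2, 4; *slept accumulates the total
 * backoff (msleep(sleeptime) in the kernel). */
static int nq_with_backoff(int (*attempt)(void *), void *arg, int *slept)
{
    int error = GLR_TRYFAILED;
    for (int sleeptime = 1; sleeptime <= 4; sleeptime <<= 1) {
        error = attempt(arg);
        if (error != GLR_TRYFAILED)
            break;
        *slept += sleeptime;
    }
    return error;
}

/* Test callbacks: one that succeeds on the third try, one that never does. */
static int fail_twice(void *arg)
{
    int *calls = arg;
    return ++*calls <= 2 ? GLR_TRYFAILED : 0;
}

static int always_fail(void *arg) { (void)arg; return GLR_TRYFAILED; }
```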
+5 -2
fs/gfs2/incore.h
··· 262 262 unsigned long gh_ip; 263 263 }; 264 264 265 + /* Number of quota types we support */ 266 + #define GFS2_MAXQUOTAS 2 267 + 265 268 /* Resource group multi-block reservation, in order of appearance: 266 269 267 270 Step 1. Function prepares to write, allocates a mb, sets the size hint. ··· 285 282 u64 rs_inum; /* Inode number for reservation */ 286 283 287 284 /* ancillary quota stuff */ 288 - struct gfs2_quota_data *rs_qa_qd[2 * MAXQUOTAS]; 289 - struct gfs2_holder rs_qa_qd_ghs[2 * MAXQUOTAS]; 285 + struct gfs2_quota_data *rs_qa_qd[2 * GFS2_MAXQUOTAS]; 286 + struct gfs2_holder rs_qa_qd_ghs[2 * GFS2_MAXQUOTAS]; 290 287 unsigned int rs_qa_qd_num; 291 288 }; 292 289
+6 -3
fs/gfs2/inode.c
··· 626 626 if (!IS_ERR(inode)) { 627 627 d = d_splice_alias(inode, dentry); 628 628 error = PTR_ERR(d); 629 - if (IS_ERR(d)) 629 + if (IS_ERR(d)) { 630 + inode = ERR_CAST(d); 630 631 goto fail_gunlock; 632 + } 631 633 error = 0; 632 634 if (file) { 633 635 if (S_ISREG(inode->i_mode)) { ··· 842 840 int error; 843 841 844 842 inode = gfs2_lookupi(dir, &dentry->d_name, 0); 845 - if (!inode) 843 + if (inode == NULL) { 844 + d_add(dentry, NULL); 846 845 return NULL; 846 + } 847 847 if (IS_ERR(inode)) 848 848 return ERR_CAST(inode); 849 849 ··· 858 854 859 855 d = d_splice_alias(inode, dentry); 860 856 if (IS_ERR(d)) { 861 - iput(inode); 862 857 gfs2_glock_dq_uninit(&gh); 863 858 return d; 864 859 }
+10 -10
fs/gfs2/super.c
··· 1294 1294 int val; 1295 1295 1296 1296 if (is_ancestor(root, sdp->sd_master_dir)) 1297 - seq_printf(s, ",meta"); 1297 + seq_puts(s, ",meta"); 1298 1298 if (args->ar_lockproto[0]) 1299 1299 seq_printf(s, ",lockproto=%s", args->ar_lockproto); 1300 1300 if (args->ar_locktable[0]) ··· 1302 1302 if (args->ar_hostdata[0]) 1303 1303 seq_printf(s, ",hostdata=%s", args->ar_hostdata); 1304 1304 if (args->ar_spectator) 1305 - seq_printf(s, ",spectator"); 1305 + seq_puts(s, ",spectator"); 1306 1306 if (args->ar_localflocks) 1307 - seq_printf(s, ",localflocks"); 1307 + seq_puts(s, ",localflocks"); 1308 1308 if (args->ar_debug) 1309 - seq_printf(s, ",debug"); 1309 + seq_puts(s, ",debug"); 1310 1310 if (args->ar_posix_acl) 1311 - seq_printf(s, ",acl"); 1311 + seq_puts(s, ",acl"); 1312 1312 if (args->ar_quota != GFS2_QUOTA_DEFAULT) { 1313 1313 char *state; 1314 1314 switch (args->ar_quota) { ··· 1328 1328 seq_printf(s, ",quota=%s", state); 1329 1329 } 1330 1330 if (args->ar_suiddir) 1331 - seq_printf(s, ",suiddir"); 1331 + seq_puts(s, ",suiddir"); 1332 1332 if (args->ar_data != GFS2_DATA_DEFAULT) { 1333 1333 char *state; 1334 1334 switch (args->ar_data) { ··· 1345 1345 seq_printf(s, ",data=%s", state); 1346 1346 } 1347 1347 if (args->ar_discard) 1348 - seq_printf(s, ",discard"); 1348 + seq_puts(s, ",discard"); 1349 1349 val = sdp->sd_tune.gt_logd_secs; 1350 1350 if (val != 30) 1351 1351 seq_printf(s, ",commit=%d", val); ··· 1376 1376 seq_printf(s, ",errors=%s", state); 1377 1377 } 1378 1378 if (test_bit(SDF_NOBARRIERS, &sdp->sd_flags)) 1379 - seq_printf(s, ",nobarrier"); 1379 + seq_puts(s, ",nobarrier"); 1380 1380 if (test_bit(SDF_DEMOTE, &sdp->sd_flags)) 1381 - seq_printf(s, ",demote_interface_used"); 1381 + seq_puts(s, ",demote_interface_used"); 1382 1382 if (args->ar_rgrplvb) 1383 - seq_printf(s, ",rgrplvb"); 1383 + seq_puts(s, ",rgrplvb"); 1384 1384 return 0; 1385 1385 } 1386 1386
+1 -3
fs/lockd/svc.c
··· 253 253 254 254 error = make_socks(serv, net); 255 255 if (error < 0) 256 - goto err_socks; 256 + goto err_bind; 257 257 set_grace_period(net); 258 258 dprintk("lockd_up_net: per-net data created; net=%p\n", net); 259 259 return 0; 260 260 261 - err_socks: 262 - svc_rpcb_cleanup(serv, net); 263 261 err_bind: 264 262 ln->nlmsvc_users--; 265 263 return error;
+53 -43
fs/namei.c
··· 34 34 #include <linux/device_cgroup.h> 35 35 #include <linux/fs_struct.h> 36 36 #include <linux/posix_acl.h> 37 + #include <linux/hash.h> 37 38 #include <asm/uaccess.h> 38 39 39 40 #include "internal.h" ··· 644 643 645 644 static __always_inline void set_root(struct nameidata *nd) 646 645 { 647 - if (!nd->root.mnt) 648 - get_fs_root(current->fs, &nd->root); 646 + get_fs_root(current->fs, &nd->root); 649 647 } 650 648 651 649 static int link_path_walk(const char *, struct nameidata *); 652 650 653 - static __always_inline void set_root_rcu(struct nameidata *nd) 651 + static __always_inline unsigned set_root_rcu(struct nameidata *nd) 654 652 { 655 - if (!nd->root.mnt) { 656 - struct fs_struct *fs = current->fs; 657 - unsigned seq; 653 + struct fs_struct *fs = current->fs; 654 + unsigned seq, res; 658 655 659 - do { 660 - seq = read_seqcount_begin(&fs->seq); 661 - nd->root = fs->root; 662 - nd->seq = __read_seqcount_begin(&nd->root.dentry->d_seq); 663 - } while (read_seqcount_retry(&fs->seq, seq)); 664 - } 656 + do { 657 + seq = read_seqcount_begin(&fs->seq); 658 + nd->root = fs->root; 659 + res = __read_seqcount_begin(&nd->root.dentry->d_seq); 660 + } while (read_seqcount_retry(&fs->seq, seq)); 661 + return res; 665 662 } 666 663 667 664 static void path_put_conditional(struct path *path, struct nameidata *nd) ··· 859 860 return PTR_ERR(s); 860 861 } 861 862 if (*s == '/') { 862 - set_root(nd); 863 + if (!nd->root.mnt) 864 + set_root(nd); 863 865 path_put(&nd->path); 864 866 nd->path = nd->root; 865 867 path_get(&nd->root); ··· 1137 1137 */ 1138 1138 *inode = path->dentry->d_inode; 1139 1139 } 1140 - return read_seqretry(&mount_lock, nd->m_seq) && 1140 + return !read_seqretry(&mount_lock, nd->m_seq) && 1141 1141 !(path->dentry->d_flags & DCACHE_NEED_AUTOMOUNT); 1142 1142 } 1143 1143 1144 1144 static int follow_dotdot_rcu(struct nameidata *nd) 1145 1145 { 1146 - set_root_rcu(nd); 1146 + struct inode *inode = nd->inode; 1147 + if (!nd->root.mnt) 1148 + 
set_root_rcu(nd); 1147 1149 1148 1150 while (1) { 1149 1151 if (nd->path.dentry == nd->root.dentry && ··· 1157 1155 struct dentry *parent = old->d_parent; 1158 1156 unsigned seq; 1159 1157 1158 + inode = parent->d_inode; 1160 1159 seq = read_seqcount_begin(&parent->d_seq); 1161 1160 if (read_seqcount_retry(&old->d_seq, nd->seq)) 1162 1161 goto failed; ··· 1167 1164 } 1168 1165 if (!follow_up_rcu(&nd->path)) 1169 1166 break; 1167 + inode = nd->path.dentry->d_inode; 1170 1168 nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq); 1171 1169 } 1172 1170 while (d_mountpoint(nd->path.dentry)) { ··· 1177 1173 break; 1178 1174 nd->path.mnt = &mounted->mnt; 1179 1175 nd->path.dentry = mounted->mnt.mnt_root; 1176 + inode = nd->path.dentry->d_inode; 1180 1177 nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq); 1181 - if (!read_seqretry(&mount_lock, nd->m_seq)) 1178 + if (read_seqretry(&mount_lock, nd->m_seq)) 1182 1179 goto failed; 1183 1180 } 1184 - nd->inode = nd->path.dentry->d_inode; 1181 + nd->inode = inode; 1185 1182 return 0; 1186 1183 1187 1184 failed: ··· 1261 1256 1262 1257 static void follow_dotdot(struct nameidata *nd) 1263 1258 { 1264 - set_root(nd); 1259 + if (!nd->root.mnt) 1260 + set_root(nd); 1265 1261 1266 1262 while(1) { 1267 1263 struct dentry *old = nd->path.dentry; ··· 1640 1634 1641 1635 static inline unsigned int fold_hash(unsigned long hash) 1642 1636 { 1643 - hash += hash >> (8*sizeof(int)); 1644 - return hash; 1637 + return hash_64(hash, 32); 1645 1638 } 1646 1639 1647 1640 #else /* 32-bit case */ ··· 1674 1669 1675 1670 /* 1676 1671 * Calculate the length and hash of the path component, and 1677 - * return the length of the component; 1672 + * return the "hash_len" as the result. 
1678 1673 */ 1679 - static inline unsigned long hash_name(const char *name, unsigned int *hashp) 1674 + static inline u64 hash_name(const char *name) 1680 1675 { 1681 1676 unsigned long a, b, adata, bdata, mask, hash, len; 1682 1677 const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS; ··· 1696 1691 mask = create_zero_mask(adata | bdata); 1697 1692 1698 1693 hash += a & zero_bytemask(mask); 1699 - *hashp = fold_hash(hash); 1700 - 1701 - return len + find_zero(mask); 1694 + len += find_zero(mask); 1695 + return hashlen_create(fold_hash(hash), len); 1702 1696 } 1703 1697 1704 1698 #else ··· 1715 1711 * We know there's a real path component here of at least 1716 1712 * one character. 1717 1713 */ 1718 - static inline unsigned long hash_name(const char *name, unsigned int *hashp) 1714 + static inline u64 hash_name(const char *name) 1719 1715 { 1720 1716 unsigned long hash = init_name_hash(); 1721 1717 unsigned long len = 0, c; ··· 1726 1722 hash = partial_name_hash(c, hash); 1727 1723 c = (unsigned char)name[len]; 1728 1724 } while (c && c != '/'); 1729 - *hashp = end_name_hash(hash); 1730 - return len; 1725 + return hashlen_create(end_name_hash(hash), len); 1731 1726 } 1732 1727 1733 1728 #endif ··· 1751 1748 1752 1749 /* At this point we know we have a real path component. 
*/ 1753 1750 for(;;) { 1754 - struct qstr this; 1755 - long len; 1751 + u64 hash_len; 1756 1752 int type; 1757 1753 1758 1754 err = may_lookup(nd); 1759 1755 if (err) 1760 1756 break; 1761 1757 1762 - len = hash_name(name, &this.hash); 1763 - this.name = name; 1764 - this.len = len; 1758 + hash_len = hash_name(name); 1765 1759 1766 1760 type = LAST_NORM; 1767 - if (name[0] == '.') switch (len) { 1761 + if (name[0] == '.') switch (hashlen_len(hash_len)) { 1768 1762 case 2: 1769 1763 if (name[1] == '.') { 1770 1764 type = LAST_DOTDOT; ··· 1775 1775 struct dentry *parent = nd->path.dentry; 1776 1776 nd->flags &= ~LOOKUP_JUMPED; 1777 1777 if (unlikely(parent->d_flags & DCACHE_OP_HASH)) { 1778 + struct qstr this = { { .hash_len = hash_len }, .name = name }; 1778 1779 err = parent->d_op->d_hash(parent, &this); 1779 1780 if (err < 0) 1780 1781 break; 1782 + hash_len = this.hash_len; 1783 + name = this.name; 1781 1784 } 1782 1785 } 1783 1786 1784 - nd->last = this; 1787 + nd->last.hash_len = hash_len; 1788 + nd->last.name = name; 1785 1789 nd->last_type = type; 1786 1790 1787 - if (!name[len]) 1791 + name += hashlen_len(hash_len); 1792 + if (!*name) 1788 1793 return 0; 1789 1794 /* 1790 1795 * If it wasn't NUL, we know it was '/'. Skip that 1791 1796 * slash, and continue until no more slashes. 
1792 1797 */ 1793 1798 do { 1794 - len++; 1795 - } while (unlikely(name[len] == '/')); 1796 - if (!name[len]) 1799 + name++; 1800 + } while (unlikely(*name == '/')); 1801 + if (!*name) 1797 1802 return 0; 1798 - 1799 - name += len; 1800 1803 1801 1804 err = walk_component(nd, &next, LOOKUP_FOLLOW); 1802 1805 if (err < 0) ··· 1855 1852 if (*name=='/') { 1856 1853 if (flags & LOOKUP_RCU) { 1857 1854 rcu_read_lock(); 1858 - set_root_rcu(nd); 1855 + nd->seq = set_root_rcu(nd); 1859 1856 } else { 1860 1857 set_root(nd); 1861 1858 path_get(&nd->root); ··· 1906 1903 } 1907 1904 1908 1905 nd->inode = nd->path.dentry->d_inode; 1909 - return 0; 1906 + if (!(flags & LOOKUP_RCU)) 1907 + return 0; 1908 + if (likely(!read_seqcount_retry(&nd->path.dentry->d_seq, nd->seq))) 1909 + return 0; 1910 + if (!(nd->flags & LOOKUP_ROOT)) 1911 + nd->root.mnt = NULL; 1912 + rcu_read_unlock(); 1913 + return -ECHILD; 1910 1914 } 1911 1915 1912 1916 static inline int lookup_last(struct nameidata *nd, struct path *path)
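The link_path_walk() rework above has hash_name() return a single u64 carrying both the component hash and its length, so the hot loop passes one value instead of two. A sketch of that packing, mirroring the hashlen_create()/hashlen_len() helpers the patch uses (length in the high 32 bits, hash in the low 32; bit layout assumed for illustration):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t hashlen_create(uint32_t hash, uint32_t len)
{
    return ((uint64_t)len << 32) | hash;
}

static uint32_t hashlen_len(uint64_t hashlen)  { return (uint32_t)(hashlen >> 32); }
static uint32_t hashlen_hash(uint64_t hashlen) { return (uint32_t)hashlen; }
```

Packing the pair this way lets nd->last.hash_len be stored and compared as one word, which is what the `case 2:` dot/dotdot check above switches on via hashlen_len().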
+3 -9
fs/nfs/client.c
··· 1412 1412 p = proc_create("volumes", S_IFREG|S_IRUGO, 1413 1413 nn->proc_nfsfs, &nfs_volume_list_fops); 1414 1414 if (!p) 1415 - goto error_2; 1415 + goto error_1; 1416 1416 return 0; 1417 1417 1418 - error_2: 1419 - remove_proc_entry("servers", nn->proc_nfsfs); 1420 1418 error_1: 1421 - remove_proc_entry("fs/nfsfs", NULL); 1419 + remove_proc_subtree("nfsfs", net->proc_net); 1422 1420 error_0: 1423 1421 return -ENOMEM; 1424 1422 } 1425 1423 1426 1424 void nfs_fs_proc_net_exit(struct net *net) 1427 1425 { 1428 - struct nfs_net *nn = net_generic(net, nfs_net_id); 1429 - 1430 - remove_proc_entry("volumes", nn->proc_nfsfs); 1431 - remove_proc_entry("servers", nn->proc_nfsfs); 1432 - remove_proc_entry("fs/nfsfs", NULL); 1426 + remove_proc_subtree("nfsfs", net->proc_net); 1433 1427 } 1434 1428 1435 1429 /*
+3 -2
fs/nfs/filelayout/filelayout.c
··· 1269 1269 static void filelayout_retry_commit(struct nfs_commit_info *cinfo, int idx) 1270 1270 { 1271 1271 struct pnfs_ds_commit_info *fl_cinfo = cinfo->ds; 1272 - struct pnfs_commit_bucket *bucket = fl_cinfo->buckets; 1272 + struct pnfs_commit_bucket *bucket; 1273 1273 struct pnfs_layout_segment *freeme; 1274 1274 int i; 1275 1275 1276 - for (i = idx; i < fl_cinfo->nbuckets; i++, bucket++) { 1276 + for (i = idx; i < fl_cinfo->nbuckets; i++) { 1277 + bucket = &fl_cinfo->buckets[i]; 1277 1278 if (list_empty(&bucket->committing)) 1278 1279 continue; 1279 1280 nfs_retry_commit(&bucket->committing, bucket->clseg, cinfo);
+6 -7
fs/nfs/nfs4_fs.h
··· 130 130 */ 131 131 132 132 struct nfs4_lock_state { 133 - struct list_head ls_locks; /* Other lock stateids */ 134 - struct nfs4_state * ls_state; /* Pointer to open state */ 133 + struct list_head ls_locks; /* Other lock stateids */ 134 + struct nfs4_state * ls_state; /* Pointer to open state */ 135 135 #define NFS_LOCK_INITIALIZED 0 136 136 #define NFS_LOCK_LOST 1 137 - unsigned long ls_flags; 137 + unsigned long ls_flags; 138 138 struct nfs_seqid_counter ls_seqid; 139 - nfs4_stateid ls_stateid; 140 - atomic_t ls_count; 141 - fl_owner_t ls_owner; 142 - struct work_struct ls_release; 139 + nfs4_stateid ls_stateid; 140 + atomic_t ls_count; 141 + fl_owner_t ls_owner; 143 142 }; 144 143 145 144 /* bits for nfs4_state->flags */
+20 -18
fs/nfs/nfs4client.c
··· 482 482 483 483 spin_lock(&nn->nfs_client_lock); 484 484 list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) { 485 + 486 + if (pos->rpc_ops != new->rpc_ops) 487 + continue; 488 + 489 + if (pos->cl_proto != new->cl_proto) 490 + continue; 491 + 492 + if (pos->cl_minorversion != new->cl_minorversion) 493 + continue; 494 + 485 495 /* If "pos" isn't marked ready, we can't trust the 486 496 * remaining fields in "pos" */ 487 497 if (pos->cl_cons_state > NFS_CS_READY) { ··· 509 499 spin_lock(&nn->nfs_client_lock); 510 500 } 511 501 if (pos->cl_cons_state != NFS_CS_READY) 512 - continue; 513 - 514 - if (pos->rpc_ops != new->rpc_ops) 515 - continue; 516 - 517 - if (pos->cl_proto != new->cl_proto) 518 - continue; 519 - 520 - if (pos->cl_minorversion != new->cl_minorversion) 521 502 continue; 522 503 523 504 if (pos->cl_clientid != new->cl_clientid) ··· 623 622 624 623 spin_lock(&nn->nfs_client_lock); 625 624 list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) { 625 + 626 + if (pos->rpc_ops != new->rpc_ops) 627 + continue; 628 + 629 + if (pos->cl_proto != new->cl_proto) 630 + continue; 631 + 632 + if (pos->cl_minorversion != new->cl_minorversion) 633 + continue; 634 + 626 635 /* If "pos" isn't marked ready, we can't trust the 627 636 * remaining fields in "pos", especially the client 628 637 * ID and serverowner fields. Wait for CREATE_SESSION ··· 656 645 status = -NFS4ERR_STALE_CLIENTID; 657 646 } 658 647 if (pos->cl_cons_state != NFS_CS_READY) 659 - continue; 660 - 661 - if (pos->rpc_ops != new->rpc_ops) 662 - continue; 663 - 664 - if (pos->cl_proto != new->cl_proto) 665 - continue; 666 - 667 - if (pos->cl_minorversion != new->cl_minorversion) 668 648 continue; 669 649 670 650 if (!nfs4_match_clientids(pos, new))
+22 -18
fs/nfs/nfs4proc.c
··· 2226 2226 ret = _nfs4_proc_open(opendata); 2227 2227 if (ret != 0) { 2228 2228 if (ret == -ENOENT) { 2229 - d_drop(opendata->dentry); 2230 - d_add(opendata->dentry, NULL); 2231 - nfs_set_verifier(opendata->dentry, 2229 + dentry = opendata->dentry; 2230 + if (dentry->d_inode) 2231 + d_delete(dentry); 2232 + else if (d_unhashed(dentry)) 2233 + d_add(dentry, NULL); 2234 + 2235 + nfs_set_verifier(dentry, 2232 2236 nfs_save_change_attribute(opendata->dir->d_inode)); 2233 2237 } 2234 2238 goto out; ··· 2618 2614 is_rdwr = test_bit(NFS_O_RDWR_STATE, &state->flags); 2619 2615 is_rdonly = test_bit(NFS_O_RDONLY_STATE, &state->flags); 2620 2616 is_wronly = test_bit(NFS_O_WRONLY_STATE, &state->flags); 2621 - /* Calculate the current open share mode */ 2622 - calldata->arg.fmode = 0; 2623 - if (is_rdonly || is_rdwr) 2624 - calldata->arg.fmode |= FMODE_READ; 2625 - if (is_wronly || is_rdwr) 2626 - calldata->arg.fmode |= FMODE_WRITE; 2627 2617 /* Calculate the change in open mode */ 2618 + calldata->arg.fmode = 0; 2628 2619 if (state->n_rdwr == 0) { 2629 - if (state->n_rdonly == 0) { 2630 - call_close |= is_rdonly || is_rdwr; 2631 - calldata->arg.fmode &= ~FMODE_READ; 2632 - } 2633 - if (state->n_wronly == 0) { 2634 - call_close |= is_wronly || is_rdwr; 2635 - calldata->arg.fmode &= ~FMODE_WRITE; 2636 - } 2637 - } 2620 + if (state->n_rdonly == 0) 2621 + call_close |= is_rdonly; 2622 + else if (is_rdonly) 2623 + calldata->arg.fmode |= FMODE_READ; 2624 + if (state->n_wronly == 0) 2625 + call_close |= is_wronly; 2626 + else if (is_wronly) 2627 + calldata->arg.fmode |= FMODE_WRITE; 2628 + } else if (is_rdwr) 2629 + calldata->arg.fmode |= FMODE_READ|FMODE_WRITE; 2630 + 2631 + if (calldata->arg.fmode == 0) 2632 + call_close |= is_rdwr; 2633 + 2638 2634 if (!nfs4_valid_open_stateid(state)) 2639 2635 call_close = 0; 2640 2636 spin_unlock(&state->owner->so_lock);
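The nfs4_close_prepare() hunk above reworks how the remaining share mode and the close decision are computed: the kept mode is derived purely from the remaining open counts, and call_close is raised only when a mode the stateid actually holds must be dropped. The decision table can be pulled out as a pure function (close_calc() is a hypothetical extraction; locking and state flags omitted):

```c
#include <assert.h>

#define FMODE_READ  0x1   /* stand-ins for the kernel fmode_t bits */
#define FMODE_WRITE 0x2

/* Compute the mode to keep (*fmode) and whether a downgrade/close RPC is
 * needed (*call_close), following the patched logic above. */
static void close_calc(int n_rdwr, int n_rdonly, int n_wronly,
                       int is_rdonly, int is_wronly, int is_rdwr,
                       int *fmode, int *call_close)
{
    *fmode = 0;
    *call_close = 0;
    if (n_rdwr == 0) {
        if (n_rdonly == 0)
            *call_close |= is_rdonly;
        else if (is_rdonly)
            *fmode |= FMODE_READ;
        if (n_wronly == 0)
            *call_close |= is_wronly;
        else if (is_wronly)
            *fmode |= FMODE_WRITE;
    } else if (is_rdwr)
        *fmode |= FMODE_READ | FMODE_WRITE;

    if (*fmode == 0)
        *call_close |= is_rdwr;
}
```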
+6 -18
fs/nfs/nfs4state.c
··· 799 799 return NULL; 800 800 } 801 801 802 - static void 803 - free_lock_state_work(struct work_struct *work) 804 - { 805 - struct nfs4_lock_state *lsp = container_of(work, 806 - struct nfs4_lock_state, ls_release); 807 - struct nfs4_state *state = lsp->ls_state; 808 - struct nfs_server *server = state->owner->so_server; 809 - struct nfs_client *clp = server->nfs_client; 810 - 811 - clp->cl_mvops->free_lock_state(server, lsp); 812 - } 813 - 814 802 /* 815 803 * Return a compatible lock_state. If no initialized lock_state structure 816 804 * exists, return an uninitialized one. ··· 820 832 if (lsp->ls_seqid.owner_id < 0) 821 833 goto out_free; 822 834 INIT_LIST_HEAD(&lsp->ls_locks); 823 - INIT_WORK(&lsp->ls_release, free_lock_state_work); 824 835 return lsp; 825 836 out_free: 826 837 kfree(lsp); ··· 883 896 if (list_empty(&state->lock_states)) 884 897 clear_bit(LK_STATE_IN_USE, &state->flags); 885 898 spin_unlock(&state->state_lock); 886 - if (test_bit(NFS_LOCK_INITIALIZED, &lsp->ls_flags)) 887 - queue_work(nfsiod_workqueue, &lsp->ls_release); 888 - else { 889 - server = state->owner->so_server; 899 + server = state->owner->so_server; 900 + if (test_bit(NFS_LOCK_INITIALIZED, &lsp->ls_flags)) { 901 + struct nfs_client *clp = server->nfs_client; 902 + 903 + clp->cl_mvops->free_lock_state(server, lsp); 904 + } else 890 905 nfs4_free_lock_state(server, lsp); 891 - } 892 906 } 893 907 894 908 static void nfs4_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
+13 -1
fs/nfsd/nfs4xdr.c
··· 2657 2657 struct xdr_stream *xdr = cd->xdr; 2658 2658 int start_offset = xdr->buf->len; 2659 2659 int cookie_offset; 2660 + u32 name_and_cookie; 2660 2661 int entry_bytes; 2661 2662 __be32 nfserr = nfserr_toosmall; 2662 2663 __be64 wire_offset; ··· 2719 2718 cd->rd_maxcount -= entry_bytes; 2720 2719 if (!cd->rd_dircount) 2721 2720 goto fail; 2722 - cd->rd_dircount--; 2721 + /* 2722 + * RFC 3530 14.2.24 describes rd_dircount as only a "hint", so 2723 + * let's always let through the first entry, at least: 2724 + */ 2725 + name_and_cookie = 4 * XDR_QUADLEN(namlen) + 8; 2726 + if (name_and_cookie > cd->rd_dircount && cd->cookie_offset) 2727 + goto fail; 2728 + cd->rd_dircount -= min(cd->rd_dircount, name_and_cookie); 2723 2729 cd->cookie_offset = cookie_offset; 2724 2730 skip_entry: 2725 2731 cd->common.err = nfs_ok; ··· 3328 3320 goto err_no_verf; 3329 3321 } 3330 3322 maxcount = min_t(int, maxcount-16, bytes_left); 3323 + 3324 + /* RFC 3530 14.2.24 allows us to ignore dircount when it's 0: */ 3325 + if (!readdir->rd_dircount) 3326 + readdir->rd_dircount = INT_MAX; 3331 3327 3332 3328 readdir->xdr = xdr; 3333 3329 readdir->rd_maxcount = maxcount;
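The readdir hunk above charges each entry against rd_dircount as "name rounded up to XDR quads, plus an 8-byte cookie" rather than decrementing by one. XDR_QUADLEN rounds a byte count up to 4-byte units; the arithmetic can be checked in isolation (xdr_quadlen()/entry_cost() here mirror the kernel macro and the patch's expression, as a sketch):

```c
#include <assert.h>
#include <stdint.h>

/* Round a byte count up to the number of 4-byte XDR quads. */
static uint32_t xdr_quadlen(uint32_t bytes)
{
    return (bytes + 3) >> 2;
}

/* Per-entry dircount charge: padded name plus the 8-byte cookie. */
static uint32_t entry_cost(uint32_t namlen)
{
    return 4 * xdr_quadlen(namlen) + 8;
}
```

Per the comment in the patch, RFC 3530 treats dircount as only a hint, so the first entry is always let through and a zero dircount from the client is treated as unlimited (INT_MAX).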
+2 -2
fs/notify/fdinfo.c
··· 42 42 { 43 43 struct { 44 44 struct file_handle handle; 45 - u8 pad[64]; 45 + u8 pad[MAX_HANDLE_SZ]; 46 46 } f; 47 47 int size, ret, i; 48 48 ··· 50 50 size = f.handle.handle_bytes >> 2; 51 51 52 52 ret = exportfs_encode_inode_fh(inode, (struct fid *)f.handle.f_handle, &size, 0); 53 - if ((ret == 255) || (ret == -ENOSPC)) { 53 + if ((ret == FILEID_INVALID) || (ret < 0)) { 54 54 WARN_ONCE(1, "Can't encode file handler for inotify: %d\n", ret); 55 55 return 0; 56 56 }
+15 -13
fs/udf/ialloc.c
··· 45 45 udf_free_blocks(sb, NULL, &UDF_I(inode)->i_location, 0, 1); 46 46 } 47 47 48 - struct inode *udf_new_inode(struct inode *dir, umode_t mode, int *err) 48 + struct inode *udf_new_inode(struct inode *dir, umode_t mode) 49 49 { 50 50 struct super_block *sb = dir->i_sb; 51 51 struct udf_sb_info *sbi = UDF_SB(sb); ··· 55 55 struct udf_inode_info *iinfo; 56 56 struct udf_inode_info *dinfo = UDF_I(dir); 57 57 struct logicalVolIntegrityDescImpUse *lvidiu; 58 + int err; 58 59 59 60 inode = new_inode(sb); 60 61 61 - if (!inode) { 62 - *err = -ENOMEM; 63 - return NULL; 64 - } 65 - *err = -ENOSPC; 62 + if (!inode) 63 + return ERR_PTR(-ENOMEM); 66 64 67 65 iinfo = UDF_I(inode); 68 66 if (UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_USE_EXTENDED_FE)) { ··· 78 80 } 79 81 if (!iinfo->i_ext.i_data) { 80 82 iput(inode); 81 - *err = -ENOMEM; 82 - return NULL; 83 + return ERR_PTR(-ENOMEM); 83 84 } 84 85 86 + err = -ENOSPC; 85 87 block = udf_new_block(dir->i_sb, NULL, 86 88 dinfo->i_location.partitionReferenceNum, 87 - start, err); 88 - if (*err) { 89 + start, &err); 90 + if (err) { 89 91 iput(inode); 90 - return NULL; 92 + return ERR_PTR(err); 91 93 } 92 94 93 95 lvidiu = udf_sb_lvidiu(sb); 94 96 if (lvidiu) { 95 97 iinfo->i_unique = lvid_get_unique_id(sb); 98 + inode->i_generation = iinfo->i_unique; 96 99 mutex_lock(&sbi->s_alloc_mutex); 97 100 if (S_ISDIR(mode)) 98 101 le32_add_cpu(&lvidiu->numDirs, 1); ··· 122 123 iinfo->i_alloc_type = ICBTAG_FLAG_AD_LONG; 123 124 inode->i_mtime = inode->i_atime = inode->i_ctime = 124 125 iinfo->i_crtime = current_fs_time(inode->i_sb); 125 - insert_inode_hash(inode); 126 + if (unlikely(insert_inode_locked(inode) < 0)) { 127 + make_bad_inode(inode); 128 + iput(inode); 129 + return ERR_PTR(-EIO); 130 + } 126 131 mark_inode_dirty(inode); 127 132 128 - *err = 0; 129 133 return inode; 130 134 }
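The udf_new_inode() rework above switches from an int out-parameter to the kernel's ERR_PTR() convention, where a small negative errno is encoded in the pointer value itself. A userspace sketch of that convention (err_ptr()/ptr_err()/is_err()/make_obj() are hypothetical stand-ins for the kernel helpers):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define MAX_ERRNO 4095  /* errno values live in the top page of the address space */

static void *err_ptr(long error)      { return (void *)error; }
static long  ptr_err(const void *ptr) { return (long)ptr; }
static int   is_err(const void *ptr)
{
    return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* Hypothetical allocator following the new calling convention. */
static void *make_obj(int fail)
{
    static int obj = 42;
    if (fail)
        return err_ptr(-ENOSPC);   /* was: *err = -ENOSPC; return NULL; */
    return &obj;
}
```

Callers then test IS_ERR()/PTR_ERR() instead of checking a NULL return plus a separate error variable, which is what the udf_iget() and namei.c hunks in this merge rely on.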
+77 -86
fs/udf/inode.c
··· 51 51 52 52 static umode_t udf_convert_permissions(struct fileEntry *); 53 53 static int udf_update_inode(struct inode *, int); 54 - static void udf_fill_inode(struct inode *, struct buffer_head *); 55 54 static int udf_sync_inode(struct inode *inode); 56 55 static int udf_alloc_i_data(struct inode *inode, size_t size); 57 56 static sector_t inode_getblk(struct inode *, sector_t, int *, int *); ··· 1270 1271 return 0; 1271 1272 } 1272 1273 1273 - static void __udf_read_inode(struct inode *inode) 1274 + /* 1275 + * Maximum length of linked list formed by ICB hierarchy. The chosen number is 1276 + * arbitrary - just that we hopefully don't limit any real use of rewritten 1277 + * inode on write-once media but avoid looping for too long on corrupted media. 1278 + */ 1279 + #define UDF_MAX_ICB_NESTING 1024 1280 + 1281 + static int udf_read_inode(struct inode *inode) 1274 1282 { 1275 1283 struct buffer_head *bh = NULL; 1276 1284 struct fileEntry *fe; 1285 + struct extendedFileEntry *efe; 1277 1286 uint16_t ident; 1278 1287 struct udf_inode_info *iinfo = UDF_I(inode); 1288 + struct udf_sb_info *sbi = UDF_SB(inode->i_sb); 1289 + struct kernel_lb_addr *iloc = &iinfo->i_location; 1290 + unsigned int link_count; 1291 + unsigned int indirections = 0; 1292 + int ret = -EIO; 1293 + 1294 + reread: 1295 + if (iloc->logicalBlockNum >= 1296 + sbi->s_partmaps[iloc->partitionReferenceNum].s_partition_len) { 1297 + udf_debug("block=%d, partition=%d out of range\n", 1298 + iloc->logicalBlockNum, iloc->partitionReferenceNum); 1299 + return -EIO; 1300 + } 1279 1301 1280 1302 /* 1281 1303 * Set defaults, but the inode is still incomplete! 
··· 1310 1290 * i_nlink = 1 1311 1291 * i_op = NULL; 1312 1292 */ 1313 - bh = udf_read_ptagged(inode->i_sb, &iinfo->i_location, 0, &ident); 1293 + bh = udf_read_ptagged(inode->i_sb, iloc, 0, &ident); 1314 1294 if (!bh) { 1315 1295 udf_err(inode->i_sb, "(ino %ld) failed !bh\n", inode->i_ino); 1316 - make_bad_inode(inode); 1317 - return; 1296 + return -EIO; 1318 1297 } 1319 1298 1320 1299 if (ident != TAG_IDENT_FE && ident != TAG_IDENT_EFE && 1321 1300 ident != TAG_IDENT_USE) { 1322 1301 udf_err(inode->i_sb, "(ino %ld) failed ident=%d\n", 1323 1302 inode->i_ino, ident); 1324 - brelse(bh); 1325 - make_bad_inode(inode); 1326 - return; 1303 + goto out; 1327 1304 } 1328 1305 1329 1306 fe = (struct fileEntry *)bh->b_data; 1307 + efe = (struct extendedFileEntry *)bh->b_data; 1330 1308 1331 1309 if (fe->icbTag.strategyType == cpu_to_le16(4096)) { 1332 1310 struct buffer_head *ibh; 1333 1311 1334 - ibh = udf_read_ptagged(inode->i_sb, &iinfo->i_location, 1, 1335 - &ident); 1312 + ibh = udf_read_ptagged(inode->i_sb, iloc, 1, &ident); 1336 1313 if (ident == TAG_IDENT_IE && ibh) { 1337 - struct buffer_head *nbh = NULL; 1338 1314 struct kernel_lb_addr loc; 1339 1315 struct indirectEntry *ie; 1340 1316 1341 1317 ie = (struct indirectEntry *)ibh->b_data; 1342 1318 loc = lelb_to_cpu(ie->indirectICB.extLocation); 1343 1319 1344 - if (ie->indirectICB.extLength && 1345 - (nbh = udf_read_ptagged(inode->i_sb, &loc, 0, 1346 - &ident))) { 1347 - if (ident == TAG_IDENT_FE || 1348 - ident == TAG_IDENT_EFE) { 1349 - memcpy(&iinfo->i_location, 1350 - &loc, 1351 - sizeof(struct kernel_lb_addr)); 1352 - brelse(bh); 1353 - brelse(ibh); 1354 - brelse(nbh); 1355 - __udf_read_inode(inode); 1356 - return; 1320 + if (ie->indirectICB.extLength) { 1321 + brelse(ibh); 1322 + memcpy(&iinfo->i_location, &loc, 1323 + sizeof(struct kernel_lb_addr)); 1324 + if (++indirections > UDF_MAX_ICB_NESTING) { 1325 + udf_err(inode->i_sb, 1326 + "too many ICBs in ICB hierarchy" 1327 + " (max %d supported)\n", 1328 + 
UDF_MAX_ICB_NESTING); 1329 + goto out; 1357 1330 } 1358 - brelse(nbh); 1331 + brelse(bh); 1332 + goto reread; 1359 1333 } 1360 1334 } 1361 1335 brelse(ibh); 1362 1336 } else if (fe->icbTag.strategyType != cpu_to_le16(4)) { 1363 1337 udf_err(inode->i_sb, "unsupported strategy type: %d\n", 1364 1338 le16_to_cpu(fe->icbTag.strategyType)); 1365 - brelse(bh); 1366 - make_bad_inode(inode); 1367 - return; 1339 + goto out; 1368 1340 } 1369 - udf_fill_inode(inode, bh); 1370 - 1371 - brelse(bh); 1372 - } 1373 - 1374 - static void udf_fill_inode(struct inode *inode, struct buffer_head *bh) 1375 - { 1376 - struct fileEntry *fe; 1377 - struct extendedFileEntry *efe; 1378 - struct udf_sb_info *sbi = UDF_SB(inode->i_sb); 1379 - struct udf_inode_info *iinfo = UDF_I(inode); 1380 - unsigned int link_count; 1381 - 1382 - fe = (struct fileEntry *)bh->b_data; 1383 - efe = (struct extendedFileEntry *)bh->b_data; 1384 - 1385 1341 if (fe->icbTag.strategyType == cpu_to_le16(4)) 1386 1342 iinfo->i_strat4096 = 0; 1387 1343 else /* if (fe->icbTag.strategyType == cpu_to_le16(4096)) */ ··· 1374 1378 if (fe->descTag.tagIdent == cpu_to_le16(TAG_IDENT_EFE)) { 1375 1379 iinfo->i_efe = 1; 1376 1380 iinfo->i_use = 0; 1377 - if (udf_alloc_i_data(inode, inode->i_sb->s_blocksize - 1378 - sizeof(struct extendedFileEntry))) { 1379 - make_bad_inode(inode); 1380 - return; 1381 - } 1381 + ret = udf_alloc_i_data(inode, inode->i_sb->s_blocksize - 1382 + sizeof(struct extendedFileEntry)); 1383 + if (ret) 1384 + goto out; 1382 1385 memcpy(iinfo->i_ext.i_data, 1383 1386 bh->b_data + sizeof(struct extendedFileEntry), 1384 1387 inode->i_sb->s_blocksize - ··· 1385 1390 } else if (fe->descTag.tagIdent == cpu_to_le16(TAG_IDENT_FE)) { 1386 1391 iinfo->i_efe = 0; 1387 1392 iinfo->i_use = 0; 1388 - if (udf_alloc_i_data(inode, inode->i_sb->s_blocksize - 1389 - sizeof(struct fileEntry))) { 1390 - make_bad_inode(inode); 1391 - return; 1392 - } 1393 + ret = udf_alloc_i_data(inode, inode->i_sb->s_blocksize - 1394 + 
sizeof(struct fileEntry)); 1395 + if (ret) 1396 + goto out; 1393 1397 memcpy(iinfo->i_ext.i_data, 1394 1398 bh->b_data + sizeof(struct fileEntry), 1395 1399 inode->i_sb->s_blocksize - sizeof(struct fileEntry)); ··· 1398 1404 iinfo->i_lenAlloc = le32_to_cpu( 1399 1405 ((struct unallocSpaceEntry *)bh->b_data)-> 1400 1406 lengthAllocDescs); 1401 - if (udf_alloc_i_data(inode, inode->i_sb->s_blocksize - 1402 - sizeof(struct unallocSpaceEntry))) { 1403 - make_bad_inode(inode); 1404 - return; 1405 - } 1407 + ret = udf_alloc_i_data(inode, inode->i_sb->s_blocksize - 1408 + sizeof(struct unallocSpaceEntry)); 1409 + if (ret) 1410 + goto out; 1406 1411 memcpy(iinfo->i_ext.i_data, 1407 1412 bh->b_data + sizeof(struct unallocSpaceEntry), 1408 1413 inode->i_sb->s_blocksize - 1409 1414 sizeof(struct unallocSpaceEntry)); 1410 - return; 1415 + return 0; 1411 1416 } 1412 1417 1418 + ret = -EIO; 1413 1419 read_lock(&sbi->s_cred_lock); 1414 1420 i_uid_write(inode, le32_to_cpu(fe->uid)); 1415 1421 if (!uid_valid(inode->i_uid) || ··· 1435 1441 read_unlock(&sbi->s_cred_lock); 1436 1442 1437 1443 link_count = le16_to_cpu(fe->fileLinkCount); 1438 - if (!link_count) 1439 - link_count = 1; 1444 + if (!link_count) { 1445 + ret = -ESTALE; 1446 + goto out; 1447 + } 1440 1448 set_nlink(inode, link_count); 1441 1449 1442 1450 inode->i_size = le64_to_cpu(fe->informationLength); ··· 1484 1488 iinfo->i_lenAlloc = le32_to_cpu(efe->lengthAllocDescs); 1485 1489 iinfo->i_checkpoint = le32_to_cpu(efe->checkpoint); 1486 1490 } 1491 + inode->i_generation = iinfo->i_unique; 1487 1492 1488 1493 switch (fe->icbTag.fileType) { 1489 1494 case ICBTAG_FILE_TYPE_DIRECTORY: ··· 1534 1537 default: 1535 1538 udf_err(inode->i_sb, "(ino %ld) failed unknown file type=%d\n", 1536 1539 inode->i_ino, fe->icbTag.fileType); 1537 - make_bad_inode(inode); 1538 - return; 1540 + goto out; 1539 1541 } 1540 1542 if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) { 1541 1543 struct deviceSpec *dsea = ··· 1545 1549 
le32_to_cpu(dsea->minorDeviceIdent))); 1546 1550 /* Developer ID ??? */ 1547 1551 } else 1548 - make_bad_inode(inode); 1552 + goto out; 1549 1553 } 1554 + ret = 0; 1555 + out: 1556 + brelse(bh); 1557 + return ret; 1550 1558 } 1551 1559 1552 1560 static int udf_alloc_i_data(struct inode *inode, size_t size) ··· 1664 1664 FE_PERM_U_DELETE | FE_PERM_U_CHATTR)); 1665 1665 fe->permissions = cpu_to_le32(udfperms); 1666 1666 1667 - if (S_ISDIR(inode->i_mode)) 1667 + if (S_ISDIR(inode->i_mode) && inode->i_nlink > 0) 1668 1668 fe->fileLinkCount = cpu_to_le16(inode->i_nlink - 1); 1669 1669 else 1670 1670 fe->fileLinkCount = cpu_to_le16(inode->i_nlink); ··· 1830 1830 { 1831 1831 unsigned long block = udf_get_lb_pblock(sb, ino, 0); 1832 1832 struct inode *inode = iget_locked(sb, block); 1833 + int err; 1833 1834 1834 1835 if (!inode) 1835 - return NULL; 1836 + return ERR_PTR(-ENOMEM); 1836 1837 1837 - if (inode->i_state & I_NEW) { 1838 - memcpy(&UDF_I(inode)->i_location, ino, sizeof(struct kernel_lb_addr)); 1839 - __udf_read_inode(inode); 1840 - unlock_new_inode(inode); 1838 + if (!(inode->i_state & I_NEW)) 1839 + return inode; 1840 + 1841 + memcpy(&UDF_I(inode)->i_location, ino, sizeof(struct kernel_lb_addr)); 1842 + err = udf_read_inode(inode); 1843 + if (err < 0) { 1844 + iget_failed(inode); 1845 + return ERR_PTR(err); 1841 1846 } 1842 - 1843 - if (is_bad_inode(inode)) 1844 - goto out_iput; 1845 - 1846 - if (ino->logicalBlockNum >= UDF_SB(sb)-> 1847 - s_partmaps[ino->partitionReferenceNum].s_partition_len) { 1848 - udf_debug("block=%d, partition=%d out of range\n", 1849 - ino->logicalBlockNum, ino->partitionReferenceNum); 1850 - make_bad_inode(inode); 1851 - goto out_iput; 1852 - } 1847 + unlock_new_inode(inode); 1853 1848 1854 1849 return inode; 1855 - 1856 - out_iput: 1857 - iput(inode); 1858 - return NULL; 1859 1850 } 1860 1851 1861 1852 int udf_add_aext(struct inode *inode, struct extent_position *epos,
+54 -103
fs/udf/namei.c
··· 270 270 NULL, 0), 271 271 }; 272 272 inode = udf_iget(dir->i_sb, lb); 273 - if (!inode) { 274 - return ERR_PTR(-EACCES); 275 - } 273 + if (IS_ERR(inode)) 274 + return inode; 276 275 } else 277 276 #endif /* UDF_RECOVERY */ 278 277 ··· 284 285 285 286 loc = lelb_to_cpu(cfi.icb.extLocation); 286 287 inode = udf_iget(dir->i_sb, &loc); 287 - if (!inode) { 288 - return ERR_PTR(-EACCES); 289 - } 288 + if (IS_ERR(inode)) 289 + return ERR_CAST(inode); 290 290 } 291 291 292 292 return d_splice_alias(inode, dentry); ··· 548 550 return udf_write_fi(inode, cfi, fi, fibh, NULL, NULL); 549 551 } 550 552 551 - static int udf_create(struct inode *dir, struct dentry *dentry, umode_t mode, 552 - bool excl) 553 + static int udf_add_nondir(struct dentry *dentry, struct inode *inode) 553 554 { 555 + struct udf_inode_info *iinfo = UDF_I(inode); 556 + struct inode *dir = dentry->d_parent->d_inode; 554 557 struct udf_fileident_bh fibh; 555 - struct inode *inode; 556 558 struct fileIdentDesc cfi, *fi; 557 559 int err; 558 - struct udf_inode_info *iinfo; 559 - 560 - inode = udf_new_inode(dir, mode, &err); 561 - if (!inode) { 562 - return err; 563 - } 564 - 565 - iinfo = UDF_I(inode); 566 - if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) 567 - inode->i_data.a_ops = &udf_adinicb_aops; 568 - else 569 - inode->i_data.a_ops = &udf_aops; 570 - inode->i_op = &udf_file_inode_operations; 571 - inode->i_fop = &udf_file_operations; 572 - mark_inode_dirty(inode); 573 560 574 561 fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err); 575 - if (!fi) { 562 + if (unlikely(!fi)) { 576 563 inode_dec_link_count(inode); 564 + unlock_new_inode(inode); 577 565 iput(inode); 578 566 return err; 579 567 } ··· 573 589 if (fibh.sbh != fibh.ebh) 574 590 brelse(fibh.ebh); 575 591 brelse(fibh.sbh); 592 + unlock_new_inode(inode); 576 593 d_instantiate(dentry, inode); 577 594 578 595 return 0; 579 596 } 580 597 581 - static int udf_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode) 598 + static int 
udf_create(struct inode *dir, struct dentry *dentry, umode_t mode, 599 + bool excl) 582 600 { 583 - struct inode *inode; 584 - struct udf_inode_info *iinfo; 585 - int err; 601 + struct inode *inode = udf_new_inode(dir, mode); 586 602 587 - inode = udf_new_inode(dir, mode, &err); 588 - if (!inode) 589 - return err; 603 + if (IS_ERR(inode)) 604 + return PTR_ERR(inode); 590 605 591 - iinfo = UDF_I(inode); 592 - if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) 606 + if (UDF_I(inode)->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) 593 607 inode->i_data.a_ops = &udf_adinicb_aops; 594 608 else 595 609 inode->i_data.a_ops = &udf_aops; ··· 595 613 inode->i_fop = &udf_file_operations; 596 614 mark_inode_dirty(inode); 597 615 616 + return udf_add_nondir(dentry, inode); 617 + } 618 + 619 + static int udf_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode) 620 + { 621 + struct inode *inode = udf_new_inode(dir, mode); 622 + 623 + if (IS_ERR(inode)) 624 + return PTR_ERR(inode); 625 + 626 + if (UDF_I(inode)->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) 627 + inode->i_data.a_ops = &udf_adinicb_aops; 628 + else 629 + inode->i_data.a_ops = &udf_aops; 630 + inode->i_op = &udf_file_inode_operations; 631 + inode->i_fop = &udf_file_operations; 632 + mark_inode_dirty(inode); 598 633 d_tmpfile(dentry, inode); 634 + unlock_new_inode(inode); 599 635 return 0; 600 636 } 601 637 ··· 621 621 dev_t rdev) 622 622 { 623 623 struct inode *inode; 624 - struct udf_fileident_bh fibh; 625 - struct fileIdentDesc cfi, *fi; 626 - int err; 627 - struct udf_inode_info *iinfo; 628 624 629 625 if (!old_valid_dev(rdev)) 630 626 return -EINVAL; 631 627 632 - err = -EIO; 633 - inode = udf_new_inode(dir, mode, &err); 634 - if (!inode) 635 - goto out; 628 + inode = udf_new_inode(dir, mode); 629 + if (IS_ERR(inode)) 630 + return PTR_ERR(inode); 636 631 637 - iinfo = UDF_I(inode); 638 632 init_special_inode(inode, mode, rdev); 639 - fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err); 640 - if (!fi) { 641 - 
inode_dec_link_count(inode); 642 - iput(inode); 643 - return err; 644 - } 645 - cfi.icb.extLength = cpu_to_le32(inode->i_sb->s_blocksize); 646 - cfi.icb.extLocation = cpu_to_lelb(iinfo->i_location); 647 - *(__le32 *)((struct allocDescImpUse *)cfi.icb.impUse)->impUse = 648 - cpu_to_le32(iinfo->i_unique & 0x00000000FFFFFFFFUL); 649 - udf_write_fi(dir, &cfi, fi, &fibh, NULL, NULL); 650 - if (UDF_I(dir)->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) 651 - mark_inode_dirty(dir); 652 - mark_inode_dirty(inode); 653 - 654 - if (fibh.sbh != fibh.ebh) 655 - brelse(fibh.ebh); 656 - brelse(fibh.sbh); 657 - d_instantiate(dentry, inode); 658 - err = 0; 659 - 660 - out: 661 - return err; 633 + return udf_add_nondir(dentry, inode); 662 634 } 663 635 664 636 static int udf_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) ··· 642 670 struct udf_inode_info *dinfo = UDF_I(dir); 643 671 struct udf_inode_info *iinfo; 644 672 645 - err = -EIO; 646 - inode = udf_new_inode(dir, S_IFDIR | mode, &err); 647 - if (!inode) 648 - goto out; 673 + inode = udf_new_inode(dir, S_IFDIR | mode); 674 + if (IS_ERR(inode)) 675 + return PTR_ERR(inode); 649 676 650 677 iinfo = UDF_I(inode); 651 678 inode->i_op = &udf_dir_inode_operations; ··· 652 681 fi = udf_add_entry(inode, NULL, &fibh, &cfi, &err); 653 682 if (!fi) { 654 683 inode_dec_link_count(inode); 684 + unlock_new_inode(inode); 655 685 iput(inode); 656 686 goto out; 657 687 } ··· 671 699 if (!fi) { 672 700 clear_nlink(inode); 673 701 mark_inode_dirty(inode); 702 + unlock_new_inode(inode); 674 703 iput(inode); 675 704 goto out; 676 705 } ··· 683 710 udf_write_fi(dir, &cfi, fi, &fibh, NULL, NULL); 684 711 inc_nlink(dir); 685 712 mark_inode_dirty(dir); 713 + unlock_new_inode(inode); 686 714 d_instantiate(dentry, inode); 687 715 if (fibh.sbh != fibh.ebh) 688 716 brelse(fibh.ebh); ··· 850 876 static int udf_symlink(struct inode *dir, struct dentry *dentry, 851 877 const char *symname) 852 878 { 853 - struct inode *inode; 879 + struct inode *inode 
= udf_new_inode(dir, S_IFLNK | S_IRWXUGO); 854 880 struct pathComponent *pc; 855 881 const char *compstart; 856 - struct udf_fileident_bh fibh; 857 882 struct extent_position epos = {}; 858 883 int eoffset, elen = 0; 859 - struct fileIdentDesc *fi; 860 - struct fileIdentDesc cfi; 861 884 uint8_t *ea; 862 885 int err; 863 886 int block; ··· 863 892 struct udf_inode_info *iinfo; 864 893 struct super_block *sb = dir->i_sb; 865 894 866 - inode = udf_new_inode(dir, S_IFLNK | S_IRWXUGO, &err); 867 - if (!inode) 868 - goto out; 895 + if (IS_ERR(inode)) 896 + return PTR_ERR(inode); 869 897 870 898 iinfo = UDF_I(inode); 871 899 down_write(&iinfo->i_data_sem); ··· 982 1012 mark_inode_dirty(inode); 983 1013 up_write(&iinfo->i_data_sem); 984 1014 985 - fi = udf_add_entry(dir, dentry, &fibh, &cfi, &err); 986 - if (!fi) 987 - goto out_fail; 988 - cfi.icb.extLength = cpu_to_le32(sb->s_blocksize); 989 - cfi.icb.extLocation = cpu_to_lelb(iinfo->i_location); 990 - if (UDF_SB(inode->i_sb)->s_lvid_bh) { 991 - *(__le32 *)((struct allocDescImpUse *)cfi.icb.impUse)->impUse = 992 - cpu_to_le32(lvid_get_unique_id(sb)); 993 - } 994 - udf_write_fi(dir, &cfi, fi, &fibh, NULL, NULL); 995 - if (UDF_I(dir)->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) 996 - mark_inode_dirty(dir); 997 - if (fibh.sbh != fibh.ebh) 998 - brelse(fibh.ebh); 999 - brelse(fibh.sbh); 1000 - d_instantiate(dentry, inode); 1001 - err = 0; 1002 - 1015 + err = udf_add_nondir(dentry, inode); 1003 1016 out: 1004 1017 kfree(name); 1005 1018 return err; 1006 1019 1007 1020 out_no_entry: 1008 1021 up_write(&iinfo->i_data_sem); 1009 - out_fail: 1010 1022 inode_dec_link_count(inode); 1023 + unlock_new_inode(inode); 1011 1024 iput(inode); 1012 1025 goto out; 1013 1026 } ··· 1175 1222 struct udf_fileident_bh fibh; 1176 1223 1177 1224 if (!udf_find_entry(child->d_inode, &dotdot, &fibh, &cfi)) 1178 - goto out_unlock; 1225 + return ERR_PTR(-EACCES); 1179 1226 1180 1227 if (fibh.sbh != fibh.ebh) 1181 1228 brelse(fibh.ebh); ··· 1183 1230 1184 
1231 tloc = lelb_to_cpu(cfi.icb.extLocation); 1185 1232 inode = udf_iget(child->d_inode->i_sb, &tloc); 1186 - if (!inode) 1187 - goto out_unlock; 1233 + if (IS_ERR(inode)) 1234 + return ERR_CAST(inode); 1188 1235 1189 1236 return d_obtain_alias(inode); 1190 - out_unlock: 1191 - return ERR_PTR(-EACCES); 1192 1237 } 1193 1238 1194 1239 ··· 1203 1252 loc.partitionReferenceNum = partref; 1204 1253 inode = udf_iget(sb, &loc); 1205 1254 1206 - if (inode == NULL) 1207 - return ERR_PTR(-ENOMEM); 1255 + if (IS_ERR(inode)) 1256 + return ERR_CAST(inode); 1208 1257 1209 1258 if (generation && inode->i_generation != generation) { 1210 1259 iput(inode);
+41 -28
fs/udf/super.c
··· 961 961 962 962 metadata_fe = udf_iget(sb, &addr); 963 963 964 - if (metadata_fe == NULL) 964 + if (IS_ERR(metadata_fe)) { 965 965 udf_warn(sb, "metadata inode efe not found\n"); 966 - else if (UDF_I(metadata_fe)->i_alloc_type != ICBTAG_FLAG_AD_SHORT) { 966 + return metadata_fe; 967 + } 968 + if (UDF_I(metadata_fe)->i_alloc_type != ICBTAG_FLAG_AD_SHORT) { 967 969 udf_warn(sb, "metadata inode efe does not have short allocation descriptors!\n"); 968 970 iput(metadata_fe); 969 - metadata_fe = NULL; 971 + return ERR_PTR(-EIO); 970 972 } 971 973 972 974 return metadata_fe; ··· 980 978 struct udf_part_map *map; 981 979 struct udf_meta_data *mdata; 982 980 struct kernel_lb_addr addr; 981 + struct inode *fe; 983 982 984 983 map = &sbi->s_partmaps[partition]; 985 984 mdata = &map->s_type_specific.s_metadata; ··· 989 986 udf_debug("Metadata file location: block = %d part = %d\n", 990 987 mdata->s_meta_file_loc, map->s_partition_num); 991 988 992 - mdata->s_metadata_fe = udf_find_metadata_inode_efe(sb, 993 - mdata->s_meta_file_loc, map->s_partition_num); 994 - 995 - if (mdata->s_metadata_fe == NULL) { 989 + fe = udf_find_metadata_inode_efe(sb, mdata->s_meta_file_loc, 990 + map->s_partition_num); 991 + if (IS_ERR(fe)) { 996 992 /* mirror file entry */ 997 993 udf_debug("Mirror metadata file location: block = %d part = %d\n", 998 994 mdata->s_mirror_file_loc, map->s_partition_num); 999 995 1000 - mdata->s_mirror_fe = udf_find_metadata_inode_efe(sb, 1001 - mdata->s_mirror_file_loc, map->s_partition_num); 996 + fe = udf_find_metadata_inode_efe(sb, mdata->s_mirror_file_loc, 997 + map->s_partition_num); 1002 998 1003 - if (mdata->s_mirror_fe == NULL) { 999 + if (IS_ERR(fe)) { 1004 1000 udf_err(sb, "Both metadata and mirror metadata inode efe can not found\n"); 1005 - return -EIO; 1001 + return PTR_ERR(fe); 1006 1002 } 1007 - } 1003 + mdata->s_mirror_fe = fe; 1004 + } else 1005 + mdata->s_metadata_fe = fe; 1006 + 1008 1007 1009 1008 /* 1010 1009 * bitmap file entry ··· 1020 1015 
udf_debug("Bitmap file location: block = %d part = %d\n", 1021 1016 addr.logicalBlockNum, addr.partitionReferenceNum); 1022 1017 1023 - mdata->s_bitmap_fe = udf_iget(sb, &addr); 1024 - if (mdata->s_bitmap_fe == NULL) { 1018 + fe = udf_iget(sb, &addr); 1019 + if (IS_ERR(fe)) { 1025 1020 if (sb->s_flags & MS_RDONLY) 1026 1021 udf_warn(sb, "bitmap inode efe not found but it's ok since the disc is mounted read-only\n"); 1027 1022 else { 1028 1023 udf_err(sb, "bitmap inode efe not found and attempted read-write mount\n"); 1029 - return -EIO; 1024 + return PTR_ERR(fe); 1030 1025 } 1031 - } 1026 + } else 1027 + mdata->s_bitmap_fe = fe; 1032 1028 } 1033 1029 1034 1030 udf_debug("udf_load_metadata_files Ok\n"); ··· 1117 1111 phd->unallocSpaceTable.extPosition), 1118 1112 .partitionReferenceNum = p_index, 1119 1113 }; 1114 + struct inode *inode; 1120 1115 1121 - map->s_uspace.s_table = udf_iget(sb, &loc); 1122 - if (!map->s_uspace.s_table) { 1116 + inode = udf_iget(sb, &loc); 1117 + if (IS_ERR(inode)) { 1123 1118 udf_debug("cannot load unallocSpaceTable (part %d)\n", 1124 1119 p_index); 1125 - return -EIO; 1120 + return PTR_ERR(inode); 1126 1121 } 1122 + map->s_uspace.s_table = inode; 1127 1123 map->s_partition_flags |= UDF_PART_FLAG_UNALLOC_TABLE; 1128 1124 udf_debug("unallocSpaceTable (part %d) @ %ld\n", 1129 1125 p_index, map->s_uspace.s_table->i_ino); ··· 1152 1144 phd->freedSpaceTable.extPosition), 1153 1145 .partitionReferenceNum = p_index, 1154 1146 }; 1147 + struct inode *inode; 1155 1148 1156 - map->s_fspace.s_table = udf_iget(sb, &loc); 1157 - if (!map->s_fspace.s_table) { 1149 + inode = udf_iget(sb, &loc); 1150 + if (IS_ERR(inode)) { 1158 1151 udf_debug("cannot load freedSpaceTable (part %d)\n", 1159 1152 p_index); 1160 - return -EIO; 1153 + return PTR_ERR(inode); 1161 1154 } 1162 - 1155 + map->s_fspace.s_table = inode; 1163 1156 map->s_partition_flags |= UDF_PART_FLAG_FREED_TABLE; 1164 1157 udf_debug("freedSpaceTable (part %d) @ %ld\n", 1165 1158 p_index, 
map->s_fspace.s_table->i_ino); ··· 1187 1178 struct udf_part_map *map = &sbi->s_partmaps[p_index]; 1188 1179 sector_t vat_block; 1189 1180 struct kernel_lb_addr ino; 1181 + struct inode *inode; 1190 1182 1191 1183 /* 1192 1184 * VAT file entry is in the last recorded block. Some broken disks have ··· 1196 1186 ino.partitionReferenceNum = type1_index; 1197 1187 for (vat_block = start_block; 1198 1188 vat_block >= map->s_partition_root && 1199 - vat_block >= start_block - 3 && 1200 - !sbi->s_vat_inode; vat_block--) { 1189 + vat_block >= start_block - 3; vat_block--) { 1201 1190 ino.logicalBlockNum = vat_block - map->s_partition_root; 1202 - sbi->s_vat_inode = udf_iget(sb, &ino); 1191 + inode = udf_iget(sb, &ino); 1192 + if (!IS_ERR(inode)) { 1193 + sbi->s_vat_inode = inode; 1194 + break; 1195 + } 1203 1196 } 1204 1197 } 1205 1198 ··· 2218 2205 /* assign inodes by physical block number */ 2219 2206 /* perhaps it's not extensible enough, but for now ... */ 2220 2207 inode = udf_iget(sb, &rootdir); 2221 - if (!inode) { 2208 + if (IS_ERR(inode)) { 2222 2209 udf_err(sb, "Error in udf_iget, block=%d, partition=%d\n", 2223 2210 rootdir.logicalBlockNum, rootdir.partitionReferenceNum); 2224 - ret = -EIO; 2211 + ret = PTR_ERR(inode); 2225 2212 goto error_out; 2226 2213 } 2227 2214
+1 -2
fs/udf/udfdecl.h
··· 143 143 extern struct buffer_head *udf_expand_dir_adinicb(struct inode *, int *, int *); 144 144 extern struct buffer_head *udf_bread(struct inode *, int, int, int *); 145 145 extern int udf_setsize(struct inode *, loff_t); 146 - extern void udf_read_inode(struct inode *); 147 146 extern void udf_evict_inode(struct inode *); 148 147 extern int udf_write_inode(struct inode *, struct writeback_control *wbc); 149 148 extern long udf_block_map(struct inode *, sector_t); ··· 208 209 209 210 /* ialloc.c */ 210 211 extern void udf_free_inode(struct inode *); 211 - extern struct inode *udf_new_inode(struct inode *, umode_t, int *); 212 + extern struct inode *udf_new_inode(struct inode *, umode_t); 212 213 213 214 /* truncate.c */ 214 215 extern void udf_truncate_tail_extent(struct inode *);
+1 -3
include/acpi/acpi_bus.h
··· 204 204 u32 match_driver:1; 205 205 u32 initialized:1; 206 206 u32 visited:1; 207 - u32 no_hotplug:1; 208 207 u32 hotplug_notify:1; 209 208 u32 is_dock_station:1; 210 - u32 reserved:22; 209 + u32 reserved:23; 211 210 }; 212 211 213 212 /* File System */ ··· 410 411 int acpi_bus_get_private_data(acpi_handle, void **); 411 412 int acpi_bus_attach_private_data(acpi_handle, void *); 412 413 void acpi_bus_detach_private_data(acpi_handle); 413 - void acpi_bus_no_hotplug(acpi_handle handle); 414 414 extern int acpi_notifier_call_chain(struct acpi_device *, u32, u32); 415 415 extern int register_acpi_notifier(struct notifier_block *); 416 416 extern int unregister_acpi_notifier(struct notifier_block *);
+13
include/crypto/drbg.h
··· 162 162 163 163 static inline size_t drbg_max_addtl(struct drbg_state *drbg) 164 164 { 165 + #if (__BITS_PER_LONG == 32) 166 + /* 167 + * SP800-90A allows smaller maximum numbers to be returned -- we 168 + * return SIZE_MAX - 1 to allow the verification of the enforcement 169 + * of this value in drbg_healthcheck_sanity. 170 + */ 171 + return (SIZE_MAX - 1); 172 + #else 165 173 return (1UL<<(drbg->core->max_addtllen)); 174 + #endif 166 175 } 167 176 168 177 static inline size_t drbg_max_requests(struct drbg_state *drbg) 169 178 { 179 + #if (__BITS_PER_LONG == 32) 180 + return SIZE_MAX; 181 + #else 170 182 return (1UL<<(drbg->core->max_req)); 183 + #endif 171 184 } 172 185 173 186 /*
+1
include/linux/dcache.h
··· 55 55 #define QSTR_INIT(n,l) { { { .len = l } }, .name = n } 56 56 #define hashlen_hash(hashlen) ((u32) (hashlen)) 57 57 #define hashlen_len(hashlen) ((u32)((hashlen) >> 32)) 58 + #define hashlen_create(hash,len) (((u64)(len)<<32)|(u32)(hash)) 58 59 59 60 struct dentry_stat_t { 60 61 long nr_dentry;
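The new `hashlen_create()` macro completes the pack/unpack trio for the combined hash+length word used by the dcache. A minimal standalone sketch of that packing convention (length in the high 32 bits, hash in the low 32 bits), written as plain functions rather than the kernel macros:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the dcache hash_len packing: length lives in the high
 * 32 bits, the name hash in the low 32 bits of one 64-bit word. */
static inline uint64_t hashlen_create(uint32_t hash, uint32_t len)
{
	return ((uint64_t)len << 32) | hash;
}

static inline uint32_t hashlen_hash(uint64_t hashlen)
{
	return (uint32_t)hashlen;
}

static inline uint32_t hashlen_len(uint64_t hashlen)
{
	return (uint32_t)(hashlen >> 32);
}
```

The two accessors exactly invert `hashlen_create()`, so a pack/unpack round-trip recovers both fields.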
+4
include/linux/hash.h
··· 37 37 { 38 38 u64 hash = val; 39 39 40 + #if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64 41 + hash = hash * GOLDEN_RATIO_PRIME_64; 42 + #else 40 43 /* Sigh, gcc can't optimise this alone like it does for 32 bits. */ 41 44 u64 n = hash; 42 45 n <<= 18; ··· 54 51 hash += n; 55 52 n <<= 2; 56 53 hash += n; 54 + #endif 57 55 58 56 /* High bits are more random, so use them. */ 59 57 return hash >> (64 - bits);
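The hash.h change lets architectures with a fast 64-bit multiplier replace the shift/add chain in `hash_64()` with a single multiply by `GOLDEN_RATIO_PRIME_64`. The two paths are arithmetically identical; a standalone sketch checking that equivalence (the full chain is reconstructed here from the kernel source, since the hunk above elides its middle):

```c
#include <assert.h>
#include <stdint.h>

/* GOLDEN_RATIO_PRIME_64 as defined in the kernel headers. */
#define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL

/* Multiply-free path used when no fast 64-bit multiplier exists:
 * the running shifts of n accumulate exactly the bit pattern of
 * GOLDEN_RATIO_PRIME_64 into hash. */
static uint64_t hash_64_shift_add(uint64_t hash)
{
	uint64_t n = hash;

	n <<= 18; hash -= n;
	n <<= 33; hash -= n;
	n <<= 3;  hash += n;
	n <<= 3;  hash -= n;
	n <<= 4;  hash += n;
	n <<= 2;  hash += n;
	return hash;
}

/* Single-multiply path enabled by CONFIG_ARCH_HAS_FAST_MULTIPLIER. */
static uint64_t hash_64_mul(uint64_t hash)
{
	return hash * GOLDEN_RATIO_PRIME_64;
}
```

Both functions compute `hash * 0x9e37fffffffc0001` modulo 2^64, so the config option is purely a performance choice.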
+3 -1
include/linux/iio/trigger.h
··· 84 84 put_device(&trig->dev); 85 85 } 86 86 87 - static inline void iio_trigger_get(struct iio_trigger *trig) 87 + static inline struct iio_trigger *iio_trigger_get(struct iio_trigger *trig) 88 88 { 89 89 get_device(&trig->dev); 90 90 __module_get(trig->ops->owner); 91 + 92 + return trig; 91 93 } 92 94 93 95 /**
-12
include/linux/jiffies.h
··· 258 258 #define SEC_JIFFIE_SC (32 - SHIFT_HZ) 259 259 #endif 260 260 #define NSEC_JIFFIE_SC (SEC_JIFFIE_SC + 29) 261 - #define USEC_JIFFIE_SC (SEC_JIFFIE_SC + 19) 262 261 #define SEC_CONVERSION ((unsigned long)((((u64)NSEC_PER_SEC << SEC_JIFFIE_SC) +\ 263 262 TICK_NSEC -1) / (u64)TICK_NSEC)) 264 263 265 264 #define NSEC_CONVERSION ((unsigned long)((((u64)1 << NSEC_JIFFIE_SC) +\ 266 265 TICK_NSEC -1) / (u64)TICK_NSEC)) 267 - #define USEC_CONVERSION \ 268 - ((unsigned long)((((u64)NSEC_PER_USEC << USEC_JIFFIE_SC) +\ 269 - TICK_NSEC -1) / (u64)TICK_NSEC)) 270 - /* 271 - * USEC_ROUND is used in the timeval to jiffie conversion. See there 272 - * for more details. It is the scaled resolution rounding value. Note 273 - * that it is a 64-bit value. Since, when it is applied, we are already 274 - * in jiffies (albit scaled), it is nothing but the bits we will shift 275 - * off. 276 - */ 277 - #define USEC_ROUND (u64)(((u64)1 << USEC_JIFFIE_SC) - 1) 278 266 /* 279 267 * The maximum jiffie value is (MAX_INT >> 1). Here we translate that 280 268 * into seconds. The 64-bit case will overflow if we are not careful,
+1
include/linux/mlx4/device.h
··· 215 215 MLX4_BMME_FLAG_TYPE_2_WIN = 1 << 9, 216 216 MLX4_BMME_FLAG_RESERVED_LKEY = 1 << 10, 217 217 MLX4_BMME_FLAG_FAST_REG_WR = 1 << 11, 218 + MLX4_BMME_FLAG_VSD_INIT2RTR = 1 << 28, 218 219 }; 219 220 220 221 enum mlx4_event {
+10 -2
include/linux/mlx4/qp.h
··· 56 56 MLX4_QP_OPTPAR_RNR_RETRY = 1 << 13, 57 57 MLX4_QP_OPTPAR_ACK_TIMEOUT = 1 << 14, 58 58 MLX4_QP_OPTPAR_SCHED_QUEUE = 1 << 16, 59 - MLX4_QP_OPTPAR_COUNTER_INDEX = 1 << 20 59 + MLX4_QP_OPTPAR_COUNTER_INDEX = 1 << 20, 60 + MLX4_QP_OPTPAR_VLAN_STRIPPING = 1 << 21, 60 61 }; 61 62 62 63 enum mlx4_qp_state { ··· 424 423 425 424 enum mlx4_update_qp_attr { 426 425 MLX4_UPDATE_QP_SMAC = 1 << 0, 426 + MLX4_UPDATE_QP_VSD = 1 << 2, 427 + MLX4_UPDATE_QP_SUPPORTED_ATTRS = (1 << 2) - 1 428 + }; 429 + 430 + enum mlx4_update_qp_params_flags { 431 + MLX4_UPDATE_QP_PARAMS_FLAGS_VSD_ENABLE = 1 << 0, 427 432 }; 428 433 429 434 struct mlx4_update_qp_params { 430 435 u8 smac_index; 436 + u32 flags; 431 437 }; 432 438 433 - int mlx4_update_qp(struct mlx4_dev *dev, struct mlx4_qp *qp, 439 + int mlx4_update_qp(struct mlx4_dev *dev, u32 qpn, 434 440 enum mlx4_update_qp_attr attr, 435 441 struct mlx4_update_qp_params *params); 436 442 int mlx4_qp_modify(struct mlx4_dev *dev, struct mlx4_mtt *mtt,
+6
include/linux/pci.h
··· 303 303 D3cold, not set for devices 304 304 powered on/off by the 305 305 corresponding bridge */ 306 + unsigned int ignore_hotplug:1; /* Ignore hotplug events */ 306 307 unsigned int d3_delay; /* D3->D0 transition time in ms */ 307 308 unsigned int d3cold_delay; /* D3cold->D0 transition time in ms */ 308 309 ··· 1021 1020 bool pci_dev_run_wake(struct pci_dev *dev); 1022 1021 bool pci_check_pme_status(struct pci_dev *dev); 1023 1022 void pci_pme_wakeup_bus(struct pci_bus *bus); 1023 + 1024 + static inline void pci_ignore_hotplug(struct pci_dev *dev) 1025 + { 1026 + dev->ignore_hotplug = 1; 1027 + } 1024 1028 1025 1029 static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state, 1026 1030 bool enable)
+2
include/linux/vga_switcheroo.h
··· 64 64 void vga_switcheroo_set_dynamic_switch(struct pci_dev *pdev, enum vga_switcheroo_state dynamic); 65 65 66 66 int vga_switcheroo_init_domain_pm_ops(struct device *dev, struct dev_pm_domain *domain); 67 + void vga_switcheroo_fini_domain_pm_ops(struct device *dev); 67 68 int vga_switcheroo_init_domain_pm_optimus_hdmi_audio(struct device *dev, struct dev_pm_domain *domain); 68 69 #else 69 70 ··· 83 82 static inline void vga_switcheroo_set_dynamic_switch(struct pci_dev *pdev, enum vga_switcheroo_state dynamic) {} 84 83 85 84 static inline int vga_switcheroo_init_domain_pm_ops(struct device *dev, struct dev_pm_domain *domain) { return -EINVAL; } 85 + static inline void vga_switcheroo_fini_domain_pm_ops(struct device *dev) {} 86 86 static inline int vga_switcheroo_init_domain_pm_optimus_hdmi_audio(struct device *dev, struct dev_pm_domain *domain) { return -EINVAL; } 87 87 88 88 #endif
-2
include/linux/vgaarb.h
··· 182 182 * vga_get()... 183 183 */ 184 184 185 - #ifndef __ARCH_HAS_VGA_DEFAULT_DEVICE 186 185 #ifdef CONFIG_VGA_ARB 187 186 extern struct pci_dev *vga_default_device(void); 188 187 extern void vga_set_default_device(struct pci_dev *pdev); 189 188 #else 190 189 static inline struct pci_dev *vga_default_device(void) { return NULL; }; 191 190 static inline void vga_set_default_device(struct pci_dev *pdev) { }; 192 - #endif 193 191 #endif 194 192 195 193 /**
+1 -1
include/linux/workqueue.h
··· 419 419 alloc_workqueue("%s", WQ_FREEZABLE | WQ_UNBOUND | WQ_MEM_RECLAIM, \ 420 420 1, (name)) 421 421 #define create_singlethread_workqueue(name) \ 422 - alloc_workqueue("%s", WQ_UNBOUND | WQ_MEM_RECLAIM, 1, (name)) 422 + alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, name) 423 423 424 424 extern void destroy_workqueue(struct workqueue_struct *wq); 425 425
+1
include/net/addrconf.h
··· 204 204 205 205 int __ipv6_dev_ac_inc(struct inet6_dev *idev, const struct in6_addr *addr); 206 206 int __ipv6_dev_ac_dec(struct inet6_dev *idev, const struct in6_addr *addr); 207 + void ipv6_ac_destroy_dev(struct inet6_dev *idev); 207 208 bool ipv6_chk_acast_addr(struct net *net, struct net_device *dev, 208 209 const struct in6_addr *addr); 209 210 bool ipv6_chk_acast_addr_src(struct net *net, struct net_device *dev,
+15 -1
include/net/dst.h
··· 480 480 /* Flags for xfrm_lookup flags argument. */ 481 481 enum { 482 482 XFRM_LOOKUP_ICMP = 1 << 0, 483 + XFRM_LOOKUP_QUEUE = 1 << 1, 483 484 }; 484 485 485 486 struct flowi; ··· 491 490 int flags) 492 491 { 493 492 return dst_orig; 494 - } 493 + } 494 + 495 + static inline struct dst_entry *xfrm_lookup_route(struct net *net, 496 + struct dst_entry *dst_orig, 497 + const struct flowi *fl, 498 + struct sock *sk, 499 + int flags) 500 + { 501 + return dst_orig; 502 + } 495 503 496 504 static inline struct xfrm_state *dst_xfrm(const struct dst_entry *dst) 497 505 { ··· 511 501 struct dst_entry *xfrm_lookup(struct net *net, struct dst_entry *dst_orig, 512 502 const struct flowi *fl, struct sock *sk, 513 503 int flags); 504 + 505 + struct dst_entry *xfrm_lookup_route(struct net *net, struct dst_entry *dst_orig, 506 + const struct flowi *fl, struct sock *sk, 507 + int flags); 514 508 515 509 /* skb attached with this dst needs transformation if dst->xfrm is valid */ 516 510 static inline struct xfrm_state *dst_xfrm(const struct dst_entry *dst)
+8
include/net/genetlink.h
··· 394 394 return netlink_set_err(net->genl_sock, portid, group, code); 395 395 } 396 396 397 + static inline int genl_has_listeners(struct genl_family *family, 398 + struct sock *sk, unsigned int group) 399 + { 400 + if (WARN_ON_ONCE(group >= family->n_mcgrps)) 401 + return -EINVAL; 402 + group = family->mcgrp_offset + group; 403 + return netlink_has_listeners(sk, group); 404 + } 397 405 #endif /* __NET_GENERIC_NETLINK_H */
+2 -1
include/net/sch_generic.h
··· 232 232 unsigned int pkt_len; 233 233 u16 slave_dev_queue_mapping; 234 234 u16 _pad; 235 - unsigned char data[24]; 235 + #define QDISC_CB_PRIV_LEN 20 236 + unsigned char data[QDISC_CB_PRIV_LEN]; 236 237 }; 237 238 238 239 static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
+1 -1
include/scsi/scsi_tcq.h
··· 68 68 return; 69 69 70 70 if (!shost_use_blk_mq(sdev->host) && 71 - blk_queue_tagged(sdev->request_queue)) 71 + !blk_queue_tagged(sdev->request_queue)) 72 72 blk_queue_init_tags(sdev->request_queue, depth, 73 73 sdev->host->bqt); 74 74
+2
include/uapi/linux/Kbuild
··· 241 241 header-y += mdio.h 242 242 header-y += media.h 243 243 header-y += mei.h 244 + header-y += memfd.h 244 245 header-y += mempolicy.h 245 246 header-y += meye.h 246 247 header-y += mic_common.h ··· 397 396 header-y += unistd.h 398 397 header-y += unix_diag.h 399 398 header-y += usbdevice_fs.h 399 + header-y += usbip.h 400 400 header-y += utime.h 401 401 header-y += utsname.h 402 402 header-y += uuid.h
+1
include/uapi/linux/input.h
··· 165 165 #define INPUT_PROP_BUTTONPAD 0x02 /* has button(s) under pad */ 166 166 #define INPUT_PROP_SEMI_MT 0x03 /* touch rectangle only */ 167 167 #define INPUT_PROP_TOPBUTTONPAD 0x04 /* softbuttons at top of pad */ 168 + #define INPUT_PROP_POINTING_STICK 0x05 /* is a pointing stick */ 168 169 169 170 #define INPUT_PROP_MAX 0x1f 170 171 #define INPUT_PROP_CNT (INPUT_PROP_MAX + 1)
+3
include/xen/interface/features.h
··· 53 53 /* operation as Dom0 is supported */ 54 54 #define XENFEAT_dom0 11 55 55 56 + /* Xen also maps grant references at pfn = mfn */ 57 + #define XENFEAT_grant_map_identity 12 58 + 56 59 #define XENFEAT_NR_SUBMAPS 1 57 60 58 61 #endif /* __XEN_PUBLIC_FEATURES_H__ */
+6 -6
init/do_mounts.c
··· 539 539 { 540 540 int is_floppy; 541 541 542 + if (root_delay) { 543 + printk(KERN_INFO "Waiting %d sec before mounting root device...\n", 544 + root_delay); 545 + ssleep(root_delay); 546 + } 547 + 542 548 /* 543 549 * wait for the known devices to complete their probing 544 550 * ··· 570 564 571 565 if (initrd_load()) 572 566 goto out; 573 - 574 - if (root_delay) { 575 - pr_info("Waiting %d sec before mounting root device...\n", 576 - root_delay); 577 - ssleep(root_delay); 578 - } 579 567 580 568 /* wait for any asynchronous scanning to complete */ 581 569 if ((ROOT_DEV == 0) && root_wait) {
+10
kernel/events/core.c
··· 1524 1524 */ 1525 1525 if (ctx->is_active) { 1526 1526 raw_spin_unlock_irq(&ctx->lock); 1527 + /* 1528 + * Reload the task pointer, it might have been changed by 1529 + * a concurrent perf_event_context_sched_out(). 1530 + */ 1531 + task = ctx->task; 1527 1532 goto retry; 1528 1533 } 1529 1534 ··· 1972 1967 */ 1973 1968 if (ctx->is_active) { 1974 1969 raw_spin_unlock_irq(&ctx->lock); 1970 + /* 1971 + * Reload the task pointer, it might have been changed by 1972 + * a concurrent perf_event_context_sched_out(). 1973 + */ 1974 + task = ctx->task; 1975 1975 goto retry; 1976 1976 } 1977 1977
+1
kernel/futex.c
··· 2592 2592 * shared futexes. We need to compare the keys: 2593 2593 */ 2594 2594 if (match_futex(&q.key, &key2)) { 2595 + queue_unlock(hb); 2595 2596 ret = -EINVAL; 2596 2597 goto out_put_keys; 2597 2598 }
+4 -3
kernel/kcmp.c
··· 44 44 */ 45 45 static int kcmp_ptr(void *v1, void *v2, enum kcmp_type type) 46 46 { 47 - long ret; 47 + long t1, t2; 48 48 49 - ret = kptr_obfuscate((long)v1, type) - kptr_obfuscate((long)v2, type); 49 + t1 = kptr_obfuscate((long)v1, type); 50 + t2 = kptr_obfuscate((long)v2, type); 50 51 51 - return (ret < 0) | ((ret > 0) << 1); 52 + return (t1 < t2) | ((t1 > t2) << 1); 52 53 } 53 54 54 55 /* The caller must have pinned the task */
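The kcmp.c fix replaces a subtraction-based three-way compare with direct comparisons: `kptr_obfuscate(v1) - kptr_obfuscate(v2)` can overflow a signed long and flip sign, misordering the result. A standalone sketch of the fixed encoding (0 = equal, 1 = less, 2 = greater), using a hypothetical helper name:

```c
#include <assert.h>
#include <limits.h>

/* Branch-free three-way compare as in the fixed kcmp_ptr():
 * returns 0 if t1 == t2, 1 if t1 < t2, 2 if t1 > t2.
 * Comparing directly cannot overflow, unlike testing the sign
 * of the difference t1 - t2. */
static int kcmp_order(long t1, long t2)
{
	return (t1 < t2) | ((t1 > t2) << 1);
}
```

For operands like `LONG_MIN` and `LONG_MAX` the old difference would have overflowed (undefined behaviour for signed longs); the direct comparisons stay correct over the whole range.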
+3 -3
kernel/printk/printk.c
··· 1665 1665 raw_spin_lock(&logbuf_lock); 1666 1666 logbuf_cpu = this_cpu; 1667 1667 1668 - if (recursion_bug) { 1668 + if (unlikely(recursion_bug)) { 1669 1669 static const char recursion_msg[] = 1670 1670 "BUG: recent printk recursion!"; 1671 1671 1672 1672 recursion_bug = 0; 1673 - text_len = strlen(recursion_msg); 1674 1673 /* emit KERN_CRIT message */ 1675 1674 printed_len += log_store(0, 2, LOG_PREFIX|LOG_NEWLINE, 0, 1676 - NULL, 0, recursion_msg, text_len); 1675 + NULL, 0, recursion_msg, 1676 + strlen(recursion_msg)); 1677 1677 } 1678 1678 1679 1679 /*
+23 -11
kernel/time/alarmtimer.c
··· 464 464 static enum alarmtimer_restart alarm_handle_timer(struct alarm *alarm, 465 465 ktime_t now) 466 466 { 467 + unsigned long flags; 467 468 struct k_itimer *ptr = container_of(alarm, struct k_itimer, 468 469 it.alarm.alarmtimer); 469 - if (posix_timer_event(ptr, 0) != 0) 470 - ptr->it_overrun++; 470 + enum alarmtimer_restart result = ALARMTIMER_NORESTART; 471 + 472 + spin_lock_irqsave(&ptr->it_lock, flags); 473 + if ((ptr->it_sigev_notify & ~SIGEV_THREAD_ID) != SIGEV_NONE) { 474 + if (posix_timer_event(ptr, 0) != 0) 475 + ptr->it_overrun++; 476 + } 471 477 472 478 /* Re-add periodic timers */ 473 479 if (ptr->it.alarm.interval.tv64) { 474 480 ptr->it_overrun += alarm_forward(alarm, now, 475 481 ptr->it.alarm.interval); 476 - return ALARMTIMER_RESTART; 482 + result = ALARMTIMER_RESTART; 477 483 } 478 - return ALARMTIMER_NORESTART; 484 + spin_unlock_irqrestore(&ptr->it_lock, flags); 485 + 486 + return result; 479 487 } 480 488 481 489 /** ··· 549 541 * @new_timer: k_itimer pointer 550 542 * @cur_setting: itimerspec data to fill 551 543 * 552 - * Copies the itimerspec data out from the k_itimer 544 + * Copies out the current itimerspec data 553 545 */ 554 546 static void alarm_timer_get(struct k_itimer *timr, 555 547 struct itimerspec *cur_setting) 556 548 { 557 - memset(cur_setting, 0, sizeof(struct itimerspec)); 549 + ktime_t relative_expiry_time = 550 + alarm_expires_remaining(&(timr->it.alarm.alarmtimer)); 558 551 559 - cur_setting->it_interval = 560 - ktime_to_timespec(timr->it.alarm.interval); 561 - cur_setting->it_value = 562 - ktime_to_timespec(timr->it.alarm.alarmtimer.node.expires); 563 - return; 552 + if (ktime_to_ns(relative_expiry_time) > 0) { 553 + cur_setting->it_value = ktime_to_timespec(relative_expiry_time); 554 + } else { 555 + cur_setting->it_value.tv_sec = 0; 556 + cur_setting->it_value.tv_nsec = 0; 557 + } 558 + 559 + cur_setting->it_interval = ktime_to_timespec(timr->it.alarm.interval); 564 560 } 565 561 566 562 /**
+30 -24
kernel/time/time.c
··· 559 559 * that a remainder subtract here would not do the right thing as the 560 560 * resolution values don't fall on second boundries. I.e. the line: 561 561 * nsec -= nsec % TICK_NSEC; is NOT a correct resolution rounding. 562 + * Note that due to the small error in the multiplier here, this 563 + * rounding is incorrect for sufficiently large values of tv_nsec, but 564 + * well formed timespecs should have tv_nsec < NSEC_PER_SEC, so we're 565 + * OK. 562 566 * 563 567 * Rather, we just shift the bits off the right. 564 568 * 565 569 * The >> (NSEC_JIFFIE_SC - SEC_JIFFIE_SC) converts the scaled nsec 566 570 * value to a scaled second value. 567 571 */ 568 - unsigned long 569 - timespec_to_jiffies(const struct timespec *value) 572 + static unsigned long 573 + __timespec_to_jiffies(unsigned long sec, long nsec) 570 574 { 571 - unsigned long sec = value->tv_sec; 572 - long nsec = value->tv_nsec + TICK_NSEC - 1; 575 + nsec = nsec + TICK_NSEC - 1; 573 576 574 577 if (sec >= MAX_SEC_IN_JIFFIES){ 575 578 sec = MAX_SEC_IN_JIFFIES; ··· 583 580 (NSEC_JIFFIE_SC - SEC_JIFFIE_SC))) >> SEC_JIFFIE_SC; 584 581 585 582 } 583 + 584 + unsigned long 585 + timespec_to_jiffies(const struct timespec *value) 586 + { 587 + return __timespec_to_jiffies(value->tv_sec, value->tv_nsec); 588 + } 589 + 586 590 EXPORT_SYMBOL(timespec_to_jiffies); 587 591 588 592 void ··· 606 596 } 607 597 EXPORT_SYMBOL(jiffies_to_timespec); 608 598 609 - /* Same for "timeval" 599 + /* 600 + * We could use a similar algorithm to timespec_to_jiffies (with a 601 + * different multiplier for usec instead of nsec). But this has a 602 + * problem with rounding: we can't exactly add TICK_NSEC - 1 to the 603 + * usec value, since it's not necessarily integral. 610 604 * 611 - * Well, almost. The problem here is that the real system resolution is 612 - * in nanoseconds and the value being converted is in micro seconds. 
613 - * Also for some machines (those that use HZ = 1024, in-particular), 614 - * there is a LARGE error in the tick size in microseconds. 615 - 616 - * The solution we use is to do the rounding AFTER we convert the 617 - * microsecond part. Thus the USEC_ROUND, the bits to be shifted off. 618 - * Instruction wise, this should cost only an additional add with carry 619 - * instruction above the way it was done above. 605 + * We could instead round in the intermediate scaled representation 606 + * (i.e. in units of 1/2^(large scale) jiffies) but that's also 607 + * perilous: the scaling introduces a small positive error, which 608 + * combined with a division-rounding-upward (i.e. adding 2^(scale) - 1 609 + * units to the intermediate before shifting) leads to accidental 610 + * overflow and overestimates. 611 + * 612 + * At the cost of one additional multiplication by a constant, just 613 + * use the timespec implementation. 620 614 */ 621 615 unsigned long 622 616 timeval_to_jiffies(const struct timeval *value) 623 617 { 624 - unsigned long sec = value->tv_sec; 625 - long usec = value->tv_usec; 626 - 627 - if (sec >= MAX_SEC_IN_JIFFIES){ 628 - sec = MAX_SEC_IN_JIFFIES; 629 - usec = 0; 630 - } 631 - return (((u64)sec * SEC_CONVERSION) + 632 - (((u64)usec * USEC_CONVERSION + USEC_ROUND) >> 633 - (USEC_JIFFIE_SC - SEC_JIFFIE_SC))) >> SEC_JIFFIE_SC; 618 + return __timespec_to_jiffies(value->tv_sec, 619 + value->tv_usec * NSEC_PER_USEC); 634 620 } 635 621 EXPORT_SYMBOL(timeval_to_jiffies); 636 622
+3
lib/Kconfig
··· 51 51 config ARCH_USE_CMPXCHG_LOCKREF 52 52 bool 53 53 54 + config ARCH_HAS_FAST_MULTIPLIER 55 + bool 56 + 54 57 config CRC_CCITT 55 58 tristate "CRC-CCITT functions" 56 59 help
+3 -1
lib/assoc_array.c
··· 1723 1723 shortcut = assoc_array_ptr_to_shortcut(ptr); 1724 1724 slot = shortcut->parent_slot; 1725 1725 cursor = shortcut->back_pointer; 1726 + if (!cursor) 1727 + goto gc_complete; 1726 1728 } else { 1727 1729 slot = node->parent_slot; 1728 1730 cursor = ptr; 1729 1731 } 1730 - BUG_ON(!ptr); 1732 + BUG_ON(!cursor); 1731 1733 node = assoc_array_ptr_to_node(cursor); 1732 1734 slot++; 1733 1735 goto continue_node;
+2 -2
lib/hweight.c
··· 11 11 12 12 unsigned int __sw_hweight32(unsigned int w) 13 13 { 14 - #ifdef ARCH_HAS_FAST_MULTIPLIER 14 + #ifdef CONFIG_ARCH_HAS_FAST_MULTIPLIER 15 15 w -= (w >> 1) & 0x55555555; 16 16 w = (w & 0x33333333) + ((w >> 2) & 0x33333333); 17 17 w = (w + (w >> 4)) & 0x0f0f0f0f; ··· 49 49 return __sw_hweight32((unsigned int)(w >> 32)) + 50 50 __sw_hweight32((unsigned int)w); 51 51 #elif BITS_PER_LONG == 64 52 - #ifdef ARCH_HAS_FAST_MULTIPLIER 52 + #ifdef CONFIG_ARCH_HAS_FAST_MULTIPLIER 53 53 w -= (w >> 1) & 0x5555555555555555ul; 54 54 w = (w & 0x3333333333333333ul) + ((w >> 2) & 0x3333333333333333ul); 55 55 w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0ful;
-1
lib/rhashtable.c
··· 23 23 #include <linux/hash.h> 24 24 #include <linux/random.h> 25 25 #include <linux/rhashtable.h> 26 - #include <linux/log2.h> 27 26 28 27 #define HASH_DEFAULT_SIZE 64UL 29 28 #define HASH_MIN_SIZE 4UL
+2 -2
lib/string.c
··· 807 807 return check_bytes8(start, value, bytes); 808 808 809 809 value64 = value; 810 - #if defined(ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64 810 + #if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64 811 811 value64 *= 0x0101010101010101; 812 - #elif defined(ARCH_HAS_FAST_MULTIPLIER) 812 + #elif defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) 813 813 value64 *= 0x01010101; 814 814 value64 |= value64 << 32; 815 815 #else
+1 -1
mm/dmapool.c
··· 176 176 if (list_empty(&dev->dma_pools) && 177 177 device_create_file(dev, &dev_attr_pools)) { 178 178 kfree(retval); 179 - return NULL; 179 + retval = NULL; 180 180 } else 181 181 list_add(&retval->pools, &dev->dma_pools); 182 182 mutex_unlock(&pools_lock);
+4
mm/memblock.c
··· 816 816 if (nid != NUMA_NO_NODE && nid != m_nid) 817 817 continue; 818 818 819 + /* skip hotpluggable memory regions if needed */ 820 + if (movable_node_is_enabled() && memblock_is_hotpluggable(m)) 821 + continue; 822 + 819 823 if (!type_b) { 820 824 if (out_start) 821 825 *out_start = m_start;
+2
mm/memory.c
··· 118 118 unsigned long zero_pfn __read_mostly; 119 119 unsigned long highest_memmap_pfn __read_mostly; 120 120 121 + EXPORT_SYMBOL(zero_pfn); 122 + 121 123 /* 122 124 * CONFIG_MMU architectures set up ZERO_PAGE in their paging_init() 123 125 */
+8 -8
mm/mmap.c
··· 369 369 struct vm_area_struct *vma; 370 370 vma = rb_entry(nd, struct vm_area_struct, vm_rb); 371 371 if (vma->vm_start < prev) { 372 - pr_info("vm_start %lx prev %lx\n", vma->vm_start, prev); 372 + pr_emerg("vm_start %lx prev %lx\n", vma->vm_start, prev); 373 373 bug = 1; 374 374 } 375 375 if (vma->vm_start < pend) { 376 - pr_info("vm_start %lx pend %lx\n", vma->vm_start, pend); 376 + pr_emerg("vm_start %lx pend %lx\n", vma->vm_start, pend); 377 377 bug = 1; 378 378 } 379 379 if (vma->vm_start > vma->vm_end) { 380 - pr_info("vm_end %lx < vm_start %lx\n", 380 + pr_emerg("vm_end %lx < vm_start %lx\n", 381 381 vma->vm_end, vma->vm_start); 382 382 bug = 1; 383 383 } 384 384 if (vma->rb_subtree_gap != vma_compute_subtree_gap(vma)) { 385 - pr_info("free gap %lx, correct %lx\n", 385 + pr_emerg("free gap %lx, correct %lx\n", 386 386 vma->rb_subtree_gap, 387 387 vma_compute_subtree_gap(vma)); 388 388 bug = 1; ··· 396 396 for (nd = pn; nd; nd = rb_prev(nd)) 397 397 j++; 398 398 if (i != j) { 399 - pr_info("backwards %d, forwards %d\n", j, i); 399 + pr_emerg("backwards %d, forwards %d\n", j, i); 400 400 bug = 1; 401 401 } 402 402 return bug ? -1 : i; ··· 431 431 i++; 432 432 } 433 433 if (i != mm->map_count) { 434 - pr_info("map_count %d vm_next %d\n", mm->map_count, i); 434 + pr_emerg("map_count %d vm_next %d\n", mm->map_count, i); 435 435 bug = 1; 436 436 } 437 437 if (highest_address != mm->highest_vm_end) { 438 - pr_info("mm->highest_vm_end %lx, found %lx\n", 438 + pr_emerg("mm->highest_vm_end %lx, found %lx\n", 439 439 mm->highest_vm_end, highest_address); 440 440 bug = 1; 441 441 } 442 442 i = browse_rb(&mm->mm_rb); 443 443 if (i != mm->map_count) { 444 - pr_info("map_count %d rb %d\n", mm->map_count, i); 444 + pr_emerg("map_count %d rb %d\n", mm->map_count, i); 445 445 bug = 1; 446 446 } 447 447 BUG_ON(bug);
+2
mm/nobootmem.c
··· 119 119 phys_addr_t start, end; 120 120 u64 i; 121 121 122 + memblock_clear_hotplug(0, -1); 123 + 122 124 for_each_free_mem_range(i, NUMA_NO_NODE, &start, &end, NULL) 123 125 count += __free_memory_core(start, end); 124 126
+3
net/bridge/br_private.h
··· 309 309 int igmp; 310 310 int mrouters_only; 311 311 #endif 312 + #ifdef CONFIG_BRIDGE_VLAN_FILTERING 313 + bool vlan_filtered; 314 + #endif 312 315 }; 313 316 314 317 #define BR_INPUT_SKB_CB(__skb) ((struct br_input_skb_cb *)(__skb)->cb)
+13 -3
net/bridge/br_vlan.c
··· 27 27 { 28 28 if (flags & BRIDGE_VLAN_INFO_PVID) 29 29 __vlan_add_pvid(v, vid); 30 + else 31 + __vlan_delete_pvid(v, vid); 30 32 31 33 if (flags & BRIDGE_VLAN_INFO_UNTAGGED) 32 34 set_bit(vid, v->untagged_bitmap); 35 + else 36 + clear_bit(vid, v->untagged_bitmap); 33 37 } 34 38 35 39 static int __vlan_add(struct net_port_vlans *v, u16 vid, u16 flags) ··· 129 125 { 130 126 u16 vid; 131 127 132 - if (!br->vlan_enabled) 128 + /* If this packet was not filtered at input, let it pass */ 129 + if (!BR_INPUT_SKB_CB(skb)->vlan_filtered) 133 130 goto out; 134 131 135 132 /* Vlan filter table must be configured at this point. The ··· 169 164 /* If VLAN filtering is disabled on the bridge, all packets are 170 165 * permitted. 171 166 */ 172 - if (!br->vlan_enabled) 167 + if (!br->vlan_enabled) { 168 + BR_INPUT_SKB_CB(skb)->vlan_filtered = false; 173 169 return true; 170 + } 174 171 175 172 /* If there are no vlan in the permitted list, all packets are 176 173 * rejected. ··· 180 173 if (!v) 181 174 goto drop; 182 175 176 + BR_INPUT_SKB_CB(skb)->vlan_filtered = true; 183 177 proto = br->vlan_proto; 184 178 185 179 /* If vlan tx offload is disabled on bridge device and frame was ··· 259 251 { 260 252 u16 vid; 261 253 262 - if (!br->vlan_enabled) 254 + /* If this packet was not filtered at input, let it pass */ 255 + if (!BR_INPUT_SKB_CB(skb)->vlan_filtered) 263 256 return true; 264 257 265 258 if (!v) ··· 279 270 struct net_bridge *br = p->br; 280 271 struct net_port_vlans *v; 281 272 273 + /* If filtering was disabled at input, let it pass. */ 282 274 if (!br->vlan_enabled) 283 275 return true; 284 276
+132 -118
net/ceph/auth_x.c
··· 13 13 #include "auth_x.h" 14 14 #include "auth_x_protocol.h" 15 15 16 - #define TEMP_TICKET_BUF_LEN 256 17 - 18 16 static void ceph_x_validate_tickets(struct ceph_auth_client *ac, int *pneed); 19 17 20 18 static int ceph_x_is_authenticated(struct ceph_auth_client *ac) ··· 62 64 } 63 65 64 66 static int ceph_x_decrypt(struct ceph_crypto_key *secret, 65 - void **p, void *end, void *obuf, size_t olen) 67 + void **p, void *end, void **obuf, size_t olen) 66 68 { 67 69 struct ceph_x_encrypt_header head; 68 70 size_t head_len = sizeof(head); ··· 73 75 return -EINVAL; 74 76 75 77 dout("ceph_x_decrypt len %d\n", len); 76 - ret = ceph_decrypt2(secret, &head, &head_len, obuf, &olen, 77 - *p, len); 78 + if (*obuf == NULL) { 79 + *obuf = kmalloc(len, GFP_NOFS); 80 + if (!*obuf) 81 + return -ENOMEM; 82 + olen = len; 83 + } 84 + 85 + ret = ceph_decrypt2(secret, &head, &head_len, *obuf, &olen, *p, len); 78 86 if (ret) 79 87 return ret; 80 88 if (head.struct_v != 1 || le64_to_cpu(head.magic) != CEPHX_ENC_MAGIC) ··· 133 129 kfree(th); 134 130 } 135 131 136 - static int ceph_x_proc_ticket_reply(struct ceph_auth_client *ac, 137 - struct ceph_crypto_key *secret, 138 - void *buf, void *end) 132 + static int process_one_ticket(struct ceph_auth_client *ac, 133 + struct ceph_crypto_key *secret, 134 + void **p, void *end) 139 135 { 140 136 struct ceph_x_info *xi = ac->private; 141 - int num; 142 - void *p = buf; 137 + int type; 138 + u8 tkt_struct_v, blob_struct_v; 139 + struct ceph_x_ticket_handler *th; 140 + void *dbuf = NULL; 141 + void *dp, *dend; 142 + int dlen; 143 + char is_enc; 144 + struct timespec validity; 145 + struct ceph_crypto_key old_key; 146 + void *ticket_buf = NULL; 147 + void *tp, *tpend; 148 + struct ceph_timespec new_validity; 149 + struct ceph_crypto_key new_session_key; 150 + struct ceph_buffer *new_ticket_blob; 151 + unsigned long new_expires, new_renew_after; 152 + u64 new_secret_id; 143 153 int ret; 144 - char *dbuf; 145 - char *ticket_buf; 146 - u8 
reply_struct_v; 147 154 148 - dbuf = kmalloc(TEMP_TICKET_BUF_LEN, GFP_NOFS); 149 - if (!dbuf) 150 - return -ENOMEM; 155 + ceph_decode_need(p, end, sizeof(u32) + 1, bad); 151 156 152 - ret = -ENOMEM; 153 - ticket_buf = kmalloc(TEMP_TICKET_BUF_LEN, GFP_NOFS); 154 - if (!ticket_buf) 155 - goto out_dbuf; 157 + type = ceph_decode_32(p); 158 + dout(" ticket type %d %s\n", type, ceph_entity_type_name(type)); 156 159 157 - ceph_decode_need(&p, end, 1 + sizeof(u32), bad); 158 - reply_struct_v = ceph_decode_8(&p); 159 - if (reply_struct_v != 1) 160 + tkt_struct_v = ceph_decode_8(p); 161 + if (tkt_struct_v != 1) 160 162 goto bad; 161 - num = ceph_decode_32(&p); 162 - dout("%d tickets\n", num); 163 - while (num--) { 164 - int type; 165 - u8 tkt_struct_v, blob_struct_v; 166 - struct ceph_x_ticket_handler *th; 167 - void *dp, *dend; 168 - int dlen; 169 - char is_enc; 170 - struct timespec validity; 171 - struct ceph_crypto_key old_key; 172 - void *tp, *tpend; 173 - struct ceph_timespec new_validity; 174 - struct ceph_crypto_key new_session_key; 175 - struct ceph_buffer *new_ticket_blob; 176 - unsigned long new_expires, new_renew_after; 177 - u64 new_secret_id; 178 163 179 - ceph_decode_need(&p, end, sizeof(u32) + 1, bad); 164 + th = get_ticket_handler(ac, type); 165 + if (IS_ERR(th)) { 166 + ret = PTR_ERR(th); 167 + goto out; 168 + } 180 169 181 - type = ceph_decode_32(&p); 182 - dout(" ticket type %d %s\n", type, ceph_entity_type_name(type)); 170 + /* blob for me */ 171 + dlen = ceph_x_decrypt(secret, p, end, &dbuf, 0); 172 + if (dlen <= 0) { 173 + ret = dlen; 174 + goto out; 175 + } 176 + dout(" decrypted %d bytes\n", dlen); 177 + dp = dbuf; 178 + dend = dp + dlen; 183 179 184 - tkt_struct_v = ceph_decode_8(&p); 185 - if (tkt_struct_v != 1) 186 - goto bad; 180 + tkt_struct_v = ceph_decode_8(&dp); 181 + if (tkt_struct_v != 1) 182 + goto bad; 187 183 188 - th = get_ticket_handler(ac, type); 189 - if (IS_ERR(th)) { 190 - ret = PTR_ERR(th); 191 - goto out; 192 - } 184 + 
memcpy(&old_key, &th->session_key, sizeof(old_key)); 185 + ret = ceph_crypto_key_decode(&new_session_key, &dp, dend); 186 + if (ret) 187 + goto out; 193 188 194 - /* blob for me */ 195 - dlen = ceph_x_decrypt(secret, &p, end, dbuf, 196 - TEMP_TICKET_BUF_LEN); 197 - if (dlen <= 0) { 189 + ceph_decode_copy(&dp, &new_validity, sizeof(new_validity)); 190 + ceph_decode_timespec(&validity, &new_validity); 191 + new_expires = get_seconds() + validity.tv_sec; 192 + new_renew_after = new_expires - (validity.tv_sec / 4); 193 + dout(" expires=%lu renew_after=%lu\n", new_expires, 194 + new_renew_after); 195 + 196 + /* ticket blob for service */ 197 + ceph_decode_8_safe(p, end, is_enc, bad); 198 + if (is_enc) { 199 + /* encrypted */ 200 + dout(" encrypted ticket\n"); 201 + dlen = ceph_x_decrypt(&old_key, p, end, &ticket_buf, 0); 202 + if (dlen < 0) { 198 203 ret = dlen; 199 204 goto out; 200 205 } 201 - dout(" decrypted %d bytes\n", dlen); 202 - dend = dbuf + dlen; 203 - dp = dbuf; 204 - 205 - tkt_struct_v = ceph_decode_8(&dp); 206 - if (tkt_struct_v != 1) 207 - goto bad; 208 - 209 - memcpy(&old_key, &th->session_key, sizeof(old_key)); 210 - ret = ceph_crypto_key_decode(&new_session_key, &dp, dend); 211 - if (ret) 212 - goto out; 213 - 214 - ceph_decode_copy(&dp, &new_validity, sizeof(new_validity)); 215 - ceph_decode_timespec(&validity, &new_validity); 216 - new_expires = get_seconds() + validity.tv_sec; 217 - new_renew_after = new_expires - (validity.tv_sec / 4); 218 - dout(" expires=%lu renew_after=%lu\n", new_expires, 219 - new_renew_after); 220 - 221 - /* ticket blob for service */ 222 - ceph_decode_8_safe(&p, end, is_enc, bad); 223 206 tp = ticket_buf; 224 - if (is_enc) { 225 - /* encrypted */ 226 - dout(" encrypted ticket\n"); 227 - dlen = ceph_x_decrypt(&old_key, &p, end, ticket_buf, 228 - TEMP_TICKET_BUF_LEN); 229 - if (dlen < 0) { 230 - ret = dlen; 231 - goto out; 232 - } 233 - dlen = ceph_decode_32(&tp); 234 - } else { 235 - /* unencrypted */ 236 - 
ceph_decode_32_safe(&p, end, dlen, bad); 237 - ceph_decode_need(&p, end, dlen, bad); 238 - ceph_decode_copy(&p, ticket_buf, dlen); 239 - } 240 - tpend = tp + dlen; 241 - dout(" ticket blob is %d bytes\n", dlen); 242 - ceph_decode_need(&tp, tpend, 1 + sizeof(u64), bad); 243 - blob_struct_v = ceph_decode_8(&tp); 244 - new_secret_id = ceph_decode_64(&tp); 245 - ret = ceph_decode_buffer(&new_ticket_blob, &tp, tpend); 246 - if (ret) 207 + dlen = ceph_decode_32(&tp); 208 + } else { 209 + /* unencrypted */ 210 + ceph_decode_32_safe(p, end, dlen, bad); 211 + ticket_buf = kmalloc(dlen, GFP_NOFS); 212 + if (!ticket_buf) { 213 + ret = -ENOMEM; 247 214 goto out; 248 - 249 - /* all is well, update our ticket */ 250 - ceph_crypto_key_destroy(&th->session_key); 251 - if (th->ticket_blob) 252 - ceph_buffer_put(th->ticket_blob); 253 - th->session_key = new_session_key; 254 - th->ticket_blob = new_ticket_blob; 255 - th->validity = new_validity; 256 - th->secret_id = new_secret_id; 257 - th->expires = new_expires; 258 - th->renew_after = new_renew_after; 259 - dout(" got ticket service %d (%s) secret_id %lld len %d\n", 260 - type, ceph_entity_type_name(type), th->secret_id, 261 - (int)th->ticket_blob->vec.iov_len); 262 - xi->have_keys |= th->service; 215 + } 216 + tp = ticket_buf; 217 + ceph_decode_need(p, end, dlen, bad); 218 + ceph_decode_copy(p, ticket_buf, dlen); 263 219 } 220 + tpend = tp + dlen; 221 + dout(" ticket blob is %d bytes\n", dlen); 222 + ceph_decode_need(&tp, tpend, 1 + sizeof(u64), bad); 223 + blob_struct_v = ceph_decode_8(&tp); 224 + new_secret_id = ceph_decode_64(&tp); 225 + ret = ceph_decode_buffer(&new_ticket_blob, &tp, tpend); 226 + if (ret) 227 + goto out; 264 228 265 - ret = 0; 229 + /* all is well, update our ticket */ 230 + ceph_crypto_key_destroy(&th->session_key); 231 + if (th->ticket_blob) 232 + ceph_buffer_put(th->ticket_blob); 233 + th->session_key = new_session_key; 234 + th->ticket_blob = new_ticket_blob; 235 + th->validity = new_validity; 236 + 
th->secret_id = new_secret_id; 237 + th->expires = new_expires; 238 + th->renew_after = new_renew_after; 239 + dout(" got ticket service %d (%s) secret_id %lld len %d\n", 240 + type, ceph_entity_type_name(type), th->secret_id, 241 + (int)th->ticket_blob->vec.iov_len); 242 + xi->have_keys |= th->service; 243 + 266 244 out: 267 245 kfree(ticket_buf); 268 - out_dbuf: 269 246 kfree(dbuf); 270 247 return ret; 271 248 272 249 bad: 273 250 ret = -EINVAL; 274 251 goto out; 252 + } 253 + 254 + static int ceph_x_proc_ticket_reply(struct ceph_auth_client *ac, 255 + struct ceph_crypto_key *secret, 256 + void *buf, void *end) 257 + { 258 + void *p = buf; 259 + u8 reply_struct_v; 260 + u32 num; 261 + int ret; 262 + 263 + ceph_decode_8_safe(&p, end, reply_struct_v, bad); 264 + if (reply_struct_v != 1) 265 + return -EINVAL; 266 + 267 + ceph_decode_32_safe(&p, end, num, bad); 268 + dout("%d tickets\n", num); 269 + 270 + while (num--) { 271 + ret = process_one_ticket(ac, secret, &p, end); 272 + if (ret) 273 + return ret; 274 + } 275 + 276 + return 0; 277 + 278 + bad: 279 + return -EINVAL; 275 280 } 276 281 277 282 static int ceph_x_build_authorizer(struct ceph_auth_client *ac, ··· 596 583 struct ceph_x_ticket_handler *th; 597 584 int ret = 0; 598 585 struct ceph_x_authorize_reply reply; 586 + void *preply = &reply; 599 587 void *p = au->reply_buf; 600 588 void *end = p + sizeof(au->reply_buf); 601 589 602 590 th = get_ticket_handler(ac, au->service); 603 591 if (IS_ERR(th)) 604 592 return PTR_ERR(th); 605 - ret = ceph_x_decrypt(&th->session_key, &p, end, &reply, sizeof(reply)); 593 + ret = ceph_x_decrypt(&th->session_key, &p, end, &preply, sizeof(reply)); 606 594 if (ret < 0) 607 595 return ret; 608 596 if (ret != sizeof(reply))
+8
net/ceph/mon_client.c
··· 1181 1181 if (!m) { 1182 1182 pr_info("alloc_msg unknown type %d\n", type); 1183 1183 *skip = 1; 1184 + } else if (front_len > m->front_alloc_len) { 1185 + pr_warning("mon_alloc_msg front %d > prealloc %d (%u#%llu)\n", 1186 + front_len, m->front_alloc_len, 1187 + (unsigned int)con->peer_name.type, 1188 + le64_to_cpu(con->peer_name.num)); 1189 + ceph_msg_put(m); 1190 + m = ceph_msg_new(type, front_len, GFP_NOFS, false); 1184 1191 } 1192 + 1185 1193 return m; 1186 1194 } 1187 1195
+11 -7
net/core/dev.c
··· 4865 4865 sysfs_remove_link(&(dev->dev.kobj), linkname); 4866 4866 } 4867 4867 4868 - #define netdev_adjacent_is_neigh_list(dev, dev_list) \ 4869 - (dev_list == &dev->adj_list.upper || \ 4870 - dev_list == &dev->adj_list.lower) 4868 + static inline bool netdev_adjacent_is_neigh_list(struct net_device *dev, 4869 + struct net_device *adj_dev, 4870 + struct list_head *dev_list) 4871 + { 4872 + return (dev_list == &dev->adj_list.upper || 4873 + dev_list == &dev->adj_list.lower) && 4874 + net_eq(dev_net(dev), dev_net(adj_dev)); 4875 + } 4871 4876 4872 4877 static int __netdev_adjacent_dev_insert(struct net_device *dev, 4873 4878 struct net_device *adj_dev, ··· 4902 4897 pr_debug("dev_hold for %s, because of link added from %s to %s\n", 4903 4898 adj_dev->name, dev->name, adj_dev->name); 4904 4899 4905 - if (netdev_adjacent_is_neigh_list(dev, dev_list)) { 4900 + if (netdev_adjacent_is_neigh_list(dev, adj_dev, dev_list)) { 4906 4901 ret = netdev_adjacent_sysfs_add(dev, adj_dev, dev_list); 4907 4902 if (ret) 4908 4903 goto free_adj; ··· 4923 4918 return 0; 4924 4919 4925 4920 remove_symlinks: 4926 - if (netdev_adjacent_is_neigh_list(dev, dev_list)) 4921 + if (netdev_adjacent_is_neigh_list(dev, adj_dev, dev_list)) 4927 4922 netdev_adjacent_sysfs_del(dev, adj_dev->name, dev_list); 4928 4923 free_adj: 4929 4924 kfree(adj); ··· 4956 4951 if (adj->master) 4957 4952 sysfs_remove_link(&(dev->dev.kobj), "master"); 4958 4953 4959 - if (netdev_adjacent_is_neigh_list(dev, dev_list) && 4960 - net_eq(dev_net(dev),dev_net(adj_dev))) 4954 + if (netdev_adjacent_is_neigh_list(dev, adj_dev, dev_list)) 4961 4955 netdev_adjacent_sysfs_del(dev, adj_dev->name, dev_list); 4962 4956 4963 4957 list_del_rcu(&adj->list);
+1 -1
net/core/sock.c
··· 1816 1816 * skb_page_frag_refill - check that a page_frag contains enough room 1817 1817 * @sz: minimum size of the fragment we want to get 1818 1818 * @pfrag: pointer to page_frag 1819 - * @prio: priority for memory allocation 1819 + * @gfp: priority for memory allocation 1820 1820 * 1821 1821 * Note: While this allocator tries to use high order pages, there is 1822 1822 * no guarantee that allocations succeed. Therefore, @sz MUST be
+3 -3
net/ipv4/ip_tunnel.c
··· 80 80 idst->saddr = saddr; 81 81 } 82 82 83 - static void tunnel_dst_set(struct ip_tunnel *t, 83 + static noinline void tunnel_dst_set(struct ip_tunnel *t, 84 84 struct dst_entry *dst, __be32 saddr) 85 85 { 86 - __tunnel_dst_set(this_cpu_ptr(t->dst_cache), dst, saddr); 86 + __tunnel_dst_set(raw_cpu_ptr(t->dst_cache), dst, saddr); 87 87 } 88 88 89 89 static void tunnel_dst_reset(struct ip_tunnel *t) ··· 107 107 struct dst_entry *dst; 108 108 109 109 rcu_read_lock(); 110 - idst = this_cpu_ptr(t->dst_cache); 110 + idst = raw_cpu_ptr(t->dst_cache); 111 111 dst = rcu_dereference(idst->dst); 112 112 if (dst && !atomic_inc_not_zero(&dst->__refcnt)) 113 113 dst = NULL;
+3 -3
net/ipv4/route.c
··· 2265 2265 return rt; 2266 2266 2267 2267 if (flp4->flowi4_proto) 2268 - rt = (struct rtable *) xfrm_lookup(net, &rt->dst, 2269 - flowi4_to_flowi(flp4), 2270 - sk, 0); 2268 + rt = (struct rtable *)xfrm_lookup_route(net, &rt->dst, 2269 + flowi4_to_flowi(flp4), 2270 + sk, 0); 2271 2271 2272 2272 return rt; 2273 2273 }
+5 -3
net/ipv6/addrconf.c
··· 3097 3097 3098 3098 write_unlock_bh(&idev->lock); 3099 3099 3100 - /* Step 5: Discard multicast list */ 3101 - if (how) 3100 + /* Step 5: Discard anycast and multicast list */ 3101 + if (how) { 3102 + ipv6_ac_destroy_dev(idev); 3102 3103 ipv6_mc_destroy_dev(idev); 3103 - else 3104 + } else { 3104 3105 ipv6_mc_down(idev); 3106 + } 3105 3107 3106 3108 idev->tstamp = jiffies; 3107 3109
+21
net/ipv6/anycast.c
··· 345 345 return __ipv6_dev_ac_dec(idev, addr); 346 346 } 347 347 348 + void ipv6_ac_destroy_dev(struct inet6_dev *idev) 349 + { 350 + struct ifacaddr6 *aca; 351 + 352 + write_lock_bh(&idev->lock); 353 + while ((aca = idev->ac_list) != NULL) { 354 + idev->ac_list = aca->aca_next; 355 + write_unlock_bh(&idev->lock); 356 + 357 + addrconf_leave_solict(idev, &aca->aca_addr); 358 + 359 + dst_hold(&aca->aca_rt->dst); 360 + ip6_del_rt(aca->aca_rt); 361 + 362 + aca_put(aca); 363 + 364 + write_lock_bh(&idev->lock); 365 + } 366 + write_unlock_bh(&idev->lock); 367 + } 368 + 348 369 /* 349 370 * check if the interface has this anycast address 350 371 * called with rcu_read_lock()
+2 -2
net/ipv6/ip6_output.c
··· 1004 1004 if (final_dst) 1005 1005 fl6->daddr = *final_dst; 1006 1006 1007 - return xfrm_lookup(sock_net(sk), dst, flowi6_to_flowi(fl6), sk, 0); 1007 + return xfrm_lookup_route(sock_net(sk), dst, flowi6_to_flowi(fl6), sk, 0); 1008 1008 } 1009 1009 EXPORT_SYMBOL_GPL(ip6_dst_lookup_flow); 1010 1010 ··· 1036 1036 if (final_dst) 1037 1037 fl6->daddr = *final_dst; 1038 1038 1039 - return xfrm_lookup(sock_net(sk), dst, flowi6_to_flowi(fl6), sk, 0); 1039 + return xfrm_lookup_route(sock_net(sk), dst, flowi6_to_flowi(fl6), sk, 0); 1040 1040 } 1041 1041 EXPORT_SYMBOL_GPL(ip6_sk_dst_lookup_flow); 1042 1042
+1 -1
net/mac80211/sta_info.c
··· 1822 1822 sinfo->bss_param.flags |= BSS_PARAM_FLAGS_SHORT_PREAMBLE; 1823 1823 if (sdata->vif.bss_conf.use_short_slot) 1824 1824 sinfo->bss_param.flags |= BSS_PARAM_FLAGS_SHORT_SLOT_TIME; 1825 - sinfo->bss_param.dtim_period = sdata->local->hw.conf.ps_dtim_period; 1825 + sinfo->bss_param.dtim_period = sdata->vif.bss_conf.dtim_period; 1826 1826 sinfo->bss_param.beacon_interval = sdata->vif.bss_conf.beacon_int; 1827 1827 1828 1828 sinfo->sta_flags.set = 0;
+5 -4
net/openvswitch/datapath.c
··· 78 78 79 79 /* Check if need to build a reply message. 80 80 * OVS userspace sets the NLM_F_ECHO flag if it needs the reply. */ 81 - static bool ovs_must_notify(struct genl_info *info, 82 - const struct genl_multicast_group *grp) 81 + static bool ovs_must_notify(struct genl_family *family, struct genl_info *info, 82 + unsigned int group) 83 83 { 84 84 return info->nlhdr->nlmsg_flags & NLM_F_ECHO || 85 - netlink_has_listeners(genl_info_net(info)->genl_sock, 0); 85 + genl_has_listeners(family, genl_info_net(info)->genl_sock, 86 + group); 86 87 } 87 88 88 89 static void ovs_notify(struct genl_family *family, ··· 763 762 { 764 763 struct sk_buff *skb; 765 764 766 - if (!always && !ovs_must_notify(info, &ovs_dp_flow_multicast_group)) 765 + if (!always && !ovs_must_notify(&dp_flow_genl_family, info, 0)) 767 766 return NULL; 768 767 769 768 skb = genlmsg_new_unicast(ovs_flow_cmd_msg_size(acts), info, GFP_KERNEL);
+1
net/rfkill/rfkill-gpio.c
··· 163 163 { "LNV4752", RFKILL_TYPE_GPS }, 164 164 { }, 165 165 }; 166 + MODULE_DEVICE_TABLE(acpi, rfkill_acpi_match); 166 167 #endif 167 168 168 169 static struct platform_driver rfkill_gpio_driver = {
+1 -1
net/rxrpc/ar-key.c
··· 1143 1143 if (copy_to_user(xdr, (s), _l) != 0) \ 1144 1144 goto fault; \ 1145 1145 if (_l & 3 && \ 1146 - copy_to_user((u8 *)xdr + _l, &zero, 4 - (_l & 3)) != 0) \ 1146 + copy_to_user((u8 __user *)xdr + _l, &zero, 4 - (_l & 3)) != 0) \ 1147 1147 goto fault; \ 1148 1148 xdr += (_l + 3) >> 2; \ 1149 1149 } while(0)
+14 -4
net/sched/sch_choke.c
··· 133 133 --sch->q.qlen; 134 134 } 135 135 136 + /* private part of skb->cb[] that a qdisc is allowed to use 137 + * is limited to QDISC_CB_PRIV_LEN bytes. 138 + * As a flow key might be too large, we store a part of it only. 139 + */ 140 + #define CHOKE_K_LEN min_t(u32, sizeof(struct flow_keys), QDISC_CB_PRIV_LEN - 3) 141 + 136 142 struct choke_skb_cb { 137 143 u16 classid; 138 144 u8 keys_valid; 139 - struct flow_keys keys; 145 + u8 keys[QDISC_CB_PRIV_LEN - 3]; 140 146 }; 141 147 142 148 static inline struct choke_skb_cb *choke_skb_cb(const struct sk_buff *skb) ··· 169 163 static bool choke_match_flow(struct sk_buff *skb1, 170 164 struct sk_buff *skb2) 171 165 { 166 + struct flow_keys temp; 167 + 172 168 if (skb1->protocol != skb2->protocol) 173 169 return false; 174 170 175 171 if (!choke_skb_cb(skb1)->keys_valid) { 176 172 choke_skb_cb(skb1)->keys_valid = 1; 177 - skb_flow_dissect(skb1, &choke_skb_cb(skb1)->keys); 173 + skb_flow_dissect(skb1, &temp); 174 + memcpy(&choke_skb_cb(skb1)->keys, &temp, CHOKE_K_LEN); 178 175 } 179 176 180 177 if (!choke_skb_cb(skb2)->keys_valid) { 181 178 choke_skb_cb(skb2)->keys_valid = 1; 182 - skb_flow_dissect(skb2, &choke_skb_cb(skb2)->keys); 179 + skb_flow_dissect(skb2, &temp); 180 + memcpy(&choke_skb_cb(skb2)->keys, &temp, CHOKE_K_LEN); 183 181 } 184 182 185 183 return !memcmp(&choke_skb_cb(skb1)->keys, 186 184 &choke_skb_cb(skb2)->keys, 187 - sizeof(struct flow_keys)); 185 + CHOKE_K_LEN); 188 186 } 189 187 190 188 /*
+3
net/socket.c
··· 1993 1993 if (copy_from_user(kmsg, umsg, sizeof(struct msghdr))) 1994 1994 return -EFAULT; 1995 1995 1996 + if (kmsg->msg_name == NULL) 1997 + kmsg->msg_namelen = 0; 1998 + 1996 1999 if (kmsg->msg_namelen < 0) 1997 2000 return -EINVAL; 1998 2001
+6
net/wireless/nl80211.c
··· 6977 6977 struct nlattr *data = ((void **)skb->cb)[2]; 6978 6978 enum nl80211_multicast_groups mcgrp = NL80211_MCGRP_TESTMODE; 6979 6979 6980 + /* clear CB data for netlink core to own from now on */ 6981 + memset(skb->cb, 0, sizeof(skb->cb)); 6982 + 6980 6983 nla_nest_end(skb, data); 6981 6984 genlmsg_end(skb, hdr); 6982 6985 ··· 9304 9301 struct cfg80211_registered_device *rdev = ((void **)skb->cb)[0]; 9305 9302 void *hdr = ((void **)skb->cb)[1]; 9306 9303 struct nlattr *data = ((void **)skb->cb)[2]; 9304 + 9305 + /* clear CB data for netlink core to own from now on */ 9306 + memset(skb->cb, 0, sizeof(skb->cb)); 9307 9307 9308 9308 if (WARN_ON(!rdev->cur_cmd_info)) { 9309 9309 kfree_skb(skb);
+40 -8
net/xfrm/xfrm_policy.c
··· 39 39 #define XFRM_QUEUE_TMO_MAX ((unsigned)(60*HZ))
40 40 #define XFRM_MAX_QUEUE_LEN 100
41 41
42 + struct xfrm_flo {
43 + 	struct dst_entry *dst_orig;
44 + 	u8 flags;
45 + };
46 +
42 47 static DEFINE_SPINLOCK(xfrm_policy_afinfo_lock);
43 48 static struct xfrm_policy_afinfo __rcu *xfrm_policy_afinfo[NPROTO]
44 49 	__read_mostly;
··· 1882 1877 }
1883 1878
1884 1879 static struct xfrm_dst *xfrm_create_dummy_bundle(struct net *net,
1885 - 						 struct dst_entry *dst,
1880 + 						 struct xfrm_flo *xflo,
1886 1881 						 const struct flowi *fl,
1887 1882 						 int num_xfrms,
1888 1883 						 u16 family)
1889 1884 {
1890 1885 	int err;
1891 1886 	struct net_device *dev;
1887 + 	struct dst_entry *dst;
1892 1888 	struct dst_entry *dst1;
1893 1889 	struct xfrm_dst *xdst;
1894 1890
··· 1897 1891 	if (IS_ERR(xdst))
1898 1892 		return xdst;
1899 1893
1900 - 	if (net->xfrm.sysctl_larval_drop || num_xfrms <= 0)
1894 + 	if (!(xflo->flags & XFRM_LOOKUP_QUEUE) ||
1895 + 	    net->xfrm.sysctl_larval_drop ||
1896 + 	    num_xfrms <= 0)
1901 1897 		return xdst;
1902 1898
1899 + 	dst = xflo->dst_orig;
1903 1900 	dst1 = &xdst->u.dst;
1904 1901 	dst_hold(dst);
1905 1902 	xdst->route = dst;
··· 1944 1935 xfrm_bundle_lookup(struct net *net, const struct flowi *fl, u16 family, u8 dir,
1945 1936 		   struct flow_cache_object *oldflo, void *ctx)
1946 1937 {
1947 - 	struct dst_entry *dst_orig = (struct dst_entry *)ctx;
1938 + 	struct xfrm_flo *xflo = (struct xfrm_flo *)ctx;
1948 1939 	struct xfrm_policy *pols[XFRM_POLICY_TYPE_MAX];
1949 1940 	struct xfrm_dst *xdst, *new_xdst;
1950 1941 	int num_pols = 0, num_xfrms = 0, i, err, pol_dead;
··· 1985 1976 		goto make_dummy_bundle;
1986 1977 	}
1987 1978
1988 - 	new_xdst = xfrm_resolve_and_create_bundle(pols, num_pols, fl, family, dst_orig);
1979 + 	new_xdst = xfrm_resolve_and_create_bundle(pols, num_pols, fl, family,
1980 + 						  xflo->dst_orig);
1989 1981 	if (IS_ERR(new_xdst)) {
1990 1982 		err = PTR_ERR(new_xdst);
1991 1983 		if (err != -EAGAIN)
··· 2020 2010 	/* We found policies, but there's no bundles to instantiate:
2021 2011 	 * either because the policy blocks, has no transformations or
2022 2012 	 * we could not build template (no xfrm_states).*/
2023 - 	xdst = xfrm_create_dummy_bundle(net, dst_orig, fl, num_xfrms, family);
2013 + 	xdst = xfrm_create_dummy_bundle(net, xflo, fl, num_xfrms, family);
2024 2014 	if (IS_ERR(xdst)) {
2025 2015 		xfrm_pols_put(pols, num_pols);
2026 2016 		return ERR_CAST(xdst);
··· 2114 2104 	}
2115 2105
2116 2106 	if (xdst == NULL) {
2107 + 		struct xfrm_flo xflo;
2108 +
2109 + 		xflo.dst_orig = dst_orig;
2110 + 		xflo.flags = flags;
2111 +
2117 2112 		/* To accelerate a bit... */
2118 2113 		if ((dst_orig->flags & DST_NOXFRM) ||
2119 2114 		    !net->xfrm.policy_count[XFRM_POLICY_OUT])
2120 2115 			goto nopol;
2121 2116
2122 2117 		flo = flow_cache_lookup(net, fl, family, dir,
2123 - 					xfrm_bundle_lookup, dst_orig);
2118 + 					xfrm_bundle_lookup, &xflo);
2124 2119 		if (flo == NULL)
2125 2120 			goto nopol;
2126 2121 		if (IS_ERR(flo)) {
··· 2153 2138 		xfrm_pols_put(pols, drop_pols);
2154 2139 		XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTNOSTATES);
2155 2140
2156 - 		return make_blackhole(net, family, dst_orig);
2141 + 		return ERR_PTR(-EREMOTE);
2157 2142 	}
2158 2143
2159 2144 	err = -EAGAIN;
··· 2209 2194 	return ERR_PTR(err);
2210 2195 }
2211 2196 EXPORT_SYMBOL(xfrm_lookup);
2197 +
2198 + /* Callers of xfrm_lookup_route() must ensure a call to dst_output().
2199 +  * Otherwise we may send out blackholed packets.
2200 +  */
2201 + struct dst_entry *xfrm_lookup_route(struct net *net, struct dst_entry *dst_orig,
2202 + 				    const struct flowi *fl,
2203 + 				    struct sock *sk, int flags)
2204 + {
2205 + 	struct dst_entry *dst = xfrm_lookup(net, dst_orig, fl, sk,
2206 + 					    flags | XFRM_LOOKUP_QUEUE);
2207 +
2208 + 	if (IS_ERR(dst) && PTR_ERR(dst) == -EREMOTE)
2209 + 		return make_blackhole(net, dst_orig->ops->family, dst_orig);
2210 +
2211 + 	return dst;
2212 + }
2213 + EXPORT_SYMBOL(xfrm_lookup_route);
2212 2214
2213 2215 static inline int
2214 2216 xfrm_secpath_reject(int idx, struct sk_buff *skb, const struct flowi *fl)
··· 2492 2460
2493 2461 	skb_dst_force(skb);
2494 2462
2495 - 	dst = xfrm_lookup(net, skb_dst(skb), &fl, NULL, 0);
2463 + 	dst = xfrm_lookup(net, skb_dst(skb), &fl, NULL, XFRM_LOOKUP_QUEUE);
2496 2464 	if (IS_ERR(dst)) {
2497 2465 		res = 0;
2498 2466 		dst = NULL;
+4 -1
scripts/checkpatch.pl
··· 2133 2133 # Check for improperly formed commit descriptions
2134 2134 	if ($in_commit_log &&
2135 2135 	    $line =~ /\bcommit\s+[0-9a-f]{5,}/i &&
2136 - 	    $line !~ /\b[Cc]ommit [0-9a-f]{12,40} \("/) {
2136 + 	    !($line =~ /\b[Cc]ommit [0-9a-f]{12,40} \("/ ||
2137 + 	      ($line =~ /\b[Cc]ommit [0-9a-f]{12,40}\s*$/ &&
2138 + 	       defined $rawlines[$linenr] &&
2139 + 	       $rawlines[$linenr] =~ /^\s*\("/))) {
2137 2140 		$line =~ /\b(c)ommit\s+([0-9a-f]{5,})/i;
2138 2141 		my $init_char = $1;
2139 2142 		my $orig_commit = lc($2);
+12 -5
sound/pci/hda/patch_sigmatel.c
··· 566 566 		if (snd_hda_jack_tbl_get(codec, nid))
567 567 			continue;
568 568 		if (def_conf == AC_JACK_PORT_COMPLEX &&
569 - 		    !(spec->vref_mute_led_nid == nid ||
570 - 		      is_jack_detectable(codec, nid))) {
569 + 		    spec->vref_mute_led_nid != nid &&
570 + 		    is_jack_detectable(codec, nid)) {
571 571 			snd_hda_jack_detect_enable_callback(codec, nid,
572 572 							    STAC_PWR_EVENT,
573 573 							    jack_update_power);
··· 4276 4276 		return err;
4277 4277 	}
4278 4278
4279 - 	stac_init_power_map(codec);
4280 -
4281 4279 	return 0;
4282 4280 }
4283 4281
4282 + static int stac_build_controls(struct hda_codec *codec)
4283 + {
4284 + 	int err = snd_hda_gen_build_controls(codec);
4285 +
4286 + 	if (err < 0)
4287 + 		return err;
4288 + 	stac_init_power_map(codec);
4289 + 	return 0;
4290 + }
4284 4291
4285 4292 static int stac_init(struct hda_codec *codec)
4286 4293 {
··· 4399 4392 #endif /* CONFIG_PM */
4400 4393
4401 4394 static const struct hda_codec_ops stac_patch_ops = {
4402 - 	.build_controls = snd_hda_gen_build_controls,
4395 + 	.build_controls = stac_build_controls,
4403 4396 	.build_pcms = snd_hda_gen_build_pcms,
4404 4397 	.init = stac_init,
4405 4398 	.free = stac_free,
+3 -3
sound/soc/codecs/cs4265.c
··· 458 458 		if (params_width(params) == 16) {
459 459 			snd_soc_update_bits(codec, CS4265_DAC_CTL,
460 460 				CS4265_DAC_CTL_DIF, (1 << 5));
461 - 			snd_soc_update_bits(codec, CS4265_ADC_CTL,
461 + 			snd_soc_update_bits(codec, CS4265_SPDIF_CTL2,
462 462 				CS4265_SPDIF_CTL2_DIF, (1 << 7));
463 463 		} else {
464 464 			snd_soc_update_bits(codec, CS4265_DAC_CTL,
465 465 				CS4265_DAC_CTL_DIF, (3 << 5));
466 - 			snd_soc_update_bits(codec, CS4265_ADC_CTL,
466 + 			snd_soc_update_bits(codec, CS4265_SPDIF_CTL2,
467 467 				CS4265_SPDIF_CTL2_DIF, (1 << 7));
468 468 		}
469 469 		break;
··· 472 472 			CS4265_DAC_CTL_DIF, 0);
473 473 		snd_soc_update_bits(codec, CS4265_ADC_CTL,
474 474 			CS4265_ADC_DIF, 0);
475 - 		snd_soc_update_bits(codec, CS4265_ADC_CTL,
475 + 		snd_soc_update_bits(codec, CS4265_SPDIF_CTL2,
476 476 			CS4265_SPDIF_CTL2_DIF, (1 << 6));
477 477
478 478 		break;
+2 -2
sound/soc/codecs/sta529.c
··· 4 4  * sound/soc/codecs/sta529.c -- spear ALSA Soc codec driver
5 5  *
6 6  * Copyright (C) 2012 ST Microelectronics
7 -  * Rajeev Kumar <rajeev-dlh.kumar@st.com>
7 +  * Rajeev Kumar <rajeevkumar.linux@gmail.com>
8 8  *
9 9  * This file is licensed under the terms of the GNU General Public
10 10  * License version 2. This program is licensed "as is" without any
··· 426 426 module_i2c_driver(sta529_i2c_driver);
427 427
428 428 MODULE_DESCRIPTION("ASoC STA529 codec driver");
429 - MODULE_AUTHOR("Rajeev Kumar <rajeev-dlh.kumar@st.com>");
429 + MODULE_AUTHOR("Rajeev Kumar <rajeevkumar.linux@gmail.com>");
430 430 MODULE_LICENSE("GPL");
+39 -12
sound/soc/codecs/tlv320aic31xx.c
··· 189 189 /* mclk rate pll: p j d dosr ndac mdac aors nadc madc */
190 190 	/* 8k rate */
191 191 	{12000000, 8000, 1, 8, 1920, 128, 48, 2, 128, 48, 2},
192 + 	{12000000, 8000, 1, 8, 1920, 128, 32, 3, 128, 32, 3},
192 193 	{24000000, 8000, 2, 8, 1920, 128, 48, 2, 128, 48, 2},
193 194 	{25000000, 8000, 2, 7, 8643, 128, 48, 2, 128, 48, 2},
194 195 	/* 11.025k rate */
195 196 	{12000000, 11025, 1, 7, 5264, 128, 32, 2, 128, 32, 2},
197 + 	{12000000, 11025, 1, 8, 4672, 128, 24, 3, 128, 24, 3},
196 198 	{24000000, 11025, 2, 7, 5264, 128, 32, 2, 128, 32, 2},
197 199 	{25000000, 11025, 2, 7, 2253, 128, 32, 2, 128, 32, 2},
198 200 	/* 16k rate */
199 201 	{12000000, 16000, 1, 8, 1920, 128, 24, 2, 128, 24, 2},
202 + 	{12000000, 16000, 1, 8, 1920, 128, 16, 3, 128, 16, 3},
200 203 	{24000000, 16000, 2, 8, 1920, 128, 24, 2, 128, 24, 2},
201 204 	{25000000, 16000, 2, 7, 8643, 128, 24, 2, 128, 24, 2},
202 205 	/* 22.05k rate */
203 206 	{12000000, 22050, 1, 7, 5264, 128, 16, 2, 128, 16, 2},
207 + 	{12000000, 22050, 1, 8, 4672, 128, 12, 3, 128, 12, 3},
204 208 	{24000000, 22050, 2, 7, 5264, 128, 16, 2, 128, 16, 2},
205 209 	{25000000, 22050, 2, 7, 2253, 128, 16, 2, 128, 16, 2},
206 210 	/* 32k rate */
207 211 	{12000000, 32000, 1, 8, 1920, 128, 12, 2, 128, 12, 2},
212 + 	{12000000, 32000, 1, 8, 1920, 128, 8, 3, 128, 8, 3},
208 213 	{24000000, 32000, 2, 8, 1920, 128, 12, 2, 128, 12, 2},
209 214 	{25000000, 32000, 2, 7, 8643, 128, 12, 2, 128, 12, 2},
210 215 	/* 44.1k rate */
211 216 	{12000000, 44100, 1, 7, 5264, 128, 8, 2, 128, 8, 2},
217 + 	{12000000, 44100, 1, 8, 4672, 128, 6, 3, 128, 6, 3},
212 218 	{24000000, 44100, 2, 7, 5264, 128, 8, 2, 128, 8, 2},
213 219 	{25000000, 44100, 2, 7, 2253, 128, 8, 2, 128, 8, 2},
214 220 	/* 48k rate */
215 221 	{12000000, 48000, 1, 8, 1920, 128, 8, 2, 128, 8, 2},
222 + 	{12000000, 48000, 1, 7, 6800, 96, 5, 4, 96, 5, 4},
216 223 	{24000000, 48000, 2, 8, 1920, 128, 8, 2, 128, 8, 2},
217 224 	{25000000, 48000, 2, 7, 8643, 128, 8, 2, 128, 8, 2},
218 225 	/* 88.2k rate */
219 226 	{12000000, 88200, 1, 7, 5264, 64, 8, 2, 64, 8, 2},
227 + 	{12000000, 88200, 1, 8, 4672, 64, 6, 3, 64, 6, 3},
220 228 	{24000000, 88200, 2, 7, 5264, 64, 8, 2, 64, 8, 2},
221 229 	{25000000, 88200, 2, 7, 2253, 64, 8, 2, 64, 8, 2},
222 230 	/* 96k rate */
223 231 	{12000000, 96000, 1, 8, 1920, 64, 8, 2, 64, 8, 2},
232 + 	{12000000, 96000, 1, 7, 6800, 48, 5, 4, 48, 5, 4},
224 233 	{24000000, 96000, 2, 8, 1920, 64, 8, 2, 64, 8, 2},
225 234 	{25000000, 96000, 2, 7, 8643, 64, 8, 2, 64, 8, 2},
226 235 	/* 176.4k rate */
227 236 	{12000000, 176400, 1, 7, 5264, 32, 8, 2, 32, 8, 2},
237 + 	{12000000, 176400, 1, 8, 4672, 32, 6, 3, 32, 6, 3},
228 238 	{24000000, 176400, 2, 7, 5264, 32, 8, 2, 32, 8, 2},
229 239 	{25000000, 176400, 2, 7, 2253, 32, 8, 2, 32, 8, 2},
230 240 	/* 192k rate */
231 241 	{12000000, 192000, 1, 8, 1920, 32, 8, 2, 32, 8, 2},
242 + 	{12000000, 192000, 1, 7, 6800, 24, 5, 4, 24, 5, 4},
232 243 	{24000000, 192000, 2, 8, 1920, 32, 8, 2, 32, 8, 2},
233 244 	{25000000, 192000, 2, 7, 8643, 32, 8, 2, 32, 8, 2},
234 245 };
··· 691 680 			      struct snd_pcm_hw_params *params)
692 681 {
693 682 	struct aic31xx_priv *aic31xx = snd_soc_codec_get_drvdata(codec);
683 + 	int bclk_score = snd_soc_params_to_frame_size(params);
694 684 	int bclk_n = 0;
685 + 	int match = -1;
695 686 	int i;
696 687
697 688 	/* Use PLL as CODEC_CLKIN and DAC_CLK as BDIV_CLKIN */
··· 704 691
705 692 	for (i = 0; i < ARRAY_SIZE(aic31xx_divs); i++) {
706 693 		if (aic31xx_divs[i].rate == params_rate(params) &&
707 - 		    aic31xx_divs[i].mclk == aic31xx->sysclk)
708 - 			break;
694 + 		    aic31xx_divs[i].mclk == aic31xx->sysclk) {
695 + 			int s = (aic31xx_divs[i].dosr * aic31xx_divs[i].mdac) %
696 + 				snd_soc_params_to_frame_size(params);
697 + 			int bn = (aic31xx_divs[i].dosr * aic31xx_divs[i].mdac) /
698 + 				snd_soc_params_to_frame_size(params);
699 + 			if (s < bclk_score && bn > 0) {
700 + 				match = i;
701 + 				bclk_n = bn;
702 + 				bclk_score = s;
703 + 			}
704 + 		}
709 705 	}
710 706
711 - 	if (i == ARRAY_SIZE(aic31xx_divs)) {
712 - 		dev_err(codec->dev, "%s: Sampling rate %u not supported\n",
707 + 	if (match == -1) {
708 + 		dev_err(codec->dev,
709 + 			"%s: Sample rate (%u) and format not supported\n",
713 710 			__func__, params_rate(params));
711 + 		/* See bellow for details how fix this. */
714 712 		return -EINVAL;
715 713 	}
714 + 	if (bclk_score != 0) {
715 + 		dev_warn(codec->dev, "Can not produce exact bitclock");
716 + 		/* This is fine if using dsp format, but if using i2s
717 + 		   there may be trouble. To fix the issue edit the
718 + 		   aic31xx_divs table for your mclk and sample
719 + 		   rate. Details can be found from:
720 + 		   http://www.ti.com/lit/ds/symlink/tlv320aic3100.pdf
721 + 		   Section: 5.6 CLOCK Generation and PLL
722 + 		*/
723 + 	}
724 + 	i = match;
716 725
717 726 	/* PLL configuration */
718 727 	snd_soc_update_bits(codec, AIC31XX_PLLPR, AIC31XX_PLL_MASK,
··· 764 729 	snd_soc_write(codec, AIC31XX_AOSR, aic31xx_divs[i].aosr);
765 730
766 731 	/* Bit clock divider configuration. */
767 - 	bclk_n = (aic31xx_divs[i].dosr * aic31xx_divs[i].mdac)
768 - 		/ snd_soc_params_to_frame_size(params);
769 - 	if (bclk_n == 0) {
770 - 		dev_err(codec->dev, "%s: Not enough BLCK bandwidth\n",
771 - 			__func__);
772 - 		return -EINVAL;
773 - 	}
774 -
775 732 	snd_soc_update_bits(codec, AIC31XX_BCLKN,
776 733 			    AIC31XX_PLL_MASK, bclk_n);
777 734
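The tlv320aic31xx change above replaces "take the first table row matching mclk and rate" with a scoring loop: every matching row is rated by how unevenly DOSR*MDAC divides the frame size, and the row with the smallest remainder wins. Below is a hedged user-space sketch of just that selection logic; `pick_div`, `div_row`, and the two-row table (copied from the patch's 48 kHz entries) are illustrative stand-ins, not driver API.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model of a clock-divider table row: only the fields the
 * scoring loop actually reads (mclk, rate, dosr, mdac). */
struct div_row { int mclk, rate, dosr, mdac; };

/* Two 12 MHz / 48 kHz rows from the patch: dosr*mdac = 1024 and 480. */
static const struct div_row divs[] = {
	{12000000, 48000, 128, 8},
	{12000000, 48000,  96, 5},
};

/* Returns the index of the best-scoring row, or -1 if no row matches
 * mclk/rate; *bclk_n receives the integer bit-clock divider. */
static int pick_div(int mclk, int rate, int frame_size, int *bclk_n)
{
	int best = -1, best_score = frame_size;
	size_t i;

	for (i = 0; i < sizeof(divs) / sizeof(divs[0]); i++) {
		if (divs[i].mclk != mclk || divs[i].rate != rate)
			continue;
		/* Score: remainder of dosr*mdac over the frame size.
		 * 0 means the bit clock divides out exactly. */
		int score = (divs[i].dosr * divs[i].mdac) % frame_size;
		int bn = (divs[i].dosr * divs[i].mdac) / frame_size;
		if (score < best_score && bn > 0) {
			best = (int)i;
			best_score = score;
			*bclk_n = bn;
		}
	}
	return best;
}
```

For a 32-bit frame (16-bit stereo) the first row already divides exactly (1024 % 32 == 0), while for a 40-bit frame only the added 96*5 = 480 row does (480 % 40 == 0), which is exactly why the patch adds the extra rows: a nonzero remainder means an inexact bit clock, tolerable for DSP formats but not for i2s.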
+10 -1
sound/soc/davinci/davinci-mcasp.c
··· 467 467 {
468 468 	u32 fmt;
469 469 	u32 tx_rotate = (word_length / 4) & 0x7;
470 - 	u32 rx_rotate = (32 - word_length) / 4;
471 470 	u32 mask = (1ULL << word_length) - 1;
471 + 	/*
472 + 	 * For captured data we should not rotate, inversion and masking is
473 + 	 * enoguh to get the data to the right position:
474 + 	 * Format	data from bus		after reverse (XRBUF)
475 + 	 * S16_LE:	|LSB|MSB|xxx|xxx|	|xxx|xxx|MSB|LSB|
476 + 	 * S24_3LE:	|LSB|DAT|MSB|xxx|	|xxx|MSB|DAT|LSB|
477 + 	 * S24_LE:	|LSB|DAT|MSB|xxx|	|xxx|MSB|DAT|LSB|
478 + 	 * S32_LE:	|LSB|DAT|DAT|MSB|	|MSB|DAT|DAT|LSB|
479 + 	 */
480 + 	u32 rx_rotate = 0;
472 481
473 482 	/*
474 483 	 * if s BCLK-to-LRCLK ratio has been configured via the set_clkdiv()
+2 -2
sound/soc/dwc/designware_i2s.c
··· 4 4  * sound/soc/dwc/designware_i2s.c
5 5  *
6 6  * Copyright (C) 2010 ST Microelectronics
7 -  * Rajeev Kumar <rajeev-dlh.kumar@st.com>
7 +  * Rajeev Kumar <rajeevkumar.linux@gmail.com>
8 8  *
9 9  * This file is licensed under the terms of the GNU General Public
10 10  * License version 2. This program is licensed "as is" without any
··· 455 455
456 456 module_platform_driver(dw_i2s_driver);
457 457
458 - MODULE_AUTHOR("Rajeev Kumar <rajeev-dlh.kumar@st.com>");
458 + MODULE_AUTHOR("Rajeev Kumar <rajeevkumar.linux@gmail.com>");
459 459 MODULE_DESCRIPTION("DESIGNWARE I2S SoC Interface");
460 460 MODULE_LICENSE("GPL");
461 461 MODULE_ALIAS("platform:designware_i2s");
+7 -6
sound/soc/rockchip/rockchip_i2s.c
··· 165 165 	struct rk_i2s_dev *i2s = to_info(cpu_dai);
166 166 	unsigned int mask = 0, val = 0;
167 167
168 - 	mask = I2S_CKR_MSS_SLAVE;
168 + 	mask = I2S_CKR_MSS_MASK;
169 169 	switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
170 170 	case SND_SOC_DAIFMT_CBS_CFS:
171 - 		val = I2S_CKR_MSS_SLAVE;
171 + 		/* Set source clock in Master mode */
172 + 		val = I2S_CKR_MSS_MASTER;
172 173 		break;
173 174 	case SND_SOC_DAIFMT_CBM_CFM:
174 - 		val = I2S_CKR_MSS_MASTER;
175 + 		val = I2S_CKR_MSS_SLAVE;
175 176 		break;
176 177 	default:
177 178 		return -EINVAL;
··· 362 361 	case I2S_XFER:
363 362 	case I2S_CLR:
364 363 	case I2S_RXDR:
364 + 	case I2S_FIFOLR:
365 + 	case I2S_INTSR:
365 366 		return true;
366 367 	default:
367 368 		return false;
··· 373 370 static bool rockchip_i2s_volatile_reg(struct device *dev, unsigned int reg)
374 371 {
375 372 	switch (reg) {
376 - 	case I2S_FIFOLR:
377 373 	case I2S_INTSR:
374 + 	case I2S_CLR:
378 375 		return true;
379 376 	default:
380 377 		return false;
··· 384 381 static bool rockchip_i2s_precious_reg(struct device *dev, unsigned int reg)
385 382 {
386 383 	switch (reg) {
387 - 	case I2S_FIFOLR:
388 - 		return true;
389 384 	default:
390 385 		return false;
391 386 	}
+3 -2
sound/soc/samsung/i2s.c
··· 462 462 	if (dir == SND_SOC_CLOCK_IN)
463 463 		rfs = 0;
464 464
465 - 	if ((rfs && other->rfs && (other->rfs != rfs)) ||
465 + 	if ((rfs && other && other->rfs && (other->rfs != rfs)) ||
466 466 	    (any_active(i2s) &&
467 467 	    (((dir == SND_SOC_CLOCK_IN)
468 468 	    && !(mod & MOD_CDCLKCON)) ||
··· 762 762 	} else {
763 763 		u32 mod = readl(i2s->addr + I2SMOD);
764 764 		i2s->cdclk_out = !(mod & MOD_CDCLKCON);
765 - 		other->cdclk_out = i2s->cdclk_out;
765 + 		if (other)
766 + 			other->cdclk_out = i2s->cdclk_out;
766 767 	}
767 768 	/* Reset any constraint on RFS and BFS */
768 769 	i2s->rfs = 0;
+5 -1
sound/soc/soc-compress.c
··· 101 101
102 102 	fe->dpcm[stream].runtime = fe_substream->runtime;
103 103
104 - 	if (dpcm_path_get(fe, stream, &list) <= 0) {
104 + 	ret = dpcm_path_get(fe, stream, &list);
105 + 	if (ret < 0) {
106 + 		mutex_unlock(&fe->card->mutex);
107 + 		goto fe_err;
108 + 	} else if (ret == 0) {
105 109 		dev_dbg(fe->dev, "ASoC: %s no valid %s route\n",
106 110 			fe->dai_link->name, stream ? "capture" : "playback");
107 111 	}
+5 -1
sound/soc/soc-pcm.c
··· 2352 2352 	mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
2353 2353 	fe->dpcm[stream].runtime = fe_substream->runtime;
2354 2354
2355 - 	if (dpcm_path_get(fe, stream, &list) <= 0) {
2355 + 	ret = dpcm_path_get(fe, stream, &list);
2356 + 	if (ret < 0) {
2357 + 		mutex_unlock(&fe->card->mutex);
2358 + 		return ret;
2359 + 	} else if (ret == 0) {
2356 2360 		dev_dbg(fe->dev, "ASoC: %s no valid %s route\n",
2357 2361 			fe->dai_link->name, stream ? "capture" : "playback");
2358 2362 	}
+2 -2
sound/soc/spear/spear_pcm.c
··· 4 4  * sound/soc/spear/spear_pcm.c
5 5  *
6 6  * Copyright (C) 2012 ST Microelectronics
7 -  * Rajeev Kumar<rajeev-dlh.kumar@st.com>
7 +  * Rajeev Kumar<rajeevkumar.linux@gmail.com>
8 8  *
9 9  * This file is licensed under the terms of the GNU General Public
10 10  * License version 2. This program is licensed "as is" without any
··· 50 50 }
51 51 EXPORT_SYMBOL_GPL(devm_spear_pcm_platform_register);
52 52
53 - MODULE_AUTHOR("Rajeev Kumar <rajeev-dlh.kumar@st.com>");
53 + MODULE_AUTHOR("Rajeev Kumar <rajeevkumar.linux@gmail.com>");
54 54 MODULE_DESCRIPTION("SPEAr PCM DMA module");
55 55 MODULE_LICENSE("GPL");
+1 -1
tools/usb/usbip/libsrc/usbip_common.h
··· 15 15 #include <syslog.h>
16 16 #include <unistd.h>
17 17 #include <linux/usb/ch9.h>
18 - #include "../../uapi/usbip.h"
18 + #include <linux/usbip.h>
19 19
20 20 #ifndef USBIDS_FILE
21 21 #define USBIDS_FILE "/usr/share/hwdata/usb.ids"
+2 -2
virt/kvm/kvm_main.c
··· 110 110 bool kvm_is_mmio_pfn(pfn_t pfn)
111 111 {
112 112 	if (pfn_valid(pfn))
113 - 		return PageReserved(pfn_to_page(pfn));
113 + 		return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn));
114 114
115 115 	return true;
116 116 }
··· 1725 1725 	rcu_read_lock();
1726 1726 	pid = rcu_dereference(target->pid);
1727 1727 	if (pid)
1728 - 		task = get_pid_task(target->pid, PIDTYPE_PID);
1728 + 		task = get_pid_task(pid, PIDTYPE_PID);
1729 1729 	rcu_read_unlock();
1730 1730 	if (!task)
1731 1731 		return ret;