Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.0-rc7 into usb-next

We want the fixes in here, and to help resolve merge issues.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+2834 -1383
+3 -1
Documentation/devicetree/bindings/net/dsa/dsa.txt
···
  (DSA_MAX_SWITCHES).
  Each of these switch child nodes should have the following required properties:

- - reg                  : Describes the switch address on the MII bus
+ - reg                  : Contains two fields. The first one describes the
+                          address on the MII bus. The second is the switch
+                          number that must be unique in cascaded configurations
  - #address-cells       : Must be 1
  - #size-cells          : Must be 0
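The two-cell `reg` described above can be illustrated with a short sketch. This fragment is hypothetical (the MII address 0x10, node names, and labels are illustrative, not taken from the patch); only the shape of the binding follows the text:

```dts
dsa@0 {
	/* parent uses two address cells so each switch child can
	 * carry <mii-address switch-number> in its reg property */
	#address-cells = <2>;
	#size-cells = <0>;

	switch@0 {
		/* first cell: address on the MII bus (0x10, hypothetical);
		 * second cell: switch number, unique within the cascade */
		reg = <0x10 0>;
		#address-cells = <1>;
		#size-cells = <0>;
	};
};
```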
+8
Documentation/input/alps.txt
···
   byte 4:    0   y6   y5   y4   y3   y2   y1   y0
   byte 5:    0   z6   z5   z4   z3   z2   z1   z0

+ Protocol Version 2 DualPoint devices send standard PS/2 mouse packets for
+ the DualPoint Stick.
+
  Dualpoint device -- interleaved packet format
  ---------------------------------------------
···
   byte 6:    0   y9   y8   y7    1    m    r    l
   byte 7:    0   y6   y5   y4   y3   y2   y1   y0
   byte 8:    0   z6   z5   z4   z3   z2   z1   z0
+
+ Devices which use the interleaving format normally send standard PS/2 mouse
+ packets for the DualPoint Stick + ALPS Absolute Mode packets for the
+ touchpad, switching to the interleaved packet format when both the stick and
+ the touchpad are used at the same time.

  ALPS Absolute Mode - Protocol Version 3
  ---------------------------------------
+6
Documentation/input/event-codes.txt
···
  The kernel does not provide button emulation for such devices but treats
  them as any other INPUT_PROP_BUTTONPAD device.

+ INPUT_PROP_ACCELEROMETER
+ -------------------------
+ Directional axes on this device (absolute and/or relative x, y, z) represent
+ accelerometer data. All other axes retain their meaning. A device must not mix
+ regular directional axes and accelerometer axes on the same event node.
+
  Guidelines:
  ==========
  The guidelines below ensure proper single-touch and multi-finger functionality.
+6 -3
Documentation/input/multi-touch-protocol.txt
···
  The type of approaching tool. A lot of kernel drivers cannot distinguish
  between different tool types, such as a finger or a pen. In such cases, the
- event should be omitted. The protocol currently supports MT_TOOL_FINGER and
- MT_TOOL_PEN [2]. For type B devices, this event is handled by input core;
- drivers should instead use input_mt_report_slot_state().
+ event should be omitted. The protocol currently supports MT_TOOL_FINGER,
+ MT_TOOL_PEN, and MT_TOOL_PALM [2]. For type B devices, this event is handled
+ by input core; drivers should instead use input_mt_report_slot_state().
+ A contact's ABS_MT_TOOL_TYPE may change over time while still touching the
+ device, because the firmware may not be able to determine which tool is being
+ used when it first appears.

  ABS_MT_BLOB_ID
+25 -19
MAINTAINERS
···
  F:	include/uapi/linux/kfd_ioctl.h

  AMD MICROCODE UPDATE SUPPORT
- M:	Andreas Herrmann <herrmann.der.user@googlemail.com>
- L:	amd64-microcode@amd64.org
+ M:	Borislav Petkov <bp@alien8.de>
  S:	Maintained
  F:	arch/x86/kernel/cpu/microcode/amd*
···
  L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
  S:	Maintained
  F:	arch/arm/mach-mvebu/
- F:	drivers/rtc/armada38x-rtc
+ F:	drivers/rtc/rtc-armada38x.c

  ARM/Marvell Berlin SoC support
  M:	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
···
  F:	drivers/*/*rockchip*
  F:	drivers/*/*/*rockchip*
  F:	sound/soc/rockchip/
+ N:	rockchip

  ARM/SAMSUNG EXYNOS ARM ARCHITECTURES
  M:	Kukjin Kim <kgene@kernel.org>
···
  F:	include/linux/platform_data/at24.h

  ATA OVER ETHERNET (AOE) DRIVER
- M:	"Ed L. Cashin" <ecashin@coraid.com>
- W:	http://support.coraid.com/support/linux
+ M:	"Ed L. Cashin" <ed.cashin@acm.org>
+ W:	http://www.openaoe.org/
  S:	Supported
  F:	Documentation/aoe/
  F:	drivers/block/aoe/
···
  S:	Maintained
  F:	Documentation/hwmon/dme1737
  F:	drivers/hwmon/dme1737.c
+
+ DMI/SMBIOS SUPPORT
+ M:	Jean Delvare <jdelvare@suse.de>
+ S:	Maintained
+ F:	drivers/firmware/dmi-id.c
+ F:	drivers/firmware/dmi_scan.c
+ F:	include/linux/dmi.h

  DOCKING STATION DRIVER
  M:	Shaohua Li <shaohua.li@intel.com>
···
  F:	drivers/platform/x86/intel_menlow.c

  INTEL IA32 MICROCODE UPDATE SUPPORT
- M:	Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
+ M:	Borislav Petkov <bp@alien8.de>
  S:	Maintained
  F:	arch/x86/kernel/cpu/microcode/core*
  F:	arch/x86/kernel/cpu/microcode/intel*
···
  S:	Maintained
  F:	drivers/char/hw_random/ixp4xx-rng.c

- INTEL ETHERNET DRIVERS (e100/e1000/e1000e/fm10k/igb/igbvf/ixgb/ixgbe/ixgbevf/i40e/i40evf)
+ INTEL ETHERNET DRIVERS
  M:	Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- M:	Jesse Brandeburg <jesse.brandeburg@intel.com>
- M:	Bruce Allan <bruce.w.allan@intel.com>
- M:	Carolyn Wyborny <carolyn.wyborny@intel.com>
- M:	Don Skidmore <donald.c.skidmore@intel.com>
- M:	Greg Rose <gregory.v.rose@intel.com>
- M:	Matthew Vick <matthew.vick@intel.com>
- M:	John Ronciak <john.ronciak@intel.com>
- M:	Mitch Williams <mitch.a.williams@intel.com>
- M:	Linux NICS <linux.nics@intel.com>
- L:	e1000-devel@lists.sourceforge.net
+ R:	Jesse Brandeburg <jesse.brandeburg@intel.com>
+ R:	Shannon Nelson <shannon.nelson@intel.com>
+ R:	Carolyn Wyborny <carolyn.wyborny@intel.com>
+ R:	Don Skidmore <donald.c.skidmore@intel.com>
+ R:	Matthew Vick <matthew.vick@intel.com>
+ R:	John Ronciak <john.ronciak@intel.com>
+ R:	Mitch Williams <mitch.a.williams@intel.com>
+ L:	intel-wired-lan@lists.osuosl.org
  W:	http://www.intel.com/support/feedback.htm
  W:	http://e1000.sourceforge.net/
- T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net.git
- T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next.git
+ Q:	http://patchwork.ozlabs.org/project/intel-wired-lan/list/
+ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue.git
+ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue.git
  S:	Supported
  F:	Documentation/networking/e100.txt
  F:	Documentation/networking/e1000.txt
+1 -1
Makefile
···
  VERSION = 4
  PATCHLEVEL = 0
  SUBLEVEL = 0
- EXTRAVERSION = -rc5
+ EXTRAVERSION = -rc7
  NAME = Hurr durr I'ma sheep

  # *DOCUMENTATION*
+18 -6
arch/arc/kernel/signal.c
···
  		 sigset_t *set)
  {
  	int err;
- 	err = __copy_to_user(&(sf->uc.uc_mcontext.regs), regs,
+ 	err = __copy_to_user(&(sf->uc.uc_mcontext.regs.scratch), regs,
  			     sizeof(sf->uc.uc_mcontext.regs.scratch));
  	err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(sigset_t));
···
  	if (!err)
  		set_current_blocked(&set);

- 	err |= __copy_from_user(regs, &(sf->uc.uc_mcontext.regs),
+ 	err |= __copy_from_user(regs, &(sf->uc.uc_mcontext.regs.scratch),
  				sizeof(sf->uc.uc_mcontext.regs.scratch));

  	return err;
···
  	/* Don't restart from sigreturn */
  	syscall_wont_restart(regs);
+
+ 	/*
+ 	 * Ensure that sigreturn always returns to user mode (in case the
+ 	 * regs saved on user stack got fudged between save and sigreturn)
+ 	 * Otherwise it is easy to panic the kernel with a custom
+ 	 * signal handler and/or restorer which clobberes the status32/ret
+ 	 * to return to a bogus location in kernel mode.
+ 	 */
+ 	regs->status32 |= STATUS_U_MASK;

  	return regs->r0;
···
  	/*
  	 * handler returns using sigreturn stub provided already by userpsace
+ 	 * If not, nuke the process right away
  	 */
- 	BUG_ON(!(ksig->ka.sa.sa_flags & SA_RESTORER));
+ 	if(!(ksig->ka.sa.sa_flags & SA_RESTORER))
+ 		return 1;
+
  	regs->blink = (unsigned long)ksig->ka.sa.sa_restorer;

  	/* User Stack for signal handler will be above the frame just carved */
···
  handle_signal(struct ksignal *ksig, struct pt_regs *regs)
  {
  	sigset_t *oldset = sigmask_to_save();
- 	int ret;
+ 	int failed;

  	/* Set up the stack frame */
- 	ret = setup_rt_frame(ksig, oldset, regs);
+ 	failed = setup_rt_frame(ksig, oldset, regs);

- 	signal_setup_done(ret, ksig, 0);
+ 	signal_setup_done(failed, ksig, 0);
  }

  void do_signal(struct pt_regs *regs)
+1
arch/arm/Kconfig
···
  	select GENERIC_CLOCKEVENTS
  	select GPIO_PXA
  	select HAVE_IDE
+ 	select IRQ_DOMAIN
  	select MULTI_IRQ_HANDLER
  	select PLAT_PXA
  	select SPARSE_IRQ
+19
arch/arm/boot/dts/dm8168-evm.dts
···
  		>;
  	};

+ 	mmc_pins: pinmux_mmc_pins {
+ 		pinctrl-single,pins = <
+ 			DM816X_IOPAD(0x0a70, MUX_MODE0)		/* SD_POW */
+ 			DM816X_IOPAD(0x0a74, MUX_MODE0)		/* SD_CLK */
+ 			DM816X_IOPAD(0x0a78, MUX_MODE0)		/* SD_CMD */
+ 			DM816X_IOPAD(0x0a7C, MUX_MODE0)		/* SD_DAT0 */
+ 			DM816X_IOPAD(0x0a80, MUX_MODE0)		/* SD_DAT1 */
+ 			DM816X_IOPAD(0x0a84, MUX_MODE0)		/* SD_DAT2 */
+ 			DM816X_IOPAD(0x0a88, MUX_MODE0)		/* SD_DAT2 */
+ 			DM816X_IOPAD(0x0a8c, MUX_MODE2)		/* GP1[7] */
+ 			DM816X_IOPAD(0x0a90, MUX_MODE2)		/* GP1[8] */
+ 		>;
+ 	};
+
  	usb0_pins: pinmux_usb0_pins {
  		pinctrl-single,pins = <
  			DM816X_IOPAD(0x0d00, MUX_MODE0)		/* USB0_DRVVBUS */
···
  };

  &mmc1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&mmc_pins>;
  	vmmc-supply = <&vmmcsd_fixed>;
+ 	bus-width = <4>;
+ 	cd-gpios = <&gpio2 7 GPIO_ACTIVE_LOW>;
+ 	wp-gpios = <&gpio2 8 GPIO_ACTIVE_LOW>;
  };

  /* At least dm8168-evm rev c won't support multipoint, later may */
+14 -4
arch/arm/boot/dts/dm816x.dtsi
···
  		};

  		gpio1: gpio@48032000 {
- 			compatible = "ti,omap3-gpio";
+ 			compatible = "ti,omap4-gpio";
  			ti,hwmods = "gpio1";
+ 			ti,gpio-always-on;
  			reg = <0x48032000 0x1000>;
- 			interrupts = <97>;
+ 			interrupts = <96>;
+ 			gpio-controller;
+ 			#gpio-cells = <2>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <2>;
  		};

  		gpio2: gpio@4804c000 {
- 			compatible = "ti,omap3-gpio";
+ 			compatible = "ti,omap4-gpio";
  			ti,hwmods = "gpio2";
+ 			ti,gpio-always-on;
  			reg = <0x4804c000 0x1000>;
- 			interrupts = <99>;
+ 			interrupts = <98>;
+ 			gpio-controller;
+ 			#gpio-cells = <2>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <2>;
  		};

  		gpmc: gpmc@50000000 {
-2
arch/arm/boot/dts/dra7.dtsi
···
  				      "wkupclk", "refclk",
  				      "div-clk", "phy-div";
  			#phy-cells = <0>;
- 			ti,hwmods = "pcie1-phy";
  		};

  		pcie2_phy: pciephy@4a095000 {
···
  				      "wkupclk", "refclk",
  				      "div-clk", "phy-div";
  			#phy-cells = <0>;
- 			ti,hwmods = "pcie2-phy";
  			status = "disabled";
  		};
  	};
+4
arch/arm/boot/dts/omap3.dtsi
···
  			ti,hwmods = "aes";
  			reg = <0x480c5000 0x50>;
  			interrupts = <0>;
+ 			dmas = <&sdma 65 &sdma 66>;
+ 			dma-names = "tx", "rx";
  		};

  		prm: prm@48306000 {
···
  			ti,hwmods = "sham";
  			reg = <0x480c3000 0x64>;
  			interrupts = <49>;
+ 			dmas = <&sdma 69>;
+ 			dma-names = "rx";
  		};

  		smartreflex_core: smartreflex@480cb000 {
+1
arch/arm/boot/dts/rk3288.dtsi
···
  			             "mac_clk_rx", "mac_clk_tx",
  			             "clk_mac_ref", "clk_mac_refout",
  			             "aclk_mac", "pclk_mac";
+ 			status = "disabled";
  		};

  		usb_host0_ehci: usb@ff500000 {
+1 -1
arch/arm/boot/dts/socfpga.dtsi
···
  			#address-cells = <1>;
  			#size-cells = <0>;
  			reg = <0xfff01000 0x1000>;
- 			interrupts = <0 156 4>;
+ 			interrupts = <0 155 4>;
  			num-cs = <4>;
  			clocks = <&spi_m_clk>;
  			status = "disabled";
+16
arch/arm/boot/dts/sun4i-a10-olinuxino-lime.dts
···
  	model = "Olimex A10-OLinuXino-LIME";
  	compatible = "olimex,a10-olinuxino-lime", "allwinner,sun4i-a10";

+ 	cpus {
+ 		cpu0: cpu@0 {
+ 			/*
+ 			 * The A10-Lime is known to be unstable
+ 			 * when running at 1008 MHz
+ 			 */
+ 			operating-points = <
+ 				/* kHz	  uV */
+ 				912000	1350000
+ 				864000	1300000
+ 				624000	1250000
+ 				>;
+ 			cooling-max-level = <2>;
+ 		};
+ 	};
+
  	soc@01c00000 {
  		emac: ethernet@01c0b000 {
  			pinctrl-names = "default";
+1 -2
arch/arm/boot/dts/sun4i-a10.dtsi
···
  			clock-latency = <244144>; /* 8 32k periods */
  			operating-points = <
  				/* kHz	  uV */
- 				1056000	1500000
  				1008000	1400000
  				912000	1350000
  				864000	1300000
···
  				>;
  			#cooling-cells = <2>;
  			cooling-min-level = <0>;
- 			cooling-max-level = <4>;
+ 			cooling-max-level = <3>;
  		};
  	};
+1 -2
arch/arm/boot/dts/sun5i-a13.dtsi
···
  			clock-latency = <244144>; /* 8 32k periods */
  			operating-points = <
  				/* kHz	  uV */
- 				1104000	1500000
  				1008000	1400000
  				912000	1350000
  				864000	1300000
···
  				>;
  			#cooling-cells = <2>;
  			cooling-min-level = <0>;
- 			cooling-max-level = <6>;
+ 			cooling-max-level = <5>;
  		};
  	};
+1 -2
arch/arm/boot/dts/sun7i-a20.dtsi
···
  			clock-latency = <244144>; /* 8 32k periods */
  			operating-points = <
  				/* kHz	  uV */
- 				1008000	1450000
  				960000	1400000
  				912000	1400000
  				864000	1300000
···
  				>;
  			#cooling-cells = <2>;
  			cooling-min-level = <0>;
- 			cooling-max-level = <7>;
+ 			cooling-max-level = <6>;
  		};

  		cpu@1 {
+2
arch/arm/mach-omap2/id.c
···
  		return kasprintf(GFP_KERNEL, "OMAP4");
  	else if (soc_is_omap54xx())
  		return kasprintf(GFP_KERNEL, "OMAP5");
+ 	else if (soc_is_am33xx() || soc_is_am335x())
+ 		return kasprintf(GFP_KERNEL, "AM33xx");
  	else if (soc_is_am43xx())
  		return kasprintf(GFP_KERNEL, "AM43xx");
  	else if (soc_is_dra7xx())
+48 -63
arch/arm/mach-pxa/irq.c
···
   * it under the terms of the GNU General Public License version 2 as
   * published by the Free Software Foundation.
   */
+ #include <linux/bitops.h>
  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/interrupt.h>
···
  #define ICHP_VAL_IRQ		(1 << 31)
  #define ICHP_IRQ(i)		(((i) >> 16) & 0x7fff)
  #define IPR_VALID		(1 << 31)
- #define IRQ_BIT(n)		(((n) - PXA_IRQ(0)) & 0x1f)

  #define MAX_INTERNAL_IRQS	128
···
  static void __iomem *pxa_irq_base;
  static int pxa_internal_irq_nr;
  static bool cpu_has_ipr;
+ static struct irq_domain *pxa_irq_domain;

  static inline void __iomem *irq_base(int i)
  {
···
  void pxa_mask_irq(struct irq_data *d)
  {
  	void __iomem *base = irq_data_get_irq_chip_data(d);
+ 	irq_hw_number_t irq = irqd_to_hwirq(d);
  	uint32_t icmr = __raw_readl(base + ICMR);

- 	icmr &= ~(1 << IRQ_BIT(d->irq));
+ 	icmr &= ~BIT(irq & 0x1f);
  	__raw_writel(icmr, base + ICMR);
  }

  void pxa_unmask_irq(struct irq_data *d)
  {
  	void __iomem *base = irq_data_get_irq_chip_data(d);
+ 	irq_hw_number_t irq = irqd_to_hwirq(d);
  	uint32_t icmr = __raw_readl(base + ICMR);

- 	icmr |= 1 << IRQ_BIT(d->irq);
+ 	icmr |= BIT(irq & 0x1f);
  	__raw_writel(icmr, base + ICMR);
  }
···
  	} while (1);
  }

- void __init pxa_init_irq(int irq_nr, int (*fn)(struct irq_data *, unsigned int))
+ static int pxa_irq_map(struct irq_domain *h, unsigned int virq,
+ 		       irq_hw_number_t hw)
  {
- 	int irq, i, n;
+ 	void __iomem *base = irq_base(hw / 32);

- 	BUG_ON(irq_nr > MAX_INTERNAL_IRQS);
+ 	/* initialize interrupt priority */
+ 	if (cpu_has_ipr)
+ 		__raw_writel(hw | IPR_VALID, pxa_irq_base + IPR(hw));
+
+ 	irq_set_chip_and_handler(virq, &pxa_internal_irq_chip,
+ 				 handle_level_irq);
+ 	irq_set_chip_data(virq, base);
+ 	set_irq_flags(virq, IRQF_VALID);
+
+ 	return 0;
+ }
+
+ static struct irq_domain_ops pxa_irq_ops = {
+ 	.map    = pxa_irq_map,
+ 	.xlate  = irq_domain_xlate_onecell,
+ };
+
+ static __init void
+ pxa_init_irq_common(struct device_node *node, int irq_nr,
+ 		    int (*fn)(struct irq_data *, unsigned int))
+ {
+ 	int n;

  	pxa_internal_irq_nr = irq_nr;
- 	cpu_has_ipr = !cpu_is_pxa25x();
- 	pxa_irq_base = io_p2v(0x40d00000);
+ 	pxa_irq_domain = irq_domain_add_legacy(node, irq_nr,
+ 					       PXA_IRQ(0), 0,
+ 					       &pxa_irq_ops, NULL);
+ 	if (!pxa_irq_domain)
+ 		panic("Unable to add PXA IRQ domain\n");
+ 	irq_set_default_host(pxa_irq_domain);

  	for (n = 0; n < irq_nr; n += 32) {
  		void __iomem *base = irq_base(n >> 5);

  		__raw_writel(0, base + ICMR);	/* disable all IRQs */
  		__raw_writel(0, base + ICLR);	/* all IRQs are IRQ, not FIQ */
- 		for (i = n; (i < (n + 32)) && (i < irq_nr); i++) {
- 			/* initialize interrupt priority */
- 			if (cpu_has_ipr)
- 				__raw_writel(i | IPR_VALID, pxa_irq_base + IPR(i));
-
- 			irq = PXA_IRQ(i);
- 			irq_set_chip_and_handler(irq, &pxa_internal_irq_chip,
- 						 handle_level_irq);
- 			irq_set_chip_data(irq, base);
- 			set_irq_flags(irq, IRQF_VALID);
- 		}
  	}
-
  	/* only unmasked interrupts kick us out of idle */
  	__raw_writel(1, irq_base(0) + ICCR);

  	pxa_internal_irq_chip.irq_set_wake = fn;
+ }
+
+ void __init pxa_init_irq(int irq_nr, int (*fn)(struct irq_data *, unsigned int))
+ {
+ 	BUG_ON(irq_nr > MAX_INTERNAL_IRQS);
+
+ 	pxa_irq_base = io_p2v(0x40d00000);
+ 	cpu_has_ipr = !cpu_is_pxa25x();
+ 	pxa_init_irq_common(NULL, irq_nr, fn);
  }

  #ifdef CONFIG_PM
···
  };

  #ifdef CONFIG_OF
- static struct irq_domain *pxa_irq_domain;
-
- static int pxa_irq_map(struct irq_domain *h, unsigned int virq,
- 		       irq_hw_number_t hw)
- {
- 	void __iomem *base = irq_base(hw / 32);
-
- 	/* initialize interrupt priority */
- 	if (cpu_has_ipr)
- 		__raw_writel(hw | IPR_VALID, pxa_irq_base + IPR(hw));
-
- 	irq_set_chip_and_handler(hw, &pxa_internal_irq_chip,
- 				 handle_level_irq);
- 	irq_set_chip_data(hw, base);
- 	set_irq_flags(hw, IRQF_VALID);
-
- 	return 0;
- }
-
- static struct irq_domain_ops pxa_irq_ops = {
- 	.map    = pxa_irq_map,
- 	.xlate  = irq_domain_xlate_onecell,
- };
-
  static const struct of_device_id intc_ids[] __initconst = {
  	{ .compatible = "marvell,pxa-intc", },
  	{}
···
  {
  	struct device_node *node;
  	struct resource res;
- 	int n, ret;
+ 	int ret;

  	node = of_find_matching_node(NULL, intc_ids);
  	if (!node) {
···
  		return;
  	}

- 	pxa_irq_domain = irq_domain_add_legacy(node, pxa_internal_irq_nr, 0, 0,
- 					       &pxa_irq_ops, NULL);
- 	if (!pxa_irq_domain)
- 		panic("Unable to add PXA IRQ domain\n");
-
- 	irq_set_default_host(pxa_irq_domain);
-
- 	for (n = 0; n < pxa_internal_irq_nr; n += 32) {
- 		void __iomem *base = irq_base(n >> 5);
-
- 		__raw_writel(0, base + ICMR);	/* disable all IRQs */
- 		__raw_writel(0, base + ICLR);	/* all IRQs are IRQ, not FIQ */
- 	}
-
- 	/* only unmasked interrupts kick us out of idle */
- 	__raw_writel(1, irq_base(0) + ICCR);
-
- 	pxa_internal_irq_chip.irq_set_wake = fn;
+ 	pxa_init_irq_common(node, pxa_internal_irq_nr, fn);
  }
  #endif /* CONFIG_OF */
+1 -1
arch/arm/mach-pxa/zeus.c
···
  };

  static struct platform_device can_regulator_device = {
- 	.name	= "reg-fixed-volage",
+ 	.name	= "reg-fixed-voltage",
  	.id	= 0,
  	.dev	= {
  		.platform_data	= &can_regulator_pdata,
+2 -6
arch/arm/mach-sunxi/Kconfig
···
  menuconfig ARCH_SUNXI
  	bool "Allwinner SoCs" if ARCH_MULTI_V7
  	select ARCH_REQUIRE_GPIOLIB
+ 	select ARCH_HAS_RESET_CONTROLLER
  	select CLKSRC_MMIO
  	select GENERIC_IRQ_CHIP
  	select PINCTRL
  	select SUN4I_TIMER
+ 	select RESET_CONTROLLER

  if ARCH_SUNXI
···
  config MACH_SUN6I
  	bool "Allwinner A31 (sun6i) SoCs support"
  	default ARCH_SUNXI
- 	select ARCH_HAS_RESET_CONTROLLER
  	select ARM_GIC
  	select MFD_SUN6I_PRCM
- 	select RESET_CONTROLLER
  	select SUN5I_HSTIMER

  config MACH_SUN7I
···
  config MACH_SUN8I
  	bool "Allwinner A23 (sun8i) SoCs support"
  	default ARCH_SUNXI
- 	select ARCH_HAS_RESET_CONTROLLER
  	select ARM_GIC
  	select MFD_SUN6I_PRCM
- 	select RESET_CONTROLLER

  config MACH_SUN9I
  	bool "Allwinner (sun9i) SoCs support"
  	default ARCH_SUNXI
- 	select ARCH_HAS_RESET_CONTROLLER
  	select ARM_GIC
- 	select RESET_CONTROLLER

  endif
+14 -1
arch/arm/plat-omap/dmtimer.c
···
  	struct device *dev = &pdev->dev;
  	const struct of_device_id *match;
  	const struct dmtimer_platform_data *pdata;
+ 	int ret;

  	match = of_match_device(of_match_ptr(omap_timer_match), dev);
  	pdata = match ? match->data : dev->platform_data;
···
  	}

  	if (!timer->reserved) {
- 		pm_runtime_get_sync(dev);
+ 		ret = pm_runtime_get_sync(dev);
+ 		if (ret < 0) {
+ 			dev_err(dev, "%s: pm_runtime_get_sync failed!\n",
+ 				__func__);
+ 			goto err_get_sync;
+ 		}
  		__omap_dm_timer_init_regs(timer);
  		pm_runtime_put(dev);
  	}
···
  	dev_dbg(dev, "Device Probed.\n");

  	return 0;
+
+ err_get_sync:
+ 	pm_runtime_put_noidle(dev);
+ 	pm_runtime_disable(dev);
+ 	return ret;
  }

  /**
···
  		break;
  	}
  	spin_unlock_irqrestore(&dm_timer_lock, flags);
+
+ 	pm_runtime_disable(&pdev->dev);

  	return ret;
  }
+1 -1
arch/arm64/boot/dts/arm/juno-clocks.dtsi
···
   */

  /* SoC fixed clocks */
- soc_uartclk: refclk72738khz {
+ soc_uartclk: refclk7273800hz {
  	compatible = "fixed-clock";
  	#clock-cells = <0>;
  	clock-frequency = <7273800>;
+23 -7
arch/arm64/include/asm/cmpxchg.h
···
  	__ret; \
  })

- #define this_cpu_cmpxchg_1(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
- #define this_cpu_cmpxchg_2(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
- #define this_cpu_cmpxchg_4(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
- #define this_cpu_cmpxchg_8(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
+ #define _protect_cmpxchg_local(pcp, o, n)			\
+ ({								\
+ 	typeof(*raw_cpu_ptr(&(pcp))) __ret;			\
+ 	preempt_disable();					\
+ 	__ret = cmpxchg_local(raw_cpu_ptr(&(pcp)), o, n);	\
+ 	preempt_enable();					\
+ 	__ret;							\
+ })

- #define this_cpu_cmpxchg_double_8(ptr1, ptr2, o1, o2, n1, n2) \
- 	cmpxchg_double_local(raw_cpu_ptr(&(ptr1)), raw_cpu_ptr(&(ptr2)), \
- 				o1, o2, n1, n2)
+ #define this_cpu_cmpxchg_1(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
+ #define this_cpu_cmpxchg_2(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
+ #define this_cpu_cmpxchg_4(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
+ #define this_cpu_cmpxchg_8(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
+
+ #define this_cpu_cmpxchg_double_8(ptr1, ptr2, o1, o2, n1, n2)		\
+ ({									\
+ 	int __ret;							\
+ 	preempt_disable();						\
+ 	__ret = cmpxchg_double_local(	raw_cpu_ptr(&(ptr1)),		\
+ 					raw_cpu_ptr(&(ptr2)),		\
+ 					o1, o2, n1, n2);		\
+ 	preempt_enable();						\
+ 	__ret;								\
+ })

  #define cmpxchg64(ptr,o,n)		cmpxchg((ptr),(o),(n))
  #define cmpxchg64_local(ptr,o,n)	cmpxchg_local((ptr),(o),(n))
+9
arch/arm64/include/asm/mmu_context.h
···
  {
  	unsigned int cpu = smp_processor_id();

+ 	/*
+ 	 * init_mm.pgd does not contain any user mappings and it is always
+ 	 * active for kernel addresses in TTBR1. Just set the reserved TTBR0.
+ 	 */
+ 	if (next == &init_mm) {
+ 		cpu_set_reserved_ttbr0();
+ 		return;
+ 	}
+
  	if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next)
  		check_and_switch_context(next, tsk);
  }
+34 -12
arch/arm64/include/asm/percpu.h
···
  	return ret;
  }

- #define _percpu_add(pcp, val) \
- 	__percpu_add(raw_cpu_ptr(&(pcp)), val, sizeof(pcp))
+ #define _percpu_read(pcp)						\
+ ({									\
+ 	typeof(pcp) __retval;						\
+ 	preempt_disable();						\
+ 	__retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)),	\
+ 					      sizeof(pcp));		\
+ 	preempt_enable();						\
+ 	__retval;							\
+ })

- #define _percpu_add_return(pcp, val) (typeof(pcp)) (_percpu_add(pcp, val))
+ #define _percpu_write(pcp, val)						\
+ do {									\
+ 	preempt_disable();						\
+ 	__percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val),	\
+ 		       sizeof(pcp));					\
+ 	preempt_enable();						\
+ } while(0)								\
+
+ #define _pcp_protect(operation, pcp, val)				\
+ ({									\
+ 	typeof(pcp) __retval;						\
+ 	preempt_disable();						\
+ 	__retval = (typeof(pcp))operation(raw_cpu_ptr(&(pcp)),		\
+ 					  (val), sizeof(pcp));		\
+ 	preempt_enable();						\
+ 	__retval;							\
+ })
+
+ #define _percpu_add(pcp, val) \
+ 	_pcp_protect(__percpu_add, pcp, val)
+
+ #define _percpu_add_return(pcp, val) _percpu_add(pcp, val)

  #define _percpu_and(pcp, val) \
- 	__percpu_and(raw_cpu_ptr(&(pcp)), val, sizeof(pcp))
+ 	_pcp_protect(__percpu_and, pcp, val)

  #define _percpu_or(pcp, val) \
- 	__percpu_or(raw_cpu_ptr(&(pcp)), val, sizeof(pcp))
-
- #define _percpu_read(pcp) (typeof(pcp))	\
- 	(__percpu_read(raw_cpu_ptr(&(pcp)), sizeof(pcp)))
-
- #define _percpu_write(pcp, val) \
- 	__percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), sizeof(pcp))
+ 	_pcp_protect(__percpu_or, pcp, val)

  #define _percpu_xchg(pcp, val) (typeof(pcp)) \
- 	(__percpu_xchg(raw_cpu_ptr(&(pcp)), (unsigned long)(val), sizeof(pcp)))
+ 	_pcp_protect(__percpu_xchg, pcp, (unsigned long)(val))

  #define this_cpu_add_1(pcp, val) _percpu_add(pcp, val)
  #define this_cpu_add_2(pcp, val) _percpu_add(pcp, val)
+1
arch/metag/include/asm/io.h
···
  #define _ASM_METAG_IO_H

  #include <linux/types.h>
+ #include <asm/pgtable-bits.h>

  #define IO_SPACE_LIMIT  0
+104
arch/metag/include/asm/pgtable-bits.h
···
+ /*
+  * Meta page table definitions.
+  */
+
+ #ifndef _METAG_PGTABLE_BITS_H
+ #define _METAG_PGTABLE_BITS_H
+
+ #include <asm/metag_mem.h>
+
+ /*
+  * Definitions for MMU descriptors
+  *
+  * These are the hardware bits in the MMCU pte entries.
+  * Derived from the Meta toolkit headers.
+  */
+ #define _PAGE_PRESENT		MMCU_ENTRY_VAL_BIT
+ #define _PAGE_WRITE		MMCU_ENTRY_WR_BIT
+ #define _PAGE_PRIV		MMCU_ENTRY_PRIV_BIT
+ /* Write combine bit - this can cause writes to occur out of order */
+ #define _PAGE_WR_COMBINE	MMCU_ENTRY_WRC_BIT
+ /* Sys coherent bit - this bit is never used by Linux */
+ #define _PAGE_SYS_COHERENT	MMCU_ENTRY_SYS_BIT
+ #define _PAGE_ALWAYS_ZERO_1	0x020
+ #define _PAGE_CACHE_CTRL0	0x040
+ #define _PAGE_CACHE_CTRL1	0x080
+ #define _PAGE_ALWAYS_ZERO_2	0x100
+ #define _PAGE_ALWAYS_ZERO_3	0x200
+ #define _PAGE_ALWAYS_ZERO_4	0x400
+ #define _PAGE_ALWAYS_ZERO_5	0x800
+
+ /* These are software bits that we stuff into the gaps in the hardware
+  * pte entries that are not used.  Note, these DO get stored in the actual
+  * hardware, but the hardware just does not use them.
+  */
+ #define _PAGE_ACCESSED		_PAGE_ALWAYS_ZERO_1
+ #define _PAGE_DIRTY		_PAGE_ALWAYS_ZERO_2
+
+ /* Pages owned, and protected by, the kernel. */
+ #define _PAGE_KERNEL		_PAGE_PRIV
+
+ /* No cacheing of this page */
+ #define _PAGE_CACHE_WIN0	(MMCU_CWIN_UNCACHED << MMCU_ENTRY_CWIN_S)
+ /* burst cacheing - good for data streaming */
+ #define _PAGE_CACHE_WIN1	(MMCU_CWIN_BURST << MMCU_ENTRY_CWIN_S)
+ /* One cache way per thread */
+ #define _PAGE_CACHE_WIN2	(MMCU_CWIN_C1SET << MMCU_ENTRY_CWIN_S)
+ /* Full on cacheing */
+ #define _PAGE_CACHE_WIN3	(MMCU_CWIN_CACHED << MMCU_ENTRY_CWIN_S)
+
+ #define _PAGE_CACHEABLE		(_PAGE_CACHE_WIN3 | _PAGE_WR_COMBINE)
+
+ /* which bits are used for cache control ... */
+ #define _PAGE_CACHE_MASK	(_PAGE_CACHE_CTRL0 | _PAGE_CACHE_CTRL1 | \
+ 				 _PAGE_WR_COMBINE)
+
+ /* This is a mask of the bits that pte_modify is allowed to change. */
+ #define _PAGE_CHG_MASK		(PAGE_MASK)
+
+ #define _PAGE_SZ_SHIFT		1
+ #define _PAGE_SZ_4K		(0x0)
+ #define _PAGE_SZ_8K		(0x1 << _PAGE_SZ_SHIFT)
+ #define _PAGE_SZ_16K		(0x2 << _PAGE_SZ_SHIFT)
+ #define _PAGE_SZ_32K		(0x3 << _PAGE_SZ_SHIFT)
+ #define _PAGE_SZ_64K		(0x4 << _PAGE_SZ_SHIFT)
+ #define _PAGE_SZ_128K		(0x5 << _PAGE_SZ_SHIFT)
+ #define _PAGE_SZ_256K		(0x6 << _PAGE_SZ_SHIFT)
+ #define _PAGE_SZ_512K		(0x7 << _PAGE_SZ_SHIFT)
+ #define _PAGE_SZ_1M		(0x8 << _PAGE_SZ_SHIFT)
+ #define _PAGE_SZ_2M		(0x9 << _PAGE_SZ_SHIFT)
+ #define _PAGE_SZ_4M		(0xa << _PAGE_SZ_SHIFT)
+ #define _PAGE_SZ_MASK		(0xf << _PAGE_SZ_SHIFT)
+
+ #if defined(CONFIG_PAGE_SIZE_4K)
+ #define _PAGE_SZ		(_PAGE_SZ_4K)
+ #elif defined(CONFIG_PAGE_SIZE_8K)
+ #define _PAGE_SZ		(_PAGE_SZ_8K)
+ #elif defined(CONFIG_PAGE_SIZE_16K)
+ #define _PAGE_SZ		(_PAGE_SZ_16K)
+ #endif
+ #define _PAGE_TABLE		(_PAGE_SZ | _PAGE_PRESENT)
+
+ #if defined(CONFIG_HUGETLB_PAGE_SIZE_8K)
+ # define _PAGE_SZHUGE		(_PAGE_SZ_8K)
+ #elif defined(CONFIG_HUGETLB_PAGE_SIZE_16K)
+ # define _PAGE_SZHUGE		(_PAGE_SZ_16K)
+ #elif defined(CONFIG_HUGETLB_PAGE_SIZE_32K)
+ # define _PAGE_SZHUGE		(_PAGE_SZ_32K)
+ #elif defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
+ # define _PAGE_SZHUGE		(_PAGE_SZ_64K)
+ #elif defined(CONFIG_HUGETLB_PAGE_SIZE_128K)
+ # define _PAGE_SZHUGE		(_PAGE_SZ_128K)
+ #elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K)
+ # define _PAGE_SZHUGE		(_PAGE_SZ_256K)
+ #elif defined(CONFIG_HUGETLB_PAGE_SIZE_512K)
+ # define _PAGE_SZHUGE		(_PAGE_SZ_512K)
+ #elif defined(CONFIG_HUGETLB_PAGE_SIZE_1M)
+ # define _PAGE_SZHUGE		(_PAGE_SZ_1M)
+ #elif defined(CONFIG_HUGETLB_PAGE_SIZE_2M)
+ # define _PAGE_SZHUGE		(_PAGE_SZ_2M)
+ #elif defined(CONFIG_HUGETLB_PAGE_SIZE_4M)
+ # define _PAGE_SZHUGE		(_PAGE_SZ_4M)
+ #endif
+
+ #endif /* _METAG_PGTABLE_BITS_H */
+1 -94
arch/metag/include/asm/pgtable.h
··· 5 5 #ifndef _METAG_PGTABLE_H 6 6 #define _METAG_PGTABLE_H 7 7 8 + #include <asm/pgtable-bits.h> 8 9 #include <asm-generic/pgtable-nopmd.h> 9 10 10 11 /* Invalid regions on Meta: 0x00000000-0x001FFFFF and 0xFFFF0000-0xFFFFFFFF */ ··· 19 18 #define CONSISTENT_END 0x773FFFFF 20 19 #define VMALLOC_START 0x78000000 21 20 #define VMALLOC_END 0x7FFFFFFF 22 - #endif 23 - 24 - /* 25 - * Definitions for MMU descriptors 26 - * 27 - * These are the hardware bits in the MMCU pte entries. 28 - * Derived from the Meta toolkit headers. 29 - */ 30 - #define _PAGE_PRESENT MMCU_ENTRY_VAL_BIT 31 - #define _PAGE_WRITE MMCU_ENTRY_WR_BIT 32 - #define _PAGE_PRIV MMCU_ENTRY_PRIV_BIT 33 - /* Write combine bit - this can cause writes to occur out of order */ 34 - #define _PAGE_WR_COMBINE MMCU_ENTRY_WRC_BIT 35 - /* Sys coherent bit - this bit is never used by Linux */ 36 - #define _PAGE_SYS_COHERENT MMCU_ENTRY_SYS_BIT 37 - #define _PAGE_ALWAYS_ZERO_1 0x020 38 - #define _PAGE_CACHE_CTRL0 0x040 39 - #define _PAGE_CACHE_CTRL1 0x080 40 - #define _PAGE_ALWAYS_ZERO_2 0x100 41 - #define _PAGE_ALWAYS_ZERO_3 0x200 42 - #define _PAGE_ALWAYS_ZERO_4 0x400 43 - #define _PAGE_ALWAYS_ZERO_5 0x800 44 - 45 - /* These are software bits that we stuff into the gaps in the hardware 46 - * pte entries that are not used. Note, these DO get stored in the actual 47 - * hardware, but the hardware just does not use them. 48 - */ 49 - #define _PAGE_ACCESSED _PAGE_ALWAYS_ZERO_1 50 - #define _PAGE_DIRTY _PAGE_ALWAYS_ZERO_2 51 - 52 - /* Pages owned, and protected by, the kernel. */
53 - #define _PAGE_KERNEL _PAGE_PRIV 54 - 55 - /* No cacheing of this page */ 56 - #define _PAGE_CACHE_WIN0 (MMCU_CWIN_UNCACHED << MMCU_ENTRY_CWIN_S) 57 - /* burst cacheing - good for data streaming */ 58 - #define _PAGE_CACHE_WIN1 (MMCU_CWIN_BURST << MMCU_ENTRY_CWIN_S) 59 - /* One cache way per thread */ 60 - #define _PAGE_CACHE_WIN2 (MMCU_CWIN_C1SET << MMCU_ENTRY_CWIN_S) 61 - /* Full on cacheing */ 62 - #define _PAGE_CACHE_WIN3 (MMCU_CWIN_CACHED << MMCU_ENTRY_CWIN_S) 63 - 64 - #define _PAGE_CACHEABLE (_PAGE_CACHE_WIN3 | _PAGE_WR_COMBINE) 65 - 66 - /* which bits are used for cache control ... */ 67 - #define _PAGE_CACHE_MASK (_PAGE_CACHE_CTRL0 | _PAGE_CACHE_CTRL1 | \ 68 - _PAGE_WR_COMBINE) 69 - 70 - /* This is a mask of the bits that pte_modify is allowed to change. */ 71 - #define _PAGE_CHG_MASK (PAGE_MASK) 72 - 73 - #define _PAGE_SZ_SHIFT 1 74 - #define _PAGE_SZ_4K (0x0) 75 - #define _PAGE_SZ_8K (0x1 << _PAGE_SZ_SHIFT) 76 - #define _PAGE_SZ_16K (0x2 << _PAGE_SZ_SHIFT) 77 - #define _PAGE_SZ_32K (0x3 << _PAGE_SZ_SHIFT) 78 - #define _PAGE_SZ_64K (0x4 << _PAGE_SZ_SHIFT) 79 - #define _PAGE_SZ_128K (0x5 << _PAGE_SZ_SHIFT) 80 - #define _PAGE_SZ_256K (0x6 << _PAGE_SZ_SHIFT) 81 - #define _PAGE_SZ_512K (0x7 << _PAGE_SZ_SHIFT) 82 - #define _PAGE_SZ_1M (0x8 << _PAGE_SZ_SHIFT) 83 - #define _PAGE_SZ_2M (0x9 << _PAGE_SZ_SHIFT) 84 - #define _PAGE_SZ_4M (0xa << _PAGE_SZ_SHIFT) 85 - #define _PAGE_SZ_MASK (0xf << _PAGE_SZ_SHIFT) 86 - 87 - #if defined(CONFIG_PAGE_SIZE_4K) 88 - #define _PAGE_SZ (_PAGE_SZ_4K) 89 - #elif defined(CONFIG_PAGE_SIZE_8K) 90 - #define _PAGE_SZ (_PAGE_SZ_8K) 91 - #elif defined(CONFIG_PAGE_SIZE_16K) 92 - #define _PAGE_SZ (_PAGE_SZ_16K) 93 - #endif 94 - #define _PAGE_TABLE (_PAGE_SZ | _PAGE_PRESENT) 95 - 96 - #if defined(CONFIG_HUGETLB_PAGE_SIZE_8K) 97 - # define _PAGE_SZHUGE (_PAGE_SZ_8K) 98 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_16K) 99 - # define _PAGE_SZHUGE (_PAGE_SZ_16K) 100 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_32K) 101 - # define _PAGE_SZHUGE (_PAGE_SZ_32K)
102 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_64K) 103 - # define _PAGE_SZHUGE (_PAGE_SZ_64K) 104 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_128K) 105 - # define _PAGE_SZHUGE (_PAGE_SZ_128K) 106 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K) 107 - # define _PAGE_SZHUGE (_PAGE_SZ_256K) 108 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_512K) 109 - # define _PAGE_SZHUGE (_PAGE_SZ_512K) 110 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_1M) 111 - # define _PAGE_SZHUGE (_PAGE_SZ_1M) 112 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_2M) 113 - # define _PAGE_SZHUGE (_PAGE_SZ_2M) 114 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_4M) 115 - # define _PAGE_SZHUGE (_PAGE_SZ_4M) 116 21 #endif 117 22 118 23 /*
+10 -7
arch/parisc/include/asm/pgalloc.h
··· 26 26 27 27 if (likely(pgd != NULL)) { 28 28 memset(pgd, 0, PAGE_SIZE<<PGD_ALLOC_ORDER); 29 - #ifdef CONFIG_64BIT 29 + #if PT_NLEVELS == 3 30 30 actual_pgd += PTRS_PER_PGD; 31 31 /* Populate first pmd with allocated memory. We mark it 32 32 * with PxD_FLAG_ATTACHED as a signal to the system that this ··· 45 45 46 46 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) 47 47 { 48 - #ifdef CONFIG_64BIT 48 + #if PT_NLEVELS == 3 49 49 pgd -= PTRS_PER_PGD; 50 50 #endif 51 51 free_pages((unsigned long)pgd, PGD_ALLOC_ORDER); ··· 72 72 73 73 static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) 74 74 { 75 - #ifdef CONFIG_64BIT 76 75 if(pmd_flag(*pmd) & PxD_FLAG_ATTACHED) 77 - /* This is the permanent pmd attached to the pgd; 78 - * cannot free it */ 76 + /* 77 + * This is the permanent pmd attached to the pgd; 78 + * cannot free it. 79 + * Increment the counter to compensate for the decrement 80 + * done by generic mm code. 81 + */ 82 + mm_inc_nr_pmds(mm); 79 83 return; 80 - #endif 81 84 free_pages((unsigned long)pmd, PMD_ORDER); 82 85 } 83 86 ··· 102 99 static inline void 103 100 pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) 104 101 { 105 - #ifdef CONFIG_64BIT 102 + #if PT_NLEVELS == 3 106 103 /* preserve the gateway marker if this is the beginning of 107 104 * the permanent pmd */ 108 105 if(pmd_flag(*pmd) & PxD_FLAG_ATTACHED)
+6 -3
arch/parisc/kernel/syscall_table.S
··· 55 55 #define ENTRY_COMP(_name_) .word sys_##_name_ 56 56 #endif 57 57 58 - ENTRY_SAME(restart_syscall) /* 0 */ 59 - ENTRY_SAME(exit) 58 + 90: ENTRY_SAME(restart_syscall) /* 0 */ 59 + 91: ENTRY_SAME(exit) 60 60 ENTRY_SAME(fork_wrapper) 61 61 ENTRY_SAME(read) 62 62 ENTRY_SAME(write) ··· 439 439 ENTRY_SAME(bpf) 440 440 ENTRY_COMP(execveat) 441 441 442 - /* Nothing yet */ 442 + 443 + .ifne (. - 90b) - (__NR_Linux_syscalls * (91b - 90b)) 444 + .error "size of syscall table does not fit value of __NR_Linux_syscalls" 445 + .endif 443 446 444 447 #undef ENTRY_SAME 445 448 #undef ENTRY_DIFF
+1 -1
arch/powerpc/include/asm/cputhreads.h
··· 55 55 56 56 static inline int cpu_nr_cores(void) 57 57 { 58 - return NR_CPUS >> threads_shift; 58 + return nr_cpu_ids >> threads_shift; 59 59 } 60 60 61 61 static inline cpumask_t cpu_online_cores_map(void)
+3
arch/powerpc/include/asm/ppc-opcode.h
··· 153 153 #define PPC_INST_MFSPR_PVR_MASK 0xfc1fffff 154 154 #define PPC_INST_MFTMR 0x7c0002dc 155 155 #define PPC_INST_MSGSND 0x7c00019c 156 + #define PPC_INST_MSGCLR 0x7c0001dc 156 157 #define PPC_INST_MSGSNDP 0x7c00011c 157 158 #define PPC_INST_MTTMR 0x7c0003dc 158 159 #define PPC_INST_NOP 0x60000000 ··· 309 308 ___PPC_RT(t) | ___PPC_RA(a) | \ 310 309 ___PPC_RB(b) | __PPC_EH(eh)) 311 310 #define PPC_MSGSND(b) stringify_in_c(.long PPC_INST_MSGSND | \ 311 + ___PPC_RB(b)) 312 + #define PPC_MSGCLR(b) stringify_in_c(.long PPC_INST_MSGCLR | \ 312 313 ___PPC_RB(b)) 313 314 #define PPC_MSGSNDP(b) stringify_in_c(.long PPC_INST_MSGSNDP | \ 314 315 ___PPC_RB(b))
+3
arch/powerpc/include/asm/reg.h
··· 608 608 #define SRR1_ISI_N_OR_G 0x10000000 /* ISI: Access is no-exec or G */ 609 609 #define SRR1_ISI_PROT 0x08000000 /* ISI: Other protection fault */ 610 610 #define SRR1_WAKEMASK 0x00380000 /* reason for wakeup */ 611 + #define SRR1_WAKEMASK_P8 0x003c0000 /* reason for wakeup on POWER8 */ 611 612 #define SRR1_WAKESYSERR 0x00300000 /* System error */ 612 613 #define SRR1_WAKEEE 0x00200000 /* External interrupt */ 613 614 #define SRR1_WAKEMT 0x00280000 /* mtctrl */ 614 615 #define SRR1_WAKEHMI 0x00280000 /* Hypervisor maintenance */ 615 616 #define SRR1_WAKEDEC 0x00180000 /* Decrementer interrupt */ 617 + #define SRR1_WAKEDBELL 0x00140000 /* Privileged doorbell on P8 */ 616 618 #define SRR1_WAKETHERM 0x00100000 /* Thermal management interrupt */ 617 619 #define SRR1_WAKERESET 0x00100000 /* System reset */ 620 + #define SRR1_WAKEHDBELL 0x000c0000 /* Hypervisor doorbell on P8 */ 618 621 #define SRR1_WAKESTATE 0x00030000 /* Powersave exit mask [46:47] */ 619 622 #define SRR1_WS_DEEPEST 0x00030000 /* Some resources not maintained, 620 623 * may not be recoverable */
+20
arch/powerpc/kernel/cputable.c
··· 437 437 .machine_check_early = __machine_check_early_realmode_p8, 438 438 .platform = "power8", 439 439 }, 440 + { /* Power8NVL */ 441 + .pvr_mask = 0xffff0000, 442 + .pvr_value = 0x004c0000, 443 + .cpu_name = "POWER8NVL (raw)", 444 + .cpu_features = CPU_FTRS_POWER8, 445 + .cpu_user_features = COMMON_USER_POWER8, 446 + .cpu_user_features2 = COMMON_USER2_POWER8, 447 + .mmu_features = MMU_FTRS_POWER8, 448 + .icache_bsize = 128, 449 + .dcache_bsize = 128, 450 + .num_pmcs = 6, 451 + .pmc_type = PPC_PMC_IBM, 452 + .oprofile_cpu_type = "ppc64/power8", 453 + .oprofile_type = PPC_OPROFILE_INVALID, 454 + .cpu_setup = __setup_cpu_power8, 455 + .cpu_restore = __restore_cpu_power8, 456 + .flush_tlb = __flush_tlb_power8, 457 + .machine_check_early = __machine_check_early_realmode_p8, 458 + .platform = "power8", 459 + }, 440 460 { /* Power8 DD1: Does not support doorbell IPIs */ 441 461 .pvr_mask = 0xffffff00, 442 462 .pvr_value = 0x004d0100,
+2
arch/powerpc/kernel/dbell.c
··· 17 17 18 18 #include <asm/dbell.h> 19 19 #include <asm/irq_regs.h> 20 + #include <asm/kvm_ppc.h> 20 21 21 22 #ifdef CONFIG_SMP 22 23 void doorbell_setup_this_cpu(void) ··· 42 41 43 42 may_hard_irq_enable(); 44 43 44 + kvmppc_set_host_ipi(smp_processor_id(), 0); 45 45 __this_cpu_inc(irq_stat.doorbell_irqs); 46 46 47 47 smp_ipi_demux();
+1 -1
arch/powerpc/kernel/exceptions-64s.S
··· 1408 1408 bne 9f /* continue in V mode if we are. */ 1409 1409 1410 1410 5: 1411 - #ifdef CONFIG_KVM_BOOK3S_64_HV 1411 + #ifdef CONFIG_KVM_BOOK3S_64_HANDLER 1412 1412 /* 1413 1413 * We are coming from kernel context. Check if we are coming from 1414 1414 * guest. if yes, then we can continue. We will fall through
+4 -4
arch/powerpc/kvm/book3s_hv.c
··· 636 636 spin_lock(&vcpu->arch.vpa_update_lock); 637 637 lppaca = (struct lppaca *)vcpu->arch.vpa.pinned_addr; 638 638 if (lppaca) 639 - yield_count = lppaca->yield_count; 639 + yield_count = be32_to_cpu(lppaca->yield_count); 640 640 spin_unlock(&vcpu->arch.vpa_update_lock); 641 641 return yield_count; 642 642 } ··· 942 942 static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr, 943 943 bool preserve_top32) 944 944 { 945 + struct kvm *kvm = vcpu->kvm; 945 946 struct kvmppc_vcore *vc = vcpu->arch.vcore; 946 947 u64 mask; 947 948 949 + mutex_lock(&kvm->lock); 948 950 spin_lock(&vc->lock); 949 951 /* 950 952 * If ILE (interrupt little-endian) has changed, update the 951 953 * MSR_LE bit in the intr_msr for each vcpu in this vcore. 952 954 */ 953 955 if ((new_lpcr & LPCR_ILE) != (vc->lpcr & LPCR_ILE)) { 954 - struct kvm *kvm = vcpu->kvm; 955 956 struct kvm_vcpu *vcpu; 956 957 int i; 957 958 958 - mutex_lock(&kvm->lock); 959 959 kvm_for_each_vcpu(i, vcpu, kvm) { 960 960 if (vcpu->arch.vcore != vc) 961 961 continue; ··· 964 964 else 965 965 vcpu->arch.intr_msr &= ~MSR_LE; 966 966 } 967 - mutex_unlock(&kvm->lock); 968 967 } 969 968 970 969 /* ··· 980 981 mask &= 0xFFFFFFFF; 981 982 vc->lpcr = (vc->lpcr & ~mask) | (new_lpcr & mask); 982 983 spin_unlock(&vc->lock); 984 + mutex_unlock(&kvm->lock); 983 985 } 984 986 985 987 static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
+1
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 1005 1005 /* Save HEIR (HV emulation assist reg) in emul_inst 1006 1006 if this is an HEI (HV emulation interrupt, e40) */ 1007 1007 li r3,KVM_INST_FETCH_FAILED 1008 + stw r3,VCPU_LAST_INST(r9) 1008 1009 cmpwi r12,BOOK3S_INTERRUPT_H_EMUL_ASSIST 1009 1010 bne 11f 1010 1011 mfspr r3,SPRN_HEIR
+12 -2
arch/powerpc/platforms/powernv/smp.c
··· 33 33 #include <asm/runlatch.h> 34 34 #include <asm/code-patching.h> 35 35 #include <asm/dbell.h> 36 + #include <asm/kvm_ppc.h> 37 + #include <asm/ppc-opcode.h> 36 38 37 39 #include "powernv.h" 38 40 ··· 151 149 static void pnv_smp_cpu_kill_self(void) 152 150 { 153 151 unsigned int cpu; 154 - unsigned long srr1; 152 + unsigned long srr1, wmask; 155 153 u32 idle_states; 156 154 157 155 /* Standard hot unplug procedure */ ··· 162 160 DBG("CPU%d offline\n", cpu); 163 161 generic_set_cpu_dead(cpu); 164 162 smp_wmb(); 163 + 164 + wmask = SRR1_WAKEMASK; 165 + if (cpu_has_feature(CPU_FTR_ARCH_207S)) 166 + wmask = SRR1_WAKEMASK_P8; 165 167 166 168 idle_states = pnv_get_supported_cpuidle_states(); 167 169 /* We don't want to take decrementer interrupts while we are offline, ··· 197 191 * having finished executing in a KVM guest, then srr1 198 192 * contains 0. 199 193 */ 200 - if ((srr1 & SRR1_WAKEMASK) == SRR1_WAKEEE) { 194 + if ((srr1 & wmask) == SRR1_WAKEEE) { 201 195 icp_native_flush_interrupt(); 202 196 local_paca->irq_happened &= PACA_IRQ_HARD_DIS; 203 197 smp_mb(); 198 + } else if ((srr1 & wmask) == SRR1_WAKEHDBELL) { 199 + unsigned long msg = PPC_DBELL_TYPE(PPC_DBELL_SERVER); 200 + asm volatile(PPC_MSGCLR(%0) : : "r" (msg)); 201 + kvmppc_set_host_ipi(cpu, 0); 204 202 } 205 203 206 204 if (cpu_core_split_required())
+23 -21
arch/powerpc/platforms/pseries/mobility.c
··· 25 25 static struct kobject *mobility_kobj; 26 26 27 27 struct update_props_workarea { 28 - u32 phandle; 29 - u32 state; 30 - u64 reserved; 31 - u32 nprops; 28 + __be32 phandle; 29 + __be32 state; 30 + __be64 reserved; 31 + __be32 nprops; 32 32 } __packed; 33 33 34 34 #define NODE_ACTION_MASK 0xff000000 ··· 54 54 return rc; 55 55 } 56 56 57 - static int delete_dt_node(u32 phandle) 57 + static int delete_dt_node(__be32 phandle) 58 58 { 59 59 struct device_node *dn; 60 60 61 - dn = of_find_node_by_phandle(phandle); 61 + dn = of_find_node_by_phandle(be32_to_cpu(phandle)); 62 62 if (!dn) 63 63 return -ENOENT; 64 64 ··· 127 127 return 0; 128 128 } 129 129 130 - static int update_dt_node(u32 phandle, s32 scope) 130 + static int update_dt_node(__be32 phandle, s32 scope) 131 131 { 132 132 struct update_props_workarea *upwa; 133 133 struct device_node *dn; ··· 136 136 char *prop_data; 137 137 char *rtas_buf; 138 138 int update_properties_token; 139 + u32 nprops; 139 140 u32 vd; 140 141 141 142 update_properties_token = rtas_token("ibm,update-properties"); ··· 147 146 if (!rtas_buf) 148 147 return -ENOMEM; 149 148 150 - dn = of_find_node_by_phandle(phandle); 149 + dn = of_find_node_by_phandle(be32_to_cpu(phandle)); 151 150 if (!dn) { 152 151 kfree(rtas_buf); 153 152 return -ENOENT; ··· 163 162 break; 164 163 165 164 prop_data = rtas_buf + sizeof(*upwa); 165 + nprops = be32_to_cpu(upwa->nprops); 166 166 167 167 /* On the first call to ibm,update-properties for a node the 168 168 * the first property value descriptor contains an empty ··· 172 170 */ 173 171 if (*prop_data == 0) { 174 172 prop_data++; 175 - vd = *(u32 *)prop_data; 173 + vd = be32_to_cpu(*(__be32 *)prop_data); 176 174 prop_data += vd + sizeof(vd); 177 - upwa->nprops--; 175 + nprops--; 178 176 } 179 177 180 - for (i = 0; i < upwa->nprops; i++) { 178 + for (i = 0; i < nprops; i++) { 181 179 char *prop_name; 182 180 183 181 prop_name = prop_data; 184 182 prop_data += strlen(prop_name) + 1; 185 - vd = *(u32 *)prop_data;
183 + vd = be32_to_cpu(*(__be32 *)prop_data); 186 184 prop_data += sizeof(vd); 187 185 188 186 switch (vd) { ··· 214 212 return 0; 215 213 } 216 214 217 - static int add_dt_node(u32 parent_phandle, u32 drc_index) 215 + static int add_dt_node(__be32 parent_phandle, __be32 drc_index) 218 216 { 219 217 struct device_node *dn; 220 218 struct device_node *parent_dn; 221 219 int rc; 222 220 223 - parent_dn = of_find_node_by_phandle(parent_phandle); 221 + parent_dn = of_find_node_by_phandle(be32_to_cpu(parent_phandle)); 224 222 if (!parent_dn) 225 223 return -ENOENT; 226 224 ··· 239 237 int pseries_devicetree_update(s32 scope) 240 238 { 241 239 char *rtas_buf; 242 - u32 *data; 240 + __be32 *data; 243 241 int update_nodes_token; 244 242 int rc; 245 243 ··· 256 254 if (rc && rc != 1) 257 255 break; 258 256 259 - data = (u32 *)rtas_buf + 4; 260 - while (*data & NODE_ACTION_MASK) { 257 + data = (__be32 *)rtas_buf + 4; 258 + while (be32_to_cpu(*data) & NODE_ACTION_MASK) { 261 259 int i; 262 - u32 action = *data & NODE_ACTION_MASK; 263 - int node_count = *data & NODE_COUNT_MASK; 260 + u32 action = be32_to_cpu(*data) & NODE_ACTION_MASK; 261 + u32 node_count = be32_to_cpu(*data) & NODE_COUNT_MASK; 264 262 265 263 data++; 266 264 267 265 for (i = 0; i < node_count; i++) { 268 - u32 phandle = *data++; 269 - u32 drc_index; 266 + __be32 phandle = *data++; 267 + __be32 drc_index; 270 268 271 269 switch (action) { 272 270 case DELETE_DT_NODE:
+1 -1
arch/s390/include/asm/elf.h
··· 211 211 212 212 extern unsigned long mmap_rnd_mask; 213 213 214 - #define STACK_RND_MASK (mmap_rnd_mask) 214 + #define STACK_RND_MASK (test_thread_flag(TIF_31BIT) ? 0x7ff : mmap_rnd_mask) 215 215 216 216 #define ARCH_DLINFO \ 217 217 do { \
+45 -16
arch/s390/kernel/ftrace.c
··· 57 57 58 58 unsigned long ftrace_plt; 59 59 60 + static inline void ftrace_generate_orig_insn(struct ftrace_insn *insn) 61 + { 62 + #ifdef CC_USING_HOTPATCH 63 + /* brcl 0,0 */ 64 + insn->opc = 0xc004; 65 + insn->disp = 0; 66 + #else 67 + /* stg r14,8(r15) */ 68 + insn->opc = 0xe3e0; 69 + insn->disp = 0xf0080024; 70 + #endif 71 + } 72 + 73 + static inline int is_kprobe_on_ftrace(struct ftrace_insn *insn) 74 + { 75 + #ifdef CONFIG_KPROBES 76 + if (insn->opc == BREAKPOINT_INSTRUCTION) 77 + return 1; 78 + #endif 79 + return 0; 80 + } 81 + 82 + static inline void ftrace_generate_kprobe_nop_insn(struct ftrace_insn *insn) 83 + { 84 + #ifdef CONFIG_KPROBES 85 + insn->opc = BREAKPOINT_INSTRUCTION; 86 + insn->disp = KPROBE_ON_FTRACE_NOP; 87 + #endif 88 + } 89 + 90 + static inline void ftrace_generate_kprobe_call_insn(struct ftrace_insn *insn) 91 + { 92 + #ifdef CONFIG_KPROBES 93 + insn->opc = BREAKPOINT_INSTRUCTION; 94 + insn->disp = KPROBE_ON_FTRACE_CALL; 95 + #endif 96 + } 97 + 60 98 int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, 61 99 unsigned long addr) 62 100 { ··· 110 72 return -EFAULT; 111 73 if (addr == MCOUNT_ADDR) { 112 74 /* Initial code replacement */ 113 - #ifdef CC_USING_HOTPATCH 114 - /* We expect to see brcl 0,0 */ 115 - ftrace_generate_nop_insn(&orig); 116 - #else 117 - /* We expect to see stg r14,8(r15) */ 118 - orig.opc = 0xe3e0; 119 - orig.disp = 0xf0080024; 120 - #endif 75 + ftrace_generate_orig_insn(&orig); 121 76 ftrace_generate_nop_insn(&new); 122 - } else if (old.opc == BREAKPOINT_INSTRUCTION) { 77 + } else if (is_kprobe_on_ftrace(&old)) { 123 78 /* 124 79 * If we find a breakpoint instruction, a kprobe has been 125 80 * placed at the beginning of the function. We write the ··· 120 89 * bytes of the original instruction so that the kprobes 121 90 * handler can execute a nop, if it reaches this breakpoint. 122 91 */
123 - new.opc = orig.opc = BREAKPOINT_INSTRUCTION; 124 - orig.disp = KPROBE_ON_FTRACE_CALL; 125 - new.disp = KPROBE_ON_FTRACE_NOP; 92 + ftrace_generate_kprobe_call_insn(&orig); 93 + ftrace_generate_kprobe_nop_insn(&new); 126 94 } else { 127 95 /* Replace ftrace call with a nop. */ 128 96 ftrace_generate_call_insn(&orig, rec->ip); ··· 141 111 142 112 if (probe_kernel_read(&old, (void *) rec->ip, sizeof(old))) 143 113 return -EFAULT; 144 - if (old.opc == BREAKPOINT_INSTRUCTION) { 114 + if (is_kprobe_on_ftrace(&old)) { 145 115 /* 146 116 * If we find a breakpoint instruction, a kprobe has been 147 117 * placed at the beginning of the function. We write the ··· 149 119 * bytes of the original instruction so that the kprobes 150 120 * handler can execute a brasl if it reaches this breakpoint. 151 121 */ 152 - new.opc = orig.opc = BREAKPOINT_INSTRUCTION; 153 - orig.disp = KPROBE_ON_FTRACE_NOP; 154 - new.disp = KPROBE_ON_FTRACE_CALL; 122 + ftrace_generate_kprobe_nop_insn(&orig); 123 + ftrace_generate_kprobe_call_insn(&new); 155 124 } else { 156 125 /* Replace nop with an ftrace call. */ 157 126 ftrace_generate_nop_insn(&orig);
+5 -2
arch/s390/kernel/perf_cpum_sf.c
··· 1415 1415 1416 1416 static struct attribute *cpumsf_pmu_events_attr[] = { 1417 1417 CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC), 1418 - CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC_DIAG), 1418 + NULL, 1419 1419 NULL, 1420 1420 }; 1421 1421 ··· 1606 1606 return -EINVAL; 1607 1607 } 1608 1608 1609 - if (si.ad) 1609 + if (si.ad) { 1610 1610 sfb_set_limits(CPUM_SF_MIN_SDB, CPUM_SF_MAX_SDB); 1611 + cpumsf_pmu_events_attr[1] = 1612 + CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC_DIAG); 1613 + } 1611 1614 1612 1615 sfdbg = debug_register(KMSG_COMPONENT, 2, 1, 80); 1613 1616 if (!sfdbg)
+11
arch/s390/kernel/swsusp_asm64.S
··· 177 177 lhi %r1,1 178 178 sigp %r1,%r0,SIGP_SET_ARCHITECTURE 179 179 sam64 180 + #ifdef CONFIG_SMP 181 + larl %r1,smp_cpu_mt_shift 182 + icm %r1,15,0(%r1) 183 + jz smt_done 184 + llgfr %r1,%r1 185 + smt_loop: 186 + sigp %r1,%r0,SIGP_SET_MULTI_THREADING 187 + brc 8,smt_done /* accepted */ 188 + brc 2,smt_loop /* busy, try again */ 189 + smt_done: 190 + #endif 180 191 larl %r1,.Lnew_pgm_check_psw 181 192 lpswe 0(%r1) 182 193 pgm_check_entry:
+12
arch/sparc/include/asm/hypervisor.h
··· 2957 2957 unsigned long reg_val); 2958 2958 #endif 2959 2959 2960 + 2961 + #define HV_FAST_M7_GET_PERFREG 0x43 2962 + #define HV_FAST_M7_SET_PERFREG 0x44 2963 + 2964 + #ifndef __ASSEMBLY__ 2965 + unsigned long sun4v_m7_get_perfreg(unsigned long reg_num, 2966 + unsigned long *reg_val); 2967 + unsigned long sun4v_m7_set_perfreg(unsigned long reg_num, 2968 + unsigned long reg_val); 2969 + #endif 2970 + 2960 2971 /* Function numbers for HV_CORE_TRAP. */ 2961 2972 #define HV_CORE_SET_VER 0x00 2962 2973 #define HV_CORE_PUTCHAR 0x01 ··· 2992 2981 #define HV_GRP_SDIO 0x0108 2993 2982 #define HV_GRP_SDIO_ERR 0x0109 2994 2983 #define HV_GRP_REBOOT_DATA 0x0110 2984 + #define HV_GRP_M7_PERF 0x0114 2995 2985 #define HV_GRP_NIAG_PERF 0x0200 2996 2986 #define HV_GRP_FIRE_PERF 0x0201 2997 2987 #define HV_GRP_N2_CPU 0x0202
+1
arch/sparc/kernel/hvapi.c
··· 48 48 { .group = HV_GRP_VT_CPU, }, 49 49 { .group = HV_GRP_T5_CPU, }, 50 50 { .group = HV_GRP_DIAG, .flags = FLAG_PRE_API }, 51 + { .group = HV_GRP_M7_PERF, }, 51 52 }; 52 53 53 54 static DEFINE_SPINLOCK(hvapi_lock);
+16
arch/sparc/kernel/hvcalls.S
··· 837 837 retl 838 838 nop 839 839 ENDPROC(sun4v_t5_set_perfreg) 840 + 841 + ENTRY(sun4v_m7_get_perfreg) 842 + mov %o1, %o4 843 + mov HV_FAST_M7_GET_PERFREG, %o5 844 + ta HV_FAST_TRAP 845 + stx %o1, [%o4] 846 + retl 847 + nop 848 + ENDPROC(sun4v_m7_get_perfreg) 849 + 850 + ENTRY(sun4v_m7_set_perfreg) 851 + mov HV_FAST_M7_SET_PERFREG, %o5 852 + ta HV_FAST_TRAP 853 + retl 854 + nop 855 + ENDPROC(sun4v_m7_set_perfreg)
+33
arch/sparc/kernel/pcr.c
··· 217 217 .pcr_nmi_disable = PCR_N4_PICNPT, 218 218 }; 219 219 220 + static u64 m7_pcr_read(unsigned long reg_num) 221 + { 222 + unsigned long val; 223 + 224 + (void) sun4v_m7_get_perfreg(reg_num, &val); 225 + 226 + return val; 227 + } 228 + 229 + static void m7_pcr_write(unsigned long reg_num, u64 val) 230 + { 231 + (void) sun4v_m7_set_perfreg(reg_num, val); 232 + } 233 + 234 + static const struct pcr_ops m7_pcr_ops = { 235 + .read_pcr = m7_pcr_read, 236 + .write_pcr = m7_pcr_write, 237 + .read_pic = n4_pic_read, 238 + .write_pic = n4_pic_write, 239 + .nmi_picl_value = n4_picl_value, 240 + .pcr_nmi_enable = (PCR_N4_PICNPT | PCR_N4_STRACE | 241 + PCR_N4_UTRACE | PCR_N4_TOE | 242 + (26 << PCR_N4_SL_SHIFT)), 243 + .pcr_nmi_disable = PCR_N4_PICNPT, 244 + }; 220 245 221 246 static unsigned long perf_hsvc_group; 222 247 static unsigned long perf_hsvc_major; ··· 271 246 272 247 case SUN4V_CHIP_NIAGARA5: 273 248 perf_hsvc_group = HV_GRP_T5_CPU; 249 + break; 250 + 251 + case SUN4V_CHIP_SPARC_M7: 252 + perf_hsvc_group = HV_GRP_M7_PERF; 274 253 break; 275 254 276 255 default: ··· 320 291 321 292 case SUN4V_CHIP_NIAGARA5: 322 293 pcr_ops = &n5_pcr_ops; 294 + break; 295 + 296 + case SUN4V_CHIP_SPARC_M7: 297 + pcr_ops = &m7_pcr_ops; 323 298 break; 324 299 325 300 default:
+43 -12
arch/sparc/kernel/perf_event.c
··· 792 792 .num_pic_regs = 4, 793 793 }; 794 794 795 + static void sparc_m7_write_pmc(int idx, u64 val) 796 + { 797 + u64 pcr; 798 + 799 + pcr = pcr_ops->read_pcr(idx); 800 + /* ensure ov and ntc are reset */ 801 + pcr &= ~(PCR_N4_OV | PCR_N4_NTC); 802 + 803 + pcr_ops->write_pic(idx, val & 0xffffffff); 804 + 805 + pcr_ops->write_pcr(idx, pcr); 806 + } 807 + 808 + static const struct sparc_pmu sparc_m7_pmu = { 809 + .event_map = niagara4_event_map, 810 + .cache_map = &niagara4_cache_map, 811 + .max_events = ARRAY_SIZE(niagara4_perfmon_event_map), 812 + .read_pmc = sparc_vt_read_pmc, 813 + .write_pmc = sparc_m7_write_pmc, 814 + .upper_shift = 5, 815 + .lower_shift = 5, 816 + .event_mask = 0x7ff, 817 + .user_bit = PCR_N4_UTRACE, 818 + .priv_bit = PCR_N4_STRACE, 819 + 820 + /* We explicitly don't support hypervisor tracing. */ 821 + .hv_bit = 0, 822 + 823 + .irq_bit = PCR_N4_TOE, 824 + .upper_nop = 0, 825 + .lower_nop = 0, 826 + .flags = 0, 827 + .max_hw_events = 4, 828 + .num_pcrs = 4, 829 + .num_pic_regs = 4, 830 + }; 795 831 static const struct sparc_pmu *sparc_pmu __read_mostly; 796 832 797 833 static u64 event_encoding(u64 event_id, int idx) ··· 996 960 cpuc->pcr[0] |= cpuc->event[0]->hw.config_base; 997 961 } 998 962 963 + static void sparc_pmu_start(struct perf_event *event, int flags); 964 + 999 965 /* On this PMU each PIC has it's own PCR control register. */
1000 966 static void calculate_multiple_pcrs(struct cpu_hw_events *cpuc) 1001 967 { ··· 1010 972 struct perf_event *cp = cpuc->event[i]; 1011 973 struct hw_perf_event *hwc = &cp->hw; 1012 974 int idx = hwc->idx; 1013 - u64 enc; 1014 975 1015 976 if (cpuc->current_idx[i] != PIC_NO_INDEX) 1016 977 continue; 1017 978 1018 - sparc_perf_event_set_period(cp, hwc, idx); 1019 979 cpuc->current_idx[i] = idx; 1020 980 1021 - enc = perf_event_get_enc(cpuc->events[i]); 1022 - cpuc->pcr[idx] &= ~mask_for_index(idx); 1023 - if (hwc->state & PERF_HES_STOPPED) 1024 - cpuc->pcr[idx] |= nop_for_index(idx); 1025 - else 1026 - cpuc->pcr[idx] |= event_encoding(enc, idx); 981 + sparc_pmu_start(cp, PERF_EF_RELOAD); 1027 982 } 1028 983 out: 1029 984 for (i = 0; i < cpuc->n_events; i++) { ··· 1132 1101 int i; 1133 1102 1134 1103 local_irq_save(flags); 1135 - perf_pmu_disable(event->pmu); 1136 1104 1137 1105 for (i = 0; i < cpuc->n_events; i++) { 1138 1106 if (event == cpuc->event[i]) { ··· 1157 1127 } 1158 1128 } 1159 1129 1160 - perf_pmu_enable(event->pmu); 1161 1130 local_irq_restore(flags); 1162 1131 } ··· 1390 1361 unsigned long flags; 1391 1362 1392 1363 local_irq_save(flags); 1393 - perf_pmu_disable(event->pmu); 1394 1364 1395 1365 n0 = cpuc->n_events; 1396 1366 if (n0 >= sparc_pmu->max_hw_events) ··· 1422 1394 1423 1395 ret = 0; 1424 1396 out: 1425 - perf_pmu_enable(event->pmu); 1426 1397 local_irq_restore(flags); 1427 1398 return ret; 1428 1399 } ··· 1692 1665 if (!strcmp(sparc_pmu_type, "niagara4") || 1693 1666 !strcmp(sparc_pmu_type, "niagara5")) { 1694 1667 sparc_pmu = &niagara4_pmu; 1668 + return true; 1669 + } 1670 + if (!strcmp(sparc_pmu_type, "sparc-m7")) { 1671 + sparc_pmu = &sparc_m7_pmu; 1695 1672 return true; 1696 1673 } 1697 1674 return false;
+4
arch/sparc/kernel/process_64.c
··· 287 287 printk(" TPC[%lx] O7[%lx] I7[%lx] RPC[%lx]\n", 288 288 gp->tpc, gp->o7, gp->i7, gp->rpc); 289 289 } 290 + 291 + touch_nmi_watchdog(); 290 292 } 291 293 292 294 memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot)); ··· 364 362 (cpu == this_cpu ? '*' : ' '), cpu, 365 363 pp->pcr[0], pp->pcr[1], pp->pcr[2], pp->pcr[3], 366 364 pp->pic[0], pp->pic[1], pp->pic[2], pp->pic[3]); 365 + 366 + touch_nmi_watchdog(); 367 367 } 368 368 369 369 memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot));
+32 -3
arch/sparc/lib/memmove.S
··· 8 8 9 9 .text 10 10 ENTRY(memmove) /* o0=dst o1=src o2=len */ 11 - mov %o0, %g1 11 + brz,pn %o2, 99f 12 + mov %o0, %g1 13 + 12 14 cmp %o0, %o1 13 - bleu,pt %xcc, memcpy 15 + bleu,pt %xcc, 2f 14 16 add %o1, %o2, %g7 15 17 cmp %g7, %o0 16 18 bleu,pt %xcc, memcpy ··· 26 24 stb %g7, [%o0] 27 25 bne,pt %icc, 1b 28 26 sub %o0, 1, %o0 29 - 27 + 99: 30 28 retl 31 29 mov %g1, %o0 30 + 31 + /* We can't just call memcpy for these memmove cases. On some 32 + * chips the memcpy uses cache initializing stores and when dst 33 + * and src are close enough, those can clobber the source data 34 + * before we've loaded it in. 35 + */ 36 + 2: or %o0, %o1, %g7 37 + or %o2, %g7, %g7 38 + andcc %g7, 0x7, %g0 39 + bne,pn %xcc, 4f 40 + nop 41 + 42 + 3: ldx [%o1], %g7 43 + add %o1, 8, %o1 44 + subcc %o2, 8, %o2 45 + add %o0, 8, %o0 46 + bne,pt %icc, 3b 47 + stx %g7, [%o0 - 0x8] 48 + ba,a,pt %xcc, 99b 49 + 50 + 4: ldub [%o1], %g7 51 + add %o1, 1, %o1 52 + subcc %o2, 1, %o2 53 + add %o0, 1, %o0 54 + bne,pt %icc, 4b 55 + stb %g7, [%o0 - 0x1] 56 + ba,a,pt %xcc, 99b 32 57 ENDPROC(memmove)
+5 -5
arch/x86/kernel/cpu/perf_event_intel.c
··· 212 212 INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */ 213 213 INTEL_EVENT_CONSTRAINT(0xcd, 0x8), /* MEM_TRANS_RETIRED.LOAD_LATENCY */ 214 214 /* CYCLE_ACTIVITY.CYCLES_L1D_PENDING */ 215 - INTEL_EVENT_CONSTRAINT(0x08a3, 0x4), 215 + INTEL_UEVENT_CONSTRAINT(0x08a3, 0x4), 216 216 /* CYCLE_ACTIVITY.STALLS_L1D_PENDING */ 217 - INTEL_EVENT_CONSTRAINT(0x0ca3, 0x4), 217 + INTEL_UEVENT_CONSTRAINT(0x0ca3, 0x4), 218 218 /* CYCLE_ACTIVITY.CYCLES_NO_EXECUTE */ 219 - INTEL_EVENT_CONSTRAINT(0x04a3, 0xf), 219 + INTEL_UEVENT_CONSTRAINT(0x04a3, 0xf), 220 220 EVENT_CONSTRAINT_END 221 221 }; 222 222 ··· 1649 1649 if (c) 1650 1650 return c; 1651 1651 1652 - c = intel_pebs_constraints(event); 1652 + c = intel_shared_regs_constraints(cpuc, event); 1653 1653 if (c) 1654 1654 return c; 1655 1655 1656 - c = intel_shared_regs_constraints(cpuc, event); 1656 + c = intel_pebs_constraints(event); 1657 1657 if (c) 1658 1658 return c; 1659 1659
+29 -5
arch/x86/kernel/entry_64.S
··· 364 364 * Has incomplete stack frame and undefined top of stack. 365 365 */ 366 366 ret_from_sys_call: 367 - testl $_TIF_ALLWORK_MASK,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET) 368 - jnz int_ret_from_sys_call_fixup /* Go the the slow path */ 369 - 370 367 LOCKDEP_SYS_EXIT 371 368 DISABLE_INTERRUPTS(CLBR_NONE) 372 369 TRACE_IRQS_OFF 370 + 371 + /* 372 + * We must check ti flags with interrupts (or at least preemption) 373 + * off because we must *never* return to userspace without 374 + * processing exit work that is enqueued if we're preempted here. 375 + * In particular, returning to userspace with any of the one-shot 376 + * flags (TIF_NOTIFY_RESUME, TIF_USER_RETURN_NOTIFY, etc) set is 377 + * very bad. 378 + */ 379 + testl $_TIF_ALLWORK_MASK,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET) 380 + jnz int_ret_from_sys_call_fixup /* Go the the slow path */ 381 + 373 382 CFI_REMEMBER_STATE 374 383 /* 375 384 * sysretq will re-enable interrupts: ··· 395 386 396 387 int_ret_from_sys_call_fixup: 397 388 FIXUP_TOP_OF_STACK %r11, -ARGOFFSET 398 - jmp int_ret_from_sys_call 389 + jmp int_ret_from_sys_call_irqs_off 399 390 400 391 /* Do syscall tracing */ 401 392 tracesys: ··· 441 432 GLOBAL(int_ret_from_sys_call) 442 433 DISABLE_INTERRUPTS(CLBR_NONE) 443 434 TRACE_IRQS_OFF 435 + int_ret_from_sys_call_irqs_off: 444 436 movl $_TIF_ALLWORK_MASK,%edi 445 437 /* edi: mask to check */ 446 438 GLOBAL(int_with_check) ··· 799 789 cmpq %r11,(EFLAGS-ARGOFFSET)(%rsp) /* R11 == RFLAGS */ 800 790 jne opportunistic_sysret_failed 801 791 802 - testq $X86_EFLAGS_RF,%r11 /* sysret can't restore RF */ 792 + /* 793 + * SYSRET can't restore RF. SYSRET can restore TF, but unlike IRET, 794 + * restoring TF results in a trap from userspace immediately after 795 + * SYSRET. This would cause an infinite loop whenever #DB happens 796 + * with register state that satisfies the opportunistic SYSRET 797 + * conditions. For example, single-stepping this user code:
798 + * 799 + * movq $stuck_here,%rcx 800 + * pushfq 801 + * popq %r11 802 + * stuck_here: 803 + * 804 + * would never get past 'stuck_here'. 805 + */ 806 + testq $(X86_EFLAGS_RF|X86_EFLAGS_TF), %r11 803 807 jnz opportunistic_sysret_failed 804 808 805 809 /* nothing to check for RSP */
+1 -1
arch/x86/kernel/kgdb.c
··· 72 72 { "bx", 8, offsetof(struct pt_regs, bx) }, 73 73 { "cx", 8, offsetof(struct pt_regs, cx) }, 74 74 { "dx", 8, offsetof(struct pt_regs, dx) }, 75 - { "si", 8, offsetof(struct pt_regs, dx) }, 75 + { "si", 8, offsetof(struct pt_regs, si) }, 76 76 { "di", 8, offsetof(struct pt_regs, di) }, 77 77 { "bp", 8, offsetof(struct pt_regs, bp) }, 78 78 { "sp", 8, offsetof(struct pt_regs, sp) },
+10
arch/x86/kernel/reboot.c
··· 183 183 }, 184 184 }, 185 185 186 + /* ASRock */ 187 + { /* Handle problems with rebooting on ASRock Q1900DC-ITX */ 188 + .callback = set_pci_reboot, 189 + .ident = "ASRock Q1900DC-ITX", 190 + .matches = { 191 + DMI_MATCH(DMI_BOARD_VENDOR, "ASRock"), 192 + DMI_MATCH(DMI_BOARD_NAME, "Q1900DC-ITX"), 193 + }, 194 + }, 195 + 186 196 /* ASUS */ 187 197 { /* Handle problems with rebooting on ASUS P4S800 */ 188 198 .callback = set_bios_reboot,
+3 -1
arch/x86/kvm/ioapic.c
··· 422 422 struct kvm_ioapic *ioapic, int vector, int trigger_mode) 423 423 { 424 424 int i; 425 + struct kvm_lapic *apic = vcpu->arch.apic; 425 426 426 427 for (i = 0; i < IOAPIC_NUM_PINS; i++) { 427 428 union kvm_ioapic_redirect_entry *ent = &ioapic->redirtbl[i]; ··· 444 443 kvm_notify_acked_irq(ioapic->kvm, KVM_IRQCHIP_IOAPIC, i); 445 444 spin_lock(&ioapic->lock); 446 445 447 - if (trigger_mode != IOAPIC_LEVEL_TRIG) 446 + if (trigger_mode != IOAPIC_LEVEL_TRIG || 447 + kvm_apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI) 448 448 continue; 449 449 450 450 ASSERT(ent->fields.trig_mode == IOAPIC_LEVEL_TRIG);
+1 -2
arch/x86/kvm/lapic.c
··· 833 833 834 834 static void kvm_ioapic_send_eoi(struct kvm_lapic *apic, int vector) 835 835 { 836 - if (!(kvm_apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI) && 837 - kvm_ioapic_handles_vector(apic->vcpu->kvm, vector)) { 836 + if (kvm_ioapic_handles_vector(apic->vcpu->kvm, vector)) { 838 837 int trigger_mode; 839 838 if (apic_test_vector(vector, apic->regs + APIC_TMR)) 840 839 trigger_mode = IOAPIC_LEVEL_TRIG;
+5 -2
arch/x86/kvm/vmx.c
··· 2479 2479 if (enable_ept) { 2480 2480 /* nested EPT: emulate EPT also to L1 */ 2481 2481 vmx->nested.nested_vmx_secondary_ctls_high |= 2482 - SECONDARY_EXEC_ENABLE_EPT | 2483 - SECONDARY_EXEC_UNRESTRICTED_GUEST; 2482 + SECONDARY_EXEC_ENABLE_EPT; 2484 2483 vmx->nested.nested_vmx_ept_caps = VMX_EPT_PAGE_WALK_4_BIT | 2485 2484 VMX_EPTP_WB_BIT | VMX_EPT_2MB_PAGE_BIT | 2486 2485 VMX_EPT_INVEPT_BIT; ··· 2492 2493 vmx->nested.nested_vmx_ept_caps |= VMX_EPT_EXTENT_GLOBAL_BIT; 2493 2494 } else 2494 2495 vmx->nested.nested_vmx_ept_caps = 0; 2496 + 2497 + if (enable_unrestricted_guest) 2498 + vmx->nested.nested_vmx_secondary_ctls_high |= 2499 + SECONDARY_EXEC_UNRESTRICTED_GUEST; 2495 2500 2496 2501 /* miscellaneous data */ 2497 2502 rdmsr(MSR_IA32_VMX_MISC,
+9 -1
arch/x86/xen/p2m.c
··· 91 91 unsigned long xen_max_p2m_pfn __read_mostly; 92 92 EXPORT_SYMBOL_GPL(xen_max_p2m_pfn); 93 93 94 + #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT 95 + #define P2M_LIMIT CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT 96 + #else 97 + #define P2M_LIMIT 0 98 + #endif 99 + 94 100 static DEFINE_SPINLOCK(p2m_update_lock); 95 101 96 102 static unsigned long *p2m_mid_missing_mfn; ··· 391 385 void __init xen_vmalloc_p2m_tree(void) 392 386 { 393 387 static struct vm_struct vm; 388 + unsigned long p2m_limit; 394 389 390 + p2m_limit = (phys_addr_t)P2M_LIMIT * 1024 * 1024 * 1024 / PAGE_SIZE; 395 391 vm.flags = VM_ALLOC; 396 - vm.size = ALIGN(sizeof(unsigned long) * xen_max_p2m_pfn, 392 + vm.size = ALIGN(sizeof(unsigned long) * max(xen_max_p2m_pfn, p2m_limit), 397 393 PMD_SIZE * PMDS_PER_MID_PAGE); 398 394 vm_area_register_early(&vm, PMD_SIZE * PMDS_PER_MID_PAGE); 399 395 pr_notice("p2m virtual area at %p, size is %lx\n", vm.addr, vm.size);
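The p2m hunk above sizes the linear p2m array for whichever is larger: the current maximum PFN or the balloon hotplug limit, converted from GiB to a page count. A minimal sketch of that arithmetic, with illustrative names (not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL

/* Convert a hotplug limit given in GiB into a PFN count. The cast to
 * a 64-bit type mirrors the kernel's cast to phys_addr_t, which keeps
 * the multiplication from overflowing on 32-bit. */
static unsigned long p2m_limit_pfns(unsigned long limit_gib)
{
    return (uint64_t)limit_gib * 1024 * 1024 * 1024 / PAGE_SIZE;
}

/* The p2m area must cover max(current max PFN, hotplug limit). */
static unsigned long p2m_entries(unsigned long xen_max_p2m_pfn,
                                 unsigned long limit_gib)
{
    unsigned long limit = p2m_limit_pfns(limit_gib);

    return xen_max_p2m_pfn > limit ? xen_max_p2m_pfn : limit;
}
```

With 4 KiB pages, 1 GiB is 262144 PFNs, so a small guest with a 1 GiB hotplug limit still gets a p2m sized for the limit.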
+1 -1
block/blk-merge.c
··· 592 592 if (q->queue_flags & (1 << QUEUE_FLAG_SG_GAPS)) { 593 593 struct bio_vec *bprev; 594 594 595 - bprev = &rq->biotail->bi_io_vec[bio->bi_vcnt - 1]; 595 + bprev = &rq->biotail->bi_io_vec[rq->biotail->bi_vcnt - 1]; 596 596 if (bvec_gap_to_prev(bprev, bio->bi_io_vec[0].bv_offset)) 597 597 return false; 598 598 }
+4 -2
block/blk-mq-tag.c
··· 278 278 /* 279 279 * We're out of tags on this hardware queue, kick any 280 280 * pending IO submits before going to sleep waiting for 281 - * some to complete. 281 + * some to complete. Note that hctx can be NULL here for 282 + * reserved tag allocation. 282 283 */ 283 - blk_mq_run_hw_queue(hctx, false); 284 + if (hctx) 285 + blk_mq_run_hw_queue(hctx, false); 284 286 285 287 /* 286 288 * Retry tag allocation after running the hardware queue,
+3 -3
block/blk-mq.c
··· 1938 1938 */ 1939 1939 if (percpu_ref_init(&q->mq_usage_counter, blk_mq_usage_counter_release, 1940 1940 PERCPU_REF_INIT_ATOMIC, GFP_KERNEL)) 1941 - goto err_map; 1941 + goto err_mq_usage; 1942 1942 1943 1943 setup_timer(&q->timeout, blk_mq_rq_timer, (unsigned long) q); 1944 1944 blk_queue_rq_timeout(q, 30000); ··· 1981 1981 blk_mq_init_cpu_queues(q, set->nr_hw_queues); 1982 1982 1983 1983 if (blk_mq_init_hw_queues(q, set)) 1984 - goto err_hw; 1984 + goto err_mq_usage; 1985 1985 1986 1986 mutex_lock(&all_q_mutex); 1987 1987 list_add_tail(&q->all_q_node, &all_q_list); ··· 1993 1993 1994 1994 return q; 1995 1995 1996 - err_hw: 1996 + err_mq_usage: 1997 1997 blk_cleanup_queue(q); 1998 1998 err_hctxs: 1999 1999 kfree(map);
+3 -3
block/blk-settings.c
··· 585 585 b->physical_block_size); 586 586 587 587 t->io_min = max(t->io_min, b->io_min); 588 - t->io_opt = lcm(t->io_opt, b->io_opt); 588 + t->io_opt = lcm_not_zero(t->io_opt, b->io_opt); 589 589 590 590 t->cluster &= b->cluster; 591 591 t->discard_zeroes_data &= b->discard_zeroes_data; ··· 616 616 b->raid_partial_stripes_expensive); 617 617 618 618 /* Find lowest common alignment_offset */ 619 - t->alignment_offset = lcm(t->alignment_offset, alignment) 619 + t->alignment_offset = lcm_not_zero(t->alignment_offset, alignment) 620 620 % max(t->physical_block_size, t->io_min); 621 621 622 622 /* Verify that new alignment_offset is on a logical block boundary */ ··· 643 643 b->max_discard_sectors); 644 644 t->discard_granularity = max(t->discard_granularity, 645 645 b->discard_granularity); 646 - t->discard_alignment = lcm(t->discard_alignment, alignment) % 646 + t->discard_alignment = lcm_not_zero(t->discard_alignment, alignment) % 647 647 t->discard_granularity; 648 648 } 649 649
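The blk-settings hunk replaces `lcm()` with `lcm_not_zero()` because `lcm(a, 0)` is 0: stacking a device that reports no `io_opt` would wipe out a valid value from the other side. A minimal sketch of the distinction (in the spirit of `include/linux/lcm.h`, not the kernel's exact code):

```c
#include <assert.h>

static unsigned long gcd(unsigned long a, unsigned long b)
{
    while (b) {
        unsigned long t = a % b;

        a = b;
        b = t;
    }
    return a;
}

/* Plain lcm: collapses to 0 if either argument is 0. */
static unsigned long lcm(unsigned long a, unsigned long b)
{
    if (a && b)
        return (a / gcd(a, b)) * b;
    return 0;
}

/* lcm_not_zero: falls back to the non-zero argument, so an unset
 * (zero) limit on one side of the stack does not erase the other. */
static unsigned long lcm_not_zero(unsigned long a, unsigned long b)
{
    unsigned long l = lcm(a, b);

    if (l)
        return l;
    return (a ? a : b);
}
```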
+15 -4
drivers/ata/libata-core.c
··· 4204 4204 { "PIONEER DVD-RW DVR-216D", NULL, ATA_HORKAGE_NOSETXFER }, 4205 4205 4206 4206 /* devices that don't properly handle queued TRIM commands */ 4207 - { "Micron_M[56]*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4207 + { "Micron_M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4208 4208 ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4209 - { "Crucial_CT*SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, 4209 + { "Crucial_CT*M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4210 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4211 + { "Micron_M5[15]0*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM | 4212 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4213 + { "Crucial_CT*M550*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM | 4214 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4215 + { "Crucial_CT*MX100*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM | 4216 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4217 + { "Samsung SSD 850 PRO*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4218 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4210 4219 4211 4220 /* 4212 4221 * As defined, the DRAT (Deterministic Read After Trim) and RZAT ··· 4235 4226 */ 4236 4227 { "INTEL*SSDSC2MH*", NULL, 0, }, 4237 4228 4229 + { "Micron*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4230 + { "Crucial*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4238 4231 { "INTEL*SSD*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4239 4232 { "SSD*INTEL*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4240 4233 { "Samsung*SSD*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, ··· 4748 4737 return NULL; 4749 4738 4750 4739 /* libsas case */ 4751 - if (!ap->scsi_host) { 4740 + if (ap->flags & ATA_FLAG_SAS_HOST) { 4752 4741 tag = ata_sas_allocate_tag(ap); 4753 4742 if (tag < 0) 4754 4743 return NULL; ··· 4787 4776 tag = qc->tag; 4788 4777 if (likely(ata_tag_valid(tag))) { 4789 4778 qc->tag = ATA_TAG_POISON; 4790 - if (!ap->scsi_host) 4779 + if (ap->flags & ATA_FLAG_SAS_HOST) 4791 4780 ata_sas_free_tag(tag, ap); 4792 4781 } 4793 4782 }
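The horkage entries above are shell-style glob patterns matched against the drive's model string (and, when the second field is non-NULL, its firmware revision). As a rough stand-in for the kernel's `glob_match()`, POSIX `fnmatch(3)` gives close-enough semantics for `*` patterns like these:

```c
#include <assert.h>
#include <fnmatch.h>

/* Stand-in for the kernel's glob_match(): returns non-zero when the
 * model string matches the blacklist pattern. */
static int model_matches(const char *pattern, const char *model)
{
    return fnmatch(pattern, model, 0) == 0;
}
```

So the split of the old `Micron_M[56]*` entry into per-family patterns means, e.g., `Micron_M500*` matches an M500DC but no longer an M550, whose MU01 firmware gets its own entry.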
+8
drivers/base/regmap/internal.h
··· 243 243 extern struct regcache_ops regcache_lzo_ops; 244 244 extern struct regcache_ops regcache_flat_ops; 245 245 246 + static inline const char *regmap_name(const struct regmap *map) 247 + { 248 + if (map->dev) 249 + return dev_name(map->dev); 250 + 251 + return map->name; 252 + } 253 + 246 254 #endif
+8 -8
drivers/base/regmap/regcache.c
··· 218 218 ret = map->cache_ops->read(map, reg, value); 219 219 220 220 if (ret == 0) 221 - trace_regmap_reg_read_cache(map->dev, reg, *value); 221 + trace_regmap_reg_read_cache(map, reg, *value); 222 222 223 223 return ret; 224 224 } ··· 311 311 dev_dbg(map->dev, "Syncing %s cache\n", 312 312 map->cache_ops->name); 313 313 name = map->cache_ops->name; 314 - trace_regcache_sync(map->dev, name, "start"); 314 + trace_regcache_sync(map, name, "start"); 315 315 316 316 if (!map->cache_dirty) 317 317 goto out; ··· 346 346 347 347 regmap_async_complete(map); 348 348 349 - trace_regcache_sync(map->dev, name, "stop"); 349 + trace_regcache_sync(map, name, "stop"); 350 350 351 351 return ret; 352 352 } ··· 381 381 name = map->cache_ops->name; 382 382 dev_dbg(map->dev, "Syncing %s cache from %d-%d\n", name, min, max); 383 383 384 - trace_regcache_sync(map->dev, name, "start region"); 384 + trace_regcache_sync(map, name, "start region"); 385 385 386 386 if (!map->cache_dirty) 387 387 goto out; ··· 401 401 402 402 regmap_async_complete(map); 403 403 404 - trace_regcache_sync(map->dev, name, "stop region"); 404 + trace_regcache_sync(map, name, "stop region"); 405 405 406 406 return ret; 407 407 } ··· 428 428 429 429 map->lock(map->lock_arg); 430 430 431 - trace_regcache_drop_region(map->dev, min, max); 431 + trace_regcache_drop_region(map, min, max); 432 432 433 433 ret = map->cache_ops->drop(map, min, max); 434 434 ··· 455 455 map->lock(map->lock_arg); 456 456 WARN_ON(map->cache_bypass && enable); 457 457 map->cache_only = enable; 458 - trace_regmap_cache_only(map->dev, enable); 458 + trace_regmap_cache_only(map, enable); 459 459 map->unlock(map->lock_arg); 460 460 } 461 461 EXPORT_SYMBOL_GPL(regcache_cache_only); ··· 493 493 map->lock(map->lock_arg); 494 494 WARN_ON(map->cache_only && enable); 495 495 map->cache_bypass = enable; 496 - trace_regmap_cache_bypass(map->dev, enable); 496 + trace_regmap_cache_bypass(map, enable); 497 497 map->unlock(map->lock_arg); 498 498 } 499 
499 EXPORT_SYMBOL_GPL(regcache_cache_bypass);
+14 -18
drivers/base/regmap/regmap.c
··· 1281 1281 if (map->async && map->bus->async_write) { 1282 1282 struct regmap_async *async; 1283 1283 1284 - trace_regmap_async_write_start(map->dev, reg, val_len); 1284 + trace_regmap_async_write_start(map, reg, val_len); 1285 1285 1286 1286 spin_lock_irqsave(&map->async_lock, flags); 1287 1287 async = list_first_entry_or_null(&map->async_free, ··· 1339 1339 return ret; 1340 1340 } 1341 1341 1342 - trace_regmap_hw_write_start(map->dev, reg, 1343 - val_len / map->format.val_bytes); 1342 + trace_regmap_hw_write_start(map, reg, val_len / map->format.val_bytes); 1344 1343 1345 1344 /* If we're doing a single register write we can probably just 1346 1345 * send the work_buf directly, otherwise try to do a gather ··· 1371 1372 kfree(buf); 1372 1373 } 1373 1374 1374 - trace_regmap_hw_write_done(map->dev, reg, 1375 - val_len / map->format.val_bytes); 1375 + trace_regmap_hw_write_done(map, reg, val_len / map->format.val_bytes); 1376 1376 1377 1377 return ret; 1378 1378 } ··· 1405 1407 1406 1408 map->format.format_write(map, reg, val); 1407 1409 1408 - trace_regmap_hw_write_start(map->dev, reg, 1); 1410 + trace_regmap_hw_write_start(map, reg, 1); 1409 1411 1410 1412 ret = map->bus->write(map->bus_context, map->work_buf, 1411 1413 map->format.buf_size); 1412 1414 1413 - trace_regmap_hw_write_done(map->dev, reg, 1); 1415 + trace_regmap_hw_write_done(map, reg, 1); 1414 1416 1415 1417 return ret; 1416 1418 } ··· 1468 1470 dev_info(map->dev, "%x <= %x\n", reg, val); 1469 1471 #endif 1470 1472 1471 - trace_regmap_reg_write(map->dev, reg, val); 1473 + trace_regmap_reg_write(map, reg, val); 1472 1474 1473 1475 return map->reg_write(context, reg, val); 1474 1476 } ··· 1771 1773 for (i = 0; i < num_regs; i++) { 1772 1774 int reg = regs[i].reg; 1773 1775 int val = regs[i].def; 1774 - trace_regmap_hw_write_start(map->dev, reg, 1); 1776 + trace_regmap_hw_write_start(map, reg, 1); 1775 1777 map->format.format_reg(u8, reg, map->reg_shift); 1776 1778 u8 += reg_bytes + pad_bytes; 1777 
1779 map->format.format_val(u8, val, 0); ··· 1786 1788 1787 1789 for (i = 0; i < num_regs; i++) { 1788 1790 int reg = regs[i].reg; 1789 - trace_regmap_hw_write_done(map->dev, reg, 1); 1791 + trace_regmap_hw_write_done(map, reg, 1); 1790 1792 } 1791 1793 return ret; 1792 1794 } ··· 2057 2059 */ 2058 2060 u8[0] |= map->read_flag_mask; 2059 2061 2060 - trace_regmap_hw_read_start(map->dev, reg, 2061 - val_len / map->format.val_bytes); 2062 + trace_regmap_hw_read_start(map, reg, val_len / map->format.val_bytes); 2062 2063 2063 2064 ret = map->bus->read(map->bus_context, map->work_buf, 2064 2065 map->format.reg_bytes + map->format.pad_bytes, 2065 2066 val, val_len); 2066 2067 2067 - trace_regmap_hw_read_done(map->dev, reg, 2068 - val_len / map->format.val_bytes); 2068 + trace_regmap_hw_read_done(map, reg, val_len / map->format.val_bytes); 2069 2069 2070 2070 return ret; 2071 2071 } ··· 2119 2123 dev_info(map->dev, "%x => %x\n", reg, *val); 2120 2124 #endif 2121 2125 2122 - trace_regmap_reg_read(map->dev, reg, *val); 2126 + trace_regmap_reg_read(map, reg, *val); 2123 2127 2124 2128 if (!map->cache_bypass) 2125 2129 regcache_write(map, reg, *val); ··· 2476 2480 struct regmap *map = async->map; 2477 2481 bool wake; 2478 2482 2479 - trace_regmap_async_io_complete(map->dev); 2483 + trace_regmap_async_io_complete(map); 2480 2484 2481 2485 spin_lock(&map->async_lock); 2482 2486 list_move(&async->list, &map->async_free); ··· 2521 2525 if (!map->bus || !map->bus->async_write) 2522 2526 return 0; 2523 2527 2524 - trace_regmap_async_complete_start(map->dev); 2528 + trace_regmap_async_complete_start(map); 2525 2529 2526 2530 wait_event(map->async_waitq, regmap_async_is_done(map)); 2527 2531 ··· 2530 2534 map->async_ret = 0; 2531 2535 spin_unlock_irqrestore(&map->async_lock, flags); 2532 2536 2533 - trace_regmap_async_complete_done(map->dev); 2537 + trace_regmap_async_complete_done(map); 2534 2538 2535 2539 return ret; 2536 2540 }
+4 -4
drivers/block/nbd.c
··· 803 803 return -EINVAL; 804 804 } 805 805 806 - nbd_dev = kcalloc(nbds_max, sizeof(*nbd_dev), GFP_KERNEL); 807 - if (!nbd_dev) 808 - return -ENOMEM; 809 - 810 806 part_shift = 0; 811 807 if (max_part > 0) { 812 808 part_shift = fls(max_part); ··· 823 827 824 828 if (nbds_max > 1UL << (MINORBITS - part_shift)) 825 829 return -EINVAL; 830 + 831 + nbd_dev = kcalloc(nbds_max, sizeof(*nbd_dev), GFP_KERNEL); 832 + if (!nbd_dev) 833 + return -ENOMEM; 826 834 827 835 for (i = 0; i < nbds_max; i++) { 828 836 struct gendisk *disk = alloc_disk(1 << part_shift);
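The nbd hunk moves the `kcalloc()` below the parameter checks so that an invalid `max_part`/`nbds_max` can no longer return `-EINVAL` with the array already allocated and leaked. The pattern, sketched with simplified stand-in checks (not nbd's real validation):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

struct fake_dev { int id; };

static struct fake_dev *devs;

/* Validate everything that can fail with -EINVAL first; allocate only
 * once the remaining paths cannot bail out early, so error returns
 * never leak the array. */
static int fake_init(unsigned long nbds_max, int max_part)
{
    if (max_part < 0)
        return -EINVAL;         /* nothing allocated yet: no leak */
    if (nbds_max == 0 || nbds_max > 1048576)
        return -EINVAL;

    devs = calloc(nbds_max, sizeof(*devs));
    if (!devs)
        return -ENOMEM;
    return 0;
}
```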
+1
drivers/block/nvme-core.c
··· 3003 3003 } 3004 3004 get_device(dev->device); 3005 3005 3006 + INIT_LIST_HEAD(&dev->node); 3006 3007 INIT_WORK(&dev->probe_work, nvme_async_probe); 3007 3008 schedule_work(&dev->probe_work); 3008 3009 return 0;
+3
drivers/clocksource/Kconfig
··· 192 192 config SH_TIMER_CMT 193 193 bool "Renesas CMT timer driver" if COMPILE_TEST 194 194 depends on GENERIC_CLOCKEVENTS 195 + depends on HAS_IOMEM 195 196 default SYS_SUPPORTS_SH_CMT 196 197 help 197 198 This enables build of a clocksource and clockevent driver for ··· 202 201 config SH_TIMER_MTU2 203 202 bool "Renesas MTU2 timer driver" if COMPILE_TEST 204 203 depends on GENERIC_CLOCKEVENTS 204 + depends on HAS_IOMEM 205 205 default SYS_SUPPORTS_SH_MTU2 206 206 help 207 207 This enables build of a clockevent driver for the Multi-Function ··· 212 210 config SH_TIMER_TMU 213 211 bool "Renesas TMU timer driver" if COMPILE_TEST 214 212 depends on GENERIC_CLOCKEVENTS 213 + depends on HAS_IOMEM 215 214 default SYS_SUPPORTS_SH_TMU 216 215 help 217 216 This enables build of a clocksource and clockevent driver for
-7
drivers/clocksource/timer-sun5i.c
··· 17 17 #include <linux/irq.h> 18 18 #include <linux/irqreturn.h> 19 19 #include <linux/reset.h> 20 - #include <linux/sched_clock.h> 21 20 #include <linux/of.h> 22 21 #include <linux/of_address.h> 23 22 #include <linux/of_irq.h> ··· 136 137 .dev_id = &sun5i_clockevent, 137 138 }; 138 139 139 - static u64 sun5i_timer_sched_read(void) 140 - { 141 - return ~readl(timer_base + TIMER_CNTVAL_LO_REG(1)); 142 - } 143 - 144 140 static void __init sun5i_timer_init(struct device_node *node) 145 141 { 146 142 struct reset_control *rstc; ··· 166 172 writel(TIMER_CTL_ENABLE | TIMER_CTL_RELOAD, 167 173 timer_base + TIMER_CTL_REG(1)); 168 174 169 - sched_clock_register(sun5i_timer_sched_read, 32, rate); 170 175 clocksource_mmio_init(timer_base + TIMER_CNTVAL_LO_REG(1), node->name, 171 176 rate, 340, 32, clocksource_mmio_readl_down); 172 177
+1
drivers/dma/bcm2835-dma.c
··· 475 475 * c->desc is NULL and exit.) 476 476 */ 477 477 if (c->desc) { 478 + bcm2835_dma_desc_free(&c->desc->vd); 478 479 c->desc = NULL; 479 480 bcm2835_dma_abort(c->chan_base); 480 481
+7
drivers/dma/dma-jz4740.c
··· 511 511 kfree(container_of(vdesc, struct jz4740_dma_desc, vdesc)); 512 512 } 513 513 514 + #define JZ4740_DMA_BUSWIDTHS (BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \ 515 + BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | BIT(DMA_SLAVE_BUSWIDTH_4_BYTES)) 516 + 514 517 static int jz4740_dma_probe(struct platform_device *pdev) 515 518 { 516 519 struct jz4740_dmaengine_chan *chan; ··· 551 548 dd->device_prep_dma_cyclic = jz4740_dma_prep_dma_cyclic; 552 549 dd->device_config = jz4740_dma_slave_config; 553 550 dd->device_terminate_all = jz4740_dma_terminate_all; 551 + dd->src_addr_widths = JZ4740_DMA_BUSWIDTHS; 552 + dd->dst_addr_widths = JZ4740_DMA_BUSWIDTHS; 553 + dd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); 554 + dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; 554 555 dd->dev = &pdev->dev; 555 556 INIT_LIST_HEAD(&dd->channels); 556 557
+7
drivers/dma/edma.c
··· 260 260 */ 261 261 if (echan->edesc) { 262 262 int cyclic = echan->edesc->cyclic; 263 + 264 + /* 265 + * free the running request descriptor 266 + * since it is not in any of the vdesc lists 267 + */ 268 + edma_desc_free(&echan->edesc->vdesc); 269 + 263 270 echan->edesc = NULL; 264 271 edma_stop(echan->ch_num); 265 272 /* Move the cyclic channel back to default queue */
+3 -1
drivers/dma/moxart-dma.c
··· 193 193 194 194 spin_lock_irqsave(&ch->vc.lock, flags); 195 195 196 - if (ch->desc) 196 + if (ch->desc) { 197 + moxart_dma_desc_free(&ch->desc->vd); 197 198 ch->desc = NULL; 199 + } 198 200 199 201 ctrl = readl(ch->base + REG_OFF_CTRL); 200 202 ctrl &= ~(APB_DMA_ENABLE | APB_DMA_FIN_INT_EN | APB_DMA_ERR_INT_EN);
+1
drivers/dma/omap-dma.c
··· 981 981 * c->desc is NULL and exit.) 982 982 */ 983 983 if (c->desc) { 984 + omap_dma_desc_free(&c->desc->vd); 984 985 c->desc = NULL; 985 986 /* Avoid stopping the dma twice */ 986 987 if (!c->paused)
+7 -15
drivers/firmware/dmi_scan.c
··· 86 86 int i = 0; 87 87 88 88 /* 89 - * Stop when we see all the items the table claimed to have 90 - * OR we run off the end of the table (also happens) 89 + * Stop when we have seen all the items the table claimed to have 90 + * (SMBIOS < 3.0 only) OR we reach an end-of-table marker OR we run 91 + * off the end of the table (should never happen but sometimes does 92 + * on bogus implementations.) 91 93 */ 92 - while ((i < num) && (data - buf + sizeof(struct dmi_header)) <= len) { 94 + while ((!num || i < num) && 95 + (data - buf + sizeof(struct dmi_header)) <= len) { 93 96 const struct dmi_header *dm = (const struct dmi_header *)data; 94 97 95 98 /* ··· 532 529 if (memcmp(buf, "_SM3_", 5) == 0 && 533 530 buf[6] < 32 && dmi_checksum(buf, buf[6])) { 534 531 dmi_ver = get_unaligned_be16(buf + 7); 532 + dmi_num = 0; /* No longer specified */ 535 533 dmi_len = get_unaligned_le32(buf + 12); 536 534 dmi_base = get_unaligned_le64(buf + 16); 537 - 538 - /* 539 - * The 64-bit SMBIOS 3.0 entry point no longer has a field 540 - * containing the number of structures present in the table. 541 - * Instead, it defines the table size as a maximum size, and 542 - * relies on the end-of-table structure type (#127) to be used 543 - * to signal the end of the table. 544 - * So let's define dmi_num as an upper bound as well: each 545 - * structure has a 4 byte header, so dmi_len / 4 is an upper 546 - * bound for the number of structures in the table. 547 - */ 548 - dmi_num = dmi_len / 4; 549 535 550 536 if (dmi_walk_early(dmi_decode) == 0) { 551 537 pr_info("SMBIOS %d.%d present.\n",
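The dmi_scan hunk sets `dmi_num` to 0 for the SMBIOS 3.0 64-bit entry point, which carries no structure count, and teaches the walk loop to treat a zero count as "until the end-of-table marker (type 127) or the buffer runs out". A hedged sketch of that loop over a simplified table layout (4-byte header plus formatted area, then a string-set terminated by a double NUL):

```c
#include <assert.h>
#include <stdint.h>

struct dmi_header {
    uint8_t type;
    uint8_t length;
    uint16_t handle;
};

static int dmi_count_structures(const uint8_t *buf, int len, int num)
{
    const uint8_t *data = buf;
    int i = 0;

    /* num == 0 means "no count given" (SMBIOS >= 3.0): rely on the
     * type-127 marker or the buffer end instead. */
    while ((!num || i < num) &&
           (data - buf + (int)sizeof(struct dmi_header)) <= len) {
        const struct dmi_header *dm = (const struct dmi_header *)data;

        /* skip the formatted area, then the trailing string-set */
        data += dm->length;
        while ((data - buf < len - 1) && (data[0] || data[1]))
            data++;
        data += 2;
        i++;

        if (dm->type == 127)    /* end-of-table marker */
            break;
    }
    return i;
}

/* Demo table: one type-1 structure, then the type-127 terminator. */
static int dmi_demo(int num)
{
    static const uint8_t table[] = {
        1, 4, 0, 0,   0, 0,     /* type 1, empty string-set */
        127, 4, 0, 0, 0, 0,     /* end-of-table marker */
    };

    return dmi_count_structures(table, sizeof(table), num);
}
```

With `num == 0` the walk visits both structures and stops at the marker; with a pre-3.0 count of 1 it stops after the first, as before.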
+1 -1
drivers/gpio/gpio-mpc8xxx.c
··· 334 334 .xlate = irq_domain_xlate_twocell, 335 335 }; 336 336 337 - static struct of_device_id mpc8xxx_gpio_ids[] __initdata = { 337 + static struct of_device_id mpc8xxx_gpio_ids[] = { 338 338 { .compatible = "fsl,mpc8349-gpio", }, 339 339 { .compatible = "fsl,mpc8572-gpio", }, 340 340 { .compatible = "fsl,mpc8610-gpio", },
+1 -1
drivers/gpio/gpio-syscon.c
··· 219 219 ret = of_property_read_u32_index(np, "gpio,syscon-dev", 2, 220 220 &priv->dir_reg_offset); 221 221 if (ret) 222 - dev_err(dev, "can't read the dir register offset!\n"); 222 + dev_dbg(dev, "can't read the dir register offset!\n"); 223 223 224 224 priv->dir_reg_offset <<= 3; 225 225 }
+10
drivers/gpio/gpiolib-acpi.c
··· 201 201 if (!handler) 202 202 return AE_BAD_PARAMETER; 203 203 204 + pin = acpi_gpiochip_pin_to_gpio_offset(chip, pin); 205 + if (pin < 0) 206 + return AE_BAD_PARAMETER; 207 + 204 208 desc = gpiochip_request_own_desc(chip, pin, "ACPI:Event"); 205 209 if (IS_ERR(desc)) { 206 210 dev_err(chip->dev, "Failed to request GPIO\n"); ··· 554 550 struct acpi_gpio_connection *conn; 555 551 struct gpio_desc *desc; 556 552 bool found; 553 + 554 + pin = acpi_gpiochip_pin_to_gpio_offset(chip, pin); 555 + if (pin < 0) { 556 + status = AE_BAD_PARAMETER; 557 + goto out; 558 + } 557 559 558 560 mutex_lock(&achip->conn_lock); 559 561
+1 -12
drivers/gpu/drm/drm_crtc.c
··· 525 525 } 526 526 EXPORT_SYMBOL(drm_framebuffer_reference); 527 527 528 - static void drm_framebuffer_free_bug(struct kref *kref) 529 - { 530 - BUG(); 531 - } 532 - 533 - static void __drm_framebuffer_unreference(struct drm_framebuffer *fb) 534 - { 535 - DRM_DEBUG("%p: FB ID: %d (%d)\n", fb, fb->base.id, atomic_read(&fb->refcount.refcount)); 536 - kref_put(&fb->refcount, drm_framebuffer_free_bug); 537 - } 538 - 539 528 /** 540 529 * drm_framebuffer_unregister_private - unregister a private fb from the lookup idr 541 530 * @fb: fb to unregister ··· 1309 1320 return; 1310 1321 } 1311 1322 /* disconnect the plane from the fb and crtc: */ 1312 - __drm_framebuffer_unreference(plane->old_fb); 1323 + drm_framebuffer_unreference(plane->old_fb); 1313 1324 plane->old_fb = NULL; 1314 1325 plane->fb = NULL; 1315 1326 plane->crtc = NULL;
+1
drivers/gpu/drm/drm_edid_load.c
··· 287 287 288 288 drm_mode_connector_update_edid_property(connector, edid); 289 289 ret = drm_add_edid_modes(connector, edid); 290 + drm_edid_to_eld(connector, edid); 290 291 kfree(edid); 291 292 292 293 return ret;
+1
drivers/gpu/drm/drm_probe_helper.c
··· 174 174 struct edid *edid = (struct edid *) connector->edid_blob_ptr->data; 175 175 176 176 count = drm_add_edid_modes(connector, edid); 177 + drm_edid_to_eld(connector, edid); 177 178 } else 178 179 count = (*connector_funcs->get_modes)(connector); 179 180 }
+5 -3
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 147 147 unsigned int ovl_height; 148 148 unsigned int fb_width; 149 149 unsigned int fb_height; 150 + unsigned int fb_pitch; 150 151 unsigned int bpp; 151 152 unsigned int pixel_format; 152 153 dma_addr_t dma_addr; ··· 533 532 win_data->offset_y = plane->crtc_y; 534 533 win_data->ovl_width = plane->crtc_width; 535 534 win_data->ovl_height = plane->crtc_height; 535 + win_data->fb_pitch = plane->pitch; 536 536 win_data->fb_width = plane->fb_width; 537 537 win_data->fb_height = plane->fb_height; 538 538 win_data->dma_addr = plane->dma_addr[0] + offset; 539 539 win_data->bpp = plane->bpp; 540 540 win_data->pixel_format = plane->pixel_format; 541 - win_data->buf_offsize = (plane->fb_width - plane->crtc_width) * 542 - (plane->bpp >> 3); 541 + win_data->buf_offsize = 542 + plane->pitch - (plane->crtc_width * (plane->bpp >> 3)); 543 543 win_data->line_size = plane->crtc_width * (plane->bpp >> 3); 544 544 545 545 DRM_DEBUG_KMS("offset_x = %d, offset_y = %d\n", ··· 706 704 writel(val, ctx->regs + VIDWx_BUF_START(win, 0)); 707 705 708 706 /* buffer end address */ 709 - size = win_data->fb_width * win_data->ovl_height * (win_data->bpp >> 3); 707 + size = win_data->fb_pitch * win_data->ovl_height * (win_data->bpp >> 3); 710 708 val = (unsigned long)(win_data->dma_addr + size); 711 709 writel(val, ctx->regs + VIDWx_BUF_END(win, 0)); 712 710
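The fimd change computes `buf_offsize` from the framebuffer pitch rather than its width, since the pitch in bytes may include alignment padding beyond `fb_width * bytes_per_pixel`. The address math, sketched with names that only loosely mirror the driver's:

```c
#include <assert.h>

/* Bytes to skip at the end of each displayed line: the gap between
 * the pitch (true per-scanline stride) and the visible width. */
static unsigned int buf_offsize(unsigned int pitch,
                                unsigned int crtc_width,
                                unsigned int bpp)
{
    return pitch - crtc_width * (bpp >> 3);
}

/* Byte offset of pixel (x, y) in a pitched framebuffer: rows advance
 * by the pitch, not by width * bytes_per_pixel. */
static unsigned long pixel_offset(unsigned int pitch, unsigned int bpp,
                                  unsigned int x, unsigned int y)
{
    return (unsigned long)y * pitch + x * (bpp >> 3);
}
```

For a 1020-pixel-wide XRGB8888 scanout in a buffer padded to a 4096-byte pitch, 16 bytes of padding must be skipped per line; the old `(fb_width - crtc_width) * bpp/8` formula would have computed 0 and sheared the image.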
+10 -7
drivers/gpu/drm/exynos/exynos_mixer.c
··· 55 55 unsigned int fb_x; 56 56 unsigned int fb_y; 57 57 unsigned int fb_width; 58 + unsigned int fb_pitch; 58 59 unsigned int fb_height; 59 60 unsigned int src_width; 60 61 unsigned int src_height; ··· 439 438 } else { 440 439 luma_addr[0] = win_data->dma_addr; 441 440 chroma_addr[0] = win_data->dma_addr 442 - + (win_data->fb_width * win_data->fb_height); 441 + + (win_data->fb_pitch * win_data->fb_height); 443 442 } 444 443 445 444 if (win_data->scan_flags & DRM_MODE_FLAG_INTERLACE) { ··· 448 447 luma_addr[1] = luma_addr[0] + 0x40; 449 448 chroma_addr[1] = chroma_addr[0] + 0x40; 450 449 } else { 451 - luma_addr[1] = luma_addr[0] + win_data->fb_width; 452 - chroma_addr[1] = chroma_addr[0] + win_data->fb_width; 450 + luma_addr[1] = luma_addr[0] + win_data->fb_pitch; 451 + chroma_addr[1] = chroma_addr[0] + win_data->fb_pitch; 453 452 } 454 453 } else { 455 454 ctx->interlace = false; ··· 470 469 vp_reg_writemask(res, VP_MODE, val, VP_MODE_FMT_MASK); 471 470 472 471 /* setting size of input image */ 473 - vp_reg_write(res, VP_IMG_SIZE_Y, VP_IMG_HSIZE(win_data->fb_width) | 472 + vp_reg_write(res, VP_IMG_SIZE_Y, VP_IMG_HSIZE(win_data->fb_pitch) | 474 473 VP_IMG_VSIZE(win_data->fb_height)); 475 474 /* chroma height has to reduced by 2 to avoid chroma distorions */ 476 - vp_reg_write(res, VP_IMG_SIZE_C, VP_IMG_HSIZE(win_data->fb_width) | 475 + vp_reg_write(res, VP_IMG_SIZE_C, VP_IMG_HSIZE(win_data->fb_pitch) | 477 476 VP_IMG_VSIZE(win_data->fb_height / 2)); 478 477 479 478 vp_reg_write(res, VP_SRC_WIDTH, win_data->src_width); ··· 560 559 /* converting dma address base and source offset */ 561 560 dma_addr = win_data->dma_addr 562 561 + (win_data->fb_x * win_data->bpp >> 3) 563 - + (win_data->fb_y * win_data->fb_width * win_data->bpp >> 3); 562 + + (win_data->fb_y * win_data->fb_pitch); 564 563 src_x_offset = 0; 565 564 src_y_offset = 0; 566 565 ··· 577 576 MXR_GRP_CFG_FORMAT_VAL(fmt), MXR_GRP_CFG_FORMAT_MASK); 578 577 579 578 /* setup geometry */ 580 - 
mixer_reg_write(res, MXR_GRAPHIC_SPAN(win), win_data->fb_width); 579 + mixer_reg_write(res, MXR_GRAPHIC_SPAN(win), 580 + win_data->fb_pitch / (win_data->bpp >> 3)); 581 581 582 582 /* setup display size */ 583 583 if (ctx->mxr_ver == MXR_VER_128_0_0_184 && ··· 963 961 win_data->fb_y = plane->fb_y; 964 962 win_data->fb_width = plane->fb_width; 965 963 win_data->fb_height = plane->fb_height; 964 + win_data->fb_pitch = plane->pitch; 966 965 win_data->src_width = plane->src_width; 967 966 win_data->src_height = plane->src_height; 968 967
+21 -17
drivers/gpu/drm/i915/i915_gem.c
··· 2737 2737 2738 2738 WARN_ON(i915_verify_lists(ring->dev)); 2739 2739 2740 - /* Move any buffers on the active list that are no longer referenced 2741 - * by the ringbuffer to the flushing/inactive lists as appropriate, 2742 - * before we free the context associated with the requests. 2740 + /* Retire requests first as we use it above for the early return. 2741 + * If we retire requests last, we may use a later seqno and so clear 2742 + * the requests lists without clearing the active list, leading to 2743 + * confusion. 2743 2744 */ 2744 - while (!list_empty(&ring->active_list)) { 2745 - struct drm_i915_gem_object *obj; 2746 - 2747 - obj = list_first_entry(&ring->active_list, 2748 - struct drm_i915_gem_object, 2749 - ring_list); 2750 - 2751 - if (!i915_gem_request_completed(obj->last_read_req, true)) 2752 - break; 2753 - 2754 - i915_gem_object_move_to_inactive(obj); 2755 - } 2756 - 2757 - 2758 2745 while (!list_empty(&ring->request_list)) { 2759 2746 struct drm_i915_gem_request *request; 2760 2747 struct intel_ringbuffer *ringbuf; ··· 2774 2787 ringbuf->last_retired_head = request->postfix; 2775 2788 2776 2789 i915_gem_free_request(request); 2790 + } 2791 + 2792 + /* Move any buffers on the active list that are no longer referenced 2793 + * by the ringbuffer to the flushing/inactive lists as appropriate, 2794 + * before we free the context associated with the requests. 2795 + */ 2796 + while (!list_empty(&ring->active_list)) { 2797 + struct drm_i915_gem_object *obj; 2798 + 2799 + obj = list_first_entry(&ring->active_list, 2800 + struct drm_i915_gem_object, 2801 + ring_list); 2802 + 2803 + if (!i915_gem_request_completed(obj->last_read_req, true)) 2804 + break; 2805 + 2806 + i915_gem_object_move_to_inactive(obj); 2777 2807 } 2778 2808 2779 2809 if (unlikely(ring->trace_irq_req &&
+1 -1
drivers/gpu/drm/i915/i915_gem_execbuffer.c
··· 1487 1487 goto err; 1488 1488 } 1489 1489 1490 - if (i915_needs_cmd_parser(ring)) { 1490 + if (i915_needs_cmd_parser(ring) && args->batch_len) { 1491 1491 batch_obj = i915_gem_execbuffer_parse(ring, 1492 1492 &shadow_exec_entry, 1493 1493 eb,
+13 -5
drivers/gpu/drm/i915/intel_display.c
··· 2438 2438 if (!intel_crtc->base.primary->fb) 2439 2439 return; 2440 2440 2441 - if (intel_alloc_plane_obj(intel_crtc, plane_config)) 2441 + if (intel_alloc_plane_obj(intel_crtc, plane_config)) { 2442 + struct drm_plane *primary = intel_crtc->base.primary; 2443 + 2444 + primary->state->crtc = &intel_crtc->base; 2445 + primary->crtc = &intel_crtc->base; 2446 + update_state_fb(primary); 2447 + 2442 2448 return; 2449 + } 2443 2450 2444 2451 kfree(intel_crtc->base.primary->fb); 2445 2452 intel_crtc->base.primary->fb = NULL; ··· 2469 2462 continue; 2470 2463 2471 2464 if (i915_gem_obj_ggtt_offset(obj) == plane_config->base) { 2465 + struct drm_plane *primary = intel_crtc->base.primary; 2466 + 2472 2467 if (obj->tiling_mode != I915_TILING_NONE) 2473 2468 dev_priv->preserve_bios_swizzle = true; 2474 2469 2475 2470 drm_framebuffer_reference(c->primary->fb); 2476 - intel_crtc->base.primary->fb = c->primary->fb; 2471 + primary->fb = c->primary->fb; 2472 + primary->state->crtc = &intel_crtc->base; 2473 + primary->crtc = &intel_crtc->base; 2477 2474 obj->frontbuffer_bits |= INTEL_FRONTBUFFER_PRIMARY(intel_crtc->pipe); 2478 2475 break; 2479 2476 } ··· 6674 6663 plane_config->size); 6675 6664 6676 6665 crtc->base.primary->fb = fb; 6677 - update_state_fb(crtc->base.primary); 6678 6666 } 6679 6667 6680 6668 static void chv_crtc_clock_get(struct intel_crtc *crtc, ··· 7714 7704 plane_config->size); 7715 7705 7716 7706 crtc->base.primary->fb = fb; 7717 - update_state_fb(crtc->base.primary); 7718 7707 return; 7719 7708 7720 7709 error: ··· 7807 7798 plane_config->size); 7808 7799 7809 7800 crtc->base.primary->fb = fb; 7810 - update_state_fb(crtc->base.primary); 7811 7801 } 7812 7802 7813 7803 static bool ironlake_get_pipe_config(struct intel_crtc *crtc,
+2 -2
drivers/gpu/drm/i915/intel_sprite.c
··· 1322 1322 drm_modeset_lock_all(dev); 1323 1323 1324 1324 plane = drm_plane_find(dev, set->plane_id); 1325 - if (!plane) { 1325 + if (!plane || plane->type != DRM_PLANE_TYPE_OVERLAY) { 1326 1326 ret = -ENOENT; 1327 1327 goto out_unlock; 1328 1328 } ··· 1349 1349 drm_modeset_lock_all(dev); 1350 1350 1351 1351 plane = drm_plane_find(dev, get->plane_id); 1352 - if (!plane) { 1352 + if (!plane || plane->type != DRM_PLANE_TYPE_OVERLAY) { 1353 1353 ret = -ENOENT; 1354 1354 goto out_unlock; 1355 1355 }
+1
drivers/gpu/drm/radeon/cikd.h
··· 2129 2129 #define VCE_UENC_REG_CLOCK_GATING 0x207c0 2130 2130 #define VCE_SYS_INT_EN 0x21300 2131 2131 # define VCE_SYS_INT_TRAP_INTERRUPT_EN (1 << 3) 2132 + #define VCE_LMI_VCPU_CACHE_40BIT_BAR 0x2145c 2132 2133 #define VCE_LMI_CTRL2 0x21474 2133 2134 #define VCE_LMI_CTRL 0x21498 2134 2135 #define VCE_LMI_VM_CTRL 0x214a0
+1
drivers/gpu/drm/radeon/radeon.h
··· 1565 1565 int new_active_crtc_count; 1566 1566 u32 current_active_crtcs; 1567 1567 int current_active_crtc_count; 1568 + bool single_display; 1568 1569 struct radeon_dpm_dynamic_state dyn_state; 1569 1570 struct radeon_dpm_fan fan; 1570 1571 u32 tdp_limit;
+7 -3
drivers/gpu/drm/radeon/radeon_bios.c
··· 76 76 77 77 static bool radeon_read_bios(struct radeon_device *rdev) 78 78 { 79 - uint8_t __iomem *bios; 79 + uint8_t __iomem *bios, val1, val2; 80 80 size_t size; 81 81 82 82 rdev->bios = NULL; ··· 86 86 return false; 87 87 } 88 88 89 - if (size == 0 || bios[0] != 0x55 || bios[1] != 0xaa) { 89 + val1 = readb(&bios[0]); 90 + val2 = readb(&bios[1]); 91 + 92 + if (size == 0 || val1 != 0x55 || val2 != 0xaa) { 90 93 pci_unmap_rom(rdev->pdev, bios); 91 94 return false; 92 95 } 93 - rdev->bios = kmemdup(bios, size, GFP_KERNEL); 96 + rdev->bios = kzalloc(size, GFP_KERNEL); 94 97 if (rdev->bios == NULL) { 95 98 pci_unmap_rom(rdev->pdev, bios); 96 99 return false; 97 100 } 101 + memcpy_fromio(rdev->bios, bios, size); 98 102 pci_unmap_rom(rdev->pdev, bios); 99 103 return true; 100 104 }
+4 -7
drivers/gpu/drm/radeon/radeon_mn.c
··· 122 122 it = interval_tree_iter_first(&rmn->objects, start, end); 123 123 while (it) { 124 124 struct radeon_bo *bo; 125 - struct fence *fence; 126 125 int r; 127 126 128 127 bo = container_of(it, struct radeon_bo, mn_it); ··· 133 134 continue; 134 135 } 135 136 136 - fence = reservation_object_get_excl(bo->tbo.resv); 137 - if (fence) { 138 - r = radeon_fence_wait((struct radeon_fence *)fence, false); 139 - if (r) 140 - DRM_ERROR("(%d) failed to wait for user bo\n", r); 141 - } 137 + r = reservation_object_wait_timeout_rcu(bo->tbo.resv, true, 138 + false, MAX_SCHEDULE_TIMEOUT); 139 + if (r) 140 + DRM_ERROR("(%d) failed to wait for user bo\n", r); 142 141 143 142 radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU); 144 143 r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
+17 -5
drivers/gpu/drm/radeon/radeon_pm.c
··· 837 837 radeon_pm_compute_clocks(rdev); 838 838 } 839 839 840 - static struct radeon_ps *radeon_dpm_pick_power_state(struct radeon_device *rdev, 841 - enum radeon_pm_state_type dpm_state) 840 + static bool radeon_dpm_single_display(struct radeon_device *rdev) 842 841 { 843 - int i; 844 - struct radeon_ps *ps; 845 - u32 ui_class; 846 842 bool single_display = (rdev->pm.dpm.new_active_crtc_count < 2) ? 847 843 true : false; 848 844 ··· 853 857 */ 854 858 if (single_display && (r600_dpm_get_vrefresh(rdev) >= 120)) 855 859 single_display = false; 860 + 861 + return single_display; 862 + } 863 + 864 + static struct radeon_ps *radeon_dpm_pick_power_state(struct radeon_device *rdev, 865 + enum radeon_pm_state_type dpm_state) 866 + { 867 + int i; 868 + struct radeon_ps *ps; 869 + u32 ui_class; 870 + bool single_display = radeon_dpm_single_display(rdev); 856 871 857 872 /* certain older asics have a separare 3D performance state, 858 873 * so try that first if the user selected performance ··· 990 983 struct radeon_ps *ps; 991 984 enum radeon_pm_state_type dpm_state; 992 985 int ret; 986 + bool single_display = radeon_dpm_single_display(rdev); 993 987 994 988 /* if dpm init failed */ 995 989 if (!rdev->pm.dpm_enabled) ··· 1014 1006 if (rdev->pm.dpm.current_ps == rdev->pm.dpm.requested_ps) { 1015 1007 /* vce just modifies an existing state so force a change */ 1016 1008 if (ps->vce_active != rdev->pm.dpm.vce_active) 1009 + goto force; 1010 + /* user has made a display change (such as timing) */ 1011 + if (rdev->pm.dpm.single_display != single_display) 1017 1012 goto force; 1018 1013 if ((rdev->family < CHIP_BARTS) || (rdev->flags & RADEON_IS_IGP)) { 1019 1014 /* for pre-BTC and APUs if the num crtcs changed but state is the same, ··· 1080 1069 1081 1070 rdev->pm.dpm.current_active_crtcs = rdev->pm.dpm.new_active_crtcs; 1082 1071 rdev->pm.dpm.current_active_crtc_count = rdev->pm.dpm.new_active_crtc_count; 1072 + rdev->pm.dpm.single_display = single_display; 1083 1073 1084 
1074 /* wait for the rings to drain */ 1085 1075 for (i = 0; i < RADEON_NUM_RINGS; i++) {
+1 -1
drivers/gpu/drm/radeon/radeon_ring.c
··· 495 495 seq_printf(m, "%u free dwords in ring\n", ring->ring_free_dw); 496 496 seq_printf(m, "%u dwords in ring\n", count); 497 497 498 - if (!ring->ready) 498 + if (!ring->ring) 499 499 return 0; 500 500 501 501 /* print 8 dw before current rptr as often it's the last executed
+4
drivers/gpu/drm/radeon/radeon_ttm.c
··· 598 598 enum dma_data_direction direction = write ? 599 599 DMA_BIDIRECTIONAL : DMA_TO_DEVICE; 600 600 601 + /* double check that we don't free the table twice */ 602 + if (!ttm->sg->sgl) 603 + return; 604 + 601 605 /* free the sg table and pages again */ 602 606 dma_unmap_sg(rdev->dev, ttm->sg->sgl, ttm->sg->nents, direction); 603 607
+3
drivers/gpu/drm/radeon/vce_v2_0.c
··· 156 156 WREG32(VCE_LMI_SWAP_CNTL1, 0); 157 157 WREG32(VCE_LMI_VM_CTRL, 0); 158 158 159 + WREG32(VCE_LMI_VCPU_CACHE_40BIT_BAR, addr >> 8); 160 + 161 + addr &= 0xff; 159 162 size = RADEON_GPU_PAGE_ALIGN(rdev->vce_fw->size); 160 163 WREG32(VCE_VCPU_CACHE_OFFSET0, addr & 0x7fffffff); 161 164 WREG32(VCE_VCPU_CACHE_SIZE0, size);
+1 -1
drivers/iio/accel/bma180.c
··· 659 659 660 660 mutex_lock(&data->mutex); 661 661 662 - for_each_set_bit(bit, indio_dev->buffer->scan_mask, 662 + for_each_set_bit(bit, indio_dev->active_scan_mask, 663 663 indio_dev->masklength) { 664 664 ret = bma180_get_data_reg(data, bit); 665 665 if (ret < 0) {
+10 -10
drivers/iio/accel/bmc150-accel.c
··· 168 168 int val; 169 169 int val2; 170 170 u8 bw_bits; 171 - } bmc150_accel_samp_freq_table[] = { {7, 810000, 0x08}, 172 - {15, 630000, 0x09}, 173 - {31, 250000, 0x0A}, 174 - {62, 500000, 0x0B}, 175 - {125, 0, 0x0C}, 176 - {250, 0, 0x0D}, 177 - {500, 0, 0x0E}, 178 - {1000, 0, 0x0F} }; 171 + } bmc150_accel_samp_freq_table[] = { {15, 620000, 0x08}, 172 + {31, 260000, 0x09}, 173 + {62, 500000, 0x0A}, 174 + {125, 0, 0x0B}, 175 + {250, 0, 0x0C}, 176 + {500, 0, 0x0D}, 177 + {1000, 0, 0x0E}, 178 + {2000, 0, 0x0F} }; 179 179 180 180 static const struct { 181 181 int bw_bits; ··· 840 840 } 841 841 842 842 static IIO_CONST_ATTR_SAMP_FREQ_AVAIL( 843 - "7.810000 15.630000 31.250000 62.500000 125 250 500 1000"); 843 + "15.620000 31.260000 62.50000 125 250 500 1000 2000"); 844 844 845 845 static struct attribute *bmc150_accel_attributes[] = { 846 846 &iio_const_attr_sampling_frequency_available.dev_attr.attr, ··· 986 986 int bit, ret, i = 0; 987 987 988 988 mutex_lock(&data->mutex); 989 - for_each_set_bit(bit, indio_dev->buffer->scan_mask, 989 + for_each_set_bit(bit, indio_dev->active_scan_mask, 990 990 indio_dev->masklength) { 991 991 ret = i2c_smbus_read_word_data(data->client, 992 992 BMC150_ACCEL_AXIS_TO_REG(bit));
+1 -1
drivers/iio/accel/kxcjk-1013.c
··· 956 956 957 957 mutex_lock(&data->mutex); 958 958 959 - for_each_set_bit(bit, indio_dev->buffer->scan_mask, 959 + for_each_set_bit(bit, indio_dev->active_scan_mask, 960 960 indio_dev->masklength) { 961 961 ret = kxcjk1013_get_acc_reg(data, bit); 962 962 if (ret < 0) {
+2 -1
drivers/iio/adc/Kconfig
··· 137 137 138 138 config CC10001_ADC 139 139 tristate "Cosmic Circuits 10001 ADC driver" 140 - depends on HAS_IOMEM || HAVE_CLK || REGULATOR 140 + depends on HAVE_CLK || REGULATOR 141 + depends on HAS_IOMEM 141 142 select IIO_BUFFER 142 143 select IIO_TRIGGERED_BUFFER 143 144 help
+2 -3
drivers/iio/adc/at91_adc.c
··· 544 544 { 545 545 struct iio_dev *idev = iio_trigger_get_drvdata(trig); 546 546 struct at91_adc_state *st = iio_priv(idev); 547 - struct iio_buffer *buffer = idev->buffer; 548 547 struct at91_adc_reg_desc *reg = st->registers; 549 548 u32 status = at91_adc_readl(st, reg->trigger_register); 550 549 int value; ··· 563 564 at91_adc_writel(st, reg->trigger_register, 564 565 status | value); 565 566 566 - for_each_set_bit(bit, buffer->scan_mask, 567 + for_each_set_bit(bit, idev->active_scan_mask, 567 568 st->num_channels) { 568 569 struct iio_chan_spec const *chan = idev->channels + bit; 569 570 at91_adc_writel(st, AT91_ADC_CHER, ··· 578 579 at91_adc_writel(st, reg->trigger_register, 579 580 status & ~value); 580 581 581 - for_each_set_bit(bit, buffer->scan_mask, 582 + for_each_set_bit(bit, idev->active_scan_mask, 582 583 st->num_channels) { 583 584 struct iio_chan_spec const *chan = idev->channels + bit; 584 585 at91_adc_writel(st, AT91_ADC_CHDR,
+1 -2
drivers/iio/adc/ti_am335x_adc.c
··· 188 188 static int tiadc_buffer_postenable(struct iio_dev *indio_dev) 189 189 { 190 190 struct tiadc_device *adc_dev = iio_priv(indio_dev); 191 - struct iio_buffer *buffer = indio_dev->buffer; 192 191 unsigned int enb = 0; 193 192 u8 bit; 194 193 195 194 tiadc_step_config(indio_dev); 196 - for_each_set_bit(bit, buffer->scan_mask, adc_dev->channels) 195 + for_each_set_bit(bit, indio_dev->active_scan_mask, adc_dev->channels) 197 196 enb |= (get_adc_step_bit(adc_dev, bit) << 1); 198 197 adc_dev->buffer_en_ch_steps = enb; 199 198
+61 -30
drivers/iio/adc/vf610_adc.c
··· 141 141 struct regulator *vref; 142 142 struct vf610_adc_feature adc_feature; 143 143 144 + u32 sample_freq_avail[5]; 145 + 144 146 struct completion completion; 145 147 }; 148 + 149 + static const u32 vf610_hw_avgs[] = { 1, 4, 8, 16, 32 }; 146 150 147 151 #define VF610_ADC_CHAN(_idx, _chan_type) { \ 148 152 .type = (_chan_type), \ ··· 184 180 /* sentinel */ 185 181 }; 186 182 187 - /* 188 - * ADC sample frequency, unit is ADCK cycles. 189 - * ADC clk source is ipg clock, which is the same as bus clock. 190 - * 191 - * ADC conversion time = SFCAdder + AverageNum x (BCT + LSTAdder) 192 - * SFCAdder: fixed to 6 ADCK cycles 193 - * AverageNum: 1, 4, 8, 16, 32 samples for hardware average. 194 - * BCT (Base Conversion Time): fixed to 25 ADCK cycles for 12 bit mode 195 - * LSTAdder(Long Sample Time): fixed to 3 ADCK cycles 196 - * 197 - * By default, enable 12 bit resolution mode, clock source 198 - * set to ipg clock, So get below frequency group: 199 - */ 200 - static const u32 vf610_sample_freq_avail[5] = 201 - {1941176, 559332, 286957, 145374, 73171}; 183 + static inline void vf610_adc_calculate_rates(struct vf610_adc *info) 184 + { 185 + unsigned long adck_rate, ipg_rate = clk_get_rate(info->clk); 186 + int i; 187 + 188 + /* 189 + * Calculate ADC sample frequencies 190 + * Sample time unit is ADCK cycles. ADCK clk source is ipg clock, 191 + * which is the same as bus clock. 192 + * 193 + * ADC conversion time = SFCAdder + AverageNum x (BCT + LSTAdder) 194 + * SFCAdder: fixed to 6 ADCK cycles 195 + * AverageNum: 1, 4, 8, 16, 32 samples for hardware average. 
196 + * BCT (Base Conversion Time): fixed to 25 ADCK cycles for 12 bit mode 197 + * LSTAdder(Long Sample Time): fixed to 3 ADCK cycles 198 + */ 199 + adck_rate = ipg_rate / info->adc_feature.clk_div; 200 + for (i = 0; i < ARRAY_SIZE(vf610_hw_avgs); i++) 201 + info->sample_freq_avail[i] = 202 + adck_rate / (6 + vf610_hw_avgs[i] * (25 + 3)); 203 + } 202 204 203 205 static inline void vf610_adc_cfg_init(struct vf610_adc *info) 204 206 { 207 + struct vf610_adc_feature *adc_feature = &info->adc_feature; 208 + 205 209 /* set default Configuration for ADC controller */ 206 - info->adc_feature.clk_sel = VF610_ADCIOC_BUSCLK_SET; 207 - info->adc_feature.vol_ref = VF610_ADCIOC_VR_VREF_SET; 210 + adc_feature->clk_sel = VF610_ADCIOC_BUSCLK_SET; 211 + adc_feature->vol_ref = VF610_ADCIOC_VR_VREF_SET; 208 212 209 - info->adc_feature.calibration = true; 210 - info->adc_feature.ovwren = true; 213 + adc_feature->calibration = true; 214 + adc_feature->ovwren = true; 211 215 212 - info->adc_feature.clk_div = 1; 213 - info->adc_feature.res_mode = 12; 214 - info->adc_feature.sample_rate = 1; 215 - info->adc_feature.lpm = true; 216 + adc_feature->res_mode = 12; 217 + adc_feature->sample_rate = 1; 218 + adc_feature->lpm = true; 219 + 220 + /* Use a save ADCK which is below 20MHz on all devices */ 221 + adc_feature->clk_div = 8; 222 + 223 + vf610_adc_calculate_rates(info); 216 224 } 217 225 218 226 static void vf610_adc_cfg_post_set(struct vf610_adc *info) ··· 306 290 307 291 cfg_data = readl(info->regs + VF610_REG_ADC_CFG); 308 292 309 - /* low power configuration */ 310 293 cfg_data &= ~VF610_ADC_ADLPC_EN; 311 294 if (adc_feature->lpm) 312 295 cfg_data |= VF610_ADC_ADLPC_EN; 313 296 314 - /* disable high speed */ 315 297 cfg_data &= ~VF610_ADC_ADHSC_EN; 316 298 317 299 writel(cfg_data, info->regs + VF610_REG_ADC_CFG); ··· 449 435 return IRQ_HANDLED; 450 436 } 451 437 452 - static IIO_CONST_ATTR_SAMP_FREQ_AVAIL("1941176, 559332, 286957, 145374, 73171"); 438 + static ssize_t 
vf610_show_samp_freq_avail(struct device *dev, 439 + struct device_attribute *attr, char *buf) 440 + { 441 + struct vf610_adc *info = iio_priv(dev_to_iio_dev(dev)); 442 + size_t len = 0; 443 + int i; 444 + 445 + for (i = 0; i < ARRAY_SIZE(info->sample_freq_avail); i++) 446 + len += scnprintf(buf + len, PAGE_SIZE - len, 447 + "%u ", info->sample_freq_avail[i]); 448 + 449 + /* replace trailing space by newline */ 450 + buf[len - 1] = '\n'; 451 + 452 + return len; 453 + } 454 + 455 + static IIO_DEV_ATTR_SAMP_FREQ_AVAIL(vf610_show_samp_freq_avail); 453 456 454 457 static struct attribute *vf610_attributes[] = { 455 - &iio_const_attr_sampling_frequency_available.dev_attr.attr, 458 + &iio_dev_attr_sampling_frequency_available.dev_attr.attr, 456 459 NULL 457 460 }; 458 461 ··· 533 502 return IIO_VAL_FRACTIONAL_LOG2; 534 503 535 504 case IIO_CHAN_INFO_SAMP_FREQ: 536 - *val = vf610_sample_freq_avail[info->adc_feature.sample_rate]; 505 + *val = info->sample_freq_avail[info->adc_feature.sample_rate]; 537 506 *val2 = 0; 538 507 return IIO_VAL_INT; 539 508 ··· 556 525 switch (mask) { 557 526 case IIO_CHAN_INFO_SAMP_FREQ: 558 527 for (i = 0; 559 - i < ARRAY_SIZE(vf610_sample_freq_avail); 528 + i < ARRAY_SIZE(info->sample_freq_avail); 560 529 i++) 561 - if (val == vf610_sample_freq_avail[i]) { 530 + if (val == info->sample_freq_avail[i]) { 562 531 info->adc_feature.sample_rate = i; 563 532 vf610_adc_sample_set(info); 564 533 return 0;
+1 -1
drivers/iio/gyro/bmg160.c
··· 822 822 int bit, ret, i = 0; 823 823 824 824 mutex_lock(&data->mutex); 825 - for_each_set_bit(bit, indio_dev->buffer->scan_mask, 825 + for_each_set_bit(bit, indio_dev->active_scan_mask, 826 826 indio_dev->masklength) { 827 827 ret = i2c_smbus_read_word_data(data->client, 828 828 BMG160_AXIS_TO_REG(bit));
+1 -1
drivers/iio/imu/adis_trigger.c
··· 60 60 iio_trigger_set_drvdata(adis->trig, adis); 61 61 ret = iio_trigger_register(adis->trig); 62 62 63 - indio_dev->trig = adis->trig; 63 + indio_dev->trig = iio_trigger_get(adis->trig); 64 64 if (ret) 65 65 goto error_free_irq; 66 66
+30 -26
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
··· 410 410 } 411 411 } 412 412 413 - static int inv_mpu6050_write_fsr(struct inv_mpu6050_state *st, int fsr) 413 + static int inv_mpu6050_write_gyro_scale(struct inv_mpu6050_state *st, int val) 414 414 { 415 - int result; 415 + int result, i; 416 416 u8 d; 417 417 418 - if (fsr < 0 || fsr > INV_MPU6050_MAX_GYRO_FS_PARAM) 419 - return -EINVAL; 420 - if (fsr == st->chip_config.fsr) 421 - return 0; 418 + for (i = 0; i < ARRAY_SIZE(gyro_scale_6050); ++i) { 419 + if (gyro_scale_6050[i] == val) { 420 + d = (i << INV_MPU6050_GYRO_CONFIG_FSR_SHIFT); 421 + result = inv_mpu6050_write_reg(st, 422 + st->reg->gyro_config, d); 423 + if (result) 424 + return result; 422 425 423 - d = (fsr << INV_MPU6050_GYRO_CONFIG_FSR_SHIFT); 424 - result = inv_mpu6050_write_reg(st, st->reg->gyro_config, d); 425 - if (result) 426 - return result; 427 - st->chip_config.fsr = fsr; 426 + st->chip_config.fsr = i; 427 + return 0; 428 + } 429 + } 428 430 429 - return 0; 431 + return -EINVAL; 430 432 } 431 433 432 - static int inv_mpu6050_write_accel_fs(struct inv_mpu6050_state *st, int fs) 434 + static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val) 433 435 { 434 - int result; 436 + int result, i; 435 437 u8 d; 436 438 437 - if (fs < 0 || fs > INV_MPU6050_MAX_ACCL_FS_PARAM) 438 - return -EINVAL; 439 - if (fs == st->chip_config.accl_fs) 440 - return 0; 439 + for (i = 0; i < ARRAY_SIZE(accel_scale); ++i) { 440 + if (accel_scale[i] == val) { 441 + d = (i << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT); 442 + result = inv_mpu6050_write_reg(st, 443 + st->reg->accl_config, d); 444 + if (result) 445 + return result; 441 446 442 - d = (fs << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT); 443 - result = inv_mpu6050_write_reg(st, st->reg->accl_config, d); 444 - if (result) 445 - return result; 446 - st->chip_config.accl_fs = fs; 447 + st->chip_config.accl_fs = i; 448 + return 0; 449 + } 450 + } 447 451 448 - return 0; 452 + return -EINVAL; 449 453 } 450 454 451 455 static int inv_mpu6050_write_raw(struct iio_dev *indio_dev, ··· 475 471 case IIO_CHAN_INFO_SCALE: 476 472 switch (chan->type) { 477 473 case IIO_ANGL_VEL: 478 - result = inv_mpu6050_write_fsr(st, val); 474 + result = inv_mpu6050_write_gyro_scale(st, val2); 479 475 break; 480 476 case IIO_ACCEL: 481 - result = inv_mpu6050_write_accel_fs(st, val); 477 + result = inv_mpu6050_write_accel_scale(st, val2); 482 478 break; 483 479 default: 484 480 result = -EINVAL;
+14 -11
drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
··· 24 24 #include <linux/poll.h> 25 25 #include "inv_mpu_iio.h" 26 26 27 + static void inv_clear_kfifo(struct inv_mpu6050_state *st) 28 + { 29 + unsigned long flags; 30 + 31 + /* take the spin lock sem to avoid interrupt kick in */ 32 + spin_lock_irqsave(&st->time_stamp_lock, flags); 33 + kfifo_reset(&st->timestamps); 34 + spin_unlock_irqrestore(&st->time_stamp_lock, flags); 35 + } 36 + 27 37 int inv_reset_fifo(struct iio_dev *indio_dev) 28 38 { 29 39 int result; ··· 60 50 INV_MPU6050_BIT_FIFO_RST); 61 51 if (result) 62 52 goto reset_fifo_fail; 53 + 54 + /* clear timestamps fifo */ 55 + inv_clear_kfifo(st); 56 + 63 57 /* enable interrupt */ 64 58 if (st->chip_config.accl_fifo_enable || 65 59 st->chip_config.gyro_fifo_enable) { ··· 95 81 INV_MPU6050_BIT_DATA_RDY_EN); 96 82 97 83 return result; 98 - } 99 - 100 - static void inv_clear_kfifo(struct inv_mpu6050_state *st) 101 - { 102 - unsigned long flags; 103 - 104 - /* take the spin lock sem to avoid interrupt kick in */ 105 - spin_lock_irqsave(&st->time_stamp_lock, flags); 106 - kfifo_reset(&st->timestamps); 107 - spin_unlock_irqrestore(&st->time_stamp_lock, flags); 108 84 } 109 85 110 86 /** ··· 188 184 flush_fifo: 189 185 /* Flush HW and SW FIFOs. */ 190 186 inv_reset_fifo(indio_dev); 191 - inv_clear_kfifo(st); 192 187 mutex_unlock(&indio_dev->mlock); 193 188 iio_trigger_notify_done(indio_dev->trig); 194 189
+1 -1
drivers/iio/imu/kmx61.c
··· 1227 1227 base = KMX61_MAG_XOUT_L; 1228 1228 1229 1229 mutex_lock(&data->lock); 1230 - for_each_set_bit(bit, indio_dev->buffer->scan_mask, 1230 + for_each_set_bit(bit, indio_dev->active_scan_mask, 1231 1231 indio_dev->masklength) { 1232 1232 ret = kmx61_read_measurement(data, base, bit); 1233 1233 if (ret < 0) {
+3 -2
drivers/iio/industrialio-core.c
··· 847 847 * @attr_list: List of IIO device attributes 848 848 * 849 849 * This function frees the memory allocated for each of the IIO device 850 - * attributes in the list. Note: if you want to reuse the list after calling 851 - * this function you have to reinitialize it using INIT_LIST_HEAD(). 850 + * attributes in the list. 852 851 */ 853 852 void iio_free_chan_devattr_list(struct list_head *attr_list) 854 853 { ··· 855 856 856 857 list_for_each_entry_safe(p, n, attr_list, l) { 857 858 kfree(p->dev_attr.attr.name); 859 + list_del(&p->l); 858 860 kfree(p); 859 861 } 860 862 } ··· 936 936 937 937 iio_free_chan_devattr_list(&indio_dev->channel_attr_list); 938 938 kfree(indio_dev->chan_attr_group.attrs); 939 + indio_dev->chan_attr_group.attrs = NULL; 939 940 } 940 941 941 942 static void iio_dev_release(struct device *device)
+1
drivers/iio/industrialio-event.c
··· 500 500 error_free_setup_event_lines: 501 501 iio_free_chan_devattr_list(&indio_dev->event_interface->dev_attr_list); 502 502 kfree(indio_dev->event_interface); 503 + indio_dev->event_interface = NULL; 503 504 return ret; 504 505 } 505 506
+1 -1
drivers/iio/proximity/sx9500.c
··· 494 494 495 495 mutex_lock(&data->mutex); 496 496 497 - for_each_set_bit(bit, indio_dev->buffer->scan_mask, 497 + for_each_set_bit(bit, indio_dev->active_scan_mask, 498 498 indio_dev->masklength) { 499 499 ret = sx9500_read_proximity(data, &indio_dev->channels[bit], 500 500 &val);
+8
drivers/infiniband/core/umem.c
··· 99 99 if (dmasync) 100 100 dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs); 101 101 102 + /* 103 + * If the combination of the addr and size requested for this memory 104 + * region causes an integer overflow, return error. 105 + */ 106 + if ((PAGE_ALIGN(addr + size) <= size) || 107 + (PAGE_ALIGN(addr + size) <= addr)) 108 + return ERR_PTR(-EINVAL); 109 + 102 110 if (!can_do_mlock()) 103 111 return ERR_PTR(-EPERM); 104 112
+29 -19
drivers/input/mouse/alps.c
··· 1154 1154 mutex_unlock(&alps_mutex); 1155 1155 } 1156 1156 1157 - static void alps_report_bare_ps2_packet(struct input_dev *dev, 1157 + static void alps_report_bare_ps2_packet(struct psmouse *psmouse, 1158 1158 unsigned char packet[], 1159 1159 bool report_buttons) 1160 1160 { 1161 + struct alps_data *priv = psmouse->private; 1162 + struct input_dev *dev; 1163 + 1164 + /* Figure out which device to use to report the bare packet */ 1165 + if (priv->proto_version == ALPS_PROTO_V2 && 1166 + (priv->flags & ALPS_DUALPOINT)) { 1167 + /* On V2 devices the DualPoint Stick reports bare packets */ 1168 + dev = priv->dev2; 1169 + } else if (unlikely(IS_ERR_OR_NULL(priv->dev3))) { 1170 + /* Register dev3 mouse if we received PS/2 packet first time */ 1171 + if (!IS_ERR(priv->dev3)) 1172 + psmouse_queue_work(psmouse, &priv->dev3_register_work, 1173 + 0); 1174 + return; 1175 + } else { 1176 + dev = priv->dev3; 1177 + } 1178 + 1161 1179 if (report_buttons) 1162 1180 alps_report_buttons(dev, NULL, 1163 1181 packet[0] & 1, packet[0] & 2, packet[0] & 4); ··· 1250 1232 * de-synchronization. 1251 1233 */ 1252 1234 1253 - alps_report_bare_ps2_packet(priv->dev2, 1254 - &psmouse->packet[3], false); 1235 + alps_report_bare_ps2_packet(psmouse, &psmouse->packet[3], 1236 + false); 1255 1237 1256 1238 /* 1257 1239 * Continue with the standard ALPS protocol handling, ··· 1307 1289 * properly we only do this if the device is fully synchronized. 
1308 1290 */ 1309 1291 if (!psmouse->out_of_sync_cnt && (psmouse->packet[0] & 0xc8) == 0x08) { 1310 - 1311 - /* Register dev3 mouse if we received PS/2 packet first time */ 1312 - if (unlikely(!priv->dev3)) 1313 - psmouse_queue_work(psmouse, 1314 - &priv->dev3_register_work, 0); 1315 - 1316 1292 if (psmouse->pktcnt == 3) { 1317 - /* Once dev3 mouse device is registered report data */ 1318 - if (likely(!IS_ERR_OR_NULL(priv->dev3))) 1319 - alps_report_bare_ps2_packet(priv->dev3, 1320 - psmouse->packet, 1321 - true); 1293 + alps_report_bare_ps2_packet(psmouse, psmouse->packet, 1294 + true); 1322 1295 return PSMOUSE_FULL_PACKET; 1323 1296 } 1324 1297 return PSMOUSE_GOOD_DATA; ··· 2290 2281 priv->set_abs_params = alps_set_abs_params_mt; 2291 2282 priv->nibble_commands = alps_v3_nibble_commands; 2292 2283 priv->addr_command = PSMOUSE_CMD_RESET_WRAP; 2293 - priv->x_max = 1360; 2294 - priv->y_max = 660; 2295 2284 priv->x_bits = 23; 2296 2285 priv->y_bits = 12; 2286 + 2287 + if (alps_dolphin_get_device_area(psmouse, priv)) 2288 + return -EIO; 2289 + 2297 2290 break; 2298 2291 2299 2292 case ALPS_PROTO_V6: ··· 2314 2303 priv->set_abs_params = alps_set_abs_params_mt; 2315 2304 priv->nibble_commands = alps_v3_nibble_commands; 2316 2305 priv->addr_command = PSMOUSE_CMD_RESET_WRAP; 2317 - 2318 - if (alps_dolphin_get_device_area(psmouse, priv)) 2319 - return -EIO; 2306 + priv->x_max = 0xfff; 2307 + priv->y_max = 0x7ff; 2320 2308 2321 2309 if (priv->fw_ver[1] != 0xba) 2322 2310 priv->flags |= ALPS_BUTTONPAD;
+6 -1
drivers/input/mouse/synaptics.c
··· 154 154 }, 155 155 { 156 156 (const char * const []){"LEN2006", NULL}, 157 + {2691, 2691}, 158 + 1024, 5045, 2457, 4832 159 + }, 160 + { 161 + (const char * const []){"LEN2006", NULL}, 157 162 {ANY_BOARD_ID, ANY_BOARD_ID}, 158 163 1264, 5675, 1171, 4688 159 164 }, ··· 194 189 "LEN2003", 195 190 "LEN2004", /* L440 */ 196 191 "LEN2005", 197 - "LEN2006", 192 + "LEN2006", /* Edge E440/E540 */ 198 193 "LEN2007", 199 194 "LEN2008", 200 195 "LEN2009",
+6 -3
drivers/iommu/arm-smmu.c
··· 1288 1288 return 0; 1289 1289 1290 1290 spin_lock_irqsave(&smmu_domain->pgtbl_lock, flags); 1291 - if (smmu_domain->smmu->features & ARM_SMMU_FEAT_TRANS_OPS) 1291 + if (smmu_domain->smmu->features & ARM_SMMU_FEAT_TRANS_OPS && 1292 + smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { 1292 1293 ret = arm_smmu_iova_to_phys_hard(domain, iova); 1293 - else 1294 + } else { 1294 1295 ret = ops->iova_to_phys(ops, iova); 1296 + } 1297 + 1295 1298 spin_unlock_irqrestore(&smmu_domain->pgtbl_lock, flags); 1296 1299 1297 1300 return ret; ··· 1559 1556 return -ENODEV; 1560 1557 } 1561 1558 1562 - if (smmu->version == 1 || (!(id & ID0_ATOSNS) && (id & ID0_S1TS))) { 1559 + if ((id & ID0_S1TS) && ((smmu->version == 1) || (id & ID0_ATOSNS))) { 1563 1560 smmu->features |= ARM_SMMU_FEAT_TRANS_OPS; 1564 1561 dev_notice(smmu->dev, "\taddress translation ops\n"); 1565 1562 }
+3 -4
drivers/iommu/intel-iommu.c
··· 1742 1742 1743 1743 static void domain_exit(struct dmar_domain *domain) 1744 1744 { 1745 - struct dmar_drhd_unit *drhd; 1746 - struct intel_iommu *iommu; 1747 1745 struct page *freelist = NULL; 1746 + int i; 1748 1747 1749 1748 /* Domain 0 is reserved, so dont process it */ 1750 1749 if (!domain) ··· 1763 1764 1764 1765 /* clear attached or cached domains */ 1765 1766 rcu_read_lock(); 1766 - for_each_active_iommu(iommu, drhd) 1767 - iommu_detach_domain(domain, iommu); 1767 + for_each_set_bit(i, domain->iommu_bmp, g_num_of_iommus) 1768 + iommu_detach_domain(domain, g_iommus[i]); 1768 1769 rcu_read_unlock(); 1769 1770 1770 1771 dma_free_pagelist(freelist);
+1
drivers/iommu/ipmmu-vmsa.c
··· 851 851 852 852 static const struct of_device_id ipmmu_of_ids[] = { 853 853 { .compatible = "renesas,ipmmu-vmsa", }, 854 + { } 854 855 }; 855 856 856 857 static struct platform_driver ipmmu_driver = {
+48 -9
drivers/irqchip/irq-gic-v3-its.c
··· 169 169 170 170 static void its_encode_devid(struct its_cmd_block *cmd, u32 devid) 171 171 { 172 - cmd->raw_cmd[0] &= ~(0xffffUL << 32); 172 + cmd->raw_cmd[0] &= BIT_ULL(32) - 1; 173 173 cmd->raw_cmd[0] |= ((u64)devid) << 32; 174 174 } 175 175 ··· 802 802 int i; 803 803 int psz = SZ_64K; 804 804 u64 shr = GITS_BASER_InnerShareable; 805 + u64 cache = GITS_BASER_WaWb; 805 806 806 807 for (i = 0; i < GITS_BASER_NR_REGS; i++) { 807 808 u64 val = readq_relaxed(its->base + GITS_BASER + i * 8); ··· 849 848 val = (virt_to_phys(base) | 850 849 (type << GITS_BASER_TYPE_SHIFT) | 851 850 ((entry_size - 1) << GITS_BASER_ENTRY_SIZE_SHIFT) | 852 - GITS_BASER_WaWb | 851 + cache | 853 852 shr | 854 853 GITS_BASER_VALID); 855 854 ··· 875 874 * Shareability didn't stick. Just use 876 875 * whatever the read reported, which is likely 877 876 * to be the only thing this redistributor 878 - * supports. 877 + * supports. If that's zero, make it 878 + * non-cacheable as well. 879 879 */ 880 880 shr = tmp & GITS_BASER_SHAREABILITY_MASK; 881 + if (!shr) 882 + cache = GITS_BASER_nC; 881 883 goto retry_baser; 882 884 } 883 885 ··· 984 980 tmp = readq_relaxed(rbase + GICR_PROPBASER); 985 981 986 982 if ((tmp ^ val) & GICR_PROPBASER_SHAREABILITY_MASK) { 983 + if (!(tmp & GICR_PROPBASER_SHAREABILITY_MASK)) { 984 + /* 985 + * The HW reports non-shareable, we must 986 + * remove the cacheability attributes as 987 + * well. 
988 + */ 989 + val &= ~(GICR_PROPBASER_SHAREABILITY_MASK | 990 + GICR_PROPBASER_CACHEABILITY_MASK); 991 + val |= GICR_PROPBASER_nC; 992 + writeq_relaxed(val, rbase + GICR_PROPBASER); 993 + } 987 994 pr_info_once("GIC: using cache flushing for LPI property table\n"); 988 995 gic_rdists->flags |= RDIST_FLAGS_PROPBASE_NEEDS_FLUSHING; 989 996 } 990 997 991 998 /* set PENDBASE */ 992 999 val = (page_to_phys(pend_page) | 993 - GICR_PROPBASER_InnerShareable | 994 - GICR_PROPBASER_WaWb); 1000 + GICR_PENDBASER_InnerShareable | 1001 + GICR_PENDBASER_WaWb); 995 1002 996 1003 writeq_relaxed(val, rbase + GICR_PENDBASER); 1004 + tmp = readq_relaxed(rbase + GICR_PENDBASER); 1005 + 1006 + if (!(tmp & GICR_PENDBASER_SHAREABILITY_MASK)) { 1007 + /* 1008 + * The HW reports non-shareable, we must remove the 1009 + * cacheability attributes as well. 1010 + */ 1011 + val &= ~(GICR_PENDBASER_SHAREABILITY_MASK | 1012 + GICR_PENDBASER_CACHEABILITY_MASK); 1013 + val |= GICR_PENDBASER_nC; 1014 + writeq_relaxed(val, rbase + GICR_PENDBASER); 1015 + } 997 1016 998 1017 /* Enable LPIs */ 999 1018 val = readl_relaxed(rbase + GICR_CTLR); ··· 1053 1026 * This ITS wants a linear CPU number. 1054 1027 */ 1055 1028 target = readq_relaxed(gic_data_rdist_rd_base() + GICR_TYPER); 1056 - target = GICR_TYPER_CPU_NUMBER(target); 1029 + target = GICR_TYPER_CPU_NUMBER(target) << 16; 1057 1030 } 1058 1031 1059 1032 /* Perform collection mapping */ ··· 1449 1422 1450 1423 writeq_relaxed(baser, its->base + GITS_CBASER); 1451 1424 tmp = readq_relaxed(its->base + GITS_CBASER); 1452 - writeq_relaxed(0, its->base + GITS_CWRITER); 1453 - writel_relaxed(GITS_CTLR_ENABLE, its->base + GITS_CTLR); 1454 1425 1455 - if ((tmp ^ baser) & GITS_BASER_SHAREABILITY_MASK) { 1426 + if ((tmp ^ baser) & GITS_CBASER_SHAREABILITY_MASK) { 1427 + if (!(tmp & GITS_CBASER_SHAREABILITY_MASK)) { 1428 + /* 1429 + * The HW reports non-shareable, we must 1430 + * remove the cacheability attributes as 1431 + * well. 
1432 + */ 1433 + baser &= ~(GITS_CBASER_SHAREABILITY_MASK | 1434 + GITS_CBASER_CACHEABILITY_MASK); 1435 + baser |= GITS_CBASER_nC; 1436 + writeq_relaxed(baser, its->base + GITS_CBASER); 1437 + } 1456 1438 pr_info("ITS: using cache flushing for cmd queue\n"); 1457 1439 its->flags |= ITS_FLAGS_CMDQ_NEEDS_FLUSHING; 1458 1440 } 1441 + 1442 + writeq_relaxed(0, its->base + GITS_CWRITER); 1443 + writel_relaxed(GITS_CTLR_ENABLE, its->base + GITS_CTLR); 1459 1444 1460 1445 if (of_property_read_bool(its->msi_chip.of_node, "msi-controller")) { 1461 1446 its->domain = irq_domain_add_tree(NULL, &its_domain_ops, its);
+1 -1
drivers/lguest/Kconfig
··· 1 1 config LGUEST 2 2 tristate "Linux hypervisor example code" 3 - depends on X86_32 && EVENTFD && TTY 3 + depends on X86_32 && EVENTFD && TTY && PCI_DIRECT 4 4 select HVC_DRIVER 5 5 ---help--- 6 6 This is a very simple module which allows you to run
+16 -10
drivers/md/dm.c
··· 433 433 434 434 dm_get(md); 435 435 atomic_inc(&md->open_count); 436 - 437 436 out: 438 437 spin_unlock(&_minor_lock); 439 438 ··· 441 442 442 443 static void dm_blk_close(struct gendisk *disk, fmode_t mode) 443 444 { 444 - struct mapped_device *md = disk->private_data; 445 + struct mapped_device *md; 445 446 446 447 spin_lock(&_minor_lock); 448 + 449 + md = disk->private_data; 450 + if (WARN_ON(!md)) 451 + goto out; 447 452 448 453 if (atomic_dec_and_test(&md->open_count) && 449 454 (test_bit(DMF_DEFERRED_REMOVE, &md->flags))) 450 455 queue_work(deferred_remove_workqueue, &deferred_remove_work); 451 456 452 457 dm_put(md); 453 - 458 + out: 454 459 spin_unlock(&_minor_lock); 455 460 } 456 461 ··· 2244 2241 int minor = MINOR(disk_devt(md->disk)); 2245 2242 2246 2243 unlock_fs(md); 2247 - bdput(md->bdev); 2248 2244 destroy_workqueue(md->wq); 2249 2245 2250 2246 if (md->kworker_task) ··· 2254 2252 mempool_destroy(md->rq_pool); 2255 2253 if (md->bs) 2256 2254 bioset_free(md->bs); 2257 - blk_integrity_unregister(md->disk); 2258 - del_gendisk(md->disk); 2255 + 2259 2256 cleanup_srcu_struct(&md->io_barrier); 2260 2257 free_table_devices(&md->table_devices); 2261 - free_minor(minor); 2258 + dm_stats_cleanup(&md->stats); 2262 2259 2263 2260 spin_lock(&_minor_lock); 2264 2261 md->disk->private_data = NULL; 2265 2262 spin_unlock(&_minor_lock); 2266 - 2263 + if (blk_get_integrity(md->disk)) 2264 + blk_integrity_unregister(md->disk); 2265 + del_gendisk(md->disk); 2267 2266 put_disk(md->disk); 2268 2267 blk_cleanup_queue(md->queue); 2269 - dm_stats_cleanup(&md->stats); 2268 + bdput(md->bdev); 2269 + free_minor(minor); 2270 + 2270 2271 module_put(THIS_MODULE); 2271 2272 kfree(md); 2272 2273 } ··· 2647 2642 2648 2643 might_sleep(); 2649 2644 2650 - spin_lock(&_minor_lock); 2651 2645 map = dm_get_live_table(md, &srcu_idx); 2646 + 2647 + spin_lock(&_minor_lock); 2652 2648 idr_replace(&_minor_idr, MINOR_ALLOCED, MINOR(disk_devt(dm_disk(md)))); 2653 2649 set_bit(DMF_FREEING, &md->flags); 2654 2650 spin_unlock(&_minor_lock);
+1 -1
drivers/mfd/kempld-core.c
··· 739 739 for (id = kempld_dmi_table; 740 740 id->matches[0].slot != DMI_NONE; id++) 741 741 if (strstr(id->ident, force_device_id)) 742 - if (id->callback && id->callback(id)) 742 + if (id->callback && !id->callback(id)) 743 743 break; 744 744 if (id->matches[0].slot == DMI_NONE) 745 745 return -ENODEV;
+24 -6
drivers/mfd/rtsx_usb.c
··· 196 196 int rtsx_usb_ep0_read_register(struct rtsx_ucr *ucr, u16 addr, u8 *data) 197 197 { 198 198 u16 value; 199 + u8 *buf; 200 + int ret; 199 201 200 202 if (!data) 201 203 return -EINVAL; 202 - *data = 0; 204 + 205 + buf = kzalloc(sizeof(u8), GFP_KERNEL); 206 + if (!buf) 207 + return -ENOMEM; 203 208 204 209 addr |= EP0_READ_REG_CMD << EP0_OP_SHIFT; 205 210 value = swab16(addr); 206 211 207 - return usb_control_msg(ucr->pusb_dev, 212 + ret = usb_control_msg(ucr->pusb_dev, 208 213 usb_rcvctrlpipe(ucr->pusb_dev, 0), RTSX_USB_REQ_REG_OP, 209 214 USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE, 210 - value, 0, data, 1, 100); 215 + value, 0, buf, 1, 100); 216 + *data = *buf; 217 + 218 + kfree(buf); 219 + return ret; 211 220 } 212 221 EXPORT_SYMBOL_GPL(rtsx_usb_ep0_read_register); 213 222 ··· 297 288 int rtsx_usb_get_card_status(struct rtsx_ucr *ucr, u16 *status) 298 289 { 299 290 int ret; 291 + u16 *buf; 300 292 301 293 if (!status) 302 294 return -EINVAL; 303 295 304 - if (polling_pipe == 0) 296 + if (polling_pipe == 0) { 297 + buf = kzalloc(sizeof(u16), GFP_KERNEL); 298 + if (!buf) 299 + return -ENOMEM; 300 + 305 301 ret = usb_control_msg(ucr->pusb_dev, 306 302 usb_rcvctrlpipe(ucr->pusb_dev, 0), 307 303 RTSX_USB_REQ_POLL, 308 304 USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE, 309 - 0, 0, status, 2, 100); 310 - else 305 + 0, 0, buf, 2, 100); 306 + *status = *buf; 307 + 308 + kfree(buf); 309 + } else { 311 310 ret = rtsx_usb_get_status_with_bulk(ucr, status); 311 + } 312 312 313 313 /* usb_control_msg may return positive when success */ 314 314 if (ret < 0)
+2 -1
drivers/net/bonding/bond_main.c
··· 3850 3850 /* Find out if any slaves have the same mapping as this skb. */ 3851 3851 bond_for_each_slave_rcu(bond, slave, iter) { 3852 3852 if (slave->queue_id == skb->queue_mapping) { 3853 - if (bond_slave_can_tx(slave)) { 3853 + if (bond_slave_is_up(slave) && 3854 + slave->link == BOND_LINK_UP) { 3854 3855 bond_dev_queue_xmit(bond, skb, slave->dev); 3855 3856 return 0; 3856 3857 }
+11 -7
drivers/net/can/flexcan.c
··· 592 592 rx_state = unlikely(reg_esr & FLEXCAN_ESR_RX_WRN) ? 593 593 CAN_STATE_ERROR_WARNING : CAN_STATE_ERROR_ACTIVE; 594 594 new_state = max(tx_state, rx_state); 595 - } else if (unlikely(flt == FLEXCAN_ESR_FLT_CONF_PASSIVE)) { 595 + } else { 596 596 __flexcan_get_berr_counter(dev, &bec); 597 - new_state = CAN_STATE_ERROR_PASSIVE; 597 + new_state = flt == FLEXCAN_ESR_FLT_CONF_PASSIVE ? 598 + CAN_STATE_ERROR_PASSIVE : CAN_STATE_BUS_OFF; 598 599 rx_state = bec.rxerr >= bec.txerr ? new_state : 0; 599 600 tx_state = bec.rxerr <= bec.txerr ? new_state : 0; 600 - } else { 601 - new_state = CAN_STATE_BUS_OFF; 602 601 } 603 602 604 603 /* state hasn't changed */ ··· 1157 1158 const struct flexcan_devtype_data *devtype_data; 1158 1159 struct net_device *dev; 1159 1160 struct flexcan_priv *priv; 1161 + struct regulator *reg_xceiver; 1160 1162 struct resource *mem; 1161 1163 struct clk *clk_ipg = NULL, *clk_per = NULL; 1162 1164 void __iomem *base; 1163 1165 int err, irq; 1164 1166 u32 clock_freq = 0; 1167 + 1168 + reg_xceiver = devm_regulator_get(&pdev->dev, "xceiver"); 1169 + if (PTR_ERR(reg_xceiver) == -EPROBE_DEFER) 1170 + return -EPROBE_DEFER; 1171 + else if (IS_ERR(reg_xceiver)) 1172 + reg_xceiver = NULL; 1165 1173 1166 1174 if (pdev->dev.of_node) 1167 1175 of_property_read_u32(pdev->dev.of_node, ··· 1230 1224 priv->pdata = dev_get_platdata(&pdev->dev); 1231 1225 priv->devtype_data = devtype_data; 1232 1226 1233 - priv->reg_xceiver = devm_regulator_get(&pdev->dev, "xceiver"); 1234 - if (IS_ERR(priv->reg_xceiver)) 1235 - priv->reg_xceiver = NULL; 1227 + priv->reg_xceiver = reg_xceiver; 1236 1228 1237 1229 netif_napi_add(dev, &priv->napi, flexcan_poll, FLEXCAN_NAPI_WEIGHT); 1238 1230
+2
drivers/net/can/usb/gs_usb.c
··· 901 901 } 902 902 903 903 dev = kzalloc(sizeof(*dev), GFP_KERNEL); 904 + if (!dev) 905 + return -ENOMEM; 904 906 init_usb_anchor(&dev->rx_submitted); 905 907 906 908 atomic_set(&dev->active_channels, 0);
+42 -27
drivers/net/can/usb/kvaser_usb.c
··· 25 25 #include <linux/can/dev.h> 26 26 #include <linux/can/error.h> 27 27 28 - #define MAX_TX_URBS 16 29 28 #define MAX_RX_URBS 4 30 29 #define START_TIMEOUT 1000 /* msecs */ 31 30 #define STOP_TIMEOUT 1000 /* msecs */ ··· 442 443 }; 443 444 }; 444 445 446 + /* Context for an outstanding, not yet ACKed, transmission */ 445 447 struct kvaser_usb_tx_urb_context { 446 448 struct kvaser_usb_net_priv *priv; 447 449 u32 echo_index; ··· 456 456 struct usb_endpoint_descriptor *bulk_in, *bulk_out; 457 457 struct usb_anchor rx_submitted; 458 458 459 + /* @max_tx_urbs: Firmware-reported maximum number of outstanding, 460 + * not yet ACKed, transmissions on this device. This value is 461 + * also used as a sentinel for marking free tx contexts. 462 + */ 459 463 u32 fw_version; 460 464 unsigned int nchannels; 465 + unsigned int max_tx_urbs; 461 466 enum kvaser_usb_family family; 462 467 463 468 bool rxinitdone; ··· 472 467 473 468 struct kvaser_usb_net_priv { 474 469 struct can_priv can; 475 - 476 - spinlock_t tx_contexts_lock; 477 - int active_tx_contexts; 478 - struct kvaser_usb_tx_urb_context tx_contexts[MAX_TX_URBS]; 479 - 480 - struct usb_anchor tx_submitted; 481 - struct completion start_comp, stop_comp; 470 + struct can_berr_counter bec; 482 471 483 472 struct kvaser_usb *dev; 484 473 struct net_device *netdev; 485 474 int channel; 486 475 487 - struct can_berr_counter bec; 476 + struct completion start_comp, stop_comp; 477 + struct usb_anchor tx_submitted; 478 + 479 + spinlock_t tx_contexts_lock; 480 + int active_tx_contexts; 481 + struct kvaser_usb_tx_urb_context tx_contexts[]; 488 482 }; 489 483 490 484 static const struct usb_device_id kvaser_usb_table[] = { ··· 596 592 * for further details. 
597 593 */ 598 594 if (tmp->len == 0) { 599 - pos = round_up(pos, 600 - dev->bulk_in->wMaxPacketSize); 595 + pos = round_up(pos, le16_to_cpu(dev->bulk_in-> 596 + wMaxPacketSize)); 601 597 continue; 602 598 } 603 599 ··· 661 657 switch (dev->family) { 662 658 case KVASER_LEAF: 663 659 dev->fw_version = le32_to_cpu(msg.u.leaf.softinfo.fw_version); 660 + dev->max_tx_urbs = 661 + le16_to_cpu(msg.u.leaf.softinfo.max_outstanding_tx); 664 662 break; 665 663 case KVASER_USBCAN: 666 664 dev->fw_version = le32_to_cpu(msg.u.usbcan.softinfo.fw_version); 665 + dev->max_tx_urbs = 666 + le16_to_cpu(msg.u.usbcan.softinfo.max_outstanding_tx); 667 667 break; 668 668 } 669 669 ··· 723 715 724 716 stats = &priv->netdev->stats; 725 717 726 - context = &priv->tx_contexts[tid % MAX_TX_URBS]; 718 + context = &priv->tx_contexts[tid % dev->max_tx_urbs]; 727 719 728 720 /* Sometimes the state change doesn't come after a bus-off event */ 729 721 if (priv->can.restart_ms && ··· 752 744 spin_lock_irqsave(&priv->tx_contexts_lock, flags); 753 745 754 746 can_get_echo_skb(priv->netdev, context->echo_index); 755 - context->echo_index = MAX_TX_URBS; 747 + context->echo_index = dev->max_tx_urbs; 756 748 --priv->active_tx_contexts; 757 749 netif_wake_queue(priv->netdev); 758 750 ··· 1337 1329 * number of events in case of a heavy rx load on the bus. 
1338 1330 */ 1339 1331 if (msg->len == 0) { 1340 - pos = round_up(pos, dev->bulk_in->wMaxPacketSize); 1332 + pos = round_up(pos, le16_to_cpu(dev->bulk_in-> 1333 + wMaxPacketSize)); 1341 1334 continue; 1342 1335 } 1343 1336 ··· 1521 1512 1522 1513 static void kvaser_usb_reset_tx_urb_contexts(struct kvaser_usb_net_priv *priv) 1523 1514 { 1524 - int i; 1515 + int i, max_tx_urbs; 1516 + 1517 + max_tx_urbs = priv->dev->max_tx_urbs; 1525 1518 1526 1519 priv->active_tx_contexts = 0; 1527 - for (i = 0; i < MAX_TX_URBS; i++) 1528 - priv->tx_contexts[i].echo_index = MAX_TX_URBS; 1520 + for (i = 0; i < max_tx_urbs; i++) 1521 + priv->tx_contexts[i].echo_index = max_tx_urbs; 1529 1522 } 1530 1523 1531 1524 /* This method might sleep. Do not call it in the atomic context ··· 1713 1702 *msg_tx_can_flags |= MSG_FLAG_REMOTE_FRAME; 1714 1703 1715 1704 spin_lock_irqsave(&priv->tx_contexts_lock, flags); 1716 - for (i = 0; i < ARRAY_SIZE(priv->tx_contexts); i++) { 1717 - if (priv->tx_contexts[i].echo_index == MAX_TX_URBS) { 1705 + for (i = 0; i < dev->max_tx_urbs; i++) { 1706 + if (priv->tx_contexts[i].echo_index == dev->max_tx_urbs) { 1718 1707 context = &priv->tx_contexts[i]; 1719 1708 1720 1709 context->echo_index = i; 1721 1710 can_put_echo_skb(skb, netdev, context->echo_index); 1722 1711 ++priv->active_tx_contexts; 1723 - if (priv->active_tx_contexts >= MAX_TX_URBS) 1712 + if (priv->active_tx_contexts >= dev->max_tx_urbs) 1724 1713 netif_stop_queue(netdev); 1725 1714 1726 1715 break; ··· 1754 1743 spin_lock_irqsave(&priv->tx_contexts_lock, flags); 1755 1744 1756 1745 can_free_echo_skb(netdev, context->echo_index); 1757 - context->echo_index = MAX_TX_URBS; 1746 + context->echo_index = dev->max_tx_urbs; 1758 1747 --priv->active_tx_contexts; 1759 1748 netif_wake_queue(netdev); 1760 1749 ··· 1892 1881 if (err) 1893 1882 return err; 1894 1883 1895 - netdev = alloc_candev(sizeof(*priv), MAX_TX_URBS); 1884 + netdev = alloc_candev(sizeof(*priv) + 1885 + dev->max_tx_urbs * 
sizeof(*priv->tx_contexts), 1886 + dev->max_tx_urbs); 1896 1887 if (!netdev) { 1897 1888 dev_err(&intf->dev, "Cannot alloc candev\n"); 1898 1889 return -ENOMEM; ··· 2022 2009 return err; 2023 2010 } 2024 2011 2012 + dev_dbg(&intf->dev, "Firmware version: %d.%d.%d\n", 2013 + ((dev->fw_version >> 24) & 0xff), 2014 + ((dev->fw_version >> 16) & 0xff), 2015 + (dev->fw_version & 0xffff)); 2016 + 2017 + dev_dbg(&intf->dev, "Max outstanding tx = %d URBs\n", dev->max_tx_urbs); 2018 + 2025 2019 err = kvaser_usb_get_card_info(dev); 2026 2020 if (err) { 2027 2021 dev_err(&intf->dev, 2028 2022 "Cannot get card infos, error %d\n", err); 2029 2023 return err; 2030 2024 } 2031 - 2032 - dev_dbg(&intf->dev, "Firmware version: %d.%d.%d\n", 2033 - ((dev->fw_version >> 24) & 0xff), 2034 - ((dev->fw_version >> 16) & 0xff), 2035 - (dev->fw_version & 0xffff)); 2036 2025 2037 2026 for (i = 0; i < dev->nchannels; i++) { 2038 2027 err = kvaser_usb_init_one(intf, id, i);
+8 -7
drivers/net/can/usb/peak_usb/pcan_ucan.h
··· 26 26 #define PUCAN_CMD_FILTER_STD 0x008 27 27 #define PUCAN_CMD_TX_ABORT 0x009 28 28 #define PUCAN_CMD_WR_ERR_CNT 0x00a 29 - #define PUCAN_CMD_RX_FRAME_ENABLE 0x00b 30 - #define PUCAN_CMD_RX_FRAME_DISABLE 0x00c 29 + #define PUCAN_CMD_SET_EN_OPTION 0x00b 30 + #define PUCAN_CMD_CLR_DIS_OPTION 0x00c 31 31 #define PUCAN_CMD_END_OF_COLLECTION 0x3ff 32 32 33 33 /* uCAN received messages list */ ··· 101 101 u16 unused; 102 102 }; 103 103 104 - /* uCAN RX_FRAME_ENABLE command fields */ 105 - #define PUCAN_FLTEXT_ERROR 0x0001 106 - #define PUCAN_FLTEXT_BUSLOAD 0x0002 104 + /* uCAN SET_EN/CLR_DIS _OPTION command fields */ 105 + #define PUCAN_OPTION_ERROR 0x0001 106 + #define PUCAN_OPTION_BUSLOAD 0x0002 107 + #define PUCAN_OPTION_CANDFDISO 0x0004 107 108 108 - struct __packed pucan_filter_ext { 109 + struct __packed pucan_options { 109 110 __le16 opcode_channel; 110 111 111 - __le16 ext_mask; 112 + __le16 options; 112 113 u32 unused; 113 114 }; 114 115
+50 -23
drivers/net/can/usb/peak_usb/pcan_usb_fd.c
··· 110 110 u8 unused[5]; 111 111 }; 112 112 113 - /* Extended usage of uCAN commands CMD_RX_FRAME_xxxABLE for PCAN-USB Pro FD */ 113 + /* Extended usage of uCAN commands CMD_xxx_xx_OPTION for PCAN-USB Pro FD */ 114 114 #define PCAN_UFD_FLTEXT_CALIBRATION 0x8000 115 115 116 - struct __packed pcan_ufd_filter_ext { 116 + struct __packed pcan_ufd_options { 117 117 __le16 opcode_channel; 118 118 119 - __le16 ext_mask; 119 + __le16 ucan_mask; 120 120 u16 unused; 121 121 __le16 usb_mask; 122 122 }; ··· 251 251 /* moves the pointer forward */ 252 252 pc += sizeof(struct pucan_wr_err_cnt); 253 253 254 + /* add command to switch from ISO to non-ISO mode, if fw allows it */ 255 + if (dev->can.ctrlmode_supported & CAN_CTRLMODE_FD_NON_ISO) { 256 + struct pucan_options *puo = (struct pucan_options *)pc; 257 + 258 + puo->opcode_channel = 259 + (dev->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO) ? 260 + pucan_cmd_opcode_channel(dev, 261 + PUCAN_CMD_CLR_DIS_OPTION) : 262 + pucan_cmd_opcode_channel(dev, PUCAN_CMD_SET_EN_OPTION); 263 + 264 + puo->options = cpu_to_le16(PUCAN_OPTION_CANDFDISO); 265 + 266 + /* to be sure that no other extended bits will be taken into 267 + * account 268 + */ 269 + puo->unused = 0; 270 + 271 + /* moves the pointer forward */ 272 + pc += sizeof(struct pucan_options); 273 + } 274 + 254 275 /* next, go back to operational mode */ 255 276 cmd = (struct pucan_command *)pc; 256 277 cmd->opcode_channel = pucan_cmd_opcode_channel(dev, ··· 342 321 return pcan_usb_fd_send_cmd(dev, cmd); 343 322 } 344 323 345 - /* set/unset notifications filter: 324 + /* set/unset options 346 325 * 347 - * onoff sets(1)/unset(0) notifications 348 - * mask each bit defines a kind of notification to set/unset 326 + * onoff set(1)/unset(0) options 327 + * mask each bit defines a kind of options to set/unset 349 328 */ 350 - static int pcan_usb_fd_set_filter_ext(struct peak_usb_device *dev, 351 - bool onoff, u16 ext_mask, u16 usb_mask) 329 + static int pcan_usb_fd_set_options(struct 
peak_usb_device *dev, 330 + bool onoff, u16 ucan_mask, u16 usb_mask) 352 331 { 353 - struct pcan_ufd_filter_ext *cmd = pcan_usb_fd_cmd_buffer(dev); 332 + struct pcan_ufd_options *cmd = pcan_usb_fd_cmd_buffer(dev); 354 333 355 334 cmd->opcode_channel = pucan_cmd_opcode_channel(dev, 356 - (onoff) ? PUCAN_CMD_RX_FRAME_ENABLE : 357 - PUCAN_CMD_RX_FRAME_DISABLE); 335 + (onoff) ? PUCAN_CMD_SET_EN_OPTION : 336 + PUCAN_CMD_CLR_DIS_OPTION); 358 337 359 - cmd->ext_mask = cpu_to_le16(ext_mask); 338 + cmd->ucan_mask = cpu_to_le16(ucan_mask); 360 339 cmd->usb_mask = cpu_to_le16(usb_mask); 361 340 362 341 /* send the command */ ··· 791 770 &pcan_usb_pro_fd); 792 771 793 772 /* enable USB calibration messages */ 794 - err = pcan_usb_fd_set_filter_ext(dev, 1, 795 - PUCAN_FLTEXT_ERROR, 796 - PCAN_UFD_FLTEXT_CALIBRATION); 773 + err = pcan_usb_fd_set_options(dev, 1, 774 + PUCAN_OPTION_ERROR, 775 + PCAN_UFD_FLTEXT_CALIBRATION); 797 776 } 798 777 799 778 pdev->usb_if->dev_opened_count++; ··· 827 806 828 807 /* turn off special msgs for that interface if no other dev opened */ 829 808 if (pdev->usb_if->dev_opened_count == 1) 830 - pcan_usb_fd_set_filter_ext(dev, 0, 831 - PUCAN_FLTEXT_ERROR, 832 - PCAN_UFD_FLTEXT_CALIBRATION); 809 + pcan_usb_fd_set_options(dev, 0, 810 + PUCAN_OPTION_ERROR, 811 + PCAN_UFD_FLTEXT_CALIBRATION); 833 812 pdev->usb_if->dev_opened_count--; 834 813 835 814 return 0; ··· 881 860 pdev->usb_if->fw_info.fw_version[2], 882 861 dev->adapter->ctrl_count); 883 862 884 - /* the currently supported hw is non-ISO */ 885 - dev->can.ctrlmode = CAN_CTRLMODE_FD_NON_ISO; 863 + /* check for ability to switch between ISO/non-ISO modes */ 864 + if (pdev->usb_if->fw_info.fw_version[0] >= 2) { 865 + /* firmware >= 2.x supports ISO/non-ISO switching */ 866 + dev->can.ctrlmode_supported |= CAN_CTRLMODE_FD_NON_ISO; 867 + } else { 868 + /* firmware < 2.x only supports fixed(!) 
non-ISO */ 869 + dev->can.ctrlmode |= CAN_CTRLMODE_FD_NON_ISO; 870 + } 886 871 887 872 /* tell the hardware the can driver is running */ 888 873 err = pcan_usb_fd_drv_loaded(dev, 1); ··· 964 937 if (dev->ctrl_idx == 0) { 965 938 /* turn off calibration message if any device were opened */ 966 939 if (pdev->usb_if->dev_opened_count > 0) 967 - pcan_usb_fd_set_filter_ext(dev, 0, 968 - PUCAN_FLTEXT_ERROR, 969 - PCAN_UFD_FLTEXT_CALIBRATION); 940 + pcan_usb_fd_set_options(dev, 0, 941 + PUCAN_OPTION_ERROR, 942 + PCAN_UFD_FLTEXT_CALIBRATION); 970 943 971 944 /* tell USB adapter that the driver is being unloaded */ 972 945 pcan_usb_fd_drv_loaded(dev, 0);
+29 -2
drivers/net/ethernet/amd/pcnet32.c
··· 1543 1543 { 1544 1544 struct pcnet32_private *lp; 1545 1545 int i, media; 1546 - int fdx, mii, fset, dxsuflo; 1546 + int fdx, mii, fset, dxsuflo, sram; 1547 1547 int chip_version; 1548 1548 char *chipname; 1549 1549 struct net_device *dev; ··· 1580 1580 } 1581 1581 1582 1582 /* initialize variables */ 1583 - fdx = mii = fset = dxsuflo = 0; 1583 + fdx = mii = fset = dxsuflo = sram = 0; 1584 1584 chip_version = (chip_version >> 12) & 0xffff; 1585 1585 1586 1586 switch (chip_version) { ··· 1613 1613 chipname = "PCnet/FAST III 79C973"; /* PCI */ 1614 1614 fdx = 1; 1615 1615 mii = 1; 1616 + sram = 1; 1616 1617 break; 1617 1618 case 0x2626: 1618 1619 chipname = "PCnet/Home 79C978"; /* PCI */ ··· 1637 1636 chipname = "PCnet/FAST III 79C975"; /* PCI */ 1638 1637 fdx = 1; 1639 1638 mii = 1; 1639 + sram = 1; 1640 1640 break; 1641 1641 case 0x2628: 1642 1642 chipname = "PCnet/PRO 79C976"; ··· 1664 1662 a->write_csr(ioaddr, 80, 1665 1663 (a->read_csr(ioaddr, 80) & 0x0C00) | 0x0c00); 1666 1664 dxsuflo = 1; 1665 + } 1666 + 1667 + /* 1668 + * The Am79C973/Am79C975 controllers come with 12K of SRAM 1669 + * which we can use for the Tx/Rx buffers but most importantly, 1670 + * the use of SRAM allows us to use the BCR18:NOUFLO bit to avoid 1671 + * Tx fifo underflows. 1672 + */ 1673 + if (sram) { 1674 + /* 1675 + * The SRAM is being configured in two steps. First we 1676 + * set the SRAM size in the BCR25:SRAM_SIZE bits. According 1677 + * to the datasheet, each bit corresponds to a 512-byte 1678 + * page so we can have at most 24 pages. The SRAM_SIZE 1679 + * holds the value of the upper 8 bits of the 16-bit SRAM size. 1680 + * The low 8 bits start at 0x00 and end at 0xff. So the 1681 + * address range is from 0x0000 up to 0x17ff. Therefore, 1682 + * the SRAM_SIZE is set to 0x17. The next step is to set 1683 + * the BCR26:SRAM_BND midway through so the Tx and Rx 1684 + * buffers can share the SRAM equally. 
1685 + */ 1686 + a->write_bcr(ioaddr, 25, 0x17); 1687 + a->write_bcr(ioaddr, 26, 0xc); 1688 + /* And finally enable the NOUFLO bit */ 1689 + a->write_bcr(ioaddr, 18, a->read_bcr(ioaddr, 18) | (1 << 11)); 1667 1690 } 1668 1691 1669 1692 dev = alloc_etherdev(sizeof(*lp));
+1 -3
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 1811 1811 int stats_state; 1812 1812 1813 1813 /* used for synchronization of concurrent threads statistics handling */ 1814 - spinlock_t stats_lock; 1814 + struct mutex stats_lock; 1815 1815 1816 1816 /* used by dmae command loader */ 1817 1817 struct dmae_command stats_dmae; ··· 1935 1935 1936 1936 int fp_array_size; 1937 1937 u32 dump_preset_idx; 1938 - bool stats_started; 1939 - struct semaphore stats_sema; 1940 1938 1941 1939 u8 phys_port_id[ETH_ALEN]; 1942 1940
+54 -45
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 129 129 u32 xmac_val; 130 130 u32 emac_addr; 131 131 u32 emac_val; 132 - u32 umac_addr; 133 - u32 umac_val; 132 + u32 umac_addr[2]; 133 + u32 umac_val[2]; 134 134 u32 bmac_addr; 135 135 u32 bmac_val[2]; 136 136 }; ··· 7866 7866 return 0; 7867 7867 } 7868 7868 7869 + /* previous driver DMAE transaction may have occurred when pre-boot stage ended 7870 + * and boot began, or when kdump kernel was loaded. Either case would invalidate 7871 + * the addresses of the transaction, resulting in was-error bit set in the pci 7872 + * causing all hw-to-host pcie transactions to timeout. If this happened we want 7873 + * to clear the interrupt which detected this from the pglueb and the was done 7874 + * bit 7875 + */ 7876 + static void bnx2x_clean_pglue_errors(struct bnx2x *bp) 7877 + { 7878 + if (!CHIP_IS_E1x(bp)) 7879 + REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR, 7880 + 1 << BP_ABS_FUNC(bp)); 7881 + } 7882 + 7869 7883 static int bnx2x_init_hw_func(struct bnx2x *bp) 7870 7884 { 7871 7885 int port = BP_PORT(bp); ··· 7972 7958 7973 7959 bnx2x_init_block(bp, BLOCK_PGLUE_B, init_phase); 7974 7960 7975 - if (!CHIP_IS_E1x(bp)) 7976 - REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR, func); 7961 + bnx2x_clean_pglue_errors(bp); 7977 7962 7978 7963 bnx2x_init_block(bp, BLOCK_ATC, init_phase); 7979 7964 bnx2x_init_block(bp, BLOCK_DMAE, init_phase); ··· 10154 10141 return base + (BP_ABS_FUNC(bp)) * stride; 10155 10142 } 10156 10143 10144 + static bool bnx2x_prev_unload_close_umac(struct bnx2x *bp, 10145 + u8 port, u32 reset_reg, 10146 + struct bnx2x_mac_vals *vals) 10147 + { 10148 + u32 mask = MISC_REGISTERS_RESET_REG_2_UMAC0 << port; 10149 + u32 base_addr; 10150 + 10151 + if (!(mask & reset_reg)) 10152 + return false; 10153 + 10154 + BNX2X_DEV_INFO("Disable umac Rx %02x\n", port); 10155 + base_addr = port ? 
GRCBASE_UMAC1 : GRCBASE_UMAC0; 10156 + vals->umac_addr[port] = base_addr + UMAC_REG_COMMAND_CONFIG; 10157 + vals->umac_val[port] = REG_RD(bp, vals->umac_addr[port]); 10158 + REG_WR(bp, vals->umac_addr[port], 0); 10159 + 10160 + return true; 10161 + } 10162 + 10157 10163 static void bnx2x_prev_unload_close_mac(struct bnx2x *bp, 10158 10164 struct bnx2x_mac_vals *vals) 10159 10165 { ··· 10181 10149 u8 port = BP_PORT(bp); 10182 10150 10183 10151 /* reset addresses as they also mark which values were changed */ 10184 - vals->bmac_addr = 0; 10185 - vals->umac_addr = 0; 10186 - vals->xmac_addr = 0; 10187 - vals->emac_addr = 0; 10152 + memset(vals, 0, sizeof(*vals)); 10188 10153 10189 10154 reset_reg = REG_RD(bp, MISC_REG_RESET_REG_2); 10190 10155 ··· 10230 10201 REG_WR(bp, vals->xmac_addr, 0); 10231 10202 mac_stopped = true; 10232 10203 } 10233 - mask = MISC_REGISTERS_RESET_REG_2_UMAC0 << port; 10234 - if (mask & reset_reg) { 10235 - BNX2X_DEV_INFO("Disable umac Rx\n"); 10236 - base_addr = BP_PORT(bp) ? 
GRCBASE_UMAC1 : GRCBASE_UMAC0; 10237 - vals->umac_addr = base_addr + UMAC_REG_COMMAND_CONFIG; 10238 - vals->umac_val = REG_RD(bp, vals->umac_addr); 10239 - REG_WR(bp, vals->umac_addr, 0); 10240 - mac_stopped = true; 10241 - } 10204 + 10205 + mac_stopped |= bnx2x_prev_unload_close_umac(bp, 0, 10206 + reset_reg, vals); 10207 + mac_stopped |= bnx2x_prev_unload_close_umac(bp, 1, 10208 + reset_reg, vals); 10242 10209 } 10243 10210 10244 10211 if (mac_stopped) ··· 10530 10505 /* Close the MAC Rx to prevent BRB from filling up */ 10531 10506 bnx2x_prev_unload_close_mac(bp, &mac_vals); 10532 10507 10533 - /* close LLH filters towards the BRB */ 10508 + /* close LLH filters for both ports towards the BRB */ 10534 10509 bnx2x_set_rx_filter(&bp->link_params, 0); 10510 + bp->link_params.port ^= 1; 10511 + bnx2x_set_rx_filter(&bp->link_params, 0); 10512 + bp->link_params.port ^= 1; 10535 10513 10536 10514 /* Check if the UNDI driver was previously loaded */ 10537 10515 if (bnx2x_prev_is_after_undi(bp)) { ··· 10581 10553 10582 10554 if (mac_vals.xmac_addr) 10583 10555 REG_WR(bp, mac_vals.xmac_addr, mac_vals.xmac_val); 10584 - if (mac_vals.umac_addr) 10585 - REG_WR(bp, mac_vals.umac_addr, mac_vals.umac_val); 10556 + if (mac_vals.umac_addr[0]) 10557 + REG_WR(bp, mac_vals.umac_addr[0], mac_vals.umac_val[0]); 10558 + if (mac_vals.umac_addr[1]) 10559 + REG_WR(bp, mac_vals.umac_addr[1], mac_vals.umac_val[1]); 10586 10560 if (mac_vals.emac_addr) 10587 10561 REG_WR(bp, mac_vals.emac_addr, mac_vals.emac_val); 10588 10562 if (mac_vals.bmac_addr) { ··· 10601 10571 return bnx2x_prev_mcp_done(bp); 10602 10572 } 10603 10573 10604 - /* previous driver DMAE transaction may have occurred when pre-boot stage ended 10605 - * and boot began, or when kdump kernel was loaded. Either case would invalidate 10606 - * the addresses of the transaction, resulting in was-error bit set in the pci 10607 - * causing all hw-to-host pcie transactions to timeout. 
If this happened we want 10608 - * to clear the interrupt which detected this from the pglueb and the was done 10609 - * bit 10610 - */ 10611 - static void bnx2x_prev_interrupted_dmae(struct bnx2x *bp) 10612 - { 10613 - if (!CHIP_IS_E1x(bp)) { 10614 - u32 val = REG_RD(bp, PGLUE_B_REG_PGLUE_B_INT_STS); 10615 - if (val & PGLUE_B_PGLUE_B_INT_STS_REG_WAS_ERROR_ATTN) { 10616 - DP(BNX2X_MSG_SP, 10617 - "'was error' bit was found to be set in pglueb upon startup. Clearing\n"); 10618 - REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR, 10619 - 1 << BP_FUNC(bp)); 10620 - } 10621 - } 10622 - } 10623 - 10624 10574 static int bnx2x_prev_unload(struct bnx2x *bp) 10625 10575 { 10626 10576 int time_counter = 10; ··· 10610 10600 /* clear hw from errors which may have resulted from an interrupted 10611 10601 * dmae transaction. 10612 10602 */ 10613 - bnx2x_prev_interrupted_dmae(bp); 10603 + bnx2x_clean_pglue_errors(bp); 10614 10604 10615 10605 /* Release previously held locks */ 10616 10606 hw_lock_reg = (BP_FUNC(bp) <= 5) ? ··· 12047 12037 mutex_init(&bp->port.phy_mutex); 12048 12038 mutex_init(&bp->fw_mb_mutex); 12049 12039 mutex_init(&bp->drv_info_mutex); 12040 + mutex_init(&bp->stats_lock); 12050 12041 bp->drv_info_mng_owner = false; 12051 - spin_lock_init(&bp->stats_lock); 12052 - sema_init(&bp->stats_sema, 1); 12053 12042 12054 12043 INIT_DELAYED_WORK(&bp->sp_task, bnx2x_sp_task); 12055 12044 INIT_DELAYED_WORK(&bp->sp_rtnl_task, bnx2x_sp_rtnl_task); ··· 13677 13668 cancel_delayed_work_sync(&bp->sp_task); 13678 13669 cancel_delayed_work_sync(&bp->period_task); 13679 13670 13680 - spin_lock_bh(&bp->stats_lock); 13671 + mutex_lock(&bp->stats_lock); 13681 13672 bp->stats_state = STATS_STATE_DISABLED; 13682 - spin_unlock_bh(&bp->stats_lock); 13673 + mutex_unlock(&bp->stats_lock); 13683 13674 13684 13675 bnx2x_save_statistics(bp); 13685 13676
+3 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
··· 2238 2238 2239 2239 cookie.vf = vf; 2240 2240 cookie.state = VF_ACQUIRED; 2241 - bnx2x_stats_safe_exec(bp, bnx2x_set_vf_state, &cookie); 2241 + rc = bnx2x_stats_safe_exec(bp, bnx2x_set_vf_state, &cookie); 2242 + if (rc) 2243 + goto op_err; 2242 2244 } 2243 2245 2244 2246 DP(BNX2X_MSG_IOV, "set state to acquired\n");
+74 -90
drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
··· 123 123 */ 124 124 static void bnx2x_storm_stats_post(struct bnx2x *bp) 125 125 { 126 - if (!bp->stats_pending) { 127 - int rc; 126 + int rc; 128 127 129 - spin_lock_bh(&bp->stats_lock); 128 + if (bp->stats_pending) 129 + return; 130 130 131 - if (bp->stats_pending) { 132 - spin_unlock_bh(&bp->stats_lock); 133 - return; 134 - } 131 + bp->fw_stats_req->hdr.drv_stats_counter = 132 + cpu_to_le16(bp->stats_counter++); 135 133 136 - bp->fw_stats_req->hdr.drv_stats_counter = 137 - cpu_to_le16(bp->stats_counter++); 134 + DP(BNX2X_MSG_STATS, "Sending statistics ramrod %d\n", 135 + le16_to_cpu(bp->fw_stats_req->hdr.drv_stats_counter)); 138 136 139 - DP(BNX2X_MSG_STATS, "Sending statistics ramrod %d\n", 140 - le16_to_cpu(bp->fw_stats_req->hdr.drv_stats_counter)); 137 + /* adjust the ramrod to include VF queues statistics */ 138 + bnx2x_iov_adjust_stats_req(bp); 139 + bnx2x_dp_stats(bp); 141 140 142 - /* adjust the ramrod to include VF queues statistics */ 143 - bnx2x_iov_adjust_stats_req(bp); 144 - bnx2x_dp_stats(bp); 145 - 146 - /* send FW stats ramrod */ 147 - rc = bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_STAT_QUERY, 0, 148 - U64_HI(bp->fw_stats_req_mapping), 149 - U64_LO(bp->fw_stats_req_mapping), 150 - NONE_CONNECTION_TYPE); 151 - if (rc == 0) 152 - bp->stats_pending = 1; 153 - 154 - spin_unlock_bh(&bp->stats_lock); 155 - } 141 + /* send FW stats ramrod */ 142 + rc = bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_STAT_QUERY, 0, 143 + U64_HI(bp->fw_stats_req_mapping), 144 + U64_LO(bp->fw_stats_req_mapping), 145 + NONE_CONNECTION_TYPE); 146 + if (rc == 0) 147 + bp->stats_pending = 1; 156 148 } 157 149 158 150 static void bnx2x_hw_stats_post(struct bnx2x *bp) ··· 213 221 */ 214 222 215 223 /* should be called under stats_sema */ 216 - static void __bnx2x_stats_pmf_update(struct bnx2x *bp) 224 + static void bnx2x_stats_pmf_update(struct bnx2x *bp) 217 225 { 218 226 struct dmae_command *dmae; 219 227 u32 opcode; ··· 511 519 } 512 520 513 521 /* should be called under stats_sema */ 
514 - static void __bnx2x_stats_start(struct bnx2x *bp) 522 + static void bnx2x_stats_start(struct bnx2x *bp) 515 523 { 516 524 if (IS_PF(bp)) { 517 525 if (bp->port.pmf) ··· 523 531 bnx2x_hw_stats_post(bp); 524 532 bnx2x_storm_stats_post(bp); 525 533 } 526 - 527 - bp->stats_started = true; 528 - } 529 - 530 - static void bnx2x_stats_start(struct bnx2x *bp) 531 - { 532 - if (down_timeout(&bp->stats_sema, HZ/10)) 533 - BNX2X_ERR("Unable to acquire stats lock\n"); 534 - __bnx2x_stats_start(bp); 535 - up(&bp->stats_sema); 536 534 } 537 535 538 536 static void bnx2x_stats_pmf_start(struct bnx2x *bp) 539 537 { 540 - if (down_timeout(&bp->stats_sema, HZ/10)) 541 - BNX2X_ERR("Unable to acquire stats lock\n"); 542 538 bnx2x_stats_comp(bp); 543 - __bnx2x_stats_pmf_update(bp); 544 - __bnx2x_stats_start(bp); 545 - up(&bp->stats_sema); 546 - } 547 - 548 - static void bnx2x_stats_pmf_update(struct bnx2x *bp) 549 - { 550 - if (down_timeout(&bp->stats_sema, HZ/10)) 551 - BNX2X_ERR("Unable to acquire stats lock\n"); 552 - __bnx2x_stats_pmf_update(bp); 553 - up(&bp->stats_sema); 539 + bnx2x_stats_pmf_update(bp); 540 + bnx2x_stats_start(bp); 554 541 } 555 542 556 543 static void bnx2x_stats_restart(struct bnx2x *bp) ··· 539 568 */ 540 569 if (IS_VF(bp)) 541 570 return; 542 - if (down_timeout(&bp->stats_sema, HZ/10)) 543 - BNX2X_ERR("Unable to acquire stats lock\n"); 571 + 544 572 bnx2x_stats_comp(bp); 545 - __bnx2x_stats_start(bp); 546 - up(&bp->stats_sema); 573 + bnx2x_stats_start(bp); 547 574 } 548 575 549 576 static void bnx2x_bmac_stats_update(struct bnx2x *bp) ··· 1215 1246 { 1216 1247 u32 *stats_comp = bnx2x_sp(bp, stats_comp); 1217 1248 1218 - /* we run update from timer context, so give up 1219 - * if somebody is in the middle of transition 1220 - */ 1221 - if (down_trylock(&bp->stats_sema)) 1249 + if (bnx2x_edebug_stats_stopped(bp)) 1222 1250 return; 1223 - 1224 - if (bnx2x_edebug_stats_stopped(bp) || !bp->stats_started) 1225 - goto out; 1226 1251 1227 1252 if (IS_PF(bp)) { 
1228 1253 if (*stats_comp != DMAE_COMP_VAL) 1229 - goto out; 1254 + return; 1230 1255 1231 1256 if (bp->port.pmf) 1232 1257 bnx2x_hw_stats_update(bp); ··· 1230 1267 BNX2X_ERR("storm stats were not updated for 3 times\n"); 1231 1268 bnx2x_panic(); 1232 1269 } 1233 - goto out; 1270 + return; 1234 1271 } 1235 1272 } else { 1236 1273 /* vf doesn't collect HW statistics, and doesn't get completions ··· 1244 1281 1245 1282 /* vf is done */ 1246 1283 if (IS_VF(bp)) 1247 - goto out; 1284 + return; 1248 1285 1249 1286 if (netif_msg_timer(bp)) { 1250 1287 struct bnx2x_eth_stats *estats = &bp->eth_stats; ··· 1255 1292 1256 1293 bnx2x_hw_stats_post(bp); 1257 1294 bnx2x_storm_stats_post(bp); 1258 - 1259 - out: 1260 - up(&bp->stats_sema); 1261 1295 } 1262 1296 1263 1297 static void bnx2x_port_stats_stop(struct bnx2x *bp) ··· 1318 1358 1319 1359 static void bnx2x_stats_stop(struct bnx2x *bp) 1320 1360 { 1321 - int update = 0; 1322 - 1323 - if (down_timeout(&bp->stats_sema, HZ/10)) 1324 - BNX2X_ERR("Unable to acquire stats lock\n"); 1325 - 1326 - bp->stats_started = false; 1361 + bool update = false; 1327 1362 1328 1363 bnx2x_stats_comp(bp); 1329 1364 ··· 1336 1381 bnx2x_hw_stats_post(bp); 1337 1382 bnx2x_stats_comp(bp); 1338 1383 } 1339 - 1340 - up(&bp->stats_sema); 1341 1384 } 1342 1385 1343 1386 static void bnx2x_stats_do_nothing(struct bnx2x *bp) ··· 1363 1410 1364 1411 void bnx2x_stats_handle(struct bnx2x *bp, enum bnx2x_stats_event event) 1365 1412 { 1366 - enum bnx2x_stats_state state; 1367 - void (*action)(struct bnx2x *bp); 1413 + enum bnx2x_stats_state state = bp->stats_state; 1414 + 1368 1415 if (unlikely(bp->panic)) 1369 1416 return; 1370 1417 1371 - spin_lock_bh(&bp->stats_lock); 1372 - state = bp->stats_state; 1373 - bp->stats_state = bnx2x_stats_stm[state][event].next_state; 1374 - action = bnx2x_stats_stm[state][event].action; 1375 - spin_unlock_bh(&bp->stats_lock); 1418 + /* Statistics update run from timer context, and we don't want to stop 1419 + * that context 
in case someone is in the middle of a transition. 1420 + * For other events, wait a bit until lock is taken. 1421 + */ 1422 + if (!mutex_trylock(&bp->stats_lock)) { 1423 + if (event == STATS_EVENT_UPDATE) 1424 + return; 1376 1425 1377 - action(bp); 1426 + DP(BNX2X_MSG_STATS, 1427 + "Unlikely stats' lock contention [event %d]\n", event); 1428 + mutex_lock(&bp->stats_lock); 1429 + } 1430 + 1431 + bnx2x_stats_stm[state][event].action(bp); 1432 + bp->stats_state = bnx2x_stats_stm[state][event].next_state; 1433 + 1434 + mutex_unlock(&bp->stats_lock); 1378 1435 1379 1436 if ((event != STATS_EVENT_UPDATE) || netif_msg_timer(bp)) 1380 1437 DP(BNX2X_MSG_STATS, "state %d -> event %d -> state %d\n", ··· 1961 1998 } 1962 1999 } 1963 2000 1964 - void bnx2x_stats_safe_exec(struct bnx2x *bp, 1965 - void (func_to_exec)(void *cookie), 1966 - void *cookie){ 1967 - if (down_timeout(&bp->stats_sema, HZ/10)) 1968 - BNX2X_ERR("Unable to acquire stats lock\n"); 2001 + int bnx2x_stats_safe_exec(struct bnx2x *bp, 2002 + void (func_to_exec)(void *cookie), 2003 + void *cookie) 2004 + { 2005 + int cnt = 10, rc = 0; 2006 + 2007 + /* Wait for statistics to end [while blocking further requests], 2008 + * then run supplied function 'safely'. 2009 + */ 2010 + mutex_lock(&bp->stats_lock); 2011 + 1969 2012 bnx2x_stats_comp(bp); 2013 + while (bp->stats_pending && cnt--) 2014 + if (bnx2x_storm_stats_update(bp)) 2015 + usleep_range(1000, 2000); 2016 + if (bp->stats_pending) { 2017 + BNX2X_ERR("Failed to wait for stats pending to clear [possibly FW is stuck]\n"); 2018 + rc = -EBUSY; 2019 + goto out; 2020 + } 2021 + 1970 2022 func_to_exec(cookie); 1971 - __bnx2x_stats_start(bp); 1972 - up(&bp->stats_sema); 2023 + 2024 + out: 2025 + /* No need to restart statistics - if they're enabled, the timer 2026 + * will restart the statistics. 2027 + */ 2028 + mutex_unlock(&bp->stats_lock); 2029 + 2030 + return rc; 1973 2031 }
+3 -3
drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
··· 539 539 void bnx2x_memset_stats(struct bnx2x *bp); 540 540 void bnx2x_stats_init(struct bnx2x *bp); 541 541 void bnx2x_stats_handle(struct bnx2x *bp, enum bnx2x_stats_event event); 542 - void bnx2x_stats_safe_exec(struct bnx2x *bp, 543 - void (func_to_exec)(void *cookie), 544 - void *cookie); 542 + int bnx2x_stats_safe_exec(struct bnx2x *bp, 543 + void (func_to_exec)(void *cookie), 544 + void *cookie); 545 545 546 546 /** 547 547 * bnx2x_save_statistics - save statistics when unloading.
+8 -6
drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
··· 376 376 enum {
377 377 INGQ_EXTRAS = 2, /* firmware event queue and */
378 378 /* forwarded interrupts */
379 - MAX_EGRQ = MAX_ETH_QSETS*2 + MAX_OFLD_QSETS*2
380 - + MAX_CTRL_QUEUES + MAX_RDMA_QUEUES + MAX_ISCSI_QUEUES,
381 379 MAX_INGQ = MAX_ETH_QSETS + MAX_OFLD_QSETS + MAX_RDMA_QUEUES
382 380 + MAX_RDMA_CIQS + MAX_ISCSI_QUEUES + INGQ_EXTRAS,
383 381 };
··· 614 616 unsigned int idma_qid[2]; /* SGE IDMA Hung Ingress Queue ID */
615 617 
616 618 unsigned int egr_start;
619 + unsigned int egr_sz;
617 620 unsigned int ingr_start;
618 - void *egr_map[MAX_EGRQ]; /* qid->queue egress queue map */
619 - struct sge_rspq *ingr_map[MAX_INGQ]; /* qid->queue ingress queue map */
620 - DECLARE_BITMAP(starving_fl, MAX_EGRQ);
621 - DECLARE_BITMAP(txq_maperr, MAX_EGRQ);
621 + unsigned int ingr_sz;
622 + void **egr_map; /* qid->queue egress queue map */
623 + struct sge_rspq **ingr_map; /* qid->queue ingress queue map */
624 + unsigned long *starving_fl;
625 + unsigned long *txq_maperr;
622 626 struct timer_list rx_timer; /* refills starving FLs */
623 627 struct timer_list tx_timer; /* checks Tx queues */
624 628 };
··· 1136 1136 
1137 1137 unsigned int qtimer_val(const struct adapter *adap,
1138 1138 const struct sge_rspq *q);
1139 + 
1140 + int t4_init_devlog_params(struct adapter *adapter);
1139 1141 int t4_init_sge_params(struct adapter *adapter);
1140 1142 int t4_init_tp_params(struct adapter *adap);
1141 1143 int t4_filter_field_shift(const struct adapter *adap, int filter_sel);
+7 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
··· 670 670 "0.9375" };
671 671 
672 672 int i;
673 - u16 incr[NMTUS][NCCTRL_WIN];
673 + u16 (*incr)[NCCTRL_WIN];
674 674 struct adapter *adap = seq->private;
675 + 
676 + incr = kmalloc(sizeof(*incr) * NMTUS, GFP_KERNEL);
677 + if (!incr)
678 + return -ENOMEM;
675 679 
676 680 t4_read_cong_tbl(adap, incr);
677 681 
··· 689 685 adap->params.a_wnd[i],
690 686 dec_fac[adap->params.b_wnd[i]]);
691 687 }
688 + 
689 + kfree(incr);
692 690 return 0;
693 691 }
694 692 
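The fix above swaps a large on-stack `u16 incr[NMTUS][NCCTRL_WIN]` for a heap allocation while keeping plain two-dimensional indexing. A standalone userspace sketch of the same pointer-to-array idiom (the table sizes and helper names here are illustrative, not the driver's):

```c
#include <stdint.h>
#include <stdlib.h>

#define NMTUS      16   /* illustrative sizes, not the driver's values */
#define NCCTRL_WIN 32

/* Allocate one NMTUS x NCCTRL_WIN table on the heap. Because the pointed-to
 * type is a whole row, incr[i][j] indexing works exactly as it did for the
 * on-stack array. Caller must free(), as the patch does with kfree(). */
static uint16_t (*alloc_cong_tbl(void))[NCCTRL_WIN]
{
    uint16_t (*incr)[NCCTRL_WIN] = malloc(sizeof(*incr) * NMTUS);
    return incr;
}

/* sizeof(*incr) is one full row, not one element - that is what makes
 * the single malloc above size the whole 2D table correctly. */
static size_t cong_tbl_row_bytes(void)
{
    return sizeof(uint16_t [NCCTRL_WIN]);
}
```

The design point mirrors the patch: one allocation, one free, and no change to any `incr[i][j]` access in between.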
+98 -39
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 920 920 {
921 921 int i;
922 922 
923 - for (i = 0; i < ARRAY_SIZE(adap->sge.ingr_map); i++) {
923 + for (i = 0; i < adap->sge.ingr_sz; i++) {
924 924 struct sge_rspq *q = adap->sge.ingr_map[i];
925 925 
926 926 if (q && q->handler) {
··· 934 934 }
935 935 }
936 936 
937 + /* Disable interrupt and napi handler */
938 + static void disable_interrupts(struct adapter *adap)
939 + {
940 + if (adap->flags & FULL_INIT_DONE) {
941 + t4_intr_disable(adap);
942 + if (adap->flags & USING_MSIX) {
943 + free_msix_queue_irqs(adap);
944 + free_irq(adap->msix_info[0].vec, adap);
945 + } else {
946 + free_irq(adap->pdev->irq, adap);
947 + }
948 + quiesce_rx(adap);
949 + }
950 + }
951 + 
937 952 /*
938 953 * Enable NAPI scheduling and interrupt generation for all Rx queues.
939 954 */
··· 956 941 {
957 942 int i;
958 943 
959 - for (i = 0; i < ARRAY_SIZE(adap->sge.ingr_map); i++) {
944 + for (i = 0; i < adap->sge.ingr_sz; i++) {
960 945 struct sge_rspq *q = adap->sge.ingr_map[i];
961 946 
962 947 if (!q)
··· 985 970 int err, msi_idx, i, j;
986 971 struct sge *s = &adap->sge;
987 972 
988 - bitmap_zero(s->starving_fl, MAX_EGRQ);
989 - bitmap_zero(s->txq_maperr, MAX_EGRQ);
973 + bitmap_zero(s->starving_fl, s->egr_sz);
974 + bitmap_zero(s->txq_maperr, s->egr_sz);
990 975 
991 976 if (adap->flags & USING_MSIX)
992 977 msi_idx = 1; /* vector 0 is for non-queue interrupts */
··· 998 983 msi_idx = -((int)s->intrq.abs_id + 1);
999 984 }
1000 985 
986 + /* NOTE: If you add/delete any Ingress/Egress Queue allocations in here,
987 + * don't forget to update the following which need to be
988 + * synchronized to any changes here.
989 + *
990 + * 1. The calculations of MAX_INGQ in cxgb4.h.
991 + *
992 + * 2. Update enable_msix/name_msix_vecs/request_msix_queue_irqs
993 + * to accommodate any new/deleted Ingress Queues
994 + * which need MSI-X Vectors.
995 + *
996 + * 3. Update sge_qinfo_show() to include information on the
997 + * new/deleted queues.
998 + */
1001 999 err = t4_sge_alloc_rxq(adap, &s->fw_evtq, true, adap->port[0],
1002 1000 msi_idx, NULL, fwevtq_handler);
1003 1001 if (err) {
··· 4272 4244 
4273 4245 static void cxgb_down(struct adapter *adapter)
4274 4246 {
4275 - t4_intr_disable(adapter);
4276 4247 cancel_work_sync(&adapter->tid_release_task);
4277 4248 cancel_work_sync(&adapter->db_full_task);
4278 4249 cancel_work_sync(&adapter->db_drop_task);
4279 4250 adapter->tid_release_task_busy = false;
4280 4251 adapter->tid_release_head = NULL;
4281 4252 
4282 - if (adapter->flags & USING_MSIX) {
4283 - free_msix_queue_irqs(adapter);
4284 - free_irq(adapter->msix_info[0].vec, adapter);
4285 - } else
4286 - free_irq(adapter->pdev->irq, adapter);
4287 - quiesce_rx(adapter);
4288 4253 t4_sge_stop(adapter);
4289 4254 t4_free_sge_resources(adapter);
4290 4255 adapter->flags &= ~FULL_INIT_DONE;
··· 4754 4733 if (ret < 0)
4755 4734 return ret;
4756 4735 
4757 - ret = t4_cfg_pfvf(adap, adap->fn, adap->fn, 0, MAX_EGRQ, 64, MAX_INGQ,
4758 - 0, 0, 4, 0xf, 0xf, 16, FW_CMD_CAP_PF, FW_CMD_CAP_PF);
4736 + ret = t4_cfg_pfvf(adap, adap->fn, adap->fn, 0, adap->sge.egr_sz, 64,
4737 + MAX_INGQ, 0, 0, 4, 0xf, 0xf, 16, FW_CMD_CAP_PF,
4738 + FW_CMD_CAP_PF);
4759 4739 if (ret < 0)
4760 4740 return ret;
4761 4741 
··· 5110 5088 enum dev_state state;
5111 5089 u32 params[7], val[7];
5112 5090 struct fw_caps_config_cmd caps_cmd;
5113 - struct fw_devlog_cmd devlog_cmd;
5114 - u32 devlog_meminfo;
5115 5091 int reset = 1;
5092 + 
5093 + /* Grab Firmware Device Log parameters as early as possible so we have
5094 + * access to it for debugging, etc.
5095 + */
5096 + ret = t4_init_devlog_params(adap);
5097 + if (ret < 0)
5098 + return ret;
5116 5099 
5117 5100 /* Contact FW, advertising Master capability */
5118 5101 ret = t4_fw_hello(adap, adap->mbox, adap->mbox, MASTER_MAY, &state);
··· 5195 5168 ret = get_vpd_params(adap, &adap->params.vpd);
5196 5169 if (ret < 0)
5197 5170 goto bye;
5198 - 
5199 - /* Read firmware device log parameters. We really need to find a way
5200 - * to get these parameters initialized with some default values (which
5201 - * are likely to be correct) for the case where we either don't
5202 - * attache to the firmware or it's crashed when we probe the adapter.
5203 - * That way we'll still be able to perform early firmware startup
5204 - * debugging ... If the request to get the Firmware's Device Log
5205 - * parameters fails, we'll live so we don't make that a fatal error.
5206 - */
5207 - memset(&devlog_cmd, 0, sizeof(devlog_cmd));
5208 - devlog_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_DEVLOG_CMD) |
5209 - FW_CMD_REQUEST_F | FW_CMD_READ_F);
5210 - devlog_cmd.retval_len16 = htonl(FW_LEN16(devlog_cmd));
5211 - ret = t4_wr_mbox(adap, adap->mbox, &devlog_cmd, sizeof(devlog_cmd),
5212 - &devlog_cmd);
5213 - if (ret == 0) {
5214 - devlog_meminfo =
5215 - ntohl(devlog_cmd.memtype_devlog_memaddr16_devlog);
5216 - adap->params.devlog.memtype =
5217 - FW_DEVLOG_CMD_MEMTYPE_DEVLOG_G(devlog_meminfo);
5218 - adap->params.devlog.start =
5219 - FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(devlog_meminfo) << 4;
5220 - adap->params.devlog.size = ntohl(devlog_cmd.memsize_devlog);
5221 - }
5222 5171 
5223 5172 /*
5224 5173 * Find out what ports are available to us. Note that we need to do
··· 5295 5292 adap->tids.ftid_base = val[3];
5296 5293 adap->tids.nftids = val[4] - val[3] + 1;
5297 5294 adap->sge.ingr_start = val[5];
5295 + 
5296 + /* qids (ingress/egress) returned from firmware can be anywhere
5297 + * in the range from EQ(IQFLINT)_START to EQ(IQFLINT)_END.
5298 + * Hence driver needs to allocate memory for this range to
5299 + * store the queue info. Get the highest IQFLINT/EQ index returned
5300 + * in FW_EQ_*_CMD.alloc command.
5301 + */
5302 + params[0] = FW_PARAM_PFVF(EQ_END);
5303 + params[1] = FW_PARAM_PFVF(IQFLINT_END);
5304 + ret = t4_query_params(adap, adap->mbox, adap->fn, 0, 2, params, val);
5305 + if (ret < 0)
5306 + goto bye;
5307 + adap->sge.egr_sz = val[0] - adap->sge.egr_start + 1;
5308 + adap->sge.ingr_sz = val[1] - adap->sge.ingr_start + 1;
5309 + 
5310 + adap->sge.egr_map = kcalloc(adap->sge.egr_sz,
5311 + sizeof(*adap->sge.egr_map), GFP_KERNEL);
5312 + if (!adap->sge.egr_map) {
5313 + ret = -ENOMEM;
5314 + goto bye;
5315 + }
5316 + 
5317 + adap->sge.ingr_map = kcalloc(adap->sge.ingr_sz,
5318 + sizeof(*adap->sge.ingr_map), GFP_KERNEL);
5319 + if (!adap->sge.ingr_map) {
5320 + ret = -ENOMEM;
5321 + goto bye;
5322 + }
5323 + 
5324 + /* Allocate the memory for the various egress queue bitmaps,
5325 + * i.e. starving_fl and txq_maperr.
5326 + */
5327 + adap->sge.starving_fl = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz),
5328 + sizeof(long), GFP_KERNEL);
5329 + if (!adap->sge.starving_fl) {
5330 + ret = -ENOMEM;
5331 + goto bye;
5332 + }
5333 + 
5334 + adap->sge.txq_maperr = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz),
5335 + sizeof(long), GFP_KERNEL);
5336 + if (!adap->sge.txq_maperr) {
5337 + ret = -ENOMEM;
5338 + goto bye;
5339 + }
5298 5340 
5299 5341 params[0] = FW_PARAM_PFVF(CLIP_START);
5300 5342 params[1] = FW_PARAM_PFVF(CLIP_END);
··· 5549 5501 * happened to HW/FW, stop issuing commands.
5550 5502 */
5551 5503 bye:
5504 + kfree(adap->sge.egr_map);
5505 + kfree(adap->sge.ingr_map);
5506 + kfree(adap->sge.starving_fl);
5507 + kfree(adap->sge.txq_maperr);
5552 5508 if (ret != -ETIMEDOUT && ret != -EIO)
5553 5509 t4_fw_bye(adap, adap->mbox);
5554 5510 return ret;
··· 5580 5528 netif_carrier_off(dev);
5581 5529 }
5582 5530 spin_unlock(&adap->stats_lock);
5531 + disable_interrupts(adap);
5583 5532 if (adap->flags & FULL_INIT_DONE)
5584 5533 cxgb_down(adap);
5585 5534 rtnl_unlock();
··· 5965 5912 
5966 5913 t4_free_mem(adapter->l2t);
5967 5914 t4_free_mem(adapter->tids.tid_tab);
5915 + kfree(adapter->sge.egr_map);
5916 + kfree(adapter->sge.ingr_map);
5917 + kfree(adapter->sge.starving_fl);
5918 + kfree(adapter->sge.txq_maperr);
5968 5919 disable_msi(adapter);
5969 5920 
5970 5921 for_each_port(adapter, i)
··· 6293 6236 
6294 6237 if (is_offload(adapter))
6295 6238 detach_ulds(adapter);
6239 + 
6240 + disable_interrupts(adapter);
6296 6241 
6297 6242 for_each_port(adapter, i)
6298 6243 if (adapter->port[i]->reg_state == NETREG_REGISTERED)
+4 -3
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 2171 2171 struct adapter *adap = (struct adapter *)data;
2172 2172 struct sge *s = &adap->sge;
2173 2173 
2174 - for (i = 0; i < ARRAY_SIZE(s->starving_fl); i++)
2174 + for (i = 0; i < BITS_TO_LONGS(s->egr_sz); i++)
2175 2175 for (m = s->starving_fl[i]; m; m &= m - 1) {
2176 2176 struct sge_eth_rxq *rxq;
2177 2177 unsigned int id = __ffs(m) + i * BITS_PER_LONG;
··· 2259 2259 struct adapter *adap = (struct adapter *)data;
2260 2260 struct sge *s = &adap->sge;
2261 2261 
2262 - for (i = 0; i < ARRAY_SIZE(s->txq_maperr); i++)
2262 + for (i = 0; i < BITS_TO_LONGS(s->egr_sz); i++)
2263 2263 for (m = s->txq_maperr[i]; m; m &= m - 1) {
2264 2264 unsigned long id = __ffs(m) + i * BITS_PER_LONG;
2265 2265 struct sge_ofld_txq *txq = s->egr_map[id];
··· 2741 2741 free_rspq_fl(adap, &adap->sge.intrq, NULL);
2742 2742 
2743 2743 /* clear the reverse egress queue map */
2744 - memset(adap->sge.egr_map, 0, sizeof(adap->sge.egr_map));
2744 + memset(adap->sge.egr_map, 0,
2745 + adap->sge.egr_sz * sizeof(*adap->sge.egr_map));
2745 2746 }
2746 2747 
2747 2748 void t4_sge_start(struct adapter *adap)
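The loops above now walk `BITS_TO_LONGS(s->egr_sz)` words of a dynamically sized bitmap, peeling off the lowest set bit each pass with `m &= m - 1`. A userspace sketch of that scan pattern (helper names are mine; `__builtin_ctzl` stands in for the kernel's `__ffs`):

```c
#include <limits.h>
#include <stddef.h>
#include <string.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

static void set_bit_ul(unsigned long *map, unsigned int bit)
{
    map[bit / BITS_PER_LONG] |= 1UL << (bit % BITS_PER_LONG);
}

/* Visit every set bit in ascending order, the way the starving_fl and
 * txq_maperr timers do: m &= m - 1 clears the lowest set bit, and the
 * bit id is the word index times BITS_PER_LONG plus the bit offset. */
static size_t scan_bitmap(const unsigned long *map, size_t nbits,
                          unsigned int *ids, size_t max_ids)
{
    size_t n = 0;
    for (size_t i = 0; i < BITS_TO_LONGS(nbits); i++)
        for (unsigned long m = map[i]; m; m &= m - 1) {
            unsigned int id = (unsigned int)
                (__builtin_ctzl(m) + i * BITS_PER_LONG);
            if (n < max_ids)
                ids[n] = id;
            n++;
        }
    return n;
}

/* Demo: bits {3, 64, 130} over a 192-bit map come back in order. */
static int demo_scan_ok(void)
{
    unsigned long map[BITS_TO_LONGS(192)];
    unsigned int ids[8];

    memset(map, 0, sizeof(map));
    set_bit_ul(map, 3);
    set_bit_ul(map, 64);
    set_bit_ul(map, 130);
    return scan_bitmap(map, 192, ids, 8) == 3 &&
           ids[0] == 3 && ids[1] == 64 && ids[2] == 130;
}
```

Because the word count comes from `BITS_TO_LONGS(nbits)` rather than `ARRAY_SIZE()`, the same scan works whatever size the firmware reports, which is exactly why the patch could drop the fixed `MAX_EGRQ` arrays.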
+53
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 4459 4459 }
4460 4460 
4461 4461 /**
4462 + * t4_init_devlog_params - initialize adapter->params.devlog
4463 + * @adap: the adapter
4464 + *
4465 + * Initialize various fields of the adapter's Firmware Device Log
4466 + * Parameters structure.
4467 + */
4468 + int t4_init_devlog_params(struct adapter *adap)
4469 + {
4470 + struct devlog_params *dparams = &adap->params.devlog;
4471 + u32 pf_dparams;
4472 + unsigned int devlog_meminfo;
4473 + struct fw_devlog_cmd devlog_cmd;
4474 + int ret;
4475 + 
4476 + /* If we're dealing with newer firmware, the Device Log Parameters
4477 + * are stored in a designated register which allows us to access the
4478 + * Device Log even if we can't talk to the firmware.
4479 + */
4480 + pf_dparams =
4481 + t4_read_reg(adap, PCIE_FW_REG(PCIE_FW_PF_A, PCIE_FW_PF_DEVLOG));
4482 + if (pf_dparams) {
4483 + unsigned int nentries, nentries128;
4484 + 
4485 + dparams->memtype = PCIE_FW_PF_DEVLOG_MEMTYPE_G(pf_dparams);
4486 + dparams->start = PCIE_FW_PF_DEVLOG_ADDR16_G(pf_dparams) << 4;
4487 + 
4488 + nentries128 = PCIE_FW_PF_DEVLOG_NENTRIES128_G(pf_dparams);
4489 + nentries = (nentries128 + 1) * 128;
4490 + dparams->size = nentries * sizeof(struct fw_devlog_e);
4491 + 
4492 + return 0;
4493 + }
4494 + 
4495 + /* Otherwise, ask the firmware for its Device Log Parameters.
4496 + */
4497 + memset(&devlog_cmd, 0, sizeof(devlog_cmd));
4498 + devlog_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_DEVLOG_CMD) |
4499 + FW_CMD_REQUEST_F | FW_CMD_READ_F);
4500 + devlog_cmd.retval_len16 = htonl(FW_LEN16(devlog_cmd));
4501 + ret = t4_wr_mbox(adap, adap->mbox, &devlog_cmd, sizeof(devlog_cmd),
4502 + &devlog_cmd);
4503 + if (ret)
4504 + return ret;
4505 + 
4506 + devlog_meminfo = ntohl(devlog_cmd.memtype_devlog_memaddr16_devlog);
4507 + dparams->memtype = FW_DEVLOG_CMD_MEMTYPE_DEVLOG_G(devlog_meminfo);
4508 + dparams->start = FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(devlog_meminfo) << 4;
4509 + dparams->size = ntohl(devlog_cmd.memsize_devlog);
4510 + 
4511 + return 0;
4512 + }
4513 + 
4514 + /**
4462 4515 * t4_init_sge_params - initialize adap->params.sge
4463 4516 * @adapter: the adapter
4464 4517 *
+3
drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
··· 63 63 #define MC_BIST_STATUS_REG(reg_addr, idx) ((reg_addr) + (idx) * 4)
64 64 #define EDC_BIST_STATUS_REG(reg_addr, idx) ((reg_addr) + (idx) * 4)
65 65 
66 + #define PCIE_FW_REG(reg_addr, idx) ((reg_addr) + (idx) * 4)
67 + 
66 68 #define SGE_PF_KDOORBELL_A 0x0
67 69 
68 70 #define QID_S 15
··· 709 707 #define PFNUM_V(x) ((x) << PFNUM_S)
710 708 
711 709 #define PCIE_FW_A 0x30b8
710 + #define PCIE_FW_PF_A 0x30bc
712 711 
713 712 #define PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS_A 0x5908
714 713 
+37 -2
drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
··· 101 101 FW_RI_BIND_MW_WR = 0x18,
102 102 FW_RI_FR_NSMR_WR = 0x19,
103 103 FW_RI_INV_LSTAG_WR = 0x1a,
104 - FW_LASTC2E_WR = 0x40
104 + FW_LASTC2E_WR = 0x70
105 105 };
106 106 
107 107 struct fw_wr_hdr {
··· 993 993 FW_MEMTYPE_CF_EXTMEM = 0x2,
994 994 FW_MEMTYPE_CF_FLASH = 0x4,
995 995 FW_MEMTYPE_CF_INTERNAL = 0x5,
996 + FW_MEMTYPE_CF_EXTMEM1 = 0x6,
996 997 };
997 998 
998 999 struct fw_caps_config_cmd {
··· 1036 1035 FW_PARAMS_MNEM_PFVF = 2, /* function params */
1037 1036 FW_PARAMS_MNEM_REG = 3, /* limited register access */
1038 1037 FW_PARAMS_MNEM_DMAQ = 4, /* dma queue params */
1038 + FW_PARAMS_MNEM_CHNET = 5, /* chnet params */
1039 1039 FW_PARAMS_MNEM_LAST
1040 1040 };
··· 3104 3102 FW_DEVLOG_FACILITY_FCOE = 0x2E,
3105 3103 FW_DEVLOG_FACILITY_FOISCSI = 0x30,
3106 3104 FW_DEVLOG_FACILITY_FOFCOE = 0x32,
3107 - FW_DEVLOG_FACILITY_MAX = 0x32,
3105 + FW_DEVLOG_FACILITY_CHNET = 0x34,
3106 + FW_DEVLOG_FACILITY_MAX = 0x34,
3108 3107 };
3109 3108 
3110 3109 /* log message format */
··· 3141 3138 #define FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(x) \
3142 3139 (((x) >> FW_DEVLOG_CMD_MEMADDR16_DEVLOG_S) & \
3143 3140 FW_DEVLOG_CMD_MEMADDR16_DEVLOG_M)
3141 + 
3142 + /* P C I E F W P F 7 R E G I S T E R */
3143 + 
3144 + /* PF7 stores the Firmware Device Log parameters which allows Host Drivers to
3145 + * access the "devlog" without needing to contact firmware. The encoding is
3146 + * mostly the same as that returned by the DEVLOG command except for the size
3147 + * which is encoded as the number of entries in multiples-1 of 128 here rather
3148 + * than the memory size as is done in the DEVLOG command. Thus, 0 means 128
3149 + * and 15 means 2048. This of course in turn constrains the allowed values
3150 + * for the devlog size ...
3151 + */
3152 + #define PCIE_FW_PF_DEVLOG 7
3153 + 
3154 + #define PCIE_FW_PF_DEVLOG_NENTRIES128_S 28
3155 + #define PCIE_FW_PF_DEVLOG_NENTRIES128_M 0xf
3156 + #define PCIE_FW_PF_DEVLOG_NENTRIES128_V(x) \
3157 + ((x) << PCIE_FW_PF_DEVLOG_NENTRIES128_S)
3158 + #define PCIE_FW_PF_DEVLOG_NENTRIES128_G(x) \
3159 + (((x) >> PCIE_FW_PF_DEVLOG_NENTRIES128_S) & \
3160 + PCIE_FW_PF_DEVLOG_NENTRIES128_M)
3161 + 
3162 + #define PCIE_FW_PF_DEVLOG_ADDR16_S 4
3163 + #define PCIE_FW_PF_DEVLOG_ADDR16_M 0xffffff
3164 + #define PCIE_FW_PF_DEVLOG_ADDR16_V(x) ((x) << PCIE_FW_PF_DEVLOG_ADDR16_S)
3165 + #define PCIE_FW_PF_DEVLOG_ADDR16_G(x) \
3166 + (((x) >> PCIE_FW_PF_DEVLOG_ADDR16_S) & PCIE_FW_PF_DEVLOG_ADDR16_M)
3167 + 
3168 + #define PCIE_FW_PF_DEVLOG_MEMTYPE_S 0
3169 + #define PCIE_FW_PF_DEVLOG_MEMTYPE_M 0xf
3170 + #define PCIE_FW_PF_DEVLOG_MEMTYPE_V(x) ((x) << PCIE_FW_PF_DEVLOG_MEMTYPE_S)
3171 + #define PCIE_FW_PF_DEVLOG_MEMTYPE_G(x) \
3172 + (((x) >> PCIE_FW_PF_DEVLOG_MEMTYPE_S) & PCIE_FW_PF_DEVLOG_MEMTYPE_M)
3144 3173 
3145 3174 #endif /* _T4FW_INTERFACE_H_ */
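The new PF7 scratch register packs three fields into one 32-bit word: a memory type, a 16-byte-granular address, and a size expressed as (number of 128-entry blocks) - 1. A userspace sketch of the decode, using the shift/mask values from the header above (the sample register values in the assertions are made up for illustration):

```c
#include <stdint.h>

/* Field positions, matching the PCIE_FW_PF_DEVLOG_* macros above. */
#define DEVLOG_NENTRIES128_S 28
#define DEVLOG_NENTRIES128_M 0xfu
#define DEVLOG_ADDR16_S      4
#define DEVLOG_ADDR16_M      0xffffffu
#define DEVLOG_MEMTYPE_S     0
#define DEVLOG_MEMTYPE_M     0xfu

struct devlog_decoded {
    unsigned int memtype;
    uint32_t start;        /* byte address: ADDR16 field << 4 */
    unsigned int nentries; /* (NENTRIES128 field + 1) * 128 */
};

/* Decode the packed PF7 register into usable devlog parameters, the way
 * t4_init_devlog_params() does for the fast (register) path. */
static struct devlog_decoded decode_devlog(uint32_t reg)
{
    struct devlog_decoded p;

    p.memtype  = (reg >> DEVLOG_MEMTYPE_S) & DEVLOG_MEMTYPE_M;
    p.start    = ((reg >> DEVLOG_ADDR16_S) & DEVLOG_ADDR16_M) << 4;
    p.nentries = (((reg >> DEVLOG_NENTRIES128_S) & DEVLOG_NENTRIES128_M)
                  + 1) * 128;
    return p;
}
```

The `+ 1` in the size decode is what makes the 4-bit field span 128..2048 entries: 0 decodes to 128 and the maximum field value 15 decodes to 2048, exactly as the comment above states.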
+4 -4
drivers/net/ethernet/chelsio/cxgb4/t4fw_version.h
··· 36 36 #define __T4FW_VERSION_H__
37 37 
38 38 #define T4FW_VERSION_MAJOR 0x01
39 - #define T4FW_VERSION_MINOR 0x0C
40 - #define T4FW_VERSION_MICRO 0x19
39 + #define T4FW_VERSION_MINOR 0x0D
40 + #define T4FW_VERSION_MICRO 0x20
41 41 #define T4FW_VERSION_BUILD 0x00
42 42 
43 43 #define T5FW_VERSION_MAJOR 0x01
44 - #define T5FW_VERSION_MINOR 0x0C
45 - #define T5FW_VERSION_MICRO 0x19
44 + #define T5FW_VERSION_MINOR 0x0D
45 + #define T5FW_VERSION_MICRO 0x20
46 46 #define T5FW_VERSION_BUILD 0x00
47 47 
48 48 #endif
+8 -4
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
··· 1004 1004 ? (tq->pidx - 1)
1005 1005 : (tq->size - 1));
1006 1006 __be64 *src = (__be64 *)&tq->desc[index];
1007 - __be64 __iomem *dst = (__be64 *)(tq->bar2_addr +
1007 + __be64 __iomem *dst = (__be64 __iomem *)(tq->bar2_addr +
1008 1008 SGE_UDB_WCDOORBELL);
1009 1009 unsigned int count = EQ_UNIT / sizeof(__be64);
1010 1010 
··· 1018 1018 * DMA.
1019 1019 */
1020 1020 while (count) {
1021 - writeq(*src, dst);
1021 + /* the (__force u64) is because the compiler
1022 + * doesn't understand the endian swizzling
1023 + * going on
1024 + */
1025 + writeq((__force u64)*src, dst);
1022 1026 src++;
1023 1027 dst++;
1024 1028 count--;
··· 1256 1252 BUG_ON(DIV_ROUND_UP(ETHTXQ_MAX_HDR, TXD_PER_EQ_UNIT) > 1);
1257 1253 wr = (void *)&txq->q.desc[txq->q.pidx];
1258 1254 wr->equiq_to_len16 = cpu_to_be32(wr_mid);
1259 - wr->r3[0] = cpu_to_be64(0);
1260 - wr->r3[1] = cpu_to_be64(0);
1255 + wr->r3[0] = cpu_to_be32(0);
1256 + wr->r3[1] = cpu_to_be32(0);
1261 1257 skb_copy_from_linear_data(skb, (void *)wr->ethmacdst, fw_hdr_copy_len);
1262 1258 end = (u64 *)wr + flits;
1263 1259 
+3 -3
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c
··· 210 210 
211 211 if (rpl) {
212 212 /* request bit in high-order BE word */
213 - WARN_ON((be32_to_cpu(*(const u32 *)cmd)
213 + WARN_ON((be32_to_cpu(*(const __be32 *)cmd)
214 214 & FW_CMD_REQUEST_F) == 0);
215 215 get_mbox_rpl(adapter, rpl, size, mbox_data);
216 - WARN_ON((be32_to_cpu(*(u32 *)rpl)
216 + WARN_ON((be32_to_cpu(*(__be32 *)rpl)
217 217 & FW_CMD_REQUEST_F) != 0);
218 218 }
219 219 t4_write_reg(adapter, mbox_ctl,
··· 484 484 * o The BAR2 Queue ID.
485 485 * o The BAR2 Queue ID Offset into the BAR2 page.
486 486 */
487 - bar2_page_offset = ((qid >> qpp_shift) << page_shift);
487 + bar2_page_offset = ((u64)(qid >> qpp_shift) << page_shift);
488 488 bar2_qid = qid & qpp_mask;
489 489 bar2_qid_offset = bar2_qid * SGE_UDB_SIZE;
490 490 
+2
drivers/net/ethernet/emulex/benet/be.h
··· 354 354 u16 vlan_tag;
355 355 u32 tx_rate;
356 356 u32 plink_tracking;
357 + u32 privileges;
357 358 };
358 359 
359 360 enum vf_state {
··· 424 423 
425 424 u8 __iomem *csr; /* CSR BAR used only for BE2/3 */
426 425 u8 __iomem *db; /* Door Bell */
427 + u8 __iomem *pcicfg; /* On SH,BEx only. Shadow of PCI config space */
427 428 
428 429 struct mutex mbox_lock; /* For serializing mbox cmds to BE card */
429 430 struct be_dma_mem mbox_mem;
+7 -10
drivers/net/ethernet/emulex/benet/be_cmds.c
··· 1902 1902 {
1903 1903 int num_eqs, i = 0;
1904 1904 
1905 - if (lancer_chip(adapter) && num > 8) {
1906 - while (num) {
1907 - num_eqs = min(num, 8);
1908 - __be_cmd_modify_eqd(adapter, &set_eqd[i], num_eqs);
1909 - i += num_eqs;
1910 - num -= num_eqs;
1911 - }
1912 - } else {
1913 - __be_cmd_modify_eqd(adapter, set_eqd, num);
1905 + while (num) {
1906 + num_eqs = min(num, 8);
1907 + __be_cmd_modify_eqd(adapter, &set_eqd[i], num_eqs);
1908 + i += num_eqs;
1909 + num -= num_eqs;
1914 1910 }
1915 1911 
1916 1912 return 0;
··· 1914 1918 
1915 1919 /* Uses synchronous mcc */
1916 1920 int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
1917 - u32 num)
1921 + u32 num, u32 domain)
1918 1922 {
1919 1923 struct be_mcc_wrb *wrb;
1920 1924 struct be_cmd_req_vlan_config *req;
··· 1932 1936 be_wrb_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON,
1933 1937 OPCODE_COMMON_NTWK_VLAN_CONFIG, sizeof(*req),
1934 1938 wrb, NULL);
1939 + req->hdr.domain = domain;
1935 1940 
1936 1941 req->interface_id = if_id;
1937 1942 req->untagged = BE_IF_FLAGS_UNTAGGED & be_if_cap_flags(adapter) ? 1 : 0;
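The rewritten `be_cmd_modify_eqd()` above now always splits the descriptor list into chunks of at most 8, instead of only doing so on Lancer chips. The chunking arithmetic in isolation (the stand-in function just records chunk sizes; the real code issues one firmware command per chunk):

```c
#define EQD_CHUNK 8 /* a firmware command carries at most 8 EQ descriptors */

static int min_int(int a, int b)
{
    return a < b ? a : b;
}

/* Split `num` items into chunks of at most EQD_CHUNK, recording each chunk
 * size into `sizes`; returns how many chunks were needed. The i/num updates
 * mirror the `while (num)` loop in the patch. */
static int chunked_modify(int num, int *sizes, int max_chunks)
{
    int i = 0, chunks = 0;

    while (num) {
        int n = min_int(num, EQD_CHUNK);

        if (chunks < max_chunks)
            sizes[chunks] = n;
        chunks++;
        i += n; /* offset of the next chunk, as &set_eqd[i] in the patch */
        num -= n;
    }
    return chunks;
}
```

Running the old code path on non-Lancer chips with `num > 8` would have sent an oversized command; always chunking makes the size limit hold everywhere, which is presumably why the special case was dropped.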
+1 -1
drivers/net/ethernet/emulex/benet/be_cmds.h
··· 2256 2256 int be_cmd_get_fw_ver(struct be_adapter *adapter);
2257 2257 int be_cmd_modify_eqd(struct be_adapter *adapter, struct be_set_eqd *, int num);
2258 2258 int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
2259 - u32 num);
2259 + u32 num, u32 domain);
2260 2260 int be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 status);
2261 2261 int be_cmd_set_flow_control(struct be_adapter *adapter, u32 tx_fc, u32 rx_fc);
2262 2262 int be_cmd_get_flow_control(struct be_adapter *adapter, u32 *tx_fc, u32 *rx_fc);
+98 -33
drivers/net/ethernet/emulex/benet/be_main.c
··· 1171 1171 for_each_set_bit(i, adapter->vids, VLAN_N_VID)
1172 1172 vids[num++] = cpu_to_le16(i);
1173 1173 
1174 - status = be_cmd_vlan_config(adapter, adapter->if_handle, vids, num);
1174 + status = be_cmd_vlan_config(adapter, adapter->if_handle, vids, num, 0);
1175 1175 if (status) {
1176 1176 dev_err(dev, "Setting HW VLAN filtering failed\n");
1177 1177 /* Set to VLAN promisc mode as setting VLAN filter failed */
··· 1380 1380 return 0;
1381 1381 }
1382 1382 
1383 + static int be_set_vf_tvt(struct be_adapter *adapter, int vf, u16 vlan)
1384 + {
1385 + struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf];
1386 + u16 vids[BE_NUM_VLANS_SUPPORTED];
1387 + int vf_if_id = vf_cfg->if_handle;
1388 + int status;
1389 + 
1390 + /* Enable Transparent VLAN Tagging */
1391 + status = be_cmd_set_hsw_config(adapter, vlan, vf + 1, vf_if_id, 0);
1392 + if (status)
1393 + return status;
1394 + 
1395 + /* Clear pre-programmed VLAN filters on VF if any, if TVT is enabled */
1396 + vids[0] = 0;
1397 + status = be_cmd_vlan_config(adapter, vf_if_id, vids, 1, vf + 1);
1398 + if (!status)
1399 + dev_info(&adapter->pdev->dev,
1400 + "Cleared guest VLANs on VF%d", vf);
1401 + 
1402 + /* After TVT is enabled, disallow VFs to program VLAN filters */
1403 + if (vf_cfg->privileges & BE_PRIV_FILTMGMT) {
1404 + status = be_cmd_set_fn_privileges(adapter, vf_cfg->privileges &
1405 + ~BE_PRIV_FILTMGMT, vf + 1);
1406 + if (!status)
1407 + vf_cfg->privileges &= ~BE_PRIV_FILTMGMT;
1408 + }
1409 + return 0;
1410 + }
1411 + 
1412 + static int be_clear_vf_tvt(struct be_adapter *adapter, int vf)
1413 + {
1414 + struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf];
1415 + struct device *dev = &adapter->pdev->dev;
1416 + int status;
1417 + 
1418 + /* Reset Transparent VLAN Tagging.
1419 + */
1420 + status = be_cmd_set_hsw_config(adapter, BE_RESET_VLAN_TAG_ID, vf + 1,
1421 + vf_cfg->if_handle, 0);
1422 + if (status)
1423 + return status;
1424 + 
1425 + /* Allow VFs to program VLAN filtering */
1426 + if (!(vf_cfg->privileges & BE_PRIV_FILTMGMT)) {
1427 + status = be_cmd_set_fn_privileges(adapter, vf_cfg->privileges |
1428 + BE_PRIV_FILTMGMT, vf + 1);
1429 + if (!status) {
1430 + vf_cfg->privileges |= BE_PRIV_FILTMGMT;
1431 + dev_info(dev, "VF%d: FILTMGMT priv enabled", vf);
1432 + }
1433 + }
1434 + 
1435 + dev_info(dev,
1436 + "Disable/re-enable i/f in VM to clear Transparent VLAN tag");
1437 + return 0;
1438 + }
1439 + 
1383 1440 static int be_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, u8 qos)
1384 1441 {
1385 1442 struct be_adapter *adapter = netdev_priv(netdev);
1386 1443 struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf];
1387 - int status = 0;
1444 + int status;
1388 1445 
1389 1446 if (!sriov_enabled(adapter))
1390 1447 return -EPERM;
··· 1450 1394 
1451 1395 if (vlan || qos) {
1452 1396 vlan |= qos << VLAN_PRIO_SHIFT;
1453 - if (vf_cfg->vlan_tag != vlan)
1454 - status = be_cmd_set_hsw_config(adapter, vlan, vf + 1,
1455 - vf_cfg->if_handle, 0);
1397 + status = be_set_vf_tvt(adapter, vf, vlan);
1456 1398 } else {
1457 - /* Reset Transparent Vlan Tagging.
1458 - */
1459 - status = be_cmd_set_hsw_config(adapter, BE_RESET_VLAN_TAG_ID,
1460 - vf + 1, vf_cfg->if_handle, 0);
1399 + status = be_clear_vf_tvt(adapter, vf);
1461 1400 }
1462 1401 
1463 1402 if (status) {
1464 1403 dev_err(&adapter->pdev->dev,
1465 - "VLAN %d config on VF %d failed : %#x\n", vlan,
1466 - vf, status);
1404 + "VLAN %d config on VF %d failed : %#x\n", vlan, vf,
1405 + status);
1467 1406 return be_cmd_status(status);
1468 1407 }
1469 1408 
1470 1409 vf_cfg->vlan_tag = vlan;
1471 - 
1472 1410 return 0;
1473 1411 }
··· 2823 2772 }
2824 2773 }
2825 2774 } else {
2826 - pci_read_config_dword(adapter->pdev,
2827 - PCICFG_UE_STATUS_LOW, &ue_lo);
2828 - pci_read_config_dword(adapter->pdev,
2829 - PCICFG_UE_STATUS_HIGH, &ue_hi);
2830 - pci_read_config_dword(adapter->pdev,
2831 - PCICFG_UE_STATUS_LOW_MASK, &ue_lo_mask);
2832 - pci_read_config_dword(adapter->pdev,
2833 - PCICFG_UE_STATUS_HI_MASK, &ue_hi_mask);
2775 + ue_lo = ioread32(adapter->pcicfg + PCICFG_UE_STATUS_LOW);
2776 + ue_hi = ioread32(adapter->pcicfg + PCICFG_UE_STATUS_HIGH);
2777 + ue_lo_mask = ioread32(adapter->pcicfg +
2778 + PCICFG_UE_STATUS_LOW_MASK);
2779 + ue_hi_mask = ioread32(adapter->pcicfg +
2780 + PCICFG_UE_STATUS_HI_MASK);
2834 2781 
2835 2782 ue_lo = (ue_lo & ~ue_lo_mask);
2836 2783 ue_hi = (ue_hi & ~ue_hi_mask);
··· 3388 3339 u32 cap_flags, u32 vf)
3389 3340 {
3390 3341 u32 en_flags;
3391 - int status;
3392 3342 
3393 3343 en_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST |
3394 3344 BE_IF_FLAGS_MULTICAST | BE_IF_FLAGS_PASS_L3L4_ERRORS |
··· 3395 3347 
3396 3348 en_flags &= cap_flags;
3397 3349 
3398 - status = be_cmd_if_create(adapter, cap_flags, en_flags,
3399 - if_handle, vf);
3400 - 
3401 - return status;
3350 + return be_cmd_if_create(adapter, cap_flags, en_flags, if_handle, vf);
3402 3351 }
3403 3352 
3404 3353 static int be_vfs_if_create(struct be_adapter *adapter)
··· 3413 3368 if (!BE3_chip(adapter)) {
3414 3369 status = be_cmd_get_profile_config(adapter, &res,
3415 3370 vf + 1);
3416 - if (!status)
3371 + if (!status) {
3417 3372 cap_flags = res.if_cap_flags;
3373 + /* Prevent VFs from enabling VLAN promiscuous
3374 + * mode
3375 + */
3376 + cap_flags &= ~BE_IF_FLAGS_VLAN_PROMISCUOUS;
3377 + }
3418 3378 }
3419 3379 
3420 3380 status = be_if_create(adapter, &vf_cfg->if_handle,
··· 3453 3403 struct device *dev = &adapter->pdev->dev;
3454 3404 struct be_vf_cfg *vf_cfg;
3455 3405 int status, old_vfs, vf;
3456 - u32 privileges;
3457 3406 
3458 3407 old_vfs = pci_num_vf(adapter->pdev);
3459 3408 
··· 3482 3433 
3483 3434 for_all_vfs(adapter, vf_cfg, vf) {
3484 3435 /* Allow VFs to program MAC/VLAN filters */
3485 - status = be_cmd_get_fn_privileges(adapter, &privileges, vf + 1);
3486 - if (!status && !(privileges & BE_PRIV_FILTMGMT)) {
3436 + status = be_cmd_get_fn_privileges(adapter, &vf_cfg->privileges,
3437 + vf + 1);
3438 + if (!status && !(vf_cfg->privileges & BE_PRIV_FILTMGMT)) {
3487 3439 status = be_cmd_set_fn_privileges(adapter,
3488 - privileges |
3440 + vf_cfg->privileges |
3489 3441 BE_PRIV_FILTMGMT,
3490 3442 vf + 1);
3491 - if (!status)
3443 + if (!status) {
3444 + vf_cfg->privileges |= BE_PRIV_FILTMGMT;
3492 3445 dev_info(dev, "VF%d has FILTMGMT privilege\n",
3493 3446 vf);
3447 + }
3494 3448 }
3495 3449 
3496 3450 /* Allow full available bandwidth */
··· 4872 4820 
4873 4821 static int be_map_pci_bars(struct be_adapter *adapter)
4874 4822 {
4823 + struct pci_dev *pdev = adapter->pdev;
4875 4824 u8 __iomem *addr;
4876 4825 
4877 4826 if (BEx_chip(adapter) && be_physfn(adapter)) {
4878 - adapter->csr = pci_iomap(adapter->pdev, 2, 0);
4827 + adapter->csr = pci_iomap(pdev, 2, 0);
4879 4828 if (!adapter->csr)
4880 4829 return -ENOMEM;
4881 4830 }
4882 4831 
4883 - addr = pci_iomap(adapter->pdev, db_bar(adapter), 0);
4832 + addr = pci_iomap(pdev, db_bar(adapter), 0);
4884 4833 if (!addr)
4885 4834 goto pci_map_err;
4886 4835 adapter->db = addr;
4836 + 
4837 + if (skyhawk_chip(adapter) || BEx_chip(adapter)) {
4838 + if (be_physfn(adapter)) {
4839 + /* PCICFG is the 2nd BAR in BE2 */
4840 + addr = pci_iomap(pdev, BE2_chip(adapter) ? 1 : 0, 0);
4841 + if (!addr)
4842 + goto pci_map_err;
4843 + adapter->pcicfg = addr;
4844 + } else {
4845 + adapter->pcicfg = adapter->db + SRIOV_VF_PCICFG_OFFSET;
4846 + }
4847 + }
4887 4848 
4888 4849 be_roce_map_pci_bars(adapter);
4889 4850 return 0;
4890 4851 
4891 4852 pci_map_err:
4892 - dev_err(&adapter->pdev->dev, "Error in mapping PCI BARs\n");
4853 + dev_err(&pdev->dev, "Error in mapping PCI BARs\n");
4893 4854 be_unmap_pci_bars(adapter);
4894 4855 return -ENOMEM;
4895 4856 }
+27 -3
drivers/net/ethernet/freescale/fec_main.c
··· 1954 1954 struct fec_enet_private *fep = netdev_priv(ndev);
1955 1955 struct device_node *node;
1956 1956 int err = -ENXIO, i;
1957 + u32 mii_speed, holdtime;
1957 1958 
1958 1959 /*
1959 1960 * The i.MX28 dual fec interfaces are not equal.
··· 1992 1991 * Reference Manual has an error on this, and gets fixed on i.MX6Q
1993 1992 * document.
1994 1993 */
1995 - fep->phy_speed = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 5000000);
1994 + mii_speed = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 5000000);
1996 1995 if (fep->quirks & FEC_QUIRK_ENET_MAC)
1997 - fep->phy_speed--;
1998 - fep->phy_speed <<= 1;
1996 + mii_speed--;
1997 + if (mii_speed > 63) {
1998 + dev_err(&pdev->dev,
1999 + "fec clock (%lu) too fast to get right mii speed\n",
2000 + clk_get_rate(fep->clk_ipg));
2001 + err = -EINVAL;
2002 + goto err_out;
2003 + }
2004 + 
2005 + /*
2006 + * The i.MX28 and i.MX6 types have another field in the MSCR (aka
2007 + * MII_SPEED) register that defines the MDIO output hold time. Earlier
2008 + * versions are RAZ there, so just ignore the difference and write the
2009 + * register always.
2010 + * The minimal hold time according to IEEE 802.3 (clause 22) is 10 ns.
2011 + * HOLDTIME + 1 is the number of clk cycles the fec is holding the
2012 + * output.
2013 + * The HOLDTIME bitfield takes values between 0 and 7 (inclusive).
2014 + * Given that ceil(clkrate / 5000000) <= 64, the calculation for
2015 + * holdtime cannot result in a value greater than 3.
2016 + */
2017 + holdtime = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 100000000) - 1;
2018 + 
2019 + fep->phy_speed = mii_speed << 1 | holdtime << 8;
2020 + 
1999 2021 writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED);
2000 2022 
2001 2023 fep->mii_bus = mdiobus_alloc();
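The patch above derives both MSCR fields from the IPG clock rate: MII_SPEED from a ceiling division by 5 MHz (bounded to the 6-bit field), and HOLDTIME from a ceiling division by 100 MHz minus one. A userspace model of that arithmetic, with the field layout the patch uses (speed in bits 6:1, hold time in bits 10:8); this is a sketch, not the driver function:

```c
#include <stdint.h>

/* Ceiling division, like the kernel's DIV_ROUND_UP. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Compute the MSCR (MII_SPEED) register value from the IPG clock rate.
 * `quirk` mirrors FEC_QUIRK_ENET_MAC, which subtracts one from the speed
 * divider. Returns 0 when the clock is too fast for the 6-bit MII_SPEED
 * field - the driver bails out with -EINVAL at that point. */
static uint32_t fec_mscr(unsigned long clk_hz, int quirk)
{
    unsigned long mii_speed = DIV_ROUND_UP(clk_hz, 5000000UL);
    unsigned long holdtime = DIV_ROUND_UP(clk_hz, 100000000UL) - 1;

    if (quirk)
        mii_speed--;
    if (mii_speed > 63)
        return 0;
    return (uint32_t)(mii_speed << 1 | holdtime << 8);
}
```

The bound in the patch's comment checks out here: since `mii_speed <= 64` implies `clk_hz <= 320 MHz`, `holdtime = ceil(clk_hz / 100 MHz) - 1` can reach at most 3, comfortably inside the 0..7 HOLDTIME field.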
+3
drivers/net/ethernet/freescale/ucc_geth.c
··· 3893 3893 ugeth->phy_interface = phy_interface;
3894 3894 ugeth->max_speed = max_speed;
3895 3895 
3896 + /* Carrier starts down, phylib will bring it up */
3897 + netif_carrier_off(dev);
3898 + 
3896 3899 err = register_netdev(dev);
3897 3900 if (err) {
3898 3901 if (netif_msg_probe(ugeth))
+1 -6
drivers/net/ethernet/marvell/mvneta.c
··· 2658 2658 static int mvneta_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
2659 2659 {
2660 2660 struct mvneta_port *pp = netdev_priv(dev);
2661 - int ret;
2662 2661 
2663 2662 if (!pp->phy_dev)
2664 2663 return -ENOTSUPP;
2665 2664 
2666 - ret = phy_mii_ioctl(pp->phy_dev, ifr, cmd);
2667 - if (!ret)
2668 - mvneta_adjust_link(dev);
2669 - 
2670 - return ret;
2665 + return phy_mii_ioctl(pp->phy_dev, ifr, cmd);
2671 2666 }
2672 2667 
2673 2668 /* Ethtool methods */
+3 -2
drivers/net/ethernet/mellanox/mlx4/cmd.c
··· 724 724 * on the host, we deprecate the error message for this 725 725 * specific command/input_mod/opcode_mod/fw-status to be debug. 726 726 */ 727 - if (op == MLX4_CMD_SET_PORT && in_modifier == 1 && 727 + if (op == MLX4_CMD_SET_PORT && 728 + (in_modifier == 1 || in_modifier == 2) && 728 729 op_modifier == 0 && context->fw_status == CMD_STAT_BAD_SIZE) 729 730 mlx4_dbg(dev, "command 0x%x failed: fw status = 0x%x\n", 730 731 op, context->fw_status); ··· 1994 1993 goto reset_slave; 1995 1994 slave_state[slave].vhcr_dma = ((u64) param) << 48; 1996 1995 priv->mfunc.master.slave_state[slave].cookie = 0; 1997 - mutex_init(&priv->mfunc.master.gen_eqe_mutex[slave]); 1998 1996 break; 1999 1997 case MLX4_COMM_CMD_VHCR1: 2000 1998 if (slave_state[slave].last_cmd != MLX4_COMM_CMD_VHCR0) ··· 2225 2225 for (i = 0; i < dev->num_slaves; ++i) { 2226 2226 s_state = &priv->mfunc.master.slave_state[i]; 2227 2227 s_state->last_cmd = MLX4_COMM_CMD_RESET; 2228 + mutex_init(&priv->mfunc.master.gen_eqe_mutex[i]); 2228 2229 for (j = 0; j < MLX4_EVENT_TYPES_NUM; ++j) 2229 2230 s_state->event_eq[j].eqn = -1; 2230 2231 __raw_writel((__force u32) 0,
+8 -7
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 2805 2805 netif_carrier_off(dev); 2806 2806 mlx4_en_set_default_moderation(priv); 2807 2807 2808 - err = register_netdev(dev); 2809 - if (err) { 2810 - en_err(priv, "Netdev registration failed for port %d\n", port); 2811 - goto out; 2812 - } 2813 - priv->registered = 1; 2814 - 2815 2808 en_warn(priv, "Using %d TX rings\n", prof->tx_ring_num); 2816 2809 en_warn(priv, "Using %d RX rings\n", prof->rx_ring_num); 2817 2810 ··· 2845 2852 SERVICE_TASK_DELAY); 2846 2853 2847 2854 mlx4_set_stats_bitmap(mdev->dev, &priv->stats_bitmap); 2855 + 2856 + err = register_netdev(dev); 2857 + if (err) { 2858 + en_err(priv, "Netdev registration failed for port %d\n", port); 2859 + goto out; 2860 + } 2861 + 2862 + priv->registered = 1; 2848 2863 2849 2864 return 0; 2850 2865
+7 -11
drivers/net/ethernet/mellanox/mlx4/eq.c
··· 153 153 154 154 /* All active slaves need to receive the event */ 155 155 if (slave == ALL_SLAVES) { 156 - for (i = 0; i < dev->num_slaves; i++) { 157 - if (i != dev->caps.function && 158 - master->slave_state[i].active) 159 - if (mlx4_GEN_EQE(dev, i, eqe)) 160 - mlx4_warn(dev, "Failed to generate event for slave %d\n", 161 - i); 156 + for (i = 0; i <= dev->persist->num_vfs; i++) { 157 + if (mlx4_GEN_EQE(dev, i, eqe)) 158 + mlx4_warn(dev, "Failed to generate event for slave %d\n", 159 + i); 162 160 } 163 161 } else { 164 162 if (mlx4_GEN_EQE(dev, slave, eqe)) ··· 201 203 struct mlx4_eqe *eqe) 202 204 { 203 205 struct mlx4_priv *priv = mlx4_priv(dev); 204 - struct mlx4_slave_state *s_slave = 205 - &priv->mfunc.master.slave_state[slave]; 206 206 207 - if (!s_slave->active) { 208 - /*mlx4_warn(dev, "Trying to pass event to inactive slave\n");*/ 207 + if (slave < 0 || slave > dev->persist->num_vfs || 208 + slave == dev->caps.function || 209 + !priv->mfunc.master.slave_state[slave].active) 209 210 return; 210 - } 211 211 212 212 slave_event(dev, slave, eqe); 213 213 }
+6
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 3095 3095 if (!priv->mfunc.master.slave_state) 3096 3096 return -EINVAL; 3097 3097 3098 + /* check for slave valid, slave not PF, and slave active */ 3099 + if (slave < 0 || slave > dev->persist->num_vfs || 3100 + slave == dev->caps.function || 3101 + !priv->mfunc.master.slave_state[slave].active) 3102 + return 0; 3103 + 3098 3104 event_eq = &priv->mfunc.master.slave_state[slave].event_eq[eqe->type]; 3099 3105 3100 3106 /* Create the event only if the slave is registered */
+7 -1
drivers/net/ethernet/rocker/rocker.c
··· 4468 4468 struct net_device *master = netdev_master_upper_dev_get(dev); 4469 4469 int err = 0; 4470 4470 4471 + /* There are currently three cases handled here: 4472 + * 1. Joining a bridge 4473 + * 2. Leaving a previously joined bridge 4474 + * 3. Other, e.g. being added to or removed from a bond or openvswitch, 4475 + * in which case nothing is done 4476 + */ 4471 4477 if (master && master->rtnl_link_ops && 4472 4478 !strcmp(master->rtnl_link_ops->kind, "bridge")) 4473 4479 err = rocker_port_bridge_join(rocker_port, master); 4474 - else 4480 + else if (rocker_port_is_bridged(rocker_port)) 4475 4481 err = rocker_port_bridge_leave(rocker_port); 4476 4482 4477 4483 return err;
+3 -1
drivers/net/ipvlan/ipvlan.h
··· 114 114 rx_handler_result_t ipvlan_handle_frame(struct sk_buff **pskb); 115 115 int ipvlan_queue_xmit(struct sk_buff *skb, struct net_device *dev); 116 116 void ipvlan_ht_addr_add(struct ipvl_dev *ipvlan, struct ipvl_addr *addr); 117 - bool ipvlan_addr_busy(struct ipvl_dev *ipvlan, void *iaddr, bool is_v6); 117 + struct ipvl_addr *ipvlan_find_addr(const struct ipvl_dev *ipvlan, 118 + const void *iaddr, bool is_v6); 119 + bool ipvlan_addr_busy(struct ipvl_port *port, void *iaddr, bool is_v6); 118 120 struct ipvl_addr *ipvlan_ht_addr_lookup(const struct ipvl_port *port, 119 121 const void *iaddr, bool is_v6); 120 122 void ipvlan_ht_addr_del(struct ipvl_addr *addr, bool sync);
+21 -9
drivers/net/ipvlan/ipvlan_core.c
··· 81 81 hash = (addr->atype == IPVL_IPV6) ? 82 82 ipvlan_get_v6_hash(&addr->ip6addr) : 83 83 ipvlan_get_v4_hash(&addr->ip4addr); 84 - hlist_add_head_rcu(&addr->hlnode, &port->hlhead[hash]); 84 + if (hlist_unhashed(&addr->hlnode)) 85 + hlist_add_head_rcu(&addr->hlnode, &port->hlhead[hash]); 85 86 } 86 87 87 88 void ipvlan_ht_addr_del(struct ipvl_addr *addr, bool sync) 88 89 { 89 - hlist_del_rcu(&addr->hlnode); 90 + hlist_del_init_rcu(&addr->hlnode); 90 91 if (sync) 91 92 synchronize_rcu(); 92 93 } 93 94 94 - bool ipvlan_addr_busy(struct ipvl_dev *ipvlan, void *iaddr, bool is_v6) 95 + struct ipvl_addr *ipvlan_find_addr(const struct ipvl_dev *ipvlan, 96 + const void *iaddr, bool is_v6) 95 97 { 96 - struct ipvl_port *port = ipvlan->port; 97 98 struct ipvl_addr *addr; 98 99 99 100 list_for_each_entry(addr, &ipvlan->addrs, anode) { ··· 102 101 ipv6_addr_equal(&addr->ip6addr, iaddr)) || 103 102 (!is_v6 && addr->atype == IPVL_IPV4 && 104 103 addr->ip4addr.s_addr == ((struct in_addr *)iaddr)->s_addr)) 104 + return addr; 105 + } 106 + return NULL; 107 + } 108 + 109 + bool ipvlan_addr_busy(struct ipvl_port *port, void *iaddr, bool is_v6) 110 + { 111 + struct ipvl_dev *ipvlan; 112 + 113 + ASSERT_RTNL(); 114 + 115 + list_for_each_entry(ipvlan, &port->ipvlans, pnode) { 116 + if (ipvlan_find_addr(ipvlan, iaddr, is_v6)) 105 117 return true; 106 118 } 107 - 108 - if (ipvlan_ht_addr_lookup(port, iaddr, is_v6)) 109 - return true; 110 - 111 119 return false; 112 120 } 113 121 ··· 202 192 if (skb->protocol == htons(ETH_P_PAUSE)) 203 193 return; 204 194 205 - list_for_each_entry(ipvlan, &port->ipvlans, pnode) { 195 + rcu_read_lock(); 196 + list_for_each_entry_rcu(ipvlan, &port->ipvlans, pnode) { 206 197 if (local && (ipvlan == in_dev)) 207 198 continue; 208 199 ··· 230 219 mcast_acct: 231 220 ipvlan_count_rx(ipvlan, len, ret == NET_RX_SUCCESS, true); 232 221 } 222 + rcu_read_unlock(); 233 223 234 224 /* Locally generated? ...Forward a copy to the main-device as 235 225 * well. 
On the RX side we'll ignore it (wont give it to any
+19 -11
drivers/net/ipvlan/ipvlan_main.c
··· 505 505 if (ipvlan->ipv6cnt > 0 || ipvlan->ipv4cnt > 0) { 506 506 list_for_each_entry_safe(addr, next, &ipvlan->addrs, anode) { 507 507 ipvlan_ht_addr_del(addr, !dev->dismantle); 508 - list_del_rcu(&addr->anode); 508 + list_del(&addr->anode); 509 509 } 510 510 } 511 511 list_del_rcu(&ipvlan->pnode); ··· 607 607 { 608 608 struct ipvl_addr *addr; 609 609 610 - if (ipvlan_addr_busy(ipvlan, ip6_addr, true)) { 610 + if (ipvlan_addr_busy(ipvlan->port, ip6_addr, true)) { 611 611 netif_err(ipvlan, ifup, ipvlan->dev, 612 612 "Failed to add IPv6=%pI6c addr for %s intf\n", 613 613 ip6_addr, ipvlan->dev->name); ··· 620 620 addr->master = ipvlan; 621 621 memcpy(&addr->ip6addr, ip6_addr, sizeof(struct in6_addr)); 622 622 addr->atype = IPVL_IPV6; 623 - list_add_tail_rcu(&addr->anode, &ipvlan->addrs); 623 + list_add_tail(&addr->anode, &ipvlan->addrs); 624 624 ipvlan->ipv6cnt++; 625 - ipvlan_ht_addr_add(ipvlan, addr); 625 + /* If the interface is not up, the address will be added to the hash 626 + * list by ipvlan_open. 
627 + */ 628 + if (netif_running(ipvlan->dev)) 629 + ipvlan_ht_addr_add(ipvlan, addr); 626 630 627 631 return 0; 628 632 } ··· 635 631 { 636 632 struct ipvl_addr *addr; 637 633 638 - addr = ipvlan_ht_addr_lookup(ipvlan->port, ip6_addr, true); 634 + addr = ipvlan_find_addr(ipvlan, ip6_addr, true); 639 635 if (!addr) 640 636 return; 641 637 642 638 ipvlan_ht_addr_del(addr, true); 643 - list_del_rcu(&addr->anode); 639 + list_del(&addr->anode); 644 640 ipvlan->ipv6cnt--; 645 641 WARN_ON(ipvlan->ipv6cnt < 0); 646 642 kfree_rcu(addr, rcu); ··· 679 675 { 680 676 struct ipvl_addr *addr; 681 677 682 - if (ipvlan_addr_busy(ipvlan, ip4_addr, false)) { 678 + if (ipvlan_addr_busy(ipvlan->port, ip4_addr, false)) { 683 679 netif_err(ipvlan, ifup, ipvlan->dev, 684 680 "Failed to add IPv4=%pI4 on %s intf.\n", 685 681 ip4_addr, ipvlan->dev->name); ··· 692 688 addr->master = ipvlan; 693 689 memcpy(&addr->ip4addr, ip4_addr, sizeof(struct in_addr)); 694 690 addr->atype = IPVL_IPV4; 695 - list_add_tail_rcu(&addr->anode, &ipvlan->addrs); 691 + list_add_tail(&addr->anode, &ipvlan->addrs); 696 692 ipvlan->ipv4cnt++; 697 - ipvlan_ht_addr_add(ipvlan, addr); 693 + /* If the interface is not up, the address will be added to the hash 694 + * list by ipvlan_open. 695 + */ 696 + if (netif_running(ipvlan->dev)) 697 + ipvlan_ht_addr_add(ipvlan, addr); 698 698 ipvlan_set_broadcast_mac_filter(ipvlan, true); 699 699 700 700 return 0; ··· 708 700 { 709 701 struct ipvl_addr *addr; 710 702 711 - addr = ipvlan_ht_addr_lookup(ipvlan->port, ip4_addr, false); 703 + addr = ipvlan_find_addr(ipvlan, ip4_addr, false); 712 704 if (!addr) 713 705 return; 714 706 715 707 ipvlan_ht_addr_del(addr, true); 716 - list_del_rcu(&addr->anode); 708 + list_del(&addr->anode); 717 709 ipvlan->ipv4cnt--; 718 710 WARN_ON(ipvlan->ipv4cnt < 0); 719 711 if (!ipvlan->ipv4cnt)
+2
drivers/net/usb/asix_common.c
··· 188 188 memcpy(skb_tail_pointer(skb), &padbytes, sizeof(padbytes)); 189 189 skb_put(skb, sizeof(padbytes)); 190 190 } 191 + 192 + usbnet_set_skb_tx_stats(skb, 1, 0); 191 193 return skb; 192 194 } 193 195
+8
drivers/net/usb/cdc_ether.c
··· 522 522 #define DELL_VENDOR_ID 0x413C 523 523 #define REALTEK_VENDOR_ID 0x0bda 524 524 #define SAMSUNG_VENDOR_ID 0x04e8 525 + #define LENOVO_VENDOR_ID 0x17ef 525 526 526 527 static const struct usb_device_id products[] = { 527 528 /* BLACKLIST !! ··· 699 698 /* Samsung USB Ethernet Adapters */ 700 699 { 701 700 USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, 0xa101, USB_CLASS_COMM, 701 + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 702 + .driver_info = 0, 703 + }, 704 + 705 + /* Lenovo Thinkpad USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */ 706 + { 707 + USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x7205, USB_CLASS_COMM, 702 708 USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 703 709 .driver_info = 0, 704 710 },
+3 -3
drivers/net/usb/cdc_ncm.c
··· 1172 1172 1173 1173 /* return skb */ 1174 1174 ctx->tx_curr_skb = NULL; 1175 - dev->net->stats.tx_packets += ctx->tx_curr_frame_num; 1176 1175 1177 1176 /* keep private stats: framing overhead and number of NTBs */ 1178 1177 ctx->tx_overhead += skb_out->len - ctx->tx_curr_frame_payload; 1179 1178 ctx->tx_ntbs++; 1180 1179 1181 - /* usbnet has already counted all the framing overhead. 1180 + /* usbnet will count all the framing overhead by default. 1182 1181 * Adjust the stats so that the tx_bytes counter show real 1183 1182 * payload data instead. 1184 1183 */ 1185 - dev->net->stats.tx_bytes -= skb_out->len - ctx->tx_curr_frame_payload; 1184 + usbnet_set_skb_tx_stats(skb_out, n, 1185 + ctx->tx_curr_frame_payload - skb_out->len); 1186 1186 1187 1187 return skb_out; 1188 1188
+24 -6
drivers/net/usb/cx82310_eth.c
··· 46 46 }; 47 47 48 48 #define CMD_PACKET_SIZE 64 49 - /* first command after power on can take around 8 seconds */ 50 - #define CMD_TIMEOUT 15000 49 + #define CMD_TIMEOUT 100 51 50 #define CMD_REPLY_RETRY 5 52 51 53 52 #define CX82310_MTU 1514 ··· 77 78 ret = usb_bulk_msg(udev, usb_sndbulkpipe(udev, CMD_EP), buf, 78 79 CMD_PACKET_SIZE, &actual_len, CMD_TIMEOUT); 79 80 if (ret < 0) { 80 - dev_err(&dev->udev->dev, "send command %#x: error %d\n", 81 - cmd, ret); 81 + if (cmd != CMD_GET_LINK_STATUS) 82 + dev_err(&dev->udev->dev, "send command %#x: error %d\n", 83 + cmd, ret); 82 84 goto end; 83 85 } 84 86 ··· 90 90 buf, CMD_PACKET_SIZE, &actual_len, 91 91 CMD_TIMEOUT); 92 92 if (ret < 0) { 93 - dev_err(&dev->udev->dev, 94 - "reply receive error %d\n", ret); 93 + if (cmd != CMD_GET_LINK_STATUS) 94 + dev_err(&dev->udev->dev, 95 + "reply receive error %d\n", 96 + ret); 95 97 goto end; 96 98 } 97 99 if (actual_len > 0) ··· 136 134 int ret; 137 135 char buf[15]; 138 136 struct usb_device *udev = dev->udev; 137 + u8 link[3]; 138 + int timeout = 50; 139 139 140 140 /* avoid ADSL modems - continue only if iProduct is "USB NET CARD" */ 141 141 if (usb_string(udev, udev->descriptor.iProduct, buf, sizeof(buf)) > 0 ··· 163 159 dev->partial_data = (unsigned long) kmalloc(dev->hard_mtu, GFP_KERNEL); 164 160 if (!dev->partial_data) 165 161 return -ENOMEM; 162 + 163 + /* wait for firmware to become ready (indicated by the link being up) */ 164 + while (--timeout) { 165 + ret = cx82310_cmd(dev, CMD_GET_LINK_STATUS, true, NULL, 0, 166 + link, sizeof(link)); 167 + /* the command can time out during boot - it's not an error */ 168 + if (!ret && link[0] == 1 && link[2] == 1) 169 + break; 170 + msleep(500); 171 + }; 172 + if (!timeout) { 173 + dev_err(&udev->dev, "firmware not ready in time\n"); 174 + return -ETIMEDOUT; 175 + } 166 176 167 177 /* enable ethernet mode (?) */ 168 178 ret = cx82310_cmd(dev, CMD_ETHERNET_MODE, true, "\x01", 1, NULL, 0);
+2
drivers/net/usb/r8152.c
··· 492 492 /* Define these values to match your device */ 493 493 #define VENDOR_ID_REALTEK 0x0bda 494 494 #define VENDOR_ID_SAMSUNG 0x04e8 495 + #define VENDOR_ID_LENOVO 0x17ef 495 496 496 497 #define MCU_TYPE_PLA 0x0100 497 498 #define MCU_TYPE_USB 0x0000 ··· 4038 4037 {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8152)}, 4039 4038 {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8153)}, 4040 4039 {REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101)}, 4040 + {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)}, 4041 4041 {} 4042 4042 }; 4043 4043
+1
drivers/net/usb/sr9800.c
··· 144 144 skb_put(skb, sizeof(padbytes)); 145 145 } 146 146 147 + usbnet_set_skb_tx_stats(skb, 1, 0); 147 148 return skb; 148 149 } 149 150
+14 -3
drivers/net/usb/usbnet.c
··· 1188 1188 struct usbnet *dev = entry->dev; 1189 1189 1190 1190 if (urb->status == 0) { 1191 - if (!(dev->driver_info->flags & FLAG_MULTI_PACKET)) 1192 - dev->net->stats.tx_packets++; 1191 + dev->net->stats.tx_packets += entry->packets; 1193 1192 dev->net->stats.tx_bytes += entry->length; 1194 1193 } else { 1195 1194 dev->net->stats.tx_errors++; ··· 1346 1347 } else 1347 1348 urb->transfer_flags |= URB_ZERO_PACKET; 1348 1349 } 1349 - entry->length = urb->transfer_buffer_length = length; 1350 + urb->transfer_buffer_length = length; 1351 + 1352 + if (info->flags & FLAG_MULTI_PACKET) { 1353 + /* Driver has set number of packets and a length delta. 1354 + * Calculate the complete length and ensure that it's 1355 + * positive. 1356 + */ 1357 + entry->length += length; 1358 + if (WARN_ON_ONCE(entry->length <= 0)) 1359 + entry->length = length; 1360 + } else { 1361 + usbnet_set_skb_tx_stats(skb, 1, length); 1362 + } 1350 1363 1351 1364 spin_lock_irqsave(&dev->txq.lock, flags); 1352 1365 retval = usb_autopm_get_interface_async(dev->intf);
+12 -8
drivers/net/wireless/ath/ath9k/beacon.c
··· 219 219 struct ath_common *common = ath9k_hw_common(sc->sc_ah); 220 220 struct ath_vif *avp = (void *)vif->drv_priv; 221 221 struct ath_buf *bf = avp->av_bcbuf; 222 + struct ath_beacon_config *cur_conf = &sc->cur_chan->beacon; 222 223 223 224 ath_dbg(common, CONFIG, "Removing interface at beacon slot: %d\n", 224 225 avp->av_bslot); 225 226 226 227 tasklet_disable(&sc->bcon_tasklet); 228 + 229 + cur_conf->enable_beacon &= ~BIT(avp->av_bslot); 227 230 228 231 if (bf && bf->bf_mpdu) { 229 232 struct sk_buff *skb = bf->bf_mpdu; ··· 524 521 } 525 522 526 523 if (sc->sc_ah->opmode == NL80211_IFTYPE_AP) { 527 - if ((vif->type != NL80211_IFTYPE_AP) || 528 - (sc->nbcnvifs > 1)) { 524 + if (vif->type != NL80211_IFTYPE_AP) { 529 525 ath_dbg(common, CONFIG, 530 526 "An AP interface is already present !\n"); 531 527 return false; ··· 618 616 * enabling/disabling SWBA. 619 617 */ 620 618 if (changed & BSS_CHANGED_BEACON_ENABLED) { 621 - if (!bss_conf->enable_beacon && 622 - (sc->nbcnvifs <= 1)) { 623 - cur_conf->enable_beacon = false; 624 - } else if (bss_conf->enable_beacon) { 625 - cur_conf->enable_beacon = true; 626 - ath9k_cache_beacon_config(sc, ctx, bss_conf); 619 + bool enabled = cur_conf->enable_beacon; 620 + 621 + if (!bss_conf->enable_beacon) { 622 + cur_conf->enable_beacon &= ~BIT(avp->av_bslot); 623 + } else { 624 + cur_conf->enable_beacon |= BIT(avp->av_bslot); 625 + if (!enabled) 626 + ath9k_cache_beacon_config(sc, ctx, bss_conf); 627 627 } 628 628 } 629 629
+1 -1
drivers/net/wireless/ath/ath9k/common.h
··· 54 54 u16 dtim_period; 55 55 u16 bmiss_timeout; 56 56 u8 dtim_count; 57 - bool enable_beacon; 57 + u8 enable_beacon; 58 58 bool ibss_creator; 59 59 u32 nexttbtt; 60 60 u32 intval;
+1 -1
drivers/net/wireless/ath/ath9k/hw.c
··· 424 424 ah->power_mode = ATH9K_PM_UNDEFINED; 425 425 ah->htc_reset_init = true; 426 426 427 - ah->tpc_enabled = true; 427 + ah->tpc_enabled = false; 428 428 429 429 ah->ani_function = ATH9K_ANI_ALL; 430 430 if (!AR_SREV_9300_20_OR_LATER(ah))
+2 -1
drivers/net/wireless/brcm80211/brcmfmac/feature.c
··· 126 126 brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_MCHAN, "mchan"); 127 127 if (drvr->bus_if->wowl_supported) 128 128 brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_WOWL, "wowl"); 129 - brcmf_feat_iovar_int_set(ifp, BRCMF_FEAT_MBSS, "mbss", 0); 129 + if (drvr->bus_if->chip != BRCM_CC_43362_CHIP_ID) 130 + brcmf_feat_iovar_int_set(ifp, BRCMF_FEAT_MBSS, "mbss", 0); 130 131 131 132 /* set chip related quirks */ 132 133 switch (drvr->bus_if->chip) {
-1
drivers/net/wireless/iwlwifi/dvm/dev.h
··· 708 708 unsigned long reload_jiffies; 709 709 int reload_count; 710 710 bool ucode_loaded; 711 - bool init_ucode_run; /* Don't run init uCode again */ 712 711 713 712 u8 plcp_delta_threshold; 714 713
+9 -8
drivers/net/wireless/iwlwifi/dvm/mac80211.c
··· 1114 1114 scd_queues &= ~(BIT(IWL_IPAN_CMD_QUEUE_NUM) | 1115 1115 BIT(IWL_DEFAULT_CMD_QUEUE_NUM)); 1116 1116 1117 - if (vif) 1118 - scd_queues &= ~BIT(vif->hw_queue[IEEE80211_AC_VO]); 1119 - 1120 - IWL_DEBUG_TX_QUEUES(priv, "Flushing SCD queues: 0x%x\n", scd_queues); 1121 - if (iwlagn_txfifo_flush(priv, scd_queues)) { 1122 - IWL_ERR(priv, "flush request fail\n"); 1123 - goto done; 1117 + if (drop) { 1118 + IWL_DEBUG_TX_QUEUES(priv, "Flushing SCD queues: 0x%x\n", 1119 + scd_queues); 1120 + if (iwlagn_txfifo_flush(priv, scd_queues)) { 1121 + IWL_ERR(priv, "flush request fail\n"); 1122 + goto done; 1123 + } 1124 1124 } 1125 + 1125 1126 IWL_DEBUG_TX_QUEUES(priv, "wait transmit/flush all frames\n"); 1126 - iwl_trans_wait_tx_queue_empty(priv->trans, 0xffffffff); 1127 + iwl_trans_wait_tx_queue_empty(priv->trans, scd_queues); 1127 1128 done: 1128 1129 mutex_unlock(&priv->mutex); 1129 1130 IWL_DEBUG_MAC80211(priv, "leave\n");
-5
drivers/net/wireless/iwlwifi/dvm/ucode.c
··· 418 418 if (!priv->fw->img[IWL_UCODE_INIT].sec[0].len) 419 419 return 0; 420 420 421 - if (priv->init_ucode_run) 422 - return 0; 423 - 424 421 iwl_init_notification_wait(&priv->notif_wait, &calib_wait, 425 422 calib_complete, ARRAY_SIZE(calib_complete), 426 423 iwlagn_wait_calib, priv); ··· 437 440 */ 438 441 ret = iwl_wait_notification(&priv->notif_wait, &calib_wait, 439 442 UCODE_CALIB_TIMEOUT); 440 - if (!ret) 441 - priv->init_ucode_run = true; 442 443 443 444 goto out; 444 445
+1
drivers/net/wireless/iwlwifi/iwl-drv.c
··· 1257 1257 op->name, err); 1258 1258 #endif 1259 1259 } 1260 + kfree(pieces); 1260 1261 return; 1261 1262 1262 1263 try_again:
+22 -2
drivers/net/wireless/iwlwifi/mvm/rs.c
··· 1278 1278 struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); 1279 1279 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 1280 1280 1281 + if (!iwl_mvm_sta_from_mac80211(sta)->vif) 1282 + return; 1283 + 1281 1284 if (!ieee80211_is_data(hdr->frame_control) || 1282 1285 info->flags & IEEE80211_TX_CTL_NO_ACK) 1283 1286 return; ··· 2514 2511 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 2515 2512 struct iwl_lq_sta *lq_sta = mvm_sta; 2516 2513 2514 + if (sta && !iwl_mvm_sta_from_mac80211(sta)->vif) { 2515 + /* if vif isn't initialized mvm doesn't know about 2516 + * this station, so don't do anything with it 2517 + */ 2518 + sta = NULL; 2519 + mvm_sta = NULL; 2520 + } 2521 + 2517 2522 /* TODO: handle rate_idx_mask and rate_idx_mcs_mask */ 2518 2523 2519 2524 /* Treat uninitialized rate scaling data same as non-existing. */ ··· 2837 2826 struct iwl_op_mode *op_mode = 2838 2827 (struct iwl_op_mode *)mvm_r; 2839 2828 struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); 2829 + 2830 + if (!iwl_mvm_sta_from_mac80211(sta)->vif) 2831 + return; 2840 2832 2841 2833 /* Stop any ongoing aggregations as rs starts off assuming no agg */ 2842 2834 for (tid = 0; tid < IWL_MAX_TID_COUNT; tid++) ··· 3601 3587 3602 3588 MVM_DEBUGFS_READ_WRITE_FILE_OPS(ss_force, 32); 3603 3589 3604 - static void rs_add_debugfs(void *mvm, void *mvm_sta, struct dentry *dir) 3590 + static void rs_add_debugfs(void *mvm, void *priv_sta, struct dentry *dir) 3605 3591 { 3606 - struct iwl_lq_sta *lq_sta = mvm_sta; 3592 + struct iwl_lq_sta *lq_sta = priv_sta; 3593 + struct iwl_mvm_sta *mvmsta; 3594 + 3595 + mvmsta = container_of(lq_sta, struct iwl_mvm_sta, lq_sta); 3596 + 3597 + if (!mvmsta->vif) 3598 + return; 3607 3599 3608 3600 debugfs_create_file("rate_scale_table", S_IRUSR | S_IWUSR, dir, 3609 3601 lq_sta, &rs_sta_dbgfs_scale_table_ops);
+2
drivers/net/wireless/iwlwifi/mvm/time-event.c
··· 197 197 struct iwl_time_event_notif *notif) 198 198 { 199 199 if (!le32_to_cpu(notif->status)) { 200 + if (te_data->vif->type == NL80211_IFTYPE_STATION) 201 + ieee80211_connection_loss(te_data->vif); 200 202 IWL_DEBUG_TE(mvm, "CSA time event failed to start\n"); 201 203 iwl_mvm_te_clear_data(mvm, te_data); 202 204 return;
+4 -2
drivers/net/wireless/iwlwifi/mvm/tx.c
··· 949 949 mvmsta = iwl_mvm_sta_from_mac80211(sta); 950 950 tid_data = &mvmsta->tid_data[tid]; 951 951 952 - if (WARN_ONCE(tid_data->txq_id != scd_flow, "Q %d, tid %d, flow %d", 953 - tid_data->txq_id, tid, scd_flow)) { 952 + if (tid_data->txq_id != scd_flow) { 953 + IWL_ERR(mvm, 954 + "invalid BA notification: Q %d, tid %d, flow %d\n", 955 + tid_data->txq_id, tid, scd_flow); 954 956 rcu_read_unlock(); 955 957 return 0; 956 958 }
+4 -2
drivers/net/wireless/iwlwifi/pcie/drv.c
··· 368 368 /* 3165 Series */ 369 369 {IWL_PCI_DEVICE(0x3165, 0x4010, iwl3165_2ac_cfg)}, 370 370 {IWL_PCI_DEVICE(0x3165, 0x4012, iwl3165_2ac_cfg)}, 371 - {IWL_PCI_DEVICE(0x3165, 0x4110, iwl3165_2ac_cfg)}, 372 - {IWL_PCI_DEVICE(0x3165, 0x4210, iwl3165_2ac_cfg)}, 373 371 {IWL_PCI_DEVICE(0x3165, 0x4410, iwl3165_2ac_cfg)}, 374 372 {IWL_PCI_DEVICE(0x3165, 0x4510, iwl3165_2ac_cfg)}, 373 + {IWL_PCI_DEVICE(0x3165, 0x4110, iwl3165_2ac_cfg)}, 374 + {IWL_PCI_DEVICE(0x3166, 0x4310, iwl3165_2ac_cfg)}, 375 + {IWL_PCI_DEVICE(0x3166, 0x4210, iwl3165_2ac_cfg)}, 376 + {IWL_PCI_DEVICE(0x3165, 0x8010, iwl3165_2ac_cfg)}, 375 377 376 378 /* 7265 Series */ 377 379 {IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)},
+11 -1
drivers/net/wireless/rtlwifi/pci.c
··· 1124 1124 /*This is for new trx flow*/ 1125 1125 struct rtl_tx_buffer_desc *pbuffer_desc = NULL; 1126 1126 u8 temp_one = 1; 1127 + u8 *entry; 1127 1128 1128 1129 memset(&tcb_desc, 0, sizeof(struct rtl_tcb_desc)); 1129 1130 ring = &rtlpci->tx_ring[BEACON_QUEUE]; 1130 1131 pskb = __skb_dequeue(&ring->queue); 1131 - if (pskb) 1132 + if (rtlpriv->use_new_trx_flow) 1133 + entry = (u8 *)(&ring->buffer_desc[ring->idx]); 1134 + else 1135 + entry = (u8 *)(&ring->desc[ring->idx]); 1136 + if (pskb) { 1137 + pci_unmap_single(rtlpci->pdev, 1138 + rtlpriv->cfg->ops->get_desc( 1139 + (u8 *)entry, true, HW_DESC_TXBUFF_ADDR), 1140 + pskb->len, PCI_DMA_TODEVICE); 1132 1141 kfree_skb(pskb); 1142 + } 1133 1143 1134 1144 /*NB: the beacon data buffer must be 32-bit aligned. */ 1135 1145 pskb = ieee80211_beacon_get(hw, mac->vif);
+1 -4
drivers/net/xen-netfront.c
··· 1008 1008 1009 1009 static int xennet_change_mtu(struct net_device *dev, int mtu) 1010 1010 { 1011 - int max = xennet_can_sg(dev) ? 1012 - XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER : ETH_DATA_LEN; 1011 + int max = xennet_can_sg(dev) ? XEN_NETIF_MAX_TX_SIZE : ETH_DATA_LEN; 1013 1012 1014 1013 if (mtu > max) 1015 1014 return -EINVAL; ··· 1277 1278 1278 1279 netdev->ethtool_ops = &xennet_ethtool_ops; 1279 1280 SET_NETDEV_DEV(netdev, &dev->dev); 1280 - 1281 - netif_set_gso_max_size(netdev, XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER); 1282 1281 1283 1282 np->netdev = netdev; 1284 1283
+8 -3
drivers/of/address.c
··· 450 450 return NULL; 451 451 } 452 452 453 - static int of_empty_ranges_quirk(void) 453 + static int of_empty_ranges_quirk(struct device_node *np) 454 454 { 455 455 if (IS_ENABLED(CONFIG_PPC)) { 456 - /* To save cycles, we cache the result */ 456 + /* To save cycles, we cache the result for global "Mac" setting */ 457 457 static int quirk_state = -1; 458 458 459 + /* PA-SEMI sdc DT bug */ 460 + if (of_device_is_compatible(np, "1682m-sdc")) 461 + return true; 462 + 463 + /* Make quirk cached */ 459 464 if (quirk_state < 0) 460 465 quirk_state = 461 466 of_machine_is_compatible("Power Macintosh") || ··· 495 490 * This code is only enabled on powerpc. --gcl 496 491 */ 497 492 ranges = of_get_property(parent, rprop, &rlen); 498 - if (ranges == NULL && !of_empty_ranges_quirk()) { 493 + if (ranges == NULL && !of_empty_ranges_quirk(parent)) { 499 494 pr_debug("OF: no ranges; cannot translate\n"); 500 495 return 1; 501 496 }
+4
drivers/regulator/palmas-regulator.c
··· 1572 1572 if (!pmic) 1573 1573 return -ENOMEM; 1574 1574 1575 + if (of_device_is_compatible(node, "ti,tps659038-pmic")) 1576 + palmas_generic_regs_info[PALMAS_REG_REGEN2].ctrl_addr = 1577 + TPS659038_REGEN2_CTRL; 1578 + 1575 1579 pmic->dev = &pdev->dev; 1576 1580 pmic->palmas = palmas; 1577 1581 palmas->pmic = pmic;
+9 -8
drivers/rtc/rtc-mrst.c
··· 413 413 mrst->dev = NULL; 414 414 } 415 415 416 - #ifdef CONFIG_PM 417 - static int mrst_suspend(struct device *dev, pm_message_t mesg) 416 + #ifdef CONFIG_PM_SLEEP 417 + static int mrst_suspend(struct device *dev) 418 418 { 419 419 struct mrst_rtc *mrst = dev_get_drvdata(dev); 420 420 unsigned char tmp; ··· 453 453 */ 454 454 static inline int mrst_poweroff(struct device *dev) 455 455 { 456 - return mrst_suspend(dev, PMSG_HIBERNATE); 456 + return mrst_suspend(dev); 457 457 } 458 458 459 459 static int mrst_resume(struct device *dev) ··· 490 490 return 0; 491 491 } 492 492 493 + static SIMPLE_DEV_PM_OPS(mrst_pm_ops, mrst_suspend, mrst_resume); 494 + #define MRST_PM_OPS (&mrst_pm_ops) 495 + 493 496 #else 494 - #define mrst_suspend NULL 495 - #define mrst_resume NULL 497 + #define MRST_PM_OPS NULL 496 498 497 499 static inline int mrst_poweroff(struct device *dev) 498 500 { ··· 531 529 .remove = vrtc_mrst_platform_remove, 532 530 .shutdown = vrtc_mrst_platform_shutdown, 533 531 .driver = { 534 - .name = (char *) driver_name, 535 - .suspend = mrst_suspend, 536 - .resume = mrst_resume, 532 + .name = driver_name, 533 + .pm = MRST_PM_OPS, 537 534 } 538 535 }; 539 536
+2 -1
drivers/scsi/ipr.c
··· 6815 6815 }; 6816 6816 6817 6817 static struct ata_port_info sata_port_info = { 6818 - .flags = ATA_FLAG_SATA | ATA_FLAG_PIO_DMA, 6818 + .flags = ATA_FLAG_SATA | ATA_FLAG_PIO_DMA | 6819 + ATA_FLAG_SAS_HOST, 6819 6820 .pio_mask = ATA_PIO4_ONLY, 6820 6821 .mwdma_mask = ATA_MWDMA2, 6821 6822 .udma_mask = ATA_UDMA6,
+2 -1
drivers/scsi/libsas/sas_ata.c
··· 547 547 }; 548 548 549 549 static struct ata_port_info sata_port_info = { 550 - .flags = ATA_FLAG_SATA | ATA_FLAG_PIO_DMA | ATA_FLAG_NCQ, 550 + .flags = ATA_FLAG_SATA | ATA_FLAG_PIO_DMA | ATA_FLAG_NCQ | 551 + ATA_FLAG_SAS_HOST, 551 552 .pio_mask = ATA_PIO4, 552 553 .mwdma_mask = ATA_MWDMA2, 553 554 .udma_mask = ATA_UDMA6,
+4 -2
drivers/spi/spi-dw-mid.c
··· 108 108 { 109 109 struct dw_spi *dws = arg; 110 110 111 - if (test_and_clear_bit(TX_BUSY, &dws->dma_chan_busy) & BIT(RX_BUSY)) 111 + clear_bit(TX_BUSY, &dws->dma_chan_busy); 112 + if (test_bit(RX_BUSY, &dws->dma_chan_busy)) 112 113 return; 113 114 dw_spi_xfer_done(dws); 114 115 } ··· 157 156 { 158 157 struct dw_spi *dws = arg; 159 158 160 - if (test_and_clear_bit(RX_BUSY, &dws->dma_chan_busy) & BIT(TX_BUSY)) 159 + clear_bit(RX_BUSY, &dws->dma_chan_busy); 160 + if (test_bit(TX_BUSY, &dws->dma_chan_busy)) 161 161 return; 162 162 dw_spi_xfer_done(dws); 163 163 }
+5 -4
drivers/spi/spi-qup.c
··· 498 498 struct resource *res; 499 499 struct device *dev; 500 500 void __iomem *base; 501 - u32 max_freq, iomode; 501 + u32 max_freq, iomode, num_cs; 502 502 int ret, irq, size; 503 503 504 504 dev = &pdev->dev; ··· 550 550 } 551 551 552 552 /* use num-cs unless not present or out of range */ 553 - if (of_property_read_u16(dev->of_node, "num-cs", 554 - &master->num_chipselect) || 555 - (master->num_chipselect > SPI_NUM_CHIPSELECTS)) 553 + if (of_property_read_u32(dev->of_node, "num-cs", &num_cs) || 554 + num_cs > SPI_NUM_CHIPSELECTS) 556 555 master->num_chipselect = SPI_NUM_CHIPSELECTS; 556 + else 557 + master->num_chipselect = num_cs; 557 558 558 559 master->bus_num = pdev->id; 559 560 master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LOOP;
+3 -2
drivers/spi/spi.c
··· 1105 1105 "failed to unprepare message: %d\n", ret); 1106 1106 } 1107 1107 } 1108 + 1109 + trace_spi_message_done(mesg); 1110 + 1108 1111 master->cur_msg_prepared = false; 1109 1112 1110 1113 mesg->state = NULL; 1111 1114 if (mesg->complete) 1112 1115 mesg->complete(mesg->context); 1113 - 1114 - trace_spi_message_done(mesg); 1115 1116 } 1116 1117 EXPORT_SYMBOL_GPL(spi_finalize_current_message); 1117 1118
+1
drivers/staging/iio/Kconfig
··· 38 38 config IIO_SIMPLE_DUMMY_BUFFER 39 39 bool "Buffered capture support" 40 40 select IIO_BUFFER 41 + select IIO_TRIGGER 41 42 select IIO_KFIFO_BUF 42 43 help 43 44 Add buffered data capture to the simple dummy driver.
+1
drivers/staging/iio/magnetometer/hmc5843_core.c
··· 592 592 mutex_init(&data->lock); 593 593 594 594 indio_dev->dev.parent = dev; 595 + indio_dev->name = dev->driver->name; 595 596 indio_dev->info = &hmc5843_info; 596 597 indio_dev->modes = INDIO_DIRECT_MODE; 597 598 indio_dev->channels = data->variant->channels;
+5
drivers/tty/serial/fsl_lpuart.c
··· 921 921 writeb(val | UARTPFIFO_TXFE | UARTPFIFO_RXFE, 922 922 sport->port.membase + UARTPFIFO); 923 923 924 + /* explicitly clear RDRF */ 925 + readb(sport->port.membase + UARTSR1); 926 + 924 927 /* flush Tx and Rx FIFO */ 925 928 writeb(UARTCFIFO_TXFLUSH | UARTCFIFO_RXFLUSH, 926 929 sport->port.membase + UARTCFIFO); ··· 1078 1075 1079 1076 sport->txfifo_size = 0x1 << (((temp >> UARTPFIFO_TXSIZE_OFF) & 1080 1077 UARTPFIFO_FIFOSIZE_MASK) + 1); 1078 + 1079 + sport->port.fifosize = sport->txfifo_size; 1081 1080 1082 1081 sport->rxfifo_size = 0x1 << (((temp >> UARTPFIFO_RXSIZE_OFF) & 1083 1082 UARTPFIFO_FIFOSIZE_MASK) + 1);
+1
drivers/tty/serial/samsung.c
··· 963 963 free_irq(ourport->tx_irq, ourport); 964 964 tx_enabled(port) = 0; 965 965 ourport->tx_claimed = 0; 966 + ourport->tx_mode = 0; 966 967 } 967 968 968 969 if (ourport->rx_claimed) {
+8 -1
drivers/usb/host/xhci-hub.c
··· 387 387 status = PORT_PLC; 388 388 port_change_bit = "link state"; 389 389 break; 390 + case USB_PORT_FEAT_C_PORT_CONFIG_ERROR: 391 + status = PORT_CEC; 392 + port_change_bit = "config error"; 393 + break; 390 394 default: 391 395 /* Should never happen */ 392 396 return; ··· 592 588 status |= USB_PORT_STAT_C_LINK_STATE << 16; 593 589 if ((raw_port_status & PORT_WRC)) 594 590 status |= USB_PORT_STAT_C_BH_RESET << 16; 591 + if ((raw_port_status & PORT_CEC)) 592 + status |= USB_PORT_STAT_C_CONFIG_ERROR << 16; 595 593 } 596 594 597 595 if (hcd->speed != HCD_USB3) { ··· 1011 1005 case USB_PORT_FEAT_C_OVER_CURRENT: 1012 1006 case USB_PORT_FEAT_C_ENABLE: 1013 1007 case USB_PORT_FEAT_C_PORT_LINK_STATE: 1008 + case USB_PORT_FEAT_C_PORT_CONFIG_ERROR: 1014 1009 xhci_clear_port_change_bit(xhci, wValue, wIndex, 1015 1010 port_array[wIndex], temp); 1016 1011 break; ··· 1076 1069 */ 1077 1070 status = bus_state->resuming_ports; 1078 1071 1079 - mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC; 1072 + mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC | PORT_CEC; 1080 1073 1081 1074 spin_lock_irqsave(&xhci->lock, flags); 1082 1075 /* For each port, did anything change? If so, set that bit in buf. */
+1 -1
drivers/usb/host/xhci-pci.c
··· 115 115 if (pdev->vendor == PCI_VENDOR_ID_INTEL) { 116 116 xhci->quirks |= XHCI_LPM_SUPPORT; 117 117 xhci->quirks |= XHCI_INTEL_HOST; 118 + xhci->quirks |= XHCI_AVOID_BEI; 118 119 } 119 120 if (pdev->vendor == PCI_VENDOR_ID_INTEL && 120 121 pdev->device == PCI_DEVICE_ID_INTEL_PANTHERPOINT_XHCI) { ··· 131 130 * PPT chipsets. 132 131 */ 133 132 xhci->quirks |= XHCI_SPURIOUS_REBOOT; 134 - xhci->quirks |= XHCI_AVOID_BEI; 135 133 } 136 134 if (pdev->vendor == PCI_VENDOR_ID_INTEL && 137 135 pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI) {
+1 -1
drivers/usb/isp1760/isp1760-udc.c
··· 1203 1203 1204 1204 if (udc->driver) { 1205 1205 dev_err(udc->isp->dev, "UDC already has a gadget driver\n"); 1206 - spin_unlock(&udc->lock); 1206 + spin_unlock_irqrestore(&udc->lock, flags); 1207 1207 return -EBUSY; 1208 1208 } 1209 1209
+7 -2
drivers/usb/serial/ftdi_sio.c
··· 604 604 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 605 605 { USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLXM_PID), 606 606 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 607 + { USB_DEVICE(FTDI_VID, FTDI_SYNAPSE_SS200_PID) }, 607 608 /* 608 609 * ELV devices: 609 610 */ ··· 1884 1883 { 1885 1884 struct usb_device *udev = serial->dev; 1886 1885 1887 - if ((udev->manufacturer && !strcmp(udev->manufacturer, "CALAO Systems")) || 1888 - (udev->product && !strcmp(udev->product, "BeagleBone/XDS100V2"))) 1886 + if (udev->manufacturer && !strcmp(udev->manufacturer, "CALAO Systems")) 1887 + return ftdi_jtag_probe(serial); 1888 + 1889 + if (udev->product && 1890 + (!strcmp(udev->product, "BeagleBone/XDS100V2") || 1891 + !strcmp(udev->product, "SNAP Connect E10"))) 1889 1892 return ftdi_jtag_probe(serial); 1890 1893 1891 1894 return 0;
+6
drivers/usb/serial/ftdi_sio_ids.h
··· 561 561 */ 562 562 #define FTDI_NT_ORIONLXM_PID 0x7c90 /* OrionLXm Substation Automation Platform */ 563 563 564 + /* 565 + * Synapse Wireless product ids (FTDI_VID) 566 + * http://www.synapse-wireless.com 567 + */ 568 + #define FTDI_SYNAPSE_SS200_PID 0x9090 /* SS200 - SNAP Stick 200 */ 569 + 564 570 565 571 /********************************/ 566 572 /** third-party VID/PID combos **/
+3
drivers/usb/serial/keyspan_pda.c
··· 61 61 /* For Xircom PGSDB9 and older Entrega version of the same device */ 62 62 #define XIRCOM_VENDOR_ID 0x085a 63 63 #define XIRCOM_FAKE_ID 0x8027 64 + #define XIRCOM_FAKE_ID_2 0x8025 /* "PGMFHUB" serial */ 64 65 #define ENTREGA_VENDOR_ID 0x1645 65 66 #define ENTREGA_FAKE_ID 0x8093 66 67 ··· 71 70 #endif 72 71 #ifdef XIRCOM 73 72 { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID) }, 73 + { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID_2) }, 74 74 { USB_DEVICE(ENTREGA_VENDOR_ID, ENTREGA_FAKE_ID) }, 75 75 #endif 76 76 { USB_DEVICE(KEYSPAN_VENDOR_ID, KEYSPAN_PDA_ID) }, ··· 95 93 #ifdef XIRCOM 96 94 static const struct usb_device_id id_table_fake_xircom[] = { 97 95 { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID) }, 96 + { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID_2) }, 98 97 { USB_DEVICE(ENTREGA_VENDOR_ID, ENTREGA_FAKE_ID) }, 99 98 { } 100 99 };
+4 -4
drivers/watchdog/imgpdc_wdt.c
··· 42 42 #define PDC_WDT_MIN_TIMEOUT 1 43 43 #define PDC_WDT_DEF_TIMEOUT 64 44 44 45 - static int heartbeat; 45 + static int heartbeat = PDC_WDT_DEF_TIMEOUT; 46 46 module_param(heartbeat, int, 0); 47 - MODULE_PARM_DESC(heartbeat, "Watchdog heartbeats in seconds. " 48 - "(default = " __MODULE_STRING(PDC_WDT_DEF_TIMEOUT) ")"); 47 + MODULE_PARM_DESC(heartbeat, "Watchdog heartbeats in seconds " 48 + "(default=" __MODULE_STRING(PDC_WDT_DEF_TIMEOUT) ")"); 49 49 50 50 static bool nowayout = WATCHDOG_NOWAYOUT; 51 51 module_param(nowayout, bool, 0); ··· 191 191 pdc_wdt->wdt_dev.ops = &pdc_wdt_ops; 192 192 pdc_wdt->wdt_dev.max_timeout = 1 << PDC_WDT_CONFIG_DELAY_MASK; 193 193 pdc_wdt->wdt_dev.parent = &pdev->dev; 194 + watchdog_set_drvdata(&pdc_wdt->wdt_dev, pdc_wdt); 194 195 195 196 ret = watchdog_init_timeout(&pdc_wdt->wdt_dev, heartbeat, &pdev->dev); 196 197 if (ret < 0) { ··· 233 232 watchdog_set_nowayout(&pdc_wdt->wdt_dev, nowayout); 234 233 235 234 platform_set_drvdata(pdev, pdc_wdt); 236 - watchdog_set_drvdata(&pdc_wdt->wdt_dev, pdc_wdt); 237 235 238 236 ret = watchdog_register_device(&pdc_wdt->wdt_dev); 239 237 if (ret)
+1 -1
drivers/watchdog/mtk_wdt.c
··· 133 133 u32 reg; 134 134 struct mtk_wdt_dev *mtk_wdt = watchdog_get_drvdata(wdt_dev); 135 135 void __iomem *wdt_base = mtk_wdt->wdt_base; 136 - u32 ret; 136 + int ret; 137 137 138 138 ret = mtk_wdt_set_timeout(wdt_dev, wdt_dev->timeout); 139 139 if (ret < 0)
+17
drivers/xen/Kconfig
··· 55 55 
56 56 In that case step 3 should be omitted.
57 57 
58 + config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
59 + int "Hotplugged memory limit (in GiB) for a PV guest"
60 + default 512 if X86_64
61 + default 4 if X86_32
62 + range 0 64 if X86_32
63 + depends on XEN_HAVE_PVMMU
64 + depends on XEN_BALLOON_MEMORY_HOTPLUG
65 + help
66 + Maximum amount of memory (in GiB) that a PV guest can be
67 + expanded to when using memory hotplug.
68 + 
69 + A PV guest can have more memory than this limit if it is
70 + started with a larger maximum.
71 + 
72 + This value is used to allocate enough space in internal
73 + tables needed for physical memory administration.
74 + 
58 75 config XEN_SCRUB_PAGES
59 76 bool "Scrub pages before returning them to system"
60 77 depends on XEN_BALLOON
+23
drivers/xen/balloon.c
··· 229 229 balloon_hotplug = round_up(balloon_hotplug, PAGES_PER_SECTION); 230 230 nid = memory_add_physaddr_to_nid(hotplug_start_paddr); 231 231 232 + #ifdef CONFIG_XEN_HAVE_PVMMU 233 + /* 234 + * add_memory() will build page tables for the new memory so 235 + * the p2m must contain invalid entries so the correct 236 + * non-present PTEs will be written. 237 + * 238 + * If a failure occurs, the original (identity) p2m entries 239 + * are not restored since this region is now known not to 240 + * conflict with any devices. 241 + */ 242 + if (!xen_feature(XENFEAT_auto_translated_physmap)) { 243 + unsigned long pfn, i; 244 + 245 + pfn = PFN_DOWN(hotplug_start_paddr); 246 + for (i = 0; i < balloon_hotplug; i++) { 247 + if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) { 248 + pr_warn("set_phys_to_machine() failed, no memory added\n"); 249 + return BP_ECANCELED; 250 + } 251 + } 252 + } 253 + #endif 254 + 232 255 rc = add_memory(nid, hotplug_start_paddr, balloon_hotplug << PAGE_SHIFT); 233 256 234 257 if (rc) {
+12 -7
fs/affs/file.c
··· 699 699 boff = tmp % bsize; 700 700 if (boff) { 701 701 bh = affs_bread_ino(inode, bidx, 0); 702 - if (IS_ERR(bh)) 703 - return PTR_ERR(bh); 702 + if (IS_ERR(bh)) { 703 + written = PTR_ERR(bh); 704 + goto err_first_bh; 705 + } 704 706 tmp = min(bsize - boff, to - from); 705 707 BUG_ON(boff + tmp > bsize || tmp > bsize); 706 708 memcpy(AFFS_DATA(bh) + boff, data + from, tmp); ··· 714 712 bidx++; 715 713 } else if (bidx) { 716 714 bh = affs_bread_ino(inode, bidx - 1, 0); 717 - if (IS_ERR(bh)) 718 - return PTR_ERR(bh); 715 + if (IS_ERR(bh)) { 716 + written = PTR_ERR(bh); 717 + goto err_first_bh; 718 + } 719 719 } 720 720 while (from + bsize <= to) { 721 721 prev_bh = bh; 722 722 bh = affs_getemptyblk_ino(inode, bidx); 723 723 if (IS_ERR(bh)) 724 - goto out; 724 + goto err_bh; 725 725 memcpy(AFFS_DATA(bh), data + from, bsize); 726 726 if (buffer_new(bh)) { 727 727 AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA); ··· 755 751 prev_bh = bh; 756 752 bh = affs_bread_ino(inode, bidx, 1); 757 753 if (IS_ERR(bh)) 758 - goto out; 754 + goto err_bh; 759 755 tmp = min(bsize, to - from); 760 756 BUG_ON(tmp > bsize); 761 757 memcpy(AFFS_DATA(bh), data + from, tmp); ··· 794 790 if (tmp > inode->i_size) 795 791 inode->i_size = AFFS_I(inode)->mmu_private = tmp; 796 792 793 + err_first_bh: 797 794 unlock_page(page); 798 795 page_cache_release(page); 799 796 800 797 return written; 801 798 802 - out: 799 + err_bh: 803 800 bh = prev_bh; 804 801 if (!written) 805 802 written = PTR_ERR(bh);
+5 -1
fs/cifs/cifsencrypt.c
··· 1 1 /* 2 2 * fs/cifs/cifsencrypt.c 3 3 * 4 + * Encryption and hashing operations relating to NTLM, NTLMv2. See MS-NLMP 5 + * for more detailed information 6 + * 4 7 * Copyright (C) International Business Machines Corp., 2005,2013 5 8 * Author(s): Steve French (sfrench@us.ibm.com) 6 9 * ··· 518 515 __func__); 519 516 return rc; 520 517 } 521 - } else if (ses->serverName) { 518 + } else { 519 + /* We use ses->serverName if no domain name available */ 522 520 len = strlen(ses->serverName); 523 521 524 522 server = kmalloc(2 + (len * 2), GFP_KERNEL);
+11 -2
fs/cifs/connect.c
··· 1599 1599 pr_warn("CIFS: username too long\n"); 1600 1600 goto cifs_parse_mount_err; 1601 1601 } 1602 + 1603 + kfree(vol->username); 1602 1604 vol->username = kstrdup(string, GFP_KERNEL); 1603 1605 if (!vol->username) 1604 1606 goto cifs_parse_mount_err; ··· 1702 1700 goto cifs_parse_mount_err; 1703 1701 } 1704 1702 1703 + kfree(vol->domainname); 1705 1704 vol->domainname = kstrdup(string, GFP_KERNEL); 1706 1705 if (!vol->domainname) { 1707 1706 pr_warn("CIFS: no memory for domainname\n"); ··· 1734 1731 } 1735 1732 1736 1733 if (strncasecmp(string, "default", 7) != 0) { 1734 + kfree(vol->iocharset); 1737 1735 vol->iocharset = kstrdup(string, 1738 1736 GFP_KERNEL); 1739 1737 if (!vol->iocharset) { ··· 2917 2913 * calling name ends in null (byte 16) from old smb 2918 2914 * convention. 2919 2915 */ 2920 - if (server->workstation_RFC1001_name && 2921 - server->workstation_RFC1001_name[0] != 0) 2916 + if (server->workstation_RFC1001_name[0] != 0) 2922 2917 rfc1002mangle(ses_init_buf->trailer. 2923 2918 session_req.calling_name, 2924 2919 server->workstation_RFC1001_name, ··· 3695 3692 #endif /* CIFS_WEAK_PW_HASH */ 3696 3693 rc = SMBNTencrypt(tcon->password, ses->server->cryptkey, 3697 3694 bcc_ptr, nls_codepage); 3695 + if (rc) { 3696 + cifs_dbg(FYI, "%s Can't generate NTLM rsp. Error: %d\n", 3697 + __func__, rc); 3698 + cifs_buf_release(smb_buffer); 3699 + return rc; 3700 + } 3698 3701 3699 3702 bcc_ptr += CIFS_AUTH_RESP_SIZE; 3700 3703 if (ses->capabilities & CAP_UNICODE) {
+1
fs/cifs/file.c
··· 1823 1823 cifsFileInfo_put(inv_file); 1824 1824 spin_lock(&cifs_file_list_lock); 1825 1825 ++refind; 1826 + inv_file = NULL; 1826 1827 goto refind_writable; 1827 1828 } 1828 1829 }
+2
fs/cifs/inode.c
··· 771 771 cifs_buf_release(srchinf->ntwrk_buf_start); 772 772 } 773 773 kfree(srchinf); 774 + if (rc) 775 + goto cgii_exit; 774 776 } else 775 777 goto cgii_exit; 776 778
+1 -1
fs/cifs/smb2misc.c
··· 322 322 323 323 /* return pointer to beginning of data area, ie offset from SMB start */ 324 324 if ((*off != 0) && (*len != 0)) 325 - return hdr->ProtocolId + *off; 325 + return (char *)(&hdr->ProtocolId[0]) + *off; 326 326 else 327 327 return NULL; 328 328 }
+2 -1
fs/cifs/smb2ops.c
··· 684 684 685 685 /* No need to change MaxChunks since already set to 1 */ 686 686 chunk_sizes_updated = true; 687 - } 687 + } else 688 + goto cchunk_out; 688 689 } 689 690 690 691 cchunk_out:
+10 -7
fs/cifs/smb2pdu.c
··· 1218 1218 struct smb2_ioctl_req *req; 1219 1219 struct smb2_ioctl_rsp *rsp; 1220 1220 struct TCP_Server_Info *server; 1221 - struct cifs_ses *ses = tcon->ses; 1221 + struct cifs_ses *ses; 1222 1222 struct kvec iov[2]; 1223 1223 int resp_buftype; 1224 1224 int num_iovecs; ··· 1232 1232 /* zero out returned data len, in case of error */ 1233 1233 if (plen) 1234 1234 *plen = 0; 1235 + 1236 + if (tcon) 1237 + ses = tcon->ses; 1238 + else 1239 + return -EIO; 1235 1240 1236 1241 if (ses && (ses->server)) 1237 1242 server = ses->server; ··· 1301 1296 rsp = (struct smb2_ioctl_rsp *)iov[0].iov_base; 1302 1297 1303 1298 if ((rc != 0) && (rc != -EINVAL)) { 1304 - if (tcon) 1305 - cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1299 + cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1306 1300 goto ioctl_exit; 1307 1301 } else if (rc == -EINVAL) { 1308 1302 if ((opcode != FSCTL_SRV_COPYCHUNK_WRITE) && 1309 1303 (opcode != FSCTL_SRV_COPYCHUNK)) { 1310 - if (tcon) 1311 - cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1304 + cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1312 1305 goto ioctl_exit; 1313 1306 } 1314 1307 } ··· 1632 1629 1633 1630 rc = SendReceive2(xid, ses, iov, 1, &resp_buftype, 0); 1634 1631 1635 - if ((rc != 0) && tcon) 1632 + if (rc != 0) 1636 1633 cifs_stats_fail_inc(tcon, SMB2_FLUSH_HE); 1637 1634 1638 1635 free_rsp_buf(resp_buftype, iov[0].iov_base); ··· 2117 2114 struct kvec iov[2]; 2118 2115 int rc = 0; 2119 2116 int len; 2120 - int resp_buftype; 2117 + int resp_buftype = CIFS_NO_BUFFER; 2121 2118 unsigned char *bufptr; 2122 2119 struct TCP_Server_Info *server; 2123 2120 struct cifs_ses *ses = tcon->ses;
+83 -10
fs/fs-writeback.c
··· 53 53 struct completion *done; /* set if the caller waits */
54 54 };
55 55 
56 + /*
57 + * If an inode is constantly having its pages dirtied, but then the
58 + * updates stop dirtytime_expire_interval seconds in the past, it's
59 + * possible for the worst case time between when an inode has its
60 + * timestamps updated and when they finally get written out to be two
61 + * dirtytime_expire_intervals. We set the default to 12 hours (in
62 + * seconds), which means most of the time inodes will have their
63 + * timestamps written to disk after 12 hours, but in the worst case a
64 + * few inodes might not have their timestamps updated for 24 hours.
65 + */
66 + unsigned int dirtytime_expire_interval = 12 * 60 * 60;
67 + 
56 68 /**
57 69 * writeback_in_progress - determine whether there is writeback in progress
58 70 * @bdi: the device's backing_dev_info structure.
··· 287 275 
288 276 if ((flags & EXPIRE_DIRTY_ATIME) == 0)
289 277 older_than_this = work->older_than_this;
290 - else if ((work->reason == WB_REASON_SYNC) == 0) {
291 - expire_time = jiffies - (HZ * 86400);
278 + else if (!work->for_sync) {
279 + expire_time = jiffies - (dirtytime_expire_interval * HZ);
292 280 older_than_this = &expire_time;
293 281 }
294 282 while (!list_empty(delaying_queue)) {
··· 470 458 */
471 459 redirty_tail(inode, wb);
472 460 } else if (inode->i_state & I_DIRTY_TIME) {
461 + inode->dirtied_when = jiffies;
473 462 list_move(&inode->i_wb_list, &wb->b_dirty_time);
474 463 } else {
475 464 /* The inode is clean. Remove from writeback lists.
*/ ··· 518 505 spin_lock(&inode->i_lock);
519 506 
520 507 dirty = inode->i_state & I_DIRTY;
521 - if (((dirty & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) &&
522 - (inode->i_state & I_DIRTY_TIME)) ||
523 - (inode->i_state & I_DIRTY_TIME_EXPIRED)) {
524 - dirty |= I_DIRTY_TIME | I_DIRTY_TIME_EXPIRED;
525 - trace_writeback_lazytime(inode);
526 - }
508 + if (inode->i_state & I_DIRTY_TIME) {
509 + if ((dirty & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) ||
510 + unlikely(inode->i_state & I_DIRTY_TIME_EXPIRED) ||
511 + unlikely(time_after(jiffies,
512 + (inode->dirtied_time_when +
513 + dirtytime_expire_interval * HZ)))) {
514 + dirty |= I_DIRTY_TIME | I_DIRTY_TIME_EXPIRED;
515 + trace_writeback_lazytime(inode);
516 + }
517 + } else
518 + inode->i_state &= ~I_DIRTY_TIME_EXPIRED;
527 519 inode->i_state &= ~dirty;
528 520 
529 521 /*
··· 1149 1131 rcu_read_unlock();
1150 1132 }
1151 1133 
1134 + /*
1135 + * Wake up bdi's periodically to make sure dirtytime inodes get
1136 + * written back periodically. We deliberately do *not* check the
1137 + * b_dirtytime list in wb_has_dirty_io(), since this would cause the
1138 + * kernel to be constantly waking up once there are any dirtytime
1139 + * inodes on the system. So instead we define a separate delayed work
1140 + * function which gets called much more rarely. (By default, only
1141 + * once every 12 hours.)
1142 + *
1143 + * If there is any other write activity going on in the file system,
1144 + * this function won't be necessary. But if the only thing that has
1145 + * happened on the file system is a dirtytime inode caused by an atime
1146 + * update, we need this infrastructure below to make sure that inode
1147 + * eventually gets pushed out to disk.
1148 + */ 1149 + static void wakeup_dirtytime_writeback(struct work_struct *w); 1150 + static DECLARE_DELAYED_WORK(dirtytime_work, wakeup_dirtytime_writeback); 1151 + 1152 + static void wakeup_dirtytime_writeback(struct work_struct *w) 1153 + { 1154 + struct backing_dev_info *bdi; 1155 + 1156 + rcu_read_lock(); 1157 + list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) { 1158 + if (list_empty(&bdi->wb.b_dirty_time)) 1159 + continue; 1160 + bdi_wakeup_thread(bdi); 1161 + } 1162 + rcu_read_unlock(); 1163 + schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ); 1164 + } 1165 + 1166 + static int __init start_dirtytime_writeback(void) 1167 + { 1168 + schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ); 1169 + return 0; 1170 + } 1171 + __initcall(start_dirtytime_writeback); 1172 + 1173 + int dirtytime_interval_handler(struct ctl_table *table, int write, 1174 + void __user *buffer, size_t *lenp, loff_t *ppos) 1175 + { 1176 + int ret; 1177 + 1178 + ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos); 1179 + if (ret == 0 && write) 1180 + mod_delayed_work(system_wq, &dirtytime_work, 0); 1181 + return ret; 1182 + } 1183 + 1152 1184 static noinline void block_dump___mark_inode_dirty(struct inode *inode) 1153 1185 { 1154 1186 if (inode->i_ino || strcmp(inode->i_sb->s_id, "bdev")) { ··· 1337 1269 } 1338 1270 1339 1271 inode->dirtied_when = jiffies; 1340 - list_move(&inode->i_wb_list, dirtytime ? 1341 - &bdi->wb.b_dirty_time : &bdi->wb.b_dirty); 1272 + if (dirtytime) 1273 + inode->dirtied_time_when = jiffies; 1274 + if (inode->i_state & (I_DIRTY_INODE | I_DIRTY_PAGES)) 1275 + list_move(&inode->i_wb_list, &bdi->wb.b_dirty); 1276 + else 1277 + list_move(&inode->i_wb_list, 1278 + &bdi->wb.b_dirty_time); 1342 1279 spin_unlock(&bdi->wb.list_lock); 1343 1280 trace_writeback_dirty_inode_enqueue(inode); 1344 1281
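The expiry test added in the hunk above relies on the kernel's wrap-safe `time_after()` comparison of jiffies values. As an editorial aside, here is a minimal userspace sketch of that idiom (the `jiffies_t` typedef is an assumption for self-containment; the real macro lives in `include/linux/jiffies.h`): subtracting as unsigned and testing the sign keeps the comparison correct even after the counter wraps around, unlike a plain `>`.

```c
#include <assert.h>

typedef unsigned long jiffies_t;

/* Wrap-safe "is a later than b?" check: compute the unsigned
 * difference and look at its sign as a signed value. */
static int time_after(jiffies_t a, jiffies_t b)
{
	return (long)(b - a) < 0;
}
```

This is why the diff writes `time_after(jiffies, inode->dirtied_time_when + dirtytime_expire_interval * HZ)` rather than comparing with a relational operator directly.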
+11 -9
fs/hfsplus/brec.c
··· 131 131 hfs_bnode_write(node, entry, data_off + key_len, entry_len); 132 132 hfs_bnode_dump(node); 133 133 134 - if (new_node) { 135 - /* update parent key if we inserted a key 136 - * at the start of the first node 137 - */ 138 - if (!rec && new_node != node) 139 - hfs_brec_update_parent(fd); 134 + /* 135 + * update parent key if we inserted a key 136 + * at the start of the node and it is not the new node 137 + */ 138 + if (!rec && new_node != node) { 139 + hfs_bnode_read_key(node, fd->search_key, data_off + size); 140 + hfs_brec_update_parent(fd); 141 + } 140 142 143 + if (new_node) { 141 144 hfs_bnode_put(fd->bnode); 142 145 if (!new_node->parent) { 143 146 hfs_btree_inc_height(tree); ··· 170 167 } 171 168 goto again; 172 169 } 173 - 174 - if (!rec) 175 - hfs_brec_update_parent(fd); 176 170 177 171 return 0; 178 172 } ··· 370 370 if (IS_ERR(parent)) 371 371 return PTR_ERR(parent); 372 372 __hfs_brec_find(parent, fd, hfs_find_rec_by_key); 373 + if (fd->record < 0) 374 + return -ENOENT; 373 375 hfs_bnode_dump(parent); 374 376 rec = fd->record; 375 377
+2 -3
fs/locks.c
··· 1388 1388 int __break_lease(struct inode *inode, unsigned int mode, unsigned int type) 1389 1389 { 1390 1390 int error = 0; 1391 - struct file_lock *new_fl; 1392 1391 struct file_lock_context *ctx = inode->i_flctx; 1393 - struct file_lock *fl; 1392 + struct file_lock *new_fl, *fl, *tmp; 1394 1393 unsigned long break_time; 1395 1394 int want_write = (mode & O_ACCMODE) != O_RDONLY; 1396 1395 LIST_HEAD(dispose); ··· 1419 1420 break_time++; /* so that 0 means no break time */ 1420 1421 } 1421 1422 1422 - list_for_each_entry(fl, &ctx->flc_lease, fl_list) { 1423 + list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) { 1423 1424 if (!leases_conflict(fl, new_fl)) 1424 1425 continue; 1425 1426 if (want_write) {
+1 -1
fs/nfsd/blocklayout.c
··· 137 137 seg->offset = iomap.offset; 138 138 seg->length = iomap.length; 139 139 140 - dprintk("GET: %lld:%lld %d\n", bex->foff, bex->len, bex->es); 140 + dprintk("GET: 0x%llx:0x%llx %d\n", bex->foff, bex->len, bex->es); 141 141 return 0; 142 142 143 143 out_error:
+3 -3
fs/nfsd/blocklayoutxdr.c
··· 122 122 123 123 p = xdr_decode_hyper(p, &bex.foff); 124 124 if (bex.foff & (block_size - 1)) { 125 - dprintk("%s: unaligned offset %lld\n", 125 + dprintk("%s: unaligned offset 0x%llx\n", 126 126 __func__, bex.foff); 127 127 goto fail; 128 128 } 129 129 p = xdr_decode_hyper(p, &bex.len); 130 130 if (bex.len & (block_size - 1)) { 131 - dprintk("%s: unaligned length %lld\n", 131 + dprintk("%s: unaligned length 0x%llx\n", 132 132 __func__, bex.foff); 133 133 goto fail; 134 134 } 135 135 p = xdr_decode_hyper(p, &bex.soff); 136 136 if (bex.soff & (block_size - 1)) { 137 - dprintk("%s: unaligned disk offset %lld\n", 137 + dprintk("%s: unaligned disk offset 0x%llx\n", 138 138 __func__, bex.soff); 139 139 goto fail; 140 140 }
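The three checks in the hunk above all use the standard power-of-two alignment test: when `block_size` is a power of two, `x & (block_size - 1)` equals `x % block_size`, so a non-zero result means `x` is not block-aligned. A minimal self-contained sketch of the test (`is_aligned` is an illustrative helper, not a function from this file):

```c
#include <assert.h>
#include <stdint.h>

/* For a power-of-two block_size, the low bits selected by
 * (block_size - 1) are exactly the remainder of x / block_size. */
static int is_aligned(uint64_t x, uint64_t block_size)
{
	return (x & (block_size - 1)) == 0;
}
```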
+8 -4
fs/nfsd/nfs4layouts.c
··· 118 118 { 119 119 struct super_block *sb = exp->ex_path.mnt->mnt_sb; 120 120 121 - if (exp->ex_flags & NFSEXP_NOPNFS) 121 + if (!(exp->ex_flags & NFSEXP_PNFS)) 122 122 return; 123 123 124 124 if (sb->s_export_op->get_uuid && ··· 440 440 list_move_tail(&lp->lo_perstate, reaplist); 441 441 return; 442 442 } 443 - end = seg->offset; 443 + lo->offset = layout_end(seg); 444 444 } else { 445 445 /* retain the whole layout segment on a split. */ 446 446 if (layout_end(seg) < end) { 447 447 dprintk("%s: split not supported\n", __func__); 448 448 return; 449 449 } 450 - 451 - lo->offset = layout_end(seg); 450 + end = seg->offset; 452 451 } 453 452 454 453 layout_update_len(lo, end); ··· 512 513 513 514 spin_lock(&clp->cl_lock); 514 515 list_for_each_entry_safe(ls, n, &clp->cl_lo_states, ls_perclnt) { 516 + if (ls->ls_layout_type != lrp->lr_layout_type) 517 + continue; 518 + 515 519 if (lrp->lr_return_type == RETURN_FSID && 516 520 !fh_fsid_match(&ls->ls_stid.sc_file->fi_fhandle, 517 521 &cstate->current_fh.fh_handle)) ··· 588 586 int error; 589 587 590 588 rpc_ntop((struct sockaddr *)&clp->cl_addr, addr_str, sizeof(addr_str)); 589 + 590 + trace_layout_recall_fail(&ls->ls_stid.sc_stateid); 591 591 592 592 printk(KERN_WARNING 593 593 "nfsd: client %s failed to respond to layout recall. "
+1 -1
fs/nfsd/nfs4proc.c
··· 1237 1237 nfserr = ops->proc_getdeviceinfo(exp->ex_path.mnt->mnt_sb, gdp); 1238 1238 1239 1239 gdp->gd_notify_types &= ops->notify_types; 1240 - exp_put(exp); 1241 1240 out: 1241 + exp_put(exp); 1242 1242 return nfserr; 1243 1243 } 1244 1244
+2 -2
fs/nfsd/nfs4state.c
··· 3221 3221 } else 3222 3222 nfs4_free_openowner(&oo->oo_owner); 3223 3223 spin_unlock(&clp->cl_lock); 3224 - return oo; 3224 + return ret; 3225 3225 } 3226 3226 3227 3227 static void init_open_stateid(struct nfs4_ol_stateid *stp, struct nfs4_file *fp, struct nfsd4_open *open) { ··· 5062 5062 } else 5063 5063 nfs4_free_lockowner(&lo->lo_owner); 5064 5064 spin_unlock(&clp->cl_lock); 5065 - return lo; 5065 + return ret; 5066 5066 } 5067 5067 5068 5068 static void
+16 -4
fs/nfsd/nfs4xdr.c
··· 1562 1562 p = xdr_decode_hyper(p, &lgp->lg_seg.offset); 1563 1563 p = xdr_decode_hyper(p, &lgp->lg_seg.length); 1564 1564 p = xdr_decode_hyper(p, &lgp->lg_minlength); 1565 - nfsd4_decode_stateid(argp, &lgp->lg_sid); 1565 + 1566 + status = nfsd4_decode_stateid(argp, &lgp->lg_sid); 1567 + if (status) 1568 + return status; 1569 + 1566 1570 READ_BUF(4); 1567 1571 lgp->lg_maxcount = be32_to_cpup(p++); 1568 1572 ··· 1584 1580 p = xdr_decode_hyper(p, &lcp->lc_seg.offset); 1585 1581 p = xdr_decode_hyper(p, &lcp->lc_seg.length); 1586 1582 lcp->lc_reclaim = be32_to_cpup(p++); 1587 - nfsd4_decode_stateid(argp, &lcp->lc_sid); 1583 + 1584 + status = nfsd4_decode_stateid(argp, &lcp->lc_sid); 1585 + if (status) 1586 + return status; 1587 + 1588 1588 READ_BUF(4); 1589 1589 lcp->lc_newoffset = be32_to_cpup(p++); 1590 1590 if (lcp->lc_newoffset) { ··· 1636 1628 READ_BUF(16); 1637 1629 p = xdr_decode_hyper(p, &lrp->lr_seg.offset); 1638 1630 p = xdr_decode_hyper(p, &lrp->lr_seg.length); 1639 - nfsd4_decode_stateid(argp, &lrp->lr_sid); 1631 + 1632 + status = nfsd4_decode_stateid(argp, &lrp->lr_sid); 1633 + if (status) 1634 + return status; 1635 + 1640 1636 READ_BUF(4); 1641 1637 lrp->lrf_body_len = be32_to_cpup(p++); 1642 1638 if (lrp->lrf_body_len > 0) { ··· 4135 4123 return nfserr_resource; 4136 4124 *p++ = cpu_to_be32(lrp->lrs_present); 4137 4125 if (lrp->lrs_present) 4138 - nfsd4_encode_stateid(xdr, &lrp->lr_sid); 4126 + return nfsd4_encode_stateid(xdr, &lrp->lr_sid); 4139 4127 return nfs_ok; 4140 4128 } 4141 4129 #endif /* CONFIG_NFSD_PNFS */
+5 -1
fs/nfsd/nfscache.c
··· 165 165 { 166 166 unsigned int hashsize; 167 167 unsigned int i; 168 + int status = 0; 168 169 169 170 max_drc_entries = nfsd_cache_size_limit(); 170 171 atomic_set(&num_drc_entries, 0); 171 172 hashsize = nfsd_hashsize(max_drc_entries); 172 173 maskbits = ilog2(hashsize); 173 174 174 - register_shrinker(&nfsd_reply_cache_shrinker); 175 + status = register_shrinker(&nfsd_reply_cache_shrinker); 176 + if (status) 177 + return status; 178 + 175 179 drc_slab = kmem_cache_create("nfsd_drc", sizeof(struct svc_cacherep), 176 180 0, 0, NULL); 177 181 if (!drc_slab)
+1
include/linux/fs.h
··· 604 604 struct mutex i_mutex; 605 605 606 606 unsigned long dirtied_when; /* jiffies of first dirtying */ 607 + unsigned long dirtied_time_when; 607 608 608 609 struct hlist_node i_hash; 609 610 struct list_head i_wb_list; /* backing dev IO list */
+17
include/linux/irqchip/arm-gic-v3.h
··· 126 126 #define GICR_PROPBASER_WaWb (5U << 7) 127 127 #define GICR_PROPBASER_RaWaWt (6U << 7) 128 128 #define GICR_PROPBASER_RaWaWb (7U << 7) 129 + #define GICR_PROPBASER_CACHEABILITY_MASK (7U << 7) 129 130 #define GICR_PROPBASER_IDBITS_MASK (0x1f) 131 + 132 + #define GICR_PENDBASER_NonShareable (0U << 10) 133 + #define GICR_PENDBASER_InnerShareable (1U << 10) 134 + #define GICR_PENDBASER_OuterShareable (2U << 10) 135 + #define GICR_PENDBASER_SHAREABILITY_MASK (3UL << 10) 136 + #define GICR_PENDBASER_nCnB (0U << 7) 137 + #define GICR_PENDBASER_nC (1U << 7) 138 + #define GICR_PENDBASER_RaWt (2U << 7) 139 + #define GICR_PENDBASER_RaWb (3U << 7) 140 + #define GICR_PENDBASER_WaWt (4U << 7) 141 + #define GICR_PENDBASER_WaWb (5U << 7) 142 + #define GICR_PENDBASER_RaWaWt (6U << 7) 143 + #define GICR_PENDBASER_RaWaWb (7U << 7) 144 + #define GICR_PENDBASER_CACHEABILITY_MASK (7U << 7) 130 145 131 146 /* 132 147 * Re-Distributor registers, offsets from SGI_base ··· 197 182 #define GITS_CBASER_WaWb (5UL << 59) 198 183 #define GITS_CBASER_RaWaWt (6UL << 59) 199 184 #define GITS_CBASER_RaWaWb (7UL << 59) 185 + #define GITS_CBASER_CACHEABILITY_MASK (7UL << 59) 200 186 #define GITS_CBASER_NonShareable (0UL << 10) 201 187 #define GITS_CBASER_InnerShareable (1UL << 10) 202 188 #define GITS_CBASER_OuterShareable (2UL << 10) ··· 214 198 #define GITS_BASER_WaWb (5UL << 59) 215 199 #define GITS_BASER_RaWaWt (6UL << 59) 216 200 #define GITS_BASER_RaWaWb (7UL << 59) 201 + #define GITS_BASER_CACHEABILITY_MASK (7UL << 59) 217 202 #define GITS_BASER_TYPE_SHIFT (56) 218 203 #define GITS_BASER_TYPE(r) (((r) >> GITS_BASER_TYPE_SHIFT) & 7) 219 204 #define GITS_BASER_ENTRY_SIZE_SHIFT (48)
+1
include/linux/lcm.h
··· 4 4 #include <linux/compiler.h> 5 5 6 6 unsigned long lcm(unsigned long a, unsigned long b) __attribute_const__; 7 + unsigned long lcm_not_zero(unsigned long a, unsigned long b) __attribute_const__; 7 8 8 9 #endif /* _LCM_H */
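The new `lcm_not_zero()` declaration pairs with an implementation in `lib/lcm.c`. As a hedged, userspace-only sketch of the semantics the name suggests (behave like `lcm()`, but fall back to the non-zero argument instead of returning 0 when either input is zero), with local `gcd()`/`lcm()` helpers supplied purely for self-containment:

```c
#include <assert.h>

static unsigned long gcd(unsigned long a, unsigned long b)
{
	while (b) {
		unsigned long t = a % b;
		a = b;
		b = t;
	}
	return a;
}

static unsigned long lcm(unsigned long a, unsigned long b)
{
	if (a && b)
		return (a / gcd(a, b)) * b;	/* divide first to limit overflow */
	return 0;
}

/* Like lcm(), but never collapses to 0 when one operand is 0:
 * useful when combining I/O limits where 0 means "no constraint". */
static unsigned long lcm_not_zero(unsigned long a, unsigned long b)
{
	unsigned long l = lcm(a, b);

	if (l)
		return l;
	return a ? a : b;
}
```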
+1
include/linux/libata.h
··· 232 232 * led */ 233 233 ATA_FLAG_NO_DIPM = (1 << 23), /* host not happy with DIPM */ 234 234 ATA_FLAG_LOWTAG = (1 << 24), /* host wants lowest available tag */ 235 + ATA_FLAG_SAS_HOST = (1 << 25), /* SAS host */ 235 236 236 237 /* bits 24:31 of ap->flags are reserved for LLD specific flags */ 237 238
+3
include/linux/mfd/palmas.h
··· 2999 2999 #define PALMAS_GPADC_TRIM15 0x0E
3000 3000 #define PALMAS_GPADC_TRIM16 0x0F
3001 3001 
3002 + /* TPS659038 regen2_ctrl offset is different from palmas */
3003 + #define TPS659038_REGEN2_CTRL 0x12
3002 3005 /* TPS65917 Interrupt registers */
3003 3006 
3004 3007 /* Registers for function INTERRUPT */
+6
include/linux/netdevice.h
··· 2185 2185 void synchronize_net(void); 2186 2186 int init_dummy_netdev(struct net_device *dev); 2187 2187 2188 + DECLARE_PER_CPU(int, xmit_recursion); 2189 + static inline int dev_recursion_level(void) 2190 + { 2191 + return this_cpu_read(xmit_recursion); 2192 + } 2193 + 2188 2194 struct net_device *dev_get_by_index(struct net *net, int ifindex); 2189 2195 struct net_device *__dev_get_by_index(struct net *net, int ifindex); 2190 2196 struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex);
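`dev_recursion_level()` exposes a per-CPU counter that the core transmit path increments around nested transmits (an inference from the `xmit_recursion` name; the counter's users are not shown in this hunk). A minimal userspace sketch of the pattern, with C11 `_Thread_local` standing in for the kernel's per-CPU variable and a hypothetical `xmit_one()` caller:

```c
#include <assert.h>

static _Thread_local int xmit_recursion;

static int dev_recursion_level(void)
{
	return xmit_recursion;
}

static int observed_level;

/* Bump the counter around a possibly re-entrant transmit so that
 * callees can detect they are running nested (e.g. via a tunnel
 * device) and skip work that is unsafe in that context. */
static void xmit_one(int depth)
{
	xmit_recursion++;
	if (depth > 0)
		xmit_one(depth - 1);	/* nested transmit */
	else
		observed_level = dev_recursion_level();
	xmit_recursion--;
}
```

The `ip6_skb_dst_mtu()` change further down in this merge uses exactly this check to avoid dereferencing socket state from a nested transmit context.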
+1 -1
include/linux/regulator/driver.h
··· 316 316 * @driver_data: private regulator data 317 317 * @of_node: OpenFirmware node to parse for device tree bindings (may be 318 318 * NULL). 319 - * @regmap: regmap to use for core regmap helpers if dev_get_regulator() is 319 + * @regmap: regmap to use for core regmap helpers if dev_get_regmap() is 320 320 * insufficient. 321 321 * @ena_gpio_initialized: GPIO controlling regulator enable was properly 322 322 * initialized, meaning that >= 0 is a valid gpio
+5 -4
include/linux/sched.h
··· 1625 1625 1626 1626 /* 1627 1627 * numa_faults_locality tracks if faults recorded during the last 1628 - * scan window were remote/local. The task scan period is adapted 1629 - * based on the locality of the faults with different weights 1630 - * depending on whether they were shared or private faults 1628 + * scan window were remote/local or failed to migrate. The task scan 1629 + * period is adapted based on the locality of the faults with different 1630 + * weights depending on whether they were shared or private faults 1631 1631 */ 1632 - unsigned long numa_faults_locality[2]; 1632 + unsigned long numa_faults_locality[3]; 1633 1633 1634 1634 unsigned long numa_pages_migrated; 1635 1635 #endif /* CONFIG_NUMA_BALANCING */ ··· 1719 1719 #define TNF_NO_GROUP 0x02 1720 1720 #define TNF_SHARED 0x04 1721 1721 #define TNF_FAULT_LOCAL 0x08 1722 + #define TNF_MIGRATE_FAIL 0x10 1722 1723 1723 1724 #ifdef CONFIG_NUMA_BALANCING 1724 1725 extern void task_numa_fault(int last_node, int node, int pages, int flags);
+9 -9
include/linux/sunrpc/debug.h
··· 60 60 #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 61 61 void rpc_register_sysctl(void); 62 62 void rpc_unregister_sysctl(void); 63 - int sunrpc_debugfs_init(void); 63 + void sunrpc_debugfs_init(void); 64 64 void sunrpc_debugfs_exit(void); 65 - int rpc_clnt_debugfs_register(struct rpc_clnt *); 65 + void rpc_clnt_debugfs_register(struct rpc_clnt *); 66 66 void rpc_clnt_debugfs_unregister(struct rpc_clnt *); 67 - int rpc_xprt_debugfs_register(struct rpc_xprt *); 67 + void rpc_xprt_debugfs_register(struct rpc_xprt *); 68 68 void rpc_xprt_debugfs_unregister(struct rpc_xprt *); 69 69 #else 70 - static inline int 70 + static inline void 71 71 sunrpc_debugfs_init(void) 72 72 { 73 - return 0; 73 + return; 74 74 } 75 75 76 76 static inline void ··· 79 79 return; 80 80 } 81 81 82 - static inline int 82 + static inline void 83 83 rpc_clnt_debugfs_register(struct rpc_clnt *clnt) 84 84 { 85 - return 0; 85 + return; 86 86 } 87 87 88 88 static inline void ··· 91 91 return; 92 92 } 93 93 94 - static inline int 94 + static inline void 95 95 rpc_xprt_debugfs_register(struct rpc_xprt *xprt) 96 96 { 97 - return 0; 97 + return; 98 98 } 99 99 100 100 static inline void
+15 -1
include/linux/usb/usbnet.h
··· 227 227 struct urb *urb; 228 228 struct usbnet *dev; 229 229 enum skb_state state; 230 - size_t length; 230 + long length; 231 + unsigned long packets; 231 232 }; 233 + 234 + /* Drivers that set FLAG_MULTI_PACKET must call this in their 235 + * tx_fixup method before returning an skb. 236 + */ 237 + static inline void 238 + usbnet_set_skb_tx_stats(struct sk_buff *skb, 239 + unsigned long packets, long bytes_delta) 240 + { 241 + struct skb_data *entry = (struct skb_data *) skb->cb; 242 + 243 + entry->packets = packets; 244 + entry->length = bytes_delta; 245 + } 232 246 233 247 extern int usbnet_open(struct net_device *net); 234 248 extern int usbnet_stop(struct net_device *net);
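`usbnet_set_skb_tx_stats()` stashes per-skb transmit statistics in the skb's `cb[]` scratch area by casting it to a layer-private struct. A self-contained sketch of that control-buffer pattern, where the minimal `struct sk_buff` is a stand-in for the real one (the real struct is far larger; only the 48-byte `cb[]` matters here) and `struct skb_data` mirrors the fields from the header diff above:

```c
#include <assert.h>
#include <string.h>

struct sk_buff {
	char cb[48];	/* scratch space owned by the current layer */
};

struct skb_data {
	long length;
	unsigned long packets;
};

/* Record how many packets and how many bytes (as a delta) this skb
 * represents, so the completion path can update net_device stats. */
static inline void
usbnet_set_skb_tx_stats(struct sk_buff *skb,
			unsigned long packets, long bytes_delta)
{
	struct skb_data *entry = (struct skb_data *)skb->cb;

	entry->packets = packets;
	entry->length = bytes_delta;
}
```

Storing the byte count as a signed delta (rather than `size_t`) is what allows a `tx_fixup` that strips or pads headers to report a negative adjustment.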
+3
include/linux/writeback.h
··· 130 130 extern unsigned long vm_dirty_bytes; 131 131 extern unsigned int dirty_writeback_interval; 132 132 extern unsigned int dirty_expire_interval; 133 + extern unsigned int dirtytime_expire_interval; 133 134 extern int vm_highmem_is_dirtyable; 134 135 extern int block_dump; 135 136 extern int laptop_mode; ··· 147 146 extern int dirty_bytes_handler(struct ctl_table *table, int write, 148 147 void __user *buffer, size_t *lenp, 149 148 loff_t *ppos); 149 + int dirtytime_interval_handler(struct ctl_table *table, int write, 150 + void __user *buffer, size_t *lenp, loff_t *ppos); 150 151 151 152 struct ctl_table; 152 153 int dirty_writeback_centisecs_handler(struct ctl_table *, int,
-16
include/net/ip.h
··· 453 453 454 454 #endif 455 455 456 - static inline int sk_mc_loop(struct sock *sk) 457 - { 458 - if (!sk) 459 - return 1; 460 - switch (sk->sk_family) { 461 - case AF_INET: 462 - return inet_sk(sk)->mc_loop; 463 - #if IS_ENABLED(CONFIG_IPV6) 464 - case AF_INET6: 465 - return inet6_sk(sk)->mc_loop; 466 - #endif 467 - } 468 - WARN_ON(1); 469 - return 1; 470 - } 471 - 472 456 bool ip_call_ra_chain(struct sk_buff *skb); 473 457 474 458 /*
+2 -1
include/net/ip6_route.h
··· 174 174 175 175 static inline int ip6_skb_dst_mtu(struct sk_buff *skb) 176 176 { 177 - struct ipv6_pinfo *np = skb->sk ? inet6_sk(skb->sk) : NULL; 177 + struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ? 178 + inet6_sk(skb->sk) : NULL; 178 179 179 180 return (np && np->pmtudisc >= IPV6_PMTUDISC_PROBE) ? 180 181 skb_dst(skb)->dev->mtu : dst_mtu(skb_dst(skb));
+10
include/net/netfilter/nf_log.h
··· 79 79 const struct nf_loginfo *li, 80 80 const char *fmt, ...); 81 81 82 + __printf(8, 9) 83 + void nf_log_trace(struct net *net, 84 + u_int8_t pf, 85 + unsigned int hooknum, 86 + const struct sk_buff *skb, 87 + const struct net_device *in, 88 + const struct net_device *out, 89 + const struct nf_loginfo *li, 90 + const char *fmt, ...); 91 + 82 92 struct nf_log_buf; 83 93 84 94 struct nf_log_buf *nf_log_buf_open(void);
+2
include/net/sock.h
··· 1762 1762 1763 1763 struct dst_entry *sk_dst_check(struct sock *sk, u32 cookie); 1764 1764 1765 + bool sk_mc_loop(struct sock *sk); 1766 + 1765 1767 static inline bool sk_can_gso(const struct sock *sk) 1766 1768 { 1767 1769 return net_gso_ok(sk->sk_route_caps, sk->sk_gso_type);
+61 -62
include/trace/events/regmap.h
··· 7 7 #include <linux/ktime.h> 8 8 #include <linux/tracepoint.h> 9 9 10 - struct device; 11 - struct regmap; 10 + #include "../../../drivers/base/regmap/internal.h" 12 11 13 12 /* 14 13 * Log register events 15 14 */ 16 15 DECLARE_EVENT_CLASS(regmap_reg, 17 16 18 - TP_PROTO(struct device *dev, unsigned int reg, 17 + TP_PROTO(struct regmap *map, unsigned int reg, 19 18 unsigned int val), 20 19 21 - TP_ARGS(dev, reg, val), 20 + TP_ARGS(map, reg, val), 22 21 23 22 TP_STRUCT__entry( 24 - __string( name, dev_name(dev) ) 25 - __field( unsigned int, reg ) 26 - __field( unsigned int, val ) 23 + __string( name, regmap_name(map) ) 24 + __field( unsigned int, reg ) 25 + __field( unsigned int, val ) 27 26 ), 28 27 29 28 TP_fast_assign( 30 - __assign_str(name, dev_name(dev)); 29 + __assign_str(name, regmap_name(map)); 31 30 __entry->reg = reg; 32 31 __entry->val = val; 33 32 ), ··· 38 39 39 40 DEFINE_EVENT(regmap_reg, regmap_reg_write, 40 41 41 - TP_PROTO(struct device *dev, unsigned int reg, 42 + TP_PROTO(struct regmap *map, unsigned int reg, 42 43 unsigned int val), 43 44 44 - TP_ARGS(dev, reg, val) 45 + TP_ARGS(map, reg, val) 45 46 46 47 ); 47 48 48 49 DEFINE_EVENT(regmap_reg, regmap_reg_read, 49 50 50 - TP_PROTO(struct device *dev, unsigned int reg, 51 + TP_PROTO(struct regmap *map, unsigned int reg, 51 52 unsigned int val), 52 53 53 - TP_ARGS(dev, reg, val) 54 + TP_ARGS(map, reg, val) 54 55 55 56 ); 56 57 57 58 DEFINE_EVENT(regmap_reg, regmap_reg_read_cache, 58 59 59 - TP_PROTO(struct device *dev, unsigned int reg, 60 + TP_PROTO(struct regmap *map, unsigned int reg, 60 61 unsigned int val), 61 62 62 - TP_ARGS(dev, reg, val) 63 + TP_ARGS(map, reg, val) 63 64 64 65 ); 65 66 66 67 DECLARE_EVENT_CLASS(regmap_block, 67 68 68 - TP_PROTO(struct device *dev, unsigned int reg, int count), 69 + TP_PROTO(struct regmap *map, unsigned int reg, int count), 69 70 70 - TP_ARGS(dev, reg, count), 71 + TP_ARGS(map, reg, count), 71 72 72 73 TP_STRUCT__entry( 73 - __string( name, dev_name(dev) )
74 - __field( unsigned int, reg ) 75 - __field( int, count ) 74 + __string( name, regmap_name(map) ) 75 + __field( unsigned int, reg ) 76 + __field( int, count ) 76 77 ), 77 78 78 79 TP_fast_assign( 79 - __assign_str(name, dev_name(dev)); 80 + __assign_str(name, regmap_name(map)); 80 81 __entry->reg = reg; 81 82 __entry->count = count; 82 83 ), ··· 88 89 89 90 DEFINE_EVENT(regmap_block, regmap_hw_read_start, 90 91 91 - TP_PROTO(struct device *dev, unsigned int reg, int count), 92 + TP_PROTO(struct regmap *map, unsigned int reg, int count), 92 93 93 - TP_ARGS(dev, reg, count) 94 + TP_ARGS(map, reg, count) 94 95 ); 95 96 96 97 DEFINE_EVENT(regmap_block, regmap_hw_read_done, 97 98 98 - TP_PROTO(struct device *dev, unsigned int reg, int count), 99 + TP_PROTO(struct regmap *map, unsigned int reg, int count), 99 100 100 - TP_ARGS(dev, reg, count) 101 + TP_ARGS(map, reg, count) 101 102 ); 102 103 103 104 DEFINE_EVENT(regmap_block, regmap_hw_write_start, 104 105 105 - TP_PROTO(struct device *dev, unsigned int reg, int count), 106 + TP_PROTO(struct regmap *map, unsigned int reg, int count), 106 107 107 - TP_ARGS(dev, reg, count) 108 + TP_ARGS(map, reg, count) 108 109 ); 109 110 110 111 DEFINE_EVENT(regmap_block, regmap_hw_write_done, 111 112 112 - TP_PROTO(struct device *dev, unsigned int reg, int count), 113 + TP_PROTO(struct regmap *map, unsigned int reg, int count), 113 114 114 - TP_ARGS(dev, reg, count) 115 + TP_ARGS(map, reg, count) 115 116 ); 116 117 117 118 TRACE_EVENT(regcache_sync, 118 119 119 - TP_PROTO(struct device *dev, const char *type, 120 + TP_PROTO(struct regmap *map, const char *type, 120 121 const char *status), 121 122 122 - TP_ARGS(dev, type, status), 123 + TP_ARGS(map, type, status), 123 124 124 125 TP_STRUCT__entry( 125 - __string( name, dev_name(dev) ) 126 - __string( status, status ) 127 - __string( type, type ) 128 - __field( int, type ) 126 + __string( name, regmap_name(map) ) 127 + __string( status, status ) 128 + __string( type, type )
129 + __field( int, type ) 129 130 ), 130 131 131 132 TP_fast_assign( 132 - __assign_str(name, dev_name(dev)); 133 + __assign_str(name, regmap_name(map)); 133 134 __assign_str(status, status); 134 135 __assign_str(type, type); 135 136 ), ··· 140 141 141 142 DECLARE_EVENT_CLASS(regmap_bool, 142 143 143 - TP_PROTO(struct device *dev, bool flag), 144 + TP_PROTO(struct regmap *map, bool flag), 144 145 145 - TP_ARGS(dev, flag), 146 + TP_ARGS(map, flag), 146 147 147 148 TP_STRUCT__entry( 148 - __string( name, dev_name(dev) ) 149 - __field( int, flag ) 149 + __string( name, regmap_name(map) ) 150 + __field( int, flag ) 150 151 ), 151 152 152 153 TP_fast_assign( 153 - __assign_str(name, dev_name(dev)); 154 + __assign_str(name, regmap_name(map)); 154 155 __entry->flag = flag; 155 156 ), 156 157 ··· 160 161 161 162 DEFINE_EVENT(regmap_bool, regmap_cache_only, 162 163 163 - TP_PROTO(struct device *dev, bool flag), 164 + TP_PROTO(struct regmap *map, bool flag), 164 165 165 - TP_ARGS(dev, flag) 166 + TP_ARGS(map, flag) 166 167 167 168 ); 168 169 169 170 DEFINE_EVENT(regmap_bool, regmap_cache_bypass, 170 171 171 - TP_PROTO(struct device *dev, bool flag), 172 + TP_PROTO(struct regmap *map, bool flag), 172 173 173 - TP_ARGS(dev, flag) 174 + TP_ARGS(map, flag) 174 175 175 176 ); 176 177 177 178 DECLARE_EVENT_CLASS(regmap_async, 178 179 179 - TP_PROTO(struct device *dev), 180 + TP_PROTO(struct regmap *map), 180 181 181 - TP_ARGS(dev), 182 + TP_ARGS(map), 182 183 183 184 TP_STRUCT__entry( 184 - __string( name, dev_name(dev) ) 185 + __string( name, regmap_name(map) ) 185 186 ), 186 187 187 188 TP_fast_assign( 188 - __assign_str(name, dev_name(dev)); 189 + __assign_str(name, regmap_name(map)); 189 190 ), 190 191 191 192 TP_printk("%s", __get_str(name)) ··· 193 194 194 195 DEFINE_EVENT(regmap_block, regmap_async_write_start, 195 196 196 - TP_PROTO(struct device *dev, unsigned int reg, int count), 197 + TP_PROTO(struct regmap *map, unsigned int reg, int count), 197 198 198 - TP_ARGS(dev, reg, count)
199 + TP_ARGS(map, reg, count) 199 200 ); 200 201 201 202 DEFINE_EVENT(regmap_async, regmap_async_io_complete, 202 203 203 - TP_PROTO(struct device *dev), 204 + TP_PROTO(struct regmap *map), 204 205 205 - TP_ARGS(dev) 206 + TP_ARGS(map) 206 207 207 208 ); 208 209 209 210 DEFINE_EVENT(regmap_async, regmap_async_complete_start, 210 211 211 - TP_PROTO(struct device *dev), 212 + TP_PROTO(struct regmap *map), 212 213 213 - TP_ARGS(dev) 214 + TP_ARGS(map) 214 215 215 216 ); 216 217 217 218 DEFINE_EVENT(regmap_async, regmap_async_complete_done, 218 219 219 - TP_PROTO(struct device *dev), 220 + TP_PROTO(struct regmap *map), 220 221 221 - TP_ARGS(dev) 222 + TP_ARGS(map) 222 223 223 224 ); 224 225 225 226 TRACE_EVENT(regcache_drop_region, 226 227 227 - TP_PROTO(struct device *dev, unsigned int from, 228 + TP_PROTO(struct regmap *map, unsigned int from, 228 229 unsigned int to), 229 230 230 - TP_ARGS(dev, from, to), 231 + TP_ARGS(map, from, to), 231 232 232 233 TP_STRUCT__entry( 233 - __string( name, dev_name(dev) ) 234 - __field( unsigned int, from ) 235 - __field( unsigned int, to ) 234 + __string( name, regmap_name(map) ) 235 + __field( unsigned int, from ) 236 + __field( unsigned int, to ) 236 237 ), 237 238 238 239 TP_fast_assign( 239 - __assign_str(name, dev_name(dev)); 240 + __assign_str(name, regmap_name(map)); 240 241 __entry->from = from; 241 242 __entry->to = to; 242 243 ),
+2 -1
include/uapi/linux/input.h
··· 973 973 */ 974 974 #define MT_TOOL_FINGER 0 975 975 #define MT_TOOL_PEN 1 976 - #define MT_TOOL_MAX 1 976 + #define MT_TOOL_PALM 2 977 + #define MT_TOOL_MAX 2 977 978 978 979 /* 979 980 * Values describing the status of a force-feedback effect
+1 -1
include/uapi/linux/nfsd/export.h
··· 47 47 * exported filesystem. 48 48 */ 49 49 #define NFSEXP_V4ROOT 0x10000 50 - #define NFSEXP_NOPNFS 0x20000 50 + #define NFSEXP_PNFS 0x20000 51 51 52 52 /* All flags that we claim to support. (Note we don't support NOACL.) */ 53 53 #define NFSEXP_ALLFLAGS 0x3FE7F
+10
kernel/events/core.c
··· 4574 4574 { 4575 4575 struct perf_event *event = container_of(entry, 4576 4576 struct perf_event, pending); 4577 + int rctx; 4578 + 4579 + rctx = perf_swevent_get_recursion_context(); 4580 + /* 4581 + * If we 'fail' here, that's OK, it means recursion is already disabled 4582 + * and we won't recurse 'further'. 4583 + */ 4577 4584 4578 4585 if (event->pending_disable) { 4579 4586 event->pending_disable = 0; ··· 4591 4584 event->pending_wakeup = 0; 4592 4585 perf_event_wakeup(event); 4593 4586 } 4587 + 4588 + if (rctx >= 0) 4589 + perf_swevent_put_recursion_context(rctx); 4594 4590 } 4595 4591 4596 4592 /*
+55 -26
kernel/locking/lockdep.c
··· 633 633 if (!new_class->name) 634 634 return 0; 635 635 636 - list_for_each_entry(class, &all_lock_classes, lock_entry) { 636 + list_for_each_entry_rcu(class, &all_lock_classes, lock_entry) { 637 637 if (new_class->key - new_class->subclass == class->key) 638 638 return class->name_version; 639 639 if (class->name && !strcmp(class->name, new_class->name)) ··· 700 700 hash_head = classhashentry(key); 701 701 702 702 /* 703 - * We can walk the hash lockfree, because the hash only 704 - * grows, and we are careful when adding entries to the end: 703 + * We do an RCU walk of the hash, see lockdep_free_key_range(). 705 704 */ 706 - list_for_each_entry(class, hash_head, hash_entry) { 705 + if (DEBUG_LOCKS_WARN_ON(!irqs_disabled())) 706 + return NULL; 707 + 708 + list_for_each_entry_rcu(class, hash_head, hash_entry) { 707 709 if (class->key == key) { 708 710 /* 709 711 * Huh! same key, different name? Did someone trample ··· 730 728 struct lockdep_subclass_key *key; 731 729 struct list_head *hash_head; 732 730 struct lock_class *class; 733 - unsigned long flags; 731 + 732 + DEBUG_LOCKS_WARN_ON(!irqs_disabled()); 734 733 735 734 class = look_up_lock_class(lock, subclass); 736 735 if (likely(class)) ··· 753 750 key = lock->key->subkeys + subclass; 754 751 hash_head = classhashentry(key); 755 752 756 - raw_local_irq_save(flags); 757 753 if (!graph_lock()) { 758 - raw_local_irq_restore(flags); 759 754 return NULL; 760 755 } 761 756 /* 762 757 * We have to do the hash-walk again, to avoid races 763 758 * with another CPU: 764 759 */ 765 - list_for_each_entry(class, hash_head, hash_entry) 760 + list_for_each_entry_rcu(class, hash_head, hash_entry) { 766 761 if (class->key == key) 767 762 goto out_unlock_set; 763 + } 764 + 768 765 /* 769 766 * Allocate a new key from the static array, and add it to 770 767 * the hash: 771 768 */ 772 769 if (nr_lock_classes >= MAX_LOCKDEP_KEYS) { 773 770 if (!debug_locks_off_graph_unlock()) { 774 - raw_local_irq_restore(flags); 775 771 return NULL;
776 772 } 777 - raw_local_irq_restore(flags); 778 773 779 774 print_lockdep_off("BUG: MAX_LOCKDEP_KEYS too low!"); 780 775 dump_stack(); ··· 799 798 800 799 if (verbose(class)) { 801 800 graph_unlock(); 802 - raw_local_irq_restore(flags); 803 801 804 802 printk("\nnew class %p: %s", class->key, class->name); 805 803 if (class->name_version > 1) ··· 806 806 printk("\n"); 807 807 dump_stack(); 808 808 809 - raw_local_irq_save(flags); 810 809 if (!graph_lock()) { 811 - raw_local_irq_restore(flags); 812 810 return NULL; 813 811 } 814 812 } 815 813 out_unlock_set: 816 814 graph_unlock(); 817 - raw_local_irq_restore(flags); 818 815 819 816 out_set_class_cache: 820 817 if (!subclass || force) ··· 867 870 entry->distance = distance; 868 871 entry->trace = *trace; 869 872 /* 870 - * Since we never remove from the dependency list, the list can 871 - * be walked lockless by other CPUs, it's only allocation 872 - * that must be protected by the spinlock. But this also means 873 - * we must make new entries visible only once writes to the 874 - * entry become visible - hence the RCU op: 873 + * Both allocation and removal are done under the graph lock; but 874 + * iteration is under RCU-sched; see look_up_lock_class() and 875 + * lockdep_free_key_range().
875 876 */ 876 877 list_add_tail_rcu(&entry->entry, head); 877 878 ··· 1020 1025 else 1021 1026 head = &lock->class->locks_before; 1022 1027 1023 - list_for_each_entry(entry, head, entry) { 1028 + DEBUG_LOCKS_WARN_ON(!irqs_disabled()); 1029 + 1030 + list_for_each_entry_rcu(entry, head, entry) { 1024 1031 if (!lock_accessed(entry)) { 1025 1032 unsigned int cq_depth; 1026 1033 mark_lock_accessed(entry, lock); ··· 2019 2022 * We can walk it lock-free, because entries only get added 2020 2023 * to the hash: 2021 2024 */ 2022 - list_for_each_entry(chain, hash_head, entry) { 2025 + list_for_each_entry_rcu(chain, hash_head, entry) { 2023 2026 if (chain->chain_key == chain_key) { 2024 2027 cache_hit: 2025 2028 debug_atomic_inc(chain_lookup_hits); ··· 2993 2996 if (unlikely(!debug_locks)) 2994 2997 return; 2995 2998 2996 - if (subclass) 2999 + if (subclass) { 3000 + unsigned long flags; 3001 + 3002 + if (DEBUG_LOCKS_WARN_ON(current->lockdep_recursion)) 3003 + return; 3004 + 3005 + raw_local_irq_save(flags); 3006 + current->lockdep_recursion = 1; 2997 3007 register_lock_class(lock, subclass, 1); 3008 + current->lockdep_recursion = 0; 3009 + raw_local_irq_restore(flags); 3010 + } 2998 3011 } 2999 3012 EXPORT_SYMBOL_GPL(lockdep_init_map); 3000 3013 ··· 3894 3887 return addr >= start && addr < start + size; 3895 3888 } 3896 3889 3890 + /* 3891 + * Used in module.c to remove lock classes from memory that is going to be 3892 + * freed; and possibly re-used by other modules. 3893 + * 3894 + * We will have had one sync_sched() before getting here, so we're guaranteed 3895 + * nobody will look up these exact classes -- they're properly dead but still 3896 + * allocated. 
3897 + */ 3897 3898 void lockdep_free_key_range(void *start, unsigned long size) 3898 3899 { 3899 - struct lock_class *class, *next; 3900 + struct lock_class *class; 3900 3901 struct list_head *head; 3901 3902 unsigned long flags; 3902 3903 int i; ··· 3920 3905 head = classhash_table + i; 3921 3906 if (list_empty(head)) 3922 3907 continue; 3923 - list_for_each_entry_safe(class, next, head, hash_entry) { 3908 + list_for_each_entry_rcu(class, head, hash_entry) { 3924 3909 if (within(class->key, start, size)) 3925 3910 zap_class(class); 3926 3911 else if (within(class->name, start, size)) ··· 3931 3916 if (locked) 3932 3917 graph_unlock(); 3933 3918 raw_local_irq_restore(flags); 3919 + 3920 + /* 3921 + * Wait for any possible iterators from look_up_lock_class() to pass 3922 + * before continuing to free the memory they refer to. 3923 + * 3924 + * sync_sched() is sufficient because the read-side is IRQ disable. 3925 + */ 3926 + synchronize_sched(); 3927 + 3928 + /* 3929 + * XXX at this point we could return the resources to the pool; 3930 + * instead we leak them. We would need to change to bitmap allocators 3931 + * instead of the linear allocators we have now. 3932 + */ 3934 3933 } 3935 3934 3936 3935 void lockdep_reset_lock(struct lockdep_map *lock) 3937 3936 { 3938 - struct lock_class *class, *next; 3937 + struct lock_class *class; 3939 3938 struct list_head *head; 3940 3939 unsigned long flags; 3941 3940 int i, j; ··· 3977 3948 head = classhash_table + i; 3978 3949 if (list_empty(head)) 3979 3950 continue; 3980 - list_for_each_entry_safe(class, next, head, hash_entry) { 3951 + list_for_each_entry_rcu(class, head, hash_entry) { 3981 3952 int match = 0; 3982 3953 3983 3954 for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
+4 -4
kernel/module.c
··· 1865 1865 kfree(mod->args); 1866 1866 percpu_modfree(mod); 1867 1867 1868 - /* Free lock-classes: */ 1868 + /* Free lock-classes; relies on the preceding sync_rcu(). */ 1869 1869 lockdep_free_key_range(mod->module_core, mod->core_size); 1870 1870 1871 1871 /* Finally, free the core (containing the module structure) */ ··· 3349 3349 module_bug_cleanup(mod); 3350 3350 mutex_unlock(&module_mutex); 3351 3351 3352 - /* Free lock-classes: */ 3353 - lockdep_free_key_range(mod->module_core, mod->core_size); 3354 - 3355 3352 /* we can't deallocate the module until we clear memory protection */ 3356 3353 unset_module_init_ro_nx(mod); 3357 3354 unset_module_core_ro_nx(mod); ··· 3372 3375 synchronize_rcu(); 3373 3376 mutex_unlock(&module_mutex); 3374 3377 free_module: 3378 + /* Free lock-classes; relies on the preceding sync_rcu() */ 3379 + lockdep_free_key_range(mod->module_core, mod->core_size); 3380 + 3375 3381 module_deallocate(mod, info); 3376 3382 free_copy: 3377 3383 free_copy(info);
+2
kernel/sched/core.c
··· 3034 3034 } else { 3035 3035 if (dl_prio(oldprio)) 3036 3036 p->dl.dl_boosted = 0; 3037 + if (rt_prio(oldprio)) 3038 + p->rt.timeout = 0; 3037 3039 p->sched_class = &fair_sched_class; 3038 3040 } 3039 3041
+6 -2
kernel/sched/fair.c
··· 1609 1609 /* 1610 1610 * If there were no record hinting faults then either the task is 1611 1611 * completely idle or all activity is areas that are not of interest 1612 - * to automatic numa balancing. Scan slower 1612 + * to automatic numa balancing. Related to that, if there were failed 1613 + * migration then it implies we are migrating too quickly or the local 1614 + * node is overloaded. In either case, scan slower 1613 1615 */ 1614 - if (local + shared == 0) { 1616 + if (local + shared == 0 || p->numa_faults_locality[2]) { 1615 1617 p->numa_scan_period = min(p->numa_scan_period_max, 1616 1618 p->numa_scan_period << 1); 1617 1619 ··· 2082 2080 2083 2081 if (migrated) 2084 2082 p->numa_pages_migrated += pages; 2083 + if (flags & TNF_MIGRATE_FAIL) 2084 + p->numa_faults_locality[2] += pages; 2085 2085 2086 2086 p->numa_faults[task_faults_idx(NUMA_MEMBUF, mem_node, priv)] += pages; 2087 2087 p->numa_faults[task_faults_idx(NUMA_CPUBUF, cpu_node, priv)] += pages;
+8
kernel/sysctl.c
··· 1228 1228 .extra1 = &zero, 1229 1229 }, 1230 1230 { 1231 + .procname = "dirtytime_expire_seconds", 1232 + .data = &dirtytime_expire_interval, 1233 + .maxlen = sizeof(dirty_expire_interval), 1234 + .mode = 0644, 1235 + .proc_handler = dirtytime_interval_handler, 1236 + .extra1 = &zero, 1237 + }, 1238 + { 1231 1239 .procname = "nr_pdflush_threads", 1232 1240 .mode = 0444 /* read-only */, 1233 1241 .proc_handler = pdflush_proc_obsolete,
+9 -2
kernel/time/tick-broadcast-hrtimer.c
··· 49 49 */ 50 50 static int bc_set_next(ktime_t expires, struct clock_event_device *bc) 51 51 { 52 + int bc_moved; 52 53 /* 53 54 * We try to cancel the timer first. If the callback is on 54 55 * flight on some other cpu then we let it handle it. If we ··· 61 60 * restart the timer because we are in the callback, but we 62 61 * can set the expiry time and let the callback return 63 62 * HRTIMER_RESTART. 63 + * 64 + * Since we are in the idle loop at this point and because 65 + * hrtimer_{start/cancel} functions call into tracing, 66 + * calls to these functions must be bound within RCU_NONIDLE. 64 67 */ 65 - if (hrtimer_try_to_cancel(&bctimer) >= 0) { 66 - hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED); 68 + RCU_NONIDLE(bc_moved = (hrtimer_try_to_cancel(&bctimer) >= 0) ? 69 + !hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED) : 70 + 0); 71 + if (bc_moved) { 67 72 /* Bind the "device" to the cpu */ 68 73 bc->bound_on = smp_processor_id(); 69 74 } else if (bc->bound_on == smp_processor_id()) {
+11
lib/lcm.c
··· 12 12 return 0; 13 13 } 14 14 EXPORT_SYMBOL_GPL(lcm); 15 + 16 + unsigned long lcm_not_zero(unsigned long a, unsigned long b) 17 + { 18 + unsigned long l = lcm(a, b); 19 + 20 + if (l) 21 + return l; 22 + 23 + return (b ? : a); 24 + } 25 + EXPORT_SYMBOL_GPL(lcm_not_zero);
+2
lib/nlattr.c
··· 279 279 int minlen = min_t(int, count, nla_len(src)); 280 280 281 281 memcpy(dest, nla_data(src), minlen); 282 + if (count > minlen) 283 + memset(dest + minlen, 0, count - minlen); 282 284 283 285 return minlen; 284 286 }
+13 -13
mm/huge_memory.c
··· 1260 1260 int target_nid, last_cpupid = -1; 1261 1261 bool page_locked; 1262 1262 bool migrated = false; 1263 + bool was_writable; 1263 1264 int flags = 0; 1264 1265 1265 1266 /* A PROT_NONE fault should not end up here */ ··· 1292 1291 flags |= TNF_FAULT_LOCAL; 1293 1292 } 1294 1293 1295 - /* 1296 - * Avoid grouping on DSO/COW pages in specific and RO pages 1297 - * in general, RO pages shouldn't hurt as much anyway since 1298 - * they can be in shared cache state. 1299 - * 1300 - * FIXME! This checks "pmd_dirty()" as an approximation of 1301 - * "is this a read-only page", since checking "pmd_write()" 1302 - * is even more broken. We haven't actually turned this into 1303 - * a writable page, so pmd_write() will always be false. 1304 - */ 1305 - if (!pmd_dirty(pmd)) 1294 + /* See similar comment in do_numa_page for explanation */ 1295 + if (!(vma->vm_flags & VM_WRITE)) 1306 1296 flags |= TNF_NO_GROUP; 1307 1297 1308 1298 /* ··· 1350 1358 if (migrated) { 1351 1359 flags |= TNF_MIGRATED; 1352 1360 page_nid = target_nid; 1353 - } 1361 + } else 1362 + flags |= TNF_MIGRATE_FAIL; 1354 1363 1355 1364 goto out; 1356 1365 clear_pmdnuma: 1357 1366 BUG_ON(!PageLocked(page)); 1367 + was_writable = pmd_write(pmd); 1358 1368 pmd = pmd_modify(pmd, vma->vm_page_prot); 1369 + pmd = pmd_mkyoung(pmd); 1370 + if (was_writable) 1371 + pmd = pmd_mkwrite(pmd); 1359 1372 set_pmd_at(mm, haddr, pmdp, pmd); 1360 1373 update_mmu_cache_pmd(vma, addr, pmdp); 1361 1374 unlock_page(page); ··· 1484 1487 1485 1488 if (__pmd_trans_huge_lock(pmd, vma, &ptl) == 1) { 1486 1489 pmd_t entry; 1490 + bool preserve_write = prot_numa && pmd_write(*pmd); 1487 1491 ret = 1; 1488 1492 1489 1493 /* ··· 1500 1502 if (!prot_numa || !pmd_protnone(*pmd)) { 1501 1503 entry = pmdp_get_and_clear_notify(mm, addr, pmd); 1502 1504 entry = pmd_modify(entry, newprot); 1505 + if (preserve_write) 1506 + entry = pmd_mkwrite(entry); 1503 1507 ret = HPAGE_PMD_NR; 1504 1508 set_pmd_at(mm, addr, pmd, entry); 1505 - BUG_ON(pmd_write(entry));
1509 + BUG_ON(!preserve_write && pmd_write(entry)); 1506 1510 } 1507 1511 spin_unlock(ptl); 1508 1512 }
+12 -10
mm/memory.c
··· 3035 3035 int last_cpupid; 3036 3036 int target_nid; 3037 3037 bool migrated = false; 3038 + bool was_writable = pte_write(pte); 3038 3039 int flags = 0; 3039 3040 3040 3041 /* A PROT_NONE fault should not end up here */ ··· 3060 3059 /* Make it present again */ 3061 3060 pte = pte_modify(pte, vma->vm_page_prot); 3062 3061 pte = pte_mkyoung(pte); 3062 + if (was_writable) 3063 + pte = pte_mkwrite(pte); 3063 3064 set_pte_at(mm, addr, ptep, pte); 3064 3065 update_mmu_cache(vma, addr, ptep); 3065 3066 ··· 3072 3069 } 3073 3070 3074 3071 /* 3075 - * Avoid grouping on DSO/COW pages in specific and RO pages 3076 - * in general, RO pages shouldn't hurt as much anyway since 3077 - * they can be in shared cache state. 3078 - * 3079 - * FIXME! This checks "pmd_dirty()" as an approximation of 3080 - * "is this a read-only page", since checking "pmd_write()" 3081 - * is even more broken. We haven't actually turned this into 3082 - * a writable page, so pmd_write() will always be false. 3072 + * Avoid grouping on RO pages in general. RO pages shouldn't hurt as 3073 + * much anyway since they can be in shared cache state. This misses 3074 + * the case where a mapping is writable but the process never writes 3075 + * to it but pte_write gets cleared during protection updates and 3076 + * pte_dirty has unpredictable behaviour between PTE scan updates, 3077 + * background writeback, dirty balancing and application behaviour. 3083 3078 */ 3084 - if (!pte_dirty(pte)) 3079 + if (!(vma->vm_flags & VM_WRITE)) 3085 3080 flags |= TNF_NO_GROUP; 3086 3081 3087 3082 /* ··· 3103 3102 if (migrated) { 3104 3103 page_nid = target_nid; 3105 3104 flags |= TNF_MIGRATED; 3106 - } 3105 + } else 3106 + flags |= TNF_MIGRATE_FAIL; 3107 3107 3108 3108 out: 3109 3109 if (page_nid != -1)
+4 -9
mm/memory_hotplug.c
··· 1092 1092 return NULL; 1093 1093 1094 1094 arch_refresh_nodedata(nid, pgdat); 1095 + } else { 1096 + /* Reset the nr_zones and classzone_idx to 0 before reuse */ 1097 + pgdat->nr_zones = 0; 1098 + pgdat->classzone_idx = 0; 1095 1099 } 1096 1100 1097 1101 /* we can use NODE_DATA(nid) from here */ ··· 1981 1977 if (is_vmalloc_addr(zone->wait_table)) 1982 1978 vfree(zone->wait_table); 1983 1979 } 1984 - 1985 - /* 1986 - * Since there is no way to guarentee the address of pgdat/zone is not 1987 - * on stack of any kernel threads or used by other kernel objects 1988 - * without reference counting or other symchronizing method, do not 1989 - * reset node_data and free pgdat here. Just reset it to 0 and reuse 1990 - * the memory when the node is online again. 1991 - */ 1992 - memset(pgdat, 0, sizeof(*pgdat)); 1993 1980 } 1994 1981 EXPORT_SYMBOL(try_offline_node); 1995 1982
+1 -3
mm/mmap.c
··· 774 774 775 775 importer->anon_vma = exporter->anon_vma; 776 776 error = anon_vma_clone(importer, exporter); 777 - if (error) { 778 - importer->anon_vma = NULL; 777 + if (error) 779 778 return error; 780 - } 781 779 } 782 780 } 783 781
+3
mm/mprotect.c
··· 75 75 oldpte = *pte; 76 76 if (pte_present(oldpte)) { 77 77 pte_t ptent; 78 + bool preserve_write = prot_numa && pte_write(oldpte); 78 79 79 80 /* 80 81 * Avoid trapping faults against the zero or KSM ··· 95 94 96 95 ptent = ptep_modify_prot_start(mm, addr, pte); 97 96 ptent = pte_modify(ptent, newprot); 97 + if (preserve_write) 98 + ptent = pte_mkwrite(ptent); 98 99 99 100 /* Avoid taking write faults for known dirty pages */ 100 101 if (dirty_accountable && pte_dirty(ptent) &&
+5 -2
mm/page-writeback.c
··· 857 857 * bw * elapsed + write_bandwidth * (period - elapsed) 858 858 * write_bandwidth = --------------------------------------------------- 859 859 * period 860 + * 861 + * @written may have decreased due to account_page_redirty(). 862 + * Avoid underflowing @bw calculation. 860 863 */ 861 - bw = written - bdi->written_stamp; 864 + bw = written - min(written, bdi->written_stamp); 862 865 bw *= HZ; 863 866 if (unlikely(elapsed > period)) { 864 867 do_div(bw, elapsed); ··· 925 922 unsigned long now) 926 923 { 927 924 static DEFINE_SPINLOCK(dirty_lock); 928 - static unsigned long update_time; 925 + static unsigned long update_time = INITIAL_JIFFIES; 929 926 930 927 /* 931 928 * check locklessly first to optimize away locking for the most time
+1
mm/page_isolation.c
··· 103 103 104 104 if (!is_migrate_isolate_page(buddy)) { 105 105 __isolate_free_page(page, order); 106 + kernel_map_pages(page, (1 << order), 1); 106 107 set_page_refcounted(page); 107 108 isolated_page = page; 108 109 }
+8 -1
mm/pagewalk.c
··· 265 265 vma = vma->vm_next; 266 266 267 267 err = walk_page_test(start, next, walk); 268 - if (err > 0) 268 + if (err > 0) { 269 + /* 270 + * positive return values are purely for 271 + * controlling the pagewalk, so should never 272 + * be passed to the callers. 273 + */ 274 + err = 0; 269 275 continue; 276 + } 270 277 if (err < 0) 271 278 break; 272 279 }
+7
mm/rmap.c
··· 287 287 return 0; 288 288 289 289 enomem_failure: 290 + /* 291 + * dst->anon_vma is dropped here otherwise its degree can be incorrectly 292 + * decremented in unlink_anon_vmas(). 293 + * We can safely do this because callers of anon_vma_clone() don't care 294 + * about dst->anon_vma if anon_vma_clone() failed. 295 + */ 296 + dst->anon_vma = NULL; 290 297 unlink_anon_vmas(dst); 291 298 return -ENOMEM; 292 299 }
+4 -2
mm/slub.c
··· 2449 2449 do { 2450 2450 tid = this_cpu_read(s->cpu_slab->tid); 2451 2451 c = raw_cpu_ptr(s->cpu_slab); 2452 - } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid)); 2452 + } while (IS_ENABLED(CONFIG_PREEMPT) && 2453 + unlikely(tid != READ_ONCE(c->tid))); 2453 2454 2454 2455 /* 2455 2456 * Irqless object alloc/free algorithm used here depends on sequence ··· 2719 2718 do { 2720 2719 tid = this_cpu_read(s->cpu_slab->tid); 2721 2720 c = raw_cpu_ptr(s->cpu_slab); 2722 - } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid)); 2721 + } while (IS_ENABLED(CONFIG_PREEMPT) && 2722 + unlikely(tid != READ_ONCE(c->tid))); 2723 2723 2724 2724 /* Same with comment on barrier() in slab_alloc_node() */ 2725 2725 barrier();
+7
net/compat.c
··· 49 49 __get_user(kmsg->msg_controllen, &umsg->msg_controllen) || 50 50 __get_user(kmsg->msg_flags, &umsg->msg_flags)) 51 51 return -EFAULT; 52 + 53 + if (!uaddr) 54 + kmsg->msg_namelen = 0; 55 + 56 + if (kmsg->msg_namelen < 0) 57 + return -EINVAL; 58 + 52 59 if (kmsg->msg_namelen > sizeof(struct sockaddr_storage)) 53 60 kmsg->msg_namelen = sizeof(struct sockaddr_storage); 54 61 kmsg->msg_control = compat_ptr(tmp3);
+3 -1
net/core/dev.c
··· 2848 2848 #define skb_update_prio(skb) 2849 2849 #endif 2850 2850 2851 - static DEFINE_PER_CPU(int, xmit_recursion); 2851 + DEFINE_PER_CPU(int, xmit_recursion); 2852 + EXPORT_SYMBOL(xmit_recursion); 2853 + 2852 2854 #define RECURSION_LIMIT 10 2853 2855 2854 2856 /**
+1 -1
net/core/fib_rules.c
··· 175 175 176 176 spin_lock(&net->rules_mod_lock); 177 177 list_del_rcu(&ops->list); 178 - fib_rules_cleanup_ops(ops); 179 178 spin_unlock(&net->rules_mod_lock); 180 179 180 + fib_rules_cleanup_ops(ops); 181 181 call_rcu(&ops->rcu, fib_rules_put_rcu); 182 182 } 183 183 EXPORT_SYMBOL_GPL(fib_rules_unregister);
+3 -1
net/core/net_namespace.c
··· 198 198 */ 199 199 int peernet2id(struct net *net, struct net *peer) 200 200 { 201 - int id = __peernet2id(net, peer, true); 201 + bool alloc = atomic_read(&peer->count) == 0 ? false : true; 202 + int id; 202 203 204 + id = __peernet2id(net, peer, alloc); 203 205 return id >= 0 ? id : NETNSA_NSID_NOT_ASSIGNED; 204 206 } 205 207 EXPORT_SYMBOL(peernet2id);
+2 -2
net/core/rtnetlink.c
··· 1932 1932 struct ifinfomsg *ifm, 1933 1933 struct nlattr **tb) 1934 1934 { 1935 - struct net_device *dev; 1935 + struct net_device *dev, *aux; 1936 1936 int err; 1937 1937 1938 - for_each_netdev(net, dev) { 1938 + for_each_netdev_safe(net, dev, aux) { 1939 1939 if (dev->group == group) { 1940 1940 err = do_setlink(skb, dev, ifm, tb, NULL, 0); 1941 1941 if (err < 0)
+19
net/core/sock.c
··· 653 653 sock_reset_flag(sk, bit); 654 654 } 655 655 656 + bool sk_mc_loop(struct sock *sk) 657 + { 658 + if (dev_recursion_level()) 659 + return false; 660 + if (!sk) 661 + return true; 662 + switch (sk->sk_family) { 663 + case AF_INET: 664 + return inet_sk(sk)->mc_loop; 665 + #if IS_ENABLED(CONFIG_IPV6) 666 + case AF_INET6: 667 + return inet6_sk(sk)->mc_loop; 668 + #endif 669 + } 670 + WARN_ON(1); 671 + return true; 672 + } 673 + EXPORT_SYMBOL(sk_mc_loop); 674 + 656 675 /* 657 676 * This is meant for all protocols to use and covers goings on 658 677 * at the socket level. Everything here is generic.
+2
net/decnet/dn_rules.c
··· 248 248 249 249 void __exit dn_fib_rules_cleanup(void) 250 250 { 251 + rtnl_lock(); 251 252 fib_rules_unregister(dn_fib_rules_ops); 253 + rtnl_unlock(); 252 254 rcu_barrier(); 253 255 } 254 256
+7 -16
net/dsa/dsa.c
··· 501 501 #ifdef CONFIG_OF 502 502 static int dsa_of_setup_routing_table(struct dsa_platform_data *pd, 503 503 struct dsa_chip_data *cd, 504 - int chip_index, 504 + int chip_index, int port_index, 505 505 struct device_node *link) 506 506 { 507 - int ret; 508 507 const __be32 *reg; 509 - int link_port_addr; 510 508 int link_sw_addr; 511 509 struct device_node *parent_sw; 512 510 int len; ··· 517 519 if (!reg || (len != sizeof(*reg) * 2)) 518 520 return -EINVAL; 519 521 522 + /* 523 + * Get the destination switch number from the second field of its 'reg' 524 + * property, i.e. for "reg = <0x19 1>" sw_addr is '1'. 525 + */ 520 526 link_sw_addr = be32_to_cpup(reg + 1); 521 527 522 528 if (link_sw_addr >= pd->nr_chips) ··· 537 535 memset(cd->rtable, -1, pd->nr_chips * sizeof(s8)); 538 536 } 539 537 540 - reg = of_get_property(link, "reg", NULL); 541 - if (!reg) { 542 - ret = -EINVAL; 543 - goto out; 544 - } 545 - 546 - link_port_addr = be32_to_cpup(reg); 547 - 548 - cd->rtable[link_sw_addr] = link_port_addr; 538 + cd->rtable[link_sw_addr] = port_index; 549 539 550 540 return 0; 551 - out: 552 - kfree(cd->rtable); 553 - return ret; 554 541 } 555 542 556 543 static void dsa_of_free_platform_data(struct dsa_platform_data *pd) ··· 649 658 if (!strcmp(port_name, "dsa") && link && 650 659 pd->nr_chips > 1) { 651 660 ret = dsa_of_setup_routing_table(pd, cd, 652 - chip_index, link); 661 + chip_index, port_index, link); 653 662 if (ret) 654 663 goto out_free_chip; 655 664 }
+1 -2
net/ipv4/fib_frontend.c
··· 1111 1111 { 1112 1112 unsigned int i; 1113 1113 1114 + rtnl_lock(); 1114 1115 #ifdef CONFIG_IP_MULTIPLE_TABLES 1115 1116 fib4_rules_exit(net); 1116 1117 #endif 1117 - 1118 - rtnl_lock(); 1119 1118 for (i = 0; i < FIB_TABLE_HASHSZ; i++) { 1120 1119 struct fib_table *tb; 1121 1120 struct hlist_head *head;
+6 -1
net/ipv4/ipmr.c
··· 268 268 return 0; 269 269 270 270 err2: 271 - kfree(mrt); 271 + ipmr_free_table(mrt); 272 272 err1: 273 273 fib_rules_unregister(ops); 274 274 return err; ··· 278 278 { 279 279 struct mr_table *mrt, *next; 280 280 281 + rtnl_lock(); 281 282 list_for_each_entry_safe(mrt, next, &net->ipv4.mr_tables, list) { 282 283 list_del(&mrt->list); 283 284 ipmr_free_table(mrt); 284 285 } 285 286 fib_rules_unregister(net->ipv4.mr_rules_ops); 287 + rtnl_unlock(); 286 288 } 287 289 #else 288 290 #define ipmr_for_each_table(mrt, net) \ ··· 310 308 311 309 static void __net_exit ipmr_rules_exit(struct net *net) 312 310 { 311 + rtnl_lock(); 313 312 ipmr_free_table(net->ipv4.mrt); 313 + net->ipv4.mrt = NULL; 314 + rtnl_unlock(); 314 315 } 315 316 #endif 316 317
+3 -3
net/ipv4/netfilter/ip_tables.c
··· 272 272 &chainname, &comment, &rulenum) != 0) 273 273 break; 274 274 275 - nf_log_packet(net, AF_INET, hook, skb, in, out, &trace_loginfo, 276 - "TRACE: %s:%s:%s:%u ", 277 - tablename, chainname, comment, rulenum); 275 + nf_log_trace(net, AF_INET, hook, skb, in, out, &trace_loginfo, 276 + "TRACE: %s:%s:%s:%u ", 277 + tablename, chainname, comment, rulenum); 278 278 } 279 279 #endif 280 280
+4 -3
net/ipv4/tcp_input.c
··· 3105 3105 if (!first_ackt.v64) 3106 3106 first_ackt = last_ackt; 3107 3107 3108 - if (!(sacked & TCPCB_SACKED_ACKED)) 3108 + if (!(sacked & TCPCB_SACKED_ACKED)) { 3109 3109 reord = min(pkts_acked, reord); 3110 - if (!after(scb->end_seq, tp->high_seq)) 3111 - flag |= FLAG_ORIG_SACK_ACKED; 3110 + if (!after(scb->end_seq, tp->high_seq)) 3111 + flag |= FLAG_ORIG_SACK_ACKED; 3112 + } 3112 3113 } 3113 3114 3114 3115 if (sacked & TCPCB_SACKED_ACKED)
+1 -1
net/ipv4/tcp_ipv4.c
··· 1518 1518 skb->sk = sk; 1519 1519 skb->destructor = sock_edemux; 1520 1520 if (sk->sk_state != TCP_TIME_WAIT) { 1521 - struct dst_entry *dst = sk->sk_rx_dst; 1521 + struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst); 1522 1522 1523 1523 if (dst) 1524 1524 dst = dst_check(dst, 0);
+1 -5
net/ipv4/tcp_output.c
··· 2773 2773 } else { 2774 2774 /* Socket is locked, keep trying until memory is available. */ 2775 2775 for (;;) { 2776 - skb = alloc_skb_fclone(MAX_TCP_HEADER, 2777 - sk->sk_allocation); 2776 + skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation); 2778 2777 if (skb) 2779 2778 break; 2780 2779 yield(); 2781 2780 } 2782 - 2783 - /* Reserve space for headers and prepare control bits. */ 2784 - skb_reserve(skb, MAX_TCP_HEADER); 2785 2781 /* FIN eats a sequence byte, write_seq advanced by tcp_queue_skb(). */ 2786 2782 tcp_init_nondata_skb(skb, tp->write_seq, 2787 2783 TCPHDR_ACK | TCPHDR_FIN);
+3
net/ipv6/fib6_rules.c
··· 104 104 goto again; 105 105 flp6->saddr = saddr; 106 106 } 107 + err = rt->dst.error; 107 108 goto out; 108 109 } 109 110 again: ··· 322 321 323 322 static void __net_exit fib6_rules_net_exit(struct net *net) 324 323 { 324 + rtnl_lock(); 325 325 fib_rules_unregister(net->ipv6.fib6_rules_ops); 326 + rtnl_unlock(); 326 327 } 327 328 328 329 static struct pernet_operations fib6_rules_net_ops = {
+2 -1
net/ipv6/ip6_output.c
··· 542 542 { 543 543 struct sk_buff *frag; 544 544 struct rt6_info *rt = (struct rt6_info *)skb_dst(skb); 545 - struct ipv6_pinfo *np = skb->sk ? inet6_sk(skb->sk) : NULL; 545 + struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ? 546 + inet6_sk(skb->sk) : NULL; 546 547 struct ipv6hdr *tmp_hdr; 547 548 struct frag_hdr *fh; 548 549 unsigned int mtu, hlen, left, len;
+3 -3
net/ipv6/ip6mr.c
··· 252 252 return 0; 253 253 254 254 err2: 255 - kfree(mrt); 255 + ip6mr_free_table(mrt); 256 256 err1: 257 257 fib_rules_unregister(ops); 258 258 return err; ··· 267 267 list_del(&mrt->list); 268 268 ip6mr_free_table(mrt); 269 269 } 270 - rtnl_unlock(); 271 270 fib_rules_unregister(net->ipv6.mr6_rules_ops); 271 + rtnl_unlock(); 272 272 } 273 273 #else 274 274 #define ip6mr_for_each_table(mrt, net) \ ··· 336 336 337 337 static void ip6mr_free_table(struct mr6_table *mrt) 338 338 { 339 - del_timer(&mrt->ipmr_expire_timer); 339 + del_timer_sync(&mrt->ipmr_expire_timer); 340 340 mroute_clean_tables(mrt); 341 341 kfree(mrt); 342 342 }
+8 -1
net/ipv6/ndisc.c
··· 1218 1218 if (rt) 1219 1219 rt6_set_expires(rt, jiffies + (HZ * lifetime)); 1220 1220 if (ra_msg->icmph.icmp6_hop_limit) { 1221 - in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit; 1221 + /* Only set hop_limit on the interface if it is higher than 1222 + * the current hop_limit. 1223 + */ 1224 + if (in6_dev->cnf.hop_limit < ra_msg->icmph.icmp6_hop_limit) { 1225 + in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit; 1226 + } else { 1227 + ND_PRINTK(2, warn, "RA: Got route advertisement with lower hop_limit than current\n"); 1228 + } 1222 1229 if (rt) 1223 1230 dst_metric_set(&rt->dst, RTAX_HOPLIMIT, 1224 1231 ra_msg->icmph.icmp6_hop_limit);
+3 -3
net/ipv6/netfilter/ip6_tables.c
··· 298 298 &chainname, &comment, &rulenum) != 0) 299 299 break; 300 300 301 - nf_log_packet(net, AF_INET6, hook, skb, in, out, &trace_loginfo, 302 - "TRACE: %s:%s:%s:%u ", 303 - tablename, chainname, comment, rulenum); 301 + nf_log_trace(net, AF_INET6, hook, skb, in, out, &trace_loginfo, 302 + "TRACE: %s:%s:%s:%u ", 303 + tablename, chainname, comment, rulenum); 304 304 } 305 305 #endif 306 306
+12 -1
net/ipv6/tcp_ipv6.c
··· 1411 1411 TCP_SKB_CB(skb)->sacked = 0; 1412 1412 } 1413 1413 1414 + static void tcp_v6_restore_cb(struct sk_buff *skb) 1415 + { 1416 + /* We need to move header back to the beginning if xfrm6_policy_check() 1417 + * and tcp_v6_fill_cb() are going to be called again. 1418 + */ 1419 + memmove(IP6CB(skb), &TCP_SKB_CB(skb)->header.h6, 1420 + sizeof(struct inet6_skb_parm)); 1421 + } 1422 + 1414 1423 static int tcp_v6_rcv(struct sk_buff *skb) 1415 1424 { 1416 1425 const struct tcphdr *th; ··· 1552 1543 inet_twsk_deschedule(tw, &tcp_death_row); 1553 1544 inet_twsk_put(tw); 1554 1545 sk = sk2; 1546 + tcp_v6_restore_cb(skb); 1555 1547 goto process; 1556 1548 } 1557 1549 /* Fall through to ACK */ ··· 1561 1551 tcp_v6_timewait_ack(sk, skb); 1562 1552 break; 1563 1553 case TCP_TW_RST: 1554 + tcp_v6_restore_cb(skb); 1564 1555 goto no_tcp_socket; 1565 1556 case TCP_TW_SUCCESS: 1566 1557 ; ··· 1596 1585 skb->sk = sk; 1597 1586 skb->destructor = sock_edemux; 1598 1587 if (sk->sk_state != TCP_TIME_WAIT) { 1599 - struct dst_entry *dst = sk->sk_rx_dst; 1588 + struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst); 1600 1589 1601 1590 if (dst) 1602 1591 dst = dst_check(dst, inet6_sk(sk)->rx_dst_cookie);
+3 -5
net/ipv6/udp_offload.c
··· 112 112 fptr = (struct frag_hdr *)(skb_network_header(skb) + unfrag_ip6hlen); 113 113 fptr->nexthdr = nexthdr; 114 114 fptr->reserved = 0; 115 - if (skb_shinfo(skb)->ip6_frag_id) 116 - fptr->identification = skb_shinfo(skb)->ip6_frag_id; 117 - else 118 - ipv6_select_ident(fptr, 119 - (struct rt6_info *)skb_dst(skb)); 115 + if (!skb_shinfo(skb)->ip6_frag_id) 116 + ipv6_proxy_select_ident(skb); 117 + fptr->identification = skb_shinfo(skb)->ip6_frag_id; 120 118 121 119 /* Fragment the skb. ipv6 header and the remaining fields of the 122 120 * fragment header are updated in ipv6_gso_segment()
+1 -3
net/iucv/af_iucv.c
··· 1114 1114 noblock, &err); 1115 1115 else 1116 1116 skb = sock_alloc_send_skb(sk, len, noblock, &err); 1117 - if (!skb) { 1118 - err = -ENOMEM; 1117 + if (!skb) 1119 1118 goto out; 1120 - } 1121 1119 if (iucv->transport == AF_IUCV_TRANS_HIPER) 1122 1120 skb_reserve(skb, sizeof(struct af_iucv_trans_hdr) + ETH_HLEN); 1123 1121 if (memcpy_from_msg(skb_put(skb, len), msg, len)) {
+1
net/l2tp/l2tp_core.c
··· 1871 1871 l2tp_wq = alloc_workqueue("l2tp", WQ_UNBOUND, 0); 1872 1872 if (!l2tp_wq) { 1873 1873 pr_err("alloc_workqueue failed\n"); 1874 + unregister_pernet_device(&l2tp_net_ops); 1874 1875 rc = -ENOMEM; 1875 1876 goto out; 1876 1877 }
+6 -2
net/mac80211/agg-rx.c
··· 49 49 container_of(h, struct tid_ampdu_rx, rcu_head); 50 50 int i; 51 51 52 - del_timer_sync(&tid_rx->reorder_timer); 53 - 54 52 for (i = 0; i < tid_rx->buf_size; i++) 55 53 __skb_queue_purge(&tid_rx->reorder_buf[i]); 56 54 kfree(tid_rx->reorder_buf); ··· 90 92 tid, WLAN_BACK_RECIPIENT, reason); 91 93 92 94 del_timer_sync(&tid_rx->session_timer); 95 + 96 + /* make sure ieee80211_sta_reorder_release() doesn't re-arm the timer */ 97 + spin_lock_bh(&tid_rx->reorder_lock); 98 + tid_rx->removed = true; 99 + spin_unlock_bh(&tid_rx->reorder_lock); 100 + del_timer_sync(&tid_rx->reorder_timer); 93 101 94 102 call_rcu(&tid_rx->rcu_head, ieee80211_free_tid_rx); 95 103 }
+4 -3
net/mac80211/rx.c
··· 873 873 874 874 set_release_timer: 875 875 876 - mod_timer(&tid_agg_rx->reorder_timer, 877 - tid_agg_rx->reorder_time[j] + 1 + 878 - HT_RX_REORDER_BUF_TIMEOUT); 876 + if (!tid_agg_rx->removed) 877 + mod_timer(&tid_agg_rx->reorder_timer, 878 + tid_agg_rx->reorder_time[j] + 1 + 879 + HT_RX_REORDER_BUF_TIMEOUT); 879 880 } else { 880 881 del_timer(&tid_agg_rx->reorder_timer); 881 882 }
+2
net/mac80211/sta_info.h
··· 175 175 * @reorder_lock: serializes access to reorder buffer, see below. 176 176 * @auto_seq: used for offloaded BA sessions to automatically pick head_seq_and 177 177 * and ssn. 178 + * @removed: this session is removed (but might have been found due to RCU) 178 179 * 179 180 * This structure's lifetime is managed by RCU, assignments to 180 181 * the array holding it must hold the aggregation mutex. ··· 200 199 u16 timeout; 201 200 u8 dialog_token; 202 201 bool auto_seq; 202 + bool removed; 203 203 }; 204 204 205 205 /**
+24
net/netfilter/nf_log.c
··· 212 212 } 213 213 EXPORT_SYMBOL(nf_log_packet); 214 214 215 + void nf_log_trace(struct net *net, 216 + u_int8_t pf, 217 + unsigned int hooknum, 218 + const struct sk_buff *skb, 219 + const struct net_device *in, 220 + const struct net_device *out, 221 + const struct nf_loginfo *loginfo, const char *fmt, ...) 222 + { 223 + va_list args; 224 + char prefix[NF_LOG_PREFIXLEN]; 225 + const struct nf_logger *logger; 226 + 227 + rcu_read_lock(); 228 + logger = rcu_dereference(net->nf.nf_loggers[pf]); 229 + if (logger) { 230 + va_start(args, fmt); 231 + vsnprintf(prefix, sizeof(prefix), fmt, args); 232 + va_end(args); 233 + logger->logfn(net, pf, hooknum, skb, in, out, loginfo, prefix); 234 + } 235 + rcu_read_unlock(); 236 + } 237 + EXPORT_SYMBOL(nf_log_trace); 238 + 215 239 #define S_SIZE (1024 - (sizeof(unsigned int) + 1)) 216 240 217 241 struct nf_log_buf {
+4 -1
net/netfilter/nf_tables_api.c
··· 1225 1225 1226 1226 if (nla[NFTA_CHAIN_POLICY]) { 1227 1227 if ((chain != NULL && 1228 - !(chain->flags & NFT_BASE_CHAIN)) || 1228 + !(chain->flags & NFT_BASE_CHAIN))) 1229 + return -EOPNOTSUPP; 1230 + 1231 + if (chain == NULL && 1229 1232 nla[NFTA_CHAIN_HOOK] == NULL) 1230 1233 return -EOPNOTSUPP; 1231 1234
+4 -4
net/netfilter/nf_tables_core.c
··· 94 94 { 95 95 struct net *net = dev_net(pkt->in ? pkt->in : pkt->out); 96 96 97 - nf_log_packet(net, pkt->xt.family, pkt->ops->hooknum, pkt->skb, pkt->in, 98 - pkt->out, &trace_loginfo, "TRACE: %s:%s:%s:%u ", 99 - chain->table->name, chain->name, comments[type], 100 - rulenum); 97 + nf_log_trace(net, pkt->xt.family, pkt->ops->hooknum, pkt->skb, pkt->in, 98 + pkt->out, &trace_loginfo, "TRACE: %s:%s:%s:%u ", 99 + chain->table->name, chain->name, comments[type], 100 + rulenum); 101 101 } 102 102 103 103 unsigned int
+6
net/netfilter/nft_compat.c
··· 133 133 entry->e4.ip.invflags = inv ? IPT_INV_PROTO : 0; 134 134 break; 135 135 case AF_INET6: 136 + if (proto) 137 + entry->e6.ipv6.flags |= IP6T_F_PROTO; 138 + 136 139 entry->e6.ipv6.proto = proto; 137 140 entry->e6.ipv6.invflags = inv ? IP6T_INV_PROTO : 0; 138 141 break; ··· 347 344 entry->e4.ip.invflags = inv ? IPT_INV_PROTO : 0; 348 345 break; 349 346 case AF_INET6: 347 + if (proto) 348 + entry->e6.ipv6.flags |= IP6T_F_PROTO; 349 + 350 350 entry->e6.ipv6.proto = proto; 351 351 entry->e6.ipv6.invflags = inv ? IP6T_INV_PROTO : 0; 352 352 break;
+2
net/netfilter/nft_hash.c
··· 153 153 iter->err = err; 154 154 goto out; 155 155 } 156 + 157 + continue; 156 158 } 157 159 158 160 if (iter->count < iter->skip)
+2 -2
net/netfilter/xt_TPROXY.c
··· 513 513 { 514 514 const struct ip6t_ip6 *i = par->entryinfo; 515 515 516 - if ((i->proto == IPPROTO_TCP || i->proto == IPPROTO_UDP) 517 - && !(i->flags & IP6T_INV_PROTO)) 516 + if ((i->proto == IPPROTO_TCP || i->proto == IPPROTO_UDP) && 517 + !(i->invflags & IP6T_INV_PROTO)) 518 518 return 0; 519 519 520 520 pr_info("Can be used only in combination with "
+1 -3
net/openvswitch/vport.c
··· 274 274 ASSERT_OVSL(); 275 275 276 276 hlist_del_rcu(&vport->hash_node); 277 - 278 - vport->ops->destroy(vport); 279 - 280 277 module_put(vport->ops->owner); 278 + vport->ops->destroy(vport); 281 279 } 282 280 283 281 /**
+4
net/socket.c
··· 1702 1702 1703 1703 if (len > INT_MAX) 1704 1704 len = INT_MAX; 1705 + if (unlikely(!access_ok(VERIFY_READ, buff, len))) 1706 + return -EFAULT; 1705 1707 sock = sockfd_lookup_light(fd, &err, &fput_needed); 1706 1708 if (!sock) 1707 1709 goto out; ··· 1762 1760 1763 1761 if (size > INT_MAX) 1764 1762 size = INT_MAX; 1763 + if (unlikely(!access_ok(VERIFY_WRITE, ubuf, size))) 1764 + return -EFAULT; 1765 1765 sock = sockfd_lookup_light(fd, &err, &fput_needed); 1766 1766 if (!sock) 1767 1767 goto out;
+1 -3
net/sunrpc/clnt.c
··· 303 303 struct super_block *pipefs_sb; 304 304 int err; 305 305 306 - err = rpc_clnt_debugfs_register(clnt); 307 - if (err) 308 - return err; 306 + rpc_clnt_debugfs_register(clnt); 309 307 310 308 pipefs_sb = rpc_get_sb_net(net); 311 309 if (pipefs_sb) {
+29 -23
net/sunrpc/debugfs.c
··· 129 129 .release = tasks_release, 130 130 }; 131 131 132 - int 132 + void 133 133 rpc_clnt_debugfs_register(struct rpc_clnt *clnt) 134 134 { 135 - int len, err; 135 + int len; 136 136 char name[24]; /* enough for "../../rpc_xprt/ + 8 hex digits + NULL */ 137 + struct rpc_xprt *xprt; 137 138 138 139 /* Already registered? */ 139 - if (clnt->cl_debugfs) 140 - return 0; 140 + if (clnt->cl_debugfs || !rpc_clnt_dir) 141 + return; 141 142 142 143 len = snprintf(name, sizeof(name), "%x", clnt->cl_clid); 143 144 if (len >= sizeof(name)) 144 - return -EINVAL; 145 + return; 145 146 146 147 /* make the per-client dir */ 147 148 clnt->cl_debugfs = debugfs_create_dir(name, rpc_clnt_dir); 148 149 if (!clnt->cl_debugfs) 149 - return -ENOMEM; 150 + return; 150 151 151 152 /* make tasks file */ 152 - err = -ENOMEM; 153 153 if (!debugfs_create_file("tasks", S_IFREG | S_IRUSR, clnt->cl_debugfs, 154 154 clnt, &tasks_fops)) 155 155 goto out_err; 156 156 157 - err = -EINVAL; 158 157 rcu_read_lock(); 158 + xprt = rcu_dereference(clnt->cl_xprt); 159 + /* no "debugfs" dentry? Don't bother with the symlink. */ 160 + if (!xprt->debugfs) { 161 + rcu_read_unlock(); 162 + return; 163 + } 159 164 len = snprintf(name, sizeof(name), "../../rpc_xprt/%s", 160 - rcu_dereference(clnt->cl_xprt)->debugfs->d_name.name); 165 + xprt->debugfs->d_name.name); 161 166 rcu_read_unlock(); 167 + 162 168 if (len >= sizeof(name)) 163 169 goto out_err; 164 170 165 - err = -ENOMEM; 166 171 if (!debugfs_create_symlink("xprt", clnt->cl_debugfs, name)) 167 172 goto out_err; 168 173 169 - return 0; 174 + return; 170 175 out_err: 171 176 debugfs_remove_recursive(clnt->cl_debugfs); 172 177 clnt->cl_debugfs = NULL; 173 - return err; 174 178 } 175 179 176 180 void ··· 230 226 .release = xprt_info_release, 231 227 }; 232 228 233 - int 229 + void 234 230 rpc_xprt_debugfs_register(struct rpc_xprt *xprt) 235 231 { 236 232 int len, id; 237 233 static atomic_t cur_id; 238 234 char name[9]; /* 8 hex digits + NULL term */ 239 235 236 + if (!rpc_xprt_dir) 237 + return; 238 + 240 239 id = (unsigned int)atomic_inc_return(&cur_id); 241 240 242 241 len = snprintf(name, sizeof(name), "%x", id); 243 242 if (len >= sizeof(name)) 244 - return -EINVAL; 243 + return; 245 244 246 245 /* make the per-client dir */ 247 246 xprt->debugfs = debugfs_create_dir(name, rpc_xprt_dir); 248 247 if (!xprt->debugfs) 249 - return -ENOMEM; 248 + return; 250 249 251 250 /* make tasks file */ 252 251 if (!debugfs_create_file("info", S_IFREG | S_IRUSR, xprt->debugfs, 253 252 xprt, &xprt_info_fops)) { 254 253 debugfs_remove_recursive(xprt->debugfs); 255 254 xprt->debugfs = NULL; 256 - return -ENOMEM; 257 255 } 258 - 259 - return 0; 260 256 } 261 257 262 258 void ··· 270 266 sunrpc_debugfs_exit(void) 271 267 { 272 268 debugfs_remove_recursive(topdir); 269 + topdir = NULL; 270 + rpc_clnt_dir = NULL; 271 + rpc_xprt_dir = NULL; 273 272 } 274 273 275 - int __init 274 + void __init 276 275 sunrpc_debugfs_init(void) 277 276 { 278 277 topdir = debugfs_create_dir("sunrpc", NULL); 279 278 if (!topdir) 280 - goto out; 279 + return; 281 280 282 281 rpc_clnt_dir = debugfs_create_dir("rpc_clnt", topdir); 283 282 if (!rpc_clnt_dir) ··· 290 283 if (!rpc_xprt_dir) 291 284 goto out_remove; 292 285 293 - return 0; 286 + return; 294 287 out_remove: 295 288 debugfs_remove_recursive(topdir); 296 289 topdir = NULL; 297 - out: 298 - return -ENOMEM; 290 + rpc_clnt_dir = NULL; 299 291 }
+1 -6
net/sunrpc/sunrpc_syms.c
··· 98 98 if (err) 99 99 goto out4; 100 100 101 - err = sunrpc_debugfs_init(); 102 - if (err) 103 - goto out5; 104 - 101 + sunrpc_debugfs_init(); 105 102 #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 106 103 rpc_register_sysctl(); 107 104 #endif ··· 106 109 init_socket_xprt(); /* clnt sock transport */ 107 110 return 0; 108 111 109 - out5: 110 - unregister_rpc_pipefs(); 111 112 out4: 112 113 unregister_pernet_subsys(&sunrpc_net_ops); 113 114 out3:
+1 -6
net/sunrpc/xprt.c
··· 1331 1331 */ 1332 1332 struct rpc_xprt *xprt_create_transport(struct xprt_create *args) 1333 1333 { 1334 - int err; 1335 1334 struct rpc_xprt *xprt; 1336 1335 struct xprt_class *t; 1337 1336 ··· 1371 1372 return ERR_PTR(-ENOMEM); 1372 1373 } 1373 1374 1374 - err = rpc_xprt_debugfs_register(xprt); 1375 - if (err) { 1376 - xprt_destroy(xprt); 1377 - return ERR_PTR(err); 1378 - } 1375 + rpc_xprt_debugfs_register(xprt); 1379 1376 1380 1377 dprintk("RPC: created transport %p with %u slots\n", xprt, 1381 1378 xprt->max_reqs);
+1 -1
net/tipc/core.c
··· 152 152 static void __exit tipc_exit(void) 153 153 { 154 154 tipc_bearer_cleanup(); 155 + unregister_pernet_subsys(&tipc_net_ops); 155 156 tipc_netlink_stop(); 156 157 tipc_netlink_compat_stop(); 157 158 tipc_socket_stop(); 158 159 tipc_unregister_sysctl(); 159 - unregister_pernet_subsys(&tipc_net_ops); 160 160 161 161 pr_info("Deactivated\n"); 162 162 }
+1 -1
security/selinux/selinuxfs.c
··· 152 152 goto out; 153 153 154 154 /* No partial writes. */ 155 - length = EINVAL; 155 + length = -EINVAL; 156 156 if (*ppos != 0) 157 157 goto out; 158 158
+1 -1
sound/pci/hda/hda_intel.c
··· 1989 1989 .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, 1990 1990 /* Sunrise Point */ 1991 1991 { PCI_DEVICE(0x8086, 0xa170), 1992 - .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, 1992 + .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_SKYLAKE }, 1993 1993 /* Sunrise Point-LP */ 1994 1994 { PCI_DEVICE(0x8086, 0x9d70), 1995 1995 .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_SKYLAKE },
+2 -1
sound/pci/hda/patch_realtek.c
··· 396 396 { 397 397 /* We currently only handle front, HP */ 398 398 static hda_nid_t pins[] = { 399 - 0x0f, 0x10, 0x14, 0x15, 0 399 + 0x0f, 0x10, 0x14, 0x15, 0x17, 0 400 400 }; 401 401 hda_nid_t *p; 402 402 for (p = pins; *p; p++) ··· 5036 5036 SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC), 5037 5037 SND_PCI_QUIRK(0x17aa, 0x501e, "Thinkpad L440", ALC292_FIXUP_TPT440_DOCK), 5038 5038 SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 5039 + SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK), 5039 5040 SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 5040 5041 SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K), 5041 5042 SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+8
tools/testing/selftests/Makefile
··· 22 22 TARGETS_HOTPLUG = cpu-hotplug 23 23 TARGETS_HOTPLUG += memory-hotplug 24 24 25 + # Clear LDFLAGS and MAKEFLAGS if called from main 26 + # Makefile to avoid test build failures when test 27 + # Makefile doesn't have explicit build rules. 28 + ifeq (1,$(MAKELEVEL)) 29 + undefine LDFLAGS 30 + override MAKEFLAGS = 31 + endif 32 + 25 33 all: 26 34 for TARGET in $(TARGETS); do \ 27 35 make -C $$TARGET; \
+7 -7
virt/kvm/kvm_main.c
··· 471 471 BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX); 472 472 473 473 r = -ENOMEM; 474 - kvm->memslots = kzalloc(sizeof(struct kvm_memslots), GFP_KERNEL); 474 + kvm->memslots = kvm_kvzalloc(sizeof(struct kvm_memslots)); 475 475 if (!kvm->memslots) 476 476 goto out_err_no_srcu; 477 477 ··· 522 522 out_err_no_disable: 523 523 for (i = 0; i < KVM_NR_BUSES; i++) 524 524 kfree(kvm->buses[i]); 525 - kfree(kvm->memslots); 525 + kvfree(kvm->memslots); 526 526 kvm_arch_free_vm(kvm); 527 527 return ERR_PTR(r); 528 528 } ··· 578 578 kvm_for_each_memslot(memslot, slots) 579 579 kvm_free_physmem_slot(kvm, memslot, NULL); 580 580 581 - kfree(kvm->memslots); 581 + kvfree(kvm->memslots); 582 582 } 583 583 584 584 static void kvm_destroy_devices(struct kvm *kvm) ··· 871 871 goto out_free; 872 872 } 873 873 874 - slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots), 875 - GFP_KERNEL); 874 + slots = kvm_kvzalloc(sizeof(struct kvm_memslots)); 876 875 if (!slots) 877 876 goto out_free; 877 + memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots)); 878 878 879 879 if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) { 880 880 slot = id_to_memslot(slots, mem->slot); ··· 917 917 kvm_arch_commit_memory_region(kvm, mem, &old, change); 918 918 919 919 kvm_free_physmem_slot(kvm, &old, &new); 920 - kfree(old_memslots); 920 + kvfree(old_memslots); 921 921 922 922 /* 923 923 * IOMMU mapping: New slots need to be mapped. Old slots need to be ··· 936 936 return 0; 937 937 938 938 out_slots: 939 - kfree(slots); 939 + kvfree(slots); 940 940 out_free: 941 941 kvm_free_physmem_slot(kvm, &new, &old); 942 942 out: