Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'linus' into irq/core to get the GIC updates which conflict with pending GIC changes.

Conflicts:
drivers/usb/isp1760/isp1760-core.c

+11877 -7096
+27
Documentation/CodeOfConflict
···
 1 + Code of Conflict
 2 + ----------------
 3 +
 4 + The Linux kernel development effort is a very personal process compared
 5 + to "traditional" ways of developing software.  Your code and ideas
 6 + behind it will be carefully reviewed, often resulting in critique and
 7 + criticism.  The review will almost always require improvements to the
 8 + code before it can be included in the kernel.  Know that this happens
 9 + because everyone involved wants to see the best possible solution for
10 + the overall success of Linux.  This development process has been proven
11 + to create the most robust operating system kernel ever, and we do not
12 + want to do anything to cause the quality of submission and eventual
13 + result to ever decrease.
14 +
15 + If however, anyone feels personally abused, threatened, or otherwise
16 + uncomfortable due to this process, that is not acceptable.  If so,
17 + please contact the Linux Foundation's Technical Advisory Board at
18 + <tab@lists.linux-foundation.org>, or the individual members, and they
19 + will work to resolve the issue to the best of their ability.  For more
20 + information on who is on the Technical Advisory Board and what their
21 + role is, please see:
22 +     http://www.linuxfoundation.org/programs/advisory-councils/tab
23 +
24 + As a reviewer of code, please strive to keep things civil and focused on
25 + the technical issues involved.  We are all humans, and frustrations can
26 + be high on both sides of the process.  Try to keep in mind the immortal
27 + words of Bill and Ted, "Be excellent to each other."
+2
Documentation/devicetree/bindings/arm/exynos/power_domain.txt
···
22 22 - pclkN, clkN: Pairs of parent of input clock and input clock to the
23 23     devices in this power domain. Maximum of 4 pairs (N = 0 to 3)
24 24     are supported currently.
25 +  - power-domains: phandle pointing to the parent power domain, for more details
26 +    see Documentation/devicetree/bindings/power/power_domain.txt
25 27
26 28 Node of a device using power domains must have a power-domains property
27 29 defined with a phandle to respective power domain.
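The parent-domain linkage added above can be illustrated with a short sketch (the node names, addresses, and the `&pd_top` parent label are hypothetical; only the `power-domains` property mirrors the binding text):

```
pd_top: power-domain@10044000 {
	compatible = "samsung,exynos4210-pd";
	reg = <0x10044000 0x20>;
	#power-domain-cells = <0>;
};

pd_mfc: power-domain@10044060 {
	compatible = "samsung,exynos4210-pd";
	reg = <0x10044060 0x20>;
	#power-domain-cells = <0>;
	/* this domain is powered from pd_top */
	power-domains = <&pd_top>;
};
```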
+4
Documentation/devicetree/bindings/arm/sti.txt
···
13 13 Required root node property:
14 14 compatible = "st,stih407";
15 15
16 + Boards with the ST STiH410 SoC shall have the following properties:
17 + Required root node property:
18 + compatible = "st,stih410";
19 +
16 20 Boards with the ST STiH418 SoC shall have the following properties:
17 21 Required root node property:
18 22 compatible = "st,stih418";
+1
Documentation/devicetree/bindings/i2c/i2c-imx.txt
··· 7 7 - "fsl,vf610-i2c" for I2C compatible with the one integrated on Vybrid vf610 SoC 8 8 - reg : Should contain I2C/HS-I2C registers location and length 9 9 - interrupts : Should contain I2C/HS-I2C interrupt 10 + - clocks : Should contain the I2C/HS-I2C clock specifier 10 11 11 12 Optional properties: 12 13 - clock-frequency : Constains desired I2C/HS-I2C bus clock frequency in Hz.
+4
Documentation/devicetree/bindings/net/amd-xgbe-phy.txt
···
27 27 - amd,serdes-cdr-rate: CDR rate speed selection
28 28 - amd,serdes-pq-skew: PQ (data sampling) skew
29 29 - amd,serdes-tx-amp: TX amplitude boost
30 + - amd,serdes-dfe-tap-config: DFE taps available to run
31 + - amd,serdes-dfe-tap-enable: DFE taps to enable
30 32
31 33 Example:
32 34     xgbe_phy@e1240800 {
···
43 41         amd,serdes-cdr-rate = <2>, <2>, <7>;
44 42         amd,serdes-pq-skew = <10>, <10>, <30>;
45 43         amd,serdes-tx-amp = <15>, <15>, <10>;
44 +         amd,serdes-dfe-tap-config = <3>, <3>, <1>;
45 +         amd,serdes-dfe-tap-enable = <0>, <0>, <127>;
46 46     };
+4 -1
Documentation/devicetree/bindings/net/apm-xgene-enet.txt
···
 4  4 APM X-Gene SoC.
 5  5
 6  6 Required properties for all the ethernet interfaces:
 7  - - compatible: Should be "apm,xgene-enet"
 7  + - compatible: Should state binding information from the following list,
 8  +   - "apm,xgene-enet": RGMII based 1G interface
 9  +   - "apm,xgene1-sgenet": SGMII based 1G interface
10  +   - "apm,xgene1-xgenet": XFI based 10G interface
 8 11 - reg: Address and length of the register set for the device. It contains the
 9 12   information of registers in the same order as described by reg-names
10 13 - reg-names: Should contain the register set names
+3 -1
Documentation/devicetree/bindings/net/dsa/dsa.txt
···
19 19 (DSA_MAX_SWITCHES).
20 20 Each of these switch child nodes should have the following required properties:
21 21
22  - - reg              : Describes the switch address on the MII bus
22  + - reg              : Contains two fields. The first one describes the
23  +                      address on the MII bus. The second is the switch
24  +                      number that must be unique in cascaded configurations
23 25 - #address-cells    : Must be 1
24 26 - #size-cells       : Must be 0
25 27
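With the two-cell `reg`, a cascaded pair of switches might be described like this sketch (labels and addresses are hypothetical):

```
switch0: switch@0 {
	/* first cell: MII address 0; second cell: switch number 0 */
	reg = <0 0>;
	#address-cells = <1>;
	#size-cells = <0>;
};

switch1: switch@1 {
	/* MII address 1, switch number 1 (unique in the cascade) */
	reg = <1 1>;
	#address-cells = <1>;
	#size-cells = <0>;
};
```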
+29
Documentation/devicetree/bindings/power/power_domain.txt
···
19 19 providing multiple PM domains (e.g. power controllers), but can be any value
20 20 as specified by device tree binding documentation of particular provider.
21 21
22 + Optional properties:
23 +  - power-domains : A phandle and PM domain specifier as defined by bindings of
24 +                    the power controller specified by phandle.
25 +   Some power domains might be powered from another power domain (or have
26 +   other hardware specific dependencies). For representing such dependency
27 +   a standard PM domain consumer binding is used. When provided, all domains
28 +   created by the given provider should be subdomains of the domain
29 +   specified by this binding. More details about power domain specifier are
30 +   available in the next section.
31 +
22 32 Example:
23 33
24 34     power: power-controller@12340000 {
···
39 29
40 30 The node above defines a power controller that is a PM domain provider and
41 31 expects one cell as its phandle argument.
32 +
33 + Example 2:
34 +
35 +     parent: power-controller@12340000 {
36 +         compatible = "foo,power-controller";
37 +         reg = <0x12340000 0x1000>;
38 +         #power-domain-cells = <1>;
39 +     };
40 +
41 +     child: power-controller@12341000 {
42 +         compatible = "foo,power-controller";
43 +         reg = <0x12341000 0x1000>;
44 +         power-domains = <&parent 0>;
45 +         #power-domain-cells = <1>;
46 +     };
47 +
48 + The nodes above define two power controllers: 'parent' and 'child'.
49 + Domains created by the 'child' power controller are subdomains of '0' power
50 + domain provided by the 'parent' power controller.
42 51
43 52 ==PM domain consumers==
+19
Documentation/devicetree/bindings/serial/axis,etraxfs-uart.txt
···
 1 + ETRAX FS UART
 2 +
 3 + Required properties:
 4 + - compatible : "axis,etraxfs-uart"
 5 + - reg: offset and length of the register set for the device.
 6 + - interrupts: device interrupt
 7 +
 8 + Optional properties:
 9 + - {dtr,dsr,ri,cd}-gpios: specify a GPIO for DTR/DSR/RI/CD
10 +   line respectively.
11 +
12 + Example:
13 +
14 + serial@b0026000 {
15 +     compatible = "axis,etraxfs-uart";
16 +     reg = <0xb0026000 0x1000>;
17 +     interrupts = <68>;
18 +     status = "disabled";
19 + };
Documentation/devicetree/bindings/serial/of-serial.txt → Documentation/devicetree/bindings/serial/8250.txt (renamed)
+16
Documentation/devicetree/bindings/serial/snps-dw-apb-uart.txt
···
21 21 - reg-io-width : the size (in bytes) of the IO accesses that should be
22 22   performed on the device.  If this property is not present then single byte
23 23   accesses are used.
24 + - dcd-override : Override the DCD modem status signal. This signal will always
25 +   be reported as active instead of being obtained from the modem status
26 +   register. Define this if your serial port does not use this pin.
27 + - dsr-override : Override the DSR modem status signal. This signal will always
28 +   be reported as active instead of being obtained from the modem status
29 +   register. Define this if your serial port does not use this pin.
30 + - cts-override : Override the CTS modem status signal. This signal will always
31 +   be reported as active instead of being obtained from the modem status
32 +   register. Define this if your serial port does not use this pin.
33 + - ri-override : Override the RI modem status signal. This signal will always be
34 +   reported as inactive instead of being obtained from the modem status register.
35 +   Define this if your serial port does not use this pin.
24 36
25 37 Example:
26 38
···
43 31     interrupts = <10>;
44 32     reg-shift = <2>;
45 33     reg-io-width = <4>;
34 +     dcd-override;
35 +     dsr-override;
36 +     cts-override;
37 +     ri-override;
46 38 };
47 39
48 40 Example with one clock:
+3
Documentation/devicetree/bindings/submitting-patches.txt
···
12 12
13 13      devicetree@vger.kernel.org
14 14
15 +    and Cc: the DT maintainers. Use scripts/get_maintainer.pl to identify
16 +    all of the DT maintainers.
17 +
15 18   3) The Documentation/ portion of the patch should come in the series before
16 19      the code implementing the binding.
17 20
+2
Documentation/devicetree/bindings/vendor-prefixes.txt
···
20 20 ams          AMS AG
21 21 amstaos      AMS-Taos Inc.
22 22 apm          Applied Micro Circuits Corporation (APM)
23 +  arasan      Arasan Chip Systems
23 24 arm          ARM Ltd.
24 25 armadeus     ARMadeus Systems SARL
25 26 asahi-kasei  Asahi Kasei Corp.
···
28 27 auo          AU Optronics Corporation
29 28 avago        Avago Technologies
30 29 avic         Shanghai AVIC Optoelectronics Co., Ltd.
30 +  axis        Axis Communications AB
31 31 bosch        Bosch Sensortec GmbH
32 32 brcm         Broadcom Corporation
33 33 buffalo      Buffalo, Inc.
+5
Documentation/devicetree/bindings/watchdog/atmel-wdt.txt
···
26 26 - atmel,disable : Should be present if you want to disable the watchdog.
27 27 - atmel,idle-halt : Should be present if you want to stop the watchdog when
28 28   entering idle state.
29 +   CAUTION: This property should be used with care: it actually makes the
30 +   watchdog stop counting when the CPU is in idle state, so the watchdog
31 +   reset time depends on mean CPU usage, and the watchdog will not reset
32 +   the system at all if the CPU stops working while in idle state, which
33 +   is probably not what you want.
29 34 - atmel,dbg-halt : Should be present if you want to stop the watchdog when
30 35   entering debug state.
31 36
+8
Documentation/input/alps.txt
···
114 114  byte 4:    0   y6   y5   y4   y3   y2   y1   y0
115 115  byte 5:    0   z6   z5   z4   z3   z2   z1   z0
116 116
117 + Protocol Version 2 DualPoint devices send standard PS/2 mouse packets for
118 + the DualPoint Stick.
119 +
117 120 Dualpoint device -- interleaved packet format
118 121 ---------------------------------------------
···
129 126  byte 6:    0   y9   y8   y7    1    m    r    l
130 127  byte 7:    0   y6   y5   y4   y3   y2   y1   y0
131 128  byte 8:    0   z6   z5   z4   z3   z2   z1   z0
129 +
130 + Devices which use the interleaving format normally send standard PS/2 mouse
131 + packets for the DualPoint Stick + ALPS Absolute Mode packets for the
132 + touchpad, switching to the interleaved packet format when both the stick and
133 + the touchpad are used at the same time.
132 134
133 135 ALPS Absolute Mode - Protocol Version 3
134 136 ---------------------------------------
+6
Documentation/input/event-codes.txt
···
294 294 The kernel does not provide button emulation for such devices but treats
295 295 them as any other INPUT_PROP_BUTTONPAD device.
296 296
297 + INPUT_PROP_ACCELEROMETER
298 + -------------------------
299 + Directional axes on this device (absolute and/or relative x, y, z) represent
300 + accelerometer data. All other axes retain their meaning. A device must not mix
301 + regular directional axes and accelerometer axes on the same event node.
302 +
297 303 Guidelines:
298 304 ==========
299 305 The guidelines below ensure proper single-touch and multi-finger functionality.
+6 -3
Documentation/input/multi-touch-protocol.txt
···
312 312
313 313 The type of approaching tool. A lot of kernel drivers cannot distinguish
314 314 between different tool types, such as a finger or a pen. In such cases, the
315  - event should be omitted. The protocol currently supports MT_TOOL_FINGER and
316  - MT_TOOL_PEN [2]. For type B devices, this event is handled by input core;
317  - drivers should instead use input_mt_report_slot_state().
315  + event should be omitted. The protocol currently supports MT_TOOL_FINGER,
316  + MT_TOOL_PEN, and MT_TOOL_PALM [2]. For type B devices, this event is handled
317  + by input core; drivers should instead use input_mt_report_slot_state().
318  + A contact's ABS_MT_TOOL_TYPE may change over time while still touching the
319  + device, because the firmware may not be able to determine which tool is being
320  + used when it first appears.
318 321
319 322 ABS_MT_BLOB_ID
320 323
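For a type B driver, reporting the tool type per the note above reduces to a call like this sketch (the `contact` structure, its `is_palm` flag, and the `input`/`slot` variables are hypothetical driver state; the `input_mt_*` helpers come from the text):

```
	input_mt_slot(input, slot);
	input_mt_report_slot_state(input, contact->is_palm ?
				   MT_TOOL_PALM : MT_TOOL_FINGER, true);
	input_report_abs(input, ABS_MT_POSITION_X, contact->x);
	input_report_abs(input, ABS_MT_POSITION_Y, contact->y);
```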
+17 -5
Documentation/power/suspend-and-interrupts.txt
···
40 40
41 41 The IRQF_NO_SUSPEND flag is used to indicate that to the IRQ subsystem when
42 42 requesting a special-purpose interrupt.  It causes suspend_device_irqs() to
43  - leave the corresponding IRQ enabled so as to allow the interrupt to work all
44  - the time as expected.
43  + leave the corresponding IRQ enabled so as to allow the interrupt to work as
44  + expected during the suspend-resume cycle, but does not guarantee that the
45  + interrupt will wake the system from a suspended state -- for such cases it is
46  + necessary to use enable_irq_wake().
45 47
46 48 Note that the IRQF_NO_SUSPEND flag affects the entire IRQ and not just one
47 49 user of it.  Thus, if the IRQ is shared, all of the interrupt handlers installed
···
112 110 IRQF_NO_SUSPEND and enable_irq_wake()
113 111 -------------------------------------
114 112
115  - There are no valid reasons to use both enable_irq_wake() and the IRQF_NO_SUSPEND
116  - flag on the same IRQ.
113  + There are very few valid reasons to use both enable_irq_wake() and the
114  + IRQF_NO_SUSPEND flag on the same IRQ, and it is never valid to use both for the
115  + same device.
117 116
118 117 First of all, if the IRQ is not shared, the rules for handling IRQF_NO_SUSPEND
119 118 interrupts (interrupt handlers are invoked after suspend_device_irqs()) are
···
123 120
124 121 Second, both enable_irq_wake() and IRQF_NO_SUSPEND apply to entire IRQs and not
125 122 to individual interrupt handlers, so sharing an IRQ between a system wakeup
126  - interrupt source and an IRQF_NO_SUSPEND interrupt source does not make sense.
123  + interrupt source and an IRQF_NO_SUSPEND interrupt source does not generally
124  + make sense.
125  +
126  + In rare cases an IRQ can be shared between a wakeup device driver and an
127  + IRQF_NO_SUSPEND user. In order for this to be safe, the wakeup device driver
128  + must be able to discern spurious IRQs from genuine wakeup events (signalling
129  + the latter to the core with pm_system_wakeup()), must use enable_irq_wake() to
130  + ensure that the IRQ will function as a wakeup source, and must request the IRQ
131  + with IRQF_COND_SUSPEND to tell the core that it meets these requirements. If
132  + these requirements are not met, it is not valid to use IRQF_COND_SUSPEND.
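The three requirements stated in that last paragraph can be sketched as follows (all `foo_*` names are hypothetical; only request_irq(), enable_irq_wake(), pm_system_wakeup(), and the IRQF_* flags come from the text, so treat this as an illustration, not a specific driver):

```
/* Wakeup driver sharing an IRQ line with an IRQF_NO_SUSPEND user. */
static irqreturn_t foo_irq_handler(int irq, void *dev_id)
{
	struct foo_device *foo = dev_id;

	/* Requirement 1: discern spurious IRQs from genuine wakeup events */
	if (!foo_event_pending(foo))
		return IRQ_NONE;

	/* ...and signal genuine wakeup events to the core */
	pm_system_wakeup();
	return IRQ_HANDLED;
}

	/* Requirement 3: declare the guarantees with IRQF_COND_SUSPEND */
	ret = request_irq(irq, foo_irq_handler,
			  IRQF_SHARED | IRQF_COND_SUSPEND, "foo", foo);
	/* Requirement 2: ensure the IRQ functions as a wakeup source */
	if (!ret)
		enable_irq_wake(irq);
```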
+56 -24
MAINTAINERS
···
 637  637 F:	include/uapi/linux/kfd_ioctl.h
 638  638
 639  639 AMD MICROCODE UPDATE SUPPORT
 640  - M:	Andreas Herrmann <herrmann.der.user@googlemail.com>
 641  - L:	amd64-microcode@amd64.org
 640  + M:	Borislav Petkov <bp@alien8.de>
 642  641 S:	Maintained
 643  642 F:	arch/x86/kernel/cpu/microcode/amd*
 644  643
···
1029 1030 F:	arch/arm/boot/dts/imx*
1030 1031 F:	arch/arm/configs/imx*_defconfig
1031 1032
1033 + ARM/FREESCALE VYBRID ARM ARCHITECTURE
1034 + M:	Shawn Guo <shawn.guo@linaro.org>
1035 + M:	Sascha Hauer <kernel@pengutronix.de>
1036 + R:	Stefan Agner <stefan@agner.ch>
1037 + L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
1038 + S:	Maintained
1039 + T:	git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
1040 + F:	arch/arm/mach-imx/*vf610*
1041 + F:	arch/arm/boot/dts/vf*
1042 +
1032 1043 ARM/GLOMATION GESBC9312SX MACHINE SUPPORT
1033 1044 M:	Lennert Buytenhek <kernel@wantstofly.org>
1034 1045 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
1185 1176 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
1186 1177 S:	Maintained
1187 1178 F:	arch/arm/mach-mvebu/
1188 - F:	drivers/rtc/armada38x-rtc
1179 + F:	drivers/rtc/rtc-armada38x.c
1189 1180
1190 1181 ARM/Marvell Berlin SoC support
1191 1182 M:	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
···
1197 1188 M:	Jason Cooper <jason@lakedaemon.net>
1198 1189 M:	Andrew Lunn <andrew@lunn.ch>
1199 1190 M:	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
1191 + M:	Gregory Clement <gregory.clement@free-electrons.com>
1200 1192 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
1201 1193 S:	Maintained
1202 1194 F:	arch/arm/mach-dove/
···
1361 1351 F:	drivers/*/*rockchip*
1362 1352 F:	drivers/*/*/*rockchip*
1363 1353 F:	sound/soc/rockchip/
1354 + N:	rockchip
1364 1355
1365 1356 ARM/SAMSUNG EXYNOS ARM ARCHITECTURES
1366 1357 M:	Kukjin Kim <kgene@kernel.org>
···
1675 1664 F:	include/linux/platform_data/at24.h
1676 1665
1677 1666 ATA OVER ETHERNET (AOE) DRIVER
1678 - M:	"Ed L. Cashin" <ecashin@coraid.com>
1679 - W:	http://support.coraid.com/support/linux
1667 + M:	"Ed L. Cashin" <ed.cashin@acm.org>
1668 + W:	http://www.openaoe.org/
1680 1669 S:	Supported
1681 1670 F:	Documentation/aoe/
1682 1671 F:	drivers/block/aoe/
···
1741 1730 F:	drivers/net/ethernet/atheros/
1742 1731
1743 1732 ATM
1744 - M:	Chas Williams <chas@cmf.nrl.navy.mil>
1733 + M:	Chas Williams <3chas3@gmail.com>
1745 1734 L:	linux-atm-general@lists.sourceforge.net (moderated for non-subscribers)
1746 1735 L:	netdev@vger.kernel.org
1747 1736 W:	http://linux-atm.sourceforge.net
···
2076 2065 BONDING DRIVER
2077 2066 M:	Jay Vosburgh <j.vosburgh@gmail.com>
2078 2067 M:	Veaceslav Falico <vfalico@gmail.com>
2079 - M:	Andy Gospodarek <andy@greyhouse.net>
2068 + M:	Andy Gospodarek <gospo@cumulusnetworks.com>
2080 2069 L:	netdev@vger.kernel.org
2081 2070 W:	http://sourceforge.net/projects/bonding/
2082 2071 S:	Supported
···
2118 2107
2119 2108 BROADCOM BCM281XX/BCM11XXX/BCM216XX ARM ARCHITECTURE
2120 2109 M:	Christian Daudt <bcm@fixthebug.org>
2121 - M:	Matt Porter <mporter@linaro.org>
2122 2110 M:	Florian Fainelli <f.fainelli@gmail.com>
2123 2111 L:	bcm-kernel-feedback-list@broadcom.com
2124 2112 T:	git git://github.com/broadcom/mach-bcm
···
2379 2369
2380 2370 CAN NETWORK LAYER
2381 2371 M:	Oliver Hartkopp <socketcan@hartkopp.net>
2372 + M:	Marc Kleine-Budde <mkl@pengutronix.de>
2382 2373 L:	linux-can@vger.kernel.org
2383 - W:	http://gitorious.org/linux-can
2374 + W:	https://github.com/linux-can
2384 2375 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git
2385 2376 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git
2386 2377 S:	Maintained
···
2397 2386 M:	Wolfgang Grandegger <wg@grandegger.com>
2398 2387 M:	Marc Kleine-Budde <mkl@pengutronix.de>
2399 2388 L:	linux-can@vger.kernel.org
2400 - W:	http://gitorious.org/linux-can
2389 + W:	https://github.com/linux-can
2401 2390 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git
2402 2391 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git
2403 2392 S:	Maintained
···
3251 3240 S:	Maintained
3252 3241 F:	Documentation/hwmon/dme1737
3253 3242 F:	drivers/hwmon/dme1737.c
3243 +
3244 + DMI/SMBIOS SUPPORT
3245 + M:	Jean Delvare <jdelvare@suse.de>
3246 + S:	Maintained
3247 + F:	drivers/firmware/dmi-id.c
3248 + F:	drivers/firmware/dmi_scan.c
3249 + F:	include/linux/dmi.h
3254 3250
3255 3251 DOCKING STATION DRIVER
3256 3252 M:	Shaohua Li <shaohua.li@intel.com>
···
5094 5076 F:	drivers/platform/x86/intel_menlow.c
5095 5077
5096 5078 INTEL IA32 MICROCODE UPDATE SUPPORT
5097 - M:	Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
5079 + M:	Borislav Petkov <bp@alien8.de>
5098 5080 S:	Maintained
5099 5081 F:	arch/x86/kernel/cpu/microcode/core*
5100 5082 F:	arch/x86/kernel/cpu/microcode/intel*
···
5135 5117 S:	Maintained
5136 5118 F:	drivers/char/hw_random/ixp4xx-rng.c
5137 5119
5138 - INTEL ETHERNET DRIVERS (e100/e1000/e1000e/fm10k/igb/igbvf/ixgb/ixgbe/ixgbevf/i40e/i40evf)
5120 + INTEL ETHERNET DRIVERS
5139 5121 M:	Jeff Kirsher <jeffrey.t.kirsher@intel.com>
5140 - M:	Jesse Brandeburg <jesse.brandeburg@intel.com>
5141 - M:	Bruce Allan <bruce.w.allan@intel.com>
5142 - M:	Carolyn Wyborny <carolyn.wyborny@intel.com>
5143 - M:	Don Skidmore <donald.c.skidmore@intel.com>
5144 - M:	Greg Rose <gregory.v.rose@intel.com>
5145 - M:	Matthew Vick <matthew.vick@intel.com>
5146 - M:	John Ronciak <john.ronciak@intel.com>
5147 - M:	Mitch Williams <mitch.a.williams@intel.com>
5148 - M:	Linux NICS <linux.nics@intel.com>
5149 - L:	e1000-devel@lists.sourceforge.net
5122 + R:	Jesse Brandeburg <jesse.brandeburg@intel.com>
5123 + R:	Shannon Nelson <shannon.nelson@intel.com>
5124 + R:	Carolyn Wyborny <carolyn.wyborny@intel.com>
5125 + R:	Don Skidmore <donald.c.skidmore@intel.com>
5126 + R:	Matthew Vick <matthew.vick@intel.com>
5127 + R:	John Ronciak <john.ronciak@intel.com>
5128 + R:	Mitch Williams <mitch.a.williams@intel.com>
5129 + L:	intel-wired-lan@lists.osuosl.org
5150 5130 W:	http://www.intel.com/support/feedback.htm
5151 5131 W:	http://e1000.sourceforge.net/
5152 - T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net.git
5153 - T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next.git
5132 + Q:	http://patchwork.ozlabs.org/project/intel-wired-lan/list/
5133 + T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue.git
5134 + T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue.git
5154 5135 S:	Supported
5155 5136 F:	Documentation/networking/e100.txt
5156 5137 F:	Documentation/networking/e1000.txt
···
8497 8480 L:	netdev@vger.kernel.org
8498 8481 F:	drivers/net/ethernet/samsung/sxgbe/
8499 8482
8483 + SAMSUNG THERMAL DRIVER
8484 + M:	Lukasz Majewski <l.majewski@samsung.com>
8485 + L:	linux-pm@vger.kernel.org
8486 + L:	linux-samsung-soc@vger.kernel.org
8487 + S:	Supported
8488 + T:	https://github.com/lmajewski/linux-samsung-thermal.git
8489 + F:	drivers/thermal/samsung/
8490 +
8500 8491 SAMSUNG USB2 PHY DRIVER
8501 8492 M:	Kamil Debski <k.debski@samsung.com>
8502 8493 L:	linux-kernel@vger.kernel.org
···
10212 10187 S:	Maintained
10213 10188 F:	Documentation/usb/ohci.txt
10214 10189 F:	drivers/usb/host/ohci*
10190 +
10191 + USB OTG FSM (Finite State Machine)
10192 + M:	Peter Chen <Peter.Chen@freescale.com>
10193 + T:	git git://github.com/hzpeterchen/linux-usb.git
10194 + L:	linux-usb@vger.kernel.org
10195 + S:	Maintained
10196 + F:	drivers/usb/common/usb-otg-fsm.c
10197 +
10215 10198 USB OVER IP DRIVER
10216 10199 M:	Valentina Manea <valentina.manea.m@gmail.com>
+1 -1
Makefile
···
1 1 VERSION = 4
2 2 PATCHLEVEL = 0
3 3 SUBLEVEL = 0
4 - EXTRAVERSION = -rc2
4 + EXTRAVERSION = -rc7
5 5 NAME = Hurr durr I'ma sheep
6 6
7 7 # *DOCUMENTATION*
+7 -7
arch/arc/include/asm/processor.h
···
47 47 /* Forward declaration, a strange C thing */
48 48 struct task_struct;
49 49
50 - /* Return saved PC of a blocked thread  */
51 - unsigned long thread_saved_pc(struct task_struct *t);
52 -
53 50 #define task_pt_regs(p) \
54 51     ((struct pt_regs *)(THREAD_SIZE + (void *)task_stack_page(p)) - 1)
55 52
···
69 72 #define release_segments(mm)        do { } while (0)
70 73
71 74 #define KSTK_EIP(tsk)   (task_pt_regs(tsk)->ret)
75 + #define KSTK_ESP(tsk)   (task_pt_regs(tsk)->sp)
72 76
73 77 /*
74 78  * Where abouts of Task's sp, fp, blink when it was last seen in kernel mode.
75 79  * Look in process.c for details of kernel stack layout
76 80  */
77 - #define KSTK_ESP(tsk)   (tsk->thread.ksp)
81 + #define TSK_K_ESP(tsk)  (tsk->thread.ksp)
78 82
79 - #define KSTK_REG(tsk, off)  (*((unsigned int *)(KSTK_ESP(tsk) + \
83 + #define TSK_K_REG(tsk, off) (*((unsigned int *)(TSK_K_ESP(tsk) + \
80 84                     sizeof(struct callee_regs) + off)))
81 85
82 - #define KSTK_BLINK(tsk) KSTK_REG(tsk, 4)
83 - #define KSTK_FP(tsk)    KSTK_REG(tsk, 0)
86 + #define TSK_K_BLINK(tsk) TSK_K_REG(tsk, 4)
87 + #define TSK_K_FP(tsk)    TSK_K_REG(tsk, 0)
88 +
89 + #define thread_saved_pc(tsk) TSK_K_BLINK(tsk)
84 90
85 91 extern void start_thread(struct pt_regs * regs, unsigned long pc,
86 92                          unsigned long usp);
+37
arch/arc/include/asm/stacktrace.h
···
 1 + /*
 2 +  * Copyright (C) 2014-15 Synopsys, Inc. (www.synopsys.com)
 3 +  * Copyright (C) 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
 4 +  *
 5 +  * This program is free software; you can redistribute it and/or modify
 6 +  * it under the terms of the GNU General Public License version 2 as
 7 +  * published by the Free Software Foundation.
 8 +  */
 9 +
10 + #ifndef __ASM_STACKTRACE_H
11 + #define __ASM_STACKTRACE_H
12 +
13 + #include <linux/sched.h>
14 +
15 + /**
16 +  * arc_unwind_core - Unwind the kernel mode stack for an execution context
17 +  * @tsk:         NULL for current task, specific task otherwise
18 +  * @regs:        pt_regs used to seed the unwinder {SP, FP, BLINK, PC}
19 +  *               If NULL, use pt_regs of @tsk (if !NULL) otherwise
20 +  *               use the current values of {SP, FP, BLINK, PC}
21 +  * @consumer_fn: Callback invoked for each frame unwound
22 +  *               Returns 0 to continue unwinding, -1 to stop
23 +  * @arg:         Arg to callback
24 +  *
25 +  * Returns the address of first function in stack
26 +  *
27 +  * Semantics:
28 +  *  - synchronous unwinding (e.g. dump_stack): @tsk  NULL, @regs NULL
29 +  *  - Asynchronous unwinding of sleeping task: @tsk !NULL, @regs NULL
30 +  *  - Asynchronous unwinding of intr/excp etc: @tsk !NULL, @regs !NULL
31 +  */
32 + notrace noinline unsigned int arc_unwind_core(
33 +     struct task_struct *tsk, struct pt_regs *regs,
34 +     int (*consumer_fn) (unsigned int, void *),
35 +     void *arg);
36 +
37 + #endif /* __ASM_STACKTRACE_H */
-23
arch/arc/kernel/process.c
···
192 192     return 0;
193 193 }
194 194
195 - /*
196 -  * API: expected by schedular Code: If thread is sleeping where is that.
197 -  * What is this good for? it will be always the scheduler or ret_from_fork.
198 -  * So we hard code that anyways.
199 -  */
200 - unsigned long thread_saved_pc(struct task_struct *t)
201 - {
202 -     struct pt_regs *regs = task_pt_regs(t);
203 -     unsigned long blink = 0;
204 -
205 -     /*
206 -      * If the thread being queried for in not itself calling this, then it
207 -      * implies it is not executing, which in turn implies it is sleeping,
208 -      * which in turn implies it got switched OUT by the schedular.
209 -      * In that case, it's kernel mode blink can reliably retrieved as per
210 -      * the picture above (right above pt_regs).
211 -      */
212 -     if (t != current && t->state != TASK_RUNNING)
213 -         blink = *((unsigned int *)regs - 1);
214 -
215 -     return blink;
216 - }
217 -
218 195 int elf_check_arch(const struct elf32_hdr *x)
219 196 {
220 197     unsigned int eflags;
+18 -6
arch/arc/kernel/signal.c
···
 67  67                  sigset_t *set)
 68  68 {
 69  69     int err;
 70   -     err = __copy_to_user(&(sf->uc.uc_mcontext.regs), regs,
 70   +     err = __copy_to_user(&(sf->uc.uc_mcontext.regs.scratch), regs,
 71  71                  sizeof(sf->uc.uc_mcontext.regs.scratch));
 72  72     err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(sigset_t));
 73  73
···
 83  83     if (!err)
 84  84         set_current_blocked(&set);
 85  85
 86   -     err |= __copy_from_user(regs, &(sf->uc.uc_mcontext.regs),
 86   +     err |= __copy_from_user(regs, &(sf->uc.uc_mcontext.regs.scratch),
 87  87                  sizeof(sf->uc.uc_mcontext.regs.scratch));
 88  88
 89  89     return err;
···
130 130
131 131     /* Don't restart from sigreturn */
132 132     syscall_wont_restart(regs);
133   +
134   +     /*
135   +      * Ensure that sigreturn always returns to user mode (in case the
136   +      * regs saved on user stack got fudged between save and sigreturn)
137   +      * Otherwise it is easy to panic the kernel with a custom
138   +      * signal handler and/or restorer which clobbers the status32/ret
139   +      * to return to a bogus location in kernel mode.
140   +      */
141   +     regs->status32 |= STATUS_U_MASK;
133 142
134 143     return regs->r0;
135 144
···
238 229
239 230     /*
240 231      * handler returns using sigreturn stub provided already by userspace
232   +      * If not, nuke the process right away
241 233      */
242   -     BUG_ON(!(ksig->ka.sa.sa_flags & SA_RESTORER));
234   +     if (!(ksig->ka.sa.sa_flags & SA_RESTORER))
235   +         return 1;
236   +
243 237     regs->blink = (unsigned long)ksig->ka.sa.sa_restorer;
244 238
245 239     /* User Stack for signal handler will be above the frame just carved */
···
308 296 handle_signal(struct ksignal *ksig, struct pt_regs *regs)
309 297 {
310 298     sigset_t *oldset = sigmask_to_save();
311   -     int ret;
299   +     int failed;
312 300
313 301     /* Set up the stack frame */
314   -     ret = setup_rt_frame(ksig, oldset, regs);
302   +     failed = setup_rt_frame(ksig, oldset, regs);
315 303
316   -     signal_setup_done(ret, ksig, 0);
304   +     signal_setup_done(failed, ksig, 0);
317 305 }
318 306
319 307 void do_signal(struct pt_regs *regs)
+17 -4
arch/arc/kernel/stacktrace.c
···
 43  43              struct pt_regs *regs,
 44  44              struct unwind_frame_info *frame_info)
 45  45 {
 46   +     /*
 47   +      * synchronous unwinding (e.g. dump_stack)
 48   +      * - uses current values of SP and friends
 49   +      */
 46  50     if (tsk == NULL && regs == NULL) {
 47  51         unsigned long fp, sp, blink, ret;
 48  52         frame_info->task = current;
···
 65  61         frame_info->regs.r63 = ret;
 66  62         frame_info->call_frame = 0;
 67  63     } else if (regs == NULL) {
 64   +         /*
 65   +          * Asynchronous unwinding of sleeping task
 66   +          * - Gets SP etc from task's pt_regs (saved bottom of kernel
 67   +          *   mode stack of task)
 68   +          */
 68  69
 69  70         frame_info->task = tsk;
 70  71
 71   -         frame_info->regs.r27 = KSTK_FP(tsk);
 72   -         frame_info->regs.r28 = KSTK_ESP(tsk);
 73   -         frame_info->regs.r31 = KSTK_BLINK(tsk);
 72   +         frame_info->regs.r27 = TSK_K_FP(tsk);
 73   +         frame_info->regs.r28 = TSK_K_ESP(tsk);
 74   +         frame_info->regs.r31 = TSK_K_BLINK(tsk);
 74  75         frame_info->regs.r63 = (unsigned int)__switch_to;
 75  76
 76  77         /* In the prologue of __switch_to, first FP is saved on stack
···
 92  83         frame_info->call_frame = 0;
 93  84
 94  85     } else {
 86   +         /*
 87   +          * Asynchronous unwinding of intr/exception
 88   +          * - Just uses the pt_regs passed
 89   +          */
 95  90         frame_info->task = tsk;
 96  91
 97  92         frame_info->regs.r27 = regs->fp;
···
108  95
109  96 #endif
110  97
111   - static noinline unsigned int
 98   + notrace noinline unsigned int
112  99 arc_unwind_core(struct task_struct *tsk, struct pt_regs *regs,
113 100         int (*consumer_fn) (unsigned int, void *), void *arg)
114 101 {
+2
arch/arc/kernel/unaligned.c
···
 12  12  */
 13  13
 14  14 #include <linux/types.h>
 15   + #include <linux/perf_event.h>
 15  16 #include <linux/ptrace.h>
 16  17 #include <linux/uaccess.h>
 17  18 #include <asm/disasm.h>
···
254 253     }
255 254 }
256 255
256   +     perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, address);
257 257     return 0;
258 258
259 259 fault:
+10 -2
arch/arc/mm/fault.c
···
 14  14 #include <linux/ptrace.h>
 15  15 #include <linux/uaccess.h>
 16  16 #include <linux/kdebug.h>
 17   + #include <linux/perf_event.h>
 17  18 #include <asm/pgalloc.h>
 18  19 #include <asm/mmu.h>
 19  20
···
140 139         return;
141 140     }
142 141
142   +     perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
143   +
143 144     if (likely(!(fault & VM_FAULT_ERROR))) {
144 145         if (flags & FAULT_FLAG_ALLOW_RETRY) {
145 146             /* To avoid updating stats twice for retry case */
146   -             if (fault & VM_FAULT_MAJOR)
147   +             if (fault & VM_FAULT_MAJOR) {
147 148                 tsk->maj_flt++;
148   -             else
149   +                 perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
150   +                               regs, address);
151   +             } else {
149 152                 tsk->min_flt++;
153   +                 perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
154   +                               regs, address);
155   +             }
150 156
151 157         if (fault & VM_FAULT_RETRY) {
152 158             flags &= ~FAULT_FLAG_ALLOW_RETRY;
+1
arch/arm/Kconfig
···
619 619     select GENERIC_CLOCKEVENTS
620 620     select GPIO_PXA
621 621     select HAVE_IDE
622 +     select IRQ_DOMAIN
622 623     select MULTI_IRQ_HANDLER
623 624     select PLAT_PXA
624 625     select SPARSE_IRQ
+1
arch/arm/Makefile
···
150 150 machine-$(CONFIG_ARCH_CLPS711X)     += clps711x
151 151 machine-$(CONFIG_ARCH_CNS3XXX)      += cns3xxx
152 152 machine-$(CONFIG_ARCH_DAVINCI)      += davinci
153 +  machine-$(CONFIG_ARCH_DIGICOLOR)   += digicolor
153 154 machine-$(CONFIG_ARCH_DOVE)         += dove
154 155 machine-$(CONFIG_ARCH_EBSA110)      += ebsa110
155 156 machine-$(CONFIG_ARCH_EFM32)        += efm32
+8
arch/arm/boot/dts/am335x-bone-common.dtsi
··· 301 301 cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>; 302 302 cd-inverted; 303 303 }; 304 + 305 + &aes { 306 + status = "okay"; 307 + }; 308 + 309 + &sham { 310 + status = "okay"; 311 + };
-8
arch/arm/boot/dts/am335x-bone.dts
··· 24 24 &mmc1 { 25 25 vmmc-supply = <&ldo3_reg>; 26 26 }; 27 - 28 - &sham { 29 - status = "okay"; 30 - }; 31 - 32 - &aes { 33 - status = "okay"; 34 - };
+4
arch/arm/boot/dts/am335x-lxm.dts
··· 328 328 dual_emac_res_vlan = <3>; 329 329 }; 330 330 331 + &phy_sel { 332 + rmii-clock-ext; 333 + }; 334 + 331 335 &mac { 332 336 pinctrl-names = "default", "sleep"; 333 337 pinctrl-0 = <&cpsw_default>;
+3 -3
arch/arm/boot/dts/am33xx-clocks.dtsi
··· 99 99 ehrpwm0_tbclk: ehrpwm0_tbclk@44e10664 { 100 100 #clock-cells = <0>; 101 101 compatible = "ti,gate-clock"; 102 - clocks = <&dpll_per_m2_ck>; 102 + clocks = <&l4ls_gclk>; 103 103 ti,bit-shift = <0>; 104 104 reg = <0x0664>; 105 105 }; ··· 107 107 ehrpwm1_tbclk: ehrpwm1_tbclk@44e10664 { 108 108 #clock-cells = <0>; 109 109 compatible = "ti,gate-clock"; 110 - clocks = <&dpll_per_m2_ck>; 110 + clocks = <&l4ls_gclk>; 111 111 ti,bit-shift = <1>; 112 112 reg = <0x0664>; 113 113 }; ··· 115 115 ehrpwm2_tbclk: ehrpwm2_tbclk@44e10664 { 116 116 #clock-cells = <0>; 117 117 compatible = "ti,gate-clock"; 118 - clocks = <&dpll_per_m2_ck>; 118 + clocks = <&l4ls_gclk>; 119 119 ti,bit-shift = <2>; 120 120 reg = <0x0664>; 121 121 };
+6 -6
arch/arm/boot/dts/am43xx-clocks.dtsi
··· 107 107 ehrpwm0_tbclk: ehrpwm0_tbclk { 108 108 #clock-cells = <0>; 109 109 compatible = "ti,gate-clock"; 110 - clocks = <&dpll_per_m2_ck>; 110 + clocks = <&l4ls_gclk>; 111 111 ti,bit-shift = <0>; 112 112 reg = <0x0664>; 113 113 }; ··· 115 115 ehrpwm1_tbclk: ehrpwm1_tbclk { 116 116 #clock-cells = <0>; 117 117 compatible = "ti,gate-clock"; 118 - clocks = <&dpll_per_m2_ck>; 118 + clocks = <&l4ls_gclk>; 119 119 ti,bit-shift = <1>; 120 120 reg = <0x0664>; 121 121 }; ··· 123 123 ehrpwm2_tbclk: ehrpwm2_tbclk { 124 124 #clock-cells = <0>; 125 125 compatible = "ti,gate-clock"; 126 - clocks = <&dpll_per_m2_ck>; 126 + clocks = <&l4ls_gclk>; 127 127 ti,bit-shift = <2>; 128 128 reg = <0x0664>; 129 129 }; ··· 131 131 ehrpwm3_tbclk: ehrpwm3_tbclk { 132 132 #clock-cells = <0>; 133 133 compatible = "ti,gate-clock"; 134 - clocks = <&dpll_per_m2_ck>; 134 + clocks = <&l4ls_gclk>; 135 135 ti,bit-shift = <4>; 136 136 reg = <0x0664>; 137 137 }; ··· 139 139 ehrpwm4_tbclk: ehrpwm4_tbclk { 140 140 #clock-cells = <0>; 141 141 compatible = "ti,gate-clock"; 142 - clocks = <&dpll_per_m2_ck>; 142 + clocks = <&l4ls_gclk>; 143 143 ti,bit-shift = <5>; 144 144 reg = <0x0664>; 145 145 }; ··· 147 147 ehrpwm5_tbclk: ehrpwm5_tbclk { 148 148 #clock-cells = <0>; 149 149 compatible = "ti,gate-clock"; 150 - clocks = <&dpll_per_m2_ck>; 150 + clocks = <&l4ls_gclk>; 151 151 ti,bit-shift = <6>; 152 152 reg = <0x0664>; 153 153 };
+3 -4
arch/arm/boot/dts/at91sam9260.dtsi
··· 494 494 495 495 pinctrl_usart3_rts: usart3_rts-0 { 496 496 atmel,pins = 497 - <AT91_PIOB 8 AT91_PERIPH_B AT91_PINCTRL_NONE>; /* PC8 periph B */ 497 + <AT91_PIOC 8 AT91_PERIPH_B AT91_PINCTRL_NONE>; 498 498 }; 499 499 500 500 pinctrl_usart3_cts: usart3_cts-0 { 501 501 atmel,pins = 502 - <AT91_PIOB 10 AT91_PERIPH_B AT91_PINCTRL_NONE>; /* PC10 periph B */ 502 + <AT91_PIOC 10 AT91_PERIPH_B AT91_PINCTRL_NONE>; 503 503 }; 504 504 }; 505 505 ··· 853 853 }; 854 854 855 855 usb1: gadget@fffa4000 { 856 - compatible = "atmel,at91rm9200-udc"; 856 + compatible = "atmel,at91sam9260-udc"; 857 857 reg = <0xfffa4000 0x4000>; 858 858 interrupts = <10 IRQ_TYPE_LEVEL_HIGH 2>; 859 859 clocks = <&udc_clk>, <&udpck>; ··· 976 976 atmel,watchdog-type = "hardware"; 977 977 atmel,reset-type = "all"; 978 978 atmel,dbg-halt; 979 - atmel,idle-halt; 980 979 status = "disabled"; 981 980 }; 982 981
+5 -4
arch/arm/boot/dts/at91sam9261.dtsi
··· 124 124 }; 125 125 126 126 usb1: gadget@fffa4000 { 127 - compatible = "atmel,at91rm9200-udc"; 127 + compatible = "atmel,at91sam9261-udc"; 128 128 reg = <0xfffa4000 0x4000>; 129 129 interrupts = <10 IRQ_TYPE_LEVEL_HIGH 2>; 130 - clocks = <&usb>, <&udc_clk>, <&udpck>; 131 - clock-names = "usb_clk", "udc_clk", "udpck"; 130 + clocks = <&udc_clk>, <&udpck>; 131 + clock-names = "pclk", "hclk"; 132 + atmel,matrix = <&matrix>; 132 133 status = "disabled"; 133 134 }; 134 135 ··· 263 262 }; 264 263 265 264 matrix: matrix@ffffee00 { 266 - compatible = "atmel,at91sam9260-bus-matrix"; 265 + compatible = "atmel,at91sam9260-bus-matrix", "syscon"; 267 266 reg = <0xffffee00 0x200>; 268 267 }; 269 268
+2 -3
arch/arm/boot/dts/at91sam9263.dtsi
··· 69 69 70 70 sram1: sram@00500000 { 71 71 compatible = "mmio-sram"; 72 - reg = <0x00300000 0x4000>; 72 + reg = <0x00500000 0x4000>; 73 73 }; 74 74 75 75 ahb { ··· 856 856 }; 857 857 858 858 usb1: gadget@fff78000 { 859 - compatible = "atmel,at91rm9200-udc"; 859 + compatible = "atmel,at91sam9263-udc"; 860 860 reg = <0xfff78000 0x4000>; 861 861 interrupts = <24 IRQ_TYPE_LEVEL_HIGH 2>; 862 862 clocks = <&udc_clk>, <&udpck>; ··· 905 905 atmel,watchdog-type = "hardware"; 906 906 atmel,reset-type = "all"; 907 907 atmel,dbg-halt; 908 - atmel,idle-halt; 909 908 status = "disabled"; 910 909 }; 911 910
+1 -2
arch/arm/boot/dts/at91sam9g45.dtsi
··· 1116 1116 atmel,watchdog-type = "hardware"; 1117 1117 atmel,reset-type = "all"; 1118 1118 atmel,dbg-halt; 1119 - atmel,idle-halt; 1120 1119 status = "disabled"; 1121 1120 }; 1122 1121 ··· 1300 1301 compatible = "atmel,at91sam9g45-ehci", "usb-ehci"; 1301 1302 reg = <0x00800000 0x100000>; 1302 1303 interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>; 1303 - clocks = <&usb>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>; 1304 + clocks = <&utmi>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>; 1304 1305 clock-names = "usb_clk", "ehci_clk", "hclk", "uhpck"; 1305 1306 status = "disabled"; 1306 1307 };
-1
arch/arm/boot/dts/at91sam9n12.dtsi
··· 894 894 atmel,watchdog-type = "hardware"; 895 895 atmel,reset-type = "all"; 896 896 atmel,dbg-halt; 897 - atmel,idle-halt; 898 897 status = "disabled"; 899 898 }; 900 899
+2 -3
arch/arm/boot/dts/at91sam9x5.dtsi
··· 1066 1066 reg = <0x00500000 0x80000 1067 1067 0xf803c000 0x400>; 1068 1068 interrupts = <23 IRQ_TYPE_LEVEL_HIGH 0>; 1069 - clocks = <&usb>, <&udphs_clk>; 1069 + clocks = <&utmi>, <&udphs_clk>; 1070 1070 clock-names = "hclk", "pclk"; 1071 1071 status = "disabled"; 1072 1072 ··· 1130 1130 atmel,watchdog-type = "hardware"; 1131 1131 atmel,reset-type = "all"; 1132 1132 atmel,dbg-halt; 1133 - atmel,idle-halt; 1134 1133 status = "disabled"; 1135 1134 }; 1136 1135 ··· 1185 1186 compatible = "atmel,at91sam9g45-ehci", "usb-ehci"; 1186 1187 reg = <0x00700000 0x100000>; 1187 1188 interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>; 1188 - clocks = <&usb>, <&uhphs_clk>, <&uhpck>; 1189 + clocks = <&utmi>, <&uhphs_clk>, <&uhpck>; 1189 1190 clock-names = "usb_clk", "ehci_clk", "uhpck"; 1190 1191 status = "disabled"; 1191 1192 };
+19
arch/arm/boot/dts/dm8168-evm.dts
··· 36 36 >; 37 37 }; 38 38 39 + mmc_pins: pinmux_mmc_pins { 40 + pinctrl-single,pins = < 41 + DM816X_IOPAD(0x0a70, MUX_MODE0) /* SD_POW */ 42 + DM816X_IOPAD(0x0a74, MUX_MODE0) /* SD_CLK */ 43 + DM816X_IOPAD(0x0a78, MUX_MODE0) /* SD_CMD */ 44 + DM816X_IOPAD(0x0a7C, MUX_MODE0) /* SD_DAT0 */ 45 + DM816X_IOPAD(0x0a80, MUX_MODE0) /* SD_DAT1 */ 46 + DM816X_IOPAD(0x0a84, MUX_MODE0) /* SD_DAT2 */ 47 + DM816X_IOPAD(0x0a88, MUX_MODE0) /* SD_DAT3 */ 48 + DM816X_IOPAD(0x0a8c, MUX_MODE2) /* GP1[7] */ 49 + DM816X_IOPAD(0x0a90, MUX_MODE2) /* GP1[8] */ 50 + >; 51 + }; 52 + 39 53 usb0_pins: pinmux_usb0_pins { 40 54 pinctrl-single,pins = < 41 55 DM816X_IOPAD(0x0d00, MUX_MODE0) /* USB0_DRVVBUS */ ··· 151 137 }; 152 138 153 139 &mmc1 { 140 + pinctrl-names = "default"; 141 + pinctrl-0 = <&mmc_pins>; 154 142 vmmc-supply = <&vmmcsd_fixed>; 143 + bus-width = <4>; 144 + cd-gpios = <&gpio2 7 GPIO_ACTIVE_LOW>; 145 + wp-gpios = <&gpio2 8 GPIO_ACTIVE_LOW>; 155 146 }; 156 147 157 148 /* At least dm8168-evm rev c won't support multipoint, later may */
+14 -4
arch/arm/boot/dts/dm816x.dtsi
··· 150 150 }; 151 151 152 152 gpio1: gpio@48032000 { 153 - compatible = "ti,omap3-gpio"; 153 + compatible = "ti,omap4-gpio"; 154 154 ti,hwmods = "gpio1"; 155 + ti,gpio-always-on; 155 156 reg = <0x48032000 0x1000>; 156 - interrupts = <97>; 157 + interrupts = <96>; 158 + gpio-controller; 159 + #gpio-cells = <2>; 160 + interrupt-controller; 161 + #interrupt-cells = <2>; 157 162 }; 158 163 159 164 gpio2: gpio@4804c000 { 160 - compatible = "ti,omap3-gpio"; 165 + compatible = "ti,omap4-gpio"; 161 166 ti,hwmods = "gpio2"; 167 + ti,gpio-always-on; 162 168 reg = <0x4804c000 0x1000>; 163 - interrupts = <99>; 169 + interrupts = <98>; 170 + gpio-controller; 171 + #gpio-cells = <2>; 172 + interrupt-controller; 173 + #interrupt-cells = <2>; 164 174 }; 165 175 166 176 gpmc: gpmc@50000000 {
+4 -6
arch/arm/boot/dts/dra7-evm.dts
··· 263 263 264 264 dcan1_pins_default: dcan1_pins_default { 265 265 pinctrl-single,pins = < 266 - 0x3d0 (PIN_OUTPUT | MUX_MODE0) /* dcan1_tx */ 267 - 0x3d4 (MUX_MODE15) /* dcan1_rx.off */ 268 - 0x418 (PULL_DIS | MUX_MODE1) /* wakeup0.dcan1_rx */ 266 + 0x3d0 (PIN_OUTPUT_PULLUP | MUX_MODE0) /* dcan1_tx */ 267 + 0x418 (PULL_UP | MUX_MODE1) /* wakeup0.dcan1_rx */ 269 268 >; 270 269 }; 271 270 272 271 dcan1_pins_sleep: dcan1_pins_sleep { 273 272 pinctrl-single,pins = < 274 - 0x3d0 (MUX_MODE15) /* dcan1_tx.off */ 275 - 0x3d4 (MUX_MODE15) /* dcan1_rx.off */ 276 - 0x418 (MUX_MODE15) /* wakeup0.off */ 273 + 0x3d0 (MUX_MODE15 | PULL_UP) /* dcan1_tx.off */ 274 + 0x418 (MUX_MODE15 | PULL_UP) /* wakeup0.off */ 277 275 >; 278 276 }; 279 277 };
-2
arch/arm/boot/dts/dra7.dtsi
··· 1111 1111 "wkupclk", "refclk", 1112 1112 "div-clk", "phy-div"; 1113 1113 #phy-cells = <0>; 1114 - ti,hwmods = "pcie1-phy"; 1115 1114 }; 1116 1115 1117 1116 pcie2_phy: pciephy@4a095000 { ··· 1129 1130 "wkupclk", "refclk", 1130 1131 "div-clk", "phy-div"; 1131 1132 #phy-cells = <0>; 1132 - ti,hwmods = "pcie2-phy"; 1133 1133 status = "disabled"; 1134 1134 }; 1135 1135 };
+4 -6
arch/arm/boot/dts/dra72-evm.dts
··· 119 119 120 120 dcan1_pins_default: dcan1_pins_default { 121 121 pinctrl-single,pins = < 122 - 0x3d0 (PIN_OUTPUT | MUX_MODE0) /* dcan1_tx */ 123 - 0x3d4 (MUX_MODE15) /* dcan1_rx.off */ 124 - 0x418 (PULL_DIS | MUX_MODE1) /* wakeup0.dcan1_rx */ 122 + 0x3d0 (PIN_OUTPUT_PULLUP | MUX_MODE0) /* dcan1_tx */ 123 + 0x418 (PULL_UP | MUX_MODE1) /* wakeup0.dcan1_rx */ 125 124 >; 126 125 }; 127 126 128 127 dcan1_pins_sleep: dcan1_pins_sleep { 129 128 pinctrl-single,pins = < 130 - 0x3d0 (MUX_MODE15) /* dcan1_tx.off */ 131 - 0x3d4 (MUX_MODE15) /* dcan1_rx.off */ 132 - 0x418 (MUX_MODE15) /* wakeup0.off */ 129 + 0x3d0 (MUX_MODE15 | PULL_UP) /* dcan1_tx.off */ 130 + 0x418 (MUX_MODE15 | PULL_UP) /* wakeup0.off */ 133 131 >; 134 132 }; 135 133
+81 -9
arch/arm/boot/dts/dra7xx-clocks.dtsi
··· 243 243 ti,invert-autoidle-bit; 244 244 }; 245 245 246 + dpll_core_byp_mux: dpll_core_byp_mux { 247 + #clock-cells = <0>; 248 + compatible = "ti,mux-clock"; 249 + clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 250 + ti,bit-shift = <23>; 251 + reg = <0x012c>; 252 + }; 253 + 246 254 dpll_core_ck: dpll_core_ck { 247 255 #clock-cells = <0>; 248 256 compatible = "ti,omap4-dpll-core-clock"; 249 - clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 257 + clocks = <&sys_clkin1>, <&dpll_core_byp_mux>; 250 258 reg = <0x0120>, <0x0124>, <0x012c>, <0x0128>; 251 259 }; 252 260 ··· 317 309 clock-div = <1>; 318 310 }; 319 311 312 + dpll_dsp_byp_mux: dpll_dsp_byp_mux { 313 + #clock-cells = <0>; 314 + compatible = "ti,mux-clock"; 315 + clocks = <&sys_clkin1>, <&dsp_dpll_hs_clk_div>; 316 + ti,bit-shift = <23>; 317 + reg = <0x0240>; 318 + }; 319 + 320 320 dpll_dsp_ck: dpll_dsp_ck { 321 321 #clock-cells = <0>; 322 322 compatible = "ti,omap4-dpll-clock"; 323 - clocks = <&sys_clkin1>, <&dsp_dpll_hs_clk_div>; 323 + clocks = <&sys_clkin1>, <&dpll_dsp_byp_mux>; 324 324 reg = <0x0234>, <0x0238>, <0x0240>, <0x023c>; 325 325 }; 326 326 ··· 351 335 clock-div = <1>; 352 336 }; 353 337 338 + dpll_iva_byp_mux: dpll_iva_byp_mux { 339 + #clock-cells = <0>; 340 + compatible = "ti,mux-clock"; 341 + clocks = <&sys_clkin1>, <&iva_dpll_hs_clk_div>; 342 + ti,bit-shift = <23>; 343 + reg = <0x01ac>; 344 + }; 345 + 354 346 dpll_iva_ck: dpll_iva_ck { 355 347 #clock-cells = <0>; 356 348 compatible = "ti,omap4-dpll-clock"; 357 - clocks = <&sys_clkin1>, <&iva_dpll_hs_clk_div>; 349 + clocks = <&sys_clkin1>, <&dpll_iva_byp_mux>; 358 350 reg = <0x01a0>, <0x01a4>, <0x01ac>, <0x01a8>; 359 351 }; 360 352 ··· 385 361 clock-div = <1>; 386 362 }; 387 363 364 + dpll_gpu_byp_mux: dpll_gpu_byp_mux { 365 + #clock-cells = <0>; 366 + compatible = "ti,mux-clock"; 367 + clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 368 + ti,bit-shift = <23>; 369 + reg = <0x02e4>; 370 + }; 371 + 388 372 dpll_gpu_ck: dpll_gpu_ck { 389 373 #clock-cells = <0>; 390 374 compatible = "ti,omap4-dpll-clock"; 391 - clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 375 + clocks = <&sys_clkin1>, <&dpll_gpu_byp_mux>; 392 376 reg = <0x02d8>, <0x02dc>, <0x02e4>, <0x02e0>; 393 377 }; 394 378 ··· 430 398 clock-div = <1>; 431 399 }; 432 400 401 + dpll_ddr_byp_mux: dpll_ddr_byp_mux { 402 + #clock-cells = <0>; 403 + compatible = "ti,mux-clock"; 404 + clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 405 + ti,bit-shift = <23>; 406 + reg = <0x021c>; 407 + }; 408 + 433 409 dpll_ddr_ck: dpll_ddr_ck { 434 410 #clock-cells = <0>; 435 411 compatible = "ti,omap4-dpll-clock"; 436 - clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 412 + clocks = <&sys_clkin1>, <&dpll_ddr_byp_mux>; 437 413 reg = <0x0210>, <0x0214>, <0x021c>, <0x0218>; 438 414 }; 439 415 ··· 456 416 ti,invert-autoidle-bit; 457 417 }; 458 418 419 + dpll_gmac_byp_mux: dpll_gmac_byp_mux { 420 + #clock-cells = <0>; 421 + compatible = "ti,mux-clock"; 422 + clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 423 + ti,bit-shift = <23>; 424 + reg = <0x02b4>; 425 + }; 426 + 459 427 dpll_gmac_ck: dpll_gmac_ck { 460 428 #clock-cells = <0>; 461 429 compatible = "ti,omap4-dpll-clock"; 462 - clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 430 + clocks = <&sys_clkin1>, <&dpll_gmac_byp_mux>; 463 431 reg = <0x02a8>, <0x02ac>, <0x02b4>, <0x02b0>; 464 432 }; 465 433 ··· 530 482 clock-div = <1>; 531 483 }; 532 484 485 + dpll_eve_byp_mux: dpll_eve_byp_mux { 486 + #clock-cells = <0>; 487 + compatible = "ti,mux-clock"; 488 + clocks = <&sys_clkin1>, <&eve_dpll_hs_clk_div>; 489 + ti,bit-shift = <23>; 490 + reg = <0x0290>; 491 + }; 492 + 533 493 dpll_eve_ck: dpll_eve_ck { 534 494 #clock-cells = <0>; 535 495 compatible = "ti,omap4-dpll-clock"; 536 - clocks = <&sys_clkin1>, <&eve_dpll_hs_clk_div>; 496 + clocks = <&sys_clkin1>, <&dpll_eve_byp_mux>; 537 497 reg = <0x0284>, <0x0288>, <0x0290>, <0x028c>; 538 498 }; 539 499 ··· 1305 1249 clock-div = <1>; 1306 1250 }; 1307 1251 1252 + dpll_per_byp_mux: dpll_per_byp_mux { 1253 + #clock-cells = <0>; 1254 + compatible = "ti,mux-clock"; 1255 + clocks = <&sys_clkin1>, <&per_dpll_hs_clk_div>; 1256 + ti,bit-shift = <23>; 1257 + reg = <0x014c>; 1258 + }; 1259 + 1308 1260 dpll_per_ck: dpll_per_ck { 1309 1261 #clock-cells = <0>; 1310 1262 compatible = "ti,omap4-dpll-clock"; 1311 - clocks = <&sys_clkin1>, <&per_dpll_hs_clk_div>; 1263 + clocks = <&sys_clkin1>, <&dpll_per_byp_mux>; 1312 1264 reg = <0x0140>, <0x0144>, <0x014c>, <0x0148>; 1313 1265 }; 1314 1266 ··· 1339 1275 clock-div = <1>; 1340 1276 }; 1341 1277 1278 + dpll_usb_byp_mux: dpll_usb_byp_mux { 1279 + #clock-cells = <0>; 1280 + compatible = "ti,mux-clock"; 1281 + clocks = <&sys_clkin1>, <&usb_dpll_hs_clk_div>; 1282 + ti,bit-shift = <23>; 1283 + reg = <0x018c>; 1284 + }; 1285 + 1342 1286 dpll_usb_ck: dpll_usb_ck { 1343 1287 #clock-cells = <0>; 1344 1288 compatible = "ti,omap4-dpll-j-type-clock"; 1345 - clocks = <&sys_clkin1>, <&usb_dpll_hs_clk_div>; 1289 + clocks = <&sys_clkin1>, <&dpll_usb_byp_mux>; 1346 1290 reg = <0x0180>, <0x0184>, <0x018c>, <0x0188>; 1347 1291 }; 1348 1292
+2
arch/arm/boot/dts/exynos3250.dtsi
··· 18 18 */ 19 19 20 20 #include "skeleton.dtsi" 21 + #include "exynos4-cpu-thermal.dtsi" 21 22 #include <dt-bindings/clock/exynos3250.h> 22 23 23 24 / { ··· 194 193 interrupts = <0 216 0>; 195 194 clocks = <&cmu CLK_TMU_APBIF>; 196 195 clock-names = "tmu_apbif"; 196 + #include "exynos4412-tmu-sensor-conf.dtsi" 197 197 status = "disabled"; 198 198 }; 199 199
+52
arch/arm/boot/dts/exynos4-cpu-thermal.dtsi
··· 1 + /* 2 + * Device tree sources for Exynos4 thermal zone 3 + * 4 + * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + 12 + #include <dt-bindings/thermal/thermal.h> 13 + 14 + / { 15 + thermal-zones { 16 + cpu_thermal: cpu-thermal { 17 + thermal-sensors = <&tmu 0>; 18 + polling-delay-passive = <0>; 19 + polling-delay = <0>; 20 + trips { 21 + cpu_alert0: cpu-alert-0 { 22 + temperature = <70000>; /* millicelsius */ 23 + hysteresis = <10000>; /* millicelsius */ 24 + type = "active"; 25 + }; 26 + cpu_alert1: cpu-alert-1 { 27 + temperature = <95000>; /* millicelsius */ 28 + hysteresis = <10000>; /* millicelsius */ 29 + type = "active"; 30 + }; 31 + cpu_alert2: cpu-alert-2 { 32 + temperature = <110000>; /* millicelsius */ 33 + hysteresis = <10000>; /* millicelsius */ 34 + type = "active"; 35 + }; 36 + cpu_crit0: cpu-crit-0 { 37 + temperature = <120000>; /* millicelsius */ 38 + hysteresis = <0>; /* millicelsius */ 39 + type = "critical"; 40 + }; 41 + }; 42 + cooling-maps { 43 + map0 { 44 + trip = <&cpu_alert0>; 45 + }; 46 + map1 { 47 + trip = <&cpu_alert1>; 48 + }; 49 + }; 50 + }; 51 + }; 52 + };
+45
arch/arm/boot/dts/exynos4.dtsi
··· 38 38 i2c5 = &i2c_5; 39 39 i2c6 = &i2c_6; 40 40 i2c7 = &i2c_7; 41 + i2c8 = &i2c_8; 41 42 csis0 = &csis_0; 42 43 csis1 = &csis_1; 43 44 fimc0 = &fimc_0; ··· 105 104 compatible = "samsung,exynos4210-pd"; 106 105 reg = <0x10023C20 0x20>; 107 106 #power-domain-cells = <0>; 107 + power-domains = <&pd_lcd0>; 108 108 }; 109 109 110 110 pd_cam: cam-power-domain@10023C00 { ··· 556 554 status = "disabled"; 557 555 }; 558 556 557 + i2c_8: i2c@138E0000 { 558 + #address-cells = <1>; 559 + #size-cells = <0>; 560 + compatible = "samsung,s3c2440-hdmiphy-i2c"; 561 + reg = <0x138E0000 0x100>; 562 + interrupts = <0 93 0>; 563 + clocks = <&clock CLK_I2C_HDMI>; 564 + clock-names = "i2c"; 565 + status = "disabled"; 566 + 567 + hdmi_i2c_phy: hdmiphy@38 { 568 + compatible = "exynos4210-hdmiphy"; 569 + reg = <0x38>; 570 + }; 571 + }; 572 + 559 573 spi_0: spi@13920000 { 560 574 compatible = "samsung,exynos4210-spi"; 561 575 reg = <0x13920000 0x100>; ··· 678 660 clock-names = "sclk_fimd", "fimd"; 679 661 power-domains = <&pd_lcd0>; 680 662 samsung,sysreg = <&sys_reg>; 663 + status = "disabled"; 664 + }; 665 + 666 + tmu: tmu@100C0000 { 667 + #include "exynos4412-tmu-sensor-conf.dtsi" 668 + }; 669 + 670 + hdmi: hdmi@12D00000 { 671 + compatible = "samsung,exynos4210-hdmi"; 672 + reg = <0x12D00000 0x70000>; 673 + interrupts = <0 92 0>; 674 + clock-names = "hdmi", "sclk_hdmi", "sclk_pixel", "sclk_hdmiphy", 675 + "mout_hdmi"; 676 + clocks = <&clock CLK_HDMI>, <&clock CLK_SCLK_HDMI>, 677 + <&clock CLK_SCLK_PIXEL>, <&clock CLK_SCLK_HDMIPHY>, 678 + <&clock CLK_MOUT_HDMI>; 679 + phy = <&hdmi_i2c_phy>; 680 + power-domains = <&pd_tv>; 681 + samsung,syscon-phandle = <&pmu_system_controller>; 682 + status = "disabled"; 683 + }; 684 + 685 + mixer: mixer@12C10000 { 686 + compatible = "samsung,exynos4210-mixer"; 687 + interrupts = <0 91 0>; 688 + reg = <0x12C10000 0x2100>, <0x12c00000 0x300>; 689 + power-domains = <&pd_tv>; 681 690 status = "disabled"; 682 691 }; 683 692
+19
arch/arm/boot/dts/exynos4210-trats.dts
··· 426 426 status = "okay"; 427 427 }; 428 428 429 + tmu@100C0000 { 430 + status = "okay"; 431 + }; 432 + 433 + thermal-zones { 434 + cpu_thermal: cpu-thermal { 435 + cooling-maps { 436 + map0 { 437 + /* Corresponds to 800MHz at freq_table */ 438 + cooling-device = <&cpu0 2 2>; 439 + }; 440 + map1 { 441 + /* Corresponds to 200MHz at freq_table */ 442 + cooling-device = <&cpu0 4 4>; 443 + }; 444 + }; 445 + }; 446 + }; 447 + 429 448 camera { 430 449 pinctrl-names = "default"; 431 450 pinctrl-0 = <>;
+57
arch/arm/boot/dts/exynos4210-universal_c210.dts
··· 505 505 assigned-clock-rates = <0>, <160000000>; 506 506 }; 507 507 }; 508 + 509 + hdmi_en: voltage-regulator-hdmi-5v { 510 + compatible = "regulator-fixed"; 511 + regulator-name = "HDMI_5V"; 512 + regulator-min-microvolt = <5000000>; 513 + regulator-max-microvolt = <5000000>; 514 + gpio = <&gpe0 1 0>; 515 + enable-active-high; 516 + }; 517 + 518 + hdmi_ddc: i2c-ddc { 519 + compatible = "i2c-gpio"; 520 + gpios = <&gpe4 2 0 &gpe4 3 0>; 521 + i2c-gpio,delay-us = <100>; 522 + #address-cells = <1>; 523 + #size-cells = <0>; 524 + 525 + pinctrl-0 = <&i2c_ddc_bus>; 526 + pinctrl-names = "default"; 527 + status = "okay"; 528 + }; 529 + 530 + mixer@12C10000 { 531 + status = "okay"; 532 + }; 533 + 534 + hdmi@12D00000 { 535 + hpd-gpio = <&gpx3 7 0>; 536 + pinctrl-names = "default"; 537 + pinctrl-0 = <&hdmi_hpd>; 538 + hdmi-en-supply = <&hdmi_en>; 539 + vdd-supply = <&ldo3_reg>; 540 + vdd_osc-supply = <&ldo4_reg>; 541 + vdd_pll-supply = <&ldo3_reg>; 542 + ddc = <&hdmi_ddc>; 543 + status = "okay"; 544 + }; 545 + 546 + i2c@138E0000 { 547 + status = "okay"; 548 + }; 549 + }; 550 + 551 + &pinctrl_1 { 552 + hdmi_hpd: hdmi-hpd { 553 + samsung,pins = "gpx3-7"; 554 + samsung,pin-pud = <0>; 555 + }; 556 + }; 557 + 558 + &pinctrl_0 { 559 + i2c_ddc_bus: i2c-ddc-bus { 560 + samsung,pins = "gpe4-2", "gpe4-3"; 561 + samsung,pin-function = <2>; 562 + samsung,pin-pud = <3>; 563 + samsung,pin-drv = <0>; 564 + }; 508 565 }; 509 566 510 567 &mdma1 {
+36 -2
arch/arm/boot/dts/exynos4210.dtsi
··· 21 21 22 22 #include "exynos4.dtsi" 23 23 #include "exynos4210-pinctrl.dtsi" 24 + #include "exynos4-cpu-thermal.dtsi" 24 25 25 26 / { 26 27 compatible = "samsung,exynos4210", "samsung,exynos4"; ··· 36 35 #address-cells = <1>; 37 36 #size-cells = <0>; 38 37 39 - cpu@900 { 38 + cpu0: cpu@900 { 40 39 device_type = "cpu"; 41 40 compatible = "arm,cortex-a9"; 42 41 reg = <0x900>; 42 + cooling-min-level = <4>; 43 + cooling-max-level = <2>; 44 + #cooling-cells = <2>; /* min followed by max */ 43 45 }; 44 46 45 47 cpu@901 { ··· 157 153 reg = <0x03860000 0x1000>; 158 154 }; 159 155 160 - tmu@100C0000 { 156 + tmu: tmu@100C0000 { 161 157 compatible = "samsung,exynos4210-tmu"; 162 158 interrupt-parent = <&combiner>; 163 159 reg = <0x100C0000 0x100>; 164 160 interrupts = <2 4>; 165 161 clocks = <&clock CLK_TMU_APBIF>; 166 162 clock-names = "tmu_apbif"; 163 + samsung,tmu_gain = <15>; 164 + samsung,tmu_reference_voltage = <7>; 167 165 status = "disabled"; 166 + }; 167 + 168 + thermal-zones { 169 + cpu_thermal: cpu-thermal { 170 + polling-delay-passive = <0>; 171 + polling-delay = <0>; 172 + thermal-sensors = <&tmu 0>; 173 + 174 + trips { 175 + cpu_alert0: cpu-alert-0 { 176 + temperature = <85000>; /* millicelsius */ 177 + }; 178 + cpu_alert1: cpu-alert-1 { 179 + temperature = <100000>; /* millicelsius */ 180 + }; 181 + cpu_alert2: cpu-alert-2 { 182 + temperature = <110000>; /* millicelsius */ 183 + }; 184 + }; 185 + }; 168 186 }; 169 187 170 188 g2d@12800000 { ··· 227 201 samsung,mainscaler-ext; 228 202 samsung,lcd-wb; 229 203 }; 204 + }; 205 + 206 + mixer: mixer@12C10000 { 207 + clock-names = "mixer", "hdmi", "sclk_hdmi", "vp", "mout_mixer", 208 + "sclk_mixer"; 209 + clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>, 210 + <&clock CLK_SCLK_HDMI>, <&clock CLK_VP>, 211 + <&clock CLK_MOUT_MIXER>, <&clock CLK_SCLK_MIXER>; 230 212 }; 231 213 232 214 ppmu_lcd1: ppmu_lcd1@12240000 {
+4 -1
arch/arm/boot/dts/exynos4212.dtsi
··· 26 26 #address-cells = <1>; 27 27 #size-cells = <0>; 28 28 29 - cpu@A00 { 29 + cpu0: cpu@A00 { 30 30 device_type = "cpu"; 31 31 compatible = "arm,cortex-a9"; 32 32 reg = <0xA00>; 33 + cooling-min-level = <13>; 34 + cooling-max-level = <7>; 35 + #cooling-cells = <2>; /* min followed by max */ 33 36 }; 34 37 35 38 cpu@A01 {
+64
arch/arm/boot/dts/exynos4412-odroid-common.dtsi
··· 249 249 regulator-always-on; 250 250 }; 251 251 252 + ldo8_reg: ldo@8 { 253 + regulator-compatible = "LDO8"; 254 + regulator-name = "VDD10_HDMI_1.0V"; 255 + regulator-min-microvolt = <1000000>; 256 + regulator-max-microvolt = <1000000>; 257 + }; 258 + 259 + ldo10_reg: ldo@10 { 260 + regulator-compatible = "LDO10"; 261 + regulator-name = "VDDQ_MIPIHSI_1.8V"; 262 + regulator-min-microvolt = <1800000>; 263 + regulator-max-microvolt = <1800000>; 264 + }; 265 + 252 266 ldo11_reg: LDO11 { 253 267 regulator-name = "VDD18_ABB1_1.8V"; 254 268 regulator-min-microvolt = <1800000>; ··· 425 411 ehci: ehci@12580000 { 426 412 status = "okay"; 427 413 }; 414 + 415 + tmu@100C0000 { 416 + vtmu-supply = <&ldo10_reg>; 417 + status = "okay"; 418 + }; 419 + 420 + thermal-zones { 421 + cpu_thermal: cpu-thermal { 422 + cooling-maps { 423 + map0 { 424 + /* Corresponds to 800MHz at freq_table */ 425 + cooling-device = <&cpu0 7 7>; 426 + }; 427 + map1 { 428 + /* Corresponds to 200MHz at freq_table */ 429 + cooling-device = <&cpu0 13 13>; 430 + }; 431 + }; 432 + }; 433 + }; 434 + 435 + mixer: mixer@12C10000 { 436 + status = "okay"; 437 + }; 438 + 439 + hdmi@12D00000 { 440 + hpd-gpio = <&gpx3 7 0>; 441 + pinctrl-names = "default"; 442 + pinctrl-0 = <&hdmi_hpd>; 443 + vdd-supply = <&ldo8_reg>; 444 + vdd_osc-supply = <&ldo10_reg>; 445 + vdd_pll-supply = <&ldo8_reg>; 446 + ddc = <&hdmi_ddc>; 447 + status = "okay"; 448 + }; 449 + 450 + hdmi_ddc: i2c@13880000 { 451 + status = "okay"; 452 + pinctrl-names = "default"; 453 + pinctrl-0 = <&i2c2_bus>; 454 + }; 455 + 456 + i2c@138E0000 { 457 + status = "okay"; 458 + }; 428 459 }; 429 460 430 461 &pinctrl_1 { ··· 483 424 samsung,pin-function = <0>; 484 425 samsung,pin-pud = <0>; 485 426 samsung,pin-drv = <0>; 427 + }; 428 + 429 + hdmi_hpd: hdmi-hpd { 430 + samsung,pins = "gpx3-7"; 431 + samsung,pin-pud = <1>; 486 432 }; 487 433 };
+24
arch/arm/boot/dts/exynos4412-tmu-sensor-conf.dtsi
··· 1 + /* 2 + * Device tree sources for Exynos4412 TMU sensor configuration 3 + * 4 + * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + 12 + #include <dt-bindings/thermal/thermal_exynos.h> 13 + 14 + #thermal-sensor-cells = <0>; 15 + samsung,tmu_gain = <8>; 16 + samsung,tmu_reference_voltage = <16>; 17 + samsung,tmu_noise_cancel_mode = <4>; 18 + samsung,tmu_efuse_value = <55>; 19 + samsung,tmu_min_efuse_value = <40>; 20 + samsung,tmu_max_efuse_value = <100>; 21 + samsung,tmu_first_point_trim = <25>; 22 + samsung,tmu_second_point_trim = <85>; 23 + samsung,tmu_default_temp_offset = <50>; 24 + samsung,tmu_cal_type = <TYPE_ONE_POINT_TRIMMING>;
+15
arch/arm/boot/dts/exynos4412-trats2.dts
··· 927 927 pulldown-ohm = <100000>; /* 100K */ 928 928 io-channels = <&adc 2>; /* Battery temperature */ 929 929 }; 930 + 931 + thermal-zones { 932 + cpu_thermal: cpu-thermal { 933 + cooling-maps { 934 + map0 { 935 + /* Corresponds to 800MHz at freq_table */ 936 + cooling-device = <&cpu0 7 7>; 937 + }; 938 + map1 { 939 + /* Corresponds to 200MHz at freq_table */ 940 + cooling-device = <&cpu0 13 13>; 941 + }; 942 + }; 943 + }; 944 + }; 930 945 }; 931 946 932 947 &pmu_system_controller {
+4 -1
arch/arm/boot/dts/exynos4412.dtsi
··· 26 26 #address-cells = <1>; 27 27 #size-cells = <0>; 28 28 29 - cpu@A00 { 29 + cpu0: cpu@A00 { 30 30 device_type = "cpu"; 31 31 compatible = "arm,cortex-a9"; 32 32 reg = <0xA00>; 33 + cooling-min-level = <13>; 34 + cooling-max-level = <7>; 35 + #cooling-cells = <2>; /* min followed by max */ 33 36 }; 34 37 35 38 cpu@A01 {
+12
arch/arm/boot/dts/exynos4x12.dtsi
··· 19 19 20 20 #include "exynos4.dtsi" 21 21 #include "exynos4x12-pinctrl.dtsi" 22 + #include "exynos4-cpu-thermal.dtsi" 22 23 23 24 / { 24 25 aliases { ··· 297 296 clocks = <&clock 383>; 298 297 clock-names = "tmu_apbif"; 299 298 status = "disabled"; 299 + }; 300 + 301 + hdmi: hdmi@12D00000 { 302 + compatible = "samsung,exynos4212-hdmi"; 303 + }; 304 + 305 + mixer: mixer@12C10000 { 306 + compatible = "samsung,exynos4212-mixer"; 307 + clock-names = "mixer", "hdmi", "sclk_hdmi", "vp"; 308 + clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>, 309 + <&clock CLK_SCLK_HDMI>, <&clock CLK_VP>; 300 310 }; 301 311 };
+39 -5
arch/arm/boot/dts/exynos5250.dtsi
··· 20 20 #include <dt-bindings/clock/exynos5250.h> 21 21 #include "exynos5.dtsi" 22 22 #include "exynos5250-pinctrl.dtsi" 23 - 23 + #include "exynos4-cpu-thermal.dtsi" 24 24 #include <dt-bindings/clock/exynos-audss-clk.h> 25 25 26 26 / { ··· 58 58 #address-cells = <1>; 59 59 #size-cells = <0>; 60 60 61 - cpu@0 { 61 + cpu0: cpu@0 { 62 62 device_type = "cpu"; 63 63 compatible = "arm,cortex-a15"; 64 64 reg = <0>; 65 65 clock-frequency = <1700000000>; 66 + cooling-min-level = <15>; 67 + cooling-max-level = <9>; 68 + #cooling-cells = <2>; /* min followed by max */ 66 69 }; 67 70 cpu@1 { 68 71 device_type = "cpu"; ··· 102 99 pd_mfc: mfc-power-domain@10044040 { 103 100 compatible = "samsung,exynos4210-pd"; 104 101 reg = <0x10044040 0x20>; 102 + #power-domain-cells = <0>; 103 + }; 104 + 105 + pd_disp1: disp1-power-domain@100440A0 { 106 + compatible = "samsung,exynos4210-pd"; 107 + reg = <0x100440A0 0x20>; 105 108 #power-domain-cells = <0>; 106 109 }; 107 110 ··· 244 235 status = "disabled"; 245 236 }; 246 237 247 - tmu@10060000 { 238 + tmu: tmu@10060000 { 248 239 compatible = "samsung,exynos5250-tmu"; 249 240 reg = <0x10060000 0x100>; 250 241 interrupts = <0 65 0>; 251 242 clocks = <&clock CLK_TMU>; 252 243 clock-names = "tmu_apbif"; 244 + #include "exynos4412-tmu-sensor-conf.dtsi" 245 + }; 246 + 247 + thermal-zones { 248 + cpu_thermal: cpu-thermal { 249 + polling-delay-passive = <0>; 250 + polling-delay = <0>; 251 + thermal-sensors = <&tmu 0>; 252 + 253 + cooling-maps { 254 + map0 { 255 + /* Corresponds to 800MHz at freq_table */ 256 + cooling-device = <&cpu0 9 9>; 257 + }; 258 + map1 { 259 + /* Corresponds to 200MHz at freq_table */ 260 + cooling-device = <&cpu0 15 15>; 261 + }; 262 + }; 263 + }; 253 264 }; 254 265 255 266 serial@12C00000 { ··· 748 719 hdmi: hdmi { 749 720 compatible = "samsung,exynos4212-hdmi"; 750 721 reg = <0x14530000 0x70000>; 722 + power-domains = <&pd_disp1>; 751 723 interrupts = <0 95 0>; 752 724 clocks = <&clock CLK_HDMI>, <&clock CLK_SCLK_HDMI>, 753 725 <&clock CLK_SCLK_PIXEL>, <&clock CLK_SCLK_HDMIPHY>, ··· 761 731 mixer { 762 732 compatible = "samsung,exynos5250-mixer"; 763 733 reg = <0x14450000 0x10000>; 734 + power-domains = <&pd_disp1>; 764 735 interrupts = <0 94 0>; 765 - clocks = <&clock CLK_MIXER>, <&clock CLK_SCLK_HDMI>; 766 - clock-names = "mixer", "sclk_hdmi"; 736 + clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>, 737 + <&clock CLK_SCLK_HDMI>; 738 + clock-names = "mixer", "hdmi", "sclk_hdmi"; 767 739 }; 768 740 769 741 dp_phy: video-phy@10040720 { ··· 775 743 }; 776 744 777 745 dp: dp-controller@145B0000 { 746 + power-domains = <&pd_disp1>; 778 747 clocks = <&clock CLK_DP>; 779 748 clock-names = "dp"; 780 749 phys = <&dp_phy>; ··· 783 750 }; 784 751 785 752 fimd: fimd@14400000 { 753 + power-domains = <&pd_disp1>; 786 754 clocks = <&clock CLK_SCLK_FIMD1>, <&clock CLK_FIMD1>; 787 755 clock-names = "sclk_fimd", "fimd"; };
+35
arch/arm/boot/dts/exynos5420-trip-points.dtsi
··· 1 + /* 2 + * Device tree sources for default Exynos5420 thermal zone definition 3 + * 4 + * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + 12 + polling-delay-passive = <0>; 13 + polling-delay = <0>; 14 + trips { 15 + cpu-alert-0 { 16 + temperature = <85000>; /* millicelsius */ 17 + hysteresis = <10000>; /* millicelsius */ 18 + type = "active"; 19 + }; 20 + cpu-alert-1 { 21 + temperature = <103000>; /* millicelsius */ 22 + hysteresis = <10000>; /* millicelsius */ 23 + type = "active"; 24 + }; 25 + cpu-alert-2 { 26 + temperature = <110000>; /* millicelsius */ 27 + hysteresis = <10000>; /* millicelsius */ 28 + type = "active"; 29 + }; 30 + cpu-crit-0 { 31 + temperature = <120000>; /* millicelsius */ 32 + hysteresis = <0>; /* millicelsius */ 33 + type = "critical"; 34 + }; 35 + };
+31 -2
arch/arm/boot/dts/exynos5420.dtsi
··· 740 740 compatible = "samsung,exynos5420-mixer"; 741 741 reg = <0x14450000 0x10000>; 742 742 interrupts = <0 94 0>; 743 - clocks = <&clock CLK_MIXER>, <&clock CLK_SCLK_HDMI>; 744 - clock-names = "mixer", "sclk_hdmi"; 743 + clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>, 744 + <&clock CLK_SCLK_HDMI>; 745 + clock-names = "mixer", "hdmi", "sclk_hdmi"; 745 746 power-domains = <&disp_pd>; 746 747 }; 747 748 ··· 783 782 interrupts = <0 65 0>; 784 783 clocks = <&clock CLK_TMU>; 785 784 clock-names = "tmu_apbif"; 785 + #include "exynos4412-tmu-sensor-conf.dtsi" 786 786 }; 787 787 788 788 tmu_cpu1: tmu@10064000 { ··· 792 790 interrupts = <0 183 0>; 793 791 clocks = <&clock CLK_TMU>; 794 792 clock-names = "tmu_apbif"; 793 + #include "exynos4412-tmu-sensor-conf.dtsi" 795 794 }; 796 795 797 796 tmu_cpu2: tmu@10068000 { ··· 801 798 interrupts = <0 184 0>; 802 799 clocks = <&clock CLK_TMU>, <&clock CLK_TMU>; 803 800 clock-names = "tmu_apbif", "tmu_triminfo_apbif"; 801 + #include "exynos4412-tmu-sensor-conf.dtsi" 804 802 }; 805 803 806 804 tmu_cpu3: tmu@1006c000 { ··· 810 806 interrupts = <0 185 0>; 811 807 clocks = <&clock CLK_TMU>, <&clock CLK_TMU_GPU>; 812 808 clock-names = "tmu_apbif", "tmu_triminfo_apbif"; 809 + #include "exynos4412-tmu-sensor-conf.dtsi" 813 810 }; 814 811 815 812 tmu_gpu: tmu@100a0000 { ··· 819 814 interrupts = <0 215 0>; 820 815 clocks = <&clock CLK_TMU_GPU>, <&clock CLK_TMU>; 821 816 clock-names = "tmu_apbif", "tmu_triminfo_apbif"; 817 + #include "exynos4412-tmu-sensor-conf.dtsi" 818 + }; 819 + 820 + thermal-zones { 821 + cpu0_thermal: cpu0-thermal { 822 + thermal-sensors = <&tmu_cpu0>; 823 + #include "exynos5420-trip-points.dtsi" 824 + }; 825 + cpu1_thermal: cpu1-thermal { 826 + thermal-sensors = <&tmu_cpu1>; 827 + #include "exynos5420-trip-points.dtsi" 828 + }; 829 + cpu2_thermal: cpu2-thermal { 830 + thermal-sensors = <&tmu_cpu2>; 831 + #include "exynos5420-trip-points.dtsi" 832 + }; 833 + cpu3_thermal: cpu3-thermal { 834 + thermal-sensors = <&tmu_cpu3>; 835 + #include "exynos5420-trip-points.dtsi" 836 + }; 837 + gpu_thermal: gpu-thermal { 838 + thermal-sensors = <&tmu_gpu>; 839 + #include "exynos5420-trip-points.dtsi" 840 + }; 822 841 }; 823 842 824 843 watchdog: watchdog@101D0000 {
+24
arch/arm/boot/dts/exynos5440-tmu-sensor-conf.dtsi
··· 1 + /* 2 + * Device tree sources for Exynos5440 TMU sensor configuration 3 + * 4 + * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + 12 + #include <dt-bindings/thermal/thermal_exynos.h> 13 + 14 + #thermal-sensor-cells = <0>; 15 + samsung,tmu_gain = <5>; 16 + samsung,tmu_reference_voltage = <16>; 17 + samsung,tmu_noise_cancel_mode = <4>; 18 + samsung,tmu_efuse_value = <0x5d2d>; 19 + samsung,tmu_min_efuse_value = <16>; 20 + samsung,tmu_max_efuse_value = <76>; 21 + samsung,tmu_first_point_trim = <25>; 22 + samsung,tmu_second_point_trim = <70>; 23 + samsung,tmu_default_temp_offset = <25>; 24 + samsung,tmu_cal_type = <TYPE_ONE_POINT_TRIMMING>;
+25
arch/arm/boot/dts/exynos5440-trip-points.dtsi
··· 1 + /* 2 + * Device tree sources for default Exynos5440 thermal zone definition 3 + * 4 + * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + 12 + polling-delay-passive = <0>; 13 + polling-delay = <0>; 14 + trips { 15 + cpu-alert-0 { 16 + temperature = <100000>; /* millicelsius */ 17 + hysteresis = <0>; /* millicelsius */ 18 + type = "active"; 19 + }; 20 + cpu-crit-0 { 21 + temperature = <1050000>; /* millicelsius */ 22 + hysteresis = <0>; /* millicelsius */ 23 + type = "critical"; 24 + }; 25 + };
+18
arch/arm/boot/dts/exynos5440.dtsi
··· 219 219 interrupts = <0 58 0>; 220 220 clocks = <&clock CLK_B_125>; 221 221 clock-names = "tmu_apbif"; 222 + #include "exynos5440-tmu-sensor-conf.dtsi" 222 223 }; 223 224 224 225 tmuctrl_1: tmuctrl@16011C { ··· 228 227 interrupts = <0 58 0>; 229 228 clocks = <&clock CLK_B_125>; 230 229 clock-names = "tmu_apbif"; 230 + #include "exynos5440-tmu-sensor-conf.dtsi" 231 231 }; 232 232 233 233 tmuctrl_2: tmuctrl@160120 { ··· 237 235 interrupts = <0 58 0>; 238 236 clocks = <&clock CLK_B_125>; 239 237 clock-names = "tmu_apbif"; 238 + #include "exynos5440-tmu-sensor-conf.dtsi" 239 + }; 240 + 241 + thermal-zones { 242 + cpu0_thermal: cpu0-thermal { 243 + thermal-sensors = <&tmuctrl_0>; 244 + #include "exynos5440-trip-points.dtsi" 245 + }; 246 + cpu1_thermal: cpu1-thermal { 247 + thermal-sensors = <&tmuctrl_1>; 248 + #include "exynos5440-trip-points.dtsi" 249 + }; 250 + cpu2_thermal: cpu2-thermal { 251 + thermal-sensors = <&tmuctrl_2>; 252 + #include "exynos5440-trip-points.dtsi" 253 + }; 240 254 }; 241 255 242 256 sata@210000 {
+2
arch/arm/boot/dts/imx6qdl-sabresd.dtsi
··· 35 35 regulator-max-microvolt = <5000000>; 36 36 gpio = <&gpio3 22 0>; 37 37 enable-active-high; 38 + vin-supply = <&swbst_reg>; 38 39 }; 39 40 40 41 reg_usb_h1_vbus: regulator@1 { ··· 46 45 regulator-max-microvolt = <5000000>; 47 46 gpio = <&gpio1 29 0>; 48 47 enable-active-high; 48 + vin-supply = <&swbst_reg>; 49 49 }; 50 50 51 51 reg_audio: regulator@2 {
+2
arch/arm/boot/dts/imx6sl-evk.dts
··· 52 52 regulator-max-microvolt = <5000000>; 53 53 gpio = <&gpio4 0 0>; 54 54 enable-active-high; 55 + vin-supply = <&swbst_reg>; 55 56 }; 56 57 57 58 reg_usb_otg2_vbus: regulator@1 { ··· 63 62 regulator-max-microvolt = <5000000>; 64 63 gpio = <&gpio4 2 0>; 65 64 enable-active-high; 65 + vin-supply = <&swbst_reg>; 66 66 }; 67 67 68 68 reg_aud3v: regulator@2 {
+4
arch/arm/boot/dts/omap3.dtsi
··· 92 92 ti,hwmods = "aes"; 93 93 reg = <0x480c5000 0x50>; 94 94 interrupts = <0>; 95 + dmas = <&sdma 65 &sdma 66>; 96 + dma-names = "tx", "rx"; 95 97 }; 96 98 97 99 prm: prm@48306000 { ··· 552 550 ti,hwmods = "sham"; 553 551 reg = <0x480c3000 0x64>; 554 552 interrupts = <49>; 553 + dmas = <&sdma 69>; 554 + dma-names = "rx"; 555 555 }; 556 556 557 557 smartreflex_core: smartreflex@480cb000 {
+1 -1
arch/arm/boot/dts/omap5-core-thermal.dtsi
··· 13 13 14 14 core_thermal: core_thermal { 15 15 polling-delay-passive = <250>; /* milliseconds */ 16 - polling-delay = <1000>; /* milliseconds */ 16 + polling-delay = <500>; /* milliseconds */ 17 17 18 18 /* sensor ID */ 19 19 thermal-sensors = <&bandgap 2>;
+1 -1
arch/arm/boot/dts/omap5-gpu-thermal.dtsi
··· 13 13 14 14 gpu_thermal: gpu_thermal { 15 15 polling-delay-passive = <250>; /* milliseconds */ 16 - polling-delay = <1000>; /* milliseconds */ 16 + polling-delay = <500>; /* milliseconds */ 17 17 18 18 /* sensor ID */ 19 19 thermal-sensors = <&bandgap 1>;
+4
arch/arm/boot/dts/omap5.dtsi
··· 1079 1079 }; 1080 1080 }; 1081 1081 1082 + &cpu_thermal { 1083 + polling-delay = <500>; /* milliseconds */ 1084 + }; 1085 + 1082 1086 /include/ "omap54xx-clocks.dtsi"
+37 -4
arch/arm/boot/dts/omap54xx-clocks.dtsi
··· 167 167 ti,index-starts-at-one; 168 168 }; 169 169 170 + dpll_core_byp_mux: dpll_core_byp_mux { 171 + #clock-cells = <0>; 172 + compatible = "ti,mux-clock"; 173 + clocks = <&sys_clkin>, <&dpll_abe_m3x2_ck>; 174 + ti,bit-shift = <23>; 175 + reg = <0x012c>; 176 + }; 177 + 170 178 dpll_core_ck: dpll_core_ck { 171 179 #clock-cells = <0>; 172 180 compatible = "ti,omap4-dpll-core-clock"; 173 - clocks = <&sys_clkin>, <&dpll_abe_m3x2_ck>; 181 + clocks = <&sys_clkin>, <&dpll_core_byp_mux>; 174 182 reg = <0x0120>, <0x0124>, <0x012c>, <0x0128>; 175 183 }; 176 184 ··· 302 294 clock-div = <1>; 303 295 }; 304 296 297 + dpll_iva_byp_mux: dpll_iva_byp_mux { 298 + #clock-cells = <0>; 299 + compatible = "ti,mux-clock"; 300 + clocks = <&sys_clkin>, <&iva_dpll_hs_clk_div>; 301 + ti,bit-shift = <23>; 302 + reg = <0x01ac>; 303 + }; 304 + 305 305 dpll_iva_ck: dpll_iva_ck { 306 306 #clock-cells = <0>; 307 307 compatible = "ti,omap4-dpll-clock"; 308 - clocks = <&sys_clkin>, <&iva_dpll_hs_clk_div>; 308 + clocks = <&sys_clkin>, <&dpll_iva_byp_mux>; 309 309 reg = <0x01a0>, <0x01a4>, <0x01ac>, <0x01a8>; 310 310 }; 311 311 ··· 615 599 }; 616 600 }; 617 601 &cm_core_clocks { 602 + 603 + dpll_per_byp_mux: dpll_per_byp_mux { 604 + #clock-cells = <0>; 605 + compatible = "ti,mux-clock"; 606 + clocks = <&sys_clkin>, <&per_dpll_hs_clk_div>; 607 + ti,bit-shift = <23>; 608 + reg = <0x014c>; 609 + }; 610 + 618 611 dpll_per_ck: dpll_per_ck { 619 612 #clock-cells = <0>; 620 613 compatible = "ti,omap4-dpll-clock"; 621 - clocks = <&sys_clkin>, <&per_dpll_hs_clk_div>; 614 + clocks = <&sys_clkin>, <&dpll_per_byp_mux>; 622 615 reg = <0x0140>, <0x0144>, <0x014c>, <0x0148>; 623 616 }; 624 617 ··· 739 714 ti,index-starts-at-one; 740 715 }; 741 716 717 + dpll_usb_byp_mux: dpll_usb_byp_mux { 718 + #clock-cells = <0>; 719 + compatible = "ti,mux-clock"; 720 + clocks = <&sys_clkin>, <&usb_dpll_hs_clk_div>; 721 + ti,bit-shift = <23>; 722 + reg = <0x018c>; 723 + }; 724 + 742 725 dpll_usb_ck: dpll_usb_ck { 743 726 #clock-cells = <0>; 744 727 compatible = "ti,omap4-dpll-j-type-clock"; 745 - clocks = <&sys_clkin>, <&usb_dpll_hs_clk_div>; 728 + clocks = <&sys_clkin>, <&dpll_usb_byp_mux>; 746 729 reg = <0x0180>, <0x0184>, <0x018c>, <0x0188>; 747 730 }; 748 731
+1
arch/arm/boot/dts/rk3288.dtsi
··· 411 411 "mac_clk_rx", "mac_clk_tx", 412 412 "clk_mac_ref", "clk_mac_refout", 413 413 "aclk_mac", "pclk_mac"; 414 + status = "disabled"; 414 415 }; 415 416 416 417 usb_host0_ehci: usb@ff500000 {
+1 -2
arch/arm/boot/dts/sama5d3.dtsi
··· 1248 1248 atmel,watchdog-type = "hardware"; 1249 1249 atmel,reset-type = "all"; 1250 1250 atmel,dbg-halt; 1251 - atmel,idle-halt; 1252 1251 status = "disabled"; 1253 1252 }; 1254 1253 ··· 1415 1416 compatible = "atmel,at91sam9g45-ehci", "usb-ehci"; 1416 1417 reg = <0x00700000 0x100000>; 1417 1418 interrupts = <32 IRQ_TYPE_LEVEL_HIGH 2>; 1418 - clocks = <&usb>, <&uhphs_clk>, <&uhpck>; 1419 + clocks = <&utmi>, <&uhphs_clk>, <&uhpck>; 1419 1420 clock-names = "usb_clk", "ehci_clk", "uhpck"; 1420 1421 status = "disabled"; 1421 1422 };
+5 -4
arch/arm/boot/dts/sama5d4.dtsi
··· 66 66 gpio4 = &pioE; 67 67 tcb0 = &tcb0; 68 68 tcb1 = &tcb1; 69 + i2c0 = &i2c0; 69 70 i2c2 = &i2c2; 70 71 }; 71 72 cpus { ··· 260 259 compatible = "atmel,at91sam9g45-ehci", "usb-ehci"; 261 260 reg = <0x00600000 0x100000>; 262 261 interrupts = <46 IRQ_TYPE_LEVEL_HIGH 2>; 263 - clocks = <&usb>, <&uhphs_clk>, <&uhpck>; 262 + clocks = <&utmi>, <&uhphs_clk>, <&uhpck>; 264 263 clock-names = "usb_clk", "ehci_clk", "uhpck"; 265 264 status = "disabled"; 266 265 }; ··· 462 461 463 462 lcdck: lcdck { 464 463 #clock-cells = <0>; 465 - reg = <4>; 466 - clocks = <&smd>; 464 + reg = <3>; 465 + clocks = <&mck>; 467 466 }; 468 467 469 468 smdck: smdck { ··· 771 770 reg = <50>; 772 771 }; 773 772 774 - lcd_clk: lcd_clk { 773 + lcdc_clk: lcdc_clk { 775 774 #clock-cells = <0>; 776 775 reg = <51>; 777 776 };
+7 -1
arch/arm/boot/dts/socfpga.dtsi
··· 660 660 #address-cells = <1>; 661 661 #size-cells = <0>; 662 662 reg = <0xfff01000 0x1000>; 663 - interrupts = <0 156 4>; 663 + interrupts = <0 155 4>; 664 664 num-cs = <4>; 665 665 clocks = <&spi_m_clk>; 666 666 status = "disabled"; ··· 713 713 reg-shift = <2>; 714 714 reg-io-width = <4>; 715 715 clocks = <&l4_sp_clk>; 716 + dmas = <&pdma 28>, 717 + <&pdma 29>; 718 + dma-names = "tx", "rx"; 716 719 }; 717 720 718 721 uart1: serial1@ffc03000 { ··· 725 722 reg-shift = <2>; 726 723 reg-io-width = <4>; 727 724 clocks = <&l4_sp_clk>; 725 + dmas = <&pdma 30>, 726 + <&pdma 31>; 727 + dma-names = "tx", "rx"; 728 728 }; 729 729 730 730 rst: rstmgr@ffd05000 {
+16
arch/arm/boot/dts/sun4i-a10-olinuxino-lime.dts
··· 56 56 model = "Olimex A10-OLinuXino-LIME"; 57 57 compatible = "olimex,a10-olinuxino-lime", "allwinner,sun4i-a10"; 58 58 59 + cpus { 60 + cpu0: cpu@0 { 61 + /* 62 + * The A10-Lime is known to be unstable 63 + * when running at 1008 MHz 64 + */ 65 + operating-points = < 66 + /* kHz uV */ 67 + 912000 1350000 68 + 864000 1300000 69 + 624000 1250000 70 + >; 71 + cooling-max-level = <2>; 72 + }; 73 + }; 74 + 59 75 soc@01c00000 { 60 76 emac: ethernet@01c0b000 { 61 77 pinctrl-names = "default";
+1 -2
arch/arm/boot/dts/sun4i-a10.dtsi
··· 75 75 clock-latency = <244144>; /* 8 32k periods */ 76 76 operating-points = < 77 77 /* kHz uV */ 78 - 1056000 1500000 79 78 1008000 1400000 80 79 912000 1350000 81 80 864000 1300000 ··· 82 83 >; 83 84 #cooling-cells = <2>; 84 85 cooling-min-level = <0>; 85 - cooling-max-level = <4>; 86 + cooling-max-level = <3>; 86 87 }; 87 88 }; 88 89
+1 -2
arch/arm/boot/dts/sun5i-a13.dtsi
··· 47 47 clock-latency = <244144>; /* 8 32k periods */ 48 48 operating-points = < 49 49 /* kHz uV */ 50 - 1104000 1500000 51 50 1008000 1400000 52 51 912000 1350000 53 52 864000 1300000 ··· 56 57 >; 57 58 #cooling-cells = <2>; 58 59 cooling-min-level = <0>; 59 - cooling-max-level = <6>; 60 + cooling-max-level = <5>; 60 61 }; 61 62 }; 62 63
+1 -2
arch/arm/boot/dts/sun7i-a20.dtsi
··· 105 105 clock-latency = <244144>; /* 8 32k periods */ 106 106 operating-points = < 107 107 /* kHz uV */ 108 - 1008000 1450000 109 108 960000 1400000 110 109 912000 1400000 111 110 864000 1300000 ··· 115 116 >; 116 117 #cooling-cells = <2>; 117 118 cooling-min-level = <0>; 118 - cooling-max-level = <7>; 119 + cooling-max-level = <6>; 119 120 }; 120 121 121 122 cpu@1 {
+1
arch/arm/configs/at91_dt_defconfig
··· 70 70 CONFIG_BLK_DEV_SD=y 71 71 # CONFIG_SCSI_LOWLEVEL is not set 72 72 CONFIG_NETDEVICES=y 73 + CONFIG_ARM_AT91_ETHER=y 73 74 CONFIG_MACB=y 74 75 # CONFIG_NET_VENDOR_BROADCOM is not set 75 76 CONFIG_DM9000=y
+1 -1
arch/arm/configs/multi_v7_defconfig
··· 99 99 CONFIG_PCI_RCAR_GEN2_PCIE=y 100 100 CONFIG_PCIEPORTBUS=y 101 101 CONFIG_SMP=y 102 - CONFIG_NR_CPUS=8 102 + CONFIG_NR_CPUS=16 103 103 CONFIG_HIGHPTE=y 104 104 CONFIG_CMA=y 105 105 CONFIG_ARM_APPENDED_DTB=y
+1
arch/arm/configs/omap2plus_defconfig
··· 377 377 CONFIG_PWM_TWL_LED=m 378 378 CONFIG_OMAP_USB2=m 379 379 CONFIG_TI_PIPE3=y 380 + CONFIG_TWL4030_USB=m 380 381 CONFIG_EXT2_FS=y 381 382 CONFIG_EXT3_FS=y 382 383 # CONFIG_EXT3_FS_XATTR is not set
-2
arch/arm/configs/sama5_defconfig
··· 3 3 CONFIG_SYSVIPC=y 4 4 CONFIG_IRQ_DOMAIN_DEBUG=y 5 5 CONFIG_LOG_BUF_SHIFT=14 6 - CONFIG_SYSFS_DEPRECATED=y 7 - CONFIG_SYSFS_DEPRECATED_V2=y 8 6 CONFIG_BLK_DEV_INITRD=y 9 7 CONFIG_EMBEDDED=y 10 8 CONFIG_SLAB=y
+1
arch/arm/configs/sunxi_defconfig
··· 4 4 CONFIG_PERF_EVENTS=y 5 5 CONFIG_ARCH_SUNXI=y 6 6 CONFIG_SMP=y 7 + CONFIG_NR_CPUS=8 7 8 CONFIG_AEABI=y 8 9 CONFIG_HIGHMEM=y 9 10 CONFIG_HIGHPTE=y
+1 -1
arch/arm/configs/vexpress_defconfig
··· 118 118 CONFIG_USB=y 119 119 CONFIG_USB_ANNOUNCE_NEW_DEVICES=y 120 120 CONFIG_USB_MON=y 121 - CONFIG_USB_ISP1760_HCD=y 122 121 CONFIG_USB_STORAGE=y 122 + CONFIG_USB_ISP1760=y 123 123 CONFIG_MMC=y 124 124 CONFIG_MMC_ARMMMCI=y 125 125 CONFIG_NEW_LEDS=y
+8 -4
arch/arm/crypto/aesbs-core.S_shipped
··· 58 58 # define VFP_ABI_FRAME 0 59 59 # define BSAES_ASM_EXTENDED_KEY 60 60 # define XTS_CHAIN_TWEAK 61 - # define __ARM_ARCH__ 7 61 + # define __ARM_ARCH__ __LINUX_ARM_ARCH__ 62 + # define __ARM_MAX_ARCH__ 7 62 63 #endif 63 64 64 65 #ifdef __thumb__ 65 66 # define adrl adr 66 67 #endif 67 68 68 - #if __ARM_ARCH__>=7 69 + #if __ARM_MAX_ARCH__>=7 70 + .arch armv7-a 71 + .fpu neon 72 + 69 73 .text 70 74 .syntax unified @ ARMv7-capable assembler is expected to handle this 71 75 #ifdef __thumb2__ ··· 77 73 #else 78 74 .code 32 79 75 #endif 80 - 81 - .fpu neon 82 76 83 77 .type _bsaes_decrypt8,%function 84 78 .align 4 ··· 2097 2095 vld1.8 {q8}, [r0] @ initial tweak 2098 2096 adr r2, .Lxts_magic 2099 2097 2098 + #ifndef XTS_CHAIN_TWEAK 2100 2099 tst r9, #0xf @ if not multiple of 16 2101 2100 it ne @ Thumb2 thing, sanity check in ARM 2102 2101 subne r9, #0x10 @ subtract another 16 bytes 2102 + #endif 2103 2103 subs r9, #0x80 2104 2104 2105 2105 blo .Lxts_dec_short
+8 -4
arch/arm/crypto/bsaes-armv7.pl
··· 701 701 # define VFP_ABI_FRAME 0 702 702 # define BSAES_ASM_EXTENDED_KEY 703 703 # define XTS_CHAIN_TWEAK 704 - # define __ARM_ARCH__ 7 704 + # define __ARM_ARCH__ __LINUX_ARM_ARCH__ 705 + # define __ARM_MAX_ARCH__ 7 705 706 #endif 706 707 707 708 #ifdef __thumb__ 708 709 # define adrl adr 709 710 #endif 710 711 711 - #if __ARM_ARCH__>=7 712 + #if __ARM_MAX_ARCH__>=7 713 + .arch armv7-a 714 + .fpu neon 715 + 712 716 .text 713 717 .syntax unified @ ARMv7-capable assembler is expected to handle this 714 718 #ifdef __thumb2__ ··· 720 716 #else 721 717 .code 32 722 718 #endif 723 - 724 - .fpu neon 725 719 726 720 .type _bsaes_decrypt8,%function 727 721 .align 4 ··· 2078 2076 vld1.8 {@XMM[8]}, [r0] @ initial tweak 2079 2077 adr $magic, .Lxts_magic 2080 2078 2079 + #ifndef XTS_CHAIN_TWEAK 2081 2080 tst $len, #0xf @ if not multiple of 16 2082 2081 it ne @ Thumb2 thing, sanity check in ARM 2083 2082 subne $len, #0x10 @ subtract another 16 bytes 2083 + #endif 2084 2084 subs $len, #0x80 2085 2085 2086 2086 blo .Lxts_dec_short
+8 -9
arch/arm/include/asm/kvm_mmu.h
··· 149 149 (__boundary - 1 < (end) - 1)? __boundary: (end); \ 150 150 }) 151 151 152 + #define kvm_pgd_index(addr) pgd_index(addr) 153 + 152 154 static inline bool kvm_page_empty(void *ptr) 153 155 { 154 156 struct page *ptr_page = virt_to_page(ptr); 155 157 return page_count(ptr_page) == 1; 156 158 } 157 - 158 159 159 160 #define kvm_pte_table_empty(kvm, ptep) kvm_page_empty(ptep) 160 161 #define kvm_pmd_table_empty(kvm, pmdp) kvm_page_empty(pmdp) ··· 163 162 164 163 #define KVM_PREALLOC_LEVEL 0 165 164 166 - static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd) 167 - { 168 - return 0; 169 - } 170 - 171 - static inline void kvm_free_hwpgd(struct kvm *kvm) { } 172 - 173 165 static inline void *kvm_get_hwpgd(struct kvm *kvm) 174 166 { 175 167 return kvm->arch.pgd; 168 + } 169 + 170 + static inline unsigned int kvm_get_hwpgd_size(void) 171 + { 172 + return PTRS_PER_S2_PGD * sizeof(pgd_t); 176 173 } 177 174 178 175 struct kvm; ··· 206 207 207 208 bool need_flush = !vcpu_has_cache_enabled(vcpu) || ipa_uncached; 208 209 209 - VM_BUG_ON(size & PAGE_MASK); 210 + VM_BUG_ON(size & ~PAGE_MASK); 210 211 211 212 if (!need_flush && !icache_is_pipt()) 212 213 goto vipt_cache;
+4 -1
arch/arm/include/debug/at91.S
··· 18 18 #define AT91_DBGU 0xfc00c000 /* SAMA5D4_BASE_USART3 */ 19 19 #endif 20 20 21 - /* Keep in sync with mach-at91/include/mach/hardware.h */ 21 + #ifdef CONFIG_MMU 22 22 #define AT91_IO_P2V(x) ((x) - 0x01000000) 23 + #else 24 + #define AT91_IO_P2V(x) (x) 25 + #endif 23 26 24 27 #define AT91_DBGU_SR (0x14) /* Status Register */ 25 28 #define AT91_DBGU_THR (0x1c) /* Transmitter Holding Register */
+1 -4
arch/arm/kernel/setup.c
··· 246 246 if (cpu_arch) 247 247 cpu_arch += CPU_ARCH_ARMv3; 248 248 } else if ((read_cpuid_id() & 0x000f0000) == 0x000f0000) { 249 - unsigned int mmfr0; 250 - 251 249 /* Revised CPUID format. Read the Memory Model Feature 252 250 * Register 0 and check for VMSAv7 or PMSAv7 */ 253 - asm("mrc p15, 0, %0, c0, c1, 4" 254 - : "=r" (mmfr0)); 251 + unsigned int mmfr0 = read_cpuid_ext(CPUID_EXT_MMFR0); 255 252 if ((mmfr0 & 0x0000000f) >= 0x00000003 || 256 253 (mmfr0 & 0x000000f0) >= 0x00000030) 257 254 cpu_arch = CPU_ARCH_ARMv7;
+1 -1
arch/arm/kvm/arm.c
··· 540 540 541 541 vcpu->mode = OUTSIDE_GUEST_MODE; 542 542 kvm_guest_exit(); 543 - trace_kvm_exit(*vcpu_pc(vcpu)); 543 + trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu)); 544 544 /* 545 545 * We may have taken a host interrupt in HYP mode (ie 546 546 * while executing the guest). This interrupt is still
+53 -22
arch/arm/kvm/mmu.c
··· 290 290 phys_addr_t addr = start, end = start + size; 291 291 phys_addr_t next; 292 292 293 - pgd = pgdp + pgd_index(addr); 293 + pgd = pgdp + kvm_pgd_index(addr); 294 294 do { 295 295 next = kvm_pgd_addr_end(addr, end); 296 296 if (!pgd_none(*pgd)) ··· 355 355 phys_addr_t next; 356 356 pgd_t *pgd; 357 357 358 - pgd = kvm->arch.pgd + pgd_index(addr); 358 + pgd = kvm->arch.pgd + kvm_pgd_index(addr); 359 359 do { 360 360 next = kvm_pgd_addr_end(addr, end); 361 361 stage2_flush_puds(kvm, pgd, addr, next); ··· 632 632 __phys_to_pfn(phys_addr), PAGE_HYP_DEVICE); 633 633 } 634 634 635 + /* Free the HW pgd, one page at a time */ 636 + static void kvm_free_hwpgd(void *hwpgd) 637 + { 638 + free_pages_exact(hwpgd, kvm_get_hwpgd_size()); 639 + } 640 + 641 + /* Allocate the HW PGD, making sure that each page gets its own refcount */ 642 + static void *kvm_alloc_hwpgd(void) 643 + { 644 + unsigned int size = kvm_get_hwpgd_size(); 645 + 646 + return alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO); 647 + } 648 + 635 649 /** 636 650 * kvm_alloc_stage2_pgd - allocate level-1 table for stage-2 translation. 637 651 * @kvm: The KVM struct pointer for the VM. ··· 659 645 */ 660 646 int kvm_alloc_stage2_pgd(struct kvm *kvm) 661 647 { 662 - int ret; 663 648 pgd_t *pgd; 649 + void *hwpgd; 664 650 665 651 if (kvm->arch.pgd != NULL) { 666 652 kvm_err("kvm_arch already initialized?\n"); 667 653 return -EINVAL; 668 654 } 669 655 656 + hwpgd = kvm_alloc_hwpgd(); 657 + if (!hwpgd) 658 + return -ENOMEM; 659 + 660 + /* When the kernel uses more levels of page tables than the 661 + * guest, we allocate a fake PGD and pre-populate it to point 662 + * to the next-level page table, which will be the real 663 + * initial page table pointed to by the VTTBR. 664 + * 665 + * When KVM_PREALLOC_LEVEL==2, we allocate a single page for 666 + * the PMD and the kernel will use folded pud. 667 + * When KVM_PREALLOC_LEVEL==1, we allocate 2 consecutive PUD 668 + * pages. 669 + */ 670 670 if (KVM_PREALLOC_LEVEL > 0) { 671 + int i; 672 + 671 673 /* 672 674 * Allocate fake pgd for the page table manipulation macros to 673 675 * work. This is not used by the hardware and we have no ··· 691 661 */ 692 662 pgd = (pgd_t *)kmalloc(PTRS_PER_S2_PGD * sizeof(pgd_t), 693 663 GFP_KERNEL | __GFP_ZERO); 664 + 665 + if (!pgd) { 666 + kvm_free_hwpgd(hwpgd); 667 + return -ENOMEM; 668 + } 669 + 670 + /* Plug the HW PGD into the fake one. */ 671 + for (i = 0; i < PTRS_PER_S2_PGD; i++) { 672 + if (KVM_PREALLOC_LEVEL == 1) 673 + pgd_populate(NULL, pgd + i, 674 + (pud_t *)hwpgd + i * PTRS_PER_PUD); 675 + else if (KVM_PREALLOC_LEVEL == 2) 676 + pud_populate(NULL, pud_offset(pgd, 0) + i, 677 + (pmd_t *)hwpgd + i * PTRS_PER_PMD); 678 + } 694 679 } else { 695 680 /* 696 681 * Allocate actual first-level Stage-2 page table used by the 697 682 * hardware for Stage-2 page table walks. 698 683 */ 699 - pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, S2_PGD_ORDER); 684 + pgd = (pgd_t *)hwpgd; 700 685 } 701 - 702 - if (!pgd) 703 - return -ENOMEM; 704 - 705 - ret = kvm_prealloc_hwpgd(kvm, pgd); 706 - if (ret) 707 - goto out_err; 708 686 709 687 kvm_clean_pgd(pgd); 710 688 kvm->arch.pgd = pgd; 711 689 return 0; 712 - out_err: 713 - if (KVM_PREALLOC_LEVEL > 0) 714 - kfree(pgd); 715 - else 716 - free_pages((unsigned long)pgd, S2_PGD_ORDER); 717 - return ret; 718 690 } 719 691 720 692 /** ··· 817 785 return; 818 786 819 787 unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE); 820 - kvm_free_hwpgd(kvm); 788 + kvm_free_hwpgd(kvm_get_hwpgd(kvm)); 821 789 if (KVM_PREALLOC_LEVEL > 0) 822 790 kfree(kvm->arch.pgd); 823 - else 824 - free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER); 791 + 825 792 kvm->arch.pgd = NULL; 826 793 } 827 794 ··· 830 799 pgd_t *pgd; 831 800 pud_t *pud; 832 801 833 - pgd = kvm->arch.pgd + pgd_index(addr); 802 + pgd = kvm->arch.pgd + kvm_pgd_index(addr); 834 803 if (WARN_ON(pgd_none(*pgd))) { 835 804 if (!cache) 836 805 return NULL; ··· 1120 1089 pgd_t *pgd; 1121 1090 phys_addr_t next; 1122 1091 1123 - pgd = kvm->arch.pgd + pgd_index(addr); 1092 + pgd = kvm->arch.pgd + kvm_pgd_index(addr); 1124 1093 do { 1125 1094 /* 1126 1095 * Release kvm_mmu_lock periodically if the memory region is
+7 -3
arch/arm/kvm/trace.h
··· 25 25 ); 26 26 27 27 TRACE_EVENT(kvm_exit, 28 - TP_PROTO(unsigned long vcpu_pc), 29 - TP_ARGS(vcpu_pc), 28 + TP_PROTO(unsigned int exit_reason, unsigned long vcpu_pc), 29 + TP_ARGS(exit_reason, vcpu_pc), 30 30 31 31 TP_STRUCT__entry( 32 + __field( unsigned int, exit_reason ) 32 33 __field( unsigned long, vcpu_pc ) 33 34 ), 34 35 35 36 TP_fast_assign( 37 + __entry->exit_reason = exit_reason; 36 38 __entry->vcpu_pc = vcpu_pc; 37 39 ), 38 40 39 - TP_printk("PC: 0x%08lx", __entry->vcpu_pc) 41 + TP_printk("HSR_EC: 0x%04x, PC: 0x%08lx", 42 + __entry->exit_reason, 43 + __entry->vcpu_pc) 40 44 ); 41 45 42 46 TRACE_EVENT(kvm_guest_fault,
+10 -12
arch/arm/mach-at91/pm.c
··· 270 270 phys_addr_t sram_pbase; 271 271 unsigned long sram_base; 272 272 struct device_node *node; 273 - struct platform_device *pdev; 273 + struct platform_device *pdev = NULL; 274 274 275 - node = of_find_compatible_node(NULL, NULL, "mmio-sram"); 276 - if (!node) { 277 - pr_warn("%s: failed to find sram node!\n", __func__); 278 - return; 275 + for_each_compatible_node(node, NULL, "mmio-sram") { 276 + pdev = of_find_device_by_node(node); 277 + if (pdev) { 278 + of_node_put(node); 279 + break; 280 + } 279 281 } 280 282 281 - pdev = of_find_device_by_node(node); 282 283 if (!pdev) { 283 284 pr_warn("%s: failed to find sram device!\n", __func__); 284 - goto put_node; 285 + return; 285 286 } 286 287 287 288 sram_pool = dev_get_gen_pool(&pdev->dev); 288 289 if (!sram_pool) { 289 290 pr_warn("%s: sram pool unavailable!\n", __func__); 290 - goto put_node; 291 + return; 291 292 } 292 293 293 294 sram_base = gen_pool_alloc(sram_pool, at91_slow_clock_sz); 294 295 if (!sram_base) { 295 296 pr_warn("%s: unable to alloc ocram!\n", __func__); 296 - goto put_node; 297 + return; 297 298 } 298 299 299 300 sram_pbase = gen_pool_virt_to_phys(sram_pool, sram_base); 300 301 slow_clock = __arm_ioremap_exec(sram_pbase, at91_slow_clock_sz, false); 301 - 302 - put_node: 303 - of_node_put(node); 304 302 } 305 303 #endif 306 304
+1 -1
arch/arm/mach-at91/pm.h
··· 44 44 " mcr p15, 0, %0, c7, c0, 4\n\t" 45 45 " str %5, [%1, %2]" 46 46 : 47 - : "r" (0), "r" (AT91_BASE_SYS), "r" (AT91RM9200_SDRAMC_LPR), 47 + : "r" (0), "r" (at91_ramc_base[0]), "r" (AT91RM9200_SDRAMC_LPR), 48 48 "r" (1), "r" (AT91RM9200_SDRAMC_SRR), 49 49 "r" (lpr)); 50 50 }
+46 -34
arch/arm/mach-at91/pm_slowclock.S
··· 25 25 */ 26 26 #undef SLOWDOWN_MASTER_CLOCK 27 27 28 - #define MCKRDY_TIMEOUT 1000 29 - #define MOSCRDY_TIMEOUT 1000 30 - #define PLLALOCK_TIMEOUT 1000 31 - #define PLLBLOCK_TIMEOUT 1000 32 - 33 28 pmc .req r0 34 29 sdramc .req r1 35 30 ramc1 .req r2 ··· 36 41 * Wait until master clock is ready (after switching master clock source) 37 42 */ 38 43 .macro wait_mckrdy 39 - mov tmp2, #MCKRDY_TIMEOUT 40 - 1: sub tmp2, tmp2, #1 41 - cmp tmp2, #0 42 - beq 2f 43 - ldr tmp1, [pmc, #AT91_PMC_SR] 44 + 1: ldr tmp1, [pmc, #AT91_PMC_SR] 44 45 tst tmp1, #AT91_PMC_MCKRDY 45 46 beq 1b 46 - 2: 47 47 .endm 48 48 49 49 /* 50 50 * Wait until master oscillator has stabilized. 51 51 */ 52 52 .macro wait_moscrdy 53 - mov tmp2, #MOSCRDY_TIMEOUT 54 - 1: sub tmp2, tmp2, #1 55 - cmp tmp2, #0 56 - beq 2f 57 - ldr tmp1, [pmc, #AT91_PMC_SR] 53 + 1: ldr tmp1, [pmc, #AT91_PMC_SR] 58 54 tst tmp1, #AT91_PMC_MOSCS 59 55 beq 1b 60 - 2: 61 56 .endm 62 57 63 58 /* 64 59 * Wait until PLLA has locked. 65 60 */ 66 61 .macro wait_pllalock 67 - mov tmp2, #PLLALOCK_TIMEOUT 68 - 1: sub tmp2, tmp2, #1 69 - cmp tmp2, #0 70 - beq 2f 71 - ldr tmp1, [pmc, #AT91_PMC_SR] 62 + 1: ldr tmp1, [pmc, #AT91_PMC_SR] 72 63 tst tmp1, #AT91_PMC_LOCKA 73 64 beq 1b 74 - 2: 75 65 .endm 76 66 77 67 /* 78 68 * Wait until PLLB has locked. 79 69 */ 80 70 .macro wait_pllblock 81 - mov tmp2, #PLLBLOCK_TIMEOUT 82 - 1: sub tmp2, tmp2, #1 83 - cmp tmp2, #0 84 - beq 2f 85 - ldr tmp1, [pmc, #AT91_PMC_SR] 71 + 1: ldr tmp1, [pmc, #AT91_PMC_SR] 86 72 tst tmp1, #AT91_PMC_LOCKB 87 73 beq 1b 88 - 2: 89 74 .endm 90 75 91 76 .text 77 + 78 + .arm 92 79 93 80 /* void at91_slow_clock(void __iomem *pmc, void __iomem *sdramc, 94 81 * void __iomem *ramc1, int memctrl) ··· 111 134 cmp memctrl, #AT91_MEMCTRL_DDRSDR 112 135 bne sdr_sr_enable 113 136 137 + /* LPDDR1 --> force DDR2 mode during self-refresh */ 138 + ldr tmp1, [sdramc, #AT91_DDRSDRC_MDR] 139 + str tmp1, .saved_sam9_mdr 140 + bic tmp1, tmp1, #~AT91_DDRSDRC_MD 141 + cmp tmp1, #AT91_DDRSDRC_MD_LOW_POWER_DDR 142 + ldreq tmp1, [sdramc, #AT91_DDRSDRC_MDR] 143 + biceq tmp1, tmp1, #AT91_DDRSDRC_MD 144 + orreq tmp1, tmp1, #AT91_DDRSDRC_MD_DDR2 145 + streq tmp1, [sdramc, #AT91_DDRSDRC_MDR] 146 + 114 147 /* prepare for DDRAM self-refresh mode */ 115 148 ldr tmp1, [sdramc, #AT91_DDRSDRC_LPR] 116 149 str tmp1, .saved_sam9_lpr ··· 129 142 130 143 /* figure out if we use the second ram controller */ 131 144 cmp ramc1, #0 132 - ldrne tmp2, [ramc1, #AT91_DDRSDRC_LPR] 133 - strne tmp2, .saved_sam9_lpr1 134 - bicne tmp2, #AT91_DDRSDRC_LPCB 135 - orrne tmp2, #AT91_DDRSDRC_LPCB_SELF_REFRESH 145 + beq ddr_no_2nd_ctrl 146 + 147 + ldr tmp2, [ramc1, #AT91_DDRSDRC_MDR] 148 + str tmp2, .saved_sam9_mdr1 149 + bic tmp2, tmp2, #~AT91_DDRSDRC_MD 150 + cmp tmp2, #AT91_DDRSDRC_MD_LOW_POWER_DDR 151 + ldreq tmp2, [ramc1, #AT91_DDRSDRC_MDR] 152 + biceq tmp2, tmp2, #AT91_DDRSDRC_MD 153 + orreq tmp2, tmp2, #AT91_DDRSDRC_MD_DDR2 154 + streq tmp2, [ramc1, #AT91_DDRSDRC_MDR] 155 + 156 + ldr tmp2, [ramc1, #AT91_DDRSDRC_LPR] 157 + str tmp2, .saved_sam9_lpr1 158 + bic tmp2, #AT91_DDRSDRC_LPCB 159 + orr tmp2, #AT91_DDRSDRC_LPCB_SELF_REFRESH 136 160 137 161 /* Enable DDRAM self-refresh mode */ 162 + str tmp2, [ramc1, #AT91_DDRSDRC_LPR] 163 + ddr_no_2nd_ctrl: 138 164 str tmp1, [sdramc, #AT91_DDRSDRC_LPR] 139 - strne tmp2, [ramc1, #AT91_DDRSDRC_LPR] 140 165 141 166 b sdr_sr_done 142 167 ··· 207 208 /* Turn off the main oscillator */ 208 209 ldr tmp1, [pmc, #AT91_CKGR_MOR] 209 210 bic tmp1, tmp1, #AT91_PMC_MOSCEN 211 + orr tmp1, tmp1, #AT91_PMC_KEY 210 212 str tmp1, [pmc, #AT91_CKGR_MOR] 211 213 212 214 /* Wait for interrupt */ ··· 216 216 /* Turn on the main oscillator */ 217 217 ldr tmp1, [pmc, #AT91_CKGR_MOR] 218 218 orr tmp1, tmp1, #AT91_PMC_MOSCEN 219 + orr tmp1, tmp1, #AT91_PMC_KEY 219 220 str tmp1, [pmc, #AT91_CKGR_MOR] 220 221 221 222 wait_moscrdy ··· 281 280 */ 282 281 cmp memctrl, #AT91_MEMCTRL_DDRSDR 283 282 bne sdr_en_restore 283 + /* Restore MDR in case of LPDDR1 */ 284 + ldr tmp1, .saved_sam9_mdr 285 + str tmp1, [sdramc, #AT91_DDRSDRC_MDR] 284 286 /* Restore LPR on AT91 with DDRAM */ 285 287 ldr tmp1, .saved_sam9_lpr 286 288 str tmp1, [sdramc, #AT91_DDRSDRC_LPR] 287 289 288 290 /* if we use the second ram controller */ 289 291 cmp ramc1, #0 292 + ldrne tmp2, .saved_sam9_mdr1 293 + strne tmp2, [ramc1, #AT91_DDRSDRC_MDR] 290 294 ldrne tmp2, .saved_sam9_lpr1 291 295 strne tmp2, [ramc1, #AT91_DDRSDRC_LPR] 292 296 ··· 323 317 .word 0 324 318 325 319 .saved_sam9_lpr1: 320 + .word 0 321 + 322 + .saved_sam9_mdr: 323 + .word 0 324 + 325 + .saved_sam9_mdr1: 326 326 .word 0 327 327 328 328 ENTRY(at91_slow_clock_sz)
+1 -2
arch/arm/mach-exynos/platsmp.c
··· 126 126 */ 127 127 void exynos_cpu_power_down(int cpu) 128 128 { 129 - if (cpu == 0 && (of_machine_is_compatible("samsung,exynos5420") || 130 - of_machine_is_compatible("samsung,exynos5800"))) { 129 + if (cpu == 0 && (soc_is_exynos5420() || soc_is_exynos5800())) { 131 130 /* 132 131 * Bypass power down for CPU0 during suspend. Check for 133 132 * the SYS_PWR_REG value to decide if we are suspending
+28
arch/arm/mach-exynos/pm_domains.c
··· 161 161 of_genpd_add_provider_simple(np, &pd->pd); 162 162 } 163 163 164 + /* Assign the child power domains to their parents */ 165 + for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") { 166 + struct generic_pm_domain *child_domain, *parent_domain; 167 + struct of_phandle_args args; 168 + 169 + args.np = np; 170 + args.args_count = 0; 171 + child_domain = of_genpd_get_from_provider(&args); 172 + if (!child_domain) 173 + continue; 174 + 175 + if (of_parse_phandle_with_args(np, "power-domains", 176 + "#power-domain-cells", 0, &args) != 0) 177 + continue; 178 + 179 + parent_domain = of_genpd_get_from_provider(&args); 180 + if (!parent_domain) 181 + continue; 182 + 183 + if (pm_genpd_add_subdomain(parent_domain, child_domain)) 184 + pr_warn("%s failed to add subdomain: %s\n", 185 + parent_domain->name, child_domain->name); 186 + else 187 + pr_info("%s has as child subdomain: %s.\n", 188 + parent_domain->name, child_domain->name); 189 + of_node_put(np); 190 + } 191 + 164 192 return 0; 165 193 } 166 194 arch_initcall(exynos4_pm_init_power_domain);
+2 -2
arch/arm/mach-exynos/suspend.c
··· 87 87 static u32 exynos_irqwake_intmask = 0xffffffff; 88 88 89 89 static const struct exynos_wkup_irq exynos3250_wkup_irq[] = { 90 - { 73, BIT(1) }, /* RTC alarm */ 91 - { 74, BIT(2) }, /* RTC tick */ 90 + { 105, BIT(1) }, /* RTC alarm */ 91 + { 106, BIT(2) }, /* RTC tick */ 92 92 { /* sentinel */ }, 93 93 }; 94 94
+3 -2
arch/arm/mach-imx/mach-imx6q.c
··· 211 211 * set bit IOMUXC_GPR1[21]. Or the PTP clock must be from pad 212 212 * (external OSC), and we need to clear the bit. 213 213 */ 214 - clksel = ptp_clk == enet_ref ? IMX6Q_GPR1_ENET_CLK_SEL_ANATOP : 215 - IMX6Q_GPR1_ENET_CLK_SEL_PAD; 214 + clksel = clk_is_match(ptp_clk, enet_ref) ? 215 + IMX6Q_GPR1_ENET_CLK_SEL_ANATOP : 216 + IMX6Q_GPR1_ENET_CLK_SEL_PAD; 216 217 gpr = syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr"); 217 218 if (!IS_ERR(gpr)) 218 219 regmap_update_bits(gpr, IOMUXC_GPR1,
+7 -1
arch/arm/mach-msm/board-halibut.c
··· 20 20 #include <linux/input.h> 21 21 #include <linux/io.h> 22 22 #include <linux/delay.h> 23 + #include <linux/smc91x.h> 23 24 24 25 #include <mach/hardware.h> 25 26 #include <asm/mach-types.h> ··· 47 46 [1] = { 48 47 .start = MSM_GPIO_TO_INT(49), 49 48 .end = MSM_GPIO_TO_INT(49), 50 - .flags = IORESOURCE_IRQ, 49 + .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 51 50 }, 51 + }; 52 + 53 + static struct smc91x_platdata smc91x_platdata = { 54 + .flags = SMC91X_USE_16BIT | SMC91X_NOWAIT, 52 55 }; 53 56 54 57 static struct platform_device smc91x_device = { ··· 60 55 .id = 0, 61 56 .num_resources = ARRAY_SIZE(smc91x_resources), 62 57 .resource = smc91x_resources, 58 + .dev.platform_data = &smc91x_platdata, 63 59 }; 64 60 65 61 static struct platform_device *devices[] __initdata = {
+7 -1
arch/arm/mach-msm/board-qsd8x50.c
··· 22 22 #include <linux/usb/msm_hsusb.h> 23 23 #include <linux/err.h> 24 24 #include <linux/clkdev.h> 25 + #include <linux/smc91x.h> 25 26 26 27 #include <asm/mach-types.h> 27 28 #include <asm/mach/arch.h> ··· 50 49 .flags = IORESOURCE_MEM, 51 50 }, 52 51 [1] = { 53 - .flags = IORESOURCE_IRQ, 52 + .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 54 53 }, 54 + }; 55 + 56 + static struct smc91x_platdata smc91x_platdata = { 57 + .flags = SMC91X_USE_16BIT | SMC91X_NOWAIT, 55 58 }; 56 59 57 60 static struct platform_device smc91x_device = { ··· 63 58 .id = 0, 64 59 .num_resources = ARRAY_SIZE(smc91x_resources), 65 60 .resource = smc91x_resources, 61 + .dev.platform_data = &smc91x_platdata, 66 62 }; 67 63 68 64 static int __init msm_init_smc91x(void)
+2
arch/arm/mach-omap2/id.c
··· 720 720 return kasprintf(GFP_KERNEL, "OMAP4"); 721 721 else if (soc_is_omap54xx()) 722 722 return kasprintf(GFP_KERNEL, "OMAP5"); 723 + else if (soc_is_am33xx() || soc_is_am335x()) 724 + return kasprintf(GFP_KERNEL, "AM33xx"); 723 725 else if (soc_is_am43xx()) 724 726 return kasprintf(GFP_KERNEL, "AM43xx"); 725 727 else if (soc_is_dra7xx())
+5 -5
arch/arm/mach-omap2/omap_hwmod.c
··· 1692 1692 if (ret == -EBUSY) 1693 1693 pr_warn("omap_hwmod: %s: failed to hardreset\n", oh->name); 1694 1694 1695 - if (!ret) { 1695 + if (oh->clkdm) { 1696 1696 /* 1697 1697 * Set the clockdomain to HW_AUTO, assuming that the 1698 1698 * previous state was HW_AUTO. 1699 1699 */ 1700 - if (oh->clkdm && hwsup) 1700 + if (hwsup) 1701 1701 clkdm_allow_idle(oh->clkdm); 1702 - } else { 1703 - if (oh->clkdm) 1704 - clkdm_hwmod_disable(oh->clkdm, oh); 1702 + 1703 + clkdm_hwmod_disable(oh->clkdm, oh); 1705 1704 } 1706 1705 1707 1706 return ret; ··· 2697 2698 INIT_LIST_HEAD(&oh->master_ports); 2698 2699 INIT_LIST_HEAD(&oh->slave_ports); 2699 2700 spin_lock_init(&oh->_lock); 2701 + lockdep_set_class(&oh->_lock, &oh->hwmod_key); 2700 2702 2701 2703 oh->_state = _HWMOD_STATE_REGISTERED; 2702 2704
+1
arch/arm/mach-omap2/omap_hwmod.h
··· 674 674 u32 _sysc_cache; 675 675 void __iomem *_mpu_rt_va; 676 676 spinlock_t _lock; 677 + struct lock_class_key hwmod_key; /* unique lock class */ 677 678 struct list_head node; 678 679 struct omap_hwmod_ocp_if *_mpu_port; 679 680 unsigned int (*xlate_irq)(unsigned int);
+24 -79
arch/arm/mach-omap2/omap_hwmod_7xx_data.c
··· 1466 1466 * 1467 1467 */ 1468 1468 1469 - static struct omap_hwmod_class dra7xx_pcie_hwmod_class = { 1469 + static struct omap_hwmod_class dra7xx_pciess_hwmod_class = { 1470 1470 .name = "pcie", 1471 1471 }; 1472 1472 1473 1473 /* pcie1 */ 1474 - static struct omap_hwmod dra7xx_pcie1_hwmod = { 1474 + static struct omap_hwmod dra7xx_pciess1_hwmod = { 1475 1475 .name = "pcie1", 1476 - .class = &dra7xx_pcie_hwmod_class, 1476 + .class = &dra7xx_pciess_hwmod_class, 1477 1477 .clkdm_name = "pcie_clkdm", 1478 - .main_clk = "l4_root_clk_div", 1479 - .prcm = { 1480 - .omap4 = { 1481 - .clkctrl_offs = DRA7XX_CM_PCIE_CLKSTCTRL_OFFSET, 1482 - .modulemode = MODULEMODE_SWCTRL, 1483 - }, 1484 - }, 1485 - }; 1486 - 1487 - /* pcie2 */ 1488 - static struct omap_hwmod dra7xx_pcie2_hwmod = { 1489 - .name = "pcie2", 1490 - .class = &dra7xx_pcie_hwmod_class, 1491 - .clkdm_name = "pcie_clkdm", 1492 - .main_clk = "l4_root_clk_div", 1493 - .prcm = { 1494 - .omap4 = { 1495 - .clkctrl_offs = DRA7XX_CM_PCIE_CLKSTCTRL_OFFSET, 1496 - .modulemode = MODULEMODE_SWCTRL, 1497 - }, 1498 - }, 1499 - }; 1500 - 1501 - /* 1502 - * 'PCIE PHY' class 1503 - * 1504 - */ 1505 - 1506 - static struct omap_hwmod_class dra7xx_pcie_phy_hwmod_class = { 1507 - .name = "pcie-phy", 1508 - }; 1509 - 1510 - /* pcie1 phy */ 1511 - static struct omap_hwmod dra7xx_pcie1_phy_hwmod = { 1512 - .name = "pcie1-phy", 1513 - .class = &dra7xx_pcie_phy_hwmod_class, 1514 - .clkdm_name = "l3init_clkdm", 1515 1478 .main_clk = "l4_root_clk_div", 1516 1479 .prcm = { 1517 1480 .omap4 = { ··· 1485 1522 }, 1486 1523 }; 1487 1524 1488 - /* pcie2 phy */ 1489 - static struct omap_hwmod dra7xx_pcie2_phy_hwmod = { 1490 - .name = "pcie2-phy", 1491 - .class = &dra7xx_pcie_phy_hwmod_class, 1492 - .clkdm_name = "l3init_clkdm", 1525 + /* pcie2 */ 1526 + static struct omap_hwmod dra7xx_pciess2_hwmod = { 1527 + .name = "pcie2", 1528 + .class = &dra7xx_pciess_hwmod_class, 1529 + .clkdm_name = "pcie_clkdm", 1493 1530 .main_clk = "l4_root_clk_div", 
1494 1531 .prcm = { 1495 1532 .omap4 = { ··· 2840 2877 .user = OCP_USER_MPU | OCP_USER_SDMA, 2841 2878 }; 2842 2879 2843 - /* l3_main_1 -> pcie1 */ 2844 - static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pcie1 = { 2880 + /* l3_main_1 -> pciess1 */ 2881 + static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pciess1 = { 2845 2882 .master = &dra7xx_l3_main_1_hwmod, 2846 - .slave = &dra7xx_pcie1_hwmod, 2883 + .slave = &dra7xx_pciess1_hwmod, 2847 2884 .clk = "l3_iclk_div", 2848 2885 .user = OCP_USER_MPU | OCP_USER_SDMA, 2849 2886 }; 2850 2887 2851 - /* l4_cfg -> pcie1 */ 2852 - static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie1 = { 2888 + /* l4_cfg -> pciess1 */ 2889 + static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pciess1 = { 2853 2890 .master = &dra7xx_l4_cfg_hwmod, 2854 - .slave = &dra7xx_pcie1_hwmod, 2891 + .slave = &dra7xx_pciess1_hwmod, 2855 2892 .clk = "l4_root_clk_div", 2856 2893 .user = OCP_USER_MPU | OCP_USER_SDMA, 2857 2894 }; 2858 2895 2859 - /* l3_main_1 -> pcie2 */ 2860 - static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pcie2 = { 2896 + /* l3_main_1 -> pciess2 */ 2897 + static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pciess2 = { 2861 2898 .master = &dra7xx_l3_main_1_hwmod, 2862 - .slave = &dra7xx_pcie2_hwmod, 2899 + .slave = &dra7xx_pciess2_hwmod, 2863 2900 .clk = "l3_iclk_div", 2864 2901 .user = OCP_USER_MPU | OCP_USER_SDMA, 2865 2902 }; 2866 2903 2867 - /* l4_cfg -> pcie2 */ 2868 - static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie2 = { 2904 + /* l4_cfg -> pciess2 */ 2905 + static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pciess2 = { 2869 2906 .master = &dra7xx_l4_cfg_hwmod, 2870 - .slave = &dra7xx_pcie2_hwmod, 2871 - .clk = "l4_root_clk_div", 2872 - .user = OCP_USER_MPU | OCP_USER_SDMA, 2873 - }; 2874 - 2875 - /* l4_cfg -> pcie1 phy */ 2876 - static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie1_phy = { 2877 - .master = &dra7xx_l4_cfg_hwmod, 2878 - .slave = &dra7xx_pcie1_phy_hwmod, 2879 - .clk = "l4_root_clk_div", 2880 - .user = OCP_USER_MPU | 
OCP_USER_SDMA, 2881 - }; 2882 - 2883 - /* l4_cfg -> pcie2 phy */ 2884 - static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie2_phy = { 2885 - .master = &dra7xx_l4_cfg_hwmod, 2886 - .slave = &dra7xx_pcie2_phy_hwmod, 2907 + .slave = &dra7xx_pciess2_hwmod, 2887 2908 .clk = "l4_root_clk_div", 2888 2909 .user = OCP_USER_MPU | OCP_USER_SDMA, 2889 2910 }; ··· 3274 3327 &dra7xx_l4_cfg__mpu, 3275 3328 &dra7xx_l4_cfg__ocp2scp1, 3276 3329 &dra7xx_l4_cfg__ocp2scp3, 3277 - &dra7xx_l3_main_1__pcie1, 3278 - &dra7xx_l4_cfg__pcie1, 3279 - &dra7xx_l3_main_1__pcie2, 3280 - &dra7xx_l4_cfg__pcie2, 3281 - &dra7xx_l4_cfg__pcie1_phy, 3282 - &dra7xx_l4_cfg__pcie2_phy, 3330 + &dra7xx_l3_main_1__pciess1, 3331 + &dra7xx_l4_cfg__pciess1, 3332 + &dra7xx_l3_main_1__pciess2, 3333 + &dra7xx_l4_cfg__pciess2, 3283 3334 &dra7xx_l3_main_1__qspi, 3284 3335 &dra7xx_l4_per3__rtcss, 3285 3336 &dra7xx_l4_cfg__sata,
+1
arch/arm/mach-omap2/pdata-quirks.c
··· 173 173 174 174 static void __init omap3_evm_legacy_init(void) 175 175 { 176 + hsmmc2_internal_input_clk(); 176 177 legacy_init_wl12xx(WL12XX_REFCLOCK_38, 0, 149); 177 178 } 178 179
+2 -2
arch/arm/mach-omap2/prm44xx.c
··· 252 252 { 253 253 saved_mask[0] = 254 254 omap4_prm_read_inst_reg(OMAP4430_PRM_OCP_SOCKET_INST, 255 - OMAP4_PRM_IRQSTATUS_MPU_OFFSET); 255 + OMAP4_PRM_IRQENABLE_MPU_OFFSET); 256 256 saved_mask[1] = 257 257 omap4_prm_read_inst_reg(OMAP4430_PRM_OCP_SOCKET_INST, 258 - OMAP4_PRM_IRQSTATUS_MPU_2_OFFSET); 258 + OMAP4_PRM_IRQENABLE_MPU_2_OFFSET); 259 259 260 260 omap4_prm_write_inst_reg(0, OMAP4430_PRM_OCP_SOCKET_INST, 261 261 OMAP4_PRM_IRQENABLE_MPU_OFFSET);
+6
arch/arm/mach-pxa/idp.c
··· 36 36 #include <linux/platform_data/video-pxafb.h> 37 37 #include <mach/bitfield.h> 38 38 #include <linux/platform_data/mmc-pxamci.h> 39 + #include <linux/smc91x.h> 39 40 40 41 #include "generic.h" 41 42 #include "devices.h" ··· 82 81 } 83 82 }; 84 83 84 + static struct smc91x_platdata smc91x_platdata = { 85 + .flags = SMC91X_USE_32BIT | SMC91X_USE_DMA | SMC91X_NOWAIT, 86 + }; 87 + 85 88 static struct platform_device smc91x_device = { 86 89 .name = "smc91x", 87 90 .id = 0, 88 91 .num_resources = ARRAY_SIZE(smc91x_resources), 89 92 .resource = smc91x_resources, 93 + .dev.platform_data = &smc91x_platdata, 90 94 }; 91 95 92 96 static void idp_backlight_power(int on)
+48 -63
arch/arm/mach-pxa/irq.c
··· 11 11 * it under the terms of the GNU General Public License version 2 as 12 12 * published by the Free Software Foundation. 13 13 */ 14 + #include <linux/bitops.h> 14 15 #include <linux/init.h> 15 16 #include <linux/module.h> 16 17 #include <linux/interrupt.h> ··· 41 40 #define ICHP_VAL_IRQ (1 << 31) 42 41 #define ICHP_IRQ(i) (((i) >> 16) & 0x7fff) 43 42 #define IPR_VALID (1 << 31) 44 - #define IRQ_BIT(n) (((n) - PXA_IRQ(0)) & 0x1f) 45 43 46 44 #define MAX_INTERNAL_IRQS 128 47 45 ··· 51 51 static void __iomem *pxa_irq_base; 52 52 static int pxa_internal_irq_nr; 53 53 static bool cpu_has_ipr; 54 + static struct irq_domain *pxa_irq_domain; 54 55 55 56 static inline void __iomem *irq_base(int i) 56 57 { ··· 67 66 void pxa_mask_irq(struct irq_data *d) 68 67 { 69 68 void __iomem *base = irq_data_get_irq_chip_data(d); 69 + irq_hw_number_t irq = irqd_to_hwirq(d); 70 70 uint32_t icmr = __raw_readl(base + ICMR); 71 71 72 - icmr &= ~(1 << IRQ_BIT(d->irq)); 72 + icmr &= ~BIT(irq & 0x1f); 73 73 __raw_writel(icmr, base + ICMR); 74 74 } 75 75 76 76 void pxa_unmask_irq(struct irq_data *d) 77 77 { 78 78 void __iomem *base = irq_data_get_irq_chip_data(d); 79 + irq_hw_number_t irq = irqd_to_hwirq(d); 79 80 uint32_t icmr = __raw_readl(base + ICMR); 80 81 81 - icmr |= 1 << IRQ_BIT(d->irq); 82 + icmr |= BIT(irq & 0x1f); 82 83 __raw_writel(icmr, base + ICMR); 83 84 } 84 85 ··· 121 118 } while (1); 122 119 } 123 120 124 - void __init pxa_init_irq(int irq_nr, int (*fn)(struct irq_data *, unsigned int)) 121 + static int pxa_irq_map(struct irq_domain *h, unsigned int virq, 122 + irq_hw_number_t hw) 125 123 { 126 - int irq, i, n; 124 + void __iomem *base = irq_base(hw / 32); 127 125 128 - BUG_ON(irq_nr > MAX_INTERNAL_IRQS); 126 + /* initialize interrupt priority */ 127 + if (cpu_has_ipr) 128 + __raw_writel(hw | IPR_VALID, pxa_irq_base + IPR(hw)); 129 + 130 + irq_set_chip_and_handler(virq, &pxa_internal_irq_chip, 131 + handle_level_irq); 132 + irq_set_chip_data(virq, base); 133 + 
set_irq_flags(virq, IRQF_VALID); 134 + 135 + return 0; 136 + } 137 + 138 + static struct irq_domain_ops pxa_irq_ops = { 139 + .map = pxa_irq_map, 140 + .xlate = irq_domain_xlate_onecell, 141 + }; 142 + 143 + static __init void 144 + pxa_init_irq_common(struct device_node *node, int irq_nr, 145 + int (*fn)(struct irq_data *, unsigned int)) 146 + { 147 + int n; 129 148 130 149 pxa_internal_irq_nr = irq_nr; 131 - cpu_has_ipr = !cpu_is_pxa25x(); 132 - pxa_irq_base = io_p2v(0x40d00000); 150 + pxa_irq_domain = irq_domain_add_legacy(node, irq_nr, 151 + PXA_IRQ(0), 0, 152 + &pxa_irq_ops, NULL); 153 + if (!pxa_irq_domain) 154 + panic("Unable to add PXA IRQ domain\n"); 155 + irq_set_default_host(pxa_irq_domain); 133 156 134 157 for (n = 0; n < irq_nr; n += 32) { 135 158 void __iomem *base = irq_base(n >> 5); 136 159 137 160 __raw_writel(0, base + ICMR); /* disable all IRQs */ 138 161 __raw_writel(0, base + ICLR); /* all IRQs are IRQ, not FIQ */ 139 - for (i = n; (i < (n + 32)) && (i < irq_nr); i++) { 140 - /* initialize interrupt priority */ 141 - if (cpu_has_ipr) 142 - __raw_writel(i | IPR_VALID, pxa_irq_base + IPR(i)); 143 - 144 - irq = PXA_IRQ(i); 145 - irq_set_chip_and_handler(irq, &pxa_internal_irq_chip, 146 - handle_level_irq); 147 - irq_set_chip_data(irq, base); 148 - set_irq_flags(irq, IRQF_VALID); 149 - } 150 162 } 151 - 152 163 /* only unmasked interrupts kick us out of idle */ 153 164 __raw_writel(1, irq_base(0) + ICCR); 154 165 155 166 pxa_internal_irq_chip.irq_set_wake = fn; 167 + } 168 + 169 + void __init pxa_init_irq(int irq_nr, int (*fn)(struct irq_data *, unsigned int)) 170 + { 171 + BUG_ON(irq_nr > MAX_INTERNAL_IRQS); 172 + 173 + pxa_irq_base = io_p2v(0x40d00000); 174 + cpu_has_ipr = !cpu_is_pxa25x(); 175 + pxa_init_irq_common(NULL, irq_nr, fn); 156 176 } 157 177 158 178 #ifdef CONFIG_PM ··· 229 203 }; 230 204 231 205 #ifdef CONFIG_OF 232 - static struct irq_domain *pxa_irq_domain; 233 - 234 - static int pxa_irq_map(struct irq_domain *h, unsigned int virq, 
235 - irq_hw_number_t hw) 236 - { 237 - void __iomem *base = irq_base(hw / 32); 238 - 239 - /* initialize interrupt priority */ 240 - if (cpu_has_ipr) 241 - __raw_writel(hw | IPR_VALID, pxa_irq_base + IPR(hw)); 242 - 243 - irq_set_chip_and_handler(hw, &pxa_internal_irq_chip, 244 - handle_level_irq); 245 - irq_set_chip_data(hw, base); 246 - set_irq_flags(hw, IRQF_VALID); 247 - 248 - return 0; 249 - } 250 - 251 - static struct irq_domain_ops pxa_irq_ops = { 252 - .map = pxa_irq_map, 253 - .xlate = irq_domain_xlate_onecell, 254 - }; 255 - 256 206 static const struct of_device_id intc_ids[] __initconst = { 257 207 { .compatible = "marvell,pxa-intc", }, 258 208 {} ··· 238 236 { 239 237 struct device_node *node; 240 238 struct resource res; 241 - int n, ret; 239 + int ret; 242 240 243 241 node = of_find_matching_node(NULL, intc_ids); 244 242 if (!node) { ··· 269 267 return; 270 268 } 271 269 272 - pxa_irq_domain = irq_domain_add_legacy(node, pxa_internal_irq_nr, 0, 0, 273 - &pxa_irq_ops, NULL); 274 - if (!pxa_irq_domain) 275 - panic("Unable to add PXA IRQ domain\n"); 276 - 277 - irq_set_default_host(pxa_irq_domain); 278 - 279 - for (n = 0; n < pxa_internal_irq_nr; n += 32) { 280 - void __iomem *base = irq_base(n >> 5); 281 - 282 - __raw_writel(0, base + ICMR); /* disable all IRQs */ 283 - __raw_writel(0, base + ICLR); /* all IRQs are IRQ, not FIQ */ 284 - } 285 - 286 - /* only unmasked interrupts kick us out of idle */ 287 - __raw_writel(1, irq_base(0) + ICCR); 288 - 289 - pxa_internal_irq_chip.irq_set_wake = fn; 270 + pxa_init_irq_common(node, pxa_internal_irq_nr, fn); 290 271 } 291 272 #endif /* CONFIG_OF */
+7 -1
arch/arm/mach-pxa/lpd270.c
··· 24 24 #include <linux/mtd/mtd.h> 25 25 #include <linux/mtd/partitions.h> 26 26 #include <linux/pwm_backlight.h> 27 + #include <linux/smc91x.h> 27 28 28 29 #include <asm/types.h> 29 30 #include <asm/setup.h> ··· 190 189 [1] = { 191 190 .start = LPD270_ETHERNET_IRQ, 192 191 .end = LPD270_ETHERNET_IRQ, 193 - .flags = IORESOURCE_IRQ, 192 + .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHEDGE, 194 193 }, 194 + }; 195 + 196 + struct smc91x_platdata smc91x_platdata = { 197 + .flags = SMC91X_USE_16BIT | SMC91X_NOWAIT, 195 198 }; 196 199 197 200 static struct platform_device smc91x_device = { ··· 203 198 .id = 0, 204 199 .num_resources = ARRAY_SIZE(smc91x_resources), 205 200 .resource = smc91x_resources, 201 + .dev.platform_data = &smc91x_platdata, 206 202 }; 207 203 208 204 static struct resource lpd270_flash_resources[] = {
+1 -1
arch/arm/mach-pxa/zeus.c
··· 412 412 }; 413 413 414 414 static struct platform_device can_regulator_device = { 415 - .name = "reg-fixed-volage", 415 + .name = "reg-fixed-voltage", 416 416 .id = 0, 417 417 .dev = { 418 418 .platform_data = &can_regulator_pdata,
+7
arch/arm/mach-realview/core.c
··· 28 28 #include <linux/platform_data/video-clcd-versatile.h> 29 29 #include <linux/io.h> 30 30 #include <linux/smsc911x.h> 31 + #include <linux/smc91x.h> 31 32 #include <linux/ata_platform.h> 32 33 #include <linux/amba/mmci.h> 33 34 #include <linux/gfp.h> ··· 95 94 .phy_interface = PHY_INTERFACE_MODE_MII, 96 95 }; 97 96 97 + static struct smc91x_platdata smc91x_platdata = { 98 + .flags = SMC91X_USE_32BIT | SMC91X_NOWAIT, 99 + }; 100 + 98 101 static struct platform_device realview_eth_device = { 99 102 .name = "smsc911x", 100 103 .id = 0, ··· 112 107 realview_eth_device.resource = res; 113 108 if (strcmp(realview_eth_device.name, "smsc911x") == 0) 114 109 realview_eth_device.dev.platform_data = &smsc911x_config; 110 + else 111 + realview_eth_device.dev.platform_data = &smc91x_platdata; 115 112 116 113 return platform_device_register(&realview_eth_device); 117 114 }
+1 -1
arch/arm/mach-realview/realview_eb.c
··· 234 234 [1] = { 235 235 .start = IRQ_EB_ETH, 236 236 .end = IRQ_EB_ETH, 237 - .flags = IORESOURCE_IRQ, 237 + .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHEDGE, 238 238 }, 239 239 }; 240 240
+6
arch/arm/mach-sa1100/neponset.c
··· 12 12 #include <linux/pm.h> 13 13 #include <linux/serial_core.h> 14 14 #include <linux/slab.h> 15 + #include <linux/smc91x.h> 15 16 16 17 #include <asm/mach-types.h> 17 18 #include <asm/mach/map.h> ··· 259 258 0x02000000, "smc91x-attrib"), 260 259 { .flags = IORESOURCE_IRQ }, 261 260 }; 261 + struct smc91x_platdata smc91x_platdata = { 262 + .flags = SMC91X_USE_8BIT | SMC91X_IO_SHIFT_2 | SMC91X_NOWAIT, 263 + }; 262 264 struct platform_device_info smc91x_devinfo = { 263 265 .parent = &dev->dev, 264 266 .name = "smc91x", 265 267 .id = 0, 266 268 .res = smc91x_resources, 267 269 .num_res = ARRAY_SIZE(smc91x_resources), 270 + .data = &smc91x_platdata, 271 + .size_data = sizeof(smc91x_platdata), 268 272 }; 269 273 int ret, irq; 270 274
+7
arch/arm/mach-sa1100/pleb.c
··· 11 11 #include <linux/irq.h> 12 12 #include <linux/io.h> 13 13 #include <linux/mtd/partitions.h> 14 + #include <linux/smc91x.h> 14 15 15 16 #include <mach/hardware.h> 16 17 #include <asm/setup.h> ··· 44 43 #endif 45 44 }; 46 45 46 + static struct smc91x_platdata smc91x_platdata = { 47 + .flags = SMC91X_USE_16BIT | SMC91X_NOWAIT, 48 + }; 47 49 48 50 static struct platform_device smc91x_device = { 49 51 .name = "smc91x", 50 52 .id = 0, 51 53 .num_resources = ARRAY_SIZE(smc91x_resources), 52 54 .resource = smc91x_resources, 55 + .dev = { 56 + .platform_data = &smc91x_platdata, 57 + }, 53 58 }; 54 59 55 60 static struct platform_device *devices[] __initdata = {
+1 -1
arch/arm/mach-socfpga/core.h
··· 45 45 46 46 extern unsigned long socfpga_cpu1start_addr; 47 47 48 - #define SOCFPGA_SCU_VIRT_BASE 0xfffec000 48 + #define SOCFPGA_SCU_VIRT_BASE 0xfee00000 49 49 50 50 #endif
+5
arch/arm/mach-socfpga/socfpga.c
··· 23 23 #include <asm/hardware/cache-l2x0.h> 24 24 #include <asm/mach/arch.h> 25 25 #include <asm/mach/map.h> 26 + #include <asm/cacheflush.h> 26 27 27 28 #include "core.h" 28 29 ··· 73 72 if (of_property_read_u32(np, "cpu1-start-addr", 74 73 (u32 *) &socfpga_cpu1start_addr)) 75 74 pr_err("SMP: Need cpu1-start-addr in device tree.\n"); 75 + 76 + /* Ensure that socfpga_cpu1start_addr is visible to other CPUs */ 77 + smp_wmb(); 78 + sync_cache_w(&socfpga_cpu1start_addr); 76 79 77 80 sys_manager_base_addr = of_iomap(np, 0); 78 81
+1
arch/arm/mach-sti/board-dt.c
··· 18 18 "st,stih415", 19 19 "st,stih416", 20 20 "st,stih407", 21 + "st,stih410", 21 22 "st,stih418", 22 23 NULL 23 24 };
+2 -6
arch/arm/mach-sunxi/Kconfig
··· 1 1 menuconfig ARCH_SUNXI 2 2 bool "Allwinner SoCs" if ARCH_MULTI_V7 3 3 select ARCH_REQUIRE_GPIOLIB 4 + select ARCH_HAS_RESET_CONTROLLER 4 5 select CLKSRC_MMIO 5 6 select GENERIC_IRQ_CHIP 6 7 select PINCTRL 7 8 select SUN4I_TIMER 9 + select RESET_CONTROLLER 8 10 9 11 if ARCH_SUNXI 10 12 ··· 22 20 config MACH_SUN6I 23 21 bool "Allwinner A31 (sun6i) SoCs support" 24 22 default ARCH_SUNXI 25 - select ARCH_HAS_RESET_CONTROLLER 26 23 select ARM_GIC 27 24 select MFD_SUN6I_PRCM 28 - select RESET_CONTROLLER 29 25 select SUN5I_HSTIMER 30 26 31 27 config MACH_SUN7I ··· 37 37 config MACH_SUN8I 38 38 bool "Allwinner A23 (sun8i) SoCs support" 39 39 default ARCH_SUNXI 40 - select ARCH_HAS_RESET_CONTROLLER 41 40 select ARM_GIC 42 41 select MFD_SUN6I_PRCM 43 - select RESET_CONTROLLER 44 42 45 43 config MACH_SUN9I 46 44 bool "Allwinner (sun9i) SoCs support" 47 45 default ARCH_SUNXI 48 - select ARCH_HAS_RESET_CONTROLLER 49 46 select ARM_GIC 50 - select RESET_CONTROLLER 51 47 52 48 endif
+16 -17
arch/arm/mm/cache-l2x0.c
··· 1131 1131 } 1132 1132 1133 1133 ret = l2x0_cache_size_of_parse(np, aux_val, aux_mask, &assoc, SZ_512K); 1134 - if (ret) 1135 - return; 1136 - 1137 - switch (assoc) { 1138 - case 16: 1139 - *aux_val &= ~L2X0_AUX_CTRL_ASSOC_MASK; 1140 - *aux_val |= L310_AUX_CTRL_ASSOCIATIVITY_16; 1141 - *aux_mask &= ~L2X0_AUX_CTRL_ASSOC_MASK; 1142 - break; 1143 - case 8: 1144 - *aux_val &= ~L2X0_AUX_CTRL_ASSOC_MASK; 1145 - *aux_mask &= ~L2X0_AUX_CTRL_ASSOC_MASK; 1146 - break; 1147 - default: 1148 - pr_err("L2C-310 OF cache associativity %d invalid, only 8 or 16 permitted\n", 1149 - assoc); 1150 - break; 1134 + if (!ret) { 1135 + switch (assoc) { 1136 + case 16: 1137 + *aux_val &= ~L2X0_AUX_CTRL_ASSOC_MASK; 1138 + *aux_val |= L310_AUX_CTRL_ASSOCIATIVITY_16; 1139 + *aux_mask &= ~L2X0_AUX_CTRL_ASSOC_MASK; 1140 + break; 1141 + case 8: 1142 + *aux_val &= ~L2X0_AUX_CTRL_ASSOC_MASK; 1143 + *aux_mask &= ~L2X0_AUX_CTRL_ASSOC_MASK; 1144 + break; 1145 + default: 1146 + pr_err("L2C-310 OF cache associativity %d invalid, only 8 or 16 permitted\n", 1147 + assoc); 1148 + break; 1149 + } 1151 1150 } 1152 1151 1153 1152 prefetch = l2x0_saved_regs.prefetch_ctrl;
+1 -1
arch/arm/mm/dma-mapping.c
··· 171 171 */ 172 172 if (sizeof(mask) != sizeof(dma_addr_t) && 173 173 mask > (dma_addr_t)~0 && 174 - dma_to_pfn(dev, ~0) < max_pfn) { 174 + dma_to_pfn(dev, ~0) < max_pfn - 1) { 175 175 if (warn) { 176 176 dev_warn(dev, "Coherent DMA mask %#llx is larger than dma_addr_t allows\n", 177 177 mask);
+1
arch/arm/mm/fault.c
··· 552 552 553 553 pr_alert("Unhandled fault: %s (0x%03x) at 0x%08lx\n", 554 554 inf->name, fsr, addr); 555 + show_pte(current->mm, addr); 555 556 556 557 info.si_signo = inf->sig; 557 558 info.si_errno = 0;
+4 -1
arch/arm/mm/pageattr.c
··· 49 49 WARN_ON_ONCE(1); 50 50 } 51 51 52 - if (!is_module_address(start) || !is_module_address(end - 1)) 52 + if (start < MODULES_VADDR || start >= MODULES_END) 53 + return -EINVAL; 54 + 55 + if (end < MODULES_VADDR || end >= MODULES_END) 53 56 return -EINVAL; 54 57 55 58 data.set_mask = set_mask;
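The pageattr change above swaps `is_module_address()` for explicit bounds tests against the module area. A sketch of the intended check, with illustrative bounds (the real `MODULES_VADDR`/`MODULES_END` are per-architecture constants) and `end` treated as exclusive:

```c
#include <assert.h>
#include <errno.h>

/* Illustrative module-area bounds for the sketch. */
#define MODULES_VADDR	0x7f000000UL
#define MODULES_END	0x80000000UL

/* Both ends of the [start, end) range must sit inside the module area,
 * otherwise refuse to change page attributes. */
static int check_module_range(unsigned long start, unsigned long end)
{
	if (start < MODULES_VADDR || start >= MODULES_END)
		return -EINVAL;
	if (end < MODULES_VADDR || end > MODULES_END)
		return -EINVAL;
	return 0;
}
```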
+14 -1
arch/arm/plat-omap/dmtimer.c
··· 799 799 struct device *dev = &pdev->dev; 800 800 const struct of_device_id *match; 801 801 const struct dmtimer_platform_data *pdata; 802 + int ret; 802 803 803 804 match = of_match_device(of_match_ptr(omap_timer_match), dev); 804 805 pdata = match ? match->data : dev->platform_data; ··· 861 860 } 862 861 863 862 if (!timer->reserved) { 864 - pm_runtime_get_sync(dev); 863 + ret = pm_runtime_get_sync(dev); 864 + if (ret < 0) { 865 + dev_err(dev, "%s: pm_runtime_get_sync failed!\n", 866 + __func__); 867 + goto err_get_sync; 868 + } 865 869 __omap_dm_timer_init_regs(timer); 866 870 pm_runtime_put(dev); 867 871 } ··· 879 873 dev_dbg(dev, "Device Probed.\n"); 880 874 881 875 return 0; 876 + 877 + err_get_sync: 878 + pm_runtime_put_noidle(dev); 879 + pm_runtime_disable(dev); 880 + return ret; 882 881 } 883 882 884 883 /** ··· 909 898 break; 910 899 } 911 900 spin_unlock_irqrestore(&dm_timer_lock, flags); 901 + 902 + pm_runtime_disable(&pdev->dev); 912 903 913 904 return ret; 914 905 }
+2 -2
arch/arm64/boot/dts/apm/apm-storm.dtsi
··· 622 622 }; 623 623 624 624 sgenet0: ethernet@1f210000 { 625 - compatible = "apm,xgene-enet"; 625 + compatible = "apm,xgene1-sgenet"; 626 626 status = "disabled"; 627 627 reg = <0x0 0x1f210000 0x0 0xd100>, 628 628 <0x0 0x1f200000 0x0 0Xc300>, ··· 636 636 }; 637 637 638 638 xgenet: ethernet@1f610000 { 639 - compatible = "apm,xgene-enet"; 639 + compatible = "apm,xgene1-xgenet"; 640 640 status = "disabled"; 641 641 reg = <0x0 0x1f610000 0x0 0xd100>, 642 642 <0x0 0x1f600000 0x0 0Xc300>,
+1 -1
arch/arm64/boot/dts/arm/juno-clocks.dtsi
··· 8 8 */ 9 9 10 10 /* SoC fixed clocks */ 11 - soc_uartclk: refclk72738khz { 11 + soc_uartclk: refclk7273800hz { 12 12 compatible = "fixed-clock"; 13 13 #clock-cells = <0>; 14 14 clock-frequency = <7273800>;
+23 -7
arch/arm64/include/asm/cmpxchg.h
··· 246 246 __ret; \ 247 247 }) 248 248 249 - #define this_cpu_cmpxchg_1(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n) 250 - #define this_cpu_cmpxchg_2(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n) 251 - #define this_cpu_cmpxchg_4(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n) 252 - #define this_cpu_cmpxchg_8(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n) 249 + #define _protect_cmpxchg_local(pcp, o, n) \ 250 + ({ \ 251 + typeof(*raw_cpu_ptr(&(pcp))) __ret; \ 252 + preempt_disable(); \ 253 + __ret = cmpxchg_local(raw_cpu_ptr(&(pcp)), o, n); \ 254 + preempt_enable(); \ 255 + __ret; \ 256 + }) 253 257 254 - #define this_cpu_cmpxchg_double_8(ptr1, ptr2, o1, o2, n1, n2) \ 255 - cmpxchg_double_local(raw_cpu_ptr(&(ptr1)), raw_cpu_ptr(&(ptr2)), \ 256 - o1, o2, n1, n2) 258 + #define this_cpu_cmpxchg_1(ptr, o, n) _protect_cmpxchg_local(ptr, o, n) 259 + #define this_cpu_cmpxchg_2(ptr, o, n) _protect_cmpxchg_local(ptr, o, n) 260 + #define this_cpu_cmpxchg_4(ptr, o, n) _protect_cmpxchg_local(ptr, o, n) 261 + #define this_cpu_cmpxchg_8(ptr, o, n) _protect_cmpxchg_local(ptr, o, n) 262 + 263 + #define this_cpu_cmpxchg_double_8(ptr1, ptr2, o1, o2, n1, n2) \ 264 + ({ \ 265 + int __ret; \ 266 + preempt_disable(); \ 267 + __ret = cmpxchg_double_local( raw_cpu_ptr(&(ptr1)), \ 268 + raw_cpu_ptr(&(ptr2)), \ 269 + o1, o2, n1, n2); \ 270 + preempt_enable(); \ 271 + __ret; \ 272 + }) 257 273 258 274 #define cmpxchg64(ptr,o,n) cmpxchg((ptr),(o),(n)) 259 275 #define cmpxchg64_local(ptr,o,n) cmpxchg_local((ptr),(o),(n))
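The arm64 change above wraps each this_cpu cmpxchg in preempt_disable()/preempt_enable(), so the task cannot migrate to another CPU between taking the per-CPU pointer and performing the operation. The shape of that wrapper can be sketched in userspace C, with a counter standing in for the preempt calls and a plain compare-and-swap standing in for `cmpxchg_local()`:

```c
#include <assert.h>

/* Counter standing in for preempt_disable()/preempt_enable(), so a test
 * can check the calls stay balanced. */
static int preempt_depth;
static void preempt_disable(void) { preempt_depth++; }
static void preempt_enable(void)  { preempt_depth--; }

/* Userspace stand-in for cmpxchg_local(): returns the previous value and
 * stores the new one only on a match, like the kernel primitive. */
static long cmpxchg_local(long *ptr, long old, long new_val)
{
	long cur = *ptr;
	if (cur == old)
		*ptr = new_val;
	return cur;
}

/* The pattern from the patch: disable preemption around the whole
 * read-modify-write. (Statement expressions are a GCC extension, as in
 * the kernel source.) */
#define protect_cmpxchg_local(ptr, o, n)	\
({						\
	long __ret;				\
	preempt_disable();			\
	__ret = cmpxchg_local(ptr, o, n);	\
	preempt_enable();			\
	__ret;					\
})
```

The statement-expression form lets the macro both run the protected sequence and yield the old value as its result, exactly like the `_protect_cmpxchg_local()` helper in the diff.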
+3 -2
arch/arm64/include/asm/kvm_arm.h
··· 129 129 * 40 bits wide (T0SZ = 24). Systems with a PARange smaller than 40 bits are 130 130 * not known to exist and will break with this configuration. 131 131 * 132 + * VTCR_EL2.PS is extracted from ID_AA64MMFR0_EL1.PARange at boot time 133 + * (see hyp-init.S). 134 + * 132 135 * Note that when using 4K pages, we concatenate two first level page tables 133 136 * together. 134 137 * ··· 141 138 #ifdef CONFIG_ARM64_64K_PAGES 142 139 /* 143 140 * Stage2 translation configuration: 144 - * 40bits output (PS = 2) 145 141 * 40bits input (T0SZ = 24) 146 142 * 64kB pages (TG0 = 1) 147 143 * 2 level page tables (SL = 1) ··· 152 150 #else 153 151 /* 154 152 * Stage2 translation configuration: 155 - * 40bits output (PS = 2) 156 153 * 40bits input (T0SZ = 24) 157 154 * 4kB pages (TG0 = 0) 158 155 * 3 level page tables (SL = 1)
+6 -42
arch/arm64/include/asm/kvm_mmu.h
··· 158 158 #define PTRS_PER_S2_PGD (1 << PTRS_PER_S2_PGD_SHIFT) 159 159 #define S2_PGD_ORDER get_order(PTRS_PER_S2_PGD * sizeof(pgd_t)) 160 160 161 + #define kvm_pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1)) 162 + 161 163 /* 162 164 * If we are concatenating first level stage-2 page tables, we would have less 163 165 * than or equal to 16 pointers in the fake PGD, because that's what the ··· 172 170 #else 173 171 #define KVM_PREALLOC_LEVEL (0) 174 172 #endif 175 - 176 - /** 177 - * kvm_prealloc_hwpgd - allocate inital table for VTTBR 178 - * @kvm: The KVM struct pointer for the VM. 179 - * @pgd: The kernel pseudo pgd 180 - * 181 - * When the kernel uses more levels of page tables than the guest, we allocate 182 - * a fake PGD and pre-populate it to point to the next-level page table, which 183 - * will be the real initial page table pointed to by the VTTBR. 184 - * 185 - * When KVM_PREALLOC_LEVEL==2, we allocate a single page for the PMD and 186 - * the kernel will use folded pud. When KVM_PREALLOC_LEVEL==1, we 187 - * allocate 2 consecutive PUD pages. 
188 - */ 189 - static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd) 190 - { 191 - unsigned int i; 192 - unsigned long hwpgd; 193 - 194 - if (KVM_PREALLOC_LEVEL == 0) 195 - return 0; 196 - 197 - hwpgd = __get_free_pages(GFP_KERNEL | __GFP_ZERO, PTRS_PER_S2_PGD_SHIFT); 198 - if (!hwpgd) 199 - return -ENOMEM; 200 - 201 - for (i = 0; i < PTRS_PER_S2_PGD; i++) { 202 - if (KVM_PREALLOC_LEVEL == 1) 203 - pgd_populate(NULL, pgd + i, 204 - (pud_t *)hwpgd + i * PTRS_PER_PUD); 205 - else if (KVM_PREALLOC_LEVEL == 2) 206 - pud_populate(NULL, pud_offset(pgd, 0) + i, 207 - (pmd_t *)hwpgd + i * PTRS_PER_PMD); 208 - } 209 - 210 - return 0; 211 - } 212 173 213 174 static inline void *kvm_get_hwpgd(struct kvm *kvm) 214 175 { ··· 189 224 return pmd_offset(pud, 0); 190 225 } 191 226 192 - static inline void kvm_free_hwpgd(struct kvm *kvm) 227 + static inline unsigned int kvm_get_hwpgd_size(void) 193 228 { 194 - if (KVM_PREALLOC_LEVEL > 0) { 195 - unsigned long hwpgd = (unsigned long)kvm_get_hwpgd(kvm); 196 - free_pages(hwpgd, PTRS_PER_S2_PGD_SHIFT); 197 - } 229 + if (KVM_PREALLOC_LEVEL > 0) 230 + return PTRS_PER_S2_PGD * PAGE_SIZE; 231 + return PTRS_PER_S2_PGD * sizeof(pgd_t); 198 232 } 199 233 200 234 static inline bool kvm_page_empty(void *ptr)
+9
arch/arm64/include/asm/mmu_context.h
··· 151 151 { 152 152 unsigned int cpu = smp_processor_id(); 153 153 154 + /* 155 + * init_mm.pgd does not contain any user mappings and it is always 156 + * active for kernel addresses in TTBR1. Just set the reserved TTBR0. 157 + */ 158 + if (next == &init_mm) { 159 + cpu_set_reserved_ttbr0(); 160 + return; 161 + } 162 + 154 163 if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next) 155 164 check_and_switch_context(next, tsk); 156 165 }
+34 -12
arch/arm64/include/asm/percpu.h
··· 204 204 return ret; 205 205 } 206 206 207 - #define _percpu_add(pcp, val) \ 208 - __percpu_add(raw_cpu_ptr(&(pcp)), val, sizeof(pcp)) 207 + #define _percpu_read(pcp) \ 208 + ({ \ 209 + typeof(pcp) __retval; \ 210 + preempt_disable(); \ 211 + __retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)), \ 212 + sizeof(pcp)); \ 213 + preempt_enable(); \ 214 + __retval; \ 215 + }) 209 216 210 - #define _percpu_add_return(pcp, val) (typeof(pcp)) (_percpu_add(pcp, val)) 217 + #define _percpu_write(pcp, val) \ 218 + do { \ 219 + preempt_disable(); \ 220 + __percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), \ 221 + sizeof(pcp)); \ 222 + preempt_enable(); \ 223 + } while(0) \ 224 + 225 + #define _pcp_protect(operation, pcp, val) \ 226 + ({ \ 227 + typeof(pcp) __retval; \ 228 + preempt_disable(); \ 229 + __retval = (typeof(pcp))operation(raw_cpu_ptr(&(pcp)), \ 230 + (val), sizeof(pcp)); \ 231 + preempt_enable(); \ 232 + __retval; \ 233 + }) 234 + 235 + #define _percpu_add(pcp, val) \ 236 + _pcp_protect(__percpu_add, pcp, val) 237 + 238 + #define _percpu_add_return(pcp, val) _percpu_add(pcp, val) 211 239 212 240 #define _percpu_and(pcp, val) \ 213 - __percpu_and(raw_cpu_ptr(&(pcp)), val, sizeof(pcp)) 241 + _pcp_protect(__percpu_and, pcp, val) 214 242 215 243 #define _percpu_or(pcp, val) \ 216 - __percpu_or(raw_cpu_ptr(&(pcp)), val, sizeof(pcp)) 217 - 218 - #define _percpu_read(pcp) (typeof(pcp)) \ 219 - (__percpu_read(raw_cpu_ptr(&(pcp)), sizeof(pcp))) 220 - 221 - #define _percpu_write(pcp, val) \ 222 - __percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), sizeof(pcp)) 244 + _pcp_protect(__percpu_or, pcp, val) 223 245 224 246 #define _percpu_xchg(pcp, val) (typeof(pcp)) \ 225 - (__percpu_xchg(raw_cpu_ptr(&(pcp)), (unsigned long)(val), sizeof(pcp))) 247 + _pcp_protect(__percpu_xchg, pcp, (unsigned long)(val)) 226 248 227 249 #define this_cpu_add_1(pcp, val) _percpu_add(pcp, val) 228 250 #define this_cpu_add_2(pcp, val) _percpu_add(pcp, val)
+5 -1
arch/arm64/include/asm/proc-fns.h
··· 39 39 40 40 #include <asm/memory.h> 41 41 42 - #define cpu_switch_mm(pgd,mm) cpu_do_switch_mm(virt_to_phys(pgd),mm) 42 + #define cpu_switch_mm(pgd,mm) \ 43 + do { \ 44 + BUG_ON(pgd == swapper_pg_dir); \ 45 + cpu_do_switch_mm(virt_to_phys(pgd),mm); \ 46 + } while (0) 43 47 44 48 #define cpu_get_pgd() \ 45 49 ({ \
+3
arch/arm64/include/asm/tlb.h
··· 48 48 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, 49 49 unsigned long addr) 50 50 { 51 + __flush_tlb_pgtable(tlb->mm, addr); 51 52 pgtable_page_dtor(pte); 52 53 tlb_remove_entry(tlb, pte); 53 54 } ··· 57 56 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, 58 57 unsigned long addr) 59 58 { 59 + __flush_tlb_pgtable(tlb->mm, addr); 60 60 tlb_remove_entry(tlb, virt_to_page(pmdp)); 61 61 } 62 62 #endif ··· 66 64 static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, 67 65 unsigned long addr) 68 66 { 67 + __flush_tlb_pgtable(tlb->mm, addr); 69 68 tlb_remove_entry(tlb, virt_to_page(pudp)); 70 69 } 71 70 #endif
+13
arch/arm64/include/asm/tlbflush.h
··· 144 144 } 145 145 146 146 /* 147 + * Used to invalidate the TLB (walk caches) corresponding to intermediate page 148 + * table levels (pgd/pud/pmd). 149 + */ 150 + static inline void __flush_tlb_pgtable(struct mm_struct *mm, 151 + unsigned long uaddr) 152 + { 153 + unsigned long addr = uaddr >> 12 | ((unsigned long)ASID(mm) << 48); 154 + 155 + dsb(ishst); 156 + asm("tlbi vae1is, %0" : : "r" (addr)); 157 + dsb(ish); 158 + } 159 + /* 147 160 * On AArch64, the cache coherency is handled via the set_pte_at() function. 148 161 */ 149 162 static inline void update_mmu_cache(struct vm_area_struct *vma,
+14 -1
arch/arm64/kernel/efi.c
··· 337 337 338 338 static void efi_set_pgd(struct mm_struct *mm) 339 339 { 340 - cpu_switch_mm(mm->pgd, mm); 340 + if (mm == &init_mm) 341 + cpu_set_reserved_ttbr0(); 342 + else 343 + cpu_switch_mm(mm->pgd, mm); 344 + 341 345 flush_tlb_all(); 342 346 if (icache_is_aivivt()) 343 347 __flush_icache_all(); ··· 357 353 { 358 354 efi_set_pgd(current->active_mm); 359 355 preempt_enable(); 356 + } 357 + 358 + /* 359 + * UpdateCapsule() depends on the system being shutdown via 360 + * ResetSystem(). 361 + */ 362 + bool efi_poweroff_required(void) 363 + { 364 + return efi_enabled(EFI_RUNTIME_SERVICES); 360 365 }
+1 -1
arch/arm64/kernel/head.S
··· 585 585 * zeroing of .bss would clobber it. 586 586 */ 587 587 .pushsection .data..cacheline_aligned 588 - ENTRY(__boot_cpu_mode) 589 588 .align L1_CACHE_SHIFT 589 + ENTRY(__boot_cpu_mode) 590 590 .long BOOT_CPU_MODE_EL2 591 591 .long 0 592 592 .popsection
+8
arch/arm64/kernel/process.c
··· 21 21 #include <stdarg.h> 22 22 23 23 #include <linux/compat.h> 24 + #include <linux/efi.h> 24 25 #include <linux/export.h> 25 26 #include <linux/sched.h> 26 27 #include <linux/kernel.h> ··· 150 149 /* Disable interrupts first */ 151 150 local_irq_disable(); 152 151 smp_send_stop(); 152 + 153 + /* 154 + * UpdateCapsule() depends on the system being reset via 155 + * ResetSystem(). 156 + */ 157 + if (efi_enabled(EFI_RUNTIME_SERVICES)) 158 + efi_reboot(reboot_mode, NULL); 153 159 154 160 /* Now call the architecture specific reboot code. */ 155 161 if (arm_pm_restart)
+9 -3
arch/arm64/mm/dma-mapping.c
··· 51 51 } 52 52 early_param("coherent_pool", early_coherent_pool); 53 53 54 - static void *__alloc_from_pool(size_t size, struct page **ret_page) 54 + static void *__alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags) 55 55 { 56 56 unsigned long val; 57 57 void *ptr = NULL; ··· 67 67 68 68 *ret_page = phys_to_page(phys); 69 69 ptr = (void *)val; 70 + if (flags & __GFP_ZERO) 71 + memset(ptr, 0, size); 70 72 } 71 73 72 74 return ptr; ··· 103 101 flags |= GFP_DMA; 104 102 if (IS_ENABLED(CONFIG_DMA_CMA) && (flags & __GFP_WAIT)) { 105 103 struct page *page; 104 + void *addr; 106 105 107 106 size = PAGE_ALIGN(size); 108 107 page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT, ··· 112 109 return NULL; 113 110 114 111 *dma_handle = phys_to_dma(dev, page_to_phys(page)); 115 - return page_address(page); 112 + addr = page_address(page); 113 + if (flags & __GFP_ZERO) 114 + memset(addr, 0, size); 115 + return addr; 116 116 } else { 117 117 return swiotlb_alloc_coherent(dev, size, dma_handle, flags); 118 118 } ··· 152 146 153 147 if (!coherent && !(flags & __GFP_WAIT)) { 154 148 struct page *page = NULL; 155 - void *addr = __alloc_from_pool(size, &page); 149 + void *addr = __alloc_from_pool(size, &page, flags); 156 150 157 151 if (addr) 158 152 *dma_handle = phys_to_dma(dev, page_to_phys(page));
+4 -1
arch/arm64/mm/pageattr.c
··· 51 51 WARN_ON_ONCE(1); 52 52 } 53 53 54 - if (!is_module_address(start) || !is_module_address(end - 1)) 54 + if (start < MODULES_VADDR || start >= MODULES_END) 55 + return -EINVAL; 56 + 57 + if (end < MODULES_VADDR || end >= MODULES_END) 55 58 return -EINVAL; 56 59 57 60 data.set_mask = set_mask;
+5
arch/c6x/include/asm/pgtable.h
··· 67 67 */ 68 68 #define pgtable_cache_init() do { } while (0) 69 69 70 + /* 71 + * c6x is !MMU, so define the simpliest implementation 72 + */ 73 + #define pgprot_writecombine pgprot_noncached 74 + 70 75 #include <asm-generic/pgtable.h> 71 76 72 77 #endif /* _ASM_C6X_PGTABLE_H */
+1
arch/metag/include/asm/io.h
··· 2 2 #define _ASM_METAG_IO_H 3 3 4 4 #include <linux/types.h> 5 + #include <asm/pgtable-bits.h> 5 6 6 7 #define IO_SPACE_LIMIT 0 7 8
+104
arch/metag/include/asm/pgtable-bits.h
··· 1 + /* 2 + * Meta page table definitions. 3 + */ 4 + 5 + #ifndef _METAG_PGTABLE_BITS_H 6 + #define _METAG_PGTABLE_BITS_H 7 + 8 + #include <asm/metag_mem.h> 9 + 10 + /* 11 + * Definitions for MMU descriptors 12 + * 13 + * These are the hardware bits in the MMCU pte entries. 14 + * Derived from the Meta toolkit headers. 15 + */ 16 + #define _PAGE_PRESENT MMCU_ENTRY_VAL_BIT 17 + #define _PAGE_WRITE MMCU_ENTRY_WR_BIT 18 + #define _PAGE_PRIV MMCU_ENTRY_PRIV_BIT 19 + /* Write combine bit - this can cause writes to occur out of order */ 20 + #define _PAGE_WR_COMBINE MMCU_ENTRY_WRC_BIT 21 + /* Sys coherent bit - this bit is never used by Linux */ 22 + #define _PAGE_SYS_COHERENT MMCU_ENTRY_SYS_BIT 23 + #define _PAGE_ALWAYS_ZERO_1 0x020 24 + #define _PAGE_CACHE_CTRL0 0x040 25 + #define _PAGE_CACHE_CTRL1 0x080 26 + #define _PAGE_ALWAYS_ZERO_2 0x100 27 + #define _PAGE_ALWAYS_ZERO_3 0x200 28 + #define _PAGE_ALWAYS_ZERO_4 0x400 29 + #define _PAGE_ALWAYS_ZERO_5 0x800 30 + 31 + /* These are software bits that we stuff into the gaps in the hardware 32 + * pte entries that are not used. Note, these DO get stored in the actual 33 + * hardware, but the hardware just does not use them. 34 + */ 35 + #define _PAGE_ACCESSED _PAGE_ALWAYS_ZERO_1 36 + #define _PAGE_DIRTY _PAGE_ALWAYS_ZERO_2 37 + 38 + /* Pages owned, and protected by, the kernel. */ 39 + #define _PAGE_KERNEL _PAGE_PRIV 40 + 41 + /* No cacheing of this page */ 42 + #define _PAGE_CACHE_WIN0 (MMCU_CWIN_UNCACHED << MMCU_ENTRY_CWIN_S) 43 + /* burst cacheing - good for data streaming */ 44 + #define _PAGE_CACHE_WIN1 (MMCU_CWIN_BURST << MMCU_ENTRY_CWIN_S) 45 + /* One cache way per thread */ 46 + #define _PAGE_CACHE_WIN2 (MMCU_CWIN_C1SET << MMCU_ENTRY_CWIN_S) 47 + /* Full on cacheing */ 48 + #define _PAGE_CACHE_WIN3 (MMCU_CWIN_CACHED << MMCU_ENTRY_CWIN_S) 49 + 50 + #define _PAGE_CACHEABLE (_PAGE_CACHE_WIN3 | _PAGE_WR_COMBINE) 51 + 52 + /* which bits are used for cache control ... */
53 + #define _PAGE_CACHE_MASK (_PAGE_CACHE_CTRL0 | _PAGE_CACHE_CTRL1 | \ 54 + _PAGE_WR_COMBINE) 55 + 56 + /* This is a mask of the bits that pte_modify is allowed to change. */ 57 + #define _PAGE_CHG_MASK (PAGE_MASK) 58 + 59 + #define _PAGE_SZ_SHIFT 1 60 + #define _PAGE_SZ_4K (0x0) 61 + #define _PAGE_SZ_8K (0x1 << _PAGE_SZ_SHIFT) 62 + #define _PAGE_SZ_16K (0x2 << _PAGE_SZ_SHIFT) 63 + #define _PAGE_SZ_32K (0x3 << _PAGE_SZ_SHIFT) 64 + #define _PAGE_SZ_64K (0x4 << _PAGE_SZ_SHIFT) 65 + #define _PAGE_SZ_128K (0x5 << _PAGE_SZ_SHIFT) 66 + #define _PAGE_SZ_256K (0x6 << _PAGE_SZ_SHIFT) 67 + #define _PAGE_SZ_512K (0x7 << _PAGE_SZ_SHIFT) 68 + #define _PAGE_SZ_1M (0x8 << _PAGE_SZ_SHIFT) 69 + #define _PAGE_SZ_2M (0x9 << _PAGE_SZ_SHIFT) 70 + #define _PAGE_SZ_4M (0xa << _PAGE_SZ_SHIFT) 71 + #define _PAGE_SZ_MASK (0xf << _PAGE_SZ_SHIFT) 72 + 73 + #if defined(CONFIG_PAGE_SIZE_4K) 74 + #define _PAGE_SZ (_PAGE_SZ_4K) 75 + #elif defined(CONFIG_PAGE_SIZE_8K) 76 + #define _PAGE_SZ (_PAGE_SZ_8K) 77 + #elif defined(CONFIG_PAGE_SIZE_16K) 78 + #define _PAGE_SZ (_PAGE_SZ_16K) 79 + #endif 80 + #define _PAGE_TABLE (_PAGE_SZ | _PAGE_PRESENT) 81 + 82 + #if defined(CONFIG_HUGETLB_PAGE_SIZE_8K) 83 + # define _PAGE_SZHUGE (_PAGE_SZ_8K) 84 + #elif defined(CONFIG_HUGETLB_PAGE_SIZE_16K) 85 + # define _PAGE_SZHUGE (_PAGE_SZ_16K) 86 + #elif defined(CONFIG_HUGETLB_PAGE_SIZE_32K) 87 + # define _PAGE_SZHUGE (_PAGE_SZ_32K) 88 + #elif defined(CONFIG_HUGETLB_PAGE_SIZE_64K) 89 + # define _PAGE_SZHUGE (_PAGE_SZ_64K) 90 + #elif defined(CONFIG_HUGETLB_PAGE_SIZE_128K) 91 + # define _PAGE_SZHUGE (_PAGE_SZ_128K) 92 + #elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K) 93 + # define _PAGE_SZHUGE (_PAGE_SZ_256K) 94 + #elif defined(CONFIG_HUGETLB_PAGE_SIZE_512K) 95 + # define _PAGE_SZHUGE (_PAGE_SZ_512K) 96 + #elif defined(CONFIG_HUGETLB_PAGE_SIZE_1M) 97 + # define _PAGE_SZHUGE (_PAGE_SZ_1M) 98 + #elif defined(CONFIG_HUGETLB_PAGE_SIZE_2M) 99 + # define _PAGE_SZHUGE (_PAGE_SZ_2M)
100 + #elif defined(CONFIG_HUGETLB_PAGE_SIZE_4M) 101 + # define _PAGE_SZHUGE (_PAGE_SZ_4M) 102 + #endif 103 + 104 + #endif /* _METAG_PGTABLE_BITS_H */
+1 -94
arch/metag/include/asm/pgtable.h
··· 5 5 #ifndef _METAG_PGTABLE_H 6 6 #define _METAG_PGTABLE_H 7 7 8 + #include <asm/pgtable-bits.h> 8 9 #include <asm-generic/pgtable-nopmd.h> 9 10 10 11 /* Invalid regions on Meta: 0x00000000-0x001FFFFF and 0xFFFF0000-0xFFFFFFFF */ ··· 19 18 #define CONSISTENT_END 0x773FFFFF 20 19 #define VMALLOC_START 0x78000000 21 20 #define VMALLOC_END 0x7FFFFFFF 22 - #endif 23 - 24 - /* 25 - * Definitions for MMU descriptors 26 - * 27 - * These are the hardware bits in the MMCU pte entries. 28 - * Derived from the Meta toolkit headers. 29 - */ 30 - #define _PAGE_PRESENT MMCU_ENTRY_VAL_BIT 31 - #define _PAGE_WRITE MMCU_ENTRY_WR_BIT 32 - #define _PAGE_PRIV MMCU_ENTRY_PRIV_BIT 33 - /* Write combine bit - this can cause writes to occur out of order */ 34 - #define _PAGE_WR_COMBINE MMCU_ENTRY_WRC_BIT 35 - /* Sys coherent bit - this bit is never used by Linux */ 36 - #define _PAGE_SYS_COHERENT MMCU_ENTRY_SYS_BIT 37 - #define _PAGE_ALWAYS_ZERO_1 0x020 38 - #define _PAGE_CACHE_CTRL0 0x040 39 - #define _PAGE_CACHE_CTRL1 0x080 40 - #define _PAGE_ALWAYS_ZERO_2 0x100 41 - #define _PAGE_ALWAYS_ZERO_3 0x200 42 - #define _PAGE_ALWAYS_ZERO_4 0x400 43 - #define _PAGE_ALWAYS_ZERO_5 0x800 44 - 45 - /* These are software bits that we stuff into the gaps in the hardware 46 - * pte entries that are not used. Note, these DO get stored in the actual 47 - * hardware, but the hardware just does not use them. 48 - */ 49 - #define _PAGE_ACCESSED _PAGE_ALWAYS_ZERO_1 50 - #define _PAGE_DIRTY _PAGE_ALWAYS_ZERO_2 51 - 52 - /* Pages owned, and protected by, the kernel. */
53 - #define _PAGE_KERNEL _PAGE_PRIV 54 - 55 - /* No cacheing of this page */ 56 - #define _PAGE_CACHE_WIN0 (MMCU_CWIN_UNCACHED << MMCU_ENTRY_CWIN_S) 57 - /* burst cacheing - good for data streaming */ 58 - #define _PAGE_CACHE_WIN1 (MMCU_CWIN_BURST << MMCU_ENTRY_CWIN_S) 59 - /* One cache way per thread */ 60 - #define _PAGE_CACHE_WIN2 (MMCU_CWIN_C1SET << MMCU_ENTRY_CWIN_S) 61 - /* Full on cacheing */ 62 - #define _PAGE_CACHE_WIN3 (MMCU_CWIN_CACHED << MMCU_ENTRY_CWIN_S) 63 - 64 - #define _PAGE_CACHEABLE (_PAGE_CACHE_WIN3 | _PAGE_WR_COMBINE) 65 - 66 - /* which bits are used for cache control ... */ 67 - #define _PAGE_CACHE_MASK (_PAGE_CACHE_CTRL0 | _PAGE_CACHE_CTRL1 | \ 68 - _PAGE_WR_COMBINE) 69 - 70 - /* This is a mask of the bits that pte_modify is allowed to change. */ 71 - #define _PAGE_CHG_MASK (PAGE_MASK) 72 - 73 - #define _PAGE_SZ_SHIFT 1 74 - #define _PAGE_SZ_4K (0x0) 75 - #define _PAGE_SZ_8K (0x1 << _PAGE_SZ_SHIFT) 76 - #define _PAGE_SZ_16K (0x2 << _PAGE_SZ_SHIFT) 77 - #define _PAGE_SZ_32K (0x3 << _PAGE_SZ_SHIFT) 78 - #define _PAGE_SZ_64K (0x4 << _PAGE_SZ_SHIFT) 79 - #define _PAGE_SZ_128K (0x5 << _PAGE_SZ_SHIFT) 80 - #define _PAGE_SZ_256K (0x6 << _PAGE_SZ_SHIFT) 81 - #define _PAGE_SZ_512K (0x7 << _PAGE_SZ_SHIFT) 82 - #define _PAGE_SZ_1M (0x8 << _PAGE_SZ_SHIFT) 83 - #define _PAGE_SZ_2M (0x9 << _PAGE_SZ_SHIFT) 84 - #define _PAGE_SZ_4M (0xa << _PAGE_SZ_SHIFT) 85 - #define _PAGE_SZ_MASK (0xf << _PAGE_SZ_SHIFT) 86 - 87 - #if defined(CONFIG_PAGE_SIZE_4K) 88 - #define _PAGE_SZ (_PAGE_SZ_4K) 89 - #elif defined(CONFIG_PAGE_SIZE_8K) 90 - #define _PAGE_SZ (_PAGE_SZ_8K) 91 - #elif defined(CONFIG_PAGE_SIZE_16K) 92 - #define _PAGE_SZ (_PAGE_SZ_16K) 93 - #endif 94 - #define _PAGE_TABLE (_PAGE_SZ | _PAGE_PRESENT) 95 - 96 - #if defined(CONFIG_HUGETLB_PAGE_SIZE_8K) 97 - # define _PAGE_SZHUGE (_PAGE_SZ_8K) 98 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_16K) 99 - # define _PAGE_SZHUGE (_PAGE_SZ_16K) 100 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_32K) 101 - # define _PAGE_SZHUGE (_PAGE_SZ_32K)
102 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_64K) 103 - # define _PAGE_SZHUGE (_PAGE_SZ_64K) 104 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_128K) 105 - # define _PAGE_SZHUGE (_PAGE_SZ_128K) 106 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K) 107 - # define _PAGE_SZHUGE (_PAGE_SZ_256K) 108 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_512K) 109 - # define _PAGE_SZHUGE (_PAGE_SZ_512K) 110 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_1M) 111 - # define _PAGE_SZHUGE (_PAGE_SZ_1M) 112 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_2M) 113 - # define _PAGE_SZHUGE (_PAGE_SZ_2M) 114 - #elif defined(CONFIG_HUGETLB_PAGE_SIZE_4M) 115 - # define _PAGE_SZHUGE (_PAGE_SZ_4M) 116 21 #endif 117 22 118 23 /*
+4 -3
arch/microblaze/kernel/entry.S
··· 348 348 * The LP register should point to the location where the called function 349 349 * should return. [note that MAKE_SYS_CALL uses label 1] */ 350 350 /* See if the system call number is valid */ 351 + blti r12, 5f 351 352 addi r11, r12, -__NR_syscalls; 352 - bgei r11,5f; 353 + bgei r11, 5f; 353 354 /* Figure out which function to use for this system call. */ 354 355 /* Note Microblaze barrel shift is optional, so don't rely on it */ 355 356 add r12, r12, r12; /* convert num -> ptr */ ··· 376 375 377 376 /* The syscall number is invalid, return an error. */ 378 377 5: 379 - rtsd r15, 8; /* looks like a normal subroutine return */ 378 + braid ret_from_trap 380 379 addi r3, r0, -ENOSYS; 381 380 382 381 /* Entry point used to return from a syscall/trap */ ··· 412 411 bri 1b 413 412 414 413 /* Maybe handle a signal */ 415 - 5: 414 + 5: 416 415 andi r11, r19, _TIF_SIGPENDING | _TIF_NOTIFY_RESUME; 417 416 beqi r11, 4f; /* Signals to handle, handle them */ 418 417
+1
arch/mips/kvm/tlb.c
··· 216 216 if (idx > current_cpu_data.tlbsize) { 217 217 kvm_err("%s: Invalid Index: %d\n", __func__, idx); 218 218 kvm_mips_dump_host_tlbs(); 219 + local_irq_restore(flags); 219 220 return -1; 220 221 } 221 222
+3 -3
arch/mips/kvm/trace.h
··· 24 24 TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), 25 25 TP_ARGS(vcpu, reason), 26 26 TP_STRUCT__entry( 27 - __field(struct kvm_vcpu *, vcpu) 27 + __field(unsigned long, pc) 28 28 __field(unsigned int, reason) 29 29 ), 30 30 31 31 TP_fast_assign( 32 - __entry->vcpu = vcpu; 32 + __entry->pc = vcpu->arch.pc; 33 33 __entry->reason = reason; 34 34 ), 35 35 36 36 TP_printk("[%s]PC: 0x%08lx", 37 37 kvm_mips_exit_types_str[__entry->reason], 38 - __entry->vcpu->arch.pc) 38 + __entry->pc) 39 39 ); 40 40 41 41 #endif /* _TRACE_KVM_H */
+47
arch/nios2/include/asm/ptrace.h
··· 15 15 16 16 #include <uapi/asm/ptrace.h> 17 17 18 + /* This struct defines the way the registers are stored on the 19 + stack during a system call. */ 20 + 18 21 #ifndef __ASSEMBLY__ 22 + struct pt_regs { 23 + unsigned long r8; /* r8-r15 Caller-saved GP registers */ 24 + unsigned long r9; 25 + unsigned long r10; 26 + unsigned long r11; 27 + unsigned long r12; 28 + unsigned long r13; 29 + unsigned long r14; 30 + unsigned long r15; 31 + unsigned long r1; /* Assembler temporary */ 32 + unsigned long r2; /* Retval LS 32bits */ 33 + unsigned long r3; /* Retval MS 32bits */ 34 + unsigned long r4; /* r4-r7 Register arguments */ 35 + unsigned long r5; 36 + unsigned long r6; 37 + unsigned long r7; 38 + unsigned long orig_r2; /* Copy of r2 ?? */ 39 + unsigned long ra; /* Return address */ 40 + unsigned long fp; /* Frame pointer */ 41 + unsigned long sp; /* Stack pointer */ 42 + unsigned long gp; /* Global pointer */ 43 + unsigned long estatus; 44 + unsigned long ea; /* Exception return address (pc) */ 45 + unsigned long orig_r7; 46 + }; 47 + 48 + /* 49 + * This is the extended stack used by signal handlers and the context 50 + * switcher: it's pushed after the normal "struct pt_regs". 51 + */ 52 + struct switch_stack { 53 + unsigned long r16; /* r16-r23 Callee-saved GP registers */ 54 + unsigned long r17; 55 + unsigned long r18; 56 + unsigned long r19; 57 + unsigned long r20; 58 + unsigned long r21; 59 + unsigned long r22; 60 + unsigned long r23; 61 + unsigned long fp; 62 + unsigned long gp; 63 + unsigned long ra; 64 + }; 65 + 19 66 #define user_mode(regs) (((regs)->estatus & ESTATUS_EU)) 20 67 21 68 #define instruction_pointer(regs) ((regs)->ra)
-32
arch/nios2/include/asm/ucontext.h
··· 1 - /* 2 - * Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch> 3 - * Copyright (C) 2004 Microtronix Datacom Ltd 4 - * 5 - * This file is subject to the terms and conditions of the GNU General Public 6 - * License. See the file "COPYING" in the main directory of this archive 7 - * for more details. 8 - */ 9 - 10 - #ifndef _ASM_NIOS2_UCONTEXT_H 11 - #define _ASM_NIOS2_UCONTEXT_H 12 - 13 - typedef int greg_t; 14 - #define NGREG 32 15 - typedef greg_t gregset_t[NGREG]; 16 - 17 - struct mcontext { 18 - int version; 19 - gregset_t gregs; 20 - }; 21 - 22 - #define MCONTEXT_VERSION 2 23 - 24 - struct ucontext { 25 - unsigned long uc_flags; 26 - struct ucontext *uc_link; 27 - stack_t uc_stack; 28 - struct mcontext uc_mcontext; 29 - sigset_t uc_sigmask; /* mask last for extensibility */ 30 - }; 31 - 32 - #endif
+2 -1
arch/nios2/include/uapi/asm/Kbuild
··· 1 1 include include/uapi/asm-generic/Kbuild.asm 2 2 3 3 header-y += elf.h 4 - header-y += ucontext.h 4 + 5 + generic-y += ucontext.h
+1 -3
arch/nios2/include/uapi/asm/elf.h
··· 50 50 51 51 typedef unsigned long elf_greg_t; 52 52 53 - #define ELF_NGREG \ 54 - ((sizeof(struct pt_regs) + sizeof(struct switch_stack)) / \ 55 - sizeof(elf_greg_t)) 53 + #define ELF_NGREG 49 56 54 typedef elf_greg_t elf_gregset_t[ELF_NGREG]; 57 55 58 56 typedef unsigned long elf_fpregset_t;
+3 -47
arch/nios2/include/uapi/asm/ptrace.h
··· 67 67 68 68 #define NUM_PTRACE_REG (PTR_TLBMISC + 1) 69 69 70 - /* this struct defines the way the registers are stored on the 71 - stack during a system call. 72 - 73 - There is a fake_regs in setup.c that has to match pt_regs.*/ 74 - 75 - struct pt_regs { 76 - unsigned long r8; /* r8-r15 Caller-saved GP registers */ 77 - unsigned long r9; 78 - unsigned long r10; 79 - unsigned long r11; 80 - unsigned long r12; 81 - unsigned long r13; 82 - unsigned long r14; 83 - unsigned long r15; 84 - unsigned long r1; /* Assembler temporary */ 85 - unsigned long r2; /* Retval LS 32bits */ 86 - unsigned long r3; /* Retval MS 32bits */ 87 - unsigned long r4; /* r4-r7 Register arguments */ 88 - unsigned long r5; 89 - unsigned long r6; 90 - unsigned long r7; 91 - unsigned long orig_r2; /* Copy of r2 ?? */ 92 - unsigned long ra; /* Return address */ 93 - unsigned long fp; /* Frame pointer */ 94 - unsigned long sp; /* Stack pointer */ 95 - unsigned long gp; /* Global pointer */ 96 - unsigned long estatus; 97 - unsigned long ea; /* Exception return address (pc) */ 98 - unsigned long orig_r7; 99 - }; 100 - 101 - /* 102 - * This is the extended stack used by signal handlers and the context 103 - * switcher: it's pushed after the normal "struct pt_regs". 104 - */ 105 - struct switch_stack { 106 - unsigned long r16; /* r16-r23 Callee-saved GP registers */ 107 - unsigned long r17; 108 - unsigned long r18; 109 - unsigned long r19; 110 - unsigned long r20; 111 - unsigned long r21; 112 - unsigned long r22; 113 - unsigned long r23; 114 - unsigned long fp; 115 - unsigned long gp; 116 - unsigned long ra; 70 + /* User structures for general purpose registers. */ 71 + struct user_pt_regs { 72 + __u32 regs[49]; 117 73 }; 118 74 119 75 #endif /* __ASSEMBLY__ */
+7 -5
arch/nios2/include/uapi/asm/sigcontext.h
··· 15 15 * details. 16 16 */ 17 17 18 - #ifndef _ASM_NIOS2_SIGCONTEXT_H 19 - #define _ASM_NIOS2_SIGCONTEXT_H 18 + #ifndef _UAPI__ASM_SIGCONTEXT_H 19 + #define _UAPI__ASM_SIGCONTEXT_H 20 20 21 - #include <asm/ptrace.h> 21 + #include <linux/types.h> 22 + 23 + #define MCONTEXT_VERSION 2 22 24 23 25 struct sigcontext { 24 - struct pt_regs regs; 25 - unsigned long sc_mask; /* old sigmask */ 26 + int version; 27 + unsigned long gregs[32]; 26 28 }; 27 29 28 30 #endif
+2 -2
arch/nios2/kernel/signal.c
··· 39 39 struct ucontext *uc, int *pr2) 40 40 { 41 41 int temp; 42 - greg_t *gregs = uc->uc_mcontext.gregs; 42 + unsigned long *gregs = uc->uc_mcontext.gregs; 43 43 int err; 44 44 45 45 /* Always make any pending restarted system calls return -EINTR */ ··· 127 127 static inline int rt_setup_ucontext(struct ucontext *uc, struct pt_regs *regs) 128 128 { 129 129 struct switch_stack *sw = (struct switch_stack *)regs - 1; 130 - greg_t *gregs = uc->uc_mcontext.gregs; 130 + unsigned long *gregs = uc->uc_mcontext.gregs; 131 131 int err = 0; 132 132 133 133 err |= __put_user(MCONTEXT_VERSION, &uc->uc_mcontext.version);
-6
arch/nios2/mm/fault.c
··· 126 126 break; 127 127 } 128 128 129 - survive: 130 129 /* 131 130 * If for any reason at all we couldn't handle the fault, 132 131 * make sure we exit gracefully rather than endlessly redo ··· 219 220 */ 220 221 out_of_memory: 221 222 up_read(&mm->mmap_sem); 222 - if (is_global_init(tsk)) { 223 - yield(); 224 - down_read(&mm->mmap_sem); 225 - goto survive; 226 - } 227 223 if (!user_mode(regs)) 228 224 goto no_context; 229 225 pagefault_out_of_memory();
+10 -7
arch/parisc/include/asm/pgalloc.h
··· 26 26 27 27 if (likely(pgd != NULL)) { 28 28 memset(pgd, 0, PAGE_SIZE<<PGD_ALLOC_ORDER); 29 - #ifdef CONFIG_64BIT 29 + #if PT_NLEVELS == 3 30 30 actual_pgd += PTRS_PER_PGD; 31 31 /* Populate first pmd with allocated memory. We mark it 32 32 * with PxD_FLAG_ATTACHED as a signal to the system that this ··· 45 45 46 46 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) 47 47 { 48 - #ifdef CONFIG_64BIT 48 + #if PT_NLEVELS == 3 49 49 pgd -= PTRS_PER_PGD; 50 50 #endif 51 51 free_pages((unsigned long)pgd, PGD_ALLOC_ORDER); ··· 72 72 73 73 static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) 74 74 { 75 - #ifdef CONFIG_64BIT 76 75 if(pmd_flag(*pmd) & PxD_FLAG_ATTACHED) 77 - /* This is the permanent pmd attached to the pgd; 78 - * cannot free it */ 76 + /* 77 + * This is the permanent pmd attached to the pgd; 78 + * cannot free it. 79 + * Increment the counter to compensate for the decrement 80 + * done by generic mm code. 81 + */ 82 + mm_inc_nr_pmds(mm); 79 83 return; 80 - #endif 81 84 free_pages((unsigned long)pmd, PMD_ORDER); 82 85 } 83 86 ··· 102 99 static inline void 103 100 pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) 104 101 { 105 - #ifdef CONFIG_64BIT 102 + #if PT_NLEVELS == 3 106 103 /* preserve the gateway marker if this is the beginning of 107 104 * the permanent pmd */ 108 105 if(pmd_flag(*pmd) & PxD_FLAG_ATTACHED)
+6 -3
arch/parisc/kernel/syscall_table.S
··· 55 55 #define ENTRY_COMP(_name_) .word sys_##_name_ 56 56 #endif 57 57 58 - ENTRY_SAME(restart_syscall) /* 0 */ 59 - ENTRY_SAME(exit) 58 + 90: ENTRY_SAME(restart_syscall) /* 0 */ 59 + 91: ENTRY_SAME(exit) 60 60 ENTRY_SAME(fork_wrapper) 61 61 ENTRY_SAME(read) 62 62 ENTRY_SAME(write) ··· 439 439 ENTRY_SAME(bpf) 440 440 ENTRY_COMP(execveat) 441 441 442 - /* Nothing yet */ 442 + 443 + .ifne (. - 90b) - (__NR_Linux_syscalls * (91b - 90b)) 444 + .error "size of syscall table does not fit value of __NR_Linux_syscalls" 445 + .endif 443 446 444 447 #undef ENTRY_SAME 445 448 #undef ENTRY_DIFF
+1 -1
arch/powerpc/include/asm/cputhreads.h
··· 55 55 56 56 static inline int cpu_nr_cores(void) 57 57 { 58 - return NR_CPUS >> threads_shift; 58 + return nr_cpu_ids >> threads_shift; 59 59 } 60 60 61 61 static inline cpumask_t cpu_online_cores_map(void)
+6
arch/powerpc/include/asm/iommu.h
··· 113 113 int pci_domain_number, unsigned long pe_num); 114 114 extern int iommu_add_device(struct device *dev); 115 115 extern void iommu_del_device(struct device *dev); 116 + extern int __init tce_iommu_bus_notifier_init(void); 116 117 #else 117 118 static inline void iommu_register_group(struct iommu_table *tbl, 118 119 int pci_domain_number, ··· 128 127 129 128 static inline void iommu_del_device(struct device *dev) 130 129 { 130 + } 131 + 132 + static inline int __init tce_iommu_bus_notifier_init(void) 133 + { 134 + return 0; 131 135 } 132 136 #endif /* !CONFIG_IOMMU_API */ 133 137
+9
arch/powerpc/include/asm/irq_work.h
··· 1 + #ifndef _ASM_POWERPC_IRQ_WORK_H 2 + #define _ASM_POWERPC_IRQ_WORK_H 3 + 4 + static inline bool arch_irq_work_has_interrupt(void) 5 + { 6 + return true; 7 + } 8 + 9 + #endif /* _ASM_POWERPC_IRQ_WORK_H */
+3
arch/powerpc/include/asm/ppc-opcode.h
··· 153 153 #define PPC_INST_MFSPR_PVR_MASK 0xfc1fffff 154 154 #define PPC_INST_MFTMR 0x7c0002dc 155 155 #define PPC_INST_MSGSND 0x7c00019c 156 + #define PPC_INST_MSGCLR 0x7c0001dc 156 157 #define PPC_INST_MSGSNDP 0x7c00011c 157 158 #define PPC_INST_MTTMR 0x7c0003dc 158 159 #define PPC_INST_NOP 0x60000000 ··· 309 308 ___PPC_RT(t) | ___PPC_RA(a) | \ 310 309 ___PPC_RB(b) | __PPC_EH(eh)) 311 310 #define PPC_MSGSND(b) stringify_in_c(.long PPC_INST_MSGSND | \ 311 + ___PPC_RB(b)) 312 + #define PPC_MSGCLR(b) stringify_in_c(.long PPC_INST_MSGCLR | \ 312 313 ___PPC_RB(b)) 313 314 #define PPC_MSGSNDP(b) stringify_in_c(.long PPC_INST_MSGSNDP | \ 314 315 ___PPC_RB(b))
+3
arch/powerpc/include/asm/reg.h
··· 608 608 #define SRR1_ISI_N_OR_G 0x10000000 /* ISI: Access is no-exec or G */ 609 609 #define SRR1_ISI_PROT 0x08000000 /* ISI: Other protection fault */ 610 610 #define SRR1_WAKEMASK 0x00380000 /* reason for wakeup */ 611 + #define SRR1_WAKEMASK_P8 0x003c0000 /* reason for wakeup on POWER8 */ 611 612 #define SRR1_WAKESYSERR 0x00300000 /* System error */ 612 613 #define SRR1_WAKEEE 0x00200000 /* External interrupt */ 613 614 #define SRR1_WAKEMT 0x00280000 /* mtctrl */ 614 615 #define SRR1_WAKEHMI 0x00280000 /* Hypervisor maintenance */ 615 616 #define SRR1_WAKEDEC 0x00180000 /* Decrementer interrupt */ 617 + #define SRR1_WAKEDBELL 0x00140000 /* Privileged doorbell on P8 */ 616 618 #define SRR1_WAKETHERM 0x00100000 /* Thermal management interrupt */ 617 619 #define SRR1_WAKERESET 0x00100000 /* System reset */ 620 + #define SRR1_WAKEHDBELL 0x000c0000 /* Hypervisor doorbell on P8 */ 618 621 #define SRR1_WAKESTATE 0x00030000 /* Powersave exit mask [46:47] */ 619 622 #define SRR1_WS_DEEPEST 0x00030000 /* Some resources not maintained, 620 623 * may not be recoverable */
+20
arch/powerpc/kernel/cputable.c
··· 437 437 .machine_check_early = __machine_check_early_realmode_p8, 438 438 .platform = "power8", 439 439 }, 440 + { /* Power8NVL */ 441 + .pvr_mask = 0xffff0000, 442 + .pvr_value = 0x004c0000, 443 + .cpu_name = "POWER8NVL (raw)", 444 + .cpu_features = CPU_FTRS_POWER8, 445 + .cpu_user_features = COMMON_USER_POWER8, 446 + .cpu_user_features2 = COMMON_USER2_POWER8, 447 + .mmu_features = MMU_FTRS_POWER8, 448 + .icache_bsize = 128, 449 + .dcache_bsize = 128, 450 + .num_pmcs = 6, 451 + .pmc_type = PPC_PMC_IBM, 452 + .oprofile_cpu_type = "ppc64/power8", 453 + .oprofile_type = PPC_OPROFILE_INVALID, 454 + .cpu_setup = __setup_cpu_power8, 455 + .cpu_restore = __restore_cpu_power8, 456 + .flush_tlb = __flush_tlb_power8, 457 + .machine_check_early = __machine_check_early_realmode_p8, 458 + .platform = "power8", 459 + }, 440 460 { /* Power8 DD1: Does not support doorbell IPIs */ 441 461 .pvr_mask = 0xffffff00, 442 462 .pvr_value = 0x004d0100,
+2
arch/powerpc/kernel/dbell.c
··· 17 17 18 18 #include <asm/dbell.h> 19 19 #include <asm/irq_regs.h> 20 + #include <asm/kvm_ppc.h> 20 21 21 22 #ifdef CONFIG_SMP 22 23 void doorbell_setup_this_cpu(void) ··· 42 41 43 42 may_hard_irq_enable(); 44 43 44 + kvmppc_set_host_ipi(smp_processor_id(), 0); 45 45 __this_cpu_inc(irq_stat.doorbell_irqs); 46 46 47 47 smp_ipi_demux();
+1 -1
arch/powerpc/kernel/exceptions-64s.S
··· 1408 1408 bne 9f /* continue in V mode if we are. */ 1409 1409 1410 1410 5: 1411 - #ifdef CONFIG_KVM_BOOK3S_64_HV 1411 + #ifdef CONFIG_KVM_BOOK3S_64_HANDLER 1412 1412 /* 1413 1413 * We are coming from kernel context. Check if we are coming from 1414 1414 * guest. if yes, then we can continue. We will fall through
+26
arch/powerpc/kernel/iommu.c
··· 1175 1175 } 1176 1176 EXPORT_SYMBOL_GPL(iommu_del_device); 1177 1177 1178 + static int tce_iommu_bus_notifier(struct notifier_block *nb, 1179 + unsigned long action, void *data) 1180 + { 1181 + struct device *dev = data; 1182 + 1183 + switch (action) { 1184 + case BUS_NOTIFY_ADD_DEVICE: 1185 + return iommu_add_device(dev); 1186 + case BUS_NOTIFY_DEL_DEVICE: 1187 + if (dev->iommu_group) 1188 + iommu_del_device(dev); 1189 + return 0; 1190 + default: 1191 + return 0; 1192 + } 1193 + } 1194 + 1195 + static struct notifier_block tce_iommu_bus_nb = { 1196 + .notifier_call = tce_iommu_bus_notifier, 1197 + }; 1198 + 1199 + int __init tce_iommu_bus_notifier_init(void) 1200 + { 1201 + bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb); 1202 + return 0; 1203 + } 1178 1204 #endif /* CONFIG_IOMMU_API */
+2 -2
arch/powerpc/kernel/smp.c
··· 541 541 if (smp_ops->give_timebase) 542 542 smp_ops->give_timebase(); 543 543 544 - /* Wait until cpu puts itself in the online map */ 545 - while (!cpu_online(cpu)) 544 + /* Wait until cpu puts itself in the online & active maps */ 545 + while (!cpu_online(cpu) || !cpu_active(cpu)) 546 546 cpu_relax(); 547 547 548 548 return 0;
+4 -4
arch/powerpc/kvm/book3s_hv.c
··· 636 636 spin_lock(&vcpu->arch.vpa_update_lock); 637 637 lppaca = (struct lppaca *)vcpu->arch.vpa.pinned_addr; 638 638 if (lppaca) 639 - yield_count = lppaca->yield_count; 639 + yield_count = be32_to_cpu(lppaca->yield_count); 640 640 spin_unlock(&vcpu->arch.vpa_update_lock); 641 641 return yield_count; 642 642 } ··· 942 942 static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr, 943 943 bool preserve_top32) 944 944 { 945 + struct kvm *kvm = vcpu->kvm; 945 946 struct kvmppc_vcore *vc = vcpu->arch.vcore; 946 947 u64 mask; 947 948 949 + mutex_lock(&kvm->lock); 948 950 spin_lock(&vc->lock); 949 951 /* 950 952 * If ILE (interrupt little-endian) has changed, update the 951 953 * MSR_LE bit in the intr_msr for each vcpu in this vcore. 952 954 */ 953 955 if ((new_lpcr & LPCR_ILE) != (vc->lpcr & LPCR_ILE)) { 954 - struct kvm *kvm = vcpu->kvm; 955 956 struct kvm_vcpu *vcpu; 956 957 int i; 957 958 958 - mutex_lock(&kvm->lock); 959 959 kvm_for_each_vcpu(i, vcpu, kvm) { 960 960 if (vcpu->arch.vcore != vc) 961 961 continue; ··· 964 964 else 965 965 vcpu->arch.intr_msr &= ~MSR_LE; 966 966 } 967 - mutex_unlock(&kvm->lock); 968 967 } 969 968 970 969 /* ··· 980 981 mask &= 0xFFFFFFFF; 981 982 vc->lpcr = (vc->lpcr & ~mask) | (new_lpcr & mask); 982 983 spin_unlock(&vc->lock); 984 + mutex_unlock(&kvm->lock); 983 985 } 984 986 985 987 static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
+1
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 1005 1005 /* Save HEIR (HV emulation assist reg) in emul_inst 1006 1006 if this is an HEI (HV emulation interrupt, e40) */ 1007 1007 li r3,KVM_INST_FETCH_FAILED 1008 + stw r3,VCPU_LAST_INST(r9) 1008 1009 cmpwi r12,BOOK3S_INTERRUPT_H_EMUL_ASSIST 1009 1010 bne 11f 1010 1011 mfspr r3,SPRN_HEIR
-26
arch/powerpc/platforms/powernv/pci.c
··· 836 836 #endif 837 837 } 838 838 839 - static int tce_iommu_bus_notifier(struct notifier_block *nb, 840 - unsigned long action, void *data) 841 - { 842 - struct device *dev = data; 843 - 844 - switch (action) { 845 - case BUS_NOTIFY_ADD_DEVICE: 846 - return iommu_add_device(dev); 847 - case BUS_NOTIFY_DEL_DEVICE: 848 - if (dev->iommu_group) 849 - iommu_del_device(dev); 850 - return 0; 851 - default: 852 - return 0; 853 - } 854 - } 855 - 856 - static struct notifier_block tce_iommu_bus_nb = { 857 - .notifier_call = tce_iommu_bus_notifier, 858 - }; 859 - 860 - static int __init tce_iommu_bus_notifier_init(void) 861 - { 862 - bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb); 863 - return 0; 864 - } 865 839 machine_subsys_initcall_sync(powernv, tce_iommu_bus_notifier_init);
+12 -2
arch/powerpc/platforms/powernv/smp.c
··· 33 33 #include <asm/runlatch.h> 34 34 #include <asm/code-patching.h> 35 35 #include <asm/dbell.h> 36 + #include <asm/kvm_ppc.h> 37 + #include <asm/ppc-opcode.h> 36 38 37 39 #include "powernv.h" 38 40 ··· 151 149 static void pnv_smp_cpu_kill_self(void) 152 150 { 153 151 unsigned int cpu; 154 - unsigned long srr1; 152 + unsigned long srr1, wmask; 155 153 u32 idle_states; 156 154 157 155 /* Standard hot unplug procedure */ ··· 162 160 DBG("CPU%d offline\n", cpu); 163 161 generic_set_cpu_dead(cpu); 164 162 smp_wmb(); 163 + 164 + wmask = SRR1_WAKEMASK; 165 + if (cpu_has_feature(CPU_FTR_ARCH_207S)) 166 + wmask = SRR1_WAKEMASK_P8; 165 167 166 168 idle_states = pnv_get_supported_cpuidle_states(); 167 169 /* We don't want to take decrementer interrupts while we are offline, ··· 197 191 * having finished executing in a KVM guest, then srr1 198 192 * contains 0. 199 193 */ 200 - if ((srr1 & SRR1_WAKEMASK) == SRR1_WAKEEE) { 194 + if ((srr1 & wmask) == SRR1_WAKEEE) { 201 195 icp_native_flush_interrupt(); 202 196 local_paca->irq_happened &= PACA_IRQ_HARD_DIS; 203 197 smp_mb(); 198 + } else if ((srr1 & wmask) == SRR1_WAKEHDBELL) { 199 + unsigned long msg = PPC_DBELL_TYPE(PPC_DBELL_SERVER); 200 + asm volatile(PPC_MSGCLR(%0) : : "r" (msg)); 201 + kvmppc_set_host_ipi(cpu, 0); 204 202 } 205 203 206 204 if (cpu_core_split_required())
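The pnv_smp_cpu_kill_self() change selects a wider wake mask on POWER8 (SRR1_WAKEMASK_P8) before decoding the wake reason, because the doorbell wake reason uses a bit that the older mask discards. A small sketch of why that matters, with made-up mask and reason values (the real SRR1_* constants live in arch/powerpc/include/asm/reg.h and differ from these):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative values only: WAKE_HDBELL carries a bit (0x00040000) that
 * is inside the P8 mask but outside the older one. */
#define WAKEMASK_P7 0x00380000u
#define WAKEMASK_P8 0x003c0000u
#define WAKE_EE     0x00200000u   /* external interrupt wake  */
#define WAKE_HDBELL 0x00240000u   /* hypervisor doorbell wake */

/* Extract the wake-reason field from a (toy) SRR1 value, using the
 * mask appropriate to the CPU generation. */
static uint32_t decode_wake(uint32_t srr1, int has_arch_207s)
{
    uint32_t wmask = has_arch_207s ? WAKEMASK_P8 : WAKEMASK_P7;
    return srr1 & wmask;
}
```

With the narrower mask, a doorbell wake masks down to the same value as an external interrupt, so the handler would take the wrong path; widening the mask on ARCH_207S machines is what lets the new SRR1_WAKEHDBELL branch (PPC_MSGCLR plus clearing the host IPI flag) fire at all.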
+2
arch/powerpc/platforms/pseries/iommu.c
··· 1340 1340 } 1341 1341 1342 1342 __setup("multitce=", disable_multitce); 1343 + 1344 + machine_subsys_initcall_sync(pseries, tce_iommu_bus_notifier_init);
+23 -21
arch/powerpc/platforms/pseries/mobility.c
··· 25 25 static struct kobject *mobility_kobj; 26 26 27 27 struct update_props_workarea { 28 - u32 phandle; 29 - u32 state; 30 - u64 reserved; 31 - u32 nprops; 28 + __be32 phandle; 29 + __be32 state; 30 + __be64 reserved; 31 + __be32 nprops; 32 32 } __packed; 33 33 34 34 #define NODE_ACTION_MASK 0xff000000 ··· 54 54 return rc; 55 55 } 56 56 57 - static int delete_dt_node(u32 phandle) 57 + static int delete_dt_node(__be32 phandle) 58 58 { 59 59 struct device_node *dn; 60 60 61 - dn = of_find_node_by_phandle(phandle); 61 + dn = of_find_node_by_phandle(be32_to_cpu(phandle)); 62 62 if (!dn) 63 63 return -ENOENT; 64 64 ··· 127 127 return 0; 128 128 } 129 129 130 - static int update_dt_node(u32 phandle, s32 scope) 130 + static int update_dt_node(__be32 phandle, s32 scope) 131 131 { 132 132 struct update_props_workarea *upwa; 133 133 struct device_node *dn; ··· 136 136 char *prop_data; 137 137 char *rtas_buf; 138 138 int update_properties_token; 139 + u32 nprops; 139 140 u32 vd; 140 141 141 142 update_properties_token = rtas_token("ibm,update-properties"); ··· 147 146 if (!rtas_buf) 148 147 return -ENOMEM; 149 148 150 - dn = of_find_node_by_phandle(phandle); 149 + dn = of_find_node_by_phandle(be32_to_cpu(phandle)); 151 150 if (!dn) { 152 151 kfree(rtas_buf); 153 152 return -ENOENT; ··· 163 162 break; 164 163 165 164 prop_data = rtas_buf + sizeof(*upwa); 165 + nprops = be32_to_cpu(upwa->nprops); 166 166 167 167 /* On the first call to ibm,update-properties for a node the 168 168 * the first property value descriptor contains an empty ··· 172 170 */ 173 171 if (*prop_data == 0) { 174 172 prop_data++; 175 - vd = *(u32 *)prop_data; 173 + vd = be32_to_cpu(*(__be32 *)prop_data); 176 174 prop_data += vd + sizeof(vd); 177 - upwa->nprops--; 175 + nprops--; 178 176 } 179 177 180 - for (i = 0; i < upwa->nprops; i++) { 178 + for (i = 0; i < nprops; i++) { 181 179 char *prop_name; 182 180 183 181 prop_name = prop_data; 184 182 prop_data += strlen(prop_name) + 1; 185 - vd = *(u32 
*)prop_data; 183 + vd = be32_to_cpu(*(__be32 *)prop_data); 186 184 prop_data += sizeof(vd); 187 185 188 186 switch (vd) { ··· 214 212 return 0; 215 213 } 216 214 217 - static int add_dt_node(u32 parent_phandle, u32 drc_index) 215 + static int add_dt_node(__be32 parent_phandle, __be32 drc_index) 218 216 { 219 217 struct device_node *dn; 220 218 struct device_node *parent_dn; 221 219 int rc; 222 220 223 - parent_dn = of_find_node_by_phandle(parent_phandle); 221 + parent_dn = of_find_node_by_phandle(be32_to_cpu(parent_phandle)); 224 222 if (!parent_dn) 225 223 return -ENOENT; 226 224 ··· 239 237 int pseries_devicetree_update(s32 scope) 240 238 { 241 239 char *rtas_buf; 242 - u32 *data; 240 + __be32 *data; 243 241 int update_nodes_token; 244 242 int rc; 245 243 ··· 256 254 if (rc && rc != 1) 257 255 break; 258 256 259 - data = (u32 *)rtas_buf + 4; 260 - while (*data & NODE_ACTION_MASK) { 257 + data = (__be32 *)rtas_buf + 4; 258 + while (be32_to_cpu(*data) & NODE_ACTION_MASK) { 261 259 int i; 262 - u32 action = *data & NODE_ACTION_MASK; 263 - int node_count = *data & NODE_COUNT_MASK; 260 + u32 action = be32_to_cpu(*data) & NODE_ACTION_MASK; 261 + u32 node_count = be32_to_cpu(*data) & NODE_COUNT_MASK; 264 262 265 263 data++; 266 264 267 265 for (i = 0; i < node_count; i++) { 268 - u32 phandle = *data++; 269 - u32 drc_index; 266 + __be32 phandle = *data++; 267 + __be32 drc_index; 270 268 271 269 switch (action) { 272 270 case DELETE_DT_NODE:
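The mobility.c hunk stops dereferencing RTAS buffer fields as raw u32 and routes every access through be32_to_cpu(), since the firmware fills that buffer big-endian regardless of the kernel's own byte order. The idea can be shown with a user-space stand-in (read_be32 is a hypothetical helper, not the kernel accessor) that assembles the value from bytes so it is correct on either host endianness:

```c
#include <assert.h>
#include <stdint.h>

/* Read a 32-bit big-endian value from a byte buffer, independent of
 * host endianness: most significant byte first. */
static uint32_t read_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```

Casting the buffer to u32 and dereferencing, as the old code did, silently byte-swaps the value on little-endian hosts; explicit accessors like this (or the kernel's be32_to_cpu on an annotated __be32) make the conversion visible and sparse-checkable.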
+1 -1
arch/s390/include/asm/elf.h
··· 211 211 212 212 extern unsigned long mmap_rnd_mask; 213 213 214 - #define STACK_RND_MASK (mmap_rnd_mask) 214 + #define STACK_RND_MASK (test_thread_flag(TIF_31BIT) ? 0x7ff : mmap_rnd_mask) 215 215 216 216 #define ARCH_DLINFO \ 217 217 do { \
+6 -6
arch/s390/include/asm/kvm_host.h
··· 515 515 #define S390_ARCH_FAC_MASK_SIZE_U64 \ 516 516 (S390_ARCH_FAC_MASK_SIZE_BYTE / sizeof(u64)) 517 517 518 - struct s390_model_fac { 519 - /* facilities used in SIE context */ 520 - __u64 sie[S390_ARCH_FAC_LIST_SIZE_U64]; 521 - /* subset enabled by kvm */ 522 - __u64 kvm[S390_ARCH_FAC_LIST_SIZE_U64]; 518 + struct kvm_s390_fac { 519 + /* facility list requested by guest */ 520 + __u64 list[S390_ARCH_FAC_LIST_SIZE_U64]; 521 + /* facility mask supported by kvm & hosting machine */ 522 + __u64 mask[S390_ARCH_FAC_LIST_SIZE_U64]; 523 523 }; 524 524 525 525 struct kvm_s390_cpu_model { 526 - struct s390_model_fac *fac; 526 + struct kvm_s390_fac *fac; 527 527 struct cpuid cpu_id; 528 528 unsigned short ibc; 529 529 };
+1 -1
arch/s390/include/asm/mmu_context.h
··· 62 62 { 63 63 int cpu = smp_processor_id(); 64 64 65 + S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd); 65 66 if (prev == next) 66 67 return; 67 68 if (MACHINE_HAS_TLB_LC) ··· 74 73 atomic_dec(&prev->context.attach_count); 75 74 if (MACHINE_HAS_TLB_LC) 76 75 cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask); 77 - S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd); 78 76 } 79 77 80 78 #define finish_arch_post_lock_switch finish_arch_post_lock_switch
+1 -10
arch/s390/include/asm/page.h
··· 37 37 #endif 38 38 } 39 39 40 - static inline void clear_page(void *page) 41 - { 42 - register unsigned long reg1 asm ("1") = 0; 43 - register void *reg2 asm ("2") = page; 44 - register unsigned long reg3 asm ("3") = 4096; 45 - asm volatile( 46 - " mvcl 2,0" 47 - : "+d" (reg2), "+d" (reg3) : "d" (reg1) 48 - : "memory", "cc"); 49 - } 40 + #define clear_page(page) memset((page), 0, PAGE_SIZE) 50 41 51 42 /* 52 43 * copy_page uses the mvcl instruction with 0xb0 padding byte in order to
+45 -16
arch/s390/kernel/ftrace.c
··· 57 57 58 58 unsigned long ftrace_plt; 59 59 60 + static inline void ftrace_generate_orig_insn(struct ftrace_insn *insn) 61 + { 62 + #ifdef CC_USING_HOTPATCH 63 + /* brcl 0,0 */ 64 + insn->opc = 0xc004; 65 + insn->disp = 0; 66 + #else 67 + /* stg r14,8(r15) */ 68 + insn->opc = 0xe3e0; 69 + insn->disp = 0xf0080024; 70 + #endif 71 + } 72 + 73 + static inline int is_kprobe_on_ftrace(struct ftrace_insn *insn) 74 + { 75 + #ifdef CONFIG_KPROBES 76 + if (insn->opc == BREAKPOINT_INSTRUCTION) 77 + return 1; 78 + #endif 79 + return 0; 80 + } 81 + 82 + static inline void ftrace_generate_kprobe_nop_insn(struct ftrace_insn *insn) 83 + { 84 + #ifdef CONFIG_KPROBES 85 + insn->opc = BREAKPOINT_INSTRUCTION; 86 + insn->disp = KPROBE_ON_FTRACE_NOP; 87 + #endif 88 + } 89 + 90 + static inline void ftrace_generate_kprobe_call_insn(struct ftrace_insn *insn) 91 + { 92 + #ifdef CONFIG_KPROBES 93 + insn->opc = BREAKPOINT_INSTRUCTION; 94 + insn->disp = KPROBE_ON_FTRACE_CALL; 95 + #endif 96 + } 97 + 60 98 int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, 61 99 unsigned long addr) 62 100 { ··· 110 72 return -EFAULT; 111 73 if (addr == MCOUNT_ADDR) { 112 74 /* Initial code replacement */ 113 - #ifdef CC_USING_HOTPATCH 114 - /* We expect to see brcl 0,0 */ 115 - ftrace_generate_nop_insn(&orig); 116 - #else 117 - /* We expect to see stg r14,8(r15) */ 118 - orig.opc = 0xe3e0; 119 - orig.disp = 0xf0080024; 120 - #endif 75 + ftrace_generate_orig_insn(&orig); 121 76 ftrace_generate_nop_insn(&new); 122 - } else if (old.opc == BREAKPOINT_INSTRUCTION) { 77 + } else if (is_kprobe_on_ftrace(&old)) { 123 78 /* 124 79 * If we find a breakpoint instruction, a kprobe has been 125 80 * placed at the beginning of the function. We write the ··· 120 89 * bytes of the original instruction so that the kprobes 121 90 * handler can execute a nop, if it reaches this breakpoint. 
122 91 */ 123 - new.opc = orig.opc = BREAKPOINT_INSTRUCTION; 124 - orig.disp = KPROBE_ON_FTRACE_CALL; 125 - new.disp = KPROBE_ON_FTRACE_NOP; 92 + ftrace_generate_kprobe_call_insn(&orig); 93 + ftrace_generate_kprobe_nop_insn(&new); 126 94 } else { 127 95 /* Replace ftrace call with a nop. */ 128 96 ftrace_generate_call_insn(&orig, rec->ip); ··· 141 111 142 112 if (probe_kernel_read(&old, (void *) rec->ip, sizeof(old))) 143 113 return -EFAULT; 144 - if (old.opc == BREAKPOINT_INSTRUCTION) { 114 + if (is_kprobe_on_ftrace(&old)) { 145 115 /* 146 116 * If we find a breakpoint instruction, a kprobe has been 147 117 * placed at the beginning of the function. We write the ··· 149 119 * bytes of the original instruction so that the kprobes 150 120 * handler can execute a brasl if it reaches this breakpoint. 151 121 */ 152 - new.opc = orig.opc = BREAKPOINT_INSTRUCTION; 153 - orig.disp = KPROBE_ON_FTRACE_NOP; 154 - new.disp = KPROBE_ON_FTRACE_CALL; 122 + ftrace_generate_kprobe_nop_insn(&orig); 123 + ftrace_generate_kprobe_call_insn(&new); 155 124 } else { 156 125 /* Replace nop with an ftrace call. */ 157 126 ftrace_generate_nop_insn(&orig);
+8 -4
arch/s390/kernel/jump_label.c
··· 36 36 insn->offset = (entry->target - entry->code) >> 1; 37 37 } 38 38 39 - static void jump_label_bug(struct jump_entry *entry, struct insn *insn) 39 + static void jump_label_bug(struct jump_entry *entry, struct insn *expected, 40 + struct insn *new) 40 41 { 41 42 unsigned char *ipc = (unsigned char *)entry->code; 42 - unsigned char *ipe = (unsigned char *)insn; 43 + unsigned char *ipe = (unsigned char *)expected; 44 + unsigned char *ipn = (unsigned char *)new; 43 45 44 46 pr_emerg("Jump label code mismatch at %pS [%p]\n", ipc, ipc); 45 47 pr_emerg("Found: %02x %02x %02x %02x %02x %02x\n", 46 48 ipc[0], ipc[1], ipc[2], ipc[3], ipc[4], ipc[5]); 47 49 pr_emerg("Expected: %02x %02x %02x %02x %02x %02x\n", 48 50 ipe[0], ipe[1], ipe[2], ipe[3], ipe[4], ipe[5]); 51 + pr_emerg("New: %02x %02x %02x %02x %02x %02x\n", 52 + ipn[0], ipn[1], ipn[2], ipn[3], ipn[4], ipn[5]); 49 53 panic("Corrupted kernel text"); 50 54 } 51 55 ··· 73 69 } 74 70 if (init) { 75 71 if (memcmp((void *)entry->code, &orignop, sizeof(orignop))) 76 - jump_label_bug(entry, &old); 72 + jump_label_bug(entry, &orignop, &new); 77 73 } else { 78 74 if (memcmp((void *)entry->code, &old, sizeof(old))) 79 - jump_label_bug(entry, &old); 75 + jump_label_bug(entry, &old, &new); 80 76 } 81 77 probe_kernel_write((void *)entry->code, &new, sizeof(new)); 82 78 }
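The jump_label.c change makes the mismatch report print three variants (found, expected, new) before panicking, which pins down whether the text was corrupted before or after patching. The underlying pattern is compare-then-write: only patch when the bytes in place are exactly what the patcher expects. A toy sketch of that rule (names and the -1 error return are illustrative):

```c
#include <assert.h>
#include <string.h>

/* Overwrite len bytes at code with newcode, but only if the current
 * bytes match expected; otherwise refuse, so the caller can dump the
 * found/expected/new byte sequences and bail out. */
static int patch_if_expected(unsigned char *code,
                             const unsigned char *expected,
                             const unsigned char *newcode, size_t len)
{
    if (memcmp(code, expected, len))
        return -1;              /* unexpected bytes: corrupted text */
    memcpy(code, newcode, len);
    return 0;
}
```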
+1
arch/s390/kernel/module.c
··· 436 436 const Elf_Shdr *sechdrs, 437 437 struct module *me) 438 438 { 439 + jump_label_apply_nops(me); 439 440 vfree(me->arch.syminfo); 440 441 me->arch.syminfo = NULL; 441 442 return 0;
+5 -2
arch/s390/kernel/perf_cpum_sf.c
··· 1415 1415 1416 1416 static struct attribute *cpumsf_pmu_events_attr[] = { 1417 1417 CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC), 1418 - CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC_DIAG), 1418 + NULL, 1419 1419 NULL, 1420 1420 }; 1421 1421 ··· 1606 1606 return -EINVAL; 1607 1607 } 1608 1608 1609 - if (si.ad) 1609 + if (si.ad) { 1610 1610 sfb_set_limits(CPUM_SF_MIN_SDB, CPUM_SF_MAX_SDB); 1611 + cpumsf_pmu_events_attr[1] = 1612 + CPUMF_EVENT_PTR(SF, SF_CYCLES_BASIC_DIAG); 1613 + } 1611 1614 1612 1615 sfdbg = debug_register(KMSG_COMPONENT, 2, 1, 80); 1613 1616 if (!sfdbg)
+1 -1
arch/s390/kernel/processor.c
··· 18 18 19 19 static DEFINE_PER_CPU(struct cpuid, cpu_id); 20 20 21 - void cpu_relax(void) 21 + void notrace cpu_relax(void) 22 22 { 23 23 if (!smp_cpu_mtid && MACHINE_HAS_DIAG44) 24 24 asm volatile("diag 0,0,0x44");
+11
arch/s390/kernel/swsusp_asm64.S
··· 177 177 lhi %r1,1 178 178 sigp %r1,%r0,SIGP_SET_ARCHITECTURE 179 179 sam64 180 + #ifdef CONFIG_SMP 181 + larl %r1,smp_cpu_mt_shift 182 + icm %r1,15,0(%r1) 183 + jz smt_done 184 + llgfr %r1,%r1 185 + smt_loop: 186 + sigp %r1,%r0,SIGP_SET_MULTI_THREADING 187 + brc 8,smt_done /* accepted */ 188 + brc 2,smt_loop /* busy, try again */ 189 + smt_done: 190 + #endif 180 191 larl %r1,.Lnew_pgm_check_psw 181 192 lpswe 0(%r1) 182 193 pgm_check_entry:
+31 -38
arch/s390/kvm/kvm-s390.c
··· 165 165 case KVM_CAP_ONE_REG: 166 166 case KVM_CAP_ENABLE_CAP: 167 167 case KVM_CAP_S390_CSS_SUPPORT: 168 - case KVM_CAP_IRQFD: 169 168 case KVM_CAP_IOEVENTFD: 170 169 case KVM_CAP_DEVICE_CTRL: 171 170 case KVM_CAP_ENABLE_CAP_VM: ··· 521 522 memcpy(&kvm->arch.model.cpu_id, &proc->cpuid, 522 523 sizeof(struct cpuid)); 523 524 kvm->arch.model.ibc = proc->ibc; 524 - memcpy(kvm->arch.model.fac->kvm, proc->fac_list, 525 + memcpy(kvm->arch.model.fac->list, proc->fac_list, 525 526 S390_ARCH_FAC_LIST_SIZE_BYTE); 526 527 } else 527 528 ret = -EFAULT; ··· 555 556 } 556 557 memcpy(&proc->cpuid, &kvm->arch.model.cpu_id, sizeof(struct cpuid)); 557 558 proc->ibc = kvm->arch.model.ibc; 558 - memcpy(&proc->fac_list, kvm->arch.model.fac->kvm, S390_ARCH_FAC_LIST_SIZE_BYTE); 559 + memcpy(&proc->fac_list, kvm->arch.model.fac->list, S390_ARCH_FAC_LIST_SIZE_BYTE); 559 560 if (copy_to_user((void __user *)attr->addr, proc, sizeof(*proc))) 560 561 ret = -EFAULT; 561 562 kfree(proc); ··· 575 576 } 576 577 get_cpu_id((struct cpuid *) &mach->cpuid); 577 578 mach->ibc = sclp_get_ibc(); 578 - memcpy(&mach->fac_mask, kvm_s390_fac_list_mask, 579 - kvm_s390_fac_list_mask_size() * sizeof(u64)); 579 + memcpy(&mach->fac_mask, kvm->arch.model.fac->mask, 580 + S390_ARCH_FAC_LIST_SIZE_BYTE); 580 581 memcpy((unsigned long *)&mach->fac_list, S390_lowcore.stfle_fac_list, 581 - S390_ARCH_FAC_LIST_SIZE_U64); 582 + S390_ARCH_FAC_LIST_SIZE_BYTE); 582 583 if (copy_to_user((void __user *)attr->addr, mach, sizeof(*mach))) 583 584 ret = -EFAULT; 584 585 kfree(mach); ··· 777 778 static int kvm_s390_query_ap_config(u8 *config) 778 779 { 779 780 u32 fcn_code = 0x04000000UL; 780 - u32 cc; 781 + u32 cc = 0; 781 782 783 + memset(config, 0, 128); 782 784 asm volatile( 783 785 "lgr 0,%1\n" 784 786 "lgr 2,%2\n" 785 787 ".long 0xb2af0000\n" /* PQAP(QCI) */ 786 - "ipm %0\n" 788 + "0: ipm %0\n" 787 789 "srl %0,28\n" 788 - : "=r" (cc) 790 + "1:\n" 791 + EX_TABLE(0b, 1b) 792 + : "+r" (cc) 789 793 : "r" (fcn_code), "r" 
(config) 790 794 : "cc", "0", "2", "memory" 791 795 ); ··· 841 839 842 840 kvm_s390_set_crycb_format(kvm); 843 841 844 - /* Disable AES/DEA protected key functions by default */ 845 - kvm->arch.crypto.aes_kw = 0; 846 - kvm->arch.crypto.dea_kw = 0; 842 + /* Enable AES/DEA protected key functions by default */ 843 + kvm->arch.crypto.aes_kw = 1; 844 + kvm->arch.crypto.dea_kw = 1; 845 + get_random_bytes(kvm->arch.crypto.crycb->aes_wrapping_key_mask, 846 + sizeof(kvm->arch.crypto.crycb->aes_wrapping_key_mask)); 847 + get_random_bytes(kvm->arch.crypto.crycb->dea_wrapping_key_mask, 848 + sizeof(kvm->arch.crypto.crycb->dea_wrapping_key_mask)); 847 849 848 850 return 0; 849 851 } ··· 892 886 /* 893 887 * The architectural maximum amount of facilities is 16 kbit. To store 894 888 * this amount, 2 kbyte of memory is required. Thus we need a full 895 - * page to hold the active copy (arch.model.fac->sie) and the current 896 - * facilities set (arch.model.fac->kvm). Its address size has to be 889 + * page to hold the guest facility list (arch.model.fac->list) and the 890 + * facility mask (arch.model.fac->mask). Its address size has to be 897 891 * 31 bits and word aligned. 898 892 */ 899 893 kvm->arch.model.fac = 900 - (struct s390_model_fac *) get_zeroed_page(GFP_KERNEL | GFP_DMA); 894 + (struct kvm_s390_fac *) get_zeroed_page(GFP_KERNEL | GFP_DMA); 901 895 if (!kvm->arch.model.fac) 902 896 goto out_nofac; 903 897 904 - memcpy(kvm->arch.model.fac->kvm, S390_lowcore.stfle_fac_list, 905 - S390_ARCH_FAC_LIST_SIZE_U64); 906 - 907 - /* 908 - * If this KVM host runs *not* in a LPAR, relax the facility bits 909 - * of the kvm facility mask by all missing facilities. This will allow 910 - * to determine the right CPU model by means of the remaining facilities. 911 - * Live guest migration must prohibit the migration of KVMs running in 912 - * a LPAR to non LPAR hosts. 
913 - */ 914 - if (!MACHINE_IS_LPAR) 915 - for (i = 0; i < kvm_s390_fac_list_mask_size(); i++) 916 - kvm_s390_fac_list_mask[i] &= kvm->arch.model.fac->kvm[i]; 917 - 918 - /* 919 - * Apply the kvm facility mask to limit the kvm supported/tolerated 920 - * facility list. 921 - */ 898 + /* Populate the facility mask initially. */ 899 + memcpy(kvm->arch.model.fac->mask, S390_lowcore.stfle_fac_list, 900 + S390_ARCH_FAC_LIST_SIZE_BYTE); 922 901 for (i = 0; i < S390_ARCH_FAC_LIST_SIZE_U64; i++) { 923 902 if (i < kvm_s390_fac_list_mask_size()) 924 - kvm->arch.model.fac->kvm[i] &= kvm_s390_fac_list_mask[i]; 903 + kvm->arch.model.fac->mask[i] &= kvm_s390_fac_list_mask[i]; 925 904 else 926 - kvm->arch.model.fac->kvm[i] = 0UL; 905 + kvm->arch.model.fac->mask[i] = 0UL; 927 906 } 907 + 908 + /* Populate the facility list initially. */ 909 + memcpy(kvm->arch.model.fac->list, kvm->arch.model.fac->mask, 910 + S390_ARCH_FAC_LIST_SIZE_BYTE); 928 911 929 912 kvm_s390_get_cpu_id(&kvm->arch.model.cpu_id); 930 913 kvm->arch.model.ibc = sclp_get_ibc() & 0x0fff; ··· 1160 1165 1161 1166 mutex_lock(&vcpu->kvm->lock); 1162 1167 vcpu->arch.cpu_id = vcpu->kvm->arch.model.cpu_id; 1163 - memcpy(vcpu->kvm->arch.model.fac->sie, vcpu->kvm->arch.model.fac->kvm, 1164 - S390_ARCH_FAC_LIST_SIZE_BYTE); 1165 1168 vcpu->arch.sie_block->ibc = vcpu->kvm->arch.model.ibc; 1166 1169 mutex_unlock(&vcpu->kvm->lock); 1167 1170 ··· 1205 1212 vcpu->arch.sie_block->scaol = (__u32)(__u64)kvm->arch.sca; 1206 1213 set_bit(63 - id, (unsigned long *) &kvm->arch.sca->mcn); 1207 1214 } 1208 - vcpu->arch.sie_block->fac = (int) (long) kvm->arch.model.fac->sie; 1215 + vcpu->arch.sie_block->fac = (int) (long) kvm->arch.model.fac->list; 1209 1216 1210 1217 spin_lock_init(&vcpu->arch.local_int.lock); 1211 1218 vcpu->arch.local_int.float_int = &kvm->arch.float_int;
+2 -1
arch/s390/kvm/kvm-s390.h
··· 128 128 /* test availability of facility in a kvm intance */ 129 129 static inline int test_kvm_facility(struct kvm *kvm, unsigned long nr) 130 130 { 131 - return __test_facility(nr, kvm->arch.model.fac->kvm); 131 + return __test_facility(nr, kvm->arch.model.fac->mask) && 132 + __test_facility(nr, kvm->arch.model.fac->list); 132 133 } 133 134 134 135 /* are cpu states controlled by user space */
+1 -1
arch/s390/kvm/priv.c
··· 348 348 * We need to shift the lower 32 facility bits (bit 0-31) from a u64 349 349 * into a u32 memory representation. They will remain bits 0-31. 350 350 */ 351 - fac = *vcpu->kvm->arch.model.fac->sie >> 32; 351 + fac = *vcpu->kvm->arch.model.fac->list >> 32; 352 352 rc = write_guest_lc(vcpu, offsetof(struct _lowcore, stfl_fac_list), 353 353 &fac, sizeof(fac)); 354 354 if (rc)
+16 -12
arch/s390/pci/pci.c
··· 287 287 addr = ZPCI_IOMAP_ADDR_BASE | ((u64) idx << 48); 288 288 return (void __iomem *) addr + offset; 289 289 } 290 - EXPORT_SYMBOL_GPL(pci_iomap_range); 290 + EXPORT_SYMBOL(pci_iomap_range); 291 291 292 292 void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen) 293 293 { ··· 309 309 } 310 310 spin_unlock(&zpci_iomap_lock); 311 311 } 312 - EXPORT_SYMBOL_GPL(pci_iounmap); 312 + EXPORT_SYMBOL(pci_iounmap); 313 313 314 314 static int pci_read(struct pci_bus *bus, unsigned int devfn, int where, 315 315 int size, u32 *val) ··· 483 483 airq_iv_free_bit(zpci_aisb_iv, zdev->aisb); 484 484 } 485 485 486 - static void zpci_map_resources(struct zpci_dev *zdev) 486 + static void zpci_map_resources(struct pci_dev *pdev) 487 487 { 488 - struct pci_dev *pdev = zdev->pdev; 489 488 resource_size_t len; 490 489 int i; 491 490 ··· 498 499 } 499 500 } 500 501 501 - static void zpci_unmap_resources(struct zpci_dev *zdev) 502 + static void zpci_unmap_resources(struct pci_dev *pdev) 502 503 { 503 - struct pci_dev *pdev = zdev->pdev; 504 504 resource_size_t len; 505 505 int i; 506 506 ··· 649 651 650 652 zdev->pdev = pdev; 651 653 pdev->dev.groups = zpci_attr_groups; 652 - zpci_map_resources(zdev); 654 + zpci_map_resources(pdev); 653 655 654 656 for (i = 0; i < PCI_BAR_COUNT; i++) { 655 657 res = &pdev->resource[i]; ··· 661 663 return 0; 662 664 } 663 665 666 + void pcibios_release_device(struct pci_dev *pdev) 667 + { 668 + zpci_unmap_resources(pdev); 669 + } 670 + 664 671 int pcibios_enable_device(struct pci_dev *pdev, int mask) 665 672 { 666 673 struct zpci_dev *zdev = get_zdev(pdev); ··· 673 670 zdev->pdev = pdev; 674 671 zpci_debug_init_device(zdev); 675 672 zpci_fmb_enable_device(zdev); 676 - zpci_map_resources(zdev); 677 673 678 674 return pci_enable_resources(pdev, mask); 679 675 } ··· 681 679 { 682 680 struct zpci_dev *zdev = get_zdev(pdev); 683 681 684 - zpci_unmap_resources(zdev); 685 682 zpci_fmb_disable_device(zdev); 686 683 
zpci_debug_exit_device(zdev); 687 684 zdev->pdev = NULL; ··· 689 688 #ifdef CONFIG_HIBERNATE_CALLBACKS 690 689 static int zpci_restore(struct device *dev) 691 690 { 692 - struct zpci_dev *zdev = get_zdev(to_pci_dev(dev)); 691 + struct pci_dev *pdev = to_pci_dev(dev); 692 + struct zpci_dev *zdev = get_zdev(pdev); 693 693 int ret = 0; 694 694 695 695 if (zdev->state != ZPCI_FN_STATE_ONLINE) ··· 700 698 if (ret) 701 699 goto out; 702 700 703 - zpci_map_resources(zdev); 701 + zpci_map_resources(pdev); 704 702 zpci_register_ioat(zdev, 0, zdev->start_dma + PAGE_OFFSET, 705 703 zdev->start_dma + zdev->iommu_size - 1, 706 704 (u64) zdev->dma_table); ··· 711 709 712 710 static int zpci_freeze(struct device *dev) 713 711 { 714 - struct zpci_dev *zdev = get_zdev(to_pci_dev(dev)); 712 + struct pci_dev *pdev = to_pci_dev(dev); 713 + struct zpci_dev *zdev = get_zdev(pdev); 715 714 716 715 if (zdev->state != ZPCI_FN_STATE_ONLINE) 717 716 return 0; 718 717 719 718 zpci_unregister_ioat(zdev, 0); 719 + zpci_unmap_resources(pdev); 720 720 return clp_disable_fh(zdev); 721 721 } 722 722
+8 -9
arch/s390/pci/pci_mmio.c
··· 64 64 if (copy_from_user(buf, user_buffer, length)) 65 65 goto out; 66 66 67 - memcpy_toio(io_addr, buf, length); 68 - ret = 0; 67 + ret = zpci_memcpy_toio(io_addr, buf, length); 69 68 out: 70 69 if (buf != local_buf) 71 70 kfree(buf); ··· 97 98 goto out; 98 99 io_addr = (void __iomem *)((pfn << PAGE_SHIFT) | (mmio_addr & ~PAGE_MASK)); 99 100 100 - ret = -EFAULT; 101 - if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE) 101 + if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE) { 102 + ret = -EFAULT; 102 103 goto out; 103 - 104 - memcpy_fromio(buf, io_addr, length); 105 - 104 + } 105 + ret = zpci_memcpy_fromio(buf, io_addr, length); 106 + if (ret) 107 + goto out; 106 108 if (copy_to_user(user_buffer, buf, length)) 107 - goto out; 109 + ret = -EFAULT; 108 110 109 - ret = 0; 110 111 out: 111 112 if (buf != local_buf) 112 113 kfree(buf);
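The pci_mmio.c rewrite switches to zpci_memcpy_toio()/zpci_memcpy_fromio() and, crucially, propagates their return value instead of assuming the copy succeeded, so an I/O failure is no longer reported to user space as a blanket -EFAULT. A toy model of the rewritten read path (fake_mmio_read and mmio_read_syscall are illustrative names, not the kernel functions):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Stand-in for zpci_memcpy_fromio(): returns 0 or a negative errno. */
static int fake_mmio_read(char *buf, const char *src, int fail_io)
{
    if (fail_io)
        return -EIO;            /* hardware error on the MMIO copy */
    strcpy(buf, src);
    return 0;
}

/* Each stage's error is propagated as-is; -EFAULT is reserved for the
 * user-copy stage (here modeled as a NULL destination). */
static int mmio_read_syscall(char *user, const char *src, int fail_io)
{
    char buf[64];
    int ret = fake_mmio_read(buf, src, fail_io);
    if (ret)
        return ret;             /* keep the I/O error distinct */
    if (!user)
        return -EFAULT;         /* copy_to_user stand-in failed */
    strcpy(user, buf);
    return 0;
}
```

Keeping the error codes distinct is what lets callers tell a bad user pointer apart from a failing device, which the old unconditional `ret = -EFAULT` flow could not do.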
+3
arch/sparc/Kconfig
··· 86 86 default "arch/sparc/configs/sparc32_defconfig" if SPARC32 87 87 default "arch/sparc/configs/sparc64_defconfig" if SPARC64 88 88 89 + config ARCH_PROC_KCORE_TEXT 90 + def_bool y 91 + 89 92 config IOMMU_HELPER 90 93 bool 91 94 default y if SPARC64
+12
arch/sparc/include/asm/hypervisor.h
··· 2957 2957 unsigned long reg_val); 2958 2958 #endif 2959 2959 2960 + 2961 + #define HV_FAST_M7_GET_PERFREG 0x43 2962 + #define HV_FAST_M7_SET_PERFREG 0x44 2963 + 2964 + #ifndef __ASSEMBLY__ 2965 + unsigned long sun4v_m7_get_perfreg(unsigned long reg_num, 2966 + unsigned long *reg_val); 2967 + unsigned long sun4v_m7_set_perfreg(unsigned long reg_num, 2968 + unsigned long reg_val); 2969 + #endif 2970 + 2960 2971 /* Function numbers for HV_CORE_TRAP. */ 2961 2972 #define HV_CORE_SET_VER 0x00 2962 2973 #define HV_CORE_PUTCHAR 0x01 ··· 2992 2981 #define HV_GRP_SDIO 0x0108 2993 2982 #define HV_GRP_SDIO_ERR 0x0109 2994 2983 #define HV_GRP_REBOOT_DATA 0x0110 2984 + #define HV_GRP_M7_PERF 0x0114 2995 2985 #define HV_GRP_NIAG_PERF 0x0200 2996 2986 #define HV_GRP_FIRE_PERF 0x0201 2997 2987 #define HV_GRP_N2_CPU 0x0202
+10 -10
arch/sparc/include/asm/io_64.h
··· 407 407 { 408 408 } 409 409 410 - #define ioread8(X) readb(X) 411 - #define ioread16(X) readw(X) 412 - #define ioread16be(X) __raw_readw(X) 413 - #define ioread32(X) readl(X) 414 - #define ioread32be(X) __raw_readl(X) 415 - #define iowrite8(val,X) writeb(val,X) 416 - #define iowrite16(val,X) writew(val,X) 417 - #define iowrite16be(val,X) __raw_writew(val,X) 418 - #define iowrite32(val,X) writel(val,X) 419 - #define iowrite32be(val,X) __raw_writel(val,X) 410 + #define ioread8 readb 411 + #define ioread16 readw 412 + #define ioread16be __raw_readw 413 + #define ioread32 readl 414 + #define ioread32be __raw_readl 415 + #define iowrite8 writeb 416 + #define iowrite16 writew 417 + #define iowrite16be __raw_writew 418 + #define iowrite32 writel 419 + #define iowrite32be __raw_writel 420 420 421 421 /* Create a virtual mapping cookie for an IO port range */ 422 422 void __iomem *ioport_map(unsigned long port, unsigned int nr);
-1
arch/sparc/include/asm/starfire.h
··· 12 12 extern int this_is_starfire; 13 13 14 14 void check_if_starfire(void); 15 - int starfire_hard_smp_processor_id(void); 16 15 void starfire_hookup(int); 17 16 unsigned int starfire_translate(unsigned long imap, unsigned int upaid); 18 17
-4
arch/sparc/kernel/entry.h
··· 98 98 void do_privop(struct pt_regs *regs); 99 99 void do_privact(struct pt_regs *regs); 100 100 void do_cee(struct pt_regs *regs); 101 - void do_cee_tl1(struct pt_regs *regs); 102 - void do_dae_tl1(struct pt_regs *regs); 103 - void do_iae_tl1(struct pt_regs *regs); 104 101 void do_div0_tl1(struct pt_regs *regs); 105 - void do_fpdis_tl1(struct pt_regs *regs); 106 102 void do_fpieee_tl1(struct pt_regs *regs); 107 103 void do_fpother_tl1(struct pt_regs *regs); 108 104 void do_ill_tl1(struct pt_regs *regs);
+1
arch/sparc/kernel/hvapi.c
··· 48 48 { .group = HV_GRP_VT_CPU, }, 49 49 { .group = HV_GRP_T5_CPU, }, 50 50 { .group = HV_GRP_DIAG, .flags = FLAG_PRE_API }, 51 + { .group = HV_GRP_M7_PERF, }, 51 52 }; 52 53 53 54 static DEFINE_SPINLOCK(hvapi_lock);
+16
arch/sparc/kernel/hvcalls.S
··· 837 837 retl 838 838 nop 839 839 ENDPROC(sun4v_t5_set_perfreg) 840 + 841 + ENTRY(sun4v_m7_get_perfreg) 842 + mov %o1, %o4 843 + mov HV_FAST_M7_GET_PERFREG, %o5 844 + ta HV_FAST_TRAP 845 + stx %o1, [%o4] 846 + retl 847 + nop 848 + ENDPROC(sun4v_m7_get_perfreg) 849 + 850 + ENTRY(sun4v_m7_set_perfreg) 851 + mov HV_FAST_M7_SET_PERFREG, %o5 852 + ta HV_FAST_TRAP 853 + retl 854 + nop 855 + ENDPROC(sun4v_m7_set_perfreg)
+33
arch/sparc/kernel/pcr.c
··· 217 217 .pcr_nmi_disable = PCR_N4_PICNPT, 218 218 }; 219 219 220 + static u64 m7_pcr_read(unsigned long reg_num) 221 + { 222 + unsigned long val; 223 + 224 + (void) sun4v_m7_get_perfreg(reg_num, &val); 225 + 226 + return val; 227 + } 228 + 229 + static void m7_pcr_write(unsigned long reg_num, u64 val) 230 + { 231 + (void) sun4v_m7_set_perfreg(reg_num, val); 232 + } 233 + 234 + static const struct pcr_ops m7_pcr_ops = { 235 + .read_pcr = m7_pcr_read, 236 + .write_pcr = m7_pcr_write, 237 + .read_pic = n4_pic_read, 238 + .write_pic = n4_pic_write, 239 + .nmi_picl_value = n4_picl_value, 240 + .pcr_nmi_enable = (PCR_N4_PICNPT | PCR_N4_STRACE | 241 + PCR_N4_UTRACE | PCR_N4_TOE | 242 + (26 << PCR_N4_SL_SHIFT)), 243 + .pcr_nmi_disable = PCR_N4_PICNPT, 244 + }; 220 245 221 246 static unsigned long perf_hsvc_group; 222 247 static unsigned long perf_hsvc_major; ··· 271 246 272 247 case SUN4V_CHIP_NIAGARA5: 273 248 perf_hsvc_group = HV_GRP_T5_CPU; 249 + break; 250 + 251 + case SUN4V_CHIP_SPARC_M7: 252 + perf_hsvc_group = HV_GRP_M7_PERF; 274 253 break; 275 254 276 255 default: ··· 320 291 321 292 case SUN4V_CHIP_NIAGARA5: 322 293 pcr_ops = &n5_pcr_ops; 294 + break; 295 + 296 + case SUN4V_CHIP_SPARC_M7: 297 + pcr_ops = &m7_pcr_ops; 323 298 break; 324 299 325 300 default:
+43 -12
arch/sparc/kernel/perf_event.c
··· 792 792 .num_pic_regs = 4, 793 793 }; 794 794 795 + static void sparc_m7_write_pmc(int idx, u64 val) 796 + { 797 + u64 pcr; 798 + 799 + pcr = pcr_ops->read_pcr(idx); 800 + /* ensure ov and ntc are reset */ 801 + pcr &= ~(PCR_N4_OV | PCR_N4_NTC); 802 + 803 + pcr_ops->write_pic(idx, val & 0xffffffff); 804 + 805 + pcr_ops->write_pcr(idx, pcr); 806 + } 807 + 808 + static const struct sparc_pmu sparc_m7_pmu = { 809 + .event_map = niagara4_event_map, 810 + .cache_map = &niagara4_cache_map, 811 + .max_events = ARRAY_SIZE(niagara4_perfmon_event_map), 812 + .read_pmc = sparc_vt_read_pmc, 813 + .write_pmc = sparc_m7_write_pmc, 814 + .upper_shift = 5, 815 + .lower_shift = 5, 816 + .event_mask = 0x7ff, 817 + .user_bit = PCR_N4_UTRACE, 818 + .priv_bit = PCR_N4_STRACE, 819 + 820 + /* We explicitly don't support hypervisor tracing. */ 821 + .hv_bit = 0, 822 + 823 + .irq_bit = PCR_N4_TOE, 824 + .upper_nop = 0, 825 + .lower_nop = 0, 826 + .flags = 0, 827 + .max_hw_events = 4, 828 + .num_pcrs = 4, 829 + .num_pic_regs = 4, 830 + }; 795 831 static const struct sparc_pmu *sparc_pmu __read_mostly; 796 832 797 833 static u64 event_encoding(u64 event_id, int idx) ··· 996 960 cpuc->pcr[0] |= cpuc->event[0]->hw.config_base; 997 961 } 998 962 963 + static void sparc_pmu_start(struct perf_event *event, int flags); 964 + 999 965 /* On this PMU each PIC has it's own PCR control register. 
*/ 1000 966 static void calculate_multiple_pcrs(struct cpu_hw_events *cpuc) 1001 967 { ··· 1010 972 struct perf_event *cp = cpuc->event[i]; 1011 973 struct hw_perf_event *hwc = &cp->hw; 1012 974 int idx = hwc->idx; 1013 - u64 enc; 1014 975 1015 976 if (cpuc->current_idx[i] != PIC_NO_INDEX) 1016 977 continue; 1017 978 1018 - sparc_perf_event_set_period(cp, hwc, idx); 1019 979 cpuc->current_idx[i] = idx; 1020 980 1021 - enc = perf_event_get_enc(cpuc->events[i]); 1022 - cpuc->pcr[idx] &= ~mask_for_index(idx); 1023 - if (hwc->state & PERF_HES_STOPPED) 1024 - cpuc->pcr[idx] |= nop_for_index(idx); 1025 - else 1026 - cpuc->pcr[idx] |= event_encoding(enc, idx); 981 + sparc_pmu_start(cp, PERF_EF_RELOAD); 1027 982 } 1028 983 out: 1029 984 for (i = 0; i < cpuc->n_events; i++) { ··· 1132 1101 int i; 1133 1102 1134 1103 local_irq_save(flags); 1135 - perf_pmu_disable(event->pmu); 1136 1104 1137 1105 for (i = 0; i < cpuc->n_events; i++) { 1138 1106 if (event == cpuc->event[i]) { ··· 1157 1127 } 1158 1128 } 1159 1129 1160 - perf_pmu_enable(event->pmu); 1161 1130 local_irq_restore(flags); 1162 1131 } 1163 1132 ··· 1390 1361 unsigned long flags; 1391 1362 1392 1363 local_irq_save(flags); 1393 - perf_pmu_disable(event->pmu); 1394 1364 1395 1365 n0 = cpuc->n_events; 1396 1366 if (n0 >= sparc_pmu->max_hw_events) ··· 1422 1394 1423 1395 ret = 0; 1424 1396 out: 1425 - perf_pmu_enable(event->pmu); 1426 1397 local_irq_restore(flags); 1427 1398 return ret; 1428 1399 } ··· 1692 1665 if (!strcmp(sparc_pmu_type, "niagara4") || 1693 1666 !strcmp(sparc_pmu_type, "niagara5")) { 1694 1667 sparc_pmu = &niagara4_pmu; 1668 + return true; 1669 + } 1670 + if (!strcmp(sparc_pmu_type, "sparc-m7")) { 1671 + sparc_pmu = &sparc_m7_pmu; 1695 1672 return true; 1696 1673 } 1697 1674 return false;
+4
arch/sparc/kernel/process_64.c
··· 287 287 printk(" TPC[%lx] O7[%lx] I7[%lx] RPC[%lx]\n", 288 288 gp->tpc, gp->o7, gp->i7, gp->rpc); 289 289 } 290 + 291 + touch_nmi_watchdog(); 290 292 } 291 293 292 294 memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot)); ··· 364 362 (cpu == this_cpu ? '*' : ' '), cpu, 365 363 pp->pcr[0], pp->pcr[1], pp->pcr[2], pp->pcr[3], 366 364 pp->pic[0], pp->pic[1], pp->pic[2], pp->pic[3]); 365 + 366 + touch_nmi_watchdog(); 367 367 } 368 368 369 369 memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot));
+24 -3
arch/sparc/kernel/smp_64.c
··· 1406 1406 scheduler_ipi(); 1407 1407 } 1408 1408 1409 - /* This is a nop because we capture all other cpus 1410 - * anyways when making the PROM active. 1411 - */ 1409 + static void stop_this_cpu(void *dummy) 1410 + { 1411 + prom_stopself(); 1412 + } 1413 + 1412 1414 void smp_send_stop(void) 1413 1415 { 1416 + int cpu; 1417 + 1418 + if (tlb_type == hypervisor) { 1419 + for_each_online_cpu(cpu) { 1420 + if (cpu == smp_processor_id()) 1421 + continue; 1422 + #ifdef CONFIG_SUN_LDOMS 1423 + if (ldom_domaining_enabled) { 1424 + unsigned long hv_err; 1425 + hv_err = sun4v_cpu_stop(cpu); 1426 + if (hv_err) 1427 + printk(KERN_ERR "sun4v_cpu_stop() " 1428 + "failed err=%lu\n", hv_err); 1429 + } else 1430 + #endif 1431 + prom_stopcpu_cpuid(cpu); 1432 + } 1433 + } else 1434 + smp_call_function(stop_this_cpu, NULL, 0); 1414 1435 } 1415 1436 1416 1437 /**
-5
arch/sparc/kernel/starfire.c
··· 28 28 this_is_starfire = 1; 29 29 } 30 30 31 - int starfire_hard_smp_processor_id(void) 32 - { 33 - return upa_readl(0x1fff40000d0UL); 34 - } 35 - 36 31 /* 37 32 * Each Starfire board has 32 registers which perform translation 38 33 * and delivery of traditional interrupt packets into the extended
+1 -1
arch/sparc/kernel/sys_sparc_64.c
··· 333 333 long err; 334 334 335 335 /* No need for backward compatibility. We can start fresh... */ 336 - if (call <= SEMCTL) { 336 + if (call <= SEMTIMEDOP) { 337 337 switch (call) { 338 338 case SEMOP: 339 339 err = sys_semtimedop(first, ptr,
+2 -28
arch/sparc/kernel/traps_64.c
··· 2427 2427 } 2428 2428 user_instruction_dump ((unsigned int __user *) regs->tpc); 2429 2429 } 2430 + if (panic_on_oops) 2431 + panic("Fatal exception"); 2430 2432 if (regs->tstate & TSTATE_PRIV) 2431 2433 do_exit(SIGKILL); 2432 2434 do_exit(SIGSEGV); ··· 2566 2564 die_if_kernel("TL0: Cache Error Exception", regs); 2567 2565 } 2568 2566 2569 - void do_cee_tl1(struct pt_regs *regs) 2570 - { 2571 - exception_enter(); 2572 - dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2573 - die_if_kernel("TL1: Cache Error Exception", regs); 2574 - } 2575 - 2576 - void do_dae_tl1(struct pt_regs *regs) 2577 - { 2578 - exception_enter(); 2579 - dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2580 - die_if_kernel("TL1: Data Access Exception", regs); 2581 - } 2582 - 2583 - void do_iae_tl1(struct pt_regs *regs) 2584 - { 2585 - exception_enter(); 2586 - dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2587 - die_if_kernel("TL1: Instruction Access Exception", regs); 2588 - } 2589 - 2590 2567 void do_div0_tl1(struct pt_regs *regs) 2591 2568 { 2592 2569 exception_enter(); 2593 2570 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2594 2571 die_if_kernel("TL1: DIV0 Exception", regs); 2595 - } 2596 - 2597 - void do_fpdis_tl1(struct pt_regs *regs) 2598 - { 2599 - exception_enter(); 2600 - dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2601 - die_if_kernel("TL1: FPU Disabled", regs); 2602 2572 } 2603 2573 2604 2574 void do_fpieee_tl1(struct pt_regs *regs)
+32 -3
arch/sparc/lib/memmove.S
··· 8 8 9 9 .text 10 10 ENTRY(memmove) /* o0=dst o1=src o2=len */ 11 - mov %o0, %g1 11 + brz,pn %o2, 99f 12 + mov %o0, %g1 13 + 12 14 cmp %o0, %o1 13 - bleu,pt %xcc, memcpy 15 + bleu,pt %xcc, 2f 14 16 add %o1, %o2, %g7 15 17 cmp %g7, %o0 16 18 bleu,pt %xcc, memcpy ··· 26 24 stb %g7, [%o0] 27 25 bne,pt %icc, 1b 28 26 sub %o0, 1, %o0 29 - 27 + 99: 30 28 retl 31 29 mov %g1, %o0 30 + 31 + /* We can't just call memcpy for these memmove cases. On some 32 + * chips the memcpy uses cache initializing stores and when dst 33 + * and src are close enough, those can clobber the source data 34 + * before we've loaded it in. 35 + */ 36 + 2: or %o0, %o1, %g7 37 + or %o2, %g7, %g7 38 + andcc %g7, 0x7, %g0 39 + bne,pn %xcc, 4f 40 + nop 41 + 42 + 3: ldx [%o1], %g7 43 + add %o1, 8, %o1 44 + subcc %o2, 8, %o2 45 + add %o0, 8, %o0 46 + bne,pt %icc, 3b 47 + stx %g7, [%o0 - 0x8] 48 + ba,a,pt %xcc, 99b 49 + 50 + 4: ldub [%o1], %g7 51 + add %o1, 1, %o1 52 + subcc %o2, 1, %o2 53 + add %o0, 1, %o0 54 + bne,pt %icc, 4b 55 + stb %g7, [%o0 - 0x1] 56 + ba,a,pt %xcc, 99b 32 57 ENDPROC(memmove)
+1 -1
arch/sparc/mm/init_64.c
··· 2820 2820 2821 2821 return 0; 2822 2822 } 2823 - device_initcall(report_memory); 2823 + arch_initcall(report_memory); 2824 2824 2825 2825 #ifdef CONFIG_SMP 2826 2826 #define do_flush_tlb_kernel_range smp_flush_tlb_kernel_range
+1
arch/x86/Kconfig
··· 499 499 depends on X86_IO_APIC 500 500 select IOSF_MBI 501 501 select INTEL_IMR 502 + select COMMON_CLK 502 503 ---help--- 503 504 Select to include support for Quark X1000 SoC. 504 505 Say Y here if you have a Quark based system such as the Arduino
+1 -33
arch/x86/boot/compressed/aslr.c
··· 14 14 static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@" 15 15 LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION; 16 16 17 - struct kaslr_setup_data { 18 - __u64 next; 19 - __u32 type; 20 - __u32 len; 21 - __u8 data[1]; 22 - } kaslr_setup_data; 23 - 24 17 #define I8254_PORT_CONTROL 0x43 25 18 #define I8254_PORT_COUNTER0 0x40 26 19 #define I8254_CMD_READBACK 0xC0 ··· 295 302 return slots_fetch_random(); 296 303 } 297 304 298 - static void add_kaslr_setup_data(struct boot_params *params, __u8 enabled) 299 - { 300 - struct setup_data *data; 301 - 302 - kaslr_setup_data.type = SETUP_KASLR; 303 - kaslr_setup_data.len = 1; 304 - kaslr_setup_data.next = 0; 305 - kaslr_setup_data.data[0] = enabled; 306 - 307 - data = (struct setup_data *)(unsigned long)params->hdr.setup_data; 308 - 309 - while (data && data->next) 310 - data = (struct setup_data *)(unsigned long)data->next; 311 - 312 - if (data) 313 - data->next = (unsigned long)&kaslr_setup_data; 314 - else 315 - params->hdr.setup_data = (unsigned long)&kaslr_setup_data; 316 - 317 - } 318 - 319 - unsigned char *choose_kernel_location(struct boot_params *params, 320 - unsigned char *input, 305 + unsigned char *choose_kernel_location(unsigned char *input, 321 306 unsigned long input_size, 322 307 unsigned char *output, 323 308 unsigned long output_size) ··· 306 335 #ifdef CONFIG_HIBERNATION 307 336 if (!cmdline_find_option_bool("kaslr")) { 308 337 debug_putstr("KASLR disabled by default...\n"); 309 - add_kaslr_setup_data(params, 0); 310 338 goto out; 311 339 } 312 340 #else 313 341 if (cmdline_find_option_bool("nokaslr")) { 314 342 debug_putstr("KASLR disabled by cmdline...\n"); 315 - add_kaslr_setup_data(params, 0); 316 343 goto out; 317 344 } 318 345 #endif 319 - add_kaslr_setup_data(params, 1); 320 346 321 347 /* Record the various known unsafe memory ranges. */ 322 348 mem_avoid_init((unsigned long)input, input_size,
+1 -2
arch/x86/boot/compressed/misc.c
··· 401 401 * the entire decompressed kernel plus relocation table, or the 402 402 * entire decompressed kernel plus .bss and .brk sections. 403 403 */ 404 - output = choose_kernel_location(real_mode, input_data, input_len, 405 - output, 404 + output = choose_kernel_location(input_data, input_len, output, 406 405 output_len > run_size ? output_len 407 406 : run_size); 408 407
+2 -4
arch/x86/boot/compressed/misc.h
··· 57 57 58 58 #if CONFIG_RANDOMIZE_BASE 59 59 /* aslr.c */ 60 - unsigned char *choose_kernel_location(struct boot_params *params, 61 - unsigned char *input, 60 + unsigned char *choose_kernel_location(unsigned char *input, 62 61 unsigned long input_size, 63 62 unsigned char *output, 64 63 unsigned long output_size); ··· 65 66 bool has_cpuflag(int flag); 66 67 #else 67 68 static inline 68 - unsigned char *choose_kernel_location(struct boot_params *params, 69 - unsigned char *input, 69 + unsigned char *choose_kernel_location(unsigned char *input, 70 70 unsigned long input_size, 71 71 unsigned char *output, 72 72 unsigned long output_size)
+2 -2
arch/x86/crypto/aesni-intel_glue.c
··· 1155 1155 src = kmalloc(req->cryptlen + req->assoclen, GFP_ATOMIC); 1156 1156 if (!src) 1157 1157 return -ENOMEM; 1158 - assoc = (src + req->cryptlen + auth_tag_len); 1158 + assoc = (src + req->cryptlen); 1159 1159 scatterwalk_map_and_copy(src, req->src, 0, req->cryptlen, 0); 1160 1160 scatterwalk_map_and_copy(assoc, req->assoc, 0, 1161 1161 req->assoclen, 0); ··· 1180 1180 scatterwalk_done(&src_sg_walk, 0, 0); 1181 1181 scatterwalk_done(&assoc_sg_walk, 0, 0); 1182 1182 } else { 1183 - scatterwalk_map_and_copy(dst, req->dst, 0, req->cryptlen, 1); 1183 + scatterwalk_map_and_copy(dst, req->dst, 0, tempCipherLen, 1); 1184 1184 kfree(src); 1185 1185 } 1186 1186 return retval;
+1 -1
arch/x86/include/asm/fpu-internal.h
··· 370 370 preempt_disable(); 371 371 tsk->thread.fpu_counter = 0; 372 372 __drop_fpu(tsk); 373 - clear_used_math(); 373 + clear_stopped_child_used_math(tsk); 374 374 preempt_enable(); 375 375 } 376 376
-2
arch/x86/include/asm/page_types.h
··· 51 51 extern unsigned long max_low_pfn_mapped; 52 52 extern unsigned long max_pfn_mapped; 53 53 54 - extern bool kaslr_enabled; 55 - 56 54 static inline phys_addr_t get_max_mapped(void) 57 55 { 58 56 return (phys_addr_t)max_pfn_mapped << PAGE_SHIFT;
+2
arch/x86/include/asm/pci_x86.h
··· 93 93 extern int (*pcibios_enable_irq)(struct pci_dev *dev); 94 94 extern void (*pcibios_disable_irq)(struct pci_dev *dev); 95 95 96 + extern bool mp_should_keep_irq(struct device *dev); 97 + 96 98 struct pci_raw_ops { 97 99 int (*read)(unsigned int domain, unsigned int bus, unsigned int devfn, 98 100 int reg, int len, u32 *val);
+11 -17
arch/x86/include/asm/xsave.h
··· 82 82 if (boot_cpu_has(X86_FEATURE_XSAVES)) 83 83 asm volatile("1:"XSAVES"\n\t" 84 84 "2:\n\t" 85 - : : "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask) 85 + xstate_fault 86 + : "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask) 86 87 : "memory"); 87 88 else 88 89 asm volatile("1:"XSAVE"\n\t" 89 90 "2:\n\t" 90 - : : "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask) 91 + xstate_fault 92 + : "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask) 91 93 : "memory"); 92 - 93 - asm volatile(xstate_fault 94 - : "0" (0) 95 - : "memory"); 96 - 97 94 return err; 98 95 } 99 96 ··· 109 112 if (boot_cpu_has(X86_FEATURE_XSAVES)) 110 113 asm volatile("1:"XRSTORS"\n\t" 111 114 "2:\n\t" 112 - : : "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask) 115 + xstate_fault 116 + : "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask) 113 117 : "memory"); 114 118 else 115 119 asm volatile("1:"XRSTOR"\n\t" 116 120 "2:\n\t" 117 - : : "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask) 121 + xstate_fault 122 + : "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask) 118 123 : "memory"); 119 - 120 - asm volatile(xstate_fault 121 - : "0" (0) 122 - : "memory"); 123 - 124 124 return err; 125 125 } 126 126 ··· 143 149 */ 144 150 alternative_input_2( 145 151 "1:"XSAVE, 146 - "1:"XSAVEOPT, 152 + XSAVEOPT, 147 153 X86_FEATURE_XSAVEOPT, 148 - "1:"XSAVES, 154 + XSAVES, 149 155 X86_FEATURE_XSAVES, 150 156 [fx] "D" (fx), "a" (lmask), "d" (hmask) : 151 157 "memory"); ··· 172 178 */ 173 179 alternative_input( 174 180 "1: " XRSTOR, 175 - "1: " XRSTORS, 181 + XRSTORS, 176 182 X86_FEATURE_XSAVES, 177 183 "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask) 178 184 : "memory");
-1
arch/x86/include/uapi/asm/bootparam.h
··· 7 7 #define SETUP_DTB 2 8 8 #define SETUP_PCI 3 9 9 #define SETUP_EFI 4 10 - #define SETUP_KASLR 5 11 10 12 11 /* ram_size flags */ 13 12 #define RAMDISK_IMAGE_START_MASK 0x07FF
+25
arch/x86/kernel/acpi/boot.c
··· 1338 1338 } 1339 1339 1340 1340 /* 1341 + * ACPI offers an alternative platform interface model that removes 1342 + * ACPI hardware requirements for platforms that do not implement 1343 + * the PC Architecture. 1344 + * 1345 + * We initialize the Hardware-reduced ACPI model here: 1346 + */ 1347 + static void __init acpi_reduced_hw_init(void) 1348 + { 1349 + if (acpi_gbl_reduced_hardware) { 1350 + /* 1351 + * Override x86_init functions and bypass legacy pic 1352 + * in Hardware-reduced ACPI mode 1353 + */ 1354 + x86_init.timers.timer_init = x86_init_noop; 1355 + x86_init.irqs.pre_vector_init = x86_init_noop; 1356 + legacy_pic = &null_legacy_pic; 1357 + } 1358 + } 1359 + 1360 + /* 1341 1361 * If your system is blacklisted here, but you find that acpi=force 1342 1362 * works for you, please contact linux-acpi@vger.kernel.org 1343 1363 */ ··· 1555 1535 * Process the Multiple APIC Description Table (MADT), if present 1556 1536 */ 1557 1537 early_acpi_process_madt(); 1538 + 1539 + /* 1540 + * Hardware-reduced ACPI mode initialization: 1541 + */ 1542 + acpi_reduced_hw_init(); 1558 1543 1559 1544 return 0; 1560 1545 }
+16 -6
arch/x86/kernel/apic/apic_numachip.c
··· 37 37 static unsigned int get_apic_id(unsigned long x) 38 38 { 39 39 unsigned long value; 40 - unsigned int id; 40 + unsigned int id = (x >> 24) & 0xff; 41 41 42 - rdmsrl(MSR_FAM10H_NODE_ID, value); 43 - id = ((x >> 24) & 0xffU) | ((value << 2) & 0xff00U); 42 + if (static_cpu_has_safe(X86_FEATURE_NODEID_MSR)) { 43 + rdmsrl(MSR_FAM10H_NODE_ID, value); 44 + id |= (value << 2) & 0xff00; 45 + } 44 46 45 47 return id; 46 48 } ··· 157 155 158 156 static void fixup_cpu_id(struct cpuinfo_x86 *c, int node) 159 157 { 160 - if (c->phys_proc_id != node) { 161 - c->phys_proc_id = node; 162 - per_cpu(cpu_llc_id, smp_processor_id()) = node; 158 + u64 val; 159 + u32 nodes = 1; 160 + 161 + this_cpu_write(cpu_llc_id, node); 162 + 163 + /* Account for nodes per socket in multi-core-module processors */ 164 + if (static_cpu_has_safe(X86_FEATURE_NODEID_MSR)) { 165 + rdmsrl(MSR_FAM10H_NODE_ID, val); 166 + nodes = ((val >> 3) & 7) + 1; 163 167 } 168 + 169 + c->phys_proc_id = node / nodes; 164 170 } 165 171 166 172 static int __init numachip_system_init(void)
+5 -5
arch/x86/kernel/cpu/perf_event_intel.c
··· 212 212 INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */ 213 213 INTEL_EVENT_CONSTRAINT(0xcd, 0x8), /* MEM_TRANS_RETIRED.LOAD_LATENCY */ 214 214 /* CYCLE_ACTIVITY.CYCLES_L1D_PENDING */ 215 - INTEL_EVENT_CONSTRAINT(0x08a3, 0x4), 215 + INTEL_UEVENT_CONSTRAINT(0x08a3, 0x4), 216 216 /* CYCLE_ACTIVITY.STALLS_L1D_PENDING */ 217 - INTEL_EVENT_CONSTRAINT(0x0ca3, 0x4), 217 + INTEL_UEVENT_CONSTRAINT(0x0ca3, 0x4), 218 218 /* CYCLE_ACTIVITY.CYCLES_NO_EXECUTE */ 219 - INTEL_EVENT_CONSTRAINT(0x04a3, 0xf), 219 + INTEL_UEVENT_CONSTRAINT(0x04a3, 0xf), 220 220 EVENT_CONSTRAINT_END 221 221 }; 222 222 ··· 1649 1649 if (c) 1650 1650 return c; 1651 1651 1652 - c = intel_pebs_constraints(event); 1652 + c = intel_shared_regs_constraints(cpuc, event); 1653 1653 if (c) 1654 1654 return c; 1655 1655 1656 - c = intel_shared_regs_constraints(cpuc, event); 1656 + c = intel_pebs_constraints(event); 1657 1657 if (c) 1658 1658 return c; 1659 1659
+37 -10
arch/x86/kernel/entry_64.S
··· 269 269 testl $3, CS-ARGOFFSET(%rsp) # from kernel_thread? 270 270 jz 1f 271 271 272 - testl $_TIF_IA32, TI_flags(%rcx) # 32-bit compat task needs IRET 273 - jnz int_ret_from_sys_call 274 - 275 - RESTORE_TOP_OF_STACK %rdi, -ARGOFFSET 276 - jmp ret_from_sys_call # go to the SYSRET fastpath 272 + /* 273 + * By the time we get here, we have no idea whether our pt_regs, 274 + * ti flags, and ti status came from the 64-bit SYSCALL fast path, 275 + * the slow path, or one of the ia32entry paths. 276 + * Use int_ret_from_sys_call to return, since it can safely handle 277 + * all of the above. 278 + */ 279 + jmp int_ret_from_sys_call 277 280 278 281 1: 279 282 subq $REST_SKIP, %rsp # leave space for volatiles ··· 364 361 * Has incomplete stack frame and undefined top of stack. 365 362 */ 366 363 ret_from_sys_call: 367 - testl $_TIF_ALLWORK_MASK,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET) 368 - jnz int_ret_from_sys_call_fixup /* Go the the slow path */ 369 - 370 364 LOCKDEP_SYS_EXIT 371 365 DISABLE_INTERRUPTS(CLBR_NONE) 372 366 TRACE_IRQS_OFF 367 + 368 + /* 369 + * We must check ti flags with interrupts (or at least preemption) 370 + * off because we must *never* return to userspace without 371 + * processing exit work that is enqueued if we're preempted here. 372 + * In particular, returning to userspace with any of the one-shot 373 + * flags (TIF_NOTIFY_RESUME, TIF_USER_RETURN_NOTIFY, etc) set is 374 + * very bad.
375 + */ 376 + testl $_TIF_ALLWORK_MASK,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET) 377 + jnz int_ret_from_sys_call_fixup /* Go the the slow path */ 378 + 373 379 CFI_REMEMBER_STATE 374 380 /* 375 381 * sysretq will re-enable interrupts: ··· 395 383 396 384 int_ret_from_sys_call_fixup: 397 385 FIXUP_TOP_OF_STACK %r11, -ARGOFFSET 398 - jmp int_ret_from_sys_call 386 + jmp int_ret_from_sys_call_irqs_off 399 387 400 388 /* Do syscall tracing */ 401 389 tracesys: ··· 441 429 GLOBAL(int_ret_from_sys_call) 442 430 DISABLE_INTERRUPTS(CLBR_NONE) 443 431 TRACE_IRQS_OFF 432 + int_ret_from_sys_call_irqs_off: 444 433 movl $_TIF_ALLWORK_MASK,%edi 445 434 /* edi: mask to check */ 446 435 GLOBAL(int_with_check) ··· 799 786 cmpq %r11,(EFLAGS-ARGOFFSET)(%rsp) /* R11 == RFLAGS */ 800 787 jne opportunistic_sysret_failed 801 788 802 - testq $X86_EFLAGS_RF,%r11 /* sysret can't restore RF */ 789 + /* 790 + * SYSRET can't restore RF. SYSRET can restore TF, but unlike IRET, 791 + * restoring TF results in a trap from userspace immediately after 792 + * SYSRET. This would cause an infinite loop whenever #DB happens 793 + * with register state that satisfies the opportunistic SYSRET 794 + * conditions. For example, single-stepping this user code: 795 + * 796 + * movq $stuck_here,%rcx 797 + * pushfq 798 + * popq %r11 799 + * stuck_here: 800 + * 801 + * would never get past 'stuck_here'. 802 + */ 803 + testq $(X86_EFLAGS_RF|X86_EFLAGS_TF), %r11 803 804 jnz opportunistic_sysret_failed 804 805 805 806 /* nothing to check for RSP */
+1 -1
arch/x86/kernel/kgdb.c
··· 72 72 { "bx", 8, offsetof(struct pt_regs, bx) }, 73 73 { "cx", 8, offsetof(struct pt_regs, cx) }, 74 74 { "dx", 8, offsetof(struct pt_regs, dx) }, 75 - { "si", 8, offsetof(struct pt_regs, dx) }, 75 + { "si", 8, offsetof(struct pt_regs, si) }, 76 76 { "di", 8, offsetof(struct pt_regs, di) }, 77 77 { "bp", 8, offsetof(struct pt_regs, bp) }, 78 78 { "sp", 8, offsetof(struct pt_regs, sp) },
+9 -1
arch/x86/kernel/module.c
··· 47 47 48 48 #ifdef CONFIG_RANDOMIZE_BASE 49 49 static unsigned long module_load_offset; 50 + static int randomize_modules = 1; 50 51 51 52 /* Mutex protects the module_load_offset. */ 52 53 static DEFINE_MUTEX(module_kaslr_mutex); 53 54 55 + static int __init parse_nokaslr(char *p) 56 + { 57 + randomize_modules = 0; 58 + return 0; 59 + } 60 + early_param("nokaslr", parse_nokaslr); 61 + 54 62 static unsigned long int get_module_load_offset(void) 55 63 { 56 - if (kaslr_enabled) { 64 + if (randomize_modules) { 57 65 mutex_lock(&module_kaslr_mutex); 58 66 /* 59 67 * Calculate the module_load_offset the first time this
+10
arch/x86/kernel/reboot.c
··· 183 183 }, 184 184 }, 185 185 186 + /* ASRock */ 187 + { /* Handle problems with rebooting on ASRock Q1900DC-ITX */ 188 + .callback = set_pci_reboot, 189 + .ident = "ASRock Q1900DC-ITX", 190 + .matches = { 191 + DMI_MATCH(DMI_BOARD_VENDOR, "ASRock"), 192 + DMI_MATCH(DMI_BOARD_NAME, "Q1900DC-ITX"), 193 + }, 194 + }, 195 + 186 196 /* ASUS */ 187 197 { /* Handle problems with rebooting on ASUS P4S800 */ 188 198 .callback = set_bios_reboot,
+4 -18
arch/x86/kernel/setup.c
··· 122 122 unsigned long max_low_pfn_mapped; 123 123 unsigned long max_pfn_mapped; 124 124 125 - bool __read_mostly kaslr_enabled = false; 126 - 127 125 #ifdef CONFIG_DMI 128 126 RESERVE_BRK(dmi_alloc, 65536); 129 127 #endif ··· 425 427 } 426 428 #endif /* CONFIG_BLK_DEV_INITRD */ 427 429 428 - static void __init parse_kaslr_setup(u64 pa_data, u32 data_len) 429 - { 430 - kaslr_enabled = (bool)(pa_data + sizeof(struct setup_data)); 431 - } 432 - 433 430 static void __init parse_setup_data(void) 434 431 { 435 432 struct setup_data *data; ··· 449 456 break; 450 457 case SETUP_EFI: 451 458 parse_efi_setup(pa_data, data_len); 452 - break; 453 - case SETUP_KASLR: 454 - parse_kaslr_setup(pa_data, data_len); 455 459 break; 456 460 default: 457 461 break; ··· 832 842 static int 833 843 dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p) 834 844 { 835 - if (kaslr_enabled) 836 - pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n", 837 - (unsigned long)&_text - __START_KERNEL, 838 - __START_KERNEL, 839 - __START_KERNEL_map, 840 - MODULES_VADDR-1); 841 - else 842 - pr_emerg("Kernel Offset: disabled\n"); 845 + pr_emerg("Kernel Offset: 0x%lx from 0x%lx " 846 + "(relocation range: 0x%lx-0x%lx)\n", 847 + (unsigned long)&_text - __START_KERNEL, __START_KERNEL, 848 + __START_KERNEL_map, MODULES_VADDR-1); 843 849 844 850 return 0; 845 851 }
+2 -2
arch/x86/kernel/traps.c
··· 384 384 goto exit; 385 385 conditional_sti(regs); 386 386 387 - if (!user_mode(regs)) 387 + if (!user_mode_vm(regs)) 388 388 die("bounds", regs, error_code); 389 389 390 390 if (!cpu_feature_enabled(X86_FEATURE_MPX)) { ··· 637 637 * then it's very likely the result of an icebp/int01 trap. 638 638 * User wants a sigtrap for that. 639 639 */ 640 - if (!dr6 && user_mode(regs)) 640 + if (!dr6 && user_mode_vm(regs)) 641 641 user_icebp = 1; 642 642 643 643 /* Catch kmemcheck conditions first of all! */
+4 -3
arch/x86/kernel/xsave.c
··· 379 379 * thread's fpu state, reconstruct fxstate from the fsave 380 380 * header. Sanitize the copied state etc. 381 381 */ 382 - struct xsave_struct *xsave = &tsk->thread.fpu.state->xsave; 382 + struct fpu *fpu = &tsk->thread.fpu; 383 383 struct user_i387_ia32_struct env; 384 384 int err = 0; 385 385 ··· 393 393 */ 394 394 drop_fpu(tsk); 395 395 396 - if (__copy_from_user(xsave, buf_fx, state_size) || 396 + if (__copy_from_user(&fpu->state->xsave, buf_fx, state_size) || 397 397 __copy_from_user(&env, buf, sizeof(env))) { 398 + fpu_finit(fpu); 398 399 err = -1; 399 400 } else { 400 401 sanitize_restored_xstate(tsk, &env, xstate_bv, fx_only); 401 - set_used_math(); 402 402 } 403 403 404 + set_used_math(); 404 405 if (use_eager_fpu()) { 405 406 preempt_disable(); 406 407 math_state_restore();
+2 -1
arch/x86/kvm/emulate.c
··· 4950 4950 goto done; 4951 4951 } 4952 4952 } 4953 - ctxt->dst.orig_val = ctxt->dst.val; 4953 + /* Copy full 64-bit value for CMPXCHG8B. */ 4954 + ctxt->dst.orig_val64 = ctxt->dst.val64; 4954 4955 4955 4956 special_insn: 4956 4957
+1
arch/x86/kvm/i8259.c
··· 507 507 return -EOPNOTSUPP; 508 508 509 509 if (len != 1) { 510 + memset(val, 0, len); 510 511 pr_pic_unimpl("non byte read\n"); 511 512 return 0; 512 513 }
+3 -1
arch/x86/kvm/ioapic.c
··· 422 422 struct kvm_ioapic *ioapic, int vector, int trigger_mode) 423 423 { 424 424 int i; 425 + struct kvm_lapic *apic = vcpu->arch.apic; 425 426 426 427 for (i = 0; i < IOAPIC_NUM_PINS; i++) { 427 428 union kvm_ioapic_redirect_entry *ent = &ioapic->redirtbl[i]; ··· 444 443 kvm_notify_acked_irq(ioapic->kvm, KVM_IRQCHIP_IOAPIC, i); 445 444 spin_lock(&ioapic->lock); 446 445 447 - if (trigger_mode != IOAPIC_LEVEL_TRIG) 446 + if (trigger_mode != IOAPIC_LEVEL_TRIG || 447 + kvm_apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI) 448 448 continue; 449 449 450 450 ASSERT(ent->fields.trig_mode == IOAPIC_LEVEL_TRIG);
+3 -4
arch/x86/kvm/lapic.c
··· 833 833 834 834 static void kvm_ioapic_send_eoi(struct kvm_lapic *apic, int vector) 835 835 { 836 - if (!(kvm_apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI) && 837 - kvm_ioapic_handles_vector(apic->vcpu->kvm, vector)) { 836 + if (kvm_ioapic_handles_vector(apic->vcpu->kvm, vector)) { 838 837 int trigger_mode; 839 838 if (apic_test_vector(vector, apic->regs + APIC_TMR)) 840 839 trigger_mode = IOAPIC_LEVEL_TRIG; ··· 1571 1572 apic_set_reg(apic, APIC_TMR + 0x10 * i, 0); 1572 1573 } 1573 1574 apic->irr_pending = kvm_apic_vid_enabled(vcpu->kvm); 1574 - apic->isr_count = kvm_apic_vid_enabled(vcpu->kvm); 1575 + apic->isr_count = kvm_x86_ops->hwapic_isr_update ? 1 : 0; 1575 1576 apic->highest_isr_cache = -1; 1576 1577 update_divide_count(apic); 1577 1578 atomic_set(&apic->lapic_timer.pending, 0); ··· 1781 1782 update_divide_count(apic); 1782 1783 start_apic_timer(apic); 1783 1784 apic->irr_pending = true; 1784 - apic->isr_count = kvm_apic_vid_enabled(vcpu->kvm) ? 1785 + apic->isr_count = kvm_x86_ops->hwapic_isr_update ? 1785 1786 1 : count_vectors(apic->regs + APIC_ISR); 1786 1787 apic->highest_isr_cache = -1; 1787 1788 if (kvm_x86_ops->hwapic_irr_update)
-6
arch/x86/kvm/svm.c
··· 3649 3649 return; 3650 3650 } 3651 3651 3652 - static void svm_hwapic_isr_update(struct kvm *kvm, int isr) 3653 - { 3654 - return; 3655 - } 3656 - 3657 3652 static void svm_sync_pir_to_irr(struct kvm_vcpu *vcpu) 3658 3653 { 3659 3654 return; ··· 4398 4403 .set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode, 4399 4404 .vm_has_apicv = svm_vm_has_apicv, 4400 4405 .load_eoi_exitmap = svm_load_eoi_exitmap, 4401 - .hwapic_isr_update = svm_hwapic_isr_update, 4402 4406 .sync_pir_to_irr = svm_sync_pir_to_irr, 4403 4407 4404 4408 .set_tss_addr = svm_set_tss_addr,
+26 -15
arch/x86/kvm/vmx.c
··· 2168 2168 { 2169 2169 unsigned long *msr_bitmap; 2170 2170 2171 - if (irqchip_in_kernel(vcpu->kvm) && apic_x2apic_mode(vcpu->arch.apic)) { 2171 + if (is_guest_mode(vcpu)) 2172 + msr_bitmap = vmx_msr_bitmap_nested; 2173 + else if (irqchip_in_kernel(vcpu->kvm) && 2174 + apic_x2apic_mode(vcpu->arch.apic)) { 2172 2175 if (is_long_mode(vcpu)) 2173 2176 msr_bitmap = vmx_msr_bitmap_longmode_x2apic; 2174 2177 else ··· 2479 2476 if (enable_ept) { 2480 2477 /* nested EPT: emulate EPT also to L1 */ 2481 2478 vmx->nested.nested_vmx_secondary_ctls_high |= 2482 - SECONDARY_EXEC_ENABLE_EPT | 2483 - SECONDARY_EXEC_UNRESTRICTED_GUEST; 2479 + SECONDARY_EXEC_ENABLE_EPT; 2484 2480 vmx->nested.nested_vmx_ept_caps = VMX_EPT_PAGE_WALK_4_BIT | 2485 2481 VMX_EPTP_WB_BIT | VMX_EPT_2MB_PAGE_BIT | 2486 2482 VMX_EPT_INVEPT_BIT; ··· 2492 2490 vmx->nested.nested_vmx_ept_caps |= VMX_EPT_EXTENT_GLOBAL_BIT; 2493 2491 } else 2494 2492 vmx->nested.nested_vmx_ept_caps = 0; 2493 + 2494 + if (enable_unrestricted_guest) 2495 + vmx->nested.nested_vmx_secondary_ctls_high |= 2496 + SECONDARY_EXEC_UNRESTRICTED_GUEST; 2495 2497 2496 2498 /* miscellaneous data */ 2497 2499 rdmsr(MSR_IA32_VMX_MISC, ··· 4373 4367 return 0; 4374 4368 } 4375 4369 4370 + static inline bool kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu) 4371 + { 4372 + #ifdef CONFIG_SMP 4373 + if (vcpu->mode == IN_GUEST_MODE) { 4374 + apic->send_IPI_mask(get_cpu_mask(vcpu->cpu), 4375 + POSTED_INTR_VECTOR); 4376 + return true; 4377 + } 4378 + #endif 4379 + return false; 4380 + } 4381 + 4376 4382 static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu, 4377 4383 int vector) 4378 4384 { ··· 4393 4375 if (is_guest_mode(vcpu) && 4394 4376 vector == vmx->nested.posted_intr_nv) { 4395 4377 /* the PIR and ON have been set by L1.
*/ 4396 - if (vcpu->mode == IN_GUEST_MODE) 4397 - apic->send_IPI_mask(get_cpu_mask(vcpu->cpu), 4398 - POSTED_INTR_VECTOR); 4378 + kvm_vcpu_trigger_posted_interrupt(vcpu); 4399 4379 /* 4400 4380 * If a posted intr is not recognized by hardware, 4401 4381 * we will accomplish it in the next vmentry. ··· 4425 4409 4426 4410 r = pi_test_and_set_on(&vmx->pi_desc); 4427 4411 kvm_make_request(KVM_REQ_EVENT, vcpu); 4428 - #ifdef CONFIG_SMP 4429 - if (!r && (vcpu->mode == IN_GUEST_MODE)) 4430 - apic->send_IPI_mask(get_cpu_mask(vcpu->cpu), 4431 - POSTED_INTR_VECTOR); 4432 - else 4433 - #endif 4412 + if (r || !kvm_vcpu_trigger_posted_interrupt(vcpu)) 4434 4413 kvm_vcpu_kick(vcpu); 4435 4414 } 4436 4415 ··· 9224 9213 } 9225 9214 9226 9215 if (cpu_has_vmx_msr_bitmap() && 9227 - exec_control & CPU_BASED_USE_MSR_BITMAPS && 9228 - nested_vmx_merge_msr_bitmap(vcpu, vmcs12)) { 9229 - vmcs_write64(MSR_BITMAP, __pa(vmx_msr_bitmap_nested)); 9216 + exec_control & CPU_BASED_USE_MSR_BITMAPS) { 9217 + nested_vmx_merge_msr_bitmap(vcpu, vmcs12); 9218 + /* MSR_BITMAP will be set by following vmx_set_efer. */ 9230 9219 } else 9231 9220 exec_control &= ~CPU_BASED_USE_MSR_BITMAPS; 9232 9221
-1
arch/x86/kvm/x86.c
··· 2744 2744 case KVM_CAP_USER_NMI: 2745 2745 case KVM_CAP_REINJECT_CONTROL: 2746 2746 case KVM_CAP_IRQ_INJECT_STATUS: 2747 - case KVM_CAP_IRQFD: 2748 2747 case KVM_CAP_IOEVENTFD: 2749 2748 case KVM_CAP_IOEVENTFD_NO_LENGTH: 2750 2749 case KVM_CAP_PIT2:
+8 -3
arch/x86/pci/acpi.c
··· 331 331 struct list_head *list) 332 332 { 333 333 int ret; 334 - struct resource_entry *entry; 334 + struct resource_entry *entry, *tmp; 335 335 336 336 sprintf(info->name, "PCI Bus %04x:%02x", domain, busnum); 337 337 info->bridge = device; ··· 345 345 dev_dbg(&device->dev, 346 346 "no IO and memory resources present in _CRS\n"); 347 347 else 348 - resource_list_for_each_entry(entry, list) 349 - entry->res->name = info->name; 348 + resource_list_for_each_entry_safe(entry, tmp, list) { 349 + if ((entry->res->flags & IORESOURCE_WINDOW) == 0 || 350 + (entry->res->flags & IORESOURCE_DISABLED)) 351 + resource_list_destroy_entry(entry); 352 + else 353 + entry->res->name = info->name; 354 + } 350 355 } 351 356 352 357 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
+6 -28
arch/x86/pci/common.c
··· 513 513 } 514 514 } 515 515 516 - /* 517 - * Some device drivers assume dev->irq won't change after calling 518 - * pci_disable_device(). So delay releasing of IRQ resource to driver 519 - * unbinding time. Otherwise it will break PM subsystem and drivers 520 - * like xen-pciback etc. 521 - */ 522 - static int pci_irq_notifier(struct notifier_block *nb, unsigned long action, 523 - void *data) 524 - { 525 - struct pci_dev *dev = to_pci_dev(data); 526 - 527 - if (action != BUS_NOTIFY_UNBOUND_DRIVER) 528 - return NOTIFY_DONE; 529 - 530 - if (pcibios_disable_irq) 531 - pcibios_disable_irq(dev); 532 - 533 - return NOTIFY_OK; 534 - } 535 - 536 - static struct notifier_block pci_irq_nb = { 537 - .notifier_call = pci_irq_notifier, 538 - .priority = INT_MIN, 539 - }; 540 - 541 516 int __init pcibios_init(void) 542 517 { 543 518 if (!raw_pci_ops) { ··· 525 550 526 551 if (pci_bf_sort >= pci_force_bf) 527 552 pci_sort_breadthfirst(); 528 - 529 - bus_register_notifier(&pci_bus_type, &pci_irq_nb); 530 - 531 553 return 0; 532 554 } 533 555 ··· 681 709 if (!pci_dev_msi_enabled(dev)) 682 710 return pcibios_enable_irq(dev); 683 711 return 0; 712 + } 713 + 714 + void pcibios_disable_device (struct pci_dev *dev) 715 + { 716 + if (!pci_dev_msi_enabled(dev) && pcibios_disable_irq) 717 + pcibios_disable_irq(dev); 684 718 } 685 719 686 720 int pci_ext_cfg_avail(void)
+2 -2
arch/x86/pci/intel_mid_pci.c
··· 234 234 235 235 static void intel_mid_pci_irq_disable(struct pci_dev *dev) 236 236 { 237 - if (dev->irq_managed && dev->irq > 0) { 237 + if (!mp_should_keep_irq(&dev->dev) && dev->irq_managed && 238 + dev->irq > 0) { 238 239 mp_unmap_irq(dev->irq); 239 240 dev->irq_managed = 0; 240 - dev->irq = 0; 241 241 } 242 242 } 243 243
+14 -1
arch/x86/pci/irq.c
··· 1256 1256 return 0; 1257 1257 } 1258 1258 1259 + bool mp_should_keep_irq(struct device *dev) 1260 + { 1261 + if (dev->power.is_prepared) 1262 + return true; 1263 + #ifdef CONFIG_PM 1264 + if (dev->power.runtime_status == RPM_SUSPENDING) 1265 + return true; 1266 + #endif 1267 + 1268 + return false; 1269 + } 1270 + 1259 1271 static void pirq_disable_irq(struct pci_dev *dev) 1260 1272 { 1261 - if (io_apic_assign_pci_irqs && dev->irq_managed && dev->irq) { 1273 + if (io_apic_assign_pci_irqs && !mp_should_keep_irq(&dev->dev) && 1274 + dev->irq_managed && dev->irq) { 1262 1275 mp_unmap_irq(dev->irq); 1263 1276 dev->irq = 0; 1264 1277 dev->irq_managed = 0;
+1
arch/x86/vdso/vdso32/sigreturn.S
··· 17 17 .text 18 18 .globl __kernel_sigreturn 19 19 .type __kernel_sigreturn,@function 20 + nop /* this guy is needed for .LSTARTFDEDLSI1 below (watch for HACK) */ 20 21 ALIGN 21 22 __kernel_sigreturn: 22 23 .LSTART_sigreturn:
+10 -2
arch/x86/xen/p2m.c
··· 91 91 unsigned long xen_max_p2m_pfn __read_mostly; 92 92 EXPORT_SYMBOL_GPL(xen_max_p2m_pfn); 93 93 94 + #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT 95 + #define P2M_LIMIT CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT 96 + #else 97 + #define P2M_LIMIT 0 98 + #endif 99 + 94 100 static DEFINE_SPINLOCK(p2m_update_lock); 95 101 96 102 static unsigned long *p2m_mid_missing_mfn; ··· 391 385 void __init xen_vmalloc_p2m_tree(void) 392 386 { 393 387 static struct vm_struct vm; 388 + unsigned long p2m_limit; 394 389 390 + p2m_limit = (phys_addr_t)P2M_LIMIT * 1024 * 1024 * 1024 / PAGE_SIZE; 395 391 vm.flags = VM_ALLOC; 396 - vm.size = ALIGN(sizeof(unsigned long) * xen_max_p2m_pfn, 392 + vm.size = ALIGN(sizeof(unsigned long) * max(xen_max_p2m_pfn, p2m_limit), 397 393 PMD_SIZE * PMDS_PER_MID_PAGE); 398 394 vm_area_register_early(&vm, PMD_SIZE * PMDS_PER_MID_PAGE); 399 395 pr_notice("p2m virtual area at %p, size is %lx\n", vm.addr, vm.size); ··· 571 563 if (p2m_pfn == PFN_DOWN(__pa(p2m_missing))) 572 564 p2m_init(p2m); 573 565 else 574 - p2m_init_identity(p2m, pfn); 566 + p2m_init_identity(p2m, pfn & ~(P2M_PER_PAGE - 1)); 575 567 576 568 spin_lock_irqsave(&p2m_update_lock, flags); 577 569
+1 -1
block/blk-merge.c
··· 592 592 if (q->queue_flags & (1 << QUEUE_FLAG_SG_GAPS)) { 593 593 struct bio_vec *bprev; 594 594 595 - bprev = &rq->biotail->bi_io_vec[bio->bi_vcnt - 1]; 595 + bprev = &rq->biotail->bi_io_vec[rq->biotail->bi_vcnt - 1]; 596 596 if (bvec_gap_to_prev(bprev, bio->bi_io_vec[0].bv_offset)) 597 597 return false; 598 598 }
+4 -2
block/blk-mq-tag.c
··· 278 278 /* 279 279 * We're out of tags on this hardware queue, kick any 280 280 * pending IO submits before going to sleep waiting for 281 - * some to complete. 281 + * some to complete. Note that hctx can be NULL here for 282 + * reserved tag allocation. 282 283 */ 283 - blk_mq_run_hw_queue(hctx, false); 284 + if (hctx) 285 + blk_mq_run_hw_queue(hctx, false); 284 286 285 287 /* 286 288 * Retry tag allocation after running the hardware queue,
+3 -3
block/blk-mq.c
··· 1938 1938 */ 1939 1939 if (percpu_ref_init(&q->mq_usage_counter, blk_mq_usage_counter_release, 1940 1940 PERCPU_REF_INIT_ATOMIC, GFP_KERNEL)) 1941 - goto err_map; 1941 + goto err_mq_usage; 1942 1942 1943 1943 setup_timer(&q->timeout, blk_mq_rq_timer, (unsigned long) q); 1944 1944 blk_queue_rq_timeout(q, 30000); ··· 1981 1981 blk_mq_init_cpu_queues(q, set->nr_hw_queues); 1982 1982 1983 1983 if (blk_mq_init_hw_queues(q, set)) 1984 - goto err_hw; 1984 + goto err_mq_usage; 1985 1985 1986 1986 mutex_lock(&all_q_mutex); 1987 1987 list_add_tail(&q->all_q_node, &all_q_list); ··· 1993 1993 1994 1994 return q; 1995 1995 1996 - err_hw: 1996 + err_mq_usage: 1997 1997 blk_cleanup_queue(q); 1998 1998 err_hctxs: 1999 1999 kfree(map);
+3 -3
block/blk-settings.c
··· 585 585 b->physical_block_size); 586 586 587 587 t->io_min = max(t->io_min, b->io_min); 588 - t->io_opt = lcm(t->io_opt, b->io_opt); 588 + t->io_opt = lcm_not_zero(t->io_opt, b->io_opt); 589 589 590 590 t->cluster &= b->cluster; 591 591 t->discard_zeroes_data &= b->discard_zeroes_data; ··· 616 616 b->raid_partial_stripes_expensive); 617 617 618 618 /* Find lowest common alignment_offset */ 619 - t->alignment_offset = lcm(t->alignment_offset, alignment) 619 + t->alignment_offset = lcm_not_zero(t->alignment_offset, alignment) 620 620 % max(t->physical_block_size, t->io_min); 621 621 622 622 /* Verify that new alignment_offset is on a logical block boundary */ ··· 643 643 b->max_discard_sectors); 644 644 t->discard_granularity = max(t->discard_granularity, 645 645 b->discard_granularity); 646 - t->discard_alignment = lcm(t->discard_alignment, alignment) % 646 + t->discard_alignment = lcm_not_zero(t->discard_alignment, alignment) % 647 647 t->discard_granularity; 648 648 } 649 649
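The lcm() → lcm_not_zero() switch matters because io_opt, alignment_offset and discard_alignment are frequently 0 (unset); plain lcm(x, 0) is 0 and would wipe out the other device's value when stacking limits. A sketch of the assumed lcm_not_zero() semantics:

```c
#include <assert.h>

static unsigned long gcd(unsigned long a, unsigned long b)
{
    while (b) {
        unsigned long t = a % b;
        a = b;
        b = t;
    }
    return a;
}

static unsigned long lcm(unsigned long a, unsigned long b)
{
    if (a && b)
        return (a / gcd(a, b)) * b;
    return 0;
}

/* lcm_not_zero: like lcm(), but a zero on either side does not zero the
 * result -- the nonzero operand wins instead. */
static unsigned long lcm_not_zero(unsigned long a, unsigned long b)
{
    unsigned long l = lcm(a, b);

    if (l)
        return l;
    return a ? a : b;
}
```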
+4 -1
drivers/acpi/acpi_lpss.c
··· 65 65 66 66 struct lpss_device_desc { 67 67 unsigned int flags; 68 + const char *clk_con_id; 68 69 unsigned int prv_offset; 69 70 size_t prv_size_override; 70 71 void (*setup)(struct lpss_private_data *pdata); ··· 141 140 142 141 static struct lpss_device_desc lpt_uart_dev_desc = { 143 142 .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR, 143 + .clk_con_id = "baudclk", 144 144 .prv_offset = 0x800, 145 145 .setup = lpss_uart_setup, 146 146 }; ··· 158 156 159 157 static struct lpss_device_desc byt_uart_dev_desc = { 160 158 .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX, 159 + .clk_con_id = "baudclk", 161 160 .prv_offset = 0x800, 162 161 .setup = lpss_uart_setup, 163 162 }; ··· 316 313 return PTR_ERR(clk); 317 314 318 315 pdata->clk = clk; 319 - clk_register_clkdev(clk, NULL, devname); 316 + clk_register_clkdev(clk, dev_desc->clk_con_id, devname); 320 317 return 0; 321 318 } 322 319
+8 -1
drivers/acpi/pci_irq.c
··· 485 485 if (!pin || !dev->irq_managed || dev->irq <= 0) 486 486 return; 487 487 488 + /* Keep IOAPIC pin configuration when suspending */ 489 + if (dev->dev.power.is_prepared) 490 + return; 491 + #ifdef CONFIG_PM 492 + if (dev->dev.power.runtime_status == RPM_SUSPENDING) 493 + return; 494 + #endif 495 + 488 496 entry = acpi_pci_irq_lookup(dev, pin); 489 497 if (!entry) 490 498 return; ··· 513 505 if (gsi >= 0) { 514 506 acpi_unregister_gsi(gsi); 515 507 dev->irq_managed = 0; 516 - dev->irq = 0; 517 508 } 518 509 }
+3 -1
drivers/acpi/resource.c
··· 42 42 * CHECKME: len might be required to check versus a minimum 43 43 * length as well. 1 for io is fine, but for memory it does 44 44 * not make any sense at all. 45 + * Note: some BIOSes report incorrect length for ACPI address space 46 + * descriptor, so remove check of 'reslen == len' to avoid regression. 45 47 */ 46 - if (len && reslen && reslen == len && start <= end) 48 + if (len && reslen && start <= end) 47 49 return true; 48 50 49 51 pr_debug("ACPI: invalid or unassigned resource %s [%016llx - %016llx] length [%016llx]\n",
+16 -4
drivers/acpi/video.c
··· 2110 2110 2111 2111 int acpi_video_register(void) 2112 2112 { 2113 - int result = 0; 2113 + int ret; 2114 + 2114 2115 if (register_count) { 2115 2116 /* 2116 2117 * if the function of acpi_video_register is already called, ··· 2123 2122 mutex_init(&video_list_lock); 2124 2123 INIT_LIST_HEAD(&video_bus_head); 2125 2124 2126 - result = acpi_bus_register_driver(&acpi_video_bus); 2127 - if (result < 0) 2128 - return -ENODEV; 2125 + ret = acpi_bus_register_driver(&acpi_video_bus); 2126 + if (ret) 2127 + return ret; 2129 2128 2130 2129 /* 2131 2130 * When the acpi_video_bus is loaded successfully, increase ··· 2177 2176 2178 2177 static int __init acpi_video_init(void) 2179 2178 { 2179 + /* 2180 + * Let the module load even if ACPI is disabled (e.g. due to 2181 + * a broken BIOS) so that i915.ko can still be loaded on such 2182 + * old systems without an AcpiOpRegion. 2183 + * 2184 + * acpi_video_register() will report -ENODEV later as well due 2185 + * to acpi_disabled when i915.ko tries to register itself afterwards. 2186 + */ 2187 + if (acpi_disabled) 2188 + return 0; 2189 + 2180 2190 dmi_check_system(video_dmi_table); 2181 2191 2182 2192 if (intel_opregion_present())
+5 -5
drivers/android/binder.c
··· 551 551 { 552 552 void *page_addr; 553 553 unsigned long user_page_addr; 554 - struct vm_struct tmp_area; 555 554 struct page **page; 556 555 struct mm_struct *mm; 557 556 ··· 599 600 proc->pid, page_addr); 600 601 goto err_alloc_page_failed; 601 602 } 602 - tmp_area.addr = page_addr; 603 - tmp_area.size = PAGE_SIZE + PAGE_SIZE /* guard page? */; 604 - ret = map_vm_area(&tmp_area, PAGE_KERNEL, page); 605 - if (ret) { 603 + ret = map_kernel_range_noflush((unsigned long)page_addr, 604 + PAGE_SIZE, PAGE_KERNEL, page); 605 + flush_cache_vmap((unsigned long)page_addr, 606 + (unsigned long)page_addr + PAGE_SIZE); 607 + if (ret != 1) { 606 608 pr_err("%d: binder_alloc_buf failed to map page at %p in kernel\n", 607 609 proc->pid, page_addr); 608 610 goto err_map_kernel_failed;
+15 -4
drivers/ata/libata-core.c
··· 4204 4204 { "PIONEER DVD-RW DVR-216D", NULL, ATA_HORKAGE_NOSETXFER }, 4205 4205 4206 4206 /* devices that don't properly handle queued TRIM commands */ 4207 - { "Micron_M[56]*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4207 + { "Micron_M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4208 4208 ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4209 - { "Crucial_CT*SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, 4209 + { "Crucial_CT*M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4210 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4211 + { "Micron_M5[15]0*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM | 4212 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4213 + { "Crucial_CT*M550*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM | 4214 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4215 + { "Crucial_CT*MX100*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM | 4216 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4217 + { "Samsung SSD 850 PRO*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4218 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4210 4219 4211 4220 /* 4212 4221 * As defined, the DRAT (Deterministic Read After Trim) and RZAT ··· 4235 4226 */ 4236 4227 { "INTEL*SSDSC2MH*", NULL, 0, }, 4237 4228 4229 + { "Micron*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4230 + { "Crucial*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4238 4231 { "INTEL*SSD*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4239 4232 { "SSD*INTEL*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4240 4233 { "Samsung*SSD*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, ··· 4748 4737 return NULL; 4749 4738 4750 4739 /* libsas case */ 4751 - if (!ap->scsi_host) { 4740 + if (ap->flags & ATA_FLAG_SAS_HOST) { 4752 4741 tag = ata_sas_allocate_tag(ap); 4753 4742 if (tag < 0) 4754 4743 return NULL; ··· 4787 4776 tag = qc->tag; 4788 4777 if (likely(ata_tag_valid(tag))) { 4789 4778 qc->tag = ATA_TAG_POISON; 4790 - if (!ap->scsi_host) 4779 + if (ap->flags & ATA_FLAG_SAS_HOST) 4791 4780 ata_sas_free_tag(tag, ap); 4792 4781 } 4793 4782 }
+2
drivers/ata/sata_fsl.c
··· 869 869 */ 870 870 ata_msleep(ap, 1); 871 871 872 + sata_set_spd(link); 873 + 872 874 /* 873 875 * Now, bring the host controller online again, this can take time 874 876 * as PHY reset and communication establishment, 1st D2H FIS and
+12 -12
drivers/base/power/domain.c
··· 2242 2242 } 2243 2243 2244 2244 static int pm_genpd_summary_one(struct seq_file *s, 2245 - struct generic_pm_domain *gpd) 2245 + struct generic_pm_domain *genpd) 2246 2246 { 2247 2247 static const char * const status_lookup[] = { 2248 2248 [GPD_STATE_ACTIVE] = "on", ··· 2256 2256 struct gpd_link *link; 2257 2257 int ret; 2258 2258 2259 - ret = mutex_lock_interruptible(&gpd->lock); 2259 + ret = mutex_lock_interruptible(&genpd->lock); 2260 2260 if (ret) 2261 2261 return -ERESTARTSYS; 2262 2262 2263 - if (WARN_ON(gpd->status >= ARRAY_SIZE(status_lookup))) 2263 + if (WARN_ON(genpd->status >= ARRAY_SIZE(status_lookup))) 2264 2264 goto exit; 2265 - seq_printf(s, "%-30s %-15s ", gpd->name, status_lookup[gpd->status]); 2265 + seq_printf(s, "%-30s %-15s ", genpd->name, status_lookup[genpd->status]); 2266 2266 2267 2267 /* 2268 2268 * Modifications on the list require holding locks on both 2269 2269 * master and slave, so we are safe. 2270 - * Also gpd->name is immutable. 2270 + * Also genpd->name is immutable. 
2271 2271 */ 2272 - list_for_each_entry(link, &gpd->master_links, master_node) { 2272 + list_for_each_entry(link, &genpd->master_links, master_node) { 2273 2273 seq_printf(s, "%s", link->slave->name); 2274 - if (!list_is_last(&link->master_node, &gpd->master_links)) 2274 + if (!list_is_last(&link->master_node, &genpd->master_links)) 2275 2275 seq_puts(s, ", "); 2276 2276 } 2277 2277 2278 - list_for_each_entry(pm_data, &gpd->dev_list, list_node) { 2278 + list_for_each_entry(pm_data, &genpd->dev_list, list_node) { 2279 2279 kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL); 2280 2280 if (kobj_path == NULL) 2281 2281 continue; ··· 2287 2287 2288 2288 seq_puts(s, "\n"); 2289 2289 exit: 2290 - mutex_unlock(&gpd->lock); 2290 + mutex_unlock(&genpd->lock); 2291 2291 2292 2292 return 0; 2293 2293 } 2294 2294 2295 2295 static int pm_genpd_summary_show(struct seq_file *s, void *data) 2296 2296 { 2297 - struct generic_pm_domain *gpd; 2297 + struct generic_pm_domain *genpd; 2298 2298 int ret = 0; 2299 2299 2300 2300 seq_puts(s, " domain status slaves\n"); ··· 2305 2305 if (ret) 2306 2306 return -ERESTARTSYS; 2307 2307 2308 - list_for_each_entry(gpd, &gpd_list, gpd_list_node) { 2309 - ret = pm_genpd_summary_one(s, gpd); 2308 + list_for_each_entry(genpd, &gpd_list, gpd_list_node) { 2309 + ret = pm_genpd_summary_one(s, genpd); 2310 2310 if (ret) 2311 2311 break; 2312 2312 }
+1
drivers/base/power/wakeup.c
··· 730 730 pm_abort_suspend = true; 731 731 freeze_wake(); 732 732 } 733 + EXPORT_SYMBOL_GPL(pm_system_wakeup); 733 734 734 735 void pm_wakeup_clear(void) 735 736 {
+8
drivers/base/regmap/internal.h
··· 243 243 extern struct regcache_ops regcache_lzo_ops; 244 244 extern struct regcache_ops regcache_flat_ops; 245 245 246 + static inline const char *regmap_name(const struct regmap *map) 247 + { 248 + if (map->dev) 249 + return dev_name(map->dev); 250 + 251 + return map->name; 252 + } 253 + 246 254 #endif
+1 -1
drivers/base/regmap/regcache-rbtree.c
··· 307 307 if (pos == 0) { 308 308 memmove(blk + offset * map->cache_word_size, 309 309 blk, rbnode->blklen * map->cache_word_size); 310 - bitmap_shift_right(present, present, offset, blklen); 310 + bitmap_shift_left(present, present, offset, blklen); 311 311 } 312 312 313 313 /* update the rbnode block, its size and the base register */
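The one-character regcache-rbtree fix flips the shift direction: when registers are inserted in front of a block, the block data memmoves toward higher offsets, so the existing "present" bits must move to higher bit positions too, which is a left shift in kernel bitmap terms. Sketched for a bitmap that fits one word:

```c
#include <assert.h>

/* After prepending `offset` entries to a cache block, the bit describing
 * what used to be entry i must now describe entry i + offset.  For a
 * single-word bitmap that is a plain left shift; the old right shift
 * marked the wrong registers as present. */
static unsigned long present_after_prepend(unsigned long present,
                                           unsigned int offset)
{
    return present << offset;
}
```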
+12 -10
drivers/base/regmap/regcache.c
··· 218 218 ret = map->cache_ops->read(map, reg, value); 219 219 220 220 if (ret == 0) 221 - trace_regmap_reg_read_cache(map->dev, reg, *value); 221 + trace_regmap_reg_read_cache(map, reg, *value); 222 222 223 223 return ret; 224 224 } ··· 311 311 dev_dbg(map->dev, "Syncing %s cache\n", 312 312 map->cache_ops->name); 313 313 name = map->cache_ops->name; 314 - trace_regcache_sync(map->dev, name, "start"); 314 + trace_regcache_sync(map, name, "start"); 315 315 316 316 if (!map->cache_dirty) 317 317 goto out; ··· 346 346 347 347 regmap_async_complete(map); 348 348 349 - trace_regcache_sync(map->dev, name, "stop"); 349 + trace_regcache_sync(map, name, "stop"); 350 350 351 351 return ret; 352 352 } ··· 381 381 name = map->cache_ops->name; 382 382 dev_dbg(map->dev, "Syncing %s cache from %d-%d\n", name, min, max); 383 383 384 - trace_regcache_sync(map->dev, name, "start region"); 384 + trace_regcache_sync(map, name, "start region"); 385 385 386 386 if (!map->cache_dirty) 387 387 goto out; ··· 401 401 402 402 regmap_async_complete(map); 403 403 404 - trace_regcache_sync(map->dev, name, "stop region"); 404 + trace_regcache_sync(map, name, "stop region"); 405 405 406 406 return ret; 407 407 } ··· 428 428 429 429 map->lock(map->lock_arg); 430 430 431 - trace_regcache_drop_region(map->dev, min, max); 431 + trace_regcache_drop_region(map, min, max); 432 432 433 433 ret = map->cache_ops->drop(map, min, max); 434 434 ··· 455 455 map->lock(map->lock_arg); 456 456 WARN_ON(map->cache_bypass && enable); 457 457 map->cache_only = enable; 458 - trace_regmap_cache_only(map->dev, enable); 458 + trace_regmap_cache_only(map, enable); 459 459 map->unlock(map->lock_arg); 460 460 } 461 461 EXPORT_SYMBOL_GPL(regcache_cache_only); ··· 493 493 map->lock(map->lock_arg); 494 494 WARN_ON(map->cache_only && enable); 495 495 map->cache_bypass = enable; 496 - trace_regmap_cache_bypass(map->dev, enable); 496 + trace_regmap_cache_bypass(map, enable); 497 497 map->unlock(map->lock_arg); 498 498 } 499 
499 EXPORT_SYMBOL_GPL(regcache_cache_bypass); ··· 608 608 for (i = start; i < end; i++) { 609 609 regtmp = block_base + (i * map->reg_stride); 610 610 611 - if (!regcache_reg_present(cache_present, i)) 611 + if (!regcache_reg_present(cache_present, i) || 612 + !regmap_writeable(map, regtmp)) 612 613 continue; 613 614 614 615 val = regcache_get_val(map, block, i); ··· 678 677 for (i = start; i < end; i++) { 679 678 regtmp = block_base + (i * map->reg_stride); 680 679 681 - if (!regcache_reg_present(cache_present, i)) { 680 + if (!regcache_reg_present(cache_present, i) || 681 + !regmap_writeable(map, regtmp)) { 682 682 ret = regcache_sync_block_raw_flush(map, &data, 683 683 base, regtmp); 684 684 if (ret != 0)
+2 -1
drivers/base/regmap/regmap-irq.c
··· 499 499 goto err_alloc; 500 500 } 501 501 502 - ret = request_threaded_irq(irq, NULL, regmap_irq_thread, irq_flags, 502 + ret = request_threaded_irq(irq, NULL, regmap_irq_thread, 503 + irq_flags | IRQF_ONESHOT, 503 504 chip->name, d); 504 505 if (ret != 0) { 505 506 dev_err(map->dev, "Failed to request IRQ %d for %s: %d\n",
+14 -18
drivers/base/regmap/regmap.c
··· 1281 1281 if (map->async && map->bus->async_write) { 1282 1282 struct regmap_async *async; 1283 1283 1284 - trace_regmap_async_write_start(map->dev, reg, val_len); 1284 + trace_regmap_async_write_start(map, reg, val_len); 1285 1285 1286 1286 spin_lock_irqsave(&map->async_lock, flags); 1287 1287 async = list_first_entry_or_null(&map->async_free, ··· 1339 1339 return ret; 1340 1340 } 1341 1341 1342 - trace_regmap_hw_write_start(map->dev, reg, 1343 - val_len / map->format.val_bytes); 1342 + trace_regmap_hw_write_start(map, reg, val_len / map->format.val_bytes); 1344 1343 1345 1344 /* If we're doing a single register write we can probably just 1346 1345 * send the work_buf directly, otherwise try to do a gather ··· 1371 1372 kfree(buf); 1372 1373 } 1373 1374 1374 - trace_regmap_hw_write_done(map->dev, reg, 1375 - val_len / map->format.val_bytes); 1375 + trace_regmap_hw_write_done(map, reg, val_len / map->format.val_bytes); 1376 1376 1377 1377 return ret; 1378 1378 } ··· 1405 1407 1406 1408 map->format.format_write(map, reg, val); 1407 1409 1408 - trace_regmap_hw_write_start(map->dev, reg, 1); 1410 + trace_regmap_hw_write_start(map, reg, 1); 1409 1411 1410 1412 ret = map->bus->write(map->bus_context, map->work_buf, 1411 1413 map->format.buf_size); 1412 1414 1413 - trace_regmap_hw_write_done(map->dev, reg, 1); 1415 + trace_regmap_hw_write_done(map, reg, 1); 1414 1416 1415 1417 return ret; 1416 1418 } ··· 1468 1470 dev_info(map->dev, "%x <= %x\n", reg, val); 1469 1471 #endif 1470 1472 1471 - trace_regmap_reg_write(map->dev, reg, val); 1473 + trace_regmap_reg_write(map, reg, val); 1472 1474 1473 1475 return map->reg_write(context, reg, val); 1474 1476 } ··· 1771 1773 for (i = 0; i < num_regs; i++) { 1772 1774 int reg = regs[i].reg; 1773 1775 int val = regs[i].def; 1774 - trace_regmap_hw_write_start(map->dev, reg, 1); 1776 + trace_regmap_hw_write_start(map, reg, 1); 1775 1777 map->format.format_reg(u8, reg, map->reg_shift); 1776 1778 u8 += reg_bytes + pad_bytes; 1777 
1779 map->format.format_val(u8, val, 0); ··· 1786 1788 1787 1789 for (i = 0; i < num_regs; i++) { 1788 1790 int reg = regs[i].reg; 1789 - trace_regmap_hw_write_done(map->dev, reg, 1); 1791 + trace_regmap_hw_write_done(map, reg, 1); 1790 1792 } 1791 1793 return ret; 1792 1794 } ··· 2057 2059 */ 2058 2060 u8[0] |= map->read_flag_mask; 2059 2061 2060 - trace_regmap_hw_read_start(map->dev, reg, 2061 - val_len / map->format.val_bytes); 2062 + trace_regmap_hw_read_start(map, reg, val_len / map->format.val_bytes); 2062 2063 2063 2064 ret = map->bus->read(map->bus_context, map->work_buf, 2064 2065 map->format.reg_bytes + map->format.pad_bytes, 2065 2066 val, val_len); 2066 2067 2067 - trace_regmap_hw_read_done(map->dev, reg, 2068 - val_len / map->format.val_bytes); 2068 + trace_regmap_hw_read_done(map, reg, val_len / map->format.val_bytes); 2069 2069 2070 2070 return ret; 2071 2071 } ··· 2119 2123 dev_info(map->dev, "%x => %x\n", reg, *val); 2120 2124 #endif 2121 2125 2122 - trace_regmap_reg_read(map->dev, reg, *val); 2126 + trace_regmap_reg_read(map, reg, *val); 2123 2127 2124 2128 if (!map->cache_bypass) 2125 2129 regcache_write(map, reg, *val); ··· 2476 2480 struct regmap *map = async->map; 2477 2481 bool wake; 2478 2482 2479 - trace_regmap_async_io_complete(map->dev); 2483 + trace_regmap_async_io_complete(map); 2480 2484 2481 2485 spin_lock(&map->async_lock); 2482 2486 list_move(&async->list, &map->async_free); ··· 2521 2525 if (!map->bus || !map->bus->async_write) 2522 2526 return 0; 2523 2527 2524 - trace_regmap_async_complete_start(map->dev); 2528 + trace_regmap_async_complete_start(map); 2525 2529 2526 2530 wait_event(map->async_waitq, regmap_async_is_done(map)); 2527 2531 ··· 2530 2534 map->async_ret = 0; 2531 2535 spin_unlock_irqrestore(&map->async_lock, flags); 2532 2536 2533 - trace_regmap_async_complete_done(map->dev); 2537 + trace_regmap_async_complete_done(map); 2534 2538 2535 2539 return ret; 2536 2540 }
+4 -4
drivers/block/nbd.c
··· 803 803 return -EINVAL; 804 804 } 805 805 806 - nbd_dev = kcalloc(nbds_max, sizeof(*nbd_dev), GFP_KERNEL); 807 - if (!nbd_dev) 808 - return -ENOMEM; 809 - 810 806 part_shift = 0; 811 807 if (max_part > 0) { 812 808 part_shift = fls(max_part); ··· 823 827 824 828 if (nbds_max > 1UL << (MINORBITS - part_shift)) 825 829 return -EINVAL; 830 + 831 + nbd_dev = kcalloc(nbds_max, sizeof(*nbd_dev), GFP_KERNEL); 832 + if (!nbd_dev) 833 + return -ENOMEM; 826 834 827 835 for (i = 0; i < nbds_max; i++) { 828 836 struct gendisk *disk = alloc_disk(1 << part_shift);
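The nbd hunk is a leak fix: kcalloc() now runs only after the nbds_max/max_part sanity checks, so an early -EINVAL return no longer leaks the device array. The guarded bound can be sketched as follows (fls() stand-in included; MINORBITS assumed to be 20 as on Linux):

```c
#include <assert.h>
#include <errno.h>

#define MINORBITS 20   /* assumed: Linux minor numbers are 20 bits */

/* Minimal stand-in for the kernel's fls(): position of the highest set bit. */
static int fls_(unsigned int x)
{
    int r = 0;

    while (x) {
        r++;
        x >>= 1;
    }
    return r;
}

/* Validate before allocating: with part_shift minor bits reserved for
 * partitions, only (1 << (MINORBITS - part_shift)) devices fit. */
static int nbd_check_params(unsigned long nbds_max, unsigned int max_part)
{
    int part_shift = max_part ? fls_(max_part) : 0;

    if (nbds_max > 1UL << (MINORBITS - part_shift))
        return -EINVAL;
    return 0;
}
```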
+1
drivers/block/nvme-core.c
··· 3003 3003 } 3004 3004 get_device(dev->device); 3005 3005 3006 + INIT_LIST_HEAD(&dev->node); 3006 3007 INIT_WORK(&dev->probe_work, nvme_async_probe); 3007 3008 schedule_work(&dev->probe_work); 3008 3009 return 0;
+1
drivers/bluetooth/btusb.c
··· 272 272 { USB_DEVICE(0x1286, 0x2046), .driver_info = BTUSB_MARVELL }, 273 273 274 274 /* Intel Bluetooth devices */ 275 + { USB_DEVICE(0x8087, 0x07da), .driver_info = BTUSB_CSR }, 275 276 { USB_DEVICE(0x8087, 0x07dc), .driver_info = BTUSB_INTEL }, 276 277 { USB_DEVICE(0x8087, 0x0a2a), .driver_info = BTUSB_INTEL }, 277 278 { USB_DEVICE(0x8087, 0x0a2b), .driver_info = BTUSB_INTEL_NEW },
+19 -25
drivers/char/tpm/tpm-chip.c
··· 140 140 { 141 141 int rc; 142 142 143 - rc = device_add(&chip->dev); 144 - if (rc) { 145 - dev_err(&chip->dev, 146 - "unable to device_register() %s, major %d, minor %d, err=%d\n", 147 - chip->devname, MAJOR(chip->dev.devt), 148 - MINOR(chip->dev.devt), rc); 149 - 150 - return rc; 151 - } 152 - 153 143 rc = cdev_add(&chip->cdev, chip->dev.devt, 1); 154 144 if (rc) { 155 145 dev_err(&chip->dev, ··· 148 158 MINOR(chip->dev.devt), rc); 149 159 150 160 device_unregister(&chip->dev); 161 + return rc; 162 + } 163 + 164 + rc = device_add(&chip->dev); 165 + if (rc) { 166 + dev_err(&chip->dev, 167 + "unable to device_register() %s, major %d, minor %d, err=%d\n", 168 + chip->devname, MAJOR(chip->dev.devt), 169 + MINOR(chip->dev.devt), rc); 170 + 151 171 return rc; 152 172 } 153 173 ··· 174 174 * tpm_chip_register() - create a character device for the TPM chip 175 175 * @chip: TPM chip to use. 176 176 * 177 - * Creates a character device for the TPM chip and adds sysfs interfaces for 178 - * the device, PPI and TCPA. As the last step this function adds the 179 - * chip to the list of TPM chips available for use. 177 + * Creates a character device for the TPM chip and adds sysfs attributes for 178 + * the device. As the last step this function adds the chip to the list of TPM 179 + * chips available for in-kernel use. 180 180 * 181 - * NOTE: This function should be only called after the chip initialization 182 - * is complete. 183 - * 184 - * Called from tpm_<specific>.c probe function only for devices 185 - * the driver has determined it should claim. Prior to calling 186 - * this function the specific probe function has called pci_enable_device 187 - * upon errant exit from this function specific probe function should call 188 - * pci_disable_device 181 + * This function should be only called after the chip initialization is 182 + * complete. 
189 183 */ 190 184 int tpm_chip_register(struct tpm_chip *chip) 191 185 { 192 186 int rc; 193 - 194 - rc = tpm_dev_add_device(chip); 195 - if (rc) 196 - return rc; 197 187 198 188 /* Populate sysfs for TPM1 devices. */ 199 189 if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) { ··· 197 207 198 208 chip->bios_dir = tpm_bios_log_setup(chip->devname); 199 209 } 210 + 211 + rc = tpm_dev_add_device(chip); 212 + if (rc) 213 + return rc; 200 214 201 215 /* Make the chip available. */ 202 216 spin_lock(&driver_lock);
+5 -5
drivers/char/tpm/tpm_ibmvtpm.c
··· 124 124 { 125 125 struct ibmvtpm_dev *ibmvtpm; 126 126 struct ibmvtpm_crq crq; 127 - u64 *word = (u64 *) &crq; 127 + __be64 *word = (__be64 *)&crq; 128 128 int rc; 129 129 130 130 ibmvtpm = (struct ibmvtpm_dev *)TPM_VPRIV(chip); ··· 145 145 memcpy((void *)ibmvtpm->rtce_buf, (void *)buf, count); 146 146 crq.valid = (u8)IBMVTPM_VALID_CMD; 147 147 crq.msg = (u8)VTPM_TPM_COMMAND; 148 - crq.len = (u16)count; 149 - crq.data = ibmvtpm->rtce_dma_handle; 148 + crq.len = cpu_to_be16(count); 149 + crq.data = cpu_to_be32(ibmvtpm->rtce_dma_handle); 150 150 151 - rc = ibmvtpm_send_crq(ibmvtpm->vdev, cpu_to_be64(word[0]), 152 - cpu_to_be64(word[1])); 151 + rc = ibmvtpm_send_crq(ibmvtpm->vdev, be64_to_cpu(word[0]), 152 + be64_to_cpu(word[1])); 153 153 if (rc != H_SUCCESS) { 154 154 dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc); 155 155 rc = 0;
+3 -3
drivers/char/tpm/tpm_ibmvtpm.h
··· 22 22 struct ibmvtpm_crq { 23 23 u8 valid; 24 24 u8 msg; 25 - u16 len; 26 - u32 data; 27 - u64 reserved; 25 + __be16 len; 26 + __be32 data; 27 + __be64 reserved; 28 28 } __attribute__((packed, aligned(8))); 29 29 30 30 struct ibmvtpm_crq_queue {
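The ibmvtpm fix annotates the CRQ as wire-format big-endian (__be16/__be32/__be64) and converts on store; previously raw CPU-order values were copied out, which is wrong on little-endian hosts. The effect of cpu_to_be16() on the len field can be sketched portably:

```c
#include <assert.h>
#include <stdint.h>

/* Serialize a 16-bit value in big-endian (wire) order regardless of host
 * endianness -- what cpu_to_be16() guarantees for the CRQ len field. */
static void put_be16(uint8_t *buf, uint16_t v)
{
    buf[0] = v >> 8;      /* most significant byte first */
    buf[1] = v & 0xff;
}

static uint16_t get_be16(const uint8_t *buf)
{
    return (uint16_t)((buf[0] << 8) | buf[1]);
}

/* Convenience helper for checking the byte layout in one expression. */
static uint8_t be16_msb(uint16_t v)
{
    uint8_t b[2];

    put_be16(b, v);
    return b[0];
}
```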
+18 -1
drivers/char/virtio_console.c
··· 142 142 * notification 143 143 */ 144 144 struct work_struct control_work; 145 + struct work_struct config_work; 145 146 146 147 struct list_head ports; 147 148 ··· 1838 1837 1839 1838 portdev = vdev->priv; 1840 1839 1840 + if (!use_multiport(portdev)) 1841 + schedule_work(&portdev->config_work); 1842 + } 1843 + 1844 + static void config_work_handler(struct work_struct *work) 1845 + { 1846 + struct ports_device *portdev; 1847 + 1848 + portdev = container_of(work, struct ports_device, control_work); 1841 1849 if (!use_multiport(portdev)) { 1850 + struct virtio_device *vdev; 1842 1851 struct port *port; 1843 1852 u16 rows, cols; 1844 1853 1854 + vdev = portdev->vdev; 1845 1855 virtio_cread(vdev, struct virtio_console_config, cols, &cols); 1846 1856 virtio_cread(vdev, struct virtio_console_config, rows, &rows); 1847 1857 ··· 2052 2040 2053 2041 virtio_device_ready(portdev->vdev); 2054 2042 2043 + INIT_WORK(&portdev->config_work, &config_work_handler); 2044 + INIT_WORK(&portdev->control_work, &control_work_handler); 2045 + 2055 2046 if (multiport) { 2056 2047 unsigned int nr_added_bufs; 2057 2048 2058 2049 spin_lock_init(&portdev->c_ivq_lock); 2059 2050 spin_lock_init(&portdev->c_ovq_lock); 2060 - INIT_WORK(&portdev->control_work, &control_work_handler); 2061 2051 2062 2052 nr_added_bufs = fill_queue(portdev->c_ivq, 2063 2053 &portdev->c_ivq_lock); ··· 2127 2113 /* Finish up work that's lined up */ 2128 2114 if (use_multiport(portdev)) 2129 2115 cancel_work_sync(&portdev->control_work); 2116 + else 2117 + cancel_work_sync(&portdev->config_work); 2130 2118 2131 2119 list_for_each_entry_safe(port, port2, &portdev->ports, list) 2132 2120 unplug_port(port); ··· 2180 2164 2181 2165 virtqueue_disable_cb(portdev->c_ivq); 2182 2166 cancel_work_sync(&portdev->control_work); 2167 + cancel_work_sync(&portdev->config_work); 2183 2168 /* 2184 2169 * Once more: if control_work_handler() was running, it would 2185 2170 * enable the cb as the last step.
+19 -1
drivers/clk/at91/pmc.c
··· 89 89 return 0; 90 90 } 91 91 92 + static void pmc_irq_suspend(struct irq_data *d) 93 + { 94 + struct at91_pmc *pmc = irq_data_get_irq_chip_data(d); 95 + 96 + pmc->imr = pmc_read(pmc, AT91_PMC_IMR); 97 + pmc_write(pmc, AT91_PMC_IDR, pmc->imr); 98 + } 99 + 100 + static void pmc_irq_resume(struct irq_data *d) 101 + { 102 + struct at91_pmc *pmc = irq_data_get_irq_chip_data(d); 103 + 104 + pmc_write(pmc, AT91_PMC_IER, pmc->imr); 105 + } 106 + 92 107 static struct irq_chip pmc_irq = { 93 108 .name = "PMC", 94 109 .irq_disable = pmc_irq_mask, 95 110 .irq_mask = pmc_irq_mask, 96 111 .irq_unmask = pmc_irq_unmask, 97 112 .irq_set_type = pmc_irq_set_type, 113 + .irq_suspend = pmc_irq_suspend, 114 + .irq_resume = pmc_irq_resume, 98 115 }; 99 116 100 117 static struct lock_class_key pmc_lock_class; ··· 241 224 goto out_free_pmc; 242 225 243 226 pmc_write(pmc, AT91_PMC_IDR, 0xffffffff); 244 - if (request_irq(pmc->virq, pmc_irq_handler, IRQF_SHARED, "pmc", pmc)) 227 + if (request_irq(pmc->virq, pmc_irq_handler, 228 + IRQF_SHARED | IRQF_COND_SUSPEND, "pmc", pmc)) 245 229 goto out_remove_irqdomain; 246 230 247 231 return pmc;
+1
drivers/clk/at91/pmc.h
··· 33 33 spinlock_t lock; 34 34 const struct at91_pmc_caps *caps; 35 35 struct irq_domain *irqdomain; 36 + u32 imr; 36 37 }; 37 38 38 39 static inline void pmc_lock(struct at91_pmc *pmc)
+14 -15
drivers/clk/clk-divider.c
··· 144 144 divider->flags); 145 145 } 146 146 147 - /* 148 - * The reverse of DIV_ROUND_UP: The maximum number which 149 - * divided by m is r 150 - */ 151 - #define MULT_ROUND_UP(r, m) ((r) * (m) + (m) - 1) 152 - 153 147 static bool _is_valid_table_div(const struct clk_div_table *table, 154 148 unsigned int div) 155 149 { ··· 219 225 unsigned long parent_rate, unsigned long rate, 220 226 unsigned long flags) 221 227 { 222 - int up, down, div; 228 + int up, down; 229 + unsigned long up_rate, down_rate; 223 230 224 - up = down = div = DIV_ROUND_CLOSEST(parent_rate, rate); 231 + up = DIV_ROUND_UP(parent_rate, rate); 232 + down = parent_rate / rate; 225 233 226 234 if (flags & CLK_DIVIDER_POWER_OF_TWO) { 227 - up = __roundup_pow_of_two(div); 228 - down = __rounddown_pow_of_two(div); 235 + up = __roundup_pow_of_two(up); 236 + down = __rounddown_pow_of_two(down); 229 237 } else if (table) { 230 - up = _round_up_table(table, div); 231 - down = _round_down_table(table, div); 238 + up = _round_up_table(table, up); 239 + down = _round_down_table(table, down); 232 240 } 233 241 234 - return (up - div) <= (div - down) ? up : down; 242 + up_rate = DIV_ROUND_UP(parent_rate, up); 243 + down_rate = DIV_ROUND_UP(parent_rate, down); 244 + 245 + return (rate - up_rate) <= (down_rate - rate) ? up : down; 235 246 } 236 247 237 248 static int _div_round(const struct clk_div_table *table, ··· 312 313 return i; 313 314 } 314 315 parent_rate = __clk_round_rate(__clk_get_parent(hw->clk), 315 - MULT_ROUND_UP(rate, i)); 316 + rate * i); 316 317 now = DIV_ROUND_UP(parent_rate, i); 317 318 if (_is_best_div(rate, now, best, flags)) { 318 319 bestdiv = i; ··· 352 353 bestdiv = readl(divider->reg) >> divider->shift; 353 354 bestdiv &= div_mask(divider->width); 354 355 bestdiv = _get_div(divider->table, bestdiv, divider->flags); 355 - return bestdiv; 356 + return DIV_ROUND_UP(*prate, bestdiv); 356 357 } 357 358 358 359 return divider_round_rate(hw, rate, prate, divider->table,
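The rewritten closest-divider selection compares achievable output rates instead of divisor values: because rate = parent/div is non-linear in div, the divisor nearest DIV_ROUND_CLOSEST(parent, rate) does not always produce the rate nearest the target. A standalone sketch of the corrected logic (power-of-two and table variants omitted):

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Pick the divider whose resulting rate lands closest to the target,
 * mirroring the corrected _div_round_closest(): try the ceiling and
 * floor dividers, then compare the rates each one actually yields. */
static unsigned long div_round_closest(unsigned long parent_rate,
                                       unsigned long rate)
{
    unsigned long up = DIV_ROUND_UP(parent_rate, rate);
    unsigned long down = parent_rate / rate;
    unsigned long up_rate = DIV_ROUND_UP(parent_rate, up);
    unsigned long down_rate = DIV_ROUND_UP(parent_rate, down);

    return (rate - up_rate) <= (down_rate - rate) ? up : down;
}
```

For a 1000 Hz parent and a 300 Hz target, divisor 3 gives ~334 Hz and divisor 4 gives 250 Hz; the old divisor-based rounding picked 3 only by accident of DIV_ROUND_CLOSEST, while the rate comparison chooses it deliberately.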
+26 -1
drivers/clk/clk.c
··· 1350 1350 1351 1351 return rate; 1352 1352 } 1353 - EXPORT_SYMBOL_GPL(clk_core_get_rate); 1354 1353 1355 1354 /** 1356 1355 * clk_get_rate - return the rate of clk ··· 2168 2169 2169 2170 return clk_core_get_phase(clk->core); 2170 2171 } 2172 + 2173 + /** 2174 + * clk_is_match - check if two clk's point to the same hardware clock 2175 + * @p: clk compared against q 2176 + * @q: clk compared against p 2177 + * 2178 + * Returns true if the two struct clk pointers both point to the same hardware 2179 + * clock node. Put differently, returns true if struct clk *p and struct clk *q 2180 + * share the same struct clk_core object. 2181 + * 2182 + * Returns false otherwise. Note that two NULL clks are treated as matching. 2183 + */ 2184 + bool clk_is_match(const struct clk *p, const struct clk *q) 2185 + { 2186 + /* trivial case: identical struct clk's or both NULL */ 2187 + if (p == q) 2188 + return true; 2189 + 2190 + /* true if clk->core pointers match. Avoid derefing garbage */ 2191 + if (!IS_ERR_OR_NULL(p) && !IS_ERR_OR_NULL(q)) 2192 + if (p->core == q->core) 2193 + return true; 2194 + 2195 + return false; 2196 + } 2197 + EXPORT_SYMBOL_GPL(clk_is_match); 2171 2198 2172 2199 /** 2173 2200 * __clk_init - initialize the data structures in a struct clk
+13
drivers/clk/qcom/gcc-msm8960.c
··· 48 48 }, 49 49 }; 50 50 51 + static struct clk_regmap pll4_vote = { 52 + .enable_reg = 0x34c0, 53 + .enable_mask = BIT(4), 54 + .hw.init = &(struct clk_init_data){ 55 + .name = "pll4_vote", 56 + .parent_names = (const char *[]){ "pll4" }, 57 + .num_parents = 1, 58 + .ops = &clk_pll_vote_ops, 59 + }, 60 + }; 61 + 51 62 static struct clk_pll pll8 = { 52 63 .l_reg = 0x3144, 53 64 .m_reg = 0x3148, ··· 3034 3023 3035 3024 static struct clk_regmap *gcc_msm8960_clks[] = { 3036 3025 [PLL3] = &pll3.clkr, 3026 + [PLL4_VOTE] = &pll4_vote, 3037 3027 [PLL8] = &pll8.clkr, 3038 3028 [PLL8_VOTE] = &pll8_vote, 3039 3029 [PLL14] = &pll14.clkr, ··· 3259 3247 3260 3248 static struct clk_regmap *gcc_apq8064_clks[] = { 3261 3249 [PLL3] = &pll3.clkr, 3250 + [PLL4_VOTE] = &pll4_vote, 3262 3251 [PLL8] = &pll8.clkr, 3263 3252 [PLL8_VOTE] = &pll8_vote, 3264 3253 [PLL14] = &pll14.clkr,
-1
drivers/clk/qcom/lcc-ipq806x.c
··· 462 462 .remove = lcc_ipq806x_remove, 463 463 .driver = { 464 464 .name = "lcc-ipq806x", 465 - .owner = THIS_MODULE, 466 465 .of_match_table = lcc_ipq806x_match_table, 467 466 }, 468 467 };
+3 -4
drivers/clk/qcom/lcc-msm8960.c
··· 417 417 .mnctr_en_bit = 8, 418 418 .mnctr_reset_bit = 7, 419 419 .mnctr_mode_shift = 5, 420 - .n_val_shift = 16, 421 - .m_val_shift = 16, 420 + .n_val_shift = 24, 421 + .m_val_shift = 8, 422 422 .width = 8, 423 423 }, 424 424 .p = { ··· 547 547 return PTR_ERR(regmap); 548 548 549 549 /* Use the correct frequency plan depending on speed of PLL4 */ 550 - val = regmap_read(regmap, 0x4, &val); 550 + regmap_read(regmap, 0x4, &val); 551 551 if (val == 0x12) { 552 552 slimbus_src.freq_tbl = clk_tbl_aif_osr_492; 553 553 mi2s_osr_src.freq_tbl = clk_tbl_aif_osr_492; ··· 574 574 .remove = lcc_msm8960_remove, 575 575 .driver = { 576 576 .name = "lcc-msm8960", 577 - .owner = THIS_MODULE, 578 577 .of_match_table = lcc_msm8960_match_table, 579 578 }, 580 579 };
+3 -3
drivers/clk/ti/fapll.c
··· 84 84 struct fapll_data *fd = to_fapll(hw); 85 85 u32 v = readl_relaxed(fd->base); 86 86 87 - v |= (1 << FAPLL_MAIN_PLLEN); 87 + v |= FAPLL_MAIN_PLLEN; 88 88 writel_relaxed(v, fd->base); 89 89 90 90 return 0; ··· 95 95 struct fapll_data *fd = to_fapll(hw); 96 96 u32 v = readl_relaxed(fd->base); 97 97 98 - v &= ~(1 << FAPLL_MAIN_PLLEN); 98 + v &= ~FAPLL_MAIN_PLLEN; 99 99 writel_relaxed(v, fd->base); 100 100 } 101 101 ··· 104 104 struct fapll_data *fd = to_fapll(hw); 105 105 u32 v = readl_relaxed(fd->base); 106 106 107 - return v & (1 << FAPLL_MAIN_PLLEN); 107 + return v & FAPLL_MAIN_PLLEN; 108 108 } 109 109 110 110 static unsigned long ti_fapll_recalc_rate(struct clk_hw *hw,
+3
drivers/clocksource/Kconfig
··· 192 192 config SH_TIMER_CMT 193 193 bool "Renesas CMT timer driver" if COMPILE_TEST 194 194 depends on GENERIC_CLOCKEVENTS 195 + depends on HAS_IOMEM 195 196 default SYS_SUPPORTS_SH_CMT 196 197 help 197 198 This enables build of a clocksource and clockevent driver for ··· 202 201 config SH_TIMER_MTU2 203 202 bool "Renesas MTU2 timer driver" if COMPILE_TEST 204 203 depends on GENERIC_CLOCKEVENTS 204 + depends on HAS_IOMEM 205 205 default SYS_SUPPORTS_SH_MTU2 206 206 help 207 207 This enables build of a clockevent driver for the Multi-Function ··· 212 210 config SH_TIMER_TMU 213 211 bool "Renesas TMU timer driver" if COMPILE_TEST 214 212 depends on GENERIC_CLOCKEVENTS 213 + depends on HAS_IOMEM 215 214 default SYS_SUPPORTS_SH_TMU 216 215 help 217 216 This enables build of a clocksource and clockevent driver for
+2 -2
drivers/clocksource/time-efm32.c
··· 225 225 clock_event_ddata.base = base; 226 226 clock_event_ddata.periodic_top = DIV_ROUND_CLOSEST(rate, 1024 * HZ); 227 227 228 - setup_irq(irq, &efm32_clock_event_irq); 229 - 230 228 clockevents_config_and_register(&clock_event_ddata.evtdev, 231 229 DIV_ROUND_CLOSEST(rate, 1024), 232 230 0xf, 0xffff); 231 + 232 + setup_irq(irq, &efm32_clock_event_irq); 233 233 234 234 return 0; 235 235
+4 -11
drivers/clocksource/timer-sun5i.c
··· 17 17 #include <linux/irq.h> 18 18 #include <linux/irqreturn.h> 19 19 #include <linux/reset.h> 20 - #include <linux/sched_clock.h> 21 20 #include <linux/of.h> 22 21 #include <linux/of_address.h> 23 22 #include <linux/of_irq.h> ··· 136 137 .dev_id = &sun5i_clockevent, 137 138 }; 138 139 139 - static u64 sun5i_timer_sched_read(void) 140 - { 141 - return ~readl(timer_base + TIMER_CNTVAL_LO_REG(1)); 142 - } 143 - 144 140 static void __init sun5i_timer_init(struct device_node *node) 145 141 { 146 142 struct reset_control *rstc; ··· 166 172 writel(TIMER_CTL_ENABLE | TIMER_CTL_RELOAD, 167 173 timer_base + TIMER_CTL_REG(1)); 168 174 169 - sched_clock_register(sun5i_timer_sched_read, 32, rate); 170 175 clocksource_mmio_init(timer_base + TIMER_CNTVAL_LO_REG(1), node->name, 171 176 rate, 340, 32, clocksource_mmio_readl_down); 172 177 173 178 ticks_per_jiffy = DIV_ROUND_UP(rate, HZ); 174 - 175 - ret = setup_irq(irq, &sun5i_timer_irq); 176 - if (ret) 177 - pr_warn("failed to setup irq %d\n", irq); 178 179 179 180 /* Enable timer0 interrupt */ 180 181 val = readl(timer_base + TIMER_IRQ_EN_REG); ··· 180 191 181 192 clockevents_config_and_register(&sun5i_clockevent, rate, 182 193 TIMER_SYNC_TICKS, 0xffffffff); 194 + 195 + ret = setup_irq(irq, &sun5i_timer_irq); 196 + if (ret) 197 + pr_warn("failed to setup irq %d\n", irq); 183 198 } 184 199 CLOCKSOURCE_OF_DECLARE(sun5i_a13, "allwinner,sun5i-a13-hstimer", 185 200 sun5i_timer_init);
+6 -15
drivers/cpufreq/exynos-cpufreq.c
··· 159 159 160 160 static int exynos_cpufreq_probe(struct platform_device *pdev) 161 161 { 162 - struct device_node *cpus, *np; 162 + struct device_node *cpu0; 163 163 int ret = -EINVAL; 164 164 165 165 exynos_info = kzalloc(sizeof(*exynos_info), GFP_KERNEL); ··· 206 206 if (ret) 207 207 goto err_cpufreq_reg; 208 208 209 - cpus = of_find_node_by_path("/cpus"); 210 - if (!cpus) { 211 - pr_err("failed to find cpus node\n"); 209 + cpu0 = of_get_cpu_node(0, NULL); 210 + if (!cpu0) { 211 + pr_err("failed to find cpu0 node\n"); 212 212 return 0; 213 213 } 214 214 215 - np = of_get_next_child(cpus, NULL); 216 - if (!np) { 217 - pr_err("failed to find cpus child node\n"); 218 - of_node_put(cpus); 219 - return 0; 220 - } 221 - 222 - if (of_find_property(np, "#cooling-cells", NULL)) { 223 - cdev = of_cpufreq_cooling_register(np, 215 + if (of_find_property(cpu0, "#cooling-cells", NULL)) { 216 + cdev = of_cpufreq_cooling_register(cpu0, 224 217 cpu_present_mask); 225 218 if (IS_ERR(cdev)) 226 219 pr_err("running cpufreq without cooling device: %ld\n", 227 220 PTR_ERR(cdev)); 228 221 } 229 - of_node_put(np); 230 - of_node_put(cpus); 231 222 232 223 return 0; 233 224
+2
drivers/cpufreq/ppc-corenet-cpufreq.c
··· 22 22 #include <linux/smp.h> 23 23 #include <sysdev/fsl_soc.h> 24 24 25 + #include <asm/smp.h> /* for get_hard_smp_processor_id() in UP configs */ 26 + 25 27 /** 26 28 * struct cpu_data - per CPU data struct 27 29 * @parent: the parent node of cpu clock
+6 -6
drivers/cpuidle/cpuidle-mvebu-v7.c
··· 37 37 deepidle = true; 38 38 39 39 ret = mvebu_v7_cpu_suspend(deepidle); 40 + cpu_pm_exit(); 41 + 40 42 if (ret) 41 43 return ret; 42 - 43 - cpu_pm_exit(); 44 44 45 45 return index; 46 46 } ··· 50 50 .states[0] = ARM_CPUIDLE_WFI_STATE, 51 51 .states[1] = { 52 52 .enter = mvebu_v7_enter_idle, 53 - .exit_latency = 10, 53 + .exit_latency = 100, 54 54 .power_usage = 50, 55 - .target_residency = 100, 55 + .target_residency = 1000, 56 56 .name = "MV CPU IDLE", 57 57 .desc = "CPU power down", 58 58 }, 59 59 .states[2] = { 60 60 .enter = mvebu_v7_enter_idle, 61 - .exit_latency = 100, 61 + .exit_latency = 1000, 62 62 .power_usage = 5, 63 - .target_residency = 1000, 63 + .target_residency = 10000, 64 64 .flags = MVEBU_V7_FLAG_DEEP_IDLE, 65 65 .name = "MV CPU DEEP IDLE", 66 66 .desc = "CPU and L2 Fabric power down",
+26 -35
drivers/cpuidle/cpuidle.c
··· 44 44 off = 1; 45 45 } 46 46 47 + bool cpuidle_not_available(struct cpuidle_driver *drv, 48 + struct cpuidle_device *dev) 49 + { 50 + return off || !initialized || !drv || !dev || !dev->enabled; 51 + } 52 + 47 53 /** 48 54 * cpuidle_play_dead - cpu off-lining 49 55 * ··· 72 66 return -ENODEV; 73 67 } 74 68 75 - /** 76 - * cpuidle_find_deepest_state - Find deepest state meeting specific conditions. 77 - * @drv: cpuidle driver for the given CPU. 78 - * @dev: cpuidle device for the given CPU. 79 - * @freeze: Whether or not the state should be suitable for suspend-to-idle. 80 - */ 81 - static int cpuidle_find_deepest_state(struct cpuidle_driver *drv, 82 - struct cpuidle_device *dev, bool freeze) 69 + static int find_deepest_state(struct cpuidle_driver *drv, 70 + struct cpuidle_device *dev, bool freeze) 83 71 { 84 72 unsigned int latency_req = 0; 85 73 int i, ret = freeze ? -1 : CPUIDLE_DRIVER_STATE_START - 1; ··· 90 90 ret = i; 91 91 } 92 92 return ret; 93 + } 94 + 95 + /** 96 + * cpuidle_find_deepest_state - Find the deepest available idle state. 97 + * @drv: cpuidle driver for the given CPU. 98 + * @dev: cpuidle device for the given CPU. 99 + */ 100 + int cpuidle_find_deepest_state(struct cpuidle_driver *drv, 101 + struct cpuidle_device *dev) 102 + { 103 + return find_deepest_state(drv, dev, false); 93 104 } 94 105 95 106 static void enter_freeze_proper(struct cpuidle_driver *drv, ··· 124 113 125 114 /** 126 115 * cpuidle_enter_freeze - Enter an idle state suitable for suspend-to-idle. 116 + * @drv: cpuidle driver for the given CPU. 117 + * @dev: cpuidle device for the given CPU. 127 118 * 128 119 * If there are states with the ->enter_freeze callback, find the deepest of 129 - * them and enter it with frozen tick. Otherwise, find the deepest state 130 - * available and enter it normally. 120 + * them and enter it with frozen tick. 
131 121 */ 132 - void cpuidle_enter_freeze(void) 122 + int cpuidle_enter_freeze(struct cpuidle_driver *drv, struct cpuidle_device *dev) 133 123 { 134 - struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices); 135 - struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev); 136 124 int index; 137 125 138 126 /* ··· 139 129 * that interrupts won't be enabled when it exits and allows the tick to 140 130 * be frozen safely. 141 131 */ 142 - index = cpuidle_find_deepest_state(drv, dev, true); 143 - if (index >= 0) { 144 - enter_freeze_proper(drv, dev, index); 145 - return; 146 - } 147 - 148 - /* 149 - * It is not safe to freeze the tick, find the deepest state available 150 - * at all and try to enter it normally. 151 - */ 152 - index = cpuidle_find_deepest_state(drv, dev, false); 132 + index = find_deepest_state(drv, dev, true); 153 133 if (index >= 0) 154 - cpuidle_enter(drv, dev, index); 155 - else 156 - arch_cpu_idle(); 134 + enter_freeze_proper(drv, dev, index); 157 135 158 - /* Interrupts are enabled again here. */ 159 - local_irq_disable(); 136 + return index; 160 137 } 161 138 162 139 /** ··· 202 205 */ 203 206 int cpuidle_select(struct cpuidle_driver *drv, struct cpuidle_device *dev) 204 207 { 205 - if (off || !initialized) 206 - return -ENODEV; 207 - 208 - if (!drv || !dev || !dev->enabled) 209 - return -EBUSY; 210 - 211 208 return cpuidle_curr_governor->select(drv, dev); 212 209 } 213 210
+3
drivers/dma-buf/fence.c
··· 159 159 if (WARN_ON(timeout < 0)) 160 160 return -EINVAL; 161 161 162 + if (timeout == 0) 163 + return fence_is_signaled(fence); 164 + 162 165 trace_fence_wait_start(fence); 163 166 ret = fence->ops->wait(fence, intr, timeout); 164 167 trace_fence_wait_end(fence);
+3 -2
drivers/dma-buf/reservation.c
··· 327 327 unsigned seq, shared_count, i = 0; 328 328 long ret = timeout; 329 329 330 + if (!timeout) 331 + return reservation_object_test_signaled_rcu(obj, wait_all); 332 + 330 333 retry: 331 334 fence = NULL; 332 335 shared_count = 0; ··· 405 402 int ret = 1; 406 403 407 404 if (!test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags)) { 408 - int ret; 409 - 410 405 fence = fence_get_rcu(lfence); 411 406 if (!fence) 412 407 return -1;
+14
drivers/dma/amba-pl08x.c
··· 97 97 98 98 #define DRIVER_NAME "pl08xdmac" 99 99 100 + #define PL80X_DMA_BUSWIDTHS \ 101 + BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) | \ 102 + BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \ 103 + BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \ 104 + BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) 105 + 100 106 static struct amba_driver pl08x_amba_driver; 101 107 struct pl08x_driver_data; 102 108 ··· 2076 2070 pl08x->memcpy.device_pause = pl08x_pause; 2077 2071 pl08x->memcpy.device_resume = pl08x_resume; 2078 2072 pl08x->memcpy.device_terminate_all = pl08x_terminate_all; 2073 + pl08x->memcpy.src_addr_widths = PL80X_DMA_BUSWIDTHS; 2074 + pl08x->memcpy.dst_addr_widths = PL80X_DMA_BUSWIDTHS; 2075 + pl08x->memcpy.directions = BIT(DMA_MEM_TO_MEM); 2076 + pl08x->memcpy.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT; 2079 2077 2080 2078 /* Initialize slave engine */ 2081 2079 dma_cap_set(DMA_SLAVE, pl08x->slave.cap_mask); ··· 2096 2086 pl08x->slave.device_pause = pl08x_pause; 2097 2087 pl08x->slave.device_resume = pl08x_resume; 2098 2088 pl08x->slave.device_terminate_all = pl08x_terminate_all; 2089 + pl08x->slave.src_addr_widths = PL80X_DMA_BUSWIDTHS; 2090 + pl08x->slave.dst_addr_widths = PL80X_DMA_BUSWIDTHS; 2091 + pl08x->slave.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); 2092 + pl08x->slave.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT; 2099 2093 2100 2094 /* Get the platform data */ 2101 2095 pl08x->pd = dev_get_platdata(&adev->dev);
+114 -74
drivers/dma/at_hdmac.c
··· 238 238 } 239 239 240 240 /* 241 - * atc_get_current_descriptors - 242 - * locate the descriptor which equal to physical address in DSCR 243 - * @atchan: the channel we want to start 244 - * @dscr_addr: physical descriptor address in DSCR 241 + * atc_get_desc_by_cookie - get the descriptor of a cookie 242 + * @atchan: the DMA channel 243 + * @cookie: the cookie to get the descriptor for 245 244 */ 246 - static struct at_desc *atc_get_current_descriptors(struct at_dma_chan *atchan, 247 - u32 dscr_addr) 245 + static struct at_desc *atc_get_desc_by_cookie(struct at_dma_chan *atchan, 246 + dma_cookie_t cookie) 248 247 { 249 - struct at_desc *desc, *_desc, *child, *desc_cur = NULL; 248 + struct at_desc *desc, *_desc; 249 + 250 + list_for_each_entry_safe(desc, _desc, &atchan->queue, desc_node) { 251 + if (desc->txd.cookie == cookie) 252 + return desc; 253 + } 250 254 251 255 list_for_each_entry_safe(desc, _desc, &atchan->active_list, desc_node) { 252 - if (desc->lli.dscr == dscr_addr) { 253 - desc_cur = desc; 254 - break; 255 - } 256 - 257 - list_for_each_entry(child, &desc->tx_list, desc_node) { 258 - if (child->lli.dscr == dscr_addr) { 259 - desc_cur = child; 260 - break; 261 - } 262 - } 256 + if (desc->txd.cookie == cookie) 257 + return desc; 263 258 } 264 259 265 - return desc_cur; 260 + return NULL; 266 261 } 267 262 268 - /* 269 - * atc_get_bytes_left - 270 - * Get the number of bytes residue in dma buffer, 271 - * @chan: the channel we want to start 263 + /** 264 + * atc_calc_bytes_left - calculates the number of bytes left according to the 265 + * value read from CTRLA. 
266 + * 267 + * @current_len: the number of bytes left before reading CTRLA 268 + * @ctrla: the value of CTRLA 269 + * @desc: the descriptor containing the transfer width 272 270 */ 273 - static int atc_get_bytes_left(struct dma_chan *chan) 271 + static inline int atc_calc_bytes_left(int current_len, u32 ctrla, 272 + struct at_desc *desc) 273 + { 274 + return current_len - ((ctrla & ATC_BTSIZE_MAX) << desc->tx_width); 275 + } 276 + 277 + /** 278 + * atc_calc_bytes_left_from_reg - calculates the number of bytes left according 279 + * to the current value of CTRLA. 280 + * 281 + * @current_len: the number of bytes left before reading CTRLA 282 + * @atchan: the channel to read CTRLA for 283 + * @desc: the descriptor containing the transfer width 284 + */ 285 + static inline int atc_calc_bytes_left_from_reg(int current_len, 286 + struct at_dma_chan *atchan, struct at_desc *desc) 287 + { 288 + u32 ctrla = channel_readl(atchan, CTRLA); 289 + 290 + return atc_calc_bytes_left(current_len, ctrla, desc); 291 + } 292 + 293 + /** 294 + * atc_get_bytes_left - get the number of bytes residue for a cookie 295 + * @chan: DMA channel 296 + * @cookie: transaction identifier to check status of 297 + */ 298 + static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie) 274 299 { 275 300 struct at_dma_chan *atchan = to_at_dma_chan(chan); 276 - struct at_dma *atdma = to_at_dma(chan->device); 277 - int chan_id = atchan->chan_common.chan_id; 278 301 struct at_desc *desc_first = atc_first_active(atchan); 279 - struct at_desc *desc_cur; 280 - int ret = 0, count = 0; 302 + struct at_desc *desc; 303 + int ret; 304 + u32 ctrla, dscr; 281 305 282 306 /* 283 - * Initialize necessary values in the first time. 284 - * remain_desc record remain desc length. 307 + * If the cookie doesn't match to the currently running transfer then 308 + * we can return the total length of the associated DMA transfer, 309 + * because it is still queued. 
285 310 */ 286 - if (atchan->remain_desc == 0) 287 - /* First descriptor embedds the transaction length */ 288 - atchan->remain_desc = desc_first->len; 311 + desc = atc_get_desc_by_cookie(atchan, cookie); 312 + if (desc == NULL) 313 + return -EINVAL; 314 + else if (desc != desc_first) 315 + return desc->total_len; 289 316 290 - /* 291 - * This happens when current descriptor transfer complete. 292 - * The residual buffer size should reduce current descriptor length. 293 - */ 294 - if (unlikely(test_bit(ATC_IS_BTC, &atchan->status))) { 295 - clear_bit(ATC_IS_BTC, &atchan->status); 296 - desc_cur = atc_get_current_descriptors(atchan, 297 - channel_readl(atchan, DSCR)); 298 - if (!desc_cur) { 299 - ret = -EINVAL; 300 - goto out; 301 - } 317 + /* cookie matches to the currently running transfer */ 318 + ret = desc_first->total_len; 302 319 303 - count = (desc_cur->lli.ctrla & ATC_BTSIZE_MAX) 304 - << desc_first->tx_width; 305 - if (atchan->remain_desc < count) { 306 - ret = -EINVAL; 307 - goto out; 308 - } 320 + if (desc_first->lli.dscr) { 321 + /* hardware linked list transfer */ 309 322 310 - atchan->remain_desc -= count; 311 - ret = atchan->remain_desc; 312 - } else { 313 323 /* 314 - * Get residual bytes when current 315 - * descriptor transfer in progress. 324 + * Calculate the residue by removing the length of the child 325 + * descriptors already transferred from the total length. 326 + * To get the current child descriptor we can use the value of 327 + * the channel's DSCR register and compare it against the value 328 + * of the hardware linked list structure of each child 329 + * descriptor. 316 330 */ 317 - count = (channel_readl(atchan, CTRLA) & ATC_BTSIZE_MAX) 318 - << (desc_first->tx_width); 319 - ret = atchan->remain_desc - count; 320 - } 321 - /* 322 - * Check fifo empty. 
323 - */ 324 - if (!(dma_readl(atdma, CHSR) & AT_DMA_EMPT(chan_id))) 325 - atc_issue_pending(chan); 326 331 327 - out: 332 + ctrla = channel_readl(atchan, CTRLA); 333 + rmb(); /* ensure CTRLA is read before DSCR */ 334 + dscr = channel_readl(atchan, DSCR); 335 + 336 + /* for the first descriptor we can be more accurate */ 337 + if (desc_first->lli.dscr == dscr) 338 + return atc_calc_bytes_left(ret, ctrla, desc_first); 339 + 340 + ret -= desc_first->len; 341 + list_for_each_entry(desc, &desc_first->tx_list, desc_node) { 342 + if (desc->lli.dscr == dscr) 343 + break; 344 + 345 + ret -= desc->len; 346 + } 347 + 348 + /* 349 + * For the last descriptor in the chain we can calculate 350 + * the remaining bytes using the channel's register. 351 + * Note that the transfer width of the first and last 352 + * descriptor may differ. 353 + */ 354 + if (!desc->lli.dscr) 355 + ret = atc_calc_bytes_left_from_reg(ret, atchan, desc); 356 + } else { 357 + /* single transfer */ 358 + ret = atc_calc_bytes_left_from_reg(ret, atchan, desc_first); 359 + } 360 + 328 361 return ret; 329 362 } 330 363 ··· 572 539 /* Give information to tasklet */ 573 540 set_bit(ATC_IS_ERROR, &atchan->status); 574 541 } 575 - if (pending & AT_DMA_BTC(i)) 576 - set_bit(ATC_IS_BTC, &atchan->status); 577 542 tasklet_schedule(&atchan->tasklet); 578 543 ret = IRQ_HANDLED; 579 544 } ··· 684 653 desc->lli.ctrlb = ctrlb; 685 654 686 655 desc->txd.cookie = 0; 656 + desc->len = xfer_count << src_width; 687 657 688 658 atc_desc_chain(&first, &prev, desc); 689 659 } 690 660 691 661 /* First descriptor of the chain embedds additional information */ 692 662 first->txd.cookie = -EBUSY; 693 - first->len = len; 663 + first->total_len = len; 664 + 665 + /* set transfer width for the calculation of the residue */ 694 666 first->tx_width = src_width; 667 + prev->tx_width = src_width; 695 668 696 669 /* set end-of-link to the last link descriptor of list*/ 697 670 set_desc_eol(desc); ··· 787 752 | ATC_SRC_WIDTH(mem_width) 788 
753 | len >> mem_width; 789 754 desc->lli.ctrlb = ctrlb; 755 + desc->len = len; 790 756 791 757 atc_desc_chain(&first, &prev, desc); 792 758 total_len += len; ··· 828 792 | ATC_DST_WIDTH(mem_width) 829 793 | len >> reg_width; 830 794 desc->lli.ctrlb = ctrlb; 795 + desc->len = len; 831 796 832 797 atc_desc_chain(&first, &prev, desc); 833 798 total_len += len; ··· 843 806 844 807 /* First descriptor of the chain embedds additional information */ 845 808 first->txd.cookie = -EBUSY; 846 - first->len = total_len; 809 + first->total_len = total_len; 810 + 811 + /* set transfer width for the calculation of the residue */ 847 812 first->tx_width = reg_width; 813 + prev->tx_width = reg_width; 848 814 849 815 /* first link descriptor of list is responsible of flags */ 850 816 first->txd.flags = flags; /* client is in control of this ack */ ··· 912 872 | ATC_FC_MEM2PER 913 873 | ATC_SIF(atchan->mem_if) 914 874 | ATC_DIF(atchan->per_if); 875 + desc->len = period_len; 915 876 break; 916 877 917 878 case DMA_DEV_TO_MEM: ··· 924 883 | ATC_FC_PER2MEM 925 884 | ATC_SIF(atchan->per_if) 926 885 | ATC_DIF(atchan->mem_if); 886 + desc->len = period_len; 927 887 break; 928 888 929 889 default: ··· 1006 964 1007 965 /* First descriptor of the chain embedds additional information */ 1008 966 first->txd.cookie = -EBUSY; 1009 - first->len = buf_len; 967 + first->total_len = buf_len; 1010 968 first->tx_width = reg_width; 1011 969 1012 970 return &first->txd; ··· 1160 1118 spin_lock_irqsave(&atchan->lock, flags); 1161 1119 1162 1120 /* Get number of bytes left in the active transactions */ 1163 - bytes = atc_get_bytes_left(chan); 1121 + bytes = atc_get_bytes_left(chan, cookie); 1164 1122 1165 1123 spin_unlock_irqrestore(&atchan->lock, flags); 1166 1124 ··· 1256 1214 1257 1215 spin_lock_irqsave(&atchan->lock, flags); 1258 1216 atchan->descs_allocated = i; 1259 - atchan->remain_desc = 0; 1260 1217 list_splice(&tmp_list, &atchan->free_list); 1261 1218 dma_cookie_init(chan); 1262 1219 
spin_unlock_irqrestore(&atchan->lock, flags); ··· 1298 1257 list_splice_init(&atchan->free_list, &list); 1299 1258 atchan->descs_allocated = 0; 1300 1259 atchan->status = 0; 1301 - atchan->remain_desc = 0; 1302 1260 1303 1261 dev_vdbg(chan2dev(chan), "free_chan_resources: done\n"); 1304 1262 }
+3 -4
drivers/dma/at_hdmac_regs.h
··· 181 181 * @at_lli: hardware lli structure 182 182 * @txd: support for the async_tx api 183 183 * @desc_node: node on the channed descriptors list 184 - * @len: total transaction bytecount 184 + * @len: descriptor byte count 185 185 * @tx_width: transfer width 186 + * @total_len: total transaction byte count 186 187 */ 187 188 struct at_desc { 188 189 /* FIRST values the hardware uses */ ··· 195 194 struct list_head desc_node; 196 195 size_t len; 197 196 u32 tx_width; 197 + size_t total_len; 198 198 }; 199 199 200 200 static inline struct at_desc * ··· 215 213 enum atc_status { 216 214 ATC_IS_ERROR = 0, 217 215 ATC_IS_PAUSED = 1, 218 - ATC_IS_BTC = 2, 219 216 ATC_IS_CYCLIC = 24, 220 217 }; 221 218 ··· 232 231 * @save_cfg: configuration register that is saved on suspend/resume cycle 233 232 * @save_dscr: for cyclic operations, preserve next descriptor address in 234 233 * the cyclic list on suspend/resume cycle 235 - * @remain_desc: to save remain desc length 236 234 * @dma_sconfig: configuration for slave transfers, passed via 237 235 * .device_config 238 236 * @lock: serializes enqueue/dequeue operations to descriptors lists ··· 251 251 struct tasklet_struct tasklet; 252 252 u32 save_cfg; 253 253 u32 save_dscr; 254 - u32 remain_desc; 255 254 struct dma_slave_config dma_sconfig; 256 255 257 256 spinlock_t lock;
+3 -4
drivers/dma/at_xdmac.c
··· 664 664 struct at_xdmac_desc *first = NULL, *prev = NULL; 665 665 unsigned int periods = buf_len / period_len; 666 666 int i; 667 - u32 cfg; 668 667 669 668 dev_dbg(chan2dev(chan), "%s: buf_addr=%pad, buf_len=%zd, period_len=%zd, dir=%s, flags=0x%lx\n", 670 669 __func__, &buf_addr, buf_len, period_len, ··· 699 700 if (direction == DMA_DEV_TO_MEM) { 700 701 desc->lld.mbr_sa = atchan->per_src_addr; 701 702 desc->lld.mbr_da = buf_addr + i * period_len; 702 - cfg = atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG]; 703 + desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG]; 703 704 } else { 704 705 desc->lld.mbr_sa = buf_addr + i * period_len; 705 706 desc->lld.mbr_da = atchan->per_dst_addr; 706 - cfg = atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG]; 707 + desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG]; 707 708 } 708 709 desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV1 709 710 | AT_XDMAC_MBR_UBC_NDEN 710 711 | AT_XDMAC_MBR_UBC_NSEN 711 712 | AT_XDMAC_MBR_UBC_NDE 712 - | period_len >> at_xdmac_get_dwidth(cfg); 713 + | period_len >> at_xdmac_get_dwidth(desc->lld.mbr_cfg); 713 714 714 715 dev_dbg(chan2dev(chan), 715 716 "%s: lld: mbr_sa=%pad, mbr_da=%pad, mbr_ubc=0x%08x\n",
+1
drivers/dma/bcm2835-dma.c
··· 475 475 * c->desc is NULL and exit.) 476 476 */ 477 477 if (c->desc) { 478 + bcm2835_dma_desc_free(&c->desc->vd); 478 479 c->desc = NULL; 479 480 bcm2835_dma_abort(c->chan_base); 480 481
+7
drivers/dma/dma-jz4740.c
··· 511 511 kfree(container_of(vdesc, struct jz4740_dma_desc, vdesc)); 512 512 } 513 513 514 + #define JZ4740_DMA_BUSWIDTHS (BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \ 515 + BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | BIT(DMA_SLAVE_BUSWIDTH_4_BYTES)) 516 + 514 517 static int jz4740_dma_probe(struct platform_device *pdev) 515 518 { 516 519 struct jz4740_dmaengine_chan *chan; ··· 551 548 dd->device_prep_dma_cyclic = jz4740_dma_prep_dma_cyclic; 552 549 dd->device_config = jz4740_dma_slave_config; 553 550 dd->device_terminate_all = jz4740_dma_terminate_all; 551 + dd->src_addr_widths = JZ4740_DMA_BUSWIDTHS; 552 + dd->dst_addr_widths = JZ4740_DMA_BUSWIDTHS; 553 + dd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); 554 + dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; 554 555 dd->dev = &pdev->dev; 555 556 INIT_LIST_HEAD(&dd->channels); 556 557
+1 -1
drivers/dma/dw/core.c
··· 626 626 dev_vdbg(dw->dma.dev, "%s: status=0x%x\n", __func__, status); 627 627 628 628 /* Check if we have any interrupt from the DMAC */ 629 - if (!status) 629 + if (!status || !dw->in_use) 630 630 return IRQ_NONE; 631 631 632 632 /*
+4 -1
drivers/dma/dw/platform.c
··· 26 26 27 27 #include "internal.h" 28 28 29 + #define DRV_NAME "dw_dmac" 30 + 29 31 static struct dma_chan *dw_dma_of_xlate(struct of_phandle_args *dma_spec, 30 32 struct of_dma *ofdma) 31 33 { ··· 286 284 .remove = dw_remove, 287 285 .shutdown = dw_shutdown, 288 286 .driver = { 289 - .name = "dw_dmac", 287 + .name = DRV_NAME, 290 288 .pm = &dw_dev_pm_ops, 291 289 .of_match_table = of_match_ptr(dw_dma_of_id_table), 292 290 .acpi_match_table = ACPI_PTR(dw_dma_acpi_id_table), ··· 307 305 308 306 MODULE_LICENSE("GPL v2"); 309 307 MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller platform driver"); 308 + MODULE_ALIAS("platform:" DRV_NAME);
+7
drivers/dma/edma.c
··· 260 260 */ 261 261 if (echan->edesc) { 262 262 int cyclic = echan->edesc->cyclic; 263 + 264 + /* 265 + * free the running request descriptor 266 + * since it is not in any of the vdesc lists 267 + */ 268 + edma_desc_free(&echan->edesc->vdesc); 269 + 263 270 echan->edesc = NULL; 264 271 edma_stop(echan->ch_num); 265 272 /* Move the cyclic channel back to default queue */
+4 -3
drivers/dma/imx-sdma.c
··· 531 531 dev_err(sdma->dev, "Timeout waiting for CH0 ready\n"); 532 532 } 533 533 534 + /* Set bits of CONFIG register with dynamic context switching */ 535 + if (readl(sdma->regs + SDMA_H_CONFIG) == 0) 536 + writel_relaxed(SDMA_H_CONFIG_CSM, sdma->regs + SDMA_H_CONFIG); 537 + 534 538 return ret ? 0 : -ETIMEDOUT; 535 539 } 536 540 ··· 1397 1393 writel_relaxed(0, sdma->regs + SDMA_H_CONFIG); 1398 1394 1399 1395 writel_relaxed(ccb_phys, sdma->regs + SDMA_H_C0PTR); 1400 - 1401 - /* Set bits of CONFIG register with given context switching mode */ 1402 - writel_relaxed(SDMA_H_CONFIG_CSM, sdma->regs + SDMA_H_CONFIG); 1403 1396 1404 1397 /* Initializes channel's priorities */ 1405 1398 sdma_set_channel_priority(&sdma->channel[0], 7);
+4
drivers/dma/ioat/dma_v3.c
··· 230 230 switch (pdev->device) { 231 231 case PCI_DEVICE_ID_INTEL_IOAT_BWD2: 232 232 case PCI_DEVICE_ID_INTEL_IOAT_BWD3: 233 + case PCI_DEVICE_ID_INTEL_IOAT_BDXDE0: 234 + case PCI_DEVICE_ID_INTEL_IOAT_BDXDE1: 235 + case PCI_DEVICE_ID_INTEL_IOAT_BDXDE2: 236 + case PCI_DEVICE_ID_INTEL_IOAT_BDXDE3: 233 237 return true; 234 238 default: 235 239 return false;
+10
drivers/dma/mmp_pdma.c
··· 219 219 220 220 while (dint) { 221 221 i = __ffs(dint); 222 + /* only handle interrupts belonging to pdma driver*/ 223 + if (i >= pdev->dma_channels) 224 + break; 222 225 dint &= (dint - 1); 223 226 phy = &pdev->phy[i]; 224 227 ret = mmp_pdma_chan_handler(irq, phy); ··· 1002 999 struct resource *iores; 1003 1000 int i, ret, irq = 0; 1004 1001 int dma_channels = 0, irq_num = 0; 1002 + const enum dma_slave_buswidth widths = 1003 + DMA_SLAVE_BUSWIDTH_1_BYTE | DMA_SLAVE_BUSWIDTH_2_BYTES | 1004 + DMA_SLAVE_BUSWIDTH_4_BYTES; 1005 1005 1006 1006 pdev = devm_kzalloc(&op->dev, sizeof(*pdev), GFP_KERNEL); 1007 1007 if (!pdev) ··· 1072 1066 pdev->device.device_config = mmp_pdma_config; 1073 1067 pdev->device.device_terminate_all = mmp_pdma_terminate_all; 1074 1068 pdev->device.copy_align = PDMA_ALIGNMENT; 1069 + pdev->device.src_addr_widths = widths; 1070 + pdev->device.dst_addr_widths = widths; 1071 + pdev->device.directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM); 1072 + pdev->device.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR; 1075 1073 1076 1074 if (pdev->dev->coherent_dma_mask) 1077 1075 dma_set_mask(pdev->dev, pdev->dev->coherent_dma_mask);
+25 -6
drivers/dma/mmp_tdma.c
··· 110 110 struct tasklet_struct tasklet; 111 111 112 112 struct mmp_tdma_desc *desc_arr; 113 - phys_addr_t desc_arr_phys; 113 + dma_addr_t desc_arr_phys; 114 114 int desc_num; 115 115 enum dma_transfer_direction dir; 116 116 dma_addr_t dev_addr; ··· 166 166 static int mmp_tdma_disable_chan(struct dma_chan *chan) 167 167 { 168 168 struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan); 169 + u32 tdcr; 169 170 170 - writel(readl(tdmac->reg_base + TDCR) & ~TDCR_CHANEN, 171 - tdmac->reg_base + TDCR); 171 + tdcr = readl(tdmac->reg_base + TDCR); 172 + tdcr |= TDCR_ABR; 173 + tdcr &= ~TDCR_CHANEN; 174 + writel(tdcr, tdmac->reg_base + TDCR); 172 175 173 176 tdmac->status = DMA_COMPLETE; 174 177 ··· 299 296 return -EAGAIN; 300 297 } 301 298 299 + static size_t mmp_tdma_get_pos(struct mmp_tdma_chan *tdmac) 300 + { 301 + size_t reg; 302 + 303 + if (tdmac->idx == 0) { 304 + reg = __raw_readl(tdmac->reg_base + TDSAR); 305 + reg -= tdmac->desc_arr[0].src_addr; 306 + } else if (tdmac->idx == 1) { 307 + reg = __raw_readl(tdmac->reg_base + TDDAR); 308 + reg -= tdmac->desc_arr[0].dst_addr; 309 + } else 310 + return -EINVAL; 311 + 312 + return reg; 313 + } 314 + 302 315 static irqreturn_t mmp_tdma_chan_handler(int irq, void *dev_id) 303 316 { 304 317 struct mmp_tdma_chan *tdmac = dev_id; 305 318 306 319 if (mmp_tdma_clear_chan_irq(tdmac) == 0) { 307 - tdmac->pos = (tdmac->pos + tdmac->period_len) % tdmac->buf_len; 308 320 tasklet_schedule(&tdmac->tasklet); 309 321 return IRQ_HANDLED; 310 322 } else ··· 361 343 int size = tdmac->desc_num * sizeof(struct mmp_tdma_desc); 362 344 363 345 gpool = tdmac->pool; 364 - if (tdmac->desc_arr) 346 + if (gpool && tdmac->desc_arr) 365 347 gen_pool_free(gpool, (unsigned long)tdmac->desc_arr, 366 348 size); 367 349 tdmac->desc_arr = NULL; ··· 517 499 { 518 500 struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan); 519 501 502 + tdmac->pos = mmp_tdma_get_pos(tdmac); 520 503 dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie, 521 504 
tdmac->buf_len - tdmac->pos); 522 505 ··· 629 610 int i, ret; 630 611 int irq = 0, irq_num = 0; 631 612 int chan_num = TDMA_CHANNEL_NUM; 632 - struct gen_pool *pool; 613 + struct gen_pool *pool = NULL; 633 614 634 615 of_id = of_match_device(mmp_tdma_dt_ids, &pdev->dev); 635 616 if (of_id)
+3 -1
drivers/dma/moxart-dma.c
··· 193 193 194 194 spin_lock_irqsave(&ch->vc.lock, flags); 195 195 196 - if (ch->desc) 196 + if (ch->desc) { 197 + moxart_dma_desc_free(&ch->desc->vd); 197 198 ch->desc = NULL; 199 + } 198 200 199 201 ctrl = readl(ch->base + REG_OFF_CTRL); 200 202 ctrl &= ~(APB_DMA_ENABLE | APB_DMA_FIN_INT_EN | APB_DMA_ERR_INT_EN);
+1
drivers/dma/omap-dma.c
··· 981 981 * c->desc is NULL and exit.) 982 982 */ 983 983 if (c->desc) { 984 + omap_dma_desc_free(&c->desc->vd); 984 985 c->desc = NULL; 985 986 /* Avoid stopping the dma twice */ 986 987 if (!c->paused)
+7 -3
drivers/dma/qcom_bam_dma.c
···
162 162 	[BAM_P_IRQ_STTS]	= { 0x1010, 0x1000, 0x00, 0x00 },
163 163 	[BAM_P_IRQ_CLR]		= { 0x1014, 0x1000, 0x00, 0x00 },
164 164 	[BAM_P_IRQ_EN]		= { 0x1018, 0x1000, 0x00, 0x00 },
165 -	[BAM_P_EVNT_DEST_ADDR]	= { 0x102C, 0x00, 0x1000, 0x00 },
166 -	[BAM_P_EVNT_REG]	= { 0x1018, 0x00, 0x1000, 0x00 },
167 -	[BAM_P_SW_OFSTS]	= { 0x1000, 0x00, 0x1000, 0x00 },
165 +	[BAM_P_EVNT_DEST_ADDR]	= { 0x182C, 0x00, 0x1000, 0x00 },
166 +	[BAM_P_EVNT_REG]	= { 0x1818, 0x00, 0x1000, 0x00 },
167 +	[BAM_P_SW_OFSTS]	= { 0x1800, 0x00, 0x1000, 0x00 },
168 168 	[BAM_P_DATA_FIFO_ADDR]	= { 0x1824, 0x00, 0x1000, 0x00 },
169 169 	[BAM_P_DESC_FIFO_ADDR]	= { 0x181C, 0x00, 0x1000, 0x00 },
170 170 	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x1828, 0x00, 0x1000, 0x00 },
···
1143 1143 	dma_cap_set(DMA_SLAVE, bdev->common.cap_mask);
1144 1144 
1145 1145 	/* initialize dmaengine apis */
1146 +	bdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
1147 +	bdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
1148 +	bdev->common.src_addr_widths = DMA_SLAVE_BUSWIDTH_4_BYTES;
1149 +	bdev->common.dst_addr_widths = DMA_SLAVE_BUSWIDTH_4_BYTES;
1146 1150 	bdev->common.device_alloc_chan_resources = bam_alloc_chan;
1147 1151 	bdev->common.device_free_chan_resources = bam_free_chan;
1148 1152 	bdev->common.device_prep_slave_sg = bam_prep_slave_sg;
+7 -8
drivers/dma/sh/shdmac.c
···
582 582 	}
583 583 }
584 584 
585 -static void sh_dmae_shutdown(struct platform_device *pdev)
586 -{
587 -	struct sh_dmae_device *shdev = platform_get_drvdata(pdev);
588 -	sh_dmae_ctl_stop(shdev);
589 -}
590 -
591 585 #ifdef CONFIG_PM
592 586 static int sh_dmae_runtime_suspend(struct device *dev)
593 587 {
588 +	struct sh_dmae_device *shdev = dev_get_drvdata(dev);
589 +
590 +	sh_dmae_ctl_stop(shdev);
594 591 	return 0;
595 592 }
596 593 
···
602 605 #ifdef CONFIG_PM_SLEEP
603 606 static int sh_dmae_suspend(struct device *dev)
604 607 {
608 +	struct sh_dmae_device *shdev = dev_get_drvdata(dev);
609 +
610 +	sh_dmae_ctl_stop(shdev);
605 611 	return 0;
606 612 }
607 613 
···
929 929 }
930 930 
931 931 static struct platform_driver sh_dmae_driver = {
932 -	.driver	= {
932 +	.driver		= {
933 933 		.pm	= &sh_dmae_pm,
934 934 		.name	= SH_DMAE_DRV_NAME,
935 935 		.of_match_table = sh_dmae_of_match,
936 936 	},
937 937 	.remove		= sh_dmae_remove,
938 -	.shutdown	= sh_dmae_shutdown,
939 938 };
940 939 
941 940 static int __init sh_dmae_init(void)
+16 -23
drivers/firmware/dmi_scan.c
···
78 78  * We have to be cautious here. We have seen BIOSes with DMI pointers
79 79  * pointing to completely the wrong place for example
80 80  */
81 -static void dmi_table(u8 *buf, int len, int num,
81 +static void dmi_table(u8 *buf, u32 len, int num,
82 82 		      void (*decode)(const struct dmi_header *, void *),
83 83 		      void *private_data)
84 84 {
···
86 86 	int i = 0;
87 87 
88 88 	/*
89 -	 * Stop when we see all the items the table claimed to have
90 -	 * OR we run off the end of the table (also happens)
89 +	 * Stop when we have seen all the items the table claimed to have
90 +	 * (SMBIOS < 3.0 only) OR we reach an end-of-table marker OR we run
91 +	 * off the end of the table (should never happen but sometimes does
92 +	 * on bogus implementations.)
91 93 	 */
92 -	while ((i < num) && (data - buf + sizeof(struct dmi_header)) <= len) {
94 +	while ((!num || i < num) &&
95 +	       (data - buf + sizeof(struct dmi_header)) <= len) {
93 96 		const struct dmi_header *dm = (const struct dmi_header *)data;
94 -
95 -		/*
96 -		 * 7.45 End-of-Table (Type 127) [SMBIOS reference spec v3.0.0]
97 -		 */
98 -		if (dm->type == DMI_ENTRY_END_OF_TABLE)
99 -			break;
100 97 
101 98 		/*
102 99 		 * We want to know the total length (formatted area and
···
105 108 			data++;
106 109 		if (data - buf < len - 1)
107 110 			decode(dm, private_data);
111 +
112 +		/*
113 +		 * 7.45 End-of-Table (Type 127) [SMBIOS reference spec v3.0.0]
114 +		 */
115 +		if (dm->type == DMI_ENTRY_END_OF_TABLE)
116 +			break;
117 +
108 118 		data += 2;
109 119 		i++;
110 120 	}
111 121 }
112 122 
113 123 static phys_addr_t dmi_base;
114 -static u16 dmi_len;
124 +static u32 dmi_len;
115 125 static u16 dmi_num;
116 126 
117 127 static int __init dmi_walk_early(void (*decode)(const struct dmi_header *,
···
532 528 	if (memcmp(buf, "_SM3_", 5) == 0 &&
533 529 	    buf[6] < 32 && dmi_checksum(buf, buf[6])) {
534 530 		dmi_ver = get_unaligned_be16(buf + 7);
531 +		dmi_num = 0;			/* No longer specified */
535 532 		dmi_len = get_unaligned_le32(buf + 12);
536 533 		dmi_base = get_unaligned_le64(buf + 16);
537 -
538 -		/*
539 -		 * The 64-bit SMBIOS 3.0 entry point no longer has a field
540 -		 * containing the number of structures present in the table.
541 -		 * Instead, it defines the table size as a maximum size, and
542 -		 * relies on the end-of-table structure type (#127) to be used
543 -		 * to signal the end of the table.
544 -		 * So let's define dmi_num as an upper bound as well: each
545 -		 * structure has a 4 byte header, so dmi_len / 4 is an upper
546 -		 * bound for the number of structures in the table.
547 -		 */
548 -		dmi_num = dmi_len / 4;
549 534 
550 535 		if (dmi_walk_early(dmi_decode) == 0) {
551 536 			pr_info("SMBIOS %d.%d present.\n",
+4 -4
drivers/firmware/efi/libstub/efi-stub-helper.c
···
179 179 		start = desc->phys_addr;
180 180 		end = start + desc->num_pages * (1UL << EFI_PAGE_SHIFT);
181 181 
182 -		if ((start + size) > end || (start + size) > max)
183 -			continue;
184 -
185 -		if (end - size > max)
182 +		if (end > max)
186 183 			end = max;
184 +
185 +		if ((start + size) > end)
186 +			continue;
187 187 
188 188 		if (round_down(end - size, align) < start)
189 189 			continue;
+1 -1
drivers/gpio/gpio-mpc8xxx.c
···
334 334 	.xlate	= irq_domain_xlate_twocell,
335 335 };
336 336 
337 -static struct of_device_id mpc8xxx_gpio_ids[] __initdata = {
337 +static struct of_device_id mpc8xxx_gpio_ids[] = {
338 338 	{ .compatible = "fsl,mpc8349-gpio", },
339 339 	{ .compatible = "fsl,mpc8572-gpio", },
340 340 	{ .compatible = "fsl,mpc8610-gpio", },
+1 -1
drivers/gpio/gpio-syscon.c
···
219 219 		ret = of_property_read_u32_index(np, "gpio,syscon-dev", 2,
220 220 						 &priv->dir_reg_offset);
221 221 		if (ret)
222 -			dev_err(dev, "can't read the dir register offset!\n");
222 +			dev_dbg(dev, "can't read the dir register offset!\n");
223 223 
224 224 		priv->dir_reg_offset <<= 3;
225 225 	}
+10
drivers/gpio/gpiolib-acpi.c
···
201 201 	if (!handler)
202 202 		return AE_BAD_PARAMETER;
203 203 
204 +	pin = acpi_gpiochip_pin_to_gpio_offset(chip, pin);
205 +	if (pin < 0)
206 +		return AE_BAD_PARAMETER;
207 +
204 208 	desc = gpiochip_request_own_desc(chip, pin, "ACPI:Event");
205 209 	if (IS_ERR(desc)) {
206 210 		dev_err(chip->dev, "Failed to request GPIO\n");
···
554 550 	struct acpi_gpio_connection *conn;
555 551 	struct gpio_desc *desc;
556 552 	bool found;
553 +
554 +	pin = acpi_gpiochip_pin_to_gpio_offset(chip, pin);
555 +	if (pin < 0) {
556 +		status = AE_BAD_PARAMETER;
557 +		goto out;
558 +	}
557 559 
558 560 	mutex_lock(&achip->conn_lock);
559 561 
+9 -1
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
···
645 645 	pr_debug("     sdma queue id: %d\n", q->properties.sdma_queue_id);
646 646 	pr_debug("     sdma engine id: %d\n", q->properties.sdma_engine_id);
647 647 
648 +	init_sdma_vm(dqm, q, qpd);
648 649 	retval = mqd->init_mqd(mqd, &q->mqd, &q->mqd_mem_obj,
649 650 				&q->gart_mqd_addr, &q->properties);
650 651 	if (retval != 0) {
···
653 652 		return retval;
654 653 	}
655 654 
656 -	init_sdma_vm(dqm, q, qpd);
655 +	retval = mqd->load_mqd(mqd, q->mqd, 0,
656 +				0, NULL);
657 +	if (retval != 0) {
658 +		deallocate_sdma_queue(dqm, q->sdma_id);
659 +		mqd->uninit_mqd(mqd, q->mqd, q->mqd_mem_obj);
660 +		return retval;
661 +	}
662 +
657 663 	return 0;
658 664 }
659 665 
+13 -9
drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
···
44 44 	BUG_ON(!kq || !dev);
45 45 	BUG_ON(type != KFD_QUEUE_TYPE_DIQ && type != KFD_QUEUE_TYPE_HIQ);
46 46 
47 -	pr_debug("kfd: In func %s initializing queue type %d size %d\n",
47 +	pr_debug("amdkfd: In func %s initializing queue type %d size %d\n",
48 48 			__func__, KFD_QUEUE_TYPE_HIQ, queue_size);
49 49 
50 50 	nop.opcode = IT_NOP;
···
69 69 
70 70 	prop.doorbell_ptr = kfd_get_kernel_doorbell(dev, &prop.doorbell_off);
71 71 
72 -	if (prop.doorbell_ptr == NULL)
72 +	if (prop.doorbell_ptr == NULL) {
73 +		pr_err("amdkfd: error init doorbell");
73 74 		goto err_get_kernel_doorbell;
75 +	}
74 76 
75 77 	retval = kfd_gtt_sa_allocate(dev, queue_size, &kq->pq);
76 -	if (retval != 0)
78 +	if (retval != 0) {
79 +		pr_err("amdkfd: error init pq queues size (%d)\n", queue_size);
77 80 		goto err_pq_allocate_vidmem;
81 +	}
78 82 
79 83 	kq->pq_kernel_addr = kq->pq->cpu_ptr;
80 84 	kq->pq_gpu_addr = kq->pq->gpu_addr;
···
169 165 err_eop_allocate_vidmem:
170 166 	kfd_gtt_sa_free(dev, kq->pq);
171 167 err_pq_allocate_vidmem:
172 -	pr_err("kfd: error init pq\n");
173 168 	kfd_release_kernel_doorbell(dev, prop.doorbell_ptr);
174 169 err_get_kernel_doorbell:
175 -	pr_err("kfd: error init doorbell");
176 170 	return false;
177 171 
178 172 }
···
188 186 				kq->queue->queue);
189 187 	else if (kq->queue->properties.type == KFD_QUEUE_TYPE_DIQ)
190 188 		kfd_gtt_sa_free(kq->dev, kq->fence_mem_obj);
189 +
190 +	kq->mqd->uninit_mqd(kq->mqd, kq->queue->mqd, kq->queue->mqd_mem_obj);
191 191 
192 192 	kfd_gtt_sa_free(kq->dev, kq->rptr_mem);
193 193 	kfd_gtt_sa_free(kq->dev, kq->wptr_mem);
···
215 211 	queue_address = (unsigned int *)kq->pq_kernel_addr;
216 212 	queue_size_dwords = kq->queue->properties.queue_size / sizeof(uint32_t);
217 213 
218 -	pr_debug("kfd: In func %s\nrptr: %d\nwptr: %d\nqueue_address 0x%p\n",
214 +	pr_debug("amdkfd: In func %s\nrptr: %d\nwptr: %d\nqueue_address 0x%p\n",
219 215 			__func__, rptr, wptr, queue_address);
220 216 
221 217 	available_size = (rptr - 1 - wptr + queue_size_dwords) %
···
300 296 	}
301 297 
302 298 	if (kq->ops.initialize(kq, dev, type, KFD_KERNEL_QUEUE_SIZE) == false) {
303 -		pr_err("kfd: failed to init kernel queue\n");
299 +		pr_err("amdkfd: failed to init kernel queue\n");
304 300 		kfree(kq);
305 301 		return NULL;
306 302 	}
···
323 319 
324 320 	BUG_ON(!dev);
325 321 
326 -	pr_err("kfd: starting kernel queue test\n");
322 +	pr_err("amdkfd: starting kernel queue test\n");
327 323 
328 324 	kq = kernel_queue_init(dev, KFD_QUEUE_TYPE_HIQ);
329 325 	BUG_ON(!kq);
···
334 330 		buffer[i] = kq->nop_packet;
335 331 	kq->ops.submit_packet(kq);
336 332 
337 -	pr_err("kfd: ending kernel queue test\n");
333 +	pr_err("amdkfd: ending kernel queue test\n");
338 334 }
339 335 
340 336 
+20 -28
drivers/gpu/drm/drm_crtc.c
···
43 43 #include "drm_crtc_internal.h"
44 44 #include "drm_internal.h"
45 45 
46 -static struct drm_framebuffer *add_framebuffer_internal(struct drm_device *dev,
47 -							struct drm_mode_fb_cmd2 *r,
48 -							struct drm_file *file_priv);
46 +static struct drm_framebuffer *
47 +internal_framebuffer_create(struct drm_device *dev,
48 +			    struct drm_mode_fb_cmd2 *r,
49 +			    struct drm_file *file_priv);
49 50 
50 51 /* Avoid boilerplate.  I'm tired of typing. */
51 52 #define DRM_ENUM_NAME_FN(fnname, list)				\
···
524 523 	kref_get(&fb->refcount);
525 524 }
526 525 EXPORT_SYMBOL(drm_framebuffer_reference);
527 -
528 -static void drm_framebuffer_free_bug(struct kref *kref)
529 -{
530 -	BUG();
531 -}
532 -
533 -static void __drm_framebuffer_unreference(struct drm_framebuffer *fb)
534 -{
535 -	DRM_DEBUG("%p: FB ID: %d (%d)\n", fb, fb->base.id, atomic_read(&fb->refcount.refcount));
536 -	kref_put(&fb->refcount, drm_framebuffer_free_bug);
537 -}
538 526 
539 527 /**
540 528  * drm_framebuffer_unregister_private - unregister a private fb from the lookup idr
···
1309 1319 		return;
1310 1320 	}
1311 1321 	/* disconnect the plane from the fb and crtc: */
1312 -	__drm_framebuffer_unreference(plane->old_fb);
1322 +	drm_framebuffer_unreference(plane->old_fb);
1313 1323 	plane->old_fb = NULL;
1314 1324 	plane->fb = NULL;
1315 1325 	plane->crtc = NULL;
···
2898 2908 	 */
2899 2909 	if (req->flags & DRM_MODE_CURSOR_BO) {
2900 2910 		if (req->handle) {
2901 -			fb = add_framebuffer_internal(dev, &fbreq, file_priv);
2911 +			fb = internal_framebuffer_create(dev, &fbreq, file_priv);
2902 2912 			if (IS_ERR(fb)) {
2903 2913 				DRM_DEBUG_KMS("failed to wrap cursor buffer in drm framebuffer\n");
2904 2914 				return PTR_ERR(fb);
2905 2915 			}
2906 -
2907 -			drm_framebuffer_reference(fb);
2908 2916 		} else {
2909 2917 			fb = NULL;
2910 2918 		}
···
3255 3267 	return 0;
3256 3268 }
3257 3269 
3258 -static struct drm_framebuffer *add_framebuffer_internal(struct drm_device *dev,
3259 -							struct drm_mode_fb_cmd2 *r,
3260 -							struct drm_file *file_priv)
3270 +static struct drm_framebuffer *
3271 +internal_framebuffer_create(struct drm_device *dev,
3272 +			    struct drm_mode_fb_cmd2 *r,
3273 +			    struct drm_file *file_priv)
3261 3274 {
3262 3275 	struct drm_mode_config *config = &dev->mode_config;
3263 3276 	struct drm_framebuffer *fb;
···
3290 3301 		return fb;
3291 3302 	}
3292 3303 
3293 -	mutex_lock(&file_priv->fbs_lock);
3294 -	r->fb_id = fb->base.id;
3295 -	list_add(&fb->filp_head, &file_priv->fbs);
3296 -	DRM_DEBUG_KMS("[FB:%d]\n", fb->base.id);
3297 -	mutex_unlock(&file_priv->fbs_lock);
3298 -
3299 3304 	return fb;
3300 3305 }
3301 3306 
···
3311 3328 int drm_mode_addfb2(struct drm_device *dev,
3312 3329 		    void *data, struct drm_file *file_priv)
3313 3330 {
3331 +	struct drm_mode_fb_cmd2 *r = data;
3314 3332 	struct drm_framebuffer *fb;
3315 3333 
3316 3334 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
3317 3335 		return -EINVAL;
3318 3336 
3319 -	fb = add_framebuffer_internal(dev, data, file_priv);
3337 +	fb = internal_framebuffer_create(dev, r, file_priv);
3320 3338 	if (IS_ERR(fb))
3321 3339 		return PTR_ERR(fb);
3340 +
3341 +	/* Transfer ownership to the filp for reaping on close */
3342 +
3343 +	DRM_DEBUG_KMS("[FB:%d]\n", fb->base.id);
3344 +	mutex_lock(&file_priv->fbs_lock);
3345 +	r->fb_id = fb->base.id;
3346 +	list_add(&fb->filp_head, &file_priv->fbs);
3347 +	mutex_unlock(&file_priv->fbs_lock);
3322 3348 
3323 3349 	return 0;
3324 3350 }
+8 -3
drivers/gpu/drm/drm_dp_mst_topology.c
···
733 733 				    struct drm_dp_sideband_msg_tx *txmsg)
734 734 {
735 735 	bool ret;
736 -	mutex_lock(&mgr->qlock);
736 +
737 +	/*
738 +	 * All updates to txmsg->state are protected by mgr->qlock, and the two
739 +	 * cases we check here are terminal states. For those the barriers
740 +	 * provided by the wake_up/wait_event pair are enough.
741 +	 */
737 742 	ret = (txmsg->state == DRM_DP_SIDEBAND_TX_RX ||
738 743 	       txmsg->state == DRM_DP_SIDEBAND_TX_TIMEOUT);
739 -	mutex_unlock(&mgr->qlock);
740 744 	return ret;
741 745 }
···
1367 1363 	return 0;
1368 1364 }
1369 1365 
1370 -/* must be called holding qlock */
1371 1366 static void process_single_down_tx_qlock(struct drm_dp_mst_topology_mgr *mgr)
1372 1367 {
1373 1368 	struct drm_dp_sideband_msg_tx *txmsg;
1374 1369 	int ret;
1370 +
1371 +	WARN_ON(!mutex_is_locked(&mgr->qlock));
1375 1372 
1376 1373 	/* construct a chunk from the first msg in the tx_msg queue */
1377 1374 	if (list_empty(&mgr->tx_msg_downq)) {
+1
drivers/gpu/drm/drm_edid_load.c
···
287 287 
288 288 	drm_mode_connector_update_edid_property(connector, edid);
289 289 	ret = drm_add_edid_modes(connector, edid);
290 +	drm_edid_to_eld(connector, edid);
290 291 	kfree(edid);
291 292 
292 293 	return ret;
+80 -74
drivers/gpu/drm/drm_mm.c
···
91 91  */
92 92 
93 93 static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
94 -						unsigned long size,
94 +						u64 size,
95 95 						unsigned alignment,
96 96 						unsigned long color,
97 97 						enum drm_mm_search_flags flags);
98 98 static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
99 -						unsigned long size,
99 +						u64 size,
100 100 						unsigned alignment,
101 101 						unsigned long color,
102 -						unsigned long start,
103 -						unsigned long end,
102 +						u64 start,
103 +						u64 end,
104 104 						enum drm_mm_search_flags flags);
105 105 
106 106 static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
107 107 				 struct drm_mm_node *node,
108 -				 unsigned long size, unsigned alignment,
108 +				 u64 size, unsigned alignment,
109 109 				 unsigned long color,
110 110 				 enum drm_mm_allocator_flags flags)
111 111 {
112 112 	struct drm_mm *mm = hole_node->mm;
113 -	unsigned long hole_start = drm_mm_hole_node_start(hole_node);
114 -	unsigned long hole_end = drm_mm_hole_node_end(hole_node);
115 -	unsigned long adj_start = hole_start;
116 -	unsigned long adj_end = hole_end;
113 +	u64 hole_start = drm_mm_hole_node_start(hole_node);
114 +	u64 hole_end = drm_mm_hole_node_end(hole_node);
115 +	u64 adj_start = hole_start;
116 +	u64 adj_end = hole_end;
117 117 
118 118 	BUG_ON(node->allocated);
119 119 
···
124 124 		adj_start = adj_end - size;
125 125 
126 126 	if (alignment) {
127 -		unsigned tmp = adj_start % alignment;
128 -		if (tmp) {
127 +		u64 tmp = adj_start;
128 +		unsigned rem;
129 +
130 +		rem = do_div(tmp, alignment);
131 +		if (rem) {
129 132 			if (flags & DRM_MM_CREATE_TOP)
130 -				adj_start -= tmp;
133 +				adj_start -= rem;
131 134 			else
132 -				adj_start += alignment - tmp;
135 +				adj_start += alignment - rem;
133 136 		}
134 137 	}
135 138 
···
179 176 int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)
180 177 {
181 178 	struct drm_mm_node *hole;
182 -	unsigned long end = node->start + node->size;
183 -	unsigned long hole_start;
184 -	unsigned long hole_end;
179 +	u64 end = node->start + node->size;
180 +	u64 hole_start;
181 +	u64 hole_end;
185 182 
186 183 	BUG_ON(node == NULL);
187 184 
···
230 227  * 0 on success, -ENOSPC if there's no suitable hole.
231 228  */
232 229 int drm_mm_insert_node_generic(struct drm_mm *mm, struct drm_mm_node *node,
233 -			       unsigned long size, unsigned alignment,
230 +			       u64 size, unsigned alignment,
234 231 			       unsigned long color,
235 232 			       enum drm_mm_search_flags sflags,
236 233 			       enum drm_mm_allocator_flags aflags)
···
249 246 
250 247 static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
251 248 				       struct drm_mm_node *node,
252 -				       unsigned long size, unsigned alignment,
249 +				       u64 size, unsigned alignment,
253 250 				       unsigned long color,
254 -				       unsigned long start, unsigned long end,
251 +				       u64 start, u64 end,
255 252 				       enum drm_mm_allocator_flags flags)
256 253 {
257 254 	struct drm_mm *mm = hole_node->mm;
258 -	unsigned long hole_start = drm_mm_hole_node_start(hole_node);
259 -	unsigned long hole_end = drm_mm_hole_node_end(hole_node);
260 -	unsigned long adj_start = hole_start;
261 -	unsigned long adj_end = hole_end;
255 +	u64 hole_start = drm_mm_hole_node_start(hole_node);
256 +	u64 hole_end = drm_mm_hole_node_end(hole_node);
257 +	u64 adj_start = hole_start;
258 +	u64 adj_end = hole_end;
262 259 
263 260 	BUG_ON(!hole_node->hole_follows || node->allocated);
264 261 
···
274 271 		mm->color_adjust(hole_node, color, &adj_start, &adj_end);
275 272 
276 273 	if (alignment) {
277 -		unsigned tmp = adj_start % alignment;
278 -		if (tmp) {
274 +		u64 tmp = adj_start;
275 +		unsigned rem;
276 +
277 +		rem = do_div(tmp, alignment);
278 +		if (rem) {
279 279 			if (flags & DRM_MM_CREATE_TOP)
280 -				adj_start -= tmp;
280 +				adj_start -= rem;
281 281 			else
282 -				adj_start += alignment - tmp;
282 +				adj_start += alignment - rem;
283 283 		}
284 284 	}
285 285 
···
330 324  * 0 on success, -ENOSPC if there's no suitable hole.
331 325  */
332 326 int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *node,
333 -					unsigned long size, unsigned alignment,
327 +					u64 size, unsigned alignment,
334 328 					unsigned long color,
335 -					unsigned long start, unsigned long end,
329 +					u64 start, u64 end,
336 330 					enum drm_mm_search_flags sflags,
337 331 					enum drm_mm_allocator_flags aflags)
338 332 {
···
393 387 }
394 388 EXPORT_SYMBOL(drm_mm_remove_node);
395 389 
396 -static int check_free_hole(unsigned long start, unsigned long end,
397 -			   unsigned long size, unsigned alignment)
390 +static int check_free_hole(u64 start, u64 end, u64 size, unsigned alignment)
398 391 {
399 392 	if (end - start < size)
400 393 		return 0;
401 394 
402 395 	if (alignment) {
403 -		unsigned tmp = start % alignment;
404 -		if (tmp)
405 -			start += alignment - tmp;
396 +		u64 tmp = start;
397 +		unsigned rem;
398 +
399 +		rem = do_div(tmp, alignment);
400 +		if (rem)
401 +			start += alignment - rem;
406 402 	}
407 403 
408 404 	return end >= start + size;
409 405 }
410 406 
411 407 static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
412 -						      unsigned long size,
408 +						      u64 size,
413 409 						      unsigned alignment,
414 410 						      unsigned long color,
415 411 						      enum drm_mm_search_flags flags)
416 412 {
417 413 	struct drm_mm_node *entry;
418 414 	struct drm_mm_node *best;
419 -	unsigned long adj_start;
420 -	unsigned long adj_end;
421 -	unsigned long best_size;
415 +	u64 adj_start;
416 +	u64 adj_end;
417 +	u64 best_size;
422 418 
423 419 	BUG_ON(mm->scanned_blocks);
424 420 
···
429 421 
430 422 	__drm_mm_for_each_hole(entry, mm, adj_start, adj_end,
431 423 			       flags & DRM_MM_SEARCH_BELOW) {
432 -		unsigned long hole_size = adj_end - adj_start;
424 +		u64 hole_size = adj_end - adj_start;
433 425 
434 426 		if (mm->color_adjust) {
435 427 			mm->color_adjust(entry, color, &adj_start, &adj_end);
···
453 445 }
454 446 
455 447 static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
456 -							unsigned long size,
448 +							u64 size,
457 449 							unsigned alignment,
458 450 							unsigned long color,
459 -							unsigned long start,
460 -							unsigned long end,
451 +							u64 start,
452 +							u64 end,
461 453 							enum drm_mm_search_flags flags)
462 454 {
463 455 	struct drm_mm_node *entry;
464 456 	struct drm_mm_node *best;
465 -	unsigned long adj_start;
466 -	unsigned long adj_end;
467 -	unsigned long best_size;
457 +	u64 adj_start;
458 +	u64 adj_end;
459 +	u64 best_size;
468 460 
469 461 	BUG_ON(mm->scanned_blocks);
470 462 
···
473 465 
474 466 	__drm_mm_for_each_hole(entry, mm, adj_start, adj_end,
475 467 			       flags & DRM_MM_SEARCH_BELOW) {
476 -		unsigned long hole_size = adj_end - adj_start;
468 +		u64 hole_size = adj_end - adj_start;
477 469 
478 470 		if (adj_start < start)
479 471 			adj_start = start;
···
569 561  * adding/removing nodes to/from the scan list are allowed.
570 562  */
571 563 void drm_mm_init_scan(struct drm_mm *mm,
572 -		      unsigned long size,
564 +		      u64 size,
573 565 		      unsigned alignment,
574 566 		      unsigned long color)
575 567 {
···
602 594  * adding/removing nodes to/from the scan list are allowed.
603 595  */
604 596 void drm_mm_init_scan_with_range(struct drm_mm *mm,
605 -				 unsigned long size,
597 +				 u64 size,
606 598 				 unsigned alignment,
607 599 				 unsigned long color,
608 -				 unsigned long start,
609 -				 unsigned long end)
600 +				 u64 start,
601 +				 u64 end)
610 602 {
611 603 	mm->scan_color = color;
612 604 	mm->scan_alignment = alignment;
···
635 627 {
636 628 	struct drm_mm *mm = node->mm;
637 629 	struct drm_mm_node *prev_node;
638 -	unsigned long hole_start, hole_end;
639 -	unsigned long adj_start, adj_end;
630 +	u64 hole_start, hole_end;
631 +	u64 adj_start, adj_end;
640 632 
641 633 	mm->scanned_blocks++;
642 634 
···
739 731  *
740 732  * Note that @mm must be cleared to 0 before calling this function.
741 733  */
742 -void drm_mm_init(struct drm_mm * mm, unsigned long start, unsigned long size)
734 +void drm_mm_init(struct drm_mm * mm, u64 start, u64 size)
743 735 {
744 736 	INIT_LIST_HEAD(&mm->hole_stack);
745 737 	mm->scanned_blocks = 0;
···
774 766 }
775 767 EXPORT_SYMBOL(drm_mm_takedown);
776 768 
777 -static unsigned long drm_mm_debug_hole(struct drm_mm_node *entry,
778 -				       const char *prefix)
769 +static u64 drm_mm_debug_hole(struct drm_mm_node *entry,
770 +			     const char *prefix)
779 771 {
780 -	unsigned long hole_start, hole_end, hole_size;
772 +	u64 hole_start, hole_end, hole_size;
781 773 
782 774 	if (entry->hole_follows) {
783 775 		hole_start = drm_mm_hole_node_start(entry);
784 776 		hole_end = drm_mm_hole_node_end(entry);
785 777 		hole_size = hole_end - hole_start;
786 -		printk(KERN_DEBUG "%s 0x%08lx-0x%08lx: %8lu: free\n",
787 -			prefix, hole_start, hole_end,
788 -			hole_size);
778 +		pr_debug("%s %#llx-%#llx: %llu: free\n", prefix, hole_start,
779 +			 hole_end, hole_size);
789 780 		return hole_size;
790 781 	}
791 782 
···
799 792 void drm_mm_debug_table(struct drm_mm *mm, const char *prefix)
800 793 {
801 794 	struct drm_mm_node *entry;
802 -	unsigned long total_used = 0, total_free = 0, total = 0;
795 +	u64 total_used = 0, total_free = 0, total = 0;
803 796 
804 797 	total_free += drm_mm_debug_hole(&mm->head_node, prefix);
805 798 
806 799 	drm_mm_for_each_node(entry, mm) {
807 -		printk(KERN_DEBUG "%s 0x%08lx-0x%08lx: %8lu: used\n",
808 -			prefix, entry->start, entry->start + entry->size,
809 -			entry->size);
800 +		pr_debug("%s %#llx-%#llx: %llu: used\n", prefix, entry->start,
801 +			 entry->start + entry->size, entry->size);
810 802 		total_used += entry->size;
811 803 		total_free += drm_mm_debug_hole(entry, prefix);
812 804 	}
813 805 	total = total_free + total_used;
814 806 
815 -	printk(KERN_DEBUG "%s total: %lu, used %lu free %lu\n", prefix, total,
816 -		total_used, total_free);
807 +	pr_debug("%s total: %llu, used %llu free %llu\n", prefix, total,
808 +		 total_used, total_free);
817 809 }
818 810 EXPORT_SYMBOL(drm_mm_debug_table);
819 811 
820 812 #if defined(CONFIG_DEBUG_FS)
821 -static unsigned long drm_mm_dump_hole(struct seq_file *m, struct drm_mm_node *entry)
813 +static u64 drm_mm_dump_hole(struct seq_file *m, struct drm_mm_node *entry)
822 814 {
823 -	unsigned long hole_start, hole_end, hole_size;
815 +	u64 hole_start, hole_end, hole_size;
824 816 
825 817 	if (entry->hole_follows) {
826 818 		hole_start = drm_mm_hole_node_start(entry);
827 819 		hole_end = drm_mm_hole_node_end(entry);
828 820 		hole_size = hole_end - hole_start;
829 -		seq_printf(m, "0x%08lx-0x%08lx: 0x%08lx: free\n",
830 -				hole_start, hole_end, hole_size);
821 +		seq_printf(m, "%#llx-%#llx: %llu: free\n", hole_start,
822 +			   hole_end, hole_size);
831 823 		return hole_size;
832 824 	}
833 825 
···
841 835 int drm_mm_dump_table(struct seq_file *m, struct drm_mm *mm)
842 836 {
843 837 	struct drm_mm_node *entry;
844 -	unsigned long total_used = 0, total_free = 0, total = 0;
838 +	u64 total_used = 0, total_free = 0, total = 0;
845 839 
846 840 	total_free += drm_mm_dump_hole(m, &mm->head_node);
847 841 
848 842 	drm_mm_for_each_node(entry, mm) {
849 -		seq_printf(m, "0x%08lx-0x%08lx: 0x%08lx: used\n",
850 -				entry->start, entry->start + entry->size,
851 -				entry->size);
843 +		seq_printf(m, "%#016llx-%#016llx: %llu: used\n", entry->start,
844 +			   entry->start + entry->size, entry->size);
852 845 		total_used += entry->size;
853 846 		total_free += drm_mm_dump_hole(m, entry);
854 847 	}
855 848 	total = total_free + total_used;
856 849 
857 -	seq_printf(m, "total: %lu, used %lu free %lu\n", total, total_used, total_free);
850 +	seq_printf(m, "total: %llu, used %llu free %llu\n", total,
851 +		   total_used, total_free);
858 852 	return 0;
859 853 }
860 854 EXPORT_SYMBOL(drm_mm_dump_table);
+1
drivers/gpu/drm/drm_probe_helper.c
···
174 174 		struct edid *edid = (struct edid *) connector->edid_blob_ptr->data;
175 175 
176 176 		count = drm_add_edid_modes(connector, edid);
177 +		drm_edid_to_eld(connector, edid);
177 178 	} else
178 179 		count = (*connector_funcs->get_modes)(connector);
179 180 }
+1 -1
drivers/gpu/drm/exynos/Kconfig
···
50 50 
51 51 config DRM_EXYNOS_DP
52 52 	bool "EXYNOS DRM DP driver support"
53 -	depends on (DRM_EXYNOS_FIMD || DRM_EXYNOS7DECON) && ARCH_EXYNOS && (DRM_PTN3460=n || DRM_PTN3460=y || DRM_PTN3460=DRM_EXYNOS)
53 +	depends on (DRM_EXYNOS_FIMD || DRM_EXYNOS7_DECON) && ARCH_EXYNOS && (DRM_PTN3460=n || DRM_PTN3460=y || DRM_PTN3460=DRM_EXYNOS)
54 54 	default DRM_EXYNOS
55 55 	select DRM_PANEL
56 56 	help
+2 -2
drivers/gpu/drm/exynos/exynos7_drm_decon.c
···
888 888 	of_node_put(i80_if_timings);
889 889 
890 890 	ctx->regs = of_iomap(dev->of_node, 0);
891 -	if (IS_ERR(ctx->regs)) {
892 -		ret = PTR_ERR(ctx->regs);
891 +	if (!ctx->regs) {
892 +		ret = -ENOMEM;
893 893 		goto err_del_component;
894 894 	}
895 895 
-245
drivers/gpu/drm/exynos/exynos_drm_connector.c
···
1 -/*
2 - * Copyright (c) 2011 Samsung Electronics Co., Ltd.
3 - * Authors:
4 - *	Inki Dae <inki.dae@samsung.com>
5 - *	Joonyoung Shim <jy0922.shim@samsung.com>
6 - *	Seung-Woo Kim <sw0312.kim@samsung.com>
7 - *
8 - * This program is free software; you can redistribute it and/or modify it
9 - * under the terms of the GNU General Public License as published by the
10 - * Free Software Foundation; either version 2 of the License, or (at your
11 - * option) any later version.
12 - */
13 -
14 -#include <drm/drmP.h>
15 -#include <drm/drm_crtc_helper.h>
16 -
17 -#include <drm/exynos_drm.h>
18 -#include "exynos_drm_drv.h"
19 -#include "exynos_drm_encoder.h"
20 -#include "exynos_drm_connector.h"
21 -
22 -#define to_exynos_connector(x)	container_of(x, struct exynos_drm_connector,\
23 -				drm_connector)
24 -
25 -struct exynos_drm_connector {
26 -	struct drm_connector	drm_connector;
27 -	uint32_t		encoder_id;
28 -	struct exynos_drm_display *display;
29 -};
30 -
31 -static int exynos_drm_connector_get_modes(struct drm_connector *connector)
32 -{
33 -	struct exynos_drm_connector *exynos_connector =
34 -					to_exynos_connector(connector);
35 -	struct exynos_drm_display *display = exynos_connector->display;
36 -	struct edid *edid = NULL;
37 -	unsigned int count = 0;
38 -	int ret;
39 -
40 -	/*
41 -	 * if get_edid() exists then get_edid() callback of hdmi side
42 -	 * is called to get edid data through i2c interface else
43 -	 * get timing from the FIMD driver(display controller).
44 -	 *
45 -	 * P.S. in case of lcd panel, count is always 1 if success
46 -	 * because lcd panel has only one mode.
47 -	 */
48 -	if (display->ops->get_edid) {
49 -		edid = display->ops->get_edid(display, connector);
50 -		if (IS_ERR_OR_NULL(edid)) {
51 -			ret = PTR_ERR(edid);
52 -			edid = NULL;
53 -			DRM_ERROR("Panel operation get_edid failed %d\n", ret);
54 -			goto out;
55 -		}
56 -
57 -		count = drm_add_edid_modes(connector, edid);
58 -		if (!count) {
59 -			DRM_ERROR("Add edid modes failed %d\n", count);
60 -			goto out;
61 -		}
62 -
63 -		drm_mode_connector_update_edid_property(connector, edid);
64 -	} else {
65 -		struct exynos_drm_panel_info *panel;
66 -		struct drm_display_mode *mode = drm_mode_create(connector->dev);
67 -		if (!mode) {
68 -			DRM_ERROR("failed to create a new display mode.\n");
69 -			return 0;
70 -		}
71 -
72 -		if (display->ops->get_panel)
73 -			panel = display->ops->get_panel(display);
74 -		else {
75 -			drm_mode_destroy(connector->dev, mode);
76 -			return 0;
77 -		}
78 -
79 -		drm_display_mode_from_videomode(&panel->vm, mode);
80 -		mode->width_mm = panel->width_mm;
81 -		mode->height_mm = panel->height_mm;
82 -		connector->display_info.width_mm = mode->width_mm;
83 -		connector->display_info.height_mm = mode->height_mm;
84 -
85 -		mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
86 -		drm_mode_set_name(mode);
87 -		drm_mode_probed_add(connector, mode);
88 -
89 -		count = 1;
90 -	}
91 -
92 -out:
93 -	kfree(edid);
94 -	return count;
95 -}
96 -
97 -static int exynos_drm_connector_mode_valid(struct drm_connector *connector,
98 -					    struct drm_display_mode *mode)
99 -{
100 -	struct exynos_drm_connector *exynos_connector =
101 -					to_exynos_connector(connector);
102 -	struct exynos_drm_display *display = exynos_connector->display;
103 -	int ret = MODE_BAD;
104 -
105 -	DRM_DEBUG_KMS("%s\n", __FILE__);
106 -
107 -	if (display->ops->check_mode)
108 -		if (!display->ops->check_mode(display, mode))
109 -			ret = MODE_OK;
110 -
111 -	return ret;
112 -}
113 -
114 -static struct drm_encoder *exynos_drm_best_encoder(
115 -		struct drm_connector *connector)
116 -{
117 -	struct drm_device *dev = connector->dev;
118 -	struct exynos_drm_connector *exynos_connector =
119 -					to_exynos_connector(connector);
120 -	return drm_encoder_find(dev, exynos_connector->encoder_id);
121 -}
122 -
123 -static struct drm_connector_helper_funcs exynos_connector_helper_funcs = {
124 -	.get_modes	= exynos_drm_connector_get_modes,
125 -	.mode_valid	= exynos_drm_connector_mode_valid,
126 -	.best_encoder	= exynos_drm_best_encoder,
127 -};
128 -
129 -static int exynos_drm_connector_fill_modes(struct drm_connector *connector,
130 -				unsigned int max_width, unsigned int max_height)
131 -{
132 -	struct exynos_drm_connector *exynos_connector =
133 -					to_exynos_connector(connector);
134 -	struct exynos_drm_display *display = exynos_connector->display;
135 -	unsigned int width, height;
136 -
137 -	width = max_width;
138 -	height = max_height;
139 -
140 -	/*
141 -	 * if specific driver want to find desired_mode using maxmum
142 -	 * resolution then get max width and height from that driver.
143 -	 */
144 -	if (display->ops->get_max_resol)
145 -		display->ops->get_max_resol(display, &width, &height);
146 -
147 -	return drm_helper_probe_single_connector_modes(connector, width,
148 -							height);
149 -}
150 -
151 -/* get detection status of display device. */
152 -static enum drm_connector_status
153 -exynos_drm_connector_detect(struct drm_connector *connector, bool force)
154 -{
155 -	struct exynos_drm_connector *exynos_connector =
156 -					to_exynos_connector(connector);
157 -	struct exynos_drm_display *display = exynos_connector->display;
158 -	enum drm_connector_status status = connector_status_disconnected;
159 -
160 -	if (display->ops->is_connected) {
161 -		if (display->ops->is_connected(display))
162 -			status = connector_status_connected;
163 -		else
164 -			status = connector_status_disconnected;
165 -	}
166 -
167 -	return status;
168 -}
169 -
170 -static void exynos_drm_connector_destroy(struct drm_connector *connector)
171 -{
172 -	struct exynos_drm_connector *exynos_connector =
173 -					to_exynos_connector(connector);
174 -
175 -	drm_connector_unregister(connector);
176 -	drm_connector_cleanup(connector);
177 -	kfree(exynos_connector);
178 -}
179 -
180 -static struct drm_connector_funcs exynos_connector_funcs = {
181 -	.dpms		= drm_helper_connector_dpms,
182 -	.fill_modes	= exynos_drm_connector_fill_modes,
183 -	.detect		= exynos_drm_connector_detect,
184 -	.destroy	= exynos_drm_connector_destroy,
185 -};
186 -
187 -struct drm_connector *exynos_drm_connector_create(struct drm_device *dev,
188 -						   struct drm_encoder *encoder)
189 -{
190 -	struct exynos_drm_connector *exynos_connector;
191 -	struct exynos_drm_display *display = exynos_drm_get_display(encoder);
192 -	struct drm_connector *connector;
193 -	int type;
194 -	int err;
195 -
196 -	exynos_connector = kzalloc(sizeof(*exynos_connector), GFP_KERNEL);
197 -	if (!exynos_connector)
198 -		return NULL;
199 -
200 -	connector = &exynos_connector->drm_connector;
201 -
202 -	switch (display->type) {
203 -	case EXYNOS_DISPLAY_TYPE_HDMI:
204 -		type = DRM_MODE_CONNECTOR_HDMIA;
205 -		connector->interlace_allowed = true;
206 -		connector->polled = DRM_CONNECTOR_POLL_HPD;
207 -		break;
208 -	case EXYNOS_DISPLAY_TYPE_VIDI:
209 -		type = DRM_MODE_CONNECTOR_VIRTUAL;
210 -		connector->polled = DRM_CONNECTOR_POLL_HPD;
211 -		break;
212 -	default:
213 -		type = DRM_MODE_CONNECTOR_Unknown;
214 -		break;
215 -	}
216 -
217 -	drm_connector_init(dev, connector, &exynos_connector_funcs, type);
218 -	drm_connector_helper_add(connector, &exynos_connector_helper_funcs);
219 -
220 -	err = drm_connector_register(connector);
221 -	if (err)
222 -		goto err_connector;
223 -
224 -	exynos_connector->encoder_id = encoder->base.id;
225 -	exynos_connector->display = display;
226 -	connector->dpms = DRM_MODE_DPMS_OFF;
227 -	connector->encoder = encoder;
228 -
229 -	err = drm_mode_connector_attach_encoder(connector, encoder);
230 -	if (err) {
231 -		DRM_ERROR("failed to attach a connector to a encoder\n");
232 -		goto err_sysfs;
233 -	}
234 -
235 -	DRM_DEBUG_KMS("connector has been created\n");
236 -
237 -	return connector;
238 -
239 -err_sysfs:
240 -	drm_connector_unregister(connector);
241 -err_connector:
242 -	drm_connector_cleanup(connector);
243 -	kfree(exynos_connector);
244 -	return NULL;
245 -}
-20
drivers/gpu/drm/exynos/exynos_drm_connector.h
··· 1 - /* 2 - * Copyright (c) 2011 Samsung Electronics Co., Ltd. 3 - * Authors: 4 - * Inki Dae <inki.dae@samsung.com> 5 - * Joonyoung Shim <jy0922.shim@samsung.com> 6 - * Seung-Woo Kim <sw0312.kim@samsung.com> 7 - * 8 - * This program is free software; you can redistribute it and/or modify it 9 - * under the terms of the GNU General Public License as published by the 10 - * Free Software Foundation; either version 2 of the License, or (at your 11 - * option) any later version. 12 - */ 13 - 14 - #ifndef _EXYNOS_DRM_CONNECTOR_H_ 15 - #define _EXYNOS_DRM_CONNECTOR_H_ 16 - 17 - struct drm_connector *exynos_drm_connector_create(struct drm_device *dev, 18 - struct drm_encoder *encoder); 19 - 20 - #endif
+16 -21
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 147 147 unsigned int ovl_height; 148 148 unsigned int fb_width; 149 149 unsigned int fb_height; 150 + unsigned int fb_pitch; 150 151 unsigned int bpp; 151 152 unsigned int pixel_format; 152 153 dma_addr_t dma_addr; ··· 285 284 } 286 285 } 287 286 288 - static int fimd_ctx_initialize(struct fimd_context *ctx, 287 + static int fimd_iommu_attach_devices(struct fimd_context *ctx, 289 288 struct drm_device *drm_dev) 290 289 { 291 - struct exynos_drm_private *priv; 292 - priv = drm_dev->dev_private; 293 - 294 - ctx->drm_dev = drm_dev; 295 - ctx->pipe = priv->pipe++; 296 290 297 291 /* attach this sub driver to iommu mapping if supported. */ 298 292 if (is_drm_iommu_supported(ctx->drm_dev)) { ··· 309 313 return 0; 310 314 } 311 315 312 - static void fimd_ctx_remove(struct fimd_context *ctx) 316 + static void fimd_iommu_detach_devices(struct fimd_context *ctx) 313 317 { 314 318 /* detach this sub driver from iommu mapping if supported. */ 315 319 if (is_drm_iommu_supported(ctx->drm_dev)) ··· 533 537 win_data->offset_y = plane->crtc_y; 534 538 win_data->ovl_width = plane->crtc_width; 535 539 win_data->ovl_height = plane->crtc_height; 540 + win_data->fb_pitch = plane->pitch; 536 541 win_data->fb_width = plane->fb_width; 537 542 win_data->fb_height = plane->fb_height; 538 543 win_data->dma_addr = plane->dma_addr[0] + offset; 539 544 win_data->bpp = plane->bpp; 540 545 win_data->pixel_format = plane->pixel_format; 541 - win_data->buf_offsize = (plane->fb_width - plane->crtc_width) * 542 - (plane->bpp >> 3); 546 + win_data->buf_offsize = 547 + plane->pitch - (plane->crtc_width * (plane->bpp >> 3)); 543 548 win_data->line_size = plane->crtc_width * (plane->bpp >> 3); 544 549 545 550 DRM_DEBUG_KMS("offset_x = %d, offset_y = %d\n", ··· 706 709 writel(val, ctx->regs + VIDWx_BUF_START(win, 0)); 707 710 708 711 /* buffer end address */ 709 - size = win_data->fb_width * win_data->ovl_height * (win_data->bpp >> 3); 712 + size = win_data->fb_pitch * win_data->ovl_height * 
(win_data->bpp >> 3); 710 713 val = (unsigned long)(win_data->dma_addr + size); 711 714 writel(val, ctx->regs + VIDWx_BUF_END(win, 0)); 712 715 ··· 1053 1056 { 1054 1057 struct fimd_context *ctx = dev_get_drvdata(dev); 1055 1058 struct drm_device *drm_dev = data; 1059 + struct exynos_drm_private *priv = drm_dev->dev_private; 1056 1060 int ret; 1057 1061 1058 - ret = fimd_ctx_initialize(ctx, drm_dev); 1059 - if (ret) { 1060 - DRM_ERROR("fimd_ctx_initialize failed.\n"); 1061 - return ret; 1062 - } 1062 + ctx->drm_dev = drm_dev; 1063 + ctx->pipe = priv->pipe++; 1063 1064 1064 1065 ctx->crtc = exynos_drm_crtc_create(drm_dev, ctx->pipe, 1065 1066 EXYNOS_DISPLAY_TYPE_LCD, 1066 1067 &fimd_crtc_ops, ctx); 1067 - if (IS_ERR(ctx->crtc)) { 1068 - fimd_ctx_remove(ctx); 1069 - return PTR_ERR(ctx->crtc); 1070 - } 1071 1068 1072 1069 if (ctx->display) 1073 1070 exynos_drm_create_enc_conn(drm_dev, ctx->display); 1071 + 1072 + ret = fimd_iommu_attach_devices(ctx, drm_dev); 1073 + if (ret) 1074 + return ret; 1074 1075 1075 1076 return 0; 1076 1077 ··· 1081 1086 1082 1087 fimd_dpms(ctx->crtc, DRM_MODE_DPMS_OFF); 1083 1088 1089 + fimd_iommu_detach_devices(ctx); 1090 + 1084 1091 if (ctx->display) 1085 1092 exynos_dpi_remove(ctx->display); 1086 - 1087 - fimd_ctx_remove(ctx); 1088 1093 } 1089 1094 1090 1095 static const struct component_ops fimd_component_ops = {
+1 -1
drivers/gpu/drm/exynos/exynos_drm_plane.c
··· 175 175 struct exynos_drm_plane *exynos_plane = to_exynos_plane(plane); 176 176 struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(plane->crtc); 177 177 178 - if (exynos_crtc->ops->win_disable) 178 + if (exynos_crtc && exynos_crtc->ops->win_disable) 179 179 exynos_crtc->ops->win_disable(exynos_crtc, 180 180 exynos_plane->zpos); 181 181
+10 -7
drivers/gpu/drm/exynos/exynos_mixer.c
··· 55 55 unsigned int fb_x; 56 56 unsigned int fb_y; 57 57 unsigned int fb_width; 58 + unsigned int fb_pitch; 58 59 unsigned int fb_height; 59 60 unsigned int src_width; 60 61 unsigned int src_height; ··· 439 438 } else { 440 439 luma_addr[0] = win_data->dma_addr; 441 440 chroma_addr[0] = win_data->dma_addr 442 - + (win_data->fb_width * win_data->fb_height); 441 + + (win_data->fb_pitch * win_data->fb_height); 443 442 } 444 443 445 444 if (win_data->scan_flags & DRM_MODE_FLAG_INTERLACE) { ··· 448 447 luma_addr[1] = luma_addr[0] + 0x40; 449 448 chroma_addr[1] = chroma_addr[0] + 0x40; 450 449 } else { 451 - luma_addr[1] = luma_addr[0] + win_data->fb_width; 452 - chroma_addr[1] = chroma_addr[0] + win_data->fb_width; 450 + luma_addr[1] = luma_addr[0] + win_data->fb_pitch; 451 + chroma_addr[1] = chroma_addr[0] + win_data->fb_pitch; 453 452 } 454 453 } else { 455 454 ctx->interlace = false; ··· 470 469 vp_reg_writemask(res, VP_MODE, val, VP_MODE_FMT_MASK); 471 470 472 471 /* setting size of input image */ 473 - vp_reg_write(res, VP_IMG_SIZE_Y, VP_IMG_HSIZE(win_data->fb_width) | 472 + vp_reg_write(res, VP_IMG_SIZE_Y, VP_IMG_HSIZE(win_data->fb_pitch) | 474 473 VP_IMG_VSIZE(win_data->fb_height)); 475 474 /* chroma height has to reduced by 2 to avoid chroma distorions */ 476 - vp_reg_write(res, VP_IMG_SIZE_C, VP_IMG_HSIZE(win_data->fb_width) | 475 + vp_reg_write(res, VP_IMG_SIZE_C, VP_IMG_HSIZE(win_data->fb_pitch) | 477 476 VP_IMG_VSIZE(win_data->fb_height / 2)); 478 477 479 478 vp_reg_write(res, VP_SRC_WIDTH, win_data->src_width); ··· 560 559 /* converting dma address base and source offset */ 561 560 dma_addr = win_data->dma_addr 562 561 + (win_data->fb_x * win_data->bpp >> 3) 563 - + (win_data->fb_y * win_data->fb_width * win_data->bpp >> 3); 562 + + (win_data->fb_y * win_data->fb_pitch); 564 563 src_x_offset = 0; 565 564 src_y_offset = 0; 566 565 ··· 577 576 MXR_GRP_CFG_FORMAT_VAL(fmt), MXR_GRP_CFG_FORMAT_MASK); 578 577 579 578 /* setup geometry */ 580 - 
mixer_reg_write(res, MXR_GRAPHIC_SPAN(win), win_data->fb_width); 579 + mixer_reg_write(res, MXR_GRAPHIC_SPAN(win), 580 + win_data->fb_pitch / (win_data->bpp >> 3)); 581 581 582 582 /* setup display size */ 583 583 if (ctx->mxr_ver == MXR_VER_128_0_0_184 && ··· 963 961 win_data->fb_y = plane->fb_y; 964 962 win_data->fb_width = plane->fb_width; 965 963 win_data->fb_height = plane->fb_height; 964 + win_data->fb_pitch = plane->pitch; 966 965 win_data->src_width = plane->src_width; 967 966 win_data->src_height = plane->src_height; 968 967
+2 -2
drivers/gpu/drm/i915/i915_debugfs.c
··· 152 152 seq_puts(m, " (pp"); 153 153 else 154 154 seq_puts(m, " (g"); 155 - seq_printf(m, "gtt offset: %08lx, size: %08lx, type: %u)", 155 + seq_printf(m, "gtt offset: %08llx, size: %08llx, type: %u)", 156 156 vma->node.start, vma->node.size, 157 157 vma->ggtt_view.type); 158 158 } 159 159 if (obj->stolen) 160 - seq_printf(m, " (stolen: %08lx)", obj->stolen->start); 160 + seq_printf(m, " (stolen: %08llx)", obj->stolen->start); 161 161 if (obj->pin_mappable || obj->fault_mappable) { 162 162 char s[3], *t = s; 163 163 if (obj->pin_mappable)
+25 -5
drivers/gpu/drm/i915/i915_drv.c
··· 622 622 return 0; 623 623 } 624 624 625 - static int i915_drm_suspend_late(struct drm_device *drm_dev) 625 + static int i915_drm_suspend_late(struct drm_device *drm_dev, bool hibernation) 626 626 { 627 627 struct drm_i915_private *dev_priv = drm_dev->dev_private; 628 628 int ret; ··· 636 636 } 637 637 638 638 pci_disable_device(drm_dev->pdev); 639 - pci_set_power_state(drm_dev->pdev, PCI_D3hot); 639 + /* 640 + * During hibernation on some GEN4 platforms the BIOS may try to access 641 + * the device even though it's already in D3 and hang the machine. So 642 + * leave the device in D0 on those platforms and hope the BIOS will 643 + * power down the device properly. Platforms where this was seen: 644 + * Lenovo Thinkpad X301, X61s 645 + */ 646 + if (!(hibernation && 647 + drm_dev->pdev->subsystem_vendor == PCI_VENDOR_ID_LENOVO && 648 + INTEL_INFO(dev_priv)->gen == 4)) 649 + pci_set_power_state(drm_dev->pdev, PCI_D3hot); 640 650 641 651 return 0; 642 652 } ··· 672 662 if (error) 673 663 return error; 674 664 675 - return i915_drm_suspend_late(dev); 665 + return i915_drm_suspend_late(dev, false); 676 666 } 677 667 678 668 static int i915_drm_resume(struct drm_device *dev) ··· 960 950 if (drm_dev->switch_power_state == DRM_SWITCH_POWER_OFF) 961 951 return 0; 962 952 963 - return i915_drm_suspend_late(drm_dev); 953 + return i915_drm_suspend_late(drm_dev, false); 954 + } 955 + 956 + static int i915_pm_poweroff_late(struct device *dev) 957 + { 958 + struct drm_device *drm_dev = dev_to_i915(dev)->dev; 959 + 960 + if (drm_dev->switch_power_state == DRM_SWITCH_POWER_OFF) 961 + return 0; 962 + 963 + return i915_drm_suspend_late(drm_dev, true); 964 964 } 965 965 966 966 static int i915_pm_resume_early(struct device *dev) ··· 1540 1520 .thaw_early = i915_pm_resume_early, 1541 1521 .thaw = i915_pm_resume, 1542 1522 .poweroff = i915_pm_suspend, 1543 - .poweroff_late = i915_pm_suspend_late, 1523 + .poweroff_late = i915_pm_poweroff_late, 1544 1524 .restore_early = 
i915_pm_resume_early, 1545 1525 .restore = i915_pm_resume, 1546 1526
+41 -22
drivers/gpu/drm/i915/i915_gem.c
··· 2737 2737 2738 2738 WARN_ON(i915_verify_lists(ring->dev)); 2739 2739 2740 - /* Move any buffers on the active list that are no longer referenced 2741 - * by the ringbuffer to the flushing/inactive lists as appropriate, 2742 - * before we free the context associated with the requests. 2740 + /* Retire requests first as we use it above for the early return. 2741 + * If we retire requests last, we may use a later seqno and so clear 2742 + * the requests lists without clearing the active list, leading to 2743 + * confusion. 2743 2744 */ 2744 - while (!list_empty(&ring->active_list)) { 2745 - struct drm_i915_gem_object *obj; 2746 - 2747 - obj = list_first_entry(&ring->active_list, 2748 - struct drm_i915_gem_object, 2749 - ring_list); 2750 - 2751 - if (!i915_gem_request_completed(obj->last_read_req, true)) 2752 - break; 2753 - 2754 - i915_gem_object_move_to_inactive(obj); 2755 - } 2756 - 2757 - 2758 2745 while (!list_empty(&ring->request_list)) { 2759 2746 struct drm_i915_gem_request *request; 2760 2747 struct intel_ringbuffer *ringbuf; ··· 2774 2787 ringbuf->last_retired_head = request->postfix; 2775 2788 2776 2789 i915_gem_free_request(request); 2790 + } 2791 + 2792 + /* Move any buffers on the active list that are no longer referenced 2793 + * by the ringbuffer to the flushing/inactive lists as appropriate, 2794 + * before we free the context associated with the requests. 
2795 + */ 2796 + while (!list_empty(&ring->active_list)) { 2797 + struct drm_i915_gem_object *obj; 2798 + 2799 + obj = list_first_entry(&ring->active_list, 2800 + struct drm_i915_gem_object, 2801 + ring_list); 2802 + 2803 + if (!i915_gem_request_completed(obj->last_read_req, true)) 2804 + break; 2805 + 2806 + i915_gem_object_move_to_inactive(obj); 2777 2807 } 2778 2808 2779 2809 if (unlikely(ring->trace_irq_req && ··· 2940 2936 req = obj->last_read_req; 2941 2937 2942 2938 /* Do this after OLR check to make sure we make forward progress polling 2943 - * on this IOCTL with a timeout <=0 (like busy ioctl) 2939 + * on this IOCTL with a timeout == 0 (like busy ioctl) 2944 2940 */ 2945 - if (args->timeout_ns <= 0) { 2941 + if (args->timeout_ns == 0) { 2946 2942 ret = -ETIME; 2947 2943 goto out; 2948 2944 } ··· 2952 2948 i915_gem_request_reference(req); 2953 2949 mutex_unlock(&dev->struct_mutex); 2954 2950 2955 - ret = __i915_wait_request(req, reset_counter, true, &args->timeout_ns, 2951 + ret = __i915_wait_request(req, reset_counter, true, 2952 + args->timeout_ns > 0 ? 
&args->timeout_ns : NULL, 2956 2953 file->driver_priv); 2957 2954 mutex_lock(&dev->struct_mutex); 2958 2955 i915_gem_request_unreference(req); ··· 4797 4792 if (INTEL_INFO(dev)->gen < 6 && !intel_enable_gtt()) 4798 4793 return -EIO; 4799 4794 4795 + /* Double layer security blanket, see i915_gem_init() */ 4796 + intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL); 4797 + 4800 4798 if (dev_priv->ellc_size) 4801 4799 I915_WRITE(HSW_IDICR, I915_READ(HSW_IDICR) | IDIHASHMSK(0xf)); 4802 4800 ··· 4832 4824 for_each_ring(ring, dev_priv, i) { 4833 4825 ret = ring->init_hw(ring); 4834 4826 if (ret) 4835 - return ret; 4827 + goto out; 4836 4828 } 4837 4829 4838 4830 for (i = 0; i < NUM_L3_SLICES(dev); i++) ··· 4849 4841 DRM_ERROR("Context enable failed %d\n", ret); 4850 4842 i915_gem_cleanup_ringbuffer(dev); 4851 4843 4852 - return ret; 4844 + goto out; 4853 4845 } 4854 4846 4847 + out: 4848 + intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL); 4855 4849 return ret; 4856 4850 } 4857 4851 ··· 4887 4877 dev_priv->gt.stop_ring = intel_logical_ring_stop; 4888 4878 } 4889 4879 4880 + /* This is just a security blanket to placate dragons. 4881 + * On some systems, we very sporadically observe that the first TLBs 4882 + * used by the CS may be stale, despite us poking the TLB reset. If 4883 + * we hold the forcewake during initialisation these problems 4884 + * just magically go away. 4885 + */ 4886 + intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL); 4887 + 4890 4888 ret = i915_gem_init_userptr(dev); 4891 4889 if (ret) 4892 4890 goto out_unlock; ··· 4921 4903 } 4922 4904 4923 4905 out_unlock: 4906 + intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL); 4924 4907 mutex_unlock(&dev->struct_mutex); 4925 4908 4926 4909 return ret;
+1 -1
drivers/gpu/drm/i915/i915_gem_execbuffer.c
··· 1487 1487 goto err; 1488 1488 } 1489 1489 1490 - if (i915_needs_cmd_parser(ring)) { 1490 + if (i915_needs_cmd_parser(ring) && args->batch_len) { 1491 1491 batch_obj = i915_gem_execbuffer_parse(ring, 1492 1492 &shadow_exec_entry, 1493 1493 eb,
+3 -3
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 1145 1145 1146 1146 ppgtt->base.clear_range(&ppgtt->base, 0, ppgtt->base.total, true); 1147 1147 1148 - DRM_DEBUG_DRIVER("Allocated pde space (%ldM) at GTT entry: %lx\n", 1148 + DRM_DEBUG_DRIVER("Allocated pde space (%lldM) at GTT entry: %llx\n", 1149 1149 ppgtt->node.size >> 20, 1150 1150 ppgtt->node.start / PAGE_SIZE); 1151 1151 ··· 1713 1713 1714 1714 static void i915_gtt_color_adjust(struct drm_mm_node *node, 1715 1715 unsigned long color, 1716 - unsigned long *start, 1717 - unsigned long *end) 1716 + u64 *start, 1717 + u64 *end) 1718 1718 { 1719 1719 if (node->color != color) 1720 1720 *start += 4096;
+39 -7
drivers/gpu/drm/i915/intel_display.c
··· 37 37 #include <drm/i915_drm.h> 38 38 #include "i915_drv.h" 39 39 #include "i915_trace.h" 40 + #include <drm/drm_atomic.h> 40 41 #include <drm/drm_atomic_helper.h> 41 42 #include <drm/drm_dp_helper.h> 42 43 #include <drm/drm_crtc_helper.h> ··· 2417 2416 return false; 2418 2417 } 2419 2418 2419 + /* Update plane->state->fb to match plane->fb after driver-internal updates */ 2420 + static void 2421 + update_state_fb(struct drm_plane *plane) 2422 + { 2423 + if (plane->fb != plane->state->fb) 2424 + drm_atomic_set_fb_for_plane(plane->state, plane->fb); 2425 + } 2426 + 2420 2427 static void 2421 2428 intel_find_plane_obj(struct intel_crtc *intel_crtc, 2422 2429 struct intel_initial_plane_config *plane_config) ··· 2438 2429 if (!intel_crtc->base.primary->fb) 2439 2430 return; 2440 2431 2441 - if (intel_alloc_plane_obj(intel_crtc, plane_config)) 2432 + if (intel_alloc_plane_obj(intel_crtc, plane_config)) { 2433 + struct drm_plane *primary = intel_crtc->base.primary; 2434 + 2435 + primary->state->crtc = &intel_crtc->base; 2436 + primary->crtc = &intel_crtc->base; 2437 + update_state_fb(primary); 2438 + 2442 2439 return; 2440 + } 2443 2441 2444 2442 kfree(intel_crtc->base.primary->fb); 2445 2443 intel_crtc->base.primary->fb = NULL; ··· 2469 2453 continue; 2470 2454 2471 2455 if (i915_gem_obj_ggtt_offset(obj) == plane_config->base) { 2456 + struct drm_plane *primary = intel_crtc->base.primary; 2457 + 2472 2458 if (obj->tiling_mode != I915_TILING_NONE) 2473 2459 dev_priv->preserve_bios_swizzle = true; 2474 2460 2475 2461 drm_framebuffer_reference(c->primary->fb); 2476 - intel_crtc->base.primary->fb = c->primary->fb; 2462 + primary->fb = c->primary->fb; 2463 + primary->state->crtc = &intel_crtc->base; 2464 + primary->crtc = &intel_crtc->base; 2477 2465 obj->frontbuffer_bits |= INTEL_FRONTBUFFER_PRIMARY(intel_crtc->pipe); 2478 2466 break; 2479 2467 } 2480 2468 } 2469 + 2470 + update_state_fb(intel_crtc->base.primary); 2481 2471 } 2482 2472 2483 2473 static void 
i9xx_update_primary_plane(struct drm_crtc *crtc, ··· 6624 6602 struct drm_framebuffer *fb; 6625 6603 struct intel_framebuffer *intel_fb; 6626 6604 6605 + val = I915_READ(DSPCNTR(plane)); 6606 + if (!(val & DISPLAY_PLANE_ENABLE)) 6607 + return; 6608 + 6627 6609 intel_fb = kzalloc(sizeof(*intel_fb), GFP_KERNEL); 6628 6610 if (!intel_fb) { 6629 6611 DRM_DEBUG_KMS("failed to alloc fb\n"); ··· 6635 6609 } 6636 6610 6637 6611 fb = &intel_fb->base; 6638 - 6639 - val = I915_READ(DSPCNTR(plane)); 6640 6612 6641 6613 if (INTEL_INFO(dev)->gen >= 4) 6642 6614 if (val & DISPPLANE_TILED) ··· 7667 7643 fb = &intel_fb->base; 7668 7644 7669 7645 val = I915_READ(PLANE_CTL(pipe, 0)); 7646 + if (!(val & PLANE_CTL_ENABLE)) 7647 + goto error; 7648 + 7670 7649 if (val & PLANE_CTL_TILED_MASK) 7671 7650 plane_config->tiling = I915_TILING_X; 7672 7651 ··· 7757 7730 struct drm_framebuffer *fb; 7758 7731 struct intel_framebuffer *intel_fb; 7759 7732 7733 + val = I915_READ(DSPCNTR(pipe)); 7734 + if (!(val & DISPLAY_PLANE_ENABLE)) 7735 + return; 7736 + 7760 7737 intel_fb = kzalloc(sizeof(*intel_fb), GFP_KERNEL); 7761 7738 if (!intel_fb) { 7762 7739 DRM_DEBUG_KMS("failed to alloc fb\n"); ··· 7768 7737 } 7769 7738 7770 7739 fb = &intel_fb->base; 7771 - 7772 - val = I915_READ(DSPCNTR(pipe)); 7773 7740 7774 7741 if (INTEL_INFO(dev)->gen >= 4) 7775 7742 if (val & DISPPLANE_TILED) ··· 9745 9716 struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe]; 9746 9717 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 9747 9718 9748 - WARN_ON(!in_irq()); 9719 + WARN_ON(!in_interrupt()); 9749 9720 9750 9721 if (crtc == NULL) 9751 9722 return; ··· 9845 9816 drm_gem_object_reference(&obj->base); 9846 9817 9847 9818 crtc->primary->fb = fb; 9819 + update_state_fb(crtc->primary); 9848 9820 9849 9821 work->pending_flip_obj = obj; 9850 9822 ··· 9914 9884 cleanup_pending: 9915 9885 atomic_dec(&intel_crtc->unpin_work_count); 9916 9886 crtc->primary->fb = old_fb; 9887 + update_state_fb(crtc->primary); 9917 9888 
drm_gem_object_unreference(&work->old_fb_obj->base); 9918 9889 drm_gem_object_unreference(&obj->base); 9919 9890 mutex_unlock(&dev->struct_mutex); ··· 13749 13718 to_intel_crtc(c)->pipe); 13750 13719 drm_framebuffer_unreference(c->primary->fb); 13751 13720 c->primary->fb = NULL; 13721 + update_state_fb(c->primary); 13752 13722 } 13753 13723 } 13754 13724 mutex_unlock(&dev->struct_mutex);
+7 -11
drivers/gpu/drm/i915/intel_fifo_underrun.c
··· 282 282 return ret; 283 283 } 284 284 285 - static bool 286 - __cpu_fifo_underrun_reporting_enabled(struct drm_i915_private *dev_priv, 287 - enum pipe pipe) 288 - { 289 - struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe]; 290 - struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 291 - 292 - return !intel_crtc->cpu_fifo_underrun_disabled; 293 - } 294 - 295 285 /** 296 286 * intel_set_pch_fifo_underrun_reporting - set PCH fifo underrun reporting state 297 287 * @dev_priv: i915 device instance ··· 342 352 void intel_cpu_fifo_underrun_irq_handler(struct drm_i915_private *dev_priv, 343 353 enum pipe pipe) 344 354 { 355 + struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe]; 356 + 357 + /* We may be called too early in init, thanks BIOS! */ 358 + if (crtc == NULL) 359 + return; 360 + 345 361 /* GMCH can't disable fifo underruns, filter them. */ 346 362 if (HAS_GMCH_DISPLAY(dev_priv->dev) && 347 - !__cpu_fifo_underrun_reporting_enabled(dev_priv, pipe)) 363 + to_intel_crtc(crtc)->cpu_fifo_underrun_disabled) 348 364 return; 349 365 350 366 if (intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, false))
+2 -2
drivers/gpu/drm/i915/intel_sprite.c
··· 1322 1322 drm_modeset_lock_all(dev); 1323 1323 1324 1324 plane = drm_plane_find(dev, set->plane_id); 1325 - if (!plane) { 1325 + if (!plane || plane->type != DRM_PLANE_TYPE_OVERLAY) { 1326 1326 ret = -ENOENT; 1327 1327 goto out_unlock; 1328 1328 } ··· 1349 1349 drm_modeset_lock_all(dev); 1350 1350 1351 1351 plane = drm_plane_find(dev, get->plane_id); 1352 - if (!plane) { 1352 + if (!plane || plane->type != DRM_PLANE_TYPE_OVERLAY) { 1353 1353 ret = -ENOENT; 1354 1354 goto out_unlock; 1355 1355 }
+7 -1
drivers/gpu/drm/i915/intel_uncore.c
··· 1048 1048 1049 1049 /* We need to init first for ECOBUS access and then 1050 1050 * determine later if we want to reinit, in case of MT access is 1051 - * not working 1051 + * not working. In this stage we don't know which flavour this 1052 + * ivb is, so it is better to reset also the gen6 fw registers 1053 + * before the ecobus check. 1052 1054 */ 1055 + 1056 + __raw_i915_write32(dev_priv, FORCEWAKE, 0); 1057 + __raw_posting_read(dev_priv, ECOBUS); 1058 + 1053 1059 fw_domain_init(dev_priv, FW_DOMAIN_ID_RENDER, 1054 1060 FORCEWAKE_MT, FORCEWAKE_MT_ACK); 1055 1061
+31 -5
drivers/gpu/drm/imx/dw_hdmi-imx.c
··· 70 70 118800000, { 0x091c, 0x091c, 0x06dc }, 71 71 }, { 72 72 216000000, { 0x06dc, 0x0b5c, 0x091c }, 73 - } 73 + }, { 74 + ~0UL, { 0x0000, 0x0000, 0x0000 }, 75 + }, 74 76 }; 75 77 76 78 static const struct dw_hdmi_sym_term imx_sym_term[] = { ··· 138 136 .destroy = drm_encoder_cleanup, 139 137 }; 140 138 139 + static enum drm_mode_status imx6q_hdmi_mode_valid(struct drm_connector *con, 140 + struct drm_display_mode *mode) 141 + { 142 + if (mode->clock < 13500) 143 + return MODE_CLOCK_LOW; 144 + if (mode->clock > 266000) 145 + return MODE_CLOCK_HIGH; 146 + 147 + return MODE_OK; 148 + } 149 + 150 + static enum drm_mode_status imx6dl_hdmi_mode_valid(struct drm_connector *con, 151 + struct drm_display_mode *mode) 152 + { 153 + if (mode->clock < 13500) 154 + return MODE_CLOCK_LOW; 155 + if (mode->clock > 270000) 156 + return MODE_CLOCK_HIGH; 157 + 158 + return MODE_OK; 159 + } 160 + 141 161 static struct dw_hdmi_plat_data imx6q_hdmi_drv_data = { 142 - .mpll_cfg = imx_mpll_cfg, 143 - .cur_ctr = imx_cur_ctr, 144 - .sym_term = imx_sym_term, 145 - .dev_type = IMX6Q_HDMI, 162 + .mpll_cfg = imx_mpll_cfg, 163 + .cur_ctr = imx_cur_ctr, 164 + .sym_term = imx_sym_term, 165 + .dev_type = IMX6Q_HDMI, 166 + .mode_valid = imx6q_hdmi_mode_valid, 146 167 }; 147 168 148 169 static struct dw_hdmi_plat_data imx6dl_hdmi_drv_data = { ··· 173 148 .cur_ctr = imx_cur_ctr, 174 149 .sym_term = imx_sym_term, 175 150 .dev_type = IMX6DL_HDMI, 151 + .mode_valid = imx6dl_hdmi_mode_valid, 176 152 }; 177 153 178 154 static const struct of_device_id dw_hdmi_imx_dt_ids[] = {
+13 -15
drivers/gpu/drm/imx/imx-ldb.c
··· 163 163 { 164 164 struct imx_ldb_channel *imx_ldb_ch = enc_to_imx_ldb_ch(encoder); 165 165 struct imx_ldb *ldb = imx_ldb_ch->ldb; 166 - struct drm_display_mode *mode = &encoder->crtc->hwmode; 167 166 u32 pixel_fmt; 168 - unsigned long serial_clk; 169 - unsigned long di_clk = mode->clock * 1000; 170 - int mux = imx_drm_encoder_get_mux_id(imx_ldb_ch->child, encoder); 171 - 172 - if (ldb->ldb_ctrl & LDB_SPLIT_MODE_EN) { 173 - /* dual channel LVDS mode */ 174 - serial_clk = 3500UL * mode->clock; 175 - imx_ldb_set_clock(ldb, mux, 0, serial_clk, di_clk); 176 - imx_ldb_set_clock(ldb, mux, 1, serial_clk, di_clk); 177 - } else { 178 - serial_clk = 7000UL * mode->clock; 179 - imx_ldb_set_clock(ldb, mux, imx_ldb_ch->chno, serial_clk, 180 - di_clk); 181 - } 182 167 183 168 switch (imx_ldb_ch->chno) { 184 169 case 0: ··· 232 247 struct imx_ldb_channel *imx_ldb_ch = enc_to_imx_ldb_ch(encoder); 233 248 struct imx_ldb *ldb = imx_ldb_ch->ldb; 234 249 int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN; 250 + unsigned long serial_clk; 251 + unsigned long di_clk = mode->clock * 1000; 252 + int mux = imx_drm_encoder_get_mux_id(imx_ldb_ch->child, encoder); 235 253 236 254 if (mode->clock > 170000) { 237 255 dev_warn(ldb->dev, ··· 243 255 if (mode->clock > 85000 && !dual) { 244 256 dev_warn(ldb->dev, 245 257 "%s: mode exceeds 85 MHz pixel clock\n", __func__); 258 + } 259 + 260 + if (dual) { 261 + serial_clk = 3500UL * mode->clock; 262 + imx_ldb_set_clock(ldb, mux, 0, serial_clk, di_clk); 263 + imx_ldb_set_clock(ldb, mux, 1, serial_clk, di_clk); 264 + } else { 265 + serial_clk = 7000UL * mode->clock; 266 + imx_ldb_set_clock(ldb, mux, imx_ldb_ch->chno, serial_clk, 267 + di_clk); 246 268 } 247 269 248 270 /* FIXME - assumes straight connections DI0 --> CH0, DI1 --> CH1 */
+4 -1
drivers/gpu/drm/imx/parallel-display.c
··· 236 236 } 237 237 238 238 panel_node = of_parse_phandle(np, "fsl,panel", 0); 239 - if (panel_node) 239 + if (panel_node) { 240 240 imxpd->panel = of_drm_find_panel(panel_node); 241 + if (!imxpd->panel) 242 + return -EPROBE_DEFER; 243 + } 241 244 242 245 imxpd->dev = dev; 243 246
+5
drivers/gpu/drm/msm/mdp/mdp4/mdp4_irq.c
··· 32 32 void mdp4_irq_preinstall(struct msm_kms *kms) 33 33 { 34 34 struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); 35 + mdp4_enable(mdp4_kms); 35 36 mdp4_write(mdp4_kms, REG_MDP4_INTR_CLEAR, 0xffffffff); 37 + mdp4_write(mdp4_kms, REG_MDP4_INTR_ENABLE, 0x00000000); 38 + mdp4_disable(mdp4_kms); 36 39 } 37 40 38 41 int mdp4_irq_postinstall(struct msm_kms *kms) ··· 56 53 void mdp4_irq_uninstall(struct msm_kms *kms) 57 54 { 58 55 struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); 56 + mdp4_enable(mdp4_kms); 59 57 mdp4_write(mdp4_kms, REG_MDP4_INTR_ENABLE, 0x00000000); 58 + mdp4_disable(mdp4_kms); 60 59 } 61 60 62 61 irqreturn_t mdp4_irq(struct msm_kms *kms)
+4 -11
drivers/gpu/drm/msm/mdp/mdp5/mdp5.xml.h
··· 8 8 git clone https://github.com/freedreno/envytools.git 9 9 10 10 The rules-ng-ng source files this header was generated from are: 11 - - /home/robclark/src/freedreno/envytools/rnndb/msm.xml ( 676 bytes, from 2014-12-05 15:34:49) 12 - - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27) 13 - - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20908 bytes, from 2014-12-08 16:13:00) 14 - - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2357 bytes, from 2014-12-08 16:13:00) 15 - - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 27208 bytes, from 2015-01-13 23:56:11) 16 - - /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 11712 bytes, from 2013-08-17 17:13:43) 17 - - /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 344 bytes, from 2013-08-11 19:26:32) 18 - - /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2014-10-31 16:48:57) 19 - - /home/robclark/src/freedreno/envytools/rnndb/hdmi/qfprom.xml ( 600 bytes, from 2013-07-05 19:21:12) 20 - - /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 26848 bytes, from 2015-01-13 23:55:57) 21 - - /home/robclark/src/freedreno/envytools/rnndb/edp/edp.xml ( 8253 bytes, from 2014-12-08 16:13:00) 11 + - /local/mnt2/workspace2/sviau/envytools/rnndb/mdp/mdp5.xml ( 27229 bytes, from 2015-02-10 17:00:41) 12 + - /local/mnt2/workspace2/sviau/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2014-06-02 18:31:15) 13 + - /local/mnt2/workspace2/sviau/envytools/rnndb/mdp/mdp_common.xml ( 2357 bytes, from 2015-01-23 16:20:19) 22 14 23 15 Copyright (C) 2013-2015 by the following authors: 24 16 - Rob Clark <robdclark@gmail.com> (robclark) ··· 902 910 case 2: return (mdp5_cfg->lm.base[2]); 903 911 case 3: return (mdp5_cfg->lm.base[3]); 904 912 case 4: return (mdp5_cfg->lm.base[4]); 913 + case 5: return (mdp5_cfg->lm.base[5]); 905 914 default: return INVALID_IDX(idx); 906 915 } 
907 916 }
+61 -38
drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
··· 62 62 63 63 /* current cursor being scanned out: */ 64 64 struct drm_gem_object *scanout_bo; 65 - uint32_t width; 66 - uint32_t height; 65 + uint32_t width, height; 66 + uint32_t x, y; 67 67 } cursor; 68 68 }; 69 69 #define to_mdp5_crtc(x) container_of(x, struct mdp5_crtc, base) ··· 103 103 struct drm_plane *plane; 104 104 uint32_t flush_mask = 0; 105 105 106 - /* we could have already released CTL in the disable path: */ 107 - if (!mdp5_crtc->ctl) 106 + /* this should not happen: */ 107 + if (WARN_ON(!mdp5_crtc->ctl)) 108 108 return; 109 109 110 110 drm_atomic_crtc_for_each_plane(plane, crtc) { ··· 142 142 143 143 drm_atomic_crtc_for_each_plane(plane, crtc) { 144 144 mdp5_plane_complete_flip(plane); 145 + } 146 + 147 + if (mdp5_crtc->ctl && !crtc->state->enable) { 148 + mdp5_ctl_release(mdp5_crtc->ctl); 149 + mdp5_crtc->ctl = NULL; 145 150 } 146 151 } 147 152 ··· 391 386 mdp5_crtc->event = crtc->state->event; 392 387 spin_unlock_irqrestore(&dev->event_lock, flags); 393 388 389 + /* 390 + * If no CTL has been allocated in mdp5_crtc_atomic_check(), 391 + * it means we are trying to flush a CRTC whose state is disabled: 392 + * nothing else needs to be done. 393 + */ 394 + if (unlikely(!mdp5_crtc->ctl)) 395 + return; 396 + 394 397 blend_setup(crtc); 395 398 crtc_flush_all(crtc); 396 399 request_pending(crtc, PENDING_FLIP); 397 - 398 - if (mdp5_crtc->ctl && !crtc->state->enable) { 399 - mdp5_ctl_release(mdp5_crtc->ctl); 400 - mdp5_crtc->ctl = NULL; 401 - } 402 400 } 403 401 404 402 static int mdp5_crtc_set_property(struct drm_crtc *crtc, ··· 409 401 { 410 402 // XXX 411 403 return -EINVAL; 404 + } 405 + 406 + static void get_roi(struct drm_crtc *crtc, uint32_t *roi_w, uint32_t *roi_h) 407 + { 408 + struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); 409 + uint32_t xres = crtc->mode.hdisplay; 410 + uint32_t yres = crtc->mode.vdisplay; 411 + 412 + /* 413 + * Cursor Region Of Interest (ROI) is a plane read from cursor 414 + * buffer to render. 
The ROI region is determined by the visibility of 415 + * the cursor point. In the default Cursor image the cursor point will 416 + * be at the top left of the cursor image, unless it is specified 417 + * otherwise using hotspot feature. 418 + * 419 + * If the cursor point reaches the right (xres - x < cursor.width) or 420 + * bottom (yres - y < cursor.height) boundary of the screen, then ROI 421 + * width and ROI height need to be evaluated to crop the cursor image 422 + * accordingly. 423 + * (xres-x) will be new cursor width when x > (xres - cursor.width) 424 + * (yres-y) will be new cursor height when y > (yres - cursor.height) 425 + */ 426 + *roi_w = min(mdp5_crtc->cursor.width, xres - 427 + mdp5_crtc->cursor.x); 428 + *roi_h = min(mdp5_crtc->cursor.height, yres - 429 + mdp5_crtc->cursor.y); 412 430 } 413 431 414 432 static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, ··· 450 416 unsigned int depth; 451 417 enum mdp5_cursor_alpha cur_alpha = CURSOR_ALPHA_PER_PIXEL; 452 418 uint32_t flush_mask = mdp_ctl_flush_mask_cursor(0); 419 + uint32_t roi_w, roi_h; 453 420 unsigned long flags; 454 421 455 422 if ((width > CURSOR_WIDTH) || (height > CURSOR_HEIGHT)) { ··· 481 446 spin_lock_irqsave(&mdp5_crtc->cursor.lock, flags); 482 447 old_bo = mdp5_crtc->cursor.scanout_bo; 483 448 449 + mdp5_crtc->cursor.scanout_bo = cursor_bo; 450 + mdp5_crtc->cursor.width = width; 451 + mdp5_crtc->cursor.height = height; 452 + 453 + get_roi(crtc, &roi_w, &roi_h); 454 + 484 455 mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_STRIDE(lm), stride); 485 456 mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_FORMAT(lm), 486 457 MDP5_LM_CURSOR_FORMAT_FORMAT(CURSOR_FMT_ARGB8888)); ··· 494 453 MDP5_LM_CURSOR_IMG_SIZE_SRC_H(height) | 495 454 MDP5_LM_CURSOR_IMG_SIZE_SRC_W(width)); 496 455 mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_SIZE(lm), 497 - MDP5_LM_CURSOR_SIZE_ROI_H(height) | 498 - MDP5_LM_CURSOR_SIZE_ROI_W(width)); 456 + MDP5_LM_CURSOR_SIZE_ROI_H(roi_h) | 457 + MDP5_LM_CURSOR_SIZE_ROI_W(roi_w)); 499 458 
mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_BASE_ADDR(lm), cursor_addr); 500 459 501 - 502 460 blendcfg = MDP5_LM_CURSOR_BLEND_CONFIG_BLEND_EN; 503 - blendcfg |= MDP5_LM_CURSOR_BLEND_CONFIG_BLEND_TRANSP_EN; 504 461 blendcfg |= MDP5_LM_CURSOR_BLEND_CONFIG_BLEND_ALPHA_SEL(cur_alpha); 505 462 mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_BLEND_CONFIG(lm), blendcfg); 506 463 507 - mdp5_crtc->cursor.scanout_bo = cursor_bo; 508 - mdp5_crtc->cursor.width = width; 509 - mdp5_crtc->cursor.height = height; 510 464 spin_unlock_irqrestore(&mdp5_crtc->cursor.lock, flags); 511 465 512 466 ret = mdp5_ctl_set_cursor(mdp5_crtc->ctl, true); ··· 525 489 struct mdp5_kms *mdp5_kms = get_kms(crtc); 526 490 struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); 527 491 uint32_t flush_mask = mdp_ctl_flush_mask_cursor(0); 528 - uint32_t xres = crtc->mode.hdisplay; 529 - uint32_t yres = crtc->mode.vdisplay; 530 492 uint32_t roi_w; 531 493 uint32_t roi_h; 532 494 unsigned long flags; 533 495 534 - x = (x > 0) ? x : 0; 535 - y = (y > 0) ? y : 0; 496 + /* In case the CRTC is disabled, just drop the cursor update */ 497 + if (unlikely(!crtc->state->enable)) 498 + return 0; 536 499 537 - /* 538 - * Cursor Region Of Interest (ROI) is a plane read from cursor 539 - * buffer to render. The ROI region is determined by the visiblity of 540 - * the cursor point. In the default Cursor image the cursor point will 541 - * be at the top left of the cursor image, unless it is specified 542 - * otherwise using hotspot feature. 543 - * 544 - * If the cursor point reaches the right (xres - x < cursor.width) or 545 - * bottom (yres - y < cursor.height) boundary of the screen, then ROI 546 - * width and ROI height need to be evaluated to crop the cursor image 547 - * accordingly. 
548 - * (xres-x) will be new cursor width when x > (xres - cursor.width) 549 - * (yres-y) will be new cursor height when y > (yres - cursor.height) 550 - */ 551 - roi_w = min(mdp5_crtc->cursor.width, xres - x); 552 - roi_h = min(mdp5_crtc->cursor.height, yres - y); 500 + mdp5_crtc->cursor.x = x = max(x, 0); 501 + mdp5_crtc->cursor.y = y = max(y, 0); 502 + 503 + get_roi(crtc, &roi_w, &roi_h); 553 504 554 505 spin_lock_irqsave(&mdp5_crtc->cursor.lock, flags); 555 506 mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_SIZE(mdp5_crtc->lm), ··· 567 544 static const struct drm_crtc_helper_funcs mdp5_crtc_helper_funcs = { 568 545 .mode_fixup = mdp5_crtc_mode_fixup, 569 546 .mode_set_nofb = mdp5_crtc_mode_set_nofb, 570 - .prepare = mdp5_crtc_disable, 571 - .commit = mdp5_crtc_enable, 547 + .disable = mdp5_crtc_disable, 548 + .enable = mdp5_crtc_enable, 572 549 .atomic_check = mdp5_crtc_atomic_check, 573 550 .atomic_begin = mdp5_crtc_atomic_begin, 574 551 .atomic_flush = mdp5_crtc_atomic_flush,
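The cropping rule that the new get_roi() helper centralizes reduces to a min() against the pixels remaining before the screen edge. A standalone sketch of that arithmetic (names are illustrative, not the driver's):

```c
#include <stdint.h>

/* Illustrative stand-in for the ROI computation described in get_roi():
 * when the cursor point nears the right/bottom edge, the visible region
 * shrinks to the pixels remaining before that edge. */
static uint32_t roi_dim(uint32_t cursor_dim, uint32_t screen_res, uint32_t pos)
{
    uint32_t remaining = screen_res - pos;  /* e.g. xres - cursor.x */
    return cursor_dim < remaining ? cursor_dim : remaining;
}
```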
+3 -3
drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c
··· 267 267 mdp5_write(mdp5_kms, REG_MDP5_INTF_TIMING_ENGINE_EN(intf), 1); 268 268 spin_unlock_irqrestore(&mdp5_encoder->intf_lock, flags); 269 269 270 - mdp5_encoder->enabled = false; 270 + mdp5_encoder->enabled = true; 271 271 } 272 272 273 273 static const struct drm_encoder_helper_funcs mdp5_encoder_helper_funcs = { 274 274 .mode_fixup = mdp5_encoder_mode_fixup, 275 275 .mode_set = mdp5_encoder_mode_set, 276 - .prepare = mdp5_encoder_disable, 277 - .commit = mdp5_encoder_enable, 276 + .disable = mdp5_encoder_disable, 277 + .enable = mdp5_encoder_enable, 278 278 }; 279 279 280 280 /* initialize encoder */
+5
drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c
··· 34 34 void mdp5_irq_preinstall(struct msm_kms *kms) 35 35 { 36 36 struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); 37 + mdp5_enable(mdp5_kms); 37 38 mdp5_write(mdp5_kms, REG_MDP5_INTR_CLEAR, 0xffffffff); 39 + mdp5_write(mdp5_kms, REG_MDP5_INTR_EN, 0x00000000); 40 + mdp5_disable(mdp5_kms); 38 41 } 39 42 40 43 int mdp5_irq_postinstall(struct msm_kms *kms) ··· 60 57 void mdp5_irq_uninstall(struct msm_kms *kms) 61 58 { 62 59 struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); 60 + mdp5_enable(mdp5_kms); 63 61 mdp5_write(mdp5_kms, REG_MDP5_INTR_EN, 0x00000000); 62 + mdp5_disable(mdp5_kms); 64 63 } 65 64 66 65 static void mdp5_irq_mdp(struct mdp_kms *mdp_kms)
+3 -1
drivers/gpu/drm/msm/msm_atomic.c
··· 219 219 * mark our set of crtc's as busy: 220 220 */ 221 221 ret = start_atomic(dev->dev_private, c->crtc_mask); 222 - if (ret) 222 + if (ret) { 223 + kfree(c); 223 224 return ret; 225 + } 224 226 225 227 /* 226 228 * This is the point of no return - everything below never fails except
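The msm_atomic fix above is the standard unwind-on-error rule: state allocated before a failing step must be released on that error path. A minimal model with hypothetical names, not the driver's actual types:

```c
#include <stdlib.h>

/* Hypothetical model of the fixed error path: 'c' is allocated first,
 * so a failure in the later "start" step must free it before returning. */
static int start_step(int should_fail) { return should_fail ? -1 : 0; }

static int do_commit(int should_fail)
{
    int *c = malloc(sizeof(*c));
    if (!c)
        return -1;
    if (start_step(should_fail)) {
        free(c);        /* the added cleanup: no leak on early return */
        return -1;
    }
    free(c);            /* normal teardown once the commit completes */
    return 0;
}
```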
+1 -1
drivers/gpu/drm/nouveau/nouveau_fbcon.c
··· 418 418 nouveau_fbcon_zfill(dev, fbcon); 419 419 420 420 /* To allow resizeing without swapping buffers */ 421 - NV_INFO(drm, "allocated %dx%d fb: 0x%lx, bo %p\n", 421 + NV_INFO(drm, "allocated %dx%d fb: 0x%llx, bo %p\n", 422 422 nouveau_fb->base.width, nouveau_fb->base.height, 423 423 nvbo->bo.offset, nvbo); 424 424
+4 -2
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
··· 340 340 341 341 /* switch mmio to cpu's native endianness */ 342 342 #ifndef __BIG_ENDIAN 343 - if (ioread32_native(map + 0x000004) != 0x00000000) 343 + if (ioread32_native(map + 0x000004) != 0x00000000) { 344 344 #else 345 - if (ioread32_native(map + 0x000004) == 0x00000000) 345 + if (ioread32_native(map + 0x000004) == 0x00000000) { 346 346 #endif 347 347 iowrite32_native(0x01000001, map + 0x000004); 348 + ioread32_native(map); 349 + } 348 350 349 351 /* read boot0 and strapping information */ 350 352 boot0 = ioread32_native(map + 0x000000);
+43
drivers/gpu/drm/nouveau/nvkm/engine/device/gm100.c
··· 142 142 device->oclass[NVDEV_ENGINE_MSPPP ] = &gf100_msppp_oclass; 143 143 #endif 144 144 break; 145 + case 0x126: 146 + device->cname = "GM206"; 147 + device->oclass[NVDEV_SUBDEV_VBIOS ] = &nvkm_bios_oclass; 148 + device->oclass[NVDEV_SUBDEV_GPIO ] = gk104_gpio_oclass; 149 + device->oclass[NVDEV_SUBDEV_I2C ] = gm204_i2c_oclass; 150 + device->oclass[NVDEV_SUBDEV_FUSE ] = &gm107_fuse_oclass; 151 + #if 0 152 + /* looks to be some non-trivial changes */ 153 + device->oclass[NVDEV_SUBDEV_CLK ] = &gk104_clk_oclass; 154 + /* priv ring says no to 0x10eb14 writes */ 155 + device->oclass[NVDEV_SUBDEV_THERM ] = &gm107_therm_oclass; 156 + #endif 157 + device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass; 158 + device->oclass[NVDEV_SUBDEV_DEVINIT] = gm204_devinit_oclass; 159 + device->oclass[NVDEV_SUBDEV_MC ] = gk20a_mc_oclass; 160 + device->oclass[NVDEV_SUBDEV_BUS ] = gf100_bus_oclass; 161 + device->oclass[NVDEV_SUBDEV_TIMER ] = &gk20a_timer_oclass; 162 + device->oclass[NVDEV_SUBDEV_FB ] = gm107_fb_oclass; 163 + device->oclass[NVDEV_SUBDEV_LTC ] = gm107_ltc_oclass; 164 + device->oclass[NVDEV_SUBDEV_IBUS ] = &gk104_ibus_oclass; 165 + device->oclass[NVDEV_SUBDEV_INSTMEM] = nv50_instmem_oclass; 166 + device->oclass[NVDEV_SUBDEV_MMU ] = &gf100_mmu_oclass; 167 + device->oclass[NVDEV_SUBDEV_BAR ] = &gf100_bar_oclass; 168 + device->oclass[NVDEV_SUBDEV_PMU ] = gk208_pmu_oclass; 169 + #if 0 170 + device->oclass[NVDEV_SUBDEV_VOLT ] = &nv40_volt_oclass; 171 + #endif 172 + device->oclass[NVDEV_ENGINE_DMAOBJ ] = gf110_dmaeng_oclass; 173 + #if 0 174 + device->oclass[NVDEV_ENGINE_FIFO ] = gk208_fifo_oclass; 175 + device->oclass[NVDEV_ENGINE_SW ] = gf100_sw_oclass; 176 + device->oclass[NVDEV_ENGINE_GR ] = gm107_gr_oclass; 177 + #endif 178 + device->oclass[NVDEV_ENGINE_DISP ] = gm204_disp_oclass; 179 + #if 0 180 + device->oclass[NVDEV_ENGINE_CE0 ] = &gm204_ce0_oclass; 181 + device->oclass[NVDEV_ENGINE_CE1 ] = &gm204_ce1_oclass; 182 + device->oclass[NVDEV_ENGINE_CE2 ] = &gm204_ce2_oclass; 
183 + device->oclass[NVDEV_ENGINE_MSVLD ] = &gk104_msvld_oclass; 184 + device->oclass[NVDEV_ENGINE_MSPDEC ] = &gk104_mspdec_oclass; 185 + device->oclass[NVDEV_ENGINE_MSPPP ] = &gf100_msppp_oclass; 186 + #endif 187 + break; 145 188 default: 146 189 nv_fatal(device, "unknown Maxwell chipset\n"); 147 190 return -EINVAL;
+43 -58
drivers/gpu/drm/nouveau/nvkm/engine/fifo/nv04.c
··· 502 502 { 503 503 struct nvkm_device *device = nv_device(subdev); 504 504 struct nv04_fifo_priv *priv = (void *)subdev; 505 - uint32_t status, reassign; 506 - int cnt = 0; 505 + u32 mask = nv_rd32(priv, NV03_PFIFO_INTR_EN_0); 506 + u32 stat = nv_rd32(priv, NV03_PFIFO_INTR_0) & mask; 507 + u32 reassign, chid, get, sem; 507 508 508 509 reassign = nv_rd32(priv, NV03_PFIFO_CACHES) & 1; 509 - while ((status = nv_rd32(priv, NV03_PFIFO_INTR_0)) && (cnt++ < 100)) { 510 - uint32_t chid, get; 510 + nv_wr32(priv, NV03_PFIFO_CACHES, 0); 511 511 512 - nv_wr32(priv, NV03_PFIFO_CACHES, 0); 512 + chid = nv_rd32(priv, NV03_PFIFO_CACHE1_PUSH1) & priv->base.max; 513 + get = nv_rd32(priv, NV03_PFIFO_CACHE1_GET); 513 514 514 - chid = nv_rd32(priv, NV03_PFIFO_CACHE1_PUSH1) & priv->base.max; 515 - get = nv_rd32(priv, NV03_PFIFO_CACHE1_GET); 516 - 517 - if (status & NV_PFIFO_INTR_CACHE_ERROR) { 518 - nv04_fifo_cache_error(device, priv, chid, get); 519 - status &= ~NV_PFIFO_INTR_CACHE_ERROR; 520 - } 521 - 522 - if (status & NV_PFIFO_INTR_DMA_PUSHER) { 523 - nv04_fifo_dma_pusher(device, priv, chid); 524 - status &= ~NV_PFIFO_INTR_DMA_PUSHER; 525 - } 526 - 527 - if (status & NV_PFIFO_INTR_SEMAPHORE) { 528 - uint32_t sem; 529 - 530 - status &= ~NV_PFIFO_INTR_SEMAPHORE; 531 - nv_wr32(priv, NV03_PFIFO_INTR_0, 532 - NV_PFIFO_INTR_SEMAPHORE); 533 - 534 - sem = nv_rd32(priv, NV10_PFIFO_CACHE1_SEMAPHORE); 535 - nv_wr32(priv, NV10_PFIFO_CACHE1_SEMAPHORE, sem | 0x1); 536 - 537 - nv_wr32(priv, NV03_PFIFO_CACHE1_GET, get + 4); 538 - nv_wr32(priv, NV04_PFIFO_CACHE1_PULL0, 1); 539 - } 540 - 541 - if (device->card_type == NV_50) { 542 - if (status & 0x00000010) { 543 - status &= ~0x00000010; 544 - nv_wr32(priv, 0x002100, 0x00000010); 545 - } 546 - 547 - if (status & 0x40000000) { 548 - nv_wr32(priv, 0x002100, 0x40000000); 549 - nvkm_fifo_uevent(&priv->base); 550 - status &= ~0x40000000; 551 - } 552 - } 553 - 554 - if (status) { 555 - nv_warn(priv, "unknown intr 0x%08x, ch %d\n", 556 - status, chid); 
557 - nv_wr32(priv, NV03_PFIFO_INTR_0, status); 558 - status = 0; 559 - } 560 - 561 - nv_wr32(priv, NV03_PFIFO_CACHES, reassign); 515 + if (stat & NV_PFIFO_INTR_CACHE_ERROR) { 516 + nv04_fifo_cache_error(device, priv, chid, get); 517 + stat &= ~NV_PFIFO_INTR_CACHE_ERROR; 562 518 } 563 519 564 - if (status) { 565 - nv_error(priv, "still angry after %d spins, halt\n", cnt); 566 - nv_wr32(priv, 0x002140, 0); 567 - nv_wr32(priv, 0x000140, 0); 520 + if (stat & NV_PFIFO_INTR_DMA_PUSHER) { 521 + nv04_fifo_dma_pusher(device, priv, chid); 522 + stat &= ~NV_PFIFO_INTR_DMA_PUSHER; 568 523 } 569 524 570 - nv_wr32(priv, 0x000100, 0x00000100); 525 + if (stat & NV_PFIFO_INTR_SEMAPHORE) { 526 + stat &= ~NV_PFIFO_INTR_SEMAPHORE; 527 + nv_wr32(priv, NV03_PFIFO_INTR_0, NV_PFIFO_INTR_SEMAPHORE); 528 + 529 + sem = nv_rd32(priv, NV10_PFIFO_CACHE1_SEMAPHORE); 530 + nv_wr32(priv, NV10_PFIFO_CACHE1_SEMAPHORE, sem | 0x1); 531 + 532 + nv_wr32(priv, NV03_PFIFO_CACHE1_GET, get + 4); 533 + nv_wr32(priv, NV04_PFIFO_CACHE1_PULL0, 1); 534 + } 535 + 536 + if (device->card_type == NV_50) { 537 + if (stat & 0x00000010) { 538 + stat &= ~0x00000010; 539 + nv_wr32(priv, 0x002100, 0x00000010); 540 + } 541 + 542 + if (stat & 0x40000000) { 543 + nv_wr32(priv, 0x002100, 0x40000000); 544 + nvkm_fifo_uevent(&priv->base); 545 + stat &= ~0x40000000; 546 + } 547 + } 548 + 549 + if (stat) { 550 + nv_warn(priv, "unknown intr 0x%08x\n", stat); 551 + nv_mask(priv, NV03_PFIFO_INTR_EN_0, stat, 0x00000000); 552 + nv_wr32(priv, NV03_PFIFO_INTR_0, stat); 553 + } 554 + 555 + nv_wr32(priv, NV03_PFIFO_CACHES, reassign); 571 556 } 572 557 573 558 static int
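The rewritten nv04 handler drops the bounded retry loop in favor of a single pass that disables any interrupt bits it does not recognize, so a stuck source cannot storm. The leftover-bit bookkeeping can be sketched with a hypothetical helper:

```c
#include <stdint.h>

/* Hypothetical sketch of the single-pass strategy: clear the bits for
 * sources that were serviced, then disable whatever unknown bits remain
 * so the same unhandled interrupt is never taken twice. */
static uint32_t handle_and_mask(uint32_t stat, uint32_t handled,
                                uint32_t *intr_en)
{
    stat &= ~handled;   /* these sources were serviced */
    *intr_en &= ~stat;  /* mask off unknown leftovers */
    return stat;        /* bits still to acknowledge (and warn about) */
}
```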
+2 -2
drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgf100.c
··· 1032 1032 const int s = 8; 1033 1033 const int b = mmio_vram(info, impl->bundle_size, (1 << s), access); 1034 1034 mmio_refn(info, 0x408004, 0x00000000, s, b); 1035 - mmio_refn(info, 0x408008, 0x80000000 | (impl->bundle_size >> s), 0, b); 1035 + mmio_wr32(info, 0x408008, 0x80000000 | (impl->bundle_size >> s)); 1036 1036 mmio_refn(info, 0x418808, 0x00000000, s, b); 1037 - mmio_refn(info, 0x41880c, 0x80000000 | (impl->bundle_size >> s), 0, b); 1037 + mmio_wr32(info, 0x41880c, 0x80000000 | (impl->bundle_size >> s)); 1038 1038 } 1039 1039 1040 1040 void
+2 -2
drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk104.c
··· 851 851 const int s = 8; 852 852 const int b = mmio_vram(info, impl->bundle_size, (1 << s), access); 853 853 mmio_refn(info, 0x408004, 0x00000000, s, b); 854 - mmio_refn(info, 0x408008, 0x80000000 | (impl->bundle_size >> s), 0, b); 854 + mmio_wr32(info, 0x408008, 0x80000000 | (impl->bundle_size >> s)); 855 855 mmio_refn(info, 0x418808, 0x00000000, s, b); 856 - mmio_refn(info, 0x41880c, 0x80000000 | (impl->bundle_size >> s), 0, b); 856 + mmio_wr32(info, 0x41880c, 0x80000000 | (impl->bundle_size >> s)); 857 857 mmio_wr32(info, 0x4064c8, (state_limit << 16) | token_limit); 858 858 } 859 859
+2 -2
drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm107.c
··· 871 871 const int s = 8; 872 872 const int b = mmio_vram(info, impl->bundle_size, (1 << s), access); 873 873 mmio_refn(info, 0x408004, 0x00000000, s, b); 874 - mmio_refn(info, 0x408008, 0x80000000 | (impl->bundle_size >> s), 0, b); 874 + mmio_wr32(info, 0x408008, 0x80000000 | (impl->bundle_size >> s)); 875 875 mmio_refn(info, 0x418e24, 0x00000000, s, b); 876 - mmio_refn(info, 0x418e28, 0x80000000 | (impl->bundle_size >> s), 0, b); 876 + mmio_wr32(info, 0x418e28, 0x80000000 | (impl->bundle_size >> s)); 877 877 mmio_wr32(info, 0x4064c8, (state_limit << 16) | token_limit); 878 878 } 879 879
+5 -1
drivers/gpu/drm/nouveau/nvkm/subdev/bios/i2c.c
··· 74 74 u16 ent = dcb_i2c_entry(bios, idx, &ver, &len); 75 75 if (ent) { 76 76 if (ver >= 0x41) { 77 - if (!(nv_ro32(bios, ent) & 0x80000000)) 77 + u32 ent_value = nv_ro32(bios, ent); 78 + u8 i2c_port = (ent_value >> 27) & 0x1f; 79 + u8 dpaux_port = (ent_value >> 22) & 0x1f; 80 + /* value 0x1f means unused according to DCB 4.x spec */ 81 + if (i2c_port == 0x1f && dpaux_port == 0x1f) 78 82 info->type = DCB_I2C_UNUSED; 79 83 else 80 84 info->type = DCB_I2C_PMGR;
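The new check decodes two 5-bit port fields from the DCB 4.1 entry instead of testing bit 31; per the convention quoted in the patch, 0x1f in both fields means the entry is unused. The field extraction as a standalone sketch:

```c
#include <stdint.h>

/* Mirrors the decode above: bits 31:27 hold the i2c port, bits 26:22 the
 * dpaux port; 0x1f in both means "unused" per the DCB 4.x convention. */
static int dcb41_i2c_unused(uint32_t ent)
{
    uint8_t i2c_port   = (ent >> 27) & 0x1f;
    uint8_t dpaux_port = (ent >> 22) & 0x1f;
    return i2c_port == 0x1f && dpaux_port == 0x1f;
}
```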
+3
drivers/gpu/drm/radeon/atombios_crtc.c
··· 1405 1405 (x << 16) | y); 1406 1406 viewport_w = crtc->mode.hdisplay; 1407 1407 viewport_h = (crtc->mode.vdisplay + 1) & ~1; 1408 + if ((rdev->family >= CHIP_BONAIRE) && 1409 + (crtc->mode.flags & DRM_MODE_FLAG_INTERLACE)) 1410 + viewport_h *= 2; 1408 1411 WREG32(EVERGREEN_VIEWPORT_SIZE + radeon_crtc->crtc_offset, 1409 1412 (viewport_w << 16) | viewport_h); 1410 1413
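The viewport height is first rounded up to an even line count, then doubled for interlaced modes on CHIP_BONAIRE and newer. The same arithmetic, extracted into a hypothetical helper:

```c
#include <stdint.h>

/* Sketch of the viewport-height math in the hunk above. */
static uint32_t viewport_height(uint32_t vdisplay, int interlaced,
                                int bonaire_plus)
{
    uint32_t h = (vdisplay + 1) & ~1u;  /* round up to even */
    if (bonaire_plus && interlaced)
        h *= 2;                         /* full-frame height for interlace */
    return h;
}
```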
+16 -14
drivers/gpu/drm/radeon/atombios_encoders.c
··· 1626 1626 struct radeon_connector *radeon_connector = NULL; 1627 1627 struct radeon_connector_atom_dig *radeon_dig_connector = NULL; 1628 1628 bool travis_quirk = false; 1629 - int encoder_mode; 1630 1629 1631 1630 if (connector) { 1632 1631 radeon_connector = to_radeon_connector(connector); ··· 1721 1722 } 1722 1723 break; 1723 1724 } 1724 - 1725 - encoder_mode = atombios_get_encoder_mode(encoder); 1726 - if (connector && (radeon_audio != 0) && 1727 - ((encoder_mode == ATOM_ENCODER_MODE_HDMI) || 1728 - (ENCODER_MODE_IS_DP(encoder_mode) && 1729 - drm_detect_monitor_audio(radeon_connector_edid(connector))))) 1730 - radeon_audio_dpms(encoder, mode); 1731 1725 } 1732 1726 1733 1727 static void ··· 1729 1737 struct drm_device *dev = encoder->dev; 1730 1738 struct radeon_device *rdev = dev->dev_private; 1731 1739 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 1740 + struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 1741 + int encoder_mode = atombios_get_encoder_mode(encoder); 1732 1742 1733 1743 DRM_DEBUG_KMS("encoder dpms %d to mode %d, devices %08x, active_devices %08x\n", 1734 1744 radeon_encoder->encoder_id, mode, radeon_encoder->devices, 1735 1745 radeon_encoder->active_device); 1746 + 1747 + if (connector && (radeon_audio != 0) && 1748 + ((encoder_mode == ATOM_ENCODER_MODE_HDMI) || 1749 + (ENCODER_MODE_IS_DP(encoder_mode) && 1750 + drm_detect_monitor_audio(radeon_connector_edid(connector))))) 1751 + radeon_audio_dpms(encoder, mode); 1752 + 1736 1753 switch (radeon_encoder->encoder_id) { 1737 1754 case ENCODER_OBJECT_ID_INTERNAL_TMDS1: 1738 1755 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: ··· 2171 2170 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY3: 2172 2171 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: 2173 2172 /* handled in dpms */ 2174 - encoder_mode = atombios_get_encoder_mode(encoder); 2175 - if (connector && (radeon_audio != 0) && 2176 - ((encoder_mode == ATOM_ENCODER_MODE_HDMI) || 2177 - 
(ENCODER_MODE_IS_DP(encoder_mode) && 2178 - drm_detect_monitor_audio(radeon_connector_edid(connector))))) 2179 - radeon_audio_mode_set(encoder, adjusted_mode); 2180 2173 break; 2181 2174 case ENCODER_OBJECT_ID_INTERNAL_DDI: 2182 2175 case ENCODER_OBJECT_ID_INTERNAL_DVO1: ··· 2192 2197 } 2193 2198 2194 2199 atombios_apply_encoder_quirks(encoder, adjusted_mode); 2200 + 2201 + encoder_mode = atombios_get_encoder_mode(encoder); 2202 + if (connector && (radeon_audio != 0) && 2203 + ((encoder_mode == ATOM_ENCODER_MODE_HDMI) || 2204 + (ENCODER_MODE_IS_DP(encoder_mode) && 2205 + drm_detect_monitor_audio(radeon_connector_edid(connector))))) 2206 + radeon_audio_mode_set(encoder, adjusted_mode); 2195 2207 } 2196 2208 2197 2209 static bool
+3
drivers/gpu/drm/radeon/cik.c
··· 7555 7555 WREG32(DC_HPD5_INT_CONTROL, hpd5); 7556 7556 WREG32(DC_HPD6_INT_CONTROL, hpd6); 7557 7557 7558 + /* posting read */ 7559 + RREG32(SRBM_STATUS); 7560 + 7558 7561 return 0; 7559 7562 } 7560 7563
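The added RREG32(SRBM_STATUS) is a posting read: PCI writes are posted (buffered), and a read from the same device forces earlier writes to reach the hardware before the function returns. The effect can be modeled with a toy register file — pure simulation, not driver code:

```c
#include <stdint.h>

/* Toy model of posted MMIO writes: a write lands in a buffer and only
 * reaches the "device" register when a read forces the buffer to drain. */
static uint32_t dev_reg, pending, have_pending;

static void wreg32(uint32_t v) { pending = v; have_pending = 1; }

static uint32_t rreg32(void)
{
    if (have_pending) {          /* the read flushes the posted write */
        dev_reg = pending;
        have_pending = 0;
    }
    return dev_reg;
}
```

Without the final read, the simulated write would still be sitting in the buffer when the caller returned — which is exactly the ordering hazard the posting read closes.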
+1
drivers/gpu/drm/radeon/cikd.h
··· 2129 2129 #define VCE_UENC_REG_CLOCK_GATING 0x207c0 2130 2130 #define VCE_SYS_INT_EN 0x21300 2131 2131 # define VCE_SYS_INT_TRAP_INTERRUPT_EN (1 << 3) 2132 + #define VCE_LMI_VCPU_CACHE_40BIT_BAR 0x2145c 2132 2133 #define VCE_LMI_CTRL2 0x21474 2133 2134 #define VCE_LMI_CTRL 0x21498 2134 2135 #define VCE_LMI_VM_CTRL 0x214a0
+33 -35
drivers/gpu/drm/radeon/dce6_afmt.c
··· 26 26 #include "radeon_audio.h" 27 27 #include "sid.h" 28 28 29 + #define DCE8_DCCG_AUDIO_DTO1_PHASE 0x05b8 30 + #define DCE8_DCCG_AUDIO_DTO1_MODULE 0x05bc 31 + 29 32 u32 dce6_endpoint_rreg(struct radeon_device *rdev, 30 33 u32 block_offset, u32 reg) 31 34 { ··· 255 252 void dce6_hdmi_audio_set_dto(struct radeon_device *rdev, 256 253 struct radeon_crtc *crtc, unsigned int clock) 257 254 { 258 - /* Two dtos; generally use dto0 for HDMI */ 255 + /* Two dtos; generally use dto0 for HDMI */ 259 256 u32 value = 0; 260 257 261 - if (crtc) 258 + if (crtc) 262 259 value |= DCCG_AUDIO_DTO0_SOURCE_SEL(crtc->crtc_id); 263 260 264 261 WREG32(DCCG_AUDIO_DTO_SOURCE, value); 265 262 266 - /* Express [24MHz / target pixel clock] as an exact rational 267 - * number (coefficient of two integer numbers. DCCG_AUDIO_DTOx_PHASE 268 - * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator 269 - */ 270 - WREG32(DCCG_AUDIO_DTO0_PHASE, 24000); 271 - WREG32(DCCG_AUDIO_DTO0_MODULE, clock); 263 + /* Express [24MHz / target pixel clock] as an exact rational 264 + * number (coefficient of two integer numbers. DCCG_AUDIO_DTOx_PHASE 265 + * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator 266 + */ 267 + WREG32(DCCG_AUDIO_DTO0_PHASE, 24000); 268 + WREG32(DCCG_AUDIO_DTO0_MODULE, clock); 272 269 } 273 270 274 271 void dce6_dp_audio_set_dto(struct radeon_device *rdev, 275 272 struct radeon_crtc *crtc, unsigned int clock) 276 273 { 277 - /* Two dtos; generally use dto1 for DP */ 274 + /* Two dtos; generally use dto1 for DP */ 278 275 u32 value = 0; 279 276 value |= DCCG_AUDIO_DTO_SEL; 280 277 281 - if (crtc) 278 + if (crtc) 282 279 value |= DCCG_AUDIO_DTO0_SOURCE_SEL(crtc->crtc_id); 283 280 284 281 WREG32(DCCG_AUDIO_DTO_SOURCE, value); 285 282 286 - /* Express [24MHz / target pixel clock] as an exact rational 287 - * number (coefficient of two integer numbers. 
DCCG_AUDIO_DTOx_PHASE 288 - * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator 289 - */ 290 - WREG32(DCCG_AUDIO_DTO1_PHASE, 24000); 291 - WREG32(DCCG_AUDIO_DTO1_MODULE, clock); 283 + /* Express [24MHz / target pixel clock] as an exact rational 284 + * number (coefficient of two integer numbers. DCCG_AUDIO_DTOx_PHASE 285 + * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator 286 + */ 287 + if (ASIC_IS_DCE8(rdev)) { 288 + WREG32(DCE8_DCCG_AUDIO_DTO1_PHASE, 24000); 289 + WREG32(DCE8_DCCG_AUDIO_DTO1_MODULE, clock); 290 + } else { 291 + WREG32(DCCG_AUDIO_DTO1_PHASE, 24000); 292 + WREG32(DCCG_AUDIO_DTO1_MODULE, clock); 293 + } 292 294 } 293 295 294 - void dce6_enable_dp_audio_packets(struct drm_encoder *encoder, bool enable) 296 + void dce6_dp_enable(struct drm_encoder *encoder, bool enable) 295 297 { 296 298 struct drm_device *dev = encoder->dev; 297 299 struct radeon_device *rdev = dev->dev_private; 298 300 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 299 301 struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 300 - uint32_t offset; 301 302 302 303 if (!dig || !dig->afmt) 303 304 return; 304 305 305 - offset = dig->afmt->offset; 306 - 307 306 if (enable) { 308 - if (dig->afmt->enabled) 309 - return; 310 - 311 - WREG32(EVERGREEN_DP_SEC_TIMESTAMP + offset, EVERGREEN_DP_SEC_TIMESTAMP_MODE(1)); 312 - WREG32(EVERGREEN_DP_SEC_CNTL + offset, 313 - EVERGREEN_DP_SEC_ASP_ENABLE | /* Audio packet transmission */ 314 - EVERGREEN_DP_SEC_ATP_ENABLE | /* Audio timestamp packet transmission */ 315 - EVERGREEN_DP_SEC_AIP_ENABLE | /* Audio infoframe packet transmission */ 316 - EVERGREEN_DP_SEC_STREAM_ENABLE); /* Master enable for secondary stream engine */ 317 - radeon_audio_enable(rdev, dig->afmt->pin, true); 307 + WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset, 308 + EVERGREEN_DP_SEC_TIMESTAMP_MODE(1)); 309 + WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 310 + EVERGREEN_DP_SEC_ASP_ENABLE | /* Audio packet transmission 
*/ 311 + EVERGREEN_DP_SEC_ATP_ENABLE | /* Audio timestamp packet transmission */ 312 + EVERGREEN_DP_SEC_AIP_ENABLE | /* Audio infoframe packet transmission */ 313 + EVERGREEN_DP_SEC_STREAM_ENABLE); /* Master enable for secondary stream engine */ 318 314 } else { 319 - if (!dig->afmt->enabled) 320 - return; 321 - 322 - WREG32(EVERGREEN_DP_SEC_CNTL + offset, 0); 323 - radeon_audio_enable(rdev, dig->afmt->pin, false); 315 + WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0); 324 316 } 325 317 326 318 dig->afmt->enabled = enable;
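Both DTO paths program the ratio 24 MHz / pixel clock as an exact fraction: PHASE is the numerator (24000, in kHz) and MODULE the denominator (the pixel clock in kHz). Assuming the DTO scales its input clock by PHASE/MODULE, as the comment's ratio implies, the generated rate works out to 24 MHz for any mode:

```c
#include <stdint.h>

/* Sketch of the ratio the comment describes: output = input * PHASE/MODULE.
 * With PHASE = 24000 kHz and MODULE = pixel clock in kHz, the result is
 * exactly 24 MHz regardless of the mode's pixel clock. */
static uint32_t dto_rate_khz(uint32_t in_khz, uint32_t phase, uint32_t module)
{
    return (uint32_t)(((uint64_t)in_khz * phase) / module);
}
```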
+3
drivers/gpu/drm/radeon/evergreen.c
··· 4593 4593 WREG32(AFMT_AUDIO_PACKET_CONTROL + EVERGREEN_CRTC4_REGISTER_OFFSET, afmt5); 4594 4594 WREG32(AFMT_AUDIO_PACKET_CONTROL + EVERGREEN_CRTC5_REGISTER_OFFSET, afmt6); 4595 4595 4596 + /* posting read */ 4597 + RREG32(SRBM_STATUS); 4598 + 4596 4599 return 0; 4597 4600 } 4598 4601
+21 -38
drivers/gpu/drm/radeon/evergreen_hdmi.c
··· 272 272 } 273 273 274 274 void dce4_dp_audio_set_dto(struct radeon_device *rdev, 275 - struct radeon_crtc *crtc, unsigned int clock) 275 + struct radeon_crtc *crtc, unsigned int clock) 276 276 { 277 277 u32 value; 278 278 ··· 294 294 * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator 295 295 */ 296 296 WREG32(DCCG_AUDIO_DTO1_PHASE, 24000); 297 - WREG32(DCCG_AUDIO_DTO1_MODULE, rdev->clock.max_pixel_clock * 10); 297 + WREG32(DCCG_AUDIO_DTO1_MODULE, clock); 298 298 } 299 299 300 300 void dce4_set_vbi_packet(struct drm_encoder *encoder, u32 offset) ··· 350 350 struct drm_device *dev = encoder->dev; 351 351 struct radeon_device *rdev = dev->dev_private; 352 352 353 - WREG32(HDMI_INFOFRAME_CONTROL0 + offset, 354 - HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */ 355 - HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */ 356 - 357 353 WREG32(AFMT_INFOFRAME_CONTROL0 + offset, 358 354 AFMT_AUDIO_INFO_UPDATE); /* required for audio info values to be updated */ 359 - 360 - WREG32(HDMI_INFOFRAME_CONTROL1 + offset, 361 - HDMI_AUDIO_INFO_LINE(2)); /* anything other than 0 */ 362 - 363 - WREG32(HDMI_AUDIO_PACKET_CONTROL + offset, 364 - HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */ 365 - HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be suffient for all audio modes and small enough for all hblanks */ 366 355 367 356 WREG32(AFMT_60958_0 + offset, 368 357 AFMT_60958_CS_CHANNEL_NUMBER_L(1)); ··· 397 408 if (!dig || !dig->afmt) 398 409 return; 399 410 400 - /* Silent, r600_hdmi_enable will raise WARN for us */ 401 - if (enable && dig->afmt->enabled) 402 - return; 403 - if (!enable && !dig->afmt->enabled) 404 - return; 411 + if (enable) { 412 + WREG32(HDMI_INFOFRAME_CONTROL1 + dig->afmt->offset, 413 + HDMI_AUDIO_INFO_LINE(2)); /* anything other than 0 */ 405 414 406 - if (!enable && dig->afmt->pin) { 407 - radeon_audio_enable(rdev, dig->afmt->pin, 0); 408 - dig->afmt->pin = NULL; 415 + 
WREG32(HDMI_AUDIO_PACKET_CONTROL + dig->afmt->offset, 416 + HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */ 417 + HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be suffient for all audio modes and small enough for all hblanks */ 418 + 419 + WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset, 420 + HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */ 421 + HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */ 422 + } else { 423 + WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset, 0); 409 424 } 410 425 411 426 dig->afmt->enabled = enable; ··· 418 425 enable ? "En" : "Dis", dig->afmt->offset, radeon_encoder->encoder_id); 419 426 } 420 427 421 - void evergreen_enable_dp_audio_packets(struct drm_encoder *encoder, bool enable) 428 + void evergreen_dp_enable(struct drm_encoder *encoder, bool enable) 422 429 { 423 430 struct drm_device *dev = encoder->dev; 424 431 struct radeon_device *rdev = dev->dev_private; 425 432 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 426 433 struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 427 - uint32_t offset; 428 434 429 435 if (!dig || !dig->afmt) 430 436 return; 431 - 432 - offset = dig->afmt->offset; 433 437 434 438 if (enable) { 435 439 struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); ··· 434 444 struct radeon_connector_atom_dig *dig_connector; 435 445 uint32_t val; 436 446 437 - if (dig->afmt->enabled) 438 - return; 439 - 440 - WREG32(EVERGREEN_DP_SEC_TIMESTAMP + offset, EVERGREEN_DP_SEC_TIMESTAMP_MODE(1)); 447 + WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset, 448 + EVERGREEN_DP_SEC_TIMESTAMP_MODE(1)); 441 449 442 450 if (radeon_connector->con_priv) { 443 451 dig_connector = radeon_connector->con_priv; 444 - val = RREG32(EVERGREEN_DP_SEC_AUD_N + offset); 452 + val = RREG32(EVERGREEN_DP_SEC_AUD_N + dig->afmt->offset); 445 453 val &= ~EVERGREEN_DP_SEC_N_BASE_MULTIPLE(0xf); 446 454 447 455 if 
(dig_connector->dp_clock == 162000) ··· 447 459 else 448 460 val |= EVERGREEN_DP_SEC_N_BASE_MULTIPLE(5); 449 461 450 - WREG32(EVERGREEN_DP_SEC_AUD_N + offset, val); 462 + WREG32(EVERGREEN_DP_SEC_AUD_N + dig->afmt->offset, val); 451 463 } 452 464 453 - WREG32(EVERGREEN_DP_SEC_CNTL + offset, 465 + WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 454 466 EVERGREEN_DP_SEC_ASP_ENABLE | /* Audio packet transmission */ 455 467 EVERGREEN_DP_SEC_ATP_ENABLE | /* Audio timestamp packet transmission */ 456 468 EVERGREEN_DP_SEC_AIP_ENABLE | /* Audio infoframe packet transmission */ 457 469 EVERGREEN_DP_SEC_STREAM_ENABLE); /* Master enable for secondary stream engine */ 458 - radeon_audio_enable(rdev, dig->afmt->pin, 0xf); 459 470 } else { 460 - if (!dig->afmt->enabled) 461 - return; 462 - 463 - WREG32(EVERGREEN_DP_SEC_CNTL + offset, 0); 464 - radeon_audio_enable(rdev, dig->afmt->pin, 0); 471 + WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0); 465 472 } 466 473 467 474 dig->afmt->enabled = enable;
+4
drivers/gpu/drm/radeon/r100.c
··· 728 728 tmp |= RADEON_FP2_DETECT_MASK; 729 729 } 730 730 WREG32(RADEON_GEN_INT_CNTL, tmp); 731 + 732 + /* read back to post the write */ 733 + RREG32(RADEON_GEN_INT_CNTL); 734 + 731 735 return 0; 732 736 } 733 737
+3
drivers/gpu/drm/radeon/r600.c
··· 3784 3784 WREG32(RV770_CG_THERMAL_INT, thermal_int); 3785 3785 } 3786 3786 3787 + /* posting read */ 3788 + RREG32(R_000E50_SRBM_STATUS); 3789 + 3787 3790 return 0; 3788 3791 } 3789 3792
-11
drivers/gpu/drm/radeon/r600_hdmi.c
··· 476 476 if (!dig || !dig->afmt) 477 477 return; 478 478 479 - /* Silent, r600_hdmi_enable will raise WARN for us */ 480 - if (enable && dig->afmt->enabled) 481 - return; 482 - if (!enable && !dig->afmt->enabled) 483 - return; 484 - 485 - if (!enable && dig->afmt->pin) { 486 - radeon_audio_enable(rdev, dig->afmt->pin, 0); 487 - dig->afmt->pin = NULL; 488 - } 489 - 490 479 /* Older chipsets require setting HDMI and routing manually */ 491 480 if (!ASIC_IS_DCE3(rdev)) { 492 481 if (enable)
+1
drivers/gpu/drm/radeon/radeon.h
··· 1565 1565 int new_active_crtc_count; 1566 1566 u32 current_active_crtcs; 1567 1567 int current_active_crtc_count; 1568 + bool single_display; 1568 1569 struct radeon_dpm_dynamic_state dyn_state; 1569 1570 struct radeon_dpm_fan fan; 1570 1571 u32 tdp_limit;
+24 -26
drivers/gpu/drm/radeon/radeon_audio.c
··· 101 101 struct drm_display_mode *mode); 102 102 void r600_hdmi_enable(struct drm_encoder *encoder, bool enable); 103 103 void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable); 104 - void evergreen_enable_dp_audio_packets(struct drm_encoder *encoder, bool enable); 105 - void dce6_enable_dp_audio_packets(struct drm_encoder *encoder, bool enable); 104 + void evergreen_dp_enable(struct drm_encoder *encoder, bool enable); 105 + void dce6_dp_enable(struct drm_encoder *encoder, bool enable); 106 106 107 107 static const u32 pin_offsets[7] = 108 108 { ··· 210 210 .set_avi_packet = evergreen_set_avi_packet, 211 211 .set_audio_packet = dce4_set_audio_packet, 212 212 .mode_set = radeon_audio_dp_mode_set, 213 - .dpms = evergreen_enable_dp_audio_packets, 213 + .dpms = evergreen_dp_enable, 214 214 }; 215 215 216 216 static struct radeon_audio_funcs dce6_hdmi_funcs = { ··· 240 240 .set_avi_packet = evergreen_set_avi_packet, 241 241 .set_audio_packet = dce4_set_audio_packet, 242 242 .mode_set = radeon_audio_dp_mode_set, 243 - .dpms = dce6_enable_dp_audio_packets, 243 + .dpms = dce6_dp_enable, 244 244 }; 245 245 246 246 static void radeon_audio_interface_init(struct radeon_device *rdev) ··· 452 452 } 453 453 454 454 void radeon_audio_detect(struct drm_connector *connector, 455 - enum drm_connector_status status) 455 + enum drm_connector_status status) 456 456 { 457 457 struct radeon_device *rdev; 458 458 struct radeon_encoder *radeon_encoder; ··· 483 483 else 484 484 radeon_encoder->audio = rdev->audio.hdmi_funcs; 485 485 486 - radeon_audio_write_speaker_allocation(connector->encoder); 487 - radeon_audio_write_sad_regs(connector->encoder); 488 - if (connector->encoder->crtc) 489 - radeon_audio_write_latency_fields(connector->encoder, 490 - &connector->encoder->crtc->mode); 486 + dig->afmt->pin = radeon_audio_get_pin(connector->encoder); 491 487 radeon_audio_enable(rdev, dig->afmt->pin, 0xf); 492 488 } else { 493 489 radeon_audio_enable(rdev, dig->afmt->pin, 0); 
490 + dig->afmt->pin = NULL; 494 491 } 495 492 } 496 493 ··· 691 694 * update the info frames with the data from the current display mode 692 695 */ 693 696 static void radeon_audio_hdmi_mode_set(struct drm_encoder *encoder, 694 - struct drm_display_mode *mode) 697 + struct drm_display_mode *mode) 695 698 { 696 - struct radeon_device *rdev = encoder->dev->dev_private; 697 699 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 698 700 struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 699 701 700 702 if (!dig || !dig->afmt) 701 703 return; 702 704 703 - /* disable audio prior to setting up hw */ 704 - dig->afmt->pin = radeon_audio_get_pin(encoder); 705 - radeon_audio_enable(rdev, dig->afmt->pin, 0); 705 + radeon_audio_set_mute(encoder, true); 706 706 707 + radeon_audio_write_speaker_allocation(encoder); 708 + radeon_audio_write_sad_regs(encoder); 709 + radeon_audio_write_latency_fields(encoder, mode); 707 710 radeon_audio_set_dto(encoder, mode->clock); 708 711 radeon_audio_set_vbi_packet(encoder); 709 712 radeon_hdmi_set_color_depth(encoder); 710 - radeon_audio_set_mute(encoder, false); 711 713 radeon_audio_update_acr(encoder, mode->clock); 712 714 radeon_audio_set_audio_packet(encoder); 713 715 radeon_audio_select_pin(encoder); ··· 714 718 if (radeon_audio_set_avi_packet(encoder, mode) < 0) 715 719 return; 716 720 717 - /* enable audio after to setting up hw */ 718 - radeon_audio_enable(rdev, dig->afmt->pin, 0xf); 721 + radeon_audio_set_mute(encoder, false); 719 722 } 720 723 721 724 static void radeon_audio_dp_mode_set(struct drm_encoder *encoder, ··· 724 729 struct radeon_device *rdev = dev->dev_private; 725 730 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 726 731 struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 732 + struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 733 + struct radeon_connector *radeon_connector = to_radeon_connector(connector);
734 + struct radeon_connector_atom_dig *dig_connector = 735 + radeon_connector->con_priv; 727 736 728 737 if (!dig || !dig->afmt) 729 738 return; 730 739 731 - /* disable audio prior to setting up hw */ 732 - dig->afmt->pin = radeon_audio_get_pin(encoder); 733 - radeon_audio_enable(rdev, dig->afmt->pin, 0); 734 - 735 - radeon_audio_set_dto(encoder, rdev->clock.default_dispclk * 10); 740 + radeon_audio_write_speaker_allocation(encoder); 741 + radeon_audio_write_sad_regs(encoder); 742 + radeon_audio_write_latency_fields(encoder, mode); 743 + if (rdev->clock.dp_extclk || ASIC_IS_DCE5(rdev)) 744 + radeon_audio_set_dto(encoder, rdev->clock.default_dispclk * 10); 745 + else 746 + radeon_audio_set_dto(encoder, dig_connector->dp_clock); 736 747 radeon_audio_set_audio_packet(encoder); 737 748 radeon_audio_select_pin(encoder); 738 749 739 750 if (radeon_audio_set_avi_packet(encoder, mode) < 0) 740 751 return; 741 - 742 - /* enable audio after to setting up hw */ 743 - radeon_audio_enable(rdev, dig->afmt->pin, 0xf); 744 752 } 745 753 746 754 void radeon_audio_mode_set(struct drm_encoder *encoder,
+7 -3
drivers/gpu/drm/radeon/radeon_bios.c
··· 76 76 77 77 static bool radeon_read_bios(struct radeon_device *rdev) 78 78 { 79 - uint8_t __iomem *bios; 79 + uint8_t __iomem *bios, val1, val2; 80 80 size_t size; 81 81 82 82 rdev->bios = NULL; ··· 86 86 return false; 87 87 } 88 88 89 - if (size == 0 || bios[0] != 0x55 || bios[1] != 0xaa) { 89 + val1 = readb(&bios[0]); 90 + val2 = readb(&bios[1]); 91 + 92 + if (size == 0 || val1 != 0x55 || val2 != 0xaa) { 90 93 pci_unmap_rom(rdev->pdev, bios); 91 94 return false; 92 95 } 93 - rdev->bios = kmemdup(bios, size, GFP_KERNEL); 96 + rdev->bios = kzalloc(size, GFP_KERNEL); 94 97 if (rdev->bios == NULL) { 95 98 pci_unmap_rom(rdev->pdev, bios); 96 99 return false; 97 100 } 101 + memcpy_fromio(rdev->bios, bios, size); 98 102 pci_unmap_rom(rdev->pdev, bios); 99 103 return true; 100 104 }
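The radeon_bios.c hunk above stops dereferencing the pci_map_rom() pointer with plain loads and instead goes through readb() and memcpy_fromio(), since `__iomem` memory must not be accessed like ordinary RAM (kmemdup() did a plain memcpy under the hood). A rough userspace model of the retained logic, the 0x55 0xAA option-ROM signature check followed by a copy-out; `read_rom_byte()` and `copy_rom()` are illustrative names, not kernel APIs:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for readb(): in the kernel the ROM window is __iomem and
 * every byte access must go through the accessor. */
static uint8_t read_rom_byte(const uint8_t *rom, size_t off)
{
	return rom[off];
}

/* Validate the PCI option-ROM signature, then copy the image out
 * (kernel code would use memcpy_fromio() for the copy). Returns a
 * malloc'd copy on success, NULL on a bad/absent ROM. */
static uint8_t *copy_rom(const uint8_t *rom, size_t size)
{
	uint8_t *copy;

	if (size < 2)
		return NULL;
	if (read_rom_byte(rom, 0) != 0x55 || read_rom_byte(rom, 1) != 0xaa)
		return NULL;	/* missing 0x55 0xAA signature */

	copy = malloc(size);
	if (!copy)
		return NULL;
	memcpy(copy, rom, size);
	return copy;
}
```

The two readb() calls in the hunk exist purely so the signature bytes are fetched through the iomem accessor before being compared.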
+3 -1
drivers/gpu/drm/radeon/radeon_cs.c
··· 256 256 u32 ring = RADEON_CS_RING_GFX; 257 257 s32 priority = 0; 258 258 259 + INIT_LIST_HEAD(&p->validated); 260 + 259 261 if (!cs->num_chunks) { 260 262 return 0; 261 263 } 264 + 262 265 /* get chunks */ 263 - INIT_LIST_HEAD(&p->validated); 264 266 p->idx = 0; 265 267 p->ib.sa_bo = NULL; 266 268 p->const_ib.sa_bo = NULL;
+44 -22
drivers/gpu/drm/radeon/radeon_fence.c
··· 1030 1030 return test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->base.flags); 1031 1031 } 1032 1032 1033 + struct radeon_wait_cb { 1034 + struct fence_cb base; 1035 + struct task_struct *task; 1036 + }; 1037 + 1038 + static void 1039 + radeon_fence_wait_cb(struct fence *fence, struct fence_cb *cb) 1040 + { 1041 + struct radeon_wait_cb *wait = 1042 + container_of(cb, struct radeon_wait_cb, base); 1043 + 1044 + wake_up_process(wait->task); 1045 + } 1046 + 1033 1047 static signed long radeon_fence_default_wait(struct fence *f, bool intr, 1034 1048 signed long t) 1035 1049 { 1036 1050 struct radeon_fence *fence = to_radeon_fence(f); 1037 1051 struct radeon_device *rdev = fence->rdev; 1038 - bool signaled; 1052 + struct radeon_wait_cb cb; 1039 1053 1040 - fence_enable_sw_signaling(&fence->base); 1054 + cb.task = current; 1041 1055 1042 - /* 1043 - * This function has to return -EDEADLK, but cannot hold 1044 - * exclusive_lock during the wait because some callers 1045 - * may already hold it. This means checking needs_reset without 1046 - * lock, and not fiddling with any gpu internals. 1047 - * 1048 - * The callback installed with fence_enable_sw_signaling will 1049 - * run before our wait_event_*timeout call, so we will see 1050 - * both the signaled fence and the changes to needs_reset. 
1051 - */ 1056 + if (fence_add_callback(f, &cb.base, radeon_fence_wait_cb)) 1057 + return t; 1052 1058 1053 - if (intr) 1054 - t = wait_event_interruptible_timeout(rdev->fence_queue, 1055 - ((signaled = radeon_test_signaled(fence)) || 1056 - rdev->needs_reset), t); 1057 - else 1058 - t = wait_event_timeout(rdev->fence_queue, 1059 - ((signaled = radeon_test_signaled(fence)) || 1060 - rdev->needs_reset), t); 1059 + while (t > 0) { 1060 + if (intr) 1061 + set_current_state(TASK_INTERRUPTIBLE); 1062 + else 1063 + set_current_state(TASK_UNINTERRUPTIBLE); 1061 1064 1062 - if (t > 0 && !signaled) 1063 - return -EDEADLK; 1065 + /* 1066 + * radeon_test_signaled must be called after 1067 + * set_current_state to prevent a race with wake_up_process 1068 + */ 1069 + if (radeon_test_signaled(fence)) 1070 + break; 1071 + 1072 + if (rdev->needs_reset) { 1073 + t = -EDEADLK; 1074 + break; 1075 + } 1076 + 1077 + t = schedule_timeout(t); 1078 + 1079 + if (t > 0 && intr && signal_pending(current)) 1080 + t = -ERESTARTSYS; 1081 + } 1082 + 1083 + __set_current_state(TASK_RUNNING); 1084 + fence_remove_callback(f, &cb.base); 1085 + 1064 1086 return t; 1065 1087 } 1066 1088
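The rewritten radeon_fence_default_wait() above replaces wait_event_*timeout() with an explicit loop: install a wake-up callback, mark the task sleeping with set_current_state(), and only then test the fence and `needs_reset` before schedule_timeout(), so a wakeup arriving between the check and the sleep cannot be lost. A simplified userspace model of just the loop's bookkeeping (`model_fence_wait()` and `MODEL_EDEADLK` are stand-ins; abstract ticks replace jiffies, and the real race-avoidance depends on the kernel task-state machinery not modeled here):

```c
#include <assert.h>

#define MODEL_EDEADLK (-35)	/* stand-in for kernel -EDEADLK */

/* signaled_at: after how many polls the fence signals.
 * Each poll consumes one tick of the timeout, mimicking
 * timeout = schedule_timeout(timeout).
 * Returns: > 0 remaining ticks if signaled early,
 *            0 on timeout,
 *            MODEL_EDEADLK if a GPU reset is pending. */
static long model_fence_wait(long timeout, int signaled_at, int needs_reset)
{
	int polls = 0;

	while (timeout > 0) {
		/* kernel: set_current_state() precedes these checks so a
		 * concurrent wake_up_process() cannot be lost */
		if (polls >= signaled_at)
			break;			/* fence signaled */
		if (needs_reset)
			return MODEL_EDEADLK;	/* bail out for reset */
		timeout--;			/* one "schedule_timeout" tick */
		polls++;
	}
	return timeout;
}
```

The key ordering property the hunk's comment calls out, check the condition only after announcing intent to sleep, is what the `set_current_state()` comments in the real loop enforce.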
+1 -1
drivers/gpu/drm/radeon/radeon_kfd.c
··· 153 153 .compute_vmid_bitmap = 0xFF00, 154 154 155 155 .first_compute_pipe = 1, 156 - .compute_pipe_count = 8 - 1, 156 + .compute_pipe_count = 4 - 1, 157 157 }; 158 158 159 159 radeon_doorbell_get_kfd_info(rdev,
+4 -7
drivers/gpu/drm/radeon/radeon_mn.c
··· 122 122 it = interval_tree_iter_first(&rmn->objects, start, end); 123 123 while (it) { 124 124 struct radeon_bo *bo; 125 - struct fence *fence; 126 125 int r; 127 126 128 127 bo = container_of(it, struct radeon_bo, mn_it); ··· 133 134 continue; 134 135 } 135 136 136 - fence = reservation_object_get_excl(bo->tbo.resv); 137 - if (fence) { 138 - r = radeon_fence_wait((struct radeon_fence *)fence, false); 139 - if (r) 140 - DRM_ERROR("(%d) failed to wait for user bo\n", r); 141 - } 137 + r = reservation_object_wait_timeout_rcu(bo->tbo.resv, true, 138 + false, MAX_SCHEDULE_TIMEOUT); 139 + if (r) 140 + DRM_ERROR("(%d) failed to wait for user bo\n", r); 142 141 143 142 radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU); 144 143 r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
-11
drivers/gpu/drm/radeon/radeon_object.c
··· 173 173 else 174 174 rbo->placements[i].lpfn = 0; 175 175 } 176 - 177 - /* 178 - * Use two-ended allocation depending on the buffer size to 179 - * improve fragmentation quality. 180 - * 512kb was measured as the most optimal number. 181 - */ 182 - if (rbo->tbo.mem.size > 512 * 1024) { 183 - for (i = 0; i < c; i++) { 184 - rbo->placements[i].flags |= TTM_PL_FLAG_TOPDOWN; 185 - } 186 - } 187 176 } 188 177 189 178 int radeon_bo_create(struct radeon_device *rdev,
+17 -5
drivers/gpu/drm/radeon/radeon_pm.c
··· 837 837 radeon_pm_compute_clocks(rdev); 838 838 } 839 839 840 - static struct radeon_ps *radeon_dpm_pick_power_state(struct radeon_device *rdev, 841 - enum radeon_pm_state_type dpm_state) 840 + static bool radeon_dpm_single_display(struct radeon_device *rdev) 842 841 { 843 - int i; 844 - struct radeon_ps *ps; 845 - u32 ui_class; 846 842 bool single_display = (rdev->pm.dpm.new_active_crtc_count < 2) ? 847 843 true : false; 848 844 ··· 853 857 */ 854 858 if (single_display && (r600_dpm_get_vrefresh(rdev) >= 120)) 855 859 single_display = false; 860 + 861 + return single_display; 862 + } 863 + 864 + static struct radeon_ps *radeon_dpm_pick_power_state(struct radeon_device *rdev, 865 + enum radeon_pm_state_type dpm_state) 866 + { 867 + int i; 868 + struct radeon_ps *ps; 869 + u32 ui_class; 870 + bool single_display = radeon_dpm_single_display(rdev); 856 871 857 872 /* certain older asics have a separare 3D performance state, 858 873 * so try that first if the user selected performance ··· 990 983 struct radeon_ps *ps; 991 984 enum radeon_pm_state_type dpm_state; 992 985 int ret; 986 + bool single_display = radeon_dpm_single_display(rdev); 993 987 994 988 /* if dpm init failed */ 995 989 if (!rdev->pm.dpm_enabled) ··· 1014 1006 if (rdev->pm.dpm.current_ps == rdev->pm.dpm.requested_ps) { 1015 1007 /* vce just modifies an existing state so force a change */ 1016 1008 if (ps->vce_active != rdev->pm.dpm.vce_active) 1009 + goto force; 1010 + /* user has made a display change (such as timing) */ 1011 + if (rdev->pm.dpm.single_display != single_display) 1017 1012 goto force; 1018 1013 if ((rdev->family < CHIP_BARTS) || (rdev->flags & RADEON_IS_IGP)) { 1019 1014 /* for pre-BTC and APUs if the num crtcs changed but state is the same, ··· 1080 1069 1081 1070 rdev->pm.dpm.current_active_crtcs = rdev->pm.dpm.new_active_crtcs; 1082 1071 rdev->pm.dpm.current_active_crtc_count = rdev->pm.dpm.new_active_crtc_count; 1072 + rdev->pm.dpm.single_display = single_display; 1083 1073
1084 1074 /* wait for the rings to drain */ 1085 1075 for (i = 0; i < RADEON_NUM_RINGS; i++) {
+1 -1
drivers/gpu/drm/radeon/radeon_ring.c
··· 495 495 seq_printf(m, "%u free dwords in ring\n", ring->ring_free_dw); 496 496 seq_printf(m, "%u dwords in ring\n", count); 497 497 498 - if (!ring->ready) 498 + if (!ring->ring) 499 499 return 0; 500 500 501 501 /* print 8 dw before current rptr as often it's the last executed
+4
drivers/gpu/drm/radeon/radeon_ttm.c
··· 598 598 enum dma_data_direction direction = write ? 599 599 DMA_BIDIRECTIONAL : DMA_TO_DEVICE; 600 600 601 + /* double check that we don't free the table twice */ 602 + if (!ttm->sg->sgl) 603 + return; 604 + 601 605 /* free the sg table and pages again */ 602 606 dma_unmap_sg(rdev->dev, ttm->sg->sgl, ttm->sg->nents, direction); 603 607
+4
drivers/gpu/drm/radeon/rs600.c
··· 694 694 WREG32(R_007D18_DC_HOT_PLUG_DETECT2_INT_CONTROL, hpd2); 695 695 if (ASIC_IS_DCE2(rdev)) 696 696 WREG32(R_007408_HDMI0_AUDIO_PACKET_CONTROL, hdmi0); 697 + 698 + /* posting read */ 699 + RREG32(R_000040_GEN_INT_CNTL); 700 + 697 701 return 0; 698 702 } 699 703
+5 -4
drivers/gpu/drm/radeon/si.c
··· 6203 6203 6204 6204 WREG32(CG_THERMAL_INT, thermal_int); 6205 6205 6206 + /* posting read */ 6207 + RREG32(SRBM_STATUS); 6208 + 6206 6209 return 0; 6207 6210 } 6208 6211 ··· 7130 7127 WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_BYPASS_EN_MASK, ~UPLL_BYPASS_EN_MASK); 7131 7128 7132 7129 if (!vclk || !dclk) { 7133 - /* keep the Bypass mode, put PLL to sleep */ 7134 - WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_SLEEP_MASK, ~UPLL_SLEEP_MASK); 7130 + /* keep the Bypass mode */ 7135 7131 return 0; 7136 7132 } 7137 7133 ··· 7146 7144 /* set VCO_MODE to 1 */ 7147 7145 WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_VCO_MODE_MASK, ~UPLL_VCO_MODE_MASK); 7148 7146 7149 - /* toggle UPLL_SLEEP to 1 then back to 0 */ 7150 - WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_SLEEP_MASK, ~UPLL_SLEEP_MASK); 7147 + /* disable sleep mode */ 7151 7148 WREG32_P(CG_UPLL_FUNC_CNTL, 0, ~UPLL_SLEEP_MASK); 7152 7149 7153 7150 /* deassert UPLL_RESET */
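Both the rs600.c and si.c hunks above add a "posting read" after programming interrupt control registers: PCI/MMIO writes may be posted (buffered) along the bus, and issuing a read on the same path forces all earlier posted writes to reach the hardware before the function returns. A minimal model with a plain volatile variable standing in for a register; `wreg32_model()`/`rreg32_model()` are illustrative, not the real WREG32/RREG32 macros, and real posting behavior only exists on actual MMIO:

```c
#include <assert.h>
#include <stdint.h>

static volatile uint32_t fake_reg;	/* stands in for an MMIO register */

static void wreg32_model(volatile uint32_t *reg, uint32_t val)
{
	*reg = val;	/* kernel: writel(), which the bus may post */
}

static uint32_t rreg32_model(volatile uint32_t *reg)
{
	/* kernel: readl(); the read cannot complete until every
	 * earlier posted write on the same path has landed */
	return *reg;
}

/* Program interrupt enables, then flush with a posting read, the
 * shape of the new tail of rs600_irq_set()/si_irq_set(). */
static uint32_t program_irq_model(uint32_t enable_bits)
{
	wreg32_model(&fake_reg, enable_bits);
	return rreg32_model(&fake_reg);	/* posting read */
}
```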
+2 -2
drivers/gpu/drm/radeon/sid.h
··· 912 912 913 913 #define DCCG_AUDIO_DTO0_PHASE 0x05b0 914 914 #define DCCG_AUDIO_DTO0_MODULE 0x05b4 915 - #define DCCG_AUDIO_DTO1_PHASE 0x05b8 916 - #define DCCG_AUDIO_DTO1_MODULE 0x05bc 915 + #define DCCG_AUDIO_DTO1_PHASE 0x05c0 916 + #define DCCG_AUDIO_DTO1_MODULE 0x05c4 917 917 918 918 #define AFMT_AUDIO_SRC_CONTROL 0x713c 919 919 #define AFMT_AUDIO_SRC_SELECT(x) (((x) & 7) << 0)
+3
drivers/gpu/drm/radeon/vce_v2_0.c
··· 156 156 WREG32(VCE_LMI_SWAP_CNTL1, 0); 157 157 WREG32(VCE_LMI_VM_CTRL, 0); 158 158 159 + WREG32(VCE_LMI_VCPU_CACHE_40BIT_BAR, addr >> 8); 160 + 161 + addr &= 0xff; 159 162 size = RADEON_GPU_PAGE_ALIGN(rdev->vce_fw->size); 160 163 WREG32(VCE_VCPU_CACHE_OFFSET0, addr & 0x7fffffff); 161 164 WREG32(VCE_VCPU_CACHE_SIZE0, size);
+1 -1
drivers/gpu/drm/ttm/ttm_bo.c
··· 74 74 pr_err(" has_type: %d\n", man->has_type); 75 75 pr_err(" use_type: %d\n", man->use_type); 76 76 pr_err(" flags: 0x%08X\n", man->flags); 77 - pr_err(" gpu_offset: 0x%08lX\n", man->gpu_offset); 77 + pr_err(" gpu_offset: 0x%08llX\n", man->gpu_offset); 78 78 pr_err(" size: %llu\n", man->size); 79 79 pr_err(" available_caching: 0x%08X\n", man->available_caching); 80 80 pr_err(" default_caching: 0x%08X\n", man->default_caching);
+41 -37
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 725 725 goto out_err1; 726 726 } 727 727 728 - ret = ttm_bo_init_mm(&dev_priv->bdev, TTM_PL_VRAM, 729 - (dev_priv->vram_size >> PAGE_SHIFT)); 730 - if (unlikely(ret != 0)) { 731 - DRM_ERROR("Failed initializing memory manager for VRAM.\n"); 732 - goto out_err2; 733 - } 734 - 735 - dev_priv->has_gmr = true; 736 - if (((dev_priv->capabilities & (SVGA_CAP_GMR | SVGA_CAP_GMR2)) == 0) || 737 - refuse_dma || ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_GMR, 738 - VMW_PL_GMR) != 0) { 739 - DRM_INFO("No GMR memory available. " 740 - "Graphics memory resources are very limited.\n"); 741 - dev_priv->has_gmr = false; 742 - } 743 - 744 - if (dev_priv->capabilities & SVGA_CAP_GBOBJECTS) { 745 - dev_priv->has_mob = true; 746 - if (ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_MOB, 747 - VMW_PL_MOB) != 0) { 748 - DRM_INFO("No MOB memory available. " 749 - "3D will be disabled.\n"); 750 - dev_priv->has_mob = false; 751 - } 752 - } 753 - 754 728 dev_priv->mmio_mtrr = arch_phys_wc_add(dev_priv->mmio_start, 755 729 dev_priv->mmio_size); 756 730 ··· 787 813 goto out_no_fman; 788 814 } 789 815 816 + 817 + ret = ttm_bo_init_mm(&dev_priv->bdev, TTM_PL_VRAM, 818 + (dev_priv->vram_size >> PAGE_SHIFT)); 819 + if (unlikely(ret != 0)) { 820 + DRM_ERROR("Failed initializing memory manager for VRAM.\n"); 821 + goto out_no_vram; 822 + } 823 + 824 + dev_priv->has_gmr = true; 825 + if (((dev_priv->capabilities & (SVGA_CAP_GMR | SVGA_CAP_GMR2)) == 0) || 826 + refuse_dma || ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_GMR, 827 + VMW_PL_GMR) != 0) { 828 + DRM_INFO("No GMR memory available. " 829 + "Graphics memory resources are very limited.\n"); 830 + dev_priv->has_gmr = false; 831 + } 832 + 833 + if (dev_priv->capabilities & SVGA_CAP_GBOBJECTS) { 834 + dev_priv->has_mob = true; 835 + if (ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_MOB, 836 + VMW_PL_MOB) != 0) { 837 + DRM_INFO("No MOB memory available. " 838 + "3D will be disabled.\n");
839 + dev_priv->has_mob = false; 840 + } 841 + } 842 + 790 843 vmw_kms_save_vga(dev_priv); 791 844 792 845 /* Start kms and overlay systems, needs fifo. */ ··· 839 838 vmw_kms_close(dev_priv); 840 839 out_no_kms: 841 840 vmw_kms_restore_vga(dev_priv); 841 + if (dev_priv->has_mob) 842 + (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB); 843 + if (dev_priv->has_gmr) 844 + (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR); 845 + (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM); 846 + out_no_vram: 842 847 vmw_fence_manager_takedown(dev_priv->fman); 843 848 out_no_fman: 844 849 if (dev_priv->capabilities & SVGA_CAP_IRQMASK) ··· 860 853 iounmap(dev_priv->mmio_virt); 861 854 out_err3: 862 855 arch_phys_wc_del(dev_priv->mmio_mtrr); 863 - if (dev_priv->has_mob) 864 - (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB); 865 - if (dev_priv->has_gmr) 866 - (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR); 867 - (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM); 868 - out_err2: 869 856 (void)ttm_bo_device_release(&dev_priv->bdev); 870 857 out_err1: 871 858 vmw_ttm_global_release(dev_priv); ··· 888 887 } 889 888 vmw_kms_close(dev_priv); 890 889 vmw_overlay_close(dev_priv); 890 + 891 + if (dev_priv->has_mob) 892 + (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB); 893 + if (dev_priv->has_gmr) 894 + (void)ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR); 895 + (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM); 896 + 891 897 vmw_fence_manager_takedown(dev_priv->fman); 892 898 if (dev_priv->capabilities & SVGA_CAP_IRQMASK) 893 899 drm_irq_uninstall(dev_priv->dev); ··· 906 898 ttm_object_device_release(&dev_priv->tdev); 907 899 iounmap(dev_priv->mmio_virt); 908 900 arch_phys_wc_del(dev_priv->mmio_mtrr); 909 - if (dev_priv->has_mob) 910 - (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB); 911 - if (dev_priv->has_gmr) 912 - (void)ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR); 913 - (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM); 914 901 (void)ttm_bo_device_release(&dev_priv->bdev);
915 902 vmw_ttm_global_release(dev_priv); 916 903 ··· 1238 1235 { 1239 1236 struct drm_device *dev = pci_get_drvdata(pdev); 1240 1237 1238 + pci_disable_device(pdev); 1241 1239 drm_put_dev(dev); 1242 1240 }
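The vmwgfx_drv.c hunk moves the VRAM/GMR/MOB memory-manager setup later in vmw_driver_load() and reshuffles the error path so that each `goto` label undoes exactly what had been acquired before the failure, in reverse order of acquisition. The general unwind idiom, sketched with hypothetical `init_step()`/`fini_step()` helpers in place of the real ttm_bo_init_mm()/ttm_bo_clean_mm() pairs:

```c
#include <assert.h>
#include <stdbool.h>

static int steps_done;	/* tracks how many fake resources are live */

static bool init_step(bool ok) { if (ok) steps_done++; return ok; }
static void fini_step(void)   { steps_done--; }

/* Acquire A then B then C; on failure, fall through the unwind labels
 * so only what was actually acquired is released, newest first, the
 * same shape as vmw_driver_load()'s out_no_vram/out_no_fman/... chain. */
static int load_model(bool a_ok, bool b_ok, bool c_ok)
{
	if (!init_step(a_ok))
		goto out_no_a;
	if (!init_step(b_ok))
		goto out_no_b;
	if (!init_step(c_ok))
		goto out_no_c;
	return 0;
out_no_c:
	fini_step();	/* undo B */
out_no_b:
	fini_step();	/* undo A */
out_no_a:
	return -1;
}
```

Keeping the teardown order the mirror image of the setup order is what lets vmw_driver_unload() reuse the same sequence, which is why the clean_mm calls moved there as well.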
+9 -9
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
··· 890 890 ret = vmw_user_dmabuf_lookup(sw_context->fp->tfile, handle, &vmw_bo); 891 891 if (unlikely(ret != 0)) { 892 892 DRM_ERROR("Could not find or use MOB buffer.\n"); 893 - return -EINVAL; 893 + ret = -EINVAL; 894 + goto out_no_reloc; 894 895 } 895 896 bo = &vmw_bo->base; 896 897 ··· 915 914 916 915 out_no_reloc: 917 916 vmw_dmabuf_unreference(&vmw_bo); 918 - vmw_bo_p = NULL; 917 + *vmw_bo_p = NULL; 919 918 return ret; 920 919 } 921 920 ··· 952 951 ret = vmw_user_dmabuf_lookup(sw_context->fp->tfile, handle, &vmw_bo); 953 952 if (unlikely(ret != 0)) { 954 953 DRM_ERROR("Could not find or use GMR region.\n"); 955 - return -EINVAL; 954 + ret = -EINVAL; 955 + goto out_no_reloc; 956 956 } 957 957 bo = &vmw_bo->base; 958 958 ··· 976 974 977 975 out_no_reloc: 978 976 vmw_dmabuf_unreference(&vmw_bo); 979 - vmw_bo_p = NULL; 977 + *vmw_bo_p = NULL; 980 978 return ret; 981 979 } 982 980 ··· 2782 2780 NULL, arg->command_size, arg->throttle_us, 2783 2781 (void __user *)(unsigned long)arg->fence_rep, 2784 2782 NULL); 2785 - 2783 + ttm_read_unlock(&dev_priv->reservation_sem); 2786 2784 if (unlikely(ret != 0)) 2787 - goto out_unlock; 2785 + return ret; 2788 2786 2789 2787 vmw_kms_cursor_post_execbuf(dev_priv); 2790 2788 2791 - out_unlock: 2792 - ttm_read_unlock(&dev_priv->reservation_sem); 2793 - return ret; 2789 + return 0; 2794 2790 }
+3 -11
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 2033 2033 int i; 2034 2034 struct drm_mode_config *mode_config = &dev->mode_config; 2035 2035 2036 - ret = ttm_read_lock(&dev_priv->reservation_sem, true); 2037 - if (unlikely(ret != 0)) 2038 - return ret; 2039 - 2040 2036 if (!arg->num_outputs) { 2041 2037 struct drm_vmw_rect def_rect = {0, 0, 800, 600}; 2042 2038 vmw_du_update_layout(dev_priv, 1, &def_rect); 2043 - goto out_unlock; 2039 + return 0; 2044 2040 } 2045 2041 2046 2042 rects_size = arg->num_outputs * sizeof(struct drm_vmw_rect); 2047 2043 rects = kcalloc(arg->num_outputs, sizeof(struct drm_vmw_rect), 2048 2044 GFP_KERNEL); 2049 - if (unlikely(!rects)) { 2050 - ret = -ENOMEM; 2051 - goto out_unlock; 2052 - } 2045 + if (unlikely(!rects)) 2046 + return -ENOMEM; 2053 2047 2054 2048 user_rects = (void __user *)(unsigned long)arg->rects; 2055 2049 ret = copy_from_user(rects, user_rects, rects_size); ··· 2068 2074 2069 2075 out_free: 2070 2076 kfree(rects); 2071 - out_unlock: 2072 - ttm_read_unlock(&dev_priv->reservation_sem); 2073 2077 return ret; 2074 2078 }
+2
drivers/gpu/ipu-v3/ipu-di.c
··· 459 459 460 460 clkrate = clk_get_rate(di->clk_ipu); 461 461 div = DIV_ROUND_CLOSEST(clkrate, sig->mode.pixelclock); 462 + if (div == 0) 463 + div = 1; 462 464 rate = clkrate / div; 463 465 464 466 error = rate / (sig->mode.pixelclock / 1000);
+1
drivers/hid/hid-core.c
··· 1959 1959 { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb65a) }, 1960 1960 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_BT) }, 1961 1961 { HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE) }, 1962 + { HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_PRO) }, 1962 1963 { HID_USB_DEVICE(USB_VENDOR_ID_TOPSEED, USB_DEVICE_ID_TOPSEED_CYBERLINK) }, 1963 1964 { HID_USB_DEVICE(USB_VENDOR_ID_TOPSEED2, USB_DEVICE_ID_TOPSEED2_RF_COMBO) }, 1964 1965 { HID_USB_DEVICE(USB_VENDOR_ID_TWINHAN, USB_DEVICE_ID_TWINHAN_IR_REMOTE) },
+2
drivers/hid/hid-ids.h
··· 586 586 #define USB_VENDOR_ID_LOGITECH 0x046d 587 587 #define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e 588 588 #define USB_DEVICE_ID_LOGITECH_T651 0xb00c 589 + #define USB_DEVICE_ID_LOGITECH_C077 0xc007 589 590 #define USB_DEVICE_ID_LOGITECH_RECEIVER 0xc101 590 591 #define USB_DEVICE_ID_LOGITECH_HARMONY_FIRST 0xc110 591 592 #define USB_DEVICE_ID_LOGITECH_HARMONY_LAST 0xc14f ··· 899 898 #define USB_VENDOR_ID_TIVO 0x150a 900 899 #define USB_DEVICE_ID_TIVO_SLIDE_BT 0x1200 901 900 #define USB_DEVICE_ID_TIVO_SLIDE 0x1201 901 + #define USB_DEVICE_ID_TIVO_SLIDE_PRO 0x1203 902 902 903 903 #define USB_VENDOR_ID_TOPSEED 0x0766 904 904 #define USB_DEVICE_ID_TOPSEED_CYBERLINK 0x0204
+1
drivers/hid/hid-tivo.c
··· 64 64 /* TiVo Slide Bluetooth remote, pairs with a Broadcom dongle */ 65 65 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_BT) }, 66 66 { HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE) }, 67 + { HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_PRO) }, 67 68 { } 68 69 }; 69 70 MODULE_DEVICE_TABLE(hid, tivo_devices);
+1
drivers/hid/usbhid/hid-quirks.c
··· 78 78 { USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET }, 79 79 { USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS }, 80 80 { USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET }, 81 + { USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077, HID_QUIRK_ALWAYS_POLL }, 81 82 { USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET }, 82 83 { USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3, HID_QUIRK_NO_INIT_REPORTS }, 83 84 { USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3_JP, HID_QUIRK_NO_INIT_REPORTS },
+51 -33
drivers/hid/wacom_wac.c
··· 551 551 (features->type == CINTIQ && !(data[1] & 0x40))) 552 552 return 1; 553 553 554 - if (features->quirks & WACOM_QUIRK_MULTI_INPUT) 554 + if (wacom->shared) { 555 555 wacom->shared->stylus_in_proximity = true; 556 + 557 + if (wacom->shared->touch_down) 558 + return 1; 559 + } 556 560 557 561 /* in Range while exiting */ 558 562 if (((data[1] & 0xfe) == 0x20) && wacom->reporting_data) { ··· 1047 1043 struct input_dev *input = wacom->input; 1048 1044 unsigned char *data = wacom->data; 1049 1045 int i; 1050 - int current_num_contacts = 0; 1046 + int current_num_contacts = data[61]; 1051 1047 int contacts_to_send = 0; 1052 1048 int num_contacts_left = 4; /* maximum contacts per packet */ 1053 1049 int byte_per_packet = WACOM_BYTES_PER_24HDT_PACKET; 1054 1050 int y_offset = 2; 1051 + static int contact_with_no_pen_down_count = 0; 1055 1052 1056 1053 if (wacom->features.type == WACOM_27QHDT) { 1057 1054 current_num_contacts = data[63]; 1058 1055 num_contacts_left = 10; 1059 1056 byte_per_packet = WACOM_BYTES_PER_QHDTHID_PACKET; 1060 1057 y_offset = 0; 1061 - } else { 1062 - current_num_contacts = data[61]; 1063 1058 } 1064 1059 1065 1060 /* 1066 1061 * First packet resets the counter since only the first 1067 1062 * packet in series will have non-zero current_num_contacts. 
1068 1063 */ 1069 - if (current_num_contacts) 1064 + if (current_num_contacts) { 1070 1065 wacom->num_contacts_left = current_num_contacts; 1066 + contact_with_no_pen_down_count = 0; 1067 + } 1071 1068 1072 1069 contacts_to_send = min(num_contacts_left, wacom->num_contacts_left); 1073 1070 ··· 1101 1096 input_report_abs(input, ABS_MT_WIDTH_MINOR, min(w, h)); 1102 1097 input_report_abs(input, ABS_MT_ORIENTATION, w > h); 1103 1098 } 1099 + contact_with_no_pen_down_count++; 1104 1100 } 1105 1101 } 1106 1102 input_mt_report_pointer_emulation(input, true); 1107 1103 1108 1104 wacom->num_contacts_left -= contacts_to_send; 1109 - if (wacom->num_contacts_left <= 0) 1105 + if (wacom->num_contacts_left <= 0) { 1110 1106 wacom->num_contacts_left = 0; 1111 - 1112 - wacom->shared->touch_down = (wacom->num_contacts_left > 0); 1107 + wacom->shared->touch_down = (contact_with_no_pen_down_count > 0); 1108 + } 1113 1109 return 1; 1114 1110 } 1115 1111 ··· 1122 1116 int current_num_contacts = data[2]; 1123 1117 int contacts_to_send = 0; 1124 1118 int x_offset = 0; 1119 + static int contact_with_no_pen_down_count = 0; 1125 1120 1126 1121 /* MTTPC does not support Height and Width */ 1127 1122 if (wacom->features.type == MTTPC || wacom->features.type == MTTPC_B) ··· 1132 1125 * First packet resets the counter since only the first 1133 1126 * packet in series will have non-zero current_num_contacts. 
1134 1127 */ 1135 - if (current_num_contacts) 1128 + if (current_num_contacts) { 1136 1129 wacom->num_contacts_left = current_num_contacts; 1130 + contact_with_no_pen_down_count = 0; 1131 + } 1137 1132 1138 1133 /* There are at most 5 contacts per packet */ 1139 1134 contacts_to_send = min(5, wacom->num_contacts_left); ··· 1156 1147 int y = get_unaligned_le16(&data[offset + x_offset + 9]); 1157 1148 input_report_abs(input, ABS_MT_POSITION_X, x); 1158 1149 input_report_abs(input, ABS_MT_POSITION_Y, y); 1150 + contact_with_no_pen_down_count++; 1159 1151 } 1160 1152 } 1161 1153 input_mt_report_pointer_emulation(input, true); 1162 1154 1163 1155 wacom->num_contacts_left -= contacts_to_send; 1164 - if (wacom->num_contacts_left < 0) 1156 + if (wacom->num_contacts_left <= 0) { 1165 1157 wacom->num_contacts_left = 0; 1166 - 1167 - wacom->shared->touch_down = (wacom->num_contacts_left > 0); 1158 + wacom->shared->touch_down = (contact_with_no_pen_down_count > 0); 1159 + } 1168 1160 return 1; 1169 1161 } 1170 1162 ··· 1203 1193 { 1204 1194 unsigned char *data = wacom->data; 1205 1195 struct input_dev *input = wacom->input; 1206 - bool prox; 1196 + bool prox = !wacom->shared->stylus_in_proximity; 1207 1197 int x = 0, y = 0; 1208 1198 1209 1199 if (wacom->features.touch_max > 1 || len > WACOM_PKGLEN_TPC2FG) 1210 1200 return 0; 1211 1201 1212 - if (!wacom->shared->stylus_in_proximity) { 1213 - if (len == WACOM_PKGLEN_TPC1FG) { 1214 - prox = data[0] & 0x01; 1215 - x = get_unaligned_le16(&data[1]); 1216 - y = get_unaligned_le16(&data[3]); 1217 - } else if (len == WACOM_PKGLEN_TPC1FG_B) { 1218 - prox = data[2] & 0x01; 1219 - x = get_unaligned_le16(&data[3]); 1220 - y = get_unaligned_le16(&data[5]); 1221 - } else { 1222 - prox = data[1] & 0x01; 1223 - x = le16_to_cpup((__le16 *)&data[2]); 1224 - y = le16_to_cpup((__le16 *)&data[4]); 1225 - } 1226 - } else 1227 - /* force touch out when pen is in prox */ 1228 - prox = 0; 1202 + if (len == WACOM_PKGLEN_TPC1FG) { 1203 + prox = prox && (data[0] & 0x01);
1204 + x = get_unaligned_le16(&data[1]); 1205 + y = get_unaligned_le16(&data[3]); 1206 + } else if (len == WACOM_PKGLEN_TPC1FG_B) { 1207 + prox = prox && (data[2] & 0x01); 1208 + x = get_unaligned_le16(&data[3]); 1209 + y = get_unaligned_le16(&data[5]); 1210 + } else { 1211 + prox = prox && (data[1] & 0x01); 1212 + x = le16_to_cpup((__le16 *)&data[2]); 1213 + y = le16_to_cpup((__le16 *)&data[4]); 1214 + } 1229 1215 1230 1216 if (prox) { 1231 1217 input_report_abs(input, ABS_X, x); ··· 1619 1613 struct input_dev *pad_input = wacom->pad_input; 1620 1614 unsigned char *data = wacom->data; 1621 1615 int i; 1616 + int contact_with_no_pen_down_count = 0; 1622 1617 1623 1618 if (data[0] != 0x02) 1624 1619 return 0; ··· 1647 1640 } 1648 1641 input_report_abs(input, ABS_MT_POSITION_X, x); 1649 1642 input_report_abs(input, ABS_MT_POSITION_Y, y); 1643 + contact_with_no_pen_down_count++; 1650 1644 } 1651 1645 } 1652 1646 ··· 1657 1649 input_report_key(pad_input, BTN_FORWARD, (data[1] & 0x04) != 0); 1658 1650 input_report_key(pad_input, BTN_BACK, (data[1] & 0x02) != 0); 1659 1651 input_report_key(pad_input, BTN_RIGHT, (data[1] & 0x01) != 0); 1652 + wacom->shared->touch_down = (contact_with_no_pen_down_count > 0); 1660 1653 1661 1654 return 1; 1662 1655 } 1663 1656 1664 - static void wacom_bpt3_touch_msg(struct wacom_wac *wacom, unsigned char *data) 1657 + static int wacom_bpt3_touch_msg(struct wacom_wac *wacom, unsigned char *data, int last_touch_count) 1665 1658 { 1666 1659 struct wacom_features *features = &wacom->features; 1667 1660 struct input_dev *input = wacom->input; ··· 1670 1661 int slot = input_mt_get_slot_by_key(input, data[0]); 1671 1662 1672 1663 if (slot < 0) 1673 - return; 1664 + return 0; 1674 1665 1675 1666 touch = touch && !wacom->shared->stylus_in_proximity; 1676 1667 ··· 1702 1693 input_report_abs(input, ABS_MT_POSITION_Y, y); 1703 1694 input_report_abs(input, ABS_MT_TOUCH_MAJOR, width); 1704 1695 input_report_abs(input, ABS_MT_TOUCH_MINOR, height);
1696 + last_touch_count++; 1705 1697 } 1698 + return last_touch_count; 1706 1699 } 1707 1700 1708 1701 static void wacom_bpt3_button_msg(struct wacom_wac *wacom, unsigned char *data) ··· 1729 1718 unsigned char *data = wacom->data; 1730 1719 int count = data[1] & 0x07; 1731 1720 int i; 1721 + int contact_with_no_pen_down_count = 0; 1732 1722 1733 1723 if (data[0] != 0x02) 1734 1724 return 0; ··· 1740 1728 int msg_id = data[offset]; 1741 1729 1742 1730 if (msg_id >= 2 && msg_id <= 17) 1743 - wacom_bpt3_touch_msg(wacom, data + offset); 1731 + contact_with_no_pen_down_count = 1732 + wacom_bpt3_touch_msg(wacom, data + offset, 1733 + contact_with_no_pen_down_count); 1744 1734 else if (msg_id == 128) 1745 1735 wacom_bpt3_button_msg(wacom, data + offset); 1746 1736 1747 1737 } 1748 1738 input_mt_report_pointer_emulation(input, true); 1739 + wacom->shared->touch_down = (contact_with_no_pen_down_count > 0); 1749 1740 1750 1741 return 1; 1751 1742 } ··· 1773 1758 } 1774 1759 return 0; 1775 1760 } 1761 + 1762 + if (wacom->shared->touch_down) 1763 + return 0; 1776 1764 1777 1765 prox = (data[1] & 0x20) == 0x20; 1778 1766
+21 -19
drivers/i2c/busses/i2c-designware-baytrail.c
··· 17 17 #include <linux/acpi.h> 18 18 #include <linux/i2c.h> 19 19 #include <linux/interrupt.h> 20 + 20 21 #include <asm/iosf_mbi.h> 22 + 21 23 #include "i2c-designware-core.h" 22 24 23 25 #define SEMAPHORE_TIMEOUT 100 24 26 #define PUNIT_SEMAPHORE 0x7 27 + #define PUNIT_SEMAPHORE_BIT BIT(0) 28 + #define PUNIT_SEMAPHORE_ACQUIRE BIT(1) 25 29 26 30 static unsigned long acquired; 27 31 28 32 static int get_sem(struct device *dev, u32 *sem) 29 33 { 30 - u32 reg_val; 34 + u32 data; 31 35 int ret; 32 36 33 37 ret = iosf_mbi_read(BT_MBI_UNIT_PMC, BT_MBI_BUNIT_READ, PUNIT_SEMAPHORE, 34 - &reg_val); 38 + &data); 35 39 if (ret) { 36 40 dev_err(dev, "iosf failed to read punit semaphore\n"); 37 41 return ret; 38 42 } 39 43 40 - *sem = reg_val & 0x1; 44 + *sem = data & PUNIT_SEMAPHORE_BIT; 41 45 42 46 return 0; 43 47 } ··· 56 52 return; 57 53 } 58 54 59 - data = data & 0xfffffffe; 55 + data &= ~PUNIT_SEMAPHORE_BIT; 60 56 if (iosf_mbi_write(BT_MBI_UNIT_PMC, BT_MBI_BUNIT_WRITE, 61 - PUNIT_SEMAPHORE, data)) 57 + PUNIT_SEMAPHORE, data)) 62 58 dev_err(dev, "iosf failed to reset punit semaphore during write\n"); 63 59 } 64 60 65 - int baytrail_i2c_acquire(struct dw_i2c_dev *dev) 61 + static int baytrail_i2c_acquire(struct dw_i2c_dev *dev) 66 62 { 67 - u32 sem = 0; 63 + u32 sem; 68 64 int ret; 69 65 unsigned long start, end; 66 + 67 + might_sleep(); 70 68 71 69 if (!dev || !dev->dev) 72 70 return -ENODEV; 73 71 74 - if (!dev->acquire_lock) 72 + if (!dev->release_lock) 75 73 return 0; 76 74 77 - /* host driver writes 0x2 to side band semaphore register */ 75 + /* host driver writes to side band semaphore register */ 78 76 ret = iosf_mbi_write(BT_MBI_UNIT_PMC, BT_MBI_BUNIT_WRITE, 79 - PUNIT_SEMAPHORE, 0x2); 77 + PUNIT_SEMAPHORE, PUNIT_SEMAPHORE_ACQUIRE); 80 78 if (ret) { 81 79 dev_err(dev->dev, "iosf punit semaphore request failed\n"); 82 80 return ret; ··· 87 81 /* host driver waits for bit 0 to be set in semaphore register */ 88 82 start = jiffies; 89 83 end = start + msecs_to_jiffies(SEMAPHORE_TIMEOUT);
90 - while (!time_after(jiffies, end)) { 84 + do { 91 85 ret = get_sem(dev->dev, &sem); 92 86 if (!ret && sem) { 93 87 acquired = jiffies; ··· 97 91 } 98 92 99 93 usleep_range(1000, 2000); 100 - } 94 + } while (time_before(jiffies, end)); 101 95 102 96 dev_err(dev->dev, "punit semaphore timed out, resetting\n"); 103 97 reset_semaphore(dev->dev); 104 98 105 99 ret = iosf_mbi_read(BT_MBI_UNIT_PMC, BT_MBI_BUNIT_READ, 106 - PUNIT_SEMAPHORE, &sem); 107 - if (!ret) 100 + PUNIT_SEMAPHORE, &sem); 101 + if (ret) 108 102 dev_err(dev->dev, "iosf failed to read punit semaphore\n"); 109 103 else 110 104 dev_err(dev->dev, "PUNIT SEM: %d\n", sem); ··· 113 107 114 108 return -ETIMEDOUT; 115 109 } 116 - EXPORT_SYMBOL(baytrail_i2c_acquire); 117 110 118 - void baytrail_i2c_release(struct dw_i2c_dev *dev) 111 + static void baytrail_i2c_release(struct dw_i2c_dev *dev) 119 112 { 120 113 if (!dev || !dev->dev) 121 114 return; ··· 126 121 dev_dbg(dev->dev, "punit semaphore held for %ums\n", 127 122 jiffies_to_msecs(jiffies - acquired)); 128 123 } 129 - EXPORT_SYMBOL(baytrail_i2c_release); 130 124 131 125 int i2c_dw_eval_lock_support(struct dw_i2c_dev *dev) 132 126 { ··· 141 137 return 0; 142 138 143 139 status = acpi_evaluate_integer(handle, "_SEM", NULL, &shared_host); 144 - 145 140 if (ACPI_FAILURE(status)) 146 141 return 0; ··· 156 153 157 154 return 0; 158 155 } 159 - EXPORT_SYMBOL(i2c_dw_eval_lock_support); 160 156 161 157 MODULE_AUTHOR("David E. Box <david.e.box@linux.intel.com>"); 162 158 MODULE_DESCRIPTION("Baytrail I2C Semaphore driver");
-3
drivers/i2c/i2c-core.c
··· 679 679 status = driver->remove(client); 680 680 } 681 681 682 - if (dev->of_node) 683 - irq_dispose_mapping(client->irq); 684 - 685 682 dev_pm_domain_detach(&client->dev, true); 686 683 return status; 687 684 }
+2 -2
drivers/ide/ide-tape.c
··· 1793 1793 tape->best_dsc_rw_freq = clamp_t(unsigned long, t, IDETAPE_DSC_RW_MIN, 1794 1794 IDETAPE_DSC_RW_MAX); 1795 1795 printk(KERN_INFO "ide-tape: %s <-> %s: %dKBps, %d*%dkB buffer, " 1796 - "%lums tDSC%s\n", 1796 + "%ums tDSC%s\n", 1797 1797 drive->name, tape->name, *(u16 *)&tape->caps[14], 1798 1798 (*(u16 *)&tape->caps[16] * 512) / tape->buffer_size, 1799 1799 tape->buffer_size / 1024, 1800 - tape->best_dsc_rw_freq * 1000 / HZ, 1800 + jiffies_to_msecs(tape->best_dsc_rw_freq), 1801 1801 (drive->dev_flags & IDE_DFLAG_USING_DMA) ? ", DMA" : ""); 1802 1802 1803 1803 ide_proc_register_driver(drive, tape->driver);
+1 -1
drivers/iio/accel/bma180.c
··· 659 659 660 660 mutex_lock(&data->mutex); 661 661 662 - for_each_set_bit(bit, indio_dev->buffer->scan_mask, 662 + for_each_set_bit(bit, indio_dev->active_scan_mask, 663 663 indio_dev->masklength) { 664 664 ret = bma180_get_data_reg(data, bit); 665 665 if (ret < 0) {
+10 -10
drivers/iio/accel/bmc150-accel.c
··· 168 168 int val; 169 169 int val2; 170 170 u8 bw_bits; 171 - } bmc150_accel_samp_freq_table[] = { {7, 810000, 0x08}, 172 - {15, 630000, 0x09}, 173 - {31, 250000, 0x0A}, 174 - {62, 500000, 0x0B}, 175 - {125, 0, 0x0C}, 176 - {250, 0, 0x0D}, 177 - {500, 0, 0x0E}, 178 - {1000, 0, 0x0F} }; 171 + } bmc150_accel_samp_freq_table[] = { {15, 620000, 0x08}, 172 + {31, 260000, 0x09}, 173 + {62, 500000, 0x0A}, 174 + {125, 0, 0x0B}, 175 + {250, 0, 0x0C}, 176 + {500, 0, 0x0D}, 177 + {1000, 0, 0x0E}, 178 + {2000, 0, 0x0F} }; 179 179 180 180 static const struct { 181 181 int bw_bits; ··· 840 840 } 841 841 842 842 static IIO_CONST_ATTR_SAMP_FREQ_AVAIL( 843 - "7.810000 15.630000 31.250000 62.500000 125 250 500 1000"); 843 + "15.620000 31.260000 62.50000 125 250 500 1000 2000"); 844 844 845 845 static struct attribute *bmc150_accel_attributes[] = { 846 846 &iio_const_attr_sampling_frequency_available.dev_attr.attr, ··· 986 986 int bit, ret, i = 0; 987 987 988 988 mutex_lock(&data->mutex); 989 - for_each_set_bit(bit, indio_dev->buffer->scan_mask, 989 + for_each_set_bit(bit, indio_dev->active_scan_mask, 990 990 indio_dev->masklength) { 991 991 ret = i2c_smbus_read_word_data(data->client, 992 992 BMC150_ACCEL_AXIS_TO_REG(bit));
+1 -1
drivers/iio/accel/kxcjk-1013.c
··· 956 956 957 957 mutex_lock(&data->mutex); 958 958 959 - for_each_set_bit(bit, indio_dev->buffer->scan_mask, 959 + for_each_set_bit(bit, indio_dev->active_scan_mask, 960 960 indio_dev->masklength) { 961 961 ret = kxcjk1013_get_acc_reg(data, bit); 962 962 if (ret < 0) {
+2 -1
drivers/iio/adc/Kconfig
··· 137 137 138 138 config CC10001_ADC 139 139 tristate "Cosmic Circuits 10001 ADC driver" 140 - depends on HAS_IOMEM || HAVE_CLK || REGULATOR 140 + depends on HAVE_CLK || REGULATOR 141 + depends on HAS_IOMEM 141 142 select IIO_BUFFER 142 143 select IIO_TRIGGERED_BUFFER 143 144 help
+2 -3
drivers/iio/adc/at91_adc.c
··· 544 544 { 545 545 struct iio_dev *idev = iio_trigger_get_drvdata(trig); 546 546 struct at91_adc_state *st = iio_priv(idev); 547 - struct iio_buffer *buffer = idev->buffer; 548 547 struct at91_adc_reg_desc *reg = st->registers; 549 548 u32 status = at91_adc_readl(st, reg->trigger_register); 550 549 int value; ··· 563 564 at91_adc_writel(st, reg->trigger_register, 564 565 status | value); 565 566 566 - for_each_set_bit(bit, buffer->scan_mask, 567 + for_each_set_bit(bit, idev->active_scan_mask, 567 568 st->num_channels) { 568 569 struct iio_chan_spec const *chan = idev->channels + bit; 569 570 at91_adc_writel(st, AT91_ADC_CHER, ··· 578 579 at91_adc_writel(st, reg->trigger_register, 579 580 status & ~value); 580 581 581 - for_each_set_bit(bit, buffer->scan_mask, 582 + for_each_set_bit(bit, idev->active_scan_mask, 582 583 st->num_channels) { 583 584 struct iio_chan_spec const *chan = idev->channels + bit; 584 585 at91_adc_writel(st, AT91_ADC_CHDR,
+4 -13
drivers/iio/adc/mcp3422.c
··· 58 58 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SAMP_FREQ), \ 59 59 } 60 60 61 - /* LSB is in nV to eliminate floating point */ 62 - static const u32 rates_to_lsb[] = {1000000, 250000, 62500, 15625}; 63 - 64 - /* 65 - * scales calculated as: 66 - * rates_to_lsb[sample_rate] / (1 << pga); 67 - * pga is 1 for 0, 2 68 - */ 69 - 70 61 static const int mcp3422_scales[4][4] = { 71 - { 1000000, 250000, 62500, 15625 }, 72 - { 500000 , 125000, 31250, 7812 }, 73 - { 250000 , 62500 , 15625, 3906 }, 74 - { 125000 , 31250 , 7812 , 1953 } }; 62 + { 1000000, 500000, 250000, 125000 }, 63 + { 250000 , 125000, 62500 , 31250 }, 64 + { 62500 , 31250 , 15625 , 7812 }, 65 + { 15625 , 7812 , 3906 , 1953 } }; 75 66 76 67 /* Constant msleep times for data acquisitions */ 77 68 static const int mcp3422_read_times[4] = {
+2 -1
drivers/iio/adc/qcom-spmi-iadc.c
··· 296 296 if (iadc->poll_eoc) { 297 297 ret = iadc_poll_wait_eoc(iadc, wait); 298 298 } else { 299 - ret = wait_for_completion_timeout(&iadc->complete, wait); 299 + ret = wait_for_completion_timeout(&iadc->complete, 300 + usecs_to_jiffies(wait)); 300 301 if (!ret) 301 302 ret = -ETIMEDOUT; 302 303 else
+1 -2
drivers/iio/adc/ti_am335x_adc.c
··· 188 188 static int tiadc_buffer_postenable(struct iio_dev *indio_dev) 189 189 { 190 190 struct tiadc_device *adc_dev = iio_priv(indio_dev); 191 - struct iio_buffer *buffer = indio_dev->buffer; 192 191 unsigned int enb = 0; 193 192 u8 bit; 194 193 195 194 tiadc_step_config(indio_dev); 196 - for_each_set_bit(bit, buffer->scan_mask, adc_dev->channels) 195 + for_each_set_bit(bit, indio_dev->active_scan_mask, adc_dev->channels) 197 196 enb |= (get_adc_step_bit(adc_dev, bit) << 1); 198 197 adc_dev->buffer_en_ch_steps = enb; 199 198
+61 -30
drivers/iio/adc/vf610_adc.c
··· 141 141 struct regulator *vref; 142 142 struct vf610_adc_feature adc_feature; 143 143 144 + u32 sample_freq_avail[5]; 145 + 144 146 struct completion completion; 145 147 }; 148 + 149 + static const u32 vf610_hw_avgs[] = { 1, 4, 8, 16, 32 }; 146 150 147 151 #define VF610_ADC_CHAN(_idx, _chan_type) { \ 148 152 .type = (_chan_type), \ ··· 184 180 /* sentinel */ 185 181 }; 186 182 187 - /* 188 - * ADC sample frequency, unit is ADCK cycles. 189 - * ADC clk source is ipg clock, which is the same as bus clock. 190 - * 191 - * ADC conversion time = SFCAdder + AverageNum x (BCT + LSTAdder) 192 - * SFCAdder: fixed to 6 ADCK cycles 193 - * AverageNum: 1, 4, 8, 16, 32 samples for hardware average. 194 - * BCT (Base Conversion Time): fixed to 25 ADCK cycles for 12 bit mode 195 - * LSTAdder(Long Sample Time): fixed to 3 ADCK cycles 196 - * 197 - * By default, enable 12 bit resolution mode, clock source 198 - * set to ipg clock, So get below frequency group: 199 - */ 200 - static const u32 vf610_sample_freq_avail[5] = 201 - {1941176, 559332, 286957, 145374, 73171}; 183 + static inline void vf610_adc_calculate_rates(struct vf610_adc *info) 184 + { 185 + unsigned long adck_rate, ipg_rate = clk_get_rate(info->clk); 186 + int i; 187 + 188 + /* 189 + * Calculate ADC sample frequencies 190 + * Sample time unit is ADCK cycles. ADCK clk source is ipg clock, 191 + * which is the same as bus clock. 192 + * 193 + * ADC conversion time = SFCAdder + AverageNum x (BCT + LSTAdder) 194 + * SFCAdder: fixed to 6 ADCK cycles 195 + * AverageNum: 1, 4, 8, 16, 32 samples for hardware average. 
196 + * BCT (Base Conversion Time): fixed to 25 ADCK cycles for 12 bit mode 197 + * LSTAdder(Long Sample Time): fixed to 3 ADCK cycles 198 + */ 199 + adck_rate = ipg_rate / info->adc_feature.clk_div; 200 + for (i = 0; i < ARRAY_SIZE(vf610_hw_avgs); i++) 201 + info->sample_freq_avail[i] = 202 + adck_rate / (6 + vf610_hw_avgs[i] * (25 + 3)); 203 + } 202 204 203 205 static inline void vf610_adc_cfg_init(struct vf610_adc *info) 204 206 { 207 + struct vf610_adc_feature *adc_feature = &info->adc_feature; 208 + 205 209 /* set default Configuration for ADC controller */ 206 - info->adc_feature.clk_sel = VF610_ADCIOC_BUSCLK_SET; 207 - info->adc_feature.vol_ref = VF610_ADCIOC_VR_VREF_SET; 210 + adc_feature->clk_sel = VF610_ADCIOC_BUSCLK_SET; 211 + adc_feature->vol_ref = VF610_ADCIOC_VR_VREF_SET; 208 212 209 - info->adc_feature.calibration = true; 210 - info->adc_feature.ovwren = true; 213 + adc_feature->calibration = true; 214 + adc_feature->ovwren = true; 211 215 212 - info->adc_feature.clk_div = 1; 213 - info->adc_feature.res_mode = 12; 214 - info->adc_feature.sample_rate = 1; 215 - info->adc_feature.lpm = true; 216 + adc_feature->res_mode = 12; 217 + adc_feature->sample_rate = 1; 218 + adc_feature->lpm = true; 219 + 220 + /* Use a save ADCK which is below 20MHz on all devices */ 221 + adc_feature->clk_div = 8; 222 + 223 + vf610_adc_calculate_rates(info); 216 224 } 217 225 218 226 static void vf610_adc_cfg_post_set(struct vf610_adc *info) ··· 306 290 307 291 cfg_data = readl(info->regs + VF610_REG_ADC_CFG); 308 292 309 - /* low power configuration */ 310 293 cfg_data &= ~VF610_ADC_ADLPC_EN; 311 294 if (adc_feature->lpm) 312 295 cfg_data |= VF610_ADC_ADLPC_EN; 313 296 314 - /* disable high speed */ 315 297 cfg_data &= ~VF610_ADC_ADHSC_EN; 316 298 317 299 writel(cfg_data, info->regs + VF610_REG_ADC_CFG); ··· 449 435 return IRQ_HANDLED; 450 436 } 451 437 452 - static IIO_CONST_ATTR_SAMP_FREQ_AVAIL("1941176, 559332, 286957, 145374, 73171"); 438 + static ssize_t vf610_show_samp_freq_avail(struct device *dev, 439 + struct device_attribute *attr, char *buf) 440 + { 441 + struct vf610_adc *info = iio_priv(dev_to_iio_dev(dev)); 442 + size_t len = 0; 443 + int i; 444 + 445 + for (i = 0; i < ARRAY_SIZE(info->sample_freq_avail); i++) 446 + len += scnprintf(buf + len, PAGE_SIZE - len, 447 + "%u ", info->sample_freq_avail[i]); 448 + 449 + /* replace trailing space by newline */ 450 + buf[len - 1] = '\n'; 451 + 452 + return len; 453 + } 454 + 455 + static IIO_DEV_ATTR_SAMP_FREQ_AVAIL(vf610_show_samp_freq_avail); 453 456 454 457 static struct attribute *vf610_attributes[] = { 455 - &iio_const_attr_sampling_frequency_available.dev_attr.attr, 458 + &iio_dev_attr_sampling_frequency_available.dev_attr.attr, 456 459 NULL 457 460 }; ··· 533 502 return IIO_VAL_FRACTIONAL_LOG2; 534 503 535 504 case IIO_CHAN_INFO_SAMP_FREQ: 536 - *val = vf610_sample_freq_avail[info->adc_feature.sample_rate]; 505 + *val = info->sample_freq_avail[info->adc_feature.sample_rate]; 537 506 *val2 = 0; 538 507 return IIO_VAL_INT; 539 508 ··· 556 525 switch (mask) { 557 526 case IIO_CHAN_INFO_SAMP_FREQ: 558 527 for (i = 0; 559 - i < ARRAY_SIZE(vf610_sample_freq_avail); 528 + i < ARRAY_SIZE(info->sample_freq_avail); 560 529 i++) 561 - if (val == vf610_sample_freq_avail[i]) { 530 + if (val == info->sample_freq_avail[i]) { 562 531 info->adc_feature.sample_rate = i; 563 532 vf610_adc_sample_set(info); 564 533 return 0;
+2
drivers/iio/common/ssp_sensors/ssp_dev.c
··· 640 640 return 0; 641 641 } 642 642 643 + #ifdef CONFIG_PM_SLEEP 643 644 static int ssp_suspend(struct device *dev) 644 645 { 645 646 int ret; ··· 689 688 690 689 return 0; 691 690 } 691 + #endif /* CONFIG_PM_SLEEP */ 692 692 693 693 static const struct dev_pm_ops ssp_pm_ops = { 694 694 SET_SYSTEM_SLEEP_PM_OPS(ssp_suspend, ssp_resume)
+1 -1
drivers/iio/dac/ad5686.c
··· 322 322 st = iio_priv(indio_dev); 323 323 spi_set_drvdata(spi, indio_dev); 324 324 325 - st->reg = devm_regulator_get(&spi->dev, "vcc"); 325 + st->reg = devm_regulator_get_optional(&spi->dev, "vcc"); 326 326 if (!IS_ERR(st->reg)) { 327 327 ret = regulator_enable(st->reg); 328 328 if (ret)
+1 -1
drivers/iio/gyro/bmg160.c
··· 822 822 int bit, ret, i = 0; 823 823 824 824 mutex_lock(&data->mutex); 825 - for_each_set_bit(bit, indio_dev->buffer->scan_mask, 825 + for_each_set_bit(bit, indio_dev->active_scan_mask, 826 826 indio_dev->masklength) { 827 827 ret = i2c_smbus_read_word_data(data->client, 828 828 BMG160_AXIS_TO_REG(bit));
+41 -28
drivers/iio/humidity/dht11.c
··· 29 29 #include <linux/wait.h> 30 30 #include <linux/bitops.h> 31 31 #include <linux/completion.h> 32 + #include <linux/mutex.h> 32 33 #include <linux/delay.h> 33 34 #include <linux/gpio.h> 34 35 #include <linux/of_gpio.h> ··· 40 39 41 40 #define DHT11_DATA_VALID_TIME 2000000000 /* 2s in ns */ 42 41 43 - #define DHT11_EDGES_PREAMBLE 4 42 + #define DHT11_EDGES_PREAMBLE 2 44 43 #define DHT11_BITS_PER_READ 40 44 + /* 45 + * Note that when reading the sensor actually 84 edges are detected, but 46 + * since the last edge is not significant, we only store 83: 47 + */ 45 48 #define DHT11_EDGES_PER_READ (2*DHT11_BITS_PER_READ + DHT11_EDGES_PREAMBLE + 1) 46 49 47 50 /* Data transmission timing (nano seconds) */ ··· 62 57 int irq; 63 58 64 59 struct completion completion; 60 + struct mutex lock; 65 61 66 62 s64 timestamp; 67 63 int temperature; ··· 94 88 unsigned char temp_int, temp_dec, hum_int, hum_dec, checksum; 95 89 96 90 /* Calculate timestamp resolution */ 97 - for (i = 0; i < dht11->num_edges; ++i) { 91 + for (i = 1; i < dht11->num_edges; ++i) { 98 92 t = dht11->edges[i].ts - dht11->edges[i-1].ts; 99 93 if (t > 0 && t < timeres) 100 94 timeres = t; ··· 144 138 return 0; 145 139 } 146 140 141 + /* 142 + * IRQ handler called on GPIO edges 143 + */ 144 + static irqreturn_t dht11_handle_irq(int irq, void *data) 145 + { 146 + struct iio_dev *iio = data; 147 + struct dht11 *dht11 = iio_priv(iio); 148 + 149 + /* TODO: Consider making the handler safe for IRQ sharing */ 150 + if (dht11->num_edges < DHT11_EDGES_PER_READ && dht11->num_edges >= 0) { 151 + dht11->edges[dht11->num_edges].ts = iio_get_time_ns(); 152 + dht11->edges[dht11->num_edges++].value = 153 + gpio_get_value(dht11->gpio); 154 + 155 + if (dht11->num_edges >= DHT11_EDGES_PER_READ) 156 + complete(&dht11->completion); 157 + } 158 + 159 + return IRQ_HANDLED; 160 + } 161 + 147 162 static int dht11_read_raw(struct iio_dev *iio_dev, 148 163 const struct iio_chan_spec *chan, 149 164 int *val, int *val2, long m) ··· 
172 145 struct dht11 *dht11 = iio_priv(iio_dev); 173 146 int ret; 174 147 148 + mutex_lock(&dht11->lock); 175 149 if (dht11->timestamp + DHT11_DATA_VALID_TIME < iio_get_time_ns()) { 176 150 reinit_completion(&dht11->completion); 177 151 ··· 185 157 if (ret) 186 158 goto err; 187 159 160 + ret = request_irq(dht11->irq, dht11_handle_irq, 161 + IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING, 162 + iio_dev->name, iio_dev); 163 + if (ret) 164 + goto err; 165 + 188 166 ret = wait_for_completion_killable_timeout(&dht11->completion, 189 167 HZ); 168 + 169 + free_irq(dht11->irq, iio_dev); 170 + 190 171 if (ret == 0 && dht11->num_edges < DHT11_EDGES_PER_READ - 1) { 191 172 dev_err(&iio_dev->dev, 192 173 "Only %d signal edges detected\n", ··· 222 185 ret = -EINVAL; 223 186 err: 224 187 dht11->num_edges = -1; 188 + mutex_unlock(&dht11->lock); 225 189 return ret; 226 190 } 227 191 ··· 230 192 .driver_module = THIS_MODULE, 231 193 .read_raw = dht11_read_raw, 232 194 }; 233 - 234 - /* 235 - * IRQ handler called on GPIO edges 236 - */ 237 - static irqreturn_t dht11_handle_irq(int irq, void *data) 238 - { 239 - struct iio_dev *iio = data; 240 - struct dht11 *dht11 = iio_priv(iio); 241 - 242 - /* TODO: Consider making the handler safe for IRQ sharing */ 243 - if (dht11->num_edges < DHT11_EDGES_PER_READ && dht11->num_edges >= 0) { 244 - dht11->edges[dht11->num_edges].ts = iio_get_time_ns(); 245 - dht11->edges[dht11->num_edges++].value = 246 - gpio_get_value(dht11->gpio); 247 - 248 - if (dht11->num_edges >= DHT11_EDGES_PER_READ) 249 - complete(&dht11->completion); 250 - } 251 - 252 - return IRQ_HANDLED; 253 - } 254 195 255 196 static const struct iio_chan_spec dht11_chan_spec[] = { 256 197 { .type = IIO_TEMP, ··· 273 256 dev_err(dev, "GPIO %d has no interrupt\n", dht11->gpio); 274 257 return -EINVAL; 275 258 } 276 - ret = devm_request_irq(dev, dht11->irq, dht11_handle_irq, 277 - IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING, 278 - pdev->name, iio); 279 - if (ret) 280 - return ret; 281 259 
282 260 dht11->timestamp = iio_get_time_ns() - DHT11_DATA_VALID_TIME - 1; 283 261 dht11->num_edges = -1; ··· 280 268 platform_set_drvdata(pdev, iio); 281 269 282 270 init_completion(&dht11->completion); 271 + mutex_init(&dht11->lock); 283 272 iio->name = pdev->name; 284 273 iio->dev.parent = &pdev->dev; 285 274 iio->info = &dht11_iio_info;
+3 -3
drivers/iio/humidity/si7020.c
··· 45 45 struct iio_chan_spec const *chan, int *val, 46 46 int *val2, long mask) 47 47 { 48 - struct i2c_client *client = iio_priv(indio_dev); 48 + struct i2c_client **client = iio_priv(indio_dev); 49 49 int ret; 50 50 51 51 switch (mask) { 52 52 case IIO_CHAN_INFO_RAW: 53 - ret = i2c_smbus_read_word_data(client, 53 + ret = i2c_smbus_read_word_data(*client, 54 54 chan->type == IIO_TEMP ? 55 55 SI7020CMD_TEMP_HOLD : 56 56 SI7020CMD_RH_HOLD); ··· 126 126 /* Wait the maximum power-up time after software reset. */ 127 127 msleep(15); 128 128 129 - indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*client)); 129 + indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data)); 130 130 if (!indio_dev) 131 131 return -ENOMEM; 132 132
+2 -1
drivers/iio/imu/adis16400_core.c
··· 26 26 #include <linux/list.h> 27 27 #include <linux/module.h> 28 28 #include <linux/debugfs.h> 29 + #include <linux/bitops.h> 29 30 30 31 #include <linux/iio/iio.h> 31 32 #include <linux/iio/sysfs.h> ··· 415 414 mutex_unlock(&indio_dev->mlock); 416 415 if (ret) 417 416 return ret; 418 - val16 = ((val16 & 0xFFF) << 4) >> 4; 417 + val16 = sign_extend32(val16, 11); 419 418 *val = val16; 420 419 return IIO_VAL_INT; 421 420 case IIO_CHAN_INFO_OFFSET:
+1 -1
drivers/iio/imu/adis_trigger.c
··· 60 60 iio_trigger_set_drvdata(adis->trig, adis); 61 61 ret = iio_trigger_register(adis->trig); 62 62 63 - indio_dev->trig = adis->trig; 63 + indio_dev->trig = iio_trigger_get(adis->trig); 64 64 if (ret) 65 65 goto error_free_irq; 66 66
+35 -27
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
··· 410 410 } 411 411 } 412 412 413 - static int inv_mpu6050_write_fsr(struct inv_mpu6050_state *st, int fsr) 413 + static int inv_mpu6050_write_gyro_scale(struct inv_mpu6050_state *st, int val) 414 414 { 415 - int result; 415 + int result, i; 416 416 u8 d; 417 417 418 - if (fsr < 0 || fsr > INV_MPU6050_MAX_GYRO_FS_PARAM) 419 - return -EINVAL; 420 - if (fsr == st->chip_config.fsr) 421 - return 0; 418 + for (i = 0; i < ARRAY_SIZE(gyro_scale_6050); ++i) { 419 + if (gyro_scale_6050[i] == val) { 420 + d = (i << INV_MPU6050_GYRO_CONFIG_FSR_SHIFT); 421 + result = inv_mpu6050_write_reg(st, 422 + st->reg->gyro_config, d); 423 + if (result) 424 + return result; 422 425 423 - d = (fsr << INV_MPU6050_GYRO_CONFIG_FSR_SHIFT); 424 - result = inv_mpu6050_write_reg(st, st->reg->gyro_config, d); 425 - if (result) 426 - return result; 427 - st->chip_config.fsr = fsr; 426 + st->chip_config.fsr = i; 427 + return 0; 428 + } 429 + } 428 430 429 - return 0; 431 + return -EINVAL; 430 432 } 431 433 432 - static int inv_mpu6050_write_accel_fs(struct inv_mpu6050_state *st, int fs) 434 + static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val) 433 435 { 434 - int result; 436 + int result, i; 435 437 u8 d; 436 438 437 - if (fs < 0 || fs > INV_MPU6050_MAX_ACCL_FS_PARAM) 438 - return -EINVAL; 439 - if (fs == st->chip_config.accl_fs) 440 - return 0; 439 + for (i = 0; i < ARRAY_SIZE(accel_scale); ++i) { 440 + if (accel_scale[i] == val) { 441 + d = (i << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT); 442 + result = inv_mpu6050_write_reg(st, 443 + st->reg->accl_config, d); 444 + if (result) 445 + return result; 441 446 442 - d = (fs << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT); 443 - result = inv_mpu6050_write_reg(st, st->reg->accl_config, d); 444 - if (result) 445 - return result; 446 - st->chip_config.accl_fs = fs; 447 + st->chip_config.accl_fs = i; 448 + return 0; 449 + } 450 + } 447 451 448 - return 0; 452 + return -EINVAL; 449 453 } 450 454 451 455 static int inv_mpu6050_write_raw(struct iio_dev *indio_dev, ··· 475 471 case IIO_CHAN_INFO_SCALE: 476 472 switch (chan->type) { 477 473 case IIO_ANGL_VEL: 478 - result = inv_mpu6050_write_fsr(st, val); 474 + result = inv_mpu6050_write_gyro_scale(st, val2); 479 475 break; 480 476 case IIO_ACCEL: 481 - result = inv_mpu6050_write_accel_fs(st, val); 477 + result = inv_mpu6050_write_accel_scale(st, val2); 482 478 break; 483 479 default: 484 480 result = -EINVAL; ··· 784 780 785 781 i2c_set_clientdata(client, indio_dev); 786 782 indio_dev->dev.parent = &client->dev; 787 - indio_dev->name = id->name; 783 + /* id will be NULL when enumerated via ACPI */ 784 + if (id) 785 + indio_dev->name = (char *)id->name; 786 + else 787 + indio_dev->name = (char *)dev_name(&client->dev); 788 788 indio_dev->channels = inv_mpu_channels; 789 789 indio_dev->num_channels = ARRAY_SIZE(inv_mpu_channels); 790 790
+14 -11
drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
··· 24 24 #include <linux/poll.h> 25 25 #include "inv_mpu_iio.h" 26 26 27 + static void inv_clear_kfifo(struct inv_mpu6050_state *st) 28 + { 29 + unsigned long flags; 30 + 31 + /* take the spin lock sem to avoid interrupt kick in */ 32 + spin_lock_irqsave(&st->time_stamp_lock, flags); 33 + kfifo_reset(&st->timestamps); 34 + spin_unlock_irqrestore(&st->time_stamp_lock, flags); 35 + } 36 + 27 37 int inv_reset_fifo(struct iio_dev *indio_dev) 28 38 { 29 39 int result; ··· 60 50 INV_MPU6050_BIT_FIFO_RST); 61 51 if (result) 62 52 goto reset_fifo_fail; 53 + 54 + /* clear timestamps fifo */ 55 + inv_clear_kfifo(st); 56 + 63 57 /* enable interrupt */ 64 58 if (st->chip_config.accl_fifo_enable || 65 59 st->chip_config.gyro_fifo_enable) { ··· 95 81 INV_MPU6050_BIT_DATA_RDY_EN); 96 82 97 83 return result; 98 - } 99 - 100 - static void inv_clear_kfifo(struct inv_mpu6050_state *st) 101 - { 102 - unsigned long flags; 103 - 104 - /* take the spin lock sem to avoid interrupt kick in */ 105 - spin_lock_irqsave(&st->time_stamp_lock, flags); 106 - kfifo_reset(&st->timestamps); 107 - spin_unlock_irqrestore(&st->time_stamp_lock, flags); 108 84 } 109 85 110 86 /** ··· 188 184 flush_fifo: 189 185 /* Flush HW and SW FIFOs. */ 190 186 inv_reset_fifo(indio_dev); 191 - inv_clear_kfifo(st); 192 187 mutex_unlock(&indio_dev->mlock); 193 188 iio_trigger_notify_done(indio_dev->trig); 194 189
+1 -1
drivers/iio/imu/kmx61.c
··· 1227 1227 base = KMX61_MAG_XOUT_L; 1228 1228 1229 1229 mutex_lock(&data->lock); 1230 - for_each_set_bit(bit, indio_dev->buffer->scan_mask, 1230 + for_each_set_bit(bit, indio_dev->active_scan_mask, 1231 1231 indio_dev->masklength) { 1232 1232 ret = kmx61_read_measurement(data, base, bit); 1233 1233 if (ret < 0) {
+3 -2
drivers/iio/industrialio-core.c
··· 847 847 * @attr_list: List of IIO device attributes 848 848 * 849 849 * This function frees the memory allocated for each of the IIO device 850 - * attributes in the list. Note: if you want to reuse the list after calling 851 - * this function you have to reinitialize it using INIT_LIST_HEAD(). 850 + * attributes in the list. 852 851 */ 853 852 void iio_free_chan_devattr_list(struct list_head *attr_list) 854 853 { ··· 855 856 856 857 list_for_each_entry_safe(p, n, attr_list, l) { 857 858 kfree(p->dev_attr.attr.name); 859 + list_del(&p->l); 858 860 kfree(p); 859 861 } 860 862 } ··· 936 936 937 937 iio_free_chan_devattr_list(&indio_dev->channel_attr_list); 938 938 kfree(indio_dev->chan_attr_group.attrs); 939 + indio_dev->chan_attr_group.attrs = NULL; 939 940 } 940 941 941 942 static void iio_dev_release(struct device *device)
+1
drivers/iio/industrialio-event.c
··· 500 500 error_free_setup_event_lines: 501 501 iio_free_chan_devattr_list(&indio_dev->event_interface->dev_attr_list); 502 502 kfree(indio_dev->event_interface); 503 + indio_dev->event_interface = NULL; 503 504 return ret; 504 505 } 505 506
+2
drivers/iio/light/Kconfig
··· 73 73 config GP2AP020A00F 74 74 tristate "Sharp GP2AP020A00F Proximity/ALS sensor" 75 75 depends on I2C 76 + select REGMAP_I2C 76 77 select IIO_BUFFER 77 78 select IIO_TRIGGERED_BUFFER 78 79 select IRQ_WORK ··· 127 126 config JSA1212 128 127 tristate "JSA1212 ALS and proximity sensor driver" 129 128 depends on I2C 129 + select REGMAP_I2C 130 130 help 131 131 Say Y here if you want to build a IIO driver for JSA1212 132 132 proximity & ALS sensor device.
+2
drivers/iio/magnetometer/Kconfig
··· 18 18 19 19 config AK09911 20 20 tristate "Asahi Kasei AK09911 3-axis Compass" 21 + depends on I2C 22 + depends on GPIOLIB 21 23 select AK8975 22 24 help 23 25 Deprecated: AK09911 is now supported by AK8975 driver.
+1 -1
drivers/iio/proximity/sx9500.c
··· 494 494 495 495 mutex_lock(&data->mutex); 496 496 497 - for_each_set_bit(bit, indio_dev->buffer->scan_mask, 497 + for_each_set_bit(bit, indio_dev->active_scan_mask, 498 498 indio_dev->masklength) { 499 499 ret = sx9500_read_proximity(data, &indio_dev->channels[bit], 500 500 &val);
+8
drivers/infiniband/core/umem.c
··· 99 99 if (dmasync) 100 100 dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs); 101 101 102 + /* 103 + * If the combination of the addr and size requested for this memory 104 + * region causes an integer overflow, return error. 105 + */ 106 + if ((PAGE_ALIGN(addr + size) <= size) || 107 + (PAGE_ALIGN(addr + size) <= addr)) 108 + return ERR_PTR(-EINVAL); 109 + 102 110 if (!can_do_mlock()) 103 111 return ERR_PTR(-EPERM); 104 112
+16 -4
drivers/infiniband/hw/mlx4/mad.c
··· 64 64 #define GUID_TBL_BLK_NUM_ENTRIES 8 65 65 #define GUID_TBL_BLK_SIZE (GUID_TBL_ENTRY_SIZE * GUID_TBL_BLK_NUM_ENTRIES) 66 66 67 + /* Counters should be saturate once they reach their maximum value */ 68 + #define ASSIGN_32BIT_COUNTER(counter, value) do {\ 69 + if ((value) > U32_MAX) \ 70 + counter = cpu_to_be32(U32_MAX); \ 71 + else \ 72 + counter = cpu_to_be32(value); \ 73 + } while (0) 74 + 67 75 struct mlx4_mad_rcv_buf { 68 76 struct ib_grh grh; 69 77 u8 payload[256]; ··· 814 806 static void edit_counter(struct mlx4_counter *cnt, 815 807 struct ib_pma_portcounters *pma_cnt) 816 808 { 817 - pma_cnt->port_xmit_data = cpu_to_be32((be64_to_cpu(cnt->tx_bytes)>>2)); 818 - pma_cnt->port_rcv_data = cpu_to_be32((be64_to_cpu(cnt->rx_bytes)>>2)); 819 - pma_cnt->port_xmit_packets = cpu_to_be32(be64_to_cpu(cnt->tx_frames)); 820 - pma_cnt->port_rcv_packets = cpu_to_be32(be64_to_cpu(cnt->rx_frames)); 809 + ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_data, 810 + (be64_to_cpu(cnt->tx_bytes) >> 2)); 811 + ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_data, 812 + (be64_to_cpu(cnt->rx_bytes) >> 2)); 813 + ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_packets, 814 + be64_to_cpu(cnt->tx_frames)); 815 + ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_packets, 816 + be64_to_cpu(cnt->rx_frames)); 821 817 } 822 818 823 819 static int iboe_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
+5 -1
drivers/infiniband/hw/mlx4/main.c
··· 2697 2697 spin_lock_bh(&ibdev->iboe.lock); 2698 2698 for (i = 0; i < MLX4_MAX_PORTS; ++i) { 2699 2699 struct net_device *curr_netdev = ibdev->iboe.netdevs[i]; 2700 + enum ib_port_state curr_port_state; 2700 2701 2701 - enum ib_port_state curr_port_state = 2702 + if (!curr_netdev) 2703 + continue; 2704 + 2705 + curr_port_state = 2702 2706 (netif_running(curr_netdev) && 2703 2707 netif_carrier_ok(curr_netdev)) ? 2704 2708 IB_PORT_ACTIVE : IB_PORT_DOWN;
+3 -3
drivers/input/keyboard/tc3589x-keypad.c
··· 411 411 412 412 input_set_drvdata(input, keypad); 413 413 414 - error = request_threaded_irq(irq, NULL, 415 - tc3589x_keypad_irq, plat->irqtype, 416 - "tc3589x-keypad", keypad); 414 + error = request_threaded_irq(irq, NULL, tc3589x_keypad_irq, 415 + plat->irqtype | IRQF_ONESHOT, 416 + "tc3589x-keypad", keypad); 417 417 if (error < 0) { 418 418 dev_err(&pdev->dev, 419 419 "Could not allocate irq %d,error %d\n",
+1
drivers/input/misc/mma8450.c
··· 187 187 idev->private = m; 188 188 idev->input->name = MMA8450_DRV_NAME; 189 189 idev->input->id.bustype = BUS_I2C; 190 + idev->input->dev.parent = &c->dev; 190 191 idev->poll = mma8450_poll; 191 192 idev->poll_interval = POLL_INTERVAL; 192 193 idev->poll_interval_max = POLL_INTERVAL_MAX;
+32 -20
drivers/input/mouse/alps.c
··· 1154 1154 mutex_unlock(&alps_mutex); 1155 1155 } 1156 1156 1157 - static void alps_report_bare_ps2_packet(struct input_dev *dev, 1157 + static void alps_report_bare_ps2_packet(struct psmouse *psmouse, 1158 1158 unsigned char packet[], 1159 1159 bool report_buttons) 1160 1160 { 1161 + struct alps_data *priv = psmouse->private; 1162 + struct input_dev *dev; 1163 + 1164 + /* Figure out which device to use to report the bare packet */ 1165 + if (priv->proto_version == ALPS_PROTO_V2 && 1166 + (priv->flags & ALPS_DUALPOINT)) { 1167 + /* On V2 devices the DualPoint Stick reports bare packets */ 1168 + dev = priv->dev2; 1169 + } else if (unlikely(IS_ERR_OR_NULL(priv->dev3))) { 1170 + /* Register dev3 mouse if we received PS/2 packet first time */ 1171 + if (!IS_ERR(priv->dev3)) 1172 + psmouse_queue_work(psmouse, &priv->dev3_register_work, 1173 + 0); 1174 + return; 1175 + } else { 1176 + dev = priv->dev3; 1177 + } 1178 + 1161 1179 if (report_buttons) 1162 1180 alps_report_buttons(dev, NULL, 1163 1181 packet[0] & 1, packet[0] & 2, packet[0] & 4); ··· 1250 1232 * de-synchronization. 1251 1233 */ 1252 1234 1253 - alps_report_bare_ps2_packet(priv->dev2, 1254 - &psmouse->packet[3], false); 1235 + alps_report_bare_ps2_packet(psmouse, &psmouse->packet[3], 1236 + false); 1255 1237 1256 1238 /* 1257 1239 * Continue with the standard ALPS protocol handling, ··· 1307 1289 * properly we only do this if the device is fully synchronized. 
1308 1290 */ 1309 1291 if (!psmouse->out_of_sync_cnt && (psmouse->packet[0] & 0xc8) == 0x08) { 1310 - 1311 - /* Register dev3 mouse if we received PS/2 packet first time */ 1312 - if (unlikely(!priv->dev3)) 1313 - psmouse_queue_work(psmouse, 1314 - &priv->dev3_register_work, 0); 1315 - 1316 1292 if (psmouse->pktcnt == 3) { 1317 - /* Once dev3 mouse device is registered report data */ 1318 - if (likely(!IS_ERR_OR_NULL(priv->dev3))) 1319 - alps_report_bare_ps2_packet(priv->dev3, 1320 - psmouse->packet, 1321 - true); 1293 + alps_report_bare_ps2_packet(psmouse, psmouse->packet, 1294 + true); 1322 1295 return PSMOUSE_FULL_PACKET; 1323 1296 } 1324 1297 return PSMOUSE_GOOD_DATA; ··· 2290 2281 priv->set_abs_params = alps_set_abs_params_mt; 2291 2282 priv->nibble_commands = alps_v3_nibble_commands; 2292 2283 priv->addr_command = PSMOUSE_CMD_RESET_WRAP; 2293 - priv->x_max = 1360; 2294 - priv->y_max = 660; 2295 2284 priv->x_bits = 23; 2296 2285 priv->y_bits = 12; 2286 + 2287 + if (alps_dolphin_get_device_area(psmouse, priv)) 2288 + return -EIO; 2289 + 2297 2290 break; 2298 2291 2299 2292 case ALPS_PROTO_V6: ··· 2314 2303 priv->set_abs_params = alps_set_abs_params_mt; 2315 2304 priv->nibble_commands = alps_v3_nibble_commands; 2316 2305 priv->addr_command = PSMOUSE_CMD_RESET_WRAP; 2317 - 2318 - if (alps_dolphin_get_device_area(psmouse, priv)) 2319 - return -EIO; 2306 + priv->x_max = 0xfff; 2307 + priv->y_max = 0x7ff; 2320 2308 2321 2309 if (priv->fw_ver[1] != 0xba) 2322 2310 priv->flags |= ALPS_BUTTONPAD; ··· 2615 2605 return -ENOMEM; 2616 2606 2617 2607 error = alps_identify(psmouse, priv); 2618 - if (error) 2608 + if (error) { 2609 + kfree(priv); 2619 2610 return error; 2611 + } 2620 2612 2621 2613 if (set_properties) { 2622 2614 psmouse->vendor = "ALPS";
+1 -1
drivers/input/mouse/cyapa_gen3.c
··· 20 20 #include <linux/input/mt.h> 21 21 #include <linux/module.h> 22 22 #include <linux/slab.h> 23 - #include <linux/unaligned/access_ok.h> 23 + #include <asm/unaligned.h> 24 24 #include "cyapa.h" 25 25 26 26
+2 -2
drivers/input/mouse/cyapa_gen5.c
··· 17 17 #include <linux/mutex.h> 18 18 #include <linux/completion.h> 19 19 #include <linux/slab.h> 20 - #include <linux/unaligned/access_ok.h> 20 + #include <asm/unaligned.h> 21 21 #include <linux/crc-itu-t.h> 22 22 #include "cyapa.h" 23 23 ··· 1926 1926 electrodes_tx = cyapa->electrodes_x; 1927 1927 max_element_cnt = ((cyapa->aligned_electrodes_rx + 7) & 1928 1928 ~7u) * electrodes_tx; 1929 - } else if (idac_data_type == GEN5_RETRIEVE_SELF_CAP_PWC_DATA) { 1929 + } else { 1930 1930 offset = 2; 1931 1931 max_element_cnt = cyapa->electrodes_x + 1932 1932 cyapa->electrodes_y;
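Editor's note: both cyapa patches swap a private header for <asm/unaligned.h>, which resolves to an architecture-appropriate implementation of helpers such as get_unaligned_le16(). A portable user-space equivalent of such a helper (the function name and `sample` buffer are mine, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Read a 16-bit little-endian value from a possibly unaligned buffer. */
static uint16_t get_unaligned_le16_sketch(const void *p)
{
	uint8_t b[2];

	memcpy(b, p, 2);	/* memcpy avoids an unaligned load */
	return (uint16_t)(b[0] | (b[1] << 8));
}

/* Deliberately offset so the 16-bit value starts at an odd address. */
static const uint8_t sample[3] = { 0xff, 0x34, 0x12 };
```

On architectures without hardware unaligned-access support, including the access_ok.h variant directly can generate faulting loads, which is what the one-line include change avoids.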
+35 -15
drivers/input/mouse/focaltech.c
··· 67 67 68 68 #define FOC_MAX_FINGERS 5 69 69 70 - #define FOC_MAX_X 2431 71 - #define FOC_MAX_Y 1663 72 - 73 70 /* 74 71 * Current state of a single finger on the touchpad. 75 72 */ ··· 126 129 input_mt_slot(dev, i); 127 130 input_mt_report_slot_state(dev, MT_TOOL_FINGER, active); 128 131 if (active) { 129 - input_report_abs(dev, ABS_MT_POSITION_X, finger->x); 132 + unsigned int clamped_x, clamped_y; 133 + /* 134 + * The touchpad might report invalid data, so we clamp 135 + * the resulting values so that we do not confuse 136 + * userspace. 137 + */ 138 + clamped_x = clamp(finger->x, 0U, priv->x_max); 139 + clamped_y = clamp(finger->y, 0U, priv->y_max); 140 + input_report_abs(dev, ABS_MT_POSITION_X, clamped_x); 130 141 input_report_abs(dev, ABS_MT_POSITION_Y, 131 - FOC_MAX_Y - finger->y); 142 + priv->y_max - clamped_y); 132 143 } 133 144 } 134 145 input_mt_report_pointer_emulation(dev, true); ··· 184 179 } 185 180 186 181 state->pressed = (packet[0] >> 4) & 1; 187 - 188 - /* 189 - * packet[5] contains some kind of tool size in the most 190 - * significant nibble. 0xff is a special value (latching) that 191 - * signals a large contact area. 
192 - */ 193 - if (packet[5] == 0xff) { 194 - state->fingers[finger].valid = false; 195 - return; 196 - } 197 182 198 183 state->fingers[finger].x = ((packet[1] & 0xf) << 8) | packet[2]; 199 184 state->fingers[finger].y = (packet[3] << 8) | packet[4]; ··· 376 381 377 382 return 0; 378 383 } 384 + 385 + static void focaltech_set_resolution(struct psmouse *psmouse, unsigned int resolution) 386 + { 387 + /* not supported yet */ 388 + } 389 + 390 + static void focaltech_set_rate(struct psmouse *psmouse, unsigned int rate) 391 + { 392 + /* not supported yet */ 393 + } 394 + 395 + static void focaltech_set_scale(struct psmouse *psmouse, 396 + enum psmouse_scale scale) 397 + { 398 + /* not supported yet */ 399 + } 400 + 379 401 int focaltech_init(struct psmouse *psmouse) 380 402 { 381 403 struct focaltech_data *priv; ··· 427 415 psmouse->cleanup = focaltech_reset; 428 416 /* resync is not supported yet */ 429 417 psmouse->resync_time = 0; 418 + /* 419 + * rate/resolution/scale changes are not supported yet, and 420 + * the generic implementations of these functions seem to 421 + * confuse some touchpads 422 + */ 423 + psmouse->set_resolution = focaltech_set_resolution; 424 + psmouse->set_rate = focaltech_set_rate; 425 + psmouse->set_scale = focaltech_set_scale; 430 426 431 427 return 0; 432 428
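Editor's note: the coordinate clamping added to focaltech.c condenses to a couple of pure functions. This is a user-space sketch of the same math, not the driver code; `clamp_uint()` stands in for the kernel's clamp() macro and `y_max` for `priv->y_max`:

```c
#include <assert.h>

/* Clamp v into [lo, hi], like the kernel's clamp() macro. */
static unsigned int clamp_uint(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/*
 * Mirror of the reporting math above: clamp the raw coordinate so bogus
 * hardware values cannot reach userspace, then invert Y because the
 * hardware origin differs from the input layer's.
 */
static unsigned int focaltech_report_y(unsigned int raw_y, unsigned int y_max)
{
	return y_max - clamp_uint(raw_y, 0, y_max);
}
```

Without the clamp, an out-of-range `raw_y` would make the subtraction wrap around to a huge unsigned value, which is exactly the "confuse userspace" failure the comment in the hunk mentions.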
+13 -1
drivers/input/mouse/psmouse-base.c
··· 454 454 } 455 455 456 456 /* 457 + * Here we set the mouse scaling. 458 + */ 459 + 460 + static void psmouse_set_scale(struct psmouse *psmouse, enum psmouse_scale scale) 461 + { 462 + ps2_command(&psmouse->ps2dev, NULL, 463 + scale == PSMOUSE_SCALE21 ? PSMOUSE_CMD_SETSCALE21 : 464 + PSMOUSE_CMD_SETSCALE11); 465 + } 466 + 467 + /* 457 468 * psmouse_poll() - default poll handler. Everyone except for ALPS uses it. 458 469 */ 459 470 ··· 700 689 701 690 psmouse->set_rate = psmouse_set_rate; 702 691 psmouse->set_resolution = psmouse_set_resolution; 692 + psmouse->set_scale = psmouse_set_scale; 703 693 psmouse->poll = psmouse_poll; 704 694 psmouse->protocol_handler = psmouse_process_byte; 705 695 psmouse->pktsize = 3; ··· 1172 1160 if (psmouse_max_proto != PSMOUSE_PS2) { 1173 1161 psmouse->set_rate(psmouse, psmouse->rate); 1174 1162 psmouse->set_resolution(psmouse, psmouse->resolution); 1175 - ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSCALE11); 1163 + psmouse->set_scale(psmouse, PSMOUSE_SCALE11); 1176 1164 } 1177 1165 } 1178 1166
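Editor's note: the new `set_scale` hook in psmouse-base.c boils down to choosing between two standard PS/2 scaling commands (0xE6 for 1:1, 0xE7 for 2:1, which is what PSMOUSE_CMD_SETSCALE11/21 encode). A sketch of the selection logic that returns the command byte instead of issuing `ps2_command()`:

```c
#include <assert.h>

enum psmouse_scale { PSMOUSE_SCALE11, PSMOUSE_SCALE21 };

/*
 * Standard PS/2 mouse commands: 0xE6 = set scaling 1:1,
 * 0xE7 = set scaling 2:1.
 */
static unsigned char psmouse_scale_command(enum psmouse_scale scale)
{
	return scale == PSMOUSE_SCALE21 ? 0xe7 : 0xe6;
}
```

Routing this through a function pointer (rather than calling `ps2_command()` directly) is what lets focaltech override it with a no-op, since its hardware is confused by the generic command.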
+6
drivers/input/mouse/psmouse.h
··· 36 36 PSMOUSE_FULL_PACKET 37 37 } psmouse_ret_t; 38 38 39 + enum psmouse_scale { 40 + PSMOUSE_SCALE11, 41 + PSMOUSE_SCALE21 42 + }; 43 + 39 44 struct psmouse { 40 45 void *private; 41 46 struct input_dev *dev; ··· 72 67 psmouse_ret_t (*protocol_handler)(struct psmouse *psmouse); 73 68 void (*set_rate)(struct psmouse *psmouse, unsigned int rate); 74 69 void (*set_resolution)(struct psmouse *psmouse, unsigned int resolution); 70 + void (*set_scale)(struct psmouse *psmouse, enum psmouse_scale scale); 75 71 76 72 int (*reconnect)(struct psmouse *psmouse); 77 73 void (*disconnect)(struct psmouse *psmouse);
+163 -56
drivers/input/mouse/synaptics.c
··· 67 67 #define X_MAX_POSITIVE 8176 68 68 #define Y_MAX_POSITIVE 8176 69 69 70 - /* maximum ABS_MT_POSITION displacement (in mm) */ 71 - #define DMAX 10 72 - 73 70 /***************************************************************************** 74 71 * Stuff we need even when we do not want native Synaptics support 75 72 ****************************************************************************/ ··· 120 123 121 124 static bool cr48_profile_sensor; 122 125 126 + #define ANY_BOARD_ID 0 123 127 struct min_max_quirk { 124 128 const char * const *pnp_ids; 129 + struct { 130 + unsigned long int min, max; 131 + } board_id; 125 132 int x_min, x_max, y_min, y_max; 126 133 }; 127 134 128 135 static const struct min_max_quirk min_max_pnpid_table[] = { 129 136 { 130 137 (const char * const []){"LEN0033", NULL}, 138 + {ANY_BOARD_ID, ANY_BOARD_ID}, 131 139 1024, 5052, 2258, 4832 132 140 }, 133 141 { 134 - (const char * const []){"LEN0035", "LEN0042", NULL}, 142 + (const char * const []){"LEN0042", NULL}, 143 + {ANY_BOARD_ID, ANY_BOARD_ID}, 135 144 1232, 5710, 1156, 4696 136 145 }, 137 146 { 138 147 (const char * const []){"LEN0034", "LEN0036", "LEN0037", 139 148 "LEN0039", "LEN2002", "LEN2004", 140 149 NULL}, 150 + {ANY_BOARD_ID, 2961}, 141 151 1024, 5112, 2024, 4832 142 152 }, 143 153 { 144 154 (const char * const []){"LEN2001", NULL}, 155 + {ANY_BOARD_ID, ANY_BOARD_ID}, 145 156 1024, 5022, 2508, 4832 146 157 }, 147 158 { 148 159 (const char * const []){"LEN2006", NULL}, 160 + {2691, 2691}, 161 + 1024, 5045, 2457, 4832 162 + }, 163 + { 164 + (const char * const []){"LEN2006", NULL}, 165 + {ANY_BOARD_ID, ANY_BOARD_ID}, 149 166 1264, 5675, 1171, 4688 150 167 }, 151 168 { } ··· 186 175 "LEN0041", 187 176 "LEN0042", /* Yoga */ 188 177 "LEN0045", 189 - "LEN0046", 190 178 "LEN0047", 191 - "LEN0048", 192 179 "LEN0049", 193 180 "LEN2000", 194 181 "LEN2001", /* Edge E431 */ ··· 194 185 "LEN2003", 195 186 "LEN2004", /* L440 */ 196 187 "LEN2005", 197 - "LEN2006", 188 + "LEN2006", /* 
Edge E440/E540 */ 198 189 "LEN2007", 199 190 "LEN2008", 200 191 "LEN2009", ··· 244 235 return 0; 245 236 } 246 237 238 + static int synaptics_more_extended_queries(struct psmouse *psmouse) 239 + { 240 + struct synaptics_data *priv = psmouse->private; 241 + unsigned char buf[3]; 242 + 243 + if (synaptics_send_cmd(psmouse, SYN_QUE_MEXT_CAPAB_10, buf)) 244 + return -1; 245 + 246 + priv->ext_cap_10 = (buf[0]<<16) | (buf[1]<<8) | buf[2]; 247 + 248 + return 0; 249 + } 250 + 247 251 /* 248 - * Read the board id from the touchpad 252 + * Read the board id and the "More Extended Queries" from the touchpad 249 253 * The board id is encoded in the "QUERY MODES" response 250 254 */ 251 - static int synaptics_board_id(struct psmouse *psmouse) 255 + static int synaptics_query_modes(struct psmouse *psmouse) 252 256 { 253 257 struct synaptics_data *priv = psmouse->private; 254 258 unsigned char bid[3]; 255 259 260 + /* firmwares prior 7.5 have no board_id encoded */ 261 + if (SYN_ID_FULL(priv->identity) < 0x705) 262 + return 0; 263 + 256 264 if (synaptics_send_cmd(psmouse, SYN_QUE_MODES, bid)) 257 265 return -1; 258 266 priv->board_id = ((bid[0] & 0xfc) << 6) | bid[1]; 267 + 268 + if (SYN_MEXT_CAP_BIT(bid[0])) 269 + return synaptics_more_extended_queries(psmouse); 270 + 259 271 return 0; 260 272 } 261 273 ··· 376 346 { 377 347 struct synaptics_data *priv = psmouse->private; 378 348 unsigned char resp[3]; 379 - int i; 380 349 381 350 if (SYN_ID_MAJOR(priv->identity) < 4) 382 351 return 0; ··· 387 358 } 388 359 } 389 360 390 - for (i = 0; min_max_pnpid_table[i].pnp_ids; i++) { 391 - if (psmouse_matches_pnp_id(psmouse, 392 - min_max_pnpid_table[i].pnp_ids)) { 393 - priv->x_min = min_max_pnpid_table[i].x_min; 394 - priv->x_max = min_max_pnpid_table[i].x_max; 395 - priv->y_min = min_max_pnpid_table[i].y_min; 396 - priv->y_max = min_max_pnpid_table[i].y_max; 397 - return 0; 398 - } 399 - } 400 - 401 361 if (SYN_EXT_CAP_REQUESTS(priv->capabilities) >= 5 && 402 362 
SYN_CAP_MAX_DIMENSIONS(priv->ext_cap_0c)) { 403 363 if (synaptics_send_cmd(psmouse, SYN_QUE_EXT_MAX_COORDS, resp)) { ··· 395 377 } else { 396 378 priv->x_max = (resp[0] << 5) | ((resp[1] & 0x0f) << 1); 397 379 priv->y_max = (resp[2] << 5) | ((resp[1] & 0xf0) >> 3); 380 + psmouse_info(psmouse, 381 + "queried max coordinates: x [..%d], y [..%d]\n", 382 + priv->x_max, priv->y_max); 398 383 } 399 384 } 400 385 401 - if (SYN_EXT_CAP_REQUESTS(priv->capabilities) >= 7 && 402 - SYN_CAP_MIN_DIMENSIONS(priv->ext_cap_0c)) { 386 + if (SYN_CAP_MIN_DIMENSIONS(priv->ext_cap_0c) && 387 + (SYN_EXT_CAP_REQUESTS(priv->capabilities) >= 7 || 388 + /* 389 + * Firmware v8.1 does not report proper number of extended 390 + * capabilities, but has been proven to report correct min 391 + * coordinates. 392 + */ 393 + SYN_ID_FULL(priv->identity) == 0x801)) { 403 394 if (synaptics_send_cmd(psmouse, SYN_QUE_EXT_MIN_COORDS, resp)) { 404 395 psmouse_warn(psmouse, 405 396 "device claims to have min coordinates query, but I'm not able to read it.\n"); 406 397 } else { 407 398 priv->x_min = (resp[0] << 5) | ((resp[1] & 0x0f) << 1); 408 399 priv->y_min = (resp[2] << 5) | ((resp[1] & 0xf0) >> 3); 400 + psmouse_info(psmouse, 401 + "queried min coordinates: x [%d..], y [%d..]\n", 402 + priv->x_min, priv->y_min); 409 403 } 410 404 } 411 405 412 406 return 0; 407 + } 408 + 409 + /* 410 + * Apply quirk(s) if the hardware matches 411 + */ 412 + 413 + static void synaptics_apply_quirks(struct psmouse *psmouse) 414 + { 415 + struct synaptics_data *priv = psmouse->private; 416 + int i; 417 + 418 + for (i = 0; min_max_pnpid_table[i].pnp_ids; i++) { 419 + if (!psmouse_matches_pnp_id(psmouse, 420 + min_max_pnpid_table[i].pnp_ids)) 421 + continue; 422 + 423 + if (min_max_pnpid_table[i].board_id.min != ANY_BOARD_ID && 424 + priv->board_id < min_max_pnpid_table[i].board_id.min) 425 + continue; 426 + 427 + if (min_max_pnpid_table[i].board_id.max != ANY_BOARD_ID && 428 + priv->board_id > 
min_max_pnpid_table[i].board_id.max) 429 + continue; 430 + 431 + priv->x_min = min_max_pnpid_table[i].x_min; 432 + priv->x_max = min_max_pnpid_table[i].x_max; 433 + priv->y_min = min_max_pnpid_table[i].y_min; 434 + priv->y_max = min_max_pnpid_table[i].y_max; 435 + psmouse_info(psmouse, 436 + "quirked min/max coordinates: x [%d..%d], y [%d..%d]\n", 437 + priv->x_min, priv->x_max, 438 + priv->y_min, priv->y_max); 439 + break; 440 + } 413 441 } 414 442 415 443 static int synaptics_query_hardware(struct psmouse *psmouse) ··· 466 402 return -1; 467 403 if (synaptics_firmware_id(psmouse)) 468 404 return -1; 469 - if (synaptics_board_id(psmouse)) 405 + if (synaptics_query_modes(psmouse)) 470 406 return -1; 471 407 if (synaptics_capability(psmouse)) 472 408 return -1; 473 409 if (synaptics_resolution(psmouse)) 474 410 return -1; 411 + 412 + synaptics_apply_quirks(psmouse); 475 413 476 414 return 0; 477 415 } ··· 582 516 return (buf[0] & 0xFC) == 0x84 && (buf[3] & 0xCC) == 0xC4; 583 517 } 584 518 585 - static void synaptics_pass_pt_packet(struct serio *ptport, unsigned char *packet) 519 + static void synaptics_pass_pt_packet(struct psmouse *psmouse, 520 + struct serio *ptport, 521 + unsigned char *packet) 586 522 { 523 + struct synaptics_data *priv = psmouse->private; 587 524 struct psmouse *child = serio_get_drvdata(ptport); 588 525 589 526 if (child && child->state == PSMOUSE_ACTIVATED) { 590 - serio_interrupt(ptport, packet[1], 0); 527 + serio_interrupt(ptport, packet[1] | priv->pt_buttons, 0); 591 528 serio_interrupt(ptport, packet[4], 0); 592 529 serio_interrupt(ptport, packet[5], 0); 593 530 if (child->pktsize == 4) 594 531 serio_interrupt(ptport, packet[2], 0); 595 - } else 532 + } else { 596 533 serio_interrupt(ptport, packet[1], 0); 534 + } 597 535 } 598 536 599 537 static void synaptics_pt_activate(struct psmouse *psmouse) ··· 673 603 default: 674 604 break; 675 605 } 606 + } 607 + 608 + static void synaptics_parse_ext_buttons(const unsigned char buf[], 609 + 
struct synaptics_data *priv, 610 + struct synaptics_hw_state *hw) 611 + { 612 + unsigned int ext_bits = 613 + (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) + 1) >> 1; 614 + unsigned int ext_mask = GENMASK(ext_bits - 1, 0); 615 + 616 + hw->ext_buttons = buf[4] & ext_mask; 617 + hw->ext_buttons |= (buf[5] & ext_mask) << ext_bits; 676 618 } 677 619 678 620 static bool is_forcepad; ··· 773 691 hw->down = ((buf[0] ^ buf[3]) & 0x02) ? 1 : 0; 774 692 } 775 693 776 - if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) && 694 + if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) > 0 && 777 695 ((buf[0] ^ buf[3]) & 0x02)) { 778 - switch (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) & ~0x01) { 779 - default: 780 - /* 781 - * if nExtBtn is greater than 8 it should be 782 - * considered invalid and treated as 0 783 - */ 784 - break; 785 - case 8: 786 - hw->ext_buttons |= ((buf[5] & 0x08)) ? 0x80 : 0; 787 - hw->ext_buttons |= ((buf[4] & 0x08)) ? 0x40 : 0; 788 - case 6: 789 - hw->ext_buttons |= ((buf[5] & 0x04)) ? 0x20 : 0; 790 - hw->ext_buttons |= ((buf[4] & 0x04)) ? 0x10 : 0; 791 - case 4: 792 - hw->ext_buttons |= ((buf[5] & 0x02)) ? 0x08 : 0; 793 - hw->ext_buttons |= ((buf[4] & 0x02)) ? 0x04 : 0; 794 - case 2: 795 - hw->ext_buttons |= ((buf[5] & 0x01)) ? 0x02 : 0; 796 - hw->ext_buttons |= ((buf[4] & 0x01)) ? 
0x01 : 0; 797 - } 696 + synaptics_parse_ext_buttons(buf, priv, hw); 798 697 } 799 698 } else { 800 699 hw->x = (((buf[1] & 0x1f) << 8) | buf[2]); ··· 837 774 } 838 775 } 839 776 777 + static void synaptics_report_ext_buttons(struct psmouse *psmouse, 778 + const struct synaptics_hw_state *hw) 779 + { 780 + struct input_dev *dev = psmouse->dev; 781 + struct synaptics_data *priv = psmouse->private; 782 + int ext_bits = (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) + 1) >> 1; 783 + char buf[6] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}; 784 + int i; 785 + 786 + if (!SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap)) 787 + return; 788 + 789 + /* Bug in FW 8.1, buttons are reported only when ExtBit is 1 */ 790 + if (SYN_ID_FULL(priv->identity) == 0x801 && 791 + !((psmouse->packet[0] ^ psmouse->packet[3]) & 0x02)) 792 + return; 793 + 794 + if (!SYN_CAP_EXT_BUTTONS_STICK(priv->ext_cap_10)) { 795 + for (i = 0; i < ext_bits; i++) { 796 + input_report_key(dev, BTN_0 + 2 * i, 797 + hw->ext_buttons & (1 << i)); 798 + input_report_key(dev, BTN_1 + 2 * i, 799 + hw->ext_buttons & (1 << (i + ext_bits))); 800 + } 801 + return; 802 + } 803 + 804 + /* 805 + * This generation of touchpads has the trackstick buttons 806 + * physically wired to the touchpad. Re-route them through 807 + * the pass-through interface. 
808 + */ 809 + if (!priv->pt_port) 810 + return; 811 + 812 + /* The trackstick expects at most 3 buttons */ 813 + priv->pt_buttons = SYN_CAP_EXT_BUTTON_STICK_L(hw->ext_buttons) | 814 + SYN_CAP_EXT_BUTTON_STICK_R(hw->ext_buttons) << 1 | 815 + SYN_CAP_EXT_BUTTON_STICK_M(hw->ext_buttons) << 2; 816 + 817 + synaptics_pass_pt_packet(psmouse, priv->pt_port, buf); 818 + } 819 + 840 820 static void synaptics_report_buttons(struct psmouse *psmouse, 841 821 const struct synaptics_hw_state *hw) 842 822 { 843 823 struct input_dev *dev = psmouse->dev; 844 824 struct synaptics_data *priv = psmouse->private; 845 - int i; 846 825 847 826 input_report_key(dev, BTN_LEFT, hw->left); 848 827 input_report_key(dev, BTN_RIGHT, hw->right); ··· 897 792 input_report_key(dev, BTN_BACK, hw->down); 898 793 } 899 794 900 - for (i = 0; i < SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap); i++) 901 - input_report_key(dev, BTN_0 + i, hw->ext_buttons & (1 << i)); 795 + synaptics_report_ext_buttons(psmouse, hw); 902 796 } 903 797 904 798 static void synaptics_report_mt_data(struct psmouse *psmouse, ··· 917 813 pos[i].y = synaptics_invert_y(hw[i]->y); 918 814 } 919 815 920 - input_mt_assign_slots(dev, slot, pos, nsemi, DMAX * priv->x_res); 816 + input_mt_assign_slots(dev, slot, pos, nsemi, 0); 921 817 922 818 for (i = 0; i < nsemi; i++) { 923 819 input_mt_slot(dev, slot[i]); ··· 1118 1014 if (SYN_CAP_PASS_THROUGH(priv->capabilities) && 1119 1015 synaptics_is_pt_packet(psmouse->packet)) { 1120 1016 if (priv->pt_port) 1121 - synaptics_pass_pt_packet(priv->pt_port, psmouse->packet); 1017 + synaptics_pass_pt_packet(psmouse, priv->pt_port, 1018 + psmouse->packet); 1122 1019 } else 1123 1020 synaptics_process_packet(psmouse); 1124 1021 ··· 1221 1116 __set_bit(BTN_BACK, dev->keybit); 1222 1117 } 1223 1118 1224 - for (i = 0; i < SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap); i++) 1225 - __set_bit(BTN_0 + i, dev->keybit); 1119 + if (!SYN_CAP_EXT_BUTTONS_STICK(priv->ext_cap_10)) 1120 + for (i = 0; i < 
SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap); i++) 1121 + __set_bit(BTN_0 + i, dev->keybit); 1226 1122 1227 1123 __clear_bit(EV_REL, dev->evbit); 1228 1124 __clear_bit(REL_X, dev->relbit); ··· 1231 1125 1232 1126 if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) { 1233 1127 __set_bit(INPUT_PROP_BUTTONPAD, dev->propbit); 1234 - if (psmouse_matches_pnp_id(psmouse, topbuttonpad_pnp_ids)) 1128 + if (psmouse_matches_pnp_id(psmouse, topbuttonpad_pnp_ids) && 1129 + !SYN_CAP_EXT_BUTTONS_STICK(priv->ext_cap_10)) 1235 1130 __set_bit(INPUT_PROP_TOPBUTTONPAD, dev->propbit); 1236 1131 /* Clickpads report only left button */ 1237 1132 __clear_bit(BTN_RIGHT, dev->keybit);
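Editor's note: the new `synaptics_parse_ext_buttons()` replaces a fall-through switch with plain mask arithmetic: half of the extended-button bits live in buf[4], the other half in buf[5]. A standalone rendition of that arithmetic, with GENMASK() open-coded and names simplified:

```c
#include <assert.h>

/*
 * n_buttons is SYN_CAP_MULTI_BUTTON_NO(ext_cap); (n + 1) >> 1 rounds up
 * to the number of bits carried in each of buf[4] and buf[5].
 */
static unsigned int parse_ext_buttons(unsigned char buf4, unsigned char buf5,
				      unsigned int n_buttons)
{
	unsigned int ext_bits = (n_buttons + 1) >> 1;
	unsigned int ext_mask = (1u << ext_bits) - 1;	/* GENMASK(ext_bits - 1, 0) */

	return (buf4 & ext_mask) | ((buf5 & ext_mask) << ext_bits);
}
```

For 8 buttons this gives a 4-bit mask per byte, matching what the removed switch built up case by case, but it also handles odd button counts without the fall-through cases.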
+28
drivers/input/mouse/synaptics.h
··· 22 22 #define SYN_QUE_EXT_CAPAB_0C 0x0c 23 23 #define SYN_QUE_EXT_MAX_COORDS 0x0d 24 24 #define SYN_QUE_EXT_MIN_COORDS 0x0f 25 + #define SYN_QUE_MEXT_CAPAB_10 0x10 25 26 26 27 /* synaptics modes */ 27 28 #define SYN_BIT_ABSOLUTE_MODE (1 << 7) ··· 54 53 #define SYN_EXT_CAP_REQUESTS(c) (((c) & 0x700000) >> 20) 55 54 #define SYN_CAP_MULTI_BUTTON_NO(ec) (((ec) & 0x00f000) >> 12) 56 55 #define SYN_CAP_PRODUCT_ID(ec) (((ec) & 0xff0000) >> 16) 56 + #define SYN_MEXT_CAP_BIT(m) ((m) & (1 << 1)) 57 57 58 58 /* 59 59 * The following describes response for the 0x0c query. ··· 90 88 #define SYN_CAP_ADV_GESTURE(ex0c) ((ex0c) & 0x080000) 91 89 #define SYN_CAP_REDUCED_FILTERING(ex0c) ((ex0c) & 0x000400) 92 90 #define SYN_CAP_IMAGE_SENSOR(ex0c) ((ex0c) & 0x000800) 91 + 92 + /* 93 + * The following describes response for the 0x10 query. 94 + * 95 + * byte mask name meaning 96 + * ---- ---- ------- ------------ 97 + * 1 0x01 ext buttons are stick buttons exported in the extended 98 + * capability are actually meant to be used 99 + * by the trackstick (pass-through). 100 + * 1 0x02 SecurePad the touchpad is a SecurePad, so it 101 + * contains a built-in fingerprint reader. 102 + * 1 0xe0 more ext count how many more extended queries are 103 + * available after this one. 104 + * 2 0xff SecurePad width the width of the SecurePad fingerprint 105 + * reader. 106 + * 3 0xff SecurePad height the height of the SecurePad fingerprint 107 + * reader. 
108 + */ 109 + #define SYN_CAP_EXT_BUTTONS_STICK(ex10) ((ex10) & 0x010000) 110 + #define SYN_CAP_SECUREPAD(ex10) ((ex10) & 0x020000) 111 + 112 + #define SYN_CAP_EXT_BUTTON_STICK_L(eb) (!!((eb) & 0x01)) 113 + #define SYN_CAP_EXT_BUTTON_STICK_M(eb) (!!((eb) & 0x02)) 114 + #define SYN_CAP_EXT_BUTTON_STICK_R(eb) (!!((eb) & 0x04)) 93 115 94 116 /* synaptics modes query bits */ 95 117 #define SYN_MODE_ABSOLUTE(m) ((m) & (1 << 7)) ··· 169 143 unsigned long int capabilities; /* Capabilities */ 170 144 unsigned long int ext_cap; /* Extended Capabilities */ 171 145 unsigned long int ext_cap_0c; /* Ext Caps from 0x0c query */ 146 + unsigned long int ext_cap_10; /* Ext Caps from 0x10 query */ 172 147 unsigned long int identity; /* Identification */ 173 148 unsigned int x_res, y_res; /* X/Y resolution in units/mm */ 174 149 unsigned int x_max, y_max; /* Max coordinates (from FW) */ ··· 183 156 bool disable_gesture; /* disable gestures */ 184 157 185 158 struct serio *pt_port; /* Pass-through serio port */ 159 + unsigned char pt_buttons; /* Pass-through buttons */ 186 160 187 161 /* 188 162 * Last received Advanced Gesture Mode (AGM) packet. An AGM packet
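Editor's note: the 0x10 query returns three bytes which the driver packs into `priv->ext_cap_10` the same way as the other query words; the new SYN_CAP_* accessors then test bits of byte 1 inside the packed value. A sketch of that packing and decoding (`pack_ext_cap_10()` is my name for the `(buf[0]<<16) | (buf[1]<<8) | buf[2]` step from synaptics.c):

```c
#include <assert.h>

static unsigned long pack_ext_cap_10(unsigned char b1, unsigned char b2,
				     unsigned char b3)
{
	/* byte 1 ends up in bits 23..16, hence the 0x0N0000 masks below */
	return ((unsigned long)b1 << 16) | ((unsigned long)b2 << 8) | b3;
}

/* Same masks as the hunk above. */
#define SYN_CAP_EXT_BUTTONS_STICK(ex10)	((ex10) & 0x010000)
#define SYN_CAP_SECUREPAD(ex10)		((ex10) & 0x020000)
```

So a response byte 1 of 0x03 would mean both "the extended buttons belong to the trackstick" and "this is a SecurePad".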
+1
drivers/input/touchscreen/Kconfig
··· 943 943 tristate "Allwinner sun4i resistive touchscreen controller support" 944 944 depends on ARCH_SUNXI || COMPILE_TEST 945 945 depends on HWMON 946 + depends on THERMAL || !THERMAL_OF 946 947 help 947 948 This selects support for the resistive touchscreen controller 948 949 found on Allwinner sunxi SoCs.
+2
drivers/iommu/Kconfig
··· 23 23 config IOMMU_IO_PGTABLE_LPAE 24 24 bool "ARMv7/v8 Long Descriptor Format" 25 25 select IOMMU_IO_PGTABLE 26 + depends on ARM || ARM64 || COMPILE_TEST 26 27 help 27 28 Enable support for the ARM long descriptor pagetable format. 28 29 This allocator supports 4K/2M/1G, 16K/32M and 64K/512M page ··· 64 63 bool "MSM IOMMU Support" 65 64 depends on ARM 66 65 depends on ARCH_MSM8X60 || ARCH_MSM8960 || COMPILE_TEST 66 + depends on BROKEN 67 67 select IOMMU_API 68 68 help 69 69 Support for the IOMMUs found on certain Qualcomm SOCs.
+6 -3
drivers/iommu/arm-smmu.c
··· 1288 1288 return 0; 1289 1289 1290 1290 spin_lock_irqsave(&smmu_domain->pgtbl_lock, flags); 1291 - if (smmu_domain->smmu->features & ARM_SMMU_FEAT_TRANS_OPS) 1291 + if (smmu_domain->smmu->features & ARM_SMMU_FEAT_TRANS_OPS && 1292 + smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { 1292 1293 ret = arm_smmu_iova_to_phys_hard(domain, iova); 1293 - else 1294 + } else { 1294 1295 ret = ops->iova_to_phys(ops, iova); 1296 + } 1297 + 1295 1298 spin_unlock_irqrestore(&smmu_domain->pgtbl_lock, flags); 1296 1299 1297 1300 return ret; ··· 1559 1556 return -ENODEV; 1560 1557 } 1561 1558 1562 - if (smmu->version == 1 || (!(id & ID0_ATOSNS) && (id & ID0_S1TS))) { 1559 + if ((id & ID0_S1TS) && ((smmu->version == 1) || (id & ID0_ATOSNS))) { 1563 1560 smmu->features |= ARM_SMMU_FEAT_TRANS_OPS; 1564 1561 dev_notice(smmu->dev, "\taddress translation ops\n"); 1565 1562 }
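Editor's note: the arm-smmu feature-detection fix is pure boolean restructuring: stage-1 translation support (S1TS) becomes mandatory, and on non-v1 SMMUs the ATOSNS ID bit must additionally be set. A truth-table-sized sketch of the corrected predicate (flags modelled as plain ints, not the real ID0_* register bits):

```c
#include <assert.h>

/* Corrected condition from the hunk: S1TS && (v1 || ATOSNS). */
static int has_trans_ops(int version, int s1ts, int atosns)
{
	return s1ts && (version == 1 || atosns);
}
```

The old expression `version == 1 || (!atosns && s1ts)` claimed the feature on every v1 SMMU even without S1TS, which the rewrite rules out.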
+7
drivers/iommu/exynos-iommu.c
··· 1186 1186 1187 1187 static int __init exynos_iommu_init(void) 1188 1188 { 1189 + struct device_node *np; 1189 1190 int ret; 1191 + 1192 + np = of_find_matching_node(NULL, sysmmu_of_match); 1193 + if (!np) 1194 + return 0; 1195 + 1196 + of_node_put(np); 1190 1197 1191 1198 lv2table_kmem_cache = kmem_cache_create("exynos-iommu-lv2table", 1192 1199 LV2TABLE_SIZE, LV2TABLE_SIZE, 0, NULL);
+3 -4
drivers/iommu/intel-iommu.c
··· 1742 1742 1743 1743 static void domain_exit(struct dmar_domain *domain) 1744 1744 { 1745 - struct dmar_drhd_unit *drhd; 1746 - struct intel_iommu *iommu; 1747 1745 struct page *freelist = NULL; 1746 + int i; 1748 1747 1749 1748 /* Domain 0 is reserved, so dont process it */ 1750 1749 if (!domain) ··· 1763 1764 1764 1765 /* clear attached or cached domains */ 1765 1766 rcu_read_lock(); 1766 - for_each_active_iommu(iommu, drhd) 1767 - iommu_detach_domain(domain, iommu); 1767 + for_each_set_bit(i, domain->iommu_bmp, g_num_of_iommus) 1768 + iommu_detach_domain(domain, g_iommus[i]); 1768 1769 rcu_read_unlock(); 1769 1770 1770 1771 dma_free_pagelist(freelist);
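Editor's note: `domain_exit()` now detaches only the IOMMUs whose bit is set in `domain->iommu_bmp`, instead of walking every active IOMMU. That is the for_each_set_bit() pattern; a user-space rendition that just counts the matches (in the driver, the `iommu_detach_domain()` call sits where the counter increment is):

```c
#include <assert.h>

/* Visit each set bit of bmp below nbits, like for_each_set_bit(). */
static int count_set_bits_upto(unsigned long bmp, int nbits)
{
	int i, n = 0;

	for (i = 0; i < nbits; i++)
		if (bmp & (1UL << i))
			n++;	/* per-IOMMU work would happen here */
	return n;
}
```

Iterating the bitmap fixes the case where the domain is attached to an IOMMU that is no longer in the active drhd list, and skips IOMMUs the domain never touched.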
+3 -2
drivers/iommu/io-pgtable-arm.c
··· 56 56 ((((d)->levels - ((l) - ARM_LPAE_START_LVL(d) + 1)) \ 57 57 * (d)->bits_per_level) + (d)->pg_shift) 58 58 59 - #define ARM_LPAE_PAGES_PER_PGD(d) ((d)->pgd_size >> (d)->pg_shift) 59 + #define ARM_LPAE_PAGES_PER_PGD(d) \ 60 + DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift) 60 61 61 62 /* 62 63 * Calculate the index at level l used to map virtual address a using the ··· 67 66 ((l) == ARM_LPAE_START_LVL(d) ? ilog2(ARM_LPAE_PAGES_PER_PGD(d)) : 0) 68 67 69 68 #define ARM_LPAE_LVL_IDX(a,l,d) \ 70 - (((a) >> ARM_LPAE_LVL_SHIFT(l,d)) & \ 69 + (((u64)(a) >> ARM_LPAE_LVL_SHIFT(l,d)) & \ 71 70 ((1 << ((d)->bits_per_level + ARM_LPAE_PGD_IDX(l,d))) - 1)) 72 71 73 72 /* Calculate the block/page mapping size at level l for pagetable in d. */
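Editor's note: both io-pgtable-arm changes guard against truncation. With a 64 KiB granule the concatenated PGD can be smaller than one page, so `pgd_size >> pg_shift` rounds down to 0 where DIV_ROUND_UP() yields 1; and shifting a 32-bit IOVA type by 32 or more bits is undefined, hence the `(u64)` cast. A sketch under simplified names (these are not the kernel macros):

```c
#include <assert.h>
#include <stdint.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/*
 * Index into a level table: widening to uint64_t before the shift
 * mirrors the ARM_LPAE_LVL_IDX fix, which matters once the shift
 * count can reach or exceed the width of the native IOVA type.
 */
static unsigned int lvl_idx(uint64_t iova, unsigned int shift,
			    unsigned int bits_per_level)
{
	return (iova >> shift) & ((1u << bits_per_level) - 1);
}
```

For example, a 4 KiB PGD with a 64 KiB page shift gives `4096 >> 16 == 0` pages under the old macro but one page under DIV_ROUND_UP().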
+1
drivers/iommu/ipmmu-vmsa.c
··· 851 851 852 852 static const struct of_device_id ipmmu_of_ids[] = { 853 853 { .compatible = "renesas,ipmmu-vmsa", }, 854 + { } 854 855 }; 855 856 856 857 static struct platform_driver ipmmu_driver = {
+7
drivers/iommu/omap-iommu.c
··· 1376 1376 struct kmem_cache *p; 1377 1377 const unsigned long flags = SLAB_HWCACHE_ALIGN; 1378 1378 size_t align = 1 << 10; /* L2 pagetable alignment */ 1379 + struct device_node *np; 1380 + 1381 + np = of_find_matching_node(NULL, omap_iommu_of_match); 1382 + if (!np) 1383 + return 0; 1384 + 1385 + of_node_put(np); 1379 1386 1380 1387 p = kmem_cache_create("iopte_cache", IOPTE_TABLE_SIZE, align, flags, 1381 1388 iopte_cachep_ctor);
+7
drivers/iommu/rockchip-iommu.c
··· 1015 1015 1016 1016 static int __init rk_iommu_init(void) 1017 1017 { 1018 + struct device_node *np; 1018 1019 int ret; 1020 + 1021 + np = of_find_matching_node(NULL, rk_iommu_dt_ids); 1022 + if (!np) 1023 + return 0; 1024 + 1025 + of_node_put(np); 1019 1026 1020 1027 ret = bus_set_iommu(&platform_bus_type, &rk_iommu_ops); 1021 1028 if (ret)
+20 -1
drivers/irqchip/irq-armada-370-xp.c
··· 69 69 static void __iomem *main_int_base; 70 70 static struct irq_domain *armada_370_xp_mpic_domain; 71 71 static u32 doorbell_mask_reg; 72 + static int parent_irq; 72 73 #ifdef CONFIG_PCI_MSI 73 74 static struct irq_domain *armada_370_xp_msi_domain; 74 75 static DECLARE_BITMAP(msi_used, PCI_MSI_DOORBELL_NR); ··· 357 356 { 358 357 if (action == CPU_STARTING || action == CPU_STARTING_FROZEN) 359 358 armada_xp_mpic_smp_cpu_init(); 359 + 360 360 return NOTIFY_OK; 361 361 } 362 362 363 363 static struct notifier_block armada_370_xp_mpic_cpu_notifier = { 364 364 .notifier_call = armada_xp_mpic_secondary_init, 365 + .priority = 100, 366 + }; 367 + 368 + static int mpic_cascaded_secondary_init(struct notifier_block *nfb, 369 + unsigned long action, void *hcpu) 370 + { 371 + if (action == CPU_STARTING || action == CPU_STARTING_FROZEN) 372 + enable_percpu_irq(parent_irq, IRQ_TYPE_NONE); 373 + 374 + return NOTIFY_OK; 375 + } 376 + 377 + static struct notifier_block mpic_cascaded_cpu_notifier = { 378 + .notifier_call = mpic_cascaded_secondary_init, 365 379 .priority = 100, 366 380 }; 367 381 ··· 555 539 struct device_node *parent) 556 540 { 557 541 struct resource main_int_res, per_cpu_int_res; 558 - int parent_irq, nr_irqs, i; 542 + int nr_irqs, i; 559 543 u32 control; 560 544 561 545 BUG_ON(of_address_to_resource(node, 0, &main_int_res)); ··· 603 587 register_cpu_notifier(&armada_370_xp_mpic_cpu_notifier); 604 588 #endif 605 589 } else { 590 + #ifdef CONFIG_SMP 591 + register_cpu_notifier(&mpic_cascaded_cpu_notifier); 592 + #endif 606 593 irq_set_chained_handler(parent_irq, 607 594 armada_370_xp_mpic_handle_cascade_irq); 608 595 }
+175 -37
drivers/irqchip/irq-gic-v3-its.c
··· 169 169 170 170 static void its_encode_devid(struct its_cmd_block *cmd, u32 devid) 171 171 { 172 - cmd->raw_cmd[0] &= ~(0xffffUL << 32); 172 + cmd->raw_cmd[0] &= BIT_ULL(32) - 1; 173 173 cmd->raw_cmd[0] |= ((u64)devid) << 32; 174 174 } 175 175 ··· 416 416 { 417 417 struct its_cmd_block *cmd, *sync_cmd, *next_cmd; 418 418 struct its_collection *sync_col; 419 + unsigned long flags; 419 420 420 - raw_spin_lock(&its->lock); 421 + raw_spin_lock_irqsave(&its->lock, flags); 421 422 422 423 cmd = its_allocate_entry(its); 423 424 if (!cmd) { /* We're soooooo screewed... */ 424 425 pr_err_ratelimited("ITS can't allocate, dropping command\n"); 425 - raw_spin_unlock(&its->lock); 426 + raw_spin_unlock_irqrestore(&its->lock, flags); 426 427 return; 427 428 } 428 429 sync_col = builder(cmd, desc); ··· 443 442 444 443 post: 445 444 next_cmd = its_post_commands(its); 446 - raw_spin_unlock(&its->lock); 445 + raw_spin_unlock_irqrestore(&its->lock, flags); 447 446 448 447 its_wait_for_range_completion(its, cmd, next_cmd); 449 448 } ··· 800 799 { 801 800 int err; 802 801 int i; 803 - int psz = PAGE_SIZE; 802 + int psz = SZ_64K; 804 803 u64 shr = GITS_BASER_InnerShareable; 804 + u64 cache = GITS_BASER_WaWb; 805 805 806 806 for (i = 0; i < GITS_BASER_NR_REGS; i++) { 807 807 u64 val = readq_relaxed(its->base + GITS_BASER + i * 8); 808 808 u64 type = GITS_BASER_TYPE(val); 809 809 u64 entry_size = GITS_BASER_ENTRY_SIZE(val); 810 + int order = get_order(psz); 811 + int alloc_size; 810 812 u64 tmp; 811 813 void *base; 812 814 813 815 if (type == GITS_BASER_TYPE_NONE) 814 816 continue; 815 817 816 - /* We're lazy and only allocate a single page for now */ 817 - base = (void *)get_zeroed_page(GFP_KERNEL); 818 + /* 819 + * Allocate as many entries as required to fit the 820 + * range of device IDs that the ITS can grok... The ID 821 + * space being incredibly sparse, this results in a 822 + * massive waste of memory. 823 + * 824 + * For other tables, only allocate a single page. 
825 + */ 826 + if (type == GITS_BASER_TYPE_DEVICE) { 827 + u64 typer = readq_relaxed(its->base + GITS_TYPER); 828 + u32 ids = GITS_TYPER_DEVBITS(typer); 829 + 830 + order = get_order((1UL << ids) * entry_size); 831 + if (order >= MAX_ORDER) { 832 + order = MAX_ORDER - 1; 833 + pr_warn("%s: Device Table too large, reduce its page order to %u\n", 834 + its->msi_chip.of_node->full_name, order); 835 + } 836 + } 837 + 838 + alloc_size = (1 << order) * PAGE_SIZE; 839 + base = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order); 818 840 if (!base) { 819 841 err = -ENOMEM; 820 842 goto out_free; ··· 849 825 val = (virt_to_phys(base) | 850 826 (type << GITS_BASER_TYPE_SHIFT) | 851 827 ((entry_size - 1) << GITS_BASER_ENTRY_SIZE_SHIFT) | 852 - GITS_BASER_WaWb | 828 + cache | 853 829 shr | 854 830 GITS_BASER_VALID); 855 831 ··· 865 841 break; 866 842 } 867 843 868 - val |= (PAGE_SIZE / psz) - 1; 844 + val |= (alloc_size / psz) - 1; 869 845 870 846 writeq_relaxed(val, its->base + GITS_BASER + i * 8); 871 847 tmp = readq_relaxed(its->base + GITS_BASER + i * 8); ··· 875 851 * Shareability didn't stick. Just use 876 852 * whatever the read reported, which is likely 877 853 * to be the only thing this redistributor 878 - * supports. 854 + * supports. If that's zero, make it 855 + * non-cacheable as well. 
879 856 */ 880 857 shr = tmp & GITS_BASER_SHAREABILITY_MASK; 858 + if (!shr) 859 + cache = GITS_BASER_nC; 881 860 goto retry_baser; 882 861 } 883 862 ··· 909 882 } 910 883 911 884 pr_info("ITS: allocated %d %s @%lx (psz %dK, shr %d)\n", 912 - (int)(PAGE_SIZE / entry_size), 885 + (int)(alloc_size / entry_size), 913 886 its_base_type_string[type], 914 887 (unsigned long)virt_to_phys(base), 915 888 psz / SZ_1K, (int)shr >> GITS_BASER_SHAREABILITY_SHIFT); ··· 984 957 tmp = readq_relaxed(rbase + GICR_PROPBASER); 985 958 986 959 if ((tmp ^ val) & GICR_PROPBASER_SHAREABILITY_MASK) { 960 + if (!(tmp & GICR_PROPBASER_SHAREABILITY_MASK)) { 961 + /* 962 + * The HW reports non-shareable, we must 963 + * remove the cacheability attributes as 964 + * well. 965 + */ 966 + val &= ~(GICR_PROPBASER_SHAREABILITY_MASK | 967 + GICR_PROPBASER_CACHEABILITY_MASK); 968 + val |= GICR_PROPBASER_nC; 969 + writeq_relaxed(val, rbase + GICR_PROPBASER); 970 + } 987 971 pr_info_once("GIC: using cache flushing for LPI property table\n"); 988 972 gic_rdists->flags |= RDIST_FLAGS_PROPBASE_NEEDS_FLUSHING; 989 973 } 990 974 991 975 /* set PENDBASE */ 992 976 val = (page_to_phys(pend_page) | 993 - GICR_PROPBASER_InnerShareable | 994 - GICR_PROPBASER_WaWb); 977 + GICR_PENDBASER_InnerShareable | 978 + GICR_PENDBASER_WaWb); 995 979 996 980 writeq_relaxed(val, rbase + GICR_PENDBASER); 981 + tmp = readq_relaxed(rbase + GICR_PENDBASER); 982 + 983 + if (!(tmp & GICR_PENDBASER_SHAREABILITY_MASK)) { 984 + /* 985 + * The HW reports non-shareable, we must remove the 986 + * cacheability attributes as well. 987 + */ 988 + val &= ~(GICR_PENDBASER_SHAREABILITY_MASK | 989 + GICR_PENDBASER_CACHEABILITY_MASK); 990 + val |= GICR_PENDBASER_nC; 991 + writeq_relaxed(val, rbase + GICR_PENDBASER); 992 + } 997 993 998 994 /* Enable LPIs */ 999 995 val = readl_relaxed(rbase + GICR_CTLR); ··· 1053 1003 * This ITS wants a linear CPU number. 
1054 1004 */ 1055 1005 target = readq_relaxed(gic_data_rdist_rd_base() + GICR_TYPER); 1056 - target = GICR_TYPER_CPU_NUMBER(target); 1006 + target = GICR_TYPER_CPU_NUMBER(target) << 16; 1057 1007 } 1058 1008 1059 1009 /* Perform collection mapping */ ··· 1070 1020 static struct its_device *its_find_device(struct its_node *its, u32 dev_id) 1071 1021 { 1072 1022 struct its_device *its_dev = NULL, *tmp; 1023 + unsigned long flags; 1073 1024 1074 - raw_spin_lock(&its->lock); 1025 + raw_spin_lock_irqsave(&its->lock, flags); 1075 1026 1076 1027 list_for_each_entry(tmp, &its->its_device_list, entry) { 1077 1028 if (tmp->device_id == dev_id) { ··· 1081 1030 } 1082 1031 } 1083 1032 1084 - raw_spin_unlock(&its->lock); 1033 + raw_spin_unlock_irqrestore(&its->lock, flags); 1085 1034 1086 1035 return its_dev; 1087 1036 } ··· 1091 1040 { 1092 1041 struct its_device *dev; 1093 1042 unsigned long *lpi_map; 1043 + unsigned long flags; 1094 1044 void *itt; 1095 1045 int lpi_base; 1096 1046 int nr_lpis; ··· 1108 1056 nr_ites = max(2UL, roundup_pow_of_two(nvecs)); 1109 1057 sz = nr_ites * its->ite_size; 1110 1058 sz = max(sz, ITS_ITT_ALIGN) + ITS_ITT_ALIGN - 1; 1111 - itt = kmalloc(sz, GFP_KERNEL); 1059 + itt = kzalloc(sz, GFP_KERNEL); 1112 1060 lpi_map = its_lpi_alloc_chunks(nvecs, &lpi_base, &nr_lpis); 1113 1061 1114 1062 if (!dev || !itt || !lpi_map) { ··· 1127 1075 dev->device_id = dev_id; 1128 1076 INIT_LIST_HEAD(&dev->entry); 1129 1077 1130 - raw_spin_lock(&its->lock); 1078 + raw_spin_lock_irqsave(&its->lock, flags); 1131 1079 list_add(&dev->entry, &its->its_device_list); 1132 - raw_spin_unlock(&its->lock); 1080 + raw_spin_unlock_irqrestore(&its->lock, flags); 1133 1081 1134 1082 /* Bind the device to the first possible CPU */ 1135 1083 cpu = cpumask_first(cpu_online_mask); ··· 1143 1091 1144 1092 static void its_free_device(struct its_device *its_dev) 1145 1093 { 1146 - raw_spin_lock(&its_dev->its->lock); 1094 + unsigned long flags; 1095 + 1096 + 
raw_spin_lock_irqsave(&its_dev->its->lock, flags); 1147 1097 list_del(&its_dev->entry); 1148 - raw_spin_unlock(&its_dev->its->lock); 1098 + raw_spin_unlock_irqrestore(&its_dev->its->lock, flags); 1149 1099 kfree(its_dev->itt); 1150 1100 kfree(its_dev); 1151 1101 } ··· 1166 1112 return 0; 1167 1113 } 1168 1114 1115 + struct its_pci_alias { 1116 + struct pci_dev *pdev; 1117 + u32 dev_id; 1118 + u32 count; 1119 + }; 1120 + 1121 + static int its_pci_msi_vec_count(struct pci_dev *pdev) 1122 + { 1123 + int msi, msix; 1124 + 1125 + msi = max(pci_msi_vec_count(pdev), 0); 1126 + msix = max(pci_msix_vec_count(pdev), 0); 1127 + 1128 + return max(msi, msix); 1129 + } 1130 + 1131 + static int its_get_pci_alias(struct pci_dev *pdev, u16 alias, void *data) 1132 + { 1133 + struct its_pci_alias *dev_alias = data; 1134 + 1135 + dev_alias->dev_id = alias; 1136 + if (pdev != dev_alias->pdev) 1137 + dev_alias->count += its_pci_msi_vec_count(dev_alias->pdev); 1138 + 1139 + return 0; 1140 + } 1141 + 1169 1142 static int its_msi_prepare(struct irq_domain *domain, struct device *dev, 1170 1143 int nvec, msi_alloc_info_t *info) 1171 1144 { 1172 1145 struct pci_dev *pdev; 1173 1146 struct its_node *its; 1174 - u32 dev_id; 1175 1147 struct its_device *its_dev; 1148 + struct its_pci_alias dev_alias; 1176 1149 1177 1150 if (!dev_is_pci(dev)) 1178 1151 return -EINVAL; 1179 1152 1180 1153 pdev = to_pci_dev(dev); 1181 - dev_id = PCI_DEVID(pdev->bus->number, pdev->devfn); 1154 + dev_alias.pdev = pdev; 1155 + dev_alias.count = nvec; 1156 + 1157 + pci_for_each_dma_alias(pdev, its_get_pci_alias, &dev_alias); 1182 1158 its = domain->parent->host_data; 1183 1159 1184 - its_dev = its_find_device(its, dev_id); 1185 - if (WARN_ON(its_dev)) 1186 - return -EINVAL; 1160 + its_dev = its_find_device(its, dev_alias.dev_id); 1161 + if (its_dev) { 1162 + /* 1163 + * We already have seen this ID, probably through 1164 + * another alias (PCI bridge of some sort). No need to 1165 + * create the device. 
1166 + */ 1167 + dev_dbg(dev, "Reusing ITT for devID %x\n", dev_alias.dev_id); 1168 + goto out; 1169 + } 1187 1170 1188 - its_dev = its_create_device(its, dev_id, nvec); 1171 + its_dev = its_create_device(its, dev_alias.dev_id, dev_alias.count); 1189 1172 if (!its_dev) 1190 1173 return -ENOMEM; 1191 1174 1192 - dev_dbg(&pdev->dev, "ITT %d entries, %d bits\n", nvec, ilog2(nvec)); 1193 - 1175 + dev_dbg(&pdev->dev, "ITT %d entries, %d bits\n", 1176 + dev_alias.count, ilog2(dev_alias.count)); 1177 + out: 1194 1178 info->scratchpad[0].ptr = its_dev; 1195 1179 info->scratchpad[1].ptr = dev; 1196 1180 return 0; ··· 1347 1255 .deactivate = its_irq_domain_deactivate, 1348 1256 }; 1349 1257 1258 + static int its_force_quiescent(void __iomem *base) 1259 + { 1260 + u32 count = 1000000; /* 1s */ 1261 + u32 val; 1262 + 1263 + val = readl_relaxed(base + GITS_CTLR); 1264 + if (val & GITS_CTLR_QUIESCENT) 1265 + return 0; 1266 + 1267 + /* Disable the generation of all interrupts to this ITS */ 1268 + val &= ~GITS_CTLR_ENABLE; 1269 + writel_relaxed(val, base + GITS_CTLR); 1270 + 1271 + /* Poll GITS_CTLR and wait until ITS becomes quiescent */ 1272 + while (1) { 1273 + val = readl_relaxed(base + GITS_CTLR); 1274 + if (val & GITS_CTLR_QUIESCENT) 1275 + return 0; 1276 + 1277 + count--; 1278 + if (!count) 1279 + return -EBUSY; 1280 + 1281 + cpu_relax(); 1282 + udelay(1); 1283 + } 1284 + } 1285 + 1350 1286 static int its_probe(struct device_node *node, struct irq_domain *parent) 1351 1287 { 1352 1288 struct resource res; ··· 1400 1280 if (val != 0x30 && val != 0x40) { 1401 1281 pr_warn("%s: no ITS detected, giving up\n", node->full_name); 1402 1282 err = -ENODEV; 1283 + goto out_unmap; 1284 + } 1285 + 1286 + err = its_force_quiescent(its_base); 1287 + if (err) { 1288 + pr_warn("%s: failed to quiesce, giving up\n", 1289 + node->full_name); 1403 1290 goto out_unmap; 1404 1291 } 1405 1292 ··· 1449 1322 1450 1323 writeq_relaxed(baser, its->base + GITS_CBASER); 1451 1324 tmp = 
readq_relaxed(its->base + GITS_CBASER); 1452 - writeq_relaxed(0, its->base + GITS_CWRITER); 1453 - writel_relaxed(1, its->base + GITS_CTLR); 1454 1325 1455 - if ((tmp ^ baser) & GITS_BASER_SHAREABILITY_MASK) { 1326 + if ((tmp ^ baser) & GITS_CBASER_SHAREABILITY_MASK) { 1327 + if (!(tmp & GITS_CBASER_SHAREABILITY_MASK)) { 1328 + /* 1329 + * The HW reports non-shareable, we must 1330 + * remove the cacheability attributes as 1331 + * well. 1332 + */ 1333 + baser &= ~(GITS_CBASER_SHAREABILITY_MASK | 1334 + GITS_CBASER_CACHEABILITY_MASK); 1335 + baser |= GITS_CBASER_nC; 1336 + writeq_relaxed(baser, its->base + GITS_CBASER); 1337 + } 1456 1338 pr_info("ITS: using cache flushing for cmd queue\n"); 1457 1339 its->flags |= ITS_FLAGS_CMDQ_NEEDS_FLUSHING; 1458 1340 } 1341 + 1342 + writeq_relaxed(0, its->base + GITS_CWRITER); 1343 + writel_relaxed(GITS_CTLR_ENABLE, its->base + GITS_CTLR); 1459 1344 1460 1345 if (of_property_read_bool(its->msi_chip.of_node, "msi-controller")) { 1461 1346 its->domain = irq_domain_add_tree(NULL, &its_domain_ops, its); ··· 1521 1382 1522 1383 int its_cpu_init(void) 1523 1384 { 1524 - if (!gic_rdists_supports_plpis()) { 1525 - pr_info("CPU%d: LPIs not supported\n", smp_processor_id()); 1526 - return -ENXIO; 1527 - } 1528 - 1529 1385 if (!list_empty(&its_nodes)) { 1386 + if (!gic_rdists_supports_plpis()) { 1387 + pr_info("CPU%d: LPIs not supported\n", smp_processor_id()); 1388 + return -ENXIO; 1389 + } 1530 1390 its_cpu_init_lpis(); 1531 1391 its_cpu_init_collection(); 1532 1392 }
+1 -1
drivers/irqchip/irq-gic-v3.c
··· 466 466 tlist |= 1 << (mpidr & 0xf); 467 467 468 468 cpu = cpumask_next(cpu, mask); 469 - if (cpu == nr_cpu_ids) 469 + if (cpu >= nr_cpu_ids) 470 470 goto out; 471 471 472 472 mpidr = cpu_logical_map(cpu);
+12 -8
drivers/irqchip/irq-gic.c
··· 154 154 static void gic_mask_irq(struct irq_data *d) 155 155 { 156 156 u32 mask = 1 << (gic_irq(d) % 32); 157 + unsigned long flags; 157 158 158 - raw_spin_lock(&irq_controller_lock); 159 + raw_spin_lock_irqsave(&irq_controller_lock, flags); 159 160 writel_relaxed(mask, gic_dist_base(d) + GIC_DIST_ENABLE_CLEAR + (gic_irq(d) / 32) * 4); 160 161 if (gic_arch_extn.irq_mask) 161 162 gic_arch_extn.irq_mask(d); 162 - raw_spin_unlock(&irq_controller_lock); 163 + raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 163 164 } 164 165 165 166 static void gic_unmask_irq(struct irq_data *d) 166 167 { 167 168 u32 mask = 1 << (gic_irq(d) % 32); 169 + unsigned long flags; 168 170 169 - raw_spin_lock(&irq_controller_lock); 171 + raw_spin_lock_irqsave(&irq_controller_lock, flags); 170 172 if (gic_arch_extn.irq_unmask) 171 173 gic_arch_extn.irq_unmask(d); 172 174 writel_relaxed(mask, gic_dist_base(d) + GIC_DIST_ENABLE_SET + (gic_irq(d) / 32) * 4); 173 - raw_spin_unlock(&irq_controller_lock); 175 + raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 174 176 } 175 177 176 178 static void gic_eoi_irq(struct irq_data *d) ··· 190 188 { 191 189 void __iomem *base = gic_dist_base(d); 192 190 unsigned int gicirq = gic_irq(d); 191 + unsigned long flags; 193 192 int ret; 194 193 195 194 /* Interrupt configuration for SGIs can't be changed */ ··· 202 199 type != IRQ_TYPE_EDGE_RISING) 203 200 return -EINVAL; 204 201 205 - raw_spin_lock(&irq_controller_lock); 202 + raw_spin_lock_irqsave(&irq_controller_lock, flags); 206 203 207 204 if (gic_arch_extn.irq_set_type) 208 205 gic_arch_extn.irq_set_type(d, type); 209 206 210 207 ret = gic_configure_irq(gicirq, type, base, NULL); 211 208 212 - raw_spin_unlock(&irq_controller_lock); 209 + raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 213 210 214 211 return ret; 215 212 } ··· 230 227 void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3); 231 228 unsigned int cpu, shift = (gic_irq(d) % 4) * 8; 232 229 u32 val, 
mask, bit; 230 + unsigned long flags; 233 231 234 232 if (!force) 235 233 cpu = cpumask_any_and(mask_val, cpu_online_mask); ··· 240 236 if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids) 241 237 return -EINVAL; 242 238 243 - raw_spin_lock(&irq_controller_lock); 239 + raw_spin_lock_irqsave(&irq_controller_lock, flags); 244 240 mask = 0xff << shift; 245 241 bit = gic_cpu_map[cpu] << shift; 246 242 val = readl_relaxed(reg) & ~mask; 247 243 writel_relaxed(val | bit, reg); 248 - raw_spin_unlock(&irq_controller_lock); 244 + raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 249 245 250 246 return IRQ_SET_MASK_OK; 251 247 }
+1 -1
drivers/isdn/hardware/mISDN/hfcpci.c
··· 1755 1755 enable_hwirq(hc); 1756 1756 spin_unlock_irqrestore(&hc->lock, flags); 1757 1757 /* Timeout 80ms */ 1758 - current->state = TASK_UNINTERRUPTIBLE; 1758 + set_current_state(TASK_UNINTERRUPTIBLE); 1759 1759 schedule_timeout((80 * HZ) / 1000); 1760 1760 printk(KERN_INFO "HFC PCI: IRQ %d count %d\n", 1761 1761 hc->irq, hc->irqcnt);
+1 -1
drivers/isdn/icn/icn.c
··· 1609 1609 if (ints[0] > 1) 1610 1610 membase = (unsigned long)ints[2]; 1611 1611 if (str && *str) { 1612 - strcpy(sid, str); 1612 + strlcpy(sid, str, sizeof(sid)); 1613 1613 icn_id = sid; 1614 1614 if ((p = strchr(sid, ','))) { 1615 1615 *p++ = 0;
+1 -1
drivers/lguest/Kconfig
··· 1 1 config LGUEST 2 2 tristate "Linux hypervisor example code" 3 - depends on X86_32 && EVENTFD && TTY 3 + depends on X86_32 && EVENTFD && TTY && PCI_DIRECT 4 4 select HVC_DRIVER 5 5 ---help--- 6 6 This is a very simple module which allows you to run
+11 -4
drivers/md/dm-io.c
··· 289 289 struct request_queue *q = bdev_get_queue(where->bdev); 290 290 unsigned short logical_block_size = queue_logical_block_size(q); 291 291 sector_t num_sectors; 292 + unsigned int uninitialized_var(special_cmd_max_sectors); 292 293 293 - /* Reject unsupported discard requests */ 294 - if ((rw & REQ_DISCARD) && !blk_queue_discard(q)) { 294 + /* 295 + * Reject unsupported discard and write same requests. 296 + */ 297 + if (rw & REQ_DISCARD) 298 + special_cmd_max_sectors = q->limits.max_discard_sectors; 299 + else if (rw & REQ_WRITE_SAME) 300 + special_cmd_max_sectors = q->limits.max_write_same_sectors; 301 + if ((rw & (REQ_DISCARD | REQ_WRITE_SAME)) && special_cmd_max_sectors == 0) { 295 302 dec_count(io, region, -EOPNOTSUPP); 296 303 return; 297 304 } ··· 324 317 store_io_and_region_in_bio(bio, io, region); 325 318 326 319 if (rw & REQ_DISCARD) { 327 - num_sectors = min_t(sector_t, q->limits.max_discard_sectors, remaining); 320 + num_sectors = min_t(sector_t, special_cmd_max_sectors, remaining); 328 321 bio->bi_iter.bi_size = num_sectors << SECTOR_SHIFT; 329 322 remaining -= num_sectors; 330 323 } else if (rw & REQ_WRITE_SAME) { ··· 333 326 */ 334 327 dp->get_page(dp, &page, &len, &offset); 335 328 bio_add_page(bio, page, logical_block_size, offset); 336 - num_sectors = min_t(sector_t, q->limits.max_write_same_sectors, remaining); 329 + num_sectors = min_t(sector_t, special_cmd_max_sectors, remaining); 337 330 bio->bi_iter.bi_size = num_sectors << SECTOR_SHIFT; 338 331 339 332 offset = 0;
+109 -11
drivers/md/dm-snap.c
··· 20 20 #include <linux/log2.h> 21 21 #include <linux/dm-kcopyd.h> 22 22 23 + #include "dm.h" 24 + 23 25 #include "dm-exception-store.h" 24 26 25 27 #define DM_MSG_PREFIX "snapshots" ··· 293 291 }; 294 292 295 293 /* 294 + * This structure is allocated for each origin target 295 + */ 296 + struct dm_origin { 297 + struct dm_dev *dev; 298 + struct dm_target *ti; 299 + unsigned split_boundary; 300 + struct list_head hash_list; 301 + }; 302 + 303 + /* 296 304 * Size of the hash table for origin volumes. If we make this 297 305 * the size of the minors list then it should be nearly perfect 298 306 */ 299 307 #define ORIGIN_HASH_SIZE 256 300 308 #define ORIGIN_MASK 0xFF 301 309 static struct list_head *_origins; 310 + static struct list_head *_dm_origins; 302 311 static struct rw_semaphore _origins_lock; 303 312 304 313 static DECLARE_WAIT_QUEUE_HEAD(_pending_exceptions_done); ··· 323 310 _origins = kmalloc(ORIGIN_HASH_SIZE * sizeof(struct list_head), 324 311 GFP_KERNEL); 325 312 if (!_origins) { 326 - DMERR("unable to allocate memory"); 313 + DMERR("unable to allocate memory for _origins"); 327 314 return -ENOMEM; 328 315 } 329 - 330 316 for (i = 0; i < ORIGIN_HASH_SIZE; i++) 331 317 INIT_LIST_HEAD(_origins + i); 318 + 319 + _dm_origins = kmalloc(ORIGIN_HASH_SIZE * sizeof(struct list_head), 320 + GFP_KERNEL); 321 + if (!_dm_origins) { 322 + DMERR("unable to allocate memory for _dm_origins"); 323 + kfree(_origins); 324 + return -ENOMEM; 325 + } 326 + for (i = 0; i < ORIGIN_HASH_SIZE; i++) 327 + INIT_LIST_HEAD(_dm_origins + i); 328 + 332 329 init_rwsem(&_origins_lock); 333 330 334 331 return 0; ··· 347 324 static void exit_origin_hash(void) 348 325 { 349 326 kfree(_origins); 327 + kfree(_dm_origins); 350 328 } 351 329 352 330 static unsigned origin_hash(struct block_device *bdev) ··· 372 348 { 373 349 struct list_head *sl = &_origins[origin_hash(o->bdev)]; 374 350 list_add_tail(&o->hash_list, sl); 351 + } 352 + 353 + static struct dm_origin *__lookup_dm_origin(struct 
block_device *origin) 354 + { 355 + struct list_head *ol; 356 + struct dm_origin *o; 357 + 358 + ol = &_dm_origins[origin_hash(origin)]; 359 + list_for_each_entry (o, ol, hash_list) 360 + if (bdev_equal(o->dev->bdev, origin)) 361 + return o; 362 + 363 + return NULL; 364 + } 365 + 366 + static void __insert_dm_origin(struct dm_origin *o) 367 + { 368 + struct list_head *sl = &_dm_origins[origin_hash(o->dev->bdev)]; 369 + list_add_tail(&o->hash_list, sl); 370 + } 371 + 372 + static void __remove_dm_origin(struct dm_origin *o) 373 + { 374 + list_del(&o->hash_list); 375 375 } 376 376 377 377 /* ··· 1888 1840 static void snapshot_resume(struct dm_target *ti) 1889 1841 { 1890 1842 struct dm_snapshot *s = ti->private; 1891 - struct dm_snapshot *snap_src = NULL, *snap_dest = NULL; 1843 + struct dm_snapshot *snap_src = NULL, *snap_dest = NULL, *snap_merging = NULL; 1844 + struct dm_origin *o; 1845 + struct mapped_device *origin_md = NULL; 1846 + bool must_restart_merging = false; 1892 1847 1893 1848 down_read(&_origins_lock); 1849 + 1850 + o = __lookup_dm_origin(s->origin->bdev); 1851 + if (o) 1852 + origin_md = dm_table_get_md(o->ti->table); 1853 + if (!origin_md) { 1854 + (void) __find_snapshots_sharing_cow(s, NULL, NULL, &snap_merging); 1855 + if (snap_merging) 1856 + origin_md = dm_table_get_md(snap_merging->ti->table); 1857 + } 1858 + if (origin_md == dm_table_get_md(ti->table)) 1859 + origin_md = NULL; 1860 + if (origin_md) { 1861 + if (dm_hold(origin_md)) 1862 + origin_md = NULL; 1863 + } 1864 + 1865 + up_read(&_origins_lock); 1866 + 1867 + if (origin_md) { 1868 + dm_internal_suspend_fast(origin_md); 1869 + if (snap_merging && test_bit(RUNNING_MERGE, &snap_merging->state_bits)) { 1870 + must_restart_merging = true; 1871 + stop_merge(snap_merging); 1872 + } 1873 + } 1874 + 1875 + down_read(&_origins_lock); 1876 + 1894 1877 (void) __find_snapshots_sharing_cow(s, &snap_src, &snap_dest, NULL); 1895 1878 if (snap_src && snap_dest) { 1896 1879 down_write(&snap_src->lock); 
··· 1930 1851 up_write(&snap_dest->lock); 1931 1852 up_write(&snap_src->lock); 1932 1853 } 1854 + 1933 1855 up_read(&_origins_lock); 1856 + 1857 + if (origin_md) { 1858 + if (must_restart_merging) 1859 + start_merge(snap_merging); 1860 + dm_internal_resume_fast(origin_md); 1861 + dm_put(origin_md); 1862 + } 1934 1863 1935 1864 /* Now we have correct chunk size, reregister */ 1936 1865 reregister_snapshot(s); ··· 2220 2133 * Origin: maps a linear range of a device, with hooks for snapshotting. 2221 2134 */ 2222 2135 2223 - struct dm_origin { 2224 - struct dm_dev *dev; 2225 - unsigned split_boundary; 2226 - }; 2227 - 2228 2136 /* 2229 2137 * Construct an origin mapping: <dev_path> 2230 2138 * The context for an origin is merely a 'struct dm_dev *' ··· 2248 2166 goto bad_open; 2249 2167 } 2250 2168 2169 + o->ti = ti; 2251 2170 ti->private = o; 2252 2171 ti->num_flush_bios = 1; 2253 2172 ··· 2263 2180 static void origin_dtr(struct dm_target *ti) 2264 2181 { 2265 2182 struct dm_origin *o = ti->private; 2183 + 2266 2184 dm_put_device(ti, o->dev); 2267 2185 kfree(o); 2268 2186 } ··· 2300 2216 struct dm_origin *o = ti->private; 2301 2217 2302 2218 o->split_boundary = get_origin_minimum_chunksize(o->dev->bdev); 2219 + 2220 + down_write(&_origins_lock); 2221 + __insert_dm_origin(o); 2222 + up_write(&_origins_lock); 2223 + } 2224 + 2225 + static void origin_postsuspend(struct dm_target *ti) 2226 + { 2227 + struct dm_origin *o = ti->private; 2228 + 2229 + down_write(&_origins_lock); 2230 + __remove_dm_origin(o); 2231 + up_write(&_origins_lock); 2303 2232 } 2304 2233 2305 2234 static void origin_status(struct dm_target *ti, status_type_t type, ··· 2355 2258 2356 2259 static struct target_type origin_target = { 2357 2260 .name = "snapshot-origin", 2358 - .version = {1, 8, 1}, 2261 + .version = {1, 9, 0}, 2359 2262 .module = THIS_MODULE, 2360 2263 .ctr = origin_ctr, 2361 2264 .dtr = origin_dtr, 2362 2265 .map = origin_map, 2363 2266 .resume = origin_resume, 2267 + .postsuspend = 
origin_postsuspend, 2364 2268 .status = origin_status, 2365 2269 .merge = origin_merge, 2366 2270 .iterate_devices = origin_iterate_devices, ··· 2369 2271 2370 2272 static struct target_type snapshot_target = { 2371 2273 .name = "snapshot", 2372 - .version = {1, 12, 0}, 2274 + .version = {1, 13, 0}, 2373 2275 .module = THIS_MODULE, 2374 2276 .ctr = snapshot_ctr, 2375 2277 .dtr = snapshot_dtr, ··· 2383 2285 2384 2286 static struct target_type merge_target = { 2385 2287 .name = dm_snapshot_merge_target_name, 2386 - .version = {1, 2, 0}, 2288 + .version = {1, 3, 0}, 2387 2289 .module = THIS_MODULE, 2388 2290 .ctr = snapshot_ctr, 2389 2291 .dtr = snapshot_dtr,
-11
drivers/md/dm-thin.c
··· 2358 2358 return DM_MAPIO_REMAPPED; 2359 2359 2360 2360 case -ENODATA: 2361 - if (get_pool_mode(tc->pool) == PM_READ_ONLY) { 2362 - /* 2363 - * This block isn't provisioned, and we have no way 2364 - * of doing so. 2365 - */ 2366 - handle_unserviceable_bio(tc->pool, bio); 2367 - cell_defer_no_holder(tc, virt_cell); 2368 - return DM_MAPIO_SUBMITTED; 2369 - } 2370 - /* fall through */ 2371 - 2372 2361 case -EWOULDBLOCK: 2373 2362 thin_defer_cell(tc, virt_cell); 2374 2363 return DM_MAPIO_SUBMITTED;
+37 -10
drivers/md/dm.c
··· 433 433 434 434 dm_get(md); 435 435 atomic_inc(&md->open_count); 436 - 437 436 out: 438 437 spin_unlock(&_minor_lock); 439 438 ··· 441 442 442 443 static void dm_blk_close(struct gendisk *disk, fmode_t mode) 443 444 { 444 - struct mapped_device *md = disk->private_data; 445 + struct mapped_device *md; 445 446 446 447 spin_lock(&_minor_lock); 448 + 449 + md = disk->private_data; 450 + if (WARN_ON(!md)) 451 + goto out; 447 452 448 453 if (atomic_dec_and_test(&md->open_count) && 449 454 (test_bit(DMF_DEFERRED_REMOVE, &md->flags))) 450 455 queue_work(deferred_remove_workqueue, &deferred_remove_work); 451 456 452 457 dm_put(md); 453 - 458 + out: 454 459 spin_unlock(&_minor_lock); 455 460 } 456 461 ··· 2244 2241 int minor = MINOR(disk_devt(md->disk)); 2245 2242 2246 2243 unlock_fs(md); 2247 - bdput(md->bdev); 2248 2244 destroy_workqueue(md->wq); 2249 2245 2250 2246 if (md->kworker_task) ··· 2254 2252 mempool_destroy(md->rq_pool); 2255 2253 if (md->bs) 2256 2254 bioset_free(md->bs); 2257 - blk_integrity_unregister(md->disk); 2258 - del_gendisk(md->disk); 2255 + 2259 2256 cleanup_srcu_struct(&md->io_barrier); 2260 2257 free_table_devices(&md->table_devices); 2261 - free_minor(minor); 2258 + dm_stats_cleanup(&md->stats); 2262 2259 2263 2260 spin_lock(&_minor_lock); 2264 2261 md->disk->private_data = NULL; 2265 2262 spin_unlock(&_minor_lock); 2266 - 2263 + if (blk_get_integrity(md->disk)) 2264 + blk_integrity_unregister(md->disk); 2265 + del_gendisk(md->disk); 2267 2266 put_disk(md->disk); 2268 2267 blk_cleanup_queue(md->queue); 2269 - dm_stats_cleanup(&md->stats); 2268 + bdput(md->bdev); 2269 + free_minor(minor); 2270 + 2270 2271 module_put(THIS_MODULE); 2271 2272 kfree(md); 2272 2273 } ··· 2621 2616 BUG_ON(test_bit(DMF_FREEING, &md->flags)); 2622 2617 } 2623 2618 2619 + int dm_hold(struct mapped_device *md) 2620 + { 2621 + spin_lock(&_minor_lock); 2622 + if (test_bit(DMF_FREEING, &md->flags)) { 2623 + spin_unlock(&_minor_lock); 2624 + return -EBUSY; 2625 + } 2626 + 
dm_get(md); 2627 + spin_unlock(&_minor_lock); 2628 + return 0; 2629 + } 2630 + EXPORT_SYMBOL_GPL(dm_hold); 2631 + 2624 2632 const char *dm_device_name(struct mapped_device *md) 2625 2633 { 2626 2634 return md->name; ··· 2647 2629 2648 2630 might_sleep(); 2649 2631 2650 - spin_lock(&_minor_lock); 2651 2632 map = dm_get_live_table(md, &srcu_idx); 2633 + 2634 + spin_lock(&_minor_lock); 2652 2635 idr_replace(&_minor_idr, MINOR_ALLOCED, MINOR(disk_devt(dm_disk(md)))); 2653 2636 set_bit(DMF_FREEING, &md->flags); 2654 2637 spin_unlock(&_minor_lock); ··· 2657 2638 if (dm_request_based(md)) 2658 2639 flush_kthread_worker(&md->kworker); 2659 2640 2641 + /* 2642 + * Take suspend_lock so that presuspend and postsuspend methods 2643 + * do not race with internal suspend. 2644 + */ 2645 + mutex_lock(&md->suspend_lock); 2660 2646 if (!dm_suspended_md(md)) { 2661 2647 dm_table_presuspend_targets(map); 2662 2648 dm_table_postsuspend_targets(map); 2663 2649 } 2650 + mutex_unlock(&md->suspend_lock); 2664 2651 2665 2652 /* dm_put_live_table must be before msleep, otherwise deadlock is possible */ 2666 2653 dm_put_live_table(md, srcu_idx); ··· 3140 3115 flush_workqueue(md->wq); 3141 3116 dm_wait_for_completion(md, TASK_UNINTERRUPTIBLE); 3142 3117 } 3118 + EXPORT_SYMBOL_GPL(dm_internal_suspend_fast); 3143 3119 3144 3120 void dm_internal_resume_fast(struct mapped_device *md) 3145 3121 { ··· 3152 3126 done: 3153 3127 mutex_unlock(&md->suspend_lock); 3154 3128 } 3129 + EXPORT_SYMBOL_GPL(dm_internal_resume_fast); 3155 3130 3156 3131 /*----------------------------------------------------------------- 3157 3132 * Event notification.
+2 -1
drivers/md/md.c
··· 5080 5080 } 5081 5081 if (err) { 5082 5082 mddev_detach(mddev); 5083 - pers->free(mddev, mddev->private); 5083 + if (mddev->private) 5084 + pers->free(mddev, mddev->private); 5084 5085 module_put(pers->owner); 5085 5086 bitmap_destroy(mddev); 5086 5087 return err;
-2
drivers/md/raid0.c
··· 467 467 dump_zones(mddev); 468 468 469 469 ret = md_integrity_register(mddev); 470 - if (ret) 471 - raid0_free(mddev, conf); 472 470 473 471 return ret; 474 472 }
+1 -1
drivers/mfd/kempld-core.c
··· 739 739 for (id = kempld_dmi_table; 740 740 id->matches[0].slot != DMI_NONE; id++) 741 741 if (strstr(id->ident, force_device_id)) 742 - if (id->callback && id->callback(id)) 742 + if (id->callback && !id->callback(id)) 743 743 break; 744 744 if (id->matches[0].slot == DMI_NONE) 745 745 return -ENODEV;
+24 -6
drivers/mfd/rtsx_usb.c
··· 196 196 int rtsx_usb_ep0_read_register(struct rtsx_ucr *ucr, u16 addr, u8 *data) 197 197 { 198 198 u16 value; 199 + u8 *buf; 200 + int ret; 199 201 200 202 if (!data) 201 203 return -EINVAL; 202 - *data = 0; 204 + 205 + buf = kzalloc(sizeof(u8), GFP_KERNEL); 206 + if (!buf) 207 + return -ENOMEM; 203 208 204 209 addr |= EP0_READ_REG_CMD << EP0_OP_SHIFT; 205 210 value = swab16(addr); 206 211 207 - return usb_control_msg(ucr->pusb_dev, 212 + ret = usb_control_msg(ucr->pusb_dev, 208 213 usb_rcvctrlpipe(ucr->pusb_dev, 0), RTSX_USB_REQ_REG_OP, 209 214 USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE, 210 - value, 0, data, 1, 100); 215 + value, 0, buf, 1, 100); 216 + *data = *buf; 217 + 218 + kfree(buf); 219 + return ret; 211 220 } 212 221 EXPORT_SYMBOL_GPL(rtsx_usb_ep0_read_register); 213 222 ··· 297 288 int rtsx_usb_get_card_status(struct rtsx_ucr *ucr, u16 *status) 298 289 { 299 290 int ret; 291 + u16 *buf; 300 292 301 293 if (!status) 302 294 return -EINVAL; 303 295 304 - if (polling_pipe == 0) 296 + if (polling_pipe == 0) { 297 + buf = kzalloc(sizeof(u16), GFP_KERNEL); 298 + if (!buf) 299 + return -ENOMEM; 300 + 305 301 ret = usb_control_msg(ucr->pusb_dev, 306 302 usb_rcvctrlpipe(ucr->pusb_dev, 0), 307 303 RTSX_USB_REQ_POLL, 308 304 USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE, 309 - 0, 0, status, 2, 100); 310 - else 305 + 0, 0, buf, 2, 100); 306 + *status = *buf; 307 + 308 + kfree(buf); 309 + } else { 311 310 ret = rtsx_usb_get_status_with_bulk(ucr, status); 311 + } 312 312 313 313 /* usb_control_msg may return positive when success */ 314 314 if (ret < 0)
+2
drivers/misc/mei/init.c
··· 341 341 342 342 dev->dev_state = MEI_DEV_POWER_DOWN; 343 343 mei_reset(dev); 344 + /* move device to disabled state unconditionally */ 345 + dev->dev_state = MEI_DEV_DISABLED; 344 346 345 347 mutex_unlock(&dev->device_lock); 346 348
+1 -1
drivers/mmc/core/pwrseq_simple.c
··· 124 124 PTR_ERR(pwrseq->reset_gpios[i]) != -ENOSYS) { 125 125 ret = PTR_ERR(pwrseq->reset_gpios[i]); 126 126 127 - while (--i) 127 + while (i--) 128 128 gpiod_put(pwrseq->reset_gpios[i]); 129 129 130 130 goto clk_put;
+1
drivers/mtd/nand/Kconfig
··· 526 526 527 527 config MTD_NAND_HISI504 528 528 tristate "Support for NAND controller on Hisilicon SoC Hip04" 529 + depends on HAS_DMA 529 530 help 530 531 Enables support for NAND controller on Hisilicon SoC Hip04. 531 532
+44 -6
drivers/mtd/nand/pxa3xx_nand.c
··· 480 480 nand_writel(info, NDCR, ndcr | int_mask); 481 481 } 482 482 483 + static void drain_fifo(struct pxa3xx_nand_info *info, void *data, int len) 484 + { 485 + if (info->ecc_bch) { 486 + int timeout; 487 + 488 + /* 489 + * According to the datasheet, when reading from NDDB 490 + * with BCH enabled, after each 32 bytes reads, we 491 + * have to make sure that the NDSR.RDDREQ bit is set. 492 + * 493 + * Drain the FIFO 8 32 bits reads at a time, and skip 494 + * the polling on the last read. 495 + */ 496 + while (len > 8) { 497 + __raw_readsl(info->mmio_base + NDDB, data, 8); 498 + 499 + for (timeout = 0; 500 + !(nand_readl(info, NDSR) & NDSR_RDDREQ); 501 + timeout++) { 502 + if (timeout >= 5) { 503 + dev_err(&info->pdev->dev, 504 + "Timeout on RDDREQ while draining the FIFO\n"); 505 + return; 506 + } 507 + 508 + mdelay(1); 509 + } 510 + 511 + data += 32; 512 + len -= 8; 513 + } 514 + } 515 + 516 + __raw_readsl(info->mmio_base + NDDB, data, len); 517 + } 518 + 483 519 static void handle_data_pio(struct pxa3xx_nand_info *info) 484 520 { 485 521 unsigned int do_bytes = min(info->data_size, info->chunk_size); ··· 532 496 DIV_ROUND_UP(info->oob_size, 4)); 533 497 break; 534 498 case STATE_PIO_READING: 535 - __raw_readsl(info->mmio_base + NDDB, 536 - info->data_buff + info->data_buff_pos, 537 - DIV_ROUND_UP(do_bytes, 4)); 499 + drain_fifo(info, 500 + info->data_buff + info->data_buff_pos, 501 + DIV_ROUND_UP(do_bytes, 4)); 538 502 539 503 if (info->oob_size > 0) 540 - __raw_readsl(info->mmio_base + NDDB, 541 - info->oob_buff + info->oob_buff_pos, 542 - DIV_ROUND_UP(info->oob_size, 4)); 504 + drain_fifo(info, 505 + info->oob_buff + info->oob_buff_pos, 506 + DIV_ROUND_UP(info->oob_size, 4)); 543 507 break; 544 508 default: 545 509 dev_err(&info->pdev->dev, "%s: invalid state %d\n", __func__, ··· 1608 1572 int ret, irq, cs; 1609 1573 1610 1574 pdata = dev_get_platdata(&pdev->dev); 1575 + if (pdata->num_cs <= 0) 1576 + return -ENODEV; 1611 1577 info = 
devm_kzalloc(&pdev->dev, sizeof(*info) + (sizeof(*mtd) + 1612 1578 sizeof(*host)) * pdata->num_cs, GFP_KERNEL); 1613 1579 if (!info)
+2 -1
drivers/mtd/ubi/eba.c
··· 425 425 ubi_warn(ubi, "corrupted VID header at PEB %d, LEB %d:%d", 426 426 pnum, vol_id, lnum); 427 427 err = -EBADMSG; 428 - } else 428 + } else { 429 429 err = -EINVAL; 430 430 ubi_ro_mode(ubi); 431 + } 431 432 } 432 433 goto out_free; 433 434 } else if (err == UBI_IO_BITFLIPS)
+1 -1
drivers/net/Kconfig
··· 157 157 making it transparent to the connected L2 switch. 158 158 159 159 Ipvlan devices can be added using the "ip" command from the 160 - iproute2 package starting with the iproute2-X.Y.ZZ release: 160 + iproute2 package starting with the iproute2-3.19 release: 161 161 162 162 "ip link add link <main-dev> [ NAME ] type ipvlan" 163 163
+1 -1
drivers/net/appletalk/Kconfig
··· 40 40 41 41 config LTPC 42 42 tristate "Apple/Farallon LocalTalk PC support" 43 - depends on DEV_APPLETALK && (ISA || EISA) && ISA_DMA_API 43 + depends on DEV_APPLETALK && (ISA || EISA) && ISA_DMA_API && VIRT_TO_BUS 44 44 help 45 45 This allows you to use the AppleTalk PC card to connect to LocalTalk 46 46 networks. The card is also known as the Farallon PhoneNet PC card.
+2 -1
drivers/net/bonding/bond_main.c
··· 3850 3850 /* Find out if any slaves have the same mapping as this skb. */ 3851 3851 bond_for_each_slave_rcu(bond, slave, iter) { 3852 3852 if (slave->queue_id == skb->queue_mapping) { 3853 - if (bond_slave_can_tx(slave)) { 3853 + if (bond_slave_is_up(slave) && 3854 + slave->link == BOND_LINK_UP) { 3854 3855 bond_dev_queue_xmit(bond, skb, slave->dev); 3855 3856 return 0; 3856 3857 }
+1 -1
drivers/net/can/Kconfig
··· 131 131 132 132 config CAN_XILINXCAN 133 133 tristate "Xilinx CAN" 134 - depends on ARCH_ZYNQ || MICROBLAZE || COMPILE_TEST 134 + depends on ARCH_ZYNQ || ARM64 || MICROBLAZE || COMPILE_TEST 135 135 depends on COMMON_CLK && HAS_IOMEM 136 136 ---help--- 137 137 Xilinx CAN driver. This driver supports both soft AXI CAN IP and
+8
drivers/net/can/dev.c
··· 579 579 skb->pkt_type = PACKET_BROADCAST; 580 580 skb->ip_summed = CHECKSUM_UNNECESSARY; 581 581 582 + skb_reset_mac_header(skb); 583 + skb_reset_network_header(skb); 584 + skb_reset_transport_header(skb); 585 + 582 586 can_skb_reserve(skb); 583 587 can_skb_prv(skb)->ifindex = dev->ifindex; 584 588 ··· 606 602 skb->protocol = htons(ETH_P_CANFD); 607 603 skb->pkt_type = PACKET_BROADCAST; 608 604 skb->ip_summed = CHECKSUM_UNNECESSARY; 605 + 606 + skb_reset_mac_header(skb); 607 + skb_reset_network_header(skb); 608 + skb_reset_transport_header(skb); 609 609 610 610 can_skb_reserve(skb); 611 611 can_skb_prv(skb)->ifindex = dev->ifindex;
+11 -7
drivers/net/can/flexcan.c
··· 592 592 rx_state = unlikely(reg_esr & FLEXCAN_ESR_RX_WRN) ? 593 593 CAN_STATE_ERROR_WARNING : CAN_STATE_ERROR_ACTIVE; 594 594 new_state = max(tx_state, rx_state); 595 - } else if (unlikely(flt == FLEXCAN_ESR_FLT_CONF_PASSIVE)) { 595 + } else { 596 596 __flexcan_get_berr_counter(dev, &bec); 597 - new_state = CAN_STATE_ERROR_PASSIVE; 597 + new_state = flt == FLEXCAN_ESR_FLT_CONF_PASSIVE ? 598 + CAN_STATE_ERROR_PASSIVE : CAN_STATE_BUS_OFF; 598 599 rx_state = bec.rxerr >= bec.txerr ? new_state : 0; 599 600 tx_state = bec.rxerr <= bec.txerr ? new_state : 0; 600 - } else { 601 - new_state = CAN_STATE_BUS_OFF; 602 601 } 603 602 604 603 /* state hasn't changed */ ··· 1157 1158 const struct flexcan_devtype_data *devtype_data; 1158 1159 struct net_device *dev; 1159 1160 struct flexcan_priv *priv; 1161 + struct regulator *reg_xceiver; 1160 1162 struct resource *mem; 1161 1163 struct clk *clk_ipg = NULL, *clk_per = NULL; 1162 1164 void __iomem *base; 1163 1165 int err, irq; 1164 1166 u32 clock_freq = 0; 1167 + 1168 + reg_xceiver = devm_regulator_get(&pdev->dev, "xceiver"); 1169 + if (PTR_ERR(reg_xceiver) == -EPROBE_DEFER) 1170 + return -EPROBE_DEFER; 1171 + else if (IS_ERR(reg_xceiver)) 1172 + reg_xceiver = NULL; 1165 1173 1166 1174 if (pdev->dev.of_node) 1167 1175 of_property_read_u32(pdev->dev.of_node, ··· 1230 1224 priv->pdata = dev_get_platdata(&pdev->dev); 1231 1225 priv->devtype_data = devtype_data; 1232 1226 1233 - priv->reg_xceiver = devm_regulator_get(&pdev->dev, "xceiver"); 1234 - if (IS_ERR(priv->reg_xceiver)) 1235 - priv->reg_xceiver = NULL; 1227 + priv->reg_xceiver = reg_xceiver; 1236 1228 1237 1229 netif_napi_add(dev, &priv->napi, flexcan_poll, FLEXCAN_NAPI_WEIGHT); 1238 1230
+2
drivers/net/can/usb/gs_usb.c
··· 901 901 } 902 902 903 903 dev = kzalloc(sizeof(*dev), GFP_KERNEL); 904 + if (!dev) 905 + return -ENOMEM; 904 906 init_usb_anchor(&dev->rx_submitted); 905 907 906 908 atomic_set(&dev->active_channels, 0);
+114 -66
drivers/net/can/usb/kvaser_usb.c
··· 14 14 * Copyright (C) 2015 Valeo S.A. 15 15 */ 16 16 17 + #include <linux/spinlock.h> 18 + #include <linux/kernel.h> 17 19 #include <linux/completion.h> 18 20 #include <linux/module.h> 19 21 #include <linux/netdevice.h> ··· 25 23 #include <linux/can/dev.h> 26 24 #include <linux/can/error.h> 27 25 28 - #define MAX_TX_URBS 16 29 26 #define MAX_RX_URBS 4 30 27 #define START_TIMEOUT 1000 /* msecs */ 31 28 #define STOP_TIMEOUT 1000 /* msecs */ ··· 442 441 }; 443 442 }; 444 443 444 + /* Context for an outstanding, not yet ACKed, transmission */ 445 445 struct kvaser_usb_tx_urb_context { 446 446 struct kvaser_usb_net_priv *priv; 447 447 u32 echo_index; ··· 456 454 struct usb_endpoint_descriptor *bulk_in, *bulk_out; 457 455 struct usb_anchor rx_submitted; 458 456 457 + /* @max_tx_urbs: Firmware-reported maximum number of oustanding, 458 + * not yet ACKed, transmissions on this device. This value is 459 + * also used as a sentinel for marking free tx contexts. 460 + */ 459 461 u32 fw_version; 460 462 unsigned int nchannels; 463 + unsigned int max_tx_urbs; 461 464 enum kvaser_usb_family family; 462 465 463 466 bool rxinitdone; ··· 472 465 473 466 struct kvaser_usb_net_priv { 474 467 struct can_priv can; 475 - 476 - atomic_t active_tx_urbs; 477 - struct usb_anchor tx_submitted; 478 - struct kvaser_usb_tx_urb_context tx_contexts[MAX_TX_URBS]; 479 - 480 - struct completion start_comp, stop_comp; 468 + struct can_berr_counter bec; 481 469 482 470 struct kvaser_usb *dev; 483 471 struct net_device *netdev; 484 472 int channel; 485 473 486 - struct can_berr_counter bec; 474 + struct completion start_comp, stop_comp; 475 + struct usb_anchor tx_submitted; 476 + 477 + spinlock_t tx_contexts_lock; 478 + int active_tx_contexts; 479 + struct kvaser_usb_tx_urb_context tx_contexts[]; 487 480 }; 488 481 489 482 static const struct usb_device_id kvaser_usb_table[] = { ··· 591 584 while (pos <= actual_len - MSG_HEADER_LEN) { 592 585 tmp = buf + pos; 593 586 594 - if (!tmp->len) 595 - break; 587 + /* Handle messages crossing the USB endpoint max packet 588 + * size boundary. Check kvaser_usb_read_bulk_callback() 589 + * for further details. 590 + */ 591 + if (tmp->len == 0) { 592 + pos = round_up(pos, le16_to_cpu(dev->bulk_in-> 593 + wMaxPacketSize)); 594 + continue; 595 + } 596 596 597 597 if (pos + tmp->len > actual_len) { 598 598 dev_err(dev->udev->dev.parent, ··· 661 647 switch (dev->family) { 662 648 case KVASER_LEAF: 663 649 dev->fw_version = le32_to_cpu(msg.u.leaf.softinfo.fw_version); 650 + dev->max_tx_urbs = 651 + le16_to_cpu(msg.u.leaf.softinfo.max_outstanding_tx); 664 652 break; 665 653 case KVASER_USBCAN: 666 654 dev->fw_version = le32_to_cpu(msg.u.usbcan.softinfo.fw_version); 655 + dev->max_tx_urbs = 656 + le16_to_cpu(msg.u.usbcan.softinfo.max_outstanding_tx); 667 657 break; 668 658 } 669 659 ··· 704 686 struct kvaser_usb_net_priv *priv; 705 687 struct sk_buff *skb; 706 688 struct can_frame *cf; 689 + unsigned long flags; 707 690 u8 channel, tid; 708 691 709 692 channel = msg->u.tx_acknowledge_header.channel; ··· 723 704 724 705 stats = &priv->netdev->stats; 725 706 726 - context = &priv->tx_contexts[tid % MAX_TX_URBS]; 707 + context = &priv->tx_contexts[tid % dev->max_tx_urbs]; 727 708 728 709 /* Sometimes the state change doesn't come after a bus-off event */ 729 710 if (priv->can.restart_ms && ··· 748 729 749 730 stats->tx_packets++; 750 731 stats->tx_bytes += context->dlc; 732 + 733 + spin_lock_irqsave(&priv->tx_contexts_lock, flags); 734 + 751 735 can_get_echo_skb(priv->netdev, context->echo_index); 752 - 753 - context->echo_index = MAX_TX_URBS; 754 - atomic_dec(&priv->active_tx_urbs); 755 - 736 + context->echo_index = dev->max_tx_urbs; 737 + --priv->active_tx_contexts; 756 738 netif_wake_queue(priv->netdev); 739 + 740 + spin_unlock_irqrestore(&priv->tx_contexts_lock, flags); 757 741 } 758 742 759 743 static void kvaser_usb_simple_msg_callback(struct urb *urb) ··· 809 787 netdev_err(netdev, "Error transmitting URB\n"); 810 788 usb_unanchor_urb(urb); 811 789 usb_free_urb(urb); 812 - kfree(buf); 813 790 return err; 814 791 } 815 792 816 793 usb_free_urb(urb); 817 794 818 795 return 0; 819 - } 820 - 821 - static void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv) 822 - { 823 - int i; 824 - 825 - usb_kill_anchored_urbs(&priv->tx_submitted); 826 - atomic_set(&priv->active_tx_urbs, 0); 827 - 828 - for (i = 0; i < MAX_TX_URBS; i++) 829 - priv->tx_contexts[i].echo_index = MAX_TX_URBS; 830 796 } 831 797 832 798 static void kvaser_usb_rx_error_update_can_state(struct kvaser_usb_net_priv *priv, ··· 1327 1317 while (pos <= urb->actual_length - MSG_HEADER_LEN) { 1328 1318 msg = urb->transfer_buffer + pos; 1329 1319 1330 - if (!msg->len) 1331 - break; 1320 + /* The Kvaser firmware can only read and write messages that 1321 + * does not cross the USB's endpoint wMaxPacketSize boundary. 1322 + * If a follow-up command crosses such boundary, firmware puts 1323 + * a placeholder zero-length command in its place then aligns 1324 + * the real command to the next max packet size. 1325 + * 1326 + * Handle such cases or we're going to miss a significant 1327 + * number of events in case of a heavy rx load on the bus. 1328 + */ 1329 + if (msg->len == 0) { 1330 + pos = round_up(pos, le16_to_cpu(dev->bulk_in-> 1331 + wMaxPacketSize)); 1332 + continue; 1333 + } 1332 1334 1333 1335 if (pos + msg->len > urb->actual_length) { 1334 1336 dev_err(dev->udev->dev.parent, "Format error\n"); ··· 1348 1326 } 1349 1327 1350 1328 kvaser_usb_handle_message(dev, msg); 1351 - 1352 1329 pos += msg->len; 1353 1330 } 1354 1331 ··· 1519 1498 return err; 1520 1499 } 1521 1500 1501 + static void kvaser_usb_reset_tx_urb_contexts(struct kvaser_usb_net_priv *priv) 1502 + { 1503 + int i, max_tx_urbs; 1504 + 1505 + max_tx_urbs = priv->dev->max_tx_urbs; 1506 + 1507 + priv->active_tx_contexts = 0; 1508 + for (i = 0; i < max_tx_urbs; i++) 1509 + priv->tx_contexts[i].echo_index = max_tx_urbs; 1510 + } 1511 + 1512 + /* This method might sleep. Do not call it in the atomic context 1513 + * of URB completions. 1514 + */ 1515 + static void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv) 1516 + { 1517 + usb_kill_anchored_urbs(&priv->tx_submitted); 1518 + kvaser_usb_reset_tx_urb_contexts(priv); 1519 + } 1520 + 1522 1521 static void kvaser_usb_unlink_all_urbs(struct kvaser_usb *dev) 1523 1522 { 1524 1523 int i; ··· 1656 1615 struct urb *urb; 1657 1616 void *buf; 1658 1617 struct kvaser_msg *msg; 1659 - int i, err; 1660 - int ret = NETDEV_TX_OK; 1618 + int i, err, ret = NETDEV_TX_OK; 1661 1619 u8 *msg_tx_can_flags = NULL; /* GCC */ 1620 + unsigned long flags; 1662 1621 1663 1622 if (can_dropped_invalid_skb(netdev, skb)) 1664 1623 return NETDEV_TX_OK; ··· 1675 1634 if (!buf) { 1676 1635 stats->tx_dropped++; 1677 1636 dev_kfree_skb(skb); 1678 - goto nobufmem; 1637 + goto freeurb; 1679 1638 } 1680 1639 1681 1640 msg = buf; ··· 1712 1671 if (cf->can_id & CAN_RTR_FLAG) 1713 1672 *msg_tx_can_flags |= MSG_FLAG_REMOTE_FRAME; 1714 1673 1715 - for (i = 0; i < ARRAY_SIZE(priv->tx_contexts); i++) { 1716 - if (priv->tx_contexts[i].echo_index == MAX_TX_URBS) { 1674 + spin_lock_irqsave(&priv->tx_contexts_lock, flags); 1675 + for (i = 0; i < dev->max_tx_urbs; i++) { 1676 + if (priv->tx_contexts[i].echo_index == dev->max_tx_urbs) { 1717 1677 context = &priv->tx_contexts[i]; 1678 + 1679 + context->echo_index = i; 1680 + can_put_echo_skb(skb, netdev, context->echo_index); 1681 + ++priv->active_tx_contexts; 1682 + if (priv->active_tx_contexts >= dev->max_tx_urbs) 1683 + netif_stop_queue(netdev); 1684 + 1718 1685 break; 1719 1686 } 1720 1687 } 1688 + spin_unlock_irqrestore(&priv->tx_contexts_lock, flags); 1721 1689 1722 1690 /* This should never happen; it implies a flow control bug */ 1723 1691 if (!context) { 1724 1692 netdev_warn(netdev, "cannot find free context\n"); 1693 + 1694 + kfree(buf); 1725 1695 ret = NETDEV_TX_BUSY; 1726 - goto releasebuf; 1696 + goto freeurb; 1727 1697 } 1728 1698 1729 1699 context->priv = priv; 1730 - context->echo_index = i; 1731 1700 context->dlc = cf->can_dlc; 1732 1701 1733 1702 msg->u.tx_can.tid = context->echo_index; ··· 1749 1698 kvaser_usb_write_bulk_callback, context); 1750 1699 usb_anchor_urb(urb, &priv->tx_submitted); 1751 1700 1752 - can_put_echo_skb(skb, netdev, context->echo_index); 1753 - 1754 - atomic_inc(&priv->active_tx_urbs); 1755 - 1756 - if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS) 1757 - netif_stop_queue(netdev); 1758 - 1759 1701 err = usb_submit_urb(urb, GFP_ATOMIC); 1760 1702 if (unlikely(err)) { 1761 - can_free_echo_skb(netdev, context->echo_index); 1703 + spin_lock_irqsave(&priv->tx_contexts_lock, flags); 1762 1704 1763 - atomic_dec(&priv->active_tx_urbs); 1705 + can_free_echo_skb(netdev, context->echo_index); 1706 + context->echo_index = dev->max_tx_urbs; 1707 + --priv->active_tx_contexts; 1708 + netif_wake_queue(netdev); 1709 + 1710 + spin_unlock_irqrestore(&priv->tx_contexts_lock, flags); 1711 + 1764 1712 usb_unanchor_urb(urb); 1765 1713 1766 1714 stats->tx_dropped++; ··· 1769 1719 else 1770 1720 netdev_warn(netdev, "Failed tx_urb %d\n", err); 1771 1721 1772 - goto releasebuf; 1722 + goto freeurb; 1773 1723 } 1774 1724 1775 - usb_free_urb(urb); 1725 + ret = NETDEV_TX_OK; 1776 1726 1777 - return NETDEV_TX_OK; 1778 - 1779 - releasebuf: 1780 - kfree(buf); 1781 - nobufmem: 1727 + freeurb: 1782 1728 usb_free_urb(urb); 1783 1729 return ret; 1784 1730 } ··· 1886 1840 struct kvaser_usb *dev = usb_get_intfdata(intf); 1887 1841 struct net_device *netdev; 1888 1842 struct kvaser_usb_net_priv *priv; 1889 - int i, err; 1843 + int err; 1890 1844 1891 1845 err = kvaser_usb_send_simple_msg(dev, CMD_RESET_CHIP, channel); 1892 1846 if (err) 1893 1847 return err; 1894 1848 1895 - netdev = alloc_candev(sizeof(*priv), MAX_TX_URBS); 1849 + netdev = alloc_candev(sizeof(*priv) + 1850 + dev->max_tx_urbs * sizeof(*priv->tx_contexts), 1851 + dev->max_tx_urbs); 1896 1852 if (!netdev) { 1897 1853 dev_err(&intf->dev, "Cannot alloc candev\n"); 1898 1854 return -ENOMEM; ··· 1902 1854 1903 1855 priv = netdev_priv(netdev); 1904 1856 1857 + init_usb_anchor(&priv->tx_submitted); 1905 1858 init_completion(&priv->start_comp); 1906 1859 init_completion(&priv->stop_comp); 1907 - 1908 - init_usb_anchor(&priv->tx_submitted); 1909 - atomic_set(&priv->active_tx_urbs, 0); 1910 - 1911 - for (i = 0; i < ARRAY_SIZE(priv->tx_contexts); i++) 1912 - priv->tx_contexts[i].echo_index = MAX_TX_URBS; 1913 1860 1914 1861 priv->dev = dev; 1915 1862 priv->netdev = netdev; 1916 1863 priv->channel = channel; 1864 + 1865 + spin_lock_init(&priv->tx_contexts_lock); 1866 + kvaser_usb_reset_tx_urb_contexts(priv); 1917 1867 1918 1868 priv->can.state = CAN_STATE_STOPPED; 1919 1869 priv->can.clock.freq = CAN_USB_CLOCK; ··· 2022 1976 return err; 2023 1977 } 2024 1978 1979 + dev_dbg(&intf->dev, "Firmware version: %d.%d.%d\n", 1980 + ((dev->fw_version >> 24) & 0xff), 1981 + ((dev->fw_version >> 16) & 0xff), 1982 + (dev->fw_version & 0xffff)); 1983 + 1984 + dev_dbg(&intf->dev, "Max oustanding tx = %d URBs\n", dev->max_tx_urbs); 1985 + 2025 1986 err = kvaser_usb_get_card_info(dev); 2026 1987 if (err) { 2027 1988 dev_err(&intf->dev, 2028 1989 "Cannot get card infos, error %d\n", err); 2029 1990 return err; 2030 1991 } 2031 - 2032 - dev_dbg(&intf->dev, "Firmware version: %d.%d.%d\n", 2033 - ((dev->fw_version >> 24) & 0xff), 2034 - ((dev->fw_version >> 16) & 0xff), 2035 - (dev->fw_version & 0xffff)); 2036 1992 2037 1993 for (i = 0; i < dev->nchannels; i++) { 2038 1994 err = kvaser_usb_init_one(intf, id, i);
+8 -7
drivers/net/can/usb/peak_usb/pcan_ucan.h
··· 26 26 #define PUCAN_CMD_FILTER_STD 0x008 27 27 #define PUCAN_CMD_TX_ABORT 0x009 28 28 #define PUCAN_CMD_WR_ERR_CNT 0x00a 29 - #define PUCAN_CMD_RX_FRAME_ENABLE 0x00b 30 - #define PUCAN_CMD_RX_FRAME_DISABLE 0x00c 29 + #define PUCAN_CMD_SET_EN_OPTION 0x00b 30 + #define PUCAN_CMD_CLR_DIS_OPTION 0x00c 31 31 #define PUCAN_CMD_END_OF_COLLECTION 0x3ff 32 32 33 33 /* uCAN received messages list */ ··· 101 101 u16 unused; 102 102 }; 103 103 104 - /* uCAN RX_FRAME_ENABLE command fields */ 105 - #define PUCAN_FLTEXT_ERROR 0x0001 106 - #define PUCAN_FLTEXT_BUSLOAD 0x0002 104 + /* uCAN SET_EN/CLR_DIS _OPTION command fields */ 105 + #define PUCAN_OPTION_ERROR 0x0001 106 + #define PUCAN_OPTION_BUSLOAD 0x0002 107 + #define PUCAN_OPTION_CANDFDISO 0x0004 107 108 108 - struct __packed pucan_filter_ext { 109 + struct __packed pucan_options { 109 110 __le16 opcode_channel; 110 111 111 - __le16 ext_mask; 112 + __le16 options; 112 113 u32 unused; 113 114 }; 114 115
+54 -23
drivers/net/can/usb/peak_usb/pcan_usb_fd.c
··· 110 110 u8 unused[5]; 111 111 }; 112 112 113 - /* Extended usage of uCAN commands CMD_RX_FRAME_xxxABLE for PCAN-USB Pro FD */ 113 + /* Extended usage of uCAN commands CMD_xxx_xx_OPTION for PCAN-USB Pro FD */ 114 114 #define PCAN_UFD_FLTEXT_CALIBRATION 0x8000 115 115 116 - struct __packed pcan_ufd_filter_ext { 116 + struct __packed pcan_ufd_options { 117 117 __le16 opcode_channel; 118 118 119 - __le16 ext_mask; 119 + __le16 ucan_mask; 120 120 u16 unused; 121 121 __le16 usb_mask; 122 122 }; ··· 251 251 /* moves the pointer forward */ 252 252 pc += sizeof(struct pucan_wr_err_cnt); 253 253 254 + /* add command to switch from ISO to non-ISO mode, if fw allows it */ 255 + if (dev->can.ctrlmode_supported & CAN_CTRLMODE_FD_NON_ISO) { 256 + struct pucan_options *puo = (struct pucan_options *)pc; 257 + 258 + puo->opcode_channel = 259 + (dev->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO) ? 260 + pucan_cmd_opcode_channel(dev, 261 + PUCAN_CMD_CLR_DIS_OPTION) : 262 + pucan_cmd_opcode_channel(dev, PUCAN_CMD_SET_EN_OPTION); 263 + 264 + puo->options = cpu_to_le16(PUCAN_OPTION_CANDFDISO); 265 + 266 + /* to be sure that no other extended bits will be taken into 267 + * account 268 + */ 269 + puo->unused = 0; 270 + 271 + /* moves the pointer forward */ 272 + pc += sizeof(struct pucan_options); 273 + } 274 + 254 275 /* next, go back to operational mode */ 255 276 cmd = (struct pucan_command *)pc; 256 277 cmd->opcode_channel = pucan_cmd_opcode_channel(dev, ··· 342 321 return pcan_usb_fd_send_cmd(dev, cmd); 343 322 } 344 323 345 - /* set/unset notifications filter: 324 + /* set/unset options 346 325 * 347 - * onoff sets(1)/unset(0) notifications 348 - * mask each bit defines a kind of notification to set/unset 326 + * onoff set(1)/unset(0) options 327 + * mask each bit defines a kind of options to set/unset 349 328 */ 350 - static int pcan_usb_fd_set_filter_ext(struct peak_usb_device *dev, 351 - bool onoff, u16 ext_mask, u16 usb_mask) 329 + static int pcan_usb_fd_set_options(struct peak_usb_device *dev, 330 + bool onoff, u16 ucan_mask, u16 usb_mask) 352 331 { 353 332 struct pcan_ufd_options *cmd = pcan_usb_fd_cmd_buffer(dev); 354 333 355 334 cmd->opcode_channel = pucan_cmd_opcode_channel(dev, 356 - (onoff) ? PUCAN_CMD_RX_FRAME_ENABLE : 357 - PUCAN_CMD_RX_FRAME_DISABLE); 335 + (onoff) ? PUCAN_CMD_SET_EN_OPTION : 336 + PUCAN_CMD_CLR_DIS_OPTION); 358 337 359 - cmd->ext_mask = cpu_to_le16(ext_mask); 338 + cmd->ucan_mask = cpu_to_le16(ucan_mask); 360 339 cmd->usb_mask = cpu_to_le16(usb_mask); 361 340 362 341 /* send the command */ ··· 791 770 &pcan_usb_pro_fd); 792 771 793 772 /* enable USB calibration messages */ 794 - err = pcan_usb_fd_set_filter_ext(dev, 1, 795 - PUCAN_FLTEXT_ERROR, 796 - PCAN_UFD_FLTEXT_CALIBRATION); 773 + err = pcan_usb_fd_set_options(dev, 1, 774 + PUCAN_OPTION_ERROR, 775 + PCAN_UFD_FLTEXT_CALIBRATION); 797 776 } 798 777 799 778 pdev->usb_if->dev_opened_count++; ··· 827 806 828 807 /* turn off special msgs for that interface if no other dev opened */ 829 808 if (pdev->usb_if->dev_opened_count == 1) 830 - pcan_usb_fd_set_filter_ext(dev, 0, 831 - PUCAN_FLTEXT_ERROR, 832 - PCAN_UFD_FLTEXT_CALIBRATION); 809 + pcan_usb_fd_set_options(dev, 0, 810 + PUCAN_OPTION_ERROR, 811 + PCAN_UFD_FLTEXT_CALIBRATION); 833 812 pdev->usb_if->dev_opened_count--; 834 813 835 814 return 0; ··· 881 860 pdev->usb_if->fw_info.fw_version[2], 882 861 dev->adapter->ctrl_count); 883 862 884 - /* the currently supported hw is non-ISO */ 885 - dev->can.ctrlmode = CAN_CTRLMODE_FD_NON_ISO; 863 + /* check for ability to switch between ISO/non-ISO modes */ 864 + if (pdev->usb_if->fw_info.fw_version[0] >= 2) { 865 + /* firmware >= 2.x supports ISO/non-ISO switching */ 866 + dev->can.ctrlmode_supported |= CAN_CTRLMODE_FD_NON_ISO; 867 + } else { 868 + /* firmware < 2.x only supports fixed(!) non-ISO */ 869 + dev->can.ctrlmode |= CAN_CTRLMODE_FD_NON_ISO; 870 + } 886 871 887 872 /* tell the hardware the can driver is running */ 888 873 err = pcan_usb_fd_drv_loaded(dev, 1); ··· 906 879 907 880 pdev->usb_if = ppdev->usb_if; 908 881 pdev->cmd_buffer_addr = ppdev->cmd_buffer_addr; 882 + 883 + /* do a copy of the ctrlmode[_supported] too */ 884 + dev->can.ctrlmode = ppdev->dev.can.ctrlmode; 885 + dev->can.ctrlmode_supported = ppdev->dev.can.ctrlmode_supported; 909 886 } 910 887 911 888 pdev->usb_if->dev[dev->ctrl_idx] = dev; ··· 964 933 if (dev->ctrl_idx == 0) { 965 934 /* turn off calibration message if any device were opened */ 966 935 if (pdev->usb_if->dev_opened_count > 0) 967 - pcan_usb_fd_set_filter_ext(dev, 0, 968 - PUCAN_FLTEXT_ERROR, 969 - PCAN_UFD_FLTEXT_CALIBRATION); 936 + pcan_usb_fd_set_options(dev, 0, 937 + PUCAN_OPTION_ERROR, 938 + PCAN_UFD_FLTEXT_CALIBRATION); 970 939 971 940 /* tell USB adapter that the driver is being unloaded */ 972 941 pcan_usb_fd_drv_loaded(dev, 0);
+1 -1
drivers/net/dsa/bcm_sf2.h
··· 105 105 { \ 106 106 u32 indir, dir; \ 107 107 spin_lock(&priv->indir_lock); \ 108 - indir = reg_readl(priv, REG_DIR_DATA_READ); \ 109 108 dir = __raw_readl(priv->name + off); \ 109 + indir = reg_readl(priv, REG_DIR_DATA_READ); \ 110 110 spin_unlock(&priv->indir_lock); \ 111 111 return (u64)indir << 32 | dir; \ 112 112 } \
+2 -5
drivers/net/ethernet/8390/axnet_cs.c
··· 484 484 link->open++; 485 485 486 486 info->link_status = 0x00; 487 - init_timer(&info->watchdog); 488 - info->watchdog.function = ei_watchdog; 489 - info->watchdog.data = (u_long)dev; 490 - info->watchdog.expires = jiffies + HZ; 491 - add_timer(&info->watchdog); 487 + setup_timer(&info->watchdog, ei_watchdog, (u_long)dev); 488 + mod_timer(&info->watchdog, jiffies + HZ); 492 489 493 490 return ax_open(dev); 494 491 } /* axnet_open */
+2 -5
drivers/net/ethernet/8390/pcnet_cs.c
··· 918 918 919 919 info->phy_id = info->eth_phy; 920 920 info->link_status = 0x00; 921 - init_timer(&info->watchdog); 922 - info->watchdog.function = ei_watchdog; 923 - info->watchdog.data = (u_long)dev; 924 - info->watchdog.expires = jiffies + HZ; 925 - add_timer(&info->watchdog); 921 + setup_timer(&info->watchdog, ei_watchdog, (u_long)dev); 922 + mod_timer(&info->watchdog, jiffies + HZ); 926 923 927 924 return ei_open(dev); 928 925 } /* pcnet_open */
+26 -27
drivers/net/ethernet/altera/altera_tse_main.c
··· 376 376 u16 pktlength; 377 377 u16 pktstatus; 378 378 379 - while ((rxstatus = priv->dmaops->get_rx_status(priv)) != 0) { 379 + while (((rxstatus = priv->dmaops->get_rx_status(priv)) != 0) && 380 + (count < limit)) { 380 381 pktstatus = rxstatus >> 16; 381 382 pktlength = rxstatus & 0xffff; 382 383 ··· 492 491 struct altera_tse_private *priv = 493 492 container_of(napi, struct altera_tse_private, napi); 494 493 int rxcomplete = 0; 495 - int txcomplete = 0; 496 494 unsigned long int flags; 497 495 498 - txcomplete = tse_tx_complete(priv); 496 + tse_tx_complete(priv); 499 497 500 498 rxcomplete = tse_rx(priv, budget); 501 499 502 - if (rxcomplete >= budget || txcomplete > 0) 503 - return rxcomplete; 500 + if (rxcomplete < budget) { 504 501 505 - napi_gro_flush(napi, false); 506 - __napi_complete(napi); 502 + napi_gro_flush(napi, false); 503 + __napi_complete(napi); 507 504 508 - netdev_dbg(priv->dev, 509 - "NAPI Complete, did %d packets with budget %d\n", 510 - txcomplete+rxcomplete, budget); 505 + netdev_dbg(priv->dev, 506 + "NAPI Complete, did %d packets with budget %d\n", 507 + rxcomplete, budget); 511 508 512 - spin_lock_irqsave(&priv->rxdma_irq_lock, flags); 513 - priv->dmaops->enable_rxirq(priv); 514 - priv->dmaops->enable_txirq(priv); 515 - spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags); 516 - return rxcomplete + txcomplete; 509 + spin_lock_irqsave(&priv->rxdma_irq_lock, flags); 510 + priv->dmaops->enable_rxirq(priv); 511 + priv->dmaops->enable_txirq(priv); 512 + spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags); 513 + } 514 + return rxcomplete; 517 515 } 518 516 519 517 /* DMA TX & RX FIFO interrupt routing ··· 521 521 { 522 522 struct net_device *dev = dev_id; 523 523 struct altera_tse_private *priv; 524 - unsigned long int flags; 525 524 526 525 if (unlikely(!dev)) { 527 526 pr_err("%s: invalid dev pointer\n", __func__); ··· 528 529 } 529 530 priv = netdev_priv(dev); 530 531 531 - /* turn off desc irqs and enable napi rx */ 532 - spin_lock_irqsave(&priv->rxdma_irq_lock, flags); 533 - 534 - if (likely(napi_schedule_prep(&priv->napi))) { 535 - priv->dmaops->disable_rxirq(priv); 536 - priv->dmaops->disable_txirq(priv); 537 - __napi_schedule(&priv->napi); 538 - } 539 - 532 + spin_lock(&priv->rxdma_irq_lock); 540 533 /* reset IRQs */ 541 534 priv->dmaops->clear_rxirq(priv); 542 535 priv->dmaops->clear_txirq(priv); 536 + spin_unlock(&priv->rxdma_irq_lock); 543 537 544 - spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags); 538 + if (likely(napi_schedule_prep(&priv->napi))) { 539 + spin_lock(&priv->rxdma_irq_lock); 540 + priv->dmaops->disable_rxirq(priv); 541 + priv->dmaops->disable_txirq(priv); 542 + spin_unlock(&priv->rxdma_irq_lock); 543 + __napi_schedule(&priv->napi); 544 + } 545 + 545 546 546 547 return IRQ_HANDLED; 547 548 } ··· 1398 1399 } 1399 1400 1400 1401 if (of_property_read_u32(pdev->dev.of_node, "tx-fifo-depth", 1401 - &priv->rx_fifo_depth)) { 1402 + &priv->tx_fifo_depth)) { 1402 1403 dev_err(&pdev->dev, "cannot obtain tx-fifo-depth\n"); 1403 1404 ret = -ENXIO; 1404 1405 goto err_free_netdev;
+29 -2
drivers/net/ethernet/amd/pcnet32.c
··· 1543 1543 { 1544 1544 struct pcnet32_private *lp; 1545 1545 int i, media; 1546 - int fdx, mii, fset, dxsuflo; 1546 + int fdx, mii, fset, dxsuflo, sram; 1547 1547 int chip_version; 1548 1548 char *chipname; 1549 1549 struct net_device *dev; ··· 1580 1580 } 1581 1581 1582 1582 /* initialize variables */ 1583 - fdx = mii = fset = dxsuflo = 0; 1583 + fdx = mii = fset = dxsuflo = sram = 0; 1584 1584 chip_version = (chip_version >> 12) & 0xffff; 1585 1585 1586 1586 switch (chip_version) { ··· 1613 1613 chipname = "PCnet/FAST III 79C973"; /* PCI */ 1614 1614 fdx = 1; 1615 1615 mii = 1; 1616 + sram = 1; 1616 1617 break; 1617 1618 case 0x2626: 1618 1619 chipname = "PCnet/Home 79C978"; /* PCI */ ··· 1637 1636 chipname = "PCnet/FAST III 79C975"; /* PCI */ 1638 1637 fdx = 1; 1639 1638 mii = 1; 1639 + sram = 1; 1640 1640 break; 1641 1641 case 0x2628: 1642 1642 chipname = "PCnet/PRO 79C976"; ··· 1664 1662 a->write_csr(ioaddr, 80, 1665 1663 (a->read_csr(ioaddr, 80) & 0x0C00) | 0x0c00); 1666 1664 dxsuflo = 1; 1665 + } 1666 + 1667 + /* 1668 + * The Am79C973/Am79C975 controllers come with 12K of SRAM 1669 + * which we can use for the Tx/Rx buffers but most importantly, 1670 + * the use of SRAM allow us to use the BCR18:NOUFLO bit to avoid 1671 + * Tx fifo underflows. 1672 + */ 1673 + if (sram) { 1674 + /* 1675 + * The SRAM is being configured in two steps. First we 1676 + * set the SRAM size in the BCR25:SRAM_SIZE bits. According 1677 + * to the datasheet, each bit corresponds to a 512-byte 1678 + * page so we can have at most 24 pages. The SRAM_SIZE 1679 + * holds the value of the upper 8 bits of the 16-bit SRAM size. 1680 + * The low 8-bits start at 0x00 and end at 0xff. So the 1681 + * address range is from 0x0000 up to 0x17ff. Therefore, 1682 + * the SRAM_SIZE is set to 0x17. The next step is to set 1683 + * the BCR26:SRAM_BND midway through so the Tx and Rx 1684 + * buffers can share the SRAM equally. 1685 + */ 1686 + a->write_bcr(ioaddr, 25, 0x17); 1687 + a->write_bcr(ioaddr, 26, 0xc); 1688 + /* And finally enable the NOUFLO bit */ 1689 + a->write_bcr(ioaddr, 18, a->read_bcr(ioaddr, 18) | (1 << 11)); 1667 1690 } 1668 1691 1669 1692 dev = alloc_etherdev(sizeof(*lp));
+93 -82
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
··· 609 609 } 610 610 } 611 611 612 + static int xgbe_request_irqs(struct xgbe_prv_data *pdata) 613 + { 614 + struct xgbe_channel *channel; 615 + struct net_device *netdev = pdata->netdev; 616 + unsigned int i; 617 + int ret; 618 + 619 + ret = devm_request_irq(pdata->dev, pdata->dev_irq, xgbe_isr, 0, 620 + netdev->name, pdata); 621 + if (ret) { 622 + netdev_alert(netdev, "error requesting irq %d\n", 623 + pdata->dev_irq); 624 + return ret; 625 + } 626 + 627 + if (!pdata->per_channel_irq) 628 + return 0; 629 + 630 + channel = pdata->channel; 631 + for (i = 0; i < pdata->channel_count; i++, channel++) { 632 + snprintf(channel->dma_irq_name, 633 + sizeof(channel->dma_irq_name) - 1, 634 + "%s-TxRx-%u", netdev_name(netdev), 635 + channel->queue_index); 636 + 637 + ret = devm_request_irq(pdata->dev, channel->dma_irq, 638 + xgbe_dma_isr, 0, 639 + channel->dma_irq_name, channel); 640 + if (ret) { 641 + netdev_alert(netdev, "error requesting irq %d\n", 642 + channel->dma_irq); 643 + goto err_irq; 644 + } 645 + } 646 + 647 + return 0; 648 + 649 + err_irq: 650 + /* Using an unsigned int, 'i' will go to UINT_MAX and exit */ 651 + for (i--, channel--; i < pdata->channel_count; i--, channel--) 652 + devm_free_irq(pdata->dev, channel->dma_irq, channel); 653 + 654 + devm_free_irq(pdata->dev, pdata->dev_irq, pdata); 655 + 656 + return ret; 657 + } 658 + 659 + static void xgbe_free_irqs(struct xgbe_prv_data *pdata) 660 + { 661 + struct xgbe_channel *channel; 662 + unsigned int i; 663 + 664 + devm_free_irq(pdata->dev, pdata->dev_irq, pdata); 665 + 666 + if (!pdata->per_channel_irq) 667 + return; 668 + 669 + channel = pdata->channel; 670 + for (i = 0; i < pdata->channel_count; i++, channel++) 671 + devm_free_irq(pdata->dev, channel->dma_irq, channel); 672 + } 673 + 612 674 void xgbe_init_tx_coalesce(struct xgbe_prv_data *pdata) 613 675 { 614 676 struct xgbe_hw_if *hw_if = &pdata->hw_if; ··· 872 810 return -EINVAL; 873 811 } 874 812 875 - phy_stop(pdata->phydev); 876 - 877 813 spin_lock_irqsave(&pdata->lock, flags); 878 814 879 815 if (caller == XGMAC_DRIVER_CONTEXT) 880 816 netif_device_detach(netdev); 881 817 882 818 netif_tx_stop_all_queues(netdev); 883 - xgbe_napi_disable(pdata, 0); 884 819 885 - /* Powerdown Tx/Rx */ 886 820 hw_if->powerdown_tx(pdata); 887 821 hw_if->powerdown_rx(pdata); 822 + 823 + xgbe_napi_disable(pdata, 0); 824 + 825 + phy_stop(pdata->phydev); 888 826 889 827 pdata->power_down = 1; 890 828 ··· 916 854 917 855 phy_start(pdata->phydev); 918 856 919 - /* Enable Tx/Rx */ 857 + xgbe_napi_enable(pdata, 0); 858 + 920 859 hw_if->powerup_tx(pdata); 921 860 hw_if->powerup_rx(pdata); 922 861 923 862 if (caller == XGMAC_DRIVER_CONTEXT) 924 863 netif_device_attach(netdev); 925 864 926 - xgbe_napi_enable(pdata, 0); 927 865 netif_tx_start_all_queues(netdev); 928 866 929 867 spin_unlock_irqrestore(&pdata->lock, flags); ··· 937 875 { 938 876 struct xgbe_hw_if *hw_if = &pdata->hw_if; 939 877 struct net_device *netdev = pdata->netdev; 878 + int ret; 940 879 941 880 DBGPR("-->xgbe_start\n"); 942 881 ··· 947 884 948 885 phy_start(pdata->phydev); 949 886 887 + xgbe_napi_enable(pdata, 1); 888 + 889 + ret = xgbe_request_irqs(pdata); 890 + if (ret) 891 + goto err_napi; 892 + 950 893 hw_if->enable_tx(pdata); 951 894 hw_if->enable_rx(pdata); 952 895 953 896 xgbe_init_tx_timers(pdata); 954 897 955 - xgbe_napi_enable(pdata, 1); 956 898 netif_tx_start_all_queues(netdev); 957 899 958 900 DBGPR("<--xgbe_start\n"); 959 901 960 902 return 0; 903 + 904 + err_napi: 905 + xgbe_napi_disable(pdata, 1); 906 + 907 + phy_stop(pdata->phydev); 908 + 909 + hw_if->exit(pdata); 910 + 911 + return ret; 961 912 } 962 913 963 914 static void xgbe_stop(struct xgbe_prv_data *pdata) ··· 984 907 985 908 DBGPR("-->xgbe_stop\n"); 986 909 987 - phy_stop(pdata->phydev); 988 - 989 910 netif_tx_stop_all_queues(netdev); 990 - xgbe_napi_disable(pdata, 1); 991 911 992 912 xgbe_stop_tx_timers(pdata); 993 913 994 914 hw_if->disable_tx(pdata); 995 915 hw_if->disable_rx(pdata); 916 + 917 + xgbe_free_irqs(pdata); 918 + 919 + xgbe_napi_disable(pdata, 1); 920 + 921 + phy_stop(pdata->phydev); 922 + 923 + hw_if->exit(pdata); 996 924 997 925 channel = pdata->channel; 998 926 for (i = 0; i < pdata->channel_count; i++, channel++) { ··· 1013 931 1014 932 static void xgbe_restart_dev(struct xgbe_prv_data *pdata) 1015 933 { 1016 - struct xgbe_channel *channel; 1017 - struct xgbe_hw_if *hw_if = &pdata->hw_if; 1018 - unsigned int i; 1019 - 1020 934 DBGPR("-->xgbe_restart_dev\n"); 1021 935 1022 936 /* If not running, "restart" will happen on open */ ··· 1020 942 return; 1021 943 1022 944 xgbe_stop(pdata); 1023 - synchronize_irq(pdata->dev_irq); 1024 - if (pdata->per_channel_irq) { 1025 - channel = pdata->channel; 1026 - for (i = 0; i < pdata->channel_count; i++, channel++) 1027 - synchronize_irq(channel->dma_irq); 1028 - } 1029 945 1030 946 xgbe_free_tx_data(pdata); 1031 947 xgbe_free_rx_data(pdata); 1032 - 1033 - /* Issue software reset to device */ 1034 - hw_if->exit(pdata); 1035 948 1036 949 xgbe_start(pdata); 1037 950 ··· 1352 1283 static int xgbe_open(struct net_device *netdev) 1353 1284 { 1354 1285 struct xgbe_prv_data *pdata = netdev_priv(netdev); 1355 - struct xgbe_hw_if *hw_if = &pdata->hw_if; 1356 1286 struct xgbe_desc_if *desc_if = &pdata->desc_if; 1357 - struct xgbe_channel *channel = NULL; 1358 - unsigned int i = 0; 1359 1287 int ret; 1360 1288 1361 1289 DBGPR("-->xgbe_open\n"); ··· 1395 1329 INIT_WORK(&pdata->restart_work, xgbe_restart); 1396 1330 INIT_WORK(&pdata->tx_tstamp_work, xgbe_tx_tstamp); 1397 1331 1398 - /* Request interrupts */ 1399 - ret = devm_request_irq(pdata->dev, pdata->dev_irq, xgbe_isr, 0, 1400 - netdev->name, pdata); 1401 - if (ret) { 1402 - netdev_alert(netdev, "error requesting irq %d\n", 1403 - pdata->dev_irq); 1404 - goto err_rings; 1405 - } 1406 - 1407 - if (pdata->per_channel_irq) { 1408 - channel = pdata->channel; 1409 - for (i = 0; i < pdata->channel_count; i++, channel++) { 1410 - snprintf(channel->dma_irq_name, 1411 - sizeof(channel->dma_irq_name) - 1, 1412 - "%s-TxRx-%u", netdev_name(netdev), 1413 - channel->queue_index); 1414 - 1415 - ret = devm_request_irq(pdata->dev, channel->dma_irq, 1416 - xgbe_dma_isr, 0, 1417 - channel->dma_irq_name, channel); 1418 - if (ret) { 1419 - netdev_alert(netdev, 1420 - "error requesting irq %d\n", 1421 - channel->dma_irq); 1422 - goto err_irq; 1423 - } 1424 - } 1425 - } 1426 - 1427 1332 ret = xgbe_start(pdata); 1428 1333 if (ret) 1429 - goto err_start; 1334 + goto err_rings; 1430 1335 1431 1336 DBGPR("<--xgbe_open\n"); 1432 1337 1433 1338 return 0; 1434 - 1435 - err_start: 1436 - hw_if->exit(pdata); 1437 - 1438 - err_irq: 1439 - if (pdata->per_channel_irq) { 1440 - /* Using an unsigned int, 'i' will go to UINT_MAX and exit */ 1441 - for (i--, channel--; i < pdata->channel_count; i--, channel--) 1442 - devm_free_irq(pdata->dev, channel->dma_irq, channel); 1443 - } 1444 - 1445 - devm_free_irq(pdata->dev, pdata->dev_irq, pdata); 1446 1339 1447 1340 err_rings: 1448 1341 desc_if->free_ring_resources(pdata); ··· 1424 1399 static int xgbe_close(struct net_device *netdev) 1425 1400 { 1426 1401 struct xgbe_prv_data *pdata = netdev_priv(netdev); 1427 - struct xgbe_hw_if *hw_if = &pdata->hw_if; 1428 1402 struct xgbe_desc_if *desc_if = &pdata->desc_if; 1429 - struct xgbe_channel *channel; 1430 - unsigned int i; 1431 1403 1432 1404 DBGPR("-->xgbe_close\n"); 1433 1405 1434 1406 /* Stop the device */ 1435 1407 xgbe_stop(pdata); 1436 1408 1437 - /* Issue software reset to device */ 1438 - hw_if->exit(pdata); 1439 - 1440 1409 /* Free the ring descriptors and buffers */ 1441 1410 desc_if->free_ring_resources(pdata); 1442 1411 1451 1412 /* Free the channel and ring structures */ 1452 1413 xgbe_free_channels(pdata);
+1 -1
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
··· 593 593 if (!xgene_ring_mgr_init(pdata)) 594 594 return -ENODEV; 595 595 596 - if (!efi_enabled(EFI_BOOT)) { 596 + if (pdata->clk) { 597 597 clk_prepare_enable(pdata->clk); 598 598 clk_disable_unprepare(pdata->clk); 599 599 clk_prepare_enable(pdata->clk);
+4
drivers/net/ethernet/apm/xgene/xgene_enet_main.c
··· 1025 1025 #ifdef CONFIG_ACPI 1026 1026 static const struct acpi_device_id xgene_enet_acpi_match[] = { 1027 1027 { "APMC0D05", }, 1028 + { "APMC0D30", }, 1029 + { "APMC0D31", }, 1028 1030 { } 1029 1031 }; 1030 1032 MODULE_DEVICE_TABLE(acpi, xgene_enet_acpi_match); ··· 1035 1033 #ifdef CONFIG_OF 1036 1034 static struct of_device_id xgene_enet_of_match[] = { 1037 1035 {.compatible = "apm,xgene-enet",}, 1036 + {.compatible = "apm,xgene1-sgenet",}, 1037 + {.compatible = "apm,xgene1-xgenet",}, 1038 1038 {}, 1039 1039 }; 1040 1040
+4 -4
drivers/net/ethernet/broadcom/bcm63xx_enet.c
··· 486 486 { 487 487 struct bcm_enet_priv *priv; 488 488 struct net_device *dev; 489 - int tx_work_done, rx_work_done; 489 + int rx_work_done; 490 490 491 491 priv = container_of(napi, struct bcm_enet_priv, napi); 492 492 dev = priv->net_dev; ··· 498 498 ENETDMAC_IR, priv->tx_chan); 499 499 500 500 /* reclaim sent skb */ 501 - tx_work_done = bcm_enet_tx_reclaim(dev, 0); 501 + bcm_enet_tx_reclaim(dev, 0); 502 502 503 503 spin_lock(&priv->rx_lock); 504 504 rx_work_done = bcm_enet_receive_queue(dev, budget); 505 505 spin_unlock(&priv->rx_lock); 506 506 507 - if (rx_work_done >= budget || tx_work_done > 0) { 508 - /* rx/tx queue is not yet empty/clean */ 507 + if (rx_work_done >= budget) { 508 + /* rx queue is not yet empty/clean */ 509 509 return rx_work_done; 510 510 } 511 511
+4 -3
drivers/net/ethernet/broadcom/bcmsysport.c
··· 274 274 /* RBUF misc statistics */ 275 275 STAT_RBUF("rbuf_ovflow_cnt", mib.rbuf_ovflow_cnt, RBUF_OVFL_DISC_CNTR), 276 276 STAT_RBUF("rbuf_err_cnt", mib.rbuf_err_cnt, RBUF_ERR_PKT_CNTR), 277 - STAT_MIB_RX("alloc_rx_buff_failed", mib.alloc_rx_buff_failed), 278 - STAT_MIB_RX("rx_dma_failed", mib.rx_dma_failed), 279 - STAT_MIB_TX("tx_dma_failed", mib.tx_dma_failed), 277 + STAT_MIB_SOFT("alloc_rx_buff_failed", mib.alloc_rx_buff_failed), 278 + STAT_MIB_SOFT("rx_dma_failed", mib.rx_dma_failed), 279 + STAT_MIB_SOFT("tx_dma_failed", mib.tx_dma_failed), 280 280 }; 281 281 282 282 #define BCM_SYSPORT_STATS_LEN ARRAY_SIZE(bcm_sysport_gstrings_stats) ··· 345 345 s = &bcm_sysport_gstrings_stats[i]; 346 346 switch (s->type) { 347 347 case BCM_SYSPORT_STAT_NETDEV: 348 + case BCM_SYSPORT_STAT_SOFT: 348 349 continue; 349 350 case BCM_SYSPORT_STAT_MIB_RX: 350 351 case BCM_SYSPORT_STAT_MIB_TX:
+2
drivers/net/ethernet/broadcom/bcmsysport.h
··· 570 570 BCM_SYSPORT_STAT_RUNT, 571 571 BCM_SYSPORT_STAT_RXCHK, 572 572 BCM_SYSPORT_STAT_RBUF, 573 + BCM_SYSPORT_STAT_SOFT, 573 574 }; 574 575 575 576 /* Macros to help define ethtool statistics */ ··· 591 590 #define STAT_MIB_RX(str, m) STAT_MIB(str, m, BCM_SYSPORT_STAT_MIB_RX) 592 591 #define STAT_MIB_TX(str, m) STAT_MIB(str, m, BCM_SYSPORT_STAT_MIB_TX) 593 592 #define STAT_RUNT(str, m) STAT_MIB(str, m, BCM_SYSPORT_STAT_RUNT) 593 + #define STAT_MIB_SOFT(str, m) STAT_MIB(str, m, BCM_SYSPORT_STAT_SOFT) 594 594 595 595 #define STAT_RXCHK(str, m, ofs) { \ 596 596 .stat_string = str, \
-7
drivers/net/ethernet/broadcom/bgmac.c
··· 302 302 slot->skb = skb; 303 303 slot->dma_addr = dma_addr; 304 304 305 - if (slot->dma_addr & 0xC0000000) 306 - bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n"); 307 - 308 305 return 0; 309 306 } 310 307 ··· 502 505 ring->mmio_base); 503 506 goto err_dma_free; 504 507 } 505 - if (ring->dma_base & 0xC0000000) 506 - bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n"); 507 508 508 509 ring->unaligned = bgmac_dma_unaligned(bgmac, ring, 509 510 BGMAC_DMA_RING_TX); ··· 531 536 err = -ENOMEM; 532 537 goto err_dma_free; 533 538 } 534 - if (ring->dma_base & 0xC0000000) 535 - bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n"); 536 539 537 540 ring->unaligned = bgmac_dma_unaligned(bgmac, ring, 538 541 BGMAC_DMA_RING_RX);
+1 -3
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 1811 1811 int stats_state; 1812 1812 1813 1813 /* used for synchronization of concurrent threads statistics handling */ 1814 - spinlock_t stats_lock; 1814 + struct mutex stats_lock; 1815 1815 1816 1816 /* used by dmae command loader */ 1817 1817 struct dmae_command stats_dmae; ··· 1935 1935 1936 1936 int fp_array_size; 1937 1937 u32 dump_preset_idx; 1938 - bool stats_started; 1939 - struct semaphore stats_sema; 1940 1938 1941 1939 u8 phys_port_id[ETH_ALEN]; 1942 1940
+58 -46
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 129 129 u32 xmac_val; 130 130 u32 emac_addr; 131 131 u32 emac_val; 132 - u32 umac_addr; 133 - u32 umac_val; 132 + u32 umac_addr[2]; 133 + u32 umac_val[2]; 134 134 u32 bmac_addr; 135 135 u32 bmac_val[2]; 136 136 }; ··· 7866 7866 return 0; 7867 7867 } 7868 7868 7869 + /* previous driver DMAE transaction may have occurred when pre-boot stage ended 7870 + * and boot began, or when kdump kernel was loaded. Either case would invalidate 7871 + * the addresses of the transaction, resulting in was-error bit set in the pci 7872 + * causing all hw-to-host pcie transactions to timeout. If this happened we want 7873 + * to clear the interrupt which detected this from the pglueb and the was done 7874 + * bit 7875 + */ 7876 + static void bnx2x_clean_pglue_errors(struct bnx2x *bp) 7877 + { 7878 + if (!CHIP_IS_E1x(bp)) 7879 + REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR, 7880 + 1 << BP_ABS_FUNC(bp)); 7881 + } 7882 + 7869 7883 static int bnx2x_init_hw_func(struct bnx2x *bp) 7870 7884 { 7871 7885 int port = BP_PORT(bp); ··· 7972 7958 7973 7959 bnx2x_init_block(bp, BLOCK_PGLUE_B, init_phase); 7974 7960 7975 - if (!CHIP_IS_E1x(bp)) 7976 - REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR, func); 7961 + bnx2x_clean_pglue_errors(bp); 7977 7962 7978 7963 bnx2x_init_block(bp, BLOCK_ATC, init_phase); 7979 7964 bnx2x_init_block(bp, BLOCK_DMAE, init_phase); ··· 10154 10141 return base + (BP_ABS_FUNC(bp)) * stride; 10155 10142 } 10156 10143 10144 + static bool bnx2x_prev_unload_close_umac(struct bnx2x *bp, 10145 + u8 port, u32 reset_reg, 10146 + struct bnx2x_mac_vals *vals) 10147 + { 10148 + u32 mask = MISC_REGISTERS_RESET_REG_2_UMAC0 << port; 10149 + u32 base_addr; 10150 + 10151 + if (!(mask & reset_reg)) 10152 + return false; 10153 + 10154 + BNX2X_DEV_INFO("Disable umac Rx %02x\n", port); 10155 + base_addr = port ? 
GRCBASE_UMAC1 : GRCBASE_UMAC0; 10156 + vals->umac_addr[port] = base_addr + UMAC_REG_COMMAND_CONFIG; 10157 + vals->umac_val[port] = REG_RD(bp, vals->umac_addr[port]); 10158 + REG_WR(bp, vals->umac_addr[port], 0); 10159 + 10160 + return true; 10161 + } 10162 + 10157 10163 static void bnx2x_prev_unload_close_mac(struct bnx2x *bp, 10158 10164 struct bnx2x_mac_vals *vals) 10159 10165 { ··· 10181 10149 u8 port = BP_PORT(bp); 10182 10150 10183 10151 /* reset addresses as they also mark which values were changed */ 10184 - vals->bmac_addr = 0; 10185 - vals->umac_addr = 0; 10186 - vals->xmac_addr = 0; 10187 - vals->emac_addr = 0; 10152 + memset(vals, 0, sizeof(*vals)); 10188 10153 10189 10154 reset_reg = REG_RD(bp, MISC_REG_RESET_REG_2); 10190 10155 ··· 10230 10201 REG_WR(bp, vals->xmac_addr, 0); 10231 10202 mac_stopped = true; 10232 10203 } 10233 - mask = MISC_REGISTERS_RESET_REG_2_UMAC0 << port; 10234 - if (mask & reset_reg) { 10235 - BNX2X_DEV_INFO("Disable umac Rx\n"); 10236 - base_addr = BP_PORT(bp) ? 
GRCBASE_UMAC1 : GRCBASE_UMAC0; 10237 - vals->umac_addr = base_addr + UMAC_REG_COMMAND_CONFIG; 10238 - vals->umac_val = REG_RD(bp, vals->umac_addr); 10239 - REG_WR(bp, vals->umac_addr, 0); 10240 - mac_stopped = true; 10241 - } 10204 + 10205 + mac_stopped |= bnx2x_prev_unload_close_umac(bp, 0, 10206 + reset_reg, vals); 10207 + mac_stopped |= bnx2x_prev_unload_close_umac(bp, 1, 10208 + reset_reg, vals); 10242 10209 } 10243 10210 10244 10211 if (mac_stopped) ··· 10530 10505 /* Close the MAC Rx to prevent BRB from filling up */ 10531 10506 bnx2x_prev_unload_close_mac(bp, &mac_vals); 10532 10507 10533 - /* close LLH filters towards the BRB */ 10508 + /* close LLH filters for both ports towards the BRB */ 10534 10509 bnx2x_set_rx_filter(&bp->link_params, 0); 10510 + bp->link_params.port ^= 1; 10511 + bnx2x_set_rx_filter(&bp->link_params, 0); 10512 + bp->link_params.port ^= 1; 10535 10513 10536 10514 /* Check if the UNDI driver was previously loaded */ 10537 10515 if (bnx2x_prev_is_after_undi(bp)) { ··· 10581 10553 10582 10554 if (mac_vals.xmac_addr) 10583 10555 REG_WR(bp, mac_vals.xmac_addr, mac_vals.xmac_val); 10584 - if (mac_vals.umac_addr) 10585 - REG_WR(bp, mac_vals.umac_addr, mac_vals.umac_val); 10556 + if (mac_vals.umac_addr[0]) 10557 + REG_WR(bp, mac_vals.umac_addr[0], mac_vals.umac_val[0]); 10558 + if (mac_vals.umac_addr[1]) 10559 + REG_WR(bp, mac_vals.umac_addr[1], mac_vals.umac_val[1]); 10586 10560 if (mac_vals.emac_addr) 10587 10561 REG_WR(bp, mac_vals.emac_addr, mac_vals.emac_val); 10588 10562 if (mac_vals.bmac_addr) { ··· 10601 10571 return bnx2x_prev_mcp_done(bp); 10602 10572 } 10603 10573 10604 - /* previous driver DMAE transaction may have occurred when pre-boot stage ended 10605 - * and boot began, or when kdump kernel was loaded. Either case would invalidate 10606 - * the addresses of the transaction, resulting in was-error bit set in the pci 10607 - * causing all hw-to-host pcie transactions to timeout. 
If this happened we want 10608 - * to clear the interrupt which detected this from the pglueb and the was done 10609 - * bit 10610 - */ 10611 - static void bnx2x_prev_interrupted_dmae(struct bnx2x *bp) 10612 - { 10613 - if (!CHIP_IS_E1x(bp)) { 10614 - u32 val = REG_RD(bp, PGLUE_B_REG_PGLUE_B_INT_STS); 10615 - if (val & PGLUE_B_PGLUE_B_INT_STS_REG_WAS_ERROR_ATTN) { 10616 - DP(BNX2X_MSG_SP, 10617 - "'was error' bit was found to be set in pglueb upon startup. Clearing\n"); 10618 - REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR, 10619 - 1 << BP_FUNC(bp)); 10620 - } 10621 - } 10622 - } 10623 - 10624 10574 static int bnx2x_prev_unload(struct bnx2x *bp) 10625 10575 { 10626 10576 int time_counter = 10; ··· 10610 10600 /* clear hw from errors which may have resulted from an interrupted 10611 10601 * dmae transaction. 10612 10602 */ 10613 - bnx2x_prev_interrupted_dmae(bp); 10603 + bnx2x_clean_pglue_errors(bp); 10614 10604 10615 10605 /* Release previously held locks */ 10616 10606 hw_lock_reg = (BP_FUNC(bp) <= 5) ? 
··· 12047 12037 mutex_init(&bp->port.phy_mutex); 12048 12038 mutex_init(&bp->fw_mb_mutex); 12049 12039 mutex_init(&bp->drv_info_mutex); 12040 + mutex_init(&bp->stats_lock); 12050 12041 bp->drv_info_mng_owner = false; 12051 - spin_lock_init(&bp->stats_lock); 12052 - sema_init(&bp->stats_sema, 1); 12053 12042 12054 12043 INIT_DELAYED_WORK(&bp->sp_task, bnx2x_sp_task); 12055 12044 INIT_DELAYED_WORK(&bp->sp_rtnl_task, bnx2x_sp_rtnl_task); ··· 12731 12722 pci_write_config_dword(bp->pdev, PCICFG_GRC_ADDRESS, 12732 12723 PCICFG_VENDOR_ID_OFFSET); 12733 12724 12725 + /* Set PCIe reset type to fundamental for EEH recovery */ 12726 + pdev->needs_freset = 1; 12727 + 12734 12728 /* AER (Advanced Error reporting) configuration */ 12735 12729 rc = pci_enable_pcie_error_reporting(pdev); 12736 12730 if (!rc) ··· 12778 12766 NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 | 12779 12767 NETIF_F_RXCSUM | NETIF_F_LRO | NETIF_F_GRO | 12780 12768 NETIF_F_RXHASH | NETIF_F_HW_VLAN_CTAG_TX; 12781 - if (!CHIP_IS_E1x(bp)) { 12769 + if (!chip_is_e1x) { 12782 12770 dev->hw_features |= NETIF_F_GSO_GRE | NETIF_F_GSO_UDP_TUNNEL | 12783 12771 NETIF_F_GSO_IPIP | NETIF_F_GSO_SIT; 12784 12772 dev->hw_enc_features = ··· 13677 13665 cancel_delayed_work_sync(&bp->sp_task); 13678 13666 cancel_delayed_work_sync(&bp->period_task); 13679 13667 13680 - spin_lock_bh(&bp->stats_lock); 13668 + mutex_lock(&bp->stats_lock); 13681 13669 bp->stats_state = STATS_STATE_DISABLED; 13682 - spin_unlock_bh(&bp->stats_lock); 13670 + mutex_unlock(&bp->stats_lock); 13683 13671 13684 13672 bnx2x_save_statistics(bp); 13685 13673
+3 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
··· 2238 2238 2239 2239 cookie.vf = vf; 2240 2240 cookie.state = VF_ACQUIRED; 2241 - bnx2x_stats_safe_exec(bp, bnx2x_set_vf_state, &cookie); 2241 + rc = bnx2x_stats_safe_exec(bp, bnx2x_set_vf_state, &cookie); 2242 + if (rc) 2243 + goto op_err; 2242 2244 } 2243 2245 2244 2246 DP(BNX2X_MSG_IOV, "set state to acquired\n");
+74 -90
drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
··· 123 123 */ 124 124 static void bnx2x_storm_stats_post(struct bnx2x *bp) 125 125 { 126 - if (!bp->stats_pending) { 127 - int rc; 126 + int rc; 128 127 129 - spin_lock_bh(&bp->stats_lock); 128 + if (bp->stats_pending) 129 + return; 130 130 131 - if (bp->stats_pending) { 132 - spin_unlock_bh(&bp->stats_lock); 133 - return; 134 - } 131 + bp->fw_stats_req->hdr.drv_stats_counter = 132 + cpu_to_le16(bp->stats_counter++); 135 133 136 - bp->fw_stats_req->hdr.drv_stats_counter = 137 - cpu_to_le16(bp->stats_counter++); 134 + DP(BNX2X_MSG_STATS, "Sending statistics ramrod %d\n", 135 + le16_to_cpu(bp->fw_stats_req->hdr.drv_stats_counter)); 138 136 139 - DP(BNX2X_MSG_STATS, "Sending statistics ramrod %d\n", 140 - le16_to_cpu(bp->fw_stats_req->hdr.drv_stats_counter)); 137 + /* adjust the ramrod to include VF queues statistics */ 138 + bnx2x_iov_adjust_stats_req(bp); 139 + bnx2x_dp_stats(bp); 141 140 142 - /* adjust the ramrod to include VF queues statistics */ 143 - bnx2x_iov_adjust_stats_req(bp); 144 - bnx2x_dp_stats(bp); 145 - 146 - /* send FW stats ramrod */ 147 - rc = bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_STAT_QUERY, 0, 148 - U64_HI(bp->fw_stats_req_mapping), 149 - U64_LO(bp->fw_stats_req_mapping), 150 - NONE_CONNECTION_TYPE); 151 - if (rc == 0) 152 - bp->stats_pending = 1; 153 - 154 - spin_unlock_bh(&bp->stats_lock); 155 - } 141 + /* send FW stats ramrod */ 142 + rc = bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_STAT_QUERY, 0, 143 + U64_HI(bp->fw_stats_req_mapping), 144 + U64_LO(bp->fw_stats_req_mapping), 145 + NONE_CONNECTION_TYPE); 146 + if (rc == 0) 147 + bp->stats_pending = 1; 156 148 } 157 149 158 150 static void bnx2x_hw_stats_post(struct bnx2x *bp) ··· 213 221 */ 214 222 215 223 /* should be called under stats_sema */ 216 - static void __bnx2x_stats_pmf_update(struct bnx2x *bp) 224 + static void bnx2x_stats_pmf_update(struct bnx2x *bp) 217 225 { 218 226 struct dmae_command *dmae; 219 227 u32 opcode; ··· 511 519 } 512 520 513 521 /* should be called under stats_sema */ 
514 - static void __bnx2x_stats_start(struct bnx2x *bp) 522 + static void bnx2x_stats_start(struct bnx2x *bp) 515 523 { 516 524 if (IS_PF(bp)) { 517 525 if (bp->port.pmf) ··· 523 531 bnx2x_hw_stats_post(bp); 524 532 bnx2x_storm_stats_post(bp); 525 533 } 526 - 527 - bp->stats_started = true; 528 - } 529 - 530 - static void bnx2x_stats_start(struct bnx2x *bp) 531 - { 532 - if (down_timeout(&bp->stats_sema, HZ/10)) 533 - BNX2X_ERR("Unable to acquire stats lock\n"); 534 - __bnx2x_stats_start(bp); 535 - up(&bp->stats_sema); 536 534 } 537 535 538 536 static void bnx2x_stats_pmf_start(struct bnx2x *bp) 539 537 { 540 - if (down_timeout(&bp->stats_sema, HZ/10)) 541 - BNX2X_ERR("Unable to acquire stats lock\n"); 542 538 bnx2x_stats_comp(bp); 543 - __bnx2x_stats_pmf_update(bp); 544 - __bnx2x_stats_start(bp); 545 - up(&bp->stats_sema); 546 - } 547 - 548 - static void bnx2x_stats_pmf_update(struct bnx2x *bp) 549 - { 550 - if (down_timeout(&bp->stats_sema, HZ/10)) 551 - BNX2X_ERR("Unable to acquire stats lock\n"); 552 - __bnx2x_stats_pmf_update(bp); 553 - up(&bp->stats_sema); 539 + bnx2x_stats_pmf_update(bp); 540 + bnx2x_stats_start(bp); 554 541 } 555 542 556 543 static void bnx2x_stats_restart(struct bnx2x *bp) ··· 539 568 */ 540 569 if (IS_VF(bp)) 541 570 return; 542 - if (down_timeout(&bp->stats_sema, HZ/10)) 543 - BNX2X_ERR("Unable to acquire stats lock\n"); 571 + 544 572 bnx2x_stats_comp(bp); 545 - __bnx2x_stats_start(bp); 546 - up(&bp->stats_sema); 573 + bnx2x_stats_start(bp); 547 574 } 548 575 549 576 static void bnx2x_bmac_stats_update(struct bnx2x *bp) ··· 1215 1246 { 1216 1247 u32 *stats_comp = bnx2x_sp(bp, stats_comp); 1217 1248 1218 - /* we run update from timer context, so give up 1219 - * if somebody is in the middle of transition 1220 - */ 1221 - if (down_trylock(&bp->stats_sema)) 1249 + if (bnx2x_edebug_stats_stopped(bp)) 1222 1250 return; 1223 - 1224 - if (bnx2x_edebug_stats_stopped(bp) || !bp->stats_started) 1225 - goto out; 1226 1251 1227 1252 if (IS_PF(bp)) { 
1228 1253 if (*stats_comp != DMAE_COMP_VAL) 1229 - goto out; 1254 + return; 1230 1255 1231 1256 if (bp->port.pmf) 1232 1257 bnx2x_hw_stats_update(bp); ··· 1230 1267 BNX2X_ERR("storm stats were not updated for 3 times\n"); 1231 1268 bnx2x_panic(); 1232 1269 } 1233 - goto out; 1270 + return; 1234 1271 } 1235 1272 } else { 1236 1273 /* vf doesn't collect HW statistics, and doesn't get completions ··· 1244 1281 1245 1282 /* vf is done */ 1246 1283 if (IS_VF(bp)) 1247 - goto out; 1284 + return; 1248 1285 1249 1286 if (netif_msg_timer(bp)) { 1250 1287 struct bnx2x_eth_stats *estats = &bp->eth_stats; ··· 1255 1292 1256 1293 bnx2x_hw_stats_post(bp); 1257 1294 bnx2x_storm_stats_post(bp); 1258 - 1259 - out: 1260 - up(&bp->stats_sema); 1261 1295 } 1262 1296 1263 1297 static void bnx2x_port_stats_stop(struct bnx2x *bp) ··· 1318 1358 1319 1359 static void bnx2x_stats_stop(struct bnx2x *bp) 1320 1360 { 1321 - int update = 0; 1322 - 1323 - if (down_timeout(&bp->stats_sema, HZ/10)) 1324 - BNX2X_ERR("Unable to acquire stats lock\n"); 1325 - 1326 - bp->stats_started = false; 1361 + bool update = false; 1327 1362 1328 1363 bnx2x_stats_comp(bp); 1329 1364 ··· 1336 1381 bnx2x_hw_stats_post(bp); 1337 1382 bnx2x_stats_comp(bp); 1338 1383 } 1339 - 1340 - up(&bp->stats_sema); 1341 1384 } 1342 1385 1343 1386 static void bnx2x_stats_do_nothing(struct bnx2x *bp) ··· 1363 1410 1364 1411 void bnx2x_stats_handle(struct bnx2x *bp, enum bnx2x_stats_event event) 1365 1412 { 1366 - enum bnx2x_stats_state state; 1367 - void (*action)(struct bnx2x *bp); 1413 + enum bnx2x_stats_state state = bp->stats_state; 1414 + 1368 1415 if (unlikely(bp->panic)) 1369 1416 return; 1370 1417 1371 - spin_lock_bh(&bp->stats_lock); 1372 - state = bp->stats_state; 1373 - bp->stats_state = bnx2x_stats_stm[state][event].next_state; 1374 - action = bnx2x_stats_stm[state][event].action; 1375 - spin_unlock_bh(&bp->stats_lock); 1418 + /* Statistics update run from timer context, and we don't want to stop 1419 + * that context 
in case someone is in the middle of a transition. 1420 + * For other events, wait a bit until lock is taken. 1421 + */ 1422 + if (!mutex_trylock(&bp->stats_lock)) { 1423 + if (event == STATS_EVENT_UPDATE) 1424 + return; 1376 1425 1377 - action(bp); 1426 + DP(BNX2X_MSG_STATS, 1427 + "Unlikely stats' lock contention [event %d]\n", event); 1428 + mutex_lock(&bp->stats_lock); 1429 + } 1430 + 1431 + bnx2x_stats_stm[state][event].action(bp); 1432 + bp->stats_state = bnx2x_stats_stm[state][event].next_state; 1433 + 1434 + mutex_unlock(&bp->stats_lock); 1378 1435 1379 1436 if ((event != STATS_EVENT_UPDATE) || netif_msg_timer(bp)) 1380 1437 DP(BNX2X_MSG_STATS, "state %d -> event %d -> state %d\n", ··· 1961 1998 } 1962 1999 } 1963 2000 1964 - void bnx2x_stats_safe_exec(struct bnx2x *bp, 1965 - void (func_to_exec)(void *cookie), 1966 - void *cookie){ 1967 - if (down_timeout(&bp->stats_sema, HZ/10)) 1968 - BNX2X_ERR("Unable to acquire stats lock\n"); 2001 + int bnx2x_stats_safe_exec(struct bnx2x *bp, 2002 + void (func_to_exec)(void *cookie), 2003 + void *cookie) 2004 + { 2005 + int cnt = 10, rc = 0; 2006 + 2007 + /* Wait for statistics to end [while blocking further requests], 2008 + * then run supplied function 'safely'. 2009 + */ 2010 + mutex_lock(&bp->stats_lock); 2011 + 1969 2012 bnx2x_stats_comp(bp); 2013 + while (bp->stats_pending && cnt--) 2014 + if (bnx2x_storm_stats_update(bp)) 2015 + usleep_range(1000, 2000); 2016 + if (bp->stats_pending) { 2017 + BNX2X_ERR("Failed to wait for stats pending to clear [possibly FW is stuck]\n"); 2018 + rc = -EBUSY; 2019 + goto out; 2020 + } 2021 + 1970 2022 func_to_exec(cookie); 1971 - __bnx2x_stats_start(bp); 1972 - up(&bp->stats_sema); 2023 + 2024 + out: 2025 + /* No need to restart statistics - if they're enabled, the timer 2026 + * will restart the statistics. 2027 + */ 2028 + mutex_unlock(&bp->stats_lock); 2029 + 2030 + return rc; 1973 2031 }
+3 -3
drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
··· 539 539 void bnx2x_memset_stats(struct bnx2x *bp); 540 540 void bnx2x_stats_init(struct bnx2x *bp); 541 541 void bnx2x_stats_handle(struct bnx2x *bp, enum bnx2x_stats_event event); 542 - void bnx2x_stats_safe_exec(struct bnx2x *bp, 543 - void (func_to_exec)(void *cookie), 544 - void *cookie); 542 + int bnx2x_stats_safe_exec(struct bnx2x *bp, 543 + void (func_to_exec)(void *cookie), 544 + void *cookie); 545 545 546 546 /** 547 547 * bnx2x_save_statistics - save statistics when unloading.
+92 -30
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 487 487 BCMGENET_STAT_MIB_TX, 488 488 BCMGENET_STAT_RUNT, 489 489 BCMGENET_STAT_MISC, 490 + BCMGENET_STAT_SOFT, 490 491 }; 491 492 492 493 struct bcmgenet_stats { ··· 516 515 #define STAT_GENET_MIB_RX(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_MIB_RX) 517 516 #define STAT_GENET_MIB_TX(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_MIB_TX) 518 517 #define STAT_GENET_RUNT(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_RUNT) 518 + #define STAT_GENET_SOFT_MIB(str, m) STAT_GENET_MIB(str, m, BCMGENET_STAT_SOFT) 519 519 520 520 #define STAT_GENET_MISC(str, m, offset) { \ 521 521 .stat_string = str, \ ··· 616 614 UMAC_RBUF_OVFL_CNT), 617 615 STAT_GENET_MISC("rbuf_err_cnt", mib.rbuf_err_cnt, UMAC_RBUF_ERR_CNT), 618 616 STAT_GENET_MISC("mdf_err_cnt", mib.mdf_err_cnt, UMAC_MDF_ERR_CNT), 619 - STAT_GENET_MIB_RX("alloc_rx_buff_failed", mib.alloc_rx_buff_failed), 620 - STAT_GENET_MIB_RX("rx_dma_failed", mib.rx_dma_failed), 621 - STAT_GENET_MIB_TX("tx_dma_failed", mib.tx_dma_failed), 617 + STAT_GENET_SOFT_MIB("alloc_rx_buff_failed", mib.alloc_rx_buff_failed), 618 + STAT_GENET_SOFT_MIB("rx_dma_failed", mib.rx_dma_failed), 619 + STAT_GENET_SOFT_MIB("tx_dma_failed", mib.tx_dma_failed), 622 620 }; 623 621 624 622 #define BCMGENET_STATS_LEN ARRAY_SIZE(bcmgenet_gstrings_stats) ··· 670 668 s = &bcmgenet_gstrings_stats[i]; 671 669 switch (s->type) { 672 670 case BCMGENET_STAT_NETDEV: 671 + case BCMGENET_STAT_SOFT: 673 672 continue; 674 673 case BCMGENET_STAT_MIB_RX: 675 674 case BCMGENET_STAT_MIB_TX: ··· 974 971 } 975 972 976 973 /* Unlocked version of the reclaim routine */ 977 - static void __bcmgenet_tx_reclaim(struct net_device *dev, 978 - struct bcmgenet_tx_ring *ring) 974 + static unsigned int __bcmgenet_tx_reclaim(struct net_device *dev, 975 + struct bcmgenet_tx_ring *ring) 979 976 { 980 977 struct bcmgenet_priv *priv = netdev_priv(dev); 981 978 int last_tx_cn, last_c_index, num_tx_bds; 982 979 struct enet_cb *tx_cb_ptr; 983 980 struct netdev_queue *txq; 981 + unsigned int 
pkts_compl = 0; 984 982 unsigned int bds_compl; 985 983 unsigned int c_index; 986 984 ··· 1009 1005 tx_cb_ptr = ring->cbs + last_c_index; 1010 1006 bds_compl = 0; 1011 1007 if (tx_cb_ptr->skb) { 1008 + pkts_compl++; 1012 1009 bds_compl = skb_shinfo(tx_cb_ptr->skb)->nr_frags + 1; 1013 1010 dev->stats.tx_bytes += tx_cb_ptr->skb->len; 1014 1011 dma_unmap_single(&dev->dev, ··· 1033 1028 last_c_index &= (num_tx_bds - 1); 1034 1029 } 1035 1030 1036 - if (ring->free_bds > (MAX_SKB_FRAGS + 1)) 1037 - ring->int_disable(priv, ring); 1038 - 1039 - if (netif_tx_queue_stopped(txq)) 1040 - netif_tx_wake_queue(txq); 1031 + if (ring->free_bds > (MAX_SKB_FRAGS + 1)) { 1032 + if (netif_tx_queue_stopped(txq)) 1033 + netif_tx_wake_queue(txq); 1034 + } 1041 1035 1042 1036 ring->c_index = c_index; 1037 + 1038 + return pkts_compl; 1043 1039 } 1044 1040 1045 - static void bcmgenet_tx_reclaim(struct net_device *dev, 1041 + static unsigned int bcmgenet_tx_reclaim(struct net_device *dev, 1046 1042 struct bcmgenet_tx_ring *ring) 1047 1043 { 1044 + unsigned int released; 1048 1045 unsigned long flags; 1049 1046 1050 1047 spin_lock_irqsave(&ring->lock, flags); 1051 - __bcmgenet_tx_reclaim(dev, ring); 1048 + released = __bcmgenet_tx_reclaim(dev, ring); 1052 1049 spin_unlock_irqrestore(&ring->lock, flags); 1050 + 1051 + return released; 1052 + } 1053 + 1054 + static int bcmgenet_tx_poll(struct napi_struct *napi, int budget) 1055 + { 1056 + struct bcmgenet_tx_ring *ring = 1057 + container_of(napi, struct bcmgenet_tx_ring, napi); 1058 + unsigned int work_done = 0; 1059 + 1060 + work_done = bcmgenet_tx_reclaim(ring->priv->dev, ring); 1061 + 1062 + if (work_done == 0) { 1063 + napi_complete(napi); 1064 + ring->int_enable(ring->priv, ring); 1065 + 1066 + return 0; 1067 + } 1068 + 1069 + return budget; 1053 1070 } 1054 1071 1055 1072 static void bcmgenet_tx_reclaim_all(struct net_device *dev) ··· 1329 1302 bcmgenet_tdma_ring_writel(priv, ring->index, 1330 1303 ring->prod_index, TDMA_PROD_INDEX); 1331 
1304 1332 - if (ring->free_bds <= (MAX_SKB_FRAGS + 1)) { 1305 + if (ring->free_bds <= (MAX_SKB_FRAGS + 1)) 1333 1306 netif_tx_stop_queue(txq); 1334 - ring->int_enable(priv, ring); 1335 - } 1336 1307 1337 1308 out: 1338 1309 spin_unlock_irqrestore(&ring->lock, flags); ··· 1646 1621 struct device *kdev = &priv->pdev->dev; 1647 1622 int ret; 1648 1623 u32 reg, cpu_mask_clear; 1624 + int index; 1649 1625 1650 1626 dev_dbg(&priv->pdev->dev, "bcmgenet: init_umac\n"); 1651 1627 ··· 1673 1647 1674 1648 bcmgenet_intr_disable(priv); 1675 1649 1676 - cpu_mask_clear = UMAC_IRQ_RXDMA_BDONE; 1650 + cpu_mask_clear = UMAC_IRQ_RXDMA_BDONE | UMAC_IRQ_TXDMA_BDONE; 1677 1651 1678 1652 dev_dbg(kdev, "%s:Enabling RXDMA_BDONE interrupt\n", __func__); 1679 1653 ··· 1700 1674 1701 1675 bcmgenet_intrl2_0_writel(priv, cpu_mask_clear, INTRL2_CPU_MASK_CLEAR); 1702 1676 1677 + for (index = 0; index < priv->hw_params->tx_queues; index++) 1678 + bcmgenet_intrl2_1_writel(priv, (1 << index), 1679 + INTRL2_CPU_MASK_CLEAR); 1680 + 1703 1681 /* Enable rx/tx engine.*/ 1704 1682 dev_dbg(kdev, "done init umac\n"); 1705 1683 ··· 1723 1693 unsigned int first_bd; 1724 1694 1725 1695 spin_lock_init(&ring->lock); 1696 + ring->priv = priv; 1697 + netif_napi_add(priv->dev, &ring->napi, bcmgenet_tx_poll, 64); 1726 1698 ring->index = index; 1727 1699 if (index == DESC_INDEX) { 1728 1700 ring->queue = 0; ··· 1770 1738 TDMA_WRITE_PTR); 1771 1739 bcmgenet_tdma_ring_writel(priv, index, end_ptr * words_per_bd - 1, 1772 1740 DMA_END_ADDR); 1741 + 1742 + napi_enable(&ring->napi); 1743 + } 1744 + 1745 + static void bcmgenet_fini_tx_ring(struct bcmgenet_priv *priv, 1746 + unsigned int index) 1747 + { 1748 + struct bcmgenet_tx_ring *ring = &priv->tx_rings[index]; 1749 + 1750 + napi_disable(&ring->napi); 1751 + netif_napi_del(&ring->napi); 1773 1752 } 1774 1753 1775 1754 /* Initialize a RDMA ring */ ··· 1950 1907 return ret; 1951 1908 } 1952 1909 1953 - static void bcmgenet_fini_dma(struct bcmgenet_priv *priv) 1910 + static 
void __bcmgenet_fini_dma(struct bcmgenet_priv *priv) 1954 1911 { 1955 1912 int i; 1956 1913 ··· 1967 1924 bcmgenet_free_rx_buffers(priv); 1968 1925 kfree(priv->rx_cbs); 1969 1926 kfree(priv->tx_cbs); 1927 + } 1928 + 1929 + static void bcmgenet_fini_dma(struct bcmgenet_priv *priv) 1930 + { 1931 + int i; 1932 + 1933 + bcmgenet_fini_tx_ring(priv, DESC_INDEX); 1934 + 1935 + for (i = 0; i < priv->hw_params->tx_queues; i++) 1936 + bcmgenet_fini_tx_ring(priv, i); 1937 + 1938 + __bcmgenet_fini_dma(priv); 1970 1939 } 1971 1940 1972 1941 /* init_edma: Initialize DMA control register */ ··· 2007 1952 priv->tx_cbs = kcalloc(priv->num_tx_bds, sizeof(struct enet_cb), 2008 1953 GFP_KERNEL); 2009 1954 if (!priv->tx_cbs) { 2010 - bcmgenet_fini_dma(priv); 1955 + __bcmgenet_fini_dma(priv); 2011 1956 return -ENOMEM; 2012 1957 } 2013 1958 ··· 2029 1974 struct bcmgenet_priv *priv = container_of(napi, 2030 1975 struct bcmgenet_priv, napi); 2031 1976 unsigned int work_done; 2032 - 2033 - /* tx reclaim */ 2034 - bcmgenet_tx_reclaim(priv->dev, &priv->tx_rings[DESC_INDEX]); 2035 1977 2036 1978 work_done = bcmgenet_desc_rx(priv, budget); 2037 1979 ··· 2074 2022 static irqreturn_t bcmgenet_isr1(int irq, void *dev_id) 2075 2023 { 2076 2024 struct bcmgenet_priv *priv = dev_id; 2025 + struct bcmgenet_tx_ring *ring; 2077 2026 unsigned int index; 2078 2027 2079 2028 /* Save irq status for bottom-half processing. */ 2080 2029 priv->irq1_stat = 2081 2030 bcmgenet_intrl2_1_readl(priv, INTRL2_CPU_STAT) & 2082 - ~priv->int1_mask; 2031 + ~bcmgenet_intrl2_1_readl(priv, INTRL2_CPU_MASK_STATUS); 2083 2032 /* clear interrupts */ 2084 2033 bcmgenet_intrl2_1_writel(priv, priv->irq1_stat, INTRL2_CPU_CLEAR); 2085 2034 2086 2035 netif_dbg(priv, intr, priv->dev, 2087 2036 "%s: IRQ=0x%x\n", __func__, priv->irq1_stat); 2037 + 2088 2038 /* Check the MBDONE interrupts. 
2089 2039 * packet is done, reclaim descriptors 2090 2040 */ 2091 - if (priv->irq1_stat & 0x0000ffff) { 2092 - index = 0; 2093 - for (index = 0; index < 16; index++) { 2094 - if (priv->irq1_stat & (1 << index)) 2095 - bcmgenet_tx_reclaim(priv->dev, 2096 - &priv->tx_rings[index]); 2041 + for (index = 0; index < priv->hw_params->tx_queues; index++) { 2042 + if (!(priv->irq1_stat & BIT(index))) 2043 + continue; 2044 + 2045 + ring = &priv->tx_rings[index]; 2046 + 2047 + if (likely(napi_schedule_prep(&ring->napi))) { 2048 + ring->int_disable(priv, ring); 2049 + __napi_schedule(&ring->napi); 2097 2050 } 2098 2051 } 2052 + 2099 2053 return IRQ_HANDLED; 2100 2054 } 2101 2055 ··· 2133 2075 } 2134 2076 if (priv->irq0_stat & 2135 2077 (UMAC_IRQ_TXDMA_BDONE | UMAC_IRQ_TXDMA_PDONE)) { 2136 - /* Tx reclaim */ 2137 - bcmgenet_tx_reclaim(priv->dev, &priv->tx_rings[DESC_INDEX]); 2078 + struct bcmgenet_tx_ring *ring = &priv->tx_rings[DESC_INDEX]; 2079 + 2080 + if (likely(napi_schedule_prep(&ring->napi))) { 2081 + ring->int_disable(priv, ring); 2082 + __napi_schedule(&ring->napi); 2083 + } 2138 2084 } 2139 2085 if (priv->irq0_stat & (UMAC_IRQ_PHY_DET_R | 2140 2086 UMAC_IRQ_PHY_DET_F |
+2
drivers/net/ethernet/broadcom/genet/bcmgenet.h
··· 520 520 521 521 struct bcmgenet_tx_ring { 522 522 spinlock_t lock; /* ring lock */ 523 + struct napi_struct napi; /* NAPI per tx queue */ 523 524 unsigned int index; /* ring index */ 524 525 unsigned int queue; /* queue index */ 525 526 struct enet_cb *cbs; /* tx ring buffer control block*/ ··· 535 534 struct bcmgenet_tx_ring *); 536 535 void (*int_disable)(struct bcmgenet_priv *priv, 537 536 struct bcmgenet_tx_ring *); 537 + struct bcmgenet_priv *priv; 538 538 }; 539 539 540 540 /* device context */
+4 -2
drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
··· 73 73 if (wol->wolopts & ~(WAKE_MAGIC | WAKE_MAGICSECURE)) 74 74 return -EINVAL; 75 75 76 + reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL); 76 77 if (wol->wolopts & WAKE_MAGICSECURE) { 77 78 bcmgenet_umac_writel(priv, get_unaligned_be16(&wol->sopass[0]), 78 79 UMAC_MPD_PW_MS); 79 80 bcmgenet_umac_writel(priv, get_unaligned_be32(&wol->sopass[2]), 80 81 UMAC_MPD_PW_LS); 81 - reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL); 82 82 reg |= MPD_PW_EN; 83 - bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL); 83 + } else { 84 + reg &= ~MPD_PW_EN; 84 85 } 86 + bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL); 85 87 86 88 /* Flag the device and relevant IRQ as wakeup capable */ 87 89 if (wol->wolopts) {
+4 -4
drivers/net/ethernet/cadence/macb.c
··· 2113 2113 }; 2114 2114 2115 2115 #if defined(CONFIG_OF) 2116 - static struct macb_config pc302gem_config = { 2116 + static const struct macb_config pc302gem_config = { 2117 2117 .caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE, 2118 2118 .dma_burst_length = 16, 2119 2119 }; 2120 2120 2121 - static struct macb_config sama5d3_config = { 2121 + static const struct macb_config sama5d3_config = { 2122 2122 .caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE, 2123 2123 .dma_burst_length = 16, 2124 2124 }; 2125 2125 2126 - static struct macb_config sama5d4_config = { 2126 + static const struct macb_config sama5d4_config = { 2127 2127 .caps = 0, 2128 2128 .dma_burst_length = 4, 2129 2129 }; ··· 2154 2154 if (bp->pdev->dev.of_node) { 2155 2155 match = of_match_node(macb_dt_ids, bp->pdev->dev.of_node); 2156 2156 if (match && match->data) { 2157 - config = (const struct macb_config *)match->data; 2157 + config = match->data; 2158 2158 2159 2159 bp->caps = config->caps; 2160 2160 /*
+1 -1
drivers/net/ethernet/cadence/macb.h
··· 351 351 352 352 /* Bitfields in MID */ 353 353 #define MACB_IDNUM_OFFSET 16 354 - #define MACB_IDNUM_SIZE 16 354 + #define MACB_IDNUM_SIZE 12 355 355 #define MACB_REV_OFFSET 0 356 356 #define MACB_REV_SIZE 16 357 357
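The macb.h change above narrows the IDNUM field from 16 to 12 bits, which matters because the driver extracts it with a generic OFFSET/SIZE bitfield macro: a too-wide SIZE swallows neighboring bits. A sketch in the style of the driver's bitfield-extract macros (the sample MID register value is made up):

```c
#include <assert.h>
#include <stdint.h>

#define MACB_IDNUM_OFFSET 16
#define MACB_IDNUM_SIZE   12

/* Generic extractor in the style of the driver's MACB_BFEXT macro:
 * shift the field down, then mask to its declared width. */
#define BFEXT(name, value) \
    (((value) >> name##_OFFSET) & ((1u << name##_SIZE) - 1))

static uint32_t macb_idnum(uint32_t mid)
{
    return BFEXT(MACB_IDNUM, mid);
}
```

With the old SIZE of 16 the extractor would have returned the full upper halfword (0xABC2 for the sample below); with the corrected 12-bit width only the ID field survives the mask.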
+29 -28
drivers/net/ethernet/chelsio/cxgb4/clip_tbl.c
··· 35 35 } 36 36 37 37 static unsigned int clip_addr_hash(struct clip_tbl *ctbl, const u32 *addr, 38 - int addr_len) 38 + u8 v6) 39 39 { 40 - return addr_len == 4 ? ipv4_clip_hash(ctbl, addr) : 41 - ipv6_clip_hash(ctbl, addr); 40 + return v6 ? ipv6_clip_hash(ctbl, addr) : 41 + ipv4_clip_hash(ctbl, addr); 42 42 } 43 43 44 44 static int clip6_get_mbox(const struct net_device *dev, ··· 78 78 struct clip_entry *ce, *cte; 79 79 u32 *addr = (u32 *)lip; 80 80 int hash; 81 - int addr_len; 82 - int ret = 0; 81 + int ret = -1; 83 82 84 83 if (!ctbl) 85 84 return 0; 86 85 87 - if (v6) 88 - addr_len = 16; 89 - else 90 - addr_len = 4; 91 - 92 - hash = clip_addr_hash(ctbl, addr, addr_len); 86 + hash = clip_addr_hash(ctbl, addr, v6); 93 87 94 88 read_lock_bh(&ctbl->lock); 95 89 list_for_each_entry(cte, &ctbl->hash_list[hash], list) { 96 - if (addr_len == cte->addr_len && 97 - memcmp(lip, cte->addr, cte->addr_len) == 0) { 90 + if (cte->addr6.sin6_family == AF_INET6 && v6) 91 + ret = memcmp(lip, cte->addr6.sin6_addr.s6_addr, 92 + sizeof(struct in6_addr)); 93 + else if (cte->addr.sin_family == AF_INET && !v6) 94 + ret = memcmp(lip, (char *)(&cte->addr.sin_addr), 95 + sizeof(struct in_addr)); 96 + if (!ret) { 98 97 ce = cte; 99 98 read_unlock_bh(&ctbl->lock); 100 99 goto found; ··· 110 111 spin_lock_init(&ce->lock); 111 112 atomic_set(&ce->refcnt, 0); 112 113 atomic_dec(&ctbl->nfree); 113 - ce->addr_len = addr_len; 114 - memcpy(ce->addr, lip, addr_len); 115 114 list_add_tail(&ce->list, &ctbl->hash_list[hash]); 116 115 if (v6) { 116 + ce->addr6.sin6_family = AF_INET6; 117 + memcpy(ce->addr6.sin6_addr.s6_addr, 118 + lip, sizeof(struct in6_addr)); 117 119 ret = clip6_get_mbox(dev, (const struct in6_addr *)lip); 118 120 if (ret) { 119 121 write_unlock_bh(&ctbl->lock); 120 122 return ret; 121 123 } 124 + } else { 125 + ce->addr.sin_family = AF_INET; 126 + memcpy((char *)(&ce->addr.sin_addr), lip, 127 + sizeof(struct in_addr)); 122 128 } 123 129 } else { 124 130 
write_unlock_bh(&ctbl->lock); ··· 144 140 struct clip_entry *ce, *cte; 145 141 u32 *addr = (u32 *)lip; 146 142 int hash; 147 - int addr_len; 143 + int ret = -1; 148 144 149 - if (v6) 150 - addr_len = 16; 151 - else 152 - addr_len = 4; 153 - 154 - hash = clip_addr_hash(ctbl, addr, addr_len); 145 + hash = clip_addr_hash(ctbl, addr, v6); 155 146 156 147 read_lock_bh(&ctbl->lock); 157 148 list_for_each_entry(cte, &ctbl->hash_list[hash], list) { 158 - if (addr_len == cte->addr_len && 159 - memcmp(lip, cte->addr, cte->addr_len) == 0) { 149 + if (cte->addr6.sin6_family == AF_INET6 && v6) 150 + ret = memcmp(lip, cte->addr6.sin6_addr.s6_addr, 151 + sizeof(struct in6_addr)); 152 + else if (cte->addr.sin_family == AF_INET && !v6) 153 + ret = memcmp(lip, (char *)(&cte->addr.sin_addr), 154 + sizeof(struct in_addr)); 155 + if (!ret) { 160 156 ce = cte; 161 157 read_unlock_bh(&ctbl->lock); 162 158 goto found; ··· 253 249 for (i = 0 ; i < ctbl->clipt_size; ++i) { 254 250 list_for_each_entry(ce, &ctbl->hash_list[i], list) { 255 251 ip[0] = '\0'; 256 - if (ce->addr_len == 16) 257 - sprintf(ip, "%pI6c", ce->addr); 258 - else 259 - sprintf(ip, "%pI4c", ce->addr); 252 + sprintf(ip, "%pISc", &ce->addr); 260 253 seq_printf(seq, "%-25s %u\n", ip, 261 254 atomic_read(&ce->refcnt)); 262 255 }
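The clip_tbl change above drops the raw addr/addr_len pair in favour of a sockaddr_in/sockaddr_in6 union tagged by its address-family field, which is also what lets the debugfs dump collapse to a single %pISc format. A user-space sketch of the same family-dispatched match (struct and function names are illustrative, not the driver's):

```c
#include <assert.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>

/* The two sockaddr variants overlap in their family field, so whichever
 * member was written tags the entry - no separate addr_len needed. */
struct clip_entry {
    union {
        struct sockaddr_in  addr;
        struct sockaddr_in6 addr6;
    };
};

/* Match lip against the entry, dispatching on requested family (v6)
 * and the family the entry was stored with. Returns 1 on match. */
static int clip_match(const struct clip_entry *ce, const void *lip, int v6)
{
    if (v6 && ce->addr6.sin6_family == AF_INET6)
        return !memcmp(lip, &ce->addr6.sin6_addr, sizeof(struct in6_addr));
    if (!v6 && ce->addr.sin_family == AF_INET)
        return !memcmp(lip, &ce->addr.sin_addr, sizeof(struct in_addr));
    return 0;
}
```

An IPv4 entry can never alias an IPv6 lookup (or vice versa) because the family check runs before the byte comparison.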
+4 -2
drivers/net/ethernet/chelsio/cxgb4/clip_tbl.h
··· 14 14 spinlock_t lock; /* Hold while modifying clip reference */ 15 15 atomic_t refcnt; 16 16 struct list_head list; 17 - u32 addr[4]; 18 - int addr_len; 17 + union { 18 + struct sockaddr_in addr; 19 + struct sockaddr_in6 addr6; 20 + }; 19 21 }; 20 22 21 23 struct clip_tbl {
+9 -7
drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
··· 376 376 enum { 377 377 INGQ_EXTRAS = 2, /* firmware event queue and */ 378 378 /* forwarded interrupts */ 379 - MAX_EGRQ = MAX_ETH_QSETS*2 + MAX_OFLD_QSETS*2 380 - + MAX_CTRL_QUEUES + MAX_RDMA_QUEUES + MAX_ISCSI_QUEUES, 381 379 MAX_INGQ = MAX_ETH_QSETS + MAX_OFLD_QSETS + MAX_RDMA_QUEUES 382 380 + MAX_RDMA_CIQS + MAX_ISCSI_QUEUES + INGQ_EXTRAS, 383 381 }; ··· 614 616 unsigned int idma_qid[2]; /* SGE IDMA Hung Ingress Queue ID */ 615 617 616 618 unsigned int egr_start; 619 + unsigned int egr_sz; 617 620 unsigned int ingr_start; 618 - void *egr_map[MAX_EGRQ]; /* qid->queue egress queue map */ 619 - struct sge_rspq *ingr_map[MAX_INGQ]; /* qid->queue ingress queue map */ 620 - DECLARE_BITMAP(starving_fl, MAX_EGRQ); 621 - DECLARE_BITMAP(txq_maperr, MAX_EGRQ); 621 + unsigned int ingr_sz; 622 + void **egr_map; /* qid->queue egress queue map */ 623 + struct sge_rspq **ingr_map; /* qid->queue ingress queue map */ 624 + unsigned long *starving_fl; 625 + unsigned long *txq_maperr; 622 626 struct timer_list rx_timer; /* refills starving FLs */ 623 627 struct timer_list tx_timer; /* checks Tx queues */ 624 628 }; ··· 1103 1103 #define T4_MEMORY_WRITE 0 1104 1104 #define T4_MEMORY_READ 1 1105 1105 int t4_memory_rw(struct adapter *adap, int win, int mtype, u32 addr, u32 len, 1106 - __be32 *buf, int dir); 1106 + void *buf, int dir); 1107 1107 static inline int t4_memory_write(struct adapter *adap, int mtype, u32 addr, 1108 1108 u32 len, __be32 *buf) 1109 1109 { ··· 1136 1136 1137 1137 unsigned int qtimer_val(const struct adapter *adap, 1138 1138 const struct sge_rspq *q); 1139 + 1140 + int t4_init_devlog_params(struct adapter *adapter); 1139 1141 int t4_init_sge_params(struct adapter *adapter); 1140 1142 int t4_init_tp_params(struct adapter *adap); 1141 1143 int t4_filter_field_shift(const struct adapter *adap, int filter_sel);
+7 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
··· 670 670 "0.9375" }; 671 671 672 672 int i; 673 - u16 incr[NMTUS][NCCTRL_WIN]; 673 + u16 (*incr)[NCCTRL_WIN]; 674 674 struct adapter *adap = seq->private; 675 + 676 + incr = kmalloc(sizeof(*incr) * NMTUS, GFP_KERNEL); 677 + if (!incr) 678 + return -ENOMEM; 675 679 676 680 t4_read_cong_tbl(adap, incr); 677 681 ··· 689 685 adap->params.a_wnd[i], 690 686 dec_fac[adap->params.b_wnd[i]]); 691 687 } 688 + 689 + kfree(incr); 692 690 return 0; 693 691 } 694 692
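The cxgb4_debugfs.c hunk moves the NMTUS x NCCTRL_WIN congestion table off the kernel stack; declaring `incr` as a pointer to rows keeps two-dimensional `incr[i][j]` indexing intact after the allocation. A user-space sketch with malloc (the dimensions here are assumed from the driver headers):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define NMTUS      16
#define NCCTRL_WIN 32 /* dimensions illustrative; see the driver headers */

/* Heap-allocate the table while keeping the u16 [NMTUS][NCCTRL_WIN]
 * shape: incr is a pointer to rows, so sizeof(*incr) is one whole row
 * and incr[i][j] indexing works exactly as it did for the stack array. */
static uint16_t (*alloc_cong_tbl(void))[NCCTRL_WIN]
{
    uint16_t (*incr)[NCCTRL_WIN] = malloc(sizeof(*incr) * NMTUS);

    if (!incr)
        return NULL;
    for (int i = 0; i < NMTUS; i++)
        for (int j = 0; j < NCCTRL_WIN; j++)
            incr[i][j] = (uint16_t)(i * NCCTRL_WIN + j);
    return incr;
}
```

The same `sizeof(*incr) * NMTUS` sizing is what the patch uses, so the allocation automatically tracks any change to either dimension.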
+98 -39
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 920 920 { 921 921 int i; 922 922 923 - for (i = 0; i < ARRAY_SIZE(adap->sge.ingr_map); i++) { 923 + for (i = 0; i < adap->sge.ingr_sz; i++) { 924 924 struct sge_rspq *q = adap->sge.ingr_map[i]; 925 925 926 926 if (q && q->handler) { ··· 934 934 } 935 935 } 936 936 937 + /* Disable interrupt and napi handler */ 938 + static void disable_interrupts(struct adapter *adap) 939 + { 940 + if (adap->flags & FULL_INIT_DONE) { 941 + t4_intr_disable(adap); 942 + if (adap->flags & USING_MSIX) { 943 + free_msix_queue_irqs(adap); 944 + free_irq(adap->msix_info[0].vec, adap); 945 + } else { 946 + free_irq(adap->pdev->irq, adap); 947 + } 948 + quiesce_rx(adap); 949 + } 950 + } 951 + 937 952 /* 938 953 * Enable NAPI scheduling and interrupt generation for all Rx queues. 939 954 */ ··· 956 941 { 957 942 int i; 958 943 959 - for (i = 0; i < ARRAY_SIZE(adap->sge.ingr_map); i++) { 944 + for (i = 0; i < adap->sge.ingr_sz; i++) { 960 945 struct sge_rspq *q = adap->sge.ingr_map[i]; 961 946 962 947 if (!q) ··· 985 970 int err, msi_idx, i, j; 986 971 struct sge *s = &adap->sge; 987 972 988 - bitmap_zero(s->starving_fl, MAX_EGRQ); 989 - bitmap_zero(s->txq_maperr, MAX_EGRQ); 973 + bitmap_zero(s->starving_fl, s->egr_sz); 974 + bitmap_zero(s->txq_maperr, s->egr_sz); 990 975 991 976 if (adap->flags & USING_MSIX) 992 977 msi_idx = 1; /* vector 0 is for non-queue interrupts */ ··· 998 983 msi_idx = -((int)s->intrq.abs_id + 1); 999 984 } 1000 985 986 + /* NOTE: If you add/delete any Ingress/Egress Queue allocations in here, 987 + * don't forget to update the following which need to be 988 + * synchronized to and changes here. 989 + * 990 + * 1. The calculations of MAX_INGQ in cxgb4.h. 991 + * 992 + * 2. Update enable_msix/name_msix_vecs/request_msix_queue_irqs 993 + * to accommodate any new/deleted Ingress Queues 994 + * which need MSI-X Vectors. 995 + * 996 + * 3. Update sge_qinfo_show() to include information on the 997 + * new/deleted queues. 
998 + */ 1001 999 err = t4_sge_alloc_rxq(adap, &s->fw_evtq, true, adap->port[0], 1002 1000 msi_idx, NULL, fwevtq_handler); 1003 1001 if (err) { ··· 4272 4244 4273 4245 static void cxgb_down(struct adapter *adapter) 4274 4246 { 4275 - t4_intr_disable(adapter); 4276 4247 cancel_work_sync(&adapter->tid_release_task); 4277 4248 cancel_work_sync(&adapter->db_full_task); 4278 4249 cancel_work_sync(&adapter->db_drop_task); 4279 4250 adapter->tid_release_task_busy = false; 4280 4251 adapter->tid_release_head = NULL; 4281 4252 4282 - if (adapter->flags & USING_MSIX) { 4283 - free_msix_queue_irqs(adapter); 4284 - free_irq(adapter->msix_info[0].vec, adapter); 4285 - } else 4286 - free_irq(adapter->pdev->irq, adapter); 4287 - quiesce_rx(adapter); 4288 4253 t4_sge_stop(adapter); 4289 4254 t4_free_sge_resources(adapter); 4290 4255 adapter->flags &= ~FULL_INIT_DONE; ··· 4754 4733 if (ret < 0) 4755 4734 return ret; 4756 4735 4757 - ret = t4_cfg_pfvf(adap, adap->fn, adap->fn, 0, MAX_EGRQ, 64, MAX_INGQ, 4758 - 0, 0, 4, 0xf, 0xf, 16, FW_CMD_CAP_PF, FW_CMD_CAP_PF); 4736 + ret = t4_cfg_pfvf(adap, adap->fn, adap->fn, 0, adap->sge.egr_sz, 64, 4737 + MAX_INGQ, 0, 0, 4, 0xf, 0xf, 16, FW_CMD_CAP_PF, 4738 + FW_CMD_CAP_PF); 4759 4739 if (ret < 0) 4760 4740 return ret; 4761 4741 ··· 5110 5088 enum dev_state state; 5111 5089 u32 params[7], val[7]; 5112 5090 struct fw_caps_config_cmd caps_cmd; 5113 - struct fw_devlog_cmd devlog_cmd; 5114 - u32 devlog_meminfo; 5115 5091 int reset = 1; 5092 + 5093 + /* Grab Firmware Device Log parameters as early as possible so we have 5094 + * access to it for debugging, etc. 
5095 + */ 5096 + ret = t4_init_devlog_params(adap); 5097 + if (ret < 0) 5098 + return ret; 5116 5099 5117 5100 /* Contact FW, advertising Master capability */ 5118 5101 ret = t4_fw_hello(adap, adap->mbox, adap->mbox, MASTER_MAY, &state); ··· 5195 5168 ret = get_vpd_params(adap, &adap->params.vpd); 5196 5169 if (ret < 0) 5197 5170 goto bye; 5198 - 5199 - /* Read firmware device log parameters. We really need to find a way 5200 - * to get these parameters initialized with some default values (which 5201 - * are likely to be correct) for the case where we either don't 5202 - * attache to the firmware or it's crashed when we probe the adapter. 5203 - * That way we'll still be able to perform early firmware startup 5204 - * debugging ... If the request to get the Firmware's Device Log 5205 - * parameters fails, we'll live so we don't make that a fatal error. 5206 - */ 5207 - memset(&devlog_cmd, 0, sizeof(devlog_cmd)); 5208 - devlog_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_DEVLOG_CMD) | 5209 - FW_CMD_REQUEST_F | FW_CMD_READ_F); 5210 - devlog_cmd.retval_len16 = htonl(FW_LEN16(devlog_cmd)); 5211 - ret = t4_wr_mbox(adap, adap->mbox, &devlog_cmd, sizeof(devlog_cmd), 5212 - &devlog_cmd); 5213 - if (ret == 0) { 5214 - devlog_meminfo = 5215 - ntohl(devlog_cmd.memtype_devlog_memaddr16_devlog); 5216 - adap->params.devlog.memtype = 5217 - FW_DEVLOG_CMD_MEMTYPE_DEVLOG_G(devlog_meminfo); 5218 - adap->params.devlog.start = 5219 - FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(devlog_meminfo) << 4; 5220 - adap->params.devlog.size = ntohl(devlog_cmd.memsize_devlog); 5221 - } 5222 5171 5223 5172 /* 5224 5173 * Find out what ports are available to us. Note that we need to do ··· 5295 5292 adap->tids.ftid_base = val[3]; 5296 5293 adap->tids.nftids = val[4] - val[3] + 1; 5297 5294 adap->sge.ingr_start = val[5]; 5295 + 5296 + /* qids (ingress/egress) returned from firmware can be anywhere 5297 + * in the range from EQ(IQFLINT)_START to EQ(IQFLINT)_END. 
5298 + Hence driver needs to allocate memory for this range to
5299 + store the queue info. Get the highest IQFLINT/EQ index returned
5300 + in FW_EQ_*_CMD.alloc command.
5301 + */
5302 + params[0] = FW_PARAM_PFVF(EQ_END);
5303 + params[1] = FW_PARAM_PFVF(IQFLINT_END);
5304 + ret = t4_query_params(adap, adap->mbox, adap->fn, 0, 2, params, val);
5305 + if (ret < 0)
5306 + goto bye;
5307 + adap->sge.egr_sz = val[0] - adap->sge.egr_start + 1;
5308 + adap->sge.ingr_sz = val[1] - adap->sge.ingr_start + 1;
5309 +
5310 + adap->sge.egr_map = kcalloc(adap->sge.egr_sz,
5311 + sizeof(*adap->sge.egr_map), GFP_KERNEL);
5312 + if (!adap->sge.egr_map) {
5313 + ret = -ENOMEM;
5314 + goto bye;
5315 + }
5316 +
5317 + adap->sge.ingr_map = kcalloc(adap->sge.ingr_sz,
5318 + sizeof(*adap->sge.ingr_map), GFP_KERNEL);
5319 + if (!adap->sge.ingr_map) {
5320 + ret = -ENOMEM;
5321 + goto bye;
5322 + }
5323 +
5324 + /* Allocate the memory for the various egress queue bitmaps
5325 + * ie starving_fl and txq_maperr.
5326 + */
5327 + adap->sge.starving_fl = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz),
5328 + sizeof(long), GFP_KERNEL);
5329 + if (!adap->sge.starving_fl) {
5330 + ret = -ENOMEM;
5331 + goto bye;
5332 + }
5333 +
5334 + adap->sge.txq_maperr = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz),
5335 + sizeof(long), GFP_KERNEL);
5336 + if (!adap->sge.txq_maperr) {
5337 + ret = -ENOMEM;
5338 + goto bye;
5339 + }
5298 5340
5299 5341 params[0] = FW_PARAM_PFVF(CLIP_START);
5300 5342 params[1] = FW_PARAM_PFVF(CLIP_END);
··· 5549 5501 * happened to HW/FW, stop issuing commands. 
5550 5502 */ 5551 5503 bye: 5504 + kfree(adap->sge.egr_map); 5505 + kfree(adap->sge.ingr_map); 5506 + kfree(adap->sge.starving_fl); 5507 + kfree(adap->sge.txq_maperr); 5552 5508 if (ret != -ETIMEDOUT && ret != -EIO) 5553 5509 t4_fw_bye(adap, adap->mbox); 5554 5510 return ret; ··· 5580 5528 netif_carrier_off(dev); 5581 5529 } 5582 5530 spin_unlock(&adap->stats_lock); 5531 + disable_interrupts(adap); 5583 5532 if (adap->flags & FULL_INIT_DONE) 5584 5533 cxgb_down(adap); 5585 5534 rtnl_unlock(); ··· 5965 5912 5966 5913 t4_free_mem(adapter->l2t); 5967 5914 t4_free_mem(adapter->tids.tid_tab); 5915 + kfree(adapter->sge.egr_map); 5916 + kfree(adapter->sge.ingr_map); 5917 + kfree(adapter->sge.starving_fl); 5918 + kfree(adapter->sge.txq_maperr); 5968 5919 disable_msi(adapter); 5969 5920 5970 5921 for_each_port(adapter, i) ··· 6293 6236 6294 6237 if (is_offload(adapter)) 6295 6238 detach_ulds(adapter); 6239 + 6240 + disable_interrupts(adapter); 6296 6241 6297 6242 for_each_port(adapter, i) 6298 6243 if (adapter->port[i]->reg_state == NETREG_REGISTERED)
+4 -3
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 2171 2171 struct adapter *adap = (struct adapter *)data; 2172 2172 struct sge *s = &adap->sge; 2173 2173 2174 - for (i = 0; i < ARRAY_SIZE(s->starving_fl); i++) 2174 + for (i = 0; i < BITS_TO_LONGS(s->egr_sz); i++) 2175 2175 for (m = s->starving_fl[i]; m; m &= m - 1) { 2176 2176 struct sge_eth_rxq *rxq; 2177 2177 unsigned int id = __ffs(m) + i * BITS_PER_LONG; ··· 2259 2259 struct adapter *adap = (struct adapter *)data; 2260 2260 struct sge *s = &adap->sge; 2261 2261 2262 - for (i = 0; i < ARRAY_SIZE(s->txq_maperr); i++) 2262 + for (i = 0; i < BITS_TO_LONGS(s->egr_sz); i++) 2263 2263 for (m = s->txq_maperr[i]; m; m &= m - 1) { 2264 2264 unsigned long id = __ffs(m) + i * BITS_PER_LONG; 2265 2265 struct sge_ofld_txq *txq = s->egr_map[id]; ··· 2741 2741 free_rspq_fl(adap, &adap->sge.intrq, NULL); 2742 2742 2743 2743 /* clear the reverse egress queue map */ 2744 - memset(adap->sge.egr_map, 0, sizeof(adap->sge.egr_map)); 2744 + memset(adap->sge.egr_map, 0, 2745 + adap->sge.egr_sz * sizeof(*adap->sge.egr_map)); 2745 2746 } 2746 2747 2747 2748 void t4_sge_start(struct adapter *adap)
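Both sge.c timer loops above now walk bitmaps sized at runtime (`BITS_TO_LONGS(s->egr_sz)`) instead of fixed-size arrays, peeling the lowest set bit off each word with `m &= m - 1`. A sketch of that walk, using the GCC `__builtin_ctzl` builtin in place of the kernel's `__ffs`:

```c
#include <assert.h>
#include <stddef.h>

#define BITS_PER_LONG ((unsigned int)(8 * sizeof(unsigned long)))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Visit every set bit: for each word, clear the lowest set bit with
 * m &= m - 1 and recover its global index from a count-trailing-zeros
 * plus the word's base offset. Returns the number of set bits found. */
static unsigned int walk_set_bits(const unsigned long *map, unsigned int nbits,
                                  unsigned int *out, unsigned int max_out)
{
    unsigned int n = 0;

    for (unsigned int i = 0; i < BITS_TO_LONGS(nbits); i++)
        for (unsigned long m = map[i]; m; m &= m - 1) {
            unsigned int id = (unsigned int)__builtin_ctzl(m) +
                              i * BITS_PER_LONG;
            if (n < max_out)
                out[n] = id;
            n++;
        }
    return n;
}
```

Because the outer bound is computed from `nbits`, the same loop works whether the bitmap was a fixed `DECLARE_BITMAP` or a kcalloc'd array of longs.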
+98 -11
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 449 449 * @mtype: memory type: MEM_EDC0, MEM_EDC1 or MEM_MC 450 450 * @addr: address within indicated memory type 451 451 * @len: amount of memory to transfer 452 - * @buf: host memory buffer 452 + * @hbuf: host memory buffer 453 453 * @dir: direction of transfer T4_MEMORY_READ (1) or T4_MEMORY_WRITE (0) 454 454 * 455 455 * Reads/writes an [almost] arbitrary memory region in the firmware: the ··· 460 460 * caller's responsibility to perform appropriate byte order conversions. 461 461 */ 462 462 int t4_memory_rw(struct adapter *adap, int win, int mtype, u32 addr, 463 - u32 len, __be32 *buf, int dir) 463 + u32 len, void *hbuf, int dir) 464 464 { 465 465 u32 pos, offset, resid, memoffset; 466 466 u32 edc_size, mc_size, win_pf, mem_reg, mem_aperture, mem_base; 467 + u32 *buf; 467 468 468 469 /* Argument sanity checks ... 469 470 */ 470 - if (addr & 0x3) 471 + if (addr & 0x3 || (uintptr_t)hbuf & 0x3) 471 472 return -EINVAL; 473 + buf = (u32 *)hbuf; 472 474 473 475 /* It's convenient to be able to handle lengths which aren't a 474 476 * multiple of 32-bits because we often end up transferring files to ··· 534 532 535 533 /* Transfer data to/from the adapter as long as there's an integral 536 534 * number of 32-bit transfers to complete. 535 + * 536 + * A note on Endianness issues: 537 + * 538 + * The "register" reads and writes below from/to the PCI-E Memory 539 + * Window invoke the standard adapter Big-Endian to PCI-E Link 540 + * Little-Endian "swizzel." As a result, if we have the following 541 + * data in adapter memory: 542 + * 543 + * Memory: ... | b0 | b1 | b2 | b3 | ... 544 + * Address: i+0 i+1 i+2 i+3 545 + * 546 + * Then a read of the adapter memory via the PCI-E Memory Window 547 + * will yield: 548 + * 549 + * x = readl(i) 550 + * 31 0 551 + * [ b3 | b2 | b1 | b0 ] 552 + * 553 + * If this value is stored into local memory on a Little-Endian system 554 + * it will show up correctly in local memory as: 555 + * 556 + * ( ..., b0, b1, b2, b3, ... 
) 557 + * 558 + * But on a Big-Endian system, the store will show up in memory 559 + * incorrectly swizzled as: 560 + * 561 + * ( ..., b3, b2, b1, b0, ... ) 562 + * 563 + * So we need to account for this in the reads and writes to the 564 + * PCI-E Memory Window below by undoing the register read/write 565 + * swizzels. 537 566 */ 538 567 while (len > 0) { 539 568 if (dir == T4_MEMORY_READ) 540 - *buf++ = (__force __be32) t4_read_reg(adap, 541 - mem_base + offset); 569 + *buf++ = le32_to_cpu((__force __le32)t4_read_reg(adap, 570 + mem_base + offset)); 542 571 else 543 572 t4_write_reg(adap, mem_base + offset, 544 - (__force u32) *buf++); 573 + (__force u32)cpu_to_le32(*buf++)); 545 574 offset += sizeof(__be32); 546 575 len -= sizeof(__be32); 547 576 ··· 601 568 */ 602 569 if (resid) { 603 570 union { 604 - __be32 word; 571 + u32 word; 605 572 char byte[4]; 606 573 } last; 607 574 unsigned char *bp; 608 575 int i; 609 576 610 577 if (dir == T4_MEMORY_READ) { 611 - last.word = (__force __be32) t4_read_reg(adap, 612 - mem_base + offset); 578 + last.word = le32_to_cpu( 579 + (__force __le32)t4_read_reg(adap, 580 + mem_base + offset)); 613 581 for (bp = (unsigned char *)buf, i = resid; i < 4; i++) 614 582 bp[i] = last.byte[i]; 615 583 } else { ··· 618 584 for (i = resid; i < 4; i++) 619 585 last.byte[i] = 0; 620 586 t4_write_reg(adap, mem_base + offset, 621 - (__force u32) last.word); 587 + (__force u32)cpu_to_le32(last.word)); 622 588 } 623 589 } 624 590 ··· 1120 1086 } 1121 1087 1122 1088 /* Installed successfully, update the cached header too. 
*/
1123 - memcpy(card_fw, fs_fw, sizeof(*card_fw));
1089 + *card_fw = *fs_fw;
1124 1090 card_fw_usable = 1;
1125 1091 *reset = 0; /* already reset as part of load_fw */
1126 1092 }
··· 4455 4421
4456 4422 *pbar2_qoffset = bar2_qoffset;
4457 4423 *pbar2_qid = bar2_qid;
4424 + return 0;
4425 + }
4426 +
4427 + /**
4428 + * t4_init_devlog_params - initialize adapter->params.devlog
4429 + * @adap: the adapter
4430 + *
4431 + * Initialize various fields of the adapter's Firmware Device Log
4432 + * Parameters structure.
4433 + */
4434 + int t4_init_devlog_params(struct adapter *adap)
4435 + {
4436 + struct devlog_params *dparams = &adap->params.devlog;
4437 + u32 pf_dparams;
4438 + unsigned int devlog_meminfo;
4439 + struct fw_devlog_cmd devlog_cmd;
4440 + int ret;
4441 +
4442 + /* If we're dealing with newer firmware, the Device Log Parameters
4443 + * are stored in a designated register which allows us to access the
4444 + * Device Log even if we can't talk to the firmware.
4445 + */
4446 + pf_dparams =
4447 + t4_read_reg(adap, PCIE_FW_REG(PCIE_FW_PF_A, PCIE_FW_PF_DEVLOG));
4448 + if (pf_dparams) {
4449 + unsigned int nentries, nentries128;
4450 +
4451 + dparams->memtype = PCIE_FW_PF_DEVLOG_MEMTYPE_G(pf_dparams);
4452 + dparams->start = PCIE_FW_PF_DEVLOG_ADDR16_G(pf_dparams) << 4;
4453 +
4454 + nentries128 = PCIE_FW_PF_DEVLOG_NENTRIES128_G(pf_dparams);
4455 + nentries = (nentries128 + 1) * 128;
4456 + dparams->size = nentries * sizeof(struct fw_devlog_e);
4457 +
4458 + return 0;
4459 + }
4460 +
4461 + /* Otherwise, ask the firmware for its Device Log Parameters. 
4462 + */ 4463 + memset(&devlog_cmd, 0, sizeof(devlog_cmd)); 4464 + devlog_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_DEVLOG_CMD) | 4465 + FW_CMD_REQUEST_F | FW_CMD_READ_F); 4466 + devlog_cmd.retval_len16 = htonl(FW_LEN16(devlog_cmd)); 4467 + ret = t4_wr_mbox(adap, adap->mbox, &devlog_cmd, sizeof(devlog_cmd), 4468 + &devlog_cmd); 4469 + if (ret) 4470 + return ret; 4471 + 4472 + devlog_meminfo = ntohl(devlog_cmd.memtype_devlog_memaddr16_devlog); 4473 + dparams->memtype = FW_DEVLOG_CMD_MEMTYPE_DEVLOG_G(devlog_meminfo); 4474 + dparams->start = FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(devlog_meminfo) << 4; 4475 + dparams->size = ntohl(devlog_cmd.memsize_devlog); 4476 + 4458 4477 return 0; 4459 4478 } 4460 4479
+3
drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
··· 63 63 #define MC_BIST_STATUS_REG(reg_addr, idx) ((reg_addr) + (idx) * 4) 64 64 #define EDC_BIST_STATUS_REG(reg_addr, idx) ((reg_addr) + (idx) * 4) 65 65 66 + #define PCIE_FW_REG(reg_addr, idx) ((reg_addr) + (idx) * 4) 67 + 66 68 #define SGE_PF_KDOORBELL_A 0x0 67 69 68 70 #define QID_S 15 ··· 709 707 #define PFNUM_V(x) ((x) << PFNUM_S) 710 708 711 709 #define PCIE_FW_A 0x30b8 710 + #define PCIE_FW_PF_A 0x30bc 712 711 713 712 #define PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS_A 0x5908 714 713
+37 -2
drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
··· 101 101 FW_RI_BIND_MW_WR = 0x18,
102 102 FW_RI_FR_NSMR_WR = 0x19,
103 103 FW_RI_INV_LSTAG_WR = 0x1a,
104 - FW_LASTC2E_WR = 0x40
104 + FW_LASTC2E_WR = 0x70
105 105 };
106 106
107 107 struct fw_wr_hdr {
··· 993 993 FW_MEMTYPE_CF_EXTMEM = 0x2,
994 994 FW_MEMTYPE_CF_FLASH = 0x4,
995 995 FW_MEMTYPE_CF_INTERNAL = 0x5,
996 + FW_MEMTYPE_CF_EXTMEM1 = 0x6,
996 997 };
997 998
998 999 struct fw_caps_config_cmd {
··· 1036 1035 FW_PARAMS_MNEM_PFVF = 2, /* function params */
1037 1036 FW_PARAMS_MNEM_REG = 3, /* limited register access */
1038 1037 FW_PARAMS_MNEM_DMAQ = 4, /* dma queue params */
1038 + FW_PARAMS_MNEM_CHNET = 5, /* chnet params */
1039 1039 FW_PARAMS_MNEM_LAST
1040 1040 };
1041 1041
··· 3104 3102 FW_DEVLOG_FACILITY_FCOE = 0x2E,
3105 3103 FW_DEVLOG_FACILITY_FOISCSI = 0x30,
3106 3104 FW_DEVLOG_FACILITY_FOFCOE = 0x32,
3107 - FW_DEVLOG_FACILITY_MAX = 0x32,
3105 + FW_DEVLOG_FACILITY_CHNET = 0x34,
3106 + FW_DEVLOG_FACILITY_MAX = 0x34,
3108 3107 };
3109 3108
3110 3109 /* log message format */
··· 3141 3138 #define FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(x) \
3142 3139 (((x) >> FW_DEVLOG_CMD_MEMADDR16_DEVLOG_S) & \
3143 3140 FW_DEVLOG_CMD_MEMADDR16_DEVLOG_M)
3141 +
3142 + /* P C I E F W P F 7 R E G I S T E R */
3143 +
3144 + /* PF7 stores the Firmware Device Log parameters, which allow Host Drivers to
3145 + * access the "devlog" without needing to contact firmware. The encoding is
3146 + * mostly the same as that returned by the DEVLOG command except for the size
3147 + * which is encoded as the number of entries in multiples-1 of 128 here rather
3148 + * than the memory size as is done in the DEVLOG command. Thus, 0 means 128
3149 + * and 15 means 2048. This of course in turn constrains the allowed values
3150 + * for the devlog size ... 
3151 + */ 3152 + #define PCIE_FW_PF_DEVLOG 7 3153 + 3154 + #define PCIE_FW_PF_DEVLOG_NENTRIES128_S 28 3155 + #define PCIE_FW_PF_DEVLOG_NENTRIES128_M 0xf 3156 + #define PCIE_FW_PF_DEVLOG_NENTRIES128_V(x) \ 3157 + ((x) << PCIE_FW_PF_DEVLOG_NENTRIES128_S) 3158 + #define PCIE_FW_PF_DEVLOG_NENTRIES128_G(x) \ 3159 + (((x) >> PCIE_FW_PF_DEVLOG_NENTRIES128_S) & \ 3160 + PCIE_FW_PF_DEVLOG_NENTRIES128_M) 3161 + 3162 + #define PCIE_FW_PF_DEVLOG_ADDR16_S 4 3163 + #define PCIE_FW_PF_DEVLOG_ADDR16_M 0xffffff 3164 + #define PCIE_FW_PF_DEVLOG_ADDR16_V(x) ((x) << PCIE_FW_PF_DEVLOG_ADDR16_S) 3165 + #define PCIE_FW_PF_DEVLOG_ADDR16_G(x) \ 3166 + (((x) >> PCIE_FW_PF_DEVLOG_ADDR16_S) & PCIE_FW_PF_DEVLOG_ADDR16_M) 3167 + 3168 + #define PCIE_FW_PF_DEVLOG_MEMTYPE_S 0 3169 + #define PCIE_FW_PF_DEVLOG_MEMTYPE_M 0xf 3170 + #define PCIE_FW_PF_DEVLOG_MEMTYPE_V(x) ((x) << PCIE_FW_PF_DEVLOG_MEMTYPE_S) 3171 + #define PCIE_FW_PF_DEVLOG_MEMTYPE_G(x) \ 3172 + (((x) >> PCIE_FW_PF_DEVLOG_MEMTYPE_S) & PCIE_FW_PF_DEVLOG_MEMTYPE_M) 3144 3173 3145 3174 #endif /* _T4FW_INTERFACE_H_ */
+4 -4
drivers/net/ethernet/chelsio/cxgb4/t4fw_version.h
··· 36 36 #define __T4FW_VERSION_H__ 37 37 38 38 #define T4FW_VERSION_MAJOR 0x01 39 - #define T4FW_VERSION_MINOR 0x0C 40 - #define T4FW_VERSION_MICRO 0x19 39 + #define T4FW_VERSION_MINOR 0x0D 40 + #define T4FW_VERSION_MICRO 0x20 41 41 #define T4FW_VERSION_BUILD 0x00 42 42 43 43 #define T5FW_VERSION_MAJOR 0x01 44 - #define T5FW_VERSION_MINOR 0x0C 45 - #define T5FW_VERSION_MICRO 0x19 44 + #define T5FW_VERSION_MINOR 0x0D 45 + #define T5FW_VERSION_MICRO 0x20 46 46 #define T5FW_VERSION_BUILD 0x00 47 47 48 48 #endif
+8 -4
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
··· 1004 1004 ? (tq->pidx - 1) 1005 1005 : (tq->size - 1)); 1006 1006 __be64 *src = (__be64 *)&tq->desc[index]; 1007 - __be64 __iomem *dst = (__be64 *)(tq->bar2_addr + 1007 + __be64 __iomem *dst = (__be64 __iomem *)(tq->bar2_addr + 1008 1008 SGE_UDB_WCDOORBELL); 1009 1009 unsigned int count = EQ_UNIT / sizeof(__be64); 1010 1010 ··· 1018 1018 * DMA. 1019 1019 */ 1020 1020 while (count) { 1021 - writeq(*src, dst); 1021 + /* the (__force u64) is because the compiler 1022 + * doesn't understand the endian swizzling 1023 + * going on 1024 + */ 1025 + writeq((__force u64)*src, dst); 1022 1026 src++; 1023 1027 dst++; 1024 1028 count--; ··· 1256 1252 BUG_ON(DIV_ROUND_UP(ETHTXQ_MAX_HDR, TXD_PER_EQ_UNIT) > 1); 1257 1253 wr = (void *)&txq->q.desc[txq->q.pidx]; 1258 1254 wr->equiq_to_len16 = cpu_to_be32(wr_mid); 1259 - wr->r3[0] = cpu_to_be64(0); 1260 - wr->r3[1] = cpu_to_be64(0); 1255 + wr->r3[0] = cpu_to_be32(0); 1256 + wr->r3[1] = cpu_to_be32(0); 1261 1257 skb_copy_from_linear_data(skb, (void *)wr->ethmacdst, fw_hdr_copy_len); 1262 1258 end = (u64 *)wr + flits; 1263 1259
+3 -3
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c
··· 210 210 211 211 if (rpl) { 212 212 /* request bit in high-order BE word */ 213 - WARN_ON((be32_to_cpu(*(const u32 *)cmd) 213 + WARN_ON((be32_to_cpu(*(const __be32 *)cmd) 214 214 & FW_CMD_REQUEST_F) == 0); 215 215 get_mbox_rpl(adapter, rpl, size, mbox_data); 216 - WARN_ON((be32_to_cpu(*(u32 *)rpl) 216 + WARN_ON((be32_to_cpu(*(__be32 *)rpl) 217 217 & FW_CMD_REQUEST_F) != 0); 218 218 } 219 219 t4_write_reg(adapter, mbox_ctl, ··· 484 484 * o The BAR2 Queue ID. 485 485 * o The BAR2 Queue ID Offset into the BAR2 page. 486 486 */ 487 - bar2_page_offset = ((qid >> qpp_shift) << page_shift); 487 + bar2_page_offset = ((u64)(qid >> qpp_shift) << page_shift); 488 488 bar2_qid = qid & qpp_mask; 489 489 bar2_qid_offset = bar2_qid * SGE_UDB_SIZE; 490 490
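The t4vf_hw.c one-character fix casts the queue index to u64 before the page shift; previously the shift happened in 32-bit arithmetic and silently truncated whenever page_shift pushed the offset past bit 31. A sketch of the widened arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Widen BEFORE shifting, as in the patch:
 *   (u64)(qid >> qpp_shift) << page_shift
 * Shifting the 32-bit intermediate first would discard any bits the
 * result needs above bit 31. */
static uint64_t bar2_page_offset(uint32_t qid, unsigned int qpp_shift,
                                 unsigned int page_shift)
{
    return (uint64_t)(qid >> qpp_shift) << page_shift;
}
```

With qid = 0x100000 and a 16-bit page shift the correct offset is 0x1000000000, a 36-bit value; the unwidened expression would have wrapped it to 0.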
+2 -2
drivers/net/ethernet/cisco/enic/enic_main.c
··· 272 272 } 273 273 274 274 if (ENIC_TEST_INTR(pba, notify_intr)) { 275 - vnic_intr_return_all_credits(&enic->intr[notify_intr]); 276 275 enic_notify_check(enic); 276 + vnic_intr_return_all_credits(&enic->intr[notify_intr]); 277 277 } 278 278 279 279 if (ENIC_TEST_INTR(pba, err_intr)) { ··· 346 346 struct enic *enic = data; 347 347 unsigned int intr = enic_msix_notify_intr(enic); 348 348 349 - vnic_intr_return_all_credits(&enic->intr[intr]); 350 349 enic_notify_check(enic); 350 + vnic_intr_return_all_credits(&enic->intr[intr]); 351 351 352 352 return IRQ_HANDLED; 353 353 }
+1 -1
drivers/net/ethernet/dec/tulip/tulip_core.c
··· 589 589 (unsigned int)tp->rx_ring[i].buffer1, 590 590 (unsigned int)tp->rx_ring[i].buffer2, 591 591 buf[0], buf[1], buf[2]); 592 - for (j = 0; buf[j] != 0xee && j < 1600; j++) 592 + for (j = 0; ((j < 1600) && buf[j] != 0xee); j++) 593 593 if (j < 100) 594 594 pr_cont(" %02x", buf[j]); 595 595 pr_cont(" j=%d\n", j);
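The tulip_core.c hunk is a textbook operand-order fix: with `buf[j] != 0xee` tested first, the loop read one element past the end of the buffer before the bound check could stop it; putting `j < 1600` first lets `&&` short-circuit before the out-of-bounds access. A sketch of the corrected idiom:

```c
#include <assert.h>
#include <stddef.h>

/* Scan for a sentinel byte, checking the index FIRST so the
 * short-circuit of && guarantees buf[j] is never read out of bounds. */
static size_t find_sentinel(const unsigned char *buf, size_t len,
                            unsigned char sentinel)
{
    size_t j;

    for (j = 0; (j < len) && buf[j] != sentinel; j++)
        ;
    return j; /* == len when the sentinel is absent */
}
```

The behavior for in-bounds hits is unchanged; only the one-past-the-end read on a sentinel-free buffer goes away.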
+2
drivers/net/ethernet/emulex/benet/be.h
··· 354 354 u16 vlan_tag; 355 355 u32 tx_rate; 356 356 u32 plink_tracking; 357 + u32 privileges; 357 358 }; 358 359 359 360 enum vf_state { ··· 424 423 425 424 u8 __iomem *csr; /* CSR BAR used only for BE2/3 */ 426 425 u8 __iomem *db; /* Door Bell */ 426 + u8 __iomem *pcicfg; /* On SH,BEx only. Shadow of PCI config space */ 427 427 428 428 struct mutex mbox_lock; /* For serializing mbox cmds to BE card */ 429 429 struct be_dma_mem mbox_mem;
+7 -10
drivers/net/ethernet/emulex/benet/be_cmds.c
··· 1902 1902 {
1903 1903 int num_eqs, i = 0;
1904 1904
1905 - if (lancer_chip(adapter) && num > 8) {
1906 - while (num) {
1907 - num_eqs = min(num, 8);
1908 - __be_cmd_modify_eqd(adapter, &set_eqd[i], num_eqs);
1909 - i += num_eqs;
1910 - num -= num_eqs;
1911 - }
1912 - } else {
1913 - __be_cmd_modify_eqd(adapter, set_eqd, num);
1905 + while (num) {
1906 + num_eqs = min(num, 8);
1907 + __be_cmd_modify_eqd(adapter, &set_eqd[i], num_eqs);
1908 + i += num_eqs;
1909 + num -= num_eqs;
1914 1910 }
1915 1911
1916 1912 return 0;
··· 1914 1918
1915 1919 /* Uses synchronous mcc */
1916 1920 int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
1917 - u32 num)
1921 + u32 num, u32 domain)
1918 1922 {
1919 1923 struct be_mcc_wrb *wrb;
1920 1924 struct be_cmd_req_vlan_config *req;
··· 1932 1936 be_wrb_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON,
1933 1937 OPCODE_COMMON_NTWK_VLAN_CONFIG, sizeof(*req),
1934 1938 wrb, NULL);
1939 + req->hdr.domain = domain;
1935 1940
1936 1941 req->interface_id = if_id;
1937 1942 req->untagged = BE_IF_FLAGS_UNTAGGED & be_if_cap_flags(adapter) ? 1 : 0;
+1 -1
drivers/net/ethernet/emulex/benet/be_cmds.h
··· 2256 2256 int be_cmd_get_fw_ver(struct be_adapter *adapter); 2257 2257 int be_cmd_modify_eqd(struct be_adapter *adapter, struct be_set_eqd *, int num); 2258 2258 int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array, 2259 - u32 num); 2259 + u32 num, u32 domain); 2260 2260 int be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 status); 2261 2261 int be_cmd_set_flow_control(struct be_adapter *adapter, u32 tx_fc, u32 rx_fc); 2262 2262 int be_cmd_get_flow_control(struct be_adapter *adapter, u32 *tx_fc, u32 *rx_fc);
+98 -33
drivers/net/ethernet/emulex/benet/be_main.c
··· 1171 1171 for_each_set_bit(i, adapter->vids, VLAN_N_VID) 1172 1172 vids[num++] = cpu_to_le16(i); 1173 1173 1174 - status = be_cmd_vlan_config(adapter, adapter->if_handle, vids, num); 1174 + status = be_cmd_vlan_config(adapter, adapter->if_handle, vids, num, 0); 1175 1175 if (status) { 1176 1176 dev_err(dev, "Setting HW VLAN filtering failed\n"); 1177 1177 /* Set to VLAN promisc mode as setting VLAN filter failed */ ··· 1380 1380 return 0; 1381 1381 } 1382 1382 1383 + static int be_set_vf_tvt(struct be_adapter *adapter, int vf, u16 vlan) 1384 + { 1385 + struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf]; 1386 + u16 vids[BE_NUM_VLANS_SUPPORTED]; 1387 + int vf_if_id = vf_cfg->if_handle; 1388 + int status; 1389 + 1390 + /* Enable Transparent VLAN Tagging */ 1391 + status = be_cmd_set_hsw_config(adapter, vlan, vf + 1, vf_if_id, 0); 1392 + if (status) 1393 + return status; 1394 + 1395 + /* Clear pre-programmed VLAN filters on VF if any, if TVT is enabled */ 1396 + vids[0] = 0; 1397 + status = be_cmd_vlan_config(adapter, vf_if_id, vids, 1, vf + 1); 1398 + if (!status) 1399 + dev_info(&adapter->pdev->dev, 1400 + "Cleared guest VLANs on VF%d", vf); 1401 + 1402 + /* After TVT is enabled, disallow VFs to program VLAN filters */ 1403 + if (vf_cfg->privileges & BE_PRIV_FILTMGMT) { 1404 + status = be_cmd_set_fn_privileges(adapter, vf_cfg->privileges & 1405 + ~BE_PRIV_FILTMGMT, vf + 1); 1406 + if (!status) 1407 + vf_cfg->privileges &= ~BE_PRIV_FILTMGMT; 1408 + } 1409 + return 0; 1410 + } 1411 + 1412 + static int be_clear_vf_tvt(struct be_adapter *adapter, int vf) 1413 + { 1414 + struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf]; 1415 + struct device *dev = &adapter->pdev->dev; 1416 + int status; 1417 + 1418 + /* Reset Transparent VLAN Tagging. 
*/ 1419 + status = be_cmd_set_hsw_config(adapter, BE_RESET_VLAN_TAG_ID, vf + 1, 1420 + vf_cfg->if_handle, 0); 1421 + if (status) 1422 + return status; 1423 + 1424 + /* Allow VFs to program VLAN filtering */ 1425 + if (!(vf_cfg->privileges & BE_PRIV_FILTMGMT)) { 1426 + status = be_cmd_set_fn_privileges(adapter, vf_cfg->privileges | 1427 + BE_PRIV_FILTMGMT, vf + 1); 1428 + if (!status) { 1429 + vf_cfg->privileges |= BE_PRIV_FILTMGMT; 1430 + dev_info(dev, "VF%d: FILTMGMT priv enabled", vf); 1431 + } 1432 + } 1433 + 1434 + dev_info(dev, 1435 + "Disable/re-enable i/f in VM to clear Transparent VLAN tag"); 1436 + return 0; 1437 + } 1438 + 1383 1439 static int be_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, u8 qos) 1384 1440 { 1385 1441 struct be_adapter *adapter = netdev_priv(netdev); 1386 1442 struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf]; 1387 - int status = 0; 1443 + int status; 1388 1444 1389 1445 if (!sriov_enabled(adapter)) 1390 1446 return -EPERM; ··· 1450 1394 1451 1395 if (vlan || qos) { 1452 1396 vlan |= qos << VLAN_PRIO_SHIFT; 1453 - if (vf_cfg->vlan_tag != vlan) 1454 - status = be_cmd_set_hsw_config(adapter, vlan, vf + 1, 1455 - vf_cfg->if_handle, 0); 1397 + status = be_set_vf_tvt(adapter, vf, vlan); 1456 1398 } else { 1457 - /* Reset Transparent Vlan Tagging. 
*/ 1458 - status = be_cmd_set_hsw_config(adapter, BE_RESET_VLAN_TAG_ID, 1459 - vf + 1, vf_cfg->if_handle, 0); 1399 + status = be_clear_vf_tvt(adapter, vf); 1460 1400 } 1461 1401 1462 1402 if (status) { 1463 1403 dev_err(&adapter->pdev->dev, 1464 - "VLAN %d config on VF %d failed : %#x\n", vlan, 1465 - vf, status); 1404 + "VLAN %d config on VF %d failed : %#x\n", vlan, vf, 1405 + status); 1466 1406 return be_cmd_status(status); 1467 1407 } 1468 1408 1469 1409 vf_cfg->vlan_tag = vlan; 1470 - 1471 1410 return 0; 1472 1411 } 1473 1412 ··· 2823 2772 } 2824 2773 } 2825 2774 } else { 2826 - pci_read_config_dword(adapter->pdev, 2827 - PCICFG_UE_STATUS_LOW, &ue_lo); 2828 - pci_read_config_dword(adapter->pdev, 2829 - PCICFG_UE_STATUS_HIGH, &ue_hi); 2830 - pci_read_config_dword(adapter->pdev, 2831 - PCICFG_UE_STATUS_LOW_MASK, &ue_lo_mask); 2832 - pci_read_config_dword(adapter->pdev, 2833 - PCICFG_UE_STATUS_HI_MASK, &ue_hi_mask); 2775 + ue_lo = ioread32(adapter->pcicfg + PCICFG_UE_STATUS_LOW); 2776 + ue_hi = ioread32(adapter->pcicfg + PCICFG_UE_STATUS_HIGH); 2777 + ue_lo_mask = ioread32(adapter->pcicfg + 2778 + PCICFG_UE_STATUS_LOW_MASK); 2779 + ue_hi_mask = ioread32(adapter->pcicfg + 2780 + PCICFG_UE_STATUS_HI_MASK); 2834 2781 2835 2782 ue_lo = (ue_lo & ~ue_lo_mask); 2836 2783 ue_hi = (ue_hi & ~ue_hi_mask); ··· 3388 3339 u32 cap_flags, u32 vf) 3389 3340 { 3390 3341 u32 en_flags; 3391 - int status; 3392 3342 3393 3343 en_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST | 3394 3344 BE_IF_FLAGS_MULTICAST | BE_IF_FLAGS_PASS_L3L4_ERRORS | ··· 3395 3347 3396 3348 en_flags &= cap_flags; 3397 3349 3398 - status = be_cmd_if_create(adapter, cap_flags, en_flags, 3399 - if_handle, vf); 3400 - 3401 - return status; 3350 + return be_cmd_if_create(adapter, cap_flags, en_flags, if_handle, vf); 3402 3351 } 3403 3352 3404 3353 static int be_vfs_if_create(struct be_adapter *adapter) ··· 3413 3368 if (!BE3_chip(adapter)) { 3414 3369 status = be_cmd_get_profile_config(adapter, &res, 3415 3370 
vf + 1); 3416 - if (!status) 3371 + if (!status) { 3417 3372 cap_flags = res.if_cap_flags; 3373 + /* Prevent VFs from enabling VLAN promiscuous 3374 + * mode 3375 + */ 3376 + cap_flags &= ~BE_IF_FLAGS_VLAN_PROMISCUOUS; 3377 + } 3418 3378 } 3419 3379 3420 3380 status = be_if_create(adapter, &vf_cfg->if_handle, ··· 3453 3403 struct device *dev = &adapter->pdev->dev; 3454 3404 struct be_vf_cfg *vf_cfg; 3455 3405 int status, old_vfs, vf; 3456 - u32 privileges; 3457 3406 3458 3407 old_vfs = pci_num_vf(adapter->pdev); 3459 3408 ··· 3482 3433 3483 3434 for_all_vfs(adapter, vf_cfg, vf) { 3484 3435 /* Allow VFs to programs MAC/VLAN filters */ 3485 - status = be_cmd_get_fn_privileges(adapter, &privileges, vf + 1); 3486 - if (!status && !(privileges & BE_PRIV_FILTMGMT)) { 3436 + status = be_cmd_get_fn_privileges(adapter, &vf_cfg->privileges, 3437 + vf + 1); 3438 + if (!status && !(vf_cfg->privileges & BE_PRIV_FILTMGMT)) { 3487 3439 status = be_cmd_set_fn_privileges(adapter, 3488 - privileges | 3440 + vf_cfg->privileges | 3489 3441 BE_PRIV_FILTMGMT, 3490 3442 vf + 1); 3491 - if (!status) 3443 + if (!status) { 3444 + vf_cfg->privileges |= BE_PRIV_FILTMGMT; 3492 3445 dev_info(dev, "VF%d has FILTMGMT privilege\n", 3493 3446 vf); 3447 + } 3494 3448 } 3495 3449 3496 3450 /* Allow full available bandwidth */ ··· 4872 4820 4873 4821 static int be_map_pci_bars(struct be_adapter *adapter) 4874 4822 { 4823 + struct pci_dev *pdev = adapter->pdev; 4875 4824 u8 __iomem *addr; 4876 4825 4877 4826 if (BEx_chip(adapter) && be_physfn(adapter)) { 4878 - adapter->csr = pci_iomap(adapter->pdev, 2, 0); 4827 + adapter->csr = pci_iomap(pdev, 2, 0); 4879 4828 if (!adapter->csr) 4880 4829 return -ENOMEM; 4881 4830 } 4882 4831 4883 - addr = pci_iomap(adapter->pdev, db_bar(adapter), 0); 4832 + addr = pci_iomap(pdev, db_bar(adapter), 0); 4884 4833 if (!addr) 4885 4834 goto pci_map_err; 4886 4835 adapter->db = addr; 4836 + 4837 + if (skyhawk_chip(adapter) || BEx_chip(adapter)) { 4838 + if 
(be_physfn(adapter)) { 4839 + /* PCICFG is the 2nd BAR in BE2 */ 4840 + addr = pci_iomap(pdev, BE2_chip(adapter) ? 1 : 0, 0); 4841 + if (!addr) 4842 + goto pci_map_err; 4843 + adapter->pcicfg = addr; 4844 + } else { 4845 + adapter->pcicfg = adapter->db + SRIOV_VF_PCICFG_OFFSET; 4846 + } 4847 + } 4887 4848 4888 4849 be_roce_map_pci_bars(adapter); 4889 4850 return 0; 4890 4851 4891 4852 pci_map_err: 4892 - dev_err(&adapter->pdev->dev, "Error in mapping PCI BARs\n"); 4853 + dev_err(&pdev->dev, "Error in mapping PCI BARs\n"); 4893 4854 be_unmap_pci_bars(adapter); 4894 4855 return -ENOMEM; 4895 4856 }
+40 -30
drivers/net/ethernet/freescale/fec_main.c
··· 1189 1189 fec_enet_tx_queue(struct net_device *ndev, u16 queue_id) 1190 1190 { 1191 1191 struct fec_enet_private *fep; 1192 - struct bufdesc *bdp, *bdp_t; 1192 + struct bufdesc *bdp; 1193 1193 unsigned short status; 1194 1194 struct sk_buff *skb; 1195 1195 struct fec_enet_priv_tx_q *txq; 1196 1196 struct netdev_queue *nq; 1197 1197 int index = 0; 1198 - int i, bdnum; 1199 1198 int entries_free; 1200 1199 1201 1200 fep = netdev_priv(ndev); ··· 1215 1216 if (bdp == txq->cur_tx) 1216 1217 break; 1217 1218 1218 - bdp_t = bdp; 1219 - bdnum = 1; 1220 - index = fec_enet_get_bd_index(txq->tx_bd_base, bdp_t, fep); 1221 - skb = txq->tx_skbuff[index]; 1222 - while (!skb) { 1223 - bdp_t = fec_enet_get_nextdesc(bdp_t, fep, queue_id); 1224 - index = fec_enet_get_bd_index(txq->tx_bd_base, bdp_t, fep); 1225 - skb = txq->tx_skbuff[index]; 1226 - bdnum++; 1227 - } 1228 - if (skb_shinfo(skb)->nr_frags && 1229 - (status = bdp_t->cbd_sc) & BD_ENET_TX_READY) 1230 - break; 1219 + index = fec_enet_get_bd_index(txq->tx_bd_base, bdp, fep); 1231 1220 1232 - for (i = 0; i < bdnum; i++) { 1233 - if (!IS_TSO_HEADER(txq, bdp->cbd_bufaddr)) 1234 - dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr, 1235 - bdp->cbd_datlen, DMA_TO_DEVICE); 1236 - bdp->cbd_bufaddr = 0; 1237 - if (i < bdnum - 1) 1238 - bdp = fec_enet_get_nextdesc(bdp, fep, queue_id); 1239 - } 1221 + skb = txq->tx_skbuff[index]; 1240 1222 txq->tx_skbuff[index] = NULL; 1223 + if (!IS_TSO_HEADER(txq, bdp->cbd_bufaddr)) 1224 + dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr, 1225 + bdp->cbd_datlen, DMA_TO_DEVICE); 1226 + bdp->cbd_bufaddr = 0; 1227 + if (!skb) { 1228 + bdp = fec_enet_get_nextdesc(bdp, fep, queue_id); 1229 + continue; 1230 + } 1241 1231 1242 1232 /* Check for errors. 
*/ 1243 1233 if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC | ··· 1467 1479 1468 1480 vlan_packet_rcvd = true; 1469 1481 1470 - skb_copy_to_linear_data_offset(skb, VLAN_HLEN, 1471 - data, (2 * ETH_ALEN)); 1482 + memmove(skb->data + VLAN_HLEN, data, ETH_ALEN * 2); 1472 1483 skb_pull(skb, VLAN_HLEN); 1473 1484 } 1474 1485 ··· 1584 1597 writel(int_events, fep->hwp + FEC_IEVENT); 1585 1598 fec_enet_collect_events(fep, int_events); 1586 1599 1587 - if (fep->work_tx || fep->work_rx) { 1600 + if ((fep->work_tx || fep->work_rx) && fep->link) { 1588 1601 ret = IRQ_HANDLED; 1589 1602 1590 1603 if (napi_schedule_prep(&fep->napi)) { ··· 1954 1967 struct fec_enet_private *fep = netdev_priv(ndev); 1955 1968 struct device_node *node; 1956 1969 int err = -ENXIO, i; 1970 + u32 mii_speed, holdtime; 1957 1971 1958 1972 /* 1959 1973 * The i.MX28 dual fec interfaces are not equal. ··· 1992 2004 * Reference Manual has an error on this, and gets fixed on i.MX6Q 1993 2005 * document. 1994 2006 */ 1995 - fep->phy_speed = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 5000000); 2007 + mii_speed = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 5000000); 1996 2008 if (fep->quirks & FEC_QUIRK_ENET_MAC) 1997 - fep->phy_speed--; 1998 - fep->phy_speed <<= 1; 2009 + mii_speed--; 2010 + if (mii_speed > 63) { 2011 + dev_err(&pdev->dev, 2012 + "fec clock (%lu) too fast to get right mii speed\n", 2013 + clk_get_rate(fep->clk_ipg)); 2014 + err = -EINVAL; 2015 + goto err_out; 2016 + } 2017 + 2018 + /* 2019 + * The i.MX28 and i.MX6 types have another field in the MSCR (aka 2020 + * MII_SPEED) register that defines the MDIO output hold time. Earlier 2021 + * versions are RAZ there, so just ignore the difference and write the 2022 + * register always. 2023 + * The minimal hold time according to IEEE 802.3 (clause 22) is 10 ns. 2024 + * HOLDTIME + 1 is the number of clk cycles the fec is holding the 2025 + * output. 2026 + * The HOLDTIME bitfield takes values between 0 and 7 (inclusive). 
2027 + * Given that ceil(clkrate / 5000000) <= 64, the calculation for 2028 + * holdtime cannot result in a value greater than 3. 2029 + */ 2030 + holdtime = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 100000000) - 1; 2031 + 2032 + fep->phy_speed = mii_speed << 1 | holdtime << 8; 2033 + 1999 2034 writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED); 2000 2035 2001 2036 fep->mii_bus = mdiobus_alloc(); ··· 3394 3383 regulator_disable(fep->reg_phy); 3395 3384 if (fep->ptp_clock) 3396 3385 ptp_clock_unregister(fep->ptp_clock); 3397 - fec_enet_clk_enable(ndev, false); 3398 3386 of_node_put(fep->phy_node); 3399 3387 free_netdev(ndev); 3400 3388
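The MII speed and hold-time derivation that the fec_main.c hunk adds to the MDIO init path can be modeled outside the kernel. This is an illustrative sketch of the arithmetic only; the function and parameter names are mine, not the driver's:

```python
import math

def fec_mscr_value(ipg_clk_hz, enet_mac_quirk=False):
    # MII_SPEED field: divide the IPG clock down to a legal MDC rate.
    # The register encodes ceil(clk / 5000000), minus one on ENET-MAC parts
    # (mirroring the DIV_ROUND_UP + FEC_QUIRK_ENET_MAC adjustment above).
    mii_speed = math.ceil(ipg_clk_hz / 5000000)
    if enet_mac_quirk:
        mii_speed -= 1
    if mii_speed > 63:  # the field is 6 bits wide; the patch errors out here
        raise ValueError("ipg clock too fast for a valid MII speed")
    # HOLDTIME + 1 clock cycles must cover the 10 ns minimum MDIO output
    # hold time (IEEE 802.3 clause 22), hence ceil(clk / 100 MHz) - 1.
    holdtime = math.ceil(ipg_clk_hz / 100000000) - 1
    return mii_speed << 1 | holdtime << 8
```

For a 66 MHz IPG clock on an ENET-MAC part this gives mii_speed 13 and holdtime 0, i.e. the same bit layout the patch composes into fep->phy_speed before writing FEC_MII_SPEED.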
+19 -4
drivers/net/ethernet/freescale/gianfar.c
··· 747 747 return 0; 748 748 } 749 749 750 + static int gfar_of_group_count(struct device_node *np) 751 + { 752 + struct device_node *child; 753 + int num = 0; 754 + 755 + for_each_available_child_of_node(np, child) 756 + if (!of_node_cmp(child->name, "queue-group")) 757 + num++; 758 + 759 + return num; 760 + } 761 + 750 762 static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev) 751 763 { 752 764 const char *model; ··· 796 784 num_rx_qs = 1; 797 785 } else { /* MQ_MG_MODE */ 798 786 /* get the actual number of supported groups */ 799 - unsigned int num_grps = of_get_available_child_count(np); 787 + unsigned int num_grps = gfar_of_group_count(np); 800 788 801 789 if (num_grps == 0 || num_grps > MAXGROUPS) { 802 790 dev_err(&ofdev->dev, "Invalid # of int groups(%d)\n", ··· 863 851 864 852 /* Parse and initialize group specific information */ 865 853 if (priv->mode == MQ_MG_MODE) { 866 - for_each_child_of_node(np, child) { 854 + for_each_available_child_of_node(np, child) { 855 + if (of_node_cmp(child->name, "queue-group")) 856 + continue; 857 + 867 858 err = gfar_parse_group(child, priv, model); 868 859 if (err) 869 860 goto err_grp_init; ··· 3177 3162 struct phy_device *phydev = priv->phydev; 3178 3163 3179 3164 if (unlikely(phydev->link != priv->oldlink || 3180 - phydev->duplex != priv->oldduplex || 3181 - phydev->speed != priv->oldspeed)) 3165 + (phydev->link && (phydev->duplex != priv->oldduplex || 3166 + phydev->speed != priv->oldspeed)))) 3182 3167 gfar_update_link_state(priv); 3183 3168 } 3184 3169
+3
drivers/net/ethernet/freescale/ucc_geth.c
··· 3893 3893 ugeth->phy_interface = phy_interface; 3894 3894 ugeth->max_speed = max_speed; 3895 3895 3896 + /* Carrier starts down, phylib will bring it up */ 3897 + netif_carrier_off(dev); 3898 + 3896 3899 err = register_netdev(dev); 3897 3900 if (err) { 3898 3901 if (netif_msg_probe(ugeth))
+141 -105
drivers/net/ethernet/ibm/ehea/ehea_main.c
··· 3262 3262 device_remove_file(&dev->dev, &dev_attr_remove_port); 3263 3263 } 3264 3264 3265 + static int ehea_reboot_notifier(struct notifier_block *nb, 3266 + unsigned long action, void *unused) 3267 + { 3268 + if (action == SYS_RESTART) { 3269 + pr_info("Reboot: freeing all eHEA resources\n"); 3270 + ibmebus_unregister_driver(&ehea_driver); 3271 + } 3272 + return NOTIFY_DONE; 3273 + } 3274 + 3275 + static struct notifier_block ehea_reboot_nb = { 3276 + .notifier_call = ehea_reboot_notifier, 3277 + }; 3278 + 3279 + static int ehea_mem_notifier(struct notifier_block *nb, 3280 + unsigned long action, void *data) 3281 + { 3282 + int ret = NOTIFY_BAD; 3283 + struct memory_notify *arg = data; 3284 + 3285 + mutex_lock(&dlpar_mem_lock); 3286 + 3287 + switch (action) { 3288 + case MEM_CANCEL_OFFLINE: 3289 + pr_info("memory offlining canceled"); 3290 + /* Fall through: re-add canceled memory block */ 3291 + 3292 + case MEM_ONLINE: 3293 + pr_info("memory is going online"); 3294 + set_bit(__EHEA_STOP_XFER, &ehea_driver_flags); 3295 + if (ehea_add_sect_bmap(arg->start_pfn, arg->nr_pages)) 3296 + goto out_unlock; 3297 + ehea_rereg_mrs(); 3298 + break; 3299 + 3300 + case MEM_GOING_OFFLINE: 3301 + pr_info("memory is going offline"); 3302 + set_bit(__EHEA_STOP_XFER, &ehea_driver_flags); 3303 + if (ehea_rem_sect_bmap(arg->start_pfn, arg->nr_pages)) 3304 + goto out_unlock; 3305 + ehea_rereg_mrs(); 3306 + break; 3307 + 3308 + default: 3309 + break; 3310 + } 3311 + 3312 + ehea_update_firmware_handles(); 3313 + ret = NOTIFY_OK; 3314 + 3315 + out_unlock: 3316 + mutex_unlock(&dlpar_mem_lock); 3317 + return ret; 3318 + } 3319 + 3320 + static struct notifier_block ehea_mem_nb = { 3321 + .notifier_call = ehea_mem_notifier, 3322 + }; 3323 + 3324 + static void ehea_crash_handler(void) 3325 + { 3326 + int i; 3327 + 3328 + if (ehea_fw_handles.arr) 3329 + for (i = 0; i < ehea_fw_handles.num_entries; i++) 3330 + ehea_h_free_resource(ehea_fw_handles.arr[i].adh, 3331 + 
ehea_fw_handles.arr[i].fwh, 3332 + FORCE_FREE); 3333 + 3334 + if (ehea_bcmc_regs.arr) 3335 + for (i = 0; i < ehea_bcmc_regs.num_entries; i++) 3336 + ehea_h_reg_dereg_bcmc(ehea_bcmc_regs.arr[i].adh, 3337 + ehea_bcmc_regs.arr[i].port_id, 3338 + ehea_bcmc_regs.arr[i].reg_type, 3339 + ehea_bcmc_regs.arr[i].macaddr, 3340 + 0, H_DEREG_BCMC); 3341 + } 3342 + 3343 + static atomic_t ehea_memory_hooks_registered; 3344 + 3345 + /* Register memory hooks on probe of first adapter */ 3346 + static int ehea_register_memory_hooks(void) 3347 + { 3348 + int ret = 0; 3349 + 3350 + if (atomic_inc_and_test(&ehea_memory_hooks_registered)) 3351 + return 0; 3352 + 3353 + ret = ehea_create_busmap(); 3354 + if (ret) { 3355 + pr_info("ehea_create_busmap failed\n"); 3356 + goto out; 3357 + } 3358 + 3359 + ret = register_reboot_notifier(&ehea_reboot_nb); 3360 + if (ret) { 3361 + pr_info("register_reboot_notifier failed\n"); 3362 + goto out; 3363 + } 3364 + 3365 + ret = register_memory_notifier(&ehea_mem_nb); 3366 + if (ret) { 3367 + pr_info("register_memory_notifier failed\n"); 3368 + goto out2; 3369 + } 3370 + 3371 + ret = crash_shutdown_register(ehea_crash_handler); 3372 + if (ret) { 3373 + pr_info("crash_shutdown_register failed\n"); 3374 + goto out3; 3375 + } 3376 + 3377 + return 0; 3378 + 3379 + out3: 3380 + unregister_memory_notifier(&ehea_mem_nb); 3381 + out2: 3382 + unregister_reboot_notifier(&ehea_reboot_nb); 3383 + out: 3384 + return ret; 3385 + } 3386 + 3387 + static void ehea_unregister_memory_hooks(void) 3388 + { 3389 + if (atomic_read(&ehea_memory_hooks_registered)) 3390 + return; 3391 + 3392 + unregister_reboot_notifier(&ehea_reboot_nb); 3393 + if (crash_shutdown_unregister(ehea_crash_handler)) 3394 + pr_info("failed unregistering crash handler\n"); 3395 + unregister_memory_notifier(&ehea_mem_nb); 3396 + } 3397 + 3265 3398 static int ehea_probe_adapter(struct platform_device *dev) 3266 3399 { 3267 3400 struct ehea_adapter *adapter; 3268 3401 const u64 *adapter_handle; 3269 3402 
int ret; 3270 3403 int i; 3404 + 3405 + ret = ehea_register_memory_hooks(); 3406 + if (ret) 3407 + return ret; 3271 3408 3272 3409 if (!dev || !dev->dev.of_node) { 3273 3410 pr_err("Invalid ibmebus device probed\n"); ··· 3529 3392 return 0; 3530 3393 } 3531 3394 3532 - static void ehea_crash_handler(void) 3533 - { 3534 - int i; 3535 - 3536 - if (ehea_fw_handles.arr) 3537 - for (i = 0; i < ehea_fw_handles.num_entries; i++) 3538 - ehea_h_free_resource(ehea_fw_handles.arr[i].adh, 3539 - ehea_fw_handles.arr[i].fwh, 3540 - FORCE_FREE); 3541 - 3542 - if (ehea_bcmc_regs.arr) 3543 - for (i = 0; i < ehea_bcmc_regs.num_entries; i++) 3544 - ehea_h_reg_dereg_bcmc(ehea_bcmc_regs.arr[i].adh, 3545 - ehea_bcmc_regs.arr[i].port_id, 3546 - ehea_bcmc_regs.arr[i].reg_type, 3547 - ehea_bcmc_regs.arr[i].macaddr, 3548 - 0, H_DEREG_BCMC); 3549 - } 3550 - 3551 - static int ehea_mem_notifier(struct notifier_block *nb, 3552 - unsigned long action, void *data) 3553 - { 3554 - int ret = NOTIFY_BAD; 3555 - struct memory_notify *arg = data; 3556 - 3557 - mutex_lock(&dlpar_mem_lock); 3558 - 3559 - switch (action) { 3560 - case MEM_CANCEL_OFFLINE: 3561 - pr_info("memory offlining canceled"); 3562 - /* Readd canceled memory block */ 3563 - case MEM_ONLINE: 3564 - pr_info("memory is going online"); 3565 - set_bit(__EHEA_STOP_XFER, &ehea_driver_flags); 3566 - if (ehea_add_sect_bmap(arg->start_pfn, arg->nr_pages)) 3567 - goto out_unlock; 3568 - ehea_rereg_mrs(); 3569 - break; 3570 - case MEM_GOING_OFFLINE: 3571 - pr_info("memory is going offline"); 3572 - set_bit(__EHEA_STOP_XFER, &ehea_driver_flags); 3573 - if (ehea_rem_sect_bmap(arg->start_pfn, arg->nr_pages)) 3574 - goto out_unlock; 3575 - ehea_rereg_mrs(); 3576 - break; 3577 - default: 3578 - break; 3579 - } 3580 - 3581 - ehea_update_firmware_handles(); 3582 - ret = NOTIFY_OK; 3583 - 3584 - out_unlock: 3585 - mutex_unlock(&dlpar_mem_lock); 3586 - return ret; 3587 - } 3588 - 3589 - static struct notifier_block ehea_mem_nb = { 3590 - .notifier_call 
= ehea_mem_notifier, 3591 - }; 3592 - 3593 - static int ehea_reboot_notifier(struct notifier_block *nb, 3594 - unsigned long action, void *unused) 3595 - { 3596 - if (action == SYS_RESTART) { 3597 - pr_info("Reboot: freeing all eHEA resources\n"); 3598 - ibmebus_unregister_driver(&ehea_driver); 3599 - } 3600 - return NOTIFY_DONE; 3601 - } 3602 - 3603 - static struct notifier_block ehea_reboot_nb = { 3604 - .notifier_call = ehea_reboot_notifier, 3605 - }; 3606 - 3607 3395 static int check_module_parm(void) 3608 3396 { 3609 3397 int ret = 0; ··· 3582 3520 if (ret) 3583 3521 goto out; 3584 3522 3585 - ret = ehea_create_busmap(); 3586 - if (ret) 3587 - goto out; 3588 - 3589 - ret = register_reboot_notifier(&ehea_reboot_nb); 3590 - if (ret) 3591 - pr_info("failed registering reboot notifier\n"); 3592 - 3593 - ret = register_memory_notifier(&ehea_mem_nb); 3594 - if (ret) 3595 - pr_info("failed registering memory remove notifier\n"); 3596 - 3597 - ret = crash_shutdown_register(ehea_crash_handler); 3598 - if (ret) 3599 - pr_info("failed registering crash handler\n"); 3600 - 3601 3523 ret = ibmebus_register_driver(&ehea_driver); 3602 3524 if (ret) { 3603 3525 pr_err("failed registering eHEA device driver on ebus\n"); 3604 - goto out2; 3526 + goto out; 3605 3527 } 3606 3528 3607 3529 ret = driver_create_file(&ehea_driver.driver, ··· 3593 3547 if (ret) { 3594 3548 pr_err("failed to register capabilities attribute, ret=%d\n", 3595 3549 ret); 3596 - goto out3; 3550 + goto out2; 3597 3551 } 3598 3552 3599 3553 return ret; 3600 3554 3601 - out3: 3602 - ibmebus_unregister_driver(&ehea_driver); 3603 3555 out2: 3604 - unregister_memory_notifier(&ehea_mem_nb); 3605 - unregister_reboot_notifier(&ehea_reboot_nb); 3606 - crash_shutdown_unregister(ehea_crash_handler); 3556 + ibmebus_unregister_driver(&ehea_driver); 3607 3557 out: 3608 3558 return ret; 3609 3559 } 3610 3560 3611 3561 static void __exit ehea_module_exit(void) 3612 3562 { 3613 - int ret; 3614 - 3615 3563 
driver_remove_file(&ehea_driver.driver, &driver_attr_capabilities); 3616 3564 ibmebus_unregister_driver(&ehea_driver); 3617 - unregister_reboot_notifier(&ehea_reboot_nb); 3618 - ret = crash_shutdown_unregister(ehea_crash_handler); 3619 - if (ret) 3620 - pr_info("failed unregistering crash handler\n"); 3621 - unregister_memory_notifier(&ehea_mem_nb); 3565 + ehea_unregister_memory_hooks(); 3622 3566 kfree(ehea_fw_handles.arr); 3623 3567 kfree(ehea_bcmc_regs.arr); 3624 3568 ehea_destroy_busmap();
+25 -3
drivers/net/ethernet/ibm/ibmveth.c
··· 1136 1136 ibmveth_replenish_task(adapter); 1137 1137 1138 1138 if (frames_processed < budget) { 1139 + napi_complete(napi); 1140 + 1139 1141 /* We think we are done - reenable interrupts, 1140 1142 * then check once more to make sure we are done. 1141 1143 */ ··· 1145 1143 VIO_IRQ_ENABLE); 1146 1144 1147 1145 BUG_ON(lpar_rc != H_SUCCESS); 1148 - 1149 - napi_complete(napi); 1150 1146 1151 1147 if (ibmveth_rxq_pending_buffer(adapter) && 1152 1148 napi_reschedule(napi)) { ··· 1327 1327 return ret; 1328 1328 } 1329 1329 1330 + static int ibmveth_set_mac_addr(struct net_device *dev, void *p) 1331 + { 1332 + struct ibmveth_adapter *adapter = netdev_priv(dev); 1333 + struct sockaddr *addr = p; 1334 + u64 mac_address; 1335 + int rc; 1336 + 1337 + if (!is_valid_ether_addr(addr->sa_data)) 1338 + return -EADDRNOTAVAIL; 1339 + 1340 + mac_address = ibmveth_encode_mac_addr(addr->sa_data); 1341 + rc = h_change_logical_lan_mac(adapter->vdev->unit_address, mac_address); 1342 + if (rc) { 1343 + netdev_err(adapter->netdev, "h_change_logical_lan_mac failed with rc=%d\n", rc); 1344 + return rc; 1345 + } 1346 + 1347 + ether_addr_copy(dev->dev_addr, addr->sa_data); 1348 + 1349 + return 0; 1350 + } 1351 + 1330 1352 static const struct net_device_ops ibmveth_netdev_ops = { 1331 1353 .ndo_open = ibmveth_open, 1332 1354 .ndo_stop = ibmveth_close, ··· 1359 1337 .ndo_fix_features = ibmveth_fix_features, 1360 1338 .ndo_set_features = ibmveth_set_features, 1361 1339 .ndo_validate_addr = eth_validate_addr, 1362 - .ndo_set_mac_address = eth_mac_addr, 1340 + .ndo_set_mac_address = ibmveth_set_mac_addr, 1363 1341 #ifdef CONFIG_NET_POLL_CONTROLLER 1364 1342 .ndo_poll_controller = ibmveth_poll_controller, 1365 1343 #endif
+4 -3
drivers/net/ethernet/intel/i40e/i40e_common.c
··· 868 868 * The grst delay value is in 100ms units, and we'll wait a 869 869 * couple counts longer to be sure we don't just miss the end. 870 870 */ 871 - grst_del = rd32(hw, I40E_GLGEN_RSTCTL) & I40E_GLGEN_RSTCTL_GRSTDEL_MASK 872 - >> I40E_GLGEN_RSTCTL_GRSTDEL_SHIFT; 871 + grst_del = (rd32(hw, I40E_GLGEN_RSTCTL) & 872 + I40E_GLGEN_RSTCTL_GRSTDEL_MASK) >> 873 + I40E_GLGEN_RSTCTL_GRSTDEL_SHIFT; 873 874 for (cnt = 0; cnt < grst_del + 2; cnt++) { 874 875 reg = rd32(hw, I40E_GLGEN_RSTAT); 875 876 if (!(reg & I40E_GLGEN_RSTAT_DEVSTATE_MASK)) ··· 2847 2846 2848 2847 status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); 2849 2848 2850 - if (!status) 2849 + if (!status && filter_index) 2851 2850 *filter_index = resp->index; 2852 2851 2853 2852 return status;
+1 -1
drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c
··· 40 40 u32 val; 41 41 42 42 val = rd32(hw, I40E_PRTDCB_GENC); 43 - *delay = (u16)(val & I40E_PRTDCB_GENC_PFCLDA_MASK >> 43 + *delay = (u16)((val & I40E_PRTDCB_GENC_PFCLDA_MASK) >> 44 44 I40E_PRTDCB_GENC_PFCLDA_SHIFT); 45 45 } 46 46
+3 -1
drivers/net/ethernet/intel/i40e/i40e_debugfs.c
··· 989 989 if (!cmd_buf) 990 990 return count; 991 991 bytes_not_copied = copy_from_user(cmd_buf, buffer, count); 992 - if (bytes_not_copied < 0) 992 + if (bytes_not_copied < 0) { 993 + kfree(cmd_buf); 993 994 return bytes_not_copied; 995 + } 994 996 if (bytes_not_copied > 0) 995 997 count -= bytes_not_copied; 996 998 cmd_buf[count] = '\0';
+33 -11
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 1512 1512 vsi->tc_config.numtc = numtc; 1513 1513 vsi->tc_config.enabled_tc = enabled_tc ? enabled_tc : 1; 1514 1514 /* Number of queues per enabled TC */ 1515 - num_tc_qps = vsi->alloc_queue_pairs/numtc; 1515 + /* In MFP case we can have a much lower count of MSIx 1516 + * vectors available and so we need to lower the used 1517 + * q count. 1518 + */ 1519 + qcount = min_t(int, vsi->alloc_queue_pairs, pf->num_lan_msix); 1520 + num_tc_qps = qcount / numtc; 1516 1521 num_tc_qps = min_t(int, num_tc_qps, I40E_MAX_QUEUES_PER_TC); 1517 1522 1518 1523 /* Setup queue offset/count for all TCs for given VSI */ ··· 2689 2684 u16 qoffset, qcount; 2690 2685 int i, n; 2691 2686 2692 - if (!(vsi->back->flags & I40E_FLAG_DCB_ENABLED)) 2693 - return; 2687 + if (!(vsi->back->flags & I40E_FLAG_DCB_ENABLED)) { 2688 + /* Reset the TC information */ 2689 + for (i = 0; i < vsi->num_queue_pairs; i++) { 2690 + rx_ring = vsi->rx_rings[i]; 2691 + tx_ring = vsi->tx_rings[i]; 2692 + rx_ring->dcb_tc = 0; 2693 + tx_ring->dcb_tc = 0; 2694 + } 2695 + } 2694 2696 2695 2697 for (n = 0; n < I40E_MAX_TRAFFIC_CLASS; n++) { 2696 2698 if (!(vsi->tc_config.enabled_tc & (1 << n))) ··· 3841 3829 static void i40e_clear_interrupt_scheme(struct i40e_pf *pf) 3842 3830 { 3843 3831 int i; 3832 + 3833 + i40e_stop_misc_vector(pf); 3834 + if (pf->flags & I40E_FLAG_MSIX_ENABLED) { 3835 + synchronize_irq(pf->msix_entries[0].vector); 3836 + free_irq(pf->msix_entries[0].vector, pf); 3837 + } 3844 3838 3845 3839 i40e_put_lump(pf->irq_pile, 0, I40E_PILE_VALID_BIT-1); 3846 3840 for (i = 0; i < pf->num_alloc_vsi; i++) ··· 5272 5254 5273 5255 /* Wait for the PF's Tx queues to be disabled */ 5274 5256 ret = i40e_pf_wait_txq_disabled(pf); 5275 - if (!ret) 5257 + if (ret) { 5258 + /* Schedule PF reset to recover */ 5259 + set_bit(__I40E_PF_RESET_REQUESTED, &pf->state); 5260 + i40e_service_event_schedule(pf); 5261 + } else { 5276 5262 i40e_pf_unquiesce_all_vsi(pf); 5263 + } 5264 + 5277 5265 exit: 5278 5266 return ret; 5279 
5267 } ··· 5611 5587 int i, v; 5612 5588 5613 5589 /* If we're down or resetting, just bail */ 5614 - if (test_bit(__I40E_CONFIG_BUSY, &pf->state)) 5590 + if (test_bit(__I40E_DOWN, &pf->state) || 5591 + test_bit(__I40E_CONFIG_BUSY, &pf->state)) 5615 5592 return; 5616 5593 5617 5594 /* for each VSI/netdev ··· 9558 9533 set_bit(__I40E_DOWN, &pf->state); 9559 9534 del_timer_sync(&pf->service_timer); 9560 9535 cancel_work_sync(&pf->service_task); 9536 + i40e_fdir_teardown(pf); 9561 9537 9562 9538 if (pf->flags & I40E_FLAG_SRIOV_ENABLED) { 9563 9539 i40e_free_vfs(pf); ··· 9584 9558 */ 9585 9559 if (pf->vsi[pf->lan_vsi]) 9586 9560 i40e_vsi_release(pf->vsi[pf->lan_vsi]); 9587 - 9588 - i40e_stop_misc_vector(pf); 9589 - if (pf->flags & I40E_FLAG_MSIX_ENABLED) { 9590 - synchronize_irq(pf->msix_entries[0].vector); 9591 - free_irq(pf->msix_entries[0].vector, pf); 9592 - } 9593 9561 9594 9562 /* shutdown and destroy the HMC */ 9595 9563 if (pf->hw.hmc.hmc_obj) { ··· 9737 9717 9738 9718 wr32(hw, I40E_PFPM_APM, (pf->wol_en ? I40E_PFPM_APM_APME_MASK : 0)); 9739 9719 wr32(hw, I40E_PFPM_WUFC, (pf->wol_en ? I40E_PFPM_WUFC_MAG_MASK : 0)); 9720 + 9721 + i40e_clear_interrupt_scheme(pf); 9740 9722 9741 9723 if (system_state == SYSTEM_POWER_OFF) { 9742 9724 pci_wake_from_d3(pdev, pf->wol_en);
+35
drivers/net/ethernet/intel/i40e/i40e_nvm.c
··· 679 679 { 680 680 i40e_status status; 681 681 enum i40e_nvmupd_cmd upd_cmd; 682 + bool retry_attempt = false; 682 683 683 684 upd_cmd = i40e_nvmupd_validate_command(hw, cmd, errno); 684 685 686 + retry: 685 687 switch (upd_cmd) { 686 688 case I40E_NVMUPD_WRITE_CON: 687 689 status = i40e_nvmupd_nvm_write(hw, cmd, bytes, errno); ··· 727 725 *errno = -ESRCH; 728 726 break; 729 727 } 728 + 729 + /* In some circumstances, a multi-write transaction takes longer 730 + * than the default 3 minute timeout on the write semaphore. If 731 + * the write failed with an EBUSY status, this is likely the problem, 732 + * so here we try to reacquire the semaphore then retry the write. 733 + * We only do one retry, then give up. 734 + */ 735 + if (status && (hw->aq.asq_last_status == I40E_AQ_RC_EBUSY) && 736 + !retry_attempt) { 737 + i40e_status old_status = status; 738 + u32 old_asq_status = hw->aq.asq_last_status; 739 + u32 gtime; 740 + 741 + gtime = rd32(hw, I40E_GLVFGEN_TIMER); 742 + if (gtime >= hw->nvm.hw_semaphore_timeout) { 743 + i40e_debug(hw, I40E_DEBUG_ALL, 744 + "NVMUPD: write semaphore expired (%d >= %lld), retrying\n", 745 + gtime, hw->nvm.hw_semaphore_timeout); 746 + i40e_release_nvm(hw); 747 + status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE); 748 + if (status) { 749 + i40e_debug(hw, I40E_DEBUG_ALL, 750 + "NVMUPD: write semaphore reacquire failed aq_err = %d\n", 751 + hw->aq.asq_last_status); 752 + status = old_status; 753 + hw->aq.asq_last_status = old_asq_status; 754 + } else { 755 + retry_attempt = true; 756 + goto retry; 757 + } 758 + } 759 + } 760 + 730 761 return status; 731 762 } 732 763
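The EBUSY handling added to i40e_nvmupd_command() is a retry-once pattern: when a multi-write transaction outlives the write semaphore and fails busy, the driver reacquires the semaphore and repeats the operation exactly once. A language-neutral sketch of that control flow, with all callables as hypothetical stand-ins rather than driver APIs:

```python
def nvm_write_with_retry(write, semaphore_expired, reacquire_semaphore):
    """Retry a write once if it failed busy because the lock timed out."""
    status = write()
    if status == "EBUSY" and semaphore_expired():
        if reacquire_semaphore():
            status = write()  # one retry only, then give up
    return status
```

As in the patch, a failed reacquire keeps the original error rather than retrying blind, and a second EBUSY is returned to the caller unchanged.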
+95 -24
drivers/net/ethernet/intel/i40e/i40e_txrx.c
··· 586 586 } 587 587 588 588 /** 589 + * i40e_get_head - Retrieve head from head writeback 590 + * @tx_ring: tx ring to fetch head of 591 + * 592 + * Returns value of Tx ring head based on value stored 593 + * in head write-back location 594 + **/ 595 + static inline u32 i40e_get_head(struct i40e_ring *tx_ring) 596 + { 597 + void *head = (struct i40e_tx_desc *)tx_ring->desc + tx_ring->count; 598 + 599 + return le32_to_cpu(*(volatile __le32 *)head); 600 + } 601 + 602 + /** 589 603 * i40e_get_tx_pending - how many tx descriptors not processed 590 604 * @tx_ring: the ring of descriptors 591 605 * ··· 608 594 **/ 609 595 static u32 i40e_get_tx_pending(struct i40e_ring *ring) 610 596 { 611 - u32 ntu = ((ring->next_to_clean <= ring->next_to_use) 612 - ? ring->next_to_use 613 - : ring->next_to_use + ring->count); 614 - return ntu - ring->next_to_clean; 597 + u32 head, tail; 598 + 599 + head = i40e_get_head(ring); 600 + tail = readl(ring->tail); 601 + 602 + if (head != tail) 603 + return (head < tail) ? 604 + tail - head : (tail + ring->count - head); 605 + 606 + return 0; 615 607 } 616 608 617 609 /** ··· 626 606 **/ 627 607 static bool i40e_check_tx_hang(struct i40e_ring *tx_ring) 628 608 { 609 + u32 tx_done = tx_ring->stats.packets; 610 + u32 tx_done_old = tx_ring->tx_stats.tx_done_old; 629 611 u32 tx_pending = i40e_get_tx_pending(tx_ring); 630 612 struct i40e_pf *pf = tx_ring->vsi->back; 631 613 bool ret = false; ··· 645 623 * run the check_tx_hang logic with a transmit completion 646 624 * pending but without time to complete it yet. 
647 625 */ 648 - if ((tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) && 649 - (tx_pending >= I40E_MIN_DESC_PENDING)) { 626 + if ((tx_done_old == tx_done) && tx_pending) { 650 627 /* make sure it is true for two checks in a row */ 651 628 ret = test_and_set_bit(__I40E_HANG_CHECK_ARMED, 652 629 &tx_ring->state); 653 - } else if ((tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) && 654 - (tx_pending < I40E_MIN_DESC_PENDING) && 655 - (tx_pending > 0)) { 630 + } else if (tx_done_old == tx_done && 631 + (tx_pending < I40E_MIN_DESC_PENDING) && (tx_pending > 0)) { 656 632 if (I40E_DEBUG_FLOW & pf->hw.debug_mask) 657 633 dev_info(tx_ring->dev, "HW needs some more descs to do a cacheline flush. tx_pending %d, queue %d", 658 634 tx_pending, tx_ring->queue_index); 659 635 pf->tx_sluggish_count++; 660 636 } else { 661 637 /* update completed stats and disarm the hang check */ 662 - tx_ring->tx_stats.tx_done_old = tx_ring->stats.packets; 638 + tx_ring->tx_stats.tx_done_old = tx_done; 663 639 clear_bit(__I40E_HANG_CHECK_ARMED, &tx_ring->state); 664 640 } 665 641 666 642 return ret; 667 - } 668 - 669 - /** 670 - * i40e_get_head - Retrieve head from head writeback 671 - * @tx_ring: tx ring to fetch head of 672 - * 673 - * Returns value of Tx ring head based on value stored 674 - * in head write-back location 675 - **/ 676 - static inline u32 i40e_get_head(struct i40e_ring *tx_ring) 677 - { 678 - void *head = (struct i40e_tx_desc *)tx_ring->desc + tx_ring->count; 679 - 680 - return le32_to_cpu(*(volatile __le32 *)head); 681 643 } 682 644 683 645 #define WB_STRIDE 0x3 ··· 2146 2140 } 2147 2141 2148 2142 /** 2143 + * i40e_chk_linearize - Check if there are more than 8 fragments per packet 2144 + * @skb: send buffer 2145 + * @tx_flags: collected send information 2146 + * @hdr_len: size of the packet header 2147 + * 2148 + * Note: Our HW can't scatter-gather more than 8 fragments to build 2149 + * a packet on the wire and so we need to figure out the cases where we 
2150 + * need to linearize the skb. 2151 + **/ 2152 + static bool i40e_chk_linearize(struct sk_buff *skb, u32 tx_flags, 2153 + const u8 hdr_len) 2154 + { 2155 + struct skb_frag_struct *frag; 2156 + bool linearize = false; 2157 + unsigned int size = 0; 2158 + u16 num_frags; 2159 + u16 gso_segs; 2160 + 2161 + num_frags = skb_shinfo(skb)->nr_frags; 2162 + gso_segs = skb_shinfo(skb)->gso_segs; 2163 + 2164 + if (tx_flags & (I40E_TX_FLAGS_TSO | I40E_TX_FLAGS_FSO)) { 2165 + u16 j = 1; 2166 + 2167 + if (num_frags < (I40E_MAX_BUFFER_TXD)) 2168 + goto linearize_chk_done; 2169 + /* try the simple math, if we have too many frags per segment */ 2170 + if (DIV_ROUND_UP((num_frags + gso_segs), gso_segs) > 2171 + I40E_MAX_BUFFER_TXD) { 2172 + linearize = true; 2173 + goto linearize_chk_done; 2174 + } 2175 + frag = &skb_shinfo(skb)->frags[0]; 2176 + size = hdr_len; 2177 + /* we might still have more fragments per segment */ 2178 + do { 2179 + size += skb_frag_size(frag); 2180 + frag++; j++; 2181 + if (j == I40E_MAX_BUFFER_TXD) { 2182 + if (size < skb_shinfo(skb)->gso_size) { 2183 + linearize = true; 2184 + break; 2185 + } 2186 + j = 1; 2187 + size -= skb_shinfo(skb)->gso_size; 2188 + if (size) 2189 + j++; 2190 + size += hdr_len; 2191 + } 2192 + num_frags--; 2193 + } while (num_frags); 2194 + } else { 2195 + if (num_frags >= I40E_MAX_BUFFER_TXD) 2196 + linearize = true; 2197 + } 2198 + 2199 + linearize_chk_done: 2200 + return linearize; 2201 + } 2202 + 2203 + /** 2149 2204 * i40e_tx_map - Build the Tx descriptor 2150 2205 * @tx_ring: ring to send buffer on 2151 2206 * @skb: send buffer ··· 2462 2395 2463 2396 if (tsyn) 2464 2397 tx_flags |= I40E_TX_FLAGS_TSYN; 2398 + 2399 + if (i40e_chk_linearize(skb, tx_flags, hdr_len)) 2400 + if (skb_linearize(skb)) 2401 + goto out_drop; 2465 2402 2466 2403 skb_tx_timestamp(skb); 2467 2404
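The reworked i40e_get_tx_pending() above boils down to a small wraparound calculation over the hardware head (from the write-back location) and the software tail. A userspace sketch of that arithmetic, with hypothetical names standing in for the driver's ring state:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the descriptor-pending math in the reworked i40e_get_tx_pending():
 * 'head' models the head write-back value, 'tail' the tail register, and
 * 'count' the ring size.  Tail may have wrapped past the end of the ring. */
static uint32_t tx_pending(uint32_t head, uint32_t tail, uint32_t count)
{
	if (head == tail)
		return 0;	/* nothing outstanding */
	return (head < tail) ? tail - head : tail + count - head;
}
```

With a 512-descriptor ring, head 500 and tail 10 means the tail has wrapped and 22 descriptors are still pending.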
+1
drivers/net/ethernet/intel/i40e/i40e_txrx.h
··· 112 112 113 113 #define i40e_rx_desc i40e_32byte_rx_desc 114 114 115 + #define I40E_MAX_BUFFER_TXD 8 115 116 #define I40E_MIN_TX_LEN 17 116 117 #define I40E_MAX_DATA_PER_TXD 8192 117 118
+107 -36
drivers/net/ethernet/intel/i40evf/i40e_txrx.c
··· 126 126 } 127 127 128 128 /** 129 + * i40e_get_head - Retrieve head from head writeback 130 + * @tx_ring: tx ring to fetch head of 131 + * 132 + * Returns value of Tx ring head based on value stored 133 + * in head write-back location 134 + **/ 135 + static inline u32 i40e_get_head(struct i40e_ring *tx_ring) 136 + { 137 + void *head = (struct i40e_tx_desc *)tx_ring->desc + tx_ring->count; 138 + 139 + return le32_to_cpu(*(volatile __le32 *)head); 140 + } 141 + 142 + /** 129 143 * i40e_get_tx_pending - how many tx descriptors not processed 130 144 * @tx_ring: the ring of descriptors 131 145 * ··· 148 134 **/ 149 135 static u32 i40e_get_tx_pending(struct i40e_ring *ring) 150 136 { 151 - u32 ntu = ((ring->next_to_clean <= ring->next_to_use) 152 - ? ring->next_to_use 153 - : ring->next_to_use + ring->count); 154 - return ntu - ring->next_to_clean; 137 + u32 head, tail; 138 + 139 + head = i40e_get_head(ring); 140 + tail = readl(ring->tail); 141 + 142 + if (head != tail) 143 + return (head < tail) ? 144 + tail - head : (tail + ring->count - head); 145 + 146 + return 0; 155 147 } 156 148 157 149 /** ··· 166 146 **/ 167 147 static bool i40e_check_tx_hang(struct i40e_ring *tx_ring) 168 148 { 149 + u32 tx_done = tx_ring->stats.packets; 150 + u32 tx_done_old = tx_ring->tx_stats.tx_done_old; 169 151 u32 tx_pending = i40e_get_tx_pending(tx_ring); 170 152 bool ret = false; 171 153 ··· 184 162 * run the check_tx_hang logic with a transmit completion 185 163 * pending but without time to complete it yet. 
186 164 */ 187 - if ((tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) && 188 - (tx_pending >= I40E_MIN_DESC_PENDING)) { 165 + if ((tx_done_old == tx_done) && tx_pending) { 189 166 /* make sure it is true for two checks in a row */ 190 167 ret = test_and_set_bit(__I40E_HANG_CHECK_ARMED, 191 168 &tx_ring->state); 192 - } else if (!(tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) || 193 - !(tx_pending < I40E_MIN_DESC_PENDING) || 194 - !(tx_pending > 0)) { 169 + } else if (tx_done_old == tx_done && 170 + (tx_pending < I40E_MIN_DESC_PENDING) && (tx_pending > 0)) { 195 171 /* update completed stats and disarm the hang check */ 196 - tx_ring->tx_stats.tx_done_old = tx_ring->stats.packets; 172 + tx_ring->tx_stats.tx_done_old = tx_done; 197 173 clear_bit(__I40E_HANG_CHECK_ARMED, &tx_ring->state); 198 174 } 199 175 200 176 return ret; 201 - } 202 - 203 - /** 204 - * i40e_get_head - Retrieve head from head writeback 205 - * @tx_ring: tx ring to fetch head of 206 - * 207 - * Returns value of Tx ring head based on value stored 208 - * in head write-back location 209 - **/ 210 - static inline u32 i40e_get_head(struct i40e_ring *tx_ring) 211 - { 212 - void *head = (struct i40e_tx_desc *)tx_ring->desc + tx_ring->count; 213 - 214 - return le32_to_cpu(*(volatile __le32 *)head); 215 177 } 216 178 217 179 #define WB_STRIDE 0x3 ··· 1212 1206 if (err < 0) 1213 1207 return err; 1214 1208 1215 - if (protocol == htons(ETH_P_IP)) { 1216 - iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb); 1209 + iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb); 1210 + ipv6h = skb->encapsulation ? inner_ipv6_hdr(skb) : ipv6_hdr(skb); 1211 + 1212 + if (iph->version == 4) { 1217 1213 tcph = skb->encapsulation ? 
inner_tcp_hdr(skb) : tcp_hdr(skb); 1218 1214 iph->tot_len = 0; 1219 1215 iph->check = 0; 1220 1216 tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr, 1221 1217 0, IPPROTO_TCP, 0); 1222 - } else if (skb_is_gso_v6(skb)) { 1223 - 1224 - ipv6h = skb->encapsulation ? inner_ipv6_hdr(skb) 1225 - : ipv6_hdr(skb); 1218 + } else if (ipv6h->version == 6) { 1226 1219 tcph = skb->encapsulation ? inner_tcp_hdr(skb) : tcp_hdr(skb); 1227 1220 ipv6h->payload_len = 0; 1228 1221 tcph->check = ~csum_ipv6_magic(&ipv6h->saddr, &ipv6h->daddr, ··· 1279 1274 I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM; 1280 1275 } 1281 1276 } else if (tx_flags & I40E_TX_FLAGS_IPV6) { 1282 - if (tx_flags & I40E_TX_FLAGS_TSO) { 1283 - *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6; 1277 + *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6; 1278 + if (tx_flags & I40E_TX_FLAGS_TSO) 1284 1279 ip_hdr(skb)->check = 0; 1285 - } else { 1286 - *cd_tunneling |= 1287 - I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM; 1288 - } 1289 1280 } 1290 1281 1291 1282 /* Now set the ctx descriptor fields */ ··· 1291 1290 ((skb_inner_network_offset(skb) - 1292 1291 skb_transport_offset(skb)) >> 1) << 1293 1292 I40E_TXD_CTX_QW0_NATLEN_SHIFT; 1293 + if (this_ip_hdr->version == 6) { 1294 + tx_flags &= ~I40E_TX_FLAGS_IPV4; 1295 + tx_flags |= I40E_TX_FLAGS_IPV6; 1296 + } 1297 + 1294 1298 1295 1299 } else { 1296 1300 network_hdr_len = skb_network_header_len(skb); ··· 1384 1378 context_desc->l2tag2 = cpu_to_le16(cd_l2tag2); 1385 1379 context_desc->rsvd = cpu_to_le16(0); 1386 1380 context_desc->type_cmd_tso_mss = cpu_to_le64(cd_type_cmd_tso_mss); 1381 + } 1382 + 1383 + /** 1384 + * i40e_chk_linearize - Check if there are more than 8 fragments per packet 1385 + * @skb: send buffer 1386 + * @tx_flags: collected send information 1387 + * @hdr_len: size of the packet header 1388 + * 1389 + * Note: Our HW can't scatter-gather more than 8 fragments to build 1390 + * a packet on the wire and so we need to figure out the cases where we 1391 + * need to linearize the skb. 
1392 + **/ 1393 + static bool i40e_chk_linearize(struct sk_buff *skb, u32 tx_flags, 1394 + const u8 hdr_len) 1395 + { 1396 + struct skb_frag_struct *frag; 1397 + bool linearize = false; 1398 + unsigned int size = 0; 1399 + u16 num_frags; 1400 + u16 gso_segs; 1401 + 1402 + num_frags = skb_shinfo(skb)->nr_frags; 1403 + gso_segs = skb_shinfo(skb)->gso_segs; 1404 + 1405 + if (tx_flags & (I40E_TX_FLAGS_TSO | I40E_TX_FLAGS_FSO)) { 1406 + u16 j = 1; 1407 + 1408 + if (num_frags < (I40E_MAX_BUFFER_TXD)) 1409 + goto linearize_chk_done; 1410 + /* try the simple math, if we have too many frags per segment */ 1411 + if (DIV_ROUND_UP((num_frags + gso_segs), gso_segs) > 1412 + I40E_MAX_BUFFER_TXD) { 1413 + linearize = true; 1414 + goto linearize_chk_done; 1415 + } 1416 + frag = &skb_shinfo(skb)->frags[0]; 1417 + size = hdr_len; 1418 + /* we might still have more fragments per segment */ 1419 + do { 1420 + size += skb_frag_size(frag); 1421 + frag++; j++; 1422 + if (j == I40E_MAX_BUFFER_TXD) { 1423 + if (size < skb_shinfo(skb)->gso_size) { 1424 + linearize = true; 1425 + break; 1426 + } 1427 + j = 1; 1428 + size -= skb_shinfo(skb)->gso_size; 1429 + if (size) 1430 + j++; 1431 + size += hdr_len; 1432 + } 1433 + num_frags--; 1434 + } while (num_frags); 1435 + } else { 1436 + if (num_frags >= I40E_MAX_BUFFER_TXD) 1437 + linearize = true; 1438 + } 1439 + 1440 + linearize_chk_done: 1441 + return linearize; 1387 1442 } 1388 1443 1389 1444 /** ··· 1720 1653 goto out_drop; 1721 1654 else if (tso) 1722 1655 tx_flags |= I40E_TX_FLAGS_TSO; 1656 + 1657 + if (i40e_chk_linearize(skb, tx_flags, hdr_len)) 1658 + if (skb_linearize(skb)) 1659 + goto out_drop; 1723 1660 1724 1661 skb_tx_timestamp(skb); 1725 1662
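The quick checks at the top of the new i40e_chk_linearize() can be modelled in isolation: a non-TSO skb with 8 or more fragments always needs linearizing, and a TSO skb needs it when even an optimistic even spread of its fragments over the GSO segments exceeds the 8-descriptor hardware limit. This sketch (hypothetical names) covers only that fast path; the real function falls through to a per-segment size walk when these checks are inconclusive:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_BUFFER_TXD	8	/* HW scatter-gather limit per on-wire packet */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Fast-path portion of the linearize decision: returns true when the skb
 * must be linearized, false when it is safe or needs the detailed walk. */
static bool chk_linearize_fast(unsigned int num_frags,
			       unsigned int gso_segs, bool is_tso)
{
	if (!is_tso)
		return num_frags >= MAX_BUFFER_TXD;
	if (num_frags < MAX_BUFFER_TXD)
		return false;	/* trivially fits */
	/* too many frags per segment even if spread evenly? */
	return DIV_ROUND_UP(num_frags + gso_segs, gso_segs) > MAX_BUFFER_TXD;
}
```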
+1
drivers/net/ethernet/intel/i40evf/i40e_txrx.h
··· 112 112 113 113 #define i40e_rx_desc i40e_32byte_rx_desc 114 114 115 + #define I40E_MAX_BUFFER_TXD 8 115 116 #define I40E_MIN_TX_LEN 17 116 117 #define I40E_MAX_DATA_PER_TXD 8192 117 118
+1 -6
drivers/net/ethernet/marvell/mvneta.c
··· 2658 2658 static int mvneta_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) 2659 2659 { 2660 2660 struct mvneta_port *pp = netdev_priv(dev); 2661 - int ret; 2662 2661 2663 2662 if (!pp->phy_dev) 2664 2663 return -ENOTSUPP; 2665 2664 2666 - ret = phy_mii_ioctl(pp->phy_dev, ifr, cmd); 2667 - if (!ret) 2668 - mvneta_adjust_link(dev); 2669 - 2670 - return ret; 2665 + return phy_mii_ioctl(pp->phy_dev, ifr, cmd); 2671 2666 } 2672 2667 2673 2668 /* Ethtool methods */
+3 -2
drivers/net/ethernet/mellanox/mlx4/cmd.c
··· 724 724 * on the host, we deprecate the error message for this 725 725 * specific command/input_mod/opcode_mod/fw-status to be debug. 726 726 */ 727 - if (op == MLX4_CMD_SET_PORT && in_modifier == 1 && 727 + if (op == MLX4_CMD_SET_PORT && 728 + (in_modifier == 1 || in_modifier == 2) && 728 729 op_modifier == 0 && context->fw_status == CMD_STAT_BAD_SIZE) 729 730 mlx4_dbg(dev, "command 0x%x failed: fw status = 0x%x\n", 730 731 op, context->fw_status); ··· 1994 1993 goto reset_slave; 1995 1994 slave_state[slave].vhcr_dma = ((u64) param) << 48; 1996 1995 priv->mfunc.master.slave_state[slave].cookie = 0; 1997 - mutex_init(&priv->mfunc.master.gen_eqe_mutex[slave]); 1998 1996 break; 1999 1997 case MLX4_COMM_CMD_VHCR1: 2000 1998 if (slave_state[slave].last_cmd != MLX4_COMM_CMD_VHCR0) ··· 2225 2225 for (i = 0; i < dev->num_slaves; ++i) { 2226 2226 s_state = &priv->mfunc.master.slave_state[i]; 2227 2227 s_state->last_cmd = MLX4_COMM_CMD_RESET; 2228 + mutex_init(&priv->mfunc.master.gen_eqe_mutex[i]); 2228 2229 for (j = 0; j < MLX4_EVENT_TYPES_NUM; ++j) 2229 2230 s_state->event_eq[j].eqn = -1; 2230 2231 __raw_writel((__force u32) 0,
+10 -9
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 1698 1698 /* Schedule multicast task to populate multicast list */ 1699 1699 queue_work(mdev->workqueue, &priv->rx_mode_task); 1700 1700 1701 - mlx4_set_stats_bitmap(mdev->dev, &priv->stats_bitmap); 1702 - 1703 1701 #ifdef CONFIG_MLX4_EN_VXLAN 1704 1702 if (priv->mdev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) 1705 1703 vxlan_get_rx_port(dev); ··· 2805 2807 netif_carrier_off(dev); 2806 2808 mlx4_en_set_default_moderation(priv); 2807 2809 2808 - err = register_netdev(dev); 2809 - if (err) { 2810 - en_err(priv, "Netdev registration failed for port %d\n", port); 2811 - goto out; 2812 - } 2813 - priv->registered = 1; 2814 - 2815 2810 en_warn(priv, "Using %d TX rings\n", prof->tx_ring_num); 2816 2811 en_warn(priv, "Using %d RX rings\n", prof->rx_ring_num); 2817 2812 ··· 2843 2852 if (mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_TS) 2844 2853 queue_delayed_work(mdev->workqueue, &priv->service_task, 2845 2854 SERVICE_TASK_DELAY); 2855 + 2856 + mlx4_set_stats_bitmap(mdev->dev, &priv->stats_bitmap); 2857 + 2858 + err = register_netdev(dev); 2859 + if (err) { 2860 + en_err(priv, "Netdev registration failed for port %d\n", port); 2861 + goto out; 2862 + } 2863 + 2864 + priv->registered = 1; 2846 2865 2847 2866 return 0; 2848 2867
+7 -1
drivers/net/ethernet/mellanox/mlx4/en_selftest.c
··· 81 81 { 82 82 u32 loopback_ok = 0; 83 83 int i; 84 - 84 + bool gro_enabled; 85 85 86 86 priv->loopback_ok = 0; 87 87 priv->validate_loopback = 1; 88 + gro_enabled = priv->dev->features & NETIF_F_GRO; 88 89 89 90 mlx4_en_update_loopback_state(priv->dev, priv->dev->features); 91 + priv->dev->features &= ~NETIF_F_GRO; 90 92 91 93 /* xmit */ 92 94 if (mlx4_en_test_loopback_xmit(priv)) { ··· 110 108 mlx4_en_test_loopback_exit: 111 109 112 110 priv->validate_loopback = 0; 111 + 112 + if (gro_enabled) 113 + priv->dev->features |= NETIF_F_GRO; 114 + 113 115 mlx4_en_update_loopback_state(priv->dev, priv->dev->features); 114 116 return !loopback_ok; 115 117 }
+7 -11
drivers/net/ethernet/mellanox/mlx4/eq.c
··· 153 153 154 154 /* All active slaves need to receive the event */ 155 155 if (slave == ALL_SLAVES) { 156 - for (i = 0; i < dev->num_slaves; i++) { 157 - if (i != dev->caps.function && 158 - master->slave_state[i].active) 159 - if (mlx4_GEN_EQE(dev, i, eqe)) 160 - mlx4_warn(dev, "Failed to generate event for slave %d\n", 161 - i); 156 + for (i = 0; i <= dev->persist->num_vfs; i++) { 157 + if (mlx4_GEN_EQE(dev, i, eqe)) 158 + mlx4_warn(dev, "Failed to generate event for slave %d\n", 159 + i); 162 160 } 163 161 } else { 164 162 if (mlx4_GEN_EQE(dev, slave, eqe)) ··· 201 203 struct mlx4_eqe *eqe) 202 204 { 203 205 struct mlx4_priv *priv = mlx4_priv(dev); 204 - struct mlx4_slave_state *s_slave = 205 - &priv->mfunc.master.slave_state[slave]; 206 206 207 - if (!s_slave->active) { 208 - /*mlx4_warn(dev, "Trying to pass event to inactive slave\n");*/ 207 + if (slave < 0 || slave > dev->persist->num_vfs || 208 + slave == dev->caps.function || 209 + !priv->mfunc.master.slave_state[slave].active) 209 210 return; 210 - } 211 211 212 212 slave_event(dev, slave, eqe); 213 213 }
+1 -1
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 453 453 unsigned long rx_chksum_none; 454 454 unsigned long rx_chksum_complete; 455 455 unsigned long tx_chksum_offload; 456 - #define NUM_PORT_STATS 9 456 + #define NUM_PORT_STATS 10 457 457 }; 458 458 459 459 struct mlx4_en_perf_stats {
-1
drivers/net/ethernet/mellanox/mlx4/qp.c
··· 412 412 413 413 EXPORT_SYMBOL_GPL(mlx4_qp_alloc); 414 414 415 - #define MLX4_UPDATE_QP_SUPPORTED_ATTRS MLX4_UPDATE_QP_SMAC 416 415 int mlx4_update_qp(struct mlx4_dev *dev, u32 qpn, 417 416 enum mlx4_update_qp_attr attr, 418 417 struct mlx4_update_qp_params *params)
+12 -3
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 713 713 struct mlx4_vport_oper_state *vp_oper; 714 714 struct mlx4_priv *priv; 715 715 u32 qp_type; 716 - int port; 716 + int port, err = 0; 717 717 718 718 port = (qpc->pri_path.sched_queue & 0x40) ? 2 : 1; 719 719 priv = mlx4_priv(dev); ··· 738 738 } else { 739 739 struct mlx4_update_qp_params params = {.flags = 0}; 740 740 741 - mlx4_update_qp(dev, qpn, MLX4_UPDATE_QP_VSD, &params); 741 + err = mlx4_update_qp(dev, qpn, MLX4_UPDATE_QP_VSD, &params); 742 + if (err) 743 + goto out; 742 744 } 743 745 } 744 746 ··· 775 773 qpc->pri_path.feup |= MLX4_FSM_FORCE_ETH_SRC_MAC; 776 774 qpc->pri_path.grh_mylmc = (0x80 & qpc->pri_path.grh_mylmc) + vp_oper->mac_idx; 777 775 } 778 - return 0; 776 + out: 777 + return err; 779 778 } 780 779 781 780 static int mpt_mask(struct mlx4_dev *dev) ··· 3094 3091 3095 3092 if (!priv->mfunc.master.slave_state) 3096 3093 return -EINVAL; 3094 + 3095 + /* check for slave valid, slave not PF, and slave active */ 3096 + if (slave < 0 || slave > dev->persist->num_vfs || 3097 + slave == dev->caps.function || 3098 + !priv->mfunc.master.slave_state[slave].active) 3099 + return 0; 3097 3100 3098 3101 event_eq = &priv->mfunc.master.slave_state[slave].event_eq[eqe->type]; 3099 3102
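The bounds check added here in mlx4_GEN_EQE (and in slightly different form in eq.c's mlx4_slave_event) distills to one predicate: an EQE may only be forwarded to a slave whose index is in range, that is not the PF's own function, and that is currently active. A hypothetical userspace reduction, with num_vfs mirroring dev->persist->num_vfs:

```c
#include <assert.h>
#include <stdbool.h>

/* Slave-validation predicate shared by both call sites in the diff. */
static bool slave_event_allowed(int slave, int num_vfs, int pf_function,
				bool active)
{
	if (slave < 0 || slave > num_vfs)
		return false;		/* index out of range */
	if (slave == pf_function)
		return false;		/* never signal the PF itself */
	return active;			/* inactive slaves are skipped */
}
```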
+3 -5
drivers/net/ethernet/pasemi/pasemi_mac.c
··· 1239 1239 if (mac->phydev) 1240 1240 phy_start(mac->phydev); 1241 1241 1242 - init_timer(&mac->tx->clean_timer); 1243 - mac->tx->clean_timer.function = pasemi_mac_tx_timer; 1244 - mac->tx->clean_timer.data = (unsigned long)mac->tx; 1245 - mac->tx->clean_timer.expires = jiffies+HZ; 1246 - add_timer(&mac->tx->clean_timer); 1242 + setup_timer(&mac->tx->clean_timer, pasemi_mac_tx_timer, 1243 + (unsigned long)mac->tx); 1244 + mod_timer(&mac->tx->clean_timer, jiffies + HZ); 1247 1245 1248 1246 return 0; 1249 1247
+2 -2
drivers/net/ethernet/qlogic/netxen/netxen_nic.h
··· 354 354 355 355 } __attribute__ ((aligned(64))); 356 356 357 - /* Note: sizeof(rcv_desc) should always be a mutliple of 2 */ 357 + /* Note: sizeof(rcv_desc) should always be a multiple of 2 */ 358 358 struct rcv_desc { 359 359 __le16 reference_handle; 360 360 __le16 reserved; ··· 499 499 #define NETXEN_IMAGE_START 0x43000 /* compressed image */ 500 500 #define NETXEN_SECONDARY_START 0x200000 /* backup images */ 501 501 #define NETXEN_PXE_START 0x3E0000 /* PXE boot rom */ 502 - #define NETXEN_USER_START 0x3E8000 /* Firmare info */ 502 + #define NETXEN_USER_START 0x3E8000 /* Firmware info */ 503 503 #define NETXEN_FIXED_START 0x3F0000 /* backup of crbinit */ 504 504 #define NETXEN_USER_START_OLD NETXEN_PXE_START /* very old flash */ 505 505
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
··· 314 314 #define QLCNIC_BRDCFG_START 0x4000 /* board config */ 315 315 #define QLCNIC_BOOTLD_START 0x10000 /* bootld */ 316 316 #define QLCNIC_IMAGE_START 0x43000 /* compressed image */ 317 - #define QLCNIC_USER_START 0x3E8000 /* Firmare info */ 317 + #define QLCNIC_USER_START 0x3E8000 /* Firmware info */ 318 318 319 319 #define QLCNIC_FW_VERSION_OFFSET (QLCNIC_USER_START+0x408) 320 320 #define QLCNIC_FW_SIZE_OFFSET (QLCNIC_USER_START+0x40c)
+8 -24
drivers/net/ethernet/realtek/r8169.c
··· 2561 2561 int rc = -EINVAL; 2562 2562 2563 2563 if (!rtl_fw_format_ok(tp, rtl_fw)) { 2564 - netif_err(tp, ifup, dev, "invalid firwmare\n"); 2564 + netif_err(tp, ifup, dev, "invalid firmware\n"); 2565 2565 goto out; 2566 2566 } 2567 2567 ··· 5067 5067 RTL_W8(ChipCmd, CmdReset); 5068 5068 5069 5069 rtl_udelay_loop_wait_low(tp, &rtl_chipcmd_cond, 100, 100); 5070 - 5071 - netdev_reset_queue(tp->dev); 5072 5070 } 5073 5071 5074 5072 static void rtl_request_uncached_firmware(struct rtl8169_private *tp) ··· 7047 7049 u32 status, len; 7048 7050 u32 opts[2]; 7049 7051 int frags; 7050 - bool stop_queue; 7051 7052 7052 7053 if (unlikely(!TX_FRAGS_READY_FOR(tp, skb_shinfo(skb)->nr_frags))) { 7053 7054 netif_err(tp, drv, dev, "BUG! Tx Ring full when queue awake!\n"); ··· 7087 7090 7088 7091 txd->opts2 = cpu_to_le32(opts[1]); 7089 7092 7090 - netdev_sent_queue(dev, skb->len); 7091 - 7092 7093 skb_tx_timestamp(skb); 7093 7094 7094 7095 /* Force memory writes to complete before releasing descriptor */ ··· 7101 7106 7102 7107 tp->cur_tx += frags + 1; 7103 7108 7104 - stop_queue = !TX_FRAGS_READY_FOR(tp, MAX_SKB_FRAGS); 7109 + RTL_W8(TxPoll, NPQ); 7105 7110 7106 - if (!skb->xmit_more || stop_queue || 7107 - netif_xmit_stopped(netdev_get_tx_queue(dev, 0))) { 7108 - RTL_W8(TxPoll, NPQ); 7111 + mmiowb(); 7109 7112 7110 - mmiowb(); 7111 - } 7112 - 7113 - if (stop_queue) { 7113 + if (!TX_FRAGS_READY_FOR(tp, MAX_SKB_FRAGS)) { 7114 7114 /* Avoid wrongly optimistic queue wake-up: rtl_tx thread must 7115 7115 * not miss a ring update when it notices a stopped queue. 
7116 7116 */ ··· 7188 7198 static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp) 7189 7199 { 7190 7200 unsigned int dirty_tx, tx_left; 7191 - unsigned int bytes_compl = 0, pkts_compl = 0; 7192 7201 7193 7202 dirty_tx = tp->dirty_tx; 7194 7203 smp_rmb(); ··· 7211 7222 rtl8169_unmap_tx_skb(&tp->pci_dev->dev, tx_skb, 7212 7223 tp->TxDescArray + entry); 7213 7224 if (status & LastFrag) { 7214 - pkts_compl++; 7215 - bytes_compl += tx_skb->skb->len; 7225 + u64_stats_update_begin(&tp->tx_stats.syncp); 7226 + tp->tx_stats.packets++; 7227 + tp->tx_stats.bytes += tx_skb->skb->len; 7228 + u64_stats_update_end(&tp->tx_stats.syncp); 7216 7229 dev_kfree_skb_any(tx_skb->skb); 7217 7230 tx_skb->skb = NULL; 7218 7231 } ··· 7223 7232 } 7224 7233 7225 7234 if (tp->dirty_tx != dirty_tx) { 7226 - netdev_completed_queue(tp->dev, pkts_compl, bytes_compl); 7227 - 7228 - u64_stats_update_begin(&tp->tx_stats.syncp); 7229 - tp->tx_stats.packets += pkts_compl; 7230 - tp->tx_stats.bytes += bytes_compl; 7231 - u64_stats_update_end(&tp->tx_stats.syncp); 7232 - 7233 7235 tp->dirty_tx = dirty_tx; 7234 7236 /* Sync with rtl8169_start_xmit: 7235 7237 * - publish dirty_tx ring index (write barrier)
+13 -5
drivers/net/ethernet/renesas/sh_eth.c
··· 508 508 .tpauser = 1, 509 509 .hw_swap = 1, 510 510 .rmiimode = 1, 511 - .shift_rd0 = 1, 512 511 }; 513 512 514 513 static void sh_eth_set_rate_sh7724(struct net_device *ndev) ··· 1391 1392 msleep(2); /* max frame time at 10 Mbps < 1250 us */ 1392 1393 sh_eth_get_stats(ndev); 1393 1394 sh_eth_reset(ndev); 1395 + 1396 + /* Set MAC address again */ 1397 + update_mac_address(ndev); 1394 1398 } 1395 1399 1396 1400 /* free Tx skb function */ ··· 1409 1407 txdesc = &mdp->tx_ring[entry]; 1410 1408 if (txdesc->status & cpu_to_edmac(mdp, TD_TACT)) 1411 1409 break; 1410 + /* TACT bit must be checked before all the following reads */ 1411 + rmb(); 1412 1412 /* Free the original skb. */ 1413 1413 if (mdp->tx_skbuff[entry]) { 1414 1414 dma_unmap_single(&ndev->dev, txdesc->addr, ··· 1448 1444 limit = boguscnt; 1449 1445 rxdesc = &mdp->rx_ring[entry]; 1450 1446 while (!(rxdesc->status & cpu_to_edmac(mdp, RD_RACT))) { 1447 + /* RACT bit must be checked before all the following reads */ 1448 + rmb(); 1451 1449 desc_status = edmac_to_cpu(mdp, rxdesc->status); 1452 1450 pkt_len = rxdesc->frame_length; 1453 1451 ··· 1461 1455 1462 1456 /* In case of almost all GETHER/ETHERs, the Receive Frame State 1463 1457 * (RFS) bits in the Receive Descriptor 0 are from bit 9 to 1464 - * bit 0. However, in case of the R8A7740, R8A779x, and 1465 - * R7S72100 the RFS bits are from bit 25 to bit 16. So, the 1458 + * bit 0. However, in case of the R8A7740 and R7S72100 1459 + * the RFS bits are from bit 25 to bit 16. So, the 1466 1460 * driver needs right shifting by 16. 1467 1461 */ 1468 1462 if (mdp->cd->shift_rd0) ··· 1529 1523 skb_checksum_none_assert(skb); 1530 1524 rxdesc->addr = dma_addr; 1531 1525 } 1526 + wmb(); /* RACT bit must be set after all the above writes */ 1532 1527 if (entry >= mdp->num_rx_ring - 1) 1533 1528 rxdesc->status |= 1534 1529 cpu_to_edmac(mdp, RD_RACT | RD_RFP | RD_RDEL); ··· 1542 1535 /* If we don't need to check status, don't. 
-KDU */ 1543 1536 if (!(sh_eth_read(ndev, EDRRR) & EDRRR_R)) { 1544 1537 /* fix the values for the next receiving if RDE is set */ 1545 - if (intr_status & EESR_RDE) { 1538 + if (intr_status & EESR_RDE && mdp->reg_offset[RDFAR] != 0) { 1546 1539 u32 count = (sh_eth_read(ndev, RDFAR) - 1547 1540 sh_eth_read(ndev, RDLAR)) >> 4; 1548 1541 ··· 2181 2174 } 2182 2175 spin_unlock_irqrestore(&mdp->lock, flags); 2183 2176 2184 - if (skb_padto(skb, ETH_ZLEN)) 2177 + if (skb_put_padto(skb, ETH_ZLEN)) 2185 2178 return NETDEV_TX_OK; 2186 2179 2187 2180 entry = mdp->cur_tx % mdp->num_tx_ring; ··· 2199 2192 } 2200 2193 txdesc->buffer_length = skb->len; 2201 2194 2195 + wmb(); /* TACT bit must be set after all the above writes */ 2202 2196 if (entry >= mdp->num_tx_ring - 1) 2203 2197 txdesc->status |= cpu_to_edmac(mdp, TD_TACT | TD_TDLE); 2204 2198 else
+11 -3
drivers/net/ethernet/rocker/rocker.c
··· 1257 1257 u64 val = rocker_read64(rocker_port->rocker, PORT_PHYS_ENABLE); 1258 1258 1259 1259 if (enable) 1260 - val |= 1 << rocker_port->lport; 1260 + val |= 1ULL << rocker_port->lport; 1261 1261 else 1262 - val &= ~(1 << rocker_port->lport); 1262 + val &= ~(1ULL << rocker_port->lport); 1263 1263 rocker_write64(rocker_port->rocker, PORT_PHYS_ENABLE, val); 1264 1264 } 1265 1265 ··· 4201 4201 4202 4202 alloc_size = sizeof(struct rocker_port *) * rocker->port_count; 4203 4203 rocker->ports = kmalloc(alloc_size, GFP_KERNEL); 4204 + if (!rocker->ports) 4205 + return -ENOMEM; 4204 4206 for (i = 0; i < rocker->port_count; i++) { 4205 4207 err = rocker_probe_port(rocker, i); 4206 4208 if (err) ··· 4468 4466 struct net_device *master = netdev_master_upper_dev_get(dev); 4469 4467 int err = 0; 4470 4468 4469 + /* There are currently three cases handled here: 4470 + * 1. Joining a bridge 4471 + * 2. Leaving a previously joined bridge 4472 + * 3. Other, e.g. being added to or removed from a bond or openvswitch, 4473 + * in which case nothing is done 4474 + */ 4471 4475 if (master && master->rtnl_link_ops && 4472 4476 !strcmp(master->rtnl_link_ops->kind, "bridge")) 4473 4477 err = rocker_port_bridge_join(rocker_port, master); 4474 - else 4478 + else if (rocker_port_is_bridged(rocker_port)) 4475 4479 err = rocker_port_bridge_leave(rocker_port); 4476 4480 4477 4481 return err;
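The rocker change swaps `1 << rocker_port->lport` for `1ULL << rocker_port->lport` because PORT_PHYS_ENABLE is a 64-bit register: with a plain int literal the shift is evaluated in 32-bit arithmetic, so any lport of 32 or above is undefined behavior. A minimal model of the fixed helper (names hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Set or clear one bit of a 64-bit port-enable mask; the 1ULL literal keeps
 * the shift in 64-bit arithmetic so ports 32..63 address the upper half. */
static uint64_t port_enable_bit(uint64_t val, unsigned int lport, bool enable)
{
	if (enable)
		val |= 1ULL << lport;
	else
		val &= ~(1ULL << lport);
	return val;
}
```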
+2 -5
drivers/net/ethernet/smsc/smc91c92_cs.c
··· 1070 1070 smc->packets_waiting = 0; 1071 1071 1072 1072 smc_reset(dev); 1073 - init_timer(&smc->media); 1074 - smc->media.function = media_check; 1075 - smc->media.data = (u_long) dev; 1076 - smc->media.expires = jiffies + HZ; 1077 - add_timer(&smc->media); 1073 + setup_timer(&smc->media, media_check, (u_long)dev); 1074 + mod_timer(&smc->media, jiffies + HZ); 1078 1075 1079 1076 return 0; 1080 1077 } /* smc_open */
+16 -14
drivers/net/ethernet/smsc/smc91x.c
··· 91 91 92 92 #include "smc91x.h" 93 93 94 + #if defined(CONFIG_ASSABET_NEPONSET) 95 + #include <mach/assabet.h> 96 + #include <mach/neponset.h> 97 + #endif 98 + 94 99 #ifndef SMC_NOWAIT 95 100 # define SMC_NOWAIT 0 96 101 #endif ··· 2248 2243 const struct of_device_id *match = NULL; 2249 2244 struct smc_local *lp; 2250 2245 struct net_device *ndev; 2251 - struct resource *res; 2246 + struct resource *res, *ires; 2252 2247 unsigned int __iomem *addr; 2253 2248 unsigned long irq_flags = SMC_IRQ_FLAGS; 2254 - unsigned long irq_resflags; 2255 2249 int ret; 2256 2250 2257 2251 ndev = alloc_etherdev(sizeof(struct smc_local)); ··· 2342 2338 goto out_free_netdev; 2343 2339 } 2344 2340 2345 - ndev->irq = platform_get_irq(pdev, 0); 2346 - if (ndev->irq <= 0) { 2341 + ires = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 2342 + if (!ires) { 2347 2343 ret = -ENODEV; 2348 2344 goto out_release_io; 2349 2345 } 2350 - /* 2351 - * If this platform does not specify any special irqflags, or if 2352 - * the resource supplies a trigger, override the irqflags with 2353 - * the trigger flags from the resource. 2354 - */ 2355 - irq_resflags = irqd_get_trigger_type(irq_get_irq_data(ndev->irq)); 2356 - if (irq_flags == -1 || irq_resflags & IRQF_TRIGGER_MASK) 2357 - irq_flags = irq_resflags & IRQF_TRIGGER_MASK; 2346 + 2347 + ndev->irq = ires->start; 2348 + 2349 + if (irq_flags == -1 || ires->flags & IRQF_TRIGGER_MASK) 2350 + irq_flags = ires->flags & IRQF_TRIGGER_MASK; 2358 2351 2359 2352 ret = smc_request_attrib(pdev, ndev); 2360 2353 if (ret) 2361 2354 goto out_release_io; 2362 - #if defined(CONFIG_SA1100_ASSABET) 2363 - neponset_ncr_set(NCR_ENET_OSC_EN); 2355 + #if defined(CONFIG_ASSABET_NEPONSET) 2356 + if (machine_is_assabet() && machine_has_neponset()) 2357 + neponset_ncr_set(NCR_ENET_OSC_EN); 2364 2358 #endif 2365 2359 platform_set_drvdata(pdev, ndev); 2366 2360 ret = smc_enable_device(pdev);
+3 -111
drivers/net/ethernet/smsc/smc91x.h
··· 39 39 * Define your architecture specific bus configuration parameters here. 40 40 */ 41 41 42 - #if defined(CONFIG_ARCH_LUBBOCK) ||\ 43 - defined(CONFIG_MACH_MAINSTONE) ||\ 44 - defined(CONFIG_MACH_ZYLONITE) ||\ 45 - defined(CONFIG_MACH_LITTLETON) ||\ 46 - defined(CONFIG_MACH_ZYLONITE2) ||\ 47 - defined(CONFIG_ARCH_VIPER) ||\ 48 - defined(CONFIG_MACH_STARGATE2) ||\ 49 - defined(CONFIG_ARCH_VERSATILE) 42 + #if defined(CONFIG_ARM) 50 43 51 44 #include <asm/mach-types.h> 52 45 ··· 67 74 /* We actually can't write halfwords properly if not word aligned */ 68 75 static inline void SMC_outw(u16 val, void __iomem *ioaddr, int reg) 69 76 { 70 - if ((machine_is_mainstone() || machine_is_stargate2()) && reg & 2) { 71 - unsigned int v = val << 16; 72 - v |= readl(ioaddr + (reg & ~2)) & 0xffff; 73 - writel(v, ioaddr + (reg & ~2)); 74 - } else { 75 - writew(val, ioaddr + reg); 76 - } 77 - } 78 - 79 - #elif defined(CONFIG_SA1100_PLEB) 80 - /* We can only do 16-bit reads and writes in the static memory space. */ 81 - #define SMC_CAN_USE_8BIT 1 82 - #define SMC_CAN_USE_16BIT 1 83 - #define SMC_CAN_USE_32BIT 0 84 - #define SMC_IO_SHIFT 0 85 - #define SMC_NOWAIT 1 86 - 87 - #define SMC_inb(a, r) readb((a) + (r)) 88 - #define SMC_insb(a, r, p, l) readsb((a) + (r), p, (l)) 89 - #define SMC_inw(a, r) readw((a) + (r)) 90 - #define SMC_insw(a, r, p, l) readsw((a) + (r), p, l) 91 - #define SMC_outb(v, a, r) writeb(v, (a) + (r)) 92 - #define SMC_outsb(a, r, p, l) writesb((a) + (r), p, (l)) 93 - #define SMC_outw(v, a, r) writew(v, (a) + (r)) 94 - #define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l) 95 - 96 - #define SMC_IRQ_FLAGS (-1) 97 - 98 - #elif defined(CONFIG_SA1100_ASSABET) 99 - 100 - #include <mach/neponset.h> 101 - 102 - /* We can only do 8-bit reads and writes in the static memory space. 
*/ 103 - #define SMC_CAN_USE_8BIT 1 104 - #define SMC_CAN_USE_16BIT 0 105 - #define SMC_CAN_USE_32BIT 0 106 - #define SMC_NOWAIT 1 107 - 108 - /* The first two address lines aren't connected... */ 109 - #define SMC_IO_SHIFT 2 110 - 111 - #define SMC_inb(a, r) readb((a) + (r)) 112 - #define SMC_outb(v, a, r) writeb(v, (a) + (r)) 113 - #define SMC_insb(a, r, p, l) readsb((a) + (r), p, (l)) 114 - #define SMC_outsb(a, r, p, l) writesb((a) + (r), p, (l)) 115 - #define SMC_IRQ_FLAGS (-1) /* from resource */ 116 - 117 - #elif defined(CONFIG_MACH_LOGICPD_PXA270) || \ 118 - defined(CONFIG_MACH_NOMADIK_8815NHK) 119 - 120 - #define SMC_CAN_USE_8BIT 0 121 - #define SMC_CAN_USE_16BIT 1 122 - #define SMC_CAN_USE_32BIT 0 123 - #define SMC_IO_SHIFT 0 124 - #define SMC_NOWAIT 1 125 - 126 - #define SMC_inw(a, r) readw((a) + (r)) 127 - #define SMC_outw(v, a, r) writew(v, (a) + (r)) 128 - #define SMC_insw(a, r, p, l) readsw((a) + (r), p, l) 129 - #define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l) 130 - 131 - #elif defined(CONFIG_ARCH_INNOKOM) || \ 132 - defined(CONFIG_ARCH_PXA_IDP) || \ 133 - defined(CONFIG_ARCH_RAMSES) || \ 134 - defined(CONFIG_ARCH_PCM027) 135 - 136 - #define SMC_CAN_USE_8BIT 1 137 - #define SMC_CAN_USE_16BIT 1 138 - #define SMC_CAN_USE_32BIT 1 139 - #define SMC_IO_SHIFT 0 140 - #define SMC_NOWAIT 1 141 - #define SMC_USE_PXA_DMA 1 142 - 143 - #define SMC_inb(a, r) readb((a) + (r)) 144 - #define SMC_inw(a, r) readw((a) + (r)) 145 - #define SMC_inl(a, r) readl((a) + (r)) 146 - #define SMC_outb(v, a, r) writeb(v, (a) + (r)) 147 - #define SMC_outl(v, a, r) writel(v, (a) + (r)) 148 - #define SMC_insl(a, r, p, l) readsl((a) + (r), p, l) 149 - #define SMC_outsl(a, r, p, l) writesl((a) + (r), p, l) 150 - #define SMC_insw(a, r, p, l) readsw((a) + (r), p, l) 151 - #define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l) 152 - #define SMC_IRQ_FLAGS (-1) /* from resource */ 153 - 154 - /* We actually can't write halfwords properly if not word aligned */ 155 - static inline 
void 156 - SMC_outw(u16 val, void __iomem *ioaddr, int reg) 157 - { 158 - if (reg & 2) { 77 + if ((machine_is_mainstone() || machine_is_stargate2() || 78 + machine_is_pxa_idp()) && reg & 2) { 159 79 unsigned int v = val << 16; 160 80 v |= readl(ioaddr + (reg & ~2)) & 0xffff; 161 81 writel(v, ioaddr + (reg & ~2)); ··· 142 236 143 237 #define RPC_LSA_DEFAULT RPC_LED_100_10 144 238 #define RPC_LSB_DEFAULT RPC_LED_TX_RX 145 - 146 - #elif defined(CONFIG_ARCH_MSM) 147 - 148 - #define SMC_CAN_USE_8BIT 0 149 - #define SMC_CAN_USE_16BIT 1 150 - #define SMC_CAN_USE_32BIT 0 151 - #define SMC_NOWAIT 1 152 - 153 - #define SMC_inw(a, r) readw((a) + (r)) 154 - #define SMC_outw(v, a, r) writew(v, (a) + (r)) 155 - #define SMC_insw(a, r, p, l) readsw((a) + (r), p, l) 156 - #define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l) 157 - 158 - #define SMC_IRQ_FLAGS IRQF_TRIGGER_HIGH 159 239 160 240 #elif defined(CONFIG_COLDFIRE) 161 241
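The SMC_outw() retained above works around platforms that cannot issue a halfword write at an offset of 2 within a word: it folds the 16-bit store into a read-modify-write of the aligned 32-bit word, exactly the readl()/writel() dance in the diff. A little-endian userspace model of that trick, operating on a plain word rather than MMIO (an illustration, not the driver's accessor):

```c
#include <assert.h>
#include <stdint.h>

/* Fold a 16-bit store at byte offset 'reg' (0 or 2) into the containing
 * little-endian 32-bit word, returning the updated word. */
static uint32_t outw_rmw(uint32_t word, unsigned int reg, uint16_t val)
{
	if (reg & 2)	/* upper halfword: keep the low 16 bits */
		return ((uint32_t)val << 16) | (word & 0xffff);
	/* lower halfword: keep the high 16 bits */
	return (word & 0xffff0000u) | val;
}
```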
+5 -5
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 310 310 spin_lock_irqsave(&priv->lock, flags); 311 311 if (!priv->eee_active) { 312 312 priv->eee_active = 1; 313 - init_timer(&priv->eee_ctrl_timer); 314 - priv->eee_ctrl_timer.function = stmmac_eee_ctrl_timer; 315 - priv->eee_ctrl_timer.data = (unsigned long)priv; 316 - priv->eee_ctrl_timer.expires = STMMAC_LPI_T(eee_timer); 317 - add_timer(&priv->eee_ctrl_timer); 313 + setup_timer(&priv->eee_ctrl_timer, 314 + stmmac_eee_ctrl_timer, 315 + (unsigned long)priv); 316 + mod_timer(&priv->eee_ctrl_timer, 317 + STMMAC_LPI_T(eee_timer)); 318 318 319 319 priv->hw->mac->set_eee_timer(priv->hw, 320 320 STMMAC_DEFAULT_LIT_LS,
+36 -29
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 272 272 struct stmmac_priv *priv = NULL; 273 273 struct plat_stmmacenet_data *plat_dat = NULL; 274 274 const char *mac = NULL; 275 + int irq, wol_irq, lpi_irq; 276 + 277 + /* Get IRQ information early to have an ability to ask for deferred 278 + * probe if needed before we went too far with resource allocation. 279 + */ 280 + irq = platform_get_irq_byname(pdev, "macirq"); 281 + if (irq < 0) { 282 + if (irq != -EPROBE_DEFER) { 283 + dev_err(dev, 284 + "MAC IRQ configuration information not found\n"); 285 + } 286 + return irq; 287 + } 288 + 289 + /* On some platforms e.g. SPEAr the wake up irq differs from the mac irq 290 + * The external wake up irq can be passed through the platform code 291 + * named as "eth_wake_irq" 292 + * 293 + * In case the wake up interrupt is not passed from the platform 294 + * so the driver will continue to use the mac irq (ndev->irq) 295 + */ 296 + wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq"); 297 + if (wol_irq < 0) { 298 + if (wol_irq == -EPROBE_DEFER) 299 + return -EPROBE_DEFER; 300 + wol_irq = irq; 301 + } 302 + 303 + lpi_irq = platform_get_irq_byname(pdev, "eth_lpi"); 304 + if (lpi_irq == -EPROBE_DEFER) 305 + return -EPROBE_DEFER; 275 306 276 307 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 277 308 addr = devm_ioremap_resource(dev, res); ··· 354 323 return PTR_ERR(priv); 355 324 } 356 325 326 + /* Copy IRQ values to priv structure which is now avaialble */ 327 + priv->dev->irq = irq; 328 + priv->wol_irq = wol_irq; 329 + priv->lpi_irq = lpi_irq; 330 + 357 331 /* Get MAC address if available (DT) */ 358 332 if (mac) 359 333 memcpy(priv->dev->dev_addr, mac, ETH_ALEN); 360 - 361 - /* Get the MAC information */ 362 - priv->dev->irq = platform_get_irq_byname(pdev, "macirq"); 363 - if (priv->dev->irq < 0) { 364 - if (priv->dev->irq != -EPROBE_DEFER) { 365 - netdev_err(priv->dev, 366 - "MAC IRQ configuration information not found\n"); 367 - } 368 - return priv->dev->irq; 369 - } 370 - 371 - /* 372 - * On some 
platforms e.g. SPEAr the wake up irq differs from the mac irq 373 - * The external wake up irq can be passed through the platform code 374 - * named as "eth_wake_irq" 375 - * 376 - * In case the wake up interrupt is not passed from the platform 377 - * so the driver will continue to use the mac irq (ndev->irq) 378 - */ 379 - priv->wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq"); 380 - if (priv->wol_irq < 0) { 381 - if (priv->wol_irq == -EPROBE_DEFER) 382 - return -EPROBE_DEFER; 383 - priv->wol_irq = priv->dev->irq; 384 - } 385 - 386 - priv->lpi_irq = platform_get_irq_byname(pdev, "eth_lpi"); 387 - if (priv->lpi_irq == -EPROBE_DEFER) 388 - return -EPROBE_DEFER; 389 334 390 335 platform_set_drvdata(pdev, priv->dev); 391 336
+2 -4
drivers/net/ethernet/sun/niu.c
··· 6989 6989 *flow_type = IP_USER_FLOW; 6990 6990 break; 6991 6991 default: 6992 - return 0; 6992 + return -EINVAL; 6993 6993 } 6994 6994 6995 - return 1; 6995 + return 0; 6996 6996 } 6997 6997 6998 6998 static int niu_ethflow_to_class(int flow_type, u64 *class) ··· 7198 7198 class = (tp->key[0] & TCAM_V4KEY0_CLASS_CODE) >> 7199 7199 TCAM_V4KEY0_CLASS_CODE_SHIFT; 7200 7200 ret = niu_class_to_ethflow(class, &fsp->flow_type); 7201 - 7202 7201 if (ret < 0) { 7203 7202 netdev_info(np->dev, "niu%d: niu_class_to_ethflow failed\n", 7204 7203 parent->index); 7205 - ret = -EINVAL; 7206 7204 goto out; 7207 7205 } 7208 7206
+4 -5
drivers/net/ethernet/ti/cpsw.c
··· 1103 1103 cpsw_ale_add_mcast(priv->ale, priv->ndev->broadcast, 1104 1104 port_mask, ALE_VLAN, slave->port_vlan, 0); 1105 1105 cpsw_ale_add_ucast(priv->ale, priv->mac_addr, 1106 - priv->host_port, ALE_VLAN, slave->port_vlan); 1106 + priv->host_port, ALE_VLAN | ALE_SECURE, slave->port_vlan); 1107 1107 } 1108 1108 1109 1109 static void soft_reset_slave(struct cpsw_slave *slave) ··· 2466 2466 return 0; 2467 2467 } 2468 2468 2469 + #ifdef CONFIG_PM_SLEEP 2469 2470 static int cpsw_suspend(struct device *dev) 2470 2471 { 2471 2472 struct platform_device *pdev = to_platform_device(dev); ··· 2519 2518 } 2520 2519 return 0; 2521 2520 } 2521 + #endif 2522 2522 2523 - static const struct dev_pm_ops cpsw_pm_ops = { 2524 - .suspend = cpsw_suspend, 2525 - .resume = cpsw_resume, 2526 - }; 2523 + static SIMPLE_DEV_PM_OPS(cpsw_pm_ops, cpsw_suspend, cpsw_resume); 2527 2524 2528 2525 static const struct of_device_id cpsw_of_mtable[] = { 2529 2526 { .compatible = "ti,cpsw", },
+3 -2
drivers/net/ethernet/ti/davinci_mdio.c
··· 423 423 return 0; 424 424 } 425 425 426 + #ifdef CONFIG_PM_SLEEP 426 427 static int davinci_mdio_suspend(struct device *dev) 427 428 { 428 429 struct davinci_mdio_data *data = dev_get_drvdata(dev); ··· 465 464 466 465 return 0; 467 466 } 467 + #endif 468 468 469 469 static const struct dev_pm_ops davinci_mdio_pm_ops = { 470 - .suspend_late = davinci_mdio_suspend, 471 - .resume_early = davinci_mdio_resume, 470 + SET_LATE_SYSTEM_SLEEP_PM_OPS(davinci_mdio_suspend, davinci_mdio_resume) 472 471 }; 473 472 474 473 #if IS_ENABLED(CONFIG_OF)
+1 -1
drivers/net/ethernet/wiznet/w5100.c
··· 498 498 } 499 499 500 500 if (rx_count < budget) { 501 + napi_complete(napi); 501 502 w5100_write(priv, W5100_IMR, IR_S0); 502 503 mmiowb(); 503 - napi_complete(napi); 504 504 } 505 505 506 506 return rx_count;
+1 -1
drivers/net/ethernet/wiznet/w5300.c
··· 418 418 } 419 419 420 420 if (rx_count < budget) { 421 + napi_complete(napi); 421 422 w5300_write(priv, W5300_IMR, IR_S0); 422 423 mmiowb(); 423 - napi_complete(napi); 424 424 } 425 425 426 426 return rx_count;
+1 -1
drivers/net/ethernet/xscale/ixp4xx_eth.c
··· 938 938 int i; 939 939 static const u8 allmulti[] = { 0x01, 0x00, 0x00, 0x00, 0x00, 0x00 }; 940 940 941 - if (dev->flags & IFF_ALLMULTI) { 941 + if ((dev->flags & IFF_ALLMULTI) && !(dev->flags & IFF_PROMISC)) { 942 942 for (i = 0; i < ETH_ALEN; i++) { 943 943 __raw_writel(allmulti[i], &port->regs->mcast_addr[i]); 944 944 __raw_writel(allmulti[i], &port->regs->mcast_mask[i]);
+3 -1
drivers/net/ipvlan/ipvlan.h
··· 114 114 rx_handler_result_t ipvlan_handle_frame(struct sk_buff **pskb); 115 115 int ipvlan_queue_xmit(struct sk_buff *skb, struct net_device *dev); 116 116 void ipvlan_ht_addr_add(struct ipvl_dev *ipvlan, struct ipvl_addr *addr); 117 - bool ipvlan_addr_busy(struct ipvl_dev *ipvlan, void *iaddr, bool is_v6); 117 + struct ipvl_addr *ipvlan_find_addr(const struct ipvl_dev *ipvlan, 118 + const void *iaddr, bool is_v6); 119 + bool ipvlan_addr_busy(struct ipvl_port *port, void *iaddr, bool is_v6); 118 120 struct ipvl_addr *ipvlan_ht_addr_lookup(const struct ipvl_port *port, 119 121 const void *iaddr, bool is_v6); 120 122 void ipvlan_ht_addr_del(struct ipvl_addr *addr, bool sync);
+21 -9
drivers/net/ipvlan/ipvlan_core.c
··· 81 81 hash = (addr->atype == IPVL_IPV6) ? 82 82 ipvlan_get_v6_hash(&addr->ip6addr) : 83 83 ipvlan_get_v4_hash(&addr->ip4addr); 84 - hlist_add_head_rcu(&addr->hlnode, &port->hlhead[hash]); 84 + if (hlist_unhashed(&addr->hlnode)) 85 + hlist_add_head_rcu(&addr->hlnode, &port->hlhead[hash]); 85 86 } 86 87 87 88 void ipvlan_ht_addr_del(struct ipvl_addr *addr, bool sync) 88 89 { 89 - hlist_del_rcu(&addr->hlnode); 90 + hlist_del_init_rcu(&addr->hlnode); 90 91 if (sync) 91 92 synchronize_rcu(); 92 93 } 93 94 94 - bool ipvlan_addr_busy(struct ipvl_dev *ipvlan, void *iaddr, bool is_v6) 95 + struct ipvl_addr *ipvlan_find_addr(const struct ipvl_dev *ipvlan, 96 + const void *iaddr, bool is_v6) 95 97 { 96 - struct ipvl_port *port = ipvlan->port; 97 98 struct ipvl_addr *addr; 98 99 99 100 list_for_each_entry(addr, &ipvlan->addrs, anode) { ··· 102 101 ipv6_addr_equal(&addr->ip6addr, iaddr)) || 103 102 (!is_v6 && addr->atype == IPVL_IPV4 && 104 103 addr->ip4addr.s_addr == ((struct in_addr *)iaddr)->s_addr)) 104 + return addr; 105 + } 106 + return NULL; 107 + } 108 + 109 + bool ipvlan_addr_busy(struct ipvl_port *port, void *iaddr, bool is_v6) 110 + { 111 + struct ipvl_dev *ipvlan; 112 + 113 + ASSERT_RTNL(); 114 + 115 + list_for_each_entry(ipvlan, &port->ipvlans, pnode) { 116 + if (ipvlan_find_addr(ipvlan, iaddr, is_v6)) 105 117 return true; 106 118 } 107 - 108 - if (ipvlan_ht_addr_lookup(port, iaddr, is_v6)) 109 - return true; 110 - 111 119 return false; 112 120 } 113 121 ··· 202 192 if (skb->protocol == htons(ETH_P_PAUSE)) 203 193 return; 204 194 205 - list_for_each_entry(ipvlan, &port->ipvlans, pnode) { 195 + rcu_read_lock(); 196 + list_for_each_entry_rcu(ipvlan, &port->ipvlans, pnode) { 206 197 if (local && (ipvlan == in_dev)) 207 198 continue; 208 199 ··· 230 219 mcast_acct: 231 220 ipvlan_count_rx(ipvlan, len, ret == NET_RX_SUCCESS, true); 232 221 } 222 + rcu_read_unlock(); 233 223 234 224 /* Locally generated? ...Forward a copy to the main-device as 235 225 * well. 
On the RX side we'll ignore it (won't give it to any
+19 -11
drivers/net/ipvlan/ipvlan_main.c
··· 505 505 if (ipvlan->ipv6cnt > 0 || ipvlan->ipv4cnt > 0) { 506 506 list_for_each_entry_safe(addr, next, &ipvlan->addrs, anode) { 507 507 ipvlan_ht_addr_del(addr, !dev->dismantle); 508 - list_del_rcu(&addr->anode); 508 + list_del(&addr->anode); 509 509 } 510 510 } 511 511 list_del_rcu(&ipvlan->pnode); ··· 607 607 { 608 608 struct ipvl_addr *addr; 609 609 610 - if (ipvlan_addr_busy(ipvlan, ip6_addr, true)) { 610 + if (ipvlan_addr_busy(ipvlan->port, ip6_addr, true)) { 611 611 netif_err(ipvlan, ifup, ipvlan->dev, 612 612 "Failed to add IPv6=%pI6c addr for %s intf\n", 613 613 ip6_addr, ipvlan->dev->name); ··· 620 620 addr->master = ipvlan; 621 621 memcpy(&addr->ip6addr, ip6_addr, sizeof(struct in6_addr)); 622 622 addr->atype = IPVL_IPV6; 623 - list_add_tail_rcu(&addr->anode, &ipvlan->addrs); 623 + list_add_tail(&addr->anode, &ipvlan->addrs); 624 624 ipvlan->ipv6cnt++; 625 - ipvlan_ht_addr_add(ipvlan, addr); 625 + /* If the interface is not up, the address will be added to the hash 626 + * list by ipvlan_open. 
627 + */ 628 + if (netif_running(ipvlan->dev)) 629 + ipvlan_ht_addr_add(ipvlan, addr); 626 630 627 631 return 0; 628 632 } ··· 635 631 { 636 632 struct ipvl_addr *addr; 637 633 638 - addr = ipvlan_ht_addr_lookup(ipvlan->port, ip6_addr, true); 634 + addr = ipvlan_find_addr(ipvlan, ip6_addr, true); 639 635 if (!addr) 640 636 return; 641 637 642 638 ipvlan_ht_addr_del(addr, true); 643 - list_del_rcu(&addr->anode); 639 + list_del(&addr->anode); 644 640 ipvlan->ipv6cnt--; 645 641 WARN_ON(ipvlan->ipv6cnt < 0); 646 642 kfree_rcu(addr, rcu); ··· 679 675 { 680 676 struct ipvl_addr *addr; 681 677 682 - if (ipvlan_addr_busy(ipvlan, ip4_addr, false)) { 678 + if (ipvlan_addr_busy(ipvlan->port, ip4_addr, false)) { 683 679 netif_err(ipvlan, ifup, ipvlan->dev, 684 680 "Failed to add IPv4=%pI4 on %s intf.\n", 685 681 ip4_addr, ipvlan->dev->name); ··· 692 688 addr->master = ipvlan; 693 689 memcpy(&addr->ip4addr, ip4_addr, sizeof(struct in_addr)); 694 690 addr->atype = IPVL_IPV4; 695 - list_add_tail_rcu(&addr->anode, &ipvlan->addrs); 691 + list_add_tail(&addr->anode, &ipvlan->addrs); 696 692 ipvlan->ipv4cnt++; 697 - ipvlan_ht_addr_add(ipvlan, addr); 693 + /* If the interface is not up, the address will be added to the hash 694 + * list by ipvlan_open. 695 + */ 696 + if (netif_running(ipvlan->dev)) 697 + ipvlan_ht_addr_add(ipvlan, addr); 698 698 ipvlan_set_broadcast_mac_filter(ipvlan, true); 699 699 700 700 return 0; ··· 708 700 { 709 701 struct ipvl_addr *addr; 710 702 711 - addr = ipvlan_ht_addr_lookup(ipvlan->port, ip4_addr, false); 703 + addr = ipvlan_find_addr(ipvlan, ip4_addr, false); 712 704 if (!addr) 713 705 return; 714 706 715 707 ipvlan_ht_addr_del(addr, true); 716 - list_del_rcu(&addr->anode); 708 + list_del(&addr->anode); 717 709 ipvlan->ipv4cnt--; 718 710 WARN_ON(ipvlan->ipv4cnt < 0); 719 711 if (!ipvlan->ipv4cnt)
+5 -2
drivers/net/macvtap.c
··· 654 654 } /* else everything is zero */ 655 655 } 656 656 657 + /* Neighbour code has some assumptions on HH_DATA_MOD alignment */ 658 + #define MACVTAP_RESERVE HH_DATA_OFF(ETH_HLEN) 659 + 657 660 /* Get packet from user space buffer */ 658 661 static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m, 659 662 struct iov_iter *from, int noblock) 660 663 { 661 - int good_linear = SKB_MAX_HEAD(NET_IP_ALIGN); 664 + int good_linear = SKB_MAX_HEAD(MACVTAP_RESERVE); 662 665 struct sk_buff *skb; 663 666 struct macvlan_dev *vlan; 664 667 unsigned long total_len = iov_iter_count(from); ··· 725 722 linear = macvtap16_to_cpu(q, vnet_hdr.hdr_len); 726 723 } 727 724 728 - skb = macvtap_alloc_skb(&q->sk, NET_IP_ALIGN, copylen, 725 + skb = macvtap_alloc_skb(&q->sk, MACVTAP_RESERVE, copylen, 729 726 linear, noblock, &err); 730 727 if (!skb) 731 728 goto err;
+80 -2
drivers/net/phy/amd-xgbe-phy.c
··· 92 92 #define XGBE_PHY_CDR_RATE_PROPERTY "amd,serdes-cdr-rate" 93 93 #define XGBE_PHY_PQ_SKEW_PROPERTY "amd,serdes-pq-skew" 94 94 #define XGBE_PHY_TX_AMP_PROPERTY "amd,serdes-tx-amp" 95 + #define XGBE_PHY_DFE_CFG_PROPERTY "amd,serdes-dfe-tap-config" 96 + #define XGBE_PHY_DFE_ENA_PROPERTY "amd,serdes-dfe-tap-enable" 95 97 96 98 #define XGBE_PHY_SPEEDS 3 97 99 #define XGBE_PHY_SPEED_1000 0 ··· 179 177 #define SPEED_10000_BLWC 0 180 178 #define SPEED_10000_CDR 0x7 181 179 #define SPEED_10000_PLL 0x1 182 - #define SPEED_10000_PQ 0x1e 180 + #define SPEED_10000_PQ 0x12 183 181 #define SPEED_10000_RATE 0x0 184 182 #define SPEED_10000_TXAMP 0xa 185 183 #define SPEED_10000_WORD 0x7 184 + #define SPEED_10000_DFE_TAP_CONFIG 0x1 185 + #define SPEED_10000_DFE_TAP_ENABLE 0x7f 186 186 187 187 #define SPEED_2500_BLWC 1 188 188 #define SPEED_2500_CDR 0x2 ··· 193 189 #define SPEED_2500_RATE 0x1 194 190 #define SPEED_2500_TXAMP 0xf 195 191 #define SPEED_2500_WORD 0x1 192 + #define SPEED_2500_DFE_TAP_CONFIG 0x3 193 + #define SPEED_2500_DFE_TAP_ENABLE 0x0 196 194 197 195 #define SPEED_1000_BLWC 1 198 196 #define SPEED_1000_CDR 0x2 ··· 203 197 #define SPEED_1000_RATE 0x3 204 198 #define SPEED_1000_TXAMP 0xf 205 199 #define SPEED_1000_WORD 0x1 200 + #define SPEED_1000_DFE_TAP_CONFIG 0x3 201 + #define SPEED_1000_DFE_TAP_ENABLE 0x0 206 202 207 203 /* SerDes RxTx register offsets */ 204 + #define RXTX_REG6 0x0018 208 205 #define RXTX_REG20 0x0050 206 + #define RXTX_REG22 0x0058 209 207 #define RXTX_REG114 0x01c8 208 + #define RXTX_REG129 0x0204 210 209 211 210 /* SerDes RxTx register entry bit positions and sizes */ 211 + #define RXTX_REG6_RESETB_RXD_INDEX 8 212 + #define RXTX_REG6_RESETB_RXD_WIDTH 1 212 213 #define RXTX_REG20_BLWC_ENA_INDEX 2 213 214 #define RXTX_REG20_BLWC_ENA_WIDTH 1 214 215 #define RXTX_REG114_PQ_REG_INDEX 9 215 216 #define RXTX_REG114_PQ_REG_WIDTH 7 217 + #define RXTX_REG129_RXDFE_CONFIG_INDEX 14 218 + #define RXTX_REG129_RXDFE_CONFIG_WIDTH 2 216 219 217 220 /* Bit 
setting and getting macros 218 221 * The get macro will extract the current bit field value from within ··· 348 333 SPEED_10000_TXAMP, 349 334 }; 350 335 336 + static const u32 amd_xgbe_phy_serdes_dfe_tap_cfg[] = { 337 + SPEED_1000_DFE_TAP_CONFIG, 338 + SPEED_2500_DFE_TAP_CONFIG, 339 + SPEED_10000_DFE_TAP_CONFIG, 340 + }; 341 + 342 + static const u32 amd_xgbe_phy_serdes_dfe_tap_ena[] = { 343 + SPEED_1000_DFE_TAP_ENABLE, 344 + SPEED_2500_DFE_TAP_ENABLE, 345 + SPEED_10000_DFE_TAP_ENABLE, 346 + }; 347 + 351 348 enum amd_xgbe_phy_an { 352 349 AMD_XGBE_AN_READY = 0, 353 350 AMD_XGBE_AN_PAGE_RECEIVED, ··· 420 393 u32 serdes_cdr_rate[XGBE_PHY_SPEEDS]; 421 394 u32 serdes_pq_skew[XGBE_PHY_SPEEDS]; 422 395 u32 serdes_tx_amp[XGBE_PHY_SPEEDS]; 396 + u32 serdes_dfe_tap_cfg[XGBE_PHY_SPEEDS]; 397 + u32 serdes_dfe_tap_ena[XGBE_PHY_SPEEDS]; 423 398 424 399 /* Auto-negotiation state machine support */ 425 400 struct mutex an_mutex; ··· 510 481 status = XSIR0_IOREAD(priv, SIR0_STATUS); 511 482 if (XSIR_GET_BITS(status, SIR0_STATUS, RX_READY) && 512 483 XSIR_GET_BITS(status, SIR0_STATUS, TX_READY)) 513 - return; 484 + goto rx_reset; 514 485 } 515 486 516 487 netdev_dbg(phydev->attached_dev, "SerDes rx/tx not ready (%#hx)\n", 517 488 status); 489 + 490 + rx_reset: 491 + /* Perform Rx reset for the DFE changes */ 492 + XRXTX_IOWRITE_BITS(priv, RXTX_REG6, RESETB_RXD, 0); 493 + XRXTX_IOWRITE_BITS(priv, RXTX_REG6, RESETB_RXD, 1); 518 494 } 519 495 520 496 static int amd_xgbe_phy_xgmii_mode(struct phy_device *phydev) ··· 568 534 priv->serdes_blwc[XGBE_PHY_SPEED_10000]); 569 535 XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG, 570 536 priv->serdes_pq_skew[XGBE_PHY_SPEED_10000]); 537 + XRXTX_IOWRITE_BITS(priv, RXTX_REG129, RXDFE_CONFIG, 538 + priv->serdes_dfe_tap_cfg[XGBE_PHY_SPEED_10000]); 539 + XRXTX_IOWRITE(priv, RXTX_REG22, 540 + priv->serdes_dfe_tap_ena[XGBE_PHY_SPEED_10000]); 571 541 572 542 amd_xgbe_phy_serdes_complete_ratechange(phydev); 573 543 ··· 624 586 
priv->serdes_blwc[XGBE_PHY_SPEED_2500]); 625 587 XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG, 626 588 priv->serdes_pq_skew[XGBE_PHY_SPEED_2500]); 589 + XRXTX_IOWRITE_BITS(priv, RXTX_REG129, RXDFE_CONFIG, 590 + priv->serdes_dfe_tap_cfg[XGBE_PHY_SPEED_2500]); 591 + XRXTX_IOWRITE(priv, RXTX_REG22, 592 + priv->serdes_dfe_tap_ena[XGBE_PHY_SPEED_2500]); 627 593 628 594 amd_xgbe_phy_serdes_complete_ratechange(phydev); 629 595 ··· 680 638 priv->serdes_blwc[XGBE_PHY_SPEED_1000]); 681 639 XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG, 682 640 priv->serdes_pq_skew[XGBE_PHY_SPEED_1000]); 641 + XRXTX_IOWRITE_BITS(priv, RXTX_REG129, RXDFE_CONFIG, 642 + priv->serdes_dfe_tap_cfg[XGBE_PHY_SPEED_1000]); 643 + XRXTX_IOWRITE(priv, RXTX_REG22, 644 + priv->serdes_dfe_tap_ena[XGBE_PHY_SPEED_1000]); 683 645 684 646 amd_xgbe_phy_serdes_complete_ratechange(phydev); 685 647 ··· 1712 1666 } else { 1713 1667 memcpy(priv->serdes_tx_amp, amd_xgbe_phy_serdes_tx_amp, 1714 1668 sizeof(priv->serdes_tx_amp)); 1669 + } 1670 + 1671 + if (device_property_present(phy_dev, XGBE_PHY_DFE_CFG_PROPERTY)) { 1672 + ret = device_property_read_u32_array(phy_dev, 1673 + XGBE_PHY_DFE_CFG_PROPERTY, 1674 + priv->serdes_dfe_tap_cfg, 1675 + XGBE_PHY_SPEEDS); 1676 + if (ret) { 1677 + dev_err(dev, "invalid %s property\n", 1678 + XGBE_PHY_DFE_CFG_PROPERTY); 1679 + goto err_sir1; 1680 + } 1681 + } else { 1682 + memcpy(priv->serdes_dfe_tap_cfg, 1683 + amd_xgbe_phy_serdes_dfe_tap_cfg, 1684 + sizeof(priv->serdes_dfe_tap_cfg)); 1685 + } 1686 + 1687 + if (device_property_present(phy_dev, XGBE_PHY_DFE_ENA_PROPERTY)) { 1688 + ret = device_property_read_u32_array(phy_dev, 1689 + XGBE_PHY_DFE_ENA_PROPERTY, 1690 + priv->serdes_dfe_tap_ena, 1691 + XGBE_PHY_SPEEDS); 1692 + if (ret) { 1693 + dev_err(dev, "invalid %s property\n", 1694 + XGBE_PHY_DFE_ENA_PROPERTY); 1695 + goto err_sir1; 1696 + } 1697 + } else { 1698 + memcpy(priv->serdes_dfe_tap_ena, 1699 + amd_xgbe_phy_serdes_dfe_tap_ena, 1700 + sizeof(priv->serdes_dfe_tap_ena)); 1715 
1701 } 1716 1702 1717 1703 phydev->priv = priv;
+20 -3
drivers/net/phy/phy.c
··· 236 236 } 237 237 238 238 /** 239 + * phy_check_valid - check if there is a valid PHY setting which matches 240 + * speed, duplex, and feature mask 241 + * @speed: speed to match 242 + * @duplex: duplex to match 243 + * @features: A mask of the valid settings 244 + * 245 + * Description: Returns true if there is a valid setting, false otherwise. 246 + */ 247 + static inline bool phy_check_valid(int speed, int duplex, u32 features) 248 + { 249 + unsigned int idx; 250 + 251 + idx = phy_find_valid(phy_find_setting(speed, duplex), features); 252 + 253 + return settings[idx].speed == speed && settings[idx].duplex == duplex && 254 + (settings[idx].setting & features); 255 + } 256 + 257 + /** 239 258 * phy_sanitize_settings - make sure the PHY is set to supported speed and duplex 240 259 * @phydev: the target phy_device struct 241 260 * ··· 1064 1045 int eee_lp, eee_cap, eee_adv; 1065 1046 u32 lp, cap, adv; 1066 1047 int status; 1067 - unsigned int idx; 1068 1048 1069 1049 /* Read phy status to properly get the right settings */ 1070 1050 status = phy_read_status(phydev); ··· 1095 1077 1096 1078 adv = mmd_eee_adv_to_ethtool_adv_t(eee_adv); 1097 1079 lp = mmd_eee_adv_to_ethtool_adv_t(eee_lp); 1098 - idx = phy_find_setting(phydev->speed, phydev->duplex); 1099 - if (!(lp & adv & settings[idx].setting)) 1080 + if (!phy_check_valid(phydev->speed, phydev->duplex, lp & adv)) 1100 1081 goto eee_exit_err; 1101 1082 1102 1083 if (clk_stop_enable) {
+4 -6
drivers/net/team/team.c
··· 43 43 44 44 static struct team_port *team_port_get_rcu(const struct net_device *dev) 45 45 { 46 - struct team_port *port = rcu_dereference(dev->rx_handler_data); 47 - 48 - return team_port_exists(dev) ? port : NULL; 46 + return rcu_dereference(dev->rx_handler_data); 49 47 } 50 48 51 49 static struct team_port *team_port_get_rtnl(const struct net_device *dev) ··· 1730 1732 if (dev->type == ARPHRD_ETHER && !is_valid_ether_addr(addr->sa_data)) 1731 1733 return -EADDRNOTAVAIL; 1732 1734 memcpy(dev->dev_addr, addr->sa_data, dev->addr_len); 1733 - rcu_read_lock(); 1734 - list_for_each_entry_rcu(port, &team->port_list, list) 1735 + mutex_lock(&team->lock); 1736 + list_for_each_entry(port, &team->port_list, list) 1735 1737 if (team->ops.port_change_dev_addr) 1736 1738 team->ops.port_change_dev_addr(team, port); 1737 - rcu_read_unlock(); 1739 + mutex_unlock(&team->lock); 1738 1740 return 0; 1739 1741 } 1740 1742
+1
drivers/net/usb/Kconfig
··· 161 161 * Linksys USB200M 162 162 * Netgear FA120 163 163 * Sitecom LN-029 164 + * Sitecom LN-028 164 165 * Intellinet USB 2.0 Ethernet 165 166 * ST Lab USB 2.0 Ethernet 166 167 * TrendNet TU2-ET100
+2
drivers/net/usb/asix_common.c
··· 188 188 memcpy(skb_tail_pointer(skb), &padbytes, sizeof(padbytes)); 189 189 skb_put(skb, sizeof(padbytes)); 190 190 } 191 + 192 + usbnet_set_skb_tx_stats(skb, 1, 0); 191 193 return skb; 192 194 } 193 195
+4
drivers/net/usb/asix_devices.c
··· 979 979 USB_DEVICE (0x0df6, 0x0056), 980 980 .driver_info = (unsigned long) &ax88178_info, 981 981 }, { 982 + // Sitecom LN-028 "USB 2.0 10/100/1000 Ethernet adapter" 983 + USB_DEVICE (0x0df6, 0x061c), 984 + .driver_info = (unsigned long) &ax88178_info, 985 + }, { 982 986 // corega FEther USB2-TX 983 987 USB_DEVICE (0x07aa, 0x0017), 984 988 .driver_info = (unsigned long) &ax8817x_info,
+8
drivers/net/usb/cdc_ether.c
··· 522 522 #define DELL_VENDOR_ID 0x413C 523 523 #define REALTEK_VENDOR_ID 0x0bda 524 524 #define SAMSUNG_VENDOR_ID 0x04e8 525 + #define LENOVO_VENDOR_ID 0x17ef 525 526 526 527 static const struct usb_device_id products[] = { 527 528 /* BLACKLIST !! ··· 699 698 /* Samsung USB Ethernet Adapters */ 700 699 { 701 700 USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, 0xa101, USB_CLASS_COMM, 701 + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 702 + .driver_info = 0, 703 + }, 704 + 705 + /* Lenovo Thinkpad USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */ 706 + { 707 + USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x7205, USB_CLASS_COMM, 702 708 USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 703 709 .driver_info = 0, 704 710 },
+3 -3
drivers/net/usb/cdc_ncm.c
··· 1172 1172 1173 1173 /* return skb */ 1174 1174 ctx->tx_curr_skb = NULL; 1175 - dev->net->stats.tx_packets += ctx->tx_curr_frame_num; 1176 1175 1177 1176 /* keep private stats: framing overhead and number of NTBs */ 1178 1177 ctx->tx_overhead += skb_out->len - ctx->tx_curr_frame_payload; 1179 1178 ctx->tx_ntbs++; 1180 1179 1181 - /* usbnet has already counted all the framing overhead. 1180 + /* usbnet will count all the framing overhead by default. 1182 1181 * Adjust the stats so that the tx_bytes counter show real 1183 1182 * payload data instead. 1184 1183 */ 1185 - dev->net->stats.tx_bytes -= skb_out->len - ctx->tx_curr_frame_payload; 1184 + usbnet_set_skb_tx_stats(skb_out, n, 1185 + ctx->tx_curr_frame_payload - skb_out->len); 1186 1186 1187 1187 return skb_out; 1188 1188
+34 -7
drivers/net/usb/cx82310_eth.c
··· 46 46 }; 47 47 48 48 #define CMD_PACKET_SIZE 64 49 - /* first command after power on can take around 8 seconds */ 50 - #define CMD_TIMEOUT 15000 49 + #define CMD_TIMEOUT 100 51 50 #define CMD_REPLY_RETRY 5 52 51 53 52 #define CX82310_MTU 1514 ··· 77 78 ret = usb_bulk_msg(udev, usb_sndbulkpipe(udev, CMD_EP), buf, 78 79 CMD_PACKET_SIZE, &actual_len, CMD_TIMEOUT); 79 80 if (ret < 0) { 80 - dev_err(&dev->udev->dev, "send command %#x: error %d\n", 81 - cmd, ret); 81 + if (cmd != CMD_GET_LINK_STATUS) 82 + dev_err(&dev->udev->dev, "send command %#x: error %d\n", 83 + cmd, ret); 82 84 goto end; 83 85 } 84 86 ··· 90 90 buf, CMD_PACKET_SIZE, &actual_len, 91 91 CMD_TIMEOUT); 92 92 if (ret < 0) { 93 - dev_err(&dev->udev->dev, 94 - "reply receive error %d\n", ret); 93 + if (cmd != CMD_GET_LINK_STATUS) 94 + dev_err(&dev->udev->dev, 95 + "reply receive error %d\n", 96 + ret); 95 97 goto end; 96 98 } 97 99 if (actual_len > 0) ··· 136 134 int ret; 137 135 char buf[15]; 138 136 struct usb_device *udev = dev->udev; 137 + u8 link[3]; 138 + int timeout = 50; 139 139 140 140 /* avoid ADSL modems - continue only if iProduct is "USB NET CARD" */ 141 141 if (usb_string(udev, udev->descriptor.iProduct, buf, sizeof(buf)) > 0 ··· 163 159 dev->partial_data = (unsigned long) kmalloc(dev->hard_mtu, GFP_KERNEL); 164 160 if (!dev->partial_data) 165 161 return -ENOMEM; 162 + 163 + /* wait for firmware to become ready (indicated by the link being up) */ 164 + while (--timeout) { 165 + ret = cx82310_cmd(dev, CMD_GET_LINK_STATUS, true, NULL, 0, 166 + link, sizeof(link)); 167 + /* the command can time out during boot - it's not an error */ 168 + if (!ret && link[0] == 1 && link[2] == 1) 169 + break; 170 + msleep(500); 171 + }; 172 + if (!timeout) { 173 + dev_err(&udev->dev, "firmware not ready in time\n"); 174 + return -ETIMEDOUT; 175 + } 166 176 167 177 /* enable ethernet mode (?) 
*/ 168 178 ret = cx82310_cmd(dev, CMD_ETHERNET_MODE, true, "\x01", 1, NULL, 0); ··· 318 300 .tx_fixup = cx82310_tx_fixup, 319 301 }; 320 302 303 + #define USB_DEVICE_CLASS(vend, prod, cl, sc, pr) \ 304 + .match_flags = USB_DEVICE_ID_MATCH_DEVICE | \ 305 + USB_DEVICE_ID_MATCH_DEV_INFO, \ 306 + .idVendor = (vend), \ 307 + .idProduct = (prod), \ 308 + .bDeviceClass = (cl), \ 309 + .bDeviceSubClass = (sc), \ 310 + .bDeviceProtocol = (pr) 311 + 321 312 static const struct usb_device_id products[] = { 322 313 { 323 - USB_DEVICE_AND_INTERFACE_INFO(0x0572, 0xcb01, 0xff, 0, 0), 314 + USB_DEVICE_CLASS(0x0572, 0xcb01, 0xff, 0, 0), 324 315 .driver_info = (unsigned long) &cx82310_info 325 316 }, 326 317 { },
+1 -1
drivers/net/usb/hso.c
··· 1594 1594 } 1595 1595 cprev = cnow; 1596 1596 } 1597 - current->state = TASK_RUNNING; 1597 + __set_current_state(TASK_RUNNING); 1598 1598 remove_wait_queue(&tiocmget->waitq, &wait); 1599 1599 1600 1600 return ret;
+5
drivers/net/usb/plusb.c
··· 134 134 }, { 135 135 USB_DEVICE(0x050d, 0x258a), /* Belkin F5U258/F5U279 (PL-25A1) */ 136 136 .driver_info = (unsigned long) &prolific_info, 137 + }, { 138 + USB_DEVICE(0x3923, 0x7825), /* National Instruments USB 139 + * Host-to-Host Cable 140 + */ 141 + .driver_info = (unsigned long) &prolific_info, 137 142 }, 138 143 139 144 { }, // END
+2
drivers/net/usb/r8152.c
··· 492 492 /* Define these values to match your device */ 493 493 #define VENDOR_ID_REALTEK 0x0bda 494 494 #define VENDOR_ID_SAMSUNG 0x04e8 495 + #define VENDOR_ID_LENOVO 0x17ef 495 496 496 497 #define MCU_TYPE_PLA 0x0100 497 498 #define MCU_TYPE_USB 0x0000 ··· 4038 4037 {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8152)}, 4039 4038 {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8153)}, 4040 4039 {REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101)}, 4040 + {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)}, 4041 4041 {} 4042 4042 }; 4043 4043
+1
drivers/net/usb/sr9800.c
··· 144 144 skb_put(skb, sizeof(padbytes)); 145 145 } 146 146 147 + usbnet_set_skb_tx_stats(skb, 1, 0); 147 148 return skb; 148 149 } 149 150
+14 -3
drivers/net/usb/usbnet.c
··· 1188 1188 struct usbnet *dev = entry->dev; 1189 1189 1190 1190 if (urb->status == 0) { 1191 - if (!(dev->driver_info->flags & FLAG_MULTI_PACKET)) 1192 - dev->net->stats.tx_packets++; 1191 + dev->net->stats.tx_packets += entry->packets; 1193 1192 dev->net->stats.tx_bytes += entry->length; 1194 1193 } else { 1195 1194 dev->net->stats.tx_errors++; ··· 1346 1347 } else 1347 1348 urb->transfer_flags |= URB_ZERO_PACKET; 1348 1349 } 1349 - entry->length = urb->transfer_buffer_length = length; 1350 + urb->transfer_buffer_length = length; 1351 + 1352 + if (info->flags & FLAG_MULTI_PACKET) { 1353 + /* Driver has set number of packets and a length delta. 1354 + * Calculate the complete length and ensure that it's 1355 + * positive. 1356 + */ 1357 + entry->length += length; 1358 + if (WARN_ON_ONCE(entry->length <= 0)) 1359 + entry->length = length; 1360 + } else { 1361 + usbnet_set_skb_tx_stats(skb, 1, length); 1362 + } 1350 1363 1351 1364 spin_lock_irqsave(&dev->txq.lock, flags); 1352 1365 retval = usb_autopm_get_interface_async(dev->intf);
+4 -5
drivers/net/virtio_net.c
··· 1448 1448 { 1449 1449 int i; 1450 1450 1451 - for (i = 0; i < vi->max_queue_pairs; i++) 1451 + for (i = 0; i < vi->max_queue_pairs; i++) { 1452 + napi_hash_del(&vi->rq[i].napi); 1452 1453 netif_napi_del(&vi->rq[i].napi); 1454 + } 1453 1455 1454 1456 kfree(vi->rq); 1455 1457 kfree(vi->sq); ··· 1950 1948 cancel_delayed_work_sync(&vi->refill); 1951 1949 1952 1950 if (netif_running(vi->dev)) { 1953 - for (i = 0; i < vi->max_queue_pairs; i++) { 1951 + for (i = 0; i < vi->max_queue_pairs; i++) 1954 1952 napi_disable(&vi->rq[i].napi); 1955 - napi_hash_del(&vi->rq[i].napi); 1956 - netif_napi_del(&vi->rq[i].napi); 1957 - } 1958 1953 } 1959 1954 1960 1955 remove_vq_common(vi);
+2 -2
drivers/net/vxlan.c
··· 1218 1218 goto drop; 1219 1219 1220 1220 flags &= ~VXLAN_HF_RCO; 1221 - vni &= VXLAN_VID_MASK; 1221 + vni &= VXLAN_VNI_MASK; 1222 1222 } 1223 1223 1224 1224 /* For backwards compatibility, only allow reserved fields to be ··· 1239 1239 flags &= ~VXLAN_GBP_USED_BITS; 1240 1240 } 1241 1241 1242 - if (flags || (vni & ~VXLAN_VID_MASK)) { 1242 + if (flags || vni & ~VXLAN_VNI_MASK) { 1243 1243 /* If there are any unprocessed flags remaining treat 1244 1244 * this as a malformed packet. This behavior diverges from 1245 1245 * VXLAN RFC (RFC7348) which stipulates that bits in reserved
+6 -6
drivers/net/wan/cosa.c
··· 806 806 spin_lock_irqsave(&cosa->lock, flags); 807 807 add_wait_queue(&chan->rxwaitq, &wait); 808 808 while (!chan->rx_status) { 809 - current->state = TASK_INTERRUPTIBLE; 809 + set_current_state(TASK_INTERRUPTIBLE); 810 810 spin_unlock_irqrestore(&cosa->lock, flags); 811 811 schedule(); 812 812 spin_lock_irqsave(&cosa->lock, flags); 813 813 if (signal_pending(current) && chan->rx_status == 0) { 814 814 chan->rx_status = 1; 815 815 remove_wait_queue(&chan->rxwaitq, &wait); 816 - current->state = TASK_RUNNING; 816 + __set_current_state(TASK_RUNNING); 817 817 spin_unlock_irqrestore(&cosa->lock, flags); 818 818 mutex_unlock(&chan->rlock); 819 819 return -ERESTARTSYS; 820 820 } 821 821 } 822 822 remove_wait_queue(&chan->rxwaitq, &wait); 823 - current->state = TASK_RUNNING; 823 + __set_current_state(TASK_RUNNING); 824 824 kbuf = chan->rxdata; 825 825 count = chan->rxsize; 826 826 spin_unlock_irqrestore(&cosa->lock, flags); ··· 890 890 spin_lock_irqsave(&cosa->lock, flags); 891 891 add_wait_queue(&chan->txwaitq, &wait); 892 892 while (!chan->tx_status) { 893 - current->state = TASK_INTERRUPTIBLE; 893 + set_current_state(TASK_INTERRUPTIBLE); 894 894 spin_unlock_irqrestore(&cosa->lock, flags); 895 895 schedule(); 896 896 spin_lock_irqsave(&cosa->lock, flags); 897 897 if (signal_pending(current) && chan->tx_status == 0) { 898 898 chan->tx_status = 1; 899 899 remove_wait_queue(&chan->txwaitq, &wait); 900 - current->state = TASK_RUNNING; 900 + __set_current_state(TASK_RUNNING); 901 901 chan->tx_status = 1; 902 902 spin_unlock_irqrestore(&cosa->lock, flags); 903 903 up(&chan->wsem); ··· 905 905 } 906 906 } 907 907 remove_wait_queue(&chan->txwaitq, &wait); 908 - current->state = TASK_RUNNING; 908 + __set_current_state(TASK_RUNNING); 909 909 up(&chan->wsem); 910 910 spin_unlock_irqrestore(&cosa->lock, flags); 911 911 kfree(kbuf);
+12 -8
drivers/net/wireless/ath/ath9k/beacon.c
··· 219 219 struct ath_common *common = ath9k_hw_common(sc->sc_ah); 220 220 struct ath_vif *avp = (void *)vif->drv_priv; 221 221 struct ath_buf *bf = avp->av_bcbuf; 222 + struct ath_beacon_config *cur_conf = &sc->cur_chan->beacon; 222 223 223 224 ath_dbg(common, CONFIG, "Removing interface at beacon slot: %d\n", 224 225 avp->av_bslot); 225 226 226 227 tasklet_disable(&sc->bcon_tasklet); 228 + 229 + cur_conf->enable_beacon &= ~BIT(avp->av_bslot); 227 230 228 231 if (bf && bf->bf_mpdu) { 229 232 struct sk_buff *skb = bf->bf_mpdu; ··· 524 521 } 525 522 526 523 if (sc->sc_ah->opmode == NL80211_IFTYPE_AP) { 527 - if ((vif->type != NL80211_IFTYPE_AP) || 528 - (sc->nbcnvifs > 1)) { 524 + if (vif->type != NL80211_IFTYPE_AP) { 529 525 ath_dbg(common, CONFIG, 530 526 "An AP interface is already present !\n"); 531 527 return false; ··· 618 616 * enabling/disabling SWBA. 619 617 */ 620 618 if (changed & BSS_CHANGED_BEACON_ENABLED) { 621 - if (!bss_conf->enable_beacon && 622 - (sc->nbcnvifs <= 1)) { 623 - cur_conf->enable_beacon = false; 624 - } else if (bss_conf->enable_beacon) { 625 - cur_conf->enable_beacon = true; 626 - ath9k_cache_beacon_config(sc, ctx, bss_conf); 619 + bool enabled = cur_conf->enable_beacon; 620 + 621 + if (!bss_conf->enable_beacon) { 622 + cur_conf->enable_beacon &= ~BIT(avp->av_bslot); 623 + } else { 624 + cur_conf->enable_beacon |= BIT(avp->av_bslot); 625 + if (!enabled) 626 + ath9k_cache_beacon_config(sc, ctx, bss_conf); 627 627 } 628 628 } 629 629
+1 -1
drivers/net/wireless/ath/ath9k/common.h
··· 54 54 u16 dtim_period; 55 55 u16 bmiss_timeout; 56 56 u8 dtim_count; 57 - bool enable_beacon; 57 + u8 enable_beacon; 58 58 bool ibss_creator; 59 59 u32 nexttbtt; 60 60 u32 intval;
+1 -1
drivers/net/wireless/ath/ath9k/hw.c
··· 424 424 ah->power_mode = ATH9K_PM_UNDEFINED; 425 425 ah->htc_reset_init = true; 426 426 427 - ah->tpc_enabled = true; 427 + ah->tpc_enabled = false; 428 428 429 429 ah->ani_function = ATH9K_ANI_ALL; 430 430 if (!AR_SREV_9300_20_OR_LATER(ah))
+1
drivers/net/wireless/b43/main.c
··· 5370 5370 case 0x432a: /* BCM4321 */ 5371 5371 case 0x432d: /* BCM4322 */ 5372 5372 case 0x4352: /* BCM43222 */ 5373 + case 0x435a: /* BCM43228 */ 5373 5374 case 0x4333: /* BCM4331 */ 5374 5375 case 0x43a2: /* BCM4360 */ 5375 5376 case 0x43b3: /* BCM4352 */
+2 -1
drivers/net/wireless/brcm80211/brcmfmac/feature.c
··· 126 126 brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_MCHAN, "mchan"); 127 127 if (drvr->bus_if->wowl_supported) 128 128 brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_WOWL, "wowl"); 129 - brcmf_feat_iovar_int_set(ifp, BRCMF_FEAT_MBSS, "mbss", 0); 129 + if (drvr->bus_if->chip != BRCM_CC_43362_CHIP_ID) 130 + brcmf_feat_iovar_int_set(ifp, BRCMF_FEAT_MBSS, "mbss", 0); 130 131 131 132 /* set chip related quirks */ 132 133 switch (drvr->bus_if->chip) {
+12 -3
drivers/net/wireless/brcm80211/brcmfmac/vendor.c
··· 39 39 void *dcmd_buf = NULL, *wr_pointer; 40 40 u16 msglen, maxmsglen = PAGE_SIZE - 0x100; 41 41 42 - brcmf_dbg(TRACE, "cmd %x set %d len %d\n", cmdhdr->cmd, cmdhdr->set, 43 - cmdhdr->len); 42 + if (len < sizeof(*cmdhdr)) { 43 + brcmf_err("vendor command too short: %d\n", len); 44 + return -EINVAL; 45 + } 44 46 45 47 vif = container_of(wdev, struct brcmf_cfg80211_vif, wdev); 46 48 ifp = vif->ifp; 47 49 48 - len -= sizeof(struct brcmf_vndr_dcmd_hdr); 50 + brcmf_dbg(TRACE, "ifidx=%d, cmd=%d\n", ifp->ifidx, cmdhdr->cmd); 51 + 52 + if (cmdhdr->offset > len) { 53 + brcmf_err("bad buffer offset %d > %d\n", cmdhdr->offset, len); 54 + return -EINVAL; 55 + } 56 + 57 + len -= cmdhdr->offset; 49 58 ret_len = cmdhdr->len; 50 59 if (ret_len > 0 || len > 0) { 51 60 if (len > BRCMF_DCMD_MAXLEN) {
-1
drivers/net/wireless/iwlwifi/dvm/dev.h
··· 708 708 unsigned long reload_jiffies; 709 709 int reload_count; 710 710 bool ucode_loaded; 711 - bool init_ucode_run; /* Don't run init uCode again */ 712 711 713 712 u8 plcp_delta_threshold; 714 713
+9 -8
drivers/net/wireless/iwlwifi/dvm/mac80211.c
··· 1114 1114 scd_queues &= ~(BIT(IWL_IPAN_CMD_QUEUE_NUM) | 1115 1115 BIT(IWL_DEFAULT_CMD_QUEUE_NUM)); 1116 1116 1117 - if (vif) 1118 - scd_queues &= ~BIT(vif->hw_queue[IEEE80211_AC_VO]); 1119 - 1120 - IWL_DEBUG_TX_QUEUES(priv, "Flushing SCD queues: 0x%x\n", scd_queues); 1121 - if (iwlagn_txfifo_flush(priv, scd_queues)) { 1122 - IWL_ERR(priv, "flush request fail\n"); 1123 - goto done; 1117 + if (drop) { 1118 + IWL_DEBUG_TX_QUEUES(priv, "Flushing SCD queues: 0x%x\n", 1119 + scd_queues); 1120 + if (iwlagn_txfifo_flush(priv, scd_queues)) { 1121 + IWL_ERR(priv, "flush request fail\n"); 1122 + goto done; 1123 + } 1124 1124 } 1125 + 1125 1126 IWL_DEBUG_TX_QUEUES(priv, "wait transmit/flush all frames\n"); 1126 - iwl_trans_wait_tx_queue_empty(priv->trans, 0xffffffff); 1127 + iwl_trans_wait_tx_queue_empty(priv->trans, scd_queues); 1127 1128 done: 1128 1129 mutex_unlock(&priv->mutex); 1129 1130 IWL_DEBUG_MAC80211(priv, "leave\n");
-5
drivers/net/wireless/iwlwifi/dvm/ucode.c
··· 418 418 if (!priv->fw->img[IWL_UCODE_INIT].sec[0].len) 419 419 return 0; 420 420 421 - if (priv->init_ucode_run) 422 - return 0; 423 - 424 421 iwl_init_notification_wait(&priv->notif_wait, &calib_wait, 425 422 calib_complete, ARRAY_SIZE(calib_complete), 426 423 iwlagn_wait_calib, priv); ··· 437 440 */ 438 441 ret = iwl_wait_notification(&priv->notif_wait, &calib_wait, 439 442 UCODE_CALIB_TIMEOUT); 440 - if (!ret) 441 - priv->init_ucode_run = true; 442 443 443 444 goto out; 444 445
+4 -2
drivers/net/wireless/iwlwifi/iwl-1000.c
··· 95 95 .nvm_calib_ver = EEPROM_1000_TX_POWER_VERSION, \ 96 96 .base_params = &iwl1000_base_params, \ 97 97 .eeprom_params = &iwl1000_eeprom_params, \ 98 - .led_mode = IWL_LED_BLINK 98 + .led_mode = IWL_LED_BLINK, \ 99 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 99 100 100 101 const struct iwl_cfg iwl1000_bgn_cfg = { 101 102 .name = "Intel(R) Centrino(R) Wireless-N 1000 BGN", ··· 122 121 .base_params = &iwl1000_base_params, \ 123 122 .eeprom_params = &iwl1000_eeprom_params, \ 124 123 .led_mode = IWL_LED_RF_STATE, \ 125 - .rx_with_siso_diversity = true 124 + .rx_with_siso_diversity = true, \ 125 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 126 126 127 127 const struct iwl_cfg iwl100_bgn_cfg = { 128 128 .name = "Intel(R) Centrino(R) Wireless-N 100 BGN",
+9 -4
drivers/net/wireless/iwlwifi/iwl-2000.c
··· 123 123 .nvm_calib_ver = EEPROM_2000_TX_POWER_VERSION, \ 124 124 .base_params = &iwl2000_base_params, \ 125 125 .eeprom_params = &iwl20x0_eeprom_params, \ 126 - .led_mode = IWL_LED_RF_STATE 126 + .led_mode = IWL_LED_RF_STATE, \ 127 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 128 + 127 129 128 130 const struct iwl_cfg iwl2000_2bgn_cfg = { 129 131 .name = "Intel(R) Centrino(R) Wireless-N 2200 BGN", ··· 151 149 .nvm_calib_ver = EEPROM_2000_TX_POWER_VERSION, \ 152 150 .base_params = &iwl2030_base_params, \ 153 151 .eeprom_params = &iwl20x0_eeprom_params, \ 154 - .led_mode = IWL_LED_RF_STATE 152 + .led_mode = IWL_LED_RF_STATE, \ 153 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 155 154 156 155 const struct iwl_cfg iwl2030_2bgn_cfg = { 157 156 .name = "Intel(R) Centrino(R) Wireless-N 2230 BGN", ··· 173 170 .base_params = &iwl2000_base_params, \ 174 171 .eeprom_params = &iwl20x0_eeprom_params, \ 175 172 .led_mode = IWL_LED_RF_STATE, \ 176 - .rx_with_siso_diversity = true 173 + .rx_with_siso_diversity = true, \ 174 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 177 175 178 176 const struct iwl_cfg iwl105_bgn_cfg = { 179 177 .name = "Intel(R) Centrino(R) Wireless-N 105 BGN", ··· 201 197 .base_params = &iwl2030_base_params, \ 202 198 .eeprom_params = &iwl20x0_eeprom_params, \ 203 199 .led_mode = IWL_LED_RF_STATE, \ 204 - .rx_with_siso_diversity = true 200 + .rx_with_siso_diversity = true, \ 201 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 205 202 206 203 const struct iwl_cfg iwl135_bgn_cfg = { 207 204 .name = "Intel(R) Centrino(R) Wireless-N 135 BGN",
+4 -2
drivers/net/wireless/iwlwifi/iwl-5000.c
··· 93 93 .nvm_calib_ver = EEPROM_5000_TX_POWER_VERSION, \ 94 94 .base_params = &iwl5000_base_params, \ 95 95 .eeprom_params = &iwl5000_eeprom_params, \ 96 - .led_mode = IWL_LED_BLINK 96 + .led_mode = IWL_LED_BLINK, \ 97 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 97 98 98 99 const struct iwl_cfg iwl5300_agn_cfg = { 99 100 .name = "Intel(R) Ultimate N WiFi Link 5300 AGN", ··· 159 158 .base_params = &iwl5000_base_params, \ 160 159 .eeprom_params = &iwl5000_eeprom_params, \ 161 160 .led_mode = IWL_LED_BLINK, \ 162 - .internal_wimax_coex = true 161 + .internal_wimax_coex = true, \ 162 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 163 163 164 164 const struct iwl_cfg iwl5150_agn_cfg = { 165 165 .name = "Intel(R) WiMAX/WiFi Link 5150 AGN",
+12 -6
drivers/net/wireless/iwlwifi/iwl-6000.c
··· 145 145 .nvm_calib_ver = EEPROM_6005_TX_POWER_VERSION, \ 146 146 .base_params = &iwl6000_g2_base_params, \ 147 147 .eeprom_params = &iwl6000_eeprom_params, \ 148 - .led_mode = IWL_LED_RF_STATE 148 + .led_mode = IWL_LED_RF_STATE, \ 149 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 149 150 150 151 const struct iwl_cfg iwl6005_2agn_cfg = { 151 152 .name = "Intel(R) Centrino(R) Advanced-N 6205 AGN", ··· 200 199 .nvm_calib_ver = EEPROM_6030_TX_POWER_VERSION, \ 201 200 .base_params = &iwl6000_g2_base_params, \ 202 201 .eeprom_params = &iwl6000_eeprom_params, \ 203 - .led_mode = IWL_LED_RF_STATE 202 + .led_mode = IWL_LED_RF_STATE, \ 203 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 204 204 205 205 const struct iwl_cfg iwl6030_2agn_cfg = { 206 206 .name = "Intel(R) Centrino(R) Advanced-N 6230 AGN", ··· 237 235 .nvm_calib_ver = EEPROM_6030_TX_POWER_VERSION, \ 238 236 .base_params = &iwl6000_g2_base_params, \ 239 237 .eeprom_params = &iwl6000_eeprom_params, \ 240 - .led_mode = IWL_LED_RF_STATE 238 + .led_mode = IWL_LED_RF_STATE, \ 239 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 241 240 242 241 const struct iwl_cfg iwl6035_2agn_cfg = { 243 242 .name = "Intel(R) Centrino(R) Advanced-N 6235 AGN", ··· 293 290 .nvm_calib_ver = EEPROM_6000_TX_POWER_VERSION, \ 294 291 .base_params = &iwl6000_base_params, \ 295 292 .eeprom_params = &iwl6000_eeprom_params, \ 296 - .led_mode = IWL_LED_BLINK 293 + .led_mode = IWL_LED_BLINK, \ 294 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 297 295 298 296 const struct iwl_cfg iwl6000i_2agn_cfg = { 299 297 .name = "Intel(R) Centrino(R) Advanced-N 6200 AGN", ··· 326 322 .base_params = &iwl6050_base_params, \ 327 323 .eeprom_params = &iwl6000_eeprom_params, \ 328 324 .led_mode = IWL_LED_BLINK, \ 329 - .internal_wimax_coex = true 325 + .internal_wimax_coex = true, \ 326 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 330 327 331 328 const struct iwl_cfg iwl6050_2agn_cfg = { 332 329 .name = "Intel(R) Centrino(R) Advanced-N + WiMAX 6250 AGN", ··· 352 347 .base_params = &iwl6050_base_params, \ 353 348 .eeprom_params = &iwl6000_eeprom_params, \ 354 349 .led_mode = IWL_LED_BLINK, \ 355 - .internal_wimax_coex = true 350 + .internal_wimax_coex = true, \ 351 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 356 352 357 353 const struct iwl_cfg iwl6150_bgn_cfg = { 358 354 .name = "Intel(R) Centrino(R) Wireless-N + WiMAX 6150 BGN",
+1
drivers/net/wireless/iwlwifi/iwl-drv.c
··· 1257 1257 op->name, err); 1258 1258 #endif 1259 1259 } 1260 + kfree(pieces); 1260 1261 return; 1261 1262 1262 1263 try_again:
+2 -1
drivers/net/wireless/iwlwifi/mvm/coex.c
··· 793 793 if (!vif->bss_conf.assoc) 794 794 smps_mode = IEEE80211_SMPS_AUTOMATIC; 795 795 796 - if (IWL_COEX_IS_RRC_ON(mvm->last_bt_notif.ttc_rrc_status, 796 + if (mvmvif->phy_ctxt && 797 + IWL_COEX_IS_RRC_ON(mvm->last_bt_notif.ttc_rrc_status, 797 798 mvmvif->phy_ctxt->id)) 798 799 smps_mode = IEEE80211_SMPS_AUTOMATIC; 799 800
+2 -1
drivers/net/wireless/iwlwifi/mvm/coex_legacy.c
··· 832 832 if (!vif->bss_conf.assoc) 833 833 smps_mode = IEEE80211_SMPS_AUTOMATIC; 834 834 835 - if (data->notif->rrc_enabled & BIT(mvmvif->phy_ctxt->id)) 835 + if (mvmvif->phy_ctxt && 836 + data->notif->rrc_enabled & BIT(mvmvif->phy_ctxt->id)) 836 837 smps_mode = IEEE80211_SMPS_AUTOMATIC; 837 838 838 839 IWL_DEBUG_COEX(data->mvm,
+35 -3
drivers/net/wireless/iwlwifi/mvm/mac80211.c
··· 405 405 hw->wiphy->bands[IEEE80211_BAND_5GHZ] = 406 406 &mvm->nvm_data->bands[IEEE80211_BAND_5GHZ]; 407 407 408 - if (mvm->fw->ucode_capa.capa[0] & IWL_UCODE_TLV_CAPA_BEAMFORMER) 408 + if ((mvm->fw->ucode_capa.capa[0] & 409 + IWL_UCODE_TLV_CAPA_BEAMFORMER) && 410 + (mvm->fw->ucode_capa.api[0] & 411 + IWL_UCODE_TLV_API_LQ_SS_PARAMS)) 409 412 hw->wiphy->bands[IEEE80211_BAND_5GHZ]->vht_cap.cap |= 410 413 IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE; 411 414 } ··· 2218 2215 2219 2216 mutex_lock(&mvm->mutex); 2220 2217 2221 - iwl_mvm_cancel_scan(mvm); 2218 + /* Due to a race condition, it's possible that mac80211 asks 2219 + * us to stop a hw_scan when it's already stopped. This can 2220 + * happen, for instance, if we stopped the scan ourselves, 2221 + * called ieee80211_scan_completed() and the userspace called 2222 + * cancel scan before ieee80211_scan_work() could run. 2223 + * To handle that, simply return if the scan is not running. 2224 + */ 2225 + /* FIXME: for now, we ignore this race for UMAC scans, since 2226 + * they don't set the scan_status. 2227 + */ 2228 + if ((mvm->scan_status == IWL_MVM_SCAN_OS) || 2229 + (mvm->fw->ucode_capa.capa[0] & IWL_UCODE_TLV_CAPA_UMAC_SCAN)) 2230 + iwl_mvm_cancel_scan(mvm); 2222 2231 2223 2232 mutex_unlock(&mvm->mutex); 2224 2233 } ··· 2574 2559 int ret; 2575 2560 2576 2561 mutex_lock(&mvm->mutex); 2562 + 2563 + /* Due to a race condition, it's possible that mac80211 asks 2564 + * us to stop a sched_scan when it's already stopped. This 2565 + * can happen, for instance, if we stopped the scan ourselves, 2566 + * called ieee80211_sched_scan_stopped() and the userspace called 2567 + * stop sched scan before ieee80211_sched_scan_stopped_work() 2568 + * could run. To handle this, simply return if the scan is 2569 + * not running. 2570 + */ 2571 + /* FIXME: for now, we ignore this race for UMAC scans, since 2572 + * they don't set the scan_status. 2573 + */ 2574 + if (mvm->scan_status != IWL_MVM_SCAN_SCHED && 2575 + !(mvm->fw->ucode_capa.capa[0] & IWL_UCODE_TLV_CAPA_UMAC_SCAN)) { 2576 + mutex_unlock(&mvm->mutex); 2577 + return 0; 2578 + } 2579 + 2577 2580 ret = iwl_mvm_scan_offload_stop(mvm, false); 2578 2581 mutex_unlock(&mvm->mutex); 2579 2582 iwl_mvm_wait_for_async_handlers(mvm); 2580 2583 2581 2584 return ret; 2582 - 2583 2585 } 2584 2586 2585 2587 static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
+36 -9
drivers/net/wireless/iwlwifi/mvm/rs.c
··· 134 134 #define MAX_NEXT_COLUMNS 7 135 135 #define MAX_COLUMN_CHECKS 3 136 136 137 + struct rs_tx_column; 138 + 137 139 typedef bool (*allow_column_func_t) (struct iwl_mvm *mvm, 138 140 struct ieee80211_sta *sta, 139 - struct iwl_scale_tbl_info *tbl); 141 + struct iwl_scale_tbl_info *tbl, 142 + const struct rs_tx_column *next_col); 140 143 141 144 struct rs_tx_column { 142 145 enum rs_column_mode mode; ··· 150 147 }; 151 148 152 149 static bool rs_ant_allow(struct iwl_mvm *mvm, struct ieee80211_sta *sta, 153 - struct iwl_scale_tbl_info *tbl) 150 + struct iwl_scale_tbl_info *tbl, 151 + const struct rs_tx_column *next_col) 154 152 { 155 - return iwl_mvm_bt_coex_is_ant_avail(mvm, tbl->rate.ant); 153 + return iwl_mvm_bt_coex_is_ant_avail(mvm, next_col->ant); 156 154 } 157 155 158 156 static bool rs_mimo_allow(struct iwl_mvm *mvm, struct ieee80211_sta *sta, 159 - struct iwl_scale_tbl_info *tbl) 157 + struct iwl_scale_tbl_info *tbl, 158 + const struct rs_tx_column *next_col) 160 159 { 161 160 if (!sta->ht_cap.ht_supported) 162 161 return false; ··· 176 171 } 177 172 178 173 static bool rs_siso_allow(struct iwl_mvm *mvm, struct ieee80211_sta *sta, 179 - struct iwl_scale_tbl_info *tbl) 174 + struct iwl_scale_tbl_info *tbl, 175 + const struct rs_tx_column *next_col) 180 176 { 181 177 if (!sta->ht_cap.ht_supported) 182 178 return false; ··· 186 180 } 187 181 188 182 static bool rs_sgi_allow(struct iwl_mvm *mvm, struct ieee80211_sta *sta, 189 - struct iwl_scale_tbl_info *tbl) 183 + struct iwl_scale_tbl_info *tbl, 184 + const struct rs_tx_column *next_col) 190 185 { 191 186 struct rs_rate *rate = &tbl->rate; 192 187 struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap; ··· 1278 1271 struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); 1279 1272 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 1280 1273 1274 + if (!iwl_mvm_sta_from_mac80211(sta)->vif) 1275 + return; 1276 + 1281 1277 if (!ieee80211_is_data(hdr->frame_control) || 1282 1278 info->flags & IEEE80211_TX_CTL_NO_ACK) 1283 1279 return; ··· 1600 1590 1601 1591 for (j = 0; j < MAX_COLUMN_CHECKS; j++) { 1602 1592 allow_func = next_col->checks[j]; 1603 - if (allow_func && !allow_func(mvm, sta, tbl)) 1593 + if (allow_func && !allow_func(mvm, sta, tbl, next_col)) 1604 1594 break; 1605 1595 } 1606 1596 ··· 2514 2504 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 2515 2505 struct iwl_lq_sta *lq_sta = mvm_sta; 2516 2506 2507 + if (sta && !iwl_mvm_sta_from_mac80211(sta)->vif) { 2508 + /* if vif isn't initialized mvm doesn't know about 2509 + * this station, so don't do anything with it 2510 + */ 2511 + sta = NULL; 2512 + mvm_sta = NULL; 2513 + } 2514 + 2517 2515 /* TODO: handle rate_idx_mask and rate_idx_mcs_mask */ 2518 2516 2519 2517 /* Treat uninitialized rate scaling data same as non-existing. */ ··· 2837 2819 struct iwl_op_mode *op_mode = 2838 2820 (struct iwl_op_mode *)mvm_r; 2839 2821 struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); 2822 + 2823 + if (!iwl_mvm_sta_from_mac80211(sta)->vif) 2824 + return; 2840 2825 2841 2826 /* Stop any ongoing aggregations as rs starts off assuming no agg */ 2842 2827 for (tid = 0; tid < IWL_MAX_TID_COUNT; tid++) ··· 3601 3580 3602 3581 MVM_DEBUGFS_READ_WRITE_FILE_OPS(ss_force, 32); 3603 3582 3604 - static void rs_add_debugfs(void *mvm, void *mvm_sta, struct dentry *dir) 3583 + static void rs_add_debugfs(void *mvm, void *priv_sta, struct dentry *dir) 3605 3584 { 3606 - struct iwl_lq_sta *lq_sta = mvm_sta; 3585 + struct iwl_lq_sta *lq_sta = priv_sta; 3586 + struct iwl_mvm_sta *mvmsta; 3587 + 3588 + mvmsta = container_of(lq_sta, struct iwl_mvm_sta, lq_sta); 3589 + 3590 + if (!mvmsta->vif) 3591 + return; 3607 3592 3608 3593 debugfs_create_file("rate_scale_table", S_IRUSR | S_IWUSR, dir, 3609 3594 lq_sta, &rs_sta_dbgfs_scale_table_ops);
+6 -7
drivers/net/wireless/iwlwifi/mvm/scan.c
··· 1128 1128 if (mvm->scan_status == IWL_MVM_SCAN_NONE) 1129 1129 return 0; 1130 1130 1131 - if (iwl_mvm_is_radio_killed(mvm)) 1131 + if (iwl_mvm_is_radio_killed(mvm)) { 1132 + ret = 0; 1132 1133 goto out; 1134 + } 1133 1135 1134 1136 if (mvm->scan_status != IWL_MVM_SCAN_SCHED && 1135 1137 (!(mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_LMAC_SCAN) || ··· 1150 1148 IWL_DEBUG_SCAN(mvm, "Send stop %sscan failed %d\n", 1151 1149 sched ? "offloaded " : "", ret); 1152 1150 iwl_remove_notification(&mvm->notif_wait, &wait_scan_done); 1153 - return ret; 1151 + goto out; 1154 1152 } 1155 1153 1156 1154 IWL_DEBUG_SCAN(mvm, "Successfully sent stop %sscan\n", 1157 1155 sched ? "offloaded " : ""); 1158 1156 1159 1157 ret = iwl_wait_notification(&mvm->notif_wait, &wait_scan_done, 1 * HZ); 1160 - if (ret) 1161 - return ret; 1162 - 1158 + out: 1163 1159 /* 1164 1160 * Clear the scan status so the next scan requests will succeed. This 1165 1161 * also ensures the Rx handler doesn't do anything, as the scan was ··· 1167 1167 if (mvm->scan_status == IWL_MVM_SCAN_OS) 1168 1168 iwl_mvm_unref(mvm, IWL_MVM_REF_SCAN); 1169 1169 1170 - out: 1171 1170 mvm->scan_status = IWL_MVM_SCAN_NONE; 1172 1171 1173 1172 if (notify) { ··· 1176 1177 ieee80211_scan_completed(mvm->hw, true); 1177 1178 } 1178 1179 1179 - return 0; 1180 + return ret; 1180 1181 } 1181 1182 1182 1183 static void iwl_mvm_unified_scan_fill_tx_cmd(struct iwl_mvm *mvm,
+5 -6
drivers/net/wireless/iwlwifi/mvm/time-event.c
··· 197 197 struct iwl_time_event_notif *notif) 198 198 { 199 199 if (!le32_to_cpu(notif->status)) { 200 + if (te_data->vif->type == NL80211_IFTYPE_STATION) 201 + ieee80211_connection_loss(te_data->vif); 200 202 IWL_DEBUG_TE(mvm, "CSA time event failed to start\n"); 201 203 iwl_mvm_te_clear_data(mvm, te_data); 202 204 return; ··· 752 750 * request 753 751 */ 754 752 list_for_each_entry(te_data, &mvm->time_event_list, list) { 755 - if (te_data->vif->type == NL80211_IFTYPE_P2P_DEVICE && 756 - te_data->running) { 753 + if (te_data->vif->type == NL80211_IFTYPE_P2P_DEVICE) { 757 754 mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif); 758 755 is_p2p = true; 759 756 goto remove_te; ··· 767 766 * request 768 767 */ 769 768 list_for_each_entry(te_data, &mvm->aux_roc_te_list, list) { 770 - if (te_data->running) { 771 - mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif); 772 - goto remove_te; 773 - } 769 + mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif); 770 + goto remove_te; 774 771 } 775 772 776 773 remove_te:
+4 -2
drivers/net/wireless/iwlwifi/mvm/tx.c
··· 949 949 mvmsta = iwl_mvm_sta_from_mac80211(sta); 950 950 tid_data = &mvmsta->tid_data[tid]; 951 951 952 - if (WARN_ONCE(tid_data->txq_id != scd_flow, "Q %d, tid %d, flow %d", 953 - tid_data->txq_id, tid, scd_flow)) { 952 + if (tid_data->txq_id != scd_flow) { 953 + IWL_ERR(mvm, 954 + "invalid BA notification: Q %d, tid %d, flow %d\n", 955 + tid_data->txq_id, tid, scd_flow); 954 956 rcu_read_unlock(); 955 957 return 0; 956 958 }
+4 -2
drivers/net/wireless/iwlwifi/pcie/drv.c
··· 368 368 /* 3165 Series */ 369 369 {IWL_PCI_DEVICE(0x3165, 0x4010, iwl3165_2ac_cfg)}, 370 370 {IWL_PCI_DEVICE(0x3165, 0x4012, iwl3165_2ac_cfg)}, 371 - {IWL_PCI_DEVICE(0x3165, 0x4110, iwl3165_2ac_cfg)}, 372 - {IWL_PCI_DEVICE(0x3165, 0x4210, iwl3165_2ac_cfg)}, 373 371 {IWL_PCI_DEVICE(0x3165, 0x4410, iwl3165_2ac_cfg)}, 374 372 {IWL_PCI_DEVICE(0x3165, 0x4510, iwl3165_2ac_cfg)}, 373 + {IWL_PCI_DEVICE(0x3165, 0x4110, iwl3165_2ac_cfg)}, 374 + {IWL_PCI_DEVICE(0x3166, 0x4310, iwl3165_2ac_cfg)}, 375 + {IWL_PCI_DEVICE(0x3166, 0x4210, iwl3165_2ac_cfg)}, 376 + {IWL_PCI_DEVICE(0x3165, 0x8010, iwl3165_2ac_cfg)}, 375 377 376 378 /* 7265 Series */ 377 379 {IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)},
+4 -1
drivers/net/wireless/mac80211_hwsim.c
··· 946 946 goto nla_put_failure; 947 947 948 948 genlmsg_end(skb, msg_head); 949 - genlmsg_unicast(&init_net, skb, dst_portid); 949 + if (genlmsg_unicast(&init_net, skb, dst_portid)) 950 + goto err_free_txskb; 950 951 951 952 /* Enqueue the packet */ 952 953 skb_queue_tail(&data->pending, my_skb); ··· 956 955 return; 957 956 958 957 nla_put_failure: 958 + nlmsg_free(skb); 959 + err_free_txskb: 959 960 printk(KERN_DEBUG "mac80211_hwsim: error occurred in %s\n", __func__); 960 961 ieee80211_free_txskb(hw, my_skb); 961 962 data->tx_failed++;
+5 -2
drivers/net/wireless/rtlwifi/base.c
··· 1386 1386 } 1387 1387 1388 1388 return true; 1389 - } else if (0x86DD == ether_type) { 1390 - return true; 1389 + } else if (ETH_P_IPV6 == ether_type) { 1390 + /* TODO: Handle any IPv6 cases that need special handling. 1391 + * For now, always return false 1392 + */ 1393 + goto end; 1391 1394 } 1392 1395 1393 1396 end:
+11 -1
drivers/net/wireless/rtlwifi/pci.c
··· 1124 1124 /*This is for new trx flow*/ 1125 1125 struct rtl_tx_buffer_desc *pbuffer_desc = NULL; 1126 1126 u8 temp_one = 1; 1127 + u8 *entry; 1127 1128 1128 1129 memset(&tcb_desc, 0, sizeof(struct rtl_tcb_desc)); 1129 1130 ring = &rtlpci->tx_ring[BEACON_QUEUE]; 1130 1131 pskb = __skb_dequeue(&ring->queue); 1131 - if (pskb) 1132 + if (rtlpriv->use_new_trx_flow) 1133 + entry = (u8 *)(&ring->buffer_desc[ring->idx]); 1134 + else 1135 + entry = (u8 *)(&ring->desc[ring->idx]); 1136 + if (pskb) { 1137 + pci_unmap_single(rtlpci->pdev, 1138 + rtlpriv->cfg->ops->get_desc( 1139 + (u8 *)entry, true, HW_DESC_TXBUFF_ADDR), 1140 + pskb->len, PCI_DMA_TODEVICE); 1132 1141 kfree_skb(pskb); 1142 + } 1133 1143 1134 1144 /*NB: the beacon data buffer must be 32-bit aligned. */ 1135 1145 pskb = ieee80211_beacon_get(hw, mac->vif);
+1 -2
drivers/net/xen-netback/interface.c
··· 340 340 unsigned int num_queues = vif->num_queues; 341 341 int i; 342 342 unsigned int queue_index; 343 - struct xenvif_stats *vif_stats; 344 343 345 344 for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++) { 346 345 unsigned long accum = 0; 347 346 for (queue_index = 0; queue_index < num_queues; ++queue_index) { 348 - vif_stats = &vif->queues[queue_index].stats; 347 + void *vif_stats = &vif->queues[queue_index].stats; 349 348 accum += *(unsigned long *)(vif_stats + xenvif_stats[i].offset); 350 349 } 351 350 data[i] = accum;
+31 -15
drivers/net/xen-netback/netback.c
··· 96 96 static void make_tx_response(struct xenvif_queue *queue, 97 97 struct xen_netif_tx_request *txp, 98 98 s8 st); 99 + static void push_tx_responses(struct xenvif_queue *queue); 99 100 100 101 static inline int tx_work_todo(struct xenvif_queue *queue); 101 102 ··· 658 657 do { 659 658 spin_lock_irqsave(&queue->response_lock, flags); 660 659 make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR); 660 + push_tx_responses(queue); 661 661 spin_unlock_irqrestore(&queue->response_lock, flags); 662 662 if (cons == end) 663 663 break; ··· 1345 1343 { 1346 1344 unsigned int offset = skb_headlen(skb); 1347 1345 skb_frag_t frags[MAX_SKB_FRAGS]; 1348 - int i; 1346 + int i, f; 1349 1347 struct ubuf_info *uarg; 1350 1348 struct sk_buff *nskb = skb_shinfo(skb)->frag_list; 1351 1349 ··· 1385 1383 frags[i].page_offset = 0; 1386 1384 skb_frag_size_set(&frags[i], len); 1387 1385 } 1388 - /* swap out with old one */ 1389 - memcpy(skb_shinfo(skb)->frags, 1390 - frags, 1391 - i * sizeof(skb_frag_t)); 1392 - skb_shinfo(skb)->nr_frags = i; 1393 - skb->truesize += i * PAGE_SIZE; 1394 1386 1395 - /* remove traces of mapped pages and frag_list */ 1387 + /* Copied all the bits from the frag list -- free it. */ 1396 1388 skb_frag_list_init(skb); 1389 + xenvif_skb_zerocopy_prepare(queue, nskb); 1390 + kfree_skb(nskb); 1391 + 1392 + /* Release all the original (foreign) frags. */ 1393 + for (f = 0; f < skb_shinfo(skb)->nr_frags; f++) 1394 + skb_frag_unref(skb, f); 1397 1395 uarg = skb_shinfo(skb)->destructor_arg; 1398 1396 /* increase inflight counter to offset decrement in callback */ 1399 1397 atomic_inc(&queue->inflight_packets); 1400 1398 uarg->callback(uarg, true); 1401 1399 skb_shinfo(skb)->destructor_arg = NULL; 1402 1400 1403 - xenvif_skb_zerocopy_prepare(queue, nskb); 1404 - kfree_skb(nskb); 1401 + /* Fill the skb with the new (local) frags. */ 1402 + memcpy(skb_shinfo(skb)->frags, frags, i * sizeof(skb_frag_t)); 1403 + skb_shinfo(skb)->nr_frags = i; 1404 + skb->truesize += i * PAGE_SIZE; 1405 1405 1406 1406 return 0; 1407 1407 } ··· 1656 1652 unsigned long flags; 1657 1653 1658 1654 pending_tx_info = &queue->pending_tx_info[pending_idx]; 1655 + 1659 1656 spin_lock_irqsave(&queue->response_lock, flags); 1657 + 1660 1658 make_tx_response(queue, &pending_tx_info->req, status); 1661 - index = pending_index(queue->pending_prod); 1659 + 1660 + /* Release the pending index before pushing the Tx response so 1661 + * it's available before a new Tx request is pushed by the 1662 + * frontend. 1663 + */ 1664 + index = pending_index(queue->pending_prod++); 1662 1665 queue->pending_ring[index] = pending_idx; 1663 - /* TX shouldn't use the index before we give it back here */ 1664 - mb(); 1665 - queue->pending_prod++; 1666 + 1667 + push_tx_responses(queue); 1668 + 1666 1669 spin_unlock_irqrestore(&queue->response_lock, flags); 1667 1670 } ··· 1680 1669 { 1681 1670 RING_IDX i = queue->tx.rsp_prod_pvt; 1682 1671 struct xen_netif_tx_response *resp; 1683 - int notify; 1684 1672 1685 1673 resp = RING_GET_RESPONSE(&queue->tx, i); 1686 1674 resp->id = txp->id; ··· 1689 1679 RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL; 1690 1680 1691 1681 queue->tx.rsp_prod_pvt = ++i; 1682 + } 1683 + 1684 + static void push_tx_responses(struct xenvif_queue *queue) 1685 + { 1686 + int notify; 1687 + 1692 1688 RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify); 1693 1689 if (notify) 1694 1690 notify_remote_via_irq(queue->tx_irq);
+1 -4
drivers/net/xen-netfront.c
··· 1008 1008 1009 1009 static int xennet_change_mtu(struct net_device *dev, int mtu) 1010 1010 { 1011 - int max = xennet_can_sg(dev) ? 1012 - XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER : ETH_DATA_LEN; 1011 + int max = xennet_can_sg(dev) ? XEN_NETIF_MAX_TX_SIZE : ETH_DATA_LEN; 1013 1012 1014 1013 if (mtu > max) 1015 1014 return -EINVAL; ··· 1277 1278 1278 1279 netdev->ethtool_ops = &xennet_ethtool_ops; 1279 1280 SET_NETDEV_DEV(netdev, &dev->dev); 1280 - 1281 - netif_set_gso_max_size(netdev, XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER); 1282 1281 1283 1282 np->netdev = netdev; 1284 1283
+1 -2
drivers/of/Kconfig
··· 84 84 bool 85 85 86 86 config OF_OVERLAY 87 - bool 88 - depends on OF 87 + bool "Device Tree overlays" 89 88 select OF_DYNAMIC 90 89 select OF_RESOLVE 91 90
+8 -3
drivers/of/address.c
··· 450 450 return NULL; 451 451 } 452 452 453 - static int of_empty_ranges_quirk(void) 453 + static int of_empty_ranges_quirk(struct device_node *np) 454 454 { 455 455 if (IS_ENABLED(CONFIG_PPC)) { 456 - /* To save cycles, we cache the result */ 456 + /* To save cycles, we cache the result for global "Mac" setting */ 457 457 static int quirk_state = -1; 458 458 459 + /* PA-SEMI sdc DT bug */ 460 + if (of_device_is_compatible(np, "1682m-sdc")) 461 + return true; 462 + 463 + /* Make quirk cached */ 459 464 if (quirk_state < 0) 460 465 quirk_state = 461 466 of_machine_is_compatible("Power Macintosh") || ··· 495 490 * This code is only enabled on powerpc. --gcl 496 491 */ 497 492 ranges = of_get_property(parent, rprop, &rlen); 498 - if (ranges == NULL && !of_empty_ranges_quirk()) { 493 + if (ranges == NULL && !of_empty_ranges_quirk(parent)) { 499 494 pr_debug("OF: no ranges; cannot translate\n"); 500 495 return 1; 501 496 }
+10 -8
drivers/of/base.c
··· 714 714 const char *path) 715 715 { 716 716 struct device_node *child; 717 - int len = strchrnul(path, '/') - path; 718 - int term; 717 + int len; 719 718 719 + len = strcspn(path, "/:"); 720 720 if (!len) 721 721 return NULL; 722 - 723 - term = strchrnul(path, ':') - path; 724 - if (term < len) 725 - len = term; 726 722 727 723 __for_each_child_of_node(parent, child) { 728 724 const char *name = strrchr(child->full_name, '/'); ··· 764 768 765 769 /* The path could begin with an alias */ 766 770 if (*path != '/') { 767 - char *p = strchrnul(path, '/'); 768 - int len = separator ? separator - path : p - path; 771 + int len; 772 + const char *p = separator; 773 + 774 + if (!p) 775 + p = strchrnul(path, '/'); 776 + len = p - path; 769 777 770 778 /* of_aliases must not be NULL */ 771 779 if (!of_aliases) ··· 794 794 path++; /* Increment past '/' delimiter */ 795 795 np = __of_find_node_by_path(np, path); 796 796 path = strchrnul(path, '/'); 797 + if (separator && separator < path) 798 + break; 797 799 } 798 800 raw_spin_unlock_irqrestore(&devtree_lock, flags); 799 801 return np;
+7 -3
drivers/of/irq.c
··· 290 290 struct device_node *p; 291 291 const __be32 *intspec, *tmp, *addr; 292 292 u32 intsize, intlen; 293 - int i, res = -EINVAL; 293 + int i, res; 294 294 295 295 pr_debug("of_irq_parse_one: dev=%s, index=%d\n", of_node_full_name(device), index); 296 296 ··· 323 323 324 324 /* Get size of interrupt specifier */ 325 325 tmp = of_get_property(p, "#interrupt-cells", NULL); 326 - if (tmp == NULL) 326 + if (tmp == NULL) { 327 + res = -EINVAL; 327 328 goto out; 329 + } 328 330 intsize = be32_to_cpu(*tmp); 329 331 330 332 pr_debug(" intsize=%d intlen=%d\n", intsize, intlen); 331 333 332 334 /* Check index */ 333 - if ((index + 1) * intsize > intlen) 335 + if ((index + 1) * intsize > intlen) { 336 + res = -EINVAL; 334 337 goto out; 338 + } 335 339 336 340 /* Copy intspec into irq structure */ 337 341 intspec += index * intsize;
+2 -1
drivers/of/overlay.c
··· 19 19 #include <linux/string.h> 20 20 #include <linux/slab.h> 21 21 #include <linux/err.h> 22 + #include <linux/idr.h> 22 23 23 24 #include "of_private.h" 24 25 ··· 86 85 struct device_node *target, struct device_node *child) 87 86 { 88 87 const char *cname; 89 - struct device_node *tchild, *grandchild; 88 + struct device_node *tchild; 90 89 int ret = 0; 91 90 92 91 cname = kbasename(child->full_name);
+24 -9
drivers/of/unittest.c
··· 92 92 "option path test failed\n"); 93 93 of_node_put(np); 94 94 95 + np = of_find_node_opts_by_path("/testcase-data:test/option", &options); 96 + selftest(np && !strcmp("test/option", options), 97 + "option path test, subcase #1 failed\n"); 98 + of_node_put(np); 99 + 100 + np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options); 101 + selftest(np && !strcmp("test/option", options), 102 + "option path test, subcase #2 failed\n"); 103 + of_node_put(np); 104 + 95 105 np = of_find_node_opts_by_path("/testcase-data:testoption", NULL); 96 106 selftest(np, "NULL option path test failed\n"); 97 107 of_node_put(np); ··· 110 100 &options); 111 101 selftest(np && !strcmp("testaliasoption", options), 112 102 "option alias path test failed\n"); 103 + of_node_put(np); 104 + 105 + np = of_find_node_opts_by_path("testcase-alias:test/alias/option", 106 + &options); 107 + selftest(np && !strcmp("test/alias/option", options), 108 + "option alias path test, subcase #1 failed\n"); 113 109 of_node_put(np); 114 110 115 111 np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL); ··· 394 378 rc = of_property_match_string(np, "phandle-list-names", "first"); 395 379 selftest(rc == 0, "first expected:0 got:%i\n", rc); 396 380 rc = of_property_match_string(np, "phandle-list-names", "second"); 397 - selftest(rc == 1, "second expected:0 got:%i\n", rc); 381 + selftest(rc == 1, "second expected:1 got:%i\n", rc); 398 382 rc = of_property_match_string(np, "phandle-list-names", "third"); 399 - selftest(rc == 2, "third expected:0 got:%i\n", rc); 383 + selftest(rc == 2, "third expected:2 got:%i\n", rc); 400 384 rc = of_property_match_string(np, "phandle-list-names", "fourth"); 401 385 selftest(rc == -ENODATA, "unmatched string; rc=%i\n", rc); 402 386 rc = of_property_match_string(np, "missing-property", "blah"); ··· 494 478 struct device_node *n1, *n2, *n21, *nremove, *parent, *np; 495 479 struct of_changeset chgset; 496 480 497 - 
of_changeset_init(&chgset); 498 481 n1 = __of_node_dup(NULL, "/testcase-data/changeset/n1"); 499 482 selftest(n1, "testcase setup failure\n"); 500 483 n2 = __of_node_dup(NULL, "/testcase-data/changeset/n2"); ··· 994 979 return pdev != NULL; 995 980 } 996 981 997 - #if IS_ENABLED(CONFIG_I2C) 982 + #if IS_BUILTIN(CONFIG_I2C) 998 983 999 984 /* get the i2c client device instantiated at the path */ 1000 985 static struct i2c_client *of_path_to_i2c_client(const char *path) ··· 1460 1445 return; 1461 1446 } 1462 1447 1463 - #if IS_ENABLED(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY) 1448 + #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY) 1464 1449 1465 1450 struct selftest_i2c_bus_data { 1466 1451 struct platform_device *pdev; ··· 1599 1584 .id_table = selftest_i2c_dev_id, 1600 1585 }; 1601 1586 1602 - #if IS_ENABLED(CONFIG_I2C_MUX) 1587 + #if IS_BUILTIN(CONFIG_I2C_MUX) 1603 1588 1604 1589 struct selftest_i2c_mux_data { 1605 1590 int nchans; ··· 1710 1695 "could not register selftest i2c bus driver\n")) 1711 1696 return ret; 1712 1697 1713 - #if IS_ENABLED(CONFIG_I2C_MUX) 1698 + #if IS_BUILTIN(CONFIG_I2C_MUX) 1714 1699 ret = i2c_add_driver(&selftest_i2c_mux_driver); 1715 1700 if (selftest(ret == 0, 1716 1701 "could not register selftest i2c mux driver\n")) ··· 1722 1707 1723 1708 static void of_selftest_overlay_i2c_cleanup(void) 1724 1709 { 1725 - #if IS_ENABLED(CONFIG_I2C_MUX) 1710 + #if IS_BUILTIN(CONFIG_I2C_MUX) 1726 1711 i2c_del_driver(&selftest_i2c_mux_driver); 1727 1712 #endif 1728 1713 platform_driver_unregister(&selftest_i2c_bus_driver); ··· 1829 1814 of_selftest_overlay_10(); 1830 1815 of_selftest_overlay_11(); 1831 1816 1832 - #if IS_ENABLED(CONFIG_I2C) 1817 + #if IS_BUILTIN(CONFIG_I2C) 1833 1818 if (selftest(of_selftest_overlay_i2c_init() == 0, "i2c init failed\n")) 1834 1819 goto out; 1835 1820
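The unittest hunk above narrows several `IS_ENABLED(CONFIG_I2C*)` guards to `IS_BUILTIN(...)`: a built-in selftest cannot call i2c symbols that only exist when i2c is built as a module. A condensed userspace sketch of the kconfig macro trick behind those tests (adapted from the idiom in include/linux/kconfig.h, not a verbatim copy; `CONFIG_I2C_MODULE` is defined locally here purely for demonstration):

```c
/* CONFIG_FOO is defined to 1 when builtin, CONFIG_FOO_MODULE to 1 when
 * built as a module.  The placeholder trick turns "defined to 1" into
 * a usable 0/1 expression without #ifdef. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(ignored, val, ...) val
#define __is_defined(x) ___is_defined(x)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)

#define IS_BUILTIN(option) __is_defined(option)
#define IS_MODULE(option)  __is_defined(option##_MODULE)
#define IS_ENABLED(option) (IS_BUILTIN(option) || IS_MODULE(option))

/* Pretend CONFIG_I2C=m, i.e. only the _MODULE symbol is defined: */
#define CONFIG_I2C_MODULE 1

int i2c_is_enabled(void) { return IS_ENABLED(CONFIG_I2C); }
int i2c_is_builtin(void) { return IS_BUILTIN(CONFIG_I2C); }
```

With CONFIG_I2C=m, `IS_ENABLED` is true but `IS_BUILTIN` is false, which is exactly the case the hunk stops compiling the i2c selftests for.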
+1 -1
drivers/pci/host/pci-versatile.c
··· 80 80 if (err) 81 81 return err; 82 82 83 - resource_list_for_each_entry(win, res, list) { 83 + resource_list_for_each_entry(win, res) { 84 84 struct resource *parent, *res = win->res; 85 85 86 86 switch (resource_type(res)) {
+2 -2
drivers/pci/host/pci-xgene.c
··· 127 127 return false; 128 128 } 129 129 130 - static int xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, 130 + static void __iomem *xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, 131 131 int offset) 132 132 { 133 133 struct xgene_pcie_port *port = bus->sysdata; ··· 137 137 return NULL; 138 138 139 139 xgene_pcie_set_rtdid_reg(bus, devfn); 140 - return xgene_pcie_get_cfg_base(bus); 140 + return xgene_pcie_get_cfg_base(bus) + offset; 141 141 } 142 142 143 143 static struct pci_ops xgene_pcie_ops = {
+3 -2
drivers/pci/pci-sysfs.c
··· 521 521 struct pci_dev *pdev = to_pci_dev(dev); 522 522 char *driver_override, *old = pdev->driver_override, *cp; 523 523 524 - if (count > PATH_MAX) 524 + /* We need to keep extra room for a newline */ 525 + if (count >= (PAGE_SIZE - 1)) 525 526 return -EINVAL; 526 527 527 528 driver_override = kstrndup(buf, count, GFP_KERNEL); ··· 550 549 { 551 550 struct pci_dev *pdev = to_pci_dev(dev); 552 551 553 - return sprintf(buf, "%s\n", pdev->driver_override); 552 + return snprintf(buf, PAGE_SIZE, "%s\n", pdev->driver_override); 554 553 } 555 554 static DEVICE_ATTR_RW(driver_override); 556 555
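The pci-sysfs hunk bounds both directions of the `driver_override` attribute: the store now rejects writes that would not leave room within one page for the trailing newline that show() appends, and show() uses snprintf() rather than sprintf(). A userspace sketch of that contract (`SYSFS_PAGE_SIZE` and the helper names are stand-ins, not kernel API):

```c
#include <stdio.h>

#define SYSFS_PAGE_SIZE 4096	/* stand-in for the kernel's PAGE_SIZE */

static char show_buf[SYSFS_PAGE_SIZE];

/* Store side: keep extra room for the newline that show() appends. */
int override_store_ok(size_t count)
{
	return count < SYSFS_PAGE_SIZE - 1;
}

/* Show side: bounded formatting; returns the formatted length. */
int override_show_len(const char *value)
{
	return snprintf(show_buf, sizeof(show_buf), "%s\n", value);
}
```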
+3 -9
drivers/pcmcia/Kconfig
··· 69 69 tristate "CardBus yenta-compatible bridge support" 70 70 depends on PCI 71 71 select CARDBUS if !EXPERT 72 - select PCCARD_NONSTATIC if PCMCIA != n && ISA 73 - select PCCARD_PCI if PCMCIA !=n && !ISA 72 + select PCCARD_NONSTATIC if PCMCIA != n 74 73 ---help--- 75 74 This option enables support for CardBus host bridges. Virtually 76 75 all modern PCMCIA bridges are CardBus compatible. A "bridge" is ··· 109 110 config PD6729 110 111 tristate "Cirrus PD6729 compatible bridge support" 111 112 depends on PCMCIA && PCI 112 - select PCCARD_NONSTATIC if PCMCIA != n && ISA 113 - select PCCARD_PCI if PCMCIA !=n && !ISA 113 + select PCCARD_NONSTATIC 114 114 help 115 115 This provides support for the Cirrus PD6729 PCI-to-PCMCIA bridge 116 116 device, found in some older laptops and PCMCIA card readers. ··· 117 119 config I82092 118 120 tristate "i82092 compatible bridge support" 119 121 depends on PCMCIA && PCI 120 - select PCCARD_NONSTATIC if PCMCIA != n && ISA 121 - select PCCARD_PCI if PCMCIA !=n && !ISA 122 + select PCCARD_NONSTATIC 122 123 help 123 124 This provides support for the Intel I82092AA PCI-to-PCMCIA bridge device, 124 125 found in some older laptops and more commonly in evaluation boards for the ··· 287 290 help 288 291 Say Y here to support the CompactFlash controller on the 289 292 PA Semi Electra eval board. 290 - 291 - config PCCARD_PCI 292 - bool 293 293 294 294 config PCCARD_NONSTATIC 295 295 bool
-1
drivers/pcmcia/Makefile
··· 12 12 pcmcia_rsrc-y += rsrc_mgr.o 13 13 pcmcia_rsrc-$(CONFIG_PCCARD_NONSTATIC) += rsrc_nonstatic.o 14 14 pcmcia_rsrc-$(CONFIG_PCCARD_IODYN) += rsrc_iodyn.o 15 - pcmcia_rsrc-$(CONFIG_PCCARD_PCI) += rsrc_pci.o 16 15 obj-$(CONFIG_PCCARD) += pcmcia_rsrc.o 17 16 18 17
-173
drivers/pcmcia/rsrc_pci.c
··· 1 - #include <linux/slab.h> 2 - #include <linux/module.h> 3 - #include <linux/kernel.h> 4 - #include <linux/pci.h> 5 - 6 - #include <pcmcia/ss.h> 7 - #include <pcmcia/cistpl.h> 8 - #include "cs_internal.h" 9 - 10 - 11 - struct pcmcia_align_data { 12 - unsigned long mask; 13 - unsigned long offset; 14 - }; 15 - 16 - static resource_size_t pcmcia_align(void *align_data, 17 - const struct resource *res, 18 - resource_size_t size, resource_size_t align) 19 - { 20 - struct pcmcia_align_data *data = align_data; 21 - resource_size_t start; 22 - 23 - start = (res->start & ~data->mask) + data->offset; 24 - if (start < res->start) 25 - start += data->mask + 1; 26 - return start; 27 - } 28 - 29 - static struct resource *find_io_region(struct pcmcia_socket *s, 30 - unsigned long base, int num, 31 - unsigned long align) 32 - { 33 - struct resource *res = pcmcia_make_resource(0, num, IORESOURCE_IO, 34 - dev_name(&s->dev)); 35 - struct pcmcia_align_data data; 36 - int ret; 37 - 38 - data.mask = align - 1; 39 - data.offset = base & data.mask; 40 - 41 - ret = pci_bus_alloc_resource(s->cb_dev->bus, res, num, 1, 42 - base, 0, pcmcia_align, &data); 43 - if (ret != 0) { 44 - kfree(res); 45 - res = NULL; 46 - } 47 - return res; 48 - } 49 - 50 - static int res_pci_find_io(struct pcmcia_socket *s, unsigned int attr, 51 - unsigned int *base, unsigned int num, 52 - unsigned int align, struct resource **parent) 53 - { 54 - int i, ret = 0; 55 - 56 - /* Check for an already-allocated window that must conflict with 57 - * what was asked for. It is a hack because it does not catch all 58 - * potential conflicts, just the most obvious ones. 
59 - */ 60 - for (i = 0; i < MAX_IO_WIN; i++) { 61 - if (!s->io[i].res) 62 - continue; 63 - 64 - if (!*base) 65 - continue; 66 - 67 - if ((s->io[i].res->start & (align-1)) == *base) 68 - return -EBUSY; 69 - } 70 - 71 - for (i = 0; i < MAX_IO_WIN; i++) { 72 - struct resource *res = s->io[i].res; 73 - unsigned int try; 74 - 75 - if (res && (res->flags & IORESOURCE_BITS) != 76 - (attr & IORESOURCE_BITS)) 77 - continue; 78 - 79 - if (!res) { 80 - if (align == 0) 81 - align = 0x10000; 82 - 83 - res = s->io[i].res = find_io_region(s, *base, num, 84 - align); 85 - if (!res) 86 - return -EINVAL; 87 - 88 - *base = res->start; 89 - s->io[i].res->flags = 90 - ((res->flags & ~IORESOURCE_BITS) | 91 - (attr & IORESOURCE_BITS)); 92 - s->io[i].InUse = num; 93 - *parent = res; 94 - return 0; 95 - } 96 - 97 - /* Try to extend top of window */ 98 - try = res->end + 1; 99 - if ((*base == 0) || (*base == try)) { 100 - ret = adjust_resource(s->io[i].res, res->start, 101 - resource_size(res) + num); 102 - if (ret) 103 - continue; 104 - *base = try; 105 - s->io[i].InUse += num; 106 - *parent = res; 107 - return 0; 108 - } 109 - 110 - /* Try to extend bottom of window */ 111 - try = res->start - num; 112 - if ((*base == 0) || (*base == try)) { 113 - ret = adjust_resource(s->io[i].res, 114 - res->start - num, 115 - resource_size(res) + num); 116 - if (ret) 117 - continue; 118 - *base = try; 119 - s->io[i].InUse += num; 120 - *parent = res; 121 - return 0; 122 - } 123 - } 124 - return -EINVAL; 125 - } 126 - 127 - static struct resource *res_pci_find_mem(u_long base, u_long num, 128 - u_long align, int low, struct pcmcia_socket *s) 129 - { 130 - struct resource *res = pcmcia_make_resource(0, num, IORESOURCE_MEM, 131 - dev_name(&s->dev)); 132 - struct pcmcia_align_data data; 133 - unsigned long min; 134 - int ret; 135 - 136 - if (align < 0x20000) 137 - align = 0x20000; 138 - data.mask = align - 1; 139 - data.offset = base & data.mask; 140 - 141 - min = 0; 142 - if (!low) 143 - min = 
0x100000UL; 144 - 145 - ret = pci_bus_alloc_resource(s->cb_dev->bus, 146 - res, num, 1, min, 0, 147 - pcmcia_align, &data); 148 - 149 - if (ret != 0) { 150 - kfree(res); 151 - res = NULL; 152 - } 153 - return res; 154 - } 155 - 156 - 157 - static int res_pci_init(struct pcmcia_socket *s) 158 - { 159 - if (!s->cb_dev || !(s->features & SS_CAP_PAGE_REGS)) { 160 - dev_err(&s->dev, "not supported by res_pci\n"); 161 - return -EOPNOTSUPP; 162 - } 163 - return 0; 164 - } 165 - 166 - struct pccard_resource_ops pccard_nonstatic_ops = { 167 - .validate_mem = NULL, 168 - .find_io = res_pci_find_io, 169 - .find_mem = res_pci_find_mem, 170 - .init = res_pci_init, 171 - .exit = NULL, 172 - }; 173 - EXPORT_SYMBOL(pccard_nonstatic_ops);
+2 -1
drivers/phy/phy-armada375-usb2.c
··· 37 37 struct armada375_cluster_phy *cluster_phy; 38 38 u32 reg; 39 39 40 - cluster_phy = dev_get_drvdata(phy->dev.parent); 40 + cluster_phy = phy_get_drvdata(phy); 41 41 if (!cluster_phy) 42 42 return -ENODEV; 43 43 ··· 131 131 cluster_phy->reg = usb_cluster_base; 132 132 133 133 dev_set_drvdata(dev, cluster_phy); 134 + phy_set_drvdata(phy, cluster_phy); 134 135 135 136 phy_provider = devm_of_phy_provider_register(&pdev->dev, 136 137 armada375_usb_phy_xlate);
+6 -5
drivers/phy/phy-core.c
··· 52 52 53 53 static int devm_phy_match(struct device *dev, void *res, void *match_data) 54 54 { 55 - return res == match_data; 55 + struct phy **phy = res; 56 + 57 + return *phy == match_data; 56 58 } 57 59 58 60 /** ··· 225 223 ret = phy_pm_runtime_get_sync(phy); 226 224 if (ret < 0 && ret != -ENOTSUPP) 227 225 return ret; 226 + ret = 0; /* Override possible ret == -ENOTSUPP */ 228 227 229 228 mutex_lock(&phy->mutex); 230 229 if (phy->init_count == 0 && phy->ops->init) { ··· 234 231 dev_err(&phy->dev, "phy init failed --> %d\n", ret); 235 232 goto out; 236 233 } 237 - } else { 238 - ret = 0; /* Override possible ret == -ENOTSUPP */ 239 234 } 240 235 ++phy->init_count; 241 236 ··· 254 253 ret = phy_pm_runtime_get_sync(phy); 255 254 if (ret < 0 && ret != -ENOTSUPP) 256 255 return ret; 256 + ret = 0; /* Override possible ret == -ENOTSUPP */ 257 257 258 258 mutex_lock(&phy->mutex); 259 259 if (phy->init_count == 1 && phy->ops->exit) { ··· 289 287 ret = phy_pm_runtime_get_sync(phy); 290 288 if (ret < 0 && ret != -ENOTSUPP) 291 289 return ret; 290 + ret = 0; /* Override possible ret == -ENOTSUPP */ 292 291 293 292 mutex_lock(&phy->mutex); 294 293 if (phy->power_count == 0 && phy->ops->power_on) { ··· 298 295 dev_err(&phy->dev, "phy poweron failed --> %d\n", ret); 299 296 goto out; 300 297 } 301 - } else { 302 - ret = 0; /* Override possible ret == -ENOTSUPP */ 303 298 } 304 299 ++phy->power_count; 305 300 mutex_unlock(&phy->mutex);
+4 -20
drivers/phy/phy-exynos-dp-video.c
··· 30 30 const struct exynos_dp_video_phy_drvdata *drvdata; 31 31 }; 32 32 33 - static void exynos_dp_video_phy_pwr_isol(struct exynos_dp_video_phy *state, 34 - unsigned int on) 35 - { 36 - unsigned int val; 37 - 38 - if (IS_ERR(state->regs)) 39 - return; 40 - 41 - val = on ? 0 : EXYNOS5_PHY_ENABLE; 42 - 43 - regmap_update_bits(state->regs, state->drvdata->phy_ctrl_offset, 44 - EXYNOS5_PHY_ENABLE, val); 45 - } 46 - 47 33 static int exynos_dp_video_phy_power_on(struct phy *phy) 48 34 { 49 35 struct exynos_dp_video_phy *state = phy_get_drvdata(phy); 50 36 51 37 /* Disable power isolation on DP-PHY */ 52 - exynos_dp_video_phy_pwr_isol(state, 0); 53 - 54 - return 0; 38 + return regmap_update_bits(state->regs, state->drvdata->phy_ctrl_offset, 39 + EXYNOS5_PHY_ENABLE, EXYNOS5_PHY_ENABLE); 55 40 } 56 41 57 42 static int exynos_dp_video_phy_power_off(struct phy *phy) ··· 44 59 struct exynos_dp_video_phy *state = phy_get_drvdata(phy); 45 60 46 61 /* Enable power isolation on DP-PHY */ 47 - exynos_dp_video_phy_pwr_isol(state, 1); 48 - 49 - return 0; 62 + return regmap_update_bits(state->regs, state->drvdata->phy_ctrl_offset, 63 + EXYNOS5_PHY_ENABLE, 0); 50 64 } 51 65 52 66 static struct phy_ops exynos_dp_video_phy_ops = {
+4 -7
drivers/phy/phy-exynos-mipi-video.c
··· 43 43 } phys[EXYNOS_MIPI_PHYS_NUM]; 44 44 spinlock_t slock; 45 45 void __iomem *regs; 46 - struct mutex mutex; 47 46 struct regmap *regmap; 48 47 }; 49 48 ··· 58 59 else 59 60 reset = EXYNOS4_MIPI_PHY_SRESETN; 60 61 61 - if (state->regmap) { 62 - mutex_lock(&state->mutex); 62 + spin_lock(&state->slock); 63 + 64 + if (!IS_ERR(state->regmap)) { 63 65 regmap_read(state->regmap, offset, &val); 64 66 if (on) 65 67 val |= reset; ··· 72 72 else if (!(val & EXYNOS4_MIPI_PHY_RESET_MASK)) 73 73 val &= ~EXYNOS4_MIPI_PHY_ENABLE; 74 74 regmap_write(state->regmap, offset, val); 75 - mutex_unlock(&state->mutex); 76 75 } else { 77 76 addr = state->regs + EXYNOS_MIPI_PHY_CONTROL(id / 2); 78 77 79 - spin_lock(&state->slock); 80 78 val = readl(addr); 81 79 if (on) 82 80 val |= reset; ··· 88 90 val &= ~EXYNOS4_MIPI_PHY_ENABLE; 89 91 90 92 writel(val, addr); 91 - spin_unlock(&state->slock); 92 93 } 93 94 95 + spin_unlock(&state->slock); 94 96 return 0; 95 97 } 96 98 ··· 156 158 157 159 dev_set_drvdata(dev, state); 158 160 spin_lock_init(&state->slock); 159 - mutex_init(&state->mutex); 160 161 161 162 for (i = 0; i < EXYNOS_MIPI_PHYS_NUM; i++) { 162 163 struct phy *phy = devm_phy_create(dev, NULL,
-1
drivers/phy/phy-exynos4210-usb2.c
··· 250 250 .power_on = exynos4210_power_on, 251 251 .power_off = exynos4210_power_off, 252 252 }, 253 - {}, 254 253 }; 255 254 256 255 const struct samsung_usb2_phy_config exynos4210_usb2_phy_config = {
-1
drivers/phy/phy-exynos4x12-usb2.c
··· 361 361 .power_on = exynos4x12_power_on, 362 362 .power_off = exynos4x12_power_off, 363 363 }, 364 - {}, 365 364 }; 366 365 367 366 const struct samsung_usb2_phy_config exynos3250_usb2_phy_config = {
+1 -1
drivers/phy/phy-exynos5-usbdrd.c
··· 531 531 { 532 532 struct exynos5_usbdrd_phy *phy_drd = dev_get_drvdata(dev); 533 533 534 - if (WARN_ON(args->args[0] > EXYNOS5_DRDPHYS_NUM)) 534 + if (WARN_ON(args->args[0] >= EXYNOS5_DRDPHYS_NUM)) 535 535 return ERR_PTR(-ENODEV); 536 536 537 537 return phy_drd->phys[args->args[0]].phy;
-1
drivers/phy/phy-exynos5250-usb2.c
··· 391 391 .power_on = exynos5250_power_on, 392 392 .power_off = exynos5250_power_off, 393 393 }, 394 - {}, 395 394 }; 396 395 397 396 const struct samsung_usb2_phy_config exynos5250_usb2_phy_config = {
+3
drivers/phy/phy-hix5hd2-sata.c
··· 147 147 return -ENOMEM; 148 148 149 149 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 150 + if (!res) 151 + return -EINVAL; 152 + 150 153 priv->base = devm_ioremap(dev, res->start, resource_size(res)); 151 154 if (!priv->base) 152 155 return -ENOMEM;
+7 -6
drivers/phy/phy-miphy28lp.c
··· 228 228 struct regmap *regmap; 229 229 struct mutex miphy_mutex; 230 230 struct miphy28lp_phy **phys; 231 + int nphys; 231 232 }; 232 233 233 234 struct miphy_initval { ··· 1117 1116 return ERR_PTR(-EINVAL); 1118 1117 } 1119 1118 1120 - for (index = 0; index < of_get_child_count(dev->of_node); index++) 1119 + for (index = 0; index < miphy_dev->nphys; index++) 1121 1120 if (phynode == miphy_dev->phys[index]->phy->dev.of_node) { 1122 1121 miphy_phy = miphy_dev->phys[index]; 1123 1122 break; ··· 1139 1138 1140 1139 static struct phy_ops miphy28lp_ops = { 1141 1140 .init = miphy28lp_init, 1141 + .owner = THIS_MODULE, 1142 1142 }; 1143 1143 1144 1144 static int miphy28lp_probe_resets(struct device_node *node, ··· 1202 1200 struct miphy28lp_dev *miphy_dev; 1203 1201 struct phy_provider *provider; 1204 1202 struct phy *phy; 1205 - int chancount, port = 0; 1206 - int ret; 1203 + int ret, port = 0; 1207 1204 1208 1205 miphy_dev = devm_kzalloc(&pdev->dev, sizeof(*miphy_dev), GFP_KERNEL); 1209 1206 if (!miphy_dev) 1210 1207 return -ENOMEM; 1211 1208 1212 - chancount = of_get_child_count(np); 1213 - miphy_dev->phys = devm_kzalloc(&pdev->dev, sizeof(phy) * chancount, 1214 - GFP_KERNEL); 1209 + miphy_dev->nphys = of_get_child_count(np); 1210 + miphy_dev->phys = devm_kcalloc(&pdev->dev, miphy_dev->nphys, 1211 + sizeof(*miphy_dev->phys), GFP_KERNEL); 1215 1212 if (!miphy_dev->phys) 1216 1213 return -ENOMEM; 1217 1214
+6 -6
drivers/phy/phy-miphy365x.c
··· 150 150 struct regmap *regmap; 151 151 struct mutex miphy_mutex; 152 152 struct miphy365x_phy **phys; 153 + int nphys; 153 154 }; 154 155 155 156 /* ··· 486 485 return ERR_PTR(-EINVAL); 487 486 } 488 487 489 - for (index = 0; index < of_get_child_count(dev->of_node); index++) 488 + for (index = 0; index < miphy_dev->nphys; index++) 490 489 if (phynode == miphy_dev->phys[index]->phy->dev.of_node) { 491 490 miphy_phy = miphy_dev->phys[index]; 492 491 break; ··· 542 541 struct miphy365x_dev *miphy_dev; 543 542 struct phy_provider *provider; 544 543 struct phy *phy; 545 - int chancount, port = 0; 546 - int ret; 544 + int ret, port = 0; 547 545 548 546 miphy_dev = devm_kzalloc(&pdev->dev, sizeof(*miphy_dev), GFP_KERNEL); 549 547 if (!miphy_dev) 550 548 return -ENOMEM; 551 549 552 - chancount = of_get_child_count(np); 553 - miphy_dev->phys = devm_kzalloc(&pdev->dev, sizeof(phy) * chancount, 554 - GFP_KERNEL); 550 + miphy_dev->nphys = of_get_child_count(np); 551 + miphy_dev->phys = devm_kcalloc(&pdev->dev, miphy_dev->nphys, 552 + sizeof(*miphy_dev->phys), GFP_KERNEL); 555 553 if (!miphy_dev->phys) 556 554 return -ENOMEM; 557 555
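Both miphy hunks fix the same allocation pattern: the array of channel pointers was sized from an unrelated local (`sizeof(phy) * chancount`) and the child count was re-evaluated at xlate time; the fix caches `nphys` and allocates with `devm_kcalloc(n, sizeof(*ptr))`. A userspace sketch of the sizing idiom, with plain calloc() standing in for the managed kernel helper and a hypothetical element type:

```c
#include <stdlib.h>

struct phy_slot { int port; };	/* hypothetical element type */

/* sizeof(*phys) sizes the element from the pointer being assigned, so
 * the expression stays correct even if the element type later changes;
 * the calloc-style helper also does the n * size multiplication (with
 * overflow checking in the kernel's devm_kcalloc()). */
struct phy_slot **alloc_phy_array(size_t nphys)
{
	struct phy_slot **phys = calloc(nphys, sizeof(*phys));

	return phys;
}
```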
+1 -1
drivers/phy/phy-omap-control.c
··· 360 360 } 361 361 module_exit(omap_control_phy_exit); 362 362 363 - MODULE_ALIAS("platform: omap_control_phy"); 363 + MODULE_ALIAS("platform:omap_control_phy"); 364 364 MODULE_AUTHOR("Texas Instruments Inc."); 365 365 MODULE_DESCRIPTION("OMAP Control Module PHY Driver"); 366 366 MODULE_LICENSE("GPL v2");
+4 -3
drivers/phy/phy-omap-usb2.c
··· 296 296 dev_warn(&pdev->dev, 297 297 "found usb_otg_ss_refclk960m, please fix DTS\n"); 298 298 } 299 - } else { 300 - clk_prepare(phy->optclk); 301 299 } 300 + 301 + if (!IS_ERR(phy->optclk)) 302 + clk_prepare(phy->optclk); 302 303 303 304 usb_add_phy_dev(&phy->phy); 304 305 ··· 384 383 385 384 module_platform_driver(omap_usb2_driver); 386 385 387 - MODULE_ALIAS("platform: omap_usb2"); 386 + MODULE_ALIAS("platform:omap_usb2"); 388 387 MODULE_AUTHOR("Texas Instruments Inc."); 389 388 MODULE_DESCRIPTION("OMAP USB2 phy driver"); 390 389 MODULE_LICENSE("GPL v2");
+3 -3
drivers/phy/phy-rockchip-usb.c
··· 61 61 return ret; 62 62 63 63 clk_disable_unprepare(phy->clk); 64 - if (ret) 65 - return ret; 66 64 67 65 return 0; 68 66 } ··· 76 78 77 79 /* Power up usb phy analog blocks by set siddq 0 */ 78 80 ret = rockchip_usb_phy_power(phy, 0); 79 - if (ret) 81 + if (ret) { 82 + clk_disable_unprepare(phy->clk); 80 83 return ret; 84 + } 81 85 82 86 return 0; 83 87 }
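The rockchip hunk above drops a redundant `if (ret)` recheck after `clk_disable_unprepare()` and, on the power-on path, disables the clock again when the subsequent power write fails, so the caller never sees a half-enabled PHY. A minimal sketch of that unwind-on-error shape, with hypothetical stand-ins for the clock and the register write:

```c
/* Hypothetical stand-ins: a flag models the prepared clock, and
 * fake_power() models the register write that may fail. */
static int clk_enabled;

static int fake_power(int should_fail)
{
	return should_fail ? -1 : 0;
}

int phy_power_on(int should_fail)
{
	int ret;

	clk_enabled = 1;		/* clk_prepare_enable() */

	ret = fake_power(should_fail);
	if (ret) {
		clk_enabled = 0;	/* undo before returning the error */
		return ret;
	}

	return 0;
}

int clk_state(void)
{
	return clk_enabled;
}
```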
+4 -8
drivers/phy/phy-ti-pipe3.c
··· 165 165 cpu_relax(); 166 166 val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_STATUS); 167 167 if (val & PLL_LOCK) 168 - break; 168 + return 0; 169 169 } while (!time_after(jiffies, timeout)); 170 170 171 - if (!(val & PLL_LOCK)) { 172 - dev_err(phy->dev, "DPLL failed to lock\n"); 173 - return -EBUSY; 174 - } 175 - 176 - return 0; 171 + dev_err(phy->dev, "DPLL failed to lock\n"); 172 + return -EBUSY; 177 173 } 178 174 179 175 static int ti_pipe3_dpll_program(struct ti_pipe3 *phy) ··· 604 608 605 609 module_platform_driver(ti_pipe3_driver); 606 610 607 - MODULE_ALIAS("platform: ti_pipe3"); 611 + MODULE_ALIAS("platform:ti_pipe3"); 608 612 MODULE_AUTHOR("Texas Instruments Inc."); 609 613 MODULE_DESCRIPTION("TI PIPE3 phy driver"); 610 614 MODULE_LICENSE("GPL v2");
-1
drivers/phy/phy-twl4030-usb.c
··· 666 666 twl->dev = &pdev->dev; 667 667 twl->irq = platform_get_irq(pdev, 0); 668 668 twl->vbus_supplied = false; 669 - twl->linkstat = -EINVAL; 670 669 twl->linkstat = OMAP_MUSB_UNKNOWN; 671 670 672 671 twl->phy.dev = twl->dev;
-1
drivers/phy/phy-xgene.c
··· 1704 1704 for (i = 0; i < MAX_LANE; i++) 1705 1705 ctx->sata_param.speed[i] = 2; /* Default to Gen3 */ 1706 1706 1707 - ctx->dev = &pdev->dev; 1708 1707 platform_set_drvdata(pdev, ctx); 1709 1708 1710 1709 ctx->phy = devm_phy_create(ctx->dev, NULL, &xgene_phy_ops);
+188 -66
drivers/pinctrl/intel/pinctrl-baytrail.c
··· 66 66 #define BYT_DIR_MASK (BIT(1) | BIT(2)) 67 67 #define BYT_TRIG_MASK (BIT(26) | BIT(25) | BIT(24)) 68 68 69 + #define BYT_CONF0_RESTORE_MASK (BYT_DIRECT_IRQ_EN | BYT_TRIG_MASK | \ 70 + BYT_PIN_MUX) 71 + #define BYT_VAL_RESTORE_MASK (BYT_DIR_MASK | BYT_LEVEL) 72 + 69 73 #define BYT_NGPIO_SCORE 102 70 74 #define BYT_NGPIO_NCORE 28 71 75 #define BYT_NGPIO_SUS 44 ··· 138 134 }, 139 135 }; 140 136 137 + struct byt_gpio_pin_context { 138 + u32 conf0; 139 + u32 val; 140 + }; 141 + 141 142 struct byt_gpio { 142 143 struct gpio_chip chip; 143 144 struct platform_device *pdev; 144 145 spinlock_t lock; 145 146 void __iomem *reg_base; 146 147 struct pinctrl_gpio_range *range; 148 + struct byt_gpio_pin_context *saved_context; 147 149 }; 148 150 149 151 #define to_byt_gpio(c) container_of(c, struct byt_gpio, chip) ··· 168 158 return vg->reg_base + reg_offset + reg; 169 159 } 170 160 171 - static bool is_special_pin(struct byt_gpio *vg, unsigned offset) 161 + static void byt_gpio_clear_triggering(struct byt_gpio *vg, unsigned offset) 162 + { 163 + void __iomem *reg = byt_gpio_reg(&vg->chip, offset, BYT_CONF0_REG); 164 + unsigned long flags; 165 + u32 value; 166 + 167 + spin_lock_irqsave(&vg->lock, flags); 168 + value = readl(reg); 169 + value &= ~(BYT_TRIG_POS | BYT_TRIG_NEG | BYT_TRIG_LVL); 170 + writel(value, reg); 171 + spin_unlock_irqrestore(&vg->lock, flags); 172 + } 173 + 174 + static u32 byt_get_gpio_mux(struct byt_gpio *vg, unsigned offset) 172 175 { 173 176 /* SCORE pin 92-93 */ 174 177 if (!strcmp(vg->range->name, BYT_SCORE_ACPI_UID) && 175 178 offset >= 92 && offset <= 93) 176 - return true; 179 + return 1; 177 180 178 181 /* SUS pin 11-21 */ 179 182 if (!strcmp(vg->range->name, BYT_SUS_ACPI_UID) && 180 183 offset >= 11 && offset <= 21) 181 - return true; 184 + return 1; 182 185 183 - return false; 186 + return 0; 184 187 } 185 188 186 189 static int byt_gpio_request(struct gpio_chip *chip, unsigned offset) 187 190 { 188 191 struct byt_gpio *vg = 
to_byt_gpio(chip); 189 192 void __iomem *reg = byt_gpio_reg(chip, offset, BYT_CONF0_REG); 190 - u32 value; 191 - bool special; 193 + u32 value, gpio_mux; 192 194 193 195 /* 194 196 * In most cases, func pin mux 000 means GPIO function. 195 197 * But, some pins may have func pin mux 001 represents 196 - * GPIO function. Only allow user to export pin with 197 - * func pin mux preset as GPIO function by BIOS/FW. 198 + * GPIO function. 199 + * 200 + * Because there are devices out there where some pins were not 201 + * configured correctly we allow changing the mux value from 202 + * request (but print out warning about that). 198 203 */ 199 204 value = readl(reg) & BYT_PIN_MUX; 200 - special = is_special_pin(vg, offset); 201 - if ((special && value != 1) || (!special && value)) { 202 - dev_err(&vg->pdev->dev, 203 - "pin %u cannot be used as GPIO.\n", offset); 204 - return -EINVAL; 205 + gpio_mux = byt_get_gpio_mux(vg, offset); 206 + if (WARN_ON(gpio_mux != value)) { 207 + unsigned long flags; 208 + 209 + spin_lock_irqsave(&vg->lock, flags); 210 + value = readl(reg) & ~BYT_PIN_MUX; 211 + value |= gpio_mux; 212 + writel(value, reg); 213 + spin_unlock_irqrestore(&vg->lock, flags); 214 + 215 + dev_warn(&vg->pdev->dev, 216 + "pin %u forcibly re-configured as GPIO\n", offset); 205 217 } 206 218 207 219 pm_runtime_get(&vg->pdev->dev); ··· 234 202 static void byt_gpio_free(struct gpio_chip *chip, unsigned offset) 235 203 { 236 204 struct byt_gpio *vg = to_byt_gpio(chip); 237 - void __iomem *reg = byt_gpio_reg(&vg->chip, offset, BYT_CONF0_REG); 238 - u32 value; 239 205 240 - /* clear interrupt triggering */ 241 - value = readl(reg); 242 - value &= ~(BYT_TRIG_POS | BYT_TRIG_NEG | BYT_TRIG_LVL); 243 - writel(value, reg); 244 - 206 + byt_gpio_clear_triggering(vg, offset); 245 207 pm_runtime_put(&vg->pdev->dev); 246 208 } 247 209 ··· 262 236 value &= ~(BYT_DIRECT_IRQ_EN | BYT_TRIG_POS | BYT_TRIG_NEG | 263 237 BYT_TRIG_LVL); 264 238 265 - switch (type) { 266 - case 
IRQ_TYPE_LEVEL_HIGH: 267 - value |= BYT_TRIG_LVL; 268 - case IRQ_TYPE_EDGE_RISING: 269 - value |= BYT_TRIG_POS; 270 - break; 271 - case IRQ_TYPE_LEVEL_LOW: 272 - value |= BYT_TRIG_LVL; 273 - case IRQ_TYPE_EDGE_FALLING: 274 - value |= BYT_TRIG_NEG; 275 - break; 276 - case IRQ_TYPE_EDGE_BOTH: 277 - value |= (BYT_TRIG_NEG | BYT_TRIG_POS); 278 - break; 279 - } 280 239 writel(value, reg); 240 + 241 + if (type & IRQ_TYPE_EDGE_BOTH) 242 + __irq_set_handler_locked(d->irq, handle_edge_irq); 243 + else if (type & IRQ_TYPE_LEVEL_MASK) 244 + __irq_set_handler_locked(d->irq, handle_level_irq); 281 245 282 246 spin_unlock_irqrestore(&vg->lock, flags); 283 247 ··· 426 410 struct irq_data *data = irq_desc_get_irq_data(desc); 427 411 struct byt_gpio *vg = to_byt_gpio(irq_desc_get_handler_data(desc)); 428 412 struct irq_chip *chip = irq_data_get_irq_chip(data); 429 - u32 base, pin, mask; 413 + u32 base, pin; 430 414 void __iomem *reg; 431 - u32 pending; 415 + unsigned long pending; 432 416 unsigned virq; 433 - int looplimit = 0; 434 417 435 418 /* check from GPIO controller which pin triggered the interrupt */ 436 419 for (base = 0; base < vg->chip.ngpio; base += 32) { 437 - 438 420 reg = byt_gpio_reg(&vg->chip, base, BYT_INT_STAT_REG); 439 - 440 - while ((pending = readl(reg))) { 441 - pin = __ffs(pending); 442 - mask = BIT(pin); 443 - /* Clear before handling so we can't lose an edge */ 444 - writel(mask, reg); 445 - 421 + pending = readl(reg); 422 + for_each_set_bit(pin, &pending, 32) { 446 423 virq = irq_find_mapping(vg->chip.irqdomain, base + pin); 447 424 generic_handle_irq(virq); 448 - 449 - /* In case bios or user sets triggering incorretly a pin 450 - * might remain in "interrupt triggered" state. 
451 - */ 452 - if (looplimit++ > 32) { 453 - dev_err(&vg->pdev->dev, 454 - "Gpio %d interrupt flood, disabling\n", 455 - base + pin); 456 - 457 - reg = byt_gpio_reg(&vg->chip, base + pin, 458 - BYT_CONF0_REG); 459 - mask = readl(reg); 460 - mask &= ~(BYT_TRIG_NEG | BYT_TRIG_POS | 461 - BYT_TRIG_LVL); 462 - writel(mask, reg); 463 - mask = readl(reg); /* flush */ 464 - break; 465 - } 466 425 } 467 426 } 468 427 chip->irq_eoi(data); 469 428 } 470 429 430 + static void byt_irq_ack(struct irq_data *d) 431 + { 432 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 433 + struct byt_gpio *vg = to_byt_gpio(gc); 434 + unsigned offset = irqd_to_hwirq(d); 435 + void __iomem *reg; 436 + 437 + reg = byt_gpio_reg(&vg->chip, offset, BYT_INT_STAT_REG); 438 + writel(BIT(offset % 32), reg); 439 + } 440 + 471 441 static void byt_irq_unmask(struct irq_data *d) 472 442 { 443 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 444 + struct byt_gpio *vg = to_byt_gpio(gc); 445 + unsigned offset = irqd_to_hwirq(d); 446 + unsigned long flags; 447 + void __iomem *reg; 448 + u32 value; 449 + 450 + spin_lock_irqsave(&vg->lock, flags); 451 + 452 + reg = byt_gpio_reg(&vg->chip, offset, BYT_CONF0_REG); 453 + value = readl(reg); 454 + 455 + switch (irqd_get_trigger_type(d)) { 456 + case IRQ_TYPE_LEVEL_HIGH: 457 + value |= BYT_TRIG_LVL; 458 + case IRQ_TYPE_EDGE_RISING: 459 + value |= BYT_TRIG_POS; 460 + break; 461 + case IRQ_TYPE_LEVEL_LOW: 462 + value |= BYT_TRIG_LVL; 463 + case IRQ_TYPE_EDGE_FALLING: 464 + value |= BYT_TRIG_NEG; 465 + break; 466 + case IRQ_TYPE_EDGE_BOTH: 467 + value |= (BYT_TRIG_NEG | BYT_TRIG_POS); 468 + break; 469 + } 470 + 471 + writel(value, reg); 472 + 473 + spin_unlock_irqrestore(&vg->lock, flags); 473 474 } 474 475 475 476 static void byt_irq_mask(struct irq_data *d) 476 477 { 478 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 479 + struct byt_gpio *vg = to_byt_gpio(gc); 480 + 481 + byt_gpio_clear_triggering(vg, irqd_to_hwirq(d)); 477 482 } 478 483 479 
484 static struct irq_chip byt_irqchip = { 480 485 .name = "BYT-GPIO", 486 + .irq_ack = byt_irq_ack, 481 487 .irq_mask = byt_irq_mask, 482 488 .irq_unmask = byt_irq_unmask, 483 489 .irq_set_type = byt_irq_type, ··· 510 472 { 511 473 void __iomem *reg; 512 474 u32 base, value; 475 + int i; 476 + 477 + /* 478 + * Clear interrupt triggers for all pins that are GPIOs and 479 + * do not use direct IRQ mode. This will prevent spurious 480 + * interrupts from misconfigured pins. 481 + */ 482 + for (i = 0; i < vg->chip.ngpio; i++) { 483 + value = readl(byt_gpio_reg(&vg->chip, i, BYT_CONF0_REG)); 484 + if ((value & BYT_PIN_MUX) == byt_get_gpio_mux(vg, i) && 485 + !(value & BYT_DIRECT_IRQ_EN)) { 486 + byt_gpio_clear_triggering(vg, i); 487 + dev_dbg(&vg->pdev->dev, "disabling GPIO %d\n", i); 488 + } 489 + } 513 490 514 491 /* clear interrupt status trigger registers */ 515 492 for (base = 0; base < vg->chip.ngpio; base += 32) { ··· 594 541 gc->can_sleep = false; 595 542 gc->dev = dev; 596 543 544 + #ifdef CONFIG_PM_SLEEP 545 + vg->saved_context = devm_kcalloc(&pdev->dev, gc->ngpio, 546 + sizeof(*vg->saved_context), GFP_KERNEL); 547 + #endif 548 + 597 549 ret = gpiochip_add(gc); 598 550 if (ret) { 599 551 dev_err(&pdev->dev, "failed adding byt-gpio chip\n"); ··· 627 569 return 0; 628 570 } 629 571 572 + #ifdef CONFIG_PM_SLEEP 573 + static int byt_gpio_suspend(struct device *dev) 574 + { 575 + struct platform_device *pdev = to_platform_device(dev); 576 + struct byt_gpio *vg = platform_get_drvdata(pdev); 577 + int i; 578 + 579 + for (i = 0; i < vg->chip.ngpio; i++) { 580 + void __iomem *reg; 581 + u32 value; 582 + 583 + reg = byt_gpio_reg(&vg->chip, i, BYT_CONF0_REG); 584 + value = readl(reg) & BYT_CONF0_RESTORE_MASK; 585 + vg->saved_context[i].conf0 = value; 586 + 587 + reg = byt_gpio_reg(&vg->chip, i, BYT_VAL_REG); 588 + value = readl(reg) & BYT_VAL_RESTORE_MASK; 589 + vg->saved_context[i].val = value; 590 + } 591 + 592 + return 0; 593 + } 594 + 595 + static int 
byt_gpio_resume(struct device *dev) 596 + { 597 + struct platform_device *pdev = to_platform_device(dev); 598 + struct byt_gpio *vg = platform_get_drvdata(pdev); 599 + int i; 600 + 601 + for (i = 0; i < vg->chip.ngpio; i++) { 602 + void __iomem *reg; 603 + u32 value; 604 + 605 + reg = byt_gpio_reg(&vg->chip, i, BYT_CONF0_REG); 606 + value = readl(reg); 607 + if ((value & BYT_CONF0_RESTORE_MASK) != 608 + vg->saved_context[i].conf0) { 609 + value &= ~BYT_CONF0_RESTORE_MASK; 610 + value |= vg->saved_context[i].conf0; 611 + writel(value, reg); 612 + dev_info(dev, "restored pin %d conf0 %#08x", i, value); 613 + } 614 + 615 + reg = byt_gpio_reg(&vg->chip, i, BYT_VAL_REG); 616 + value = readl(reg); 617 + if ((value & BYT_VAL_RESTORE_MASK) != 618 + vg->saved_context[i].val) { 619 + u32 v; 620 + 621 + v = value & ~BYT_VAL_RESTORE_MASK; 622 + v |= vg->saved_context[i].val; 623 + if (v != value) { 624 + writel(v, reg); 625 + dev_dbg(dev, "restored pin %d val %#08x\n", 626 + i, v); 627 + } 628 + } 629 + } 630 + 631 + return 0; 632 + } 633 + #endif 634 + 630 635 static int byt_gpio_runtime_suspend(struct device *dev) 631 636 { 632 637 return 0; ··· 701 580 } 702 581 703 582 static const struct dev_pm_ops byt_gpio_pm_ops = { 704 - .runtime_suspend = byt_gpio_runtime_suspend, 705 - .runtime_resume = byt_gpio_runtime_resume, 583 + SET_LATE_SYSTEM_SLEEP_PM_OPS(byt_gpio_suspend, byt_gpio_resume) 584 + SET_RUNTIME_PM_OPS(byt_gpio_runtime_suspend, byt_gpio_runtime_resume, 585 + NULL) 706 586 }; 707 587 708 588 static const struct acpi_device_id byt_gpio_acpi_match[] = {
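The suspend/resume hunks above save only the bits covered by `BYT_CONF0_RESTORE_MASK` / `BYT_VAL_RESTORE_MASK` and, on resume, splice them back without touching the rest of the register. The save/merge arithmetic can be sketched in plain C; the mask value and function names here are illustrative, not the driver's real constants:

```c
#include <assert.h>
#include <stdint.h>

/* illustrative stand-in for BYT_CONF0_RESTORE_MASK, not the real value */
#define RESTORE_MASK 0x0000ffffu

/* suspend path: keep only the bits the driver owns */
uint32_t save_conf(uint32_t reg)
{
	return reg & RESTORE_MASK;
}

/* resume path: if firmware clobbered our bits, splice the saved ones
 * back while leaving the other bits of the register untouched */
uint32_t restore_conf(uint32_t reg, uint32_t saved)
{
	if ((reg & RESTORE_MASK) == saved)
		return reg;	/* nothing changed, skip the write */
	return (reg & ~RESTORE_MASK) | saved;
}
```

This mirrors why `byt_gpio_resume()` compares the masked readback against `saved_context[i]` before writing: unchanged pins cost no register write.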
+1
drivers/pinctrl/intel/pinctrl-cherryview.c
··· 1226 1226 static int chv_gpio_direction_output(struct gpio_chip *chip, unsigned offset, 1227 1227 int value) 1228 1228 { 1229 + chv_gpio_set(chip, offset, value); 1229 1230 return pinctrl_gpio_direction_output(chip->base + offset); 1230 1231 } 1231 1232
+7 -10
drivers/pinctrl/pinctrl-at91.c
··· 1477 1477 /* the interrupt is already cleared before by reading ISR */ 1478 1478 } 1479 1479 1480 - static unsigned int gpio_irq_startup(struct irq_data *d) 1480 + static int gpio_irq_request_res(struct irq_data *d) 1481 1481 { 1482 1482 struct at91_gpio_chip *at91_gpio = irq_data_get_irq_chip_data(d); 1483 1483 unsigned pin = d->hwirq; 1484 1484 int ret; 1485 1485 1486 1486 ret = gpiochip_lock_as_irq(&at91_gpio->chip, pin); 1487 - if (ret) { 1487 + if (ret) 1488 1488 dev_err(at91_gpio->chip.dev, "unable to lock pind %lu IRQ\n", 1489 1489 d->hwirq); 1490 - return ret; 1491 - } 1492 - gpio_irq_unmask(d); 1493 - return 0; 1490 + 1491 + return ret; 1494 1492 } 1495 1493 1496 - static void gpio_irq_shutdown(struct irq_data *d) 1494 + static void gpio_irq_release_res(struct irq_data *d) 1497 1495 { 1498 1496 struct at91_gpio_chip *at91_gpio = irq_data_get_irq_chip_data(d); 1499 1497 unsigned pin = d->hwirq; 1500 1498 1501 - gpio_irq_mask(d); 1502 1499 gpiochip_unlock_as_irq(&at91_gpio->chip, pin); 1503 1500 } 1504 1501 ··· 1574 1577 static struct irq_chip gpio_irqchip = { 1575 1578 .name = "GPIO", 1576 1579 .irq_ack = gpio_irq_ack, 1577 - .irq_startup = gpio_irq_startup, 1578 - .irq_shutdown = gpio_irq_shutdown, 1580 + .irq_request_resources = gpio_irq_request_res, 1581 + .irq_release_resources = gpio_irq_release_res, 1579 1582 .irq_disable = gpio_irq_mask, 1580 1583 .irq_mask = gpio_irq_mask, 1581 1584 .irq_unmask = gpio_irq_unmask,
+1
drivers/pinctrl/sunxi/pinctrl-sun4i-a10.c
··· 1011 1011 .pins = sun4i_a10_pins, 1012 1012 .npins = ARRAY_SIZE(sun4i_a10_pins), 1013 1013 .irq_banks = 1, 1014 + .irq_read_needs_mux = true, 1014 1015 }; 1015 1016 1016 1017 static int sun4i_a10_pinctrl_probe(struct platform_device *pdev)
+12 -2
drivers/pinctrl/sunxi/pinctrl-sunxi.c
··· 29 29 #include <linux/slab.h> 30 30 31 31 #include "../core.h" 32 + #include "../../gpio/gpiolib.h" 32 33 #include "pinctrl-sunxi.h" 33 34 34 35 static struct irq_chip sunxi_pinctrl_edge_irq_chip; ··· 465 464 static int sunxi_pinctrl_gpio_get(struct gpio_chip *chip, unsigned offset) 466 465 { 467 466 struct sunxi_pinctrl *pctl = dev_get_drvdata(chip->dev); 468 - 469 467 u32 reg = sunxi_data_reg(offset); 470 468 u8 index = sunxi_data_offset(offset); 471 - u32 val = (readl(pctl->membase + reg) >> index) & DATA_PINS_MASK; 469 + u32 set_mux = pctl->desc->irq_read_needs_mux && 470 + test_bit(FLAG_USED_AS_IRQ, &chip->desc[offset].flags); 471 + u32 val; 472 + 473 + if (set_mux) 474 + sunxi_pmx_set(pctl->pctl_dev, offset, SUN4I_FUNC_INPUT); 475 + 476 + val = (readl(pctl->membase + reg) >> index) & DATA_PINS_MASK; 477 + 478 + if (set_mux) 479 + sunxi_pmx_set(pctl->pctl_dev, offset, SUN4I_FUNC_IRQ); 472 480 473 481 return val; 474 482 }
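The `sunxi_pinctrl_gpio_get()` change reads a pin that is muxed as an IRQ by briefly switching it to the plain-input function, sampling the data register, and switching back. A toy pin model (all names and the register behavior here are an assumption for illustration, not the sunxi hardware) shows the shape of that dance:

```c
#include <assert.h>

/* mux values as defined in pinctrl-sunxi.h */
#define FUNC_INPUT 0
#define FUNC_IRQ   6

static unsigned int mux;	/* current pin function */
static unsigned int pad_level;	/* level on the physical pad */

/* toy model: the data register reflects the pad only in input mode */
static unsigned int read_data_reg(void)
{
	return (mux == FUNC_INPUT) ? pad_level : 0;
}

/* mirrors the irq_read_needs_mux path: flip to input, sample, flip back */
unsigned int gpio_get(unsigned int level_on_pad)
{
	unsigned int val;

	mux = FUNC_IRQ;
	pad_level = level_on_pad;

	mux = FUNC_INPUT;	/* sunxi_pmx_set(..., SUN4I_FUNC_INPUT) */
	val = read_data_reg();
	mux = FUNC_IRQ;		/* sunxi_pmx_set(..., SUN4I_FUNC_IRQ) */

	return val;
}
```

Without the temporary mux switch the toy model (like the A10 hardware) would always read back 0 for an IRQ-muxed pin.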
+4
drivers/pinctrl/sunxi/pinctrl-sunxi.h
··· 77 77 #define IRQ_LEVEL_LOW 0x03 78 78 #define IRQ_EDGE_BOTH 0x04 79 79 80 + #define SUN4I_FUNC_INPUT 0 81 + #define SUN4I_FUNC_IRQ 6 82 + 80 83 struct sunxi_desc_function { 81 84 const char *name; 82 85 u8 muxval; ··· 97 94 int npins; 98 95 unsigned pin_base; 99 96 unsigned irq_banks; 97 + bool irq_read_needs_mux; 100 98 }; 101 99 102 100 struct sunxi_pinctrl_function {
+39 -15
drivers/powercap/intel_rapl.c
··· 73 73 74 74 #define TIME_WINDOW_MAX_MSEC 40000 75 75 #define TIME_WINDOW_MIN_MSEC 250 76 - 76 + #define ENERGY_UNIT_SCALE 1000 /* scale from driver unit to powercap unit */ 77 77 enum unit_type { 78 78 ARBITRARY_UNIT, /* no translation */ 79 79 POWER_UNIT, ··· 158 158 struct rapl_power_limit rpl[NR_POWER_LIMITS]; 159 159 u64 attr_map; /* track capabilities */ 160 160 unsigned int state; 161 + unsigned int domain_energy_unit; 161 162 int package_id; 162 163 }; 163 164 #define power_zone_to_rapl_domain(_zone) \ ··· 191 190 void (*set_floor_freq)(struct rapl_domain *rd, bool mode); 192 191 u64 (*compute_time_window)(struct rapl_package *rp, u64 val, 193 192 bool to_raw); 193 + unsigned int dram_domain_energy_unit; 194 194 }; 195 195 static struct rapl_defaults *rapl_defaults; 196 196 ··· 229 227 static int rapl_write_data_raw(struct rapl_domain *rd, 230 228 enum rapl_primitives prim, 231 229 unsigned long long value); 232 - static u64 rapl_unit_xlate(int package, enum unit_type type, u64 value, 230 + static u64 rapl_unit_xlate(struct rapl_domain *rd, int package, 231 + enum unit_type type, u64 value, 233 232 int to_raw); 234 233 static void package_power_limit_irq_save(int package_id); 235 234 ··· 308 305 309 306 static int get_max_energy_counter(struct powercap_zone *pcd_dev, u64 *energy) 310 307 { 311 - *energy = rapl_unit_xlate(0, ENERGY_UNIT, ENERGY_STATUS_MASK, 0); 308 + struct rapl_domain *rd = power_zone_to_rapl_domain(pcd_dev); 309 + 310 + *energy = rapl_unit_xlate(rd, 0, ENERGY_UNIT, ENERGY_STATUS_MASK, 0); 312 311 return 0; 313 312 } 314 313 ··· 644 639 rd->msrs[4] = MSR_DRAM_POWER_INFO; 645 640 rd->rpl[0].prim_id = PL1_ENABLE; 646 641 rd->rpl[0].name = pl1_name; 642 + rd->domain_energy_unit = 643 + rapl_defaults->dram_domain_energy_unit; 644 + if (rd->domain_energy_unit) 645 + pr_info("DRAM domain energy unit %dpj\n", 646 + rd->domain_energy_unit); 647 647 break; 648 648 } 649 649 if (mask) { ··· 658 648 } 659 649 } 660 650 661 - static u64 
rapl_unit_xlate(int package, enum unit_type type, u64 value, 651 + static u64 rapl_unit_xlate(struct rapl_domain *rd, int package, 652 + enum unit_type type, u64 value, 662 653 int to_raw) 663 654 { 664 655 u64 units = 1; 665 656 struct rapl_package *rp; 657 + u64 scale = 1; 666 658 667 659 rp = find_package_by_id(package); 668 660 if (!rp) ··· 675 663 units = rp->power_unit; 676 664 break; 677 665 case ENERGY_UNIT: 678 - units = rp->energy_unit; 666 + scale = ENERGY_UNIT_SCALE; 667 + /* per domain unit takes precedence */ 668 + if (rd && rd->domain_energy_unit) 669 + units = rd->domain_energy_unit; 670 + else 671 + units = rp->energy_unit; 679 672 break; 680 673 case TIME_UNIT: 681 674 return rapl_defaults->compute_time_window(rp, value, to_raw); ··· 690 673 }; 691 674 692 675 if (to_raw) 693 - return div64_u64(value, units); 676 + return div64_u64(value, units) * scale; 694 677 695 678 value *= units; 696 679 697 - return value; 680 + return div64_u64(value, scale); 698 681 } 699 682 700 683 /* in the order of enum rapl_primitives */ ··· 790 773 final = value & rp->mask; 791 774 final = final >> rp->shift; 792 775 if (xlate) 793 - *data = rapl_unit_xlate(rd->package_id, rp->unit, final, 0); 776 + *data = rapl_unit_xlate(rd, rd->package_id, rp->unit, final, 0); 794 777 else 795 778 *data = final; 796 779 ··· 816 799 "failed to read msr 0x%x on cpu %d\n", msr, cpu); 817 800 return -EIO; 818 801 } 819 - value = rapl_unit_xlate(rd->package_id, rp->unit, value, 1); 802 + value = rapl_unit_xlate(rd, rd->package_id, rp->unit, value, 1); 820 803 msr_val &= ~rp->mask; 821 804 msr_val |= value << rp->shift; 822 805 if (wrmsrl_safe_on_cpu(cpu, msr, msr_val)) { ··· 835 818 * calculate units differ on different CPUs. 836 819 * We convert the units to below format based on CPUs. 837 820 * i.e. 
838 - * energy unit: microJoules : Represented in microJoules by default 821 + * energy unit: picoJoules : Represented in picoJoules by default 839 822 * power unit : microWatts : Represented in milliWatts by default 840 823 * time unit : microseconds: Represented in seconds by default 841 824 */ ··· 851 834 } 852 835 853 836 value = (msr_val & ENERGY_UNIT_MASK) >> ENERGY_UNIT_OFFSET; 854 - rp->energy_unit = 1000000 / (1 << value); 837 + rp->energy_unit = ENERGY_UNIT_SCALE * 1000000 / (1 << value); 855 838 856 839 value = (msr_val & POWER_UNIT_MASK) >> POWER_UNIT_OFFSET; 857 840 rp->power_unit = 1000000 / (1 << value); ··· 859 842 value = (msr_val & TIME_UNIT_MASK) >> TIME_UNIT_OFFSET; 860 843 rp->time_unit = 1000000 / (1 << value); 861 844 862 - pr_debug("Core CPU package %d energy=%duJ, time=%dus, power=%duW\n", 845 + pr_debug("Core CPU package %d energy=%dpJ, time=%dus, power=%duW\n", 863 846 rp->id, rp->energy_unit, rp->time_unit, rp->power_unit); 864 847 865 848 return 0; ··· 876 859 return -ENODEV; 877 860 } 878 861 value = (msr_val & ENERGY_UNIT_MASK) >> ENERGY_UNIT_OFFSET; 879 - rp->energy_unit = 1 << value; 862 + rp->energy_unit = ENERGY_UNIT_SCALE * 1 << value; 880 863 881 864 value = (msr_val & POWER_UNIT_MASK) >> POWER_UNIT_OFFSET; 882 865 rp->power_unit = (1 << value) * 1000; ··· 884 867 value = (msr_val & TIME_UNIT_MASK) >> TIME_UNIT_OFFSET; 885 868 rp->time_unit = 1000000 / (1 << value); 886 869 887 - pr_debug("Atom package %d energy=%duJ, time=%dus, power=%duW\n", 870 + pr_debug("Atom package %d energy=%dpJ, time=%dus, power=%duW\n", 888 871 rp->id, rp->energy_unit, rp->time_unit, rp->power_unit); 889 872 890 873 return 0; ··· 1034 1017 .compute_time_window = rapl_compute_time_window_core, 1035 1018 }; 1036 1019 1020 + static const struct rapl_defaults rapl_defaults_hsw_server = { 1021 + .check_unit = rapl_check_unit_core, 1022 + .set_floor_freq = set_floor_freq_default, 1023 + .compute_time_window = rapl_compute_time_window_core, 1024 + 
.dram_domain_energy_unit = 15300, 1025 + }; 1026 + 1037 1027 static const struct rapl_defaults rapl_defaults_atom = { 1038 1028 .check_unit = rapl_check_unit_atom, 1039 1029 .set_floor_freq = set_floor_freq_atom, ··· 1061 1037 RAPL_CPU(0x3a, rapl_defaults_core),/* Ivy Bridge */ 1062 1038 RAPL_CPU(0x3c, rapl_defaults_core),/* Haswell */ 1063 1039 RAPL_CPU(0x3d, rapl_defaults_core),/* Broadwell */ 1064 - RAPL_CPU(0x3f, rapl_defaults_core),/* Haswell */ 1040 + RAPL_CPU(0x3f, rapl_defaults_hsw_server),/* Haswell servers */ 1065 1041 RAPL_CPU(0x45, rapl_defaults_core),/* Haswell ULT */ 1066 1042 RAPL_CPU(0x4C, rapl_defaults_atom),/* Braswell */ 1067 1043 RAPL_CPU(0x4A, rapl_defaults_atom),/* Tangier */
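The RAPL change moves the driver's internal energy unit from microjoules to picojoules and divides by `ENERGY_UNIT_SCALE` only when converting back to the powercap microjoule interface. The integer arithmetic can be checked in isolation (a userspace sketch of the driver's formulas; the MSR field value 14 is just a common example):

```c
#include <assert.h>
#include <stdint.h>

#define ENERGY_UNIT_SCALE 1000	/* driver unit (pJ) to powercap unit (uJ) */

/* per-count energy in picojoules, as in rapl_check_unit_core():
 * the MSR field encodes the unit as 1/2^value joules */
uint64_t energy_unit_pj(unsigned int msr_field)
{
	return (uint64_t)ENERGY_UNIT_SCALE * 1000000 / (1ULL << msr_field);
}

/* raw counter -> microjoules, mirroring rapl_unit_xlate() with to_raw=0:
 * multiply by the (pJ) unit, then scale down to uJ */
uint64_t raw_to_uj(uint64_t raw, uint64_t unit_pj)
{
	return raw * unit_pj / ENERGY_UNIT_SCALE;
}
```

Carrying picojoules internally is what lets the Haswell-server DRAM domain use its fixed 15300 pJ unit without losing precision, while userspace still sees microjoules.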
+17 -24
drivers/regulator/core.c
··· 1839 1839 } 1840 1840 1841 1841 if (rdev->ena_pin) { 1842 - ret = regulator_ena_gpio_ctrl(rdev, true); 1843 - if (ret < 0) 1844 - return ret; 1845 - rdev->ena_gpio_state = 1; 1842 + if (!rdev->ena_gpio_state) { 1843 + ret = regulator_ena_gpio_ctrl(rdev, true); 1844 + if (ret < 0) 1845 + return ret; 1846 + rdev->ena_gpio_state = 1; 1847 + } 1846 1848 } else if (rdev->desc->ops->enable) { 1847 1849 ret = rdev->desc->ops->enable(rdev); 1848 1850 if (ret < 0) ··· 1941 1939 trace_regulator_disable(rdev_get_name(rdev)); 1942 1940 1943 1941 if (rdev->ena_pin) { 1944 - ret = regulator_ena_gpio_ctrl(rdev, false); 1945 - if (ret < 0) 1946 - return ret; 1947 - rdev->ena_gpio_state = 0; 1942 + if (rdev->ena_gpio_state) { 1943 + ret = regulator_ena_gpio_ctrl(rdev, false); 1944 + if (ret < 0) 1945 + return ret; 1946 + rdev->ena_gpio_state = 0; 1947 + } 1948 1948 1949 1949 } else if (rdev->desc->ops->disable) { 1950 1950 ret = rdev->desc->ops->disable(rdev); ··· 3448 3444 if (attr == &dev_attr_requested_microamps.attr) 3449 3445 return rdev->desc->type == REGULATOR_CURRENT ? mode : 0; 3450 3446 3451 - /* all the other attributes exist to support constraints; 3452 - * don't show them if there are no constraints, or if the 3453 - * relevant supporting methods are missing. 
3454 - */ 3455 - if (!rdev->constraints) 3456 - return 0; 3457 - 3458 3447 /* constraints need specific supporting methods */ 3459 3448 if (attr == &dev_attr_min_microvolts.attr || 3460 3449 attr == &dev_attr_max_microvolts.attr) ··· 3630 3633 config->ena_gpio, ret); 3631 3634 goto wash; 3632 3635 } 3633 - 3634 - if (config->ena_gpio_flags & GPIOF_OUT_INIT_HIGH) 3635 - rdev->ena_gpio_state = 1; 3636 - 3637 - if (config->ena_gpio_invert) 3638 - rdev->ena_gpio_state = !rdev->ena_gpio_state; 3639 3636 } 3640 3637 3641 3638 /* set regulator constraints */ ··· 3798 3807 list_for_each_entry(rdev, &regulator_list, list) { 3799 3808 mutex_lock(&rdev->mutex); 3800 3809 if (rdev->use_count > 0 || rdev->constraints->always_on) { 3801 - error = _regulator_do_enable(rdev); 3802 - if (error) 3803 - ret = error; 3810 + if (!_regulator_is_enabled(rdev)) { 3811 + error = _regulator_do_enable(rdev); 3812 + if (error) 3813 + ret = error; 3814 + } 3804 3815 } else { 3805 3816 if (!have_full_constraints()) 3806 3817 goto unlock;
+9
drivers/regulator/da9210-regulator.c
··· 152 152 config.regmap = chip->regmap; 153 153 config.of_node = dev->of_node; 154 154 155 + /* Mask all interrupt sources to deassert interrupt line */ 156 + error = regmap_write(chip->regmap, DA9210_REG_MASK_A, ~0); 157 + if (!error) 158 + error = regmap_write(chip->regmap, DA9210_REG_MASK_B, ~0); 159 + if (error) { 160 + dev_err(&i2c->dev, "Failed to write to mask reg: %d\n", error); 161 + return error; 162 + } 163 + 155 164 rdev = devm_regulator_register(&i2c->dev, &da9210_reg, &config); 156 165 if (IS_ERR(rdev)) { 157 166 dev_err(&i2c->dev, "Failed to register DA9210 regulator\n");
+4
drivers/regulator/palmas-regulator.c
··· 1572 1572 if (!pmic) 1573 1573 return -ENOMEM; 1574 1574 1575 + if (of_device_is_compatible(node, "ti,tps659038-pmic")) 1576 + palmas_generic_regs_info[PALMAS_REG_REGEN2].ctrl_addr = 1577 + TPS659038_REGEN2_CTRL; 1578 + 1575 1579 pmic->dev = &pdev->dev; 1576 1580 pmic->palmas = palmas; 1577 1581 palmas->pmic = pmic;
+8
drivers/regulator/rk808-regulator.c
··· 235 235 .vsel_mask = RK808_LDO_VSEL_MASK, 236 236 .enable_reg = RK808_LDO_EN_REG, 237 237 .enable_mask = BIT(0), 238 + .enable_time = 400, 238 239 .owner = THIS_MODULE, 239 240 }, { 240 241 .name = "LDO_REG2", ··· 250 249 .vsel_mask = RK808_LDO_VSEL_MASK, 251 250 .enable_reg = RK808_LDO_EN_REG, 252 251 .enable_mask = BIT(1), 252 + .enable_time = 400, 253 253 .owner = THIS_MODULE, 254 254 }, { 255 255 .name = "LDO_REG3", ··· 265 263 .vsel_mask = RK808_BUCK4_VSEL_MASK, 266 264 .enable_reg = RK808_LDO_EN_REG, 267 265 .enable_mask = BIT(2), 266 + .enable_time = 400, 268 267 .owner = THIS_MODULE, 269 268 }, { 270 269 .name = "LDO_REG4", ··· 280 277 .vsel_mask = RK808_LDO_VSEL_MASK, 281 278 .enable_reg = RK808_LDO_EN_REG, 282 279 .enable_mask = BIT(3), 280 + .enable_time = 400, 283 281 .owner = THIS_MODULE, 284 282 }, { 285 283 .name = "LDO_REG5", ··· 295 291 .vsel_mask = RK808_LDO_VSEL_MASK, 296 292 .enable_reg = RK808_LDO_EN_REG, 297 293 .enable_mask = BIT(4), 294 + .enable_time = 400, 298 295 .owner = THIS_MODULE, 299 296 }, { 300 297 .name = "LDO_REG6", ··· 310 305 .vsel_mask = RK808_LDO_VSEL_MASK, 311 306 .enable_reg = RK808_LDO_EN_REG, 312 307 .enable_mask = BIT(5), 308 + .enable_time = 400, 313 309 .owner = THIS_MODULE, 314 310 }, { 315 311 .name = "LDO_REG7", ··· 325 319 .vsel_mask = RK808_LDO_VSEL_MASK, 326 320 .enable_reg = RK808_LDO_EN_REG, 327 321 .enable_mask = BIT(6), 322 + .enable_time = 400, 328 323 .owner = THIS_MODULE, 329 324 }, { 330 325 .name = "LDO_REG8", ··· 340 333 .vsel_mask = RK808_LDO_VSEL_MASK, 341 334 .enable_reg = RK808_LDO_EN_REG, 342 335 .enable_mask = BIT(7), 336 + .enable_time = 400, 343 337 .owner = THIS_MODULE, 344 338 }, { 345 339 .name = "SWITCH_REG1",
+1
drivers/regulator/tps65910-regulator.c
··· 17 17 #include <linux/module.h> 18 18 #include <linux/init.h> 19 19 #include <linux/err.h> 20 + #include <linux/of.h> 20 21 #include <linux/platform_device.h> 21 22 #include <linux/regulator/driver.h> 22 23 #include <linux/regulator/machine.h>
+16 -1
drivers/rpmsg/virtio_rpmsg_bus.c
··· 951 951 void *bufs_va; 952 952 int err = 0, i; 953 953 size_t total_buf_space; 954 + bool notify; 954 955 955 956 vrp = kzalloc(sizeof(*vrp), GFP_KERNEL); 956 957 if (!vrp) ··· 1031 1030 } 1032 1031 } 1033 1032 1033 + /* 1034 + * Prepare to kick but don't notify yet - we can't do this before 1035 + * device is ready. 1036 + */ 1037 + notify = virtqueue_kick_prepare(vrp->rvq); 1038 + 1039 + /* From this point on, we can notify and get callbacks. */ 1040 + virtio_device_ready(vdev); 1041 + 1034 1042 /* tell the remote processor it can start sending messages */ 1035 - virtqueue_kick(vrp->rvq); 1043 + /* 1044 + * this might be concurrent with callbacks, but we are only 1045 + * doing notify, not a full kick here, so that's ok. 1046 + */ 1047 + if (notify) 1048 + virtqueue_notify(vrp->rvq); 1036 1049 1037 1050 dev_info(&vdev->dev, "rpmsg host is online\n"); 1038 1051
+48 -14
drivers/rtc/rtc-at91rm9200.c
··· 31 31 #include <linux/io.h> 32 32 #include <linux/of.h> 33 33 #include <linux/of_device.h> 34 + #include <linux/suspend.h> 34 35 #include <linux/uaccess.h> 35 36 36 37 #include "rtc-at91rm9200.h" ··· 55 54 static int irq; 56 55 static DEFINE_SPINLOCK(at91_rtc_lock); 57 56 static u32 at91_rtc_shadow_imr; 57 + static bool suspended; 58 + static DEFINE_SPINLOCK(suspended_lock); 59 + static unsigned long cached_events; 60 + static u32 at91_rtc_imr; 58 61 59 62 static void at91_rtc_write_ier(u32 mask) 60 63 { ··· 295 290 struct rtc_device *rtc = platform_get_drvdata(pdev); 296 291 unsigned int rtsr; 297 292 unsigned long events = 0; 293 + int ret = IRQ_NONE; 298 294 295 + spin_lock(&suspended_lock); 299 296 rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_read_imr(); 300 297 if (rtsr) { /* this interrupt is shared! Is it ours? */ 301 298 if (rtsr & AT91_RTC_ALARM) ··· 311 304 312 305 at91_rtc_write(AT91_RTC_SCCR, rtsr); /* clear status reg */ 313 306 314 - rtc_update_irq(rtc, 1, events); 307 + if (!suspended) { 308 + rtc_update_irq(rtc, 1, events); 315 309 316 - dev_dbg(&pdev->dev, "%s(): num=%ld, events=0x%02lx\n", __func__, 317 - events >> 8, events & 0x000000FF); 310 + dev_dbg(&pdev->dev, "%s(): num=%ld, events=0x%02lx\n", 311 + __func__, events >> 8, events & 0x000000FF); 312 + } else { 313 + cached_events |= events; 314 + at91_rtc_write_idr(at91_rtc_imr); 315 + pm_system_wakeup(); 316 + } 318 317 319 - return IRQ_HANDLED; 318 + ret = IRQ_HANDLED; 320 319 } 321 - return IRQ_NONE; /* not handled */ 320 + spin_unlock(&suspended_lock); 321 + 322 + return ret; 322 323 } 323 324 324 325 static const struct at91_rtc_config at91rm9200_config = { ··· 416 401 AT91_RTC_CALEV); 417 402 418 403 ret = devm_request_irq(&pdev->dev, irq, at91_rtc_interrupt, 419 - IRQF_SHARED, 420 - "at91_rtc", pdev); 404 + IRQF_SHARED | IRQF_COND_SUSPEND, 405 + "at91_rtc", pdev); 421 406 if (ret) { 422 407 dev_err(&pdev->dev, "IRQ %d already in use.\n", irq); 423 408 return ret; ··· 469 454 470 455 
/* AT91RM9200 RTC Power management control */ 471 456 472 - static u32 at91_rtc_imr; 473 - 474 457 static int at91_rtc_suspend(struct device *dev) 475 458 { 476 459 /* this IRQ is shared with DBGU and other hardware which isn't ··· 477 464 at91_rtc_imr = at91_rtc_read_imr() 478 465 & (AT91_RTC_ALARM|AT91_RTC_SECEV); 479 466 if (at91_rtc_imr) { 480 - if (device_may_wakeup(dev)) 467 + if (device_may_wakeup(dev)) { 468 + unsigned long flags; 469 + 481 470 enable_irq_wake(irq); 482 - else 471 + 472 + spin_lock_irqsave(&suspended_lock, flags); 473 + suspended = true; 474 + spin_unlock_irqrestore(&suspended_lock, flags); 475 + } else { 483 476 at91_rtc_write_idr(at91_rtc_imr); 477 + } 484 478 } 485 479 return 0; 486 480 } 487 481 488 482 static int at91_rtc_resume(struct device *dev) 489 483 { 484 + struct rtc_device *rtc = dev_get_drvdata(dev); 485 + 490 486 if (at91_rtc_imr) { 491 - if (device_may_wakeup(dev)) 487 + if (device_may_wakeup(dev)) { 488 + unsigned long flags; 489 + 490 + spin_lock_irqsave(&suspended_lock, flags); 491 + 492 + if (cached_events) { 493 + rtc_update_irq(rtc, 1, cached_events); 494 + cached_events = 0; 495 + } 496 + 497 + suspended = false; 498 + spin_unlock_irqrestore(&suspended_lock, flags); 499 + 492 500 disable_irq_wake(irq); 493 - else 494 - at91_rtc_write_ier(at91_rtc_imr); 501 + } 502 + at91_rtc_write_ier(at91_rtc_imr); 495 503 } 496 504 return 0; 497 505 }
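The rtc-at91rm9200 patch defers event delivery while the system is suspending: the shared IRQ handler caches events in `cached_events` and calls `pm_system_wakeup()`, and resume replays the cache through `rtc_update_irq()`. The caching logic, stripped of locking and hardware access, looks roughly like this (a simplified single-threaded sketch, not the driver's exact code):

```c
#include <assert.h>

static unsigned long cached_events;	/* events seen while suspended */
static unsigned long delivered;		/* events reported to the RTC core */
static int suspended;

void rtc_suspend(void)
{
	suspended = 1;
}

/* IRQ handler: deliver immediately when awake, otherwise cache */
void rtc_irq(unsigned long events)
{
	if (!suspended)
		delivered |= events;
	else
		cached_events |= events;	/* replayed by rtc_resume() */
}

/* resume: flush anything that arrived during the sleep transition */
unsigned long rtc_resume(void)
{
	suspended = 0;
	if (cached_events) {
		delivered |= cached_events;
		cached_events = 0;
	}
	return delivered;
}

/* scripted sequence for testing: suspend, take two IRQs, resume */
unsigned long demo(void)
{
	delivered = 0;
	cached_events = 0;
	rtc_suspend();
	rtc_irq(0x01);
	rtc_irq(0x80);
	return rtc_resume();
}
```

In the real driver the same idea is guarded by `suspended_lock`, and `IRQF_COND_SUSPEND` is what allows the shared handler to run in this window at all.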
+63 -14
drivers/rtc/rtc-at91sam9.c
··· 23 23 #include <linux/io.h> 24 24 #include <linux/mfd/syscon.h> 25 25 #include <linux/regmap.h> 26 + #include <linux/suspend.h> 26 27 #include <linux/clk.h> 27 28 28 29 /* ··· 78 77 unsigned int gpbr_offset; 79 78 int irq; 80 79 struct clk *sclk; 80 + bool suspended; 81 + unsigned long events; 82 + spinlock_t lock; 81 83 }; 82 84 83 85 #define rtt_readl(rtc, field) \ ··· 275 271 return 0; 276 272 } 277 273 278 - /* 279 - * IRQ handler for the RTC 280 - */ 281 - static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc) 274 + static irqreturn_t at91_rtc_cache_events(struct sam9_rtc *rtc) 282 275 { 283 - struct sam9_rtc *rtc = _rtc; 284 276 u32 sr, mr; 285 - unsigned long events = 0; 286 277 287 278 /* Shared interrupt may be for another device. Note: reading 288 279 * SR clears it, so we must only read it in this irq handler! ··· 289 290 290 291 /* alarm status */ 291 292 if (sr & AT91_RTT_ALMS) 292 - events |= (RTC_AF | RTC_IRQF); 293 + rtc->events |= (RTC_AF | RTC_IRQF); 293 294 294 295 /* timer update/increment */ 295 296 if (sr & AT91_RTT_RTTINC) 296 - events |= (RTC_UF | RTC_IRQF); 297 - 298 - rtc_update_irq(rtc->rtcdev, 1, events); 299 - 300 - pr_debug("%s: num=%ld, events=0x%02lx\n", __func__, 301 - events >> 8, events & 0x000000FF); 297 + rtc->events |= (RTC_UF | RTC_IRQF); 302 298 303 299 return IRQ_HANDLED; 300 + } 301 + 302 + static void at91_rtc_flush_events(struct sam9_rtc *rtc) 303 + { 304 + if (!rtc->events) 305 + return; 306 + 307 + rtc_update_irq(rtc->rtcdev, 1, rtc->events); 308 + rtc->events = 0; 309 + 310 + pr_debug("%s: num=%ld, events=0x%02lx\n", __func__, 311 + rtc->events >> 8, rtc->events & 0x000000FF); 312 + } 313 + 314 + /* 315 + * IRQ handler for the RTC 316 + */ 317 + static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc) 318 + { 319 + struct sam9_rtc *rtc = _rtc; 320 + int ret; 321 + 322 + spin_lock(&rtc->lock); 323 + 324 + ret = at91_rtc_cache_events(rtc); 325 + 326 + /* We're called in suspended state */ 327 + if 
(rtc->suspended) { 328 + /* Mask irqs coming from this peripheral */ 329 + rtt_writel(rtc, MR, 330 + rtt_readl(rtc, MR) & 331 + ~(AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN)); 332 + /* Trigger a system wakeup */ 333 + pm_system_wakeup(); 334 + } else { 335 + at91_rtc_flush_events(rtc); 336 + } 337 + 338 + spin_unlock(&rtc->lock); 339 + 340 + return ret; 304 341 } 305 342 306 343 static const struct rtc_class_ops at91_rtc_ops = { ··· 456 421 457 422 /* register irq handler after we know what name we'll use */ 458 423 ret = devm_request_irq(&pdev->dev, rtc->irq, at91_rtc_interrupt, 459 - IRQF_SHARED, dev_name(&rtc->rtcdev->dev), rtc); 424 + IRQF_SHARED | IRQF_COND_SUSPEND, 425 + dev_name(&rtc->rtcdev->dev), rtc); 460 426 if (ret) { 461 427 dev_dbg(&pdev->dev, "can't share IRQ %d?\n", rtc->irq); 462 428 return ret; ··· 518 482 rtc->imr = mr & (AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN); 519 483 if (rtc->imr) { 520 484 if (device_may_wakeup(dev) && (mr & AT91_RTT_ALMIEN)) { 485 + unsigned long flags; 486 + 521 487 enable_irq_wake(rtc->irq); 488 + spin_lock_irqsave(&rtc->lock, flags); 489 + rtc->suspended = true; 490 + spin_unlock_irqrestore(&rtc->lock, flags); 522 491 /* don't let RTTINC cause wakeups */ 523 492 if (mr & AT91_RTT_RTTINCIEN) 524 493 rtt_writel(rtc, MR, mr & ~AT91_RTT_RTTINCIEN); ··· 540 499 u32 mr; 541 500 542 501 if (rtc->imr) { 502 + unsigned long flags; 503 + 543 504 if (device_may_wakeup(dev)) 544 505 disable_irq_wake(rtc->irq); 545 506 mr = rtt_readl(rtc, MR); 546 507 rtt_writel(rtc, MR, mr | rtc->imr); 508 + 509 + spin_lock_irqsave(&rtc->lock, flags); 510 + rtc->suspended = false; 511 + at91_rtc_cache_events(rtc); 512 + at91_rtc_flush_events(rtc); 513 + spin_unlock_irqrestore(&rtc->lock, flags); 547 514 } 548 515 549 516 return 0;
+9 -8
drivers/rtc/rtc-mrst.c
··· 413 413 mrst->dev = NULL; 414 414 } 415 415 416 - #ifdef CONFIG_PM 417 - static int mrst_suspend(struct device *dev, pm_message_t mesg) 416 + #ifdef CONFIG_PM_SLEEP 417 + static int mrst_suspend(struct device *dev) 418 418 { 419 419 struct mrst_rtc *mrst = dev_get_drvdata(dev); 420 420 unsigned char tmp; ··· 453 453 */ 454 454 static inline int mrst_poweroff(struct device *dev) 455 455 { 456 - return mrst_suspend(dev, PMSG_HIBERNATE); 456 + return mrst_suspend(dev); 457 457 } 458 458 459 459 static int mrst_resume(struct device *dev) ··· 490 490 return 0; 491 491 } 492 492 493 + static SIMPLE_DEV_PM_OPS(mrst_pm_ops, mrst_suspend, mrst_resume); 494 + #define MRST_PM_OPS (&mrst_pm_ops) 495 + 493 496 #else 494 - #define mrst_suspend NULL 495 - #define mrst_resume NULL 497 + #define MRST_PM_OPS NULL 496 498 497 499 static inline int mrst_poweroff(struct device *dev) 498 500 { ··· 531 529 .remove = vrtc_mrst_platform_remove, 532 530 .shutdown = vrtc_mrst_platform_shutdown, 533 531 .driver = { 534 - .name = (char *) driver_name, 535 - .suspend = mrst_suspend, 536 - .resume = mrst_resume, 532 + .name = driver_name, 533 + .pm = MRST_PM_OPS, 537 534 } 538 535 }; 539 536
+1
drivers/rtc/rtc-s3c.c
··· 849 849 850 850 static struct s3c_rtc_data const s3c6410_rtc_data = { 851 851 .max_user_freq = 32768, 852 + .needs_src_clk = true, 852 853 .irq_handler = s3c6410_rtc_irq, 853 854 .set_freq = s3c6410_rtc_setfreq, 854 855 .enable_tick = s3c6410_rtc_enable_tick,
+1 -1
drivers/s390/block/dcssblk.c
··· 547 547 * parse input 548 548 */ 549 549 num_of_segments = 0; 550 - for (i = 0; ((buf[i] != '\0') && (buf[i] != '\n') && i < count); i++) { 550 + for (i = 0; (i < count && (buf[i] != '\0') && (buf[i] != '\n')); i++) { 551 551 for (j = i; (buf[j] != ':') && 552 552 (buf[j] != '\0') && 553 553 (buf[j] != '\n') &&
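The dcssblk one-liner is a classic short-circuit ordering fix: the bound test `i < count` must come before `buf[i]` is dereferenced, otherwise the loop reads one byte past the caller-supplied buffer before deciding to stop. A standalone sketch of the corrected loop condition:

```c
#include <assert.h>
#include <stddef.h>

/* returns the index of the first '\0'/'\n', or count if none is found
 * within bounds; && evaluates left to right, so buf[i] is only read
 * after i < count has been confirmed */
size_t scan_len(const char *buf, size_t count)
{
	size_t i;

	for (i = 0; i < count && buf[i] != '\0' && buf[i] != '\n'; i++)
		;
	return i;
}
```

With the original ordering (`buf[i] != '\0' && ... && i < count`) the terminator tests touch `buf[count]` before the bound check rejects it, which is an out-of-bounds read even when the result happens to be the same.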
+1 -1
drivers/s390/block/scm_blk_cluster.c
··· 92 92 add = 0; 93 93 continue; 94 94 } 95 - for (pos = 0; pos <= iter->aob->request.msb_count; pos++) { 95 + for (pos = 0; pos < iter->aob->request.msb_count; pos++) { 96 96 if (clusters_intersect(req, iter->request[pos]) && 97 97 (rq_data_dir(req) == WRITE || 98 98 rq_data_dir(iter->request[pos]) == WRITE)) {
+2 -1
drivers/scsi/ipr.c
··· 6815 6815 }; 6816 6816 6817 6817 static struct ata_port_info sata_port_info = { 6818 - .flags = ATA_FLAG_SATA | ATA_FLAG_PIO_DMA, 6818 + .flags = ATA_FLAG_SATA | ATA_FLAG_PIO_DMA | 6819 + ATA_FLAG_SAS_HOST, 6819 6820 .pio_mask = ATA_PIO4_ONLY, 6820 6821 .mwdma_mask = ATA_MWDMA2, 6821 6822 .udma_mask = ATA_UDMA6,
+2 -1
drivers/scsi/libsas/sas_ata.c
··· 547 547 }; 548 548 549 549 static struct ata_port_info sata_port_info = { 550 - .flags = ATA_FLAG_SATA | ATA_FLAG_PIO_DMA | ATA_FLAG_NCQ, 550 + .flags = ATA_FLAG_SATA | ATA_FLAG_PIO_DMA | ATA_FLAG_NCQ | 551 + ATA_FLAG_SAS_HOST, 551 552 .pio_mask = ATA_PIO4, 552 553 .mwdma_mask = ATA_MWDMA2, 553 554 .udma_mask = ATA_UDMA6,
+4 -2
drivers/scsi/libsas/sas_discover.c
··· 500 500 struct sas_discovery_event *ev = to_sas_discovery_event(work); 501 501 struct asd_sas_port *port = ev->port; 502 502 struct sas_ha_struct *ha = port->ha; 503 + struct domain_device *ddev = port->port_dev; 503 504 504 505 /* prevent revalidation from finding sata links in recovery */ 505 506 mutex_lock(&ha->disco_mutex); ··· 515 514 SAS_DPRINTK("REVALIDATING DOMAIN on port %d, pid:%d\n", port->id, 516 515 task_pid_nr(current)); 517 516 518 - if (port->port_dev) 519 - res = sas_ex_revalidate_domain(port->port_dev); 517 + if (ddev && (ddev->dev_type == SAS_FANOUT_EXPANDER_DEVICE || 518 + ddev->dev_type == SAS_EDGE_EXPANDER_DEVICE)) 519 + res = sas_ex_revalidate_domain(ddev); 520 520 521 521 SAS_DPRINTK("done REVALIDATING DOMAIN on port %d, pid:%d, res 0x%x\n", 522 522 port->id, task_pid_nr(current), res);
+1 -1
drivers/scsi/qla2xxx/tcm_qla2xxx.c
··· 1596 1596 /* 1597 1597 * Finally register the new FC Nexus with TCM 1598 1598 */ 1599 - __transport_register_session(se_nacl->se_tpg, se_nacl, se_sess, sess); 1599 + transport_register_session(se_nacl->se_tpg, se_nacl, se_sess, sess); 1600 1600 1601 1601 return 0; 1602 1602 }
+6 -6
drivers/spi/spi-atmel.c
··· 764 764 (unsigned long long)xfer->rx_dma); 765 765 } 766 766 767 - /* REVISIT: We're waiting for ENDRX before we start the next 767 + /* REVISIT: We're waiting for RXBUFF before we start the next 768 768 * transfer because we need to handle some difficult timing 769 - * issues otherwise. If we wait for ENDTX in one transfer and 770 - * then starts waiting for ENDRX in the next, it's difficult 771 - * to tell the difference between the ENDRX interrupt we're 772 - * actually waiting for and the ENDRX interrupt of the 769 + * issues otherwise. If we wait for TXBUFE in one transfer and 770 + * then starts waiting for RXBUFF in the next, it's difficult 771 + * to tell the difference between the RXBUFF interrupt we're 772 + * actually waiting for and the RXBUFF interrupt of the 773 773 * previous transfer. 774 774 * 775 775 * It should be doable, though. Just not now... 776 776 */ 777 - spi_writel(as, IER, SPI_BIT(ENDRX) | SPI_BIT(OVRES)); 777 + spi_writel(as, IER, SPI_BIT(RXBUFF) | SPI_BIT(OVRES)); 778 778 spi_writel(as, PTCR, SPI_BIT(TXTEN) | SPI_BIT(RXTEN)); 779 779 } 780 780
+10 -2
drivers/spi/spi-dw-mid.c
··· 108 108 { 109 109 struct dw_spi *dws = arg; 110 110 111 - if (test_and_clear_bit(TX_BUSY, &dws->dma_chan_busy) & BIT(RX_BUSY)) 111 + clear_bit(TX_BUSY, &dws->dma_chan_busy); 112 + if (test_bit(RX_BUSY, &dws->dma_chan_busy)) 112 113 return; 113 114 dw_spi_xfer_done(dws); 114 115 } ··· 140 139 1, 141 140 DMA_MEM_TO_DEV, 142 141 DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 142 + if (!txdesc) 143 + return NULL; 144 + 143 145 txdesc->callback = dw_spi_dma_tx_done; 144 146 txdesc->callback_param = dws; 145 147 ··· 157 153 { 158 154 struct dw_spi *dws = arg; 159 155 160 - if (test_and_clear_bit(RX_BUSY, &dws->dma_chan_busy) & BIT(TX_BUSY)) 156 + clear_bit(RX_BUSY, &dws->dma_chan_busy); 157 + if (test_bit(TX_BUSY, &dws->dma_chan_busy)) 161 158 return; 162 159 dw_spi_xfer_done(dws); 163 160 } ··· 189 184 1, 190 185 DMA_DEV_TO_MEM, 191 186 DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 187 + if (!rxdesc) 188 + return NULL; 189 + 192 190 rxdesc->callback = dw_spi_dma_rx_done; 193 191 rxdesc->callback_param = dws; 194 192
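The spi-dw-mid fix is worth spelling out: `test_and_clear_bit()` returns only the old value of the *one* bit it clears (0 or 1), so the old code's `test_and_clear_bit(TX_BUSY, ...) & BIT(RX_BUSY)` could never be non-zero and the "other channel still busy" early return never fired. Userspace models of the bit helpers (single-threaded sketch, ignoring the atomicity of the real kernel ops) demonstrate both versions:

```c
#include <assert.h>

#define TX_BUSY 0
#define RX_BUSY 1

static int test_and_clear_bit(int nr, unsigned long *addr)
{
	int old = (*addr >> nr) & 1;

	*addr &= ~(1UL << nr);
	return old;
}

static int test_bit(int nr, const unsigned long *addr)
{
	return (*addr >> nr) & 1;
}

static void clear_bit(int nr, unsigned long *addr)
{
	*addr &= ~(1UL << nr);
}

/* buggy: the helper returns 0 or 1, ANDed with BIT(RX_BUSY) == 2,
 * so this is always 0 even when RX is genuinely busy */
int buggy_result(void)
{
	unsigned long busy = (1UL << TX_BUSY) | (1UL << RX_BUSY);

	return test_and_clear_bit(TX_BUSY, &busy) & (1UL << RX_BUSY);
}

/* fixed: clear our own bit, then test the other bit explicitly */
int fixed_result(void)
{
	unsigned long busy = (1UL << TX_BUSY) | (1UL << RX_BUSY);

	clear_bit(TX_BUSY, &busy);
	return test_bit(RX_BUSY, &busy);
}
```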
+2 -2
drivers/spi/spi-dw-pci.c
··· 36 36 37 37 static struct spi_pci_desc spi_pci_mid_desc_1 = { 38 38 .setup = dw_spi_mid_init, 39 - .num_cs = 32, 39 + .num_cs = 5, 40 40 .bus_num = 0, 41 41 }; 42 42 43 43 static struct spi_pci_desc spi_pci_mid_desc_2 = { 44 44 .setup = dw_spi_mid_init, 45 - .num_cs = 4, 45 + .num_cs = 2, 46 46 .bus_num = 1, 47 47 }; 48 48
+2 -2
drivers/spi/spi-dw.c
··· 621 621 if (!dws->fifo_len) { 622 622 u32 fifo; 623 623 624 - for (fifo = 2; fifo <= 256; fifo++) { 624 + for (fifo = 1; fifo < 256; fifo++) { 625 625 dw_writew(dws, DW_SPI_TXFLTR, fifo); 626 626 if (fifo != dw_readw(dws, DW_SPI_TXFLTR)) 627 627 break; 628 628 } 629 629 dw_writew(dws, DW_SPI_TXFLTR, 0); 630 630 631 - dws->fifo_len = (fifo == 2) ? 0 : fifo - 1; 631 + dws->fifo_len = (fifo == 1) ? 0 : fifo; 632 632 dev_dbg(dev, "Detected FIFO size: %u bytes\n", dws->fifo_len); 633 633 } 634 634 }
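The spi-dw hunk fixes an off-by-one in FIFO depth detection: the driver writes increasing thresholds to `DW_SPI_TXFLTR` until one fails to latch; the first rejected value is the depth. A mock-register sketch of the corrected loop (the register model is an assumption for illustration; real hardware latches thresholds below the FIFO depth):

```c
#include <assert.h>

static unsigned int txfltr;	/* mock TXFLTR threshold register */
static unsigned int hw_depth;	/* hidden hardware FIFO depth */

/* mock write: hardware only latches thresholds it can represent */
static void wr(unsigned int v)
{
	if (v < hw_depth)
		txfltr = v;
}

/* probe as in the fixed spi_hw_init(): start at 1, stop at the first
 * value that does not read back, and report that value as the depth */
unsigned int detect_fifo_len(unsigned int depth)
{
	unsigned int fifo;

	hw_depth = depth;
	txfltr = 0;
	for (fifo = 1; fifo < 256; fifo++) {
		wr(fifo);
		if (txfltr != fifo)
			break;
	}
	wr(0);	/* restore the threshold */

	return (fifo == 1) ? 0 : fifo;
}
```

The old loop started at 2 and reported `fifo - 1`, undercounting the depth by one; starting at 1 also lets a single-entry FIFO be distinguished from "no FIFO detected".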
+7
drivers/spi/spi-img-spfi.c
··· 459 459 unsigned long flags; 460 460 int ret; 461 461 462 + if (xfer->len > SPFI_TRANSACTION_TSIZE_MASK) { 463 + dev_err(spfi->dev, 464 + "Transfer length (%d) is greater than the max supported (%d)", 465 + xfer->len, SPFI_TRANSACTION_TSIZE_MASK); 466 + return -EINVAL; 467 + } 468 + 462 469 /* 463 470 * Stop all DMA and reset the controller if the previous transaction 464 471 * timed-out and never completed it's DMA.
+1 -1
drivers/spi/spi-pl022.c
··· 534 534 pl022->cur_msg = NULL; 535 535 pl022->cur_transfer = NULL; 536 536 pl022->cur_chip = NULL; 537 - spi_finalize_current_message(pl022->master); 538 537 539 538 /* disable the SPI/SSP operation */ 540 539 writew((readw(SSP_CR1(pl022->virtbase)) & 541 540 (~SSP_CR1_MASK_SSE)), SSP_CR1(pl022->virtbase)); 542 541 542 + spi_finalize_current_message(pl022->master); 543 543 } 544 544 545 545 /**
+5 -4
drivers/spi/spi-qup.c
··· 498 498 struct resource *res; 499 499 struct device *dev; 500 500 void __iomem *base; 501 - u32 max_freq, iomode; 501 + u32 max_freq, iomode, num_cs; 502 502 int ret, irq, size; 503 503 504 504 dev = &pdev->dev; ··· 550 550 } 551 551 552 552 /* use num-cs unless not present or out of range */ 553 - if (of_property_read_u16(dev->of_node, "num-cs", 554 - &master->num_chipselect) || 555 - (master->num_chipselect > SPI_NUM_CHIPSELECTS)) 553 + if (of_property_read_u32(dev->of_node, "num-cs", &num_cs) || 554 + num_cs > SPI_NUM_CHIPSELECTS) 556 555 master->num_chipselect = SPI_NUM_CHIPSELECTS; 556 + else 557 + master->num_chipselect = num_cs; 557 558 558 559 master->bus_num = pdev->id; 559 560 master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LOOP;
+22
drivers/spi/spi-ti-qspi.c
··· 101 101 #define QSPI_FLEN(n) ((n - 1) << 0) 102 102 103 103 /* STATUS REGISTER */ 104 + #define BUSY 0x01 104 105 #define WC 0x02 105 106 106 107 /* INTERRUPT REGISTER */ ··· 200 199 ti_qspi_write(qspi, ctx_reg->clkctrl, QSPI_SPI_CLOCK_CNTRL_REG); 201 200 } 202 201 202 + static inline u32 qspi_is_busy(struct ti_qspi *qspi) 203 + { 204 + u32 stat; 205 + unsigned long timeout = jiffies + QSPI_COMPLETION_TIMEOUT; 206 + 207 + stat = ti_qspi_read(qspi, QSPI_SPI_STATUS_REG); 208 + while ((stat & BUSY) && time_after(timeout, jiffies)) { 209 + cpu_relax(); 210 + stat = ti_qspi_read(qspi, QSPI_SPI_STATUS_REG); 211 + } 212 + 213 + WARN(stat & BUSY, "qspi busy\n"); 214 + return stat & BUSY; 215 + } 216 + 203 217 static int qspi_write_msg(struct ti_qspi *qspi, struct spi_transfer *t) 204 218 { 205 219 int wlen, count; ··· 227 211 wlen = t->bits_per_word >> 3; /* in bytes */ 228 212 229 213 while (count) { 214 + if (qspi_is_busy(qspi)) 215 + return -EBUSY; 216 + 230 217 switch (wlen) { 231 218 case 1: 232 219 dev_dbg(qspi->dev, "tx cmd %08x dc %08x data %02x\n", ··· 285 266 286 267 while (count) { 287 268 dev_dbg(qspi->dev, "rx cmd %08x dc %08x\n", cmd, qspi->dc); 269 + if (qspi_is_busy(qspi)) 270 + return -EBUSY; 271 + 288 272 ti_qspi_write(qspi, cmd, QSPI_SPI_CMD_REG); 289 273 if (!wait_for_completion_timeout(&qspi->transfer_complete, 290 274 QSPI_COMPLETION_TIMEOUT)) {
+3 -2
drivers/spi/spi.c
··· 1105 1105 "failed to unprepare message: %d\n", ret); 1106 1106 } 1107 1107 } 1108 + 1109 + trace_spi_message_done(mesg); 1110 + 1108 1111 master->cur_msg_prepared = false; 1109 1112 1110 1113 mesg->state = NULL; 1111 1114 if (mesg->complete) 1112 1115 mesg->complete(mesg->context); 1113 - 1114 - trace_spi_message_done(mesg); 1115 1116 } 1116 1117 EXPORT_SYMBOL_GPL(spi_finalize_current_message); 1117 1118
+1 -2
drivers/staging/comedi/drivers/adv_pci1710.c
··· 426 426 unsigned int *data) 427 427 { 428 428 struct pci1710_private *devpriv = dev->private; 429 - unsigned int chan = CR_CHAN(insn->chanspec); 430 429 int ret = 0; 431 430 int i; 432 431 ··· 446 447 if (ret) 447 448 break; 448 449 449 - ret = pci171x_ai_read_sample(dev, s, chan, &val); 450 + ret = pci171x_ai_read_sample(dev, s, 0, &val); 450 451 if (ret) 451 452 break; 452 453
+3 -2
drivers/staging/comedi/drivers/comedi_isadma.c
··· 91 91 stalled++; 92 92 if (stalled > 10) 93 93 break; 94 + } else { 95 + residue = new_residue; 96 + stalled = 0; 94 97 } 95 - residue = new_residue; 96 - stalled = 0; 97 98 } 98 99 return residue; 99 100 }
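The comedi_isadma fix moves the `residue`/`stalled` updates into an `else` branch: previously the stall counter was reset unconditionally on every poll, so the `stalled > 10` bailout could never trigger. A sketch of the corrected logic, fed with a canned sequence of residue readings instead of a real DMA controller:

```c
#include <assert.h>

/* Returns the final residue, giving up once the reading has failed to
 * make progress for more than 10 consecutive polls. */
static unsigned poll_dma_residue(const unsigned *readings, int n)
{
	unsigned residue = readings[0];
	int stalled = 0;
	int i;

	for (i = 1; i < n; i++) {
		unsigned new_residue = readings[i];

		if (new_residue == residue) {
			stalled++;
			if (stalled > 10)
				break;	/* DMA is stuck, stop waiting */
		} else {
			/* only a changed reading resets the counter */
			residue = new_residue;
			stalled = 0;
		}
	}
	return residue;
}
```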
-71
drivers/staging/comedi/drivers/vmk80xx.c
··· 103 103 VMK8061_MODEL 104 104 }; 105 105 106 - struct firmware_version { 107 - unsigned char ic3_vers[32]; /* USB-Controller */ 108 - unsigned char ic6_vers[32]; /* CPU */ 109 - }; 110 - 111 106 static const struct comedi_lrange vmk8061_range = { 112 107 2, { 113 108 UNI_RANGE(5), ··· 151 156 struct vmk80xx_private { 152 157 struct usb_endpoint_descriptor *ep_rx; 153 158 struct usb_endpoint_descriptor *ep_tx; 154 - struct firmware_version fw; 155 159 struct semaphore limit_sem; 156 160 unsigned char *usb_rx_buf; 157 161 unsigned char *usb_tx_buf; 158 162 enum vmk80xx_model model; 159 163 }; 160 - 161 - static int vmk80xx_check_data_link(struct comedi_device *dev) 162 - { 163 - struct vmk80xx_private *devpriv = dev->private; 164 - struct usb_device *usb = comedi_to_usb_dev(dev); 165 - unsigned int tx_pipe; 166 - unsigned int rx_pipe; 167 - unsigned char tx[1]; 168 - unsigned char rx[2]; 169 - 170 - tx_pipe = usb_sndbulkpipe(usb, 0x01); 171 - rx_pipe = usb_rcvbulkpipe(usb, 0x81); 172 - 173 - tx[0] = VMK8061_CMD_RD_PWR_STAT; 174 - 175 - /* 176 - * Check that IC6 (PIC16F871) is powered and 177 - * running and the data link between IC3 and 178 - * IC6 is working properly 179 - */ 180 - usb_bulk_msg(usb, tx_pipe, tx, 1, NULL, devpriv->ep_tx->bInterval); 181 - usb_bulk_msg(usb, rx_pipe, rx, 2, NULL, HZ * 10); 182 - 183 - return (int)rx[1]; 184 - } 185 - 186 - static void vmk80xx_read_eeprom(struct comedi_device *dev, int flag) 187 - { 188 - struct vmk80xx_private *devpriv = dev->private; 189 - struct usb_device *usb = comedi_to_usb_dev(dev); 190 - unsigned int tx_pipe; 191 - unsigned int rx_pipe; 192 - unsigned char tx[1]; 193 - unsigned char rx[64]; 194 - int cnt; 195 - 196 - tx_pipe = usb_sndbulkpipe(usb, 0x01); 197 - rx_pipe = usb_rcvbulkpipe(usb, 0x81); 198 - 199 - tx[0] = VMK8061_CMD_RD_VERSION; 200 - 201 - /* 202 - * Read the firmware version info of IC3 and 203 - * IC6 from the internal EEPROM of the IC 204 - */ 205 - usb_bulk_msg(usb, tx_pipe, tx, 1, NULL, devpriv->ep_tx->bInterval); 206 - usb_bulk_msg(usb, rx_pipe, rx, 64, &cnt, HZ * 10); 207 - 208 - rx[cnt] = '\0'; 209 - 210 - if (flag & IC3_VERSION) 211 - strncpy(devpriv->fw.ic3_vers, rx + 1, 24); 212 - else /* IC6_VERSION */ 213 - strncpy(devpriv->fw.ic6_vers, rx + 25, 24); 214 - } 215 164 216 165 static void vmk80xx_do_bulk_msg(struct comedi_device *dev) 217 166 { ··· 816 877 sema_init(&devpriv->limit_sem, 8); 817 878 818 879 usb_set_intfdata(intf, devpriv); 819 - 820 - if (devpriv->model == VMK8061_MODEL) { 821 - vmk80xx_read_eeprom(dev, IC3_VERSION); 822 - dev_info(&intf->dev, "%s\n", devpriv->fw.ic3_vers); 823 - 824 - if (vmk80xx_check_data_link(dev)) { 825 - vmk80xx_read_eeprom(dev, IC6_VERSION); 826 - dev_info(&intf->dev, "%s\n", devpriv->fw.ic6_vers); 827 - } 828 - } 829 880 830 881 if (devpriv->model == VMK8055_MODEL) 831 882 vmk80xx_reset_device(dev);
+1
drivers/staging/iio/Kconfig
··· 38 38 config IIO_SIMPLE_DUMMY_BUFFER 39 39 bool "Buffered capture support" 40 40 select IIO_BUFFER 41 + select IIO_TRIGGER 41 42 select IIO_KFIFO_BUF 42 43 help 43 44 Add buffered data capture to the simple dummy driver.
+106 -103
drivers/staging/iio/adc/mxs-lradc.c
··· 214 214 unsigned long is_divided; 215 215 216 216 /* 217 - * Touchscreen LRADC channels receives a private slot in the CTRL4 218 - * register, the slot #7. Therefore only 7 slots instead of 8 in the 219 - * CTRL4 register can be mapped to LRADC channels when using the 220 - * touchscreen. 221 - * 217 + * When the touchscreen is enabled, we give it two private virtual 218 + * channels: #6 and #7. This means that only 6 virtual channels (instead 219 + * of 8) will be available for buffered capture. 220 + */ 221 + #define TOUCHSCREEN_VCHANNEL1 7 222 + #define TOUCHSCREEN_VCHANNEL2 6 223 + #define BUFFER_VCHANS_LIMITED 0x3f 224 + #define BUFFER_VCHANS_ALL 0xff 225 + u8 buffer_vchans; 226 + 227 + /* 222 228 * Furthermore, certain LRADC channels are shared between touchscreen 223 229 * and/or touch-buttons and generic LRADC block. Therefore when using 224 230 * either of these, these channels are not available for the regular ··· 348 342 #define LRADC_CTRL4 0x140 349 343 #define LRADC_CTRL4_LRADCSELECT_MASK(n) (0xf << ((n) * 4)) 350 344 #define LRADC_CTRL4_LRADCSELECT_OFFSET(n) ((n) * 4) 345 + #define LRADC_CTRL4_LRADCSELECT(n, x) \ 346 + (((x) << LRADC_CTRL4_LRADCSELECT_OFFSET(n)) & \ 347 + LRADC_CTRL4_LRADCSELECT_MASK(n)) 351 348 352 349 #define LRADC_RESOLUTION 12 353 350 #define LRADC_SINGLE_SAMPLE_MASK ((1 << LRADC_RESOLUTION) - 1) ··· 425 416 LRADC_STATUS_TOUCH_DETECT_RAW); 426 417 } 427 418 419 + static void mxs_lradc_map_channel(struct mxs_lradc *lradc, unsigned vch, 420 + unsigned ch) 421 + { 422 + mxs_lradc_reg_clear(lradc, LRADC_CTRL4_LRADCSELECT_MASK(vch), 423 + LRADC_CTRL4); 424 + mxs_lradc_reg_set(lradc, LRADC_CTRL4_LRADCSELECT(vch, ch), LRADC_CTRL4); 425 + } 426 + 428 427 static void mxs_lradc_setup_ts_channel(struct mxs_lradc *lradc, unsigned ch) 429 428 { 430 429 /* ··· 467 450 LRADC_DELAY_DELAY(lradc->over_sample_delay - 1), 468 451 LRADC_DELAY(3)); 469 452 470 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ(2) | 471 - LRADC_CTRL1_LRADC_IRQ(3) | LRADC_CTRL1_LRADC_IRQ(4) | 472 - LRADC_CTRL1_LRADC_IRQ(5), LRADC_CTRL1); 453 + mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ(ch), LRADC_CTRL1); 473 454 474 - /* wake us again, when the complete conversion is done */ 475 - mxs_lradc_reg_set(lradc, LRADC_CTRL1_LRADC_IRQ_EN(ch), LRADC_CTRL1); 476 455 /* 477 456 * after changing the touchscreen plates setting 478 457 * the signals need some initial time to settle. Start the ··· 522 509 LRADC_DELAY_DELAY(lradc->over_sample_delay - 1), 523 510 LRADC_DELAY(3)); 524 511 525 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ(2) | 526 - LRADC_CTRL1_LRADC_IRQ(3) | LRADC_CTRL1_LRADC_IRQ(4) | 527 - LRADC_CTRL1_LRADC_IRQ(5), LRADC_CTRL1); 512 + mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ(ch2), LRADC_CTRL1); 528 513 529 - /* wake us again, when the conversions are done */ 530 - mxs_lradc_reg_set(lradc, LRADC_CTRL1_LRADC_IRQ_EN(ch2), LRADC_CTRL1); 531 514 /* 532 515 * after changing the touchscreen plates setting 533 516 * the signals need some initial time to settle. Start the ··· 589 580 #define TS_CH_XM 4 590 581 #define TS_CH_YM 5 591 582 592 - static int mxs_lradc_read_ts_channel(struct mxs_lradc *lradc) 593 - { 594 - u32 reg; 595 - int val; 596 - 597 - reg = readl(lradc->base + LRADC_CTRL1); 598 - 599 - /* only channels 3 to 5 are of interest here */ 600 - if (reg & LRADC_CTRL1_LRADC_IRQ(TS_CH_YP)) { 601 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ_EN(TS_CH_YP) | 602 - LRADC_CTRL1_LRADC_IRQ(TS_CH_YP), LRADC_CTRL1); 603 - val = mxs_lradc_read_raw_channel(lradc, TS_CH_YP); 604 - } else if (reg & LRADC_CTRL1_LRADC_IRQ(TS_CH_XM)) { 605 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ_EN(TS_CH_XM) | 606 - LRADC_CTRL1_LRADC_IRQ(TS_CH_XM), LRADC_CTRL1); 607 - val = mxs_lradc_read_raw_channel(lradc, TS_CH_XM); 608 - } else if (reg & LRADC_CTRL1_LRADC_IRQ(TS_CH_YM)) { 609 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ_EN(TS_CH_YM) | 610 - LRADC_CTRL1_LRADC_IRQ(TS_CH_YM), LRADC_CTRL1); 611 - val = mxs_lradc_read_raw_channel(lradc, TS_CH_YM); 612 - } else { 613 - return -EIO; 614 - } 615 - 616 - mxs_lradc_reg_wrt(lradc, 0, LRADC_DELAY(2)); 617 - mxs_lradc_reg_wrt(lradc, 0, LRADC_DELAY(3)); 618 - 619 - return val; 620 - } 621 - 622 583 /* 623 584 * YP(open)--+-------------+ 624 585 * | |--+ ··· 632 653 mxs_lradc_reg_set(lradc, mxs_lradc_drive_x_plate(lradc), LRADC_CTRL0); 633 654 634 655 lradc->cur_plate = LRADC_SAMPLE_X; 635 - mxs_lradc_setup_ts_channel(lradc, TS_CH_YP); 656 + mxs_lradc_map_channel(lradc, TOUCHSCREEN_VCHANNEL1, TS_CH_YP); 657 + mxs_lradc_setup_ts_channel(lradc, TOUCHSCREEN_VCHANNEL1); 636 658 } 637 659 638 660 /* ··· 654 674 mxs_lradc_reg_set(lradc, mxs_lradc_drive_y_plate(lradc), LRADC_CTRL0); 655 675 656 676 lradc->cur_plate = LRADC_SAMPLE_Y; 657 - mxs_lradc_setup_ts_channel(lradc, TS_CH_XM); 677 + mxs_lradc_map_channel(lradc, TOUCHSCREEN_VCHANNEL1, TS_CH_XM); 678 + mxs_lradc_setup_ts_channel(lradc, TOUCHSCREEN_VCHANNEL1); 658 679 } 659 680 660 681 /* ··· 676 695 mxs_lradc_reg_set(lradc, mxs_lradc_drive_pressure(lradc), LRADC_CTRL0); 677 696 678 697 lradc->cur_plate = LRADC_SAMPLE_PRESSURE; 679 - mxs_lradc_setup_ts_pressure(lradc, TS_CH_XP, TS_CH_YM); 698 + mxs_lradc_map_channel(lradc, TOUCHSCREEN_VCHANNEL1, TS_CH_YM); 699 + mxs_lradc_map_channel(lradc, TOUCHSCREEN_VCHANNEL2, TS_CH_XP); 700 + mxs_lradc_setup_ts_pressure(lradc, TOUCHSCREEN_VCHANNEL2, 701 + TOUCHSCREEN_VCHANNEL1); 680 702 } 681 703 682 704 static void mxs_lradc_enable_touch_detection(struct mxs_lradc *lradc) ··· 690 706 mxs_lradc_reg_clear(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ | 691 707 LRADC_CTRL1_TOUCH_DETECT_IRQ_EN, LRADC_CTRL1); 692 708 mxs_lradc_reg_set(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ_EN, LRADC_CTRL1); 709 + } 710 + 711 + static void mxs_lradc_start_touch_event(struct mxs_lradc *lradc) 712 + { 713 + mxs_lradc_reg_clear(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ_EN, 714 + LRADC_CTRL1); 715 + mxs_lradc_reg_set(lradc, 716 + LRADC_CTRL1_LRADC_IRQ_EN(TOUCHSCREEN_VCHANNEL1), LRADC_CTRL1); 717 + /* 718 + * start with the Y-pos, because it uses nearly the same plate 719 + * settings like the touch detection 720 + */ 721 + mxs_lradc_prepare_y_pos(lradc); 693 722 } 694 723 695 724 static void mxs_lradc_report_ts_event(struct mxs_lradc *lradc) ··· 722 725 * start a dummy conversion to burn time to settle the signals 723 726 * note: we are not interested in the conversion's value 724 727 */ 725 - mxs_lradc_reg_wrt(lradc, 0, LRADC_CH(5)); 726 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ(5), LRADC_CTRL1); 727 - mxs_lradc_reg_set(lradc, LRADC_CTRL1_LRADC_IRQ_EN(5), LRADC_CTRL1); 728 - mxs_lradc_reg_wrt(lradc, LRADC_DELAY_TRIGGER(1 << 5) | 728 + mxs_lradc_reg_wrt(lradc, 0, LRADC_CH(TOUCHSCREEN_VCHANNEL1)); 729 + mxs_lradc_reg_clear(lradc, 730 + LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL1) | 731 + LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL2), LRADC_CTRL1); 732 + mxs_lradc_reg_wrt(lradc, 733 + LRADC_DELAY_TRIGGER(1 << TOUCHSCREEN_VCHANNEL1) | 729 734 LRADC_DELAY_KICK | LRADC_DELAY_DELAY(10), /* waste 5 ms */ 730 735 LRADC_DELAY(2)); 731 736 } ··· 759 760 760 761 /* if it is released, wait for the next touch via IRQ */ 761 762 lradc->cur_plate = LRADC_TOUCH; 762 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ, LRADC_CTRL1); 763 + mxs_lradc_reg_wrt(lradc, 0, LRADC_DELAY(2)); 764 + mxs_lradc_reg_wrt(lradc, 0, LRADC_DELAY(3)); 765 + mxs_lradc_reg_clear(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ | 766 + LRADC_CTRL1_LRADC_IRQ_EN(TOUCHSCREEN_VCHANNEL1) | 767 + LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL1), LRADC_CTRL1); 763 768 mxs_lradc_reg_set(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ_EN, LRADC_CTRL1); 764 769 } 765 770 766 771 /* touchscreen's state machine */ 767 772 static void mxs_lradc_handle_touch(struct mxs_lradc *lradc) 768 773 { 769 - int val; 770 - 771 774 switch (lradc->cur_plate) { 772 775 case LRADC_TOUCH: 773 - /* 774 - * start with the Y-pos, because it uses nearly the same plate 775 - * settings like the touch detection 776 - */ 777 - if (mxs_lradc_check_touch_event(lradc)) { 778 - mxs_lradc_reg_clear(lradc, 779 - LRADC_CTRL1_TOUCH_DETECT_IRQ_EN, 780 - LRADC_CTRL1); 781 - mxs_lradc_prepare_y_pos(lradc); 782 - } 776 + if (mxs_lradc_check_touch_event(lradc)) 777 + mxs_lradc_start_touch_event(lradc); 783 778 mxs_lradc_reg_clear(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ, 784 779 LRADC_CTRL1); 785 780 return; 786 781 787 782 case LRADC_SAMPLE_Y: 788 - val = mxs_lradc_read_ts_channel(lradc); 789 - if (val < 0) { 790 - mxs_lradc_enable_touch_detection(lradc); /* re-start */ 791 - return; 792 - } 793 - lradc->ts_y_pos = val; 783 + lradc->ts_y_pos = mxs_lradc_read_raw_channel(lradc, 784 + TOUCHSCREEN_VCHANNEL1); 794 785 mxs_lradc_prepare_x_pos(lradc); 795 786 return; 796 787 797 788 case LRADC_SAMPLE_X: 798 - val = mxs_lradc_read_ts_channel(lradc); 799 - if (val < 0) { 800 - mxs_lradc_enable_touch_detection(lradc); /* re-start */ 801 - return; 802 - } 803 - lradc->ts_x_pos = val; 789 + lradc->ts_x_pos = mxs_lradc_read_raw_channel(lradc, 790 + TOUCHSCREEN_VCHANNEL1); 804 791 mxs_lradc_prepare_pressure(lradc); 805 792 return; 806 793 807 794 case LRADC_SAMPLE_PRESSURE: 808 - lradc->ts_pressure = 809 - mxs_lradc_read_ts_pressure(lradc, TS_CH_XP, TS_CH_YM); 795 + lradc->ts_pressure = mxs_lradc_read_ts_pressure(lradc, 796 + TOUCHSCREEN_VCHANNEL2, 797 + TOUCHSCREEN_VCHANNEL1); 810 798 mxs_lradc_complete_touch_event(lradc); 811 799 return; 812 800 813 801 case LRADC_SAMPLE_VALID: 814 - val = mxs_lradc_read_ts_channel(lradc); /* ignore the value */ 815 802 mxs_lradc_finish_touch_event(lradc, 1); 816 803 break; 817 804 } ··· 829 844 * used if doing raw sampling. 830 845 */ 831 846 if (lradc->soc == IMX28_LRADC) 832 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_MX28_LRADC_IRQ_EN_MASK, 847 + mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ_EN(0), 833 848 LRADC_CTRL1); 834 - mxs_lradc_reg_clear(lradc, 0xff, LRADC_CTRL0); 849 + mxs_lradc_reg_clear(lradc, 0x1, LRADC_CTRL0); 835 850 836 851 /* Enable / disable the divider per requirement */ 837 852 if (test_bit(chan, &lradc->is_divided)) ··· 1075 1090 { 1076 1091 /* stop all interrupts from firing */ 1077 1092 mxs_lradc_reg_clear(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ_EN | 1078 - LRADC_CTRL1_LRADC_IRQ_EN(2) | LRADC_CTRL1_LRADC_IRQ_EN(3) | 1079 - LRADC_CTRL1_LRADC_IRQ_EN(4) | LRADC_CTRL1_LRADC_IRQ_EN(5), 1080 - LRADC_CTRL1); 1093 + LRADC_CTRL1_LRADC_IRQ_EN(TOUCHSCREEN_VCHANNEL1) | 1094 + LRADC_CTRL1_LRADC_IRQ_EN(TOUCHSCREEN_VCHANNEL2), LRADC_CTRL1); 1081 1095 1082 1096 /* Power-down touchscreen touch-detect circuitry. */ 1083 1097 mxs_lradc_reg_clear(lradc, mxs_lradc_plate_mask(lradc), LRADC_CTRL0); ··· 1142 1158 struct iio_dev *iio = data; 1143 1159 struct mxs_lradc *lradc = iio_priv(iio); 1144 1160 unsigned long reg = readl(lradc->base + LRADC_CTRL1); 1161 + uint32_t clr_irq = mxs_lradc_irq_mask(lradc); 1145 1162 const uint32_t ts_irq_mask = 1146 1163 LRADC_CTRL1_TOUCH_DETECT_IRQ | 1147 - LRADC_CTRL1_LRADC_IRQ(2) | 1148 - LRADC_CTRL1_LRADC_IRQ(3) | 1149 - LRADC_CTRL1_LRADC_IRQ(4) | 1150 - LRADC_CTRL1_LRADC_IRQ(5); 1164 + LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL1) | 1165 + LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL2); 1151 1166 1152 1167 if (!(reg & mxs_lradc_irq_mask(lradc))) 1153 1168 return IRQ_NONE; 1154 1169 1155 - if (lradc->use_touchscreen && (reg & ts_irq_mask)) 1170 + if (lradc->use_touchscreen && (reg & ts_irq_mask)) { 1156 1171 mxs_lradc_handle_touch(lradc); 1157 1172 1158 - if (iio_buffer_enabled(iio)) 1159 - iio_trigger_poll(iio->trig); 1160 - else if (reg & LRADC_CTRL1_LRADC_IRQ(0)) 1161 - complete(&lradc->completion); 1173 + /* Make sure we don't clear the next conversion's interrupt. */ 1174 + clr_irq &= ~(LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL1) | 1175 + LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL2)); 1176 + } 1162 1177 1163 - mxs_lradc_reg_clear(lradc, reg & mxs_lradc_irq_mask(lradc), 1164 - LRADC_CTRL1); 1178 + if (iio_buffer_enabled(iio)) { 1179 + if (reg & lradc->buffer_vchans) 1180 + iio_trigger_poll(iio->trig); 1181 + } else if (reg & LRADC_CTRL1_LRADC_IRQ(0)) { 1182 + complete(&lradc->completion); 1183 + } 1184 + 1185 + mxs_lradc_reg_clear(lradc, reg & clr_irq, LRADC_CTRL1); 1165 1186 1166 1187 return IRQ_HANDLED; 1167 1188 } ··· 1278 1289 } 1279 1290 1280 1291 if (lradc->soc == IMX28_LRADC) 1281 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_MX28_LRADC_IRQ_EN_MASK, 1282 - LRADC_CTRL1); 1283 - mxs_lradc_reg_clear(lradc, 0xff, LRADC_CTRL0); 1292 + mxs_lradc_reg_clear(lradc, 1293 + lradc->buffer_vchans << LRADC_CTRL1_LRADC_IRQ_EN_OFFSET, 1294 + LRADC_CTRL1); 1295 + mxs_lradc_reg_clear(lradc, lradc->buffer_vchans, LRADC_CTRL0); 1284 1296 1285 1297 for_each_set_bit(chan, iio->active_scan_mask, LRADC_MAX_TOTAL_CHANS) { 1286 1298 ctrl4_set |= chan << LRADC_CTRL4_LRADCSELECT_OFFSET(ofs); ··· 1314 1324 mxs_lradc_reg_clear(lradc, LRADC_DELAY_TRIGGER_LRADCS_MASK | 1315 1325 LRADC_DELAY_KICK, LRADC_DELAY(0)); 1316 1326 1317 - mxs_lradc_reg_clear(lradc, 0xff, LRADC_CTRL0); 1327 + mxs_lradc_reg_clear(lradc, lradc->buffer_vchans, LRADC_CTRL0); 1318 1328 if (lradc->soc == IMX28_LRADC) 1319 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_MX28_LRADC_IRQ_EN_MASK, 1320 - LRADC_CTRL1); 1329 + mxs_lradc_reg_clear(lradc, 1330 + lradc->buffer_vchans << LRADC_CTRL1_LRADC_IRQ_EN_OFFSET, 1331 + LRADC_CTRL1); 1321 1332 1322 1333 kfree(lradc->buffer); 1323 1334 mutex_unlock(&lradc->lock); ··· 1344 1353 if (lradc->use_touchbutton) 1345 1354 rsvd_chans++; 1346 1355 if (lradc->use_touchscreen) 1347 - rsvd_chans++; 1356 + rsvd_chans += 2; 1348 1357 1349 1358 /* Test for attempts to map channels with special mode of operation. */ 1350 1359 if (bitmap_intersects(mask, &rsvd_mask, LRADC_MAX_TOTAL_CHANS)) ··· 1403 1412 BIT(IIO_CHAN_INFO_SCALE), 1404 1413 .channel = 8, 1405 1414 .scan_type = {.sign = 'u', .realbits = 18, .storagebits = 32,}, 1415 + }, 1416 + /* Hidden channel to keep indexes */ 1417 + { 1418 + .type = IIO_TEMP, 1419 + .indexed = 1, 1420 + .scan_index = -1, 1421 + .channel = 9, 1406 1422 }, 1407 1423 MXS_ADC_CHAN(10, IIO_VOLTAGE), /* VDDIO */ 1408 1424 MXS_ADC_CHAN(11, IIO_VOLTAGE), /* VTH */ ··· 1580 1582 } 1581 1583 1582 1584 touch_ret = mxs_lradc_probe_touchscreen(lradc, node); 1585 + 1586 + if (touch_ret == 0) 1587 + lradc->buffer_vchans = BUFFER_VCHANS_LIMITED; 1588 + else 1589 + lradc->buffer_vchans = BUFFER_VCHANS_ALL; 1583 1590 1584 1591 /* Grab all IRQ sources */ 1585 1592 for (i = 0; i < of_cfg->irq_count; i++) {
+1
drivers/staging/iio/magnetometer/hmc5843_core.c
··· 592 592 mutex_init(&data->lock); 593 593 594 594 indio_dev->dev.parent = dev; 595 + indio_dev->name = dev->driver->name; 595 596 indio_dev->info = &hmc5843_info; 596 597 indio_dev->modes = INDIO_DIRECT_MODE; 597 598 indio_dev->channels = data->variant->channels;
+2 -1
drivers/staging/iio/resolver/ad2s1200.c
··· 18 18 #include <linux/delay.h> 19 19 #include <linux/gpio.h> 20 20 #include <linux/module.h> 21 + #include <linux/bitops.h> 21 22 22 23 #include <linux/iio/iio.h> 23 24 #include <linux/iio/sysfs.h> ··· 69 68 break; 70 69 case IIO_ANGL_VEL: 71 70 vel = (((s16)(st->rx[0])) << 4) | ((st->rx[1] & 0xF0) >> 4); 72 - vel = (vel << 4) >> 4; 71 + vel = sign_extend32(vel, 11); 73 72 *val = vel; 74 73 break; 75 74 default:
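The ad2s1200 change replaces the shift-up/shift-down trick with the kernel's `sign_extend32()` helper for the 12-bit velocity field (bit 11 is the sign bit). A user-space re-implementation of the helper, for illustration only:

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extend the low (index + 1) bits of value; bit `index` is the
 * sign bit. Mirrors the kernel helper of the same name; relies on
 * arithmetic right shift of signed values, as the kernel does. */
static int32_t sign_extend32(uint32_t value, int index)
{
	uint8_t shift = 31 - index;

	return (int32_t)(value << shift) >> shift;
}
```

For a raw 12-bit two's-complement reading, 0xFFF becomes -1 and 0x800 becomes -2048, which is what the replaced `(vel << 4) >> 4` achieved for a 16-bit intermediate.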
+15 -17
drivers/staging/vt6655/device_main.c
··· 330 330 /* zonetype initial */ 331 331 pDevice->byOriginalZonetype = pDevice->abyEEPROM[EEP_OFS_ZONETYPE]; 332 332 333 - /* Get RFType */ 334 - pDevice->byRFType = SROMbyReadEmbedded(pDevice->PortOffset, EEP_OFS_RFTYPE); 335 - 336 - /* force change RevID for VT3253 emu */ 337 - if ((pDevice->byRFType & RF_EMU) != 0) 338 - pDevice->byRevId = 0x80; 339 - 340 - pDevice->byRFType &= RF_MASK; 341 - pr_debug("pDevice->byRFType = %x\n", pDevice->byRFType); 342 - 343 333 if (!pDevice->bZoneRegExist) 344 334 pDevice->byZoneType = pDevice->abyEEPROM[EEP_OFS_ZONETYPE]; 345 335 ··· 1177 1187 { 1178 1188 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 1179 1189 PSTxDesc head_td; 1180 - u32 dma_idx = TYPE_AC0DMA; 1190 + u32 dma_idx; 1181 1191 unsigned long flags; 1182 1192 1183 1193 spin_lock_irqsave(&priv->lock, flags); 1184 1194 1185 - if (!ieee80211_is_data(hdr->frame_control)) 1195 + if (ieee80211_is_data(hdr->frame_control)) 1196 + dma_idx = TYPE_AC0DMA; 1197 + else 1186 1198 dma_idx = TYPE_TXDMA0; 1187 1199 1188 1200 if (AVAIL_TD(priv, dma_idx) < 1) { ··· 1197 1205 head_td->m_td1TD1.byTCR = 0; 1198 1206 1199 1207 head_td->pTDInfo->skb = skb; 1208 + 1209 + if (dma_idx == TYPE_AC0DMA) 1210 + head_td->pTDInfo->byFlags = TD_FLAGS_NETIF_SKB; 1200 1211 1201 1212 priv->iTDUsed[dma_idx]++; 1202 1213 ··· 1229 1234 1230 1235 head_td->buff_addr = cpu_to_le32(head_td->pTDInfo->skb_dma); 1231 1236 1232 - if (dma_idx == TYPE_AC0DMA) { 1233 - head_td->pTDInfo->byFlags = TD_FLAGS_NETIF_SKB; 1234 - 1237 + if (head_td->pTDInfo->byFlags & TD_FLAGS_NETIF_SKB) 1235 1238 MACvTransmitAC0(priv->PortOffset); 1236 - } else { 1239 + else 1237 1240 MACvTransmit0(priv->PortOffset); 1238 - } 1239 1241 1240 1242 spin_unlock_irqrestore(&priv->lock, flags); 1241 1243 ··· 1769 1777 /* initial to reload eeprom */ 1770 1778 MACvInitialize(priv->PortOffset); 1771 1779 MACvReadEtherAddress(priv->PortOffset, priv->abyCurrentNetAddr); 1780 + 1781 + /* Get RFType */ 1782 + priv->byRFType = SROMbyReadEmbedded(priv->PortOffset, EEP_OFS_RFTYPE); 1783 + priv->byRFType &= RF_MASK; 1784 + 1785 + dev_dbg(&pcid->dev, "RF Type = %x\n", priv->byRFType); 1772 1786 1773 1787 device_get_options(priv); 1774 1788 device_set_options(priv);
+1
drivers/staging/vt6655/rf.c
··· 794 794 break; 795 795 case RATE_6M: 796 796 case RATE_9M: 797 + case RATE_12M: 797 798 case RATE_18M: 798 799 byPwr = priv->abyOFDMPwrTbl[uCH]; 799 800 if (priv->byRFType == RF_UW2452)
+1
drivers/staging/vt6656/rf.c
··· 640 640 break; 641 641 case RATE_6M: 642 642 case RATE_9M: 643 + case RATE_12M: 643 644 case RATE_18M: 644 645 case RATE_24M: 645 646 case RATE_36M:
+10 -4
drivers/target/iscsi/iscsi_target.c
··· 4256 4256 pr_debug("Closing iSCSI connection CID %hu on SID:" 4257 4257 " %u\n", conn->cid, sess->sid); 4258 4258 /* 4259 - * Always up conn_logout_comp just in case the RX Thread is sleeping 4260 - * and the logout response never got sent because the connection 4261 - * failed. 4259 + * Always up conn_logout_comp for the traditional TCP case just in case 4260 + * the RX Thread in iscsi_target_rx_opcode() is sleeping and the logout 4261 + * response never got sent because the connection failed. 4262 + * 4263 + * However for iser-target, isert_wait4logout() is using conn_logout_comp 4264 + * to signal logout response TX interrupt completion. Go ahead and skip 4265 + * this for iser since isert_rx_opcode() does not wait on logout failure, 4266 + * and to avoid iscsi_conn pointer dereference in iser-target code. 4262 4267 */ 4263 - complete(&conn->conn_logout_comp); 4268 + if (conn->conn_transport->transport_type == ISCSI_TCP) 4269 + complete(&conn->conn_logout_comp); 4264 4270 4265 4271 iscsi_release_thread_set(conn); 4266 4272
+1 -3
drivers/target/iscsi/iscsi_target_erl0.c
··· 22 22 #include <target/target_core_fabric.h> 23 23 24 24 #include <target/iscsi/iscsi_target_core.h> 25 - #include <target/iscsi/iscsi_transport.h> 26 25 #include "iscsi_target_seq_pdu_list.h" 27 26 #include "iscsi_target_tq.h" 28 27 #include "iscsi_target_erl0.h" ··· 939 940 940 941 if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT) { 941 942 spin_unlock_bh(&conn->state_lock); 942 - if (conn->conn_transport->transport_type == ISCSI_TCP) 943 - iscsit_close_connection(conn); 943 + iscsit_close_connection(conn); 944 944 return; 945 945 } 946 946
+2 -5
drivers/target/loopback/tcm_loop.c
··· 953 953 transport_free_session(tl_nexus->se_sess); 954 954 goto out; 955 955 } 956 - /* 957 - * Now, register the SAS I_T Nexus as active with the call to 958 - * transport_register_session() 959 - */ 960 - __transport_register_session(se_tpg, tl_nexus->se_sess->se_node_acl, 956 + /* Now, register the SAS I_T Nexus as active. */ 957 + transport_register_session(se_tpg, tl_nexus->se_sess->se_node_acl, 961 958 tl_nexus->se_sess, tl_nexus); 962 959 tl_tpg->tl_nexus = tl_nexus; 963 960 pr_debug("TCM_Loop_ConfigFS: Established I_T Nexus to emulated"
+29 -3
drivers/target/target_core_device.c
··· 650 650 return aligned_max_sectors; 651 651 } 652 652 653 + bool se_dev_check_wce(struct se_device *dev) 654 + { 655 + bool wce = false; 656 + 657 + if (dev->transport->get_write_cache) 658 + wce = dev->transport->get_write_cache(dev); 659 + else if (dev->dev_attrib.emulate_write_cache > 0) 660 + wce = true; 661 + 662 + return wce; 663 + } 664 + 653 665 int se_dev_set_max_unmap_lba_count( 654 666 struct se_device *dev, 655 667 u32 max_unmap_lba_count) ··· 779 767 pr_err("Illegal value %d\n", flag); 780 768 return -EINVAL; 781 769 } 770 + if (flag && 771 + dev->transport->get_write_cache) { 772 + pr_err("emulate_fua_write not supported for this device\n"); 773 + return -EINVAL; 774 + } 775 + if (dev->export_count) { 776 + pr_err("emulate_fua_write cannot be changed with active" 777 + " exports: %d\n", dev->export_count); 778 + return -EINVAL; 779 + } 782 780 dev->dev_attrib.emulate_fua_write = flag; 783 781 pr_debug("dev[%p]: SE Device Forced Unit Access WRITEs: %d\n", 784 782 dev, dev->dev_attrib.emulate_fua_write); ··· 823 801 pr_err("emulate_write_cache not supported for this device\n"); 824 802 return -EINVAL; 825 803 } 826 - 804 + if (dev->export_count) { 805 + pr_err("emulate_write_cache cannot be changed with active" 806 + " exports: %d\n", dev->export_count); 807 + return -EINVAL; 808 + } 827 809 dev->dev_attrib.emulate_write_cache = flag; 828 810 pr_debug("dev[%p]: SE Device WRITE_CACHE_EMULATION flag: %d\n", 829 811 dev, dev->dev_attrib.emulate_write_cache); ··· 1560 1534 ret = dev->transport->configure_device(dev); 1561 1535 if (ret) 1562 1536 goto out; 1563 - dev->dev_flags |= DF_CONFIGURED; 1564 - 1565 1537 /* 1566 1538 * XXX: there is not much point to have two different values here.. 1567 1539 */ ··· 1620 1596 mutex_lock(&g_device_mutex); 1621 1597 list_add_tail(&dev->g_dev_node, &g_device_list); 1622 1598 mutex_unlock(&g_device_mutex); 1599 + 1600 + dev->dev_flags |= DF_CONFIGURED; 1623 1601 1624 1602 return 0; 1625 1603
+1 -1
drivers/target/target_core_pscsi.c
··· 1121 1121 struct pscsi_dev_virt *pdv = PSCSI_DEV(dev); 1122 1122 struct scsi_device *sd = pdv->pdv_sd; 1123 1123 1124 - return sd->type; 1124 + return (sd) ? sd->type : TYPE_NO_LUN; 1125 1125 } 1126 1126 1127 1127 static sector_t pscsi_get_blocks(struct se_device *dev)
+1 -2
drivers/target/target_core_sbc.c
··· 708 708 } 709 709 } 710 710 if (cdb[1] & 0x8) { 711 - if (!dev->dev_attrib.emulate_fua_write || 712 - !dev->dev_attrib.emulate_write_cache) { 711 + if (!dev->dev_attrib.emulate_fua_write || !se_dev_check_wce(dev)) { 713 712 pr_err("Got CDB: 0x%02x with FUA bit set, but device" 714 713 " does not advertise support for FUA write\n", 715 714 cdb[0]);
+3 -16
drivers/target/target_core_spc.c
··· 454 454 } 455 455 EXPORT_SYMBOL(spc_emulate_evpd_83); 456 456 457 - static bool 458 - spc_check_dev_wce(struct se_device *dev) 459 - { 460 - bool wce = false; 461 - 462 - if (dev->transport->get_write_cache) 463 - wce = dev->transport->get_write_cache(dev); 464 - else if (dev->dev_attrib.emulate_write_cache > 0) 465 - wce = true; 466 - 467 - return wce; 468 - } 469 - 470 457 /* Extended INQUIRY Data VPD Page */ 471 458 static sense_reason_t 472 459 spc_emulate_evpd_86(struct se_cmd *cmd, unsigned char *buf) ··· 477 490 buf[5] = 0x07; 478 491 479 492 /* If WriteCache emulation is enabled, set V_SUP */ 480 - if (spc_check_dev_wce(dev)) 493 + if (se_dev_check_wce(dev)) 481 494 buf[6] = 0x01; 482 495 /* If an LBA map is present set R_SUP */ 483 496 spin_lock(&cmd->se_dev->t10_alua.lba_map_lock); ··· 884 897 if (pc == 1) 885 898 goto out; 886 899 887 - if (spc_check_dev_wce(dev)) 900 + if (se_dev_check_wce(dev)) 888 901 p[2] = 0x04; /* Write Cache Enable */ 889 902 p[12] = 0x20; /* Disabled Read Ahead */ 890 903 ··· 996 1009 (cmd->se_deve->lun_flags & TRANSPORT_LUNFLAGS_READ_ONLY))) 997 1010 spc_modesense_write_protect(&buf[length], type); 998 1011 999 - if ((spc_check_dev_wce(dev)) && 1012 + if ((se_dev_check_wce(dev)) && 1000 1013 (dev->dev_attrib.emulate_fua_write > 0)) 1001 1014 spc_modesense_dpofua(&buf[length], type); 1002 1015
+4
drivers/target/target_core_transport.c
··· 2389 2389 list_add_tail(&se_cmd->se_cmd_list, &se_sess->sess_cmd_list); 2390 2390 out: 2391 2391 spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); 2392 + 2393 + if (ret && ack_kref) 2394 + target_put_sess_cmd(se_sess, se_cmd); 2395 + 2392 2396 return ret; 2393 2397 } 2394 2398 EXPORT_SYMBOL(target_get_sess_cmd);
+2 -1
drivers/target/tcm_fc/tfc_io.c
··· 359 359 ep = fc_seq_exch(seq); 360 360 if (ep) { 361 361 lport = ep->lp; 362 - if (lport && (ep->xid <= lport->lro_xid)) 362 + if (lport && (ep->xid <= lport->lro_xid)) { 363 363 /* 364 364 * "ddp_done" trigger invalidation of HW 365 365 * specific DDP context ··· 374 374 * identified using ep->xid) 375 375 */ 376 376 cmd->was_ddp_setup = 0; 377 + } 377 378 } 378 379 } 379 380 }
+6 -4
drivers/thermal/int340x_thermal/int340x_thermal_zone.c
··· 208 208 trip_cnt, GFP_KERNEL); 209 209 if (!int34x_thermal_zone->aux_trips) { 210 210 ret = -ENOMEM; 211 - goto free_mem; 211 + goto err_trip_alloc; 212 212 } 213 213 trip_mask = BIT(trip_cnt) - 1; 214 214 int34x_thermal_zone->aux_trip_nr = trip_cnt; ··· 248 248 0, 0); 249 249 if (IS_ERR(int34x_thermal_zone->zone)) { 250 250 ret = PTR_ERR(int34x_thermal_zone->zone); 251 - goto free_lpat; 251 + goto err_thermal_zone; 252 252 } 253 253 254 254 return int34x_thermal_zone; 255 255 256 - free_lpat: 256 + err_thermal_zone: 257 257 acpi_lpat_free_conversion_table(int34x_thermal_zone->lpat_table); 258 - free_mem: 258 + kfree(int34x_thermal_zone->aux_trips); 259 + err_trip_alloc: 259 260 kfree(int34x_thermal_zone); 260 261 return ERR_PTR(ret); 261 262 } ··· 267 266 { 268 267 thermal_zone_device_unregister(int34x_thermal_zone->zone); 269 268 acpi_lpat_free_conversion_table(int34x_thermal_zone->lpat_table); 269 + kfree(int34x_thermal_zone->aux_trips); 270 270 kfree(int34x_thermal_zone); 271 271 } 272 272 EXPORT_SYMBOL_GPL(int340x_thermal_zone_remove);
+2 -1
drivers/thermal/samsung/exynos_tmu.c
··· 682 682 683 683 if (on) { 684 684 con |= (1 << EXYNOS_TMU_CORE_EN_SHIFT); 685 + con |= (1 << EXYNOS7_PD_DET_EN_SHIFT); 685 686 interrupt_en = 686 687 (of_thermal_is_trip_valid(tz, 7) 687 688 << EXYNOS7_TMU_INTEN_RISE7_SHIFT) | ··· 705 704 interrupt_en << EXYNOS_TMU_INTEN_FALL0_SHIFT; 706 705 } else { 707 706 con &= ~(1 << EXYNOS_TMU_CORE_EN_SHIFT); 707 + con &= ~(1 << EXYNOS7_PD_DET_EN_SHIFT); 708 708 interrupt_en = 0; /* Disable all interrupts */ 709 709 } 710 - con |= 1 << EXYNOS7_PD_DET_EN_SHIFT; 711 710 712 711 writel(interrupt_en, data->base + EXYNOS7_TMU_REG_INTEN); 713 712 writel(con, data->base + EXYNOS_TMU_REG_CONTROL);
+17 -20
drivers/thermal/thermal_core.c
··· 899 899 return sprintf(buf, "%d\n", instance->trip); 900 900 } 901 901 902 + static struct attribute *cooling_device_attrs[] = { 903 + &dev_attr_cdev_type.attr, 904 + &dev_attr_max_state.attr, 905 + &dev_attr_cur_state.attr, 906 + NULL, 907 + }; 908 + 909 + static const struct attribute_group cooling_device_attr_group = { 910 + .attrs = cooling_device_attrs, 911 + }; 912 + 913 + static const struct attribute_group *cooling_device_attr_groups[] = { 914 + &cooling_device_attr_group, 915 + NULL, 916 + }; 917 + 902 918 /* Device management */ 903 919 904 920 /** ··· 1146 1130 cdev->ops = ops; 1147 1131 cdev->updated = false; 1148 1132 cdev->device.class = &thermal_class; 1133 + cdev->device.groups = cooling_device_attr_groups; 1149 1134 cdev->devdata = devdata; 1150 1135 dev_set_name(&cdev->device, "cooling_device%d", cdev->id); 1151 1136 result = device_register(&cdev->device); ··· 1155 1138 kfree(cdev); 1156 1139 return ERR_PTR(result); 1157 1140 } 1158 - 1159 - /* sys I/F */ 1160 - if (type) { 1161 - result = device_create_file(&cdev->device, &dev_attr_cdev_type); 1162 - if (result) 1163 - goto unregister; 1164 - } 1165 - 1166 - result = device_create_file(&cdev->device, &dev_attr_max_state); 1167 - if (result) 1168 - goto unregister; 1169 - 1170 - result = device_create_file(&cdev->device, &dev_attr_cur_state); 1171 - if (result) 1172 - goto unregister; 1173 1141 1174 1142 /* Add 'this' new cdev to the global cdev list */ 1175 1143 mutex_lock(&thermal_list_lock); ··· 1165 1163 bind_cdev(cdev); 1166 1164 1167 1165 return cdev; 1168 - 1169 - unregister: 1170 - release_idr(&thermal_cdev_idr, &thermal_idr_lock, cdev->id); 1171 - device_unregister(&cdev->device); 1172 - return ERR_PTR(result); 1173 1166 } 1174 1167 1175 1168 /**
-13
drivers/tty/bfin_jtag_comm.c
··· 210 210 return circ_cnt(&bfin_jc_write_buf); 211 211 } 212 212 213 - static void 214 - bfin_jc_wait_until_sent(struct tty_struct *tty, int timeout) 215 - { 216 - unsigned long expire = jiffies + timeout; 217 - while (!circ_empty(&bfin_jc_write_buf)) { 218 - if (signal_pending(current)) 219 - break; 220 - if (time_after(jiffies, expire)) 221 - break; 222 - } 223 - } 224 - 225 213 static const struct tty_operations bfin_jc_ops = { 226 214 .open = bfin_jc_open, 227 215 .close = bfin_jc_close, ··· 218 230 .flush_chars = bfin_jc_flush_chars, 219 231 .write_room = bfin_jc_write_room, 220 232 .chars_in_buffer = bfin_jc_chars_in_buffer, 221 - .wait_until_sent = bfin_jc_wait_until_sent, 222 233 }; 223 234 224 235 static int __init bfin_jc_init(void)
+5 -6
drivers/tty/serial/8250/8250_core.c
··· 2138 2138 /* 2139 2139 * Clear the interrupt registers. 2140 2140 */ 2141 - if (serial_port_in(port, UART_LSR) & UART_LSR_DR) 2142 - serial_port_in(port, UART_RX); 2141 + serial_port_in(port, UART_LSR); 2142 + serial_port_in(port, UART_RX); 2143 2143 serial_port_in(port, UART_IIR); 2144 2144 serial_port_in(port, UART_MSR); 2145 2145 ··· 2300 2300 * saved flags to avoid getting false values from polling 2301 2301 * routines or the previous session. 2302 2302 */ 2303 - if (serial_port_in(port, UART_LSR) & UART_LSR_DR) 2304 - serial_port_in(port, UART_RX); 2303 + serial_port_in(port, UART_LSR); 2304 + serial_port_in(port, UART_RX); 2305 2305 serial_port_in(port, UART_IIR); 2306 2306 serial_port_in(port, UART_MSR); 2307 2307 up->lsr_saved_flags = 0; ··· 2394 2394 * Read data port to reset things, and then unlink from 2395 2395 * the IRQ chain. 2396 2396 */ 2397 - if (serial_port_in(port, UART_LSR) & UART_LSR_DR) 2398 - serial_port_in(port, UART_RX); 2397 + serial_port_in(port, UART_RX); 2399 2398 serial8250_rpm_put(up); 2400 2399 2401 2400 del_timer_sync(&up->timer);
+44 -3
drivers/tty/serial/8250/8250_dw.c
··· 59 59 u8 usr_reg; 60 60 int last_mcr; 61 61 int line; 62 + int msr_mask_on; 63 + int msr_mask_off; 62 64 struct clk *clk; 63 65 struct clk *pclk; 64 66 struct reset_control *rst; ··· 81 79 if (offset == UART_MSR && d->last_mcr & UART_MCR_AFE) { 82 80 value |= UART_MSR_CTS; 83 81 value &= ~UART_MSR_DCTS; 82 + } 83 + 84 + /* Override any modem control signals if needed */ 85 + if (offset == UART_MSR) { 86 + value |= d->msr_mask_on; 87 + value &= ~d->msr_mask_off; 84 88 } 85 89 86 90 return value; ··· 119 111 dw8250_force_idle(p); 120 112 writeb(value, p->membase + (UART_LCR << p->regshift)); 121 113 } 122 - dev_err(p->dev, "Couldn't set LCR to %d\n", value); 114 + /* 115 + * FIXME: this deadlocks if port->lock is already held 116 + * dev_err(p->dev, "Couldn't set LCR to %d\n", value); 117 + */ 123 118 } 124 119 } 125 120 ··· 166 155 __raw_writeq(value & 0xff, 167 156 p->membase + (UART_LCR << p->regshift)); 168 157 } 169 - dev_err(p->dev, "Couldn't set LCR to %d\n", value); 158 + /* 159 + * FIXME: this deadlocks if port->lock is already held 160 + * dev_err(p->dev, "Couldn't set LCR to %d\n", value); 161 + */ 170 162 } 171 163 } 172 164 #endif /* CONFIG_64BIT */ ··· 193 179 dw8250_force_idle(p); 194 180 writel(value, p->membase + (UART_LCR << p->regshift)); 195 181 } 196 - dev_err(p->dev, "Couldn't set LCR to %d\n", value); 182 + /* 183 + * FIXME: this deadlocks if port->lock is already held 184 + * dev_err(p->dev, "Couldn't set LCR to %d\n", value); 185 + */ 197 186 } 198 187 } 199 188 ··· 350 333 id = of_alias_get_id(np, "serial"); 351 334 if (id >= 0) 352 335 p->line = id; 336 + 337 + if (of_property_read_bool(np, "dcd-override")) { 338 + /* Always report DCD as active */ 339 + data->msr_mask_on |= UART_MSR_DCD; 340 + data->msr_mask_off |= UART_MSR_DDCD; 341 + } 342 + 343 + if (of_property_read_bool(np, "dsr-override")) { 344 + /* Always report DSR as active */ 345 + data->msr_mask_on |= UART_MSR_DSR; 346 + data->msr_mask_off |= UART_MSR_DDSR; 347 + } 348 + 
349 + if (of_property_read_bool(np, "cts-override")) { 350 + /* Always report CTS as active */ 351 + data->msr_mask_on |= UART_MSR_CTS; 352 + data->msr_mask_off |= UART_MSR_DCTS; 353 + } 354 + 355 + if (of_property_read_bool(np, "ri-override")) { 356 + /* Always report Ring indicator as inactive */ 357 + data->msr_mask_off |= UART_MSR_RI; 358 + data->msr_mask_off |= UART_MSR_TERI; 359 + } 353 360 354 361 /* clock got configured through clk api, all done */ 355 362 if (p->uartclk
+1 -19
drivers/tty/serial/8250/8250_pci.c
··· 69 69 "Please send the output of lspci -vv, this\n" 70 70 "message (0x%04x,0x%04x,0x%04x,0x%04x), the\n" 71 71 "manufacturer and name of serial board or\n" 72 - "modem board to rmk+serial@arm.linux.org.uk.\n", 72 + "modem board to <linux-serial@vger.kernel.org>.\n", 73 73 pci_name(dev), str, dev->vendor, dev->device, 74 74 dev->subsystem_vendor, dev->subsystem_device); 75 75 } ··· 1989 1989 }, 1990 1990 { 1991 1991 .vendor = PCI_VENDOR_ID_INTEL, 1992 - .device = PCI_DEVICE_ID_INTEL_QRK_UART, 1993 - .subvendor = PCI_ANY_ID, 1994 - .subdevice = PCI_ANY_ID, 1995 - .setup = pci_default_setup, 1996 - }, 1997 - { 1998 - .vendor = PCI_VENDOR_ID_INTEL, 1999 1992 .device = PCI_DEVICE_ID_INTEL_BSW_UART1, 2000 1993 .subvendor = PCI_ANY_ID, 2001 1994 .subdevice = PCI_ANY_ID, ··· 2192 2199 /* 2193 2200 * PLX 2194 2201 */ 2195 - { 2196 - .vendor = PCI_VENDOR_ID_PLX, 2197 - .device = PCI_DEVICE_ID_PLX_9030, 2198 - .subvendor = PCI_SUBVENDOR_ID_PERLE, 2199 - .subdevice = PCI_ANY_ID, 2200 - .setup = pci_default_setup, 2201 - }, 2202 2202 { 2203 2203 .vendor = PCI_VENDOR_ID_PLX, 2204 2204 .device = PCI_DEVICE_ID_PLX_9050, ··· 5398 5412 0, 0, pbn_b0_bt_4_115200 }, 5399 5413 5400 5414 { PCI_VENDOR_ID_WCH, PCI_DEVICE_ID_WCH_CH353_2S1PF, 5401 - PCI_ANY_ID, PCI_ANY_ID, 5402 - 0, 0, pbn_b0_bt_2_115200 }, 5403 - 5404 - { PCI_VENDOR_ID_WCH, PCI_DEVICE_ID_WCH_CH352_2S, 5405 5415 PCI_ANY_ID, PCI_ANY_ID, 5406 5416 0, 0, pbn_b0_bt_2_115200 }, 5407 5417
+45 -4
drivers/tty/serial/atmel_serial.c
··· 47 47 #include <linux/gpio/consumer.h> 48 48 #include <linux/err.h> 49 49 #include <linux/irq.h> 50 + #include <linux/suspend.h> 50 51 51 52 #include <asm/io.h> 52 53 #include <asm/ioctls.h> ··· 174 173 bool ms_irq_enabled; 175 174 bool is_usart; /* usart or uart */ 176 175 struct timer_list uart_timer; /* uart timer */ 176 + 177 + bool suspended; 178 + unsigned int pending; 179 + unsigned int pending_status; 180 + spinlock_t lock_suspended; 181 + 177 182 int (*prepare_rx)(struct uart_port *port); 178 183 int (*prepare_tx)(struct uart_port *port); 179 184 void (*schedule_rx)(struct uart_port *port); ··· 1186 1179 { 1187 1180 struct uart_port *port = dev_id; 1188 1181 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 1189 - unsigned int status, pending, pass_counter = 0; 1182 + unsigned int status, pending, mask, pass_counter = 0; 1190 1183 bool gpio_handled = false; 1184 + 1185 + spin_lock(&atmel_port->lock_suspended); 1191 1186 1192 1187 do { 1193 1188 status = atmel_get_lines_status(port); 1194 - pending = status & UART_GET_IMR(port); 1189 + mask = UART_GET_IMR(port); 1190 + pending = status & mask; 1195 1191 if (!gpio_handled) { 1196 1192 /* 1197 1193 * Dealing with GPIO interrupt ··· 1216 1206 if (!pending) 1217 1207 break; 1218 1208 1209 + if (atmel_port->suspended) { 1210 + atmel_port->pending |= pending; 1211 + atmel_port->pending_status = status; 1212 + UART_PUT_IDR(port, mask); 1213 + pm_system_wakeup(); 1214 + break; 1215 + } 1216 + 1219 1217 atmel_handle_receive(port, pending); 1220 1218 atmel_handle_status(port, pending, status); 1221 1219 atmel_handle_transmit(port, pending); 1222 1220 } while (pass_counter++ < ATMEL_ISR_PASS_LIMIT); 1221 + 1222 + spin_unlock(&atmel_port->lock_suspended); 1223 1223 1224 1224 return pass_counter ? 
IRQ_HANDLED : IRQ_NONE; 1225 1225 } ··· 1762 1742 /* 1763 1743 * Allocate the IRQ 1764 1744 */ 1765 - retval = request_irq(port->irq, atmel_interrupt, IRQF_SHARED, 1745 + retval = request_irq(port->irq, atmel_interrupt, 1746 + IRQF_SHARED | IRQF_COND_SUSPEND, 1766 1747 tty ? tty->name : "atmel_serial", port); 1767 1748 if (retval) { 1768 1749 dev_err(port->dev, "atmel_startup - Can't get irq\n"); ··· 2534 2513 2535 2514 /* we can not wake up if we're running on slow clock */ 2536 2515 atmel_port->may_wakeup = device_may_wakeup(&pdev->dev); 2537 - if (atmel_serial_clk_will_stop()) 2516 + if (atmel_serial_clk_will_stop()) { 2517 + unsigned long flags; 2518 + 2519 + spin_lock_irqsave(&atmel_port->lock_suspended, flags); 2520 + atmel_port->suspended = true; 2521 + spin_unlock_irqrestore(&atmel_port->lock_suspended, flags); 2538 2522 device_set_wakeup_enable(&pdev->dev, 0); 2523 + } 2539 2524 2540 2525 uart_suspend_port(&atmel_uart, port); 2541 2526 ··· 2552 2525 { 2553 2526 struct uart_port *port = platform_get_drvdata(pdev); 2554 2527 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 2528 + unsigned long flags; 2529 + 2530 + spin_lock_irqsave(&atmel_port->lock_suspended, flags); 2531 + if (atmel_port->pending) { 2532 + atmel_handle_receive(port, atmel_port->pending); 2533 + atmel_handle_status(port, atmel_port->pending, 2534 + atmel_port->pending_status); 2535 + atmel_handle_transmit(port, atmel_port->pending); 2536 + atmel_port->pending = 0; 2537 + } 2538 + atmel_port->suspended = false; 2539 + spin_unlock_irqrestore(&atmel_port->lock_suspended, flags); 2555 2540 2556 2541 uart_resume_port(&atmel_uart, port); 2557 2542 device_set_wakeup_enable(&pdev->dev, atmel_port->may_wakeup); ··· 2631 2592 port = &atmel_ports[ret]; 2632 2593 port->backup_imr = 0; 2633 2594 port->uart.line = ret; 2595 + 2596 + spin_lock_init(&port->lock_suspended); 2634 2597 2635 2598 ret = atmel_init_gpios(port, &pdev->dev); 2636 2599 if (ret < 0)
+5
drivers/tty/serial/fsl_lpuart.c
··· 921 921 writeb(val | UARTPFIFO_TXFE | UARTPFIFO_RXFE, 922 922 sport->port.membase + UARTPFIFO); 923 923 924 + /* explicitly clear RDRF */ 925 + readb(sport->port.membase + UARTSR1); 926 + 924 927 /* flush Tx and Rx FIFO */ 925 928 writeb(UARTCFIFO_TXFLUSH | UARTCFIFO_RXFLUSH, 926 929 sport->port.membase + UARTCFIFO); ··· 1078 1075 1079 1076 sport->txfifo_size = 0x1 << (((temp >> UARTPFIFO_TXSIZE_OFF) & 1080 1077 UARTPFIFO_FIFOSIZE_MASK) + 1); 1078 + 1079 + sport->port.fifosize = sport->txfifo_size; 1081 1080 1082 1081 sport->rxfifo_size = 0x1 << (((temp >> UARTPFIFO_RXSIZE_OFF) & 1083 1082 UARTPFIFO_FIFOSIZE_MASK) + 1);
-4
drivers/tty/serial/of_serial.c
··· 133 133 if (of_find_property(np, "no-loopback-test", NULL)) 134 134 port->flags |= UPF_SKIP_TEST; 135 135 136 - ret = of_alias_get_id(np, "serial"); 137 - if (ret >= 0) 138 - port->line = ret; 139 - 140 136 port->dev = &ofdev->dev; 141 137 142 138 switch (type) {
+1
drivers/tty/serial/samsung.c
··· 963 963 free_irq(ourport->tx_irq, ourport); 964 964 tx_enabled(port) = 0; 965 965 ourport->tx_claimed = 0; 966 + ourport->tx_mode = 0; 966 967 } 967 968 968 969 if (ourport->rx_claimed) {
+3 -1
drivers/tty/serial/sprd_serial.c
··· 293 293 294 294 ims = serial_in(port, SPRD_IMSR); 295 295 296 - if (!ims) 296 + if (!ims) { 297 + spin_unlock(&port->lock); 297 298 return IRQ_NONE; 299 + } 298 300 299 301 serial_out(port, SPRD_ICLR, ~0); 300 302
+2 -2
drivers/tty/tty_io.c
··· 1028 1028 /* We limit tty time update visibility to every 8 seconds or so. */ 1029 1029 static void tty_update_time(struct timespec *time) 1030 1030 { 1031 - unsigned long sec = get_seconds() & ~7; 1032 - if ((long)(sec - time->tv_sec) > 0) 1031 + unsigned long sec = get_seconds(); 1032 + if (abs(sec - time->tv_sec) & ~7) 1033 1033 time->tv_sec = sec; 1034 1034 } 1035 1035
+11 -5
drivers/tty/tty_ioctl.c
··· 217 217 #endif 218 218 if (!timeout) 219 219 timeout = MAX_SCHEDULE_TIMEOUT; 220 - if (wait_event_interruptible_timeout(tty->write_wait, 221 - !tty_chars_in_buffer(tty), timeout) >= 0) { 222 - if (tty->ops->wait_until_sent) 223 - tty->ops->wait_until_sent(tty, timeout); 224 - } 220 + 221 + timeout = wait_event_interruptible_timeout(tty->write_wait, 222 + !tty_chars_in_buffer(tty), timeout); 223 + if (timeout <= 0) 224 + return; 225 + 226 + if (timeout == MAX_SCHEDULE_TIMEOUT) 227 + timeout = 0; 228 + 229 + if (tty->ops->wait_until_sent) 230 + tty->ops->wait_until_sent(tty, timeout); 225 231 } 226 232 EXPORT_SYMBOL(tty_wait_until_sent); 227 233
+11
drivers/usb/chipidea/udc.c
··· 929 929 return retval; 930 930 } 931 931 932 + static int otg_a_alt_hnp_support(struct ci_hdrc *ci) 933 + { 934 + dev_warn(&ci->gadget.dev, 935 + "connect the device to an alternate port if you want HNP\n"); 936 + return isr_setup_status_phase(ci); 937 + } 938 + 932 939 /** 933 940 * isr_setup_packet_handler: setup packet handler 934 941 * @ci: UDC descriptor ··· 1067 1060 err = isr_setup_status_phase( 1068 1061 ci); 1069 1062 } 1063 + break; 1064 + case USB_DEVICE_A_ALT_HNP_SUPPORT: 1065 + if (ci_otg_is_fsm_mode(ci)) 1066 + err = otg_a_alt_hnp_support(ci); 1070 1067 break; 1071 1068 default: 1072 1069 goto delegate;
+2
drivers/usb/class/cdc-acm.c
··· 1650 1650 1651 1651 static const struct usb_device_id acm_ids[] = { 1652 1652 /* quirky and broken devices */ 1653 + { USB_DEVICE(0x076d, 0x0006), /* Denso Cradle CU-321 */ 1654 + .driver_info = NO_UNION_NORMAL, },/* has no union descriptor */ 1653 1655 { USB_DEVICE(0x17ef, 0x7000), /* Lenovo USB modem */ 1654 1656 .driver_info = NO_UNION_NORMAL, },/* has no union descriptor */ 1655 1657 { USB_DEVICE(0x0870, 0x0001), /* Metricom GS Modem */
+2 -2
drivers/usb/common/usb-otg-fsm.c
··· 150 150 break; 151 151 case OTG_STATE_B_PERIPHERAL: 152 152 otg_chrg_vbus(fsm, 0); 153 - otg_loc_conn(fsm, 1); 154 153 otg_loc_sof(fsm, 0); 155 154 otg_set_protocol(fsm, PROTO_GADGET); 155 + otg_loc_conn(fsm, 1); 156 156 break; 157 157 case OTG_STATE_B_WAIT_ACON: 158 158 otg_chrg_vbus(fsm, 0); ··· 213 213 214 214 break; 215 215 case OTG_STATE_A_PERIPHERAL: 216 - otg_loc_conn(fsm, 1); 217 216 otg_loc_sof(fsm, 0); 218 217 otg_set_protocol(fsm, PROTO_GADGET); 219 218 otg_drv_vbus(fsm, 1); 219 + otg_loc_conn(fsm, 1); 220 220 otg_add_timer(fsm, A_BIDL_ADIS); 221 221 break; 222 222 case OTG_STATE_A_WAIT_VFALL:
+2
drivers/usb/core/devio.c
··· 501 501 as->status = urb->status; 502 502 signr = as->signr; 503 503 if (signr) { 504 + memset(&sinfo, 0, sizeof(sinfo)); 504 505 sinfo.si_signo = as->signr; 505 506 sinfo.si_errno = as->status; 506 507 sinfo.si_code = SI_ASYNCIO; ··· 2383 2382 wake_up_all(&ps->wait); 2384 2383 list_del_init(&ps->list); 2385 2384 if (ps->discsignr) { 2385 + memset(&sinfo, 0, sizeof(sinfo)); 2386 2386 sinfo.si_signo = ps->discsignr; 2387 2387 sinfo.si_errno = EPIPE; 2388 2388 sinfo.si_code = SI_ASYNCIO;
+3
drivers/usb/dwc2/core_intr.c
··· 377 377 dwc2_is_host_mode(hsotg) ? "Host" : "Device", 378 378 dwc2_op_state_str(hsotg)); 379 379 380 + if (hsotg->op_state == OTG_STATE_A_HOST) 381 + dwc2_hcd_disconnect(hsotg); 382 + 380 383 /* Change to L3 (OFF) state */ 381 384 hsotg->lx_state = DWC2_L3; 382 385
+28 -2
drivers/usb/dwc3/dwc3-omap.c
··· 205 205 omap->irq0_offset, value); 206 206 } 207 207 208 + static void dwc3_omap_write_irqmisc_clr(struct dwc3_omap *omap, u32 value) 209 + { 210 + dwc3_omap_writel(omap->base, USBOTGSS_IRQENABLE_CLR_MISC + 211 + omap->irqmisc_offset, value); 212 + } 213 + 214 + static void dwc3_omap_write_irq0_clr(struct dwc3_omap *omap, u32 value) 215 + { 216 + dwc3_omap_writel(omap->base, USBOTGSS_IRQENABLE_CLR_0 - 217 + omap->irq0_offset, value); 218 + } 219 + 208 220 static void dwc3_omap_set_mailbox(struct dwc3_omap *omap, 209 221 enum omap_dwc3_vbus_id_status status) 210 222 { ··· 357 345 358 346 static void dwc3_omap_disable_irqs(struct dwc3_omap *omap) 359 347 { 348 + u32 reg; 349 + 360 350 /* disable all IRQs */ 361 - dwc3_omap_write_irqmisc_set(omap, 0x00); 362 - dwc3_omap_write_irq0_set(omap, 0x00); 351 + reg = USBOTGSS_IRQO_COREIRQ_ST; 352 + dwc3_omap_write_irq0_clr(omap, reg); 353 + 354 + reg = (USBOTGSS_IRQMISC_OEVT | 355 + USBOTGSS_IRQMISC_DRVVBUS_RISE | 356 + USBOTGSS_IRQMISC_CHRGVBUS_RISE | 357 + USBOTGSS_IRQMISC_DISCHRGVBUS_RISE | 358 + USBOTGSS_IRQMISC_IDPULLUP_RISE | 359 + USBOTGSS_IRQMISC_DRVVBUS_FALL | 360 + USBOTGSS_IRQMISC_CHRGVBUS_FALL | 361 + USBOTGSS_IRQMISC_DISCHRGVBUS_FALL | 362 + USBOTGSS_IRQMISC_IDPULLUP_FALL); 363 + 364 + dwc3_omap_write_irqmisc_clr(omap, reg); 363 365 } 364 366 365 367 static u64 dwc3_omap_dma_mask = DMA_BIT_MASK(32);
-2
drivers/usb/gadget/configfs.c
··· 1161 1161 if (desc->opts_mutex) 1162 1162 mutex_lock(desc->opts_mutex); 1163 1163 memcpy(desc->ext_compat_id, page, l); 1164 - desc->ext_compat_id[l] = '\0'; 1165 1164 1166 1165 if (desc->opts_mutex) 1167 1166 mutex_unlock(desc->opts_mutex); ··· 1191 1192 if (desc->opts_mutex) 1192 1193 mutex_lock(desc->opts_mutex); 1193 1194 memcpy(desc->ext_compat_id + 8, page, l); 1194 - desc->ext_compat_id[l + 8] = '\0'; 1195 1195 1196 1196 if (desc->opts_mutex) 1197 1197 mutex_unlock(desc->opts_mutex);
+90 -136
drivers/usb/gadget/function/f_fs.c
··· 144 144 bool read; 145 145 146 146 struct kiocb *kiocb; 147 - const struct iovec *iovec; 148 - unsigned long nr_segs; 149 - char __user *buf; 150 - size_t len; 147 + struct iov_iter data; 148 + const void *to_free; 149 + char *buf; 151 150 152 151 struct mm_struct *mm; 153 152 struct work_struct work; ··· 648 649 io_data->req->actual; 649 650 650 651 if (io_data->read && ret > 0) { 651 - int i; 652 - size_t pos = 0; 653 - 654 - /* 655 - * Since req->length may be bigger than io_data->len (after 656 - * being rounded up to maxpacketsize), we may end up with more 657 - * data then user space has space for. 658 - */ 659 - ret = min_t(int, ret, io_data->len); 660 - 661 652 use_mm(io_data->mm); 662 - for (i = 0; i < io_data->nr_segs; i++) { 663 - size_t len = min_t(size_t, ret - pos, 664 - io_data->iovec[i].iov_len); 665 - if (!len) 666 - break; 667 - if (unlikely(copy_to_user(io_data->iovec[i].iov_base, 668 - &io_data->buf[pos], len))) { 669 - ret = -EFAULT; 670 - break; 671 - } 672 - pos += len; 673 - } 653 + ret = copy_to_iter(io_data->buf, ret, &io_data->data); 654 + if (iov_iter_count(&io_data->data)) 655 + ret = -EFAULT; 674 656 unuse_mm(io_data->mm); 675 657 } 676 658 ··· 664 684 665 685 io_data->kiocb->private = NULL; 666 686 if (io_data->read) 667 - kfree(io_data->iovec); 687 + kfree(io_data->to_free); 668 688 kfree(io_data->buf); 669 689 kfree(io_data); 670 690 } ··· 723 743 * before the waiting completes, so do not assign to 'gadget' earlier 724 744 */ 725 745 struct usb_gadget *gadget = epfile->ffs->gadget; 746 + size_t copied; 726 747 727 748 spin_lock_irq(&epfile->ffs->eps_lock); 728 749 /* In the meantime, endpoint got disabled or changed. */ ··· 731 750 spin_unlock_irq(&epfile->ffs->eps_lock); 732 751 return -ESHUTDOWN; 733 752 } 753 + data_len = iov_iter_count(&io_data->data); 734 754 /* 735 755 * Controller may require buffer size to be aligned to 736 756 * maxpacketsize of an out endpoint. 737 757 */ 738 - data_len = io_data->read ? 
739 - usb_ep_align_maybe(gadget, ep->ep, io_data->len) : 740 - io_data->len; 758 + if (io_data->read) 759 + data_len = usb_ep_align_maybe(gadget, ep->ep, data_len); 741 760 spin_unlock_irq(&epfile->ffs->eps_lock); 742 761 743 762 data = kmalloc(data_len, GFP_KERNEL); 744 763 if (unlikely(!data)) 745 764 return -ENOMEM; 746 - if (io_data->aio && !io_data->read) { 747 - int i; 748 - size_t pos = 0; 749 - for (i = 0; i < io_data->nr_segs; i++) { 750 - if (unlikely(copy_from_user(&data[pos], 751 - io_data->iovec[i].iov_base, 752 - io_data->iovec[i].iov_len))) { 753 - ret = -EFAULT; 754 - goto error; 755 - } 756 - pos += io_data->iovec[i].iov_len; 757 - } 758 - } else { 759 - if (!io_data->read && 760 - unlikely(__copy_from_user(data, io_data->buf, 761 - io_data->len))) { 765 + if (!io_data->read) { 766 + copied = copy_from_iter(data, data_len, &io_data->data); 767 + if (copied != data_len) { 762 768 ret = -EFAULT; 763 769 goto error; 764 770 } ··· 844 876 */ 845 877 ret = ep->status; 846 878 if (io_data->read && ret > 0) { 847 - ret = min_t(size_t, ret, io_data->len); 848 - 849 - if (unlikely(copy_to_user(io_data->buf, 850 - data, ret))) 879 + ret = copy_to_iter(data, ret, &io_data->data); 880 + if (unlikely(iov_iter_count(&io_data->data))) 851 881 ret = -EFAULT; 852 882 } 853 883 } ··· 862 896 error: 863 897 kfree(data); 864 898 return ret; 865 - } 866 - 867 - static ssize_t 868 - ffs_epfile_write(struct file *file, const char __user *buf, size_t len, 869 - loff_t *ptr) 870 - { 871 - struct ffs_io_data io_data; 872 - 873 - ENTER(); 874 - 875 - io_data.aio = false; 876 - io_data.read = false; 877 - io_data.buf = (char * __user)buf; 878 - io_data.len = len; 879 - 880 - return ffs_epfile_io(file, &io_data); 881 - } 882 - 883 - static ssize_t 884 - ffs_epfile_read(struct file *file, char __user *buf, size_t len, loff_t *ptr) 885 - { 886 - struct ffs_io_data io_data; 887 - 888 - ENTER(); 889 - 890 - io_data.aio = false; 891 - io_data.read = true; 892 - io_data.buf = buf; 
893 - io_data.len = len; 894 - 895 - return ffs_epfile_io(file, &io_data); 896 899 } 897 900 898 901 static int ··· 900 965 return value; 901 966 } 902 967 903 - static ssize_t ffs_epfile_aio_write(struct kiocb *kiocb, 904 - const struct iovec *iovec, 905 - unsigned long nr_segs, loff_t loff) 968 + static ssize_t ffs_epfile_write_iter(struct kiocb *kiocb, struct iov_iter *from) 906 969 { 907 - struct ffs_io_data *io_data; 970 + struct ffs_io_data io_data, *p = &io_data; 971 + ssize_t res; 908 972 909 973 ENTER(); 910 974 911 - io_data = kmalloc(sizeof(*io_data), GFP_KERNEL); 912 - if (unlikely(!io_data)) 913 - return -ENOMEM; 914 - 915 - io_data->aio = true; 916 - io_data->read = false; 917 - io_data->kiocb = kiocb; 918 - io_data->iovec = iovec; 919 - io_data->nr_segs = nr_segs; 920 - io_data->len = kiocb->ki_nbytes; 921 - io_data->mm = current->mm; 922 - 923 - kiocb->private = io_data; 924 - 925 - kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 926 - 927 - return ffs_epfile_io(kiocb->ki_filp, io_data); 928 - } 929 - 930 - static ssize_t ffs_epfile_aio_read(struct kiocb *kiocb, 931 - const struct iovec *iovec, 932 - unsigned long nr_segs, loff_t loff) 933 - { 934 - struct ffs_io_data *io_data; 935 - struct iovec *iovec_copy; 936 - 937 - ENTER(); 938 - 939 - iovec_copy = kmalloc_array(nr_segs, sizeof(*iovec_copy), GFP_KERNEL); 940 - if (unlikely(!iovec_copy)) 941 - return -ENOMEM; 942 - 943 - memcpy(iovec_copy, iovec, sizeof(struct iovec)*nr_segs); 944 - 945 - io_data = kmalloc(sizeof(*io_data), GFP_KERNEL); 946 - if (unlikely(!io_data)) { 947 - kfree(iovec_copy); 948 - return -ENOMEM; 975 + if (!is_sync_kiocb(kiocb)) { 976 + p = kmalloc(sizeof(io_data), GFP_KERNEL); 977 + if (unlikely(!p)) 978 + return -ENOMEM; 979 + p->aio = true; 980 + } else { 981 + p->aio = false; 949 982 } 950 983 951 - io_data->aio = true; 952 - io_data->read = true; 953 - io_data->kiocb = kiocb; 954 - io_data->iovec = iovec_copy; 955 - io_data->nr_segs = nr_segs; 956 - io_data->len = 
kiocb->ki_nbytes; 957 - io_data->mm = current->mm; 984 + p->read = false; 985 + p->kiocb = kiocb; 986 + p->data = *from; 987 + p->mm = current->mm; 958 988 959 - kiocb->private = io_data; 989 + kiocb->private = p; 960 990 961 991 kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 962 992 963 - return ffs_epfile_io(kiocb->ki_filp, io_data); 993 + res = ffs_epfile_io(kiocb->ki_filp, p); 994 + if (res == -EIOCBQUEUED) 995 + return res; 996 + if (p->aio) 997 + kfree(p); 998 + else 999 + *from = p->data; 1000 + return res; 1001 + } 1002 + 1003 + static ssize_t ffs_epfile_read_iter(struct kiocb *kiocb, struct iov_iter *to) 1004 + { 1005 + struct ffs_io_data io_data, *p = &io_data; 1006 + ssize_t res; 1007 + 1008 + ENTER(); 1009 + 1010 + if (!is_sync_kiocb(kiocb)) { 1011 + p = kmalloc(sizeof(io_data), GFP_KERNEL); 1012 + if (unlikely(!p)) 1013 + return -ENOMEM; 1014 + p->aio = true; 1015 + } else { 1016 + p->aio = false; 1017 + } 1018 + 1019 + p->read = true; 1020 + p->kiocb = kiocb; 1021 + if (p->aio) { 1022 + p->to_free = dup_iter(&p->data, to, GFP_KERNEL); 1023 + if (!p->to_free) { 1024 + kfree(p); 1025 + return -ENOMEM; 1026 + } 1027 + } else { 1028 + p->data = *to; 1029 + p->to_free = NULL; 1030 + } 1031 + p->mm = current->mm; 1032 + 1033 + kiocb->private = p; 1034 + 1035 + kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 1036 + 1037 + res = ffs_epfile_io(kiocb->ki_filp, p); 1038 + if (res == -EIOCBQUEUED) 1039 + return res; 1040 + 1041 + if (p->aio) { 1042 + kfree(p->to_free); 1043 + kfree(p); 1044 + } else { 1045 + *to = p->data; 1046 + } 1047 + return res; 964 1048 } 965 1049 966 1050 static int ··· 1059 1105 .llseek = no_llseek, 1060 1106 1061 1107 .open = ffs_epfile_open, 1062 - .write = ffs_epfile_write, 1063 - .read = ffs_epfile_read, 1064 - .aio_write = ffs_epfile_aio_write, 1065 - .aio_read = ffs_epfile_aio_read, 1108 + .write = new_sync_write, 1109 + .read = new_sync_read, 1110 + .write_iter = ffs_epfile_write_iter, 1111 + .read_iter = ffs_epfile_read_iter, 1066 1112 
.release = ffs_epfile_release, 1067 1113 .unlocked_ioctl = ffs_epfile_ioctl, 1068 1114 };
+1 -1
drivers/usb/gadget/function/f_hid.c
··· 569 569 return status; 570 570 } 571 571 572 - const struct file_operations f_hidg_fops = { 572 + static const struct file_operations f_hidg_fops = { 573 573 .owner = THIS_MODULE, 574 574 .open = f_hidg_open, 575 575 .release = f_hidg_release,
+1 -2
drivers/usb/gadget/function/f_loopback.c
··· 289 289 struct usb_composite_dev *cdev; 290 290 291 291 cdev = loop->function.config->cdev; 292 - disable_endpoints(cdev, loop->in_ep, loop->out_ep, NULL, NULL, NULL, 293 - NULL); 292 + disable_endpoints(cdev, loop->in_ep, loop->out_ep, NULL, NULL); 294 293 VDBG(cdev, "%s disabled\n", loop->function.name); 295 294 } 296 295
+4 -1
drivers/usb/gadget/function/f_phonet.c
··· 417 417 return -EINVAL; 418 418 419 419 spin_lock(&port->lock); 420 - __pn_reset(f); 420 + 421 + if (fp->in_ep->driver_data) 422 + __pn_reset(f); 423 + 421 424 if (alt == 1) { 422 425 int i; 423 426
+20 -491
drivers/usb/gadget/function/f_sourcesink.c
··· 23 23 #include "gadget_chips.h" 24 24 #include "u_f.h" 25 25 26 - #define USB_MS_TO_SS_INTERVAL(x) USB_MS_TO_HS_INTERVAL(x) 27 - 28 - enum eptype { 29 - EP_CONTROL = 0, 30 - EP_BULK, 31 - EP_ISOC, 32 - EP_INTERRUPT, 33 - }; 34 - 35 26 /* 36 27 * SOURCE/SINK FUNCTION ... a primary testing vehicle for USB peripheral 37 28 * controller drivers. ··· 55 64 struct usb_ep *out_ep; 56 65 struct usb_ep *iso_in_ep; 57 66 struct usb_ep *iso_out_ep; 58 - struct usb_ep *int_in_ep; 59 - struct usb_ep *int_out_ep; 60 67 int cur_alt; 61 68 }; 62 69 ··· 68 79 static unsigned isoc_maxpacket; 69 80 static unsigned isoc_mult; 70 81 static unsigned isoc_maxburst; 71 - static unsigned int_interval; /* In ms */ 72 - static unsigned int_maxpacket; 73 - static unsigned int_mult; 74 - static unsigned int_maxburst; 75 82 static unsigned buflen; 76 83 77 84 /*-------------------------------------------------------------------------*/ ··· 88 103 89 104 .bAlternateSetting = 1, 90 105 .bNumEndpoints = 4, 91 - .bInterfaceClass = USB_CLASS_VENDOR_SPEC, 92 - /* .iInterface = DYNAMIC */ 93 - }; 94 - 95 - static struct usb_interface_descriptor source_sink_intf_alt2 = { 96 - .bLength = USB_DT_INTERFACE_SIZE, 97 - .bDescriptorType = USB_DT_INTERFACE, 98 - 99 - .bAlternateSetting = 2, 100 - .bNumEndpoints = 2, 101 106 .bInterfaceClass = USB_CLASS_VENDOR_SPEC, 102 107 /* .iInterface = DYNAMIC */ 103 108 }; ··· 130 155 .bInterval = 4, 131 156 }; 132 157 133 - static struct usb_endpoint_descriptor fs_int_source_desc = { 134 - .bLength = USB_DT_ENDPOINT_SIZE, 135 - .bDescriptorType = USB_DT_ENDPOINT, 136 - 137 - .bEndpointAddress = USB_DIR_IN, 138 - .bmAttributes = USB_ENDPOINT_XFER_INT, 139 - .wMaxPacketSize = cpu_to_le16(64), 140 - .bInterval = GZERO_INT_INTERVAL, 141 - }; 142 - 143 - static struct usb_endpoint_descriptor fs_int_sink_desc = { 144 - .bLength = USB_DT_ENDPOINT_SIZE, 145 - .bDescriptorType = USB_DT_ENDPOINT, 146 - 147 - .bEndpointAddress = USB_DIR_OUT, 148 - .bmAttributes = 
USB_ENDPOINT_XFER_INT, 149 - .wMaxPacketSize = cpu_to_le16(64), 150 - .bInterval = GZERO_INT_INTERVAL, 151 - }; 152 - 153 158 static struct usb_descriptor_header *fs_source_sink_descs[] = { 154 159 (struct usb_descriptor_header *) &source_sink_intf_alt0, 155 160 (struct usb_descriptor_header *) &fs_sink_desc, ··· 140 185 (struct usb_descriptor_header *) &fs_source_desc, 141 186 (struct usb_descriptor_header *) &fs_iso_sink_desc, 142 187 (struct usb_descriptor_header *) &fs_iso_source_desc, 143 - (struct usb_descriptor_header *) &source_sink_intf_alt2, 144 - #define FS_ALT_IFC_2_OFFSET 8 145 - (struct usb_descriptor_header *) &fs_int_sink_desc, 146 - (struct usb_descriptor_header *) &fs_int_source_desc, 147 188 NULL, 148 189 }; 149 190 ··· 179 228 .bInterval = 4, 180 229 }; 181 230 182 - static struct usb_endpoint_descriptor hs_int_source_desc = { 183 - .bLength = USB_DT_ENDPOINT_SIZE, 184 - .bDescriptorType = USB_DT_ENDPOINT, 185 - 186 - .bmAttributes = USB_ENDPOINT_XFER_INT, 187 - .wMaxPacketSize = cpu_to_le16(1024), 188 - .bInterval = USB_MS_TO_HS_INTERVAL(GZERO_INT_INTERVAL), 189 - }; 190 - 191 - static struct usb_endpoint_descriptor hs_int_sink_desc = { 192 - .bLength = USB_DT_ENDPOINT_SIZE, 193 - .bDescriptorType = USB_DT_ENDPOINT, 194 - 195 - .bmAttributes = USB_ENDPOINT_XFER_INT, 196 - .wMaxPacketSize = cpu_to_le16(1024), 197 - .bInterval = USB_MS_TO_HS_INTERVAL(GZERO_INT_INTERVAL), 198 - }; 199 - 200 231 static struct usb_descriptor_header *hs_source_sink_descs[] = { 201 232 (struct usb_descriptor_header *) &source_sink_intf_alt0, 202 233 (struct usb_descriptor_header *) &hs_source_desc, ··· 189 256 (struct usb_descriptor_header *) &hs_sink_desc, 190 257 (struct usb_descriptor_header *) &hs_iso_source_desc, 191 258 (struct usb_descriptor_header *) &hs_iso_sink_desc, 192 - (struct usb_descriptor_header *) &source_sink_intf_alt2, 193 - #define HS_ALT_IFC_2_OFFSET 8 194 - (struct usb_descriptor_header *) &hs_int_source_desc, 195 - (struct usb_descriptor_header 
*) &hs_int_sink_desc, 196 259 NULL, 197 260 }; 198 261 ··· 264 335 .wBytesPerInterval = cpu_to_le16(1024), 265 336 }; 266 337 267 - static struct usb_endpoint_descriptor ss_int_source_desc = { 268 - .bLength = USB_DT_ENDPOINT_SIZE, 269 - .bDescriptorType = USB_DT_ENDPOINT, 270 - 271 - .bmAttributes = USB_ENDPOINT_XFER_INT, 272 - .wMaxPacketSize = cpu_to_le16(1024), 273 - .bInterval = USB_MS_TO_SS_INTERVAL(GZERO_INT_INTERVAL), 274 - }; 275 - 276 - struct usb_ss_ep_comp_descriptor ss_int_source_comp_desc = { 277 - .bLength = USB_DT_SS_EP_COMP_SIZE, 278 - .bDescriptorType = USB_DT_SS_ENDPOINT_COMP, 279 - 280 - .bMaxBurst = 0, 281 - .bmAttributes = 0, 282 - .wBytesPerInterval = cpu_to_le16(1024), 283 - }; 284 - 285 - static struct usb_endpoint_descriptor ss_int_sink_desc = { 286 - .bLength = USB_DT_ENDPOINT_SIZE, 287 - .bDescriptorType = USB_DT_ENDPOINT, 288 - 289 - .bmAttributes = USB_ENDPOINT_XFER_INT, 290 - .wMaxPacketSize = cpu_to_le16(1024), 291 - .bInterval = USB_MS_TO_SS_INTERVAL(GZERO_INT_INTERVAL), 292 - }; 293 - 294 - struct usb_ss_ep_comp_descriptor ss_int_sink_comp_desc = { 295 - .bLength = USB_DT_SS_EP_COMP_SIZE, 296 - .bDescriptorType = USB_DT_SS_ENDPOINT_COMP, 297 - 298 - .bMaxBurst = 0, 299 - .bmAttributes = 0, 300 - .wBytesPerInterval = cpu_to_le16(1024), 301 - }; 302 - 303 338 static struct usb_descriptor_header *ss_source_sink_descs[] = { 304 339 (struct usb_descriptor_header *) &source_sink_intf_alt0, 305 340 (struct usb_descriptor_header *) &ss_source_desc, ··· 280 387 (struct usb_descriptor_header *) &ss_iso_source_comp_desc, 281 388 (struct usb_descriptor_header *) &ss_iso_sink_desc, 282 389 (struct usb_descriptor_header *) &ss_iso_sink_comp_desc, 283 - (struct usb_descriptor_header *) &source_sink_intf_alt2, 284 - #define SS_ALT_IFC_2_OFFSET 14 285 - (struct usb_descriptor_header *) &ss_int_source_desc, 286 - (struct usb_descriptor_header *) &ss_int_source_comp_desc, 287 - (struct usb_descriptor_header *) &ss_int_sink_desc, 288 - (struct 
usb_descriptor_header *) &ss_int_sink_comp_desc, 289 390 NULL, 290 391 }; 291 392 ··· 301 414 }; 302 415 303 416 /*-------------------------------------------------------------------------*/ 304 - static const char *get_ep_string(enum eptype ep_type) 305 - { 306 - switch (ep_type) { 307 - case EP_ISOC: 308 - return "ISOC-"; 309 - case EP_INTERRUPT: 310 - return "INTERRUPT-"; 311 - case EP_CONTROL: 312 - return "CTRL-"; 313 - case EP_BULK: 314 - return "BULK-"; 315 - default: 316 - return "UNKNOWN-"; 317 - } 318 - } 319 417 320 418 static inline struct usb_request *ss_alloc_ep_req(struct usb_ep *ep, int len) 321 419 { ··· 328 456 329 457 void disable_endpoints(struct usb_composite_dev *cdev, 330 458 struct usb_ep *in, struct usb_ep *out, 331 - struct usb_ep *iso_in, struct usb_ep *iso_out, 332 - struct usb_ep *int_in, struct usb_ep *int_out) 459 + struct usb_ep *iso_in, struct usb_ep *iso_out) 333 460 { 334 461 disable_ep(cdev, in); 335 462 disable_ep(cdev, out); ··· 336 465 disable_ep(cdev, iso_in); 337 466 if (iso_out) 338 467 disable_ep(cdev, iso_out); 339 - if (int_in) 340 - disable_ep(cdev, int_in); 341 - if (int_out) 342 - disable_ep(cdev, int_out); 343 468 } 344 469 345 470 static int ··· 352 485 return id; 353 486 source_sink_intf_alt0.bInterfaceNumber = id; 354 487 source_sink_intf_alt1.bInterfaceNumber = id; 355 - source_sink_intf_alt2.bInterfaceNumber = id; 356 488 357 489 /* allocate bulk endpoints */ 358 490 ss->in_ep = usb_ep_autoconfig(cdev->gadget, &fs_source_desc); ··· 412 546 if (isoc_maxpacket > 1024) 413 547 isoc_maxpacket = 1024; 414 548 415 - /* sanity check the interrupt module parameters */ 416 - if (int_interval < 1) 417 - int_interval = 1; 418 - if (int_interval > 4096) 419 - int_interval = 4096; 420 - if (int_mult > 2) 421 - int_mult = 2; 422 - if (int_maxburst > 15) 423 - int_maxburst = 15; 424 - 425 - /* fill in the FS interrupt descriptors from the module parameters */ 426 - fs_int_source_desc.wMaxPacketSize = int_maxpacket > 64 ? 
427 - 64 : int_maxpacket; 428 - fs_int_source_desc.bInterval = int_interval > 255 ? 429 - 255 : int_interval; 430 - fs_int_sink_desc.wMaxPacketSize = int_maxpacket > 64 ? 431 - 64 : int_maxpacket; 432 - fs_int_sink_desc.bInterval = int_interval > 255 ? 433 - 255 : int_interval; 434 - 435 - /* allocate int endpoints */ 436 - ss->int_in_ep = usb_ep_autoconfig(cdev->gadget, &fs_int_source_desc); 437 - if (!ss->int_in_ep) 438 - goto no_int; 439 - ss->int_in_ep->driver_data = cdev; /* claim */ 440 - 441 - ss->int_out_ep = usb_ep_autoconfig(cdev->gadget, &fs_int_sink_desc); 442 - if (ss->int_out_ep) { 443 - ss->int_out_ep->driver_data = cdev; /* claim */ 444 - } else { 445 - ss->int_in_ep->driver_data = NULL; 446 - ss->int_in_ep = NULL; 447 - no_int: 448 - fs_source_sink_descs[FS_ALT_IFC_2_OFFSET] = NULL; 449 - hs_source_sink_descs[HS_ALT_IFC_2_OFFSET] = NULL; 450 - ss_source_sink_descs[SS_ALT_IFC_2_OFFSET] = NULL; 451 - } 452 - 453 - if (int_maxpacket > 1024) 454 - int_maxpacket = 1024; 455 - 456 549 /* support high speed hardware */ 457 550 hs_source_desc.bEndpointAddress = fs_source_desc.bEndpointAddress; 458 551 hs_sink_desc.bEndpointAddress = fs_sink_desc.bEndpointAddress; 459 552 460 553 /* 461 - * Fill in the HS isoc and interrupt descriptors from the module 462 - * parameters. We assume that the user knows what they are doing and 463 - * won't give parameters that their UDC doesn't support. 554 + * Fill in the HS isoc descriptors from the module parameters. 555 + * We assume that the user knows what they are doing and won't 556 + * give parameters that their UDC doesn't support. 
464 557 */ 465 558 hs_iso_source_desc.wMaxPacketSize = isoc_maxpacket; 466 559 hs_iso_source_desc.wMaxPacketSize |= isoc_mult << 11; ··· 432 607 hs_iso_sink_desc.bInterval = isoc_interval; 433 608 hs_iso_sink_desc.bEndpointAddress = fs_iso_sink_desc.bEndpointAddress; 434 609 435 - hs_int_source_desc.wMaxPacketSize = int_maxpacket; 436 - hs_int_source_desc.wMaxPacketSize |= int_mult << 11; 437 - hs_int_source_desc.bInterval = USB_MS_TO_HS_INTERVAL(int_interval); 438 - hs_int_source_desc.bEndpointAddress = 439 - fs_int_source_desc.bEndpointAddress; 440 - 441 - hs_int_sink_desc.wMaxPacketSize = int_maxpacket; 442 - hs_int_sink_desc.wMaxPacketSize |= int_mult << 11; 443 - hs_int_sink_desc.bInterval = USB_MS_TO_HS_INTERVAL(int_interval); 444 - hs_int_sink_desc.bEndpointAddress = fs_int_sink_desc.bEndpointAddress; 445 - 446 610 /* support super speed hardware */ 447 611 ss_source_desc.bEndpointAddress = 448 612 fs_source_desc.bEndpointAddress; ··· 439 625 fs_sink_desc.bEndpointAddress; 440 626 441 627 /* 442 - * Fill in the SS isoc and interrupt descriptors from the module 443 - * parameters. We assume that the user knows what they are doing and 444 - * won't give parameters that their UDC doesn't support. 628 + * Fill in the SS isoc descriptors from the module parameters. 629 + * We assume that the user knows what they are doing and won't 630 + * give parameters that their UDC doesn't support. 
445 631 */ 446 632 ss_iso_source_desc.wMaxPacketSize = isoc_maxpacket; 447 633 ss_iso_source_desc.bInterval = isoc_interval; ··· 460 646 isoc_maxpacket * (isoc_mult + 1) * (isoc_maxburst + 1); 461 647 ss_iso_sink_desc.bEndpointAddress = fs_iso_sink_desc.bEndpointAddress; 462 648 463 - ss_int_source_desc.wMaxPacketSize = int_maxpacket; 464 - ss_int_source_desc.bInterval = USB_MS_TO_SS_INTERVAL(int_interval); 465 - ss_int_source_comp_desc.bmAttributes = int_mult; 466 - ss_int_source_comp_desc.bMaxBurst = int_maxburst; 467 - ss_int_source_comp_desc.wBytesPerInterval = 468 - int_maxpacket * (int_mult + 1) * (int_maxburst + 1); 469 - ss_int_source_desc.bEndpointAddress = 470 - fs_int_source_desc.bEndpointAddress; 471 - 472 - ss_int_sink_desc.wMaxPacketSize = int_maxpacket; 473 - ss_int_sink_desc.bInterval = USB_MS_TO_SS_INTERVAL(int_interval); 474 - ss_int_sink_comp_desc.bmAttributes = int_mult; 475 - ss_int_sink_comp_desc.bMaxBurst = int_maxburst; 476 - ss_int_sink_comp_desc.wBytesPerInterval = 477 - int_maxpacket * (int_mult + 1) * (int_maxburst + 1); 478 - ss_int_sink_desc.bEndpointAddress = fs_int_sink_desc.bEndpointAddress; 479 - 480 649 ret = usb_assign_descriptors(f, fs_source_sink_descs, 481 650 hs_source_sink_descs, ss_source_sink_descs); 482 651 if (ret) 483 652 return ret; 484 653 485 - DBG(cdev, "%s speed %s: IN/%s, OUT/%s, ISO-IN/%s, ISO-OUT/%s, " 486 - "INT-IN/%s, INT-OUT/%s\n", 654 + DBG(cdev, "%s speed %s: IN/%s, OUT/%s, ISO-IN/%s, ISO-OUT/%s\n", 487 655 (gadget_is_superspeed(c->cdev->gadget) ? "super" : 488 656 (gadget_is_dualspeed(c->cdev->gadget) ? "dual" : "full")), 489 657 f->name, ss->in_ep->name, ss->out_ep->name, 490 658 ss->iso_in_ep ? ss->iso_in_ep->name : "<none>", 491 - ss->iso_out_ep ? ss->iso_out_ep->name : "<none>", 492 - ss->int_in_ep ? ss->int_in_ep->name : "<none>", 493 - ss->int_out_ep ? ss->int_out_ep->name : "<none>"); 659 + ss->iso_out_ep ? 
ss->iso_out_ep->name : "<none>"); 494 660 return 0; 495 661 } 496 662 ··· 601 807 } 602 808 603 809 static int source_sink_start_ep(struct f_sourcesink *ss, bool is_in, 604 - enum eptype ep_type, int speed) 810 + bool is_iso, int speed) 605 811 { 606 812 struct usb_ep *ep; 607 813 struct usb_request *req; 608 814 int i, size, status; 609 815 610 816 for (i = 0; i < 8; i++) { 611 - switch (ep_type) { 612 - case EP_ISOC: 817 + if (is_iso) { 613 818 switch (speed) { 614 819 case USB_SPEED_SUPER: 615 820 size = isoc_maxpacket * (isoc_mult + 1) * ··· 624 831 } 625 832 ep = is_in ? ss->iso_in_ep : ss->iso_out_ep; 626 833 req = ss_alloc_ep_req(ep, size); 627 - break; 628 - case EP_INTERRUPT: 629 - switch (speed) { 630 - case USB_SPEED_SUPER: 631 - size = int_maxpacket * (int_mult + 1) * 632 - (int_maxburst + 1); 633 - break; 634 - case USB_SPEED_HIGH: 635 - size = int_maxpacket * (int_mult + 1); 636 - break; 637 - default: 638 - size = int_maxpacket > 1023 ? 639 - 1023 : int_maxpacket; 640 - break; 641 - } 642 - ep = is_in ? ss->int_in_ep : ss->int_out_ep; 643 - req = ss_alloc_ep_req(ep, size); 644 - break; 645 - default: 834 + } else { 646 835 ep = is_in ? ss->in_ep : ss->out_ep; 647 836 req = ss_alloc_ep_req(ep, 0); 648 - break; 649 837 } 650 838 651 839 if (!req) ··· 644 870 645 871 cdev = ss->function.config->cdev; 646 872 ERROR(cdev, "start %s%s %s --> %d\n", 647 - get_ep_string(ep_type), is_in ? "IN" : "OUT", 648 - ep->name, status); 873 + is_iso ? "ISO-" : "", is_in ? 
"IN" : "OUT", 874 + ep->name, status); 649 875 free_ep_req(ep, req); 650 876 } 651 877 652 - if (!(ep_type == EP_ISOC)) 878 + if (!is_iso) 653 879 break; 654 880 } 655 881 ··· 662 888 663 889 cdev = ss->function.config->cdev; 664 890 disable_endpoints(cdev, ss->in_ep, ss->out_ep, ss->iso_in_ep, 665 - ss->iso_out_ep, ss->int_in_ep, ss->int_out_ep); 891 + ss->iso_out_ep); 666 892 VDBG(cdev, "%s disabled\n", ss->function.name); 667 893 } 668 894 ··· 674 900 int speed = cdev->gadget->speed; 675 901 struct usb_ep *ep; 676 902 677 - if (alt == 2) { 678 - /* Configure for periodic interrupt endpoint */ 679 - ep = ss->int_in_ep; 680 - if (ep) { 681 - result = config_ep_by_speed(cdev->gadget, 682 - &(ss->function), ep); 683 - if (result) 684 - return result; 685 - 686 - result = usb_ep_enable(ep); 687 - if (result < 0) 688 - return result; 689 - 690 - ep->driver_data = ss; 691 - result = source_sink_start_ep(ss, true, EP_INTERRUPT, 692 - speed); 693 - if (result < 0) { 694 - fail1: 695 - ep = ss->int_in_ep; 696 - if (ep) { 697 - usb_ep_disable(ep); 698 - ep->driver_data = NULL; 699 - } 700 - return result; 701 - } 702 - } 703 - 704 - /* 705 - * one interrupt endpoint reads (sinks) anything OUT (from the 706 - * host) 707 - */ 708 - ep = ss->int_out_ep; 709 - if (ep) { 710 - result = config_ep_by_speed(cdev->gadget, 711 - &(ss->function), ep); 712 - if (result) 713 - goto fail1; 714 - 715 - result = usb_ep_enable(ep); 716 - if (result < 0) 717 - goto fail1; 718 - 719 - ep->driver_data = ss; 720 - result = source_sink_start_ep(ss, false, EP_INTERRUPT, 721 - speed); 722 - if (result < 0) { 723 - ep = ss->int_out_ep; 724 - usb_ep_disable(ep); 725 - ep->driver_data = NULL; 726 - goto fail1; 727 - } 728 - } 729 - 730 - goto out; 731 - } 732 - 733 903 /* one bulk endpoint writes (sources) zeroes IN (to the host) */ 734 904 ep = ss->in_ep; 735 905 result = config_ep_by_speed(cdev->gadget, &(ss->function), ep); ··· 684 966 return result; 685 967 ep->driver_data = ss; 686 968 687 - 
result = source_sink_start_ep(ss, true, EP_BULK, speed); 969 + result = source_sink_start_ep(ss, true, false, speed); 688 970 if (result < 0) { 689 971 fail: 690 972 ep = ss->in_ep; ··· 703 985 goto fail; 704 986 ep->driver_data = ss; 705 987 706 - result = source_sink_start_ep(ss, false, EP_BULK, speed); 988 + result = source_sink_start_ep(ss, false, false, speed); 707 989 if (result < 0) { 708 990 fail2: 709 991 ep = ss->out_ep; ··· 726 1008 goto fail2; 727 1009 ep->driver_data = ss; 728 1010 729 - result = source_sink_start_ep(ss, true, EP_ISOC, speed); 1011 + result = source_sink_start_ep(ss, true, true, speed); 730 1012 if (result < 0) { 731 1013 fail3: 732 1014 ep = ss->iso_in_ep; ··· 749 1031 goto fail3; 750 1032 ep->driver_data = ss; 751 1033 752 - result = source_sink_start_ep(ss, false, EP_ISOC, speed); 1034 + result = source_sink_start_ep(ss, false, true, speed); 753 1035 if (result < 0) { 754 1036 usb_ep_disable(ep); 755 1037 ep->driver_data = NULL; 756 1038 goto fail3; 757 1039 } 758 1040 } 759 - 760 1041 out: 761 1042 ss->cur_alt = alt; 762 1043 ··· 770 1053 struct usb_composite_dev *cdev = f->config->cdev; 771 1054 772 1055 if (ss->in_ep->driver_data) 773 - disable_source_sink(ss); 774 - else if (alt == 2 && ss->int_in_ep->driver_data) 775 1056 disable_source_sink(ss); 776 1057 return enable_source_sink(cdev, ss, alt); 777 1058 } ··· 883 1168 isoc_maxpacket = ss_opts->isoc_maxpacket; 884 1169 isoc_mult = ss_opts->isoc_mult; 885 1170 isoc_maxburst = ss_opts->isoc_maxburst; 886 - int_interval = ss_opts->int_interval; 887 - int_maxpacket = ss_opts->int_maxpacket; 888 - int_mult = ss_opts->int_mult; 889 - int_maxburst = ss_opts->int_maxburst; 890 1171 buflen = ss_opts->bulk_buflen; 891 1172 892 1173 ss->function.name = "source/sink"; ··· 1179 1468 f_ss_opts_bulk_buflen_show, 1180 1469 f_ss_opts_bulk_buflen_store); 1181 1470 1182 - static ssize_t f_ss_opts_int_interval_show(struct f_ss_opts *opts, char *page) 1183 - { 1184 - int result; 1185 - 1186 - 
mutex_lock(&opts->lock); 1187 - result = sprintf(page, "%u", opts->int_interval); 1188 - mutex_unlock(&opts->lock); 1189 - 1190 - return result; 1191 - } 1192 - 1193 - static ssize_t f_ss_opts_int_interval_store(struct f_ss_opts *opts, 1194 - const char *page, size_t len) 1195 - { 1196 - int ret; 1197 - u32 num; 1198 - 1199 - mutex_lock(&opts->lock); 1200 - if (opts->refcnt) { 1201 - ret = -EBUSY; 1202 - goto end; 1203 - } 1204 - 1205 - ret = kstrtou32(page, 0, &num); 1206 - if (ret) 1207 - goto end; 1208 - 1209 - if (num > 4096) { 1210 - ret = -EINVAL; 1211 - goto end; 1212 - } 1213 - 1214 - opts->int_interval = num; 1215 - ret = len; 1216 - end: 1217 - mutex_unlock(&opts->lock); 1218 - return ret; 1219 - } 1220 - 1221 - static struct f_ss_opts_attribute f_ss_opts_int_interval = 1222 - __CONFIGFS_ATTR(int_interval, S_IRUGO | S_IWUSR, 1223 - f_ss_opts_int_interval_show, 1224 - f_ss_opts_int_interval_store); 1225 - 1226 - static ssize_t f_ss_opts_int_maxpacket_show(struct f_ss_opts *opts, char *page) 1227 - { 1228 - int result; 1229 - 1230 - mutex_lock(&opts->lock); 1231 - result = sprintf(page, "%u", opts->int_maxpacket); 1232 - mutex_unlock(&opts->lock); 1233 - 1234 - return result; 1235 - } 1236 - 1237 - static ssize_t f_ss_opts_int_maxpacket_store(struct f_ss_opts *opts, 1238 - const char *page, size_t len) 1239 - { 1240 - int ret; 1241 - u16 num; 1242 - 1243 - mutex_lock(&opts->lock); 1244 - if (opts->refcnt) { 1245 - ret = -EBUSY; 1246 - goto end; 1247 - } 1248 - 1249 - ret = kstrtou16(page, 0, &num); 1250 - if (ret) 1251 - goto end; 1252 - 1253 - if (num > 1024) { 1254 - ret = -EINVAL; 1255 - goto end; 1256 - } 1257 - 1258 - opts->int_maxpacket = num; 1259 - ret = len; 1260 - end: 1261 - mutex_unlock(&opts->lock); 1262 - return ret; 1263 - } 1264 - 1265 - static struct f_ss_opts_attribute f_ss_opts_int_maxpacket = 1266 - __CONFIGFS_ATTR(int_maxpacket, S_IRUGO | S_IWUSR, 1267 - f_ss_opts_int_maxpacket_show, 1268 - f_ss_opts_int_maxpacket_store); 1269 - 1270 - 
static ssize_t f_ss_opts_int_mult_show(struct f_ss_opts *opts, char *page) 1271 - { 1272 - int result; 1273 - 1274 - mutex_lock(&opts->lock); 1275 - result = sprintf(page, "%u", opts->int_mult); 1276 - mutex_unlock(&opts->lock); 1277 - 1278 - return result; 1279 - } 1280 - 1281 - static ssize_t f_ss_opts_int_mult_store(struct f_ss_opts *opts, 1282 - const char *page, size_t len) 1283 - { 1284 - int ret; 1285 - u8 num; 1286 - 1287 - mutex_lock(&opts->lock); 1288 - if (opts->refcnt) { 1289 - ret = -EBUSY; 1290 - goto end; 1291 - } 1292 - 1293 - ret = kstrtou8(page, 0, &num); 1294 - if (ret) 1295 - goto end; 1296 - 1297 - if (num > 2) { 1298 - ret = -EINVAL; 1299 - goto end; 1300 - } 1301 - 1302 - opts->int_mult = num; 1303 - ret = len; 1304 - end: 1305 - mutex_unlock(&opts->lock); 1306 - return ret; 1307 - } 1308 - 1309 - static struct f_ss_opts_attribute f_ss_opts_int_mult = 1310 - __CONFIGFS_ATTR(int_mult, S_IRUGO | S_IWUSR, 1311 - f_ss_opts_int_mult_show, 1312 - f_ss_opts_int_mult_store); 1313 - 1314 - static ssize_t f_ss_opts_int_maxburst_show(struct f_ss_opts *opts, char *page) 1315 - { 1316 - int result; 1317 - 1318 - mutex_lock(&opts->lock); 1319 - result = sprintf(page, "%u", opts->int_maxburst); 1320 - mutex_unlock(&opts->lock); 1321 - 1322 - return result; 1323 - } 1324 - 1325 - static ssize_t f_ss_opts_int_maxburst_store(struct f_ss_opts *opts, 1326 - const char *page, size_t len) 1327 - { 1328 - int ret; 1329 - u8 num; 1330 - 1331 - mutex_lock(&opts->lock); 1332 - if (opts->refcnt) { 1333 - ret = -EBUSY; 1334 - goto end; 1335 - } 1336 - 1337 - ret = kstrtou8(page, 0, &num); 1338 - if (ret) 1339 - goto end; 1340 - 1341 - if (num > 15) { 1342 - ret = -EINVAL; 1343 - goto end; 1344 - } 1345 - 1346 - opts->int_maxburst = num; 1347 - ret = len; 1348 - end: 1349 - mutex_unlock(&opts->lock); 1350 - return ret; 1351 - } 1352 - 1353 - static struct f_ss_opts_attribute f_ss_opts_int_maxburst = 1354 - __CONFIGFS_ATTR(int_maxburst, S_IRUGO | S_IWUSR, 1355 - 
f_ss_opts_int_maxburst_show, 1356 - f_ss_opts_int_maxburst_store); 1357 - 1358 1471 static struct configfs_attribute *ss_attrs[] = { 1359 1472 &f_ss_opts_pattern.attr, 1360 1473 &f_ss_opts_isoc_interval.attr, ··· 1186 1651 &f_ss_opts_isoc_mult.attr, 1187 1652 &f_ss_opts_isoc_maxburst.attr, 1188 1653 &f_ss_opts_bulk_buflen.attr, 1189 - &f_ss_opts_int_interval.attr, 1190 - &f_ss_opts_int_maxpacket.attr, 1191 - &f_ss_opts_int_mult.attr, 1192 - &f_ss_opts_int_maxburst.attr, 1193 1654 NULL, 1194 1655 }; 1195 1656 ··· 1215 1684 ss_opts->isoc_interval = GZERO_ISOC_INTERVAL; 1216 1685 ss_opts->isoc_maxpacket = GZERO_ISOC_MAXPACKET; 1217 1686 ss_opts->bulk_buflen = GZERO_BULK_BUFLEN; 1218 - ss_opts->int_interval = GZERO_INT_INTERVAL; 1219 - ss_opts->int_maxpacket = GZERO_INT_MAXPACKET; 1220 1687 1221 1688 config_group_init_type_name(&ss_opts->func_inst.group, "", 1222 1689 &ss_func_type);
+17 -17
drivers/usb/gadget/function/f_uac2.c
···
 #define UNFLW_CTRL 8
 #define OVFLW_CTRL 10
 
-const char *uac2_name = "snd_uac2";
+static const char *uac2_name = "snd_uac2";
 
 struct uac2_req {
 	struct uac2_rtd_params *pp; /* parent param */
···
 };
 
 /* Clock source for IN traffic */
-struct uac_clock_source_descriptor in_clk_src_desc = {
+static struct uac_clock_source_descriptor in_clk_src_desc = {
 	.bLength = sizeof in_clk_src_desc,
 	.bDescriptorType = USB_DT_CS_INTERFACE,
···
 };
 
 /* Clock source for OUT traffic */
-struct uac_clock_source_descriptor out_clk_src_desc = {
+static struct uac_clock_source_descriptor out_clk_src_desc = {
 	.bLength = sizeof out_clk_src_desc,
 	.bDescriptorType = USB_DT_CS_INTERFACE,
···
 };
 
 /* Input Terminal for USB_OUT */
-struct uac2_input_terminal_descriptor usb_out_it_desc = {
+static struct uac2_input_terminal_descriptor usb_out_it_desc = {
 	.bLength = sizeof usb_out_it_desc,
 	.bDescriptorType = USB_DT_CS_INTERFACE,
···
 };
 
 /* Input Terminal for I/O-In */
-struct uac2_input_terminal_descriptor io_in_it_desc = {
+static struct uac2_input_terminal_descriptor io_in_it_desc = {
 	.bLength = sizeof io_in_it_desc,
 	.bDescriptorType = USB_DT_CS_INTERFACE,
···
 };
 
 /* Ouput Terminal for USB_IN */
-struct uac2_output_terminal_descriptor usb_in_ot_desc = {
+static struct uac2_output_terminal_descriptor usb_in_ot_desc = {
 	.bLength = sizeof usb_in_ot_desc,
 	.bDescriptorType = USB_DT_CS_INTERFACE,
···
 };
 
 /* Ouput Terminal for I/O-Out */
-struct uac2_output_terminal_descriptor io_out_ot_desc = {
+static struct uac2_output_terminal_descriptor io_out_ot_desc = {
 	.bLength = sizeof io_out_ot_desc,
 	.bDescriptorType = USB_DT_CS_INTERFACE,
···
 	.bmControls = (CONTROL_RDWR << COPY_CTRL),
 };
 
-struct uac2_ac_header_descriptor ac_hdr_desc = {
+static struct uac2_ac_header_descriptor ac_hdr_desc = {
 	.bLength = sizeof ac_hdr_desc,
 	.bDescriptorType = USB_DT_CS_INTERFACE,
···
 };
 
 /* Audio Stream OUT Intface Desc */
-struct uac2_as_header_descriptor as_out_hdr_desc = {
+static struct uac2_as_header_descriptor as_out_hdr_desc = {
 	.bLength = sizeof as_out_hdr_desc,
 	.bDescriptorType = USB_DT_CS_INTERFACE,
···
 };
 
 /* Audio USB_OUT Format */
-struct uac2_format_type_i_descriptor as_out_fmt1_desc = {
+static struct uac2_format_type_i_descriptor as_out_fmt1_desc = {
 	.bLength = sizeof as_out_fmt1_desc,
 	.bDescriptorType = USB_DT_CS_INTERFACE,
 	.bDescriptorSubtype = UAC_FORMAT_TYPE,
···
 };
 
 /* STD AS ISO OUT Endpoint */
-struct usb_endpoint_descriptor fs_epout_desc = {
+static struct usb_endpoint_descriptor fs_epout_desc = {
 	.bLength = USB_DT_ENDPOINT_SIZE,
 	.bDescriptorType = USB_DT_ENDPOINT,
···
 	.bInterval = 1,
 };
 
-struct usb_endpoint_descriptor hs_epout_desc = {
+static struct usb_endpoint_descriptor hs_epout_desc = {
 	.bLength = USB_DT_ENDPOINT_SIZE,
 	.bDescriptorType = USB_DT_ENDPOINT,
···
 };
 
 /* Audio Stream IN Intface Desc */
-struct uac2_as_header_descriptor as_in_hdr_desc = {
+static struct uac2_as_header_descriptor as_in_hdr_desc = {
 	.bLength = sizeof as_in_hdr_desc,
 	.bDescriptorType = USB_DT_CS_INTERFACE,
···
 };
 
 /* Audio USB_IN Format */
-struct uac2_format_type_i_descriptor as_in_fmt1_desc = {
+static struct uac2_format_type_i_descriptor as_in_fmt1_desc = {
 	.bLength = sizeof as_in_fmt1_desc,
 	.bDescriptorType = USB_DT_CS_INTERFACE,
 	.bDescriptorSubtype = UAC_FORMAT_TYPE,
···
 };
 
 /* STD AS ISO IN Endpoint */
-struct usb_endpoint_descriptor fs_epin_desc = {
+static struct usb_endpoint_descriptor fs_epin_desc = {
 	.bLength = USB_DT_ENDPOINT_SIZE,
 	.bDescriptorType = USB_DT_ENDPOINT,
···
 	.bInterval = 1,
 };
 
-struct usb_endpoint_descriptor hs_epin_desc = {
+static struct usb_endpoint_descriptor hs_epin_desc = {
 	.bLength = USB_DT_ENDPOINT_SIZE,
 	.bDescriptorType = USB_DT_ENDPOINT,
···
 	agdev->out_ep->driver_data = NULL;
 }
 
-struct usb_function *afunc_alloc(struct usb_function_instance *fi)
+static struct usb_function *afunc_alloc(struct usb_function_instance *fi)
 {
 	struct audio_dev *agdev;
 	struct f_uac2_opts *opts;
+1 -12
drivers/usb/gadget/function/g_zero.h
···
 #define GZERO_QLEN 32
 #define GZERO_ISOC_INTERVAL 4
 #define GZERO_ISOC_MAXPACKET 1024
-#define GZERO_INT_INTERVAL 1 /* Default interrupt interval = 1 ms */
-#define GZERO_INT_MAXPACKET 1024
 
 struct usb_zero_options {
 	unsigned pattern;
···
 	unsigned isoc_maxpacket;
 	unsigned isoc_mult;
 	unsigned isoc_maxburst;
-	unsigned int_interval; /* In ms */
-	unsigned int_maxpacket;
-	unsigned int_mult;
-	unsigned int_maxburst;
 	unsigned bulk_buflen;
 	unsigned qlen;
 };
···
 	unsigned isoc_maxpacket;
 	unsigned isoc_mult;
 	unsigned isoc_maxburst;
-	unsigned int_interval; /* In ms */
-	unsigned int_maxpacket;
-	unsigned int_mult;
-	unsigned int_maxburst;
 	unsigned bulk_buflen;
 
 	/*
···
 void free_ep_req(struct usb_ep *ep, struct usb_request *req);
 void disable_endpoints(struct usb_composite_dev *cdev,
 		struct usb_ep *in, struct usb_ep *out,
-		struct usb_ep *iso_in, struct usb_ep *iso_out,
-		struct usb_ep *int_in, struct usb_ep *int_out);
+		struct usb_ep *iso_in, struct usb_ep *iso_out);
 
 #endif /* __G_ZERO_H */
+1
drivers/usb/gadget/function/uvc_v4l2.c
···
 #include "uvc.h"
 #include "uvc_queue.h"
 #include "uvc_video.h"
+#include "uvc_v4l2.h"
 
 /* --------------------------------------------------------------------------
  * Requests handling
+1
drivers/usb/gadget/function/uvc_video.c
···
 
 #include "uvc.h"
 #include "uvc_queue.h"
+#include "uvc_video.h"
 
 /* --------------------------------------------------------------------------
  * Video codecs
+4 -2
drivers/usb/gadget/legacy/g_ffs.c
···
 	struct usb_configuration c;
 	int (*eth)(struct usb_configuration *c);
 	int num;
-} gfs_configurations[] = {
+};
+
+static struct gfs_configuration gfs_configurations[] = {
 #ifdef CONFIG_USB_FUNCTIONFS_RNDIS
 	{
 		.eth = bind_rndis_config,
···
 	if (!try_module_get(THIS_MODULE))
 		return ERR_PTR(-ENOENT);
 
-	return 0;
+	return NULL;
 }
 
 static void functionfs_release_dev(struct ffs_dev *dev)
+194 -274
drivers/usb/gadget/legacy/inode.c
··· 74 74 MODULE_AUTHOR ("David Brownell"); 75 75 MODULE_LICENSE ("GPL"); 76 76 77 + static int ep_open(struct inode *, struct file *); 78 + 77 79 78 80 /*----------------------------------------------------------------------*/ 79 81 ··· 285 283 * still need dev->lock to use epdata->ep. 286 284 */ 287 285 static int 288 - get_ready_ep (unsigned f_flags, struct ep_data *epdata) 286 + get_ready_ep (unsigned f_flags, struct ep_data *epdata, bool is_write) 289 287 { 290 288 int val; 291 289 292 290 if (f_flags & O_NONBLOCK) { 293 291 if (!mutex_trylock(&epdata->lock)) 294 292 goto nonblock; 295 - if (epdata->state != STATE_EP_ENABLED) { 293 + if (epdata->state != STATE_EP_ENABLED && 294 + (!is_write || epdata->state != STATE_EP_READY)) { 296 295 mutex_unlock(&epdata->lock); 297 296 nonblock: 298 297 val = -EAGAIN; ··· 308 305 309 306 switch (epdata->state) { 310 307 case STATE_EP_ENABLED: 308 + return 0; 309 + case STATE_EP_READY: /* not configured yet */ 310 + if (is_write) 311 + return 0; 312 + // FALLTHRU 313 + case STATE_EP_UNBOUND: /* clean disconnect */ 311 314 break; 312 315 // case STATE_EP_DISABLED: /* "can't happen" */ 313 - // case STATE_EP_READY: /* "can't happen" */ 314 316 default: /* error! 
*/ 315 317 pr_debug ("%s: ep %p not available, state %d\n", 316 318 shortname, epdata, epdata->state); 317 - // FALLTHROUGH 318 - case STATE_EP_UNBOUND: /* clean disconnect */ 319 - val = -ENODEV; 320 - mutex_unlock(&epdata->lock); 321 319 } 322 - return val; 320 + mutex_unlock(&epdata->lock); 321 + return -ENODEV; 323 322 } 324 323 325 324 static ssize_t ··· 368 363 return value; 369 364 } 370 365 371 - 372 - /* handle a synchronous OUT bulk/intr/iso transfer */ 373 - static ssize_t 374 - ep_read (struct file *fd, char __user *buf, size_t len, loff_t *ptr) 375 - { 376 - struct ep_data *data = fd->private_data; 377 - void *kbuf; 378 - ssize_t value; 379 - 380 - if ((value = get_ready_ep (fd->f_flags, data)) < 0) 381 - return value; 382 - 383 - /* halt any endpoint by doing a "wrong direction" i/o call */ 384 - if (usb_endpoint_dir_in(&data->desc)) { 385 - if (usb_endpoint_xfer_isoc(&data->desc)) { 386 - mutex_unlock(&data->lock); 387 - return -EINVAL; 388 - } 389 - DBG (data->dev, "%s halt\n", data->name); 390 - spin_lock_irq (&data->dev->lock); 391 - if (likely (data->ep != NULL)) 392 - usb_ep_set_halt (data->ep); 393 - spin_unlock_irq (&data->dev->lock); 394 - mutex_unlock(&data->lock); 395 - return -EBADMSG; 396 - } 397 - 398 - /* FIXME readahead for O_NONBLOCK and poll(); careful with ZLPs */ 399 - 400 - value = -ENOMEM; 401 - kbuf = kmalloc (len, GFP_KERNEL); 402 - if (unlikely (!kbuf)) 403 - goto free1; 404 - 405 - value = ep_io (data, kbuf, len); 406 - VDEBUG (data->dev, "%s read %zu OUT, status %d\n", 407 - data->name, len, (int) value); 408 - if (value >= 0 && copy_to_user (buf, kbuf, value)) 409 - value = -EFAULT; 410 - 411 - free1: 412 - mutex_unlock(&data->lock); 413 - kfree (kbuf); 414 - return value; 415 - } 416 - 417 - /* handle a synchronous IN bulk/intr/iso transfer */ 418 - static ssize_t 419 - ep_write (struct file *fd, const char __user *buf, size_t len, loff_t *ptr) 420 - { 421 - struct ep_data *data = fd->private_data; 422 - void *kbuf; 423 - 
ssize_t value; 424 - 425 - if ((value = get_ready_ep (fd->f_flags, data)) < 0) 426 - return value; 427 - 428 - /* halt any endpoint by doing a "wrong direction" i/o call */ 429 - if (!usb_endpoint_dir_in(&data->desc)) { 430 - if (usb_endpoint_xfer_isoc(&data->desc)) { 431 - mutex_unlock(&data->lock); 432 - return -EINVAL; 433 - } 434 - DBG (data->dev, "%s halt\n", data->name); 435 - spin_lock_irq (&data->dev->lock); 436 - if (likely (data->ep != NULL)) 437 - usb_ep_set_halt (data->ep); 438 - spin_unlock_irq (&data->dev->lock); 439 - mutex_unlock(&data->lock); 440 - return -EBADMSG; 441 - } 442 - 443 - /* FIXME writebehind for O_NONBLOCK and poll(), qlen = 1 */ 444 - 445 - value = -ENOMEM; 446 - kbuf = memdup_user(buf, len); 447 - if (IS_ERR(kbuf)) { 448 - value = PTR_ERR(kbuf); 449 - kbuf = NULL; 450 - goto free1; 451 - } 452 - 453 - value = ep_io (data, kbuf, len); 454 - VDEBUG (data->dev, "%s write %zu IN, status %d\n", 455 - data->name, len, (int) value); 456 - free1: 457 - mutex_unlock(&data->lock); 458 - kfree (kbuf); 459 - return value; 460 - } 461 - 462 366 static int 463 367 ep_release (struct inode *inode, struct file *fd) 464 368 { ··· 395 481 struct ep_data *data = fd->private_data; 396 482 int status; 397 483 398 - if ((status = get_ready_ep (fd->f_flags, data)) < 0) 484 + if ((status = get_ready_ep (fd->f_flags, data, false)) < 0) 399 485 return status; 400 486 401 487 spin_lock_irq (&data->dev->lock); ··· 431 517 struct mm_struct *mm; 432 518 struct work_struct work; 433 519 void *buf; 434 - const struct iovec *iv; 435 - unsigned long nr_segs; 520 + struct iov_iter to; 521 + const void *to_free; 436 522 unsigned actual; 437 523 }; 438 524 ··· 455 541 return value; 456 542 } 457 543 458 - static ssize_t ep_copy_to_user(struct kiocb_priv *priv) 459 - { 460 - ssize_t len, total; 461 - void *to_copy; 462 - int i; 463 - 464 - /* copy stuff into user buffers */ 465 - total = priv->actual; 466 - len = 0; 467 - to_copy = priv->buf; 468 - for (i=0; i < 
priv->nr_segs; i++) { 469 - ssize_t this = min((ssize_t)(priv->iv[i].iov_len), total); 470 - 471 - if (copy_to_user(priv->iv[i].iov_base, to_copy, this)) { 472 - if (len == 0) 473 - len = -EFAULT; 474 - break; 475 - } 476 - 477 - total -= this; 478 - len += this; 479 - to_copy += this; 480 - if (total == 0) 481 - break; 482 - } 483 - 484 - return len; 485 - } 486 - 487 544 static void ep_user_copy_worker(struct work_struct *work) 488 545 { 489 546 struct kiocb_priv *priv = container_of(work, struct kiocb_priv, work); ··· 463 578 size_t ret; 464 579 465 580 use_mm(mm); 466 - ret = ep_copy_to_user(priv); 581 + ret = copy_to_iter(priv->buf, priv->actual, &priv->to); 467 582 unuse_mm(mm); 583 + if (!ret) 584 + ret = -EFAULT; 468 585 469 586 /* completing the iocb can drop the ctx and mm, don't touch mm after */ 470 587 aio_complete(iocb, ret, ret); 471 588 472 589 kfree(priv->buf); 590 + kfree(priv->to_free); 473 591 kfree(priv); 474 592 } 475 593 ··· 491 603 * don't need to copy anything to userspace, so we can 492 604 * complete the aio request immediately. 
493 605 */ 494 - if (priv->iv == NULL || unlikely(req->actual == 0)) { 606 + if (priv->to_free == NULL || unlikely(req->actual == 0)) { 495 607 kfree(req->buf); 608 + kfree(priv->to_free); 496 609 kfree(priv); 497 610 iocb->private = NULL; 498 611 /* aio_complete() reports bytes-transferred _and_ faults */ ··· 507 618 508 619 priv->buf = req->buf; 509 620 priv->actual = req->actual; 621 + INIT_WORK(&priv->work, ep_user_copy_worker); 510 622 schedule_work(&priv->work); 511 623 } 512 624 spin_unlock(&epdata->dev->lock); ··· 516 626 put_ep(epdata); 517 627 } 518 628 519 - static ssize_t 520 - ep_aio_rwtail( 521 - struct kiocb *iocb, 522 - char *buf, 523 - size_t len, 524 - struct ep_data *epdata, 525 - const struct iovec *iv, 526 - unsigned long nr_segs 527 - ) 629 + static ssize_t ep_aio(struct kiocb *iocb, 630 + struct kiocb_priv *priv, 631 + struct ep_data *epdata, 632 + char *buf, 633 + size_t len) 528 634 { 529 - struct kiocb_priv *priv; 530 - struct usb_request *req; 531 - ssize_t value; 635 + struct usb_request *req; 636 + ssize_t value; 532 637 533 - priv = kmalloc(sizeof *priv, GFP_KERNEL); 534 - if (!priv) { 535 - value = -ENOMEM; 536 - fail: 537 - kfree(buf); 538 - return value; 539 - } 540 638 iocb->private = priv; 541 639 priv->iocb = iocb; 542 - priv->iv = iv; 543 - priv->nr_segs = nr_segs; 544 - INIT_WORK(&priv->work, ep_user_copy_worker); 545 - 546 - value = get_ready_ep(iocb->ki_filp->f_flags, epdata); 547 - if (unlikely(value < 0)) { 548 - kfree(priv); 549 - goto fail; 550 - } 551 640 552 641 kiocb_set_cancel_fn(iocb, ep_aio_cancel); 553 642 get_ep(epdata); ··· 538 669 * allocate or submit those if the host disconnected. 
539 670 */ 540 671 spin_lock_irq(&epdata->dev->lock); 541 - if (likely(epdata->ep)) { 542 - req = usb_ep_alloc_request(epdata->ep, GFP_ATOMIC); 543 - if (likely(req)) { 544 - priv->req = req; 545 - req->buf = buf; 546 - req->length = len; 547 - req->complete = ep_aio_complete; 548 - req->context = iocb; 549 - value = usb_ep_queue(epdata->ep, req, GFP_ATOMIC); 550 - if (unlikely(0 != value)) 551 - usb_ep_free_request(epdata->ep, req); 552 - } else 553 - value = -EAGAIN; 554 - } else 555 - value = -ENODEV; 672 + value = -ENODEV; 673 + if (unlikely(!epdata->ep)) 674 + goto fail; 675 + 676 + req = usb_ep_alloc_request(epdata->ep, GFP_ATOMIC); 677 + value = -ENOMEM; 678 + if (unlikely(!req)) 679 + goto fail; 680 + 681 + priv->req = req; 682 + req->buf = buf; 683 + req->length = len; 684 + req->complete = ep_aio_complete; 685 + req->context = iocb; 686 + value = usb_ep_queue(epdata->ep, req, GFP_ATOMIC); 687 + if (unlikely(0 != value)) { 688 + usb_ep_free_request(epdata->ep, req); 689 + goto fail; 690 + } 556 691 spin_unlock_irq(&epdata->dev->lock); 692 + return -EIOCBQUEUED; 557 693 558 - mutex_unlock(&epdata->lock); 559 - 560 - if (unlikely(value)) { 561 - kfree(priv); 562 - put_ep(epdata); 563 - } else 564 - value = -EIOCBQUEUED; 694 + fail: 695 + spin_unlock_irq(&epdata->dev->lock); 696 + kfree(priv->to_free); 697 + kfree(priv); 698 + put_ep(epdata); 565 699 return value; 566 700 } 567 701 568 702 static ssize_t 569 - ep_aio_read(struct kiocb *iocb, const struct iovec *iov, 570 - unsigned long nr_segs, loff_t o) 703 + ep_read_iter(struct kiocb *iocb, struct iov_iter *to) 571 704 { 572 - struct ep_data *epdata = iocb->ki_filp->private_data; 573 - char *buf; 705 + struct file *file = iocb->ki_filp; 706 + struct ep_data *epdata = file->private_data; 707 + size_t len = iov_iter_count(to); 708 + ssize_t value; 709 + char *buf; 574 710 575 - if (unlikely(usb_endpoint_dir_in(&epdata->desc))) 576 - return -EINVAL; 711 + if ((value = get_ready_ep(file->f_flags, epdata, false)) 
< 0) 712 + return value; 577 713 578 - buf = kmalloc(iocb->ki_nbytes, GFP_KERNEL); 579 - if (unlikely(!buf)) 714 + /* halt any endpoint by doing a "wrong direction" i/o call */ 715 + if (usb_endpoint_dir_in(&epdata->desc)) { 716 + if (usb_endpoint_xfer_isoc(&epdata->desc) || 717 + !is_sync_kiocb(iocb)) { 718 + mutex_unlock(&epdata->lock); 719 + return -EINVAL; 720 + } 721 + DBG (epdata->dev, "%s halt\n", epdata->name); 722 + spin_lock_irq(&epdata->dev->lock); 723 + if (likely(epdata->ep != NULL)) 724 + usb_ep_set_halt(epdata->ep); 725 + spin_unlock_irq(&epdata->dev->lock); 726 + mutex_unlock(&epdata->lock); 727 + return -EBADMSG; 728 + } 729 + 730 + buf = kmalloc(len, GFP_KERNEL); 731 + if (unlikely(!buf)) { 732 + mutex_unlock(&epdata->lock); 580 733 return -ENOMEM; 581 - 582 - return ep_aio_rwtail(iocb, buf, iocb->ki_nbytes, epdata, iov, nr_segs); 734 + } 735 + if (is_sync_kiocb(iocb)) { 736 + value = ep_io(epdata, buf, len); 737 + if (value >= 0 && copy_to_iter(buf, value, to)) 738 + value = -EFAULT; 739 + } else { 740 + struct kiocb_priv *priv = kzalloc(sizeof *priv, GFP_KERNEL); 741 + value = -ENOMEM; 742 + if (!priv) 743 + goto fail; 744 + priv->to_free = dup_iter(&priv->to, to, GFP_KERNEL); 745 + if (!priv->to_free) { 746 + kfree(priv); 747 + goto fail; 748 + } 749 + value = ep_aio(iocb, priv, epdata, buf, len); 750 + if (value == -EIOCBQUEUED) 751 + buf = NULL; 752 + } 753 + fail: 754 + kfree(buf); 755 + mutex_unlock(&epdata->lock); 756 + return value; 583 757 } 584 758 759 + static ssize_t ep_config(struct ep_data *, const char *, size_t); 760 + 585 761 static ssize_t 586 - ep_aio_write(struct kiocb *iocb, const struct iovec *iov, 587 - unsigned long nr_segs, loff_t o) 762 + ep_write_iter(struct kiocb *iocb, struct iov_iter *from) 588 763 { 589 - struct ep_data *epdata = iocb->ki_filp->private_data; 590 - char *buf; 591 - size_t len = 0; 592 - int i = 0; 764 + struct file *file = iocb->ki_filp; 765 + struct ep_data *epdata = file->private_data; 766 + size_t 
len = iov_iter_count(from); 767 + bool configured; 768 + ssize_t value; 769 + char *buf; 593 770 594 - if (unlikely(!usb_endpoint_dir_in(&epdata->desc))) 595 - return -EINVAL; 771 + if ((value = get_ready_ep(file->f_flags, epdata, true)) < 0) 772 + return value; 596 773 597 - buf = kmalloc(iocb->ki_nbytes, GFP_KERNEL); 598 - if (unlikely(!buf)) 599 - return -ENOMEM; 774 + configured = epdata->state == STATE_EP_ENABLED; 600 775 601 - for (i=0; i < nr_segs; i++) { 602 - if (unlikely(copy_from_user(&buf[len], iov[i].iov_base, 603 - iov[i].iov_len) != 0)) { 604 - kfree(buf); 605 - return -EFAULT; 776 + /* halt any endpoint by doing a "wrong direction" i/o call */ 777 + if (configured && !usb_endpoint_dir_in(&epdata->desc)) { 778 + if (usb_endpoint_xfer_isoc(&epdata->desc) || 779 + !is_sync_kiocb(iocb)) { 780 + mutex_unlock(&epdata->lock); 781 + return -EINVAL; 606 782 } 607 - len += iov[i].iov_len; 783 + DBG (epdata->dev, "%s halt\n", epdata->name); 784 + spin_lock_irq(&epdata->dev->lock); 785 + if (likely(epdata->ep != NULL)) 786 + usb_ep_set_halt(epdata->ep); 787 + spin_unlock_irq(&epdata->dev->lock); 788 + mutex_unlock(&epdata->lock); 789 + return -EBADMSG; 608 790 } 609 - return ep_aio_rwtail(iocb, buf, len, epdata, NULL, 0); 791 + 792 + buf = kmalloc(len, GFP_KERNEL); 793 + if (unlikely(!buf)) { 794 + mutex_unlock(&epdata->lock); 795 + return -ENOMEM; 796 + } 797 + 798 + if (unlikely(copy_from_iter(buf, len, from) != len)) { 799 + value = -EFAULT; 800 + goto out; 801 + } 802 + 803 + if (unlikely(!configured)) { 804 + value = ep_config(epdata, buf, len); 805 + } else if (is_sync_kiocb(iocb)) { 806 + value = ep_io(epdata, buf, len); 807 + } else { 808 + struct kiocb_priv *priv = kzalloc(sizeof *priv, GFP_KERNEL); 809 + value = -ENOMEM; 810 + if (priv) { 811 + value = ep_aio(iocb, priv, epdata, buf, len); 812 + if (value == -EIOCBQUEUED) 813 + buf = NULL; 814 + } 815 + } 816 + out: 817 + kfree(buf); 818 + mutex_unlock(&epdata->lock); 819 + return value; 610 820 } 611 
821 612 822 /*----------------------------------------------------------------------*/ ··· 693 745 /* used after endpoint configuration */ 694 746 static const struct file_operations ep_io_operations = { 695 747 .owner = THIS_MODULE, 696 - .llseek = no_llseek, 697 748 698 - .read = ep_read, 699 - .write = ep_write, 700 - .unlocked_ioctl = ep_ioctl, 749 + .open = ep_open, 701 750 .release = ep_release, 702 - 703 - .aio_read = ep_aio_read, 704 - .aio_write = ep_aio_write, 751 + .llseek = no_llseek, 752 + .read = new_sync_read, 753 + .write = new_sync_write, 754 + .unlocked_ioctl = ep_ioctl, 755 + .read_iter = ep_read_iter, 756 + .write_iter = ep_write_iter, 705 757 }; 706 758 707 759 /* ENDPOINT INITIALIZATION ··· 718 770 * speed descriptor, then optional high speed descriptor. 719 771 */ 720 772 static ssize_t 721 - ep_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr) 773 + ep_config (struct ep_data *data, const char *buf, size_t len) 722 774 { 723 - struct ep_data *data = fd->private_data; 724 775 struct usb_ep *ep; 725 776 u32 tag; 726 777 int value, length = len; 727 - 728 - value = mutex_lock_interruptible(&data->lock); 729 - if (value < 0) 730 - return value; 731 778 732 779 if (data->state != STATE_EP_READY) { 733 780 value = -EL2HLT; ··· 734 791 goto fail0; 735 792 736 793 /* we might need to change message format someday */ 737 - if (copy_from_user (&tag, buf, 4)) { 738 - goto fail1; 739 - } 794 + memcpy(&tag, buf, 4); 740 795 if (tag != 1) { 741 796 DBG(data->dev, "config %s, bad tag %d\n", data->name, tag); 742 797 goto fail0; ··· 747 806 */ 748 807 749 808 /* full/low speed descriptor, then high speed */ 750 - if (copy_from_user (&data->desc, buf, USB_DT_ENDPOINT_SIZE)) { 751 - goto fail1; 752 - } 809 + memcpy(&data->desc, buf, USB_DT_ENDPOINT_SIZE); 753 810 if (data->desc.bLength != USB_DT_ENDPOINT_SIZE 754 811 || data->desc.bDescriptorType != USB_DT_ENDPOINT) 755 812 goto fail0; 756 813 if (len != USB_DT_ENDPOINT_SIZE) { 757 814 
if (len != 2 * USB_DT_ENDPOINT_SIZE) 758 815 goto fail0; 759 - if (copy_from_user (&data->hs_desc, buf + USB_DT_ENDPOINT_SIZE, 760 - USB_DT_ENDPOINT_SIZE)) { 761 - goto fail1; 762 - } 816 + memcpy(&data->hs_desc, buf + USB_DT_ENDPOINT_SIZE, 817 + USB_DT_ENDPOINT_SIZE); 763 818 if (data->hs_desc.bLength != USB_DT_ENDPOINT_SIZE 764 819 || data->hs_desc.bDescriptorType 765 820 != USB_DT_ENDPOINT) { ··· 777 840 case USB_SPEED_LOW: 778 841 case USB_SPEED_FULL: 779 842 ep->desc = &data->desc; 780 - value = usb_ep_enable(ep); 781 - if (value == 0) 782 - data->state = STATE_EP_ENABLED; 783 843 break; 784 844 case USB_SPEED_HIGH: 785 845 /* fails if caller didn't provide that descriptor... */ 786 846 ep->desc = &data->hs_desc; 787 - value = usb_ep_enable(ep); 788 - if (value == 0) 789 - data->state = STATE_EP_ENABLED; 790 847 break; 791 848 default: 792 849 DBG(data->dev, "unconnected, %s init abandoned\n", 793 850 data->name); 794 851 value = -EINVAL; 852 + goto gone; 795 853 } 854 + value = usb_ep_enable(ep); 796 855 if (value == 0) { 797 - fd->f_op = &ep_io_operations; 856 + data->state = STATE_EP_ENABLED; 798 857 value = length; 799 858 } 800 859 gone: ··· 800 867 data->desc.bDescriptorType = 0; 801 868 data->hs_desc.bDescriptorType = 0; 802 869 } 803 - mutex_unlock(&data->lock); 804 870 return value; 805 871 fail0: 806 872 value = -EINVAL; 807 - goto fail; 808 - fail1: 809 - value = -EFAULT; 810 873 goto fail; 811 874 } 812 875 ··· 830 901 mutex_unlock(&data->lock); 831 902 return value; 832 903 } 833 - 834 - /* used before endpoint configuration */ 835 - static const struct file_operations ep_config_operations = { 836 - .llseek = no_llseek, 837 - 838 - .open = ep_open, 839 - .write = ep_config, 840 - .release = ep_release, 841 - }; 842 904 843 905 /*----------------------------------------------------------------------*/ 844 906 ··· 909 989 enum ep0_state state; 910 990 911 991 spin_lock_irq (&dev->lock); 992 + if (dev->state <= STATE_DEV_OPENED) { 993 + retval = 
-EINVAL; 994 + goto done; 995 + } 912 996 913 997 /* report fd mode change before acting on it */ 914 998 if (dev->setup_abort) { ··· 1111 1187 struct dev_data *dev = fd->private_data; 1112 1188 ssize_t retval = -ESRCH; 1113 1189 1114 - spin_lock_irq (&dev->lock); 1115 - 1116 1190 /* report fd mode change before acting on it */ 1117 1191 if (dev->setup_abort) { 1118 1192 dev->setup_abort = 0; ··· 1156 1234 } else 1157 1235 DBG (dev, "fail %s, state %d\n", __func__, dev->state); 1158 1236 1159 - spin_unlock_irq (&dev->lock); 1160 1237 return retval; 1161 1238 } 1162 1239 ··· 1202 1281 struct dev_data *dev = fd->private_data; 1203 1282 int mask = 0; 1204 1283 1284 + if (dev->state <= STATE_DEV_OPENED) 1285 + return DEFAULT_POLLMASK; 1286 + 1205 1287 poll_wait(fd, &dev->wait, wait); 1206 1288 1207 1289 spin_lock_irq (&dev->lock); ··· 1239 1315 1240 1316 return ret; 1241 1317 } 1242 - 1243 - /* used after device configuration */ 1244 - static const struct file_operations ep0_io_operations = { 1245 - .owner = THIS_MODULE, 1246 - .llseek = no_llseek, 1247 - 1248 - .read = ep0_read, 1249 - .write = ep0_write, 1250 - .fasync = ep0_fasync, 1251 - .poll = ep0_poll, 1252 - .unlocked_ioctl = dev_ioctl, 1253 - .release = dev_release, 1254 - }; 1255 1318 1256 1319 /*----------------------------------------------------------------------*/ 1257 1320 ··· 1561 1650 goto enomem1; 1562 1651 1563 1652 data->dentry = gadgetfs_create_file (dev->sb, data->name, 1564 - data, &ep_config_operations); 1653 + data, &ep_io_operations); 1565 1654 if (!data->dentry) 1566 1655 goto enomem2; 1567 1656 list_add_tail (&data->epfiles, &dev->epfiles); ··· 1763 1852 u32 tag; 1764 1853 char *kbuf; 1765 1854 1855 + spin_lock_irq(&dev->lock); 1856 + if (dev->state > STATE_DEV_OPENED) { 1857 + value = ep0_write(fd, buf, len, ptr); 1858 + spin_unlock_irq(&dev->lock); 1859 + return value; 1860 + } 1861 + spin_unlock_irq(&dev->lock); 1862 + 1766 1863 if (len < (USB_DT_CONFIG_SIZE + USB_DT_DEVICE_SIZE + 4)) 
1767 1864 return -EINVAL; 1768 1865 ··· 1844 1925 * on, they can work ... except in cleanup paths that 1845 1926 * kick in after the ep0 descriptor is closed. 1846 1927 */ 1847 - fd->f_op = &ep0_io_operations; 1848 1928 value = len; 1849 1929 } 1850 1930 return value; ··· 1874 1956 return value; 1875 1957 } 1876 1958 1877 - static const struct file_operations dev_init_operations = { 1959 + static const struct file_operations ep0_operations = { 1878 1960 .llseek = no_llseek, 1879 1961 1880 1962 .open = dev_open, 1963 + .read = ep0_read, 1881 1964 .write = dev_config, 1882 1965 .fasync = ep0_fasync, 1966 + .poll = ep0_poll, 1883 1967 .unlocked_ioctl = dev_ioctl, 1884 1968 .release = dev_release, 1885 1969 }; ··· 1997 2077 goto Enomem; 1998 2078 1999 2079 dev->sb = sb; 2000 - dev->dentry = gadgetfs_create_file(sb, CHIP, dev, &dev_init_operations); 2080 + dev->dentry = gadgetfs_create_file(sb, CHIP, dev, &ep0_operations); 2001 2081 if (!dev->dentry) { 2002 2082 put_dev(dev); 2003 2083 goto Enomem;
+2 -3
drivers/usb/gadget/legacy/tcm_usb_gadget.c
··· 1740 1740 goto err_session; 1741 1741 } 1742 1742 /* 1743 - * Now register the TCM vHost virtual I_T Nexus as active with the 1744 - * call to __transport_register_session() 1743 + * Now register the TCM vHost virtual I_T Nexus as active. 1745 1744 */ 1746 - __transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl, 1745 + transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl, 1747 1746 tv_nexus->tvn_se_sess, tv_nexus); 1748 1747 tpg->tpg_nexus = tv_nexus; 1749 1748 mutex_unlock(&tpg->tpg_mutex);
-21
drivers/usb/gadget/legacy/zero.c
··· 68 68 .isoc_maxpacket = GZERO_ISOC_MAXPACKET, 69 69 .bulk_buflen = GZERO_BULK_BUFLEN, 70 70 .qlen = GZERO_QLEN, 71 - .int_interval = GZERO_INT_INTERVAL, 72 - .int_maxpacket = GZERO_INT_MAXPACKET, 73 71 }; 74 72 75 73 /*-------------------------------------------------------------------------*/ ··· 266 268 S_IRUGO|S_IWUSR); 267 269 MODULE_PARM_DESC(isoc_maxburst, "0 - 15 (ss only)"); 268 270 269 - module_param_named(int_interval, gzero_options.int_interval, uint, 270 - S_IRUGO|S_IWUSR); 271 - MODULE_PARM_DESC(int_interval, "1 - 16"); 272 - 273 - module_param_named(int_maxpacket, gzero_options.int_maxpacket, uint, 274 - S_IRUGO|S_IWUSR); 275 - MODULE_PARM_DESC(int_maxpacket, "0 - 1023 (fs), 0 - 1024 (hs/ss)"); 276 - 277 - module_param_named(int_mult, gzero_options.int_mult, uint, S_IRUGO|S_IWUSR); 278 - MODULE_PARM_DESC(int_mult, "0 - 2 (hs/ss only)"); 279 - 280 - module_param_named(int_maxburst, gzero_options.int_maxburst, uint, 281 - S_IRUGO|S_IWUSR); 282 - MODULE_PARM_DESC(int_maxburst, "0 - 15 (ss only)"); 283 - 284 271 static struct usb_function *func_lb; 285 272 static struct usb_function_instance *func_inst_lb; 286 273 ··· 301 318 ss_opts->isoc_maxpacket = gzero_options.isoc_maxpacket; 302 319 ss_opts->isoc_mult = gzero_options.isoc_mult; 303 320 ss_opts->isoc_maxburst = gzero_options.isoc_maxburst; 304 - ss_opts->int_interval = gzero_options.int_interval; 305 - ss_opts->int_maxpacket = gzero_options.int_maxpacket; 306 - ss_opts->int_mult = gzero_options.int_mult; 307 - ss_opts->int_maxburst = gzero_options.int_maxburst; 308 321 ss_opts->bulk_buflen = gzero_options.bulk_buflen; 309 322 310 323 func_ss = usb_get_function(func_inst_ss);
+9 -21
drivers/usb/host/ehci-atmel.c
··· 34 34 35 35 struct atmel_ehci_priv { 36 36 struct clk *iclk; 37 - struct clk *fclk; 38 37 struct clk *uclk; 39 38 bool clocked; 40 39 }; ··· 50 51 { 51 52 if (atmel_ehci->clocked) 52 53 return; 53 - if (IS_ENABLED(CONFIG_COMMON_CLK)) { 54 - clk_set_rate(atmel_ehci->uclk, 48000000); 55 - clk_prepare_enable(atmel_ehci->uclk); 56 - } 54 + 55 + clk_prepare_enable(atmel_ehci->uclk); 57 56 clk_prepare_enable(atmel_ehci->iclk); 58 - clk_prepare_enable(atmel_ehci->fclk); 59 57 atmel_ehci->clocked = true; 60 58 } 61 59 ··· 60 64 { 61 65 if (!atmel_ehci->clocked) 62 66 return; 63 - clk_disable_unprepare(atmel_ehci->fclk); 67 + 64 68 clk_disable_unprepare(atmel_ehci->iclk); 65 - if (IS_ENABLED(CONFIG_COMMON_CLK)) 66 - clk_disable_unprepare(atmel_ehci->uclk); 69 + clk_disable_unprepare(atmel_ehci->uclk); 67 70 atmel_ehci->clocked = false; 68 71 } 69 72 ··· 141 146 retval = -ENOENT; 142 147 goto fail_request_resource; 143 148 } 144 - atmel_ehci->fclk = devm_clk_get(&pdev->dev, "uhpck"); 145 - if (IS_ERR(atmel_ehci->fclk)) { 146 - dev_err(&pdev->dev, "Error getting function clock\n"); 147 - retval = -ENOENT; 149 + 150 + atmel_ehci->uclk = devm_clk_get(&pdev->dev, "usb_clk"); 151 + if (IS_ERR(atmel_ehci->uclk)) { 152 + dev_err(&pdev->dev, "failed to get uclk\n"); 153 + retval = PTR_ERR(atmel_ehci->uclk); 148 154 goto fail_request_resource; 149 - } 150 - if (IS_ENABLED(CONFIG_COMMON_CLK)) { 151 - atmel_ehci->uclk = devm_clk_get(&pdev->dev, "usb_clk"); 152 - if (IS_ERR(atmel_ehci->uclk)) { 153 - dev_err(&pdev->dev, "failed to get uclk\n"); 154 - retval = PTR_ERR(atmel_ehci->uclk); 155 - goto fail_request_resource; 156 - } 157 155 } 158 156 159 157 ehci = hcd_to_ehci(hcd);
+8 -1
drivers/usb/host/xhci-hub.c
··· 387 387 status = PORT_PLC; 388 388 port_change_bit = "link state"; 389 389 break; 390 + case USB_PORT_FEAT_C_PORT_CONFIG_ERROR: 391 + status = PORT_CEC; 392 + port_change_bit = "config error"; 393 + break; 390 394 default: 391 395 /* Should never happen */ 392 396 return; ··· 592 588 status |= USB_PORT_STAT_C_LINK_STATE << 16; 593 589 if ((raw_port_status & PORT_WRC)) 594 590 status |= USB_PORT_STAT_C_BH_RESET << 16; 591 + if ((raw_port_status & PORT_CEC)) 592 + status |= USB_PORT_STAT_C_CONFIG_ERROR << 16; 595 593 } 596 594 597 595 if (hcd->speed != HCD_USB3) { ··· 1011 1005 case USB_PORT_FEAT_C_OVER_CURRENT: 1012 1006 case USB_PORT_FEAT_C_ENABLE: 1013 1007 case USB_PORT_FEAT_C_PORT_LINK_STATE: 1008 + case USB_PORT_FEAT_C_PORT_CONFIG_ERROR: 1014 1009 xhci_clear_port_change_bit(xhci, wValue, wIndex, 1015 1010 port_array[wIndex], temp); 1016 1011 break; ··· 1076 1069 */ 1077 1070 status = bus_state->resuming_ports; 1078 1071 1079 - mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC; 1072 + mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC | PORT_CEC; 1080 1073 1081 1074 spin_lock_irqsave(&xhci->lock, flags); 1082 1075 /* For each port, did anything change? If so, set that bit in buf. */
+31 -1
drivers/usb/host/xhci-pci.c
··· 37 37 38 38 #define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31 39 39 #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31 40 + #define PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI 0x22b5 41 + #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI 0xa12f 42 + #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI 0x9d2f 40 43 41 44 static const char hcd_name[] = "xhci_hcd"; 42 45 ··· 115 112 if (pdev->vendor == PCI_VENDOR_ID_INTEL) { 116 113 xhci->quirks |= XHCI_LPM_SUPPORT; 117 114 xhci->quirks |= XHCI_INTEL_HOST; 115 + xhci->quirks |= XHCI_AVOID_BEI; 118 116 } 119 117 if (pdev->vendor == PCI_VENDOR_ID_INTEL && 120 118 pdev->device == PCI_DEVICE_ID_INTEL_PANTHERPOINT_XHCI) { ··· 131 127 * PPT chipsets. 132 128 */ 133 129 xhci->quirks |= XHCI_SPURIOUS_REBOOT; 134 - xhci->quirks |= XHCI_AVOID_BEI; 135 130 } 136 131 if (pdev->vendor == PCI_VENDOR_ID_INTEL && 137 132 pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI) { 138 133 xhci->quirks |= XHCI_SPURIOUS_REBOOT; 134 + } 135 + if (pdev->vendor == PCI_VENDOR_ID_INTEL && 136 + (pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI || 137 + pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI || 138 + pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI)) { 139 + xhci->quirks |= XHCI_PME_STUCK_QUIRK; 139 140 } 140 141 if (pdev->vendor == PCI_VENDOR_ID_ETRON && 141 142 pdev->device == PCI_DEVICE_ID_EJ168) { ··· 166 157 if (xhci->quirks & XHCI_RESET_ON_RESUME) 167 158 xhci_dbg_trace(xhci, trace_xhci_dbg_quirks, 168 159 "QUIRK: Resetting on resume"); 160 + } 161 + 162 + /* 163 + * Make sure PME works on some Intel xHCI controllers by writing 1 to clear 164 + * the Internal PME flag bit in vendor specific PMCTRL register at offset 0x80a4 165 + */ 166 + static void xhci_pme_quirk(struct xhci_hcd *xhci) 167 + { 168 + u32 val; 169 + void __iomem *reg; 170 + 171 + reg = (void __iomem *) xhci->cap_regs + 0x80a4; 172 + val = readl(reg); 173 + writel(val | BIT(28), reg); 174 + readl(reg); 169 175 } 170 176 171 177 /* called during probe() 
after chip reset completes */ ··· 307 283 if (xhci->quirks & XHCI_COMP_MODE_QUIRK) 308 284 pdev->no_d3cold = true; 309 285 286 + if (xhci->quirks & XHCI_PME_STUCK_QUIRK) 287 + xhci_pme_quirk(xhci); 288 + 310 289 return xhci_suspend(xhci, do_wakeup); 311 290 } 312 291 ··· 339 312 340 313 if (pdev->vendor == PCI_VENDOR_ID_INTEL) 341 314 usb_enable_intel_xhci_ports(pdev); 315 + 316 + if (xhci->quirks & XHCI_PME_STUCK_QUIRK) 317 + xhci_pme_quirk(xhci); 342 318 343 319 retval = xhci_resume(xhci, hibernated); 344 320 return retval;
+9 -10
drivers/usb/host/xhci-plat.c
··· 83 83 if (irq < 0) 84 84 return -ENODEV; 85 85 86 - 87 - if (of_device_is_compatible(pdev->dev.of_node, 88 - "marvell,armada-375-xhci") || 89 - of_device_is_compatible(pdev->dev.of_node, 90 - "marvell,armada-380-xhci")) { 91 - ret = xhci_mvebu_mbus_init_quirk(pdev); 92 - if (ret) 93 - return ret; 94 - } 95 - 96 86 /* Initialize dma_mask and coherent_dma_mask to 32-bits */ 97 87 ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 98 88 if (ret) ··· 115 125 ret = clk_prepare_enable(clk); 116 126 if (ret) 117 127 goto put_hcd; 128 + } 129 + 130 + if (of_device_is_compatible(pdev->dev.of_node, 131 + "marvell,armada-375-xhci") || 132 + of_device_is_compatible(pdev->dev.of_node, 133 + "marvell,armada-380-xhci")) { 134 + ret = xhci_mvebu_mbus_init_quirk(pdev); 135 + if (ret) 136 + goto disable_clk; 118 137 } 119 138 120 139 ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
+8 -2
drivers/usb/host/xhci-ring.c
··· 1946 1946 if (event_trb != ep_ring->dequeue) { 1947 1947 /* The event was for the status stage */ 1948 1948 if (event_trb == td->last_trb) { 1949 - if (td->urb->actual_length != 0) { 1949 + if (td->urb_length_set) { 1950 1950 /* Don't overwrite a previously set error code 1951 1951 */ 1952 1952 if ((*status == -EINPROGRESS || *status == 0) && ··· 1960 1960 td->urb->transfer_buffer_length; 1961 1961 } 1962 1962 } else { 1963 - /* Maybe the event was for the data stage? */ 1963 + /* 1964 + * Maybe the event was for the data stage? If so, update 1965 + * already the actual_length of the URB and flag it as 1966 + * set, so that it is not overwritten in the event for 1967 + * the last TRB. 1968 + */ 1969 + td->urb_length_set = true; 1964 1970 td->urb->actual_length = 1965 1971 td->urb->transfer_buffer_length - 1966 1972 EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
+7 -2
drivers/usb/host/xhci.h
··· 1 + 1 2 /* 2 3 * xHCI host controller driver 3 4 * ··· 89 88 #define HCS_IST(p) (((p) >> 0) & 0xf) 90 89 /* bits 4:7, max number of Event Ring segments */ 91 90 #define HCS_ERST_MAX(p) (((p) >> 4) & 0xf) 91 + /* bits 21:25 Hi 5 bits of Scratchpad buffers SW must allocate for the HW */ 92 92 /* bit 26 Scratchpad restore - for save/restore HW state - not used yet */ 93 - /* bits 27:31 number of Scratchpad buffers SW must allocate for the HW */ 94 - #define HCS_MAX_SCRATCHPAD(p) (((p) >> 27) & 0x1f) 93 + /* bits 27:31 Lo 5 bits of Scratchpad buffers SW must allocate for the HW */ 94 + #define HCS_MAX_SCRATCHPAD(p) ((((p) >> 16) & 0x3e0) | (((p) >> 27) & 0x1f)) 95 95 96 96 /* HCSPARAMS3 - hcs_params3 - bitmasks */ 97 97 /* bits 0:7, Max U1 to U0 latency for the roothub ports */ ··· 1290 1288 struct xhci_segment *start_seg; 1291 1289 union xhci_trb *first_trb; 1292 1290 union xhci_trb *last_trb; 1291 + /* actual_length of the URB has already been set */ 1292 + bool urb_length_set; 1293 1293 }; 1294 1294 1295 1295 /* xHCI command default timeout value */ ··· 1564 1560 #define XHCI_SPURIOUS_WAKEUP (1 << 18) 1565 1561 /* For controllers with a broken beyond repair streams implementation */ 1566 1562 #define XHCI_BROKEN_STREAMS (1 << 19) 1563 + #define XHCI_PME_STUCK_QUIRK (1 << 20) 1567 1564 unsigned int num_active_eps; 1568 1565 unsigned int limit_active_eps; 1569 1566 /* There are two roothubs to keep track of bus suspend info for */
+1 -1
drivers/usb/isp1760/isp1760-core.c
··· 151 151 } 152 152 153 153 if (IS_ENABLED(CONFIG_USB_ISP1761_UDC) && !udc_disabled) { 154 - ret = isp1760_udc_register(isp, irq, irqflags | IRQF_SHARED); 154 + ret = isp1760_udc_register(isp, irq, irqflags); 155 155 if (ret < 0) { 156 156 isp1760_hcd_unregister(&isp->hcd); 157 157 return ret;
+3 -3
drivers/usb/isp1760/isp1760-hcd.c
··· 1274 1274 for (slot = 0; slot < 32; slot++) 1275 1275 if (priv->atl_slots[slot].qh && time_after(jiffies, 1276 1276 priv->atl_slots[slot].timestamp + 1277 - SLOT_TIMEOUT * HZ / 1000)) { 1277 + msecs_to_jiffies(SLOT_TIMEOUT))) { 1278 1278 ptd_read(hcd->regs, ATL_PTD_OFFSET, slot, &ptd); 1279 1279 if (!FROM_DW0_VALID(ptd.dw0) && 1280 1280 !FROM_DW3_ACTIVE(ptd.dw3)) ··· 1286 1286 1287 1287 spin_unlock_irqrestore(&priv->lock, spinflags); 1288 1288 1289 - errata2_timer.expires = jiffies + SLOT_CHECK_PERIOD * HZ / 1000; 1289 + errata2_timer.expires = jiffies + msecs_to_jiffies(SLOT_CHECK_PERIOD); 1290 1290 add_timer(&errata2_timer); 1291 1291 } 1292 1292 ··· 1336 1336 return retval; 1337 1337 1338 1338 setup_timer(&errata2_timer, errata2_function, (unsigned long)hcd); 1339 - errata2_timer.expires = jiffies + SLOT_CHECK_PERIOD * HZ / 1000; 1339 + errata2_timer.expires = jiffies + msecs_to_jiffies(SLOT_CHECK_PERIOD); 1340 1340 add_timer(&errata2_timer); 1341 1341 1342 1342 chipid = reg_read32(hcd->regs, HC_CHIP_ID_REG);
+8 -6
drivers/usb/isp1760/isp1760-udc.c
··· 1191 1191 struct usb_gadget_driver *driver) 1192 1192 { 1193 1193 struct isp1760_udc *udc = gadget_to_udc(gadget); 1194 + unsigned long flags; 1194 1195 1195 1196 /* The hardware doesn't support low speed. */ 1196 1197 if (driver->max_speed < USB_SPEED_FULL) { ··· 1199 1198 return -EINVAL; 1200 1199 } 1201 1200 1202 - spin_lock(&udc->lock); 1201 + spin_lock_irqsave(&udc->lock, flags); 1203 1202 1204 1203 if (udc->driver) { 1205 1204 dev_err(udc->isp->dev, "UDC already has a gadget driver\n"); 1206 - spin_unlock(&udc->lock); 1205 + spin_unlock_irqrestore(&udc->lock, flags); 1207 1206 return -EBUSY; 1208 1207 } 1209 1208 1210 1209 udc->driver = driver; 1211 1210 1212 - spin_unlock(&udc->lock); 1211 + spin_unlock_irqrestore(&udc->lock, flags); 1213 1212 1214 1213 dev_dbg(udc->isp->dev, "starting UDC with driver %s\n", 1215 1214 driver->function); ··· 1233 1232 static int isp1760_udc_stop(struct usb_gadget *gadget) 1234 1233 { 1235 1234 struct isp1760_udc *udc = gadget_to_udc(gadget); 1235 + unsigned long flags; 1236 1236 1237 1237 dev_dbg(udc->isp->dev, "%s\n", __func__); 1238 1238 ··· 1241 1239 1242 1240 isp1760_udc_write(udc, DC_MODE, 0); 1243 1241 1244 - spin_lock(&udc->lock); 1242 + spin_lock_irqsave(&udc->lock, flags); 1245 1243 udc->driver = NULL; 1246 - spin_unlock(&udc->lock); 1244 + spin_unlock_irqrestore(&udc->lock, flags); 1247 1245 1248 1246 return 0; 1249 1247 } ··· 1413 1411 return -ENODEV; 1414 1412 } 1415 1413 1416 - if (chipid != 0x00011582) { 1414 + if (chipid != 0x00011582 && chipid != 0x00158210) { 1417 1415 dev_err(udc->isp->dev, "udc: invalid chip ID 0x%08x\n", chipid); 1418 1416 return -ENODEV; 1419 1417 }
+2 -1
drivers/usb/musb/Kconfig
··· 79 79 80 80 config USB_MUSB_OMAP2PLUS 81 81 tristate "OMAP2430 and onwards" 82 - depends on ARCH_OMAP2PLUS && USB && OMAP_CONTROL_PHY 82 + depends on ARCH_OMAP2PLUS && USB 83 + depends on OMAP_CONTROL_PHY || !OMAP_CONTROL_PHY 83 84 select GENERIC_PHY 84 85 85 86 config USB_MUSB_AM35X
+6 -4
drivers/usb/musb/musb_core.c
··· 1969 1969 goto fail0; 1970 1970 } 1971 1971 1972 - pm_runtime_use_autosuspend(musb->controller); 1973 - pm_runtime_set_autosuspend_delay(musb->controller, 200); 1974 - pm_runtime_enable(musb->controller); 1975 - 1976 1972 spin_lock_init(&musb->lock); 1977 1973 musb->board_set_power = plat->set_power; 1978 1974 musb->min_power = plat->min_power; ··· 1986 1990 musb_writew = musb_default_writew; 1987 1991 musb_readl = musb_default_readl; 1988 1992 musb_writel = musb_default_writel; 1993 + 1994 + /* We need musb_read/write functions initialized for PM */ 1995 + pm_runtime_use_autosuspend(musb->controller); 1996 + pm_runtime_set_autosuspend_delay(musb->controller, 200); 1997 + pm_runtime_irq_safe(musb->controller); 1998 + pm_runtime_enable(musb->controller); 1989 1999 1990 2000 /* The musb_platform_init() call: 1991 2001 * - adjusts musb->mregs
+29 -3
drivers/usb/musb/musb_dsps.c
··· 457 457 if (IS_ERR(musb->xceiv)) 458 458 return PTR_ERR(musb->xceiv); 459 459 460 + musb->phy = devm_phy_get(dev->parent, "usb2-phy"); 461 + 460 462 /* Returns zero if e.g. not clocked */ 461 463 rev = dsps_readl(reg_base, wrp->revision); 462 464 if (!rev) 463 465 return -ENODEV; 464 466 465 467 usb_phy_init(musb->xceiv); 468 + if (IS_ERR(musb->phy)) { 469 + musb->phy = NULL; 470 + } else { 471 + ret = phy_init(musb->phy); 472 + if (ret < 0) 473 + return ret; 474 + ret = phy_power_on(musb->phy); 475 + if (ret) { 476 + phy_exit(musb->phy); 477 + return ret; 478 + } 479 + } 480 + 466 481 setup_timer(&glue->timer, otg_timer, (unsigned long) musb); 467 482 468 483 /* Reset the musb */ ··· 517 502 518 503 del_timer_sync(&glue->timer); 519 504 usb_phy_shutdown(musb->xceiv); 505 + phy_power_off(musb->phy); 506 + phy_exit(musb->phy); 520 507 debugfs_remove_recursive(glue->dbgfs_root); 521 508 522 509 return 0; ··· 627 610 struct device *dev = musb->controller; 628 611 struct dsps_glue *glue = dev_get_drvdata(dev->parent); 629 612 const struct dsps_musb_wrapper *wrp = glue->wrp; 630 - int session_restart = 0; 613 + int session_restart = 0, error; 631 614 632 615 if (glue->sw_babble_enabled) 633 616 session_restart = sw_babble_control(musb); ··· 641 624 dsps_writel(musb->ctrl_base, wrp->control, (1 << wrp->reset)); 642 625 usleep_range(100, 200); 643 626 usb_phy_shutdown(musb->xceiv); 627 + error = phy_power_off(musb->phy); 628 + if (error) 629 + dev_err(dev, "phy shutdown failed: %i\n", error); 644 630 usleep_range(100, 200); 645 631 usb_phy_init(musb->xceiv); 632 + error = phy_power_on(musb->phy); 633 + if (error) 634 + dev_err(dev, "phy powerup failed: %i\n", error); 646 635 session_restart = 1; 647 636 } 648 637 ··· 710 687 struct musb_hdrc_config *config; 711 688 struct platform_device *musb; 712 689 struct device_node *dn = parent->dev.of_node; 713 - int ret; 690 + int ret, val; 714 691 715 692 memset(resources, 0, sizeof(resources)); 716 693 res = 
platform_get_resource_byname(parent, IORESOURCE_MEM, "mc"); ··· 762 739 pdata.mode = get_musb_port_mode(dev); 763 740 /* DT keeps this entry in mA, musb expects it as per USB spec */ 764 741 pdata.power = get_int_prop(dn, "mentor,power") / 2; 765 - config->multipoint = of_property_read_bool(dn, "mentor,multipoint"); 742 + 743 + ret = of_property_read_u32(dn, "mentor,multipoint", &val); 744 + if (!ret && val) 745 + config->multipoint = true; 766 746 767 747 ret = platform_device_add_data(musb, &pdata, sizeof(pdata)); 768 748 if (ret) {
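The `mentor,multipoint` hunk above replaces a presence test (`of_property_read_bool`) with a value read, so a DT entry of `mentor,multipoint = <0>;` no longer enables multipoint. A minimal user-space sketch of the two parsing policies, with a hypothetical flattened property table standing in for the device tree:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical flattened device-tree property: name -> value, or absent. */
struct dt_prop { const char *name; int present; unsigned int value; };

static const struct dt_prop *find_prop(const struct dt_prop *props, size_t n,
                                       const char *name)
{
	for (size_t i = 0; i < n; i++)
		if (props[i].present && !strcmp(props[i].name, name))
			return &props[i];
	return NULL;
}

/* Old policy: property present at all => true (like of_property_read_bool). */
static bool multipoint_by_presence(const struct dt_prop *props, size_t n)
{
	return find_prop(props, n, "mentor,multipoint") != NULL;
}

/* New policy: property must be present AND non-zero, mimicking the
 * of_property_read_u32() + value check in the patch above. */
static bool multipoint_by_value(const struct dt_prop *props, size_t n)
{
	const struct dt_prop *p = find_prop(props, n, "mentor,multipoint");

	return p && p->value != 0;
}
```

The same policy change is applied to `omap2430.c` further down in this commit.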
+1 -1
drivers/usb/musb/musb_host.c
··· 2613 2613 .description = "musb-hcd", 2614 2614 .product_desc = "MUSB HDRC host driver", 2615 2615 .hcd_priv_size = sizeof(struct musb *), 2616 - .flags = HCD_USB2 | HCD_MEMORY, 2616 + .flags = HCD_USB2 | HCD_MEMORY | HCD_BH, 2617 2617 2618 2618 /* not using irq handler or reset hooks from usbcore, since 2619 2619 * those must be shared with peripheral code for OTG configs
+5 -2
drivers/usb/musb/omap2430.c
··· 516 516 struct omap2430_glue *glue; 517 517 struct device_node *np = pdev->dev.of_node; 518 518 struct musb_hdrc_config *config; 519 - int ret = -ENOMEM; 519 + int ret = -ENOMEM, val; 520 520 521 521 glue = devm_kzalloc(&pdev->dev, sizeof(*glue), GFP_KERNEL); 522 522 if (!glue) ··· 559 559 of_property_read_u32(np, "num-eps", (u32 *)&config->num_eps); 560 560 of_property_read_u32(np, "ram-bits", (u32 *)&config->ram_bits); 561 561 of_property_read_u32(np, "power", (u32 *)&pdata->power); 562 - config->multipoint = of_property_read_bool(np, "multipoint"); 562 + 563 + ret = of_property_read_u32(np, "multipoint", &val); 564 + if (!ret && val) 565 + config->multipoint = true; 563 566 564 567 pdata->board_data = data; 565 568 pdata->config = config;
+3
drivers/usb/phy/phy-am335x-control.c
··· 126 126 return NULL; 127 127 128 128 dev = bus_find_device(&platform_bus_type, NULL, node, match); 129 + if (!dev) 130 + return NULL; 131 + 129 132 ctrl_usb = dev_get_drvdata(dev); 130 133 if (!ctrl_usb) 131 134 return NULL;
+1
drivers/usb/renesas_usbhs/Kconfig
··· 6 6 tristate 'Renesas USBHS controller' 7 7 depends on USB_GADGET 8 8 depends on ARCH_SHMOBILE || SUPERH || COMPILE_TEST 9 + depends on EXTCON || !EXTCON # if EXTCON=m, USBHS cannot be built-in 9 10 default n 10 11 help 11 12 Renesas USBHS is a discrete USB host and peripheral controller chip
+20 -27
drivers/usb/serial/bus.c
··· 38 38 return 0; 39 39 } 40 40 41 - static ssize_t port_number_show(struct device *dev, 42 - struct device_attribute *attr, char *buf) 43 - { 44 - struct usb_serial_port *port = to_usb_serial_port(dev); 45 - 46 - return sprintf(buf, "%d\n", port->port_number); 47 - } 48 - static DEVICE_ATTR_RO(port_number); 49 - 50 41 static int usb_serial_device_probe(struct device *dev) 51 42 { 52 43 struct usb_serial_driver *driver; 53 44 struct usb_serial_port *port; 45 + struct device *tty_dev; 54 46 int retval = 0; 55 47 int minor; 56 48 57 49 port = to_usb_serial_port(dev); 58 - if (!port) { 59 - retval = -ENODEV; 60 - goto exit; 61 - } 50 + if (!port) 51 + return -ENODEV; 62 52 63 53 /* make sure suspend/resume doesn't race against port_probe */ 64 54 retval = usb_autopm_get_interface(port->serial->interface); 65 55 if (retval) 66 - goto exit; 56 + return retval; 67 57 68 58 driver = port->serial->type; 69 59 if (driver->port_probe) { 70 60 retval = driver->port_probe(port); 71 61 if (retval) 72 - goto exit_with_autopm; 73 - } 74 - 75 - retval = device_create_file(dev, &dev_attr_port_number); 76 - if (retval) { 77 - if (driver->port_remove) 78 - retval = driver->port_remove(port); 79 - goto exit_with_autopm; 62 + goto err_autopm_put; 80 63 } 81 64 82 65 minor = port->minor; 83 - tty_register_device(usb_serial_tty_driver, minor, dev); 66 + tty_dev = tty_register_device(usb_serial_tty_driver, minor, dev); 67 + if (IS_ERR(tty_dev)) { 68 + retval = PTR_ERR(tty_dev); 69 + goto err_port_remove; 70 + } 71 + 72 + usb_autopm_put_interface(port->serial->interface); 73 + 84 74 dev_info(&port->serial->dev->dev, 85 75 "%s converter now attached to ttyUSB%d\n", 86 76 driver->description, minor); 87 77 88 - exit_with_autopm: 78 + return 0; 79 + 80 + err_port_remove: 81 + if (driver->port_remove) 82 + driver->port_remove(port); 83 + err_autopm_put: 89 84 usb_autopm_put_interface(port->serial->interface); 90 - exit: 85 + 91 86 return retval; 92 87 } 93 88 ··· 108 113 109 114 minor = 
port->minor; 110 115 tty_unregister_device(usb_serial_tty_driver, minor); 111 - 112 - device_remove_file(&port->dev, &dev_attr_port_number); 113 116 114 117 driver = port->serial->type; 115 118 if (driver->port_remove)
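The reworked `usb_serial_device_probe()` above uses the kernel's stacked error-label idiom: each successful acquisition adds one more undo action, and a failure jumps to the label that unwinds exactly what has been taken so far, in reverse order. A stripped-down sketch of the pattern with hypothetical step/undo functions (the `undo_log` array only records the unwind order for checking):

```c
#include <assert.h>

/* Hypothetical resource steps; each returns 0 on success. */
static int undo_log[4], undo_cnt;

static int  step_autopm_get(int fail)  { return fail ? -1 : 0; }
static void undo_autopm_put(void)      { undo_log[undo_cnt++] = 1; }
static int  step_port_probe(int fail)  { return fail ? -1 : 0; }
static void undo_port_remove(void)     { undo_log[undo_cnt++] = 2; }
static int  step_tty_register(int fail){ return fail ? -1 : 0; }

/* probe(): unwind in reverse order of acquisition, like the
 * err_port_remove/err_autopm_put ladder in the diff above. */
static int probe(int fail_at)
{
	int ret;

	undo_cnt = 0;
	ret = step_autopm_get(fail_at == 1);
	if (ret)
		return ret;
	ret = step_port_probe(fail_at == 2);
	if (ret)
		goto err_autopm_put;
	ret = step_tty_register(fail_at == 3);
	if (ret)
		goto err_port_remove;
	undo_autopm_put();	/* success path still drops the PM reference */
	return 0;

err_port_remove:
	undo_port_remove();
err_autopm_put:
	undo_autopm_put();
	return ret;
}
```

Compared with the old single `exit_with_autopm` label, this shape also lets the new `tty_register_device()` error path undo `port_probe()` correctly.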
+6 -9
drivers/usb/serial/ch341.c
··· 84 84 u8 line_status; /* active status of modem control inputs */ 85 85 }; 86 86 87 + static void ch341_set_termios(struct tty_struct *tty, 88 + struct usb_serial_port *port, 89 + struct ktermios *old_termios); 90 + 87 91 static int ch341_control_out(struct usb_device *dev, u8 request, 88 92 u16 value, u16 index) 89 93 { ··· 313 309 struct ch341_private *priv = usb_get_serial_port_data(port); 314 310 int r; 315 311 316 - priv->baud_rate = DEFAULT_BAUD_RATE; 317 - 318 312 r = ch341_configure(serial->dev, priv); 319 313 if (r) 320 314 goto out; 321 315 322 - r = ch341_set_handshake(serial->dev, priv->line_control); 323 - if (r) 324 - goto out; 325 - 326 - r = ch341_set_baudrate(serial->dev, priv); 327 - if (r) 328 - goto out; 316 + if (tty) 317 + ch341_set_termios(tty, port, NULL); 329 318 330 319 dev_dbg(&port->dev, "%s - submitting interrupt urb\n", __func__); 331 320 r = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL);
+2
drivers/usb/serial/console.c
··· 14 14 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 15 15 16 16 #include <linux/kernel.h> 17 + #include <linux/module.h> 17 18 #include <linux/slab.h> 18 19 #include <linux/tty.h> 19 20 #include <linux/console.h> ··· 145 144 init_ldsem(&tty->ldisc_sem); 146 145 INIT_LIST_HEAD(&tty->tty_files); 147 146 kref_get(&tty->driver->kref); 147 + __module_get(tty->driver->owner); 148 148 tty->ops = &usb_console_fake_tty_ops; 149 149 if (tty_init_termios(tty)) { 150 150 retval = -ENOMEM;
+2
drivers/usb/serial/cp210x.c
··· 147 147 { USB_DEVICE(0x166A, 0x0305) }, /* Clipsal C-5000CT2 C-Bus Spectrum Colour Touchscreen */ 148 148 { USB_DEVICE(0x166A, 0x0401) }, /* Clipsal L51xx C-Bus Architectural Dimmer */ 149 149 { USB_DEVICE(0x166A, 0x0101) }, /* Clipsal 5560884 C-Bus Multi-room Audio Matrix Switcher */ 150 + { USB_DEVICE(0x16C0, 0x09B0) }, /* Lunatico Seletek */ 151 + { USB_DEVICE(0x16C0, 0x09B1) }, /* Lunatico Seletek */ 150 152 { USB_DEVICE(0x16D6, 0x0001) }, /* Jablotron serial interface */ 151 153 { USB_DEVICE(0x16DC, 0x0010) }, /* W-IE-NE-R Plein & Baus GmbH PL512 Power Supply */ 152 154 { USB_DEVICE(0x16DC, 0x0011) }, /* W-IE-NE-R Plein & Baus GmbH RCM Remote Control for MARATON Power Supply */
+26 -2
drivers/usb/serial/ftdi_sio.c
··· 604 604 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 605 605 { USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLXM_PID), 606 606 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 607 + { USB_DEVICE(FTDI_VID, FTDI_SYNAPSE_SS200_PID) }, 607 608 /* 608 609 * ELV devices: 609 610 */ ··· 800 799 { USB_DEVICE(FTDI_VID, FTDI_ELSTER_UNICOM_PID) }, 801 800 { USB_DEVICE(FTDI_VID, FTDI_PROPOX_JTAGCABLEII_PID) }, 802 801 { USB_DEVICE(FTDI_VID, FTDI_PROPOX_ISPCABLEIII_PID) }, 802 + { USB_DEVICE(FTDI_VID, CYBER_CORTEX_AV_PID), 803 + .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 803 804 { USB_DEVICE(OLIMEX_VID, OLIMEX_ARM_USB_OCD_PID), 804 805 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 805 806 { USB_DEVICE(OLIMEX_VID, OLIMEX_ARM_USB_OCD_H_PID), ··· 981 978 { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) }, 982 979 /* GE Healthcare devices */ 983 980 { USB_DEVICE(GE_HEALTHCARE_VID, GE_HEALTHCARE_NEMO_TRACKER_PID) }, 981 + /* Active Research (Actisense) devices */ 982 + { USB_DEVICE(FTDI_VID, ACTISENSE_NDC_PID) }, 983 + { USB_DEVICE(FTDI_VID, ACTISENSE_USG_PID) }, 984 + { USB_DEVICE(FTDI_VID, ACTISENSE_NGT_PID) }, 985 + { USB_DEVICE(FTDI_VID, ACTISENSE_NGW_PID) }, 986 + { USB_DEVICE(FTDI_VID, ACTISENSE_D9AC_PID) }, 987 + { USB_DEVICE(FTDI_VID, ACTISENSE_D9AD_PID) }, 988 + { USB_DEVICE(FTDI_VID, ACTISENSE_D9AE_PID) }, 989 + { USB_DEVICE(FTDI_VID, ACTISENSE_D9AF_PID) }, 990 + { USB_DEVICE(FTDI_VID, CHETCO_SEAGAUGE_PID) }, 991 + { USB_DEVICE(FTDI_VID, CHETCO_SEASWITCH_PID) }, 992 + { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_NMEA2000_PID) }, 993 + { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_ETHERNET_PID) }, 994 + { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_WIFI_PID) }, 995 + { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_DISPLAY_PID) }, 996 + { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_LITE_PID) }, 997 + { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_ANALOG_PID) }, 984 998 { } /* Terminating entry */ 985 999 }; 986 1000 ··· 1884 1864 { 1885 1865 struct usb_device *udev = serial->dev; 1886 
1866 1887 - if ((udev->manufacturer && !strcmp(udev->manufacturer, "CALAO Systems")) || 1888 - (udev->product && !strcmp(udev->product, "BeagleBone/XDS100V2"))) 1867 + if (udev->manufacturer && !strcmp(udev->manufacturer, "CALAO Systems")) 1868 + return ftdi_jtag_probe(serial); 1869 + 1870 + if (udev->product && 1871 + (!strcmp(udev->product, "BeagleBone/XDS100V2") || 1872 + !strcmp(udev->product, "SNAP Connect E10"))) 1889 1873 return ftdi_jtag_probe(serial); 1890 1874 1891 1875 return 0;
+29
drivers/usb/serial/ftdi_sio_ids.h
··· 38 38 39 39 #define FTDI_LUMEL_PD12_PID 0x6002 40 40 41 + /* Cyber Cortex AV by Fabulous Silicon (http://fabuloussilicon.com) */ 42 + #define CYBER_CORTEX_AV_PID 0x8698 43 + 41 44 /* 42 45 * Marvell OpenRD Base, Client 43 46 * http://www.open-rd.org ··· 560 557 * NovaTech product ids (FTDI_VID) 561 558 */ 562 559 #define FTDI_NT_ORIONLXM_PID 0x7c90 /* OrionLXm Substation Automation Platform */ 560 + 561 + /* 562 + * Synapse Wireless product ids (FTDI_VID) 563 + * http://www.synapse-wireless.com 564 + */ 565 + #define FTDI_SYNAPSE_SS200_PID 0x9090 /* SS200 - SNAP Stick 200 */ 563 566 564 567 565 568 /********************************/ ··· 1447 1438 */ 1448 1439 #define GE_HEALTHCARE_VID 0x1901 1449 1440 #define GE_HEALTHCARE_NEMO_TRACKER_PID 0x0015 1441 + 1442 + /* 1443 + * Active Research (Actisense) devices 1444 + */ 1445 + #define ACTISENSE_NDC_PID 0xD9A8 /* NDC USB Serial Adapter */ 1446 + #define ACTISENSE_USG_PID 0xD9A9 /* USG USB Serial Adapter */ 1447 + #define ACTISENSE_NGT_PID 0xD9AA /* NGT NMEA2000 Interface */ 1448 + #define ACTISENSE_NGW_PID 0xD9AB /* NGW NMEA2000 Gateway */ 1449 + #define ACTISENSE_D9AC_PID 0xD9AC /* Actisense Reserved */ 1450 + #define ACTISENSE_D9AD_PID 0xD9AD /* Actisense Reserved */ 1451 + #define ACTISENSE_D9AE_PID 0xD9AE /* Actisense Reserved */ 1452 + #define ACTISENSE_D9AF_PID 0xD9AF /* Actisense Reserved */ 1453 + #define CHETCO_SEAGAUGE_PID 0xA548 /* SeaGauge USB Adapter */ 1454 + #define CHETCO_SEASWITCH_PID 0xA549 /* SeaSwitch USB Adapter */ 1455 + #define CHETCO_SEASMART_NMEA2000_PID 0xA54A /* SeaSmart NMEA2000 Gateway */ 1456 + #define CHETCO_SEASMART_ETHERNET_PID 0xA54B /* SeaSmart Ethernet Gateway */ 1457 + #define CHETCO_SEASMART_WIFI_PID 0xA5AC /* SeaSmart Wifi Gateway */ 1458 + #define CHETCO_SEASMART_DISPLAY_PID 0xA5AD /* SeaSmart NMEA2000 Display */ 1459 + #define CHETCO_SEASMART_LITE_PID 0xA5AE /* SeaSmart Lite USB Adapter */ 1460 + #define CHETCO_SEASMART_ANALOG_PID 0xA5AF /* SeaSmart Analog Adapter */
+3 -2
drivers/usb/serial/generic.c
··· 258 258 * character or at least one jiffy. 259 259 */ 260 260 period = max_t(unsigned long, (10 * HZ / bps), 1); 261 - period = min_t(unsigned long, period, timeout); 261 + if (timeout) 262 + period = min_t(unsigned long, period, timeout); 262 263 263 264 dev_dbg(&port->dev, "%s - timeout = %u ms, period = %u ms\n", 264 265 __func__, jiffies_to_msecs(timeout), ··· 269 268 schedule_timeout_interruptible(period); 270 269 if (signal_pending(current)) 271 270 break; 272 - if (time_after(jiffies, expire)) 271 + if (timeout && time_after(jiffies, expire)) 273 272 break; 274 273 } 275 274 }
+3
drivers/usb/serial/keyspan_pda.c
··· 61 61 /* For Xircom PGSDB9 and older Entrega version of the same device */ 62 62 #define XIRCOM_VENDOR_ID 0x085a 63 63 #define XIRCOM_FAKE_ID 0x8027 64 + #define XIRCOM_FAKE_ID_2 0x8025 /* "PGMFHUB" serial */ 64 65 #define ENTREGA_VENDOR_ID 0x1645 65 66 #define ENTREGA_FAKE_ID 0x8093 66 67 ··· 71 70 #endif 72 71 #ifdef XIRCOM 73 72 { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID) }, 73 + { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID_2) }, 74 74 { USB_DEVICE(ENTREGA_VENDOR_ID, ENTREGA_FAKE_ID) }, 75 75 #endif 76 76 { USB_DEVICE(KEYSPAN_VENDOR_ID, KEYSPAN_PDA_ID) }, ··· 95 93 #ifdef XIRCOM 96 94 static const struct usb_device_id id_table_fake_xircom[] = { 97 95 { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID) }, 96 + { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID_2) }, 98 97 { USB_DEVICE(ENTREGA_VENDOR_ID, ENTREGA_FAKE_ID) }, 99 98 { } 100 99 };
+2 -1
drivers/usb/serial/mxuport.c
··· 1284 1284 } 1285 1285 1286 1286 /* Initial port termios */ 1287 - mxuport_set_termios(tty, port, NULL); 1287 + if (tty) 1288 + mxuport_set_termios(tty, port, NULL); 1288 1289 1289 1290 /* 1290 1291 * TODO: use RQ_VENDOR_GET_MSR, once we know what it
+13 -5
drivers/usb/serial/pl2303.c
··· 132 132 #define UART_OVERRUN_ERROR 0x40 133 133 #define UART_CTS 0x80 134 134 135 + static void pl2303_set_break(struct usb_serial_port *port, bool enable); 135 136 136 137 enum pl2303_type { 137 138 TYPE_01, /* Type 0 and 1 (difference unknown) */ ··· 616 615 { 617 616 usb_serial_generic_close(port); 618 617 usb_kill_urb(port->interrupt_in_urb); 618 + pl2303_set_break(port, false); 619 619 } 620 620 621 621 static int pl2303_open(struct tty_struct *tty, struct usb_serial_port *port) ··· 743 741 return -ENOIOCTLCMD; 744 742 } 745 743 746 - static void pl2303_break_ctl(struct tty_struct *tty, int break_state) 744 + static void pl2303_set_break(struct usb_serial_port *port, bool enable) 747 745 { 748 - struct usb_serial_port *port = tty->driver_data; 749 746 struct usb_serial *serial = port->serial; 750 747 u16 state; 751 748 int result; 752 749 753 - if (break_state == 0) 754 - state = BREAK_OFF; 755 - else 750 + if (enable) 756 751 state = BREAK_ON; 752 + else 753 + state = BREAK_OFF; 757 754 758 755 dev_dbg(&port->dev, "%s - turning break %s\n", __func__, 759 756 state == BREAK_OFF ? "off" : "on"); ··· 762 761 0, NULL, 0, 100); 763 762 if (result) 764 763 dev_err(&port->dev, "error sending break = %d\n", result); 764 + } 765 + 766 + static void pl2303_break_ctl(struct tty_struct *tty, int state) 767 + { 768 + struct usb_serial_port *port = tty->driver_data; 769 + 770 + pl2303_set_break(port, state); 765 771 } 766 772 767 773 static void pl2303_update_line_status(struct usb_serial_port *port,
+19 -2
drivers/usb/serial/usb-serial.c
··· 687 687 drv->dtr_rts(p, on); 688 688 } 689 689 690 + static ssize_t port_number_show(struct device *dev, 691 + struct device_attribute *attr, char *buf) 692 + { 693 + struct usb_serial_port *port = to_usb_serial_port(dev); 694 + 695 + return sprintf(buf, "%u\n", port->port_number); 696 + } 697 + static DEVICE_ATTR_RO(port_number); 698 + 699 + static struct attribute *usb_serial_port_attrs[] = { 700 + &dev_attr_port_number.attr, 701 + NULL 702 + }; 703 + ATTRIBUTE_GROUPS(usb_serial_port); 704 + 690 705 static const struct tty_port_operations serial_port_ops = { 691 706 .carrier_raised = serial_port_carrier_raised, 692 707 .dtr_rts = serial_port_dtr_rts, ··· 917 902 port->dev.driver = NULL; 918 903 port->dev.bus = &usb_serial_bus_type; 919 904 port->dev.release = &usb_serial_port_release; 905 + port->dev.groups = usb_serial_port_groups; 920 906 device_initialize(&port->dev); 921 907 } 922 908 ··· 956 940 port = serial->port[i]; 957 941 if (kfifo_alloc(&port->write_fifo, PAGE_SIZE, GFP_KERNEL)) 958 942 goto probe_error; 959 - buffer_size = max_t(int, serial->type->bulk_out_size, 960 - usb_endpoint_maxp(endpoint)); 943 + buffer_size = serial->type->bulk_out_size; 944 + if (!buffer_size) 945 + buffer_size = usb_endpoint_maxp(endpoint); 961 946 port->bulk_out_size = buffer_size; 962 947 port->bulk_out_endpointAddress = endpoint->bEndpointAddress; 963 948
+14
drivers/usb/storage/unusual_uas.h
··· 113 113 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 114 114 US_FL_NO_ATA_1X), 115 115 116 + /* Reported-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> */ 117 + UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x9999, 118 + "Initio Corporation", 119 + "", 120 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 121 + US_FL_NO_ATA_1X), 122 + 123 + /* Reported-by: Tom Arild Naess <tanaess@gmail.com> */ 124 + UNUSUAL_DEV(0x152d, 0x0539, 0x0000, 0x9999, 125 + "JMicron", 126 + "JMS539", 127 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 128 + US_FL_NO_REPORT_OPCODES), 129 + 116 130 /* Reported-by: Claudio Bizzarri <claudio.bizzarri@gmail.com> */ 117 131 UNUSUAL_DEV(0x152d, 0x0567, 0x0000, 0x9999, 118 132 "JMicron",
+6
drivers/usb/storage/usb.c
··· 889 889 !(us->fflags & US_FL_SCM_MULT_TARG)) { 890 890 mutex_lock(&us->dev_mutex); 891 891 us->max_lun = usb_stor_Bulk_max_lun(us); 892 + /* 893 + * Allow proper scanning of devices that present more than 8 LUNs 894 + * While not affecting other devices that may need the previous behavior 895 + */ 896 + if (us->max_lun >= 8) 897 + us_to_host(us)->max_lun = us->max_lun+1; 892 898 mutex_unlock(&us->dev_mutex); 893 899 } 894 900 scsi_scan_host(us_to_host(us));
+2
drivers/vfio/pci/vfio_pci_intrs.c
··· 868 868 func = vfio_pci_set_err_trigger; 869 869 break; 870 870 } 871 + break; 871 872 case VFIO_PCI_REQ_IRQ_INDEX: 872 873 switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) { 873 874 case VFIO_IRQ_SET_ACTION_TRIGGER: 874 875 func = vfio_pci_set_req_trigger; 875 876 break; 876 877 } 878 + break; 877 879 } 878 880 879 881 if (!func)
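The two added `break` statements above stop the `VFIO_PCI_ERR_IRQ_INDEX` case from falling through into the `VFIO_PCI_REQ_IRQ_INDEX` case and overwriting the handler it had just selected. A compact reproduction of the bug shape (index values and handlers are made up; the real code nests a second switch on the action flags):

```c
#include <assert.h>
#include <stddef.h>

typedef int (*handler_t)(void);

static int err_trigger(void) { return 1; }
static int req_trigger(void) { return 2; }

/* With 'fixed' == 0 this mimics the pre-patch switch: after picking
 * err_trigger for index 0, control falls through into index 1's case,
 * which overwrites func with req_trigger. */
static handler_t pick_handler(int index, int fixed)
{
	handler_t func = NULL;

	switch (index) {
	case 0:
		func = err_trigger;
		if (fixed)
			break;
		/* fall through: the pre-patch bug */
	case 1:
		func = req_trigger;
		break;
	}
	return func;
}
```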
+14 -11
drivers/vhost/net.c
··· 591 591 * TODO: support TSO. 592 592 */ 593 593 iov_iter_advance(&msg.msg_iter, vhost_hlen); 594 - } else { 595 - /* It'll come from socket; we'll need to patch 596 - * ->num_buffers over if VIRTIO_NET_F_MRG_RXBUF 597 - */ 598 - iov_iter_advance(&fixup, sizeof(hdr)); 599 594 } 600 595 err = sock->ops->recvmsg(NULL, sock, &msg, 601 596 sock_len, MSG_DONTWAIT | MSG_TRUNC); ··· 604 609 continue; 605 610 } 606 611 /* Supply virtio_net_hdr if VHOST_NET_F_VIRTIO_NET_HDR */ 607 - if (unlikely(vhost_hlen) && 608 - copy_to_iter(&hdr, sizeof(hdr), &fixup) != sizeof(hdr)) { 609 - vq_err(vq, "Unable to write vnet_hdr at addr %p\n", 610 - vq->iov->iov_base); 611 - break; 612 + if (unlikely(vhost_hlen)) { 613 + if (copy_to_iter(&hdr, sizeof(hdr), 614 + &fixup) != sizeof(hdr)) { 615 + vq_err(vq, "Unable to write vnet_hdr " 616 + "at addr %p\n", vq->iov->iov_base); 617 + break; 618 + } 619 + } else { 620 + /* Header came from socket; we'll need to patch 621 + * ->num_buffers over if VIRTIO_NET_F_MRG_RXBUF 622 + */ 623 + iov_iter_advance(&fixup, sizeof(hdr)); 612 624 } 613 625 /* TODO: Should check and handle checksum. */ 614 626 615 627 num_buffers = cpu_to_vhost16(vq, headcount); 616 628 if (likely(mergeable) && 617 - copy_to_iter(&num_buffers, 2, &fixup) != 2) { 629 + copy_to_iter(&num_buffers, sizeof num_buffers, 630 + &fixup) != sizeof num_buffers) { 618 631 vq_err(vq, "Failed num_buffers write"); 619 632 vhost_discard_vq_desc(vq, headcount); 620 633 break;
+2 -3
drivers/vhost/scsi.c
··· 1956 1956 goto out; 1957 1957 } 1958 1958 /* 1959 - * Now register the TCM vhost virtual I_T Nexus as active with the 1960 - * call to __transport_register_session() 1959 + * Now register the TCM vhost virtual I_T Nexus as active. 1961 1960 */ 1962 - __transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl, 1961 + transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl, 1963 1962 tv_nexus->tvn_se_sess, tv_nexus); 1964 1963 tpg->tpg_nexus = tv_nexus; 1965 1964
+3
drivers/video/fbdev/amba-clcd.c
··· 599 599 600 600 len = clcdfb_snprintf_mode(NULL, 0, mode); 601 601 name = devm_kzalloc(dev, len + 1, GFP_KERNEL); 602 + if (!name) 603 + return -ENOMEM; 604 + 602 605 clcdfb_snprintf_mode(name, len + 1, mode); 603 606 mode->name = name; 604 607
+3 -3
drivers/video/fbdev/core/fbmon.c
··· 624 624 int num = 0, i, first = 1; 625 625 int ver, rev; 626 626 627 - ver = edid[EDID_STRUCT_VERSION]; 628 - rev = edid[EDID_STRUCT_REVISION]; 629 - 630 627 mode = kzalloc(50 * sizeof(struct fb_videomode), GFP_KERNEL); 631 628 if (mode == NULL) 632 629 return NULL; ··· 633 636 kfree(mode); 634 637 return NULL; 635 638 } 639 + 640 + ver = edid[EDID_STRUCT_VERSION]; 641 + rev = edid[EDID_STRUCT_REVISION]; 636 642 637 643 *dbsize = 0; 638 644
+95 -84
drivers/video/fbdev/omap2/dss/display-sysfs.c
··· 28 28 #include <video/omapdss.h> 29 29 #include "dss.h" 30 30 31 - static struct omap_dss_device *to_dss_device_sysfs(struct device *dev) 31 + static ssize_t display_name_show(struct omap_dss_device *dssdev, char *buf) 32 32 { 33 - struct omap_dss_device *dssdev = NULL; 34 - 35 - for_each_dss_dev(dssdev) { 36 - if (dssdev->dev == dev) { 37 - omap_dss_put_device(dssdev); 38 - return dssdev; 39 - } 40 - } 41 - 42 - return NULL; 43 - } 44 - 45 - static ssize_t display_name_show(struct device *dev, 46 - struct device_attribute *attr, char *buf) 47 - { 48 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 49 - 50 33 return snprintf(buf, PAGE_SIZE, "%s\n", 51 34 dssdev->name ? 52 35 dssdev->name : ""); 53 36 } 54 37 55 - static ssize_t display_enabled_show(struct device *dev, 56 - struct device_attribute *attr, char *buf) 38 + static ssize_t display_enabled_show(struct omap_dss_device *dssdev, char *buf) 57 39 { 58 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 59 - 60 40 return snprintf(buf, PAGE_SIZE, "%d\n", 61 41 omapdss_device_is_enabled(dssdev)); 62 42 } 63 43 64 - static ssize_t display_enabled_store(struct device *dev, 65 - struct device_attribute *attr, 44 + static ssize_t display_enabled_store(struct omap_dss_device *dssdev, 66 45 const char *buf, size_t size) 67 46 { 68 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 69 47 int r; 70 48 bool enable; 71 49 ··· 68 90 return size; 69 91 } 70 92 71 - static ssize_t display_tear_show(struct device *dev, 72 - struct device_attribute *attr, char *buf) 93 + static ssize_t display_tear_show(struct omap_dss_device *dssdev, char *buf) 73 94 { 74 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 75 95 return snprintf(buf, PAGE_SIZE, "%d\n", 76 96 dssdev->driver->get_te ? 
77 97 dssdev->driver->get_te(dssdev) : 0); 78 98 } 79 99 80 - static ssize_t display_tear_store(struct device *dev, 81 - struct device_attribute *attr, const char *buf, size_t size) 100 + static ssize_t display_tear_store(struct omap_dss_device *dssdev, 101 + const char *buf, size_t size) 82 102 { 83 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 84 103 int r; 85 104 bool te; 86 105 ··· 95 120 return size; 96 121 } 97 122 98 - static ssize_t display_timings_show(struct device *dev, 99 - struct device_attribute *attr, char *buf) 123 + static ssize_t display_timings_show(struct omap_dss_device *dssdev, char *buf) 100 124 { 101 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 102 125 struct omap_video_timings t; 103 126 104 127 if (!dssdev->driver->get_timings) ··· 110 137 t.y_res, t.vfp, t.vbp, t.vsw); 111 138 } 112 139 113 - static ssize_t display_timings_store(struct device *dev, 114 - struct device_attribute *attr, const char *buf, size_t size) 140 + static ssize_t display_timings_store(struct omap_dss_device *dssdev, 141 + const char *buf, size_t size) 115 142 { 116 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 117 143 struct omap_video_timings t = dssdev->panel.timings; 118 144 int r, found; 119 145 ··· 148 176 return size; 149 177 } 150 178 151 - static ssize_t display_rotate_show(struct device *dev, 152 - struct device_attribute *attr, char *buf) 179 + static ssize_t display_rotate_show(struct omap_dss_device *dssdev, char *buf) 153 180 { 154 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 155 181 int rotate; 156 182 if (!dssdev->driver->get_rotate) 157 183 return -ENOENT; ··· 157 187 return snprintf(buf, PAGE_SIZE, "%u\n", rotate); 158 188 } 159 189 160 - static ssize_t display_rotate_store(struct device *dev, 161 - struct device_attribute *attr, const char *buf, size_t size) 190 + static ssize_t display_rotate_store(struct omap_dss_device *dssdev, 191 + const char *buf, size_t size) 162 192 { 163 - struct 
omap_dss_device *dssdev = to_dss_device_sysfs(dev); 164 193 int rot, r; 165 194 166 195 if (!dssdev->driver->set_rotate || !dssdev->driver->get_rotate) ··· 176 207 return size; 177 208 } 178 209 179 - static ssize_t display_mirror_show(struct device *dev, 180 - struct device_attribute *attr, char *buf) 210 + static ssize_t display_mirror_show(struct omap_dss_device *dssdev, char *buf) 181 211 { 182 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 183 212 int mirror; 184 213 if (!dssdev->driver->get_mirror) 185 214 return -ENOENT; ··· 185 218 return snprintf(buf, PAGE_SIZE, "%u\n", mirror); 186 219 } 187 220 188 - static ssize_t display_mirror_store(struct device *dev, 189 - struct device_attribute *attr, const char *buf, size_t size) 221 + static ssize_t display_mirror_store(struct omap_dss_device *dssdev, 222 + const char *buf, size_t size) 190 223 { 191 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 192 224 int r; 193 225 bool mirror; 194 226 ··· 205 239 return size; 206 240 } 207 241 208 - static ssize_t display_wss_show(struct device *dev, 209 - struct device_attribute *attr, char *buf) 242 + static ssize_t display_wss_show(struct omap_dss_device *dssdev, char *buf) 210 243 { 211 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 212 244 unsigned int wss; 213 245 214 246 if (!dssdev->driver->get_wss) ··· 217 253 return snprintf(buf, PAGE_SIZE, "0x%05x\n", wss); 218 254 } 219 255 220 - static ssize_t display_wss_store(struct device *dev, 221 - struct device_attribute *attr, const char *buf, size_t size) 256 + static ssize_t display_wss_store(struct omap_dss_device *dssdev, 257 + const char *buf, size_t size) 222 258 { 223 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 224 259 u32 wss; 225 260 int r; 226 261 ··· 240 277 return size; 241 278 } 242 279 243 - static DEVICE_ATTR(display_name, S_IRUGO, display_name_show, NULL); 244 - static DEVICE_ATTR(enabled, S_IRUGO|S_IWUSR, 280 + struct display_attribute { 281 + 
struct attribute attr; 282 + ssize_t (*show)(struct omap_dss_device *, char *); 283 + ssize_t (*store)(struct omap_dss_device *, const char *, size_t); 284 + }; 285 + 286 + #define DISPLAY_ATTR(_name, _mode, _show, _store) \ 287 + struct display_attribute display_attr_##_name = \ 288 + __ATTR(_name, _mode, _show, _store) 289 + 290 + static DISPLAY_ATTR(name, S_IRUGO, display_name_show, NULL); 291 + static DISPLAY_ATTR(display_name, S_IRUGO, display_name_show, NULL); 292 + static DISPLAY_ATTR(enabled, S_IRUGO|S_IWUSR, 245 293 display_enabled_show, display_enabled_store); 246 - static DEVICE_ATTR(tear_elim, S_IRUGO|S_IWUSR, 294 + static DISPLAY_ATTR(tear_elim, S_IRUGO|S_IWUSR, 247 295 display_tear_show, display_tear_store); 248 - static DEVICE_ATTR(timings, S_IRUGO|S_IWUSR, 296 + static DISPLAY_ATTR(timings, S_IRUGO|S_IWUSR, 249 297 display_timings_show, display_timings_store); 250 - static DEVICE_ATTR(rotate, S_IRUGO|S_IWUSR, 298 + static DISPLAY_ATTR(rotate, S_IRUGO|S_IWUSR, 251 299 display_rotate_show, display_rotate_store); 252 - static DEVICE_ATTR(mirror, S_IRUGO|S_IWUSR, 300 + static DISPLAY_ATTR(mirror, S_IRUGO|S_IWUSR, 253 301 display_mirror_show, display_mirror_store); 254 - static DEVICE_ATTR(wss, S_IRUGO|S_IWUSR, 302 + static DISPLAY_ATTR(wss, S_IRUGO|S_IWUSR, 255 303 display_wss_show, display_wss_store); 256 304 257 - static const struct attribute *display_sysfs_attrs[] = { 258 - &dev_attr_display_name.attr, 259 - &dev_attr_enabled.attr, 260 - &dev_attr_tear_elim.attr, 261 - &dev_attr_timings.attr, 262 - &dev_attr_rotate.attr, 263 - &dev_attr_mirror.attr, 264 - &dev_attr_wss.attr, 305 + static struct attribute *display_sysfs_attrs[] = { 306 + &display_attr_name.attr, 307 + &display_attr_display_name.attr, 308 + &display_attr_enabled.attr, 309 + &display_attr_tear_elim.attr, 310 + &display_attr_timings.attr, 311 + &display_attr_rotate.attr, 312 + &display_attr_mirror.attr, 313 + &display_attr_wss.attr, 265 314 NULL 315 + }; 316 + 317 + static ssize_t 
display_attr_show(struct kobject *kobj, struct attribute *attr, 318 + char *buf) 319 + { 320 + struct omap_dss_device *dssdev; 321 + struct display_attribute *display_attr; 322 + 323 + dssdev = container_of(kobj, struct omap_dss_device, kobj); 324 + display_attr = container_of(attr, struct display_attribute, attr); 325 + 326 + if (!display_attr->show) 327 + return -ENOENT; 328 + 329 + return display_attr->show(dssdev, buf); 330 + } 331 + 332 + static ssize_t display_attr_store(struct kobject *kobj, struct attribute *attr, 333 + const char *buf, size_t size) 334 + { 335 + struct omap_dss_device *dssdev; 336 + struct display_attribute *display_attr; 337 + 338 + dssdev = container_of(kobj, struct omap_dss_device, kobj); 339 + display_attr = container_of(attr, struct display_attribute, attr); 340 + 341 + if (!display_attr->store) 342 + return -ENOENT; 343 + 344 + return display_attr->store(dssdev, buf, size); 345 + } 346 + 347 + static const struct sysfs_ops display_sysfs_ops = { 348 + .show = display_attr_show, 349 + .store = display_attr_store, 350 + }; 351 + 352 + static struct kobj_type display_ktype = { 353 + .sysfs_ops = &display_sysfs_ops, 354 + .default_attrs = display_sysfs_attrs, 266 355 }; 267 356 268 357 int display_init_sysfs(struct platform_device *pdev) ··· 323 308 int r; 324 309 325 310 for_each_dss_dev(dssdev) { 326 - struct kobject *kobj = &dssdev->dev->kobj; 327 - 328 - r = sysfs_create_files(kobj, display_sysfs_attrs); 311 + r = kobject_init_and_add(&dssdev->kobj, &display_ktype, 312 + &pdev->dev.kobj, dssdev->alias); 329 313 if (r) { 330 314 DSSERR("failed to create sysfs files\n"); 331 - goto err; 332 - } 333 - 334 - r = sysfs_create_link(&pdev->dev.kobj, kobj, dssdev->alias); 335 - if (r) { 336 - sysfs_remove_files(kobj, display_sysfs_attrs); 337 - 338 - DSSERR("failed to create sysfs display link\n"); 315 + omap_dss_put_device(dssdev); 339 316 goto err; 340 317 } 341 318 } ··· 345 338 struct omap_dss_device *dssdev = NULL; 346 339 347 340 
for_each_dss_dev(dssdev) { 348 - sysfs_remove_link(&pdev->dev.kobj, dssdev->alias); 349 - sysfs_remove_files(&dssdev->dev->kobj, 350 - display_sysfs_attrs); 341 + if (kobject_name(&dssdev->kobj) == NULL) 342 + continue; 343 + 344 + kobject_del(&dssdev->kobj); 345 + kobject_put(&dssdev->kobj); 346 + 347 + memset(&dssdev->kobj, 0, sizeof(dssdev->kobj)); 351 348 } 352 349 }
+16 -5
drivers/virtio/virtio_balloon.c
··· 29 29 #include <linux/module.h> 30 30 #include <linux/balloon_compaction.h> 31 31 #include <linux/oom.h> 32 + #include <linux/wait.h> 32 33 33 34 /* 34 35 * Balloon device works in 4K page units. So each page is pointed to by ··· 335 334 static int balloon(void *_vballoon) 336 335 { 337 336 struct virtio_balloon *vb = _vballoon; 337 + DEFINE_WAIT_FUNC(wait, woken_wake_function); 338 338 339 339 set_freezable(); 340 340 while (!kthread_should_stop()) { 341 341 s64 diff; 342 342 343 343 try_to_freeze(); 344 - wait_event_interruptible(vb->config_change, 345 - (diff = towards_target(vb)) != 0 346 - || vb->need_stats_update 347 - || kthread_should_stop() 348 - || freezing(current)); 344 + 345 + add_wait_queue(&vb->config_change, &wait); 346 + for (;;) { 347 + if ((diff = towards_target(vb)) != 0 || 348 + vb->need_stats_update || 349 + kthread_should_stop() || 350 + freezing(current)) 351 + break; 352 + wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT); 353 + } 354 + remove_wait_queue(&vb->config_change, &wait); 355 + 349 356 if (vb->need_stats_update) 350 357 stats_handle_request(vb); 351 358 if (diff > 0) ··· 507 498 err = register_oom_notifier(&vb->nb); 508 499 if (err < 0) 509 500 goto out_oom_notify; 501 + 502 + virtio_device_ready(vdev); 510 503 511 504 vb->thread = kthread_run(balloon, vb, "vballoon"); 512 505 if (IS_ERR(vb->thread)) {
+82 -8
drivers/virtio/virtio_mmio.c
··· 156 156 void *buf, unsigned len) 157 157 { 158 158 struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev); 159 - u8 *ptr = buf; 160 - int i; 159 + void __iomem *base = vm_dev->base + VIRTIO_MMIO_CONFIG; 160 + u8 b; 161 + __le16 w; 162 + __le32 l; 161 163 162 - for (i = 0; i < len; i++) 163 - ptr[i] = readb(vm_dev->base + VIRTIO_MMIO_CONFIG + offset + i); 164 + if (vm_dev->version == 1) { 165 + u8 *ptr = buf; 166 + int i; 167 + 168 + for (i = 0; i < len; i++) 169 + ptr[i] = readb(base + offset + i); 170 + return; 171 + } 172 + 173 + switch (len) { 174 + case 1: 175 + b = readb(base + offset); 176 + memcpy(buf, &b, sizeof b); 177 + break; 178 + case 2: 179 + w = cpu_to_le16(readw(base + offset)); 180 + memcpy(buf, &w, sizeof w); 181 + break; 182 + case 4: 183 + l = cpu_to_le32(readl(base + offset)); 184 + memcpy(buf, &l, sizeof l); 185 + break; 186 + case 8: 187 + l = cpu_to_le32(readl(base + offset)); 188 + memcpy(buf, &l, sizeof l); 189 + l = cpu_to_le32(ioread32(base + offset + sizeof l)); 190 + memcpy(buf + sizeof l, &l, sizeof l); 191 + break; 192 + default: 193 + BUG(); 194 + } 164 195 } 165 196 166 197 static void vm_set(struct virtio_device *vdev, unsigned offset, 167 198 const void *buf, unsigned len) 168 199 { 169 200 struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev); 170 - const u8 *ptr = buf; 171 - int i; 201 + void __iomem *base = vm_dev->base + VIRTIO_MMIO_CONFIG; 202 + u8 b; 203 + __le16 w; 204 + __le32 l; 172 205 173 - for (i = 0; i < len; i++) 174 - writeb(ptr[i], vm_dev->base + VIRTIO_MMIO_CONFIG + offset + i); 206 + if (vm_dev->version == 1) { 207 + const u8 *ptr = buf; 208 + int i; 209 + 210 + for (i = 0; i < len; i++) 211 + writeb(ptr[i], base + offset + i); 212 + 213 + return; 214 + } 215 + 216 + switch (len) { 217 + case 1: 218 + memcpy(&b, buf, sizeof b); 219 + writeb(b, base + offset); 220 + break; 221 + case 2: 222 + memcpy(&w, buf, sizeof w); 223 + writew(le16_to_cpu(w), base + offset); 224 + break; 225 + case 4: 
226 + memcpy(&l, buf, sizeof l); 227 + writel(le32_to_cpu(l), base + offset); 228 + break; 229 + case 8: 230 + memcpy(&l, buf, sizeof l); 231 + writel(le32_to_cpu(l), base + offset); 232 + memcpy(&l, buf + sizeof l, sizeof l); 233 + writel(le32_to_cpu(l), base + offset + sizeof l); 234 + break; 235 + default: 236 + BUG(); 237 + } 238 + } 239 + 240 + static u32 vm_generation(struct virtio_device *vdev) 241 + { 242 + struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev); 243 + 244 + if (vm_dev->version == 1) 245 + return 0; 246 + else 247 + return readl(vm_dev->base + VIRTIO_MMIO_CONFIG_GENERATION); 175 248 } 176 249 177 250 static u8 vm_get_status(struct virtio_device *vdev) ··· 513 440 static const struct virtio_config_ops virtio_mmio_config_ops = { 514 441 .get = vm_get, 515 442 .set = vm_set, 443 + .generation = vm_generation, 516 444 .get_status = vm_get_status, 517 445 .set_status = vm_set_status, 518 446 .reset = vm_reset,
+2 -1
drivers/watchdog/at91sam9_wdt.c
··· 208 208 209 209 if ((tmp & AT91_WDT_WDFIEN) && wdt->irq) { 210 210 err = request_irq(wdt->irq, wdt_interrupt, 211 - IRQF_SHARED | IRQF_IRQPOLL, 211 + IRQF_SHARED | IRQF_IRQPOLL | 212 + IRQF_NO_SUSPEND, 212 213 pdev->name, wdt); 213 214 if (err) 214 215 return err;
+4 -4
drivers/watchdog/imgpdc_wdt.c
··· 42 42 #define PDC_WDT_MIN_TIMEOUT 1 43 43 #define PDC_WDT_DEF_TIMEOUT 64 44 44 45 - static int heartbeat; 45 + static int heartbeat = PDC_WDT_DEF_TIMEOUT; 46 46 module_param(heartbeat, int, 0); 47 - MODULE_PARM_DESC(heartbeat, "Watchdog heartbeats in seconds. " 48 - "(default = " __MODULE_STRING(PDC_WDT_DEF_TIMEOUT) ")"); 47 + MODULE_PARM_DESC(heartbeat, "Watchdog heartbeats in seconds " 48 + "(default=" __MODULE_STRING(PDC_WDT_DEF_TIMEOUT) ")"); 49 49 50 50 static bool nowayout = WATCHDOG_NOWAYOUT; 51 51 module_param(nowayout, bool, 0); ··· 191 191 pdc_wdt->wdt_dev.ops = &pdc_wdt_ops; 192 192 pdc_wdt->wdt_dev.max_timeout = 1 << PDC_WDT_CONFIG_DELAY_MASK; 193 193 pdc_wdt->wdt_dev.parent = &pdev->dev; 194 + watchdog_set_drvdata(&pdc_wdt->wdt_dev, pdc_wdt); 194 195 195 196 ret = watchdog_init_timeout(&pdc_wdt->wdt_dev, heartbeat, &pdev->dev); 196 197 if (ret < 0) { ··· 233 232 watchdog_set_nowayout(&pdc_wdt->wdt_dev, nowayout); 234 233 235 234 platform_set_drvdata(pdev, pdc_wdt); 236 - watchdog_set_drvdata(&pdc_wdt->wdt_dev, pdc_wdt); 237 235 238 236 ret = watchdog_register_device(&pdc_wdt->wdt_dev); 239 237 if (ret)
+1 -1
drivers/watchdog/mtk_wdt.c
··· 133 133 u32 reg; 134 134 struct mtk_wdt_dev *mtk_wdt = watchdog_get_drvdata(wdt_dev); 135 135 void __iomem *wdt_base = mtk_wdt->wdt_base; 136 - u32 ret; 136 + int ret; 137 137 138 138 ret = mtk_wdt_set_timeout(wdt_dev, wdt_dev->timeout); 139 139 if (ret < 0)
+17
drivers/xen/Kconfig
··· 55 55 56 56 In that case step 3 should be omitted. 57 57 58 + config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT 59 + int "Hotplugged memory limit (in GiB) for a PV guest" 60 + default 512 if X86_64 61 + default 4 if X86_32 62 + range 0 64 if X86_32 63 + depends on XEN_HAVE_PVMMU 64 + depends on XEN_BALLOON_MEMORY_HOTPLUG 65 + help 66 + Maximum amount of memory (in GiB) that a PV guest can be 67 + expanded to when using memory hotplug. 68 + 69 + A PV guest can have more memory than this limit if it is 70 + started with a larger maximum. 71 + 72 + This value is used to allocate enough space in internal 73 + tables needed for physical memory administration. 74 + 58 75 config XEN_SCRUB_PAGES 59 76 bool "Scrub pages before returning them to system" 60 77 depends on XEN_BALLOON
+23
drivers/xen/balloon.c
··· 229 229 balloon_hotplug = round_up(balloon_hotplug, PAGES_PER_SECTION); 230 230 nid = memory_add_physaddr_to_nid(hotplug_start_paddr); 231 231 232 + #ifdef CONFIG_XEN_HAVE_PVMMU 233 + /* 234 + * add_memory() will build page tables for the new memory so 235 + * the p2m must contain invalid entries so the correct 236 + * non-present PTEs will be written. 237 + * 238 + * If a failure occurs, the original (identity) p2m entries 239 + * are not restored since this region is now known not to 240 + * conflict with any devices. 241 + */ 242 + if (!xen_feature(XENFEAT_auto_translated_physmap)) { 243 + unsigned long pfn, i; 244 + 245 + pfn = PFN_DOWN(hotplug_start_paddr); 246 + for (i = 0; i < balloon_hotplug; i++) { 247 + if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) { 248 + pr_warn("set_phys_to_machine() failed, no memory added\n"); 249 + return BP_ECANCELED; 250 + } 251 + } 252 + } 253 + #endif 254 + 232 255 rc = add_memory(nid, hotplug_start_paddr, balloon_hotplug << PAGE_SHIFT); 233 256 234 257 if (rc) {
+12 -6
drivers/xen/events/events_base.c
··· 526 526 pirq_query_unmask(irq); 527 527 528 528 rc = set_evtchn_to_irq(evtchn, irq); 529 - if (rc != 0) { 530 - pr_err("irq%d: Failed to set port to irq mapping (%d)\n", 531 - irq, rc); 532 - xen_evtchn_close(evtchn); 533 - return 0; 534 - } 529 + if (rc) 530 + goto err; 531 + 535 532 bind_evtchn_to_cpu(evtchn, 0); 536 533 info->evtchn = evtchn; 534 + 535 + rc = xen_evtchn_port_setup(info); 536 + if (rc) 537 + goto err; 537 538 538 539 out: 539 540 unmask_evtchn(evtchn); 540 541 eoi_pirq(irq_get_irq_data(irq)); 541 542 543 + return 0; 544 + 545 + err: 546 + pr_err("irq%d: Failed to set port to irq mapping (%d)\n", irq, rc); 547 + xen_evtchn_close(evtchn); 542 548 return 0; 543 549 } 544 550
+1 -1
drivers/xen/xen-pciback/conf_space.c
··· 16 16 #include "conf_space.h" 17 17 #include "conf_space_quirks.h" 18 18 19 - static bool permissive; 19 + bool permissive; 20 20 module_param(permissive, bool, 0644); 21 21 22 22 /* This is where xen_pcibk_read_config_byte, xen_pcibk_read_config_word,
+2
drivers/xen/xen-pciback/conf_space.h
··· 64 64 void *data; 65 65 }; 66 66 67 + extern bool permissive; 68 + 67 69 #define OFFSET(cfg_entry) ((cfg_entry)->base_offset+(cfg_entry)->field->offset) 68 70 69 71 /* Add fields to a device - the add_fields macro expects to get a pointer to
+47 -12
drivers/xen/xen-pciback/conf_space_header.c
··· 11 11 #include "pciback.h" 12 12 #include "conf_space.h" 13 13 14 + struct pci_cmd_info { 15 + u16 val; 16 + }; 17 + 14 18 struct pci_bar_info { 15 19 u32 val; 16 20 u32 len_val; ··· 24 20 #define is_enable_cmd(value) ((value)&(PCI_COMMAND_MEMORY|PCI_COMMAND_IO)) 25 21 #define is_master_cmd(value) ((value)&PCI_COMMAND_MASTER) 26 22 23 + /* Bits guests are allowed to control in permissive mode. */ 24 + #define PCI_COMMAND_GUEST (PCI_COMMAND_MASTER|PCI_COMMAND_SPECIAL| \ 25 + PCI_COMMAND_INVALIDATE|PCI_COMMAND_VGA_PALETTE| \ 26 + PCI_COMMAND_WAIT|PCI_COMMAND_FAST_BACK) 27 + 28 + static void *command_init(struct pci_dev *dev, int offset) 29 + { 30 + struct pci_cmd_info *cmd = kmalloc(sizeof(*cmd), GFP_KERNEL); 31 + int err; 32 + 33 + if (!cmd) 34 + return ERR_PTR(-ENOMEM); 35 + 36 + err = pci_read_config_word(dev, PCI_COMMAND, &cmd->val); 37 + if (err) { 38 + kfree(cmd); 39 + return ERR_PTR(err); 40 + } 41 + 42 + return cmd; 43 + } 44 + 27 45 static int command_read(struct pci_dev *dev, int offset, u16 *value, void *data) 28 46 { 29 - int i; 30 - int ret; 47 + int ret = pci_read_config_word(dev, offset, value); 48 + const struct pci_cmd_info *cmd = data; 31 49 32 - ret = xen_pcibk_read_config_word(dev, offset, value, data); 33 - if (!pci_is_enabled(dev)) 34 - return ret; 35 - 36 - for (i = 0; i < PCI_ROM_RESOURCE; i++) { 37 - if (dev->resource[i].flags & IORESOURCE_IO) 38 - *value |= PCI_COMMAND_IO; 39 - if (dev->resource[i].flags & IORESOURCE_MEM) 40 - *value |= PCI_COMMAND_MEMORY; 41 - } 50 + *value &= PCI_COMMAND_GUEST; 51 + *value |= cmd->val & ~PCI_COMMAND_GUEST; 42 52 43 53 return ret; 44 54 } ··· 61 43 { 62 44 struct xen_pcibk_dev_data *dev_data; 63 45 int err; 46 + u16 val; 47 + struct pci_cmd_info *cmd = data; 64 48 65 49 dev_data = pci_get_drvdata(dev); 66 50 if (!pci_is_enabled(dev) && is_enable_cmd(value)) { ··· 102 82 value &= ~PCI_COMMAND_INVALIDATE; 103 83 } 104 84 } 85 + 86 + cmd->val = value; 87 + 88 + if (!permissive && (!dev_data || 
!dev_data->permissive)) 89 + return 0; 90 + 91 + /* Only allow the guest to control certain bits. */ 92 + err = pci_read_config_word(dev, offset, &val); 93 + if (err || val == value) 94 + return err; 95 + 96 + value &= PCI_COMMAND_GUEST; 97 + value |= val & ~PCI_COMMAND_GUEST; 105 98 106 99 return pci_write_config_word(dev, offset, value); 107 100 } ··· 315 282 { 316 283 .offset = PCI_COMMAND, 317 284 .size = 2, 285 + .init = command_init, 286 + .release = bar_release, 318 287 .u.w.read = command_read, 319 288 .u.w.write = command_write, 320 289 },
+2 -5
drivers/xen/xen-scsiback.c
··· 1659 1659 name); 1660 1660 goto out; 1661 1661 } 1662 - /* 1663 - * Now register the TCM pvscsi virtual I_T Nexus as active with the 1664 - * call to __transport_register_session() 1665 - */ 1666 - __transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl, 1662 + /* Now register the TCM pvscsi virtual I_T Nexus as active. */ 1663 + transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl, 1667 1664 tv_nexus->tvn_se_sess, tv_nexus); 1668 1665 tpg->tpg_nexus = tv_nexus; 1669 1666
+12 -7
fs/affs/file.c
··· 699 699 boff = tmp % bsize; 700 700 if (boff) { 701 701 bh = affs_bread_ino(inode, bidx, 0); 702 - if (IS_ERR(bh)) 703 - return PTR_ERR(bh); 702 + if (IS_ERR(bh)) { 703 + written = PTR_ERR(bh); 704 + goto err_first_bh; 705 + } 704 706 tmp = min(bsize - boff, to - from); 705 707 BUG_ON(boff + tmp > bsize || tmp > bsize); 706 708 memcpy(AFFS_DATA(bh) + boff, data + from, tmp); ··· 714 712 bidx++; 715 713 } else if (bidx) { 716 714 bh = affs_bread_ino(inode, bidx - 1, 0); 717 - if (IS_ERR(bh)) 718 - return PTR_ERR(bh); 715 + if (IS_ERR(bh)) { 716 + written = PTR_ERR(bh); 717 + goto err_first_bh; 718 + } 719 719 } 720 720 while (from + bsize <= to) { 721 721 prev_bh = bh; 722 722 bh = affs_getemptyblk_ino(inode, bidx); 723 723 if (IS_ERR(bh)) 724 - goto out; 724 + goto err_bh; 725 725 memcpy(AFFS_DATA(bh), data + from, bsize); 726 726 if (buffer_new(bh)) { 727 727 AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA); ··· 755 751 prev_bh = bh; 756 752 bh = affs_bread_ino(inode, bidx, 1); 757 753 if (IS_ERR(bh)) 758 - goto out; 754 + goto err_bh; 759 755 tmp = min(bsize, to - from); 760 756 BUG_ON(tmp > bsize); 761 757 memcpy(AFFS_DATA(bh), data + from, tmp); ··· 794 790 if (tmp > inode->i_size) 795 791 inode->i_size = AFFS_I(inode)->mmu_private = tmp; 796 792 793 + err_first_bh: 797 794 unlock_page(page); 798 795 page_cache_release(page); 799 796 800 797 return written; 801 798 802 - out: 799 + err_bh: 803 800 bh = prev_bh; 804 801 if (!written) 805 802 written = PTR_ERR(bh);
+4 -4
fs/btrfs/ctree.c
··· 1645 1645 1646 1646 parent_nritems = btrfs_header_nritems(parent); 1647 1647 blocksize = root->nodesize; 1648 - end_slot = parent_nritems; 1648 + end_slot = parent_nritems - 1; 1649 1649 1650 - if (parent_nritems == 1) 1650 + if (parent_nritems <= 1) 1651 1651 return 0; 1652 1652 1653 1653 btrfs_set_lock_blocking(parent); 1654 1654 1655 - for (i = start_slot; i < end_slot; i++) { 1655 + for (i = start_slot; i <= end_slot; i++) { 1656 1656 int close = 1; 1657 1657 1658 1658 btrfs_node_key(parent, &disk_key, i); ··· 1669 1669 other = btrfs_node_blockptr(parent, i - 1); 1670 1670 close = close_blocks(blocknr, other, blocksize); 1671 1671 } 1672 - if (!close && i < end_slot - 2) { 1672 + if (!close && i < end_slot) { 1673 1673 other = btrfs_node_blockptr(parent, i + 1); 1674 1674 close = close_blocks(blocknr, other, blocksize); 1675 1675 }
+5
fs/btrfs/ctree.h
··· 3387 3387 3388 3388 int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans, 3389 3389 struct btrfs_root *root); 3390 + int btrfs_setup_space_cache(struct btrfs_trans_handle *trans, 3391 + struct btrfs_root *root); 3390 3392 int btrfs_extent_readonly(struct btrfs_root *root, u64 bytenr); 3391 3393 int btrfs_free_block_groups(struct btrfs_fs_info *info); 3392 3394 int btrfs_read_block_groups(struct btrfs_root *root); ··· 3911 3909 loff_t actual_len, u64 *alloc_hint); 3912 3910 int btrfs_inode_check_errors(struct inode *inode); 3913 3911 extern const struct dentry_operations btrfs_dentry_operations; 3912 + #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS 3913 + void btrfs_test_inode_set_ops(struct inode *inode); 3914 + #endif 3914 3915 3915 3916 /* ioctl.c */ 3916 3917 long btrfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
+1 -1
fs/btrfs/disk-io.c
··· 3921 3921 } 3922 3922 if (btrfs_super_sys_array_size(sb) < sizeof(struct btrfs_disk_key) 3923 3923 + sizeof(struct btrfs_chunk)) { 3924 - printk(KERN_ERR "BTRFS: system chunk array too small %u < %lu\n", 3924 + printk(KERN_ERR "BTRFS: system chunk array too small %u < %zu\n", 3925 3925 btrfs_super_sys_array_size(sb), 3926 3926 sizeof(struct btrfs_disk_key) 3927 3927 + sizeof(struct btrfs_chunk));
+50 -1
fs/btrfs/extent-tree.c
··· 3208 3208 return 0; 3209 3209 } 3210 3210 3211 + if (trans->aborted) 3212 + return 0; 3211 3213 again: 3212 3214 inode = lookup_free_space_inode(root, block_group, path); 3213 3215 if (IS_ERR(inode) && PTR_ERR(inode) != -ENOENT) { ··· 3245 3243 */ 3246 3244 BTRFS_I(inode)->generation = 0; 3247 3245 ret = btrfs_update_inode(trans, root, inode); 3246 + if (ret) { 3247 + /* 3248 + * So theoretically we could recover from this, simply set the 3249 + * super cache generation to 0 so we know to invalidate the 3250 + * cache, but then we'd have to keep track of the block groups 3251 + * that fail this way so we know we _have_ to reset this cache 3252 + * before the next commit or risk reading stale cache. So to 3253 + * limit our exposure to horrible edge cases lets just abort the 3254 + * transaction, this only happens in really bad situations 3255 + * anyway. 3256 + */ 3257 + btrfs_abort_transaction(trans, root, ret); 3258 + goto out_put; 3259 + } 3248 3260 WARN_ON(ret); 3249 3261 3250 3262 if (i_size_read(inode) > 0) { ··· 3323 3307 spin_unlock(&block_group->lock); 3324 3308 3325 3309 return ret; 3310 + } 3311 + 3312 + int btrfs_setup_space_cache(struct btrfs_trans_handle *trans, 3313 + struct btrfs_root *root) 3314 + { 3315 + struct btrfs_block_group_cache *cache, *tmp; 3316 + struct btrfs_transaction *cur_trans = trans->transaction; 3317 + struct btrfs_path *path; 3318 + 3319 + if (list_empty(&cur_trans->dirty_bgs) || 3320 + !btrfs_test_opt(root, SPACE_CACHE)) 3321 + return 0; 3322 + 3323 + path = btrfs_alloc_path(); 3324 + if (!path) 3325 + return -ENOMEM; 3326 + 3327 + /* Could add new block groups, use _safe just in case */ 3328 + list_for_each_entry_safe(cache, tmp, &cur_trans->dirty_bgs, 3329 + dirty_list) { 3330 + if (cache->disk_cache_state == BTRFS_DC_CLEAR) 3331 + cache_save_setup(cache, trans, path); 3332 + } 3333 + 3334 + btrfs_free_path(path); 3335 + return 0; 3326 3336 } 3327 3337 3328 3338 int btrfs_write_dirty_block_groups(struct btrfs_trans_handle 
*trans, ··· 5136 5094 num_bytes = ALIGN(num_bytes, root->sectorsize); 5137 5095 5138 5096 spin_lock(&BTRFS_I(inode)->lock); 5139 - BTRFS_I(inode)->outstanding_extents++; 5097 + nr_extents = (unsigned)div64_u64(num_bytes + 5098 + BTRFS_MAX_EXTENT_SIZE - 1, 5099 + BTRFS_MAX_EXTENT_SIZE); 5100 + BTRFS_I(inode)->outstanding_extents += nr_extents; 5101 + nr_extents = 0; 5140 5102 5141 5103 if (BTRFS_I(inode)->outstanding_extents > 5142 5104 BTRFS_I(inode)->reserved_extents) ··· 5284 5238 spin_unlock(&BTRFS_I(inode)->lock); 5285 5239 if (dropped > 0) 5286 5240 to_free += btrfs_calc_trans_metadata_size(root, dropped); 5241 + 5242 + if (btrfs_test_is_dummy_root(root)) 5243 + return; 5287 5244 5288 5245 trace_btrfs_space_reservation(root->fs_info, "delalloc", 5289 5246 btrfs_ino(inode), to_free, 0);
+6
fs/btrfs/extent_io.c
··· 4968 4968 4969 4969 /* Should be safe to release our pages at this point */ 4970 4970 btrfs_release_extent_buffer_page(eb); 4971 + #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS 4972 + if (unlikely(test_bit(EXTENT_BUFFER_DUMMY, &eb->bflags))) { 4973 + __free_extent_buffer(eb); 4974 + return 1; 4975 + } 4976 + #endif 4971 4977 call_rcu(&eb->rcu_head, btrfs_release_extent_buffer_rcu); 4972 4978 return 1; 4973 4979 }
+56 -31
fs/btrfs/file.c
··· 1811 1811 mutex_unlock(&inode->i_mutex); 1812 1812 1813 1813 /* 1814 - * we want to make sure fsync finds this change 1815 - * but we haven't joined a transaction running right now. 1816 - * 1817 - * Later on, someone is sure to update the inode and get the 1818 - * real transid recorded. 1819 - * 1820 - * We set last_trans now to the fs_info generation + 1, 1821 - * this will either be one more than the running transaction 1822 - * or the generation used for the next transaction if there isn't 1823 - * one running right now. 1824 - * 1825 1814 * We also have to set last_sub_trans to the current log transid, 1826 1815 * otherwise subsequent syncs to a file that's been synced in this 1827 1816 * transaction will appear to have already occured. 1828 1817 */ 1829 - BTRFS_I(inode)->last_trans = root->fs_info->generation + 1; 1830 1818 BTRFS_I(inode)->last_sub_trans = root->log_transid; 1831 1819 if (num_written > 0) { 1832 1820 err = generic_write_sync(file, pos, num_written); ··· 1947 1959 atomic_inc(&root->log_batch); 1948 1960 1949 1961 /* 1950 - * check the transaction that last modified this inode 1951 - * and see if its already been committed 1952 - */ 1953 - if (!BTRFS_I(inode)->last_trans) { 1954 - mutex_unlock(&inode->i_mutex); 1955 - goto out; 1956 - } 1957 - 1958 - /* 1959 - * if the last transaction that changed this file was before 1960 - * the current transaction, we can bail out now without any 1961 - * syncing 1962 + * If the last transaction that changed this file was before the current 1963 + * transaction and we have the full sync flag set in our inode, we can 1964 + * bail out now without any syncing. 1965 + * 1966 + * Note that we can't bail out if the full sync flag isn't set. 
This is 1967 + * because when the full sync flag is set we start all ordered extents 1968 + * and wait for them to fully complete - when they complete they update 1969 + * the inode's last_trans field through: 1970 + * 1971 + * btrfs_finish_ordered_io() -> 1972 + * btrfs_update_inode_fallback() -> 1973 + * btrfs_update_inode() -> 1974 + * btrfs_set_inode_last_trans() 1975 + * 1976 + * So we are sure that last_trans is up to date and can do this check to 1977 + * bail out safely. For the fast path, when the full sync flag is not 1978 + * set in our inode, we cannot do it because we start only our ordered 1979 + * extents and don't wait for them to complete (that is when 1980 + * btrfs_finish_ordered_io runs), so here at this point their last_trans 1981 + * value might be less than or equal to fs_info->last_trans_committed, 1982 + * and setting a speculative last_trans for an inode when a buffered 1983 + * write is made (such as fs_info->generation + 1 for example) would not 1984 + * be reliable since after setting the value and before fsync is called 1985 + * any number of transactions can start and commit (transaction kthread 1986 + * commits the current transaction periodically), and a transaction 1987 + * commit does not start nor wait for ordered extents to complete. 
1962 1988 */ 1963 1989 smp_mb(); 1964 1990 if (btrfs_inode_in_log(inode, root->fs_info->generation) || 1965 - BTRFS_I(inode)->last_trans <= 1966 - root->fs_info->last_trans_committed) { 1967 - BTRFS_I(inode)->last_trans = 0; 1968 - 1991 + (full_sync && BTRFS_I(inode)->last_trans <= 1992 + root->fs_info->last_trans_committed)) { 1969 1993 /* 1970 1994 * We'v had everything committed since the last time we were 1971 1995 * modified so clear this flag in case it was set for whatever ··· 2275 2275 bool same_page; 2276 2276 bool no_holes = btrfs_fs_incompat(root->fs_info, NO_HOLES); 2277 2277 u64 ino_size; 2278 + bool truncated_page = false; 2279 + bool updated_inode = false; 2278 2280 2279 2281 ret = btrfs_wait_ordered_range(inode, offset, len); 2280 2282 if (ret) ··· 2308 2306 * entire page. 2309 2307 */ 2310 2308 if (same_page && len < PAGE_CACHE_SIZE) { 2311 - if (offset < ino_size) 2309 + if (offset < ino_size) { 2310 + truncated_page = true; 2312 2311 ret = btrfs_truncate_page(inode, offset, len, 0); 2312 + } else { 2313 + ret = 0; 2314 + } 2313 2315 goto out_only_mutex; 2314 2316 } 2315 2317 2316 2318 /* zero back part of the first page */ 2317 2319 if (offset < ino_size) { 2320 + truncated_page = true; 2318 2321 ret = btrfs_truncate_page(inode, offset, 0, 0); 2319 2322 if (ret) { 2320 2323 mutex_unlock(&inode->i_mutex); ··· 2355 2348 if (!ret) { 2356 2349 /* zero the front end of the last page */ 2357 2350 if (tail_start + tail_len < ino_size) { 2351 + truncated_page = true; 2358 2352 ret = btrfs_truncate_page(inode, 2359 2353 tail_start + tail_len, 0, 1); 2360 2354 if (ret) ··· 2365 2357 } 2366 2358 2367 2359 if (lockend < lockstart) { 2368 - mutex_unlock(&inode->i_mutex); 2369 - return 0; 2360 + ret = 0; 2361 + goto out_only_mutex; 2370 2362 } 2371 2363 2372 2364 while (1) { ··· 2514 2506 2515 2507 trans->block_rsv = &root->fs_info->trans_block_rsv; 2516 2508 ret = btrfs_update_inode(trans, root, inode); 2509 + updated_inode = true; 2517 2510 
btrfs_end_transaction(trans, root); 2518 2511 btrfs_btree_balance_dirty(root); 2519 2512 out_free: ··· 2524 2515 unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend, 2525 2516 &cached_state, GFP_NOFS); 2526 2517 out_only_mutex: 2518 + if (!updated_inode && truncated_page && !ret && !err) { 2519 + /* 2520 + * If we only end up zeroing part of a page, we still need to 2521 + * update the inode item, so that all the time fields are 2522 + * updated as well as the necessary btrfs inode in memory fields 2523 + * for detecting, at fsync time, if the inode isn't yet in the 2524 + * log tree or it's there but not up to date. 2525 + */ 2526 + trans = btrfs_start_transaction(root, 1); 2527 + if (IS_ERR(trans)) { 2528 + err = PTR_ERR(trans); 2529 + } else { 2530 + err = btrfs_update_inode(trans, root, inode); 2531 + ret = btrfs_end_transaction(trans, root); 2532 + } 2533 + } 2527 2534 mutex_unlock(&inode->i_mutex); 2528 2535 if (ret && !err) 2529 2536 err = ret;
+84 -29
fs/btrfs/inode.c
··· 108 108 109 109 static int btrfs_dirty_inode(struct inode *inode); 110 110 111 + #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS 112 + void btrfs_test_inode_set_ops(struct inode *inode) 113 + { 114 + BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops; 115 + } 116 + #endif 117 + 111 118 static int btrfs_init_inode_security(struct btrfs_trans_handle *trans, 112 119 struct inode *inode, struct inode *dir, 113 120 const struct qstr *qstr) ··· 1549 1542 u64 new_size; 1550 1543 1551 1544 /* 1552 - * We need the largest size of the remaining extent to see if we 1553 - * need to add a new outstanding extent. Think of the following 1554 - * case 1555 - * 1556 - * [MEAX_EXTENT_SIZEx2 - 4k][4k] 1557 - * 1558 - * The new_size would just be 4k and we'd think we had enough 1559 - * outstanding extents for this if we only took one side of the 1560 - * split, same goes for the other direction. We need to see if 1561 - * the larger size still is the same amount of extents as the 1562 - * original size, because if it is we need to add a new 1563 - * outstanding extent. But if we split up and the larger size 1564 - * is less than the original then we are good to go since we've 1565 - * already accounted for the extra extent in our original 1566 - * accounting. 1545 + * See the explanation in btrfs_merge_extent_hook, the same 1546 + * applies here, just in reverse. 
1567 1547 */ 1568 1548 new_size = orig->end - split + 1; 1569 - if ((split - orig->start) > new_size) 1570 - new_size = split - orig->start; 1571 - 1572 - num_extents = div64_u64(size + BTRFS_MAX_EXTENT_SIZE - 1, 1549 + num_extents = div64_u64(new_size + BTRFS_MAX_EXTENT_SIZE - 1, 1573 1550 BTRFS_MAX_EXTENT_SIZE); 1574 - if (div64_u64(new_size + BTRFS_MAX_EXTENT_SIZE - 1, 1575 - BTRFS_MAX_EXTENT_SIZE) < num_extents) 1551 + new_size = split - orig->start; 1552 + num_extents += div64_u64(new_size + BTRFS_MAX_EXTENT_SIZE - 1, 1553 + BTRFS_MAX_EXTENT_SIZE); 1554 + if (div64_u64(size + BTRFS_MAX_EXTENT_SIZE - 1, 1555 + BTRFS_MAX_EXTENT_SIZE) >= num_extents) 1576 1556 return; 1577 1557 } 1578 1558 ··· 1585 1591 if (!(other->state & EXTENT_DELALLOC)) 1586 1592 return; 1587 1593 1588 - old_size = other->end - other->start + 1; 1589 - new_size = old_size + (new->end - new->start + 1); 1594 + if (new->start > other->start) 1595 + new_size = new->end - other->start + 1; 1596 + else 1597 + new_size = other->end - new->start + 1; 1590 1598 1591 1599 /* we're not bigger than the max, unreserve the space and go */ 1592 1600 if (new_size <= BTRFS_MAX_EXTENT_SIZE) { ··· 1599 1603 } 1600 1604 1601 1605 /* 1602 - * If we grew by another max_extent, just return, we want to keep that 1603 - * reserved amount. 1606 + * We have to add up either side to figure out how many extents were 1607 + * accounted for before we merged into one big extent. If the number of 1608 + * extents we accounted for is <= the amount we need for the new range 1609 + * then we can return, otherwise drop. Think of it like this 1610 + * 1611 + * [ 4k][MAX_SIZE] 1612 + * 1613 + * So we've grown the extent by a MAX_SIZE extent, this would mean we 1614 + * need 2 outstanding extents, on one side we have 1 and the other side 1615 + * we have 1 so they are == and we can return. 
But in this case 1616 + * 1617 + * [MAX_SIZE+4k][MAX_SIZE+4k] 1618 + * 1619 + * Each range on their own accounts for 2 extents, but merged together 1620 + * they are only 3 extents worth of accounting, so we need to drop in 1621 + * this case. 1604 1622 */ 1623 + old_size = other->end - other->start + 1; 1605 1624 num_extents = div64_u64(old_size + BTRFS_MAX_EXTENT_SIZE - 1, 1606 1625 BTRFS_MAX_EXTENT_SIZE); 1626 + old_size = new->end - new->start + 1; 1627 + num_extents += div64_u64(old_size + BTRFS_MAX_EXTENT_SIZE - 1, 1628 + BTRFS_MAX_EXTENT_SIZE); 1629 + 1607 1630 if (div64_u64(new_size + BTRFS_MAX_EXTENT_SIZE - 1, 1608 - BTRFS_MAX_EXTENT_SIZE) > num_extents) 1631 + BTRFS_MAX_EXTENT_SIZE) >= num_extents) 1609 1632 return; 1610 1633 1611 1634 spin_lock(&BTRFS_I(inode)->lock); ··· 1701 1686 spin_unlock(&BTRFS_I(inode)->lock); 1702 1687 } 1703 1688 1689 + /* For sanity tests */ 1690 + if (btrfs_test_is_dummy_root(root)) 1691 + return; 1692 + 1704 1693 __percpu_counter_add(&root->fs_info->delalloc_bytes, len, 1705 1694 root->fs_info->delalloc_batch); 1706 1695 spin_lock(&BTRFS_I(inode)->lock); ··· 1759 1740 if (*bits & EXTENT_DO_ACCOUNTING && 1760 1741 root != root->fs_info->tree_root) 1761 1742 btrfs_delalloc_release_metadata(inode, len); 1743 + 1744 + /* For sanity tests. 
*/ 1745 + if (btrfs_test_is_dummy_root(root)) 1746 + return; 1762 1747 1763 1748 if (root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID 1764 1749 && do_list && !(state->state & EXTENT_NORESERVE)) ··· 7236 7213 u64 start = iblock << inode->i_blkbits; 7237 7214 u64 lockstart, lockend; 7238 7215 u64 len = bh_result->b_size; 7239 - u64 orig_len = len; 7216 + u64 *outstanding_extents = NULL; 7240 7217 int unlock_bits = EXTENT_LOCKED; 7241 7218 int ret = 0; 7242 7219 ··· 7247 7224 7248 7225 lockstart = start; 7249 7226 lockend = start + len - 1; 7227 + 7228 + if (current->journal_info) { 7229 + /* 7230 + * Need to pull our outstanding extents and set journal_info to NULL so 7231 + * that anything that needs to check if there's a transction doesn't get 7232 + * confused. 7233 + */ 7234 + outstanding_extents = current->journal_info; 7235 + current->journal_info = NULL; 7236 + } 7250 7237 7251 7238 /* 7252 7239 * If this errors out it's because we couldn't invalidate pagecache for ··· 7318 7285 ((BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW) && 7319 7286 em->block_start != EXTENT_MAP_HOLE)) { 7320 7287 int type; 7321 - int ret; 7322 7288 u64 block_start, orig_start, orig_block_len, ram_bytes; 7323 7289 7324 7290 if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags)) ··· 7381 7349 if (start + len > i_size_read(inode)) 7382 7350 i_size_write(inode, start + len); 7383 7351 7384 - if (len < orig_len) { 7352 + /* 7353 + * If we have an outstanding_extents count still set then we're 7354 + * within our reservation, otherwise we need to adjust our inode 7355 + * counter appropriately. 
7356 + */ 7357 + if (*outstanding_extents) { 7358 + (*outstanding_extents)--; 7359 + } else { 7385 7360 spin_lock(&BTRFS_I(inode)->lock); 7386 7361 BTRFS_I(inode)->outstanding_extents++; 7387 7362 spin_unlock(&BTRFS_I(inode)->lock); 7388 7363 } 7364 + 7365 + current->journal_info = outstanding_extents; 7389 7366 btrfs_free_reserved_data_space(inode, len); 7390 7367 } 7391 7368 ··· 7418 7377 unlock_err: 7419 7378 clear_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, lockend, 7420 7379 unlock_bits, 1, 0, &cached_state, GFP_NOFS); 7380 + if (outstanding_extents) 7381 + current->journal_info = outstanding_extents; 7421 7382 return ret; 7422 7383 } 7423 7384 ··· 8119 8076 { 8120 8077 struct file *file = iocb->ki_filp; 8121 8078 struct inode *inode = file->f_mapping->host; 8079 + u64 outstanding_extents = 0; 8122 8080 size_t count = 0; 8123 8081 int flags = 0; 8124 8082 bool wakeup = true; ··· 8157 8113 ret = btrfs_delalloc_reserve_space(inode, count); 8158 8114 if (ret) 8159 8115 goto out; 8116 + outstanding_extents = div64_u64(count + 8117 + BTRFS_MAX_EXTENT_SIZE - 1, 8118 + BTRFS_MAX_EXTENT_SIZE); 8119 + 8120 + /* 8121 + * We need to know how many extents we reserved so that we can 8122 + * do the accounting properly if we go over the number we 8123 + * originally calculated. Abuse current->journal_info for this. 8124 + */ 8125 + current->journal_info = &outstanding_extents; 8160 8126 } else if (test_bit(BTRFS_INODE_READDIO_NEED_LOCK, 8161 8127 &BTRFS_I(inode)->runtime_flags)) { 8162 8128 inode_dio_done(inode); ··· 8179 8125 iter, offset, btrfs_get_blocks_direct, NULL, 8180 8126 btrfs_submit_direct, flags); 8181 8127 if (rw & WRITE) { 8128 + current->journal_info = NULL; 8182 8129 if (ret < 0 && ret != -EIOCBQUEUED) 8183 8130 btrfs_delalloc_release_space(inode, count); 8184 8131 else if (ret >= 0 && (size_t)ret < count)
+2 -5
fs/btrfs/ordered-data.c
··· 452 452 continue; 453 453 if (entry_end(ordered) <= start) 454 454 break; 455 - if (!list_empty(&ordered->log_list)) 456 - continue; 457 - if (test_bit(BTRFS_ORDERED_LOGGED, &ordered->flags)) 455 + if (test_and_set_bit(BTRFS_ORDERED_LOGGED, &ordered->flags)) 458 456 continue; 459 457 list_add(&ordered->log_list, logged_list); 460 458 atomic_inc(&ordered->refs); ··· 509 511 wait_event(ordered->wait, test_bit(BTRFS_ORDERED_IO_DONE, 510 512 &ordered->flags)); 511 513 512 - if (!test_and_set_bit(BTRFS_ORDERED_LOGGED, &ordered->flags)) 513 - list_add_tail(&ordered->trans_list, &trans->ordered); 514 + list_add_tail(&ordered->trans_list, &trans->ordered); 514 515 spin_lock_irq(&log->log_extents_lock[index]); 515 516 } 516 517 spin_unlock_irq(&log->log_extents_lock[index]);
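The ordered-data hunk above replaces a separate "already logged?" check with `test_and_set_bit()`, closing the window between testing the flag and setting it. A C11 sketch of the same claim-once idiom (names are stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define ORDERED_LOGGED_BIT 0UL	/* stand-in for BTRFS_ORDERED_LOGGED */

/* Returns true only for the caller that flips the bit, so an extent is
 * added to the logged list exactly once even with concurrent loggers. */
static bool claim_logged(atomic_ulong *flags)
{
	unsigned long old = atomic_fetch_or(flags, 1UL << ORDERED_LOGGED_BIT);

	return !(old & (1UL << ORDERED_LOGGED_BIT));
}

static int demo(void)
{
	atomic_ulong flags = 0;
	int first = claim_logged(&flags);	/* wins the bit */
	int second = claim_logged(&flags);	/* sees it already set */

	return first * 10 + second;
}
```

The separate test-then-set pair the patch removes would let two tasks both observe the bit clear and both add the entry.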
+1 -1
fs/btrfs/qgroup.c
··· 1259 1259 if (oper1->seq < oper2->seq) 1260 1260 return -1; 1261 1261 if (oper1->seq > oper2->seq) 1262 - return -1; 1262 + return 1; 1263 1263 if (oper1->ref_root < oper2->ref_root) 1264 1264 return -1; 1265 1265 if (oper1->ref_root > oper2->ref_root)
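The one-character qgroup fix matters because the comparator returned -1 for both orderings of `seq`, breaking the total order the lookup structure depends on. A correct three-way comparison in the same shape (the struct is a simplified stand-in):

```c
#include <assert.h>
#include <stdint.h>

struct oper {			/* simplified stand-in for the qgroup oper */
	uint64_t seq;
	uint64_t ref_root;
};

static int oper_cmp(const struct oper *a, const struct oper *b)
{
	if (a->seq < b->seq)
		return -1;
	if (a->seq > b->seq)
		return 1;	/* the pre-fix code returned -1 here too */
	if (a->ref_root < b->ref_root)
		return -1;
	if (a->ref_root > b->ref_root)
		return 1;
	return 0;
}
```

An asymmetric comparator (cmp(a,b) and cmp(b,a) both negative) makes tree insertion and search disagree about where an element lives.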
+156 -15
fs/btrfs/send.c
··· 230 230 u64 parent_ino; 231 231 u64 ino; 232 232 u64 gen; 233 + bool is_orphan; 233 234 struct list_head update_refs; 234 235 }; 235 236 ··· 2985 2984 u64 ino_gen, 2986 2985 u64 parent_ino, 2987 2986 struct list_head *new_refs, 2988 - struct list_head *deleted_refs) 2987 + struct list_head *deleted_refs, 2988 + const bool is_orphan) 2989 2989 { 2990 2990 struct rb_node **p = &sctx->pending_dir_moves.rb_node; 2991 2991 struct rb_node *parent = NULL; ··· 3001 2999 pm->parent_ino = parent_ino; 3002 3000 pm->ino = ino; 3003 3001 pm->gen = ino_gen; 3002 + pm->is_orphan = is_orphan; 3004 3003 INIT_LIST_HEAD(&pm->list); 3005 3004 INIT_LIST_HEAD(&pm->update_refs); 3006 3005 RB_CLEAR_NODE(&pm->node); ··· 3134 3131 rmdir_ino = dm->rmdir_ino; 3135 3132 free_waiting_dir_move(sctx, dm); 3136 3133 3137 - ret = get_first_ref(sctx->parent_root, pm->ino, 3138 - &parent_ino, &parent_gen, name); 3139 - if (ret < 0) 3140 - goto out; 3141 - 3142 - ret = get_cur_path(sctx, parent_ino, parent_gen, 3143 - from_path); 3144 - if (ret < 0) 3145 - goto out; 3146 - ret = fs_path_add_path(from_path, name); 3134 + if (pm->is_orphan) { 3135 + ret = gen_unique_name(sctx, pm->ino, 3136 + pm->gen, from_path); 3137 + } else { 3138 + ret = get_first_ref(sctx->parent_root, pm->ino, 3139 + &parent_ino, &parent_gen, name); 3140 + if (ret < 0) 3141 + goto out; 3142 + ret = get_cur_path(sctx, parent_ino, parent_gen, 3143 + from_path); 3144 + if (ret < 0) 3145 + goto out; 3146 + ret = fs_path_add_path(from_path, name); 3147 + } 3147 3148 if (ret < 0) 3148 3149 goto out; 3149 3150 ··· 3157 3150 LIST_HEAD(deleted_refs); 3158 3151 ASSERT(ancestor > BTRFS_FIRST_FREE_OBJECTID); 3159 3152 ret = add_pending_dir_move(sctx, pm->ino, pm->gen, ancestor, 3160 - &pm->update_refs, &deleted_refs); 3153 + &pm->update_refs, &deleted_refs, 3154 + pm->is_orphan); 3161 3155 if (ret < 0) 3162 3156 goto out; 3163 3157 if (rmdir_ino) { ··· 3291 3283 return ret; 3292 3284 } 3293 3285 3286 + /* 3287 + * We might need to delay a 
directory rename even when no ancestor directory 3288 + * (in the send root) with a higher inode number than ours (sctx->cur_ino) was 3289 + * renamed. This happens when we rename a directory to the old name (the name 3290 + * in the parent root) of some other unrelated directory that got its rename 3291 + * delayed due to some ancestor with higher number that got renamed. 3292 + * 3293 + * Example: 3294 + * 3295 + * Parent snapshot: 3296 + * . (ino 256) 3297 + * |---- a/ (ino 257) 3298 + * | |---- file (ino 260) 3299 + * | 3300 + * |---- b/ (ino 258) 3301 + * |---- c/ (ino 259) 3302 + * 3303 + * Send snapshot: 3304 + * . (ino 256) 3305 + * |---- a/ (ino 258) 3306 + * |---- x/ (ino 259) 3307 + * |---- y/ (ino 257) 3308 + * |----- file (ino 260) 3309 + * 3310 + * Here we can not rename 258 from 'b' to 'a' without the rename of inode 257 3311 + * from 'a' to 'x/y' happening first, which in turn depends on the rename of 3312 + * inode 259 from 'c' to 'x'. So the order of rename commands the send stream 3313 + * must issue is: 3314 + * 3315 + * 1 - rename 259 from 'c' to 'x' 3316 + * 2 - rename 257 from 'a' to 'x/y' 3317 + * 3 - rename 258 from 'b' to 'a' 3318 + * 3319 + * Returns 1 if the rename of sctx->cur_ino needs to be delayed, 0 if it can 3320 + * be done right away and < 0 on error. 
3321 + */ 3322 + static int wait_for_dest_dir_move(struct send_ctx *sctx, 3323 + struct recorded_ref *parent_ref, 3324 + const bool is_orphan) 3325 + { 3326 + struct btrfs_path *path; 3327 + struct btrfs_key key; 3328 + struct btrfs_key di_key; 3329 + struct btrfs_dir_item *di; 3330 + u64 left_gen; 3331 + u64 right_gen; 3332 + int ret = 0; 3333 + 3334 + if (RB_EMPTY_ROOT(&sctx->waiting_dir_moves)) 3335 + return 0; 3336 + 3337 + path = alloc_path_for_send(); 3338 + if (!path) 3339 + return -ENOMEM; 3340 + 3341 + key.objectid = parent_ref->dir; 3342 + key.type = BTRFS_DIR_ITEM_KEY; 3343 + key.offset = btrfs_name_hash(parent_ref->name, parent_ref->name_len); 3344 + 3345 + ret = btrfs_search_slot(NULL, sctx->parent_root, &key, path, 0, 0); 3346 + if (ret < 0) { 3347 + goto out; 3348 + } else if (ret > 0) { 3349 + ret = 0; 3350 + goto out; 3351 + } 3352 + 3353 + di = btrfs_match_dir_item_name(sctx->parent_root, path, 3354 + parent_ref->name, parent_ref->name_len); 3355 + if (!di) { 3356 + ret = 0; 3357 + goto out; 3358 + } 3359 + /* 3360 + * di_key.objectid has the number of the inode that has a dentry in the 3361 + * parent directory with the same name that sctx->cur_ino is being 3362 + * renamed to. We need to check if that inode is in the send root as 3363 + * well and if it is currently marked as an inode with a pending rename, 3364 + * if it is, we need to delay the rename of sctx->cur_ino as well, so 3365 + * that it happens after that other inode is renamed. 
3366 + */ 3367 + btrfs_dir_item_key_to_cpu(path->nodes[0], di, &di_key); 3368 + if (di_key.type != BTRFS_INODE_ITEM_KEY) { 3369 + ret = 0; 3370 + goto out; 3371 + } 3372 + 3373 + ret = get_inode_info(sctx->parent_root, di_key.objectid, NULL, 3374 + &left_gen, NULL, NULL, NULL, NULL); 3375 + if (ret < 0) 3376 + goto out; 3377 + ret = get_inode_info(sctx->send_root, di_key.objectid, NULL, 3378 + &right_gen, NULL, NULL, NULL, NULL); 3379 + if (ret < 0) { 3380 + if (ret == -ENOENT) 3381 + ret = 0; 3382 + goto out; 3383 + } 3384 + 3385 + /* Different inode, no need to delay the rename of sctx->cur_ino */ 3386 + if (right_gen != left_gen) { 3387 + ret = 0; 3388 + goto out; 3389 + } 3390 + 3391 + if (is_waiting_for_move(sctx, di_key.objectid)) { 3392 + ret = add_pending_dir_move(sctx, 3393 + sctx->cur_ino, 3394 + sctx->cur_inode_gen, 3395 + di_key.objectid, 3396 + &sctx->new_refs, 3397 + &sctx->deleted_refs, 3398 + is_orphan); 3399 + if (!ret) 3400 + ret = 1; 3401 + } 3402 + out: 3403 + btrfs_free_path(path); 3404 + return ret; 3405 + } 3406 + 3294 3407 static int wait_for_parent_move(struct send_ctx *sctx, 3295 3408 struct recorded_ref *parent_ref) 3296 3409 { ··· 3478 3349 sctx->cur_inode_gen, 3479 3350 ino, 3480 3351 &sctx->new_refs, 3481 - &sctx->deleted_refs); 3352 + &sctx->deleted_refs, 3353 + false); 3482 3354 if (!ret) 3483 3355 ret = 1; 3484 3356 } ··· 3502 3372 int did_overwrite = 0; 3503 3373 int is_orphan = 0; 3504 3374 u64 last_dir_ino_rm = 0; 3375 + bool can_rename = true; 3505 3376 3506 3377 verbose_printk("btrfs: process_recorded_refs %llu\n", sctx->cur_ino); 3507 3378 ··· 3621 3490 } 3622 3491 } 3623 3492 3493 + if (S_ISDIR(sctx->cur_inode_mode) && sctx->parent_root) { 3494 + ret = wait_for_dest_dir_move(sctx, cur, is_orphan); 3495 + if (ret < 0) 3496 + goto out; 3497 + if (ret == 1) { 3498 + can_rename = false; 3499 + *pending_move = 1; 3500 + } 3501 + } 3502 + 3624 3503 /* 3625 3504 * link/move the ref to the new place. 
If we have an orphan 3626 3505 * inode, move it and update valid_path. If not, link or move 3627 3506 * it depending on the inode mode. 3628 3507 */ 3629 - if (is_orphan) { 3508 + if (is_orphan && can_rename) { 3630 3509 ret = send_rename(sctx, valid_path, cur->full_path); 3631 3510 if (ret < 0) 3632 3511 goto out; ··· 3644 3503 ret = fs_path_copy(valid_path, cur->full_path); 3645 3504 if (ret < 0) 3646 3505 goto out; 3647 - } else { 3506 + } else if (can_rename) { 3648 3507 if (S_ISDIR(sctx->cur_inode_mode)) { 3649 3508 /* 3650 3509 * Dirs can't be linked, so move it. For moved
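The ordering constraint in the `wait_for_dest_dir_move()` comment (rename 259 'c'→'x', then 257 'a'→'x/y', then 258 'b'→'a') can be reproduced with plain POSIX renames: the out-of-order rename fails because 'a' is still occupied by a non-empty directory. A sketch run in a throwaway temp dir (paths hypothetical):

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Recreates the parent snapshot (a/file, b, c) and shows that the rename
 * of inode 258 (b -> a) must wait for 259 (c -> x) and 257 (a -> x/y). */
static int demo_rename_order(void)
{
	char tmpl[] = "/tmp/send-demo-XXXXXX";
	int fd;

	if (!mkdtemp(tmpl) || chdir(tmpl))
		return -1;
	if (mkdir("a", 0700) || mkdir("b", 0700) || mkdir("c", 0700))
		return -1;
	fd = open("a/file", O_CREAT | O_WRONLY, 0600);
	if (fd < 0)
		return -1;
	close(fd);

	/* premature: 'a' is still a non-empty directory */
	if (rename("b", "a") == 0 || (errno != ENOTEMPTY && errno != EEXIST))
		return -1;

	if (rename("c", "x"))		/* 1: rename 259 from 'c' to 'x'   */
		return -1;
	if (rename("a", "x/y"))		/* 2: rename 257 from 'a' to 'x/y' */
		return -1;
	if (rename("b", "a"))		/* 3: rename 258 from 'b' to 'a'   */
		return -1;
	return 0;
}
```

The send code reaches the same conclusion without trying the rename: it looks up the dentry occupying the target name in the parent root and delays if that inode has a pending move.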
+196 -1
fs/btrfs/tests/inode-tests.c
··· 911 911 return ret; 912 912 } 913 913 914 + static int test_extent_accounting(void) 915 + { 916 + struct inode *inode = NULL; 917 + struct btrfs_root *root = NULL; 918 + int ret = -ENOMEM; 919 + 920 + inode = btrfs_new_test_inode(); 921 + if (!inode) { 922 + test_msg("Couldn't allocate inode\n"); 923 + return ret; 924 + } 925 + 926 + root = btrfs_alloc_dummy_root(); 927 + if (IS_ERR(root)) { 928 + test_msg("Couldn't allocate root\n"); 929 + goto out; 930 + } 931 + 932 + root->fs_info = btrfs_alloc_dummy_fs_info(); 933 + if (!root->fs_info) { 934 + test_msg("Couldn't allocate dummy fs info\n"); 935 + goto out; 936 + } 937 + 938 + BTRFS_I(inode)->root = root; 939 + btrfs_test_inode_set_ops(inode); 940 + 941 + /* [BTRFS_MAX_EXTENT_SIZE] */ 942 + BTRFS_I(inode)->outstanding_extents++; 943 + ret = btrfs_set_extent_delalloc(inode, 0, BTRFS_MAX_EXTENT_SIZE - 1, 944 + NULL); 945 + if (ret) { 946 + test_msg("btrfs_set_extent_delalloc returned %d\n", ret); 947 + goto out; 948 + } 949 + if (BTRFS_I(inode)->outstanding_extents != 1) { 950 + ret = -EINVAL; 951 + test_msg("Miscount, wanted 1, got %u\n", 952 + BTRFS_I(inode)->outstanding_extents); 953 + goto out; 954 + } 955 + 956 + /* [BTRFS_MAX_EXTENT_SIZE][4k] */ 957 + BTRFS_I(inode)->outstanding_extents++; 958 + ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE, 959 + BTRFS_MAX_EXTENT_SIZE + 4095, NULL); 960 + if (ret) { 961 + test_msg("btrfs_set_extent_delalloc returned %d\n", ret); 962 + goto out; 963 + } 964 + if (BTRFS_I(inode)->outstanding_extents != 2) { 965 + ret = -EINVAL; 966 + test_msg("Miscount, wanted 2, got %u\n", 967 + BTRFS_I(inode)->outstanding_extents); 968 + goto out; 969 + } 970 + 971 + /* [BTRFS_MAX_EXTENT_SIZE/2][4K HOLE][the rest] */ 972 + ret = clear_extent_bit(&BTRFS_I(inode)->io_tree, 973 + BTRFS_MAX_EXTENT_SIZE >> 1, 974 + (BTRFS_MAX_EXTENT_SIZE >> 1) + 4095, 975 + EXTENT_DELALLOC | EXTENT_DIRTY | 976 + EXTENT_UPTODATE | EXTENT_DO_ACCOUNTING, 0, 0, 977 + NULL, GFP_NOFS); 978 + if (ret) 
{ 979 + test_msg("clear_extent_bit returned %d\n", ret); 980 + goto out; 981 + } 982 + if (BTRFS_I(inode)->outstanding_extents != 2) { 983 + ret = -EINVAL; 984 + test_msg("Miscount, wanted 2, got %u\n", 985 + BTRFS_I(inode)->outstanding_extents); 986 + goto out; 987 + } 988 + 989 + /* [BTRFS_MAX_EXTENT_SIZE][4K] */ 990 + BTRFS_I(inode)->outstanding_extents++; 991 + ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE >> 1, 992 + (BTRFS_MAX_EXTENT_SIZE >> 1) + 4095, 993 + NULL); 994 + if (ret) { 995 + test_msg("btrfs_set_extent_delalloc returned %d\n", ret); 996 + goto out; 997 + } 998 + if (BTRFS_I(inode)->outstanding_extents != 2) { 999 + ret = -EINVAL; 1000 + test_msg("Miscount, wanted 2, got %u\n", 1001 + BTRFS_I(inode)->outstanding_extents); 1002 + goto out; 1003 + } 1004 + 1005 + /* 1006 + * [BTRFS_MAX_EXTENT_SIZE+4K][4K HOLE][BTRFS_MAX_EXTENT_SIZE+4K] 1007 + * 1008 + * I'm artificially adding 2 to outstanding_extents because in the 1009 + * buffered IO case we'd add things up as we go, but I don't feel like 1010 + * doing that here, this isn't the interesting case we want to test. 
1011 + */ 1012 + BTRFS_I(inode)->outstanding_extents += 2; 1013 + ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE + 8192, 1014 + (BTRFS_MAX_EXTENT_SIZE << 1) + 12287, 1015 + NULL); 1016 + if (ret) { 1017 + test_msg("btrfs_set_extent_delalloc returned %d\n", ret); 1018 + goto out; 1019 + } 1020 + if (BTRFS_I(inode)->outstanding_extents != 4) { 1021 + ret = -EINVAL; 1022 + test_msg("Miscount, wanted 4, got %u\n", 1023 + BTRFS_I(inode)->outstanding_extents); 1024 + goto out; 1025 + } 1026 + 1027 + /* [BTRFS_MAX_EXTENT_SIZE+4k][4k][BTRFS_MAX_EXTENT_SIZE+4k] */ 1028 + BTRFS_I(inode)->outstanding_extents++; 1029 + ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE+4096, 1030 + BTRFS_MAX_EXTENT_SIZE+8191, NULL); 1031 + if (ret) { 1032 + test_msg("btrfs_set_extent_delalloc returned %d\n", ret); 1033 + goto out; 1034 + } 1035 + if (BTRFS_I(inode)->outstanding_extents != 3) { 1036 + ret = -EINVAL; 1037 + test_msg("Miscount, wanted 3, got %u\n", 1038 + BTRFS_I(inode)->outstanding_extents); 1039 + goto out; 1040 + } 1041 + 1042 + /* [BTRFS_MAX_EXTENT_SIZE+4k][4K HOLE][BTRFS_MAX_EXTENT_SIZE+4k] */ 1043 + ret = clear_extent_bit(&BTRFS_I(inode)->io_tree, 1044 + BTRFS_MAX_EXTENT_SIZE+4096, 1045 + BTRFS_MAX_EXTENT_SIZE+8191, 1046 + EXTENT_DIRTY | EXTENT_DELALLOC | 1047 + EXTENT_DO_ACCOUNTING | EXTENT_UPTODATE, 0, 0, 1048 + NULL, GFP_NOFS); 1049 + if (ret) { 1050 + test_msg("clear_extent_bit returned %d\n", ret); 1051 + goto out; 1052 + } 1053 + if (BTRFS_I(inode)->outstanding_extents != 4) { 1054 + ret = -EINVAL; 1055 + test_msg("Miscount, wanted 4, got %u\n", 1056 + BTRFS_I(inode)->outstanding_extents); 1057 + goto out; 1058 + } 1059 + 1060 + /* 1061 + * Refill the hole again just for good measure, because I thought it 1062 + * might fail and I'd rather satisfy my paranoia at this point. 
1063 + */ 1064 + BTRFS_I(inode)->outstanding_extents++; 1065 + ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE+4096, 1066 + BTRFS_MAX_EXTENT_SIZE+8191, NULL); 1067 + if (ret) { 1068 + test_msg("btrfs_set_extent_delalloc returned %d\n", ret); 1069 + goto out; 1070 + } 1071 + if (BTRFS_I(inode)->outstanding_extents != 3) { 1072 + ret = -EINVAL; 1073 + test_msg("Miscount, wanted 3, got %u\n", 1074 + BTRFS_I(inode)->outstanding_extents); 1075 + goto out; 1076 + } 1077 + 1078 + /* Empty */ 1079 + ret = clear_extent_bit(&BTRFS_I(inode)->io_tree, 0, (u64)-1, 1080 + EXTENT_DIRTY | EXTENT_DELALLOC | 1081 + EXTENT_DO_ACCOUNTING | EXTENT_UPTODATE, 0, 0, 1082 + NULL, GFP_NOFS); 1083 + if (ret) { 1084 + test_msg("clear_extent_bit returned %d\n", ret); 1085 + goto out; 1086 + } 1087 + if (BTRFS_I(inode)->outstanding_extents) { 1088 + ret = -EINVAL; 1089 + test_msg("Miscount, wanted 0, got %u\n", 1090 + BTRFS_I(inode)->outstanding_extents); 1091 + goto out; 1092 + } 1093 + ret = 0; 1094 + out: 1095 + if (ret) 1096 + clear_extent_bit(&BTRFS_I(inode)->io_tree, 0, (u64)-1, 1097 + EXTENT_DIRTY | EXTENT_DELALLOC | 1098 + EXTENT_DO_ACCOUNTING | EXTENT_UPTODATE, 0, 0, 1099 + NULL, GFP_NOFS); 1100 + iput(inode); 1101 + btrfs_free_dummy_root(root); 1102 + return ret; 1103 + } 1104 + 914 1105 int btrfs_test_inodes(void) 915 1106 { 916 1107 int ret; ··· 1115 924 if (ret) 1116 925 return ret; 1117 926 test_msg("Running hole first btrfs_get_extent test\n"); 1118 - return test_hole_first(); 927 + ret = test_hole_first(); 928 + if (ret) 929 + return ret; 930 + test_msg("Running outstanding_extents tests\n"); 931 + return test_extent_accounting(); 1119 932 }
+25 -17
fs/btrfs/transaction.c
··· 1023 1023 u64 old_root_bytenr; 1024 1024 u64 old_root_used; 1025 1025 struct btrfs_root *tree_root = root->fs_info->tree_root; 1026 - bool extent_root = (root->objectid == BTRFS_EXTENT_TREE_OBJECTID); 1027 1026 1028 1027 old_root_used = btrfs_root_used(&root->root_item); 1029 - btrfs_write_dirty_block_groups(trans, root); 1030 1028 1031 1029 while (1) { 1032 1030 old_root_bytenr = btrfs_root_bytenr(&root->root_item); 1033 1031 if (old_root_bytenr == root->node->start && 1034 - old_root_used == btrfs_root_used(&root->root_item) && 1035 - (!extent_root || 1036 - list_empty(&trans->transaction->dirty_bgs))) 1032 + old_root_used == btrfs_root_used(&root->root_item)) 1037 1033 break; 1038 1034 1039 1035 btrfs_set_root_node(&root->root_item, root->node); ··· 1040 1044 return ret; 1041 1045 1042 1046 old_root_used = btrfs_root_used(&root->root_item); 1043 - if (extent_root) { 1044 - ret = btrfs_write_dirty_block_groups(trans, root); 1045 - if (ret) 1046 - return ret; 1047 - } 1048 - ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1); 1049 - if (ret) 1050 - return ret; 1051 - ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1); 1052 - if (ret) 1053 - return ret; 1054 1047 } 1055 1048 1056 1049 return 0; ··· 1056 1071 struct btrfs_root *root) 1057 1072 { 1058 1073 struct btrfs_fs_info *fs_info = root->fs_info; 1074 + struct list_head *dirty_bgs = &trans->transaction->dirty_bgs; 1059 1075 struct list_head *next; 1060 1076 struct extent_buffer *eb; 1061 1077 int ret; ··· 1084 1098 if (ret) 1085 1099 return ret; 1086 1100 1101 + ret = btrfs_setup_space_cache(trans, root); 1102 + if (ret) 1103 + return ret; 1104 + 1087 1105 /* run_qgroups might have added some more refs */ 1088 1106 ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1); 1089 1107 if (ret) 1090 1108 return ret; 1091 - 1109 + again: 1092 1110 while (!list_empty(&fs_info->dirty_cowonly_roots)) { 1093 1111 next = fs_info->dirty_cowonly_roots.next; 1094 1112 list_del_init(next); ··· 
1105 1115 ret = update_cowonly_root(trans, root); 1106 1116 if (ret) 1107 1117 return ret; 1118 + ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1); 1119 + if (ret) 1120 + return ret; 1108 1121 } 1122 + 1123 + while (!list_empty(dirty_bgs)) { 1124 + ret = btrfs_write_dirty_block_groups(trans, root); 1125 + if (ret) 1126 + return ret; 1127 + ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1); 1128 + if (ret) 1129 + return ret; 1130 + } 1131 + 1132 + if (!list_empty(&fs_info->dirty_cowonly_roots)) 1133 + goto again; 1109 1134 1110 1135 list_add_tail(&fs_info->extent_root->dirty_list, 1111 1136 &trans->transaction->switch_commits); ··· 1818 1813 ret = btrfs_end_transaction(trans, root); 1819 1814 1820 1815 wait_for_commit(root, cur_trans); 1816 + 1817 + if (unlikely(cur_trans->aborted)) 1818 + ret = cur_trans->aborted; 1821 1819 1822 1820 btrfs_put_transaction(cur_trans); 1823 1821
+1 -1
fs/btrfs/tree-log.c
··· 1012 1012 base = btrfs_item_ptr_offset(leaf, path->slots[0]); 1013 1013 1014 1014 while (cur_offset < item_size) { 1015 - extref = (struct btrfs_inode_extref *)base + cur_offset; 1015 + extref = (struct btrfs_inode_extref *)(base + cur_offset); 1016 1016 1017 1017 victim_name_len = btrfs_inode_extref_name_len(leaf, extref); 1018 1018
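The tree-log one-liner fixes a classic C precedence bug: a cast binds tighter than `+`, so `(struct btrfs_inode_extref *)base + cur_offset` advances by `cur_offset` *structs* (scaled by the struct size), not bytes. A small illustration; the struct here is a stand-in, not the real extref layout:

```c
#include <assert.h>
#include <stdint.h>

struct extref {			/* stand-in, not the on-disk extref layout */
	uint32_t name_len;
};

/* pointer arithmetic after the cast scales by sizeof(struct extref) */
static const char *buggy(const char *base, unsigned long off)
{
	return (const char *)((const struct extref *)base + off);
}

/* the fixed form advances by bytes first, then casts */
static const char *fixed(const char *base, unsigned long off)
{
	return (const char *)(const struct extref *)(base + off);
}

static long delta(unsigned long off)
{
	static char item[256];	/* pretend this is the item data area */

	return buggy(item, off) - fixed(item, off);
}
```

With a 4-byte struct, the unparenthesised form overshoots by `3 * off` bytes, so every extref after the first is read from the wrong offset.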
+6 -2
fs/btrfs/xattr.c
··· 111 111 name, name_len, -1); 112 112 if (!di && (flags & XATTR_REPLACE)) 113 113 ret = -ENODATA; 114 + else if (IS_ERR(di)) 115 + ret = PTR_ERR(di); 114 116 else if (di) 115 117 ret = btrfs_delete_one_dir_name(trans, root, path, di); 116 118 goto out; ··· 129 127 ASSERT(mutex_is_locked(&inode->i_mutex)); 130 128 di = btrfs_lookup_xattr(NULL, root, path, btrfs_ino(inode), 131 129 name, name_len, 0); 132 - if (!di) { 130 + if (!di) 133 131 ret = -ENODATA; 132 + else if (IS_ERR(di)) 133 + ret = PTR_ERR(di); 134 + if (ret) 134 135 goto out; 135 - } 136 136 btrfs_release_path(path); 137 137 di = NULL; 138 138 }
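The xattr fix adds the missing `IS_ERR()` leg: `btrfs_lookup_xattr()` can return NULL (not found), a valid item, or an error pointer, and the old code treated the last case as a valid item. The kernel encodes small negative errnos in the pointer value itself; a minimal userspace rendition of that convention, in the spirit of include/linux/err.h:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* errnos -1..-4095 map into the top page of the address space, where no
 * valid kernel pointer can point, so a pointer doubles as a status */
static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

Hence the three-way handling in the patched code: NULL becomes `-ENODATA`, `IS_ERR(di)` propagates `PTR_ERR(di)`, and only a genuine item reaches `btrfs_delete_one_dir_name()`.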
+5 -1
fs/cifs/cifsencrypt.c
··· 1 1 /* 2 2 * fs/cifs/cifsencrypt.c 3 3 * 4 + * Encryption and hashing operations relating to NTLM, NTLMv2. See MS-NLMP 5 + * for more detailed information 6 + * 4 7 * Copyright (C) International Business Machines Corp., 2005,2013 5 8 * Author(s): Steve French (sfrench@us.ibm.com) 6 9 * ··· 518 515 __func__); 519 516 return rc; 520 517 } 521 - } else if (ses->serverName) { 518 + } else { 519 + /* We use ses->serverName if no domain name available */ 522 520 len = strlen(ses->serverName); 523 521 524 522 server = kmalloc(2 + (len * 2), GFP_KERNEL);
+11 -2
fs/cifs/connect.c
··· 1599 1599 pr_warn("CIFS: username too long\n"); 1600 1600 goto cifs_parse_mount_err; 1601 1601 } 1602 + 1603 + kfree(vol->username); 1602 1604 vol->username = kstrdup(string, GFP_KERNEL); 1603 1605 if (!vol->username) 1604 1606 goto cifs_parse_mount_err; ··· 1702 1700 goto cifs_parse_mount_err; 1703 1701 } 1704 1702 1703 + kfree(vol->domainname); 1705 1704 vol->domainname = kstrdup(string, GFP_KERNEL); 1706 1705 if (!vol->domainname) { 1707 1706 pr_warn("CIFS: no memory for domainname\n"); ··· 1734 1731 } 1735 1732 1736 1733 if (strncasecmp(string, "default", 7) != 0) { 1734 + kfree(vol->iocharset); 1737 1735 vol->iocharset = kstrdup(string, 1738 1736 GFP_KERNEL); 1739 1737 if (!vol->iocharset) { ··· 2917 2913 * calling name ends in null (byte 16) from old smb 2918 2914 * convention. 2919 2915 */ 2920 - if (server->workstation_RFC1001_name && 2921 - server->workstation_RFC1001_name[0] != 0) 2916 + if (server->workstation_RFC1001_name[0] != 0) 2922 2917 rfc1002mangle(ses_init_buf->trailer. 2923 2918 session_req.calling_name, 2924 2919 server->workstation_RFC1001_name, ··· 3695 3692 #endif /* CIFS_WEAK_PW_HASH */ 3696 3693 rc = SMBNTencrypt(tcon->password, ses->server->cryptkey, 3697 3694 bcc_ptr, nls_codepage); 3695 + if (rc) { 3696 + cifs_dbg(FYI, "%s Can't generate NTLM rsp. Error: %d\n", 3697 + __func__, rc); 3698 + cifs_buf_release(smb_buffer); 3699 + return rc; 3700 + } 3698 3701 3699 3702 bcc_ptr += CIFS_AUTH_RESP_SIZE; 3700 3703 if (ses->capabilities & CAP_UNICODE) {
+1
fs/cifs/file.c
··· 1823 1823 cifsFileInfo_put(inv_file); 1824 1824 spin_lock(&cifs_file_list_lock); 1825 1825 ++refind; 1826 + inv_file = NULL; 1826 1827 goto refind_writable; 1827 1828 } 1828 1829 }
+2
fs/cifs/inode.c
··· 771 771 cifs_buf_release(srchinf->ntwrk_buf_start); 772 772 } 773 773 kfree(srchinf); 774 + if (rc) 775 + goto cgii_exit; 774 776 } else 775 777 goto cgii_exit; 776 778
+1 -1
fs/cifs/smb2misc.c
··· 322 322 323 323 /* return pointer to beginning of data area, ie offset from SMB start */ 324 324 if ((*off != 0) && (*len != 0)) 325 - return hdr->ProtocolId + *off; 325 + return (char *)(&hdr->ProtocolId[0]) + *off; 326 326 else 327 327 return NULL; 328 328 }
+2 -1
fs/cifs/smb2ops.c
··· 684 684 685 685 /* No need to change MaxChunks since already set to 1 */ 686 686 chunk_sizes_updated = true; 687 - } 687 + } else 688 + goto cchunk_out; 688 689 } 689 690 690 691 cchunk_out:
+10 -7
fs/cifs/smb2pdu.c
··· 1218 1218 struct smb2_ioctl_req *req; 1219 1219 struct smb2_ioctl_rsp *rsp; 1220 1220 struct TCP_Server_Info *server; 1221 - struct cifs_ses *ses = tcon->ses; 1221 + struct cifs_ses *ses; 1222 1222 struct kvec iov[2]; 1223 1223 int resp_buftype; 1224 1224 int num_iovecs; ··· 1232 1232 /* zero out returned data len, in case of error */ 1233 1233 if (plen) 1234 1234 *plen = 0; 1235 + 1236 + if (tcon) 1237 + ses = tcon->ses; 1238 + else 1239 + return -EIO; 1235 1240 1236 1241 if (ses && (ses->server)) 1237 1242 server = ses->server; ··· 1301 1296 rsp = (struct smb2_ioctl_rsp *)iov[0].iov_base; 1302 1297 1303 1298 if ((rc != 0) && (rc != -EINVAL)) { 1304 - if (tcon) 1305 - cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1299 + cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1306 1300 goto ioctl_exit; 1307 1301 } else if (rc == -EINVAL) { 1308 1302 if ((opcode != FSCTL_SRV_COPYCHUNK_WRITE) && 1309 1303 (opcode != FSCTL_SRV_COPYCHUNK)) { 1310 - if (tcon) 1311 - cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1304 + cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1312 1305 goto ioctl_exit; 1313 1306 } 1314 1307 } ··· 1632 1629 1633 1630 rc = SendReceive2(xid, ses, iov, 1, &resp_buftype, 0); 1634 1631 1635 - if ((rc != 0) && tcon) 1632 + if (rc != 0) 1636 1633 cifs_stats_fail_inc(tcon, SMB2_FLUSH_HE); 1637 1634 1638 1635 free_rsp_buf(resp_buftype, iov[0].iov_base); ··· 2117 2114 struct kvec iov[2]; 2118 2115 int rc = 0; 2119 2116 int len; 2120 - int resp_buftype; 2117 + int resp_buftype = CIFS_NO_BUFFER; 2121 2118 unsigned char *bufptr; 2122 2119 struct TCP_Server_Info *server; 2123 2120 struct cifs_ses *ses = tcon->ses;
+2 -2
fs/ecryptfs/ecryptfs_kernel.h
··· 124 124 } 125 125 126 126 #define ECRYPTFS_MAX_KEYSET_SIZE 1024 127 - #define ECRYPTFS_MAX_CIPHER_NAME_SIZE 32 127 + #define ECRYPTFS_MAX_CIPHER_NAME_SIZE 31 128 128 #define ECRYPTFS_MAX_NUM_ENC_KEYS 64 129 129 #define ECRYPTFS_MAX_IV_BYTES 16 /* 128 bits */ 130 130 #define ECRYPTFS_SALT_BYTES 2 ··· 237 237 struct crypto_ablkcipher *tfm; 238 238 struct crypto_hash *hash_tfm; /* Crypto context for generating 239 239 * the initialization vectors */ 240 - unsigned char cipher[ECRYPTFS_MAX_CIPHER_NAME_SIZE]; 240 + unsigned char cipher[ECRYPTFS_MAX_CIPHER_NAME_SIZE + 1]; 241 241 unsigned char key[ECRYPTFS_MAX_KEY_BYTES]; 242 242 unsigned char root_iv[ECRYPTFS_MAX_IV_BYTES]; 243 243 struct list_head keysig_list;
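The ecryptfs change redefines `ECRYPTFS_MAX_CIPHER_NAME_SIZE` as the maximum name *length* (31) and sizes the buffers as `SIZE + 1`, so a maximal-length name still has room for its NUL terminator; previously a 32-character name filled the buffer with no terminator. The invariant, sketched:

```c
#include <assert.h>
#include <string.h>

#define ECRYPTFS_MAX_CIPHER_NAME_SIZE 31

/* a buffer of SIZE + 1 bytes holds any valid cipher name plus its NUL */
static int cipher_name_fits(const char *name)
{
	char buf[ECRYPTFS_MAX_CIPHER_NAME_SIZE + 1];

	if (strlen(name) > ECRYPTFS_MAX_CIPHER_NAME_SIZE)
		return 0;
	strcpy(buf, name);	/* safe: length checked above */
	return buf[strlen(name)] == '\0';
}
```

This is also why the `BUG_ON` in main.c relaxes from `>=` to `>`: a default cipher name of exactly 31 characters is now legal.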
+30 -4
fs/ecryptfs/file.c
··· 303 303 struct file *lower_file = ecryptfs_file_to_lower(file); 304 304 long rc = -ENOTTY; 305 305 306 - if (lower_file->f_op->unlocked_ioctl) 306 + if (!lower_file->f_op->unlocked_ioctl) 307 + return rc; 308 + 309 + switch (cmd) { 310 + case FITRIM: 311 + case FS_IOC_GETFLAGS: 312 + case FS_IOC_SETFLAGS: 313 + case FS_IOC_GETVERSION: 314 + case FS_IOC_SETVERSION: 307 315 rc = lower_file->f_op->unlocked_ioctl(lower_file, cmd, arg); 308 - return rc; 316 + fsstack_copy_attr_all(file_inode(file), file_inode(lower_file)); 317 + 318 + return rc; 319 + default: 320 + return rc; 321 + } 309 322 } 310 323 311 324 #ifdef CONFIG_COMPAT ··· 328 315 struct file *lower_file = ecryptfs_file_to_lower(file); 329 316 long rc = -ENOIOCTLCMD; 330 317 331 - if (lower_file->f_op->compat_ioctl) 318 + if (!lower_file->f_op->compat_ioctl) 319 + return rc; 320 + 321 + switch (cmd) { 322 + case FITRIM: 323 + case FS_IOC32_GETFLAGS: 324 + case FS_IOC32_SETFLAGS: 325 + case FS_IOC32_GETVERSION: 326 + case FS_IOC32_SETVERSION: 332 327 rc = lower_file->f_op->compat_ioctl(lower_file, cmd, arg); 333 - return rc; 328 + fsstack_copy_attr_all(file_inode(file), file_inode(lower_file)); 329 + 330 + return rc; 331 + default: 332 + return rc; 333 + } 334 334 } 335 335 #endif 336 336
+1 -1
fs/ecryptfs/keystore.c
··· 891 891 struct blkcipher_desc desc; 892 892 char fnek_sig_hex[ECRYPTFS_SIG_SIZE_HEX + 1]; 893 893 char iv[ECRYPTFS_MAX_IV_BYTES]; 894 - char cipher_string[ECRYPTFS_MAX_CIPHER_NAME_SIZE]; 894 + char cipher_string[ECRYPTFS_MAX_CIPHER_NAME_SIZE + 1]; 895 895 }; 896 896 897 897 /**
+1 -1
fs/ecryptfs/main.c
··· 407 407 if (!cipher_name_set) { 408 408 int cipher_name_len = strlen(ECRYPTFS_DEFAULT_CIPHER); 409 409 410 - BUG_ON(cipher_name_len >= ECRYPTFS_MAX_CIPHER_NAME_SIZE); 410 + BUG_ON(cipher_name_len > ECRYPTFS_MAX_CIPHER_NAME_SIZE); 411 411 strcpy(mount_crypt_stat->global_default_cipher_name, 412 412 ECRYPTFS_DEFAULT_CIPHER); 413 413 }
+83 -10
fs/fs-writeback.c
··· 53 53 struct completion *done; /* set if the caller waits */ 54 54 }; 55 55 56 + /* 57 + * If an inode is constantly having its pages dirtied, but then the 58 + * updates stop dirtytime_expire_interval seconds in the past, it's 59 + * possible for the worst case time between when an inode has its 60 + * timestamps updated and when they finally get written out to be two 61 + * dirtytime_expire_intervals. We set the default to 12 hours (in 62 + * seconds), which means most of the time inodes will have their 63 + * timestamps written to disk after 12 hours, but in the worst case a 64 + * few inodes might not their timestamps updated for 24 hours. 65 + */ 66 + unsigned int dirtytime_expire_interval = 12 * 60 * 60; 67 + 56 68 /** 57 69 * writeback_in_progress - determine whether there is writeback in progress 58 70 * @bdi: the device's backing_dev_info structure. ··· 287 275 288 276 if ((flags & EXPIRE_DIRTY_ATIME) == 0) 289 277 older_than_this = work->older_than_this; 290 - else if ((work->reason == WB_REASON_SYNC) == 0) { 291 - expire_time = jiffies - (HZ * 86400); 278 + else if (!work->for_sync) { 279 + expire_time = jiffies - (dirtytime_expire_interval * HZ); 292 280 older_than_this = &expire_time; 293 281 } 294 282 while (!list_empty(delaying_queue)) { ··· 470 458 */ 471 459 redirty_tail(inode, wb); 472 460 } else if (inode->i_state & I_DIRTY_TIME) { 461 + inode->dirtied_when = jiffies; 473 462 list_move(&inode->i_wb_list, &wb->b_dirty_time); 474 463 } else { 475 464 /* The inode is clean. Remove from writeback lists. 
*/ ··· 518 505 spin_lock(&inode->i_lock); 519 506 520 507 dirty = inode->i_state & I_DIRTY; 521 - if (((dirty & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) && 522 - (inode->i_state & I_DIRTY_TIME)) || 523 - (inode->i_state & I_DIRTY_TIME_EXPIRED)) { 524 - dirty |= I_DIRTY_TIME | I_DIRTY_TIME_EXPIRED; 525 - trace_writeback_lazytime(inode); 526 - } 508 + if (inode->i_state & I_DIRTY_TIME) { 509 + if ((dirty & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) || 510 + unlikely(inode->i_state & I_DIRTY_TIME_EXPIRED) || 511 + unlikely(time_after(jiffies, 512 + (inode->dirtied_time_when + 513 + dirtytime_expire_interval * HZ)))) { 514 + dirty |= I_DIRTY_TIME | I_DIRTY_TIME_EXPIRED; 515 + trace_writeback_lazytime(inode); 516 + } 517 + } else 518 + inode->i_state &= ~I_DIRTY_TIME_EXPIRED; 527 519 inode->i_state &= ~dirty; 528 520 529 521 /* ··· 1149 1131 rcu_read_unlock(); 1150 1132 } 1151 1133 1134 + /* 1135 + * Wake up bdi's periodically to make sure dirtytime inodes gets 1136 + * written back periodically. We deliberately do *not* check the 1137 + * b_dirtytime list in wb_has_dirty_io(), since this would cause the 1138 + * kernel to be constantly waking up once there are any dirtytime 1139 + * inodes on the system. So instead we define a separate delayed work 1140 + * function which gets called much more rarely. (By default, only 1141 + * once every 12 hours.) 1142 + * 1143 + * If there is any other write activity going on in the file system, 1144 + * this function won't be necessary. But if the only thing that has 1145 + * happened on the file system is a dirtytime inode caused by an atime 1146 + * update, we need this infrastructure below to make sure that inode 1147 + * eventually gets pushed out to disk. 
1148 + */ 1149 + static void wakeup_dirtytime_writeback(struct work_struct *w); 1150 + static DECLARE_DELAYED_WORK(dirtytime_work, wakeup_dirtytime_writeback); 1151 + 1152 + static void wakeup_dirtytime_writeback(struct work_struct *w) 1153 + { 1154 + struct backing_dev_info *bdi; 1155 + 1156 + rcu_read_lock(); 1157 + list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) { 1158 + if (list_empty(&bdi->wb.b_dirty_time)) 1159 + continue; 1160 + bdi_wakeup_thread(bdi); 1161 + } 1162 + rcu_read_unlock(); 1163 + schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ); 1164 + } 1165 + 1166 + static int __init start_dirtytime_writeback(void) 1167 + { 1168 + schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ); 1169 + return 0; 1170 + } 1171 + __initcall(start_dirtytime_writeback); 1172 + 1173 + int dirtytime_interval_handler(struct ctl_table *table, int write, 1174 + void __user *buffer, size_t *lenp, loff_t *ppos) 1175 + { 1176 + int ret; 1177 + 1178 + ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos); 1179 + if (ret == 0 && write) 1180 + mod_delayed_work(system_wq, &dirtytime_work, 0); 1181 + return ret; 1182 + } 1183 + 1152 1184 static noinline void block_dump___mark_inode_dirty(struct inode *inode) 1153 1185 { 1154 1186 if (inode->i_ino || strcmp(inode->i_sb->s_id, "bdev")) { ··· 1337 1269 } 1338 1270 1339 1271 inode->dirtied_when = jiffies; 1340 - list_move(&inode->i_wb_list, dirtytime ? 1341 - &bdi->wb.b_dirty_time : &bdi->wb.b_dirty); 1272 + if (dirtytime) 1273 + inode->dirtied_time_when = jiffies; 1274 + if (inode->i_state & (I_DIRTY_INODE | I_DIRTY_PAGES)) 1275 + list_move(&inode->i_wb_list, &bdi->wb.b_dirty); 1276 + else 1277 + list_move(&inode->i_wb_list, 1278 + &bdi->wb.b_dirty_time); 1342 1279 spin_unlock(&bdi->wb.list_lock); 1343 1280 trace_writeback_dirty_inode_enqueue(inode); 1344 1281
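The comment above reasons that with a sweep every `dirtytime_expire_interval` seconds, an inode whose timestamps are dirtied just after one sweep is only queued by the next sweep and written by the one after, so the worst-case delay is two intervals. That arithmetic stated directly, as a trivial sketch rather than kernel code:

```c
#include <assert.h>

static const unsigned int dirtytime_expire_interval = 12 * 60 * 60;

/* dirtied just after sweep N, queued at sweep N+1, written out by N+2 */
static unsigned int worst_case_writeback_delay(unsigned int interval)
{
	return 2 * interval;
}
```

With the 12-hour default, timestamps usually hit disk within 12 hours and never later than 24.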
+17 -2
fs/fuse/dev.c
··· 890 890 891 891 newpage = buf->page; 892 892 893 - if (WARN_ON(!PageUptodate(newpage))) 894 - return -EIO; 893 + if (!PageUptodate(newpage)) 894 + SetPageUptodate(newpage); 895 895 896 896 ClearPageMappedToDisk(newpage); 897 897 ··· 1353 1353 return err; 1354 1354 } 1355 1355 1356 + static int fuse_dev_open(struct inode *inode, struct file *file) 1357 + { 1358 + /* 1359 + * The fuse device's file's private_data is used to hold 1360 + * the fuse_conn(ection) when it is mounted, and is used to 1361 + * keep track of whether the file has been mounted already. 1362 + */ 1363 + file->private_data = NULL; 1364 + return 0; 1365 + } 1366 + 1356 1367 static ssize_t fuse_dev_read(struct kiocb *iocb, const struct iovec *iov, 1357 1368 unsigned long nr_segs, loff_t pos) 1358 1369 { ··· 1808 1797 static int fuse_notify(struct fuse_conn *fc, enum fuse_notify_code code, 1809 1798 unsigned int size, struct fuse_copy_state *cs) 1810 1799 { 1800 + /* Don't try to move pages (yet) */ 1801 + cs->move_pages = 0; 1802 + 1811 1803 switch (code) { 1812 1804 case FUSE_NOTIFY_POLL: 1813 1805 return fuse_notify_poll(fc, size, cs); ··· 2231 2217 2232 2218 const struct file_operations fuse_dev_operations = { 2233 2219 .owner = THIS_MODULE, 2220 + .open = fuse_dev_open, 2234 2221 .llseek = no_llseek, 2235 2222 .read = do_sync_read, 2236 2223 .aio_read = fuse_dev_read,
+11 -9
fs/hfsplus/brec.c
··· 131 131 hfs_bnode_write(node, entry, data_off + key_len, entry_len); 132 132 hfs_bnode_dump(node); 133 133 134 - if (new_node) { 135 - /* update parent key if we inserted a key 136 - * at the start of the first node 137 - */ 138 - if (!rec && new_node != node) 139 - hfs_brec_update_parent(fd); 134 + /* 135 + * update parent key if we inserted a key 136 + * at the start of the node and it is not the new node 137 + */ 138 + if (!rec && new_node != node) { 139 + hfs_bnode_read_key(node, fd->search_key, data_off + size); 140 + hfs_brec_update_parent(fd); 141 + } 140 142 143 + if (new_node) { 141 144 hfs_bnode_put(fd->bnode); 142 145 if (!new_node->parent) { 143 146 hfs_btree_inc_height(tree); ··· 170 167 } 171 168 goto again; 172 169 } 173 - 174 - if (!rec) 175 - hfs_brec_update_parent(fd); 176 170 177 171 return 0; 178 172 } ··· 370 370 if (IS_ERR(parent)) 371 371 return PTR_ERR(parent); 372 372 __hfs_brec_find(parent, fd, hfs_find_rec_by_key); 373 + if (fd->record < 0) 374 + return -ENOENT; 373 375 hfs_bnode_dump(parent); 374 376 rec = fd->record; 375 377
+1
fs/kernfs/file.c
··· 207 207 goto out_free; 208 208 } 209 209 210 + of->event = atomic_read(&of->kn->attr.open->event); 210 211 ops = kernfs_ops(of->kn); 211 212 if (ops->read) 212 213 len = ops->read(of, buf, len, *ppos);
+5 -5
fs/locks.c
··· 1388 1388 int __break_lease(struct inode *inode, unsigned int mode, unsigned int type) 1389 1389 { 1390 1390 int error = 0; 1391 - struct file_lock *new_fl; 1392 1391 struct file_lock_context *ctx = inode->i_flctx; 1393 - struct file_lock *fl; 1392 + struct file_lock *new_fl, *fl, *tmp; 1394 1393 unsigned long break_time; 1395 1394 int want_write = (mode & O_ACCMODE) != O_RDONLY; 1396 1395 LIST_HEAD(dispose); ··· 1419 1420 break_time++; /* so that 0 means no break time */ 1420 1421 } 1421 1422 1422 - list_for_each_entry(fl, &ctx->flc_lease, fl_list) { 1423 + list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) { 1423 1424 if (!leases_conflict(fl, new_fl)) 1424 1425 continue; 1425 1426 if (want_write) { ··· 1664 1665 } 1665 1666 1666 1667 if (my_fl != NULL) { 1667 - error = lease->fl_lmops->lm_change(my_fl, arg, &dispose); 1668 + lease = my_fl; 1669 + error = lease->fl_lmops->lm_change(lease, arg, &dispose); 1668 1670 if (error) 1669 1671 goto out; 1670 1672 goto out_setup; ··· 1727 1727 break; 1728 1728 } 1729 1729 } 1730 - trace_generic_delete_lease(inode, fl); 1730 + trace_generic_delete_lease(inode, victim); 1731 1731 if (victim) 1732 1732 error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose); 1733 1733 spin_unlock(&ctx->flc_lock);
+1 -1
fs/nfs/client.c
··· 433 433 434 434 static bool nfs_client_init_is_complete(const struct nfs_client *clp) 435 435 { 436 - return clp->cl_cons_state != NFS_CS_INITING; 436 + return clp->cl_cons_state <= NFS_CS_READY; 437 437 } 438 438 439 439 int nfs_wait_client_init_complete(const struct nfs_client *clp)
+34 -11
fs/nfs/delegation.c
··· 181 181 clear_bit(NFS_DELEGATION_NEED_RECLAIM, 182 182 &delegation->flags); 183 183 spin_unlock(&delegation->lock); 184 - put_rpccred(oldcred); 185 184 rcu_read_unlock(); 185 + put_rpccred(oldcred); 186 186 trace_nfs4_reclaim_delegation(inode, res->delegation_type); 187 187 } else { 188 188 /* We appear to have raced with a delegation return. */ ··· 370 370 delegation = NULL; 371 371 goto out; 372 372 } 373 - freeme = nfs_detach_delegation_locked(nfsi, 373 + if (test_and_set_bit(NFS_DELEGATION_RETURNING, 374 + &old_delegation->flags)) 375 + goto out; 376 + freeme = nfs_detach_delegation_locked(nfsi, 374 377 old_delegation, clp); 375 378 if (freeme == NULL) 376 379 goto out; ··· 436 433 { 437 434 bool ret = false; 438 435 436 + if (test_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) 437 + goto out; 439 438 if (test_and_clear_bit(NFS_DELEGATION_RETURN, &delegation->flags)) 440 439 ret = true; 441 440 if (test_and_clear_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags) && !ret) { ··· 449 444 ret = true; 450 445 spin_unlock(&delegation->lock); 451 446 } 447 + out: 452 448 return ret; 453 449 } ··· 477 471 super_list) { 478 472 if (!nfs_delegation_need_return(delegation)) 479 473 continue; 480 - inode = nfs_delegation_grab_inode(delegation); 481 - if (inode == NULL) 474 + if (!nfs_sb_active(server->super)) 482 475 continue; 476 + inode = nfs_delegation_grab_inode(delegation); 477 + if (inode == NULL) { 478 + rcu_read_unlock(); 479 + nfs_sb_deactive(server->super); 480 + goto restart; 481 + } 483 482 delegation = nfs_start_delegation_return_locked(NFS_I(inode)); 484 483 rcu_read_unlock(); 485 484 486 485 err = nfs_end_delegation_return(inode, delegation, 0); 487 486 iput(inode); 487 + nfs_sb_deactive(server->super); 488 488 if (!err) 489 489 goto restart; 490 490 set_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state); ··· 821 809 list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link) { 822 810 list_for_each_entry_rcu(delegation, &server->delegations, 823 811 super_list) { 812 + if (test_bit(NFS_DELEGATION_RETURNING, 813 + &delegation->flags)) 814 + continue; 824 815 if (test_bit(NFS_DELEGATION_NEED_RECLAIM, 825 816 &delegation->flags) == 0) 826 817 continue; 827 - inode = nfs_delegation_grab_inode(delegation); 828 - if (inode == NULL) 818 + if (!nfs_sb_active(server->super)) 829 819 continue; 830 - delegation = nfs_detach_delegation(NFS_I(inode), 831 - delegation, server); 820 + inode = nfs_delegation_grab_inode(delegation); 821 + if (inode == NULL) { 822 + rcu_read_unlock(); 823 + nfs_sb_deactive(server->super); 824 + goto restart; 825 + } 826 + delegation = nfs_start_delegation_return_locked(NFS_I(inode)); 832 827 rcu_read_unlock(); 833 - 834 - if (delegation != NULL) 835 - nfs_free_delegation(delegation); 828 + if (delegation != NULL) { 829 + delegation = nfs_detach_delegation(NFS_I(inode), 830 + delegation, server); 831 + if (delegation != NULL) 832 + nfs_free_delegation(delegation); 833 + } 836 834 iput(inode); 835 + nfs_sb_deactive(server->super); 837 836 goto restart; 838 837 } 839 838 }
+19 -3
fs/nfs/dir.c
··· 408 408 return 0; 409 409 } 410 410 411 + /* Match file and dirent using either filehandle or fileid 412 + * Note: caller is responsible for checking the fsid 413 + */ 411 414 static 412 415 int nfs_same_file(struct dentry *dentry, struct nfs_entry *entry) 413 416 { 417 + struct nfs_inode *nfsi; 418 + 414 419 if (dentry->d_inode == NULL) 415 420 goto different; 416 - if (nfs_compare_fh(entry->fh, NFS_FH(dentry->d_inode)) != 0) 417 - goto different; 418 - return 1; 421 + 422 + nfsi = NFS_I(dentry->d_inode); 423 + if (entry->fattr->fileid == nfsi->fileid) 424 + return 1; 425 + if (nfs_compare_fh(entry->fh, &nfsi->fh) == 0) 426 + return 1; 419 427 different: 420 428 return 0; 421 429 } ··· 477 469 struct inode *inode; 478 470 int status; 479 471 472 + if (!(entry->fattr->valid & NFS_ATTR_FATTR_FILEID)) 473 + return; 474 + if (!(entry->fattr->valid & NFS_ATTR_FATTR_FSID)) 475 + return; 480 476 if (filename.name[0] == '.') { 481 477 if (filename.len == 1) 482 478 return; ··· 491 479 492 480 dentry = d_lookup(parent, &filename); 493 481 if (dentry != NULL) { 482 + /* Is there a mountpoint here? If so, just exit */ 483 + if (!nfs_fsid_equal(&NFS_SB(dentry->d_sb)->fsid, 484 + &entry->fattr->fsid)) 485 + goto out; 494 486 if (nfs_same_file(dentry, entry)) { 495 487 nfs_set_verifier(dentry, nfs_save_change_attribute(dir)); 496 488 status = nfs_refresh_inode(dentry->d_inode, entry->fattr);
+9 -2
fs/nfs/file.c
··· 178 178 iocb->ki_filp, 179 179 iov_iter_count(to), (unsigned long) iocb->ki_pos); 180 180 181 - result = nfs_revalidate_mapping(inode, iocb->ki_filp->f_mapping); 181 + result = nfs_revalidate_mapping_protected(inode, iocb->ki_filp->f_mapping); 182 182 if (!result) { 183 183 result = generic_file_read_iter(iocb, to); 184 184 if (result > 0) ··· 199 199 dprintk("NFS: splice_read(%pD2, %lu@%Lu)\n", 200 200 filp, (unsigned long) count, (unsigned long long) *ppos); 201 201 202 - res = nfs_revalidate_mapping(inode, filp->f_mapping); 202 + res = nfs_revalidate_mapping_protected(inode, filp->f_mapping); 203 203 if (!res) { 204 204 res = generic_file_splice_read(filp, ppos, pipe, count, flags); 205 205 if (res > 0) ··· 372 372 nfs_wait_bit_killable, TASK_KILLABLE); 373 373 if (ret) 374 374 return ret; 375 + /* 376 + * Wait for O_DIRECT to complete 377 + */ 378 + nfs_inode_dio_wait(mapping->host); 375 379 376 380 page = grab_cache_page_write_begin(mapping, index, flags); 377 381 if (!page) ··· 622 618 623 619 /* make sure the cache has finished storing the page */ 624 620 nfs_fscache_wait_on_page_write(NFS_I(inode), page); 621 + 622 + wait_on_bit_action(&NFS_I(inode)->flags, NFS_INO_INVALIDATING, 623 + nfs_wait_bit_killable, TASK_KILLABLE); 625 624 626 625 lock_page(page); 627 626 mapping = page_file_mapping(page);
+92 -19
fs/nfs/inode.c
··· 556 556 * This is a copy of the common vmtruncate, but with the locking 557 557 * corrected to take into account the fact that NFS requires 558 558 * inode->i_size to be updated under the inode->i_lock. 559 + * Note: must be called with inode->i_lock held! 559 560 */ 560 561 static int nfs_vmtruncate(struct inode * inode, loff_t offset) 561 562 { ··· 566 565 if (err) 567 566 goto out; 568 567 569 - spin_lock(&inode->i_lock); 570 568 i_size_write(inode, offset); 571 569 /* Optimisation */ 572 570 if (offset == 0) 573 571 NFS_I(inode)->cache_validity &= ~NFS_INO_INVALID_DATA; 574 - spin_unlock(&inode->i_lock); 575 572 573 + spin_unlock(&inode->i_lock); 576 574 truncate_pagecache(inode, offset); 575 + spin_lock(&inode->i_lock); 577 576 out: 578 577 return err; 579 578 } ··· 586 585 * Note: we do this in the *proc.c in order to ensure that 587 586 * it works for things like exclusive creates too. 588 587 */ 589 - void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr) 588 + void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr, 589 + struct nfs_fattr *fattr) 590 590 { 591 + /* Barrier: bump the attribute generation count. */ 592 + nfs_fattr_set_barrier(fattr); 593 + 594 + spin_lock(&inode->i_lock); 595 + NFS_I(inode)->attr_gencount = fattr->gencount; 591 596 if ((attr->ia_valid & (ATTR_MODE|ATTR_UID|ATTR_GID)) != 0) { 592 - spin_lock(&inode->i_lock); 593 597 if ((attr->ia_valid & ATTR_MODE) != 0) { 594 598 int mode = attr->ia_mode & S_IALLUGO; 595 599 mode |= inode->i_mode & ~S_IALLUGO; ··· 606 600 inode->i_gid = attr->ia_gid; 607 601 nfs_set_cache_invalid(inode, NFS_INO_INVALID_ACCESS 608 602 | NFS_INO_INVALID_ACL); 609 - spin_unlock(&inode->i_lock); 610 603 } 611 604 if ((attr->ia_valid & ATTR_SIZE) != 0) { 612 605 nfs_inc_stats(inode, NFSIOS_SETATTRTRUNC); 613 606 nfs_vmtruncate(inode, attr->ia_size); 614 607 } 608 + nfs_update_inode(inode, fattr); 609 + spin_unlock(&inode->i_lock); 615 610 } 616 611 EXPORT_SYMBOL_GPL(nfs_setattr_update_inode); 617 612 ··· 1035 1028 1036 1029 if (mapping->nrpages != 0) { 1037 1030 if (S_ISREG(inode->i_mode)) { 1031 + unmap_mapping_range(mapping, 0, 0, 0); 1038 1032 ret = nfs_sync_mapping(mapping); 1039 1033 if (ret < 0) 1040 1034 return ret; ··· 1068 1060 } 1069 1061 1070 1062 /** 1071 - * nfs_revalidate_mapping - Revalidate the pagecache 1063 + * __nfs_revalidate_mapping - Revalidate the pagecache 1072 1064 * @inode - pointer to host inode 1073 1065 * @mapping - pointer to mapping 1066 + * @may_lock - take inode->i_mutex? 1074 1067 */ 1075 - int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping) 1068 + static int __nfs_revalidate_mapping(struct inode *inode, 1069 + struct address_space *mapping, 1070 + bool may_lock) 1076 1071 { 1077 1072 struct nfs_inode *nfsi = NFS_I(inode); 1078 1073 unsigned long *bitlock = &nfsi->flags; ··· 1124 1113 nfsi->cache_validity &= ~NFS_INO_INVALID_DATA; 1125 1114 spin_unlock(&inode->i_lock); 1126 1115 trace_nfs_invalidate_mapping_enter(inode); 1127 - ret = nfs_invalidate_mapping(inode, mapping); 1116 + if (may_lock) { 1117 + mutex_lock(&inode->i_mutex); 1118 + ret = nfs_invalidate_mapping(inode, mapping); 1119 + mutex_unlock(&inode->i_mutex); 1120 + } else 1121 + ret = nfs_invalidate_mapping(inode, mapping); 1128 1122 trace_nfs_invalidate_mapping_exit(inode, ret); 1129 1123 1130 1124 clear_bit_unlock(NFS_INO_INVALIDATING, bitlock); ··· 1137 1121 wake_up_bit(bitlock, NFS_INO_INVALIDATING); 1138 1122 out: 1139 1123 return ret; 1124 + } 1125 + 1126 + /** 1127 + * nfs_revalidate_mapping - Revalidate the pagecache 1128 + * @inode - pointer to host inode 1129 + * @mapping - pointer to mapping 1130 + */ 1131 + int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping) 1132 + { 1133 + return __nfs_revalidate_mapping(inode, mapping, false); 1134 + } 1135 + 1136 + /** 1137 + * nfs_revalidate_mapping_protected - Revalidate the pagecache 1138 + * @inode - pointer to host inode 1139 + * @mapping - pointer to mapping 1140 + * 1141 + * Differs from nfs_revalidate_mapping() in that it grabs the inode->i_mutex 1142 + * while invalidating the mapping. 1143 + */ 1144 + int nfs_revalidate_mapping_protected(struct inode *inode, struct address_space *mapping) 1145 + { 1146 + return __nfs_revalidate_mapping(inode, mapping, true); 1140 1147 } 1141 1148 1142 1149 static unsigned long nfs_wcc_update_inode(struct inode *inode, struct nfs_fattr *fattr) ··· 1270 1231 return timespec_compare(&fattr->ctime, &inode->i_ctime) > 0; 1271 1232 } 1272 1233 1273 - static int nfs_size_need_update(const struct inode *inode, const struct nfs_fattr *fattr) 1274 - { 1275 - if (!(fattr->valid & NFS_ATTR_FATTR_SIZE)) 1276 - return 0; 1277 - return nfs_size_to_loff_t(fattr->size) > i_size_read(inode); 1278 - } 1279 - 1280 1234 static atomic_long_t nfs_attr_generation_counter; 1281 1235 1282 1236 static unsigned long nfs_read_attr_generation_counter(void) ··· 1281 1249 { 1282 1250 return atomic_long_inc_return(&nfs_attr_generation_counter); 1283 1251 } 1252 + EXPORT_SYMBOL_GPL(nfs_inc_attr_generation_counter); 1284 1253 1285 1254 void nfs_fattr_init(struct nfs_fattr *fattr) 1286 1255 { ··· 1292 1259 fattr->group_name = NULL; 1293 1260 } 1294 1261 EXPORT_SYMBOL_GPL(nfs_fattr_init); 1262 + 1263 + /** 1264 + * nfs_fattr_set_barrier 1265 + * @fattr: attributes 1266 + * 1267 + * Used to set a barrier after an attribute was updated. This 1268 + * barrier ensures that older attributes from RPC calls that may 1269 + * have raced with our update cannot clobber these new values. 1270 + * Note that you are still responsible for ensuring that other 1271 + * operations which change the attribute on the server do not 1272 + * collide. 1273 + */ 1274 + void nfs_fattr_set_barrier(struct nfs_fattr *fattr) 1275 + { 1276 + fattr->gencount = nfs_inc_attr_generation_counter(); 1277 + } 1295 1278 1296 1279 struct nfs_fattr *nfs_alloc_fattr(void) 1297 1280 { ··· 1419 1370 1420 1371 return ((long)fattr->gencount - (long)nfsi->attr_gencount) > 0 || 1421 1372 nfs_ctime_need_update(inode, fattr) || 1422 - nfs_size_need_update(inode, fattr) || 1423 1373 ((long)nfsi->attr_gencount - (long)nfs_read_attr_generation_counter() > 0); 1424 1374 } ··· 1508 1460 int status; 1509 1461 1510 1462 spin_lock(&inode->i_lock); 1463 + nfs_fattr_set_barrier(fattr); 1511 1464 status = nfs_post_op_update_inode_locked(inode, fattr); 1512 1465 spin_unlock(&inode->i_lock); 1513 1466 ··· 1517 1468 EXPORT_SYMBOL_GPL(nfs_post_op_update_inode); 1518 1469 1519 1470 /** 1520 - * nfs_post_op_update_inode_force_wcc - try to update the inode attribute cache 1471 + * nfs_post_op_update_inode_force_wcc_locked - update the inode attribute cache 1521 1472 * @inode - pointer to inode 1522 1473 * @fattr - updated attributes 1523 1474 * ··· 1527 1478 * 1528 1479 * This function is mainly designed to be used by the ->write_done() functions. 1529 1480 */ 1530 - int nfs_post_op_update_inode_force_wcc(struct inode *inode, struct nfs_fattr *fattr) 1481 + int nfs_post_op_update_inode_force_wcc_locked(struct inode *inode, struct nfs_fattr *fattr) 1531 1482 { 1532 1483 int status; 1533 1484 1534 - spin_lock(&inode->i_lock); 1535 1485 /* Don't do a WCC update if these attributes are already stale */ 1536 1486 if ((fattr->valid & NFS_ATTR_FATTR) == 0 || 1537 1487 !nfs_inode_attrs_need_update(inode, fattr)) { ··· 1562 1514 } 1563 1515 out_noforce: 1564 1516 status = nfs_post_op_update_inode_locked(inode, fattr); 1517 + return status; 1518 + } 1519 + 1520 + /** 1521 + * nfs_post_op_update_inode_force_wcc - try to update the inode attribute cache 1522 + * @inode - pointer to inode 1523 + * @fattr - updated attributes 1524 + * 1525 + * After an operation that has changed the inode metadata, mark the 1526 + * attribute cache as being invalid, then try to update it. Fake up 1527 + * weak cache consistency data, if none exist. 1528 + * 1529 + * This function is mainly designed to be used by the ->write_done() functions. 1530 + */ 1531 + int nfs_post_op_update_inode_force_wcc(struct inode *inode, struct nfs_fattr *fattr) 1532 + { 1533 + int status; 1534 + 1535 + spin_lock(&inode->i_lock); 1536 + nfs_fattr_set_barrier(fattr); 1537 + status = nfs_post_op_update_inode_force_wcc_locked(inode, fattr); 1565 1538 spin_unlock(&inode->i_lock); 1566 1539 return status; 1567 1540 } ··· 1784 1715 nfs_inc_stats(inode, NFSIOS_ATTRINVALIDATE); 1785 1716 nfsi->attrtimeo = NFS_MINATTRTIMEO(inode); 1786 1717 nfsi->attrtimeo_timestamp = now; 1718 + /* Set barrier to be more recent than all outstanding updates */ 1787 1719 nfsi->attr_gencount = nfs_inc_attr_generation_counter(); 1788 1720 } else { 1789 1721 if (!time_in_range_open(now, nfsi->attrtimeo_timestamp, nfsi->attrtimeo_timestamp + nfsi->attrtimeo)) { ··· 1792 1722 nfsi->attrtimeo = NFS_MAXATTRTIMEO(inode); 1793 1723 nfsi->attrtimeo_timestamp = now; 1794 1724 } 1725 + /* Set the barrier to be more recent than this fattr */ 1726 + if ((long)fattr->gencount - (long)nfsi->attr_gencount > 0) 1727 + nfsi->attr_gencount = fattr->gencount; 1795 1728 } 1796 1729 invalid &= ~NFS_INO_INVALID_ATTR; 1797 1730 /* Don't invalidate the data if we were to blame */
+1
fs/nfs/internal.h
··· 459 459 struct nfs_commit_info *cinfo, 460 460 u32 ds_commit_idx); 461 461 int nfs_write_need_commit(struct nfs_pgio_header *); 462 + void nfs_writeback_update_inode(struct nfs_pgio_header *hdr); 462 463 int nfs_generic_commit_list(struct inode *inode, struct list_head *head, 463 464 int how, struct nfs_commit_info *cinfo); 464 465 void nfs_retry_commit(struct list_head *page_list,
+2 -2
fs/nfs/nfs3proc.c
··· 138 138 nfs_fattr_init(fattr); 139 139 status = rpc_call_sync(NFS_CLIENT(inode), &msg, 0); 140 140 if (status == 0) 141 - nfs_setattr_update_inode(inode, sattr); 141 + nfs_setattr_update_inode(inode, sattr, fattr); 142 142 dprintk("NFS reply setattr: %d\n", status); 143 143 return status; 144 144 } ··· 834 834 if (nfs3_async_handle_jukebox(task, inode)) 835 835 return -EAGAIN; 836 836 if (task->tk_status >= 0) 837 - nfs_post_op_update_inode_force_wcc(inode, hdr->res.fattr); 837 + nfs_writeback_update_inode(hdr); 838 838 return 0; 839 839 } 840 840
+5
fs/nfs/nfs3xdr.c
··· 1987 1987 if (entry->fattr->valid & NFS_ATTR_FATTR_V3) 1988 1988 entry->d_type = nfs_umode_to_dtype(entry->fattr->mode); 1989 1989 1990 + if (entry->fattr->fileid != entry->ino) { 1991 + entry->fattr->mounted_on_fileid = entry->ino; 1992 + entry->fattr->valid |= NFS_ATTR_FATTR_MOUNTED_ON_FILEID; 1993 + } 1994 + 1990 1995 /* In fact, a post_op_fh3: */ 1991 1996 p = xdr_inline_decode(xdr, 4); 1992 1997 if (unlikely(p == NULL))
+4 -5
fs/nfs/nfs4client.c
··· 621 621 spin_lock(&nn->nfs_client_lock); 622 622 list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) { 623 623 624 + if (pos == new) 625 + goto found; 626 + 624 627 if (pos->rpc_ops != new->rpc_ops) 625 628 continue; 626 629 ··· 642 639 prev = pos; 643 640 644 641 status = nfs_wait_client_init_complete(pos); 645 - if (pos->cl_cons_state == NFS_CS_SESSION_INITING) { 646 - nfs4_schedule_lease_recovery(pos); 647 - status = nfs4_wait_clnt_recover(pos); 648 - } 649 642 spin_lock(&nn->nfs_client_lock); 650 643 if (status < 0) 651 644 break; ··· 667 668 */ 668 669 if (!nfs4_match_client_owner_id(pos, new)) 669 670 continue; 670 - 671 + found: 671 672 atomic_inc(&pos->cl_count); 672 673 *result = pos; 673 674 status = 0;
+21 -10
fs/nfs/nfs4proc.c
··· 901 901 if (!cinfo->atomic || cinfo->before != dir->i_version) 902 902 nfs_force_lookup_revalidate(dir); 903 903 dir->i_version = cinfo->after; 904 + nfsi->attr_gencount = nfs_inc_attr_generation_counter(); 904 905 nfs_fscache_invalidate(dir); 905 906 spin_unlock(&dir->i_lock); 906 907 } ··· 1553 1552 1554 1553 opendata->o_arg.open_flags = 0; 1555 1554 opendata->o_arg.fmode = fmode; 1555 + opendata->o_arg.share_access = nfs4_map_atomic_open_share( 1556 + NFS_SB(opendata->dentry->d_sb), 1557 + fmode, 0); 1556 1558 memset(&opendata->o_res, 0, sizeof(opendata->o_res)); 1557 1559 memset(&opendata->c_res, 0, sizeof(opendata->c_res)); 1558 1560 nfs4_init_opendata_res(opendata); ··· 2417 2413 opendata->o_res.f_attr, sattr, 2418 2414 state, label, olabel); 2419 2415 if (status == 0) { 2420 - nfs_setattr_update_inode(state->inode, sattr); 2421 - nfs_post_op_update_inode(state->inode, opendata->o_res.f_attr); 2416 + nfs_setattr_update_inode(state->inode, sattr, 2417 + opendata->o_res.f_attr); 2422 2418 nfs_setsecurity(state->inode, opendata->o_res.f_attr, olabel); 2423 2419 } 2424 2420 } ··· 2655 2651 case -NFS4ERR_BAD_STATEID: 2656 2652 case -NFS4ERR_EXPIRED: 2657 2653 if (!nfs4_stateid_match(&calldata->arg.stateid, 2658 - &state->stateid)) { 2654 + &state->open_stateid)) { 2659 2655 rpc_restart_call_prepare(task); 2660 2656 goto out_release; 2661 2657 } ··· 2691 2687 is_rdwr = test_bit(NFS_O_RDWR_STATE, &state->flags); 2692 2688 is_rdonly = test_bit(NFS_O_RDONLY_STATE, &state->flags); 2693 2689 is_wronly = test_bit(NFS_O_WRONLY_STATE, &state->flags); 2694 - nfs4_stateid_copy(&calldata->arg.stateid, &state->stateid); 2690 + nfs4_stateid_copy(&calldata->arg.stateid, &state->open_stateid); 2695 2691 /* Calculate the change in open mode */ 2696 2692 calldata->arg.fmode = 0; 2697 2693 if (state->n_rdwr == 0) { ··· 3292 3288 3293 3289 status = nfs4_do_setattr(inode, cred, fattr, sattr, state, NULL, label); 3294 3290 if (status == 0) { 3295 - nfs_setattr_update_inode(inode, sattr); 3291 + nfs_setattr_update_inode(inode, sattr, fattr); 3296 3292 nfs_setsecurity(inode, fattr, label); 3297 3293 } 3298 3294 nfs4_label_free(label); ··· 4238 4234 } 4239 4235 if (task->tk_status >= 0) { 4240 4236 renew_lease(NFS_SERVER(inode), hdr->timestamp); 4241 - nfs_post_op_update_inode_force_wcc(inode, &hdr->fattr); 4237 + nfs_writeback_update_inode(hdr); 4242 4238 } 4243 4239 return 0; 4244 4240 } ··· 6897 6893 6898 6894 if (status == 0) { 6899 6895 clp->cl_clientid = res.clientid; 6900 - clp->cl_exchange_flags = (res.flags & ~EXCHGID4_FLAG_CONFIRMED_R); 6901 - if (!(res.flags & EXCHGID4_FLAG_CONFIRMED_R)) 6896 + clp->cl_exchange_flags = res.flags; 6897 + /* Client ID is not confirmed */ 6898 + if (!(res.flags & EXCHGID4_FLAG_CONFIRMED_R)) { 6899 + clear_bit(NFS4_SESSION_ESTABLISHED, 6900 + &clp->cl_session->session_state); 6902 6901 clp->cl_seqid = res.seqid; 6902 + } 6903 6903 6904 6904 kfree(clp->cl_serverowner); 6905 6905 clp->cl_serverowner = res.server_owner; ··· 7235 7227 struct nfs41_create_session_res *res) 7236 7228 { 7237 7229 nfs4_copy_sessionid(&session->sess_id, &res->sessionid); 7230 + /* Mark client id and session as being confirmed */ 7231 + session->clp->cl_exchange_flags |= EXCHGID4_FLAG_CONFIRMED_R; 7232 + set_bit(NFS4_SESSION_ESTABLISHED, &session->session_state); 7238 7233 session->flags = res->flags; 7239 7234 memcpy(&session->fc_attrs, &res->fc_attrs, sizeof(session->fc_attrs)); 7240 7235 if (res->flags & SESSION4_BACK_CHAN) ··· 7333 7322 dprintk("--> nfs4_proc_destroy_session\n"); 7334 7323 7335 7324 /* session is still being setup */ 7336 - if (session->clp->cl_cons_state != NFS_CS_READY) 7337 - return status; 7325 + if (!test_and_clear_bit(NFS4_SESSION_ESTABLISHED, &session->session_state)) 7326 + return 0; 7338 7327 status = rpc_call_sync(session->clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT); 7339 7328 trace_nfs4_destroy_session(session->clp, status);
+1
fs/nfs/nfs4session.h
··· 70 70 71 71 enum nfs4_session_state { 72 72 NFS4_SESSION_INITING, 73 + NFS4_SESSION_ESTABLISHED, 73 74 }; 74 75 75 76 extern int nfs4_setup_slot_table(struct nfs4_slot_table *tbl,
+16 -2
fs/nfs/nfs4state.c
··· 346 346 status = nfs4_proc_exchange_id(clp, cred); 347 347 if (status != NFS4_OK) 348 348 return status; 349 - set_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state); 350 349 351 - return nfs41_walk_client_list(clp, result, cred); 350 + status = nfs41_walk_client_list(clp, result, cred); 351 + if (status < 0) 352 + return status; 353 + if (clp != *result) 354 + return 0; 355 + 356 + /* Purge state if the client id was established in a prior instance */ 357 + if (clp->cl_exchange_flags & EXCHGID4_FLAG_CONFIRMED_R) 358 + set_bit(NFS4CLNT_PURGE_STATE, &clp->cl_state); 359 + else 360 + set_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state); 361 + nfs4_schedule_state_manager(clp); 362 + status = nfs_wait_client_init_complete(clp); 363 + if (status < 0) 364 + nfs_put_client(clp); 365 + return status; 352 366 } 353 367 354 368 #endif /* CONFIG_NFS_V4_1 */
+2 -4
fs/nfs/proc.c
··· 139 139 nfs_fattr_init(fattr); 140 140 status = rpc_call_sync(NFS_CLIENT(inode), &msg, 0); 141 141 if (status == 0) 142 - nfs_setattr_update_inode(inode, sattr); 142 + nfs_setattr_update_inode(inode, sattr, fattr); 143 143 dprintk("NFS reply setattr: %d\n", status); 144 144 return status; 145 145 } ··· 609 609 610 610 static int nfs_write_done(struct rpc_task *task, struct nfs_pgio_header *hdr) 611 611 { 612 - struct inode *inode = hdr->inode; 613 - 614 612 if (task->tk_status >= 0) 615 - nfs_post_op_update_inode_force_wcc(inode, hdr->res.fattr); 613 + nfs_writeback_update_inode(hdr); 616 614 return 0; 617 615 } 618 616
+30
fs/nfs/write.c
··· 1377 1377 return 0; 1378 1378 } 1379 1379 1380 + static void nfs_writeback_check_extend(struct nfs_pgio_header *hdr, 1381 + struct nfs_fattr *fattr) 1382 + { 1383 + struct nfs_pgio_args *argp = &hdr->args; 1384 + struct nfs_pgio_res *resp = &hdr->res; 1385 + 1386 + if (!(fattr->valid & NFS_ATTR_FATTR_SIZE)) 1387 + return; 1388 + if (argp->offset + resp->count != fattr->size) 1389 + return; 1390 + if (nfs_size_to_loff_t(fattr->size) < i_size_read(hdr->inode)) 1391 + return; 1392 + /* Set attribute barrier */ 1393 + nfs_fattr_set_barrier(fattr); 1394 + } 1395 + 1396 + void nfs_writeback_update_inode(struct nfs_pgio_header *hdr) 1397 + { 1398 + struct nfs_fattr *fattr = hdr->res.fattr; 1399 + struct inode *inode = hdr->inode; 1400 + 1401 + if (fattr == NULL) 1402 + return; 1403 + spin_lock(&inode->i_lock); 1404 + nfs_writeback_check_extend(hdr, fattr); 1405 + nfs_post_op_update_inode_force_wcc_locked(inode, fattr); 1406 + spin_unlock(&inode->i_lock); 1407 + } 1408 + EXPORT_SYMBOL_GPL(nfs_writeback_update_inode); 1409 + 1380 1410 /* 1381 1411 * This function is called when the WRITE call is complete. 1382 1412 */
+1 -1
fs/nfsd/blocklayout.c
··· 137 137 seg->offset = iomap.offset; 138 138 seg->length = iomap.length; 139 139 140 - dprintk("GET: %lld:%lld %d\n", bex->foff, bex->len, bex->es); 140 + dprintk("GET: 0x%llx:0x%llx %d\n", bex->foff, bex->len, bex->es); 141 141 return 0; 142 142 143 143 out_error:
+3 -3
fs/nfsd/blocklayoutxdr.c
··· 122 122 123 123 p = xdr_decode_hyper(p, &bex.foff); 124 124 if (bex.foff & (block_size - 1)) { 125 - dprintk("%s: unaligned offset %lld\n", 125 + dprintk("%s: unaligned offset 0x%llx\n", 126 126 __func__, bex.foff); 127 127 goto fail; 128 128 } 129 129 p = xdr_decode_hyper(p, &bex.len); 130 130 if (bex.len & (block_size - 1)) { 131 - dprintk("%s: unaligned length %lld\n", 131 + dprintk("%s: unaligned length 0x%llx\n", 132 132 __func__, bex.foff); 133 133 goto fail; 134 134 } 135 135 p = xdr_decode_hyper(p, &bex.soff); 136 136 if (bex.soff & (block_size - 1)) { 137 - dprintk("%s: unaligned disk offset %lld\n", 137 + dprintk("%s: unaligned disk offset 0x%llx\n", 138 138 __func__, bex.soff); 139 139 goto fail; 140 140 }
+7 -5
fs/nfsd/nfs4layouts.c
··· 118 118 { 119 119 struct super_block *sb = exp->ex_path.mnt->mnt_sb; 120 120 121 - if (exp->ex_flags & NFSEXP_NOPNFS) 121 + if (!(exp->ex_flags & NFSEXP_PNFS)) 122 122 return; 123 123 124 124 if (sb->s_export_op->get_uuid && ··· 440 440 list_move_tail(&lp->lo_perstate, reaplist); 441 441 return; 442 442 } 443 - end = seg->offset; 443 + lo->offset = layout_end(seg); 444 444 } else { 445 445 /* retain the whole layout segment on a split. */ 446 446 if (layout_end(seg) < end) { 447 447 dprintk("%s: split not supported\n", __func__); 448 448 return; 449 449 } 450 - 451 - lo->offset = layout_end(seg); 450 + end = seg->offset; 452 451 } 453 452 454 453 layout_update_len(lo, end); ··· 512 513 513 514 spin_lock(&clp->cl_lock); 514 515 list_for_each_entry_safe(ls, n, &clp->cl_lo_states, ls_perclnt) { 516 + if (ls->ls_layout_type != lrp->lr_layout_type) 517 + continue; 518 + 515 519 if (lrp->lr_return_type == RETURN_FSID && 516 520 !fh_fsid_match(&ls->ls_stid.sc_file->fi_fhandle, 517 521 &cstate->current_fh.fh_handle)) ··· 589 587 590 588 rpc_ntop((struct sockaddr *)&clp->cl_addr, addr_str, sizeof(addr_str)); 591 589 592 - nfsd4_cb_layout_fail(ls); 590 + trace_layout_recall_fail(&ls->ls_stid.sc_stateid); 593 591 594 592 printk(KERN_WARNING 595 593 "nfsd: client %s failed to respond to layout recall. "
+1 -1
fs/nfsd/nfs4proc.c
··· 1237 1237 nfserr = ops->proc_getdeviceinfo(exp->ex_path.mnt->mnt_sb, gdp); 1238 1238 1239 1239 gdp->gd_notify_types &= ops->notify_types; 1240 - exp_put(exp); 1241 1240 out: 1241 + exp_put(exp); 1242 1242 return nfserr; 1243 1243 } 1244 1244
+3 -3
fs/nfsd/nfs4state.c
··· 1638 1638 nfs4_put_stid(&dp->dl_stid); 1639 1639 } 1640 1640 while (!list_empty(&clp->cl_revoked)) { 1641 - dp = list_entry(reaplist.next, struct nfs4_delegation, dl_recall_lru); 1641 + dp = list_entry(clp->cl_revoked.next, struct nfs4_delegation, dl_recall_lru); 1642 1642 list_del_init(&dp->dl_recall_lru); 1643 1643 nfs4_put_stid(&dp->dl_stid); 1644 1644 } ··· 3221 3221 } else 3222 3222 nfs4_free_openowner(&oo->oo_owner); 3223 3223 spin_unlock(&clp->cl_lock); 3224 - return oo; 3224 + return ret; 3225 3225 } 3226 3226 3227 3227 static void init_open_stateid(struct nfs4_ol_stateid *stp, struct nfs4_file *fp, struct nfsd4_open *open) { ··· 5062 5062 } else 5063 5063 nfs4_free_lockowner(&lo->lo_owner); 5064 5064 spin_unlock(&clp->cl_lock); 5065 - return lo; 5065 + return ret; 5066 5066 } 5067 5067 5068 5068 static void
+16 -4
fs/nfsd/nfs4xdr.c
··· 1562 1562 p = xdr_decode_hyper(p, &lgp->lg_seg.offset); 1563 1563 p = xdr_decode_hyper(p, &lgp->lg_seg.length); 1564 1564 p = xdr_decode_hyper(p, &lgp->lg_minlength); 1565 - nfsd4_decode_stateid(argp, &lgp->lg_sid); 1565 + 1566 + status = nfsd4_decode_stateid(argp, &lgp->lg_sid); 1567 + if (status) 1568 + return status; 1569 + 1566 1570 READ_BUF(4); 1567 1571 lgp->lg_maxcount = be32_to_cpup(p++); 1568 1572 ··· 1584 1580 p = xdr_decode_hyper(p, &lcp->lc_seg.offset); 1585 1581 p = xdr_decode_hyper(p, &lcp->lc_seg.length); 1586 1582 lcp->lc_reclaim = be32_to_cpup(p++); 1587 - nfsd4_decode_stateid(argp, &lcp->lc_sid); 1583 + 1584 + status = nfsd4_decode_stateid(argp, &lcp->lc_sid); 1585 + if (status) 1586 + return status; 1587 + 1588 1588 READ_BUF(4); 1589 1589 lcp->lc_newoffset = be32_to_cpup(p++); 1590 1590 if (lcp->lc_newoffset) { ··· 1636 1628 READ_BUF(16); 1637 1629 p = xdr_decode_hyper(p, &lrp->lr_seg.offset); 1638 1630 p = xdr_decode_hyper(p, &lrp->lr_seg.length); 1639 - nfsd4_decode_stateid(argp, &lrp->lr_sid); 1631 + 1632 + status = nfsd4_decode_stateid(argp, &lrp->lr_sid); 1633 + if (status) 1634 + return status; 1635 + 1640 1636 READ_BUF(4); 1641 1637 lrp->lrf_body_len = be32_to_cpup(p++); 1642 1638 if (lrp->lrf_body_len > 0) { ··· 4135 4123 return nfserr_resource; 4136 4124 *p++ = cpu_to_be32(lrp->lrs_present); 4137 4125 if (lrp->lrs_present) 4138 - nfsd4_encode_stateid(xdr, &lrp->lr_sid); 4126 + return nfsd4_encode_stateid(xdr, &lrp->lr_sid); 4139 4127 return nfs_ok; 4140 4128 } 4141 4129 #endif /* CONFIG_NFSD_PNFS */
+5 -1
fs/nfsd/nfscache.c
··· 165 165 { 166 166 unsigned int hashsize; 167 167 unsigned int i; 168 + int status = 0; 168 169 169 170 max_drc_entries = nfsd_cache_size_limit(); 170 171 atomic_set(&num_drc_entries, 0); 171 172 hashsize = nfsd_hashsize(max_drc_entries); 172 173 maskbits = ilog2(hashsize); 173 174 174 - register_shrinker(&nfsd_reply_cache_shrinker); 175 + status = register_shrinker(&nfsd_reply_cache_shrinker); 176 + if (status) 177 + return status; 178 + 175 179 drc_slab = kmem_cache_create("nfsd_drc", sizeof(struct svc_cacherep), 176 180 0, 0, NULL); 177 181 if (!drc_slab)
+4 -3
fs/nilfs2/segment.c
··· 1907 1907 struct the_nilfs *nilfs) 1908 1908 { 1909 1909 struct nilfs_inode_info *ii, *n; 1910 + int during_mount = !(sci->sc_super->s_flags & MS_ACTIVE); 1910 1911 int defer_iput = false; 1911 1912 1912 1913 spin_lock(&nilfs->ns_inode_lock); ··· 1920 1919 brelse(ii->i_bh); 1921 1920 ii->i_bh = NULL; 1922 1921 list_del_init(&ii->i_dirty); 1923 - if (!ii->vfs_inode.i_nlink) { 1922 + if (!ii->vfs_inode.i_nlink || during_mount) { 1924 1923 /* 1925 - * Defer calling iput() to avoid a deadlock 1926 - * over I_SYNC flag for inodes with i_nlink == 0 1924 + * Defer calling iput() to avoid deadlocks if 1925 + * i_nlink == 0 or mount is not yet finished. 1927 1926 */ 1928 1927 list_add_tail(&ii->i_dirty, &sci->sc_iput_queue); 1929 1928 defer_iput = true;
+2 -1
fs/notify/fanotify/fanotify.c
··· 143 143 !(marks_mask & FS_ISDIR & ~marks_ignored_mask)) 144 144 return false; 145 145 146 - if (event_mask & marks_mask & ~marks_ignored_mask) 146 + if (event_mask & FAN_ALL_OUTGOING_EVENTS & marks_mask & 147 + ~marks_ignored_mask) 147 148 return true; 148 149 149 150 return false;
+1 -1
fs/ocfs2/ocfs2.h
··· 502 502 503 503 static inline int ocfs2_supports_append_dio(struct ocfs2_super *osb) 504 504 { 505 - if (osb->s_feature_ro_compat & OCFS2_FEATURE_RO_COMPAT_APPEND_DIO) 505 + if (osb->s_feature_incompat & OCFS2_FEATURE_INCOMPAT_APPEND_DIO) 506 506 return 1; 507 507 return 0; 508 508 }
+8 -7
fs/ocfs2/ocfs2_fs.h
··· 102 102 | OCFS2_FEATURE_INCOMPAT_INDEXED_DIRS \ 103 103 | OCFS2_FEATURE_INCOMPAT_REFCOUNT_TREE \ 104 104 | OCFS2_FEATURE_INCOMPAT_DISCONTIG_BG \ 105 - | OCFS2_FEATURE_INCOMPAT_CLUSTERINFO) 105 + | OCFS2_FEATURE_INCOMPAT_CLUSTERINFO \ 106 + | OCFS2_FEATURE_INCOMPAT_APPEND_DIO) 106 107 #define OCFS2_FEATURE_RO_COMPAT_SUPP (OCFS2_FEATURE_RO_COMPAT_UNWRITTEN \ 107 108 | OCFS2_FEATURE_RO_COMPAT_USRQUOTA \ 108 - | OCFS2_FEATURE_RO_COMPAT_GRPQUOTA \ 109 - | OCFS2_FEATURE_RO_COMPAT_APPEND_DIO) 109 + | OCFS2_FEATURE_RO_COMPAT_GRPQUOTA) 110 110 111 111 /* 112 112 * Heartbeat-only devices are missing journals and other files. The ··· 179 179 #define OCFS2_FEATURE_INCOMPAT_CLUSTERINFO 0x4000 180 180 181 181 /* 182 + * Append Direct IO support 183 + */ 184 + #define OCFS2_FEATURE_INCOMPAT_APPEND_DIO 0x8000 185 + 186 + /* 182 187 * backup superblock flag is used to indicate that this volume 183 188 * has backup superblocks. 184 189 */ ··· 205 200 #define OCFS2_FEATURE_RO_COMPAT_USRQUOTA 0x0002 206 201 #define OCFS2_FEATURE_RO_COMPAT_GRPQUOTA 0x0004 207 202 208 - /* 209 - * Append Direct IO support 210 - */ 211 - #define OCFS2_FEATURE_RO_COMPAT_APPEND_DIO 0x0008 212 203 213 204 /* The byte offset of the first backup block will be 1G. 214 205 * The following will be 4G, 16G, 64G, 256G and 1T.
+27 -6
fs/overlayfs/super.c
··· 529 529 { 530 530 struct ovl_fs *ufs = sb->s_fs_info; 531 531 532 - if (!(*flags & MS_RDONLY) && 533 - (!ufs->upper_mnt || (ufs->upper_mnt->mnt_sb->s_flags & MS_RDONLY))) 532 + if (!(*flags & MS_RDONLY) && !ufs->upper_mnt) 534 533 return -EROFS; 535 534 536 535 return 0; ··· 614 615 break; 615 616 616 617 default: 618 + pr_err("overlayfs: unrecognized mount option \"%s\" or missing value\n", p); 617 619 return -EINVAL; 618 620 } 619 621 } 622 + 623 + /* Workdir is useless in non-upper mount */ 624 + if (!config->upperdir && config->workdir) { 625 + pr_info("overlayfs: option \"workdir=%s\" is useless in a non-upper mount, ignore\n", 626 + config->workdir); 627 + kfree(config->workdir); 628 + config->workdir = NULL; 629 + } 630 + 620 631 return 0; 621 632 } 622 633 ··· 846 837 847 838 sb->s_stack_depth = 0; 848 839 if (ufs->config.upperdir) { 849 - /* FIXME: workdir is not needed for a R/O mount */ 850 840 if (!ufs->config.workdir) { 851 841 pr_err("overlayfs: missing 'workdir'\n"); 852 842 goto out_free_config; ··· 854 846 err = ovl_mount_dir(ufs->config.upperdir, &upperpath); 855 847 if (err) 856 848 goto out_free_config; 849 + 850 + /* Upper fs should not be r/o */ 851 + if (upperpath.mnt->mnt_sb->s_flags & MS_RDONLY) { 852 + pr_err("overlayfs: upper fs is r/o, try multi-lower layers mount\n"); 853 + err = -EINVAL; 854 + goto out_put_upperpath; 855 + } 857 856 858 857 err = ovl_mount_dir(ufs->config.workdir, &workpath); 859 858 if (err) ··· 884 869 885 870 err = -EINVAL; 886 871 stacklen = ovl_split_lowerdirs(lowertmp); 887 - if (stacklen > OVL_MAX_STACK) 872 + if (stacklen > OVL_MAX_STACK) { 873 + pr_err("overlayfs: too many lower directries, limit is %d\n", 874 + OVL_MAX_STACK); 888 875 goto out_free_lowertmp; 876 + } else if (!ufs->config.upperdir && stacklen == 1) { 877 + pr_err("overlayfs: at least 2 lowerdir are needed while upperdir nonexistent\n"); 878 + goto out_free_lowertmp; 879 + } 889 880 890 881 stack = kcalloc(stacklen, sizeof(struct path), 
GFP_KERNEL); 891 882 if (!stack) ··· 953 932 ufs->numlower++; 954 933 } 955 934 956 - /* If the upper fs is r/o or nonexistent, we mark overlayfs r/o too */ 957 - if (!ufs->upper_mnt || (ufs->upper_mnt->mnt_sb->s_flags & MS_RDONLY)) 935 + /* If the upper fs is nonexistent, we mark overlayfs r/o too */ 936 + if (!ufs->upper_mnt) 958 937 sb->s_flags |= MS_RDONLY; 959 938 960 939 sb->s_d_op = &ovl_dentry_operations;
+3
fs/proc/task_mmu.c
··· 1325 1325 1326 1326 static int pagemap_open(struct inode *inode, struct file *file) 1327 1327 { 1328 + /* do not disclose physical addresses: attack vector */ 1329 + if (!capable(CAP_SYS_ADMIN)) 1330 + return -EPERM; 1328 1331 pr_warn_once("Bits 55-60 of /proc/PID/pagemap entries are about " 1329 1332 "to stop being page-shift some time soon. See the " 1330 1333 "linux/Documentation/vm/pagemap.txt for details.\n");
+26 -26
include/drm/drm_mm.h
··· 68 68 unsigned scanned_preceeds_hole : 1; 69 69 unsigned allocated : 1; 70 70 unsigned long color; 71 - unsigned long start; 72 - unsigned long size; 71 + u64 start; 72 + u64 size; 73 73 struct drm_mm *mm; 74 74 }; 75 75 ··· 82 82 unsigned int scan_check_range : 1; 83 83 unsigned scan_alignment; 84 84 unsigned long scan_color; 85 - unsigned long scan_size; 86 - unsigned long scan_hit_start; 87 - unsigned long scan_hit_end; 85 + u64 scan_size; 86 + u64 scan_hit_start; 87 + u64 scan_hit_end; 88 88 unsigned scanned_blocks; 89 - unsigned long scan_start; 90 - unsigned long scan_end; 89 + u64 scan_start; 90 + u64 scan_end; 91 91 struct drm_mm_node *prev_scanned_node; 92 92 93 93 void (*color_adjust)(struct drm_mm_node *node, unsigned long color, 94 - unsigned long *start, unsigned long *end); 94 + u64 *start, u64 *end); 95 95 }; 96 96 97 97 /** ··· 124 124 return mm->hole_stack.next; 125 125 } 126 126 127 - static inline unsigned long __drm_mm_hole_node_start(struct drm_mm_node *hole_node) 127 + static inline u64 __drm_mm_hole_node_start(struct drm_mm_node *hole_node) 128 128 { 129 129 return hole_node->start + hole_node->size; 130 130 } ··· 140 140 * Returns: 141 141 * Start of the subsequent hole. 142 142 */ 143 - static inline unsigned long drm_mm_hole_node_start(struct drm_mm_node *hole_node) 143 + static inline u64 drm_mm_hole_node_start(struct drm_mm_node *hole_node) 144 144 { 145 145 BUG_ON(!hole_node->hole_follows); 146 146 return __drm_mm_hole_node_start(hole_node); 147 147 } 148 148 149 - static inline unsigned long __drm_mm_hole_node_end(struct drm_mm_node *hole_node) 149 + static inline u64 __drm_mm_hole_node_end(struct drm_mm_node *hole_node) 150 150 { 151 151 return list_entry(hole_node->node_list.next, 152 152 struct drm_mm_node, node_list)->start; ··· 163 163 * Returns: 164 164 * End of the subsequent hole. 
165 165 */ 166 - static inline unsigned long drm_mm_hole_node_end(struct drm_mm_node *hole_node) 166 + static inline u64 drm_mm_hole_node_end(struct drm_mm_node *hole_node) 167 167 { 168 168 return __drm_mm_hole_node_end(hole_node); 169 169 } ··· 222 222 223 223 int drm_mm_insert_node_generic(struct drm_mm *mm, 224 224 struct drm_mm_node *node, 225 - unsigned long size, 225 + u64 size, 226 226 unsigned alignment, 227 227 unsigned long color, 228 228 enum drm_mm_search_flags sflags, ··· 245 245 */ 246 246 static inline int drm_mm_insert_node(struct drm_mm *mm, 247 247 struct drm_mm_node *node, 248 - unsigned long size, 248 + u64 size, 249 249 unsigned alignment, 250 250 enum drm_mm_search_flags flags) 251 251 { ··· 255 255 256 256 int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, 257 257 struct drm_mm_node *node, 258 - unsigned long size, 258 + u64 size, 259 259 unsigned alignment, 260 260 unsigned long color, 261 - unsigned long start, 262 - unsigned long end, 261 + u64 start, 262 + u64 end, 263 263 enum drm_mm_search_flags sflags, 264 264 enum drm_mm_allocator_flags aflags); 265 265 /** ··· 282 282 */ 283 283 static inline int drm_mm_insert_node_in_range(struct drm_mm *mm, 284 284 struct drm_mm_node *node, 285 - unsigned long size, 285 + u64 size, 286 286 unsigned alignment, 287 - unsigned long start, 288 - unsigned long end, 287 + u64 start, 288 + u64 end, 289 289 enum drm_mm_search_flags flags) 290 290 { 291 291 return drm_mm_insert_node_in_range_generic(mm, node, size, alignment, ··· 296 296 void drm_mm_remove_node(struct drm_mm_node *node); 297 297 void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new); 298 298 void drm_mm_init(struct drm_mm *mm, 299 - unsigned long start, 300 - unsigned long size); 299 + u64 start, 300 + u64 size); 301 301 void drm_mm_takedown(struct drm_mm *mm); 302 302 bool drm_mm_clean(struct drm_mm *mm); 303 303 304 304 void drm_mm_init_scan(struct drm_mm *mm, 305 - unsigned long size, 305 + u64 size, 306 306 
unsigned alignment, 307 307 unsigned long color); 308 308 void drm_mm_init_scan_with_range(struct drm_mm *mm, 309 - unsigned long size, 309 + u64 size, 310 310 unsigned alignment, 311 311 unsigned long color, 312 - unsigned long start, 313 - unsigned long end); 312 + u64 start, 313 + u64 end); 314 314 bool drm_mm_scan_add_block(struct drm_mm_node *node); 315 315 bool drm_mm_scan_remove_block(struct drm_mm_node *node); 316 316
+1 -1
include/drm/ttm/ttm_bo_api.h
··· 249 249 * either of these locks held. 250 250 */ 251 251 252 - unsigned long offset; 252 + uint64_t offset; /* GPU address space is independent of CPU word size */ 253 253 uint32_t cur_placement; 254 254 255 255 struct sg_table *sg;
+1 -1
include/drm/ttm/ttm_bo_driver.h
··· 277 277 bool has_type; 278 278 bool use_type; 279 279 uint32_t flags; 280 - unsigned long gpu_offset; 280 + uint64_t gpu_offset; /* GPU address space is independent of CPU word size */ 281 281 uint64_t size; 282 282 uint32_t available_caching; 283 283 uint32_t default_caching;
+2 -1
include/dt-bindings/pinctrl/am33xx.h
··· 13 13 14 14 #define PULL_DISABLE (1 << 3) 15 15 #define INPUT_EN (1 << 5) 16 - #define SLEWCTRL_FAST (1 << 6) 16 + #define SLEWCTRL_SLOW (1 << 6) 17 + #define SLEWCTRL_FAST 0 17 18 18 19 /* update macro depending on INPUT_EN and PULL_ENA */ 19 20 #undef PIN_OUTPUT
+2 -1
include/dt-bindings/pinctrl/am43xx.h
··· 18 18 #define PULL_DISABLE (1 << 16) 19 19 #define PULL_UP (1 << 17) 20 20 #define INPUT_EN (1 << 18) 21 - #define SLEWCTRL_FAST (1 << 19) 21 + #define SLEWCTRL_SLOW (1 << 19) 22 + #define SLEWCTRL_FAST 0 22 23 #define DS0_PULL_UP_DOWN_EN (1 << 27) 23 24 24 25 #define PIN_OUTPUT (PULL_DISABLE)
+1
include/kvm/arm_vgic.h
··· 114 114 void (*sync_lr_elrsr)(struct kvm_vcpu *, int, struct vgic_lr); 115 115 u64 (*get_elrsr)(const struct kvm_vcpu *vcpu); 116 116 u64 (*get_eisr)(const struct kvm_vcpu *vcpu); 117 + void (*clear_eisr)(struct kvm_vcpu *vcpu); 117 118 u32 (*get_interrupt_status)(const struct kvm_vcpu *vcpu); 118 119 void (*enable_underflow)(struct kvm_vcpu *vcpu); 119 120 void (*disable_underflow)(struct kvm_vcpu *vcpu);
+18
include/linux/clk.h
··· 125 125 */ 126 126 int clk_get_phase(struct clk *clk); 127 127 128 + /** 129 + * clk_is_match - check if two clk's point to the same hardware clock 130 + * @p: clk compared against q 131 + * @q: clk compared against p 132 + * 133 + * Returns true if the two struct clk pointers both point to the same hardware 134 + * clock node. Put differently, returns true if struct clk *p and struct clk *q 135 + * share the same struct clk_core object. 136 + * 137 + * Returns false otherwise. Note that two NULL clks are treated as matching. 138 + */ 139 + bool clk_is_match(const struct clk *p, const struct clk *q); 140 + 128 141 #else 129 142 130 143 static inline long clk_get_accuracy(struct clk *clk) ··· 153 140 static inline long clk_get_phase(struct clk *clk) 154 141 { 155 142 return -ENOTSUPP; 143 + } 144 + 145 + static inline bool clk_is_match(const struct clk *p, const struct clk *q) 146 + { 147 + return p == q; 156 148 } 157 149 158 150 #endif
+15 -2
include/linux/cpuidle.h
··· 126 126 127 127 #ifdef CONFIG_CPU_IDLE 128 128 extern void disable_cpuidle(void); 129 + extern bool cpuidle_not_available(struct cpuidle_driver *drv, 130 + struct cpuidle_device *dev); 129 131 130 132 extern int cpuidle_select(struct cpuidle_driver *drv, 131 133 struct cpuidle_device *dev); ··· 152 150 extern int cpuidle_enable_device(struct cpuidle_device *dev); 153 151 extern void cpuidle_disable_device(struct cpuidle_device *dev); 154 152 extern int cpuidle_play_dead(void); 155 - extern void cpuidle_enter_freeze(void); 153 + extern int cpuidle_find_deepest_state(struct cpuidle_driver *drv, 154 + struct cpuidle_device *dev); 155 + extern int cpuidle_enter_freeze(struct cpuidle_driver *drv, 156 + struct cpuidle_device *dev); 156 157 157 158 extern struct cpuidle_driver *cpuidle_get_cpu_driver(struct cpuidle_device *dev); 158 159 #else 159 160 static inline void disable_cpuidle(void) { } 161 + static inline bool cpuidle_not_available(struct cpuidle_driver *drv, 162 + struct cpuidle_device *dev) 163 + {return true; } 160 164 static inline int cpuidle_select(struct cpuidle_driver *drv, 161 165 struct cpuidle_device *dev) 162 166 {return -ENODEV; } ··· 191 183 {return -ENODEV; } 192 184 static inline void cpuidle_disable_device(struct cpuidle_device *dev) { } 193 185 static inline int cpuidle_play_dead(void) {return -ENODEV; } 194 - static inline void cpuidle_enter_freeze(void) { } 186 + static inline int cpuidle_find_deepest_state(struct cpuidle_driver *drv, 187 + struct cpuidle_device *dev) 188 + {return -ENODEV; } 189 + static inline int cpuidle_enter_freeze(struct cpuidle_driver *drv, 190 + struct cpuidle_device *dev) 191 + {return -ENODEV; } 195 192 static inline struct cpuidle_driver *cpuidle_get_cpu_driver( 196 193 struct cpuidle_device *dev) {return NULL; } 197 194 #endif
+1
include/linux/device-mapper.h
··· 375 375 */ 376 376 struct mapped_device *dm_get_md(dev_t dev); 377 377 void dm_get(struct mapped_device *md); 378 + int dm_hold(struct mapped_device *md); 378 379 void dm_put(struct mapped_device *md); 379 380 380 381 /*
+1
include/linux/fs.h
··· 604 604 struct mutex i_mutex; 605 605 606 606 unsigned long dirtied_when; /* jiffies of first dirtying */ 607 + unsigned long dirtied_time_when; 607 608 608 609 struct hlist_node i_hash; 609 610 struct list_head i_wb_list; /* backing dev IO list */
+8 -1
include/linux/interrupt.h
··· 50 50 * IRQF_ONESHOT - Interrupt is not reenabled after the hardirq handler finished. 51 51 * Used by threaded interrupts which need to keep the 52 52 * irq line disabled until the threaded handler has been run. 53 - * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend 53 + * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend. Does not guarantee 54 + * that this interrupt will wake the system from a suspended 55 + * state. See Documentation/power/suspend-and-interrupts.txt 54 56 * IRQF_FORCE_RESUME - Force enable it on resume even if IRQF_NO_SUSPEND is set 55 57 * IRQF_NO_THREAD - Interrupt cannot be threaded 56 58 * IRQF_EARLY_RESUME - Resume IRQ early during syscore instead of at device 57 59 * resume time. 60 + * IRQF_COND_SUSPEND - If the IRQ is shared with a NO_SUSPEND user, execute this 61 + * interrupt handler after suspending interrupts. For system 62 + * wakeup devices users need to implement wakeup detection in 63 + * their interrupt handlers. 58 64 */ 59 65 #define IRQF_SHARED 0x00000080 60 66 #define IRQF_PROBE_SHARED 0x00000100 ··· 73 67 #define IRQF_FORCE_RESUME 0x00008000 74 68 #define IRQF_NO_THREAD 0x00010000 75 69 #define IRQF_EARLY_RESUME 0x00020000 70 + #define IRQF_COND_SUSPEND 0x00040000 76 71 77 72 #define IRQF_TIMER (__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD) 78 73
+22
include/linux/irqchip/arm-gic-v3.h
··· 126 126 #define GICR_PROPBASER_WaWb (5U << 7) 127 127 #define GICR_PROPBASER_RaWaWt (6U << 7) 128 128 #define GICR_PROPBASER_RaWaWb (7U << 7) 129 + #define GICR_PROPBASER_CACHEABILITY_MASK (7U << 7) 129 130 #define GICR_PROPBASER_IDBITS_MASK (0x1f) 131 + 132 + #define GICR_PENDBASER_NonShareable (0U << 10) 133 + #define GICR_PENDBASER_InnerShareable (1U << 10) 134 + #define GICR_PENDBASER_OuterShareable (2U << 10) 135 + #define GICR_PENDBASER_SHAREABILITY_MASK (3UL << 10) 136 + #define GICR_PENDBASER_nCnB (0U << 7) 137 + #define GICR_PENDBASER_nC (1U << 7) 138 + #define GICR_PENDBASER_RaWt (2U << 7) 139 + #define GICR_PENDBASER_RaWb (3U << 7) 140 + #define GICR_PENDBASER_WaWt (4U << 7) 141 + #define GICR_PENDBASER_WaWb (5U << 7) 142 + #define GICR_PENDBASER_RaWaWt (6U << 7) 143 + #define GICR_PENDBASER_RaWaWb (7U << 7) 144 + #define GICR_PENDBASER_CACHEABILITY_MASK (7U << 7) 130 145 131 146 /* 132 147 * Re-Distributor registers, offsets from SGI_base ··· 181 166 182 167 #define GITS_TRANSLATER 0x10040 183 168 169 + #define GITS_CTLR_ENABLE (1U << 0) 170 + #define GITS_CTLR_QUIESCENT (1U << 31) 171 + 172 + #define GITS_TYPER_DEVBITS_SHIFT 13 173 + #define GITS_TYPER_DEVBITS(r) ((((r) >> GITS_TYPER_DEVBITS_SHIFT) & 0x1f) + 1) 184 174 #define GITS_TYPER_PTA (1UL << 19) 185 175 186 176 #define GITS_CBASER_VALID (1UL << 63) ··· 197 177 #define GITS_CBASER_WaWb (5UL << 59) 198 178 #define GITS_CBASER_RaWaWt (6UL << 59) 199 179 #define GITS_CBASER_RaWaWb (7UL << 59) 180 + #define GITS_CBASER_CACHEABILITY_MASK (7UL << 59) 200 181 #define GITS_CBASER_NonShareable (0UL << 10) 201 182 #define GITS_CBASER_InnerShareable (1UL << 10) 202 183 #define GITS_CBASER_OuterShareable (2UL << 10) ··· 214 193 #define GITS_BASER_WaWb (5UL << 59) 215 194 #define GITS_BASER_RaWaWt (6UL << 59) 216 195 #define GITS_BASER_RaWaWb (7UL << 59) 196 + #define GITS_BASER_CACHEABILITY_MASK (7UL << 59) 217 197 #define GITS_BASER_TYPE_SHIFT (56) 218 198 #define GITS_BASER_TYPE(r) (((r) >> 
GITS_BASER_TYPE_SHIFT) & 7) 219 199 #define GITS_BASER_ENTRY_SIZE_SHIFT (48)
+1
include/linux/irqdesc.h
··· 78 78 #ifdef CONFIG_PM_SLEEP 79 79 unsigned int nr_actions; 80 80 unsigned int no_suspend_depth; 81 + unsigned int cond_suspend_depth; 81 82 unsigned int force_resume_depth; 82 83 #endif 83 84 #ifdef CONFIG_PROC_FS
+3 -6
include/linux/kasan.h
··· 5 5 6 6 struct kmem_cache; 7 7 struct page; 8 + struct vm_struct; 8 9 9 10 #ifdef CONFIG_KASAN 10 11 ··· 50 49 void kasan_slab_alloc(struct kmem_cache *s, void *object); 51 50 void kasan_slab_free(struct kmem_cache *s, void *object); 52 51 53 - #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) 54 - 55 52 int kasan_module_alloc(void *addr, size_t size); 56 - void kasan_module_free(void *addr); 53 + void kasan_free_shadow(const struct vm_struct *vm); 57 54 58 55 #else /* CONFIG_KASAN */ 59 - 60 - #define MODULE_ALIGN 1 61 56 62 57 static inline void kasan_unpoison_shadow(const void *address, size_t size) {} 63 58 ··· 79 82 static inline void kasan_slab_free(struct kmem_cache *s, void *object) {} 80 83 81 84 static inline int kasan_module_alloc(void *addr, size_t size) { return 0; } 82 - static inline void kasan_module_free(void *addr) {} 85 + static inline void kasan_free_shadow(const struct vm_struct *vm) {} 83 86 84 87 #endif /* CONFIG_KASAN */ 85 88
+1
include/linux/lcm.h
··· 4 4 #include <linux/compiler.h> 5 5 6 6 unsigned long lcm(unsigned long a, unsigned long b) __attribute_const__; 7 + unsigned long lcm_not_zero(unsigned long a, unsigned long b) __attribute_const__; 7 8 8 9 #endif /* _LCM_H */
+1
include/linux/libata.h
··· 232 232 * led */ 233 233 ATA_FLAG_NO_DIPM = (1 << 23), /* host not happy with DIPM */ 234 234 ATA_FLAG_LOWTAG = (1 << 24), /* host wants lowest available tag */ 235 + ATA_FLAG_SAS_HOST = (1 << 25), /* SAS host */ 235 236 236 237 /* bits 24:31 of ap->flags are reserved for LLD specific flags */ 237 238
+3
include/linux/mfd/palmas.h
··· 2999 2999 #define PALMAS_GPADC_TRIM15 0x0E 3000 3000 #define PALMAS_GPADC_TRIM16 0x0F 3001 3001 3002 + /* TPS659038 regen2_ctrl offset iss different from palmas */ 3003 + #define TPS659038_REGEN2_CTRL 0x12 3004 + 3002 3005 /* TPS65917 Interrupt registers */ 3003 3006 3004 3007 /* Registers for function INTERRUPT */
+1 -1
include/linux/mlx4/qp.h
··· 427 427 428 428 enum mlx4_update_qp_attr { 429 429 MLX4_UPDATE_QP_SMAC = 1 << 0, 430 - MLX4_UPDATE_QP_VSD = 1 << 2, 430 + MLX4_UPDATE_QP_VSD = 1 << 1, 431 431 MLX4_UPDATE_QP_SUPPORTED_ATTRS = (1 << 2) - 1 432 432 }; 433 433
+4
include/linux/module.h
··· 344 344 unsigned long *ftrace_callsites; 345 345 #endif 346 346 347 + #ifdef CONFIG_LIVEPATCH 348 + bool klp_alive; 349 + #endif 350 + 347 351 #ifdef CONFIG_MODULE_UNLOAD 348 352 /* What modules depend on me? */ 349 353 struct list_head source_list;
+8
include/linux/moduleloader.h
··· 84 84 85 85 /* Any cleanup before freeing mod->module_init */ 86 86 void module_arch_freeing_init(struct module *mod); 87 + 88 + #ifdef CONFIG_KASAN 89 + #include <linux/kasan.h> 90 + #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) 91 + #else 92 + #define MODULE_ALIGN PAGE_SIZE 93 + #endif 94 + 87 95 #endif
+11 -1
include/linux/netdevice.h
··· 965 965 * Used to add FDB entries to dump requests. Implementers should add 966 966 * entries to skb and update idx with the number of entries. 967 967 * 968 - * int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh) 968 + * int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh, 969 + * u16 flags) 969 970 * int (*ndo_bridge_getlink)(struct sk_buff *skb, u32 pid, u32 seq, 970 971 * struct net_device *dev, u32 filter_mask) 972 + * int (*ndo_bridge_dellink)(struct net_device *dev, struct nlmsghdr *nlh, 973 + * u16 flags); 971 974 * 972 975 * int (*ndo_change_carrier)(struct net_device *dev, bool new_carrier); 973 976 * Called to change device carrier. Soft-devices (like dummy, team, etc) ··· 2185 2182 void synchronize_net(void); 2186 2183 int init_dummy_netdev(struct net_device *dev); 2187 2184 2185 + DECLARE_PER_CPU(int, xmit_recursion); 2186 + static inline int dev_recursion_level(void) 2187 + { 2188 + return this_cpu_read(xmit_recursion); 2189 + } 2190 + 2188 2191 struct net_device *dev_get_by_index(struct net *net, int ifindex); 2189 2192 struct net_device *__dev_get_by_index(struct net *net, int ifindex); 2190 2193 struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex); ··· 2351 2342 2352 2343 static inline void skb_gro_remcsum_init(struct gro_remcsum *grc) 2353 2344 { 2345 + grc->offset = 0; 2354 2346 grc->delta = 0; 2355 2347 } 2356 2348
+4 -1
include/linux/nfs_fs.h
··· 343 343 extern int nfs_refresh_inode(struct inode *, struct nfs_fattr *); 344 344 extern int nfs_post_op_update_inode(struct inode *inode, struct nfs_fattr *fattr); 345 345 extern int nfs_post_op_update_inode_force_wcc(struct inode *inode, struct nfs_fattr *fattr); 346 + extern int nfs_post_op_update_inode_force_wcc_locked(struct inode *inode, struct nfs_fattr *fattr); 346 347 extern int nfs_getattr(struct vfsmount *, struct dentry *, struct kstat *); 347 348 extern void nfs_access_add_cache(struct inode *, struct nfs_access_entry *); 348 349 extern void nfs_access_set_mask(struct nfs_access_entry *, u32); ··· 356 355 extern int nfs_revalidate_inode_rcu(struct nfs_server *server, struct inode *inode); 357 356 extern int __nfs_revalidate_inode(struct nfs_server *, struct inode *); 358 357 extern int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping); 358 + extern int nfs_revalidate_mapping_protected(struct inode *inode, struct address_space *mapping); 359 359 extern int nfs_setattr(struct dentry *, struct iattr *); 360 - extern void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr); 360 + extern void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr, struct nfs_fattr *); 361 361 extern void nfs_setsecurity(struct inode *inode, struct nfs_fattr *fattr, 362 362 struct nfs4_label *label); 363 363 extern struct nfs_open_context *get_nfs_open_context(struct nfs_open_context *ctx); ··· 371 369 extern void nfs_put_lock_context(struct nfs_lock_context *l_ctx); 372 370 extern u64 nfs_compat_user_ino64(u64 fileid); 373 371 extern void nfs_fattr_init(struct nfs_fattr *fattr); 372 + extern void nfs_fattr_set_barrier(struct nfs_fattr *fattr); 374 373 extern unsigned long nfs_inc_attr_generation_counter(void); 375 374 376 375 extern struct nfs_fattr *nfs_alloc_fattr(void);
+1 -1
include/linux/of_platform.h
··· 84 84 static inline void of_platform_depopulate(struct device *parent) { } 85 85 #endif 86 86 87 - #ifdef CONFIG_OF_DYNAMIC 87 + #if defined(CONFIG_OF_DYNAMIC) && defined(CONFIG_OF_ADDRESS) 88 88 extern void of_platform_register_reconfig_notifier(void); 89 89 #else 90 90 static inline void of_platform_register_reconfig_notifier(void) { }
+3 -3
include/linux/pinctrl/consumer.h
··· 82 82 83 83 static inline struct pinctrl * __must_check pinctrl_get(struct device *dev) 84 84 { 85 - return ERR_PTR(-ENOSYS); 85 + return NULL; 86 86 } 87 87 88 88 static inline void pinctrl_put(struct pinctrl *p) ··· 93 93 struct pinctrl *p, 94 94 const char *name) 95 95 { 96 - return ERR_PTR(-ENOSYS); 96 + return NULL; 97 97 } 98 98 99 99 static inline int pinctrl_select_state(struct pinctrl *p, ··· 104 104 105 105 static inline struct pinctrl * __must_check devm_pinctrl_get(struct device *dev) 106 106 { 107 - return ERR_PTR(-ENOSYS); 107 + return NULL; 108 108 } 109 109 110 110 static inline void devm_pinctrl_put(struct pinctrl *p)
+1 -1
include/linux/regulator/driver.h
··· 316 316 * @driver_data: private regulator data 317 317 * @of_node: OpenFirmware node to parse for device tree bindings (may be 318 318 * NULL). 319 - * @regmap: regmap to use for core regmap helpers if dev_get_regulator() is 319 + * @regmap: regmap to use for core regmap helpers if dev_get_regmap() is 320 320 * insufficient. 321 321 * @ena_gpio_initialized: GPIO controlling regulator enable was properly 322 322 * initialized, meaning that >= 0 is a valid gpio
+5 -17
include/linux/rhashtable.h
··· 54 54 * @buckets: size * hash buckets 55 55 */ 56 56 struct bucket_table { 57 - size_t size; 58 - unsigned int locks_mask; 59 - spinlock_t *locks; 60 - struct rhash_head __rcu *buckets[]; 57 + size_t size; 58 + unsigned int locks_mask; 59 + spinlock_t *locks; 60 + 61 + struct rhash_head __rcu *buckets[] ____cacheline_aligned_in_smp; 61 62 }; 62 63 63 64 typedef u32 (*rht_hashfn_t)(const void *data, u32 len, u32 seed); ··· 79 78 * @locks_mul: Number of bucket locks to allocate per cpu (default: 128) 80 79 * @hashfn: Function to hash key 81 80 * @obj_hashfn: Function to hash object 82 - * @grow_decision: If defined, may return true if table should expand 83 - * @shrink_decision: If defined, may return true if table should shrink 84 - * 85 - * Note: when implementing the grow and shrink decision function, min/max 86 - * shift must be enforced, otherwise, resizing watermarks they set may be 87 - * useless. 88 81 */ 89 82 struct rhashtable_params { 90 83 size_t nelem_hint; ··· 92 97 size_t locks_mul; 93 98 rht_hashfn_t hashfn; 94 99 rht_obj_hashfn_t obj_hashfn; 95 - bool (*grow_decision)(const struct rhashtable *ht, 96 - size_t new_size); 97 - bool (*shrink_decision)(const struct rhashtable *ht, 98 - size_t new_size); 99 100 }; 100 101 101 102 /** ··· 182 191 183 192 void rhashtable_insert(struct rhashtable *ht, struct rhash_head *node); 184 193 bool rhashtable_remove(struct rhashtable *ht, struct rhash_head *node); 185 - 186 - bool rht_grow_above_75(const struct rhashtable *ht, size_t new_size); 187 - bool rht_shrink_below_30(const struct rhashtable *ht, size_t new_size); 188 194 189 195 int rhashtable_expand(struct rhashtable *ht); 190 196 int rhashtable_shrink(struct rhashtable *ht);
+5 -4
include/linux/sched.h
··· 1625 1625 1626 1626 /* 1627 1627 * numa_faults_locality tracks if faults recorded during the last 1628 - * scan window were remote/local. The task scan period is adapted 1629 - * based on the locality of the faults with different weights 1630 - * depending on whether they were shared or private faults 1628 + * scan window were remote/local or failed to migrate. The task scan 1629 + * period is adapted based on the locality of the faults with different 1630 + * weights depending on whether they were shared or private faults 1631 1631 */ 1632 - unsigned long numa_faults_locality[2]; 1632 + unsigned long numa_faults_locality[3]; 1633 1633 1634 1634 unsigned long numa_pages_migrated; 1635 1635 #endif /* CONFIG_NUMA_BALANCING */ ··· 1719 1719 #define TNF_NO_GROUP 0x02 1720 1720 #define TNF_SHARED 0x04 1721 1721 #define TNF_FAULT_LOCAL 0x08 1722 + #define TNF_MIGRATE_FAIL 0x10 1722 1723 1723 1724 #ifdef CONFIG_NUMA_BALANCING 1724 1725 extern void task_numa_fault(int last_node, int node, int pages, int flags);
+7 -7
include/linux/serial_core.h
··· 143 143 unsigned char iotype; /* io access style */ 144 144 unsigned char unused1; 145 145 146 - #define UPIO_PORT (0) /* 8b I/O port access */ 147 - #define UPIO_HUB6 (1) /* Hub6 ISA card */ 148 - #define UPIO_MEM (2) /* 8b MMIO access */ 149 - #define UPIO_MEM32 (3) /* 32b little endian */ 150 - #define UPIO_MEM32BE (4) /* 32b big endian */ 151 - #define UPIO_AU (5) /* Au1x00 and RT288x type IO */ 152 - #define UPIO_TSI (6) /* Tsi108/109 type IO */ 146 + #define UPIO_PORT (SERIAL_IO_PORT) /* 8b I/O port access */ 147 + #define UPIO_HUB6 (SERIAL_IO_HUB6) /* Hub6 ISA card */ 148 + #define UPIO_MEM (SERIAL_IO_MEM) /* 8b MMIO access */ 149 + #define UPIO_MEM32 (SERIAL_IO_MEM32) /* 32b little endian */ 150 + #define UPIO_AU (SERIAL_IO_AU) /* Au1x00 and RT288x type IO */ 151 + #define UPIO_TSI (SERIAL_IO_TSI) /* Tsi108/109 type IO */ 152 + #define UPIO_MEM32BE (SERIAL_IO_MEM32BE) /* 32b big endian */ 153 153 154 154 unsigned int read_status_mask; /* driver specific */ 155 155 unsigned int ignore_status_mask; /* driver specific */
+7
include/linux/skbuff.h
··· 948 948 to->l4_hash = from->l4_hash; 949 949 }; 950 950 951 + static inline void skb_sender_cpu_clear(struct sk_buff *skb) 952 + { 953 + #ifdef CONFIG_XPS 954 + skb->sender_cpu = 0; 955 + #endif 956 + } 957 + 951 958 #ifdef NET_SKBUFF_DATA_USES_OFFSET 952 959 static inline unsigned char *skb_end_pointer(const struct sk_buff *skb) 953 960 {
+1 -1
include/linux/spi/spi.h
··· 649 649 * sequence completes. On some systems, many such sequences can execute as 650 650 * as single programmed DMA transfer. On all systems, these messages are 651 651 * queued, and might complete after transactions to other devices. Messages 652 - * sent to a given spi_device are alway executed in FIFO order. 652 + * sent to a given spi_device are always executed in FIFO order. 653 653 * 654 654 * The code that submits an spi_message (and its spi_transfers) 655 655 * to the lower layers is responsible for managing its memory.
+9 -9
include/linux/sunrpc/debug.h
··· 60 60 #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 61 61 void rpc_register_sysctl(void); 62 62 void rpc_unregister_sysctl(void); 63 - int sunrpc_debugfs_init(void); 63 + void sunrpc_debugfs_init(void); 64 64 void sunrpc_debugfs_exit(void); 65 - int rpc_clnt_debugfs_register(struct rpc_clnt *); 65 + void rpc_clnt_debugfs_register(struct rpc_clnt *); 66 66 void rpc_clnt_debugfs_unregister(struct rpc_clnt *); 67 - int rpc_xprt_debugfs_register(struct rpc_xprt *); 67 + void rpc_xprt_debugfs_register(struct rpc_xprt *); 68 68 void rpc_xprt_debugfs_unregister(struct rpc_xprt *); 69 69 #else 70 - static inline int 70 + static inline void 71 71 sunrpc_debugfs_init(void) 72 72 { 73 - return 0; 73 + return; 74 74 } 75 75 76 76 static inline void ··· 79 79 return; 80 80 } 81 81 82 - static inline int 82 + static inline void 83 83 rpc_clnt_debugfs_register(struct rpc_clnt *clnt) 84 84 { 85 - return 0; 85 + return; 86 86 } 87 87 88 88 static inline void ··· 91 91 return; 92 92 } 93 93 94 - static inline int 94 + static inline void 95 95 rpc_xprt_debugfs_register(struct rpc_xprt *xprt) 96 96 { 97 - return 0; 97 + return; 98 98 } 99 99 100 100 static inline void
+2
include/linux/uio.h
··· 98 98 size_t maxsize, size_t *start); 99 99 int iov_iter_npages(const struct iov_iter *i, int maxpages); 100 100 101 + const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags); 102 + 101 103 static inline size_t iov_iter_count(struct iov_iter *i) 102 104 { 103 105 return i->count;
+1 -2
include/linux/usb/serial.h
··· 190 190 * @num_ports: the number of different ports this device will have. 191 191 * @bulk_in_size: minimum number of bytes to allocate for bulk-in buffer 192 192 * (0 = end-point size) 193 - * @bulk_out_size: minimum number of bytes to allocate for bulk-out buffer 194 - * (0 = end-point size) 193 + * @bulk_out_size: bytes to allocate for bulk-out buffer (0 = end-point size) 195 194 * @calc_num_ports: pointer to a function to determine how many ports this 196 195 * device has dynamically. It will be called after the probe() 197 196 * callback is called, but before attach()
+15 -1
include/linux/usb/usbnet.h
··· 227 227 struct urb *urb; 228 228 struct usbnet *dev; 229 229 enum skb_state state; 230 - size_t length; 230 + long length; 231 + unsigned long packets; 231 232 }; 233 + 234 + /* Drivers that set FLAG_MULTI_PACKET must call this in their 235 + * tx_fixup method before returning an skb. 236 + */ 237 + static inline void 238 + usbnet_set_skb_tx_stats(struct sk_buff *skb, 239 + unsigned long packets, long bytes_delta) 240 + { 241 + struct skb_data *entry = (struct skb_data *) skb->cb; 242 + 243 + entry->packets = packets; 244 + entry->length = bytes_delta; 245 + } 232 246 233 247 extern int usbnet_open(struct net_device *net); 234 248 extern int usbnet_stop(struct net_device *net);
+1
include/linux/vmalloc.h
··· 17 17 #define VM_VPAGES 0x00000010 /* buffer for pages was vmalloc'ed */ 18 18 #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ 19 19 #define VM_NO_GUARD 0x00000040 /* don't add guard page */ 20 + #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ 20 21 /* bits [20..32] reserved for arch specific ioremap internals */ 21 22 22 23 /*
+2 -1
include/linux/workqueue.h
··· 70 70 /* data contains off-queue information when !WORK_STRUCT_PWQ */ 71 71 WORK_OFFQ_FLAG_BASE = WORK_STRUCT_COLOR_SHIFT, 72 72 73 - WORK_OFFQ_CANCELING = (1 << WORK_OFFQ_FLAG_BASE), 73 + __WORK_OFFQ_CANCELING = WORK_OFFQ_FLAG_BASE, 74 + WORK_OFFQ_CANCELING = (1 << __WORK_OFFQ_CANCELING), 74 75 75 76 /* 76 77 * When a work item is off queue, its high bits point to the last
+3
include/linux/writeback.h
··· 130 130 extern unsigned long vm_dirty_bytes; 131 131 extern unsigned int dirty_writeback_interval; 132 132 extern unsigned int dirty_expire_interval; 133 + extern unsigned int dirtytime_expire_interval; 133 134 extern int vm_highmem_is_dirtyable; 134 135 extern int block_dump; 135 136 extern int laptop_mode; ··· 147 146 extern int dirty_bytes_handler(struct ctl_table *table, int write, 148 147 void __user *buffer, size_t *lenp, 149 148 loff_t *ppos); 149 + int dirtytime_interval_handler(struct ctl_table *table, int write, 150 + void __user *buffer, size_t *lenp, loff_t *ppos); 150 151 151 152 struct ctl_table; 152 153 int dirty_writeback_centisecs_handler(struct ctl_table *, int,
+1 -1
include/net/caif/cfpkt.h
··· 171 171 * @return Checksum of buffer. 172 172 */ 173 173 174 - u16 cfpkt_iterate(struct cfpkt *pkt, 174 + int cfpkt_iterate(struct cfpkt *pkt, 175 175 u16 (*iter_func)(u16 chks, void *buf, u16 len), 176 176 u16 data); 177 177
+1
include/net/dst.h
··· 481 481 enum { 482 482 XFRM_LOOKUP_ICMP = 1 << 0, 483 483 XFRM_LOOKUP_QUEUE = 1 << 1, 484 + XFRM_LOOKUP_KEEP_DST_REF = 1 << 2, 484 485 }; 485 486 486 487 struct flowi;
-16
include/net/ip.h
··· 453 453 454 454 #endif 455 455 456 - static inline int sk_mc_loop(struct sock *sk) 457 - { 458 - if (!sk) 459 - return 1; 460 - switch (sk->sk_family) { 461 - case AF_INET: 462 - return inet_sk(sk)->mc_loop; 463 - #if IS_ENABLED(CONFIG_IPV6) 464 - case AF_INET6: 465 - return inet6_sk(sk)->mc_loop; 466 - #endif 467 - } 468 - WARN_ON(1); 469 - return 1; 470 - } 471 - 472 456 bool ip_call_ra_chain(struct sk_buff *skb); 473 457 474 458 /*
+2 -1
include/net/ip6_route.h
··· 174 174 175 175 static inline int ip6_skb_dst_mtu(struct sk_buff *skb) 176 176 { 177 - struct ipv6_pinfo *np = skb->sk ? inet6_sk(skb->sk) : NULL; 177 + struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ? 178 + inet6_sk(skb->sk) : NULL; 178 179 179 180 return (np && np->pmtudisc >= IPV6_PMTUDISC_PROBE) ? 180 181 skb_dst(skb)->dev->mtu : dst_mtu(skb_dst(skb));
+10
include/net/netfilter/nf_log.h
··· 79 79 const struct nf_loginfo *li, 80 80 const char *fmt, ...); 81 81 82 + __printf(8, 9) 83 + void nf_log_trace(struct net *net, 84 + u_int8_t pf, 85 + unsigned int hooknum, 86 + const struct sk_buff *skb, 87 + const struct net_device *in, 88 + const struct net_device *out, 89 + const struct nf_loginfo *li, 90 + const char *fmt, ...); 91 + 82 92 struct nf_log_buf; 83 93 84 94 struct nf_log_buf *nf_log_buf_open(void);
+19 -3
include/net/netfilter/nf_tables.h
··· 119 119 const struct nft_data *data, 120 120 enum nft_data_types type); 121 121 122 + 123 + /** 124 + * struct nft_userdata - user defined data associated with an object 125 + * 126 + * @len: length of the data 127 + * @data: content 128 + * 129 + * The presence of user data is indicated in an object specific fashion, 130 + * so a length of zero can't occur and the value "len" indicates data 131 + * of length len + 1. 132 + */ 133 + struct nft_userdata { 134 + u8 len; 135 + unsigned char data[0]; 136 + }; 137 + 122 138 /** 123 139 * struct nft_set_elem - generic representation of set elements 124 140 * ··· 396 380 * @handle: rule handle 397 381 * @genmask: generation mask 398 382 * @dlen: length of expression data 399 - * @ulen: length of user data (used for comments) 383 + * @udata: user data is appended to the rule 400 384 * @data: expression data 401 385 */ 402 386 struct nft_rule { ··· 404 388 u64 handle:42, 405 389 genmask:2, 406 390 dlen:12, 407 - ulen:8; 391 + udata:1; 408 392 unsigned char data[] 409 393 __attribute__((aligned(__alignof__(struct nft_expr)))); 410 394 }; ··· 492 476 return (struct nft_expr *)&rule->data[rule->dlen]; 493 477 } 494 478 495 - static inline void *nft_userdata(const struct nft_rule *rule) 479 + static inline struct nft_userdata *nft_userdata(const struct nft_rule *rule) 496 480 { 497 481 return (void *)&rule->data[rule->dlen]; 498 482 }
+2
include/net/sock.h
··· 1762 1762 1763 1763 struct dst_entry *sk_dst_check(struct sock *sk, u32 cookie); 1764 1764 1765 + bool sk_mc_loop(struct sock *sk); 1766 + 1765 1767 static inline bool sk_can_gso(const struct sock *sk) 1766 1768 { 1767 1769 return net_gso_ok(sk->sk_route_caps, sk->sk_gso_type);
+1
include/net/vxlan.h
··· 91 91 92 92 #define VXLAN_N_VID (1u << 24) 93 93 #define VXLAN_VID_MASK (VXLAN_N_VID - 1) 94 + #define VXLAN_VNI_MASK (VXLAN_VID_MASK << 8) 94 95 #define VXLAN_HLEN (sizeof(struct udphdr) + sizeof(struct vxlanhdr)) 95 96 96 97 struct vxlan_metadata {
+1 -1
include/soc/at91/at91sam9_ddrsdr.h
··· 92 92 #define AT91_DDRSDRC_UPD_MR (3 << 20) /* Update load mode register and extended mode register */ 93 93 94 94 #define AT91_DDRSDRC_MDR 0x20 /* Memory Device Register */ 95 - #define AT91_DDRSDRC_MD (3 << 0) /* Memory Device Type */ 95 + #define AT91_DDRSDRC_MD (7 << 0) /* Memory Device Type */ 96 96 #define AT91_DDRSDRC_MD_SDR 0 97 97 #define AT91_DDRSDRC_MD_LOW_POWER_SDR 1 98 98 #define AT91_DDRSDRC_MD_LOW_POWER_DDR 3
+1
include/target/target_core_backend.h
··· 111 111 void target_core_setup_sub_cits(struct se_subsystem_api *); 112 112 113 113 /* attribute helpers from target_core_device.c for backend drivers */ 114 + bool se_dev_check_wce(struct se_device *); 114 115 int se_dev_set_max_unmap_lba_count(struct se_device *, u32); 115 116 int se_dev_set_max_unmap_block_desc_count(struct se_device *, u32); 116 117 int se_dev_set_unmap_granularity(struct se_device *, u32);
+61 -62
include/trace/events/regmap.h
··· 7 7 #include <linux/ktime.h> 8 8 #include <linux/tracepoint.h> 9 9 10 - struct device; 11 - struct regmap; 10 + #include "../../../drivers/base/regmap/internal.h" 12 11 13 12 /* 14 13 * Log register events 15 14 */ 16 15 DECLARE_EVENT_CLASS(regmap_reg, 17 16 18 - TP_PROTO(struct device *dev, unsigned int reg, 17 + TP_PROTO(struct regmap *map, unsigned int reg, 19 18 unsigned int val), 20 19 21 - TP_ARGS(dev, reg, val), 20 + TP_ARGS(map, reg, val), 22 21 23 22 TP_STRUCT__entry( 24 - __string( name, dev_name(dev) ) 25 - __field( unsigned int, reg ) 26 - __field( unsigned int, val ) 23 + __string( name, regmap_name(map) ) 24 + __field( unsigned int, reg ) 25 + __field( unsigned int, val ) 27 26 ), 28 27 29 28 TP_fast_assign( 30 - __assign_str(name, dev_name(dev)); 29 + __assign_str(name, regmap_name(map)); 31 30 __entry->reg = reg; 32 31 __entry->val = val; 33 32 ), ··· 38 39 39 40 DEFINE_EVENT(regmap_reg, regmap_reg_write, 40 41 41 - TP_PROTO(struct device *dev, unsigned int reg, 42 + TP_PROTO(struct regmap *map, unsigned int reg, 42 43 unsigned int val), 43 44 44 - TP_ARGS(dev, reg, val) 45 + TP_ARGS(map, reg, val) 45 46 46 47 ); 47 48 48 49 DEFINE_EVENT(regmap_reg, regmap_reg_read, 49 50 50 - TP_PROTO(struct device *dev, unsigned int reg, 51 + TP_PROTO(struct regmap *map, unsigned int reg, 51 52 unsigned int val), 52 53 53 - TP_ARGS(dev, reg, val) 54 + TP_ARGS(map, reg, val) 54 55 55 56 ); 56 57 57 58 DEFINE_EVENT(regmap_reg, regmap_reg_read_cache, 58 59 59 - TP_PROTO(struct device *dev, unsigned int reg, 60 + TP_PROTO(struct regmap *map, unsigned int reg, 60 61 unsigned int val), 61 62 62 - TP_ARGS(dev, reg, val) 63 + TP_ARGS(map, reg, val) 63 64 64 65 ); 65 66 66 67 DECLARE_EVENT_CLASS(regmap_block, 67 68 68 - TP_PROTO(struct device *dev, unsigned int reg, int count), 69 + TP_PROTO(struct regmap *map, unsigned int reg, int count), 69 70 70 - TP_ARGS(dev, reg, count), 71 + TP_ARGS(map, reg, count), 71 72 72 73 TP_STRUCT__entry( 73 - __string( name, 
dev_name(dev) ) 74 - __field( unsigned int, reg ) 75 - __field( int, count ) 74 + __string( name, regmap_name(map) ) 75 + __field( unsigned int, reg ) 76 + __field( int, count ) 76 77 ), 77 78 78 79 TP_fast_assign( 79 - __assign_str(name, dev_name(dev)); 80 + __assign_str(name, regmap_name(map)); 80 81 __entry->reg = reg; 81 82 __entry->count = count; 82 83 ), ··· 88 89 89 90 DEFINE_EVENT(regmap_block, regmap_hw_read_start, 90 91 91 - TP_PROTO(struct device *dev, unsigned int reg, int count), 92 + TP_PROTO(struct regmap *map, unsigned int reg, int count), 92 93 93 - TP_ARGS(dev, reg, count) 94 + TP_ARGS(map, reg, count) 94 95 ); 95 96 96 97 DEFINE_EVENT(regmap_block, regmap_hw_read_done, 97 98 98 - TP_PROTO(struct device *dev, unsigned int reg, int count), 99 + TP_PROTO(struct regmap *map, unsigned int reg, int count), 99 100 100 - TP_ARGS(dev, reg, count) 101 + TP_ARGS(map, reg, count) 101 102 ); 102 103 103 104 DEFINE_EVENT(regmap_block, regmap_hw_write_start, 104 105 105 - TP_PROTO(struct device *dev, unsigned int reg, int count), 106 + TP_PROTO(struct regmap *map, unsigned int reg, int count), 106 107 107 - TP_ARGS(dev, reg, count) 108 + TP_ARGS(map, reg, count) 108 109 ); 109 110 110 111 DEFINE_EVENT(regmap_block, regmap_hw_write_done, 111 112 112 - TP_PROTO(struct device *dev, unsigned int reg, int count), 113 + TP_PROTO(struct regmap *map, unsigned int reg, int count), 113 114 114 - TP_ARGS(dev, reg, count) 115 + TP_ARGS(map, reg, count) 115 116 ); 116 117 117 118 TRACE_EVENT(regcache_sync, 118 119 119 - TP_PROTO(struct device *dev, const char *type, 120 + TP_PROTO(struct regmap *map, const char *type, 120 121 const char *status), 121 122 122 - TP_ARGS(dev, type, status), 123 + TP_ARGS(map, type, status), 123 124 124 125 TP_STRUCT__entry( 125 - __string( name, dev_name(dev) ) 126 - __string( status, status ) 127 - __string( type, type ) 128 - __field( int, type ) 126 + __string( name, regmap_name(map) ) 127 + __string( status, status ) 128 + __string( type, 
type ) 129 + __field( int, type ) 129 130 ), 130 131 131 132 TP_fast_assign( 132 - __assign_str(name, dev_name(dev)); 133 + __assign_str(name, regmap_name(map)); 133 134 __assign_str(status, status); 134 135 __assign_str(type, type); 135 136 ), ··· 140 141 141 142 DECLARE_EVENT_CLASS(regmap_bool, 142 143 143 - TP_PROTO(struct device *dev, bool flag), 144 + TP_PROTO(struct regmap *map, bool flag), 144 145 145 - TP_ARGS(dev, flag), 146 + TP_ARGS(map, flag), 146 147 147 148 TP_STRUCT__entry( 148 - __string( name, dev_name(dev) ) 149 - __field( int, flag ) 149 + __string( name, regmap_name(map) ) 150 + __field( int, flag ) 150 151 ), 151 152 152 153 TP_fast_assign( 153 - __assign_str(name, dev_name(dev)); 154 + __assign_str(name, regmap_name(map)); 154 155 __entry->flag = flag; 155 156 ), 156 157 ··· 160 161 161 162 DEFINE_EVENT(regmap_bool, regmap_cache_only, 162 163 163 - TP_PROTO(struct device *dev, bool flag), 164 + TP_PROTO(struct regmap *map, bool flag), 164 165 165 - TP_ARGS(dev, flag) 166 + TP_ARGS(map, flag) 166 167 167 168 ); 168 169 169 170 DEFINE_EVENT(regmap_bool, regmap_cache_bypass, 170 171 171 - TP_PROTO(struct device *dev, bool flag), 172 + TP_PROTO(struct regmap *map, bool flag), 172 173 173 - TP_ARGS(dev, flag) 174 + TP_ARGS(map, flag) 174 175 175 176 ); 176 177 177 178 DECLARE_EVENT_CLASS(regmap_async, 178 179 179 - TP_PROTO(struct device *dev), 180 + TP_PROTO(struct regmap *map), 180 181 181 - TP_ARGS(dev), 182 + TP_ARGS(map), 182 183 183 184 TP_STRUCT__entry( 184 - __string( name, dev_name(dev) ) 185 + __string( name, regmap_name(map) ) 185 186 ), 186 187 187 188 TP_fast_assign( 188 - __assign_str(name, dev_name(dev)); 189 + __assign_str(name, regmap_name(map)); 189 190 ), 190 191 191 192 TP_printk("%s", __get_str(name)) ··· 193 194 194 195 DEFINE_EVENT(regmap_block, regmap_async_write_start, 195 196 196 - TP_PROTO(struct device *dev, unsigned int reg, int count), 197 + TP_PROTO(struct regmap *map, unsigned int reg, int count), 197 198 198 - 
TP_ARGS(dev, reg, count) 199 + TP_ARGS(map, reg, count) 199 200 ); 200 201 201 202 DEFINE_EVENT(regmap_async, regmap_async_io_complete, 202 203 203 - TP_PROTO(struct device *dev), 204 + TP_PROTO(struct regmap *map), 204 205 205 - TP_ARGS(dev) 206 + TP_ARGS(map) 206 207 207 208 ); 208 209 209 210 DEFINE_EVENT(regmap_async, regmap_async_complete_start, 210 211 211 - TP_PROTO(struct device *dev), 212 + TP_PROTO(struct regmap *map), 212 213 213 - TP_ARGS(dev) 214 + TP_ARGS(map) 214 215 215 216 ); 216 217 217 218 DEFINE_EVENT(regmap_async, regmap_async_complete_done, 218 219 219 - TP_PROTO(struct device *dev), 220 + TP_PROTO(struct regmap *map), 220 221 221 - TP_ARGS(dev) 222 + TP_ARGS(map) 222 223 223 224 ); 224 225 225 226 TRACE_EVENT(regcache_drop_region, 226 227 227 - TP_PROTO(struct device *dev, unsigned int from, 228 + TP_PROTO(struct regmap *map, unsigned int from, 228 229 unsigned int to), 229 230 230 - TP_ARGS(dev, from, to), 231 + TP_ARGS(map, from, to), 231 232 232 233 TP_STRUCT__entry( 233 - __string( name, dev_name(dev) ) 234 - __field( unsigned int, from ) 235 - __field( unsigned int, to ) 234 + __string( name, regmap_name(map) ) 235 + __field( unsigned int, from ) 236 + __field( unsigned int, to ) 236 237 ), 237 238 238 239 TP_fast_assign( 239 - __assign_str(name, dev_name(dev)); 240 + __assign_str(name, regmap_name(map)); 240 241 __entry->from = from; 241 242 __entry->to = to; 242 243 ),
+2 -1
include/uapi/linux/input.h
··· 973 973 */ 974 974 #define MT_TOOL_FINGER 0 975 975 #define MT_TOOL_PEN 1 976 - #define MT_TOOL_MAX 1 976 + #define MT_TOOL_PALM 2 977 + #define MT_TOOL_MAX 2 977 978 978 979 /* 979 980 * Values describing the status of a force-feedback effect
+1 -1
include/uapi/linux/nfsd/export.h
··· 47 47 * exported filesystem. 48 48 */ 49 49 #define NFSEXP_V4ROOT 0x10000 50 - #define NFSEXP_NOPNFS 0x20000 50 + #define NFSEXP_PNFS 0x20000 51 51 52 52 /* All flags that we claim to support. (Note we don't support NOACL.) */ 53 53 #define NFSEXP_ALLFLAGS 0x3FE7F
+4
include/uapi/linux/serial.h
··· 65 65 #define SERIAL_IO_PORT 0 66 66 #define SERIAL_IO_HUB6 1 67 67 #define SERIAL_IO_MEM 2 68 + #define SERIAL_IO_MEM32 3 69 + #define SERIAL_IO_AU 4 70 + #define SERIAL_IO_TSI 5 71 + #define SERIAL_IO_MEM32BE 6 68 72 69 73 #define UART_CLEAR_FIFO 0x01 70 74 #define UART_USE_FIFO 0x02
+1
include/uapi/linux/tc_act/Kbuild
··· 9 9 header-y += tc_skbedit.h 10 10 header-y += tc_vlan.h 11 11 header-y += tc_bpf.h 12 + header-y += tc_connmark.h
+6 -2
include/uapi/linux/virtio_blk.h
··· 60 60 __u32 size_max; 61 61 /* The maximum number of segments (if VIRTIO_BLK_F_SEG_MAX) */ 62 62 __u32 seg_max; 63 - /* geometry the device (if VIRTIO_BLK_F_GEOMETRY) */ 63 + /* geometry of the device (if VIRTIO_BLK_F_GEOMETRY) */ 64 64 struct virtio_blk_geometry { 65 65 __u16 cylinders; 66 66 __u8 heads; ··· 119 119 #define VIRTIO_BLK_T_BARRIER 0x80000000 120 120 #endif /* !VIRTIO_BLK_NO_LEGACY */ 121 121 122 - /* This is the first element of the read scatter-gather list. */ 122 + /* 123 + * This comes first in the read scatter-gather list. 124 + * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, 125 + * this is the first element of the read scatter-gather list. 126 + */ 123 127 struct virtio_blk_outhdr { 124 128 /* VIRTIO_BLK_T* */ 125 129 __virtio32 type;
+10 -2
include/uapi/linux/virtio_scsi.h
··· 29 29 30 30 #include <linux/virtio_types.h> 31 31 32 - #define VIRTIO_SCSI_CDB_SIZE 32 33 - #define VIRTIO_SCSI_SENSE_SIZE 96 32 + /* Default values of the CDB and sense data size configuration fields */ 33 + #define VIRTIO_SCSI_CDB_DEFAULT_SIZE 32 34 + #define VIRTIO_SCSI_SENSE_DEFAULT_SIZE 96 35 + 36 + #ifndef VIRTIO_SCSI_CDB_SIZE 37 + #define VIRTIO_SCSI_CDB_SIZE VIRTIO_SCSI_CDB_DEFAULT_SIZE 38 + #endif 39 + #ifndef VIRTIO_SCSI_SENSE_SIZE 40 + #define VIRTIO_SCSI_SENSE_SIZE VIRTIO_SCSI_SENSE_DEFAULT_SIZE 41 + #endif 34 42 35 43 /* SCSI command request, followed by data-out */ 36 44 struct virtio_scsi_cmd_req {
+1
include/video/omapdss.h
··· 689 689 }; 690 690 691 691 struct omap_dss_device { 692 + struct kobject kobj; 692 693 struct device *dev; 693 694 694 695 struct module *owner;
+2 -2
include/xen/xenbus.h
··· 114 114 const char *mod_name); 115 115 116 116 #define xenbus_register_frontend(drv) \ 117 - __xenbus_register_frontend(drv, THIS_MODULE, KBUILD_MODNAME); 117 + __xenbus_register_frontend(drv, THIS_MODULE, KBUILD_MODNAME) 118 118 #define xenbus_register_backend(drv) \ 119 - __xenbus_register_backend(drv, THIS_MODULE, KBUILD_MODNAME); 119 + __xenbus_register_backend(drv, THIS_MODULE, KBUILD_MODNAME) 120 120 121 121 void xenbus_unregister_driver(struct xenbus_driver *drv); 122 122
+4 -5
kernel/cpuset.c
··· 548 548 549 549 rcu_read_lock(); 550 550 cpuset_for_each_descendant_pre(cp, pos_css, root_cs) { 551 - if (cp == root_cs) 552 - continue; 553 - 554 551 /* skip the whole subtree if @cp doesn't have any CPU */ 555 552 if (cpumask_empty(cp->cpus_allowed)) { 556 553 pos_css = css_rightmost_descendant(pos_css); ··· 870 873 * If it becomes empty, inherit the effective mask of the 871 874 * parent, which is guaranteed to have some CPUs. 872 875 */ 873 - if (cpumask_empty(new_cpus)) 876 + if (cgroup_on_dfl(cp->css.cgroup) && cpumask_empty(new_cpus)) 874 877 cpumask_copy(new_cpus, parent->effective_cpus); 875 878 876 879 /* Skip the whole subtree if the cpumask remains the same. */ ··· 1126 1129 * If it becomes empty, inherit the effective mask of the 1127 1130 * parent, which is guaranteed to have some MEMs. 1128 1131 */ 1129 - if (nodes_empty(*new_mems)) 1132 + if (cgroup_on_dfl(cp->css.cgroup) && nodes_empty(*new_mems)) 1130 1133 *new_mems = parent->effective_mems; 1131 1134 1132 1135 /* Skip the whole subtree if the nodemask remains the same. */ ··· 1976 1979 1977 1980 spin_lock_irq(&callback_lock); 1978 1981 cs->mems_allowed = parent->mems_allowed; 1982 + cs->effective_mems = parent->mems_allowed; 1979 1983 cpumask_copy(cs->cpus_allowed, parent->cpus_allowed); 1984 + cpumask_copy(cs->effective_cpus, parent->cpus_allowed); 1980 1985 spin_unlock_irq(&callback_lock); 1981 1986 out_unlock: 1982 1987 mutex_unlock(&cpuset_mutex);
+11 -1
kernel/events/core.c
··· 3591 3591 ctx = perf_event_ctx_lock_nested(event, SINGLE_DEPTH_NESTING); 3592 3592 WARN_ON_ONCE(ctx->parent_ctx); 3593 3593 perf_remove_from_context(event, true); 3594 - mutex_unlock(&ctx->mutex); 3594 + perf_event_ctx_unlock(event, ctx); 3595 3595 3596 3596 _free_event(event); 3597 3597 } ··· 4574 4574 { 4575 4575 struct perf_event *event = container_of(entry, 4576 4576 struct perf_event, pending); 4577 + int rctx; 4578 + 4579 + rctx = perf_swevent_get_recursion_context(); 4580 + /* 4581 + * If we 'fail' here, that's OK, it means recursion is already disabled 4582 + * and we won't recurse 'further'. 4583 + */ 4577 4584 4578 4585 if (event->pending_disable) { 4579 4586 event->pending_disable = 0; ··· 4591 4584 event->pending_wakeup = 0; 4592 4585 perf_event_wakeup(event); 4593 4586 } 4587 + 4588 + if (rctx >= 0) 4589 + perf_swevent_put_recursion_context(rctx); 4594 4590 } 4595 4591 4596 4592 /*
+6 -1
kernel/irq/manage.c
··· 1506 1506 * otherwise we'll have trouble later trying to figure out 1507 1507 * which interrupt is which (messes up the interrupt freeing 1508 1508 * logic etc). 1509 + * 1510 + * Also IRQF_COND_SUSPEND only makes sense for shared interrupts and 1511 + * it cannot be set along with IRQF_NO_SUSPEND. 1509 1512 */ 1510 - if ((irqflags & IRQF_SHARED) && !dev_id) 1513 + if (((irqflags & IRQF_SHARED) && !dev_id) || 1514 + (!(irqflags & IRQF_SHARED) && (irqflags & IRQF_COND_SUSPEND)) || 1515 + ((irqflags & IRQF_NO_SUSPEND) && (irqflags & IRQF_COND_SUSPEND))) 1511 1516 return -EINVAL; 1512 1517 1513 1518 desc = irq_to_desc(irq);
+6 -1
kernel/irq/pm.c
··· 43 43 44 44 if (action->flags & IRQF_NO_SUSPEND) 45 45 desc->no_suspend_depth++; 46 + else if (action->flags & IRQF_COND_SUSPEND) 47 + desc->cond_suspend_depth++; 46 48 47 49 WARN_ON_ONCE(desc->no_suspend_depth && 48 - desc->no_suspend_depth != desc->nr_actions); 50 + (desc->no_suspend_depth + 51 + desc->cond_suspend_depth) != desc->nr_actions); 49 52 } 50 53 51 54 /* ··· 64 61 65 62 if (action->flags & IRQF_NO_SUSPEND) 66 63 desc->no_suspend_depth--; 64 + else if (action->flags & IRQF_COND_SUSPEND) 65 + desc->cond_suspend_depth--; 67 66 } 68 67 69 68 static bool suspend_device_irq(struct irq_desc *desc, int irq)
+28 -5
kernel/livepatch/core.c
··· 89 89 /* sets obj->mod if object is not vmlinux and module is found */ 90 90 static void klp_find_object_module(struct klp_object *obj) 91 91 { 92 + struct module *mod; 93 + 92 94 if (!klp_is_module(obj)) 93 95 return; 94 96 95 97 mutex_lock(&module_mutex); 96 98 /* 97 - * We don't need to take a reference on the module here because we have 98 - * the klp_mutex, which is also taken by the module notifier. This 99 - * prevents any module from unloading until we release the klp_mutex. 99 + * We do not want to block removal of patched modules and therefore 100 + * we do not take a reference here. The patches are removed by 101 + * a going module handler instead. 100 102 */ 101 - obj->mod = find_module(obj->name); 103 + mod = find_module(obj->name); 104 + /* 105 + * Do not mess work of the module coming and going notifiers. 106 + * Note that the patch might still be needed before the going handler 107 + * is called. Module functions can be called even in the GOING state 108 + * until mod->exit() finishes. This is especially important for 109 + * patches that modify semantic of the functions. 110 + */ 111 + if (mod && mod->klp_alive) 112 + obj->mod = mod; 113 + 102 114 mutex_unlock(&module_mutex); 103 115 } 104 116 ··· 260 248 /* first, check if it's an exported symbol */ 261 249 preempt_disable(); 262 250 sym = find_symbol(name, NULL, NULL, true, true); 263 - preempt_enable(); 264 251 if (sym) { 265 252 *addr = sym->value; 253 + preempt_enable(); 266 254 return 0; 267 255 } 256 + preempt_enable(); 268 257 269 258 /* otherwise check if it's in another .o within the patch module */ 270 259 return klp_find_object_symbol(pmod->name, name, addr); ··· 779 766 return -EINVAL; 780 767 781 768 obj->state = KLP_DISABLED; 769 + obj->mod = NULL; 782 770 783 771 klp_find_object_module(obj); 784 772 ··· 973 959 return 0; 974 960 975 961 mutex_lock(&klp_mutex); 962 + 963 + /* 964 + * Each module has to know that the notifier has been called. 
965 + * We never know what module will get patched by a new patch. 966 + */ 967 + if (action == MODULE_STATE_COMING) 968 + mod->klp_alive = true; 969 + else /* MODULE_STATE_GOING */ 970 + mod->klp_alive = false; 976 971 977 972 list_for_each_entry(patch, &klp_patches, list) { 978 973 for (obj = patch->objs; obj->funcs; obj++) {
+55 -26
kernel/locking/lockdep.c
··· 633 633 if (!new_class->name) 634 634 return 0; 635 635 636 - list_for_each_entry(class, &all_lock_classes, lock_entry) { 636 + list_for_each_entry_rcu(class, &all_lock_classes, lock_entry) { 637 637 if (new_class->key - new_class->subclass == class->key) 638 638 return class->name_version; 639 639 if (class->name && !strcmp(class->name, new_class->name)) ··· 700 700 hash_head = classhashentry(key); 701 701 702 702 /* 703 - * We can walk the hash lockfree, because the hash only 704 - * grows, and we are careful when adding entries to the end: 703 + * We do an RCU walk of the hash, see lockdep_free_key_range(). 705 704 */ 706 - list_for_each_entry(class, hash_head, hash_entry) { 705 + if (DEBUG_LOCKS_WARN_ON(!irqs_disabled())) 706 + return NULL; 707 + 708 + list_for_each_entry_rcu(class, hash_head, hash_entry) { 707 709 if (class->key == key) { 708 710 /* 709 711 * Huh! same key, different name? Did someone trample ··· 730 728 struct lockdep_subclass_key *key; 731 729 struct list_head *hash_head; 732 730 struct lock_class *class; 733 - unsigned long flags; 731 + 732 + DEBUG_LOCKS_WARN_ON(!irqs_disabled()); 734 733 735 734 class = look_up_lock_class(lock, subclass); 736 735 if (likely(class)) ··· 753 750 key = lock->key->subkeys + subclass; 754 751 hash_head = classhashentry(key); 755 752 756 - raw_local_irq_save(flags); 757 753 if (!graph_lock()) { 758 - raw_local_irq_restore(flags); 759 754 return NULL; 760 755 } 761 756 /* 762 757 * We have to do the hash-walk again, to avoid races 763 758 * with another CPU: 764 759 */ 765 - list_for_each_entry(class, hash_head, hash_entry) 760 + list_for_each_entry_rcu(class, hash_head, hash_entry) { 766 761 if (class->key == key) 767 762 goto out_unlock_set; 763 + } 764 + 768 765 /* 769 766 * Allocate a new key from the static array, and add it to 770 767 * the hash: 771 768 */ 772 769 if (nr_lock_classes >= MAX_LOCKDEP_KEYS) { 773 770 if (!debug_locks_off_graph_unlock()) { 774 - raw_local_irq_restore(flags); 775 771 return 
NULL; 776 772 } 777 - raw_local_irq_restore(flags); 778 773 779 774 print_lockdep_off("BUG: MAX_LOCKDEP_KEYS too low!"); 780 775 dump_stack(); ··· 799 798 800 799 if (verbose(class)) { 801 800 graph_unlock(); 802 - raw_local_irq_restore(flags); 803 801 804 802 printk("\nnew class %p: %s", class->key, class->name); 805 803 if (class->name_version > 1) ··· 806 806 printk("\n"); 807 807 dump_stack(); 808 808 809 - raw_local_irq_save(flags); 810 809 if (!graph_lock()) { 811 - raw_local_irq_restore(flags); 812 810 return NULL; 813 811 } 814 812 } 815 813 out_unlock_set: 816 814 graph_unlock(); 817 - raw_local_irq_restore(flags); 818 815 819 816 out_set_class_cache: 820 817 if (!subclass || force) ··· 867 870 entry->distance = distance; 868 871 entry->trace = *trace; 869 872 /* 870 - * Since we never remove from the dependency list, the list can 871 - * be walked lockless by other CPUs, it's only allocation 872 - * that must be protected by the spinlock. But this also means 873 - * we must make new entries visible only once writes to the 874 - * entry become visible - hence the RCU op: 873 + * Both allocation and removal are done under the graph lock; but 874 + * iteration is under RCU-sched; see look_up_lock_class() and 875 + * lockdep_free_key_range(). 
875 876 */ 876 877 list_add_tail_rcu(&entry->entry, head); 877 878 ··· 1020 1025 else 1021 1026 head = &lock->class->locks_before; 1022 1027 1023 - list_for_each_entry(entry, head, entry) { 1028 + DEBUG_LOCKS_WARN_ON(!irqs_disabled()); 1029 + 1030 + list_for_each_entry_rcu(entry, head, entry) { 1024 1031 if (!lock_accessed(entry)) { 1025 1032 unsigned int cq_depth; 1026 1033 mark_lock_accessed(entry, lock); ··· 2019 2022 * We can walk it lock-free, because entries only get added 2020 2023 * to the hash: 2021 2024 */ 2022 - list_for_each_entry(chain, hash_head, entry) { 2025 + list_for_each_entry_rcu(chain, hash_head, entry) { 2023 2026 if (chain->chain_key == chain_key) { 2024 2027 cache_hit: 2025 2028 debug_atomic_inc(chain_lookup_hits); ··· 2993 2996 if (unlikely(!debug_locks)) 2994 2997 return; 2995 2998 2996 - if (subclass) 2999 + if (subclass) { 3000 + unsigned long flags; 3001 + 3002 + if (DEBUG_LOCKS_WARN_ON(current->lockdep_recursion)) 3003 + return; 3004 + 3005 + raw_local_irq_save(flags); 3006 + current->lockdep_recursion = 1; 2997 3007 register_lock_class(lock, subclass, 1); 3008 + current->lockdep_recursion = 0; 3009 + raw_local_irq_restore(flags); 3010 + } 2998 3011 } 2999 3012 EXPORT_SYMBOL_GPL(lockdep_init_map); 3000 3013 ··· 3894 3887 return addr >= start && addr < start + size; 3895 3888 } 3896 3889 3890 + /* 3891 + * Used in module.c to remove lock classes from memory that is going to be 3892 + * freed; and possibly re-used by other modules. 3893 + * 3894 + * We will have had one sync_sched() before getting here, so we're guaranteed 3895 + * nobody will look up these exact classes -- they're properly dead but still 3896 + * allocated. 
3897 + */ 3897 3898 void lockdep_free_key_range(void *start, unsigned long size) 3898 3899 { 3899 - struct lock_class *class, *next; 3900 + struct lock_class *class; 3900 3901 struct list_head *head; 3901 3902 unsigned long flags; 3902 3903 int i; ··· 3920 3905 head = classhash_table + i; 3921 3906 if (list_empty(head)) 3922 3907 continue; 3923 - list_for_each_entry_safe(class, next, head, hash_entry) { 3908 + list_for_each_entry_rcu(class, head, hash_entry) { 3924 3909 if (within(class->key, start, size)) 3925 3910 zap_class(class); 3926 3911 else if (within(class->name, start, size)) ··· 3931 3916 if (locked) 3932 3917 graph_unlock(); 3933 3918 raw_local_irq_restore(flags); 3919 + 3920 + /* 3921 + * Wait for any possible iterators from look_up_lock_class() to pass 3922 + * before continuing to free the memory they refer to. 3923 + * 3924 + * sync_sched() is sufficient because the read-side is IRQ disable. 3925 + */ 3926 + synchronize_sched(); 3927 + 3928 + /* 3929 + * XXX at this point we could return the resources to the pool; 3930 + * instead we leak them. We would need to change to bitmap allocators 3931 + * instead of the linear allocators we have now. 3932 + */ 3934 3933 } 3935 3934 3936 3935 void lockdep_reset_lock(struct lockdep_map *lock) 3937 3936 { 3938 - struct lock_class *class, *next; 3937 + struct lock_class *class; 3939 3938 struct list_head *head; 3940 3939 unsigned long flags; 3941 3940 int i, j; ··· 3977 3948 head = classhash_table + i; 3978 3949 if (list_empty(head)) 3979 3950 continue; 3980 - list_for_each_entry_safe(class, next, head, hash_entry) { 3951 + list_for_each_entry_rcu(class, head, hash_entry) { 3981 3952 int match = 0; 3982 3953 3983 3954 for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
+6 -6
kernel/module.c
··· 56 56 #include <linux/async.h> 57 57 #include <linux/percpu.h> 58 58 #include <linux/kmemleak.h> 59 - #include <linux/kasan.h> 60 59 #include <linux/jump_label.h> 61 60 #include <linux/pfn.h> 62 61 #include <linux/bsearch.h>
··· 1813 1814 void __weak module_memfree(void *module_region) 1814 1815 { 1815 1816 vfree(module_region); 1816 - kasan_module_free(module_region); 1817 1817 } 1818 1818 1819 1819 void __weak module_arch_cleanup(struct module *mod)
··· 1865 1867 kfree(mod->args); 1866 1868 percpu_modfree(mod); 1867 1869 1868 - /* Free lock-classes: */ 1870 + /* Free lock-classes; relies on the preceding sync_rcu(). */ 1869 1871 lockdep_free_key_range(mod->module_core, mod->core_size); 1870 1872 1871 1873 /* Finally, free the core (containing the module structure) */
··· 2311 2313 info->symoffs = ALIGN(mod->core_size, symsect->sh_addralign ?: 1); 2312 2314 info->stroffs = mod->core_size = info->symoffs + ndst * sizeof(Elf_Sym); 2313 2315 mod->core_size += strtab_size; 2316 + mod->core_size = debug_align(mod->core_size); 2314 2317 2315 2318 /* Put string table section at end of init part of module. */ 2316 2319 strsect->sh_flags |= SHF_ALLOC; 2317 2320 strsect->sh_entsize = get_offset(mod, &mod->init_size, strsect, 2318 2321 info->index.str) | INIT_OFFSET_MASK; 2322 + mod->init_size = debug_align(mod->init_size); 2319 2323 pr_debug("\t%s\n", info->secstrings + strsect->sh_name); 2320 2324 }
··· 3349 3349 module_bug_cleanup(mod); 3350 3350 mutex_unlock(&module_mutex); 3351 3351 3352 - /* Free lock-classes: */ 3353 - lockdep_free_key_range(mod->module_core, mod->core_size); 3354 - 3355 3352 /* we can't deallocate the module until we clear memory protection */ 3356 3353 unset_module_init_ro_nx(mod); 3357 3354 unset_module_core_ro_nx(mod);
··· 3372 3375 synchronize_rcu(); 3373 3376 mutex_unlock(&module_mutex); 3374 3377 free_module: 3378 + /* Free lock-classes; relies on the preceding sync_rcu() */ 3379 + lockdep_free_key_range(mod->module_core, mod->core_size); 3380 + 3375 3381 module_deallocate(mod, info); 3376 3382 free_copy: 3377 3383 free_copy(info);
+1 -1
kernel/printk/console_cmdline.h
··· 3 3 4 4 struct console_cmdline 5 5 { 6 - char name[8]; /* Name of the driver */ 6 + char name[16]; /* Name of the driver */ 7 7 int index; /* Minor dev. to use */ 8 8 char *options; /* Options for the driver */ 9 9 #ifdef CONFIG_A11Y_BRAILLE_CONSOLE
+1
kernel/printk/printk.c
··· 2464 2464 for (i = 0, c = console_cmdline; 2465 2465 i < MAX_CMDLINECONSOLES && c->name[0]; 2466 2466 i++, c++) { 2467 + BUILD_BUG_ON(sizeof(c->name) != sizeof(newcon->name)); 2467 2468 if (strcmp(c->name, newcon->name) != 0) 2468 2469 continue; 2469 2470 if (newcon->index >= 0 &&
+2
kernel/sched/core.c
··· 3034 3034 } else { 3035 3035 if (dl_prio(oldprio)) 3036 3036 p->dl.dl_boosted = 0; 3037 + if (rt_prio(oldprio)) 3038 + p->rt.timeout = 0; 3037 3039 p->sched_class = &fair_sched_class; 3038 3040 } 3039 3041
+6 -2
kernel/sched/fair.c
··· 1609 1609 /* 1610 1610 * If there were no record hinting faults then either the task is 1611 1611 * completely idle or all activity is areas that are not of interest 1612 - * to automatic numa balancing. Scan slower 1612 + * to automatic numa balancing. Related to that, if there were failed 1613 + * migration then it implies we are migrating too quickly or the local 1614 + * node is overloaded. In either case, scan slower 1613 1615 */ 1614 - if (local + shared == 0) { 1616 + if (local + shared == 0 || p->numa_faults_locality[2]) { 1615 1617 p->numa_scan_period = min(p->numa_scan_period_max, 1616 1618 p->numa_scan_period << 1); 1617 1619 ··· 2082 2080 2083 2081 if (migrated) 2084 2082 p->numa_pages_migrated += pages; 2083 + if (flags & TNF_MIGRATE_FAIL) 2084 + p->numa_faults_locality[2] += pages; 2085 2085 2086 2086 p->numa_faults[task_faults_idx(NUMA_MEMBUF, mem_node, priv)] += pages; 2087 2087 p->numa_faults[task_faults_idx(NUMA_CPUBUF, cpu_node, priv)] += pages;
+34 -22
kernel/sched/idle.c
··· 82 82 struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev); 83 83 int next_state, entered_state; 84 84 unsigned int broadcast; 85 + bool reflect; 85 86 86 87 /* 87 88 * Check if the idle task must be rescheduled. If it is the
··· 106 105 */ 107 106 rcu_idle_enter(); 108 107 108 + if (cpuidle_not_available(drv, dev)) 109 + goto use_default; 110 + 109 111 /* 110 112 * Suspend-to-idle ("freeze") is a system state in which all user space 111 113 * has been frozen, all I/O devices have been suspended and the only
··· 119 115 * until a proper wakeup interrupt happens. 120 116 */ 121 117 if (idle_should_freeze()) { 122 - cpuidle_enter_freeze(); 123 - local_irq_enable(); 124 - goto exit_idle; 125 - } 126 - 127 - /* 128 - * Ask the cpuidle framework to choose a convenient idle state. 129 - * Fall back to the default arch idle method on errors. 130 - */ 131 - next_state = cpuidle_select(drv, dev); 132 - if (next_state < 0) { 133 - use_default: 134 - /* 135 - * We can't use the cpuidle framework, let's use the default 136 - * idle routine. 137 - */ 138 - if (current_clr_polling_and_test()) 118 + entered_state = cpuidle_enter_freeze(drv, dev); 119 + if (entered_state >= 0) { 139 120 local_irq_enable(); 140 - else 141 - arch_cpu_idle(); 121 + goto exit_idle; 122 + } 142 123 143 - goto exit_idle; 124 + reflect = false; 125 + next_state = cpuidle_find_deepest_state(drv, dev); 126 + } else { 127 + reflect = true; 128 + /* 129 + * Ask the cpuidle framework to choose a convenient idle state. 130 + */ 131 + next_state = cpuidle_select(drv, dev); 144 132 } 145 - 133 + /* Fall back to the default arch idle method on errors. */ 134 + if (next_state < 0) 135 + goto use_default; 146 136 147 137 /* 148 138 * The idle task must be scheduled, it is pointless to
··· 181 183 /* 182 184 * Give the governor an opportunity to reflect on the outcome 183 185 */ 184 - cpuidle_reflect(dev, entered_state); 186 + if (reflect) 187 + cpuidle_reflect(dev, entered_state); 185 188 186 189 exit_idle: 187 190 __current_set_polling();
··· 195 196 196 197 rcu_idle_exit(); 197 198 start_critical_timings(); 199 + return; 200 + 201 + use_default: 202 + /* 203 + * We can't use the cpuidle framework, let's use the default 204 + * idle routine. 205 + */ 206 + if (current_clr_polling_and_test()) 207 + local_irq_enable(); 208 + else 209 + arch_cpu_idle(); 210 + 211 + goto exit_idle; 198 212 } 199 213 200 214 /*
+8
kernel/sysctl.c
··· 1228 1228 .extra1 = &zero, 1229 1229 }, 1230 1230 { 1231 + .procname = "dirtytime_expire_seconds", 1232 + .data = &dirtytime_expire_interval, 1233 + .maxlen = sizeof(dirty_expire_interval), 1234 + .mode = 0644, 1235 + .proc_handler = dirtytime_interval_handler, 1236 + .extra1 = &zero, 1237 + }, 1238 + { 1231 1239 .procname = "nr_pdflush_threads", 1232 1240 .mode = 0444 /* read-only */, 1233 1241 .proc_handler = pdflush_proc_obsolete,
+9 -2
kernel/time/tick-broadcast-hrtimer.c
··· 49 49 */ 50 50 static int bc_set_next(ktime_t expires, struct clock_event_device *bc) 51 51 { 52 + int bc_moved; 52 53 /* 53 54 * We try to cancel the timer first. If the callback is on 54 55 * flight on some other cpu then we let it handle it. If we ··· 61 60 * restart the timer because we are in the callback, but we 62 61 * can set the expiry time and let the callback return 63 62 * HRTIMER_RESTART. 63 + * 64 + * Since we are in the idle loop at this point and because 65 + * hrtimer_{start/cancel} functions call into tracing, 66 + * calls to these functions must be bound within RCU_NONIDLE. 64 67 */ 65 - if (hrtimer_try_to_cancel(&bctimer) >= 0) { 66 - hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED); 68 + RCU_NONIDLE(bc_moved = (hrtimer_try_to_cancel(&bctimer) >= 0) ? 69 + !hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED) : 70 + 0); 71 + if (bc_moved) { 67 72 /* Bind the "device" to the cpu */ 68 73 bc->bound_on = smp_processor_id(); 69 74 } else if (bc->bound_on == smp_processor_id()) {
+30 -10
kernel/trace/ftrace.c
··· 1059 1059 1060 1060 static struct pid * const ftrace_swapper_pid = &init_struct_pid; 1061 1061 1062 + #ifdef CONFIG_FUNCTION_GRAPH_TRACER 1063 + static int ftrace_graph_active; 1064 + #else 1065 + # define ftrace_graph_active 0 1066 + #endif 1067 + 1062 1068 #ifdef CONFIG_DYNAMIC_FTRACE 1063 1069 1064 1070 static struct ftrace_ops *removed_ops;
··· 2047 2041 if (!ftrace_rec_count(rec)) 2048 2042 rec->flags = 0; 2049 2043 else 2050 - /* Just disable the record (keep REGS state) */ 2051 - rec->flags &= ~FTRACE_FL_ENABLED; 2044 + /* 2045 + * Just disable the record, but keep the ops TRAMP 2046 + * and REGS states. The _EN flags must be disabled though. 2047 + */ 2048 + rec->flags &= ~(FTRACE_FL_ENABLED | FTRACE_FL_TRAMP_EN | 2049 + FTRACE_FL_REGS_EN); 2052 2050 } 2053 2051 2054 2052 return FTRACE_UPDATE_MAKE_NOP;
··· 2698 2688 2699 2689 static void ftrace_startup_sysctl(void) 2700 2690 { 2691 + int command; 2692 + 2701 2693 if (unlikely(ftrace_disabled)) 2702 2694 return; 2703 2695 2704 2696 /* Force update next time */ 2705 2697 saved_ftrace_func = NULL; 2706 2698 /* ftrace_start_up is true if we want ftrace running */ 2707 - if (ftrace_start_up) 2708 - ftrace_run_update_code(FTRACE_UPDATE_CALLS); 2699 + if (ftrace_start_up) { 2700 + command = FTRACE_UPDATE_CALLS; 2701 + if (ftrace_graph_active) 2702 + command |= FTRACE_START_FUNC_RET; 2703 + ftrace_startup_enable(command); 2704 + } 2709 2705 } 2710 2706 2711 2707 static void ftrace_shutdown_sysctl(void) 2712 2708 { 2709 + int command; 2710 + 2713 2711 if (unlikely(ftrace_disabled)) 2714 2712 return; 2715 2713 2716 2714 /* ftrace_start_up is true if ftrace is running */ 2717 - if (ftrace_start_up) 2718 - ftrace_run_update_code(FTRACE_DISABLE_CALLS); 2715 + if (ftrace_start_up) { 2716 + command = FTRACE_DISABLE_CALLS; 2717 + if (ftrace_graph_active) 2718 + command |= FTRACE_STOP_FUNC_RET; 2719 + ftrace_run_update_code(command); 2720 + } 2719 2721 } 2720 2722 2721 2723 static cycle_t ftrace_update_time;
··· 5580 5558 5581 5559 if (ftrace_enabled) { 5582 5560 5583 - ftrace_startup_sysctl(); 5584 - 5585 5561 /* we are starting ftrace again */ 5586 5562 if (ftrace_ops_list != &ftrace_list_end) 5587 5563 update_ftrace_function(); 5564 + 5565 + ftrace_startup_sysctl(); 5588 5566 5589 5567 } else { 5590 5568 /* stopping ftrace calls (just send to ftrace_stub) */
··· 5611 5589 #endif 5612 5590 ASSIGN_OPS_HASH(graph_ops, &global_ops.local_hash) 5613 5591 }; 5614 - 5615 - static int ftrace_graph_active; 5616 5592 5617 5593 int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace) 5618 5594 {
+52 -4
kernel/workqueue.c
··· 2728 2728 } 2729 2729 EXPORT_SYMBOL_GPL(flush_work); 2730 2730 2731 + struct cwt_wait { 2732 + wait_queue_t wait; 2733 + struct work_struct *work; 2734 + }; 2735 + 2736 + static int cwt_wakefn(wait_queue_t *wait, unsigned mode, int sync, void *key) 2737 + { 2738 + struct cwt_wait *cwait = container_of(wait, struct cwt_wait, wait); 2739 + 2740 + if (cwait->work != key) 2741 + return 0; 2742 + return autoremove_wake_function(wait, mode, sync, key); 2743 + } 2744 + 2731 2745 static bool __cancel_work_timer(struct work_struct *work, bool is_dwork) 2732 2746 { 2747 + static DECLARE_WAIT_QUEUE_HEAD(cancel_waitq); 2733 2748 unsigned long flags; 2734 2749 int ret; 2735 2750 2736 2751 do { 2737 2752 ret = try_to_grab_pending(work, is_dwork, &flags); 2738 2753 /* 2739 - * If someone else is canceling, wait for the same event it 2740 - * would be waiting for before retrying. 2754 + * If someone else is already canceling, wait for it to 2755 + * finish. flush_work() doesn't work for PREEMPT_NONE 2756 + * because we may get scheduled between @work's completion 2757 + * and the other canceling task resuming and clearing 2758 + * CANCELING - flush_work() will return false immediately 2759 + * as @work is no longer busy, try_to_grab_pending() will 2760 + * return -ENOENT as @work is still being canceled and the 2761 + * other canceling task won't be able to clear CANCELING as 2762 + * we're hogging the CPU. 2763 + * 2764 + * Let's wait for completion using a waitqueue. As this 2765 + * may lead to the thundering herd problem, use a custom 2766 + * wake function which matches @work along with exclusive 2767 + * wait and wakeup. 2741 2768 */ 2742 - if (unlikely(ret == -ENOENT)) 2743 - flush_work(work); 2769 + if (unlikely(ret == -ENOENT)) { 2770 + struct cwt_wait cwait; 2771 + 2772 + init_wait(&cwait.wait); 2773 + cwait.wait.func = cwt_wakefn; 2774 + cwait.work = work; 2775 + 2776 + prepare_to_wait_exclusive(&cancel_waitq, &cwait.wait, 2777 + TASK_UNINTERRUPTIBLE); 2778 + if (work_is_canceling(work)) 2779 + schedule(); 2780 + finish_wait(&cancel_waitq, &cwait.wait); 2781 + } 2744 2782 } while (unlikely(ret < 0)); 2745 2783 2746 2784 /* tell other tasks trying to grab @work to back off */
··· 2787 2749 2788 2750 flush_work(work); 2789 2751 clear_work_data(work); 2752 + 2753 + /* 2754 + * Paired with prepare_to_wait() above so that either 2755 + * waitqueue_active() is visible here or !work_is_canceling() is 2756 + * visible there. 2757 + */ 2758 + smp_mb(); 2759 + if (waitqueue_active(&cancel_waitq)) 2760 + __wake_up(&cancel_waitq, TASK_NORMAL, 1, work); 2761 + 2790 2762 return ret; 2791 2763 } 2792 2764
+1 -1
lib/Makefile
··· 24 24 25 25 obj-y += bcd.o div64.o sort.o parser.o halfmd4.o debug_locks.o random32.o \ 26 26 bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \ 27 - gcd.o lcm.o list_sort.o uuid.o flex_array.o clz_ctz.o \ 27 + gcd.o lcm.o list_sort.o uuid.o flex_array.o iov_iter.o clz_ctz.o \ 28 28 bsearch.o find_last_bit.o find_next_bit.o llist.o memweight.o kfifo.o \ 29 29 percpu-refcount.o percpu_ida.o rhashtable.o reciprocal_div.o 30 30 obj-y += string_helpers.o
+11
lib/lcm.c
··· 12 12 return 0; 13 13 } 14 14 EXPORT_SYMBOL_GPL(lcm); 15 + 16 + unsigned long lcm_not_zero(unsigned long a, unsigned long b) 17 + { 18 + unsigned long l = lcm(a, b); 19 + 20 + if (l) 21 + return l; 22 + 23 + return (b ? : a); 24 + } 25 + EXPORT_SYMBOL_GPL(lcm_not_zero);
+3
lib/lz4/lz4_decompress.c
··· 139 139 /* Error: request to write beyond destination buffer */ 140 140 if (cpy > oend) 141 141 goto _output_error; 142 + if ((ref + COPYLENGTH) > oend || 143 + (op + COPYLENGTH) > oend) 144 + goto _output_error; 142 145 LZ4_SECURECOPY(ref, op, (oend - COPYLENGTH)); 143 146 while (op < cpy) 144 147 *op++ = *ref++;
+2
lib/nlattr.c
··· 279 279 int minlen = min_t(int, count, nla_len(src)); 280 280 281 281 memcpy(dest, nla_data(src), minlen); 282 + if (count > minlen) 283 + memset(dest + minlen, 0, count - minlen); 282 284 283 285 return minlen; 284 286 }
+28 -34
lib/rhashtable.c
··· 17 17 #include <linux/kernel.h> 18 18 #include <linux/init.h> 19 19 #include <linux/log2.h> 20 + #include <linux/sched.h> 20 21 #include <linux/slab.h> 21 22 #include <linux/vmalloc.h> 22 23 #include <linux/mm.h>
··· 218 217 static struct bucket_table *bucket_table_alloc(struct rhashtable *ht, 219 218 size_t nbuckets) 220 219 { 221 - struct bucket_table *tbl; 220 + struct bucket_table *tbl = NULL; 222 221 size_t size; 223 222 int i; 224 223 225 224 size = sizeof(*tbl) + nbuckets * sizeof(tbl->buckets[0]); 226 - tbl = kzalloc(size, GFP_KERNEL | __GFP_NOWARN); 225 + if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) 226 + tbl = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY); 227 227 if (tbl == NULL) 228 228 tbl = vzalloc(size); 229 - 230 229 if (tbl == NULL) 231 230 return NULL; 232 231
··· 248 247 * @ht: hash table 249 248 * @new_size: new table size 250 249 */ 251 - bool rht_grow_above_75(const struct rhashtable *ht, size_t new_size) 250 + static bool rht_grow_above_75(const struct rhashtable *ht, size_t new_size) 252 251 { 253 252 /* Expand table when exceeding 75% load */ 254 253 return atomic_read(&ht->nelems) > (new_size / 4 * 3) && 255 - (ht->p.max_shift && atomic_read(&ht->shift) < ht->p.max_shift); 254 + (!ht->p.max_shift || atomic_read(&ht->shift) < ht->p.max_shift); 256 255 } 257 - EXPORT_SYMBOL_GPL(rht_grow_above_75); 258 256 259 257 /** 260 258 * rht_shrink_below_30 - returns true if nelems < 0.3 * table-size 261 259 * @ht: hash table 262 260 * @new_size: new table size 263 261 */ 264 - bool rht_shrink_below_30(const struct rhashtable *ht, size_t new_size) 262 + static bool rht_shrink_below_30(const struct rhashtable *ht, size_t new_size) 265 263 { 266 264 /* Shrink table beneath 30% load */ 267 265 return atomic_read(&ht->nelems) < (new_size * 3 / 10) && 268 266 (atomic_read(&ht->shift) > ht->p.min_shift); 269 267 } 270 - EXPORT_SYMBOL_GPL(rht_shrink_below_30); 271 268 272 269 static void lock_buckets(struct bucket_table *new_tbl, 273 270 struct bucket_table *old_tbl, unsigned int hash)
··· 413 414 } 414 415 } 415 416 unlock_buckets(new_tbl, old_tbl, new_hash); 417 + cond_resched(); 416 418 } 417 419 418 420 /* Unzip interleaved hash chains */
··· 437 437 complete = false; 438 438 439 439 unlock_buckets(new_tbl, old_tbl, old_hash); 440 + cond_resched(); 440 441 } 441 442 } 442 443
··· 496 495 tbl->buckets[new_hash + new_tbl->size]); 497 496 498 497 unlock_buckets(new_tbl, tbl, new_hash); 498 + cond_resched(); 499 499 } 500 500 501 501 /* Publish the new, valid hash table */
··· 530 528 list_for_each_entry(walker, &ht->walkers, list) 531 529 walker->resize = true; 532 530 533 - if (ht->p.grow_decision && ht->p.grow_decision(ht, tbl->size)) 531 + if (rht_grow_above_75(ht, tbl->size)) 534 532 rhashtable_expand(ht); 535 - else if (ht->p.shrink_decision && ht->p.shrink_decision(ht, tbl->size)) 533 + else if (rht_shrink_below_30(ht, tbl->size)) 536 534 rhashtable_shrink(ht); 537 - 538 535 unlock: 539 536 mutex_unlock(&ht->mutex); 540 537 } 541 538 542 - static void rhashtable_wakeup_worker(struct rhashtable *ht) 543 - { 544 - struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht); 545 - struct bucket_table *new_tbl = rht_dereference_rcu(ht->future_tbl, ht); 546 - size_t size = tbl->size; 547 - 548 - /* Only adjust the table if no resizing is currently in progress. */ 549 - if (tbl == new_tbl && 550 - ((ht->p.grow_decision && ht->p.grow_decision(ht, size)) || 551 - (ht->p.shrink_decision && ht->p.shrink_decision(ht, size)))) 552 - schedule_work(&ht->run_work); 553 - } 554 - 555 539 static void __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj, 556 - struct bucket_table *tbl, u32 hash) 540 + struct bucket_table *tbl, 541 + const struct bucket_table *old_tbl, u32 hash) 557 542 { 543 + bool no_resize_running = tbl == old_tbl; 558 544 struct rhash_head *head; 559 545 560 546 hash = rht_bucket_index(tbl, hash);
··· 558 568 rcu_assign_pointer(tbl->buckets[hash], obj); 559 569 560 570 atomic_inc(&ht->nelems); 561 - 562 - rhashtable_wakeup_worker(ht); 571 + if (no_resize_running && rht_grow_above_75(ht, tbl->size)) 572 + schedule_work(&ht->run_work); 563 573 } 564 574 565 575 /**
··· 589 599 hash = obj_raw_hashfn(ht, rht_obj(ht, obj)); 590 600 591 601 lock_buckets(tbl, old_tbl, hash); 592 - __rhashtable_insert(ht, obj, tbl, hash); 602 + __rhashtable_insert(ht, obj, tbl, old_tbl, hash); 593 603 unlock_buckets(tbl, old_tbl, hash); 594 604 595 605 rcu_read_unlock();
··· 671 681 unlock_buckets(new_tbl, old_tbl, new_hash); 672 682 673 683 if (ret) { 684 + bool no_resize_running = new_tbl == old_tbl; 685 + 674 686 atomic_dec(&ht->nelems); 675 - rhashtable_wakeup_worker(ht); 687 + if (no_resize_running && rht_shrink_below_30(ht, new_tbl->size)) 688 + schedule_work(&ht->run_work); 676 689 } 677 690 678 691 rcu_read_unlock();
··· 845 852 goto exit; 846 853 } 847 854 848 - __rhashtable_insert(ht, obj, new_tbl, new_hash); 855 + __rhashtable_insert(ht, obj, new_tbl, old_tbl, new_hash); 849 856 850 857 exit: 851 858 unlock_buckets(new_tbl, old_tbl, new_hash);
··· 886 893 iter->walker = kmalloc(sizeof(*iter->walker), GFP_KERNEL); 887 894 if (!iter->walker) 888 895 return -ENOMEM; 896 + 897 + INIT_LIST_HEAD(&iter->walker->list); 898 + iter->walker->resize = false; 889 899 890 900 mutex_lock(&ht->mutex); 891 901 list_add(&iter->walker->list, &ht->walkers);
··· 1107 1111 if (!ht->p.hash_rnd) 1108 1112 get_random_bytes(&ht->p.hash_rnd, sizeof(ht->p.hash_rnd)); 1109 1113 1110 - if (ht->p.grow_decision || ht->p.shrink_decision) 1111 - INIT_WORK(&ht->run_work, rht_deferred_worker); 1114 + INIT_WORK(&ht->run_work, rht_deferred_worker); 1112 1115 1113 1116 return 0; 1114 1117 }
··· 1125 1130 { 1126 1131 ht->being_destroyed = true; 1127 1132 1128 - if (ht->p.grow_decision || ht->p.shrink_decision) 1129 - cancel_work_sync(&ht->run_work); 1133 + cancel_work_sync(&ht->run_work); 1130 1134 1131 1135 mutex_lock(&ht->mutex); 1132 1136 bucket_table_free(rht_dereference(ht->tbl, ht));
+2 -2
lib/seq_buf.c
··· 61 61 62 62 if (s->len < s->size) { 63 63 len = vsnprintf(s->buffer + s->len, s->size - s->len, fmt, args); 64 - if (seq_buf_can_fit(s, len)) { 64 + if (s->len + len < s->size) { 65 65 s->len += len; 66 66 return 0; 67 67 } ··· 118 118 119 119 if (s->len < s->size) { 120 120 ret = bstr_printf(s->buffer + s->len, len, fmt, binary); 121 - if (seq_buf_can_fit(s, ret)) { 121 + if (s->len + ret < s->size) { 122 122 s->len += ret; 123 123 return 0; 124 124 }
+8 -3
lib/test_rhashtable.c
··· 191 191 return err; 192 192 } 193 193 194 + static struct rhashtable ht; 195 + 194 196 static int __init test_rht_init(void) 195 197 { 196 - struct rhashtable ht; 197 198 struct rhashtable_params params = { 198 199 .nelem_hint = TEST_HT_SIZE, 199 200 .head_offset = offsetof(struct test_obj, node), 200 201 .key_offset = offsetof(struct test_obj, value), 201 202 .key_len = sizeof(int), 202 203 .hashfn = jhash, 204 + .max_shift = 1, /* we expand/shrink manually here */ 203 205 .nulls_base = (3U << RHT_BASE_SHIFT), 204 - .grow_decision = rht_grow_above_75, 205 - .shrink_decision = rht_shrink_below_30, 206 206 }; 207 207 int err; 208 208 ··· 222 222 return err; 223 223 } 224 224 225 + static void __exit test_rht_exit(void) 226 + { 227 + } 228 + 225 229 module_init(test_rht_init); 230 + module_exit(test_rht_exit); 226 231 227 232 MODULE_LICENSE("GPL v2");
+1 -1
mm/Makefile
··· 21 21 mm_init.o mmu_context.o percpu.o slab_common.o \ 22 22 compaction.o vmacache.o \ 23 23 interval_tree.o list_lru.o workingset.o \ 24 - iov_iter.o debug.o $(mmu-y) 24 + debug.o $(mmu-y) 25 25 26 26 obj-y += init-mm.o 27 27
+7 -5
mm/cma.c
··· 64 64 return (1UL << (align_order - cma->order_per_bit)) - 1; 65 65 } 66 66 67 + /* 68 + * Find a PFN aligned to the specified order and return an offset represented in 69 + * order_per_bits. 70 + */ 67 71 static unsigned long cma_bitmap_aligned_offset(struct cma *cma, int align_order) 68 72 { 69 - unsigned int alignment; 70 - 71 73 if (align_order <= cma->order_per_bit) 72 74 return 0; 73 - alignment = 1UL << (align_order - cma->order_per_bit); 74 - return ALIGN(cma->base_pfn, alignment) - 75 - (cma->base_pfn >> cma->order_per_bit); 75 + 76 + return (ALIGN(cma->base_pfn, (1UL << align_order)) 77 + - cma->base_pfn) >> cma->order_per_bit; 76 78 } 77 79 78 80 static unsigned long cma_bitmap_maxno(struct cma *cma)
+15 -10
mm/huge_memory.c
··· 1260 1260 int target_nid, last_cpupid = -1; 1261 1261 bool page_locked; 1262 1262 bool migrated = false; 1263 + bool was_writable; 1263 1264 int flags = 0; 1264 1265 1265 1266 /* A PROT_NONE fault should not end up here */
··· 1292 1291 flags |= TNF_FAULT_LOCAL; 1293 1292 } 1294 1293 1295 - /* 1296 - * Avoid grouping on DSO/COW pages in specific and RO pages 1297 - * in general, RO pages shouldn't hurt as much anyway since 1298 - * they can be in shared cache state. 1299 - */ 1300 - if (!pmd_write(pmd)) 1294 + /* See similar comment in do_numa_page for explanation */ 1295 + if (!(vma->vm_flags & VM_WRITE)) 1301 1296 flags |= TNF_NO_GROUP; 1302 1297 1303 1298 /*
··· 1350 1353 if (migrated) { 1351 1354 flags |= TNF_MIGRATED; 1352 1355 page_nid = target_nid; 1353 - } 1356 + } else 1357 + flags |= TNF_MIGRATE_FAIL; 1354 1358 1355 1359 goto out; 1356 1360 clear_pmdnuma: 1357 1361 BUG_ON(!PageLocked(page)); 1362 + was_writable = pmd_write(pmd); 1358 1363 pmd = pmd_modify(pmd, vma->vm_page_prot); 1364 + pmd = pmd_mkyoung(pmd); 1365 + if (was_writable) 1366 + pmd = pmd_mkwrite(pmd); 1359 1367 set_pmd_at(mm, haddr, pmdp, pmd); 1360 1368 update_mmu_cache_pmd(vma, addr, pmdp); 1361 1369 unlock_page(page);
··· 1484 1482 1485 1483 if (__pmd_trans_huge_lock(pmd, vma, &ptl) == 1) { 1486 1484 pmd_t entry; 1485 + bool preserve_write = prot_numa && pmd_write(*pmd); 1486 + ret = 1; 1487 1487 1488 1488 /* 1489 1489 * Avoid trapping faults against the zero page. The read-only
··· 1494 1490 */ 1495 1491 if (prot_numa && is_huge_zero_pmd(*pmd)) { 1496 1492 spin_unlock(ptl); 1497 - return 0; 1493 + return ret; 1498 1494 } 1499 1495 1500 1496 if (!prot_numa || !pmd_protnone(*pmd)) { 1501 - ret = 1; 1502 1497 entry = pmdp_get_and_clear_notify(mm, addr, pmd); 1503 1498 entry = pmd_modify(entry, newprot); 1499 + if (preserve_write) 1500 + entry = pmd_mkwrite(entry); 1504 1501 ret = HPAGE_PMD_NR; 1505 1502 set_pmd_at(mm, addr, pmd, entry); 1506 - BUG_ON(pmd_write(entry)); 1503 + BUG_ON(!preserve_write && pmd_write(entry)); 1507 1504 } 1508 1505 spin_unlock(ptl); 1509 1506 }
+3 -1
mm/hugetlb.c
··· 917 917 __SetPageHead(page); 918 918 __ClearPageReserved(page); 919 919 for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) { 920 - __SetPageTail(p); 921 920 /* 922 921 * For gigantic hugepages allocated through bootmem at 923 922 * boot, it's safer to be consistent with the not-gigantic ··· 932 933 __ClearPageReserved(p); 933 934 set_page_count(p, 0); 934 935 p->first_page = page; 936 + /* Make sure p->first_page is always valid for PageTail() */ 937 + smp_wmb(); 938 + __SetPageTail(p); 935 939 } 936 940 } 937 941
+15
mm/iov_iter.c lib/iov_iter.c
··· 751 751 return npages; 752 752 } 753 753 EXPORT_SYMBOL(iov_iter_npages); 754 + 755 + const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags) 756 + { 757 + *new = *old; 758 + if (new->type & ITER_BVEC) 759 + return new->bvec = kmemdup(new->bvec, 760 + new->nr_segs * sizeof(struct bio_vec), 761 + flags); 762 + else 763 + /* iovec and kvec have identical layout */ 764 + return new->iov = kmemdup(new->iov, 765 + new->nr_segs * sizeof(struct iovec), 766 + flags); 767 + } 768 + EXPORT_SYMBOL(dup_iter);
+11 -3
mm/kasan/kasan.c
··· 29 29 #include <linux/stacktrace.h> 30 30 #include <linux/string.h> 31 31 #include <linux/types.h> 32 + #include <linux/vmalloc.h> 32 33 #include <linux/kasan.h> 33 34 34 35 #include "kasan.h" ··· 415 414 GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO, 416 415 PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE, 417 416 __builtin_return_address(0)); 418 - return ret ? 0 : -ENOMEM; 417 + 418 + if (ret) { 419 + find_vm_area(addr)->flags |= VM_KASAN; 420 + return 0; 421 + } 422 + 423 + return -ENOMEM; 419 424 } 420 425 421 - void kasan_module_free(void *addr) 426 + void kasan_free_shadow(const struct vm_struct *vm) 422 427 { 423 - vfree(kasan_mem_to_shadow(addr)); 428 + if (vm->flags & VM_KASAN) 429 + vfree(kasan_mem_to_shadow(vm->addr)); 424 430 } 425 431 426 432 static void register_global(struct kasan_global *global)
+3 -1
mm/memcontrol.c
··· 5232 5232 * on for the root memcg is enough. 5233 5233 */ 5234 5234 if (cgroup_on_dfl(root_css->cgroup)) 5235 - mem_cgroup_from_css(root_css)->use_hierarchy = true; 5235 + root_mem_cgroup->use_hierarchy = true; 5236 + else 5237 + root_mem_cgroup->use_hierarchy = false; 5236 5238 } 5237 5239 5238 5240 static u64 memory_current_read(struct cgroup_subsys_state *css,
+12 -5
mm/memory.c
··· 3035 3035 int last_cpupid; 3036 3036 int target_nid; 3037 3037 bool migrated = false; 3038 + bool was_writable = pte_write(pte); 3038 3039 int flags = 0; 3039 3040 3040 3041 /* A PROT_NONE fault should not end up here */ ··· 3060 3059 /* Make it present again */ 3061 3060 pte = pte_modify(pte, vma->vm_page_prot); 3062 3061 pte = pte_mkyoung(pte); 3062 + if (was_writable) 3063 + pte = pte_mkwrite(pte); 3063 3064 set_pte_at(mm, addr, ptep, pte); 3064 3065 update_mmu_cache(vma, addr, ptep); 3065 3066 ··· 3072 3069 } 3073 3070 3074 3071 /* 3075 - * Avoid grouping on DSO/COW pages in specific and RO pages 3076 - * in general, RO pages shouldn't hurt as much anyway since 3077 - * they can be in shared cache state. 3072 + * Avoid grouping on RO pages in general. RO pages shouldn't hurt as 3073 + * much anyway since they can be in shared cache state. This misses 3074 + * the case where a mapping is writable but the process never writes 3075 + * to it but pte_write gets cleared during protection updates and 3076 + * pte_dirty has unpredictable behaviour between PTE scan updates, 3077 + * background writeback, dirty balancing and application behaviour. 3078 3078 */ 3079 - if (!pte_write(pte)) 3079 + if (!(vma->vm_flags & VM_WRITE)) 3080 3080 flags |= TNF_NO_GROUP; 3081 3081 3082 3082 /* ··· 3103 3097 if (migrated) { 3104 3098 page_nid = target_nid; 3105 3099 flags |= TNF_MIGRATED; 3106 - } 3100 + } else 3101 + flags |= TNF_MIGRATE_FAIL; 3107 3102 3108 3103 out: 3109 3104 if (page_nid != -1)
+4 -9
mm/memory_hotplug.c
··· 1092 1092 return NULL; 1093 1093 1094 1094 arch_refresh_nodedata(nid, pgdat); 1095 + } else { 1096 + /* Reset the nr_zones and classzone_idx to 0 before reuse */ 1097 + pgdat->nr_zones = 0; 1098 + pgdat->classzone_idx = 0; 1095 1099 } 1096 1100 1097 1101 /* we can use NODE_DATA(nid) from here */ ··· 1981 1977 if (is_vmalloc_addr(zone->wait_table)) 1982 1978 vfree(zone->wait_table); 1983 1979 } 1984 - 1985 - /* 1986 - * Since there is no way to guarentee the address of pgdat/zone is not 1987 - * on stack of any kernel threads or used by other kernel objects 1988 - * without reference counting or other symchronizing method, do not 1989 - * reset node_data and free pgdat here. Just reset it to 0 and reuse 1990 - * the memory when the node is online again. 1991 - */ 1992 - memset(pgdat, 0, sizeof(*pgdat)); 1993 1980 } 1994 1981 EXPORT_SYMBOL(try_offline_node); 1995 1982
+2 -2
mm/mlock.c
··· 26 26 27 27 int can_do_mlock(void) 28 28 { 29 - if (capable(CAP_IPC_LOCK)) 30 - return 1; 31 29 if (rlimit(RLIMIT_MEMLOCK) != 0) 30 + return 1; 31 + if (capable(CAP_IPC_LOCK)) 32 32 return 1; 33 33 return 0; 34 34 }
+1 -3
mm/mmap.c
··· 774 774 775 775 importer->anon_vma = exporter->anon_vma; 776 776 error = anon_vma_clone(importer, exporter); 777 - if (error) { 778 - importer->anon_vma = NULL; 777 + if (error) 779 778 return error; 780 - } 781 779 } 782 780 } 783 781
+3
mm/mprotect.c
··· 75 75 oldpte = *pte; 76 76 if (pte_present(oldpte)) { 77 77 pte_t ptent; 78 + bool preserve_write = prot_numa && pte_write(oldpte); 78 79 79 80 /* 80 81 * Avoid trapping faults against the zero or KSM ··· 95 94 96 95 ptent = ptep_modify_prot_start(mm, addr, pte); 97 96 ptent = pte_modify(ptent, newprot); 97 + if (preserve_write) 98 + ptent = pte_mkwrite(ptent); 98 99 99 100 /* Avoid taking write faults for known dirty pages */ 100 101 if (dirty_accountable && pte_dirty(ptent) &&
+1
mm/nommu.c
··· 62 62 EXPORT_SYMBOL(high_memory); 63 63 struct page *mem_map; 64 64 unsigned long max_mapnr; 65 + EXPORT_SYMBOL(max_mapnr); 65 66 unsigned long highest_memmap_pfn; 66 67 struct percpu_counter vm_committed_as; 67 68 int sysctl_overcommit_memory = OVERCOMMIT_GUESS; /* heuristic overcommit */
+5 -2
mm/page-writeback.c
··· 857 857 * bw * elapsed + write_bandwidth * (period - elapsed) 858 858 * write_bandwidth = --------------------------------------------------- 859 859 * period 860 + * 861 + * @written may have decreased due to account_page_redirty(). 862 + * Avoid underflowing @bw calculation. 860 863 */ 861 - bw = written - bdi->written_stamp; 864 + bw = written - min(written, bdi->written_stamp); 862 865 bw *= HZ; 863 866 if (unlikely(elapsed > period)) { 864 867 do_div(bw, elapsed); ··· 925 922 unsigned long now) 926 923 { 927 924 static DEFINE_SPINLOCK(dirty_lock); 928 - static unsigned long update_time; 925 + static unsigned long update_time = INITIAL_JIFFIES; 929 926 930 927 /* 931 928 * check locklessly first to optimize away locking for the most time
+2 -1
mm/page_alloc.c
··· 2373 2373 goto out; 2374 2374 } 2375 2375 /* Exhausted what can be done so it's blamo time */ 2376 - if (out_of_memory(ac->zonelist, gfp_mask, order, ac->nodemask, false)) 2376 + if (out_of_memory(ac->zonelist, gfp_mask, order, ac->nodemask, false) 2377 + || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL)) 2377 2378 *did_some_progress = 1; 2378 2379 out: 2379 2380 oom_zonelist_unlock(ac->zonelist, gfp_mask);
+1
mm/page_isolation.c
··· 103 103 104 104 if (!is_migrate_isolate_page(buddy)) { 105 105 __isolate_free_page(page, order); 106 + kernel_map_pages(page, (1 << order), 1); 106 107 set_page_refcounted(page); 107 108 isolated_page = page; 108 109 }
+8 -1
mm/pagewalk.c
··· 265 265 vma = vma->vm_next; 266 266 267 267 err = walk_page_test(start, next, walk); 268 - if (err > 0) 268 + if (err > 0) { 269 + /* 270 + * positive return values are purely for 271 + * controlling the pagewalk, so should never 272 + * be passed to the callers. 273 + */ 274 + err = 0; 269 275 continue; 276 + } 270 277 if (err < 0) 271 278 break; 272 279 }
+7
mm/rmap.c
··· 287 287 return 0; 288 288 289 289 enomem_failure: 290 + /* 291 + * dst->anon_vma is dropped here otherwise its degree can be incorrectly 292 + * decremented in unlink_anon_vmas(). 293 + * We can safely do this because callers of anon_vma_clone() don't care 294 + * about dst->anon_vma if anon_vma_clone() failed. 295 + */ 296 + dst->anon_vma = NULL; 290 297 unlink_anon_vmas(dst); 291 298 return -ENOMEM; 292 299 }
+4 -2
mm/slub.c
··· 2449 2449 do { 2450 2450 tid = this_cpu_read(s->cpu_slab->tid); 2451 2451 c = raw_cpu_ptr(s->cpu_slab); 2452 - } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid)); 2452 + } while (IS_ENABLED(CONFIG_PREEMPT) && 2453 + unlikely(tid != READ_ONCE(c->tid))); 2453 2454 2454 2455 /* 2455 2456 * Irqless object alloc/free algorithm used here depends on sequence ··· 2719 2718 do { 2720 2719 tid = this_cpu_read(s->cpu_slab->tid); 2721 2720 c = raw_cpu_ptr(s->cpu_slab); 2722 - } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid)); 2721 + } while (IS_ENABLED(CONFIG_PREEMPT) && 2722 + unlikely(tid != READ_ONCE(c->tid))); 2723 2723 2724 2724 /* Same with comment on barrier() in slab_alloc_node() */ 2725 2725 barrier();
+1
mm/vmalloc.c
··· 1418 1418 spin_unlock(&vmap_area_lock); 1419 1419 1420 1420 vmap_debug_free_range(va->va_start, va->va_end); 1421 + kasan_free_shadow(vm); 1421 1422 free_unmap_vmap_area(va); 1422 1423 vm->size -= PAGE_SIZE; 1423 1424
+20 -4
net/9p/trans_virtio.c
··· 658 658 static void p9_virtio_remove(struct virtio_device *vdev) 659 659 { 660 660 struct virtio_chan *chan = vdev->priv; 661 - 662 - if (chan->inuse) 663 - p9_virtio_close(chan->client); 664 - vdev->config->del_vqs(vdev); 661 + unsigned long warning_time; 665 662 666 663 mutex_lock(&virtio_9p_lock); 664 + 665 + /* Remove self from list so we don't get new users. */ 667 666 list_del(&chan->chan_list); 667 + warning_time = jiffies; 668 + 669 + /* Wait for existing users to close. */ 670 + while (chan->inuse) { 671 + mutex_unlock(&virtio_9p_lock); 672 + msleep(250); 673 + if (time_after(jiffies, warning_time + 10 * HZ)) { 674 + dev_emerg(&vdev->dev, 675 + "p9_virtio_remove: waiting for device in use.\n"); 676 + warning_time = jiffies; 677 + } 678 + mutex_lock(&virtio_9p_lock); 679 + } 680 + 668 681 mutex_unlock(&virtio_9p_lock); 682 + 683 + vdev->config->del_vqs(vdev); 684 + 669 685 sysfs_remove_file(&(vdev->dev.kobj), &dev_attr_mount_tag.attr); 670 686 kobject_uevent(&(vdev->dev.kobj), KOBJ_CHANGE); 671 687 kfree(chan->tag);
+2
net/bridge/br.c
··· 190 190 { 191 191 int err; 192 192 193 + BUILD_BUG_ON(sizeof(struct br_input_skb_cb) > FIELD_SIZEOF(struct sk_buff, cb)); 194 + 193 195 err = stp_proto_register(&br_stp_proto); 194 196 if (err < 0) { 195 197 pr_err("bridge: can't register sap for STP\n");
+2
net/bridge/br_if.c
··· 563 563 */ 564 564 del_nbp(p); 565 565 566 + dev_set_mtu(br->dev, br_min_mtu(br)); 567 + 566 568 spin_lock_bh(&br->lock); 567 569 changed_addr = br_stp_recalculate_bridge_id(br); 568 570 spin_unlock_bh(&br->lock);
+1 -1
net/caif/caif_socket.c
··· 281 281 int copylen; 282 282 283 283 ret = -EOPNOTSUPP; 284 - if (m->msg_flags&MSG_OOB) 284 + if (flags & MSG_OOB) 285 285 goto read_error; 286 286 287 287 skb = skb_recv_datagram(sk, flags, 0 , &ret);
+1 -1
net/caif/cffrml.c
··· 84 84 u16 tmp; 85 85 u16 len; 86 86 u16 hdrchks; 87 - u16 pktchks; 87 + int pktchks; 88 88 struct cffrml *this; 89 89 this = container_obj(layr); 90 90
+3 -3
net/caif/cfpkt_skbuff.c
··· 255 255 return skb->len; 256 256 } 257 257 258 - inline u16 cfpkt_iterate(struct cfpkt *pkt, 259 - u16 (*iter_func)(u16, void *, u16), 260 - u16 data) 258 + int cfpkt_iterate(struct cfpkt *pkt, 259 + u16 (*iter_func)(u16, void *, u16), 260 + u16 data) 261 261 { 262 262 /* 263 263 * Don't care about the performance hit of linearizing,
+3
net/can/af_can.c
··· 259 259 goto inval_skb; 260 260 } 261 261 262 + skb->ip_summed = CHECKSUM_UNNECESSARY; 263 + 264 + skb_reset_mac_header(skb); 262 265 skb_reset_network_header(skb); 263 266 skb_reset_transport_header(skb); 264 267
+7 -9
net/compat.c
··· 49 49 __get_user(kmsg->msg_controllen, &umsg->msg_controllen) || 50 50 __get_user(kmsg->msg_flags, &umsg->msg_flags)) 51 51 return -EFAULT; 52 + 53 + if (!uaddr) 54 + kmsg->msg_namelen = 0; 55 + 56 + if (kmsg->msg_namelen < 0) 57 + return -EINVAL; 58 + 52 59 if (kmsg->msg_namelen > sizeof(struct sockaddr_storage)) 53 60 kmsg->msg_namelen = sizeof(struct sockaddr_storage); 54 61 kmsg->msg_control = compat_ptr(tmp3); ··· 718 711 719 712 COMPAT_SYSCALL_DEFINE3(sendmsg, int, fd, struct compat_msghdr __user *, msg, unsigned int, flags) 720 713 { 721 - if (flags & MSG_CMSG_COMPAT) 722 - return -EINVAL; 723 714 return __sys_sendmsg(fd, (struct user_msghdr __user *)msg, flags | MSG_CMSG_COMPAT); 724 715 } 725 716 726 717 COMPAT_SYSCALL_DEFINE4(sendmmsg, int, fd, struct compat_mmsghdr __user *, mmsg, 727 718 unsigned int, vlen, unsigned int, flags) 728 719 { 729 - if (flags & MSG_CMSG_COMPAT) 730 - return -EINVAL; 731 720 return __sys_sendmmsg(fd, (struct mmsghdr __user *)mmsg, vlen, 732 721 flags | MSG_CMSG_COMPAT); 733 722 } 734 723 735 724 COMPAT_SYSCALL_DEFINE3(recvmsg, int, fd, struct compat_msghdr __user *, msg, unsigned int, flags) 736 725 { 737 - if (flags & MSG_CMSG_COMPAT) 738 - return -EINVAL; 739 726 return __sys_recvmsg(fd, (struct user_msghdr __user *)msg, flags | MSG_CMSG_COMPAT); 740 727 } 741 728 ··· 751 750 { 752 751 int datagrams; 753 752 struct timespec ktspec; 754 - 755 - if (flags & MSG_CMSG_COMPAT) 756 - return -EINVAL; 757 753 758 754 if (timeout == NULL) 759 755 return __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
+4 -2
net/core/dev.c
··· 946 946 return false; 947 947 948 948 while (*name) { 949 - if (*name == '/' || isspace(*name)) 949 + if (*name == '/' || *name == ':' || isspace(*name)) 950 950 return false; 951 951 name++; 952 952 } ··· 2848 2848 #define skb_update_prio(skb) 2849 2849 #endif 2850 2850 2851 - static DEFINE_PER_CPU(int, xmit_recursion); 2851 + DEFINE_PER_CPU(int, xmit_recursion); 2852 + EXPORT_SYMBOL(xmit_recursion); 2853 + 2852 2854 #define RECURSION_LIMIT 10 2853 2855 2854 2856 /**
+1
net/core/ethtool.c
··· 98 98 [NETIF_F_RXALL_BIT] = "rx-all", 99 99 [NETIF_F_HW_L2FW_DOFFLOAD_BIT] = "l2-fwd-offload", 100 100 [NETIF_F_BUSY_POLL_BIT] = "busy-poll", 101 + [NETIF_F_HW_SWITCH_OFFLOAD_BIT] = "hw-switch-offload", 101 102 }; 102 103 103 104 static const char
+1 -1
net/core/fib_rules.c
··· 175 175 176 176 spin_lock(&net->rules_mod_lock); 177 177 list_del_rcu(&ops->list); 178 - fib_rules_cleanup_ops(ops); 179 178 spin_unlock(&net->rules_mod_lock); 180 179 180 + fib_rules_cleanup_ops(ops); 181 181 call_rcu(&ops->rcu, fib_rules_put_rcu); 182 182 } 183 183 EXPORT_SYMBOL_GPL(fib_rules_unregister);
+14 -1
net/core/gen_stats.c
··· 32 32 return 0; 33 33 34 34 nla_put_failure: 35 + kfree(d->xstats); 36 + d->xstats = NULL; 37 + d->xstats_len = 0; 35 38 spin_unlock_bh(d->lock); 36 39 return -1; 37 40 } ··· 308 305 gnet_stats_copy_app(struct gnet_dump *d, void *st, int len) 309 306 { 310 307 if (d->compat_xstats) { 311 - d->xstats = st; 308 + d->xstats = kmemdup(st, len, GFP_ATOMIC); 309 + if (!d->xstats) 310 + goto err_out; 312 311 d->xstats_len = len; 313 312 } 314 313 ··· 318 313 return gnet_stats_copy(d, TCA_STATS_APP, st, len); 319 314 320 315 return 0; 316 + 317 + err_out: 318 + d->xstats_len = 0; 319 + spin_unlock_bh(d->lock); 320 + return -1; 321 321 } 322 322 EXPORT_SYMBOL(gnet_stats_copy_app); 323 323 ··· 355 345 return -1; 356 346 } 357 347 348 + kfree(d->xstats); 349 + d->xstats = NULL; 350 + d->xstats_len = 0; 358 351 spin_unlock_bh(d->lock); 359 352 return 0; 360 353 }
+3 -1
net/core/net_namespace.c
··· 198 198 */ 199 199 int peernet2id(struct net *net, struct net *peer) 200 200 { 201 - int id = __peernet2id(net, peer, true); 201 + bool alloc = atomic_read(&peer->count) == 0 ? false : true; 202 + int id; 202 203 204 + id = __peernet2id(net, peer, alloc); 203 205 return id >= 0 ? id : NETNSA_NSID_NOT_ASSIGNED; 204 206 } 205 207 EXPORT_SYMBOL(peernet2id);
+3
net/core/pktgen.c
··· 1134 1134 return len; 1135 1135 1136 1136 i += len; 1137 + if ((value > 1) && 1138 + (!(pkt_dev->odev->priv_flags & IFF_TX_SKB_SHARING))) 1139 + return -ENOTSUPP; 1137 1140 pkt_dev->burst = value < 1 ? 1 : value; 1138 1141 sprintf(pg_result, "OK: burst=%d", pkt_dev->burst); 1139 1142 return count;
+25 -20
net/core/rtnetlink.c
··· 1300 1300 s_h = cb->args[0]; 1301 1301 s_idx = cb->args[1]; 1302 1302 1303 - rcu_read_lock(); 1304 1303 cb->seq = net->dev_base_seq; 1305 1304 1306 1305 /* A hack to preserve kernel<->userspace interface. ··· 1321 1322 for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) { 1322 1323 idx = 0; 1323 1324 head = &net->dev_index_head[h]; 1324 - hlist_for_each_entry_rcu(dev, head, index_hlist) { 1325 + hlist_for_each_entry(dev, head, index_hlist) { 1325 1326 if (idx < s_idx) 1326 1327 goto cont; 1327 1328 err = rtnl_fill_ifinfo(skb, dev, RTM_NEWLINK, ··· 1343 1344 } 1344 1345 } 1345 1346 out: 1346 - rcu_read_unlock(); 1347 1347 cb->args[1] = idx; 1348 1348 cb->args[0] = h; 1349 1349 ··· 1932 1934 struct ifinfomsg *ifm, 1933 1935 struct nlattr **tb) 1934 1936 { 1935 - struct net_device *dev; 1937 + struct net_device *dev, *aux; 1936 1938 int err; 1937 1939 1938 - for_each_netdev(net, dev) { 1940 + for_each_netdev_safe(net, dev, aux) { 1939 1941 if (dev->group == group) { 1940 1942 err = do_setlink(skb, dev, ifm, tb, NULL, 0); 1941 1943 if (err < 0) ··· 2010 2012 } 2011 2013 2012 2014 if (1) { 2013 - struct nlattr *attr[ops ? ops->maxtype + 1 : 0]; 2014 - struct nlattr *slave_attr[m_ops ? m_ops->slave_maxtype + 1 : 0]; 2015 + struct nlattr *attr[ops ? ops->maxtype + 1 : 1]; 2016 + struct nlattr *slave_attr[m_ops ? m_ops->slave_maxtype + 1 : 1]; 2015 2017 struct nlattr **data = NULL; 2016 2018 struct nlattr **slave_data = NULL; 2017 2019 struct net *dest_net, *link_net = NULL; ··· 2120 2122 if (IS_ERR(dest_net)) 2121 2123 return PTR_ERR(dest_net); 2122 2124 2125 + err = -EPERM; 2126 + if (!netlink_ns_capable(skb, dest_net->user_ns, CAP_NET_ADMIN)) 2127 + goto out; 2128 + 2123 2129 if (tb[IFLA_LINK_NETNSID]) { 2124 2130 int id = nla_get_s32(tb[IFLA_LINK_NETNSID]); 2125 2131 ··· 2132 2130 err = -EINVAL; 2133 2131 goto out; 2134 2132 } 2133 + err = -EPERM; 2134 + if (!netlink_ns_capable(skb, link_net->user_ns, CAP_NET_ADMIN)) 2135 + goto out; 2135 2136 } 2136 2137 2137 2138 dev = rtnl_create_link(link_net ? : dest_net, ifname, ··· 2166 2161 } 2167 2162 } 2168 2163 err = rtnl_configure_link(dev, ifm); 2169 - if (err < 0) { 2170 - if (ops->newlink) { 2171 - LIST_HEAD(list_kill); 2172 - 2173 - ops->dellink(dev, &list_kill); 2174 - unregister_netdevice_many(&list_kill); 2175 - } else { 2176 - unregister_netdevice(dev); 2177 - } 2178 - goto out; 2179 - } 2180 - 2164 + if (err < 0) 2165 + goto out_unregister; 2181 2166 if (link_net) { 2182 2167 err = dev_change_net_namespace(dev, dest_net, ifname); 2183 2168 if (err < 0) 2184 - unregister_netdevice(dev); 2169 + goto out_unregister; 2185 2170 } 2186 2171 out: 2187 2172 if (link_net) 2188 2173 put_net(link_net); 2189 2174 put_net(dest_net); 2190 2175 return err; 2176 + out_unregister: 2177 + if (ops->newlink) { 2178 + LIST_HEAD(list_kill); 2179 + 2180 + ops->dellink(dev, &list_kill); 2181 + unregister_netdevice_many(&list_kill); 2182 + } else { 2183 + unregister_netdevice(dev); 2184 + } 2185 + goto out; 2191 2186 } 2192 2187 2193 2188
+10 -5
net/core/skbuff.c
··· 3621 3621 { 3622 3622 struct sk_buff_head *q = &sk->sk_error_queue; 3623 3623 struct sk_buff *skb, *skb_next; 3624 + unsigned long flags; 3624 3625 int err = 0; 3625 3626 3626 - spin_lock_bh(&q->lock); 3627 + spin_lock_irqsave(&q->lock, flags); 3627 3628 skb = __skb_dequeue(q); 3628 3629 if (skb && (skb_next = skb_peek(q))) 3629 3630 err = SKB_EXT_ERR(skb_next)->ee.ee_errno; 3630 - spin_unlock_bh(&q->lock); 3631 + spin_unlock_irqrestore(&q->lock, flags); 3631 3632 3632 3633 sk->sk_err = err; 3633 3634 if (err) ··· 3733 3732 struct sock *sk, int tstype) 3734 3733 { 3735 3734 struct sk_buff *skb; 3736 - bool tsonly = sk->sk_tsflags & SOF_TIMESTAMPING_OPT_TSONLY; 3735 + bool tsonly; 3737 3736 3738 - if (!sk || !skb_may_tx_timestamp(sk, tsonly)) 3737 + if (!sk) 3738 + return; 3739 + 3740 + tsonly = sk->sk_tsflags & SOF_TIMESTAMPING_OPT_TSONLY; 3741 + if (!skb_may_tx_timestamp(sk, tsonly)) 3739 3742 return; 3740 3743 3741 3744 if (tsonly) ··· 4177 4172 skb->ignore_df = 0; 4178 4173 skb_dst_drop(skb); 4179 4174 skb->mark = 0; 4180 - skb->sender_cpu = 0; 4175 + skb_sender_cpu_clear(skb); 4181 4176 skb_init_secmark(skb); 4182 4177 secpath_reset(skb); 4183 4178 nf_reset(skb);
+23
net/core/sock.c
··· 653 653 sock_reset_flag(sk, bit); 654 654 } 655 655 656 + bool sk_mc_loop(struct sock *sk) 657 + { 658 + if (dev_recursion_level()) 659 + return false; 660 + if (!sk) 661 + return true; 662 + switch (sk->sk_family) { 663 + case AF_INET: 664 + return inet_sk(sk)->mc_loop; 665 + #if IS_ENABLED(CONFIG_IPV6) 666 + case AF_INET6: 667 + return inet6_sk(sk)->mc_loop; 668 + #endif 669 + } 670 + WARN_ON(1); 671 + return true; 672 + } 673 + EXPORT_SYMBOL(sk_mc_loop); 674 + 656 675 /* 657 676 * This is meant for all protocols to use and covers goings on 658 677 * at the socket level. Everything here is generic. ··· 1674 1655 } 1675 1656 EXPORT_SYMBOL(sock_rfree); 1676 1657 1658 + /* 1659 + * Buffer destructor for skbs that are not used directly in read or write 1660 + * path, e.g. for error handler skbs. Automatically called from kfree_skb. 1661 + */ 1677 1662 void sock_efree(struct sk_buff *skb) 1678 1663 { 1679 1664 sock_put(skb->sk);
+6 -4
net/core/sysctl_net_core.c
··· 25 25 static int zero = 0; 26 26 static int one = 1; 27 27 static int ushort_max = USHRT_MAX; 28 + static int min_sndbuf = SOCK_MIN_SNDBUF; 29 + static int min_rcvbuf = SOCK_MIN_RCVBUF; 28 30 29 31 static int net_msg_warn; /* Unused, but still a sysctl */ 30 32 ··· 239 237 .maxlen = sizeof(int), 240 238 .mode = 0644, 241 239 .proc_handler = proc_dointvec_minmax, 242 - .extra1 = &one, 240 + .extra1 = &min_sndbuf, 243 241 }, 244 242 { 245 243 .procname = "rmem_max", ··· 247 245 .maxlen = sizeof(int), 248 246 .mode = 0644, 249 247 .proc_handler = proc_dointvec_minmax, 250 - .extra1 = &one, 248 + .extra1 = &min_rcvbuf, 251 249 }, 252 250 { 253 251 .procname = "wmem_default", ··· 255 253 .maxlen = sizeof(int), 256 254 .mode = 0644, 257 255 .proc_handler = proc_dointvec_minmax, 258 - .extra1 = &one, 256 + .extra1 = &min_sndbuf, 259 257 }, 260 258 { 261 259 .procname = "rmem_default", ··· 263 261 .maxlen = sizeof(int), 264 262 .mode = 0644, 265 263 .proc_handler = proc_dointvec_minmax, 266 - .extra1 = &one, 264 + .extra1 = &min_rcvbuf, 267 265 }, 268 266 { 269 267 .procname = "dev_weight",
+1 -1
net/decnet/dn_route.c
··· 1062 1062 if (decnet_debug_level & 16) 1063 1063 printk(KERN_DEBUG 1064 1064 "dn_route_output_slow: initial checks complete." 1065 - " dst=%o4x src=%04x oif=%d try_hard=%d\n", 1065 + " dst=%04x src=%04x oif=%d try_hard=%d\n", 1066 1066 le16_to_cpu(fld.daddr), le16_to_cpu(fld.saddr), 1067 1067 fld.flowidn_oif, try_hard); 1068 1068
+2
net/decnet/dn_rules.c
··· 248 248 249 249 void __exit dn_fib_rules_cleanup(void) 250 250 { 251 + rtnl_lock(); 251 252 fib_rules_unregister(dn_fib_rules_ops); 253 + rtnl_unlock(); 252 254 rcu_barrier(); 253 255 } 254 256
+7 -16
net/dsa/dsa.c
··· 501 501 #ifdef CONFIG_OF 502 502 static int dsa_of_setup_routing_table(struct dsa_platform_data *pd, 503 503 struct dsa_chip_data *cd, 504 - int chip_index, 504 + int chip_index, int port_index, 505 505 struct device_node *link) 506 506 { 507 - int ret; 508 507 const __be32 *reg; 509 - int link_port_addr; 510 508 int link_sw_addr; 511 509 struct device_node *parent_sw; 512 510 int len; ··· 517 519 if (!reg || (len != sizeof(*reg) * 2)) 518 520 return -EINVAL; 519 521 522 + /* 523 + * Get the destination switch number from the second field of its 'reg' 524 + * property, i.e. for "reg = <0x19 1>" sw_addr is '1'. 525 + */ 520 526 link_sw_addr = be32_to_cpup(reg + 1); 521 527 522 528 if (link_sw_addr >= pd->nr_chips) ··· 537 535 memset(cd->rtable, -1, pd->nr_chips * sizeof(s8)); 538 536 } 539 537 540 - reg = of_get_property(link, "reg", NULL); 541 - if (!reg) { 542 - ret = -EINVAL; 543 - goto out; 544 - } 545 - 546 - link_port_addr = be32_to_cpup(reg); 547 - 548 - cd->rtable[link_sw_addr] = link_port_addr; 538 + cd->rtable[link_sw_addr] = port_index; 549 539 550 540 return 0; 551 - out: 552 - kfree(cd->rtable); 553 - return ret; 554 541 } 555 542 556 543 static void dsa_of_free_platform_data(struct dsa_platform_data *pd) ··· 649 658 if (!strcmp(port_name, "dsa") && link && 650 659 pd->nr_chips > 1) { 651 660 ret = dsa_of_setup_routing_table(pd, cd, 652 - chip_index, link); 661 + chip_index, port_index, link); 653 662 if (ret) 654 663 goto out_free_chip; 655 664 }
+3
net/hsr/hsr_device.c
··· 359 359 struct hsr_port *port; 360 360 361 361 hsr = netdev_priv(hsr_dev); 362 + 363 + rtnl_lock(); 362 364 hsr_for_each_port(hsr, port) 363 365 hsr_del_port(port); 366 + rtnl_unlock(); 364 367 365 368 del_timer_sync(&hsr->prune_timer); 366 369 del_timer_sync(&hsr->announce_timer);
+4
net/hsr/hsr_main.c
··· 36 36 return NOTIFY_DONE; /* Not an HSR device */ 37 37 hsr = netdev_priv(dev); 38 38 port = hsr_port_get_hsr(hsr, HSR_PT_MASTER); 39 + if (port == NULL) { 40 + /* Resend of notification concerning removed device? */ 41 + return NOTIFY_DONE; 42 + } 39 43 } else { 40 44 hsr = port->hsr; 41 45 }
+7 -3
net/hsr/hsr_slave.c
··· 181 181 list_del_rcu(&port->port_list); 182 182 183 183 if (port != master) { 184 - netdev_update_features(master->dev); 185 - dev_set_mtu(master->dev, hsr_get_max_mtu(hsr)); 184 + if (master != NULL) { 185 + netdev_update_features(master->dev); 186 + dev_set_mtu(master->dev, hsr_get_max_mtu(hsr)); 187 + } 186 188 netdev_rx_handler_unregister(port->dev); 187 189 dev_set_promiscuity(port->dev, -1); 188 190 } ··· 194 192 */ 195 193 196 194 synchronize_rcu(); 197 - dev_put(port->dev); 195 + 196 + if (port != master) 197 + dev_put(port->dev); 198 198 }
+1 -2
net/ipv4/fib_frontend.c
··· 1111 1111 { 1112 1112 unsigned int i; 1113 1113 1114 + rtnl_lock(); 1114 1115 #ifdef CONFIG_IP_MULTIPLE_TABLES 1115 1116 fib4_rules_exit(net); 1116 1117 #endif 1117 - 1118 - rtnl_lock(); 1119 1118 for (i = 0; i < FIB_TABLE_HASHSZ; i++) { 1120 1119 struct fib_table *tb; 1121 1120 struct hlist_head *head;
+1
net/ipv4/inet_connection_sock.c
··· 268 268 release_sock(sk); 269 269 if (reqsk_queue_empty(&icsk->icsk_accept_queue)) 270 270 timeo = schedule_timeout(timeo); 271 + sched_annotate_sleep(); 271 272 lock_sock(sk); 272 273 err = 0; 273 274 if (!reqsk_queue_empty(&icsk->icsk_accept_queue))
+15 -3
net/ipv4/inet_diag.c
··· 71 71 mutex_unlock(&inet_diag_table_mutex); 72 72 } 73 73 74 + static size_t inet_sk_attr_size(void) 75 + { 76 + return nla_total_size(sizeof(struct tcp_info)) 77 + + nla_total_size(1) /* INET_DIAG_SHUTDOWN */ 78 + + nla_total_size(1) /* INET_DIAG_TOS */ 79 + + nla_total_size(1) /* INET_DIAG_TCLASS */ 80 + + nla_total_size(sizeof(struct inet_diag_meminfo)) 81 + + nla_total_size(sizeof(struct inet_diag_msg)) 82 + + nla_total_size(SK_MEMINFO_VARS * sizeof(u32)) 83 + + nla_total_size(TCP_CA_NAME_MAX) 84 + + nla_total_size(sizeof(struct tcpvegas_info)) 85 + + 64; 86 + } 87 + 74 88 int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk, 75 89 struct sk_buff *skb, struct inet_diag_req_v2 *req, 76 90 struct user_namespace *user_ns, ··· 340 326 if (err) 341 327 goto out; 342 328 343 - rep = nlmsg_new(sizeof(struct inet_diag_msg) + 344 - sizeof(struct inet_diag_meminfo) + 345 - sizeof(struct tcp_info) + 64, GFP_KERNEL); 329 + rep = nlmsg_new(inet_sk_attr_size(), GFP_KERNEL); 346 330 if (!rep) { 347 331 err = -ENOMEM; 348 332 goto out;
+1
net/ipv4/ip_forward.c
··· 67 67 if (unlikely(opt->optlen)) 68 68 ip_forward_options(skb); 69 69 70 + skb_sender_cpu_clear(skb); 70 71 return dst_output(skb); 71 72 } 72 73
+7 -4
net/ipv4/ip_fragment.c
··· 659 659 struct sk_buff *ip_check_defrag(struct sk_buff *skb, u32 user) 660 660 { 661 661 struct iphdr iph; 662 + int netoff; 662 663 u32 len; 663 664 664 665 if (skb->protocol != htons(ETH_P_IP)) 665 666 return skb; 666 667 667 - if (!skb_copy_bits(skb, 0, &iph, sizeof(iph))) 668 + netoff = skb_network_offset(skb); 669 + 670 + if (skb_copy_bits(skb, netoff, &iph, sizeof(iph)) < 0) 668 671 return skb; 669 672 670 673 if (iph.ihl < 5 || iph.version != 4) 671 674 return skb; 672 675 673 676 len = ntohs(iph.tot_len); 674 - if (skb->len < len || len < (iph.ihl * 4)) 677 + if (skb->len < netoff + len || len < (iph.ihl * 4)) 675 678 return skb; 676 679 677 680 if (ip_is_fragment(&iph)) { 678 681 skb = skb_share_check(skb, GFP_ATOMIC); 679 682 if (skb) { 680 - if (!pskb_may_pull(skb, iph.ihl*4)) 683 + if (!pskb_may_pull(skb, netoff + iph.ihl * 4)) 681 684 return skb; 682 - if (pskb_trim_rcsum(skb, len)) 685 + if (pskb_trim_rcsum(skb, netoff + len)) 683 686 return skb; 684 687 memset(IPCB(skb), 0, sizeof(struct inet_skb_parm)); 685 688 if (ip_defrag(skb, user))
+2 -1
net/ipv4/ip_output.c
··· 888 888 cork->length += length; 889 889 if (((length > mtu) || (skb && skb_is_gso(skb))) && 890 890 (sk->sk_protocol == IPPROTO_UDP) && 891 - (rt->dst.dev->features & NETIF_F_UFO) && !rt->dst.header_len) { 891 + (rt->dst.dev->features & NETIF_F_UFO) && !rt->dst.header_len && 892 + (sk->sk_type == SOCK_DGRAM)) { 892 893 err = ip_ufo_append_data(sk, queue, getfrag, from, length, 893 894 hh_len, fragheaderlen, transhdrlen, 894 895 maxfraglen, flags);
+23 -10
net/ipv4/ip_sockglue.c
··· 432 432 kfree_skb(skb); 433 433 } 434 434 435 - static bool ipv4_pktinfo_prepare_errqueue(const struct sock *sk, 436 - const struct sk_buff *skb, 437 - int ee_origin) 435 + /* IPv4 supports cmsg on all imcp errors and some timestamps 436 + * 437 + * Timestamp code paths do not initialize the fields expected by cmsg: 438 + * the PKTINFO fields in skb->cb[]. Fill those in here. 439 + */ 440 + static bool ipv4_datagram_support_cmsg(const struct sock *sk, 441 + struct sk_buff *skb, 442 + int ee_origin) 438 443 { 439 - struct in_pktinfo *info = PKTINFO_SKB_CB(skb); 444 + struct in_pktinfo *info; 440 445 441 - if ((ee_origin != SO_EE_ORIGIN_TIMESTAMPING) || 442 - (!(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_CMSG)) || 446 + if (ee_origin == SO_EE_ORIGIN_ICMP) 447 + return true; 448 + 449 + if (ee_origin == SO_EE_ORIGIN_LOCAL) 450 + return false; 451 + 452 + /* Support IP_PKTINFO on tstamp packets if requested, to correlate 453 + * timestamp with egress dev. Not possible for packets without dev 454 + * or without payload (SOF_TIMESTAMPING_OPT_TSONLY). 455 + */ 456 + if ((!(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_CMSG)) || 443 457 (!skb->dev)) 444 458 return false; 445 459 460 + info = PKTINFO_SKB_CB(skb); 446 461 info->ipi_spec_dst.s_addr = ip_hdr(skb)->saddr; 447 462 info->ipi_ifindex = skb->dev->ifindex; 448 463 return true; ··· 498 483 499 484 serr = SKB_EXT_ERR(skb); 500 485 501 - if (sin && skb->len) { 486 + if (sin && serr->port) { 502 487 sin->sin_family = AF_INET; 503 488 sin->sin_addr.s_addr = *(__be32 *)(skb_network_header(skb) + 504 489 serr->addr_offset); ··· 511 496 sin = &errhdr.offender; 512 497 memset(sin, 0, sizeof(*sin)); 513 498 514 - if (skb->len && 515 - (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP || 516 - ipv4_pktinfo_prepare_errqueue(sk, skb, serr->ee.ee_origin))) { 499 + if (ipv4_datagram_support_cmsg(sk, skb, serr->ee.ee_origin)) { 517 500 sin->sin_family = AF_INET; 518 501 sin->sin_addr.s_addr = ip_hdr(skb)->saddr; 519 502 if (inet_sk(sk)->cmsg_flags)
+6 -1
net/ipv4/ipmr.c
··· 268 268 return 0; 269 269 270 270 err2: 271 - kfree(mrt); 271 + ipmr_free_table(mrt); 272 272 err1: 273 273 fib_rules_unregister(ops); 274 274 return err; ··· 278 278 { 279 279 struct mr_table *mrt, *next; 280 280 281 + rtnl_lock(); 281 282 list_for_each_entry_safe(mrt, next, &net->ipv4.mr_tables, list) { 282 283 list_del(&mrt->list); 283 284 ipmr_free_table(mrt); 284 285 } 285 286 fib_rules_unregister(net->ipv4.mr_rules_ops); 287 + rtnl_unlock(); 286 288 } 287 289 #else 288 290 #define ipmr_for_each_table(mrt, net) \ ··· 310 308 311 309 static void __net_exit ipmr_rules_exit(struct net *net) 312 310 { 311 + rtnl_lock(); 313 312 ipmr_free_table(net->ipv4.mrt); 313 + net->ipv4.mrt = NULL; 314 + rtnl_unlock(); 314 315 } 315 316 #endif 316 317
+3 -3
net/ipv4/netfilter/ip_tables.c
··· 272 272 &chainname, &comment, &rulenum) != 0) 273 273 break; 274 274 275 - nf_log_packet(net, AF_INET, hook, skb, in, out, &trace_loginfo, 276 - "TRACE: %s:%s:%s:%u ", 277 - tablename, chainname, comment, rulenum); 275 + nf_log_trace(net, AF_INET, hook, skb, in, out, &trace_loginfo, 276 + "TRACE: %s:%s:%s:%u ", 277 + tablename, chainname, comment, rulenum); 278 278 } 279 279 #endif 280 280
+10 -2
net/ipv4/ping.c
··· 259 259 kgid_t low, high; 260 260 int ret = 0; 261 261 262 + if (sk->sk_family == AF_INET6) 263 + sk->sk_ipv6only = 1; 264 + 262 265 inet_get_ping_group_range_net(net, &low, &high); 263 266 if (gid_lte(low, group) && gid_lte(group, high)) 264 267 return 0; ··· 308 305 if (addr_len < sizeof(*addr)) 309 306 return -EINVAL; 310 307 308 + if (addr->sin_family != AF_INET && 309 + !(addr->sin_family == AF_UNSPEC && 310 + addr->sin_addr.s_addr == htonl(INADDR_ANY))) 311 + return -EAFNOSUPPORT; 312 + 311 313 pr_debug("ping_check_bind_addr(sk=%p,addr=%pI4,port=%d)\n", 312 314 sk, &addr->sin_addr.s_addr, ntohs(addr->sin_port)); 313 315 ··· 338 330 return -EINVAL; 339 331 340 332 if (addr->sin6_family != AF_INET6) 341 - return -EINVAL; 333 + return -EAFNOSUPPORT; 342 334 343 335 pr_debug("ping_check_bind_addr(sk=%p,addr=%pI6c,port=%d)\n", 344 336 sk, addr->sin6_addr.s6_addr, ntohs(addr->sin6_port)); ··· 724 716 if (msg->msg_namelen < sizeof(*usin)) 725 717 return -EINVAL; 726 718 if (usin->sin_family != AF_INET) 727 - return -EINVAL; 719 + return -EAFNOSUPPORT; 728 720 daddr = usin->sin_addr.s_addr; 729 721 /* no remote port */ 730 722 } else {
+3 -7
net/ipv4/tcp.c
··· 835 835 int large_allowed) 836 836 { 837 837 struct tcp_sock *tp = tcp_sk(sk); 838 - u32 new_size_goal, size_goal, hlen; 838 + u32 new_size_goal, size_goal; 839 839 840 840 if (!large_allowed || !sk_can_gso(sk)) 841 841 return mss_now; 842 842 843 - /* Maybe we should/could use sk->sk_prot->max_header here ? */ 844 - hlen = inet_csk(sk)->icsk_af_ops->net_header_len + 845 - inet_csk(sk)->icsk_ext_hdr_len + 846 - tp->tcp_header_len; 847 - 848 - new_size_goal = sk->sk_gso_max_size - 1 - hlen; 843 + /* Note : tcp_tso_autosize() will eventually split this later */ 844 + new_size_goal = sk->sk_gso_max_size - 1 - MAX_TCP_HEADER; 849 845 new_size_goal = tcp_bound_to_half_wnd(tp, new_size_goal); 850 846 851 847 /* We try hard to avoid divides here */
+6
net/ipv4/tcp_cong.c
··· 378 378 */ 379 379 void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked) 380 380 { 381 + /* If credits accumulated at a higher w, apply them gently now. */ 382 + if (tp->snd_cwnd_cnt >= w) { 383 + tp->snd_cwnd_cnt = 0; 384 + tp->snd_cwnd++; 385 + } 386 + 381 387 tp->snd_cwnd_cnt += acked; 382 388 if (tp->snd_cwnd_cnt >= w) { 383 389 u32 delta = tp->snd_cwnd_cnt / w;
+4 -2
net/ipv4/tcp_cubic.c
··· 306 306 } 307 307 } 308 308 309 - if (ca->cnt == 0) /* cannot be zero */ 310 - ca->cnt = 1; 309 + /* The maximum rate of cwnd increase CUBIC allows is 1 packet per 310 + * 2 packets ACKed, meaning cwnd grows at 1.5x per RTT. 311 + */ 312 + ca->cnt = max(ca->cnt, 2U); 311 313 } 312 314 313 315 static void bictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+5 -4
net/ipv4/tcp_input.c
··· 3105 3105 if (!first_ackt.v64) 3106 3106 first_ackt = last_ackt; 3107 3107 3108 - if (!(sacked & TCPCB_SACKED_ACKED)) 3108 + if (!(sacked & TCPCB_SACKED_ACKED)) { 3109 3109 reord = min(pkts_acked, reord); 3110 - if (!after(scb->end_seq, tp->high_seq)) 3111 - flag |= FLAG_ORIG_SACK_ACKED; 3110 + if (!after(scb->end_seq, tp->high_seq)) 3111 + flag |= FLAG_ORIG_SACK_ACKED; 3112 + } 3112 3113 } 3113 3114 3114 3115 if (sacked & TCPCB_SACKED_ACKED) ··· 4771 4770 return false; 4772 4771 4773 4772 /* If we filled the congestion window, do not expand. */ 4774 - if (tp->packets_out >= tp->snd_cwnd) 4773 + if (tcp_packets_in_flight(tp) >= tp->snd_cwnd) 4775 4774 return false; 4776 4775 4777 4776 return true;
+1 -1
net/ipv4/tcp_ipv4.c
··· 1518 1518 skb->sk = sk; 1519 1519 skb->destructor = sock_edemux; 1520 1520 if (sk->sk_state != TCP_TIME_WAIT) { 1521 - struct dst_entry *dst = sk->sk_rx_dst; 1521 + struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst); 1522 1522 1523 1523 if (dst) 1524 1524 dst = dst_check(dst, 0);
+1 -5
net/ipv4/tcp_output.c
··· 2773 2773 } else { 2774 2774 /* Socket is locked, keep trying until memory is available. */ 2775 2775 for (;;) { 2776 - skb = alloc_skb_fclone(MAX_TCP_HEADER, 2777 - sk->sk_allocation); 2776 + skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation); 2778 2777 if (skb) 2779 2778 break; 2780 2779 yield(); 2781 2780 } 2782 - 2783 - /* Reserve space for headers and prepare control bits. */ 2784 - skb_reserve(skb, MAX_TCP_HEADER); 2785 2781 /* FIN eats a sequence byte, write_seq advanced by tcp_queue_skb(). */ 2786 2782 tcp_init_nondata_skb(skb, tp->write_seq, 2787 2783 TCPHDR_ACK | TCPHDR_FIN);
+1 -1
net/ipv4/xfrm4_output.c
··· 63 63 return err; 64 64 65 65 IPCB(skb)->flags |= IPSKB_XFRM_TUNNEL_SIZE; 66 + skb->protocol = htons(ETH_P_IP); 66 67 67 68 return x->outer_mode->output2(x, skb); 68 69 } ··· 72 71 int xfrm4_output_finish(struct sk_buff *skb) 73 72 { 74 73 memset(IPCB(skb), 0, sizeof(*IPCB(skb))); 75 - skb->protocol = htons(ETH_P_IP); 76 74 77 75 #ifdef CONFIG_NETFILTER 78 76 IPCB(skb)->flags |= IPSKB_XFRM_TRANSFORMED;
+16 -1
net/ipv6/addrconf.c
··· 4903 4903 return ret; 4904 4904 } 4905 4905 4906 + static 4907 + int addrconf_sysctl_mtu(struct ctl_table *ctl, int write, 4908 + void __user *buffer, size_t *lenp, loff_t *ppos) 4909 + { 4910 + struct inet6_dev *idev = ctl->extra1; 4911 + int min_mtu = IPV6_MIN_MTU; 4912 + struct ctl_table lctl; 4913 + 4914 + lctl = *ctl; 4915 + lctl.extra1 = &min_mtu; 4916 + lctl.extra2 = idev ? &idev->dev->mtu : NULL; 4917 + 4918 + return proc_dointvec_minmax(&lctl, write, buffer, lenp, ppos); 4919 + } 4920 + 4906 4921 static void dev_disable_change(struct inet6_dev *idev) 4907 4922 { 4908 4923 struct netdev_notifier_info info; ··· 5069 5054 .data = &ipv6_devconf.mtu6, 5070 5055 .maxlen = sizeof(int), 5071 5056 .mode = 0644, 5072 - .proc_handler = proc_dointvec, 5057 + .proc_handler = addrconf_sysctl_mtu, 5073 5058 }, 5074 5059 { 5075 5060 .procname = "accept_ra",
+28 -11
net/ipv6/datagram.c
··· 325 325 kfree_skb(skb); 326 326 } 327 327 328 - static void ip6_datagram_prepare_pktinfo_errqueue(struct sk_buff *skb) 328 + /* IPv6 supports cmsg on all origins aside from SO_EE_ORIGIN_LOCAL. 329 + * 330 + * At one point, excluding local errors was a quick test to identify icmp/icmp6 331 + * errors. This is no longer true, but the test remained, so the v6 stack, 332 + * unlike v4, also honors cmsg requests on all wifi and timestamp errors. 333 + * 334 + * Timestamp code paths do not initialize the fields expected by cmsg: 335 + * the PKTINFO fields in skb->cb[]. Fill those in here. 336 + */ 337 + static bool ip6_datagram_support_cmsg(struct sk_buff *skb, 338 + struct sock_exterr_skb *serr) 329 339 { 330 - int ifindex = skb->dev ? skb->dev->ifindex : -1; 340 + if (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP || 341 + serr->ee.ee_origin == SO_EE_ORIGIN_ICMP6) 342 + return true; 343 + 344 + if (serr->ee.ee_origin == SO_EE_ORIGIN_LOCAL) 345 + return false; 346 + 347 + if (!skb->dev) 348 + return false; 331 349 332 350 if (skb->protocol == htons(ETH_P_IPV6)) 333 - IP6CB(skb)->iif = ifindex; 351 + IP6CB(skb)->iif = skb->dev->ifindex; 334 352 else 335 - PKTINFO_SKB_CB(skb)->ipi_ifindex = ifindex; 353 + PKTINFO_SKB_CB(skb)->ipi_ifindex = skb->dev->ifindex; 354 + 355 + return true; 336 356 } 337 357 338 358 /* ··· 389 369 390 370 serr = SKB_EXT_ERR(skb); 391 371 392 - if (sin && skb->len) { 372 + if (sin && serr->port) { 393 373 const unsigned char *nh = skb_network_header(skb); 394 374 sin->sin6_family = AF_INET6; 395 375 sin->sin6_flowinfo = 0; ··· 414 394 memcpy(&errhdr.ee, &serr->ee, sizeof(struct sock_extended_err)); 415 395 sin = &errhdr.offender; 416 396 memset(sin, 0, sizeof(*sin)); 417 - if (serr->ee.ee_origin != SO_EE_ORIGIN_LOCAL && skb->len) { 397 + 398 + if (ip6_datagram_support_cmsg(skb, serr)) { 418 399 sin->sin6_family = AF_INET6; 419 - if (np->rxopt.all) { 420 - if (serr->ee.ee_origin != SO_EE_ORIGIN_ICMP && 421 - serr->ee.ee_origin != SO_EE_ORIGIN_ICMP6) 
422 - ip6_datagram_prepare_pktinfo_errqueue(skb); 400 + if (np->rxopt.all) 423 401 ip6_datagram_recv_common_ctl(sk, msg, skb); 424 - } 425 402 if (skb->protocol == htons(ETH_P_IPV6)) { 426 403 sin->sin6_addr = ipv6_hdr(skb)->saddr; 427 404 if (np->rxopt.all)
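Note: the new ip6_datagram_support_cmsg() helper folds the old origin checks into one dispatch order: icmp/icmp6 errors always support cmsg, local errors never do, and anything else (e.g. timestamp errors) needs a valid device. A self-contained userspace sketch of that ordering (the enum values mirror linux/errqueue.h but are redefined here; `has_dev` stands in for `skb->dev` being non-NULL):

```c
/* Hedged sketch of the origin checks in ip6_datagram_support_cmsg().
 * Values mirror the SO_EE_ORIGIN_* constants in linux/errqueue.h. */
enum {
	EE_ORIGIN_LOCAL        = 1,
	EE_ORIGIN_ICMP         = 2,
	EE_ORIGIN_ICMP6        = 3,
	EE_ORIGIN_TIMESTAMPING = 4,
};

static int support_cmsg(int origin, int has_dev)
{
	if (origin == EE_ORIGIN_ICMP || origin == EE_ORIGIN_ICMP6)
		return 1;	/* icmp/icmp6 errors: always supported */
	if (origin == EE_ORIGIN_LOCAL)
		return 0;	/* local errors: never */
	return has_dev;		/* timestamps etc.: only with a device */
}
```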
+3
net/ipv6/fib6_rules.c
··· 104 104 goto again; 105 105 flp6->saddr = saddr; 106 106 } 107 + err = rt->dst.error; 107 108 goto out; 108 109 } 109 110 again: ··· 322 321 323 322 static void __net_exit fib6_rules_net_exit(struct net *net) 324 323 { 324 + rtnl_lock(); 325 325 fib_rules_unregister(net->ipv6.fib6_rules_ops); 326 + rtnl_unlock(); 326 327 } 327 328 328 329 static struct pernet_operations fib6_rules_net_ops = {
+5 -2
net/ipv6/ip6_output.c
··· 318 318 319 319 static inline int ip6_forward_finish(struct sk_buff *skb) 320 320 { 321 + skb_sender_cpu_clear(skb); 321 322 return dst_output(skb); 322 323 } 323 324 ··· 542 541 { 543 542 struct sk_buff *frag; 544 543 struct rt6_info *rt = (struct rt6_info *)skb_dst(skb); 545 - struct ipv6_pinfo *np = skb->sk ? inet6_sk(skb->sk) : NULL; 544 + struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ? 545 + inet6_sk(skb->sk) : NULL; 546 546 struct ipv6hdr *tmp_hdr; 547 547 struct frag_hdr *fh; 548 548 unsigned int mtu, hlen, left, len; ··· 1300 1298 if (((length > mtu) || 1301 1299 (skb && skb_is_gso(skb))) && 1302 1300 (sk->sk_protocol == IPPROTO_UDP) && 1303 - (rt->dst.dev->features & NETIF_F_UFO)) { 1301 + (rt->dst.dev->features & NETIF_F_UFO) && 1302 + (sk->sk_type == SOCK_DGRAM)) { 1304 1303 err = ip6_ufo_append_data(sk, queue, getfrag, from, length, 1305 1304 hh_len, fragheaderlen, 1306 1305 transhdrlen, mtu, flags, rt);
+17 -16
net/ipv6/ip6_tunnel.c
··· 314 314 * Create tunnel matching given parameters. 315 315 * 316 316 * Return: 317 - * created tunnel or NULL 317 + * created tunnel or error pointer 318 318 **/ 319 319 320 320 static struct ip6_tnl *ip6_tnl_create(struct net *net, struct __ip6_tnl_parm *p) ··· 322 322 struct net_device *dev; 323 323 struct ip6_tnl *t; 324 324 char name[IFNAMSIZ]; 325 - int err; 325 + int err = -ENOMEM; 326 326 327 327 if (p->name[0]) 328 328 strlcpy(name, p->name, IFNAMSIZ); ··· 348 348 failed_free: 349 349 ip6_dev_free(dev); 350 350 failed: 351 - return NULL; 351 + return ERR_PTR(err); 352 352 } 353 353 354 354 /** ··· 362 362 * tunnel device is created and registered for use. 363 363 * 364 364 * Return: 365 - * matching tunnel or NULL 365 + * matching tunnel or error pointer 366 366 **/ 367 367 368 368 static struct ip6_tnl *ip6_tnl_locate(struct net *net, ··· 380 380 if (ipv6_addr_equal(local, &t->parms.laddr) && 381 381 ipv6_addr_equal(remote, &t->parms.raddr)) { 382 382 if (create) 383 - return NULL; 383 + return ERR_PTR(-EEXIST); 384 384 385 385 return t; 386 386 } 387 387 } 388 388 if (!create) 389 - return NULL; 389 + return ERR_PTR(-ENODEV); 390 390 return ip6_tnl_create(net, p); 391 391 } 392 392 ··· 1420 1420 } 1421 1421 ip6_tnl_parm_from_user(&p1, &p); 1422 1422 t = ip6_tnl_locate(net, &p1, 0); 1423 - if (t == NULL) 1423 + if (IS_ERR(t)) 1424 1424 t = netdev_priv(dev); 1425 1425 } else { 1426 1426 memset(&p, 0, sizeof(p)); ··· 1445 1445 ip6_tnl_parm_from_user(&p1, &p); 1446 1446 t = ip6_tnl_locate(net, &p1, cmd == SIOCADDTUNNEL); 1447 1447 if (cmd == SIOCCHGTUNNEL) { 1448 - if (t != NULL) { 1448 + if (!IS_ERR(t)) { 1449 1449 if (t->dev != dev) { 1450 1450 err = -EEXIST; 1451 1451 break; ··· 1457 1457 else 1458 1458 err = ip6_tnl_update(t, &p1); 1459 1459 } 1460 - if (t) { 1460 + if (!IS_ERR(t)) { 1461 1461 err = 0; 1462 1462 ip6_tnl_parm_to_user(&p, &t->parms); 1463 1463 if (copy_to_user(ifr->ifr_ifru.ifru_data, &p, sizeof(p))) 1464 1464 err = -EFAULT; 1465 1465 
1466 - } else 1467 - err = (cmd == SIOCADDTUNNEL ? -ENOBUFS : -ENOENT); 1466 + } else { 1467 + err = PTR_ERR(t); 1468 + } 1468 1469 break; 1469 1470 case SIOCDELTUNNEL: 1470 1471 err = -EPERM; ··· 1479 1478 err = -ENOENT; 1480 1479 ip6_tnl_parm_from_user(&p1, &p); 1481 1480 t = ip6_tnl_locate(net, &p1, 0); 1482 - if (t == NULL) 1481 + if (IS_ERR(t)) 1483 1482 break; 1484 1483 err = -EPERM; 1485 1484 if (t->dev == ip6n->fb_tnl_dev) ··· 1673 1672 struct nlattr *tb[], struct nlattr *data[]) 1674 1673 { 1675 1674 struct net *net = dev_net(dev); 1676 - struct ip6_tnl *nt; 1675 + struct ip6_tnl *nt, *t; 1677 1676 1678 1677 nt = netdev_priv(dev); 1679 1678 ip6_tnl_netlink_parms(data, &nt->parms); 1680 1679 1681 - if (ip6_tnl_locate(net, &nt->parms, 0)) 1680 + t = ip6_tnl_locate(net, &nt->parms, 0); 1681 + if (!IS_ERR(t)) 1682 1682 return -EEXIST; 1683 1683 1684 1684 return ip6_tnl_create2(dev); ··· 1699 1697 ip6_tnl_netlink_parms(data, &p); 1700 1698 1701 1699 t = ip6_tnl_locate(net, &p, 0); 1702 - 1703 - if (t) { 1700 + if (!IS_ERR(t)) { 1704 1701 if (t->dev != dev) 1705 1702 return -EEXIST; 1706 1703 } else
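Note: the ip6_tnl_locate()/ip6_tnl_create() conversion above switches from returning NULL to the kernel's ERR_PTR convention, which encodes a negative errno in the top 4095 values of the pointer range so callers get the precise error (-EEXIST vs -ENODEV) instead of guessing. A minimal userspace re-creation of the three helpers from linux/err.h:

```c
/* Userspace sketch of the linux/err.h helpers the patch relies on.
 * Error codes occupy the last page of the address space, so a pointer
 * value >= (unsigned long)-MAX_ERRNO is an encoded errno, not memory. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

This is why the ioctl path can now do `err = PTR_ERR(t)` instead of mapping NULL back to a guessed -ENOBUFS/-ENOENT.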
+3 -3
net/ipv6/ip6mr.c
··· 252 252 return 0; 253 253 254 254 err2: 255 - kfree(mrt); 255 + ip6mr_free_table(mrt); 256 256 err1: 257 257 fib_rules_unregister(ops); 258 258 return err; ··· 267 267 list_del(&mrt->list); 268 268 ip6mr_free_table(mrt); 269 269 } 270 - rtnl_unlock(); 271 270 fib_rules_unregister(net->ipv6.mr6_rules_ops); 271 + rtnl_unlock(); 272 272 } 273 273 #else 274 274 #define ip6mr_for_each_table(mrt, net) \ ··· 336 336 337 337 static void ip6mr_free_table(struct mr6_table *mrt) 338 338 { 339 - del_timer(&mrt->ipmr_expire_timer); 339 + del_timer_sync(&mrt->ipmr_expire_timer); 340 340 mroute_clean_tables(mrt); 341 341 kfree(mrt); 342 342 }
+8 -1
net/ipv6/ndisc.c
··· 1218 1218 if (rt) 1219 1219 rt6_set_expires(rt, jiffies + (HZ * lifetime)); 1220 1220 if (ra_msg->icmph.icmp6_hop_limit) { 1221 - in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit; 1221 + /* Only set hop_limit on the interface if it is higher than 1222 + * the current hop_limit. 1223 + */ 1224 + if (in6_dev->cnf.hop_limit < ra_msg->icmph.icmp6_hop_limit) { 1225 + in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit; 1226 + } else { 1227 + ND_PRINTK(2, warn, "RA: Got route advertisement with lower hop_limit than current\n"); 1228 + } 1222 1229 if (rt) 1223 1230 dst_metric_set(&rt->dst, RTAX_HOPLIMIT, 1224 1231 ra_msg->icmph.icmp6_hop_limit);
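Note: the ndisc.c hunk makes the interface hop_limit monotonic under router advertisements: a RA can raise it but a lower advertised value is logged and ignored. The rule in isolation (a sketch; `current_limit`/`advertised` stand in for `in6_dev->cnf.hop_limit` and `icmp6_hop_limit`):

```c
/* Sketch of the RA hop_limit rule after the patch: a zero or lower
 * advertised value never lowers the interface's configured limit. */
static int apply_ra_hop_limit(int current_limit, int advertised)
{
	if (advertised && current_limit < advertised)
		return advertised;	/* accept a higher value */
	return current_limit;		/* ignore lower (or absent) ones */
}
```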
+3 -3
net/ipv6/netfilter/ip6_tables.c
··· 298 298 &chainname, &comment, &rulenum) != 0) 299 299 break; 300 300 301 - nf_log_packet(net, AF_INET6, hook, skb, in, out, &trace_loginfo, 302 - "TRACE: %s:%s:%s:%u ", 303 - tablename, chainname, comment, rulenum); 301 + nf_log_trace(net, AF_INET6, hook, skb, in, out, &trace_loginfo, 302 + "TRACE: %s:%s:%s:%u ", 303 + tablename, chainname, comment, rulenum); 304 304 } 305 305 #endif 306 306
+3 -2
net/ipv6/ping.c
··· 102 102 103 103 if (msg->msg_name) { 104 104 DECLARE_SOCKADDR(struct sockaddr_in6 *, u, msg->msg_name); 105 - if (msg->msg_namelen < sizeof(struct sockaddr_in6) || 106 - u->sin6_family != AF_INET6) { 105 + if (msg->msg_namelen < sizeof(*u)) 107 106 return -EINVAL; 107 + if (u->sin6_family != AF_INET6) { 108 + return -EAFNOSUPPORT; 108 109 } 109 110 if (sk->sk_bound_dev_if && 110 111 sk->sk_bound_dev_if != u->sin6_scope_id) {
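Note: the ping.c change splits one combined check into two so each failure returns the right errno: a too-short name is -EINVAL, a wrong address family is -EAFNOSUPPORT. The ordering matters, since the family field must not be read before the length check proves it is present. A hedged userspace sketch (the struct and constants are minimal stand-ins, not the real sockaddr_in6):

```c
#include <stddef.h>

/* Stand-in values mirroring errno.h / socket.h for the sketch. */
#define EINVAL       22
#define EAFNOSUPPORT 97
#define AF_INET6     10

struct sockaddr_in6_min {
	int sin6_family;	/* minimal stand-in for sockaddr_in6 */
};

/* Validation order from the patch: length first, then family. */
static int check_msg_name(size_t namelen, int family)
{
	if (namelen < sizeof(struct sockaddr_in6_min))
		return -EINVAL;
	if (family != AF_INET6)
		return -EAFNOSUPPORT;
	return 0;
}
```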
+12 -1
net/ipv6/tcp_ipv6.c
··· 1411 1411 TCP_SKB_CB(skb)->sacked = 0; 1412 1412 } 1413 1413 1414 + static void tcp_v6_restore_cb(struct sk_buff *skb) 1415 + { 1416 + /* We need to move header back to the beginning if xfrm6_policy_check() 1417 + * and tcp_v6_fill_cb() are going to be called again. 1418 + */ 1419 + memmove(IP6CB(skb), &TCP_SKB_CB(skb)->header.h6, 1420 + sizeof(struct inet6_skb_parm)); 1421 + } 1422 + 1414 1423 static int tcp_v6_rcv(struct sk_buff *skb) 1415 1424 { 1416 1425 const struct tcphdr *th; ··· 1552 1543 inet_twsk_deschedule(tw, &tcp_death_row); 1553 1544 inet_twsk_put(tw); 1554 1545 sk = sk2; 1546 + tcp_v6_restore_cb(skb); 1555 1547 goto process; 1556 1548 } 1557 1549 /* Fall through to ACK */ ··· 1561 1551 tcp_v6_timewait_ack(sk, skb); 1562 1552 break; 1563 1553 case TCP_TW_RST: 1554 + tcp_v6_restore_cb(skb); 1564 1555 goto no_tcp_socket; 1565 1556 case TCP_TW_SUCCESS: 1566 1557 ; ··· 1596 1585 skb->sk = sk; 1597 1586 skb->destructor = sock_edemux; 1598 1587 if (sk->sk_state != TCP_TIME_WAIT) { 1599 - struct dst_entry *dst = sk->sk_rx_dst; 1588 + struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst); 1600 1589 1601 1590 if (dst) 1602 1591 dst = dst_check(dst, inet6_sk(sk)->rx_dst_cookie);
+3 -5
net/ipv6/udp_offload.c
··· 112 112 fptr = (struct frag_hdr *)(skb_network_header(skb) + unfrag_ip6hlen); 113 113 fptr->nexthdr = nexthdr; 114 114 fptr->reserved = 0; 115 - if (skb_shinfo(skb)->ip6_frag_id) 116 - fptr->identification = skb_shinfo(skb)->ip6_frag_id; 117 - else 118 - ipv6_select_ident(fptr, 119 - (struct rt6_info *)skb_dst(skb)); 115 + if (!skb_shinfo(skb)->ip6_frag_id) 116 + ipv6_proxy_select_ident(skb); 117 + fptr->identification = skb_shinfo(skb)->ip6_frag_id; 120 118 121 119 /* Fragment the skb. ipv6 header and the remaining fields of the 122 120 * fragment header are updated in ipv6_gso_segment()
+1 -1
net/ipv6/xfrm6_output.c
··· 114 114 return err; 115 115 116 116 skb->ignore_df = 1; 117 + skb->protocol = htons(ETH_P_IPV6); 117 118 118 119 return x->outer_mode->output2(x, skb); 119 120 } ··· 123 122 int xfrm6_output_finish(struct sk_buff *skb) 124 123 { 125 124 memset(IP6CB(skb), 0, sizeof(*IP6CB(skb))); 126 - skb->protocol = htons(ETH_P_IPV6); 127 125 128 126 #ifdef CONFIG_NETFILTER 129 127 IP6CB(skb)->flags |= IP6SKB_XFRM_TRANSFORMED;
+1
net/ipv6/xfrm6_policy.c
··· 200 200 201 201 #if IS_ENABLED(CONFIG_IPV6_MIP6) 202 202 case IPPROTO_MH: 203 + offset += ipv6_optlen(exthdr); 203 204 if (!onlyproto && pskb_may_pull(skb, nh + offset + 3 - skb->data)) { 204 205 struct ip6_mh *mh; 205 206
+4 -2
net/irda/ircomm/ircomm_tty.c
··· 798 798 orig_jiffies = jiffies; 799 799 800 800 /* Set poll time to 200 ms */ 801 - poll_time = IRDA_MIN(timeout, msecs_to_jiffies(200)); 801 + poll_time = msecs_to_jiffies(200); 802 + if (timeout) 803 + poll_time = min_t(unsigned long, timeout, poll_time); 802 804 803 805 spin_lock_irqsave(&self->spinlock, flags); 804 806 while (self->tx_skb && self->tx_skb->len) { ··· 813 811 break; 814 812 } 815 813 spin_unlock_irqrestore(&self->spinlock, flags); 816 - current->state = TASK_RUNNING; 814 + __set_current_state(TASK_RUNNING); 817 815 } 818 816 819 817 /*
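Note: the ircomm_tty.c fix addresses a classic min()-with-sentinel bug: a `timeout` of 0 means "wait indefinitely", so `IRDA_MIN(timeout, 200ms)` wrongly produced a zero poll time. The replacement only lets a nonzero timeout shorten the poll interval. In isolation (milliseconds stand in for jiffies):

```c
/* Sketch of the fixed poll-time computation: 0 means "no timeout"
 * and must not win the min() comparison. */
static unsigned long poll_time_ms(unsigned long timeout_ms)
{
	unsigned long poll = 200;	/* default 200 ms poll */

	if (timeout_ms && timeout_ms < poll)
		poll = timeout_ms;
	return poll;
}
```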
+2 -2
net/irda/irnet/irnet_ppp.c
··· 305 305 306 306 /* Put ourselves on the wait queue to be woken up */ 307 307 add_wait_queue(&irnet_events.rwait, &wait); 308 - current->state = TASK_INTERRUPTIBLE; 308 + set_current_state(TASK_INTERRUPTIBLE); 309 309 for(;;) 310 310 { 311 311 /* If there is unread events */ ··· 321 321 /* Yield and wait to be woken up */ 322 322 schedule(); 323 323 } 324 - current->state = TASK_RUNNING; 324 + __set_current_state(TASK_RUNNING); 325 325 remove_wait_queue(&irnet_events.rwait, &wait); 326 326 327 327 /* Did we got it ? */
+1 -3
net/iucv/af_iucv.c
··· 1114 1114 noblock, &err); 1115 1115 else 1116 1116 skb = sock_alloc_send_skb(sk, len, noblock, &err); 1117 - if (!skb) { 1118 - err = -ENOMEM; 1117 + if (!skb) 1119 1118 goto out; 1120 - } 1121 1119 if (iucv->transport == AF_IUCV_TRANS_HIPER) 1122 1120 skb_reserve(skb, sizeof(struct af_iucv_trans_hdr) + ETH_HLEN); 1123 1121 if (memcpy_from_msg(skb_put(skb, len), msg, len)) {
+1
net/l2tp/l2tp_core.c
··· 1871 1871 l2tp_wq = alloc_workqueue("l2tp", WQ_UNBOUND, 0); 1872 1872 if (!l2tp_wq) { 1873 1873 pr_err("alloc_workqueue failed\n"); 1874 + unregister_pernet_device(&l2tp_net_ops); 1874 1875 rc = -ENOMEM; 1875 1876 goto out; 1876 1877 }
+6 -2
net/mac80211/agg-rx.c
··· 49 49 container_of(h, struct tid_ampdu_rx, rcu_head); 50 50 int i; 51 51 52 - del_timer_sync(&tid_rx->reorder_timer); 53 - 54 52 for (i = 0; i < tid_rx->buf_size; i++) 55 53 __skb_queue_purge(&tid_rx->reorder_buf[i]); 56 54 kfree(tid_rx->reorder_buf); ··· 90 92 tid, WLAN_BACK_RECIPIENT, reason); 91 93 92 94 del_timer_sync(&tid_rx->session_timer); 95 + 96 + /* make sure ieee80211_sta_reorder_release() doesn't re-arm the timer */ 97 + spin_lock_bh(&tid_rx->reorder_lock); 98 + tid_rx->removed = true; 99 + spin_unlock_bh(&tid_rx->reorder_lock); 100 + del_timer_sync(&tid_rx->reorder_timer); 93 101 94 102 call_rcu(&tid_rx->rcu_head, ieee80211_free_tid_rx); 95 103 }
+5
net/mac80211/chan.c
··· 1508 1508 if (ieee80211_chanctx_refcount(local, ctx) == 0) 1509 1509 ieee80211_free_chanctx(local, ctx); 1510 1510 1511 + sdata->radar_required = false; 1512 + 1511 1513 /* Unreserving may ready an in-place reservation. */ 1512 1514 if (use_reserved_switch) 1513 1515 ieee80211_vif_use_reserved_switch(local); ··· 1568 1566 ieee80211_recalc_smps_chanctx(local, ctx); 1569 1567 ieee80211_recalc_radar_chanctx(local, ctx); 1570 1568 out: 1569 + if (ret) 1570 + sdata->radar_required = false; 1571 + 1571 1572 mutex_unlock(&local->chanctx_mtx); 1572 1573 return ret; 1573 1574 }
+18 -6
net/mac80211/ieee80211_i.h
··· 58 58 #define IEEE80211_UNSET_POWER_LEVEL INT_MIN 59 59 60 60 /* 61 - * Some APs experience problems when working with U-APSD. Decrease the 62 - * probability of that happening by using legacy mode for all ACs but VO. 63 - * The AP that caused us trouble was a Cisco 4410N. It ignores our 64 - * setting, and always treats non-VO ACs as legacy. 61 + * Some APs experience problems when working with U-APSD. Decreasing the 62 + * probability of that happening by using legacy mode for all ACs but VO isn't 63 + * enough. 64 + * 65 + * Cisco 4410N originally forced us to enable VO by default only because it 66 + * treated non-VO ACs as legacy. 67 + * 68 + * However some APs (notably Netgear R7000) silently reclassify packets to 69 + * different ACs. Since u-APSD ACs require trigger frames for frame retrieval 70 + * clients would never see some frames (e.g. ARP responses) or would fetch them 71 + * accidentally after a long time. 72 + * 73 + * It makes little sense to enable u-APSD queues by default because it needs 74 + * userspace applications to be aware of it to actually take advantage of the 75 + * possible additional powersavings. Implicitly depending on driver autotrigger 76 + * frame support doesn't make much sense. 65 77 */ 66 - #define IEEE80211_DEFAULT_UAPSD_QUEUES \ 67 - IEEE80211_WMM_IE_STA_QOSINFO_AC_VO 78 + #define IEEE80211_DEFAULT_UAPSD_QUEUES 0 68 79 69 80 #define IEEE80211_DEFAULT_MAX_SP_LEN \ 70 81 IEEE80211_WMM_IE_STA_QOSINFO_SP_ALL ··· 464 453 unsigned int flags; 465 454 466 455 bool csa_waiting_bcn; 456 + bool csa_ignored_same_chan; 467 457 468 458 bool beacon_crc_valid; 469 459 u32 beacon_crc;
+15 -1
net/mac80211/mlme.c
··· 1150 1150 return; 1151 1151 } 1152 1152 1153 + if (cfg80211_chandef_identical(&csa_ie.chandef, 1154 + &sdata->vif.bss_conf.chandef)) { 1155 + if (ifmgd->csa_ignored_same_chan) 1156 + return; 1157 + sdata_info(sdata, 1158 + "AP %pM tries to chanswitch to same channel, ignore\n", 1159 + ifmgd->associated->bssid); 1160 + ifmgd->csa_ignored_same_chan = true; 1161 + return; 1162 + } 1163 + 1153 1164 mutex_lock(&local->mtx); 1154 1165 mutex_lock(&local->chanctx_mtx); 1155 1166 conf = rcu_dereference_protected(sdata->vif.chanctx_conf, ··· 1221 1210 sdata->vif.csa_active = true; 1222 1211 sdata->csa_chandef = csa_ie.chandef; 1223 1212 sdata->csa_block_tx = csa_ie.mode; 1213 + ifmgd->csa_ignored_same_chan = false; 1224 1214 1225 1215 if (sdata->csa_block_tx) 1226 1216 ieee80211_stop_vif_queues(local, sdata, ··· 2102 2090 2103 2091 sdata->vif.csa_active = false; 2104 2092 ifmgd->csa_waiting_bcn = false; 2093 + ifmgd->csa_ignored_same_chan = false; 2105 2094 if (sdata->csa_block_tx) { 2106 2095 ieee80211_wake_vif_queues(local, sdata, 2107 2096 IEEE80211_QUEUE_STOP_REASON_CSA); ··· 3217 3204 (1ULL << WLAN_EID_CHANNEL_SWITCH) | 3218 3205 (1ULL << WLAN_EID_PWR_CONSTRAINT) | 3219 3206 (1ULL << WLAN_EID_HT_CAPABILITY) | 3220 - (1ULL << WLAN_EID_HT_OPERATION); 3207 + (1ULL << WLAN_EID_HT_OPERATION) | 3208 + (1ULL << WLAN_EID_EXT_CHANSWITCH_ANN); 3221 3209 3222 3210 static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata, 3223 3211 struct ieee80211_mgmt *mgmt, size_t len,
+1 -1
net/mac80211/rc80211_minstrel.c
··· 373 373 rate++; 374 374 mi->sample_deferred++; 375 375 } else { 376 - if (!msr->sample_limit != 0) 376 + if (!msr->sample_limit) 377 377 return; 378 378 379 379 mi->sample_packets++;
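Note: the minstrel one-liner is a precedence cleanup. `!msr->sample_limit != 0` parses as `(!msr->sample_limit) != 0`, which is exactly the truth value of `!msr->sample_limit`, so the trailing comparison was dead weight (and read as if it might have been meant as `!(x != 0)`). Demonstrating the two forms agree:

```c
/* !x != 0 binds as (!x) != 0; the != 0 is redundant. */
static int old_test(int x)
{
	return !x != 0;
}

static int new_test(int x)
{
	return !x;
}
```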
+7 -3
net/mac80211/rx.c
··· 873 873 874 874 set_release_timer: 875 875 876 - mod_timer(&tid_agg_rx->reorder_timer, 877 - tid_agg_rx->reorder_time[j] + 1 + 878 - HT_RX_REORDER_BUF_TIMEOUT); 876 + if (!tid_agg_rx->removed) 877 + mod_timer(&tid_agg_rx->reorder_timer, 878 + tid_agg_rx->reorder_time[j] + 1 + 879 + HT_RX_REORDER_BUF_TIMEOUT); 879 880 } else { 880 881 del_timer(&tid_agg_rx->reorder_timer); 881 882 } ··· 2214 2213 /* reload pointers */ 2215 2214 hdr = (struct ieee80211_hdr *) skb->data; 2216 2215 mesh_hdr = (struct ieee80211s_hdr *) (skb->data + hdrlen); 2216 + 2217 + if (ieee80211_drop_unencrypted(rx, hdr->frame_control)) 2218 + return RX_DROP_MONITOR; 2217 2219 2218 2220 /* frame is in RMC, don't forward */ 2219 2221 if (ieee80211_is_data(hdr->frame_control) &&
+2
net/mac80211/sta_info.h
··· 175 175 * @reorder_lock: serializes access to reorder buffer, see below. 176 176 * @auto_seq: used for offloaded BA sessions to automatically pick head_seq_and 177 177 * and ssn. 178 + * @removed: this session is removed (but might have been found due to RCU) 178 179 * 179 180 * This structure's lifetime is managed by RCU, assignments to 180 181 * the array holding it must hold the aggregation mutex. ··· 200 199 u16 timeout; 201 200 u8 dialog_token; 202 201 bool auto_seq; 202 + bool removed; 203 203 }; 204 204 205 205 /**
+1
net/mac80211/tx.c
··· 566 566 if (tx->sdata->control_port_no_encrypt) 567 567 info->flags |= IEEE80211_TX_INTFL_DONT_ENCRYPT; 568 568 info->control.flags |= IEEE80211_TX_CTRL_PORT_CTRL_PROTO; 569 + info->flags |= IEEE80211_TX_CTL_USE_MINRATE; 569 570 } 570 571 571 572 return TX_CONTINUE;
+1 -1
net/mac80211/util.c
··· 3178 3178 wdev_iter = &sdata_iter->wdev; 3179 3179 3180 3180 if (sdata_iter == sdata || 3181 - rcu_access_pointer(sdata_iter->vif.chanctx_conf) == NULL || 3181 + !ieee80211_sdata_running(sdata_iter) || 3182 3182 local->hw.wiphy->software_iftypes & BIT(wdev_iter->iftype)) 3183 3183 continue; 3184 3184
+1 -1
net/netfilter/ipvs/ip_vs_ctl.c
··· 3402 3402 if (udest.af == 0) 3403 3403 udest.af = svc->af; 3404 3404 3405 - if (udest.af != svc->af) { 3405 + if (udest.af != svc->af && cmd != IPVS_CMD_DEL_DEST) { 3406 3406 /* The synchronization protocol is incompatible 3407 3407 * with mixed family services 3408 3408 */
+3
net/netfilter/ipvs/ip_vs_sync.c
··· 896 896 IP_VS_DBG(2, "BACKUP, add new conn. failed\n"); 897 897 return; 898 898 } 899 + if (!(flags & IP_VS_CONN_F_TEMPLATE)) 900 + kfree(param->pe_data); 899 901 } 900 902 901 903 if (opt) ··· 1171 1169 (opt_flags & IPVS_OPT_F_SEQ_DATA ? &opt : NULL) 1172 1170 ); 1173 1171 #endif 1172 + ip_vs_pe_put(param.pe); 1174 1173 return 0; 1175 1174 /* Error exit */ 1176 1175 out:
+24
net/netfilter/nf_log.c
··· 212 212 } 213 213 EXPORT_SYMBOL(nf_log_packet); 214 214 215 + void nf_log_trace(struct net *net, 216 + u_int8_t pf, 217 + unsigned int hooknum, 218 + const struct sk_buff *skb, 219 + const struct net_device *in, 220 + const struct net_device *out, 221 + const struct nf_loginfo *loginfo, const char *fmt, ...) 222 + { 223 + va_list args; 224 + char prefix[NF_LOG_PREFIXLEN]; 225 + const struct nf_logger *logger; 226 + 227 + rcu_read_lock(); 228 + logger = rcu_dereference(net->nf.nf_loggers[pf]); 229 + if (logger) { 230 + va_start(args, fmt); 231 + vsnprintf(prefix, sizeof(prefix), fmt, args); 232 + va_end(args); 233 + logger->logfn(net, pf, hooknum, skb, in, out, loginfo, prefix); 234 + } 235 + rcu_read_unlock(); 236 + } 237 + EXPORT_SYMBOL(nf_log_trace); 238 + 215 239 #define S_SIZE (1024 - (sizeof(unsigned int) + 1)) 216 240 217 241 struct nf_log_buf {
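Note: nf_log_trace() is structurally a clone of nf_log_packet() that skips the per-protocol fallback logger, so trace messages only appear when a logger is explicitly registered for the family. The varargs-to-prefix formatting it shares with nf_log_packet() is the standard vsnprintf pattern; a self-contained userspace sketch (buffer size assumed, static buffer only for testability):

```c
#include <stdarg.h>
#include <stdio.h>

#define NF_LOG_PREFIXLEN 128	/* size assumed for this sketch */

/* Collect the caller's format arguments into a fixed prefix buffer,
 * as nf_log_trace() does before calling logger->logfn(). */
static const char *format_prefix(const char *fmt, ...)
{
	static char prefix[NF_LOG_PREFIXLEN];
	va_list args;

	va_start(args, fmt);
	vsnprintf(prefix, sizeof(prefix), fmt, args);
	va_end(args);
	return prefix;
}
```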
+40 -26
net/netfilter/nf_tables_api.c
··· 227 227 228 228 static inline void nft_rule_clear(struct net *net, struct nft_rule *rule) 229 229 { 230 - rule->genmask = 0; 230 + rule->genmask &= ~(1 << gencursor_next(net)); 231 231 } 232 232 233 233 static int ··· 1225 1225 1226 1226 if (nla[NFTA_CHAIN_POLICY]) { 1227 1227 if ((chain != NULL && 1228 - !(chain->flags & NFT_BASE_CHAIN)) || 1228 + !(chain->flags & NFT_BASE_CHAIN))) 1229 + return -EOPNOTSUPP; 1230 + 1231 + if (chain == NULL && 1229 1232 nla[NFTA_CHAIN_HOOK] == NULL) 1230 1233 return -EOPNOTSUPP; 1231 1234 ··· 1714 1711 } 1715 1712 nla_nest_end(skb, list); 1716 1713 1717 - if (rule->ulen && 1718 - nla_put(skb, NFTA_RULE_USERDATA, rule->ulen, nft_userdata(rule))) 1719 - goto nla_put_failure; 1714 + if (rule->udata) { 1715 + struct nft_userdata *udata = nft_userdata(rule); 1716 + if (nla_put(skb, NFTA_RULE_USERDATA, udata->len + 1, 1717 + udata->data) < 0) 1718 + goto nla_put_failure; 1719 + } 1720 1720 1721 1721 nlmsg_end(skb, nlh); 1722 1722 return 0; ··· 1902 1896 struct nft_table *table; 1903 1897 struct nft_chain *chain; 1904 1898 struct nft_rule *rule, *old_rule = NULL; 1899 + struct nft_userdata *udata; 1905 1900 struct nft_trans *trans = NULL; 1906 1901 struct nft_expr *expr; 1907 1902 struct nft_ctx ctx; 1908 1903 struct nlattr *tmp; 1909 - unsigned int size, i, n, ulen = 0; 1904 + unsigned int size, i, n, ulen = 0, usize = 0; 1910 1905 int err, rem; 1911 1906 bool create; 1912 1907 u64 handle, pos_handle; ··· 1975 1968 n++; 1976 1969 } 1977 1970 } 1971 + /* Check for overflow of dlen field */ 1972 + err = -EFBIG; 1973 + if (size >= 1 << 12) 1974 + goto err1; 1978 1975 1979 - if (nla[NFTA_RULE_USERDATA]) 1976 + if (nla[NFTA_RULE_USERDATA]) { 1980 1977 ulen = nla_len(nla[NFTA_RULE_USERDATA]); 1978 + if (ulen > 0) 1979 + usize = sizeof(struct nft_userdata) + ulen; 1980 + } 1981 1981 1982 1982 err = -ENOMEM; 1983 - rule = kzalloc(sizeof(*rule) + size + ulen, GFP_KERNEL); 1983 + rule = kzalloc(sizeof(*rule) + size + usize, GFP_KERNEL); 1984 
1984 if (rule == NULL) 1985 1985 goto err1; 1986 1986 ··· 1995 1981 1996 1982 rule->handle = handle; 1997 1983 rule->dlen = size; 1998 - rule->ulen = ulen; 1984 + rule->udata = ulen ? 1 : 0; 1999 1985 2000 - if (ulen) 2001 - nla_memcpy(nft_userdata(rule), nla[NFTA_RULE_USERDATA], ulen); 1986 + if (ulen) { 1987 + udata = nft_userdata(rule); 1988 + udata->len = ulen - 1; 1989 + nla_memcpy(udata->data, nla[NFTA_RULE_USERDATA], ulen); 1990 + } 2002 1991 2003 1992 expr = nft_expr_first(rule); 2004 1993 for (i = 0; i < n; i++) { ··· 2048 2031 2049 2032 err3: 2050 2033 list_del_rcu(&rule->list); 2051 - if (trans) { 2052 - list_del_rcu(&nft_trans_rule(trans)->list); 2053 - nft_rule_clear(net, nft_trans_rule(trans)); 2054 - nft_trans_destroy(trans); 2055 - chain->use++; 2056 - } 2057 2034 err2: 2058 2035 nf_tables_rule_destroy(&ctx, rule); 2059 2036 err1: ··· 3623 3612 &te->elem, 3624 3613 NFT_MSG_DELSETELEM, 0); 3625 3614 te->set->ops->get(te->set, &te->elem); 3626 - te->set->ops->remove(te->set, &te->elem); 3627 3615 nft_data_uninit(&te->elem.key, NFT_DATA_VALUE); 3628 - if (te->elem.flags & NFT_SET_MAP) { 3629 - nft_data_uninit(&te->elem.data, 3630 - te->set->dtype); 3631 - } 3616 + if (te->set->flags & NFT_SET_MAP && 3617 + !(te->elem.flags & NFT_SET_ELEM_INTERVAL_END)) 3618 + nft_data_uninit(&te->elem.data, te->set->dtype); 3619 + te->set->ops->remove(te->set, &te->elem); 3632 3620 nft_trans_destroy(trans); 3633 3621 break; 3634 3622 } ··· 3668 3658 { 3669 3659 struct net *net = sock_net(skb->sk); 3670 3660 struct nft_trans *trans, *next; 3671 - struct nft_set *set; 3661 + struct nft_trans_elem *te; 3672 3662 3673 3663 list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) { 3674 3664 switch (trans->msg_type) { ··· 3729 3719 break; 3730 3720 case NFT_MSG_NEWSETELEM: 3731 3721 nft_trans_elem_set(trans)->nelems--; 3732 - set = nft_trans_elem_set(trans); 3733 - set->ops->get(set, &nft_trans_elem(trans)); 3734 - set->ops->remove(set, &nft_trans_elem(trans)); 
3722 + te = (struct nft_trans_elem *)trans->data; 3723 + te->set->ops->get(te->set, &te->elem); 3724 + nft_data_uninit(&te->elem.key, NFT_DATA_VALUE); 3725 + if (te->set->flags & NFT_SET_MAP && 3726 + !(te->elem.flags & NFT_SET_ELEM_INTERVAL_END)) 3727 + nft_data_uninit(&te->elem.data, te->set->dtype); 3728 + te->set->ops->remove(te->set, &te->elem); 3735 3729 nft_trans_destroy(trans); 3736 3730 break; 3737 3731 case NFT_MSG_DELSETELEM:
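Note: the nft_rule_clear() change is the key correctness fix in this file: zeroing the whole two-bit generation mask made the rule invisible in *both* generations, whereas the intent is to clear only the next-generation bit and leave the current generation's visibility intact. The bit manipulation in isolation:

```c
/* nf_tables rules carry a two-bit generation mask.  Clearing only the
 * next-generation bit preserves the rule's state in the current
 * generation; genmask = 0 would wipe both at once. */
static unsigned int clear_next_gen(unsigned int genmask,
				   unsigned int gencursor_next)
{
	return genmask & ~(1u << gencursor_next);
}
```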
+4 -4
net/netfilter/nf_tables_core.c
··· 94 94 { 95 95 struct net *net = dev_net(pkt->in ? pkt->in : pkt->out); 96 96 97 - nf_log_packet(net, pkt->xt.family, pkt->ops->hooknum, pkt->skb, pkt->in, 98 - pkt->out, &trace_loginfo, "TRACE: %s:%s:%s:%u ", 99 - chain->table->name, chain->name, comments[type], 100 - rulenum); 97 + nf_log_trace(net, pkt->xt.family, pkt->ops->hooknum, pkt->skb, pkt->in, 98 + pkt->out, &trace_loginfo, "TRACE: %s:%s:%s:%u ", 99 + chain->table->name, chain->name, comments[type], 100 + rulenum); 101 101 } 102 102 103 103 unsigned int
+23 -9
net/netfilter/nft_compat.c
··· 123 123 nft_target_set_tgchk_param(struct xt_tgchk_param *par, 124 124 const struct nft_ctx *ctx, 125 125 struct xt_target *target, void *info, 126 - union nft_entry *entry, u8 proto, bool inv) 126 + union nft_entry *entry, u16 proto, bool inv) 127 127 { 128 128 par->net = ctx->net; 129 129 par->table = ctx->table->name; ··· 133 133 entry->e4.ip.invflags = inv ? IPT_INV_PROTO : 0; 134 134 break; 135 135 case AF_INET6: 136 + if (proto) 137 + entry->e6.ipv6.flags |= IP6T_F_PROTO; 138 + 136 139 entry->e6.ipv6.proto = proto; 137 140 entry->e6.ipv6.invflags = inv ? IP6T_INV_PROTO : 0; 138 141 break; 139 142 case NFPROTO_BRIDGE: 140 - entry->ebt.ethproto = proto; 143 + entry->ebt.ethproto = (__force __be16)proto; 141 144 entry->ebt.invflags = inv ? EBT_IPROTO : 0; 142 145 break; 143 146 } ··· 174 171 [NFTA_RULE_COMPAT_FLAGS] = { .type = NLA_U32 }, 175 172 }; 176 173 177 - static int nft_parse_compat(const struct nlattr *attr, u8 *proto, bool *inv) 174 + static int nft_parse_compat(const struct nlattr *attr, u16 *proto, bool *inv) 178 175 { 179 176 struct nlattr *tb[NFTA_RULE_COMPAT_MAX+1]; 180 177 u32 flags; ··· 206 203 struct xt_target *target = expr->ops->data; 207 204 struct xt_tgchk_param par; 208 205 size_t size = XT_ALIGN(nla_len(tb[NFTA_TARGET_INFO])); 209 - u8 proto = 0; 206 + u16 proto = 0; 210 207 bool inv = false; 211 208 union nft_entry e = {}; 212 209 int ret; ··· 337 334 static void 338 335 nft_match_set_mtchk_param(struct xt_mtchk_param *par, const struct nft_ctx *ctx, 339 336 struct xt_match *match, void *info, 340 - union nft_entry *entry, u8 proto, bool inv) 337 + union nft_entry *entry, u16 proto, bool inv) 341 338 { 342 339 par->net = ctx->net; 343 340 par->table = ctx->table->name; ··· 347 344 entry->e4.ip.invflags = inv ? IPT_INV_PROTO : 0; 348 345 break; 349 346 case AF_INET6: 347 + if (proto) 348 + entry->e6.ipv6.flags |= IP6T_F_PROTO; 349 + 350 350 entry->e6.ipv6.proto = proto; 351 351 entry->e6.ipv6.invflags = inv ? 
IP6T_INV_PROTO : 0; 352 352 break; 353 353 case NFPROTO_BRIDGE: 354 - entry->ebt.ethproto = proto; 354 + entry->ebt.ethproto = (__force __be16)proto; 355 355 entry->ebt.invflags = inv ? EBT_IPROTO : 0; 356 356 break; 357 357 } ··· 391 385 struct xt_match *match = expr->ops->data; 392 386 struct xt_mtchk_param par; 393 387 size_t size = XT_ALIGN(nla_len(tb[NFTA_MATCH_INFO])); 394 - u8 proto = 0; 388 + u16 proto = 0; 395 389 bool inv = false; 396 390 union nft_entry e = {}; 397 391 int ret; ··· 631 625 struct xt_match *match = nft_match->ops.data; 632 626 633 627 if (strcmp(match->name, mt_name) == 0 && 634 - match->revision == rev && match->family == family) 628 + match->revision == rev && match->family == family) { 629 + if (!try_module_get(match->me)) 630 + return ERR_PTR(-ENOENT); 631 + 635 632 return &nft_match->ops; 633 + } 636 634 } 637 635 638 636 match = xt_request_find_match(family, mt_name, rev); ··· 705 695 struct xt_target *target = nft_target->ops.data; 706 696 707 697 if (strcmp(target->name, tg_name) == 0 && 708 - target->revision == rev && target->family == family) 698 + target->revision == rev && target->family == family) { 699 + if (!try_module_get(target->me)) 700 + return ERR_PTR(-ENOENT); 701 + 709 702 return &nft_target->ops; 703 + } 710 704 } 711 705 712 706 target = xt_request_find_target(family, tg_name, rev);
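Note: widening `proto` from u8 to u16 in nft_compat.c matters for the bridge family, where the "protocol" is an Ethernet protocol number such as 0x86DD (IPv6) that does not fit in eight bits; passing it through a u8 silently dropped the high byte. A hedged demo of the truncation:

```c
#include <stdint.h>

#define ETH_P_IPV6 0x86DD	/* Ethernet protocol number for IPv6 */

/* Routing an Ethernet protocol number through a u8, as the old code
 * effectively did for NFPROTO_BRIDGE, keeps only the low byte. */
static uint16_t through_u8(uint16_t proto)
{
	uint8_t p = proto;	/* high byte lost here */
	return p;
}

static uint16_t through_u16(uint16_t proto)
{
	uint16_t p = proto;	/* full value preserved */
	return p;
}
```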
+2 -2
net/netfilter/nft_hash.c
··· 153 153 iter->err = err; 154 154 goto out; 155 155 } 156 + 157 + continue; 156 158 } 157 159 158 160 if (iter->count < iter->skip) ··· 194 192 .key_offset = offsetof(struct nft_hash_elem, key), 195 193 .key_len = set->klen, 196 194 .hashfn = jhash, 197 - .grow_decision = rht_grow_above_75, 198 - .shrink_decision = rht_shrink_below_30, 199 195 }; 200 196 201 197 return rhashtable_init(priv, &params);
+2 -2
net/netfilter/xt_TPROXY.c
··· 513 513 { 514 514 const struct ip6t_ip6 *i = par->entryinfo; 515 515 516 - if ((i->proto == IPPROTO_TCP || i->proto == IPPROTO_UDP) 517 - && !(i->flags & IP6T_INV_PROTO)) 516 + if ((i->proto == IPPROTO_TCP || i->proto == IPPROTO_UDP) && 517 + !(i->invflags & IP6T_INV_PROTO)) 518 518 return 0; 519 519 520 520 pr_info("Can be used only in combination with "
+5 -6
net/netfilter/xt_recent.c
··· 378 378 mutex_lock(&recent_mutex); 379 379 t = recent_table_lookup(recent_net, info->name); 380 380 if (t != NULL) { 381 - if (info->hit_count > t->nstamps_max_mask) { 382 - pr_info("hitcount (%u) is larger than packets to be remembered (%u) for table %s\n", 383 - info->hit_count, t->nstamps_max_mask + 1, 384 - info->name); 385 - ret = -EINVAL; 386 - goto out; 381 + if (nstamp_mask > t->nstamps_max_mask) { 382 + spin_lock_bh(&recent_lock); 383 + recent_table_flush(t); 384 + t->nstamps_max_mask = nstamp_mask; 385 + spin_unlock_bh(&recent_lock); 387 386 } 388 387 389 388 t->refcnt++;
+12 -9
net/netfilter/xt_socket.c
··· 243 243 extract_icmp6_fields(const struct sk_buff *skb, 244 244 unsigned int outside_hdrlen, 245 245 int *protocol, 246 - struct in6_addr **raddr, 247 - struct in6_addr **laddr, 246 + const struct in6_addr **raddr, 247 + const struct in6_addr **laddr, 248 248 __be16 *rport, 249 - __be16 *lport) 249 + __be16 *lport, 250 + struct ipv6hdr *ipv6_var) 250 251 { 251 - struct ipv6hdr *inside_iph, _inside_iph; 252 + const struct ipv6hdr *inside_iph; 252 253 struct icmp6hdr *icmph, _icmph; 253 254 __be16 *ports, _ports[2]; 254 255 u8 inside_nexthdr; ··· 264 263 if (icmph->icmp6_type & ICMPV6_INFOMSG_MASK) 265 264 return 1; 266 265 267 - inside_iph = skb_header_pointer(skb, outside_hdrlen + sizeof(_icmph), sizeof(_inside_iph), &_inside_iph); 266 + inside_iph = skb_header_pointer(skb, outside_hdrlen + sizeof(_icmph), 267 + sizeof(*ipv6_var), ipv6_var); 268 268 if (inside_iph == NULL) 269 269 return 1; 270 270 inside_nexthdr = inside_iph->nexthdr; 271 271 272 - inside_hdrlen = ipv6_skip_exthdr(skb, outside_hdrlen + sizeof(_icmph) + sizeof(_inside_iph), 272 + inside_hdrlen = ipv6_skip_exthdr(skb, outside_hdrlen + sizeof(_icmph) + 273 + sizeof(*ipv6_var), 273 274 &inside_nexthdr, &inside_fragoff); 274 275 if (inside_hdrlen < 0) 275 276 return 1; /* hjm: Packet has no/incomplete transport layer headers. 
*/ ··· 318 315 static bool 319 316 socket_mt6_v1_v2(const struct sk_buff *skb, struct xt_action_param *par) 320 317 { 321 - struct ipv6hdr *iph = ipv6_hdr(skb); 318 + struct ipv6hdr ipv6_var, *iph = ipv6_hdr(skb); 322 319 struct udphdr _hdr, *hp = NULL; 323 320 struct sock *sk = skb->sk; 324 - struct in6_addr *daddr = NULL, *saddr = NULL; 321 + const struct in6_addr *daddr = NULL, *saddr = NULL; 325 322 __be16 uninitialized_var(dport), uninitialized_var(sport); 326 323 int thoff = 0, uninitialized_var(tproto); 327 324 const struct xt_socket_mtinfo1 *info = (struct xt_socket_mtinfo1 *) par->matchinfo; ··· 345 342 346 343 } else if (tproto == IPPROTO_ICMPV6) { 347 344 if (extract_icmp6_fields(skb, thoff, &tproto, &saddr, &daddr, 348 - &sport, &dport)) 345 + &sport, &dport, &ipv6_var)) 349 346 return false; 350 347 } else { 351 348 return false;
-2
net/netlink/af_netlink.c
··· 3126 3126 .key_len = sizeof(u32), /* portid */ 3127 3127 .hashfn = jhash, 3128 3128 .max_shift = 16, /* 64K */ 3129 - .grow_decision = rht_grow_above_75, 3130 - .shrink_decision = rht_shrink_below_30, 3131 3129 }; 3132 3130 3133 3131 if (err != 0)
+43 -2
net/openvswitch/datapath.c
··· 2194 2194 return 0; 2195 2195 } 2196 2196 2197 - static void __net_exit ovs_exit_net(struct net *net) 2197 + static void __net_exit list_vports_from_net(struct net *net, struct net *dnet, 2198 + struct list_head *head) 2199 + { 2200 + struct ovs_net *ovs_net = net_generic(net, ovs_net_id); 2201 + struct datapath *dp; 2202 + 2203 + list_for_each_entry(dp, &ovs_net->dps, list_node) { 2204 + int i; 2205 + 2206 + for (i = 0; i < DP_VPORT_HASH_BUCKETS; i++) { 2207 + struct vport *vport; 2208 + 2209 + hlist_for_each_entry(vport, &dp->ports[i], dp_hash_node) { 2210 + struct netdev_vport *netdev_vport; 2211 + 2212 + if (vport->ops->type != OVS_VPORT_TYPE_INTERNAL) 2213 + continue; 2214 + 2215 + netdev_vport = netdev_vport_priv(vport); 2216 + if (dev_net(netdev_vport->dev) == dnet) 2217 + list_add(&vport->detach_list, head); 2218 + } 2219 + } 2220 + } 2221 + } 2222 + 2223 + static void __net_exit ovs_exit_net(struct net *dnet) 2198 2224 { 2199 2225 struct datapath *dp, *dp_next; 2200 - struct ovs_net *ovs_net = net_generic(net, ovs_net_id); 2226 + struct ovs_net *ovs_net = net_generic(dnet, ovs_net_id); 2227 + struct vport *vport, *vport_next; 2228 + struct net *net; 2229 + LIST_HEAD(head); 2201 2230 2202 2231 ovs_lock(); 2203 2232 list_for_each_entry_safe(dp, dp_next, &ovs_net->dps, list_node) 2204 2233 __dp_destroy(dp); 2234 + 2235 + rtnl_lock(); 2236 + for_each_net(net) 2237 + list_vports_from_net(net, dnet, &head); 2238 + rtnl_unlock(); 2239 + 2240 + /* Detach all vports from given namespace. */ 2241 + list_for_each_entry_safe(vport, vport_next, &head, detach_list) { 2242 + list_del(&vport->detach_list); 2243 + ovs_dp_detach_port(vport); 2244 + } 2245 + 2205 2246 ovs_unlock(); 2206 2247 2207 2248 cancel_work_sync(&ovs_net->dp_notify_work);
+7 -1
net/openvswitch/flow_netlink.c
··· 2253 2253 struct sk_buff *skb) 2254 2254 { 2255 2255 const struct nlattr *ovs_key = nla_data(a); 2256 + struct nlattr *nla; 2256 2257 size_t key_len = nla_len(ovs_key) / 2; 2257 2258 2258 2259 /* Revert the conversion we did from a non-masked set action to 2259 2260 * masked set action. 2260 2261 */ 2261 - if (nla_put(skb, OVS_ACTION_ATTR_SET, nla_len(a) - key_len, ovs_key)) 2262 + nla = nla_nest_start(skb, OVS_ACTION_ATTR_SET); 2263 + if (!nla) 2262 2264 return -EMSGSIZE; 2263 2265 2266 + if (nla_put(skb, nla_type(ovs_key), key_len, nla_data(ovs_key))) 2267 + return -EMSGSIZE; 2268 + 2269 + nla_nest_end(skb, nla); 2264 2270 return 0; 2265 2271 } 2266 2272
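The flow_netlink.c fix above switches from emitting one pre-built attribute blob to proper `nla_nest_start()`/`nla_nest_end()` nesting. The core idea is length back-patching: reserve an attribute header, write the nested payload, then go back and fill in the total length. A minimal userspace sketch of that idea, using a simplified hypothetical TLV layout (2-byte type, 2-byte length; real netlink attributes also pad to 4-byte alignment):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Simplified TLV buffer: each attribute is a 2-byte type and 2-byte
 * length header followed by its payload.  tlv_nest_start() reserves a
 * header with an unknown length; tlv_nest_end() back-patches it --
 * the same idea as nla_nest_start()/nla_nest_end(). */
struct tlv_buf {
    uint8_t data[256];
    size_t len;
};

static size_t tlv_nest_start(struct tlv_buf *b, uint16_t type)
{
    size_t off = b->len;
    memcpy(b->data + b->len, &type, 2);
    b->len += 4;                 /* header reserved, length not yet known */
    return off;                  /* remember where the header lives */
}

static void tlv_put(struct tlv_buf *b, uint16_t type,
                    const void *payload, uint16_t plen)
{
    memcpy(b->data + b->len, &type, 2);
    memcpy(b->data + b->len + 2, &plen, 2);
    memcpy(b->data + b->len + 4, payload, plen);
    b->len += 4 + (size_t)plen;
}

static void tlv_nest_end(struct tlv_buf *b, size_t off)
{
    /* total = outer header plus everything written inside the nest */
    uint16_t total = (uint16_t)(b->len - off);
    memcpy(b->data + off + 2, &total, 2);
}
```

The original bug re-emitted the attribute without a surrounding nest header, so the receiver could not tell where the `OVS_ACTION_ATTR_SET` container ended.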
+1 -3
net/openvswitch/vport.c
··· 274 274 ASSERT_OVSL(); 275 275 276 276 hlist_del_rcu(&vport->hash_node); 277 - 278 - vport->ops->destroy(vport); 279 - 280 277 module_put(vport->ops->owner); 278 + vport->ops->destroy(vport); 281 279 } 282 280 283 281 /**
+2
net/openvswitch/vport.h
··· 103 103 * @ops: Class structure. 104 104 * @percpu_stats: Points to per-CPU statistics used and maintained by vport 105 105 * @err_stats: Points to error statistics used and maintained by vport 106 + * @detach_list: list used for detaching vport in net-exit call. 106 107 */ 107 108 struct vport { 108 109 struct rcu_head rcu; ··· 118 117 struct pcpu_sw_netstats __percpu *percpu_stats; 119 118 120 119 struct vport_err_stats err_stats; 120 + struct list_head detach_list; 121 121 }; 122 122 123 123 /**
+28 -14
net/packet/af_packet.c
··· 698 698 699 699 if (pkc->last_kactive_blk_num == pkc->kactive_blk_num) { 700 700 if (!frozen) { 701 + if (!BLOCK_NUM_PKTS(pbd)) { 702 + /* An empty block. Just refresh the timer. */ 703 + goto refresh_timer; 704 + } 701 705 prb_retire_current_block(pkc, po, TP_STATUS_BLK_TMO); 702 706 if (!prb_dispatch_next_block(pkc, po)) 703 707 goto refresh_timer; ··· 802 798 h1->ts_last_pkt.ts_sec = last_pkt->tp_sec; 803 799 h1->ts_last_pkt.ts_nsec = last_pkt->tp_nsec; 804 800 } else { 805 - /* Ok, we tmo'd - so get the current time */ 801 + /* Ok, we tmo'd - so get the current time. 802 + * 803 + * It shouldn't really happen as we don't close empty 804 + * blocks. See prb_retire_rx_blk_timer_expired(). 805 + */ 806 806 struct timespec ts; 807 807 getnstimeofday(&ts); 808 808 h1->ts_last_pkt.ts_sec = ts.tv_sec; ··· 1357 1349 return 0; 1358 1350 } 1359 1351 1352 + if (fanout_has_flag(f, PACKET_FANOUT_FLAG_DEFRAG)) { 1353 + skb = ip_check_defrag(skb, IP_DEFRAG_AF_PACKET); 1354 + if (!skb) 1355 + return 0; 1356 + } 1360 1357 switch (f->type) { 1361 1358 case PACKET_FANOUT_HASH: 1362 1359 default: 1363 - if (fanout_has_flag(f, PACKET_FANOUT_FLAG_DEFRAG)) { 1364 - skb = ip_check_defrag(skb, IP_DEFRAG_AF_PACKET); 1365 - if (!skb) 1366 - return 0; 1367 - } 1368 1360 idx = fanout_demux_hash(f, skb, num); 1369 1361 break; 1370 1362 case PACKET_FANOUT_LB: ··· 3123 3115 return 0; 3124 3116 } 3125 3117 3126 - static void packet_dev_mclist(struct net_device *dev, struct packet_mclist *i, int what) 3118 + static void packet_dev_mclist_delete(struct net_device *dev, 3119 + struct packet_mclist **mlp) 3127 3120 { 3128 - for ( ; i; i = i->next) { 3129 - if (i->ifindex == dev->ifindex) 3130 - packet_dev_mc(dev, i, what); 3121 + struct packet_mclist *ml; 3122 + 3123 + while ((ml = *mlp) != NULL) { 3124 + if (ml->ifindex == dev->ifindex) { 3125 + packet_dev_mc(dev, ml, -1); 3126 + *mlp = ml->next; 3127 + kfree(ml); 3128 + } else 3129 + mlp = &ml->next; 3131 3130 } 3132 3131 } 3133 3132 ··· 
3211 3196 packet_dev_mc(dev, ml, -1); 3212 3197 kfree(ml); 3213 3198 } 3214 - rtnl_unlock(); 3215 - return 0; 3199 + break; 3216 3200 } 3217 3201 } 3218 3202 rtnl_unlock(); 3219 - return -EADDRNOTAVAIL; 3203 + return 0; 3220 3204 } 3221 3205 3222 3206 static void packet_flush_mclist(struct sock *sk) ··· 3565 3551 switch (msg) { 3566 3552 case NETDEV_UNREGISTER: 3567 3553 if (po->mclist) 3568 - packet_dev_mclist(dev, po->mclist, -1); 3554 + packet_dev_mclist_delete(dev, &po->mclist); 3569 3555 /* fallthrough */ 3570 3556 3571 3557 case NETDEV_DOWN:
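The new `packet_dev_mclist_delete()` above walks the multicast list through a `struct packet_mclist **`, which lets it unlink matching entries (including the head) without a `prev` pointer or a head special case. A self-contained sketch of that pointer-to-pointer idiom on a plain singly linked list (names here are illustrative, not kernel APIs):

```c
#include <assert.h>
#include <stdlib.h>

struct node {
    int ifindex;
    struct node *next;
};

/* Delete every node matching ifindex.  Walking a 'struct node **'
 * means *np is always the link we would need to rewrite, whether it
 * is the list head or some interior node's ->next. */
static void delete_matching(struct node **np, int ifindex)
{
    struct node *n;

    while ((n = *np) != NULL) {
        if (n->ifindex == ifindex) {
            *np = n->next;    /* unlink, head and interior alike */
            free(n);
        } else {
            np = &n->next;    /* advance to the next link */
        }
    }
}

static struct node *push(struct node *head, int ifindex)
{
    struct node *n = malloc(sizeof(*n));
    n->ifindex = ifindex;
    n->next = head;
    return n;
}

static int count(const struct node *n)
{
    int c = 0;
    for (; n; n = n->next)
        c++;
    return c;
}
```

The replaced `packet_dev_mclist()` only called `packet_dev_mc()` on matches and never freed or unlinked them, which is what the fix addresses.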
+22 -18
net/rds/iw_rdma.c
··· 88 88 int *unpinned); 89 89 static void rds_iw_destroy_fastreg(struct rds_iw_mr_pool *pool, struct rds_iw_mr *ibmr); 90 90 91 - static int rds_iw_get_device(struct rds_sock *rs, struct rds_iw_device **rds_iwdev, struct rdma_cm_id **cm_id) 91 + static int rds_iw_get_device(struct sockaddr_in *src, struct sockaddr_in *dst, 92 + struct rds_iw_device **rds_iwdev, 93 + struct rdma_cm_id **cm_id) 92 94 { 93 95 struct rds_iw_device *iwdev; 94 96 struct rds_iw_cm_id *i_cm_id; ··· 114 112 src_addr->sin_port, 115 113 dst_addr->sin_addr.s_addr, 116 114 dst_addr->sin_port, 117 - rs->rs_bound_addr, 118 - rs->rs_bound_port, 119 - rs->rs_conn_addr, 120 - rs->rs_conn_port); 115 + src->sin_addr.s_addr, 116 + src->sin_port, 117 + dst->sin_addr.s_addr, 118 + dst->sin_port); 121 119 #ifdef WORKING_TUPLE_DETECTION 122 - if (src_addr->sin_addr.s_addr == rs->rs_bound_addr && 123 - src_addr->sin_port == rs->rs_bound_port && 124 - dst_addr->sin_addr.s_addr == rs->rs_conn_addr && 125 - dst_addr->sin_port == rs->rs_conn_port) { 120 + if (src_addr->sin_addr.s_addr == src->sin_addr.s_addr && 121 + src_addr->sin_port == src->sin_port && 122 + dst_addr->sin_addr.s_addr == dst->sin_addr.s_addr && 123 + dst_addr->sin_port == dst->sin_port) { 126 124 #else 127 125 /* FIXME - needs to compare the local and remote 128 126 * ipaddr/port tuple, but the ipaddr is the only ··· 130 128 * zero'ed. It doesn't appear to be properly populated 131 129 * during connection setup... 
132 130 */ 133 - if (src_addr->sin_addr.s_addr == rs->rs_bound_addr) { 131 + if (src_addr->sin_addr.s_addr == src->sin_addr.s_addr) { 134 132 #endif 135 133 spin_unlock_irq(&iwdev->spinlock); 136 134 *rds_iwdev = iwdev; ··· 182 180 { 183 181 struct sockaddr_in *src_addr, *dst_addr; 184 182 struct rds_iw_device *rds_iwdev_old; 185 - struct rds_sock rs; 186 183 struct rdma_cm_id *pcm_id; 187 184 int rc; 188 185 189 186 src_addr = (struct sockaddr_in *)&cm_id->route.addr.src_addr; 190 187 dst_addr = (struct sockaddr_in *)&cm_id->route.addr.dst_addr; 191 188 192 - rs.rs_bound_addr = src_addr->sin_addr.s_addr; 193 - rs.rs_bound_port = src_addr->sin_port; 194 - rs.rs_conn_addr = dst_addr->sin_addr.s_addr; 195 - rs.rs_conn_port = dst_addr->sin_port; 196 - 197 - rc = rds_iw_get_device(&rs, &rds_iwdev_old, &pcm_id); 189 + rc = rds_iw_get_device(src_addr, dst_addr, &rds_iwdev_old, &pcm_id); 198 190 if (rc) 199 191 rds_iw_remove_cm_id(rds_iwdev, cm_id); 200 192 ··· 594 598 struct rds_iw_device *rds_iwdev; 595 599 struct rds_iw_mr *ibmr = NULL; 596 600 struct rdma_cm_id *cm_id; 601 + struct sockaddr_in src = { 602 + .sin_addr.s_addr = rs->rs_bound_addr, 603 + .sin_port = rs->rs_bound_port, 604 + }; 605 + struct sockaddr_in dst = { 606 + .sin_addr.s_addr = rs->rs_conn_addr, 607 + .sin_port = rs->rs_conn_port, 608 + }; 597 609 int ret; 598 610 599 - ret = rds_iw_get_device(rs, &rds_iwdev, &cm_id); 611 + ret = rds_iw_get_device(&src, &dst, &rds_iwdev, &cm_id); 600 612 if (ret || !cm_id) { 601 613 ret = -ENODEV; 602 614 goto out;
+5 -4
net/rxrpc/ar-ack.c
··· 218 218 struct rxrpc_header *hdr; 219 219 struct sk_buff *txb; 220 220 unsigned long *p_txb, resend_at; 221 - int loop, stop; 221 + bool stop; 222 + int loop; 222 223 u8 resend; 223 224 224 225 _enter("{%d,%d,%d,%d},", ··· 227 226 atomic_read(&call->sequence), 228 227 CIRC_CNT(call->acks_head, call->acks_tail, call->acks_winsz)); 229 228 230 - stop = 0; 229 + stop = false; 231 230 resend = 0; 232 231 resend_at = 0; 233 232 ··· 256 255 _proto("Tx DATA %%%u { #%d }", 257 256 ntohl(sp->hdr.serial), ntohl(sp->hdr.seq)); 258 257 if (rxrpc_send_packet(call->conn->trans, txb) < 0) { 259 - stop = 0; 258 + stop = true; 260 259 sp->resend_at = jiffies + 3; 261 260 } else { 262 261 sp->resend_at = 263 - jiffies + rxrpc_resend_timeout * HZ; 262 + jiffies + rxrpc_resend_timeout; 264 263 } 265 264 } 266 265
+2 -2
net/rxrpc/ar-error.c
··· 42 42 _leave("UDP socket errqueue empty"); 43 43 return; 44 44 } 45 - if (!skb->len) { 45 + serr = SKB_EXT_ERR(skb); 46 + if (!skb->len && serr->ee.ee_origin == SO_EE_ORIGIN_TIMESTAMPING) { 46 47 _leave("UDP empty message"); 47 48 kfree_skb(skb); 48 49 return; ··· 51 50 52 51 rxrpc_new_skb(skb); 53 52 54 - serr = SKB_EXT_ERR(skb); 55 53 addr = *(__be32 *)(skb_network_header(skb) + serr->addr_offset); 56 54 port = serr->port; 57 55
+1 -1
net/rxrpc/ar-recvmsg.c
··· 87 87 if (!skb) { 88 88 /* nothing remains on the queue */ 89 89 if (copied && 90 - (msg->msg_flags & MSG_PEEK || timeo == 0)) 90 + (flags & MSG_PEEK || timeo == 0)) 91 91 goto out; 92 92 93 93 /* wait for a message to turn up */
+28 -8
net/sched/act_bpf.c
··· 25 25 struct tcf_result *res) 26 26 { 27 27 struct tcf_bpf *b = a->priv; 28 - int action; 29 - int filter_res; 28 + int action, filter_res; 30 29 31 30 spin_lock(&b->tcf_lock); 31 + 32 32 b->tcf_tm.lastuse = jiffies; 33 33 bstats_update(&b->tcf_bstats, skb); 34 - action = b->tcf_action; 35 34 36 35 filter_res = BPF_PROG_RUN(b->filter, skb); 37 - if (filter_res == 0) { 38 - /* Return code 0 from the BPF program 39 - * is being interpreted as a drop here. 40 - */ 41 - action = TC_ACT_SHOT; 36 + 37 + /* A BPF program may overwrite the default action opcode. 38 + * As in cls_bpf, if filter_res == -1 we use the 39 + * default action specified from tc. 40 + * 41 + * In case a different well-known TC_ACT opcode has been 42 + * returned, it will overwrite the default one. 43 + * 44 + * For everything else that is unknown, TC_ACT_UNSPEC is 45 + * returned. 46 + */ 47 + switch (filter_res) { 48 + case TC_ACT_PIPE: 49 + case TC_ACT_RECLASSIFY: 50 + case TC_ACT_OK: 51 + action = filter_res; 52 + break; 53 + case TC_ACT_SHOT: 54 + action = filter_res; 42 55 b->tcf_qstats.drops++; 56 + break; 57 + case TC_ACT_UNSPEC: 58 + action = b->tcf_action; 59 + break; 60 + default: 61 + action = TC_ACT_UNSPEC; 62 + break; 43 63 } 44 64 45 65 spin_unlock(&b->tcf_lock);
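The act_bpf change above stops treating only a zero return as "drop" and instead maps the program's return code onto the well-known TC action opcodes. The mapping can be isolated as a pure function; the `TC_ACT_*` values below mirror `<linux/pkt_cls.h>`, redefined here so the sketch is self-contained:

```c
#include <assert.h>

#define TC_ACT_UNSPEC      (-1)
#define TC_ACT_OK          0
#define TC_ACT_RECLASSIFY  1
#define TC_ACT_SHOT        2
#define TC_ACT_PIPE        3

/* Map a BPF program's return code onto the action to take. */
static int map_filter_res(int filter_res, int default_action)
{
    switch (filter_res) {
    case TC_ACT_PIPE:
    case TC_ACT_RECLASSIFY:
    case TC_ACT_OK:
    case TC_ACT_SHOT:
        return filter_res;       /* a well-known opcode overrides */
    case TC_ACT_UNSPEC:
        return default_action;   /* -1: fall back to the tc-configured action */
    default:
        return TC_ACT_UNSPEC;    /* anything unknown is neutralized */
    }
}
```

(The in-kernel version additionally bumps the drop counter in the `TC_ACT_SHOT` case, which is a side effect this pure sketch omits.)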
+4 -1
net/sched/cls_u32.c
··· 78 78 struct tc_u_common *tp_c; 79 79 int refcnt; 80 80 unsigned int divisor; 81 - struct tc_u_knode __rcu *ht[1]; 82 81 struct rcu_head rcu; 82 + /* The 'ht' field MUST be the last field in structure to allow for 83 + * more entries allocated at end of structure. 84 + */ 85 + struct tc_u_knode __rcu *ht[1]; 83 86 }; 84 87 85 88 struct tc_u_common {
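The cls_u32 hunk above moves `ht[1]` to the end of the structure because the allocator requests extra entries past `sizeof(struct ...)`, and those extra slots are reached through the trailing array; any member placed after it would be overwritten. A minimal sketch of the pattern, with `struct hnode` as an illustrative stand-in for `tc_u_hnode`:

```c
#include <assert.h>
#include <stdlib.h>

/* The trailing one-element array (pre-C99 style, as in the kernel
 * struct) really holds 'divisor + 1' entries: the allocation below
 * adds 'divisor' extra slots beyond sizeof(*h).  That only works if
 * the array is the LAST member. */
struct hnode {
    unsigned int divisor;
    void *ht[1];
};

static struct hnode *hnode_alloc(unsigned int divisor)
{
    struct hnode *h = calloc(1, sizeof(*h) + divisor * sizeof(void *));
    if (h)
        h->divisor = divisor;
    return h;
}
```

Modern code would use a C99 flexible array member (`void *ht[];`) for the same effect; the constraint that it come last is identical.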
+1
net/sched/ematch.c
··· 228 228 * to replay the request. 229 229 */ 230 230 module_put(em->ops->owner); 231 + em->ops = NULL; 231 232 err = -EAGAIN; 232 233 } 233 234 #endif
+4
net/socket.c
··· 1702 1702 1703 1703 if (len > INT_MAX) 1704 1704 len = INT_MAX; 1705 + if (unlikely(!access_ok(VERIFY_READ, buff, len))) 1706 + return -EFAULT; 1705 1707 sock = sockfd_lookup_light(fd, &err, &fput_needed); 1706 1708 if (!sock) 1707 1709 goto out; ··· 1762 1760 1763 1761 if (size > INT_MAX) 1764 1762 size = INT_MAX; 1763 + if (unlikely(!access_ok(VERIFY_WRITE, ubuf, size))) 1764 + return -EFAULT; 1765 1765 sock = sockfd_lookup_light(fd, &err, &fput_needed); 1766 1766 if (!sock) 1767 1767 goto out;
+2
net/sunrpc/auth_gss/gss_rpc_upcall.c
··· 217 217 218 218 for (i = 0; i < arg->npages && arg->pages[i]; i++) 219 219 __free_page(arg->pages[i]); 220 + 221 + kfree(arg->pages); 220 222 } 221 223 222 224 static int gssp_alloc_receive_pages(struct gssx_arg_accept_sec_context *arg)
+2
net/sunrpc/auth_gss/svcauth_gss.c
··· 463 463 /* number of additional gid's */ 464 464 if (get_int(&mesg, &N)) 465 465 goto out; 466 + if (N < 0 || N > NGROUPS_MAX) 467 + goto out; 466 468 status = -ENOMEM; 467 469 rsci.cred.cr_group_info = groups_alloc(N); 468 470 if (rsci.cred.cr_group_info == NULL)
+1 -1
net/sunrpc/cache.c
··· 921 921 poll_wait(filp, &queue_wait, wait); 922 922 923 923 /* alway allow write */ 924 - mask = POLL_OUT | POLLWRNORM; 924 + mask = POLLOUT | POLLWRNORM; 925 925 926 926 if (!rp) 927 927 return mask;
+1 -3
net/sunrpc/clnt.c
··· 303 303 struct super_block *pipefs_sb; 304 304 int err; 305 305 306 - err = rpc_clnt_debugfs_register(clnt); 307 - if (err) 308 - return err; 306 + rpc_clnt_debugfs_register(clnt); 309 307 310 308 pipefs_sb = rpc_get_sb_net(net); 311 309 if (pipefs_sb) {
+29 -23
net/sunrpc/debugfs.c
··· 129 129 .release = tasks_release, 130 130 }; 131 131 132 - int 132 + void 133 133 rpc_clnt_debugfs_register(struct rpc_clnt *clnt) 134 134 { 135 - int len, err; 135 + int len; 136 136 char name[24]; /* enough for "../../rpc_xprt/ + 8 hex digits + NULL */ 137 + struct rpc_xprt *xprt; 137 138 138 139 /* Already registered? */ 139 - if (clnt->cl_debugfs) 140 - return 0; 140 + if (clnt->cl_debugfs || !rpc_clnt_dir) 141 + return; 141 142 142 143 len = snprintf(name, sizeof(name), "%x", clnt->cl_clid); 143 144 if (len >= sizeof(name)) 144 - return -EINVAL; 145 + return; 145 146 146 147 /* make the per-client dir */ 147 148 clnt->cl_debugfs = debugfs_create_dir(name, rpc_clnt_dir); 148 149 if (!clnt->cl_debugfs) 149 - return -ENOMEM; 150 + return; 150 151 151 152 /* make tasks file */ 152 - err = -ENOMEM; 153 153 if (!debugfs_create_file("tasks", S_IFREG | S_IRUSR, clnt->cl_debugfs, 154 154 clnt, &tasks_fops)) 155 155 goto out_err; 156 156 157 - err = -EINVAL; 158 157 rcu_read_lock(); 158 + xprt = rcu_dereference(clnt->cl_xprt); 159 + /* no "debugfs" dentry? Don't bother with the symlink. 
*/ 160 + if (!xprt->debugfs) { 161 + rcu_read_unlock(); 162 + return; 163 + } 159 164 len = snprintf(name, sizeof(name), "../../rpc_xprt/%s", 160 - rcu_dereference(clnt->cl_xprt)->debugfs->d_name.name); 165 + xprt->debugfs->d_name.name); 161 166 rcu_read_unlock(); 167 + 162 168 if (len >= sizeof(name)) 163 169 goto out_err; 164 170 165 - err = -ENOMEM; 166 171 if (!debugfs_create_symlink("xprt", clnt->cl_debugfs, name)) 167 172 goto out_err; 168 173 169 - return 0; 174 + return; 170 175 out_err: 171 176 debugfs_remove_recursive(clnt->cl_debugfs); 172 177 clnt->cl_debugfs = NULL; 173 - return err; 174 178 } 175 179 176 180 void ··· 230 226 .release = xprt_info_release, 231 227 }; 232 228 233 - int 229 + void 234 230 rpc_xprt_debugfs_register(struct rpc_xprt *xprt) 235 231 { 236 232 int len, id; 237 233 static atomic_t cur_id; 238 234 char name[9]; /* 8 hex digits + NULL term */ 239 235 236 + if (!rpc_xprt_dir) 237 + return; 238 + 240 239 id = (unsigned int)atomic_inc_return(&cur_id); 241 240 242 241 len = snprintf(name, sizeof(name), "%x", id); 243 242 if (len >= sizeof(name)) 244 - return -EINVAL; 243 + return; 245 244 246 245 /* make the per-client dir */ 247 246 xprt->debugfs = debugfs_create_dir(name, rpc_xprt_dir); 248 247 if (!xprt->debugfs) 249 - return -ENOMEM; 248 + return; 250 249 251 250 /* make tasks file */ 252 251 if (!debugfs_create_file("info", S_IFREG | S_IRUSR, xprt->debugfs, 253 252 xprt, &xprt_info_fops)) { 254 253 debugfs_remove_recursive(xprt->debugfs); 255 254 xprt->debugfs = NULL; 256 - return -ENOMEM; 257 255 } 258 - 259 - return 0; 260 256 } 261 257 262 258 void ··· 270 266 sunrpc_debugfs_exit(void) 271 267 { 272 268 debugfs_remove_recursive(topdir); 269 + topdir = NULL; 270 + rpc_clnt_dir = NULL; 271 + rpc_xprt_dir = NULL; 273 272 } 274 273 275 - int __init 274 + void __init 276 275 sunrpc_debugfs_init(void) 277 276 { 278 277 topdir = debugfs_create_dir("sunrpc", NULL); 279 278 if (!topdir) 280 - goto out; 279 + return; 281 280 282 281 
rpc_clnt_dir = debugfs_create_dir("rpc_clnt", topdir); 283 282 if (!rpc_clnt_dir) ··· 290 283 if (!rpc_xprt_dir) 291 284 goto out_remove; 292 285 293 - return 0; 286 + return; 294 287 out_remove: 295 288 debugfs_remove_recursive(topdir); 296 289 topdir = NULL; 297 - out: 298 - return -ENOMEM; 290 + rpc_clnt_dir = NULL; 299 291 }
+1 -6
net/sunrpc/sunrpc_syms.c
··· 98 98 if (err) 99 99 goto out4; 100 100 101 - err = sunrpc_debugfs_init(); 102 - if (err) 103 - goto out5; 104 - 101 + sunrpc_debugfs_init(); 105 102 #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 106 103 rpc_register_sysctl(); 107 104 #endif ··· 106 109 init_socket_xprt(); /* clnt sock transport */ 107 110 return 0; 108 111 109 - out5: 110 - unregister_rpc_pipefs(); 111 112 out4: 112 113 unregister_pernet_subsys(&sunrpc_net_ops); 113 114 out3:
+1 -6
net/sunrpc/xprt.c
··· 1331 1331 */ 1332 1332 struct rpc_xprt *xprt_create_transport(struct xprt_create *args) 1333 1333 { 1334 - int err; 1335 1334 struct rpc_xprt *xprt; 1336 1335 struct xprt_class *t; 1337 1336 ··· 1371 1372 return ERR_PTR(-ENOMEM); 1372 1373 } 1373 1374 1374 - err = rpc_xprt_debugfs_register(xprt); 1375 - if (err) { 1376 - xprt_destroy(xprt); 1377 - return ERR_PTR(err); 1378 - } 1375 + rpc_xprt_debugfs_register(xprt); 1379 1376 1380 1377 dprintk("RPC: created transport %p with %u slots\n", xprt, 1381 1378 xprt->max_reqs);
+2 -1
net/sunrpc/xprtrdma/rpc_rdma.c
··· 738 738 struct rpc_xprt *xprt = rep->rr_xprt; 739 739 struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(xprt); 740 740 __be32 *iptr; 741 - int credits, rdmalen, status; 741 + int rdmalen, status; 742 742 unsigned long cwnd; 743 + u32 credits; 743 744 744 745 /* Check status. If bad, signal disconnect and return rep to pool */ 745 746 if (rep->rr_len == ~0U) {
+1 -1
net/sunrpc/xprtrdma/xprt_rdma.h
··· 285 285 */ 286 286 struct rpcrdma_buffer { 287 287 spinlock_t rb_lock; /* protects indexes */ 288 - int rb_max_requests;/* client max requests */ 288 + u32 rb_max_requests;/* client max requests */ 289 289 struct list_head rb_mws; /* optional memory windows/fmrs/frmrs */ 290 290 struct list_head rb_all; 291 291 int rb_send_index;
+1 -1
net/tipc/core.c
··· 152 152 static void __exit tipc_exit(void) 153 153 { 154 154 tipc_bearer_cleanup(); 155 + unregister_pernet_subsys(&tipc_net_ops); 155 156 tipc_netlink_stop(); 156 157 tipc_netlink_compat_stop(); 157 158 tipc_socket_stop(); 158 159 tipc_unregister_sysctl(); 159 - unregister_pernet_subsys(&tipc_net_ops); 160 160 161 161 pr_info("Deactivated\n"); 162 162 }
+4 -3
net/tipc/link.c
··· 464 464 /* Clean up all queues, except inputq: */ 465 465 __skb_queue_purge(&l_ptr->outqueue); 466 466 __skb_queue_purge(&l_ptr->deferred_queue); 467 - skb_queue_splice_init(&l_ptr->wakeupq, &l_ptr->inputq); 468 - if (!skb_queue_empty(&l_ptr->inputq)) 467 + if (!owner->inputq) 468 + owner->inputq = &l_ptr->inputq; 469 + skb_queue_splice_init(&l_ptr->wakeupq, owner->inputq); 470 + if (!skb_queue_empty(owner->inputq)) 469 471 owner->action_flags |= TIPC_MSG_EVT; 470 - owner->inputq = &l_ptr->inputq; 471 472 l_ptr->next_out = NULL; 472 473 l_ptr->unacked_window = 0; 473 474 l_ptr->checkpoint = 1;
-2
net/tipc/socket.c
··· 2364 2364 .hashfn = jhash, 2365 2365 .max_shift = 20, /* 1M */ 2366 2366 .min_shift = 8, /* 256 */ 2367 - .grow_decision = rht_grow_above_75, 2368 - .shrink_decision = rht_shrink_below_30, 2369 2367 }; 2370 2368 2371 2369 return rhashtable_init(&tn->sk_rht, &rht_params);
+1
net/wireless/core.c
··· 1199 1199 regulatory_exit(); 1200 1200 out_fail_reg: 1201 1201 debugfs_remove(ieee80211_debugfs_dir); 1202 + nl80211_exit(); 1202 1203 out_fail_nl80211: 1203 1204 unregister_netdevice_notifier(&cfg80211_netdev_notifier); 1204 1205 out_fail_notifier:
+15 -7
net/wireless/nl80211.c
··· 2654 2654 return err; 2655 2655 } 2656 2656 2657 - msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 2658 - if (!msg) 2659 - return -ENOMEM; 2660 - 2661 2657 err = parse_monitor_flags(type == NL80211_IFTYPE_MONITOR ? 2662 2658 info->attrs[NL80211_ATTR_MNTR_FLAGS] : NULL, 2663 2659 &flags); ··· 2661 2665 if (!err && (flags & MONITOR_FLAG_ACTIVE) && 2662 2666 !(rdev->wiphy.features & NL80211_FEATURE_ACTIVE_MONITOR)) 2663 2667 return -EOPNOTSUPP; 2668 + 2669 + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 2670 + if (!msg) 2671 + return -ENOMEM; 2664 2672 2665 2673 wdev = rdev_add_virtual_intf(rdev, 2666 2674 nla_data(info->attrs[NL80211_ATTR_IFNAME]), ··· 4399 4399 4400 4400 if (parse_station_flags(info, dev->ieee80211_ptr->iftype, &params)) 4401 4401 return -EINVAL; 4402 + 4403 + /* HT/VHT requires QoS, but if we don't have that just ignore HT/VHT 4404 + * as userspace might just pass through the capabilities from the IEs 4405 + * directly, rather than enforcing this restriction and returning an 4406 + * error in this case. 4407 + */ 4408 + if (!(params.sta_flags_set & BIT(NL80211_STA_FLAG_WME))) { 4409 + params.ht_capa = NULL; 4410 + params.vht_capa = NULL; 4411 + } 4402 4412 4403 4413 /* When you run into this, adjust the code below for the new flag */ 4404 4414 BUILD_BUG_ON(NL80211_STA_FLAG_MAX != 7); ··· 12538 12528 } 12539 12529 12540 12530 for (j = 0; j < match->n_channels; j++) { 12541 - if (nla_put_u32(msg, 12542 - NL80211_ATTR_WIPHY_FREQ, 12543 - match->channels[j])) { 12531 + if (nla_put_u32(msg, j, match->channels[j])) { 12544 12532 nla_nest_cancel(msg, nl_freqs); 12545 12533 nla_nest_cancel(msg, nl_match); 12546 12534 goto out;
+1 -1
net/wireless/reg.c
··· 228 228 229 229 /* We keep a static world regulatory domain in case of the absence of CRDA */ 230 230 static const struct ieee80211_regdomain world_regdom = { 231 - .n_reg_rules = 6, 231 + .n_reg_rules = 8, 232 232 .alpha2 = "00", 233 233 .reg_rules = { 234 234 /* IEEE 802.11b/g, channels 1..11 */
+6 -6
net/xfrm/xfrm_policy.c
··· 2269 2269 * have the xfrm_state's. We need to wait for KM to 2270 2270 * negotiate new SA's or bail out with error.*/ 2271 2271 if (net->xfrm.sysctl_larval_drop) { 2272 - dst_release(dst); 2273 - xfrm_pols_put(pols, drop_pols); 2274 2272 XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTNOSTATES); 2275 - 2276 - return ERR_PTR(-EREMOTE); 2273 + err = -EREMOTE; 2274 + goto error; 2277 2275 } 2278 2276 2279 2277 err = -EAGAIN; ··· 2322 2324 error: 2323 2325 dst_release(dst); 2324 2326 dropdst: 2325 - dst_release(dst_orig); 2327 + if (!(flags & XFRM_LOOKUP_KEEP_DST_REF)) 2328 + dst_release(dst_orig); 2326 2329 xfrm_pols_put(pols, drop_pols); 2327 2330 return ERR_PTR(err); 2328 2331 } ··· 2337 2338 struct sock *sk, int flags) 2338 2339 { 2339 2340 struct dst_entry *dst = xfrm_lookup(net, dst_orig, fl, sk, 2340 - flags | XFRM_LOOKUP_QUEUE); 2341 + flags | XFRM_LOOKUP_QUEUE | 2342 + XFRM_LOOKUP_KEEP_DST_REF); 2341 2343 2342 2344 if (IS_ERR(dst) && PTR_ERR(dst) == -EREMOTE) 2343 2345 return make_blackhole(net, dst_orig->ops->family, dst_orig);
+1 -1
security/selinux/selinuxfs.c
··· 152 152 goto out; 153 153 154 154 /* No partial writes. */ 155 - length = EINVAL; 155 + length = -EINVAL; 156 156 if (*ppos != 0) 157 157 goto out; 158 158
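The selinuxfs one-character fix above restores the kernel convention that failures are reported as *negative* errno values; `length = EINVAL` returned a positive number that a caller would read as a byte count. A minimal sketch of the convention (`store_value` is a hypothetical stand-in for a write handler):

```c
#include <assert.h>
#include <errno.h>

/* Kernel-style return convention: negative errno on failure,
 * a non-negative byte count on success. */
static long store_value(long pos, long len)
{
    if (pos != 0)
        return -EINVAL;   /* no partial writes */
    return len;           /* success: bytes consumed */
}
```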
+4
sound/core/control.c
··· 1170 1170 1171 1171 if (info->count < 1) 1172 1172 return -EINVAL; 1173 + if (!*info->id.name) 1174 + return -EINVAL; 1175 + if (strnlen(info->id.name, sizeof(info->id.name)) >= sizeof(info->id.name)) 1176 + return -EINVAL; 1173 1177 access = info->access == 0 ? SNDRV_CTL_ELEM_ACCESS_READWRITE : 1174 1178 (info->access & (SNDRV_CTL_ELEM_ACCESS_READWRITE| 1175 1179 SNDRV_CTL_ELEM_ACCESS_INACTIVE|
+2
sound/drivers/opl3/opl3_midi.c
··· 105 105 int pitchbend = chan->midi_pitchbend; 106 106 int segment; 107 107 108 + if (pitchbend < -0x2000) 109 + pitchbend = -0x2000; 108 110 if (pitchbend > 0x1FFF) 109 111 pitchbend = 0x1FFF; 110 112
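The opl3 fix above adds the missing lower bound: a 14-bit MIDI pitch bend is centered at 0 and spans [-0x2000, 0x1FFF], but the original code clamped only the upper end, letting out-of-range negative values through. The complete clamp, extracted as a sketch:

```c
#include <assert.h>

/* Clamp a 14-bit signed pitch-bend value to its legal range. */
static int clamp_pitchbend(int pitchbend)
{
    if (pitchbend < -0x2000)
        pitchbend = -0x2000;
    if (pitchbend > 0x1FFF)
        pitchbend = 0x1FFF;
    return pitchbend;
}
```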
+1 -2
sound/firewire/iso-resources.c
··· 26 26 int fw_iso_resources_init(struct fw_iso_resources *r, struct fw_unit *unit) 27 27 { 28 28 r->channels_mask = ~0uLL; 29 - r->unit = fw_unit_get(unit); 29 + r->unit = unit; 30 30 mutex_init(&r->mutex); 31 31 r->allocated = false; 32 32 ··· 42 42 { 43 43 WARN_ON(r->allocated); 44 44 mutex_destroy(&r->mutex); 45 - fw_unit_put(r->unit); 46 45 } 47 46 EXPORT_SYMBOL(fw_iso_resources_destroy); 48 47
+3 -2
sound/firewire/oxfw/oxfw-stream.c
··· 171 171 } 172 172 173 173 /* Wait first packet */ 174 - err = amdtp_stream_wait_callback(stream, CALLBACK_TIMEOUT); 175 - if (err < 0) 174 + if (!amdtp_stream_wait_callback(stream, CALLBACK_TIMEOUT)) { 176 175 stop_stream(oxfw, stream); 176 + err = -ETIMEDOUT; 177 + } 177 178 end: 178 179 return err; 179 180 }
+2 -1
sound/isa/msnd/msnd_pinnacle_mixer.c
··· 306 306 spin_lock_init(&chip->mixer_lock); 307 307 strcpy(card->mixername, "MSND Pinnacle Mixer"); 308 308 309 - for (idx = 0; idx < ARRAY_SIZE(snd_msnd_controls); idx++) 309 + for (idx = 0; idx < ARRAY_SIZE(snd_msnd_controls); idx++) { 310 310 err = snd_ctl_add(card, 311 311 snd_ctl_new1(snd_msnd_controls + idx, chip)); 312 312 if (err < 0) 313 313 return err; 314 + } 314 315 315 316 return 0; 316 317 }
+1 -1
sound/pci/hda/hda_controller.c
··· 1164 1164 } 1165 1165 } 1166 1166 1167 - if (!bus->no_response_fallback) 1167 + if (bus->no_response_fallback) 1168 1168 return -1; 1169 1169 1170 1170 if (!chip->polling_mode && chip->poll_count < 2) {
+39 -8
sound/pci/hda/hda_generic.c
··· 687 687 return val; 688 688 } 689 689 690 + /* is this a stereo widget or a stereo-to-mono mix? */ 691 + static bool is_stereo_amps(struct hda_codec *codec, hda_nid_t nid, int dir) 692 + { 693 + unsigned int wcaps = get_wcaps(codec, nid); 694 + hda_nid_t conn; 695 + 696 + if (wcaps & AC_WCAP_STEREO) 697 + return true; 698 + if (dir != HDA_INPUT || get_wcaps_type(wcaps) != AC_WID_AUD_MIX) 699 + return false; 700 + if (snd_hda_get_num_conns(codec, nid) != 1) 701 + return false; 702 + if (snd_hda_get_connections(codec, nid, &conn, 1) < 0) 703 + return false; 704 + return !!(get_wcaps(codec, conn) & AC_WCAP_STEREO); 705 + } 706 + 690 707 /* initialize the amp value (only at the first time) */ 691 708 static void init_amp(struct hda_codec *codec, hda_nid_t nid, int dir, int idx) 692 709 { 693 710 unsigned int caps = query_amp_caps(codec, nid, dir); 694 711 int val = get_amp_val_to_activate(codec, nid, dir, caps, false); 695 - snd_hda_codec_amp_init_stereo(codec, nid, dir, idx, 0xff, val); 712 + 713 + if (is_stereo_amps(codec, nid, dir)) 714 + snd_hda_codec_amp_init_stereo(codec, nid, dir, idx, 0xff, val); 715 + else 716 + snd_hda_codec_amp_init(codec, nid, 0, dir, idx, 0xff, val); 717 + } 718 + 719 + /* update the amp, doing in stereo or mono depending on NID */ 720 + static int update_amp(struct hda_codec *codec, hda_nid_t nid, int dir, int idx, 721 + unsigned int mask, unsigned int val) 722 + { 723 + if (is_stereo_amps(codec, nid, dir)) 724 + return snd_hda_codec_amp_stereo(codec, nid, dir, idx, 725 + mask, val); 726 + else 727 + return snd_hda_codec_amp_update(codec, nid, 0, dir, idx, 728 + mask, val); 696 729 } 697 730 698 731 /* calculate amp value mask we can modify; ··· 765 732 return; 766 733 767 734 val &= mask; 768 - snd_hda_codec_amp_stereo(codec, nid, dir, idx, mask, val); 735 + update_amp(codec, nid, dir, idx, mask, val); 769 736 } 770 737 771 738 static void activate_amp_out(struct hda_codec *codec, struct nid_path *path, ··· 4457 4424 has_amp = 
nid_has_mute(codec, mix, HDA_INPUT); 4458 4425 for (i = 0; i < nums; i++) { 4459 4426 if (has_amp) 4460 - snd_hda_codec_amp_stereo(codec, mix, 4461 - HDA_INPUT, i, 4462 - 0xff, HDA_AMP_MUTE); 4427 + update_amp(codec, mix, HDA_INPUT, i, 4428 + 0xff, HDA_AMP_MUTE); 4463 4429 else if (nid_has_volume(codec, conn[i], HDA_OUTPUT)) 4464 - snd_hda_codec_amp_stereo(codec, conn[i], 4465 - HDA_OUTPUT, 0, 4466 - 0xff, HDA_AMP_MUTE); 4430 + update_amp(codec, conn[i], HDA_OUTPUT, 0, 4431 + 0xff, HDA_AMP_MUTE); 4467 4432 } 4468 4433 } 4469 4434
+1 -1
sound/pci/hda/hda_intel.c
··· 1989 1989 .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, 1990 1990 /* Sunrise Point */ 1991 1991 { PCI_DEVICE(0x8086, 0xa170), 1992 - .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, 1992 + .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_SKYLAKE }, 1993 1993 /* Sunrise Point-LP */ 1994 1994 { PCI_DEVICE(0x8086, 0x9d70), 1995 1995 .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_SKYLAKE },
+30 -8
sound/pci/hda/hda_proc.c
··· 134 134 (caps & AC_AMPCAP_MUTE) >> AC_AMPCAP_MUTE_SHIFT); 135 135 } 136 136 137 + /* is this a stereo widget or a stereo-to-mono mix? */ 138 + static bool is_stereo_amps(struct hda_codec *codec, hda_nid_t nid, 139 + int dir, unsigned int wcaps, int indices) 140 + { 141 + hda_nid_t conn; 142 + 143 + if (wcaps & AC_WCAP_STEREO) 144 + return true; 145 + /* check for a stereo-to-mono mix; it must be: 146 + * only a single connection, only for input, and only a mixer widget 147 + */ 148 + if (indices != 1 || dir != HDA_INPUT || 149 + get_wcaps_type(wcaps) != AC_WID_AUD_MIX) 150 + return false; 151 + 152 + if (snd_hda_get_raw_connections(codec, nid, &conn, 1) < 0) 153 + return false; 154 + /* the connection source is a stereo? */ 155 + wcaps = snd_hda_param_read(codec, conn, AC_PAR_AUDIO_WIDGET_CAP); 156 + return !!(wcaps & AC_WCAP_STEREO); 157 + } 158 + 137 159 static void print_amp_vals(struct snd_info_buffer *buffer, 138 160 struct hda_codec *codec, hda_nid_t nid, 139 - int dir, int stereo, int indices) 161 + int dir, unsigned int wcaps, int indices) 140 162 { 141 163 unsigned int val; 164 + bool stereo; 142 165 int i; 166 + 167 + stereo = is_stereo_amps(codec, nid, dir, wcaps, indices); 143 168 144 169 dir = dir == HDA_OUTPUT ? 
AC_AMP_GET_OUTPUT : AC_AMP_GET_INPUT; 145 170 for (i = 0; i < indices; i++) { ··· 782 757 (codec->single_adc_amp && 783 758 wid_type == AC_WID_AUD_IN)) 784 759 print_amp_vals(buffer, codec, nid, HDA_INPUT, 785 - wid_caps & AC_WCAP_STEREO, 786 - 1); 760 + wid_caps, 1); 787 761 else 788 762 print_amp_vals(buffer, codec, nid, HDA_INPUT, 789 - wid_caps & AC_WCAP_STEREO, 790 - conn_len); 763 + wid_caps, conn_len); 791 764 } 792 765 if (wid_caps & AC_WCAP_OUT_AMP) { 793 766 snd_iprintf(buffer, " Amp-Out caps: "); ··· 794 771 if (wid_type == AC_WID_PIN && 795 772 codec->pin_amp_workaround) 796 773 print_amp_vals(buffer, codec, nid, HDA_OUTPUT, 797 - wid_caps & AC_WCAP_STEREO, 798 - conn_len); 774 + wid_caps, conn_len); 799 775 else 800 776 print_amp_vals(buffer, codec, nid, HDA_OUTPUT, 801 - wid_caps & AC_WCAP_STEREO, 1); 777 + wid_caps, 1); 802 778 } 803 779 804 780 switch (wid_type) {
+2
sound/pci/hda/patch_cirrus.c
··· 393 393 SND_PCI_QUIRK(0x106b, 0x1c00, "MacBookPro 8,1", CS420X_MBP81), 394 394 SND_PCI_QUIRK(0x106b, 0x2000, "iMac 12,2", CS420X_IMAC27_122), 395 395 SND_PCI_QUIRK(0x106b, 0x2800, "MacBookPro 10,1", CS420X_MBP101), 396 + SND_PCI_QUIRK(0x106b, 0x5600, "MacBookAir 5,2", CS420X_MBP81), 396 397 SND_PCI_QUIRK(0x106b, 0x5b00, "MacBookAir 4,2", CS420X_MBA42), 397 398 SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS420X_APPLE), 398 399 {} /* terminator */ ··· 585 584 return -ENOMEM; 586 585 587 586 spec->gen.automute_hook = cs_automute; 587 + codec->single_adc_amp = 1; 588 588 589 589 snd_hda_pick_fixup(codec, cs420x_models, cs420x_fixup_tbl, 590 590 cs420x_fixups);
+11
sound/pci/hda/patch_conexant.c
··· 223 223 CXT_PINCFG_LENOVO_TP410, 224 224 CXT_PINCFG_LEMOTE_A1004, 225 225 CXT_PINCFG_LEMOTE_A1205, 226 + CXT_PINCFG_COMPAQ_CQ60, 226 227 CXT_FIXUP_STEREO_DMIC, 227 228 CXT_FIXUP_INC_MIC_BOOST, 228 229 CXT_FIXUP_HEADPHONE_MIC_PIN, ··· 661 660 .type = HDA_FIXUP_PINS, 662 661 .v.pins = cxt_pincfg_lemote, 663 662 }, 663 + [CXT_PINCFG_COMPAQ_CQ60] = { 664 + .type = HDA_FIXUP_PINS, 665 + .v.pins = (const struct hda_pintbl[]) { 666 + /* 0x17 was falsely set up as a mic, it should 0x1d */ 667 + { 0x17, 0x400001f0 }, 668 + { 0x1d, 0x97a70120 }, 669 + { } 670 + } 671 + }, 664 672 [CXT_FIXUP_STEREO_DMIC] = { 665 673 .type = HDA_FIXUP_FUNC, 666 674 .v.func = cxt_fixup_stereo_dmic, ··· 779 769 }; 780 770 781 771 static const struct snd_pci_quirk cxt5051_fixups[] = { 772 + SND_PCI_QUIRK(0x103c, 0x360b, "Compaq CQ60", CXT_PINCFG_COMPAQ_CQ60), 782 773 SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo X200", CXT_PINCFG_LENOVO_X200), 783 774 {} 784 775 };
+9 -1
sound/pci/hda/patch_realtek.c
··· 396 396 { 397 397 /* We currently only handle front, HP */ 398 398 static hda_nid_t pins[] = { 399 - 0x0f, 0x10, 0x14, 0x15, 0 399 + 0x0f, 0x10, 0x14, 0x15, 0x17, 0 400 400 }; 401 401 hda_nid_t *p; 402 402 for (p = pins; *p; p++) ··· 5036 5036 SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC), 5037 5037 SND_PCI_QUIRK(0x17aa, 0x501e, "Thinkpad L440", ALC292_FIXUP_TPT440_DOCK), 5038 5038 SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 5039 + SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK), 5039 5040 SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 5040 5041 SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K), 5041 5042 SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD), ··· 5210 5209 {0x17, 0x40000000}, 5211 5210 {0x1d, 0x40700001}, 5212 5211 {0x21, 0x02211040}), 5212 + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5213 + ALC255_STANDARD_PINS, 5214 + {0x12, 0x90a60170}, 5215 + {0x14, 0x90170140}, 5216 + {0x17, 0x40000000}, 5217 + {0x1d, 0x40700001}, 5218 + {0x21, 0x02211050}), 5213 5219 SND_HDA_PIN_QUIRK(0x10ec0280, 0x103c, "HP", ALC280_FIXUP_HP_GPIO4, 5214 5220 {0x12, 0x90a60130}, 5215 5221 {0x13, 0x40000000},
+29 -35
sound/soc/atmel/sam9g20_wm8731.c
··· 46 46 #include <sound/pcm_params.h> 47 47 #include <sound/soc.h> 48 48 49 - #include <asm/mach-types.h> 50 - 51 49 #include "../codecs/wm8731.h" 52 50 #include "atmel-pcm.h" 53 51 #include "atmel_ssc_dai.h" ··· 169 171 int ret; 170 172 171 173 if (!np) { 172 - if (!(machine_is_at91sam9g20ek() || 173 - machine_is_at91sam9g20ek_2mmc())) 174 - return -ENODEV; 174 + return -ENODEV; 175 175 } 176 176 177 177 ret = atmel_ssc_set_audio(0); ··· 206 210 card->dev = &pdev->dev; 207 211 208 212 /* Parse device node info */ 209 - if (np) { 210 - ret = snd_soc_of_parse_card_name(card, "atmel,model"); 211 - if (ret) 212 - goto err; 213 + ret = snd_soc_of_parse_card_name(card, "atmel,model"); 214 + if (ret) 215 + goto err; 213 216 214 - ret = snd_soc_of_parse_audio_routing(card, 215 - "atmel,audio-routing"); 216 - if (ret) 217 - goto err; 217 + ret = snd_soc_of_parse_audio_routing(card, 218 + "atmel,audio-routing"); 219 + if (ret) 220 + goto err; 218 221 219 - /* Parse codec info */ 220 - at91sam9g20ek_dai.codec_name = NULL; 221 - codec_np = of_parse_phandle(np, "atmel,audio-codec", 0); 222 - if (!codec_np) { 223 - dev_err(&pdev->dev, "codec info missing\n"); 224 - return -EINVAL; 225 - } 226 - at91sam9g20ek_dai.codec_of_node = codec_np; 227 - 228 - /* Parse dai and platform info */ 229 - at91sam9g20ek_dai.cpu_dai_name = NULL; 230 - at91sam9g20ek_dai.platform_name = NULL; 231 - cpu_np = of_parse_phandle(np, "atmel,ssc-controller", 0); 232 - if (!cpu_np) { 233 - dev_err(&pdev->dev, "dai and pcm info missing\n"); 234 - return -EINVAL; 235 - } 236 - at91sam9g20ek_dai.cpu_of_node = cpu_np; 237 - at91sam9g20ek_dai.platform_of_node = cpu_np; 238 - 239 - of_node_put(codec_np); 240 - of_node_put(cpu_np); 222 + /* Parse codec info */ 223 + at91sam9g20ek_dai.codec_name = NULL; 224 + codec_np = of_parse_phandle(np, "atmel,audio-codec", 0); 225 + if (!codec_np) { 226 + dev_err(&pdev->dev, "codec info missing\n"); 227 + return -EINVAL; 241 228 } 229 + at91sam9g20ek_dai.codec_of_node = 
codec_np; 230 + 231 + /* Parse dai and platform info */ 232 + at91sam9g20ek_dai.cpu_dai_name = NULL; 233 + at91sam9g20ek_dai.platform_name = NULL; 234 + cpu_np = of_parse_phandle(np, "atmel,ssc-controller", 0); 235 + if (!cpu_np) { 236 + dev_err(&pdev->dev, "dai and pcm info missing\n"); 237 + return -EINVAL; 238 + } 239 + at91sam9g20ek_dai.cpu_of_node = cpu_np; 240 + at91sam9g20ek_dai.platform_of_node = cpu_np; 241 + 242 + of_node_put(codec_np); 243 + of_node_put(cpu_np); 242 244 243 245 ret = snd_soc_register_card(card); 244 246 if (ret) {
+1 -1
sound/soc/cirrus/Kconfig
··· 16 16 17 17 config SND_EP93XX_SOC_SNAPPERCL15 18 18 tristate "SoC Audio support for Bluewater Systems Snapper CL15 module" 19 - depends on SND_EP93XX_SOC && MACH_SNAPPER_CL15 19 + depends on SND_EP93XX_SOC && MACH_SNAPPER_CL15 && I2C 20 20 select SND_EP93XX_SOC_I2S 21 21 select SND_SOC_TLV320AIC23_I2C 22 22 help
+1 -1
sound/soc/codecs/Kconfig
··· 69 69 select SND_SOC_MAX98088 if I2C 70 70 select SND_SOC_MAX98090 if I2C 71 71 select SND_SOC_MAX98095 if I2C 72 - select SND_SOC_MAX98357A 72 + select SND_SOC_MAX98357A if GPIOLIB 73 73 select SND_SOC_MAX9850 if I2C 74 74 select SND_SOC_MAX9768 if I2C 75 75 select SND_SOC_MAX9877 if I2C
+2 -2
sound/soc/codecs/adav80x.c
··· 317 317 { 318 318 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 319 319 struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec); 320 - unsigned int deemph = ucontrol->value.enumerated.item[0]; 320 + unsigned int deemph = ucontrol->value.integer.value[0]; 321 321 322 322 if (deemph > 1) 323 323 return -EINVAL; ··· 333 333 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 334 334 struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec); 335 335 336 - ucontrol->value.enumerated.item[0] = adav80x->deemph; 336 + ucontrol->value.integer.value[0] = adav80x->deemph; 337 337 return 0; 338 338 }; 339 339
+2 -2
sound/soc/codecs/ak4641.c
··· 76 76 { 77 77 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 78 78 struct ak4641_priv *ak4641 = snd_soc_codec_get_drvdata(codec); 79 - int deemph = ucontrol->value.enumerated.item[0]; 79 + int deemph = ucontrol->value.integer.value[0]; 80 80 81 81 if (deemph > 1) 82 82 return -EINVAL; ··· 92 92 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 93 93 struct ak4641_priv *ak4641 = snd_soc_codec_get_drvdata(codec); 94 94 95 - ucontrol->value.enumerated.item[0] = ak4641->deemph; 95 + ucontrol->value.integer.value[0] = ak4641->deemph; 96 96 return 0; 97 97 }; 98 98
+22 -22
sound/soc/codecs/ak4671.c
··· 343 343 }; 344 344 345 345 static const struct snd_soc_dapm_route ak4671_intercon[] = { 346 - {"DAC Left", "NULL", "PMPLL"}, 347 - {"DAC Right", "NULL", "PMPLL"}, 348 - {"ADC Left", "NULL", "PMPLL"}, 349 - {"ADC Right", "NULL", "PMPLL"}, 346 + {"DAC Left", NULL, "PMPLL"}, 347 + {"DAC Right", NULL, "PMPLL"}, 348 + {"ADC Left", NULL, "PMPLL"}, 349 + {"ADC Right", NULL, "PMPLL"}, 350 350 351 351 /* Outputs */ 352 - {"LOUT1", "NULL", "LOUT1 Mixer"}, 353 - {"ROUT1", "NULL", "ROUT1 Mixer"}, 354 - {"LOUT2", "NULL", "LOUT2 Mix Amp"}, 355 - {"ROUT2", "NULL", "ROUT2 Mix Amp"}, 356 - {"LOUT3", "NULL", "LOUT3 Mixer"}, 357 - {"ROUT3", "NULL", "ROUT3 Mixer"}, 352 + {"LOUT1", NULL, "LOUT1 Mixer"}, 353 + {"ROUT1", NULL, "ROUT1 Mixer"}, 354 + {"LOUT2", NULL, "LOUT2 Mix Amp"}, 355 + {"ROUT2", NULL, "ROUT2 Mix Amp"}, 356 + {"LOUT3", NULL, "LOUT3 Mixer"}, 357 + {"ROUT3", NULL, "ROUT3 Mixer"}, 358 358 359 359 {"LOUT1 Mixer", "DACL", "DAC Left"}, 360 360 {"ROUT1 Mixer", "DACR", "DAC Right"}, 361 361 {"LOUT2 Mixer", "DACHL", "DAC Left"}, 362 362 {"ROUT2 Mixer", "DACHR", "DAC Right"}, 363 - {"LOUT2 Mix Amp", "NULL", "LOUT2 Mixer"}, 364 - {"ROUT2 Mix Amp", "NULL", "ROUT2 Mixer"}, 363 + {"LOUT2 Mix Amp", NULL, "LOUT2 Mixer"}, 364 + {"ROUT2 Mix Amp", NULL, "ROUT2 Mixer"}, 365 365 {"LOUT3 Mixer", "DACSL", "DAC Left"}, 366 366 {"ROUT3 Mixer", "DACSR", "DAC Right"}, 367 367 ··· 381 381 {"LIN2", NULL, "Mic Bias"}, 382 382 {"RIN2", NULL, "Mic Bias"}, 383 383 384 - {"ADC Left", "NULL", "LIN MUX"}, 385 - {"ADC Right", "NULL", "RIN MUX"}, 384 + {"ADC Left", NULL, "LIN MUX"}, 385 + {"ADC Right", NULL, "RIN MUX"}, 386 386 387 387 /* Analog Loops */ 388 - {"LIN1 Mixing Circuit", "NULL", "LIN1"}, 389 - {"RIN1 Mixing Circuit", "NULL", "RIN1"}, 390 - {"LIN2 Mixing Circuit", "NULL", "LIN2"}, 391 - {"RIN2 Mixing Circuit", "NULL", "RIN2"}, 392 - {"LIN3 Mixing Circuit", "NULL", "LIN3"}, 393 - {"RIN3 Mixing Circuit", "NULL", "RIN3"}, 394 - {"LIN4 Mixing Circuit", "NULL", "LIN4"}, 395 - {"RIN4 Mixing 
Circuit", "NULL", "RIN4"}, 388 + {"LIN1 Mixing Circuit", NULL, "LIN1"}, 389 + {"RIN1 Mixing Circuit", NULL, "RIN1"}, 390 + {"LIN2 Mixing Circuit", NULL, "LIN2"}, 391 + {"RIN2 Mixing Circuit", NULL, "RIN2"}, 392 + {"LIN3 Mixing Circuit", NULL, "LIN3"}, 393 + {"RIN3 Mixing Circuit", NULL, "RIN3"}, 394 + {"LIN4 Mixing Circuit", NULL, "LIN4"}, 395 + {"RIN4 Mixing Circuit", NULL, "RIN4"}, 396 396 397 397 {"LOUT1 Mixer", "LINL1", "LIN1 Mixing Circuit"}, 398 398 {"ROUT1 Mixer", "RINR1", "RIN1 Mixing Circuit"},
+2 -2
sound/soc/codecs/cs4271.c
··· 286 286 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 287 287 struct cs4271_private *cs4271 = snd_soc_codec_get_drvdata(codec); 288 288 289 - ucontrol->value.enumerated.item[0] = cs4271->deemph; 289 + ucontrol->value.integer.value[0] = cs4271->deemph; 290 290 return 0; 291 291 } 292 292 ··· 296 296 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 297 297 struct cs4271_private *cs4271 = snd_soc_codec_get_drvdata(codec); 298 298 299 - cs4271->deemph = ucontrol->value.enumerated.item[0]; 299 + cs4271->deemph = ucontrol->value.integer.value[0]; 300 300 return cs4271_set_deemph(codec); 301 301 } 302 302
+4 -4
sound/soc/codecs/da732x.c
··· 876 876 877 877 static const struct snd_soc_dapm_route da732x_dapm_routes[] = { 878 878 /* Inputs */ 879 - {"AUX1L PGA", "NULL", "AUX1L"}, 880 - {"AUX1R PGA", "NULL", "AUX1R"}, 879 + {"AUX1L PGA", NULL, "AUX1L"}, 880 + {"AUX1R PGA", NULL, "AUX1R"}, 881 881 {"MIC1 PGA", NULL, "MIC1"}, 882 - {"MIC2 PGA", "NULL", "MIC2"}, 883 - {"MIC3 PGA", "NULL", "MIC3"}, 882 + {"MIC2 PGA", NULL, "MIC2"}, 883 + {"MIC3 PGA", NULL, "MIC3"}, 884 884 885 885 /* Capture Path */ 886 886 {"ADC1 Left MUX", "MIC1", "MIC1 PGA"},
+2 -2
sound/soc/codecs/es8328.c
··· 120 120 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 121 121 struct es8328_priv *es8328 = snd_soc_codec_get_drvdata(codec); 122 122 123 - ucontrol->value.enumerated.item[0] = es8328->deemph; 123 + ucontrol->value.integer.value[0] = es8328->deemph; 124 124 return 0; 125 125 } 126 126 ··· 129 129 { 130 130 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 131 131 struct es8328_priv *es8328 = snd_soc_codec_get_drvdata(codec); 132 - int deemph = ucontrol->value.enumerated.item[0]; 132 + int deemph = ucontrol->value.integer.value[0]; 133 133 int ret; 134 134 135 135 if (deemph > 1)
+11 -1
sound/soc/codecs/max98357a.c
··· 12 12 * max98357a.c -- MAX98357A ALSA SoC Codec driver 13 13 */ 14 14 15 - #include <linux/module.h> 15 + #include <linux/device.h> 16 + #include <linux/err.h> 16 17 #include <linux/gpio.h> 18 + #include <linux/gpio/consumer.h> 19 + #include <linux/kernel.h> 20 + #include <linux/mod_devicetable.h> 21 + #include <linux/module.h> 22 + #include <linux/of.h> 23 + #include <linux/platform_device.h> 24 + #include <sound/pcm.h> 17 25 #include <sound/soc.h> 26 + #include <sound/soc-dai.h> 27 + #include <sound/soc-dapm.h> 18 28 19 29 #define DRV_NAME "max98357a" 20 30
+2 -2
sound/soc/codecs/pcm1681.c
··· 118 118 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 119 119 struct pcm1681_private *priv = snd_soc_codec_get_drvdata(codec); 120 120 121 - ucontrol->value.enumerated.item[0] = priv->deemph; 121 + ucontrol->value.integer.value[0] = priv->deemph; 122 122 123 123 return 0; 124 124 } ··· 129 129 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 130 130 struct pcm1681_private *priv = snd_soc_codec_get_drvdata(codec); 131 131 132 - priv->deemph = ucontrol->value.enumerated.item[0]; 132 + priv->deemph = ucontrol->value.integer.value[0]; 133 133 134 134 return pcm1681_set_deemph(codec); 135 135 }
+1 -1
sound/soc/codecs/rt286.c
··· 1198 1198 .ident = "Dell Dino", 1199 1199 .matches = { 1200 1200 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 1201 - DMI_MATCH(DMI_BOARD_NAME, "0144P8") 1201 + DMI_MATCH(DMI_PRODUCT_NAME, "XPS 13 9343") 1202 1202 } 1203 1203 }, 1204 1204 { }
+6 -1
sound/soc/codecs/rt5670.c
··· 225 225 case RT5670_ADC_EQ_CTRL1: 226 226 case RT5670_EQ_CTRL1: 227 227 case RT5670_ALC_CTRL_1: 228 - case RT5670_IRQ_CTRL1: 229 228 case RT5670_IRQ_CTRL2: 230 229 case RT5670_INT_IRQ_ST: 231 230 case RT5670_IL_CMD: ··· 2701 2702 msleep(100); 2702 2703 2703 2704 regmap_write(rt5670->regmap, RT5670_RESET, 0); 2705 + 2706 + regmap_read(rt5670->regmap, RT5670_VENDOR_ID, &val); 2707 + if (val >= 4) 2708 + regmap_write(rt5670->regmap, RT5670_GPIO_CTRL3, 0x0980); 2709 + else 2710 + regmap_write(rt5670->regmap, RT5670_GPIO_CTRL3, 0x0d00); 2704 2711 2705 2712 ret = regmap_register_patch(rt5670->regmap, init_list, 2706 2713 ARRAY_SIZE(init_list));
+16 -16
sound/soc/codecs/rt5677.c
··· 3284 3284 { "IB45 Bypass Mux", "Bypass", "IB45 Mux" }, 3285 3285 { "IB45 Bypass Mux", "Pass SRC", "IB45 Mux" }, 3286 3286 3287 - { "IB6 Mux", "IF1 DAC 6", "IF1 DAC6" }, 3288 - { "IB6 Mux", "IF2 DAC 6", "IF2 DAC6" }, 3287 + { "IB6 Mux", "IF1 DAC 6", "IF1 DAC6 Mux" }, 3288 + { "IB6 Mux", "IF2 DAC 6", "IF2 DAC6 Mux" }, 3289 3289 { "IB6 Mux", "SLB DAC 6", "SLB DAC6" }, 3290 3290 { "IB6 Mux", "STO4 ADC MIX L", "Stereo4 ADC MIXL" }, 3291 3291 { "IB6 Mux", "IF4 DAC L", "IF4 DAC L" }, ··· 3293 3293 { "IB6 Mux", "STO2 ADC MIX L", "Stereo2 ADC MIXL" }, 3294 3294 { "IB6 Mux", "STO3 ADC MIX L", "Stereo3 ADC MIXL" }, 3295 3295 3296 - { "IB7 Mux", "IF1 DAC 7", "IF1 DAC7" }, 3297 - { "IB7 Mux", "IF2 DAC 7", "IF2 DAC7" }, 3296 + { "IB7 Mux", "IF1 DAC 7", "IF1 DAC7 Mux" }, 3297 + { "IB7 Mux", "IF2 DAC 7", "IF2 DAC7 Mux" }, 3298 3298 { "IB7 Mux", "SLB DAC 7", "SLB DAC7" }, 3299 3299 { "IB7 Mux", "STO4 ADC MIX R", "Stereo4 ADC MIXR" }, 3300 3300 { "IB7 Mux", "IF4 DAC R", "IF4 DAC R" }, ··· 3635 3635 { "DAC1 FS", NULL, "DAC1 MIXL" }, 3636 3636 { "DAC1 FS", NULL, "DAC1 MIXR" }, 3637 3637 3638 - { "DAC2 L Mux", "IF1 DAC 2", "IF1 DAC2" }, 3639 - { "DAC2 L Mux", "IF2 DAC 2", "IF2 DAC2" }, 3638 + { "DAC2 L Mux", "IF1 DAC 2", "IF1 DAC2 Mux" }, 3639 + { "DAC2 L Mux", "IF2 DAC 2", "IF2 DAC2 Mux" }, 3640 3640 { "DAC2 L Mux", "IF3 DAC L", "IF3 DAC L" }, 3641 3641 { "DAC2 L Mux", "IF4 DAC L", "IF4 DAC L" }, 3642 3642 { "DAC2 L Mux", "SLB DAC 2", "SLB DAC2" }, 3643 3643 { "DAC2 L Mux", "OB 2", "OutBound2" }, 3644 3644 3645 - { "DAC2 R Mux", "IF1 DAC 3", "IF1 DAC3" }, 3646 - { "DAC2 R Mux", "IF2 DAC 3", "IF2 DAC3" }, 3645 + { "DAC2 R Mux", "IF1 DAC 3", "IF1 DAC3 Mux" }, 3646 + { "DAC2 R Mux", "IF2 DAC 3", "IF2 DAC3 Mux" }, 3647 3647 { "DAC2 R Mux", "IF3 DAC R", "IF3 DAC R" }, 3648 3648 { "DAC2 R Mux", "IF4 DAC R", "IF4 DAC R" }, 3649 3649 { "DAC2 R Mux", "SLB DAC 3", "SLB DAC3" }, ··· 3651 3651 { "DAC2 R Mux", "Haptic Generator", "Haptic Generator" }, 3652 3652 { "DAC2 R Mux", "VAD ADC", "VAD 
ADC Mux" }, 3653 3653 3654 - { "DAC3 L Mux", "IF1 DAC 4", "IF1 DAC4" }, 3655 - { "DAC3 L Mux", "IF2 DAC 4", "IF2 DAC4" }, 3654 + { "DAC3 L Mux", "IF1 DAC 4", "IF1 DAC4 Mux" }, 3655 + { "DAC3 L Mux", "IF2 DAC 4", "IF2 DAC4 Mux" }, 3656 3656 { "DAC3 L Mux", "IF3 DAC L", "IF3 DAC L" }, 3657 3657 { "DAC3 L Mux", "IF4 DAC L", "IF4 DAC L" }, 3658 3658 { "DAC3 L Mux", "SLB DAC 4", "SLB DAC4" }, 3659 3659 { "DAC3 L Mux", "OB 4", "OutBound4" }, 3660 3660 3661 - { "DAC3 R Mux", "IF1 DAC 5", "IF1 DAC4" }, 3662 - { "DAC3 R Mux", "IF2 DAC 5", "IF2 DAC4" }, 3661 + { "DAC3 R Mux", "IF1 DAC 5", "IF1 DAC5 Mux" }, 3662 + { "DAC3 R Mux", "IF2 DAC 5", "IF2 DAC5 Mux" }, 3663 3663 { "DAC3 R Mux", "IF3 DAC R", "IF3 DAC R" }, 3664 3664 { "DAC3 R Mux", "IF4 DAC R", "IF4 DAC R" }, 3665 3665 { "DAC3 R Mux", "SLB DAC 5", "SLB DAC5" }, 3666 3666 { "DAC3 R Mux", "OB 5", "OutBound5" }, 3667 3667 3668 - { "DAC4 L Mux", "IF1 DAC 6", "IF1 DAC6" }, 3669 - { "DAC4 L Mux", "IF2 DAC 6", "IF2 DAC6" }, 3668 + { "DAC4 L Mux", "IF1 DAC 6", "IF1 DAC6 Mux" }, 3669 + { "DAC4 L Mux", "IF2 DAC 6", "IF2 DAC6 Mux" }, 3670 3670 { "DAC4 L Mux", "IF3 DAC L", "IF3 DAC L" }, 3671 3671 { "DAC4 L Mux", "IF4 DAC L", "IF4 DAC L" }, 3672 3672 { "DAC4 L Mux", "SLB DAC 6", "SLB DAC6" }, 3673 3673 { "DAC4 L Mux", "OB 6", "OutBound6" }, 3674 3674 3675 - { "DAC4 R Mux", "IF1 DAC 7", "IF1 DAC7" }, 3676 - { "DAC4 R Mux", "IF2 DAC 7", "IF2 DAC7" }, 3675 + { "DAC4 R Mux", "IF1 DAC 7", "IF1 DAC7 Mux" }, 3676 + { "DAC4 R Mux", "IF2 DAC 7", "IF2 DAC7 Mux" }, 3677 3677 { "DAC4 R Mux", "IF3 DAC R", "IF3 DAC R" }, 3678 3678 { "DAC4 R Mux", "IF4 DAC R", "IF4 DAC R" }, 3679 3679 { "DAC4 R Mux", "SLB DAC 7", "SLB DAC7" },
+1 -7
sound/soc/codecs/sgtl5000.c
··· 1151 1151 /* Enable VDDC charge pump */ 1152 1152 ana_pwr |= SGTL5000_VDDC_CHRGPMP_POWERUP; 1153 1153 } else if (vddio >= 3100 && vdda >= 3100) { 1154 - /* 1155 - * if vddio and vddd > 3.1v, 1156 - * charge pump should be clean before set ana_pwr 1157 - */ 1158 - snd_soc_update_bits(codec, SGTL5000_CHIP_ANA_POWER, 1159 - SGTL5000_VDDC_CHRGPMP_POWERUP, 0); 1160 - 1154 + ana_pwr &= ~SGTL5000_VDDC_CHRGPMP_POWERUP; 1161 1155 /* VDDC use VDDIO rail */ 1162 1156 lreg_ctrl |= SGTL5000_VDDC_ASSN_OVRD; 1163 1157 lreg_ctrl |= SGTL5000_VDDC_MAN_ASSN_VDDIO <<
+2 -2
sound/soc/codecs/sn95031.c
··· 538 538 /* speaker map */ 539 539 { "IHFOUTL", NULL, "Speaker Rail"}, 540 540 { "IHFOUTR", NULL, "Speaker Rail"}, 541 - { "IHFOUTL", "NULL", "Speaker Left Playback"}, 542 - { "IHFOUTR", "NULL", "Speaker Right Playback"}, 541 + { "IHFOUTL", NULL, "Speaker Left Playback"}, 542 + { "IHFOUTR", NULL, "Speaker Right Playback"}, 543 543 { "Speaker Left Playback", NULL, "Speaker Left Filter"}, 544 544 { "Speaker Right Playback", NULL, "Speaker Right Filter"}, 545 545 { "Speaker Left Filter", NULL, "IHFDAC Left"},
+2 -4
sound/soc/codecs/sta32x.c
··· 106 106 }; 107 107 108 108 static const struct regmap_range sta32x_write_regs_range[] = { 109 - regmap_reg_range(STA32X_CONFA, STA32X_AUTO2), 110 - regmap_reg_range(STA32X_C1CFG, STA32X_FDRC2), 109 + regmap_reg_range(STA32X_CONFA, STA32X_FDRC2), 111 110 }; 112 111 113 112 static const struct regmap_range sta32x_read_regs_range[] = { 114 - regmap_reg_range(STA32X_CONFA, STA32X_AUTO2), 115 - regmap_reg_range(STA32X_C1CFG, STA32X_FDRC2), 113 + regmap_reg_range(STA32X_CONFA, STA32X_FDRC2), 116 114 }; 117 115 118 116 static const struct regmap_range sta32x_volatile_regs_range[] = {
+2 -2
sound/soc/codecs/tas5086.c
··· 281 281 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 282 282 struct tas5086_private *priv = snd_soc_codec_get_drvdata(codec); 283 283 284 - ucontrol->value.enumerated.item[0] = priv->deemph; 284 + ucontrol->value.integer.value[0] = priv->deemph; 285 285 286 286 return 0; 287 287 } ··· 292 292 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 293 293 struct tas5086_private *priv = snd_soc_codec_get_drvdata(codec); 294 294 295 - priv->deemph = ucontrol->value.enumerated.item[0]; 295 + priv->deemph = ucontrol->value.integer.value[0]; 296 296 297 297 return tas5086_set_deemph(codec); 298 298 }
+4 -4
sound/soc/codecs/wm2000.c
··· 610 610 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 611 611 struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev); 612 612 613 - ucontrol->value.enumerated.item[0] = wm2000->anc_active; 613 + ucontrol->value.integer.value[0] = wm2000->anc_active; 614 614 615 615 return 0; 616 616 } ··· 620 620 { 621 621 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 622 622 struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev); 623 - int anc_active = ucontrol->value.enumerated.item[0]; 623 + int anc_active = ucontrol->value.integer.value[0]; 624 624 int ret; 625 625 626 626 if (anc_active > 1) ··· 643 643 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 644 644 struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev); 645 645 646 - ucontrol->value.enumerated.item[0] = wm2000->spk_ena; 646 + ucontrol->value.integer.value[0] = wm2000->spk_ena; 647 647 648 648 return 0; 649 649 } ··· 653 653 { 654 654 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 655 655 struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev); 656 - int val = ucontrol->value.enumerated.item[0]; 656 + int val = ucontrol->value.integer.value[0]; 657 657 int ret; 658 658 659 659 if (val > 1)
+2 -2
sound/soc/codecs/wm8731.c
··· 125 125 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 126 126 struct wm8731_priv *wm8731 = snd_soc_codec_get_drvdata(codec); 127 127 128 - ucontrol->value.enumerated.item[0] = wm8731->deemph; 128 + ucontrol->value.integer.value[0] = wm8731->deemph; 129 129 130 130 return 0; 131 131 } ··· 135 135 { 136 136 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 137 137 struct wm8731_priv *wm8731 = snd_soc_codec_get_drvdata(codec); 138 - int deemph = ucontrol->value.enumerated.item[0]; 138 + int deemph = ucontrol->value.integer.value[0]; 139 139 int ret = 0; 140 140 141 141 if (deemph > 1)
+2 -2
sound/soc/codecs/wm8903.c
··· 442 442 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 443 443 struct wm8903_priv *wm8903 = snd_soc_codec_get_drvdata(codec); 444 444 445 - ucontrol->value.enumerated.item[0] = wm8903->deemph; 445 + ucontrol->value.integer.value[0] = wm8903->deemph; 446 446 447 447 return 0; 448 448 } ··· 452 452 { 453 453 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 454 454 struct wm8903_priv *wm8903 = snd_soc_codec_get_drvdata(codec); 455 - int deemph = ucontrol->value.enumerated.item[0]; 455 + int deemph = ucontrol->value.integer.value[0]; 456 456 int ret = 0; 457 457 458 458 if (deemph > 1)
+2 -2
sound/soc/codecs/wm8904.c
··· 525 525 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 526 526 struct wm8904_priv *wm8904 = snd_soc_codec_get_drvdata(codec); 527 527 528 - ucontrol->value.enumerated.item[0] = wm8904->deemph; 528 + ucontrol->value.integer.value[0] = wm8904->deemph; 529 529 return 0; 530 530 } 531 531 ··· 534 534 { 535 535 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 536 536 struct wm8904_priv *wm8904 = snd_soc_codec_get_drvdata(codec); 537 - int deemph = ucontrol->value.enumerated.item[0]; 537 + int deemph = ucontrol->value.integer.value[0]; 538 538 539 539 if (deemph > 1) 540 540 return -EINVAL;
+2 -2
sound/soc/codecs/wm8955.c
··· 393 393 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 394 394 struct wm8955_priv *wm8955 = snd_soc_codec_get_drvdata(codec); 395 395 396 - ucontrol->value.enumerated.item[0] = wm8955->deemph; 396 + ucontrol->value.integer.value[0] = wm8955->deemph; 397 397 return 0; 398 398 } 399 399 ··· 402 402 { 403 403 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 404 404 struct wm8955_priv *wm8955 = snd_soc_codec_get_drvdata(codec); 405 - int deemph = ucontrol->value.enumerated.item[0]; 405 + int deemph = ucontrol->value.integer.value[0]; 406 406 407 407 if (deemph > 1) 408 408 return -EINVAL;
+2 -2
sound/soc/codecs/wm8960.c
··· 184 184 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 185 185 struct wm8960_priv *wm8960 = snd_soc_codec_get_drvdata(codec); 186 186 187 - ucontrol->value.enumerated.item[0] = wm8960->deemph; 187 + ucontrol->value.integer.value[0] = wm8960->deemph; 188 188 return 0; 189 189 } 190 190 ··· 193 193 { 194 194 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 195 195 struct wm8960_priv *wm8960 = snd_soc_codec_get_drvdata(codec); 196 - int deemph = ucontrol->value.enumerated.item[0]; 196 + int deemph = ucontrol->value.integer.value[0]; 197 197 198 198 if (deemph > 1) 199 199 return -EINVAL;
+3 -3
sound/soc/codecs/wm9712.c
··· 180 180 struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol); 181 181 struct snd_soc_codec *codec = snd_soc_dapm_to_codec(dapm); 182 182 struct wm9712_priv *wm9712 = snd_soc_codec_get_drvdata(codec); 183 - unsigned int val = ucontrol->value.enumerated.item[0]; 183 + unsigned int val = ucontrol->value.integer.value[0]; 184 184 struct soc_mixer_control *mc = 185 185 (struct soc_mixer_control *)kcontrol->private_value; 186 186 unsigned int mixer, mask, shift, old; ··· 193 193 194 194 mutex_lock(&wm9712->lock); 195 195 old = wm9712->hp_mixer[mixer]; 196 - if (ucontrol->value.enumerated.item[0]) 196 + if (ucontrol->value.integer.value[0]) 197 197 wm9712->hp_mixer[mixer] |= mask; 198 198 else 199 199 wm9712->hp_mixer[mixer] &= ~mask; ··· 231 231 mixer = mc->shift >> 8; 232 232 shift = mc->shift & 0xff; 233 233 234 - ucontrol->value.enumerated.item[0] = 234 + ucontrol->value.integer.value[0] = 235 235 (wm9712->hp_mixer[mixer] >> shift) & 1; 236 236 237 237 return 0;
+3 -3
sound/soc/codecs/wm9713.c
··· 255 255 struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol); 256 256 struct snd_soc_codec *codec = snd_soc_dapm_to_codec(dapm); 257 257 struct wm9713_priv *wm9713 = snd_soc_codec_get_drvdata(codec); 258 - unsigned int val = ucontrol->value.enumerated.item[0]; 258 + unsigned int val = ucontrol->value.integer.value[0]; 259 259 struct soc_mixer_control *mc = 260 260 (struct soc_mixer_control *)kcontrol->private_value; 261 261 unsigned int mixer, mask, shift, old; ··· 268 268 269 269 mutex_lock(&wm9713->lock); 270 270 old = wm9713->hp_mixer[mixer]; 271 - if (ucontrol->value.enumerated.item[0]) 271 + if (ucontrol->value.integer.value[0]) 272 272 wm9713->hp_mixer[mixer] |= mask; 273 273 else 274 274 wm9713->hp_mixer[mixer] &= ~mask; ··· 306 306 mixer = mc->shift >> 8; 307 307 shift = mc->shift & 0xff; 308 308 309 - ucontrol->value.enumerated.item[0] = 309 + ucontrol->value.integer.value[0] = 310 310 (wm9713->hp_mixer[mixer] >> shift) & 1; 311 311 312 312 return 0;
+2 -2
sound/soc/fsl/fsl_spdif.c
··· 1049 1049 enum spdif_txrate index, bool round) 1050 1050 { 1051 1051 const u32 rate[] = { 32000, 44100, 48000, 96000, 192000 }; 1052 - bool is_sysclk = clk == spdif_priv->sysclk; 1052 + bool is_sysclk = clk_is_match(clk, spdif_priv->sysclk); 1053 1053 u64 rate_ideal, rate_actual, sub; 1054 1054 u32 sysclk_dfmin, sysclk_dfmax; 1055 1055 u32 txclk_df, sysclk_df, arate; ··· 1143 1143 spdif_priv->txclk_src[index], rate[index]); 1144 1144 dev_dbg(&pdev->dev, "use txclk df %d for %dHz sample rate\n", 1145 1145 spdif_priv->txclk_df[index], rate[index]); 1146 - if (spdif_priv->txclk[index] == spdif_priv->sysclk) 1146 + if (clk_is_match(spdif_priv->txclk[index], spdif_priv->sysclk)) 1147 1147 dev_dbg(&pdev->dev, "use sysclk df %d for %dHz sample rate\n", 1148 1148 spdif_priv->sysclk_df[index], rate[index]); 1149 1149 dev_dbg(&pdev->dev, "the best rate for %dHz sample rate is %dHz\n",
+9 -6
sound/soc/fsl/fsl_ssi.c
··· 603 603 factor = (div2 + 1) * (7 * psr + 1) * 2; 604 604 605 605 for (i = 0; i < 255; i++) { 606 - /* The bclk rate must be smaller than 1/5 sysclk rate */ 607 - if (factor * (i + 1) < 5) 608 - continue; 609 - 610 - tmprate = freq * factor * (i + 2); 606 + tmprate = freq * factor * (i + 1); 611 607 612 608 if (baudclk_is_used) 613 609 clkrate = clk_get_rate(ssi_private->baudclk); 614 610 else 615 611 clkrate = clk_round_rate(ssi_private->baudclk, tmprate); 612 + 613 + /* 614 + * Hardware limitation: The bclk rate must be 615 + * never greater than 1/5 IPG clock rate 616 + */ 617 + if (clkrate * 5 > clk_get_rate(ssi_private->clk)) 618 + continue; 616 619 617 620 clkrate /= factor; 618 621 afreq = clkrate / (i + 1); ··· 1227 1224 ssi_private->dma_params_tx.addr = ssi_private->ssi_phys + CCSR_SSI_STX0; 1228 1225 ssi_private->dma_params_rx.addr = ssi_private->ssi_phys + CCSR_SSI_SRX0; 1229 1226 1230 - ret = !of_property_read_u32_array(np, "dmas", dmas, 4); 1227 + ret = of_property_read_u32_array(np, "dmas", dmas, 4); 1231 1228 if (ssi_private->use_dma && !ret && dmas[2] == IMX_DMATYPE_SSI_DUAL) { 1232 1229 ssi_private->use_dual_fifo = true; 1233 1230 /* When using dual fifo mode, we need to keep watermark
+5
sound/soc/generic/simple-card.c
··· 372 372 strlen(dai_link->cpu_dai_name) + 373 373 strlen(dai_link->codec_dai_name) + 2, 374 374 GFP_KERNEL); 375 + if (!name) { 376 + ret = -ENOMEM; 377 + goto dai_link_of_err; 378 + } 379 + 375 380 sprintf(name, "%s-%s", dai_link->cpu_dai_name, 376 381 dai_link->codec_dai_name); 377 382 dai_link->name = dai_link->stream_name = name;
+1 -1
sound/soc/intel/sst-atom-controls.h
··· 150 150 151 151 enum sst_task { 152 152 SST_TASK_SBA = 1, 153 - SST_TASK_MMX, 153 + SST_TASK_MMX = 3, 154 154 }; 155 155 156 156 enum sst_type {
-3
sound/soc/intel/sst-haswell-dsp.c
··· 207 207 module = (void *)module + sizeof(*module) + module->mod_size; 208 208 } 209 209 210 - /* allocate scratch mem regions */ 211 - sst_block_alloc_scratch(dsp); 212 - 213 210 return 0; 214 211 } 215 212
+24 -8
sound/soc/intel/sst-haswell-ipc.c
··· 1732 1732 int sst_hsw_dsp_load(struct sst_hsw *hsw) 1733 1733 { 1734 1734 struct sst_dsp *dsp = hsw->dsp; 1735 + struct sst_fw *sst_fw, *t; 1735 1736 int ret; 1736 1737 1737 1738 dev_dbg(hsw->dev, "loading audio DSP...."); ··· 1749 1748 return ret; 1750 1749 } 1751 1750 1752 - ret = sst_fw_reload(hsw->sst_fw); 1753 - if (ret < 0) { 1754 - dev_err(hsw->dev, "error: SST FW reload failed\n"); 1755 - sst_dsp_dma_put_channel(dsp); 1756 - return -ENOMEM; 1751 + list_for_each_entry_safe_reverse(sst_fw, t, &dsp->fw_list, list) { 1752 + ret = sst_fw_reload(sst_fw); 1753 + if (ret < 0) { 1754 + dev_err(hsw->dev, "error: SST FW reload failed\n"); 1755 + sst_dsp_dma_put_channel(dsp); 1756 + return -ENOMEM; 1757 + } 1757 1758 } 1759 + ret = sst_block_alloc_scratch(hsw->dsp); 1760 + if (ret < 0) 1761 + return -EINVAL; 1758 1762 1759 1763 sst_dsp_dma_put_channel(dsp); 1760 1764 return 0; ··· 1815 1809 1816 1810 int sst_hsw_dsp_runtime_sleep(struct sst_hsw *hsw) 1817 1811 { 1818 - sst_fw_unload(hsw->sst_fw); 1819 - sst_block_free_scratch(hsw->dsp); 1812 + struct sst_fw *sst_fw, *t; 1813 + struct sst_dsp *dsp = hsw->dsp; 1814 + 1815 + list_for_each_entry_safe(sst_fw, t, &dsp->fw_list, list) { 1816 + sst_fw_unload(sst_fw); 1817 + } 1818 + sst_block_free_scratch(dsp); 1820 1819 1821 1820 hsw->boot_complete = false; 1822 1821 1823 - sst_dsp_sleep(hsw->dsp); 1822 + sst_dsp_sleep(dsp); 1824 1823 1825 1824 return 0; 1826 1825 } ··· 1953 1942 dev_err(dev, "error: failed to load firmware\n"); 1954 1943 goto fw_err; 1955 1944 } 1945 + 1946 + /* allocate scratch mem regions */ 1947 + ret = sst_block_alloc_scratch(hsw->dsp); 1948 + if (ret < 0) 1949 + goto boot_err; 1956 1950 1957 1951 /* wait for DSP boot completion */ 1958 1952 sst_dsp_boot(hsw->dsp);
+9 -1
sound/soc/intel/sst/sst.c
··· 350 350 351 351 spin_lock_irqsave(&ctx->ipc_spin_lock, irq_flags); 352 352 353 - shim_regs->imrx = sst_shim_read64(shim, SST_IMRX), 353 + shim_regs->imrx = sst_shim_read64(shim, SST_IMRX); 354 + shim_regs->csr = sst_shim_read64(shim, SST_CSR); 355 + 354 356 355 357 spin_unlock_irqrestore(&ctx->ipc_spin_lock, irq_flags); 356 358 } ··· 369 367 */ 370 368 spin_lock_irqsave(&ctx->ipc_spin_lock, irq_flags); 371 369 sst_shim_write64(shim, SST_IMRX, shim_regs->imrx), 370 + sst_shim_write64(shim, SST_CSR, shim_regs->csr), 372 371 spin_unlock_irqrestore(&ctx->ipc_spin_lock, irq_flags); 373 372 } 374 373 ··· 382 379 * initially active. So change the state to active before 383 380 * enabling the pm 384 381 */ 382 + 383 + if (!acpi_disabled) 384 + pm_runtime_set_active(ctx->dev); 385 + 385 386 pm_runtime_enable(ctx->dev); 386 387 387 388 if (acpi_disabled) ··· 416 409 synchronize_irq(ctx->irq_num); 417 410 flush_workqueue(ctx->post_msg_wq); 418 411 412 + ctx->ops->reset(ctx); 419 413 /* save the shim registers because PMC doesn't save state */ 420 414 sst_save_shim64(ctx, ctx->shim, ctx->shim_regs64); 421 415
+1 -1
sound/soc/kirkwood/kirkwood-i2s.c
··· 579 579 if (PTR_ERR(priv->extclk) == -EPROBE_DEFER) 580 580 return -EPROBE_DEFER; 581 581 } else { 582 - if (priv->extclk == priv->clk) { 582 + if (clk_is_match(priv->extclk, priv->clk)) { 583 583 devm_clk_put(&pdev->dev, priv->extclk); 584 584 priv->extclk = ERR_PTR(-EINVAL); 585 585 } else {
+3
sound/soc/omap/omap-hdmi-audio.c
··· 352 352 return ret; 353 353 354 354 card = devm_kzalloc(dev, sizeof(*card), GFP_KERNEL); 355 + if (!card) 356 + return -ENOMEM; 357 + 355 358 card->name = devm_kasprintf(dev, GFP_KERNEL, 356 359 "HDMI %s", dev_name(ad->dssdev)); 357 360 card->owner = THIS_MODULE;
+11
sound/soc/omap/omap-mcbsp.c
··· 530 530 531 531 case OMAP_MCBSP_SYSCLK_CLKX_EXT: 532 532 regs->srgr2 |= CLKSM; 533 + regs->pcr0 |= SCLKME; 534 + /* 535 + * If McBSP is master but yet the CLKX/CLKR pin drives the SRG, 536 + * disable output on those pins. This enables to inject the 537 + * reference clock through CLKX/CLKR. For this to work 538 + * set_dai_sysclk() _needs_ to be called after set_dai_fmt(). 539 + */ 540 + regs->pcr0 &= ~CLKXM; 541 + break; 533 542 case OMAP_MCBSP_SYSCLK_CLKR_EXT: 534 543 regs->pcr0 |= SCLKME; 544 + /* Disable output on CLKR pin in master mode */ 545 + regs->pcr0 &= ~CLKRM; 535 546 break; 536 547 default: 537 548 err = -ENODEV;
+1 -1
sound/soc/omap/omap-pcm.c
··· 201 201 struct snd_pcm *pcm = rtd->pcm; 202 202 int ret; 203 203 204 - ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(64)); 204 + ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32)); 205 205 if (ret) 206 206 return ret; 207 207
+5 -5
sound/soc/samsung/Kconfig
··· 174 174 175 175 config SND_SOC_SPEYSIDE 176 176 tristate "Audio support for Wolfson Speyside" 177 - depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 177 + depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 && I2C && SPI_MASTER 178 178 select SND_SAMSUNG_I2S 179 179 select SND_SOC_WM8996 180 180 select SND_SOC_WM9081 ··· 189 189 190 190 config SND_SOC_BELLS 191 191 tristate "Audio support for Wolfson Bells" 192 - depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 && MFD_ARIZONA 192 + depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 && MFD_ARIZONA && I2C && SPI_MASTER 193 193 select SND_SAMSUNG_I2S 194 194 select SND_SOC_WM5102 195 195 select SND_SOC_WM5110 ··· 206 206 207 207 config SND_SOC_LITTLEMILL 208 208 tristate "Audio support for Wolfson Littlemill" 209 - depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 209 + depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 && I2C 210 210 select SND_SAMSUNG_I2S 211 211 select MFD_WM8994 212 212 select SND_SOC_WM8994 ··· 223 223 224 224 config SND_SOC_ODROIDX2 225 225 tristate "Audio support for Odroid-X2 and Odroid-U3" 226 - depends on SND_SOC_SAMSUNG 226 + depends on SND_SOC_SAMSUNG && I2C 227 227 select SND_SOC_MAX98090 228 228 select SND_SAMSUNG_I2S 229 229 help ··· 231 231 232 232 config SND_SOC_ARNDALE_RT5631_ALC5631 233 233 tristate "Audio support for RT5631(ALC5631) on Arndale Board" 234 - depends on SND_SOC_SAMSUNG 234 + depends on SND_SOC_SAMSUNG && I2C 235 235 select SND_SAMSUNG_I2S 236 236 select SND_SOC_RT5631
+2 -2
sound/soc/sh/rcar/core.c
··· 1252 1252 goto exit_snd_probe; 1253 1253 } 1254 1254 1255 + dev_set_drvdata(dev, priv); 1256 + 1255 1257 /* 1256 1258 * asoc register 1257 1259 */ ··· 1269 1267 dev_err(dev, "cannot snd dai register\n"); 1270 1268 goto exit_snd_soc; 1271 1269 } 1272 - 1273 - dev_set_drvdata(dev, priv); 1274 1270 1275 1271 pm_runtime_enable(dev); 1276 1272
+30 -11
sound/soc/soc-core.c
··· 347 347 if (!buf) 348 348 return -ENOMEM; 349 349 350 + mutex_lock(&client_mutex); 351 + 350 352 list_for_each_entry(codec, &codec_list, list) { 351 353 len = snprintf(buf + ret, PAGE_SIZE - ret, "%s\n", 352 354 codec->component.name); ··· 359 357 break; 360 358 } 361 359 } 360 + 361 + mutex_unlock(&client_mutex); 362 362 363 363 if (ret >= 0) 364 364 ret = simple_read_from_buffer(user_buf, count, ppos, buf, ret); ··· 386 382 if (!buf) 387 383 return -ENOMEM; 388 384 385 + mutex_lock(&client_mutex); 386 + 389 387 list_for_each_entry(component, &component_list, list) { 390 388 list_for_each_entry(dai, &component->dai_list, list) { 391 389 len = snprintf(buf + ret, PAGE_SIZE - ret, "%s\n", ··· 400 394 } 401 395 } 402 396 } 397 + 398 + mutex_unlock(&client_mutex); 403 399 404 400 ret = simple_read_from_buffer(user_buf, count, ppos, buf, ret); 405 401 ··· 426 418 if (!buf) 427 419 return -ENOMEM; 428 420 421 + mutex_lock(&client_mutex); 422 + 429 423 list_for_each_entry(platform, &platform_list, list) { 430 424 len = snprintf(buf + ret, PAGE_SIZE - ret, "%s\n", 431 425 platform->component.name); ··· 438 428 break; 439 429 } 440 430 } 431 + 432 + mutex_unlock(&client_mutex); 441 433 442 434 ret = simple_read_from_buffer(user_buf, count, ppos, buf, ret); 443 435 ··· 848 836 { 849 837 struct snd_soc_component *component; 850 838 839 + lockdep_assert_held(&client_mutex); 840 + 851 841 list_for_each_entry(component, &component_list, list) { 852 842 if (of_node) { 853 843 if (component->dev->of_node == of_node) ··· 867 853 { 868 854 struct snd_soc_component *component; 869 855 struct snd_soc_dai *dai; 856 + 857 + lockdep_assert_held(&client_mutex); 870 858 871 859 /* Find CPU DAI from registered DAIs*/ 872 860 list_for_each_entry(component, &component_list, list) { ··· 1524 1508 struct snd_soc_codec *codec; 1525 1509 int ret, i, order; 1526 1510 1511 + mutex_lock(&client_mutex); 1527 1512 mutex_lock_nested(&card->mutex, SND_SOC_CARD_CLASS_INIT); 1528 1513 1529 1514 /* bind DAIs */ ··· 1679 1662 card->instantiated = 1; 1680 1663 snd_soc_dapm_sync(&card->dapm); 1681 1664 mutex_unlock(&card->mutex); 1665 + mutex_unlock(&client_mutex); 1682 1666 1683 1667 return 0; ··· 1698 1680 1699 1681 base_error: 1700 1682 mutex_unlock(&card->mutex); 1683 + mutex_unlock(&client_mutex); 1701 1684 1702 1685 return ret; 1703 1686 } ··· 2732 2713 list_del(&component->list); 2733 2714 } 2734 2715 2735 - static void snd_soc_component_del(struct snd_soc_component *component) 2736 - { 2737 - mutex_lock(&client_mutex); 2738 - snd_soc_component_del_unlocked(component); 2739 - mutex_unlock(&client_mutex); 2740 - } 2741 - 2742 2716 int snd_soc_register_component(struct device *dev, 2743 2717 const struct snd_soc_component_driver *cmpnt_drv, 2744 2718 struct snd_soc_dai_driver *dai_drv, ··· 2779 2767 { 2780 2768 struct snd_soc_component *cmpnt; 2781 2769 2770 + mutex_lock(&client_mutex); 2782 2771 list_for_each_entry(cmpnt, &component_list, list) { 2783 2772 if (dev == cmpnt->dev && cmpnt->registered_as_component) 2784 2773 goto found; 2785 2774 } 2775 + mutex_unlock(&client_mutex); 2786 2776 return; 2787 2777 2788 2778 found: 2789 - snd_soc_component_del(cmpnt); 2779 + snd_soc_component_del_unlocked(cmpnt); 2780 + mutex_unlock(&client_mutex); 2790 2781 snd_soc_component_cleanup(cmpnt); 2791 2782 kfree(cmpnt); 2792 2783 } ··· 2897 2882 { 2898 2883 struct snd_soc_platform *platform; 2899 2884 2885 + mutex_lock(&client_mutex); 2900 2886 list_for_each_entry(platform, &platform_list, list) { 2901 - if (dev == platform->dev) 2887 + if (dev == platform->dev) { 2888 + mutex_unlock(&client_mutex); 2902 2889 return platform; 2890 + } 2903 2891 } 2892 + mutex_unlock(&client_mutex); 2904 2893 2905 2894 return NULL; 2906 2895 } ··· 3109 3090 { 3110 3091 struct snd_soc_codec *codec; 3111 3092 3093 + mutex_lock(&client_mutex); 3112 3094 list_for_each_entry(codec, &codec_list, list) { 3113 3095 if (dev == codec->dev) 3114 3096 goto found; 3115 3097 } 3098 + mutex_unlock(&client_mutex); 3116 3099 return; 3117 3100 3118 3101 found: 3119 - 3120 - mutex_lock(&client_mutex); 3121 3102 list_del(&codec->list); 3122 3103 snd_soc_component_del_unlocked(&codec->component); 3123 3104 mutex_unlock(&client_mutex);
+3 -3
sound/usb/line6/playback.c
··· 39 39 for (; p < buf_end; ++p) { 40 40 short pv = le16_to_cpu(*p); 41 41 int val = (pv * volume[chn & 1]) >> 8; 42 - pv = clamp(val, 0x7fff, -0x8000); 42 + pv = clamp(val, -0x8000, 0x7fff); 43 43 *p = cpu_to_le16(pv); 44 44 ++chn; 45 45 } ··· 54 54 55 55 val = p[0] + (p[1] << 8) + ((signed char)p[2] << 16); 56 56 val = (val * volume[chn & 1]) >> 8; 57 - val = clamp(val, 0x7fffff, -0x800000); 57 + val = clamp(val, -0x800000, 0x7fffff); 58 58 p[0] = val; 59 59 p[1] = val >> 8; 60 60 p[2] = val >> 16; ··· 126 126 short pov = le16_to_cpu(*po); 127 127 short piv = le16_to_cpu(*pi); 128 128 int val = pov + ((piv * volume) >> 8); 129 - pov = clamp(val, 0x7fff, -0x8000); 129 + pov = clamp(val, -0x8000, 0x7fff); 130 130 *po = cpu_to_le16(pov); 131 131 } 132 132 }
+30
sound/usb/quirks-table.h
··· 1773 1773 } 1774 1774 } 1775 1775 }, 1776 + { 1777 + USB_DEVICE(0x0582, 0x0159), 1778 + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { 1779 + /* .vendor_name = "Roland", */ 1780 + /* .product_name = "UA-22", */ 1781 + .ifnum = QUIRK_ANY_INTERFACE, 1782 + .type = QUIRK_COMPOSITE, 1783 + .data = (const struct snd_usb_audio_quirk[]) { 1784 + { 1785 + .ifnum = 0, 1786 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 1787 + }, 1788 + { 1789 + .ifnum = 1, 1790 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 1791 + }, 1792 + { 1793 + .ifnum = 2, 1794 + .type = QUIRK_MIDI_FIXED_ENDPOINT, 1795 + .data = & (const struct snd_usb_midi_endpoint_info) { 1796 + .out_cables = 0x0001, 1797 + .in_cables = 0x0001 1798 + } 1799 + }, 1800 + { 1801 + .ifnum = -1 1802 + } 1803 + } 1804 + } 1805 + }, 1776 1806 /* this catches most recent vendor-specific Roland devices */ 1777 1807 { 1778 1808 .match_flags = USB_DEVICE_ID_MATCH_VENDOR |
+2
tools/perf/util/annotate.c
··· 30 30 31 31 static void ins__delete(struct ins_operands *ops) 32 32 { 33 + if (ops == NULL) 34 + return; 33 35 zfree(&ops->source.raw); 34 36 zfree(&ops->source.name); 35 37 zfree(&ops->target.raw);
+1 -1
tools/power/cpupower/Makefile
··· 209 209 210 210 $(OUTPUT)cpupower: $(UTIL_OBJS) $(OUTPUT)libcpupower.so.$(LIB_MAJ) 211 211 $(ECHO) " CC " $@ 212 - $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lcpupower -Wl,-rpath=./ -lrt -lpci -L$(OUTPUT) -o $@ 212 + $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lcpupower -lrt -lpci -L$(OUTPUT) -o $@ 213 213 $(QUIET) $(STRIPCMD) $@ 214 214 215 215 $(OUTPUT)po/$(PACKAGE).pot: $(UTIL_SRC)
+8
tools/testing/selftests/Makefile
··· 22 22 TARGETS_HOTPLUG = cpu-hotplug 23 23 TARGETS_HOTPLUG += memory-hotplug 24 24 25 + # Clear LDFLAGS and MAKEFLAGS if called from main 26 + # Makefile to avoid test build failures when test 27 + # Makefile doesn't have explicit build rules. 28 + ifeq (1,$(MAKELEVEL)) 29 + undefine LDFLAGS 30 + override MAKEFLAGS = 31 + endif 32 + 25 33 all: 26 34 for TARGET in $(TARGETS); do \ 27 35 make -C $$TARGET; \
+9 -1
tools/testing/selftests/exec/execveat.c
··· 30 30 #ifdef __NR_execveat 31 31 return syscall(__NR_execveat, fd, path, argv, envp, flags); 32 32 #else 33 - errno = -ENOSYS; 33 + errno = ENOSYS; 34 34 return -1; 35 35 #endif 36 36 } ··· 233 233 int fd_script_ephemeral = open_or_die("script.ephemeral", O_RDONLY); 234 234 int fd_cloexec = open_or_die("execveat", O_RDONLY|O_CLOEXEC); 235 235 int fd_script_cloexec = open_or_die("script", O_RDONLY|O_CLOEXEC); 236 + 237 + /* Check if we have execveat at all, and bail early if not */ 238 + errno = 0; 239 + execveat_(-1, NULL, NULL, NULL, 0); 240 + if (errno == ENOSYS) { 241 + printf("[FAIL] ENOSYS calling execveat - no kernel support?\n"); 242 + return 1; 243 + } 236 244 237 245 /* Change file position to confirm it doesn't affect anything */ 238 246 lseek(fd, 10, SEEK_SET);
+8
virt/kvm/arm/vgic-v2.c
··· 72 72 { 73 73 if (!(lr_desc.state & LR_STATE_MASK)) 74 74 vcpu->arch.vgic_cpu.vgic_v2.vgic_elrsr |= (1ULL << lr); 75 + else 76 + vcpu->arch.vgic_cpu.vgic_v2.vgic_elrsr &= ~(1ULL << lr); 75 77 } 76 78 77 79 static u64 vgic_v2_get_elrsr(const struct kvm_vcpu *vcpu) ··· 84 82 static u64 vgic_v2_get_eisr(const struct kvm_vcpu *vcpu) 85 83 { 86 84 return vcpu->arch.vgic_cpu.vgic_v2.vgic_eisr; 85 + } 86 + 87 + static void vgic_v2_clear_eisr(struct kvm_vcpu *vcpu) 88 + { 89 + vcpu->arch.vgic_cpu.vgic_v2.vgic_eisr = 0; 87 90 } 88 91 89 92 static u32 vgic_v2_get_interrupt_status(const struct kvm_vcpu *vcpu) ··· 155 148 .sync_lr_elrsr = vgic_v2_sync_lr_elrsr, 156 149 .get_elrsr = vgic_v2_get_elrsr, 157 150 .get_eisr = vgic_v2_get_eisr, 151 + .clear_eisr = vgic_v2_clear_eisr, 158 152 .get_interrupt_status = vgic_v2_get_interrupt_status, 159 153 .enable_underflow = vgic_v2_enable_underflow, 160 154 .disable_underflow = vgic_v2_disable_underflow,
+8
virt/kvm/arm/vgic-v3.c
··· 104 104 { 105 105 if (!(lr_desc.state & LR_STATE_MASK)) 106 106 vcpu->arch.vgic_cpu.vgic_v3.vgic_elrsr |= (1U << lr); 107 + else 108 + vcpu->arch.vgic_cpu.vgic_v3.vgic_elrsr &= ~(1U << lr); 107 109 } 108 110 109 111 static u64 vgic_v3_get_elrsr(const struct kvm_vcpu *vcpu) ··· 116 114 static u64 vgic_v3_get_eisr(const struct kvm_vcpu *vcpu) 117 115 { 118 116 return vcpu->arch.vgic_cpu.vgic_v3.vgic_eisr; 117 + } 118 + 119 + static void vgic_v3_clear_eisr(struct kvm_vcpu *vcpu) 120 + { 121 + vcpu->arch.vgic_cpu.vgic_v3.vgic_eisr = 0; 119 122 } 120 123 121 124 static u32 vgic_v3_get_interrupt_status(const struct kvm_vcpu *vcpu) ··· 199 192 .sync_lr_elrsr = vgic_v3_sync_lr_elrsr, 200 193 .get_elrsr = vgic_v3_get_elrsr, 201 194 .get_eisr = vgic_v3_get_eisr, 195 + .clear_eisr = vgic_v3_clear_eisr, 202 196 .get_interrupt_status = vgic_v3_get_interrupt_status, 203 197 .enable_underflow = vgic_v3_enable_underflow, 204 198 .disable_underflow = vgic_v3_disable_underflow,
+20 -2
virt/kvm/arm/vgic.c
··· 883 883 return vgic_ops->get_eisr(vcpu); 884 884 } 885 885 886 + static inline void vgic_clear_eisr(struct kvm_vcpu *vcpu) 887 + { 888 + vgic_ops->clear_eisr(vcpu); 889 + } 890 + 886 891 static inline u32 vgic_get_interrupt_status(struct kvm_vcpu *vcpu) 887 892 { 888 893 return vgic_ops->get_interrupt_status(vcpu); ··· 927 922 vgic_set_lr(vcpu, lr_nr, vlr); 928 923 clear_bit(lr_nr, vgic_cpu->lr_used); 929 924 vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY; 925 + vgic_sync_lr_elrsr(vcpu, lr_nr, vlr); 930 926 } 931 927 932 928 /* ··· 984 978 BUG_ON(!test_bit(lr, vgic_cpu->lr_used)); 985 979 vlr.state |= LR_STATE_PENDING; 986 980 vgic_set_lr(vcpu, lr, vlr); 981 + vgic_sync_lr_elrsr(vcpu, lr, vlr); 987 982 return true; 988 983 } 989 984 } ··· 1006 999 vlr.state |= LR_EOI_INT; 1007 1000 1008 1001 vgic_set_lr(vcpu, lr, vlr); 1002 + vgic_sync_lr_elrsr(vcpu, lr, vlr); 1009 1003 1010 1004 return true; 1011 1005 } ··· 1143 1135 1144 1136 if (status & INT_STATUS_UNDERFLOW) 1145 1137 vgic_disable_underflow(vcpu); 1138 + 1139 + /* 1140 + * In the next iterations of the vcpu loop, if we sync the vgic state 1141 + * after flushing it, but before entering the guest (this happens for 1142 + * pending signals and vmid rollovers), then make sure we don't pick 1143 + * up any old maintenance interrupts here. 1144 + */ 1145 + vgic_clear_eisr(vcpu); 1146 1146 1147 1147 return level_pending; 1148 1148 } ··· 1599 1583 * emulation. So check this here again. KVM_CREATE_DEVICE does 1600 1584 * the proper checks already. 1601 1585 */ 1602 - if (type == KVM_DEV_TYPE_ARM_VGIC_V2 && !vgic->can_emulate_gicv2) 1603 - return -ENODEV; 1586 + if (type == KVM_DEV_TYPE_ARM_VGIC_V2 && !vgic->can_emulate_gicv2) { 1587 + ret = -ENODEV; 1588 + goto out; 1589 + } 1604 1590 1605 1591 /* 1606 1592 * Any time a vcpu is run, vcpu_load is called which tries to grab the
+8 -7
virt/kvm/kvm_main.c
··· 471 471 BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX); 472 472 473 473 r = -ENOMEM; 474 - kvm->memslots = kzalloc(sizeof(struct kvm_memslots), GFP_KERNEL); 474 + kvm->memslots = kvm_kvzalloc(sizeof(struct kvm_memslots)); 475 475 if (!kvm->memslots) 476 476 goto out_err_no_srcu; 477 477 ··· 522 522 out_err_no_disable: 523 523 for (i = 0; i < KVM_NR_BUSES; i++) 524 524 kfree(kvm->buses[i]); 525 - kfree(kvm->memslots); 525 + kvfree(kvm->memslots); 526 526 kvm_arch_free_vm(kvm); 527 527 return ERR_PTR(r); 528 528 } ··· 578 578 kvm_for_each_memslot(memslot, slots) 579 579 kvm_free_physmem_slot(kvm, memslot, NULL); 580 580 581 - kfree(kvm->memslots); 581 + kvfree(kvm->memslots); 582 582 } 583 583 584 584 static void kvm_destroy_devices(struct kvm *kvm) ··· 871 871 goto out_free; 872 872 } 873 873 874 - slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots), 875 - GFP_KERNEL); 874 + slots = kvm_kvzalloc(sizeof(struct kvm_memslots)); 876 875 if (!slots) 877 876 goto out_free; 877 + memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots)); 878 878 879 879 if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) { 880 880 slot = id_to_memslot(slots, mem->slot); ··· 917 917 kvm_arch_commit_memory_region(kvm, mem, &old, change); 918 918 919 919 kvm_free_physmem_slot(kvm, &old, &new); 920 - kfree(old_memslots); 920 + kvfree(old_memslots); 921 921 922 922 /* 923 923 * IOMMU mapping: New slots need to be mapped. Old slots need to be ··· 936 936 return 0; 937 937 938 938 out_slots: 939 - kfree(slots); 939 + kvfree(slots); 940 940 out_free: 941 941 kvm_free_physmem_slot(kvm, &new, &old); 942 942 out: ··· 2492 2492 case KVM_CAP_SIGNAL_MSI: 2493 2493 #endif 2494 2494 #ifdef CONFIG_HAVE_KVM_IRQFD 2495 + case KVM_CAP_IRQFD: 2495 2496 case KVM_CAP_IRQFD_RESAMPLE: 2496 2497 #endif 2497 2498 case KVM_CAP_CHECK_EXTENSION_VM: