Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'usb-3.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB patches from Greg KH:
"Here's the big USB pull request for 3.15-rc1.

The normal set of patches, lots of controller driver updates, and a
smattering of individual USB driver updates as well.

All have been in linux-next for a while"

* tag 'usb-3.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (249 commits)
xhci: Transition maintainership to Mathias Nyman.
USB: disable reset-resume when USB_QUIRK_RESET is set
USB: unbind all interfaces before rebinding any
usb: phy: Add ulpi IDs for SMSC USB3320 and TI TUSB1210
usb: gadget: tcm_usb_gadget: stop format strings
usb: gadget: f_fs: add missing spinlock and mutex unlock
usb: gadget: composite: switch over to ERR_CAST()
usb: gadget: inode: switch over to memdup_user()
usb: gadget: f_subset: switch over to PTR_RET
usb: gadget: lpc32xx_udc: fix wrong clk_put() sequence
USB: keyspan: remove dead debugging code
USB: serial: add missing newlines to dev_<level> messages.
USB: serial: add missing braces
USB: serial: continue to write on errors
USB: serial: continue to read on errors
USB: serial: make bulk_out_size a lower limit
USB: cypress_m8: fix potential scheduling while atomic
devicetree: bindings: document lsi,zevio-usb
usb: chipidea: add support for USB OTG controller on LSI Zevio SoCs
usb: chipidea: imx: Use dev_name() for ci_hdrc name to distinguish USBs
...

+9166 -2201
+79
Documentation/devicetree/bindings/phy/apm-xgene-phy.txt
···
+ * APM X-Gene 15Gbps Multi-purpose PHY nodes
+
+ PHY nodes are defined to describe the on-chip 15Gbps Multi-purpose PHY. Each
+ PHY (pair of lanes) has its own node.
+
+ Required properties:
+ - compatible   : Shall be "apm,xgene-phy".
+ - reg          : PHY memory resource; the SDS PHY access resource.
+ - #phy-cells   : Shall be 1, as the PHY expects one argument to select its
+                  mode. Possible values are 0 (SATA), 1 (SGMII), 2 (PCIe),
+                  3 (USB), and 4 (XFI).
+
+ Optional properties:
+ - status               : Shall be "ok" if enabled or "disabled" if disabled.
+                          Default is "ok".
+ - clocks               : Reference to the clock entry.
+ - apm,tx-eye-tuning    : Manual control to fine-tune the capture of the serial
+                          bit lines away from the automatically calibrated
+                          position. Two sets of 3-tuples, one entry per
+                          supported link speed (up to 3) on the host. Range is
+                          0 to 127 in units of one bit period. Default is 10.
+ - apm,tx-eye-direction : Eye-tuning manual control direction. 0 samples data
+                          earlier than the nominal sampling point; 1 samples
+                          data later. Two sets of 3-tuples, one entry per
+                          supported link speed (up to 3). Default is 0.
+ - apm,tx-boost-gain    : Frequency-boost AC (LSB 3 bits) and DC (2 bits)
+                          gain control. Two sets of 3-tuples, one entry per
+                          supported link speed (up to 3). Range is 0 to 31
+                          in units of dB. Default is 3.
+ - apm,tx-amplitude     : Amplitude control. Two sets of 3-tuples, one entry
+                          per supported link speed (up to 3). Range is 0 to
+                          199500 in units of uV. Default is 199500 uV.
+ - apm,tx-pre-cursor1   : 1st pre-cursor emphasis tap control. Two sets of
+                          3-tuples, one entry per supported link speed (up to
+                          3). Range is 0 to 273000 in units of uV. Default
+                          is 0.
+ - apm,tx-pre-cursor2   : 2nd pre-cursor emphasis tap control. Two sets of
+                          3-tuples, one entry per supported link speed (up to
+                          3). Range is 0 to 127400 in units of uV. Default
+                          is 0.
+ - apm,tx-post-cursor   : Post-cursor emphasis tap control. Two sets of
+                          3-tuples for Gen1, Gen2, and Gen3. Range is 0 to
+                          0x1f in units of 18.2 mV. Default is 0xf.
+ - apm,tx-speed         : Tx operating speed. One set of 3-tuples, one entry
+                          per supported link speed on the host.
+                          0 = 1-2Gbps
+                          1 = 2-4Gbps (1st tuple default)
+                          2 = 4-8Gbps
+                          3 = 8-15Gbps (2nd tuple default)
+                          4 = 2.5-4Gbps
+                          5 = 4-5Gbps
+                          6 = 5-6Gbps
+                          7 = 6-16Gbps (3rd tuple default)
+
+ NOTE: PHY override parameters are board-specific settings.
+
+ Example:
+	phy1: phy@1f21a000 {
+		compatible = "apm,xgene-phy";
+		reg = <0x0 0x1f21a000 0x0 0x100>;
+		#phy-cells = <1>;
+		status = "disabled";
+	};
+
+	phy2: phy@1f22a000 {
+		compatible = "apm,xgene-phy";
+		reg = <0x0 0x1f22a000 0x0 0x100>;
+		#phy-cells = <1>;
+		status = "ok";
+	};
+
+	phy3: phy@1f23a000 {
+		compatible = "apm,xgene-phy";
+		reg = <0x0 0x1f23a000 0x0 0x100>;
+		#phy-cells = <1>;
+		status = "ok";
+	};
+54
Documentation/devicetree/bindings/phy/samsung-phy.txt
···
  - compatible : should be "samsung,exynos5250-dp-video-phy";
  - reg : offset and length of the Display Port PHY register set;
  - #phy-cells : from the generic PHY bindings, must be 0;
+
+ Samsung S5P/EXYNOS SoC series USB PHY
+ -------------------------------------------------
+
+ Required properties:
+ - compatible : should be one of the listed compatibles:
+	- "samsung,exynos4210-usb2-phy"
+	- "samsung,exynos4x12-usb2-phy"
+	- "samsung,exynos5250-usb2-phy"
+ - reg : a list of registers used by the phy driver
+	- the first (and obligatory) entry is the location of the phy module's
+	  registers
+ - samsung,sysreg-phandle : handle to the syscon used to control the system
+   registers
+ - samsung,pmureg-phandle : handle to the syscon used to control PMU registers
+ - #phy-cells : from the generic phy bindings, must be 1
+ - clocks and clock-names:
+	- the "phy" clock is required by the phy module, used as a gate
+	- the "ref" clock is used to get the rate of the clock provided to the
+	  PHY module
+
+ The first phandle argument in the PHY specifier identifies the PHY; its
+ meaning is compatible-dependent. For the currently supported SoCs (Exynos 4210
+ and Exynos 4212) it is as follows:
+   0 - USB device ("device"),
+   1 - USB host ("host"),
+   2 - HSIC0 ("hsic0"),
+   3 - HSIC1 ("hsic1").
+
+ Exynos 4210 and Exynos 4212 use mode switching and require that a mode-switch
+ register is supplied.
+
+ Example:
+
+ For Exynos 4412 (compatible with Exynos 4212):
+
+ usbphy: phy@125b0000 {
+	compatible = "samsung,exynos4x12-usb2-phy";
+	reg = <0x125b0000 0x100>;
+	clocks = <&clock 305>, <&clock 2>;
+	clock-names = "phy", "ref";
+	status = "okay";
+	#phy-cells = <1>;
+	samsung,sysreg-phandle = <&sys_reg>;
+	samsung,pmureg-phandle = <&pmu_reg>;
+ };
+
+ Then the PHY can be used in other nodes such as:
+
+ phy-consumer@12340000 {
+	phys = <&usbphy 2>;
+	phy-names = "phy";
+ };
+
+ Refer to the DT bindings documentation of particular PHY consumer devices for
+ more information about required PHYs and the way they are specified.
+26
Documentation/devicetree/bindings/phy/sun4i-usb-phy.txt
···
+ Allwinner sun4i USB PHY
+ -----------------------
+
+ Required properties:
+ - compatible : should be one of "allwinner,sun4i-a10-usb-phy",
+   "allwinner,sun5i-a13-usb-phy" or "allwinner,sun7i-a20-usb-phy"
+ - reg : a list of offset + length pairs
+ - reg-names : "phy_ctrl", "pmu1", and for sun4i or sun7i "pmu2"
+ - #phy-cells : from the generic phy bindings, must be 1
+ - clocks : phandle + clock specifier for the phy clock
+ - clock-names : "usb_phy"
+ - resets : a list of phandle + reset specifier pairs
+ - reset-names : "usb0_reset", "usb1_reset", and for sun4i or sun7i "usb2_reset"
+
+ Example:
+	usbphy: phy@01c13400 {
+		#phy-cells = <1>;
+		compatible = "allwinner,sun4i-a10-usb-phy";
+		/* phy base regs, phy1 pmu reg, phy2 pmu reg */
+		reg = <0x01c13400 0x10 0x01c14800 0x4 0x01c1c800 0x4>;
+		reg-names = "phy_ctrl", "pmu1", "pmu2";
+		clocks = <&usb_clk 8>;
+		clock-names = "usb_phy";
+		resets = <&usb_clk 1>, <&usb_clk 2>;
+		reset-names = "usb1_reset", "usb2_reset";
+	};
+86
Documentation/devicetree/bindings/phy/ti-phy.txt
···
+ TI PHY: DT DOCUMENTATION FOR PHYs in TI PLATFORMs
+
+ OMAP CONTROL PHY
+
+ Required properties:
+ - compatible: Should be one of
+  "ti,control-phy-otghs" - if it has the otghs_control mailbox register, as on
+  OMAP4.
+  "ti,control-phy-usb2" - if it has a power-down bit in the control_dev_conf
+  register, e.g. USB2_PHY on OMAP5.
+  "ti,control-phy-pipe3" - if it has a DPLL and individual Rx & Tx power
+  control, e.g. USB3 PHY and SATA PHY on OMAP5.
+  "ti,control-phy-usb2-dra7" - if it has a power-down register like the USB2
+  PHY on the DRA7 platform.
+  "ti,control-phy-usb2-am437" - if it has a power-down register like the USB2
+  PHY on the AM437 platform.
+ - reg : Address and length of the register set for the device. It contains
+   the address of "otghs_control" for control-phy-otghs or the "power"
+   register for other types.
+ - reg-names: should be "otghs_control" for control-phy-otghs and "power" for
+   other types.
+
+ omap_control_usb: omap-control-usb@4a002300 {
+	compatible = "ti,control-phy-otghs";
+	reg = <0x4a00233c 0x4>;
+	reg-names = "otghs_control";
+ };
+
+ OMAP USB2 PHY
+
+ Required properties:
+ - compatible: Should be "ti,omap-usb2"
+ - reg : Address and length of the register set for the device.
+ - #phy-cells: determines the number of cells that should be given in the
+   phandle while referencing this phy.
+
+ Optional properties:
+ - ctrl-module : phandle of the control module used by the PHY driver to
+   power on the PHY.
+
+ This is usually a subnode of the ocp2scp to which it is connected.
+
+ usb2phy@4a0ad080 {
+	compatible = "ti,omap-usb2";
+	reg = <0x4a0ad080 0x58>;
+	ctrl-module = <&omap_control_usb>;
+	#phy-cells = <0>;
+ };
+
+ TI PIPE3 PHY
+
+ Required properties:
+ - compatible: Should be "ti,phy-usb3" or "ti,phy-pipe3-sata";
+   "ti,omap-usb3" is deprecated.
+ - reg : Address and length of the register set for the device.
+ - reg-names: The names of the register addresses corresponding to the
+   registers filled in "reg".
+ - #phy-cells: determines the number of cells that should be given in the
+   phandle while referencing this phy.
+ - clocks: a list of phandles and clock-specifier pairs, one for each entry in
+   clock-names.
+ - clock-names: should include:
+   * "wkupclk" - wakeup clock.
+   * "sysclk" - system clock.
+   * "refclk" - reference clock.
+
+ Optional properties:
+ - ctrl-module : phandle of the control module used by the PHY driver to
+   power on the PHY.
+
+ This is usually a subnode of the ocp2scp to which it is connected.
+
+ usb3phy@4a084400 {
+	compatible = "ti,phy-usb3";
+	reg = <0x4a084400 0x80>,
+	      <0x4a084800 0x64>,
+	      <0x4a084c00 0x40>;
+	reg-names = "phy_rx", "phy_tx", "pll_ctrl";
+	ctrl-module = <&omap_control_usb>;
+	#phy-cells = <0>;
+	clocks = <&usb_phy_cm_clk32k>,
+		 <&sys_clkin>,
+		 <&usb_otg_ss_refclk960m>;
+	clock-names = "wkupclk",
+		      "sysclk",
+		      "refclk";
+ };
+2
Documentation/devicetree/bindings/usb/ci-hdrc-imx.txt
···
  - vbus-supply: regulator for vbus
  - disable-over-current: disable over current detect
  - external-vbus-divider: enables off-chip resistor divider for Vbus
+ - maximum-speed: limit the maximum connection speed to "full-speed".

  Examples:
  usb@02184000 { /* USB OTG */
···
	fsl,usbmisc = <&usbmisc 0>;
	disable-over-current;
	external-vbus-divider;
+	maximum-speed = "full-speed";
  };
+17
Documentation/devicetree/bindings/usb/ci-hdrc-zevio.txt
···
+ * LSI Zevio USB OTG Controller
+
+ Required properties:
+ - compatible: Should be "lsi,zevio-usb"
+ - reg: Should contain the registers' location and length
+ - interrupts: Should contain the controller interrupt
+
+ Optional properties:
+ - vbus-supply: regulator for vbus
+
+ Examples:
+	usb0: usb@b0000000 {
+		reg = <0xb0000000 0x1000>;
+		compatible = "lsi,zevio-usb";
+		interrupts = <8>;
+		vbus-supply = <&vbus_reg>;
+	};
+4 -2
Documentation/devicetree/bindings/usb/dwc3.txt
···
  - compatible: must be "snps,dwc3"
  - reg : Address and length of the register set for the device
  - interrupts: Interrupts used by the dwc3 controller.
+
+ Optional properties:
  - usb-phy : array of phandles for the PHY devices. The first element
    in the array is expected to be a handle to the USB2/HS PHY and
    the second element is expected to be a handle to the USB3/SS PHY
-
- Optional properties:
+ - phys: from the *Generic PHY* bindings
+ - phy-names: from the *Generic PHY* bindings
  - tx-fifo-resize: determines if the FIFO *has* to be reallocated.

  This is usually a subnode to the DWC3 glue to which it is connected.
+7 -1
Documentation/devicetree/bindings/usb/mxs-phy.txt
···
  * Freescale MXS USB Phy Device

  Required properties:
- - compatible: Should be "fsl,imx23-usbphy"
+ - compatible: should contain:
+	* "fsl,imx23-usbphy" for imx23 and imx28
+	* "fsl,imx6q-usbphy" for imx6dq and imx6dl
+	* "fsl,imx6sl-usbphy" for imx6sl
+   "fsl,imx23-usbphy" is still a fallback for the other strings
  - reg: Should contain registers location and length
  - interrupts: Should contain phy interrupt
+ - fsl,anatop: phandle for the anatop registers; only for the imx6 SoC series

  Example:
  usbphy1: usbphy@020c9000 {
	compatible = "fsl,imx6q-usbphy", "fsl,imx23-usbphy";
	reg = <0x020c9000 0x1000>;
	interrupts = <0 44 0x04>;
+	fsl,anatop = <&anatop>;
  };
-24
Documentation/devicetree/bindings/usb/omap-usb.txt
···
	ranges;
  };

- OMAP CONTROL USB
-
- Required properties:
- - compatible: Should be one of
-  "ti,control-phy-otghs" - if it has otghs_control mailbox register as on OMAP4.
-  "ti,control-phy-usb2" - if it has Power down bit in control_dev_conf register
-  e.g. USB2_PHY on OMAP5.
-  "ti,control-phy-pipe3" - if it has DPLL and individual Rx & Tx power control
-  e.g. USB3 PHY and SATA PHY on OMAP5.
-  "ti,control-phy-dra7usb2" - if it has power down register like USB2 PHY on
-  DRA7 platform.
-  "ti,control-phy-am437usb2" - if it has power down register like USB2 PHY on
-  AM437 platform.
- - reg : Address and length of the register set for the device. It contains
-   the address of "otghs_control" for control-phy-otghs or "power" register
-   for other types.
- - reg-names: should be "otghs_control" control-phy-otghs and "power" for
-   other types.
-
- omap_control_usb: omap-control-usb@4a002300 {
-	compatible = "ti,control-phy-otghs";
-	reg = <0x4a00233c 0x4>;
-	reg-names = "otghs_control";
- };
+2 -2
Documentation/devicetree/bindings/usb/platform-uhci.txt Documentation/devicetree/bindings/usb/usb-uhci.txt
···
  -----------------------------------------------------

  Required properties:
- - compatible : "platform-uhci"
+ - compatible : "generic-uhci" (deprecated: "platform-uhci")
  - reg : Should contain 1 register range (address and length)
  - interrupts : UHCI controller interrupt

  Example:

	uhci@d8007b00 {
-		compatible = "platform-uhci";
+		compatible = "generic-uhci";
		reg = <0xd8007b00 0x200>;
		interrupts = <43>;
	};
+19 -8
Documentation/devicetree/bindings/usb/usb-ehci.txt
···
  USB EHCI controllers

  Required properties:
- - compatible : should be "usb-ehci".
+ - compatible : should be "generic-ehci".
  - reg : should contain at least the address and length of the standard EHCI
    register set for the device. Optional platform-dependent registers
    (debug-port or other) can also be specified here, but only after the
    definition of the standard EHCI registers.
  - interrupts : one EHCI interrupt should be described here.
- If device registers are implemented in big endian mode, the device
- node should have "big-endian-regs" property.
- If controller implementation operates with big endian descriptors,
- "big-endian-desc" property should be specified.
- If both big endian registers and descriptors are used by the controller
- implementation, "big-endian" property can be specified instead of having
- both "big-endian-regs" and "big-endian-desc".
+
+ Optional properties:
+ - big-endian-regs : boolean, set this for HCDs with big-endian registers
+ - big-endian-desc : boolean, set this for HCDs with big-endian descriptors
+ - big-endian : boolean, for HCDs with big-endian-regs + big-endian-desc
+ - clocks : a list of phandle + clock specifier pairs
+ - phys : phandle + phy specifier pair
+ - phy-names : "usb"

  Example (Sequoia 440EPx):
	ehci@e0000300 {
···
		interrupts = <1a 4>;
		reg = <0 e0000300 90 0 e0000390 70>;
		big-endian;
+	};
+
+ Example (Allwinner sun4i A10 SoC):
+	ehci0: usb@01c14000 {
+		compatible = "allwinner,sun4i-a10-ehci", "generic-ehci";
+		reg = <0x01c14000 0x100>;
+		interrupts = <39>;
+		clocks = <&ahb_gates 1>;
+		phys = <&usbphy 1>;
+		phy-names = "usb";
	};
+25
Documentation/devicetree/bindings/usb/usb-ohci.txt
···
+ USB OHCI controllers
+
+ Required properties:
+ - compatible : "generic-ohci"
+ - reg : OHCI controller register range (address and length)
+ - interrupts : OHCI controller interrupt
+
+ Optional properties:
+ - big-endian-regs : boolean, set this for HCDs with big-endian registers
+ - big-endian-desc : boolean, set this for HCDs with big-endian descriptors
+ - big-endian : boolean, for HCDs with big-endian-regs + big-endian-desc
+ - clocks : a list of phandle + clock specifier pairs
+ - phys : phandle + phy specifier pair
+ - phy-names : "usb"
+
+ Example:
+
+	ohci0: usb@01c14400 {
+		compatible = "allwinner,sun4i-a10-ohci", "generic-ohci";
+		reg = <0x01c14400 0x100>;
+		interrupts = <64>;
+		clocks = <&usb_clk 6>, <&ahb_gates 2>;
+		phys = <&usbphy 1>;
+		phy-names = "usb";
+	};
-48
Documentation/devicetree/bindings/usb/usb-phy.txt
···
- USB PHY
-
- OMAP USB2 PHY
-
- Required properties:
- - compatible: Should be "ti,omap-usb2"
- - reg : Address and length of the register set for the device.
- - #phy-cells: determine the number of cells that should be given in the
-   phandle while referencing this phy.
-
- Optional properties:
- - ctrl-module : phandle of the control module used by PHY driver to power on
-   the PHY.
-
- This is usually a subnode of ocp2scp to which it is connected.
-
- usb2phy@4a0ad080 {
-	compatible = "ti,omap-usb2";
-	reg = <0x4a0ad080 0x58>;
-	ctrl-module = <&omap_control_usb>;
-	#phy-cells = <0>;
- };
-
- OMAP USB3 PHY
-
- Required properties:
- - compatible: Should be "ti,omap-usb3"
- - reg : Address and length of the register set for the device.
- - reg-names: The names of the register addresses corresponding to the registers
-   filled in "reg".
- - #phy-cells: determine the number of cells that should be given in the
-   phandle while referencing this phy.
-
- Optional properties:
- - ctrl-module : phandle of the control module used by PHY driver to power on
-   the PHY.
-
- This is usually a subnode of ocp2scp to which it is connected.
-
- usb3phy@4a084400 {
-	compatible = "ti,omap-usb3";
-	reg = <0x4a084400 0x80>,
-	      <0x4a084800 0x64>,
-	      <0x4a084c00 0x40>;
-	reg-names = "phy_rx", "phy_tx", "pll_ctrl";
-	ctrl-module = <&omap_control_usb>;
-	#phy-cells = <0>;
- };
+2 -2
Documentation/devicetree/bindings/usb/usb-xhci.txt
···
  USB xHCI controllers

  Required properties:
- - compatible: should be "xhci-platform".
+ - compatible: should be "generic-xhci" (deprecated: "xhci-platform").
  - reg: should contain address and length of the standard xHCI
    register set for the device.
  - interrupts: one xHCI interrupt should be described here.

  Example:
	usb@f0931000 {
-		compatible = "xhci-platform";
+		compatible = "generic-xhci";
		reg = <0xf0931000 0x8c8>;
		interrupts = <0x0 0x4e 0x0>;
	};
-15
Documentation/devicetree/bindings/usb/via,vt8500-ehci.txt
···
- VIA/Wondermedia VT8500 EHCI Controller
- -----------------------------------------------------
-
- Required properties:
- - compatible : "via,vt8500-ehci"
- - reg : Should contain 1 register ranges(address and length)
- - interrupts : ehci controller interrupt
-
- Example:
-
-	ehci@d8007900 {
-		compatible = "via,vt8500-ehci";
-		reg = <0xd8007900 0x200>;
-		interrupts = <43>;
-	};
-12
Documentation/devicetree/bindings/usb/vt8500-ehci.txt
···
- VIA VT8500 and Wondermedia WM8xxx SoC USB controllers.
-
- Required properties:
- - compatible: Should be "via,vt8500-ehci" or "wm,prizm-ehci".
- - reg: Address range of the ehci registers. size should be 0x200
- - interrupts: Should contain the ehci interrupt.
-
- usb: ehci@D8007100 {
-	compatible = "wm,prizm-ehci", "usb-ehci";
-	reg = <0xD8007100 0x200>;
-	interrupts = <1>;
- };
+135
Documentation/phy/samsung-usb2.txt
···
+ .------------------------------------------------------------------------------+
+ |                       Samsung USB 2.0 PHY adaptation layer                    |
+ +-----------------------------------------------------------------------------+'
+
+ | 1. Description
+ +----------------
+
+ The architecture of the USB 2.0 PHY module in Samsung SoCs is similar
+ across many SoCs. In spite of the similarities it proved difficult to
+ create a single driver that would fit all these PHY controllers. Often
+ the differences were minor, found in particular bits of the PHY's
+ registers. In some rare cases the order of register writes or the PHY
+ power-up process had to be altered. This adaptation layer is a
+ compromise between having separate drivers and having a single driver
+ with added support for many special cases.
+
+ | 2. Files description
+ +----------------------
+
+ - phy-samsung-usb2.c
+   This is the main file of the adaptation layer. It contains the probe
+   function and provides two callbacks to the Generic PHY Framework.
+   These two callbacks are used to power the phy on and off. They carry
+   out the common work that has to be done on all versions of the PHY
+   module. Depending on which SoC was chosen they execute SoC-specific
+   callbacks. The specific SoC version is selected by choosing the
+   appropriate compatible string. In addition, this file contains the
+   struct of_device_id definitions for particular SoCs.
+
+ - phy-samsung-usb2.h
+   This is the include file. It declares the structures used by this
+   driver. In addition it should contain extern declarations for the
+   structures that describe particular SoCs.
+
+ | 3. Supporting SoCs
+ +--------------------
+
+ To support a new SoC a new file should be added to the drivers/phy
+ directory. Each SoC's configuration is stored in an instance of
+ struct samsung_usb2_phy_config:
+
+ struct samsung_usb2_phy_config {
+	const struct samsung_usb2_common_phy *phys;
+	int (*rate_to_clk)(unsigned long, u32 *);
+	unsigned int num_phys;
+	bool has_mode_switch;
+ };
+
+ num_phys is the number of phys handled by the driver. *phys is an
+ array that contains the configuration for each phy. The has_mode_switch
+ field is a boolean flag that determines whether the SoC has the USB host
+ and device on a single pair of pins. If so, a special register has to
+ be modified to change the internal routing of these pins between the
+ USB device and host modules.
+
+ For example, the configuration for Exynos 4210 is as follows:
+
+ const struct samsung_usb2_phy_config exynos4210_usb2_phy_config = {
+	.has_mode_switch	= 0,
+	.num_phys		= EXYNOS4210_NUM_PHYS,
+	.phys			= exynos4210_phys,
+	.rate_to_clk		= exynos4210_rate_to_clk,
+ };
+
+ - int (*rate_to_clk)(unsigned long, u32 *)
+   The rate_to_clk callback converts the rate of the clock used as the
+   reference clock for the PHY module to the value that should be
+   written into the hardware register.
+
+ The exynos4210_phys configuration array is as follows:
+
+ static const struct samsung_usb2_common_phy exynos4210_phys[] = {
+	{
+		.label		= "device",
+		.id		= EXYNOS4210_DEVICE,
+		.power_on	= exynos4210_power_on,
+		.power_off	= exynos4210_power_off,
+	},
+	{
+		.label		= "host",
+		.id		= EXYNOS4210_HOST,
+		.power_on	= exynos4210_power_on,
+		.power_off	= exynos4210_power_off,
+	},
+	{
+		.label		= "hsic0",
+		.id		= EXYNOS4210_HSIC0,
+		.power_on	= exynos4210_power_on,
+		.power_off	= exynos4210_power_off,
+	},
+	{
+		.label		= "hsic1",
+		.id		= EXYNOS4210_HSIC1,
+		.power_on	= exynos4210_power_on,
+		.power_off	= exynos4210_power_off,
+	},
+	{},
+ };
+
+ - int (*power_on)(struct samsung_usb2_phy_instance *);
+ - int (*power_off)(struct samsung_usb2_phy_instance *);
+   These two callbacks power the phy on and off by modifying the
+   appropriate registers.
+
+ The final change to the driver is adding the appropriate compatible
+ value to the phy-samsung-usb2.c file. In the case of Exynos 4210 the
+ following lines were added to the struct of_device_id
+ samsung_usb2_phy_of_match[] array:
+
+ #ifdef CONFIG_PHY_EXYNOS4210_USB2
+	{
+		.compatible = "samsung,exynos4210-usb2-phy",
+		.data = &exynos4210_usb2_phy_config,
+	},
+ #endif
+
+ To add further flexibility to the driver, the Kconfig file makes it
+ possible to include support for selected SoCs in the compiled driver.
+ The Kconfig entry for Exynos 4210 is as follows:
+
+ config PHY_EXYNOS4210_USB2
+	bool "Support for Exynos 4210"
+	depends on PHY_SAMSUNG_USB2
+	depends on CPU_EXYNOS4210
+	help
+	  Enable USB PHY support for Exynos 4210. This option requires that
+	  the Samsung USB 2.0 PHY driver is enabled, and means that support
+	  for this particular SoC is compiled into the driver. In the case
+	  of Exynos 4210, four phys are available - device, host, HSIC0 and
+	  HSIC1.
+
+ The newly created file that supports the new SoC also has to be added
+ to the Makefile. In the case of Exynos 4210 the added line is:
+
+ obj-$(CONFIG_PHY_EXYNOS4210_USB2)	+= phy-exynos4210-usb2.o
+
+ After completing these steps the support for the new SoC should be ready.
+2 -3
MAINTAINERS
···
  F:	drivers/net/wireless/ath/ar5523/

  USB ATTACHED SCSI
- M:	Matthew Wilcox <willy@linux.intel.com>
- M:	Sarah Sharp <sarah.a.sharp@linux.intel.com>
+ M:	Hans de Goede <hdegoede@redhat.com>
  M:	Gerd Hoffmann <kraxel@redhat.com>
  L:	linux-usb@vger.kernel.org
  L:	linux-scsi@vger.kernel.org
···
  F:	drivers/net/wireless/rndis_wlan.c

  USB XHCI DRIVER
- M:	Sarah Sharp <sarah.a.sharp@linux.intel.com>
+ M:	Mathias Nyman <mathias.nyman@intel.com>
  L:	linux-usb@vger.kernel.org
  S:	Supported
  F:	drivers/usb/host/xhci*
-3
arch/arm/Kconfig
···
	select PINCTRL
	select PINCTRL_DOVE
	select PLAT_ORION_LEGACY
-	select USB_ARCH_HAS_EHCI
	help
	  Support for the Marvell Dove SoC 88AP510
···
	select GENERIC_CLOCKEVENTS
	select HAVE_IDE
	select HAVE_PWM
-	select USB_ARCH_HAS_OHCI
	select USE_OF
	help
	  Support for the NXP LPC32XX family of processors
···
	select SAMSUNG_ATAGS
	select SAMSUNG_WAKEMASK
	select SAMSUNG_WDT_RESET
-	select USB_ARCH_HAS_OHCI
	help
	  Samsung S3C64XX series based systems
-1
arch/arm/mach-exynos/Kconfig
···
	select HAVE_ARM_SCU if SMP
	select HAVE_SMP
	select PINCTRL
-	select USB_ARCH_HAS_XHCI
	help
	  Samsung EXYNOS5 (Cortex-A15) SoC based systems
-2
arch/arm/mach-omap2/Kconfig
···
	select PM_OPP if PM
	select PM_RUNTIME if CPU_IDLE
	select SOC_HAS_OMAP2_SDRC
-	select USB_ARCH_HAS_EHCI if USB_SUPPORT

  config ARCH_OMAP4
	bool "TI OMAP4"
···
	select PL310_ERRATA_727915
	select PM_OPP if PM
	select PM_RUNTIME if CPU_IDLE
-	select USB_ARCH_HAS_EHCI if USB_SUPPORT
	select ARM_ERRATA_754322
	select ARM_ERRATA_775420
-4
arch/arm/mach-shmobile/Kconfig
···
	select CPU_V7
	select SH_CLK_CPG
	select ARM_GIC
-	select USB_ARCH_HAS_EHCI
-	select USB_ARCH_HAS_OHCI
	select SYS_SUPPORTS_SH_TMU

  config ARCH_R8A7779
···
	select ARM_GIC
	select CPU_V7
	select SH_CLK_CPG
-	select USB_ARCH_HAS_EHCI
-	select USB_ARCH_HAS_OHCI
	select RENESAS_INTC_IRQPIN
	select SYS_SUPPORTS_SH_TMU
-1
arch/arm/mach-tegra/Kconfig
···
	select RESET_CONTROLLER
	select SOC_BUS
	select SPARSE_IRQ
-	select USB_ARCH_HAS_EHCI if USB_SUPPORT
	select USB_ULPI if USB_PHY
	select USB_ULPI_VIEWPORT if USB_PHY
	select USE_OF
-7
arch/mips/Kconfig
···
	select SYS_SUPPORTS_APM_EMULATION
	select ARCH_REQUIRE_GPIOLIB
	select SYS_SUPPORTS_ZBOOT
-	select USB_ARCH_HAS_OHCI
-	select USB_ARCH_HAS_EHCI

  config AR7
	bool "Texas Instruments AR7"
···
	select SYS_SUPPORTS_LITTLE_ENDIAN
	select SYS_SUPPORTS_SMARTMIPS
	select SYS_SUPPORTS_MICROMIPS
-	select USB_ARCH_HAS_EHCI
	select USB_EHCI_BIG_ENDIAN_DESC
	select USB_EHCI_BIG_ENDIAN_MMIO
	select USE_OF
···
	select SWAP_IO_SPACE
	select HW_HAS_PCI
	select ZONE_DMA32
-	select USB_ARCH_HAS_OHCI
-	select USB_ARCH_HAS_EHCI
	select HOLES_IN_ZONE
	select ARCH_REQUIRE_GPIOLIB
	help
···
	select ZONE_DMA32 if 64BIT
	select SYNC_R4K
	select SYS_HAS_EARLY_PRINTK
-	select USB_ARCH_HAS_OHCI if USB_SUPPORT
-	select USB_ARCH_HAS_EHCI if USB_SUPPORT
	select SYS_SUPPORTS_ZBOOT
	select SYS_SUPPORTS_ZBOOT_UART16550
	help
-8
arch/mips/ath79/Kconfig
···
  endmenu

  config SOC_AR71XX
-	select USB_ARCH_HAS_EHCI
-	select USB_ARCH_HAS_OHCI
	select HW_HAS_PCI
	def_bool n

  config SOC_AR724X
-	select USB_ARCH_HAS_EHCI
-	select USB_ARCH_HAS_OHCI
	select HW_HAS_PCI
	select PCI_AR724X if PCI
	def_bool n

  config SOC_AR913X
-	select USB_ARCH_HAS_EHCI
	def_bool n

  config SOC_AR933X
-	select USB_ARCH_HAS_EHCI
	def_bool n

  config SOC_AR934X
-	select USB_ARCH_HAS_EHCI
	select HW_HAS_PCI
	select PCI_AR724X if PCI
	def_bool n

  config SOC_QCA955X
-	select USB_ARCH_HAS_EHCI
	select HW_HAS_PCI
	select PCI_AR724X if PCI
	def_bool n
-6
arch/mips/ralink/Kconfig
···
  config SOC_RT305X
	bool "RT305x"
	select USB_ARCH_HAS_HCD
-	select USB_ARCH_HAS_OHCI
-	select USB_ARCH_HAS_EHCI

  config SOC_RT3883
	bool "RT3883"
-	select USB_ARCH_HAS_OHCI
-	select USB_ARCH_HAS_EHCI
	select HW_HAS_PCI

  config SOC_MT7620
	bool "MT7620"
-	select USB_ARCH_HAS_OHCI
-	select USB_ARCH_HAS_EHCI

  endchoice
-1
arch/powerpc/platforms/44x/Kconfig
···
	select PPC_FPU
	select IBM440EP_ERR42
	select IBM_EMAC_ZMII
-	select USB_ARCH_HAS_OHCI

  config 440EPX
	bool
-2
arch/powerpc/platforms/ps3/Kconfig
···
	bool "Sony PS3"
	depends on PPC64 && PPC_BOOK3S
	select PPC_CELL
-	select USB_ARCH_HAS_OHCI
	select USB_OHCI_LITTLE_ENDIAN
	select USB_OHCI_BIG_ENDIAN_MMIO
-	select USB_ARCH_HAS_EHCI
	select USB_EHCI_BIG_ENDIAN_MMIO
	select PPC_PCI_CHOICE
	help
-9
arch/sh/Kconfig
··· 347 347 select CPU_HAS_DSP 348 348 select SYS_SUPPORTS_SH_CMT 349 349 select ARCH_WANT_OPTIONAL_GPIOLIB 350 - select USB_ARCH_HAS_OHCI 351 350 select USB_OHCI_SH if USB_OHCI_HCD 352 351 select PINCTRL 353 352 help ··· 357 358 select CPU_SH3 358 359 select CPU_HAS_DSP 359 360 select SYS_SUPPORTS_SH_CMT 360 - select USB_ARCH_HAS_OHCI 361 361 select USB_OHCI_SH if USB_OHCI_HCD 362 362 help 363 363 Select SH7721 if you have a SH3-DSP SH7721 CPU. ··· 434 436 select CPU_SH4A 435 437 select CPU_SHX2 436 438 select ARCH_WANT_OPTIONAL_GPIOLIB 437 - select USB_ARCH_HAS_OHCI 438 - select USB_ARCH_HAS_EHCI 439 439 select PINCTRL 440 440 help 441 441 Select SH7734 if you have a SH4A SH7734 CPU. ··· 443 447 select CPU_SH4A 444 448 select CPU_SHX2 445 449 select ARCH_WANT_OPTIONAL_GPIOLIB 446 - select USB_ARCH_HAS_OHCI 447 - select USB_ARCH_HAS_EHCI 448 450 select PINCTRL 449 451 help 450 452 Select SH7757 if you have a SH4A SH7757 CPU. ··· 450 456 config CPU_SUBTYPE_SH7763 451 457 bool "Support SH7763 processor" 452 458 select CPU_SH4A 453 - select USB_ARCH_HAS_OHCI 454 459 select USB_OHCI_SH if USB_OHCI_HCD 455 460 help 456 461 Select SH7763 if you have a SH4A SH7763(R5S77631) CPU. ··· 478 485 select CPU_HAS_PTEAEX 479 486 select GENERIC_CLOCKEVENTS_BROADCAST if SMP 480 487 select ARCH_WANT_OPTIONAL_GPIOLIB 481 - select USB_ARCH_HAS_OHCI 482 488 select USB_OHCI_SH if USB_OHCI_HCD 483 - select USB_ARCH_HAS_EHCI 484 489 select USB_EHCI_SH if USB_EHCI_HCD 485 490 select PINCTRL 486 491
+1
drivers/hid/hid-core.c
··· 2278 2278 { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_1_PHIDGETSERVO_20) }, 2279 2279 { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_8_8_4_IF_KIT) }, 2280 2280 { HID_USB_DEVICE(USB_VENDOR_ID_YEALINK, USB_DEVICE_ID_YEALINK_P1K_P4K_B2K) }, 2281 + { HID_USB_DEVICE(USB_VENDOR_ID_RISO_KAGAKU, USB_DEVICE_ID_RI_KA_WEBMAIL) }, 2281 2282 { } 2282 2283 }; 2283 2284
+3
drivers/hid/hid-ids.h
··· 961 961 #define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05 962 962 963 963 964 + #define USB_VENDOR_ID_RISO_KAGAKU 0x1294 /* Riso Kagaku Corp. */ 965 + #define USB_DEVICE_ID_RI_KA_WEBMAIL 0x1320 /* Webmail Notifier */ 966 + 964 967 #endif
+104 -3
drivers/phy/Kconfig
··· 16 16 framework should select this config. 17 17 18 18 config PHY_EXYNOS_MIPI_VIDEO 19 - depends on HAS_IOMEM 20 19 tristate "S5P/EXYNOS SoC series MIPI CSI-2/DSI PHY driver" 20 + depends on HAS_IOMEM 21 + depends on ARCH_S5PV210 || ARCH_EXYNOS || COMPILE_TEST 22 + select GENERIC_PHY 23 + default y if ARCH_S5PV210 || ARCH_EXYNOS 21 24 help 22 25 Support for MIPI CSI-2 and MIPI DSI DPHY found on Samsung S5P 23 26 and EXYNOS SoCs. 24 27 25 28 config PHY_MVEBU_SATA 26 29 def_bool y 27 - depends on ARCH_KIRKWOOD || ARCH_DOVE 30 + depends on ARCH_KIRKWOOD || ARCH_DOVE || MACH_DOVE 28 31 depends on OF 29 32 select GENERIC_PHY 33 + 34 + config OMAP_CONTROL_PHY 35 + tristate "OMAP CONTROL PHY Driver" 36 + help 37 + Enable this to add support for the PHY part present in the control 38 + module. This driver has API to power on the USB2 PHY and to write to 39 + the mailbox. The mailbox is present only in omap4 and the register to 40 + power on the USB2 PHY is present in OMAP4 and OMAP5. OMAP5 has an 41 + additional register to power on USB3 PHY/SATA PHY/PCIE PHY 42 + (PIPE3 PHY). 30 43 31 44 config OMAP_USB2 32 45 tristate "OMAP USB2 PHY Driver" 33 46 depends on ARCH_OMAP2PLUS 34 47 depends on USB_PHY 35 48 select GENERIC_PHY 36 - select OMAP_CONTROL_USB 49 + select OMAP_CONTROL_PHY 50 + depends on OMAP_OCP2SCP 37 51 help 38 52 Enable this to support the transceiver that is part of SOC. This 39 53 driver takes care of all the PHY functionality apart from comparator. 40 54 The USB OTG controller communicates with the comparator using this 41 55 driver. 56 + 57 + config TI_PIPE3 58 + tristate "TI PIPE3 PHY Driver" 59 + depends on ARCH_OMAP2PLUS || COMPILE_TEST 60 + select GENERIC_PHY 61 + select OMAP_CONTROL_PHY 62 + depends on OMAP_OCP2SCP 63 + help 64 + Enable this to support the PIPE3 PHY that is part of TI SOCs. This 65 + driver takes care of all the PHY functionality apart from comparator. 
66 + This driver interacts with the "OMAP Control PHY Driver" to power 67 + on/off the PHY. 42 68 43 69 config TWL4030_USB 44 70 tristate "TWL4030 USB Transceiver Driver" ··· 80 54 config PHY_EXYNOS_DP_VIDEO 81 55 tristate "EXYNOS SoC series Display Port PHY driver" 82 56 depends on OF 57 + depends on ARCH_EXYNOS || COMPILE_TEST 58 + default ARCH_EXYNOS 83 59 select GENERIC_PHY 84 60 help 85 61 Support for Display Port PHY found on Samsung EXYNOS SoCs. ··· 92 64 depends on HAS_IOMEM 93 65 help 94 66 Enable this to support the Broadcom Kona USB 2.0 PHY. 67 + 68 + config PHY_EXYNOS5250_SATA 69 + tristate "Exynos5250 Sata SerDes/PHY driver" 70 + depends on SOC_EXYNOS5250 71 + depends on HAS_IOMEM 72 + depends on OF 73 + select GENERIC_PHY 74 + select I2C 75 + select I2C_S3C2410 76 + select MFD_SYSCON 77 + help 78 + Enable this to support SATA SerDes/Phy found on Samsung's 79 + Exynos5250 based SoCs.This SerDes/Phy supports SATA 1.5 Gb/s, 80 + SATA 3.0 Gb/s, SATA 6.0 Gb/s speeds. It supports one SATA host 81 + port to accept one SATA device. 82 + 83 + config PHY_SUN4I_USB 84 + tristate "Allwinner sunxi SoC USB PHY driver" 85 + depends on ARCH_SUNXI && HAS_IOMEM && OF 86 + select GENERIC_PHY 87 + help 88 + Enable this to support the transceiver that is part of Allwinner 89 + sunxi SoCs. 90 + 91 + This driver controls the entire USB PHY block, both the USB OTG 92 + parts, as well as the 2 regular USB 2 host PHYs. 93 + 94 + config PHY_SAMSUNG_USB2 95 + tristate "Samsung USB 2.0 PHY driver" 96 + select GENERIC_PHY 97 + select MFD_SYSCON 98 + help 99 + Enable this to support the Samsung USB 2.0 PHY driver for Samsung 100 + SoCs. This driver provides the interface for USB 2.0 PHY. Support for 101 + particular SoCs has to be enabled in addition to this driver. Number 102 + and type of supported phys depends on the SoC. 
103 + 104 + config PHY_EXYNOS4210_USB2 105 + bool "Support for Exynos 4210" 106 + depends on PHY_SAMSUNG_USB2 107 + depends on CPU_EXYNOS4210 108 + help 109 + Enable USB PHY support for Exynos 4210. This option requires that 110 + Samsung USB 2.0 PHY driver is enabled and means that support for this 111 + particular SoC is compiled in the driver. In case of Exynos 4210 four 112 + phys are available - device, host, HSIC0 and HSIC1. 113 + 114 + config PHY_EXYNOS4X12_USB2 115 + bool "Support for Exynos 4x12" 116 + depends on PHY_SAMSUNG_USB2 117 + depends on (SOC_EXYNOS4212 || SOC_EXYNOS4412) 118 + help 119 + Enable USB PHY support for Exynos 4x12. This option requires that 120 + Samsung USB 2.0 PHY driver is enabled and means that support for this 121 + particular SoC is compiled in the driver. In case of Exynos 4x12 four 122 + phys are available - device, host, HSIC0 and HSIC1. 123 + 124 + config PHY_EXYNOS5250_USB2 125 + bool "Support for Exynos 5250" 126 + depends on PHY_SAMSUNG_USB2 127 + depends on SOC_EXYNOS5250 128 + help 129 + Enable USB PHY support for Exynos 5250. This option requires that 130 + Samsung USB 2.0 PHY driver is enabled and means that support for this 131 + particular SoC is compiled in the driver. In case of Exynos 5250 four 132 + phys are available - device, host, HSIC0 and HSIC. 133 + 134 + config PHY_XGENE 135 + tristate "APM X-Gene 15Gbps PHY support" 136 + depends on HAS_IOMEM && OF && (ARM64 || COMPILE_TEST) 137 + select GENERIC_PHY 138 + help 139 + This option enables support for APM X-Gene SoC multi-purpose PHY. 95 140 96 141 endmenu
+9
drivers/phy/Makefile
··· 7 7 obj-$(CONFIG_PHY_EXYNOS_DP_VIDEO) += phy-exynos-dp-video.o 8 8 obj-$(CONFIG_PHY_EXYNOS_MIPI_VIDEO) += phy-exynos-mipi-video.o 9 9 obj-$(CONFIG_PHY_MVEBU_SATA) += phy-mvebu-sata.o 10 + obj-$(CONFIG_OMAP_CONTROL_PHY) += phy-omap-control.o 10 11 obj-$(CONFIG_OMAP_USB2) += phy-omap-usb2.o 12 + obj-$(CONFIG_TI_PIPE3) += phy-ti-pipe3.o 11 13 obj-$(CONFIG_TWL4030_USB) += phy-twl4030-usb.o 14 + obj-$(CONFIG_PHY_EXYNOS5250_SATA) += phy-exynos5250-sata.o 15 + obj-$(CONFIG_PHY_SUN4I_USB) += phy-sun4i-usb.o 16 + obj-$(CONFIG_PHY_SAMSUNG_USB2) += phy-samsung-usb2.o 17 + obj-$(CONFIG_PHY_EXYNOS4210_USB2) += phy-exynos4210-usb2.o 18 + obj-$(CONFIG_PHY_EXYNOS4X12_USB2) += phy-exynos4x12-usb2.o 19 + obj-$(CONFIG_PHY_EXYNOS5250_USB2) += phy-exynos5250-usb2.o 20 + obj-$(CONFIG_PHY_XGENE) += phy-xgene.o
+1 -3
drivers/phy/phy-bcm-kona-usb2.c
··· 128 128 129 129 phy_provider = devm_of_phy_provider_register(dev, 130 130 of_phy_simple_xlate); 131 - if (IS_ERR(phy_provider)) 132 - return PTR_ERR(phy_provider); 133 131 134 - return 0; 132 + return PTR_ERR_OR_ZERO(phy_provider); 135 133 } 136 134 137 135 static const struct of_device_id bcm_kona_usb2_dt_ids[] = {
+67 -9
drivers/phy/phy-core.c
··· 274 274 EXPORT_SYMBOL_GPL(phy_power_off); 275 275 276 276 /** 277 - * of_phy_get() - lookup and obtain a reference to a phy by phandle 278 - * @dev: device that requests this phy 277 + * _of_phy_get() - lookup and obtain a reference to a phy by phandle 278 + * @np: device_node for which to get the phy 279 279 * @index: the index of the phy 280 280 * 281 281 * Returns the phy associated with the given phandle value, ··· 284 284 * not yet loaded. This function uses of_xlate call back function provided 285 285 * while registering the phy_provider to find the phy instance. 286 286 */ 287 - static struct phy *of_phy_get(struct device *dev, int index) 287 + static struct phy *_of_phy_get(struct device_node *np, int index) 288 288 { 289 289 int ret; 290 290 struct phy_provider *phy_provider; 291 291 struct phy *phy = NULL; 292 292 struct of_phandle_args args; 293 293 294 - ret = of_parse_phandle_with_args(dev->of_node, "phys", "#phy-cells", 294 + ret = of_parse_phandle_with_args(np, "phys", "#phy-cells", 295 295 index, &args); 296 - if (ret) { 297 - dev_dbg(dev, "failed to get phy in %s node\n", 298 - dev->of_node->full_name); 296 + if (ret) 299 297 return ERR_PTR(-ENODEV); 300 - } 301 298 302 299 mutex_lock(&phy_provider_mutex); 303 300 phy_provider = of_phy_provider_lookup(args.np); ··· 312 315 313 316 return phy; 314 317 } 318 + 319 + /** 320 + * of_phy_get() - lookup and obtain a reference to a phy using a device_node. 321 + * @np: device_node for which to get the phy 322 + * @con_id: name of the phy from device's point of view 323 + * 324 + * Returns the phy driver, after getting a refcount to it; or 325 + * -ENODEV if there is no such phy. The caller is responsible for 326 + * calling phy_put() to release that count. 
327 + */ 328 + struct phy *of_phy_get(struct device_node *np, const char *con_id) 329 + { 330 + struct phy *phy = NULL; 331 + int index = 0; 332 + 333 + if (con_id) 334 + index = of_property_match_string(np, "phy-names", con_id); 335 + 336 + phy = _of_phy_get(np, index); 337 + if (IS_ERR(phy)) 338 + return phy; 339 + 340 + if (!try_module_get(phy->ops->owner)) 341 + return ERR_PTR(-EPROBE_DEFER); 342 + 343 + get_device(&phy->dev); 344 + 345 + return phy; 346 + } 347 + EXPORT_SYMBOL_GPL(of_phy_get); 315 348 316 349 /** 317 350 * phy_put() - release the PHY ··· 434 407 if (dev->of_node) { 435 408 index = of_property_match_string(dev->of_node, "phy-names", 436 409 string); 437 - phy = of_phy_get(dev, index); 410 + phy = _of_phy_get(dev->of_node, index); 438 411 } else { 439 412 phy = phy_lookup(dev, string); 440 413 } ··· 524 497 return phy; 525 498 } 526 499 EXPORT_SYMBOL_GPL(devm_phy_optional_get); 500 + 501 + /** 502 + * devm_of_phy_get() - lookup and obtain a reference to a phy. 503 + * @dev: device that requests this phy 504 + * @np: node containing the phy 505 + * @con_id: name of the phy from device's point of view 506 + * 507 + * Gets the phy using of_phy_get(), and associates a device with it using 508 + * devres. On driver detach, release function is invoked on the devres data, 509 + * then, devres data is freed. 510 + */ 511 + struct phy *devm_of_phy_get(struct device *dev, struct device_node *np, 512 + const char *con_id) 513 + { 514 + struct phy **ptr, *phy; 515 + 516 + ptr = devres_alloc(devm_phy_release, sizeof(*ptr), GFP_KERNEL); 517 + if (!ptr) 518 + return ERR_PTR(-ENOMEM); 519 + 520 + phy = of_phy_get(np, con_id); 521 + if (!IS_ERR(phy)) { 522 + *ptr = phy; 523 + devres_add(dev, ptr); 524 + } else { 525 + devres_free(ptr); 526 + } 527 + 528 + return phy; 529 + } 530 + EXPORT_SYMBOL_GPL(devm_of_phy_get); 527 531 528 532 /** 529 533 * phy_create() - create a new phy
+261
drivers/phy/phy-exynos4210-usb2.c
··· 1 + /* 2 + * Samsung SoC USB 1.1/2.0 PHY driver - Exynos 4210 support 3 + * 4 + * Copyright (C) 2013 Samsung Electronics Co., Ltd. 5 + * Author: Kamil Debski <k.debski@samsung.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/delay.h> 13 + #include <linux/io.h> 14 + #include <linux/phy/phy.h> 15 + #include <linux/regmap.h> 16 + #include "phy-samsung-usb2.h" 17 + 18 + /* Exynos USB PHY registers */ 19 + 20 + /* PHY power control */ 21 + #define EXYNOS_4210_UPHYPWR 0x0 22 + 23 + #define EXYNOS_4210_UPHYPWR_PHY0_SUSPEND BIT(0) 24 + #define EXYNOS_4210_UPHYPWR_PHY0_PWR BIT(3) 25 + #define EXYNOS_4210_UPHYPWR_PHY0_OTG_PWR BIT(4) 26 + #define EXYNOS_4210_UPHYPWR_PHY0_SLEEP BIT(5) 27 + #define EXYNOS_4210_UPHYPWR_PHY0 ( \ 28 + EXYNOS_4210_UPHYPWR_PHY0_SUSPEND | \ 29 + EXYNOS_4210_UPHYPWR_PHY0_PWR | \ 30 + EXYNOS_4210_UPHYPWR_PHY0_OTG_PWR | \ 31 + EXYNOS_4210_UPHYPWR_PHY0_SLEEP) 32 + 33 + #define EXYNOS_4210_UPHYPWR_PHY1_SUSPEND BIT(6) 34 + #define EXYNOS_4210_UPHYPWR_PHY1_PWR BIT(7) 35 + #define EXYNOS_4210_UPHYPWR_PHY1_SLEEP BIT(8) 36 + #define EXYNOS_4210_UPHYPWR_PHY1 ( \ 37 + EXYNOS_4210_UPHYPWR_PHY1_SUSPEND | \ 38 + EXYNOS_4210_UPHYPWR_PHY1_PWR | \ 39 + EXYNOS_4210_UPHYPWR_PHY1_SLEEP) 40 + 41 + #define EXYNOS_4210_UPHYPWR_HSIC0_SUSPEND BIT(9) 42 + #define EXYNOS_4210_UPHYPWR_HSIC0_SLEEP BIT(10) 43 + #define EXYNOS_4210_UPHYPWR_HSIC0 ( \ 44 + EXYNOS_4210_UPHYPWR_HSIC0_SUSPEND | \ 45 + EXYNOS_4210_UPHYPWR_HSIC0_SLEEP) 46 + 47 + #define EXYNOS_4210_UPHYPWR_HSIC1_SUSPEND BIT(11) 48 + #define EXYNOS_4210_UPHYPWR_HSIC1_SLEEP BIT(12) 49 + #define EXYNOS_4210_UPHYPWR_HSIC1 ( \ 50 + EXYNOS_4210_UPHYPWR_HSIC1_SUSPEND | \ 51 + EXYNOS_4210_UPHYPWR_HSIC1_SLEEP) 52 + 53 + /* PHY clock control */ 54 + #define EXYNOS_4210_UPHYCLK 0x4 55 + 56 + #define EXYNOS_4210_UPHYCLK_PHYFSEL_MASK (0x3 << 
0) 57 + #define EXYNOS_4210_UPHYCLK_PHYFSEL_OFFSET 0 58 + #define EXYNOS_4210_UPHYCLK_PHYFSEL_48MHZ (0x0 << 0) 59 + #define EXYNOS_4210_UPHYCLK_PHYFSEL_24MHZ (0x3 << 0) 60 + #define EXYNOS_4210_UPHYCLK_PHYFSEL_12MHZ (0x2 << 0) 61 + 62 + #define EXYNOS_4210_UPHYCLK_PHY0_ID_PULLUP BIT(2) 63 + #define EXYNOS_4210_UPHYCLK_PHY0_COMMON_ON BIT(4) 64 + #define EXYNOS_4210_UPHYCLK_PHY1_COMMON_ON BIT(7) 65 + 66 + /* PHY reset control */ 67 + #define EXYNOS_4210_UPHYRST 0x8 68 + 69 + #define EXYNOS_4210_URSTCON_PHY0 BIT(0) 70 + #define EXYNOS_4210_URSTCON_OTG_HLINK BIT(1) 71 + #define EXYNOS_4210_URSTCON_OTG_PHYLINK BIT(2) 72 + #define EXYNOS_4210_URSTCON_PHY1_ALL BIT(3) 73 + #define EXYNOS_4210_URSTCON_PHY1_P0 BIT(4) 74 + #define EXYNOS_4210_URSTCON_PHY1_P1P2 BIT(5) 75 + #define EXYNOS_4210_URSTCON_HOST_LINK_ALL BIT(6) 76 + #define EXYNOS_4210_URSTCON_HOST_LINK_P0 BIT(7) 77 + #define EXYNOS_4210_URSTCON_HOST_LINK_P1 BIT(8) 78 + #define EXYNOS_4210_URSTCON_HOST_LINK_P2 BIT(9) 79 + 80 + /* Isolation, configured in the power management unit */ 81 + #define EXYNOS_4210_USB_ISOL_DEVICE_OFFSET 0x704 82 + #define EXYNOS_4210_USB_ISOL_DEVICE BIT(0) 83 + #define EXYNOS_4210_USB_ISOL_HOST_OFFSET 0x708 84 + #define EXYNOS_4210_USB_ISOL_HOST BIT(0) 85 + 86 + /* USBYPHY1 Floating prevention */ 87 + #define EXYNOS_4210_UPHY1CON 0x34 88 + #define EXYNOS_4210_UPHY1CON_FLOAT_PREVENTION 0x1 89 + 90 + /* Mode switching SUB Device <-> Host */ 91 + #define EXYNOS_4210_MODE_SWITCH_OFFSET 0x21c 92 + #define EXYNOS_4210_MODE_SWITCH_MASK 1 93 + #define EXYNOS_4210_MODE_SWITCH_DEVICE 0 94 + #define EXYNOS_4210_MODE_SWITCH_HOST 1 95 + 96 + enum exynos4210_phy_id { 97 + EXYNOS4210_DEVICE, 98 + EXYNOS4210_HOST, 99 + EXYNOS4210_HSIC0, 100 + EXYNOS4210_HSIC1, 101 + EXYNOS4210_NUM_PHYS, 102 + }; 103 + 104 + /* 105 + * exynos4210_rate_to_clk() converts the supplied clock rate to the value that 106 + * can be written to the phy register. 
107 + */ 108 + static int exynos4210_rate_to_clk(unsigned long rate, u32 *reg) 109 + { 110 + switch (rate) { 111 + case 12 * MHZ: 112 + *reg = EXYNOS_4210_UPHYCLK_PHYFSEL_12MHZ; 113 + break; 114 + case 24 * MHZ: 115 + *reg = EXYNOS_4210_UPHYCLK_PHYFSEL_24MHZ; 116 + break; 117 + case 48 * MHZ: 118 + *reg = EXYNOS_4210_UPHYCLK_PHYFSEL_48MHZ; 119 + break; 120 + default: 121 + return -EINVAL; 122 + } 123 + 124 + return 0; 125 + } 126 + 127 + static void exynos4210_isol(struct samsung_usb2_phy_instance *inst, bool on) 128 + { 129 + struct samsung_usb2_phy_driver *drv = inst->drv; 130 + u32 offset; 131 + u32 mask; 132 + 133 + switch (inst->cfg->id) { 134 + case EXYNOS4210_DEVICE: 135 + offset = EXYNOS_4210_USB_ISOL_DEVICE_OFFSET; 136 + mask = EXYNOS_4210_USB_ISOL_DEVICE; 137 + break; 138 + case EXYNOS4210_HOST: 139 + offset = EXYNOS_4210_USB_ISOL_HOST_OFFSET; 140 + mask = EXYNOS_4210_USB_ISOL_HOST; 141 + break; 142 + default: 143 + return; 144 + }; 145 + 146 + regmap_update_bits(drv->reg_pmu, offset, mask, on ? 
0 : mask); 147 + } 148 + 149 + static void exynos4210_phy_pwr(struct samsung_usb2_phy_instance *inst, bool on) 150 + { 151 + struct samsung_usb2_phy_driver *drv = inst->drv; 152 + u32 rstbits = 0; 153 + u32 phypwr = 0; 154 + u32 rst; 155 + u32 pwr; 156 + u32 clk; 157 + 158 + switch (inst->cfg->id) { 159 + case EXYNOS4210_DEVICE: 160 + phypwr = EXYNOS_4210_UPHYPWR_PHY0; 161 + rstbits = EXYNOS_4210_URSTCON_PHY0; 162 + break; 163 + case EXYNOS4210_HOST: 164 + phypwr = EXYNOS_4210_UPHYPWR_PHY1; 165 + rstbits = EXYNOS_4210_URSTCON_PHY1_ALL | 166 + EXYNOS_4210_URSTCON_PHY1_P0 | 167 + EXYNOS_4210_URSTCON_PHY1_P1P2 | 168 + EXYNOS_4210_URSTCON_HOST_LINK_ALL | 169 + EXYNOS_4210_URSTCON_HOST_LINK_P0; 170 + writel(on, drv->reg_phy + EXYNOS_4210_UPHY1CON); 171 + break; 172 + case EXYNOS4210_HSIC0: 173 + phypwr = EXYNOS_4210_UPHYPWR_HSIC0; 174 + rstbits = EXYNOS_4210_URSTCON_PHY1_P1P2 | 175 + EXYNOS_4210_URSTCON_HOST_LINK_P1; 176 + break; 177 + case EXYNOS4210_HSIC1: 178 + phypwr = EXYNOS_4210_UPHYPWR_HSIC1; 179 + rstbits = EXYNOS_4210_URSTCON_PHY1_P1P2 | 180 + EXYNOS_4210_URSTCON_HOST_LINK_P2; 181 + break; 182 + }; 183 + 184 + if (on) { 185 + clk = readl(drv->reg_phy + EXYNOS_4210_UPHYCLK); 186 + clk &= ~EXYNOS_4210_UPHYCLK_PHYFSEL_MASK; 187 + clk |= drv->ref_reg_val << EXYNOS_4210_UPHYCLK_PHYFSEL_OFFSET; 188 + writel(clk, drv->reg_phy + EXYNOS_4210_UPHYCLK); 189 + 190 + pwr = readl(drv->reg_phy + EXYNOS_4210_UPHYPWR); 191 + pwr &= ~phypwr; 192 + writel(pwr, drv->reg_phy + EXYNOS_4210_UPHYPWR); 193 + 194 + rst = readl(drv->reg_phy + EXYNOS_4210_UPHYRST); 195 + rst |= rstbits; 196 + writel(rst, drv->reg_phy + EXYNOS_4210_UPHYRST); 197 + udelay(10); 198 + rst &= ~rstbits; 199 + writel(rst, drv->reg_phy + EXYNOS_4210_UPHYRST); 200 + /* The following delay is necessary for the reset sequence to be 201 + * completed */ 202 + udelay(80); 203 + } else { 204 + pwr = readl(drv->reg_phy + EXYNOS_4210_UPHYPWR); 205 + pwr |= phypwr; 206 + writel(pwr, drv->reg_phy + EXYNOS_4210_UPHYPWR); 
207 + } 208 + } 209 + 210 + static int exynos4210_power_on(struct samsung_usb2_phy_instance *inst) 211 + { 212 + /* Order of initialisation is important - first power then isolation */ 213 + exynos4210_phy_pwr(inst, 1); 214 + exynos4210_isol(inst, 0); 215 + 216 + return 0; 217 + } 218 + 219 + static int exynos4210_power_off(struct samsung_usb2_phy_instance *inst) 220 + { 221 + exynos4210_isol(inst, 1); 222 + exynos4210_phy_pwr(inst, 0); 223 + 224 + return 0; 225 + } 226 + 227 + 228 + static const struct samsung_usb2_common_phy exynos4210_phys[] = { 229 + { 230 + .label = "device", 231 + .id = EXYNOS4210_DEVICE, 232 + .power_on = exynos4210_power_on, 233 + .power_off = exynos4210_power_off, 234 + }, 235 + { 236 + .label = "host", 237 + .id = EXYNOS4210_HOST, 238 + .power_on = exynos4210_power_on, 239 + .power_off = exynos4210_power_off, 240 + }, 241 + { 242 + .label = "hsic0", 243 + .id = EXYNOS4210_HSIC0, 244 + .power_on = exynos4210_power_on, 245 + .power_off = exynos4210_power_off, 246 + }, 247 + { 248 + .label = "hsic1", 249 + .id = EXYNOS4210_HSIC1, 250 + .power_on = exynos4210_power_on, 251 + .power_off = exynos4210_power_off, 252 + }, 253 + {}, 254 + }; 255 + 256 + const struct samsung_usb2_phy_config exynos4210_usb2_phy_config = { 257 + .has_mode_switch = 0, 258 + .num_phys = EXYNOS4210_NUM_PHYS, 259 + .phys = exynos4210_phys, 260 + .rate_to_clk = exynos4210_rate_to_clk, 261 + };
+328
drivers/phy/phy-exynos4x12-usb2.c
··· 1 + /* 2 + * Samsung SoC USB 1.1/2.0 PHY driver - Exynos 4x12 support 3 + * 4 + * Copyright (C) 2013 Samsung Electronics Co., Ltd. 5 + * Author: Kamil Debski <k.debski@samsung.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/delay.h> 13 + #include <linux/io.h> 14 + #include <linux/phy/phy.h> 15 + #include <linux/regmap.h> 16 + #include "phy-samsung-usb2.h" 17 + 18 + /* Exynos USB PHY registers */ 19 + 20 + /* PHY power control */ 21 + #define EXYNOS_4x12_UPHYPWR 0x0 22 + 23 + #define EXYNOS_4x12_UPHYPWR_PHY0_SUSPEND BIT(0) 24 + #define EXYNOS_4x12_UPHYPWR_PHY0_PWR BIT(3) 25 + #define EXYNOS_4x12_UPHYPWR_PHY0_OTG_PWR BIT(4) 26 + #define EXYNOS_4x12_UPHYPWR_PHY0_SLEEP BIT(5) 27 + #define EXYNOS_4x12_UPHYPWR_PHY0 ( \ 28 + EXYNOS_4x12_UPHYPWR_PHY0_SUSPEND | \ 29 + EXYNOS_4x12_UPHYPWR_PHY0_PWR | \ 30 + EXYNOS_4x12_UPHYPWR_PHY0_OTG_PWR | \ 31 + EXYNOS_4x12_UPHYPWR_PHY0_SLEEP) 32 + 33 + #define EXYNOS_4x12_UPHYPWR_PHY1_SUSPEND BIT(6) 34 + #define EXYNOS_4x12_UPHYPWR_PHY1_PWR BIT(7) 35 + #define EXYNOS_4x12_UPHYPWR_PHY1_SLEEP BIT(8) 36 + #define EXYNOS_4x12_UPHYPWR_PHY1 ( \ 37 + EXYNOS_4x12_UPHYPWR_PHY1_SUSPEND | \ 38 + EXYNOS_4x12_UPHYPWR_PHY1_PWR | \ 39 + EXYNOS_4x12_UPHYPWR_PHY1_SLEEP) 40 + 41 + #define EXYNOS_4x12_UPHYPWR_HSIC0_SUSPEND BIT(9) 42 + #define EXYNOS_4x12_UPHYPWR_HSIC0_PWR BIT(10) 43 + #define EXYNOS_4x12_UPHYPWR_HSIC0_SLEEP BIT(11) 44 + #define EXYNOS_4x12_UPHYPWR_HSIC0 ( \ 45 + EXYNOS_4x12_UPHYPWR_HSIC0_SUSPEND | \ 46 + EXYNOS_4x12_UPHYPWR_HSIC0_PWR | \ 47 + EXYNOS_4x12_UPHYPWR_HSIC0_SLEEP) 48 + 49 + #define EXYNOS_4x12_UPHYPWR_HSIC1_SUSPEND BIT(12) 50 + #define EXYNOS_4x12_UPHYPWR_HSIC1_PWR BIT(13) 51 + #define EXYNOS_4x12_UPHYPWR_HSIC1_SLEEP BIT(14) 52 + #define EXYNOS_4x12_UPHYPWR_HSIC1 ( \ 53 + EXYNOS_4x12_UPHYPWR_HSIC1_SUSPEND | \ 54 + 
EXYNOS_4x12_UPHYPWR_HSIC1_PWR | \ 55 + EXYNOS_4x12_UPHYPWR_HSIC1_SLEEP) 56 + 57 + /* PHY clock control */ 58 + #define EXYNOS_4x12_UPHYCLK 0x4 59 + 60 + #define EXYNOS_4x12_UPHYCLK_PHYFSEL_MASK (0x7 << 0) 61 + #define EXYNOS_4x12_UPHYCLK_PHYFSEL_OFFSET 0 62 + #define EXYNOS_4x12_UPHYCLK_PHYFSEL_9MHZ6 (0x0 << 0) 63 + #define EXYNOS_4x12_UPHYCLK_PHYFSEL_10MHZ (0x1 << 0) 64 + #define EXYNOS_4x12_UPHYCLK_PHYFSEL_12MHZ (0x2 << 0) 65 + #define EXYNOS_4x12_UPHYCLK_PHYFSEL_19MHZ2 (0x3 << 0) 66 + #define EXYNOS_4x12_UPHYCLK_PHYFSEL_20MHZ (0x4 << 0) 67 + #define EXYNOS_4x12_UPHYCLK_PHYFSEL_24MHZ (0x5 << 0) 68 + #define EXYNOS_4x12_UPHYCLK_PHYFSEL_50MHZ (0x7 << 0) 69 + 70 + #define EXYNOS_4x12_UPHYCLK_PHY0_ID_PULLUP BIT(3) 71 + #define EXYNOS_4x12_UPHYCLK_PHY0_COMMON_ON BIT(4) 72 + #define EXYNOS_4x12_UPHYCLK_PHY1_COMMON_ON BIT(7) 73 + 74 + #define EXYNOS_4x12_UPHYCLK_HSIC_REFCLK_MASK (0x7f << 10) 75 + #define EXYNOS_4x12_UPHYCLK_HSIC_REFCLK_OFFSET 10 76 + #define EXYNOS_4x12_UPHYCLK_HSIC_REFCLK_12MHZ (0x24 << 10) 77 + #define EXYNOS_4x12_UPHYCLK_HSIC_REFCLK_15MHZ (0x1c << 10) 78 + #define EXYNOS_4x12_UPHYCLK_HSIC_REFCLK_16MHZ (0x1a << 10) 79 + #define EXYNOS_4x12_UPHYCLK_HSIC_REFCLK_19MHZ2 (0x15 << 10) 80 + #define EXYNOS_4x12_UPHYCLK_HSIC_REFCLK_20MHZ (0x14 << 10) 81 + 82 + /* PHY reset control */ 83 + #define EXYNOS_4x12_UPHYRST 0x8 84 + 85 + #define EXYNOS_4x12_URSTCON_PHY0 BIT(0) 86 + #define EXYNOS_4x12_URSTCON_OTG_HLINK BIT(1) 87 + #define EXYNOS_4x12_URSTCON_OTG_PHYLINK BIT(2) 88 + #define EXYNOS_4x12_URSTCON_HOST_PHY BIT(3) 89 + #define EXYNOS_4x12_URSTCON_PHY1 BIT(4) 90 + #define EXYNOS_4x12_URSTCON_HSIC0 BIT(5) 91 + #define EXYNOS_4x12_URSTCON_HSIC1 BIT(6) 92 + #define EXYNOS_4x12_URSTCON_HOST_LINK_ALL BIT(7) 93 + #define EXYNOS_4x12_URSTCON_HOST_LINK_P0 BIT(8) 94 + #define EXYNOS_4x12_URSTCON_HOST_LINK_P1 BIT(9) 95 + #define EXYNOS_4x12_URSTCON_HOST_LINK_P2 BIT(10) 96 + 97 + /* Isolation, configured in the power management unit */ 98 + #define 
EXYNOS_4x12_USB_ISOL_OFFSET 0x704 99 + #define EXYNOS_4x12_USB_ISOL_OTG BIT(0) 100 + #define EXYNOS_4x12_USB_ISOL_HSIC0_OFFSET 0x708 101 + #define EXYNOS_4x12_USB_ISOL_HSIC0 BIT(0) 102 + #define EXYNOS_4x12_USB_ISOL_HSIC1_OFFSET 0x70c 103 + #define EXYNOS_4x12_USB_ISOL_HSIC1 BIT(0) 104 + 105 + /* Mode switching SUB Device <-> Host */ 106 + #define EXYNOS_4x12_MODE_SWITCH_OFFSET 0x21c 107 + #define EXYNOS_4x12_MODE_SWITCH_MASK 1 108 + #define EXYNOS_4x12_MODE_SWITCH_DEVICE 0 109 + #define EXYNOS_4x12_MODE_SWITCH_HOST 1 110 + 111 + enum exynos4x12_phy_id { 112 + EXYNOS4x12_DEVICE, 113 + EXYNOS4x12_HOST, 114 + EXYNOS4x12_HSIC0, 115 + EXYNOS4x12_HSIC1, 116 + EXYNOS4x12_NUM_PHYS, 117 + }; 118 + 119 + /* 120 + * exynos4x12_rate_to_clk() converts the supplied clock rate to the value that 121 + * can be written to the phy register. 122 + */ 123 + static int exynos4x12_rate_to_clk(unsigned long rate, u32 *reg) 124 + { 125 + /* EXYNOS_4x12_UPHYCLK_PHYFSEL_MASK */ 126 + 127 + switch (rate) { 128 + case 9600 * KHZ: 129 + *reg = EXYNOS_4x12_UPHYCLK_PHYFSEL_9MHZ6; 130 + break; 131 + case 10 * MHZ: 132 + *reg = EXYNOS_4x12_UPHYCLK_PHYFSEL_10MHZ; 133 + break; 134 + case 12 * MHZ: 135 + *reg = EXYNOS_4x12_UPHYCLK_PHYFSEL_12MHZ; 136 + break; 137 + case 19200 * KHZ: 138 + *reg = EXYNOS_4x12_UPHYCLK_PHYFSEL_19MHZ2; 139 + break; 140 + case 20 * MHZ: 141 + *reg = EXYNOS_4x12_UPHYCLK_PHYFSEL_20MHZ; 142 + break; 143 + case 24 * MHZ: 144 + *reg = EXYNOS_4x12_UPHYCLK_PHYFSEL_24MHZ; 145 + break; 146 + case 50 * MHZ: 147 + *reg = EXYNOS_4x12_UPHYCLK_PHYFSEL_50MHZ; 148 + break; 149 + default: 150 + return -EINVAL; 151 + } 152 + 153 + return 0; 154 + } 155 + 156 + static void exynos4x12_isol(struct samsung_usb2_phy_instance *inst, bool on) 157 + { 158 + struct samsung_usb2_phy_driver *drv = inst->drv; 159 + u32 offset; 160 + u32 mask; 161 + 162 + switch (inst->cfg->id) { 163 + case EXYNOS4x12_DEVICE: 164 + case EXYNOS4x12_HOST: 165 + offset = EXYNOS_4x12_USB_ISOL_OFFSET; 166 + mask = 
EXYNOS_4x12_USB_ISOL_OTG; 167 + break; 168 + case EXYNOS4x12_HSIC0: 169 + offset = EXYNOS_4x12_USB_ISOL_HSIC0_OFFSET; 170 + mask = EXYNOS_4x12_USB_ISOL_HSIC0; 171 + break; 172 + case EXYNOS4x12_HSIC1: 173 + offset = EXYNOS_4x12_USB_ISOL_HSIC1_OFFSET; 174 + mask = EXYNOS_4x12_USB_ISOL_HSIC1; 175 + break; 176 + default: 177 + return; 178 + }; 179 + 180 + regmap_update_bits(drv->reg_pmu, offset, mask, on ? 0 : mask); 181 + } 182 + 183 + static void exynos4x12_setup_clk(struct samsung_usb2_phy_instance *inst) 184 + { 185 + struct samsung_usb2_phy_driver *drv = inst->drv; 186 + u32 clk; 187 + 188 + clk = readl(drv->reg_phy + EXYNOS_4x12_UPHYCLK); 189 + clk &= ~EXYNOS_4x12_UPHYCLK_PHYFSEL_MASK; 190 + clk |= drv->ref_reg_val << EXYNOS_4x12_UPHYCLK_PHYFSEL_OFFSET; 191 + writel(clk, drv->reg_phy + EXYNOS_4x12_UPHYCLK); 192 + } 193 + 194 + static void exynos4x12_phy_pwr(struct samsung_usb2_phy_instance *inst, bool on) 195 + { 196 + struct samsung_usb2_phy_driver *drv = inst->drv; 197 + u32 rstbits = 0; 198 + u32 phypwr = 0; 199 + u32 rst; 200 + u32 pwr; 201 + u32 mode = 0; 202 + u32 switch_mode = 0; 203 + 204 + switch (inst->cfg->id) { 205 + case EXYNOS4x12_DEVICE: 206 + phypwr = EXYNOS_4x12_UPHYPWR_PHY0; 207 + rstbits = EXYNOS_4x12_URSTCON_PHY0; 208 + mode = EXYNOS_4x12_MODE_SWITCH_DEVICE; 209 + switch_mode = 1; 210 + break; 211 + case EXYNOS4x12_HOST: 212 + phypwr = EXYNOS_4x12_UPHYPWR_PHY1; 213 + rstbits = EXYNOS_4x12_URSTCON_HOST_PHY; 214 + mode = EXYNOS_4x12_MODE_SWITCH_HOST; 215 + switch_mode = 1; 216 + break; 217 + case EXYNOS4x12_HSIC0: 218 + phypwr = EXYNOS_4x12_UPHYPWR_HSIC0; 219 + rstbits = EXYNOS_4x12_URSTCON_HSIC1 | 220 + EXYNOS_4x12_URSTCON_HOST_LINK_P0 | 221 + EXYNOS_4x12_URSTCON_HOST_PHY; 222 + break; 223 + case EXYNOS4x12_HSIC1: 224 + phypwr = EXYNOS_4x12_UPHYPWR_HSIC1; 225 + rstbits = EXYNOS_4x12_URSTCON_HSIC1 | 226 + EXYNOS_4x12_URSTCON_HOST_LINK_P1; 227 + break; 228 + }; 229 + 230 + if (on) { 231 + if (switch_mode) 232 + regmap_update_bits(drv->reg_sys, 
233 + EXYNOS_4x12_MODE_SWITCH_OFFSET, 234 + EXYNOS_4x12_MODE_SWITCH_MASK, mode); 235 + 236 + pwr = readl(drv->reg_phy + EXYNOS_4x12_UPHYPWR); 237 + pwr &= ~phypwr; 238 + writel(pwr, drv->reg_phy + EXYNOS_4x12_UPHYPWR); 239 + 240 + rst = readl(drv->reg_phy + EXYNOS_4x12_UPHYRST); 241 + rst |= rstbits; 242 + writel(rst, drv->reg_phy + EXYNOS_4x12_UPHYRST); 243 + udelay(10); 244 + rst &= ~rstbits; 245 + writel(rst, drv->reg_phy + EXYNOS_4x12_UPHYRST); 246 + /* The following delay is necessary for the reset sequence to be 247 + * completed */ 248 + udelay(80); 249 + } else { 250 + pwr = readl(drv->reg_phy + EXYNOS_4x12_UPHYPWR); 251 + pwr |= phypwr; 252 + writel(pwr, drv->reg_phy + EXYNOS_4x12_UPHYPWR); 253 + } 254 + } 255 + 256 + static int exynos4x12_power_on(struct samsung_usb2_phy_instance *inst) 257 + { 258 + struct samsung_usb2_phy_driver *drv = inst->drv; 259 + 260 + inst->enabled = 1; 261 + exynos4x12_setup_clk(inst); 262 + exynos4x12_phy_pwr(inst, 1); 263 + exynos4x12_isol(inst, 0); 264 + 265 + /* Power on the device, as it is necessary for HSIC to work */ 266 + if (inst->cfg->id == EXYNOS4x12_HSIC0) { 267 + struct samsung_usb2_phy_instance *device = 268 + &drv->instances[EXYNOS4x12_DEVICE]; 269 + exynos4x12_phy_pwr(device, 1); 270 + exynos4x12_isol(device, 0); 271 + } 272 + 273 + return 0; 274 + } 275 + 276 + static int exynos4x12_power_off(struct samsung_usb2_phy_instance *inst) 277 + { 278 + struct samsung_usb2_phy_driver *drv = inst->drv; 279 + struct samsung_usb2_phy_instance *device = 280 + &drv->instances[EXYNOS4x12_DEVICE]; 281 + 282 + inst->enabled = 0; 283 + exynos4x12_isol(inst, 1); 284 + exynos4x12_phy_pwr(inst, 0); 285 + 286 + if (inst->cfg->id == EXYNOS4x12_HSIC0 && !device->enabled) { 287 + exynos4x12_isol(device, 1); 288 + exynos4x12_phy_pwr(device, 0); 289 + } 290 + 291 + return 0; 292 + } 293 + 294 + 295 + static const struct samsung_usb2_common_phy exynos4x12_phys[] = { 296 + { 297 + .label = "device", 298 + .id = EXYNOS4x12_DEVICE, 299 + 
.power_on = exynos4x12_power_on, 300 + .power_off = exynos4x12_power_off, 301 + }, 302 + { 303 + .label = "host", 304 + .id = EXYNOS4x12_HOST, 305 + .power_on = exynos4x12_power_on, 306 + .power_off = exynos4x12_power_off, 307 + }, 308 + { 309 + .label = "hsic0", 310 + .id = EXYNOS4x12_HSIC0, 311 + .power_on = exynos4x12_power_on, 312 + .power_off = exynos4x12_power_off, 313 + }, 314 + { 315 + .label = "hsic1", 316 + .id = EXYNOS4x12_HSIC1, 317 + .power_on = exynos4x12_power_on, 318 + .power_off = exynos4x12_power_off, 319 + }, 320 + {}, 321 + }; 322 + 323 + const struct samsung_usb2_phy_config exynos4x12_usb2_phy_config = { 324 + .has_mode_switch = 1, 325 + .num_phys = EXYNOS4x12_NUM_PHYS, 326 + .phys = exynos4x12_phys, 327 + .rate_to_clk = exynos4x12_rate_to_clk, 328 + };
+251
drivers/phy/phy-exynos5250-sata.c
··· 1 + /* 2 + * Samsung SATA SerDes(PHY) driver 3 + * 4 + * Copyright (C) 2013 Samsung Electronics Co., Ltd. 5 + * Authors: Girish K S <ks.giri@samsung.com> 6 + * Yuvaraj Kumar C D <yuvaraj.cd@samsung.com> 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License version 2 as 10 + * published by the Free Software Foundation. 11 + */ 12 + 13 + #include <linux/clk.h> 14 + #include <linux/delay.h> 15 + #include <linux/io.h> 16 + #include <linux/i2c.h> 17 + #include <linux/kernel.h> 18 + #include <linux/module.h> 19 + #include <linux/of.h> 20 + #include <linux/of_address.h> 21 + #include <linux/phy/phy.h> 22 + #include <linux/platform_device.h> 23 + #include <linux/regmap.h> 24 + #include <linux/spinlock.h> 25 + #include <linux/mfd/syscon.h> 26 + 27 + #define SATAPHY_CONTROL_OFFSET 0x0724 28 + #define EXYNOS5_SATAPHY_PMU_ENABLE BIT(0) 29 + #define EXYNOS5_SATA_RESET 0x4 30 + #define RESET_GLOBAL_RST_N BIT(0) 31 + #define RESET_CMN_RST_N BIT(1) 32 + #define RESET_CMN_BLOCK_RST_N BIT(2) 33 + #define RESET_CMN_I2C_RST_N BIT(3) 34 + #define RESET_TX_RX_PIPE_RST_N BIT(4) 35 + #define RESET_TX_RX_BLOCK_RST_N BIT(5) 36 + #define RESET_TX_RX_I2C_RST_N (BIT(6) | BIT(7)) 37 + #define LINK_RESET 0xf0000 38 + #define EXYNOS5_SATA_MODE0 0x10 39 + #define SATA_SPD_GEN3 BIT(1) 40 + #define EXYNOS5_SATA_CTRL0 0x14 41 + #define CTRL0_P0_PHY_CALIBRATED_SEL BIT(9) 42 + #define CTRL0_P0_PHY_CALIBRATED BIT(8) 43 + #define EXYNOS5_SATA_PHSATA_CTRLM 0xe0 44 + #define PHCTRLM_REF_RATE BIT(1) 45 + #define PHCTRLM_HIGH_SPEED BIT(0) 46 + #define EXYNOS5_SATA_PHSATA_STATM 0xf0 47 + #define PHSTATM_PLL_LOCKED BIT(0) 48 + 49 + #define PHY_PLL_TIMEOUT (usecs_to_jiffies(1000)) 50 + 51 + struct exynos_sata_phy { 52 + struct phy *phy; 53 + struct clk *phyclk; 54 + void __iomem *regs; 55 + struct regmap *pmureg; 56 + struct i2c_client *client; 57 + }; 58 + 59 + static int wait_for_reg_status(void __iomem *base, u32 reg, 
u32 checkbit, 60 + u32 status) 61 + { 62 + unsigned long timeout = jiffies + PHY_PLL_TIMEOUT; 63 + 64 + while (time_before(jiffies, timeout)) { 65 + if ((readl(base + reg) & checkbit) == status) 66 + return 0; 67 + } 68 + 69 + return -EFAULT; 70 + } 71 + 72 + static int exynos_sata_phy_power_on(struct phy *phy) 73 + { 74 + struct exynos_sata_phy *sata_phy = phy_get_drvdata(phy); 75 + 76 + return regmap_update_bits(sata_phy->pmureg, SATAPHY_CONTROL_OFFSET, 77 + EXYNOS5_SATAPHY_PMU_ENABLE, true); 78 + 79 + } 80 + 81 + static int exynos_sata_phy_power_off(struct phy *phy) 82 + { 83 + struct exynos_sata_phy *sata_phy = phy_get_drvdata(phy); 84 + 85 + return regmap_update_bits(sata_phy->pmureg, SATAPHY_CONTROL_OFFSET, 86 + EXYNOS5_SATAPHY_PMU_ENABLE, false); 87 + 88 + } 89 + 90 + static int exynos_sata_phy_init(struct phy *phy) 91 + { 92 + u32 val = 0; 93 + int ret = 0; 94 + u8 buf[] = { 0x3a, 0x0b }; 95 + struct exynos_sata_phy *sata_phy = phy_get_drvdata(phy); 96 + 97 + ret = regmap_update_bits(sata_phy->pmureg, SATAPHY_CONTROL_OFFSET, 98 + EXYNOS5_SATAPHY_PMU_ENABLE, true); 99 + if (ret != 0) 100 + dev_err(&sata_phy->phy->dev, "phy init failed\n"); 101 + 102 + writel(val, sata_phy->regs + EXYNOS5_SATA_RESET); 103 + 104 + val = readl(sata_phy->regs + EXYNOS5_SATA_RESET); 105 + val |= RESET_GLOBAL_RST_N | RESET_CMN_RST_N | RESET_CMN_BLOCK_RST_N 106 + | RESET_CMN_I2C_RST_N | RESET_TX_RX_PIPE_RST_N 107 + | RESET_TX_RX_BLOCK_RST_N | RESET_TX_RX_I2C_RST_N; 108 + writel(val, sata_phy->regs + EXYNOS5_SATA_RESET); 109 + 110 + val = readl(sata_phy->regs + EXYNOS5_SATA_RESET); 111 + val |= LINK_RESET; 112 + writel(val, sata_phy->regs + EXYNOS5_SATA_RESET); 113 + 114 + val = readl(sata_phy->regs + EXYNOS5_SATA_RESET); 115 + val |= RESET_CMN_RST_N; 116 + writel(val, sata_phy->regs + EXYNOS5_SATA_RESET); 117 + 118 + val = readl(sata_phy->regs + EXYNOS5_SATA_PHSATA_CTRLM); 119 + val &= ~PHCTRLM_REF_RATE; 120 + writel(val, sata_phy->regs + EXYNOS5_SATA_PHSATA_CTRLM); 121 + 122 + /* 
High speed enable for Gen3 */ 123 + val = readl(sata_phy->regs + EXYNOS5_SATA_PHSATA_CTRLM); 124 + val |= PHCTRLM_HIGH_SPEED; 125 + writel(val, sata_phy->regs + EXYNOS5_SATA_PHSATA_CTRLM); 126 + 127 + val = readl(sata_phy->regs + EXYNOS5_SATA_CTRL0); 128 + val |= CTRL0_P0_PHY_CALIBRATED_SEL | CTRL0_P0_PHY_CALIBRATED; 129 + writel(val, sata_phy->regs + EXYNOS5_SATA_CTRL0); 130 + 131 + val = readl(sata_phy->regs + EXYNOS5_SATA_MODE0); 132 + val |= SATA_SPD_GEN3; 133 + writel(val, sata_phy->regs + EXYNOS5_SATA_MODE0); 134 + 135 + ret = i2c_master_send(sata_phy->client, buf, sizeof(buf)); 136 + if (ret < 0) 137 + return ret; 138 + 139 + /* release cmu reset */ 140 + val = readl(sata_phy->regs + EXYNOS5_SATA_RESET); 141 + val &= ~RESET_CMN_RST_N; 142 + writel(val, sata_phy->regs + EXYNOS5_SATA_RESET); 143 + 144 + val = readl(sata_phy->regs + EXYNOS5_SATA_RESET); 145 + val |= RESET_CMN_RST_N; 146 + writel(val, sata_phy->regs + EXYNOS5_SATA_RESET); 147 + 148 + ret = wait_for_reg_status(sata_phy->regs, 149 + EXYNOS5_SATA_PHSATA_STATM, 150 + PHSTATM_PLL_LOCKED, 1); 151 + if (ret < 0) 152 + dev_err(&sata_phy->phy->dev, 153 + "PHY PLL locking failed\n"); 154 + return ret; 155 + } 156 + 157 + static struct phy_ops exynos_sata_phy_ops = { 158 + .init = exynos_sata_phy_init, 159 + .power_on = exynos_sata_phy_power_on, 160 + .power_off = exynos_sata_phy_power_off, 161 + .owner = THIS_MODULE, 162 + }; 163 + 164 + static int exynos_sata_phy_probe(struct platform_device *pdev) 165 + { 166 + struct exynos_sata_phy *sata_phy; 167 + struct device *dev = &pdev->dev; 168 + struct resource *res; 169 + struct phy_provider *phy_provider; 170 + struct device_node *node; 171 + int ret = 0; 172 + 173 + sata_phy = devm_kzalloc(dev, sizeof(*sata_phy), GFP_KERNEL); 174 + if (!sata_phy) 175 + return -ENOMEM; 176 + 177 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 178 + 179 + sata_phy->regs = devm_ioremap_resource(dev, res); 180 + if (IS_ERR(sata_phy->regs)) 181 + return 
PTR_ERR(sata_phy->regs); 182 + 183 + sata_phy->pmureg = syscon_regmap_lookup_by_phandle(dev->of_node, 184 + "samsung,syscon-phandle"); 185 + if (IS_ERR(sata_phy->pmureg)) { 186 + dev_err(dev, "syscon regmap lookup failed.\n"); 187 + return PTR_ERR(sata_phy->pmureg); 188 + } 189 + 190 + node = of_parse_phandle(dev->of_node, 191 + "samsung,exynos-sataphy-i2c-phandle", 0); 192 + if (!node) 193 + return -EINVAL; 194 + 195 + sata_phy->client = of_find_i2c_device_by_node(node); 196 + if (!sata_phy->client) 197 + return -EPROBE_DEFER; 198 + 199 + dev_set_drvdata(dev, sata_phy); 200 + 201 + sata_phy->phyclk = devm_clk_get(dev, "sata_phyctrl"); 202 + if (IS_ERR(sata_phy->phyclk)) { 203 + dev_err(dev, "failed to get clk for PHY\n"); 204 + return PTR_ERR(sata_phy->phyclk); 205 + } 206 + 207 + ret = clk_prepare_enable(sata_phy->phyclk); 208 + if (ret < 0) { 209 + dev_err(dev, "failed to enable source clk\n"); 210 + return ret; 211 + } 212 + 213 + sata_phy->phy = devm_phy_create(dev, &exynos_sata_phy_ops, NULL); 214 + if (IS_ERR(sata_phy->phy)) { 215 + clk_disable_unprepare(sata_phy->phyclk); 216 + dev_err(dev, "failed to create PHY\n"); 217 + return PTR_ERR(sata_phy->phy); 218 + } 219 + 220 + phy_set_drvdata(sata_phy->phy, sata_phy); 221 + 222 + phy_provider = devm_of_phy_provider_register(dev, 223 + of_phy_simple_xlate); 224 + if (IS_ERR(phy_provider)) { 225 + clk_disable_unprepare(sata_phy->phyclk); 226 + return PTR_ERR(phy_provider); 227 + } 228 + 229 + return 0; 230 + } 231 + 232 + static const struct of_device_id exynos_sata_phy_of_match[] = { 233 + { .compatible = "samsung,exynos5250-sata-phy" }, 234 + { }, 235 + }; 236 + MODULE_DEVICE_TABLE(of, exynos_sata_phy_of_match); 237 + 238 + static struct platform_driver exynos_sata_phy_driver = { 239 + .probe = exynos_sata_phy_probe, 240 + .driver = { 241 + .of_match_table = exynos_sata_phy_of_match, 242 + .name = "samsung,sata-phy", 243 + .owner = THIS_MODULE, 244 + } 245 + }; 246 + 
module_platform_driver(exynos_sata_phy_driver); 247 + 248 + MODULE_DESCRIPTION("Samsung SerDes PHY driver"); 249 + MODULE_LICENSE("GPL v2"); 250 + MODULE_AUTHOR("Girish K S <ks.giri@samsung.com>"); 251 + MODULE_AUTHOR("Yuvaraj C D <yuvaraj.cd@samsung.com>");
+404
drivers/phy/phy-exynos5250-usb2.c
··· 1 + /* 2 + * Samsung SoC USB 1.1/2.0 PHY driver - Exynos 5250 support 3 + * 4 + * Copyright (C) 2013 Samsung Electronics Co., Ltd. 5 + * Author: Kamil Debski <k.debski@samsung.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/delay.h> 13 + #include <linux/io.h> 14 + #include <linux/phy/phy.h> 15 + #include <linux/regmap.h> 16 + #include "phy-samsung-usb2.h" 17 + 18 + /* Exynos USB PHY registers */ 19 + #define EXYNOS_5250_REFCLKSEL_CRYSTAL 0x0 20 + #define EXYNOS_5250_REFCLKSEL_XO 0x1 21 + #define EXYNOS_5250_REFCLKSEL_CLKCORE 0x2 22 + 23 + #define EXYNOS_5250_FSEL_9MHZ6 0x0 24 + #define EXYNOS_5250_FSEL_10MHZ 0x1 25 + #define EXYNOS_5250_FSEL_12MHZ 0x2 26 + #define EXYNOS_5250_FSEL_19MHZ2 0x3 27 + #define EXYNOS_5250_FSEL_20MHZ 0x4 28 + #define EXYNOS_5250_FSEL_24MHZ 0x5 29 + #define EXYNOS_5250_FSEL_50MHZ 0x7 30 + 31 + /* Normal host */ 32 + #define EXYNOS_5250_HOSTPHYCTRL0 0x0 33 + 34 + #define EXYNOS_5250_HOSTPHYCTRL0_PHYSWRSTALL BIT(31) 35 + #define EXYNOS_5250_HOSTPHYCTRL0_REFCLKSEL_SHIFT 19 36 + #define EXYNOS_5250_HOSTPHYCTRL0_REFCLKSEL_MASK \ 37 + (0x3 << EXYNOS_5250_HOSTPHYCTRL0_REFCLKSEL_SHIFT) 38 + #define EXYNOS_5250_HOSTPHYCTRL0_FSEL_SHIFT 16 39 + #define EXYNOS_5250_HOSTPHYCTRL0_FSEL_MASK \ 40 + (0x7 << EXYNOS_5250_HOSTPHYCTRL0_FSEL_SHIFT) 41 + #define EXYNOS_5250_HOSTPHYCTRL0_TESTBURNIN BIT(11) 42 + #define EXYNOS_5250_HOSTPHYCTRL0_RETENABLE BIT(10) 43 + #define EXYNOS_5250_HOSTPHYCTRL0_COMMON_ON_N BIT(9) 44 + #define EXYNOS_5250_HOSTPHYCTRL0_VATESTENB_MASK (0x3 << 7) 45 + #define EXYNOS_5250_HOSTPHYCTRL0_VATESTENB_DUAL (0x0 << 7) 46 + #define EXYNOS_5250_HOSTPHYCTRL0_VATESTENB_ID0 (0x1 << 7) 47 + #define EXYNOS_5250_HOSTPHYCTRL0_VATESTENB_ANALOGTEST (0x2 << 7) 48 + #define EXYNOS_5250_HOSTPHYCTRL0_SIDDQ BIT(6) 49 + #define 
EXYNOS_5250_HOSTPHYCTRL0_FORCESLEEP BIT(5) 50 + #define EXYNOS_5250_HOSTPHYCTRL0_FORCESUSPEND BIT(4) 51 + #define EXYNOS_5250_HOSTPHYCTRL0_WORDINTERFACE BIT(3) 52 + #define EXYNOS_5250_HOSTPHYCTRL0_UTMISWRST BIT(2) 53 + #define EXYNOS_5250_HOSTPHYCTRL0_LINKSWRST BIT(1) 54 + #define EXYNOS_5250_HOSTPHYCTRL0_PHYSWRST BIT(0) 55 + 56 + /* HSIC0 & HSIC1 */ 57 + #define EXYNOS_5250_HSICPHYCTRL1 0x10 58 + #define EXYNOS_5250_HSICPHYCTRL2 0x20 59 + 60 + #define EXYNOS_5250_HSICPHYCTRLX_REFCLKSEL_MASK (0x3 << 23) 61 + #define EXYNOS_5250_HSICPHYCTRLX_REFCLKSEL_DEFAULT (0x2 << 23) 62 + #define EXYNOS_5250_HSICPHYCTRLX_REFCLKDIV_MASK (0x7f << 16) 63 + #define EXYNOS_5250_HSICPHYCTRLX_REFCLKDIV_12 (0x24 << 16) 64 + #define EXYNOS_5250_HSICPHYCTRLX_REFCLKDIV_15 (0x1c << 16) 65 + #define EXYNOS_5250_HSICPHYCTRLX_REFCLKDIV_16 (0x1a << 16) 66 + #define EXYNOS_5250_HSICPHYCTRLX_REFCLKDIV_19_2 (0x15 << 16) 67 + #define EXYNOS_5250_HSICPHYCTRLX_REFCLKDIV_20 (0x14 << 16) 68 + #define EXYNOS_5250_HSICPHYCTRLX_SIDDQ BIT(6) 69 + #define EXYNOS_5250_HSICPHYCTRLX_FORCESLEEP BIT(5) 70 + #define EXYNOS_5250_HSICPHYCTRLX_FORCESUSPEND BIT(4) 71 + #define EXYNOS_5250_HSICPHYCTRLX_WORDINTERFACE BIT(3) 72 + #define EXYNOS_5250_HSICPHYCTRLX_UTMISWRST BIT(2) 73 + #define EXYNOS_5250_HSICPHYCTRLX_PHYSWRST BIT(0) 74 + 75 + /* EHCI control */ 76 + #define EXYNOS_5250_HOSTEHCICTRL 0x30 77 + #define EXYNOS_5250_HOSTEHCICTRL_ENAINCRXALIGN BIT(29) 78 + #define EXYNOS_5250_HOSTEHCICTRL_ENAINCR4 BIT(28) 79 + #define EXYNOS_5250_HOSTEHCICTRL_ENAINCR8 BIT(27) 80 + #define EXYNOS_5250_HOSTEHCICTRL_ENAINCR16 BIT(26) 81 + #define EXYNOS_5250_HOSTEHCICTRL_AUTOPPDONOVRCUREN BIT(25) 82 + #define EXYNOS_5250_HOSTEHCICTRL_FLADJVAL0_SHIFT 19 83 + #define EXYNOS_5250_HOSTEHCICTRL_FLADJVAL0_MASK \ 84 + (0x3f << EXYNOS_5250_HOSTEHCICTRL_FLADJVAL0_SHIFT) 85 + #define EXYNOS_5250_HOSTEHCICTRL_FLADJVAL1_SHIFT 13 86 + #define EXYNOS_5250_HOSTEHCICTRL_FLADJVAL1_MASK \ 87 + (0x3f << EXYNOS_5250_HOSTEHCICTRL_FLADJVAL1_SHIFT) 88 
+ #define EXYNOS_5250_HOSTEHCICTRL_FLADJVAL2_SHIFT 7 89 + #define EXYNOS_5250_HOSTEHCICTRL_FLADJVAL2_MASK \ 90 + (0x3f << EXYNOS_5250_HOSTEHCICTRL_FLADJVAL2_SHIFT) 91 + #define EXYNOS_5250_HOSTEHCICTRL_FLADJVALHOST_SHIFT 1 92 + #define EXYNOS_5250_HOSTEHCICTRL_FLADJVALHOST_MASK \ 93 + (0x1 << EXYNOS_5250_HOSTEHCICTRL_FLADJVALHOST_SHIFT) 94 + #define EXYNOS_5250_HOSTEHCICTRL_SIMULATIONMODE BIT(0) 95 + 96 + /* OHCI control */ 97 + #define EXYNOS_5250_HOSTOHCICTRL 0x34 98 + #define EXYNOS_5250_HOSTOHCICTRL_FRAMELENVAL_SHIFT 1 99 + #define EXYNOS_5250_HOSTOHCICTRL_FRAMELENVAL_MASK \ 100 + (0x3ff << EXYNOS_5250_HOSTOHCICTRL_FRAMELENVAL_SHIFT) 101 + #define EXYNOS_5250_HOSTOHCICTRL_FRAMELENVALEN BIT(0) 102 + 103 + /* USBOTG */ 104 + #define EXYNOS_5250_USBOTGSYS 0x38 105 + #define EXYNOS_5250_USBOTGSYS_PHYLINK_SW_RESET BIT(14) 106 + #define EXYNOS_5250_USBOTGSYS_LINK_SW_RST_UOTG BIT(13) 107 + #define EXYNOS_5250_USBOTGSYS_PHY_SW_RST BIT(12) 108 + #define EXYNOS_5250_USBOTGSYS_REFCLKSEL_SHIFT 9 109 + #define EXYNOS_5250_USBOTGSYS_REFCLKSEL_MASK \ 110 + (0x3 << EXYNOS_5250_USBOTGSYS_REFCLKSEL_SHIFT) 111 + #define EXYNOS_5250_USBOTGSYS_ID_PULLUP BIT(8) 112 + #define EXYNOS_5250_USBOTGSYS_COMMON_ON BIT(7) 113 + #define EXYNOS_5250_USBOTGSYS_FSEL_SHIFT 4 114 + #define EXYNOS_5250_USBOTGSYS_FSEL_MASK \ 115 + (0x3 << EXYNOS_5250_USBOTGSYS_FSEL_SHIFT) 116 + #define EXYNOS_5250_USBOTGSYS_FORCE_SLEEP BIT(3) 117 + #define EXYNOS_5250_USBOTGSYS_OTGDISABLE BIT(2) 118 + #define EXYNOS_5250_USBOTGSYS_SIDDQ_UOTG BIT(1) 119 + #define EXYNOS_5250_USBOTGSYS_FORCE_SUSPEND BIT(0) 120 + 121 + /* Isolation, configured in the power management unit */ 122 + #define EXYNOS_5250_USB_ISOL_OTG_OFFSET 0x704 123 + #define EXYNOS_5250_USB_ISOL_OTG BIT(0) 124 + #define EXYNOS_5250_USB_ISOL_HOST_OFFSET 0x708 125 + #define EXYNOS_5250_USB_ISOL_HOST BIT(0) 126 + 127 + /* Mode switch register */ 128 + #define EXYNOS_5250_MODE_SWITCH_OFFSET 0x230 129 + #define EXYNOS_5250_MODE_SWITCH_MASK 1 130 + #define
EXYNOS_5250_MODE_SWITCH_DEVICE 0 131 + #define EXYNOS_5250_MODE_SWITCH_HOST 1 132 + 133 + enum exynos4x12_phy_id { 134 + EXYNOS5250_DEVICE, 135 + EXYNOS5250_HOST, 136 + EXYNOS5250_HSIC0, 137 + EXYNOS5250_HSIC1, 138 + EXYNOS5250_NUM_PHYS, 139 + }; 140 + 141 + /* 142 + * exynos5250_rate_to_clk() converts the supplied clock rate to the value that 143 + * can be written to the phy register. 144 + */ 145 + static int exynos5250_rate_to_clk(unsigned long rate, u32 *reg) 146 + { 147 + /* EXYNOS_5250_FSEL_MASK */ 148 + 149 + switch (rate) { 150 + case 9600 * KHZ: 151 + *reg = EXYNOS_5250_FSEL_9MHZ6; 152 + break; 153 + case 10 * MHZ: 154 + *reg = EXYNOS_5250_FSEL_10MHZ; 155 + break; 156 + case 12 * MHZ: 157 + *reg = EXYNOS_5250_FSEL_12MHZ; 158 + break; 159 + case 19200 * KHZ: 160 + *reg = EXYNOS_5250_FSEL_19MHZ2; 161 + break; 162 + case 20 * MHZ: 163 + *reg = EXYNOS_5250_FSEL_20MHZ; 164 + break; 165 + case 24 * MHZ: 166 + *reg = EXYNOS_5250_FSEL_24MHZ; 167 + break; 168 + case 50 * MHZ: 169 + *reg = EXYNOS_5250_FSEL_50MHZ; 170 + break; 171 + default: 172 + return -EINVAL; 173 + } 174 + 175 + return 0; 176 + } 177 + 178 + static void exynos5250_isol(struct samsung_usb2_phy_instance *inst, bool on) 179 + { 180 + struct samsung_usb2_phy_driver *drv = inst->drv; 181 + u32 offset; 182 + u32 mask; 183 + 184 + switch (inst->cfg->id) { 185 + case EXYNOS5250_DEVICE: 186 + offset = EXYNOS_5250_USB_ISOL_OTG_OFFSET; 187 + mask = EXYNOS_5250_USB_ISOL_OTG; 188 + break; 189 + case EXYNOS5250_HOST: 190 + offset = EXYNOS_5250_USB_ISOL_HOST_OFFSET; 191 + mask = EXYNOS_5250_USB_ISOL_HOST; 192 + break; 193 + default: 194 + return; 195 + }; 196 + 197 + regmap_update_bits(drv->reg_pmu, offset, mask, on ? 
0 : mask); 198 + } 199 + 200 + static int exynos5250_power_on(struct samsung_usb2_phy_instance *inst) 201 + { 202 + struct samsung_usb2_phy_driver *drv = inst->drv; 203 + u32 ctrl0; 204 + u32 otg; 205 + u32 ehci; 206 + u32 ohci; 207 + u32 hsic; 208 + 209 + switch (inst->cfg->id) { 210 + case EXYNOS5250_DEVICE: 211 + regmap_update_bits(drv->reg_sys, 212 + EXYNOS_5250_MODE_SWITCH_OFFSET, 213 + EXYNOS_5250_MODE_SWITCH_MASK, 214 + EXYNOS_5250_MODE_SWITCH_DEVICE); 215 + 216 + /* OTG configuration */ 217 + otg = readl(drv->reg_phy + EXYNOS_5250_USBOTGSYS); 218 + /* The clock */ 219 + otg &= ~EXYNOS_5250_USBOTGSYS_FSEL_MASK; 220 + otg |= drv->ref_reg_val << EXYNOS_5250_USBOTGSYS_FSEL_SHIFT; 221 + /* Reset */ 222 + otg &= ~(EXYNOS_5250_USBOTGSYS_FORCE_SUSPEND | 223 + EXYNOS_5250_USBOTGSYS_FORCE_SLEEP | 224 + EXYNOS_5250_USBOTGSYS_SIDDQ_UOTG); 225 + otg |= EXYNOS_5250_USBOTGSYS_PHY_SW_RST | 226 + EXYNOS_5250_USBOTGSYS_PHYLINK_SW_RESET | 227 + EXYNOS_5250_USBOTGSYS_LINK_SW_RST_UOTG | 228 + EXYNOS_5250_USBOTGSYS_OTGDISABLE; 229 + /* Ref clock */ 230 + otg &= ~EXYNOS_5250_USBOTGSYS_REFCLKSEL_MASK; 231 + otg |= EXYNOS_5250_REFCLKSEL_CLKCORE << 232 + EXYNOS_5250_USBOTGSYS_REFCLKSEL_SHIFT; 233 + writel(otg, drv->reg_phy + EXYNOS_5250_USBOTGSYS); 234 + udelay(100); 235 + otg &= ~(EXYNOS_5250_USBOTGSYS_PHY_SW_RST | 236 + EXYNOS_5250_USBOTGSYS_LINK_SW_RST_UOTG | 237 + EXYNOS_5250_USBOTGSYS_PHYLINK_SW_RESET | 238 + EXYNOS_5250_USBOTGSYS_OTGDISABLE); 239 + writel(otg, drv->reg_phy + EXYNOS_5250_USBOTGSYS); 240 + 241 + 242 + break; 243 + case EXYNOS5250_HOST: 244 + case EXYNOS5250_HSIC0: 245 + case EXYNOS5250_HSIC1: 246 + /* Host registers configuration */ 247 + ctrl0 = readl(drv->reg_phy + EXYNOS_5250_HOSTPHYCTRL0); 248 + /* The clock */ 249 + ctrl0 &= ~EXYNOS_5250_HOSTPHYCTRL0_FSEL_MASK; 250 + ctrl0 |= drv->ref_reg_val << 251 + EXYNOS_5250_HOSTPHYCTRL0_FSEL_SHIFT; 252 + 253 + /* Reset */ 254 + ctrl0 &= ~(EXYNOS_5250_HOSTPHYCTRL0_PHYSWRST | 255 + EXYNOS_5250_HOSTPHYCTRL0_PHYSWRSTALL | 
256 + EXYNOS_5250_HOSTPHYCTRL0_SIDDQ | 257 + EXYNOS_5250_HOSTPHYCTRL0_FORCESUSPEND | 258 + EXYNOS_5250_HOSTPHYCTRL0_FORCESLEEP); 259 + ctrl0 |= EXYNOS_5250_HOSTPHYCTRL0_LINKSWRST | 260 + EXYNOS_5250_HOSTPHYCTRL0_UTMISWRST | 261 + EXYNOS_5250_HOSTPHYCTRL0_COMMON_ON_N; 262 + writel(ctrl0, drv->reg_phy + EXYNOS_5250_HOSTPHYCTRL0); 263 + udelay(10); 264 + ctrl0 &= ~(EXYNOS_5250_HOSTPHYCTRL0_LINKSWRST | 265 + EXYNOS_5250_HOSTPHYCTRL0_UTMISWRST); 266 + writel(ctrl0, drv->reg_phy + EXYNOS_5250_HOSTPHYCTRL0); 267 + 268 + /* OTG configuration */ 269 + otg = readl(drv->reg_phy + EXYNOS_5250_USBOTGSYS); 270 + /* The clock */ 271 + otg &= ~EXYNOS_5250_USBOTGSYS_FSEL_MASK; 272 + otg |= drv->ref_reg_val << EXYNOS_5250_USBOTGSYS_FSEL_SHIFT; 273 + /* Reset */ 274 + otg &= ~(EXYNOS_5250_USBOTGSYS_FORCE_SUSPEND | 275 + EXYNOS_5250_USBOTGSYS_FORCE_SLEEP | 276 + EXYNOS_5250_USBOTGSYS_SIDDQ_UOTG); 277 + otg |= EXYNOS_5250_USBOTGSYS_PHY_SW_RST | 278 + EXYNOS_5250_USBOTGSYS_PHYLINK_SW_RESET | 279 + EXYNOS_5250_USBOTGSYS_LINK_SW_RST_UOTG | 280 + EXYNOS_5250_USBOTGSYS_OTGDISABLE; 281 + /* Ref clock */ 282 + otg &= ~EXYNOS_5250_USBOTGSYS_REFCLKSEL_MASK; 283 + otg |= EXYNOS_5250_REFCLKSEL_CLKCORE << 284 + EXYNOS_5250_USBOTGSYS_REFCLKSEL_SHIFT; 285 + writel(otg, drv->reg_phy + EXYNOS_5250_USBOTGSYS); 286 + udelay(10); 287 + otg &= ~(EXYNOS_5250_USBOTGSYS_PHY_SW_RST | 288 + EXYNOS_5250_USBOTGSYS_LINK_SW_RST_UOTG | 289 + EXYNOS_5250_USBOTGSYS_PHYLINK_SW_RESET); 290 + 291 + /* HSIC phy configuration */ 292 + hsic = (EXYNOS_5250_HSICPHYCTRLX_REFCLKDIV_12 | 293 + EXYNOS_5250_HSICPHYCTRLX_REFCLKSEL_DEFAULT | 294 + EXYNOS_5250_HSICPHYCTRLX_PHYSWRST); 295 + writel(hsic, drv->reg_phy + EXYNOS_5250_HSICPHYCTRL1); 296 + writel(hsic, drv->reg_phy + EXYNOS_5250_HSICPHYCTRL2); 297 + udelay(10); 298 + hsic &= ~EXYNOS_5250_HSICPHYCTRLX_PHYSWRST; 299 + writel(hsic, drv->reg_phy + EXYNOS_5250_HSICPHYCTRL1); 300 + writel(hsic, drv->reg_phy + EXYNOS_5250_HSICPHYCTRL2); 301 + /* The following delay is necessary 
for the reset sequence to be 302 + * completed */ 303 + udelay(80); 304 + 305 + /* Enable EHCI DMA burst */ 306 + ehci = readl(drv->reg_phy + EXYNOS_5250_HOSTEHCICTRL); 307 + ehci |= EXYNOS_5250_HOSTEHCICTRL_ENAINCRXALIGN | 308 + EXYNOS_5250_HOSTEHCICTRL_ENAINCR4 | 309 + EXYNOS_5250_HOSTEHCICTRL_ENAINCR8 | 310 + EXYNOS_5250_HOSTEHCICTRL_ENAINCR16; 311 + writel(ehci, drv->reg_phy + EXYNOS_5250_HOSTEHCICTRL); 312 + 313 + /* OHCI settings */ 314 + ohci = readl(drv->reg_phy + EXYNOS_5250_HOSTOHCICTRL); 315 + /* Following code is based on the old driver */ 316 + ohci |= 0x1 << 3; 317 + writel(ohci, drv->reg_phy + EXYNOS_5250_HOSTOHCICTRL); 318 + 319 + break; 320 + } 321 + inst->enabled = 1; 322 + exynos5250_isol(inst, 0); 323 + 324 + return 0; 325 + } 326 + 327 + static int exynos5250_power_off(struct samsung_usb2_phy_instance *inst) 328 + { 329 + struct samsung_usb2_phy_driver *drv = inst->drv; 330 + u32 ctrl0; 331 + u32 otg; 332 + u32 hsic; 333 + 334 + inst->enabled = 0; 335 + exynos5250_isol(inst, 1); 336 + 337 + switch (inst->cfg->id) { 338 + case EXYNOS5250_DEVICE: 339 + otg = readl(drv->reg_phy + EXYNOS_5250_USBOTGSYS); 340 + otg |= (EXYNOS_5250_USBOTGSYS_FORCE_SUSPEND | 341 + EXYNOS_5250_USBOTGSYS_SIDDQ_UOTG | 342 + EXYNOS_5250_USBOTGSYS_FORCE_SLEEP); 343 + writel(otg, drv->reg_phy + EXYNOS_5250_USBOTGSYS); 344 + break; 345 + case EXYNOS5250_HOST: 346 + ctrl0 = readl(drv->reg_phy + EXYNOS_5250_HOSTPHYCTRL0); 347 + ctrl0 |= (EXYNOS_5250_HOSTPHYCTRL0_SIDDQ | 348 + EXYNOS_5250_HOSTPHYCTRL0_FORCESUSPEND | 349 + EXYNOS_5250_HOSTPHYCTRL0_FORCESLEEP | 350 + EXYNOS_5250_HOSTPHYCTRL0_PHYSWRST | 351 + EXYNOS_5250_HOSTPHYCTRL0_PHYSWRSTALL); 352 + writel(ctrl0, drv->reg_phy + EXYNOS_5250_HOSTPHYCTRL0); 353 + break; 354 + case EXYNOS5250_HSIC0: 355 + case EXYNOS5250_HSIC1: 356 + hsic = (EXYNOS_5250_HSICPHYCTRLX_REFCLKDIV_12 | 357 + EXYNOS_5250_HSICPHYCTRLX_REFCLKSEL_DEFAULT | 358 + EXYNOS_5250_HSICPHYCTRLX_SIDDQ | 359 + EXYNOS_5250_HSICPHYCTRLX_FORCESLEEP | 360 + 
EXYNOS_5250_HSICPHYCTRLX_FORCESUSPEND 361 + ); 362 + writel(hsic, drv->reg_phy + EXYNOS_5250_HSICPHYCTRL1); 363 + writel(hsic, drv->reg_phy + EXYNOS_5250_HSICPHYCTRL2); 364 + break; 365 + } 366 + 367 + return 0; 368 + } 369 + 370 + 371 + static const struct samsung_usb2_common_phy exynos5250_phys[] = { 372 + { 373 + .label = "device", 374 + .id = EXYNOS5250_DEVICE, 375 + .power_on = exynos5250_power_on, 376 + .power_off = exynos5250_power_off, 377 + }, 378 + { 379 + .label = "host", 380 + .id = EXYNOS5250_HOST, 381 + .power_on = exynos5250_power_on, 382 + .power_off = exynos5250_power_off, 383 + }, 384 + { 385 + .label = "hsic0", 386 + .id = EXYNOS5250_HSIC0, 387 + .power_on = exynos5250_power_on, 388 + .power_off = exynos5250_power_off, 389 + }, 390 + { 391 + .label = "hsic1", 392 + .id = EXYNOS5250_HSIC1, 393 + .power_on = exynos5250_power_on, 394 + .power_off = exynos5250_power_off, 395 + }, 396 + {}, 397 + }; 398 + 399 + const struct samsung_usb2_phy_config exynos5250_usb2_phy_config = { 400 + .has_mode_switch = 1, 401 + .num_phys = EXYNOS5250_NUM_PHYS, 402 + .phys = exynos5250_phys, 403 + .rate_to_clk = exynos5250_rate_to_clk, 404 + };
+95 -42
drivers/phy/phy-omap-usb2.c
··· 21 21 #include <linux/slab.h> 22 22 #include <linux/of.h> 23 23 #include <linux/io.h> 24 - #include <linux/usb/omap_usb.h> 24 + #include <linux/phy/omap_usb.h> 25 25 #include <linux/usb/phy_companion.h> 26 26 #include <linux/clk.h> 27 27 #include <linux/err.h> 28 28 #include <linux/pm_runtime.h> 29 29 #include <linux/delay.h> 30 - #include <linux/usb/omap_control_usb.h> 30 + #include <linux/phy/omap_control_phy.h> 31 31 #include <linux/phy/phy.h> 32 32 #include <linux/of_platform.h> 33 + 34 + #define USB2PHY_DISCON_BYP_LATCH (1 << 31) 35 + #define USB2PHY_ANA_CONFIG1 0x4c 33 36 34 37 /** 35 38 * omap_usb2_set_comparator - links the comparator present in the sytem with ··· 101 98 return 0; 102 99 } 103 100 104 - static int omap_usb2_suspend(struct usb_phy *x, int suspend) 105 - { 106 - struct omap_usb *phy = phy_to_omapusb(x); 107 - int ret; 108 - 109 - if (suspend && !phy->is_suspended) { 110 - omap_control_usb_phy_power(phy->control_dev, 0); 111 - pm_runtime_put_sync(phy->dev); 112 - phy->is_suspended = 1; 113 - } else if (!suspend && phy->is_suspended) { 114 - ret = pm_runtime_get_sync(phy->dev); 115 - if (ret < 0) { 116 - dev_err(phy->dev, "get_sync failed with err %d\n", ret); 117 - return ret; 118 - } 119 - omap_control_usb_phy_power(phy->control_dev, 1); 120 - phy->is_suspended = 0; 121 - } 122 - 123 - return 0; 124 - } 125 - 126 101 static int omap_usb_power_off(struct phy *x) 127 102 { 128 103 struct omap_usb *phy = phy_get_drvdata(x); 129 104 130 - omap_control_usb_phy_power(phy->control_dev, 0); 105 + omap_control_phy_power(phy->control_dev, 0); 131 106 132 107 return 0; 133 108 } ··· 114 133 { 115 134 struct omap_usb *phy = phy_get_drvdata(x); 116 135 117 - omap_control_usb_phy_power(phy->control_dev, 1); 136 + omap_control_phy_power(phy->control_dev, 1); 137 + 138 + return 0; 139 + } 140 + 141 + static int omap_usb_init(struct phy *x) 142 + { 143 + struct omap_usb *phy = phy_get_drvdata(x); 144 + u32 val; 145 + 146 + if (phy->flags & 
OMAP_USB2_CALIBRATE_FALSE_DISCONNECT) { 147 + /* 148 + * 149 + * Reduce the sensitivity of internal PHY by enabling the 150 + * DISCON_BYP_LATCH of the USB2PHY_ANA_CONFIG1 register. This 151 + * resolves issues with certain devices which can otherwise 152 + * be prone to false disconnects. 153 + * 154 + */ 155 + val = omap_usb_readl(phy->phy_base, USB2PHY_ANA_CONFIG1); 156 + val |= USB2PHY_DISCON_BYP_LATCH; 157 + omap_usb_writel(phy->phy_base, USB2PHY_ANA_CONFIG1, val); 158 + } 118 159 119 160 return 0; 120 161 } 121 162 122 163 static struct phy_ops ops = { 164 + .init = omap_usb_init, 123 165 .power_on = omap_usb_power_on, 124 166 .power_off = omap_usb_power_off, 125 167 .owner = THIS_MODULE, 126 168 }; 127 169 170 + #ifdef CONFIG_OF 171 + static const struct usb_phy_data omap_usb2_data = { 172 + .label = "omap_usb2", 173 + .flags = OMAP_USB2_HAS_START_SRP | OMAP_USB2_HAS_SET_VBUS, 174 + }; 175 + 176 + static const struct usb_phy_data omap5_usb2_data = { 177 + .label = "omap5_usb2", 178 + .flags = 0, 179 + }; 180 + 181 + static const struct usb_phy_data dra7x_usb2_data = { 182 + .label = "dra7x_usb2", 183 + .flags = OMAP_USB2_CALIBRATE_FALSE_DISCONNECT, 184 + }; 185 + 186 + static const struct usb_phy_data am437x_usb2_data = { 187 + .label = "am437x_usb2", 188 + .flags = 0, 189 + }; 190 + 191 + static const struct of_device_id omap_usb2_id_table[] = { 192 + { 193 + .compatible = "ti,omap-usb2", 194 + .data = &omap_usb2_data, 195 + }, 196 + { 197 + .compatible = "ti,omap5-usb2", 198 + .data = &omap5_usb2_data, 199 + }, 200 + { 201 + .compatible = "ti,dra7x-usb2", 202 + .data = &dra7x_usb2_data, 203 + }, 204 + { 205 + .compatible = "ti,am437x-usb2", 206 + .data = &am437x_usb2_data, 207 + }, 208 + {}, 209 + }; 210 + MODULE_DEVICE_TABLE(of, omap_usb2_id_table); 211 + #endif 212 + 128 213 static int omap_usb2_probe(struct platform_device *pdev) 129 214 { 130 215 struct omap_usb *phy; 131 216 struct phy *generic_phy; 217 + struct resource *res; 132 218 struct 
phy_provider *phy_provider; 133 219 struct usb_otg *otg; 134 220 struct device_node *node = pdev->dev.of_node; 135 221 struct device_node *control_node; 136 222 struct platform_device *control_pdev; 223 + const struct of_device_id *of_id; 224 + struct usb_phy_data *phy_data; 137 225 138 - if (!node) 226 + of_id = of_match_device(of_match_ptr(omap_usb2_id_table), &pdev->dev); 227 + 228 + if (!of_id) 139 229 return -EINVAL; 230 + 231 + phy_data = (struct usb_phy_data *)of_id->data; 140 232 141 233 phy = devm_kzalloc(&pdev->dev, sizeof(*phy), GFP_KERNEL); 142 234 if (!phy) { ··· 226 172 phy->dev = &pdev->dev; 227 173 228 174 phy->phy.dev = phy->dev; 229 - phy->phy.label = "omap-usb2"; 230 - phy->phy.set_suspend = omap_usb2_suspend; 175 + phy->phy.label = phy_data->label; 231 176 phy->phy.otg = otg; 232 177 phy->phy.type = USB_PHY_TYPE_USB2; 178 + 179 + if (phy_data->flags & OMAP_USB2_CALIBRATE_FALSE_DISCONNECT) { 180 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 181 + phy->phy_base = devm_ioremap_resource(&pdev->dev, res); 182 + if (!phy->phy_base) 183 + return -ENOMEM; 184 + phy->flags |= OMAP_USB2_CALIBRATE_FALSE_DISCONNECT; 185 + } 233 186 234 187 control_node = of_parse_phandle(node, "ctrl-module", 0); 235 188 if (!control_node) { ··· 251 190 } 252 191 253 192 phy->control_dev = &control_pdev->dev; 254 - 255 - phy->is_suspended = 1; 256 - omap_control_usb_phy_power(phy->control_dev, 0); 193 + omap_control_phy_power(phy->control_dev, 0); 257 194 258 195 otg->set_host = omap_usb_set_host; 259 196 otg->set_peripheral = omap_usb_set_peripheral; 260 - otg->set_vbus = omap_usb_set_vbus; 261 - otg->start_srp = omap_usb_start_srp; 197 + if (phy_data->flags & OMAP_USB2_HAS_SET_VBUS) 198 + otg->set_vbus = omap_usb_set_vbus; 199 + if (phy_data->flags & OMAP_USB2_HAS_START_SRP) 200 + otg->start_srp = omap_usb_start_srp; 262 201 otg->phy = &phy->phy; 263 202 264 203 platform_set_drvdata(pdev, phy); ··· 356 295 #define DEV_PM_OPS (&omap_usb2_pm_ops) 357 296 #else 358 
297 #define DEV_PM_OPS NULL 359 - #endif 360 - 361 - #ifdef CONFIG_OF 362 - static const struct of_device_id omap_usb2_id_table[] = { 363 - { .compatible = "ti,omap-usb2" }, 364 - {} 365 - }; 366 - MODULE_DEVICE_TABLE(of, omap_usb2_id_table); 367 298 #endif 368 299 369 300 static struct platform_driver omap_usb2_driver = {
+228
drivers/phy/phy-samsung-usb2.c
··· 1 + /* 2 + * Samsung SoC USB 1.1/2.0 PHY driver 3 + * 4 + * Copyright (C) 2013 Samsung Electronics Co., Ltd. 5 + * Author: Kamil Debski <k.debski@samsung.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/clk.h> 13 + #include <linux/mfd/syscon.h> 14 + #include <linux/module.h> 15 + #include <linux/of.h> 16 + #include <linux/of_address.h> 17 + #include <linux/phy/phy.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/spinlock.h> 20 + #include "phy-samsung-usb2.h" 21 + 22 + static int samsung_usb2_phy_power_on(struct phy *phy) 23 + { 24 + struct samsung_usb2_phy_instance *inst = phy_get_drvdata(phy); 25 + struct samsung_usb2_phy_driver *drv = inst->drv; 26 + int ret; 27 + 28 + dev_dbg(drv->dev, "Request to power_on \"%s\" usb phy\n", 29 + inst->cfg->label); 30 + ret = clk_prepare_enable(drv->clk); 31 + if (ret) 32 + goto err_main_clk; 33 + ret = clk_prepare_enable(drv->ref_clk); 34 + if (ret) 35 + goto err_instance_clk; 36 + if (inst->cfg->power_on) { 37 + spin_lock(&drv->lock); 38 + ret = inst->cfg->power_on(inst); 39 + spin_unlock(&drv->lock); 40 + } 41 + 42 + return 0; 43 + 44 + err_instance_clk: 45 + clk_disable_unprepare(drv->clk); 46 + err_main_clk: 47 + return ret; 48 + } 49 + 50 + static int samsung_usb2_phy_power_off(struct phy *phy) 51 + { 52 + struct samsung_usb2_phy_instance *inst = phy_get_drvdata(phy); 53 + struct samsung_usb2_phy_driver *drv = inst->drv; 54 + int ret = 0; 55 + 56 + dev_dbg(drv->dev, "Request to power_off \"%s\" usb phy\n", 57 + inst->cfg->label); 58 + if (inst->cfg->power_off) { 59 + spin_lock(&drv->lock); 60 + ret = inst->cfg->power_off(inst); 61 + spin_unlock(&drv->lock); 62 + } 63 + clk_disable_unprepare(drv->ref_clk); 64 + clk_disable_unprepare(drv->clk); 65 + return ret; 66 + } 67 + 68 + static struct phy_ops 
samsung_usb2_phy_ops = { 69 + .power_on = samsung_usb2_phy_power_on, 70 + .power_off = samsung_usb2_phy_power_off, 71 + .owner = THIS_MODULE, 72 + }; 73 + 74 + static struct phy *samsung_usb2_phy_xlate(struct device *dev, 75 + struct of_phandle_args *args) 76 + { 77 + struct samsung_usb2_phy_driver *drv; 78 + 79 + drv = dev_get_drvdata(dev); 80 + if (!drv) 81 + return ERR_PTR(-EINVAL); 82 + 83 + if (WARN_ON(args->args[0] >= drv->cfg->num_phys)) 84 + return ERR_PTR(-ENODEV); 85 + 86 + return drv->instances[args->args[0]].phy; 87 + } 88 + 89 + static const struct of_device_id samsung_usb2_phy_of_match[] = { 90 + #ifdef CONFIG_PHY_EXYNOS4210_USB2 91 + { 92 + .compatible = "samsung,exynos4210-usb2-phy", 93 + .data = &exynos4210_usb2_phy_config, 94 + }, 95 + #endif 96 + #ifdef CONFIG_PHY_EXYNOS4X12_USB2 97 + { 98 + .compatible = "samsung,exynos4x12-usb2-phy", 99 + .data = &exynos4x12_usb2_phy_config, 100 + }, 101 + #endif 102 + #ifdef CONFIG_PHY_EXYNOS5250_USB2 103 + { 104 + .compatible = "samsung,exynos5250-usb2-phy", 105 + .data = &exynos5250_usb2_phy_config, 106 + }, 107 + #endif 108 + { }, 109 + }; 110 + 111 + static int samsung_usb2_phy_probe(struct platform_device *pdev) 112 + { 113 + const struct of_device_id *match; 114 + const struct samsung_usb2_phy_config *cfg; 115 + struct device *dev = &pdev->dev; 116 + struct phy_provider *phy_provider; 117 + struct resource *mem; 118 + struct samsung_usb2_phy_driver *drv; 119 + int i, ret; 120 + 121 + if (!pdev->dev.of_node) { 122 + dev_err(dev, "This driver is required to be instantiated from device tree\n"); 123 + return -EINVAL; 124 + } 125 + 126 + match = of_match_node(samsung_usb2_phy_of_match, pdev->dev.of_node); 127 + if (!match) { 128 + dev_err(dev, "of_match_node() failed\n"); 129 + return -EINVAL; 130 + } 131 + cfg = match->data; 132 + 133 + drv = devm_kzalloc(dev, sizeof(struct samsung_usb2_phy_driver) + 134 + cfg->num_phys * sizeof(struct samsung_usb2_phy_instance), 135 + GFP_KERNEL); 136 + if (!drv) 137 + 
return -ENOMEM; 138 + 139 + dev_set_drvdata(dev, drv); 140 + spin_lock_init(&drv->lock); 141 + 142 + drv->cfg = cfg; 143 + drv->dev = dev; 144 + 145 + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 146 + drv->reg_phy = devm_ioremap_resource(dev, mem); 147 + if (IS_ERR(drv->reg_phy)) { 148 + dev_err(dev, "Failed to map register memory (phy)\n"); 149 + return PTR_ERR(drv->reg_phy); 150 + } 151 + 152 + drv->reg_pmu = syscon_regmap_lookup_by_phandle(pdev->dev.of_node, 153 + "samsung,pmureg-phandle"); 154 + if (IS_ERR(drv->reg_pmu)) { 155 + dev_err(dev, "Failed to map PMU registers (via syscon)\n"); 156 + return PTR_ERR(drv->reg_pmu); 157 + } 158 + 159 + if (drv->cfg->has_mode_switch) { 160 + drv->reg_sys = syscon_regmap_lookup_by_phandle( 161 + pdev->dev.of_node, "samsung,sysreg-phandle"); 162 + if (IS_ERR(drv->reg_sys)) { 163 + dev_err(dev, "Failed to map system registers (via syscon)\n"); 164 + return PTR_ERR(drv->reg_sys); 165 + } 166 + } 167 + 168 + drv->clk = devm_clk_get(dev, "phy"); 169 + if (IS_ERR(drv->clk)) { 170 + dev_err(dev, "Failed to get clock of phy controller\n"); 171 + return PTR_ERR(drv->clk); 172 + } 173 + 174 + drv->ref_clk = devm_clk_get(dev, "ref"); 175 + if (IS_ERR(drv->ref_clk)) { 176 + dev_err(dev, "Failed to get reference clock for the phy controller\n"); 177 + return PTR_ERR(drv->ref_clk); 178 + } 179 + 180 + drv->ref_rate = clk_get_rate(drv->ref_clk); 181 + if (drv->cfg->rate_to_clk) { 182 + ret = drv->cfg->rate_to_clk(drv->ref_rate, &drv->ref_reg_val); 183 + if (ret) 184 + return ret; 185 + } 186 + 187 + for (i = 0; i < drv->cfg->num_phys; i++) { 188 + char *label = drv->cfg->phys[i].label; 189 + struct samsung_usb2_phy_instance *p = &drv->instances[i]; 190 + 191 + dev_dbg(dev, "Creating phy \"%s\"\n", label); 192 + p->phy = devm_phy_create(dev, &samsung_usb2_phy_ops, NULL); 193 + if (IS_ERR(p->phy)) { 194 + dev_err(drv->dev, "Failed to create usb2_phy \"%s\"\n", 195 + label); 196 + return PTR_ERR(p->phy); 197 + } 198 + 199 + p->cfg 
= &drv->cfg->phys[i]; 200 + p->drv = drv; 201 + phy_set_bus_width(p->phy, 8); 202 + phy_set_drvdata(p->phy, p); 203 + } 204 + 205 + phy_provider = devm_of_phy_provider_register(dev, 206 + samsung_usb2_phy_xlate); 207 + if (IS_ERR(phy_provider)) { 208 + dev_err(drv->dev, "Failed to register phy provider\n"); 209 + return PTR_ERR(phy_provider); 210 + } 211 + 212 + return 0; 213 + } 214 + 215 + static struct platform_driver samsung_usb2_phy_driver = { 216 + .probe = samsung_usb2_phy_probe, 217 + .driver = { 218 + .of_match_table = samsung_usb2_phy_of_match, 219 + .name = "samsung-usb2-phy", 220 + .owner = THIS_MODULE, 221 + } 222 + }; 223 + 224 + module_platform_driver(samsung_usb2_phy_driver); 225 + MODULE_DESCRIPTION("Samsung S5P/EXYNOS SoC USB PHY driver"); 226 + MODULE_AUTHOR("Kamil Debski <k.debski@samsung.com>"); 227 + MODULE_LICENSE("GPL v2"); 228 + MODULE_ALIAS("platform:samsung-usb2-phy");
+67
drivers/phy/phy-samsung-usb2.h
··· 1 + /* 2 + * Samsung SoC USB 1.1/2.0 PHY driver 3 + * 4 + * Copyright (C) 2013 Samsung Electronics Co., Ltd. 5 + * Author: Kamil Debski <k.debski@samsung.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #ifndef _PHY_EXYNOS_USB2_H 13 + #define _PHY_EXYNOS_USB2_H 14 + 15 + #include <linux/clk.h> 16 + #include <linux/phy/phy.h> 17 + #include <linux/device.h> 18 + #include <linux/regmap.h> 19 + #include <linux/spinlock.h> 20 + 21 + #define KHZ 1000 22 + #define MHZ (KHZ * KHZ) 23 + 24 + struct samsung_usb2_phy_driver; 25 + struct samsung_usb2_phy_instance; 26 + struct samsung_usb2_phy_config; 27 + 28 + struct samsung_usb2_phy_instance { 29 + const struct samsung_usb2_common_phy *cfg; 30 + struct phy *phy; 31 + struct samsung_usb2_phy_driver *drv; 32 + bool enabled; 33 + }; 34 + 35 + struct samsung_usb2_phy_driver { 36 + const struct samsung_usb2_phy_config *cfg; 37 + struct clk *clk; 38 + struct clk *ref_clk; 39 + unsigned long ref_rate; 40 + u32 ref_reg_val; 41 + struct device *dev; 42 + void __iomem *reg_phy; 43 + struct regmap *reg_pmu; 44 + struct regmap *reg_sys; 45 + spinlock_t lock; 46 + struct samsung_usb2_phy_instance instances[0]; 47 + }; 48 + 49 + struct samsung_usb2_common_phy { 50 + int (*power_on)(struct samsung_usb2_phy_instance *); 51 + int (*power_off)(struct samsung_usb2_phy_instance *); 52 + unsigned int id; 53 + char *label; 54 + }; 55 + 56 + 57 + struct samsung_usb2_phy_config { 58 + const struct samsung_usb2_common_phy *phys; 59 + int (*rate_to_clk)(unsigned long, u32 *); 60 + unsigned int num_phys; 61 + bool has_mode_switch; 62 + }; 63 + 64 + extern const struct samsung_usb2_phy_config exynos4210_usb2_phy_config; 65 + extern const struct samsung_usb2_phy_config exynos4x12_usb2_phy_config; 66 + extern const struct samsung_usb2_phy_config exynos5250_usb2_phy_config; 67 + 
#endif
+331
drivers/phy/phy-sun4i-usb.c
··· 1 + /* 2 + * Allwinner sun4i USB phy driver 3 + * 4 + * Copyright (C) 2014 Hans de Goede <hdegoede@redhat.com> 5 + * 6 + * Based on code from 7 + * Allwinner Technology Co., Ltd. <www.allwinnertech.com> 8 + * 9 + * Modelled after: Samsung S5P/EXYNOS SoC series MIPI CSIS/DSIM DPHY driver 10 + * Copyright (C) 2013 Samsung Electronics Co., Ltd. 11 + * Author: Sylwester Nawrocki <s.nawrocki@samsung.com> 12 + * 13 + * This program is free software; you can redistribute it and/or modify 14 + * it under the terms of the GNU General Public License as published by 15 + * the Free Software Foundation; either version 2 of the License, or 16 + * (at your option) any later version. 17 + * 18 + * This program is distributed in the hope that it will be useful, 19 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 20 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 21 + * GNU General Public License for more details. 22 + */ 23 + 24 + #include <linux/clk.h> 25 + #include <linux/io.h> 26 + #include <linux/kernel.h> 27 + #include <linux/module.h> 28 + #include <linux/mutex.h> 29 + #include <linux/of.h> 30 + #include <linux/of_address.h> 31 + #include <linux/phy/phy.h> 32 + #include <linux/platform_device.h> 33 + #include <linux/regulator/consumer.h> 34 + #include <linux/reset.h> 35 + 36 + #define REG_ISCR 0x00 37 + #define REG_PHYCTL 0x04 38 + #define REG_PHYBIST 0x08 39 + #define REG_PHYTUNE 0x0c 40 + 41 + #define PHYCTL_DATA BIT(7) 42 + 43 + #define SUNXI_AHB_ICHR8_EN BIT(10) 44 + #define SUNXI_AHB_INCR4_BURST_EN BIT(9) 45 + #define SUNXI_AHB_INCRX_ALIGN_EN BIT(8) 46 + #define SUNXI_ULPI_BYPASS_EN BIT(0) 47 + 48 + /* Common Control Bits for Both PHYs */ 49 + #define PHY_PLL_BW 0x03 50 + #define PHY_RES45_CAL_EN 0x0c 51 + 52 + /* Private Control Bits for Each PHY */ 53 + #define PHY_TX_AMPLITUDE_TUNE 0x20 54 + #define PHY_TX_SLEWRATE_TUNE 0x22 55 + #define PHY_VBUSVALID_TH_SEL 0x25 56 + #define PHY_PULLUP_RES_SEL 0x27 57 + #define PHY_OTG_FUNC_EN 
0x28 58 + #define PHY_VBUS_DET_EN 0x29 59 + #define PHY_DISCON_TH_SEL 0x2a 60 + 61 + #define MAX_PHYS 3 62 + 63 + struct sun4i_usb_phy_data { 64 + struct clk *clk; 65 + void __iomem *base; 66 + struct mutex mutex; 67 + int num_phys; 68 + u32 disc_thresh; 69 + struct sun4i_usb_phy { 70 + struct phy *phy; 71 + void __iomem *pmu; 72 + struct regulator *vbus; 73 + struct reset_control *reset; 74 + int index; 75 + } phys[MAX_PHYS]; 76 + }; 77 + 78 + #define to_sun4i_usb_phy_data(phy) \ 79 + container_of((phy), struct sun4i_usb_phy_data, phys[(phy)->index]) 80 + 81 + static void sun4i_usb_phy_write(struct sun4i_usb_phy *phy, u32 addr, u32 data, 82 + int len) 83 + { 84 + struct sun4i_usb_phy_data *phy_data = to_sun4i_usb_phy_data(phy); 85 + u32 temp, usbc_bit = BIT(phy->index * 2); 86 + int i; 87 + 88 + mutex_lock(&phy_data->mutex); 89 + 90 + for (i = 0; i < len; i++) { 91 + temp = readl(phy_data->base + REG_PHYCTL); 92 + 93 + /* clear the address portion */ 94 + temp &= ~(0xff << 8); 95 + 96 + /* set the address */ 97 + temp |= ((addr + i) << 8); 98 + writel(temp, phy_data->base + REG_PHYCTL); 99 + 100 + /* set the data bit and clear usbc bit*/ 101 + temp = readb(phy_data->base + REG_PHYCTL); 102 + if (data & 0x1) 103 + temp |= PHYCTL_DATA; 104 + else 105 + temp &= ~PHYCTL_DATA; 106 + temp &= ~usbc_bit; 107 + writeb(temp, phy_data->base + REG_PHYCTL); 108 + 109 + /* pulse usbc_bit */ 110 + temp = readb(phy_data->base + REG_PHYCTL); 111 + temp |= usbc_bit; 112 + writeb(temp, phy_data->base + REG_PHYCTL); 113 + 114 + temp = readb(phy_data->base + REG_PHYCTL); 115 + temp &= ~usbc_bit; 116 + writeb(temp, phy_data->base + REG_PHYCTL); 117 + 118 + data >>= 1; 119 + } 120 + mutex_unlock(&phy_data->mutex); 121 + } 122 + 123 + static void sun4i_usb_phy_passby(struct sun4i_usb_phy *phy, int enable) 124 + { 125 + u32 bits, reg_value; 126 + 127 + if (!phy->pmu) 128 + return; 129 + 130 + bits = SUNXI_AHB_ICHR8_EN | SUNXI_AHB_INCR4_BURST_EN | 131 + SUNXI_AHB_INCRX_ALIGN_EN | 
SUNXI_ULPI_BYPASS_EN; 132 + 133 + reg_value = readl(phy->pmu); 134 + 135 + if (enable) 136 + reg_value |= bits; 137 + else 138 + reg_value &= ~bits; 139 + 140 + writel(reg_value, phy->pmu); 141 + } 142 + 143 + static int sun4i_usb_phy_init(struct phy *_phy) 144 + { 145 + struct sun4i_usb_phy *phy = phy_get_drvdata(_phy); 146 + struct sun4i_usb_phy_data *data = to_sun4i_usb_phy_data(phy); 147 + int ret; 148 + 149 + ret = clk_prepare_enable(data->clk); 150 + if (ret) 151 + return ret; 152 + 153 + ret = reset_control_deassert(phy->reset); 154 + if (ret) { 155 + clk_disable_unprepare(data->clk); 156 + return ret; 157 + } 158 + 159 + /* Adjust PHY's magnitude and rate */ 160 + sun4i_usb_phy_write(phy, PHY_TX_AMPLITUDE_TUNE, 0x14, 5); 161 + 162 + /* Disconnect threshold adjustment */ 163 + sun4i_usb_phy_write(phy, PHY_DISCON_TH_SEL, data->disc_thresh, 2); 164 + 165 + sun4i_usb_phy_passby(phy, 1); 166 + 167 + return 0; 168 + } 169 + 170 + static int sun4i_usb_phy_exit(struct phy *_phy) 171 + { 172 + struct sun4i_usb_phy *phy = phy_get_drvdata(_phy); 173 + struct sun4i_usb_phy_data *data = to_sun4i_usb_phy_data(phy); 174 + 175 + sun4i_usb_phy_passby(phy, 0); 176 + reset_control_assert(phy->reset); 177 + clk_disable_unprepare(data->clk); 178 + 179 + return 0; 180 + } 181 + 182 + static int sun4i_usb_phy_power_on(struct phy *_phy) 183 + { 184 + struct sun4i_usb_phy *phy = phy_get_drvdata(_phy); 185 + int ret = 0; 186 + 187 + if (phy->vbus) 188 + ret = regulator_enable(phy->vbus); 189 + 190 + return ret; 191 + } 192 + 193 + static int sun4i_usb_phy_power_off(struct phy *_phy) 194 + { 195 + struct sun4i_usb_phy *phy = phy_get_drvdata(_phy); 196 + 197 + if (phy->vbus) 198 + regulator_disable(phy->vbus); 199 + 200 + return 0; 201 + } 202 + 203 + static struct phy_ops sun4i_usb_phy_ops = { 204 + .init = sun4i_usb_phy_init, 205 + .exit = sun4i_usb_phy_exit, 206 + .power_on = sun4i_usb_phy_power_on, 207 + .power_off = sun4i_usb_phy_power_off, 208 + .owner = THIS_MODULE, 209 + }; 
210 + 211 + static struct phy *sun4i_usb_phy_xlate(struct device *dev, 212 + struct of_phandle_args *args) 213 + { 214 + struct sun4i_usb_phy_data *data = dev_get_drvdata(dev); 215 + 216 + if (WARN_ON(args->args[0] == 0 || args->args[0] >= data->num_phys)) 217 + return ERR_PTR(-ENODEV); 218 + 219 + return data->phys[args->args[0]].phy; 220 + } 221 + 222 + static int sun4i_usb_phy_probe(struct platform_device *pdev) 223 + { 224 + struct sun4i_usb_phy_data *data; 225 + struct device *dev = &pdev->dev; 226 + struct device_node *np = dev->of_node; 227 + void __iomem *pmu = NULL; 228 + struct phy_provider *phy_provider; 229 + struct reset_control *reset; 230 + struct regulator *vbus; 231 + struct resource *res; 232 + struct phy *phy; 233 + char name[16]; 234 + int i; 235 + 236 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 237 + if (!data) 238 + return -ENOMEM; 239 + 240 + mutex_init(&data->mutex); 241 + 242 + if (of_device_is_compatible(np, "allwinner,sun5i-a13-usb-phy")) 243 + data->num_phys = 2; 244 + else 245 + data->num_phys = 3; 246 + 247 + if (of_device_is_compatible(np, "allwinner,sun4i-a10-usb-phy")) 248 + data->disc_thresh = 3; 249 + else 250 + data->disc_thresh = 2; 251 + 252 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "phy_ctrl"); 253 + data->base = devm_ioremap_resource(dev, res); 254 + if (IS_ERR(data->base)) 255 + return PTR_ERR(data->base); 256 + 257 + data->clk = devm_clk_get(dev, "usb_phy"); 258 + if (IS_ERR(data->clk)) { 259 + dev_err(dev, "could not get usb_phy clock\n"); 260 + return PTR_ERR(data->clk); 261 + } 262 + 263 + /* Skip 0, 0 is the phy for otg which is not yet supported. 
*/ 264 + for (i = 1; i < data->num_phys; i++) { 265 + snprintf(name, sizeof(name), "usb%d_vbus", i); 266 + vbus = devm_regulator_get_optional(dev, name); 267 + if (IS_ERR(vbus)) { 268 + if (PTR_ERR(vbus) == -EPROBE_DEFER) 269 + return -EPROBE_DEFER; 270 + vbus = NULL; 271 + } 272 + 273 + snprintf(name, sizeof(name), "usb%d_reset", i); 274 + reset = devm_reset_control_get(dev, name); 275 + if (IS_ERR(reset)) { 276 + dev_err(dev, "failed to get reset %s\n", name); 277 + return PTR_ERR(reset); 278 + } 279 + 280 + if (i) { /* No pmu for usbc0 */ 281 + snprintf(name, sizeof(name), "pmu%d", i); 282 + res = platform_get_resource_byname(pdev, 283 + IORESOURCE_MEM, name); 284 + pmu = devm_ioremap_resource(dev, res); 285 + if (IS_ERR(pmu)) 286 + return PTR_ERR(pmu); 287 + } 288 + 289 + phy = devm_phy_create(dev, &sun4i_usb_phy_ops, NULL); 290 + if (IS_ERR(phy)) { 291 + dev_err(dev, "failed to create PHY %d\n", i); 292 + return PTR_ERR(phy); 293 + } 294 + 295 + data->phys[i].phy = phy; 296 + data->phys[i].pmu = pmu; 297 + data->phys[i].vbus = vbus; 298 + data->phys[i].reset = reset; 299 + data->phys[i].index = i; 300 + phy_set_drvdata(phy, &data->phys[i]); 301 + } 302 + 303 + dev_set_drvdata(dev, data); 304 + phy_provider = devm_of_phy_provider_register(dev, sun4i_usb_phy_xlate); 305 + if (IS_ERR(phy_provider)) 306 + return PTR_ERR(phy_provider); 307 + 308 + return 0; 309 + } 310 + 311 + static const struct of_device_id sun4i_usb_phy_of_match[] = { 312 + { .compatible = "allwinner,sun4i-a10-usb-phy" }, 313 + { .compatible = "allwinner,sun5i-a13-usb-phy" }, 314 + { .compatible = "allwinner,sun7i-a20-usb-phy" }, 315 + { }, 316 + }; 317 + MODULE_DEVICE_TABLE(of, sun4i_usb_phy_of_match); 318 + 319 + static struct platform_driver sun4i_usb_phy_driver = { 320 + .probe = sun4i_usb_phy_probe, 321 + .driver = { 322 + .of_match_table = sun4i_usb_phy_of_match, 323 + .name = "sun4i-usb-phy", 324 + .owner = THIS_MODULE, 325 + } 326 + }; 327 + module_platform_driver(sun4i_usb_phy_driver); 
328 + 329 + MODULE_DESCRIPTION("Allwinner sun4i USB phy driver"); 330 + MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>"); 331 + MODULE_LICENSE("GPL v2");
+470
drivers/phy/phy-ti-pipe3.c
··· 1 + /* 2 + * phy-ti-pipe3 - PIPE3 PHY driver. 3 + * 4 + * Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License as published by 7 + * the Free Software Foundation; either version 2 of the License, or 8 + * (at your option) any later version. 9 + * 10 + * Author: Kishon Vijay Abraham I <kishon@ti.com> 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + */ 18 + 19 + #include <linux/module.h> 20 + #include <linux/platform_device.h> 21 + #include <linux/slab.h> 22 + #include <linux/phy/phy.h> 23 + #include <linux/of.h> 24 + #include <linux/clk.h> 25 + #include <linux/err.h> 26 + #include <linux/io.h> 27 + #include <linux/pm_runtime.h> 28 + #include <linux/delay.h> 29 + #include <linux/phy/omap_control_phy.h> 30 + #include <linux/of_platform.h> 31 + 32 + #define PLL_STATUS 0x00000004 33 + #define PLL_GO 0x00000008 34 + #define PLL_CONFIGURATION1 0x0000000C 35 + #define PLL_CONFIGURATION2 0x00000010 36 + #define PLL_CONFIGURATION3 0x00000014 37 + #define PLL_CONFIGURATION4 0x00000020 38 + 39 + #define PLL_REGM_MASK 0x001FFE00 40 + #define PLL_REGM_SHIFT 0x9 41 + #define PLL_REGM_F_MASK 0x0003FFFF 42 + #define PLL_REGM_F_SHIFT 0x0 43 + #define PLL_REGN_MASK 0x000001FE 44 + #define PLL_REGN_SHIFT 0x1 45 + #define PLL_SELFREQDCO_MASK 0x0000000E 46 + #define PLL_SELFREQDCO_SHIFT 0x1 47 + #define PLL_SD_MASK 0x0003FC00 48 + #define PLL_SD_SHIFT 10 49 + #define SET_PLL_GO 0x1 50 + #define PLL_LDOPWDN BIT(15) 51 + #define PLL_TICOPWDN BIT(16) 52 + #define PLL_LOCK 0x2 53 + #define PLL_IDLE 0x1 54 + 55 + /* 56 + * This is an Empirical value that works, need to confirm the actual 57 + * value required for 
the PIPE3PHY_PLL_CONFIGURATION2.PLL_IDLE status 58 + * to be correctly reflected in the PIPE3PHY_PLL_STATUS register. 59 + */ 60 + #define PLL_IDLE_TIME 100 /* in milliseconds */ 61 + #define PLL_LOCK_TIME 100 /* in milliseconds */ 62 + 63 + struct pipe3_dpll_params { 64 + u16 m; 65 + u8 n; 66 + u8 freq:3; 67 + u8 sd; 68 + u32 mf; 69 + }; 70 + 71 + struct pipe3_dpll_map { 72 + unsigned long rate; 73 + struct pipe3_dpll_params params; 74 + }; 75 + 76 + struct ti_pipe3 { 77 + void __iomem *pll_ctrl_base; 78 + struct device *dev; 79 + struct device *control_dev; 80 + struct clk *wkupclk; 81 + struct clk *sys_clk; 82 + struct clk *refclk; 83 + struct pipe3_dpll_map *dpll_map; 84 + }; 85 + 86 + static struct pipe3_dpll_map dpll_map_usb[] = { 87 + {12000000, {1250, 5, 4, 20, 0} }, /* 12 MHz */ 88 + {16800000, {3125, 20, 4, 20, 0} }, /* 16.8 MHz */ 89 + {19200000, {1172, 8, 4, 20, 65537} }, /* 19.2 MHz */ 90 + {20000000, {1000, 7, 4, 10, 0} }, /* 20 MHz */ 91 + {26000000, {1250, 12, 4, 20, 0} }, /* 26 MHz */ 92 + {38400000, {3125, 47, 4, 20, 92843} }, /* 38.4 MHz */ 93 + { }, /* Terminator */ 94 + }; 95 + 96 + static struct pipe3_dpll_map dpll_map_sata[] = { 97 + {12000000, {1000, 7, 4, 6, 0} }, /* 12 MHz */ 98 + {16800000, {714, 7, 4, 6, 0} }, /* 16.8 MHz */ 99 + {19200000, {625, 7, 4, 6, 0} }, /* 19.2 MHz */ 100 + {20000000, {600, 7, 4, 6, 0} }, /* 20 MHz */ 101 + {26000000, {461, 7, 4, 6, 0} }, /* 26 MHz */ 102 + {38400000, {312, 7, 4, 6, 0} }, /* 38.4 MHz */ 103 + { }, /* Terminator */ 104 + }; 105 + 106 + static inline u32 ti_pipe3_readl(void __iomem *addr, unsigned offset) 107 + { 108 + return __raw_readl(addr + offset); 109 + } 110 + 111 + static inline void ti_pipe3_writel(void __iomem *addr, unsigned offset, 112 + u32 data) 113 + { 114 + __raw_writel(data, addr + offset); 115 + } 116 + 117 + static struct pipe3_dpll_params *ti_pipe3_get_dpll_params(struct ti_pipe3 *phy) 118 + { 119 + unsigned long rate; 120 + struct pipe3_dpll_map *dpll_map = phy->dpll_map; 121 + 
122 + rate = clk_get_rate(phy->sys_clk); 123 + 124 + for (; dpll_map->rate; dpll_map++) { 125 + if (rate == dpll_map->rate) 126 + return &dpll_map->params; 127 + } 128 + 129 + dev_err(phy->dev, "No DPLL configuration for %lu Hz SYS CLK\n", rate); 130 + 131 + return NULL; 132 + } 133 + 134 + static int ti_pipe3_power_off(struct phy *x) 135 + { 136 + struct ti_pipe3 *phy = phy_get_drvdata(x); 137 + 138 + omap_control_phy_power(phy->control_dev, 0); 139 + 140 + return 0; 141 + } 142 + 143 + static int ti_pipe3_power_on(struct phy *x) 144 + { 145 + struct ti_pipe3 *phy = phy_get_drvdata(x); 146 + 147 + omap_control_phy_power(phy->control_dev, 1); 148 + 149 + return 0; 150 + } 151 + 152 + static int ti_pipe3_dpll_wait_lock(struct ti_pipe3 *phy) 153 + { 154 + u32 val; 155 + unsigned long timeout; 156 + 157 + timeout = jiffies + msecs_to_jiffies(PLL_LOCK_TIME); 158 + do { 159 + cpu_relax(); 160 + val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_STATUS); 161 + if (val & PLL_LOCK) 162 + break; 163 + } while (!time_after(jiffies, timeout)); 164 + 165 + if (!(val & PLL_LOCK)) { 166 + dev_err(phy->dev, "DPLL failed to lock\n"); 167 + return -EBUSY; 168 + } 169 + 170 + return 0; 171 + } 172 + 173 + static int ti_pipe3_dpll_program(struct ti_pipe3 *phy) 174 + { 175 + u32 val; 176 + struct pipe3_dpll_params *dpll_params; 177 + 178 + dpll_params = ti_pipe3_get_dpll_params(phy); 179 + if (!dpll_params) 180 + return -EINVAL; 181 + 182 + val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_CONFIGURATION1); 183 + val &= ~PLL_REGN_MASK; 184 + val |= dpll_params->n << PLL_REGN_SHIFT; 185 + ti_pipe3_writel(phy->pll_ctrl_base, PLL_CONFIGURATION1, val); 186 + 187 + val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_CONFIGURATION2); 188 + val &= ~PLL_SELFREQDCO_MASK; 189 + val |= dpll_params->freq << PLL_SELFREQDCO_SHIFT; 190 + ti_pipe3_writel(phy->pll_ctrl_base, PLL_CONFIGURATION2, val); 191 + 192 + val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_CONFIGURATION1); 193 + val &= ~PLL_REGM_MASK; 194 + val |= 
dpll_params->m << PLL_REGM_SHIFT; 195 + ti_pipe3_writel(phy->pll_ctrl_base, PLL_CONFIGURATION1, val); 196 + 197 + val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_CONFIGURATION4); 198 + val &= ~PLL_REGM_F_MASK; 199 + val |= dpll_params->mf << PLL_REGM_F_SHIFT; 200 + ti_pipe3_writel(phy->pll_ctrl_base, PLL_CONFIGURATION4, val); 201 + 202 + val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_CONFIGURATION3); 203 + val &= ~PLL_SD_MASK; 204 + val |= dpll_params->sd << PLL_SD_SHIFT; 205 + ti_pipe3_writel(phy->pll_ctrl_base, PLL_CONFIGURATION3, val); 206 + 207 + ti_pipe3_writel(phy->pll_ctrl_base, PLL_GO, SET_PLL_GO); 208 + 209 + return ti_pipe3_dpll_wait_lock(phy); 210 + } 211 + 212 + static int ti_pipe3_init(struct phy *x) 213 + { 214 + struct ti_pipe3 *phy = phy_get_drvdata(x); 215 + u32 val; 216 + int ret = 0; 217 + 218 + /* Bring it out of IDLE if it is IDLE */ 219 + val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_CONFIGURATION2); 220 + if (val & PLL_IDLE) { 221 + val &= ~PLL_IDLE; 222 + ti_pipe3_writel(phy->pll_ctrl_base, PLL_CONFIGURATION2, val); 223 + ret = ti_pipe3_dpll_wait_lock(phy); 224 + } 225 + 226 + /* Program the DPLL only if not locked */ 227 + val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_STATUS); 228 + if (!(val & PLL_LOCK)) 229 + if (ti_pipe3_dpll_program(phy)) 230 + return -EINVAL; 231 + 232 + return ret; 233 + } 234 + 235 + static int ti_pipe3_exit(struct phy *x) 236 + { 237 + struct ti_pipe3 *phy = phy_get_drvdata(x); 238 + u32 val; 239 + unsigned long timeout; 240 + 241 + /* SATA DPLL can't be powered down due to Errata i783 */ 242 + if (of_device_is_compatible(phy->dev->of_node, "ti,phy-pipe3-sata")) 243 + return 0; 244 + 245 + /* Put DPLL in IDLE mode */ 246 + val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_CONFIGURATION2); 247 + val |= PLL_IDLE; 248 + ti_pipe3_writel(phy->pll_ctrl_base, PLL_CONFIGURATION2, val); 249 + 250 + /* wait for LDO and Oscillator to power down */ 251 + timeout = jiffies + msecs_to_jiffies(PLL_IDLE_TIME); 252 + do { 253 + cpu_relax(); 254 
+ val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_STATUS); 255 + if ((val & PLL_TICOPWDN) && (val & PLL_LDOPWDN)) 256 + break; 257 + } while (!time_after(jiffies, timeout)); 258 + 259 + if (!(val & PLL_TICOPWDN) || !(val & PLL_LDOPWDN)) { 260 + dev_err(phy->dev, "Failed to power down: PLL_STATUS 0x%x\n", 261 + val); 262 + return -EBUSY; 263 + } 264 + 265 + return 0; 266 + } 267 + static struct phy_ops ops = { 268 + .init = ti_pipe3_init, 269 + .exit = ti_pipe3_exit, 270 + .power_on = ti_pipe3_power_on, 271 + .power_off = ti_pipe3_power_off, 272 + .owner = THIS_MODULE, 273 + }; 274 + 275 + #ifdef CONFIG_OF 276 + static const struct of_device_id ti_pipe3_id_table[]; 277 + #endif 278 + 279 + static int ti_pipe3_probe(struct platform_device *pdev) 280 + { 281 + struct ti_pipe3 *phy; 282 + struct phy *generic_phy; 283 + struct phy_provider *phy_provider; 284 + struct resource *res; 285 + struct device_node *node = pdev->dev.of_node; 286 + struct device_node *control_node; 287 + struct platform_device *control_pdev; 288 + const struct of_device_id *match; 289 + 290 + match = of_match_device(of_match_ptr(ti_pipe3_id_table), &pdev->dev); 291 + if (!match) 292 + return -EINVAL; 293 + 294 + phy = devm_kzalloc(&pdev->dev, sizeof(*phy), GFP_KERNEL); 295 + if (!phy) { 296 + dev_err(&pdev->dev, "unable to alloc mem for TI PIPE3 PHY\n"); 297 + return -ENOMEM; 298 + } 299 + 300 + phy->dpll_map = (struct pipe3_dpll_map *)match->data; 301 + if (!phy->dpll_map) { 302 + dev_err(&pdev->dev, "no DPLL data\n"); 303 + return -EINVAL; 304 + } 305 + 306 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pll_ctrl"); 307 + phy->pll_ctrl_base = devm_ioremap_resource(&pdev->dev, res); 308 + if (IS_ERR(phy->pll_ctrl_base)) 309 + return PTR_ERR(phy->pll_ctrl_base); 310 + 311 + phy->dev = &pdev->dev; 312 + 313 + if (!of_device_is_compatible(node, "ti,phy-pipe3-sata")) { 314 + 315 + phy->wkupclk = devm_clk_get(phy->dev, "wkupclk"); 316 + if (IS_ERR(phy->wkupclk)) { 317 + dev_err(&pdev->dev, 
"unable to get wkupclk\n"); 318 + return PTR_ERR(phy->wkupclk); 319 + } 320 + 321 + phy->refclk = devm_clk_get(phy->dev, "refclk"); 322 + if (IS_ERR(phy->refclk)) { 323 + dev_err(&pdev->dev, "unable to get refclk\n"); 324 + return PTR_ERR(phy->refclk); 325 + } 326 + } else { 327 + phy->wkupclk = ERR_PTR(-ENODEV); 328 + phy->refclk = ERR_PTR(-ENODEV); 329 + } 330 + 331 + phy->sys_clk = devm_clk_get(phy->dev, "sysclk"); 332 + if (IS_ERR(phy->sys_clk)) { 333 + dev_err(&pdev->dev, "unable to get sysclk\n"); 334 + return -EINVAL; 335 + } 336 + 337 + control_node = of_parse_phandle(node, "ctrl-module", 0); 338 + if (!control_node) { 339 + dev_err(&pdev->dev, "Failed to get control device phandle\n"); 340 + return -EINVAL; 341 + } 342 + 343 + control_pdev = of_find_device_by_node(control_node); 344 + if (!control_pdev) { 345 + dev_err(&pdev->dev, "Failed to get control device\n"); 346 + return -EINVAL; 347 + } 348 + 349 + phy->control_dev = &control_pdev->dev; 350 + 351 + omap_control_phy_power(phy->control_dev, 0); 352 + 353 + platform_set_drvdata(pdev, phy); 354 + pm_runtime_enable(phy->dev); 355 + 356 + generic_phy = devm_phy_create(phy->dev, &ops, NULL); 357 + if (IS_ERR(generic_phy)) 358 + return PTR_ERR(generic_phy); 359 + 360 + phy_set_drvdata(generic_phy, phy); 361 + phy_provider = devm_of_phy_provider_register(phy->dev, 362 + of_phy_simple_xlate); 363 + if (IS_ERR(phy_provider)) 364 + return PTR_ERR(phy_provider); 365 + 366 + pm_runtime_get(&pdev->dev); 367 + 368 + return 0; 369 + } 370 + 371 + static int ti_pipe3_remove(struct platform_device *pdev) 372 + { 373 + if (!pm_runtime_suspended(&pdev->dev)) 374 + pm_runtime_put(&pdev->dev); 375 + pm_runtime_disable(&pdev->dev); 376 + 377 + return 0; 378 + } 379 + 380 + #ifdef CONFIG_PM_RUNTIME 381 + 382 + static int ti_pipe3_runtime_suspend(struct device *dev) 383 + { 384 + struct ti_pipe3 *phy = dev_get_drvdata(dev); 385 + 386 + if (!IS_ERR(phy->wkupclk)) 387 + clk_disable_unprepare(phy->wkupclk); 388 + if 
(!IS_ERR(phy->refclk)) 389 + clk_disable_unprepare(phy->refclk); 390 + 391 + return 0; 392 + } 393 + 394 + static int ti_pipe3_runtime_resume(struct device *dev) 395 + { 396 + u32 ret = 0; 397 + struct ti_pipe3 *phy = dev_get_drvdata(dev); 398 + 399 + if (!IS_ERR(phy->refclk)) { 400 + ret = clk_prepare_enable(phy->refclk); 401 + if (ret) { 402 + dev_err(phy->dev, "Failed to enable refclk %d\n", ret); 403 + goto err1; 404 + } 405 + } 406 + 407 + if (!IS_ERR(phy->wkupclk)) { 408 + ret = clk_prepare_enable(phy->wkupclk); 409 + if (ret) { 410 + dev_err(phy->dev, "Failed to enable wkupclk %d\n", ret); 411 + goto err2; 412 + } 413 + } 414 + 415 + return 0; 416 + 417 + err2: 418 + if (!IS_ERR(phy->refclk)) 419 + clk_disable_unprepare(phy->refclk); 420 + 421 + err1: 422 + return ret; 423 + } 424 + 425 + static const struct dev_pm_ops ti_pipe3_pm_ops = { 426 + SET_RUNTIME_PM_OPS(ti_pipe3_runtime_suspend, 427 + ti_pipe3_runtime_resume, NULL) 428 + }; 429 + 430 + #define DEV_PM_OPS (&ti_pipe3_pm_ops) 431 + #else 432 + #define DEV_PM_OPS NULL 433 + #endif 434 + 435 + #ifdef CONFIG_OF 436 + static const struct of_device_id ti_pipe3_id_table[] = { 437 + { 438 + .compatible = "ti,phy-usb3", 439 + .data = dpll_map_usb, 440 + }, 441 + { 442 + .compatible = "ti,omap-usb3", 443 + .data = dpll_map_usb, 444 + }, 445 + { 446 + .compatible = "ti,phy-pipe3-sata", 447 + .data = dpll_map_sata, 448 + }, 449 + {} 450 + }; 451 + MODULE_DEVICE_TABLE(of, ti_pipe3_id_table); 452 + #endif 453 + 454 + static struct platform_driver ti_pipe3_driver = { 455 + .probe = ti_pipe3_probe, 456 + .remove = ti_pipe3_remove, 457 + .driver = { 458 + .name = "ti-pipe3", 459 + .owner = THIS_MODULE, 460 + .pm = DEV_PM_OPS, 461 + .of_match_table = of_match_ptr(ti_pipe3_id_table), 462 + }, 463 + }; 464 + 465 + module_platform_driver(ti_pipe3_driver); 466 + 467 + MODULE_ALIAS("platform: ti_pipe3"); 468 + MODULE_AUTHOR("Texas Instruments Inc."); 469 + MODULE_DESCRIPTION("TI PIPE3 phy driver"); 470 + 
MODULE_LICENSE("GPL v2");
+3 -3
drivers/phy/phy-twl4030-usb.c
··· 338 338 dev_err(twl->dev, "unsupported T2 transceiver mode %d\n", 339 339 mode); 340 340 break; 341 - }; 341 + } 342 342 } 343 343 344 344 static void twl4030_i2c_access(struct twl4030_usb *twl, int on) ··· 661 661 struct phy_provider *phy_provider; 662 662 struct phy_init_data *init_data = NULL; 663 663 664 - twl = devm_kzalloc(&pdev->dev, sizeof *twl, GFP_KERNEL); 664 + twl = devm_kzalloc(&pdev->dev, sizeof(*twl), GFP_KERNEL); 665 665 if (!twl) 666 666 return -ENOMEM; 667 667 ··· 676 676 return -EINVAL; 677 677 } 678 678 679 - otg = devm_kzalloc(&pdev->dev, sizeof *otg, GFP_KERNEL); 679 + otg = devm_kzalloc(&pdev->dev, sizeof(*otg), GFP_KERNEL); 680 680 if (!otg) 681 681 return -ENOMEM; 682 682
+1750
drivers/phy/phy-xgene.c
··· 1 + /* 2 + * AppliedMicro X-Gene Multi-purpose PHY driver 3 + * 4 + * Copyright (c) 2014, Applied Micro Circuits Corporation 5 + * Author: Loc Ho <lho@apm.com> 6 + * Tuan Phan <tphan@apm.com> 7 + * Suman Tripathi <stripathi@apm.com> 8 + * 9 + * This program is free software; you can redistribute it and/or modify it 10 + * under the terms of the GNU General Public License as published by the 11 + * Free Software Foundation; either version 2 of the License, or (at your 12 + * option) any later version. 13 + * 14 + * This program is distributed in the hope that it will be useful, 15 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 16 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 17 + * GNU General Public License for more details. 18 + * 19 + * You should have received a copy of the GNU General Public License 20 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 21 + * 22 + * The APM X-Gene PHY consists of two PLL clock macros (CMUs) and lanes. 23 + * The first PLL clock macro is used for the internal reference clock. The second 24 + * PLL clock macro is used to generate the clock for the PHY. This driver 25 + * configures the first PLL CMU, the second PLL CMU, and programs the PHY to 26 + * operate according to the mode of operation. The first PLL CMU is only 27 + * required if the internal clock is enabled. 28 + * 29 + * Logical Layer Out Of HW module units: 30 + * 31 + * ----------------- 32 + * | Internal | |------| 33 + * | Ref PLL CMU |----| | ------------- --------- 34 + * ------------ ---- | MUX |-----|PHY PLL CMU|----| Serdes| 35 + * | | | | --------- 36 + * External Clock ------| | ------------- 37 + * |------| 38 + * 39 + * The Ref PLL CMU CSR (Configuration System Registers) is accessed 40 + * indirectly from the SDS offset at 0x2000. It is only required for 41 + * the internal reference clock. 42 + * The PHY PLL CMU CSR is accessed indirectly from the SDS offset at 0x0000.
43 + * The Serdes CSR is accessed indirectly from the SDS offset at 0x0400. 44 + * 45 + * The Ref PLL CMU can be located within the same PHY IP or outside the PHY IP 46 + * due to shared Ref PLL CMU. For PHY with Ref PLL CMU shared with another IP, 47 + * it is located outside the PHY IP. This is the case for the PHY located 48 + * at 0x1f23a000 (SATA Port 4/5). For such a PHY, another resource is required 49 + * to locate the SDS/Ref PLL CMU module and to enable its clock for that IP. 50 + * 51 + * Currently, this driver only supports Gen3 SATA mode with external clock. 52 + */ 53 + #include <linux/module.h> 54 + #include <linux/platform_device.h> 55 + #include <linux/io.h> 56 + #include <linux/delay.h> 57 + #include <linux/phy/phy.h> 58 + #include <linux/clk.h> 59 + 60 + /* Max 2 lanes per PHY unit */ 61 + #define MAX_LANE 2 62 + 63 + /* Register offset inside the PHY */ 64 + #define SERDES_PLL_INDIRECT_OFFSET 0x0000 65 + #define SERDES_PLL_REF_INDIRECT_OFFSET 0x2000 66 + #define SERDES_INDIRECT_OFFSET 0x0400 67 + #define SERDES_LANE_STRIDE 0x0200 68 + 69 + /* Some default Serdes parameters */ 70 + #define DEFAULT_SATA_TXBOOST_GAIN { 0x1e, 0x1e, 0x1e } 71 + #define DEFAULT_SATA_TXEYEDIRECTION { 0x0, 0x0, 0x0 } 72 + #define DEFAULT_SATA_TXEYETUNING { 0xa, 0xa, 0xa } 73 + #define DEFAULT_SATA_SPD_SEL { 0x1, 0x3, 0x7 } 74 + #define DEFAULT_SATA_TXAMP { 0x8, 0x8, 0x8 } 75 + #define DEFAULT_SATA_TXCN1 { 0x2, 0x2, 0x2 } 76 + #define DEFAULT_SATA_TXCN2 { 0x0, 0x0, 0x0 } 77 + #define DEFAULT_SATA_TXCP1 { 0xa, 0xa, 0xa } 78 + 79 + #define SATA_SPD_SEL_GEN3 0x7 80 + #define SATA_SPD_SEL_GEN2 0x3 81 + #define SATA_SPD_SEL_GEN1 0x1 82 + 83 + #define SSC_DISABLE 0 84 + #define SSC_ENABLE 1 85 + 86 + #define FBDIV_VAL_50M 0x77 87 + #define REFDIV_VAL_50M 0x1 88 + #define FBDIV_VAL_100M 0x3B 89 + #define REFDIV_VAL_100M 0x0 90 + 91 + /* SATA Clock/Reset CSR */ 92 + #define SATACLKENREG 0x00000000 93 + #define SATA0_CORE_CLKEN 0x00000002 94 + #define SATA1_CORE_CLKEN 0x00000004
95 + #define SATASRESETREG 0x00000004 96 + #define SATA_MEM_RESET_MASK 0x00000020 97 + #define SATA_MEM_RESET_RD(src) (((src) & 0x00000020) >> 5) 98 + #define SATA_SDS_RESET_MASK 0x00000004 99 + #define SATA_CSR_RESET_MASK 0x00000001 100 + #define SATA_CORE_RESET_MASK 0x00000002 101 + #define SATA_PMCLK_RESET_MASK 0x00000010 102 + #define SATA_PCLK_RESET_MASK 0x00000008 103 + 104 + /* SDS CSR used for PHY Indirect access */ 105 + #define SATA_ENET_SDS_PCS_CTL0 0x00000000 106 + #define REGSPEC_CFG_I_TX_WORDMODE0_SET(dst, src) \ 107 + (((dst) & ~0x00070000) | (((u32) (src) << 16) & 0x00070000)) 108 + #define REGSPEC_CFG_I_RX_WORDMODE0_SET(dst, src) \ 109 + (((dst) & ~0x00e00000) | (((u32) (src) << 21) & 0x00e00000)) 110 + #define SATA_ENET_SDS_CTL0 0x0000000c 111 + #define REGSPEC_CFG_I_CUSTOMER_PIN_MODE0_SET(dst, src) \ 112 + (((dst) & ~0x00007fff) | (((u32) (src)) & 0x00007fff)) 113 + #define SATA_ENET_SDS_CTL1 0x00000010 114 + #define CFG_I_SPD_SEL_CDR_OVR1_SET(dst, src) \ 115 + (((dst) & ~0x0000000f) | (((u32) (src)) & 0x0000000f)) 116 + #define SATA_ENET_SDS_RST_CTL 0x00000024 117 + #define SATA_ENET_SDS_IND_CMD_REG 0x0000003c 118 + #define CFG_IND_WR_CMD_MASK 0x00000001 119 + #define CFG_IND_RD_CMD_MASK 0x00000002 120 + #define CFG_IND_CMD_DONE_MASK 0x00000004 121 + #define CFG_IND_ADDR_SET(dst, src) \ 122 + (((dst) & ~0x003ffff0) | (((u32) (src) << 4) & 0x003ffff0)) 123 + #define SATA_ENET_SDS_IND_RDATA_REG 0x00000040 124 + #define SATA_ENET_SDS_IND_WDATA_REG 0x00000044 125 + #define SATA_ENET_CLK_MACRO_REG 0x0000004c 126 + #define I_RESET_B_SET(dst, src) \ 127 + (((dst) & ~0x00000001) | (((u32) (src)) & 0x00000001)) 128 + #define I_PLL_FBDIV_SET(dst, src) \ 129 + (((dst) & ~0x001ff000) | (((u32) (src) << 12) & 0x001ff000)) 130 + #define I_CUSTOMEROV_SET(dst, src) \ 131 + (((dst) & ~0x00000f80) | (((u32) (src) << 7) & 0x00000f80)) 132 + #define O_PLL_LOCK_RD(src) (((src) & 0x40000000) >> 30) 133 + #define O_PLL_READY_RD(src) (((src) & 0x80000000) >> 31) 134 + 
135 + /* PLL Clock Macro Unit (CMU) CSR accessing from SDS indirectly */ 136 + #define CMU_REG0 0x00000 137 + #define CMU_REG0_PLL_REF_SEL_MASK 0x00002000 138 + #define CMU_REG0_PLL_REF_SEL_SET(dst, src) \ 139 + (((dst) & ~0x00002000) | (((u32) (src) << 13) & 0x00002000)) 140 + #define CMU_REG0_PDOWN_MASK 0x00004000 141 + #define CMU_REG0_CAL_COUNT_RESOL_SET(dst, src) \ 142 + (((dst) & ~0x000000e0) | (((u32) (src) << 5) & 0x000000e0)) 143 + #define CMU_REG1 0x00002 144 + #define CMU_REG1_PLL_CP_SET(dst, src) \ 145 + (((dst) & ~0x00003c00) | (((u32) (src) << 10) & 0x00003c00)) 146 + #define CMU_REG1_PLL_MANUALCAL_SET(dst, src) \ 147 + (((dst) & ~0x00000008) | (((u32) (src) << 3) & 0x00000008)) 148 + #define CMU_REG1_PLL_CP_SEL_SET(dst, src) \ 149 + (((dst) & ~0x000003e0) | (((u32) (src) << 5) & 0x000003e0)) 150 + #define CMU_REG1_REFCLK_CMOS_SEL_MASK 0x00000001 151 + #define CMU_REG1_REFCLK_CMOS_SEL_SET(dst, src) \ 152 + (((dst) & ~0x00000001) | (((u32) (src) << 0) & 0x00000001)) 153 + #define CMU_REG2 0x00004 154 + #define CMU_REG2_PLL_REFDIV_SET(dst, src) \ 155 + (((dst) & ~0x0000c000) | (((u32) (src) << 14) & 0x0000c000)) 156 + #define CMU_REG2_PLL_LFRES_SET(dst, src) \ 157 + (((dst) & ~0x0000001e) | (((u32) (src) << 1) & 0x0000001e)) 158 + #define CMU_REG2_PLL_FBDIV_SET(dst, src) \ 159 + (((dst) & ~0x00003fe0) | (((u32) (src) << 5) & 0x00003fe0)) 160 + #define CMU_REG3 0x00006 161 + #define CMU_REG3_VCOVARSEL_SET(dst, src) \ 162 + (((dst) & ~0x0000000f) | (((u32) (src) << 0) & 0x0000000f)) 163 + #define CMU_REG3_VCO_MOMSEL_INIT_SET(dst, src) \ 164 + (((dst) & ~0x000003f0) | (((u32) (src) << 4) & 0x000003f0)) 165 + #define CMU_REG3_VCO_MANMOMSEL_SET(dst, src) \ 166 + (((dst) & ~0x0000fc00) | (((u32) (src) << 10) & 0x0000fc00)) 167 + #define CMU_REG4 0x00008 168 + #define CMU_REG5 0x0000a 169 + #define CMU_REG5_PLL_LFSMCAP_SET(dst, src) \ 170 + (((dst) & ~0x0000c000) | (((u32) (src) << 14) & 0x0000c000)) 171 + #define CMU_REG5_PLL_LOCK_RESOLUTION_SET(dst, src) \ 
172 + (((dst) & ~0x0000000e) | (((u32) (src) << 1) & 0x0000000e)) 173 + #define CMU_REG5_PLL_LFCAP_SET(dst, src) \ 174 + (((dst) & ~0x00003000) | (((u32) (src) << 12) & 0x00003000)) 175 + #define CMU_REG5_PLL_RESETB_MASK 0x00000001 176 + #define CMU_REG6 0x0000c 177 + #define CMU_REG6_PLL_VREGTRIM_SET(dst, src) \ 178 + (((dst) & ~0x00000600) | (((u32) (src) << 9) & 0x00000600)) 179 + #define CMU_REG6_MAN_PVT_CAL_SET(dst, src) \ 180 + (((dst) & ~0x00000004) | (((u32) (src) << 2) & 0x00000004)) 181 + #define CMU_REG7 0x0000e 182 + #define CMU_REG7_PLL_CALIB_DONE_RD(src) ((0x00004000 & (u32) (src)) >> 14) 183 + #define CMU_REG7_VCO_CAL_FAIL_RD(src) ((0x00000c00 & (u32) (src)) >> 10) 184 + #define CMU_REG8 0x00010 185 + #define CMU_REG9 0x00012 186 + #define CMU_REG9_WORD_LEN_8BIT 0x000 187 + #define CMU_REG9_WORD_LEN_10BIT 0x001 188 + #define CMU_REG9_WORD_LEN_16BIT 0x002 189 + #define CMU_REG9_WORD_LEN_20BIT 0x003 190 + #define CMU_REG9_WORD_LEN_32BIT 0x004 191 + #define CMU_REG9_WORD_LEN_40BIT 0x005 192 + #define CMU_REG9_WORD_LEN_64BIT 0x006 193 + #define CMU_REG9_WORD_LEN_66BIT 0x007 194 + #define CMU_REG9_TX_WORD_MODE_CH1_SET(dst, src) \ 195 + (((dst) & ~0x00000380) | (((u32) (src) << 7) & 0x00000380)) 196 + #define CMU_REG9_TX_WORD_MODE_CH0_SET(dst, src) \ 197 + (((dst) & ~0x00000070) | (((u32) (src) << 4) & 0x00000070)) 198 + #define CMU_REG9_PLL_POST_DIVBY2_SET(dst, src) \ 199 + (((dst) & ~0x00000008) | (((u32) (src) << 3) & 0x00000008)) 200 + #define CMU_REG9_VBG_BYPASSB_SET(dst, src) \ 201 + (((dst) & ~0x00000004) | (((u32) (src) << 2) & 0x00000004)) 202 + #define CMU_REG9_IGEN_BYPASS_SET(dst, src) \ 203 + (((dst) & ~0x00000002) | (((u32) (src) << 1) & 0x00000002)) 204 + #define CMU_REG10 0x00014 205 + #define CMU_REG10_VREG_REFSEL_SET(dst, src) \ 206 + (((dst) & ~0x00000001) | (((u32) (src) << 0) & 0x00000001)) 207 + #define CMU_REG11 0x00016 208 + #define CMU_REG12 0x00018 209 + #define CMU_REG12_STATE_DELAY9_SET(dst, src) \ 210 + (((dst) & ~0x000000f0) | 
(((u32) (src) << 4) & 0x000000f0)) 211 + #define CMU_REG13 0x0001a 212 + #define CMU_REG14 0x0001c 213 + #define CMU_REG15 0x0001e 214 + #define CMU_REG16 0x00020 215 + #define CMU_REG16_PVT_DN_MAN_ENA_MASK 0x00000001 216 + #define CMU_REG16_PVT_UP_MAN_ENA_MASK 0x00000002 217 + #define CMU_REG16_VCOCAL_WAIT_BTW_CODE_SET(dst, src) \ 218 + (((dst) & ~0x0000001c) | (((u32) (src) << 2) & 0x0000001c)) 219 + #define CMU_REG16_CALIBRATION_DONE_OVERRIDE_SET(dst, src) \ 220 + (((dst) & ~0x00000040) | (((u32) (src) << 6) & 0x00000040)) 221 + #define CMU_REG16_BYPASS_PLL_LOCK_SET(dst, src) \ 222 + (((dst) & ~0x00000020) | (((u32) (src) << 5) & 0x00000020)) 223 + #define CMU_REG17 0x00022 224 + #define CMU_REG17_PVT_CODE_R2A_SET(dst, src) \ 225 + (((dst) & ~0x00007f00) | (((u32) (src) << 8) & 0x00007f00)) 226 + #define CMU_REG17_RESERVED_7_SET(dst, src) \ 227 + (((dst) & ~0x000000e0) | (((u32) (src) << 5) & 0x000000e0)) 228 + #define CMU_REG17_PVT_TERM_MAN_ENA_MASK 0x00008000 229 + #define CMU_REG18 0x00024 230 + #define CMU_REG19 0x00026 231 + #define CMU_REG20 0x00028 232 + #define CMU_REG21 0x0002a 233 + #define CMU_REG22 0x0002c 234 + #define CMU_REG23 0x0002e 235 + #define CMU_REG24 0x00030 236 + #define CMU_REG25 0x00032 237 + #define CMU_REG26 0x00034 238 + #define CMU_REG26_FORCE_PLL_LOCK_SET(dst, src) \ 239 + (((dst) & ~0x00000001) | (((u32) (src) << 0) & 0x00000001)) 240 + #define CMU_REG27 0x00036 241 + #define CMU_REG28 0x00038 242 + #define CMU_REG29 0x0003a 243 + #define CMU_REG30 0x0003c 244 + #define CMU_REG30_LOCK_COUNT_SET(dst, src) \ 245 + (((dst) & ~0x00000006) | (((u32) (src) << 1) & 0x00000006)) 246 + #define CMU_REG30_PCIE_MODE_SET(dst, src) \ 247 + (((dst) & ~0x00000008) | (((u32) (src) << 3) & 0x00000008)) 248 + #define CMU_REG31 0x0003e 249 + #define CMU_REG32 0x00040 250 + #define CMU_REG32_FORCE_VCOCAL_START_MASK 0x00004000 251 + #define CMU_REG32_PVT_CAL_WAIT_SEL_SET(dst, src) \ 252 + (((dst) & ~0x00000006) | (((u32) (src) << 1) & 0x00000006)) 253 
+ #define CMU_REG32_IREF_ADJ_SET(dst, src) \ 254 + (((dst) & ~0x00000180) | (((u32) (src) << 7) & 0x00000180)) 255 + #define CMU_REG33 0x00042 256 + #define CMU_REG34 0x00044 257 + #define CMU_REG34_VCO_CAL_VTH_LO_MAX_SET(dst, src) \ 258 + (((dst) & ~0x0000000f) | (((u32) (src) << 0) & 0x0000000f)) 259 + #define CMU_REG34_VCO_CAL_VTH_HI_MAX_SET(dst, src) \ 260 + (((dst) & ~0x00000f00) | (((u32) (src) << 8) & 0x00000f00)) 261 + #define CMU_REG34_VCO_CAL_VTH_LO_MIN_SET(dst, src) \ 262 + (((dst) & ~0x000000f0) | (((u32) (src) << 4) & 0x000000f0)) 263 + #define CMU_REG34_VCO_CAL_VTH_HI_MIN_SET(dst, src) \ 264 + (((dst) & ~0x0000f000) | (((u32) (src) << 12) & 0x0000f000)) 265 + #define CMU_REG35 0x00046 266 + #define CMU_REG35_PLL_SSC_MOD_SET(dst, src) \ 267 + (((dst) & ~0x0000fe00) | (((u32) (src) << 9) & 0x0000fe00)) 268 + #define CMU_REG36 0x00048 269 + #define CMU_REG36_PLL_SSC_EN_SET(dst, src) \ 270 + (((dst) & ~0x00000010) | (((u32) (src) << 4) & 0x00000010)) 271 + #define CMU_REG36_PLL_SSC_VSTEP_SET(dst, src) \ 272 + (((dst) & ~0x0000ffc0) | (((u32) (src) << 6) & 0x0000ffc0)) 273 + #define CMU_REG36_PLL_SSC_DSMSEL_SET(dst, src) \ 274 + (((dst) & ~0x00000020) | (((u32) (src) << 5) & 0x00000020)) 275 + #define CMU_REG37 0x0004a 276 + #define CMU_REG38 0x0004c 277 + #define CMU_REG39 0x0004e 278 + 279 + /* PHY lane CSR accessing from SDS indirectly */ 280 + #define RXTX_REG0 0x000 281 + #define RXTX_REG0_CTLE_EQ_HR_SET(dst, src) \ 282 + (((dst) & ~0x0000f800) | (((u32) (src) << 11) & 0x0000f800)) 283 + #define RXTX_REG0_CTLE_EQ_QR_SET(dst, src) \ 284 + (((dst) & ~0x000007c0) | (((u32) (src) << 6) & 0x000007c0)) 285 + #define RXTX_REG0_CTLE_EQ_FR_SET(dst, src) \ 286 + (((dst) & ~0x0000003e) | (((u32) (src) << 1) & 0x0000003e)) 287 + #define RXTX_REG1 0x002 288 + #define RXTX_REG1_RXACVCM_SET(dst, src) \ 289 + (((dst) & ~0x0000f000) | (((u32) (src) << 12) & 0x0000f000)) 290 + #define RXTX_REG1_CTLE_EQ_SET(dst, src) \ 291 + (((dst) & ~0x00000f80) | (((u32) (src) << 7) 
& 0x00000f80)) 292 + #define RXTX_REG1_RXVREG1_SET(dst, src) \ 293 + (((dst) & ~0x00000060) | (((u32) (src) << 5) & 0x00000060)) 294 + #define RXTX_REG1_RXIREF_ADJ_SET(dst, src) \ 295 + (((dst) & ~0x00000006) | (((u32) (src) << 1) & 0x00000006)) 296 + #define RXTX_REG2 0x004 297 + #define RXTX_REG2_VTT_ENA_SET(dst, src) \ 298 + (((dst) & ~0x00000100) | (((u32) (src) << 8) & 0x00000100)) 299 + #define RXTX_REG2_TX_FIFO_ENA_SET(dst, src) \ 300 + (((dst) & ~0x00000020) | (((u32) (src) << 5) & 0x00000020)) 301 + #define RXTX_REG2_VTT_SEL_SET(dst, src) \ 302 + (((dst) & ~0x000000c0) | (((u32) (src) << 6) & 0x000000c0)) 303 + #define RXTX_REG4 0x008 304 + #define RXTX_REG4_TX_LOOPBACK_BUF_EN_MASK 0x00000040 305 + #define RXTX_REG4_TX_DATA_RATE_SET(dst, src) \ 306 + (((dst) & ~0x0000c000) | (((u32) (src) << 14) & 0x0000c000)) 307 + #define RXTX_REG4_TX_WORD_MODE_SET(dst, src) \ 308 + (((dst) & ~0x00003800) | (((u32) (src) << 11) & 0x00003800)) 309 + #define RXTX_REG5 0x00a 310 + #define RXTX_REG5_TX_CN1_SET(dst, src) \ 311 + (((dst) & ~0x0000f800) | (((u32) (src) << 11) & 0x0000f800)) 312 + #define RXTX_REG5_TX_CP1_SET(dst, src) \ 313 + (((dst) & ~0x000007e0) | (((u32) (src) << 5) & 0x000007e0)) 314 + #define RXTX_REG5_TX_CN2_SET(dst, src) \ 315 + (((dst) & ~0x0000001f) | (((u32) (src) << 0) & 0x0000001f)) 316 + #define RXTX_REG6 0x00c 317 + #define RXTX_REG6_TXAMP_CNTL_SET(dst, src) \ 318 + (((dst) & ~0x00000780) | (((u32) (src) << 7) & 0x00000780)) 319 + #define RXTX_REG6_TXAMP_ENA_SET(dst, src) \ 320 + (((dst) & ~0x00000040) | (((u32) (src) << 6) & 0x00000040)) 321 + #define RXTX_REG6_RX_BIST_ERRCNT_RD_SET(dst, src) \ 322 + (((dst) & ~0x00000001) | (((u32) (src) << 0) & 0x00000001)) 323 + #define RXTX_REG6_TX_IDLE_SET(dst, src) \ 324 + (((dst) & ~0x00000008) | (((u32) (src) << 3) & 0x00000008)) 325 + #define RXTX_REG6_RX_BIST_RESYNC_SET(dst, src) \ 326 + (((dst) & ~0x00000002) | (((u32) (src) << 1) & 0x00000002)) 327 + #define RXTX_REG7 0x00e 328 + #define 
RXTX_REG7_RESETB_RXD_MASK 0x00000100 329 + #define RXTX_REG7_RESETB_RXA_MASK 0x00000080 330 + #define RXTX_REG7_BIST_ENA_RX_SET(dst, src) \ 331 + (((dst) & ~0x00000040) | (((u32) (src) << 6) & 0x00000040)) 332 + #define RXTX_REG7_RX_WORD_MODE_SET(dst, src) \ 333 + (((dst) & ~0x00003800) | (((u32) (src) << 11) & 0x00003800)) 334 + #define RXTX_REG8 0x010 335 + #define RXTX_REG8_CDR_LOOP_ENA_SET(dst, src) \ 336 + (((dst) & ~0x00004000) | (((u32) (src) << 14) & 0x00004000)) 337 + #define RXTX_REG8_CDR_BYPASS_RXLOS_SET(dst, src) \ 338 + (((dst) & ~0x00000800) | (((u32) (src) << 11) & 0x00000800)) 339 + #define RXTX_REG8_SSC_ENABLE_SET(dst, src) \ 340 + (((dst) & ~0x00000200) | (((u32) (src) << 9) & 0x00000200)) 341 + #define RXTX_REG8_SD_VREF_SET(dst, src) \ 342 + (((dst) & ~0x000000f0) | (((u32) (src) << 4) & 0x000000f0)) 343 + #define RXTX_REG8_SD_DISABLE_SET(dst, src) \ 344 + (((dst) & ~0x00000100) | (((u32) (src) << 8) & 0x00000100)) 345 + #define RXTX_REG7 0x00e 346 + #define RXTX_REG7_RESETB_RXD_SET(dst, src) \ 347 + (((dst) & ~0x00000100) | (((u32) (src) << 8) & 0x00000100)) 348 + #define RXTX_REG7_RESETB_RXA_SET(dst, src) \ 349 + (((dst) & ~0x00000080) | (((u32) (src) << 7) & 0x00000080)) 350 + #define RXTX_REG7_LOOP_BACK_ENA_CTLE_MASK 0x00004000 351 + #define RXTX_REG7_LOOP_BACK_ENA_CTLE_SET(dst, src) \ 352 + (((dst) & ~0x00004000) | (((u32) (src) << 14) & 0x00004000)) 353 + #define RXTX_REG11 0x016 354 + #define RXTX_REG11_PHASE_ADJUST_LIMIT_SET(dst, src) \ 355 + (((dst) & ~0x0000f800) | (((u32) (src) << 11) & 0x0000f800)) 356 + #define RXTX_REG12 0x018 357 + #define RXTX_REG12_LATCH_OFF_ENA_SET(dst, src) \ 358 + (((dst) & ~0x00002000) | (((u32) (src) << 13) & 0x00002000)) 359 + #define RXTX_REG12_SUMOS_ENABLE_SET(dst, src) \ 360 + (((dst) & ~0x00000004) | (((u32) (src) << 2) & 0x00000004)) 361 + #define RXTX_REG12_RX_DET_TERM_ENABLE_MASK 0x00000002 362 + #define RXTX_REG12_RX_DET_TERM_ENABLE_SET(dst, src) \ 363 + (((dst) & ~0x00000002) | (((u32) (src) << 1) 
& 0x00000002)) 364 + #define RXTX_REG13 0x01a 365 + #define RXTX_REG14 0x01c 366 + #define RXTX_REG14_CLTE_LATCAL_MAN_PROG_SET(dst, src) \ 367 + (((dst) & ~0x0000003f) | (((u32) (src) << 0) & 0x0000003f)) 368 + #define RXTX_REG14_CTLE_LATCAL_MAN_ENA_SET(dst, src) \ 369 + (((dst) & ~0x00000040) | (((u32) (src) << 6) & 0x00000040)) 370 + #define RXTX_REG26 0x034 371 + #define RXTX_REG26_PERIOD_ERROR_LATCH_SET(dst, src) \ 372 + (((dst) & ~0x00003800) | (((u32) (src) << 11) & 0x00003800)) 373 + #define RXTX_REG26_BLWC_ENA_SET(dst, src) \ 374 + (((dst) & ~0x00000008) | (((u32) (src) << 3) & 0x00000008)) 375 + #define RXTX_REG21 0x02a 376 + #define RXTX_REG21_DO_LATCH_CALOUT_RD(src) ((0x0000fc00 & (u32) (src)) >> 10) 377 + #define RXTX_REG21_XO_LATCH_CALOUT_RD(src) ((0x000003f0 & (u32) (src)) >> 4) 378 + #define RXTX_REG21_LATCH_CAL_FAIL_ODD_RD(src) ((0x0000000f & (u32)(src))) 379 + #define RXTX_REG22 0x02c 380 + #define RXTX_REG22_SO_LATCH_CALOUT_RD(src) ((0x000003f0 & (u32) (src)) >> 4) 381 + #define RXTX_REG22_EO_LATCH_CALOUT_RD(src) ((0x0000fc00 & (u32) (src)) >> 10) 382 + #define RXTX_REG22_LATCH_CAL_FAIL_EVEN_RD(src) ((0x0000000f & (u32)(src))) 383 + #define RXTX_REG23 0x02e 384 + #define RXTX_REG23_DE_LATCH_CALOUT_RD(src) ((0x0000fc00 & (u32) (src)) >> 10) 385 + #define RXTX_REG23_XE_LATCH_CALOUT_RD(src) ((0x000003f0 & (u32) (src)) >> 4) 386 + #define RXTX_REG24 0x030 387 + #define RXTX_REG24_EE_LATCH_CALOUT_RD(src) ((0x0000fc00 & (u32) (src)) >> 10) 388 + #define RXTX_REG24_SE_LATCH_CALOUT_RD(src) ((0x000003f0 & (u32) (src)) >> 4) 389 + #define RXTX_REG27 0x036 390 + #define RXTX_REG28 0x038 391 + #define RXTX_REG31 0x03e 392 + #define RXTX_REG38 0x04c 393 + #define RXTX_REG38_CUSTOMER_PINMODE_INV_SET(dst, src) \ 394 + (((dst) & 0x0000fffe) | (((u32) (src) << 1) & 0x0000fffe)) 395 + #define RXTX_REG39 0x04e 396 + #define RXTX_REG40 0x050 397 + #define RXTX_REG41 0x052 398 + #define RXTX_REG42 0x054 399 + #define RXTX_REG43 0x056 400 + #define RXTX_REG44 0x058 401 
+ #define RXTX_REG45 0x05a 402 + #define RXTX_REG46 0x05c 403 + #define RXTX_REG47 0x05e 404 + #define RXTX_REG48 0x060 405 + #define RXTX_REG49 0x062 406 + #define RXTX_REG50 0x064 407 + #define RXTX_REG51 0x066 408 + #define RXTX_REG52 0x068 409 + #define RXTX_REG53 0x06a 410 + #define RXTX_REG54 0x06c 411 + #define RXTX_REG55 0x06e 412 + #define RXTX_REG61 0x07a 413 + #define RXTX_REG61_ISCAN_INBERT_SET(dst, src) \ 414 + (((dst) & ~0x00000010) | (((u32) (src) << 4) & 0x00000010)) 415 + #define RXTX_REG61_LOADFREQ_SHIFT_SET(dst, src) \ 416 + (((dst) & ~0x00000008) | (((u32) (src) << 3) & 0x00000008)) 417 + #define RXTX_REG61_EYE_COUNT_WIDTH_SEL_SET(dst, src) \ 418 + (((dst) & ~0x000000c0) | (((u32) (src) << 6) & 0x000000c0)) 419 + #define RXTX_REG61_SPD_SEL_CDR_SET(dst, src) \ 420 + (((dst) & ~0x00003c00) | (((u32) (src) << 10) & 0x00003c00)) 421 + #define RXTX_REG62 0x07c 422 + #define RXTX_REG62_PERIOD_H1_QLATCH_SET(dst, src) \ 423 + (((dst) & ~0x00003800) | (((u32) (src) << 11) & 0x00003800)) 424 + #define RXTX_REG81 0x0a2 425 + #define RXTX_REG89_MU_TH7_SET(dst, src) \ 426 + (((dst) & ~0x0000f800) | (((u32) (src) << 11) & 0x0000f800)) 427 + #define RXTX_REG89_MU_TH8_SET(dst, src) \ 428 + (((dst) & ~0x000007c0) | (((u32) (src) << 6) & 0x000007c0)) 429 + #define RXTX_REG89_MU_TH9_SET(dst, src) \ 430 + (((dst) & ~0x0000003e) | (((u32) (src) << 1) & 0x0000003e)) 431 + #define RXTX_REG96 0x0c0 432 + #define RXTX_REG96_MU_FREQ1_SET(dst, src) \ 433 + (((dst) & ~0x0000f800) | (((u32) (src) << 11) & 0x0000f800)) 434 + #define RXTX_REG96_MU_FREQ2_SET(dst, src) \ 435 + (((dst) & ~0x000007c0) | (((u32) (src) << 6) & 0x000007c0)) 436 + #define RXTX_REG96_MU_FREQ3_SET(dst, src) \ 437 + (((dst) & ~0x0000003e) | (((u32) (src) << 1) & 0x0000003e)) 438 + #define RXTX_REG99 0x0c6 439 + #define RXTX_REG99_MU_PHASE1_SET(dst, src) \ 440 + (((dst) & ~0x0000f800) | (((u32) (src) << 11) & 0x0000f800)) 441 + #define RXTX_REG99_MU_PHASE2_SET(dst, src) \ 442 + (((dst) & ~0x000007c0) | 
(((u32) (src) << 6) & 0x000007c0)) 443 + #define RXTX_REG99_MU_PHASE3_SET(dst, src) \ 444 + (((dst) & ~0x0000003e) | (((u32) (src) << 1) & 0x0000003e)) 445 + #define RXTX_REG102 0x0cc 446 + #define RXTX_REG102_FREQLOOP_LIMIT_SET(dst, src) \ 447 + (((dst) & ~0x00000060) | (((u32) (src) << 5) & 0x00000060)) 448 + #define RXTX_REG114 0x0e4 449 + #define RXTX_REG121 0x0f2 450 + #define RXTX_REG121_SUMOS_CAL_CODE_RD(src) ((0x0000003e & (u32)(src)) >> 0x1) 451 + #define RXTX_REG125 0x0fa 452 + #define RXTX_REG125_PQ_REG_SET(dst, src) \ 453 + (((dst) & ~0x0000fe00) | (((u32) (src) << 9) & 0x0000fe00)) 454 + #define RXTX_REG125_SIGN_PQ_SET(dst, src) \ 455 + (((dst) & ~0x00000100) | (((u32) (src) << 8) & 0x00000100)) 456 + #define RXTX_REG125_SIGN_PQ_2C_SET(dst, src) \ 457 + (((dst) & ~0x00000080) | (((u32) (src) << 7) & 0x00000080)) 458 + #define RXTX_REG125_PHZ_MANUALCODE_SET(dst, src) \ 459 + (((dst) & ~0x0000007c) | (((u32) (src) << 2) & 0x0000007c)) 460 + #define RXTX_REG125_PHZ_MANUAL_SET(dst, src) \ 461 + (((dst) & ~0x00000002) | (((u32) (src) << 1) & 0x00000002)) 462 + #define RXTX_REG127 0x0fe 463 + #define RXTX_REG127_FORCE_SUM_CAL_START_MASK 0x00000002 464 + #define RXTX_REG127_FORCE_LAT_CAL_START_MASK 0x00000004 465 + #define RXTX_REG127_FORCE_SUM_CAL_START_SET(dst, src) \ 466 + (((dst) & ~0x00000002) | (((u32) (src) << 1) & 0x00000002)) 467 + #define RXTX_REG127_FORCE_LAT_CAL_START_SET(dst, src) \ 468 + (((dst) & ~0x00000004) | (((u32) (src) << 2) & 0x00000004)) 469 + #define RXTX_REG127_LATCH_MAN_CAL_ENA_SET(dst, src) \ 470 + (((dst) & ~0x00000008) | (((u32) (src) << 3) & 0x00000008)) 471 + #define RXTX_REG127_DO_LATCH_MANCAL_SET(dst, src) \ 472 + (((dst) & ~0x0000fc00) | (((u32) (src) << 10) & 0x0000fc00)) 473 + #define RXTX_REG127_XO_LATCH_MANCAL_SET(dst, src) \ 474 + (((dst) & ~0x000003f0) | (((u32) (src) << 4) & 0x000003f0)) 475 + #define RXTX_REG128 0x100 476 + #define RXTX_REG128_LATCH_CAL_WAIT_SEL_SET(dst, src) \ 477 + (((dst) & ~0x0000000c) | (((u32) 
(src) << 2) & 0x0000000c)) 478 + #define RXTX_REG128_EO_LATCH_MANCAL_SET(dst, src) \ 479 + (((dst) & ~0x0000fc00) | (((u32) (src) << 10) & 0x0000fc00)) 480 + #define RXTX_REG128_SO_LATCH_MANCAL_SET(dst, src) \ 481 + (((dst) & ~0x000003f0) | (((u32) (src) << 4) & 0x000003f0)) 482 + #define RXTX_REG129 0x102 483 + #define RXTX_REG129_DE_LATCH_MANCAL_SET(dst, src) \ 484 + (((dst) & ~0x0000fc00) | (((u32) (src) << 10) & 0x0000fc00)) 485 + #define RXTX_REG129_XE_LATCH_MANCAL_SET(dst, src) \ 486 + (((dst) & ~0x000003f0) | (((u32) (src) << 4) & 0x000003f0)) 487 + #define RXTX_REG130 0x104 488 + #define RXTX_REG130_EE_LATCH_MANCAL_SET(dst, src) \ 489 + (((dst) & ~0x0000fc00) | (((u32) (src) << 10) & 0x0000fc00)) 490 + #define RXTX_REG130_SE_LATCH_MANCAL_SET(dst, src) \ 491 + (((dst) & ~0x000003f0) | (((u32) (src) << 4) & 0x000003f0)) 492 + #define RXTX_REG145 0x122 493 + #define RXTX_REG145_TX_IDLE_SATA_SET(dst, src) \ 494 + (((dst) & ~0x00000001) | (((u32) (src) << 0) & 0x00000001)) 495 + #define RXTX_REG145_RXES_ENA_SET(dst, src) \ 496 + (((dst) & ~0x00000002) | (((u32) (src) << 1) & 0x00000002)) 497 + #define RXTX_REG145_RXDFE_CONFIG_SET(dst, src) \ 498 + (((dst) & ~0x0000c000) | (((u32) (src) << 14) & 0x0000c000)) 499 + #define RXTX_REG145_RXVWES_LATENA_SET(dst, src) \ 500 + (((dst) & ~0x00000004) | (((u32) (src) << 2) & 0x00000004)) 501 + #define RXTX_REG147 0x126 502 + #define RXTX_REG148 0x128 503 + 504 + /* Clock macro type */ 505 + enum cmu_type_t { 506 + REF_CMU = 0, /* Clock macro is the internal reference clock */ 507 + PHY_CMU = 1, /* Clock macro is the PLL for the Serdes */ 508 + }; 509 + 510 + enum mux_type_t { 511 + MUX_SELECT_ATA = 0, /* Switch the MUX to ATA */ 512 + MUX_SELECT_SGMMII = 0, /* Switch the MUX to SGMII */ 513 + }; 514 + 515 + enum clk_type_t { 516 + CLK_EXT_DIFF = 0, /* External differential */ 517 + CLK_INT_DIFF = 1, /* Internal differential */ 518 + CLK_INT_SING = 2, /* Internal single ended */ 519 + }; 520 + 521 + enum phy_mode { 522 + 
MODE_SATA = 0, /* List them for simple reference */ 523 + MODE_SGMII = 1, 524 + MODE_PCIE = 2, 525 + MODE_USB = 3, 526 + MODE_XFI = 4, 527 + MODE_MAX 528 + }; 529 + 530 + struct xgene_sata_override_param { 531 + u32 speed[MAX_LANE]; /* Index for override parameter per lane */ 532 + u32 txspeed[3]; /* Tx speed */ 533 + u32 txboostgain[MAX_LANE*3]; /* Tx freq boost and gain control */ 534 + u32 txeyetuning[MAX_LANE*3]; /* Tx eye tuning */ 535 + u32 txeyedirection[MAX_LANE*3]; /* Tx eye tuning direction */ 536 + u32 txamplitude[MAX_LANE*3]; /* Tx amplitude control */ 537 + u32 txprecursor_cn1[MAX_LANE*3]; /* Tx emphasis taps 1st pre-cursor */ 538 + u32 txprecursor_cn2[MAX_LANE*3]; /* Tx emphasis taps 2nd pre-cursor */ 539 + u32 txpostcursor_cp1[MAX_LANE*3]; /* Tx emphasis taps post-cursor */ 540 + }; 541 + 542 + struct xgene_phy_ctx { 543 + struct device *dev; 544 + struct phy *phy; 545 + enum phy_mode mode; /* Mode of operation */ 546 + enum clk_type_t clk_type; /* Input clock selection */ 547 + void __iomem *sds_base; /* PHY CSR base addr */ 548 + struct clk *clk; /* Optional clock */ 549 + 550 + /* Override Serdes parameters */ 551 + struct xgene_sata_override_param sata_param; 552 + }; 553 + 554 + /* 555 + * For chip earlier than A3 version, enable this flag. 
556 + * To enable, pass boot argument phy_xgene.preA3Chip=1 557 + */ 558 + static int preA3Chip; 559 + MODULE_PARM_DESC(preA3Chip, "Enable pre-A3 chip support (1=enable 0=disable)"); 560 + module_param_named(preA3Chip, preA3Chip, int, 0444); 561 + 562 + static void sds_wr(void __iomem *csr_base, u32 indirect_cmd_reg, 563 + u32 indirect_data_reg, u32 addr, u32 data) 564 + { 565 + unsigned long deadline = jiffies + HZ; 566 + u32 val; 567 + u32 cmd; 568 + 569 + cmd = CFG_IND_WR_CMD_MASK | CFG_IND_CMD_DONE_MASK; 570 + cmd = CFG_IND_ADDR_SET(cmd, addr); 571 + writel(data, csr_base + indirect_data_reg); 572 + readl(csr_base + indirect_data_reg); /* Force a barrier */ 573 + writel(cmd, csr_base + indirect_cmd_reg); 574 + readl(csr_base + indirect_cmd_reg); /* Force a barrier */ 575 + do { 576 + val = readl(csr_base + indirect_cmd_reg); 577 + } while (!(val & CFG_IND_CMD_DONE_MASK) && 578 + time_before(jiffies, deadline)); 579 + if (!(val & CFG_IND_CMD_DONE_MASK)) 580 + pr_err("SDS WR timeout at 0x%p offset 0x%08X value 0x%08X\n", 581 + csr_base + indirect_cmd_reg, addr, data); 582 + } 583 + 584 + static void sds_rd(void __iomem *csr_base, u32 indirect_cmd_reg, 585 + u32 indirect_data_reg, u32 addr, u32 *data) 586 + { 587 + unsigned long deadline = jiffies + HZ; 588 + u32 val; 589 + u32 cmd; 590 + 591 + cmd = CFG_IND_RD_CMD_MASK | CFG_IND_CMD_DONE_MASK; 592 + cmd = CFG_IND_ADDR_SET(cmd, addr); 593 + writel(cmd, csr_base + indirect_cmd_reg); 594 + readl(csr_base + indirect_cmd_reg); /* Force a barrier */ 595 + do { 596 + val = readl(csr_base + indirect_cmd_reg); 597 + } while (!(val & CFG_IND_CMD_DONE_MASK) && 598 + time_before(jiffies, deadline)); 599 + *data = readl(csr_base + indirect_data_reg); 600 + if (!(val & CFG_IND_CMD_DONE_MASK)) 601 + pr_err("SDS RD timeout at 0x%p offset 0x%08X value 0x%08X\n", 602 + csr_base + indirect_cmd_reg, addr, *data); 603 + } 604 + 605 + static void cmu_wr(struct xgene_phy_ctx *ctx, enum cmu_type_t cmu_type, 606 + u32 reg, u32 data) 607
+ { 608 + void __iomem *sds_base = ctx->sds_base; 609 + u32 val; 610 + 611 + if (cmu_type == REF_CMU) 612 + reg += SERDES_PLL_REF_INDIRECT_OFFSET; 613 + else 614 + reg += SERDES_PLL_INDIRECT_OFFSET; 615 + sds_wr(sds_base, SATA_ENET_SDS_IND_CMD_REG, 616 + SATA_ENET_SDS_IND_WDATA_REG, reg, data); 617 + sds_rd(sds_base, SATA_ENET_SDS_IND_CMD_REG, 618 + SATA_ENET_SDS_IND_RDATA_REG, reg, &val); 619 + pr_debug("CMU WR addr 0x%X value 0x%08X <-> 0x%08X\n", reg, data, val); 620 + } 621 + 622 + static void cmu_rd(struct xgene_phy_ctx *ctx, enum cmu_type_t cmu_type, 623 + u32 reg, u32 *data) 624 + { 625 + void __iomem *sds_base = ctx->sds_base; 626 + 627 + if (cmu_type == REF_CMU) 628 + reg += SERDES_PLL_REF_INDIRECT_OFFSET; 629 + else 630 + reg += SERDES_PLL_INDIRECT_OFFSET; 631 + sds_rd(sds_base, SATA_ENET_SDS_IND_CMD_REG, 632 + SATA_ENET_SDS_IND_RDATA_REG, reg, data); 633 + pr_debug("CMU RD addr 0x%X value 0x%08X\n", reg, *data); 634 + } 635 + 636 + static void cmu_toggle1to0(struct xgene_phy_ctx *ctx, enum cmu_type_t cmu_type, 637 + u32 reg, u32 bits) 638 + { 639 + u32 val; 640 + 641 + cmu_rd(ctx, cmu_type, reg, &val); 642 + val |= bits; 643 + cmu_wr(ctx, cmu_type, reg, val); 644 + cmu_rd(ctx, cmu_type, reg, &val); 645 + val &= ~bits; 646 + cmu_wr(ctx, cmu_type, reg, val); 647 + } 648 + 649 + static void cmu_clrbits(struct xgene_phy_ctx *ctx, enum cmu_type_t cmu_type, 650 + u32 reg, u32 bits) 651 + { 652 + u32 val; 653 + 654 + cmu_rd(ctx, cmu_type, reg, &val); 655 + val &= ~bits; 656 + cmu_wr(ctx, cmu_type, reg, val); 657 + } 658 + 659 + static void cmu_setbits(struct xgene_phy_ctx *ctx, enum cmu_type_t cmu_type, 660 + u32 reg, u32 bits) 661 + { 662 + u32 val; 663 + 664 + cmu_rd(ctx, cmu_type, reg, &val); 665 + val |= bits; 666 + cmu_wr(ctx, cmu_type, reg, val); 667 + } 668 + 669 + static void serdes_wr(struct xgene_phy_ctx *ctx, int lane, u32 reg, u32 data) 670 + { 671 + void __iomem *sds_base = ctx->sds_base; 672 + u32 val; 673 + 674 + reg += SERDES_INDIRECT_OFFSET; 
675 + reg += lane * SERDES_LANE_STRIDE; 676 + sds_wr(sds_base, SATA_ENET_SDS_IND_CMD_REG, 677 + SATA_ENET_SDS_IND_WDATA_REG, reg, data); 678 + sds_rd(sds_base, SATA_ENET_SDS_IND_CMD_REG, 679 + SATA_ENET_SDS_IND_RDATA_REG, reg, &val); 680 + pr_debug("SERDES WR addr 0x%X value 0x%08X <-> 0x%08X\n", reg, data, 681 + val); 682 + } 683 + 684 + static void serdes_rd(struct xgene_phy_ctx *ctx, int lane, u32 reg, u32 *data) 685 + { 686 + void __iomem *sds_base = ctx->sds_base; 687 + 688 + reg += SERDES_INDIRECT_OFFSET; 689 + reg += lane * SERDES_LANE_STRIDE; 690 + sds_rd(sds_base, SATA_ENET_SDS_IND_CMD_REG, 691 + SATA_ENET_SDS_IND_RDATA_REG, reg, data); 692 + pr_debug("SERDES RD addr 0x%X value 0x%08X\n", reg, *data); 693 + } 694 + 695 + static void serdes_clrbits(struct xgene_phy_ctx *ctx, int lane, u32 reg, 696 + u32 bits) 697 + { 698 + u32 val; 699 + 700 + serdes_rd(ctx, lane, reg, &val); 701 + val &= ~bits; 702 + serdes_wr(ctx, lane, reg, val); 703 + } 704 + 705 + static void serdes_setbits(struct xgene_phy_ctx *ctx, int lane, u32 reg, 706 + u32 bits) 707 + { 708 + u32 val; 709 + 710 + serdes_rd(ctx, lane, reg, &val); 711 + val |= bits; 712 + serdes_wr(ctx, lane, reg, val); 713 + } 714 + 715 + static void xgene_phy_cfg_cmu_clk_type(struct xgene_phy_ctx *ctx, 716 + enum cmu_type_t cmu_type, 717 + enum clk_type_t clk_type) 718 + { 719 + u32 val; 720 + 721 + /* Set the reset sequence delay for TX ready assertion */ 722 + cmu_rd(ctx, cmu_type, CMU_REG12, &val); 723 + val = CMU_REG12_STATE_DELAY9_SET(val, 0x1); 724 + cmu_wr(ctx, cmu_type, CMU_REG12, val); 725 + /* Set the programmable stage delays between various enable stages */ 726 + cmu_wr(ctx, cmu_type, CMU_REG13, 0x0222); 727 + cmu_wr(ctx, cmu_type, CMU_REG14, 0x2225); 728 + 729 + /* Configure clock type */ 730 + if (clk_type == CLK_EXT_DIFF) { 731 + /* Select external clock mux */ 732 + cmu_rd(ctx, cmu_type, CMU_REG0, &val); 733 + val = CMU_REG0_PLL_REF_SEL_SET(val, 0x0); 734 + cmu_wr(ctx, cmu_type, CMU_REG0, val); 
735 + /* Select CMOS as reference clock */ 736 + cmu_rd(ctx, cmu_type, CMU_REG1, &val); 737 + val = CMU_REG1_REFCLK_CMOS_SEL_SET(val, 0x0); 738 + cmu_wr(ctx, cmu_type, CMU_REG1, val); 739 + dev_dbg(ctx->dev, "Set external reference clock\n"); 740 + } else if (clk_type == CLK_INT_DIFF) { 741 + /* Select internal clock mux */ 742 + cmu_rd(ctx, cmu_type, CMU_REG0, &val); 743 + val = CMU_REG0_PLL_REF_SEL_SET(val, 0x1); 744 + cmu_wr(ctx, cmu_type, CMU_REG0, val); 745 + /* Select CMOS as reference clock */ 746 + cmu_rd(ctx, cmu_type, CMU_REG1, &val); 747 + val = CMU_REG1_REFCLK_CMOS_SEL_SET(val, 0x1); 748 + cmu_wr(ctx, cmu_type, CMU_REG1, val); 749 + dev_dbg(ctx->dev, "Set internal reference clock\n"); 750 + } else if (clk_type == CLK_INT_SING) { 751 + /* 752 + * NOTE: This clock type is NOT supported for controllers 753 + * whose internal clock is shared with the PCIe controller 754 + * 755 + * Select internal clock mux 756 + */ 757 + cmu_rd(ctx, cmu_type, CMU_REG1, &val); 758 + val = CMU_REG1_REFCLK_CMOS_SEL_SET(val, 0x1); 759 + cmu_wr(ctx, cmu_type, CMU_REG1, val); 760 + /* Select CML as reference clock */ 761 + cmu_rd(ctx, cmu_type, CMU_REG1, &val); 762 + val = CMU_REG1_REFCLK_CMOS_SEL_SET(val, 0x0); 763 + cmu_wr(ctx, cmu_type, CMU_REG1, val); 764 + dev_dbg(ctx->dev, 765 + "Set internal single ended reference clock\n"); 766 + } 767 + } 768 + 769 + static void xgene_phy_sata_cfg_cmu_core(struct xgene_phy_ctx *ctx, 770 + enum cmu_type_t cmu_type, 771 + enum clk_type_t clk_type) 772 + { 773 + u32 val; 774 + int ref_100MHz; 775 + 776 + if (cmu_type == REF_CMU) { 777 + /* Set VCO calibration voltage threshold */ 778 + cmu_rd(ctx, cmu_type, CMU_REG34, &val); 779 + val = CMU_REG34_VCO_CAL_VTH_LO_MAX_SET(val, 0x7); 780 + val = CMU_REG34_VCO_CAL_VTH_HI_MAX_SET(val, 0xc); 781 + val = CMU_REG34_VCO_CAL_VTH_LO_MIN_SET(val, 0x3); 782 + val = CMU_REG34_VCO_CAL_VTH_HI_MIN_SET(val, 0x8); 783 + cmu_wr(ctx, cmu_type, CMU_REG34, val); 784 + } 785 + 786 + /* Set the VCO calibration counter */
787 + cmu_rd(ctx, cmu_type, CMU_REG0, &val); 788 + if (cmu_type == REF_CMU || preA3Chip) 789 + val = CMU_REG0_CAL_COUNT_RESOL_SET(val, 0x4); 790 + else 791 + val = CMU_REG0_CAL_COUNT_RESOL_SET(val, 0x7); 792 + cmu_wr(ctx, cmu_type, CMU_REG0, val); 793 + 794 + /* Configure PLL for calibration */ 795 + cmu_rd(ctx, cmu_type, CMU_REG1, &val); 796 + val = CMU_REG1_PLL_CP_SET(val, 0x1); 797 + if (cmu_type == REF_CMU || preA3Chip) 798 + val = CMU_REG1_PLL_CP_SEL_SET(val, 0x5); 799 + else 800 + val = CMU_REG1_PLL_CP_SEL_SET(val, 0x3); 801 + if (cmu_type == REF_CMU) 802 + val = CMU_REG1_PLL_MANUALCAL_SET(val, 0x0); 803 + else 804 + val = CMU_REG1_PLL_MANUALCAL_SET(val, 0x1); 805 + cmu_wr(ctx, cmu_type, CMU_REG1, val); 806 + 807 + if (cmu_type != REF_CMU) 808 + cmu_clrbits(ctx, cmu_type, CMU_REG5, CMU_REG5_PLL_RESETB_MASK); 809 + 810 + /* Configure the PLL for either 100MHz or 50MHz */ 811 + cmu_rd(ctx, cmu_type, CMU_REG2, &val); 812 + if (cmu_type == REF_CMU) { 813 + val = CMU_REG2_PLL_LFRES_SET(val, 0xa); 814 + ref_100MHz = 1; 815 + } else { 816 + val = CMU_REG2_PLL_LFRES_SET(val, 0x3); 817 + if (clk_type == CLK_EXT_DIFF) 818 + ref_100MHz = 0; 819 + else 820 + ref_100MHz = 1; 821 + } 822 + if (ref_100MHz) { 823 + val = CMU_REG2_PLL_FBDIV_SET(val, FBDIV_VAL_100M); 824 + val = CMU_REG2_PLL_REFDIV_SET(val, REFDIV_VAL_100M); 825 + } else { 826 + val = CMU_REG2_PLL_FBDIV_SET(val, FBDIV_VAL_50M); 827 + val = CMU_REG2_PLL_REFDIV_SET(val, REFDIV_VAL_50M); 828 + } 829 + cmu_wr(ctx, cmu_type, CMU_REG2, val); 830 + 831 + /* Configure the VCO */ 832 + cmu_rd(ctx, cmu_type, CMU_REG3, &val); 833 + if (cmu_type == REF_CMU) { 834 + val = CMU_REG3_VCOVARSEL_SET(val, 0x3); 835 + val = CMU_REG3_VCO_MOMSEL_INIT_SET(val, 0x10); 836 + } else { 837 + val = CMU_REG3_VCOVARSEL_SET(val, 0xF); 838 + if (preA3Chip) 839 + val = CMU_REG3_VCO_MOMSEL_INIT_SET(val, 0x15); 840 + else 841 + val = CMU_REG3_VCO_MOMSEL_INIT_SET(val, 0x1a); 842 + val = CMU_REG3_VCO_MANMOMSEL_SET(val, 0x15); 843 + } 844 + 
cmu_wr(ctx, cmu_type, CMU_REG3, val); 845 + 846 + /* Disable force PLL lock */ 847 + cmu_rd(ctx, cmu_type, CMU_REG26, &val); 848 + val = CMU_REG26_FORCE_PLL_LOCK_SET(val, 0x0); 849 + cmu_wr(ctx, cmu_type, CMU_REG26, val); 850 + 851 + /* Setup PLL loop filter */ 852 + cmu_rd(ctx, cmu_type, CMU_REG5, &val); 853 + val = CMU_REG5_PLL_LFSMCAP_SET(val, 0x3); 854 + val = CMU_REG5_PLL_LFCAP_SET(val, 0x3); 855 + if (cmu_type == REF_CMU || !preA3Chip) 856 + val = CMU_REG5_PLL_LOCK_RESOLUTION_SET(val, 0x7); 857 + else 858 + val = CMU_REG5_PLL_LOCK_RESOLUTION_SET(val, 0x4); 859 + cmu_wr(ctx, cmu_type, CMU_REG5, val); 860 + 861 + /* Enable or disable manual calibration */ 862 + cmu_rd(ctx, cmu_type, CMU_REG6, &val); 863 + val = CMU_REG6_PLL_VREGTRIM_SET(val, preA3Chip ? 0x0 : 0x2); 864 + val = CMU_REG6_MAN_PVT_CAL_SET(val, preA3Chip ? 0x1 : 0x0); 865 + cmu_wr(ctx, cmu_type, CMU_REG6, val); 866 + 867 + /* Configure lane for 20-bits */ 868 + if (cmu_type == PHY_CMU) { 869 + cmu_rd(ctx, cmu_type, CMU_REG9, &val); 870 + val = CMU_REG9_TX_WORD_MODE_CH1_SET(val, 871 + CMU_REG9_WORD_LEN_20BIT); 872 + val = CMU_REG9_TX_WORD_MODE_CH0_SET(val, 873 + CMU_REG9_WORD_LEN_20BIT); 874 + val = CMU_REG9_PLL_POST_DIVBY2_SET(val, 0x1); 875 + if (!preA3Chip) { 876 + val = CMU_REG9_VBG_BYPASSB_SET(val, 0x0); 877 + val = CMU_REG9_IGEN_BYPASS_SET(val , 0x0); 878 + } 879 + cmu_wr(ctx, cmu_type, CMU_REG9, val); 880 + 881 + if (!preA3Chip) { 882 + cmu_rd(ctx, cmu_type, CMU_REG10, &val); 883 + val = CMU_REG10_VREG_REFSEL_SET(val, 0x1); 884 + cmu_wr(ctx, cmu_type, CMU_REG10, val); 885 + } 886 + } 887 + 888 + cmu_rd(ctx, cmu_type, CMU_REG16, &val); 889 + val = CMU_REG16_CALIBRATION_DONE_OVERRIDE_SET(val, 0x1); 890 + val = CMU_REG16_BYPASS_PLL_LOCK_SET(val, 0x1); 891 + if (cmu_type == REF_CMU || preA3Chip) 892 + val = CMU_REG16_VCOCAL_WAIT_BTW_CODE_SET(val, 0x4); 893 + else 894 + val = CMU_REG16_VCOCAL_WAIT_BTW_CODE_SET(val, 0x7); 895 + cmu_wr(ctx, cmu_type, CMU_REG16, val); 896 + 897 + /* Configure for SATA 
*/ 898 + cmu_rd(ctx, cmu_type, CMU_REG30, &val); 899 + val = CMU_REG30_PCIE_MODE_SET(val, 0x0); 900 + val = CMU_REG30_LOCK_COUNT_SET(val, 0x3); 901 + cmu_wr(ctx, cmu_type, CMU_REG30, val); 902 + 903 + /* Disable state machine bypass */ 904 + cmu_wr(ctx, cmu_type, CMU_REG31, 0xF); 905 + 906 + cmu_rd(ctx, cmu_type, CMU_REG32, &val); 907 + val = CMU_REG32_PVT_CAL_WAIT_SEL_SET(val, 0x3); 908 + if (cmu_type == REF_CMU || preA3Chip) 909 + val = CMU_REG32_IREF_ADJ_SET(val, 0x3); 910 + else 911 + val = CMU_REG32_IREF_ADJ_SET(val, 0x1); 912 + cmu_wr(ctx, cmu_type, CMU_REG32, val); 913 + 914 + /* Set VCO calibration threshold */ 915 + if (cmu_type != REF_CMU && preA3Chip) 916 + cmu_wr(ctx, cmu_type, CMU_REG34, 0x8d27); 917 + else 918 + cmu_wr(ctx, cmu_type, CMU_REG34, 0x873c); 919 + 920 + /* Set CTLE Override and override waiting from state machine */ 921 + cmu_wr(ctx, cmu_type, CMU_REG37, 0xF00F); 922 + } 923 + 924 + static void xgene_phy_ssc_enable(struct xgene_phy_ctx *ctx, 925 + enum cmu_type_t cmu_type) 926 + { 927 + u32 val; 928 + 929 + /* Set SSC modulation value */ 930 + cmu_rd(ctx, cmu_type, CMU_REG35, &val); 931 + val = CMU_REG35_PLL_SSC_MOD_SET(val, 98); 932 + cmu_wr(ctx, cmu_type, CMU_REG35, val); 933 + 934 + /* Enable SSC, set vertical step and DSM value */ 935 + cmu_rd(ctx, cmu_type, CMU_REG36, &val); 936 + val = CMU_REG36_PLL_SSC_VSTEP_SET(val, 30); 937 + val = CMU_REG36_PLL_SSC_EN_SET(val, 1); 938 + val = CMU_REG36_PLL_SSC_DSMSEL_SET(val, 1); 939 + cmu_wr(ctx, cmu_type, CMU_REG36, val); 940 + 941 + /* Reset the PLL */ 942 + cmu_clrbits(ctx, cmu_type, CMU_REG5, CMU_REG5_PLL_RESETB_MASK); 943 + cmu_setbits(ctx, cmu_type, CMU_REG5, CMU_REG5_PLL_RESETB_MASK); 944 + 945 + /* Force VCO calibration to restart */ 946 + cmu_toggle1to0(ctx, cmu_type, CMU_REG32, 947 + CMU_REG32_FORCE_VCOCAL_START_MASK); 948 + } 949 + 950 + static void xgene_phy_sata_cfg_lanes(struct xgene_phy_ctx *ctx) 951 + { 952 + u32 val; 953 + u32 reg; 954 + int i; 955 + int lane; 956 + 957 + for 
(lane = 0; lane < MAX_LANE; lane++) { 958 + serdes_wr(ctx, lane, RXTX_REG147, 0x6); 959 + 960 + /* Set boost control for quarter, half, and full rate */ 961 + serdes_rd(ctx, lane, RXTX_REG0, &val); 962 + val = RXTX_REG0_CTLE_EQ_HR_SET(val, 0x10); 963 + val = RXTX_REG0_CTLE_EQ_QR_SET(val, 0x10); 964 + val = RXTX_REG0_CTLE_EQ_FR_SET(val, 0x10); 965 + serdes_wr(ctx, lane, RXTX_REG0, val); 966 + 967 + /* Set boost control value */ 968 + serdes_rd(ctx, lane, RXTX_REG1, &val); 969 + val = RXTX_REG1_RXACVCM_SET(val, 0x7); 970 + val = RXTX_REG1_CTLE_EQ_SET(val, 971 + ctx->sata_param.txboostgain[lane * 3 + 972 + ctx->sata_param.speed[lane]]); 973 + serdes_wr(ctx, lane, RXTX_REG1, val); 974 + 975 + /* Latch VTT value based on the termination to ground and 976 + enable TX FIFO */ 977 + serdes_rd(ctx, lane, RXTX_REG2, &val); 978 + val = RXTX_REG2_VTT_ENA_SET(val, 0x1); 979 + val = RXTX_REG2_VTT_SEL_SET(val, 0x1); 980 + val = RXTX_REG2_TX_FIFO_ENA_SET(val, 0x1); 981 + serdes_wr(ctx, lane, RXTX_REG2, val); 982 + 983 + /* Configure Tx for 20-bits */ 984 + serdes_rd(ctx, lane, RXTX_REG4, &val); 985 + val = RXTX_REG4_TX_WORD_MODE_SET(val, CMU_REG9_WORD_LEN_20BIT); 986 + serdes_wr(ctx, lane, RXTX_REG4, val); 987 + 988 + if (!preA3Chip) { 989 + serdes_rd(ctx, lane, RXTX_REG1, &val); 990 + val = RXTX_REG1_RXVREG1_SET(val, 0x2); 991 + val = RXTX_REG1_RXIREF_ADJ_SET(val, 0x2); 992 + serdes_wr(ctx, lane, RXTX_REG1, val); 993 + } 994 + 995 + /* Set pre-emphasis first 1 and 2, and post-emphasis values */ 996 + serdes_rd(ctx, lane, RXTX_REG5, &val); 997 + val = RXTX_REG5_TX_CN1_SET(val, 998 + ctx->sata_param.txprecursor_cn1[lane * 3 + 999 + ctx->sata_param.speed[lane]]); 1000 + val = RXTX_REG5_TX_CP1_SET(val, 1001 + ctx->sata_param.txpostcursor_cp1[lane * 3 + 1002 + ctx->sata_param.speed[lane]]); 1003 + val = RXTX_REG5_TX_CN2_SET(val, 1004 + ctx->sata_param.txprecursor_cn2[lane * 3 + 1005 + ctx->sata_param.speed[lane]]); 1006 + serdes_wr(ctx, lane, RXTX_REG5, val); 1007 + 1008 + /* Set TX 
amplitude value */ 1009 + serdes_rd(ctx, lane, RXTX_REG6, &val); 1010 + val = RXTX_REG6_TXAMP_CNTL_SET(val, 1011 + ctx->sata_param.txamplitude[lane * 3 + 1012 + ctx->sata_param.speed[lane]]); 1013 + val = RXTX_REG6_TXAMP_ENA_SET(val, 0x1); 1014 + val = RXTX_REG6_TX_IDLE_SET(val, 0x0); 1015 + val = RXTX_REG6_RX_BIST_RESYNC_SET(val, 0x0); 1016 + val = RXTX_REG6_RX_BIST_ERRCNT_RD_SET(val, 0x0); 1017 + serdes_wr(ctx, lane, RXTX_REG6, val); 1018 + 1019 + /* Configure Rx for 20-bits */ 1020 + serdes_rd(ctx, lane, RXTX_REG7, &val); 1021 + val = RXTX_REG7_BIST_ENA_RX_SET(val, 0x0); 1022 + val = RXTX_REG7_RX_WORD_MODE_SET(val, CMU_REG9_WORD_LEN_20BIT); 1023 + serdes_wr(ctx, lane, RXTX_REG7, val); 1024 + 1025 + /* Set CDR and LOS values and enable Rx SSC */ 1026 + serdes_rd(ctx, lane, RXTX_REG8, &val); 1027 + val = RXTX_REG8_CDR_LOOP_ENA_SET(val, 0x1); 1028 + val = RXTX_REG8_CDR_BYPASS_RXLOS_SET(val, 0x0); 1029 + val = RXTX_REG8_SSC_ENABLE_SET(val, 0x1); 1030 + val = RXTX_REG8_SD_DISABLE_SET(val, 0x0); 1031 + val = RXTX_REG8_SD_VREF_SET(val, 0x4); 1032 + serdes_wr(ctx, lane, RXTX_REG8, val); 1033 + 1034 + /* Set phase adjust upper/lower limits */ 1035 + serdes_rd(ctx, lane, RXTX_REG11, &val); 1036 + val = RXTX_REG11_PHASE_ADJUST_LIMIT_SET(val, 0x0); 1037 + serdes_wr(ctx, lane, RXTX_REG11, val); 1038 + 1039 + /* Enable Latch Off; disable SUMOS and Tx termination */ 1040 + serdes_rd(ctx, lane, RXTX_REG12, &val); 1041 + val = RXTX_REG12_LATCH_OFF_ENA_SET(val, 0x1); 1042 + val = RXTX_REG12_SUMOS_ENABLE_SET(val, 0x0); 1043 + val = RXTX_REG12_RX_DET_TERM_ENABLE_SET(val, 0x0); 1044 + serdes_wr(ctx, lane, RXTX_REG12, val); 1045 + 1046 + /* Set period error latch to 512T and enable BWL */ 1047 + serdes_rd(ctx, lane, RXTX_REG26, &val); 1048 + val = RXTX_REG26_PERIOD_ERROR_LATCH_SET(val, 0x0); 1049 + val = RXTX_REG26_BLWC_ENA_SET(val, 0x1); 1050 + serdes_wr(ctx, lane, RXTX_REG26, val); 1051 + 1052 + serdes_wr(ctx, lane, RXTX_REG28, 0x0); 1053 + 1054 + /* Set DFE loop preset value */ 
1055 + serdes_wr(ctx, lane, RXTX_REG31, 0x0); 1056 + 1057 + /* Set Eye Monitor counter width to 12-bit */ 1058 + serdes_rd(ctx, lane, RXTX_REG61, &val); 1059 + val = RXTX_REG61_ISCAN_INBERT_SET(val, 0x1); 1060 + val = RXTX_REG61_LOADFREQ_SHIFT_SET(val, 0x0); 1061 + val = RXTX_REG61_EYE_COUNT_WIDTH_SEL_SET(val, 0x0); 1062 + serdes_wr(ctx, lane, RXTX_REG61, val); 1063 + 1064 + serdes_rd(ctx, lane, RXTX_REG62, &val); 1065 + val = RXTX_REG62_PERIOD_H1_QLATCH_SET(val, 0x0); 1066 + serdes_wr(ctx, lane, RXTX_REG62, val); 1067 + 1068 + /* Set BW select tap X for DFE loop */ 1069 + for (i = 0; i < 9; i++) { 1070 + reg = RXTX_REG81 + i * 2; 1071 + serdes_rd(ctx, lane, reg, &val); 1072 + val = RXTX_REG89_MU_TH7_SET(val, 0xe); 1073 + val = RXTX_REG89_MU_TH8_SET(val, 0xe); 1074 + val = RXTX_REG89_MU_TH9_SET(val, 0xe); 1075 + serdes_wr(ctx, lane, reg, val); 1076 + } 1077 + 1078 + /* Set BW select tap X for frequency adjust loop */ 1079 + for (i = 0; i < 3; i++) { 1080 + reg = RXTX_REG96 + i * 2; 1081 + serdes_rd(ctx, lane, reg, &val); 1082 + val = RXTX_REG96_MU_FREQ1_SET(val, 0x10); 1083 + val = RXTX_REG96_MU_FREQ2_SET(val, 0x10); 1084 + val = RXTX_REG96_MU_FREQ3_SET(val, 0x10); 1085 + serdes_wr(ctx, lane, reg, val); 1086 + } 1087 + 1088 + /* Set BW select tap X for phase adjust loop */ 1089 + for (i = 0; i < 3; i++) { 1090 + reg = RXTX_REG99 + i * 2; 1091 + serdes_rd(ctx, lane, reg, &val); 1092 + val = RXTX_REG99_MU_PHASE1_SET(val, 0x7); 1093 + val = RXTX_REG99_MU_PHASE2_SET(val, 0x7); 1094 + val = RXTX_REG99_MU_PHASE3_SET(val, 0x7); 1095 + serdes_wr(ctx, lane, reg, val); 1096 + } 1097 + 1098 + serdes_rd(ctx, lane, RXTX_REG102, &val); 1099 + val = RXTX_REG102_FREQLOOP_LIMIT_SET(val, 0x0); 1100 + serdes_wr(ctx, lane, RXTX_REG102, val); 1101 + 1102 + serdes_wr(ctx, lane, RXTX_REG114, 0xffe0); 1103 + 1104 + serdes_rd(ctx, lane, RXTX_REG125, &val); 1105 + val = RXTX_REG125_SIGN_PQ_SET(val, 1106 + ctx->sata_param.txeyedirection[lane * 3 + 1107 + ctx->sata_param.speed[lane]]); 1108 + 
val = RXTX_REG125_PQ_REG_SET(val, 1109 + ctx->sata_param.txeyetuning[lane * 3 + 1110 + ctx->sata_param.speed[lane]]); 1111 + val = RXTX_REG125_PHZ_MANUAL_SET(val, 0x1); 1112 + serdes_wr(ctx, lane, RXTX_REG125, val); 1113 + 1114 + serdes_rd(ctx, lane, RXTX_REG127, &val); 1115 + val = RXTX_REG127_LATCH_MAN_CAL_ENA_SET(val, 0x0); 1116 + serdes_wr(ctx, lane, RXTX_REG127, val); 1117 + 1118 + serdes_rd(ctx, lane, RXTX_REG128, &val); 1119 + val = RXTX_REG128_LATCH_CAL_WAIT_SEL_SET(val, 0x3); 1120 + serdes_wr(ctx, lane, RXTX_REG128, val); 1121 + 1122 + serdes_rd(ctx, lane, RXTX_REG145, &val); 1123 + val = RXTX_REG145_RXDFE_CONFIG_SET(val, 0x3); 1124 + val = RXTX_REG145_TX_IDLE_SATA_SET(val, 0x0); 1125 + if (preA3Chip) { 1126 + val = RXTX_REG145_RXES_ENA_SET(val, 0x1); 1127 + val = RXTX_REG145_RXVWES_LATENA_SET(val, 0x1); 1128 + } else { 1129 + val = RXTX_REG145_RXES_ENA_SET(val, 0x0); 1130 + val = RXTX_REG145_RXVWES_LATENA_SET(val, 0x0); 1131 + } 1132 + serdes_wr(ctx, lane, RXTX_REG145, val); 1133 + 1134 + /* 1135 + * Set Rx LOS filter clock rate, sample rate, and threshold 1136 + * windows 1137 + */ 1138 + for (i = 0; i < 4; i++) { 1139 + reg = RXTX_REG148 + i * 2; 1140 + serdes_wr(ctx, lane, reg, 0xFFFF); 1141 + } 1142 + } 1143 + } 1144 + 1145 + static int xgene_phy_cal_rdy_chk(struct xgene_phy_ctx *ctx, 1146 + enum cmu_type_t cmu_type, 1147 + enum clk_type_t clk_type) 1148 + { 1149 + void __iomem *csr_serdes = ctx->sds_base; 1150 + int loop; 1151 + u32 val; 1152 + 1153 + /* Release PHY main reset */ 1154 + writel(0xdf, csr_serdes + SATA_ENET_SDS_RST_CTL); 1155 + readl(csr_serdes + SATA_ENET_SDS_RST_CTL); /* Force a barrier */ 1156 + 1157 + if (cmu_type != REF_CMU) { 1158 + cmu_setbits(ctx, cmu_type, CMU_REG5, CMU_REG5_PLL_RESETB_MASK); 1159 + /* 1160 + * As per PHY design spec, the PLL reset requires a minimum 1161 + * of 800us. 
1162 + */ 1163 + usleep_range(800, 1000); 1164 + 1165 + cmu_rd(ctx, cmu_type, CMU_REG1, &val); 1166 + val = CMU_REG1_PLL_MANUALCAL_SET(val, 0x0); 1167 + cmu_wr(ctx, cmu_type, CMU_REG1, val); 1168 + /* 1169 + * As per PHY design spec, the PLL auto calibration requires 1170 + * a minimum of 800us. 1171 + */ 1172 + usleep_range(800, 1000); 1173 + 1174 + cmu_toggle1to0(ctx, cmu_type, CMU_REG32, 1175 + CMU_REG32_FORCE_VCOCAL_START_MASK); 1176 + /* 1177 + * As per PHY design spec, the PLL requires a minimum of 1178 + * 800us to settle. 1179 + */ 1180 + usleep_range(800, 1000); 1181 + } 1182 + 1183 + if (!preA3Chip) 1184 + goto skip_manual_cal; 1185 + 1186 + /* 1187 + * Configure the termination resistor calibration. 1188 + * The serial receive pins, RXP/RXN, have a TERMination resistor 1189 + * that is required to be calibrated. 1190 + */ 1191 + cmu_rd(ctx, cmu_type, CMU_REG17, &val); 1192 + val = CMU_REG17_PVT_CODE_R2A_SET(val, 0x12); 1193 + val = CMU_REG17_RESERVED_7_SET(val, 0x0); 1194 + cmu_wr(ctx, cmu_type, CMU_REG17, val); 1195 + cmu_toggle1to0(ctx, cmu_type, CMU_REG17, 1196 + CMU_REG17_PVT_TERM_MAN_ENA_MASK); 1197 + /* 1198 + * The serial transmit pins, TXP/TXN, have Pull-UP and Pull-DOWN 1199 + * resistors that are required to be calibrated.
1200 + * Configure the pull DOWN calibration 1201 + */ 1202 + cmu_rd(ctx, cmu_type, CMU_REG17, &val); 1203 + val = CMU_REG17_PVT_CODE_R2A_SET(val, 0x29); 1204 + val = CMU_REG17_RESERVED_7_SET(val, 0x0); 1205 + cmu_wr(ctx, cmu_type, CMU_REG17, val); 1206 + cmu_toggle1to0(ctx, cmu_type, CMU_REG16, 1207 + CMU_REG16_PVT_DN_MAN_ENA_MASK); 1208 + /* Configure the pull UP calibration */ 1209 + cmu_rd(ctx, cmu_type, CMU_REG17, &val); 1210 + val = CMU_REG17_PVT_CODE_R2A_SET(val, 0x28); 1211 + val = CMU_REG17_RESERVED_7_SET(val, 0x0); 1212 + cmu_wr(ctx, cmu_type, CMU_REG17, val); 1213 + cmu_toggle1to0(ctx, cmu_type, CMU_REG16, 1214 + CMU_REG16_PVT_UP_MAN_ENA_MASK); 1215 + 1216 + skip_manual_cal: 1217 + /* Poll the PLL calibration completion status for at least 1 ms */ 1218 + loop = 100; 1219 + do { 1220 + cmu_rd(ctx, cmu_type, CMU_REG7, &val); 1221 + if (CMU_REG7_PLL_CALIB_DONE_RD(val)) 1222 + break; 1223 + /* 1224 + * As per PHY design spec, PLL calibration status requires 1225 + * a minimum of 10us to be updated. 1226 + */ 1227 + usleep_range(10, 100); 1228 + } while (--loop > 0); 1229 + 1230 + cmu_rd(ctx, cmu_type, CMU_REG7, &val); 1231 + dev_dbg(ctx->dev, "PLL calibration %s\n", 1232 + CMU_REG7_PLL_CALIB_DONE_RD(val) ? "done" : "failed"); 1233 + if (CMU_REG7_VCO_CAL_FAIL_RD(val)) { 1234 + dev_err(ctx->dev, 1235 + "PLL calibration failed due to VCO failure\n"); 1236 + return -1; 1237 + } 1238 + dev_dbg(ctx->dev, "PLL calibration successful\n"); 1239 + 1240 + cmu_rd(ctx, cmu_type, CMU_REG15, &val); 1241 + dev_dbg(ctx->dev, "PHY Tx is %sready\n", val & 0x300 ? 
"" : "not "); 1242 + return 0; 1243 + } 1244 + 1245 + static void xgene_phy_pdwn_force_vco(struct xgene_phy_ctx *ctx, 1246 + enum cmu_type_t cmu_type, 1247 + enum clk_type_t clk_type) 1248 + { 1249 + u32 val; 1250 + 1251 + dev_dbg(ctx->dev, "Reset VCO and re-start again\n"); 1252 + if (cmu_type == PHY_CMU) { 1253 + cmu_rd(ctx, cmu_type, CMU_REG16, &val); 1254 + val = CMU_REG16_VCOCAL_WAIT_BTW_CODE_SET(val, 0x7); 1255 + cmu_wr(ctx, cmu_type, CMU_REG16, val); 1256 + } 1257 + 1258 + cmu_toggle1to0(ctx, cmu_type, CMU_REG0, CMU_REG0_PDOWN_MASK); 1259 + cmu_toggle1to0(ctx, cmu_type, CMU_REG32, 1260 + CMU_REG32_FORCE_VCOCAL_START_MASK); 1261 + } 1262 + 1263 + static int xgene_phy_hw_init_sata(struct xgene_phy_ctx *ctx, 1264 + enum clk_type_t clk_type, int ssc_enable) 1265 + { 1266 + void __iomem *sds_base = ctx->sds_base; 1267 + u32 val; 1268 + int i; 1269 + 1270 + /* Configure the PHY for operation */ 1271 + dev_dbg(ctx->dev, "Reset PHY\n"); 1272 + /* Place PHY into reset */ 1273 + writel(0x0, sds_base + SATA_ENET_SDS_RST_CTL); 1274 + val = readl(sds_base + SATA_ENET_SDS_RST_CTL); /* Force a barrier */ 1275 + /* Release PHY lane from reset (active high) */ 1276 + writel(0x20, sds_base + SATA_ENET_SDS_RST_CTL); 1277 + readl(sds_base + SATA_ENET_SDS_RST_CTL); /* Force a barrier */ 1278 + /* Release all PHY module out of reset except PHY main reset */ 1279 + writel(0xde, sds_base + SATA_ENET_SDS_RST_CTL); 1280 + readl(sds_base + SATA_ENET_SDS_RST_CTL); /* Force a barrier */ 1281 + 1282 + /* Set the operation speed */ 1283 + val = readl(sds_base + SATA_ENET_SDS_CTL1); 1284 + val = CFG_I_SPD_SEL_CDR_OVR1_SET(val, 1285 + ctx->sata_param.txspeed[ctx->sata_param.speed[0]]); 1286 + writel(val, sds_base + SATA_ENET_SDS_CTL1); 1287 + 1288 + dev_dbg(ctx->dev, "Set the customer pin mode to SATA\n"); 1289 + val = readl(sds_base + SATA_ENET_SDS_CTL0); 1290 + val = REGSPEC_CFG_I_CUSTOMER_PIN_MODE0_SET(val, 0x4421); 1291 + writel(val, sds_base + SATA_ENET_SDS_CTL0); 1292 + 1293 + /* 
Configure the clock macro unit (CMU) clock type */ 1294 + xgene_phy_cfg_cmu_clk_type(ctx, PHY_CMU, clk_type); 1295 + 1296 + /* Configure the clock macro */ 1297 + xgene_phy_sata_cfg_cmu_core(ctx, PHY_CMU, clk_type); 1298 + 1299 + /* Enable SSC if requested */ 1300 + if (ssc_enable) 1301 + xgene_phy_ssc_enable(ctx, PHY_CMU); 1302 + 1303 + /* Configure PHY lanes */ 1304 + xgene_phy_sata_cfg_lanes(ctx); 1305 + 1306 + /* Set Rx/Tx 20-bit */ 1307 + val = readl(sds_base + SATA_ENET_SDS_PCS_CTL0); 1308 + val = REGSPEC_CFG_I_RX_WORDMODE0_SET(val, 0x3); 1309 + val = REGSPEC_CFG_I_TX_WORDMODE0_SET(val, 0x3); 1310 + writel(val, sds_base + SATA_ENET_SDS_PCS_CTL0); 1311 + 1312 + /* Start PLL calibration, retrying up to ten times on failure */ 1313 + i = 10; 1314 + do { 1315 + if (!xgene_phy_cal_rdy_chk(ctx, PHY_CMU, clk_type)) 1316 + break; 1317 + /* If failed, toggle the VCO power signal and start again */ 1318 + xgene_phy_pdwn_force_vco(ctx, PHY_CMU, clk_type); 1319 + } while (--i > 0); 1320 + /* Even on failure, continue anyway */ 1321 + if (i <= 0) 1322 + dev_err(ctx->dev, "PLL calibration failed\n"); 1323 + 1324 + return 0; 1325 + } 1326 + 1327 + static int xgene_phy_hw_initialize(struct xgene_phy_ctx *ctx, 1328 + enum clk_type_t clk_type, 1329 + int ssc_enable) 1330 + { 1331 + int rc; 1332 + 1333 + dev_dbg(ctx->dev, "PHY init clk type %d\n", clk_type); 1334 + 1335 + if (ctx->mode == MODE_SATA) { 1336 + rc = xgene_phy_hw_init_sata(ctx, clk_type, ssc_enable); 1337 + if (rc) 1338 + return rc; 1339 + } else { 1340 + dev_err(ctx->dev, "Un-supported customer pin mode %d\n", 1341 + ctx->mode); 1342 + return -ENODEV; 1343 + } 1344 + 1345 + return 0; 1346 + } 1347 + 1348 + /* 1349 + * Receiver Offset Calibration: 1350 + * 1351 + * Calibrate the receiver signal path offset in two steps - summer and 1352 + latch calibrations 1353 + */ 1354 + static void xgene_phy_force_lat_summer_cal(struct xgene_phy_ctx *ctx, int lane) 1355 + { 1356 + int i; 1357 + struct { 1358 + u32 reg; 1359 + u32
val; 1360 + } serdes_reg[] = { 1361 + {RXTX_REG38, 0x0}, 1362 + {RXTX_REG39, 0xff00}, 1363 + {RXTX_REG40, 0xffff}, 1364 + {RXTX_REG41, 0xffff}, 1365 + {RXTX_REG42, 0xffff}, 1366 + {RXTX_REG43, 0xffff}, 1367 + {RXTX_REG44, 0xffff}, 1368 + {RXTX_REG45, 0xffff}, 1369 + {RXTX_REG46, 0xffff}, 1370 + {RXTX_REG47, 0xfffc}, 1371 + {RXTX_REG48, 0x0}, 1372 + {RXTX_REG49, 0x0}, 1373 + {RXTX_REG50, 0x0}, 1374 + {RXTX_REG51, 0x0}, 1375 + {RXTX_REG52, 0x0}, 1376 + {RXTX_REG53, 0x0}, 1377 + {RXTX_REG54, 0x0}, 1378 + {RXTX_REG55, 0x0}, 1379 + }; 1380 + 1381 + /* Start SUMMER calibration */ 1382 + serdes_setbits(ctx, lane, RXTX_REG127, 1383 + RXTX_REG127_FORCE_SUM_CAL_START_MASK); 1384 + /* 1385 + * As per PHY design spec, the Summer calibration requires a minimum 1386 + * of 100us to complete. 1387 + */ 1388 + usleep_range(100, 500); 1389 + serdes_clrbits(ctx, lane, RXTX_REG127, 1390 + RXTX_REG127_FORCE_SUM_CAL_START_MASK); 1391 + /* 1392 + * As per PHY design spec, the auto calibration requires a minimum 1393 + * of 100us to complete. 1394 + */ 1395 + usleep_range(100, 500); 1396 + 1397 + /* Start latch calibration */ 1398 + serdes_setbits(ctx, lane, RXTX_REG127, 1399 + RXTX_REG127_FORCE_LAT_CAL_START_MASK); 1400 + /* 1401 + * As per PHY design spec, the latch calibration requires a minimum 1402 + * of 100us to complete. 
1403 + */ 1404 + usleep_range(100, 500); 1405 + serdes_clrbits(ctx, lane, RXTX_REG127, 1406 + RXTX_REG127_FORCE_LAT_CAL_START_MASK); 1407 + 1408 + /* Configure the PHY lane for calibration */ 1409 + serdes_wr(ctx, lane, RXTX_REG28, 0x7); 1410 + serdes_wr(ctx, lane, RXTX_REG31, 0x7e00); 1411 + serdes_clrbits(ctx, lane, RXTX_REG4, 1412 + RXTX_REG4_TX_LOOPBACK_BUF_EN_MASK); 1413 + serdes_clrbits(ctx, lane, RXTX_REG7, 1414 + RXTX_REG7_LOOP_BACK_ENA_CTLE_MASK); 1415 + for (i = 0; i < ARRAY_SIZE(serdes_reg); i++) 1416 + serdes_wr(ctx, lane, serdes_reg[i].reg, 1417 + serdes_reg[i].val); 1418 + } 1419 + 1420 + static void xgene_phy_reset_rxd(struct xgene_phy_ctx *ctx, int lane) 1421 + { 1422 + /* Reset digital Rx */ 1423 + serdes_clrbits(ctx, lane, RXTX_REG7, RXTX_REG7_RESETB_RXD_MASK); 1424 + /* As per PHY design spec, the reset requires a minimum of 100us. */ 1425 + usleep_range(100, 150); 1426 + serdes_setbits(ctx, lane, RXTX_REG7, RXTX_REG7_RESETB_RXD_MASK); 1427 + } 1428 + 1429 + static int xgene_phy_get_avg(int accum, int samples) 1430 + { 1431 + return (accum + (samples / 2)) / samples; 1432 + } 1433 + 1434 + static void xgene_phy_gen_avg_val(struct xgene_phy_ctx *ctx, int lane) 1435 + { 1436 + int max_loop = 10; 1437 + int avg_loop = 0; 1438 + int lat_do = 0, lat_xo = 0, lat_eo = 0, lat_so = 0; 1439 + int lat_de = 0, lat_xe = 0, lat_ee = 0, lat_se = 0; 1440 + int sum_cal = 0; 1441 + int lat_do_itr, lat_xo_itr, lat_eo_itr, lat_so_itr; 1442 + int lat_de_itr, lat_xe_itr, lat_ee_itr, lat_se_itr; 1443 + int sum_cal_itr; 1444 + int fail_even; 1445 + int fail_odd; 1446 + u32 val; 1447 + 1448 + dev_dbg(ctx->dev, "Generating avg calibration value for lane %d\n", 1449 + lane); 1450 + 1451 + /* Enable RX Hi-Z termination */ 1452 + serdes_setbits(ctx, lane, RXTX_REG12, 1453 + RXTX_REG12_RX_DET_TERM_ENABLE_MASK); 1454 + /* Turn off DFE */ 1455 + serdes_wr(ctx, lane, RXTX_REG28, 0x0000); 1456 + /* DFE Presets to zero */ 1457 + serdes_wr(ctx, lane, RXTX_REG31, 0x0000); 1458 + 
1459 + /* 1460 + * Receiver Offset Calibration: 1461 + * Calibrate the receiver signal path offset in two steps - summer 1462 + * and latch calibration. 1463 + * Runs the "Receiver Offset Calibration" multiple times to determine 1464 + * the average value to use. 1465 + */ 1466 + while (avg_loop < max_loop) { 1467 + /* Start the calibration */ 1468 + xgene_phy_force_lat_summer_cal(ctx, lane); 1469 + 1470 + serdes_rd(ctx, lane, RXTX_REG21, &val); 1471 + lat_do_itr = RXTX_REG21_DO_LATCH_CALOUT_RD(val); 1472 + lat_xo_itr = RXTX_REG21_XO_LATCH_CALOUT_RD(val); 1473 + fail_odd = RXTX_REG21_LATCH_CAL_FAIL_ODD_RD(val); 1474 + 1475 + serdes_rd(ctx, lane, RXTX_REG22, &val); 1476 + lat_eo_itr = RXTX_REG22_EO_LATCH_CALOUT_RD(val); 1477 + lat_so_itr = RXTX_REG22_SO_LATCH_CALOUT_RD(val); 1478 + fail_even = RXTX_REG22_LATCH_CAL_FAIL_EVEN_RD(val); 1479 + 1480 + serdes_rd(ctx, lane, RXTX_REG23, &val); 1481 + lat_de_itr = RXTX_REG23_DE_LATCH_CALOUT_RD(val); 1482 + lat_xe_itr = RXTX_REG23_XE_LATCH_CALOUT_RD(val); 1483 + 1484 + serdes_rd(ctx, lane, RXTX_REG24, &val); 1485 + lat_ee_itr = RXTX_REG24_EE_LATCH_CALOUT_RD(val); 1486 + lat_se_itr = RXTX_REG24_SE_LATCH_CALOUT_RD(val); 1487 + 1488 + serdes_rd(ctx, lane, RXTX_REG121, &val); 1489 + sum_cal_itr = RXTX_REG121_SUMOS_CAL_CODE_RD(val); 1490 + 1491 + /* Check for failure.
If passed, sum them for averaging */ 1492 + if ((fail_even == 0 || fail_even == 1) && 1493 + (fail_odd == 0 || fail_odd == 1)) { 1494 + lat_do += lat_do_itr; 1495 + lat_xo += lat_xo_itr; 1496 + lat_eo += lat_eo_itr; 1497 + lat_so += lat_so_itr; 1498 + lat_de += lat_de_itr; 1499 + lat_xe += lat_xe_itr; 1500 + lat_ee += lat_ee_itr; 1501 + lat_se += lat_se_itr; 1502 + sum_cal += sum_cal_itr; 1503 + 1504 + dev_dbg(ctx->dev, "Iteration %d:\n", avg_loop); 1505 + dev_dbg(ctx->dev, "DO 0x%x XO 0x%x EO 0x%x SO 0x%x\n", 1506 + lat_do_itr, lat_xo_itr, lat_eo_itr, 1507 + lat_so_itr); 1508 + dev_dbg(ctx->dev, "DE 0x%x XE 0x%x EE 0x%x SE 0x%x\n", 1509 + lat_de_itr, lat_xe_itr, lat_ee_itr, 1510 + lat_se_itr); 1511 + dev_dbg(ctx->dev, "SUM 0x%x\n", sum_cal_itr); 1512 + ++avg_loop; 1513 + } else { 1514 + dev_err(ctx->dev, 1515 + "Receiver calibration failed at %d loop\n", 1516 + avg_loop); 1517 + } 1518 + xgene_phy_reset_rxd(ctx, lane); 1519 + } 1520 + 1521 + /* Update latch manual calibration with average value */ 1522 + serdes_rd(ctx, lane, RXTX_REG127, &val); 1523 + val = RXTX_REG127_DO_LATCH_MANCAL_SET(val, 1524 + xgene_phy_get_avg(lat_do, max_loop)); 1525 + val = RXTX_REG127_XO_LATCH_MANCAL_SET(val, 1526 + xgene_phy_get_avg(lat_xo, max_loop)); 1527 + serdes_wr(ctx, lane, RXTX_REG127, val); 1528 + 1529 + serdes_rd(ctx, lane, RXTX_REG128, &val); 1530 + val = RXTX_REG128_EO_LATCH_MANCAL_SET(val, 1531 + xgene_phy_get_avg(lat_eo, max_loop)); 1532 + val = RXTX_REG128_SO_LATCH_MANCAL_SET(val, 1533 + xgene_phy_get_avg(lat_so, max_loop)); 1534 + serdes_wr(ctx, lane, RXTX_REG128, val); 1535 + 1536 + serdes_rd(ctx, lane, RXTX_REG129, &val); 1537 + val = RXTX_REG129_DE_LATCH_MANCAL_SET(val, 1538 + xgene_phy_get_avg(lat_de, max_loop)); 1539 + val = RXTX_REG129_XE_LATCH_MANCAL_SET(val, 1540 + xgene_phy_get_avg(lat_xe, max_loop)); 1541 + serdes_wr(ctx, lane, RXTX_REG129, val); 1542 + 1543 + serdes_rd(ctx, lane, RXTX_REG130, &val); 1544 + val = RXTX_REG130_EE_LATCH_MANCAL_SET(val, 1545 + 
xgene_phy_get_avg(lat_ee, max_loop)); 1546 + val = RXTX_REG130_SE_LATCH_MANCAL_SET(val, 1547 + xgene_phy_get_avg(lat_se, max_loop)); 1548 + serdes_wr(ctx, lane, RXTX_REG130, val); 1549 + 1550 + /* Update SUMMER calibration with average value */ 1551 + serdes_rd(ctx, lane, RXTX_REG14, &val); 1552 + val = RXTX_REG14_CLTE_LATCAL_MAN_PROG_SET(val, 1553 + xgene_phy_get_avg(sum_cal, max_loop)); 1554 + serdes_wr(ctx, lane, RXTX_REG14, val); 1555 + 1556 + dev_dbg(ctx->dev, "Average Value:\n"); 1557 + dev_dbg(ctx->dev, "DO 0x%x XO 0x%x EO 0x%x SO 0x%x\n", 1558 + xgene_phy_get_avg(lat_do, max_loop), 1559 + xgene_phy_get_avg(lat_xo, max_loop), 1560 + xgene_phy_get_avg(lat_eo, max_loop), 1561 + xgene_phy_get_avg(lat_so, max_loop)); 1562 + dev_dbg(ctx->dev, "DE 0x%x XE 0x%x EE 0x%x SE 0x%x\n", 1563 + xgene_phy_get_avg(lat_de, max_loop), 1564 + xgene_phy_get_avg(lat_xe, max_loop), 1565 + xgene_phy_get_avg(lat_ee, max_loop), 1566 + xgene_phy_get_avg(lat_se, max_loop)); 1567 + dev_dbg(ctx->dev, "SUM 0x%x\n", 1568 + xgene_phy_get_avg(sum_cal, max_loop)); 1569 + 1570 + serdes_rd(ctx, lane, RXTX_REG14, &val); 1571 + val = RXTX_REG14_CTLE_LATCAL_MAN_ENA_SET(val, 0x1); 1572 + serdes_wr(ctx, lane, RXTX_REG14, val); 1573 + dev_dbg(ctx->dev, "Enable Manual Summer calibration\n"); 1574 + 1575 + serdes_rd(ctx, lane, RXTX_REG127, &val); 1576 + val = RXTX_REG127_LATCH_MAN_CAL_ENA_SET(val, 0x1); 1577 + dev_dbg(ctx->dev, "Enable Manual Latch calibration\n"); 1578 + serdes_wr(ctx, lane, RXTX_REG127, val); 1579 + 1580 + /* Disable RX Hi-Z termination */ 1581 + serdes_rd(ctx, lane, RXTX_REG12, &val); 1582 + val = RXTX_REG12_RX_DET_TERM_ENABLE_SET(val, 0); 1583 + serdes_wr(ctx, lane, RXTX_REG12, val); 1584 + /* Turn on DFE */ 1585 + serdes_wr(ctx, lane, RXTX_REG28, 0x0007); 1586 + /* Set DFE preset */ 1587 + serdes_wr(ctx, lane, RXTX_REG31, 0x7e00); 1588 + } 1589 + 1590 + static int xgene_phy_hw_init(struct phy *phy) 1591 + { 1592 + struct xgene_phy_ctx *ctx = phy_get_drvdata(phy); 1593 + int rc; 
1594 + int i; 1595 + 1596 + rc = xgene_phy_hw_initialize(ctx, CLK_EXT_DIFF, SSC_DISABLE); 1597 + if (rc) { 1598 + dev_err(ctx->dev, "PHY initialize failed %d\n", rc); 1599 + return rc; 1600 + } 1601 + 1602 + /* Set up the clock properly after PHY configuration */ 1603 + if (!IS_ERR(ctx->clk)) { 1604 + /* HW requires a toggle of the clock */ 1605 + clk_prepare_enable(ctx->clk); 1606 + clk_disable_unprepare(ctx->clk); 1607 + clk_prepare_enable(ctx->clk); 1608 + } 1609 + 1610 + /* Compute average value */ 1611 + for (i = 0; i < MAX_LANE; i++) 1612 + xgene_phy_gen_avg_val(ctx, i); 1613 + 1614 + dev_dbg(ctx->dev, "PHY initialized\n"); 1615 + return 0; 1616 + } 1617 + 1618 + static const struct phy_ops xgene_phy_ops = { 1619 + .init = xgene_phy_hw_init, 1620 + .owner = THIS_MODULE, 1621 + }; 1622 + 1623 + static struct phy *xgene_phy_xlate(struct device *dev, 1624 + struct of_phandle_args *args) 1625 + { 1626 + struct xgene_phy_ctx *ctx = dev_get_drvdata(dev); 1627 + 1628 + if (args->args_count <= 0) 1629 + return ERR_PTR(-EINVAL); 1630 + if (args->args[0] < MODE_SATA || args->args[0] >= MODE_MAX) 1631 + return ERR_PTR(-EINVAL); 1632 + 1633 + ctx->mode = args->args[0]; 1634 + return ctx->phy; 1635 + } 1636 + 1637 + static void xgene_phy_get_param(struct platform_device *pdev, 1638 + const char *name, u32 *buffer, 1639 + int count, u32 *default_val, 1640 + u32 conv_factor) 1641 + { 1642 + int i; 1643 + 1644 + if (!of_property_read_u32_array(pdev->dev.of_node, name, buffer, 1645 + count)) { 1646 + for (i = 0; i < count; i++) 1647 + buffer[i] /= conv_factor; 1648 + return; 1649 + } 1650 + /* Does not exist, load default */ 1651 + for (i = 0; i < count; i++) 1652 + buffer[i] = default_val[i % 3]; 1653 + } 1654 + 1655 + static int xgene_phy_probe(struct platform_device *pdev) 1656 + { 1657 + struct phy_provider *phy_provider; 1658 + struct xgene_phy_ctx *ctx; 1659 + struct resource *res; 1660 + int rc = 0; 1661 + u32 default_spd[] = DEFAULT_SATA_SPD_SEL; 1662 + u32
default_txboost_gain[] = DEFAULT_SATA_TXBOOST_GAIN; 1663 + u32 default_txeye_direction[] = DEFAULT_SATA_TXEYEDIRECTION; 1664 + u32 default_txeye_tuning[] = DEFAULT_SATA_TXEYETUNING; 1665 + u32 default_txamp[] = DEFAULT_SATA_TXAMP; 1666 + u32 default_txcn1[] = DEFAULT_SATA_TXCN1; 1667 + u32 default_txcn2[] = DEFAULT_SATA_TXCN2; 1668 + u32 default_txcp1[] = DEFAULT_SATA_TXCP1; 1669 + int i; 1670 + 1671 + ctx = devm_kzalloc(&pdev->dev, sizeof(*ctx), GFP_KERNEL); 1672 + if (!ctx) 1673 + return -ENOMEM; 1674 + 1675 + ctx->dev = &pdev->dev; 1676 + 1677 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1678 + ctx->sds_base = devm_ioremap_resource(&pdev->dev, res); 1679 + if (IS_ERR(ctx->sds_base)) { 1680 + rc = PTR_ERR(ctx->sds_base); 1681 + goto error; 1682 + } 1683 + 1684 + /* Retrieve optional clock */ 1685 + ctx->clk = clk_get(&pdev->dev, NULL); 1686 + 1687 + /* Load override parameters */ 1688 + xgene_phy_get_param(pdev, "apm,tx-eye-tuning", 1689 + ctx->sata_param.txeyetuning, 6, default_txeye_tuning, 1); 1690 + xgene_phy_get_param(pdev, "apm,tx-eye-direction", 1691 + ctx->sata_param.txeyedirection, 6, default_txeye_direction, 1); 1692 + xgene_phy_get_param(pdev, "apm,tx-boost-gain", 1693 + ctx->sata_param.txboostgain, 6, default_txboost_gain, 1); 1694 + xgene_phy_get_param(pdev, "apm,tx-amplitude", 1695 + ctx->sata_param.txamplitude, 6, default_txamp, 13300); 1696 + xgene_phy_get_param(pdev, "apm,tx-pre-cursor1", 1697 + ctx->sata_param.txprecursor_cn1, 6, default_txcn1, 18200); 1698 + xgene_phy_get_param(pdev, "apm,tx-pre-cursor2", 1699 + ctx->sata_param.txprecursor_cn2, 6, default_txcn2, 18200); 1700 + xgene_phy_get_param(pdev, "apm,tx-post-cursor", 1701 + ctx->sata_param.txpostcursor_cp1, 6, default_txcp1, 18200); 1702 + xgene_phy_get_param(pdev, "apm,tx-speed", 1703 + ctx->sata_param.txspeed, 3, default_spd, 1); 1704 + for (i = 0; i < MAX_LANE; i++) 1705 + ctx->sata_param.speed[i] = 2; /* Default to Gen3 */ 1706 + 1707 + ctx->dev = &pdev->dev; 1708 +
platform_set_drvdata(pdev, ctx); 1709 + 1710 + ctx->phy = devm_phy_create(ctx->dev, &xgene_phy_ops, NULL); 1711 + if (IS_ERR(ctx->phy)) { 1712 + dev_dbg(&pdev->dev, "Failed to create PHY\n"); 1713 + rc = PTR_ERR(ctx->phy); 1714 + goto error; 1715 + } 1716 + phy_set_drvdata(ctx->phy, ctx); 1717 + 1718 + phy_provider = devm_of_phy_provider_register(ctx->dev, 1719 + xgene_phy_xlate); 1720 + if (IS_ERR(phy_provider)) { 1721 + rc = PTR_ERR(phy_provider); 1722 + goto error; 1723 + } 1724 + 1725 + return 0; 1726 + 1727 + error: 1728 + return rc; 1729 + } 1730 + 1731 + static const struct of_device_id xgene_phy_of_match[] = { 1732 + {.compatible = "apm,xgene-phy",}, 1733 + {}, 1734 + }; 1735 + MODULE_DEVICE_TABLE(of, xgene_phy_of_match); 1736 + 1737 + static struct platform_driver xgene_phy_driver = { 1738 + .probe = xgene_phy_probe, 1739 + .driver = { 1740 + .name = "xgene-phy", 1741 + .owner = THIS_MODULE, 1742 + .of_match_table = xgene_phy_of_match, 1743 + }, 1744 + }; 1745 + module_platform_driver(xgene_phy_driver); 1746 + 1747 + MODULE_DESCRIPTION("APM X-Gene Multi-Purpose PHY driver"); 1748 + MODULE_AUTHOR("Loc Ho <lho@apm.com>"); 1749 + MODULE_LICENSE("GPL v2"); 1750 + MODULE_VERSION("0.1");
-10
drivers/usb/Kconfig
··· 2 2 # USB device configuration 3 3 # 4 4 5 - # These are unused now, remove them once they are no longer selected 6 - config USB_ARCH_HAS_OHCI 7 - bool 8 - 9 5 config USB_OHCI_BIG_ENDIAN_DESC 10 6 bool 11 7 ··· 13 17 default n if STB03xxx || PPC_MPC52xx 14 18 default y 15 19 16 - config USB_ARCH_HAS_EHCI 17 - bool 18 - 19 20 config USB_EHCI_BIG_ENDIAN_MMIO 20 21 bool 21 22 22 23 config USB_EHCI_BIG_ENDIAN_DESC 23 - bool 24 - 25 - config USB_ARCH_HAS_XHCI 26 24 bool 27 25 28 26 menuconfig USB_SUPPORT
+1
drivers/usb/chipidea/Makefile
··· 10 10 # Glue/Bridge layers go here 11 11 12 12 obj-$(CONFIG_USB_CHIPIDEA) += ci_hdrc_msm.o 13 + obj-$(CONFIG_USB_CHIPIDEA) += ci_hdrc_zevio.o 13 14 14 15 # PCI doesn't provide stubs, need to check 15 16 ifneq ($(CONFIG_PCI),)
+2
drivers/usb/chipidea/bits.h
··· 50 50 #define PORTSC_PTC (0x0FUL << 16) 51 51 #define PORTSC_PHCD(d) ((d) ? BIT(22) : BIT(23)) 52 52 /* PTS and PTW for non lpm version only */ 53 + #define PORTSC_PFSC BIT(24) 53 54 #define PORTSC_PTS(d) \ 54 55 (u32)((((d) & 0x3) << 30) | (((d) & 0x4) ? BIT(25) : 0)) 55 56 #define PORTSC_PTW BIT(28) 56 57 #define PORTSC_STS BIT(29) 57 58 58 59 /* DEVLC */ 60 + #define DEVLC_PFSC BIT(23) 59 61 #define DEVLC_PSPD (0x03UL << 25) 60 62 #define DEVLC_PSPD_HS (0x02UL << 25) 61 63 #define DEVLC_PTW BIT(27)
-2
drivers/usb/chipidea/ci.h
··· 196 196 197 197 struct ci_hdrc_platform_data *platdata; 198 198 int vbus_active; 199 - /* FIXME: some day, we'll not use global phy */ 200 - bool global_phy; 201 199 struct usb_phy *transceiver; 202 200 struct usb_hcd *hcd; 203 201 struct dentry *debugfs;
+1 -1
drivers/usb/chipidea/ci_hdrc_imx.c
··· 96 96 { 97 97 struct ci_hdrc_imx_data *data; 98 98 struct ci_hdrc_platform_data pdata = { 99 - .name = "ci_hdrc_imx", 99 + .name = dev_name(&pdev->dev), 100 100 .capoffset = DEF_CAPOFFSET, 101 101 .flags = CI_HDRC_REQUIRE_TRANSCEIVER | 102 102 CI_HDRC_DISABLE_STREAMING,
+72
drivers/usb/chipidea/ci_hdrc_zevio.c
··· 1 + /* 2 + * Copyright (C) 2013 Daniel Tang <tangrs@tangrs.id.au> 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2, as 6 + * published by the Free Software Foundation. 7 + * 8 + * Based off drivers/usb/chipidea/ci_hdrc_msm.c 9 + * 10 + */ 11 + 12 + #include <linux/module.h> 13 + #include <linux/platform_device.h> 14 + #include <linux/usb/gadget.h> 15 + #include <linux/usb/chipidea.h> 16 + 17 + #include "ci.h" 18 + 19 + static struct ci_hdrc_platform_data ci_hdrc_zevio_platdata = { 20 + .name = "ci_hdrc_zevio", 21 + .flags = CI_HDRC_REGS_SHARED, 22 + .capoffset = DEF_CAPOFFSET, 23 + }; 24 + 25 + static int ci_hdrc_zevio_probe(struct platform_device *pdev) 26 + { 27 + struct platform_device *ci_pdev; 28 + 29 + dev_dbg(&pdev->dev, "ci_hdrc_zevio_probe\n"); 30 + 31 + ci_pdev = ci_hdrc_add_device(&pdev->dev, 32 + pdev->resource, pdev->num_resources, 33 + &ci_hdrc_zevio_platdata); 34 + 35 + if (IS_ERR(ci_pdev)) { 36 + dev_err(&pdev->dev, "ci_hdrc_add_device failed!\n"); 37 + return PTR_ERR(ci_pdev); 38 + } 39 + 40 + platform_set_drvdata(pdev, ci_pdev); 41 + 42 + return 0; 43 + } 44 + 45 + static int ci_hdrc_zevio_remove(struct platform_device *pdev) 46 + { 47 + struct platform_device *ci_pdev = platform_get_drvdata(pdev); 48 + 49 + ci_hdrc_remove_device(ci_pdev); 50 + 51 + return 0; 52 + } 53 + 54 + static const struct of_device_id ci_hdrc_zevio_dt_ids[] = { 55 + { .compatible = "lsi,zevio-usb", }, 56 + { /* sentinel */ } 57 + }; 58 + 59 + static struct platform_driver ci_hdrc_zevio_driver = { 60 + .probe = ci_hdrc_zevio_probe, 61 + .remove = ci_hdrc_zevio_remove, 62 + .driver = { 63 + .name = "zevio_usb", 64 + .owner = THIS_MODULE, 65 + .of_match_table = ci_hdrc_zevio_dt_ids, 66 + }, 67 + }; 68 + 69 + MODULE_DEVICE_TABLE(of, ci_hdrc_zevio_dt_ids); 70 + module_platform_driver(ci_hdrc_zevio_driver); 71 + 72 + MODULE_LICENSE("GPL v2");
+40 -47
drivers/usb/chipidea/core.c
··· 64 64 #include <linux/usb/otg.h> 65 65 #include <linux/usb/chipidea.h> 66 66 #include <linux/usb/of.h> 67 + #include <linux/of.h> 67 68 #include <linux/phy.h> 68 69 #include <linux/regulator/consumer.h> 69 70 ··· 299 298 if (ci->platdata->flags & CI_HDRC_DISABLE_STREAMING) 300 299 hw_write(ci, OP_USBMODE, USBMODE_CI_SDIS, USBMODE_CI_SDIS); 301 300 301 + if (ci->platdata->flags & CI_HDRC_FORCE_FULLSPEED) { 302 + if (ci->hw_bank.lpm) 303 + hw_write(ci, OP_DEVLC, DEVLC_PFSC, DEVLC_PFSC); 304 + else 305 + hw_write(ci, OP_PORTSC, PORTSC_PFSC, PORTSC_PFSC); 306 + } 307 + 302 308 /* USBMODE should be configured step by step */ 303 309 hw_write(ci, OP_USBMODE, USBMODE_CM, USBMODE_CM_IDLE); 304 310 hw_write(ci, OP_USBMODE, USBMODE_CM, mode); ··· 420 412 } 421 413 } 422 414 415 + if (of_usb_get_maximum_speed(dev->of_node) == USB_SPEED_FULL) 416 + platdata->flags |= CI_HDRC_FORCE_FULLSPEED; 417 + 423 418 return 0; 424 419 } 425 420 ··· 507 496 } 508 497 } 509 498 510 - static int ci_usb_phy_init(struct ci_hdrc *ci) 511 - { 512 - if (ci->platdata->phy) { 513 - ci->transceiver = ci->platdata->phy; 514 - return usb_phy_init(ci->transceiver); 515 - } else { 516 - ci->global_phy = true; 517 - ci->transceiver = usb_get_phy(USB_PHY_TYPE_USB2); 518 - if (IS_ERR(ci->transceiver)) 519 - ci->transceiver = NULL; 520 - 521 - return 0; 522 - } 523 - } 524 - 525 - static void ci_usb_phy_destroy(struct ci_hdrc *ci) 526 - { 527 - if (!ci->transceiver) 528 - return; 529 - 530 - otg_set_peripheral(ci->transceiver->otg, NULL); 531 - if (ci->global_phy) 532 - usb_put_phy(ci->transceiver); 533 - else 534 - usb_phy_shutdown(ci->transceiver); 535 - } 536 - 537 499 static int ci_hdrc_probe(struct platform_device *pdev) 538 500 { 539 501 struct device *dev = &pdev->dev; ··· 516 532 int ret; 517 533 enum usb_dr_mode dr_mode; 518 534 519 - if (!dev->platform_data) { 535 + if (!dev_get_platdata(dev)) { 520 536 dev_err(dev, "platform data missing\n"); 521 537 return -ENODEV; 522 538 } ··· 533 549 } 
534 550 535 551 ci->dev = dev; 536 - ci->platdata = dev->platform_data; 552 + ci->platdata = dev_get_platdata(dev); 537 553 ci->imx28_write_fix = !!(ci->platdata->flags & 538 554 CI_HDRC_IMX28_WRITE_FIX); 539 555 ··· 545 561 546 562 hw_phymode_configure(ci); 547 563 548 - ret = ci_usb_phy_init(ci); 564 + if (ci->platdata->phy) 565 + ci->transceiver = ci->platdata->phy; 566 + else 567 + ci->transceiver = devm_usb_get_phy(dev, USB_PHY_TYPE_USB2); 568 + 569 + if (IS_ERR(ci->transceiver)) { 570 + ret = PTR_ERR(ci->transceiver); 571 + /* 572 + * if -ENXIO is returned, it means PHY layer wasn't 573 + * enabled, so it makes no sense to return -EPROBE_DEFER 574 + * in that case, since no PHY driver will ever probe. 575 + */ 576 + if (ret == -ENXIO) 577 + return ret; 578 + 579 + dev_err(dev, "no usb2 phy configured\n"); 580 + return -EPROBE_DEFER; 581 + } 582 + 583 + ret = usb_phy_init(ci->transceiver); 549 584 if (ret) { 550 585 dev_err(dev, "unable to init phy: %d\n", ret); 551 586 return ret; ··· 575 572 ci->irq = platform_get_irq(pdev, 0); 576 573 if (ci->irq < 0) { 577 574 dev_err(dev, "missing IRQ\n"); 578 - ret = -ENODEV; 579 - goto destroy_phy; 575 + ret = ci->irq; 576 + goto deinit_phy; 580 577 } 581 578 582 579 ci_get_otg_capable(ci); ··· 593 590 ret = ci_hdrc_gadget_init(ci); 594 591 if (ret) 595 592 dev_info(dev, "doesn't support gadget\n"); 596 - if (!ret && ci->transceiver) { 597 - ret = otg_set_peripheral(ci->transceiver->otg, 598 - &ci->gadget); 599 - /* 600 - * If we implement all USB functions using chipidea drivers, 601 - * it doesn't need to call above API, meanwhile, if we only 602 - * use gadget function, calling above API is useless. 
603 - */ 604 - if (ret && ret != -ENOTSUPP) 605 - goto destroy_phy; 606 - } 607 593 } 608 594 609 595 if (!ci->roles[CI_ROLE_HOST] && !ci->roles[CI_ROLE_GADGET]) { 610 596 dev_err(dev, "no supported roles\n"); 611 597 ret = -ENODEV; 612 - goto destroy_phy; 598 + goto deinit_phy; 613 599 } 614 600 615 601 if (ci->is_otg) { ··· 655 663 free_irq(ci->irq, ci); 656 664 stop: 657 665 ci_role_destroy(ci); 658 - destroy_phy: 659 - ci_usb_phy_destroy(ci); 666 + deinit_phy: 667 + usb_phy_shutdown(ci->transceiver); 660 668 661 669 return ret; 662 670 } ··· 669 677 free_irq(ci->irq, ci); 670 678 ci_role_destroy(ci); 671 679 ci_hdrc_enter_lpm(ci, true); 672 - ci_usb_phy_destroy(ci); 680 + usb_phy_shutdown(ci->transceiver); 681 + kfree(ci->hw_bank.regmap); 673 682 674 683 return 0; 675 684 }
+160 -164
drivers/usb/chipidea/udc.c
··· 178 178 } 179 179 180 180 /** 181 - * hw_test_and_clear_setup_status: test & clear setup status (execute without 182 - * interruption) 183 - * @n: endpoint number 184 - * 185 - * This function returns setup status 186 - */ 187 - static int hw_test_and_clear_setup_status(struct ci_hdrc *ci, int n) 188 - { 189 - n = ep_to_bit(ci, n); 190 - return hw_test_and_clear(ci, OP_ENDPTSETUPSTAT, BIT(n)); 191 - } 192 - 193 - /** 194 181 * hw_ep_prime: primes endpoint (execute without interruption) 195 182 * @num: endpoint number 196 183 * @dir: endpoint direction ··· 949 962 } 950 963 951 964 /** 965 + * isr_setup_packet_handler: setup packet handler 966 + * @ci: UDC descriptor 967 + * 968 + * This function handles setup packet 969 + */ 970 + static void isr_setup_packet_handler(struct ci_hdrc *ci) 971 + __releases(ci->lock) 972 + __acquires(ci->lock) 973 + { 974 + struct ci_hw_ep *hwep = &ci->ci_hw_ep[0]; 975 + struct usb_ctrlrequest req; 976 + int type, num, dir, err = -EINVAL; 977 + u8 tmode = 0; 978 + 979 + /* 980 + * Flush data and handshake transactions of previous 981 + * setup packet. 982 + */ 983 + _ep_nuke(ci->ep0out); 984 + _ep_nuke(ci->ep0in); 985 + 986 + /* read_setup_packet */ 987 + do { 988 + hw_test_and_set_setup_guard(ci); 989 + memcpy(&req, &hwep->qh.ptr->setup, sizeof(req)); 990 + } while (!hw_test_and_clear_setup_guard(ci)); 991 + 992 + type = req.bRequestType; 993 + 994 + ci->ep0_dir = (type & USB_DIR_IN) ? 
TX : RX; 995 + 996 + switch (req.bRequest) { 997 + case USB_REQ_CLEAR_FEATURE: 998 + if (type == (USB_DIR_OUT|USB_RECIP_ENDPOINT) && 999 + le16_to_cpu(req.wValue) == 1000 + USB_ENDPOINT_HALT) { 1001 + if (req.wLength != 0) 1002 + break; 1003 + num = le16_to_cpu(req.wIndex); 1004 + dir = num & USB_ENDPOINT_DIR_MASK; 1005 + num &= USB_ENDPOINT_NUMBER_MASK; 1006 + if (dir) /* TX */ 1007 + num += ci->hw_ep_max / 2; 1008 + if (!ci->ci_hw_ep[num].wedge) { 1009 + spin_unlock(&ci->lock); 1010 + err = usb_ep_clear_halt( 1011 + &ci->ci_hw_ep[num].ep); 1012 + spin_lock(&ci->lock); 1013 + if (err) 1014 + break; 1015 + } 1016 + err = isr_setup_status_phase(ci); 1017 + } else if (type == (USB_DIR_OUT|USB_RECIP_DEVICE) && 1018 + le16_to_cpu(req.wValue) == 1019 + USB_DEVICE_REMOTE_WAKEUP) { 1020 + if (req.wLength != 0) 1021 + break; 1022 + ci->remote_wakeup = 0; 1023 + err = isr_setup_status_phase(ci); 1024 + } else { 1025 + goto delegate; 1026 + } 1027 + break; 1028 + case USB_REQ_GET_STATUS: 1029 + if (type != (USB_DIR_IN|USB_RECIP_DEVICE) && 1030 + type != (USB_DIR_IN|USB_RECIP_ENDPOINT) && 1031 + type != (USB_DIR_IN|USB_RECIP_INTERFACE)) 1032 + goto delegate; 1033 + if (le16_to_cpu(req.wLength) != 2 || 1034 + le16_to_cpu(req.wValue) != 0) 1035 + break; 1036 + err = isr_get_status_response(ci, &req); 1037 + break; 1038 + case USB_REQ_SET_ADDRESS: 1039 + if (type != (USB_DIR_OUT|USB_RECIP_DEVICE)) 1040 + goto delegate; 1041 + if (le16_to_cpu(req.wLength) != 0 || 1042 + le16_to_cpu(req.wIndex) != 0) 1043 + break; 1044 + ci->address = (u8)le16_to_cpu(req.wValue); 1045 + ci->setaddr = true; 1046 + err = isr_setup_status_phase(ci); 1047 + break; 1048 + case USB_REQ_SET_FEATURE: 1049 + if (type == (USB_DIR_OUT|USB_RECIP_ENDPOINT) && 1050 + le16_to_cpu(req.wValue) == 1051 + USB_ENDPOINT_HALT) { 1052 + if (req.wLength != 0) 1053 + break; 1054 + num = le16_to_cpu(req.wIndex); 1055 + dir = num & USB_ENDPOINT_DIR_MASK; 1056 + num &= USB_ENDPOINT_NUMBER_MASK; 1057 + if (dir) /* TX */ 1058 
+ num += ci->hw_ep_max / 2; 1059 + 1060 + spin_unlock(&ci->lock); 1061 + err = usb_ep_set_halt(&ci->ci_hw_ep[num].ep); 1062 + spin_lock(&ci->lock); 1063 + if (!err) 1064 + isr_setup_status_phase(ci); 1065 + } else if (type == (USB_DIR_OUT|USB_RECIP_DEVICE)) { 1066 + if (req.wLength != 0) 1067 + break; 1068 + switch (le16_to_cpu(req.wValue)) { 1069 + case USB_DEVICE_REMOTE_WAKEUP: 1070 + ci->remote_wakeup = 1; 1071 + err = isr_setup_status_phase(ci); 1072 + break; 1073 + case USB_DEVICE_TEST_MODE: 1074 + tmode = le16_to_cpu(req.wIndex) >> 8; 1075 + switch (tmode) { 1076 + case TEST_J: 1077 + case TEST_K: 1078 + case TEST_SE0_NAK: 1079 + case TEST_PACKET: 1080 + case TEST_FORCE_EN: 1081 + ci->test_mode = tmode; 1082 + err = isr_setup_status_phase( 1083 + ci); 1084 + break; 1085 + default: 1086 + break; 1087 + } 1088 + default: 1089 + goto delegate; 1090 + } 1091 + } else { 1092 + goto delegate; 1093 + } 1094 + break; 1095 + default: 1096 + delegate: 1097 + if (req.wLength == 0) /* no data phase */ 1098 + ci->ep0_dir = TX; 1099 + 1100 + spin_unlock(&ci->lock); 1101 + err = ci->driver->setup(&ci->gadget, &req); 1102 + spin_lock(&ci->lock); 1103 + break; 1104 + } 1105 + 1106 + if (err < 0) { 1107 + spin_unlock(&ci->lock); 1108 + if (usb_ep_set_halt(&hwep->ep)) 1109 + dev_err(ci->dev, "error: ep_set_halt\n"); 1110 + spin_lock(&ci->lock); 1111 + } 1112 + } 1113 + 1114 + /** 952 1115 * isr_tr_complete_handler: transaction complete interrupt handler 953 1116 * @ci: UDC descriptor 954 1117 * ··· 1109 972 __acquires(ci->lock) 1110 973 { 1111 974 unsigned i; 1112 - u8 tmode = 0; 975 + int err; 1113 976 1114 977 for (i = 0; i < ci->hw_ep_max; i++) { 1115 978 struct ci_hw_ep *hwep = &ci->ci_hw_ep[i]; 1116 - int type, num, dir, err = -EINVAL; 1117 - struct usb_ctrlrequest req; 1118 979 1119 980 if (hwep->ep.desc == NULL) 1120 981 continue; /* not configured */ ··· 1132 997 } 1133 998 } 1134 999 1135 - if (hwep->type != USB_ENDPOINT_XFER_CONTROL || 1136 - 
!hw_test_and_clear_setup_status(ci, i)) 1137 - continue; 1138 - 1139 - if (i != 0) { 1140 - dev_warn(ci->dev, "ctrl traffic at endpoint %d\n", i); 1141 - continue; 1142 - } 1143 - 1144 - /* 1145 - * Flush data and handshake transactions of previous 1146 - * setup packet. 1147 - */ 1148 - _ep_nuke(ci->ep0out); 1149 - _ep_nuke(ci->ep0in); 1150 - 1151 - /* read_setup_packet */ 1152 - do { 1153 - hw_test_and_set_setup_guard(ci); 1154 - memcpy(&req, &hwep->qh.ptr->setup, sizeof(req)); 1155 - } while (!hw_test_and_clear_setup_guard(ci)); 1156 - 1157 - type = req.bRequestType; 1158 - 1159 - ci->ep0_dir = (type & USB_DIR_IN) ? TX : RX; 1160 - 1161 - switch (req.bRequest) { 1162 - case USB_REQ_CLEAR_FEATURE: 1163 - if (type == (USB_DIR_OUT|USB_RECIP_ENDPOINT) && 1164 - le16_to_cpu(req.wValue) == 1165 - USB_ENDPOINT_HALT) { 1166 - if (req.wLength != 0) 1167 - break; 1168 - num = le16_to_cpu(req.wIndex); 1169 - dir = num & USB_ENDPOINT_DIR_MASK; 1170 - num &= USB_ENDPOINT_NUMBER_MASK; 1171 - if (dir) /* TX */ 1172 - num += ci->hw_ep_max/2; 1173 - if (!ci->ci_hw_ep[num].wedge) { 1174 - spin_unlock(&ci->lock); 1175 - err = usb_ep_clear_halt( 1176 - &ci->ci_hw_ep[num].ep); 1177 - spin_lock(&ci->lock); 1178 - if (err) 1179 - break; 1180 - } 1181 - err = isr_setup_status_phase(ci); 1182 - } else if (type == (USB_DIR_OUT|USB_RECIP_DEVICE) && 1183 - le16_to_cpu(req.wValue) == 1184 - USB_DEVICE_REMOTE_WAKEUP) { 1185 - if (req.wLength != 0) 1186 - break; 1187 - ci->remote_wakeup = 0; 1188 - err = isr_setup_status_phase(ci); 1189 - } else { 1190 - goto delegate; 1191 - } 1192 - break; 1193 - case USB_REQ_GET_STATUS: 1194 - if (type != (USB_DIR_IN|USB_RECIP_DEVICE) && 1195 - type != (USB_DIR_IN|USB_RECIP_ENDPOINT) && 1196 - type != (USB_DIR_IN|USB_RECIP_INTERFACE)) 1197 - goto delegate; 1198 - if (le16_to_cpu(req.wLength) != 2 || 1199 - le16_to_cpu(req.wValue) != 0) 1200 - break; 1201 - err = isr_get_status_response(ci, &req); 1202 - break; 1203 - case USB_REQ_SET_ADDRESS: 1204 - if 
(type != (USB_DIR_OUT|USB_RECIP_DEVICE)) 1205 - goto delegate; 1206 - if (le16_to_cpu(req.wLength) != 0 || 1207 - le16_to_cpu(req.wIndex) != 0) 1208 - break; 1209 - ci->address = (u8)le16_to_cpu(req.wValue); 1210 - ci->setaddr = true; 1211 - err = isr_setup_status_phase(ci); 1212 - break; 1213 - case USB_REQ_SET_FEATURE: 1214 - if (type == (USB_DIR_OUT|USB_RECIP_ENDPOINT) && 1215 - le16_to_cpu(req.wValue) == 1216 - USB_ENDPOINT_HALT) { 1217 - if (req.wLength != 0) 1218 - break; 1219 - num = le16_to_cpu(req.wIndex); 1220 - dir = num & USB_ENDPOINT_DIR_MASK; 1221 - num &= USB_ENDPOINT_NUMBER_MASK; 1222 - if (dir) /* TX */ 1223 - num += ci->hw_ep_max/2; 1224 - 1225 - spin_unlock(&ci->lock); 1226 - err = usb_ep_set_halt(&ci->ci_hw_ep[num].ep); 1227 - spin_lock(&ci->lock); 1228 - if (!err) 1229 - isr_setup_status_phase(ci); 1230 - } else if (type == (USB_DIR_OUT|USB_RECIP_DEVICE)) { 1231 - if (req.wLength != 0) 1232 - break; 1233 - switch (le16_to_cpu(req.wValue)) { 1234 - case USB_DEVICE_REMOTE_WAKEUP: 1235 - ci->remote_wakeup = 1; 1236 - err = isr_setup_status_phase(ci); 1237 - break; 1238 - case USB_DEVICE_TEST_MODE: 1239 - tmode = le16_to_cpu(req.wIndex) >> 8; 1240 - switch (tmode) { 1241 - case TEST_J: 1242 - case TEST_K: 1243 - case TEST_SE0_NAK: 1244 - case TEST_PACKET: 1245 - case TEST_FORCE_EN: 1246 - ci->test_mode = tmode; 1247 - err = isr_setup_status_phase( 1248 - ci); 1249 - break; 1250 - default: 1251 - break; 1252 - } 1253 - default: 1254 - goto delegate; 1255 - } 1256 - } else { 1257 - goto delegate; 1258 - } 1259 - break; 1260 - default: 1261 - delegate: 1262 - if (req.wLength == 0) /* no data phase */ 1263 - ci->ep0_dir = TX; 1264 - 1265 - spin_unlock(&ci->lock); 1266 - err = ci->driver->setup(&ci->gadget, &req); 1267 - spin_lock(&ci->lock); 1268 - break; 1269 - } 1270 - 1271 - if (err < 0) { 1272 - spin_unlock(&ci->lock); 1273 - if (usb_ep_set_halt(&hwep->ep)) 1274 - dev_err(ci->dev, "error: ep_set_halt\n"); 1275 - spin_lock(&ci->lock); 1276 - } 1000 
+ /* Only handle setup packet below */ 1001 + if (i == 0 && 1002 + hw_test_and_clear(ci, OP_ENDPTSETUPSTAT, BIT(0))) 1003 + isr_setup_packet_handler(ci); 1277 1004 } 1278 1005 } 1279 1006 ··· 1189 1192 hwep->qh.ptr->cap = cpu_to_le32(cap); 1190 1193 1191 1194 hwep->qh.ptr->td.next |= cpu_to_le32(TD_TERMINATE); /* needed? */ 1195 + 1196 + if (hwep->num != 0 && hwep->type == USB_ENDPOINT_XFER_CONTROL) { 1197 + dev_err(hwep->ci->dev, "Set control xfer at non-ep0\n"); 1198 + retval = -EINVAL; 1199 + } 1192 1200 1193 1201 /* 1194 1202 * Enable endpoints in the HW other than ep0 as ep0 ··· 1839 1837 1840 1838 dma_pool_destroy(ci->td_pool); 1841 1839 dma_pool_destroy(ci->qh_pool); 1842 - 1843 - if (ci->transceiver) { 1844 - otg_set_peripheral(ci->transceiver->otg, NULL); 1845 - if (ci->global_phy) 1846 - usb_put_phy(ci->transceiver); 1847 - } 1848 1840 } 1849 1841 1850 1842 static int udc_id_switch_for_device(struct ci_hdrc *ci)
-1
drivers/usb/core/config.c
··· 10 10 11 11 12 12 #define USB_MAXALTSETTING 128 /* Hard limit */ 13 - #define USB_MAXENDPOINTS 30 /* Hard limit */ 14 13 15 14 #define USB_MAXCONFIG 8 /* Arbitrary limit */ 16 15
+159 -15
drivers/usb/core/devio.c
··· 769 769 return ret; 770 770 } 771 771 772 + static struct usb_host_endpoint *ep_to_host_endpoint(struct usb_device *dev, 773 + unsigned char ep) 774 + { 775 + if (ep & USB_ENDPOINT_DIR_MASK) 776 + return dev->ep_in[ep & USB_ENDPOINT_NUMBER_MASK]; 777 + else 778 + return dev->ep_out[ep & USB_ENDPOINT_NUMBER_MASK]; 779 + } 780 + 781 + static int parse_usbdevfs_streams(struct usb_dev_state *ps, 782 + struct usbdevfs_streams __user *streams, 783 + unsigned int *num_streams_ret, 784 + unsigned int *num_eps_ret, 785 + struct usb_host_endpoint ***eps_ret, 786 + struct usb_interface **intf_ret) 787 + { 788 + unsigned int i, num_streams, num_eps; 789 + struct usb_host_endpoint **eps; 790 + struct usb_interface *intf = NULL; 791 + unsigned char ep; 792 + int ifnum, ret; 793 + 794 + if (get_user(num_streams, &streams->num_streams) || 795 + get_user(num_eps, &streams->num_eps)) 796 + return -EFAULT; 797 + 798 + if (num_eps < 1 || num_eps > USB_MAXENDPOINTS) 799 + return -EINVAL; 800 + 801 + /* The XHCI controller allows max 2 ^ 16 streams */ 802 + if (num_streams_ret && (num_streams < 2 || num_streams > 65536)) 803 + return -EINVAL; 804 + 805 + eps = kmalloc(num_eps * sizeof(*eps), GFP_KERNEL); 806 + if (!eps) 807 + return -ENOMEM; 808 + 809 + for (i = 0; i < num_eps; i++) { 810 + if (get_user(ep, &streams->eps[i])) { 811 + ret = -EFAULT; 812 + goto error; 813 + } 814 + eps[i] = ep_to_host_endpoint(ps->dev, ep); 815 + if (!eps[i]) { 816 + ret = -EINVAL; 817 + goto error; 818 + } 819 + 820 + /* usb_alloc/free_streams operate on an usb_interface */ 821 + ifnum = findintfep(ps->dev, ep); 822 + if (ifnum < 0) { 823 + ret = ifnum; 824 + goto error; 825 + } 826 + 827 + if (i == 0) { 828 + ret = checkintf(ps, ifnum); 829 + if (ret < 0) 830 + goto error; 831 + intf = usb_ifnum_to_if(ps->dev, ifnum); 832 + } else { 833 + /* Verify all eps belong to the same interface */ 834 + if (ifnum != intf->altsetting->desc.bInterfaceNumber) { 835 + ret = -EINVAL; 836 + goto error; 837 + } 838 
+ } 839 + } 840 + 841 + if (num_streams_ret) 842 + *num_streams_ret = num_streams; 843 + *num_eps_ret = num_eps; 844 + *eps_ret = eps; 845 + *intf_ret = intf; 846 + 847 + return 0; 848 + 849 + error: 850 + kfree(eps); 851 + return ret; 852 + } 853 + 772 854 static int match_devt(struct device *dev, void *data) 773 855 { 774 856 return dev->devt == (dev_t) (unsigned long) data; ··· 1125 1043 return ret; 1126 1044 } 1127 1045 1046 + static void check_reset_of_active_ep(struct usb_device *udev, 1047 + unsigned int epnum, char *ioctl_name) 1048 + { 1049 + struct usb_host_endpoint **eps; 1050 + struct usb_host_endpoint *ep; 1051 + 1052 + eps = (epnum & USB_DIR_IN) ? udev->ep_in : udev->ep_out; 1053 + ep = eps[epnum & 0x0f]; 1054 + if (ep && !list_empty(&ep->urb_list)) 1055 + dev_warn(&udev->dev, "Process %d (%s) called USBDEVFS_%s for active endpoint 0x%02x\n", 1056 + task_pid_nr(current), current->comm, 1057 + ioctl_name, epnum); 1058 + } 1059 + 1128 1060 static int proc_resetep(struct usb_dev_state *ps, void __user *arg) 1129 1061 { 1130 1062 unsigned int ep; ··· 1152 1056 ret = checkintf(ps, ret); 1153 1057 if (ret) 1154 1058 return ret; 1059 + check_reset_of_active_ep(ps->dev, ep, "RESETEP"); 1155 1060 usb_reset_endpoint(ps->dev, ep); 1156 1061 return 0; 1157 1062 } ··· 1171 1074 ret = checkintf(ps, ret); 1172 1075 if (ret) 1173 1076 return ret; 1077 + check_reset_of_active_ep(ps->dev, ep, "CLEAR_HALT"); 1174 1078 if (ep & USB_DIR_IN) 1175 1079 pipe = usb_rcvbulkpipe(ps->dev, ep & 0x7f); 1176 1080 else ··· 1225 1127 return -EFAULT; 1226 1128 if ((ret = checkintf(ps, setintf.interface))) 1227 1129 return ret; 1130 + 1131 + destroy_async_on_interface(ps, setintf.interface); 1132 + 1228 1133 return usb_set_interface(ps->dev, setintf.interface, 1229 1134 setintf.altsetting); 1230 1135 } ··· 1290 1189 struct usb_ctrlrequest *dr = NULL; 1291 1190 unsigned int u, totlen, isofrmlen; 1292 1191 int i, ret, is_in, num_sgs = 0, ifnum = -1; 1192 + int number_of_packets = 0; 1193 
+ unsigned int stream_id = 0; 1293 1194 void *buf; 1294 1195 1295 1196 if (uurb->flags & ~(USBDEVFS_URB_ISO_ASAP | ··· 1312 1209 if (ret) 1313 1210 return ret; 1314 1211 } 1315 - if ((uurb->endpoint & USB_ENDPOINT_DIR_MASK) != 0) { 1316 - is_in = 1; 1317 - ep = ps->dev->ep_in[uurb->endpoint & USB_ENDPOINT_NUMBER_MASK]; 1318 - } else { 1319 - is_in = 0; 1320 - ep = ps->dev->ep_out[uurb->endpoint & USB_ENDPOINT_NUMBER_MASK]; 1321 - } 1212 + ep = ep_to_host_endpoint(ps->dev, uurb->endpoint); 1322 1213 if (!ep) 1323 1214 return -ENOENT; 1215 + is_in = (uurb->endpoint & USB_ENDPOINT_DIR_MASK) != 0; 1324 1216 1325 1217 u = 0; 1326 1218 switch(uurb->type) { ··· 1340 1242 le16_to_cpup(&dr->wIndex)); 1341 1243 if (ret) 1342 1244 goto error; 1343 - uurb->number_of_packets = 0; 1344 1245 uurb->buffer_length = le16_to_cpup(&dr->wLength); 1345 1246 uurb->buffer += 8; 1346 1247 if ((dr->bRequestType & USB_DIR_IN) && uurb->buffer_length) { ··· 1369 1272 uurb->type = USBDEVFS_URB_TYPE_INTERRUPT; 1370 1273 goto interrupt_urb; 1371 1274 } 1372 - uurb->number_of_packets = 0; 1373 1275 num_sgs = DIV_ROUND_UP(uurb->buffer_length, USB_SG_SIZE); 1374 1276 if (num_sgs == 1 || num_sgs > ps->dev->bus->sg_tablesize) 1375 1277 num_sgs = 0; 1278 + if (ep->streams) 1279 + stream_id = uurb->stream_id; 1376 1280 break; 1377 1281 1378 1282 case USBDEVFS_URB_TYPE_INTERRUPT: 1379 1283 if (!usb_endpoint_xfer_int(&ep->desc)) 1380 1284 return -EINVAL; 1381 1285 interrupt_urb: 1382 - uurb->number_of_packets = 0; 1383 1286 break; 1384 1287 1385 1288 case USBDEVFS_URB_TYPE_ISO: ··· 1389 1292 return -EINVAL; 1390 1293 if (!usb_endpoint_xfer_isoc(&ep->desc)) 1391 1294 return -EINVAL; 1295 + number_of_packets = uurb->number_of_packets; 1392 1296 isofrmlen = sizeof(struct usbdevfs_iso_packet_desc) * 1393 - uurb->number_of_packets; 1297 + number_of_packets; 1394 1298 if (!(isopkt = kmalloc(isofrmlen, GFP_KERNEL))) 1395 1299 return -ENOMEM; 1396 1300 if (copy_from_user(isopkt, iso_frame_desc, isofrmlen)) { 1397 
1301 ret = -EFAULT; 1398 1302 goto error; 1399 1303 } 1400 - for (totlen = u = 0; u < uurb->number_of_packets; u++) { 1304 + for (totlen = u = 0; u < number_of_packets; u++) { 1401 1305 /* 1402 1306 * arbitrary limit need for USB 3.0 1403 1307 * bMaxBurst (0~15 allowed, 1~16 packets) ··· 1429 1331 ret = -EFAULT; 1430 1332 goto error; 1431 1333 } 1432 - as = alloc_async(uurb->number_of_packets); 1334 + as = alloc_async(number_of_packets); 1433 1335 if (!as) { 1434 1336 ret = -ENOMEM; 1435 1337 goto error; ··· 1523 1425 as->urb->setup_packet = (unsigned char *)dr; 1524 1426 dr = NULL; 1525 1427 as->urb->start_frame = uurb->start_frame; 1526 - as->urb->number_of_packets = uurb->number_of_packets; 1428 + as->urb->number_of_packets = number_of_packets; 1429 + as->urb->stream_id = stream_id; 1527 1430 if (uurb->type == USBDEVFS_URB_TYPE_ISO || 1528 1431 ps->dev->speed == USB_SPEED_HIGH) 1529 1432 as->urb->interval = 1 << min(15, ep->desc.bInterval - 1); ··· 1532 1433 as->urb->interval = ep->desc.bInterval; 1533 1434 as->urb->context = as; 1534 1435 as->urb->complete = async_completed; 1535 - for (totlen = u = 0; u < uurb->number_of_packets; u++) { 1436 + for (totlen = u = 0; u < number_of_packets; u++) { 1536 1437 as->urb->iso_frame_desc[u].offset = totlen; 1537 1438 as->urb->iso_frame_desc[u].length = isopkt[u].length; 1538 1439 totlen += isopkt[u].length; ··· 2082 1983 return claimintf(ps, dc.interface); 2083 1984 } 2084 1985 1986 + static int proc_alloc_streams(struct usb_dev_state *ps, void __user *arg) 1987 + { 1988 + unsigned num_streams, num_eps; 1989 + struct usb_host_endpoint **eps; 1990 + struct usb_interface *intf; 1991 + int r; 1992 + 1993 + r = parse_usbdevfs_streams(ps, arg, &num_streams, &num_eps, 1994 + &eps, &intf); 1995 + if (r) 1996 + return r; 1997 + 1998 + destroy_async_on_interface(ps, 1999 + intf->altsetting[0].desc.bInterfaceNumber); 2000 + 2001 + r = usb_alloc_streams(intf, eps, num_eps, num_streams, GFP_KERNEL); 2002 + kfree(eps); 2003 + return 
r; 2004 + } 2005 + 2006 + static int proc_free_streams(struct usb_dev_state *ps, void __user *arg) 2007 + { 2008 + unsigned num_eps; 2009 + struct usb_host_endpoint **eps; 2010 + struct usb_interface *intf; 2011 + int r; 2012 + 2013 + r = parse_usbdevfs_streams(ps, arg, NULL, &num_eps, &eps, &intf); 2014 + if (r) 2015 + return r; 2016 + 2017 + destroy_async_on_interface(ps, 2018 + intf->altsetting[0].desc.bInterfaceNumber); 2019 + 2020 + r = usb_free_streams(intf, eps, num_eps, GFP_KERNEL); 2021 + kfree(eps); 2022 + return r; 2023 + } 2024 + 2085 2025 /* 2086 2026 * NOTE: All requests here that have interface numbers as parameters 2087 2027 * are assuming that somehow the configuration has been prevented from ··· 2296 2158 break; 2297 2159 case USBDEVFS_DISCONNECT_CLAIM: 2298 2160 ret = proc_disconnect_claim(ps, p); 2161 + break; 2162 + case USBDEVFS_ALLOC_STREAMS: 2163 + ret = proc_alloc_streams(ps, p); 2164 + break; 2165 + case USBDEVFS_FREE_STREAMS: 2166 + ret = proc_free_streams(ps, p); 2299 2167 break; 2300 2168 } 2301 2169 usb_unlock_device(dev);
+86 -47
drivers/usb/core/driver.c
··· 312 312 return error; 313 313 } 314 314 315 - id = usb_match_id(intf, driver->id_table); 315 + id = usb_match_dynamic_id(intf, driver); 316 316 if (!id) 317 - id = usb_match_dynamic_id(intf, driver); 317 + id = usb_match_id(intf, driver->id_table); 318 318 if (!id) 319 319 return error; 320 320 ··· 400 400 { 401 401 struct usb_driver *driver = to_usb_driver(dev->driver); 402 402 struct usb_interface *intf = to_usb_interface(dev); 403 + struct usb_host_endpoint *ep, **eps = NULL; 403 404 struct usb_device *udev; 404 - int error, r, lpm_disable_error; 405 + int i, j, error, r, lpm_disable_error; 405 406 406 407 intf->condition = USB_INTERFACE_UNBINDING; 407 408 ··· 425 424 426 425 driver->disconnect(intf); 427 426 usb_cancel_queued_reset(intf); 427 + 428 + /* Free streams */ 429 + for (i = 0, j = 0; i < intf->cur_altsetting->desc.bNumEndpoints; i++) { 430 + ep = &intf->cur_altsetting->endpoint[i]; 431 + if (ep->streams == 0) 432 + continue; 433 + if (j == 0) { 434 + eps = kmalloc(USB_MAXENDPOINTS * sizeof(void *), 435 + GFP_KERNEL); 436 + if (!eps) { 437 + dev_warn(dev, "oom, leaking streams\n"); 438 + break; 439 + } 440 + } 441 + eps[j++] = ep; 442 + } 443 + if (j) { 444 + usb_free_streams(intf, eps, j, GFP_KERNEL); 445 + kfree(eps); 446 + } 428 447 429 448 /* Reset other interface state. 430 449 * We cannot do a Set-Interface if the device is suspended or ··· 1011 990 * it doesn't support pre_reset/post_reset/reset_resume or 1012 991 * because it doesn't support suspend/resume. 1013 992 * 1014 - * The caller must hold @intf's device's lock, but not its pm_mutex 1015 - * and not @intf->dev.sem. 993 + * The caller must hold @intf's device's lock, but not @intf's lock. 1016 994 */ 1017 995 void usb_forced_unbind_intf(struct usb_interface *intf) 1018 996 { ··· 1024 1004 intf->needs_binding = 1; 1025 1005 } 1026 1006 1007 + /* 1008 + * Unbind drivers for @udev's marked interfaces. 
These interfaces have 1009 + * the needs_binding flag set, for example by usb_resume_interface(). 1010 + * 1011 + * The caller must hold @udev's device lock. 1012 + */ 1013 + static void unbind_marked_interfaces(struct usb_device *udev) 1014 + { 1015 + struct usb_host_config *config; 1016 + int i; 1017 + struct usb_interface *intf; 1018 + 1019 + config = udev->actconfig; 1020 + if (config) { 1021 + for (i = 0; i < config->desc.bNumInterfaces; ++i) { 1022 + intf = config->interface[i]; 1023 + if (intf->dev.driver && intf->needs_binding) 1024 + usb_forced_unbind_intf(intf); 1025 + } 1026 + } 1027 + } 1028 + 1027 1029 /* Delayed forced unbinding of a USB interface driver and scan 1028 1030 * for rebinding. 1029 1031 * 1030 - * The caller must hold @intf's device's lock, but not its pm_mutex 1031 - * and not @intf->dev.sem. 1032 + * The caller must hold @intf's device's lock, but not @intf's lock. 1032 1033 * 1033 1034 * Note: Rebinds will be skipped if a system sleep transition is in 1034 1035 * progress and the PM "complete" callback hasn't occurred yet. 1035 1036 */ 1036 - void usb_rebind_intf(struct usb_interface *intf) 1037 + static void usb_rebind_intf(struct usb_interface *intf) 1037 1038 { 1038 1039 int rc; 1039 1040 ··· 1069 1028 if (rc < 0) 1070 1029 dev_warn(&intf->dev, "rebind failed: %d\n", rc); 1071 1030 } 1031 + } 1032 + 1033 + /* 1034 + * Rebind drivers to @udev's marked interfaces. These interfaces have 1035 + * the needs_binding flag set. 1036 + * 1037 + * The caller must hold @udev's device lock. 
1038 + */ 1039 + static void rebind_marked_interfaces(struct usb_device *udev) 1040 + { 1041 + struct usb_host_config *config; 1042 + int i; 1043 + struct usb_interface *intf; 1044 + 1045 + config = udev->actconfig; 1046 + if (config) { 1047 + for (i = 0; i < config->desc.bNumInterfaces; ++i) { 1048 + intf = config->interface[i]; 1049 + if (intf->needs_binding) 1050 + usb_rebind_intf(intf); 1051 + } 1052 + } 1053 + } 1054 + 1055 + /* 1056 + * Unbind all of @udev's marked interfaces and then rebind all of them. 1057 + * This ordering is necessary because some drivers claim several interfaces 1058 + * when they are first probed. 1059 + * 1060 + * The caller must hold @udev's device lock. 1061 + */ 1062 + void usb_unbind_and_rebind_marked_interfaces(struct usb_device *udev) 1063 + { 1064 + unbind_marked_interfaces(udev); 1065 + rebind_marked_interfaces(udev); 1072 1066 } 1073 1067 1074 1068 #ifdef CONFIG_PM ··· 1131 1055 if (!drv->suspend || !drv->resume) 1132 1056 usb_forced_unbind_intf(intf); 1133 1057 } 1134 - } 1135 - } 1136 - } 1137 - 1138 - /* Unbind drivers for @udev's interfaces that failed to support reset-resume. 1139 - * These interfaces have the needs_binding flag set by usb_resume_interface(). 1140 - * 1141 - * The caller must hold @udev's device lock. 
1142 - */ 1143 - static void unbind_no_reset_resume_drivers_interfaces(struct usb_device *udev) 1144 - { 1145 - struct usb_host_config *config; 1146 - int i; 1147 - struct usb_interface *intf; 1148 - 1149 - config = udev->actconfig; 1150 - if (config) { 1151 - for (i = 0; i < config->desc.bNumInterfaces; ++i) { 1152 - intf = config->interface[i]; 1153 - if (intf->dev.driver && intf->needs_binding) 1154 - usb_forced_unbind_intf(intf); 1155 - } 1156 - } 1157 - } 1158 - 1159 - static void do_rebind_interfaces(struct usb_device *udev) 1160 - { 1161 - struct usb_host_config *config; 1162 - int i; 1163 - struct usb_interface *intf; 1164 - 1165 - config = udev->actconfig; 1166 - if (config) { 1167 - for (i = 0; i < config->desc.bNumInterfaces; ++i) { 1168 - intf = config->interface[i]; 1169 - if (intf->needs_binding) 1170 - usb_rebind_intf(intf); 1171 1058 } 1172 1059 } 1173 1060 } ··· 1459 1420 * whose needs_binding flag is set 1460 1421 */ 1461 1422 if (udev->state != USB_STATE_NOTATTACHED) 1462 - do_rebind_interfaces(udev); 1423 + rebind_marked_interfaces(udev); 1463 1424 return 0; 1464 1425 } 1465 1426 ··· 1481 1442 pm_runtime_disable(dev); 1482 1443 pm_runtime_set_active(dev); 1483 1444 pm_runtime_enable(dev); 1484 - unbind_no_reset_resume_drivers_interfaces(udev); 1445 + unbind_marked_interfaces(udev); 1485 1446 } 1486 1447 1487 1448 /* Avoid PM error messages for devices disconnected while suspended
+27 -10
drivers/usb/core/hcd.c
··· 2049 2049 { 2050 2050 struct usb_hcd *hcd; 2051 2051 struct usb_device *dev; 2052 - int i; 2052 + int i, ret; 2053 2053 2054 2054 dev = interface_to_usbdev(interface); 2055 2055 hcd = bus_to_hcd(dev->bus); ··· 2058 2058 if (dev->speed != USB_SPEED_SUPER) 2059 2059 return -EINVAL; 2060 2060 2061 - /* Streams only apply to bulk endpoints. */ 2062 - for (i = 0; i < num_eps; i++) 2061 + for (i = 0; i < num_eps; i++) { 2062 + /* Streams only apply to bulk endpoints. */ 2063 2063 if (!usb_endpoint_xfer_bulk(&eps[i]->desc)) 2064 2064 return -EINVAL; 2065 + /* Re-alloc is not allowed */ 2066 + if (eps[i]->streams) 2067 + return -EINVAL; 2068 + } 2065 2069 2066 - return hcd->driver->alloc_streams(hcd, dev, eps, num_eps, 2070 + ret = hcd->driver->alloc_streams(hcd, dev, eps, num_eps, 2067 2071 num_streams, mem_flags); 2072 + if (ret < 0) 2073 + return ret; 2074 + 2075 + for (i = 0; i < num_eps; i++) 2076 + eps[i]->streams = ret; 2077 + 2078 + return ret; 2068 2079 } 2069 2080 EXPORT_SYMBOL_GPL(usb_alloc_streams); 2070 2081 ··· 2089 2078 * Reverts a group of bulk endpoints back to not using stream IDs. 2090 2079 * Can fail if we are given bad arguments, or HCD is broken. 2091 2080 * 2092 - * Return: On success, the number of allocated streams. On failure, a negative 2093 - * error code. 2081 + * Return: 0 on success. On failure, a negative error code. 2094 2082 */ 2095 2083 int usb_free_streams(struct usb_interface *interface, 2096 2084 struct usb_host_endpoint **eps, unsigned int num_eps, ··· 2097 2087 { 2098 2088 struct usb_hcd *hcd; 2099 2089 struct usb_device *dev; 2100 - int i; 2090 + int i, ret; 2101 2091 2102 2092 dev = interface_to_usbdev(interface); 2103 2093 hcd = bus_to_hcd(dev->bus); 2104 2094 if (dev->speed != USB_SPEED_SUPER) 2105 2095 return -EINVAL; 2106 2096 2107 - /* Streams only apply to bulk endpoints. 
*/ 2097 + /* Double-free is not allowed */ 2108 2098 for (i = 0; i < num_eps; i++) 2109 - if (!eps[i] || !usb_endpoint_xfer_bulk(&eps[i]->desc)) 2099 + if (!eps[i] || !eps[i]->streams) 2110 2100 return -EINVAL; 2111 2101 2112 - return hcd->driver->free_streams(hcd, dev, eps, num_eps, mem_flags); 2102 + ret = hcd->driver->free_streams(hcd, dev, eps, num_eps, mem_flags); 2103 + if (ret < 0) 2104 + return ret; 2105 + 2106 + for (i = 0; i < num_eps; i++) 2107 + eps[i]->streams = 0; 2108 + 2109 + return ret; 2113 2110 } 2114 2111 EXPORT_SYMBOL_GPL(usb_free_streams); 2115 2112
+74 -23
drivers/usb/core/hub.c
··· 141 141 return 0; 142 142 } 143 143 144 - /* All USB 3.0 must support LPM, but we need their max exit latency 145 - * information from the SuperSpeed Extended Capabilities BOS descriptor. 144 + /* 145 + * According to the USB 3.0 spec, all USB 3.0 devices must support LPM. 146 + * However, there are some that don't, and they set the U1/U2 exit 147 + * latencies to zero. 146 148 */ 147 149 if (!udev->bos->ss_cap) { 148 - dev_warn(&udev->dev, "No LPM exit latency info found. " 149 - "Power management will be impacted.\n"); 150 + dev_info(&udev->dev, "No LPM exit latency info found, disabling LPM.\n"); 150 151 return 0; 151 152 } 152 - if (udev->parent->lpm_capable) 153 - return 1; 154 153 155 - dev_warn(&udev->dev, "Parent hub missing LPM exit latency info. " 156 - "Power management will be impacted.\n"); 154 + if (udev->bos->ss_cap->bU1devExitLat == 0 && 155 + udev->bos->ss_cap->bU2DevExitLat == 0) { 156 + if (udev->parent) 157 + dev_info(&udev->dev, "LPM exit latency is zeroed, disabling LPM.\n"); 158 + else 159 + dev_info(&udev->dev, "We don't know the algorithms for LPM for this host, disabling LPM.\n"); 160 + return 0; 161 + } 162 + 163 + if (!udev->parent || udev->parent->lpm_capable) 164 + return 1; 157 165 return 0; 158 166 } 159 167 ··· 507 499 changed++; 508 500 } 509 501 if (changed) 510 - schedule_delayed_work(&hub->leds, LED_CYCLE_PERIOD); 502 + queue_delayed_work(system_power_efficient_wq, 503 + &hub->leds, LED_CYCLE_PERIOD); 511 504 } 512 505 513 506 /* use a short timeout for hub/port status fetches */ ··· 1050 1041 if (type == HUB_INIT) { 1051 1042 delay = hub_power_on(hub, false); 1052 1043 INIT_DELAYED_WORK(&hub->init_work, hub_init_func2); 1053 - schedule_delayed_work(&hub->init_work, 1044 + queue_delayed_work(system_power_efficient_wq, 1045 + &hub->init_work, 1054 1046 msecs_to_jiffies(delay)); 1055 1047 1056 1048 /* Suppress autosuspend until init is done */ ··· 1205 1195 /* Don't do a long sleep inside a workqueue routine */ 1206 1196 if 
(type == HUB_INIT2) { 1207 1197 INIT_DELAYED_WORK(&hub->init_work, hub_init_func3); 1208 - schedule_delayed_work(&hub->init_work, 1198 + queue_delayed_work(system_power_efficient_wq, 1199 + &hub->init_work, 1209 1200 msecs_to_jiffies(delay)); 1210 1201 return; /* Continues at init3: below */ 1211 1202 } else { ··· 1220 1209 if (status < 0) 1221 1210 dev_err(hub->intfdev, "activate --> %d\n", status); 1222 1211 if (hub->has_indicators && blinkenlights) 1223 - schedule_delayed_work(&hub->leds, LED_CYCLE_PERIOD); 1212 + queue_delayed_work(system_power_efficient_wq, 1213 + &hub->leds, LED_CYCLE_PERIOD); 1224 1214 1225 1215 /* Scan all ports that need attention */ 1226 1216 kick_khubd(hub); ··· 3107 3095 * operation is carried out here, after the port has been 3108 3096 * resumed. 3109 3097 */ 3110 - if (udev->reset_resume) 3098 + if (udev->reset_resume) { 3099 + /* 3100 + * If the device morphs or switches modes when it is reset, 3101 + * we don't want to perform a reset-resume. We'll fail the 3102 + * resume, which will cause a logical disconnect, and then 3103 + * the device will be rediscovered. 3104 + */ 3111 3105 retry_reset_resume: 3112 - status = usb_reset_and_verify_device(udev); 3106 + if (udev->quirks & USB_QUIRK_RESET) 3107 + status = -ENODEV; 3108 + else 3109 + status = usb_reset_and_verify_device(udev); 3110 + } 3113 3111 3114 3112 /* 10.5.4.5 says be sure devices in the tree are still there. 
3115 3113 * For now let's assume the device didn't go crazy on resume, ··· 3982 3960 connect_type = usb_get_hub_port_connect_type(udev->parent, 3983 3961 udev->portnum); 3984 3962 3985 - if ((udev->bos->ext_cap->bmAttributes & USB_BESL_SUPPORT) || 3963 + if ((udev->bos->ext_cap->bmAttributes & cpu_to_le32(USB_BESL_SUPPORT)) || 3986 3964 connect_type == USB_PORT_CONNECT_TYPE_HARD_WIRED) { 3987 3965 udev->usb2_hw_lpm_allowed = 1; 3988 3966 usb_set_usb2_hardware_lpm(udev, 1); ··· 4131 4109 4132 4110 did_new_scheme = true; 4133 4111 retval = hub_enable_device(udev); 4134 - if (retval < 0) 4112 + if (retval < 0) { 4113 + dev_err(&udev->dev, 4114 + "hub failed to enable device, error %d\n", 4115 + retval); 4135 4116 goto fail; 4117 + } 4136 4118 4137 4119 #define GET_DESCRIPTOR_BUFSIZE 64 4138 4120 buf = kmalloc(GET_DESCRIPTOR_BUFSIZE, GFP_NOIO); ··· 4339 4313 /* hub LEDs are probably harder to miss than syslog */ 4340 4314 if (hub->has_indicators) { 4341 4315 hub->indicator[port1-1] = INDICATOR_GREEN_BLINK; 4342 - schedule_delayed_work (&hub->leds, 0); 4316 + queue_delayed_work(system_power_efficient_wq, 4317 + &hub->leds, 0); 4343 4318 } 4344 4319 } 4345 4320 kfree(qual); ··· 4569 4542 if (hub->has_indicators) { 4570 4543 hub->indicator[port1-1] = 4571 4544 INDICATOR_AMBER_BLINK; 4572 - schedule_delayed_work (&hub->leds, 0); 4545 + queue_delayed_work( 4546 + system_power_efficient_wq, 4547 + &hub->leds, 0); 4573 4548 } 4574 4549 status = -ENOTCONN; /* Don't retry */ 4575 4550 goto loop_disable; ··· 4770 4741 4771 4742 /* deal with port status changes */ 4772 4743 for (i = 1; i <= hdev->maxchild; i++) { 4744 + struct usb_device *udev = hub->ports[i - 1]->child; 4745 + 4773 4746 if (test_bit(i, hub->busy_bits)) 4774 4747 continue; 4775 4748 connect_change = test_bit(i, hub->change_bits); ··· 4870 4839 */ 4871 4840 if (hub_port_warm_reset_required(hub, portstatus)) { 4872 4841 int status; 4873 - struct usb_device *udev = 4874 - hub->ports[i - 1]->child; 4875 4842 4876 
4843 dev_dbg(hub_dev, "warm reset port %d\n", i); 4877 4844 if (!udev || ··· 4886 4857 usb_unlock_device(udev); 4887 4858 connect_change = 0; 4888 4859 } 4860 + /* 4861 + * On disconnect USB3 protocol ports transit from U0 to 4862 + * SS.Inactive to Rx.Detect. If this happens a warm- 4863 + * reset is not needed, but a (re)connect may happen 4864 + * before khubd runs and sees the disconnect, and the 4865 + * device may be in an unknown state. 4866 + * 4867 + * If the port went through SS.Inactive without khubd 4868 + * seeing it, the C_LINK_STATE change flag will be set, 4869 + * and we reset the dev to put it in a known state. 4870 + */ 4871 + } else if (udev && hub_is_superspeed(hub->hdev) && 4872 + (portchange & USB_PORT_STAT_C_LINK_STATE) && 4873 + (portstatus & USB_PORT_STAT_CONNECTION)) { 4874 + usb_lock_device(udev); 4875 + usb_reset_device(udev); 4876 + usb_unlock_device(udev); 4877 + connect_change = 0; 4889 4878 } 4890 4879 4891 4880 if (connect_change) ··· 5161 5114 struct usb_hcd *hcd = bus_to_hcd(udev->bus); 5162 5115 struct usb_device_descriptor descriptor = udev->descriptor; 5163 5116 struct usb_host_bos *bos; 5164 - int i, ret = 0; 5117 + int i, j, ret = 0; 5165 5118 int port1 = udev->portnum; 5166 5119 5167 5120 if (udev->state == USB_STATE_NOTATTACHED || ··· 5287 5240 ret); 5288 5241 goto re_enumerate; 5289 5242 } 5243 + /* Resetting also frees any allocated streams */ 5244 + for (j = 0; j < intf->cur_altsetting->desc.bNumEndpoints; j++) 5245 + intf->cur_altsetting->endpoint[j].streams = 0; 5290 5246 } 5291 5247 5292 5248 done: ··· 5392 5342 else if (cintf->condition == 5393 5343 USB_INTERFACE_BOUND) 5394 5344 rebind = 1; 5345 + if (rebind) 5346 + cintf->needs_binding = 1; 5395 5347 } 5396 - if (ret == 0 && rebind) 5397 - usb_rebind_intf(cintf); 5398 5348 } 5349 + usb_unbind_and_rebind_marked_interfaces(udev); 5399 5350 } 5400 5351 5401 5352 usb_autosuspend_device(udev);
+5 -2
drivers/usb/core/message.c
··· 1293 1293 struct usb_interface *iface; 1294 1294 struct usb_host_interface *alt; 1295 1295 struct usb_hcd *hcd = bus_to_hcd(dev->bus); 1296 - int ret; 1297 - int manual = 0; 1296 + int i, ret, manual = 0; 1298 1297 unsigned int epaddr; 1299 1298 unsigned int pipe; 1300 1299 ··· 1328 1329 mutex_unlock(hcd->bandwidth_mutex); 1329 1330 return -ENOMEM; 1330 1331 } 1332 + /* Changing alt-setting also frees any allocated streams */ 1333 + for (i = 0; i < iface->cur_altsetting->desc.bNumEndpoints; i++) 1334 + iface->cur_altsetting->endpoint[i].streams = 0; 1335 + 1331 1336 ret = usb_hcd_alloc_bandwidth(dev, NULL, iface->cur_altsetting, alt); 1332 1337 if (ret < 0) { 1333 1338 dev_info(&dev->dev, "Not enough bandwidth for altsetting %d\n",
+1 -1
drivers/usb/core/usb.h
··· 55 55 extern int usb_match_device(struct usb_device *dev, 56 56 const struct usb_device_id *id); 57 57 extern void usb_forced_unbind_intf(struct usb_interface *intf); 58 - extern void usb_rebind_intf(struct usb_interface *intf); 58 + extern void usb_unbind_and_rebind_marked_interfaces(struct usb_device *udev); 59 59 60 60 extern void usb_hub_release_all_ports(struct usb_device *hdev, 61 61 struct usb_dev_state *owner);
+22 -3
drivers/usb/dwc2/core_intr.c
··· 72 72 } 73 73 74 74 /** 75 + * dwc2_handle_usb_port_intr - handles OTG PRTINT interrupts. 76 + * When the PRTINT interrupt fires, there are certain status bits in the Host 77 + * Port that need to be cleared. 78 + * 79 + * @hsotg: Programming view of DWC_otg controller 80 + */ 81 + static void dwc2_handle_usb_port_intr(struct dwc2_hsotg *hsotg) 82 + { 83 + u32 hprt0 = readl(hsotg->regs + HPRT0); 84 + 85 + if (hprt0 & HPRT0_ENACHG) { 86 + hprt0 &= ~HPRT0_ENA; 87 + writel(hprt0, hsotg->regs + HPRT0); 88 + } 89 + 90 + /* Clear interrupt */ 91 + writel(GINTSTS_PRTINT, hsotg->regs + GINTSTS); 92 + } 93 + 94 + /** 75 95 * dwc2_handle_mode_mismatch_intr() - Logs a mode mismatch warning message 76 96 * 77 97 * @hsotg: Programming view of DWC_otg controller ··· 499 479 if (dwc2_is_device_mode(hsotg)) { 500 480 dev_dbg(hsotg->dev, 501 481 " --Port interrupt received in Device mode--\n"); 502 - gintsts = GINTSTS_PRTINT; 503 - writel(gintsts, hsotg->regs + GINTSTS); 504 - retval = 1; 482 + dwc2_handle_usb_port_intr(hsotg); 483 + retval = IRQ_HANDLED; 505 484 } 506 485 } 507 486
+9 -5
drivers/usb/dwc2/hcd_intr.c
··· 975 975 struct dwc2_qtd *qtd) 976 976 { 977 977 struct dwc2_hcd_urb *urb = qtd->urb; 978 - int pipe_type = dwc2_hcd_get_pipe_type(&urb->pipe_info); 979 978 enum dwc2_halt_status halt_status = DWC2_HC_XFER_COMPLETE; 979 + int pipe_type; 980 980 int urb_xfer_done; 981 981 982 982 if (dbg_hc(chan)) 983 983 dev_vdbg(hsotg->dev, 984 984 "--Host Channel %d Interrupt: Transfer Complete--\n", 985 985 chnum); 986 + 987 + if (!urb) 988 + goto handle_xfercomp_done; 989 + 990 + pipe_type = dwc2_hcd_get_pipe_type(&urb->pipe_info); 986 991 987 992 if (hsotg->core_params->dma_desc_enable > 0) { 988 993 dwc2_hcd_complete_xfer_ddma(hsotg, chan, chnum, halt_status); ··· 1009 1004 qtd->complete_split = 0; 1010 1005 } 1011 1006 } 1012 - 1013 - if (!urb) 1014 - goto handle_xfercomp_done; 1015 1007 1016 1008 /* Update the QTD and URB states */ 1017 1009 switch (pipe_type) { ··· 1107 1105 struct dwc2_qtd *qtd) 1108 1106 { 1109 1107 struct dwc2_hcd_urb *urb = qtd->urb; 1110 - int pipe_type = dwc2_hcd_get_pipe_type(&urb->pipe_info); 1108 + int pipe_type; 1111 1109 1112 1110 dev_dbg(hsotg->dev, "--Host Channel %d Interrupt: STALL Received--\n", 1113 1111 chnum); ··· 1120 1118 1121 1119 if (!urb) 1122 1120 goto handle_stall_halt; 1121 + 1122 + pipe_type = dwc2_hcd_get_pipe_type(&urb->pipe_info); 1123 1123 1124 1124 if (pipe_type == USB_ENDPOINT_XFER_CONTROL) 1125 1125 dwc2_host_complete(hsotg, qtd, -EPIPE);
+222 -31
drivers/usb/dwc3/core.c
··· 61 61 * dwc3_core_soft_reset - Issues core soft reset and PHY reset 62 62 * @dwc: pointer to our context structure 63 63 */ 64 - static void dwc3_core_soft_reset(struct dwc3 *dwc) 64 + static int dwc3_core_soft_reset(struct dwc3 *dwc) 65 65 { 66 66 u32 reg; 67 + int ret; 67 68 68 69 /* Before Resetting PHY, put Core in Reset */ 69 70 reg = dwc3_readl(dwc->regs, DWC3_GCTL); ··· 83 82 84 83 usb_phy_init(dwc->usb2_phy); 85 84 usb_phy_init(dwc->usb3_phy); 85 + ret = phy_init(dwc->usb2_generic_phy); 86 + if (ret < 0) 87 + return ret; 88 + 89 + ret = phy_init(dwc->usb3_generic_phy); 90 + if (ret < 0) { 91 + phy_exit(dwc->usb2_generic_phy); 92 + return ret; 93 + } 86 94 mdelay(100); 87 95 88 96 /* Clear USB3 PHY reset */ ··· 110 100 reg = dwc3_readl(dwc->regs, DWC3_GCTL); 111 101 reg &= ~DWC3_GCTL_CORESOFTRESET; 112 102 dwc3_writel(dwc->regs, DWC3_GCTL, reg); 103 + 104 + return 0; 113 105 } 114 106 115 107 /** ··· 254 242 } 255 243 } 256 244 245 + static int dwc3_alloc_scratch_buffers(struct dwc3 *dwc) 246 + { 247 + if (!dwc->has_hibernation) 248 + return 0; 249 + 250 + if (!dwc->nr_scratch) 251 + return 0; 252 + 253 + dwc->scratchbuf = kmalloc_array(dwc->nr_scratch, 254 + DWC3_SCRATCHBUF_SIZE, GFP_KERNEL); 255 + if (!dwc->scratchbuf) 256 + return -ENOMEM; 257 + 258 + return 0; 259 + } 260 + 261 + static int dwc3_setup_scratch_buffers(struct dwc3 *dwc) 262 + { 263 + dma_addr_t scratch_addr; 264 + u32 param; 265 + int ret; 266 + 267 + if (!dwc->has_hibernation) 268 + return 0; 269 + 270 + if (!dwc->nr_scratch) 271 + return 0; 272 + 273 + /* should never fall here */ 274 + if (!WARN_ON(dwc->scratchbuf)) 275 + return 0; 276 + 277 + scratch_addr = dma_map_single(dwc->dev, dwc->scratchbuf, 278 + dwc->nr_scratch * DWC3_SCRATCHBUF_SIZE, 279 + DMA_BIDIRECTIONAL); 280 + if (dma_mapping_error(dwc->dev, scratch_addr)) { 281 + dev_err(dwc->dev, "failed to map scratch buffer\n"); 282 + ret = -EFAULT; 283 + goto err0; 284 + } 285 + 286 + dwc->scratch_addr = scratch_addr; 287 + 288 
+ param = lower_32_bits(scratch_addr); 289 + 290 + ret = dwc3_send_gadget_generic_command(dwc, 291 + DWC3_DGCMD_SET_SCRATCHPAD_ADDR_LO, param); 292 + if (ret < 0) 293 + goto err1; 294 + 295 + param = upper_32_bits(scratch_addr); 296 + 297 + ret = dwc3_send_gadget_generic_command(dwc, 298 + DWC3_DGCMD_SET_SCRATCHPAD_ADDR_HI, param); 299 + if (ret < 0) 300 + goto err1; 301 + 302 + return 0; 303 + 304 + err1: 305 + dma_unmap_single(dwc->dev, dwc->scratch_addr, dwc->nr_scratch * 306 + DWC3_SCRATCHBUF_SIZE, DMA_BIDIRECTIONAL); 307 + 308 + err0: 309 + return ret; 310 + } 311 + 312 + static void dwc3_free_scratch_buffers(struct dwc3 *dwc) 313 + { 314 + if (!dwc->has_hibernation) 315 + return; 316 + 317 + if (!dwc->nr_scratch) 318 + return; 319 + 320 + /* should never fall here */ 321 + if (!WARN_ON(dwc->scratchbuf)) 322 + return; 323 + 324 + dma_unmap_single(dwc->dev, dwc->scratch_addr, dwc->nr_scratch * 325 + DWC3_SCRATCHBUF_SIZE, DMA_BIDIRECTIONAL); 326 + kfree(dwc->scratchbuf); 327 + } 328 + 257 329 static void dwc3_core_num_eps(struct dwc3 *dwc) 258 330 { 259 331 struct dwc3_hwparams *parms = &dwc->hwparams; ··· 373 277 static int dwc3_core_init(struct dwc3 *dwc) 374 278 { 375 279 unsigned long timeout; 280 + u32 hwparams4 = dwc->hwparams.hwparams4; 376 281 u32 reg; 377 282 int ret; 378 283 ··· 403 306 cpu_relax(); 404 307 } while (true); 405 308 406 - dwc3_core_soft_reset(dwc); 309 + ret = dwc3_core_soft_reset(dwc); 310 + if (ret) 311 + goto err0; 407 312 408 313 reg = dwc3_readl(dwc->regs, DWC3_GCTL); 409 314 reg &= ~DWC3_GCTL_SCALEDOWN_MASK; ··· 413 314 414 315 switch (DWC3_GHWPARAMS1_EN_PWROPT(dwc->hwparams.hwparams1)) { 415 316 case DWC3_GHWPARAMS1_EN_PWROPT_CLK: 416 - reg &= ~DWC3_GCTL_DSBLCLKGTNG; 317 + /** 318 + * WORKAROUND: DWC3 revisions between 2.10a and 2.50a have an 319 + * issue which would cause xHCI compliance tests to fail. 320 + * 321 + * Because of that we cannot enable clock gating on such 322 + * configurations. 
323 + * 324 + * Refers to: 325 + * 326 + * STAR#9000588375: Clock Gating, SOF Issues when ref_clk-Based 327 + * SOF/ITP Mode Used 328 + */ 329 + if ((dwc->dr_mode == USB_DR_MODE_HOST || 330 + dwc->dr_mode == USB_DR_MODE_OTG) && 331 + (dwc->revision >= DWC3_REVISION_210A && 332 + dwc->revision <= DWC3_REVISION_250A)) 333 + reg |= DWC3_GCTL_DSBLCLKGTNG | DWC3_GCTL_SOFITPSYNC; 334 + else 335 + reg &= ~DWC3_GCTL_DSBLCLKGTNG; 336 + break; 337 + case DWC3_GHWPARAMS1_EN_PWROPT_HIB: 338 + /* enable hibernation here */ 339 + dwc->nr_scratch = DWC3_GHWPARAMS4_HIBER_SCRATCHBUFS(hwparams4); 417 340 break; 418 341 default: 419 342 dev_dbg(dwc->dev, "No power optimization available\n"); ··· 454 333 455 334 dwc3_writel(dwc->regs, DWC3_GCTL, reg); 456 335 336 + ret = dwc3_alloc_scratch_buffers(dwc); 337 + if (ret) 338 + goto err1; 339 + 340 + ret = dwc3_setup_scratch_buffers(dwc); 341 + if (ret) 342 + goto err2; 343 + 457 344 return 0; 345 + 346 + err2: 347 + dwc3_free_scratch_buffers(dwc); 348 + 349 + err1: 350 + usb_phy_shutdown(dwc->usb2_phy); 351 + usb_phy_shutdown(dwc->usb3_phy); 352 + phy_exit(dwc->usb2_generic_phy); 353 + phy_exit(dwc->usb3_generic_phy); 458 354 459 355 err0: 460 356 return ret; ··· 479 341 480 342 static void dwc3_core_exit(struct dwc3 *dwc) 481 343 { 344 + dwc3_free_scratch_buffers(dwc); 482 345 usb_phy_shutdown(dwc->usb2_phy); 483 346 usb_phy_shutdown(dwc->usb3_phy); 347 + phy_exit(dwc->usb2_generic_phy); 348 + phy_exit(dwc->usb3_generic_phy); 484 349 } 485 350 486 351 #define DWC3_ALIGN_MASK (16 - 1) ··· 552 411 553 412 if (IS_ERR(dwc->usb2_phy)) { 554 413 ret = PTR_ERR(dwc->usb2_phy); 555 - 556 - /* 557 - * if -ENXIO is returned, it means PHY layer wasn't 558 - * enabled, so it makes no sense to return -EPROBE_DEFER 559 - * in that case, since no PHY driver will ever probe. 
560 - */ 561 - if (ret == -ENXIO) 414 + if (ret == -ENXIO || ret == -ENODEV) { 415 + dwc->usb2_phy = NULL; 416 + } else if (ret == -EPROBE_DEFER) { 562 417 return ret; 563 - 564 - dev_err(dev, "no usb2 phy configured\n"); 565 - return -EPROBE_DEFER; 418 + } else { 419 + dev_err(dev, "no usb2 phy configured\n"); 420 + return ret; 421 + } 566 422 } 567 423 568 424 if (IS_ERR(dwc->usb3_phy)) { 569 425 ret = PTR_ERR(dwc->usb3_phy); 570 - 571 - /* 572 - * if -ENXIO is returned, it means PHY layer wasn't 573 - * enabled, so it makes no sense to return -EPROBE_DEFER 574 - * in that case, since no PHY driver will ever probe. 575 - */ 576 - if (ret == -ENXIO) 426 + if (ret == -ENXIO || ret == -ENODEV) { 427 + dwc->usb3_phy = NULL; 428 + } else if (ret == -EPROBE_DEFER) { 577 429 return ret; 430 + } else { 431 + dev_err(dev, "no usb3 phy configured\n"); 432 + return ret; 433 + } 434 + } 578 435 579 - dev_err(dev, "no usb3 phy configured\n"); 580 - return -EPROBE_DEFER; 436 + dwc->usb2_generic_phy = devm_phy_get(dev, "usb2-phy"); 437 + if (IS_ERR(dwc->usb2_generic_phy)) { 438 + ret = PTR_ERR(dwc->usb2_generic_phy); 439 + if (ret == -ENOSYS || ret == -ENODEV) { 440 + dwc->usb2_generic_phy = NULL; 441 + } else if (ret == -EPROBE_DEFER) { 442 + return ret; 443 + } else { 444 + dev_err(dev, "no usb2 phy configured\n"); 445 + return ret; 446 + } 447 + } 448 + 449 + dwc->usb3_generic_phy = devm_phy_get(dev, "usb3-phy"); 450 + if (IS_ERR(dwc->usb3_generic_phy)) { 451 + ret = PTR_ERR(dwc->usb3_generic_phy); 452 + if (ret == -ENOSYS || ret == -ENODEV) { 453 + dwc->usb3_generic_phy = NULL; 454 + } else if (ret == -EPROBE_DEFER) { 455 + return ret; 456 + } else { 457 + dev_err(dev, "no usb3 phy configured\n"); 458 + return ret; 459 + } 581 460 } 582 461 583 462 dwc->xhci_resources[0].start = res->start; ··· 640 479 goto err0; 641 480 } 642 481 482 + if (IS_ENABLED(CONFIG_USB_DWC3_HOST)) 483 + dwc->dr_mode = USB_DR_MODE_HOST; 484 + else if (IS_ENABLED(CONFIG_USB_DWC3_GADGET)) 485 + 
dwc->dr_mode = USB_DR_MODE_PERIPHERAL; 486 + 487 + if (dwc->dr_mode == USB_DR_MODE_UNKNOWN) 488 + dwc->dr_mode = USB_DR_MODE_OTG; 489 + 643 490 ret = dwc3_core_init(dwc); 644 491 if (ret) { 645 492 dev_err(dev, "failed to initialize core\n"); ··· 656 487 657 488 usb_phy_set_suspend(dwc->usb2_phy, 0); 658 489 usb_phy_set_suspend(dwc->usb3_phy, 0); 490 + ret = phy_power_on(dwc->usb2_generic_phy); 491 + if (ret < 0) 492 + goto err1; 493 + 494 + ret = phy_power_on(dwc->usb3_generic_phy); 495 + if (ret < 0) 496 + goto err_usb2phy_power; 659 497 660 498 ret = dwc3_event_buffers_setup(dwc); 661 499 if (ret) { 662 500 dev_err(dwc->dev, "failed to setup event buffers\n"); 663 - goto err1; 501 + goto err_usb3phy_power; 664 502 } 665 - 666 - if (IS_ENABLED(CONFIG_USB_DWC3_HOST)) 667 - dwc->dr_mode = USB_DR_MODE_HOST; 668 - else if (IS_ENABLED(CONFIG_USB_DWC3_GADGET)) 669 - dwc->dr_mode = USB_DR_MODE_PERIPHERAL; 670 - 671 - if (dwc->dr_mode == USB_DR_MODE_UNKNOWN) 672 - dwc->dr_mode = USB_DR_MODE_OTG; 673 503 674 504 switch (dwc->dr_mode) { 675 505 case USB_DR_MODE_PERIPHERAL: ··· 736 568 err2: 737 569 dwc3_event_buffers_cleanup(dwc); 738 570 571 + err_usb3phy_power: 572 + phy_power_off(dwc->usb3_generic_phy); 573 + 574 + err_usb2phy_power: 575 + phy_power_off(dwc->usb2_generic_phy); 576 + 739 577 err1: 740 578 usb_phy_set_suspend(dwc->usb2_phy, 1); 741 579 usb_phy_set_suspend(dwc->usb3_phy, 1); ··· 759 585 760 586 usb_phy_set_suspend(dwc->usb2_phy, 1); 761 587 usb_phy_set_suspend(dwc->usb3_phy, 1); 588 + phy_power_off(dwc->usb2_generic_phy); 589 + phy_power_off(dwc->usb3_generic_phy); 762 590 763 591 pm_runtime_put_sync(&pdev->dev); 764 592 pm_runtime_disable(&pdev->dev); ··· 858 682 859 683 usb_phy_shutdown(dwc->usb3_phy); 860 684 usb_phy_shutdown(dwc->usb2_phy); 685 + phy_exit(dwc->usb2_generic_phy); 686 + phy_exit(dwc->usb3_generic_phy); 861 687 862 688 return 0; 863 689 } ··· 868 690 { 869 691 struct dwc3 *dwc = dev_get_drvdata(dev); 870 692 unsigned long flags; 693 + int 
ret; 871 694 872 695 usb_phy_init(dwc->usb3_phy); 873 696 usb_phy_init(dwc->usb2_phy); 697 + ret = phy_init(dwc->usb2_generic_phy); 698 + if (ret < 0) 699 + return ret; 700 + 701 + ret = phy_init(dwc->usb3_generic_phy); 702 + if (ret < 0) 703 + goto err_usb2phy_init; 874 704 875 705 spin_lock_irqsave(&dwc->lock, flags); 876 706 ··· 902 716 pm_runtime_enable(dev); 903 717 904 718 return 0; 719 + 720 + err_usb2phy_init: 721 + phy_exit(dwc->usb2_generic_phy); 722 + 723 + return ret; 905 724 } 906 725 907 726 static const struct dev_pm_ops dwc3_dev_pm_ops = {
+82 -23
drivers/usb/dwc3/core.h
··· 31 31 #include <linux/usb/gadget.h> 32 32 #include <linux/usb/otg.h> 33 33 34 + #include <linux/phy/phy.h> 35 + 34 36 /* Global constants */ 35 37 #define DWC3_EP0_BOUNCE_SIZE 512 36 38 #define DWC3_ENDPOINTS_NUM 32 37 39 #define DWC3_XHCI_RESOURCES_NUM 2 38 40 41 + #define DWC3_SCRATCHBUF_SIZE 4096 /* each buffer is assumed to be 4KiB */ 39 42 #define DWC3_EVENT_SIZE 4 /* bytes */ 40 43 #define DWC3_EVENT_MAX_NUM 64 /* 2 events/endpoint */ 41 44 #define DWC3_EVENT_BUFFERS_SIZE (DWC3_EVENT_SIZE * DWC3_EVENT_MAX_NUM) ··· 160 157 #define DWC3_GCTL_PRTCAP_OTG 3 161 158 162 159 #define DWC3_GCTL_CORESOFTRESET (1 << 11) 160 + #define DWC3_GCTL_SOFITPSYNC (1 << 10) 163 161 #define DWC3_GCTL_SCALEDOWN(n) ((n) << 4) 164 162 #define DWC3_GCTL_SCALEDOWN_MASK DWC3_GCTL_SCALEDOWN(3) 165 163 #define DWC3_GCTL_DISSCRAMBLE (1 << 3) ··· 322 318 /* Device Endpoint Command Register */ 323 319 #define DWC3_DEPCMD_PARAM_SHIFT 16 324 320 #define DWC3_DEPCMD_PARAM(x) ((x) << DWC3_DEPCMD_PARAM_SHIFT) 325 - #define DWC3_DEPCMD_GET_RSC_IDX(x) (((x) >> DWC3_DEPCMD_PARAM_SHIFT) & 0x7f) 321 + #define DWC3_DEPCMD_GET_RSC_IDX(x) (((x) >> DWC3_DEPCMD_PARAM_SHIFT) & 0x7f) 326 322 #define DWC3_DEPCMD_STATUS(x) (((x) >> 15) & 1) 327 323 #define DWC3_DEPCMD_HIPRI_FORCERM (1 << 11) 328 324 #define DWC3_DEPCMD_CMDACT (1 << 10) ··· 397 393 * @busy_slot: first slot which is owned by HW 398 394 * @desc: usb_endpoint_descriptor pointer 399 395 * @dwc: pointer to DWC controller 396 + * @saved_state: ep state saved during hibernation 400 397 * @flags: endpoint flags (wedged, stalled, ...) 
401 398 * @current_trb: index of current used trb 402 399 * @number: endpoint number (1 - 15) ··· 420 415 const struct usb_ss_ep_comp_descriptor *comp_desc; 421 416 struct dwc3 *dwc; 422 417 418 + u32 saved_state; 423 419 unsigned flags; 424 420 #define DWC3_EP_ENABLED (1 << 0) 425 421 #define DWC3_EP_STALL (1 << 1) ··· 604 598 * @ep0_trb: dma address of ep0_trb 605 599 * @ep0_usb_req: dummy req used while handling STD USB requests 606 600 * @ep0_bounce_addr: dma address of ep0_bounce 601 + * @scratch_addr: dma address of scratchbuf 607 602 * @lock: for synchronizing 608 603 * @dev: pointer to our struct device 609 604 * @xhci: pointer to our xHCI child ··· 613 606 * @gadget_driver: pointer to the gadget driver 614 607 * @regs: base address for our registers 615 608 * @regs_size: address space size 609 + * @nr_scratch: number of scratch buffers 616 610 * @num_event_buffers: calculated number of event buffers 617 611 * @u1u2: only used on revisions <1.83a for workaround 618 612 * @maximum_speed: maximum speed requested (mainly for testing purposes) ··· 621 613 * @dr_mode: requested mode of operation 622 614 * @usb2_phy: pointer to USB2 PHY 623 615 * @usb3_phy: pointer to USB3 PHY 616 + * @usb2_generic_phy: pointer to USB2 PHY 617 + * @usb3_generic_phy: pointer to USB3 PHY 624 618 * @dcfg: saved contents of DCFG register 625 619 * @gctl: saved contents of GCTL register 626 - * @is_selfpowered: true when we are selfpowered 627 - * @three_stage_setup: set if we perform a three phase setup 628 - * @ep0_bounced: true when we used bounce buffer 629 - * @ep0_expect_in: true when we expect a DATA IN transfer 630 - * @start_config_issued: true when StartConfig command has been issued 631 - * @setup_packet_pending: true when there's a Setup Packet in FIFO. Workaround 632 - * @needs_fifo_resize: not all users might want fifo resizing, flag it 633 - * @resize_fifos: tells us it's ok to reconfigure our TxFIFO sizes. 
634 620 * @isoch_delay: wValue from Set Isochronous Delay request; 635 621 * @u2sel: parameter from Set SEL request. 636 622 * @u2pel: parameter from Set SEL request. ··· 639 637 * @mem: points to start of memory which is used for this struct. 640 638 * @hwparams: copy of hwparams registers 641 639 * @root: debugfs root folder pointer 640 + * @regset: debugfs pointer to regdump file 641 + * @test_mode: true when we're entering a USB test mode 642 + * @test_mode_nr: test feature selector 643 + * @delayed_status: true when gadget driver asks for delayed status 644 + * @ep0_bounced: true when we used bounce buffer 645 + * @ep0_expect_in: true when we expect a DATA IN transfer 646 + * @has_hibernation: true when dwc3 was configured with Hibernation 647 + * @is_selfpowered: true when we are selfpowered 648 + * @needs_fifo_resize: not all users might want fifo resizing, flag it 649 + * @pullups_connected: true when Run/Stop bit is set 650 + * @resize_fifos: tells us it's ok to reconfigure our TxFIFO sizes. 651 + * @setup_packet_pending: true when there's a Setup Packet in FIFO. 
Workaround 652 + * @start_config_issued: true when StartConfig command has been issued 653 + * @three_stage_setup: set if we perform a three phase setup 642 654 */ 643 655 struct dwc3 { 644 656 struct usb_ctrlrequest *ctrl_req; 645 657 struct dwc3_trb *ep0_trb; 646 658 void *ep0_bounce; 659 + void *scratchbuf; 647 660 u8 *setup_buf; 648 661 dma_addr_t ctrl_req_addr; 649 662 dma_addr_t ep0_trb_addr; 650 663 dma_addr_t ep0_bounce_addr; 664 + dma_addr_t scratch_addr; 651 665 struct dwc3_request ep0_usb_req; 652 666 653 667 /* device lock */ ··· 683 665 struct usb_phy *usb2_phy; 684 666 struct usb_phy *usb3_phy; 685 667 668 + struct phy *usb2_generic_phy; 669 + struct phy *usb3_generic_phy; 670 + 686 671 void __iomem *regs; 687 672 size_t regs_size; 688 673 ··· 695 674 u32 dcfg; 696 675 u32 gctl; 697 676 677 + u32 nr_scratch; 698 678 u32 num_event_buffers; 699 679 u32 u1u2; 700 680 u32 maximum_speed; ··· 717 695 #define DWC3_REVISION_230A 0x5533230a 718 696 #define DWC3_REVISION_240A 0x5533240a 719 697 #define DWC3_REVISION_250A 0x5533250a 720 - 721 - unsigned is_selfpowered:1; 722 - unsigned three_stage_setup:1; 723 - unsigned ep0_bounced:1; 724 - unsigned ep0_expect_in:1; 725 - unsigned start_config_issued:1; 726 - unsigned setup_packet_pending:1; 727 - unsigned delayed_status:1; 728 - unsigned needs_fifo_resize:1; 729 - unsigned resize_fifos:1; 730 - unsigned pullups_connected:1; 698 + #define DWC3_REVISION_260A 0x5533260a 699 + #define DWC3_REVISION_270A 0x5533270a 700 + #define DWC3_REVISION_280A 0x5533280a 731 701 732 702 enum dwc3_ep0_next ep0_next_event; 733 703 enum dwc3_ep0_state ep0state; ··· 744 730 745 731 u8 test_mode; 746 732 u8 test_mode_nr; 733 + 734 + unsigned delayed_status:1; 735 + unsigned ep0_bounced:1; 736 + unsigned ep0_expect_in:1; 737 + unsigned has_hibernation:1; 738 + unsigned is_selfpowered:1; 739 + unsigned needs_fifo_resize:1; 740 + unsigned pullups_connected:1; 741 + unsigned resize_fifos:1; 742 + unsigned setup_packet_pending:1; 743 + 
 	unsigned		start_config_issued:1;
+	unsigned		three_stage_setup:1;
 };

 /* -------------------------------------------------------------------------- */
···
  * 12	- VndrDevTstRcved
  * @reserved15_12: Reserved, not used
  * @event_info: Information about this event
- * @reserved31_24: Reserved, not used
+ * @reserved31_25: Reserved, not used
  */
 struct dwc3_event_devt {
 	u32	one_bit:1;
 	u32	device_event:7;
 	u32	type:4;
 	u32	reserved15_12:4;
-	u32	event_info:8;
-	u32	reserved31_24:8;
+	u32	event_info:9;
+	u32	reserved31_25:7;
 } __packed;

 /**
···
 	struct dwc3_event_gevt		gevt;
 };

+/**
+ * struct dwc3_gadget_ep_cmd_params - representation of endpoint command
+ * parameters
+ * @param2: third parameter
+ * @param1: second parameter
+ * @param0: first parameter
+ */
+struct dwc3_gadget_ep_cmd_params {
+	u32	param2;
+	u32	param1;
+	u32	param0;
+};
+
 /*
  * DWC3 Features to be used as Driver Data
  */
···
 #if IS_ENABLED(CONFIG_USB_DWC3_GADGET) || IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE)
 int dwc3_gadget_init(struct dwc3 *dwc);
 void dwc3_gadget_exit(struct dwc3 *dwc);
+int dwc3_gadget_set_test_mode(struct dwc3 *dwc, int mode);
+int dwc3_gadget_get_link_state(struct dwc3 *dwc);
+int dwc3_gadget_set_link_state(struct dwc3 *dwc, enum dwc3_link_state state);
+int dwc3_send_gadget_ep_cmd(struct dwc3 *dwc, unsigned ep,
+		unsigned cmd, struct dwc3_gadget_ep_cmd_params *params);
+int dwc3_send_gadget_generic_command(struct dwc3 *dwc, int cmd, u32 param);
 #else
 static inline int dwc3_gadget_init(struct dwc3 *dwc)
 { return 0; }
 static inline void dwc3_gadget_exit(struct dwc3 *dwc)
 { }
+static inline int dwc3_gadget_set_test_mode(struct dwc3 *dwc, int mode)
+{ return 0; }
+static inline int dwc3_gadget_get_link_state(struct dwc3 *dwc)
+{ return 0; }
+static inline int dwc3_gadget_set_link_state(struct dwc3 *dwc,
+		enum dwc3_link_state state)
+{ return 0; }
+
+static inline int dwc3_send_gadget_ep_cmd(struct dwc3 *dwc, unsigned ep,
+		unsigned cmd, struct dwc3_gadget_ep_cmd_params *params)
+{ return 0; }
+static inline int dwc3_send_gadget_generic_command(struct dwc3 *dwc,
+		int cmd, u32 param)
+{ return 0; }
 #endif

 /* power management interface */
-5
drivers/usb/dwc3/dwc3-omap.c
···
 	}

 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (!res) {
-		dev_err(dev, "missing memory base resource\n");
-		return -EINVAL;
-	}
-
 	base = devm_ioremap_resource(dev, res);
 	if (IS_ERR(base))
 		return PTR_ERR(base);
+152 -31
drivers/usb/dwc3/gadget.c
···
 }

 /**
+ * dwc3_gadget_get_link_state - Gets current state of USB Link
+ * @dwc: pointer to our context structure
+ *
+ * Caller should take care of locking. This function will
+ * return the link state on success (>= 0) or -ETIMEDOUT.
+ */
+int dwc3_gadget_get_link_state(struct dwc3 *dwc)
+{
+	u32		reg;
+
+	reg = dwc3_readl(dwc->regs, DWC3_DSTS);
+
+	return DWC3_DSTS_USBLNKST(reg);
+}
+
+/**
  * dwc3_gadget_set_link_state - Sets USB Link to a particular State
  * @dwc: pointer to our context structure
  * @state: the state to put link into
···
 static int dwc3_gadget_set_ep_config(struct dwc3 *dwc, struct dwc3_ep *dep,
 		const struct usb_endpoint_descriptor *desc,
 		const struct usb_ss_ep_comp_descriptor *comp_desc,
-		bool ignore)
+		bool ignore, bool restore)
 {
 	struct dwc3_gadget_ep_cmd_params params;
···
 	if (ignore)
 		params.param0 |= DWC3_DEPCFG_IGN_SEQ_NUM;
+
+	if (restore) {
+		params.param0 |= DWC3_DEPCFG_ACTION_RESTORE;
+		params.param2 |= dep->saved_state;
+	}

 	params.param1 = DWC3_DEPCFG_XFER_COMPLETE_EN
 		| DWC3_DEPCFG_XFER_NOT_READY_EN;
···
 static int __dwc3_gadget_ep_enable(struct dwc3_ep *dep,
 		const struct usb_endpoint_descriptor *desc,
 		const struct usb_ss_ep_comp_descriptor *comp_desc,
-		bool ignore)
+		bool ignore, bool restore)
 {
 	struct dwc3 *dwc = dep->dwc;
 	u32 reg;
···
 		return ret;
 	}

-	ret = dwc3_gadget_set_ep_config(dwc, dep, desc, comp_desc, ignore);
+	ret = dwc3_gadget_set_ep_config(dwc, dep, desc, comp_desc, ignore,
+			restore);
 	if (ret)
 		return ret;
···
 	return 0;
 }

-static void dwc3_stop_active_transfer(struct dwc3 *dwc, u32 epnum);
+static void dwc3_stop_active_transfer(struct dwc3 *dwc, u32 epnum, bool force);
 static void dwc3_remove_requests(struct dwc3 *dwc, struct dwc3_ep *dep)
 {
 	struct dwc3_request		*req;

 	if (!list_empty(&dep->req_queued)) {
-		dwc3_stop_active_transfer(dwc, dep->number);
+		dwc3_stop_active_transfer(dwc, dep->number, true);

 		/* - giveback all requests to gadget driver */
 		while (!list_empty(&dep->req_queued)) {
···
 	}

 	spin_lock_irqsave(&dwc->lock, flags);
-	ret = __dwc3_gadget_ep_enable(dep, desc, ep->comp_desc, false);
+	ret = __dwc3_gadget_ep_enable(dep, desc, ep->comp_desc, false, false);
 	spin_unlock_irqrestore(&dwc->lock, flags);

 	return ret;
···
 			trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS_FIRST;
 		else
 			trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS;
-
-		if (!req->request.no_interrupt && !chain)
-			trb->ctrl |= DWC3_TRB_CTRL_IOC;
 		break;

 	case USB_ENDPOINT_XFER_BULK:
···
 		 */
 		BUG();
 	}
+
+	if (!req->request.no_interrupt && !chain)
+		trb->ctrl |= DWC3_TRB_CTRL_IOC;

 	if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
 		trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
···
 	 */
 	if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
 		if (list_empty(&dep->req_queued)) {
-			dwc3_stop_active_transfer(dwc, dep->number);
+			dwc3_stop_active_transfer(dwc, dep->number, true);
 			dep->flags = DWC3_EP_ENABLED;
 		}
 		return 0;
···
 			dev_dbg(dwc->dev, "%s: failed to kick transfers\n",
 					dep->name);
 		return ret;
+	}
+
+	/*
+	 * 4. Stream Capable Bulk Endpoints. We need to start the transfer
+	 * right away, otherwise host will not know we have streams to be
+	 * handled.
+	 */
+	if (dep->stream_capable) {
+		int	ret;
+
+		ret = __dwc3_gadget_kick_transfer(dep, 0, true);
+		if (ret && ret != -EBUSY) {
+			struct dwc3	*dwc = dep->dwc;
+
+			dev_dbg(dwc->dev, "%s: failed to kick transfers\n",
+					dep->name);
+		}
 	}

 	return 0;
···
 		}
 		if (r == req) {
 			/* wait until it is processed */
-			dwc3_stop_active_transfer(dwc, dep->number);
+			dwc3_stop_active_transfer(dwc, dep->number, true);
 			goto out1;
 		}
 		dev_err(dwc->dev, "request %p was not queued to %s\n",
···
 		ret = dwc3_send_gadget_ep_cmd(dwc, dep->number,
 			DWC3_DEPCMD_SETSTALL, &params);
 		if (ret)
-			dev_err(dwc->dev, "failed to %s STALL on %s\n",
-					value ? "set" : "clear",
+			dev_err(dwc->dev, "failed to set STALL on %s\n",
 					dep->name);
 		else
 			dep->flags |= DWC3_EP_STALL;
···
 		ret = dwc3_send_gadget_ep_cmd(dwc, dep->number,
 			DWC3_DEPCMD_CLEARSTALL, &params);
 		if (ret)
-			dev_err(dwc->dev, "failed to %s STALL on %s\n",
-					value ? "set" : "clear",
+			dev_err(dwc->dev, "failed to clear STALL on %s\n",
 					dep->name);
 		else
 			dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
···
 	return 0;
 }

-static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on)
+static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
 {
 	u32			reg;
 	u32			timeout = 500;
···
 		if (dwc->revision >= DWC3_REVISION_194A)
 			reg &= ~DWC3_DCTL_KEEP_CONNECT;
 		reg |= DWC3_DCTL_RUN_STOP;
+
+		if (dwc->has_hibernation)
+			reg |= DWC3_DCTL_KEEP_CONNECT;
+
 		dwc->pullups_connected = true;
 	} else {
 		reg &= ~DWC3_DCTL_RUN_STOP;
+
+		if (dwc->has_hibernation && !suspend)
+			reg &= ~DWC3_DCTL_KEEP_CONNECT;
+
 		dwc->pullups_connected = false;
 	}
···
 	is_on = !!is_on;

 	spin_lock_irqsave(&dwc->lock, flags);
-	ret = dwc3_gadget_run_stop(dwc, is_on);
+	ret = dwc3_gadget_run_stop(dwc, is_on, false);
 	spin_unlock_irqrestore(&dwc->lock, flags);

 	return ret;
···
 	dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512);

 	dep = dwc->eps[0];
-	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false);
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false,
+			false);
 	if (ret) {
 		dev_err(dwc->dev, "failed to enable %s\n", dep->name);
 		goto err2;
 	}

 	dep = dwc->eps[1];
-	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false);
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false,
+			false);
 	if (ret) {
 		dev_err(dwc->dev, "failed to enable %s\n", dep->name);
 		goto err3;
···
 		 */
 		dep->flags = DWC3_EP_PENDING_REQUEST;
 	} else {
-		dwc3_stop_active_transfer(dwc, dep->number);
+		dwc3_stop_active_transfer(dwc, dep->number, true);
 		dep->flags = DWC3_EP_ENABLED;
 	}
 		return 1;
 	}

-	if ((event->status & DEPEVT_STATUS_IOC) &&
-			(trb->ctrl & DWC3_TRB_CTRL_IOC))
-		return 0;
 	return 1;
 }
···
 		}
 	}

-static void dwc3_stop_active_transfer(struct dwc3 *dwc, u32 epnum)
+static void dwc3_suspend_gadget(struct dwc3 *dwc)
+{
+	if (dwc->gadget_driver && dwc->gadget_driver->suspend) {
+		spin_unlock(&dwc->lock);
+		dwc->gadget_driver->suspend(&dwc->gadget);
+		spin_lock(&dwc->lock);
+	}
+}
+
+static void dwc3_resume_gadget(struct dwc3 *dwc)
+{
+	if (dwc->gadget_driver && dwc->gadget_driver->resume) {
+		spin_unlock(&dwc->lock);
+		dwc->gadget_driver->resume(&dwc->gadget);
+		spin_lock(&dwc->lock);
+	}
+}
+
+static void dwc3_stop_active_transfer(struct dwc3 *dwc, u32 epnum, bool force)
 {
 	struct dwc3_ep *dep;
 	struct dwc3_gadget_ep_cmd_params params;
···
 	 */

 	cmd = DWC3_DEPCMD_ENDTRANSFER;
-	cmd |= DWC3_DEPCMD_HIPRI_FORCERM | DWC3_DEPCMD_CMDIOC;
+	cmd |= force ? DWC3_DEPCMD_HIPRI_FORCERM : 0;
+	cmd |= DWC3_DEPCMD_CMDIOC;
 	cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
 	memset(&params, 0, sizeof(params));
 	ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, cmd, &params);
···
 		reg |= DWC3_DCTL_HIRD_THRES(12);

 		dwc3_writel(dwc->regs, DWC3_DCTL, reg);
+	} else {
+		reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+		reg &= ~DWC3_DCTL_HIRD_THRES_MASK;
+		dwc3_writel(dwc->regs, DWC3_DCTL, reg);
 	}

 	dep = dwc->eps[0];
-	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, true);
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, true,
+			false);
 	if (ret) {
 		dev_err(dwc->dev, "failed to enable %s\n", dep->name);
 		return;
 	}

 	dep = dwc->eps[1];
-	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, true);
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, true,
+			false);
 	if (ret) {
 		dev_err(dwc->dev, "failed to enable %s\n", dep->name);
 		return;
···
 	dwc->link_state = next;

+	switch (next) {
+	case DWC3_LINK_STATE_U1:
+		if (dwc->speed == USB_SPEED_SUPER)
+			dwc3_suspend_gadget(dwc);
+		break;
+	case DWC3_LINK_STATE_U2:
+	case DWC3_LINK_STATE_U3:
+		dwc3_suspend_gadget(dwc);
+		break;
+	case DWC3_LINK_STATE_RESUME:
+		dwc3_resume_gadget(dwc);
+		break;
+	default:
+		/* do nothing */
+		break;
+	}
+
 	dev_vdbg(dwc->dev, "%s link %d\n", __func__, dwc->link_state);
+}
+
+static void dwc3_gadget_hibernation_interrupt(struct dwc3 *dwc,
+		unsigned int evtinfo)
+{
+	unsigned int is_ss = evtinfo & BIT(4);
+
+	/**
+	 * WORKAROUND: DWC3 revison 2.20a with hibernation support
+	 * have a known issue which can cause USB CV TD.9.23 to fail
+	 * randomly.
+	 *
+	 * Because of this issue, core could generate bogus hibernation
+	 * events which SW needs to ignore.
+	 *
+	 * Refers to:
+	 *
+	 * STAR#9000546576: Device Mode Hibernation: Issue in USB 2.0
+	 * Device Fallback from SuperSpeed
+	 */
+	if (is_ss ^ (dwc->speed == USB_SPEED_SUPER))
+		return;
+
+	/* enter hibernation here */
 }

 static void dwc3_gadget_interrupt(struct dwc3 *dwc,
···
 		break;
 	case DWC3_DEVICE_EVENT_WAKEUP:
 		dwc3_gadget_wakeup_interrupt(dwc);
+		break;
+	case DWC3_DEVICE_EVENT_HIBER_REQ:
+		if (dev_WARN_ONCE(dwc->dev, !dwc->has_hibernation,
+					"unexpected hibernation event\n"))
+			break;
+
+		dwc3_gadget_hibernation_interrupt(dwc, event->event_info);
 		break;
 	case DWC3_DEVICE_EVENT_LINK_STATUS_CHANGE:
 		dwc3_gadget_linksts_change_interrupt(dwc, event->event_info);
···
 int dwc3_gadget_prepare(struct dwc3 *dwc)
 {
-	if (dwc->pullups_connected)
+	if (dwc->pullups_connected) {
 		dwc3_gadget_disable_irq(dwc);
+		dwc3_gadget_run_stop(dwc, true, true);
+	}

 	return 0;
 }
···
 {
 	if (dwc->pullups_connected) {
 		dwc3_gadget_enable_irq(dwc);
-		dwc3_gadget_run_stop(dwc, true);
+		dwc3_gadget_run_stop(dwc, true, false);
 	}
 }
···
 	dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512);

 	dep = dwc->eps[0];
-	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false);
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false,
+			false);
 	if (ret)
 		goto err0;

 	dep = dwc->eps[1];
-	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false);
+	ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false,
+			false);
 	if (ret)
 		goto err1;
-12
drivers/usb/dwc3/gadget.h
···
 /* DEPXFERCFG parameter 0 */
 #define DWC3_DEPXFERCFG_NUM_XFER_RES(n)	((n) & 0xffff)

-struct dwc3_gadget_ep_cmd_params {
-	u32	param2;
-	u32	param1;
-	u32	param0;
-};
-
 /* -------------------------------------------------------------------------- */

 #define to_dwc3_request(r)	(container_of(r, struct dwc3_request, request))
···
 void dwc3_gadget_giveback(struct dwc3_ep *dep, struct dwc3_request *req,
 		int status);

-int dwc3_gadget_set_test_mode(struct dwc3 *dwc, int mode);
-int dwc3_gadget_set_link_state(struct dwc3 *dwc, enum dwc3_link_state state);
-
 void dwc3_ep0_interrupt(struct dwc3 *dwc,
 		const struct dwc3_event_depevt *event);
 void dwc3_ep0_out_start(struct dwc3 *dwc);
···
 int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request,
 		gfp_t gfp_flags);
 int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value);
-int dwc3_send_gadget_ep_cmd(struct dwc3 *dwc, unsigned ep,
-		unsigned cmd, struct dwc3_gadget_ep_cmd_params *params);
-int dwc3_send_gadget_generic_command(struct dwc3 *dwc, int cmd, u32 param);

 /**
  * dwc3_gadget_ep_get_transfer_index - Gets transfer index from HW
+1 -2
drivers/usb/gadget/Kconfig
···
 config USB_OMAP
 	tristate "OMAP USB Device Controller"
 	depends on ARCH_OMAP1
-	select ISP1301_OMAP if MACH_OMAP_H2 || MACH_OMAP_H3 || MACH_OMAP_H4_OTG
+	select ISP1301_OMAP if MACH_OMAP_H2 || MACH_OMAP_H3
 	help
 	   Many Texas Instruments OMAP processors have flexible full
 	   speed USB device controllers, with support for up to 30
···
 	   gadget drivers to also be dynamically linked.

 config USB_S3C_HSOTG
-	depends on ARM
 	tristate "Designware/S3C HS/OtG USB Device controller"
 	help
 	   The Designware USB2.0 high-speed gadget controller
+7 -7
drivers/usb/gadget/at91_udc.c
···

 	/* newer chips have more FIFO memory than rm9200 */
 	if (cpu_is_at91sam9260() || cpu_is_at91sam9g20()) {
-		usb_ep_set_maxpacket_limit(&udc->ep[0].ep, 64);
-		usb_ep_set_maxpacket_limit(&udc->ep[3].ep, 64);
-		usb_ep_set_maxpacket_limit(&udc->ep[4].ep, 512);
-		usb_ep_set_maxpacket_limit(&udc->ep[5].ep, 512);
+		udc->ep[0].maxpacket = 64;
+		udc->ep[3].maxpacket = 64;
+		udc->ep[4].maxpacket = 512;
+		udc->ep[5].maxpacket = 512;
 	} else if (cpu_is_at91sam9261() || cpu_is_at91sam9g10()) {
-		usb_ep_set_maxpacket_limit(&udc->ep[3].ep, 64);
+		udc->ep[3].maxpacket = 64;
 	} else if (cpu_is_at91sam9263()) {
-		usb_ep_set_maxpacket_limit(&udc->ep[0].ep, 64);
-		usb_ep_set_maxpacket_limit(&udc->ep[3].ep, 64);
+		udc->ep[0].maxpacket = 64;
+		udc->ep[3].maxpacket = 64;
 	}

 	udc->udp_baseaddr = ioremap(res->start, resource_size(res));
+11 -5
drivers/usb/gadget/atmel_usba_udc.c
···
 	if (dma_status) {
 		int i;

-		for (i = 1; i < USBA_NR_ENDPOINTS; i++)
+		for (i = 1; i < USBA_NR_DMAS; i++)
 			if (dma_status & (1 << i))
 				usba_dma_irq(udc, &udc->usba_ep[i]);
 	}
···
 	if (ep_status) {
 		int i;

-		for (i = 0; i < USBA_NR_ENDPOINTS; i++)
+		for (i = 0; i < udc->num_ep; i++)
 			if (ep_status & (1 << i)) {
 				if (ep_is_control(&udc->usba_ep[i]))
 					usba_control_irq(udc, &udc->usba_ep[i]);
···
 	toggle_bias(0);
 	usba_writel(udc, CTRL, USBA_DISABLE_MASK);

-	udc->driver = NULL;
-
 	clk_disable_unprepare(udc->hclk);
 	clk_disable_unprepare(udc->pclk);

-	DBG(DBG_GADGET, "unregistered driver `%s'\n", driver->driver.name);
+	DBG(DBG_GADGET, "unregistered driver `%s'\n", udc->driver->driver.name);
+
+	udc->driver = NULL;

 	return 0;
 }
···
 		list_add_tail(&ep->ep.ep_list, &udc->gadget.ep_list);

 		i++;
+	}
+
+	if (i == 0) {
+		dev_err(&pdev->dev, "of_probe: no endpoint specified\n");
+		ret = -EINVAL;
+		goto err;
 	}

 	return eps;
+1 -1
drivers/usb/gadget/atmel_usba_udc.h
···
 #define USBA_FIFO_BASE(x)		((x) << 16)

 /* Synth parameters */
-#define USBA_NR_ENDPOINTS	7
+#define USBA_NR_DMAS		7

 #define EP0_FIFO_SIZE		64
 #define EP0_EPT_SIZE		USBA_EPT_SIZE_64
+1 -1
drivers/usb/gadget/composite.c
···

 	uc = copy_gadget_strings(sp, n_gstrings, n_strings);
 	if (IS_ERR(uc))
-		return ERR_PTR(PTR_ERR(uc));
+		return ERR_CAST(uc);

 	n_gs = get_containers_gs(uc);
 	ret = usb_string_ids_tab(cdev, n_gs[0]->strings);
+482 -138
drivers/usb/gadget/f_fs.c
···
 #include <linux/usb/composite.h>
 #include <linux/usb/functionfs.h>

+#include <linux/aio.h>
+#include <linux/mmu_context.h>
+#include <linux/poll.h>
+
 #include "u_fs.h"
 #include "configfs.h"
···
 }


+static inline enum ffs_setup_state
+ffs_setup_state_clear_cancelled(struct ffs_data *ffs)
+{
+	return (enum ffs_setup_state)
+		cmpxchg(&ffs->setup_state, FFS_SETUP_CANCELLED, FFS_NO_SETUP);
+}
+
+
 static void ffs_func_eps_disable(struct ffs_function *func);
 static int __must_check ffs_func_eps_enable(struct ffs_function *func);
···
 	struct usb_ep		*ep;	/* P: ffs->eps_lock */
 	struct usb_request	*req;	/* P: epfile->mutex */

-	/* [0]: full speed, [1]: high speed */
-	struct usb_endpoint_descriptor	*descs[2];
+	/* [0]: full speed, [1]: high speed, [2]: super speed */
+	struct usb_endpoint_descriptor	*descs[3];

 	u8			num;
···
 	unsigned char		_pad;
 };

+/*  ffs_io_data structure ***************************************************/
+
+struct ffs_io_data {
+	bool aio;
+	bool read;
+
+	struct kiocb *kiocb;
+	const struct iovec *iovec;
+	unsigned long nr_segs;
+	char __user *buf;
+	size_t len;
+
+	struct mm_struct *mm;
+	struct work_struct work;
+
+	struct usb_ep *ep;
+	struct usb_request *req;
+};
+
 static int  __must_check ffs_epfiles_create(struct ffs_data *ffs);
 static void ffs_epfiles_destroy(struct ffs_epfile *epfiles, unsigned count);
···
 DEFINE_MUTEX(ffs_lock);
 EXPORT_SYMBOL(ffs_lock);

-static struct ffs_dev *ffs_find_dev(const char *name);
+static struct ffs_dev *_ffs_find_dev(const char *name);
+static struct ffs_dev *_ffs_alloc_dev(void);
 static int _ffs_name_dev(struct ffs_dev *dev, const char *name);
+static void _ffs_free_dev(struct ffs_dev *dev);
 static void *ffs_acquire_dev(const char *dev_name);
 static void ffs_release_dev(struct ffs_data *ffs_data);
 static int ffs_ready(struct ffs_data *ffs);
···
 	}

 	ffs->setup_state = FFS_NO_SETUP;
-	return ffs->ep0req_status;
+	return req->status ? req->status : req->actual;
 }

 static int __ffs_ep0_stall(struct ffs_data *ffs)
···
 	ENTER();

 	/* Fast check if setup was canceled */
-	if (FFS_SETUP_STATE(ffs) == FFS_SETUP_CANCELED)
+	if (ffs_setup_state_clear_cancelled(ffs) == FFS_SETUP_CANCELLED)
 		return -EIDRM;

 	/* Acquire mutex */
···
 		 * rather then _irqsave
 		 */
 		spin_lock_irq(&ffs->ev.waitq.lock);
-		switch (FFS_SETUP_STATE(ffs)) {
-		case FFS_SETUP_CANCELED:
+		switch (ffs_setup_state_clear_cancelled(ffs)) {
+		case FFS_SETUP_CANCELLED:
 			ret = -EIDRM;
 			goto done_spin;
···
 		/*
 		 * We are guaranteed to be still in FFS_ACTIVE state
 		 * but the state of setup could have changed from
-		 * FFS_SETUP_PENDING to FFS_SETUP_CANCELED so we need
+		 * FFS_SETUP_PENDING to FFS_SETUP_CANCELLED so we need
 		 * to check for that.  If that happened we copied data
 		 * from user space in vain but it's unlikely.
 		 *
···
 		 * transition can be performed and it's protected by
 		 * mutex.
 		 */
-		if (FFS_SETUP_STATE(ffs) == FFS_SETUP_CANCELED) {
+		if (ffs_setup_state_clear_cancelled(ffs) ==
+		    FFS_SETUP_CANCELLED) {
 			ret = -EIDRM;
 done_spin:
 			spin_unlock_irq(&ffs->ev.waitq.lock);
···
 	ENTER();

 	/* Fast check if setup was canceled */
-	if (FFS_SETUP_STATE(ffs) == FFS_SETUP_CANCELED)
+	if (ffs_setup_state_clear_cancelled(ffs) == FFS_SETUP_CANCELLED)
 		return -EIDRM;

 	/* Acquire mutex */
···
 	 */
 	spin_lock_irq(&ffs->ev.waitq.lock);

-	switch (FFS_SETUP_STATE(ffs)) {
-	case FFS_SETUP_CANCELED:
+	switch (ffs_setup_state_clear_cancelled(ffs)) {
+	case FFS_SETUP_CANCELLED:
 		ret = -EIDRM;
 		break;
···
 		spin_lock_irq(&ffs->ev.waitq.lock);

 		/* See ffs_ep0_write() */
-		if (FFS_SETUP_STATE(ffs) == FFS_SETUP_CANCELED) {
+		if (ffs_setup_state_clear_cancelled(ffs) ==
+		    FFS_SETUP_CANCELLED) {
 			ret = -EIDRM;
 			break;
 		}
···
 	return ret;
 }

+static unsigned int ffs_ep0_poll(struct file *file, poll_table *wait)
+{
+	struct ffs_data *ffs = file->private_data;
+	unsigned int mask = POLLWRNORM;
+	int ret;
+
+	poll_wait(file, &ffs->ev.waitq, wait);
+
+	ret = ffs_mutex_lock(&ffs->mutex, file->f_flags & O_NONBLOCK);
+	if (unlikely(ret < 0))
+		return mask;
+
+	switch (ffs->state) {
+	case FFS_READ_DESCRIPTORS:
+	case FFS_READ_STRINGS:
+		mask |= POLLOUT;
+		break;
+
+	case FFS_ACTIVE:
+		switch (ffs->setup_state) {
+		case FFS_NO_SETUP:
+			if (ffs->ev.count)
+				mask |= POLLIN;
+			break;
+
+		case FFS_SETUP_PENDING:
+		case FFS_SETUP_CANCELLED:
+			mask |= (POLLIN | POLLOUT);
+			break;
+		}
+	case FFS_CLOSING:
+		break;
+	}
+
+	mutex_unlock(&ffs->mutex);
+
+	return mask;
+}
+
 static const struct file_operations ffs_ep0_operations = {
 	.llseek =	no_llseek,
···
 	.read =		ffs_ep0_read,
 	.release =	ffs_ep0_release,
 	.unlocked_ioctl =	ffs_ep0_ioctl,
+	.poll =		ffs_ep0_poll,
 };
···
 	}
 }

-static ssize_t ffs_epfile_io(struct file *file,
-			     char __user *buf, size_t len, int read)
+static void ffs_user_copy_worker(struct work_struct *work)
+{
+	struct ffs_io_data *io_data = container_of(work, struct ffs_io_data,
+						   work);
+	int ret = io_data->req->status ? io_data->req->status :
+					 io_data->req->actual;
+
+	if (io_data->read && ret > 0) {
+		int i;
+		size_t pos = 0;
+		use_mm(io_data->mm);
+		for (i = 0; i < io_data->nr_segs; i++) {
+			if (unlikely(copy_to_user(io_data->iovec[i].iov_base,
+						  &io_data->buf[pos],
+						  io_data->iovec[i].iov_len))) {
+				ret = -EFAULT;
+				break;
+			}
+			pos += io_data->iovec[i].iov_len;
+		}
+		unuse_mm(io_data->mm);
+	}
+
+	aio_complete(io_data->kiocb, ret, ret);
+
+	usb_ep_free_request(io_data->ep, io_data->req);
+
+	io_data->kiocb->private = NULL;
+	if (io_data->read)
+		kfree(io_data->iovec);
+	kfree(io_data->buf);
+	kfree(io_data);
+}
+
+static void ffs_epfile_async_io_complete(struct usb_ep *_ep,
+					 struct usb_request *req)
+{
+	struct ffs_io_data *io_data = req->context;
+
+	ENTER();
+
+	INIT_WORK(&io_data->work, ffs_user_copy_worker);
+	schedule_work(&io_data->work);
+}
+
+static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
 {
 	struct ffs_epfile *epfile = file->private_data;
 	struct ffs_ep *ep;
···
 	}

 	/* Do we halt? */
-	halt = !read == !epfile->in;
+	halt = (!io_data->read == !epfile->in);
 	if (halt && epfile->isoc) {
 		ret = -EINVAL;
 		goto error;
···
 		 * Controller may require buffer size to be aligned to
 		 * maxpacketsize of an out endpoint.
 		 */
-		data_len = read ? usb_ep_align_maybe(gadget, ep->ep, len) : len;
+		data_len = io_data->read ?
+			usb_ep_align_maybe(gadget, ep->ep, io_data->len) :
+			io_data->len;

 		data = kmalloc(data_len, GFP_KERNEL);
 		if (unlikely(!data))
 			return -ENOMEM;
-
-		if (!read && unlikely(copy_from_user(data, buf, len))) {
-			ret = -EFAULT;
-			goto error;
+		if (io_data->aio && !io_data->read) {
+			int i;
+			size_t pos = 0;
+			for (i = 0; i < io_data->nr_segs; i++) {
+				if (unlikely(copy_from_user(&data[pos],
+					     io_data->iovec[i].iov_base,
+					     io_data->iovec[i].iov_len))) {
+					ret = -EFAULT;
+					goto error;
+				}
+				pos += io_data->iovec[i].iov_len;
+			}
+		} else {
+			if (!io_data->read &&
+			    unlikely(__copy_from_user(data, io_data->buf,
+						      io_data->len))) {
+				ret = -EFAULT;
+				goto error;
+			}
 		}
 	}
···
 		ret = -EBADMSG;
 	} else {
 		/* Fire the request */
-		DECLARE_COMPLETION_ONSTACK(done);
+		struct usb_request *req;

-		struct usb_request *req = ep->req;
-		req->context  = &done;
-		req->complete = ffs_epfile_io_complete;
-		req->buf      = data;
-		req->length   = data_len;
+		if (io_data->aio) {
+			req = usb_ep_alloc_request(ep->ep, GFP_KERNEL);
+			if (unlikely(!req))
+				goto error_lock;

-		ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC);
+			req->buf      = data;
+			req->length   = io_data->len;

-		spin_unlock_irq(&epfile->ffs->eps_lock);
+			io_data->buf = data;
+			io_data->ep = ep->ep;
+			io_data->req = req;

-		if (unlikely(ret < 0)) {
-			/* nop */
-		} else if (unlikely(wait_for_completion_interruptible(&done))) {
-			ret = -EINTR;
-			usb_ep_dequeue(ep->ep, req);
+			req->context  = io_data;
+			req->complete = ffs_epfile_async_io_complete;
+
+			ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC);
+			if (unlikely(ret)) {
+				usb_ep_free_request(ep->ep, req);
+				goto error_lock;
+			}
+			ret = -EIOCBQUEUED;
+
+			spin_unlock_irq(&epfile->ffs->eps_lock);
 		} else {
-			/*
-			 * XXX We may end up silently droping data here.
-			 * Since data_len (i.e. req->length) may be bigger
-			 * than len (after being rounded up to maxpacketsize),
-			 * we may end up with more data then user space has
-			 * space for.
-			 */
-			ret = ep->status;
-			if (read && ret > 0 &&
-			    unlikely(copy_to_user(buf, data,
-						  min_t(size_t, ret, len))))
-				ret = -EFAULT;
+			DECLARE_COMPLETION_ONSTACK(done);
+
+			req = ep->req;
+			req->buf      = data;
+			req->length   = io_data->len;
+
+			req->context  = &done;
+			req->complete = ffs_epfile_io_complete;
+
+			ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC);
+
+			spin_unlock_irq(&epfile->ffs->eps_lock);
+
+			if (unlikely(ret < 0)) {
+				/* nop */
+			} else if (unlikely(
+				   wait_for_completion_interruptible(&done))) {
+				ret = -EINTR;
+				usb_ep_dequeue(ep->ep, req);
+			} else {
+				/*
+				 * XXX We may end up silently droping data
+				 * here.  Since data_len (i.e. req->length) may
+				 * be bigger than len (after being rounded up
+				 * to maxpacketsize), we may end up with more
+				 * data then user space has space for.
+				 */
+				ret = ep->status;
+				if (io_data->read && ret > 0) {
+					ret = min_t(size_t, ret, io_data->len);
+
+					if (unlikely(copy_to_user(io_data->buf,
+						data, ret)))
+						ret = -EFAULT;
+				}
+			}
+			kfree(data);
 		}
 	}

+	mutex_unlock(&epfile->mutex);
+	return ret;
+
+error_lock:
+	spin_unlock_irq(&epfile->ffs->eps_lock);
 	mutex_unlock(&epfile->mutex);
 error:
 	kfree(data);
···
 ffs_epfile_write(struct file *file, const char __user *buf, size_t len,
 		 loff_t *ptr)
 {
+	struct ffs_io_data io_data;
+
 	ENTER();

-	return ffs_epfile_io(file, (char __user *)buf, len, 0);
+	io_data.aio = false;
+	io_data.read = false;
+	io_data.buf = (char * __user)buf;
+	io_data.len = len;
+
+	return ffs_epfile_io(file, &io_data);
 }

 static ssize_t
 ffs_epfile_read(struct file *file, char __user *buf, size_t len, loff_t *ptr)
 {
+	struct ffs_io_data io_data;
+
 	ENTER();

-	return ffs_epfile_io(file, buf, len, 1);
+	io_data.aio = false;
+	io_data.read = true;
+	io_data.buf = buf;
+	io_data.len = len;
+
+	return ffs_epfile_io(file, &io_data);
 }

 static int
···
 	ffs_data_opened(epfile->ffs);

 	return 0;
+}
+
+static int ffs_aio_cancel(struct kiocb *kiocb)
+{
+	struct ffs_io_data *io_data = kiocb->private;
+	struct ffs_epfile *epfile = kiocb->ki_filp->private_data;
+	int value;
+
+	ENTER();
+
+	spin_lock_irq(&epfile->ffs->eps_lock);
+
+	if (likely(io_data && io_data->ep && io_data->req))
+		value = usb_ep_dequeue(io_data->ep, io_data->req);
+	else
+		value = -EINVAL;
+
+	spin_unlock_irq(&epfile->ffs->eps_lock);
+
+	return value;
+}
+
+static ssize_t ffs_epfile_aio_write(struct kiocb *kiocb,
+				    const struct iovec *iovec,
+				    unsigned long nr_segs, loff_t loff)
+{
+	struct ffs_io_data *io_data;
+
+	ENTER();
+
+	io_data = kmalloc(sizeof(*io_data), GFP_KERNEL);
+	if (unlikely(!io_data))
+		return -ENOMEM;
+
+	io_data->aio = true;
+	io_data->read = false;
+	io_data->kiocb = kiocb;
+	io_data->iovec = iovec;
+	io_data->nr_segs = nr_segs;
+	io_data->len = kiocb->ki_nbytes;
+	io_data->mm = current->mm;
+
+	kiocb->private = io_data;
+
+	kiocb_set_cancel_fn(kiocb, ffs_aio_cancel);
+
+	return ffs_epfile_io(kiocb->ki_filp, io_data);
+}
+
+static ssize_t ffs_epfile_aio_read(struct kiocb *kiocb,
+				   const struct iovec *iovec,
+				   unsigned long nr_segs, loff_t loff)
+{
+	struct ffs_io_data *io_data;
+	struct iovec *iovec_copy;
+
+	ENTER();
+
+	iovec_copy = kmalloc_array(nr_segs, sizeof(*iovec_copy), GFP_KERNEL);
+	if (unlikely(!iovec_copy))
+		return -ENOMEM;
+
+	memcpy(iovec_copy, iovec, sizeof(struct iovec)*nr_segs);
+
+	io_data = kmalloc(sizeof(*io_data), GFP_KERNEL);
+	if (unlikely(!io_data)) {
+		kfree(iovec_copy);
+		return -ENOMEM;
+	}
+
+	io_data->aio = true;
+	io_data->read = true;
+	io_data->kiocb = kiocb;
+	io_data->iovec = iovec_copy;
+	io_data->nr_segs = nr_segs;
+	io_data->len = kiocb->ki_nbytes;
+	io_data->mm = current->mm;
+
+	kiocb->private = io_data;
+
+	kiocb_set_cancel_fn(kiocb, ffs_aio_cancel);
+
+	return ffs_epfile_io(kiocb->ki_filp, io_data);
 }

 static int
···
 	.open =		ffs_epfile_open,
 	.write =	ffs_epfile_write,
 	.read =		ffs_epfile_read,
+	.aio_write =	ffs_epfile_aio_write,
+	.aio_read =	ffs_epfile_aio_read,
 	.release =	ffs_epfile_release,
 	.unlocked_ioctl =	ffs_epfile_ioctl,
 };
···
 	if (ffs->epfiles)
 		ffs_epfiles_destroy(ffs->epfiles, ffs->eps_count);

-	kfree(ffs->raw_descs);
+	kfree(ffs->raw_descs_data);
 	kfree(ffs->raw_strings);
 	kfree(ffs->stringtabs);
 }
···
 	ffs_data_clear(ffs);

 	ffs->epfiles = NULL;
+	ffs->raw_descs_data = NULL;
 	ffs->raw_descs = NULL;
 	ffs->raw_strings = NULL;
 	ffs->stringtabs = NULL;

 	ffs->raw_descs_length = 0;
-	ffs->raw_fs_descs_length = 0;
 	ffs->fs_descs_count = 0;
 	ffs->hs_descs_count = 0;
+	ffs->ss_descs_count = 0;

 	ffs->strings_count = 0;
 	ffs->interfaces_count = 0;
···
 	spin_lock_irqsave(&func->ffs->eps_lock, flags);
 	do {
 		struct usb_endpoint_descriptor *ds;
-		ds = ep->descs[ep->descs[1] ? 1 : 0];
+		int desc_idx;
+
+		if (ffs->gadget->speed == USB_SPEED_SUPER)
+			desc_idx = 2;
+		else if (ffs->gadget->speed == USB_SPEED_HIGH)
+			desc_idx = 1;
+		else
+			desc_idx = 0;
+
+		/* fall-back to lower speed if desc missing for current speed */
+		do {
+			ds = ep->descs[desc_idx];
+		} while (!ds && --desc_idx >= 0);
+
+		if (!ds) {
+			ret = -EINVAL;
+			break;
+		}

 		ep->ep->driver_data = ep;
 		ep->ep->desc = ds;
···
 		}
 		break;

+	case USB_DT_SS_ENDPOINT_COMP:
+		pr_vdebug("EP SS companion descriptor\n");
+		if (length != sizeof(struct usb_ss_ep_comp_descriptor))
+			goto inv_length;
+		break;
+
 	case USB_DT_OTHER_SPEED_CONFIG:
 	case USB_DT_INTERFACE_POWER:
 	case USB_DT_DEBUG:
···
 static int __ffs_data_got_descs(struct ffs_data *ffs,
 				char *const _data, size_t len)
 {
-	unsigned fs_count, hs_count;
-	int fs_len, ret = -EINVAL;
-	char *data = _data;
+	char *data = _data, *raw_descs;
+	unsigned counts[3], flags;
+	int ret = -EINVAL, i;

 	ENTER();

-	if (unlikely(get_unaligned_le32(data) != FUNCTIONFS_DESCRIPTORS_MAGIC ||
-		     get_unaligned_le32(data + 4) != len))
+	if (get_unaligned_le32(data + 4) != len)
 		goto error;
-	fs_count = get_unaligned_le32(data +  8);
-	hs_count = get_unaligned_le32(data + 12);

-	if (!fs_count && !hs_count)
-		goto einval;
-
-	data += 16;
-	len  -= 16;
-
-	if (likely(fs_count)) {
-		fs_len = ffs_do_descs(fs_count, data, len,
-				      __ffs_data_do_entity, ffs);
-		if (unlikely(fs_len < 0)) {
-			ret = fs_len;
+	switch (get_unaligned_le32(data)) {
+	case FUNCTIONFS_DESCRIPTORS_MAGIC:
+		flags = FUNCTIONFS_HAS_FS_DESC | FUNCTIONFS_HAS_HS_DESC;
+		data += 8;
+		len  -= 8;
+		break;
+	case FUNCTIONFS_DESCRIPTORS_MAGIC_V2:
+		flags = get_unaligned_le32(data + 8);
+		if (flags & ~(FUNCTIONFS_HAS_FS_DESC |
+			      FUNCTIONFS_HAS_HS_DESC |
+			      FUNCTIONFS_HAS_SS_DESC)) {
+			ret = -ENOSYS;
 			goto error;
 		}
-
-		data += fs_len;
-		len  -= fs_len;
-	} else {
-		fs_len = 0;
+		data += 12;
+		len  -= 12;
+		break;
+	default:
+		goto error;
 	}

-	if (likely(hs_count)) {
-		ret = ffs_do_descs(hs_count, data, len,
-				   __ffs_data_do_entity, ffs);
-		if (unlikely(ret < 0))
+	/* Read fs_count, hs_count and ss_count (if present) */
+	for (i = 0; i < 3; ++i) {
+		if (!(flags & (1 << i))) {
+			counts[i] = 0;
+		} else if (len < 4) {
 			goto error;
-	} else {
-		ret = 0;
+		} else {
+			counts[i] = get_unaligned_le32(data);
+			data += 4;
+			len  -= 4;
+		}
 	}

-	if (unlikely(len != ret))
-		goto einval;
+	/* Read descriptors */
+	raw_descs = data;
+	for (i = 0; i < 3; ++i) {
+		if (!counts[i])
continue; 1630 + ret = ffs_do_descs(counts[i], data, len, 1631 + __ffs_data_do_entity, ffs); 1632 + if (ret < 0) 1633 + goto error; 1634 + data += ret; 1635 + len -= ret; 1636 + } 1922 1637 1923 - ffs->raw_fs_descs_length = fs_len; 1924 - ffs->raw_descs_length = fs_len + ret; 1925 - ffs->raw_descs = _data; 1926 - ffs->fs_descs_count = fs_count; 1927 - ffs->hs_descs_count = hs_count; 1638 + if (raw_descs == data || len) { 1639 + ret = -EINVAL; 1640 + goto error; 1641 + } 1642 + 1643 + ffs->raw_descs_data = _data; 1644 + ffs->raw_descs = raw_descs; 1645 + ffs->raw_descs_length = data - raw_descs; 1646 + ffs->fs_descs_count = counts[0]; 1647 + ffs->hs_descs_count = counts[1]; 1648 + ffs->ss_descs_count = counts[2]; 1928 1649 1929 1650 return 0; 1930 1651 1931 - einval: 1932 - ret = -EINVAL; 1933 1652 error: 1934 1653 kfree(_data); 1935 1654 return ret; ··· 2102 1789 * the source does nothing. 2103 1790 */ 2104 1791 if (ffs->setup_state == FFS_SETUP_PENDING) 2105 - ffs->setup_state = FFS_SETUP_CANCELED; 1792 + ffs->setup_state = FFS_SETUP_CANCELLED; 2106 1793 2107 1794 switch (type) { 2108 1795 case FUNCTIONFS_RESUME: ··· 2163 1850 struct usb_endpoint_descriptor *ds = (void *)desc; 2164 1851 struct ffs_function *func = priv; 2165 1852 struct ffs_ep *ffs_ep; 2166 - 2167 - /* 2168 - * If hs_descriptors is not NULL then we are reading hs 2169 - * descriptors now 2170 - */ 2171 - const int isHS = func->function.hs_descriptors != NULL; 2172 - unsigned idx; 1853 + unsigned ep_desc_id, idx; 1854 + static const char *speed_names[] = { "full", "high", "super" }; 2173 1855 2174 1856 if (type != FFS_DESCRIPTOR) 2175 1857 return 0; 2176 1858 2177 - if (isHS) 1859 + /* 1860 + * If ss_descriptors is not NULL, we are reading super speed 1861 + * descriptors; if hs_descriptors is not NULL, we are reading high 1862 + * speed descriptors; otherwise, we are reading full speed 1863 + * descriptors. 
1864 + */ 1865 + if (func->function.ss_descriptors) { 1866 + ep_desc_id = 2; 1867 + func->function.ss_descriptors[(long)valuep] = desc; 1868 + } else if (func->function.hs_descriptors) { 1869 + ep_desc_id = 1; 2178 1870 func->function.hs_descriptors[(long)valuep] = desc; 2179 - else 1871 + } else { 1872 + ep_desc_id = 0; 2180 1873 func->function.fs_descriptors[(long)valuep] = desc; 1874 + } 2181 1875 2182 1876 if (!desc || desc->bDescriptorType != USB_DT_ENDPOINT) 2183 1877 return 0; ··· 2192 1872 idx = (ds->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK) - 1; 2193 1873 ffs_ep = func->eps + idx; 2194 1874 2195 - if (unlikely(ffs_ep->descs[isHS])) { 2196 - pr_vdebug("two %sspeed descriptors for EP %d\n", 2197 - isHS ? "high" : "full", 1875 + if (unlikely(ffs_ep->descs[ep_desc_id])) { 1876 + pr_err("two %sspeed descriptors for EP %d\n", 1877 + speed_names[ep_desc_id], 2198 1878 ds->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK); 2199 1879 return -EINVAL; 2200 1880 } 2201 - ffs_ep->descs[isHS] = ds; 1881 + ffs_ep->descs[ep_desc_id] = ds; 2202 1882 2203 1883 ffs_dump_mem(": Original ep desc", ds, ds->bLength); 2204 1884 if (ffs_ep->ep) { ··· 2342 2022 const int full = !!func->ffs->fs_descs_count; 2343 2023 const int high = gadget_is_dualspeed(func->gadget) && 2344 2024 func->ffs->hs_descs_count; 2025 + const int super = gadget_is_superspeed(func->gadget) && 2026 + func->ffs->ss_descs_count; 2345 2027 2346 - int ret; 2028 + int fs_len, hs_len, ret; 2347 2029 2348 2030 /* Make it a single chunk, less management later on */ 2349 2031 vla_group(d); ··· 2354 2032 full ? ffs->fs_descs_count + 1 : 0); 2355 2033 vla_item_with_sz(d, struct usb_descriptor_header *, hs_descs, 2356 2034 high ? ffs->hs_descs_count + 1 : 0); 2035 + vla_item_with_sz(d, struct usb_descriptor_header *, ss_descs, 2036 + super ? ffs->ss_descs_count + 1 : 0); 2357 2037 vla_item_with_sz(d, short, inums, ffs->interfaces_count); 2358 - vla_item_with_sz(d, char, raw_descs, 2359 - high ? 
ffs->raw_descs_length : ffs->raw_fs_descs_length); 2038 + vla_item_with_sz(d, char, raw_descs, ffs->raw_descs_length); 2360 2039 char *vlabuf; 2361 2040 2362 2041 ENTER(); 2363 2042 2364 - /* Only high speed but not supported by gadget? */ 2365 - if (unlikely(!(full | high))) 2043 + /* Has descriptors only for speeds gadget does not support */ 2044 + if (unlikely(!(full | high | super))) 2366 2045 return -ENOTSUPP; 2367 2046 2368 2047 /* Allocate a single chunk, less management later on */ ··· 2373 2050 2374 2051 /* Zero */ 2375 2052 memset(vla_ptr(vlabuf, d, eps), 0, d_eps__sz); 2376 - memcpy(vla_ptr(vlabuf, d, raw_descs), ffs->raw_descs + 16, 2377 - d_raw_descs__sz); 2053 + /* Copy descriptors */ 2054 + memcpy(vla_ptr(vlabuf, d, raw_descs), ffs->raw_descs, 2055 + ffs->raw_descs_length); 2056 + 2378 2057 memset(vla_ptr(vlabuf, d, inums), 0xff, d_inums__sz); 2379 2058 for (ret = ffs->eps_count; ret; --ret) { 2380 2059 struct ffs_ep *ptr; ··· 2398 2073 */ 2399 2074 if (likely(full)) { 2400 2075 func->function.fs_descriptors = vla_ptr(vlabuf, d, fs_descs); 2401 - ret = ffs_do_descs(ffs->fs_descs_count, 2402 - vla_ptr(vlabuf, d, raw_descs), 2403 - d_raw_descs__sz, 2404 - __ffs_func_bind_do_descs, func); 2405 - if (unlikely(ret < 0)) 2076 + fs_len = ffs_do_descs(ffs->fs_descs_count, 2077 + vla_ptr(vlabuf, d, raw_descs), 2078 + d_raw_descs__sz, 2079 + __ffs_func_bind_do_descs, func); 2080 + if (unlikely(fs_len < 0)) { 2081 + ret = fs_len; 2406 2082 goto error; 2083 + } 2407 2084 } else { 2408 - ret = 0; 2085 + fs_len = 0; 2409 2086 } 2410 2087 2411 2088 if (likely(high)) { 2412 2089 func->function.hs_descriptors = vla_ptr(vlabuf, d, hs_descs); 2413 - ret = ffs_do_descs(ffs->hs_descs_count, 2414 - vla_ptr(vlabuf, d, raw_descs) + ret, 2415 - d_raw_descs__sz - ret, 2416 - __ffs_func_bind_do_descs, func); 2090 + hs_len = ffs_do_descs(ffs->hs_descs_count, 2091 + vla_ptr(vlabuf, d, raw_descs) + fs_len, 2092 + d_raw_descs__sz - fs_len, 2093 + __ffs_func_bind_do_descs, func); 
2094 + if (unlikely(hs_len < 0)) { 2095 + ret = hs_len; 2096 + goto error; 2097 + } 2098 + } else { 2099 + hs_len = 0; 2100 + } 2101 + 2102 + if (likely(super)) { 2103 + func->function.ss_descriptors = vla_ptr(vlabuf, d, ss_descs); 2104 + ret = ffs_do_descs(ffs->ss_descs_count, 2105 + vla_ptr(vlabuf, d, raw_descs) + fs_len + hs_len, 2106 + d_raw_descs__sz - fs_len - hs_len, 2107 + __ffs_func_bind_do_descs, func); 2417 2108 if (unlikely(ret < 0)) 2418 2109 goto error; 2419 2110 } ··· 2440 2099 * now. 2441 2100 */ 2442 2101 ret = ffs_do_descs(ffs->fs_descs_count + 2443 - (high ? ffs->hs_descs_count : 0), 2102 + (high ? ffs->hs_descs_count : 0) + 2103 + (super ? ffs->ss_descs_count : 0), 2444 2104 vla_ptr(vlabuf, d, raw_descs), d_raw_descs__sz, 2445 2105 __ffs_func_bind_do_nums, func); 2446 2106 if (unlikely(ret < 0)) ··· 2600 2258 2601 2259 static LIST_HEAD(ffs_devices); 2602 2260 2603 - static struct ffs_dev *_ffs_find_dev(const char *name) 2261 + static struct ffs_dev *_ffs_do_find_dev(const char *name) 2604 2262 { 2605 2263 struct ffs_dev *dev; 2606 2264 ··· 2617 2275 /* 2618 2276 * ffs_lock must be taken by the caller of this function 2619 2277 */ 2620 - static struct ffs_dev *ffs_get_single_dev(void) 2278 + static struct ffs_dev *_ffs_get_single_dev(void) 2621 2279 { 2622 2280 struct ffs_dev *dev; 2623 2281 ··· 2633 2291 /* 2634 2292 * ffs_lock must be taken by the caller of this function 2635 2293 */ 2636 - static struct ffs_dev *ffs_find_dev(const char *name) 2294 + static struct ffs_dev *_ffs_find_dev(const char *name) 2637 2295 { 2638 2296 struct ffs_dev *dev; 2639 2297 2640 - dev = ffs_get_single_dev(); 2298 + dev = _ffs_get_single_dev(); 2641 2299 if (dev) 2642 2300 return dev; 2643 2301 2644 - return _ffs_find_dev(name); 2302 + return _ffs_do_find_dev(name); 2645 2303 } 2646 2304 2647 2305 /* Configfs support *********************************************************/ ··· 2677 2335 2678 2336 opts = to_f_fs_opts(f); 2679 2337 ffs_dev_lock(); 2680 - 
ffs_free_dev(opts->dev); 2338 + _ffs_free_dev(opts->dev); 2681 2339 ffs_dev_unlock(); 2682 2340 kfree(opts); 2683 2341 } ··· 2732 2390 opts->func_inst.set_inst_name = ffs_set_inst_name; 2733 2391 opts->func_inst.free_func_inst = ffs_free_inst; 2734 2392 ffs_dev_lock(); 2735 - dev = ffs_alloc_dev(); 2393 + dev = _ffs_alloc_dev(); 2736 2394 ffs_dev_unlock(); 2737 2395 if (IS_ERR(dev)) { 2738 2396 kfree(opts); ··· 2788 2446 */ 2789 2447 func->function.fs_descriptors = NULL; 2790 2448 func->function.hs_descriptors = NULL; 2449 + func->function.ss_descriptors = NULL; 2791 2450 func->interfaces_nums = NULL; 2792 2451 2793 2452 ffs_event_add(ffs, FUNCTIONFS_UNBIND); ··· 2821 2478 /* 2822 2479 * ffs_lock must be taken by the caller of this function 2823 2480 */ 2824 - struct ffs_dev *ffs_alloc_dev(void) 2481 + static struct ffs_dev *_ffs_alloc_dev(void) 2825 2482 { 2826 2483 struct ffs_dev *dev; 2827 2484 int ret; 2828 2485 2829 - if (ffs_get_single_dev()) 2486 + if (_ffs_get_single_dev()) 2830 2487 return ERR_PTR(-EBUSY); 2831 2488 2832 2489 dev = kzalloc(sizeof(*dev), GFP_KERNEL); ··· 2854 2511 { 2855 2512 struct ffs_dev *existing; 2856 2513 2857 - existing = _ffs_find_dev(name); 2514 + existing = _ffs_do_find_dev(name); 2858 2515 if (existing) 2859 2516 return -EBUSY; 2860 - 2517 + 2861 2518 dev->name = name; 2862 2519 2863 2520 return 0; ··· 2898 2555 /* 2899 2556 * ffs_lock must be taken by the caller of this function 2900 2557 */ 2901 - void ffs_free_dev(struct ffs_dev *dev) 2558 + static void _ffs_free_dev(struct ffs_dev *dev) 2902 2559 { 2903 2560 list_del(&dev->entry); 2904 2561 if (dev->name_allocated) ··· 2915 2572 ENTER(); 2916 2573 ffs_dev_lock(); 2917 2574 2918 - ffs_dev = ffs_find_dev(dev_name); 2575 + ffs_dev = _ffs_find_dev(dev_name); 2919 2576 if (!ffs_dev) 2920 2577 ffs_dev = ERR_PTR(-ENODEV); 2921 2578 else if (ffs_dev->mounted) ··· 2938 2595 ffs_dev_lock(); 2939 2596 2940 2597 ffs_dev = ffs_data->private_data; 2941 - if (ffs_dev) 2598 + if (ffs_dev) { 
2942 2599 ffs_dev->mounted = false; 2943 - 2944 - if (ffs_dev->ffs_release_dev_callback) 2945 - ffs_dev->ffs_release_dev_callback(ffs_dev); 2600 + 2601 + if (ffs_dev->ffs_release_dev_callback) 2602 + ffs_dev->ffs_release_dev_callback(ffs_dev); 2603 + } 2946 2604 2947 2605 ffs_dev_unlock(); 2948 2606 }
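The f_fs.c hunk above replaces the fixed 16-byte descriptor header with a v2 format: a magic word, a total length, a flags word, then one 32-bit count per advertised speed (fs/hs/ss). A minimal userspace sketch of that parse, assuming the v2 magic value and flag bits shown here (they mirror `FUNCTIONFS_DESCRIPTORS_MAGIC_V2` and `FUNCTIONFS_HAS_{FS,HS,SS}_DESC`, but treat the exact values as illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative values mirroring the functionfs uapi header. */
#define MAGIC_V2    3
#define HAS_FS_DESC (1u << 0)
#define HAS_HS_DESC (1u << 1)
#define HAS_SS_DESC (1u << 2)

/* Unaligned little-endian 32-bit read, like get_unaligned_le32(). */
static uint32_t get_le32(const uint8_t *p)
{
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Parse the v2 header: magic, length, flags, then one count per
 * speed whose flag bit is set, in fs/hs/ss order.  Returns 0 on
 * success and fills counts[3]; -1 on a malformed header. */
static int parse_v2_header(const uint8_t *data, size_t len, uint32_t counts[3])
{
	uint32_t flags;
	int i;

	if (len < 12 || get_le32(data) != MAGIC_V2 || get_le32(data + 4) != len)
		return -1;
	flags = get_le32(data + 8);
	if (flags & ~(HAS_FS_DESC | HAS_HS_DESC | HAS_SS_DESC))
		return -1;
	data += 12;
	len -= 12;

	for (i = 0; i < 3; i++) {
		if (!(flags & (1u << i))) {
			counts[i] = 0;
		} else {
			if (len < 4)
				return -1;
			counts[i] = get_le32(data);
			data += 4;
			len -= 4;
		}
	}
	return 0;
}
```

The kernel version then walks the descriptor blobs with `ffs_do_descs()` for each non-zero count, exactly as the loop in the hunk does.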
+1 -1
drivers/usb/gadget/f_subset.c
··· 276 276 } 277 277 278 278 net = gether_connect(&geth->port); 279 - return IS_ERR(net) ? PTR_ERR(net) : 0; 279 + return PTR_RET(net); 280 280 } 281 281 282 282 static void geth_disable(struct usb_function *f)
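The `PTR_RET()` one-liner above leans on the kernel's error-pointer convention: a returned pointer either holds a valid address or encodes a small negative errno in the top page of the address space. A userspace sketch of the idiom (the real macros live in `linux/err.h`; `PTR_RET` was later renamed `PTR_ERR_OR_ZERO`):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* Encode/decode a negative errno inside a pointer value. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }

/* An "error pointer" lives in the last page of the address space. */
static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* PTR_RET: 0 for a good pointer, the encoded errno otherwise --
 * exactly what the old IS_ERR(net) ? PTR_ERR(net) : 0 spelled out. */
static inline long PTR_RET(const void *ptr)
{
	return IS_ERR(ptr) ? PTR_ERR(ptr) : 0;
}
```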
+2 -8
drivers/usb/gadget/gr_udc.c
··· 225 225 const char *name = "gr_udc_state"; 226 226 227 227 dev->dfs_root = debugfs_create_dir(dev_name(dev->dev), NULL); 228 - if (IS_ERR(dev->dfs_root)) { 229 - dev_err(dev->dev, "Failed to create debugfs directory\n"); 230 - return; 231 - } 232 - dev->dfs_state = debugfs_create_file(name, 0444, dev->dfs_root, 233 - dev, &gr_dfs_fops); 234 - if (IS_ERR(dev->dfs_state)) 235 - dev_err(dev->dev, "Failed to create debugfs file %s\n", name); 228 + dev->dfs_state = debugfs_create_file(name, 0444, dev->dfs_root, dev, 229 + &gr_dfs_fops); 236 230 } 237 231 238 232 static void gr_dfs_delete(struct gr_udc *dev)
+3 -6
drivers/usb/gadget/inode.c
··· 439 439 /* FIXME writebehind for O_NONBLOCK and poll(), qlen = 1 */ 440 440 441 441 value = -ENOMEM; 442 - kbuf = kmalloc (len, GFP_KERNEL); 443 - if (!kbuf) 444 - goto free1; 445 - if (copy_from_user (kbuf, buf, len)) { 446 - value = -EFAULT; 442 + kbuf = memdup_user(buf, len); 443 + if (!kbuf) { 444 + value = PTR_ERR(kbuf); 447 445 goto free1; 448 446 } 449 447 ··· 450 452 data->name, len, (int) value); 451 453 free1: 452 454 mutex_unlock(&data->lock); 453 - kfree (kbuf); 454 455 return value; 455 456 } 456 457
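The inode.c change collapses the kmalloc-plus-copy_from_user pair into `memdup_user()`, which returns either the new buffer or an error pointer (never NULL) — note that the `if (!kbuf)` test visible in the hunk would therefore miss a failed copy, where `IS_ERR(kbuf)` is the usual check. A userspace analogue with malloc/memcpy standing in for the kernel primitives (names and behavior here are a sketch, not the kernel's implementation):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_ERRNO 4095
static inline void *ERR_PTR(long e) { return (void *)e; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

/* Userspace stand-in for memdup_user(): duplicate len bytes,
 * returning ERR_PTR(-ENOMEM) on allocation failure.  (The kernel
 * version additionally returns ERR_PTR(-EFAULT) if the copy from
 * userspace faults.) */
static void *memdup_user_sketch(const void *src, size_t len)
{
	void *p = malloc(len);

	if (!p)
		return ERR_PTR(-ENOMEM);
	memcpy(p, src, len);
	return p;
}
```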
+2 -2
drivers/usb/gadget/lpc32xx_udc.c
··· 3295 3295 pll_set_fail: 3296 3296 clk_disable(udc->usb_pll_clk); 3297 3297 pll_enable_fail: 3298 - clk_put(udc->usb_slv_clk); 3299 - usb_otg_clk_get_fail: 3300 3298 clk_put(udc->usb_otg_clk); 3299 + usb_otg_clk_get_fail: 3300 + clk_put(udc->usb_slv_clk); 3301 3301 usb_clk_get_fail: 3302 3302 clk_put(udc->usb_pll_clk); 3303 3303 pll_get_fail:
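The lpc32xx_udc fix swaps two `clk_put()` calls so the error-path labels release resources in exactly the reverse order of acquisition — each label undoes the step taken just before the failure point. A minimal sketch of the goto-unwind idiom (resource names invented):

```c
#include <assert.h>

static int log_idx;
static char release_log[4];

/* Pretend acquisition that can be forced to fail. */
static int acquire(char name, int fail)
{
	(void)name;
	return fail ? -1 : 0;
}

static void release(char name)
{
	release_log[log_idx++] = name;
}

/* Acquire a, b, c in order; on failure, unwind only what was taken,
 * in reverse order -- mirroring the corrected label ordering. */
static int setup(int fail_at)
{
	if (acquire('a', fail_at == 0))
		goto err_a;
	if (acquire('b', fail_at == 1))
		goto err_b;
	if (acquire('c', fail_at == 2))
		goto err_c;
	return 0;

err_c:
	release('b');
err_b:
	release('a');
err_a:
	return -1;
}
```

The original bug had the equivalent of `err_c` releasing 'a' and `err_b` releasing 'b', so a failure mid-sequence dropped the wrong references.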
+7 -1
drivers/usb/gadget/printer.c
··· 427 427 req->length = USB_BUFSIZE; 428 428 req->complete = rx_complete; 429 429 430 + /* here, we unlock, and only unlock, to avoid deadlock. */ 431 + spin_unlock(&dev->lock); 430 432 error = usb_ep_queue(dev->out_ep, req, GFP_ATOMIC); 433 + spin_lock(&dev->lock); 431 434 if (error) { 432 435 DBG(dev, "rx submit --> %d\n", error); 433 436 list_add(&req->list, &dev->rx_reqs); 434 437 break; 435 - } else { 438 + } 439 + /* if the req is empty, then add it into dev->rx_reqs_active. */ 440 + else if (list_empty(&req->list)) { 436 441 list_add(&req->list, &dev->rx_reqs_active); 437 442 } 438 443 } ··· 1138 1133 NULL, "g_printer"); 1139 1134 if (IS_ERR(dev->pdev)) { 1140 1135 ERROR(dev, "Failed to create device: g_printer\n"); 1136 + status = PTR_ERR(dev->pdev); 1141 1137 goto fail; 1142 1138 } 1143 1139
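The printer.c hunk drops `dev->lock` around `usb_ep_queue()` because the completion callback may run synchronously and take the same (non-recursive) spinlock, deadlocking. The shape of the fix can be sketched with a fake lock that flags recursive acquisition (all names invented; real spinlock semantics differ):

```c
#include <assert.h>

static int lock_held;
static int deadlock_detected;
static int completions;

static void lock(void)
{
	if (lock_held)
		deadlock_detected = 1;	/* would spin forever in reality */
	lock_held = 1;
}

static void unlock(void) { lock_held = 0; }

/* Completion handler takes the lock itself, like rx_complete() does. */
static void complete(void)
{
	lock();
	completions++;
	unlock();
}

/* queue() may invoke the completion synchronously -- the worst case
 * the patch defends against. */
static void queue(void (*cb)(void)) { cb(); }

static void setup_rx_buggy(void)
{
	lock();
	queue(complete);	/* completion re-locks -> deadlock */
	unlock();
}

static void setup_rx_fixed(void)
{
	lock();
	/* ... prepare the request under the lock ... */
	unlock();		/* drop it across the queue call */
	queue(complete);
	lock();			/* re-take to continue */
	unlock();
}
```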
+101 -42
drivers/usb/gadget/s3c-hsotg.c
··· 617 617 to_write = DIV_ROUND_UP(to_write, 4); 618 618 data = hs_req->req.buf + buf_pos; 619 619 620 - writesl(hsotg->regs + EPFIFO(hs_ep->index), data, to_write); 620 + iowrite32_rep(hsotg->regs + EPFIFO(hs_ep->index), data, to_write); 621 621 622 622 return (to_write >= can_write) ? -ENOSPC : 0; 623 623 } ··· 720 720 ureq->length, ureq->actual); 721 721 if (0) 722 722 dev_dbg(hsotg->dev, 723 - "REQ buf %p len %d dma 0x%08x noi=%d zp=%d snok=%d\n", 724 - ureq->buf, length, ureq->dma, 723 + "REQ buf %p len %d dma 0x%pad noi=%d zp=%d snok=%d\n", 724 + ureq->buf, length, &ureq->dma, 725 725 ureq->no_interrupt, ureq->zero, ureq->short_not_ok); 726 726 727 727 maxreq = get_ep_limit(hs_ep); ··· 789 789 dma_reg = dir_in ? DIEPDMA(index) : DOEPDMA(index); 790 790 writel(ureq->dma, hsotg->regs + dma_reg); 791 791 792 - dev_dbg(hsotg->dev, "%s: 0x%08x => 0x%08x\n", 793 - __func__, ureq->dma, dma_reg); 792 + dev_dbg(hsotg->dev, "%s: 0x%pad => 0x%08x\n", 793 + __func__, &ureq->dma, dma_reg); 794 794 } 795 795 796 796 ctrl |= DxEPCTL_EPEna; /* ensure ep enabled */ ··· 1186 1186 static void s3c_hsotg_disconnect(struct s3c_hsotg *hsotg); 1187 1187 1188 1188 /** 1189 + * s3c_hsotg_stall_ep0 - stall ep0 1190 + * @hsotg: The device state 1191 + * 1192 + * Set stall for ep0 as response for setup request. 1193 + */ 1194 + static void s3c_hsotg_stall_ep0(struct s3c_hsotg *hsotg) { 1195 + struct s3c_hsotg_ep *ep0 = &hsotg->eps[0]; 1196 + u32 reg; 1197 + u32 ctrl; 1198 + 1199 + dev_dbg(hsotg->dev, "ep0 stall (dir=%d)\n", ep0->dir_in); 1200 + reg = (ep0->dir_in) ? DIEPCTL0 : DOEPCTL0; 1201 + 1202 + /* 1203 + * DxEPCTL_Stall will be cleared by EP once it has 1204 + * taken effect, so no need to clear later. 
1205 + */ 1206 + 1207 + ctrl = readl(hsotg->regs + reg); 1208 + ctrl |= DxEPCTL_Stall; 1209 + ctrl |= DxEPCTL_CNAK; 1210 + writel(ctrl, hsotg->regs + reg); 1211 + 1212 + dev_dbg(hsotg->dev, 1213 + "written DxEPCTL=0x%08x to %08x (DxEPCTL=0x%08x)\n", 1214 + ctrl, reg, readl(hsotg->regs + reg)); 1215 + 1216 + /* 1217 + * complete won't be called, so we enqueue 1218 + * setup request here 1219 + */ 1220 + s3c_hsotg_enqueue_setup(hsotg); 1221 + } 1222 + 1223 + /** 1189 1224 * s3c_hsotg_process_control - process a control request 1190 1225 * @hsotg: The device state 1191 1226 * @ctrl: The control request received ··· 1297 1262 * so respond with a STALL for the status stage to indicate failure. 1298 1263 */ 1299 1264 1300 - if (ret < 0) { 1301 - u32 reg; 1302 - u32 ctrl; 1303 - 1304 - dev_dbg(hsotg->dev, "ep0 stall (dir=%d)\n", ep0->dir_in); 1305 - reg = (ep0->dir_in) ? DIEPCTL0 : DOEPCTL0; 1306 - 1307 - /* 1308 - * DxEPCTL_Stall will be cleared by EP once it has 1309 - * taken effect, so no need to clear later. 1310 - */ 1311 - 1312 - ctrl = readl(hsotg->regs + reg); 1313 - ctrl |= DxEPCTL_Stall; 1314 - ctrl |= DxEPCTL_CNAK; 1315 - writel(ctrl, hsotg->regs + reg); 1316 - 1317 - dev_dbg(hsotg->dev, 1318 - "written DxEPCTL=0x%08x to %08x (DxEPCTL=0x%08x)\n", 1319 - ctrl, reg, readl(hsotg->regs + reg)); 1320 - 1321 - /* 1322 - * don't believe we need to anything more to get the EP 1323 - * to reply with a STALL packet 1324 - */ 1325 - 1326 - /* 1327 - * complete won't be called, so we enqueue 1328 - * setup request here 1329 - */ 1330 - s3c_hsotg_enqueue_setup(hsotg); 1331 - } 1265 + if (ret < 0) 1266 + s3c_hsotg_stall_ep0(hsotg); 1332 1267 } 1333 1268 1334 1269 /** ··· 1493 1488 * note, we might over-write the buffer end by 3 bytes depending on 1494 1489 * alignment of the data. 
1495 1490 */ 1496 - readsl(fifo, hs_req->req.buf + read_ptr, to_read); 1491 + ioread32_rep(fifo, hs_req->req.buf + read_ptr, to_read); 1497 1492 } 1498 1493 1499 1494 /** ··· 2837 2832 2838 2833 dev_info(hs->dev, "%s(ep %p %s, %d)\n", __func__, ep, ep->name, value); 2839 2834 2835 + if (index == 0) { 2836 + if (value) 2837 + s3c_hsotg_stall_ep0(hs); 2838 + else 2839 + dev_warn(hs->dev, 2840 + "%s: can't clear halt on ep0\n", __func__); 2841 + return 0; 2842 + } 2843 + 2840 2844 /* write both IN and OUT control registers */ 2841 2845 2842 2846 epreg = DIEPCTL(index); ··· 3774 3760 return 0; 3775 3761 } 3776 3762 3777 - #if 1 3778 - #define s3c_hsotg_suspend NULL 3779 - #define s3c_hsotg_resume NULL 3780 - #endif 3763 + static int s3c_hsotg_suspend(struct platform_device *pdev, pm_message_t state) 3764 + { 3765 + struct s3c_hsotg *hsotg = platform_get_drvdata(pdev); 3766 + unsigned long flags; 3767 + int ret = 0; 3768 + 3769 + if (hsotg->driver) 3770 + dev_info(hsotg->dev, "suspending usb gadget %s\n", 3771 + hsotg->driver->driver.name); 3772 + 3773 + spin_lock_irqsave(&hsotg->lock, flags); 3774 + s3c_hsotg_disconnect(hsotg); 3775 + s3c_hsotg_phy_disable(hsotg); 3776 + hsotg->gadget.speed = USB_SPEED_UNKNOWN; 3777 + spin_unlock_irqrestore(&hsotg->lock, flags); 3778 + 3779 + if (hsotg->driver) { 3780 + int ep; 3781 + for (ep = 0; ep < hsotg->num_of_eps; ep++) 3782 + s3c_hsotg_ep_disable(&hsotg->eps[ep].ep); 3783 + 3784 + ret = regulator_bulk_disable(ARRAY_SIZE(hsotg->supplies), 3785 + hsotg->supplies); 3786 + } 3787 + 3788 + return ret; 3789 + } 3790 + 3791 + static int s3c_hsotg_resume(struct platform_device *pdev) 3792 + { 3793 + struct s3c_hsotg *hsotg = platform_get_drvdata(pdev); 3794 + unsigned long flags; 3795 + int ret = 0; 3796 + 3797 + if (hsotg->driver) { 3798 + dev_info(hsotg->dev, "resuming usb gadget %s\n", 3799 + hsotg->driver->driver.name); 3800 + ret = regulator_bulk_enable(ARRAY_SIZE(hsotg->supplies), 3801 + hsotg->supplies); 3802 + } 3803 + 3804 + 
spin_lock_irqsave(&hsotg->lock, flags); 3805 + hsotg->last_rst = jiffies; 3806 + s3c_hsotg_phy_enable(hsotg); 3807 + s3c_hsotg_core_init(hsotg); 3808 + spin_unlock_irqrestore(&hsotg->lock, flags); 3809 + 3810 + return ret; 3811 + } 3781 3812 3782 3813 #ifdef CONFIG_OF 3783 3814 static const struct of_device_id s3c_hsotg_of_ids[] = {
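Among the s3c-hsotg changes above is a switch from the ARM-specific `writesl()`/`readsl()` to the portable `iowrite32_rep()`/`ioread32_rep()`. Both families push every word to the *same* FIFO register rather than advancing the destination like memcpy. A userspace sketch of that semantic against a fake register:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Fake 32-bit FIFO "register": every word lands on the same address. */
static volatile uint32_t fake_fifo;
static unsigned int fifo_writes;

/* Sketch of iowrite32_rep(addr, buf, count): unlike memcpy, the
 * destination never advances -- each word is pushed into one register. */
static void iowrite32_rep_sketch(volatile uint32_t *addr,
				 const void *buf, size_t count)
{
	const uint32_t *p = buf;

	while (count--) {
		*addr = *p++;
		fifo_writes++;
	}
}
```

The behavior is identical on ARM; the portable accessors simply also build (and honor I/O ordering rules) on other architectures.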
-1
drivers/usb/gadget/s3c-hsudc.c
··· 1344 1344 1345 1345 return 0; 1346 1346 err_add_udc: 1347 - err_add_device: 1348 1347 clk_disable(hsudc->uclk); 1349 1348 err_res: 1350 1349 if (!IS_ERR_OR_NULL(hsudc->transceiver))
+1 -1
drivers/usb/gadget/tcm_usb_gadget.c
··· 1613 1613 return ERR_PTR(-ENOMEM); 1614 1614 } 1615 1615 tport->tport_wwpn = wwpn; 1616 - snprintf(tport->tport_name, sizeof(tport->tport_name), wnn_name); 1616 + snprintf(tport->tport_name, sizeof(tport->tport_name), "%s", wnn_name); 1617 1617 return &tport->tport_wwn; 1618 1618 } 1619 1619
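The tcm_usb_gadget one-liner is a classic format-string fix: passing caller-controlled text (`wnn_name`) as the format means any `%` in a WWN name would be parsed as a conversion specifier, consuming nonexistent arguments (undefined behavior). Using `"%s"` treats the name strictly as data. A quick userspace demonstration of the safe form:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Copy name into a fixed buffer the safe way: name is data, never
 * the format.  snprintf(dst, sz, name) would instead interpret any
 * '%' inside name as a conversion. */
static void copy_name(char *dst, size_t sz, const char *name)
{
	snprintf(dst, sz, "%s", name);
}
```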
+66 -35
drivers/usb/gadget/u_ether.c
··· 48 48 49 49 #define UETH__VERSION "29-May-2008" 50 50 51 + #define GETHER_NAPI_WEIGHT 32 52 + 51 53 struct eth_dev { 52 54 /* lock is held while accessing port_usb 53 55 */ ··· 74 72 struct sk_buff_head *list); 75 73 76 74 struct work_struct work; 75 + struct napi_struct rx_napi; 77 76 78 77 unsigned long todo; 79 78 #define WORK_RX_MEMORY 0 ··· 256 253 DBG(dev, "rx submit --> %d\n", retval); 257 254 if (skb) 258 255 dev_kfree_skb_any(skb); 259 - spin_lock_irqsave(&dev->req_lock, flags); 260 - list_add(&req->list, &dev->rx_reqs); 261 - spin_unlock_irqrestore(&dev->req_lock, flags); 262 256 } 263 257 return retval; 264 258 } 265 259 266 260 static void rx_complete(struct usb_ep *ep, struct usb_request *req) 267 261 { 268 - struct sk_buff *skb = req->context, *skb2; 262 + struct sk_buff *skb = req->context; 269 263 struct eth_dev *dev = ep->driver_data; 270 264 int status = req->status; 265 + bool rx_queue = 0; 271 266 272 267 switch (status) { 273 268 ··· 289 288 } else { 290 289 skb_queue_tail(&dev->rx_frames, skb); 291 290 } 292 - skb = NULL; 293 - 294 - skb2 = skb_dequeue(&dev->rx_frames); 295 - while (skb2) { 296 - if (status < 0 297 - || ETH_HLEN > skb2->len 298 - || skb2->len > VLAN_ETH_FRAME_LEN) { 299 - dev->net->stats.rx_errors++; 300 - dev->net->stats.rx_length_errors++; 301 - DBG(dev, "rx length %d\n", skb2->len); 302 - dev_kfree_skb_any(skb2); 303 - goto next_frame; 304 - } 305 - skb2->protocol = eth_type_trans(skb2, dev->net); 306 - dev->net->stats.rx_packets++; 307 - dev->net->stats.rx_bytes += skb2->len; 308 - 309 - /* no buffer copies needed, unless hardware can't 310 - * use skb buffers. 
311 - */ 312 - status = netif_rx(skb2); 313 - next_frame: 314 - skb2 = skb_dequeue(&dev->rx_frames); 315 - } 291 + if (!status) 292 + rx_queue = 1; 316 293 break; 317 294 318 295 /* software-driven interface shutdown */ ··· 313 334 /* FALLTHROUGH */ 314 335 315 336 default: 337 + rx_queue = 1; 338 + dev_kfree_skb_any(skb); 316 339 dev->net->stats.rx_errors++; 317 340 DBG(dev, "rx status %d\n", status); 318 341 break; 319 342 } 320 343 321 - if (skb) 322 - dev_kfree_skb_any(skb); 323 - if (!netif_running(dev->net)) { 324 344 clean: 325 345 spin_lock(&dev->req_lock); 326 346 list_add(&req->list, &dev->rx_reqs); 327 347 spin_unlock(&dev->req_lock); 328 - req = NULL; 329 - } 330 - if (req) 331 - rx_submit(dev, req, GFP_ATOMIC); 348 + 349 + if (rx_queue && likely(napi_schedule_prep(&dev->rx_napi))) 350 + __napi_schedule(&dev->rx_napi); 332 351 } 333 352 334 353 static int prealloc(struct list_head *list, struct usb_ep *ep, unsigned n) ··· 391 414 { 392 415 struct usb_request *req; 393 416 unsigned long flags; 417 + int rx_counts = 0; 394 418 395 419 /* fill unused rxq slots with some skb */ 396 420 spin_lock_irqsave(&dev->req_lock, flags); 397 421 while (!list_empty(&dev->rx_reqs)) { 422 + 423 + if (++rx_counts > qlen(dev->gadget, dev->qmult)) 424 + break; 425 + 398 426 req = container_of(dev->rx_reqs.next, 399 427 struct usb_request, list); 400 428 list_del_init(&req->list); 401 429 spin_unlock_irqrestore(&dev->req_lock, flags); 402 430 403 431 if (rx_submit(dev, req, gfp_flags) < 0) { 432 + spin_lock_irqsave(&dev->req_lock, flags); 433 + list_add(&req->list, &dev->rx_reqs); 434 + spin_unlock_irqrestore(&dev->req_lock, flags); 404 435 defer_kevent(dev, WORK_RX_MEMORY); 405 436 return; 406 437 } ··· 416 431 spin_lock_irqsave(&dev->req_lock, flags); 417 432 } 418 433 spin_unlock_irqrestore(&dev->req_lock, flags); 434 + } 435 + 436 + static int gether_poll(struct napi_struct *napi, int budget) 437 + { 438 + struct eth_dev *dev = container_of(napi, struct eth_dev, 
rx_napi); 439 + struct sk_buff *skb; 440 + unsigned int work_done = 0; 441 + int status = 0; 442 + 443 + while ((skb = skb_dequeue(&dev->rx_frames))) { 444 + if (status < 0 445 + || ETH_HLEN > skb->len 446 + || skb->len > VLAN_ETH_FRAME_LEN) { 447 + dev->net->stats.rx_errors++; 448 + dev->net->stats.rx_length_errors++; 449 + DBG(dev, "rx length %d\n", skb->len); 450 + dev_kfree_skb_any(skb); 451 + continue; 452 + } 453 + skb->protocol = eth_type_trans(skb, dev->net); 454 + dev->net->stats.rx_packets++; 455 + dev->net->stats.rx_bytes += skb->len; 456 + 457 + status = netif_rx_ni(skb); 458 + } 459 + 460 + if (netif_running(dev->net)) { 461 + rx_fill(dev, GFP_KERNEL); 462 + work_done++; 463 + } 464 + 465 + if (work_done < budget) 466 + napi_complete(&dev->rx_napi); 467 + 468 + return work_done; 419 469 } 420 470 421 471 static void eth_work(struct work_struct *work) ··· 645 625 /* and open the tx floodgates */ 646 626 atomic_set(&dev->tx_qlen, 0); 647 627 netif_wake_queue(dev->net); 628 + napi_enable(&dev->rx_napi); 648 629 } 649 630 650 631 static int eth_open(struct net_device *net) ··· 672 651 unsigned long flags; 673 652 674 653 VDBG(dev, "%s\n", __func__); 654 + napi_disable(&dev->rx_napi); 675 655 netif_stop_queue(net); 676 656 677 657 DBG(dev, "stop stats: rx/tx %ld/%ld, errs %ld/%ld\n", ··· 790 768 return ERR_PTR(-ENOMEM); 791 769 792 770 dev = netdev_priv(net); 771 + netif_napi_add(net, &dev->rx_napi, gether_poll, GETHER_NAPI_WEIGHT); 793 772 spin_lock_init(&dev->lock); 794 773 spin_lock_init(&dev->req_lock); 795 774 INIT_WORK(&dev->work, eth_work); ··· 853 830 return ERR_PTR(-ENOMEM); 854 831 855 832 dev = netdev_priv(net); 833 + netif_napi_add(net, &dev->rx_napi, gether_poll, GETHER_NAPI_WEIGHT); 856 834 spin_lock_init(&dev->lock); 857 835 spin_lock_init(&dev->req_lock); 858 836 INIT_WORK(&dev->work, eth_work); ··· 1137 1113 { 1138 1114 struct eth_dev *dev = link->ioport; 1139 1115 struct usb_request *req; 1116 + struct sk_buff *skb; 1140 1117 1141 1118 
WARN_ON(!dev); 1142 1119 if (!dev) ··· 1164 1139 spin_lock(&dev->req_lock); 1165 1140 } 1166 1141 spin_unlock(&dev->req_lock); 1142 + 1143 + spin_lock(&dev->rx_frames.lock); 1144 + while ((skb = __skb_dequeue(&dev->rx_frames))) 1145 + dev_kfree_skb_any(skb); 1146 + spin_unlock(&dev->rx_frames.lock); 1147 + 1167 1148 link->in_ep->driver_data = NULL; 1168 1149 link->in_ep->desc = NULL; 1169 1150
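The u_ether conversion moves frame delivery out of the completion handler into a NAPI-style poll: completions only queue skbs on `rx_frames` and schedule, while the poll drains the queue and stops polling (`napi_complete`) when it finishes under budget. The canonical contract can be sketched with a plain FIFO in userspace C (the hunk's `gether_poll()` counts work slightly differently, and everything here — queue, flags, weight — is simplified):

```c
#include <assert.h>

#define NAPI_WEIGHT 32	/* mirrors GETHER_NAPI_WEIGHT */

static int frames[256];
static int head, tail;
static int scheduled;	/* analogue of the NAPI scheduled bit */
static int delivered;

/* Completion context: just queue the frame and schedule the poller. */
static void rx_complete_sketch(int frame)
{
	frames[tail++] = frame;
	scheduled = 1;		/* __napi_schedule() analogue */
}

/* Drain up to budget frames; finishing under budget means the
 * queue is empty, so stop polling like napi_complete() does. */
static int poll_sketch(int budget)
{
	int work_done = 0;

	while (head < tail && work_done < budget) {
		delivered += frames[head++];
		work_done++;
	}
	if (work_done < budget)
		scheduled = 0;
	return work_done;
}
```

Using the full budget leaves `scheduled` set, so the poller is invoked again — that is how a burst larger than the weight is amortized across softirq rounds instead of starving other work.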
+13 -17
drivers/usb/gadget/u_fs.h
··· 65 65 mutex_unlock(&ffs_lock); 66 66 } 67 67 68 - struct ffs_dev *ffs_alloc_dev(void); 69 68 int ffs_name_dev(struct ffs_dev *dev, const char *name); 70 69 int ffs_single_dev(struct ffs_dev *dev); 71 - void ffs_free_dev(struct ffs_dev *dev); 72 70 73 71 struct ffs_epfile; 74 72 struct ffs_function; ··· 123 125 * setup. If this state is set read/write on ep0 return 124 126 * -EIDRM. This state is only set when adding event. 125 127 */ 126 - FFS_SETUP_CANCELED 128 + FFS_SETUP_CANCELLED 127 129 }; 128 130 129 131 struct ffs_data { ··· 154 156 */ 155 157 struct usb_request *ep0req; /* P: mutex */ 156 158 struct completion ep0req_completion; /* P: mutex */ 157 - int ep0req_status; /* P: mutex */ 158 159 159 160 /* reference counter */ 160 161 atomic_t ref; ··· 165 168 166 169 /* 167 170 * Possible transitions: 168 - * + FFS_NO_SETUP -> FFS_SETUP_PENDING -- P: ev.waitq.lock 171 + * + FFS_NO_SETUP -> FFS_SETUP_PENDING -- P: ev.waitq.lock 169 172 * happens only in ep0 read which is P: mutex 170 - * + FFS_SETUP_PENDING -> FFS_NO_SETUP -- P: ev.waitq.lock 173 + * + FFS_SETUP_PENDING -> FFS_NO_SETUP -- P: ev.waitq.lock 171 174 * happens only in ep0 i/o which is P: mutex 172 - * + FFS_SETUP_PENDING -> FFS_SETUP_CANCELED -- P: ev.waitq.lock 173 - * + FFS_SETUP_CANCELED -> FFS_NO_SETUP -- cmpxchg 175 + * + FFS_SETUP_PENDING -> FFS_SETUP_CANCELLED -- P: ev.waitq.lock 176 + * + FFS_SETUP_CANCELLED -> FFS_NO_SETUP -- cmpxchg 177 + * 178 + * This field should never be accessed directly and instead 179 + * ffs_setup_state_clear_cancelled function should be used. 174 180 */ 175 181 enum ffs_setup_state setup_state; 176 - 177 - #define FFS_SETUP_STATE(ffs) \ 178 - ((enum ffs_setup_state)cmpxchg(&(ffs)->setup_state, \ 179 - FFS_SETUP_CANCELED, FFS_NO_SETUP)) 180 182 181 183 /* Events & such. */ 182 184 struct { ··· 206 210 207 211 /* filled by __ffs_data_got_descs() */ 208 212 /* 209 - * Real descriptors are 16 bytes after raw_descs (so you need 210 - * to skip 16 bytes (ie. 
ffs->raw_descs + 16) to get to the 211 - * first full speed descriptor). raw_descs_length and 212 - * raw_fs_descs_length do not have those 16 bytes added. 213 + * raw_descs is what you kfree, real_descs points inside of raw_descs, 214 + * where full speed, high speed and super speed descriptors start. 215 + * real_descs_length is the length of all those descriptors. 213 216 */ 217 + const void *raw_descs_data; 214 218 const void *raw_descs; 215 219 unsigned raw_descs_length; 216 - unsigned raw_fs_descs_length; 217 220 unsigned fs_descs_count; 218 221 unsigned hs_descs_count; 222 + unsigned ss_descs_count; 219 223 220 224 unsigned short strings_count; 221 225 unsigned short interfaces_count;
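The u_fs.h hunk retires the `FFS_SETUP_STATE()` cmpxchg macro in favor of a named helper (the comment now points at `ffs_setup_state_clear_cancelled`). The underlying trick — atomically collapse CANCELLED back to NO_SETUP while reporting the prior state, so exactly one reader observes the cancellation — maps directly onto C11 atomics. A userspace sketch (enum names shortened from the kernel's):

```c
#include <assert.h>
#include <stdatomic.h>

enum ffs_state { NO_SETUP, SETUP_PENDING, SETUP_CANCELLED };

/* Atomically: if *st == CANCELLED, reset it to NO_SETUP; either way
 * return the value that was there.  compare_exchange writes the
 * observed value back into `expected` on failure, so `expected`
 * holds the prior state in both branches. */
static enum ffs_state clear_cancelled(_Atomic int *st)
{
	int expected = SETUP_CANCELLED;

	atomic_compare_exchange_strong(st, &expected, NO_SETUP);
	return (enum ffs_state)expected;
}
```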
-1
drivers/usb/host/Kconfig
··· 584 584 config USB_U132_HCD 585 585 tristate "Elan U132 Adapter Host Controller" 586 586 depends on USB_FTDI_ELAN 587 - default M 588 587 help 589 588 The U132 adapter is a USB to CardBus adapter specifically designed 590 589 for PC cards that contain an OHCI host controller. Typical PC cards
+156 -26
drivers/usb/host/ehci-platform.c
··· 3 3 * 4 4 * Copyright 2007 Steven Brown <sbrown@cortland.com> 5 5 * Copyright 2010-2012 Hauke Mehrtens <hauke@hauke-m.de> 6 + * Copyright 2014 Hans de Goede <hdegoede@redhat.com> 6 7 * 7 8 * Derived from the ohci-ssb driver 8 9 * Copyright 2007 Michael Buesch <m@bues.ch> ··· 19 18 * 20 19 * Licensed under the GNU/GPL. See COPYING for details. 21 20 */ 21 + #include <linux/clk.h> 22 22 #include <linux/dma-mapping.h> 23 23 #include <linux/err.h> 24 24 #include <linux/kernel.h> ··· 27 25 #include <linux/io.h> 28 26 #include <linux/module.h> 29 27 #include <linux/of.h> 28 + #include <linux/phy/phy.h> 30 29 #include <linux/platform_device.h> 31 30 #include <linux/usb.h> 32 31 #include <linux/usb/hcd.h> ··· 36 33 #include "ehci.h" 37 34 38 35 #define DRIVER_DESC "EHCI generic platform driver" 36 + #define EHCI_MAX_CLKS 3 37 + #define hcd_to_ehci_priv(h) ((struct ehci_platform_priv *)hcd_to_ehci(h)->priv) 38 + 39 + struct ehci_platform_priv { 40 + struct clk *clks[EHCI_MAX_CLKS]; 41 + struct phy *phy; 42 + }; 39 43 40 44 static const char hcd_name[] = "ehci-platform"; 41 45 ··· 55 45 56 46 hcd->has_tt = pdata->has_tt; 57 47 ehci->has_synopsys_hc_bug = pdata->has_synopsys_hc_bug; 58 - ehci->big_endian_desc = pdata->big_endian_desc; 59 - ehci->big_endian_mmio = pdata->big_endian_mmio; 60 48 61 49 if (pdata->pre_setup) { 62 50 retval = pdata->pre_setup(hcd); ··· 72 64 return 0; 73 65 } 74 66 67 + static int ehci_platform_power_on(struct platform_device *dev) 68 + { 69 + struct usb_hcd *hcd = platform_get_drvdata(dev); 70 + struct ehci_platform_priv *priv = hcd_to_ehci_priv(hcd); 71 + int clk, ret; 72 + 73 + for (clk = 0; clk < EHCI_MAX_CLKS && priv->clks[clk]; clk++) { 74 + ret = clk_prepare_enable(priv->clks[clk]); 75 + if (ret) 76 + goto err_disable_clks; 77 + } 78 + 79 + if (priv->phy) { 80 + ret = phy_init(priv->phy); 81 + if (ret) 82 + goto err_disable_clks; 83 + 84 + ret = phy_power_on(priv->phy); 85 + if (ret) 86 + goto err_exit_phy; 87 + } 88 + 89 + return 0; 90 
+ 91 + err_exit_phy: 92 + phy_exit(priv->phy); 93 + err_disable_clks: 94 + while (--clk >= 0) 95 + clk_disable_unprepare(priv->clks[clk]); 96 + 97 + return ret; 98 + } 99 + 100 + static void ehci_platform_power_off(struct platform_device *dev) 101 + { 102 + struct usb_hcd *hcd = platform_get_drvdata(dev); 103 + struct ehci_platform_priv *priv = hcd_to_ehci_priv(hcd); 104 + int clk; 105 + 106 + if (priv->phy) { 107 + phy_power_off(priv->phy); 108 + phy_exit(priv->phy); 109 + } 110 + 111 + for (clk = EHCI_MAX_CLKS - 1; clk >= 0; clk--) 112 + if (priv->clks[clk]) 113 + clk_disable_unprepare(priv->clks[clk]); 114 + } 115 + 75 116 static struct hc_driver __read_mostly ehci_platform_hc_driver; 76 117 77 118 static const struct ehci_driver_overrides platform_overrides __initconst = { 78 - .reset = ehci_platform_reset, 119 + .reset = ehci_platform_reset, 120 + .extra_priv_size = sizeof(struct ehci_platform_priv), 79 121 }; 80 122 81 - static struct usb_ehci_pdata ehci_platform_defaults; 123 + static struct usb_ehci_pdata ehci_platform_defaults = { 124 + .power_on = ehci_platform_power_on, 125 + .power_suspend = ehci_platform_power_off, 126 + .power_off = ehci_platform_power_off, 127 + }; 82 128 83 129 static int ehci_platform_probe(struct platform_device *dev) 84 130 { 85 131 struct usb_hcd *hcd; 86 132 struct resource *res_mem; 87 - struct usb_ehci_pdata *pdata; 88 - int irq; 89 - int err; 133 + struct usb_ehci_pdata *pdata = dev_get_platdata(&dev->dev); 134 + struct ehci_platform_priv *priv; 135 + struct ehci_hcd *ehci; 136 + int err, irq, clk = 0; 90 137 91 138 if (usb_disabled()) 92 139 return -ENODEV; 93 140 94 141 /* 95 - * use reasonable defaults so platforms don't have to provide these. 96 - * with DT probing on ARM, none of these are set. 142 + * Use reasonable defaults so platforms don't have to provide these 143 + * with DT probing on ARM. 
97 144 */ 98 - if (!dev_get_platdata(&dev->dev)) 99 - dev->dev.platform_data = &ehci_platform_defaults; 145 + if (!pdata) 146 + pdata = &ehci_platform_defaults; 100 147 101 148 err = dma_coerce_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32)); 102 149 if (err) 103 150 return err; 104 - 105 - pdata = dev_get_platdata(&dev->dev); 106 151 107 152 irq = platform_get_irq(dev, 0); 108 153 if (irq < 0) { ··· 168 107 return -ENXIO; 169 108 } 170 109 110 + hcd = usb_create_hcd(&ehci_platform_hc_driver, &dev->dev, 111 + dev_name(&dev->dev)); 112 + if (!hcd) 113 + return -ENOMEM; 114 + 115 + platform_set_drvdata(dev, hcd); 116 + dev->dev.platform_data = pdata; 117 + priv = hcd_to_ehci_priv(hcd); 118 + ehci = hcd_to_ehci(hcd); 119 + 120 + if (pdata == &ehci_platform_defaults && dev->dev.of_node) { 121 + if (of_property_read_bool(dev->dev.of_node, "big-endian-regs")) 122 + ehci->big_endian_mmio = 1; 123 + 124 + if (of_property_read_bool(dev->dev.of_node, "big-endian-desc")) 125 + ehci->big_endian_desc = 1; 126 + 127 + if (of_property_read_bool(dev->dev.of_node, "big-endian")) 128 + ehci->big_endian_mmio = ehci->big_endian_desc = 1; 129 + 130 + priv->phy = devm_phy_get(&dev->dev, "usb"); 131 + if (IS_ERR(priv->phy)) { 132 + err = PTR_ERR(priv->phy); 133 + if (err == -EPROBE_DEFER) 134 + goto err_put_hcd; 135 + priv->phy = NULL; 136 + } 137 + 138 + for (clk = 0; clk < EHCI_MAX_CLKS; clk++) { 139 + priv->clks[clk] = of_clk_get(dev->dev.of_node, clk); 140 + if (IS_ERR(priv->clks[clk])) { 141 + err = PTR_ERR(priv->clks[clk]); 142 + if (err == -EPROBE_DEFER) 143 + goto err_put_clks; 144 + priv->clks[clk] = NULL; 145 + break; 146 + } 147 + } 148 + } 149 + 150 + if (pdata->big_endian_desc) 151 + ehci->big_endian_desc = 1; 152 + if (pdata->big_endian_mmio) 153 + ehci->big_endian_mmio = 1; 154 + 155 + #ifndef CONFIG_USB_EHCI_BIG_ENDIAN_MMIO 156 + if (ehci->big_endian_mmio) { 157 + dev_err(&dev->dev, 158 + "Error: CONFIG_USB_EHCI_BIG_ENDIAN_MMIO not set\n"); 159 + err = -EINVAL; 160 + goto 
err_put_clks; 161 + } 162 + #endif 163 + #ifndef CONFIG_USB_EHCI_BIG_ENDIAN_DESC 164 + if (ehci->big_endian_desc) { 165 + dev_err(&dev->dev, 166 + "Error: CONFIG_USB_EHCI_BIG_ENDIAN_DESC not set\n"); 167 + err = -EINVAL; 168 + goto err_put_clks; 169 + } 170 + #endif 171 + 171 172 if (pdata->power_on) { 172 173 err = pdata->power_on(dev); 173 174 if (err < 0) 174 - return err; 175 - } 176 - 177 - hcd = usb_create_hcd(&ehci_platform_hc_driver, &dev->dev, 178 - dev_name(&dev->dev)); 179 - if (!hcd) { 180 - err = -ENOMEM; 181 - goto err_power; 175 + goto err_put_clks; 182 176 } 183 177 184 178 hcd->rsrc_start = res_mem->start; ··· 242 126 hcd->regs = devm_ioremap_resource(&dev->dev, res_mem); 243 127 if (IS_ERR(hcd->regs)) { 244 128 err = PTR_ERR(hcd->regs); 245 - goto err_put_hcd; 129 + goto err_power; 246 130 } 247 131 err = usb_add_hcd(hcd, irq, IRQF_SHARED); 248 132 if (err) 249 - goto err_put_hcd; 133 + goto err_power; 250 134 251 135 device_wakeup_enable(hcd->self.controller); 252 136 platform_set_drvdata(dev, hcd); 253 137 254 138 return err; 255 139 256 - err_put_hcd: 257 - usb_put_hcd(hcd); 258 140 err_power: 259 141 if (pdata->power_off) 260 142 pdata->power_off(dev); 143 + err_put_clks: 144 + while (--clk >= 0) 145 + clk_put(priv->clks[clk]); 146 + err_put_hcd: 147 + if (pdata == &ehci_platform_defaults) 148 + dev->dev.platform_data = NULL; 149 + 150 + usb_put_hcd(hcd); 261 151 262 152 return err; 263 153 } ··· 272 150 { 273 151 struct usb_hcd *hcd = platform_get_drvdata(dev); 274 152 struct usb_ehci_pdata *pdata = dev_get_platdata(&dev->dev); 153 + struct ehci_platform_priv *priv = hcd_to_ehci_priv(hcd); 154 + int clk; 275 155 276 156 usb_remove_hcd(hcd); 277 - usb_put_hcd(hcd); 278 157 279 158 if (pdata->power_off) 280 159 pdata->power_off(dev); 160 + 161 + for (clk = 0; clk < EHCI_MAX_CLKS && priv->clks[clk]; clk++) 162 + clk_put(priv->clks[clk]); 163 + 164 + usb_put_hcd(hcd); 281 165 282 166 if (pdata == &ehci_platform_defaults) 283 167 
dev->dev.platform_data = NULL; ··· 335 207 static const struct of_device_id vt8500_ehci_ids[] = { 336 208 { .compatible = "via,vt8500-ehci", }, 337 209 { .compatible = "wm,prizm-ehci", }, 210 + { .compatible = "generic-ehci", }, 338 211 {} 339 212 }; 213 + MODULE_DEVICE_TABLE(of, vt8500_ehci_ids); 340 214 341 215 static const struct platform_device_id ehci_platform_table[] = { 342 216 { "ehci-platform", 0 },
-4
drivers/usb/host/ehci-tegra.c
··· 38 38 39 39 #include "ehci.h" 40 40 41 - #define TEGRA_USB_BASE 0xC5000000 42 - #define TEGRA_USB2_BASE 0xC5004000 43 - #define TEGRA_USB3_BASE 0xC5008000 44 - 45 41 #define PORT_WAKE_BITS (PORT_WKOC_E|PORT_WKDISC_E|PORT_WKCONN_E) 46 42 47 43 #define TEGRA_USB_DMA_ALIGN 32
+40 -2
drivers/usb/host/hwa-hc.c
··· 261 261 dev_err(dev, "cannot listen to notifications: %d\n", result); 262 262 goto error_stop; 263 263 } 264 + /* 265 + * If WUSB_QUIRK_ALEREON_HWA_DISABLE_XFER_NOTIFICATIONS is set, 266 + * disable transfer notifications. 267 + */ 268 + if (hwahc->wa.quirks & 269 + WUSB_QUIRK_ALEREON_HWA_DISABLE_XFER_NOTIFICATIONS) { 270 + struct usb_host_interface *cur_altsetting = 271 + hwahc->wa.usb_iface->cur_altsetting; 272 + 273 + result = usb_control_msg(hwahc->wa.usb_dev, 274 + usb_sndctrlpipe(hwahc->wa.usb_dev, 0), 275 + WA_REQ_ALEREON_DISABLE_XFER_NOTIFICATIONS, 276 + USB_DIR_OUT | USB_TYPE_VENDOR | 277 + USB_RECIP_INTERFACE, 278 + WA_REQ_ALEREON_FEATURE_SET, 279 + cur_altsetting->desc.bInterfaceNumber, 280 + NULL, 0, 281 + USB_CTRL_SET_TIMEOUT); 282 + /* 283 + * If we successfully sent the control message, start DTI here 284 + * because no transfer notifications will be received which is 285 + * where DTI is normally started. 286 + */ 287 + if (result == 0) 288 + result = wa_dti_start(&hwahc->wa); 289 + else 290 + result = 0; /* OK. Continue normally. 
*/ 291 + 292 + if (result < 0) { 293 + dev_err(dev, "cannot start DTI: %d\n", result); 294 + goto error_dti_start; 295 + } 296 + } 297 + 264 298 return result; 265 299 300 + error_dti_start: 301 + wa_nep_disarm(&hwahc->wa); 266 302 error_stop: 267 303 __wa_clear_feature(&hwahc->wa, WA_ENABLE); 268 304 return result; ··· 863 827 static struct usb_device_id hwahc_id_table[] = { 864 828 /* Alereon 5310 */ 865 829 { USB_DEVICE_AND_INTERFACE_INFO(0x13dc, 0x5310, 0xe0, 0x02, 0x01), 866 - .driver_info = WUSB_QUIRK_ALEREON_HWA_CONCAT_ISOC }, 830 + .driver_info = WUSB_QUIRK_ALEREON_HWA_CONCAT_ISOC | 831 + WUSB_QUIRK_ALEREON_HWA_DISABLE_XFER_NOTIFICATIONS }, 867 832 /* Alereon 5611 */ 868 833 { USB_DEVICE_AND_INTERFACE_INFO(0x13dc, 0x5611, 0xe0, 0x02, 0x01), 869 - .driver_info = WUSB_QUIRK_ALEREON_HWA_CONCAT_ISOC }, 834 + .driver_info = WUSB_QUIRK_ALEREON_HWA_CONCAT_ISOC | 835 + WUSB_QUIRK_ALEREON_HWA_DISABLE_XFER_NOTIFICATIONS }, 870 836 /* FIXME: use class labels for this */ 871 837 { USB_INTERFACE_INFO(0xe0, 0x02, 0x01), }, 872 838 {},
+173 -26
drivers/usb/host/ohci-platform.c
··· 3 3 * 4 4 * Copyright 2007 Michael Buesch <m@bues.ch> 5 5 * Copyright 2011-2012 Hauke Mehrtens <hauke@hauke-m.de> 6 + * Copyright 2014 Hans de Goede <hdegoede@redhat.com> 6 7 * 7 8 * Derived from the OCHI-SSB driver 8 9 * Derived from the OHCI-PCI driver ··· 15 14 * Licensed under the GNU/GPL. See COPYING for details. 16 15 */ 17 16 17 + #include <linux/clk.h> 18 + #include <linux/dma-mapping.h> 18 19 #include <linux/hrtimer.h> 19 20 #include <linux/io.h> 20 21 #include <linux/kernel.h> 21 22 #include <linux/module.h> 22 23 #include <linux/err.h> 24 + #include <linux/phy/phy.h> 23 25 #include <linux/platform_device.h> 24 26 #include <linux/usb/ohci_pdriver.h> 25 27 #include <linux/usb.h> ··· 31 27 #include "ohci.h" 32 28 33 29 #define DRIVER_DESC "OHCI generic platform driver" 30 + #define OHCI_MAX_CLKS 3 31 + #define hcd_to_ohci_priv(h) ((struct ohci_platform_priv *)hcd_to_ohci(h)->priv) 32 + 33 + struct ohci_platform_priv { 34 + struct clk *clks[OHCI_MAX_CLKS]; 35 + struct phy *phy; 36 + }; 34 37 35 38 static const char hcd_name[] = "ohci-platform"; 36 39 ··· 47 36 struct usb_ohci_pdata *pdata = dev_get_platdata(&pdev->dev); 48 37 struct ohci_hcd *ohci = hcd_to_ohci(hcd); 49 38 50 - if (pdata->big_endian_desc) 51 - ohci->flags |= OHCI_QUIRK_BE_DESC; 52 - if (pdata->big_endian_mmio) 53 - ohci->flags |= OHCI_QUIRK_BE_MMIO; 54 39 if (pdata->no_big_frame_no) 55 40 ohci->flags |= OHCI_QUIRK_FRAME_NO; 56 41 if (pdata->num_ports) ··· 55 48 return ohci_setup(hcd); 56 49 } 57 50 51 + static int ohci_platform_power_on(struct platform_device *dev) 52 + { 53 + struct usb_hcd *hcd = platform_get_drvdata(dev); 54 + struct ohci_platform_priv *priv = hcd_to_ohci_priv(hcd); 55 + int clk, ret; 56 + 57 + for (clk = 0; clk < OHCI_MAX_CLKS && priv->clks[clk]; clk++) { 58 + ret = clk_prepare_enable(priv->clks[clk]); 59 + if (ret) 60 + goto err_disable_clks; 61 + } 62 + 63 + if (priv->phy) { 64 + ret = phy_init(priv->phy); 65 + if (ret) 66 + goto err_disable_clks; 67 + 68 + ret = 
phy_power_on(priv->phy); 69 + if (ret) 70 + goto err_exit_phy; 71 + } 72 + 73 + return 0; 74 + 75 + err_exit_phy: 76 + phy_exit(priv->phy); 77 + err_disable_clks: 78 + while (--clk >= 0) 79 + clk_disable_unprepare(priv->clks[clk]); 80 + 81 + return ret; 82 + } 83 + 84 + static void ohci_platform_power_off(struct platform_device *dev) 85 + { 86 + struct usb_hcd *hcd = platform_get_drvdata(dev); 87 + struct ohci_platform_priv *priv = hcd_to_ohci_priv(hcd); 88 + int clk; 89 + 90 + if (priv->phy) { 91 + phy_power_off(priv->phy); 92 + phy_exit(priv->phy); 93 + } 94 + 95 + for (clk = OHCI_MAX_CLKS - 1; clk >= 0; clk--) 96 + if (priv->clks[clk]) 97 + clk_disable_unprepare(priv->clks[clk]); 98 + } 99 + 58 100 static struct hc_driver __read_mostly ohci_platform_hc_driver; 59 101 60 102 static const struct ohci_driver_overrides platform_overrides __initconst = { 61 - .product_desc = "Generic Platform OHCI controller", 62 - .reset = ohci_platform_reset, 103 + .product_desc = "Generic Platform OHCI controller", 104 + .reset = ohci_platform_reset, 105 + .extra_priv_size = sizeof(struct ohci_platform_priv), 106 + }; 107 + 108 + static struct usb_ohci_pdata ohci_platform_defaults = { 109 + .power_on = ohci_platform_power_on, 110 + .power_suspend = ohci_platform_power_off, 111 + .power_off = ohci_platform_power_off, 63 112 }; 64 113 65 114 static int ohci_platform_probe(struct platform_device *dev) ··· 123 60 struct usb_hcd *hcd; 124 61 struct resource *res_mem; 125 62 struct usb_ohci_pdata *pdata = dev_get_platdata(&dev->dev); 126 - int irq; 127 - int err = -ENOMEM; 128 - 129 - if (!pdata) { 130 - WARN_ON(1); 131 - return -ENODEV; 132 - } 63 + struct ohci_platform_priv *priv; 64 + struct ohci_hcd *ohci; 65 + int err, irq, clk = 0; 133 66 134 67 if (usb_disabled()) 135 68 return -ENODEV; 69 + 70 + /* 71 + * Use reasonable defaults so platforms don't have to provide these 72 + * with DT probing on ARM. 
73 + */ 74 + if (!pdata) 75 + pdata = &ohci_platform_defaults; 76 + 77 + err = dma_coerce_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32)); 78 + if (err) 79 + return err; 136 80 137 81 irq = platform_get_irq(dev, 0); 138 82 if (irq < 0) { ··· 153 83 return -ENXIO; 154 84 } 155 85 86 + hcd = usb_create_hcd(&ohci_platform_hc_driver, &dev->dev, 87 + dev_name(&dev->dev)); 88 + if (!hcd) 89 + return -ENOMEM; 90 + 91 + platform_set_drvdata(dev, hcd); 92 + dev->dev.platform_data = pdata; 93 + priv = hcd_to_ohci_priv(hcd); 94 + ohci = hcd_to_ohci(hcd); 95 + 96 + if (pdata == &ohci_platform_defaults && dev->dev.of_node) { 97 + if (of_property_read_bool(dev->dev.of_node, "big-endian-regs")) 98 + ohci->flags |= OHCI_QUIRK_BE_MMIO; 99 + 100 + if (of_property_read_bool(dev->dev.of_node, "big-endian-desc")) 101 + ohci->flags |= OHCI_QUIRK_BE_DESC; 102 + 103 + if (of_property_read_bool(dev->dev.of_node, "big-endian")) 104 + ohci->flags |= OHCI_QUIRK_BE_MMIO | OHCI_QUIRK_BE_DESC; 105 + 106 + priv->phy = devm_phy_get(&dev->dev, "usb"); 107 + if (IS_ERR(priv->phy)) { 108 + err = PTR_ERR(priv->phy); 109 + if (err == -EPROBE_DEFER) 110 + goto err_put_hcd; 111 + priv->phy = NULL; 112 + } 113 + 114 + for (clk = 0; clk < OHCI_MAX_CLKS; clk++) { 115 + priv->clks[clk] = of_clk_get(dev->dev.of_node, clk); 116 + if (IS_ERR(priv->clks[clk])) { 117 + err = PTR_ERR(priv->clks[clk]); 118 + if (err == -EPROBE_DEFER) 119 + goto err_put_clks; 120 + priv->clks[clk] = NULL; 121 + break; 122 + } 123 + } 124 + } 125 + 126 + if (pdata->big_endian_desc) 127 + ohci->flags |= OHCI_QUIRK_BE_DESC; 128 + if (pdata->big_endian_mmio) 129 + ohci->flags |= OHCI_QUIRK_BE_MMIO; 130 + 131 + #ifndef CONFIG_USB_OHCI_BIG_ENDIAN_MMIO 132 + if (ohci->flags & OHCI_QUIRK_BE_MMIO) { 133 + dev_err(&dev->dev, 134 + "Error: CONFIG_USB_OHCI_BIG_ENDIAN_MMIO not set\n"); 135 + err = -EINVAL; 136 + goto err_put_clks; 137 + } 138 + #endif 139 + #ifndef CONFIG_USB_OHCI_BIG_ENDIAN_DESC 140 + if (ohci->flags & OHCI_QUIRK_BE_DESC) { 141 
+ dev_err(&dev->dev, 142 + "Error: CONFIG_USB_OHCI_BIG_ENDIAN_DESC not set\n"); 143 + err = -EINVAL; 144 + goto err_put_clks; 145 + } 146 + #endif 147 + 156 148 if (pdata->power_on) { 157 149 err = pdata->power_on(dev); 158 150 if (err < 0) 159 - return err; 160 - } 161 - 162 - hcd = usb_create_hcd(&ohci_platform_hc_driver, &dev->dev, 163 - dev_name(&dev->dev)); 164 - if (!hcd) { 165 - err = -ENOMEM; 166 - goto err_power; 151 + goto err_put_clks; 167 152 } 168 153 169 154 hcd->rsrc_start = res_mem->start; ··· 227 102 hcd->regs = devm_ioremap_resource(&dev->dev, res_mem); 228 103 if (IS_ERR(hcd->regs)) { 229 104 err = PTR_ERR(hcd->regs); 230 - goto err_put_hcd; 105 + goto err_power; 231 106 } 232 107 err = usb_add_hcd(hcd, irq, IRQF_SHARED); 233 108 if (err) 234 - goto err_put_hcd; 109 + goto err_power; 235 110 236 111 device_wakeup_enable(hcd->self.controller); 237 112 ··· 239 114 240 115 return err; 241 116 242 - err_put_hcd: 243 - usb_put_hcd(hcd); 244 117 err_power: 245 118 if (pdata->power_off) 246 119 pdata->power_off(dev); 120 + err_put_clks: 121 + while (--clk >= 0) 122 + clk_put(priv->clks[clk]); 123 + err_put_hcd: 124 + if (pdata == &ohci_platform_defaults) 125 + dev->dev.platform_data = NULL; 126 + 127 + usb_put_hcd(hcd); 247 128 248 129 return err; 249 130 } ··· 258 127 { 259 128 struct usb_hcd *hcd = platform_get_drvdata(dev); 260 129 struct usb_ohci_pdata *pdata = dev_get_platdata(&dev->dev); 130 + struct ohci_platform_priv *priv = hcd_to_ohci_priv(hcd); 131 + int clk; 261 132 262 133 usb_remove_hcd(hcd); 263 - usb_put_hcd(hcd); 264 134 265 135 if (pdata->power_off) 266 136 pdata->power_off(dev); 137 + 138 + for (clk = 0; clk < OHCI_MAX_CLKS && priv->clks[clk]; clk++) 139 + clk_put(priv->clks[clk]); 140 + 141 + usb_put_hcd(hcd); 142 + 143 + if (pdata == &ohci_platform_defaults) 144 + dev->dev.platform_data = NULL; 267 145 268 146 return 0; 269 147 } ··· 320 180 #define ohci_platform_resume NULL 321 181 #endif /* CONFIG_PM */ 322 182 183 + static const 
struct of_device_id ohci_platform_ids[] = { 184 + { .compatible = "generic-ohci", }, 185 + { } 186 + }; 187 + MODULE_DEVICE_TABLE(of, ohci_platform_ids); 188 + 323 189 static const struct platform_device_id ohci_platform_table[] = { 324 190 { "ohci-platform", 0 }, 325 191 { } ··· 346 200 .owner = THIS_MODULE, 347 201 .name = "ohci-platform", 348 202 .pm = &ohci_platform_pm_ops, 203 + .of_match_table = ohci_platform_ids, 349 204 } 350 205 }; 351 206
+1
drivers/usb/host/uhci-platform.c
··· 148 148 } 149 149 150 150 static const struct of_device_id platform_uhci_ids[] = { 151 + { .compatible = "generic-uhci", }, 151 152 { .compatible = "platform-uhci", }, 152 153 {} 153 154 };
+5 -3
drivers/usb/host/xhci-hub.c
··· 732 732 /* Set the U1 and U2 exit latencies. */ 733 733 memcpy(buf, &usb_bos_descriptor, 734 734 USB_DT_BOS_SIZE + USB_DT_USB_SS_CAP_SIZE); 735 - temp = readl(&xhci->cap_regs->hcs_params3); 736 - buf[12] = HCS_U1_LATENCY(temp); 737 - put_unaligned_le16(HCS_U2_LATENCY(temp), &buf[13]); 735 + if ((xhci->quirks & XHCI_LPM_SUPPORT)) { 736 + temp = readl(&xhci->cap_regs->hcs_params3); 737 + buf[12] = HCS_U1_LATENCY(temp); 738 + put_unaligned_le16(HCS_U2_LATENCY(temp), &buf[13]); 739 + } 738 740 739 741 /* Indicate whether the host has LTM support. */ 740 742 temp = readl(&xhci->cap_regs->hcc_params);
+157 -55
drivers/usb/host/xhci-mem.c
··· 149 149 } 150 150 } 151 151 152 + /* 153 + * We need a radix tree for mapping physical addresses of TRBs to which stream 154 + * ID they belong to. We need to do this because the host controller won't tell 155 + * us which stream ring the TRB came from. We could store the stream ID in an 156 + * event data TRB, but that doesn't help us for the cancellation case, since the 157 + * endpoint may stop before it reaches that event data TRB. 158 + * 159 + * The radix tree maps the upper portion of the TRB DMA address to a ring 160 + * segment that has the same upper portion of DMA addresses. For example, say I 161 + * have segments of size 1KB, that are always 1KB aligned. A segment may 162 + * start at 0x10c91000 and end at 0x10c913f0. If I use the upper 10 bits, the 163 + * key to the stream ID is 0x43244. I can use the DMA address of the TRB to 164 + * pass the radix tree a key to get the right stream ID: 165 + * 166 + * 0x10c90fff >> 10 = 0x43243 167 + * 0x10c912c0 >> 10 = 0x43244 168 + * 0x10c91400 >> 10 = 0x43245 169 + * 170 + * Obviously, only those TRBs with DMA addresses that are within the segment 171 + * will make the radix tree return the stream ID for that ring. 172 + * 173 + * Caveats for the radix tree: 174 + * 175 + * The radix tree uses an unsigned long as a key pair. On 32-bit systems, an 176 + * unsigned long will be 32-bits; on a 64-bit system an unsigned long will be 177 + * 64-bits. Since we only request 32-bit DMA addresses, we can use that as the 178 + * key on 32-bit or 64-bit systems (it would also be fine if we asked for 64-bit 179 + * PCI DMA addresses on a 64-bit system). There might be a problem on 32-bit 180 + * extended systems (where the DMA address can be bigger than 32-bits), 181 + * if we allow the PCI dma mask to be bigger than 32-bits. So don't do that. 
182 + */ 183 + static int xhci_insert_segment_mapping(struct radix_tree_root *trb_address_map, 184 + struct xhci_ring *ring, 185 + struct xhci_segment *seg, 186 + gfp_t mem_flags) 187 + { 188 + unsigned long key; 189 + int ret; 190 + 191 + key = (unsigned long)(seg->dma >> TRB_SEGMENT_SHIFT); 192 + /* Skip any segments that were already added. */ 193 + if (radix_tree_lookup(trb_address_map, key)) 194 + return 0; 195 + 196 + ret = radix_tree_maybe_preload(mem_flags); 197 + if (ret) 198 + return ret; 199 + ret = radix_tree_insert(trb_address_map, 200 + key, ring); 201 + radix_tree_preload_end(); 202 + return ret; 203 + } 204 + 205 + static void xhci_remove_segment_mapping(struct radix_tree_root *trb_address_map, 206 + struct xhci_segment *seg) 207 + { 208 + unsigned long key; 209 + 210 + key = (unsigned long)(seg->dma >> TRB_SEGMENT_SHIFT); 211 + if (radix_tree_lookup(trb_address_map, key)) 212 + radix_tree_delete(trb_address_map, key); 213 + } 214 + 215 + static int xhci_update_stream_segment_mapping( 216 + struct radix_tree_root *trb_address_map, 217 + struct xhci_ring *ring, 218 + struct xhci_segment *first_seg, 219 + struct xhci_segment *last_seg, 220 + gfp_t mem_flags) 221 + { 222 + struct xhci_segment *seg; 223 + struct xhci_segment *failed_seg; 224 + int ret; 225 + 226 + if (WARN_ON_ONCE(trb_address_map == NULL)) 227 + return 0; 228 + 229 + seg = first_seg; 230 + do { 231 + ret = xhci_insert_segment_mapping(trb_address_map, 232 + ring, seg, mem_flags); 233 + if (ret) 234 + goto remove_streams; 235 + if (seg == last_seg) 236 + return 0; 237 + seg = seg->next; 238 + } while (seg != first_seg); 239 + 240 + return 0; 241 + 242 + remove_streams: 243 + failed_seg = seg; 244 + seg = first_seg; 245 + do { 246 + xhci_remove_segment_mapping(trb_address_map, seg); 247 + if (seg == failed_seg) 248 + return ret; 249 + seg = seg->next; 250 + } while (seg != first_seg); 251 + 252 + return ret; 253 + } 254 + 255 + static void xhci_remove_stream_mapping(struct xhci_ring *ring) 
256 + { 257 + struct xhci_segment *seg; 258 + 259 + if (WARN_ON_ONCE(ring->trb_address_map == NULL)) 260 + return; 261 + 262 + seg = ring->first_seg; 263 + do { 264 + xhci_remove_segment_mapping(ring->trb_address_map, seg); 265 + seg = seg->next; 266 + } while (seg != ring->first_seg); 267 + } 268 + 269 + static int xhci_update_stream_mapping(struct xhci_ring *ring, gfp_t mem_flags) 270 + { 271 + return xhci_update_stream_segment_mapping(ring->trb_address_map, ring, 272 + ring->first_seg, ring->last_seg, mem_flags); 273 + } 274 + 152 275 /* XXX: Do we need the hcd structure in all these functions? */ 153 276 void xhci_ring_free(struct xhci_hcd *xhci, struct xhci_ring *ring) 154 277 { 155 278 if (!ring) 156 279 return; 157 280 158 - if (ring->first_seg) 281 + if (ring->first_seg) { 282 + if (ring->type == TYPE_STREAM) 283 + xhci_remove_stream_mapping(ring); 159 284 xhci_free_segments_for_ring(xhci, ring->first_seg); 285 + } 160 286 161 287 kfree(ring); 162 288 } ··· 475 349 if (ret) 476 350 return -ENOMEM; 477 351 352 + if (ring->type == TYPE_STREAM) 353 + ret = xhci_update_stream_segment_mapping(ring->trb_address_map, 354 + ring, first, last, flags); 355 + if (ret) { 356 + struct xhci_segment *next; 357 + do { 358 + next = first->next; 359 + xhci_segment_free(xhci, first); 360 + if (first == last) 361 + break; 362 + first = next; 363 + } while (true); 364 + return ret; 365 + } 366 + 478 367 xhci_link_rings(xhci, ring, first, last, num_segs); 479 368 xhci_dbg_trace(xhci, trace_xhci_dbg_ring_expansion, 480 369 "ring expansion succeed, now has %d segments", ··· 575 434 struct xhci_stream_ctx *stream_ctx, dma_addr_t dma) 576 435 { 577 436 struct device *dev = xhci_to_hcd(xhci)->self.controller; 437 + size_t size = sizeof(struct xhci_stream_ctx) * num_stream_ctxs; 578 438 579 - if (num_stream_ctxs > MEDIUM_STREAM_ARRAY_SIZE) 580 - dma_free_coherent(dev, 581 - sizeof(struct xhci_stream_ctx)*num_stream_ctxs, 439 + if (size > MEDIUM_STREAM_ARRAY_SIZE) 440 + 
dma_free_coherent(dev, size, 582 441 stream_ctx, dma); 583 - else if (num_stream_ctxs <= SMALL_STREAM_ARRAY_SIZE) 442 + else if (size <= SMALL_STREAM_ARRAY_SIZE) 584 443 return dma_pool_free(xhci->small_streams_pool, 585 444 stream_ctx, dma); 586 445 else ··· 603 462 gfp_t mem_flags) 604 463 { 605 464 struct device *dev = xhci_to_hcd(xhci)->self.controller; 465 + size_t size = sizeof(struct xhci_stream_ctx) * num_stream_ctxs; 606 466 607 - if (num_stream_ctxs > MEDIUM_STREAM_ARRAY_SIZE) 608 - return dma_alloc_coherent(dev, 609 - sizeof(struct xhci_stream_ctx)*num_stream_ctxs, 467 + if (size > MEDIUM_STREAM_ARRAY_SIZE) 468 + return dma_alloc_coherent(dev, size, 610 469 dma, mem_flags); 611 - else if (num_stream_ctxs <= SMALL_STREAM_ARRAY_SIZE) 470 + else if (size <= SMALL_STREAM_ARRAY_SIZE) 612 471 return dma_pool_alloc(xhci->small_streams_pool, 613 472 mem_flags, dma); 614 473 else ··· 651 510 * The number of stream contexts in the stream context array may be bigger than 652 511 * the number of streams the driver wants to use. This is because the number of 653 512 * stream context array entries must be a power of two. 654 - * 655 - * We need a radix tree for mapping physical addresses of TRBs to which stream 656 - * ID they belong to. We need to do this because the host controller won't tell 657 - * us which stream ring the TRB came from. We could store the stream ID in an 658 - * event data TRB, but that doesn't help us for the cancellation case, since the 659 - * endpoint may stop before it reaches that event data TRB. 660 - * 661 - * The radix tree maps the upper portion of the TRB DMA address to a ring 662 - * segment that has the same upper portion of DMA addresses. For example, say I 663 - * have segments of size 1KB, that are always 64-byte aligned. A segment may 664 - * start at 0x10c91000 and end at 0x10c913f0. If I use the upper 10 bits, the 665 - * key to the stream ID is 0x43244. 
I can use the DMA address of the TRB to 666 - * pass the radix tree a key to get the right stream ID: 667 - * 668 - * 0x10c90fff >> 10 = 0x43243 669 - * 0x10c912c0 >> 10 = 0x43244 670 - * 0x10c91400 >> 10 = 0x43245 671 - * 672 - * Obviously, only those TRBs with DMA addresses that are within the segment 673 - * will make the radix tree return the stream ID for that ring. 674 - * 675 - * Caveats for the radix tree: 676 - * 677 - * The radix tree uses an unsigned long as a key pair. On 32-bit systems, an 678 - * unsigned long will be 32-bits; on a 64-bit system an unsigned long will be 679 - * 64-bits. Since we only request 32-bit DMA addresses, we can use that as the 680 - * key on 32-bit or 64-bit systems (it would also be fine if we asked for 64-bit 681 - * PCI DMA addresses on a 64-bit system). There might be a problem on 32-bit 682 - * extended systems (where the DMA address can be bigger than 32-bits), 683 - * if we allow the PCI dma mask to be bigger than 32-bits. So don't do that. 
684 513 */ 685 514 struct xhci_stream_info *xhci_alloc_stream_info(struct xhci_hcd *xhci, 686 515 unsigned int num_stream_ctxs, ··· 659 548 struct xhci_stream_info *stream_info; 660 549 u32 cur_stream; 661 550 struct xhci_ring *cur_ring; 662 - unsigned long key; 663 551 u64 addr; 664 552 int ret; 665 553 ··· 713 603 if (!cur_ring) 714 604 goto cleanup_rings; 715 605 cur_ring->stream_id = cur_stream; 606 + cur_ring->trb_address_map = &stream_info->trb_address_map; 716 607 /* Set deq ptr, cycle bit, and stream context type */ 717 608 addr = cur_ring->first_seg->dma | 718 609 SCT_FOR_CTX(SCT_PRI_TR) | ··· 723 612 xhci_dbg(xhci, "Setting stream %d ring ptr to 0x%08llx\n", 724 613 cur_stream, (unsigned long long) addr); 725 614 726 - key = (unsigned long) 727 - (cur_ring->first_seg->dma >> TRB_SEGMENT_SHIFT); 728 - ret = radix_tree_insert(&stream_info->trb_address_map, 729 - key, cur_ring); 615 + ret = xhci_update_stream_mapping(cur_ring, mem_flags); 730 616 if (ret) { 731 617 xhci_ring_free(xhci, cur_ring); 732 618 stream_info->stream_rings[cur_stream] = NULL; ··· 743 635 for (cur_stream = 1; cur_stream < num_streams; cur_stream++) { 744 636 cur_ring = stream_info->stream_rings[cur_stream]; 745 637 if (cur_ring) { 746 - addr = cur_ring->first_seg->dma; 747 - radix_tree_delete(&stream_info->trb_address_map, 748 - addr >> TRB_SEGMENT_SHIFT); 749 638 xhci_ring_free(xhci, cur_ring); 750 639 stream_info->stream_rings[cur_stream] = NULL; 751 640 } ··· 803 698 { 804 699 int cur_stream; 805 700 struct xhci_ring *cur_ring; 806 - dma_addr_t addr; 807 701 808 702 if (!stream_info) 809 703 return; ··· 811 707 cur_stream++) { 812 708 cur_ring = stream_info->stream_rings[cur_stream]; 813 709 if (cur_ring) { 814 - addr = cur_ring->first_seg->dma; 815 - radix_tree_delete(&stream_info->trb_address_map, 816 - addr >> TRB_SEGMENT_SHIFT); 817 710 xhci_ring_free(xhci, cur_ring); 818 711 stream_info->stream_rings[cur_stream] = NULL; 819 712 } ··· 1812 1711 1813 1712 if (xhci->lpm_command) 
1814 1713 xhci_free_command(xhci, xhci->lpm_command); 1815 - xhci->cmd_ring_reserved_trbs = 0; 1816 1714 if (xhci->cmd_ring) 1817 1715 xhci_ring_free(xhci, xhci->cmd_ring); 1818 1716 xhci->cmd_ring = NULL; ··· 1876 1776 } 1877 1777 1878 1778 no_bw: 1779 + xhci->cmd_ring_reserved_trbs = 0; 1879 1780 xhci->num_usb2_ports = 0; 1880 1781 xhci->num_usb3_ports = 0; 1881 1782 xhci->num_active_eps = 0; ··· 2375 2274 /* 2376 2275 * Initialize the ring segment pool. The ring must be a contiguous 2377 2276 * structure comprised of TRBs. The TRBs must be 16 byte aligned, 2378 - * however, the command ring segment needs 64-byte aligned segments, 2379 - * so we pick the greater alignment need. 2277 + * however, the command ring segment needs 64-byte aligned segments 2278 + * and our use of dma addresses in the trb_address_map radix tree needs 2279 + * TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need. 2380 2280 */ 2381 2281 xhci->segment_pool = dma_pool_create("xHCI ring segments", dev, 2382 - TRB_SEGMENT_SIZE, 64, xhci->page_size); 2282 + TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size); 2383 2283 2384 2284 /* See Table 46 and Note on Figure 55 */ 2385 2285 xhci->device_pool = dma_pool_create("xHCI input/output contexts", dev,
+12 -6
drivers/usb/host/xhci-pci.c
··· 190 190 struct usb_hcd *hcd; 191 191 192 192 driver = (struct hc_driver *)id->driver_data; 193 + 194 + /* Prevent runtime suspending between USB-2 and USB-3 initialization */ 195 + pm_runtime_get_noresume(&dev->dev); 196 + 193 197 /* Register the USB 2.0 roothub. 194 198 * FIXME: USB core must know to register the USB 2.0 roothub first. 195 199 * This is sort of silly, because we could just set the HCD driver flags ··· 203 199 retval = usb_hcd_pci_probe(dev, id); 204 200 205 201 if (retval) 206 - return retval; 202 + goto put_runtime_pm; 207 203 208 204 /* USB 2.0 roothub is stored in the PCI device now. */ 209 205 hcd = dev_get_drvdata(&dev->dev); ··· 226 222 goto put_usb3_hcd; 227 223 /* Roothub already marked as USB 3.0 speed */ 228 224 229 - /* We know the LPM timeout algorithms for this host, let the USB core 230 - * enable and disable LPM for devices under the USB 3.0 roothub. 231 - */ 232 - if (xhci->quirks & XHCI_LPM_SUPPORT) 233 - hcd_to_bus(xhci->shared_hcd)->root_hub->lpm_capable = 1; 225 + if (HCC_MAX_PSA(xhci->hcc_params) >= 4) 226 + xhci->shared_hcd->can_do_streams = 1; 227 + 228 + /* USB-2 and USB-3 roothubs initialized, allow runtime pm suspend */ 229 + pm_runtime_put_noidle(&dev->dev); 234 230 235 231 return 0; 236 232 ··· 238 234 usb_put_hcd(xhci->shared_hcd); 239 235 dealloc_usb2_hcd: 240 236 usb_hcd_pci_remove(dev); 237 + put_runtime_pm: 238 + pm_runtime_put_noidle(&dev->dev); 241 239 return retval; 242 240 } 243 241
+4
drivers/usb/host/xhci-plat.c
··· 158 158 */ 159 159 *((struct xhci_hcd **) xhci->shared_hcd->hcd_priv) = xhci; 160 160 161 + if (HCC_MAX_PSA(xhci->hcc_params) >= 4) 162 + xhci->shared_hcd->can_do_streams = 1; 163 + 161 164 ret = usb_add_hcd(xhci->shared_hcd, irq, IRQF_SHARED); 162 165 if (ret) 163 166 goto put_usb3_hcd; ··· 229 226 230 227 #ifdef CONFIG_OF 231 228 static const struct of_device_id usb_xhci_of_match[] = { 229 + { .compatible = "generic-xhci" }, 232 230 { .compatible = "xhci-platform" }, 233 231 { }, 234 232 };
+91 -55
drivers/usb/host/xhci-ring.c
··· 546 546 struct xhci_dequeue_state *state) 547 547 { 548 548 struct xhci_virt_device *dev = xhci->devs[slot_id]; 549 + struct xhci_virt_ep *ep = &dev->eps[ep_index]; 549 550 struct xhci_ring *ep_ring; 550 551 struct xhci_generic_trb *trb; 551 - struct xhci_ep_ctx *ep_ctx; 552 552 dma_addr_t addr; 553 553 554 554 ep_ring = xhci_triad_to_transfer_ring(xhci, slot_id, ··· 573 573 /* Dig out the cycle state saved by the xHC during the stop ep cmd */ 574 574 xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb, 575 575 "Finding endpoint context"); 576 - ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index); 577 - state->new_cycle_state = 0x1 & le64_to_cpu(ep_ctx->deq); 576 + /* 4.6.9 the css flag is written to the stream context for streams */ 577 + if (ep->ep_state & EP_HAS_STREAMS) { 578 + struct xhci_stream_ctx *ctx = 579 + &ep->stream_info->stream_ctx_array[stream_id]; 580 + state->new_cycle_state = 0x1 & le64_to_cpu(ctx->stream_ring); 581 + } else { 582 + struct xhci_ep_ctx *ep_ctx 583 + = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index); 584 + state->new_cycle_state = 0x1 & le64_to_cpu(ep_ctx->deq); 585 + } 578 586 579 587 state->new_deq_ptr = cur_td->last_trb; 580 588 xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb, ··· 900 892 /* Return to the event handler with xhci->lock re-acquired */ 901 893 } 902 894 895 + static void xhci_kill_ring_urbs(struct xhci_hcd *xhci, struct xhci_ring *ring) 896 + { 897 + struct xhci_td *cur_td; 898 + 899 + while (!list_empty(&ring->td_list)) { 900 + cur_td = list_first_entry(&ring->td_list, 901 + struct xhci_td, td_list); 902 + list_del_init(&cur_td->td_list); 903 + if (!list_empty(&cur_td->cancelled_td_list)) 904 + list_del_init(&cur_td->cancelled_td_list); 905 + xhci_giveback_urb_in_irq(xhci, cur_td, -ESHUTDOWN); 906 + } 907 + } 908 + 909 + static void xhci_kill_endpoint_urbs(struct xhci_hcd *xhci, 910 + int slot_id, int ep_index) 911 + { 912 + struct xhci_td *cur_td; 913 + struct xhci_virt_ep *ep; 914 + struct xhci_ring *ring; 915 + 
916 + ep = &xhci->devs[slot_id]->eps[ep_index];
917 + if ((ep->ep_state & EP_HAS_STREAMS) ||
918 + (ep->ep_state & EP_GETTING_NO_STREAMS)) {
919 + int stream_id;
920 +
921 + for (stream_id = 0; stream_id < ep->stream_info->num_streams;
922 + stream_id++) {
923 + xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
924 + "Killing URBs for slot ID %u, ep index %u, stream %u",
925 + slot_id, ep_index, stream_id + 1);
926 + xhci_kill_ring_urbs(xhci,
927 + ep->stream_info->stream_rings[stream_id]);
928 + }
929 + } else {
930 + ring = ep->ring;
931 + if (!ring)
932 + return;
933 + xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
934 + "Killing URBs for slot ID %u, ep index %u",
935 + slot_id, ep_index);
936 + xhci_kill_ring_urbs(xhci, ring);
937 + }
938 + while (!list_empty(&ep->cancelled_td_list)) {
939 + cur_td = list_first_entry(&ep->cancelled_td_list,
940 + struct xhci_td, cancelled_td_list);
941 + list_del_init(&cur_td->cancelled_td_list);
942 + xhci_giveback_urb_in_irq(xhci, cur_td, -ESHUTDOWN);
943 + }
944 + }
945 +
903 946 /* Watchdog timer function for when a stop endpoint command fails to complete.
904 947 * In this case, we assume the host controller is broken or dying or dead. The
905 948 * host may still be completing some other events, so we have to be careful to
··· 974 915 {
975 916 struct xhci_hcd *xhci;
976 917 struct xhci_virt_ep *ep;
977 - struct xhci_virt_ep *temp_ep;
978 - struct xhci_ring *ring;
979 - struct xhci_td *cur_td;
980 918 int ret, i, j;
981 919 unsigned long flags;
982 920
··· 1030 974 for (i = 0; i < MAX_HC_SLOTS; i++) {
1031 975 if (!xhci->devs[i])
1032 976 continue;
1033 - for (j = 0; j < 31; j++) {
1034 - temp_ep = &xhci->devs[i]->eps[j];
1035 - ring = temp_ep->ring;
1036 - if (!ring)
1037 - continue;
1038 - xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
1039 - "Killing URBs for slot ID %u, "
1040 - "ep index %u", i, j);
1041 - while (!list_empty(&ring->td_list)) {
1042 - cur_td = list_first_entry(&ring->td_list,
1043 - struct xhci_td,
1044 - td_list);
1045 - list_del_init(&cur_td->td_list);
1046 - if (!list_empty(&cur_td->cancelled_td_list))
1047 - list_del_init(&cur_td->cancelled_td_list);
1048 - xhci_giveback_urb_in_irq(xhci, cur_td,
1049 - -ESHUTDOWN);
1050 - }
1051 - while (!list_empty(&temp_ep->cancelled_td_list)) {
1052 - cur_td = list_first_entry(
1053 - &temp_ep->cancelled_td_list,
1054 - struct xhci_td,
1055 - cancelled_td_list);
1056 - list_del_init(&cur_td->cancelled_td_list);
1057 - xhci_giveback_urb_in_irq(xhci, cur_td,
1058 - -ESHUTDOWN);
1059 - }
1060 - }
977 + for (j = 0; j < 31; j++)
978 + xhci_kill_endpoint_urbs(xhci, i, j);
1061 979 }
1062 980 spin_unlock_irqrestore(&xhci->lock, flags);
1063 981 xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
··· 1103 1073 unsigned int stream_id;
1104 1074 struct xhci_ring *ep_ring;
1105 1075 struct xhci_virt_device *dev;
1076 + struct xhci_virt_ep *ep;
1106 1077 struct xhci_ep_ctx *ep_ctx;
1107 1078 struct xhci_slot_ctx *slot_ctx;
1108 1079
1109 1080 ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3]));
1110 1081 stream_id = TRB_TO_STREAM_ID(le32_to_cpu(trb->generic.field[2]));
1111 1082 dev = xhci->devs[slot_id];
1083 + ep = &dev->eps[ep_index];
1112 1084 1113 1085 ep_ring = xhci_stream_id_to_ring(dev, ep_index, stream_id); 1114 1086 if (!ep_ring) { 1115 - xhci_warn(xhci, "WARN Set TR deq ptr command for " 1116 - "freed stream ID %u\n", 1087 + xhci_warn(xhci, "WARN Set TR deq ptr command for freed stream ID %u\n", 1117 1088 stream_id); 1118 1089 /* XXX: Harmless??? */ 1119 1090 dev->eps[ep_index].ep_state &= ~SET_DEQ_PENDING; ··· 1130 1099 1131 1100 switch (cmd_comp_code) { 1132 1101 case COMP_TRB_ERR: 1133 - xhci_warn(xhci, "WARN Set TR Deq Ptr cmd invalid because " 1134 - "of stream ID configuration\n"); 1102 + xhci_warn(xhci, "WARN Set TR Deq Ptr cmd invalid because of stream ID configuration\n"); 1135 1103 break; 1136 1104 case COMP_CTX_STATE: 1137 - xhci_warn(xhci, "WARN Set TR Deq Ptr cmd failed due " 1138 - "to incorrect slot or ep state.\n"); 1105 + xhci_warn(xhci, "WARN Set TR Deq Ptr cmd failed due to incorrect slot or ep state.\n"); 1139 1106 ep_state = le32_to_cpu(ep_ctx->ep_info); 1140 1107 ep_state &= EP_STATE_MASK; 1141 1108 slot_state = le32_to_cpu(slot_ctx->dev_state); ··· 1143 1114 slot_state, ep_state); 1144 1115 break; 1145 1116 case COMP_EBADSLT: 1146 - xhci_warn(xhci, "WARN Set TR Deq Ptr cmd failed because " 1147 - "slot %u was not enabled.\n", slot_id); 1117 + xhci_warn(xhci, "WARN Set TR Deq Ptr cmd failed because slot %u was not enabled.\n", 1118 + slot_id); 1148 1119 break; 1149 1120 default: 1150 - xhci_warn(xhci, "WARN Set TR Deq Ptr cmd with unknown " 1151 - "completion code of %u.\n", 1152 - cmd_comp_code); 1121 + xhci_warn(xhci, "WARN Set TR Deq Ptr cmd with unknown completion code of %u.\n", 1122 + cmd_comp_code); 1153 1123 break; 1154 1124 } 1155 1125 /* OK what do we do now? The endpoint state is hosed, and we ··· 1158 1130 * cancelling URBs, which might not be an error... 
1159 1131 */
1160 1132 } else {
1133 + u64 deq;
1134 + /* 4.6.10 deq ptr is written to the stream ctx for streams */
1135 + if (ep->ep_state & EP_HAS_STREAMS) {
1136 + struct xhci_stream_ctx *ctx =
1137 + &ep->stream_info->stream_ctx_array[stream_id];
1138 + deq = le64_to_cpu(ctx->stream_ring) & SCTX_DEQ_MASK;
1139 + } else {
1140 + deq = le64_to_cpu(ep_ctx->deq) & ~EP_CTX_CYCLE_MASK;
1141 + }
1161 1142 xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
1162 - "Successful Set TR Deq Ptr cmd, deq = @%08llx",
1163 - le64_to_cpu(ep_ctx->deq));
1164 - if (xhci_trb_virt_to_dma(dev->eps[ep_index].queued_deq_seg,
1165 - dev->eps[ep_index].queued_deq_ptr) ==
1166 - (le64_to_cpu(ep_ctx->deq) & ~(EP_CTX_CYCLE_MASK))) {
1143 + "Successful Set TR Deq Ptr cmd, deq = @%08llx", deq);
1144 + if (xhci_trb_virt_to_dma(ep->queued_deq_seg,
1145 + ep->queued_deq_ptr) == deq) {
1167 1146 /* Update the ring's dequeue segment and dequeue pointer
1168 1147 * to reflect the new position.
1169 1148 */
1170 1149 update_ring_for_set_deq_completion(xhci, dev,
1171 1150 ep_ring, ep_index);
1172 1151 } else {
1173 - xhci_warn(xhci, "Mismatch between completed Set TR Deq "
1174 - "Ptr command & xHCI internal state.\n");
1152 + xhci_warn(xhci, "Mismatch between completed Set TR Deq Ptr command & xHCI internal state.\n");
1175 1153 xhci_warn(xhci, "ep deq seg = %p, deq ptr = %p\n",
1176 - dev->eps[ep_index].queued_deq_seg,
1177 - dev->eps[ep_index].queued_deq_ptr);
1154 + ep->queued_deq_seg, ep->queued_deq_ptr);
1178 1155 }
1179 1156 }
1180 1157
··· 4103 4070 u32 trb_slot_id = SLOT_ID_FOR_TRB(slot_id);
4104 4071 u32 trb_ep_index = EP_ID_FOR_TRB(ep_index);
4105 4072 u32 trb_stream_id = STREAM_ID_FOR_TRB(stream_id);
4073 + u32 trb_sct = 0;
4106 4074 u32 type = TRB_TYPE(TRB_SET_DEQ);
4107 4075 struct xhci_virt_ep *ep;
4108 4076
··· 4122 4088 }
4123 4089 ep->queued_deq_seg = deq_seg;
4124 4090 ep->queued_deq_ptr = deq_ptr;
4125 - return queue_command(xhci, lower_32_bits(addr) | cycle_state,
4091 + if (stream_id)
4092 + trb_sct = SCT_FOR_TRB(SCT_PRI_TR);
4093 + return queue_command(xhci, lower_32_bits(addr) | trb_sct | cycle_state,
4126 4094 upper_32_bits(addr), trb_stream_id,
4127 4095 trb_slot_id | trb_ep_index | type, false);
4128 4096 }
+31 -2
drivers/usb/host/xhci.c
··· 390 390 }
391 391
392 392 legacy_irq:
393 + if (!strlen(hcd->irq_descr))
394 + snprintf(hcd->irq_descr, sizeof(hcd->irq_descr), "%s:usb%d",
395 + hcd->driver->description, hcd->self.busnum);
396 +
393 397 /* fall back to legacy interrupt*/
394 398 ret = request_irq(pdev->irq, &usb_hcd_irq, IRQF_SHARED,
395 399 hcd->irq_descr, hcd);
··· 2682 2678 return ret;
2683 2679 }
2684 2680
2681 + static void xhci_check_bw_drop_ep_streams(struct xhci_hcd *xhci,
2682 + struct xhci_virt_device *vdev, int i)
2683 + {
2684 + struct xhci_virt_ep *ep = &vdev->eps[i];
2685 +
2686 + if (ep->ep_state & EP_HAS_STREAMS) {
2687 + xhci_warn(xhci, "WARN: endpoint 0x%02x has streams on set_interface, freeing streams.\n",
2688 + xhci_get_endpoint_address(i));
2689 + xhci_free_stream_info(xhci, ep->stream_info);
2690 + ep->stream_info = NULL;
2691 + ep->ep_state &= ~EP_HAS_STREAMS;
2692 + }
2693 + }
2694 +
2685 2695 /* Called after one or more calls to xhci_add_endpoint() or
2686 2696 * xhci_drop_endpoint(). If this call fails, the USB core is expected
2687 2697 * to call xhci_reset_bandwidth().
··· 2760 2742 /* Free any rings that were dropped, but not changed. */
2761 2743 for (i = 1; i < 31; ++i) {
2762 2744 if ((le32_to_cpu(ctrl_ctx->drop_flags) & (1 << (i + 1))) &&
2763 - !(le32_to_cpu(ctrl_ctx->add_flags) & (1 << (i + 1))))
2745 + !(le32_to_cpu(ctrl_ctx->add_flags) & (1 << (i + 1)))) {
2764 2746 xhci_free_or_cache_endpoint_ring(xhci, virt_dev, i);
2747 + xhci_check_bw_drop_ep_streams(xhci, virt_dev, i);
2748 + }
2765 2749 }
2766 2750 xhci_zero_in_ctx(xhci, virt_dev);
2767 2751 /*
··· 2779 2759 if (virt_dev->eps[i].ring) {
2780 2760 xhci_free_or_cache_endpoint_ring(xhci, virt_dev, i);
2781 2761 }
2762 + xhci_check_bw_drop_ep_streams(xhci, virt_dev, i);
2782 2763 virt_dev->eps[i].ring = virt_dev->eps[i].new_ring;
2783 2764 virt_dev->eps[i].new_ring = NULL;
2784 2765 }
··· 2975 2954 ret = xhci_check_args(xhci_to_hcd(xhci), udev, ep, 1, true, __func__);
2976 2955 if (ret <= 0)
2977 2956 return -EINVAL;
2978 - if (ep->ss_ep_comp.bmAttributes == 0) {
2957 + if (usb_ss_max_streams(&ep->ss_ep_comp) == 0) {
2979 2958 xhci_warn(xhci, "WARN: SuperSpeed Endpoint Companion"
2980 2959 " descriptor for ep 0x%x does not support streams\n",
2981 2960 ep->desc.bEndpointAddress);
··· 3141 3120 xhci = hcd_to_xhci(hcd);
3142 3121 xhci_dbg(xhci, "Driver wants %u stream IDs (including stream 0).\n",
3143 3122 num_streams);
3123 +
3124 + /* MaxPSASize value 0 (2 streams) means streams are not supported */
3125 + if (HCC_MAX_PSA(xhci->hcc_params) < 4) {
3126 + xhci_dbg(xhci, "xHCI controller does not support streams.\n");
3127 + return -ENOSYS;
3128 + }
3144 3129
3145 3130 config_cmd = xhci_alloc_command(xhci, true, true, mem_flags);
3146 3131 if (!config_cmd) {
··· 3546 3519 struct xhci_virt_ep *ep = &virt_dev->eps[i];
3547 3520
3548 3521 if (ep->ep_state & EP_HAS_STREAMS) {
3522 + xhci_warn(xhci, "WARN: endpoint 0x%02x has streams on device reset, freeing streams.\n",
3523 + xhci_get_endpoint_address(i));
3549 3524 xhci_free_stream_info(xhci, ep->stream_info);
3550 3525 ep->stream_info = NULL;
3551 3526 ep->ep_state &= ~EP_HAS_STREAMS;
+4 -1
drivers/usb/host/xhci.h
··· 703 703 704 704 /* deq bitmasks */ 705 705 #define EP_CTX_CYCLE_MASK (1 << 0) 706 + #define SCTX_DEQ_MASK (~0xfL) 706 707 707 708 708 709 /** ··· 1119 1118 #define TRB_TO_SUSPEND_PORT(p) (((p) & (1 << 23)) >> 23) 1120 1119 #define LAST_EP_INDEX 30 1121 1120 1122 - /* Set TR Dequeue Pointer command TRB fields */ 1121 + /* Set TR Dequeue Pointer command TRB fields, 6.4.3.9 */ 1123 1122 #define TRB_TO_STREAM_ID(p) ((((p) & (0xffff << 16)) >> 16)) 1124 1123 #define STREAM_ID_FOR_TRB(p) ((((p)) & 0xffff) << 16) 1124 + #define SCT_FOR_TRB(p) (((p) << 1) & 0x7) 1125 1125 1126 1126 1127 1127 /* Port Status Change Event TRB fields */ ··· 1343 1341 unsigned int num_trbs_free_temp; 1344 1342 enum xhci_ring_type type; 1345 1343 bool last_td_was_short; 1344 + struct radix_tree_root *trb_address_map; 1346 1345 }; 1347 1346 1348 1347 struct xhci_erst_entry {
-1
drivers/usb/misc/Kconfig
··· 128 128 129 129 config USB_FTDI_ELAN 130 130 tristate "Elan PCMCIA CardBus Adapter USB Client" 131 - default M 132 131 help 133 132 ELAN's Uxxx series of adapters are USB to PCMCIA CardBus adapters. 134 133 Currently only the U132 adapter is available.
+6 -4
drivers/usb/misc/sisusbvga/sisusb.c
··· 2123 2123 u8 tmp8, tmp82, ramtype; 2124 2124 int bw = 0; 2125 2125 char *ramtypetext1 = NULL; 2126 - const char *ramtypetext2[] = { "SDR SDRAM", "SDR SGRAM", 2127 - "DDR SDRAM", "DDR SGRAM" }; 2126 + static const char ram_datarate[4] = {'S', 'S', 'D', 'D'}; 2127 + static const char ram_dynamictype[4] = {'D', 'G', 'D', 'G'}; 2128 2128 static const int busSDR[4] = {64, 64, 128, 128}; 2129 2129 static const int busDDR[4] = {32, 32, 64, 64}; 2130 2130 static const int busDDRA[4] = {64+32, 64+32 , (64+32)*2, (64+32)*2}; ··· 2156 2156 break; 2157 2157 } 2158 2158 2159 - dev_info(&sisusb->sisusb_dev->dev, "%dMB %s %s, bus width %d\n", (sisusb->vramsize >> 20), ramtypetext1, 2160 - ramtypetext2[ramtype], bw); 2159 + 2160 + dev_info(&sisusb->sisusb_dev->dev, "%dMB %s %cDR S%cRAM, bus width %d\n", 2161 + sisusb->vramsize >> 20, ramtypetext1, 2162 + ram_datarate[ramtype], ram_dynamictype[ramtype], bw); 2161 2163 } 2162 2164 2163 2165 static int
+34
drivers/usb/misc/usbled.c
··· 22 22 enum led_type { 23 23 DELCOM_VISUAL_SIGNAL_INDICATOR, 24 24 DREAM_CHEEKY_WEBMAIL_NOTIFIER, 25 + RISO_KAGAKU_LED 25 26 }; 27 + 28 + /* the Webmail LED made by RISO KAGAKU CORP. decodes a color index 29 + internally, we want to keep the red+green+blue sysfs api, so we decode 30 + from 1-bit RGB to the riso kagaku color index according to this table... */ 31 + 32 + static unsigned const char riso_kagaku_tbl[] = { 33 + /* R+2G+4B -> riso kagaku color index */ 34 + [0] = 0, /* black */ 35 + [1] = 2, /* red */ 36 + [2] = 1, /* green */ 37 + [3] = 5, /* yellow */ 38 + [4] = 3, /* blue */ 39 + [5] = 6, /* magenta */ 40 + [6] = 4, /* cyan */ 41 + [7] = 7 /* white */ 42 + }; 43 + 44 + #define RISO_KAGAKU_IX(r,g,b) riso_kagaku_tbl[((r)?1:0)+((g)?2:0)+((b)?4:0)] 26 45 27 46 /* table of devices that work with this driver */ 28 47 static const struct usb_device_id id_table[] = { ··· 51 32 .driver_info = DREAM_CHEEKY_WEBMAIL_NOTIFIER }, 52 33 { USB_DEVICE(0x1d34, 0x000a), 53 34 .driver_info = DREAM_CHEEKY_WEBMAIL_NOTIFIER }, 35 + { USB_DEVICE(0x1294, 0x1320), 36 + .driver_info = RISO_KAGAKU_LED }, 54 37 { }, 55 38 }; 56 39 MODULE_DEVICE_TABLE(usb, id_table); ··· 69 48 { 70 49 int retval = 0; 71 50 unsigned char *buffer; 51 + int actlength; 72 52 73 53 buffer = kmalloc(8, GFP_KERNEL); 74 54 if (!buffer) { ··· 124 102 buffer, 125 103 8, 126 104 2000); 105 + break; 106 + 107 + case RISO_KAGAKU_LED: 108 + buffer[0] = RISO_KAGAKU_IX(led->red, led->green, led->blue); 109 + buffer[1] = 0; 110 + buffer[2] = 0; 111 + buffer[3] = 0; 112 + buffer[4] = 0; 113 + 114 + retval = usb_interrupt_msg(led->udev, 115 + usb_sndctrlpipe(led->udev, 2), 116 + buffer, 5, &actlength, 1000 /*ms timeout*/); 127 117 break; 128 118 129 119 default:
+2
drivers/usb/musb/Kconfig
··· 43 43 config USB_MUSB_GADGET 44 44 bool "Gadget only mode" 45 45 depends on USB_GADGET=y || USB_GADGET=USB_MUSB_HDRC 46 + depends on HAS_DMA 46 47 help 47 48 Select this when you want to use MUSB in gadget mode only, 48 49 thereby the host feature will be regressed. ··· 51 50 config USB_MUSB_DUAL_ROLE 52 51 bool "Dual Role mode" 53 52 depends on ((USB=y || USB=USB_MUSB_HDRC) && (USB_GADGET=y || USB_GADGET=USB_MUSB_HDRC)) 53 + depends on HAS_DMA 54 54 help 55 55 This is the default mode of working of MUSB controller where 56 56 both host and gadget features are enabled.
+2 -3
drivers/usb/musb/musb_core.c
··· 438 438 static irqreturn_t musb_stage0_irq(struct musb *musb, u8 int_usb, 439 439 u8 devctl) 440 440 { 441 - struct usb_otg *otg = musb->xceiv->otg; 442 441 irqreturn_t handled = IRQ_NONE; 443 442 444 443 dev_dbg(musb->controller, "<== DevCtl=%02x, int_usb=0x%x\n", devctl, ··· 655 656 break; 656 657 case OTG_STATE_B_PERIPHERAL: 657 658 musb_g_suspend(musb); 658 - musb->is_active = otg->gadget->b_hnp_enable; 659 + musb->is_active = musb->g.b_hnp_enable; 659 660 if (musb->is_active) { 660 661 musb->xceiv->state = OTG_STATE_B_WAIT_ACON; 661 662 dev_dbg(musb->controller, "HNP: Setting timer for b_ase0_brst\n"); ··· 671 672 break; 672 673 case OTG_STATE_A_HOST: 673 674 musb->xceiv->state = OTG_STATE_A_SUSPEND; 674 - musb->is_active = otg->host->b_hnp_enable; 675 + musb->is_active = musb->hcd->self.b_hnp_enable; 675 676 break; 676 677 case OTG_STATE_B_HOST: 677 678 /* Transition to B_PERIPHERAL, see 6.8.2.6 p 44 */
+68 -1
drivers/usb/musb/musb_cppi41.c
··· 39 39 u32 transferred;
40 40 u32 packet_sz;
41 41 struct list_head tx_check;
42 + struct work_struct dma_completion;
42 43 };
43 44
44 45 #define MUSB_DMA_NUM_CHANNELS 15
··· 113 112 return true;
114 113 }
115 114
115 + static bool is_isoc(struct musb_hw_ep *hw_ep, bool in)
116 + {
117 + if (in && hw_ep->in_qh) {
118 + if (hw_ep->in_qh->type == USB_ENDPOINT_XFER_ISOC)
119 + return true;
120 + } else if (hw_ep->out_qh) {
121 + if (hw_ep->out_qh->type == USB_ENDPOINT_XFER_ISOC)
122 + return true;
123 + }
124 + return false;
125 + }
126 +
116 127 static void cppi41_dma_callback(void *private_data);
117 128
118 129 static void cppi41_trans_done(struct cppi41_dma_channel *cppi41_channel)
··· 132 119 struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
133 120 struct musb *musb = hw_ep->musb;
134 121
135 - if (!cppi41_channel->prog_len) {
122 + if (!cppi41_channel->prog_len ||
123 + (cppi41_channel->channel.status == MUSB_DMA_STATUS_FREE)) {
136 124
137 125 /* done, complete */
138 126 cppi41_channel->channel.actual_len =
··· 175 161 csr = musb_readw(epio, MUSB_RXCSR);
176 162 csr |= MUSB_RXCSR_H_REQPKT;
177 163 musb_writew(epio, MUSB_RXCSR, csr);
164 + }
165 + }
166 + }
167 +
168 + static void cppi_trans_done_work(struct work_struct *work)
169 + {
170 + unsigned long flags;
171 + struct cppi41_dma_channel *cppi41_channel =
172 + container_of(work, struct cppi41_dma_channel, dma_completion);
173 + struct cppi41_dma_controller *controller = cppi41_channel->controller;
174 + struct musb *musb = controller->musb;
175 + struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
176 + bool empty;
177 +
178 + if (!cppi41_channel->is_tx && is_isoc(hw_ep, 1)) {
179 + spin_lock_irqsave(&musb->lock, flags);
180 + cppi41_trans_done(cppi41_channel);
181 + spin_unlock_irqrestore(&musb->lock, flags);
182 + } else {
183 + empty = musb_is_tx_fifo_empty(hw_ep);
184 + if (empty) {
185 + spin_lock_irqsave(&musb->lock, flags);
186 + cppi41_trans_done(cppi41_channel);
187 + spin_unlock_irqrestore(&musb->lock, flags);
188 + } else {
189 + schedule_work(&cppi41_channel->dma_completion);
178 190 }
179 191 }
180 192 }
··· 268 228 transferred < cppi41_channel->packet_sz)
269 229 cppi41_channel->prog_len = 0;
270 230
231 + if (!cppi41_channel->is_tx) {
232 + if (is_isoc(hw_ep, 1))
233 + schedule_work(&cppi41_channel->dma_completion);
234 + else
235 + cppi41_trans_done(cppi41_channel);
236 + goto out;
237 + }
238 +
271 239 empty = musb_is_tx_fifo_empty(hw_ep);
272 240 if (empty) {
273 241 cppi41_trans_done(cppi41_channel);
··· 311 263 cppi41_trans_done(cppi41_channel);
312 264 goto out;
313 265 }
266 + }
267 + if (is_isoc(hw_ep, 0)) {
268 + schedule_work(&cppi41_channel->dma_completion);
269 + goto out;
314 270 }
315 271 list_add_tail(&cppi41_channel->tx_check,
316 272 &controller->early_tx_list);
··· 500 448 dma_addr_t dma_addr, u32 len)
501 449 {
502 450 int ret;
451 + struct cppi41_dma_channel *cppi41_channel = channel->private_data;
452 + int hb_mult = 0;
503 453
504 454 BUG_ON(channel->status == MUSB_DMA_STATUS_UNKNOWN ||
505 455 channel->status == MUSB_DMA_STATUS_BUSY);
506 456
457 + if (is_host_active(cppi41_channel->controller->musb)) {
458 + if (cppi41_channel->is_tx)
459 + hb_mult = cppi41_channel->hw_ep->out_qh->hb_mult;
460 + else
461 + hb_mult = cppi41_channel->hw_ep->in_qh->hb_mult;
462 + }
463 +
507 464 channel->status = MUSB_DMA_STATUS_BUSY;
508 465 channel->actual_len = 0;
466 +
467 + if (hb_mult)
468 + packet_sz = hb_mult * (packet_sz & 0x7FF);
469 +
509 470 ret = cppi41_configure_channel(channel, packet_sz, mode, dma_addr, len);
510 471 if (!ret)
511 472 channel->status = MUSB_DMA_STATUS_FREE;
··· 672 607 cppi41_channel->port_num = port;
673 608 cppi41_channel->is_tx = is_tx;
674 609 INIT_LIST_HEAD(&cppi41_channel->tx_check);
610 + INIT_WORK(&cppi41_channel->dma_completion,
611 + cppi_trans_done_work);
675 612
676 613 musb_dma = &cppi41_channel->channel;
677 614 musb_dma->private_data = cppi41_channel;
+55 -3
drivers/usb/musb/musb_dsps.c
··· 45 45 #include <linux/of_irq.h>
46 46 #include <linux/usb/of.h>
47 47
48 + #include <linux/debugfs.h>
49 +
48 50 #include "musb_core.h"
49 51
50 52 static const struct of_device_id musb_dsps_of_match[];
··· 138 136 unsigned long last_timer; /* last timer data for each instance */
139 137
140 138 struct dsps_context context;
139 + struct debugfs_regset32 regset;
140 + struct dentry *dbgfs_root;
141 + };
142 +
143 + static const struct debugfs_reg32 dsps_musb_regs[] = {
144 + { "revision", 0x00 },
145 + { "control", 0x14 },
146 + { "status", 0x18 },
147 + { "eoi", 0x24 },
148 + { "intr0_stat", 0x30 },
149 + { "intr1_stat", 0x34 },
150 + { "intr0_set", 0x38 },
151 + { "intr1_set", 0x3c },
152 + { "txmode", 0x70 },
153 + { "rxmode", 0x74 },
154 + { "autoreq", 0xd0 },
155 + { "srpfixtime", 0xd4 },
156 + { "tdown", 0xd8 },
157 + { "phy_utmi", 0xe0 },
158 + { "mode", 0xe8 },
141 159 };
142 160
143 161 static void dsps_musb_try_idle(struct musb *musb, unsigned long timeout)
··· 390 368 return ret;
391 369 }
392 370
371 + static int dsps_musb_dbg_init(struct musb *musb, struct dsps_glue *glue)
372 + {
373 + struct dentry *root;
374 + struct dentry *file;
375 + char buf[128];
376 +
377 + sprintf(buf, "%s.dsps", dev_name(musb->controller));
378 + root = debugfs_create_dir(buf, NULL);
379 + if (!root)
380 + return -ENOMEM;
381 + glue->dbgfs_root = root;
382 +
383 + glue->regset.regs = dsps_musb_regs;
384 + glue->regset.nregs = ARRAY_SIZE(dsps_musb_regs);
385 + glue->regset.base = musb->ctrl_base;
386 +
387 + file = debugfs_create_regset32("regdump", S_IRUGO, root, &glue->regset);
388 + if (!file) {
389 + debugfs_remove_recursive(root);
390 + return -ENOMEM;
391 + }
392 + return 0;
393 + }
394 +
393 395 static int dsps_musb_init(struct musb *musb)
394 396 {
395 397 struct device *dev = musb->controller;
··· 423 377 void __iomem *reg_base;
424 378 struct resource *r;
425 379 u32 rev, val;
380 + int ret;
426 381
427 382 r = platform_get_resource_byname(parent, IORESOURCE_MEM, "control");
428 383 if (!r)
··· 456 409 val = dsps_readl(reg_base, wrp->phy_utmi);
457 410 val &= ~(1 << wrp->otg_disable);
458 411 dsps_writel(musb->ctrl_base, wrp->phy_utmi, val);
412 +
413 + ret = dsps_musb_dbg_init(musb, glue);
414 + if (ret)
415 + return ret;
459 416
460 417 return 0;
461 418 }
··· 667 616 wrp = match->data;
668 617
669 618 /* allocate glue */
670 - glue = kzalloc(sizeof(*glue), GFP_KERNEL);
619 + glue = devm_kzalloc(&pdev->dev, sizeof(*glue), GFP_KERNEL);
671 620 if (!glue) {
672 621 dev_err(&pdev->dev, "unable to allocate glue memory\n");
673 622 return -ENOMEM;
··· 695 644 pm_runtime_put(&pdev->dev);
696 645 err2:
697 646 pm_runtime_disable(&pdev->dev);
698 - kfree(glue);
699 647 return ret;
700 648 }
701 649
··· 707 657 /* disable usbss clocks */
708 658 pm_runtime_put(&pdev->dev);
709 659 pm_runtime_disable(&pdev->dev);
710 - kfree(glue);
660 +
661 + debugfs_remove_recursive(glue->dbgfs_root);
662 +
711 663 return 0;
712 664 }
713 665
+26 -4
drivers/usb/musb/musb_host.c
··· 1694 1694 | MUSB_RXCSR_RXPKTRDY); 1695 1695 musb_writew(hw_ep->regs, MUSB_RXCSR, val); 1696 1696 1697 - #if defined(CONFIG_USB_INVENTRA_DMA) || defined(CONFIG_USB_UX500_DMA) 1697 + #if defined(CONFIG_USB_INVENTRA_DMA) || defined(CONFIG_USB_UX500_DMA) || \ 1698 + defined(CONFIG_USB_TI_CPPI41_DMA) 1698 1699 if (usb_pipeisoc(pipe)) { 1699 1700 struct usb_iso_packet_descriptor *d; 1700 1701 ··· 1708 1707 if (d->status != -EILSEQ && d->status != -EOVERFLOW) 1709 1708 d->status = 0; 1710 1709 1711 - if (++qh->iso_idx >= urb->number_of_packets) 1710 + if (++qh->iso_idx >= urb->number_of_packets) { 1712 1711 done = true; 1713 - else 1712 + } else { 1713 + #if defined(CONFIG_USB_TI_CPPI41_DMA) 1714 + struct dma_controller *c; 1715 + dma_addr_t *buf; 1716 + u32 length, ret; 1717 + 1718 + c = musb->dma_controller; 1719 + buf = (void *) 1720 + urb->iso_frame_desc[qh->iso_idx].offset 1721 + + (u32)urb->transfer_dma; 1722 + 1723 + length = 1724 + urb->iso_frame_desc[qh->iso_idx].length; 1725 + 1726 + val |= MUSB_RXCSR_DMAENAB; 1727 + musb_writew(hw_ep->regs, MUSB_RXCSR, val); 1728 + 1729 + ret = c->channel_program(dma, qh->maxpacket, 1730 + 0, (u32) buf, length); 1731 + #endif 1714 1732 done = false; 1733 + } 1715 1734 1716 1735 } else { 1717 1736 /* done if urb buffer is full or short packet is recd */ ··· 1771 1750 } 1772 1751 1773 1752 /* we are expecting IN packets */ 1774 - #if defined(CONFIG_USB_INVENTRA_DMA) || defined(CONFIG_USB_UX500_DMA) 1753 + #if defined(CONFIG_USB_INVENTRA_DMA) || defined(CONFIG_USB_UX500_DMA) || \ 1754 + defined(CONFIG_USB_TI_CPPI41_DMA) 1775 1755 if (dma) { 1776 1756 struct dma_controller *c; 1777 1757 u16 rx_count;
+1 -1
drivers/usb/musb/omap2430.c
··· 37 37 #include <linux/err.h> 38 38 #include <linux/delay.h> 39 39 #include <linux/usb/musb-omap.h> 40 - #include <linux/usb/omap_control_usb.h> 40 + #include <linux/phy/omap_control_phy.h> 41 41 #include <linux/of_platform.h> 42 42 43 43 #include "musb_core.h"
-21
drivers/usb/phy/Kconfig
··· 75 75 built-in with usb ip or which are autonomous and doesn't require any 76 76 phy programming such as ISP1x04 etc. 77 77 78 - config OMAP_CONTROL_USB 79 - tristate "OMAP CONTROL USB Driver" 80 - depends on ARCH_OMAP2PLUS || COMPILE_TEST 81 - help 82 - Enable this to add support for the USB part present in the control 83 - module. This driver has API to power on the USB2 PHY and to write to 84 - the mailbox. The mailbox is present only in omap4 and the register to 85 - power on the USB2 PHY is present in OMAP4 and OMAP5. OMAP5 has an 86 - additional register to power on USB3 PHY. 87 - 88 - config OMAP_USB3 89 - tristate "OMAP USB3 PHY Driver" 90 - depends on ARCH_OMAP2PLUS || COMPILE_TEST 91 - select OMAP_CONTROL_USB 92 - select USB_PHY 93 - help 94 - Enable this to support the USB3 PHY that is part of SOC. This 95 - driver takes care of all the PHY functionality apart from comparator. 96 - This driver interacts with the "OMAP Control USB Driver" to power 97 - on/off the PHY. 98 - 99 78 config AM335X_CONTROL_USB 100 79 tristate 101 80
-2
drivers/usb/phy/Makefile
··· 13 13 obj-$(CONFIG_MV_U3D_PHY) += phy-mv-u3d-usb.o 14 14 obj-$(CONFIG_NOP_USB_XCEIV) += phy-generic.o 15 15 obj-$(CONFIG_TAHVO_USB) += phy-tahvo.o 16 - obj-$(CONFIG_OMAP_CONTROL_USB) += phy-omap-control.o 17 16 obj-$(CONFIG_AM335X_CONTROL_USB) += phy-am335x-control.o 18 17 obj-$(CONFIG_AM335X_PHY_USB) += phy-am335x.o 19 18 obj-$(CONFIG_OMAP_OTG) += phy-omap-otg.o 20 - obj-$(CONFIG_OMAP_USB3) += phy-omap-usb3.o 21 19 obj-$(CONFIG_SAMSUNG_USBPHY) += phy-samsung-usb.o 22 20 obj-$(CONFIG_SAMSUNG_USB2PHY) += phy-samsung-usb2.o 23 21 obj-$(CONFIG_SAMSUNG_USB3PHY) += phy-samsung-usb3.o
+5 -4
drivers/usb/phy/phy-fsm-usb.c
··· 317 317 otg_set_state(fsm, OTG_STATE_A_WAIT_VFALL); 318 318 break; 319 319 case OTG_STATE_A_HOST: 320 - if ((!fsm->a_bus_req || fsm->a_suspend_req_inf) && 320 + if (fsm->id || fsm->a_bus_drop) 321 + otg_set_state(fsm, OTG_STATE_A_WAIT_VFALL); 322 + else if ((!fsm->a_bus_req || fsm->a_suspend_req_inf) && 321 323 fsm->otg->host->b_hnp_enable) 322 324 otg_set_state(fsm, OTG_STATE_A_SUSPEND); 323 - else if (fsm->id || !fsm->b_conn || fsm->a_bus_drop) 325 + else if (!fsm->b_conn) 324 326 otg_set_state(fsm, OTG_STATE_A_WAIT_BCON); 325 327 else if (!fsm->a_vbus_vld) 326 328 otg_set_state(fsm, OTG_STATE_A_VBUS_ERR); ··· 348 346 otg_set_state(fsm, OTG_STATE_A_VBUS_ERR); 349 347 break; 350 348 case OTG_STATE_A_WAIT_VFALL: 351 - if (fsm->a_wait_vfall_tmout || fsm->id || fsm->a_bus_req || 352 - (!fsm->a_sess_vld && !fsm->b_conn)) 349 + if (fsm->a_wait_vfall_tmout) 353 350 otg_set_state(fsm, OTG_STATE_A_IDLE); 354 351 break; 355 352 case OTG_STATE_A_VBUS_ERR:
+2 -1
drivers/usb/phy/phy-msm-usb.c
··· 1428 1428 motg->phy.otg = kzalloc(sizeof(struct usb_otg), GFP_KERNEL); 1429 1429 if (!motg->phy.otg) { 1430 1430 dev_err(&pdev->dev, "unable to allocate msm_otg\n"); 1431 - return -ENOMEM; 1431 + ret = -ENOMEM; 1432 + goto free_motg; 1432 1433 } 1433 1434 1434 1435 motg->pdata = dev_get_platdata(&pdev->dev);
+296 -14
drivers/usb/phy/phy-mxs-usb.c
··· 1 1 /* 2 - * Copyright 2012 Freescale Semiconductor, Inc. 2 + * Copyright 2012-2013 Freescale Semiconductor, Inc. 3 3 * Copyright (C) 2012 Marek Vasut <marex@denx.de> 4 4 * on behalf of DENX Software Engineering GmbH 5 5 * ··· 20 20 #include <linux/delay.h> 21 21 #include <linux/err.h> 22 22 #include <linux/io.h> 23 + #include <linux/of_device.h> 24 + #include <linux/regmap.h> 25 + #include <linux/mfd/syscon.h> 23 26 24 27 #define DRIVER_NAME "mxs_phy" 25 28 ··· 31 28 #define HW_USBPHY_CTRL_SET 0x34 32 29 #define HW_USBPHY_CTRL_CLR 0x38 33 30 31 + #define HW_USBPHY_DEBUG_SET 0x54 32 + #define HW_USBPHY_DEBUG_CLR 0x58 33 + 34 + #define HW_USBPHY_IP 0x90 35 + #define HW_USBPHY_IP_SET 0x94 36 + #define HW_USBPHY_IP_CLR 0x98 37 + 34 38 #define BM_USBPHY_CTRL_SFTRST BIT(31) 35 39 #define BM_USBPHY_CTRL_CLKGATE BIT(30) 40 + #define BM_USBPHY_CTRL_ENAUTOSET_USBCLKS BIT(26) 41 + #define BM_USBPHY_CTRL_ENAUTOCLR_USBCLKGATE BIT(25) 42 + #define BM_USBPHY_CTRL_ENVBUSCHG_WKUP BIT(23) 43 + #define BM_USBPHY_CTRL_ENIDCHG_WKUP BIT(22) 44 + #define BM_USBPHY_CTRL_ENDPDMCHG_WKUP BIT(21) 45 + #define BM_USBPHY_CTRL_ENAUTOCLR_PHY_PWD BIT(20) 46 + #define BM_USBPHY_CTRL_ENAUTOCLR_CLKGATE BIT(19) 47 + #define BM_USBPHY_CTRL_ENAUTO_PWRON_PLL BIT(18) 36 48 #define BM_USBPHY_CTRL_ENUTMILEVEL3 BIT(15) 37 49 #define BM_USBPHY_CTRL_ENUTMILEVEL2 BIT(14) 38 50 #define BM_USBPHY_CTRL_ENHOSTDISCONDETECT BIT(1) 39 51 52 + #define BM_USBPHY_IP_FIX (BIT(17) | BIT(18)) 53 + 54 + #define BM_USBPHY_DEBUG_CLKGATE BIT(30) 55 + 56 + /* Anatop Registers */ 57 + #define ANADIG_ANA_MISC0 0x150 58 + #define ANADIG_ANA_MISC0_SET 0x154 59 + #define ANADIG_ANA_MISC0_CLR 0x158 60 + 61 + #define ANADIG_USB1_VBUS_DET_STAT 0x1c0 62 + #define ANADIG_USB2_VBUS_DET_STAT 0x220 63 + 64 + #define ANADIG_USB1_LOOPBACK_SET 0x1e4 65 + #define ANADIG_USB1_LOOPBACK_CLR 0x1e8 66 + #define ANADIG_USB2_LOOPBACK_SET 0x244 67 + #define ANADIG_USB2_LOOPBACK_CLR 0x248 68 + 69 + #define BM_ANADIG_ANA_MISC0_STOP_MODE_CONFIG 
BIT(12) 70 + #define BM_ANADIG_ANA_MISC0_STOP_MODE_CONFIG_SL BIT(11) 71 + 72 + #define BM_ANADIG_USB1_VBUS_DET_STAT_VBUS_VALID BIT(3) 73 + #define BM_ANADIG_USB2_VBUS_DET_STAT_VBUS_VALID BIT(3) 74 + 75 + #define BM_ANADIG_USB1_LOOPBACK_UTMI_DIG_TST1 BIT(2) 76 + #define BM_ANADIG_USB1_LOOPBACK_TSTI_TX_EN BIT(5) 77 + #define BM_ANADIG_USB2_LOOPBACK_UTMI_DIG_TST1 BIT(2) 78 + #define BM_ANADIG_USB2_LOOPBACK_TSTI_TX_EN BIT(5) 79 + 80 + #define to_mxs_phy(p) container_of((p), struct mxs_phy, phy) 81 + 82 + /* Do disconnection between PHY and controller without vbus */ 83 + #define MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS BIT(0) 84 + 85 + /* 86 + * The PHY will be in messy if there is a wakeup after putting 87 + * bus to suspend (set portsc.suspendM) but before setting PHY to low 88 + * power mode (set portsc.phcd). 89 + */ 90 + #define MXS_PHY_ABNORMAL_IN_SUSPEND BIT(1) 91 + 92 + /* 93 + * The SOF sends too fast after resuming, it will cause disconnection 94 + * between host and high speed device. 95 + */ 96 + #define MXS_PHY_SENDING_SOF_TOO_FAST BIT(2) 97 + 98 + /* 99 + * IC has bug fixes logic, they include 100 + * MXS_PHY_ABNORMAL_IN_SUSPEND and MXS_PHY_SENDING_SOF_TOO_FAST 101 + * which are described at above flags, the RTL will handle it 102 + * according to different versions. 
103 + */ 104 + #define MXS_PHY_NEED_IP_FIX BIT(3) 105 + 106 + struct mxs_phy_data { 107 + unsigned int flags; 108 + }; 109 + 110 + static const struct mxs_phy_data imx23_phy_data = { 111 + .flags = MXS_PHY_ABNORMAL_IN_SUSPEND | MXS_PHY_SENDING_SOF_TOO_FAST, 112 + }; 113 + 114 + static const struct mxs_phy_data imx6q_phy_data = { 115 + .flags = MXS_PHY_SENDING_SOF_TOO_FAST | 116 + MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS | 117 + MXS_PHY_NEED_IP_FIX, 118 + }; 119 + 120 + static const struct mxs_phy_data imx6sl_phy_data = { 121 + .flags = MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS | 122 + MXS_PHY_NEED_IP_FIX, 123 + }; 124 + 125 + static const struct of_device_id mxs_phy_dt_ids[] = { 126 + { .compatible = "fsl,imx6sl-usbphy", .data = &imx6sl_phy_data, }, 127 + { .compatible = "fsl,imx6q-usbphy", .data = &imx6q_phy_data, }, 128 + { .compatible = "fsl,imx23-usbphy", .data = &imx23_phy_data, }, 129 + { /* sentinel */ } 130 + }; 131 + MODULE_DEVICE_TABLE(of, mxs_phy_dt_ids); 132 + 40 133 struct mxs_phy { 41 134 struct usb_phy phy; 42 135 struct clk *clk; 136 + const struct mxs_phy_data *data; 137 + struct regmap *regmap_anatop; 138 + int port_id; 43 139 }; 44 140 45 - #define to_mxs_phy(p) container_of((p), struct mxs_phy, phy) 141 + static inline bool is_imx6q_phy(struct mxs_phy *mxs_phy) 142 + { 143 + return mxs_phy->data == &imx6q_phy_data; 144 + } 145 + 146 + static inline bool is_imx6sl_phy(struct mxs_phy *mxs_phy) 147 + { 148 + return mxs_phy->data == &imx6sl_phy_data; 149 + } 150 + 151 + /* 152 + * PHY needs some 32K cycles to switch from 32K clock to 153 + * bus (such as AHB/AXI, etc) clock. 
154 + */ 155 + static void mxs_phy_clock_switch_delay(void) 156 + { 157 + usleep_range(300, 400); 158 + } 46 159 47 160 static int mxs_phy_hw_init(struct mxs_phy *mxs_phy) 48 161 { ··· 172 53 /* Power up the PHY */ 173 54 writel(0, base + HW_USBPHY_PWD); 174 55 175 - /* enable FS/LS device */ 176 - writel(BM_USBPHY_CTRL_ENUTMILEVEL2 | 177 - BM_USBPHY_CTRL_ENUTMILEVEL3, 56 + /* 57 + * USB PHY Ctrl Setting 58 + * - Auto clock/power on 59 + * - Enable full/low speed support 60 + */ 61 + writel(BM_USBPHY_CTRL_ENAUTOSET_USBCLKS | 62 + BM_USBPHY_CTRL_ENAUTOCLR_USBCLKGATE | 63 + BM_USBPHY_CTRL_ENAUTOCLR_PHY_PWD | 64 + BM_USBPHY_CTRL_ENAUTOCLR_CLKGATE | 65 + BM_USBPHY_CTRL_ENAUTO_PWRON_PLL | 66 + BM_USBPHY_CTRL_ENUTMILEVEL2 | 67 + BM_USBPHY_CTRL_ENUTMILEVEL3, 178 68 base + HW_USBPHY_CTRL_SET); 179 69 70 + if (mxs_phy->data->flags & MXS_PHY_NEED_IP_FIX) 71 + writel(BM_USBPHY_IP_FIX, base + HW_USBPHY_IP_SET); 72 + 180 73 return 0; 74 + } 75 + 76 + /* Return true if the vbus is there */ 77 + static bool mxs_phy_get_vbus_status(struct mxs_phy *mxs_phy) 78 + { 79 + unsigned int vbus_value; 80 + 81 + if (mxs_phy->port_id == 0) 82 + regmap_read(mxs_phy->regmap_anatop, 83 + ANADIG_USB1_VBUS_DET_STAT, 84 + &vbus_value); 85 + else if (mxs_phy->port_id == 1) 86 + regmap_read(mxs_phy->regmap_anatop, 87 + ANADIG_USB2_VBUS_DET_STAT, 88 + &vbus_value); 89 + 90 + if (vbus_value & BM_ANADIG_USB1_VBUS_DET_STAT_VBUS_VALID) 91 + return true; 92 + else 93 + return false; 94 + } 95 + 96 + static void __mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool disconnect) 97 + { 98 + void __iomem *base = mxs_phy->phy.io_priv; 99 + u32 reg; 100 + 101 + if (disconnect) 102 + writel_relaxed(BM_USBPHY_DEBUG_CLKGATE, 103 + base + HW_USBPHY_DEBUG_CLR); 104 + 105 + if (mxs_phy->port_id == 0) { 106 + reg = disconnect ? 
ANADIG_USB1_LOOPBACK_SET 107 + : ANADIG_USB1_LOOPBACK_CLR; 108 + regmap_write(mxs_phy->regmap_anatop, reg, 109 + BM_ANADIG_USB1_LOOPBACK_UTMI_DIG_TST1 | 110 + BM_ANADIG_USB1_LOOPBACK_TSTI_TX_EN); 111 + } else if (mxs_phy->port_id == 1) { 112 + reg = disconnect ? ANADIG_USB2_LOOPBACK_SET 113 + : ANADIG_USB2_LOOPBACK_CLR; 114 + regmap_write(mxs_phy->regmap_anatop, reg, 115 + BM_ANADIG_USB2_LOOPBACK_UTMI_DIG_TST1 | 116 + BM_ANADIG_USB2_LOOPBACK_TSTI_TX_EN); 117 + } 118 + 119 + if (!disconnect) 120 + writel_relaxed(BM_USBPHY_DEBUG_CLKGATE, 121 + base + HW_USBPHY_DEBUG_SET); 122 + 123 + /* Delay some time, and let Linestate be SE0 for controller */ 124 + if (disconnect) 125 + usleep_range(500, 1000); 126 + } 127 + 128 + static void mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool on) 129 + { 130 + bool vbus_is_on = false; 131 + 132 + /* If the SoCs don't need to disconnect line without vbus, quit */ 133 + if (!(mxs_phy->data->flags & MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS)) 134 + return; 135 + 136 + /* If the SoCs don't have anatop, quit */ 137 + if (!mxs_phy->regmap_anatop) 138 + return; 139 + 140 + vbus_is_on = mxs_phy_get_vbus_status(mxs_phy); 141 + 142 + if (on && !vbus_is_on) 143 + __mxs_phy_disconnect_line(mxs_phy, true); 144 + else 145 + __mxs_phy_disconnect_line(mxs_phy, false); 146 + 181 147 } 182 148 183 149 static int mxs_phy_init(struct usb_phy *phy) ··· 270 66 int ret; 271 67 struct mxs_phy *mxs_phy = to_mxs_phy(phy); 272 68 69 + mxs_phy_clock_switch_delay(); 273 70 ret = clk_prepare_enable(mxs_phy->clk); 274 71 if (ret) 275 72 return ret; ··· 299 94 x->io_priv + HW_USBPHY_CTRL_SET); 300 95 clk_disable_unprepare(mxs_phy->clk); 301 96 } else { 97 + mxs_phy_clock_switch_delay(); 302 98 ret = clk_prepare_enable(mxs_phy->clk); 303 99 if (ret) 304 100 return ret; ··· 311 105 return 0; 312 106 } 313 107 108 + static int mxs_phy_set_wakeup(struct usb_phy *x, bool enabled) 109 + { 110 + struct mxs_phy *mxs_phy = to_mxs_phy(x); 111 + u32 value = 
BM_USBPHY_CTRL_ENVBUSCHG_WKUP | 112 + BM_USBPHY_CTRL_ENDPDMCHG_WKUP | 113 + BM_USBPHY_CTRL_ENIDCHG_WKUP; 114 + if (enabled) { 115 + mxs_phy_disconnect_line(mxs_phy, true); 116 + writel_relaxed(value, x->io_priv + HW_USBPHY_CTRL_SET); 117 + } else { 118 + writel_relaxed(value, x->io_priv + HW_USBPHY_CTRL_CLR); 119 + mxs_phy_disconnect_line(mxs_phy, false); 120 + } 121 + 122 + return 0; 123 + } 124 + 314 125 static int mxs_phy_on_connect(struct usb_phy *phy, 315 126 enum usb_device_speed speed) 316 127 { 317 - dev_dbg(phy->dev, "%s speed device has connected\n", 318 - (speed == USB_SPEED_HIGH) ? "high" : "non-high"); 128 + dev_dbg(phy->dev, "%s device has connected\n", 129 + (speed == USB_SPEED_HIGH) ? "HS" : "FS/LS"); 319 130 320 131 if (speed == USB_SPEED_HIGH) 321 132 writel(BM_USBPHY_CTRL_ENHOSTDISCONDETECT, ··· 344 121 static int mxs_phy_on_disconnect(struct usb_phy *phy, 345 122 enum usb_device_speed speed) 346 123 { 347 - dev_dbg(phy->dev, "%s speed device has disconnected\n", 348 - (speed == USB_SPEED_HIGH) ? "high" : "non-high"); 124 + dev_dbg(phy->dev, "%s device has disconnected\n", 125 + (speed == USB_SPEED_HIGH) ? 
"HS" : "FS/LS"); 349 126 350 127 if (speed == USB_SPEED_HIGH) 351 128 writel(BM_USBPHY_CTRL_ENHOSTDISCONDETECT, ··· 361 138 struct clk *clk; 362 139 struct mxs_phy *mxs_phy; 363 140 int ret; 141 + const struct of_device_id *of_id = 142 + of_match_device(mxs_phy_dt_ids, &pdev->dev); 143 + struct device_node *np = pdev->dev.of_node; 364 144 365 145 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 366 146 base = devm_ioremap_resource(&pdev->dev, res); ··· 383 157 return -ENOMEM; 384 158 } 385 159 160 + /* Some SoCs don't have anatop registers */ 161 + if (of_get_property(np, "fsl,anatop", NULL)) { 162 + mxs_phy->regmap_anatop = syscon_regmap_lookup_by_phandle 163 + (np, "fsl,anatop"); 164 + if (IS_ERR(mxs_phy->regmap_anatop)) { 165 + dev_dbg(&pdev->dev, 166 + "failed to find regmap for anatop\n"); 167 + return PTR_ERR(mxs_phy->regmap_anatop); 168 + } 169 + } 170 + 171 + ret = of_alias_get_id(np, "usbphy"); 172 + if (ret < 0) 173 + dev_dbg(&pdev->dev, "failed to get alias id, errno %d\n", ret); 174 + mxs_phy->port_id = ret; 175 + 386 176 mxs_phy->phy.io_priv = base; 387 177 mxs_phy->phy.dev = &pdev->dev; 388 178 mxs_phy->phy.label = DRIVER_NAME; ··· 408 166 mxs_phy->phy.notify_connect = mxs_phy_on_connect; 409 167 mxs_phy->phy.notify_disconnect = mxs_phy_on_disconnect; 410 168 mxs_phy->phy.type = USB_PHY_TYPE_USB2; 169 + mxs_phy->phy.set_wakeup = mxs_phy_set_wakeup; 411 170 412 171 mxs_phy->clk = clk; 172 + mxs_phy->data = of_id->data; 413 173 414 174 platform_set_drvdata(pdev, mxs_phy); 175 + 176 + device_set_wakeup_capable(&pdev->dev, true); 415 177 416 178 ret = usb_add_phy_dev(&mxs_phy->phy); 417 179 if (ret) ··· 433 187 return 0; 434 188 } 435 189 436 - static const struct of_device_id mxs_phy_dt_ids[] = { 437 - { .compatible = "fsl,imx23-usbphy", }, 438 - { /* sentinel */ } 439 - }; 440 - MODULE_DEVICE_TABLE(of, mxs_phy_dt_ids); 190 + #ifdef CONFIG_PM_SLEEP 191 + static void mxs_phy_enable_ldo_in_suspend(struct mxs_phy *mxs_phy, bool on) 192 + { 193 + 
unsigned int reg = on ? ANADIG_ANA_MISC0_SET : ANADIG_ANA_MISC0_CLR; 194 + 195 + /* If the SoCs don't have anatop, quit */ 196 + if (!mxs_phy->regmap_anatop) 197 + return; 198 + 199 + if (is_imx6q_phy(mxs_phy)) 200 + regmap_write(mxs_phy->regmap_anatop, reg, 201 + BM_ANADIG_ANA_MISC0_STOP_MODE_CONFIG); 202 + else if (is_imx6sl_phy(mxs_phy)) 203 + regmap_write(mxs_phy->regmap_anatop, 204 + reg, BM_ANADIG_ANA_MISC0_STOP_MODE_CONFIG_SL); 205 + } 206 + 207 + static int mxs_phy_system_suspend(struct device *dev) 208 + { 209 + struct mxs_phy *mxs_phy = dev_get_drvdata(dev); 210 + 211 + if (device_may_wakeup(dev)) 212 + mxs_phy_enable_ldo_in_suspend(mxs_phy, true); 213 + 214 + return 0; 215 + } 216 + 217 + static int mxs_phy_system_resume(struct device *dev) 218 + { 219 + struct mxs_phy *mxs_phy = dev_get_drvdata(dev); 220 + 221 + if (device_may_wakeup(dev)) 222 + mxs_phy_enable_ldo_in_suspend(mxs_phy, false); 223 + 224 + return 0; 225 + } 226 + #endif /* CONFIG_PM_SLEEP */ 227 + 228 + static SIMPLE_DEV_PM_OPS(mxs_phy_pm, mxs_phy_system_suspend, 229 + mxs_phy_system_resume); 441 230 442 231 static struct platform_driver mxs_phy_driver = { 443 232 .probe = mxs_phy_probe, ··· 481 200 .name = DRIVER_NAME, 482 201 .owner = THIS_MODULE, 483 202 .of_match_table = mxs_phy_dt_ids, 203 + .pm = &mxs_phy_pm, 484 204 }, 485 205 }; 486 206
+85 -84
drivers/usb/phy/phy-omap-control.c => drivers/phy/phy-omap-control.c
··· 1 1 /* 2 - * omap-control-usb.c - The USB part of control module. 2 + * omap-control-phy.c - The PHY part of control module. 3 3 * 4 4 * Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com 5 5 * This program is free software; you can redistribute it and/or modify ··· 24 24 #include <linux/err.h> 25 25 #include <linux/io.h> 26 26 #include <linux/clk.h> 27 - #include <linux/usb/omap_control_usb.h> 27 + #include <linux/phy/omap_control_phy.h> 28 28 29 29 /** 30 - * omap_control_usb_phy_power - power on/off the phy using control module reg 30 + * omap_control_phy_power - power on/off the phy using control module reg 31 31 * @dev: the control module device 32 32 * @on: 0 or 1, based on powering on or off the PHY 33 33 */ 34 - void omap_control_usb_phy_power(struct device *dev, int on) 34 + void omap_control_phy_power(struct device *dev, int on) 35 35 { 36 36 u32 val; 37 37 unsigned long rate; 38 - struct omap_control_usb *control_usb; 38 + struct omap_control_phy *control_phy; 39 39 40 40 if (IS_ERR(dev) || !dev) { 41 41 pr_err("%s: invalid device\n", __func__); 42 42 return; 43 43 } 44 44 45 - control_usb = dev_get_drvdata(dev); 46 - if (!control_usb) { 47 - dev_err(dev, "%s: invalid control usb device\n", __func__); 45 + control_phy = dev_get_drvdata(dev); 46 + if (!control_phy) { 47 + dev_err(dev, "%s: invalid control phy device\n", __func__); 48 48 return; 49 49 } 50 50 51 - if (control_usb->type == OMAP_CTRL_TYPE_OTGHS) 51 + if (control_phy->type == OMAP_CTRL_TYPE_OTGHS) 52 52 return; 53 53 54 - val = readl(control_usb->power); 54 + val = readl(control_phy->power); 55 55 56 - switch (control_usb->type) { 56 + switch (control_phy->type) { 57 57 case OMAP_CTRL_TYPE_USB2: 58 58 if (on) 59 59 val &= ~OMAP_CTRL_DEV_PHY_PD; ··· 62 62 break; 63 63 64 64 case OMAP_CTRL_TYPE_PIPE3: 65 - rate = clk_get_rate(control_usb->sys_clk); 65 + rate = clk_get_rate(control_phy->sys_clk); 66 66 rate = rate/1000000; 67 67 68 68 if (on) { 69 - val &= 
~(OMAP_CTRL_USB_PWRCTL_CLK_CMD_MASK | 70 - OMAP_CTRL_USB_PWRCTL_CLK_FREQ_MASK); 71 - val |= OMAP_CTRL_USB3_PHY_TX_RX_POWERON << 72 - OMAP_CTRL_USB_PWRCTL_CLK_CMD_SHIFT; 73 - val |= rate << OMAP_CTRL_USB_PWRCTL_CLK_FREQ_SHIFT; 69 + val &= ~(OMAP_CTRL_PIPE3_PHY_PWRCTL_CLK_CMD_MASK | 70 + OMAP_CTRL_PIPE3_PHY_PWRCTL_CLK_FREQ_MASK); 71 + val |= OMAP_CTRL_PIPE3_PHY_TX_RX_POWERON << 72 + OMAP_CTRL_PIPE3_PHY_PWRCTL_CLK_CMD_SHIFT; 73 + val |= rate << 74 + OMAP_CTRL_PIPE3_PHY_PWRCTL_CLK_FREQ_SHIFT; 74 75 } else { 75 - val &= ~OMAP_CTRL_USB_PWRCTL_CLK_CMD_MASK; 76 - val |= OMAP_CTRL_USB3_PHY_TX_RX_POWEROFF << 77 - OMAP_CTRL_USB_PWRCTL_CLK_CMD_SHIFT; 76 + val &= ~OMAP_CTRL_PIPE3_PHY_PWRCTL_CLK_CMD_MASK; 77 + val |= OMAP_CTRL_PIPE3_PHY_TX_RX_POWEROFF << 78 + OMAP_CTRL_PIPE3_PHY_PWRCTL_CLK_CMD_SHIFT; 78 79 } 79 80 break; 80 81 ··· 101 100 break; 102 101 default: 103 102 dev_err(dev, "%s: type %d not recognized\n", 104 - __func__, control_usb->type); 103 + __func__, control_phy->type); 105 104 break; 106 105 } 107 106 108 - writel(val, control_usb->power); 107 + writel(val, control_phy->power); 109 108 } 110 - EXPORT_SYMBOL_GPL(omap_control_usb_phy_power); 109 + EXPORT_SYMBOL_GPL(omap_control_phy_power); 111 110 112 111 /** 113 112 * omap_control_usb_host_mode - set AVALID, VBUSVALID and ID pin in grounded 114 - * @ctrl_usb: struct omap_control_usb * 113 + * @ctrl_phy: struct omap_control_phy * 115 114 * 116 115 * Writes to the mailbox register to notify the usb core that a usb 117 116 * device has been connected. 
118 117 */ 119 - static void omap_control_usb_host_mode(struct omap_control_usb *ctrl_usb) 118 + static void omap_control_usb_host_mode(struct omap_control_phy *ctrl_phy) 120 119 { 121 120 u32 val; 122 121 123 - val = readl(ctrl_usb->otghs_control); 122 + val = readl(ctrl_phy->otghs_control); 124 123 val &= ~(OMAP_CTRL_DEV_IDDIG | OMAP_CTRL_DEV_SESSEND); 125 124 val |= OMAP_CTRL_DEV_AVALID | OMAP_CTRL_DEV_VBUSVALID; 126 - writel(val, ctrl_usb->otghs_control); 125 + writel(val, ctrl_phy->otghs_control); 127 126 } 128 127 129 128 /** 130 129 * omap_control_usb_device_mode - set AVALID, VBUSVALID and ID pin in high 131 130 * impedance 132 - * @ctrl_usb: struct omap_control_usb * 131 + * @ctrl_phy: struct omap_control_phy * 133 132 * 134 133 * Writes to the mailbox register to notify the usb core that it has been 135 134 * connected to a usb host. 136 135 */ 137 - static void omap_control_usb_device_mode(struct omap_control_usb *ctrl_usb) 136 + static void omap_control_usb_device_mode(struct omap_control_phy *ctrl_phy) 138 137 { 139 138 u32 val; 140 139 141 - val = readl(ctrl_usb->otghs_control); 140 + val = readl(ctrl_phy->otghs_control); 142 141 val &= ~OMAP_CTRL_DEV_SESSEND; 143 142 val |= OMAP_CTRL_DEV_IDDIG | OMAP_CTRL_DEV_AVALID | 144 143 OMAP_CTRL_DEV_VBUSVALID; 145 - writel(val, ctrl_usb->otghs_control); 144 + writel(val, ctrl_phy->otghs_control); 146 145 } 147 146 148 147 /** 149 148 * omap_control_usb_set_sessionend - Enable SESSIONEND and IDIG to high 150 149 * impedance 151 - * @ctrl_usb: struct omap_control_usb * 150 + * @ctrl_phy: struct omap_control_phy * 152 151 * 153 152 * Writes to the mailbox register to notify the usb core it's now in 154 153 * disconnected state. 
155 154 */ 156 - static void omap_control_usb_set_sessionend(struct omap_control_usb *ctrl_usb) 155 + static void omap_control_usb_set_sessionend(struct omap_control_phy *ctrl_phy) 157 156 { 158 157 u32 val; 159 158 160 - val = readl(ctrl_usb->otghs_control); 159 + val = readl(ctrl_phy->otghs_control); 161 160 val &= ~(OMAP_CTRL_DEV_AVALID | OMAP_CTRL_DEV_VBUSVALID); 162 161 val |= OMAP_CTRL_DEV_IDDIG | OMAP_CTRL_DEV_SESSEND; 163 - writel(val, ctrl_usb->otghs_control); 162 + writel(val, ctrl_phy->otghs_control); 164 163 } 165 164 166 165 /** ··· 175 174 void omap_control_usb_set_mode(struct device *dev, 176 175 enum omap_control_usb_mode mode) 177 176 { 178 - struct omap_control_usb *ctrl_usb; 177 + struct omap_control_phy *ctrl_phy; 179 178 180 179 if (IS_ERR(dev) || !dev) 181 180 return; 182 181 183 - ctrl_usb = dev_get_drvdata(dev); 182 + ctrl_phy = dev_get_drvdata(dev); 184 183 185 - if (!ctrl_usb) { 186 - dev_err(dev, "Invalid control usb device\n"); 184 + if (!ctrl_phy) { 185 + dev_err(dev, "Invalid control phy device\n"); 187 186 return; 188 187 } 189 188 190 - if (ctrl_usb->type != OMAP_CTRL_TYPE_OTGHS) 189 + if (ctrl_phy->type != OMAP_CTRL_TYPE_OTGHS) 191 190 return; 192 191 193 192 switch (mode) { 194 193 case USB_MODE_HOST: 195 - omap_control_usb_host_mode(ctrl_usb); 194 + omap_control_usb_host_mode(ctrl_phy); 196 195 break; 197 196 case USB_MODE_DEVICE: 198 - omap_control_usb_device_mode(ctrl_usb); 197 + omap_control_usb_device_mode(ctrl_phy); 199 198 break; 200 199 case USB_MODE_DISCONNECT: 201 - omap_control_usb_set_sessionend(ctrl_usb); 200 + omap_control_usb_set_sessionend(ctrl_phy); 202 201 break; 203 202 default: 204 203 dev_vdbg(dev, "invalid omap control usb mode\n"); ··· 208 207 209 208 #ifdef CONFIG_OF 210 209 211 - static const enum omap_control_usb_type otghs_data = OMAP_CTRL_TYPE_OTGHS; 212 - static const enum omap_control_usb_type usb2_data = OMAP_CTRL_TYPE_USB2; 213 - static const enum omap_control_usb_type pipe3_data = 
OMAP_CTRL_TYPE_PIPE3; 214 - static const enum omap_control_usb_type dra7usb2_data = OMAP_CTRL_TYPE_DRA7USB2; 215 - static const enum omap_control_usb_type am437usb2_data = OMAP_CTRL_TYPE_AM437USB2; 210 + static const enum omap_control_phy_type otghs_data = OMAP_CTRL_TYPE_OTGHS; 211 + static const enum omap_control_phy_type usb2_data = OMAP_CTRL_TYPE_USB2; 212 + static const enum omap_control_phy_type pipe3_data = OMAP_CTRL_TYPE_PIPE3; 213 + static const enum omap_control_phy_type dra7usb2_data = OMAP_CTRL_TYPE_DRA7USB2; 214 + static const enum omap_control_phy_type am437usb2_data = OMAP_CTRL_TYPE_AM437USB2; 216 215 217 - static const struct of_device_id omap_control_usb_id_table[] = { 216 + static const struct of_device_id omap_control_phy_id_table[] = { 218 217 { 219 218 .compatible = "ti,control-phy-otghs", 220 219 .data = &otghs_data, ··· 228 227 .data = &pipe3_data, 229 228 }, 230 229 { 231 - .compatible = "ti,control-phy-dra7usb2", 230 + .compatible = "ti,control-phy-usb2-dra7", 232 231 .data = &dra7usb2_data, 233 232 }, 234 233 { 235 - .compatible = "ti,control-phy-am437usb2", 234 + .compatible = "ti,control-phy-usb2-am437", 236 235 .data = &am437usb2_data, 237 236 }, 238 237 {}, 239 238 }; 240 - MODULE_DEVICE_TABLE(of, omap_control_usb_id_table); 239 + MODULE_DEVICE_TABLE(of, omap_control_phy_id_table); 241 240 #endif 242 241 243 242 244 - static int omap_control_usb_probe(struct platform_device *pdev) 243 + static int omap_control_phy_probe(struct platform_device *pdev) 245 244 { 246 245 struct resource *res; 247 246 const struct of_device_id *of_id; 248 - struct omap_control_usb *control_usb; 247 + struct omap_control_phy *control_phy; 249 248 250 - of_id = of_match_device(of_match_ptr(omap_control_usb_id_table), 251 - &pdev->dev); 249 + of_id = of_match_device(of_match_ptr(omap_control_phy_id_table), 250 + &pdev->dev); 252 251 if (!of_id) 253 252 return -EINVAL; 254 253 255 - control_usb = devm_kzalloc(&pdev->dev, sizeof(*control_usb), 254 + control_phy = 
devm_kzalloc(&pdev->dev, sizeof(*control_phy), 256 255 GFP_KERNEL); 257 - if (!control_usb) { 258 - dev_err(&pdev->dev, "unable to alloc memory for control usb\n"); 256 + if (!control_phy) { 257 + dev_err(&pdev->dev, "unable to alloc memory for control phy\n"); 259 258 return -ENOMEM; 260 259 } 261 260 262 - control_usb->dev = &pdev->dev; 263 - control_usb->type = *(enum omap_control_usb_type *)of_id->data; 261 + control_phy->dev = &pdev->dev; 262 + control_phy->type = *(enum omap_control_phy_type *)of_id->data; 264 263 265 - if (control_usb->type == OMAP_CTRL_TYPE_OTGHS) { 264 + if (control_phy->type == OMAP_CTRL_TYPE_OTGHS) { 266 265 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 267 266 "otghs_control"); 268 - control_usb->otghs_control = devm_ioremap_resource( 267 + control_phy->otghs_control = devm_ioremap_resource( 269 268 &pdev->dev, res); 270 - if (IS_ERR(control_usb->otghs_control)) 271 - return PTR_ERR(control_usb->otghs_control); 269 + if (IS_ERR(control_phy->otghs_control)) 270 + return PTR_ERR(control_phy->otghs_control); 272 271 } else { 273 272 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 274 273 "power"); 275 - control_usb->power = devm_ioremap_resource(&pdev->dev, res); 276 - if (IS_ERR(control_usb->power)) { 274 + control_phy->power = devm_ioremap_resource(&pdev->dev, res); 275 + if (IS_ERR(control_phy->power)) { 277 276 dev_err(&pdev->dev, "Couldn't get power register\n"); 278 - return PTR_ERR(control_usb->power); 277 + return PTR_ERR(control_phy->power); 279 278 } 280 279 } 281 280 282 - if (control_usb->type == OMAP_CTRL_TYPE_PIPE3) { 283 - control_usb->sys_clk = devm_clk_get(control_usb->dev, 281 + if (control_phy->type == OMAP_CTRL_TYPE_PIPE3) { 282 + control_phy->sys_clk = devm_clk_get(control_phy->dev, 284 283 "sys_clkin"); 285 - if (IS_ERR(control_usb->sys_clk)) { 284 + if (IS_ERR(control_phy->sys_clk)) { 286 285 pr_err("%s: unable to get sys_clkin\n", __func__); 287 286 return -EINVAL; 288 287 } 289 288 } 290 289 291 
- dev_set_drvdata(control_usb->dev, control_usb); 290 + dev_set_drvdata(control_phy->dev, control_phy); 292 291 293 292 return 0; 294 293 } 295 294 296 - static struct platform_driver omap_control_usb_driver = { 297 - .probe = omap_control_usb_probe, 295 + static struct platform_driver omap_control_phy_driver = { 296 + .probe = omap_control_phy_probe, 298 297 .driver = { 299 - .name = "omap-control-usb", 298 + .name = "omap-control-phy", 300 299 .owner = THIS_MODULE, 301 - .of_match_table = of_match_ptr(omap_control_usb_id_table), 300 + .of_match_table = of_match_ptr(omap_control_phy_id_table), 302 301 }, 303 302 }; 304 303 305 - static int __init omap_control_usb_init(void) 304 + static int __init omap_control_phy_init(void) 306 305 { 307 - return platform_driver_register(&omap_control_usb_driver); 306 + return platform_driver_register(&omap_control_phy_driver); 308 307 } 309 - subsys_initcall(omap_control_usb_init); 308 + subsys_initcall(omap_control_phy_init); 310 309 311 - static void __exit omap_control_usb_exit(void) 310 + static void __exit omap_control_phy_exit(void) 312 311 { 313 - platform_driver_unregister(&omap_control_usb_driver); 312 + platform_driver_unregister(&omap_control_phy_driver); 314 313 } 315 - module_exit(omap_control_usb_exit); 314 + module_exit(omap_control_phy_exit); 316 315 317 - MODULE_ALIAS("platform: omap_control_usb"); 316 + MODULE_ALIAS("platform: omap_control_phy"); 318 317 MODULE_AUTHOR("Texas Instruments Inc."); 319 - MODULE_DESCRIPTION("OMAP Control Module USB Driver"); 318 + MODULE_DESCRIPTION("OMAP Control Module PHY Driver"); 320 319 MODULE_LICENSE("GPL v2");
-361
drivers/usb/phy/phy-omap-usb3.c
··· 1 - /* 2 - * omap-usb3 - USB PHY, talking to dwc3 controller in OMAP. 3 - * 4 - * Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com 5 - * This program is free software; you can redistribute it and/or modify 6 - * it under the terms of the GNU General Public License as published by 7 - * the Free Software Foundation; either version 2 of the License, or 8 - * (at your option) any later version. 9 - * 10 - * Author: Kishon Vijay Abraham I <kishon@ti.com> 11 - * 12 - * This program is distributed in the hope that it will be useful, 13 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - * GNU General Public License for more details. 16 - * 17 - */ 18 - 19 - #include <linux/module.h> 20 - #include <linux/platform_device.h> 21 - #include <linux/slab.h> 22 - #include <linux/usb/omap_usb.h> 23 - #include <linux/of.h> 24 - #include <linux/clk.h> 25 - #include <linux/err.h> 26 - #include <linux/pm_runtime.h> 27 - #include <linux/delay.h> 28 - #include <linux/usb/omap_control_usb.h> 29 - #include <linux/of_platform.h> 30 - 31 - #define PLL_STATUS 0x00000004 32 - #define PLL_GO 0x00000008 33 - #define PLL_CONFIGURATION1 0x0000000C 34 - #define PLL_CONFIGURATION2 0x00000010 35 - #define PLL_CONFIGURATION3 0x00000014 36 - #define PLL_CONFIGURATION4 0x00000020 37 - 38 - #define PLL_REGM_MASK 0x001FFE00 39 - #define PLL_REGM_SHIFT 0x9 40 - #define PLL_REGM_F_MASK 0x0003FFFF 41 - #define PLL_REGM_F_SHIFT 0x0 42 - #define PLL_REGN_MASK 0x000001FE 43 - #define PLL_REGN_SHIFT 0x1 44 - #define PLL_SELFREQDCO_MASK 0x0000000E 45 - #define PLL_SELFREQDCO_SHIFT 0x1 46 - #define PLL_SD_MASK 0x0003FC00 47 - #define PLL_SD_SHIFT 0x9 48 - #define SET_PLL_GO 0x1 49 - #define PLL_TICOPWDN 0x10000 50 - #define PLL_LOCK 0x2 51 - #define PLL_IDLE 0x1 52 - 53 - /* 54 - * This is an Empirical value that works, need to confirm the actual 55 - * value required for the 
USB3PHY_PLL_CONFIGURATION2.PLL_IDLE status 56 - * to be correctly reflected in the USB3PHY_PLL_STATUS register. 57 - */ 58 - # define PLL_IDLE_TIME 100; 59 - 60 - struct usb_dpll_map { 61 - unsigned long rate; 62 - struct usb_dpll_params params; 63 - }; 64 - 65 - static struct usb_dpll_map dpll_map[] = { 66 - {12000000, {1250, 5, 4, 20, 0} }, /* 12 MHz */ 67 - {16800000, {3125, 20, 4, 20, 0} }, /* 16.8 MHz */ 68 - {19200000, {1172, 8, 4, 20, 65537} }, /* 19.2 MHz */ 69 - {20000000, {1000, 7, 4, 10, 0} }, /* 20 MHz */ 70 - {26000000, {1250, 12, 4, 20, 0} }, /* 26 MHz */ 71 - {38400000, {3125, 47, 4, 20, 92843} }, /* 38.4 MHz */ 72 - }; 73 - 74 - static struct usb_dpll_params *omap_usb3_get_dpll_params(unsigned long rate) 75 - { 76 - int i; 77 - 78 - for (i = 0; i < ARRAY_SIZE(dpll_map); i++) { 79 - if (rate == dpll_map[i].rate) 80 - return &dpll_map[i].params; 81 - } 82 - 83 - return NULL; 84 - } 85 - 86 - static int omap_usb3_suspend(struct usb_phy *x, int suspend) 87 - { 88 - struct omap_usb *phy = phy_to_omapusb(x); 89 - int val; 90 - int timeout = PLL_IDLE_TIME; 91 - 92 - if (suspend && !phy->is_suspended) { 93 - val = omap_usb_readl(phy->pll_ctrl_base, PLL_CONFIGURATION2); 94 - val |= PLL_IDLE; 95 - omap_usb_writel(phy->pll_ctrl_base, PLL_CONFIGURATION2, val); 96 - 97 - do { 98 - val = omap_usb_readl(phy->pll_ctrl_base, PLL_STATUS); 99 - if (val & PLL_TICOPWDN) 100 - break; 101 - udelay(1); 102 - } while (--timeout); 103 - 104 - omap_control_usb_phy_power(phy->control_dev, 0); 105 - 106 - phy->is_suspended = 1; 107 - } else if (!suspend && phy->is_suspended) { 108 - phy->is_suspended = 0; 109 - 110 - val = omap_usb_readl(phy->pll_ctrl_base, PLL_CONFIGURATION2); 111 - val &= ~PLL_IDLE; 112 - omap_usb_writel(phy->pll_ctrl_base, PLL_CONFIGURATION2, val); 113 - 114 - do { 115 - val = omap_usb_readl(phy->pll_ctrl_base, PLL_STATUS); 116 - if (!(val & PLL_TICOPWDN)) 117 - break; 118 - udelay(1); 119 - } while (--timeout); 120 - } 121 - 122 - return 0; 123 - } 124 - 
···
- static void omap_usb_dpll_relock(struct omap_usb *phy)
- {
- 	u32 val;
- 	unsigned long timeout;
- 
- 	omap_usb_writel(phy->pll_ctrl_base, PLL_GO, SET_PLL_GO);
- 
- 	timeout = jiffies + msecs_to_jiffies(20);
- 	do {
- 		val = omap_usb_readl(phy->pll_ctrl_base, PLL_STATUS);
- 		if (val & PLL_LOCK)
- 			break;
- 	} while (!WARN_ON(time_after(jiffies, timeout)));
- }
- 
- static int omap_usb_dpll_lock(struct omap_usb *phy)
- {
- 	u32 val;
- 	unsigned long rate;
- 	struct usb_dpll_params *dpll_params;
- 
- 	rate = clk_get_rate(phy->sys_clk);
- 	dpll_params = omap_usb3_get_dpll_params(rate);
- 	if (!dpll_params) {
- 		dev_err(phy->dev,
- 			"No DPLL configuration for %lu Hz SYS CLK\n", rate);
- 		return -EINVAL;
- 	}
- 
- 	val = omap_usb_readl(phy->pll_ctrl_base, PLL_CONFIGURATION1);
- 	val &= ~PLL_REGN_MASK;
- 	val |= dpll_params->n << PLL_REGN_SHIFT;
- 	omap_usb_writel(phy->pll_ctrl_base, PLL_CONFIGURATION1, val);
- 
- 	val = omap_usb_readl(phy->pll_ctrl_base, PLL_CONFIGURATION2);
- 	val &= ~PLL_SELFREQDCO_MASK;
- 	val |= dpll_params->freq << PLL_SELFREQDCO_SHIFT;
- 	omap_usb_writel(phy->pll_ctrl_base, PLL_CONFIGURATION2, val);
- 
- 	val = omap_usb_readl(phy->pll_ctrl_base, PLL_CONFIGURATION1);
- 	val &= ~PLL_REGM_MASK;
- 	val |= dpll_params->m << PLL_REGM_SHIFT;
- 	omap_usb_writel(phy->pll_ctrl_base, PLL_CONFIGURATION1, val);
- 
- 	val = omap_usb_readl(phy->pll_ctrl_base, PLL_CONFIGURATION4);
- 	val &= ~PLL_REGM_F_MASK;
- 	val |= dpll_params->mf << PLL_REGM_F_SHIFT;
- 	omap_usb_writel(phy->pll_ctrl_base, PLL_CONFIGURATION4, val);
- 
- 	val = omap_usb_readl(phy->pll_ctrl_base, PLL_CONFIGURATION3);
- 	val &= ~PLL_SD_MASK;
- 	val |= dpll_params->sd << PLL_SD_SHIFT;
- 	omap_usb_writel(phy->pll_ctrl_base, PLL_CONFIGURATION3, val);
- 
- 	omap_usb_dpll_relock(phy);
- 
- 	return 0;
- }
- 
- static int omap_usb3_init(struct usb_phy *x)
- {
- 	struct omap_usb *phy = phy_to_omapusb(x);
- 	int ret;
- 
- 	ret = omap_usb_dpll_lock(phy);
- 	if (ret)
- 		return ret;
- 
- 	omap_control_usb_phy_power(phy->control_dev, 1);
- 
- 	return 0;
- }
- 
- static int omap_usb3_probe(struct platform_device *pdev)
- {
- 	struct omap_usb *phy;
- 	struct resource *res;
- 	struct device_node *node = pdev->dev.of_node;
- 	struct device_node *control_node;
- 	struct platform_device *control_pdev;
- 
- 	if (!node)
- 		return -EINVAL;
- 
- 	phy = devm_kzalloc(&pdev->dev, sizeof(*phy), GFP_KERNEL);
- 	if (!phy) {
- 		dev_err(&pdev->dev, "unable to alloc mem for OMAP USB3 PHY\n");
- 		return -ENOMEM;
- 	}
- 
- 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pll_ctrl");
- 	phy->pll_ctrl_base = devm_ioremap_resource(&pdev->dev, res);
- 	if (IS_ERR(phy->pll_ctrl_base))
- 		return PTR_ERR(phy->pll_ctrl_base);
- 
- 	phy->dev = &pdev->dev;
- 
- 	phy->phy.dev = phy->dev;
- 	phy->phy.label = "omap-usb3";
- 	phy->phy.init = omap_usb3_init;
- 	phy->phy.set_suspend = omap_usb3_suspend;
- 	phy->phy.type = USB_PHY_TYPE_USB3;
- 
- 	phy->is_suspended = 1;
- 	phy->wkupclk = devm_clk_get(phy->dev, "usb_phy_cm_clk32k");
- 	if (IS_ERR(phy->wkupclk)) {
- 		dev_err(&pdev->dev, "unable to get usb_phy_cm_clk32k\n");
- 		return PTR_ERR(phy->wkupclk);
- 	}
- 	clk_prepare(phy->wkupclk);
- 
- 	phy->optclk = devm_clk_get(phy->dev, "usb_otg_ss_refclk960m");
- 	if (IS_ERR(phy->optclk)) {
- 		dev_err(&pdev->dev, "unable to get usb_otg_ss_refclk960m\n");
- 		return PTR_ERR(phy->optclk);
- 	}
- 	clk_prepare(phy->optclk);
- 
- 	phy->sys_clk = devm_clk_get(phy->dev, "sys_clkin");
- 	if (IS_ERR(phy->sys_clk)) {
- 		pr_err("%s: unable to get sys_clkin\n", __func__);
- 		return -EINVAL;
- 	}
- 
- 	control_node = of_parse_phandle(node, "ctrl-module", 0);
- 	if (!control_node) {
- 		dev_err(&pdev->dev, "Failed to get control device phandle\n");
- 		return -EINVAL;
- 	}
- 	control_pdev = of_find_device_by_node(control_node);
- 	if (!control_pdev) {
- 		dev_err(&pdev->dev, "Failed to get control device\n");
- 		return -EINVAL;
- 	}
- 
- 	phy->control_dev = &control_pdev->dev;
- 
- 	omap_control_usb_phy_power(phy->control_dev, 0);
- 	usb_add_phy_dev(&phy->phy);
- 
- 	platform_set_drvdata(pdev, phy);
- 
- 	pm_runtime_enable(phy->dev);
- 	pm_runtime_get(&pdev->dev);
- 
- 	return 0;
- }
- 
- static int omap_usb3_remove(struct platform_device *pdev)
- {
- 	struct omap_usb *phy = platform_get_drvdata(pdev);
- 
- 	clk_unprepare(phy->wkupclk);
- 	clk_unprepare(phy->optclk);
- 	usb_remove_phy(&phy->phy);
- 	if (!pm_runtime_suspended(&pdev->dev))
- 		pm_runtime_put(&pdev->dev);
- 	pm_runtime_disable(&pdev->dev);
- 
- 	return 0;
- }
- 
- #ifdef CONFIG_PM_RUNTIME
- 
- static int omap_usb3_runtime_suspend(struct device *dev)
- {
- 	struct platform_device *pdev = to_platform_device(dev);
- 	struct omap_usb *phy = platform_get_drvdata(pdev);
- 
- 	clk_disable(phy->wkupclk);
- 	clk_disable(phy->optclk);
- 
- 	return 0;
- }
- 
- static int omap_usb3_runtime_resume(struct device *dev)
- {
- 	u32 ret = 0;
- 	struct platform_device *pdev = to_platform_device(dev);
- 	struct omap_usb *phy = platform_get_drvdata(pdev);
- 
- 	ret = clk_enable(phy->optclk);
- 	if (ret) {
- 		dev_err(phy->dev, "Failed to enable optclk %d\n", ret);
- 		goto err1;
- 	}
- 
- 	ret = clk_enable(phy->wkupclk);
- 	if (ret) {
- 		dev_err(phy->dev, "Failed to enable wkupclk %d\n", ret);
- 		goto err2;
- 	}
- 
- 	return 0;
- 
- err2:
- 	clk_disable(phy->optclk);
- 
- err1:
- 	return ret;
- }
- 
- static const struct dev_pm_ops omap_usb3_pm_ops = {
- 	SET_RUNTIME_PM_OPS(omap_usb3_runtime_suspend, omap_usb3_runtime_resume,
- 			   NULL)
- };
- 
- #define DEV_PM_OPS	(&omap_usb3_pm_ops)
- #else
- #define DEV_PM_OPS	NULL
- #endif
- 
- #ifdef CONFIG_OF
- static const struct of_device_id omap_usb3_id_table[] = {
- 	{ .compatible = "ti,omap-usb3" },
- 	{}
- };
- MODULE_DEVICE_TABLE(of, omap_usb3_id_table);
- #endif
- 
- static struct platform_driver omap_usb3_driver = {
- 	.probe		= omap_usb3_probe,
- 	.remove		= omap_usb3_remove,
- 	.driver		= {
- 		.name	= "omap-usb3",
- 		.owner	= THIS_MODULE,
- 		.pm	= DEV_PM_OPS,
- 		.of_match_table = of_match_ptr(omap_usb3_id_table),
- 	},
- };
- 
- module_platform_driver(omap_usb3_driver);
- 
- MODULE_ALIAS("platform: omap_usb3");
- MODULE_AUTHOR("Texas Instruments Inc.");
- MODULE_DESCRIPTION("OMAP USB3 phy driver");
- MODULE_LICENSE("GPL v2");
+3 -3
drivers/usb/phy/phy-rcar-gen2-usb.c
···
  	struct clk *clk;
  	int retval;
  
- 	pdata = dev_get_platdata(&pdev->dev);
+ 	pdata = dev_get_platdata(dev);
  	if (!pdata) {
  		dev_err(dev, "No platform data\n");
  		return -EINVAL;
  	}
  
- 	clk = devm_clk_get(&pdev->dev, "usbhs");
+ 	clk = devm_clk_get(dev, "usbhs");
  	if (IS_ERR(clk)) {
- 		dev_err(&pdev->dev, "Can't get the clock\n");
+ 		dev_err(dev, "Can't get the clock\n");
  		return PTR_ERR(clk);
  	}
  
+1 -1
drivers/usb/phy/phy-twl6030-usb.c
···
  #include <linux/io.h>
  #include <linux/usb/musb-omap.h>
  #include <linux/usb/phy_companion.h>
- #include <linux/usb/omap_usb.h>
+ #include <linux/phy/omap_usb.h>
  #include <linux/i2c/twl.h>
  #include <linux/regulator/consumer.h>
  #include <linux/err.h>
+2
drivers/usb/phy/phy-ulpi.c
···
  static struct ulpi_info ulpi_ids[] = {
  	ULPI_INFO(ULPI_ID(0x04cc, 0x1504), "NXP ISP1504"),
  	ULPI_INFO(ULPI_ID(0x0424, 0x0006), "SMSC USB331x"),
+ 	ULPI_INFO(ULPI_ID(0x0424, 0x0007), "SMSC USB3320"),
+ 	ULPI_INFO(ULPI_ID(0x0451, 0x1507), "TI TUSB1210"),
  };
  
  static int ulpi_set_otg_flags(struct usb_phy *phy)
+3 -3
drivers/usb/serial/ch341.c
···
  	if (r)
  		goto out;
  
- 	dev_dbg(&port->dev, "%s - submitting interrupt urb", __func__);
+ 	dev_dbg(&port->dev, "%s - submitting interrupt urb\n", __func__);
  	r = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL);
  	if (r) {
- 		dev_err(&port->dev, "%s - failed submitting interrupt urb,"
- 			" error %d\n", __func__, r);
+ 		dev_err(&port->dev, "%s - failed to submit interrupt urb: %d\n",
+ 			__func__, r);
  		ch341_close(port);
  		goto out;
  	}
+1 -1
drivers/usb/serial/cyberjack.c
···
  	result = usb_submit_urb(port->write_urb, GFP_ATOMIC);
  	if (result) {
  		dev_err(&port->dev,
- 			"%s - failed submitting write urb, error %d",
+ 			"%s - failed submitting write urb, error %d\n",
  			__func__, result);
  		/* Throw away data. No better idea what to do with it. */
  		priv->wrfilled = 0;
+4 -17
drivers/usb/serial/cypress_m8.c
···
  		 * the generic firmware, but are not used with
  		 * NMEA and SiRF protocols */
  		dev_dbg(&port->dev,
- 			"%s - failed setting baud rate, unsupported speed of %d on Earthmate GPS",
+ 			"%s - failed setting baud rate, unsupported speed of %d on Earthmate GPS\n",
  			__func__, new_rate);
  		return -1;
  	}
···
  	struct usb_serial_port *port = urb->context;
  	struct cypress_private *priv = usb_get_serial_port_data(port);
  	struct device *dev = &urb->dev->dev;
- 	int result;
  	int status = urb->status;
  
  	switch (status) {
···
  			__func__, status);
  		priv->write_urb_in_use = 0;
  		return;
- 	case -EPIPE: /* no break needed; clear halt and resubmit */
- 		if (!priv->comm_is_ok)
- 			break;
- 		usb_clear_halt(port->serial->dev, 0x02);
- 		/* error in the urb, so we have to resubmit it */
- 		dev_dbg(dev, "%s - nonzero write bulk status received: %d\n",
- 			__func__, status);
- 		port->interrupt_out_urb->transfer_buffer_length = 1;
- 		result = usb_submit_urb(port->interrupt_out_urb, GFP_ATOMIC);
- 		if (!result)
- 			return;
- 		dev_err(dev, "%s - failed resubmitting write urb, error %d\n",
- 			__func__, result);
- 		cypress_set_dead(port);
- 		break;
+ 	case -EPIPE:
+ 		/* Cannot call usb_clear_halt while in_interrupt */
+ 		/* FALLTHROUGH */
  	default:
  		dev_err(dev, "%s - unexpected nonzero write status received: %d\n",
  			__func__, status);
+42 -19
drivers/usb/serial/generic.c
···
  	 * stuff like 3G modems, so shortcircuit it in the 99.9999999% of
  	 * cases where the USB serial is not a console anyway.
  	 */
- 	if (!port->port.console || !port->sysrq)
+ 	if (!port->port.console || !port->sysrq) {
  		tty_insert_flip_string(&port->port, ch, urb->actual_length);
- 	else {
+ 	} else {
  		for (i = 0; i < urb->actual_length; i++, ch++) {
  			if (!usb_serial_handle_sysrq_char(port, *ch))
  				tty_insert_flip_char(&port->port, *ch, TTY_NORMAL);
···
  
  	dev_dbg(&port->dev, "%s - urb %d, len %d\n", __func__, i,
  		urb->actual_length);
- 
- 	if (urb->status) {
- 		dev_dbg(&port->dev, "%s - non-zero urb status: %d\n",
- 			__func__, urb->status);
+ 	switch (urb->status) {
+ 	case 0:
+ 		break;
+ 	case -ENOENT:
+ 	case -ECONNRESET:
+ 	case -ESHUTDOWN:
+ 		dev_dbg(&port->dev, "%s - urb stopped: %d\n",
+ 			__func__, urb->status);
  		return;
+ 	case -EPIPE:
+ 		dev_err(&port->dev, "%s - urb stopped: %d\n",
+ 			__func__, urb->status);
+ 		return;
+ 	default:
+ 		dev_err(&port->dev, "%s - nonzero urb status: %d\n",
+ 			__func__, urb->status);
+ 		goto resubmit;
  	}
  
  	usb_serial_debug_data(&port->dev, __func__, urb->actual_length, data);
  	port->serial->type->process_read_urb(urb);
  
+ resubmit:
  	/* Throttle the device if requested by tty */
  	spin_lock_irqsave(&port->lock, flags);
  	port->throttled = port->throttle_req;
  	if (!port->throttled) {
  		spin_unlock_irqrestore(&port->lock, flags);
  		usb_serial_generic_submit_read_urb(port, i, GFP_ATOMIC);
- 	} else
+ 	} else {
  		spin_unlock_irqrestore(&port->lock, flags);
+ 	}
  }
  EXPORT_SYMBOL_GPL(usb_serial_generic_read_bulk_callback);
···
  {
  	unsigned long flags;
  	struct usb_serial_port *port = urb->context;
- 	int status = urb->status;
  	int i;
  
- 	for (i = 0; i < ARRAY_SIZE(port->write_urbs); ++i)
+ 	for (i = 0; i < ARRAY_SIZE(port->write_urbs); ++i) {
  		if (port->write_urbs[i] == urb)
  			break;
- 
+ 	}
  	spin_lock_irqsave(&port->lock, flags);
  	port->tx_bytes -= urb->transfer_buffer_length;
  	set_bit(i, &port->write_urbs_free);
  	spin_unlock_irqrestore(&port->lock, flags);
  
- 	if (status) {
- 		dev_dbg(&port->dev, "%s - non-zero urb status: %d\n",
- 			__func__, status);
- 
- 		spin_lock_irqsave(&port->lock, flags);
- 		kfifo_reset_out(&port->write_fifo);
- 		spin_unlock_irqrestore(&port->lock, flags);
- 	} else {
- 		usb_serial_generic_write_start(port, GFP_ATOMIC);
+ 	switch (urb->status) {
+ 	case 0:
+ 		break;
+ 	case -ENOENT:
+ 	case -ECONNRESET:
+ 	case -ESHUTDOWN:
+ 		dev_dbg(&port->dev, "%s - urb stopped: %d\n",
+ 			__func__, urb->status);
+ 		return;
+ 	case -EPIPE:
+ 		dev_err_console(port, "%s - urb stopped: %d\n",
+ 			__func__, urb->status);
+ 		return;
+ 	default:
+ 		dev_err_console(port, "%s - nonzero urb status: %d\n",
+ 			__func__, urb->status);
+ 		goto resubmit;
  	}
  
+ resubmit:
+ 	usb_serial_generic_write_start(port, GFP_ATOMIC);
  	usb_serial_port_softint(port);
  }
  EXPORT_SYMBOL_GPL(usb_serial_generic_write_bulk_callback);
+1 -1
drivers/usb/serial/iuu_phoenix.c
···
  		goto fail_store_vcc_mode;
  	}
  
- 	dev_dbg(dev, "%s: setting vcc_mode = %ld", __func__, v);
+ 	dev_dbg(dev, "%s: setting vcc_mode = %ld\n", __func__, v);
  
  	if ((v != 3) && (v != 5)) {
  		dev_err(dev, "%s - vcc_mode %ld is invalid\n", __func__, v);
-30
drivers/usb/serial/keyspan.c
···
  
  	msg = (struct keyspan_usa26_portStatusMessage *)data;
  
- #if 0
- 	dev_dbg(&urb->dev->dev,
- 		"%s - port status: port %d cts %d dcd %d dsr %d ri %d toff %d txoff %d rxen %d cr %d",
- 		__func__, msg->port, msg->hskia_cts, msg->gpia_dcd, msg->dsr,
- 		msg->ri, msg->_txOff, msg->_txXoff, msg->rxEnabled,
- 		msg->controlResponse);
- #endif
- 
- 	/* Now do something useful with the data */
- 
- 
  	/* Check port number from message and retrieve private data */
  	if (msg->port >= serial->num_ports) {
  		dev_dbg(&urb->dev->dev, "%s - Unexpected port number %d\n", __func__, msg->port);
···
  		goto exit;
  	}
  
- 	/*dev_dbg(&urb->dev->dev, "%s %12ph", __func__, data);*/
- 
- 	/* Now do something useful with the data */
  	msg = (struct keyspan_usa28_portStatusMessage *)data;
  
  	/* Check port number from message and retrieve private data */
···
  		goto exit;
  	}
  
- 	/*dev_dbg(&urb->dev->dev, "%s: %11ph", __func__, data);*/
- 
- 	/* Now do something useful with the data */
  	msg = (struct keyspan_usa49_portStatusMessage *)data;
  
  	/* Check port number from message and retrieve private data */
···
  	err = usb_submit_urb(this_urb, GFP_ATOMIC);
  	if (err != 0)
  		dev_dbg(&port->dev, "%s - usb_submit_urb(setup) failed\n", __func__);
- #if 0
- 	else {
- 		dev_dbg(&port->dev, "%s - usb_submit_urb(setup) OK %d bytes\n", __func__,
- 			this_urb->transfer_buffer_length);
- 	}
- #endif
  
  	return 0;
  }
···
  	err = usb_submit_urb(this_urb, GFP_ATOMIC);
  	if (err != 0)
  		dev_dbg(&port->dev, "%s - usb_submit_urb(setup) failed (%d)\n", __func__, err);
- #if 0
- 	else {
- 		dev_dbg(&port->dev, "%s - usb_submit_urb(%d) OK %d bytes (end %d)\n", __func__,
- 			outcont_urb, this_urb->transfer_buffer_length,
- 			usb_pipeendpoint(this_urb->pipe));
- 	}
- #endif
  
  	return 0;
  }
+1 -1
drivers/usb/serial/keyspan_pda.c
···
  	retval = usb_submit_urb(urb, GFP_ATOMIC);
  	if (retval)
  		dev_err(&port->dev,
- 			"%s - usb_submit_urb failed with result %d",
+ 			"%s - usb_submit_urb failed with result %d\n",
  			__func__, retval);
  }
  
+2 -2
drivers/usb/serial/kl5kusb105.c
···
  	else {
  		status = get_unaligned_le16(status_buf);
  
- 		dev_info(&port->serial->dev->dev, "read status %x %x",
+ 		dev_info(&port->serial->dev->dev, "read status %x %x\n",
  			 status_buf[0], status_buf[1]);
  
  		*line_state_p = klsi_105_status2linestate(status);
···
  		priv->cfg.baudrate = kl5kusb105a_sio_b115200;
  		break;
  	default:
- 		dev_dbg(dev, "KLSI USB->Serial converter: unsupported baudrate request, using default of 9600");
+ 		dev_dbg(dev, "unsupported baudrate, using 9600\n");
  		priv->cfg.baudrate = kl5kusb105a_sio_b9600;
  		baud = 9600;
  		break;
+2 -1
drivers/usb/serial/kobil_sct.c
···
  		);
  
  		dev_dbg(&port->dev,
- 			"%s - Send reset_all_queues (FLUSH) URB returns: %i", __func__, result);
+ 			"%s - Send reset_all_queues (FLUSH) URB returns: %i\n",
+ 			__func__, result);
  		kfree(transfer_buffer);
  		return (result < 0) ? -EIO: 0;
  	default:
+6 -6
drivers/usb/serial/mos7720.c
···
  			index, NULL, 0, MOS_WDR_TIMEOUT);
  	if (status < 0)
  		dev_err(&usbdev->dev,
- 			"mos7720: usb_control_msg() failed: %d", status);
+ 			"mos7720: usb_control_msg() failed: %d\n", status);
  	return status;
  }
···
  		*data = *buf;
  	else if (status < 0)
  		dev_err(&usbdev->dev,
- 			"mos7720: usb_control_msg() failed: %d", status);
+ 			"mos7720: usb_control_msg() failed: %d\n", status);
  	kfree(buf);
  
  	return status;
···
  			      &mos_parport->deferred_urbs);
  		spin_unlock_irqrestore(&mos_parport->listlock, flags);
  		tasklet_schedule(&mos_parport->urb_tasklet);
- 		dev_dbg(&usbdev->dev, "tasklet scheduled");
+ 		dev_dbg(&usbdev->dev, "tasklet scheduled\n");
  		return 0;
  	}
···
  	mutex_unlock(&serial->disc_mutex);
  	if (ret_val) {
  		dev_err(&usbdev->dev,
- 			"%s: submit_urb() failed: %d", __func__, ret_val);
+ 			"%s: submit_urb() failed: %d\n", __func__, ret_val);
  		spin_lock_irqsave(&mos_parport->listlock, flags);
  		list_del(&urbtrack->urblist_entry);
  		spin_unlock_irqrestore(&mos_parport->listlock, flags);
···
  	parport_epilogue(pp);
  	if (retval) {
  		dev_err(&mos_parport->serial->dev->dev,
- 			"mos7720: usb_bulk_msg() failed: %d", retval);
+ 			"mos7720: usb_bulk_msg() failed: %d\n", retval);
  		return 0;
  	}
  	return actual_len;
···
  	if (!(iir & 0x01)) {	/* serial port interrupt pending */
  		switch (iir & 0x0f) {
  		case SERIAL_IIR_RLS:
- 			dev_dbg(dev, "Serial Port: Receiver status error or address bit detected in 9-bit mode\n\n");
+ 			dev_dbg(dev, "Serial Port: Receiver status error or address bit detected in 9-bit mode\n");
  			break;
  		case SERIAL_IIR_CTI:
  			dev_dbg(dev, "Serial Port: Receiver time out\n");
+2 -2
drivers/usb/serial/mos7840.c
···
  	case -ENOENT:
  	case -ESHUTDOWN:
  		/* This urb is terminated, clean up */
- 		dev_dbg(&urb->dev->dev, "%s - urb shutting down with status: %d",
+ 		dev_dbg(&urb->dev->dev, "%s - urb shutting down: %d\n",
  			__func__, urb->status);
  		break;
  	default:
- 		dev_dbg(&urb->dev->dev, "%s - nonzero urb status received: %d",
+ 		dev_dbg(&urb->dev->dev, "%s - nonzero urb status: %d\n",
  			__func__, urb->status);
  	}
  }
+1 -1
drivers/usb/serial/quatech2.c
···
  				 device_port, data, 2, QT2_USB_TIMEOUT);
  
  	if (status < 0) {
- 		dev_err(&port->dev, "%s - open port failed %i", __func__,
+ 		dev_err(&port->dev, "%s - open port failed %i\n", __func__,
  			status);
  		kfree(data);
  		return status;
+3 -4
drivers/usb/serial/spcp8x5.c
···
  			    GET_UART_STATUS, GET_UART_STATUS_TYPE,
  			    0, GET_UART_STATUS_MSR, buf, 1, 100);
  	if (ret < 0)
- 		dev_err(&port->dev, "failed to get modem status: %d", ret);
+ 		dev_err(&port->dev, "failed to get modem status: %d\n", ret);
  
- 	dev_dbg(&port->dev, "0xc0:0x22:0:6  %d - 0x02%x", ret, *buf);
+ 	dev_dbg(&port->dev, "0xc0:0x22:0:6  %d - 0x02%x\n", ret, *buf);
  	*status = *buf;
  	kfree(buf);
···
  	case 1000000:
  		buf[0] = 0x0b;	break;
  	default:
- 		dev_err(&port->dev, "spcp825 driver does not support the "
- 			"baudrate requested, using default of 9600.\n");
+ 		dev_err(&port->dev, "unsupported baudrate, using 9600\n");
  	}
  
  	/* Set Data Length : 00:5bit, 01:6bit, 10:7bit, 11:8bit */
+1 -3
drivers/usb/serial/symbolserial.c
···
  		tty_insert_flip_string(&port->port, &data[1], data_length);
  		tty_flip_buffer_push(&port->port);
  	} else {
- 		dev_dbg(&port->dev,
- 			"Improper amount of data received from the device, "
- 			"%d bytes", urb->actual_length);
+ 		dev_dbg(&port->dev, "%s - short packet\n", __func__);
  	}
  
  exit:
+2 -2
drivers/usb/serial/ti_usb_3410_5052.c
···
  	int status;
  
  	dev_dbg(&dev->dev,
- 		"%s - product 0x%4X, num configurations %d, configuration value %d",
+ 		"%s - product 0x%4X, num configurations %d, configuration value %d\n",
  		__func__, le16_to_cpu(dev->descriptor.idProduct),
  		dev->descriptor.bNumConfigurations,
  		dev->actconfig->desc.bConfigurationValue);
···
  	tty_encode_baud_rate(tty, baud, baud);
  
  	dev_dbg(&port->dev,
- 		"%s - BaudRate=%d, wBaudRate=%d, wFlags=0x%04X, bDataBits=%d, bParity=%d, bStopBits=%d, cXon=%d, cXoff=%d, bUartMode=%d",
+ 		"%s - BaudRate=%d, wBaudRate=%d, wFlags=0x%04X, bDataBits=%d, bParity=%d, bStopBits=%d, cXon=%d, cXoff=%d, bUartMode=%d\n",
  		__func__, baud, config->wBaudRate, config->wFlags,
  		config->bDataBits, config->bParity, config->bStopBits,
  		config->cXon, config->cXoff, config->bUartMode);
+8 -9
drivers/usb/serial/usb-serial.c
···
  	max_endpoints = max(max_endpoints, (int)serial->num_ports);
  	serial->num_port_pointers = max_endpoints;
  
- 	dev_dbg(ddev, "setting up %d port structures for this device", max_endpoints);
+ 	dev_dbg(ddev, "setting up %d port structure(s)\n", max_endpoints);
  	for (i = 0; i < max_endpoints; ++i) {
  		port = kzalloc(sizeof(struct usb_serial_port), GFP_KERNEL);
  		if (!port)
···
  		port = serial->port[i];
  		if (kfifo_alloc(&port->write_fifo, PAGE_SIZE, GFP_KERNEL))
  			goto probe_error;
- 		buffer_size = serial->type->bulk_out_size;
- 		if (!buffer_size)
- 			buffer_size = usb_endpoint_maxp(endpoint);
+ 		buffer_size = max_t(int, serial->type->bulk_out_size,
+ 				usb_endpoint_maxp(endpoint));
  		port->bulk_out_size = buffer_size;
  		port->bulk_out_endpointAddress = endpoint->bEndpointAddress;
···
  	for (i = 0; i < num_ports; ++i) {
  		port = serial->port[i];
  		dev_set_name(&port->dev, "ttyUSB%d", port->minor);
- 		dev_dbg(ddev, "registering %s", dev_name(&port->dev));
+ 		dev_dbg(ddev, "registering %s\n", dev_name(&port->dev));
  		device_enable_async_suspend(&port->dev);
  
  		retval = device_add(&port->dev);
···
  	usb_serial_unpoison_port_urbs(serial);
  
  	serial->suspending = 0;
- 	if (serial->type->reset_resume)
+ 	if (serial->type->reset_resume) {
  		rv = serial->type->reset_resume(serial);
- 	else {
+ 	} else {
  		rv = -EOPNOTSUPP;
  		intf->needs_binding = 1;
  	}
···
  	if (retval) {
  		pr_err("problem %d when registering driver %s\n", retval, driver->description);
  		list_del(&driver->driver_list);
- 	} else
+ 	} else {
  		pr_info("USB Serial support registered for %s\n", driver->description);
- 
+ 	}
  	mutex_unlock(&table_lock);
  	return retval;
  }
+1 -1
drivers/usb/storage/Kconfig
···
  
  config USB_UAS
  	tristate "USB Attached SCSI"
- 	depends on SCSI && BROKEN
+ 	depends on SCSI && USB_STORAGE
  	help
  	  The USB Attached SCSI protocol is supported by some USB
  	  storage devices. It permits higher performance by supporting
+96
drivers/usb/storage/uas-detect.h
···
+ #include <linux/usb.h>
+ #include <linux/usb/hcd.h>
+ #include "usb.h"
+ 
+ static int uas_is_interface(struct usb_host_interface *intf)
+ {
+ 	return (intf->desc.bInterfaceClass == USB_CLASS_MASS_STORAGE &&
+ 		intf->desc.bInterfaceSubClass == USB_SC_SCSI &&
+ 		intf->desc.bInterfaceProtocol == USB_PR_UAS);
+ }
+ 
+ static int uas_isnt_supported(struct usb_device *udev)
+ {
+ 	struct usb_hcd *hcd = bus_to_hcd(udev->bus);
+ 
+ 	dev_warn(&udev->dev, "The driver for the USB controller %s does not "
+ 		 "support scatter-gather which is\n",
+ 		 hcd->driver->description);
+ 	dev_warn(&udev->dev, "required by the UAS driver. Please try an "
+ 		 "alternative USB controller if you wish to use UAS.\n");
+ 	return -ENODEV;
+ }
+ 
+ static int uas_find_uas_alt_setting(struct usb_interface *intf)
+ {
+ 	int i;
+ 	struct usb_device *udev = interface_to_usbdev(intf);
+ 	int sg_supported = udev->bus->sg_tablesize != 0;
+ 
+ 	for (i = 0; i < intf->num_altsetting; i++) {
+ 		struct usb_host_interface *alt = &intf->altsetting[i];
+ 
+ 		if (uas_is_interface(alt)) {
+ 			if (!sg_supported)
+ 				return uas_isnt_supported(udev);
+ 			return alt->desc.bAlternateSetting;
+ 		}
+ 	}
+ 
+ 	return -ENODEV;
+ }
+ 
+ static int uas_find_endpoints(struct usb_host_interface *alt,
+ 			      struct usb_host_endpoint *eps[])
+ {
+ 	struct usb_host_endpoint *endpoint = alt->endpoint;
+ 	unsigned i, n_endpoints = alt->desc.bNumEndpoints;
+ 
+ 	for (i = 0; i < n_endpoints; i++) {
+ 		unsigned char *extra = endpoint[i].extra;
+ 		int len = endpoint[i].extralen;
+ 		while (len >= 3) {
+ 			if (extra[1] == USB_DT_PIPE_USAGE) {
+ 				unsigned pipe_id = extra[2];
+ 				if (pipe_id > 0 && pipe_id < 5)
+ 					eps[pipe_id - 1] = &endpoint[i];
+ 				break;
+ 			}
+ 			len -= extra[0];
+ 			extra += extra[0];
+ 		}
+ 	}
+ 
+ 	if (!eps[0] || !eps[1] || !eps[2] || !eps[3])
+ 		return -ENODEV;
+ 
+ 	return 0;
+ }
+ 
+ static int uas_use_uas_driver(struct usb_interface *intf,
+ 			      const struct usb_device_id *id)
+ {
+ 	struct usb_host_endpoint *eps[4] = { };
+ 	struct usb_device *udev = interface_to_usbdev(intf);
+ 	struct usb_hcd *hcd = bus_to_hcd(udev->bus);
+ 	unsigned long flags = id->driver_info;
+ 	int r, alt;
+ 
+ 	usb_stor_adjust_quirks(udev, &flags);
+ 
+ 	if (flags & US_FL_IGNORE_UAS)
+ 		return 0;
+ 
+ 	if (udev->speed >= USB_SPEED_SUPER && !hcd->can_do_streams)
+ 		return 0;
+ 
+ 	alt = uas_find_uas_alt_setting(intf);
+ 	if (alt < 0)
+ 		return 0;
+ 
+ 	r = uas_find_endpoints(&intf->altsetting[alt], eps);
+ 	if (r < 0)
+ 		return 0;
+ 
+ 	return 1;
+ }
+441 -254
drivers/usb/storage/uas.c
···
   * USB Attached SCSI
   * Note that this is not the same as the USB Mass Storage driver
   *
+  * Copyright Hans de Goede <hdegoede@redhat.com> for Red Hat, Inc. 2013
   * Copyright Matthew Wilcox for Intel Corp, 2010
   * Copyright Sarah Sharp for Intel Corp, 2010
   *
···
  #include <linux/types.h>
  #include <linux/module.h>
  #include <linux/usb.h>
+ #include <linux/usb_usual.h>
  #include <linux/usb/hcd.h>
  #include <linux/usb/storage.h>
  #include <linux/usb/uas.h>
  
  #include <scsi/scsi.h>
+ #include <scsi/scsi_eh.h>
  #include <scsi/scsi_dbg.h>
  #include <scsi/scsi_cmnd.h>
  #include <scsi/scsi_device.h>
  #include <scsi/scsi_host.h>
  #include <scsi/scsi_tcq.h>
+ 
+ #include "uas-detect.h"
  
  /*
   * The r00-r01c specs define this version of the SENSE IU data structure.
···
  	struct usb_anchor sense_urbs;
  	struct usb_anchor data_urbs;
  	int qdepth, resetting;
- 	struct response_ui response;
+ 	struct response_iu response;
  	unsigned cmd_pipe, status_pipe, data_in_pipe, data_out_pipe;
  	unsigned use_streams:1;
  	unsigned uas_sense_old:1;
+ 	unsigned running_task:1;
+ 	unsigned shutdown:1;
  	struct scsi_cmnd *cmnd;
  	spinlock_t lock;
+ 	struct work_struct work;
+ 	struct list_head inflight_list;
+ 	struct list_head dead_list;
  };
  
  enum {
···
  			struct uas_dev_info *devinfo, gfp_t gfp);
  static void uas_do_work(struct work_struct *work);
  static int uas_try_complete(struct scsi_cmnd *cmnd, const char *caller);
+ static void uas_free_streams(struct uas_dev_info *devinfo);
+ static void uas_log_cmd_state(struct scsi_cmnd *cmnd, const char *caller);
  
- static DECLARE_WORK(uas_work, uas_do_work);
- static DEFINE_SPINLOCK(uas_work_lock);
- static LIST_HEAD(uas_work_list);
- 
+ /* Must be called with devinfo->lock held, will temporary unlock the lock */
  static void uas_unlink_data_urbs(struct uas_dev_info *devinfo,
- 				 struct uas_cmd_info *cmdinfo)
+ 				 struct uas_cmd_info *cmdinfo,
+ 				 unsigned long *lock_flags)
  {
- 	unsigned long flags;
- 
  	/*
  	 * The UNLINK_DATA_URBS flag makes sure uas_try_complete
  	 * (called by urb completion) doesn't release cmdinfo
  	 * underneath us.
  	 */
- 	spin_lock_irqsave(&devinfo->lock, flags);
  	cmdinfo->state |= UNLINK_DATA_URBS;
- 	spin_unlock_irqrestore(&devinfo->lock, flags);
+ 	spin_unlock_irqrestore(&devinfo->lock, *lock_flags);
  
  	if (cmdinfo->data_in_urb)
  		usb_unlink_urb(cmdinfo->data_in_urb);
  	if (cmdinfo->data_out_urb)
  		usb_unlink_urb(cmdinfo->data_out_urb);
  
- 	spin_lock_irqsave(&devinfo->lock, flags);
+ 	spin_lock_irqsave(&devinfo->lock, *lock_flags);
  	cmdinfo->state &= ~UNLINK_DATA_URBS;
- 	spin_unlock_irqrestore(&devinfo->lock, flags);
  }
  
  static void uas_do_work(struct work_struct *work)
  {
+ 	struct uas_dev_info *devinfo =
+ 		container_of(work, struct uas_dev_info, work);
  	struct uas_cmd_info *cmdinfo;
- 	struct uas_cmd_info *temp;
- 	struct list_head list;
  	unsigned long flags;
  	int err;
  
- 	spin_lock_irq(&uas_work_lock);
- 	list_replace_init(&uas_work_list, &list);
- 	spin_unlock_irq(&uas_work_lock);
- 
- 	list_for_each_entry_safe(cmdinfo, temp, &list, list) {
+ 	spin_lock_irqsave(&devinfo->lock, flags);
+ 	list_for_each_entry(cmdinfo, &devinfo->inflight_list, list) {
  		struct scsi_pointer *scp = (void *)cmdinfo;
- 		struct scsi_cmnd *cmnd = container_of(scp,
- 							struct scsi_cmnd, SCp);
- 		struct uas_dev_info *devinfo = (void *)cmnd->device->hostdata;
- 		spin_lock_irqsave(&devinfo->lock, flags);
- 		err = uas_submit_urbs(cmnd, cmnd->device->hostdata, GFP_ATOMIC);
+ 		struct scsi_cmnd *cmnd = container_of(scp, struct scsi_cmnd,
+ 						      SCp);
+ 
+ 		if (!(cmdinfo->state & IS_IN_WORK_LIST))
+ 			continue;
+ 
+ 		err = uas_submit_urbs(cmnd, cmnd->device->hostdata, GFP_NOIO);
  		if (!err)
  			cmdinfo->state &= ~IS_IN_WORK_LIST;
- 		spin_unlock_irqrestore(&devinfo->lock, flags);
- 		if (err) {
- 			list_del(&cmdinfo->list);
- 			spin_lock_irq(&uas_work_lock);
- 			list_add_tail(&cmdinfo->list, &uas_work_list);
- 			spin_unlock_irq(&uas_work_lock);
- 			schedule_work(&uas_work);
- 		}
+ 		else
+ 			schedule_work(&devinfo->work);
  	}
+ 	spin_unlock_irqrestore(&devinfo->lock, flags);
  }
  
- static void uas_abort_work(struct uas_dev_info *devinfo)
+ static void uas_mark_cmd_dead(struct uas_dev_info *devinfo,
+ 			      struct uas_cmd_info *cmdinfo,
+ 			      int result, const char *caller)
+ {
+ 	struct scsi_pointer *scp = (void *)cmdinfo;
+ 	struct scsi_cmnd *cmnd = container_of(scp, struct scsi_cmnd, SCp);
+ 
+ 	uas_log_cmd_state(cmnd, caller);
+ 	WARN_ON_ONCE(!spin_is_locked(&devinfo->lock));
+ 	WARN_ON_ONCE(cmdinfo->state & COMMAND_ABORTED);
+ 	cmdinfo->state |= COMMAND_ABORTED;
+ 	cmdinfo->state &= ~IS_IN_WORK_LIST;
+ 	cmnd->result = result << 16;
+ 	list_move_tail(&cmdinfo->list, &devinfo->dead_list);
+ }
+ 
+ static void uas_abort_inflight(struct uas_dev_info *devinfo, int result,
+ 			       const char *caller)
  {
  	struct uas_cmd_info *cmdinfo;
  	struct uas_cmd_info *temp;
- 	struct list_head list;
  	unsigned long flags;
  
- 	spin_lock_irq(&uas_work_lock);
- 	list_replace_init(&uas_work_list, &list);
- 	spin_unlock_irq(&uas_work_lock);
+ 	spin_lock_irqsave(&devinfo->lock, flags);
+ 	list_for_each_entry_safe(cmdinfo, temp, &devinfo->inflight_list, list)
+ 		uas_mark_cmd_dead(devinfo, cmdinfo, result, caller);
+ 	spin_unlock_irqrestore(&devinfo->lock, flags);
+ }
+ 
+ static void uas_add_work(struct uas_cmd_info *cmdinfo)
+ {
+ 	struct scsi_pointer *scp = (void *)cmdinfo;
+ 	struct scsi_cmnd *cmnd = container_of(scp, struct scsi_cmnd, SCp);
+ 	struct uas_dev_info *devinfo = cmnd->device->hostdata;
+ 
+ 	WARN_ON_ONCE(!spin_is_locked(&devinfo->lock));
+ 	cmdinfo->state |= IS_IN_WORK_LIST;
+ 	schedule_work(&devinfo->work);
+ }
+ 
+ static void uas_zap_dead(struct uas_dev_info *devinfo)
+ {
+ 	struct uas_cmd_info *cmdinfo;
+ 	struct uas_cmd_info *temp;
+ 	unsigned long flags;
  
  	spin_lock_irqsave(&devinfo->lock, flags);
- 	list_for_each_entry_safe(cmdinfo, temp, &list, list) {
+ 	list_for_each_entry_safe(cmdinfo, temp, &devinfo->dead_list, list) {
  		struct scsi_pointer *scp = (void *)cmdinfo;
- 		struct scsi_cmnd *cmnd = container_of(scp,
- 							struct scsi_cmnd, SCp);
- 		struct uas_dev_info *di = (void *)cmnd->device->hostdata;
- 
- 		if (di == devinfo) {
- 			cmdinfo->state |= COMMAND_ABORTED;
- 			cmdinfo->state &= ~IS_IN_WORK_LIST;
- 			if (devinfo->resetting) {
- 				/* uas_stat_cmplt() will not do that
- 				 * when a device reset is in
- 				 * progress */
- 				cmdinfo->state &= ~COMMAND_INFLIGHT;
- 			}
- 			uas_try_complete(cmnd, __func__);
- 		} else {
- 			/* not our uas device, relink into list */
- 			list_del(&cmdinfo->list);
- 			spin_lock_irq(&uas_work_lock);
- 			list_add_tail(&cmdinfo->list, &uas_work_list);
- 			spin_unlock_irq(&uas_work_lock);
- 		}
+ 		struct scsi_cmnd *cmnd = container_of(scp, struct scsi_cmnd,
+ 						      SCp);
+ 		uas_log_cmd_state(cmnd, __func__);
+ 		WARN_ON_ONCE(!(cmdinfo->state & COMMAND_ABORTED));
+ 		/* all urbs are killed, clear inflight bits */
+ 		cmdinfo->state &= ~(COMMAND_INFLIGHT |
+ 				    DATA_IN_URB_INFLIGHT |
+ 				    DATA_OUT_URB_INFLIGHT);
+ 		uas_try_complete(cmnd, __func__);
  	}
+ 	devinfo->running_task = 0;
  	spin_unlock_irqrestore(&devinfo->lock, flags);
  }
···
  	struct uas_cmd_info *cmdinfo = (void *)&cmnd->SCp;
  	struct uas_dev_info *devinfo = (void *)cmnd->device->hostdata;
  
- 	WARN_ON(!spin_is_locked(&devinfo->lock));
+ 	WARN_ON_ONCE(!spin_is_locked(&devinfo->lock));
  	if (cmdinfo->state & (COMMAND_INFLIGHT |
  			      DATA_IN_URB_INFLIGHT |
  			      DATA_OUT_URB_INFLIGHT |
  			      UNLINK_DATA_URBS))
  		return -EBUSY;
- 	BUG_ON(cmdinfo->state & COMMAND_COMPLETED);
+ 	WARN_ON_ONCE(cmdinfo->state & COMMAND_COMPLETED);
  	cmdinfo->state |= COMMAND_COMPLETED;
  	usb_free_urb(cmdinfo->data_in_urb);
  	usb_free_urb(cmdinfo->data_out_urb);
- 	if (cmdinfo->state & COMMAND_ABORTED) {
+ 	if (cmdinfo->state & COMMAND_ABORTED)
  		scmd_printk(KERN_INFO, cmnd, "abort completed\n");
- 		cmnd->result = DID_ABORT << 16;
- 	}
+ 	list_del(&cmdinfo->list);
  	cmnd->scsi_done(cmnd);
  	return 0;
  }
···
  	cmdinfo->state |= direction | SUBMIT_STATUS_URB;
  	err = uas_submit_urbs(cmnd, cmnd->device->hostdata, GFP_ATOMIC);
  	if (err) {
- 		spin_lock(&uas_work_lock);
- 		list_add_tail(&cmdinfo->list, &uas_work_list);
- 		cmdinfo->state |= IS_IN_WORK_LIST;
- 		spin_unlock(&uas_work_lock);
- 		schedule_work(&uas_work);
+ 		uas_add_work(cmdinfo);
  	}
  }
···
  {
  	struct iu *iu = urb->transfer_buffer;
  	struct Scsi_Host *shost = urb->context;
- 	struct uas_dev_info *devinfo = (void *)shost->hostdata[0];
+ 	struct uas_dev_info *devinfo = (struct uas_dev_info *)shost->hostdata;
  	struct scsi_cmnd *cmnd;
  	struct uas_cmd_info *cmdinfo;
  	unsigned long flags;
  	u16 tag;
  
  	if (urb->status) {
- 		dev_err(&urb->dev->dev, "URB BAD STATUS %d\n", urb->status);
+ 		if (urb->status == -ENOENT) {
+ 			dev_err(&urb->dev->dev, "stat urb: killed, stream %d\n",
+ 				urb->stream_id);
+ 		} else {
+ 			dev_err(&urb->dev->dev, "stat urb: status %d\n",
+ 				urb->status);
+ 		}
  		usb_free_urb(urb);
  		return;
  	}
···
  
  	if (!cmnd) {
  		if (iu->iu_id == IU_ID_RESPONSE) {
+ 			if (!devinfo->running_task)
+ 				dev_warn(&urb->dev->dev,
+ 				    "stat urb: recv unexpected response iu\n");
  			/* store results for uas_eh_task_mgmt() */
  			memcpy(&devinfo->response, iu, sizeof(devinfo->response));
  		}
···
  		uas_sense(urb, cmnd);
  		if (cmnd->result != 0) {
  			/* cancel data transfers on error */
- 			spin_unlock_irqrestore(&devinfo->lock, flags);
- 			uas_unlink_data_urbs(devinfo, cmdinfo);
- 			spin_lock_irqsave(&devinfo->lock, flags);
+ 			uas_unlink_data_urbs(devinfo, cmdinfo, &flags);
  		}
  		cmdinfo->state &= ~COMMAND_INFLIGHT;
  		uas_try_complete(cmnd, __func__);
  		break;
  	case IU_ID_READ_READY:
+ 		if (!cmdinfo->data_in_urb ||
+ 		    (cmdinfo->state & DATA_IN_URB_INFLIGHT)) {
+ 			scmd_printk(KERN_ERR, cmnd, "unexpected read rdy\n");
+ 			break;
+ 		}
  		uas_xfer_data(urb, cmnd, SUBMIT_DATA_IN_URB);
  		break;
  	case IU_ID_WRITE_READY:
+ 		if (!cmdinfo->data_out_urb ||
+ 		    (cmdinfo->state & DATA_OUT_URB_INFLIGHT)) {
+ 			scmd_printk(KERN_ERR, cmnd, "unexpected write rdy\n");
+ 			break;
+ 		}
  		uas_xfer_data(urb, cmnd, SUBMIT_DATA_OUT_URB);
  		break;
  	default:
···
  		sdb = scsi_out(cmnd);
  		cmdinfo->state &= ~DATA_OUT_URB_INFLIGHT;
  	}
- 	BUG_ON(sdb == NULL);
- 	if (urb->status) {
+ 	if (sdb == NULL) {
+ 		WARN_ON_ONCE(1);
+ 	} else if (urb->status) {
+ 		if (urb->status != -ECONNRESET) {
+ 			uas_log_cmd_state(cmnd, __func__);
+ 			scmd_printk(KERN_ERR, cmnd,
+ 				"data cmplt err %d stream %d\n",
+ 				urb->status, urb->stream_id);
+ 		}
  		/* error: no data transfered */
  		sdb->resid = sdb->length;
  	} else {
···
  	}
  	uas_try_complete(cmnd, __func__);
  	spin_unlock_irqrestore(&devinfo->lock, flags);
+ }
+ 
+ static void uas_cmd_cmplt(struct urb *urb)
+ {
struct scsi_cmnd *cmnd = urb->context; 400 + 401 + if (urb->status) { 402 + uas_log_cmd_state(cmnd, __func__); 403 + scmd_printk(KERN_ERR, cmnd, "cmd cmplt err %d\n", urb->status); 404 + } 405 + usb_free_urb(urb); 438 406 } 439 407 440 408 static struct urb *uas_alloc_data_urb(struct uas_dev_info *devinfo, gfp_t gfp, ··· 462 408 goto out; 463 409 usb_fill_bulk_urb(urb, udev, pipe, NULL, sdb->length, 464 410 uas_data_cmplt, cmnd); 465 - if (devinfo->use_streams) 466 - urb->stream_id = stream_id; 411 + urb->stream_id = stream_id; 467 412 urb->num_sgs = udev->bus->sg_tablesize ? sdb->table.nents : 0; 468 413 urb->sg = sdb->table.sgl; 469 414 out: ··· 495 442 } 496 443 497 444 static struct urb *uas_alloc_cmd_urb(struct uas_dev_info *devinfo, gfp_t gfp, 498 - struct scsi_cmnd *cmnd, u16 stream_id) 445 + struct scsi_cmnd *cmnd) 499 446 { 500 447 struct usb_device *udev = devinfo->udev; 501 448 struct scsi_device *sdev = cmnd->device; ··· 525 472 memcpy(iu->cdb, cmnd->cmnd, cmnd->cmd_len); 526 473 527 474 usb_fill_bulk_urb(urb, udev, devinfo->cmd_pipe, iu, sizeof(*iu) + len, 528 - usb_free_urb, NULL); 475 + uas_cmd_cmplt, cmnd); 529 476 urb->transfer_flags |= URB_FREE_BUFFER; 530 477 out: 531 478 return urb; ··· 565 512 } 566 513 567 514 usb_fill_bulk_urb(urb, udev, devinfo->cmd_pipe, iu, sizeof(*iu), 568 - usb_free_urb, NULL); 515 + uas_cmd_cmplt, cmnd); 569 516 urb->transfer_flags |= URB_FREE_BUFFER; 570 517 571 - err = usb_submit_urb(urb, gfp); 572 - if (err) 573 - goto err; 574 518 usb_anchor_urb(urb, &devinfo->cmd_urbs); 519 + err = usb_submit_urb(urb, gfp); 520 + if (err) { 521 + usb_unanchor_urb(urb); 522 + uas_log_cmd_state(cmnd, __func__); 523 + scmd_printk(KERN_ERR, cmnd, "task submission err %d\n", err); 524 + goto err; 525 + } 575 526 576 527 return 0; 577 528 ··· 590 533 * daft to me. 
591 534 */ 592 535 593 - static int uas_submit_sense_urb(struct Scsi_Host *shost, 594 - gfp_t gfp, unsigned int stream) 536 + static struct urb *uas_submit_sense_urb(struct scsi_cmnd *cmnd, 537 + gfp_t gfp, unsigned int stream) 595 538 { 596 - struct uas_dev_info *devinfo = (void *)shost->hostdata[0]; 539 + struct Scsi_Host *shost = cmnd->device->host; 540 + struct uas_dev_info *devinfo = (struct uas_dev_info *)shost->hostdata; 597 541 struct urb *urb; 542 + int err; 598 543 599 544 urb = uas_alloc_sense_urb(devinfo, gfp, shost, stream); 600 545 if (!urb) 601 - return SCSI_MLQUEUE_DEVICE_BUSY; 602 - if (usb_submit_urb(urb, gfp)) { 603 - shost_printk(KERN_INFO, shost, 604 - "sense urb submission failure\n"); 605 - usb_free_urb(urb); 606 - return SCSI_MLQUEUE_DEVICE_BUSY; 607 - } 546 + return NULL; 608 547 usb_anchor_urb(urb, &devinfo->sense_urbs); 609 - return 0; 548 + err = usb_submit_urb(urb, gfp); 549 + if (err) { 550 + usb_unanchor_urb(urb); 551 + uas_log_cmd_state(cmnd, __func__); 552 + shost_printk(KERN_INFO, shost, 553 + "sense urb submission error %d stream %d\n", 554 + err, stream); 555 + usb_free_urb(urb); 556 + return NULL; 557 + } 558 + return urb; 610 559 } 611 560 612 561 static int uas_submit_urbs(struct scsi_cmnd *cmnd, 613 562 struct uas_dev_info *devinfo, gfp_t gfp) 614 563 { 615 564 struct uas_cmd_info *cmdinfo = (void *)&cmnd->SCp; 565 + struct urb *urb; 616 566 int err; 617 567 618 - WARN_ON(!spin_is_locked(&devinfo->lock)); 568 + WARN_ON_ONCE(!spin_is_locked(&devinfo->lock)); 619 569 if (cmdinfo->state & SUBMIT_STATUS_URB) { 620 - err = uas_submit_sense_urb(cmnd->device->host, gfp, 621 - cmdinfo->stream); 622 - if (err) { 623 - return err; 624 - } 570 + urb = uas_submit_sense_urb(cmnd, gfp, cmdinfo->stream); 571 + if (!urb) 572 + return SCSI_MLQUEUE_DEVICE_BUSY; 625 573 cmdinfo->state &= ~SUBMIT_STATUS_URB; 626 574 } 627 575 ··· 640 578 } 641 579 642 580 if (cmdinfo->state & SUBMIT_DATA_IN_URB) { 643 - if (usb_submit_urb(cmdinfo->data_in_urb, 
gfp)) { 581 + usb_anchor_urb(cmdinfo->data_in_urb, &devinfo->data_urbs); 582 + err = usb_submit_urb(cmdinfo->data_in_urb, gfp); 583 + if (err) { 584 + usb_unanchor_urb(cmdinfo->data_in_urb); 585 + uas_log_cmd_state(cmnd, __func__); 644 586 scmd_printk(KERN_INFO, cmnd, 645 - "data in urb submission failure\n"); 587 + "data in urb submission error %d stream %d\n", 588 + err, cmdinfo->data_in_urb->stream_id); 646 589 return SCSI_MLQUEUE_DEVICE_BUSY; 647 590 } 648 591 cmdinfo->state &= ~SUBMIT_DATA_IN_URB; 649 592 cmdinfo->state |= DATA_IN_URB_INFLIGHT; 650 - usb_anchor_urb(cmdinfo->data_in_urb, &devinfo->data_urbs); 651 593 } 652 594 653 595 if (cmdinfo->state & ALLOC_DATA_OUT_URB) { ··· 664 598 } 665 599 666 600 if (cmdinfo->state & SUBMIT_DATA_OUT_URB) { 667 - if (usb_submit_urb(cmdinfo->data_out_urb, gfp)) { 601 + usb_anchor_urb(cmdinfo->data_out_urb, &devinfo->data_urbs); 602 + err = usb_submit_urb(cmdinfo->data_out_urb, gfp); 603 + if (err) { 604 + usb_unanchor_urb(cmdinfo->data_out_urb); 605 + uas_log_cmd_state(cmnd, __func__); 668 606 scmd_printk(KERN_INFO, cmnd, 669 - "data out urb submission failure\n"); 607 + "data out urb submission error %d stream %d\n", 608 + err, cmdinfo->data_out_urb->stream_id); 670 609 return SCSI_MLQUEUE_DEVICE_BUSY; 671 610 } 672 611 cmdinfo->state &= ~SUBMIT_DATA_OUT_URB; 673 612 cmdinfo->state |= DATA_OUT_URB_INFLIGHT; 674 - usb_anchor_urb(cmdinfo->data_out_urb, &devinfo->data_urbs); 675 613 } 676 614 677 615 if (cmdinfo->state & ALLOC_CMD_URB) { 678 - cmdinfo->cmd_urb = uas_alloc_cmd_urb(devinfo, gfp, cmnd, 679 - cmdinfo->stream); 616 + cmdinfo->cmd_urb = uas_alloc_cmd_urb(devinfo, gfp, cmnd); 680 617 if (!cmdinfo->cmd_urb) 681 618 return SCSI_MLQUEUE_DEVICE_BUSY; 682 619 cmdinfo->state &= ~ALLOC_CMD_URB; 683 620 } 684 621 685 622 if (cmdinfo->state & SUBMIT_CMD_URB) { 686 - usb_get_urb(cmdinfo->cmd_urb); 687 - if (usb_submit_urb(cmdinfo->cmd_urb, gfp)) { 623 + usb_anchor_urb(cmdinfo->cmd_urb, &devinfo->cmd_urbs); 624 + err = 
usb_submit_urb(cmdinfo->cmd_urb, gfp); 625 + if (err) { 626 + usb_unanchor_urb(cmdinfo->cmd_urb); 627 + uas_log_cmd_state(cmnd, __func__); 688 628 scmd_printk(KERN_INFO, cmnd, 689 - "cmd urb submission failure\n"); 629 + "cmd urb submission error %d\n", err); 690 630 return SCSI_MLQUEUE_DEVICE_BUSY; 691 631 } 692 - usb_anchor_urb(cmdinfo->cmd_urb, &devinfo->cmd_urbs); 693 - usb_put_urb(cmdinfo->cmd_urb); 694 632 cmdinfo->cmd_urb = NULL; 695 633 cmdinfo->state &= ~SUBMIT_CMD_URB; 696 634 cmdinfo->state |= COMMAND_INFLIGHT; ··· 714 644 715 645 BUILD_BUG_ON(sizeof(struct uas_cmd_info) > sizeof(struct scsi_pointer)); 716 646 647 + spin_lock_irqsave(&devinfo->lock, flags); 648 + 717 649 if (devinfo->resetting) { 718 650 cmnd->result = DID_ERROR << 16; 719 651 cmnd->scsi_done(cmnd); 652 + spin_unlock_irqrestore(&devinfo->lock, flags); 720 653 return 0; 721 654 } 722 655 723 - spin_lock_irqsave(&devinfo->lock, flags); 724 656 if (devinfo->cmnd) { 725 657 spin_unlock_irqrestore(&devinfo->lock, flags); 726 658 return SCSI_MLQUEUE_DEVICE_BUSY; 727 659 } 660 + 661 + memset(cmdinfo, 0, sizeof(*cmdinfo)); 728 662 729 663 if (blk_rq_tagged(cmnd->request)) { 730 664 cmdinfo->stream = cmnd->request->tag + 2; ··· 766 692 spin_unlock_irqrestore(&devinfo->lock, flags); 767 693 return SCSI_MLQUEUE_DEVICE_BUSY; 768 694 } 769 - spin_lock(&uas_work_lock); 770 - list_add_tail(&cmdinfo->list, &uas_work_list); 771 - cmdinfo->state |= IS_IN_WORK_LIST; 772 - spin_unlock(&uas_work_lock); 773 - schedule_work(&uas_work); 695 + uas_add_work(cmdinfo); 774 696 } 775 697 698 + list_add_tail(&cmdinfo->list, &devinfo->inflight_list); 776 699 spin_unlock_irqrestore(&devinfo->lock, flags); 777 700 return 0; 778 701 } ··· 780 709 const char *fname, u8 function) 781 710 { 782 711 struct Scsi_Host *shost = cmnd->device->host; 783 - struct uas_dev_info *devinfo = (void *)shost->hostdata[0]; 784 - u16 tag = devinfo->qdepth - 1; 712 + struct uas_dev_info *devinfo = (struct uas_dev_info *)shost->hostdata; 713 
+ u16 tag = devinfo->qdepth; 785 714 unsigned long flags; 715 + struct urb *sense_urb; 716 + int result = SUCCESS; 786 717 787 718 spin_lock_irqsave(&devinfo->lock, flags); 788 - memset(&devinfo->response, 0, sizeof(devinfo->response)); 789 - if (uas_submit_sense_urb(shost, GFP_ATOMIC, tag)) { 719 + 720 + if (devinfo->resetting) { 721 + spin_unlock_irqrestore(&devinfo->lock, flags); 722 + return FAILED; 723 + } 724 + 725 + if (devinfo->running_task) { 790 726 shost_printk(KERN_INFO, shost, 791 - "%s: %s: submit sense urb failed\n", 727 + "%s: %s: error already running a task\n", 792 728 __func__, fname); 793 729 spin_unlock_irqrestore(&devinfo->lock, flags); 794 730 return FAILED; 795 731 } 796 - if (uas_submit_task_urb(cmnd, GFP_ATOMIC, function, tag)) { 732 + 733 + devinfo->running_task = 1; 734 + memset(&devinfo->response, 0, sizeof(devinfo->response)); 735 + sense_urb = uas_submit_sense_urb(cmnd, GFP_NOIO, 736 + devinfo->use_streams ? tag : 0); 737 + if (!sense_urb) { 738 + shost_printk(KERN_INFO, shost, 739 + "%s: %s: submit sense urb failed\n", 740 + __func__, fname); 741 + devinfo->running_task = 0; 742 + spin_unlock_irqrestore(&devinfo->lock, flags); 743 + return FAILED; 744 + } 745 + if (uas_submit_task_urb(cmnd, GFP_NOIO, function, tag)) { 797 746 shost_printk(KERN_INFO, shost, 798 747 "%s: %s: submit task mgmt urb failed\n", 799 748 __func__, fname); 749 + devinfo->running_task = 0; 800 750 spin_unlock_irqrestore(&devinfo->lock, flags); 751 + usb_kill_urb(sense_urb); 801 752 return FAILED; 802 753 } 803 754 spin_unlock_irqrestore(&devinfo->lock, flags); 804 755 805 756 if (usb_wait_anchor_empty_timeout(&devinfo->sense_urbs, 3000) == 0) { 757 + /* 758 + * Note we deliberately do not clear running_task here. If we 759 + * allow new tasks to be submitted, there is no way to figure 760 + * out if a received response_iu is for the failed task or for 761 + * the new one. A bus-reset will eventually clear running_task. 
762 + */ 806 763 shost_printk(KERN_INFO, shost, 807 764 "%s: %s timed out\n", __func__, fname); 808 765 return FAILED; 809 766 } 767 + 768 + spin_lock_irqsave(&devinfo->lock, flags); 769 + devinfo->running_task = 0; 810 770 if (be16_to_cpu(devinfo->response.tag) != tag) { 811 771 shost_printk(KERN_INFO, shost, 812 772 "%s: %s failed (wrong tag %d/%d)\n", __func__, 813 773 fname, be16_to_cpu(devinfo->response.tag), tag); 814 - return FAILED; 815 - } 816 - if (devinfo->response.response_code != RC_TMF_COMPLETE) { 774 + result = FAILED; 775 + } else if (devinfo->response.response_code != RC_TMF_COMPLETE) { 817 776 shost_printk(KERN_INFO, shost, 818 777 "%s: %s failed (rc 0x%x)\n", __func__, 819 778 fname, devinfo->response.response_code); 820 - return FAILED; 779 + result = FAILED; 821 780 } 822 - return SUCCESS; 781 + spin_unlock_irqrestore(&devinfo->lock, flags); 782 + 783 + return result; 823 784 } 824 785 825 786 static int uas_eh_abort_handler(struct scsi_cmnd *cmnd) ··· 861 758 unsigned long flags; 862 759 int ret; 863 760 864 - uas_log_cmd_state(cmnd, __func__); 865 761 spin_lock_irqsave(&devinfo->lock, flags); 866 - cmdinfo->state |= COMMAND_ABORTED; 867 - if (cmdinfo->state & IS_IN_WORK_LIST) { 868 - spin_lock(&uas_work_lock); 869 - list_del(&cmdinfo->list); 870 - cmdinfo->state &= ~IS_IN_WORK_LIST; 871 - spin_unlock(&uas_work_lock); 762 + 763 + if (devinfo->resetting) { 764 + spin_unlock_irqrestore(&devinfo->lock, flags); 765 + return FAILED; 872 766 } 767 + 768 + uas_mark_cmd_dead(devinfo, cmdinfo, DID_ABORT, __func__); 873 769 if (cmdinfo->state & COMMAND_INFLIGHT) { 874 770 spin_unlock_irqrestore(&devinfo->lock, flags); 875 771 ret = uas_eh_task_mgmt(cmnd, "ABORT TASK", TMF_ABORT_TASK); 876 772 } else { 877 - spin_unlock_irqrestore(&devinfo->lock, flags); 878 - uas_unlink_data_urbs(devinfo, cmdinfo); 879 - spin_lock_irqsave(&devinfo->lock, flags); 773 + uas_unlink_data_urbs(devinfo, cmdinfo, &flags); 880 774 uas_try_complete(cmnd, __func__); 881 775 
spin_unlock_irqrestore(&devinfo->lock, flags); 882 776 ret = SUCCESS; ··· 895 795 struct usb_device *udev = devinfo->udev; 896 796 int err; 897 797 798 + err = usb_lock_device_for_reset(udev, devinfo->intf); 799 + if (err) { 800 + shost_printk(KERN_ERR, sdev->host, 801 + "%s FAILED to get lock err %d\n", __func__, err); 802 + return FAILED; 803 + } 804 + 805 + shost_printk(KERN_INFO, sdev->host, "%s start\n", __func__); 898 806 devinfo->resetting = 1; 899 - uas_abort_work(devinfo); 807 + uas_abort_inflight(devinfo, DID_RESET, __func__); 900 808 usb_kill_anchored_urbs(&devinfo->cmd_urbs); 901 809 usb_kill_anchored_urbs(&devinfo->sense_urbs); 902 810 usb_kill_anchored_urbs(&devinfo->data_urbs); 811 + uas_zap_dead(devinfo); 903 812 err = usb_reset_device(udev); 904 813 devinfo->resetting = 0; 814 + 815 + usb_unlock_device(udev); 905 816 906 817 if (err) { 907 818 shost_printk(KERN_INFO, sdev->host, "%s FAILED\n", __func__); ··· 925 814 926 815 static int uas_slave_alloc(struct scsi_device *sdev) 927 816 { 928 - sdev->hostdata = (void *)sdev->host->hostdata[0]; 817 + sdev->hostdata = (void *)sdev->host->hostdata; 818 + 819 + /* USB has unusual DMA-alignment requirements: Although the 820 + * starting address of each scatter-gather element doesn't matter, 821 + * the length of each element except the last must be divisible 822 + * by the Bulk maxpacket value. There's currently no way to 823 + * express this by block-layer constraints, so we'll cop out 824 + * and simply require addresses to be aligned at 512-byte 825 + * boundaries. This is okay since most block I/O involves 826 + * hardware sectors that are multiples of 512 bytes in length, 827 + * and since host controllers up through USB 2.0 have maxpacket 828 + * values no larger than 512. 829 + * 830 + * But it doesn't suffice for Wireless USB, where Bulk maxpacket 831 + * values can be as large as 2048. To make that work properly 832 + * will require changes to the block layer. 
833 + */ 834 + blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1)); 835 + 929 836 return 0; 930 837 } 931 838 ··· 951 822 { 952 823 struct uas_dev_info *devinfo = sdev->hostdata; 953 824 scsi_set_tag_type(sdev, MSG_ORDERED_TAG); 954 - scsi_activate_tcq(sdev, devinfo->qdepth - 3); 825 + scsi_activate_tcq(sdev, devinfo->qdepth - 2); 955 826 return 0; 956 827 } 957 828 ··· 972 843 .ordered_tag = 1, 973 844 }; 974 845 846 + #define UNUSUAL_DEV(id_vendor, id_product, bcdDeviceMin, bcdDeviceMax, \ 847 + vendorName, productName, useProtocol, useTransport, \ 848 + initFunction, flags) \ 849 + { USB_DEVICE_VER(id_vendor, id_product, bcdDeviceMin, bcdDeviceMax), \ 850 + .driver_info = (flags) } 851 + 975 852 static struct usb_device_id uas_usb_ids[] = { 853 + # include "unusual_uas.h" 976 854 { USB_INTERFACE_INFO(USB_CLASS_MASS_STORAGE, USB_SC_SCSI, USB_PR_BULK) }, 977 855 { USB_INTERFACE_INFO(USB_CLASS_MASS_STORAGE, USB_SC_SCSI, USB_PR_UAS) }, 978 856 /* 0xaa is a prototype device I happen to have access to */ ··· 988 852 }; 989 853 MODULE_DEVICE_TABLE(usb, uas_usb_ids); 990 854 991 - static int uas_is_interface(struct usb_host_interface *intf) 992 - { 993 - return (intf->desc.bInterfaceClass == USB_CLASS_MASS_STORAGE && 994 - intf->desc.bInterfaceSubClass == USB_SC_SCSI && 995 - intf->desc.bInterfaceProtocol == USB_PR_UAS); 996 - } 997 - 998 - static int uas_isnt_supported(struct usb_device *udev) 999 - { 1000 - struct usb_hcd *hcd = bus_to_hcd(udev->bus); 1001 - 1002 - dev_warn(&udev->dev, "The driver for the USB controller %s does not " 1003 - "support scatter-gather which is\n", 1004 - hcd->driver->description); 1005 - dev_warn(&udev->dev, "required by the UAS driver. 
Please try an" 1006 - "alternative USB controller if you wish to use UAS.\n"); 1007 - return -ENODEV; 1008 - } 855 + #undef UNUSUAL_DEV 1009 856 1010 857 static int uas_switch_interface(struct usb_device *udev, 1011 - struct usb_interface *intf) 858 + struct usb_interface *intf) 1012 859 { 1013 - int i; 1014 - int sg_supported = udev->bus->sg_tablesize != 0; 860 + int alt; 1015 861 1016 - for (i = 0; i < intf->num_altsetting; i++) { 1017 - struct usb_host_interface *alt = &intf->altsetting[i]; 862 + alt = uas_find_uas_alt_setting(intf); 863 + if (alt < 0) 864 + return alt; 1018 865 1019 - if (uas_is_interface(alt)) { 1020 - if (!sg_supported) 1021 - return uas_isnt_supported(udev); 1022 - return usb_set_interface(udev, 1023 - alt->desc.bInterfaceNumber, 1024 - alt->desc.bAlternateSetting); 1025 - } 1026 - } 1027 - 1028 - return -ENODEV; 866 + return usb_set_interface(udev, 867 + intf->altsetting[0].desc.bInterfaceNumber, alt); 1029 868 } 1030 869 1031 - static void uas_configure_endpoints(struct uas_dev_info *devinfo) 870 + static int uas_configure_endpoints(struct uas_dev_info *devinfo) 1032 871 { 1033 872 struct usb_host_endpoint *eps[4] = { }; 1034 - struct usb_interface *intf = devinfo->intf; 1035 873 struct usb_device *udev = devinfo->udev; 1036 - struct usb_host_endpoint *endpoint = intf->cur_altsetting->endpoint; 1037 - unsigned i, n_endpoints = intf->cur_altsetting->desc.bNumEndpoints; 874 + int r; 1038 875 1039 876 devinfo->uas_sense_old = 0; 1040 877 devinfo->cmnd = NULL; 1041 878 1042 - for (i = 0; i < n_endpoints; i++) { 1043 - unsigned char *extra = endpoint[i].extra; 1044 - int len = endpoint[i].extralen; 1045 - while (len > 1) { 1046 - if (extra[1] == USB_DT_PIPE_USAGE) { 1047 - unsigned pipe_id = extra[2]; 1048 - if (pipe_id > 0 && pipe_id < 5) 1049 - eps[pipe_id - 1] = &endpoint[i]; 1050 - break; 1051 - } 1052 - len -= extra[0]; 1053 - extra += extra[0]; 1054 - } 1055 - } 879 + r = uas_find_endpoints(devinfo->intf->cur_altsetting, eps); 880 + if 
(r) 881 + return r; 1056 882 1057 - /* 1058 - * Assume that if we didn't find a control pipe descriptor, we're 1059 - * using a device with old firmware that happens to be set up like 1060 - * this. 1061 - */ 1062 - if (!eps[0]) { 1063 - devinfo->cmd_pipe = usb_sndbulkpipe(udev, 1); 1064 - devinfo->status_pipe = usb_rcvbulkpipe(udev, 1); 1065 - devinfo->data_in_pipe = usb_rcvbulkpipe(udev, 2); 1066 - devinfo->data_out_pipe = usb_sndbulkpipe(udev, 2); 883 + devinfo->cmd_pipe = usb_sndbulkpipe(udev, 884 + usb_endpoint_num(&eps[0]->desc)); 885 + devinfo->status_pipe = usb_rcvbulkpipe(udev, 886 + usb_endpoint_num(&eps[1]->desc)); 887 + devinfo->data_in_pipe = usb_rcvbulkpipe(udev, 888 + usb_endpoint_num(&eps[2]->desc)); 889 + devinfo->data_out_pipe = usb_sndbulkpipe(udev, 890 + usb_endpoint_num(&eps[3]->desc)); 1067 891 1068 - eps[1] = usb_pipe_endpoint(udev, devinfo->status_pipe); 1069 - eps[2] = usb_pipe_endpoint(udev, devinfo->data_in_pipe); 1070 - eps[3] = usb_pipe_endpoint(udev, devinfo->data_out_pipe); 1071 - } else { 1072 - devinfo->cmd_pipe = usb_sndbulkpipe(udev, 1073 - eps[0]->desc.bEndpointAddress); 1074 - devinfo->status_pipe = usb_rcvbulkpipe(udev, 1075 - eps[1]->desc.bEndpointAddress); 1076 - devinfo->data_in_pipe = usb_rcvbulkpipe(udev, 1077 - eps[2]->desc.bEndpointAddress); 1078 - devinfo->data_out_pipe = usb_sndbulkpipe(udev, 1079 - eps[3]->desc.bEndpointAddress); 1080 - } 1081 - 1082 - devinfo->qdepth = usb_alloc_streams(devinfo->intf, eps + 1, 3, 256, 1083 - GFP_KERNEL); 1084 - if (devinfo->qdepth < 0) { 892 + if (udev->speed != USB_SPEED_SUPER) { 1085 893 devinfo->qdepth = 256; 1086 894 devinfo->use_streams = 0; 1087 895 } else { 896 + devinfo->qdepth = usb_alloc_streams(devinfo->intf, eps + 1, 897 + 3, 256, GFP_KERNEL); 898 + if (devinfo->qdepth < 0) 899 + return devinfo->qdepth; 1088 900 devinfo->use_streams = 1; 1089 901 } 902 + 903 + return 0; 1090 904 } 1091 905 1092 906 static void uas_free_streams(struct uas_dev_info *devinfo) ··· 1050 964 
usb_free_streams(devinfo->intf, eps, 3, GFP_KERNEL); 1051 965 } 1052 966 1053 - /* 1054 - * XXX: What I'd like to do here is register a SCSI host for each USB host in 1055 - * the system. Follow usb-storage's design of registering a SCSI host for 1056 - * each USB device for the moment. Can implement this by walking up the 1057 - * USB hierarchy until we find a USB host. 1058 - */ 1059 967 static int uas_probe(struct usb_interface *intf, const struct usb_device_id *id) 1060 968 { 1061 - int result; 1062 - struct Scsi_Host *shost; 969 + int result = -ENOMEM; 970 + struct Scsi_Host *shost = NULL; 1063 971 struct uas_dev_info *devinfo; 1064 972 struct usb_device *udev = interface_to_usbdev(intf); 973 + 974 + if (!uas_use_uas_driver(intf, id)) 975 + return -ENODEV; 1065 976 1066 977 if (uas_switch_interface(udev, intf)) 1067 978 return -ENODEV; 1068 979 1069 - devinfo = kmalloc(sizeof(struct uas_dev_info), GFP_KERNEL); 1070 - if (!devinfo) 1071 - return -ENOMEM; 1072 - 1073 - result = -ENOMEM; 1074 - shost = scsi_host_alloc(&uas_host_template, sizeof(void *)); 980 + shost = scsi_host_alloc(&uas_host_template, 981 + sizeof(struct uas_dev_info)); 1075 982 if (!shost) 1076 - goto free; 983 + goto set_alt0; 1077 984 1078 985 shost->max_cmd_len = 16 + 252; 1079 986 shost->max_id = 1; ··· 1074 995 shost->max_channel = 0; 1075 996 shost->sg_tablesize = udev->bus->sg_tablesize; 1076 997 998 + devinfo = (struct uas_dev_info *)shost->hostdata; 1077 999 devinfo->intf = intf; 1078 1000 devinfo->udev = udev; 1079 1001 devinfo->resetting = 0; 1002 + devinfo->running_task = 0; 1003 + devinfo->shutdown = 0; 1080 1004 init_usb_anchor(&devinfo->cmd_urbs); 1081 1005 init_usb_anchor(&devinfo->sense_urbs); 1082 1006 init_usb_anchor(&devinfo->data_urbs); 1083 1007 spin_lock_init(&devinfo->lock); 1084 - uas_configure_endpoints(devinfo); 1008 + INIT_WORK(&devinfo->work, uas_do_work); 1009 + INIT_LIST_HEAD(&devinfo->inflight_list); 1010 + INIT_LIST_HEAD(&devinfo->dead_list); 1085 1011 1086 - 
result = scsi_init_shared_tag_map(shost, devinfo->qdepth - 3); 1012 + result = uas_configure_endpoints(devinfo); 1087 1013 if (result) 1088 - goto free; 1014 + goto set_alt0; 1015 + 1016 + result = scsi_init_shared_tag_map(shost, devinfo->qdepth - 2); 1017 + if (result) 1018 + goto free_streams; 1089 1019 1090 1020 result = scsi_add_host(shost, &intf->dev); 1091 1021 if (result) 1092 - goto deconfig_eps; 1093 - 1094 - shost->hostdata[0] = (unsigned long)devinfo; 1022 + goto free_streams; 1095 1023 1096 1024 scsi_scan_host(shost); 1097 1025 usb_set_intfdata(intf, shost); 1098 1026 return result; 1099 1027 1100 - deconfig_eps: 1028 + free_streams: 1101 1029 uas_free_streams(devinfo); 1102 - free: 1103 - kfree(devinfo); 1030 + set_alt0: 1031 + usb_set_interface(udev, intf->altsetting[0].desc.bInterfaceNumber, 0); 1104 1032 if (shost) 1105 1033 scsi_host_put(shost); 1106 1034 return result; ··· 1115 1029 1116 1030 static int uas_pre_reset(struct usb_interface *intf) 1117 1031 { 1118 - /* XXX: Need to return 1 if it's not our device in error handling */ 1032 + struct Scsi_Host *shost = usb_get_intfdata(intf); 1033 + struct uas_dev_info *devinfo = (struct uas_dev_info *)shost->hostdata; 1034 + unsigned long flags; 1035 + 1036 + if (devinfo->shutdown) 1037 + return 0; 1038 + 1039 + /* Block new requests */ 1040 + spin_lock_irqsave(shost->host_lock, flags); 1041 + scsi_block_requests(shost); 1042 + spin_unlock_irqrestore(shost->host_lock, flags); 1043 + 1044 + /* Wait for any pending requests to complete */ 1045 + flush_work(&devinfo->work); 1046 + if (usb_wait_anchor_empty_timeout(&devinfo->sense_urbs, 5000) == 0) { 1047 + shost_printk(KERN_ERR, shost, "%s: timed out\n", __func__); 1048 + return 1; 1049 + } 1050 + 1051 + uas_free_streams(devinfo); 1052 + 1119 1053 return 0; 1120 1054 } 1121 1055 1122 1056 static int uas_post_reset(struct usb_interface *intf) 1123 1057 { 1124 - /* XXX: Need to return 1 if it's not our device in error handling */ 1058 + struct Scsi_Host 
*shost = usb_get_intfdata(intf); 1059 + struct uas_dev_info *devinfo = (struct uas_dev_info *)shost->hostdata; 1060 + unsigned long flags; 1061 + 1062 + if (devinfo->shutdown) 1063 + return 0; 1064 + 1065 + if (uas_configure_endpoints(devinfo) != 0) { 1066 + shost_printk(KERN_ERR, shost, 1067 + "%s: alloc streams error after reset", __func__); 1068 + return 1; 1069 + } 1070 + 1071 + spin_lock_irqsave(shost->host_lock, flags); 1072 + scsi_report_bus_reset(shost, 0); 1073 + spin_unlock_irqrestore(shost->host_lock, flags); 1074 + 1075 + scsi_unblock_requests(shost); 1076 + 1077 + return 0; 1078 + } 1079 + 1080 + static int uas_suspend(struct usb_interface *intf, pm_message_t message) 1081 + { 1082 + struct Scsi_Host *shost = usb_get_intfdata(intf); 1083 + struct uas_dev_info *devinfo = (struct uas_dev_info *)shost->hostdata; 1084 + 1085 + /* Wait for any pending requests to complete */ 1086 + flush_work(&devinfo->work); 1087 + if (usb_wait_anchor_empty_timeout(&devinfo->sense_urbs, 5000) == 0) { 1088 + shost_printk(KERN_ERR, shost, "%s: timed out\n", __func__); 1089 + return -ETIME; 1090 + } 1091 + 1092 + return 0; 1093 + } 1094 + 1095 + static int uas_resume(struct usb_interface *intf) 1096 + { 1097 + return 0; 1098 + } 1099 + 1100 + static int uas_reset_resume(struct usb_interface *intf) 1101 + { 1102 + struct Scsi_Host *shost = usb_get_intfdata(intf); 1103 + struct uas_dev_info *devinfo = (struct uas_dev_info *)shost->hostdata; 1104 + unsigned long flags; 1105 + 1106 + if (uas_configure_endpoints(devinfo) != 0) { 1107 + shost_printk(KERN_ERR, shost, 1108 + "%s: alloc streams error after reset", __func__); 1109 + return -EIO; 1110 + } 1111 + 1112 + spin_lock_irqsave(shost->host_lock, flags); 1113 + scsi_report_bus_reset(shost, 0); 1114 + spin_unlock_irqrestore(shost->host_lock, flags); 1115 + 1125 1116 return 0; 1126 1117 } 1127 1118 1128 1119 static void uas_disconnect(struct usb_interface *intf) 1129 1120 { 1130 1121 struct Scsi_Host *shost = 
usb_get_intfdata(intf); 1131 - struct uas_dev_info *devinfo = (void *)shost->hostdata[0]; 1122 + struct uas_dev_info *devinfo = (struct uas_dev_info *)shost->hostdata; 1132 1123 1133 1124 devinfo->resetting = 1; 1134 - uas_abort_work(devinfo); 1125 + cancel_work_sync(&devinfo->work); 1126 + uas_abort_inflight(devinfo, DID_NO_CONNECT, __func__); 1135 1127 usb_kill_anchored_urbs(&devinfo->cmd_urbs); 1136 1128 usb_kill_anchored_urbs(&devinfo->sense_urbs); 1137 1129 usb_kill_anchored_urbs(&devinfo->data_urbs); 1130 + uas_zap_dead(devinfo); 1138 1131 scsi_remove_host(shost); 1139 1132 uas_free_streams(devinfo); 1140 - kfree(devinfo); 1133 + scsi_host_put(shost); 1141 1134 } 1142 1135 1143 1136 /* 1144 - * XXX: Should this plug into libusual so we can auto-upgrade devices from 1145 - * Bulk-Only to UAS? 1137 + * Put the device back in usb-storage mode on shutdown, as some BIOS-es 1138 + * hang on reboot when the device is still in uas mode. Note the reset is 1139 + * necessary as some devices won't revert to usb-storage mode without it. 
1146 1140 */ 1141 + static void uas_shutdown(struct device *dev) 1142 + { 1143 + struct usb_interface *intf = to_usb_interface(dev); 1144 + struct usb_device *udev = interface_to_usbdev(intf); 1145 + struct Scsi_Host *shost = usb_get_intfdata(intf); 1146 + struct uas_dev_info *devinfo = (struct uas_dev_info *)shost->hostdata; 1147 + 1148 + if (system_state != SYSTEM_RESTART) 1149 + return; 1150 + 1151 + devinfo->shutdown = 1; 1152 + uas_free_streams(devinfo); 1153 + usb_set_interface(udev, intf->altsetting[0].desc.bInterfaceNumber, 0); 1154 + usb_reset_device(udev); 1155 + } 1156 + 1147 1157 static struct usb_driver uas_driver = { 1148 1158 .name = "uas", 1149 1159 .probe = uas_probe, 1150 1160 .disconnect = uas_disconnect, 1151 1161 .pre_reset = uas_pre_reset, 1152 1162 .post_reset = uas_post_reset, 1163 + .suspend = uas_suspend, 1164 + .resume = uas_resume, 1165 + .reset_resume = uas_reset_resume, 1166 + .drvwrap.driver.shutdown = uas_shutdown, 1153 1167 .id_table = uas_usb_ids, 1154 1168 }; 1155 1169 1156 1170 module_usb_driver(uas_driver); 1157 1171 1158 1172 MODULE_LICENSE("GPL"); 1159 - MODULE_AUTHOR("Matthew Wilcox and Sarah Sharp"); 1173 + MODULE_AUTHOR( 1174 + "Hans de Goede <hdegoede@redhat.com>, Matthew Wilcox and Sarah Sharp");
+5
drivers/usb/storage/unusual_devs.h
··· 2086 2086 "Digital MP3 Audio Player", 2087 2087 USB_SC_DEVICE, USB_PR_DEVICE, NULL, US_FL_NOT_LOCKABLE ), 2088 2088 2089 + /* Unusual uas devices */ 2090 + #if IS_ENABLED(CONFIG_USB_UAS) 2091 + #include "unusual_uas.h" 2092 + #endif 2093 + 2089 2094 /* Control/Bulk transport for all SubClass values */ 2090 2095 USUAL_DEV(USB_SC_RBC, USB_PR_CB), 2091 2096 USUAL_DEV(USB_SC_8020, USB_PR_CB),
+52
drivers/usb/storage/unusual_uas.h
··· 1 + /* Driver for USB Attached SCSI devices - Unusual Devices File 2 + * 3 + * (c) 2013 Hans de Goede <hdegoede@redhat.com> 4 + * 5 + * Based on the same file for the usb-storage driver, which is: 6 + * (c) 2000-2002 Matthew Dharm (mdharm-usb@one-eyed-alien.net) 7 + * (c) 2000 Adam J. Richter (adam@yggdrasil.com), Yggdrasil Computing, Inc. 8 + * 9 + * This program is free software; you can redistribute it and/or modify it 10 + * under the terms of the GNU General Public License as published by the 11 + * Free Software Foundation; either version 2, or (at your option) any 12 + * later version. 13 + * 14 + * This program is distributed in the hope that it will be useful, but 15 + * WITHOUT ANY WARRANTY; without even the implied warranty of 16 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 17 + * General Public License for more details. 18 + * 19 + * You should have received a copy of the GNU General Public License along 20 + * with this program; if not, write to the Free Software Foundation, Inc., 21 + * 675 Mass Ave, Cambridge, MA 02139, USA. 22 + */ 23 + 24 + /* 25 + * IMPORTANT NOTE: This file must be included in another file which defines 26 + * a UNUSUAL_DEV macro before this file is included. 27 + */ 28 + 29 + /* 30 + * If you edit this file, please try to keep it sorted first by VendorID, 31 + * then by ProductID. 32 + * 33 + * If you want to add an entry for this file, be sure to include the 34 + * following information: 35 + * - a patch that adds the entry for your device, including your 36 + * email address right above the entry (plus maybe a brief 37 + * explanation of the reason for the entry), 38 + * - lsusb -v output for the device 39 + * Send your submission to Hans de Goede <hdegoede@redhat.com> 40 + * and don't forget to CC: the USB development list <linux-usb@vger.kernel.org> 41 + */ 42 + 43 + /* 44 + * This is an example entry for the US_FL_IGNORE_UAS flag. 
Once we have an 45 + * actual entry using US_FL_IGNORE_UAS this entry should be removed. 46 + * 47 + * UNUSUAL_DEV( 0xabcd, 0x1234, 0x0100, 0x0100, 48 + * "Example", 49 + * "Storage with broken UAS", 50 + * USB_SC_DEVICE, USB_PR_DEVICE, NULL, 51 + * US_FL_IGNORE_UAS), 52 + */
+20 -6
drivers/usb/storage/usb.c
··· 72 72 #include "sierra_ms.h" 73 73 #include "option_ms.h" 74 74 75 + #if IS_ENABLED(CONFIG_USB_UAS) 76 + #include "uas-detect.h" 77 + #endif 78 + 75 79 /* Some informational data */ 76 80 MODULE_AUTHOR("Matthew Dharm <mdharm-usb@one-eyed-alien.net>"); 77 81 MODULE_DESCRIPTION("USB Mass Storage driver for Linux"); ··· 463 459 #define TOLOWER(x) ((x) | 0x20) 464 460 465 461 /* Adjust device flags based on the "quirks=" module parameter */ 466 - static void adjust_quirks(struct us_data *us) 462 + void usb_stor_adjust_quirks(struct usb_device *udev, unsigned long *fflags) 467 463 { 468 464 char *p; 469 - u16 vid = le16_to_cpu(us->pusb_dev->descriptor.idVendor); 470 - u16 pid = le16_to_cpu(us->pusb_dev->descriptor.idProduct); 465 + u16 vid = le16_to_cpu(udev->descriptor.idVendor); 466 + u16 pid = le16_to_cpu(udev->descriptor.idProduct); 471 467 unsigned f = 0; 472 468 unsigned int mask = (US_FL_SANE_SENSE | US_FL_BAD_SENSE | 473 - US_FL_FIX_CAPACITY | 469 + US_FL_FIX_CAPACITY | US_FL_IGNORE_UAS | 474 470 US_FL_CAPACITY_HEURISTICS | US_FL_IGNORE_DEVICE | 475 471 US_FL_NOT_LOCKABLE | US_FL_MAX_SECTORS_64 | 476 472 US_FL_CAPACITY_OK | US_FL_IGNORE_RESIDUE | ··· 541 537 case 's': 542 538 f |= US_FL_SINGLE_LUN; 543 539 break; 540 + case 'u': 541 + f |= US_FL_IGNORE_UAS; 542 + break; 544 543 case 'w': 545 544 f |= US_FL_NO_WP_DETECT; 546 545 break; 547 546 /* Ignore unrecognized flag characters */ 548 547 } 549 548 } 550 - us->fflags = (us->fflags & ~mask) | f; 549 + *fflags = (*fflags & ~mask) | f; 551 550 } 551 + EXPORT_SYMBOL_GPL(usb_stor_adjust_quirks); 552 552 553 553 /* Get the unusual_devs entries and the string descriptors */ 554 554 static int get_device_info(struct us_data *us, const struct usb_device_id *id, ··· 572 564 idesc->bInterfaceProtocol : 573 565 unusual_dev->useTransport; 574 566 us->fflags = id->driver_info; 575 - adjust_quirks(us); 567 + usb_stor_adjust_quirks(us->pusb_dev, &us->fflags); 576 568 577 569 if (us->fflags & US_FL_IGNORE_DEVICE) { 578 
570 dev_info(pdev, "device ignored\n"); ··· 1042 1034 struct us_data *us; 1043 1035 int result; 1044 1036 int size; 1037 + 1038 + /* If uas is enabled and this device can do uas then ignore it. */ 1039 + #if IS_ENABLED(CONFIG_USB_UAS) 1040 + if (uas_use_uas_driver(intf, id)) 1041 + return -ENXIO; 1042 + #endif 1045 1043 1046 1044 /* 1047 1045 * If the device isn't standard (is handled by a subdriver
+3
drivers/usb/storage/usb.h
··· 201 201 extern int usb_stor_probe2(struct us_data *us); 202 202 extern void usb_stor_disconnect(struct usb_interface *intf); 203 203 204 + extern void usb_stor_adjust_quirks(struct usb_device *dev, 205 + unsigned long *fflags); 206 + 204 207 #endif
+1 -3
drivers/usb/wusbcore/devconnect.c
··· 284 284 struct device *dev = wusbhc->dev; 285 285 struct wusb_dev *wusb_dev; 286 286 struct wusb_port *port; 287 - unsigned idx, devnum; 287 + unsigned idx; 288 288 289 289 mutex_lock(&wusbhc->mutex); 290 290 ··· 311 311 * connection, right? */ 312 312 goto error_unlock; 313 313 } 314 - 315 - devnum = idx + 2; 316 314 317 315 /* Make sure we are using no crypto on that "virtual port" */ 318 316 wusbhc->set_ptk(wusbhc, idx, 0, NULL, 0);
-2
drivers/usb/wusbcore/wa-hc.c
··· 75 75 if (wa->dti_urb) { 76 76 usb_kill_urb(wa->dti_urb); 77 77 usb_put_urb(wa->dti_urb); 78 - usb_kill_urb(wa->buf_in_urb); 79 - usb_put_urb(wa->buf_in_urb); 80 78 } 81 79 kfree(wa->dti_buf); 82 80 wa_nep_destroy(wa);
+24 -2
drivers/usb/wusbcore/wa-hc.h
··· 125 125 126 126 enum wa_dti_state { 127 127 WA_DTI_TRANSFER_RESULT_PENDING, 128 - WA_DTI_ISOC_PACKET_STATUS_PENDING 128 + WA_DTI_ISOC_PACKET_STATUS_PENDING, 129 + WA_DTI_BUF_IN_DATA_PENDING 129 130 }; 130 131 131 132 enum wa_quirks { ··· 135 134 * requests to be concatenated and not sent as separate packets. 136 135 */ 137 136 WUSB_QUIRK_ALEREON_HWA_CONCAT_ISOC = 0x01, 137 + /* 138 + * The Alereon HWA can be instructed to not send transfer notifications 139 + * as an optimization. 140 + */ 141 + WUSB_QUIRK_ALEREON_HWA_DISABLE_XFER_NOTIFICATIONS = 0x02, 138 142 }; 139 143 144 + enum wa_vendor_specific_requests { 145 + WA_REQ_ALEREON_DISABLE_XFER_NOTIFICATIONS = 0x4C, 146 + WA_REQ_ALEREON_FEATURE_SET = 0x01, 147 + WA_REQ_ALEREON_FEATURE_CLEAR = 0x00, 148 + }; 149 + 150 + #define WA_MAX_BUF_IN_URBS 4 140 151 /** 141 152 * Instance of a HWA Host Controller 142 153 * ··· 219 206 u32 dti_isoc_xfer_in_progress; 220 207 u8 dti_isoc_xfer_seg; 221 208 struct urb *dti_urb; /* URB for reading xfer results */ 222 - struct urb *buf_in_urb; /* URB for reading data in */ 209 + /* URBs for reading data in */ 210 + struct urb buf_in_urbs[WA_MAX_BUF_IN_URBS]; 211 + int active_buf_in_urbs; /* number of buf_in_urbs active. 
*/ 223 212 struct edc dti_edc; /* DTI error density counter */ 224 213 void *dti_buf; 225 214 size_t dti_buf_size; ··· 249 234 extern int wa_create(struct wahc *wa, struct usb_interface *iface, 250 235 kernel_ulong_t); 251 236 extern void __wa_destroy(struct wahc *wa); 237 + extern int wa_dti_start(struct wahc *wa); 252 238 void wa_reset_all(struct wahc *wa); 253 239 254 240 ··· 291 275 292 276 static inline void wa_init(struct wahc *wa) 293 277 { 278 + int index; 279 + 294 280 edc_init(&wa->nep_edc); 295 281 atomic_set(&wa->notifs_queued, 0); 296 282 wa->dti_state = WA_DTI_TRANSFER_RESULT_PENDING; ··· 306 288 INIT_WORK(&wa->xfer_error_work, wa_process_errored_transfers_run); 307 289 wa->dto_in_use = 0; 308 290 atomic_set(&wa->xfer_id_count, 1); 291 + /* init the buf in URBs */ 292 + for (index = 0; index < WA_MAX_BUF_IN_URBS; ++index) 293 + usb_init_urb(&(wa->buf_in_urbs[index])); 294 + wa->active_buf_in_urbs = 0; 309 295 } 310 296 311 297 /**
+1 -1
drivers/usb/wusbcore/wa-rpipe.c
··· 298 298 break; 299 299 } 300 300 itr += hdr->bLength; 301 - itr_size -= hdr->bDescriptorType; 301 + itr_size -= hdr->bLength; 302 302 } 303 303 out: 304 304 return epcd;
+413 -174
drivers/usb/wusbcore/wa-xfer.c
··· 167 167 168 168 static void __wa_populate_dto_urb_isoc(struct wa_xfer *xfer, 169 169 struct wa_seg *seg, int curr_iso_frame); 170 + static void wa_complete_remaining_xfer_segs(struct wa_xfer *xfer, 171 + int starting_index, enum wa_seg_status status); 170 172 171 173 static inline void wa_xfer_init(struct wa_xfer *xfer) 172 174 { ··· 369 367 break; 370 368 case WA_SEG_ERROR: 371 369 xfer->result = seg->result; 372 - dev_dbg(dev, "xfer %p ID %08X#%u: ERROR result %zu(0x%08zX)\n", 370 + dev_dbg(dev, "xfer %p ID %08X#%u: ERROR result %zi(0x%08zX)\n", 373 371 xfer, wa_xfer_id(xfer), seg->index, seg->result, 374 372 seg->result); 375 373 goto out; 376 374 case WA_SEG_ABORTED: 377 375 xfer->result = seg->result; 378 - dev_dbg(dev, "xfer %p ID %08X#%u: ABORTED result %zu(0x%08zX)\n", 376 + dev_dbg(dev, "xfer %p ID %08X#%u: ABORTED result %zi(0x%08zX)\n", 379 377 xfer, wa_xfer_id(xfer), seg->index, seg->result, 380 378 seg->result); 381 379 goto out; ··· 389 387 xfer->result = 0; 390 388 out: 391 389 return result; 390 + } 391 + 392 + /* 393 + * Mark the given segment as done. Return true if this completes the xfer. 394 + * This should only be called for segs that have been submitted to an RPIPE. 395 + * Delayed segs are not marked as submitted so they do not need to be marked 396 + * as done when cleaning up. 397 + * 398 + * xfer->lock has to be locked 399 + */ 400 + static unsigned __wa_xfer_mark_seg_as_done(struct wa_xfer *xfer, 401 + struct wa_seg *seg, enum wa_seg_status status) 402 + { 403 + seg->status = status; 404 + xfer->segs_done++; 405 + 406 + /* check for done. 
*/ 407 + return __wa_xfer_is_done(xfer); 392 408 } 393 409 394 410 /* ··· 436 416 437 417 struct wa_xfer_abort_buffer { 438 418 struct urb urb; 419 + struct wahc *wa; 439 420 struct wa_xfer_abort cmd; 440 421 }; 441 422 442 423 static void __wa_xfer_abort_cb(struct urb *urb) 443 424 { 444 425 struct wa_xfer_abort_buffer *b = urb->context; 426 + struct wahc *wa = b->wa; 427 + 428 + /* 429 + * If the abort request URB failed, then the HWA did not get the abort 430 + * command. Forcibly clean up the xfer without waiting for a Transfer 431 + * Result from the HWA. 432 + */ 433 + if (urb->status < 0) { 434 + struct wa_xfer *xfer; 435 + struct device *dev = &wa->usb_iface->dev; 436 + 437 + xfer = wa_xfer_get_by_id(wa, le32_to_cpu(b->cmd.dwTransferID)); 438 + dev_err(dev, "%s: Transfer Abort request failed. result: %d\n", 439 + __func__, urb->status); 440 + if (xfer) { 441 + unsigned long flags; 442 + int done; 443 + struct wa_rpipe *rpipe = xfer->ep->hcpriv; 444 + 445 + dev_err(dev, "%s: cleaning up xfer %p ID 0x%08X.\n", 446 + __func__, xfer, wa_xfer_id(xfer)); 447 + spin_lock_irqsave(&xfer->lock, flags); 448 + /* mark all segs as aborted. 
*/ 449 + wa_complete_remaining_xfer_segs(xfer, 0, 450 + WA_SEG_ABORTED); 451 + done = __wa_xfer_is_done(xfer); 452 + spin_unlock_irqrestore(&xfer->lock, flags); 453 + if (done) 454 + wa_xfer_completion(xfer); 455 + wa_xfer_delayed_run(rpipe); 456 + wa_xfer_put(xfer); 457 + } else { 458 + dev_err(dev, "%s: xfer ID 0x%08X already gone.\n", 459 + __func__, le32_to_cpu(b->cmd.dwTransferID)); 460 + } 461 + } 462 + 463 + wa_put(wa); /* taken in __wa_xfer_abort */ 445 464 usb_put_urb(&b->urb); 446 465 } 447 466 ··· 508 449 b->cmd.bRequestType = WA_XFER_ABORT; 509 450 b->cmd.wRPipe = rpipe->descr.wRPipeIndex; 510 451 b->cmd.dwTransferID = wa_xfer_id_le32(xfer); 452 + b->wa = wa_get(xfer->wa); 511 453 512 454 usb_init_urb(&b->urb); 513 455 usb_fill_bulk_urb(&b->urb, xfer->wa->usb_dev, ··· 522 462 523 463 524 464 error_submit: 465 + wa_put(xfer->wa); 525 466 if (printk_ratelimit()) 526 467 dev_err(dev, "xfer %p: Can't submit abort request: %d\n", 527 468 xfer, result); ··· 794 733 seg->isoc_frame_offset + seg->isoc_frame_index); 795 734 796 735 /* resubmit the URB with the next isoc frame. */ 736 + /* take a ref on resubmit. */ 737 + wa_xfer_get(xfer); 797 738 result = usb_submit_urb(seg->dto_urb, GFP_ATOMIC); 798 739 if (result < 0) { 799 740 dev_err(dev, "xfer 0x%08X#%u: DTO submit failed: %d\n", ··· 823 760 goto error_default; 824 761 } 825 762 763 + /* taken when this URB was submitted. */ 764 + wa_xfer_put(xfer); 826 765 return; 827 766 828 767 error_dto_submit: 768 + /* taken on resubmit attempt. 
*/ 769 + wa_xfer_put(xfer); 829 770 error_default: 830 771 spin_lock_irqsave(&xfer->lock, flags); 831 772 rpipe = xfer->ep->hcpriv; ··· 839 772 wa_reset_all(wa); 840 773 } 841 774 if (seg->status != WA_SEG_ERROR) { 842 - seg->status = WA_SEG_ERROR; 843 775 seg->result = urb->status; 844 - xfer->segs_done++; 845 776 __wa_xfer_abort(xfer); 846 777 rpipe_ready = rpipe_avail_inc(rpipe); 847 - done = __wa_xfer_is_done(xfer); 778 + done = __wa_xfer_mark_seg_as_done(xfer, seg, WA_SEG_ERROR); 848 779 } 849 780 spin_unlock_irqrestore(&xfer->lock, flags); 850 781 if (holding_dto) { ··· 853 788 wa_xfer_completion(xfer); 854 789 if (rpipe_ready) 855 790 wa_xfer_delayed_run(rpipe); 856 - 791 + /* taken when this URB was submitted. */ 792 + wa_xfer_put(xfer); 857 793 } 858 794 859 795 /* ··· 908 842 } 909 843 if (seg->status != WA_SEG_ERROR) { 910 844 usb_unlink_urb(seg->dto_urb); 911 - seg->status = WA_SEG_ERROR; 912 845 seg->result = urb->status; 913 - xfer->segs_done++; 914 846 __wa_xfer_abort(xfer); 915 847 rpipe_ready = rpipe_avail_inc(rpipe); 916 - done = __wa_xfer_is_done(xfer); 848 + done = __wa_xfer_mark_seg_as_done(xfer, seg, 849 + WA_SEG_ERROR); 917 850 } 918 851 spin_unlock_irqrestore(&xfer->lock, flags); 919 852 if (done) ··· 920 855 if (rpipe_ready) 921 856 wa_xfer_delayed_run(rpipe); 922 857 } 858 + /* taken when this URB was submitted. 
*/ 859 + wa_xfer_put(xfer); 923 860 } 924 861 925 862 /* ··· 986 919 } 987 920 usb_unlink_urb(seg->isoc_pack_desc_urb); 988 921 usb_unlink_urb(seg->dto_urb); 989 - seg->status = WA_SEG_ERROR; 990 922 seg->result = urb->status; 991 - xfer->segs_done++; 992 923 __wa_xfer_abort(xfer); 993 924 rpipe_ready = rpipe_avail_inc(rpipe); 994 - done = __wa_xfer_is_done(xfer); 925 + done = __wa_xfer_mark_seg_as_done(xfer, seg, WA_SEG_ERROR); 995 926 spin_unlock_irqrestore(&xfer->lock, flags); 996 927 if (done) 997 928 wa_xfer_completion(xfer); 998 929 if (rpipe_ready) 999 930 wa_xfer_delayed_run(rpipe); 1000 931 } 932 + /* taken when this URB was submitted. */ 933 + wa_xfer_put(xfer); 1001 934 } 1002 935 1003 936 /* ··· 1007 940 */ 1008 941 static struct scatterlist *wa_xfer_create_subset_sg(struct scatterlist *in_sg, 1009 942 const unsigned int bytes_transferred, 1010 - const unsigned int bytes_to_transfer, unsigned int *out_num_sgs) 943 + const unsigned int bytes_to_transfer, int *out_num_sgs) 1011 944 { 1012 945 struct scatterlist *out_sg; 1013 946 unsigned int bytes_processed = 0, offset_into_current_page_data = 0, ··· 1161 1094 */ 1162 1095 static int __wa_xfer_setup_segs(struct wa_xfer *xfer, size_t xfer_hdr_size) 1163 1096 { 1164 - int result, cnt, iso_frame_offset; 1097 + int result, cnt, isoc_frame_offset = 0; 1165 1098 size_t alloc_size = sizeof(*xfer->seg[0]) 1166 1099 - sizeof(xfer->seg[0]->xfer_hdr) + xfer_hdr_size; 1167 1100 struct usb_device *usb_dev = xfer->wa->usb_dev; 1168 1101 const struct usb_endpoint_descriptor *dto_epd = xfer->wa->dto_epd; 1169 1102 struct wa_seg *seg; 1170 1103 size_t buf_itr, buf_size, buf_itr_size; 1171 - int isoc_frame_offset = 0; 1172 1104 1173 1105 result = -ENOMEM; 1174 1106 xfer->seg = kcalloc(xfer->segs, sizeof(xfer->seg[0]), GFP_ATOMIC); ··· 1175 1109 goto error_segs_kzalloc; 1176 1110 buf_itr = 0; 1177 1111 buf_size = xfer->urb->transfer_buffer_length; 1178 - iso_frame_offset = 0; 1179 1112 for (cnt = 0; cnt < xfer->segs; cnt++) 
{ 1180 1113 size_t iso_pkt_descr_size = 0; 1181 1114 int seg_isoc_frame_count = 0, seg_isoc_size = 0; ··· 1383 1318 /* default to done unless we encounter a multi-frame isoc segment. */ 1384 1319 *dto_done = 1; 1385 1320 1321 + /* 1322 + * Take a ref for each segment urb so the xfer cannot disappear until 1323 + * all of the callbacks run. 1324 + */ 1325 + wa_xfer_get(xfer); 1386 1326 /* submit the transfer request. */ 1327 + seg->status = WA_SEG_SUBMITTED; 1387 1328 result = usb_submit_urb(&seg->tr_urb, GFP_ATOMIC); 1388 1329 if (result < 0) { 1389 1330 pr_err("%s: xfer %p#%u: REQ submit failed: %d\n", 1390 1331 __func__, xfer, seg->index, result); 1391 - goto error_seg_submit; 1332 + wa_xfer_put(xfer); 1333 + goto error_tr_submit; 1392 1334 } 1393 1335 /* submit the isoc packet descriptor if present. */ 1394 1336 if (seg->isoc_pack_desc_urb) { 1337 + wa_xfer_get(xfer); 1395 1338 result = usb_submit_urb(seg->isoc_pack_desc_urb, GFP_ATOMIC); 1396 1339 seg->isoc_frame_index = 0; 1397 1340 if (result < 0) { 1398 1341 pr_err("%s: xfer %p#%u: ISO packet descriptor submit failed: %d\n", 1399 1342 __func__, xfer, seg->index, result); 1343 + wa_xfer_put(xfer); 1400 1344 goto error_iso_pack_desc_submit; 1401 1345 } 1402 1346 } 1403 1347 /* submit the out data if this is an out request. 
*/ 1404 1348 if (seg->dto_urb) { 1405 1349 struct wahc *wa = xfer->wa; 1350 + wa_xfer_get(xfer); 1406 1351 result = usb_submit_urb(seg->dto_urb, GFP_ATOMIC); 1407 1352 if (result < 0) { 1408 1353 pr_err("%s: xfer %p#%u: DTO submit failed: %d\n", 1409 1354 __func__, xfer, seg->index, result); 1355 + wa_xfer_put(xfer); 1410 1356 goto error_dto_submit; 1411 1357 } 1412 1358 /* ··· 1429 1353 && (seg->isoc_frame_count > 1)) 1430 1354 *dto_done = 0; 1431 1355 } 1432 - seg->status = WA_SEG_SUBMITTED; 1433 1356 rpipe_avail_dec(rpipe); 1434 1357 return 0; 1435 1358 ··· 1436 1361 usb_unlink_urb(seg->isoc_pack_desc_urb); 1437 1362 error_iso_pack_desc_submit: 1438 1363 usb_unlink_urb(&seg->tr_urb); 1439 - error_seg_submit: 1364 + error_tr_submit: 1440 1365 seg->status = WA_SEG_ERROR; 1441 1366 seg->result = result; 1442 1367 *dto_done = 1; ··· 1468 1393 list_node); 1469 1394 list_del(&seg->list_node); 1470 1395 xfer = seg->xfer; 1396 + /* 1397 + * Get a reference to the xfer in case the callbacks for the 1398 + * URBs submitted by __wa_seg_submit attempt to complete 1399 + * the xfer before this function completes. 1400 + */ 1401 + wa_xfer_get(xfer); 1471 1402 result = __wa_seg_submit(rpipe, xfer, seg, &dto_done); 1472 1403 /* release the dto resource if this RPIPE is done with it. */ 1473 1404 if (dto_done) ··· 1482 1401 xfer, wa_xfer_id(xfer), seg->index, 1483 1402 atomic_read(&rpipe->segs_available), result); 1484 1403 if (unlikely(result < 0)) { 1404 + int done; 1405 + 1485 1406 spin_unlock_irqrestore(&rpipe->seg_lock, flags); 1486 1407 spin_lock_irqsave(&xfer->lock, flags); 1487 1408 __wa_xfer_abort(xfer); 1409 + /* 1410 + * This seg was marked as submitted when it was put on 1411 + * the RPIPE seg_list. Mark it done. 
1412 + */ 1488 1413 xfer->segs_done++; 1414 + done = __wa_xfer_is_done(xfer); 1489 1415 spin_unlock_irqrestore(&xfer->lock, flags); 1416 + if (done) 1417 + wa_xfer_completion(xfer); 1490 1418 spin_lock_irqsave(&rpipe->seg_lock, flags); 1491 1419 } 1420 + wa_xfer_put(xfer); 1492 1421 } 1493 1422 /* 1494 1423 * Mark this RPIPE as waiting if dto was not acquired, there are ··· 1683 1592 dev_err(&(urb->dev->dev), "%s: error_xfer_setup\n", __func__); 1684 1593 goto error_xfer_setup; 1685 1594 } 1595 + /* 1596 + * Get a xfer reference since __wa_xfer_submit starts asynchronous 1597 + * operations that may try to complete the xfer before this function 1598 + * exits. 1599 + */ 1600 + wa_xfer_get(xfer); 1686 1601 result = __wa_xfer_submit(xfer); 1687 1602 if (result < 0) { 1688 1603 dev_err(&(urb->dev->dev), "%s: error_xfer_submit\n", __func__); 1689 1604 goto error_xfer_submit; 1690 1605 } 1691 1606 spin_unlock_irqrestore(&xfer->lock, flags); 1607 + wa_xfer_put(xfer); 1692 1608 return 0; 1693 1609 1694 1610 /* ··· 1721 1623 spin_unlock_irqrestore(&xfer->lock, flags); 1722 1624 if (done) 1723 1625 wa_xfer_completion(xfer); 1626 + wa_xfer_put(xfer); 1724 1627 /* return success since the completion routine will run. */ 1725 1628 return 0; 1726 1629 } ··· 1931 1832 /* check if it is safe to unlink. */ 1932 1833 spin_lock_irqsave(&wa->xfer_list_lock, flags); 1933 1834 result = usb_hcd_check_unlink_urb(&(wa->wusb->usb_hcd), urb, status); 1835 + if ((result == 0) && urb->hcpriv) { 1836 + /* 1837 + * Get a xfer ref to prevent a race with wa_xfer_giveback 1838 + * cleaning up the xfer while we are working with it. 
1839 + */ 1840 + wa_xfer_get(urb->hcpriv); 1841 + } 1934 1842 spin_unlock_irqrestore(&wa->xfer_list_lock, flags); 1935 1843 if (result) 1936 1844 return result; 1937 1845 1938 1846 xfer = urb->hcpriv; 1939 - if (xfer == NULL) { 1940 - /* 1941 - * Nothing setup yet enqueue will see urb->status != 1942 - * -EINPROGRESS (by hcd layer) and bail out with 1943 - * error, no need to do completion 1944 - */ 1945 - BUG_ON(urb->status == -EINPROGRESS); 1946 - goto out; 1947 - } 1847 + if (xfer == NULL) 1848 + return -ENOENT; 1948 1849 spin_lock_irqsave(&xfer->lock, flags); 1949 1850 pr_debug("%s: DEQUEUE xfer id 0x%08X\n", __func__, wa_xfer_id(xfer)); 1950 1851 rpipe = xfer->ep->hcpriv; ··· 1952 1853 pr_debug("%s: xfer %p id 0x%08X has no RPIPE. %s", 1953 1854 __func__, xfer, wa_xfer_id(xfer), 1954 1855 "Probably already aborted.\n" ); 1856 + result = -ENOENT; 1857 + goto out_unlock; 1858 + } 1859 + /* 1860 + * Check for done to avoid racing with wa_xfer_giveback and completing 1861 + * twice. 1862 + */ 1863 + if (__wa_xfer_is_done(xfer)) { 1864 + pr_debug("%s: xfer %p id 0x%08X already done.\n", __func__, 1865 + xfer, wa_xfer_id(xfer)); 1955 1866 result = -ENOENT; 1956 1867 goto out_unlock; 1957 1868 } ··· 1974 1865 goto out_unlock; /* setup(), enqueue_b() completes */ 1975 1866 /* Ok, the xfer is in flight already, it's been setup and submitted.*/ 1976 1867 xfer_abort_pending = __wa_xfer_abort(xfer) >= 0; 1868 + /* 1869 + * grab the rpipe->seg_lock here to prevent racing with 1870 + * __wa_xfer_delayed_run. 
1871 + */ 1872 + spin_lock(&rpipe->seg_lock); 1977 1873 for (cnt = 0; cnt < xfer->segs; cnt++) { 1978 1874 seg = xfer->seg[cnt]; 1979 1875 pr_debug("%s: xfer id 0x%08X#%d status = %d\n", ··· 1999 1885 */ 2000 1886 seg->status = WA_SEG_ABORTED; 2001 1887 seg->result = -ENOENT; 2002 - spin_lock_irqsave(&rpipe->seg_lock, flags2); 2003 1888 list_del(&seg->list_node); 2004 1889 xfer->segs_done++; 2005 - spin_unlock_irqrestore(&rpipe->seg_lock, flags2); 2006 1890 break; 2007 1891 case WA_SEG_DONE: 2008 1892 case WA_SEG_ERROR: 2009 1893 case WA_SEG_ABORTED: 1894 + break; 1895 + /* 1896 + * The buf_in data for a segment in the 1897 + * WA_SEG_DTI_PENDING state is actively being read. 1898 + * Let wa_buf_in_cb handle it since it will be called 1899 + * and will increment xfer->segs_done. Cleaning up 1900 + * here could cause wa_buf_in_cb to access the xfer 1901 + * after it has been completed/freed. 1902 + */ 1903 + case WA_SEG_DTI_PENDING: 2010 1904 break; 2011 1905 /* 2012 1906 * In the states below, the HWA device already knows ··· 2025 1903 */ 2026 1904 case WA_SEG_SUBMITTED: 2027 1905 case WA_SEG_PENDING: 2028 - case WA_SEG_DTI_PENDING: 2029 1906 /* 2030 1907 * Check if the abort was successfully sent. 
This could 2031 1908 * be false if the HWA has been removed but we haven't ··· 2038 1917 break; 2039 1918 } 2040 1919 } 1920 + spin_unlock(&rpipe->seg_lock); 2041 1921 xfer->result = urb->status; /* -ENOENT or -ECONNRESET */ 2042 1922 done = __wa_xfer_is_done(xfer); 2043 1923 spin_unlock_irqrestore(&xfer->lock, flags); ··· 2046 1924 wa_xfer_completion(xfer); 2047 1925 if (rpipe_ready) 2048 1926 wa_xfer_delayed_run(rpipe); 1927 + wa_xfer_put(xfer); 2049 1928 return result; 2050 1929 2051 1930 out_unlock: 2052 1931 spin_unlock_irqrestore(&xfer->lock, flags); 2053 - out: 1932 + wa_xfer_put(xfer); 2054 1933 return result; 2055 1934 2056 1935 dequeue_delayed: ··· 2060 1937 xfer->result = urb->status; 2061 1938 spin_unlock_irqrestore(&xfer->lock, flags); 2062 1939 wa_xfer_giveback(xfer); 1940 + wa_xfer_put(xfer); 2063 1941 usb_put_urb(urb); /* we got a ref in enqueue() */ 2064 1942 return 0; 2065 1943 } ··· 2120 1996 * no other segment transfer results will be returned from the device. 2121 1997 * Mark the remaining submitted or pending xfers as completed so that 2122 1998 * the xfer will complete cleanly. 1999 + * 2000 + * xfer->lock must be held 2001 + * 2123 2002 */ 2124 2003 static void wa_complete_remaining_xfer_segs(struct wa_xfer *xfer, 2125 - struct wa_seg *incoming_seg, enum wa_seg_status status) 2004 + int starting_index, enum wa_seg_status status) 2126 2005 { 2127 2006 int index; 2128 2007 struct wa_rpipe *rpipe = xfer->ep->hcpriv; 2129 2008 2130 - for (index = incoming_seg->index + 1; index < xfer->segs_submitted; 2131 - index++) { 2009 + for (index = starting_index; index < xfer->segs_submitted; index++) { 2132 2010 struct wa_seg *current_seg = xfer->seg[index]; 2133 2011 2134 2012 BUG_ON(current_seg == NULL); ··· 2159 2033 } 2160 2034 } 2161 2035 2162 - /* Populate the wa->buf_in_urb based on the current isoc transfer state. 
*/ 2163 - static void __wa_populate_buf_in_urb_isoc(struct wahc *wa, struct wa_xfer *xfer, 2164 - struct wa_seg *seg, int curr_iso_frame) 2036 + /* Populate the given urb based on the current isoc transfer state. */ 2037 + static int __wa_populate_buf_in_urb_isoc(struct wahc *wa, 2038 + struct urb *buf_in_urb, struct wa_xfer *xfer, struct wa_seg *seg) 2165 2039 { 2166 - BUG_ON(wa->buf_in_urb->status == -EINPROGRESS); 2040 + int urb_start_frame = seg->isoc_frame_index + seg->isoc_frame_offset; 2041 + int seg_index, total_len = 0, urb_frame_index = urb_start_frame; 2042 + struct usb_iso_packet_descriptor *iso_frame_desc = 2043 + xfer->urb->iso_frame_desc; 2044 + const int dti_packet_size = usb_endpoint_maxp(wa->dti_epd); 2045 + int next_frame_contiguous; 2046 + struct usb_iso_packet_descriptor *iso_frame; 2047 + 2048 + BUG_ON(buf_in_urb->status == -EINPROGRESS); 2049 + 2050 + /* 2051 + * If the current frame actual_length is contiguous with the next frame 2052 + * and actual_length is a multiple of the DTI endpoint max packet size, 2053 + * combine the current frame with the next frame in a single URB. This 2054 + * reduces the number of URBs that must be submitted in that case. 2055 + */ 2056 + seg_index = seg->isoc_frame_index; 2057 + do { 2058 + next_frame_contiguous = 0; 2059 + 2060 + iso_frame = &iso_frame_desc[urb_frame_index]; 2061 + total_len += iso_frame->actual_length; 2062 + ++urb_frame_index; 2063 + ++seg_index; 2064 + 2065 + if (seg_index < seg->isoc_frame_count) { 2066 + struct usb_iso_packet_descriptor *next_iso_frame; 2067 + 2068 + next_iso_frame = &iso_frame_desc[urb_frame_index]; 2069 + 2070 + if ((iso_frame->offset + iso_frame->actual_length) == 2071 + next_iso_frame->offset) 2072 + next_frame_contiguous = 1; 2073 + } 2074 + } while (next_frame_contiguous 2075 + && ((iso_frame->actual_length % dti_packet_size) == 0)); 2167 2076 2168 2077 /* this should always be 0 before a resubmit. 
*/ 2169 - wa->buf_in_urb->num_mapped_sgs = 0; 2170 - wa->buf_in_urb->transfer_dma = xfer->urb->transfer_dma + 2171 - xfer->urb->iso_frame_desc[curr_iso_frame].offset; 2172 - wa->buf_in_urb->transfer_buffer_length = 2173 - xfer->urb->iso_frame_desc[curr_iso_frame].length; 2174 - wa->buf_in_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; 2175 - wa->buf_in_urb->transfer_buffer = NULL; 2176 - wa->buf_in_urb->sg = NULL; 2177 - wa->buf_in_urb->num_sgs = 0; 2178 - wa->buf_in_urb->context = seg; 2078 + buf_in_urb->num_mapped_sgs = 0; 2079 + buf_in_urb->transfer_dma = xfer->urb->transfer_dma + 2080 + iso_frame_desc[urb_start_frame].offset; 2081 + buf_in_urb->transfer_buffer_length = total_len; 2082 + buf_in_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; 2083 + buf_in_urb->transfer_buffer = NULL; 2084 + buf_in_urb->sg = NULL; 2085 + buf_in_urb->num_sgs = 0; 2086 + buf_in_urb->context = seg; 2087 + 2088 + /* return the number of frames included in this URB. */ 2089 + return seg_index - seg->isoc_frame_index; 2179 2090 } 2180 2091 2181 - /* Populate the wa->buf_in_urb based on the current transfer state. */ 2182 - static int wa_populate_buf_in_urb(struct wahc *wa, struct wa_xfer *xfer, 2092 + /* Populate the given urb based on the current transfer state. */ 2093 + static int wa_populate_buf_in_urb(struct urb *buf_in_urb, struct wa_xfer *xfer, 2183 2094 unsigned int seg_idx, unsigned int bytes_transferred) 2184 2095 { 2185 2096 int result = 0; 2186 2097 struct wa_seg *seg = xfer->seg[seg_idx]; 2187 2098 2188 - BUG_ON(wa->buf_in_urb->status == -EINPROGRESS); 2099 + BUG_ON(buf_in_urb->status == -EINPROGRESS); 2189 2100 /* this should always be 0 before a resubmit. 
*/ 2190 - wa->buf_in_urb->num_mapped_sgs = 0; 2101 + buf_in_urb->num_mapped_sgs = 0; 2191 2102 2192 2103 if (xfer->is_dma) { 2193 - wa->buf_in_urb->transfer_dma = xfer->urb->transfer_dma 2104 + buf_in_urb->transfer_dma = xfer->urb->transfer_dma 2194 2105 + (seg_idx * xfer->seg_size); 2195 - wa->buf_in_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; 2196 - wa->buf_in_urb->transfer_buffer = NULL; 2197 - wa->buf_in_urb->sg = NULL; 2198 - wa->buf_in_urb->num_sgs = 0; 2106 + buf_in_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; 2107 + buf_in_urb->transfer_buffer = NULL; 2108 + buf_in_urb->sg = NULL; 2109 + buf_in_urb->num_sgs = 0; 2199 2110 } else { 2200 2111 /* do buffer or SG processing. */ 2201 - wa->buf_in_urb->transfer_flags &= ~URB_NO_TRANSFER_DMA_MAP; 2112 + buf_in_urb->transfer_flags &= ~URB_NO_TRANSFER_DMA_MAP; 2202 2113 2203 2114 if (xfer->urb->transfer_buffer) { 2204 - wa->buf_in_urb->transfer_buffer = 2115 + buf_in_urb->transfer_buffer = 2205 2116 xfer->urb->transfer_buffer 2206 2117 + (seg_idx * xfer->seg_size); 2207 - wa->buf_in_urb->sg = NULL; 2208 - wa->buf_in_urb->num_sgs = 0; 2118 + buf_in_urb->sg = NULL; 2119 + buf_in_urb->num_sgs = 0; 2209 2120 } else { 2210 2121 /* allocate an SG list to store seg_size bytes 2211 2122 and copy the subset of the xfer->urb->sg 2212 2123 that matches the buffer subset we are 2213 2124 about to read. 
*/ 2214 - wa->buf_in_urb->sg = wa_xfer_create_subset_sg( 2125 + buf_in_urb->sg = wa_xfer_create_subset_sg( 2215 2126 xfer->urb->sg, 2216 2127 seg_idx * xfer->seg_size, 2217 2128 bytes_transferred, 2218 - &(wa->buf_in_urb->num_sgs)); 2129 + &(buf_in_urb->num_sgs)); 2219 2130 2220 - if (!(wa->buf_in_urb->sg)) { 2221 - wa->buf_in_urb->num_sgs = 0; 2131 + if (!(buf_in_urb->sg)) { 2132 + buf_in_urb->num_sgs = 0; 2222 2133 result = -ENOMEM; 2223 2134 } 2224 - wa->buf_in_urb->transfer_buffer = NULL; 2135 + buf_in_urb->transfer_buffer = NULL; 2225 2136 } 2226 2137 } 2227 - wa->buf_in_urb->transfer_buffer_length = bytes_transferred; 2228 - wa->buf_in_urb->context = seg; 2138 + buf_in_urb->transfer_buffer_length = bytes_transferred; 2139 + buf_in_urb->context = seg; 2229 2140 2230 2141 return result; 2231 2142 } ··· 2287 2124 u8 usb_status; 2288 2125 unsigned rpipe_ready = 0; 2289 2126 unsigned bytes_transferred = le32_to_cpu(xfer_result->dwTransferLength); 2127 + struct urb *buf_in_urb = &(wa->buf_in_urbs[0]); 2290 2128 2291 2129 spin_lock_irqsave(&xfer->lock, flags); 2292 2130 seg_idx = xfer_result->bTransferSegment & 0x7f; ··· 2311 2147 } 2312 2148 if (usb_status & 0x80) { 2313 2149 seg->result = wa_xfer_status_to_errno(usb_status); 2314 - dev_err(dev, "DTI: xfer %p#:%08X:%u failed (0x%02x)\n", 2150 + dev_err(dev, "DTI: xfer %p 0x%08X:#%u failed (0x%02x)\n", 2315 2151 xfer, xfer->id, seg->index, usb_status); 2316 2152 seg->status = ((usb_status & 0x7F) == WA_XFER_STATUS_ABORTED) ? 2317 2153 WA_SEG_ABORTED : WA_SEG_ERROR; ··· 2326 2162 * transfers with data or below for no data, the xfer will complete. 
2327 2163 */ 2328 2164 if (xfer_result->bTransferSegment & 0x80) 2329 - wa_complete_remaining_xfer_segs(xfer, seg, WA_SEG_DONE); 2165 + wa_complete_remaining_xfer_segs(xfer, seg->index + 1, 2166 + WA_SEG_DONE); 2330 2167 if (usb_pipeisoc(xfer->urb->pipe) 2331 2168 && (le32_to_cpu(xfer_result->dwNumOfPackets) > 0)) { 2332 2169 /* set up WA state to read the isoc packet status next. */ ··· 2338 2173 && (bytes_transferred > 0)) { 2339 2174 /* IN data phase: read to buffer */ 2340 2175 seg->status = WA_SEG_DTI_PENDING; 2341 - result = wa_populate_buf_in_urb(wa, xfer, seg_idx, 2176 + result = wa_populate_buf_in_urb(buf_in_urb, xfer, seg_idx, 2342 2177 bytes_transferred); 2343 2178 if (result < 0) 2344 2179 goto error_buf_in_populate; 2345 - result = usb_submit_urb(wa->buf_in_urb, GFP_ATOMIC); 2346 - if (result < 0) 2180 + ++(wa->active_buf_in_urbs); 2181 + result = usb_submit_urb(buf_in_urb, GFP_ATOMIC); 2182 + if (result < 0) { 2183 + --(wa->active_buf_in_urbs); 2347 2184 goto error_submit_buf_in; 2185 + } 2348 2186 } else { 2349 2187 /* OUT data phase or no data, complete it -- */ 2350 - seg->status = WA_SEG_DONE; 2351 2188 seg->result = bytes_transferred; 2352 - xfer->segs_done++; 2353 2189 rpipe_ready = rpipe_avail_inc(rpipe); 2354 - done = __wa_xfer_is_done(xfer); 2190 + done = __wa_xfer_mark_seg_as_done(xfer, seg, WA_SEG_DONE); 2355 2191 } 2356 2192 spin_unlock_irqrestore(&xfer->lock, flags); 2357 2193 if (done) ··· 2371 2205 dev_err(dev, "xfer %p#%u: can't submit DTI data phase: %d\n", 2372 2206 xfer, seg_idx, result); 2373 2207 seg->result = result; 2374 - kfree(wa->buf_in_urb->sg); 2375 - wa->buf_in_urb->sg = NULL; 2208 + kfree(buf_in_urb->sg); 2209 + buf_in_urb->sg = NULL; 2376 2210 error_buf_in_populate: 2377 2211 __wa_xfer_abort(xfer); 2378 2212 seg->status = WA_SEG_ERROR; 2379 2213 error_complete: 2380 2214 xfer->segs_done++; 2381 2215 rpipe_ready = rpipe_avail_inc(rpipe); 2382 - wa_complete_remaining_xfer_segs(xfer, seg, seg->status); 2216 + 
wa_complete_remaining_xfer_segs(xfer, seg->index + 1, seg->status); 2383 2217 done = __wa_xfer_is_done(xfer); 2384 2218 /* 2385 2219 * queue work item to clear STALL for control endpoints. ··· 2481 2315 for (seg_index = 0; seg_index < seg->isoc_frame_count; ++seg_index) { 2482 2316 struct usb_iso_packet_descriptor *iso_frame_desc = 2483 2317 xfer->urb->iso_frame_desc; 2484 - const int urb_frame_index = 2318 + const int xfer_frame_index = 2485 2319 seg->isoc_frame_offset + seg_index; 2486 2320 2487 - iso_frame_desc[urb_frame_index].status = 2321 + iso_frame_desc[xfer_frame_index].status = 2488 2322 wa_xfer_status_to_errno( 2489 2323 le16_to_cpu(status_array[seg_index].PacketStatus)); 2490 - iso_frame_desc[urb_frame_index].actual_length = 2324 + iso_frame_desc[xfer_frame_index].actual_length = 2491 2325 le16_to_cpu(status_array[seg_index].PacketLength); 2492 2326 /* track the number of frames successfully transferred. */ 2493 - if (iso_frame_desc[urb_frame_index].actual_length > 0) { 2327 + if (iso_frame_desc[xfer_frame_index].actual_length > 0) { 2494 2328 /* save the starting frame index for buf_in_urb. */ 2495 2329 if (!data_frame_count) 2496 2330 first_frame_index = seg_index; ··· 2499 2333 } 2500 2334 2501 2335 if (xfer->is_inbound && data_frame_count) { 2502 - int result; 2336 + int result, total_frames_read = 0, urb_index = 0; 2337 + struct urb *buf_in_urb; 2503 2338 2339 + /* IN data phase: read to buffer */ 2340 + seg->status = WA_SEG_DTI_PENDING; 2341 + 2342 + /* start with the first frame with data. */ 2504 2343 seg->isoc_frame_index = first_frame_index; 2505 - /* submit a read URB for the first frame with data. */ 2506 - __wa_populate_buf_in_urb_isoc(wa, xfer, seg, 2507 - seg->isoc_frame_index + seg->isoc_frame_offset); 2344 + /* submit up to WA_MAX_BUF_IN_URBS read URBs. 
*/ 2345 + do { 2346 + int urb_frame_index, urb_frame_count; 2347 + struct usb_iso_packet_descriptor *iso_frame_desc; 2508 2348 2509 - result = usb_submit_urb(wa->buf_in_urb, GFP_ATOMIC); 2349 + buf_in_urb = &(wa->buf_in_urbs[urb_index]); 2350 + urb_frame_count = __wa_populate_buf_in_urb_isoc(wa, 2351 + buf_in_urb, xfer, seg); 2352 + /* advance frame index to start of next read URB. */ 2353 + seg->isoc_frame_index += urb_frame_count; 2354 + total_frames_read += urb_frame_count; 2355 + 2356 + ++(wa->active_buf_in_urbs); 2357 + result = usb_submit_urb(buf_in_urb, GFP_ATOMIC); 2358 + 2359 + /* skip 0-byte frames. */ 2360 + urb_frame_index = 2361 + seg->isoc_frame_offset + seg->isoc_frame_index; 2362 + iso_frame_desc = 2363 + &(xfer->urb->iso_frame_desc[urb_frame_index]); 2364 + while ((seg->isoc_frame_index < 2365 + seg->isoc_frame_count) && 2366 + (iso_frame_desc->actual_length == 0)) { 2367 + ++(seg->isoc_frame_index); 2368 + ++iso_frame_desc; 2369 + } 2370 + ++urb_index; 2371 + 2372 + } while ((result == 0) && (urb_index < WA_MAX_BUF_IN_URBS) 2373 + && (seg->isoc_frame_index < 2374 + seg->isoc_frame_count)); 2375 + 2510 2376 if (result < 0) { 2377 + --(wa->active_buf_in_urbs); 2511 2378 dev_err(dev, "DTI Error: Could not submit buf in URB (%d)", 2512 2379 result); 2513 2380 wa_reset_all(wa); 2514 - } else if (data_frame_count > 1) 2515 - /* If we need to read multiple frames, set DTI busy. */ 2381 + } else if (data_frame_count > total_frames_read) 2382 + /* If we need to read more frames, set DTI busy. 
*/ 2516 2383 dti_busy = 1; 2517 2384 } else { 2518 2385 /* OUT transfer or no more IN data, complete it -- */ 2519 - seg->status = WA_SEG_DONE; 2520 - xfer->segs_done++; 2521 2386 rpipe_ready = rpipe_avail_inc(rpipe); 2522 - done = __wa_xfer_is_done(xfer); 2387 + done = __wa_xfer_mark_seg_as_done(xfer, seg, WA_SEG_DONE); 2523 2388 } 2524 2389 spin_unlock_irqrestore(&xfer->lock, flags); 2525 - wa->dti_state = WA_DTI_TRANSFER_RESULT_PENDING; 2390 + if (dti_busy) 2391 + wa->dti_state = WA_DTI_BUF_IN_DATA_PENDING; 2392 + else 2393 + wa->dti_state = WA_DTI_TRANSFER_RESULT_PENDING; 2526 2394 if (done) 2527 2395 wa_xfer_completion(xfer); 2528 2396 if (rpipe_ready) ··· 2588 2388 struct wahc *wa; 2589 2389 struct device *dev; 2590 2390 struct wa_rpipe *rpipe; 2591 - unsigned rpipe_ready = 0, seg_index, isoc_data_frame_count = 0; 2391 + unsigned rpipe_ready = 0, isoc_data_frame_count = 0; 2592 2392 unsigned long flags; 2393 + int resubmit_dti = 0, active_buf_in_urbs; 2593 2394 u8 done = 0; 2594 2395 2595 2396 /* free the sg if it was used. */ ··· 2600 2399 spin_lock_irqsave(&xfer->lock, flags); 2601 2400 wa = xfer->wa; 2602 2401 dev = &wa->usb_iface->dev; 2402 + --(wa->active_buf_in_urbs); 2403 + active_buf_in_urbs = wa->active_buf_in_urbs; 2603 2404 2604 2405 if (usb_pipeisoc(xfer->urb->pipe)) { 2406 + struct usb_iso_packet_descriptor *iso_frame_desc = 2407 + xfer->urb->iso_frame_desc; 2408 + int seg_index; 2409 + 2605 2410 /* 2606 - * Find the next isoc frame with data. Bail out after 2607 - * isoc_data_frame_count > 1 since there is no need to walk 2608 - * the entire frame array. We just need to know if 2609 - * isoc_data_frame_count is 0, 1, or >1. 2411 + * Find the next isoc frame with data and count how many 2412 + * frames with data remain. 
2610 2413 */ 2611 - seg_index = seg->isoc_frame_index + 1; 2612 - while ((seg_index < seg->isoc_frame_count) 2613 - && (isoc_data_frame_count <= 1)) { 2614 - struct usb_iso_packet_descriptor *iso_frame_desc = 2615 - xfer->urb->iso_frame_desc; 2414 + seg_index = seg->isoc_frame_index; 2415 + while (seg_index < seg->isoc_frame_count) { 2616 2416 const int urb_frame_index = 2617 2417 seg->isoc_frame_offset + seg_index; 2618 2418 ··· 2634 2432 2635 2433 seg->result += urb->actual_length; 2636 2434 if (isoc_data_frame_count > 0) { 2637 - int result; 2638 - /* submit a read URB for the first frame with data. */ 2639 - __wa_populate_buf_in_urb_isoc(wa, xfer, seg, 2640 - seg->isoc_frame_index + seg->isoc_frame_offset); 2641 - result = usb_submit_urb(wa->buf_in_urb, GFP_ATOMIC); 2435 + int result, urb_frame_count; 2436 + 2437 + /* submit a read URB for the next frame with data. */ 2438 + urb_frame_count = __wa_populate_buf_in_urb_isoc(wa, urb, 2439 + xfer, seg); 2440 + /* advance index to start of next read URB. */ 2441 + seg->isoc_frame_index += urb_frame_count; 2442 + ++(wa->active_buf_in_urbs); 2443 + result = usb_submit_urb(urb, GFP_ATOMIC); 2642 2444 if (result < 0) { 2445 + --(wa->active_buf_in_urbs); 2643 2446 dev_err(dev, "DTI Error: Could not submit buf in URB (%d)", 2644 2447 result); 2645 2448 wa_reset_all(wa); 2646 2449 } 2647 - } else { 2450 + /* 2451 + * If we are in this callback and 2452 + * isoc_data_frame_count > 0, it means that the dti_urb 2453 + * submission was delayed in wa_dti_cb. Once 2454 + * we submit the last buf_in_urb, we can submit the 2455 + * delayed dti_urb. 
2456 + */ 2457 + resubmit_dti = (isoc_data_frame_count == 2458 + urb_frame_count); 2459 + } else if (active_buf_in_urbs == 0) { 2648 2460 rpipe = xfer->ep->hcpriv; 2649 - seg->status = WA_SEG_DONE; 2650 - dev_dbg(dev, "xfer %p#%u: data in done (%zu bytes)\n", 2651 - xfer, seg->index, seg->result); 2652 - xfer->segs_done++; 2461 + dev_dbg(dev, 2462 + "xfer %p 0x%08X#%u: data in done (%zu bytes)\n", 2463 + xfer, wa_xfer_id(xfer), seg->index, 2464 + seg->result); 2653 2465 rpipe_ready = rpipe_avail_inc(rpipe); 2654 - done = __wa_xfer_is_done(xfer); 2466 + done = __wa_xfer_mark_seg_as_done(xfer, seg, 2467 + WA_SEG_DONE); 2655 2468 } 2656 2469 spin_unlock_irqrestore(&xfer->lock, flags); 2657 2470 if (done) ··· 2678 2461 case -ENOENT: /* as it was done by the who unlinked us */ 2679 2462 break; 2680 2463 default: /* Other errors ... */ 2464 + /* 2465 + * Error on data buf read. Only resubmit DTI if it hasn't 2466 + * already been done by previously hitting this error or by a 2467 + * successful completion of the previous buf_in_urb. 
2468 + */ 2469 + resubmit_dti = wa->dti_state != WA_DTI_TRANSFER_RESULT_PENDING; 2681 2470 spin_lock_irqsave(&xfer->lock, flags); 2682 2471 rpipe = xfer->ep->hcpriv; 2683 2472 if (printk_ratelimit()) 2684 - dev_err(dev, "xfer %p#%u: data in error %d\n", 2685 - xfer, seg->index, urb->status); 2473 + dev_err(dev, "xfer %p 0x%08X#%u: data in error %d\n", 2474 + xfer, wa_xfer_id(xfer), seg->index, 2475 + urb->status); 2686 2476 if (edc_inc(&wa->nep_edc, EDC_MAX_ERRORS, 2687 2477 EDC_ERROR_TIMEFRAME)){ 2688 2478 dev_err(dev, "DTO: URB max acceptable errors " 2689 2479 "exceeded, resetting device\n"); 2690 2480 wa_reset_all(wa); 2691 2481 } 2692 - seg->status = WA_SEG_ERROR; 2693 2482 seg->result = urb->status; 2694 - xfer->segs_done++; 2695 2483 rpipe_ready = rpipe_avail_inc(rpipe); 2696 - __wa_xfer_abort(xfer); 2697 - done = __wa_xfer_is_done(xfer); 2484 + if (active_buf_in_urbs == 0) 2485 + done = __wa_xfer_mark_seg_as_done(xfer, seg, 2486 + WA_SEG_ERROR); 2487 + else 2488 + __wa_xfer_abort(xfer); 2698 2489 spin_unlock_irqrestore(&xfer->lock, flags); 2699 2490 if (done) 2700 2491 wa_xfer_completion(xfer); 2701 2492 if (rpipe_ready) 2702 2493 wa_xfer_delayed_run(rpipe); 2703 2494 } 2704 - /* 2705 - * If we are in this callback and isoc_data_frame_count > 0, it means 2706 - * that the dti_urb submission was delayed in wa_dti_cb. Once 2707 - * isoc_data_frame_count gets to 1, we can submit the deferred URB 2708 - * since the last buf_in_urb was just submitted. 
2709 - */ 2710 - if (isoc_data_frame_count == 1) { 2711 - int result = usb_submit_urb(wa->dti_urb, GFP_ATOMIC); 2495 + 2496 + if (resubmit_dti) { 2497 + int result; 2498 + 2499 + wa->dti_state = WA_DTI_TRANSFER_RESULT_PENDING; 2500 + 2501 + result = usb_submit_urb(wa->dti_urb, GFP_ATOMIC); 2712 2502 if (result < 0) { 2713 2503 dev_err(dev, "DTI Error: Could not submit DTI URB (%d)\n", 2714 2504 result); ··· 2785 2561 xfer_result->hdr.bNotifyType); 2786 2562 break; 2787 2563 } 2788 - usb_status = xfer_result->bTransferStatus & 0x3f; 2789 - if (usb_status == WA_XFER_STATUS_NOT_FOUND) 2790 - /* taken care of already */ 2791 - break; 2792 2564 xfer_id = le32_to_cpu(xfer_result->dwTransferID); 2565 + usb_status = xfer_result->bTransferStatus & 0x3f; 2566 + if (usb_status == WA_XFER_STATUS_NOT_FOUND) { 2567 + /* taken care of already */ 2568 + dev_dbg(dev, "%s: xfer 0x%08X#%u not found.\n", 2569 + __func__, xfer_id, 2570 + xfer_result->bTransferSegment & 0x7f); 2571 + break; 2572 + } 2793 2573 xfer = wa_xfer_get_by_id(wa, xfer_id); 2794 2574 if (xfer == NULL) { 2795 2575 /* FIXME: transaction not found. */ ··· 2842 2614 } 2843 2615 2844 2616 /* 2617 + * Initialize the DTI URB for reading transfer result notifications and also 2618 + * the buffer-in URB, for reading buffers. Then we just submit the DTI URB. 
2619 + */ 2620 + int wa_dti_start(struct wahc *wa) 2621 + { 2622 + const struct usb_endpoint_descriptor *dti_epd = wa->dti_epd; 2623 + struct device *dev = &wa->usb_iface->dev; 2624 + int result = -ENOMEM, index; 2625 + 2626 + if (wa->dti_urb != NULL) /* DTI URB already started */ 2627 + goto out; 2628 + 2629 + wa->dti_urb = usb_alloc_urb(0, GFP_KERNEL); 2630 + if (wa->dti_urb == NULL) { 2631 + dev_err(dev, "Can't allocate DTI URB\n"); 2632 + goto error_dti_urb_alloc; 2633 + } 2634 + usb_fill_bulk_urb( 2635 + wa->dti_urb, wa->usb_dev, 2636 + usb_rcvbulkpipe(wa->usb_dev, 0x80 | dti_epd->bEndpointAddress), 2637 + wa->dti_buf, wa->dti_buf_size, 2638 + wa_dti_cb, wa); 2639 + 2640 + /* init the buf in URBs */ 2641 + for (index = 0; index < WA_MAX_BUF_IN_URBS; ++index) { 2642 + usb_fill_bulk_urb( 2643 + &(wa->buf_in_urbs[index]), wa->usb_dev, 2644 + usb_rcvbulkpipe(wa->usb_dev, 2645 + 0x80 | dti_epd->bEndpointAddress), 2646 + NULL, 0, wa_buf_in_cb, wa); 2647 + } 2648 + result = usb_submit_urb(wa->dti_urb, GFP_KERNEL); 2649 + if (result < 0) { 2650 + dev_err(dev, "DTI Error: Could not submit DTI URB (%d) resetting\n", 2651 + result); 2652 + goto error_dti_urb_submit; 2653 + } 2654 + out: 2655 + return 0; 2656 + 2657 + error_dti_urb_submit: 2658 + usb_put_urb(wa->dti_urb); 2659 + wa->dti_urb = NULL; 2660 + error_dti_urb_alloc: 2661 + return result; 2662 + } 2663 + EXPORT_SYMBOL_GPL(wa_dti_start); 2664 + /* 2845 2665 * Transfer complete notification 2846 2666 * 2847 2667 * Called from the notif.c code. We get a notification on EP2 saying ··· 2903 2627 * Follow up in wa_dti_cb(), as that's where the whole state 2904 2628 * machine starts. 2905 2629 * 2906 - * So here we just initialize the DTI URB for reading transfer result 2907 - * notifications and also the buffer-in URB, for reading buffers. Then 2908 - * we just submit the DTI URB. 
2909 - * 2910 2630 * @wa shall be referenced 2911 2631 */ 2912 2632 void wa_handle_notif_xfer(struct wahc *wa, struct wa_notif_hdr *notif_hdr) 2913 2633 { 2914 - int result; 2915 2634 struct device *dev = &wa->usb_iface->dev; 2916 2635 struct wa_notif_xfer *notif_xfer; 2917 2636 const struct usb_endpoint_descriptor *dti_epd = wa->dti_epd; ··· 2920 2649 notif_xfer->bEndpoint, dti_epd->bEndpointAddress); 2921 2650 goto error; 2922 2651 } 2923 - if (wa->dti_urb != NULL) /* DTI URB already started */ 2924 - goto out; 2925 2652 2926 - wa->dti_urb = usb_alloc_urb(0, GFP_KERNEL); 2927 - if (wa->dti_urb == NULL) { 2928 - dev_err(dev, "Can't allocate DTI URB\n"); 2929 - goto error_dti_urb_alloc; 2930 - } 2931 - usb_fill_bulk_urb( 2932 - wa->dti_urb, wa->usb_dev, 2933 - usb_rcvbulkpipe(wa->usb_dev, 0x80 | notif_xfer->bEndpoint), 2934 - wa->dti_buf, wa->dti_buf_size, 2935 - wa_dti_cb, wa); 2653 + /* attempt to start the DTI ep processing. */ 2654 + if (wa_dti_start(wa) < 0) 2655 + goto error; 2936 2656 2937 - wa->buf_in_urb = usb_alloc_urb(0, GFP_KERNEL); 2938 - if (wa->buf_in_urb == NULL) { 2939 - dev_err(dev, "Can't allocate BUF-IN URB\n"); 2940 - goto error_buf_in_urb_alloc; 2941 - } 2942 - usb_fill_bulk_urb( 2943 - wa->buf_in_urb, wa->usb_dev, 2944 - usb_rcvbulkpipe(wa->usb_dev, 0x80 | notif_xfer->bEndpoint), 2945 - NULL, 0, wa_buf_in_cb, wa); 2946 - result = usb_submit_urb(wa->dti_urb, GFP_KERNEL); 2947 - if (result < 0) { 2948 - dev_err(dev, "DTI Error: Could not submit DTI URB (%d) resetting\n", 2949 - result); 2950 - goto error_dti_urb_submit; 2951 - } 2952 - out: 2953 2657 return; 2954 2658 2955 - error_dti_urb_submit: 2956 - usb_put_urb(wa->buf_in_urb); 2957 - wa->buf_in_urb = NULL; 2958 - error_buf_in_urb_alloc: 2959 - usb_put_urb(wa->dti_urb); 2960 - wa->dti_urb = NULL; 2961 - error_dti_urb_alloc: 2962 2659 error: 2963 2660 wa_reset_all(wa); 2964 2661 }
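The wa-xfer.c hunks above repeatedly wrap `usb_submit_urb()` in the same bookkeeping: `wa->active_buf_in_urbs` is incremented just before the submit and decremented again if the submit fails, so the counter always equals the number of buf-in URBs actually in flight. A minimal standalone sketch of that pattern (the counter name is kept, but the fake submit function is ours, not the kernel's):

```c
#include <assert.h>

static int active_buf_in_urbs;

/* stand-in for usb_submit_urb(): rc < 0 simulates a submission failure */
static int fake_submit_urb(int rc)
{
	++active_buf_in_urbs;          /* count the URB as in flight ... */
	if (rc < 0)
		--active_buf_in_urbs;  /* ... and roll back on failed submit */
	return rc;
}
```

Because the rollback happens on the failure path itself, every later read of the counter (such as the `active_buf_in_urbs == 0` checks in wa_buf_in_cb) sees a consistent in-flight count.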
+15
include/linux/phy/phy.h
··· 149 149 struct phy *phy_optional_get(struct device *dev, const char *string); 150 150 struct phy *devm_phy_get(struct device *dev, const char *string); 151 151 struct phy *devm_phy_optional_get(struct device *dev, const char *string); 152 + struct phy *devm_of_phy_get(struct device *dev, struct device_node *np, 153 + const char *con_id); 152 154 void phy_put(struct phy *phy); 153 155 void devm_phy_put(struct device *dev, struct phy *phy); 156 + struct phy *of_phy_get(struct device_node *np, const char *con_id); 154 157 struct phy *of_phy_simple_xlate(struct device *dev, 155 158 struct of_phandle_args *args); 156 159 struct phy *phy_create(struct device *dev, const struct phy_ops *ops, ··· 254 251 return ERR_PTR(-ENOSYS); 255 252 } 256 253 254 + static inline struct phy *devm_of_phy_get(struct device *dev, 255 + struct device_node *np, 256 + const char *con_id) 257 + { 258 + return ERR_PTR(-ENOSYS); 259 + } 260 + 257 261 static inline void phy_put(struct phy *phy) 258 262 { 259 263 } 260 264 261 265 static inline void devm_phy_put(struct device *dev, struct phy *phy) 262 266 { 267 + } 268 + 269 + static inline struct phy *of_phy_get(struct device_node *np, const char *con_id) 270 + { 271 + return ERR_PTR(-ENOSYS); 263 272 } 264 273 265 274 static inline struct phy *of_phy_simple_xlate(struct device *dev,
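The new `!CONFIG_GENERIC_PHY` stubs return `ERR_PTR(-ENOSYS)` rather than NULL, following the kernel convention that errno values are encoded in the top 4095 addresses of the pointer range so callers test `IS_ERR()` instead of comparing against NULL. A self-contained userspace replica of that convention (the kernel's real definitions live in include/linux/err.h):

```c
#include <assert.h>

#define MAX_ERRNO 4095L

static inline void *ERR_PTR(long error)		/* encode an errno as a pointer */
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)	/* decode it back */
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)	/* top 4095 addresses = error */
{
	return (unsigned long)ptr >= (unsigned long)(-MAX_ERRNO);
}
```

A caller of `of_phy_get()` or `devm_of_phy_get()` therefore works identically whether the generic PHY framework is built in or stubbed out.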
+4
include/linux/usb.h
··· 57 57 * @extra: descriptors following this endpoint in the configuration 58 58 * @extralen: how many bytes of "extra" are valid 59 59 * @enabled: URBs may be submitted to this endpoint 60 + * @streams: number of USB-3 streams allocated on the endpoint 60 61 * 61 62 * USB requests are always queued to a given endpoint, identified by a 62 63 * descriptor within an active interface in a given USB configuration. ··· 72 71 unsigned char *extra; /* Extra descriptors */ 73 72 int extralen; 74 73 int enabled; 74 + int streams; 75 75 }; 76 76 77 77 /* host-side wrapper for one interface setting's parsed descriptors */ ··· 204 202 struct usb_interface *usb_get_intf(struct usb_interface *intf); 205 203 void usb_put_intf(struct usb_interface *intf); 206 204 205 + /* Hard limit */ 206 + #define USB_MAXENDPOINTS 30 207 207 /* this maximum is arbitrary */ 208 208 #define USB_MAXINTERFACES 32 209 209 #define USB_MAXIADS (USB_MAXINTERFACES/2)
+1
include/linux/usb/chipidea.h
··· 25 25 */ 26 26 #define CI_HDRC_DUAL_ROLE_NOT_OTG BIT(4) 27 27 #define CI_HDRC_IMX28_WRITE_FIX BIT(5) 28 + #define CI_HDRC_FORCE_FULLSPEED BIT(6) 28 29 enum usb_dr_mode dr_mode; 29 30 #define CI_HDRC_CONTROLLER_RESET_EVENT 0 30 31 #define CI_HDRC_CONTROLLER_STOPPED_EVENT 1
+1
include/linux/usb/hcd.h
··· 143 143 unsigned authorized_default:1; 144 144 unsigned has_tt:1; /* Integrated TT in root hub */ 145 145 unsigned amd_resume_bug:1; /* AMD remote wakeup quirk */ 146 + unsigned can_do_streams:1; /* HC supports streams */ 146 147 147 148 unsigned int irq; /* irq allocated */ 148 149 void __iomem *regs; /* device memory/io */
+18 -18
include/linux/usb/omap_control_usb.h include/linux/phy/omap_control_phy.h
··· 1 1 /* 2 - * omap_control_usb.h - Header file for the USB part of control module. 2 + * omap_control_phy.h - Header file for the PHY part of control module. 3 3 * 4 4 * Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com 5 5 * This program is free software; you can redistribute it and/or modify ··· 16 16 * 17 17 */ 18 18 19 - #ifndef __OMAP_CONTROL_USB_H__ 20 - #define __OMAP_CONTROL_USB_H__ 19 + #ifndef __OMAP_CONTROL_PHY_H__ 20 + #define __OMAP_CONTROL_PHY_H__ 21 21 22 - enum omap_control_usb_type { 22 + enum omap_control_phy_type { 23 23 OMAP_CTRL_TYPE_OTGHS = 1, /* Mailbox OTGHS_CONTROL */ 24 24 OMAP_CTRL_TYPE_USB2, /* USB2_PHY, power down in CONTROL_DEV_CONF */ 25 25 OMAP_CTRL_TYPE_PIPE3, /* PIPE3 PHY, DPLL & seperate Rx/Tx power */ ··· 27 27 OMAP_CTRL_TYPE_AM437USB2, /* USB2 PHY, power e.g. AM437x */ 28 28 }; 29 29 30 - struct omap_control_usb { 30 + struct omap_control_phy { 31 31 struct device *dev; 32 32 33 33 u32 __iomem *otghs_control; ··· 36 36 37 37 struct clk *sys_clk; 38 38 39 - enum omap_control_usb_type type; 39 + enum omap_control_phy_type type; 40 40 }; 41 41 42 42 enum omap_control_usb_mode { ··· 54 54 #define OMAP_CTRL_DEV_SESSEND BIT(3) 55 55 #define OMAP_CTRL_DEV_IDDIG BIT(4) 56 56 57 - #define OMAP_CTRL_USB_PWRCTL_CLK_CMD_MASK 0x003FC000 58 - #define OMAP_CTRL_USB_PWRCTL_CLK_CMD_SHIFT 0xE 57 + #define OMAP_CTRL_PIPE3_PHY_PWRCTL_CLK_CMD_MASK 0x003FC000 58 + #define OMAP_CTRL_PIPE3_PHY_PWRCTL_CLK_CMD_SHIFT 0xE 59 59 60 - #define OMAP_CTRL_USB_PWRCTL_CLK_FREQ_MASK 0xFFC00000 61 - #define OMAP_CTRL_USB_PWRCTL_CLK_FREQ_SHIFT 0x16 60 + #define OMAP_CTRL_PIPE3_PHY_PWRCTL_CLK_FREQ_MASK 0xFFC00000 61 + #define OMAP_CTRL_PIPE3_PHY_PWRCTL_CLK_FREQ_SHIFT 0x16 62 62 63 - #define OMAP_CTRL_USB3_PHY_TX_RX_POWERON 0x3 64 - #define OMAP_CTRL_USB3_PHY_TX_RX_POWEROFF 0x0 63 + #define OMAP_CTRL_PIPE3_PHY_TX_RX_POWERON 0x3 64 + #define OMAP_CTRL_PIPE3_PHY_TX_RX_POWEROFF 0x0 65 65 66 66 #define OMAP_CTRL_USB2_PHY_PD BIT(28) 67 67 ··· 70 70 
#define AM437X_CTRL_USB2_OTGVDET_EN BIT(19) 71 71 #define AM437X_CTRL_USB2_OTGSESSEND_EN BIT(20) 72 72 73 - #if IS_ENABLED(CONFIG_OMAP_CONTROL_USB) 74 - extern void omap_control_usb_phy_power(struct device *dev, int on); 75 - extern void omap_control_usb_set_mode(struct device *dev, 76 - enum omap_control_usb_mode mode); 73 + #if IS_ENABLED(CONFIG_OMAP_CONTROL_PHY) 74 + void omap_control_phy_power(struct device *dev, int on); 75 + void omap_control_usb_set_mode(struct device *dev, 76 + enum omap_control_usb_mode mode); 77 77 #else 78 78 79 - static inline void omap_control_usb_phy_power(struct device *dev, int on) 79 + static inline void omap_control_phy_power(struct device *dev, int on) 80 80 { 81 81 } 82 82 ··· 86 86 } 87 87 #endif 88 88 89 - #endif /* __OMAP_CONTROL_USB_H__ */ 89 + #endif /* __OMAP_CONTROL_PHY_H__ */
+12 -2
include/linux/usb/omap_usb.h include/linux/phy/omap_usb.h
··· 34 34 struct usb_phy phy; 35 35 struct phy_companion *comparator; 36 36 void __iomem *pll_ctrl_base; 37 + void __iomem *phy_base; 37 38 struct device *dev; 38 39 struct device *control_dev; 39 40 struct clk *wkupclk; 40 - struct clk *sys_clk; 41 41 struct clk *optclk; 42 - u8 is_suspended:1; 42 + u8 flags; 43 43 }; 44 + 45 + struct usb_phy_data { 46 + const char *label; 47 + u8 flags; 48 + }; 49 + 50 + /* Driver Flags */ 51 + #define OMAP_USB2_HAS_START_SRP (1 << 0) 52 + #define OMAP_USB2_HAS_SET_VBUS (1 << 1) 53 + #define OMAP_USB2_CALIBRATE_FALSE_DISCONNECT (1 << 2) 44 54 45 55 #define phy_to_omapusb(x) container_of((x), struct omap_usb, phy) 46 56
+16
include/linux/usb/phy.h
··· 111 111 int (*set_suspend)(struct usb_phy *x, 112 112 int suspend); 113 113 114 + /* 115 + * Set wakeup enable for PHY, in that case, the PHY can be 116 + * woken up from suspend status due to external events, 117 + * like vbus change, dp/dm change and id. 118 + */ 119 + int (*set_wakeup)(struct usb_phy *x, bool enabled); 120 + 114 121 /* notify phy connect status change */ 115 122 int (*notify_connect)(struct usb_phy *x, 116 123 enum usb_device_speed speed); ··· 267 260 { 268 261 if (x && x->set_suspend != NULL) 269 262 return x->set_suspend(x, suspend); 263 + else 264 + return 0; 265 + } 266 + 267 + static inline int 268 + usb_phy_set_wakeup(struct usb_phy *x, bool enabled) 269 + { 270 + if (x && x->set_wakeup) 271 + return x->set_wakeup(x, enabled); 270 272 else 271 273 return 0; 272 274 }
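The new `usb_phy_set_wakeup()` wrapper follows the same guard pattern as the existing `usb_phy_set_suspend()`: if the PHY driver left the op unset, the call quietly succeeds. A standalone sketch of that optional-callback idiom (the struct and function names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

struct demo_phy {
	int (*set_wakeup)(struct demo_phy *x, int enabled);
};

/* mirror of the wrapper's shape: call the op if provided, else succeed */
static int demo_phy_set_wakeup(struct demo_phy *x, int enabled)
{
	if (x && x->set_wakeup)
		return x->set_wakeup(x, enabled);
	return 0;
}

static int demo_op(struct demo_phy *x, int enabled)
{
	(void)x;
	return enabled ? 0 : -22;	/* stand-in for -EINVAL */
}
```

The NULL check also makes the wrapper safe to call when no PHY was bound at all, which keeps callers free of per-feature ifdefs.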
+2 -1
include/linux/usb/serial.h
··· 190 190 * @num_ports: the number of different ports this device will have. 191 191 * @bulk_in_size: minimum number of bytes to allocate for bulk-in buffer 192 192 * (0 = end-point size) 193 - * @bulk_out_size: bytes to allocate for bulk-out buffer (0 = end-point size) 193 + * @bulk_out_size: minimum number of bytes to allocate for bulk-out buffer 194 + * (0 = end-point size) 194 195 * @calc_num_ports: pointer to a function to determine how many ports this 195 196 * device has dynamically. It will be called after the probe() 196 197 * callback is called, but before attach()
+7 -7
include/linux/usb/uas.h
··· 9 9 __u8 iu_id; 10 10 __u8 rsvd1; 11 11 __be16 tag; 12 - }; 12 + } __attribute__((__packed__)); 13 13 14 14 enum { 15 15 IU_ID_COMMAND = 0x01, ··· 52 52 __u8 rsvd7; 53 53 struct scsi_lun lun; 54 54 __u8 cdb[16]; /* XXX: Overflow-checking tools may misunderstand */ 55 - }; 55 + } __attribute__((__packed__)); 56 56 57 57 struct task_mgmt_iu { 58 58 __u8 iu_id; ··· 62 62 __u8 rsvd2; 63 63 __be16 task_tag; 64 64 struct scsi_lun lun; 65 - }; 65 + } __attribute__((__packed__)); 66 66 67 67 /* 68 68 * Also used for the Read Ready and Write Ready IUs since they have the ··· 77 77 __u8 rsvd7[7]; 78 78 __be16 len; 79 79 __u8 sense[SCSI_SENSE_BUFFERSIZE]; 80 - }; 80 + } __attribute__((__packed__)); 81 81 82 - struct response_ui { 82 + struct response_iu { 83 83 __u8 iu_id; 84 84 __u8 rsvd1; 85 85 __be16 tag; 86 - __be16 add_response_info; 86 + __u8 add_response_info[3]; 87 87 __u8 response_code; 88 - }; 88 + } __attribute__((__packed__)); 89 89 90 90 struct usb_pipe_usage_descriptor { 91 91 __u8 bLength;
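Two of the uas.h changes matter for the wire format: every IU struct gains `__attribute__((__packed__))` so the compiler cannot insert padding, and `response_iu` (also fixing the `response_ui` typo) shrinks `add_response_info` from `__be16` to a 3-byte array so the whole IU is exactly 8 bytes. A standalone copy of the new layout, with the SCSI-specific types replaced by stdint ones, to check the sizes:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* userspace replica of the updated response_iu layout */
struct response_iu {
	uint8_t  iu_id;
	uint8_t  rsvd1;
	uint16_t tag;			/* __be16 in the kernel header */
	uint8_t  add_response_info[3];
	uint8_t  response_code;
} __attribute__((__packed__));
```

Without the packed attribute, a compiler would be free to pad fields to their natural alignment, and the struct would no longer overlay the bytes the UAS device actually sends.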
+4 -2
include/linux/usb_usual.h
··· 67 67 /* Initial READ(10) (and others) must be retried */ \ 68 68 US_FLAG(WRITE_CACHE, 0x00200000) \ 69 69 /* Write Cache status is not available */ \ 70 - US_FLAG(NEEDS_CAP16, 0x00400000) 71 - /* cannot handle READ_CAPACITY_10 */ 70 + US_FLAG(NEEDS_CAP16, 0x00400000) \ 71 + /* cannot handle READ_CAPACITY_10 */ \ 72 + US_FLAG(IGNORE_UAS, 0x00800000) \ 73 + /* Device advertises UAS but it is broken */ 72 74 73 75 #define US_FLAG(name, value) US_FL_##name = value , 74 76 enum { US_DO_ALL_FLAGS };
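The usb_usual.h fix is subtle: the `US_FLAG` list is an X-macro, and the `NEEDS_CAP16` entry and its comment line were missing their trailing backslashes, so nothing could be appended after them; the patch restores the continuations and adds `IGNORE_UAS`. A minimal standalone demonstration of the idiom (list contents abbreviated):

```c
#include <assert.h>

/* X-macro list: each entry expands according to however US_FLAG is
 * defined at the point of expansion */
#define DEMO_US_FLAGS \
	US_FLAG(WRITE_CACHE, 0x00200000) \
	US_FLAG(NEEDS_CAP16, 0x00400000) \
	US_FLAG(IGNORE_UAS,  0x00800000)

/* one expansion: turn each entry into an enum constant */
#define US_FLAG(name, value) US_FL_##name = value,
enum { DEMO_US_FLAGS };
#undef US_FLAG
```

The kernel re-expands the same list elsewhere (e.g. to print flag names), which is why a dropped backslash silently truncates every expansion of the list rather than failing to compile.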
+30 -14
include/uapi/linux/usb/functionfs.h
··· 10 10 11 11 enum { 12 12 FUNCTIONFS_DESCRIPTORS_MAGIC = 1, 13 - FUNCTIONFS_STRINGS_MAGIC = 2 13 + FUNCTIONFS_STRINGS_MAGIC = 2, 14 + FUNCTIONFS_DESCRIPTORS_MAGIC_V2 = 3, 14 15 }; 15 16 17 + enum functionfs_flags { 18 + FUNCTIONFS_HAS_FS_DESC = 1, 19 + FUNCTIONFS_HAS_HS_DESC = 2, 20 + FUNCTIONFS_HAS_SS_DESC = 4, 21 + }; 16 22 17 23 #ifndef __KERNEL__ 18 24 ··· 35 29 36 30 37 31 /* 38 - * All numbers must be in little endian order. 39 - */ 40 - 41 - struct usb_functionfs_descs_head { 42 - __le32 magic; 43 - __le32 length; 44 - __le32 fs_count; 45 - __le32 hs_count; 46 - } __attribute__((packed)); 47 - 48 - /* 49 32 * Descriptors format: 50 33 * 51 34 * | off | name | type | description | 52 35 * |-----+-----------+--------------+--------------------------------------| 53 - * | 0 | magic | LE32 | FUNCTIONFS_{FS,HS}_DESCRIPTORS_MAGIC | 36 + * | 0 | magic | LE32 | FUNCTIONFS_DESCRIPTORS_MAGIC_V2 | 37 + * | 4 | length | LE32 | length of the whole data chunk | 38 + * | 8 | flags | LE32 | combination of functionfs_flags | 39 + * | | fs_count | LE32 | number of full-speed descriptors | 40 + * | | hs_count | LE32 | number of high-speed descriptors | 41 + * | | ss_count | LE32 | number of super-speed descriptors | 42 + * | | fs_descrs | Descriptor[] | list of full-speed descriptors | 43 + * | | hs_descrs | Descriptor[] | list of high-speed descriptors | 44 + * | | ss_descrs | Descriptor[] | list of super-speed descriptors | 45 + * 46 + * Depending on which flags are set, various fields may be missing in the 47 + * structure. Any flags that are not recognised cause the whole block to be 48 + * rejected with -ENOSYS. 
49 + * 50 + * Legacy descriptors format: 51 + * 52 + * | off | name | type | description | 53 + * |-----+-----------+--------------+--------------------------------------| 54 + * | 0 | magic | LE32 | FUNCTIONFS_DESCRIPTORS_MAGIC | 54 55 * | 4 | length | LE32 | length of the whole data chunk | 55 56 * | 8 | fs_count | LE32 | number of full-speed descriptors | 56 57 * | 12 | hs_count | LE32 | number of high-speed descriptors | 57 58 * | 16 | fs_descrs | Descriptor[] | list of full-speed descriptors | 58 59 * | | hs_descrs | Descriptor[] | list of high-speed descriptors | 59 60 * 60 - * descs are just valid USB descriptors and have the following format: 61 + * All numbers must be in little endian order. 62 + * 63 + * Descriptor[] is an array of valid USB descriptors which have the following 64 + * format: 61 65 * 62 66 * | off | name | type | description | 63 67 * |-----+-----------------+------+--------------------------|
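When all three flag bits are present, the v2 header described in the table above is just five consecutive LE32 fields (fields for absent flags are omitted entirely). A hedged userspace sketch of serializing it — the helper name and buffer handling are ours, and for brevity it assumes a little-endian host, where a plain memcpy of a uint32_t already yields LE32:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum {
	FFS_DESCRIPTORS_MAGIC_V2 = 3,	/* FUNCTIONFS_DESCRIPTORS_MAGIC_V2 */
	FFS_HAS_FS_DESC = 1,
	FFS_HAS_HS_DESC = 2,
};

/* write magic, length, flags, fs_count, hs_count; returns bytes written */
static size_t ffs_fill_header_v2(uint8_t *buf, uint32_t length,
				 uint32_t fs_count, uint32_t hs_count)
{
	const uint32_t fields[5] = {
		FFS_DESCRIPTORS_MAGIC_V2,
		length,
		FFS_HAS_FS_DESC | FFS_HAS_HS_DESC,
		fs_count,
		hs_count,
	};

	memcpy(buf, fields, sizeof(fields));	/* LE host assumed */
	return sizeof(fields);			/* descriptors follow here */
}
```

Portable code would convert each field with an explicit host-to-LE32 helper before copying; the fixed field order is what lets the kernel parse the header front to back, rejecting unknown flag bits with -ENOSYS.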
+11 -1
include/uapi/linux/usbdevice_fs.h
··· 102 102 int buffer_length; 103 103 int actual_length; 104 104 int start_frame; 105 - int number_of_packets; 105 + union { 106 + int number_of_packets; /* Only used for isoc urbs */ 107 + unsigned int stream_id; /* Only used with bulk streams */ 108 + }; 106 109 int error_count; 107 110 unsigned int signr; /* signal to be sent on completion, 108 111 or 0 if none should be sent. */ ··· 147 144 char driver[USBDEVFS_MAXDRIVERNAME + 1]; 148 145 }; 149 146 147 + struct usbdevfs_streams { 148 + unsigned int num_streams; /* Not used by USBDEVFS_FREE_STREAMS */ 149 + unsigned int num_eps; 150 + unsigned char eps[0]; 151 + }; 150 152 151 153 #define USBDEVFS_CONTROL _IOWR('U', 0, struct usbdevfs_ctrltransfer) 152 154 #define USBDEVFS_CONTROL32 _IOWR('U', 0, struct usbdevfs_ctrltransfer32) ··· 184 176 #define USBDEVFS_RELEASE_PORT _IOR('U', 25, unsigned int) 185 177 #define USBDEVFS_GET_CAPABILITIES _IOR('U', 26, __u32) 186 178 #define USBDEVFS_DISCONNECT_CLAIM _IOR('U', 27, struct usbdevfs_disconnect_claim) 179 + #define USBDEVFS_ALLOC_STREAMS _IOR('U', 28, struct usbdevfs_streams) 180 + #define USBDEVFS_FREE_STREAMS _IOR('U', 29, struct usbdevfs_streams) 187 181 188 182 #endif /* _UAPI_LINUX_USBDEVICE_FS_H */
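Since `eps[]` is a variable-length trailer on `struct usbdevfs_streams`, userspace sizes the allocation as the fixed header plus one byte per endpoint address before issuing `USBDEVFS_ALLOC_STREAMS`. A sketch of filling it in — it replicates the struct with a C99 flexible array member in place of the header's `eps[0]`, and no ioctl is actually issued:

```c
#include <assert.h>
#include <stdlib.h>

/* userspace replica of struct usbdevfs_streams */
struct demo_usbdevfs_streams {
	unsigned int  num_streams;
	unsigned int  num_eps;
	unsigned char eps[];	/* endpoint addresses follow the header */
};

static struct demo_usbdevfs_streams *
streams_alloc(unsigned int num_streams,
	      const unsigned char *eps, unsigned int num_eps)
{
	/* header plus one byte per endpoint address */
	struct demo_usbdevfs_streams *s = malloc(sizeof(*s) + num_eps);
	unsigned int i;

	if (!s)
		return NULL;
	s->num_streams = num_streams;	/* ignored by USBDEVFS_FREE_STREAMS */
	s->num_eps = num_eps;
	for (i = 0; i < num_eps; i++)
		s->eps[i] = eps[i];
	return s;
}
```

The same buffer can then be passed to both ioctls; only `USBDEVFS_ALLOC_STREAMS` reads `num_streams`, while both read the endpoint list.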