Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'usb-for-v3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb into usb-next

Felipe writes:

usb: patches for v3.15

another substantial pull request with new features all over
the place.

dwc3 got a bit closer to hibernation support, with a few
patches refactoring code so it can be reused for hibernation.
Also in dwc3 two new workarounds for known silicon bugs have
been implemented, some randconfig build errors have been fixed,
and it was taught about the new generic phy layer.

MUSB on AM335x now supports isochronous transfers thanks to
George Cherian's work.

The atmel_usba driver got two crash fixes: one when no endpoint
was specified in DeviceTree data and another when stopping the UDC
in DEBUG builds.

Function FS got a much needed fix to ffs_epfile_io() which was
copying too much data to userspace in some cases.

The printer gadget got a fix for a possible deadlock and plugged
a memory leak.

Ethernet drivers now use NAPI for RX which gives improved throughput.

Other than that, the usual miscellaneous fixes, cleanups, and
the like.

Signed-off-by: Felipe Balbi <balbi@ti.com>

+1742 -479
+86
Documentation/devicetree/bindings/phy/ti-phy.txt
···
+ TI PHY: DT DOCUMENTATION FOR PHYs in TI PLATFORMs
+
+ OMAP CONTROL PHY
+
+ Required properties:
+ - compatible: Should be one of
+   "ti,control-phy-otghs" - if it has otghs_control mailbox register as on OMAP4.
+   "ti,control-phy-usb2" - if it has Power down bit in control_dev_conf register
+   e.g. USB2_PHY on OMAP5.
+   "ti,control-phy-pipe3" - if it has DPLL and individual Rx & Tx power control
+   e.g. USB3 PHY and SATA PHY on OMAP5.
+   "ti,control-phy-usb2-dra7" - if it has power down register like USB2 PHY on
+   DRA7 platform.
+   "ti,control-phy-usb2-am437" - if it has power down register like USB2 PHY on
+   AM437 platform.
+ - reg : Address and length of the register set for the device. It contains
+   the address of "otghs_control" for control-phy-otghs or "power" register
+   for other types.
+ - reg-names: should be "otghs_control" control-phy-otghs and "power" for
+   other types.
+
+ omap_control_usb: omap-control-usb@4a002300 {
+         compatible = "ti,control-phy-otghs";
+         reg = <0x4a00233c 0x4>;
+         reg-names = "otghs_control";
+ };
+
+ OMAP USB2 PHY
+
+ Required properties:
+ - compatible: Should be "ti,omap-usb2"
+ - reg : Address and length of the register set for the device.
+ - #phy-cells: determine the number of cells that should be given in the
+   phandle while referencing this phy.
+
+ Optional properties:
+ - ctrl-module : phandle of the control module used by PHY driver to power on
+   the PHY.
+
+ This is usually a subnode of ocp2scp to which it is connected.
+
+ usb2phy@4a0ad080 {
+         compatible = "ti,omap-usb2";
+         reg = <0x4a0ad080 0x58>;
+         ctrl-module = <&omap_control_usb>;
+         #phy-cells = <0>;
+ };
+
+ TI PIPE3 PHY
+
+ Required properties:
+ - compatible: Should be "ti,phy-usb3" or "ti,phy-pipe3-sata".
+   "ti,omap-usb3" is deprecated.
+ - reg : Address and length of the register set for the device.
+ - reg-names: The names of the register addresses corresponding to the registers
+   filled in "reg".
+ - #phy-cells: determine the number of cells that should be given in the
+   phandle while referencing this phy.
+ - clocks: a list of phandles and clock-specifier pairs, one for each entry in
+   clock-names.
+ - clock-names: should include:
+   * "wkupclk" - wakeup clock.
+   * "sysclk" - system clock.
+   * "refclk" - reference clock.
+
+ Optional properties:
+ - ctrl-module : phandle of the control module used by PHY driver to power on
+   the PHY.
+
+ This is usually a subnode of ocp2scp to which it is connected.
+
+ usb3phy@4a084400 {
+         compatible = "ti,phy-usb3";
+         reg = <0x4a084400 0x80>,
+               <0x4a084800 0x64>,
+               <0x4a084c00 0x40>;
+         reg-names = "phy_rx", "phy_tx", "pll_ctrl";
+         ctrl-module = <&omap_control_usb>;
+         #phy-cells = <0>;
+         clocks = <&usb_phy_cm_clk32k>,
+                  <&sys_clkin>,
+                  <&usb_otg_ss_refclk960m>;
+         clock-names = "wkupclk",
+                       "sysclk",
+                       "refclk";
+ };
+4 -2
Documentation/devicetree/bindings/usb/dwc3.txt
···
  - compatible: must be "snps,dwc3"
  - reg : Address and length of the register set for the device
  - interrupts: Interrupts used by the dwc3 controller.
+
+ Optional properties:
  - usb-phy : array of phandle for the PHY device. The first element
    in the array is expected to be a handle to the USB2/HS PHY and
    the second element is expected to be a handle to the USB3/SS PHY
-
- Optional properties:
+ - phys: from the *Generic PHY* bindings
+ - phy-names: from the *Generic PHY* bindings
  - tx-fifo-resize: determines if the FIFO *has* to be reallocated.

  This is usually a subnode to DWC3 glue to which it is connected.
+7 -1
Documentation/devicetree/bindings/usb/mxs-phy.txt
···
  * Freescale MXS USB Phy Device

  Required properties:
- - compatible: Should be "fsl,imx23-usbphy"
+ - compatible: should contain:
+   * "fsl,imx23-usbphy" for imx23 and imx28
+   * "fsl,imx6q-usbphy" for imx6dq and imx6dl
+   * "fsl,imx6sl-usbphy" for imx6sl
+   "fsl,imx23-usbphy" is still a fallback for other strings
  - reg: Should contain registers location and length
  - interrupts: Should contain phy interrupt
+ - fsl,anatop: phandle for anatop register, it is only for imx6 SoC series

  Example:
  usbphy1: usbphy@020c9000 {
          compatible = "fsl,imx6q-usbphy", "fsl,imx23-usbphy";
          reg = <0x020c9000 0x1000>;
          interrupts = <0 44 0x04>;
+         fsl,anatop = <&anatop>;
  };
-24
Documentation/devicetree/bindings/usb/omap-usb.txt
···
          ranges;
  };

- OMAP CONTROL USB
-
- Required properties:
- - compatible: Should be one of
-   "ti,control-phy-otghs" - if it has otghs_control mailbox register as on OMAP4.
-   "ti,control-phy-usb2" - if it has Power down bit in control_dev_conf register
-   e.g. USB2_PHY on OMAP5.
-   "ti,control-phy-pipe3" - if it has DPLL and individual Rx & Tx power control
-   e.g. USB3 PHY and SATA PHY on OMAP5.
-   "ti,control-phy-dra7usb2" - if it has power down register like USB2 PHY on
-   DRA7 platform.
-   "ti,control-phy-am437usb2" - if it has power down register like USB2 PHY on
-   AM437 platform.
- - reg : Address and length of the register set for the device. It contains
-   the address of "otghs_control" for control-phy-otghs or "power" register
-   for other types.
- - reg-names: should be "otghs_control" control-phy-otghs and "power" for
-   other types.
-
- omap_control_usb: omap-control-usb@4a002300 {
-         compatible = "ti,control-phy-otghs";
-         reg = <0x4a00233c 0x4>;
-         reg-names = "otghs_control";
- };
-48
Documentation/devicetree/bindings/usb/usb-phy.txt
···
- USB PHY
-
- OMAP USB2 PHY
-
- Required properties:
- - compatible: Should be "ti,omap-usb2"
- - reg : Address and length of the register set for the device.
- - #phy-cells: determine the number of cells that should be given in the
-   phandle while referencing this phy.
-
- Optional properties:
- - ctrl-module : phandle of the control module used by PHY driver to power on
-   the PHY.
-
- This is usually a subnode of ocp2scp to which it is connected.
-
- usb2phy@4a0ad080 {
-         compatible = "ti,omap-usb2";
-         reg = <0x4a0ad080 0x58>;
-         ctrl-module = <&omap_control_usb>;
-         #phy-cells = <0>;
- };
-
- OMAP USB3 PHY
-
- Required properties:
- - compatible: Should be "ti,omap-usb3"
- - reg : Address and length of the register set for the device.
- - reg-names: The names of the register addresses corresponding to the registers
-   filled in "reg".
- - #phy-cells: determine the number of cells that should be given in the
-   phandle while referencing this phy.
-
- Optional properties:
- - ctrl-module : phandle of the control module used by PHY driver to power on
-   the PHY.
-
- This is usually a subnode of ocp2scp to which it is connected.
-
- usb3phy@4a084400 {
-         compatible = "ti,omap-usb3";
-         reg = <0x4a084400 0x80>,
-               <0x4a084800 0x64>,
-               <0x4a084c00 0x40>;
-         reg-names = "phy_rx", "phy_tx", "pll_ctrl";
-         ctrl-module = <&omap_control_usb>;
-         #phy-cells = <0>;
- };
+222 -31
drivers/usb/dwc3/core.c
···
  * dwc3_core_soft_reset - Issues core soft reset and PHY reset
  * @dwc: pointer to our context structure
  */
- static void dwc3_core_soft_reset(struct dwc3 *dwc)
+ static int dwc3_core_soft_reset(struct dwc3 *dwc)
  {
          u32 reg;
+         int ret;

          /* Before Resetting PHY, put Core in Reset */
          reg = dwc3_readl(dwc->regs, DWC3_GCTL);
···
          usb_phy_init(dwc->usb2_phy);
          usb_phy_init(dwc->usb3_phy);
+         ret = phy_init(dwc->usb2_generic_phy);
+         if (ret < 0)
+                 return ret;
+
+         ret = phy_init(dwc->usb3_generic_phy);
+         if (ret < 0) {
+                 phy_exit(dwc->usb2_generic_phy);
+                 return ret;
+         }
          mdelay(100);

          /* Clear USB3 PHY reset */
···
          reg = dwc3_readl(dwc->regs, DWC3_GCTL);
          reg &= ~DWC3_GCTL_CORESOFTRESET;
          dwc3_writel(dwc->regs, DWC3_GCTL, reg);
+
+         return 0;
  }

  /**
···
          }
  }

+ static int dwc3_alloc_scratch_buffers(struct dwc3 *dwc)
+ {
+         if (!dwc->has_hibernation)
+                 return 0;
+
+         if (!dwc->nr_scratch)
+                 return 0;
+
+         dwc->scratchbuf = kmalloc_array(dwc->nr_scratch,
+                         DWC3_SCRATCHBUF_SIZE, GFP_KERNEL);
+         if (!dwc->scratchbuf)
+                 return -ENOMEM;
+
+         return 0;
+ }
+
+ static int dwc3_setup_scratch_buffers(struct dwc3 *dwc)
+ {
+         dma_addr_t scratch_addr;
+         u32 param;
+         int ret;
+
+         if (!dwc->has_hibernation)
+                 return 0;
+
+         if (!dwc->nr_scratch)
+                 return 0;
+
+         /* should never fall here */
+         if (!WARN_ON(dwc->scratchbuf))
+                 return 0;
+
+         scratch_addr = dma_map_single(dwc->dev, dwc->scratchbuf,
+                         dwc->nr_scratch * DWC3_SCRATCHBUF_SIZE,
+                         DMA_BIDIRECTIONAL);
+         if (dma_mapping_error(dwc->dev, scratch_addr)) {
+                 dev_err(dwc->dev, "failed to map scratch buffer\n");
+                 ret = -EFAULT;
+                 goto err0;
+         }
+
+         dwc->scratch_addr = scratch_addr;
+
+         param = lower_32_bits(scratch_addr);
+
+         ret = dwc3_send_gadget_generic_command(dwc,
+                         DWC3_DGCMD_SET_SCRATCHPAD_ADDR_LO, param);
+         if (ret < 0)
+                 goto err1;
+
+         param = upper_32_bits(scratch_addr);
+
+         ret = dwc3_send_gadget_generic_command(dwc,
+                         DWC3_DGCMD_SET_SCRATCHPAD_ADDR_HI, param);
+         if (ret < 0)
+                 goto err1;
+
+         return 0;
+
+ err1:
+         dma_unmap_single(dwc->dev, dwc->scratch_addr, dwc->nr_scratch *
+                         DWC3_SCRATCHBUF_SIZE, DMA_BIDIRECTIONAL);
+
+ err0:
+         return ret;
+ }
+
+ static void dwc3_free_scratch_buffers(struct dwc3 *dwc)
+ {
+         if (!dwc->has_hibernation)
+                 return;
+
+         if (!dwc->nr_scratch)
+                 return;
+
+         /* should never fall here */
+         if (!WARN_ON(dwc->scratchbuf))
+                 return;
+
+         dma_unmap_single(dwc->dev, dwc->scratch_addr, dwc->nr_scratch *
+                         DWC3_SCRATCHBUF_SIZE, DMA_BIDIRECTIONAL);
+         kfree(dwc->scratchbuf);
+ }
+
  static void dwc3_core_num_eps(struct dwc3 *dwc)
  {
          struct dwc3_hwparams *parms = &dwc->hwparams;
···
  static int dwc3_core_init(struct dwc3 *dwc)
  {
          unsigned long timeout;
+         u32 hwparams4 = dwc->hwparams.hwparams4;
          u32 reg;
          int ret;
···
                  cpu_relax();
          } while (true);

-         dwc3_core_soft_reset(dwc);
+         ret = dwc3_core_soft_reset(dwc);
+         if (ret)
+                 goto err0;

          reg = dwc3_readl(dwc->regs, DWC3_GCTL);
          reg &= ~DWC3_GCTL_SCALEDOWN_MASK;
···
          switch (DWC3_GHWPARAMS1_EN_PWROPT(dwc->hwparams.hwparams1)) {
          case DWC3_GHWPARAMS1_EN_PWROPT_CLK:
-                 reg &= ~DWC3_GCTL_DSBLCLKGTNG;
+                 /**
+                  * WORKAROUND: DWC3 revisions between 2.10a and 2.50a have an
+                  * issue which would cause xHCI compliance tests to fail.
+                  *
+                  * Because of that we cannot enable clock gating on such
+                  * configurations.
+                  *
+                  * Refers to:
+                  *
+                  * STAR#9000588375: Clock Gating, SOF Issues when ref_clk-Based
+                  * SOF/ITP Mode Used
+                  */
+                 if ((dwc->dr_mode == USB_DR_MODE_HOST ||
+                                 dwc->dr_mode == USB_DR_MODE_OTG) &&
+                                 (dwc->revision >= DWC3_REVISION_210A &&
+                                 dwc->revision <= DWC3_REVISION_250A))
+                         reg |= DWC3_GCTL_DSBLCLKGTNG | DWC3_GCTL_SOFITPSYNC;
+                 else
+                         reg &= ~DWC3_GCTL_DSBLCLKGTNG;
+                 break;
+         case DWC3_GHWPARAMS1_EN_PWROPT_HIB:
+                 /* enable hibernation here */
+                 dwc->nr_scratch = DWC3_GHWPARAMS4_HIBER_SCRATCHBUFS(hwparams4);
                  break;
          default:
                  dev_dbg(dwc->dev, "No power optimization available\n");
···
          dwc3_writel(dwc->regs, DWC3_GCTL, reg);

+         ret = dwc3_alloc_scratch_buffers(dwc);
+         if (ret)
+                 goto err1;
+
+         ret = dwc3_setup_scratch_buffers(dwc);
+         if (ret)
+                 goto err2;
+
          return 0;
+
+ err2:
+         dwc3_free_scratch_buffers(dwc);
+
+ err1:
+         usb_phy_shutdown(dwc->usb2_phy);
+         usb_phy_shutdown(dwc->usb3_phy);
+         phy_exit(dwc->usb2_generic_phy);
+         phy_exit(dwc->usb3_generic_phy);

  err0:
          return ret;
···
  static void dwc3_core_exit(struct dwc3 *dwc)
  {
+         dwc3_free_scratch_buffers(dwc);
          usb_phy_shutdown(dwc->usb2_phy);
          usb_phy_shutdown(dwc->usb3_phy);
+         phy_exit(dwc->usb2_generic_phy);
+         phy_exit(dwc->usb3_generic_phy);
  }

  #define DWC3_ALIGN_MASK (16 - 1)
···
          if (IS_ERR(dwc->usb2_phy)) {
                  ret = PTR_ERR(dwc->usb2_phy);
-
-                 /*
-                  * if -ENXIO is returned, it means PHY layer wasn't
-                  * enabled, so it makes no sense to return -EPROBE_DEFER
-                  * in that case, since no PHY driver will ever probe.
-                  */
-                 if (ret == -ENXIO)
+                 if (ret == -ENXIO || ret == -ENODEV) {
+                         dwc->usb2_phy = NULL;
+                 } else if (ret == -EPROBE_DEFER) {
                          return ret;
-
-                 dev_err(dev, "no usb2 phy configured\n");
-                 return -EPROBE_DEFER;
+                 } else {
+                         dev_err(dev, "no usb2 phy configured\n");
+                         return ret;
+                 }
          }

          if (IS_ERR(dwc->usb3_phy)) {
                  ret = PTR_ERR(dwc->usb3_phy);
-
-                 /*
-                  * if -ENXIO is returned, it means PHY layer wasn't
-                  * enabled, so it makes no sense to return -EPROBE_DEFER
-                  * in that case, since no PHY driver will ever probe.
-                  */
-                 if (ret == -ENXIO)
+                 if (ret == -ENXIO || ret == -ENODEV) {
+                         dwc->usb3_phy = NULL;
+                 } else if (ret == -EPROBE_DEFER) {
                          return ret;
-
-                 dev_err(dev, "no usb3 phy configured\n");
-                 return -EPROBE_DEFER;
+                 } else {
+                         dev_err(dev, "no usb3 phy configured\n");
+                         return ret;
+                 }
+         }
+
+         dwc->usb2_generic_phy = devm_phy_get(dev, "usb2-phy");
+         if (IS_ERR(dwc->usb2_generic_phy)) {
+                 ret = PTR_ERR(dwc->usb2_generic_phy);
+                 if (ret == -ENOSYS || ret == -ENODEV) {
+                         dwc->usb2_generic_phy = NULL;
+                 } else if (ret == -EPROBE_DEFER) {
+                         return ret;
+                 } else {
+                         dev_err(dev, "no usb2 phy configured\n");
+                         return ret;
+                 }
+         }
+
+         dwc->usb3_generic_phy = devm_phy_get(dev, "usb3-phy");
+         if (IS_ERR(dwc->usb3_generic_phy)) {
+                 ret = PTR_ERR(dwc->usb3_generic_phy);
+                 if (ret == -ENOSYS || ret == -ENODEV) {
+                         dwc->usb3_generic_phy = NULL;
+                 } else if (ret == -EPROBE_DEFER) {
+                         return ret;
+                 } else {
+                         dev_err(dev, "no usb3 phy configured\n");
+                         return ret;
+                 }
          }

          dwc->xhci_resources[0].start = res->start;
···
                  goto err0;
          }

+         if (IS_ENABLED(CONFIG_USB_DWC3_HOST))
+                 dwc->dr_mode = USB_DR_MODE_HOST;
+         else if (IS_ENABLED(CONFIG_USB_DWC3_GADGET))
+                 dwc->dr_mode = USB_DR_MODE_PERIPHERAL;
+
+         if (dwc->dr_mode == USB_DR_MODE_UNKNOWN)
+                 dwc->dr_mode = USB_DR_MODE_OTG;
+
          ret = dwc3_core_init(dwc);
          if (ret) {
                  dev_err(dev, "failed to initialize core\n");
···
          usb_phy_set_suspend(dwc->usb2_phy, 0);
          usb_phy_set_suspend(dwc->usb3_phy, 0);
+         ret = phy_power_on(dwc->usb2_generic_phy);
+         if (ret < 0)
+                 goto err1;
+
+         ret = phy_power_on(dwc->usb3_generic_phy);
+         if (ret < 0)
+                 goto err_usb2phy_power;

          ret = dwc3_event_buffers_setup(dwc);
          if (ret) {
                  dev_err(dwc->dev, "failed to setup event buffers\n");
-                 goto err1;
+                 goto err_usb3phy_power;
          }
-
-         if (IS_ENABLED(CONFIG_USB_DWC3_HOST))
-                 dwc->dr_mode = USB_DR_MODE_HOST;
-         else if (IS_ENABLED(CONFIG_USB_DWC3_GADGET))
-                 dwc->dr_mode = USB_DR_MODE_PERIPHERAL;
-
-         if (dwc->dr_mode == USB_DR_MODE_UNKNOWN)
-                 dwc->dr_mode = USB_DR_MODE_OTG;

          switch (dwc->dr_mode) {
          case USB_DR_MODE_PERIPHERAL:
···
  err2:
          dwc3_event_buffers_cleanup(dwc);

+ err_usb3phy_power:
+         phy_power_off(dwc->usb3_generic_phy);
+
+ err_usb2phy_power:
+         phy_power_off(dwc->usb2_generic_phy);
+
  err1:
          usb_phy_set_suspend(dwc->usb2_phy, 1);
          usb_phy_set_suspend(dwc->usb3_phy, 1);
···
          usb_phy_set_suspend(dwc->usb2_phy, 1);
          usb_phy_set_suspend(dwc->usb3_phy, 1);
+         phy_power_off(dwc->usb2_generic_phy);
+         phy_power_off(dwc->usb3_generic_phy);

          pm_runtime_put_sync(&pdev->dev);
          pm_runtime_disable(&pdev->dev);
···
          usb_phy_shutdown(dwc->usb3_phy);
          usb_phy_shutdown(dwc->usb2_phy);
+         phy_exit(dwc->usb2_generic_phy);
+         phy_exit(dwc->usb3_generic_phy);

          return 0;
  }
···
  {
          struct dwc3 *dwc = dev_get_drvdata(dev);
          unsigned long flags;
+         int ret;

          usb_phy_init(dwc->usb3_phy);
          usb_phy_init(dwc->usb2_phy);
+         ret = phy_init(dwc->usb2_generic_phy);
+         if (ret < 0)
+                 return ret;
+
+         ret = phy_init(dwc->usb3_generic_phy);
+         if (ret < 0)
+                 goto err_usb2phy_init;

          spin_lock_irqsave(&dwc->lock, flags);
···
          pm_runtime_enable(dev);

          return 0;
+
+ err_usb2phy_init:
+         phy_exit(dwc->usb2_generic_phy);
+
+         return ret;
  }

  static const struct dev_pm_ops dwc3_dev_pm_ops = {
+82 -23
drivers/usb/dwc3/core.h
···
  #include <linux/usb/gadget.h>
  #include <linux/usb/otg.h>

+ #include <linux/phy/phy.h>
+
  /* Global constants */
  #define DWC3_EP0_BOUNCE_SIZE    512
  #define DWC3_ENDPOINTS_NUM      32
  #define DWC3_XHCI_RESOURCES_NUM 2

+ #define DWC3_SCRATCHBUF_SIZE    4096    /* each buffer is assumed to be 4KiB */
  #define DWC3_EVENT_SIZE         4       /* bytes */
  #define DWC3_EVENT_MAX_NUM      64      /* 2 events/endpoint */
  #define DWC3_EVENT_BUFFERS_SIZE (DWC3_EVENT_SIZE * DWC3_EVENT_MAX_NUM)
···
  #define DWC3_GCTL_PRTCAP_OTG    3

  #define DWC3_GCTL_CORESOFTRESET (1 << 11)
+ #define DWC3_GCTL_SOFITPSYNC    (1 << 10)
  #define DWC3_GCTL_SCALEDOWN(n)  ((n) << 4)
  #define DWC3_GCTL_SCALEDOWN_MASK DWC3_GCTL_SCALEDOWN(3)
  #define DWC3_GCTL_DISSCRAMBLE   (1 << 3)
···
  /* Device Endpoint Command Register */
  #define DWC3_DEPCMD_PARAM_SHIFT 16
  #define DWC3_DEPCMD_PARAM(x)    ((x) << DWC3_DEPCMD_PARAM_SHIFT)
- #define DWC3_DEPCMD_GET_RSC_IDX(x) (((x) >> DWC3_DEPCMD_PARAM_SHIFT) & 0x7f)
+ #define DWC3_DEPCMD_GET_RSC_IDX(x)      (((x) >> DWC3_DEPCMD_PARAM_SHIFT) & 0x7f)
  #define DWC3_DEPCMD_STATUS(x)   (((x) >> 15) & 1)
  #define DWC3_DEPCMD_HIPRI_FORCERM       (1 << 11)
  #define DWC3_DEPCMD_CMDACT      (1 << 10)
···
  * @busy_slot: first slot which is owned by HW
  * @desc: usb_endpoint_descriptor pointer
  * @dwc: pointer to DWC controller
+ * @saved_state: ep state saved during hibernation
  * @flags: endpoint flags (wedged, stalled, ...)
  * @current_trb: index of current used trb
  * @number: endpoint number (1 - 15)
···
          const struct usb_ss_ep_comp_descriptor *comp_desc;
          struct dwc3 *dwc;

+         u32 saved_state;
          unsigned flags;
  #define DWC3_EP_ENABLED (1 << 0)
  #define DWC3_EP_STALL   (1 << 1)
···
  * @ep0_trb: dma address of ep0_trb
  * @ep0_usb_req: dummy req used while handling STD USB requests
  * @ep0_bounce_addr: dma address of ep0_bounce
+ * @scratch_addr: dma address of scratchbuf
  * @lock: for synchronizing
  * @dev: pointer to our struct device
  * @xhci: pointer to our xHCI child
···
  * @gadget_driver: pointer to the gadget driver
  * @regs: base address for our registers
  * @regs_size: address space size
+ * @nr_scratch: number of scratch buffers
  * @num_event_buffers: calculated number of event buffers
  * @u1u2: only used on revisions <1.83a for workaround
  * @maximum_speed: maximum speed requested (mainly for testing purposes)
···
  * @dr_mode: requested mode of operation
  * @usb2_phy: pointer to USB2 PHY
  * @usb3_phy: pointer to USB3 PHY
+ * @usb2_generic_phy: pointer to USB2 PHY
+ * @usb3_generic_phy: pointer to USB3 PHY
  * @dcfg: saved contents of DCFG register
  * @gctl: saved contents of GCTL register
- * @is_selfpowered: true when we are selfpowered
- * @three_stage_setup: set if we perform a three phase setup
- * @ep0_bounced: true when we used bounce buffer
- * @ep0_expect_in: true when we expect a DATA IN transfer
- * @start_config_issued: true when StartConfig command has been issued
- * @setup_packet_pending: true when there's a Setup Packet in FIFO. Workaround
- * @needs_fifo_resize: not all users might want fifo resizing, flag it
- * @resize_fifos: tells us it's ok to reconfigure our TxFIFO sizes.
  * @isoch_delay: wValue from Set Isochronous Delay request;
  * @u2sel: parameter from Set SEL request.
  * @u2pel: parameter from Set SEL request.
···
  * @mem: points to start of memory which is used for this struct.
  * @hwparams: copy of hwparams registers
  * @root: debugfs root folder pointer
+ * @regset: debugfs pointer to regdump file
+ * @test_mode: true when we're entering a USB test mode
+ * @test_mode_nr: test feature selector
+ * @delayed_status: true when gadget driver asks for delayed status
+ * @ep0_bounced: true when we used bounce buffer
+ * @ep0_expect_in: true when we expect a DATA IN transfer
+ * @has_hibernation: true when dwc3 was configured with Hibernation
+ * @is_selfpowered: true when we are selfpowered
+ * @needs_fifo_resize: not all users might want fifo resizing, flag it
+ * @pullups_connected: true when Run/Stop bit is set
+ * @resize_fifos: tells us it's ok to reconfigure our TxFIFO sizes.
+ * @setup_packet_pending: true when there's a Setup Packet in FIFO. Workaround
+ * @start_config_issued: true when StartConfig command has been issued
+ * @three_stage_setup: set if we perform a three phase setup
  */
  struct dwc3 {
          struct usb_ctrlrequest *ctrl_req;
          struct dwc3_trb *ep0_trb;
          void *ep0_bounce;
+         void *scratchbuf;
          u8 *setup_buf;
          dma_addr_t ctrl_req_addr;
          dma_addr_t ep0_trb_addr;
          dma_addr_t ep0_bounce_addr;
+         dma_addr_t scratch_addr;
          struct dwc3_request ep0_usb_req;

          /* device lock */
···
          struct usb_phy *usb2_phy;
          struct usb_phy *usb3_phy;

+         struct phy *usb2_generic_phy;
+         struct phy *usb3_generic_phy;
+
          void __iomem *regs;
          size_t regs_size;
···
          u32 dcfg;
          u32 gctl;

+         u32 nr_scratch;
          u32 num_event_buffers;
          u32 u1u2;
          u32 maximum_speed;
···
  #define DWC3_REVISION_230A      0x5533230a
  #define DWC3_REVISION_240A      0x5533240a
  #define DWC3_REVISION_250A      0x5533250a
-
-         unsigned is_selfpowered:1;
-         unsigned three_stage_setup:1;
-         unsigned ep0_bounced:1;
-         unsigned ep0_expect_in:1;
-         unsigned start_config_issued:1;
-         unsigned setup_packet_pending:1;
-         unsigned delayed_status:1;
-         unsigned needs_fifo_resize:1;
-         unsigned resize_fifos:1;
-         unsigned pullups_connected:1;
+ #define DWC3_REVISION_260A      0x5533260a
+ #define DWC3_REVISION_270A      0x5533270a
+ #define DWC3_REVISION_280A      0x5533280a

          enum dwc3_ep0_next ep0_next_event;
          enum dwc3_ep0_state ep0state;
···
          u8 test_mode;
          u8 test_mode_nr;
+
+         unsigned delayed_status:1;
+         unsigned ep0_bounced:1;
+         unsigned ep0_expect_in:1;
+         unsigned has_hibernation:1;
+         unsigned is_selfpowered:1;
+         unsigned needs_fifo_resize:1;
+         unsigned pullups_connected:1;
+         unsigned resize_fifos:1;
+         unsigned setup_packet_pending:1;
+         unsigned start_config_issued:1;
+         unsigned three_stage_setup:1;
  };

  /* -------------------------------------------------------------------------- */
···
  * 12 - VndrDevTstRcved
  * @reserved15_12: Reserved, not used
  * @event_info: Information about this event
- * @reserved31_24: Reserved, not used
+ * @reserved31_25: Reserved, not used
  */
  struct dwc3_event_devt {
          u32 one_bit:1;
          u32 device_event:7;
          u32 type:4;
          u32 reserved15_12:4;
-         u32 event_info:8;
-         u32 reserved31_24:8;
+         u32 event_info:9;
+         u32 reserved31_25:7;
  } __packed;

  /**
···
          struct dwc3_event_gevt gevt;
  };

+ /**
+  * struct dwc3_gadget_ep_cmd_params - representation of endpoint command
+  * parameters
+  * @param2: third parameter
+  * @param1: second parameter
+  * @param0: first parameter
+  */
+ struct dwc3_gadget_ep_cmd_params {
+         u32 param2;
+         u32 param1;
+         u32 param0;
+ };
+
  /*
  * DWC3 Features to be used as Driver Data
  */
···
  #if IS_ENABLED(CONFIG_USB_DWC3_GADGET) || IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE)
  int dwc3_gadget_init(struct dwc3 *dwc);
  void dwc3_gadget_exit(struct dwc3 *dwc);
+ int dwc3_gadget_set_test_mode(struct dwc3 *dwc, int mode);
+ int dwc3_gadget_get_link_state(struct dwc3 *dwc);
+ int dwc3_gadget_set_link_state(struct dwc3 *dwc, enum dwc3_link_state state);
+ int dwc3_send_gadget_ep_cmd(struct dwc3 *dwc, unsigned ep,
+                 unsigned cmd, struct dwc3_gadget_ep_cmd_params *params);
+ int dwc3_send_gadget_generic_command(struct dwc3 *dwc, int cmd, u32 param);
  #else
  static inline int dwc3_gadget_init(struct dwc3 *dwc)
  { return 0; }
  static inline void dwc3_gadget_exit(struct dwc3 *dwc)
  { }
+ static inline int dwc3_gadget_set_test_mode(struct dwc3 *dwc, int mode)
+ { return 0; }
+ static inline int dwc3_gadget_get_link_state(struct dwc3 *dwc)
+ { return 0; }
+ static inline int dwc3_gadget_set_link_state(struct dwc3 *dwc,
+                 enum dwc3_link_state state)
+ { return 0; }
+
+ static inline int dwc3_send_gadget_ep_cmd(struct dwc3 *dwc, unsigned ep,
+                 unsigned cmd, struct dwc3_gadget_ep_cmd_params *params)
+ { return 0; }
+ static inline int dwc3_send_gadget_generic_command(struct dwc3 *dwc,
+                 int cmd, u32 param)
+ { return 0; }
  #endif

  /* power management interface */
-5
drivers/usb/dwc3/dwc3-omap.c
···
          }

          res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-         if (!res) {
-                 dev_err(dev, "missing memory base resource\n");
-                 return -EINVAL;
-         }
-
          base = devm_ioremap_resource(dev, res);
          if (IS_ERR(base))
                  return PTR_ERR(base);
+152 -31
drivers/usb/dwc3/gadget.c
···
  }

  /**
+  * dwc3_gadget_get_link_state - Gets current state of USB Link
+  * @dwc: pointer to our context structure
+  *
+  * Caller should take care of locking. This function will
+  * return the link state on success (>= 0) or -ETIMEDOUT.
+  */
+ int dwc3_gadget_get_link_state(struct dwc3 *dwc)
+ {
+         u32 reg;
+
+         reg = dwc3_readl(dwc->regs, DWC3_DSTS);
+
+         return DWC3_DSTS_USBLNKST(reg);
+ }
+
+ /**
  * dwc3_gadget_set_link_state - Sets USB Link to a particular State
  * @dwc: pointer to our context structure
  * @state: the state to put link into
···
  static int dwc3_gadget_set_ep_config(struct dwc3 *dwc, struct dwc3_ep *dep,
                  const struct usb_endpoint_descriptor *desc,
                  const struct usb_ss_ep_comp_descriptor *comp_desc,
-                 bool ignore)
+                 bool ignore, bool restore)
  {
          struct dwc3_gadget_ep_cmd_params params;
···
          if (ignore)
                  params.param0 |= DWC3_DEPCFG_IGN_SEQ_NUM;
+
+         if (restore) {
+                 params.param0 |= DWC3_DEPCFG_ACTION_RESTORE;
+                 params.param2 |= dep->saved_state;
+         }

          params.param1 = DWC3_DEPCFG_XFER_COMPLETE_EN
                  | DWC3_DEPCFG_XFER_NOT_READY_EN;
···
  static int __dwc3_gadget_ep_enable(struct dwc3_ep *dep,
                  const struct usb_endpoint_descriptor *desc,
                  const struct usb_ss_ep_comp_descriptor *comp_desc,
-                 bool ignore)
+                 bool ignore, bool restore)
  {
          struct dwc3 *dwc = dep->dwc;
          u32 reg;
···
                  return ret;
          }

-         ret = dwc3_gadget_set_ep_config(dwc, dep, desc, comp_desc, ignore);
+         ret = dwc3_gadget_set_ep_config(dwc, dep, desc, comp_desc, ignore,
+                         restore);
          if (ret)
                  return ret;
···
          return 0;
  }

- static void dwc3_stop_active_transfer(struct dwc3 *dwc, u32 epnum);
+ static void dwc3_stop_active_transfer(struct dwc3 *dwc, u32 epnum, bool force);
  static void dwc3_remove_requests(struct dwc3 *dwc, struct dwc3_ep *dep)
  {
          struct dwc3_request *req;

          if (!list_empty(&dep->req_queued)) {
-                 dwc3_stop_active_transfer(dwc, dep->number);
+                 dwc3_stop_active_transfer(dwc, dep->number, true);

                  /* - giveback all requests to gadget driver */
                  while (!list_empty(&dep->req_queued)) {
···
          }

          spin_lock_irqsave(&dwc->lock, flags);
-         ret = __dwc3_gadget_ep_enable(dep, desc, ep->comp_desc, false);
+         ret = __dwc3_gadget_ep_enable(dep, desc, ep->comp_desc, false, false);
          spin_unlock_irqrestore(&dwc->lock, flags);

          return ret;
···
                          trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS_FIRST;
                  else
                          trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS;
-
-                 if (!req->request.no_interrupt && !chain)
-                         trb->ctrl |= DWC3_TRB_CTRL_IOC;
                  break;

          case USB_ENDPOINT_XFER_BULK:
···
                   */
                  BUG();
          }
+
+         if (!req->request.no_interrupt && !chain)
+                 trb->ctrl |= DWC3_TRB_CTRL_IOC;

          if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
                  trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
···
           */
          if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) {
                  if (list_empty(&dep->req_queued)) {
-                         dwc3_stop_active_transfer(dwc, dep->number);
+                         dwc3_stop_active_transfer(dwc, dep->number, true);
                          dep->flags = DWC3_EP_ENABLED;
                  }
                  return 0;
···
                          dev_dbg(dwc->dev, "%s: failed to kick transfers\n",
                                          dep->name);
                          return ret;
          }

+         /*
+          * 4. Stream Capable Bulk Endpoints. We need to start the transfer
+          * right away, otherwise host will not know we have streams to be
+          * handled.
+          */
+         if (dep->stream_capable) {
+                 int ret;
+
+                 ret = __dwc3_gadget_kick_transfer(dep, 0, true);
+                 if (ret && ret != -EBUSY) {
+                         struct dwc3 *dwc = dep->dwc;
+
+                         dev_dbg(dwc->dev, "%s: failed to kick transfers\n",
+                                         dep->name);
+                 }
+         }

          return 0;
···
          }
          if (r == req) {
                  /* wait until it is processed */
-                 dwc3_stop_active_transfer(dwc, dep->number);
+                 dwc3_stop_active_transfer(dwc, dep->number, true);
                  goto out1;
          }
          dev_err(dwc->dev, "request %p was not queued to %s\n",
···
                  ret = dwc3_send_gadget_ep_cmd(dwc, dep->number,
                                  DWC3_DEPCMD_SETSTALL, &params);
                  if (ret)
-                         dev_err(dwc->dev, "failed to %s STALL on %s\n",
-                                         value ? "set" : "clear",
+                         dev_err(dwc->dev, "failed to set STALL on %s\n",
                                          dep->name);
                  else
                          dep->flags |= DWC3_EP_STALL;
···
                  ret = dwc3_send_gadget_ep_cmd(dwc, dep->number,
                                  DWC3_DEPCMD_CLEARSTALL, &params);
                  if (ret)
-                         dev_err(dwc->dev, "failed to %s STALL on %s\n",
-                                         value ? "set" : "clear",
+                         dev_err(dwc->dev, "failed to clear STALL on %s\n",
                                          dep->name);
                  else
                          dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
···
          return 0;
  }

- static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on)
+ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
  {
          u32 reg;
          u32 timeout = 500;
···
                  if (dwc->revision >= DWC3_REVISION_194A)
                          reg &= ~DWC3_DCTL_KEEP_CONNECT;
                  reg |= DWC3_DCTL_RUN_STOP;
+
+                 if (dwc->has_hibernation)
+                         reg |= DWC3_DCTL_KEEP_CONNECT;
+
                  dwc->pullups_connected = true;
          } else {
                  reg &= ~DWC3_DCTL_RUN_STOP;
+
+                 if (dwc->has_hibernation && !suspend)
+                         reg &= ~DWC3_DCTL_KEEP_CONNECT;
+
                  dwc->pullups_connected = false;
          }
···
          is_on = !!is_on;

          spin_lock_irqsave(&dwc->lock, flags);
-         ret = dwc3_gadget_run_stop(dwc, is_on);
+         ret = dwc3_gadget_run_stop(dwc, is_on, false);
          spin_unlock_irqrestore(&dwc->lock, flags);

          return ret;
···
          dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512);

          dep = dwc->eps[0];
-         ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false);
+         ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false,
+                         false);
          if (ret) {
                  dev_err(dwc->dev, "failed to enable %s\n", dep->name);
                  goto err2;
          }

          dep = dwc->eps[1];
-         ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false);
+         ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false,
+                         false);
          if (ret) {
                  dev_err(dwc->dev, "failed to enable %s\n", dep->name);
                  goto err3;
···
           */
                  dep->flags = DWC3_EP_PENDING_REQUEST;
          } else {
dwc3_stop_active_transfer(dwc, dep->number); 1852 + dwc3_stop_active_transfer(dwc, dep->number, true); 1900 1853 dep->flags = DWC3_EP_ENABLED; 1901 1854 } 1902 1855 return 1; 1903 1856 } 1904 1857 1905 - if ((event->status & DEPEVT_STATUS_IOC) && 1906 - (trb->ctrl & DWC3_TRB_CTRL_IOC)) 1907 - return 0; 1908 1858 return 1; 1909 1859 } 1910 1860 ··· 2043 1999 } 2044 2000 } 2045 2001 2046 - static void dwc3_stop_active_transfer(struct dwc3 *dwc, u32 epnum) 2002 + static void dwc3_suspend_gadget(struct dwc3 *dwc) 2003 + { 2004 + if (dwc->gadget_driver && dwc->gadget_driver->suspend) { 2005 + spin_unlock(&dwc->lock); 2006 + dwc->gadget_driver->suspend(&dwc->gadget); 2007 + spin_lock(&dwc->lock); 2008 + } 2009 + } 2010 + 2011 + static void dwc3_resume_gadget(struct dwc3 *dwc) 2012 + { 2013 + if (dwc->gadget_driver && dwc->gadget_driver->resume) { 2014 + spin_unlock(&dwc->lock); 2015 + dwc->gadget_driver->resume(&dwc->gadget); 2016 + spin_lock(&dwc->lock); 2017 + } 2018 + } 2019 + 2020 + static void dwc3_stop_active_transfer(struct dwc3 *dwc, u32 epnum, bool force) 2047 2021 { 2048 2022 struct dwc3_ep *dep; 2049 2023 struct dwc3_gadget_ep_cmd_params params; ··· 2093 2031 */ 2094 2032 2095 2033 cmd = DWC3_DEPCMD_ENDTRANSFER; 2096 - cmd |= DWC3_DEPCMD_HIPRI_FORCERM | DWC3_DEPCMD_CMDIOC; 2034 + cmd |= force ? 
DWC3_DEPCMD_HIPRI_FORCERM : 0; 2035 + cmd |= DWC3_DEPCMD_CMDIOC; 2097 2036 cmd |= DWC3_DEPCMD_PARAM(dep->resource_index); 2098 2037 memset(&params, 0, sizeof(params)); 2099 2038 ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, cmd, &params); ··· 2323 2260 reg |= DWC3_DCTL_HIRD_THRES(12); 2324 2261 2325 2262 dwc3_writel(dwc->regs, DWC3_DCTL, reg); 2263 + } else { 2264 + reg = dwc3_readl(dwc->regs, DWC3_DCTL); 2265 + reg &= ~DWC3_DCTL_HIRD_THRES_MASK; 2266 + dwc3_writel(dwc->regs, DWC3_DCTL, reg); 2326 2267 } 2327 2268 2328 2269 dep = dwc->eps[0]; 2329 - ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, true); 2270 + ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, true, 2271 + false); 2330 2272 if (ret) { 2331 2273 dev_err(dwc->dev, "failed to enable %s\n", dep->name); 2332 2274 return; 2333 2275 } 2334 2276 2335 2277 dep = dwc->eps[1]; 2336 - ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, true); 2278 + ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, true, 2279 + false); 2337 2280 if (ret) { 2338 2281 dev_err(dwc->dev, "failed to enable %s\n", dep->name); 2339 2282 return; ··· 2447 2378 2448 2379 dwc->link_state = next; 2449 2380 2381 + switch (next) { 2382 + case DWC3_LINK_STATE_U1: 2383 + if (dwc->speed == USB_SPEED_SUPER) 2384 + dwc3_suspend_gadget(dwc); 2385 + break; 2386 + case DWC3_LINK_STATE_U2: 2387 + case DWC3_LINK_STATE_U3: 2388 + dwc3_suspend_gadget(dwc); 2389 + break; 2390 + case DWC3_LINK_STATE_RESUME: 2391 + dwc3_resume_gadget(dwc); 2392 + break; 2393 + default: 2394 + /* do nothing */ 2395 + break; 2396 + } 2397 + 2450 2398 dev_vdbg(dwc->dev, "%s link %d\n", __func__, dwc->link_state); 2399 + } 2400 + 2401 + static void dwc3_gadget_hibernation_interrupt(struct dwc3 *dwc, 2402 + unsigned int evtinfo) 2403 + { 2404 + unsigned int is_ss = evtinfo & BIT(4); 2405 + 2406 + /** 2407 + * WORKAROUND: DWC3 revison 2.20a with hibernation support 2408 + * have a known issue which can cause USB CV 
TD.9.23 to fail 2409 + * randomly. 2410 + * 2411 + * Because of this issue, core could generate bogus hibernation 2412 + * events which SW needs to ignore. 2413 + * 2414 + * Refers to: 2415 + * 2416 + * STAR#9000546576: Device Mode Hibernation: Issue in USB 2.0 2417 + * Device Fallback from SuperSpeed 2418 + */ 2419 + if (is_ss ^ (dwc->speed == USB_SPEED_SUPER)) 2420 + return; 2421 + 2422 + /* enter hibernation here */ 2451 2423 } 2452 2424 2453 2425 static void dwc3_gadget_interrupt(struct dwc3 *dwc, ··· 2506 2396 break; 2507 2397 case DWC3_DEVICE_EVENT_WAKEUP: 2508 2398 dwc3_gadget_wakeup_interrupt(dwc); 2399 + break; 2400 + case DWC3_DEVICE_EVENT_HIBER_REQ: 2401 + if (dev_WARN_ONCE(dwc->dev, !dwc->has_hibernation, 2402 + "unexpected hibernation event\n")) 2403 + break; 2404 + 2405 + dwc3_gadget_hibernation_interrupt(dwc, event->event_info); 2509 2406 break; 2510 2407 case DWC3_DEVICE_EVENT_LINK_STATUS_CHANGE: 2511 2408 dwc3_gadget_linksts_change_interrupt(dwc, event->event_info); ··· 2778 2661 2779 2662 int dwc3_gadget_prepare(struct dwc3 *dwc) 2780 2663 { 2781 - if (dwc->pullups_connected) 2664 + if (dwc->pullups_connected) { 2782 2665 dwc3_gadget_disable_irq(dwc); 2666 + dwc3_gadget_run_stop(dwc, true, true); 2667 + } 2783 2668 2784 2669 return 0; 2785 2670 } ··· 2790 2671 { 2791 2672 if (dwc->pullups_connected) { 2792 2673 dwc3_gadget_enable_irq(dwc); 2793 - dwc3_gadget_run_stop(dwc, true); 2674 + dwc3_gadget_run_stop(dwc, true, false); 2794 2675 } 2795 2676 } 2796 2677 ··· 2813 2694 dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512); 2814 2695 2815 2696 dep = dwc->eps[0]; 2816 - ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false); 2697 + ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false, 2698 + false); 2817 2699 if (ret) 2818 2700 goto err0; 2819 2701 2820 2702 dep = dwc->eps[1]; 2821 - ret = __dwc3_gadget_ep_enable(dep, &dwc3_gadget_ep0_desc, NULL, false); 2703 + ret = __dwc3_gadget_ep_enable(dep, 
&dwc3_gadget_ep0_desc, NULL, false, 2704 + false); 2822 2705 if (ret) 2823 2706 goto err1; 2824 2707
-12
drivers/usb/dwc3/gadget.h
··· 56 56 /* DEPXFERCFG parameter 0 */ 57 57 #define DWC3_DEPXFERCFG_NUM_XFER_RES(n) ((n) & 0xffff) 58 58 59 - struct dwc3_gadget_ep_cmd_params { 60 - u32 param2; 61 - u32 param1; 62 - u32 param0; 63 - }; 64 - 65 59 /* -------------------------------------------------------------------------- */ 66 60 67 61 #define to_dwc3_request(r) (container_of(r, struct dwc3_request, request)) ··· 79 85 void dwc3_gadget_giveback(struct dwc3_ep *dep, struct dwc3_request *req, 80 86 int status); 81 87 82 - int dwc3_gadget_set_test_mode(struct dwc3 *dwc, int mode); 83 - int dwc3_gadget_set_link_state(struct dwc3 *dwc, enum dwc3_link_state state); 84 - 85 88 void dwc3_ep0_interrupt(struct dwc3 *dwc, 86 89 const struct dwc3_event_depevt *event); 87 90 void dwc3_ep0_out_start(struct dwc3 *dwc); ··· 86 95 int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request, 87 96 gfp_t gfp_flags); 88 97 int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value); 89 - int dwc3_send_gadget_ep_cmd(struct dwc3 *dwc, unsigned ep, 90 - unsigned cmd, struct dwc3_gadget_ep_cmd_params *params); 91 - int dwc3_send_gadget_generic_command(struct dwc3 *dwc, int cmd, u32 param); 92 98 93 99 /** 94 100 * dwc3_gadget_ep_get_transfer_index - Gets transfer index from HW
-1
drivers/usb/gadget/Kconfig
··· 301 301 gadget drivers to also be dynamically linked. 302 302 303 303 config USB_S3C_HSOTG 304 - depends on ARM 305 304 tristate "Designware/S3C HS/OtG USB Device controller" 306 305 help 307 306 The Designware USB2.0 high-speed gadget controller
+7 -7
drivers/usb/gadget/at91_udc.c
··· 1758 1758 1759 1759 /* newer chips have more FIFO memory than rm9200 */ 1760 1760 if (cpu_is_at91sam9260() || cpu_is_at91sam9g20()) { 1761 - usb_ep_set_maxpacket_limit(&udc->ep[0].ep, 64); 1762 - usb_ep_set_maxpacket_limit(&udc->ep[3].ep, 64); 1763 - usb_ep_set_maxpacket_limit(&udc->ep[4].ep, 512); 1764 - usb_ep_set_maxpacket_limit(&udc->ep[5].ep, 512); 1761 + udc->ep[0].maxpacket = 64; 1762 + udc->ep[3].maxpacket = 64; 1763 + udc->ep[4].maxpacket = 512; 1764 + udc->ep[5].maxpacket = 512; 1765 1765 } else if (cpu_is_at91sam9261() || cpu_is_at91sam9g10()) { 1766 - usb_ep_set_maxpacket_limit(&udc->ep[3].ep, 64); 1766 + udc->ep[3].maxpacket = 64; 1767 1767 } else if (cpu_is_at91sam9263()) { 1768 - usb_ep_set_maxpacket_limit(&udc->ep[0].ep, 64); 1769 - usb_ep_set_maxpacket_limit(&udc->ep[3].ep, 64); 1768 + udc->ep[0].maxpacket = 64; 1769 + udc->ep[3].maxpacket = 64; 1770 1770 } 1771 1771 1772 1772 udc->udp_baseaddr = ioremap(res->start, resource_size(res));
+11 -5
drivers/usb/gadget/atmel_usba_udc.c
··· 1661 1661 if (dma_status) { 1662 1662 int i; 1663 1663 1664 - for (i = 1; i < USBA_NR_ENDPOINTS; i++) 1664 + for (i = 1; i < USBA_NR_DMAS; i++) 1665 1665 if (dma_status & (1 << i)) 1666 1666 usba_dma_irq(udc, &udc->usba_ep[i]); 1667 1667 } ··· 1670 1670 if (ep_status) { 1671 1671 int i; 1672 1672 1673 - for (i = 0; i < USBA_NR_ENDPOINTS; i++) 1673 + for (i = 0; i < udc->num_ep; i++) 1674 1674 if (ep_status & (1 << i)) { 1675 1675 if (ep_is_control(&udc->usba_ep[i])) 1676 1676 usba_control_irq(udc, &udc->usba_ep[i]); ··· 1827 1827 toggle_bias(0); 1828 1828 usba_writel(udc, CTRL, USBA_DISABLE_MASK); 1829 1829 1830 - udc->driver = NULL; 1831 - 1832 1830 clk_disable_unprepare(udc->hclk); 1833 1831 clk_disable_unprepare(udc->pclk); 1834 1832 1835 - DBG(DBG_GADGET, "unregistered driver `%s'\n", driver->driver.name); 1833 + DBG(DBG_GADGET, "unregistered driver `%s'\n", udc->driver->driver.name); 1834 + 1835 + udc->driver = NULL; 1836 1836 1837 1837 return 0; 1838 1838 } ··· 1912 1912 list_add_tail(&ep->ep.ep_list, &udc->gadget.ep_list); 1913 1913 1914 1914 i++; 1915 + } 1916 + 1917 + if (i == 0) { 1918 + dev_err(&pdev->dev, "of_probe: no endpoint specified\n"); 1919 + ret = -EINVAL; 1920 + goto err; 1915 1921 } 1916 1922 1917 1923 return eps;
+1 -1
drivers/usb/gadget/atmel_usba_udc.h
··· 210 210 #define USBA_FIFO_BASE(x) ((x) << 16) 211 211 212 212 /* Synth parameters */ 213 - #define USBA_NR_ENDPOINTS 7 213 + #define USBA_NR_DMAS 7 214 214 215 215 #define EP0_FIFO_SIZE 64 216 216 #define EP0_EPT_SIZE USBA_EPT_SIZE_64
+478 -138
drivers/usb/gadget/f_fs.c
··· 28 28 #include <linux/usb/composite.h> 29 29 #include <linux/usb/functionfs.h> 30 30 31 + #include <linux/aio.h> 32 + #include <linux/mmu_context.h> 33 + #include <linux/poll.h> 34 + 31 35 #include "u_fs.h" 32 36 #include "configfs.h" 33 37 ··· 103 99 } 104 100 105 101 102 + static inline enum ffs_setup_state 103 + ffs_setup_state_clear_cancelled(struct ffs_data *ffs) 104 + { 105 + return (enum ffs_setup_state) 106 + cmpxchg(&ffs->setup_state, FFS_SETUP_CANCELLED, FFS_NO_SETUP); 107 + } 108 + 109 + 106 110 static void ffs_func_eps_disable(struct ffs_function *func); 107 111 static int __must_check ffs_func_eps_enable(struct ffs_function *func); 108 112 ··· 134 122 struct usb_ep *ep; /* P: ffs->eps_lock */ 135 123 struct usb_request *req; /* P: epfile->mutex */ 136 124 137 - /* [0]: full speed, [1]: high speed */ 138 - struct usb_endpoint_descriptor *descs[2]; 125 + /* [0]: full speed, [1]: high speed, [2]: super speed */ 126 + struct usb_endpoint_descriptor *descs[3]; 139 127 140 128 u8 num; 141 129 ··· 160 148 unsigned char _pad; 161 149 }; 162 150 151 + /* ffs_io_data structure ***************************************************/ 152 + 153 + struct ffs_io_data { 154 + bool aio; 155 + bool read; 156 + 157 + struct kiocb *kiocb; 158 + const struct iovec *iovec; 159 + unsigned long nr_segs; 160 + char __user *buf; 161 + size_t len; 162 + 163 + struct mm_struct *mm; 164 + struct work_struct work; 165 + 166 + struct usb_ep *ep; 167 + struct usb_request *req; 168 + }; 169 + 163 170 static int __must_check ffs_epfiles_create(struct ffs_data *ffs); 164 171 static void ffs_epfiles_destroy(struct ffs_epfile *epfiles, unsigned count); 165 172 ··· 192 161 DEFINE_MUTEX(ffs_lock); 193 162 EXPORT_SYMBOL(ffs_lock); 194 163 195 - static struct ffs_dev *ffs_find_dev(const char *name); 164 + static struct ffs_dev *_ffs_find_dev(const char *name); 165 + static struct ffs_dev *_ffs_alloc_dev(void); 196 166 static int _ffs_name_dev(struct ffs_dev *dev, const char *name); 167 + 
static void _ffs_free_dev(struct ffs_dev *dev); 197 168 static void *ffs_acquire_dev(const char *dev_name); 198 169 static void ffs_release_dev(struct ffs_data *ffs_data); 199 170 static int ffs_ready(struct ffs_data *ffs); ··· 251 218 } 252 219 253 220 ffs->setup_state = FFS_NO_SETUP; 254 - return ffs->ep0req_status; 221 + return req->status ? req->status : req->actual; 255 222 } 256 223 257 224 static int __ffs_ep0_stall(struct ffs_data *ffs) ··· 277 244 ENTER(); 278 245 279 246 /* Fast check if setup was canceled */ 280 - if (FFS_SETUP_STATE(ffs) == FFS_SETUP_CANCELED) 247 + if (ffs_setup_state_clear_cancelled(ffs) == FFS_SETUP_CANCELLED) 281 248 return -EIDRM; 282 249 283 250 /* Acquire mutex */ ··· 343 310 * rather then _irqsave 344 311 */ 345 312 spin_lock_irq(&ffs->ev.waitq.lock); 346 - switch (FFS_SETUP_STATE(ffs)) { 347 - case FFS_SETUP_CANCELED: 313 + switch (ffs_setup_state_clear_cancelled(ffs)) { 314 + case FFS_SETUP_CANCELLED: 348 315 ret = -EIDRM; 349 316 goto done_spin; 350 317 ··· 379 346 /* 380 347 * We are guaranteed to be still in FFS_ACTIVE state 381 348 * but the state of setup could have changed from 382 - * FFS_SETUP_PENDING to FFS_SETUP_CANCELED so we need 349 + * FFS_SETUP_PENDING to FFS_SETUP_CANCELLED so we need 383 350 * to check for that. If that happened we copied data 384 351 * from user space in vain but it's unlikely. 385 352 * ··· 388 355 * transition can be performed and it's protected by 389 356 * mutex. 
390 357 */ 391 - if (FFS_SETUP_STATE(ffs) == FFS_SETUP_CANCELED) { 358 + if (ffs_setup_state_clear_cancelled(ffs) == 359 + FFS_SETUP_CANCELLED) { 392 360 ret = -EIDRM; 393 361 done_spin: 394 362 spin_unlock_irq(&ffs->ev.waitq.lock); ··· 455 421 ENTER(); 456 422 457 423 /* Fast check if setup was canceled */ 458 - if (FFS_SETUP_STATE(ffs) == FFS_SETUP_CANCELED) 424 + if (ffs_setup_state_clear_cancelled(ffs) == FFS_SETUP_CANCELLED) 459 425 return -EIDRM; 460 426 461 427 /* Acquire mutex */ ··· 475 441 */ 476 442 spin_lock_irq(&ffs->ev.waitq.lock); 477 443 478 - switch (FFS_SETUP_STATE(ffs)) { 479 - case FFS_SETUP_CANCELED: 444 + switch (ffs_setup_state_clear_cancelled(ffs)) { 445 + case FFS_SETUP_CANCELLED: 480 446 ret = -EIDRM; 481 447 break; 482 448 ··· 523 489 spin_lock_irq(&ffs->ev.waitq.lock); 524 490 525 491 /* See ffs_ep0_write() */ 526 - if (FFS_SETUP_STATE(ffs) == FFS_SETUP_CANCELED) { 492 + if (ffs_setup_state_clear_cancelled(ffs) == 493 + FFS_SETUP_CANCELLED) { 527 494 ret = -EIDRM; 528 495 break; 529 496 } ··· 593 558 return ret; 594 559 } 595 560 561 + static unsigned int ffs_ep0_poll(struct file *file, poll_table *wait) 562 + { 563 + struct ffs_data *ffs = file->private_data; 564 + unsigned int mask = POLLWRNORM; 565 + int ret; 566 + 567 + poll_wait(file, &ffs->ev.waitq, wait); 568 + 569 + ret = ffs_mutex_lock(&ffs->mutex, file->f_flags & O_NONBLOCK); 570 + if (unlikely(ret < 0)) 571 + return mask; 572 + 573 + switch (ffs->state) { 574 + case FFS_READ_DESCRIPTORS: 575 + case FFS_READ_STRINGS: 576 + mask |= POLLOUT; 577 + break; 578 + 579 + case FFS_ACTIVE: 580 + switch (ffs->setup_state) { 581 + case FFS_NO_SETUP: 582 + if (ffs->ev.count) 583 + mask |= POLLIN; 584 + break; 585 + 586 + case FFS_SETUP_PENDING: 587 + case FFS_SETUP_CANCELLED: 588 + mask |= (POLLIN | POLLOUT); 589 + break; 590 + } 591 + case FFS_CLOSING: 592 + break; 593 + } 594 + 595 + mutex_unlock(&ffs->mutex); 596 + 597 + return mask; 598 + } 599 + 596 600 static const struct 
file_operations ffs_ep0_operations = { 597 601 .llseek = no_llseek, 598 602 ··· 640 566 .read = ffs_ep0_read, 641 567 .release = ffs_ep0_release, 642 568 .unlocked_ioctl = ffs_ep0_ioctl, 569 + .poll = ffs_ep0_poll, 643 570 }; 644 571 645 572 ··· 656 581 } 657 582 } 658 583 659 - static ssize_t ffs_epfile_io(struct file *file, 660 - char __user *buf, size_t len, int read) 584 + static void ffs_user_copy_worker(struct work_struct *work) 585 + { 586 + struct ffs_io_data *io_data = container_of(work, struct ffs_io_data, 587 + work); 588 + int ret = io_data->req->status ? io_data->req->status : 589 + io_data->req->actual; 590 + 591 + if (io_data->read && ret > 0) { 592 + int i; 593 + size_t pos = 0; 594 + use_mm(io_data->mm); 595 + for (i = 0; i < io_data->nr_segs; i++) { 596 + if (unlikely(copy_to_user(io_data->iovec[i].iov_base, 597 + &io_data->buf[pos], 598 + io_data->iovec[i].iov_len))) { 599 + ret = -EFAULT; 600 + break; 601 + } 602 + pos += io_data->iovec[i].iov_len; 603 + } 604 + unuse_mm(io_data->mm); 605 + } 606 + 607 + aio_complete(io_data->kiocb, ret, ret); 608 + 609 + usb_ep_free_request(io_data->ep, io_data->req); 610 + 611 + io_data->kiocb->private = NULL; 612 + if (io_data->read) 613 + kfree(io_data->iovec); 614 + kfree(io_data->buf); 615 + kfree(io_data); 616 + } 617 + 618 + static void ffs_epfile_async_io_complete(struct usb_ep *_ep, 619 + struct usb_request *req) 620 + { 621 + struct ffs_io_data *io_data = req->context; 622 + 623 + ENTER(); 624 + 625 + INIT_WORK(&io_data->work, ffs_user_copy_worker); 626 + schedule_work(&io_data->work); 627 + } 628 + 629 + static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data) 661 630 { 662 631 struct ffs_epfile *epfile = file->private_data; 663 632 struct ffs_ep *ep; ··· 731 612 } 732 613 733 614 /* Do we halt? 
*/ 734 - halt = !read == !epfile->in; 615 + halt = (!io_data->read == !epfile->in); 735 616 if (halt && epfile->isoc) { 736 617 ret = -EINVAL; 737 618 goto error; ··· 749 630 * Controller may require buffer size to be aligned to 750 631 * maxpacketsize of an out endpoint. 751 632 */ 752 - data_len = read ? usb_ep_align_maybe(gadget, ep->ep, len) : len; 633 + data_len = io_data->read ? 634 + usb_ep_align_maybe(gadget, ep->ep, io_data->len) : 635 + io_data->len; 753 636 754 637 data = kmalloc(data_len, GFP_KERNEL); 755 638 if (unlikely(!data)) 756 639 return -ENOMEM; 757 - 758 - if (!read && unlikely(copy_from_user(data, buf, len))) { 759 - ret = -EFAULT; 760 - goto error; 640 + if (io_data->aio && !io_data->read) { 641 + int i; 642 + size_t pos = 0; 643 + for (i = 0; i < io_data->nr_segs; i++) { 644 + if (unlikely(copy_from_user(&data[pos], 645 + io_data->iovec[i].iov_base, 646 + io_data->iovec[i].iov_len))) { 647 + ret = -EFAULT; 648 + goto error; 649 + } 650 + pos += io_data->iovec[i].iov_len; 651 + } 652 + } else { 653 + if (!io_data->read && 654 + unlikely(__copy_from_user(data, io_data->buf, 655 + io_data->len))) { 656 + ret = -EFAULT; 657 + goto error; 658 + } 761 659 } 762 660 } 763 661 ··· 797 661 ret = -EBADMSG; 798 662 } else { 799 663 /* Fire the request */ 800 - DECLARE_COMPLETION_ONSTACK(done); 664 + struct usb_request *req; 801 665 802 - struct usb_request *req = ep->req; 803 - req->context = &done; 804 - req->complete = ffs_epfile_io_complete; 805 - req->buf = data; 806 - req->length = data_len; 666 + if (io_data->aio) { 667 + req = usb_ep_alloc_request(ep->ep, GFP_KERNEL); 668 + if (unlikely(!req)) 669 + goto error; 807 670 808 - ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC); 671 + req->buf = data; 672 + req->length = io_data->len; 809 673 810 - spin_unlock_irq(&epfile->ffs->eps_lock); 674 + io_data->buf = data; 675 + io_data->ep = ep->ep; 676 + io_data->req = req; 811 677 812 - if (unlikely(ret < 0)) { 813 - /* nop */ 814 - } else if 
(unlikely(wait_for_completion_interruptible(&done))) { 815 - ret = -EINTR; 816 - usb_ep_dequeue(ep->ep, req); 678 + req->context = io_data; 679 + req->complete = ffs_epfile_async_io_complete; 680 + 681 + ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC); 682 + if (unlikely(ret)) { 683 + usb_ep_free_request(ep->ep, req); 684 + goto error; 685 + } 686 + ret = -EIOCBQUEUED; 687 + 688 + spin_unlock_irq(&epfile->ffs->eps_lock); 817 689 } else { 818 - /* 819 - * XXX We may end up silently droping data here. 820 - * Since data_len (i.e. req->length) may be bigger 821 - * than len (after being rounded up to maxpacketsize), 822 - * we may end up with more data then user space has 823 - * space for. 824 - */ 825 - ret = ep->status; 826 - if (read && ret > 0 && 827 - unlikely(copy_to_user(buf, data, 828 - min_t(size_t, ret, len)))) 829 - ret = -EFAULT; 690 + DECLARE_COMPLETION_ONSTACK(done); 691 + 692 + req = ep->req; 693 + req->buf = data; 694 + req->length = io_data->len; 695 + 696 + req->context = &done; 697 + req->complete = ffs_epfile_io_complete; 698 + 699 + ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC); 700 + 701 + spin_unlock_irq(&epfile->ffs->eps_lock); 702 + 703 + if (unlikely(ret < 0)) { 704 + /* nop */ 705 + } else if (unlikely( 706 + wait_for_completion_interruptible(&done))) { 707 + ret = -EINTR; 708 + usb_ep_dequeue(ep->ep, req); 709 + } else { 710 + /* 711 + * XXX We may end up silently droping data 712 + * here. Since data_len (i.e. req->length) may 713 + * be bigger than len (after being rounded up 714 + * to maxpacketsize), we may end up with more 715 + * data then user space has space for. 
716 + */ 717 + ret = ep->status; 718 + if (io_data->read && ret > 0) { 719 + ret = min_t(size_t, ret, io_data->len); 720 + 721 + if (unlikely(copy_to_user(io_data->buf, 722 + data, ret))) 723 + ret = -EFAULT; 724 + } 725 + } 726 + kfree(data); 830 727 } 831 728 } 832 729 833 730 mutex_unlock(&epfile->mutex); 731 + return ret; 834 732 error: 835 733 kfree(data); 836 734 return ret; ··· 874 704 ffs_epfile_write(struct file *file, const char __user *buf, size_t len, 875 705 loff_t *ptr) 876 706 { 707 + struct ffs_io_data io_data; 708 + 877 709 ENTER(); 878 710 879 - return ffs_epfile_io(file, (char __user *)buf, len, 0); 711 + io_data.aio = false; 712 + io_data.read = false; 713 + io_data.buf = (char * __user)buf; 714 + io_data.len = len; 715 + 716 + return ffs_epfile_io(file, &io_data); 880 717 } 881 718 882 719 static ssize_t 883 720 ffs_epfile_read(struct file *file, char __user *buf, size_t len, loff_t *ptr) 884 721 { 722 + struct ffs_io_data io_data; 723 + 885 724 ENTER(); 886 725 887 - return ffs_epfile_io(file, buf, len, 1); 726 + io_data.aio = false; 727 + io_data.read = true; 728 + io_data.buf = buf; 729 + io_data.len = len; 730 + 731 + return ffs_epfile_io(file, &io_data); 888 732 } 889 733 890 734 static int ··· 915 731 ffs_data_opened(epfile->ffs); 916 732 917 733 return 0; 734 + } 735 + 736 + static int ffs_aio_cancel(struct kiocb *kiocb) 737 + { 738 + struct ffs_io_data *io_data = kiocb->private; 739 + struct ffs_epfile *epfile = kiocb->ki_filp->private_data; 740 + int value; 741 + 742 + ENTER(); 743 + 744 + spin_lock_irq(&epfile->ffs->eps_lock); 745 + 746 + if (likely(io_data && io_data->ep && io_data->req)) 747 + value = usb_ep_dequeue(io_data->ep, io_data->req); 748 + else 749 + value = -EINVAL; 750 + 751 + spin_unlock_irq(&epfile->ffs->eps_lock); 752 + 753 + return value; 754 + } 755 + 756 + static ssize_t ffs_epfile_aio_write(struct kiocb *kiocb, 757 + const struct iovec *iovec, 758 + unsigned long nr_segs, loff_t loff) 759 + { 760 + struct 
ffs_io_data *io_data; 761 + 762 + ENTER(); 763 + 764 + io_data = kmalloc(sizeof(*io_data), GFP_KERNEL); 765 + if (unlikely(!io_data)) 766 + return -ENOMEM; 767 + 768 + io_data->aio = true; 769 + io_data->read = false; 770 + io_data->kiocb = kiocb; 771 + io_data->iovec = iovec; 772 + io_data->nr_segs = nr_segs; 773 + io_data->len = kiocb->ki_nbytes; 774 + io_data->mm = current->mm; 775 + 776 + kiocb->private = io_data; 777 + 778 + kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 779 + 780 + return ffs_epfile_io(kiocb->ki_filp, io_data); 781 + } 782 + 783 + static ssize_t ffs_epfile_aio_read(struct kiocb *kiocb, 784 + const struct iovec *iovec, 785 + unsigned long nr_segs, loff_t loff) 786 + { 787 + struct ffs_io_data *io_data; 788 + struct iovec *iovec_copy; 789 + 790 + ENTER(); 791 + 792 + iovec_copy = kmalloc_array(nr_segs, sizeof(*iovec_copy), GFP_KERNEL); 793 + if (unlikely(!iovec_copy)) 794 + return -ENOMEM; 795 + 796 + memcpy(iovec_copy, iovec, sizeof(struct iovec)*nr_segs); 797 + 798 + io_data = kmalloc(sizeof(*io_data), GFP_KERNEL); 799 + if (unlikely(!io_data)) { 800 + kfree(iovec_copy); 801 + return -ENOMEM; 802 + } 803 + 804 + io_data->aio = true; 805 + io_data->read = true; 806 + io_data->kiocb = kiocb; 807 + io_data->iovec = iovec_copy; 808 + io_data->nr_segs = nr_segs; 809 + io_data->len = kiocb->ki_nbytes; 810 + io_data->mm = current->mm; 811 + 812 + kiocb->private = io_data; 813 + 814 + kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 815 + 816 + return ffs_epfile_io(kiocb->ki_filp, io_data); 918 817 } 919 818 920 819 static int ··· 1056 789 .open = ffs_epfile_open, 1057 790 .write = ffs_epfile_write, 1058 791 .read = ffs_epfile_read, 792 + .aio_write = ffs_epfile_aio_write, 793 + .aio_read = ffs_epfile_aio_read, 1059 794 .release = ffs_epfile_release, 1060 795 .unlocked_ioctl = ffs_epfile_ioctl, 1061 796 }; ··· 1441 1172 if (ffs->epfiles) 1442 1173 ffs_epfiles_destroy(ffs->epfiles, ffs->eps_count); 1443 1174 1444 - kfree(ffs->raw_descs); 1175 + 
kfree(ffs->raw_descs_data); 1445 1176 kfree(ffs->raw_strings); 1446 1177 kfree(ffs->stringtabs); 1447 1178 } ··· 1453 1184 ffs_data_clear(ffs); 1454 1185 1455 1186 ffs->epfiles = NULL; 1187 + ffs->raw_descs_data = NULL; 1456 1188 ffs->raw_descs = NULL; 1457 1189 ffs->raw_strings = NULL; 1458 1190 ffs->stringtabs = NULL; 1459 1191 1460 1192 ffs->raw_descs_length = 0; 1461 - ffs->raw_fs_descs_length = 0; 1462 1193 ffs->fs_descs_count = 0; 1463 1194 ffs->hs_descs_count = 0; 1195 + ffs->ss_descs_count = 0; 1464 1196 1465 1197 ffs->strings_count = 0; 1466 1198 ffs->interfaces_count = 0; ··· 1604 1334 spin_lock_irqsave(&func->ffs->eps_lock, flags); 1605 1335 do { 1606 1336 struct usb_endpoint_descriptor *ds; 1607 - ds = ep->descs[ep->descs[1] ? 1 : 0]; 1337 + int desc_idx; 1338 + 1339 + if (ffs->gadget->speed == USB_SPEED_SUPER) 1340 + desc_idx = 2; 1341 + else if (ffs->gadget->speed == USB_SPEED_HIGH) 1342 + desc_idx = 1; 1343 + else 1344 + desc_idx = 0; 1345 + 1346 + /* fall-back to lower speed if desc missing for current speed */ 1347 + do { 1348 + ds = ep->descs[desc_idx]; 1349 + } while (!ds && --desc_idx >= 0); 1350 + 1351 + if (!ds) { 1352 + ret = -EINVAL; 1353 + break; 1354 + } 1608 1355 1609 1356 ep->ep->driver_data = ep; 1610 1357 ep->ep->desc = ds; ··· 1756 1469 } 1757 1470 break; 1758 1471 1472 + case USB_DT_SS_ENDPOINT_COMP: 1473 + pr_vdebug("EP SS companion descriptor\n"); 1474 + if (length != sizeof(struct usb_ss_ep_comp_descriptor)) 1475 + goto inv_length; 1476 + break; 1477 + 1759 1478 case USB_DT_OTHER_SPEED_CONFIG: 1760 1479 case USB_DT_INTERFACE_POWER: 1761 1480 case USB_DT_DEBUG: ··· 1872 1579 static int __ffs_data_got_descs(struct ffs_data *ffs, 1873 1580 char *const _data, size_t len) 1874 1581 { 1875 - unsigned fs_count, hs_count; 1876 - int fs_len, ret = -EINVAL; 1877 - char *data = _data; 1582 + char *data = _data, *raw_descs; 1583 + unsigned counts[3], flags; 1584 + int ret = -EINVAL, i; 1878 1585 1879 1586 ENTER(); 1880 1587 1881 - if 
-	    (unlikely(get_unaligned_le32(data) != FUNCTIONFS_DESCRIPTORS_MAGIC ||
-	     get_unaligned_le32(data + 4) != len))
+	if (get_unaligned_le32(data + 4) != len)
 		goto error;
-	fs_count = get_unaligned_le32(data + 8);
-	hs_count = get_unaligned_le32(data + 12);
 
-	if (!fs_count && !hs_count)
-		goto einval;
-
-	data += 16;
-	len -= 16;
-
-	if (likely(fs_count)) {
-		fs_len = ffs_do_descs(fs_count, data, len,
-				      __ffs_data_do_entity, ffs);
-		if (unlikely(fs_len < 0)) {
-			ret = fs_len;
+	switch (get_unaligned_le32(data)) {
+	case FUNCTIONFS_DESCRIPTORS_MAGIC:
+		flags = FUNCTIONFS_HAS_FS_DESC | FUNCTIONFS_HAS_HS_DESC;
+		data += 8;
+		len -= 8;
+		break;
+	case FUNCTIONFS_DESCRIPTORS_MAGIC_V2:
+		flags = get_unaligned_le32(data + 8);
+		if (flags & ~(FUNCTIONFS_HAS_FS_DESC |
+			      FUNCTIONFS_HAS_HS_DESC |
+			      FUNCTIONFS_HAS_SS_DESC)) {
+			ret = -ENOSYS;
 			goto error;
 		}
-
-		data += fs_len;
-		len -= fs_len;
-	} else {
-		fs_len = 0;
+		data += 12;
+		len -= 12;
+		break;
+	default:
+		goto error;
 	}
 
-	if (likely(hs_count)) {
-		ret = ffs_do_descs(hs_count, data, len,
-				   __ffs_data_do_entity, ffs);
-		if (unlikely(ret < 0))
+	/* Read fs_count, hs_count and ss_count (if present) */
+	for (i = 0; i < 3; ++i) {
+		if (!(flags & (1 << i))) {
+			counts[i] = 0;
+		} else if (len < 4) {
 			goto error;
-	} else {
-		ret = 0;
+		} else {
+			counts[i] = get_unaligned_le32(data);
+			data += 4;
+			len -= 4;
+		}
 	}
 
-	if (unlikely(len != ret))
-		goto einval;
+	/* Read descriptors */
+	raw_descs = data;
+	for (i = 0; i < 3; ++i) {
+		if (!counts[i])
+			continue;
+		ret = ffs_do_descs(counts[i], data, len,
+				   __ffs_data_do_entity, ffs);
+		if (ret < 0)
+			goto error;
+		data += ret;
+		len -= ret;
+	}
 
-	ffs->raw_fs_descs_length = fs_len;
-	ffs->raw_descs_length = fs_len + ret;
-	ffs->raw_descs = _data;
-	ffs->fs_descs_count = fs_count;
-	ffs->hs_descs_count = hs_count;
+	if (raw_descs == data || len) {
+		ret = -EINVAL;
+		goto error;
+	}
+
+	ffs->raw_descs_data = _data;
+	ffs->raw_descs = raw_descs;
+	ffs->raw_descs_length = data - raw_descs;
+	ffs->fs_descs_count = counts[0];
+	ffs->hs_descs_count = counts[1];
+	ffs->ss_descs_count = counts[2];
 
 	return 0;
 
-einval:
-	ret = -EINVAL;
 error:
 	kfree(_data);
 	return ret;
···
 	 * the source does nothing.
 	 */
 	if (ffs->setup_state == FFS_SETUP_PENDING)
-		ffs->setup_state = FFS_SETUP_CANCELED;
+		ffs->setup_state = FFS_SETUP_CANCELLED;
 
 	switch (type) {
 	case FUNCTIONFS_RESUME:
···
 	struct usb_endpoint_descriptor *ds = (void *)desc;
 	struct ffs_function *func = priv;
 	struct ffs_ep *ffs_ep;
-
-	/*
-	 * If hs_descriptors is not NULL then we are reading hs
-	 * descriptors now
-	 */
-	const int isHS = func->function.hs_descriptors != NULL;
-	unsigned idx;
+	unsigned ep_desc_id, idx;
+	static const char *speed_names[] = { "full", "high", "super" };
 
 	if (type != FFS_DESCRIPTOR)
 		return 0;
 
-	if (isHS)
+	/*
+	 * If ss_descriptors is not NULL, we are reading super speed
+	 * descriptors; if hs_descriptors is not NULL, we are reading high
+	 * speed descriptors; otherwise, we are reading full speed
+	 * descriptors.
+	 */
+	if (func->function.ss_descriptors) {
+		ep_desc_id = 2;
+		func->function.ss_descriptors[(long)valuep] = desc;
+	} else if (func->function.hs_descriptors) {
+		ep_desc_id = 1;
 		func->function.hs_descriptors[(long)valuep] = desc;
-	else
+	} else {
+		ep_desc_id = 0;
 		func->function.fs_descriptors[(long)valuep] = desc;
+	}
 
 	if (!desc || desc->bDescriptorType != USB_DT_ENDPOINT)
 		return 0;
···
 	idx = (ds->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK) - 1;
 	ffs_ep = func->eps + idx;
 
-	if (unlikely(ffs_ep->descs[isHS])) {
-		pr_vdebug("two %sspeed descriptors for EP %d\n",
-			  isHS ? "high" : "full",
+	if (unlikely(ffs_ep->descs[ep_desc_id])) {
+		pr_err("two %sspeed descriptors for EP %d\n",
+			  speed_names[ep_desc_id],
 			  ds->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
 		return -EINVAL;
 	}
-	ffs_ep->descs[isHS] = ds;
+	ffs_ep->descs[ep_desc_id] = ds;
 
 	ffs_dump_mem(": Original ep desc", ds, ds->bLength);
 	if (ffs_ep->ep) {
···
 	const int full = !!func->ffs->fs_descs_count;
 	const int high = gadget_is_dualspeed(func->gadget) &&
 		func->ffs->hs_descs_count;
+	const int super = gadget_is_superspeed(func->gadget) &&
+		func->ffs->ss_descs_count;
 
-	int ret;
+	int fs_len, hs_len, ret;
 
 	/* Make it a single chunk, less management later on */
 	vla_group(d);
···
 		full ? ffs->fs_descs_count + 1 : 0);
 	vla_item_with_sz(d, struct usb_descriptor_header *, hs_descs,
 		high ? ffs->hs_descs_count + 1 : 0);
+	vla_item_with_sz(d, struct usb_descriptor_header *, ss_descs,
+		super ? ffs->ss_descs_count + 1 : 0);
 	vla_item_with_sz(d, short, inums, ffs->interfaces_count);
-	vla_item_with_sz(d, char, raw_descs,
-		high ? ffs->raw_descs_length : ffs->raw_fs_descs_length);
+	vla_item_with_sz(d, char, raw_descs, ffs->raw_descs_length);
 	char *vlabuf;
 
 	ENTER();
 
-	/* Only high speed but not supported by gadget? */
-	if (unlikely(!(full | high)))
+	/* Has descriptors only for speeds gadget does not support */
+	if (unlikely(!(full | high | super)))
 		return -ENOTSUPP;
 
 	/* Allocate a single chunk, less management later on */
···
 
 	/* Zero */
 	memset(vla_ptr(vlabuf, d, eps), 0, d_eps__sz);
-	memcpy(vla_ptr(vlabuf, d, raw_descs), ffs->raw_descs + 16,
-	       d_raw_descs__sz);
+	/* Copy descriptors */
+	memcpy(vla_ptr(vlabuf, d, raw_descs), ffs->raw_descs,
+	       ffs->raw_descs_length);
+
 	memset(vla_ptr(vlabuf, d, inums), 0xff, d_inums__sz);
 	for (ret = ffs->eps_count; ret; --ret) {
 		struct ffs_ep *ptr;
···
 	 */
 	if (likely(full)) {
 		func->function.fs_descriptors = vla_ptr(vlabuf, d, fs_descs);
-		ret = ffs_do_descs(ffs->fs_descs_count,
-				   vla_ptr(vlabuf, d, raw_descs),
-				   d_raw_descs__sz,
-				   __ffs_func_bind_do_descs, func);
-		if (unlikely(ret < 0))
+		fs_len = ffs_do_descs(ffs->fs_descs_count,
+				      vla_ptr(vlabuf, d, raw_descs),
+				      d_raw_descs__sz,
+				      __ffs_func_bind_do_descs, func);
+		if (unlikely(fs_len < 0)) {
+			ret = fs_len;
 			goto error;
+		}
 	} else {
-		ret = 0;
+		fs_len = 0;
 	}
 
 	if (likely(high)) {
 		func->function.hs_descriptors = vla_ptr(vlabuf, d, hs_descs);
-		ret = ffs_do_descs(ffs->hs_descs_count,
-				   vla_ptr(vlabuf, d, raw_descs) + ret,
-				   d_raw_descs__sz - ret,
-				   __ffs_func_bind_do_descs, func);
+		hs_len = ffs_do_descs(ffs->hs_descs_count,
+				      vla_ptr(vlabuf, d, raw_descs) + fs_len,
+				      d_raw_descs__sz - fs_len,
+				      __ffs_func_bind_do_descs, func);
+		if (unlikely(hs_len < 0)) {
+			ret = hs_len;
+			goto error;
+		}
+	} else {
+		hs_len = 0;
+	}
+
+	if (likely(super)) {
+		func->function.ss_descriptors = vla_ptr(vlabuf, d, ss_descs);
+		ret = ffs_do_descs(ffs->ss_descs_count,
+				   vla_ptr(vlabuf, d, raw_descs) + fs_len + hs_len,
+				   d_raw_descs__sz - fs_len - hs_len,
+				   __ffs_func_bind_do_descs, func);
 		if (unlikely(ret < 0))
 			goto error;
 	}
···
 	 * now.
 	 */
 	ret = ffs_do_descs(ffs->fs_descs_count +
-			   (high ? ffs->hs_descs_count : 0),
+			   (high ? ffs->hs_descs_count : 0) +
+			   (super ? ffs->ss_descs_count : 0),
 			   vla_ptr(vlabuf, d, raw_descs), d_raw_descs__sz,
 			   __ffs_func_bind_do_nums, func);
 	if (unlikely(ret < 0))
···
 
 static LIST_HEAD(ffs_devices);
 
-static struct ffs_dev *_ffs_find_dev(const char *name)
+static struct ffs_dev *_ffs_do_find_dev(const char *name)
 {
 	struct ffs_dev *dev;
 
···
 /*
  * ffs_lock must be taken by the caller of this function
  */
-static struct ffs_dev *ffs_get_single_dev(void)
+static struct ffs_dev *_ffs_get_single_dev(void)
 {
 	struct ffs_dev *dev;
 
···
 /*
  * ffs_lock must be taken by the caller of this function
  */
-static struct ffs_dev *ffs_find_dev(const char *name)
+static struct ffs_dev *_ffs_find_dev(const char *name)
 {
 	struct ffs_dev *dev;
 
-	dev = ffs_get_single_dev();
+	dev = _ffs_get_single_dev();
 	if (dev)
 		return dev;
 
-	return _ffs_find_dev(name);
+	return _ffs_do_find_dev(name);
 }
 
 /* Configfs support *********************************************************/
···
 
 	opts = to_f_fs_opts(f);
 	ffs_dev_lock();
-	ffs_free_dev(opts->dev);
+	_ffs_free_dev(opts->dev);
 	ffs_dev_unlock();
 	kfree(opts);
 }
···
 	opts->func_inst.set_inst_name = ffs_set_inst_name;
 	opts->func_inst.free_func_inst = ffs_free_inst;
 	ffs_dev_lock();
-	dev = ffs_alloc_dev();
+	dev = _ffs_alloc_dev();
 	ffs_dev_unlock();
 	if (IS_ERR(dev)) {
 		kfree(opts);
···
 	 */
 	func->function.fs_descriptors = NULL;
 	func->function.hs_descriptors = NULL;
+	func->function.ss_descriptors = NULL;
 	func->interfaces_nums = NULL;
 
 	ffs_event_add(ffs, FUNCTIONFS_UNBIND);
···
 /*
  * ffs_lock must be taken by the caller of this function
  */
-struct ffs_dev *ffs_alloc_dev(void)
+static struct ffs_dev *_ffs_alloc_dev(void)
 {
 	struct ffs_dev *dev;
 	int ret;
 
-	if (ffs_get_single_dev())
+	if (_ffs_get_single_dev())
 		return ERR_PTR(-EBUSY);
 
 	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
···
 {
 	struct ffs_dev *existing;
 
-	existing = _ffs_find_dev(name);
+	existing = _ffs_do_find_dev(name);
 	if (existing)
 		return -EBUSY;
-	
+
 	dev->name = name;
 
 	return 0;
···
 /*
  * ffs_lock must be taken by the caller of this function
  */
-void ffs_free_dev(struct ffs_dev *dev)
+static void _ffs_free_dev(struct ffs_dev *dev)
 {
 	list_del(&dev->entry);
 	if (dev->name_allocated)
···
 	ENTER();
 	ffs_dev_lock();
 
-	ffs_dev = ffs_find_dev(dev_name);
+	ffs_dev = _ffs_find_dev(dev_name);
 	if (!ffs_dev)
 		ffs_dev = ERR_PTR(-ENODEV);
 	else if (ffs_dev->mounted)
···
 	ffs_dev_lock();
 
 	ffs_dev = ffs_data->private_data;
-	if (ffs_dev)
+	if (ffs_dev) {
 		ffs_dev->mounted = false;
-
-	if (ffs_dev->ffs_release_dev_callback)
-		ffs_dev->ffs_release_dev_callback(ffs_dev);
+
+		if (ffs_dev->ffs_release_dev_callback)
+			ffs_dev->ffs_release_dev_callback(ffs_dev);
+	}
 
 	ffs_dev_unlock();
 }
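The f_fs.c hunks above replace the fixed 16-byte descriptor header with a v2 format: magic, total length, a flags word, then one 32-bit count per flag bit set (full, high, super speed). A small userspace sketch of that parse, using only the structure visible in the patch — the helper name and the magic value `3` are assumptions, not taken from the diff:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Flag bits, mirroring the FUNCTIONFS_HAS_{FS,HS,SS}_DESC layout above. */
#define HAS_FS_DESC (1 << 0)
#define HAS_HS_DESC (1 << 1)
#define HAS_SS_DESC (1 << 2)

#define MAGIC_V2 3  /* assumed value of FUNCTIONFS_DESCRIPTORS_MAGIC_V2 */

static uint32_t get_le32(const uint8_t *p)
{
    return p[0] | (p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Parse a v2 header: magic, length, flags, then one count per flag set. */
static int parse_v2_header(const uint8_t *data, size_t len, uint32_t counts[3])
{
    uint32_t flags;
    int i;

    if (len < 12 || get_le32(data) != MAGIC_V2 || get_le32(data + 4) != len)
        return -1;
    flags = get_le32(data + 8);
    if (flags & ~(HAS_FS_DESC | HAS_HS_DESC | HAS_SS_DESC))
        return -1;  /* kernel answers -ENOSYS for unknown flags */
    data += 12;
    len -= 12;

    /* Same loop shape as the patch: absent speeds get a zero count. */
    for (i = 0; i < 3; ++i) {
        if (!(flags & (1 << i))) {
            counts[i] = 0;
        } else if (len < 4) {
            return -1;
        } else {
            counts[i] = get_le32(data);
            data += 4;
            len -= 4;
        }
    }
    return 0;
}
```

The remaining bytes after the counts are the concatenated descriptor blocks, which the kernel walks with `ffs_do_descs()` once per non-zero count.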
+2 -8
drivers/usb/gadget/gr_udc.c
···
 	const char *name = "gr_udc_state";
 
 	dev->dfs_root = debugfs_create_dir(dev_name(dev->dev), NULL);
-	if (IS_ERR(dev->dfs_root)) {
-		dev_err(dev->dev, "Failed to create debugfs directory\n");
-		return;
-	}
-	dev->dfs_state = debugfs_create_file(name, 0444, dev->dfs_root,
-					     dev, &gr_dfs_fops);
-	if (IS_ERR(dev->dfs_state))
-		dev_err(dev->dev, "Failed to create debugfs file %s\n", name);
+	dev->dfs_state = debugfs_create_file(name, 0444, dev->dfs_root, dev,
+					     &gr_dfs_fops);
 }
 
 static void gr_dfs_delete(struct gr_udc *dev)
+7 -1
drivers/usb/gadget/printer.c
···
 		req->length = USB_BUFSIZE;
 		req->complete = rx_complete;
 
+		/* here, we unlock, and only unlock, to avoid deadlock. */
+		spin_unlock(&dev->lock);
 		error = usb_ep_queue(dev->out_ep, req, GFP_ATOMIC);
+		spin_lock(&dev->lock);
 		if (error) {
 			DBG(dev, "rx submit --> %d\n", error);
 			list_add(&req->list, &dev->rx_reqs);
 			break;
-		} else {
+		}
+		/* if the req is empty, then add it into dev->rx_reqs_active. */
+		else if (list_empty(&req->list)) {
 			list_add(&req->list, &dev->rx_reqs_active);
 		}
 	}
···
 			NULL, "g_printer");
 	if (IS_ERR(dev->pdev)) {
 		ERROR(dev, "Failed to create device: g_printer\n");
+		status = PTR_ERR(dev->pdev);
 		goto fail;
 	}
 
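The printer fix above drops `dev->lock` only across `usb_ep_queue()`, because the completion handler can run synchronously from inside that call and itself takes `dev->lock`. A toy userspace model of the hazard — all names are illustrative, none come from the driver:

```c
#include <assert.h>

/* Stand-ins for spin_lock(&dev->lock); a real spinlock would simply hang,
 * so this model records the self-deadlock instead. */
static int lock_held;
static int deadlock;
static int completed;

static void lock(void)   { if (lock_held) deadlock = 1; else lock_held = 1; }
static void unlock(void) { lock_held = 0; }

static void rx_complete(void)
{
    lock();         /* completion handler takes dev->lock */
    completed = 1;
    unlock();
}

static int ep_queue(void (*complete)(void))
{
    complete();     /* may run synchronously, as on some UDC controllers */
    return 0;
}

static int submit_rx(void)
{
    int error;

    lock();
    /* ... prepare the request under the lock ... */
    unlock();       /* drop, and only drop, across the queue call */
    error = ep_queue(rx_complete);
    lock();
    /* ... requeue on error, else move to the active list ... */
    unlock();
    return error;
}
```

Holding the lock across `ep_queue()` would set `deadlock` here; in the real driver it would wedge the CPU spinning on its own lock.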
+101 -42
drivers/usb/gadget/s3c-hsotg.c
···
 	to_write = DIV_ROUND_UP(to_write, 4);
 	data = hs_req->req.buf + buf_pos;
 
-	writesl(hsotg->regs + EPFIFO(hs_ep->index), data, to_write);
+	iowrite32_rep(hsotg->regs + EPFIFO(hs_ep->index), data, to_write);
 
 	return (to_write >= can_write) ? -ENOSPC : 0;
 }
···
 		ureq->length, ureq->actual);
 	if (0)
 		dev_dbg(hsotg->dev,
-			"REQ buf %p len %d dma 0x%08x noi=%d zp=%d snok=%d\n",
-			ureq->buf, length, ureq->dma,
+			"REQ buf %p len %d dma 0x%pad noi=%d zp=%d snok=%d\n",
+			ureq->buf, length, &ureq->dma,
 			ureq->no_interrupt, ureq->zero, ureq->short_not_ok);
 
 	maxreq = get_ep_limit(hs_ep);
···
 		dma_reg = dir_in ? DIEPDMA(index) : DOEPDMA(index);
 		writel(ureq->dma, hsotg->regs + dma_reg);
 
-		dev_dbg(hsotg->dev, "%s: 0x%08x => 0x%08x\n",
-			__func__, ureq->dma, dma_reg);
+		dev_dbg(hsotg->dev, "%s: 0x%pad => 0x%08x\n",
+			__func__, &ureq->dma, dma_reg);
 	}
 
 	ctrl |= DxEPCTL_EPEna;	/* ensure ep enabled */
···
 static void s3c_hsotg_disconnect(struct s3c_hsotg *hsotg);
 
 /**
+ * s3c_hsotg_stall_ep0 - stall ep0
+ * @hsotg: The device state
+ *
+ * Set stall for ep0 as response for setup request.
+ */
+static void s3c_hsotg_stall_ep0(struct s3c_hsotg *hsotg) {
+	struct s3c_hsotg_ep *ep0 = &hsotg->eps[0];
+	u32 reg;
+	u32 ctrl;
+
+	dev_dbg(hsotg->dev, "ep0 stall (dir=%d)\n", ep0->dir_in);
+	reg = (ep0->dir_in) ? DIEPCTL0 : DOEPCTL0;
+
+	/*
+	 * DxEPCTL_Stall will be cleared by EP once it has
+	 * taken effect, so no need to clear later.
+	 */
+
+	ctrl = readl(hsotg->regs + reg);
+	ctrl |= DxEPCTL_Stall;
+	ctrl |= DxEPCTL_CNAK;
+	writel(ctrl, hsotg->regs + reg);
+
+	dev_dbg(hsotg->dev,
+		"written DxEPCTL=0x%08x to %08x (DxEPCTL=0x%08x)\n",
+		ctrl, reg, readl(hsotg->regs + reg));
+
+	/*
+	 * complete won't be called, so we enqueue
+	 * setup request here
+	 */
+	s3c_hsotg_enqueue_setup(hsotg);
+}
+
+/**
  * s3c_hsotg_process_control - process a control request
  * @hsotg: The device state
  * @ctrl: The control request received
···
 	 * so respond with a STALL for the status stage to indicate failure.
 	 */
 
-	if (ret < 0) {
-		u32 reg;
-		u32 ctrl;
-
-		dev_dbg(hsotg->dev, "ep0 stall (dir=%d)\n", ep0->dir_in);
-		reg = (ep0->dir_in) ? DIEPCTL0 : DOEPCTL0;
-
-		/*
-		 * DxEPCTL_Stall will be cleared by EP once it has
-		 * taken effect, so no need to clear later.
-		 */
-
-		ctrl = readl(hsotg->regs + reg);
-		ctrl |= DxEPCTL_Stall;
-		ctrl |= DxEPCTL_CNAK;
-		writel(ctrl, hsotg->regs + reg);
-
-		dev_dbg(hsotg->dev,
-			"written DxEPCTL=0x%08x to %08x (DxEPCTL=0x%08x)\n",
-			ctrl, reg, readl(hsotg->regs + reg));
-
-		/*
-		 * don't believe we need to anything more to get the EP
-		 * to reply with a STALL packet
-		 */
-
-		/*
-		 * complete won't be called, so we enqueue
-		 * setup request here
-		 */
-		s3c_hsotg_enqueue_setup(hsotg);
-	}
+	if (ret < 0)
+		s3c_hsotg_stall_ep0(hsotg);
 }
 
 /**
···
 	 * note, we might over-write the buffer end by 3 bytes depending on
 	 * alignment of the data.
 	 */
-	readsl(fifo, hs_req->req.buf + read_ptr, to_read);
+	ioread32_rep(fifo, hs_req->req.buf + read_ptr, to_read);
 }
 
 /**
···
 
 	dev_info(hs->dev, "%s(ep %p %s, %d)\n", __func__, ep, ep->name, value);
 
+	if (index == 0) {
+		if (value)
+			s3c_hsotg_stall_ep0(hs);
+		else
+			dev_warn(hs->dev,
+				 "%s: can't clear halt on ep0\n", __func__);
+		return 0;
+	}
+
 	/* write both IN and OUT control registers */
 
 	epreg = DIEPCTL(index);
···
 	return 0;
 }
 
-#if 1
-#define s3c_hsotg_suspend NULL
-#define s3c_hsotg_resume NULL
-#endif
+static int s3c_hsotg_suspend(struct platform_device *pdev, pm_message_t state)
+{
+	struct s3c_hsotg *hsotg = platform_get_drvdata(pdev);
+	unsigned long flags;
+	int ret = 0;
+
+	if (hsotg->driver)
+		dev_info(hsotg->dev, "suspending usb gadget %s\n",
+			 hsotg->driver->driver.name);
+
+	spin_lock_irqsave(&hsotg->lock, flags);
+	s3c_hsotg_disconnect(hsotg);
+	s3c_hsotg_phy_disable(hsotg);
+	hsotg->gadget.speed = USB_SPEED_UNKNOWN;
+	spin_unlock_irqrestore(&hsotg->lock, flags);
+
+	if (hsotg->driver) {
+		int ep;
+		for (ep = 0; ep < hsotg->num_of_eps; ep++)
+			s3c_hsotg_ep_disable(&hsotg->eps[ep].ep);
+
+		ret = regulator_bulk_disable(ARRAY_SIZE(hsotg->supplies),
+					     hsotg->supplies);
+	}
+
+	return ret;
+}
+
+static int s3c_hsotg_resume(struct platform_device *pdev)
+{
+	struct s3c_hsotg *hsotg = platform_get_drvdata(pdev);
+	unsigned long flags;
+	int ret = 0;
+
+	if (hsotg->driver) {
+		dev_info(hsotg->dev, "resuming usb gadget %s\n",
+			 hsotg->driver->driver.name);
+		ret = regulator_bulk_enable(ARRAY_SIZE(hsotg->supplies),
+					    hsotg->supplies);
+	}
+
+	spin_lock_irqsave(&hsotg->lock, flags);
+	hsotg->last_rst = jiffies;
+	s3c_hsotg_phy_enable(hsotg);
+	s3c_hsotg_core_init(hsotg);
+	spin_unlock_irqrestore(&hsotg->lock, flags);
+
+	return ret;
+}
 
 #ifdef CONFIG_OF
 static const struct of_device_id s3c_hsotg_of_ids[] = {
-1
drivers/usb/gadget/s3c-hsudc.c
···
 
 	return 0;
 err_add_udc:
-err_add_device:
 	clk_disable(hsudc->uclk);
 err_res:
 	if (!IS_ERR_OR_NULL(hsudc->transceiver))
+66 -35
drivers/usb/gadget/u_ether.c
···
 
 #define UETH__VERSION	"29-May-2008"
 
+#define GETHER_NAPI_WEIGHT	32
+
 struct eth_dev {
 	/* lock is held while accessing port_usb
 	 */
···
 					struct sk_buff_head *list);
 
 	struct work_struct	work;
+	struct napi_struct	rx_napi;
 
 	unsigned long		todo;
 #define	WORK_RX_MEMORY		0
···
 		DBG(dev, "rx submit --> %d\n", retval);
 		if (skb)
 			dev_kfree_skb_any(skb);
-		spin_lock_irqsave(&dev->req_lock, flags);
-		list_add(&req->list, &dev->rx_reqs);
-		spin_unlock_irqrestore(&dev->req_lock, flags);
 	}
 	return retval;
 }
 
 static void rx_complete(struct usb_ep *ep, struct usb_request *req)
 {
-	struct sk_buff	*skb = req->context, *skb2;
+	struct sk_buff	*skb = req->context;
 	struct eth_dev	*dev = ep->driver_data;
 	int		status = req->status;
+	bool		rx_queue = 0;
 
 	switch (status) {
 
···
 		} else {
 			skb_queue_tail(&dev->rx_frames, skb);
 		}
-		skb = NULL;
-
-		skb2 = skb_dequeue(&dev->rx_frames);
-		while (skb2) {
-			if (status < 0
-					|| ETH_HLEN > skb2->len
-					|| skb2->len > VLAN_ETH_FRAME_LEN) {
-				dev->net->stats.rx_errors++;
-				dev->net->stats.rx_length_errors++;
-				DBG(dev, "rx length %d\n", skb2->len);
-				dev_kfree_skb_any(skb2);
-				goto next_frame;
-			}
-			skb2->protocol = eth_type_trans(skb2, dev->net);
-			dev->net->stats.rx_packets++;
-			dev->net->stats.rx_bytes += skb2->len;
-
-			/* no buffer copies needed, unless hardware can't
-			 * use skb buffers.
-			 */
-			status = netif_rx(skb2);
-next_frame:
-			skb2 = skb_dequeue(&dev->rx_frames);
-		}
+		if (!status)
+			rx_queue = 1;
 		break;
 
 	/* software-driven interface shutdown */
···
 		/* FALLTHROUGH */
 
 	default:
+		rx_queue = 1;
+		dev_kfree_skb_any(skb);
 		dev->net->stats.rx_errors++;
 		DBG(dev, "rx status %d\n", status);
 		break;
 	}
 
-	if (skb)
-		dev_kfree_skb_any(skb);
-	if (!netif_running(dev->net)) {
 clean:
-		spin_lock(&dev->req_lock);
-		list_add(&req->list, &dev->rx_reqs);
-		spin_unlock(&dev->req_lock);
-		req = NULL;
-	}
-	if (req)
-		rx_submit(dev, req, GFP_ATOMIC);
+	spin_lock(&dev->req_lock);
+	list_add(&req->list, &dev->rx_reqs);
+	spin_unlock(&dev->req_lock);
+
+	if (rx_queue && likely(napi_schedule_prep(&dev->rx_napi)))
+		__napi_schedule(&dev->rx_napi);
 }
 
 static int prealloc(struct list_head *list, struct usb_ep *ep, unsigned n)
···
 {
 	struct usb_request	*req;
 	unsigned long		flags;
+	int			rx_counts = 0;
 
 	/* fill unused rxq slots with some skb */
 	spin_lock_irqsave(&dev->req_lock, flags);
 	while (!list_empty(&dev->rx_reqs)) {
+
+		if (++rx_counts > qlen(dev->gadget, dev->qmult))
+			break;
+
 		req = container_of(dev->rx_reqs.next,
 				struct usb_request, list);
 		list_del_init(&req->list);
 		spin_unlock_irqrestore(&dev->req_lock, flags);
 
 		if (rx_submit(dev, req, gfp_flags) < 0) {
+			spin_lock_irqsave(&dev->req_lock, flags);
+			list_add(&req->list, &dev->rx_reqs);
+			spin_unlock_irqrestore(&dev->req_lock, flags);
 			defer_kevent(dev, WORK_RX_MEMORY);
 			return;
 		}
···
 		spin_lock_irqsave(&dev->req_lock, flags);
 	}
 	spin_unlock_irqrestore(&dev->req_lock, flags);
+}
+
+static int gether_poll(struct napi_struct *napi, int budget)
+{
+	struct eth_dev	*dev = container_of(napi, struct eth_dev, rx_napi);
+	struct sk_buff	*skb;
+	unsigned int	work_done = 0;
+	int		status = 0;
+
+	while ((skb = skb_dequeue(&dev->rx_frames))) {
+		if (status < 0
+				|| ETH_HLEN > skb->len
+				|| skb->len > VLAN_ETH_FRAME_LEN) {
+			dev->net->stats.rx_errors++;
+			dev->net->stats.rx_length_errors++;
+			DBG(dev, "rx length %d\n", skb->len);
+			dev_kfree_skb_any(skb);
+			continue;
+		}
+		skb->protocol = eth_type_trans(skb, dev->net);
+		dev->net->stats.rx_packets++;
+		dev->net->stats.rx_bytes += skb->len;
+
+		status = netif_rx_ni(skb);
+	}
+
+	if (netif_running(dev->net)) {
+		rx_fill(dev, GFP_KERNEL);
+		work_done++;
+	}
+
+	if (work_done < budget)
+		napi_complete(&dev->rx_napi);
+
+	return work_done;
 }
 
 static void eth_work(struct work_struct *work)
···
 	/* and open the tx floodgates */
 	atomic_set(&dev->tx_qlen, 0);
 	netif_wake_queue(dev->net);
+	napi_enable(&dev->rx_napi);
 }
 
 static int eth_open(struct net_device *net)
···
 	unsigned long	flags;
 
 	VDBG(dev, "%s\n", __func__);
+	napi_disable(&dev->rx_napi);
 	netif_stop_queue(net);
 
 	DBG(dev, "stop stats: rx/tx %ld/%ld, errs %ld/%ld\n",
···
 		return ERR_PTR(-ENOMEM);
 
 	dev = netdev_priv(net);
+	netif_napi_add(net, &dev->rx_napi, gether_poll, GETHER_NAPI_WEIGHT);
 	spin_lock_init(&dev->lock);
 	spin_lock_init(&dev->req_lock);
 	INIT_WORK(&dev->work, eth_work);
···
 		return ERR_PTR(-ENOMEM);
 
 	dev = netdev_priv(net);
+	netif_napi_add(net, &dev->rx_napi, gether_poll, GETHER_NAPI_WEIGHT);
 	spin_lock_init(&dev->lock);
 	spin_lock_init(&dev->req_lock);
 	INIT_WORK(&dev->work, eth_work);
···
 {
 	struct eth_dev		*dev = link->ioport;
 	struct usb_request	*req;
+	struct sk_buff		*skb;
 
 	WARN_ON(!dev);
 	if (!dev)
···
 		spin_lock(&dev->req_lock);
 	}
 	spin_unlock(&dev->req_lock);
+
+	spin_lock(&dev->rx_frames.lock);
+	while ((skb = __skb_dequeue(&dev->rx_frames)))
+		dev_kfree_skb_any(skb);
+	spin_unlock(&dev->rx_frames.lock);
+
 	link->in_ep->driver_data = NULL;
 	link->in_ep->desc = NULL;
 
+13 -17
drivers/usb/gadget/u_fs.h
···
 	mutex_unlock(&ffs_lock);
 }
 
-struct ffs_dev *ffs_alloc_dev(void);
 int ffs_name_dev(struct ffs_dev *dev, const char *name);
 int ffs_single_dev(struct ffs_dev *dev);
-void ffs_free_dev(struct ffs_dev *dev);
 
 struct ffs_epfile;
 struct ffs_function;
···
 	 * setup.  If this state is set read/write on ep0 return
 	 * -EIDRM.  This state is only set when adding event.
 	 */
-	FFS_SETUP_CANCELED
+	FFS_SETUP_CANCELLED
 };
 
 struct ffs_data {
···
 	 */
 	struct usb_request		*ep0req;		/* P: mutex */
 	struct completion		ep0req_completion;	/* P: mutex */
-	int				ep0req_status;		/* P: mutex */
 
 	/* reference counter */
 	atomic_t			ref;
···
 
 	/*
 	 * Possible transitions:
-	 * + FFS_NO_SETUP       -> FFS_SETUP_PENDING  -- P: ev.waitq.lock
+	 * + FFS_NO_SETUP        -> FFS_SETUP_PENDING  -- P: ev.waitq.lock
 	 *               happens only in ep0 read which is P: mutex
-	 * + FFS_SETUP_PENDING  -> FFS_NO_SETUP       -- P: ev.waitq.lock
+	 * + FFS_SETUP_PENDING   -> FFS_NO_SETUP       -- P: ev.waitq.lock
 	 *               happens only in ep0 i/o  which is P: mutex
-	 * + FFS_SETUP_PENDING  -> FFS_SETUP_CANCELED -- P: ev.waitq.lock
-	 * + FFS_SETUP_CANCELED -> FFS_NO_SETUP       -- cmpxchg
+	 * + FFS_SETUP_PENDING   -> FFS_SETUP_CANCELLED -- P: ev.waitq.lock
+	 * + FFS_SETUP_CANCELLED -> FFS_NO_SETUP        -- cmpxchg
+	 *
+	 * This field should never be accessed directly and instead
+	 * ffs_setup_state_clear_cancelled function should be used.
 	 */
 	enum ffs_setup_state		setup_state;
-
-#define FFS_SETUP_STATE(ffs)					\
-	((enum ffs_setup_state)cmpxchg(&(ffs)->setup_state,	\
-				       FFS_SETUP_CANCELED, FFS_NO_SETUP))
 
 	/* Events & such. */
 	struct {
···
 
 	/* filled by __ffs_data_got_descs() */
 	/*
-	 * Real descriptors are 16 bytes after raw_descs (so you need
-	 * to skip 16 bytes (ie. ffs->raw_descs + 16) to get to the
-	 * first full speed descriptor).  raw_descs_length and
-	 * raw_fs_descs_length do not have those 16 bytes added.
+	 * raw_descs is what you kfree, real_descs points inside of raw_descs,
+	 * where full speed, high speed and super speed descriptors start.
+	 * real_descs_length is the length of all those descriptors.
 	 */
+	const void			*raw_descs_data;
 	const void			*raw_descs;
 	unsigned			raw_descs_length;
-	unsigned			raw_fs_descs_length;
 	unsigned			fs_descs_count;
 	unsigned			hs_descs_count;
+	unsigned			ss_descs_count;
 
 	unsigned short			strings_count;
 	unsigned short			interfaces_count;
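The u_fs.h hunk retires the `FFS_SETUP_STATE()` macro in favour of a helper, but the underlying trick is unchanged: a single `cmpxchg` that folds a stale CANCELLED state back to NO_SETUP and reports what the state was, without taking a lock. A userspace sketch of that semantics using C11 atomics (names modeled on, but not copied from, the kernel helper):

```c
#include <assert.h>
#include <stdatomic.h>

enum setup_state { NO_SETUP, SETUP_PENDING, SETUP_CANCELLED };

static _Atomic enum setup_state setup_state = SETUP_CANCELLED;

/* Models the cmpxchg transition documented above:
 * SETUP_CANCELLED -> NO_SETUP, returning the state observed. */
static enum setup_state clear_cancelled(void)
{
    enum setup_state expected = SETUP_CANCELLED;

    /* On success `expected` keeps the old value (SETUP_CANCELLED);
     * on failure it is overwritten with the current state. Either way
     * it is exactly what kernel cmpxchg() would return. */
    atomic_compare_exchange_strong(&setup_state, &expected, NO_SETUP);
    return expected;
}
```

Callers compare the return value against NO_SETUP to decide whether an ep0 operation raced with a cancellation, which is why the comment in the header insists the field never be read directly.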
+2
drivers/usb/musb/Kconfig
···
 config USB_MUSB_GADGET
 	bool "Gadget only mode"
 	depends on USB_GADGET=y || USB_GADGET=USB_MUSB_HDRC
+	depends on HAS_DMA
 	help
 	  Select this when you want to use MUSB in gadget mode only,
 	  thereby the host feature will be regressed.
···
 config USB_MUSB_DUAL_ROLE
 	bool "Dual Role mode"
 	depends on ((USB=y || USB=USB_MUSB_HDRC) && (USB_GADGET=y || USB_GADGET=USB_MUSB_HDRC))
+	depends on HAS_DMA
 	help
 	  This is the default mode of working of MUSB controller where
 	  both host and gadget features are enabled.
+2 -3
drivers/usb/musb/musb_core.c
···
 static irqreturn_t musb_stage0_irq(struct musb *musb, u8 int_usb,
 				u8 devctl)
 {
-	struct usb_otg *otg = musb->xceiv->otg;
 	irqreturn_t handled = IRQ_NONE;
 
 	dev_dbg(musb->controller, "<== DevCtl=%02x, int_usb=0x%x\n", devctl,
···
 			break;
 		case OTG_STATE_B_PERIPHERAL:
 			musb_g_suspend(musb);
-			musb->is_active = otg->gadget->b_hnp_enable;
+			musb->is_active = musb->g.b_hnp_enable;
 			if (musb->is_active) {
 				musb->xceiv->state = OTG_STATE_B_WAIT_ACON;
 				dev_dbg(musb->controller, "HNP: Setting timer for b_ase0_brst\n");
···
 			break;
 		case OTG_STATE_A_HOST:
 			musb->xceiv->state = OTG_STATE_A_SUSPEND;
-			musb->is_active = otg->host->b_hnp_enable;
+			musb->is_active = musb->hcd->self.b_hnp_enable;
 			break;
 		case OTG_STATE_B_HOST:
 			/* Transition to B_PERIPHERAL, see 6.8.2.6 p 44 */
+68 -1
drivers/usb/musb/musb_cppi41.c
···
 	u32 transferred;
 	u32 packet_sz;
 	struct list_head tx_check;
+	struct work_struct dma_completion;
 };
 
 #define MUSB_DMA_NUM_CHANNELS 15
···
 	return true;
 }
 
+static bool is_isoc(struct musb_hw_ep *hw_ep, bool in)
+{
+	if (in && hw_ep->in_qh) {
+		if (hw_ep->in_qh->type == USB_ENDPOINT_XFER_ISOC)
+			return true;
+	} else if (hw_ep->out_qh) {
+		if (hw_ep->out_qh->type == USB_ENDPOINT_XFER_ISOC)
+			return true;
+	}
+	return false;
+}
+
 static void cppi41_dma_callback(void *private_data);
 
 static void cppi41_trans_done(struct cppi41_dma_channel *cppi41_channel)
···
 	struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
 	struct musb *musb = hw_ep->musb;
 
-	if (!cppi41_channel->prog_len) {
+	if (!cppi41_channel->prog_len ||
+	    (cppi41_channel->channel.status == MUSB_DMA_STATUS_FREE)) {
 
 		/* done, complete */
 		cppi41_channel->channel.actual_len =
···
 			csr = musb_readw(epio, MUSB_RXCSR);
 			csr |= MUSB_RXCSR_H_REQPKT;
 			musb_writew(epio, MUSB_RXCSR, csr);
+		}
+	}
+}
+
+static void cppi_trans_done_work(struct work_struct *work)
+{
+	unsigned long flags;
+	struct cppi41_dma_channel *cppi41_channel =
+		container_of(work, struct cppi41_dma_channel, dma_completion);
+	struct cppi41_dma_controller *controller = cppi41_channel->controller;
+	struct musb *musb = controller->musb;
+	struct musb_hw_ep *hw_ep = cppi41_channel->hw_ep;
+	bool empty;
+
+	if (!cppi41_channel->is_tx && is_isoc(hw_ep, 1)) {
+		spin_lock_irqsave(&musb->lock, flags);
+		cppi41_trans_done(cppi41_channel);
+		spin_unlock_irqrestore(&musb->lock, flags);
+	} else {
+		empty = musb_is_tx_fifo_empty(hw_ep);
+		if (empty) {
+			spin_lock_irqsave(&musb->lock, flags);
+			cppi41_trans_done(cppi41_channel);
+			spin_unlock_irqrestore(&musb->lock, flags);
+		} else {
+			schedule_work(&cppi41_channel->dma_completion);
 		}
 	}
 }
···
 	    transferred < cppi41_channel->packet_sz)
 		cppi41_channel->prog_len = 0;
 
+	if (!cppi41_channel->is_tx) {
+		if (is_isoc(hw_ep, 1))
+			schedule_work(&cppi41_channel->dma_completion);
+		else
+			cppi41_trans_done(cppi41_channel);
+		goto out;
+	}
+
 	empty = musb_is_tx_fifo_empty(hw_ep);
 	if (empty) {
 		cppi41_trans_done(cppi41_channel);
···
 			cppi41_trans_done(cppi41_channel);
 			goto out;
 		}
+	}
+	if (is_isoc(hw_ep, 0)) {
+		schedule_work(&cppi41_channel->dma_completion);
+		goto out;
 	}
 	list_add_tail(&cppi41_channel->tx_check,
 			&controller->early_tx_list);
···
 				dma_addr_t dma_addr, u32 len)
 {
 	int ret;
+	struct cppi41_dma_channel *cppi41_channel = channel->private_data;
+	int hb_mult = 0;
 
 	BUG_ON(channel->status == MUSB_DMA_STATUS_UNKNOWN ||
 		channel->status == MUSB_DMA_STATUS_BUSY);
 
+	if (is_host_active(cppi41_channel->controller->musb)) {
+		if (cppi41_channel->is_tx)
+			hb_mult = cppi41_channel->hw_ep->out_qh->hb_mult;
+		else
+			hb_mult = cppi41_channel->hw_ep->in_qh->hb_mult;
+	}
+
 	channel->status = MUSB_DMA_STATUS_BUSY;
 	channel->actual_len = 0;
+
+	if (hb_mult)
+		packet_sz = hb_mult * (packet_sz & 0x7FF);
+
 	ret = cppi41_configure_channel(channel, packet_sz, mode, dma_addr, len);
 	if (!ret)
 		channel->status = MUSB_DMA_STATUS_FREE;
···
 		cppi41_channel->port_num = port;
 		cppi41_channel->is_tx = is_tx;
 		INIT_LIST_HEAD(&cppi41_channel->tx_check);
+		INIT_WORK(&cppi41_channel->dma_completion,
+			  cppi_trans_done_work);
 
 		musb_dma = &cppi41_channel->channel;
 		musb_dma->private_data = cppi41_channel;
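The `hb_mult * (packet_sz & 0x7FF)` computation in `cppi41_dma_channel_program()` above comes from the USB 2.0 encoding of `wMaxPacketSize` for high-bandwidth isochronous endpoints: bits 10:0 carry the base packet size and bits 12:11 the number of *additional* transactions per microframe. A self-contained sketch of the same arithmetic (the helper name is illustrative; in the driver, `hb_mult` arrives pre-computed via the host qh):

```c
#include <assert.h>
#include <stdint.h>

/* Decode wMaxPacketSize into the total payload per microframe:
 * bits 10:0  = base packet size,
 * bits 12:11 = additional transactions per microframe (so mult = field + 1).
 * This mirrors the hb_mult * (packet_sz & 0x7FF) line in the patch. */
static uint32_t hb_payload(uint16_t wMaxPacketSize)
{
    uint32_t hb_mult = ((wMaxPacketSize >> 11) & 0x3) + 1;

    return hb_mult * (wMaxPacketSize & 0x7FF);
}
```

For an ordinary endpoint the mult field is zero and the result is just the packet size; a high-bandwidth isoc endpoint advertising 3 × 1024 encodes as 0x1400 and programs the DMA channel for 3072 bytes per microframe.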
+55 -3
drivers/usb/musb/musb_dsps.c
···
 #include <linux/of_irq.h>
 #include <linux/usb/of.h>
 
+#include <linux/debugfs.h>
+
 #include "musb_core.h"
 
 static const struct of_device_id musb_dsps_of_match[];
···
 	unsigned long last_timer;    /* last timer data for each instance */
 
 	struct dsps_context context;
+	struct debugfs_regset32 regset;
+	struct dentry *dbgfs_root;
 };
 
+static const struct debugfs_reg32 dsps_musb_regs[] = {
+	{ "revision",		0x00 },
+	{ "control",		0x14 },
+	{ "status",		0x18 },
+	{ "eoi",		0x24 },
+	{ "intr0_stat",		0x30 },
+	{ "intr1_stat",		0x34 },
+	{ "intr0_set",		0x38 },
+	{ "intr1_set",		0x3c },
+	{ "txmode",		0x70 },
+	{ "rxmode",		0x74 },
+	{ "autoreq",		0xd0 },
+	{ "srpfixtime",		0xd4 },
+	{ "tdown",		0xd8 },
+	{ "phy_utmi",		0xe0 },
+	{ "mode",		0xe8 },
+};
+
 static void dsps_musb_try_idle(struct musb *musb, unsigned long timeout)
···
 	return ret;
 }
 
+static int dsps_musb_dbg_init(struct musb *musb, struct dsps_glue *glue)
+{
+	struct dentry *root;
+	struct dentry *file;
+	char buf[128];
+
+	sprintf(buf, "%s.dsps", dev_name(musb->controller));
+	root = debugfs_create_dir(buf, NULL);
+	if (!root)
+		return -ENOMEM;
+	glue->dbgfs_root = root;
+
+	glue->regset.regs = dsps_musb_regs;
+	glue->regset.nregs = ARRAY_SIZE(dsps_musb_regs);
+	glue->regset.base = musb->ctrl_base;
+
+	file = debugfs_create_regset32("regdump", S_IRUGO, root, &glue->regset);
+	if (!file) {
+		debugfs_remove_recursive(root);
+		return -ENOMEM;
+	}
+	return 0;
+}
+
 static int dsps_musb_init(struct musb *musb)
 {
 	struct device *dev = musb->controller;
···
 	void __iomem *reg_base;
 	struct resource *r;
 	u32 rev, val;
+	int ret;
 
 	r = platform_get_resource_byname(parent, IORESOURCE_MEM, "control");
 	if (!r)
···
 	val = dsps_readl(reg_base, wrp->phy_utmi);
 	val &= ~(1 << wrp->otg_disable);
 	dsps_writel(musb->ctrl_base, wrp->phy_utmi, val);
+
+	ret = dsps_musb_dbg_init(musb, glue);
+	if (ret)
+		return ret;
 
 	return 0;
 }
···
 	wrp = match->data;
 
 	/* allocate glue */
-	glue = kzalloc(sizeof(*glue), GFP_KERNEL);
+	glue = devm_kzalloc(&pdev->dev, sizeof(*glue), GFP_KERNEL);
 	if (!glue) {
 		dev_err(&pdev->dev, "unable to allocate glue memory\n");
 		return -ENOMEM;
···
 	pm_runtime_put(&pdev->dev);
 err2:
 	pm_runtime_disable(&pdev->dev);
-	kfree(glue);
 	return ret;
 }
···
 	/* disable usbss clocks */
 	pm_runtime_put(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
-	kfree(glue);
+
+	debugfs_remove_recursive(glue->dbgfs_root);
+
 	return 0;
 }
 
+26 -4
drivers/usb/musb/musb_host.c
··· 1694 1694 | MUSB_RXCSR_RXPKTRDY);
1695 1695 musb_writew(hw_ep->regs, MUSB_RXCSR, val);
1696 1696
1697 - #if defined(CONFIG_USB_INVENTRA_DMA) || defined(CONFIG_USB_UX500_DMA)
1697 + #if defined(CONFIG_USB_INVENTRA_DMA) || defined(CONFIG_USB_UX500_DMA) || \
1698 + defined(CONFIG_USB_TI_CPPI41_DMA)
1698 1699 if (usb_pipeisoc(pipe)) {
1699 1700 struct usb_iso_packet_descriptor *d;
1700 1701
··· 1708 1707 if (d->status != -EILSEQ && d->status != -EOVERFLOW)
1709 1708 d->status = 0;
1710 1709
1711 - if (++qh->iso_idx >= urb->number_of_packets)
1710 + if (++qh->iso_idx >= urb->number_of_packets) {
1712 1711 done = true;
1713 - else
1712 + } else {
1713 + #if defined(CONFIG_USB_TI_CPPI41_DMA)
1714 + struct dma_controller *c;
1715 + dma_addr_t *buf;
1716 + u32 length, ret;
1717 +
1718 + c = musb->dma_controller;
1719 + buf = (void *)
1720 + urb->iso_frame_desc[qh->iso_idx].offset
1721 + + (u32)urb->transfer_dma;
1722 +
1723 + length =
1724 + urb->iso_frame_desc[qh->iso_idx].length;
1725 +
1726 + val |= MUSB_RXCSR_DMAENAB;
1727 + musb_writew(hw_ep->regs, MUSB_RXCSR, val);
1728 +
1729 + ret = c->channel_program(dma, qh->maxpacket,
1730 + 0, (u32) buf, length);
1731 + #endif
1714 1732 done = false;
1733 + }
1715 1734
1716 1735 } else {
1717 1736 /* done if urb buffer is full or short packet is recd */
··· 1771 1750 }
1772 1751
1773 1752 /* we are expecting IN packets */
1774 - #if defined(CONFIG_USB_INVENTRA_DMA) || defined(CONFIG_USB_UX500_DMA)
1753 + #if defined(CONFIG_USB_INVENTRA_DMA) || defined(CONFIG_USB_UX500_DMA) || \
1754 + defined(CONFIG_USB_TI_CPPI41_DMA)
1775 1755 if (dma) {
1776 1756 struct dma_controller *c;
1777 1757 u16 rx_count;
+5 -4
drivers/usb/phy/phy-fsm-usb.c
··· 317 317 otg_set_state(fsm, OTG_STATE_A_WAIT_VFALL);
318 318 break;
319 319 case OTG_STATE_A_HOST:
320 - if ((!fsm->a_bus_req || fsm->a_suspend_req_inf) &&
320 + if (fsm->id || fsm->a_bus_drop)
321 + otg_set_state(fsm, OTG_STATE_A_WAIT_VFALL);
322 + else if ((!fsm->a_bus_req || fsm->a_suspend_req_inf) &&
321 323 fsm->otg->host->b_hnp_enable)
322 324 otg_set_state(fsm, OTG_STATE_A_SUSPEND);
323 - else if (fsm->id || !fsm->b_conn || fsm->a_bus_drop)
325 + else if (!fsm->b_conn)
324 326 otg_set_state(fsm, OTG_STATE_A_WAIT_BCON);
325 327 else if (!fsm->a_vbus_vld)
326 328 otg_set_state(fsm, OTG_STATE_A_VBUS_ERR);
··· 348 346 otg_set_state(fsm, OTG_STATE_A_VBUS_ERR);
349 347 break;
350 348 case OTG_STATE_A_WAIT_VFALL:
351 - if (fsm->a_wait_vfall_tmout || fsm->id || fsm->a_bus_req ||
352 - (!fsm->a_sess_vld && !fsm->b_conn))
349 + if (fsm->a_wait_vfall_tmout)
353 350 otg_set_state(fsm, OTG_STATE_A_IDLE);
354 351 break;
355 352 case OTG_STATE_A_VBUS_ERR:
+296 -14
drivers/usb/phy/phy-mxs-usb.c
··· 1 1 /*
2 - * Copyright 2012 Freescale Semiconductor, Inc.
2 + * Copyright 2012-2013 Freescale Semiconductor, Inc.
3 3 * Copyright (C) 2012 Marek Vasut <marex@denx.de>
4 4 * on behalf of DENX Software Engineering GmbH
5 5 *
··· 20 20 #include <linux/delay.h>
21 21 #include <linux/err.h>
22 22 #include <linux/io.h>
23 + #include <linux/of_device.h>
24 + #include <linux/regmap.h>
25 + #include <linux/mfd/syscon.h>
23 26
24 27 #define DRIVER_NAME "mxs_phy"
25 28
··· 31 28 #define HW_USBPHY_CTRL_SET 0x34
32 29 #define HW_USBPHY_CTRL_CLR 0x38
33 30
31 + #define HW_USBPHY_DEBUG_SET 0x54
32 + #define HW_USBPHY_DEBUG_CLR 0x58
33 +
34 + #define HW_USBPHY_IP 0x90
35 + #define HW_USBPHY_IP_SET 0x94
36 + #define HW_USBPHY_IP_CLR 0x98
37 +
34 38 #define BM_USBPHY_CTRL_SFTRST BIT(31)
35 39 #define BM_USBPHY_CTRL_CLKGATE BIT(30)
40 + #define BM_USBPHY_CTRL_ENAUTOSET_USBCLKS BIT(26)
41 + #define BM_USBPHY_CTRL_ENAUTOCLR_USBCLKGATE BIT(25)
42 + #define BM_USBPHY_CTRL_ENVBUSCHG_WKUP BIT(23)
43 + #define BM_USBPHY_CTRL_ENIDCHG_WKUP BIT(22)
44 + #define BM_USBPHY_CTRL_ENDPDMCHG_WKUP BIT(21)
45 + #define BM_USBPHY_CTRL_ENAUTOCLR_PHY_PWD BIT(20)
46 + #define BM_USBPHY_CTRL_ENAUTOCLR_CLKGATE BIT(19)
47 + #define BM_USBPHY_CTRL_ENAUTO_PWRON_PLL BIT(18)
36 48 #define BM_USBPHY_CTRL_ENUTMILEVEL3 BIT(15)
37 49 #define BM_USBPHY_CTRL_ENUTMILEVEL2 BIT(14)
38 50 #define BM_USBPHY_CTRL_ENHOSTDISCONDETECT BIT(1)
39 51
52 + #define BM_USBPHY_IP_FIX (BIT(17) | BIT(18))
53 +
54 + #define BM_USBPHY_DEBUG_CLKGATE BIT(30)
55 +
56 + /* Anatop Registers */
57 + #define ANADIG_ANA_MISC0 0x150
58 + #define ANADIG_ANA_MISC0_SET 0x154
59 + #define ANADIG_ANA_MISC0_CLR 0x158
60 +
61 + #define ANADIG_USB1_VBUS_DET_STAT 0x1c0
62 + #define ANADIG_USB2_VBUS_DET_STAT 0x220
63 +
64 + #define ANADIG_USB1_LOOPBACK_SET 0x1e4
65 + #define ANADIG_USB1_LOOPBACK_CLR 0x1e8
66 + #define ANADIG_USB2_LOOPBACK_SET 0x244
67 + #define ANADIG_USB2_LOOPBACK_CLR 0x248
68 +
69 + #define BM_ANADIG_ANA_MISC0_STOP_MODE_CONFIG BIT(12)
70 + #define BM_ANADIG_ANA_MISC0_STOP_MODE_CONFIG_SL BIT(11)
71 +
72 + #define BM_ANADIG_USB1_VBUS_DET_STAT_VBUS_VALID BIT(3)
73 + #define BM_ANADIG_USB2_VBUS_DET_STAT_VBUS_VALID BIT(3)
74 +
75 + #define BM_ANADIG_USB1_LOOPBACK_UTMI_DIG_TST1 BIT(2)
76 + #define BM_ANADIG_USB1_LOOPBACK_TSTI_TX_EN BIT(5)
77 + #define BM_ANADIG_USB2_LOOPBACK_UTMI_DIG_TST1 BIT(2)
78 + #define BM_ANADIG_USB2_LOOPBACK_TSTI_TX_EN BIT(5)
79 +
80 + #define to_mxs_phy(p) container_of((p), struct mxs_phy, phy)
81 +
82 + /* Do disconnection between PHY and controller without vbus */
83 + #define MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS BIT(0)
84 +
85 + /*
86 + * The PHY will be in messy if there is a wakeup after putting
87 + * bus to suspend (set portsc.suspendM) but before setting PHY to low
88 + * power mode (set portsc.phcd).
89 + */
90 + #define MXS_PHY_ABNORMAL_IN_SUSPEND BIT(1)
91 +
92 + /*
93 + * The SOF sends too fast after resuming, it will cause disconnection
94 + * between host and high speed device.
95 + */
96 + #define MXS_PHY_SENDING_SOF_TOO_FAST BIT(2)
97 +
98 + /*
99 + * IC has bug fixes logic, they include
100 + * MXS_PHY_ABNORMAL_IN_SUSPEND and MXS_PHY_SENDING_SOF_TOO_FAST
101 + * which are described at above flags, the RTL will handle it
102 + * according to different versions.
103 + */
104 + #define MXS_PHY_NEED_IP_FIX BIT(3)
105 +
106 + struct mxs_phy_data {
107 + unsigned int flags;
108 + };
109 +
110 + static const struct mxs_phy_data imx23_phy_data = {
111 + .flags = MXS_PHY_ABNORMAL_IN_SUSPEND | MXS_PHY_SENDING_SOF_TOO_FAST,
112 + };
113 +
114 + static const struct mxs_phy_data imx6q_phy_data = {
115 + .flags = MXS_PHY_SENDING_SOF_TOO_FAST |
116 + MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS |
117 + MXS_PHY_NEED_IP_FIX,
118 + };
119 +
120 + static const struct mxs_phy_data imx6sl_phy_data = {
121 + .flags = MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS |
122 + MXS_PHY_NEED_IP_FIX,
123 + };
124 +
125 + static const struct of_device_id mxs_phy_dt_ids[] = {
126 + { .compatible = "fsl,imx6sl-usbphy", .data = &imx6sl_phy_data, },
127 + { .compatible = "fsl,imx6q-usbphy", .data = &imx6q_phy_data, },
128 + { .compatible = "fsl,imx23-usbphy", .data = &imx23_phy_data, },
129 + { /* sentinel */ }
130 + };
131 + MODULE_DEVICE_TABLE(of, mxs_phy_dt_ids);
132 +
40 133 struct mxs_phy {
41 134 struct usb_phy phy;
42 135 struct clk *clk;
136 + const struct mxs_phy_data *data;
137 + struct regmap *regmap_anatop;
138 + int port_id;
43 139 };
44 140
45 - #define to_mxs_phy(p) container_of((p), struct mxs_phy, phy)
141 + static inline bool is_imx6q_phy(struct mxs_phy *mxs_phy)
142 + {
143 + return mxs_phy->data == &imx6q_phy_data;
144 + }
145 +
146 + static inline bool is_imx6sl_phy(struct mxs_phy *mxs_phy)
147 + {
148 + return mxs_phy->data == &imx6sl_phy_data;
149 + }
150 +
151 + /*
152 + * PHY needs some 32K cycles to switch from 32K clock to
153 + * bus (such as AHB/AXI, etc) clock.
154 + */
155 + static void mxs_phy_clock_switch_delay(void)
156 + {
157 + usleep_range(300, 400);
158 + }
46 159
47 160 static int mxs_phy_hw_init(struct mxs_phy *mxs_phy)
48 161 {
··· 172 53 /* Power up the PHY */
173 54 writel(0, base + HW_USBPHY_PWD);
174 55
175 - /* enable FS/LS device */
176 - writel(BM_USBPHY_CTRL_ENUTMILEVEL2 |
177 - BM_USBPHY_CTRL_ENUTMILEVEL3,
56 + /*
57 + * USB PHY Ctrl Setting
58 + * - Auto clock/power on
59 + * - Enable full/low speed support
60 + */
61 + writel(BM_USBPHY_CTRL_ENAUTOSET_USBCLKS |
62 + BM_USBPHY_CTRL_ENAUTOCLR_USBCLKGATE |
63 + BM_USBPHY_CTRL_ENAUTOCLR_PHY_PWD |
64 + BM_USBPHY_CTRL_ENAUTOCLR_CLKGATE |
65 + BM_USBPHY_CTRL_ENAUTO_PWRON_PLL |
66 + BM_USBPHY_CTRL_ENUTMILEVEL2 |
67 + BM_USBPHY_CTRL_ENUTMILEVEL3,
178 68 base + HW_USBPHY_CTRL_SET);
179 69
70 + if (mxs_phy->data->flags & MXS_PHY_NEED_IP_FIX)
71 + writel(BM_USBPHY_IP_FIX, base + HW_USBPHY_IP_SET);
72 +
180 73 return 0;
74 + }
75 +
76 + /* Return true if the vbus is there */
77 + static bool mxs_phy_get_vbus_status(struct mxs_phy *mxs_phy)
78 + {
79 + unsigned int vbus_value;
80 +
81 + if (mxs_phy->port_id == 0)
82 + regmap_read(mxs_phy->regmap_anatop,
83 + ANADIG_USB1_VBUS_DET_STAT,
84 + &vbus_value);
85 + else if (mxs_phy->port_id == 1)
86 + regmap_read(mxs_phy->regmap_anatop,
87 + ANADIG_USB2_VBUS_DET_STAT,
88 + &vbus_value);
89 +
90 + if (vbus_value & BM_ANADIG_USB1_VBUS_DET_STAT_VBUS_VALID)
91 + return true;
92 + else
93 + return false;
94 + }
95 +
96 + static void __mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool disconnect)
97 + {
98 + void __iomem *base = mxs_phy->phy.io_priv;
99 + u32 reg;
100 +
101 + if (disconnect)
102 + writel_relaxed(BM_USBPHY_DEBUG_CLKGATE,
103 + base + HW_USBPHY_DEBUG_CLR);
104 +
105 + if (mxs_phy->port_id == 0) {
106 + reg = disconnect ? ANADIG_USB1_LOOPBACK_SET
107 + : ANADIG_USB1_LOOPBACK_CLR;
108 + regmap_write(mxs_phy->regmap_anatop, reg,
109 + BM_ANADIG_USB1_LOOPBACK_UTMI_DIG_TST1 |
110 + BM_ANADIG_USB1_LOOPBACK_TSTI_TX_EN);
111 + } else if (mxs_phy->port_id == 1) {
112 + reg = disconnect ? ANADIG_USB2_LOOPBACK_SET
113 + : ANADIG_USB2_LOOPBACK_CLR;
114 + regmap_write(mxs_phy->regmap_anatop, reg,
115 + BM_ANADIG_USB2_LOOPBACK_UTMI_DIG_TST1 |
116 + BM_ANADIG_USB2_LOOPBACK_TSTI_TX_EN);
117 + }
118 +
119 + if (!disconnect)
120 + writel_relaxed(BM_USBPHY_DEBUG_CLKGATE,
121 + base + HW_USBPHY_DEBUG_SET);
122 +
123 + /* Delay some time, and let Linestate be SE0 for controller */
124 + if (disconnect)
125 + usleep_range(500, 1000);
126 + }
127 +
128 + static void mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool on)
129 + {
130 + bool vbus_is_on = false;
131 +
132 + /* If the SoCs don't need to disconnect line without vbus, quit */
133 + if (!(mxs_phy->data->flags & MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS))
134 + return;
135 +
136 + /* If the SoCs don't have anatop, quit */
137 + if (!mxs_phy->regmap_anatop)
138 + return;
139 +
140 + vbus_is_on = mxs_phy_get_vbus_status(mxs_phy);
141 +
142 + if (on && !vbus_is_on)
143 + __mxs_phy_disconnect_line(mxs_phy, true);
144 + else
145 + __mxs_phy_disconnect_line(mxs_phy, false);
146 +
181 147 }
182 148
183 149 static int mxs_phy_init(struct usb_phy *phy)
··· 270 66 int ret;
271 67 struct mxs_phy *mxs_phy = to_mxs_phy(phy);
272 68
69 + mxs_phy_clock_switch_delay();
273 70 ret = clk_prepare_enable(mxs_phy->clk);
274 71 if (ret)
275 72 return ret;
··· 299 94 x->io_priv + HW_USBPHY_CTRL_SET);
300 95 clk_disable_unprepare(mxs_phy->clk);
301 96 } else {
97 + mxs_phy_clock_switch_delay();
302 98 ret = clk_prepare_enable(mxs_phy->clk);
303 99 if (ret)
304 100 return ret;
··· 311 105 return 0;
312 106 }
313 107
108 + static int mxs_phy_set_wakeup(struct usb_phy *x, bool enabled)
109 + {
110 + struct mxs_phy *mxs_phy = to_mxs_phy(x);
111 + u32 value = BM_USBPHY_CTRL_ENVBUSCHG_WKUP |
112 + BM_USBPHY_CTRL_ENDPDMCHG_WKUP |
113 + BM_USBPHY_CTRL_ENIDCHG_WKUP;
114 + if (enabled) {
115 + mxs_phy_disconnect_line(mxs_phy, true);
116 + writel_relaxed(value, x->io_priv + HW_USBPHY_CTRL_SET);
117 + } else {
118 + writel_relaxed(value, x->io_priv + HW_USBPHY_CTRL_CLR);
119 + mxs_phy_disconnect_line(mxs_phy, false);
120 + }
121 +
122 + return 0;
123 + }
124 +
314 125 static int mxs_phy_on_connect(struct usb_phy *phy,
315 126 enum usb_device_speed speed)
316 127 {
317 - dev_dbg(phy->dev, "%s speed device has connected\n",
318 - (speed == USB_SPEED_HIGH) ? "high" : "non-high");
128 + dev_dbg(phy->dev, "%s device has connected\n",
129 + (speed == USB_SPEED_HIGH) ? "HS" : "FS/LS");
319 130
320 131 if (speed == USB_SPEED_HIGH)
321 132 writel(BM_USBPHY_CTRL_ENHOSTDISCONDETECT,
··· 344 121 static int mxs_phy_on_disconnect(struct usb_phy *phy,
345 122 enum usb_device_speed speed)
346 123 {
347 - dev_dbg(phy->dev, "%s speed device has disconnected\n",
348 - (speed == USB_SPEED_HIGH) ? "high" : "non-high");
124 + dev_dbg(phy->dev, "%s device has disconnected\n",
125 + (speed == USB_SPEED_HIGH) ? "HS" : "FS/LS");
349 126
350 127 if (speed == USB_SPEED_HIGH)
351 128 writel(BM_USBPHY_CTRL_ENHOSTDISCONDETECT,
··· 361 138 struct clk *clk;
362 139 struct mxs_phy *mxs_phy;
363 140 int ret;
141 + const struct of_device_id *of_id =
142 + of_match_device(mxs_phy_dt_ids, &pdev->dev);
143 + struct device_node *np = pdev->dev.of_node;
364 144
365 145 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
366 146 base = devm_ioremap_resource(&pdev->dev, res);
··· 383 157 return -ENOMEM;
384 158 }
385 159
160 + /* Some SoCs don't have anatop registers */
161 + if (of_get_property(np, "fsl,anatop", NULL)) {
162 + mxs_phy->regmap_anatop = syscon_regmap_lookup_by_phandle
163 + (np, "fsl,anatop");
164 + if (IS_ERR(mxs_phy->regmap_anatop)) {
165 + dev_dbg(&pdev->dev,
166 + "failed to find regmap for anatop\n");
167 + return PTR_ERR(mxs_phy->regmap_anatop);
168 + }
169 + }
170 +
171 + ret = of_alias_get_id(np, "usbphy");
172 + if (ret < 0)
173 + dev_dbg(&pdev->dev, "failed to get alias id, errno %d\n", ret);
174 + mxs_phy->port_id = ret;
175 +
386 176 mxs_phy->phy.io_priv = base;
387 177 mxs_phy->phy.dev = &pdev->dev;
388 178 mxs_phy->phy.label = DRIVER_NAME;
··· 408 166 mxs_phy->phy.notify_connect = mxs_phy_on_connect;
409 167 mxs_phy->phy.notify_disconnect = mxs_phy_on_disconnect;
410 168 mxs_phy->phy.type = USB_PHY_TYPE_USB2;
169 + mxs_phy->phy.set_wakeup = mxs_phy_set_wakeup;
411 170
412 171 mxs_phy->clk = clk;
172 + mxs_phy->data = of_id->data;
413 173
414 174 platform_set_drvdata(pdev, mxs_phy);
175 +
176 + device_set_wakeup_capable(&pdev->dev, true);
415 177
416 178 ret = usb_add_phy_dev(&mxs_phy->phy);
417 179 if (ret)
··· 433 187 return 0;
434 188 }
435 189
436 - static const struct of_device_id mxs_phy_dt_ids[] = {
437 - { .compatible = "fsl,imx23-usbphy", },
438 - { /* sentinel */ }
439 - };
440 - MODULE_DEVICE_TABLE(of, mxs_phy_dt_ids);
190 + #ifdef CONFIG_PM_SLEEP
191 + static void mxs_phy_enable_ldo_in_suspend(struct mxs_phy *mxs_phy, bool on)
192 + {
193 + unsigned int reg = on ? ANADIG_ANA_MISC0_SET : ANADIG_ANA_MISC0_CLR;
194 +
195 + /* If the SoCs don't have anatop, quit */
196 + if (!mxs_phy->regmap_anatop)
197 + return;
198 +
199 + if (is_imx6q_phy(mxs_phy))
200 + regmap_write(mxs_phy->regmap_anatop, reg,
201 + BM_ANADIG_ANA_MISC0_STOP_MODE_CONFIG);
202 + else if (is_imx6sl_phy(mxs_phy))
203 + regmap_write(mxs_phy->regmap_anatop,
204 + reg, BM_ANADIG_ANA_MISC0_STOP_MODE_CONFIG_SL);
205 + }
206 +
207 + static int mxs_phy_system_suspend(struct device *dev)
208 + {
209 + struct mxs_phy *mxs_phy = dev_get_drvdata(dev);
210 +
211 + if (device_may_wakeup(dev))
212 + mxs_phy_enable_ldo_in_suspend(mxs_phy, true);
213 +
214 + return 0;
215 + }
216 +
217 + static int mxs_phy_system_resume(struct device *dev)
218 + {
219 + struct mxs_phy *mxs_phy = dev_get_drvdata(dev);
220 +
221 + if (device_may_wakeup(dev))
222 + mxs_phy_enable_ldo_in_suspend(mxs_phy, false);
223 +
224 + return 0;
225 + }
226 + #endif /* CONFIG_PM_SLEEP */
227 +
228 + static SIMPLE_DEV_PM_OPS(mxs_phy_pm, mxs_phy_system_suspend,
229 + mxs_phy_system_resume);
441 230
442 231 static struct platform_driver mxs_phy_driver = {
443 232 .probe = mxs_phy_probe,
··· 481 200 .name = DRIVER_NAME,
482 201 .owner = THIS_MODULE,
483 202 .of_match_table = mxs_phy_dt_ids,
203 + .pm = &mxs_phy_pm,
484 204 },
485 205 };
486 206
+3 -3
drivers/usb/phy/phy-rcar-gen2-usb.c
··· 177 177 struct clk *clk;
178 178 int retval;
179 179
180 - pdata = dev_get_platdata(&pdev->dev);
180 + pdata = dev_get_platdata(dev);
181 181 if (!pdata) {
182 182 dev_err(dev, "No platform data\n");
183 183 return -EINVAL;
184 184 }
185 185
186 - clk = devm_clk_get(&pdev->dev, "usbhs");
186 + clk = devm_clk_get(dev, "usbhs");
187 187 if (IS_ERR(clk)) {
188 - dev_err(&pdev->dev, "Can't get the clock\n");
188 + dev_err(dev, "Can't get the clock\n");
189 189 return PTR_ERR(clk);
190 190 }
191 191
+16
include/linux/usb/phy.h
··· 111 111 int (*set_suspend)(struct usb_phy *x,
112 112 int suspend);
113 113
114 + /*
115 + * Set wakeup enable for PHY, in that case, the PHY can be
116 + * woken up from suspend status due to external events,
117 + * like vbus change, dp/dm change and id.
118 + */
119 + int (*set_wakeup)(struct usb_phy *x, bool enabled);
120 +
114 121 /* notify phy connect status change */
115 122 int (*notify_connect)(struct usb_phy *x,
116 123 enum usb_device_speed speed);
··· 267 260 {
268 261 if (x && x->set_suspend != NULL)
269 262 return x->set_suspend(x, suspend);
263 + else
264 + return 0;
265 + }
266 +
267 + static inline int
268 + usb_phy_set_wakeup(struct usb_phy *x, bool enabled)
269 + {
270 + if (x && x->set_wakeup)
271 + return x->set_wakeup(x, enabled);
270 272 else
271 273 return 0;
272 274 }
+30 -14
include/uapi/linux/usb/functionfs.h
··· 10 10
11 11 enum {
12 12 FUNCTIONFS_DESCRIPTORS_MAGIC = 1,
13 - FUNCTIONFS_STRINGS_MAGIC = 2
13 + FUNCTIONFS_STRINGS_MAGIC = 2,
14 + FUNCTIONFS_DESCRIPTORS_MAGIC_V2 = 3,
14 15 };
15 16
17 + enum functionfs_flags {
18 + FUNCTIONFS_HAS_FS_DESC = 1,
19 + FUNCTIONFS_HAS_HS_DESC = 2,
20 + FUNCTIONFS_HAS_SS_DESC = 4,
21 + };
16 22
17 23 #ifndef __KERNEL__
18 24
··· 35 29
36 30
37 31 /*
38 - * All numbers must be in little endian order.
39 - */
40 -
41 - struct usb_functionfs_descs_head {
42 - __le32 magic;
43 - __le32 length;
44 - __le32 fs_count;
45 - __le32 hs_count;
46 - } __attribute__((packed));
47 -
48 - /*
49 32 * Descriptors format:
50 33 *
51 34 * | off | name | type | description |
52 35 * |-----+-----------+--------------+--------------------------------------|
53 - * | 0 | magic | LE32 | FUNCTIONFS_{FS,HS}_DESCRIPTORS_MAGIC |
36 + * | 0 | magic | LE32 | FUNCTIONFS_DESCRIPTORS_MAGIC_V2 |
37 + * | 4 | length | LE32 | length of the whole data chunk |
38 + * | 8 | flags | LE32 | combination of functionfs_flags |
39 + * | | fs_count | LE32 | number of full-speed descriptors |
40 + * | | hs_count | LE32 | number of high-speed descriptors |
41 + * | | ss_count | LE32 | number of super-speed descriptors |
42 + * | | fs_descrs | Descriptor[] | list of full-speed descriptors |
43 + * | | hs_descrs | Descriptor[] | list of high-speed descriptors |
44 + * | | ss_descrs | Descriptor[] | list of super-speed descriptors |
45 + *
46 + * Depending on which flags are set, various fields may be missing in the
47 + * structure. Any flags that are not recognised cause the whole block to be
48 + * rejected with -ENOSYS.
49 + *
50 + * Legacy descriptors format:
51 + *
52 + * | off | name | type | description |
53 + * |-----+-----------+--------------+--------------------------------------|
54 + * | 0 | magic | LE32 | FUNCTIONFS_DESCRIPTORS_MAGIC |
54 55 * | 4 | length | LE32 | length of the whole data chunk |
55 56 * | 8 | fs_count | LE32 | number of full-speed descriptors |
56 57 * | 12 | hs_count | LE32 | number of high-speed descriptors |
57 58 * | 16 | fs_descrs | Descriptor[] | list of full-speed descriptors |
58 59 * | | hs_descrs | Descriptor[] | list of high-speed descriptors |
59 60 *
60 - * descs are just valid USB descriptors and have the following format:
61 + * All numbers must be in little endian order.
62 + *
63 + * Descriptor[] is an array of valid USB descriptors which have the following
64 + * format:
61 65 *
62 66 * | off | name | type | description |
63 67 * |-----+-----------------+------+--------------------------|