Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'usb-next' into musb-merge

* usb-next: (132 commits)
USB: uas: Use GFP_NOIO instead of GFP_KERNEL in I/O submission path
USB: uas: Ensure we only bind to a UAS interface
USB: uas: Rename sense pipe and sense urb to status pipe and status urb
USB: uas: Use kzalloc instead of kmalloc
USB: uas: Fix up the Sense IU
usb: musb: core: kill unneeded #include's
DA8xx: assign name to MUSB IRQ resource
usb: gadget: g_ncm added
usb: gadget: f_ncm.c added
usb: gadget: u_ether: prepare for NCM
usb: pch_udc: Fix setup transfers with data out
usb: pch_udc: Fix compile error, warnings and checkpatch warnings
usb: add ab8500 usb transceiver driver
USB: gadget: Implement runtime PM for MSM bus glue driver
USB: gadget: Implement runtime PM for ci13xxx gadget
USB: gadget: Add USB controller driver for MSM SoC
USB: gadget: Introduce ci13xxx_udc_driver struct
USB: gadget: Initialize ci13xxx gadget device's coherent DMA mask
USB: gadget: Fix "scheduling while atomic" bugs in ci13xxx_udc
USB: gadget: Separate out PCI bus code from ci13xxx_udc
...

+15279 -1965
+57 -54
Documentation/usb/power-management.txt
···
 		Alan Stern <stern@rowland.harvard.edu>
 
-			    December 11, 2009
+			    October 28, 2010
 
 
···
 The user interface for controlling dynamic PM is located in the power/
 subdirectory of each USB device's sysfs directory, that is, in
 /sys/bus/usb/devices/.../power/ where "..." is the device's ID.  The
-relevant attribute files are: wakeup, control, and autosuspend.
-(There may also be a file named "level"; this file was deprecated
-as of the 2.6.35 kernel and replaced by the "control" file.)
+relevant attribute files are: wakeup, control, and
+autosuspend_delay_ms.  (There may also be a file named "level"; this
+file was deprecated as of the 2.6.35 kernel and replaced by the
+"control" file.  In 2.6.38 the "autosuspend" file will be deprecated
+and replaced by the "autosuspend_delay_ms" file.  The only difference
+is that the newer file expresses the delay in milliseconds whereas the
+older file uses seconds.  Confusingly, both files are present in 2.6.37
+but only "autosuspend" works.)
 
 	power/wakeup
 
···
 		suspended and autoresume was not allowed.  This
 		setting is no longer supported.)
 
-	power/autosuspend
+	power/autosuspend_delay_ms
 
 		This file contains an integer value, which is the
-		number of seconds the device should remain idle before
-		the kernel will autosuspend it (the idle-delay time).
-		The default is 2.  0 means to autosuspend as soon as
-		the device becomes idle, and negative values mean
-		never to autosuspend.  You can write a number to the
-		file to change the autosuspend idle-delay time.
+		number of milliseconds the device should remain idle
+		before the kernel will autosuspend it (the idle-delay
+		time).  The default is 2000.  0 means to autosuspend
+		as soon as the device becomes idle, and negative
+		values mean never to autosuspend.  You can write a
+		number to the file to change the autosuspend
+		idle-delay time.
 
-Writing "-1" to power/autosuspend and writing "on" to power/control do
-essentially the same thing -- they both prevent the device from being
-autosuspended.  Yes, this is a redundancy in the API.
+Writing "-1" to power/autosuspend_delay_ms and writing "on" to
+power/control do essentially the same thing -- they both prevent the
+device from being autosuspended.  Yes, this is a redundancy in the
+API.
 
 (In 2.6.21 writing "0" to power/autosuspend would prevent the device
 from being autosuspended; the behavior was changed in 2.6.22.  The
 power/autosuspend attribute did not exist prior to 2.6.21, and the
 power/level attribute did not exist prior to 2.6.22.  power/control
-was added in 2.6.34.)
+was added in 2.6.34, and power/autosuspend_delay_ms was added in
+2.6.37 but did not become functional until 2.6.38.)
 
 
 	Changing the default idle-delay time
 	------------------------------------
 
-The default autosuspend idle-delay time is controlled by a module
-parameter in usbcore.  You can specify the value when usbcore is
-loaded.  For example, to set it to 5 seconds instead of 2 you would
+The default autosuspend idle-delay time (in seconds) is controlled by
+a module parameter in usbcore.  You can specify the value when usbcore
+is loaded.  For example, to set it to 5 seconds instead of 2 you would
 do:
 
 	modprobe usbcore autosuspend=5
···
 If a driver knows that its device has proper suspend/resume support,
 it can enable autosuspend all by itself.  For example, the video
-driver for a laptop's webcam might do this, since these devices are
-rarely used and so should normally be autosuspended.
+driver for a laptop's webcam might do this (in recent kernels they
+do), since these devices are rarely used and so should normally be
+autosuspended.
 
 Sometimes it turns out that even when a device does work okay with
-autosuspend there are still problems.  For example, there are
-experimental patches adding autosuspend support to the usbhid driver,
-which manages keyboards and mice, among other things.  Tests with a
-number of keyboards showed that typing on a suspended keyboard, while
-causing the keyboard to do a remote wakeup all right, would
-nonetheless frequently result in lost keystrokes.  Tests with mice
-showed that some of them would issue a remote-wakeup request in
-response to button presses but not to motion, and some in response to
-neither.
+autosuspend there are still problems.  For example, the usbhid driver,
+which manages keyboards and mice, has autosuspend support.  Tests with
+a number of keyboards show that typing on a suspended keyboard, while
+causing the keyboard to do a remote wakeup all right, will nonetheless
+frequently result in lost keystrokes.  Tests with mice show that some
+of them will issue a remote-wakeup request in response to button
+presses but not to motion, and some in response to neither.
 
 The kernel will not prevent you from enabling autosuspend on devices
 that can't handle it.  It is even possible in theory to damage a
-device by suspending it at the wrong time -- for example, suspending a
-USB hard disk might cause it to spin down without parking the heads.
-(Highly unlikely, but possible.)  Take care.
+device by suspending it at the wrong time.  (Highly unlikely, but
+possible.)  Take care.
 
 
 	The driver interface for Power Management
···
 then the interface is considered to be idle, and the kernel may
 autosuspend the device.
 
-(There is a similar usage counter field in struct usb_device,
-associated with the device itself rather than any of its interfaces.
-This counter is used only by the USB core.)
-
 Drivers need not be concerned about balancing changes to the usage
 counter; the USB core will undo any remaining "get"s when a driver
 is unbound from its interface.  As a corollary, drivers must not call
···
 autosuspending a keyboard if the user can't cause the keyboard to do a
 remote wakeup by typing on it.  If the driver sets
 intf->needs_remote_wakeup to 1, the kernel won't autosuspend the
-device if remote wakeup isn't available or has been disabled through
-the power/wakeup attribute.  (If the device is already autosuspended,
-though, setting this flag won't cause the kernel to autoresume it.
-Normally a driver would set this flag in its probe method, at which
-time the device is guaranteed not to be autosuspended.)
+device if remote wakeup isn't available.  (If the device is already
+autosuspended, though, setting this flag won't cause the kernel to
+autoresume it.  Normally a driver would set this flag in its probe
+method, at which time the device is guaranteed not to be
+autosuspended.)
 
 If a driver does its I/O asynchronously in interrupt context, it
 should call usb_autopm_get_interface_async() before starting output and
···
 	usb_mark_last_busy(struct usb_device *udev);
 
-in the event handler.  This sets udev->last_busy to the current time.
-udev->last_busy is the field used for idle-delay calculations;
-updating it will cause any pending autosuspend to be moved back.  Most
-of the usb_autopm_* routines will also set the last_busy field to the
-current time.
+in the event handler.  This tells the PM core that the device was just
+busy and therefore the next autosuspend idle-delay expiration should
+be pushed back.  Many of the usb_autopm_* routines also make this call,
+so drivers need to worry only when interrupt-driven input arrives.
 
 Asynchronous operation is always subject to races.  For example, a
-driver may call one of the usb_autopm_*_interface_async() routines at
-a time when the core has just finished deciding the device has been
-idle for long enough but not yet gotten around to calling the driver's
-suspend method.  The suspend method must be responsible for
-synchronizing with the output request routine and the URB completion
-handler; it should cause autosuspends to fail with -EBUSY if the
-driver needs to use the device.
+driver may call the usb_autopm_get_interface_async() routine at a time
+when the core has just finished deciding the device has been idle for
+long enough but not yet gotten around to calling the driver's suspend
+method.  The suspend method must be responsible for synchronizing with
+the I/O request routine and the URB completion handler; it should
+cause autosuspends to fail with -EBUSY if the driver needs to use the
+device.
 
 External suspend calls should never be allowed to fail in this way,
 only autosuspend calls.  The driver can tell them apart by checking
···
 occurs.  Since system suspends are supposed to be as transparent as
 possible, the device should remain suspended following the system
 resume.  But this theory may not work out well in practice; over time
-the kernel's behavior in this regard has changed.
+the kernel's behavior in this regard has changed.  As of 2.6.37 the
+policy is to resume all devices during a system resume and let them
+handle their own runtime suspends afterward.
 
 Secondly, a dynamic power-management event may occur as a system
 suspend is underway.  The window for this is short, since system
+5 -1
arch/arm/mach-davinci/usb.c
···
 	{
 		.start          = IRQ_USBINT,
 		.flags          = IORESOURCE_IRQ,
+		.name		= "mc"
 	},
 	{
 		/* placeholder for the dedicated CPPI IRQ */
 		.flags          = IORESOURCE_IRQ,
+		.name		= "dma"
 	},
 };
 
 static u64 usb_dmamask = DMA_BIT_MASK(32);
 
 static struct platform_device usb_dev = {
-	.name           = "musb_hdrc",
+	.name           = "musb-davinci",
 	.id             = -1,
 	.dev = {
 		.platform_data		= &usb_data,
···
 	{
 		.start	= IRQ_DA8XX_USB_INT,
 		.flags	= IORESOURCE_IRQ,
+		.name	= "mc",
 	},
 };
 
···
 
 	usb_dev.resource = da8xx_usb20_resources;
 	usb_dev.num_resources = ARRAY_SIZE(da8xx_usb20_resources);
+	usb_dev.name = "musb-da8xx";
 
 	return platform_device_register(&usb_dev);
 }
+1
arch/arm/mach-omap2/Kconfig
···
 	select ARM_GIC
 	select PL310_ERRATA_588369
 	select ARM_ERRATA_720789
+	select USB_ARCH_HAS_EHCI
 
 comment "OMAP Core Type"
 	depends on ARCH_OMAP2
+4 -2
arch/arm/mach-omap2/Makefile
···
 obj-$(CONFIG_MACH_OMAP3_TOUCHBOOK)	+= board-omap3touchbook.o \
 					   hsmmc.o
 obj-$(CONFIG_MACH_OMAP_4430SDP)		+= board-4430sdp.o \
-					   hsmmc.o
+					   hsmmc.o \
+					   omap_phy_internal.o
 obj-$(CONFIG_MACH_OMAP4_PANDA)		+= board-omap4panda.o \
-					   hsmmc.o
+					   hsmmc.o \
+					   omap_phy_internal.o
 
 obj-$(CONFIG_MACH_OMAP3517EVM)		+= board-am3517evm.o
 
+29 -6
arch/arm/mach-omap2/board-4430sdp.c
···
 #define ETH_KS8851_IRQ			34
 #define ETH_KS8851_POWER_ON		48
 #define ETH_KS8851_QUART		138
+#define OMAP4SDP_MDM_PWR_EN_GPIO	157
 #define OMAP4_SFH7741_SENSOR_OUTPUT_GPIO	184
 #define OMAP4_SFH7741_ENABLE_GPIO		188
 
···
 	omap_gpio_init();
 }
 
+static const struct ehci_hcd_omap_platform_data ehci_pdata __initconst = {
+	.port_mode[0]	= EHCI_HCD_OMAP_MODE_PHY,
+	.port_mode[1]	= EHCI_HCD_OMAP_MODE_UNKNOWN,
+	.port_mode[2]	= EHCI_HCD_OMAP_MODE_UNKNOWN,
+	.phy_reset	= false,
+	.reset_gpio_port[0]	= -EINVAL,
+	.reset_gpio_port[1]	= -EINVAL,
+	.reset_gpio_port[2]	= -EINVAL,
+};
+
 static struct omap_musb_board_data musb_board_data = {
 	.interface_type		= MUSB_INTERFACE_UTMI,
-	.mode			= MUSB_PERIPHERAL,
+	.mode			= MUSB_OTG,
 	.power			= 100,
+};
+
+static struct twl4030_usb_data omap4_usbphy_data = {
+	.phy_init	= omap4430_phy_init,
+	.phy_exit	= omap4430_phy_exit,
+	.phy_power	= omap4430_phy_power,
+	.phy_set_clock	= omap4430_phy_set_clk,
 };
 
 static struct omap2_hsmmc_info mmc[] = {
···
 	.vaux1		= &sdp4430_vaux1,
 	.vaux2		= &sdp4430_vaux2,
 	.vaux3		= &sdp4430_vaux3,
+	.usb		= &omap4_usbphy_data
 };
 
 static struct i2c_board_info __initdata sdp4430_i2c_boardinfo[] = {
···
 	platform_add_devices(sdp4430_devices, ARRAY_SIZE(sdp4430_devices));
 	omap_serial_init();
 	omap4_twl6030_hsmmc_init(mmc);
-	/* OMAP4 SDP uses internal transceiver so register nop transceiver */
-	usb_nop_xceiv_register();
-	/* FIXME: allow multi-omap to boot until musb is updated for omap4 */
-	if (!cpu_is_omap44xx())
-		usb_musb_init(&musb_board_data);
+
+	/* Power on the ULPI PHY */
+	if (gpio_is_valid(OMAP4SDP_MDM_PWR_EN_GPIO)) {
+		/* FIXME: Assumes pad is already muxed for GPIO mode */
+		gpio_request(OMAP4SDP_MDM_PWR_EN_GPIO, "USBB1 PHY VMDM_3V3");
+		gpio_direction_output(OMAP4SDP_MDM_PWR_EN_GPIO, 1);
+	}
+	usb_ehci_init(&ehci_pdata);
+	usb_musb_init(&musb_board_data);
 
 	status = omap_ethernet_init();
 	if (status) {
+2 -3
arch/arm/mach-omap2/board-n8x0.c
···
 #define TUSB6010_GPIO_ENABLE	0
 #define TUSB6010_DMACHAN	0x3f
 
-#if defined(CONFIG_USB_TUSB6010) || \
-	defined(CONFIG_USB_TUSB6010_MODULE)
+#ifdef CONFIG_USB_MUSB_TUSB6010
 /*
  * Enable or disable power to TUSB6010. When enabling, turn on 3.3 V and
  * 1.5 V voltage regulators of PM companion chip. Companion chip will then
···
 
 static void __init n8x0_usb_init(void) {}
 
-#endif /*CONFIG_USB_TUSB6010 */
+#endif /*CONFIG_USB_MUSB_TUSB6010 */
 
 
 static struct omap2_mcspi_device_config p54spi_mcspi_config = {
+10 -4
arch/arm/mach-omap2/board-omap4panda.c
···
 
 static struct omap_musb_board_data musb_board_data = {
 	.interface_type		= MUSB_INTERFACE_UTMI,
-	.mode			= MUSB_PERIPHERAL,
+	.mode			= MUSB_OTG,
 	.power			= 100,
+};
+
+static struct twl4030_usb_data omap4_usbphy_data = {
+	.phy_init	= omap4430_phy_init,
+	.phy_exit	= omap4430_phy_exit,
+	.phy_power	= omap4430_phy_power,
+	.phy_set_clock	= omap4430_phy_set_clk,
 };
 
 static struct omap2_hsmmc_info mmc[] = {
···
 	.vaux1		= &omap4_panda_vaux1,
 	.vaux2		= &omap4_panda_vaux2,
 	.vaux3		= &omap4_panda_vaux3,
+	.usb		= &omap4_usbphy_data,
 };
 
 static struct i2c_board_info __initdata omap4_panda_i2c_boardinfo[] = {
···
 	/* OMAP4 Panda uses internal transceiver so register nop transceiver */
 	usb_nop_xceiv_register();
 	omap4_ehci_init();
-	/* FIXME: allow multi-omap to boot until musb is updated for omap4 */
-	if (!cpu_is_omap44xx())
-		usb_musb_init(&musb_board_data);
+	usb_musb_init(&musb_board_data);
 }
 
 static void __init omap4_panda_map_io(void)
+1 -1
arch/arm/mach-omap2/clock2420_data.c
···
 	CLK("omap-aes",	"ick",		&aes_ick,	CK_242X),
 	CLK(NULL,	"pka_ick",	&pka_ick,	CK_242X),
 	CLK(NULL,	"usb_fck",	&usb_fck,	CK_242X),
-	CLK("musb_hdrc",	"fck",	&osc_ck,	CK_242X),
+	CLK("musb-hdrc",	"fck",	&osc_ck,	CK_242X),
 };
 
 /*
+1 -1
arch/arm/mach-omap2/clock2430_data.c
···
 	CLK("omap-aes",	"ick",		&aes_ick,	CK_243X),
 	CLK(NULL,	"pka_ick",	&pka_ick,	CK_243X),
 	CLK(NULL,	"usb_fck",	&usb_fck,	CK_243X),
-	CLK("musb_hdrc",	"ick",	&usbhs_ick,	CK_243X),
+	CLK("musb-omap2430",	"ick",	&usbhs_ick,	CK_243X),
 	CLK("mmci-omap-hs.0",	"ick",	&mmchs1_ick,	CK_243X),
 	CLK("mmci-omap-hs.0",	"fck",	&mmchs1_fck,	CK_243X),
 	CLK("mmci-omap-hs.1",	"ick",	&mmchs2_ick,	CK_243X),
+9 -4
arch/arm/mach-omap2/clock3xxx_data.c
···
 	CLK(NULL,	"cpefuse_fck",	&cpefuse_fck,	CK_3430ES2 | CK_AM35XX),
 	CLK(NULL,	"ts_fck",	&ts_fck,	CK_3430ES2 | CK_AM35XX),
 	CLK(NULL,	"usbtll_fck",	&usbtll_fck,	CK_3430ES2 | CK_AM35XX),
+	CLK("ehci-omap.0",	"usbtll_fck",	&usbtll_fck,	CK_3430ES2 | CK_AM35XX),
 	CLK("omap-mcbsp.1",	"prcm_fck",	&core_96m_fck,	CK_3XXX),
 	CLK("omap-mcbsp.5",	"prcm_fck",	&core_96m_fck,	CK_3XXX),
 	CLK(NULL,	"core_96m_fck",	&core_96m_fck,	CK_3XXX),
···
 	CLK(NULL,	"ssi_sst_fck",	&ssi_sst_fck_3430es1,	CK_3430ES1),
 	CLK(NULL,	"ssi_sst_fck",	&ssi_sst_fck_3430es2,	CK_3430ES2),
 	CLK(NULL,	"core_l3_ick",	&core_l3_ick,	CK_3XXX),
-	CLK("musb_hdrc",	"ick",	&hsotgusb_ick_3430es1,	CK_3430ES1),
-	CLK("musb_hdrc",	"ick",	&hsotgusb_ick_3430es2,	CK_3430ES2),
+	CLK("musb-omap2430",	"ick",	&hsotgusb_ick_3430es1,	CK_3430ES1),
+	CLK("musb-omap2430",	"ick",	&hsotgusb_ick_3430es2,	CK_3430ES2),
 	CLK(NULL,	"sdrc_ick",	&sdrc_ick,	CK_3XXX),
 	CLK(NULL,	"gpmc_fck",	&gpmc_fck,	CK_3XXX),
 	CLK(NULL,	"security_l3_ick", &security_l3_ick, CK_343X),
 	CLK(NULL,	"pka_ick",	&pka_ick,	CK_343X),
 	CLK(NULL,	"core_l4_ick",	&core_l4_ick,	CK_3XXX),
 	CLK(NULL,	"usbtll_ick",	&usbtll_ick,	CK_3430ES2 | CK_AM35XX),
+	CLK("ehci-omap.0",	"usbtll_ick",	&usbtll_ick,	CK_3430ES2 | CK_AM35XX),
 	CLK("mmci-omap-hs.2",	"ick",	&mmchs3_ick,	CK_3430ES2 | CK_AM35XX),
 	CLK(NULL,	"icr_ick",	&icr_ick,	CK_343X),
 	CLK("omap-aes",	"ick",	&aes2_ick,	CK_343X),
···
 	CLK(NULL,	"cam_ick",	&cam_ick,	CK_343X),
 	CLK(NULL,	"csi2_96m_fck",	&csi2_96m_fck,	CK_343X),
 	CLK(NULL,	"usbhost_120m_fck", &usbhost_120m_fck, CK_3430ES2 | CK_AM35XX),
+	CLK("ehci-omap.0",	"hs_fck",	&usbhost_120m_fck,	CK_3430ES2 | CK_AM35XX),
 	CLK(NULL,	"usbhost_48m_fck", &usbhost_48m_fck, CK_3430ES2 | CK_AM35XX),
+	CLK("ehci-omap.0",	"fs_fck",	&usbhost_48m_fck,	CK_3430ES2 | CK_AM35XX),
 	CLK(NULL,	"usbhost_ick",	&usbhost_ick,	CK_3430ES2 | CK_AM35XX),
+	CLK("ehci-omap.0",	"usbhost_ick",	&usbhost_ick,	CK_3430ES2 | CK_AM35XX),
 	CLK(NULL,	"usim_fck",	&usim_fck,	CK_3430ES2),
 	CLK(NULL,	"gpt1_fck",	&gpt1_fck,	CK_3XXX),
 	CLK(NULL,	"wkup_32k_fck",	&wkup_32k_fck,	CK_3XXX),
···
 	CLK("davinci_emac",	"phy_clk",	&emac_fck,	CK_AM35XX),
 	CLK("vpfe-capture",	"master",	&vpfe_ick,	CK_AM35XX),
 	CLK("vpfe-capture",	"slave",	&vpfe_fck,	CK_AM35XX),
-	CLK("musb_hdrc",	"ick",	&hsotgusb_ick_am35xx,	CK_AM35XX),
-	CLK("musb_hdrc",	"fck",	&hsotgusb_fck_am35xx,	CK_AM35XX),
+	CLK("musb-am35x",	"ick",	&hsotgusb_ick_am35xx,	CK_AM35XX),
+	CLK("musb-am35x",	"fck",	&hsotgusb_fck_am35xx,	CK_AM35XX),
 	CLK(NULL,	"hecc_ck",	&hecc_ck,	CK_AM35XX),
 	CLK(NULL,	"uart4_ick",	&uart4_ick_am35xx,	CK_AM35XX),
 };
+6 -1
arch/arm/mach-omap2/clock44xx_data.c
···
 	CLK(NULL,	"uart3_fck",	&uart3_fck,	CK_443X),
 	CLK(NULL,	"uart4_fck",	&uart4_fck,	CK_443X),
 	CLK(NULL,	"usb_host_fs_fck",	&usb_host_fs_fck,	CK_443X),
+	CLK("ehci-omap.0",	"fs_fck",	&usb_host_fs_fck,	CK_443X),
 	CLK(NULL,	"usb_host_hs_utmi_p3_clk",	&usb_host_hs_utmi_p3_clk,	CK_443X),
 	CLK(NULL,	"usb_host_hs_hsic60m_p1_clk",	&usb_host_hs_hsic60m_p1_clk,	CK_443X),
 	CLK(NULL,	"usb_host_hs_hsic60m_p2_clk",	&usb_host_hs_hsic60m_p2_clk,	CK_443X),
···
 	CLK(NULL,	"usb_host_hs_hsic480m_p2_clk",	&usb_host_hs_hsic480m_p2_clk,	CK_443X),
 	CLK(NULL,	"usb_host_hs_func48mclk",	&usb_host_hs_func48mclk,	CK_443X),
 	CLK(NULL,	"usb_host_hs_fck",	&usb_host_hs_fck,	CK_443X),
+	CLK("ehci-omap.0",	"hs_fck",	&usb_host_hs_fck,	CK_443X),
+	CLK("ehci-omap.0",	"usbhost_ick",	&dummy_ck,	CK_443X),
 	CLK(NULL,	"otg_60m_gfclk",	&otg_60m_gfclk,	CK_443X),
 	CLK(NULL,	"usb_otg_hs_xclk",	&usb_otg_hs_xclk,	CK_443X),
-	CLK("musb_hdrc",	"ick",	&usb_otg_hs_ick,	CK_443X),
+	CLK("musb-omap2430",	"ick",	&usb_otg_hs_ick,	CK_443X),
 	CLK(NULL,	"usb_phy_cm_clk32k",	&usb_phy_cm_clk32k,	CK_443X),
 	CLK(NULL,	"usb_tll_hs_usb_ch2_clk",	&usb_tll_hs_usb_ch2_clk,	CK_443X),
 	CLK(NULL,	"usb_tll_hs_usb_ch0_clk",	&usb_tll_hs_usb_ch0_clk,	CK_443X),
 	CLK(NULL,	"usb_tll_hs_usb_ch1_clk",	&usb_tll_hs_usb_ch1_clk,	CK_443X),
 	CLK(NULL,	"usb_tll_hs_ick",	&usb_tll_hs_ick,	CK_443X),
+	CLK("ehci-omap.0",	"usbtll_ick",	&usb_tll_hs_ick,	CK_443X),
+	CLK("ehci-omap.0",	"usbtll_fck",	&dummy_ck,	CK_443X),
 	CLK(NULL,	"usim_ck",	&usim_ck,	CK_443X),
 	CLK(NULL,	"usim_fclk",	&usim_fclk,	CK_443X),
 	CLK(NULL,	"usim_fck",	&usim_fck,	CK_443X),
+149
arch/arm/mach-omap2/omap_phy_internal.c
···
+/*
+ * This file configures the internal USB PHY in OMAP4430. Used
+ * with TWL6030 transceiver and MUSB on OMAP4430.
+ *
+ * Copyright (C) 2010 Texas Instruments Incorporated - http://www.ti.com
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * Author: Hema HK <hemahk@ti.com>
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <linux/clk.h>
+#include <linux/io.h>
+#include <linux/err.h>
+#include <linux/usb.h>
+
+#include <plat/usb.h>
+
+/* OMAP control module register for UTMI PHY */
+#define CONTROL_DEV_CONF		0x300
+#define PHY_PD				0x1
+
+#define USBOTGHS_CONTROL		0x33c
+#define	AVALID				BIT(0)
+#define	BVALID				BIT(1)
+#define	VBUSVALID			BIT(2)
+#define	SESSEND				BIT(3)
+#define	IDDIG				BIT(4)
+
+static struct clk *phyclk, *clk48m, *clk32k;
+static void __iomem *ctrl_base;
+
+int omap4430_phy_init(struct device *dev)
+{
+	ctrl_base = ioremap(OMAP443X_SCM_BASE, SZ_1K);
+	if (!ctrl_base) {
+		dev_err(dev, "control module ioremap failed\n");
+		return -ENOMEM;
+	}
+	/* Power down the phy */
+	__raw_writel(PHY_PD, ctrl_base + CONTROL_DEV_CONF);
+	phyclk = clk_get(dev, "ocp2scp_usb_phy_ick");
+
+	if (IS_ERR(phyclk)) {
+		dev_err(dev, "cannot clk_get ocp2scp_usb_phy_ick\n");
+		iounmap(ctrl_base);
+		return PTR_ERR(phyclk);
+	}
+
+	clk48m = clk_get(dev, "ocp2scp_usb_phy_phy_48m");
+	if (IS_ERR(clk48m)) {
+		dev_err(dev, "cannot clk_get ocp2scp_usb_phy_phy_48m\n");
+		clk_put(phyclk);
+		iounmap(ctrl_base);
+		return PTR_ERR(clk48m);
+	}
+
+	clk32k = clk_get(dev, "usb_phy_cm_clk32k");
+	if (IS_ERR(clk32k)) {
+		dev_err(dev, "cannot clk_get usb_phy_cm_clk32k\n");
+		clk_put(phyclk);
+		clk_put(clk48m);
+		iounmap(ctrl_base);
+		return PTR_ERR(clk32k);
+	}
+	return 0;
+}
+
+int omap4430_phy_set_clk(struct device *dev, int on)
+{
+	static int state;
+
+	if (on && !state) {
+		/* Enable the phy clocks */
+		clk_enable(phyclk);
+		clk_enable(clk48m);
+		clk_enable(clk32k);
+		state = 1;
+	} else if (state) {
+		/* Disable the phy clocks */
+		clk_disable(phyclk);
+		clk_disable(clk48m);
+		clk_disable(clk32k);
+		state = 0;
+	}
+	return 0;
+}
+
+int omap4430_phy_power(struct device *dev, int ID, int on)
+{
+	if (on) {
+		/* enabled the clocks */
+		omap4430_phy_set_clk(dev, 1);
+		/* power on the phy */
+		if (__raw_readl(ctrl_base + CONTROL_DEV_CONF) & PHY_PD) {
+			__raw_writel(~PHY_PD, ctrl_base + CONTROL_DEV_CONF);
+			mdelay(200);
+		}
+		if (ID)
+			/* enable VBUS valid, IDDIG groung */
+			__raw_writel(AVALID | VBUSVALID, ctrl_base +
+							USBOTGHS_CONTROL);
+		else
+			/*
+			 * Enable VBUS Valid, AValid and IDDIG
+			 * high impedence
+			 */
+			__raw_writel(IDDIG | AVALID | VBUSVALID,
+						ctrl_base + USBOTGHS_CONTROL);
+	} else {
+		/* Enable session END and IDIG to high impedence. */
+		__raw_writel(SESSEND | IDDIG, ctrl_base +
+					USBOTGHS_CONTROL);
+		/* Disable the clocks */
+		omap4430_phy_set_clk(dev, 0);
+		/* Power down the phy */
+		__raw_writel(PHY_PD, ctrl_base + CONTROL_DEV_CONF);
+	}
+
+	return 0;
+}
+
+int omap4430_phy_exit(struct device *dev)
+{
+	if (ctrl_base)
+		iounmap(ctrl_base);
+	if (phyclk)
+		clk_put(phyclk);
+	if (clk48m)
+		clk_put(clk48m);
+	if (clk32k)
+		clk_put(clk32k);
+
+	return 0;
+}
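omap4430_phy_init() in the new file acquires an ioremap mapping and three clocks, and when a later clk_get() fails it releases everything already obtained before returning the error. A minimal userspace sketch of that acquire-or-rollback shape (all names below are illustrative stand-ins, not kernel APIs):

```c
#include <assert.h>

/* Acquire three resources in order; on failure, release the ones
 * already held and report an error, mirroring the clk_get()/clk_put()/
 * iounmap() unwind in omap4430_phy_init().  fail_at simulates which
 * acquisition fails (0 = none). */
static int held[3];

static int acquire(int i, int fail_at)
{
	if (fail_at == i + 1)
		return -1;	/* simulated clk_get() failure */
	held[i] = 1;
	return 0;
}

static void release(int i)
{
	held[i] = 0;		/* simulated clk_put() */
}

int phy_init_sketch(int fail_at)
{
	if (acquire(0, fail_at))
		return -1;
	if (acquire(1, fail_at)) {
		release(0);
		return -1;
	}
	if (acquire(2, fail_at)) {
		release(1);
		release(0);
		return -1;
	}
	return 0;		/* all three held; exit path puts them */
}
```

The point of the pattern is that every error exit leaves the held-resource set empty, so a later omap4430_phy_exit()-style teardown never double-releases anything.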
+136 -8
arch/arm/mach-omap2/usb-ehci.c
···
 
 static struct resource ehci_resources[] = {
 	{
-		.start	= OMAP34XX_EHCI_BASE,
-		.end	= OMAP34XX_EHCI_BASE + SZ_1K - 1,
 		.flags	= IORESOURCE_MEM,
 	},
 	{
-		.start	= OMAP34XX_UHH_CONFIG_BASE,
-		.end	= OMAP34XX_UHH_CONFIG_BASE + SZ_1K - 1,
 		.flags	= IORESOURCE_MEM,
 	},
 	{
-		.start	= OMAP34XX_USBTLL_BASE,
-		.end	= OMAP34XX_USBTLL_BASE + SZ_4K - 1,
 		.flags	= IORESOURCE_MEM,
 	},
 	{	/* general IRQ */
-		.start	= INT_34XX_EHCI_IRQ,
 		.flags	= IORESOURCE_IRQ,
 	}
 };
···
 	return;
 }
 
+static void setup_4430ehci_io_mux(const enum ehci_hcd_omap_mode *port_mode)
+{
+	switch (port_mode[0]) {
+	case EHCI_HCD_OMAP_MODE_PHY:
+		omap_mux_init_signal("usbb1_ulpiphy_stp",
+			OMAP_PIN_OUTPUT);
+		omap_mux_init_signal("usbb1_ulpiphy_clk",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpiphy_dir",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpiphy_nxt",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpiphy_dat0",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpiphy_dat1",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpiphy_dat2",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpiphy_dat3",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpiphy_dat4",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpiphy_dat5",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpiphy_dat6",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpiphy_dat7",
+			OMAP_PIN_INPUT_PULLDOWN);
+		break;
+	case EHCI_HCD_OMAP_MODE_TLL:
+		omap_mux_init_signal("usbb1_ulpitll_stp",
+			OMAP_PIN_INPUT_PULLUP);
+		omap_mux_init_signal("usbb1_ulpitll_clk",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpitll_dir",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpitll_nxt",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpitll_dat0",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpitll_dat1",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpitll_dat2",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpitll_dat3",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpitll_dat4",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpitll_dat5",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpitll_dat6",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb1_ulpitll_dat7",
+			OMAP_PIN_INPUT_PULLDOWN);
+		break;
+	case EHCI_HCD_OMAP_MODE_UNKNOWN:
+	default:
+		break;
+	}
+	switch (port_mode[1]) {
+	case EHCI_HCD_OMAP_MODE_PHY:
+		omap_mux_init_signal("usbb2_ulpiphy_stp",
+			OMAP_PIN_OUTPUT);
+		omap_mux_init_signal("usbb2_ulpiphy_clk",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpiphy_dir",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpiphy_nxt",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpiphy_dat0",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpiphy_dat1",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpiphy_dat2",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpiphy_dat3",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpiphy_dat4",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpiphy_dat5",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpiphy_dat6",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpiphy_dat7",
+			OMAP_PIN_INPUT_PULLDOWN);
+		break;
+	case EHCI_HCD_OMAP_MODE_TLL:
+		omap_mux_init_signal("usbb2_ulpitll_stp",
+			OMAP_PIN_INPUT_PULLUP);
+		omap_mux_init_signal("usbb2_ulpitll_clk",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpitll_dir",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpitll_nxt",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpitll_dat0",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpitll_dat1",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpitll_dat2",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpitll_dat3",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpitll_dat4",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpitll_dat5",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpitll_dat6",
+			OMAP_PIN_INPUT_PULLDOWN);
+		omap_mux_init_signal("usbb2_ulpitll_dat7",
+			OMAP_PIN_INPUT_PULLDOWN);
+		break;
+	case EHCI_HCD_OMAP_MODE_UNKNOWN:
+	default:
+		break;
+	}
+}
+
 void __init usb_ehci_init(const struct ehci_hcd_omap_platform_data *pdata)
 {
 	platform_device_add_data(&ehci_device, pdata, sizeof(*pdata));
 
 	/* Setup Pin IO MUX for EHCI */
-	if (cpu_is_omap34xx())
+	if (cpu_is_omap34xx()) {
+		ehci_resources[0].start	= OMAP34XX_EHCI_BASE;
+		ehci_resources[0].end	= OMAP34XX_EHCI_BASE + SZ_1K - 1;
+		ehci_resources[1].start	= OMAP34XX_UHH_CONFIG_BASE;
+		ehci_resources[1].end	= OMAP34XX_UHH_CONFIG_BASE + SZ_1K - 1;
+		ehci_resources[2].start	= OMAP34XX_USBTLL_BASE;
+		ehci_resources[2].end	= OMAP34XX_USBTLL_BASE + SZ_4K - 1;
+		ehci_resources[3].start = INT_34XX_EHCI_IRQ;
 		setup_ehci_io_mux(pdata->port_mode);
+	} else if (cpu_is_omap44xx()) {
+		ehci_resources[0].start	= OMAP44XX_HSUSB_EHCI_BASE;
+		ehci_resources[0].end	= OMAP44XX_HSUSB_EHCI_BASE + SZ_1K - 1;
+		ehci_resources[1].start	= OMAP44XX_UHH_CONFIG_BASE;
+		ehci_resources[1].end	= OMAP44XX_UHH_CONFIG_BASE + SZ_2K - 1;
+		ehci_resources[2].start	= OMAP44XX_USBTLL_BASE;
+		ehci_resources[2].end	= OMAP44XX_USBTLL_BASE + SZ_4K - 1;
+		ehci_resources[3].start = OMAP44XX_IRQ_EHCI;
+		setup_4430ehci_io_mux(pdata->port_mode);
+	}
 
 	if (platform_device_register(&ehci_device) < 0) {
 		printk(KERN_ERR "Unable to register HS-USB (EHCI) device\n");
+102 -2
arch/arm/mach-omap2/usb-musb.c
··· 30 30 #include <mach/irqs.h> 31 31 #include <mach/am35xx.h> 32 32 #include <plat/usb.h> 33 + #include "control.h" 33 34 34 - #ifdef CONFIG_USB_MUSB_SOC 35 + #if defined(CONFIG_USB_MUSB_OMAP2PLUS) || defined(CONFIG_USB_MUSB_AM35X) 36 + 37 + static void am35x_musb_reset(void) 38 + { 39 + u32 regval; 40 + 41 + /* Reset the musb interface */ 42 + regval = omap_ctrl_readl(AM35XX_CONTROL_IP_SW_RESET); 43 + 44 + regval |= AM35XX_USBOTGSS_SW_RST; 45 + omap_ctrl_writel(regval, AM35XX_CONTROL_IP_SW_RESET); 46 + 47 + regval &= ~AM35XX_USBOTGSS_SW_RST; 48 + omap_ctrl_writel(regval, AM35XX_CONTROL_IP_SW_RESET); 49 + 50 + regval = omap_ctrl_readl(AM35XX_CONTROL_IP_SW_RESET); 51 + } 52 + 53 + static void am35x_musb_phy_power(u8 on) 54 + { 55 + unsigned long timeout = jiffies + msecs_to_jiffies(100); 56 + u32 devconf2; 57 + 58 + if (on) { 59 + /* 60 + * Start the on-chip PHY and its PLL. 61 + */ 62 + devconf2 = omap_ctrl_readl(AM35XX_CONTROL_DEVCONF2); 63 + 64 + devconf2 &= ~(CONF2_RESET | CONF2_PHYPWRDN | CONF2_OTGPWRDN); 65 + devconf2 |= CONF2_PHY_PLLON; 66 + 67 + omap_ctrl_writel(devconf2, AM35XX_CONTROL_DEVCONF2); 68 + 69 + pr_info("Waiting for PHY clock good...\n"); 70 + while (!(omap_ctrl_readl(AM35XX_CONTROL_DEVCONF2) 71 + & CONF2_PHYCLKGD)) { 72 + cpu_relax(); 73 + 74 + if (time_after(jiffies, timeout)) { 75 + pr_err("musb PHY clock good timed out\n"); 76 + break; 77 + } 78 + } 79 + } else { 80 + /* 81 + * Power down the on-chip PHY.
82 + */ 83 + devconf2 = omap_ctrl_readl(AM35XX_CONTROL_DEVCONF2); 84 + 85 + devconf2 &= ~CONF2_PHY_PLLON; 86 + devconf2 |= CONF2_PHYPWRDN | CONF2_OTGPWRDN; 87 + omap_ctrl_writel(devconf2, AM35XX_CONTROL_DEVCONF2); 88 + } 89 + } 90 + 91 + static void am35x_musb_clear_irq(void) 92 + { 93 + u32 regval; 94 + 95 + regval = omap_ctrl_readl(AM35XX_CONTROL_LVL_INTR_CLEAR); 96 + regval |= AM35XX_USBOTGSS_INT_CLR; 97 + omap_ctrl_writel(regval, AM35XX_CONTROL_LVL_INTR_CLEAR); 98 + regval = omap_ctrl_readl(AM35XX_CONTROL_LVL_INTR_CLEAR); 99 + } 100 + 101 + static void am35x_musb_set_mode(u8 musb_mode) 102 + { 103 + u32 devconf2 = omap_ctrl_readl(AM35XX_CONTROL_DEVCONF2); 104 + 105 + devconf2 &= ~CONF2_OTGMODE; 106 + switch (musb_mode) { 107 + #ifdef CONFIG_USB_MUSB_HDRC_HCD 108 + case MUSB_HOST: /* Force VBUS valid, ID = 0 */ 109 + devconf2 |= CONF2_FORCE_HOST; 110 + break; 111 + #endif 112 + #ifdef CONFIG_USB_GADGET_MUSB_HDRC 113 + case MUSB_PERIPHERAL: /* Force VBUS valid, ID = 1 */ 114 + devconf2 |= CONF2_FORCE_DEVICE; 115 + break; 116 + #endif 117 + #ifdef CONFIG_USB_MUSB_OTG 118 + case MUSB_OTG: /* Don't override the VBUS/ID comparators */ 119 + devconf2 |= CONF2_NO_OVERRIDE; 120 + break; 121 + #endif 122 + default: 123 + pr_info("Unsupported mode %u\n", musb_mode); 124 + } 125 + 126 + omap_ctrl_writel(devconf2, AM35XX_CONTROL_DEVCONF2); 127 + } 35 128 36 129 static struct resource musb_resources[] = { 37 130 [0] = { /* start and end set dynamically */ ··· 133 40 [1] = { /* general IRQ */ 134 41 .start = INT_243X_HS_USB_MC, 135 42 .flags = IORESOURCE_IRQ, 43 + .name = "mc", 136 44 }, 137 45 [2] = { /* DMA IRQ */ 138 46 .start = INT_243X_HS_USB_DMA, 139 47 .flags = IORESOURCE_IRQ, 48 + .name = "dma", 140 49 }, 141 50 }; 142 51 ··· 170 75 static u64 musb_dmamask = DMA_BIT_MASK(32); 171 76 172 77 static struct platform_device musb_device = { 173 - .name = "musb_hdrc", 78 + .name = "musb-omap2430", 174 79 .id = -1, 175 80 .dev = { 176 81 .dma_mask = &musb_dmamask,
··· 186 91 if (cpu_is_omap243x()) { 187 92 musb_resources[0].start = OMAP243X_HS_BASE; 188 93 } else if (cpu_is_omap3517() || cpu_is_omap3505()) { 94 + musb_device.name = "musb-am35x"; 189 95 musb_resources[0].start = AM35XX_IPSS_USBOTGSS_BASE; 190 96 musb_resources[1].start = INT_35XX_USBOTG_IRQ; 97 + board_data->set_phy_power = am35x_musb_phy_power; 98 + board_data->clear_irq = am35x_musb_clear_irq; 99 + board_data->set_mode = am35x_musb_set_mode; 100 + board_data->reset = am35x_musb_reset; 191 101 } else if (cpu_is_omap34xx()) { 192 102 musb_resources[0].start = OMAP34XX_HSUSB_OTG_BASE; 193 103 } else if (cpu_is_omap44xx()) {
+1 -1
arch/arm/mach-omap2/usb-tusb6010.c
··· 223 223 static u64 tusb_dmamask = ~(u32)0; 224 224 225 225 static struct platform_device tusb_device = { 226 - .name = "musb_hdrc", 226 + .name = "musb-tusb", 227 227 .id = -1, 228 228 .dev = { 229 229 .dma_mask = &tusb_dmamask,
+5
arch/arm/plat-omap/include/plat/omap44xx.h
··· 52 52 #define OMAP4_MMU1_BASE 0x55082000 53 53 #define OMAP4_MMU2_BASE 0x4A066000 54 54 55 + #define OMAP44XX_USBTLL_BASE (L4_44XX_BASE + 0x62000) 56 + #define OMAP44XX_UHH_CONFIG_BASE (L4_44XX_BASE + 0x64000) 57 + #define OMAP44XX_HSUSB_OHCI_BASE (L4_44XX_BASE + 0x64800) 58 + #define OMAP44XX_HSUSB_EHCI_BASE (L4_44XX_BASE + 0x64C00) 59 + 55 60 #endif /* __ASM_ARCH_OMAP44XX_H */ 56 61
+10
arch/arm/plat-omap/include/plat/usb.h
··· 11 11 EHCI_HCD_OMAP_MODE_UNKNOWN, 12 12 EHCI_HCD_OMAP_MODE_PHY, 13 13 EHCI_HCD_OMAP_MODE_TLL, 14 + EHCI_HCD_OMAP_MODE_HSIC, 14 15 }; 15 16 16 17 enum ohci_omap3_port_mode { ··· 70 69 u8 mode; 71 70 u16 power; 72 71 unsigned extvbus:1; 72 + void (*set_phy_power)(u8 on); 73 + void (*clear_irq)(void); 74 + void (*set_mode)(u8 mode); 75 + void (*reset)(void); 73 76 }; 74 77 75 78 enum musb_interface {MUSB_INTERFACE_ULPI, MUSB_INTERFACE_UTMI}; ··· 83 78 extern void usb_ehci_init(const struct ehci_hcd_omap_platform_data *pdata); 84 79 85 80 extern void usb_ohci_init(const struct ohci_hcd_omap_platform_data *pdata); 81 + 82 + extern int omap4430_phy_power(struct device *dev, int ID, int on); 83 + extern int omap4430_phy_set_clk(struct device *dev, int on); 84 + extern int omap4430_phy_init(struct device *dev); 85 + extern int omap4430_phy_exit(struct device *dev); 86 86 87 87 #endif 88 88
+1 -1
arch/blackfin/mach-bf527/boards/ad7160eval.c
··· 83 83 static u64 musb_dmamask = ~(u32)0; 84 84 85 85 static struct platform_device musb_device = { 86 - .name = "musb_hdrc", 86 + .name = "musb-blackfin", 87 87 .id = 0, 88 88 .dev = { 89 89 .dma_mask = &musb_dmamask,
+3 -1
arch/blackfin/mach-bf527/boards/cm_bf527.c
··· 82 82 .start = IRQ_USB_INT0, 83 83 .end = IRQ_USB_INT0, 84 84 .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 85 + .name = "mc" 85 86 }, 86 87 [2] = { /* DMA IRQ */ 87 88 .start = IRQ_USB_DMA, 88 89 .end = IRQ_USB_DMA, 89 90 .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 91 + .name = "dma" 90 92 }, 91 93 }; 92 94 ··· 120 118 static u64 musb_dmamask = ~(u32)0; 121 119 122 120 static struct platform_device musb_device = { 123 - .name = "musb_hdrc", 121 + .name = "musb-blackfin", 124 122 .id = 0, 125 123 .dev = { 126 124 .dma_mask = &musb_dmamask,
+3 -1
arch/blackfin/mach-bf527/boards/ezbrd.c
··· 46 46 .start = IRQ_USB_INT0, 47 47 .end = IRQ_USB_INT0, 48 48 .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 49 + .name = "mc" 49 50 }, 50 51 [2] = { /* DMA IRQ */ 51 52 .start = IRQ_USB_DMA, 52 53 .end = IRQ_USB_DMA, 53 54 .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 55 + .name = "dma" 54 56 }, 55 57 }; 56 58 ··· 84 82 static u64 musb_dmamask = ~(u32)0; 85 83 86 84 static struct platform_device musb_device = { 87 - .name = "musb_hdrc", 85 + .name = "musb-blackfin", 88 86 .id = 0, 89 87 .dev = { 90 88 .dma_mask = &musb_dmamask,
+3 -1
arch/blackfin/mach-bf527/boards/ezkit.c
··· 86 86 .start = IRQ_USB_INT0, 87 87 .end = IRQ_USB_INT0, 88 88 .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 89 + .name = "mc" 89 90 }, 90 91 [2] = { /* DMA IRQ */ 91 92 .start = IRQ_USB_DMA, 92 93 .end = IRQ_USB_DMA, 93 94 .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 95 + .name = "dma" 94 96 }, 95 97 }; 96 98 ··· 124 122 static u64 musb_dmamask = ~(u32)0; 125 123 126 124 static struct platform_device musb_device = { 127 - .name = "musb_hdrc", 125 + .name = "musb-blackfin", 128 126 .id = 0, 129 127 .dev = { 130 128 .dma_mask = &musb_dmamask,
+1 -1
arch/blackfin/mach-bf527/boards/tll6527m.c
··· 91 91 static u64 musb_dmamask = ~(u32)0; 92 92 93 93 static struct platform_device musb_device = { 94 - .name = "musb_hdrc", 94 + .name = "musb-blackfin", 95 95 .id = 0, 96 96 .dev = { 97 97 .dma_mask = &musb_dmamask,
+3 -1
arch/blackfin/mach-bf548/boards/cm_bf548.c
··· 482 482 .start = IRQ_USB_INT0, 483 483 .end = IRQ_USB_INT0, 484 484 .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 485 + .name = "mc" 485 486 }, 486 487 [2] = { /* DMA IRQ */ 487 488 .start = IRQ_USB_DMA, 488 489 .end = IRQ_USB_DMA, 489 490 .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 491 + .name = "dma" 490 492 }, 491 493 }; 492 494 ··· 520 518 static u64 musb_dmamask = ~(u32)0; 521 519 522 520 static struct platform_device musb_device = { 523 - .name = "musb_hdrc", 521 + .name = "musb-blackfin", 524 522 .id = 0, 525 523 .dev = { 526 524 .dma_mask = &musb_dmamask,
+3 -1
arch/blackfin/mach-bf548/boards/ezkit.c
··· 587 587 .start = IRQ_USB_INT0, 588 588 .end = IRQ_USB_INT0, 589 589 .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 590 + .name = "mc" 590 591 }, 591 592 [2] = { /* DMA IRQ */ 592 593 .start = IRQ_USB_DMA, 593 594 .end = IRQ_USB_DMA, 594 595 .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 596 + .name = "dma" 595 597 }, 596 598 }; 597 599 ··· 625 623 static u64 musb_dmamask = ~(u32)0; 626 624 627 625 static struct platform_device musb_device = { 628 - .name = "musb_hdrc", 626 + .name = "musb-blackfin", 629 627 .id = 0, 630 628 .dev = { 631 629 .dma_mask = &musb_dmamask,
+5
arch/sh/Kconfig
··· 346 346 select CPU_SH3 347 347 select CPU_HAS_DSP 348 348 select SYS_SUPPORTS_CMT 349 + select USB_ARCH_HAS_OHCI 349 350 help 350 351 Select SH7720 if you have a SH3-DSP SH7720 CPU. 351 352 ··· 355 354 select CPU_SH3 356 355 select CPU_HAS_DSP 357 356 select SYS_SUPPORTS_CMT 357 + select USB_ARCH_HAS_OHCI 358 358 help 359 359 Select SH7721 if you have a SH3-DSP SH7721 CPU. 360 360 ··· 433 431 config CPU_SUBTYPE_SH7763 434 432 bool "Support SH7763 processor" 435 433 select CPU_SH4A 434 + select USB_ARCH_HAS_OHCI 436 435 help 437 436 Select SH7763 if you have a SH4A SH7763(R5S77631) CPU. 438 437 ··· 458 455 select CPU_SHX3 459 456 select CPU_HAS_PTEAEX 460 457 select GENERIC_CLOCKEVENTS_BROADCAST if SMP 458 + select USB_ARCH_HAS_OHCI 459 + select USB_ARCH_HAS_EHCI 461 460 462 461 config CPU_SUBTYPE_SHX3 463 462 bool "Support SH-X3 processor"
+32 -5
arch/sh/kernel/cpu/sh4a/setup-sh7786.c
··· 522 522 }, 523 523 }; 524 524 525 - static struct resource usb_ohci_resources[] = { 525 + #define USB_EHCI_START 0xffe70000 526 + #define USB_OHCI_START 0xffe70400 527 + 528 + static struct resource usb_ehci_resources[] = { 526 529 [0] = { 527 - .start = 0xffe70400, 528 - .end = 0xffe704ff, 530 + .start = USB_EHCI_START, 531 + .end = USB_EHCI_START + 0x3ff, 529 532 .flags = IORESOURCE_MEM, 530 533 }, 531 534 [1] = { ··· 538 535 }, 539 536 }; 540 537 541 - static u64 usb_ohci_dma_mask = DMA_BIT_MASK(32); 538 + static struct platform_device usb_ehci_device = { 539 + .name = "sh_ehci", 540 + .id = -1, 541 + .dev = { 542 + .dma_mask = &usb_ehci_device.dev.coherent_dma_mask, 543 + .coherent_dma_mask = DMA_BIT_MASK(32), 544 + }, 545 + .num_resources = ARRAY_SIZE(usb_ehci_resources), 546 + .resource = usb_ehci_resources, 547 + }; 548 + 549 + static struct resource usb_ohci_resources[] = { 550 + [0] = { 551 + .start = USB_OHCI_START, 552 + .end = USB_OHCI_START + 0x3ff, 553 + .flags = IORESOURCE_MEM, 554 + }, 555 + [1] = { 556 + .start = 77, 557 + .end = 77, 558 + .flags = IORESOURCE_IRQ, 559 + }, 560 + }; 561 + 542 562 static struct platform_device usb_ohci_device = { 543 563 .name = "sh_ohci", 544 564 .id = -1, 545 565 .dev = { 546 - .dma_mask = &usb_ohci_dma_mask, 566 + .dma_mask = &usb_ohci_device.dev.coherent_dma_mask, 547 567 .coherent_dma_mask = DMA_BIT_MASK(32), 548 568 }, 549 569 .num_resources = ARRAY_SIZE(usb_ohci_resources), ··· 596 570 597 571 static struct platform_device *sh7786_devices[] __initdata = { 598 572 &dma0_device, 573 + &usb_ehci_device, 599 574 &usb_ohci_device, 600 575 }; 601 576
+2 -1
drivers/media/video/tlg2300/pd-main.c
··· 452 452 453 453 device_init_wakeup(&udev->dev, 1); 454 454 #ifdef CONFIG_PM 455 - pd->udev->autosuspend_delay = HZ * PM_SUSPEND_DELAY; 455 + pm_runtime_set_autosuspend_delay(&pd->udev->dev, 456 + 1000 * PM_SUSPEND_DELAY); 456 457 usb_enable_autosuspend(pd->udev); 457 458 458 459 if (in_hibernation(pd)) {
+39 -5
drivers/mfd/twl-core.c
··· 95 95 #define twl_has_rtc() false 96 96 #endif 97 97 98 - #if defined(CONFIG_TWL4030_USB) || defined(CONFIG_TWL4030_USB_MODULE) 98 + #if defined(CONFIG_TWL4030_USB) || defined(CONFIG_TWL4030_USB_MODULE) ||\ 99 + defined(CONFIG_TWL6030_USB) || defined(CONFIG_TWL6030_USB_MODULE) 99 100 #define twl_has_usb() true 100 101 #else 101 102 #define twl_has_usb() false ··· 683 682 usb3v1.dev = child; 684 683 } 685 684 } 685 + if (twl_has_usb() && pdata->usb && twl_class_is_6030()) { 686 + 687 + static struct regulator_consumer_supply usb3v3 = { 688 + .supply = "vusb", 689 + }; 690 + 691 + if (twl_has_regulator()) { 692 + /* this is a template that gets copied */ 693 + struct regulator_init_data usb_fixed = { 694 + .constraints.valid_modes_mask = 695 + REGULATOR_MODE_NORMAL 696 + | REGULATOR_MODE_STANDBY, 697 + .constraints.valid_ops_mask = 698 + REGULATOR_CHANGE_MODE 699 + | REGULATOR_CHANGE_STATUS, 700 + }; 701 + 702 + child = add_regulator_linked(TWL6030_REG_VUSB, 703 + &usb_fixed, &usb3v3, 1); 704 + if (IS_ERR(child)) 705 + return PTR_ERR(child); 706 + } 707 + 708 + child = add_child(0, "twl6030_usb", 709 + pdata->usb, sizeof(*pdata->usb), 710 + true, 711 + /* irq1 = VBUS_PRES, irq0 = USB ID */ 712 + pdata->irq_base + USBOTG_INTR_OFFSET, 713 + pdata->irq_base + USB_PRES_INTR_OFFSET); 714 + 715 + if (IS_ERR(child)) 716 + return PTR_ERR(child); 717 + /* we need to connect regulators to this transceiver */ 718 + if (twl_has_regulator() && child) 719 + usb3v3.dev = child; 720 + 721 + } 686 722 687 723 if (twl_has_watchdog()) { 688 724 child = add_child(0, "twl4030_wdt", NULL, 0, false, 0, 0); ··· 850 812 return PTR_ERR(child); 851 813 852 814 child = add_regulator(TWL6030_REG_VDAC, pdata->vdac); 853 - if (IS_ERR(child)) 854 - return PTR_ERR(child); 855 - 856 - child = add_regulator(TWL6030_REG_VUSB, pdata->vusb); 857 815 if (IS_ERR(child)) 858 816 return PTR_ERR(child); 859 817
+8 -1
drivers/mfd/twl6030-irq.c
··· 74 74 USBOTG_INTR_OFFSET, /* Bit 16 ID_WKUP */ 75 75 USBOTG_INTR_OFFSET, /* Bit 17 VBUS_WKUP */ 76 76 USBOTG_INTR_OFFSET, /* Bit 18 ID */ 77 - USBOTG_INTR_OFFSET, /* Bit 19 VBUS */ 77 + USB_PRES_INTR_OFFSET, /* Bit 19 VBUS */ 78 78 CHARGER_INTR_OFFSET, /* Bit 20 CHRG_CTRL */ 79 79 CHARGER_INTR_OFFSET, /* Bit 21 EXT_CHRG */ 80 80 CHARGER_INTR_OFFSET, /* Bit 22 INT_CHRG */ ··· 127 127 128 128 129 129 sts.bytes[3] = 0; /* Only 24 bits are valid*/ 130 + 131 + /* 132 + * Since VBUS status bit is not reliable for VBUS disconnect 133 + * use CHARGER VBUS detection status bit instead. 134 + */ 135 + if (sts.bytes[2] & 0x10) 136 + sts.bytes[2] |= 0x08; 130 137 131 138 for (i = 0; sts.int_sts; sts.int_sts >>= 1, i++) { 132 139 local_irq_disable();
+1 -1
drivers/net/wimax/i2400m/usb.c
··· 514 514 #ifdef CONFIG_PM 515 515 iface->needs_remote_wakeup = 1; /* autosuspend (15s delay) */ 516 516 device_init_wakeup(dev, 1); 517 - usb_dev->autosuspend_delay = 15 * HZ; 517 + pm_runtime_set_autosuspend_delay(&usb_dev->dev, 15000); 518 518 usb_enable_autosuspend(usb_dev); 519 519 #endif 520 520
+1 -1
drivers/staging/bcm/InterfaceInit.c
··· 277 277 if(psAdapter->bDoSuspend) 278 278 { 279 279 #ifdef CONFIG_PM 280 - udev->autosuspend_delay = 0; 280 + pm_runtime_set_autosuspend_delay(&udev->dev, 0); 281 281 intf->needs_remote_wakeup = 1; 282 282 #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 35) 283 283 udev->autosuspend_disabled = 0;
+4 -5
drivers/usb/Kconfig
··· 41 41 default y if MFD_TC6393XB 42 42 default y if ARCH_W90X900 43 43 default y if ARCH_DAVINCI_DA8XX 44 + default y if PLAT_SPEAR 44 45 # PPC: 45 46 default y if STB03xxx 46 47 default y if PPC_MPC52xx 47 48 # MIPS: 48 49 default y if MIPS_ALCHEMY 49 50 default y if MACH_JZ4740 50 - # SH: 51 - default y if CPU_SUBTYPE_SH7720 52 - default y if CPU_SUBTYPE_SH7721 53 - default y if CPU_SUBTYPE_SH7763 54 - default y if CPU_SUBTYPE_SH7786 55 51 # more: 56 52 default PCI 57 53 ··· 62 66 default y if ARCH_AT91SAM9G45 63 67 default y if ARCH_MXC 64 68 default y if ARCH_OMAP3 69 + default y if ARCH_VT8500 70 + default y if PLAT_SPEAR 71 + default y if ARCH_MSM 65 72 default PCI 66 73 67 74 # ARM SA1111 chips have a non-PCI based "OHCI-compatible" USB host interface.
+26 -124
drivers/usb/core/driver.c
··· 27 27 #include <linux/usb.h> 28 28 #include <linux/usb/quirks.h> 29 29 #include <linux/usb/hcd.h> 30 - #include <linux/pm_runtime.h> 31 30 32 31 #include "usb.h" 33 32 ··· 1261 1262 udev->reset_resume); 1262 1263 } 1263 1264 } 1265 + usb_mark_last_busy(udev); 1264 1266 1265 1267 done: 1266 1268 dev_vdbg(&udev->dev, "%s: status %d\n", __func__, status); ··· 1329 1329 pm_runtime_disable(dev); 1330 1330 pm_runtime_set_active(dev); 1331 1331 pm_runtime_enable(dev); 1332 - udev->last_busy = jiffies; 1333 1332 do_unbind_rebind(udev, DO_REBIND); 1334 1333 } 1335 1334 } ··· 1396 1397 { 1397 1398 int status; 1398 1399 1399 - udev->last_busy = jiffies; 1400 - status = pm_runtime_put_sync(&udev->dev); 1401 - dev_vdbg(&udev->dev, "%s: cnt %d -> %d\n", 1402 - __func__, atomic_read(&udev->dev.power.usage_count), 1403 - status); 1404 - } 1405 - 1406 - /** 1407 - * usb_try_autosuspend_device - attempt an autosuspend of a USB device and its interfaces 1408 - * @udev: the usb_device to autosuspend 1409 - * 1410 - * This routine should be called when a core subsystem thinks @udev may 1411 - * be ready to autosuspend. 1412 - * 1413 - * @udev's usage counter left unchanged. If it is 0 and all the interfaces 1414 - * are inactive then an autosuspend will be attempted. The attempt may 1415 - * fail or be delayed. 1416 - * 1417 - * The caller must hold @udev's device lock. 1418 - * 1419 - * This routine can run only in process context. 
1420 - */ 1421 - void usb_try_autosuspend_device(struct usb_device *udev) 1422 - { 1423 - int status; 1424 - 1425 - status = pm_runtime_idle(&udev->dev); 1400 + usb_mark_last_busy(udev); 1401 + status = pm_runtime_put_sync_autosuspend(&udev->dev); 1426 1402 dev_vdbg(&udev->dev, "%s: cnt %d -> %d\n", 1427 1403 __func__, atomic_read(&udev->dev.power.usage_count), 1428 1404 status); ··· 1456 1482 struct usb_device *udev = interface_to_usbdev(intf); 1457 1483 int status; 1458 1484 1459 - udev->last_busy = jiffies; 1485 + usb_mark_last_busy(udev); 1460 1486 atomic_dec(&intf->pm_usage_cnt); 1461 1487 status = pm_runtime_put_sync(&intf->dev); 1462 1488 dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n", ··· 1483 1509 void usb_autopm_put_interface_async(struct usb_interface *intf) 1484 1510 { 1485 1511 struct usb_device *udev = interface_to_usbdev(intf); 1486 - unsigned long last_busy; 1487 - int status = 0; 1512 + int status; 1488 1513 1489 - last_busy = udev->last_busy; 1490 - udev->last_busy = jiffies; 1514 + usb_mark_last_busy(udev); 1491 1515 atomic_dec(&intf->pm_usage_cnt); 1492 - pm_runtime_put_noidle(&intf->dev); 1493 - 1494 - if (udev->dev.power.runtime_auto) { 1495 - /* Optimization: Don't schedule a delayed autosuspend if 1496 - * the timer is already running and the expiration time 1497 - * wouldn't change. 1498 - * 1499 - * We have to use the interface's timer. Attempts to 1500 - * schedule a suspend for the device would fail because 1501 - * the interface is still active. 
1502 - */ 1503 - if (intf->dev.power.timer_expires == 0 || 1504 - round_jiffies_up(last_busy) != 1505 - round_jiffies_up(jiffies)) { 1506 - status = pm_schedule_suspend(&intf->dev, 1507 - jiffies_to_msecs( 1508 - round_jiffies_up_relative( 1509 - udev->autosuspend_delay))); 1510 - } 1511 - } 1516 + status = pm_runtime_put(&intf->dev); 1512 1517 dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n", 1513 1518 __func__, atomic_read(&intf->dev.power.usage_count), 1514 1519 status); ··· 1507 1554 { 1508 1555 struct usb_device *udev = interface_to_usbdev(intf); 1509 1556 1510 - udev->last_busy = jiffies; 1557 + usb_mark_last_busy(udev); 1511 1558 atomic_dec(&intf->pm_usage_cnt); 1512 1559 pm_runtime_put_noidle(&intf->dev); 1513 1560 } ··· 1565 1612 */ 1566 1613 int usb_autopm_get_interface_async(struct usb_interface *intf) 1567 1614 { 1568 - int status = 0; 1569 - enum rpm_status s; 1615 + int status; 1570 1616 1571 - /* Don't request a resume unless the interface is already suspending 1572 - * or suspended. Doing so would force a running suspend timer to be 1573 - * cancelled. 1574 - */ 1575 - pm_runtime_get_noresume(&intf->dev); 1576 - s = ACCESS_ONCE(intf->dev.power.runtime_status); 1577 - if (s == RPM_SUSPENDING || s == RPM_SUSPENDED) 1578 - status = pm_request_resume(&intf->dev); 1579 - 1617 + status = pm_runtime_get(&intf->dev); 1580 1618 if (status < 0 && status != -EINPROGRESS) 1581 1619 pm_runtime_put_noidle(&intf->dev); 1582 1620 else ··· 1594 1650 { 1595 1651 struct usb_device *udev = interface_to_usbdev(intf); 1596 1652 1597 - udev->last_busy = jiffies; 1653 + usb_mark_last_busy(udev); 1598 1654 atomic_inc(&intf->pm_usage_cnt); 1599 1655 pm_runtime_get_noresume(&intf->dev); 1600 1656 } ··· 1605 1661 { 1606 1662 int w, i; 1607 1663 struct usb_interface *intf; 1608 - unsigned long suspend_time, j; 1609 1664 1610 1665 /* Fail if autosuspend is disabled, or any interfaces are in use, or 1611 1666 * any interface drivers require remote wakeup but it isn't available. 
··· 1644 1701 return -EOPNOTSUPP; 1645 1702 } 1646 1703 udev->do_remote_wakeup = w; 1647 - 1648 - /* If everything is okay but the device hasn't been idle for long 1649 - * enough, queue a delayed autosuspend request. 1650 - */ 1651 - j = ACCESS_ONCE(jiffies); 1652 - suspend_time = udev->last_busy + udev->autosuspend_delay; 1653 - if (time_before(j, suspend_time)) { 1654 - pm_schedule_suspend(&udev->dev, jiffies_to_msecs( 1655 - round_jiffies_up_relative(suspend_time - j))); 1656 - return -EAGAIN; 1657 - } 1658 1704 return 0; 1659 1705 } 1660 1706 1661 1707 static int usb_runtime_suspend(struct device *dev) 1662 1708 { 1663 - int status = 0; 1709 + struct usb_device *udev = to_usb_device(dev); 1710 + int status; 1664 1711 1665 1712 /* A USB device can be suspended if it passes the various autosuspend 1666 1713 * checks. Runtime suspend for a USB device means suspending all the 1667 1714 * interfaces and then the device itself. 1668 1715 */ 1669 - if (is_usb_device(dev)) { 1670 - struct usb_device *udev = to_usb_device(dev); 1716 + if (autosuspend_check(udev) != 0) 1717 + return -EAGAIN; 1671 1718 1672 - if (autosuspend_check(udev) != 0) 1673 - return -EAGAIN; 1674 - 1675 - status = usb_suspend_both(udev, PMSG_AUTO_SUSPEND); 1676 - 1677 - /* If an interface fails the suspend, adjust the last_busy 1678 - * time so that we don't get another suspend attempt right 1679 - * away. 1680 - */ 1681 - if (status) { 1682 - udev->last_busy = jiffies + 1683 - (udev->autosuspend_delay == 0 ? 1684 - HZ/2 : 0); 1685 - } 1686 - 1687 - /* Prevent the parent from suspending immediately after */ 1688 - else if (udev->parent) 1689 - udev->parent->last_busy = jiffies; 1690 - } 1691 - 1692 - /* Runtime suspend for a USB interface doesn't mean anything. 
*/ 1719 + status = usb_suspend_both(udev, PMSG_AUTO_SUSPEND); 1693 1720 return status; 1694 1721 } 1695 1722 1696 1723 static int usb_runtime_resume(struct device *dev) 1697 1724 { 1725 + struct usb_device *udev = to_usb_device(dev); 1726 + int status; 1727 + 1698 1728 /* Runtime resume for a USB device means resuming both the device 1699 1729 * and all its interfaces. 1700 1730 */ 1701 - if (is_usb_device(dev)) { 1702 - struct usb_device *udev = to_usb_device(dev); 1703 - int status; 1704 - 1705 - status = usb_resume_both(udev, PMSG_AUTO_RESUME); 1706 - udev->last_busy = jiffies; 1707 - return status; 1708 - } 1709 - 1710 - /* Runtime resume for a USB interface doesn't mean anything. */ 1711 - return 0; 1731 + status = usb_resume_both(udev, PMSG_AUTO_RESUME); 1732 + return status; 1712 1733 } 1713 1734 1714 1735 static int usb_runtime_idle(struct device *dev) 1715 1736 { 1737 + struct usb_device *udev = to_usb_device(dev); 1738 + 1716 1739 /* An idle USB device can be suspended if it passes the various 1717 - * autosuspend checks. An idle interface can be suspended at 1718 - * any time. 1740 + * autosuspend checks. 1719 1741 */ 1720 - if (is_usb_device(dev)) { 1721 - struct usb_device *udev = to_usb_device(dev); 1722 - 1723 - if (autosuspend_check(udev) != 0) 1724 - return 0; 1725 - } 1726 - 1727 - pm_runtime_suspend(dev); 1742 + if (autosuspend_check(udev) == 0) 1743 + pm_runtime_autosuspend(dev); 1728 1744 return 0; 1729 1745 } 1730 1746
-1
drivers/usb/core/hcd-pci.c
··· 19 19 #include <linux/kernel.h> 20 20 #include <linux/module.h> 21 21 #include <linux/pci.h> 22 - #include <linux/pm_runtime.h> 23 22 #include <linux/usb.h> 24 23 #include <linux/usb/hcd.h> 25 24
-1
drivers/usb/core/hcd.c
··· 38 38 #include <asm/unaligned.h> 39 39 #include <linux/platform_device.h> 40 40 #include <linux/workqueue.h> 41 - #include <linux/pm_runtime.h> 42 41 43 42 #include <linux/usb.h> 44 43 #include <linux/usb/hcd.h>
+10 -1
drivers/usb/core/hub.c
··· 24 24 #include <linux/kthread.h> 25 25 #include <linux/mutex.h> 26 26 #include <linux/freezer.h> 27 - #include <linux/pm_runtime.h> 28 27 29 28 #include <asm/uaccess.h> 30 29 #include <asm/byteorder.h> ··· 1803 1804 1804 1805 /* Tell the runtime-PM framework the device is active */ 1805 1806 pm_runtime_set_active(&udev->dev); 1807 + pm_runtime_get_noresume(&udev->dev); 1808 + pm_runtime_use_autosuspend(&udev->dev); 1806 1809 pm_runtime_enable(&udev->dev); 1810 + 1811 + /* By default, forbid autosuspend for all devices. It will be 1812 + * allowed for hubs during binding. 1813 + */ 1814 + usb_disable_autosuspend(udev); 1807 1815 1808 1816 err = usb_enumerate_device(udev); /* Read descriptors */ 1809 1817 if (err < 0) ··· 1837 1831 } 1838 1832 1839 1833 (void) usb_create_ep_devs(&udev->dev, &udev->ep0, udev); 1834 + usb_mark_last_busy(udev); 1835 + pm_runtime_put_sync_autosuspend(&udev->dev); 1840 1836 return err; 1841 1837 1842 1838 fail: ··· 2229 2221 usb_set_device_state(udev, USB_STATE_SUSPENDED); 2230 2222 msleep(10); 2231 2223 } 2224 + usb_mark_last_busy(hub->hdev); 2232 2225 return status; 2233 2226 } 2234 2227
+1
drivers/usb/core/message.c
··· 1804 1804 INIT_WORK(&intf->reset_ws, __usb_queue_reset_device); 1805 1805 intf->minor = -1; 1806 1806 device_initialize(&intf->dev); 1807 + pm_runtime_no_callbacks(&intf->dev); 1807 1808 dev_set_name(&intf->dev, "%d-%s:%d.%d", 1808 1809 dev->bus->busnum, dev->devpath, 1809 1810 configuration, alt->desc.bInterfaceNumber);
-15
drivers/usb/core/quirks.c
··· 117 117 dev_dbg(&udev->dev, "USB quirks for this device: %x\n", 118 118 udev->quirks); 119 119 120 - #ifdef CONFIG_USB_SUSPEND 121 - 122 - /* By default, disable autosuspend for all devices. The hub driver 123 - * will enable it for hubs. 124 - */ 125 - usb_disable_autosuspend(udev); 126 - 127 - /* Autosuspend can also be disabled if the initial autosuspend_delay 128 - * is negative. 129 - */ 130 - if (udev->autosuspend_delay < 0) 131 - usb_autoresume_device(udev); 132 - 133 - #endif 134 - 135 120 /* For the present, all devices default to USB-PERSIST enabled */ 136 121 #if 0 /* was: #ifdef CONFIG_PM */ 137 122 /* Hubs are automatically enabled for USB-PERSIST */
+22 -62
drivers/usb/core/sysfs.c
··· 233 233 234 234 #ifdef CONFIG_PM 235 235 236 - static const char power_group[] = "power"; 237 - 238 236 static ssize_t 239 237 show_persist(struct device *dev, struct device_attribute *attr, char *buf) 240 238 { ··· 276 278 if (udev->descriptor.bDeviceClass != USB_CLASS_HUB) 277 279 rc = sysfs_add_file_to_group(&dev->kobj, 278 280 &dev_attr_persist.attr, 279 - power_group); 281 + power_group_name); 280 282 } 281 283 return rc; 282 284 } ··· 285 287 { 286 288 sysfs_remove_file_from_group(&dev->kobj, 287 289 &dev_attr_persist.attr, 288 - power_group); 290 + power_group_name); 289 291 } 290 292 #else 291 293 ··· 334 336 static ssize_t 335 337 show_autosuspend(struct device *dev, struct device_attribute *attr, char *buf) 336 338 { 337 - struct usb_device *udev = to_usb_device(dev); 338 - 339 - return sprintf(buf, "%d\n", udev->autosuspend_delay / HZ); 339 + return sprintf(buf, "%d\n", dev->power.autosuspend_delay / 1000); 340 340 } 341 341 342 342 static ssize_t 343 343 set_autosuspend(struct device *dev, struct device_attribute *attr, 344 344 const char *buf, size_t count) 345 345 { 346 - struct usb_device *udev = to_usb_device(dev); 347 - int value, old_delay; 348 - int rc; 346 + int value; 349 347 350 - if (sscanf(buf, "%d", &value) != 1 || value >= INT_MAX/HZ || 351 - value <= - INT_MAX/HZ) 348 + if (sscanf(buf, "%d", &value) != 1 || value >= INT_MAX/1000 || 349 + value <= -INT_MAX/1000) 352 350 return -EINVAL; 353 - value *= HZ; 354 351 355 - usb_lock_device(udev); 356 - old_delay = udev->autosuspend_delay; 357 - udev->autosuspend_delay = value; 358 - 359 - if (old_delay < 0) { /* Autosuspend wasn't allowed */ 360 - if (value >= 0) 361 - usb_autosuspend_device(udev); 362 - } else { /* Autosuspend was allowed */ 363 - if (value < 0) { 364 - rc = usb_autoresume_device(udev); 365 - if (rc < 0) { 366 - count = rc; 367 - udev->autosuspend_delay = old_delay; 368 - } 369 - } else { 370 - usb_try_autosuspend_device(udev); 371 - } 372 - } 373 - 374 - 
usb_unlock_device(udev); 352 + pm_runtime_set_autosuspend_delay(dev, value * 1000); 375 353 return count; 376 354 } 377 355 ··· 412 438 413 439 static DEVICE_ATTR(level, S_IRUGO | S_IWUSR, show_level, set_level); 414 440 441 + static struct attribute *power_attrs[] = { 442 + &dev_attr_autosuspend.attr, 443 + &dev_attr_level.attr, 444 + &dev_attr_connected_duration.attr, 445 + &dev_attr_active_duration.attr, 446 + NULL, 447 + }; 448 + static struct attribute_group power_attr_group = { 449 + .name = power_group_name, 450 + .attrs = power_attrs, 451 + }; 452 + 415 453 static int add_power_attributes(struct device *dev) 416 454 { 417 455 int rc = 0; 418 456 419 - if (is_usb_device(dev)) { 420 - rc = sysfs_add_file_to_group(&dev->kobj, 421 - &dev_attr_autosuspend.attr, 422 - power_group); 423 - if (rc == 0) 424 - rc = sysfs_add_file_to_group(&dev->kobj, 425 - &dev_attr_level.attr, 426 - power_group); 427 - if (rc == 0) 428 - rc = sysfs_add_file_to_group(&dev->kobj, 429 - &dev_attr_connected_duration.attr, 430 - power_group); 431 - if (rc == 0) 432 - rc = sysfs_add_file_to_group(&dev->kobj, 433 - &dev_attr_active_duration.attr, 434 - power_group); 435 - } 457 + if (is_usb_device(dev)) 458 + rc = sysfs_merge_group(&dev->kobj, &power_attr_group); 436 459 return rc; 437 460 } 438 461 439 462 static void remove_power_attributes(struct device *dev) 440 463 { 441 - sysfs_remove_file_from_group(&dev->kobj, 442 - &dev_attr_active_duration.attr, 443 - power_group); 444 - sysfs_remove_file_from_group(&dev->kobj, 445 - &dev_attr_connected_duration.attr, 446 - power_group); 447 - sysfs_remove_file_from_group(&dev->kobj, 448 - &dev_attr_level.attr, 449 - power_group); 450 - sysfs_remove_file_from_group(&dev->kobj, 451 - &dev_attr_autosuspend.attr, 452 - power_group); 464 + sysfs_unmerge_group(&dev->kobj, &power_attr_group); 453 465 } 454 466 455 467 #else
+2 -1
drivers/usb/core/usb.c
··· 445 445 INIT_LIST_HEAD(&dev->filelist); 446 446 447 447 #ifdef CONFIG_PM 448 - dev->autosuspend_delay = usb_autosuspend_delay * HZ; 448 + pm_runtime_set_autosuspend_delay(&dev->dev, 449 + usb_autosuspend_delay * 1000); 449 450 dev->connect_time = jiffies; 450 451 dev->active_duration = -jiffies; 451 452 #endif
-2
drivers/usb/core/usb.h
··· 75 75 #ifdef CONFIG_USB_SUSPEND 76 76 77 77 extern void usb_autosuspend_device(struct usb_device *udev); 78 - extern void usb_try_autosuspend_device(struct usb_device *udev); 79 78 extern int usb_autoresume_device(struct usb_device *udev); 80 79 extern int usb_remote_wakeup(struct usb_device *dev); 81 80 82 81 #else 83 82 84 83 #define usb_autosuspend_device(udev) do {} while (0) 85 - #define usb_try_autosuspend_device(udev) do {} while (0) 86 84 static inline int usb_autoresume_device(struct usb_device *udev) 87 85 { 88 86 return 0;
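With `usb_try_autosuspend_device()` removed, the remaining `CONFIG_USB_SUSPEND` stubs in usb.h keep the `do {} while (0)` form. That form matters because a stub macro must behave like a single statement wherever the real function call would appear. A minimal user-space model (`FEATURE` stands in for the config option; names are hypothetical):

```c
/* Stub pattern for a config-gated call, as in usb.h above. */
#ifndef FEATURE
#define feature_hook(counter) do {} while (0)
#else
static void feature_hook(int *counter) { (*counter)++; }
#endif

static int run(int enable)
{
    int n = 0;
    if (enable)
        feature_hook(&n);   /* expands to a single statement ...   */
    else
        n = -1;             /* ... so this else still parses right */
    return n;
}
```

An empty expansion (or bare braces) would break the `if`/`else` above; the `do {} while (0)` wrapper is what keeps the stub drop-in compatible.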
+73 -4
drivers/usb/gadget/Kconfig
··· 338 338 boolean "S3C2410 udc debug messages" 339 339 depends on USB_GADGET_S3C2410 340 340 341 + config USB_GADGET_PXA_U2O 342 + boolean "PXA9xx Processor USB2.0 controller" 343 + select USB_GADGET_DUALSPEED 344 + help 345 + The PXA9xx Processor series includes a high speed USB2.0 device 346 + controller, which supports high and full speed USB peripheral operation. 347 + 348 + config USB_PXA_U2O 349 + tristate 350 + depends on USB_GADGET_PXA_U2O 351 + default USB_GADGET 352 + select USB_GADGET_SELECTED 353 + 341 354 # 342 355 # Controllers available in both integrated and discrete versions 343 356 # ··· 427 414 default USB_GADGET 428 415 select USB_GADGET_SELECTED 429 416 430 - config USB_GADGET_CI13XXX 431 - boolean "MIPS USB CI13xxx" 417 + config USB_GADGET_CI13XXX_PCI 418 + boolean "MIPS USB CI13xxx PCI UDC" 432 419 depends on PCI 433 420 select USB_GADGET_DUALSPEED 434 421 help ··· 439 426 dynamically linked module called "ci13xxx_udc" and force all 440 427 gadget drivers to also be dynamically linked. 441 428 442 - config USB_CI13XXX 429 + config USB_CI13XXX_PCI 443 430 tristate 444 - depends on USB_GADGET_CI13XXX 431 + depends on USB_GADGET_CI13XXX_PCI 445 432 default USB_GADGET 446 433 select USB_GADGET_SELECTED 447 434 ··· 508 495 default USB_GADGET 509 496 select USB_GADGET_SELECTED 510 497 498 + config USB_GADGET_EG20T 499 + boolean "Intel EG20T(Topcliff) USB Device controller" 500 + depends on PCI 501 + select USB_GADGET_DUALSPEED 502 + help 503 + This is a USB device driver for the EG20T PCH. 504 + The EG20T PCH is the platform controller hub used in Intel's 505 + general embedded platforms; it provides a USB device interface 506 + through which a USB host can access devices on the 507 + system. 508 + This driver enables the USB device function: 509 + a USB peripheral controller which 510 + supports both full and high speed USB 2.0 data transfers. 511 + This driver supports both control transfer and bulk transfer modes.
512 + This driver does not support interrupt transfer or isochronous 513 + transfer modes. 514 + 515 + config USB_EG20T 516 + tristate 517 + depends on USB_GADGET_EG20T 518 + default USB_GADGET 519 + select USB_GADGET_SELECTED 520 + 521 + config USB_GADGET_CI13XXX_MSM 522 + boolean "MIPS USB CI13xxx for MSM" 523 + depends on ARCH_MSM 524 + select USB_GADGET_DUALSPEED 525 + select USB_MSM_OTG_72K 526 + help 527 + MSM SoC has chipidea USB controller. This driver uses 528 + ci13xxx_udc core. 529 + This driver depends on OTG driver for PHY initialization, 530 + clock management, powering up VBUS, and power management. 531 + 532 + Say "y" to link the driver statically, or "m" to build a 533 + dynamically linked module called "ci13xxx_msm" and force all 534 + gadget drivers to also be dynamically linked. 535 + 536 + config USB_CI13XXX_MSM 537 + tristate 538 + depends on USB_GADGET_CI13XXX_MSM 539 + default USB_GADGET 540 + select USB_GADGET_SELECTED 511 541 512 542 # 513 543 # LAST -- dummy/emulated controller ··· 740 684 741 685 If you say "y" here, the Ethernet gadget driver will use the EEM 742 686 protocol rather than ECM. If unsure, say "n". 687 + 688 + config USB_G_NCM 689 + tristate "Network Control Model (NCM) support" 690 + depends on NET 691 + select CRC32 692 + help 693 + This driver implements the USB CDC NCM subclass standard. NCM is 694 + an advanced protocol for Ethernet encapsulation that allows grouping 695 + several Ethernet frames into one USB transfer, with different 696 + alignment possibilities. 697 + 698 + Say "y" to link the driver statically, or "m" to build a 699 + dynamically linked module called "g_ncm". 743 700 744 701 config USB_GADGETFS 745 702 tristate "Gadget Filesystem (EXPERIMENTAL)"
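Each newly added controller in the Kconfig hunk follows the same two-symbol idiom: a user-visible boolean that selects dependencies, plus a hidden tristate that inherits its y/m state from `USB_GADGET` via `default`. A minimal sketch of the idiom (the `FOO` symbol names are hypothetical):

```
config USB_GADGET_FOO
	boolean "FOO USB Device controller"
	select USB_GADGET_DUALSPEED

config USB_FOO
	tristate
	depends on USB_GADGET_FOO
	default USB_GADGET
	select USB_GADGET_SELECTED
```

The hidden tristate is what lets the driver build as a module exactly when the gadget core itself is modular, without asking the user a second question.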
+7 -1
drivers/usb/gadget/Makefile
··· 21 21 obj-$(CONFIG_USB_M66592) += m66592-udc.o 22 22 obj-$(CONFIG_USB_R8A66597) += r8a66597-udc.o 23 23 obj-$(CONFIG_USB_FSL_QE) += fsl_qe_udc.o 24 - obj-$(CONFIG_USB_CI13XXX) += ci13xxx_udc.o 24 + obj-$(CONFIG_USB_CI13XXX_PCI) += ci13xxx_pci.o 25 25 obj-$(CONFIG_USB_S3C_HSOTG) += s3c-hsotg.o 26 26 obj-$(CONFIG_USB_LANGWELL) += langwell_udc.o 27 + obj-$(CONFIG_USB_EG20T) += pch_udc.o 28 + obj-$(CONFIG_USB_PXA_U2O) += mv_udc.o 29 + mv_udc-y := mv_udc_core.o mv_udc_phy.o 30 + obj-$(CONFIG_USB_CI13XXX_MSM) += ci13xxx_msm.o 27 31 28 32 # 29 33 # USB gadget drivers ··· 47 43 g_dbgp-y := dbgp.o 48 44 g_nokia-y := nokia.o 49 45 g_webcam-y := webcam.o 46 + g_ncm-y := ncm.o 50 47 51 48 obj-$(CONFIG_USB_ZERO) += g_zero.o 52 49 obj-$(CONFIG_USB_AUDIO) += g_audio.o ··· 65 60 obj-$(CONFIG_USB_G_MULTI) += g_multi.o 66 61 obj-$(CONFIG_USB_G_NOKIA) += g_nokia.o 67 62 obj-$(CONFIG_USB_G_WEBCAM) += g_webcam.o 63 + obj-$(CONFIG_USB_G_NCM) += g_ncm.o
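The Makefile hunk also introduces a composite object: `mv_udc.o` is linked from two per-file objects via the `mv_udc-y` list. Sketched generically in kbuild syntax (names hypothetical):

```make
# Build foo.o (built-in) or foo.ko (module) when CONFIG_USB_FOO is y or m ...
obj-$(CONFIG_USB_FOO) += foo.o

# ... and assemble that object from two source-file objects.
foo-y := foo_core.o foo_phy.o
```

This is the same mechanism the gadget drivers below it use (`g_ncm-y := ncm.o`), just with more than one constituent object.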
-1
drivers/usb/gadget/amd5536udc.c
··· 3359 3359 dev_set_name(&dev->gadget.dev, "gadget"); 3360 3360 dev->gadget.dev.release = gadget_release; 3361 3361 dev->gadget.name = name; 3362 - dev->gadget.name = name; 3363 3362 dev->gadget.is_dualspeed = 1; 3364 3363 3365 3364 /* init registers, interrupts, ... */
+134
drivers/usb/gadget/ci13xxx_msm.c
··· 1 + /* Copyright (c) 2010, Code Aurora Forum. All rights reserved. 2 + * 3 + * This program is free software; you can redistribute it and/or modify 4 + * it under the terms of the GNU General Public License version 2 and 5 + * only version 2 as published by the Free Software Foundation. 6 + * 7 + * This program is distributed in the hope that it will be useful, 8 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 + * GNU General Public License for more details. 11 + * 12 + * You should have received a copy of the GNU General Public License 13 + * along with this program; if not, write to the Free Software 14 + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 15 + * 02110-1301, USA. 16 + * 17 + */ 18 + 19 + #include <linux/module.h> 20 + #include <linux/platform_device.h> 21 + #include <linux/pm_runtime.h> 22 + #include <linux/usb/msm_hsusb_hw.h> 23 + #include <linux/usb/ulpi.h> 24 + 25 + #include "ci13xxx_udc.c" 26 + 27 + #define MSM_USB_BASE (udc->regs) 28 + 29 + static irqreturn_t msm_udc_irq(int irq, void *data) 30 + { 31 + return udc_irq(); 32 + } 33 + 34 + static void ci13xxx_msm_notify_event(struct ci13xxx *udc, unsigned event) 35 + { 36 + struct device *dev = udc->gadget.dev.parent; 37 + int val; 38 + 39 + switch (event) { 40 + case CI13XXX_CONTROLLER_RESET_EVENT: 41 + dev_dbg(dev, "CI13XXX_CONTROLLER_RESET_EVENT received\n"); 42 + writel(0, USB_AHBBURST); 43 + writel(0, USB_AHBMODE); 44 + break; 45 + case CI13XXX_CONTROLLER_STOPPED_EVENT: 46 + dev_dbg(dev, "CI13XXX_CONTROLLER_STOPPED_EVENT received\n"); 47 + /* 48 + * Put the transceiver in non-driving mode. Otherwise host 49 + * may not detect soft-disconnection. 
50 + */ 51 + val = otg_io_read(udc->transceiver, ULPI_FUNC_CTRL); 52 + val &= ~ULPI_FUNC_CTRL_OPMODE_MASK; 53 + val |= ULPI_FUNC_CTRL_OPMODE_NONDRIVING; 54 + otg_io_write(udc->transceiver, val, ULPI_FUNC_CTRL); 55 + break; 56 + default: 57 + dev_dbg(dev, "unknown ci13xxx_udc event\n"); 58 + break; 59 + } 60 + } 61 + 62 + static struct ci13xxx_udc_driver ci13xxx_msm_udc_driver = { 63 + .name = "ci13xxx_msm", 64 + .flags = CI13XXX_REGS_SHARED | 65 + CI13XXX_REQUIRE_TRANSCEIVER | 66 + CI13XXX_PULLUP_ON_VBUS | 67 + CI13XXX_DISABLE_STREAMING, 68 + 69 + .notify_event = ci13xxx_msm_notify_event, 70 + }; 71 + 72 + static int ci13xxx_msm_probe(struct platform_device *pdev) 73 + { 74 + struct resource *res; 75 + void __iomem *regs; 76 + int irq; 77 + int ret; 78 + 79 + dev_dbg(&pdev->dev, "ci13xxx_msm_probe\n"); 80 + 81 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 82 + if (!res) { 83 + dev_err(&pdev->dev, "failed to get platform resource mem\n"); 84 + return -ENXIO; 85 + } 86 + 87 + regs = ioremap(res->start, resource_size(res)); 88 + if (!regs) { 89 + dev_err(&pdev->dev, "ioremap failed\n"); 90 + return -ENOMEM; 91 + } 92 + 93 + ret = udc_probe(&ci13xxx_msm_udc_driver, &pdev->dev, regs); 94 + if (ret < 0) { 95 + dev_err(&pdev->dev, "udc_probe failed\n"); 96 + goto iounmap; 97 + } 98 + 99 + irq = platform_get_irq(pdev, 0); 100 + if (irq < 0) { 101 + dev_err(&pdev->dev, "IRQ not found\n"); 102 + ret = -ENXIO; 103 + goto udc_remove; 104 + } 105 + 106 + ret = request_irq(irq, msm_udc_irq, IRQF_SHARED, pdev->name, pdev); 107 + if (ret < 0) { 108 + dev_err(&pdev->dev, "request_irq failed\n"); 109 + goto udc_remove; 110 + } 111 + 112 + pm_runtime_no_callbacks(&pdev->dev); 113 + pm_runtime_enable(&pdev->dev); 114 + 115 + return 0; 116 + 117 + udc_remove: 118 + udc_remove(); 119 + iounmap: 120 + iounmap(regs); 121 + 122 + return ret; 123 + } 124 + 125 + static struct platform_driver ci13xxx_msm_driver = { 126 + .probe = ci13xxx_msm_probe, 127 + .driver = { .name = 
"msm_hsusb", }, 128 + }; 129 + 130 + static int __init ci13xxx_msm_init(void) 131 + { 132 + return platform_driver_register(&ci13xxx_msm_driver); 133 + } 134 + module_init(ci13xxx_msm_init);
+176
drivers/usb/gadget/ci13xxx_pci.c
··· 1 + /* 2 + * ci13xxx_pci.c - MIPS USB IP core family device controller 3 + * 4 + * Copyright (C) 2008 Chipidea - MIPS Technologies, Inc. All rights reserved. 5 + * 6 + * Author: David Lopo 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License version 2 as 10 + * published by the Free Software Foundation. 11 + */ 12 + 13 + #include <linux/module.h> 14 + #include <linux/pci.h> 15 + 16 + #include "ci13xxx_udc.c" 17 + 18 + /* driver name */ 19 + #define UDC_DRIVER_NAME "ci13xxx_pci" 20 + 21 + /****************************************************************************** 22 + * PCI block 23 + *****************************************************************************/ 24 + /** 25 + * ci13xxx_pci_irq: interrupt handler 26 + * @irq: irq number 27 + * @pdev: USB Device Controller interrupt source 28 + * 29 + * This function returns IRQ_HANDLED if the IRQ has been handled 30 + * This is an ISR; don't trace, use the attribute interface instead 31 + */ 32 + static irqreturn_t ci13xxx_pci_irq(int irq, void *pdev) 33 + { 34 + if (irq == 0) { 35 + dev_err(&((struct pci_dev *)pdev)->dev, "Invalid IRQ0 usage!"); 36 + return IRQ_HANDLED; 37 + } 38 + return udc_irq(); 39 + } 40 + 41 + static struct ci13xxx_udc_driver ci13xxx_pci_udc_driver = { 42 + .name = UDC_DRIVER_NAME, 43 + }; 44 + 45 + /** 46 + * ci13xxx_pci_probe: PCI probe 47 + * @pdev: USB device controller being probed 48 + * @id: PCI hotplug ID connecting controller to UDC framework 49 + * 50 + * This function returns an error code 51 + * Allocates basic PCI resources for this USB device controller, and then 52 + * invokes the udc_probe() method to start the UDC associated with it 53 + */ 54 + static int __devinit ci13xxx_pci_probe(struct pci_dev *pdev, 55 + const struct pci_device_id *id) 56 + { 57 + void __iomem *regs = NULL; 58 + int retval = 0; 59 + 60 + if (id == NULL) 61 + return -EINVAL; 62 + 63 + retval = pci_enable_device(pdev);
64 + if (retval) 65 + goto done; 66 + 67 + if (!pdev->irq) { 68 + dev_err(&pdev->dev, "No IRQ, check BIOS/PCI setup!"); 69 + retval = -ENODEV; 70 + goto disable_device; 71 + } 72 + 73 + retval = pci_request_regions(pdev, UDC_DRIVER_NAME); 74 + if (retval) 75 + goto disable_device; 76 + 77 + /* BAR 0 holds all the registers */ 78 + regs = pci_iomap(pdev, 0, 0); 79 + if (!regs) { 80 + dev_err(&pdev->dev, "Error mapping memory!"); 81 + retval = -EFAULT; 82 + goto release_regions; 83 + } 84 + pci_set_drvdata(pdev, (__force void *)regs); 85 + 86 + pci_set_master(pdev); 87 + pci_try_set_mwi(pdev); 88 + 89 + retval = udc_probe(&ci13xxx_pci_udc_driver, &pdev->dev, regs); 90 + if (retval) 91 + goto iounmap; 92 + 93 + /* our device does not have MSI capability */ 94 + 95 + retval = request_irq(pdev->irq, ci13xxx_pci_irq, IRQF_SHARED, 96 + UDC_DRIVER_NAME, pdev); 97 + if (retval) 98 + goto gadget_remove; 99 + 100 + return 0; 101 + 102 + gadget_remove: 103 + udc_remove(); 104 + iounmap: 105 + pci_iounmap(pdev, regs); 106 + release_regions: 107 + pci_release_regions(pdev); 108 + disable_device: 109 + pci_disable_device(pdev); 110 + done: 111 + return retval; 112 + } 113 + 114 + /** 115 + * ci13xxx_pci_remove: PCI remove 116 + * @pdev: USB Device Controller being removed 117 + * 118 + * Reverses the effect of ci13xxx_pci_probe(), 119 + * first invoking the udc_remove() and then releases 120 + * all PCI resources allocated for this USB device controller 121 + */ 122 + static void __devexit ci13xxx_pci_remove(struct pci_dev *pdev) 123 + { 124 + free_irq(pdev->irq, pdev); 125 + udc_remove(); 126 + pci_iounmap(pdev, (__force void __iomem *)pci_get_drvdata(pdev)); 127 + pci_release_regions(pdev); 128 + pci_disable_device(pdev); 129 + } 130 + 131 + /** 132 + * PCI device table 133 + * PCI device structure 134 + * 135 + * Check "pci.h" for details 136 + */ 137 + static DEFINE_PCI_DEVICE_TABLE(ci13xxx_pci_id_table) = { 138 + { PCI_DEVICE(0x153F, 0x1004) }, 139 + { PCI_DEVICE(0x153F, 
0x1006) }, 140 + { 0, 0, 0, 0, 0, 0, 0 /* end: all zeroes */ } 141 + }; 142 + MODULE_DEVICE_TABLE(pci, ci13xxx_pci_id_table); 143 + 144 + static struct pci_driver ci13xxx_pci_driver = { 145 + .name = UDC_DRIVER_NAME, 146 + .id_table = ci13xxx_pci_id_table, 147 + .probe = ci13xxx_pci_probe, 148 + .remove = __devexit_p(ci13xxx_pci_remove), 149 + }; 150 + 151 + /** 152 + * ci13xxx_pci_init: module init 153 + * 154 + * Driver load 155 + */ 156 + static int __init ci13xxx_pci_init(void) 157 + { 158 + return pci_register_driver(&ci13xxx_pci_driver); 159 + } 160 + module_init(ci13xxx_pci_init); 161 + 162 + /** 163 + * ci13xxx_pci_exit: module exit 164 + * 165 + * Driver unload 166 + */ 167 + static void __exit ci13xxx_pci_exit(void) 168 + { 169 + pci_unregister_driver(&ci13xxx_pci_driver); 170 + } 171 + module_exit(ci13xxx_pci_exit); 172 + 173 + MODULE_AUTHOR("MIPS - David Lopo <dlopo@chipidea.mips.com>"); 174 + MODULE_DESCRIPTION("MIPS CI13XXX USB Peripheral Controller"); 175 + MODULE_LICENSE("GPL"); 176 + MODULE_VERSION("June 2008");
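The PCI ID table above ends in an all-zero sentinel entry, and the PCI core walks it until it hits that terminator. A user-space sketch of the match loop, with the struct reduced to the two fields this table actually uses:

```c
/* Reduced stand-in for struct pci_device_id. */
struct pci_id_sketch { unsigned vendor, device; };

static const struct pci_id_sketch ids[] = {
    { 0x153F, 0x1004 },
    { 0x153F, 0x1006 },
    { 0, 0 },            /* end: all zeroes, as in the table above */
};

/* Walk until the sentinel, conceptually what the PCI core's ID
 * matching does when binding a driver to a device. */
static int id_matches(unsigned vendor, unsigned device)
{
    const struct pci_id_sketch *p;
    for (p = ids; p->vendor != 0; p++)
        if (p->vendor == vendor && p->device == device)
            return 1;
    return 0;
}
```

`MODULE_DEVICE_TABLE(pci, ...)` exports the same table to user space so udev can autoload the module when a matching device appears.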
+172 -226
drivers/usb/gadget/ci13xxx_udc.c
··· 22 22 * - ENDPT: endpoint operations (Gadget API) 23 23 * - GADGET: gadget operations (Gadget API) 24 24 * - BUS: bus glue code, bus abstraction layer 25 - * - PCI: PCI core interface and PCI resources (interrupts, memory...) 26 25 * 27 26 * Compile Options 28 27 * - CONFIG_USB_GADGET_DEBUG_FILES: enable debug facilities ··· 59 60 #include <linux/io.h> 60 61 #include <linux/irq.h> 61 62 #include <linux/kernel.h> 62 - #include <linux/module.h> 63 - #include <linux/pci.h> 64 63 #include <linux/slab.h> 64 + #include <linux/pm_runtime.h> 65 65 #include <linux/usb/ch9.h> 66 66 #include <linux/usb/gadget.h> 67 + #include <linux/usb/otg.h> 67 68 68 69 #include "ci13xxx_udc.h" 69 70 ··· 73 74 *****************************************************************************/ 74 75 /* ctrl register bank access */ 75 76 static DEFINE_SPINLOCK(udc_lock); 76 - 77 - /* driver name */ 78 - #define UDC_DRIVER_NAME "ci13xxx_udc" 79 77 80 78 /* control endpoint description */ 81 79 static const struct usb_endpoint_descriptor ··· 128 132 size_t size; /* bank size */ 129 133 } hw_bank; 130 134 135 + /* MSM specific */ 136 + #define ABS_AHBBURST (0x0090UL) 137 + #define ABS_AHBMODE (0x0098UL) 131 138 /* UDC register map */ 132 139 #define ABS_CAPLENGTH (0x100UL) 133 140 #define ABS_HCCPARAMS (0x108UL) ··· 247 248 return (reg & mask) >> ffs_nr(mask); 248 249 } 249 250 250 - /** 251 - * hw_device_reset: resets chip (execute without interruption) 252 - * @base: register base address 253 - * 254 - * This function returns an error code 255 - */ 256 - static int hw_device_reset(void __iomem *base) 251 + static int hw_device_init(void __iomem *base) 257 252 { 258 253 u32 reg; 259 254 ··· 264 271 hw_bank.size += CAP_LAST; 265 272 hw_bank.size /= sizeof(u32); 266 273 267 - /* should flush & stop before reset */ 268 - hw_cwrite(CAP_ENDPTFLUSH, ~0, ~0); 269 - hw_cwrite(CAP_USBCMD, USBCMD_RS, 0); 270 - 271 - hw_cwrite(CAP_USBCMD, USBCMD_RST, USBCMD_RST); 272 - while (hw_cread(CAP_USBCMD, 
USBCMD_RST)) 273 - udelay(10); /* not RTOS friendly */ 274 - 275 - /* USBMODE should be configured step by step */ 276 - hw_cwrite(CAP_USBMODE, USBMODE_CM, USBMODE_CM_IDLE); 277 - hw_cwrite(CAP_USBMODE, USBMODE_CM, USBMODE_CM_DEVICE); 278 - hw_cwrite(CAP_USBMODE, USBMODE_SLOM, USBMODE_SLOM); /* HW >= 2.3 */ 279 - 280 - if (hw_cread(CAP_USBMODE, USBMODE_CM) != USBMODE_CM_DEVICE) { 281 - pr_err("cannot enter in device mode"); 282 - pr_err("lpm = %i", hw_bank.lpm); 283 - return -ENODEV; 284 - } 285 - 286 274 reg = hw_aread(ABS_DCCPARAMS, DCCPARAMS_DEN) >> ffs_nr(DCCPARAMS_DEN); 287 275 if (reg == 0 || reg > ENDPT_MAX) 288 276 return -ENODEV; ··· 275 301 /* ENDPTSETUPSTAT is '0' by default */ 276 302 277 303 /* HCSPARAMS.bf.ppc SHOULD BE zero for device */ 304 + 305 + return 0; 306 + } 307 + /** 308 + * hw_device_reset: resets chip (execute without interruption) 309 + * @base: register base address 310 + * 311 + * This function returns an error code 312 + */ 313 + static int hw_device_reset(struct ci13xxx *udc) 314 + { 315 + /* should flush & stop before reset */ 316 + hw_cwrite(CAP_ENDPTFLUSH, ~0, ~0); 317 + hw_cwrite(CAP_USBCMD, USBCMD_RS, 0); 318 + 319 + hw_cwrite(CAP_USBCMD, USBCMD_RST, USBCMD_RST); 320 + while (hw_cread(CAP_USBCMD, USBCMD_RST)) 321 + udelay(10); /* not RTOS friendly */ 322 + 323 + 324 + if (udc->udc_driver->notify_event) 325 + udc->udc_driver->notify_event(udc, 326 + CI13XXX_CONTROLLER_RESET_EVENT); 327 + 328 + if (udc->udc_driver->flags & CI13XXX_DISABLE_STREAMING) 329 + hw_cwrite(CAP_USBMODE, USBMODE_SDIS, USBMODE_SDIS); 330 + 331 + /* USBMODE should be configured step by step */ 332 + hw_cwrite(CAP_USBMODE, USBMODE_CM, USBMODE_CM_IDLE); 333 + hw_cwrite(CAP_USBMODE, USBMODE_CM, USBMODE_CM_DEVICE); 334 + hw_cwrite(CAP_USBMODE, USBMODE_SLOM, USBMODE_SLOM); /* HW >= 2.3 */ 335 + 336 + if (hw_cread(CAP_USBMODE, USBMODE_CM) != USBMODE_CM_DEVICE) { 337 + pr_err("cannot enter in device mode"); 338 + pr_err("lpm = %i", hw_bank.lpm); 339 + return
-ENODEV; 340 + } 278 341 279 342 return 0; 280 343 } ··· 1568 1557 * Caller must hold lock 1569 1558 */ 1570 1559 static int _gadget_stop_activity(struct usb_gadget *gadget) 1571 - __releases(udc->lock) 1572 - __acquires(udc->lock) 1573 1560 { 1574 1561 struct usb_ep *ep; 1575 1562 struct ci13xxx *udc = container_of(gadget, struct ci13xxx, gadget); ··· 1578 1569 1579 1570 if (gadget == NULL) 1580 1571 return -EINVAL; 1581 - 1582 - spin_unlock(udc->lock); 1583 1572 1584 1573 /* flush all endpoints */ 1585 1574 gadget_for_each_ep(ep, gadget) { ··· 1597 1590 usb_ep_free_request(gadget->ep0, mEp->status); 1598 1591 mEp->status = NULL; 1599 1592 } 1600 - 1601 - spin_lock(udc->lock); 1602 1593 1603 1594 return 0; 1604 1595 } ··· 1626 1621 1627 1622 dbg_event(0xFF, "BUS RST", 0); 1628 1623 1624 + spin_unlock(udc->lock); 1629 1625 retval = _gadget_stop_activity(&udc->gadget); 1630 1626 if (retval) 1631 1627 goto done; ··· 1635 1629 if (retval) 1636 1630 goto done; 1637 1631 1638 - spin_unlock(udc->lock); 1639 1632 retval = usb_ep_enable(&mEp->ep, &ctrl_endpt_desc); 1640 1633 if (!retval) { 1641 - mEp->status = usb_ep_alloc_request(&mEp->ep, GFP_KERNEL); 1634 + mEp->status = usb_ep_alloc_request(&mEp->ep, GFP_ATOMIC); 1642 1635 if (mEp->status == NULL) { 1643 1636 usb_ep_disable(&mEp->ep); 1644 1637 retval = -ENOMEM; ··· 2066 2061 { 2067 2062 struct ci13xxx_ep *mEp = container_of(ep, struct ci13xxx_ep, ep); 2068 2063 struct ci13xxx_req *mReq = NULL; 2069 - unsigned long flags; 2070 2064 2071 2065 trace("%p, %i", ep, gfp_flags); 2072 2066 ··· 2073 2069 err("EINVAL"); 2074 2070 return NULL; 2075 2071 } 2076 - 2077 - spin_lock_irqsave(mEp->lock, flags); 2078 2072 2079 2073 mReq = kzalloc(sizeof(struct ci13xxx_req), gfp_flags); 2080 2074 if (mReq != NULL) { ··· 2087 2085 } 2088 2086 2089 2087 dbg_event(_usb_addr(mEp), "ALLOC", mReq == NULL); 2090 - 2091 - spin_unlock_irqrestore(mEp->lock, flags); 2092 2088 2093 2089 return (mReq == NULL) ? 
NULL : &mReq->req; 2094 2090 } ··· 2332 2332 /****************************************************************************** 2333 2333 * GADGET block 2334 2334 *****************************************************************************/ 2335 + static int ci13xxx_vbus_session(struct usb_gadget *_gadget, int is_active) 2336 + { 2337 + struct ci13xxx *udc = container_of(_gadget, struct ci13xxx, gadget); 2338 + unsigned long flags; 2339 + int gadget_ready = 0; 2340 + 2341 + if (!(udc->udc_driver->flags & CI13XXX_PULLUP_ON_VBUS)) 2342 + return -EOPNOTSUPP; 2343 + 2344 + spin_lock_irqsave(udc->lock, flags); 2345 + udc->vbus_active = is_active; 2346 + if (udc->driver) 2347 + gadget_ready = 1; 2348 + spin_unlock_irqrestore(udc->lock, flags); 2349 + 2350 + if (gadget_ready) { 2351 + if (is_active) { 2352 + pm_runtime_get_sync(&_gadget->dev); 2353 + hw_device_reset(udc); 2354 + hw_device_state(udc->ci13xxx_ep[0].qh[RX].dma); 2355 + } else { 2356 + hw_device_state(0); 2357 + if (udc->udc_driver->notify_event) 2358 + udc->udc_driver->notify_event(udc, 2359 + CI13XXX_CONTROLLER_STOPPED_EVENT); 2360 + _gadget_stop_activity(&udc->gadget); 2361 + pm_runtime_put_sync(&_gadget->dev); 2362 + } 2363 + } 2364 + 2365 + return 0; 2366 + } 2367 + 2335 2368 /** 2336 2369 * Device operations part of the API to the USB controller hardware, 2337 2370 * which don't involve endpoints (or i/o) 2338 2371 * Check "usb_gadget.h" for details 2339 2372 */ 2340 - static const struct usb_gadget_ops usb_gadget_ops; 2373 + static const struct usb_gadget_ops usb_gadget_ops = { 2374 + .vbus_session = ci13xxx_vbus_session, 2375 + }; 2341 2376 2342 2377 /** 2343 2378 * usb_gadget_probe_driver: register a gadget driver ··· 2425 2390 info("hw_ep_max = %d", hw_ep_max); 2426 2391 2427 2392 udc->driver = driver; 2428 - udc->gadget.ops = NULL; 2429 2393 udc->gadget.dev.driver = NULL; 2430 2394 2431 2395 retval = 0; ··· 2444 2410 /* this allocation cannot be random */ 2445 2411 for (k = RX; k <= TX; k++) { 2446 
2412 INIT_LIST_HEAD(&mEp->qh[k].queue); 2413 + spin_unlock_irqrestore(udc->lock, flags); 2447 2414 mEp->qh[k].ptr = dma_pool_alloc(udc->qh_pool, 2448 2415 GFP_KERNEL, 2449 2416 &mEp->qh[k].dma); 2417 + spin_lock_irqsave(udc->lock, flags); 2450 2418 if (mEp->qh[k].ptr == NULL) 2451 2419 retval = -ENOMEM; 2452 2420 else ··· 2465 2429 2466 2430 /* bind gadget */ 2467 2431 driver->driver.bus = NULL; 2468 - udc->gadget.ops = &usb_gadget_ops; 2469 2432 udc->gadget.dev.driver = &driver->driver; 2470 2433 2471 2434 spin_unlock_irqrestore(udc->lock, flags); ··· 2472 2437 spin_lock_irqsave(udc->lock, flags); 2473 2438 2474 2439 if (retval) { 2475 - udc->gadget.ops = NULL; 2476 2440 udc->gadget.dev.driver = NULL; 2477 2441 goto done; 2478 2442 } 2479 2443 2444 + pm_runtime_get_sync(&udc->gadget.dev); 2445 + if (udc->udc_driver->flags & CI13XXX_PULLUP_ON_VBUS) { 2446 + if (udc->vbus_active) { 2447 + if (udc->udc_driver->flags & CI13XXX_REGS_SHARED) 2448 + hw_device_reset(udc); 2449 + } else { 2450 + pm_runtime_put_sync(&udc->gadget.dev); 2451 + goto done; 2452 + } 2453 + } 2454 + 2480 2455 retval = hw_device_state(udc->ci13xxx_ep[0].qh[RX].dma); 2456 + if (retval) 2457 + pm_runtime_put_sync(&udc->gadget.dev); 2481 2458 2482 2459 done: 2483 2460 spin_unlock_irqrestore(udc->lock, flags); ··· 2522 2475 2523 2476 spin_lock_irqsave(udc->lock, flags); 2524 2477 2525 - hw_device_state(0); 2478 + if (!(udc->udc_driver->flags & CI13XXX_PULLUP_ON_VBUS) || 2479 + udc->vbus_active) { 2480 + hw_device_state(0); 2481 + if (udc->udc_driver->notify_event) 2482 + udc->udc_driver->notify_event(udc, 2483 + CI13XXX_CONTROLLER_STOPPED_EVENT); 2484 + _gadget_stop_activity(&udc->gadget); 2485 + pm_runtime_put(&udc->gadget.dev); 2486 + } 2526 2487 2527 2488 /* unbind gadget */ 2528 - if (udc->gadget.ops != NULL) { 2529 - _gadget_stop_activity(&udc->gadget); 2489 + spin_unlock_irqrestore(udc->lock, flags); 2490 + driver->unbind(&udc->gadget); /* MAY SLEEP */ 2491 + spin_lock_irqsave(udc->lock, flags); 
2530 2492 2531 - spin_unlock_irqrestore(udc->lock, flags); 2532 - driver->unbind(&udc->gadget); /* MAY SLEEP */ 2533 - spin_lock_irqsave(udc->lock, flags); 2534 - 2535 - udc->gadget.ops = NULL; 2536 - udc->gadget.dev.driver = NULL; 2537 - } 2493 + udc->gadget.dev.driver = NULL; 2538 2494 2539 2495 /* free resources */ 2540 2496 for (i = 0; i < hw_ep_max; i++) { ··· 2594 2544 } 2595 2545 2596 2546 spin_lock(udc->lock); 2547 + 2548 + if (udc->udc_driver->flags & CI13XXX_REGS_SHARED) { 2549 + if (hw_cread(CAP_USBMODE, USBMODE_CM) != 2550 + USBMODE_CM_DEVICE) { 2551 + spin_unlock(udc->lock); 2552 + return IRQ_NONE; 2553 + } 2554 + } 2597 2555 intr = hw_test_and_clear_intr_active(); 2598 2556 if (intr) { 2599 2557 isr_statistics.hndl.buf[isr_statistics.hndl.idx++] = intr; ··· 2660 2602 * No interrupts active, the IRQ has not been requested yet 2661 2603 * Kernel assumes 32-bit DMA operations by default, no need to dma_set_mask 2662 2604 */ 2663 - static int udc_probe(struct device *dev, void __iomem *regs, const char *name) 2605 + static int udc_probe(struct ci13xxx_udc_driver *driver, struct device *dev, 2606 + void __iomem *regs) 2664 2607 { 2665 2608 struct ci13xxx *udc; 2666 2609 int retval = 0; 2667 2610 2668 2611 trace("%p, %p, %p", dev, regs, name); 2669 2612 2670 - if (dev == NULL || regs == NULL || name == NULL) 2613 + if (dev == NULL || regs == NULL || driver == NULL || 2614 + driver->name == NULL) 2671 2615 return -EINVAL; 2672 2616 2673 2617 udc = kzalloc(sizeof(struct ci13xxx), GFP_KERNEL); ··· 2677 2617 return -ENOMEM; 2678 2618 2679 2619 udc->lock = &udc_lock; 2620 + udc->regs = regs; 2621 + udc->udc_driver = driver; 2680 2622 2681 - retval = hw_device_reset(regs); 2682 - if (retval) 2683 - goto done; 2684 - 2685 - udc->gadget.ops = NULL; 2623 + udc->gadget.ops = &usb_gadget_ops; 2686 2624 udc->gadget.speed = USB_SPEED_UNKNOWN; 2687 2625 udc->gadget.is_dualspeed = 1; 2688 2626 udc->gadget.is_otg = 0; 2689 - udc->gadget.name = name; 2627 + udc->gadget.name 
= driver->name; 2690 2628 2691 2629 INIT_LIST_HEAD(&udc->gadget.ep_list); 2692 2630 udc->gadget.ep0 = NULL; 2693 2631 2694 2632 dev_set_name(&udc->gadget.dev, "gadget"); 2695 2633 udc->gadget.dev.dma_mask = dev->dma_mask; 2634 + udc->gadget.dev.coherent_dma_mask = dev->coherent_dma_mask; 2696 2635 udc->gadget.dev.parent = dev; 2697 2636 udc->gadget.dev.release = udc_release; 2698 2637 2638 + retval = hw_device_init(regs); 2639 + if (retval < 0) 2640 + goto free_udc; 2641 + 2642 + udc->transceiver = otg_get_transceiver(); 2643 + 2644 + if (udc->udc_driver->flags & CI13XXX_REQUIRE_TRANSCEIVER) { 2645 + if (udc->transceiver == NULL) { 2646 + retval = -ENODEV; 2647 + goto free_udc; 2648 + } 2649 + } 2650 + 2651 + if (!(udc->udc_driver->flags & CI13XXX_REGS_SHARED)) { 2652 + retval = hw_device_reset(udc); 2653 + if (retval) 2654 + goto put_transceiver; 2655 + } 2656 + 2699 2657 retval = device_register(&udc->gadget.dev); 2700 - if (retval) 2701 - goto done; 2658 + if (retval) { 2659 + put_device(&udc->gadget.dev); 2660 + goto put_transceiver; 2661 + } 2702 2662 2703 2663 #ifdef CONFIG_USB_GADGET_DEBUG_FILES 2704 2664 retval = dbg_create_files(&udc->gadget.dev); 2705 2665 #endif 2706 - if (retval) { 2707 - device_unregister(&udc->gadget.dev); 2708 - goto done; 2666 + if (retval) 2667 + goto unreg_device; 2668 + 2669 + if (udc->transceiver) { 2670 + retval = otg_set_peripheral(udc->transceiver, &udc->gadget); 2671 + if (retval) 2672 + goto remove_dbg; 2709 2673 } 2674 + pm_runtime_no_callbacks(&udc->gadget.dev); 2675 + pm_runtime_enable(&udc->gadget.dev); 2710 2676 2711 2677 _udc = udc; 2712 2678 return retval; 2713 2679 2714 - done: 2715 2680 err("error = %i", retval); 2681 + remove_dbg: 2682 + #ifdef CONFIG_USB_GADGET_DEBUG_FILES 2683 + dbg_remove_files(&udc->gadget.dev); 2684 + #endif 2685 + unreg_device: 2686 + device_unregister(&udc->gadget.dev); 2687 + put_transceiver: 2688 + if (udc->transceiver) 2689 + otg_put_transceiver(udc->transceiver); 2690 + free_udc: 2716 
2691 kfree(udc); 2717 2692 _udc = NULL; 2718 2693 return retval; ··· 2767 2672 return; 2768 2673 } 2769 2674 2675 + if (udc->transceiver) { 2676 + otg_set_peripheral(udc->transceiver, &udc->gadget); 2677 + otg_put_transceiver(udc->transceiver); 2678 + } 2770 2679 #ifdef CONFIG_USB_GADGET_DEBUG_FILES 2771 2680 dbg_remove_files(&udc->gadget.dev); 2772 2681 #endif ··· 2779 2680 kfree(udc); 2780 2681 _udc = NULL; 2781 2682 } 2782 - 2783 - /****************************************************************************** 2784 - * PCI block 2785 - *****************************************************************************/ 2786 - /** 2787 - * ci13xxx_pci_irq: interrut handler 2788 - * @irq: irq number 2789 - * @pdev: USB Device Controller interrupt source 2790 - * 2791 - * This function returns IRQ_HANDLED if the IRQ has been handled 2792 - * This is an ISR don't trace, use attribute interface instead 2793 - */ 2794 - static irqreturn_t ci13xxx_pci_irq(int irq, void *pdev) 2795 - { 2796 - if (irq == 0) { 2797 - dev_err(&((struct pci_dev *)pdev)->dev, "Invalid IRQ0 usage!"); 2798 - return IRQ_HANDLED; 2799 - } 2800 - return udc_irq(); 2801 - } 2802 - 2803 - /** 2804 - * ci13xxx_pci_probe: PCI probe 2805 - * @pdev: USB device controller being probed 2806 - * @id: PCI hotplug ID connecting controller to UDC framework 2807 - * 2808 - * This function returns an error code 2809 - * Allocates basic PCI resources for this USB device controller, and then 2810 - * invokes the udc_probe() method to start the UDC associated with it 2811 - */ 2812 - static int __devinit ci13xxx_pci_probe(struct pci_dev *pdev, 2813 - const struct pci_device_id *id) 2814 - { 2815 - void __iomem *regs = NULL; 2816 - int retval = 0; 2817 - 2818 - if (id == NULL) 2819 - return -EINVAL; 2820 - 2821 - retval = pci_enable_device(pdev); 2822 - if (retval) 2823 - goto done; 2824 - 2825 - if (!pdev->irq) { 2826 - dev_err(&pdev->dev, "No IRQ, check BIOS/PCI setup!"); 2827 - retval = -ENODEV; 2828 - goto 
disable_device; 2829 - } 2830 - 2831 - retval = pci_request_regions(pdev, UDC_DRIVER_NAME); 2832 - if (retval) 2833 - goto disable_device; 2834 - 2835 - /* BAR 0 holds all the registers */ 2836 - regs = pci_iomap(pdev, 0, 0); 2837 - if (!regs) { 2838 - dev_err(&pdev->dev, "Error mapping memory!"); 2839 - retval = -EFAULT; 2840 - goto release_regions; 2841 - } 2842 - pci_set_drvdata(pdev, (__force void *)regs); 2843 - 2844 - pci_set_master(pdev); 2845 - pci_try_set_mwi(pdev); 2846 - 2847 - retval = udc_probe(&pdev->dev, regs, UDC_DRIVER_NAME); 2848 - if (retval) 2849 - goto iounmap; 2850 - 2851 - /* our device does not have MSI capability */ 2852 - 2853 - retval = request_irq(pdev->irq, ci13xxx_pci_irq, IRQF_SHARED, 2854 - UDC_DRIVER_NAME, pdev); 2855 - if (retval) 2856 - goto gadget_remove; 2857 - 2858 - return 0; 2859 - 2860 - gadget_remove: 2861 - udc_remove(); 2862 - iounmap: 2863 - pci_iounmap(pdev, regs); 2864 - release_regions: 2865 - pci_release_regions(pdev); 2866 - disable_device: 2867 - pci_disable_device(pdev); 2868 - done: 2869 - return retval; 2870 - } 2871 - 2872 - /** 2873 - * ci13xxx_pci_remove: PCI remove 2874 - * @pdev: USB Device Controller being removed 2875 - * 2876 - * Reverses the effect of ci13xxx_pci_probe(), 2877 - * first invoking the udc_remove() and then releases 2878 - * all PCI resources allocated for this USB device controller 2879 - */ 2880 - static void __devexit ci13xxx_pci_remove(struct pci_dev *pdev) 2881 - { 2882 - free_irq(pdev->irq, pdev); 2883 - udc_remove(); 2884 - pci_iounmap(pdev, (__force void __iomem *)pci_get_drvdata(pdev)); 2885 - pci_release_regions(pdev); 2886 - pci_disable_device(pdev); 2887 - } 2888 - 2889 - /** 2890 - * PCI device table 2891 - * PCI device structure 2892 - * 2893 - * Check "pci.h" for details 2894 - */ 2895 - static DEFINE_PCI_DEVICE_TABLE(ci13xxx_pci_id_table) = { 2896 - { PCI_DEVICE(0x153F, 0x1004) }, 2897 - { PCI_DEVICE(0x153F, 0x1006) }, 2898 - { 0, 0, 0, 0, 0, 0, 0 /* end: all zeroes */ } 
2899 - }; 2900 - MODULE_DEVICE_TABLE(pci, ci13xxx_pci_id_table); 2901 - 2902 - static struct pci_driver ci13xxx_pci_driver = { 2903 - .name = UDC_DRIVER_NAME, 2904 - .id_table = ci13xxx_pci_id_table, 2905 - .probe = ci13xxx_pci_probe, 2906 - .remove = __devexit_p(ci13xxx_pci_remove), 2907 - }; 2908 - 2909 - /** 2910 - * ci13xxx_pci_init: module init 2911 - * 2912 - * Driver load 2913 - */ 2914 - static int __init ci13xxx_pci_init(void) 2915 - { 2916 - return pci_register_driver(&ci13xxx_pci_driver); 2917 - } 2918 - module_init(ci13xxx_pci_init); 2919 - 2920 - /** 2921 - * ci13xxx_pci_exit: module exit 2922 - * 2923 - * Driver unload 2924 - */ 2925 - static void __exit ci13xxx_pci_exit(void) 2926 - { 2927 - pci_unregister_driver(&ci13xxx_pci_driver); 2928 - } 2929 - module_exit(ci13xxx_pci_exit); 2930 - 2931 - MODULE_AUTHOR("MIPS - David Lopo <dlopo@chipidea.mips.com>"); 2932 - MODULE_DESCRIPTION("MIPS CI13XXX USB Peripheral Controller"); 2933 - MODULE_LICENSE("GPL"); 2934 - MODULE_VERSION("June 2008");
+19
drivers/usb/gadget/ci13xxx_udc.h
··· 97 97 struct dma_pool *td_pool; 98 98 }; 99 99 100 + struct ci13xxx; 101 + struct ci13xxx_udc_driver { 102 + const char *name; 103 + unsigned long flags; 104 + #define CI13XXX_REGS_SHARED BIT(0) 105 + #define CI13XXX_REQUIRE_TRANSCEIVER BIT(1) 106 + #define CI13XXX_PULLUP_ON_VBUS BIT(2) 107 + #define CI13XXX_DISABLE_STREAMING BIT(3) 108 + 109 + #define CI13XXX_CONTROLLER_RESET_EVENT 0 110 + #define CI13XXX_CONTROLLER_STOPPED_EVENT 1 111 + void (*notify_event) (struct ci13xxx *udc, unsigned event); 112 + }; 113 + 100 114 /* CI13XXX UDC descriptor & global resources */ 101 115 struct ci13xxx { 102 116 spinlock_t *lock; /* ctrl register bank access */ 117 + void __iomem *regs; /* registers address space */ 103 118 104 119 struct dma_pool *qh_pool; /* DMA pool for queue heads */ 105 120 struct dma_pool *td_pool; /* DMA pool for transfer descs */ ··· 123 108 struct ci13xxx_ep ci13xxx_ep[ENDPT_MAX]; /* extended endpts */ 124 109 125 110 struct usb_gadget_driver *driver; /* 3rd party gadget driver */ 111 + struct ci13xxx_udc_driver *udc_driver; /* device controller driver */ 112 + int vbus_active; /* is VBUS active */ 113 + struct otg_transceiver *transceiver; /* Transceiver struct */ 126 114 }; 127 115 128 116 /****************************************************************************** ··· 175 157 #define USBMODE_CM_DEVICE (0x02UL << 0) 176 158 #define USBMODE_CM_HOST (0x03UL << 0) 177 159 #define USBMODE_SLOM BIT(3) 160 + #define USBMODE_SDIS BIT(4) 178 161 179 162 /* ENDPTCTRL */ 180 163 #define ENDPTCTRL_RXS BIT(0)
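The new `ci13xxx_udc_driver` struct above lets each bus-glue layer (PCI, MSM) hand the core its per-platform flags and an event callback, so the core stays bus-agnostic and quirks are opted into via flag bits rather than `#ifdef`s. A minimal user-space sketch of how those pieces compose — the flag and event names are copied from the hunk, but `msm_notify_event` and the trimmed-down structs are illustrative stand-ins, not kernel code:

```c
#include <assert.h>

/* User-space model of the ci13xxx_udc_driver hook-up. The surrounding
 * kernel machinery is elided; this only shows how the flag bits and the
 * notify_event callback compose. */
#define BIT(n) (1UL << (n))
#define CI13XXX_REGS_SHARED            BIT(0)
#define CI13XXX_REQUIRE_TRANSCEIVER    BIT(1)
#define CI13XXX_PULLUP_ON_VBUS         BIT(2)
#define CI13XXX_DISABLE_STREAMING      BIT(3)

#define CI13XXX_CONTROLLER_RESET_EVENT   0
#define CI13XXX_CONTROLLER_STOPPED_EVENT 1

struct ci13xxx;
struct ci13xxx_udc_driver {
	const char *name;
	unsigned long flags;
	void (*notify_event)(struct ci13xxx *udc, unsigned event);
};

/* Stand-in for the real struct ci13xxx; only the fields used here. */
struct ci13xxx {
	struct ci13xxx_udc_driver *udc_driver;
	int resets_seen;
};

/* Hypothetical glue-layer callback (illustrative, not the MSM driver). */
static void msm_notify_event(struct ci13xxx *udc, unsigned event)
{
	if (event == CI13XXX_CONTROLLER_RESET_EVENT)
		udc->resets_seen++;
}

/* What the core would do on a bus reset: honour the glue layer's hook. */
static void core_handle_reset(struct ci13xxx *udc)
{
	if (udc->udc_driver->notify_event)
		udc->udc_driver->notify_event(udc,
					      CI13XXX_CONTROLLER_RESET_EVENT);
}
```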
+9 -1
drivers/usb/gadget/composite.c
··· 1126 1126 cdev->desc = *composite->dev; 1127 1127 cdev->desc.bMaxPacketSize0 = gadget->ep0->maxpacket; 1128 1128 1129 - /* stirng overrides */ 1129 + /* string overrides */ 1130 1130 if (iManufacturer || !cdev->desc.iManufacturer) { 1131 1131 if (!iManufacturer && !composite->iManufacturer && 1132 1132 !*composite_manufacturer) ··· 1188 1188 composite->suspend(cdev); 1189 1189 1190 1190 cdev->suspended = 1; 1191 + 1192 + usb_gadget_vbus_draw(gadget, 2); 1191 1193 } 1192 1194 1193 1195 static void ··· 1197 1195 { 1198 1196 struct usb_composite_dev *cdev = get_gadget_data(gadget); 1199 1197 struct usb_function *f; 1198 + u8 maxpower; 1200 1199 1201 1200 /* REVISIT: should we have config level 1202 1201 * suspend/resume callbacks? ··· 1210 1207 if (f->resume) 1211 1208 f->resume(f); 1212 1209 } 1210 + 1211 + maxpower = cdev->config->bMaxPower; 1212 + 1213 + usb_gadget_vbus_draw(gadget, maxpower ? 1214 + (2 * maxpower) : CONFIG_USB_GADGET_VBUS_DRAW); 1213 1215 } 1214 1216 1215 1217 cdev->suspended = 0;
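The composite.c hunk above starts reporting VBUS draw across suspend/resume: a suspended device may draw at most 2 mA, and on resume the configured draw is `2 * bMaxPower` mA, because the descriptor's `bMaxPower` field is expressed in 2 mA units, falling back to the Kconfig default when the configuration sets no `bMaxPower`. A stand-alone model of that arithmetic — `USB_GADGET_VBUS_DRAW` here is an assumed default standing in for the Kconfig-selected `CONFIG_USB_GADGET_VBUS_DRAW` value:

```c
#include <assert.h>

/* Assumed fallback; the real value comes from Kconfig (2..500 mA). */
#define USB_GADGET_VBUS_DRAW 2

/* Resume path: bMaxPower is in 2 mA units, so double it; an
 * unconfigured (zero) bMaxPower falls back to the Kconfig default. */
static unsigned resume_vbus_draw_ma(unsigned char bMaxPower)
{
	return bMaxPower ? 2u * bMaxPower : USB_GADGET_VBUS_DRAW;
}

/* Suspend path: a suspended device may draw at most 2 mA from VBUS. */
static unsigned suspend_vbus_draw_ma(void)
{
	return 2;
}
```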
+135 -116
drivers/usb/gadget/dummy_hcd.c
··· 1197 1197 #define Ep_Request (USB_TYPE_STANDARD | USB_RECIP_ENDPOINT) 1198 1198 #define Ep_InRequest (Ep_Request | USB_DIR_IN) 1199 1199 1200 + 1201 + /** 1202 + * handle_control_request() - handles all control transfers 1203 + * @dum: pointer to dummy (the_controller) 1204 + * @urb: the urb request to handle 1205 + * @setup: pointer to the setup data for a USB device control 1206 + * request 1207 + * @status: pointer to request handling status 1208 + * 1209 + * Return 0 - if the request was handled 1210 + * 1 - if the request wasn't handles 1211 + * error code on error 1212 + */ 1213 + static int handle_control_request(struct dummy *dum, struct urb *urb, 1214 + struct usb_ctrlrequest *setup, 1215 + int *status) 1216 + { 1217 + struct dummy_ep *ep2; 1218 + int ret_val = 1; 1219 + unsigned w_index; 1220 + unsigned w_value; 1221 + 1222 + w_index = le16_to_cpu(setup->wIndex); 1223 + w_value = le16_to_cpu(setup->wValue); 1224 + switch (setup->bRequest) { 1225 + case USB_REQ_SET_ADDRESS: 1226 + if (setup->bRequestType != Dev_Request) 1227 + break; 1228 + dum->address = w_value; 1229 + *status = 0; 1230 + dev_dbg(udc_dev(dum), "set_address = %d\n", 1231 + w_value); 1232 + ret_val = 0; 1233 + break; 1234 + case USB_REQ_SET_FEATURE: 1235 + if (setup->bRequestType == Dev_Request) { 1236 + ret_val = 0; 1237 + switch (w_value) { 1238 + case USB_DEVICE_REMOTE_WAKEUP: 1239 + break; 1240 + case USB_DEVICE_B_HNP_ENABLE: 1241 + dum->gadget.b_hnp_enable = 1; 1242 + break; 1243 + case USB_DEVICE_A_HNP_SUPPORT: 1244 + dum->gadget.a_hnp_support = 1; 1245 + break; 1246 + case USB_DEVICE_A_ALT_HNP_SUPPORT: 1247 + dum->gadget.a_alt_hnp_support = 1; 1248 + break; 1249 + default: 1250 + ret_val = -EOPNOTSUPP; 1251 + } 1252 + if (ret_val == 0) { 1253 + dum->devstatus |= (1 << w_value); 1254 + *status = 0; 1255 + } 1256 + } else if (setup->bRequestType == Ep_Request) { 1257 + /* endpoint halt */ 1258 + ep2 = find_endpoint(dum, w_index); 1259 + if (!ep2 || ep2->ep.name == ep0name) { 1260 
+ ret_val = -EOPNOTSUPP; 1261 + break; 1262 + } 1263 + ep2->halted = 1; 1264 + ret_val = 0; 1265 + *status = 0; 1266 + } 1267 + break; 1268 + case USB_REQ_CLEAR_FEATURE: 1269 + if (setup->bRequestType == Dev_Request) { 1270 + ret_val = 0; 1271 + switch (w_value) { 1272 + case USB_DEVICE_REMOTE_WAKEUP: 1273 + w_value = USB_DEVICE_REMOTE_WAKEUP; 1274 + break; 1275 + default: 1276 + ret_val = -EOPNOTSUPP; 1277 + break; 1278 + } 1279 + if (ret_val == 0) { 1280 + dum->devstatus &= ~(1 << w_value); 1281 + *status = 0; 1282 + } 1283 + } else if (setup->bRequestType == Ep_Request) { 1284 + /* endpoint halt */ 1285 + ep2 = find_endpoint(dum, w_index); 1286 + if (!ep2) { 1287 + ret_val = -EOPNOTSUPP; 1288 + break; 1289 + } 1290 + if (!ep2->wedged) 1291 + ep2->halted = 0; 1292 + ret_val = 0; 1293 + *status = 0; 1294 + } 1295 + break; 1296 + case USB_REQ_GET_STATUS: 1297 + if (setup->bRequestType == Dev_InRequest 1298 + || setup->bRequestType == Intf_InRequest 1299 + || setup->bRequestType == Ep_InRequest) { 1300 + char *buf; 1301 + /* 1302 + * device: remote wakeup, selfpowered 1303 + * interface: nothing 1304 + * endpoint: halt 1305 + */ 1306 + buf = (char *)urb->transfer_buffer; 1307 + if (urb->transfer_buffer_length > 0) { 1308 + if (setup->bRequestType == Ep_InRequest) { 1309 + ep2 = find_endpoint(dum, w_index); 1310 + if (!ep2) { 1311 + ret_val = -EOPNOTSUPP; 1312 + break; 1313 + } 1314 + buf[0] = ep2->halted; 1315 + } else if (setup->bRequestType == 1316 + Dev_InRequest) { 1317 + buf[0] = (u8)dum->devstatus; 1318 + } else 1319 + buf[0] = 0; 1320 + } 1321 + if (urb->transfer_buffer_length > 1) 1322 + buf[1] = 0; 1323 + urb->actual_length = min_t(u32, 2, 1324 + urb->transfer_buffer_length); 1325 + ret_val = 0; 1326 + *status = 0; 1327 + } 1328 + break; 1329 + } 1330 + return ret_val; 1331 + } 1332 + 1200 1333 /* drive both sides of the transfers; looks like irq handlers to 1201 1334 * both drivers except the callbacks aren't in_irq(). 
1202 1335 */ ··· 1432 1299 if (ep == &dum->ep [0] && ep->setup_stage) { 1433 1300 struct usb_ctrlrequest setup; 1434 1301 int value = 1; 1435 - struct dummy_ep *ep2; 1436 - unsigned w_index; 1437 - unsigned w_value; 1438 1302 1439 1303 setup = *(struct usb_ctrlrequest*) urb->setup_packet; 1440 - w_index = le16_to_cpu(setup.wIndex); 1441 - w_value = le16_to_cpu(setup.wValue); 1442 - 1443 1304 /* paranoia, in case of stale queued data */ 1444 1305 list_for_each_entry (req, &ep->queue, queue) { 1445 1306 list_del_init (&req->queue); ··· 1455 1328 ep->last_io = jiffies; 1456 1329 ep->setup_stage = 0; 1457 1330 ep->halted = 0; 1458 - switch (setup.bRequest) { 1459 - case USB_REQ_SET_ADDRESS: 1460 - if (setup.bRequestType != Dev_Request) 1461 - break; 1462 - dum->address = w_value; 1463 - status = 0; 1464 - dev_dbg (udc_dev(dum), "set_address = %d\n", 1465 - w_value); 1466 - value = 0; 1467 - break; 1468 - case USB_REQ_SET_FEATURE: 1469 - if (setup.bRequestType == Dev_Request) { 1470 - value = 0; 1471 - switch (w_value) { 1472 - case USB_DEVICE_REMOTE_WAKEUP: 1473 - break; 1474 - case USB_DEVICE_B_HNP_ENABLE: 1475 - dum->gadget.b_hnp_enable = 1; 1476 - break; 1477 - case USB_DEVICE_A_HNP_SUPPORT: 1478 - dum->gadget.a_hnp_support = 1; 1479 - break; 1480 - case USB_DEVICE_A_ALT_HNP_SUPPORT: 1481 - dum->gadget.a_alt_hnp_support 1482 - = 1; 1483 - break; 1484 - default: 1485 - value = -EOPNOTSUPP; 1486 - } 1487 - if (value == 0) { 1488 - dum->devstatus |= 1489 - (1 << w_value); 1490 - status = 0; 1491 - } 1492 1331 1493 - } else if (setup.bRequestType == Ep_Request) { 1494 - // endpoint halt 1495 - ep2 = find_endpoint (dum, w_index); 1496 - if (!ep2 || ep2->ep.name == ep0name) { 1497 - value = -EOPNOTSUPP; 1498 - break; 1499 - } 1500 - ep2->halted = 1; 1501 - value = 0; 1502 - status = 0; 1503 - } 1504 - break; 1505 - case USB_REQ_CLEAR_FEATURE: 1506 - if (setup.bRequestType == Dev_Request) { 1507 - switch (w_value) { 1508 - case USB_DEVICE_REMOTE_WAKEUP: 1509 - 
dum->devstatus &= ~(1 << 1510 - USB_DEVICE_REMOTE_WAKEUP); 1511 - value = 0; 1512 - status = 0; 1513 - break; 1514 - default: 1515 - value = -EOPNOTSUPP; 1516 - break; 1517 - } 1518 - } else if (setup.bRequestType == Ep_Request) { 1519 - // endpoint halt 1520 - ep2 = find_endpoint (dum, w_index); 1521 - if (!ep2) { 1522 - value = -EOPNOTSUPP; 1523 - break; 1524 - } 1525 - if (!ep2->wedged) 1526 - ep2->halted = 0; 1527 - value = 0; 1528 - status = 0; 1529 - } 1530 - break; 1531 - case USB_REQ_GET_STATUS: 1532 - if (setup.bRequestType == Dev_InRequest 1533 - || setup.bRequestType 1534 - == Intf_InRequest 1535 - || setup.bRequestType 1536 - == Ep_InRequest 1537 - ) { 1538 - char *buf; 1539 - 1540 - // device: remote wakeup, selfpowered 1541 - // interface: nothing 1542 - // endpoint: halt 1543 - buf = (char *)urb->transfer_buffer; 1544 - if (urb->transfer_buffer_length > 0) { 1545 - if (setup.bRequestType == 1546 - Ep_InRequest) { 1547 - ep2 = find_endpoint (dum, w_index); 1548 - if (!ep2) { 1549 - value = -EOPNOTSUPP; 1550 - break; 1551 - } 1552 - buf [0] = ep2->halted; 1553 - } else if (setup.bRequestType == 1554 - Dev_InRequest) { 1555 - buf [0] = (u8) 1556 - dum->devstatus; 1557 - } else 1558 - buf [0] = 0; 1559 - } 1560 - if (urb->transfer_buffer_length > 1) 1561 - buf [1] = 0; 1562 - urb->actual_length = min_t(u32, 2, 1563 - urb->transfer_buffer_length); 1564 - value = 0; 1565 - status = 0; 1566 - } 1567 - break; 1568 - } 1332 + value = handle_control_request(dum, urb, &setup, 1333 + &status); 1569 1334 1570 1335 /* gadget driver handles all other requests. block 1571 1336 * until setup() returns; no reentrancy issues etc.
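The `handle_control_request()` helper factored out in the dummy_hcd.c hunk above adopts a tri-state return: 0 when the emulated hardware consumed the request, 1 when it should be passed on to the gadget driver, and a negative errno on failure. A compressed user-space sketch of that convention, handling only SET_ADDRESS (the full helper also covers SET/CLEAR_FEATURE and GET_STATUS, and takes the urb and setup packet rather than these unpacked arguments):

```c
#include <assert.h>

#define USB_REQ_SET_ADDRESS 0x05	/* from ch9 */
#define USB_REQ_GET_DESCRIPTOR 0x06	/* from ch9 */

/* Stand-in for struct dummy; only the address field matters here. */
struct dummy {
	unsigned address;
};

/* Returns 0 if handled here, 1 if the gadget driver should see it,
 * <0 on error (no error paths in this trimmed sketch). */
static int handle_control_request(struct dummy *dum, unsigned bRequest,
				  unsigned wValue, int *status)
{
	int ret_val = 1;	/* default: not ours, defer to the gadget */

	switch (bRequest) {
	case USB_REQ_SET_ADDRESS:
		dum->address = wValue;
		*status = 0;
		ret_val = 0;	/* consumed by the emulated hardware */
		break;
	}
	return ret_val;
}
```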
+213 -224
drivers/usb/gadget/f_fs.c
··· 1 1 /* 2 - * f_fs.c -- user mode filesystem api for usb composite funtcion controllers 2 + * f_fs.c -- user mode file system API for USB composite function controllers 3 3 * 4 4 * Copyright (C) 2010 Samsung Electronics 5 5 * Author: Michal Nazarewicz <m.nazarewicz@samsung.com> 6 6 * 7 - * Based on inode.c (GadgetFS): 7 + * Based on inode.c (GadgetFS) which was: 8 8 * Copyright (C) 2003-2004 David Brownell 9 9 * Copyright (C) 2003 Agilent Technologies 10 10 * ··· 38 38 #define FUNCTIONFS_MAGIC 0xa647361 /* Chosen by a honest dice roll ;) */ 39 39 40 40 41 - /* Debuging *****************************************************************/ 42 - 43 - #define ffs_printk(level, fmt, args...) printk(level "f_fs: " fmt "\n", ## args) 44 - 45 - #define FERR(...) ffs_printk(KERN_ERR, __VA_ARGS__) 46 - #define FINFO(...) ffs_printk(KERN_INFO, __VA_ARGS__) 47 - 48 - #ifdef DEBUG 49 - # define FDBG(...) ffs_printk(KERN_DEBUG, __VA_ARGS__) 50 - #else 51 - # define FDBG(...) do { } while (0) 52 - #endif /* DEBUG */ 41 + /* Debugging ****************************************************************/ 53 42 54 43 #ifdef VERBOSE_DEBUG 55 - # define FVDBG FDBG 44 + # define pr_vdebug pr_debug 45 + # define ffs_dump_mem(prefix, ptr, len) \ 46 + print_hex_dump_bytes(pr_fmt(prefix ": "), DUMP_PREFIX_NONE, ptr, len) 56 47 #else 57 - # define FVDBG(...) do { } while (0) 48 + # define pr_vdebug(...) 
do { } while (0) 49 + # define ffs_dump_mem(prefix, ptr, len) do { } while (0) 58 50 #endif /* VERBOSE_DEBUG */ 59 51 60 - #define ENTER() FVDBG("%s()", __func__) 61 - 62 - #ifdef VERBOSE_DEBUG 63 - # define ffs_dump_mem(prefix, ptr, len) \ 64 - print_hex_dump_bytes("f_fs" prefix ": ", DUMP_PREFIX_NONE, ptr, len) 65 - #else 66 - # define ffs_dump_mem(prefix, ptr, len) do { } while (0) 67 - #endif 52 + #define ENTER() pr_vdebug("%s()\n", __func__) 68 53 69 54 70 55 /* The data structure and setup file ****************************************/ 71 56 72 57 enum ffs_state { 73 - /* Waiting for descriptors and strings. */ 74 - /* In this state no open(2), read(2) or write(2) on epfiles 58 + /* 59 + * Waiting for descriptors and strings. 60 + * 61 + * In this state no open(2), read(2) or write(2) on epfiles 75 62 * may succeed (which should not be the problem as there 76 - * should be no such files opened in the firts place). */ 63 + * should be no such files opened in the first place). 64 + */ 77 65 FFS_READ_DESCRIPTORS, 78 66 FFS_READ_STRINGS, 79 67 80 - /* We've got descriptors and strings. We are or have called 68 + /* 69 + * We've got descriptors and strings. We are or have called 81 70 * functionfs_ready_callback(). functionfs_bind() may have 82 - * been called but we don't know. */ 83 - /* This is the only state in which operations on epfiles may 84 - * succeed. */ 71 + * been called but we don't know. 72 + * 73 + * This is the only state in which operations on epfiles may 74 + * succeed. 75 + */ 85 76 FFS_ACTIVE, 86 77 87 - /* All endpoints have been closed. This state is also set if 78 + /* 79 + * All endpoints have been closed. This state is also set if 88 80 * we encounter an unrecoverable error. The only 89 81 * unrecoverable error is situation when after reading strings 90 - * from user space we fail to initialise EP files or 91 - * functionfs_ready_callback() returns with error (<0). 
*/ 92 - /* In this state no open(2), read(2) or write(2) (both on ep0 82 + * from user space we fail to initialise epfiles or 83 + * functionfs_ready_callback() returns with error (<0). 84 + * 85 + * In this state no open(2), read(2) or write(2) (both on ep0 93 86 * as well as epfile) may succeed (at this point epfiles are 94 87 * unlinked and all closed so this is not a problem; ep0 is 95 88 * also closed but ep0 file exists and so open(2) on ep0 must 96 - * fail). */ 89 + * fail). 90 + */ 97 91 FFS_CLOSING 98 92 }; 99 93 ··· 95 101 enum ffs_setup_state { 96 102 /* There is no setup request pending. */ 97 103 FFS_NO_SETUP, 98 - /* User has read events and there was a setup request event 104 + /* 105 + * User has read events and there was a setup request event 99 106 * there. The next read/write on ep0 will handle the 100 - * request. */ 107 + * request. 108 + */ 101 109 FFS_SETUP_PENDING, 102 - /* There was event pending but before user space handled it 110 + /* 111 + * There was event pending but before user space handled it 103 112 * some other event was introduced which canceled existing 104 113 * setup. If this state is set read/write on ep0 return 105 - * -EIDRM. This state is only set when adding event. */ 114 + * -EIDRM. This state is only set when adding event. 115 + */ 106 116 FFS_SETUP_CANCELED 107 117 }; 108 118 ··· 118 120 struct ffs_data { 119 121 struct usb_gadget *gadget; 120 122 121 - /* Protect access read/write operations, only one read/write 123 + /* 124 + * Protect access read/write operations, only one read/write 122 125 * at a time. As a consequence protects ep0req and company. 123 126 * While setup request is being processed (queued) this is 124 - * held. */ 127 + * held. 128 + */ 125 129 struct mutex mutex; 126 130 127 - /* Protect access to enpoint related structures (basically 131 + /* 132 + * Protect access to endpoint related structures (basically 128 133 * usb_ep_queue(), usb_ep_dequeue(), etc. calls) except for 129 - * endpint zero. 
*/ 134 + * endpoint zero. 135 + */ 130 136 spinlock_t eps_lock; 131 137 132 - /* XXX REVISIT do we need our own request? Since we are not 133 - * handling setup requests immidiatelly user space may be so 138 + /* 139 + * XXX REVISIT do we need our own request? Since we are not 140 + * handling setup requests immediately user space may be so 134 141 * slow that another setup will be sent to the gadget but this 135 142 * time not to us but another function and then there could be 136 143 * a race. Is that the case? Or maybe we can use cdev->req 137 - * after all, maybe we just need some spinlock for that? */ 144 + * after all, maybe we just need some spinlock for that? 145 + */ 138 146 struct usb_request *ep0req; /* P: mutex */ 139 147 struct completion ep0req_completion; /* P: mutex */ 140 148 int ep0req_status; /* P: mutex */ ··· 154 150 enum ffs_state state; 155 151 156 152 /* 157 - * Possible transations: 153 + * Possible transitions: 158 154 * + FFS_NO_SETUP -> FFS_SETUP_PENDING -- P: ev.waitq.lock 159 155 * happens only in ep0 read which is P: mutex 160 156 * + FFS_SETUP_PENDING -> FFS_NO_SETUP -- P: ev.waitq.lock ··· 187 183 /* Active function */ 188 184 struct ffs_function *func; 189 185 190 - /* Device name, write once when file system is mounted. 191 - * Intendet for user to read if she wants. */ 186 + /* 187 + * Device name, write once when file system is mounted. 188 + * Intended for user to read if she wants. 189 + */ 192 190 const char *dev_name; 193 - /* Private data for our user (ie. gadget). Managed by 194 - * user. */ 191 + /* Private data for our user (ie. gadget). Managed by user. */ 195 192 void *private_data; 196 193 197 194 /* filled by __ffs_data_got_descs() */ 198 - /* real descriptors are 16 bytes after raw_descs (so you need 195 + /* 196 + * Real descriptors are 16 bytes after raw_descs (so you need 199 197 * to skip 16 bytes (ie. ffs->raw_descs + 16) to get to the 200 198 * first full speed descriptor). 
raw_descs_length and 201 - * raw_fs_descs_length do not have those 16 bytes added. */ 199 + * raw_fs_descs_length do not have those 16 bytes added. 200 + */ 202 201 const void *raw_descs; 203 202 unsigned raw_descs_length; 204 203 unsigned raw_fs_descs_length; ··· 218 211 const void *raw_strings; 219 212 struct usb_gadget_strings **stringtabs; 220 213 221 - /* File system's super block, write once when file system is mounted. */ 214 + /* 215 + * File system's super block, write once when file system is 216 + * mounted. 217 + */ 222 218 struct super_block *sb; 223 219 224 - /* File permissions, written once when fs is mounted*/ 220 + /* File permissions, written once when fs is mounted */ 225 221 struct ffs_file_perms { 226 222 umode_t mode; 227 223 uid_t uid; 228 224 gid_t gid; 229 225 } file_perms; 230 226 231 - /* The endpoint files, filled by ffs_epfiles_create(), 232 - * destroyed by ffs_epfiles_destroy(). */ 227 + /* 228 + * The endpoint files, filled by ffs_epfiles_create(), 229 + * destroyed by ffs_epfiles_destroy(). 230 + */ 233 231 struct ffs_epfile *epfiles; 234 232 }; 235 233 ··· 248 236 static void ffs_data_opened(struct ffs_data *ffs); 249 237 static void ffs_data_closed(struct ffs_data *ffs); 250 238 251 - /* Called with ffs->mutex held; take over ownerrship of data. */ 239 + /* Called with ffs->mutex held; take over ownership of data. 
*/ 252 240 static int __must_check 253 241 __ffs_data_got_descs(struct ffs_data *ffs, char *data, size_t len); 254 242 static int __must_check ··· 279 267 280 268 static void ffs_func_free(struct ffs_function *func); 281 269 282 - 283 270 static void ffs_func_eps_disable(struct ffs_function *func); 284 271 static int __must_check ffs_func_eps_enable(struct ffs_function *func); 285 - 286 272 287 273 static int ffs_func_bind(struct usb_configuration *, 288 274 struct usb_function *); ··· 296 286 297 287 static int ffs_func_revmap_ep(struct ffs_function *func, u8 num); 298 288 static int ffs_func_revmap_intf(struct ffs_function *func, u8 intf); 299 - 300 289 301 290 302 291 /* The endpoints structures *************************************************/ ··· 330 321 unsigned char _pad; 331 322 }; 332 323 333 - 334 324 static int __must_check ffs_epfiles_create(struct ffs_data *ffs); 335 325 static void ffs_epfiles_destroy(struct ffs_epfile *epfiles, unsigned count); 336 326 ··· 355 347 356 348 complete_all(&ffs->ep0req_completion); 357 349 } 358 - 359 350 360 351 static int __ffs_ep0_queue_wait(struct ffs_data *ffs, char *data, size_t len) 361 352 { ··· 387 380 static int __ffs_ep0_stall(struct ffs_data *ffs) 388 381 { 389 382 if (ffs->ev.can_stall) { 390 - FVDBG("ep0 stall\n"); 383 + pr_vdebug("ep0 stall\n"); 391 384 usb_ep_set_halt(ffs->gadget->ep0); 392 385 ffs->setup_state = FFS_NO_SETUP; 393 386 return -EL2HLT; 394 387 } else { 395 - FDBG("bogus ep0 stall!\n"); 388 + pr_debug("bogus ep0 stall!\n"); 396 389 return -ESRCH; 397 390 } 398 391 } 399 - 400 392 401 393 static ssize_t ffs_ep0_write(struct file *file, const char __user *buf, 402 394 size_t len, loff_t *ptr) ··· 415 409 if (unlikely(ret < 0)) 416 410 return ret; 417 411 418 - 419 412 /* Check state */ 420 413 switch (ffs->state) { 421 414 case FFS_READ_DESCRIPTORS: ··· 426 421 } 427 422 428 423 data = ffs_prepare_buffer(buf, len); 429 - if (unlikely(IS_ERR(data))) { 424 + if (IS_ERR(data)) { 430 425 ret = 
PTR_ERR(data); 431 426 break; 432 427 } 433 428 434 429 /* Handle data */ 435 430 if (ffs->state == FFS_READ_DESCRIPTORS) { 436 - FINFO("read descriptors"); 431 + pr_info("read descriptors\n"); 437 432 ret = __ffs_data_got_descs(ffs, data, len); 438 433 if (unlikely(ret < 0)) 439 434 break; ··· 441 436 ffs->state = FFS_READ_STRINGS; 442 437 ret = len; 443 438 } else { 444 - FINFO("read strings"); 439 + pr_info("read strings\n"); 445 440 ret = __ffs_data_got_strings(ffs, data, len); 446 441 if (unlikely(ret < 0)) 447 442 break; ··· 466 461 } 467 462 break; 468 463 469 - 470 464 case FFS_ACTIVE: 471 465 data = NULL; 472 - /* We're called from user space, we can use _irq 473 - * rather then _irqsave */ 466 + /* 467 + * We're called from user space, we can use _irq 468 + * rather then _irqsave 469 + */ 474 470 spin_lock_irq(&ffs->ev.waitq.lock); 475 471 switch (FFS_SETUP_STATE(ffs)) { 476 472 case FFS_SETUP_CANCELED: ··· 499 493 spin_unlock_irq(&ffs->ev.waitq.lock); 500 494 501 495 data = ffs_prepare_buffer(buf, len); 502 - if (unlikely(IS_ERR(data))) { 496 + if (IS_ERR(data)) { 503 497 ret = PTR_ERR(data); 504 498 break; 505 499 } 506 500 507 501 spin_lock_irq(&ffs->ev.waitq.lock); 508 502 509 - /* We are guaranteed to be still in FFS_ACTIVE state 503 + /* 504 + * We are guaranteed to be still in FFS_ACTIVE state 510 505 * but the state of setup could have changed from 511 506 * FFS_SETUP_PENDING to FFS_SETUP_CANCELED so we need 512 507 * to check for that. If that happened we copied data 513 - * from user space in vain but it's unlikely. */ 514 - /* For sure we are not in FFS_NO_SETUP since this is 508 + * from user space in vain but it's unlikely. 509 + * 510 + * For sure we are not in FFS_NO_SETUP since this is 515 511 * the only place FFS_SETUP_PENDING -> FFS_NO_SETUP 516 512 * transition can be performed and it's protected by 517 - * mutex. */ 518 - 513 + * mutex. 
514 + */ 519 515 if (FFS_SETUP_STATE(ffs) == FFS_SETUP_CANCELED) { 520 516 ret = -EIDRM; 521 517 done_spin: ··· 529 521 kfree(data); 530 522 break; 531 523 532 - 533 524 default: 534 525 ret = -EBADFD; 535 526 break; 536 527 } 537 528 538 - 539 529 mutex_unlock(&ffs->mutex); 540 530 return ret; 541 531 } 542 532 543 - 544 - 545 533 static ssize_t __ffs_ep0_read_events(struct ffs_data *ffs, char __user *buf, 546 534 size_t n) 547 535 { 548 - /* We are holding ffs->ev.waitq.lock and ffs->mutex and we need 549 - * to release them. */ 550 - 536 + /* 537 + * We are holding ffs->ev.waitq.lock and ffs->mutex and we need 538 + * to release them. 539 + */ 551 540 struct usb_functionfs_event events[n]; 552 541 unsigned i = 0; 553 542 ··· 573 568 ? -EFAULT : sizeof events; 574 569 } 575 570 576 - 577 571 static ssize_t ffs_ep0_read(struct file *file, char __user *buf, 578 572 size_t len, loff_t *ptr) 579 573 { ··· 592 588 if (unlikely(ret < 0)) 593 589 return ret; 594 590 595 - 596 591 /* Check state */ 597 592 if (ffs->state != FFS_ACTIVE) { 598 593 ret = -EBADFD; 599 594 goto done_mutex; 600 595 } 601 596 602 - 603 - /* We're called from user space, we can use _irq rather then 604 - * _irqsave */ 597 + /* 598 + * We're called from user space, we can use _irq rather then 599 + * _irqsave 600 + */ 605 601 spin_lock_irq(&ffs->ev.waitq.lock); 606 602 607 603 switch (FFS_SETUP_STATE(ffs)) { ··· 621 617 break; 622 618 } 623 619 624 - if (unlikely(wait_event_interruptible_exclusive_locked_irq(ffs->ev.waitq, ffs->ev.count))) { 620 + if (wait_event_interruptible_exclusive_locked_irq(ffs->ev.waitq, 621 + ffs->ev.count)) { 625 622 ret = -EINTR; 626 623 break; 627 624 } 628 625 629 626 return __ffs_ep0_read_events(ffs, buf, 630 627 min(n, (size_t)ffs->ev.count)); 631 - 632 628 633 629 case FFS_SETUP_PENDING: 634 630 if (ffs->ev.setup.bRequestType & USB_DIR_IN) { ··· 675 671 return ret; 676 672 } 677 673 678 - 679 - 680 674 static int ffs_ep0_open(struct inode *inode, struct file *file) 
681 675 { 682 676 struct ffs_data *ffs = inode->i_private; ··· 690 688 return 0; 691 689 } 692 690 693 - 694 691 static int ffs_ep0_release(struct inode *inode, struct file *file) 695 692 { 696 693 struct ffs_data *ffs = file->private_data; ··· 700 699 701 700 return 0; 702 701 } 703 - 704 702 705 703 static long ffs_ep0_ioctl(struct file *file, unsigned code, unsigned long value) 706 704 { ··· 721 721 return ret; 722 722 } 723 723 724 - 725 724 static const struct file_operations ffs_ep0_operations = { 726 725 .owner = THIS_MODULE, 727 726 .llseek = no_llseek, ··· 735 736 736 737 /* "Normal" endpoints operations ********************************************/ 737 738 738 - 739 739 static void ffs_epfile_io_complete(struct usb_ep *_ep, struct usb_request *req) 740 740 { 741 741 ENTER(); ··· 744 746 complete(req->context); 745 747 } 746 748 } 747 - 748 749 749 750 static ssize_t ffs_epfile_io(struct file *file, 750 751 char __user *buf, size_t len, int read) ··· 774 777 goto error; 775 778 } 776 779 777 - if (unlikely(wait_event_interruptible 778 - (epfile->wait, (ep = epfile->ep)))) { 780 + if (wait_event_interruptible(epfile->wait, 781 + (ep = epfile->ep))) { 779 782 ret = -EINTR; 780 783 goto error; 781 784 } ··· 807 810 if (unlikely(ret)) 808 811 goto error; 809 812 810 - /* We're called from user space, we can use _irq rather then 811 - * _irqsave */ 813 + /* 814 + * We're called from user space, we can use _irq rather then 815 + * _irqsave 816 + */ 812 817 spin_lock_irq(&epfile->ffs->eps_lock); 813 818 814 - /* While we were acquiring mutex endpoint got disabled 815 - * or changed? */ 819 + /* 820 + * While we were acquiring mutex endpoint got disabled 821 + * or changed? 
822 + */ 816 823 } while (unlikely(epfile->ep != ep)); 817 824 818 825 /* Halt */ ··· 857 856 kfree(data); 858 857 return ret; 859 858 } 860 - 861 859 862 860 static ssize_t 863 861 ffs_epfile_write(struct file *file, const char __user *buf, size_t len, ··· 903 903 return 0; 904 904 } 905 905 906 - 907 906 static long ffs_epfile_ioctl(struct file *file, unsigned code, 908 907 unsigned long value) 909 908 { ··· 941 942 return ret; 942 943 } 943 944 944 - 945 945 static const struct file_operations ffs_epfile_operations = { 946 946 .owner = THIS_MODULE, 947 947 .llseek = no_llseek, ··· 953 955 }; 954 956 955 957 956 - 957 958 /* File system and super block operations ***********************************/ 958 959 959 960 /* 960 - * Mounting the filesystem creates a controller file, used first for 961 + * Mounting the file system creates a controller file, used first for 961 962 * function configuration then later for event monitoring. 962 963 */ 963 - 964 964 965 965 static struct inode *__must_check 966 966 ffs_sb_make_inode(struct super_block *sb, void *data, ··· 992 996 return inode; 993 997 } 994 998 995 - 996 999 /* Create "regular" file */ 997 - 998 1000 static struct inode *ffs_sb_create_file(struct super_block *sb, 999 1001 const char *name, void *data, 1000 1002 const struct file_operations *fops, ··· 1021 1027 return inode; 1022 1028 } 1023 1029 1024 - 1025 1030 /* Super block */ 1026 - 1027 1031 static const struct super_operations ffs_sb_operations = { 1028 1032 .statfs = simple_statfs, 1029 1033 .drop_inode = generic_delete_inode, ··· 1042 1050 1043 1051 ENTER(); 1044 1052 1045 - /* Initialize data */ 1053 + /* Initialise data */ 1046 1054 ffs = ffs_data_new(); 1047 1055 if (unlikely(!ffs)) 1048 1056 goto enomem0; ··· 1088 1096 return -ENOMEM; 1089 1097 } 1090 1098 1091 - 1092 1099 static int ffs_fs_parse_opts(struct ffs_sb_fill_data *data, char *opts) 1093 1100 { 1094 1101 ENTER(); ··· 1107 1116 /* Value limit */ 1108 1117 eq = strchr(opts, '='); 1109 
1118 if (unlikely(!eq)) { 1110 - FERR("'=' missing in %s", opts); 1119 + pr_err("'=' missing in %s\n", opts); 1111 1120 return -EINVAL; 1112 1121 } 1113 1122 *eq = 0; ··· 1115 1124 /* Parse value */ 1116 1125 value = simple_strtoul(eq + 1, &end, 0); 1117 1126 if (unlikely(*end != ',' && *end != 0)) { 1118 - FERR("%s: invalid value: %s", opts, eq + 1); 1127 + pr_err("%s: invalid value: %s\n", opts, eq + 1); 1119 1128 return -EINVAL; 1120 1129 } 1121 1130 ··· 1150 1159 1151 1160 default: 1152 1161 invalid: 1153 - FERR("%s: invalid option", opts); 1162 + pr_err("%s: invalid option\n", opts); 1154 1163 return -EINVAL; 1155 1164 } 1156 1165 ··· 1162 1171 1163 1172 return 0; 1164 1173 } 1165 - 1166 1174 1167 1175 /* "mount -t functionfs dev_name /dev/function" ends up here */ 1168 1176 ··· 1214 1224 }; 1215 1225 1216 1226 1217 - 1218 1227 /* Driver's main init/cleanup functions *************************************/ 1219 - 1220 1228 1221 1229 static int functionfs_init(void) 1222 1230 { ··· 1224 1236 1225 1237 ret = register_filesystem(&ffs_fs_type); 1226 1238 if (likely(!ret)) 1227 - FINFO("file system registered"); 1239 + pr_info("file system registered\n"); 1228 1240 else 1229 - FERR("failed registering file system (%d)", ret); 1241 + pr_err("failed registering file system (%d)\n", ret); 1230 1242 1231 1243 return ret; 1232 1244 } ··· 1235 1247 { 1236 1248 ENTER(); 1237 1249 1238 - FINFO("unloading"); 1250 + pr_info("unloading\n"); 1239 1251 unregister_filesystem(&ffs_fs_type); 1240 1252 } 1241 - 1242 1253 1243 1254 1244 1255 /* ffs_data and ffs_function construction and destruction code **************/ 1245 1256 1246 1257 static void ffs_data_clear(struct ffs_data *ffs); 1247 1258 static void ffs_data_reset(struct ffs_data *ffs); 1248 - 1249 1259 1250 1260 static void ffs_data_get(struct ffs_data *ffs) 1251 1261 { ··· 1265 1279 ENTER(); 1266 1280 1267 1281 if (unlikely(atomic_dec_and_test(&ffs->ref))) { 1268 - FINFO("%s(): freeing", __func__); 1282 + pr_info("%s(): 
freeing\n", __func__); 1269 1283 ffs_data_clear(ffs); 1270 1284 BUG_ON(mutex_is_locked(&ffs->mutex) || 1271 1285 spin_is_locked(&ffs->ev.waitq.lock) || ··· 1274 1288 kfree(ffs); 1275 1289 } 1276 1290 } 1277 - 1278 - 1279 1291 1280 1292 static void ffs_data_closed(struct ffs_data *ffs) 1281 1293 { ··· 1286 1302 1287 1303 ffs_data_put(ffs); 1288 1304 } 1289 - 1290 1305 1291 1306 static struct ffs_data *ffs_data_new(void) 1292 1307 { ··· 1309 1326 return ffs; 1310 1327 } 1311 1328 1312 - 1313 1329 static void ffs_data_clear(struct ffs_data *ffs) 1314 1330 { 1315 1331 ENTER(); ··· 1325 1343 kfree(ffs->raw_strings); 1326 1344 kfree(ffs->stringtabs); 1327 1345 } 1328 - 1329 1346 1330 1347 static void ffs_data_reset(struct ffs_data *ffs) 1331 1348 { ··· 1388 1407 return 0; 1389 1408 } 1390 1409 1391 - 1392 1410 static void functionfs_unbind(struct ffs_data *ffs) 1393 1411 { 1394 1412 ENTER(); ··· 1399 1419 ffs_data_put(ffs); 1400 1420 } 1401 1421 } 1402 - 1403 1422 1404 1423 static int ffs_epfiles_create(struct ffs_data *ffs) 1405 1424 { ··· 1430 1451 return 0; 1431 1452 } 1432 1453 1433 - 1434 1454 static void ffs_epfiles_destroy(struct ffs_epfile *epfiles, unsigned count) 1435 1455 { 1436 1456 struct ffs_epfile *epfile = epfiles; ··· 1448 1470 1449 1471 kfree(epfiles); 1450 1472 } 1451 - 1452 1473 1453 1474 static int functionfs_bind_config(struct usb_composite_dev *cdev, 1454 1475 struct usb_configuration *c, ··· 1468 1491 func->function.bind = ffs_func_bind; 1469 1492 func->function.unbind = ffs_func_unbind; 1470 1493 func->function.set_alt = ffs_func_set_alt; 1471 - /*func->function.get_alt = ffs_func_get_alt;*/ 1472 1494 func->function.disable = ffs_func_disable; 1473 1495 func->function.setup = ffs_func_setup; 1474 1496 func->function.suspend = ffs_func_suspend; ··· 1492 1516 ffs_data_put(func->ffs); 1493 1517 1494 1518 kfree(func->eps); 1495 - /* eps and interfaces_nums are allocated in the same chunk so 1519 + /* 1520 + * eps and interfaces_nums are allocated in 
the same chunk so 1496 1521 * only one free is required. Descriptors are also allocated 1497 - * in the same chunk. */ 1522 + * in the same chunk. 1523 + */ 1498 1524 1499 1525 kfree(func); 1500 1526 } 1501 - 1502 1527 1503 1528 static void ffs_func_eps_disable(struct ffs_function *func) 1504 1529 { ··· 1558 1581 1559 1582 /* Parsing and building descriptors and strings *****************************/ 1560 1583 1561 - 1562 - /* This validates if data pointed by data is a valid USB descriptor as 1584 + /* 1585 + * This validates if data pointed by data is a valid USB descriptor as 1563 1586 * well as record how many interfaces, endpoints and strings are 1564 - * required by given configuration. Returns address afther the 1565 - * descriptor or NULL if data is invalid. */ 1587 + * required by given configuration. Returns address after the 1588 + * descriptor or NULL if data is invalid. 1589 + */ 1566 1590 1567 1591 enum ffs_entity_type { 1568 1592 FFS_DESCRIPTOR, FFS_INTERFACE, FFS_STRING, FFS_ENDPOINT ··· 1585 1607 1586 1608 /* At least two bytes are required: length and type */ 1587 1609 if (len < 2) { 1588 - FVDBG("descriptor too short"); 1610 + pr_vdebug("descriptor too short\n"); 1589 1611 return -EINVAL; 1590 1612 } 1591 1613 1592 1614 /* If we have at least as many bytes as the descriptor takes? 
*/ 1593 1615 length = _ds->bLength; 1594 1616 if (len < length) { 1595 - FVDBG("descriptor longer then available data"); 1617 + pr_vdebug("descriptor longer then available data\n"); 1596 1618 return -EINVAL; 1597 1619 } 1598 1620 ··· 1600 1622 #define __entity_check_STRING(val) (val) 1601 1623 #define __entity_check_ENDPOINT(val) ((val) & USB_ENDPOINT_NUMBER_MASK) 1602 1624 #define __entity(type, val) do { \ 1603 - FVDBG("entity " #type "(%02x)", (val)); \ 1625 + pr_vdebug("entity " #type "(%02x)\n", (val)); \ 1604 1626 if (unlikely(!__entity_check_ ##type(val))) { \ 1605 - FVDBG("invalid entity's value"); \ 1627 + pr_vdebug("invalid entity's value\n"); \ 1606 1628 return -EINVAL; \ 1607 1629 } \ 1608 1630 ret = entity(FFS_ ##type, &val, _ds, priv); \ 1609 1631 if (unlikely(ret < 0)) { \ 1610 - FDBG("entity " #type "(%02x); ret = %d", \ 1611 - (val), ret); \ 1632 + pr_debug("entity " #type "(%02x); ret = %d\n", \ 1633 + (val), ret); \ 1612 1634 return ret; \ 1613 1635 } \ 1614 1636 } while (0) ··· 1620 1642 case USB_DT_STRING: 1621 1643 case USB_DT_DEVICE_QUALIFIER: 1622 1644 /* function can't have any of those */ 1623 - FVDBG("descriptor reserved for gadget: %d", _ds->bDescriptorType); 1645 + pr_vdebug("descriptor reserved for gadget: %d\n", 1646 + _ds->bDescriptorType); 1624 1647 return -EINVAL; 1625 1648 1626 1649 case USB_DT_INTERFACE: { 1627 1650 struct usb_interface_descriptor *ds = (void *)_ds; 1628 - FVDBG("interface descriptor"); 1651 + pr_vdebug("interface descriptor\n"); 1629 1652 if (length != sizeof *ds) 1630 1653 goto inv_length; 1631 1654 ··· 1638 1659 1639 1660 case USB_DT_ENDPOINT: { 1640 1661 struct usb_endpoint_descriptor *ds = (void *)_ds; 1641 - FVDBG("endpoint descriptor"); 1662 + pr_vdebug("endpoint descriptor\n"); 1642 1663 if (length != USB_DT_ENDPOINT_SIZE && 1643 1664 length != USB_DT_ENDPOINT_AUDIO_SIZE) 1644 1665 goto inv_length; ··· 1653 1674 1654 1675 case USB_DT_INTERFACE_ASSOCIATION: { 1655 1676 struct usb_interface_assoc_descriptor 
*ds = (void *)_ds; 1656 - FVDBG("interface association descriptor"); 1677 + pr_vdebug("interface association descriptor\n"); 1657 1678 if (length != sizeof *ds) 1658 1679 goto inv_length; 1659 1680 if (ds->iFunction) ··· 1667 1688 case USB_DT_SECURITY: 1668 1689 case USB_DT_CS_RADIO_CONTROL: 1669 1690 /* TODO */ 1670 - FVDBG("unimplemented descriptor: %d", _ds->bDescriptorType); 1691 + pr_vdebug("unimplemented descriptor: %d\n", _ds->bDescriptorType); 1671 1692 return -EINVAL; 1672 1693 1673 1694 default: 1674 1695 /* We should never be here */ 1675 - FVDBG("unknown descriptor: %d", _ds->bDescriptorType); 1696 + pr_vdebug("unknown descriptor: %d\n", _ds->bDescriptorType); 1676 1697 return -EINVAL; 1677 1698 1678 - inv_length: 1679 - FVDBG("invalid length: %d (descriptor %d)", 1680 - _ds->bLength, _ds->bDescriptorType); 1699 + inv_length: 1700 + pr_vdebug("invalid length: %d (descriptor %d)\n", 1701 + _ds->bLength, _ds->bDescriptorType); 1681 1702 return -EINVAL; 1682 1703 } 1683 1704 ··· 1689 1710 1690 1711 return length; 1691 1712 } 1692 - 1693 1713 1694 1714 static int __must_check ffs_do_descs(unsigned count, char *data, unsigned len, 1695 1715 ffs_entity_callback entity, void *priv) ··· 1704 1726 if (num == count) 1705 1727 data = NULL; 1706 1728 1707 - /* Record "descriptor" entitny */ 1729 + /* Record "descriptor" entity */ 1708 1730 ret = entity(FFS_DESCRIPTOR, (u8 *)num, (void *)data, priv); 1709 1731 if (unlikely(ret < 0)) { 1710 - FDBG("entity DESCRIPTOR(%02lx); ret = %d", num, ret); 1732 + pr_debug("entity DESCRIPTOR(%02lx); ret = %d\n", 1733 + num, ret); 1711 1734 return ret; 1712 1735 } 1713 1736 ··· 1717 1738 1718 1739 ret = ffs_do_desc(data, len, entity, priv); 1719 1740 if (unlikely(ret < 0)) { 1720 - FDBG("%s returns %d", __func__, ret); 1741 + pr_debug("%s returns %d\n", __func__, ret); 1721 1742 return ret; 1722 1743 } 1723 1744 ··· 1726 1747 ++num; 1727 1748 } 1728 1749 } 1729 - 1730 1750 1731 1751 static int __ffs_data_do_entity(enum 
ffs_entity_type type, 1732 1752 u8 *valuep, struct usb_descriptor_header *desc, ··· 1740 1762 break; 1741 1763 1742 1764 case FFS_INTERFACE: 1743 - /* Interfaces are indexed from zero so if we 1765 + /* 1766 + * Interfaces are indexed from zero so if we 1744 1767 * encountered interface "n" then there are at least 1745 - * "n+1" interfaces. */ 1768 + * "n+1" interfaces. 1769 + */ 1746 1770 if (*valuep >= ffs->interfaces_count) 1747 1771 ffs->interfaces_count = *valuep + 1; 1748 1772 break; 1749 1773 1750 1774 case FFS_STRING: 1751 - /* Strings are indexed from 1 (0 is magic ;) reserved 1752 - * for languages list or some such) */ 1775 + /* 1776 + * Strings are indexed from 1 (0 is magic ;) reserved 1777 + * for languages list or some such) 1778 + */ 1753 1779 if (*valuep > ffs->strings_count) 1754 1780 ffs->strings_count = *valuep; 1755 1781 break; ··· 1767 1785 1768 1786 return 0; 1769 1787 } 1770 - 1771 1788 1772 1789 static int __ffs_data_got_descs(struct ffs_data *ffs, 1773 1790 char *const _data, size_t len) ··· 1830 1849 return ret; 1831 1850 } 1832 1851 1833 - 1834 - 1835 1852 static int __ffs_data_got_strings(struct ffs_data *ffs, 1836 1853 char *const _data, size_t len) 1837 1854 { ··· 1855 1876 if (unlikely(str_count < needed_count)) 1856 1877 goto error; 1857 1878 1858 - /* If we don't need any strings just return and free all 1859 - * memory */ 1879 + /* 1880 + * If we don't need any strings just return and free all 1881 + * memory. 1882 + */ 1860 1883 if (!needed_count) { 1861 1884 kfree(_data); 1862 1885 return 0; 1863 1886 } 1864 1887 1865 - /* Allocate */ 1888 + /* Allocate everything in one chunk so there's less maintenance. */ 1866 1889 { 1867 - /* Allocate everything in one chunk so there's less 1868 - * maintanance. 
*/ 1869 1890 struct { 1870 1891 struct usb_gadget_strings *stringtabs[lang_count + 1]; 1871 1892 struct usb_gadget_strings stringtab[lang_count]; ··· 1916 1937 if (unlikely(length == len)) 1917 1938 goto error_free; 1918 1939 1919 - /* user may provide more strings then we need, 1920 - * if that's the case we simply ingore the 1921 - * rest */ 1940 + /* 1941 + * User may provide more strings then we need, 1942 + * if that's the case we simply ignore the 1943 + * rest 1944 + */ 1922 1945 if (likely(needed)) { 1923 - /* s->id will be set while adding 1946 + /* 1947 + * s->id will be set while adding 1924 1948 * function to configuration so for 1925 - * now just leave garbage here. */ 1949 + * now just leave garbage here. 1950 + */ 1926 1951 s->s = data; 1927 1952 --needed; 1928 1953 ++s; ··· 1960 1977 } 1961 1978 1962 1979 1963 - 1964 - 1965 1980 /* Events handling and management *******************************************/ 1966 1981 1967 1982 static void __ffs_event_add(struct ffs_data *ffs, ··· 1968 1987 enum usb_functionfs_event_type rem_type1, rem_type2 = type; 1969 1988 int neg = 0; 1970 1989 1971 - /* Abort any unhandled setup */ 1972 - /* We do not need to worry about some cmpxchg() changing value 1990 + /* 1991 + * Abort any unhandled setup 1992 + * 1993 + * We do not need to worry about some cmpxchg() changing value 1973 1994 * of ffs->setup_state without holding the lock because when 1974 1995 * state is FFS_SETUP_PENDING cmpxchg() in several places in 1975 - * the source does nothing. */ 1996 + * the source does nothing. 
1997 + */ 1976 1998 if (ffs->setup_state == FFS_SETUP_PENDING) 1977 1999 ffs->setup_state = FFS_SETUP_CANCELED; 1978 2000 1979 2001 switch (type) { 1980 2002 case FUNCTIONFS_RESUME: 1981 2003 rem_type2 = FUNCTIONFS_SUSPEND; 1982 - /* FALL THGOUTH */ 2004 + /* FALL THROUGH */ 1983 2005 case FUNCTIONFS_SUSPEND: 1984 2006 case FUNCTIONFS_SETUP: 1985 2007 rem_type1 = type; 1986 - /* discard all similar events */ 2008 + /* Discard all similar events */ 1987 2009 break; 1988 2010 1989 2011 case FUNCTIONFS_BIND: 1990 2012 case FUNCTIONFS_UNBIND: 1991 2013 case FUNCTIONFS_DISABLE: 1992 2014 case FUNCTIONFS_ENABLE: 1993 - /* discard everything other then power management. */ 2015 + /* Discard everything other then power management. */ 1994 2016 rem_type1 = FUNCTIONFS_SUSPEND; 1995 2017 rem_type2 = FUNCTIONFS_RESUME; 1996 2018 neg = 1; ··· 2010 2026 if ((*ev == rem_type1 || *ev == rem_type2) == neg) 2011 2027 *out++ = *ev; 2012 2028 else 2013 - FVDBG("purging event %d", *ev); 2029 + pr_vdebug("purging event %d\n", *ev); 2014 2030 ffs->ev.count = out - ffs->ev.types; 2015 2031 } 2016 2032 2017 - FVDBG("adding event %d", type); 2033 + pr_vdebug("adding event %d\n", type); 2018 2034 ffs->ev.types[ffs->ev.count++] = type; 2019 2035 wake_up_locked(&ffs->ev.waitq); 2020 2036 } ··· 2039 2055 struct ffs_function *func = priv; 2040 2056 struct ffs_ep *ffs_ep; 2041 2057 2042 - /* If hs_descriptors is not NULL then we are reading hs 2043 - * descriptors now */ 2058 + /* 2059 + * If hs_descriptors is not NULL then we are reading hs 2060 + * descriptors now 2061 + */ 2044 2062 const int isHS = func->function.hs_descriptors != NULL; 2045 2063 unsigned idx; 2046 2064 ··· 2061 2075 ffs_ep = func->eps + idx; 2062 2076 2063 2077 if (unlikely(ffs_ep->descs[isHS])) { 2064 - FVDBG("two %sspeed descriptors for EP %d", 2065 - isHS ? "high" : "full", 2066 - ds->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK); 2078 + pr_vdebug("two %sspeed descriptors for EP %d\n", 2079 + isHS ? 
"high" : "full", 2080 + ds->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK); 2067 2081 return -EINVAL; 2068 2082 } 2069 2083 ffs_ep->descs[isHS] = ds; ··· 2077 2091 struct usb_request *req; 2078 2092 struct usb_ep *ep; 2079 2093 2080 - FVDBG("autoconfig"); 2094 + pr_vdebug("autoconfig\n"); 2081 2095 ep = usb_ep_autoconfig(func->gadget, ds); 2082 2096 if (unlikely(!ep)) 2083 2097 return -ENOTSUPP; 2084 - ep->driver_data = func->eps + idx;; 2098 + ep->driver_data = func->eps + idx; 2085 2099 2086 2100 req = usb_ep_alloc_request(ep, GFP_KERNEL); 2087 2101 if (unlikely(!req)) ··· 2096 2110 2097 2111 return 0; 2098 2112 } 2099 - 2100 2113 2101 2114 static int __ffs_func_bind_do_nums(enum ffs_entity_type type, u8 *valuep, 2102 2115 struct usb_descriptor_header *desc, ··· 2128 2143 break; 2129 2144 2130 2145 case FFS_ENDPOINT: 2131 - /* USB_DT_ENDPOINT are handled in 2132 - * __ffs_func_bind_do_descs(). */ 2146 + /* 2147 + * USB_DT_ENDPOINT are handled in 2148 + * __ffs_func_bind_do_descs(). 2149 + */ 2133 2150 if (desc->bDescriptorType == USB_DT_ENDPOINT) 2134 2151 return 0; 2135 2152 ··· 2147 2160 break; 2148 2161 } 2149 2162 2150 - FVDBG("%02x -> %02x", *valuep, newValue); 2163 + pr_vdebug("%02x -> %02x\n", *valuep, newValue); 2151 2164 *valuep = newValue; 2152 2165 return 0; 2153 2166 } ··· 2198 2211 func->eps = data->eps; 2199 2212 func->interfaces_nums = data->inums; 2200 2213 2201 - /* Go throught all the endpoint descriptors and allocate 2214 + /* 2215 + * Go through all the endpoint descriptors and allocate 2202 2216 * endpoints first, so that later we can rewrite the endpoint 2203 - * numbers without worying that it may be described later on. */ 2217 + * numbers without worrying that it may be described later on. 
2218 + */ 2204 2219 if (likely(full)) { 2205 2220 func->function.descriptors = data->fs_descs; 2206 2221 ret = ffs_do_descs(ffs->fs_descs_count, ··· 2223 2234 __ffs_func_bind_do_descs, func); 2224 2235 } 2225 2236 2226 - /* Now handle interface numbers allocation and interface and 2227 - * enpoint numbers rewritting. We can do that in one go 2228 - * now. */ 2237 + /* 2238 + * Now handle interface numbers allocation and interface and 2239 + * endpoint numbers rewriting. We can do that in one go 2240 + * now. 2241 + */ 2229 2242 ret = ffs_do_descs(ffs->fs_descs_count + 2230 2243 (high ? ffs->hs_descs_count : 0), 2231 2244 data->raw_descs, sizeof data->raw_descs, ··· 2264 2273 2265 2274 ffs_func_free(func); 2266 2275 } 2267 - 2268 2276 2269 2277 static int ffs_func_set_alt(struct usb_function *f, 2270 2278 unsigned interface, unsigned alt) ··· 2312 2322 2313 2323 ENTER(); 2314 2324 2315 - FVDBG("creq->bRequestType = %02x", creq->bRequestType); 2316 - FVDBG("creq->bRequest = %02x", creq->bRequest); 2317 - FVDBG("creq->wValue = %04x", le16_to_cpu(creq->wValue)); 2318 - FVDBG("creq->wIndex = %04x", le16_to_cpu(creq->wIndex)); 2319 - FVDBG("creq->wLength = %04x", le16_to_cpu(creq->wLength)); 2325 + pr_vdebug("creq->bRequestType = %02x\n", creq->bRequestType); 2326 + pr_vdebug("creq->bRequest = %02x\n", creq->bRequest); 2327 + pr_vdebug("creq->wValue = %04x\n", le16_to_cpu(creq->wValue)); 2328 + pr_vdebug("creq->wIndex = %04x\n", le16_to_cpu(creq->wIndex)); 2329 + pr_vdebug("creq->wLength = %04x\n", le16_to_cpu(creq->wLength)); 2320 2330 2321 - /* Most requests directed to interface go throught here 2331 + /* 2332 + * Most requests directed to interface go through here 2322 2333 * (notable exceptions are set/get interface) so we need to 2323 2334 * handle them. All other either handled by composite or 2324 2335 * passed to usb_configuration->setup() (if one is set). 
No 2325 2336 * matter, we will handle requests directed to endpoint here 2326 2337 * as well (as it's straightforward) but what to do with any 2327 - * other request? */ 2328 - 2338 + * other request? 2339 + */ 2329 2340 if (ffs->state != FFS_ACTIVE) 2330 2341 return -ENODEV; 2331 2342 ··· 2369 2378 } 2370 2379 2371 2380 2372 - 2373 - /* Enpoint and interface numbers reverse mapping ****************************/ 2381 + /* Endpoint and interface numbers reverse mapping ***************************/ 2374 2382 2375 2383 static int ffs_func_revmap_ep(struct ffs_function *func, u8 num) 2376 2384 { ··· 2400 2410 : mutex_lock_interruptible(mutex); 2401 2411 } 2402 2412 2403 - 2404 2413 static char *ffs_prepare_buffer(const char * __user buf, size_t len) 2405 2414 { 2406 2415 char *data; ··· 2416 2427 return ERR_PTR(-EFAULT); 2417 2428 } 2418 2429 2419 - FVDBG("Buffer from user space:"); 2430 + pr_vdebug("Buffer from user space:\n"); 2420 2431 ffs_dump_mem("", data, len); 2421 2432 2422 2433 return data;
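The dominant mechanical change in the f_fs.c hunks above swaps the driver-local FVDBG()/FDBG() macros for the generic pr_vdebug()/pr_debug() helpers, with an explicit trailing "\n" added to each message. A minimal user-space sketch of that wrapper pattern (the `_sketch` names, the capture buffer, and the run-time verbose flag are illustrative assumptions, not the kernel's definitions):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Capture buffer standing in for the kernel log, so the formatting
 * behaviour of the wrappers is observable outside kernel context. */
static char log_buf[256];

/* Sketch of pr_debug(): format the message into the "log". */
static void pr_debug_sketch(const char *fmt, ...)
{
	va_list ap;

	va_start(ap, fmt);
	vsnprintf(log_buf, sizeof(log_buf), fmt, ap);
	va_end(ap);
}

/* Sketch of pr_vdebug(): the verbose variant, which in the real driver
 * is compiled out unless VERBOSE_DEBUG is set; modelled here with a
 * run-time flag instead of a preprocessor conditional. */
static int verbose_debug = 1;

static void pr_vdebug_sketch(const char *fmt, ...)
{
	va_list ap;

	if (!verbose_debug)
		return;
	va_start(ap, fmt);
	vsnprintf(log_buf, sizeof(log_buf), fmt, ap);
	va_end(ap);
}
```

In the kernel proper, pr_debug() compiles away entirely unless DEBUG (or dynamic debug) is enabled, so the converted call sites cost nothing in production builds; the conversion mainly buys one shared implementation instead of per-driver macros.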
+278 -246
drivers/usb/gadget/f_mass_storage.c
··· 37 37 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 38 38 */ 39 39 40 - 41 40 /* 42 41 * The Mass Storage Function acts as a USB Mass Storage device, 43 42 * appearing to the host as a disk drive or as a CD-ROM drive. In ··· 184 185 * <http://www.usb.org/developers/devclass_docs/usbmass-ufi10.pdf>. 185 186 */ 186 187 187 - 188 188 /* 189 189 * Driver Design 190 190 * ··· 273 275 /* #define VERBOSE_DEBUG */ 274 276 /* #define DUMP_MSGS */ 275 277 276 - 277 278 #include <linux/blkdev.h> 278 279 #include <linux/completion.h> 279 280 #include <linux/dcache.h> ··· 297 300 #include "gadget_chips.h" 298 301 299 302 300 - 301 303 /*------------------------------------------------------------------------*/ 302 304 303 305 #define FSG_DRIVER_DESC "Mass Storage Function" 304 306 #define FSG_DRIVER_VERSION "2009/09/11" 305 307 306 308 static const char fsg_string_interface[] = "Mass Storage"; 307 - 308 309 309 310 #define FSG_NO_INTR_EP 1 310 311 #define FSG_NO_DEVICE_STRINGS 1 ··· 319 324 320 325 /* FSF callback functions */ 321 326 struct fsg_operations { 322 - /* Callback function to call when thread exits. If no 327 + /* 328 + * Callback function to call when thread exits. If no 323 329 * callback is set or it returns value lower then zero MSF 324 330 * will force eject all LUNs it operates on (including those 325 331 * marked as non-removable or with prevent_medium_removal flag 326 - * set). */ 332 + * set). 333 + */ 327 334 int (*thread_exits)(struct fsg_common *common); 328 335 329 - /* Called prior to ejection. Negative return means error, 336 + /* 337 + * Called prior to ejection. Negative return means error, 330 338 * zero means to continue with ejection, positive means not to 331 - * eject. */ 339 + * eject. 340 + */ 332 341 int (*pre_eject)(struct fsg_common *common, 333 342 struct fsg_lun *lun, int num); 334 - /* Called after ejection. Negative return means error, zero 335 - * or positive is just a success. 
*/ 343 + /* 344 + * Called after ejection. Negative return means error, zero 345 + * or positive is just a success. 346 + */ 336 347 int (*post_eject)(struct fsg_common *common, 337 348 struct fsg_lun *lun, int num); 338 349 }; 339 - 340 350 341 351 /* Data shared by all the FSG instances. */ 342 352 struct fsg_common { ··· 398 398 /* Gadget's private data. */ 399 399 void *private_data; 400 400 401 - /* Vendor (8 chars), product (16 chars), release (4 402 - * hexadecimal digits) and NUL byte */ 401 + /* 402 + * Vendor (8 chars), product (16 chars), release (4 403 + * hexadecimal digits) and NUL byte 404 + */ 403 405 char inquiry_string[8 + 16 + 4 + 1]; 404 406 405 407 struct kref ref; 406 408 }; 407 - 408 409 409 410 struct fsg_config { 410 411 unsigned nluns; ··· 432 431 char can_stall; 433 432 }; 434 433 435 - 436 434 struct fsg_dev { 437 435 struct usb_function function; 438 436 struct usb_gadget *gadget; /* Copy of cdev->gadget */ ··· 449 449 struct usb_ep *bulk_out; 450 450 }; 451 451 452 - 453 452 static inline int __fsg_is_set(struct fsg_common *common, 454 453 const char *func, unsigned line) 455 454 { ··· 461 462 462 463 #define fsg_is_set(common) likely(__fsg_is_set(common, __func__, __LINE__)) 463 464 464 - 465 465 static inline struct fsg_dev *fsg_from_func(struct usb_function *f) 466 466 { 467 467 return container_of(f, struct fsg_dev, function); 468 468 } 469 - 470 469 471 470 typedef void (*fsg_routine_t)(struct fsg_dev *); 472 471 ··· 475 478 476 479 /* Make bulk-out requests be divisible by the maxpacket size */ 477 480 static void set_bulk_out_req_length(struct fsg_common *common, 478 - struct fsg_buffhd *bh, unsigned int length) 481 + struct fsg_buffhd *bh, unsigned int length) 479 482 { 480 483 unsigned int rem; 481 484 ··· 485 488 length += common->bulk_out_maxpacket - rem; 486 489 bh->outreq->length = length; 487 490 } 491 + 488 492 489 493 /*-------------------------------------------------------------------------*/ 490 494 ··· 517 519 
wake_up_process(common->thread_task); 518 520 } 519 521 520 - 521 522 static void raise_exception(struct fsg_common *common, enum fsg_state new_state) 522 523 { 523 524 unsigned long flags; 524 525 525 - /* Do nothing if a higher-priority exception is already in progress. 526 + /* 527 + * Do nothing if a higher-priority exception is already in progress. 526 528 * If a lower-or-equal priority exception is in progress, preempt it 527 - * and notify the main thread by sending it a signal. */ 529 + * and notify the main thread by sending it a signal. 530 + */ 528 531 spin_lock_irqsave(&common->lock, flags); 529 532 if (common->state <= new_state) { 530 533 common->exception_req_tag = common->ep0_req_tag; ··· 554 555 return rc; 555 556 } 556 557 558 + 557 559 /*-------------------------------------------------------------------------*/ 558 560 559 - /* Bulk and interrupt endpoint completion handlers. 560 - * These always run in_irq. */ 561 + /* Completion handlers. These always run in_irq. */ 561 562 562 563 static void bulk_in_complete(struct usb_ep *ep, struct usb_request *req) 563 564 { ··· 566 567 567 568 if (req->status || req->actual != req->length) 568 569 DBG(common, "%s --> %d, %u/%u\n", __func__, 569 - req->status, req->actual, req->length); 570 + req->status, req->actual, req->length); 570 571 if (req->status == -ECONNRESET) /* Request was cancelled */ 571 572 usb_ep_fifo_flush(ep); 572 573 ··· 587 588 dump_msg(common, "bulk-out", req->buf, req->actual); 588 589 if (req->status || req->actual != bh->bulk_out_intended_length) 589 590 DBG(common, "%s --> %d, %u/%u\n", __func__, 590 - req->status, req->actual, 591 - bh->bulk_out_intended_length); 591 + req->status, req->actual, bh->bulk_out_intended_length); 592 592 if (req->status == -ECONNRESET) /* Request was cancelled */ 593 593 usb_ep_fifo_flush(ep); 594 594 ··· 600 602 spin_unlock(&common->lock); 601 603 } 602 604 603 - 604 - /*-------------------------------------------------------------------------*/ 605 
- 606 - /* Ep0 class-specific handlers. These always run in_irq. */ 607 - 608 605 static int fsg_setup(struct usb_function *f, 609 - const struct usb_ctrlrequest *ctrl) 606 + const struct usb_ctrlrequest *ctrl) 610 607 { 611 608 struct fsg_dev *fsg = fsg_from_func(f); 612 609 struct usb_request *req = fsg->common->ep0req; ··· 621 628 if (w_index != fsg->interface_number || w_value != 0) 622 629 return -EDOM; 623 630 624 - /* Raise an exception to stop the current operation 625 - * and reinitialize our state. */ 631 + /* 632 + * Raise an exception to stop the current operation 633 + * and reinitialize our state. 634 + */ 626 635 DBG(fsg, "bulk reset request\n"); 627 636 raise_exception(fsg->common, FSG_STATE_RESET); 628 637 return DELAYED_STATUS; ··· 636 641 if (w_index != fsg->interface_number || w_value != 0) 637 642 return -EDOM; 638 643 VDBG(fsg, "get max LUN\n"); 639 - *(u8 *) req->buf = fsg->common->nluns - 1; 644 + *(u8 *)req->buf = fsg->common->nluns - 1; 640 645 641 646 /* Respond with data/status */ 642 647 req->length = min((u16)1, w_length); ··· 644 649 } 645 650 646 651 VDBG(fsg, 647 - "unknown class-specific control req " 648 - "%02x.%02x v%04x i%04x l%u\n", 652 + "unknown class-specific control req %02x.%02x v%04x i%04x l%u\n", 649 653 ctrl->bRequestType, ctrl->bRequest, 650 654 le16_to_cpu(ctrl->wValue), w_index, w_length); 651 655 return -EOPNOTSUPP; ··· 655 661 656 662 /* All the following routines run in process context */ 657 663 658 - 659 664 /* Use this for bulk or interrupt transfers, not ep0 */ 660 665 static void start_transfer(struct fsg_dev *fsg, struct usb_ep *ep, 661 - struct usb_request *req, int *pbusy, 662 - enum fsg_buffer_state *state) 666 + struct usb_request *req, int *pbusy, 667 + enum fsg_buffer_state *state) 663 668 { 664 669 int rc; 665 670 ··· 676 683 677 684 /* We can't do much more than wait for a reset */ 678 685 679 - /* Note: currently the net2280 driver fails zero-length 680 - * submissions if DMA is enabled. 
*/ 681 - if (rc != -ESHUTDOWN && !(rc == -EOPNOTSUPP && 682 - req->length == 0)) 686 + /* 687 + * Note: currently the net2280 driver fails zero-length 688 + * submissions if DMA is enabled. 689 + */ 690 + if (rc != -ESHUTDOWN && 691 + !(rc == -EOPNOTSUPP && req->length == 0)) 683 692 WARNING(fsg, "error in submission: %s --> %d\n", 684 - ep->name, rc); 693 + ep->name, rc); 685 694 } 686 695 } 687 696 688 - #define START_TRANSFER_OR(common, ep_name, req, pbusy, state) \ 689 - if (fsg_is_set(common)) \ 690 - start_transfer((common)->fsg, (common)->fsg->ep_name, \ 691 - req, pbusy, state); \ 692 - else 697 + static bool start_in_transfer(struct fsg_common *common, struct fsg_buffhd *bh) 698 + { 699 + if (!fsg_is_set(common)) 700 + return false; 701 + start_transfer(common->fsg, common->fsg->bulk_in, 702 + bh->inreq, &bh->inreq_busy, &bh->state); 703 + return true; 704 + } 693 705 694 - #define START_TRANSFER(common, ep_name, req, pbusy, state) \ 695 - START_TRANSFER_OR(common, ep_name, req, pbusy, state) (void)0 696 - 697 - 706 + static bool start_out_transfer(struct fsg_common *common, struct fsg_buffhd *bh) 707 + { 708 + if (!fsg_is_set(common)) 709 + return false; 710 + start_transfer(common->fsg, common->fsg->bulk_out, 711 + bh->outreq, &bh->outreq_busy, &bh->state); 712 + return true; 713 + } 698 714 699 715 static int sleep_thread(struct fsg_common *common) 700 716 { ··· 741 739 unsigned int partial_page; 742 740 ssize_t nread; 743 741 744 - /* Get the starting Logical Block Address and check that it's 745 - * not too big */ 742 + /* 743 + * Get the starting Logical Block Address and check that it's 744 + * not too big. 
745 + */ 746 746 if (common->cmnd[0] == READ_6) 747 747 lba = get_unaligned_be24(&common->cmnd[1]); 748 748 else { 749 749 lba = get_unaligned_be32(&common->cmnd[2]); 750 750 751 - /* We allow DPO (Disable Page Out = don't save data in the 751 + /* 752 + * We allow DPO (Disable Page Out = don't save data in the 752 753 * cache) and FUA (Force Unit Access = don't read from the 753 - * cache), but we don't implement them. */ 754 + * cache), but we don't implement them. 755 + */ 754 756 if ((common->cmnd[1] & ~0x18) != 0) { 755 757 curlun->sense_data = SS_INVALID_FIELD_IN_CDB; 756 758 return -EINVAL; ··· 772 766 return -EIO; /* No default reply */ 773 767 774 768 for (;;) { 775 - 776 - /* Figure out how much we need to read: 769 + /* 770 + * Figure out how much we need to read: 777 771 * Try to read the remaining amount. 778 772 * But don't read more than the buffer size. 779 773 * And don't try to read past the end of the file. 780 774 * Finally, if we're not at a page boundary, don't read past 781 775 * the next page. 782 776 * If this means reading 0 then we were asked to read past 783 - * the end of file. */ 777 + * the end of file. 778 + */ 784 779 amount = min(amount_left, FSG_BUFLEN); 785 - amount = min((loff_t) amount, 786 - curlun->file_length - file_offset); 780 + amount = min((loff_t)amount, 781 + curlun->file_length - file_offset); 787 782 partial_page = file_offset & (PAGE_CACHE_SIZE - 1); 788 783 if (partial_page > 0) 789 - amount = min(amount, (unsigned int) PAGE_CACHE_SIZE - 790 - partial_page); 784 + amount = min(amount, (unsigned int)PAGE_CACHE_SIZE - 785 + partial_page); 791 786 792 787 /* Wait for the next buffer to become available */ 793 788 bh = common->next_buffhd_to_fill; ··· 798 791 return rc; 799 792 } 800 793 801 - /* If we were asked to read past the end of file, 802 - * end with an empty buffer. */ 794 + /* 795 + * If we were asked to read past the end of file, 796 + * end with an empty buffer. 
797 + */ 803 798 if (amount == 0) { 804 799 curlun->sense_data = 805 800 SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE; ··· 815 806 /* Perform the read */ 816 807 file_offset_tmp = file_offset; 817 808 nread = vfs_read(curlun->filp, 818 - (char __user *) bh->buf, 819 - amount, &file_offset_tmp); 809 + (char __user *)bh->buf, 810 + amount, &file_offset_tmp); 820 811 VLDBG(curlun, "file read %u @ %llu -> %d\n", amount, 821 - (unsigned long long) file_offset, 822 - (int) nread); 812 + (unsigned long long)file_offset, (int)nread); 823 813 if (signal_pending(current)) 824 814 return -EINTR; 825 815 826 816 if (nread < 0) { 827 - LDBG(curlun, "error in file read: %d\n", 828 - (int) nread); 817 + LDBG(curlun, "error in file read: %d\n", (int)nread); 829 818 nread = 0; 830 819 } else if (nread < amount) { 831 820 LDBG(curlun, "partial file read: %d/%u\n", 832 - (int) nread, amount); 821 + (int)nread, amount); 833 822 nread -= (nread & 511); /* Round down to a block */ 834 823 } 835 824 file_offset += nread; ··· 849 842 850 843 /* Send this buffer and go read some more */ 851 844 bh->inreq->zero = 0; 852 - START_TRANSFER_OR(common, bulk_in, bh->inreq, 853 - &bh->inreq_busy, &bh->state) 854 - /* Don't know what to do if 855 - * common->fsg is NULL */ 845 + if (!start_in_transfer(common, bh)) 846 + /* Don't know what to do if common->fsg is NULL */ 856 847 return -EIO; 857 848 common->next_buffhd_to_fill = bh->next; 858 849 } ··· 882 877 curlun->filp->f_flags &= ~O_SYNC; /* Default is not to wait */ 883 878 spin_unlock(&curlun->filp->f_lock); 884 879 885 - /* Get the starting Logical Block Address and check that it's 886 - * not too big */ 880 + /* 881 + * Get the starting Logical Block Address and check that it's 882 + * not too big 883 + */ 887 884 if (common->cmnd[0] == WRITE_6) 888 885 lba = get_unaligned_be24(&common->cmnd[1]); 889 886 else { 890 887 lba = get_unaligned_be32(&common->cmnd[2]); 891 888 892 - /* We allow DPO (Disable Page Out = don't save data in the 889 + /* 890 
+ * We allow DPO (Disable Page Out = don't save data in the 893 891 * cache) and FUA (Force Unit Access = write directly to the 894 892 * medium). We don't implement DPO; we implement FUA by 895 - * performing synchronous output. */ 893 + * performing synchronous output. 894 + */ 896 895 if (common->cmnd[1] & ~0x18) { 897 896 curlun->sense_data = SS_INVALID_FIELD_IN_CDB; 898 897 return -EINVAL; ··· 924 915 bh = common->next_buffhd_to_fill; 925 916 if (bh->state == BUF_STATE_EMPTY && get_some_more) { 926 917 927 - /* Figure out how much we want to get: 918 + /* 919 + * Figure out how much we want to get: 928 920 * Try to get the remaining amount. 929 921 * But don't get more than the buffer size. 930 922 * And don't try to go past the end of the file. ··· 933 923 * don't go past the next page. 934 924 * If this means getting 0, then we were asked 935 925 * to write past the end of file. 936 - * Finally, round down to a block boundary. */ 926 + * Finally, round down to a block boundary. 927 + */ 937 928 amount = min(amount_left_to_req, FSG_BUFLEN); 938 - amount = min((loff_t) amount, curlun->file_length - 939 - usb_offset); 929 + amount = min((loff_t)amount, 930 + curlun->file_length - usb_offset); 940 931 partial_page = usb_offset & (PAGE_CACHE_SIZE - 1); 941 932 if (partial_page > 0) 942 933 amount = min(amount, 943 - (unsigned int) PAGE_CACHE_SIZE - partial_page); 934 + (unsigned int)PAGE_CACHE_SIZE - partial_page); 944 935 945 936 if (amount == 0) { 946 937 get_some_more = 0; ··· 951 940 curlun->info_valid = 1; 952 941 continue; 953 942 } 954 - amount -= (amount & 511); 943 + amount -= amount & 511; 955 944 if (amount == 0) { 956 945 957 - /* Why were we were asked to transfer a 958 - * partial block? */ 946 + /* 947 + * Why were we were asked to transfer a 948 + * partial block? 
949 + */ 959 950 get_some_more = 0; 960 951 continue; 961 952 } ··· 969 956 if (amount_left_to_req == 0) 970 957 get_some_more = 0; 971 958 972 - /* amount is always divisible by 512, hence by 973 - * the bulk-out maxpacket size */ 959 + /* 960 + * amount is always divisible by 512, hence by 961 + * the bulk-out maxpacket size 962 + */ 974 963 bh->outreq->length = amount; 975 964 bh->bulk_out_intended_length = amount; 976 965 bh->outreq->short_not_ok = 1; 977 - START_TRANSFER_OR(common, bulk_out, bh->outreq, 978 - &bh->outreq_busy, &bh->state) 979 - /* Don't know what to do if 980 - * common->fsg is NULL */ 966 + if (!start_out_transfer(common, bh)) 967 + /* Dunno what to do if common->fsg is NULL */ 981 968 return -EIO; 982 969 common->next_buffhd_to_fill = bh->next; 983 970 continue; ··· 1003 990 amount = bh->outreq->actual; 1004 991 if (curlun->file_length - file_offset < amount) { 1005 992 LERROR(curlun, 1006 - "write %u @ %llu beyond end %llu\n", 1007 - amount, (unsigned long long) file_offset, 1008 - (unsigned long long) curlun->file_length); 993 + "write %u @ %llu beyond end %llu\n", 994 + amount, (unsigned long long)file_offset, 995 + (unsigned long long)curlun->file_length); 1009 996 amount = curlun->file_length - file_offset; 1010 997 } 1011 998 1012 999 /* Perform the write */ 1013 1000 file_offset_tmp = file_offset; 1014 1001 nwritten = vfs_write(curlun->filp, 1015 - (char __user *) bh->buf, 1016 - amount, &file_offset_tmp); 1002 + (char __user *)bh->buf, 1003 + amount, &file_offset_tmp); 1017 1004 VLDBG(curlun, "file write %u @ %llu -> %d\n", amount, 1018 - (unsigned long long) file_offset, 1019 - (int) nwritten); 1005 + (unsigned long long)file_offset, (int)nwritten); 1020 1006 if (signal_pending(current)) 1021 1007 return -EINTR; /* Interrupted! 
*/ 1022 1008 1023 1009 if (nwritten < 0) { 1024 1010 LDBG(curlun, "error in file write: %d\n", 1025 - (int) nwritten); 1011 + (int)nwritten); 1026 1012 nwritten = 0; 1027 1013 } else if (nwritten < amount) { 1028 1014 LDBG(curlun, "partial file write: %d/%u\n", 1029 - (int) nwritten, amount); 1015 + (int)nwritten, amount); 1030 1016 nwritten -= (nwritten & 511); 1031 1017 /* Round down to a block */ 1032 1018 } ··· 1098 1086 unsigned int amount; 1099 1087 ssize_t nread; 1100 1088 1101 - /* Get the starting Logical Block Address and check that it's 1102 - * not too big */ 1089 + /* 1090 + * Get the starting Logical Block Address and check that it's 1091 + * not too big. 1092 + */ 1103 1093 lba = get_unaligned_be32(&common->cmnd[2]); 1104 1094 if (lba >= curlun->num_sectors) { 1105 1095 curlun->sense_data = SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE; 1106 1096 return -EINVAL; 1107 1097 } 1108 1098 1109 - /* We allow DPO (Disable Page Out = don't save data in the 1110 - * cache) but we don't implement it. */ 1099 + /* 1100 + * We allow DPO (Disable Page Out = don't save data in the 1101 + * cache) but we don't implement it. 1102 + */ 1111 1103 if (common->cmnd[1] & ~0x10) { 1112 1104 curlun->sense_data = SS_INVALID_FIELD_IN_CDB; 1113 1105 return -EINVAL; ··· 1136 1120 1137 1121 /* Just try to read the requested blocks */ 1138 1122 while (amount_left > 0) { 1139 - 1140 - /* Figure out how much we need to read: 1123 + /* 1124 + * Figure out how much we need to read: 1141 1125 * Try to read the remaining amount, but not more than 1142 1126 * the buffer size. 1143 1127 * And don't try to read past the end of the file. 1144 1128 * If this means reading 0 then we were asked to read 1145 - * past the end of file. */ 1129 + * past the end of file. 
1130 + */ 1146 1131 amount = min(amount_left, FSG_BUFLEN); 1147 - amount = min((loff_t) amount, 1148 - curlun->file_length - file_offset); 1132 + amount = min((loff_t)amount, 1133 + curlun->file_length - file_offset); 1149 1134 if (amount == 0) { 1150 1135 curlun->sense_data = 1151 1136 SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE; ··· 1167 1150 return -EINTR; 1168 1151 1169 1152 if (nread < 0) { 1170 - LDBG(curlun, "error in file verify: %d\n", 1171 - (int) nread); 1153 + LDBG(curlun, "error in file verify: %d\n", (int)nread); 1172 1154 nread = 0; 1173 1155 } else if (nread < amount) { 1174 1156 LDBG(curlun, "partial file verify: %d/%u\n", 1175 - (int) nread, amount); 1176 - nread -= (nread & 511); /* Round down to a sector */ 1157 + (int)nread, amount); 1158 + nread -= nread & 511; /* Round down to a sector */ 1177 1159 } 1178 1160 if (nread == 0) { 1179 1161 curlun->sense_data = SS_UNRECOVERED_READ_ERROR; ··· 1213 1197 memcpy(buf + 8, common->inquiry_string, sizeof common->inquiry_string); 1214 1198 return 36; 1215 1199 } 1216 - 1217 1200 1218 1201 static int do_request_sense(struct fsg_common *common, struct fsg_buffhd *bh) 1219 1202 { ··· 1267 1252 return 18; 1268 1253 } 1269 1254 1270 - 1271 1255 static int do_read_capacity(struct fsg_common *common, struct fsg_buffhd *bh) 1272 1256 { 1273 1257 struct fsg_lun *curlun = common->curlun; 1274 1258 u32 lba = get_unaligned_be32(&common->cmnd[2]); 1275 1259 int pmi = common->cmnd[8]; 1276 - u8 *buf = (u8 *) bh->buf; 1260 + u8 *buf = (u8 *)bh->buf; 1277 1261 1278 1262 /* Check the PMI and LBA fields */ 1279 1263 if (pmi > 1 || (pmi == 0 && lba != 0)) { ··· 1286 1272 return 8; 1287 1273 } 1288 1274 1289 - 1290 1275 static int do_read_header(struct fsg_common *common, struct fsg_buffhd *bh) 1291 1276 { 1292 1277 struct fsg_lun *curlun = common->curlun; 1293 1278 int msf = common->cmnd[1] & 0x02; 1294 1279 u32 lba = get_unaligned_be32(&common->cmnd[2]); 1295 - u8 *buf = (u8 *) bh->buf; 1280 + u8 *buf = (u8 *)bh->buf; 1296 
1281 1297 1282 if (common->cmnd[1] & ~0x02) { /* Mask away MSF */ 1298 1283 curlun->sense_data = SS_INVALID_FIELD_IN_CDB; ··· 1308 1295 return 8; 1309 1296 } 1310 1297 1311 - 1312 1298 static int do_read_toc(struct fsg_common *common, struct fsg_buffhd *bh) 1313 1299 { 1314 1300 struct fsg_lun *curlun = common->curlun; 1315 1301 int msf = common->cmnd[1] & 0x02; 1316 1302 int start_track = common->cmnd[6]; 1317 - u8 *buf = (u8 *) bh->buf; 1303 + u8 *buf = (u8 *)bh->buf; 1318 1304 1319 1305 if ((common->cmnd[1] & ~0x02) != 0 || /* Mask away MSF */ 1320 1306 start_track > 1) { ··· 1334 1322 store_cdrom_address(&buf[16], msf, curlun->num_sectors); 1335 1323 return 20; 1336 1324 } 1337 - 1338 1325 1339 1326 static int do_mode_sense(struct fsg_common *common, struct fsg_buffhd *bh) 1340 1327 { ··· 1359 1348 changeable_values = (pc == 1); 1360 1349 all_pages = (page_code == 0x3f); 1361 1350 1362 - /* Write the mode parameter header. Fixed values are: default 1351 + /* 1352 + * Write the mode parameter header. Fixed values are: default 1363 1353 * medium type, no cache control (DPOFUA), and no block descriptors. 1364 1354 * The only variable value is the WriteProtect bit. We will fill in 1365 - * the mode data length later. */ 1355 + * the mode data length later. 1356 + */ 1366 1357 memset(buf, 0, 8); 1367 1358 if (mscmnd == MODE_SENSE) { 1368 1359 buf[2] = (curlun->ro ? 0x80 : 0x00); /* WP, DPOFUA */ ··· 1378 1365 1379 1366 /* No block descriptors */ 1380 1367 1381 - /* The mode pages, in numerical order. The only page we support 1382 - * is the Caching page. */ 1368 + /* 1369 + * The mode pages, in numerical order. The only page we support 1370 + * is the Caching page. 1371 + */ 1383 1372 if (page_code == 0x08 || all_pages) { 1384 1373 valid_page = 1; 1385 1374 buf[0] = 0x08; /* Page code */ ··· 1403 1388 buf += 12; 1404 1389 } 1405 1390 1406 - /* Check that a valid page was requested and the mode data length 1407 - * isn't too long. 
*/ 1391 + /* 1392 + * Check that a valid page was requested and the mode data length 1393 + * isn't too long. 1394 + */ 1408 1395 len = buf - buf0; 1409 1396 if (!valid_page || len > limit) { 1410 1397 curlun->sense_data = SS_INVALID_FIELD_IN_CDB; ··· 1420 1403 put_unaligned_be16(len - 2, buf0); 1421 1404 return len; 1422 1405 } 1423 - 1424 1406 1425 1407 static int do_start_stop(struct fsg_common *common) 1426 1408 { ··· 1440 1424 loej = common->cmnd[4] & 0x02; 1441 1425 start = common->cmnd[4] & 0x01; 1442 1426 1443 - /* Our emulation doesn't support mounting; the medium is 1444 - * available for use as soon as it is loaded. */ 1427 + /* 1428 + * Our emulation doesn't support mounting; the medium is 1429 + * available for use as soon as it is loaded. 1430 + */ 1445 1431 if (start) { 1446 1432 if (!fsg_lun_is_open(curlun)) { 1447 1433 curlun->sense_data = SS_MEDIUM_NOT_PRESENT; ··· 1484 1466 : 0; 1485 1467 } 1486 1468 1487 - 1488 1469 static int do_prevent_allow(struct fsg_common *common) 1489 1470 { 1490 1471 struct fsg_lun *curlun = common->curlun; ··· 1508 1491 return 0; 1509 1492 } 1510 1493 1511 - 1512 1494 static int do_read_format_capacities(struct fsg_common *common, 1513 1495 struct fsg_buffhd *bh) 1514 1496 { ··· 1524 1508 buf[4] = 0x02; /* Current capacity */ 1525 1509 return 12; 1526 1510 } 1527 - 1528 1511 1529 1512 static int do_mode_select(struct fsg_common *common, struct fsg_buffhd *bh) 1530 1513 { ··· 1606 1591 bh->inreq->length = nsend; 1607 1592 bh->inreq->zero = 0; 1608 1593 start_transfer(fsg, fsg->bulk_in, bh->inreq, 1609 - &bh->inreq_busy, &bh->state); 1594 + &bh->inreq_busy, &bh->state); 1610 1595 bh = fsg->common->next_buffhd_to_fill = bh->next; 1611 1596 fsg->common->usb_amount_left -= nsend; 1612 1597 nkeep = 0; ··· 1632 1617 1633 1618 /* A short packet or an error ends everything */ 1634 1619 if (bh->outreq->actual != bh->outreq->length || 1635 - bh->outreq->status != 0) { 1620 + bh->outreq->status != 0) { 1636 1621 
raise_exception(common, 1637 1622 FSG_STATE_ABORT_BULK_OUT); 1638 1623 return -EINTR; ··· 1646 1631 && common->usb_amount_left > 0) { 1647 1632 amount = min(common->usb_amount_left, FSG_BUFLEN); 1648 1633 1649 - /* amount is always divisible by 512, hence by 1650 - * the bulk-out maxpacket size */ 1634 + /* 1635 + * amount is always divisible by 512, hence by 1636 + * the bulk-out maxpacket size. 1637 + */ 1651 1638 bh->outreq->length = amount; 1652 1639 bh->bulk_out_intended_length = amount; 1653 1640 bh->outreq->short_not_ok = 1; 1654 - START_TRANSFER_OR(common, bulk_out, bh->outreq, 1655 - &bh->outreq_busy, &bh->state) 1656 - /* Don't know what to do if 1657 - * common->fsg is NULL */ 1641 + if (!start_out_transfer(common, bh)) 1642 + /* Dunno what to do if common->fsg is NULL */ 1658 1643 return -EIO; 1659 1644 common->next_buffhd_to_fill = bh->next; 1660 1645 common->usb_amount_left -= amount; ··· 1669 1654 return 0; 1670 1655 } 1671 1656 1672 - 1673 1657 static int finish_reply(struct fsg_common *common) 1674 1658 { 1675 1659 struct fsg_buffhd *bh = common->next_buffhd_to_fill; ··· 1678 1664 case DATA_DIR_NONE: 1679 1665 break; /* Nothing to send */ 1680 1666 1681 - /* If we don't know whether the host wants to read or write, 1667 + /* 1668 + * If we don't know whether the host wants to read or write, 1682 1669 * this must be CB or CBI with an unknown command. We mustn't 1683 1670 * try to send or receive any data. So stall both bulk pipes 1684 - * if we can and wait for a reset. */ 1671 + * if we can and wait for a reset. 
1672 + */ 1685 1673 case DATA_DIR_UNKNOWN: 1686 1674 if (!common->can_stall) { 1687 1675 /* Nothing */ ··· 1704 1688 /* If there's no residue, simply send the last buffer */ 1705 1689 } else if (common->residue == 0) { 1706 1690 bh->inreq->zero = 0; 1707 - START_TRANSFER_OR(common, bulk_in, bh->inreq, 1708 - &bh->inreq_busy, &bh->state) 1691 + if (!start_in_transfer(common, bh)) 1709 1692 return -EIO; 1710 1693 common->next_buffhd_to_fill = bh->next; 1711 1694 1712 - /* For Bulk-only, if we're allowed to stall then send the 1695 + /* 1696 + * For Bulk-only, if we're allowed to stall then send the 1713 1697 * short packet and halt the bulk-in endpoint. If we can't 1714 - * stall, pad out the remaining data with 0's. */ 1698 + * stall, pad out the remaining data with 0's. 1699 + */ 1715 1700 } else if (common->can_stall) { 1716 1701 bh->inreq->zero = 1; 1717 - START_TRANSFER_OR(common, bulk_in, bh->inreq, 1718 - &bh->inreq_busy, &bh->state) 1702 + if (!start_in_transfer(common, bh)) 1719 1703 /* Don't know what to do if 1720 1704 * common->fsg is NULL */ 1721 1705 rc = -EIO; ··· 1730 1714 } 1731 1715 break; 1732 1716 1733 - /* We have processed all we want from the data the host has sent. 1734 - * There may still be outstanding bulk-out requests. */ 1717 + /* 1718 + * We have processed all we want from the data the host has sent. 1719 + * There may still be outstanding bulk-out requests. 1720 + */ 1735 1721 case DATA_DIR_FROM_HOST: 1736 1722 if (common->residue == 0) { 1737 1723 /* Nothing to receive */ ··· 1743 1725 raise_exception(common, FSG_STATE_ABORT_BULK_OUT); 1744 1726 rc = -EINTR; 1745 1727 1746 - /* We haven't processed all the incoming data. Even though 1728 + /* 1729 + * We haven't processed all the incoming data. Even though 1747 1730 * we may be allowed to stall, doing so would cause a race. 1748 1731 * The controller may already have ACK'ed all the remaining 1749 1732 * bulk-out packets, in which case the host wouldn't see a 1750 1733 * STALL. 
Not realizing the endpoint was halted, it wouldn't 1751 - * clear the halt -- leading to problems later on. */ 1734 + * clear the halt -- leading to problems later on. 1735 + */ 1752 1736 #if 0 1753 1737 } else if (common->can_stall) { 1754 1738 if (fsg_is_set(common)) ··· 1760 1740 rc = -EINTR; 1761 1741 #endif 1762 1742 1763 - /* We can't stall. Read in the excess data and throw it 1764 - * all away. */ 1743 + /* 1744 + * We can't stall. Read in the excess data and throw it 1745 + * all away. 1746 + */ 1765 1747 } else { 1766 1748 rc = throw_away_data(common); 1767 1749 } ··· 1771 1749 } 1772 1750 return rc; 1773 1751 } 1774 - 1775 1752 1776 1753 static int send_status(struct fsg_common *common) 1777 1754 { ··· 1819 1798 1820 1799 bh->inreq->length = USB_BULK_CS_WRAP_LEN; 1821 1800 bh->inreq->zero = 0; 1822 - START_TRANSFER_OR(common, bulk_in, bh->inreq, 1823 - &bh->inreq_busy, &bh->state) 1801 + if (!start_in_transfer(common, bh)) 1824 1802 /* Don't know what to do if common->fsg is NULL */ 1825 1803 return -EIO; 1826 1804 ··· 1830 1810 1831 1811 /*-------------------------------------------------------------------------*/ 1832 1812 1833 - /* Check whether the command is properly formed and whether its data size 1834 - * and direction agree with the values we already have. */ 1813 + /* 1814 + * Check whether the command is properly formed and whether its data size 1815 + * and direction agree with the values we already have. 
1816 + */ 1835 1817 static int check_command(struct fsg_common *common, int cmnd_size, 1836 - enum data_direction data_dir, unsigned int mask, 1837 - int needs_medium, const char *name) 1818 + enum data_direction data_dir, unsigned int mask, 1819 + int needs_medium, const char *name) 1838 1820 { 1839 1821 int i; 1840 1822 int lun = common->cmnd[1] >> 5; ··· 1847 1825 hdlen[0] = 0; 1848 1826 if (common->data_dir != DATA_DIR_UNKNOWN) 1849 1827 sprintf(hdlen, ", H%c=%u", dirletter[(int) common->data_dir], 1850 - common->data_size); 1828 + common->data_size); 1851 1829 VDBG(common, "SCSI command: %s; Dc=%d, D%c=%u; Hc=%d%s\n", 1852 1830 name, cmnd_size, dirletter[(int) data_dir], 1853 1831 common->data_size_from_cmnd, common->cmnd_size, hdlen); 1854 1832 1855 - /* We can't reply at all until we know the correct data direction 1856 - * and size. */ 1833 + /* 1834 + * We can't reply at all until we know the correct data direction 1835 + * and size. 1836 + */ 1857 1837 if (common->data_size_from_cmnd == 0) 1858 1838 data_dir = DATA_DIR_NONE; 1859 1839 if (common->data_size < common->data_size_from_cmnd) { 1860 - /* Host data size < Device data size is a phase error. 1840 + /* 1841 + * Host data size < Device data size is a phase error. 1861 1842 * Carry out the command, but only transfer as much as 1862 - * we are allowed. */ 1843 + * we are allowed. 
1844 + */ 1863 1845 common->data_size_from_cmnd = common->data_size; 1864 1846 common->phase_error = 1; 1865 1847 } ··· 1871 1845 common->usb_amount_left = common->data_size; 1872 1846 1873 1847 /* Conflicting data directions is a phase error */ 1874 - if (common->data_dir != data_dir 1875 - && common->data_size_from_cmnd > 0) { 1848 + if (common->data_dir != data_dir && common->data_size_from_cmnd > 0) { 1876 1849 common->phase_error = 1; 1877 1850 return -EINVAL; 1878 1851 } ··· 1879 1854 /* Verify the length of the command itself */ 1880 1855 if (cmnd_size != common->cmnd_size) { 1881 1856 1882 - /* Special case workaround: There are plenty of buggy SCSI 1857 + /* 1858 + * Special case workaround: There are plenty of buggy SCSI 1883 1859 * implementations. Many have issues with cbw->Length 1884 1860 * field passing a wrong command size. For those cases we 1885 1861 * always try to work around the problem by using the length ··· 1922 1896 curlun = NULL; 1923 1897 common->bad_lun_okay = 0; 1924 1898 1925 - /* INQUIRY and REQUEST SENSE commands are explicitly allowed 1926 - * to use unsupported LUNs; all others may not. */ 1899 + /* 1900 + * INQUIRY and REQUEST SENSE commands are explicitly allowed 1901 + * to use unsupported LUNs; all others may not. 1902 + */ 1927 1903 if (common->cmnd[0] != INQUIRY && 1928 1904 common->cmnd[0] != REQUEST_SENSE) { 1929 1905 DBG(common, "unsupported LUN %d\n", common->lun); ··· 1933 1905 } 1934 1906 } 1935 1907 1936 - /* If a unit attention condition exists, only INQUIRY and 1937 - * REQUEST SENSE commands are allowed; anything else must fail. */ 1908 + /* 1909 + * If a unit attention condition exists, only INQUIRY and 1910 + * REQUEST SENSE commands are allowed; anything else must fail. 
1911 + */ 1938 1912 if (curlun && curlun->unit_attention_data != SS_NO_SENSE && 1939 - common->cmnd[0] != INQUIRY && 1940 - common->cmnd[0] != REQUEST_SENSE) { 1913 + common->cmnd[0] != INQUIRY && 1914 + common->cmnd[0] != REQUEST_SENSE) { 1941 1915 curlun->sense_data = curlun->unit_attention_data; 1942 1916 curlun->unit_attention_data = SS_NO_SENSE; 1943 1917 return -EINVAL; ··· 1964 1934 1965 1935 return 0; 1966 1936 } 1967 - 1968 1937 1969 1938 static int do_scsi_command(struct fsg_common *common) 1970 1939 { ··· 2152 2123 "TEST UNIT READY"); 2153 2124 break; 2154 2125 2155 - /* Although optional, this command is used by MS-Windows. We 2156 - * support a minimal version: BytChk must be 0. */ 2126 + /* 2127 + * Although optional, this command is used by MS-Windows. We 2128 + * support a minimal version: BytChk must be 0. 2129 + */ 2157 2130 case VERIFY: 2158 2131 common->data_size_from_cmnd = 0; 2159 2132 reply = check_command(common, 10, DATA_DIR_NONE, ··· 2195 2164 reply = do_write(common); 2196 2165 break; 2197 2166 2198 - /* Some mandatory commands that we recognize but don't implement. 2167 + /* 2168 + * Some mandatory commands that we recognize but don't implement. 2199 2169 * They don't mean much in this setting. It's left as an exercise 2200 2170 * for anyone interested to implement RESERVE and RELEASE in terms 2201 - * of Posix locks. */ 2171 + * of Posix locks. 
2172 + */ 2202 2173 case FORMAT_UNIT: 2203 2174 case RELEASE: 2204 2175 case RESERVE: ··· 2228 2195 if (reply == -EINVAL) 2229 2196 reply = 0; /* Error reply length */ 2230 2197 if (reply >= 0 && common->data_dir == DATA_DIR_TO_HOST) { 2231 - reply = min((u32) reply, common->data_size_from_cmnd); 2198 + reply = min((u32)reply, common->data_size_from_cmnd); 2232 2199 bh->inreq->length = reply; 2233 2200 bh->state = BUF_STATE_FULL; 2234 2201 common->residue -= reply; ··· 2258 2225 req->actual, 2259 2226 le32_to_cpu(cbw->Signature)); 2260 2227 2261 - /* The Bulk-only spec says we MUST stall the IN endpoint 2228 + /* 2229 + * The Bulk-only spec says we MUST stall the IN endpoint 2262 2230 * (6.6.1), so it's unavoidable. It also says we must 2263 2231 * retain this state until the next reset, but there's 2264 2232 * no way to tell the controller driver it should ignore ··· 2267 2233 * 2268 2234 * We aren't required to halt the OUT endpoint; instead 2269 2235 * we can simply accept and discard any data received 2270 - * until the next reset. */ 2236 + * until the next reset. 2237 + */ 2271 2238 wedge_bulk_in_endpoint(fsg); 2272 2239 set_bit(IGNORE_BULK_OUT, &fsg->atomic_bitflags); 2273 2240 return -EINVAL; ··· 2281 2246 "cmdlen %u\n", 2282 2247 cbw->Lun, cbw->Flags, cbw->Length); 2283 2248 2284 - /* We can do anything we want here, so let's stall the 2285 - * bulk pipes if we are allowed to. */ 2249 + /* 2250 + * We can do anything we want here, so let's stall the 2251 + * bulk pipes if we are allowed to. 
2252 + */ 2286 2253 if (common->can_stall) { 2287 2254 fsg_set_halt(fsg, fsg->bulk_out); 2288 2255 halt_bulk_in_endpoint(fsg); ··· 2307 2270 return 0; 2308 2271 } 2309 2272 2310 - 2311 2273 static int get_next_command(struct fsg_common *common) 2312 2274 { 2313 2275 struct fsg_buffhd *bh; ··· 2323 2287 /* Queue a request to read a Bulk-only CBW */ 2324 2288 set_bulk_out_req_length(common, bh, USB_BULK_CB_WRAP_LEN); 2325 2289 bh->outreq->short_not_ok = 1; 2326 - START_TRANSFER_OR(common, bulk_out, bh->outreq, 2327 - &bh->outreq_busy, &bh->state) 2290 + if (!start_out_transfer(common, bh)) 2328 2291 /* Don't know what to do if common->fsg is NULL */ 2329 2292 return -EIO; 2330 2293 2331 - /* We will drain the buffer in software, which means we 2294 + /* 2295 + * We will drain the buffer in software, which means we 2332 2296 * can reuse it for the next filling. No need to advance 2333 - * next_buffhd_to_fill. */ 2297 + * next_buffhd_to_fill. 2298 + */ 2334 2299 2335 2300 /* Wait for the CBW to arrive */ 2336 2301 while (bh->state != BUF_STATE_FULL) { ··· 2462 2425 2463 2426 /****************************** ALT CONFIGS ******************************/ 2464 2427 2465 - 2466 2428 static int fsg_set_alt(struct usb_function *f, unsigned intf, unsigned alt) 2467 2429 { 2468 2430 struct fsg_dev *fsg = fsg_from_func(f); ··· 2489 2453 struct fsg_lun *curlun; 2490 2454 unsigned int exception_req_tag; 2491 2455 2492 - /* Clear the existing signals. Anything but SIGUSR1 is converted 2493 - * into a high-priority EXIT exception. */ 2456 + /* 2457 + * Clear the existing signals. Anything but SIGUSR1 is converted 2458 + * into a high-priority EXIT exception. 2459 + */ 2494 2460 for (;;) { 2495 2461 int sig = 2496 2462 dequeue_signal_lock(current, &current->blocked, &info); ··· 2536 2498 usb_ep_fifo_flush(common->fsg->bulk_out); 2537 2499 } 2538 2500 2539 - /* Reset the I/O buffer states and pointers, the SCSI 2540 - * state, and the exception. Then invoke the handler. 
*/ 2501 + /* 2502 + * Reset the I/O buffer states and pointers, the SCSI 2503 + * state, and the exception. Then invoke the handler. 2504 + */ 2541 2505 spin_lock_irq(&common->lock); 2542 2506 2543 2507 for (i = 0; i < FSG_NUM_BUFFERS; ++i) { ··· 2577 2537 break; 2578 2538 2579 2539 case FSG_STATE_RESET: 2580 - /* In case we were forced against our will to halt a 2540 + /* 2541 + * In case we were forced against our will to halt a 2581 2542 * bulk endpoint, clear the halt now. (The SuperH UDC 2582 - * requires this.) */ 2543 + * requires this.) 2544 + */ 2583 2545 if (!fsg_is_set(common)) 2584 2546 break; 2585 2547 if (test_and_clear_bit(IGNORE_BULK_OUT, ··· 2591 2549 if (common->ep0_req_tag == exception_req_tag) 2592 2550 ep0_queue(common); /* Complete the status stage */ 2593 2551 2594 - /* Technically this should go here, but it would only be 2552 + /* 2553 + * Technically this should go here, but it would only be 2595 2554 * a waste of time. Ditto for the INTERFACE_CHANGE and 2596 - * CONFIG_CHANGE cases. */ 2555 + * CONFIG_CHANGE cases. 2556 + */ 2597 2557 /* for (i = 0; i < common->nluns; ++i) */ 2598 2558 /* common->luns[i].unit_attention_data = */ 2599 2559 /* SS_RESET_OCCURRED; */ ··· 2630 2586 { 2631 2587 struct fsg_common *common = common_; 2632 2588 2633 - /* Allow the thread to be killed by a signal, but set the signal mask 2634 - * to block everything but INT, TERM, KILL, and USR1. */ 2589 + /* 2590 + * Allow the thread to be killed by a signal, but set the signal mask 2591 + * to block everything but INT, TERM, KILL, and USR1. 2592 + */ 2635 2593 allow_signal(SIGINT); 2636 2594 allow_signal(SIGTERM); 2637 2595 allow_signal(SIGKILL); ··· 2642 2596 /* Allow the thread to be frozen */ 2643 2597 set_freezable(); 2644 2598 2645 - /* Arrange for userspace references to be interpreted as kernel 2599 + /* 2600 + * Arrange for userspace references to be interpreted as kernel 2646 2601 * pointers. 
That way we can pass a kernel pointer to a routine 2647 - * that expects a __user pointer and it will work okay. */ 2602 + * that expects a __user pointer and it will work okay. 2603 + */ 2648 2604 set_fs(get_ds()); 2649 2605 2650 2606 /* The main loop */ ··· 2706 2658 up_write(&common->filesem); 2707 2659 } 2708 2660 2709 - /* Let the unbind and cleanup routines know the thread has exited */ 2661 + /* Let fsg_unbind() know the thread has exited */ 2710 2662 complete_and_exit(&common->thread_notifier, 0); 2711 2663 } 2712 2664 ··· 2737 2689 { 2738 2690 kref_put(&common->ref, fsg_common_release); 2739 2691 } 2740 - 2741 2692 2742 2693 static struct fsg_common *fsg_common_init(struct fsg_common *common, 2743 2694 struct usb_composite_dev *cdev, ··· 2783 2736 fsg_intf_desc.iInterface = rc; 2784 2737 } 2785 2738 2786 - /* Create the LUNs, open their backing files, and register the 2787 - * LUN devices in sysfs. */ 2739 + /* 2740 + * Create the LUNs, open their backing files, and register the 2741 + * LUN devices in sysfs. 2742 + */ 2788 2743 curlun = kzalloc(nluns * sizeof *curlun, GFP_KERNEL); 2789 2744 if (unlikely(!curlun)) { 2790 2745 rc = -ENOMEM; ··· 2814 2765 if (rc) { 2815 2766 INFO(common, "failed to register LUN%d: %d\n", i, rc); 2816 2767 common->nluns = i; 2768 + put_device(&curlun->dev); 2817 2769 goto error_release; 2818 2770 } 2819 2771 ··· 2840 2790 } 2841 2791 common->nluns = nluns; 2842 2792 2843 - 2844 2793 /* Data buffers cyclic list */ 2845 2794 bh = common->buffhds; 2846 2795 i = FSG_NUM_BUFFERS; ··· 2856 2807 } while (--i); 2857 2808 bh->next = common->buffhds; 2858 2809 2859 - 2860 2810 /* Prepare inquiryString */ 2861 2811 if (cfg->release != 0xffff) { 2862 2812 i = cfg->release; ··· 2869 2821 i = 0x0399; 2870 2822 } 2871 2823 } 2872 - #define OR(x, y) ((x) ? 
(x) : (y)) 2873 2824 snprintf(common->inquiry_string, sizeof common->inquiry_string, 2874 - "%-8s%-16s%04x", 2875 - OR(cfg->vendor_name, "Linux "), 2825 + "%-8s%-16s%04x", cfg->vendor_name ?: "Linux", 2876 2826 /* Assume product name dependent on the first LUN */ 2877 - OR(cfg->product_name, common->luns->cdrom 2827 + cfg->product_name ?: (common->luns->cdrom 2878 2828 ? "File-Stor Gadget" 2879 - : "File-CD Gadget "), 2829 + : "File-CD Gadget"), 2880 2830 i); 2881 2831 2882 - 2883 - /* Some peripheral controllers are known not to be able to 2832 + /* 2833 + * Some peripheral controllers are known not to be able to 2884 2834 * halt bulk endpoints correctly. If one of them is present, 2885 2835 * disable stalls. 2886 2836 */ 2887 2837 common->can_stall = cfg->can_stall && 2888 2838 !(gadget_is_at91(common->gadget)); 2889 2839 2890 - 2891 2840 spin_lock_init(&common->lock); 2892 2841 kref_init(&common->ref); 2893 - 2894 2842 2895 2843 /* Tell the thread to start working */ 2896 2844 common->thread_task = 2897 2845 kthread_create(fsg_main_thread, common, 2898 - OR(cfg->thread_name, "file-storage")); 2846 + cfg->thread_name ?: "file-storage"); 2899 2847 if (IS_ERR(common->thread_task)) { 2900 2848 rc = PTR_ERR(common->thread_task); 2901 2849 goto error_release; 2902 2850 } 2903 2851 init_completion(&common->thread_notifier); 2904 2852 init_waitqueue_head(&common->fsg_wait); 2905 - #undef OR 2906 - 2907 2853 2908 2854 /* Information */ 2909 2855 INFO(common, FSG_DRIVER_DESC ", version: " FSG_DRIVER_VERSION "\n"); ··· 2931 2889 2932 2890 return common; 2933 2891 2934 - 2935 2892 error_luns: 2936 2893 common->nluns = i + 1; 2937 2894 error_release: 2938 2895 common->state = FSG_STATE_TERMINATED; /* The thread is dead */ 2939 - /* Call fsg_common_release() directly, ref might be not 2940 - * initialised */ 2896 + /* Call fsg_common_release() directly, ref might be not initialised. 
*/ 2941 2897 fsg_common_release(&common->ref); 2942 2898 return ERR_PTR(rc); 2943 2899 } 2944 - 2945 2900 2946 2901 static void fsg_common_release(struct kref *ref) 2947 2902 { ··· 2948 2909 if (common->state != FSG_STATE_TERMINATED) { 2949 2910 raise_exception(common, FSG_STATE_EXIT); 2950 2911 wait_for_completion(&common->thread_notifier); 2951 - 2952 - /* The cleanup routine waits for this completion also */ 2953 - complete(&common->thread_notifier); 2954 2912 } 2955 2913 2956 2914 if (likely(common->luns)) { ··· 2981 2945 2982 2946 /*-------------------------------------------------------------------------*/ 2983 2947 2984 - 2985 2948 static void fsg_unbind(struct usb_configuration *c, struct usb_function *f) 2986 2949 { 2987 2950 struct fsg_dev *fsg = fsg_from_func(f); ··· 2999 2964 usb_free_descriptors(fsg->function.hs_descriptors); 3000 2965 kfree(fsg); 3001 2966 } 3002 - 3003 2967 3004 2968 static int fsg_bind(struct usb_configuration *c, struct usb_function *f) 3005 2969 { ··· 3082 3048 fsg->function.disable = fsg_disable; 3083 3049 3084 3050 fsg->common = common; 3085 - /* Our caller holds a reference to common structure so we 3051 + /* 3052 + * Our caller holds a reference to common structure so we 3086 3053 * don't have to be worry about it being freed until we return 3087 3054 * from this function. So instead of incrementing counter now 3088 3055 * and decrement in error recovery we increment it only when 3089 - * call to usb_add_function() was successful. */ 3056 + * call to usb_add_function() was successful. 
3057 + */ 3090 3058 3091 3059 rc = usb_add_function(c, &fsg->function); 3092 3060 if (unlikely(rc)) ··· 3099 3063 } 3100 3064 3101 3065 static inline int __deprecated __maybe_unused 3102 - fsg_add(struct usb_composite_dev *cdev, 3103 - struct usb_configuration *c, 3066 + fsg_add(struct usb_composite_dev *cdev, struct usb_configuration *c, 3104 3067 struct fsg_common *common) 3105 3068 { 3106 3069 return fsg_bind_config(cdev, c, common); ··· 3107 3072 3108 3073 3109 3074 /************************* Module parameters *************************/ 3110 - 3111 3075 3112 3076 struct fsg_module_parameters { 3113 3077 char *file[FSG_MAX_LUNS]; ··· 3120 3086 unsigned int luns; /* nluns */ 3121 3087 int stall; /* can_stall */ 3122 3088 }; 3123 - 3124 3089 3125 3090 #define _FSG_MODULE_PARAM_ARRAY(prefix, params, name, type, desc) \ 3126 3091 module_param_array_named(prefix ## name, params.name, type, \ ··· 3147 3114 "number of LUNs"); \ 3148 3115 _FSG_MODULE_PARAM(prefix, params, stall, bool, \ 3149 3116 "false to prevent bulk stalls") 3150 - 3151 3117 3152 3118 static void 3153 3119 fsg_config_from_params(struct fsg_config *cfg,
+1407
drivers/usb/gadget/f_ncm.c
··· 1 + /* 2 + * f_ncm.c -- USB CDC Network (NCM) link function driver 3 + * 4 + * Copyright (C) 2010 Nokia Corporation 5 + * Contact: Yauheni Kaliuta <yauheni.kaliuta@nokia.com> 6 + * 7 + * The driver borrows from f_ecm.c which is: 8 + * 9 + * Copyright (C) 2003-2005,2008 David Brownell 10 + * Copyright (C) 2008 Nokia Corporation 11 + * 12 + * This program is free software; you can redistribute it and/or modify 13 + * it under the terms of the GNU General Public License as published by 14 + * the Free Software Foundation; either version 2 of the License, or 15 + * (at your option) any later version. 16 + * 17 + * This program is distributed in the hope that it will be useful, 18 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 19 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 20 + * GNU General Public License for more details. 21 + * 22 + * You should have received a copy of the GNU General Public License 23 + * along with this program; if not, write to the Free Software 24 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 25 + */ 26 + 27 + #include <linux/kernel.h> 28 + #include <linux/device.h> 29 + #include <linux/etherdevice.h> 30 + #include <linux/crc32.h> 31 + 32 + #include <linux/usb/cdc.h> 33 + 34 + #include "u_ether.h" 35 + 36 + /* 37 + * This function is a "CDC Network Control Model" (CDC NCM) Ethernet link. 38 + * NCM is intended to be used with high-speed network attachments. 39 + * 40 + * Note that NCM requires the use of "alternate settings" for its data 41 + * interface. This means that the set_alt() method has real work to do, 42 + * and also means that a get_alt() method is required. 
43 + */ 44 + 45 + /* to trigger crc/non-crc ndp signature */ 46 + 47 + #define NCM_NDP_HDR_CRC_MASK 0x01000000 48 + #define NCM_NDP_HDR_CRC 0x01000000 49 + #define NCM_NDP_HDR_NOCRC 0x00000000 50 + 51 + struct ncm_ep_descs { 52 + struct usb_endpoint_descriptor *in; 53 + struct usb_endpoint_descriptor *out; 54 + struct usb_endpoint_descriptor *notify; 55 + }; 56 + 57 + enum ncm_notify_state { 58 + NCM_NOTIFY_NONE, /* don't notify */ 59 + NCM_NOTIFY_CONNECT, /* issue CONNECT next */ 60 + NCM_NOTIFY_SPEED, /* issue SPEED_CHANGE next */ 61 + }; 62 + 63 + struct f_ncm { 64 + struct gether port; 65 + u8 ctrl_id, data_id; 66 + 67 + char ethaddr[14]; 68 + 69 + struct ncm_ep_descs fs; 70 + struct ncm_ep_descs hs; 71 + 72 + struct usb_ep *notify; 73 + struct usb_endpoint_descriptor *notify_desc; 74 + struct usb_request *notify_req; 75 + u8 notify_state; 76 + bool is_open; 77 + 78 + struct ndp_parser_opts *parser_opts; 79 + bool is_crc; 80 + 81 + /* 82 + * for notification, it is accessed from both 83 + * callback and ethernet open/close 84 + */ 85 + spinlock_t lock; 86 + }; 87 + 88 + static inline struct f_ncm *func_to_ncm(struct usb_function *f) 89 + { 90 + return container_of(f, struct f_ncm, port.func); 91 + } 92 + 93 + /* peak (theoretical) bulk transfer rate in bits-per-second */ 94 + static inline unsigned ncm_bitrate(struct usb_gadget *g) 95 + { 96 + if (gadget_is_dualspeed(g) && g->speed == USB_SPEED_HIGH) 97 + return 13 * 512 * 8 * 1000 * 8; 98 + else 99 + return 19 * 64 * 1 * 1000 * 8; 100 + } 101 + 102 + /*-------------------------------------------------------------------------*/ 103 + 104 + /* 105 + * We cannot group frames, so use just the minimal size that fits 106 + * one max-size ethernet frame. 
107 + * If the host can group frames, allow it to do that; 16K is selected
108 + * because it is used by default by the current Linux host driver.
109 + */
110 + #define NTB_DEFAULT_IN_SIZE USB_CDC_NCM_NTB_MIN_IN_SIZE
111 + #define NTB_OUT_SIZE 16384
112 +
113 + /*
114 + * skbs of size less than that will not be aligned
115 + * to NCM's dwNtbInMaxSize to save bus bandwidth
116 + */
117 +
118 + #define MAX_TX_NONFIXED (512 * 3)
119 +
120 + #define FORMATS_SUPPORTED (USB_CDC_NCM_NTB16_SUPPORTED | \
121 + USB_CDC_NCM_NTB32_SUPPORTED)
122 +
123 + static struct usb_cdc_ncm_ntb_parameters ntb_parameters = {
124 + .wLength = sizeof ntb_parameters,
125 + .bmNtbFormatsSupported = cpu_to_le16(FORMATS_SUPPORTED),
126 + .dwNtbInMaxSize = cpu_to_le32(NTB_DEFAULT_IN_SIZE),
127 + .wNdpInDivisor = cpu_to_le16(4),
128 + .wNdpInPayloadRemainder = cpu_to_le16(0),
129 + .wNdpInAlignment = cpu_to_le16(4),
130 +
131 + .dwNtbOutMaxSize = cpu_to_le32(NTB_OUT_SIZE),
132 + .wNdpOutDivisor = cpu_to_le16(4),
133 + .wNdpOutPayloadRemainder = cpu_to_le16(0),
134 + .wNdpOutAlignment = cpu_to_le16(4),
135 + };
136 +
137 + /*
138 + * Use wMaxPacketSize big enough to fit CDC_NOTIFY_SPEED_CHANGE in one
139 + * packet, to simplify cancellation; and a big transfer interval, to
140 + * waste less bandwidth.
141 + */ 142 + 143 + #define LOG2_STATUS_INTERVAL_MSEC 5 /* 1 << 5 == 32 msec */ 144 + #define NCM_STATUS_BYTECOUNT 16 /* 8 byte header + data */ 145 + 146 + static struct usb_interface_assoc_descriptor ncm_iad_desc __initdata = { 147 + .bLength = sizeof ncm_iad_desc, 148 + .bDescriptorType = USB_DT_INTERFACE_ASSOCIATION, 149 + 150 + /* .bFirstInterface = DYNAMIC, */ 151 + .bInterfaceCount = 2, /* control + data */ 152 + .bFunctionClass = USB_CLASS_COMM, 153 + .bFunctionSubClass = USB_CDC_SUBCLASS_NCM, 154 + .bFunctionProtocol = USB_CDC_PROTO_NONE, 155 + /* .iFunction = DYNAMIC */ 156 + }; 157 + 158 + /* interface descriptor: */ 159 + 160 + static struct usb_interface_descriptor ncm_control_intf __initdata = { 161 + .bLength = sizeof ncm_control_intf, 162 + .bDescriptorType = USB_DT_INTERFACE, 163 + 164 + /* .bInterfaceNumber = DYNAMIC */ 165 + .bNumEndpoints = 1, 166 + .bInterfaceClass = USB_CLASS_COMM, 167 + .bInterfaceSubClass = USB_CDC_SUBCLASS_NCM, 168 + .bInterfaceProtocol = USB_CDC_PROTO_NONE, 169 + /* .iInterface = DYNAMIC */ 170 + }; 171 + 172 + static struct usb_cdc_header_desc ncm_header_desc __initdata = { 173 + .bLength = sizeof ncm_header_desc, 174 + .bDescriptorType = USB_DT_CS_INTERFACE, 175 + .bDescriptorSubType = USB_CDC_HEADER_TYPE, 176 + 177 + .bcdCDC = cpu_to_le16(0x0110), 178 + }; 179 + 180 + static struct usb_cdc_union_desc ncm_union_desc __initdata = { 181 + .bLength = sizeof(ncm_union_desc), 182 + .bDescriptorType = USB_DT_CS_INTERFACE, 183 + .bDescriptorSubType = USB_CDC_UNION_TYPE, 184 + /* .bMasterInterface0 = DYNAMIC */ 185 + /* .bSlaveInterface0 = DYNAMIC */ 186 + }; 187 + 188 + static struct usb_cdc_ether_desc ecm_desc __initdata = { 189 + .bLength = sizeof ecm_desc, 190 + .bDescriptorType = USB_DT_CS_INTERFACE, 191 + .bDescriptorSubType = USB_CDC_ETHERNET_TYPE, 192 + 193 + /* this descriptor actually adds value, surprise! 
*/ 194 + /* .iMACAddress = DYNAMIC */ 195 + .bmEthernetStatistics = cpu_to_le32(0), /* no statistics */ 196 + .wMaxSegmentSize = cpu_to_le16(ETH_FRAME_LEN), 197 + .wNumberMCFilters = cpu_to_le16(0), 198 + .bNumberPowerFilters = 0, 199 + }; 200 + 201 + #define NCAPS (USB_CDC_NCM_NCAP_ETH_FILTER | USB_CDC_NCM_NCAP_CRC_MODE) 202 + 203 + static struct usb_cdc_ncm_desc ncm_desc __initdata = { 204 + .bLength = sizeof ncm_desc, 205 + .bDescriptorType = USB_DT_CS_INTERFACE, 206 + .bDescriptorSubType = USB_CDC_NCM_TYPE, 207 + 208 + .bcdNcmVersion = cpu_to_le16(0x0100), 209 + /* can process SetEthernetPacketFilter */ 210 + .bmNetworkCapabilities = NCAPS, 211 + }; 212 + 213 + /* the default data interface has no endpoints ... */ 214 + 215 + static struct usb_interface_descriptor ncm_data_nop_intf __initdata = { 216 + .bLength = sizeof ncm_data_nop_intf, 217 + .bDescriptorType = USB_DT_INTERFACE, 218 + 219 + .bInterfaceNumber = 1, 220 + .bAlternateSetting = 0, 221 + .bNumEndpoints = 0, 222 + .bInterfaceClass = USB_CLASS_CDC_DATA, 223 + .bInterfaceSubClass = 0, 224 + .bInterfaceProtocol = USB_CDC_NCM_PROTO_NTB, 225 + /* .iInterface = DYNAMIC */ 226 + }; 227 + 228 + /* ... 
but the "real" data interface has two bulk endpoints */ 229 + 230 + static struct usb_interface_descriptor ncm_data_intf __initdata = { 231 + .bLength = sizeof ncm_data_intf, 232 + .bDescriptorType = USB_DT_INTERFACE, 233 + 234 + .bInterfaceNumber = 1, 235 + .bAlternateSetting = 1, 236 + .bNumEndpoints = 2, 237 + .bInterfaceClass = USB_CLASS_CDC_DATA, 238 + .bInterfaceSubClass = 0, 239 + .bInterfaceProtocol = USB_CDC_NCM_PROTO_NTB, 240 + /* .iInterface = DYNAMIC */ 241 + }; 242 + 243 + /* full speed support: */ 244 + 245 + static struct usb_endpoint_descriptor fs_ncm_notify_desc __initdata = { 246 + .bLength = USB_DT_ENDPOINT_SIZE, 247 + .bDescriptorType = USB_DT_ENDPOINT, 248 + 249 + .bEndpointAddress = USB_DIR_IN, 250 + .bmAttributes = USB_ENDPOINT_XFER_INT, 251 + .wMaxPacketSize = cpu_to_le16(NCM_STATUS_BYTECOUNT), 252 + .bInterval = 1 << LOG2_STATUS_INTERVAL_MSEC, 253 + }; 254 + 255 + static struct usb_endpoint_descriptor fs_ncm_in_desc __initdata = { 256 + .bLength = USB_DT_ENDPOINT_SIZE, 257 + .bDescriptorType = USB_DT_ENDPOINT, 258 + 259 + .bEndpointAddress = USB_DIR_IN, 260 + .bmAttributes = USB_ENDPOINT_XFER_BULK, 261 + }; 262 + 263 + static struct usb_endpoint_descriptor fs_ncm_out_desc __initdata = { 264 + .bLength = USB_DT_ENDPOINT_SIZE, 265 + .bDescriptorType = USB_DT_ENDPOINT, 266 + 267 + .bEndpointAddress = USB_DIR_OUT, 268 + .bmAttributes = USB_ENDPOINT_XFER_BULK, 269 + }; 270 + 271 + static struct usb_descriptor_header *ncm_fs_function[] __initdata = { 272 + (struct usb_descriptor_header *) &ncm_iad_desc, 273 + /* CDC NCM control descriptors */ 274 + (struct usb_descriptor_header *) &ncm_control_intf, 275 + (struct usb_descriptor_header *) &ncm_header_desc, 276 + (struct usb_descriptor_header *) &ncm_union_desc, 277 + (struct usb_descriptor_header *) &ecm_desc, 278 + (struct usb_descriptor_header *) &ncm_desc, 279 + (struct usb_descriptor_header *) &fs_ncm_notify_desc, 280 + /* data interface, altsettings 0 and 1 */ 281 + (struct 
usb_descriptor_header *) &ncm_data_nop_intf, 282 + (struct usb_descriptor_header *) &ncm_data_intf, 283 + (struct usb_descriptor_header *) &fs_ncm_in_desc, 284 + (struct usb_descriptor_header *) &fs_ncm_out_desc, 285 + NULL, 286 + }; 287 + 288 + /* high speed support: */ 289 + 290 + static struct usb_endpoint_descriptor hs_ncm_notify_desc __initdata = { 291 + .bLength = USB_DT_ENDPOINT_SIZE, 292 + .bDescriptorType = USB_DT_ENDPOINT, 293 + 294 + .bEndpointAddress = USB_DIR_IN, 295 + .bmAttributes = USB_ENDPOINT_XFER_INT, 296 + .wMaxPacketSize = cpu_to_le16(NCM_STATUS_BYTECOUNT), 297 + .bInterval = LOG2_STATUS_INTERVAL_MSEC + 4, 298 + }; 299 + static struct usb_endpoint_descriptor hs_ncm_in_desc __initdata = { 300 + .bLength = USB_DT_ENDPOINT_SIZE, 301 + .bDescriptorType = USB_DT_ENDPOINT, 302 + 303 + .bEndpointAddress = USB_DIR_IN, 304 + .bmAttributes = USB_ENDPOINT_XFER_BULK, 305 + .wMaxPacketSize = cpu_to_le16(512), 306 + }; 307 + 308 + static struct usb_endpoint_descriptor hs_ncm_out_desc __initdata = { 309 + .bLength = USB_DT_ENDPOINT_SIZE, 310 + .bDescriptorType = USB_DT_ENDPOINT, 311 + 312 + .bEndpointAddress = USB_DIR_OUT, 313 + .bmAttributes = USB_ENDPOINT_XFER_BULK, 314 + .wMaxPacketSize = cpu_to_le16(512), 315 + }; 316 + 317 + static struct usb_descriptor_header *ncm_hs_function[] __initdata = { 318 + (struct usb_descriptor_header *) &ncm_iad_desc, 319 + /* CDC NCM control descriptors */ 320 + (struct usb_descriptor_header *) &ncm_control_intf, 321 + (struct usb_descriptor_header *) &ncm_header_desc, 322 + (struct usb_descriptor_header *) &ncm_union_desc, 323 + (struct usb_descriptor_header *) &ecm_desc, 324 + (struct usb_descriptor_header *) &ncm_desc, 325 + (struct usb_descriptor_header *) &hs_ncm_notify_desc, 326 + /* data interface, altsettings 0 and 1 */ 327 + (struct usb_descriptor_header *) &ncm_data_nop_intf, 328 + (struct usb_descriptor_header *) &ncm_data_intf, 329 + (struct usb_descriptor_header *) &hs_ncm_in_desc, 330 + (struct 
usb_descriptor_header *) &hs_ncm_out_desc, 331 + NULL, 332 + }; 333 + 334 + /* string descriptors: */ 335 + 336 + #define STRING_CTRL_IDX 0 337 + #define STRING_MAC_IDX 1 338 + #define STRING_DATA_IDX 2 339 + #define STRING_IAD_IDX 3 340 + 341 + static struct usb_string ncm_string_defs[] = { 342 + [STRING_CTRL_IDX].s = "CDC Network Control Model (NCM)", 343 + [STRING_MAC_IDX].s = NULL /* DYNAMIC */, 344 + [STRING_DATA_IDX].s = "CDC Network Data", 345 + [STRING_IAD_IDX].s = "CDC NCM", 346 + { } /* end of list */ 347 + }; 348 + 349 + static struct usb_gadget_strings ncm_string_table = { 350 + .language = 0x0409, /* en-us */ 351 + .strings = ncm_string_defs, 352 + }; 353 + 354 + static struct usb_gadget_strings *ncm_strings[] = { 355 + &ncm_string_table, 356 + NULL, 357 + }; 358 + 359 + /* 360 + * Here are options for NCM Datagram Pointer table (NDP) parser. 361 + * There are 2 different formats: NDP16 and NDP32 in the spec (ch. 3), 362 + * in NDP16 offsets and sizes fields are 1 16bit word wide, 363 + * in NDP32 -- 2 16bit words wide. Also signatures are different. 364 + * To make the parser code the same, put the differences in the structure, 365 + * and switch pointers to the structures when the format is changed. 
366 + */ 367 + 368 + struct ndp_parser_opts { 369 + u32 nth_sign; 370 + u32 ndp_sign; 371 + unsigned nth_size; 372 + unsigned ndp_size; 373 + unsigned ndplen_align; 374 + /* sizes in u16 units */ 375 + unsigned dgram_item_len; /* index or length */ 376 + unsigned block_length; 377 + unsigned fp_index; 378 + unsigned reserved1; 379 + unsigned reserved2; 380 + unsigned next_fp_index; 381 + }; 382 + 383 + #define INIT_NDP16_OPTS { \ 384 + .nth_sign = USB_CDC_NCM_NTH16_SIGN, \ 385 + .ndp_sign = USB_CDC_NCM_NDP16_NOCRC_SIGN, \ 386 + .nth_size = sizeof(struct usb_cdc_ncm_nth16), \ 387 + .ndp_size = sizeof(struct usb_cdc_ncm_ndp16), \ 388 + .ndplen_align = 4, \ 389 + .dgram_item_len = 1, \ 390 + .block_length = 1, \ 391 + .fp_index = 1, \ 392 + .reserved1 = 0, \ 393 + .reserved2 = 0, \ 394 + .next_fp_index = 1, \ 395 + } 396 + 397 + 398 + #define INIT_NDP32_OPTS { \ 399 + .nth_sign = USB_CDC_NCM_NTH32_SIGN, \ 400 + .ndp_sign = USB_CDC_NCM_NDP32_NOCRC_SIGN, \ 401 + .nth_size = sizeof(struct usb_cdc_ncm_nth32), \ 402 + .ndp_size = sizeof(struct usb_cdc_ncm_ndp32), \ 403 + .ndplen_align = 8, \ 404 + .dgram_item_len = 2, \ 405 + .block_length = 2, \ 406 + .fp_index = 2, \ 407 + .reserved1 = 1, \ 408 + .reserved2 = 2, \ 409 + .next_fp_index = 2, \ 410 + } 411 + 412 + static struct ndp_parser_opts ndp16_opts = INIT_NDP16_OPTS; 413 + static struct ndp_parser_opts ndp32_opts = INIT_NDP32_OPTS; 414 + 415 + static inline void put_ncm(__le16 **p, unsigned size, unsigned val) 416 + { 417 + switch (size) { 418 + case 1: 419 + put_unaligned_le16((u16)val, *p); 420 + break; 421 + case 2: 422 + put_unaligned_le32((u32)val, *p); 423 + 424 + break; 425 + default: 426 + BUG(); 427 + } 428 + 429 + *p += size; 430 + } 431 + 432 + static inline unsigned get_ncm(__le16 **p, unsigned size) 433 + { 434 + unsigned tmp; 435 + 436 + switch (size) { 437 + case 1: 438 + tmp = get_unaligned_le16(*p); 439 + break; 440 + case 2: 441 + tmp = get_unaligned_le32(*p); 442 + break; 443 + default: 444 + BUG(); 
445 + } 446 + 447 + *p += size; 448 + return tmp; 449 + } 450 + 451 + /*-------------------------------------------------------------------------*/ 452 + 453 + static inline void ncm_reset_values(struct f_ncm *ncm) 454 + { 455 + ncm->parser_opts = &ndp16_opts; 456 + ncm->is_crc = false; 457 + ncm->port.cdc_filter = DEFAULT_FILTER; 458 + 459 + /* doesn't make sense for ncm, fixed size used */ 460 + ncm->port.header_len = 0; 461 + 462 + ncm->port.fixed_out_len = le32_to_cpu(ntb_parameters.dwNtbOutMaxSize); 463 + ncm->port.fixed_in_len = NTB_DEFAULT_IN_SIZE; 464 + } 465 + 466 + /* 467 + * Context: ncm->lock held 468 + */ 469 + static void ncm_do_notify(struct f_ncm *ncm) 470 + { 471 + struct usb_request *req = ncm->notify_req; 472 + struct usb_cdc_notification *event; 473 + struct usb_composite_dev *cdev = ncm->port.func.config->cdev; 474 + __le32 *data; 475 + int status; 476 + 477 + /* notification already in flight? */ 478 + if (!req) 479 + return; 480 + 481 + event = req->buf; 482 + switch (ncm->notify_state) { 483 + case NCM_NOTIFY_NONE: 484 + return; 485 + 486 + case NCM_NOTIFY_CONNECT: 487 + event->bNotificationType = USB_CDC_NOTIFY_NETWORK_CONNECTION; 488 + if (ncm->is_open) 489 + event->wValue = cpu_to_le16(1); 490 + else 491 + event->wValue = cpu_to_le16(0); 492 + event->wLength = 0; 493 + req->length = sizeof *event; 494 + 495 + DBG(cdev, "notify connect %s\n", 496 + ncm->is_open ? 
"true" : "false"); 497 + ncm->notify_state = NCM_NOTIFY_NONE; 498 + break; 499 + 500 + case NCM_NOTIFY_SPEED: 501 + event->bNotificationType = USB_CDC_NOTIFY_SPEED_CHANGE; 502 + event->wValue = cpu_to_le16(0); 503 + event->wLength = cpu_to_le16(8); 504 + req->length = NCM_STATUS_BYTECOUNT; 505 + 506 + /* SPEED_CHANGE data is up/down speeds in bits/sec */ 507 + data = req->buf + sizeof *event; 508 + data[0] = cpu_to_le32(ncm_bitrate(cdev->gadget)); 509 + data[1] = data[0]; 510 + 511 + DBG(cdev, "notify speed %d\n", ncm_bitrate(cdev->gadget)); 512 + ncm->notify_state = NCM_NOTIFY_CONNECT; 513 + break; 514 + } 515 + event->bmRequestType = 0xA1; 516 + event->wIndex = cpu_to_le16(ncm->ctrl_id); 517 + 518 + ncm->notify_req = NULL; 519 + /* 520 + * In double buffering if there is a space in FIFO, 521 + * completion callback can be called right after the call, 522 + * so unlocking 523 + */ 524 + spin_unlock(&ncm->lock); 525 + status = usb_ep_queue(ncm->notify, req, GFP_ATOMIC); 526 + spin_lock(&ncm->lock); 527 + if (status < 0) { 528 + ncm->notify_req = req; 529 + DBG(cdev, "notify --> %d\n", status); 530 + } 531 + } 532 + 533 + /* 534 + * Context: ncm->lock held 535 + */ 536 + static void ncm_notify(struct f_ncm *ncm) 537 + { 538 + /* 539 + * NOTE on most versions of Linux, host side cdc-ethernet 540 + * won't listen for notifications until its netdevice opens. 541 + * The first notification then sits in the FIFO for a long 542 + * time, and the second one is queued. 
543 + *
544 + * If ncm_notify() is called before the second (CONNECT)
545 + * notification is sent, then it will reset to send the SPEED
546 + * notification again (and again, and again), but that is harmless.
547 + */
548 + ncm->notify_state = NCM_NOTIFY_SPEED;
549 + ncm_do_notify(ncm);
550 + }
551 +
552 + static void ncm_notify_complete(struct usb_ep *ep, struct usb_request *req)
553 + {
554 + struct f_ncm *ncm = req->context;
555 + struct usb_composite_dev *cdev = ncm->port.func.config->cdev;
556 + struct usb_cdc_notification *event = req->buf;
557 +
558 + spin_lock(&ncm->lock);
559 + switch (req->status) {
560 + case 0:
561 + VDBG(cdev, "Notification %02x sent\n",
562 + event->bNotificationType);
563 + break;
564 + case -ECONNRESET:
565 + case -ESHUTDOWN:
566 + ncm->notify_state = NCM_NOTIFY_NONE;
567 + break;
568 + default:
569 + DBG(cdev, "event %02x --> %d\n",
570 + event->bNotificationType, req->status);
571 + break;
572 + }
573 + ncm->notify_req = req;
574 + ncm_do_notify(ncm);
575 + spin_unlock(&ncm->lock);
576 + }
577 +
578 + static void ncm_ep0out_complete(struct usb_ep *ep, struct usb_request *req)
579 + {
580 + /* now for SET_NTB_INPUT_SIZE only */
581 + unsigned in_size;
582 + struct usb_function *f = req->context;
583 + struct f_ncm *ncm = func_to_ncm(f);
584 + struct usb_composite_dev *cdev = ep->driver_data;
585 +
586 + req->context = NULL;
587 + if (req->status || req->actual != req->length) {
588 + DBG(cdev, "Bad control-OUT transfer\n");
589 + goto invalid;
590 + }
591 +
592 + in_size = get_unaligned_le32(req->buf);
593 + if (in_size < USB_CDC_NCM_NTB_MIN_IN_SIZE ||
594 + in_size > le32_to_cpu(ntb_parameters.dwNtbInMaxSize)) {
595 + DBG(cdev, "Got wrong INPUT SIZE (%d) from host\n", in_size);
596 + goto invalid;
597 + }
598 +
599 + ncm->port.fixed_in_len = in_size;
600 + VDBG(cdev, "Set NTB INPUT SIZE %d\n", in_size);
601 + return;
602 +
603 + invalid:
604 + usb_ep_set_halt(ep);
605 + return;
606 + }
607 +
608 + static int ncm_setup(struct
usb_function *f, const struct usb_ctrlrequest *ctrl) 609 + { 610 + struct f_ncm *ncm = func_to_ncm(f); 611 + struct usb_composite_dev *cdev = f->config->cdev; 612 + struct usb_request *req = cdev->req; 613 + int value = -EOPNOTSUPP; 614 + u16 w_index = le16_to_cpu(ctrl->wIndex); 615 + u16 w_value = le16_to_cpu(ctrl->wValue); 616 + u16 w_length = le16_to_cpu(ctrl->wLength); 617 + 618 + /* 619 + * composite driver infrastructure handles everything except 620 + * CDC class messages; interface activation uses set_alt(). 621 + */ 622 + switch ((ctrl->bRequestType << 8) | ctrl->bRequest) { 623 + case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8) 624 + | USB_CDC_SET_ETHERNET_PACKET_FILTER: 625 + /* 626 + * see 6.2.30: no data, wIndex = interface, 627 + * wValue = packet filter bitmap 628 + */ 629 + if (w_length != 0 || w_index != ncm->ctrl_id) 630 + goto invalid; 631 + DBG(cdev, "packet filter %02x\n", w_value); 632 + /* 633 + * REVISIT locking of cdc_filter. This assumes the UDC 634 + * driver won't have a concurrent packet TX irq running on 635 + * another CPU; or that if it does, this write is atomic... 636 + */ 637 + ncm->port.cdc_filter = w_value; 638 + value = 0; 639 + break; 640 + /* 641 + * and optionally: 642 + * case USB_CDC_SEND_ENCAPSULATED_COMMAND: 643 + * case USB_CDC_GET_ENCAPSULATED_RESPONSE: 644 + * case USB_CDC_SET_ETHERNET_MULTICAST_FILTERS: 645 + * case USB_CDC_SET_ETHERNET_PM_PATTERN_FILTER: 646 + * case USB_CDC_GET_ETHERNET_PM_PATTERN_FILTER: 647 + * case USB_CDC_GET_ETHERNET_STATISTIC: 648 + */ 649 + 650 + case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8) 651 + | USB_CDC_GET_NTB_PARAMETERS: 652 + 653 + if (w_length == 0 || w_value != 0 || w_index != ncm->ctrl_id) 654 + goto invalid; 655 + value = w_length > sizeof ntb_parameters ? 
656 + sizeof ntb_parameters : w_length; 657 + memcpy(req->buf, &ntb_parameters, value); 658 + VDBG(cdev, "Host asked NTB parameters\n"); 659 + break; 660 + 661 + case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8) 662 + | USB_CDC_GET_NTB_INPUT_SIZE: 663 + 664 + if (w_length < 4 || w_value != 0 || w_index != ncm->ctrl_id) 665 + goto invalid; 666 + put_unaligned_le32(ncm->port.fixed_in_len, req->buf); 667 + value = 4; 668 + VDBG(cdev, "Host asked INPUT SIZE, sending %d\n", 669 + ncm->port.fixed_in_len); 670 + break; 671 + 672 + case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8) 673 + | USB_CDC_SET_NTB_INPUT_SIZE: 674 + { 675 + if (w_length != 4 || w_value != 0 || w_index != ncm->ctrl_id) 676 + goto invalid; 677 + req->complete = ncm_ep0out_complete; 678 + req->length = w_length; 679 + req->context = f; 680 + 681 + value = req->length; 682 + break; 683 + } 684 + 685 + case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8) 686 + | USB_CDC_GET_NTB_FORMAT: 687 + { 688 + uint16_t format; 689 + 690 + if (w_length < 2 || w_value != 0 || w_index != ncm->ctrl_id) 691 + goto invalid; 692 + format = (ncm->parser_opts == &ndp16_opts) ? 
0x0000 : 0x0001; 693 + put_unaligned_le16(format, req->buf); 694 + value = 2; 695 + VDBG(cdev, "Host asked NTB FORMAT, sending %d\n", format); 696 + break; 697 + } 698 + 699 + case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8) 700 + | USB_CDC_SET_NTB_FORMAT: 701 + { 702 + if (w_length != 0 || w_index != ncm->ctrl_id) 703 + goto invalid; 704 + switch (w_value) { 705 + case 0x0000: 706 + ncm->parser_opts = &ndp16_opts; 707 + DBG(cdev, "NCM16 selected\n"); 708 + break; 709 + case 0x0001: 710 + ncm->parser_opts = &ndp32_opts; 711 + DBG(cdev, "NCM32 selected\n"); 712 + break; 713 + default: 714 + goto invalid; 715 + } 716 + value = 0; 717 + break; 718 + } 719 + case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8) 720 + | USB_CDC_GET_CRC_MODE: 721 + { 722 + uint16_t is_crc; 723 + 724 + if (w_length < 2 || w_value != 0 || w_index != ncm->ctrl_id) 725 + goto invalid; 726 + is_crc = ncm->is_crc ? 0x0001 : 0x0000; 727 + put_unaligned_le16(is_crc, req->buf); 728 + value = 2; 729 + VDBG(cdev, "Host asked CRC MODE, sending %d\n", is_crc); 730 + break; 731 + } 732 + 733 + case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8) 734 + | USB_CDC_SET_CRC_MODE: 735 + { 736 + int ndp_hdr_crc = 0; 737 + 738 + if (w_length != 0 || w_index != ncm->ctrl_id) 739 + goto invalid; 740 + switch (w_value) { 741 + case 0x0000: 742 + ncm->is_crc = false; 743 + ndp_hdr_crc = NCM_NDP_HDR_NOCRC; 744 + DBG(cdev, "non-CRC mode selected\n"); 745 + break; 746 + case 0x0001: 747 + ncm->is_crc = true; 748 + ndp_hdr_crc = NCM_NDP_HDR_CRC; 749 + DBG(cdev, "CRC mode selected\n"); 750 + break; 751 + default: 752 + goto invalid; 753 + } 754 + ncm->parser_opts->ndp_sign &= ~NCM_NDP_HDR_CRC_MASK; 755 + ncm->parser_opts->ndp_sign |= ndp_hdr_crc; 756 + value = 0; 757 + break; 758 + } 759 + 760 + /* and disabled in ncm descriptor: */ 761 + /* case USB_CDC_GET_NET_ADDRESS: */ 762 + /* case USB_CDC_SET_NET_ADDRESS: */ 763 + /* case USB_CDC_GET_MAX_DATAGRAM_SIZE: */ 764 + /* case 
USB_CDC_SET_MAX_DATAGRAM_SIZE: */ 765 + 766 + default: 767 + invalid: 768 + DBG(cdev, "invalid control req%02x.%02x v%04x i%04x l%d\n", 769 + ctrl->bRequestType, ctrl->bRequest, 770 + w_value, w_index, w_length); 771 + } 772 + 773 + /* respond with data transfer or status phase? */ 774 + if (value >= 0) { 775 + DBG(cdev, "ncm req%02x.%02x v%04x i%04x l%d\n", 776 + ctrl->bRequestType, ctrl->bRequest, 777 + w_value, w_index, w_length); 778 + req->zero = 0; 779 + req->length = value; 780 + value = usb_ep_queue(cdev->gadget->ep0, req, GFP_ATOMIC); 781 + if (value < 0) 782 + ERROR(cdev, "ncm req %02x.%02x response err %d\n", 783 + ctrl->bRequestType, ctrl->bRequest, 784 + value); 785 + } 786 + 787 + /* device either stalls (value < 0) or reports success */ 788 + return value; 789 + } 790 + 791 + 792 + static int ncm_set_alt(struct usb_function *f, unsigned intf, unsigned alt) 793 + { 794 + struct f_ncm *ncm = func_to_ncm(f); 795 + struct usb_composite_dev *cdev = f->config->cdev; 796 + 797 + /* Control interface has only altsetting 0 */ 798 + if (intf == ncm->ctrl_id) { 799 + if (alt != 0) 800 + goto fail; 801 + 802 + if (ncm->notify->driver_data) { 803 + DBG(cdev, "reset ncm control %d\n", intf); 804 + usb_ep_disable(ncm->notify); 805 + } else { 806 + DBG(cdev, "init ncm ctrl %d\n", intf); 807 + ncm->notify_desc = ep_choose(cdev->gadget, 808 + ncm->hs.notify, 809 + ncm->fs.notify); 810 + } 811 + usb_ep_enable(ncm->notify, ncm->notify_desc); 812 + ncm->notify->driver_data = ncm; 813 + 814 + /* Data interface has two altsettings, 0 and 1 */ 815 + } else if (intf == ncm->data_id) { 816 + if (alt > 1) 817 + goto fail; 818 + 819 + if (ncm->port.in_ep->driver_data) { 820 + DBG(cdev, "reset ncm\n"); 821 + gether_disconnect(&ncm->port); 822 + ncm_reset_values(ncm); 823 + } 824 + 825 + /* 826 + * CDC Network only sends data in non-default altsettings. 827 + * Changing altsettings resets filters, statistics, etc. 
828 + */ 829 + if (alt == 1) { 830 + struct net_device *net; 831 + 832 + if (!ncm->port.in) { 833 + DBG(cdev, "init ncm\n"); 834 + ncm->port.in = ep_choose(cdev->gadget, 835 + ncm->hs.in, 836 + ncm->fs.in); 837 + ncm->port.out = ep_choose(cdev->gadget, 838 + ncm->hs.out, 839 + ncm->fs.out); 840 + } 841 + 842 + /* TODO */ 843 + /* Enable zlps by default for NCM conformance; 844 + * override for musb_hdrc (avoids txdma ovhead) 845 + */ 846 + ncm->port.is_zlp_ok = !( 847 + gadget_is_musbhdrc(cdev->gadget) 848 + ); 849 + ncm->port.cdc_filter = DEFAULT_FILTER; 850 + DBG(cdev, "activate ncm\n"); 851 + net = gether_connect(&ncm->port); 852 + if (IS_ERR(net)) 853 + return PTR_ERR(net); 854 + } 855 + 856 + spin_lock(&ncm->lock); 857 + ncm_notify(ncm); 858 + spin_unlock(&ncm->lock); 859 + } else 860 + goto fail; 861 + 862 + return 0; 863 + fail: 864 + return -EINVAL; 865 + } 866 + 867 + /* 868 + * Because the data interface supports multiple altsettings, 869 + * this NCM function *MUST* implement a get_alt() method. 870 + */ 871 + static int ncm_get_alt(struct usb_function *f, unsigned intf) 872 + { 873 + struct f_ncm *ncm = func_to_ncm(f); 874 + 875 + if (intf == ncm->ctrl_id) 876 + return 0; 877 + return ncm->port.in_ep->driver_data ? 1 : 0; 878 + } 879 + 880 + static struct sk_buff *ncm_wrap_ntb(struct gether *port, 881 + struct sk_buff *skb) 882 + { 883 + struct f_ncm *ncm = func_to_ncm(&port->func); 884 + struct sk_buff *skb2; 885 + int ncb_len = 0; 886 + __le16 *tmp; 887 + int div = ntb_parameters.wNdpInDivisor; 888 + int rem = ntb_parameters.wNdpInPayloadRemainder; 889 + int pad; 890 + int ndp_align = ntb_parameters.wNdpInAlignment; 891 + int ndp_pad; 892 + unsigned max_size = ncm->port.fixed_in_len; 893 + struct ndp_parser_opts *opts = ncm->parser_opts; 894 + unsigned crc_len = ncm->is_crc ? 
sizeof(uint32_t) : 0; 895 + 896 + ncb_len += opts->nth_size; 897 + ndp_pad = ALIGN(ncb_len, ndp_align) - ncb_len; 898 + ncb_len += ndp_pad; 899 + ncb_len += opts->ndp_size; 900 + ncb_len += 2 * 2 * opts->dgram_item_len; /* Datagram entry */ 901 + ncb_len += 2 * 2 * opts->dgram_item_len; /* Zero datagram entry */ 902 + pad = ALIGN(ncb_len, div) + rem - ncb_len; 903 + ncb_len += pad; 904 + 905 + if (ncb_len + skb->len + crc_len > max_size) { 906 + dev_kfree_skb_any(skb); 907 + return NULL; 908 + } 909 + 910 + skb2 = skb_copy_expand(skb, ncb_len, 911 + max_size - skb->len - ncb_len - crc_len, 912 + GFP_ATOMIC); 913 + dev_kfree_skb_any(skb); 914 + if (!skb2) 915 + return NULL; 916 + 917 + skb = skb2; 918 + 919 + tmp = (void *) skb_push(skb, ncb_len); 920 + memset(tmp, 0, ncb_len); 921 + 922 + put_unaligned_le32(opts->nth_sign, tmp); /* dwSignature */ 923 + tmp += 2; 924 + /* wHeaderLength */ 925 + put_unaligned_le16(opts->nth_size, tmp++); 926 + tmp++; /* skip wSequence */ 927 + put_ncm(&tmp, opts->block_length, skb->len); /* (d)wBlockLength */ 928 + /* (d)wFpIndex */ 929 + /* the first pointer is right after the NTH + align */ 930 + put_ncm(&tmp, opts->fp_index, opts->nth_size + ndp_pad); 931 + 932 + tmp = (void *)tmp + ndp_pad; 933 + 934 + /* NDP */ 935 + put_unaligned_le32(opts->ndp_sign, tmp); /* dwSignature */ 936 + tmp += 2; 937 + /* wLength */ 938 + put_unaligned_le16(ncb_len - opts->nth_size - pad, tmp++); 939 + 940 + tmp += opts->reserved1; 941 + tmp += opts->next_fp_index; /* skip reserved (d)wNextFpIndex */ 942 + tmp += opts->reserved2; 943 + 944 + if (ncm->is_crc) { 945 + uint32_t crc; 946 + 947 + crc = ~crc32_le(~0, 948 + skb->data + ncb_len, 949 + skb->len - ncb_len); 950 + put_unaligned_le32(crc, skb->data + skb->len); 951 + skb_put(skb, crc_len); 952 + } 953 + 954 + /* (d)wDatagramIndex[0] */ 955 + put_ncm(&tmp, opts->dgram_item_len, ncb_len); 956 + /* (d)wDatagramLength[0] */ 957 + put_ncm(&tmp, opts->dgram_item_len, skb->len - ncb_len); 958 + /* 
(d)wDatagramIndex[1] and (d)wDatagramLength[1] already zeroed */ 959 + 960 + if (skb->len > MAX_TX_NONFIXED) 961 + memset(skb_put(skb, max_size - skb->len), 962 + 0, max_size - skb->len); 963 + 964 + return skb; 965 + } 966 + 967 + static int ncm_unwrap_ntb(struct gether *port, 968 + struct sk_buff *skb, 969 + struct sk_buff_head *list) 970 + { 971 + struct f_ncm *ncm = func_to_ncm(&port->func); 972 + __le16 *tmp = (void *) skb->data; 973 + unsigned index, index2; 974 + unsigned dg_len, dg_len2; 975 + unsigned ndp_len; 976 + struct sk_buff *skb2; 977 + int ret = -EINVAL; 978 + unsigned max_size = le32_to_cpu(ntb_parameters.dwNtbOutMaxSize); 979 + struct ndp_parser_opts *opts = ncm->parser_opts; 980 + unsigned crc_len = ncm->is_crc ? sizeof(uint32_t) : 0; 981 + int dgram_counter; 982 + 983 + /* dwSignature */ 984 + if (get_unaligned_le32(tmp) != opts->nth_sign) { 985 + INFO(port->func.config->cdev, "Wrong NTH SIGN, skblen %d\n", 986 + skb->len); 987 + print_hex_dump(KERN_INFO, "HEAD:", DUMP_PREFIX_ADDRESS, 32, 1, 988 + skb->data, 32, false); 989 + 990 + goto err; 991 + } 992 + tmp += 2; 993 + /* wHeaderLength */ 994 + if (get_unaligned_le16(tmp++) != opts->nth_size) { 995 + INFO(port->func.config->cdev, "Wrong NTB headersize\n"); 996 + goto err; 997 + } 998 + tmp++; /* skip wSequence */ 999 + 1000 + /* (d)wBlockLength */ 1001 + if (get_ncm(&tmp, opts->block_length) > max_size) { 1002 + INFO(port->func.config->cdev, "OUT size exceeded\n"); 1003 + goto err; 1004 + } 1005 + 1006 + index = get_ncm(&tmp, opts->fp_index); 1007 + /* NCM 3.2 */ 1008 + if (((index % 4) != 0) && (index < opts->nth_size)) { 1009 + INFO(port->func.config->cdev, "Bad index: %x\n", 1010 + index); 1011 + goto err; 1012 + } 1013 + 1014 + /* walk through NDP */ 1015 + tmp = ((void *)skb->data) + index; 1016 + if (get_unaligned_le32(tmp) != opts->ndp_sign) { 1017 + INFO(port->func.config->cdev, "Wrong NDP SIGN\n"); 1018 + goto err; 1019 + } 1020 + tmp += 2; 1021 + 1022 + ndp_len = 
get_unaligned_le16(tmp++); 1023 + /* 1024 + * NCM 3.3.1 1025 + * entry is 2 items 1026 + * item size is 16/32 bits, opts->dgram_item_len * 2 bytes 1027 + * minimal: struct usb_cdc_ncm_ndpX + normal entry + zero entry 1028 + */ 1029 + if ((ndp_len < opts->ndp_size + 2 * 2 * (opts->dgram_item_len * 2)) 1030 + || (ndp_len % opts->ndplen_align != 0)) { 1031 + INFO(port->func.config->cdev, "Bad NDP length: %x\n", ndp_len); 1032 + goto err; 1033 + } 1034 + tmp += opts->reserved1; 1035 + tmp += opts->next_fp_index; /* skip reserved (d)wNextFpIndex */ 1036 + tmp += opts->reserved2; 1037 + 1038 + ndp_len -= opts->ndp_size; 1039 + index2 = get_ncm(&tmp, opts->dgram_item_len); 1040 + dg_len2 = get_ncm(&tmp, opts->dgram_item_len); 1041 + dgram_counter = 0; 1042 + 1043 + do { 1044 + index = index2; 1045 + dg_len = dg_len2; 1046 + if (dg_len < 14 + crc_len) { /* ethernet header + crc */ 1047 + INFO(port->func.config->cdev, "Bad dgram length: %x\n", 1048 + dg_len); 1049 + goto err; 1050 + } 1051 + if (ncm->is_crc) { 1052 + uint32_t crc, crc2; 1053 + 1054 + crc = get_unaligned_le32(skb->data + 1055 + index + dg_len - crc_len); 1056 + crc2 = ~crc32_le(~0, 1057 + skb->data + index, 1058 + dg_len - crc_len); 1059 + if (crc != crc2) { 1060 + INFO(port->func.config->cdev, "Bad CRC\n"); 1061 + goto err; 1062 + } 1063 + } 1064 + 1065 + index2 = get_ncm(&tmp, opts->dgram_item_len); 1066 + dg_len2 = get_ncm(&tmp, opts->dgram_item_len); 1067 + 1068 + if (index2 == 0 || dg_len2 == 0) { 1069 + skb2 = skb; 1070 + } else { 1071 + skb2 = skb_clone(skb, GFP_ATOMIC); 1072 + if (skb2 == NULL) 1073 + goto err; 1074 + } 1075 + 1076 + if (!skb_pull(skb2, index)) { 1077 + ret = -EOVERFLOW; 1078 + goto err; 1079 + } 1080 + 1081 + skb_trim(skb2, dg_len - crc_len); 1082 + skb_queue_tail(list, skb2); 1083 + 1084 + ndp_len -= 2 * (opts->dgram_item_len * 2); 1085 + 1086 + dgram_counter++; 1087 + 1088 + if (index2 == 0 || dg_len2 == 0) 1089 + break; 1090 + } while (ndp_len > 2 * (opts->dgram_item_len * 2)); 
/* zero entry */ 1091 + 1092 + VDBG(port->func.config->cdev, 1093 + "Parsed NTB with %d frames\n", dgram_counter); 1094 + return 0; 1095 + err: 1096 + skb_queue_purge(list); 1097 + dev_kfree_skb_any(skb); 1098 + return ret; 1099 + } 1100 + 1101 + static void ncm_disable(struct usb_function *f) 1102 + { 1103 + struct f_ncm *ncm = func_to_ncm(f); 1104 + struct usb_composite_dev *cdev = f->config->cdev; 1105 + 1106 + DBG(cdev, "ncm deactivated\n"); 1107 + 1108 + if (ncm->port.in_ep->driver_data) 1109 + gether_disconnect(&ncm->port); 1110 + 1111 + if (ncm->notify->driver_data) { 1112 + usb_ep_disable(ncm->notify); 1113 + ncm->notify->driver_data = NULL; 1114 + ncm->notify_desc = NULL; 1115 + } 1116 + } 1117 + 1118 + /*-------------------------------------------------------------------------*/ 1119 + 1120 + /* 1121 + * Callbacks let us notify the host about connect/disconnect when the 1122 + * net device is opened or closed. 1123 + * 1124 + * For testing, note that link states on this side include both opened 1125 + * and closed variants of: 1126 + * 1127 + * - disconnected/unconfigured 1128 + * - configured but inactive (data alt 0) 1129 + * - configured and active (data alt 1) 1130 + * 1131 + * Each needs to be tested with unplug, rmmod, SET_CONFIGURATION, and 1132 + * SET_INTERFACE (altsetting). Remember also that "configured" doesn't 1133 + * imply the host is actually polling the notification endpoint, and 1134 + * likewise that "active" doesn't imply it's actually using the data 1135 + * endpoints for traffic. 
1136 + */ 1137 + 1138 + static void ncm_open(struct gether *geth) 1139 + { 1140 + struct f_ncm *ncm = func_to_ncm(&geth->func); 1141 + 1142 + DBG(ncm->port.func.config->cdev, "%s\n", __func__); 1143 + 1144 + spin_lock(&ncm->lock); 1145 + ncm->is_open = true; 1146 + ncm_notify(ncm); 1147 + spin_unlock(&ncm->lock); 1148 + } 1149 + 1150 + static void ncm_close(struct gether *geth) 1151 + { 1152 + struct f_ncm *ncm = func_to_ncm(&geth->func); 1153 + 1154 + DBG(ncm->port.func.config->cdev, "%s\n", __func__); 1155 + 1156 + spin_lock(&ncm->lock); 1157 + ncm->is_open = false; 1158 + ncm_notify(ncm); 1159 + spin_unlock(&ncm->lock); 1160 + } 1161 + 1162 + /*-------------------------------------------------------------------------*/ 1163 + 1164 + /* ethernet function driver setup/binding */ 1165 + 1166 + static int __init 1167 + ncm_bind(struct usb_configuration *c, struct usb_function *f) 1168 + { 1169 + struct usb_composite_dev *cdev = c->cdev; 1170 + struct f_ncm *ncm = func_to_ncm(f); 1171 + int status; 1172 + struct usb_ep *ep; 1173 + 1174 + /* allocate instance-specific interface IDs */ 1175 + status = usb_interface_id(c, f); 1176 + if (status < 0) 1177 + goto fail; 1178 + ncm->ctrl_id = status; 1179 + ncm_iad_desc.bFirstInterface = status; 1180 + 1181 + ncm_control_intf.bInterfaceNumber = status; 1182 + ncm_union_desc.bMasterInterface0 = status; 1183 + 1184 + status = usb_interface_id(c, f); 1185 + if (status < 0) 1186 + goto fail; 1187 + ncm->data_id = status; 1188 + 1189 + ncm_data_nop_intf.bInterfaceNumber = status; 1190 + ncm_data_intf.bInterfaceNumber = status; 1191 + ncm_union_desc.bSlaveInterface0 = status; 1192 + 1193 + status = -ENODEV; 1194 + 1195 + /* allocate instance-specific endpoints */ 1196 + ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_in_desc); 1197 + if (!ep) 1198 + goto fail; 1199 + ncm->port.in_ep = ep; 1200 + ep->driver_data = cdev; /* claim */ 1201 + 1202 + ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_out_desc); 1203 + if (!ep) 1204 + goto fail; 
1205 + ncm->port.out_ep = ep; 1206 + ep->driver_data = cdev; /* claim */ 1207 + 1208 + ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_notify_desc); 1209 + if (!ep) 1210 + goto fail; 1211 + ncm->notify = ep; 1212 + ep->driver_data = cdev; /* claim */ 1213 + 1214 + status = -ENOMEM; 1215 + 1216 + /* allocate notification request and buffer */ 1217 + ncm->notify_req = usb_ep_alloc_request(ep, GFP_KERNEL); 1218 + if (!ncm->notify_req) 1219 + goto fail; 1220 + ncm->notify_req->buf = kmalloc(NCM_STATUS_BYTECOUNT, GFP_KERNEL); 1221 + if (!ncm->notify_req->buf) 1222 + goto fail; 1223 + ncm->notify_req->context = ncm; 1224 + ncm->notify_req->complete = ncm_notify_complete; 1225 + 1226 + /* copy descriptors, and track endpoint copies */ 1227 + f->descriptors = usb_copy_descriptors(ncm_fs_function); 1228 + if (!f->descriptors) 1229 + goto fail; 1230 + 1231 + ncm->fs.in = usb_find_endpoint(ncm_fs_function, 1232 + f->descriptors, &fs_ncm_in_desc); 1233 + ncm->fs.out = usb_find_endpoint(ncm_fs_function, 1234 + f->descriptors, &fs_ncm_out_desc); 1235 + ncm->fs.notify = usb_find_endpoint(ncm_fs_function, 1236 + f->descriptors, &fs_ncm_notify_desc); 1237 + 1238 + /* 1239 + * support all relevant hardware speeds... 
we expect that when 1240 + * hardware is dual speed, all bulk-capable endpoints work at 1241 + * both speeds 1242 + */ 1243 + if (gadget_is_dualspeed(c->cdev->gadget)) { 1244 + hs_ncm_in_desc.bEndpointAddress = 1245 + fs_ncm_in_desc.bEndpointAddress; 1246 + hs_ncm_out_desc.bEndpointAddress = 1247 + fs_ncm_out_desc.bEndpointAddress; 1248 + hs_ncm_notify_desc.bEndpointAddress = 1249 + fs_ncm_notify_desc.bEndpointAddress; 1250 + 1251 + /* copy descriptors, and track endpoint copies */ 1252 + f->hs_descriptors = usb_copy_descriptors(ncm_hs_function); 1253 + if (!f->hs_descriptors) 1254 + goto fail; 1255 + 1256 + ncm->hs.in = usb_find_endpoint(ncm_hs_function, 1257 + f->hs_descriptors, &hs_ncm_in_desc); 1258 + ncm->hs.out = usb_find_endpoint(ncm_hs_function, 1259 + f->hs_descriptors, &hs_ncm_out_desc); 1260 + ncm->hs.notify = usb_find_endpoint(ncm_hs_function, 1261 + f->hs_descriptors, &hs_ncm_notify_desc); 1262 + } 1263 + 1264 + /* 1265 + * NOTE: all that is done without knowing or caring about 1266 + * the network link ... which is unavailable to this code 1267 + * until we're activated via set_alt(). 1268 + */ 1269 + 1270 + ncm->port.open = ncm_open; 1271 + ncm->port.close = ncm_close; 1272 + 1273 + DBG(cdev, "CDC Network: %s speed IN/%s OUT/%s NOTIFY/%s\n", 1274 + gadget_is_dualspeed(c->cdev->gadget) ? 
"dual" : "full", 1275 + ncm->port.in_ep->name, ncm->port.out_ep->name, 1276 + ncm->notify->name); 1277 + return 0; 1278 + 1279 + fail: 1280 + if (f->descriptors) 1281 + usb_free_descriptors(f->descriptors); 1282 + 1283 + if (ncm->notify_req) { 1284 + kfree(ncm->notify_req->buf); 1285 + usb_ep_free_request(ncm->notify, ncm->notify_req); 1286 + } 1287 + 1288 + /* we might as well release our claims on endpoints */ 1289 + if (ncm->notify) 1290 + ncm->notify->driver_data = NULL; 1291 + if (ncm->port.out) 1292 + ncm->port.out_ep->driver_data = NULL; 1293 + if (ncm->port.in) 1294 + ncm->port.in_ep->driver_data = NULL; 1295 + 1296 + ERROR(cdev, "%s: can't bind, err %d\n", f->name, status); 1297 + 1298 + return status; 1299 + } 1300 + 1301 + static void 1302 + ncm_unbind(struct usb_configuration *c, struct usb_function *f) 1303 + { 1304 + struct f_ncm *ncm = func_to_ncm(f); 1305 + 1306 + DBG(c->cdev, "ncm unbind\n"); 1307 + 1308 + if (gadget_is_dualspeed(c->cdev->gadget)) 1309 + usb_free_descriptors(f->hs_descriptors); 1310 + usb_free_descriptors(f->descriptors); 1311 + 1312 + kfree(ncm->notify_req->buf); 1313 + usb_ep_free_request(ncm->notify, ncm->notify_req); 1314 + 1315 + ncm_string_defs[1].s = NULL; 1316 + kfree(ncm); 1317 + } 1318 + 1319 + /** 1320 + * ncm_bind_config - add CDC Network link to a configuration 1321 + * @c: the configuration to support the network link 1322 + * @ethaddr: a buffer in which the ethernet address of the host side 1323 + * side of the link was recorded 1324 + * Context: single threaded during gadget setup 1325 + * 1326 + * Returns zero on success, else negative errno. 1327 + * 1328 + * Caller must have called @gether_setup(). Caller is also responsible 1329 + * for calling @gether_cleanup() before module unload. 
1330 + */ 1331 + int __init ncm_bind_config(struct usb_configuration *c, u8 ethaddr[ETH_ALEN]) 1332 + { 1333 + struct f_ncm *ncm; 1334 + int status; 1335 + 1336 + if (!can_support_ecm(c->cdev->gadget) || !ethaddr) 1337 + return -EINVAL; 1338 + 1339 + /* maybe allocate device-global string IDs */ 1340 + if (ncm_string_defs[0].id == 0) { 1341 + 1342 + /* control interface label */ 1343 + status = usb_string_id(c->cdev); 1344 + if (status < 0) 1345 + return status; 1346 + ncm_string_defs[STRING_CTRL_IDX].id = status; 1347 + ncm_control_intf.iInterface = status; 1348 + 1349 + /* data interface label */ 1350 + status = usb_string_id(c->cdev); 1351 + if (status < 0) 1352 + return status; 1353 + ncm_string_defs[STRING_DATA_IDX].id = status; 1354 + ncm_data_nop_intf.iInterface = status; 1355 + ncm_data_intf.iInterface = status; 1356 + 1357 + /* MAC address */ 1358 + status = usb_string_id(c->cdev); 1359 + if (status < 0) 1360 + return status; 1361 + ncm_string_defs[STRING_MAC_IDX].id = status; 1362 + ecm_desc.iMACAddress = status; 1363 + 1364 + /* IAD */ 1365 + status = usb_string_id(c->cdev); 1366 + if (status < 0) 1367 + return status; 1368 + ncm_string_defs[STRING_IAD_IDX].id = status; 1369 + ncm_iad_desc.iFunction = status; 1370 + } 1371 + 1372 + /* allocate and initialize one new instance */ 1373 + ncm = kzalloc(sizeof *ncm, GFP_KERNEL); 1374 + if (!ncm) 1375 + return -ENOMEM; 1376 + 1377 + /* export host's Ethernet address in CDC format */ 1378 + snprintf(ncm->ethaddr, sizeof ncm->ethaddr, 1379 + "%02X%02X%02X%02X%02X%02X", 1380 + ethaddr[0], ethaddr[1], ethaddr[2], 1381 + ethaddr[3], ethaddr[4], ethaddr[5]); 1382 + ncm_string_defs[1].s = ncm->ethaddr; 1383 + 1384 + spin_lock_init(&ncm->lock); 1385 + ncm_reset_values(ncm); 1386 + ncm->port.is_fixed = true; 1387 + 1388 + ncm->port.func.name = "cdc_network"; 1389 + ncm->port.func.strings = ncm_strings; 1390 + /* descriptors are per-instance copies */ 1391 + ncm->port.func.bind = ncm_bind; 1392 + ncm->port.func.unbind = 
ncm_unbind; 1393 + ncm->port.func.set_alt = ncm_set_alt; 1394 + ncm->port.func.get_alt = ncm_get_alt; 1395 + ncm->port.func.setup = ncm_setup; 1396 + ncm->port.func.disable = ncm_disable; 1397 + 1398 + ncm->port.wrap = ncm_wrap_ntb; 1399 + ncm->port.unwrap = ncm_unwrap_ntb; 1400 + 1401 + status = usb_add_function(c, &ncm->port.func); 1402 + if (status) { 1403 + ncm_string_defs[1].s = NULL; 1404 + kfree(ncm); 1405 + } 1406 + return status; 1407 + }
+16 -13
drivers/usb/gadget/file_storage.c
··· 3392 3392 dev_set_name(&curlun->dev,"%s-lun%d", 3393 3393 dev_name(&gadget->dev), i); 3394 3394 3395 - if ((rc = device_register(&curlun->dev)) != 0) { 3395 + kref_get(&fsg->ref); 3396 + rc = device_register(&curlun->dev); 3397 + if (rc) { 3396 3398 INFO(fsg, "failed to register LUN%d: %d\n", i, rc); 3397 - goto out; 3398 - } 3399 - if ((rc = device_create_file(&curlun->dev, 3400 - &dev_attr_ro)) != 0 || 3401 - (rc = device_create_file(&curlun->dev, 3402 - &dev_attr_nofua)) != 0 || 3403 - (rc = device_create_file(&curlun->dev, 3404 - &dev_attr_file)) != 0) { 3405 - device_unregister(&curlun->dev); 3399 + put_device(&curlun->dev); 3406 3400 goto out; 3407 3401 } 3408 3402 curlun->registered = 1; 3409 - kref_get(&fsg->ref); 3403 + 3404 + rc = device_create_file(&curlun->dev, &dev_attr_ro); 3405 + if (rc) 3406 + goto out; 3407 + rc = device_create_file(&curlun->dev, &dev_attr_nofua); 3408 + if (rc) 3409 + goto out; 3410 + rc = device_create_file(&curlun->dev, &dev_attr_file); 3411 + if (rc) 3412 + goto out; 3410 3413 3411 3414 if (mod_data.file[i] && *mod_data.file[i]) { 3412 - if ((rc = fsg_lun_open(curlun, 3413 - mod_data.file[i])) != 0) 3415 + rc = fsg_lun_open(curlun, mod_data.file[i]); 3416 + if (rc) 3414 3417 goto out; 3415 3418 } else if (!mod_data.removable) { 3416 3419 ERROR(fsg, "no file given for LUN%d\n", i);
+24 -17
drivers/usb/gadget/g_ffs.c
··· 1 + /* 2 + * g_ffs.c -- user mode file system API for USB composite function controllers 3 + * 4 + * Copyright (C) 2010 Samsung Electronics 5 + * Author: Michal Nazarewicz <m.nazarewicz@samsung.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation; either version 2 of the License, or 10 + * (at your option) any later version. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + * You should have received a copy of the GNU General Public License 18 + * along with this program; if not, write to the Free Software 19 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 + */ 21 + 22 + #define pr_fmt(fmt) "g_ffs: " fmt 23 + 1 24 #include <linux/module.h> 2 25 #include <linux/utsname.h> 3 - 4 26 5 27 /* 6 28 * kbuild is not very cooperative with respect to linking separately ··· 65 43 66 44 #include "f_fs.c" 67 45 68 - 69 46 #define DRIVER_NAME "g_ffs" 70 47 #define DRIVER_DESC "USB Function Filesystem" 71 48 #define DRIVER_VERSION "24 Aug 2004" ··· 94 73 module_param_named(bDeviceProtocol, gfs_dev_desc.bDeviceProtocol, byte, 0644); 95 74 MODULE_PARM_DESC(bDeviceProtocol, "USB Device protocol"); 96 75 97 - 98 - 99 76 static const struct usb_descriptor_header *gfs_otg_desc[] = { 100 77 (const struct usb_descriptor_header *) 101 78 &(const struct usb_otg_descriptor) { ··· 110 91 NULL 111 92 }; 112 93 113 - /* string IDs are assigned dynamically */ 114 - 94 + /* String IDs are assigned dynamically */ 115 95 static struct usb_string gfs_strings[] = { 116 96 #ifdef CONFIG_USB_FUNCTIONFS_RNDIS 117 97 { .s = "FunctionFS + RNDIS" }, ··· 131 113 }, 132 114 NULL, 133 115 }; 134 - 135 - 136 
116 137 117 struct gfs_configuration { 138 118 struct usb_configuration c; ··· 154 138 #endif 155 139 }; 156 140 157 - 158 141 static int gfs_bind(struct usb_composite_dev *cdev); 159 142 static int gfs_unbind(struct usb_composite_dev *cdev); 160 143 static int gfs_do_config(struct usb_configuration *c); ··· 166 151 .iProduct = DRIVER_DESC, 167 152 }; 168 153 169 - 170 154 static struct ffs_data *gfs_ffs_data; 171 155 static unsigned long gfs_registered; 172 - 173 156 174 157 static int gfs_init(void) 175 158 { ··· 187 174 functionfs_cleanup(); 188 175 } 189 176 module_exit(gfs_exit); 190 - 191 177 192 178 static int functionfs_ready_callback(struct ffs_data *ffs) 193 179 { ··· 212 200 usb_composite_unregister(&gfs_driver); 213 201 } 214 202 215 - 216 203 static int functionfs_check_dev_callback(const char *dev_name) 217 204 { 218 205 return 0; 219 206 } 220 - 221 - 222 207 223 208 static int gfs_bind(struct usb_composite_dev *cdev) 224 209 { ··· 283 274 return 0; 284 275 } 285 276 286 - 287 277 static int gfs_do_config(struct usb_configuration *c) 288 278 { 289 279 struct gfs_configuration *gc = ··· 322 314 323 315 return 0; 324 316 } 325 - 326 317 327 318 #ifdef CONFIG_USB_FUNCTIONFS_ETH 328 319
+20 -5
drivers/usb/gadget/gadget_chips.h
··· 96 96 97 97 /* Mentor high speed "dual role" controller, in peripheral role */ 98 98 #ifdef CONFIG_USB_GADGET_MUSB_HDRC 99 - #define gadget_is_musbhdrc(g) !strcmp("musb_hdrc", (g)->name) 99 + #define gadget_is_musbhdrc(g) !strcmp("musb-hdrc", (g)->name) 100 100 #else 101 101 #define gadget_is_musbhdrc(g) 0 102 102 #endif ··· 120 120 #define gadget_is_fsl_qe(g) 0 121 121 #endif 122 122 123 - #ifdef CONFIG_USB_GADGET_CI13XXX 124 - #define gadget_is_ci13xxx(g) (!strcmp("ci13xxx_udc", (g)->name)) 123 + #ifdef CONFIG_USB_GADGET_CI13XXX_PCI 124 + #define gadget_is_ci13xxx_pci(g) (!strcmp("ci13xxx_pci", (g)->name)) 125 125 #else 126 - #define gadget_is_ci13xxx(g) 0 126 + #define gadget_is_ci13xxx_pci(g) 0 127 127 #endif 128 128 129 129 // CONFIG_USB_GADGET_SX2 ··· 142 142 #define gadget_is_s3c_hsotg(g) 0 143 143 #endif 144 144 145 + #ifdef CONFIG_USB_GADGET_EG20T 146 + #define gadget_is_pch(g) (!strcmp("pch_udc", (g)->name)) 147 + #else 148 + #define gadget_is_pch(g) 0 149 + #endif 150 + 151 + #ifdef CONFIG_USB_GADGET_CI13XXX_MSM 152 + #define gadget_is_ci13xxx_msm(g) (!strcmp("ci13xxx_msm", (g)->name)) 153 + #else 154 + #define gadget_is_ci13xxx_msm(g) 0 155 + #endif 145 156 146 157 /** 147 158 * usb_gadget_controller_number - support bcdDevice id convention ··· 203 192 return 0x21; 204 193 else if (gadget_is_fsl_qe(gadget)) 205 194 return 0x22; 206 - else if (gadget_is_ci13xxx(gadget)) 195 + else if (gadget_is_ci13xxx_pci(gadget)) 207 196 return 0x23; 208 197 else if (gadget_is_langwell(gadget)) 209 198 return 0x24; ··· 211 200 return 0x25; 212 201 else if (gadget_is_s3c_hsotg(gadget)) 213 202 return 0x26; 203 + else if (gadget_is_pch(gadget)) 204 + return 0x27; 205 + else if (gadget_is_ci13xxx_msm(gadget)) 206 + return 0x28; 214 207 return -ENOENT; 215 208 } 216 209
+6 -2
drivers/usb/gadget/imx_udc.c
··· 1191 1191 return IRQ_HANDLED; 1192 1192 } 1193 1193 1194 + #ifndef MX1_INT_USBD0 1195 + #define MX1_INT_USBD0 MX1_USBD_INT0 1196 + #endif 1197 + 1194 1198 static irqreturn_t imx_udc_bulk_irq(int irq, void *dev) 1195 1199 { 1196 1200 struct imx_udc_struct *imx_usb = dev; 1197 - struct imx_ep_struct *imx_ep = &imx_usb->imx_ep[irq - USBD_INT0]; 1201 + struct imx_ep_struct *imx_ep = &imx_usb->imx_ep[irq - MX1_INT_USBD0]; 1198 1202 int intr = __raw_readl(imx_usb->base + USB_EP_INTR(EP_NO(imx_ep))); 1199 1203 1200 - dump_ep_intr(__func__, irq - USBD_INT0, intr, imx_usb->dev); 1204 + dump_ep_intr(__func__, irq - MX1_INT_USBD0, intr, imx_usb->dev); 1201 1205 1202 1206 if (!imx_usb->driver) { 1203 1207 __raw_writel(intr, imx_usb->base + USB_EP_INTR(EP_NO(imx_ep)));
-3
drivers/usb/gadget/imx_udc.h
··· 23 23 /* Helper macros */ 24 24 #define EP_NO(ep) ((ep->bEndpointAddress) & ~USB_DIR_IN) /* IN:1, OUT:0 */ 25 25 #define EP_DIR(ep) ((ep->bEndpointAddress) & USB_DIR_IN ? 1 : 0) 26 - #define irq_to_ep(irq) (((irq) >= USBD_INT0) || ((irq) <= USBD_INT6) \ 27 - ? ((irq) - USBD_INT0) : (USBD_INT6)) /*should not happen*/ 28 - #define ep_to_irq(ep) (EP_NO((ep)) + USBD_INT0) 29 26 #define IMX_USB_NB_EP 6 30 27 31 28 /* Driver structures */
+23
drivers/usb/gadget/langwell_udc.c
··· 2225 2225 u16 wValue = le16_to_cpu(setup->wValue); 2226 2226 u16 wIndex = le16_to_cpu(setup->wIndex); 2227 2227 u16 wLength = le16_to_cpu(setup->wLength); 2228 + u32 portsc1; 2228 2229 2229 2230 dev_vdbg(&dev->pdev->dev, "---> %s()\n", __func__); 2230 2231 ··· 2312 2311 } else { 2313 2312 dev->remote_wakeup = 0; 2314 2313 dev->dev_status &= ~(1 << wValue); 2314 + } 2315 + break; 2316 + case USB_DEVICE_TEST_MODE: 2317 + dev_dbg(&dev->pdev->dev, "SETUP: TEST MODE\n"); 2318 + if ((wIndex & 0xff) || 2319 + (dev->gadget.speed != USB_SPEED_HIGH)) 2320 + ep0_stall(dev); 2321 + 2322 + switch (wIndex >> 8) { 2323 + case TEST_J: 2324 + case TEST_K: 2325 + case TEST_SE0_NAK: 2326 + case TEST_PACKET: 2327 + case TEST_FORCE_EN: 2328 + if (prime_status_phase(dev, EP_DIR_IN)) 2329 + ep0_stall(dev); 2330 + portsc1 = readl(&dev->op_regs->portsc1); 2331 + portsc1 |= (wIndex & 0xf00) << 8; 2332 + writel(portsc1, &dev->op_regs->portsc1); 2333 + goto end; 2334 + default: 2335 + rc = -EOPNOTSUPP; 2315 2336 } 2316 2337 break; 2317 2338 default:
+1 -1
drivers/usb/gadget/mass_storage.c
··· 102 102 }; 103 103 FSG_MODULE_PARAMETERS(/* no prefix */, mod_data); 104 104 105 - static unsigned long msg_registered = 0; 105 + static unsigned long msg_registered; 106 106 static void msg_cleanup(void); 107 107 108 108 static int msg_thread_exits(struct fsg_common *common)
+294
drivers/usb/gadget/mv_udc.h
··· 1 + 2 + #ifndef __MV_UDC_H 3 + #define __MV_UDC_H 4 + 5 + #define VUSBHS_MAX_PORTS 8 6 + 7 + #define DQH_ALIGNMENT 2048 8 + #define DTD_ALIGNMENT 64 9 + #define DMA_BOUNDARY 4096 10 + 11 + #define EP_DIR_IN 1 12 + #define EP_DIR_OUT 0 13 + 14 + #define DMA_ADDR_INVALID (~(dma_addr_t)0) 15 + 16 + #define EP0_MAX_PKT_SIZE 64 17 + /* ep0 transfer state */ 18 + #define WAIT_FOR_SETUP 0 19 + #define DATA_STATE_XMIT 1 20 + #define DATA_STATE_NEED_ZLP 2 21 + #define WAIT_FOR_OUT_STATUS 3 22 + #define DATA_STATE_RECV 4 23 + 24 + #define CAPLENGTH_MASK (0xff) 25 + #define DCCPARAMS_DEN_MASK (0x1f) 26 + 27 + #define HCSPARAMS_PPC (0x10) 28 + 29 + /* Frame Index Register Bit Masks */ 30 + #define USB_FRINDEX_MASKS 0x3fff 31 + 32 + /* Command Register Bit Masks */ 33 + #define USBCMD_RUN_STOP (0x00000001) 34 + #define USBCMD_CTRL_RESET (0x00000002) 35 + #define USBCMD_SETUP_TRIPWIRE_SET (0x00002000) 36 + #define USBCMD_SETUP_TRIPWIRE_CLEAR (~USBCMD_SETUP_TRIPWIRE_SET) 37 + 38 + #define USBCMD_ATDTW_TRIPWIRE_SET (0x00004000) 39 + #define USBCMD_ATDTW_TRIPWIRE_CLEAR (~USBCMD_ATDTW_TRIPWIRE_SET) 40 + 41 + /* bit 15,3,2 are for frame list size */ 42 + #define USBCMD_FRAME_SIZE_1024 (0x00000000) /* 000 */ 43 + #define USBCMD_FRAME_SIZE_512 (0x00000004) /* 001 */ 44 + #define USBCMD_FRAME_SIZE_256 (0x00000008) /* 010 */ 45 + #define USBCMD_FRAME_SIZE_128 (0x0000000C) /* 011 */ 46 + #define USBCMD_FRAME_SIZE_64 (0x00008000) /* 100 */ 47 + #define USBCMD_FRAME_SIZE_32 (0x00008004) /* 101 */ 48 + #define USBCMD_FRAME_SIZE_16 (0x00008008) /* 110 */ 49 + #define USBCMD_FRAME_SIZE_8 (0x0000800C) /* 111 */ 50 + 51 + #define EPCTRL_TX_ALL_MASK (0xFFFF0000) 52 + #define EPCTRL_RX_ALL_MASK (0x0000FFFF) 53 + 54 + #define EPCTRL_TX_DATA_TOGGLE_RST (0x00400000) 55 + #define EPCTRL_TX_EP_STALL (0x00010000) 56 + #define EPCTRL_RX_EP_STALL (0x00000001) 57 + #define EPCTRL_RX_DATA_TOGGLE_RST (0x00000040) 58 + #define EPCTRL_RX_ENABLE (0x00000080) 59 + #define EPCTRL_TX_ENABLE (0x00800000) 60 + 
#define EPCTRL_CONTROL (0x00000000) 61 + #define EPCTRL_ISOCHRONOUS (0x00040000) 62 + #define EPCTRL_BULK (0x00080000) 63 + #define EPCTRL_INT (0x000C0000) 64 + #define EPCTRL_TX_TYPE (0x000C0000) 65 + #define EPCTRL_RX_TYPE (0x0000000C) 66 + #define EPCTRL_DATA_TOGGLE_INHIBIT (0x00000020) 67 + #define EPCTRL_TX_EP_TYPE_SHIFT (18) 68 + #define EPCTRL_RX_EP_TYPE_SHIFT (2) 69 + 70 + #define EPCOMPLETE_MAX_ENDPOINTS (16) 71 + 72 + /* endpoint list address bit masks */ 73 + #define USB_EP_LIST_ADDRESS_MASK 0xfffff800 74 + 75 + #define PORTSCX_W1C_BITS 0x2a 76 + #define PORTSCX_PORT_RESET 0x00000100 77 + #define PORTSCX_PORT_POWER 0x00001000 78 + #define PORTSCX_FORCE_FULL_SPEED_CONNECT 0x01000000 79 + #define PORTSCX_PAR_XCVR_SELECT 0xC0000000 80 + #define PORTSCX_PORT_FORCE_RESUME 0x00000040 81 + #define PORTSCX_PORT_SUSPEND 0x00000080 82 + #define PORTSCX_PORT_SPEED_FULL 0x00000000 83 + #define PORTSCX_PORT_SPEED_LOW 0x04000000 84 + #define PORTSCX_PORT_SPEED_HIGH 0x08000000 85 + #define PORTSCX_PORT_SPEED_MASK 0x0C000000 86 + 87 + /* USB MODE Register Bit Masks */ 88 + #define USBMODE_CTRL_MODE_IDLE 0x00000000 89 + #define USBMODE_CTRL_MODE_DEVICE 0x00000002 90 + #define USBMODE_CTRL_MODE_HOST 0x00000003 91 + #define USBMODE_CTRL_MODE_RSV 0x00000001 92 + #define USBMODE_SETUP_LOCK_OFF 0x00000008 93 + #define USBMODE_STREAM_DISABLE 0x00000010 94 + 95 + /* USB STS Register Bit Masks */ 96 + #define USBSTS_INT 0x00000001 97 + #define USBSTS_ERR 0x00000002 98 + #define USBSTS_PORT_CHANGE 0x00000004 99 + #define USBSTS_FRM_LST_ROLL 0x00000008 100 + #define USBSTS_SYS_ERR 0x00000010 101 + #define USBSTS_IAA 0x00000020 102 + #define USBSTS_RESET 0x00000040 103 + #define USBSTS_SOF 0x00000080 104 + #define USBSTS_SUSPEND 0x00000100 105 + #define USBSTS_HC_HALTED 0x00001000 106 + #define USBSTS_RCL 0x00002000 107 + #define USBSTS_PERIODIC_SCHEDULE 0x00004000 108 + #define USBSTS_ASYNC_SCHEDULE 0x00008000 109 + 110 + 111 + /* Interrupt Enable Register Bit Masks */ 112 + 
#define USBINTR_INT_EN (0x00000001) 113 + #define USBINTR_ERR_INT_EN (0x00000002) 114 + #define USBINTR_PORT_CHANGE_DETECT_EN (0x00000004) 115 + 116 + #define USBINTR_ASYNC_ADV_AAE (0x00000020) 117 + #define USBINTR_ASYNC_ADV_AAE_ENABLE (0x00000020) 118 + #define USBINTR_ASYNC_ADV_AAE_DISABLE (0xFFFFFFDF) 119 + 120 + #define USBINTR_RESET_EN (0x00000040) 121 + #define USBINTR_SOF_UFRAME_EN (0x00000080) 122 + #define USBINTR_DEVICE_SUSPEND (0x00000100) 123 + 124 + #define USB_DEVICE_ADDRESS_MASK (0xfe000000) 125 + #define USB_DEVICE_ADDRESS_BIT_SHIFT (25) 126 + 127 + struct mv_cap_regs { 128 + u32 caplength_hciversion; 129 + u32 hcsparams; /* HC structural parameters */ 130 + u32 hccparams; /* HC Capability Parameters*/ 131 + u32 reserved[5]; 132 + u32 dciversion; /* DC version number and reserved 16 bits */ 133 + u32 dccparams; /* DC Capability Parameters */ 134 + }; 135 + 136 + struct mv_op_regs { 137 + u32 usbcmd; /* Command register */ 138 + u32 usbsts; /* Status register */ 139 + u32 usbintr; /* Interrupt enable */ 140 + u32 frindex; /* Frame index */ 141 + u32 reserved1[1]; 142 + u32 deviceaddr; /* Device Address */ 143 + u32 eplistaddr; /* Endpoint List Address */ 144 + u32 ttctrl; /* HOST TT status and control */ 145 + u32 burstsize; /* Programmable Burst Size */ 146 + u32 txfilltuning; /* Host Transmit Pre-Buffer Packet Tuning */ 147 + u32 reserved[4]; 148 + u32 epnak; /* Endpoint NAK */ 149 + u32 epnaken; /* Endpoint NAK Enable */ 150 + u32 configflag; /* Configured Flag register */ 151 + u32 portsc[VUSBHS_MAX_PORTS]; /* Port Status/Control x, x = 1..8 */ 152 + u32 otgsc; 153 + u32 usbmode; /* USB Host/Device mode */ 154 + u32 epsetupstat; /* Endpoint Setup Status */ 155 + u32 epprime; /* Endpoint Initialize */ 156 + u32 epflush; /* Endpoint De-initialize */ 157 + u32 epstatus; /* Endpoint Status */ 158 + u32 epcomplete; /* Endpoint Interrupt On Complete */ 159 + u32 epctrlx[16]; /* Endpoint Control, where x = 0.. 
15 */ 160 + u32 mcr; /* Mux Control */ 161 + u32 isr; /* Interrupt Status */ 162 + u32 ier; /* Interrupt Enable */ 163 + }; 164 + 165 + struct mv_udc { 166 + struct usb_gadget gadget; 167 + struct usb_gadget_driver *driver; 168 + spinlock_t lock; 169 + struct completion *done; 170 + struct platform_device *dev; 171 + int irq; 172 + 173 + struct mv_cap_regs __iomem *cap_regs; 174 + struct mv_op_regs __iomem *op_regs; 175 + unsigned int phy_regs; 176 + unsigned int max_eps; 177 + struct mv_dqh *ep_dqh; 178 + size_t ep_dqh_size; 179 + dma_addr_t ep_dqh_dma; 180 + 181 + struct dma_pool *dtd_pool; 182 + struct mv_ep *eps; 183 + 184 + struct mv_dtd *dtd_head; 185 + struct mv_dtd *dtd_tail; 186 + unsigned int dtd_entries; 187 + 188 + struct mv_req *status_req; 189 + struct usb_ctrlrequest local_setup_buff; 190 + 191 + unsigned int resume_state; /* USB state to resume */ 192 + unsigned int usb_state; /* USB current state */ 193 + unsigned int ep0_state; /* Endpoint zero state */ 194 + unsigned int ep0_dir; 195 + 196 + unsigned int dev_addr; 197 + 198 + int errors; 199 + unsigned softconnect:1, 200 + vbus_active:1, 201 + remote_wakeup:1, 202 + softconnected:1, 203 + force_fs:1; 204 + struct clk *clk; 205 + }; 206 + 207 + /* endpoint data structure */ 208 + struct mv_ep { 209 + struct usb_ep ep; 210 + struct mv_udc *udc; 211 + struct list_head queue; 212 + struct mv_dqh *dqh; 213 + const struct usb_endpoint_descriptor *desc; 214 + u32 direction; 215 + char name[14]; 216 + unsigned stopped:1, 217 + wedge:1, 218 + ep_type:2, 219 + ep_num:8; 220 + }; 221 + 222 + /* request data structure */ 223 + struct mv_req { 224 + struct usb_request req; 225 + struct mv_dtd *dtd, *head, *tail; 226 + struct mv_ep *ep; 227 + struct list_head queue; 228 + unsigned dtd_count; 229 + unsigned mapped:1; 230 + }; 231 + 232 + #define EP_QUEUE_HEAD_MULT_POS 30 233 + #define EP_QUEUE_HEAD_ZLT_SEL 0x20000000 234 + #define EP_QUEUE_HEAD_MAX_PKT_LEN_POS 16 235 + #define EP_QUEUE_HEAD_MAX_PKT_LEN(ep_info) 
(((ep_info)>>16)&0x07ff) 236 + #define EP_QUEUE_HEAD_IOS 0x00008000 237 + #define EP_QUEUE_HEAD_NEXT_TERMINATE 0x00000001 238 + #define EP_QUEUE_HEAD_IOC 0x00008000 239 + #define EP_QUEUE_HEAD_MULTO 0x00000C00 240 + #define EP_QUEUE_HEAD_STATUS_HALT 0x00000040 241 + #define EP_QUEUE_HEAD_STATUS_ACTIVE 0x00000080 242 + #define EP_QUEUE_CURRENT_OFFSET_MASK 0x00000FFF 243 + #define EP_QUEUE_HEAD_NEXT_POINTER_MASK 0xFFFFFFE0 244 + #define EP_QUEUE_FRINDEX_MASK 0x000007FF 245 + #define EP_MAX_LENGTH_TRANSFER 0x4000 246 + 247 + struct mv_dqh { 248 + /* Bits 16..26 Bit 15 is Interrupt On Setup */ 249 + u32 max_packet_length; 250 + u32 curr_dtd_ptr; /* Current dTD Pointer */ 251 + u32 next_dtd_ptr; /* Next dTD Pointer */ 252 + /* Total bytes (16..30), IOC (15), INT (8), STS (0-7) */ 253 + u32 size_ioc_int_sts; 254 + u32 buff_ptr0; /* Buffer pointer Page 0 (12-31) */ 255 + u32 buff_ptr1; /* Buffer pointer Page 1 (12-31) */ 256 + u32 buff_ptr2; /* Buffer pointer Page 2 (12-31) */ 257 + u32 buff_ptr3; /* Buffer pointer Page 3 (12-31) */ 258 + u32 buff_ptr4; /* Buffer pointer Page 4 (12-31) */ 259 + u32 reserved1; 260 + /* 8 bytes of setup data that follows the Setup PID */ 261 + u8 setup_buffer[8]; 262 + u32 reserved2[4]; 263 + }; 264 + 265 + 266 + #define DTD_NEXT_TERMINATE (0x00000001) 267 + #define DTD_IOC (0x00008000) 268 + #define DTD_STATUS_ACTIVE (0x00000080) 269 + #define DTD_STATUS_HALTED (0x00000040) 270 + #define DTD_STATUS_DATA_BUFF_ERR (0x00000020) 271 + #define DTD_STATUS_TRANSACTION_ERR (0x00000008) 272 + #define DTD_RESERVED_FIELDS (0x00007F00) 273 + #define DTD_ERROR_MASK (0x68) 274 + #define DTD_ADDR_MASK (0xFFFFFFE0) 275 + #define DTD_PACKET_SIZE 0x7FFF0000 276 + #define DTD_LENGTH_BIT_POS (16) 277 + 278 + struct mv_dtd { 279 + u32 dtd_next; 280 + u32 size_ioc_sts; 281 + u32 buff_ptr0; /* Buffer pointer Page 0 */ 282 + u32 buff_ptr1; /* Buffer pointer Page 1 */ 283 + u32 buff_ptr2; /* Buffer pointer Page 2 */ 284 + u32 buff_ptr3; /* Buffer pointer Page 3 */ 
285 + u32 buff_ptr4; /* Buffer pointer Page 4 */ 286 + u32 scratch_ptr; 287 + /* 32 bytes */ 288 + dma_addr_t td_dma; /* dma address for this td */ 289 + struct mv_dtd *next_dtd_virt; 290 + }; 291 + 292 + extern int mv_udc_phy_init(unsigned int base); 293 + 294 + #endif
+2149
drivers/usb/gadget/mv_udc_core.c
··· 1 + #include <linux/module.h> 2 + #include <linux/pci.h> 3 + #include <linux/dma-mapping.h> 4 + #include <linux/dmapool.h> 5 + #include <linux/kernel.h> 6 + #include <linux/delay.h> 7 + #include <linux/ioport.h> 8 + #include <linux/sched.h> 9 + #include <linux/slab.h> 10 + #include <linux/errno.h> 11 + #include <linux/init.h> 12 + #include <linux/timer.h> 13 + #include <linux/list.h> 14 + #include <linux/interrupt.h> 15 + #include <linux/moduleparam.h> 16 + #include <linux/device.h> 17 + #include <linux/usb/ch9.h> 18 + #include <linux/usb/gadget.h> 19 + #include <linux/usb/otg.h> 20 + #include <linux/pm.h> 21 + #include <linux/io.h> 22 + #include <linux/irq.h> 23 + #include <linux/platform_device.h> 24 + #include <linux/clk.h> 25 + #include <asm/system.h> 26 + #include <asm/unaligned.h> 27 + 28 + #include "mv_udc.h" 29 + 30 + #define DRIVER_DESC "Marvell PXA USB Device Controller driver" 31 + #define DRIVER_VERSION "8 Nov 2010" 32 + 33 + #define ep_dir(ep) (((ep)->ep_num == 0) ? \ 34 + ((ep)->udc->ep0_dir) : ((ep)->direction)) 35 + 36 + /* timeout value -- usec */ 37 + #define RESET_TIMEOUT 10000 38 + #define FLUSH_TIMEOUT 10000 39 + #define EPSTATUS_TIMEOUT 10000 40 + #define PRIME_TIMEOUT 10000 41 + #define READSAFE_TIMEOUT 1000 42 + #define DTD_TIMEOUT 1000 43 + 44 + #define LOOPS_USEC_SHIFT 4 45 + #define LOOPS_USEC (1 << LOOPS_USEC_SHIFT) 46 + #define LOOPS(timeout) ((timeout) >> LOOPS_USEC_SHIFT) 47 + 48 + static const char driver_name[] = "mv_udc"; 49 + static const char driver_desc[] = DRIVER_DESC; 50 + 51 + /* controller device global variable */ 52 + static struct mv_udc *the_controller; 53 + int mv_usb_otgsc; 54 + 55 + static void nuke(struct mv_ep *ep, int status); 56 + 57 + /* for endpoint 0 operations */ 58 + static const struct usb_endpoint_descriptor mv_ep0_desc = { 59 + .bLength = USB_DT_ENDPOINT_SIZE, 60 + .bDescriptorType = USB_DT_ENDPOINT, 61 + .bEndpointAddress = 0, 62 + .bmAttributes = USB_ENDPOINT_XFER_CONTROL, 63 + .wMaxPacketSize = 
EP0_MAX_PKT_SIZE, 64 + }; 65 + 66 + static void ep0_reset(struct mv_udc *udc) 67 + { 68 + struct mv_ep *ep; 69 + u32 epctrlx; 70 + int i = 0; 71 + 72 + /* ep0 in and out */ 73 + for (i = 0; i < 2; i++) { 74 + ep = &udc->eps[i]; 75 + ep->udc = udc; 76 + 77 + /* ep0 dQH */ 78 + ep->dqh = &udc->ep_dqh[i]; 79 + 80 + /* configure ep0 endpoint capabilities in dQH */ 81 + ep->dqh->max_packet_length = 82 + (EP0_MAX_PKT_SIZE << EP_QUEUE_HEAD_MAX_PKT_LEN_POS) 83 + | EP_QUEUE_HEAD_IOS; 84 + 85 + epctrlx = readl(&udc->op_regs->epctrlx[0]); 86 + if (i) { /* TX */ 87 + epctrlx |= EPCTRL_TX_ENABLE | EPCTRL_TX_DATA_TOGGLE_RST 88 + | (USB_ENDPOINT_XFER_CONTROL 89 + << EPCTRL_TX_EP_TYPE_SHIFT); 90 + 91 + } else { /* RX */ 92 + epctrlx |= EPCTRL_RX_ENABLE | EPCTRL_RX_DATA_TOGGLE_RST 93 + | (USB_ENDPOINT_XFER_CONTROL 94 + << EPCTRL_RX_EP_TYPE_SHIFT); 95 + } 96 + 97 + writel(epctrlx, &udc->op_regs->epctrlx[0]); 98 + } 99 + } 100 + 101 + /* protocol ep0 stall, will automatically be cleared on new transaction */ 102 + static void ep0_stall(struct mv_udc *udc) 103 + { 104 + u32 epctrlx; 105 + 106 + /* set TX and RX to stall */ 107 + epctrlx = readl(&udc->op_regs->epctrlx[0]); 108 + epctrlx |= EPCTRL_RX_EP_STALL | EPCTRL_TX_EP_STALL; 109 + writel(epctrlx, &udc->op_regs->epctrlx[0]); 110 + 111 + /* update ep0 state */ 112 + udc->ep0_state = WAIT_FOR_SETUP; 113 + udc->ep0_dir = EP_DIR_OUT; 114 + } 115 + 116 + static int process_ep_req(struct mv_udc *udc, int index, 117 + struct mv_req *curr_req) 118 + { 119 + struct mv_dtd *curr_dtd; 120 + struct mv_dqh *curr_dqh; 121 + int td_complete, actual, remaining_length; 122 + int i, direction; 123 + int retval = 0; 124 + u32 errors; 125 + 126 + curr_dqh = &udc->ep_dqh[index]; 127 + direction = index % 2; 128 + 129 + curr_dtd = curr_req->head; 130 + td_complete = 0; 131 + actual = curr_req->req.length; 132 + 133 + for (i = 0; i < curr_req->dtd_count; i++) { 134 + if (curr_dtd->size_ioc_sts & DTD_STATUS_ACTIVE) { 135 + dev_dbg(&udc->dev->dev, "%s, dTD 
not completed\n",
136 + udc->eps[index].name);
137 + return 1;
138 + }
139 +
140 + errors = curr_dtd->size_ioc_sts & DTD_ERROR_MASK;
141 + if (!errors) {
142 + remaining_length =
143 + (curr_dtd->size_ioc_sts & DTD_PACKET_SIZE)
144 + >> DTD_LENGTH_BIT_POS;
145 + actual -= remaining_length;
146 + } else {
147 + dev_info(&udc->dev->dev,
148 + "complete_tr error: ep=%d %s: error = 0x%x\n",
149 + index >> 1, direction ? "SEND" : "RECV",
150 + errors);
151 + if (errors & DTD_STATUS_HALTED) {
152 + /* Clear the errors and Halt condition */
153 + curr_dqh->size_ioc_int_sts &= ~errors;
154 + retval = -EPIPE;
155 + } else if (errors & DTD_STATUS_DATA_BUFF_ERR) {
156 + retval = -EPROTO;
157 + } else if (errors & DTD_STATUS_TRANSACTION_ERR) {
158 + retval = -EILSEQ;
159 + }
160 + }
161 + if (i != curr_req->dtd_count - 1)
162 + curr_dtd = (struct mv_dtd *)curr_dtd->next_dtd_virt;
163 + }
164 + if (retval)
165 + return retval;
166 +
167 + curr_req->req.actual = actual;
168 +
169 + return 0;
170 + }
171 +
172 + /*
173 + * done() - retire a request; caller blocked irqs
174 + * @status : request status to be set, only works when
175 + * request is still in progress.
176 + */
177 + static void done(struct mv_ep *ep, struct mv_req *req, int status)
178 + {
179 + struct mv_udc *udc = NULL;
180 + unsigned char stopped = ep->stopped;
181 + struct mv_dtd *curr_td, *next_td;
182 + int j;
183 +
184 + udc = (struct mv_udc *)ep->udc;
185 + /* Remove the req from ep->queue */
186 + list_del_init(&req->queue);
187 +
188 + /* req.status should be set as -EINPROGRESS in ep_queue() */
189 + if (req->req.status == -EINPROGRESS)
190 + req->req.status = status;
191 + else
192 + status = req->req.status;
193 +
194 + /* Free dtd for the request */
195 + next_td = req->head;
196 + for (j = 0; j < req->dtd_count; j++) {
197 + curr_td = next_td;
198 + if (j != req->dtd_count - 1)
199 + next_td = curr_td->next_dtd_virt;
200 + dma_pool_free(udc->dtd_pool, curr_td, curr_td->td_dma);
201 + }
202 +
203 + if (req->mapped) {
204 + dma_unmap_single(ep->udc->gadget.dev.parent,
205 + req->req.dma, req->req.length,
206 + ((ep_dir(ep) == EP_DIR_IN) ?
207 + DMA_TO_DEVICE : DMA_FROM_DEVICE));
208 + req->req.dma = DMA_ADDR_INVALID;
209 + req->mapped = 0;
210 + } else
211 + dma_sync_single_for_cpu(ep->udc->gadget.dev.parent,
212 + req->req.dma, req->req.length,
213 + ((ep_dir(ep) == EP_DIR_IN) ?
214 + DMA_TO_DEVICE : DMA_FROM_DEVICE)); 215 + 216 + if (status && (status != -ESHUTDOWN)) 217 + dev_info(&udc->dev->dev, "complete %s req %p stat %d len %u/%u", 218 + ep->ep.name, &req->req, status, 219 + req->req.actual, req->req.length); 220 + 221 + ep->stopped = 1; 222 + 223 + spin_unlock(&ep->udc->lock); 224 + /* 225 + * complete() is from gadget layer, 226 + * eg fsg->bulk_in_complete() 227 + */ 228 + if (req->req.complete) 229 + req->req.complete(&ep->ep, &req->req); 230 + 231 + spin_lock(&ep->udc->lock); 232 + ep->stopped = stopped; 233 + } 234 + 235 + static int queue_dtd(struct mv_ep *ep, struct mv_req *req) 236 + { 237 + u32 tmp, epstatus, bit_pos, direction; 238 + struct mv_udc *udc; 239 + struct mv_dqh *dqh; 240 + unsigned int loops; 241 + int readsafe, retval = 0; 242 + 243 + udc = ep->udc; 244 + direction = ep_dir(ep); 245 + dqh = &(udc->ep_dqh[ep->ep_num * 2 + direction]); 246 + bit_pos = 1 << (((direction == EP_DIR_OUT) ? 0 : 16) + ep->ep_num); 247 + 248 + /* check if the pipe is empty */ 249 + if (!(list_empty(&ep->queue))) { 250 + struct mv_req *lastreq; 251 + lastreq = list_entry(ep->queue.prev, struct mv_req, queue); 252 + lastreq->tail->dtd_next = 253 + req->head->td_dma & EP_QUEUE_HEAD_NEXT_POINTER_MASK; 254 + if (readl(&udc->op_regs->epprime) & bit_pos) { 255 + loops = LOOPS(PRIME_TIMEOUT); 256 + while (readl(&udc->op_regs->epprime) & bit_pos) { 257 + if (loops == 0) { 258 + retval = -ETIME; 259 + goto done; 260 + } 261 + udelay(LOOPS_USEC); 262 + loops--; 263 + } 264 + if (readl(&udc->op_regs->epstatus) & bit_pos) 265 + goto done; 266 + } 267 + readsafe = 0; 268 + loops = LOOPS(READSAFE_TIMEOUT); 269 + while (readsafe == 0) { 270 + if (loops == 0) { 271 + retval = -ETIME; 272 + goto done; 273 + } 274 + /* start with setting the semaphores */ 275 + tmp = readl(&udc->op_regs->usbcmd); 276 + tmp |= USBCMD_ATDTW_TRIPWIRE_SET; 277 + writel(tmp, &udc->op_regs->usbcmd); 278 + 279 + /* read the endpoint status */ 280 + epstatus = 
readl(&udc->op_regs->epstatus) & bit_pos;
281 +
282 + /*
283 + * Reread the ATDTW semaphore bit to check if it is
284 + * cleared. When the hardware sees a hazard, it will clear
285 + * the bit; otherwise it remains set to 1 and we can
286 + * proceed with priming of the endpoint if it is not already
287 + * primed.
288 + */
289 + if (readl(&udc->op_regs->usbcmd)
290 + & USBCMD_ATDTW_TRIPWIRE_SET) {
291 + readsafe = 1;
292 + }
293 + loops--;
294 + udelay(LOOPS_USEC);
295 + }
296 +
297 + /* Clear the semaphore */
298 + tmp = readl(&udc->op_regs->usbcmd);
299 + tmp &= USBCMD_ATDTW_TRIPWIRE_CLEAR;
300 + writel(tmp, &udc->op_regs->usbcmd);
301 +
302 + /* If endpoint is not active, we activate it now. */
303 + if (!epstatus) {
304 + if (direction == EP_DIR_IN) {
305 + struct mv_dtd *curr_dtd = dma_to_virt(
306 + &udc->dev->dev, dqh->curr_dtd_ptr);
307 +
308 + loops = LOOPS(DTD_TIMEOUT);
309 + while (curr_dtd->size_ioc_sts
310 + & DTD_STATUS_ACTIVE) {
311 + if (loops == 0) {
312 + retval = -ETIME;
313 + goto done;
314 + }
315 + loops--;
316 + udelay(LOOPS_USEC);
317 + }
318 + }
319 + /* No other transfers on the queue */
320 +
321 + /* Write dQH next pointer and terminate bit to 0 */
322 + dqh->next_dtd_ptr = req->head->td_dma
323 + & EP_QUEUE_HEAD_NEXT_POINTER_MASK;
324 + dqh->size_ioc_int_sts = 0;
325 +
326 + /*
327 + * Ensure that updates to the QH will
328 + * occur before priming.
329 + */
330 + wmb();
331 +
332 + /* Prime the Endpoint */
333 + writel(bit_pos, &udc->op_regs->epprime);
334 + }
335 + } else {
336 + /* Write dQH next pointer and terminate bit to 0 */
337 + dqh->next_dtd_ptr = req->head->td_dma
338 + & EP_QUEUE_HEAD_NEXT_POINTER_MASK;
339 + dqh->size_ioc_int_sts = 0;
340 +
341 + /* Ensure that updates to the QH will occur before priming.
*/
342 + wmb();
343 +
344 + /* Prime the Endpoint */
345 + writel(bit_pos, &udc->op_regs->epprime);
346 +
347 + if (direction == EP_DIR_IN) {
348 + /* FIXME add status check after priming the IN ep */
349 + int prime_again;
350 + u32 curr_dtd_ptr = dqh->curr_dtd_ptr;
351 +
352 + loops = LOOPS(DTD_TIMEOUT);
353 + prime_again = 0;
354 + while (curr_dtd_ptr != req->head->td_dma) {
355 + curr_dtd_ptr = dqh->curr_dtd_ptr;
356 + if (loops == 0) {
357 + dev_err(&udc->dev->dev,
358 + "failed to prime %s\n",
359 + ep->name);
360 + retval = -ETIME;
361 + goto done;
362 + }
363 + loops--;
364 + udelay(LOOPS_USEC);
365 +
366 + if (loops == (LOOPS(DTD_TIMEOUT) >> 2)) {
367 + if (prime_again)
368 + goto done;
369 + dev_info(&udc->dev->dev,
370 + "prime again\n");
371 + writel(bit_pos,
372 + &udc->op_regs->epprime);
373 + prime_again = 1;
374 + }
375 + }
376 + }
377 + }
378 + done:
379 + return retval;
380 + }
381 +
382 + static struct mv_dtd *build_dtd(struct mv_req *req, unsigned *length,
383 + dma_addr_t *dma, int *is_last)
384 + {
385 + u32 temp;
386 + struct mv_dtd *dtd;
387 + struct mv_udc *udc;
388 +
389 + /* how big will this transfer be?
*/
390 + *length = min(req->req.length - req->req.actual,
391 + (unsigned)EP_MAX_LENGTH_TRANSFER);
392 +
393 + udc = req->ep->udc;
394 +
395 + /*
396 + * Be careful that no __GFP_HIGHMEM is set,
397 + * or we cannot use dma_to_virt
398 + */
399 + dtd = dma_pool_alloc(udc->dtd_pool, GFP_KERNEL, dma);
400 + if (dtd == NULL)
401 + return dtd;
402 +
403 + dtd->td_dma = *dma;
404 + /* initialize buffer page pointers */
405 + temp = (u32)(req->req.dma + req->req.actual);
406 + dtd->buff_ptr0 = cpu_to_le32(temp);
407 + temp &= ~0xFFF;
408 + dtd->buff_ptr1 = cpu_to_le32(temp + 0x1000);
409 + dtd->buff_ptr2 = cpu_to_le32(temp + 0x2000);
410 + dtd->buff_ptr3 = cpu_to_le32(temp + 0x3000);
411 + dtd->buff_ptr4 = cpu_to_le32(temp + 0x4000);
412 +
413 + req->req.actual += *length;
414 +
415 + /* zlp is needed if req->req.zero is set */
416 + if (req->req.zero) {
417 + if (*length == 0 || (*length % req->ep->ep.maxpacket) != 0)
418 + *is_last = 1;
419 + else
420 + *is_last = 0;
421 + } else if (req->req.length == req->req.actual)
422 + *is_last = 1;
423 + else
424 + *is_last = 0;
425 +
426 + /* Fill in the transfer size; set active bit */
427 + temp = ((*length << DTD_LENGTH_BIT_POS) | DTD_STATUS_ACTIVE);
428 +
429 + /* Enable interrupt for the last dtd of a request */
430 + if (*is_last && !req->req.no_interrupt)
431 + temp |= DTD_IOC;
432 +
433 + dtd->size_ioc_sts = temp;
434 +
435 + mb();
436 +
437 + return dtd;
438 + }
439 +
440 + /* generate dTD linked list for a request */
441 + static int req_to_dtd(struct mv_req *req)
442 + {
443 + unsigned count;
444 + int is_last, is_first = 1;
445 + struct mv_dtd *dtd, *last_dtd = NULL;
446 + struct mv_udc *udc;
447 + dma_addr_t dma;
448 +
449 + udc = req->ep->udc;
450 +
451 + do {
452 + dtd = build_dtd(req, &count, &dma, &is_last);
453 + if (dtd == NULL)
454 + return -ENOMEM;
455 +
456 + if (is_first) {
457 + is_first = 0;
458 + req->head = dtd;
459 + } else {
460 + last_dtd->dtd_next = dma;
461 + last_dtd->next_dtd_virt = dtd;
462 +
} 463 + last_dtd = dtd; 464 + req->dtd_count++; 465 + } while (!is_last); 466 + 467 + /* set terminate bit to 1 for the last dTD */ 468 + dtd->dtd_next = DTD_NEXT_TERMINATE; 469 + 470 + req->tail = dtd; 471 + 472 + return 0; 473 + } 474 + 475 + static int mv_ep_enable(struct usb_ep *_ep, 476 + const struct usb_endpoint_descriptor *desc) 477 + { 478 + struct mv_udc *udc; 479 + struct mv_ep *ep; 480 + struct mv_dqh *dqh; 481 + u16 max = 0; 482 + u32 bit_pos, epctrlx, direction; 483 + unsigned char zlt = 0, ios = 0, mult = 0; 484 + 485 + ep = container_of(_ep, struct mv_ep, ep); 486 + udc = ep->udc; 487 + 488 + if (!_ep || !desc || ep->desc 489 + || desc->bDescriptorType != USB_DT_ENDPOINT) 490 + return -EINVAL; 491 + 492 + if (!udc->driver || udc->gadget.speed == USB_SPEED_UNKNOWN) 493 + return -ESHUTDOWN; 494 + 495 + direction = ep_dir(ep); 496 + max = le16_to_cpu(desc->wMaxPacketSize); 497 + 498 + /* 499 + * disable HW zero length termination select 500 + * driver handles zero length packet through req->req.zero 501 + */ 502 + zlt = 1; 503 + 504 + /* Get the endpoint queue head address */ 505 + dqh = (struct mv_dqh *)ep->dqh; 506 + 507 + bit_pos = 1 << ((direction == EP_DIR_OUT ? 0 : 16) + ep->ep_num); 508 + 509 + /* Check if the Endpoint is Primed */ 510 + if ((readl(&udc->op_regs->epprime) & bit_pos) 511 + || (readl(&udc->op_regs->epstatus) & bit_pos)) { 512 + dev_info(&udc->dev->dev, 513 + "ep=%d %s: Init ERROR: ENDPTPRIME=0x%x," 514 + " ENDPTSTATUS=0x%x, bit_pos=0x%x\n", 515 + (unsigned)ep->ep_num, direction ? 
"SEND" : "RECV",
516 + (unsigned)readl(&udc->op_regs->epprime),
517 + (unsigned)readl(&udc->op_regs->epstatus),
518 + (unsigned)bit_pos);
519 + goto en_done;
520 + }
521 + /* Set the max packet length, interrupt on Setup and Mult fields */
522 + switch (desc->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) {
523 + case USB_ENDPOINT_XFER_BULK:
524 + zlt = 1;
525 + mult = 0;
526 + break;
527 + case USB_ENDPOINT_XFER_CONTROL:
528 + ios = 1; /* fall through */
529 + case USB_ENDPOINT_XFER_INT:
530 + mult = 0;
531 + break;
532 + case USB_ENDPOINT_XFER_ISOC:
533 + /* Calculate transactions needed for high bandwidth iso */
534 + mult = (unsigned char)(1 + ((max >> 11) & 0x03));
535 + max = max & 0x7ff; /* bit 0~10 */
536 + /* 3 transactions at most */
537 + if (mult > 3)
538 + goto en_done;
539 + break;
540 + default:
541 + goto en_done;
542 + }
543 + dqh->max_packet_length = (max << EP_QUEUE_HEAD_MAX_PKT_LEN_POS)
544 + | (mult << EP_QUEUE_HEAD_MULT_POS)
545 + | (zlt ? EP_QUEUE_HEAD_ZLT_SEL : 0)
546 + | (ios ? EP_QUEUE_HEAD_IOS : 0);
547 + dqh->next_dtd_ptr = 1;
548 + dqh->size_ioc_int_sts = 0;
549 +
550 + ep->ep.maxpacket = max;
551 + ep->desc = desc;
552 + ep->stopped = 0;
553 +
554 + /* Enable the endpoint for Rx or Tx and set the endpoint type */
555 + epctrlx = readl(&udc->op_regs->epctrlx[ep->ep_num]);
556 + if (direction == EP_DIR_IN) {
557 + epctrlx &= ~EPCTRL_TX_ALL_MASK;
558 + epctrlx |= EPCTRL_TX_ENABLE | EPCTRL_TX_DATA_TOGGLE_RST
559 + | ((desc->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK)
560 + << EPCTRL_TX_EP_TYPE_SHIFT);
561 + } else {
562 + epctrlx &= ~EPCTRL_RX_ALL_MASK;
563 + epctrlx |= EPCTRL_RX_ENABLE | EPCTRL_RX_DATA_TOGGLE_RST
564 + | ((desc->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK)
565 + << EPCTRL_RX_EP_TYPE_SHIFT);
566 + }
567 + writel(epctrlx, &udc->op_regs->epctrlx[ep->ep_num]);
568 +
569 + /*
570 + * Implement Guideline (GL# USB-7) The unused endpoint type must
571 + * be programmed to bulk.
572 + */ 573 + epctrlx = readl(&udc->op_regs->epctrlx[ep->ep_num]); 574 + if ((epctrlx & EPCTRL_RX_ENABLE) == 0) { 575 + epctrlx |= ((desc->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) 576 + << EPCTRL_RX_EP_TYPE_SHIFT); 577 + writel(epctrlx, &udc->op_regs->epctrlx[ep->ep_num]); 578 + } 579 + 580 + epctrlx = readl(&udc->op_regs->epctrlx[ep->ep_num]); 581 + if ((epctrlx & EPCTRL_TX_ENABLE) == 0) { 582 + epctrlx |= ((desc->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) 583 + << EPCTRL_TX_EP_TYPE_SHIFT); 584 + writel(epctrlx, &udc->op_regs->epctrlx[ep->ep_num]); 585 + } 586 + 587 + return 0; 588 + en_done: 589 + return -EINVAL; 590 + } 591 + 592 + static int mv_ep_disable(struct usb_ep *_ep) 593 + { 594 + struct mv_udc *udc; 595 + struct mv_ep *ep; 596 + struct mv_dqh *dqh; 597 + u32 bit_pos, epctrlx, direction; 598 + 599 + ep = container_of(_ep, struct mv_ep, ep); 600 + if ((_ep == NULL) || !ep->desc) 601 + return -EINVAL; 602 + 603 + udc = ep->udc; 604 + 605 + /* Get the endpoint queue head address */ 606 + dqh = ep->dqh; 607 + 608 + direction = ep_dir(ep); 609 + bit_pos = 1 << ((direction == EP_DIR_OUT ? 0 : 16) + ep->ep_num); 610 + 611 + /* Reset the max packet length and the interrupt on Setup */ 612 + dqh->max_packet_length = 0; 613 + 614 + /* Disable the endpoint for Rx or Tx and reset the endpoint type */ 615 + epctrlx = readl(&udc->op_regs->epctrlx[ep->ep_num]); 616 + epctrlx &= ~((direction == EP_DIR_IN) 617 + ? 
(EPCTRL_TX_ENABLE | EPCTRL_TX_TYPE) 618 + : (EPCTRL_RX_ENABLE | EPCTRL_RX_TYPE)); 619 + writel(epctrlx, &udc->op_regs->epctrlx[ep->ep_num]); 620 + 621 + /* nuke all pending requests (does flush) */ 622 + nuke(ep, -ESHUTDOWN); 623 + 624 + ep->desc = NULL; 625 + ep->stopped = 1; 626 + return 0; 627 + } 628 + 629 + static struct usb_request * 630 + mv_alloc_request(struct usb_ep *_ep, gfp_t gfp_flags) 631 + { 632 + struct mv_req *req = NULL; 633 + 634 + req = kzalloc(sizeof *req, gfp_flags); 635 + if (!req) 636 + return NULL; 637 + 638 + req->req.dma = DMA_ADDR_INVALID; 639 + INIT_LIST_HEAD(&req->queue); 640 + 641 + return &req->req; 642 + } 643 + 644 + static void mv_free_request(struct usb_ep *_ep, struct usb_request *_req) 645 + { 646 + struct mv_req *req = NULL; 647 + 648 + req = container_of(_req, struct mv_req, req); 649 + 650 + if (_req) 651 + kfree(req); 652 + } 653 + 654 + static void mv_ep_fifo_flush(struct usb_ep *_ep) 655 + { 656 + struct mv_udc *udc; 657 + u32 bit_pos, direction; 658 + struct mv_ep *ep = container_of(_ep, struct mv_ep, ep); 659 + unsigned int loops; 660 + 661 + udc = ep->udc; 662 + direction = ep_dir(ep); 663 + bit_pos = 1 << ((direction == EP_DIR_OUT ? 
0 : 16) + ep->ep_num); 664 + /* 665 + * Flushing will halt the pipe 666 + * Write 1 to the Flush register 667 + */ 668 + writel(bit_pos, &udc->op_regs->epflush); 669 + 670 + /* Wait until flushing completed */ 671 + loops = LOOPS(FLUSH_TIMEOUT); 672 + while (readl(&udc->op_regs->epflush) & bit_pos) { 673 + /* 674 + * ENDPTFLUSH bit should be cleared to indicate this 675 + * operation is complete 676 + */ 677 + if (loops == 0) { 678 + dev_err(&udc->dev->dev, 679 + "TIMEOUT for ENDPTFLUSH=0x%x, bit_pos=0x%x\n", 680 + (unsigned)readl(&udc->op_regs->epflush), 681 + (unsigned)bit_pos); 682 + return; 683 + } 684 + loops--; 685 + udelay(LOOPS_USEC); 686 + } 687 + loops = LOOPS(EPSTATUS_TIMEOUT); 688 + while (readl(&udc->op_regs->epstatus) & bit_pos) { 689 + unsigned int inter_loops; 690 + 691 + if (loops == 0) { 692 + dev_err(&udc->dev->dev, 693 + "TIMEOUT for ENDPTSTATUS=0x%x, bit_pos=0x%x\n", 694 + (unsigned)readl(&udc->op_regs->epstatus), 695 + (unsigned)bit_pos); 696 + return; 697 + } 698 + /* Write 1 to the Flush register */ 699 + writel(bit_pos, &udc->op_regs->epflush); 700 + 701 + /* Wait until flushing completed */ 702 + inter_loops = LOOPS(FLUSH_TIMEOUT); 703 + while (readl(&udc->op_regs->epflush) & bit_pos) { 704 + /* 705 + * ENDPTFLUSH bit should be cleared to indicate this 706 + * operation is complete 707 + */ 708 + if (inter_loops == 0) { 709 + dev_err(&udc->dev->dev, 710 + "TIMEOUT for ENDPTFLUSH=0x%x," 711 + "bit_pos=0x%x\n", 712 + (unsigned)readl(&udc->op_regs->epflush), 713 + (unsigned)bit_pos); 714 + return; 715 + } 716 + inter_loops--; 717 + udelay(LOOPS_USEC); 718 + } 719 + loops--; 720 + } 721 + } 722 + 723 + /* queues (submits) an I/O request to an endpoint */ 724 + static int 725 + mv_ep_queue(struct usb_ep *_ep, struct usb_request *_req, gfp_t gfp_flags) 726 + { 727 + struct mv_ep *ep = container_of(_ep, struct mv_ep, ep); 728 + struct mv_req *req = container_of(_req, struct mv_req, req); 729 + struct mv_udc *udc = ep->udc; 730 + unsigned long 
flags; 731 + 732 + /* catch various bogus parameters */ 733 + if (!_req || !req->req.complete || !req->req.buf 734 + || !list_empty(&req->queue)) { 735 + dev_err(&udc->dev->dev, "%s, bad params", __func__); 736 + return -EINVAL; 737 + } 738 + if (unlikely(!_ep || !ep->desc)) { 739 + dev_err(&udc->dev->dev, "%s, bad ep", __func__); 740 + return -EINVAL; 741 + } 742 + if (ep->desc->bmAttributes == USB_ENDPOINT_XFER_ISOC) { 743 + if (req->req.length > ep->ep.maxpacket) 744 + return -EMSGSIZE; 745 + } 746 + 747 + udc = ep->udc; 748 + if (!udc->driver || udc->gadget.speed == USB_SPEED_UNKNOWN) 749 + return -ESHUTDOWN; 750 + 751 + req->ep = ep; 752 + 753 + /* map virtual address to hardware */ 754 + if (req->req.dma == DMA_ADDR_INVALID) { 755 + req->req.dma = dma_map_single(ep->udc->gadget.dev.parent, 756 + req->req.buf, 757 + req->req.length, ep_dir(ep) 758 + ? DMA_TO_DEVICE 759 + : DMA_FROM_DEVICE); 760 + req->mapped = 1; 761 + } else { 762 + dma_sync_single_for_device(ep->udc->gadget.dev.parent, 763 + req->req.dma, req->req.length, 764 + ep_dir(ep) 765 + ? 
DMA_TO_DEVICE 766 + : DMA_FROM_DEVICE); 767 + req->mapped = 0; 768 + } 769 + 770 + req->req.status = -EINPROGRESS; 771 + req->req.actual = 0; 772 + req->dtd_count = 0; 773 + 774 + spin_lock_irqsave(&udc->lock, flags); 775 + 776 + /* build dtds and push them to device queue */ 777 + if (!req_to_dtd(req)) { 778 + int retval; 779 + retval = queue_dtd(ep, req); 780 + if (retval) { 781 + spin_unlock_irqrestore(&udc->lock, flags); 782 + return retval; 783 + } 784 + } else { 785 + spin_unlock_irqrestore(&udc->lock, flags); 786 + return -ENOMEM; 787 + } 788 + 789 + /* Update ep0 state */ 790 + if (ep->ep_num == 0) 791 + udc->ep0_state = DATA_STATE_XMIT; 792 + 793 + /* irq handler advances the queue */ 794 + if (req != NULL) 795 + list_add_tail(&req->queue, &ep->queue); 796 + spin_unlock_irqrestore(&udc->lock, flags); 797 + 798 + return 0; 799 + } 800 + 801 + /* dequeues (cancels, unlinks) an I/O request from an endpoint */ 802 + static int mv_ep_dequeue(struct usb_ep *_ep, struct usb_request *_req) 803 + { 804 + struct mv_ep *ep = container_of(_ep, struct mv_ep, ep); 805 + struct mv_req *req; 806 + struct mv_udc *udc = ep->udc; 807 + unsigned long flags; 808 + int stopped, ret = 0; 809 + u32 epctrlx; 810 + 811 + if (!_ep || !_req) 812 + return -EINVAL; 813 + 814 + spin_lock_irqsave(&ep->udc->lock, flags); 815 + stopped = ep->stopped; 816 + 817 + /* Stop the ep before we deal with the queue */ 818 + ep->stopped = 1; 819 + epctrlx = readl(&udc->op_regs->epctrlx[ep->ep_num]); 820 + if (ep_dir(ep) == EP_DIR_IN) 821 + epctrlx &= ~EPCTRL_TX_ENABLE; 822 + else 823 + epctrlx &= ~EPCTRL_RX_ENABLE; 824 + writel(epctrlx, &udc->op_regs->epctrlx[ep->ep_num]); 825 + 826 + /* make sure it's actually queued on this endpoint */ 827 + list_for_each_entry(req, &ep->queue, queue) { 828 + if (&req->req == _req) 829 + break; 830 + } 831 + if (&req->req != _req) { 832 + ret = -EINVAL; 833 + goto out; 834 + } 835 + 836 + /* The request is in progress, or completed but not dequeued */ 837 + if 
(ep->queue.next == &req->queue) { 838 + _req->status = -ECONNRESET; 839 + mv_ep_fifo_flush(_ep); /* flush current transfer */ 840 + 841 + /* The request isn't the last request in this ep queue */ 842 + if (req->queue.next != &ep->queue) { 843 + struct mv_dqh *qh; 844 + struct mv_req *next_req; 845 + 846 + qh = ep->dqh; 847 + next_req = list_entry(req->queue.next, struct mv_req, 848 + queue); 849 + 850 + /* Point the QH to the first TD of next request */ 851 + writel((u32) next_req->head, &qh->curr_dtd_ptr); 852 + } else { 853 + struct mv_dqh *qh; 854 + 855 + qh = ep->dqh; 856 + qh->next_dtd_ptr = 1; 857 + qh->size_ioc_int_sts = 0; 858 + } 859 + 860 + /* The request hasn't been processed, patch up the TD chain */ 861 + } else { 862 + struct mv_req *prev_req; 863 + 864 + prev_req = list_entry(req->queue.prev, struct mv_req, queue); 865 + writel(readl(&req->tail->dtd_next), 866 + &prev_req->tail->dtd_next); 867 + 868 + } 869 + 870 + done(ep, req, -ECONNRESET); 871 + 872 + /* Enable EP */ 873 + out: 874 + epctrlx = readl(&udc->op_regs->epctrlx[ep->ep_num]); 875 + if (ep_dir(ep) == EP_DIR_IN) 876 + epctrlx |= EPCTRL_TX_ENABLE; 877 + else 878 + epctrlx |= EPCTRL_RX_ENABLE; 879 + writel(epctrlx, &udc->op_regs->epctrlx[ep->ep_num]); 880 + ep->stopped = stopped; 881 + 882 + spin_unlock_irqrestore(&ep->udc->lock, flags); 883 + return ret; 884 + } 885 + 886 + static void ep_set_stall(struct mv_udc *udc, u8 ep_num, u8 direction, int stall) 887 + { 888 + u32 epctrlx; 889 + 890 + epctrlx = readl(&udc->op_regs->epctrlx[ep_num]); 891 + 892 + if (stall) { 893 + if (direction == EP_DIR_IN) 894 + epctrlx |= EPCTRL_TX_EP_STALL; 895 + else 896 + epctrlx |= EPCTRL_RX_EP_STALL; 897 + } else { 898 + if (direction == EP_DIR_IN) { 899 + epctrlx &= ~EPCTRL_TX_EP_STALL; 900 + epctrlx |= EPCTRL_TX_DATA_TOGGLE_RST; 901 + } else { 902 + epctrlx &= ~EPCTRL_RX_EP_STALL; 903 + epctrlx |= EPCTRL_RX_DATA_TOGGLE_RST; 904 + } 905 + } 906 + writel(epctrlx, &udc->op_regs->epctrlx[ep_num]); 907 + } 908 + 
909 + static int ep_is_stall(struct mv_udc *udc, u8 ep_num, u8 direction)
910 + {
911 + u32 epctrlx;
912 +
913 + epctrlx = readl(&udc->op_regs->epctrlx[ep_num]);
914 +
915 + if (direction == EP_DIR_OUT)
916 + return (epctrlx & EPCTRL_RX_EP_STALL) ? 1 : 0;
917 + else
918 + return (epctrlx & EPCTRL_TX_EP_STALL) ? 1 : 0;
919 + }
920 +
921 + static int mv_ep_set_halt_wedge(struct usb_ep *_ep, int halt, int wedge)
922 + {
923 + struct mv_ep *ep;
924 + unsigned long flags = 0;
925 + int status = 0;
926 + struct mv_udc *udc;
927 +
928 + ep = container_of(_ep, struct mv_ep, ep);
929 + udc = ep->udc;
930 + if (!_ep || !ep->desc) {
931 + status = -EINVAL;
932 + goto out;
933 + }
934 +
935 + if (ep->desc->bmAttributes == USB_ENDPOINT_XFER_ISOC) {
936 + status = -EOPNOTSUPP;
937 + goto out;
938 + }
939 +
940 + /*
941 + * An attempt to halt an IN ep will fail if any transfer
942 + * requests are still queued
943 + */
944 + if (halt && (ep_dir(ep) == EP_DIR_IN) && !list_empty(&ep->queue)) {
945 + status = -EAGAIN;
946 + goto out;
947 + }
948 +
949 + spin_lock_irqsave(&ep->udc->lock, flags);
950 + ep_set_stall(udc, ep->ep_num, ep_dir(ep), halt);
951 + if (halt && wedge)
952 + ep->wedge = 1;
953 + else if (!halt)
954 + ep->wedge = 0;
955 + spin_unlock_irqrestore(&ep->udc->lock, flags);
956 +
957 + if (ep->ep_num == 0) {
958 + udc->ep0_state = WAIT_FOR_SETUP;
959 + udc->ep0_dir = EP_DIR_OUT;
960 + }
961 + out:
962 + return status;
963 + }
964 +
965 + static int mv_ep_set_halt(struct usb_ep *_ep, int halt)
966 + {
967 + return mv_ep_set_halt_wedge(_ep, halt, 0);
968 + }
969 +
970 + static int mv_ep_set_wedge(struct usb_ep *_ep)
971 + {
972 + return mv_ep_set_halt_wedge(_ep, 1, 1);
973 + }
974 +
975 + static struct usb_ep_ops mv_ep_ops = {
976 + .enable = mv_ep_enable,
977 + .disable = mv_ep_disable,
978 +
979 + .alloc_request = mv_alloc_request,
980 + .free_request = mv_free_request,
981 +
982 + .queue = mv_ep_queue,
983 + .dequeue = mv_ep_dequeue,
984 +
985 + .set_wedge = mv_ep_set_wedge,
986 + .set_halt = mv_ep_set_halt,
987 + .fifo_flush = mv_ep_fifo_flush, /* flush fifo */
988 + };
989 +
990 + static void udc_stop(struct mv_udc *udc)
991 + {
992 + u32 tmp;
993 +
994 + /* Disable interrupts */
995 + tmp = readl(&udc->op_regs->usbintr);
996 + tmp &= ~(USBINTR_INT_EN | USBINTR_ERR_INT_EN |
997 + USBINTR_PORT_CHANGE_DETECT_EN | USBINTR_RESET_EN);
998 + writel(tmp, &udc->op_regs->usbintr);
999 +
1000 + /* Reset the Run bit in the command register to stop VUSB */
1001 + tmp = readl(&udc->op_regs->usbcmd);
1002 + tmp &= ~USBCMD_RUN_STOP;
1003 + writel(tmp, &udc->op_regs->usbcmd);
1004 + }
1005 +
1006 + static void udc_start(struct mv_udc *udc)
1007 + {
1008 + u32 usbintr;
1009 +
1010 + usbintr = USBINTR_INT_EN | USBINTR_ERR_INT_EN
1011 + | USBINTR_PORT_CHANGE_DETECT_EN
1012 + | USBINTR_RESET_EN | USBINTR_DEVICE_SUSPEND;
1013 + /* Enable interrupts */
1014 + writel(usbintr, &udc->op_regs->usbintr);
1015 +
1016 + /* Set the Run bit in the command register */
1017 + writel(USBCMD_RUN_STOP, &udc->op_regs->usbcmd);
1018 + }
1019 +
1020 + static int udc_reset(struct mv_udc *udc)
1021 + {
1022 + unsigned int loops;
1023 + u32 tmp, portsc;
1024 +
1025 + /* Stop the controller */
1026 + tmp = readl(&udc->op_regs->usbcmd);
1027 + tmp &= ~USBCMD_RUN_STOP;
1028 + writel(tmp, &udc->op_regs->usbcmd);
1029 +
1030 + /* Reset the controller to get default values */
1031 + writel(USBCMD_CTRL_RESET, &udc->op_regs->usbcmd);
1032 +
1033 + /* wait for reset to complete */
1034 + loops = LOOPS(RESET_TIMEOUT);
1035 + while (readl(&udc->op_regs->usbcmd) & USBCMD_CTRL_RESET) {
1036 + if (loops == 0) {
1037 + dev_err(&udc->dev->dev,
1038 + "Wait for RESET completed TIMEOUT\n");
1039 + return -ETIMEDOUT;
1040 + }
1041 + loops--;
1042 + udelay(LOOPS_USEC);
1043 + }
1044 +
1045 + /* set controller to device mode */
1046 + tmp = readl(&udc->op_regs->usbmode);
1047 + tmp |= USBMODE_CTRL_MODE_DEVICE;
1048 +
1049 + /* turn setup lockout off, require setup tripwire in usbcmd */
1050
+ tmp |= USBMODE_SETUP_LOCK_OFF | USBMODE_STREAM_DISABLE;
1051 +
1052 + writel(tmp, &udc->op_regs->usbmode);
1053 +
1054 + writel(0x0, &udc->op_regs->epsetupstat);
1055 +
1056 + /* Configure the Endpoint List Address */
1057 + writel(udc->ep_dqh_dma & USB_EP_LIST_ADDRESS_MASK,
1058 + &udc->op_regs->eplistaddr);
1059 +
1060 + portsc = readl(&udc->op_regs->portsc[0]);
1061 + if (readl(&udc->cap_regs->hcsparams) & HCSPARAMS_PPC)
1062 + portsc &= (~PORTSCX_W1C_BITS | ~PORTSCX_PORT_POWER);
1063 +
1064 + if (udc->force_fs)
1065 + portsc |= PORTSCX_FORCE_FULL_SPEED_CONNECT;
1066 + else
1067 + portsc &= (~PORTSCX_FORCE_FULL_SPEED_CONNECT);
1068 +
1069 + writel(portsc, &udc->op_regs->portsc[0]);
1070 +
1071 + tmp = readl(&udc->op_regs->epctrlx[0]);
1072 + tmp &= ~(EPCTRL_TX_EP_STALL | EPCTRL_RX_EP_STALL);
1073 + writel(tmp, &udc->op_regs->epctrlx[0]);
1074 +
1075 + return 0;
1076 + }
1077 +
1078 + static int mv_udc_get_frame(struct usb_gadget *gadget)
1079 + {
1080 + struct mv_udc *udc;
1081 + u16 retval;
1082 +
1083 + if (!gadget)
1084 + return -ENODEV;
1085 +
1086 + udc = container_of(gadget, struct mv_udc, gadget);
1087 +
1088 + retval = readl(&udc->op_regs->frindex) & USB_FRINDEX_MASKS;
1089 +
1090 + return retval;
1091 + }
1092 +
1093 + /* Tries to wake up the host connected to this gadget */
1094 + static int mv_udc_wakeup(struct usb_gadget *gadget)
1095 + {
1096 + struct mv_udc *udc = container_of(gadget, struct mv_udc, gadget);
1097 + u32 portsc;
1098 +
1099 + /* Remote wakeup feature not enabled by host */
1100 + if (!udc->remote_wakeup)
1101 + return -ENOTSUPP;
1102 +
1103 + portsc = readl(&udc->op_regs->portsc[0]);
1104 + /* not suspended?
*/ 1105 + if (!(portsc & PORTSCX_PORT_SUSPEND))
1106 + return 0;
1107 + /* trigger force resume */
1108 + portsc |= PORTSCX_PORT_FORCE_RESUME;
1109 + writel(portsc, &udc->op_regs->portsc[0]);
1110 + return 0;
1111 + }
1112 +
1113 + static int mv_udc_pullup(struct usb_gadget *gadget, int is_on)
1114 + {
1115 + struct mv_udc *udc;
1116 + unsigned long flags;
1117 +
1118 + udc = container_of(gadget, struct mv_udc, gadget);
1119 + spin_lock_irqsave(&udc->lock, flags);
1120 +
1121 + udc->softconnect = (is_on != 0);
1122 + if (udc->driver && udc->softconnect)
1123 + udc_start(udc);
1124 + else
1125 + udc_stop(udc);
1126 +
1127 + spin_unlock_irqrestore(&udc->lock, flags);
1128 + return 0;
1129 + }
1130 +
1131 + /* device controller usb_gadget_ops structure */
1132 + static const struct usb_gadget_ops mv_ops = {
1133 +
1134 + /* returns the current frame number */
1135 + .get_frame = mv_udc_get_frame,
1136 +
1137 + /* tries to wake up the host connected to this gadget */
1138 + .wakeup = mv_udc_wakeup,
1139 +
1140 + /* D+ pullup, software-controlled connect/disconnect to USB host */
1141 + .pullup = mv_udc_pullup,
1142 + };
1143 +
1144 + static void mv_udc_testmode(struct mv_udc *udc, u16 index, bool enter)
1145 + {
1146 + dev_info(&udc->dev->dev, "Test Mode is not supported yet\n");
1147 + }
1148 +
1149 + static int eps_init(struct mv_udc *udc)
1150 + {
1151 + struct mv_ep *ep;
1152 + char name[14];
1153 + int i;
1154 +
1155 + /* initialize ep0 */
1156 + ep = &udc->eps[0];
1157 + ep->udc = udc;
1158 + strncpy(ep->name, "ep0", sizeof(ep->name));
1159 + ep->ep.name = ep->name;
1160 + ep->ep.ops = &mv_ep_ops;
1161 + ep->wedge = 0;
1162 + ep->stopped = 0;
1163 + ep->ep.maxpacket = EP0_MAX_PKT_SIZE;
1164 + ep->ep_num = 0;
1165 + ep->desc = &mv_ep0_desc;
1166 + INIT_LIST_HEAD(&ep->queue);
1167 +
1168 + ep->ep_type = USB_ENDPOINT_XFER_CONTROL;
1169 +
1170 + /* initialize other endpoints */
1171 + for (i = 2; i < udc->max_eps * 2; i++) {
1172 + ep = &udc->eps[i];
1173 + if (i % 2)
{ 1174 + snprintf(name, sizeof(name), "ep%din", i / 2); 1175 + ep->direction = EP_DIR_IN; 1176 + } else { 1177 + snprintf(name, sizeof(name), "ep%dout", i / 2); 1178 + ep->direction = EP_DIR_OUT; 1179 + } 1180 + ep->udc = udc; 1181 + strncpy(ep->name, name, sizeof(ep->name)); 1182 + ep->ep.name = ep->name; 1183 + 1184 + ep->ep.ops = &mv_ep_ops; 1185 + ep->stopped = 0; 1186 + ep->ep.maxpacket = (unsigned short) ~0; 1187 + ep->ep_num = i / 2; 1188 + 1189 + INIT_LIST_HEAD(&ep->queue); 1190 + list_add_tail(&ep->ep.ep_list, &udc->gadget.ep_list); 1191 + 1192 + ep->dqh = &udc->ep_dqh[i]; 1193 + } 1194 + 1195 + return 0; 1196 + } 1197 + 1198 + /* delete all endpoint requests, called with spinlock held */ 1199 + static void nuke(struct mv_ep *ep, int status) 1200 + { 1201 + /* called with spinlock held */ 1202 + ep->stopped = 1; 1203 + 1204 + /* endpoint fifo flush */ 1205 + mv_ep_fifo_flush(&ep->ep); 1206 + 1207 + while (!list_empty(&ep->queue)) { 1208 + struct mv_req *req = NULL; 1209 + req = list_entry(ep->queue.next, struct mv_req, queue); 1210 + done(ep, req, status); 1211 + } 1212 + } 1213 + 1214 + /* stop all USB activities */ 1215 + static void stop_activity(struct mv_udc *udc, struct usb_gadget_driver *driver) 1216 + { 1217 + struct mv_ep *ep; 1218 + 1219 + nuke(&udc->eps[0], -ESHUTDOWN); 1220 + 1221 + list_for_each_entry(ep, &udc->gadget.ep_list, ep.ep_list) { 1222 + nuke(ep, -ESHUTDOWN); 1223 + } 1224 + 1225 + /* report disconnect; the driver is already quiesced */ 1226 + if (driver) { 1227 + spin_unlock(&udc->lock); 1228 + driver->disconnect(&udc->gadget); 1229 + spin_lock(&udc->lock); 1230 + } 1231 + } 1232 + 1233 + int usb_gadget_probe_driver(struct usb_gadget_driver *driver, 1234 + int (*bind)(struct usb_gadget *)) 1235 + { 1236 + struct mv_udc *udc = the_controller; 1237 + int retval = 0; 1238 + unsigned long flags; 1239 + 1240 + if (!udc) 1241 + return -ENODEV; 1242 + 1243 + if (udc->driver) 1244 + return -EBUSY; 1245 + 1246 + spin_lock_irqsave(&udc->lock, 
flags); 1247 + 1248 + /* hook up the driver ... */ 1249 + driver->driver.bus = NULL; 1250 + udc->driver = driver; 1251 + udc->gadget.dev.driver = &driver->driver; 1252 + 1253 + udc->usb_state = USB_STATE_ATTACHED; 1254 + udc->ep0_state = WAIT_FOR_SETUP; 1255 + udc->ep0_dir = USB_DIR_OUT; 1256 + 1257 + spin_unlock_irqrestore(&udc->lock, flags); 1258 + 1259 + retval = bind(&udc->gadget); 1260 + if (retval) { 1261 + dev_err(&udc->dev->dev, "bind to driver %s --> %d\n", 1262 + driver->driver.name, retval); 1263 + udc->driver = NULL; 1264 + udc->gadget.dev.driver = NULL; 1265 + return retval; 1266 + } 1267 + udc_reset(udc); 1268 + ep0_reset(udc); 1269 + udc_start(udc); 1270 + 1271 + return 0; 1272 + } 1273 + EXPORT_SYMBOL(usb_gadget_probe_driver); 1274 + 1275 + int usb_gadget_unregister_driver(struct usb_gadget_driver *driver) 1276 + { 1277 + struct mv_udc *udc = the_controller; 1278 + unsigned long flags; 1279 + 1280 + if (!udc) 1281 + return -ENODEV; 1282 + 1283 + udc_stop(udc); 1284 + 1285 + spin_lock_irqsave(&udc->lock, flags); 1286 + 1287 + /* stop all usb activities */ 1288 + udc->gadget.speed = USB_SPEED_UNKNOWN; 1289 + stop_activity(udc, driver); 1290 + spin_unlock_irqrestore(&udc->lock, flags); 1291 + 1292 + /* unbind gadget driver */ 1293 + driver->unbind(&udc->gadget); 1294 + udc->gadget.dev.driver = NULL; 1295 + udc->driver = NULL; 1296 + 1297 + return 0; 1298 + } 1299 + EXPORT_SYMBOL(usb_gadget_unregister_driver); 1300 + 1301 + static int 1302 + udc_prime_status(struct mv_udc *udc, u8 direction, u16 status, bool empty) 1303 + { 1304 + int retval = 0; 1305 + struct mv_req *req; 1306 + struct mv_ep *ep; 1307 + 1308 + ep = &udc->eps[0]; 1309 + udc->ep0_dir = direction; 1310 + 1311 + req = udc->status_req; 1312 + 1313 + /* fill in the request structure */ 1314 + if (empty == false) { 1315 + *((u16 *) req->req.buf) = cpu_to_le16(status); 1316 + req->req.length = 2; 1317 + } else 1318 + req->req.length = 0; 1319 + 1320 + req->ep = ep; 1321 + req->req.status =
-EINPROGRESS; 1322 + req->req.actual = 0; 1323 + req->req.complete = NULL; 1324 + req->dtd_count = 0; 1325 + 1326 + /* prime the data phase */ 1327 + if (!req_to_dtd(req)) 1328 + retval = queue_dtd(ep, req); 1329 + else { /* no mem */ 1330 + retval = -ENOMEM; 1331 + goto out; 1332 + } 1333 + 1334 + if (retval) { 1335 + dev_err(&udc->dev->dev, "response error on GET_STATUS request\n"); 1336 + goto out; 1337 + } 1338 + 1339 + list_add_tail(&req->queue, &ep->queue); 1340 + 1341 + return 0; 1342 + out: 1343 + return retval; 1344 + } 1345 + 1346 + static void ch9setaddress(struct mv_udc *udc, struct usb_ctrlrequest *setup) 1347 + { 1348 + udc->dev_addr = (u8)setup->wValue; 1349 + 1350 + /* update usb state */ 1351 + udc->usb_state = USB_STATE_ADDRESS; 1352 + 1353 + if (udc_prime_status(udc, EP_DIR_IN, 0, true)) 1354 + ep0_stall(udc); 1355 + } 1356 + 1357 + static void ch9getstatus(struct mv_udc *udc, u8 ep_num, 1358 + struct usb_ctrlrequest *setup) 1359 + { 1360 + u16 status; 1361 + int retval; 1362 + 1363 + if ((setup->bRequestType & (USB_DIR_IN | USB_TYPE_MASK)) 1364 + != (USB_DIR_IN | USB_TYPE_STANDARD)) 1365 + return; 1366 + 1367 + if ((setup->bRequestType & USB_RECIP_MASK) == USB_RECIP_DEVICE) { 1368 + status = 1 << USB_DEVICE_SELF_POWERED; 1369 + status |= udc->remote_wakeup << USB_DEVICE_REMOTE_WAKEUP; 1370 + } else if ((setup->bRequestType & USB_RECIP_MASK) 1371 + == USB_RECIP_INTERFACE) { 1372 + /* get interface status */ 1373 + status = 0; 1374 + } else if ((setup->bRequestType & USB_RECIP_MASK) 1375 + == USB_RECIP_ENDPOINT) { 1376 + u8 ep_num, direction; 1377 + 1378 + ep_num = setup->wIndex & USB_ENDPOINT_NUMBER_MASK; 1379 + direction = (setup->wIndex & USB_ENDPOINT_DIR_MASK) 1380 + ?
EP_DIR_IN : EP_DIR_OUT; 1381 + status = ep_is_stall(udc, ep_num, direction) 1382 + << USB_ENDPOINT_HALT; 1383 + } 1384 + 1385 + retval = udc_prime_status(udc, EP_DIR_IN, status, false); 1386 + if (retval) 1387 + ep0_stall(udc); 1388 + } 1389 + 1390 + static void ch9clearfeature(struct mv_udc *udc, struct usb_ctrlrequest *setup) 1391 + { 1392 + u8 ep_num; 1393 + u8 direction; 1394 + struct mv_ep *ep; 1395 + 1396 + if ((setup->bRequestType & (USB_TYPE_MASK | USB_RECIP_MASK)) 1397 + == ((USB_TYPE_STANDARD | USB_RECIP_DEVICE))) { 1398 + switch (setup->wValue) { 1399 + case USB_DEVICE_REMOTE_WAKEUP: 1400 + udc->remote_wakeup = 0; 1401 + break; 1402 + case USB_DEVICE_TEST_MODE: 1403 + mv_udc_testmode(udc, 0, false); 1404 + break; 1405 + default: 1406 + goto out; 1407 + } 1408 + } else if ((setup->bRequestType & (USB_TYPE_MASK | USB_RECIP_MASK)) 1409 + == ((USB_TYPE_STANDARD | USB_RECIP_ENDPOINT))) { 1410 + switch (setup->wValue) { 1411 + case USB_ENDPOINT_HALT: 1412 + ep_num = setup->wIndex & USB_ENDPOINT_NUMBER_MASK; 1413 + direction = (setup->wIndex & USB_ENDPOINT_DIR_MASK) 1414 + ? 
EP_DIR_IN : EP_DIR_OUT; 1415 + if (setup->wValue != 0 || setup->wLength != 0 1416 + || ep_num > udc->max_eps) 1417 + goto out; 1418 + ep = &udc->eps[ep_num * 2 + direction]; 1419 + if (ep->wedge == 1) 1420 + break; 1421 + spin_unlock(&udc->lock); 1422 + ep_set_stall(udc, ep_num, direction, 0); 1423 + spin_lock(&udc->lock); 1424 + break; 1425 + default: 1426 + goto out; 1427 + } 1428 + } else 1429 + goto out; 1430 + 1431 + if (udc_prime_status(udc, EP_DIR_IN, 0, true)) 1432 + ep0_stall(udc); 1433 + else 1434 + udc->ep0_state = DATA_STATE_XMIT; 1435 + out: 1436 + return; 1437 + } 1438 + 1439 + static void ch9setfeature(struct mv_udc *udc, struct usb_ctrlrequest *setup) 1440 + { 1441 + u8 ep_num; 1442 + u8 direction; 1443 + 1444 + if ((setup->bRequestType & (USB_TYPE_MASK | USB_RECIP_MASK)) 1445 + == ((USB_TYPE_STANDARD | USB_RECIP_DEVICE))) { 1446 + switch (setup->wValue) { 1447 + case USB_DEVICE_REMOTE_WAKEUP: 1448 + udc->remote_wakeup = 1; 1449 + break; 1450 + case USB_DEVICE_TEST_MODE: 1451 + if (setup->wIndex & 0xFF 1452 + && udc->gadget.speed != USB_SPEED_HIGH) 1453 + goto out; 1454 + if (udc->usb_state == USB_STATE_CONFIGURED 1455 + || udc->usb_state == USB_STATE_ADDRESS 1456 + || udc->usb_state == USB_STATE_DEFAULT) 1457 + mv_udc_testmode(udc, 1458 + setup->wIndex & 0xFF00, true); 1459 + else 1460 + goto out; 1461 + break; 1462 + default: 1463 + goto out; 1464 + } 1465 + } else if ((setup->bRequestType & (USB_TYPE_MASK | USB_RECIP_MASK)) 1466 + == ((USB_TYPE_STANDARD | USB_RECIP_ENDPOINT))) { 1467 + switch (setup->wValue) { 1468 + case USB_ENDPOINT_HALT: 1469 + ep_num = setup->wIndex & USB_ENDPOINT_NUMBER_MASK; 1470 + direction = (setup->wIndex & USB_ENDPOINT_DIR_MASK) 1471 + ? 
EP_DIR_IN : EP_DIR_OUT; 1472 + if (setup->wValue != 0 || setup->wLength != 0 1473 + || ep_num > udc->max_eps) 1474 + goto out; 1475 + spin_unlock(&udc->lock); 1476 + ep_set_stall(udc, ep_num, direction, 1); 1477 + spin_lock(&udc->lock); 1478 + break; 1479 + default: 1480 + goto out; 1481 + } 1482 + } else 1483 + goto out; 1484 + 1485 + if (udc_prime_status(udc, EP_DIR_IN, 0, true)) 1486 + ep0_stall(udc); 1487 + out: 1488 + return; 1489 + } 1490 + 1491 + static void handle_setup_packet(struct mv_udc *udc, u8 ep_num, 1492 + struct usb_ctrlrequest *setup) 1493 + { 1494 + bool delegate = false; 1495 + 1496 + nuke(&udc->eps[ep_num * 2 + EP_DIR_OUT], -ESHUTDOWN); 1497 + 1498 + dev_dbg(&udc->dev->dev, "SETUP %02x.%02x v%04x i%04x l%04x\n", 1499 + setup->bRequestType, setup->bRequest, 1500 + setup->wValue, setup->wIndex, setup->wLength); 1501 + /* We process some standard setup requests here */ 1502 + if ((setup->bRequestType & USB_TYPE_MASK) == USB_TYPE_STANDARD) { 1503 + switch (setup->bRequest) { 1504 + case USB_REQ_GET_STATUS: 1505 + ch9getstatus(udc, ep_num, setup); 1506 + break; 1507 + 1508 + case USB_REQ_SET_ADDRESS: 1509 + ch9setaddress(udc, setup); 1510 + break; 1511 + 1512 + case USB_REQ_CLEAR_FEATURE: 1513 + ch9clearfeature(udc, setup); 1514 + break; 1515 + 1516 + case USB_REQ_SET_FEATURE: 1517 + ch9setfeature(udc, setup); 1518 + break; 1519 + 1520 + default: 1521 + delegate = true; 1522 + } 1523 + } else 1524 + delegate = true; 1525 + 1526 + /* delegate the remaining USB requests to the gadget driver */ 1527 + if (delegate == true) { 1528 + /* USB requests handled by gadget */ 1529 + if (setup->wLength) { 1530 + /* DATA phase from gadget, STATUS phase from udc */ 1531 + udc->ep0_dir = (setup->bRequestType & USB_DIR_IN) 1532 + ?
EP_DIR_IN : EP_DIR_OUT; 1533 + spin_unlock(&udc->lock); 1534 + if (udc->driver->setup(&udc->gadget, 1535 + &udc->local_setup_buff) < 0) 1536 + ep0_stall(udc); 1537 + spin_lock(&udc->lock); 1538 + udc->ep0_state = (setup->bRequestType & USB_DIR_IN) 1539 + ? DATA_STATE_XMIT : DATA_STATE_RECV; 1540 + } else { 1541 + /* no DATA phase, IN STATUS phase from gadget */ 1542 + udc->ep0_dir = EP_DIR_IN; 1543 + spin_unlock(&udc->lock); 1544 + if (udc->driver->setup(&udc->gadget, 1545 + &udc->local_setup_buff) < 0) 1546 + ep0_stall(udc); 1547 + spin_lock(&udc->lock); 1548 + udc->ep0_state = WAIT_FOR_OUT_STATUS; 1549 + } 1550 + } 1551 + } 1552 + 1553 + /* complete the DATA or STATUS phase of ep0; prime the status phase if needed */ 1554 + static void ep0_req_complete(struct mv_udc *udc, 1555 + struct mv_ep *ep0, struct mv_req *req) 1556 + { 1557 + u32 new_addr; 1558 + 1559 + if (udc->usb_state == USB_STATE_ADDRESS) { 1560 + /* set the new address */ 1561 + new_addr = (u32)udc->dev_addr; 1562 + writel(new_addr << USB_DEVICE_ADDRESS_BIT_SHIFT, 1563 + &udc->op_regs->deviceaddr); 1564 + } 1565 + 1566 + done(ep0, req, 0); 1567 + 1568 + switch (udc->ep0_state) { 1569 + case DATA_STATE_XMIT: 1570 + /* receive status phase */ 1571 + if (udc_prime_status(udc, EP_DIR_OUT, 0, true)) 1572 + ep0_stall(udc); 1573 + break; 1574 + case DATA_STATE_RECV: 1575 + /* send status phase */ 1576 + if (udc_prime_status(udc, EP_DIR_IN, 0, true)) 1577 + ep0_stall(udc); 1578 + break; 1579 + case WAIT_FOR_OUT_STATUS: 1580 + udc->ep0_state = WAIT_FOR_SETUP; 1581 + break; 1582 + case WAIT_FOR_SETUP: 1583 + dev_err(&udc->dev->dev, "unexpected ep0 packet\n"); 1584 + break; 1585 + default: 1586 + ep0_stall(udc); 1587 + break; 1588 + } 1589 + } 1590 + 1591 + static void get_setup_data(struct mv_udc *udc, u8 ep_num, u8 *buffer_ptr) 1592 + { 1593 + u32 temp; 1594 + struct mv_dqh *dqh; 1595 + 1596 + dqh = &udc->ep_dqh[ep_num * 2 + EP_DIR_OUT]; 1597 + 1598 + /* Clear bit in ENDPTSETUPSTAT */ 1599 + temp =
readl(&udc->op_regs->epsetupstat); 1600 + writel(temp | (1 << ep_num), &udc->op_regs->epsetupstat); 1601 + 1602 + /* retry while a hazard exists: a new setup packet may arrive mid-copy */ 1603 + do { 1604 + /* Set Setup Tripwire */ 1605 + temp = readl(&udc->op_regs->usbcmd); 1606 + writel(temp | USBCMD_SETUP_TRIPWIRE_SET, &udc->op_regs->usbcmd); 1607 + 1608 + /* Copy the setup packet to local buffer */ 1609 + memcpy(buffer_ptr, (u8 *) dqh->setup_buffer, 8); 1610 + } while (!(readl(&udc->op_regs->usbcmd) & USBCMD_SETUP_TRIPWIRE_SET)); 1611 + 1612 + /* Clear Setup Tripwire */ 1613 + temp = readl(&udc->op_regs->usbcmd); 1614 + writel(temp & ~USBCMD_SETUP_TRIPWIRE_SET, &udc->op_regs->usbcmd); 1615 + } 1616 + 1617 + static void irq_process_tr_complete(struct mv_udc *udc) 1618 + { 1619 + u32 tmp, bit_pos; 1620 + int i, ep_num = 0, direction = 0; 1621 + struct mv_ep *curr_ep; 1622 + struct mv_req *curr_req, *temp_req; 1623 + int status; 1624 + 1625 + /* 1626 + * We use separate loops for ENDPTSETUPSTAT and ENDPTCOMPLETE 1627 + * because the setup packets are to be read ASAP 1628 + */ 1629 + 1630 + /* Process all Setup packet received interrupts */ 1631 + tmp = readl(&udc->op_regs->epsetupstat); 1632 + 1633 + if (tmp) { 1634 + for (i = 0; i < udc->max_eps; i++) { 1635 + if (tmp & (1 << i)) { 1636 + get_setup_data(udc, i, 1637 + (u8 *)(&udc->local_setup_buff)); 1638 + handle_setup_packet(udc, i, 1639 + &udc->local_setup_buff); 1640 + } 1641 + } 1642 + } 1643 + 1644 + /* Don't clear the endpoint setup status register here.
1645 + * It is cleared as a setup packet is read out of the buffer 1646 + */ 1647 + 1648 + /* Process non-setup transaction complete interrupts */ 1649 + tmp = readl(&udc->op_regs->epcomplete); 1650 + 1651 + if (!tmp) 1652 + return; 1653 + 1654 + writel(tmp, &udc->op_regs->epcomplete); 1655 + 1656 + for (i = 0; i < udc->max_eps * 2; i++) { 1657 + ep_num = i >> 1; 1658 + direction = i % 2; 1659 + 1660 + bit_pos = 1 << (ep_num + 16 * direction); 1661 + 1662 + if (!(bit_pos & tmp)) 1663 + continue; 1664 + 1665 + if (i == 1) 1666 + curr_ep = &udc->eps[0]; 1667 + else 1668 + curr_ep = &udc->eps[i]; 1669 + /* process the req queue until an incomplete request */ 1670 + list_for_each_entry_safe(curr_req, temp_req, 1671 + &curr_ep->queue, queue) { 1672 + status = process_ep_req(udc, i, curr_req); 1673 + if (status) 1674 + break; 1675 + 1676 + /* write back status to req */ 1677 + curr_req->req.status = status; 1678 + 1679 + /* ep0 request completion */ 1680 + if (ep_num == 0) { 1681 + ep0_req_complete(udc, curr_ep, curr_req); 1682 + break; 1683 + } else { 1684 + done(curr_ep, curr_req, status); 1685 + } 1686 + } 1687 + } 1688 + } 1689 + 1690 + void irq_process_reset(struct mv_udc *udc) 1691 + { 1692 + u32 tmp; 1693 + unsigned int loops; 1694 + 1695 + udc->ep0_dir = EP_DIR_OUT; 1696 + udc->ep0_state = WAIT_FOR_SETUP; 1697 + udc->remote_wakeup = 0; /* default to 0 on reset */ 1698 + 1699 + /* The address field is in bits 25-31.
Set the address */ 1700 + tmp = readl(&udc->op_regs->deviceaddr); 1701 + tmp &= ~(USB_DEVICE_ADDRESS_MASK); 1702 + writel(tmp, &udc->op_regs->deviceaddr); 1703 + 1704 + /* Clear all the setup token semaphores */ 1705 + tmp = readl(&udc->op_regs->epsetupstat); 1706 + writel(tmp, &udc->op_regs->epsetupstat); 1707 + 1708 + /* Clear all the endpoint complete status bits */ 1709 + tmp = readl(&udc->op_regs->epcomplete); 1710 + writel(tmp, &udc->op_regs->epcomplete); 1711 + 1712 + /* wait until all endptprime bits cleared */ 1713 + loops = LOOPS(PRIME_TIMEOUT); 1714 + while (readl(&udc->op_regs->epprime) & 0xFFFFFFFF) { 1715 + if (loops == 0) { 1716 + dev_err(&udc->dev->dev, 1717 + "Timeout for ENDPTPRIME = 0x%x\n", 1718 + readl(&udc->op_regs->epprime)); 1719 + break; 1720 + } 1721 + loops--; 1722 + udelay(LOOPS_USEC); 1723 + } 1724 + 1725 + /* Write 1s to the Flush register */ 1726 + writel((u32)~0, &udc->op_regs->epflush); 1727 + 1728 + if (readl(&udc->op_regs->portsc[0]) & PORTSCX_PORT_RESET) { 1729 + dev_info(&udc->dev->dev, "usb bus reset\n"); 1730 + udc->usb_state = USB_STATE_DEFAULT; 1731 + /* reset all the queues, stop all USB activities */ 1732 + stop_activity(udc, udc->driver); 1733 + } else { 1734 + dev_info(&udc->dev->dev, "USB reset portsc 0x%x\n", 1735 + readl(&udc->op_regs->portsc)); 1736 + 1737 + /* 1738 + * re-initialize 1739 + * controller reset 1740 + */ 1741 + udc_reset(udc); 1742 + 1743 + /* reset all the queues, stop all USB activities */ 1744 + stop_activity(udc, udc->driver); 1745 + 1746 + /* reset ep0 dQH and endptctrl */ 1747 + ep0_reset(udc); 1748 + 1749 + /* enable interrupt and set controller to run state */ 1750 + udc_start(udc); 1751 + 1752 + udc->usb_state = USB_STATE_ATTACHED; 1753 + } 1754 + } 1755 + 1756 + static void handle_bus_resume(struct mv_udc *udc) 1757 + { 1758 + udc->usb_state = udc->resume_state; 1759 + udc->resume_state = 0; 1760 + 1761 + /* report resume to the driver */ 1762 + if (udc->driver) { 1763 + if 
(udc->driver->resume) { 1764 + spin_unlock(&udc->lock); 1765 + udc->driver->resume(&udc->gadget); 1766 + spin_lock(&udc->lock); 1767 + } 1768 + } 1769 + } 1770 + 1771 + static void irq_process_suspend(struct mv_udc *udc) 1772 + { 1773 + udc->resume_state = udc->usb_state; 1774 + udc->usb_state = USB_STATE_SUSPENDED; 1775 + 1776 + if (udc->driver->suspend) { 1777 + spin_unlock(&udc->lock); 1778 + udc->driver->suspend(&udc->gadget); 1779 + spin_lock(&udc->lock); 1780 + } 1781 + } 1782 + 1783 + static void irq_process_port_change(struct mv_udc *udc) 1784 + { 1785 + u32 portsc; 1786 + 1787 + portsc = readl(&udc->op_regs->portsc[0]); 1788 + if (!(portsc & PORTSCX_PORT_RESET)) { 1789 + /* Get the speed */ 1790 + u32 speed = portsc & PORTSCX_PORT_SPEED_MASK; 1791 + switch (speed) { 1792 + case PORTSCX_PORT_SPEED_HIGH: 1793 + udc->gadget.speed = USB_SPEED_HIGH; 1794 + break; 1795 + case PORTSCX_PORT_SPEED_FULL: 1796 + udc->gadget.speed = USB_SPEED_FULL; 1797 + break; 1798 + case PORTSCX_PORT_SPEED_LOW: 1799 + udc->gadget.speed = USB_SPEED_LOW; 1800 + break; 1801 + default: 1802 + udc->gadget.speed = USB_SPEED_UNKNOWN; 1803 + break; 1804 + } 1805 + } 1806 + 1807 + if (portsc & PORTSCX_PORT_SUSPEND) { 1808 + udc->resume_state = udc->usb_state; 1809 + udc->usb_state = USB_STATE_SUSPENDED; 1810 + if (udc->driver->suspend) { 1811 + spin_unlock(&udc->lock); 1812 + udc->driver->suspend(&udc->gadget); 1813 + spin_lock(&udc->lock); 1814 + } 1815 + } 1816 + 1817 + if (!(portsc & PORTSCX_PORT_SUSPEND) 1818 + && udc->usb_state == USB_STATE_SUSPENDED) { 1819 + handle_bus_resume(udc); 1820 + } 1821 + 1822 + if (!udc->resume_state) 1823 + udc->usb_state = USB_STATE_DEFAULT; 1824 + } 1825 + 1826 + static void irq_process_error(struct mv_udc *udc) 1827 + { 1828 + /* Increment the error count */ 1829 + udc->errors++; 1830 + } 1831 + 1832 + static irqreturn_t mv_udc_irq(int irq, void *dev) 1833 + { 1834 + struct mv_udc *udc = (struct mv_udc *)dev; 1835 + u32 status, intr; 1836 + 1837 + 
spin_lock(&udc->lock); 1838 + 1839 + status = readl(&udc->op_regs->usbsts); 1840 + intr = readl(&udc->op_regs->usbintr); 1841 + status &= intr; 1842 + 1843 + if (status == 0) { 1844 + spin_unlock(&udc->lock); 1845 + return IRQ_NONE; 1846 + } 1847 + 1848 + /* Clear all the interrupts that occurred */ 1849 + writel(status, &udc->op_regs->usbsts); 1850 + 1851 + if (status & USBSTS_ERR) 1852 + irq_process_error(udc); 1853 + 1854 + if (status & USBSTS_RESET) 1855 + irq_process_reset(udc); 1856 + 1857 + if (status & USBSTS_PORT_CHANGE) 1858 + irq_process_port_change(udc); 1859 + 1860 + if (status & USBSTS_INT) 1861 + irq_process_tr_complete(udc); 1862 + 1863 + if (status & USBSTS_SUSPEND) 1864 + irq_process_suspend(udc); 1865 + 1866 + spin_unlock(&udc->lock); 1867 + 1868 + return IRQ_HANDLED; 1869 + } 1870 + 1871 + /* release device structure */ 1872 + static void gadget_release(struct device *_dev) 1873 + { 1874 + struct mv_udc *udc = the_controller; 1875 + 1876 + complete(udc->done); 1877 + kfree(udc); 1878 + } 1879 + 1880 + static int mv_udc_remove(struct platform_device *dev) 1881 + { 1882 + struct mv_udc *udc = the_controller; 1883 + 1884 + DECLARE_COMPLETION(done); 1885 + 1886 + udc->done = &done; 1887 + 1888 + /* free memory allocated in probe */ 1889 + if (udc->dtd_pool) 1890 + dma_pool_destroy(udc->dtd_pool); 1891 + 1892 + if (udc->ep_dqh) 1893 + dma_free_coherent(&dev->dev, udc->ep_dqh_size, 1894 + udc->ep_dqh, udc->ep_dqh_dma); 1895 + 1896 + kfree(udc->eps); 1897 + 1898 + if (udc->irq) 1899 + free_irq(udc->irq, &dev->dev); 1900 + 1901 + if (udc->cap_regs) 1902 + iounmap(udc->cap_regs); 1903 + udc->cap_regs = NULL; 1904 + 1905 + if (udc->phy_regs) 1906 + iounmap((void *)udc->phy_regs); 1907 + udc->phy_regs = 0; 1908 + 1909 + if (udc->status_req) { 1910 + kfree(udc->status_req->req.buf); 1911 + kfree(udc->status_req); 1912 + } 1913 + 1914 + device_unregister(&udc->gadget.dev); 1915 + 1916 + /* free dev; wait for release() to finish */ 1917 +
wait_for_completion(&done); 1918 + 1919 + the_controller = NULL; 1920 + 1921 + return 0; 1922 + } 1923 + 1924 + int mv_udc_probe(struct platform_device *dev) 1925 + { 1926 + struct mv_udc *udc; 1927 + int retval = 0; 1928 + struct resource *r; 1929 + size_t size; 1930 + 1931 + udc = kzalloc(sizeof *udc, GFP_KERNEL); 1932 + if (udc == NULL) { 1933 + dev_err(&dev->dev, "failed to allocate memory for udc\n"); 1934 + retval = -ENOMEM; 1935 + goto error; 1936 + } 1937 + 1938 + spin_lock_init(&udc->lock); 1939 + 1940 + udc->dev = dev; 1941 + 1942 + udc->clk = clk_get(&dev->dev, "U2OCLK"); 1943 + if (IS_ERR(udc->clk)) { 1944 + retval = PTR_ERR(udc->clk); 1945 + goto error; 1946 + } 1947 + 1948 + r = platform_get_resource_byname(udc->dev, IORESOURCE_MEM, "u2o"); 1949 + if (r == NULL) { 1950 + dev_err(&dev->dev, "no I/O memory resource defined\n"); 1951 + retval = -ENODEV; 1952 + goto error; 1953 + } 1954 + 1955 + udc->cap_regs = (struct mv_cap_regs __iomem *) 1956 + ioremap(r->start, resource_size(r)); 1957 + if (udc->cap_regs == NULL) { 1958 + dev_err(&dev->dev, "failed to map I/O memory\n"); 1959 + retval = -EBUSY; 1960 + goto error; 1961 + } 1962 + 1963 + r = platform_get_resource_byname(udc->dev, IORESOURCE_MEM, "u2ophy"); 1964 + if (r == NULL) { 1965 + dev_err(&dev->dev, "no phy I/O memory resource defined\n"); 1966 + retval = -ENODEV; 1967 + goto error; 1968 + } 1969 + 1970 + udc->phy_regs = (unsigned int)ioremap(r->start, resource_size(r)); 1971 + if (udc->phy_regs == 0) { 1972 + dev_err(&dev->dev, "failed to map phy I/O memory\n"); 1973 + retval = -EBUSY; 1974 + goto error; 1975 + } 1976 + 1977 + /* we will access controller registers, so enable the clk */ 1978 + clk_enable(udc->clk); 1979 + retval = mv_udc_phy_init(udc->phy_regs); 1980 + if (retval) { 1981 + dev_err(&dev->dev, "phy initialization error %d\n", retval); 1982 + goto error; 1983 + } 1984 + 1985 + udc->op_regs = (struct mv_op_regs __iomem *)((u32)udc->cap_regs 1986 +
(readl(&udc->cap_regs->caplength_hciversion) 1987 + & CAPLENGTH_MASK)); 1988 + udc->max_eps = readl(&udc->cap_regs->dccparams) & DCCPARAMS_DEN_MASK; 1989 + 1990 + size = udc->max_eps * sizeof(struct mv_dqh) * 2; 1991 + size = (size + DQH_ALIGNMENT - 1) & ~(DQH_ALIGNMENT - 1); 1992 + udc->ep_dqh = dma_alloc_coherent(&dev->dev, size, 1993 + &udc->ep_dqh_dma, GFP_KERNEL); 1994 + 1995 + if (udc->ep_dqh == NULL) { 1996 + dev_err(&dev->dev, "allocate dQH memory failed\n"); 1997 + retval = -ENOMEM; 1998 + goto error; 1999 + } 2000 + udc->ep_dqh_size = size; 2001 + 2002 + /* create dTD dma_pool resource */ 2003 + udc->dtd_pool = dma_pool_create("mv_dtd", 2004 + &dev->dev, 2005 + sizeof(struct mv_dtd), 2006 + DTD_ALIGNMENT, 2007 + DMA_BOUNDARY); 2008 + 2009 + if (!udc->dtd_pool) { 2010 + retval = -ENOMEM; 2011 + goto error; 2012 + } 2013 + 2014 + size = udc->max_eps * sizeof(struct mv_ep) * 2; 2015 + udc->eps = kzalloc(size, GFP_KERNEL); 2016 + if (udc->eps == NULL) { 2017 + dev_err(&dev->dev, "allocate ep memory failed\n"); 2018 + retval = -ENOMEM; 2019 + goto error; 2020 + } 2021 + 2022 + /* initialize ep0 status request structure */ 2023 + udc->status_req = kzalloc(sizeof(struct mv_req), GFP_KERNEL); 2024 + if (!udc->status_req) { 2025 + dev_err(&dev->dev, "allocate status_req memory failed\n"); 2026 + retval = -ENOMEM; 2027 + goto error; 2028 + } 2029 + INIT_LIST_HEAD(&udc->status_req->queue); 2030 + 2031 + /* allocate a small amount of memory to get valid address */ 2032 + udc->status_req->req.buf = kzalloc(8, GFP_KERNEL); 2033 + udc->status_req->req.dma = virt_to_phys(udc->status_req->req.buf); 2034 + 2035 + udc->resume_state = USB_STATE_NOTATTACHED; 2036 + udc->usb_state = USB_STATE_POWERED; 2037 + udc->ep0_dir = EP_DIR_OUT; 2038 + udc->remote_wakeup = 0; 2039 + 2040 + r = platform_get_resource(udc->dev, IORESOURCE_IRQ, 0); 2041 + if (r == NULL) { 2042 + dev_err(&dev->dev, "no IRQ resource defined\n"); 2043 + retval = -ENODEV; 2044 + goto error; 2045 + } 2046 +
udc->irq = r->start; 2047 + if (request_irq(udc->irq, mv_udc_irq, 2048 + IRQF_DISABLED | IRQF_SHARED, driver_name, udc)) { 2049 + dev_err(&dev->dev, "Request irq %d for UDC failed\n", 2050 + udc->irq); 2051 + retval = -ENODEV; 2052 + goto error; 2053 + } 2054 + 2055 + /* initialize gadget structure */ 2056 + udc->gadget.ops = &mv_ops; /* usb_gadget_ops */ 2057 + udc->gadget.ep0 = &udc->eps[0].ep; /* gadget ep0 */ 2058 + INIT_LIST_HEAD(&udc->gadget.ep_list); /* ep_list */ 2059 + udc->gadget.speed = USB_SPEED_UNKNOWN; /* speed */ 2060 + udc->gadget.is_dualspeed = 1; /* support dual speed */ 2061 + 2062 + /* the "gadget" abstracts/virtualizes the controller */ 2063 + dev_set_name(&udc->gadget.dev, "gadget"); 2064 + udc->gadget.dev.parent = &dev->dev; 2065 + udc->gadget.dev.dma_mask = dev->dev.dma_mask; 2066 + udc->gadget.dev.release = gadget_release; 2067 + udc->gadget.name = driver_name; /* gadget name */ 2068 + 2069 + retval = device_register(&udc->gadget.dev); 2070 + if (retval) 2071 + goto error; 2072 + 2073 + eps_init(udc); 2074 + 2075 + the_controller = udc; 2076 + 2077 + goto out; 2078 + error: 2079 + if (udc) 2080 + mv_udc_remove(udc->dev); 2081 + out: 2082 + return retval; 2083 + } 2084 + 2085 + #ifdef CONFIG_PM 2086 + static int mv_udc_suspend(struct platform_device *_dev, pm_message_t state) 2087 + { 2088 + struct mv_udc *udc = the_controller; 2089 + 2090 + udc_stop(udc); 2091 + 2092 + return 0; 2093 + } 2094 + 2095 + static int mv_udc_resume(struct platform_device *_dev) 2096 + { 2097 + struct mv_udc *udc = the_controller; 2098 + int retval; 2099 + 2100 + retval = mv_udc_phy_init(udc->phy_regs); 2101 + if (retval) { 2102 + dev_err(&_dev->dev, "phy initialization error %d\n", retval); 2103 + return retval; 2104 + } 2105 + udc_reset(udc); 2106 + ep0_reset(udc); 2107 + udc_start(udc); 2108 + 2109 + return 0; 2110 + } 2111 + 2112 + static const struct dev_pm_ops mv_udc_pm_ops = { 2113 + .suspend = mv_udc_suspend, 2114 + .resume = mv_udc_resume, 2115 + }; 2116 + #endif
2117 + 2118 + static struct platform_driver udc_driver = { 2119 + .probe = mv_udc_probe, 2120 + .remove = __exit_p(mv_udc_remove), 2121 + .driver = { 2122 + .owner = THIS_MODULE, 2123 + .name = "pxa-u2o", 2124 + #ifdef CONFIG_PM 2125 + .pm = &mv_udc_pm_ops, 2126 + #endif 2127 + }, 2128 + }; 2129 + 2130 + 2131 + MODULE_DESCRIPTION(DRIVER_DESC); 2132 + MODULE_AUTHOR("Chao Xie <chao.xie@marvell.com>"); 2133 + MODULE_VERSION(DRIVER_VERSION); 2134 + MODULE_LICENSE("GPL"); 2135 + 2136 + 2137 + static int __init init(void) 2138 + { 2139 + return platform_driver_register(&udc_driver); 2140 + } 2141 + module_init(init); 2142 + 2143 + 2144 + static void __exit cleanup(void) 2145 + { 2146 + platform_driver_unregister(&udc_driver); 2147 + } 2148 + module_exit(cleanup); 2149 +
+214
drivers/usb/gadget/mv_udc_phy.c
··· 1 + #include <linux/delay.h> 2 + #include <linux/timer.h> 3 + #include <linux/io.h> 4 + #include <linux/errno.h> 5 + 6 + #include <mach/cputype.h> 7 + 8 + #ifdef CONFIG_ARCH_MMP 9 + 10 + #define UTMI_REVISION 0x0 11 + #define UTMI_CTRL 0x4 12 + #define UTMI_PLL 0x8 13 + #define UTMI_TX 0xc 14 + #define UTMI_RX 0x10 15 + #define UTMI_IVREF 0x14 16 + #define UTMI_T0 0x18 17 + #define UTMI_T1 0x1c 18 + #define UTMI_T2 0x20 19 + #define UTMI_T3 0x24 20 + #define UTMI_T4 0x28 21 + #define UTMI_T5 0x2c 22 + #define UTMI_RESERVE 0x30 23 + #define UTMI_USB_INT 0x34 24 + #define UTMI_DBG_CTL 0x38 25 + #define UTMI_OTG_ADDON 0x3c 26 + 27 + /* For UTMICTRL Register */ 28 + #define UTMI_CTRL_USB_CLK_EN (1 << 31) 29 + /* pxa168 */ 30 + #define UTMI_CTRL_SUSPEND_SET1 (1 << 30) 31 + #define UTMI_CTRL_SUSPEND_SET2 (1 << 29) 32 + #define UTMI_CTRL_RXBUF_PDWN (1 << 24) 33 + #define UTMI_CTRL_TXBUF_PDWN (1 << 11) 34 + 35 + #define UTMI_CTRL_INPKT_DELAY_SHIFT 30 36 + #define UTMI_CTRL_INPKT_DELAY_SOF_SHIFT 28 37 + #define UTMI_CTRL_PU_REF_SHIFT 20 38 + #define UTMI_CTRL_ARC_PULLDN_SHIFT 12 39 + #define UTMI_CTRL_PLL_PWR_UP_SHIFT 1 40 + #define UTMI_CTRL_PWR_UP_SHIFT 0 41 + /* For UTMI_PLL Register */ 42 + #define UTMI_PLL_CLK_BLK_EN_SHIFT 24 43 + #define UTMI_PLL_FBDIV_SHIFT 4 44 + #define UTMI_PLL_REFDIV_SHIFT 0 45 + #define UTMI_PLL_FBDIV_MASK 0x00000FF0 46 + #define UTMI_PLL_REFDIV_MASK 0x0000000F 47 + #define UTMI_PLL_ICP_MASK 0x00007000 48 + #define UTMI_PLL_KVCO_MASK 0x00031000 49 + #define UTMI_PLL_PLLCALI12_SHIFT 29 50 + #define UTMI_PLL_PLLCALI12_MASK (0x3 << 29) 51 + #define UTMI_PLL_PLLVDD18_SHIFT 27 52 + #define UTMI_PLL_PLLVDD18_MASK (0x3 << 27) 53 + #define UTMI_PLL_PLLVDD12_SHIFT 25 54 + #define UTMI_PLL_PLLVDD12_MASK (0x3 << 25) 55 + #define UTMI_PLL_KVCO_SHIFT 15 56 + #define UTMI_PLL_ICP_SHIFT 12 57 + /* For UTMI_TX Register */ 58 + #define UTMI_TX_REG_EXT_FS_RCAL_SHIFT 27 59 + #define UTMI_TX_REG_EXT_FS_RCAL_MASK (0xf << 27) 60 + #define 
UTMI_TX_REG_EXT_FS_RCAL_EN_MASK 26 61 + #define UTMI_TX_REG_EXT_FS_RCAL_EN (0x1 << 26) 62 + #define UTMI_TX_LOW_VDD_EN_SHIFT 11 63 + #define UTMI_TX_IMPCAL_VTH_SHIFT 14 64 + #define UTMI_TX_IMPCAL_VTH_MASK (0x7 << 14) 65 + #define UTMI_TX_CK60_PHSEL_SHIFT 17 66 + #define UTMI_TX_CK60_PHSEL_MASK (0xf << 17) 67 + #define UTMI_TX_TXVDD12_SHIFT 22 68 + #define UTMI_TX_TXVDD12_MASK (0x3 << 22) 69 + #define UTMI_TX_AMP_SHIFT 0 70 + #define UTMI_TX_AMP_MASK (0x7 << 0) 71 + /* For UTMI_RX Register */ 72 + #define UTMI_RX_SQ_THRESH_SHIFT 4 73 + #define UTMI_RX_SQ_THRESH_MASK (0xf << 4) 74 + #define UTMI_REG_SQ_LENGTH_SHIFT 15 75 + #define UTMI_REG_SQ_LENGTH_MASK (0x3 << 15) 76 + 77 + #define REG_RCAL_START 0x00001000 78 + #define VCOCAL_START 0x00200000 79 + #define KVCO_EXT 0x00400000 80 + #define PLL_READY 0x00800000 81 + #define CLK_BLK_EN 0x01000000 82 + #endif 83 + 84 + static unsigned int u2o_read(unsigned int base, unsigned int offset) 85 + { 86 + return readl(base + offset); 87 + } 88 + 89 + static void u2o_set(unsigned int base, unsigned int offset, unsigned int value) 90 + { 91 + unsigned int reg; 92 + 93 + reg = readl(base + offset); 94 + reg |= value; 95 + writel(reg, base + offset); 96 + readl(base + offset); 97 + } 98 + 99 + static void u2o_clear(unsigned int base, unsigned int offset, 100 + unsigned int value) 101 + { 102 + unsigned int reg; 103 + 104 + reg = readl(base + offset); 105 + reg &= ~value; 106 + writel(reg, base + offset); 107 + readl(base + offset); 108 + } 109 + 110 + static void u2o_write(unsigned int base, unsigned int offset, 111 + unsigned int value) 112 + { 113 + writel(value, base + offset); 114 + readl(base + offset); 115 + } 116 + 117 + #ifdef CONFIG_ARCH_MMP 118 + int mv_udc_phy_init(unsigned int base) 119 + { 120 + unsigned long timeout; 121 + 122 + /* Initialize the USB PHY power */ 123 + if (cpu_is_pxa910()) { 124 + u2o_set(base, UTMI_CTRL, (1 << UTMI_CTRL_INPKT_DELAY_SOF_SHIFT) 125 + | (1 << UTMI_CTRL_PU_REF_SHIFT)); 126 + } 127 + 
128 + u2o_set(base, UTMI_CTRL, 1 << UTMI_CTRL_PLL_PWR_UP_SHIFT); 129 + u2o_set(base, UTMI_CTRL, 1 << UTMI_CTRL_PWR_UP_SHIFT); 130 + 131 + /* UTMI_PLL settings */ 132 + u2o_clear(base, UTMI_PLL, UTMI_PLL_PLLVDD18_MASK 133 + | UTMI_PLL_PLLVDD12_MASK | UTMI_PLL_PLLCALI12_MASK 134 + | UTMI_PLL_FBDIV_MASK | UTMI_PLL_REFDIV_MASK 135 + | UTMI_PLL_ICP_MASK | UTMI_PLL_KVCO_MASK); 136 + 137 + u2o_set(base, UTMI_PLL, (0xee << UTMI_PLL_FBDIV_SHIFT) 138 + | (0xb << UTMI_PLL_REFDIV_SHIFT) 139 + | (3 << UTMI_PLL_PLLVDD18_SHIFT) 140 + | (3 << UTMI_PLL_PLLVDD12_SHIFT) 141 + | (3 << UTMI_PLL_PLLCALI12_SHIFT) 142 + | (1 << UTMI_PLL_ICP_SHIFT) | (3 << UTMI_PLL_KVCO_SHIFT)); 143 + 144 + /* UTMI_TX */ 145 + u2o_clear(base, UTMI_TX, UTMI_TX_REG_EXT_FS_RCAL_EN_MASK 146 + | UTMI_TX_TXVDD12_MASK 147 + | UTMI_TX_CK60_PHSEL_MASK | UTMI_TX_IMPCAL_VTH_MASK 148 + | UTMI_TX_REG_EXT_FS_RCAL_MASK | UTMI_TX_AMP_MASK); 149 + u2o_set(base, UTMI_TX, (3 << UTMI_TX_TXVDD12_SHIFT) 150 + | (4 << UTMI_TX_CK60_PHSEL_SHIFT) 151 + | (4 << UTMI_TX_IMPCAL_VTH_SHIFT) 152 + | (8 << UTMI_TX_REG_EXT_FS_RCAL_SHIFT) 153 + | (3 << UTMI_TX_AMP_SHIFT)); 154 + 155 + /* UTMI_RX */ 156 + u2o_clear(base, UTMI_RX, UTMI_RX_SQ_THRESH_MASK 157 + | UTMI_REG_SQ_LENGTH_MASK); 158 + if (cpu_is_pxa168()) 159 + u2o_set(base, UTMI_RX, (7 << UTMI_RX_SQ_THRESH_SHIFT) 160 + | (2 << UTMI_REG_SQ_LENGTH_SHIFT)); 161 + else 162 + u2o_set(base, UTMI_RX, (0x7 << UTMI_RX_SQ_THRESH_SHIFT) 163 + | (2 << UTMI_REG_SQ_LENGTH_SHIFT)); 164 + 165 + /* UTMI_IVREF */ 166 + if (cpu_is_pxa168()) 167 + /* 168 + * fixing Microsoft Altair board interface with NEC hub issue - 169 + * Set UTMI_IVREF from 0x4a3 to 0x4bf 170 + */ 171 + u2o_write(base, UTMI_IVREF, 0x4bf); 172 + 173 + /* calibrate */ 174 + timeout = jiffies + 100; 175 + while ((u2o_read(base, UTMI_PLL) & PLL_READY) == 0) { 176 + if (time_after(jiffies, timeout)) 177 + return -ETIME; 178 + cpu_relax(); 179 + } 180 + 181 + /* toggle VCOCAL_START bit of UTMI_PLL */ 182 + udelay(200); 183 + 
u2o_set(base, UTMI_PLL, VCOCAL_START); 184 + udelay(40); 185 + u2o_clear(base, UTMI_PLL, VCOCAL_START); 186 + 187 + /* toggle REG_RCAL_START bit of UTMI_TX */ 188 + udelay(200); 189 + u2o_set(base, UTMI_TX, REG_RCAL_START); 190 + udelay(40); 191 + u2o_clear(base, UTMI_TX, REG_RCAL_START); 192 + udelay(200); 193 + 194 + /* make sure phy is ready */ 195 + timeout = jiffies + 100; 196 + while ((u2o_read(base, UTMI_PLL) & PLL_READY) == 0) { 197 + if (time_after(jiffies, timeout)) 198 + return -ETIME; 199 + cpu_relax(); 200 + } 201 + 202 + if (cpu_is_pxa168()) { 203 + u2o_set(base, UTMI_RESERVE, 1 << 5); 204 + /* Turn on UTMI PHY OTG extension */ 205 + u2o_write(base, UTMI_OTG_ADDON, 1); 206 + } 207 + return 0; 208 + } 209 + #else 210 + int mv_udc_phy_init(unsigned int base) 211 + { 212 + return 0; 213 + } 214 + #endif
+248
drivers/usb/gadget/ncm.c
··· 1 + /* 2 + * ncm.c -- NCM gadget driver 3 + * 4 + * Copyright (C) 2010 Nokia Corporation 5 + * Contact: Yauheni Kaliuta <yauheni.kaliuta@nokia.com> 6 + * 7 + * The driver borrows from ether.c which is: 8 + * 9 + * Copyright (C) 2003-2005,2008 David Brownell 10 + * Copyright (C) 2003-2004 Robert Schwebel, Benedikt Spranger 11 + * Copyright (C) 2008 Nokia Corporation 12 + * 13 + * This program is free software; you can redistribute it and/or modify 14 + * it under the terms of the GNU General Public License as published by 15 + * the Free Software Foundation; either version 2 of the License, or 16 + * (at your option) any later version. 17 + * 18 + * This program is distributed in the hope that it will be useful, 19 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 20 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 21 + * GNU General Public License for more details. 22 + * 23 + * You should have received a copy of the GNU General Public License 24 + * along with this program; if not, write to the Free Software 25 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 26 + */ 27 + 28 + /* #define DEBUG */ 29 + /* #define VERBOSE_DEBUG */ 30 + 31 + #include <linux/kernel.h> 32 + #include <linux/utsname.h> 33 + 34 + 35 + #include "u_ether.h" 36 + 37 + #define DRIVER_DESC "NCM Gadget" 38 + 39 + /*-------------------------------------------------------------------------*/ 40 + 41 + /* 42 + * Kbuild is not very cooperative with respect to linking separately 43 + * compiled library objects into one module. So for now we won't use 44 + * separate compilation ... ensuring init/exit sections work to shrink 45 + * the runtime footprint, and giving us at least some parts of what 46 + * a "gcc --combine ... part1.c part2.c part3.c ... " build would. 
47 + */ 48 + #include "composite.c" 49 + #include "usbstring.c" 50 + #include "config.c" 51 + #include "epautoconf.c" 52 + 53 + #include "f_ncm.c" 54 + #include "u_ether.c" 55 + 56 + /*-------------------------------------------------------------------------*/ 57 + 58 + /* DO NOT REUSE THESE IDs with a protocol-incompatible driver!! Ever!! 59 + * Instead: allocate your own, using normal USB-IF procedures. 60 + */ 61 + 62 + /* Thanks to NetChip Technologies for donating this product ID. 63 + * It's for devices with only CDC Ethernet configurations. 64 + */ 65 + #define CDC_VENDOR_NUM 0x0525 /* NetChip */ 66 + #define CDC_PRODUCT_NUM 0xa4a1 /* Linux-USB Ethernet Gadget */ 67 + 68 + /*-------------------------------------------------------------------------*/ 69 + 70 + static struct usb_device_descriptor device_desc = { 71 + .bLength = sizeof device_desc, 72 + .bDescriptorType = USB_DT_DEVICE, 73 + 74 + .bcdUSB = cpu_to_le16 (0x0200), 75 + 76 + .bDeviceClass = USB_CLASS_COMM, 77 + .bDeviceSubClass = 0, 78 + .bDeviceProtocol = 0, 79 + /* .bMaxPacketSize0 = f(hardware) */ 80 + 81 + /* Vendor and product id defaults change according to what configs 82 + * we support. (As does bNumConfigurations.) These values can 83 + * also be overridden by module parameters. 84 + */ 85 + .idVendor = cpu_to_le16 (CDC_VENDOR_NUM), 86 + .idProduct = cpu_to_le16 (CDC_PRODUCT_NUM), 87 + /* .bcdDevice = f(hardware) */ 88 + /* .iManufacturer = DYNAMIC */ 89 + /* .iProduct = DYNAMIC */ 90 + /* NO SERIAL NUMBER */ 91 + .bNumConfigurations = 1, 92 + }; 93 + 94 + static struct usb_otg_descriptor otg_descriptor = { 95 + .bLength = sizeof otg_descriptor, 96 + .bDescriptorType = USB_DT_OTG, 97 + 98 + /* REVISIT SRP-only hardware is possible, although 99 + * it would not be called "OTG" ... 
100 + */ 101 + .bmAttributes = USB_OTG_SRP | USB_OTG_HNP, 102 + }; 103 + 104 + static const struct usb_descriptor_header *otg_desc[] = { 105 + (struct usb_descriptor_header *) &otg_descriptor, 106 + NULL, 107 + }; 108 + 109 + 110 + /* string IDs are assigned dynamically */ 111 + 112 + #define STRING_MANUFACTURER_IDX 0 113 + #define STRING_PRODUCT_IDX 1 114 + 115 + static char manufacturer[50]; 116 + 117 + static struct usb_string strings_dev[] = { 118 + [STRING_MANUFACTURER_IDX].s = manufacturer, 119 + [STRING_PRODUCT_IDX].s = DRIVER_DESC, 120 + { } /* end of list */ 121 + }; 122 + 123 + static struct usb_gadget_strings stringtab_dev = { 124 + .language = 0x0409, /* en-us */ 125 + .strings = strings_dev, 126 + }; 127 + 128 + static struct usb_gadget_strings *dev_strings[] = { 129 + &stringtab_dev, 130 + NULL, 131 + }; 132 + 133 + static u8 hostaddr[ETH_ALEN]; 134 + 135 + /*-------------------------------------------------------------------------*/ 136 + 137 + static int __init ncm_do_config(struct usb_configuration *c) 138 + { 139 + /* FIXME alloc iConfiguration string, set it in c->strings */ 140 + 141 + if (gadget_is_otg(c->cdev->gadget)) { 142 + c->descriptors = otg_desc; 143 + c->bmAttributes |= USB_CONFIG_ATT_WAKEUP; 144 + } 145 + 146 + return ncm_bind_config(c, hostaddr); 147 + } 148 + 149 + static struct usb_configuration ncm_config_driver = { 150 + /* .label = f(hardware) */ 151 + .label = "CDC Ethernet (NCM)", 152 + .bConfigurationValue = 1, 153 + /* .iConfiguration = DYNAMIC */ 154 + .bmAttributes = USB_CONFIG_ATT_SELFPOWER, 155 + }; 156 + 157 + /*-------------------------------------------------------------------------*/ 158 + 159 + static int __init gncm_bind(struct usb_composite_dev *cdev) 160 + { 161 + int gcnum; 162 + struct usb_gadget *gadget = cdev->gadget; 163 + int status; 164 + 165 + /* set up network link layer */ 166 + status = gether_setup(cdev->gadget, hostaddr); 167 + if (status < 0) 168 + return status; 169 + 170 + gcnum = 
usb_gadget_controller_number(gadget); 171 + if (gcnum >= 0) 172 + device_desc.bcdDevice = cpu_to_le16(0x0300 | gcnum); 173 + else { 174 + /* We assume that can_support_ecm() tells the truth; 175 + * but if the controller isn't recognized at all then 176 + * that assumption is a bit more likely to be wrong. 177 + */ 178 + dev_warn(&gadget->dev, 179 + "controller '%s' not recognized; trying %s\n", 180 + gadget->name, 181 + ncm_config_driver.label); 182 + device_desc.bcdDevice = 183 + cpu_to_le16(0x0300 | 0x0099); 184 + } 185 + 186 + 187 + /* Allocate string descriptor numbers ... note that string 188 + * contents can be overridden by the composite_dev glue. 189 + */ 190 + 191 + /* device descriptor strings: manufacturer, product */ 192 + snprintf(manufacturer, sizeof manufacturer, "%s %s with %s", 193 + init_utsname()->sysname, init_utsname()->release, 194 + gadget->name); 195 + status = usb_string_id(cdev); 196 + if (status < 0) 197 + goto fail; 198 + strings_dev[STRING_MANUFACTURER_IDX].id = status; 199 + device_desc.iManufacturer = status; 200 + 201 + status = usb_string_id(cdev); 202 + if (status < 0) 203 + goto fail; 204 + strings_dev[STRING_PRODUCT_IDX].id = status; 205 + device_desc.iProduct = status; 206 + 207 + status = usb_add_config(cdev, &ncm_config_driver, 208 + ncm_do_config); 209 + if (status < 0) 210 + goto fail; 211 + 212 + dev_info(&gadget->dev, "%s\n", DRIVER_DESC); 213 + 214 + return 0; 215 + 216 + fail: 217 + gether_cleanup(); 218 + return status; 219 + } 220 + 221 + static int __exit gncm_unbind(struct usb_composite_dev *cdev) 222 + { 223 + gether_cleanup(); 224 + return 0; 225 + } 226 + 227 + static struct usb_composite_driver ncm_driver = { 228 + .name = "g_ncm", 229 + .dev = &device_desc, 230 + .strings = dev_strings, 231 + .unbind = __exit_p(gncm_unbind), 232 + }; 233 + 234 + MODULE_DESCRIPTION(DRIVER_DESC); 235 + MODULE_AUTHOR("Yauheni Kaliuta"); 236 + MODULE_LICENSE("GPL"); 237 + 238 + static int __init init(void) 239 + { 240 + return 
usb_composite_probe(&ncm_driver, gncm_bind); 241 + } 242 + module_init(init); 243 + 244 + static void __exit cleanup(void) 245 + { 246 + usb_composite_unregister(&ncm_driver); 247 + } 248 + module_exit(cleanup);
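gncm_bind() above allocates its manufacturer and product string indices dynamically with usb_string_id(), which hands out the next unused index per composite device (index 0 is reserved for "no string"), and falls back to bcdDevice 0x0399 when the controller is unrecognized. A rough user-space sketch of both patterns (the struct, the exhaustion limit, and the function names are assumptions for illustration, not the composite framework's API):

```c
#include <assert.h>

/* Hypothetical stand-in for struct usb_composite_dev's id bookkeeping. */
struct sim_cdev {
	unsigned next_string_id;   /* 0 means "no string", so ids start at 1 */
};

/* Models usb_string_id(): return the next free index, or -1 on exhaustion. */
static int sim_usb_string_id(struct sim_cdev *cdev)
{
	if (cdev->next_string_id == 0)
		cdev->next_string_id = 1;      /* skip the reserved index */
	if (cdev->next_string_id > 254)        /* assumed one-byte index limit */
		return -1;
	return cdev->next_string_id++;
}

/* Models the bcdDevice choice: 0x03xx with the controller number when
 * recognized, else the 0x0399 fallback used in gncm_bind(). */
static unsigned short sim_bcd_device(int gcnum)
{
	if (gcnum >= 0)
		return (unsigned short)(0x0300 | gcnum);
	return 0x0300 | 0x0099;
}
```

Each successive call returns a fresh index, which the driver stores both in strings_dev[] and in the device descriptor's iManufacturer/iProduct fields.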
+2947
drivers/usb/gadget/pch_udc.c
··· 1 + /* 2 + * Copyright (C) 2010 OKI SEMICONDUCTOR CO., LTD. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; version 2 of the License. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + * 13 + * You should have received a copy of the GNU General Public License 14 + * along with this program; if not, write to the Free Software 15 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. 16 + */ 17 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 18 + #include <linux/kernel.h> 19 + #include <linux/module.h> 20 + #include <linux/pci.h> 21 + #include <linux/delay.h> 22 + #include <linux/errno.h> 23 + #include <linux/list.h> 24 + #include <linux/interrupt.h> 25 + #include <linux/usb/ch9.h> 26 + #include <linux/usb/gadget.h> 27 + 28 + /* Address offset of Registers */ 29 + #define UDC_EP_REG_SHIFT 0x20 /* Offset to next EP */ 30 + 31 + #define UDC_EPCTL_ADDR 0x00 /* Endpoint control */ 32 + #define UDC_EPSTS_ADDR 0x04 /* Endpoint status */ 33 + #define UDC_BUFIN_FRAMENUM_ADDR 0x08 /* buffer size in / frame number out */ 34 + #define UDC_BUFOUT_MAXPKT_ADDR 0x0C /* buffer size out / maxpkt in */ 35 + #define UDC_SUBPTR_ADDR 0x10 /* setup buffer pointer */ 36 + #define UDC_DESPTR_ADDR 0x14 /* Data descriptor pointer */ 37 + #define UDC_CONFIRM_ADDR 0x18 /* Write/Read confirmation */ 38 + 39 + #define UDC_DEVCFG_ADDR 0x400 /* Device configuration */ 40 + #define UDC_DEVCTL_ADDR 0x404 /* Device control */ 41 + #define UDC_DEVSTS_ADDR 0x408 /* Device status */ 42 + #define UDC_DEVIRQSTS_ADDR 0x40C /* Device irq status */ 43 + #define UDC_DEVIRQMSK_ADDR 0x410 /* Device irq mask */ 44 + #define 
UDC_EPIRQSTS_ADDR 0x414 /* Endpoint irq status */ 45 + #define UDC_EPIRQMSK_ADDR 0x418 /* Endpoint irq mask */ 46 + #define UDC_DEVLPM_ADDR 0x41C /* LPM control / status */ 47 + #define UDC_CSR_BUSY_ADDR 0x4f0 /* UDC_CSR_BUSY Status register */ 48 + #define UDC_SRST_ADDR 0x4fc /* SOFT RESET register */ 49 + #define UDC_CSR_ADDR 0x500 /* USB_DEVICE endpoint register */ 50 + 51 + /* Endpoint control register */ 52 + /* Bit position */ 53 + #define UDC_EPCTL_MRXFLUSH (1 << 12) 54 + #define UDC_EPCTL_RRDY (1 << 9) 55 + #define UDC_EPCTL_CNAK (1 << 8) 56 + #define UDC_EPCTL_SNAK (1 << 7) 57 + #define UDC_EPCTL_NAK (1 << 6) 58 + #define UDC_EPCTL_P (1 << 3) 59 + #define UDC_EPCTL_F (1 << 1) 60 + #define UDC_EPCTL_S (1 << 0) 61 + #define UDC_EPCTL_ET_SHIFT 4 62 + /* Mask pattern */ 63 + #define UDC_EPCTL_ET_MASK 0x00000030 64 + /* Value for ET field */ 65 + #define UDC_EPCTL_ET_CONTROL 0 66 + #define UDC_EPCTL_ET_ISO 1 67 + #define UDC_EPCTL_ET_BULK 2 68 + #define UDC_EPCTL_ET_INTERRUPT 3 69 + 70 + /* Endpoint status register */ 71 + /* Bit position */ 72 + #define UDC_EPSTS_XFERDONE (1 << 27) 73 + #define UDC_EPSTS_RSS (1 << 26) 74 + #define UDC_EPSTS_RCS (1 << 25) 75 + #define UDC_EPSTS_TXEMPTY (1 << 24) 76 + #define UDC_EPSTS_TDC (1 << 10) 77 + #define UDC_EPSTS_HE (1 << 9) 78 + #define UDC_EPSTS_MRXFIFO_EMP (1 << 8) 79 + #define UDC_EPSTS_BNA (1 << 7) 80 + #define UDC_EPSTS_IN (1 << 6) 81 + #define UDC_EPSTS_OUT_SHIFT 4 82 + /* Mask pattern */ 83 + #define UDC_EPSTS_OUT_MASK 0x00000030 84 + #define UDC_EPSTS_ALL_CLR_MASK 0x1F0006F0 85 + /* Value for OUT field */ 86 + #define UDC_EPSTS_OUT_SETUP 2 87 + #define UDC_EPSTS_OUT_DATA 1 88 + 89 + /* Device configuration register */ 90 + /* Bit position */ 91 + #define UDC_DEVCFG_CSR_PRG (1 << 17) 92 + #define UDC_DEVCFG_SP (1 << 3) 93 + /* SPD Value */ 94 + #define UDC_DEVCFG_SPD_HS 0x0 95 + #define UDC_DEVCFG_SPD_FS 0x1 96 + #define UDC_DEVCFG_SPD_LS 0x2 97 + 98 + /* Device control register */ 99 + /* Bit position */ 100 +
#define UDC_DEVCTL_THLEN_SHIFT 24 101 + #define UDC_DEVCTL_BRLEN_SHIFT 16 102 + #define UDC_DEVCTL_CSR_DONE (1 << 13) 103 + #define UDC_DEVCTL_SD (1 << 10) 104 + #define UDC_DEVCTL_MODE (1 << 9) 105 + #define UDC_DEVCTL_BREN (1 << 8) 106 + #define UDC_DEVCTL_THE (1 << 7) 107 + #define UDC_DEVCTL_DU (1 << 4) 108 + #define UDC_DEVCTL_TDE (1 << 3) 109 + #define UDC_DEVCTL_RDE (1 << 2) 110 + #define UDC_DEVCTL_RES (1 << 0) 111 + 112 + /* Device status register */ 113 + /* Bit position */ 114 + #define UDC_DEVSTS_TS_SHIFT 18 115 + #define UDC_DEVSTS_ENUM_SPEED_SHIFT 13 116 + #define UDC_DEVSTS_ALT_SHIFT 8 117 + #define UDC_DEVSTS_INTF_SHIFT 4 118 + #define UDC_DEVSTS_CFG_SHIFT 0 119 + /* Mask pattern */ 120 + #define UDC_DEVSTS_TS_MASK 0xfffc0000 121 + #define UDC_DEVSTS_ENUM_SPEED_MASK 0x00006000 122 + #define UDC_DEVSTS_ALT_MASK 0x00000f00 123 + #define UDC_DEVSTS_INTF_MASK 0x000000f0 124 + #define UDC_DEVSTS_CFG_MASK 0x0000000f 125 + /* value for maximum speed for SPEED field */ 126 + #define UDC_DEVSTS_ENUM_SPEED_FULL 1 127 + #define UDC_DEVSTS_ENUM_SPEED_HIGH 0 128 + #define UDC_DEVSTS_ENUM_SPEED_LOW 2 129 + #define UDC_DEVSTS_ENUM_SPEED_FULLX 3 130 + 131 + /* Device irq register */ 132 + /* Bit position */ 133 + #define UDC_DEVINT_RWKP (1 << 7) 134 + #define UDC_DEVINT_ENUM (1 << 6) 135 + #define UDC_DEVINT_SOF (1 << 5) 136 + #define UDC_DEVINT_US (1 << 4) 137 + #define UDC_DEVINT_UR (1 << 3) 138 + #define UDC_DEVINT_ES (1 << 2) 139 + #define UDC_DEVINT_SI (1 << 1) 140 + #define UDC_DEVINT_SC (1 << 0) 141 + /* Mask pattern */ 142 + #define UDC_DEVINT_MSK 0x7f 143 + 144 + /* Endpoint irq register */ 145 + /* Bit position */ 146 + #define UDC_EPINT_IN_SHIFT 0 147 + #define UDC_EPINT_OUT_SHIFT 16 148 + #define UDC_EPINT_IN_EP0 (1 << 0) 149 + #define UDC_EPINT_OUT_EP0 (1 << 16) 150 + /* Mask pattern */ 151 + #define UDC_EPINT_MSK_DISABLE_ALL 0xffffffff 152 + 153 + /* UDC_CSR_BUSY Status register */ 154 + /* Bit position */ 155 + #define UDC_CSR_BUSY (1 << 0) 156 + 157 +
/* SOFT RESET register */ 158 + /* Bit position */ 159 + #define UDC_PSRST (1 << 1) 160 + #define UDC_SRST (1 << 0) 161 + 162 + /* USB_DEVICE endpoint register */ 163 + /* Bit position */ 164 + #define UDC_CSR_NE_NUM_SHIFT 0 165 + #define UDC_CSR_NE_DIR_SHIFT 4 166 + #define UDC_CSR_NE_TYPE_SHIFT 5 167 + #define UDC_CSR_NE_CFG_SHIFT 7 168 + #define UDC_CSR_NE_INTF_SHIFT 11 169 + #define UDC_CSR_NE_ALT_SHIFT 15 170 + #define UDC_CSR_NE_MAX_PKT_SHIFT 19 171 + /* Mask pattern */ 172 + #define UDC_CSR_NE_NUM_MASK 0x0000000f 173 + #define UDC_CSR_NE_DIR_MASK 0x00000010 174 + #define UDC_CSR_NE_TYPE_MASK 0x00000060 175 + #define UDC_CSR_NE_CFG_MASK 0x00000780 176 + #define UDC_CSR_NE_INTF_MASK 0x00007800 177 + #define UDC_CSR_NE_ALT_MASK 0x00078000 178 + #define UDC_CSR_NE_MAX_PKT_MASK 0x3ff80000 179 + 180 + #define PCH_UDC_CSR(ep) (UDC_CSR_ADDR + ep*4) 181 + #define PCH_UDC_EPINT(in, num)\ 182 + (1 << (num + (in ? UDC_EPINT_IN_SHIFT : UDC_EPINT_OUT_SHIFT))) 183 + 184 + /* Index of endpoint */ 185 + #define UDC_EP0IN_IDX 0 186 + #define UDC_EP0OUT_IDX 1 187 + #define UDC_EPIN_IDX(ep) (ep * 2) 188 + #define UDC_EPOUT_IDX(ep) (ep * 2 + 1) 189 + #define PCH_UDC_EP0 0 190 + #define PCH_UDC_EP1 1 191 + #define PCH_UDC_EP2 2 192 + #define PCH_UDC_EP3 3 193 + 194 + /* Number of endpoints */ 195 + #define PCH_UDC_EP_NUM 32 /* Total number of EPs (16 IN,16 OUT) */ 196 + #define PCH_UDC_USED_EP_NUM 4 /* Number of EPs actually used */ 197 + /* Length Value */ 198 + #define PCH_UDC_BRLEN 0x0F /* Burst length */ 199 + #define PCH_UDC_THLEN 0x1F /* Threshold length */ 200 + /* Value of EP Buffer Size */ 201 + #define UDC_EP0IN_BUFF_SIZE 64 202 + #define UDC_EPIN_BUFF_SIZE 512 203 + #define UDC_EP0OUT_BUFF_SIZE 64 204 + #define UDC_EPOUT_BUFF_SIZE 512 205 + /* Value of EP maximum packet size */ 206 + #define UDC_EP0IN_MAX_PKT_SIZE 64 207 + #define UDC_EP0OUT_MAX_PKT_SIZE 64 208 + #define UDC_BULK_MAX_PKT_SIZE 512 209 + 210 + /* DMA */ 211 + #define DMA_DIR_RX 1 /* DMA for data receive
*/ 212 + #define DMA_DIR_TX 2 /* DMA for data transmit */ 213 + #define DMA_ADDR_INVALID (~(dma_addr_t)0) 214 + #define UDC_DMA_MAXPACKET 65536 /* maximum packet size for DMA */ 215 + 216 + /** 217 + * struct pch_udc_data_dma_desc - Structure to hold DMA descriptor information 218 + * for data 219 + * @status: Status quadlet 220 + * @reserved: Reserved 221 + * @dataptr: Buffer descriptor 222 + * @next: Next descriptor 223 + */ 224 + struct pch_udc_data_dma_desc { 225 + u32 status; 226 + u32 reserved; 227 + u32 dataptr; 228 + u32 next; 229 + }; 230 + 231 + /** 232 + * struct pch_udc_stp_dma_desc - Structure to hold DMA descriptor information 233 + * for control data 234 + * @status: Status 235 + * @reserved: Reserved 236 + * @request: embedded setup request 237 + * (holds both setup data words) 238 + */ 239 + struct pch_udc_stp_dma_desc { 240 + u32 status; 241 + u32 reserved; 242 + struct usb_ctrlrequest request; 243 + } __attribute((packed)); 244 + 245 + /* DMA status definitions */ 246 + /* Buffer status */ 247 + #define PCH_UDC_BUFF_STS 0xC0000000 248 + #define PCH_UDC_BS_HST_RDY 0x00000000 249 + #define PCH_UDC_BS_DMA_BSY 0x40000000 250 + #define PCH_UDC_BS_DMA_DONE 0x80000000 251 + #define PCH_UDC_BS_HST_BSY 0xC0000000 252 + /* Rx/Tx Status */ 253 + #define PCH_UDC_RXTX_STS 0x30000000 254 + #define PCH_UDC_RTS_SUCC 0x00000000 255 + #define PCH_UDC_RTS_DESERR 0x10000000 256 + #define PCH_UDC_RTS_BUFERR 0x30000000 257 + /* Last Descriptor Indication */ 258 + #define PCH_UDC_DMA_LAST 0x08000000 259 + /* Number of Rx/Tx Bytes Mask */ 260 + #define PCH_UDC_RXTX_BYTES 0x0000ffff 261 + 262 + /** 263 + * struct pch_udc_cfg_data - Structure to hold current configuration 264 + * and interface information 265 + * @cur_cfg: current configuration in use 266 + * @cur_intf: current interface in use 267 + * @cur_alt: current alt interface in use 268 + */ 269 + struct pch_udc_cfg_data { 270 + u16 cur_cfg; 271 + u16 cur_intf; 272 + u16 cur_alt; 273 + }; 274 + 275 + /** 276 + * struct
pch_udc_ep - Structure holding a PCH USB device Endpoint information 277 + * @ep: embedded usb endpoint 278 + * @td_stp_phys: for setup request 279 + * @td_data_phys: for data request 280 + * @td_stp: for setup request 281 + * @td_data: for data request 282 + * @dev: reference to device struct 283 + * @offset_addr: offset address of ep register 284 + * @desc: for this ep 285 + * @queue: queue for requests 286 + * @num: endpoint number 287 + * @in: endpoint is IN 288 + * @halted: endpoint halted? 289 + * @epsts: Endpoint status 290 + */ 291 + struct pch_udc_ep { 292 + struct usb_ep ep; 293 + dma_addr_t td_stp_phys; 294 + dma_addr_t td_data_phys; 295 + struct pch_udc_stp_dma_desc *td_stp; 296 + struct pch_udc_data_dma_desc *td_data; 297 + struct pch_udc_dev *dev; 298 + unsigned long offset_addr; 299 + const struct usb_endpoint_descriptor *desc; 300 + struct list_head queue; 301 + unsigned num:5, 302 + in:1, 303 + halted:1; 304 + unsigned long epsts; 305 + }; 306 + 307 + /** 308 + * struct pch_udc_dev - Structure holding complete information 309 + * of the PCH USB device 310 + * @gadget: gadget driver data 311 + * @driver: reference to gadget driver bound 312 + * @pdev: reference to the PCI device 313 + * @ep: array of endpoints 314 + * @lock: protects all state 315 + * @active: PCI device is enabled 316 + * @stall: stall requested 317 + * @prot_stall: protocol stall requested 318 + * @irq_registered: irq registered with system 319 + * @mem_region: device memory mapped 320 + * @registered: driver registered with system 321 + * @suspended: driver in suspended state 322 + * @connected: gadget driver associated 323 + * @set_cfg_not_acked: pending acknowledgement for setup 324 + * @waiting_zlp_ack: pending acknowledgement for ZLP 325 + * @data_requests: DMA pool for data requests 326 + * @stp_requests: DMA pool for setup requests 327 + * @dma_addr: DMA pool for received 328 + * @ep0out_buf: Buffer for DMA 329 + * @setup_data: Received setup data 330 + * @phys_addr: of device
memory 331 + * @base_addr: for mapped device memory 332 + * @irq: IRQ line for the device 333 + * @cfg_data: current cfg, intf, and alt in use 334 + */ 335 + struct pch_udc_dev { 336 + struct usb_gadget gadget; 337 + struct usb_gadget_driver *driver; 338 + struct pci_dev *pdev; 339 + struct pch_udc_ep ep[PCH_UDC_EP_NUM]; 340 + spinlock_t lock; /* protects all state */ 341 + unsigned active:1, 342 + stall:1, 343 + prot_stall:1, 344 + irq_registered:1, 345 + mem_region:1, 346 + registered:1, 347 + suspended:1, 348 + connected:1, 349 + set_cfg_not_acked:1, 350 + waiting_zlp_ack:1; 351 + struct pci_pool *data_requests; 352 + struct pci_pool *stp_requests; 353 + dma_addr_t dma_addr; 354 + unsigned long ep0out_buf[64]; 355 + struct usb_ctrlrequest setup_data; 356 + unsigned long phys_addr; 357 + void __iomem *base_addr; 358 + unsigned irq; 359 + struct pch_udc_cfg_data cfg_data; 360 + }; 361 + 362 + #define PCH_UDC_PCI_BAR 1 363 + #define PCI_DEVICE_ID_INTEL_EG20T_UDC 0x8808 364 + 365 + static const char ep0_string[] = "ep0in"; 366 + static DEFINE_SPINLOCK(udc_stall_spinlock); /* stall spin lock */ 367 + struct pch_udc_dev *pch_udc; /* pointer to device object */ 368 + 369 + static int speed_fs; 370 + module_param_named(speed_fs, speed_fs, bool, S_IRUGO); 371 + MODULE_PARM_DESC(speed_fs, "true for Full speed operation"); 372 + 373 + /** 374 + * struct pch_udc_request - Structure holding a PCH USB device request packet 375 + * @req: embedded ep request 376 + * @td_data_phys: phys. address 377 + * @td_data: first dma desc. of chain 378 + * @td_data_last: last dma desc. 
of chain 379 + * @queue: associated queue 380 + * @dma_going: DMA in progress for request 381 + * @dma_mapped: DMA memory mapped for request 382 + * @dma_done: DMA completed for request 383 + * @chain_len: chain length 384 + */ 385 + struct pch_udc_request { 386 + struct usb_request req; 387 + dma_addr_t td_data_phys; 388 + struct pch_udc_data_dma_desc *td_data; 389 + struct pch_udc_data_dma_desc *td_data_last; 390 + struct list_head queue; 391 + unsigned dma_going:1, 392 + dma_mapped:1, 393 + dma_done:1; 394 + unsigned chain_len; 395 + }; 396 + 397 + static inline u32 pch_udc_readl(struct pch_udc_dev *dev, unsigned long reg) 398 + { 399 + return ioread32(dev->base_addr + reg); 400 + } 401 + 402 + static inline void pch_udc_writel(struct pch_udc_dev *dev, 403 + unsigned long val, unsigned long reg) 404 + { 405 + iowrite32(val, dev->base_addr + reg); 406 + } 407 + 408 + static inline void pch_udc_bit_set(struct pch_udc_dev *dev, 409 + unsigned long reg, 410 + unsigned long bitmask) 411 + { 412 + pch_udc_writel(dev, pch_udc_readl(dev, reg) | bitmask, reg); 413 + } 414 + 415 + static inline void pch_udc_bit_clr(struct pch_udc_dev *dev, 416 + unsigned long reg, 417 + unsigned long bitmask) 418 + { 419 + pch_udc_writel(dev, pch_udc_readl(dev, reg) & ~(bitmask), reg); 420 + } 421 + 422 + static inline u32 pch_udc_ep_readl(struct pch_udc_ep *ep, unsigned long reg) 423 + { 424 + return ioread32(ep->dev->base_addr + ep->offset_addr + reg); 425 + } 426 + 427 + static inline void pch_udc_ep_writel(struct pch_udc_ep *ep, 428 + unsigned long val, unsigned long reg) 429 + { 430 + iowrite32(val, ep->dev->base_addr + ep->offset_addr + reg); 431 + } 432 + 433 + static inline void pch_udc_ep_bit_set(struct pch_udc_ep *ep, 434 + unsigned long reg, 435 + unsigned long bitmask) 436 + { 437 + pch_udc_ep_writel(ep, pch_udc_ep_readl(ep, reg) | bitmask, reg); 438 + } 439 + 440 + static inline void pch_udc_ep_bit_clr(struct pch_udc_ep *ep, 441 + unsigned long reg, 442 + unsigned long 
bitmask) 443 + { 444 + pch_udc_ep_writel(ep, pch_udc_ep_readl(ep, reg) & ~(bitmask), reg); 445 + } 446 + 447 + /** 448 + * pch_udc_csr_busy() - Wait till idle. 449 + * @dev: Reference to pch_udc_dev structure 450 + */ 451 + static void pch_udc_csr_busy(struct pch_udc_dev *dev) 452 + { 453 + unsigned int count = 200; 454 + 455 + /* Wait till idle */ 456 + while ((pch_udc_readl(dev, UDC_CSR_BUSY_ADDR) & UDC_CSR_BUSY) 457 + && --count) 458 + cpu_relax(); 459 + if (!count) 460 + dev_err(&dev->pdev->dev, "%s: wait error\n", __func__); 461 + } 462 + 463 + /** 464 + * pch_udc_write_csr() - Write the command and status registers. 465 + * @dev: Reference to pch_udc_dev structure 466 + * @val: value to be written to CSR register 467 + * @ep: endpoint number selecting the CSR register 468 + */ 469 + static void pch_udc_write_csr(struct pch_udc_dev *dev, unsigned long val, 470 + unsigned int ep) 471 + { 472 + unsigned long reg = PCH_UDC_CSR(ep); 473 + 474 + pch_udc_csr_busy(dev); /* Wait till idle */ 475 + pch_udc_writel(dev, val, reg); 476 + pch_udc_csr_busy(dev); /* Wait till idle */ 477 + } 478 + 479 + /** 480 + * pch_udc_read_csr() - Read the command and status registers.
481 + * @dev: Reference to pch_udc_dev structure 482 + * @ep: endpoint number selecting the CSR register 483 + * 484 + * Return codes: content of CSR register 485 + */ 486 + static u32 pch_udc_read_csr(struct pch_udc_dev *dev, unsigned int ep) 487 + { 488 + unsigned long reg = PCH_UDC_CSR(ep); 489 + 490 + pch_udc_csr_busy(dev); /* Wait till idle */ 491 + pch_udc_readl(dev, reg); /* Dummy read */ 492 + pch_udc_csr_busy(dev); /* Wait till idle */ 493 + return pch_udc_readl(dev, reg); 494 + } 495 + 496 + /** 497 + * pch_udc_rmt_wakeup() - Initiate for remote wakeup 498 + * @dev: Reference to pch_udc_dev structure 499 + */ 500 + static inline void pch_udc_rmt_wakeup(struct pch_udc_dev *dev) 501 + { 502 + pch_udc_bit_set(dev, UDC_DEVCTL_ADDR, UDC_DEVCTL_RES); 503 + mdelay(1); 504 + pch_udc_bit_clr(dev, UDC_DEVCTL_ADDR, UDC_DEVCTL_RES); 505 + } 506 + 507 + /** 508 + * pch_udc_get_frame() - Get the current frame from device status register 509 + * @dev: Reference to pch_udc_dev structure 510 + * Return current frame 511 + */ 512 + static inline int pch_udc_get_frame(struct pch_udc_dev *dev) 513 + { 514 + u32 frame = pch_udc_readl(dev, UDC_DEVSTS_ADDR); 515 + return (frame & UDC_DEVSTS_TS_MASK) >> UDC_DEVSTS_TS_SHIFT; 516 + } 517 + 518 + /** 519 + * pch_udc_clear_selfpowered() - Clear the self power control 520 + * @dev: Reference to pch_udc_regs structure 521 + */ 522 + static inline void pch_udc_clear_selfpowered(struct pch_udc_dev *dev) 523 + { 524 + pch_udc_bit_clr(dev, UDC_DEVCFG_ADDR, UDC_DEVCFG_SP); 525 + } 526 + 527 + /** 528 + * pch_udc_set_selfpowered() - Set the self power control 529 + * @dev: Reference to pch_udc_regs structure 530 + */ 531 + static inline void pch_udc_set_selfpowered(struct pch_udc_dev *dev) 532 + { 533 + pch_udc_bit_set(dev, UDC_DEVCFG_ADDR, UDC_DEVCFG_SP); 534 + } 535 + 536 + /** 537 + * pch_udc_set_disconnect() - Set the disconnect status.
538 + * @dev: Reference to pch_udc_regs structure 539 + */ 540 + static inline void pch_udc_set_disconnect(struct pch_udc_dev *dev) 541 + { 542 + pch_udc_bit_set(dev, UDC_DEVCTL_ADDR, UDC_DEVCTL_SD); 543 + } 544 + 545 + /** 546 + * pch_udc_clear_disconnect() - Clear the disconnect status. 547 + * @dev: Reference to pch_udc_regs structure 548 + */ 549 + static void pch_udc_clear_disconnect(struct pch_udc_dev *dev) 550 + { 551 + /* Clear the disconnect */ 552 + pch_udc_bit_set(dev, UDC_DEVCTL_ADDR, UDC_DEVCTL_RES); 553 + pch_udc_bit_clr(dev, UDC_DEVCTL_ADDR, UDC_DEVCTL_SD); 554 + mdelay(1); 555 + /* Resume USB signalling */ 556 + pch_udc_bit_clr(dev, UDC_DEVCTL_ADDR, UDC_DEVCTL_RES); 557 + } 558 + 559 + /** 560 + * pch_udc_vbus_session() - set or clear the disconnect status. 561 + * @dev: Reference to pch_udc_regs structure 562 + * @is_active: Parameter specifying the action 563 + * 0: indicating VBUS power is ending 564 + * !0: indicating VBUS power is starting 565 + */ 566 + static inline void pch_udc_vbus_session(struct pch_udc_dev *dev, 567 + int is_active) 568 + { 569 + if (is_active) 570 + pch_udc_clear_disconnect(dev); 571 + else 572 + pch_udc_set_disconnect(dev); 573 + } 574 + 575 + /** 576 + * pch_udc_ep_set_stall() - Set the stall of endpoint 577 + * @ep: Reference to structure of type pch_udc_ep_regs 578 + */ 579 + static void pch_udc_ep_set_stall(struct pch_udc_ep *ep) 580 + { 581 + if (ep->in) { 582 + pch_udc_ep_bit_set(ep, UDC_EPCTL_ADDR, UDC_EPCTL_F); 583 + pch_udc_ep_bit_set(ep, UDC_EPCTL_ADDR, UDC_EPCTL_S); 584 + } else { 585 + pch_udc_ep_bit_set(ep, UDC_EPCTL_ADDR, UDC_EPCTL_S); 586 + } 587 + } 588 + 589 + /** 590 + * pch_udc_ep_clear_stall() - Clear the stall of endpoint 591 + * @ep: Reference to structure of type pch_udc_ep_regs 592 + */ 593 + static inline void pch_udc_ep_clear_stall(struct pch_udc_ep *ep) 594 + { 595 + /* Clear the stall */ 596 + pch_udc_ep_bit_clr(ep, UDC_EPCTL_ADDR, UDC_EPCTL_S); 597 + /* Clear NAK by writing CNAK */ 598 +
pch_udc_ep_bit_set(ep, UDC_EPCTL_ADDR, UDC_EPCTL_CNAK); 599 + } 600 + 601 + /** 602 + * pch_udc_ep_set_trfr_type() - Set the transfer type of endpoint 603 + * @ep: Reference to structure of type pch_udc_ep_regs 604 + * @type: Type of endpoint 605 + */ 606 + static inline void pch_udc_ep_set_trfr_type(struct pch_udc_ep *ep, 607 + u8 type) 608 + { 609 + pch_udc_ep_writel(ep, ((type << UDC_EPCTL_ET_SHIFT) & 610 + UDC_EPCTL_ET_MASK), UDC_EPCTL_ADDR); 611 + } 612 + 613 + /** 614 + * pch_udc_ep_set_bufsz() - Set the buffer size for the endpoint 615 + * @ep: Reference to structure of type pch_udc_ep_regs 616 + * @buf_size: The buffer size (@ep_in selects the IN or OUT register) 617 + */ 618 + static void pch_udc_ep_set_bufsz(struct pch_udc_ep *ep, 619 + u32 buf_size, u32 ep_in) 620 + { 621 + u32 data; 622 + if (ep_in) { 623 + data = pch_udc_ep_readl(ep, UDC_BUFIN_FRAMENUM_ADDR); 624 + data = (data & 0xffff0000) | (buf_size & 0xffff); 625 + pch_udc_ep_writel(ep, data, UDC_BUFIN_FRAMENUM_ADDR); 626 + } else { 627 + data = pch_udc_ep_readl(ep, UDC_BUFOUT_MAXPKT_ADDR); 628 + data = (buf_size << 16) | (data & 0xffff); 629 + pch_udc_ep_writel(ep, data, UDC_BUFOUT_MAXPKT_ADDR); 630 + } 631 + } 632 + 633 + /** 634 + * pch_udc_ep_set_maxpkt() - Set the Max packet size for the endpoint 635 + * @ep: Reference to structure of type pch_udc_ep_regs 636 + * @pkt_size: The packet size 637 + */ 638 + static void pch_udc_ep_set_maxpkt(struct pch_udc_ep *ep, u32 pkt_size) 639 + { 640 + u32 data = pch_udc_ep_readl(ep, UDC_BUFOUT_MAXPKT_ADDR); 641 + data = (data & 0xffff0000) | (pkt_size & 0xffff); 642 + pch_udc_ep_writel(ep, data, UDC_BUFOUT_MAXPKT_ADDR); 643 + } 644 + 645 + /** 646 + * pch_udc_ep_set_subptr() - Set the Setup buffer pointer for the endpoint 647 + * @ep: Reference to structure of type pch_udc_ep_regs 648 + * @addr: Address of the register 649 + */ 650 + static inline void pch_udc_ep_set_subptr(struct pch_udc_ep *ep, u32 addr) 651 + { 652 + pch_udc_ep_writel(ep, addr, UDC_SUBPTR_ADDR); 653 + } 654 + 655 +
/** 656 + * pch_udc_ep_set_ddptr() - Set the Data descriptor pointer for the endpoint 657 + * @ep: Reference to structure of type pch_udc_ep_regs 658 + * @addr: Address of the register 659 + */ 660 + static inline void pch_udc_ep_set_ddptr(struct pch_udc_ep *ep, u32 addr) 661 + { 662 + pch_udc_ep_writel(ep, addr, UDC_DESPTR_ADDR); 663 + } 664 + 665 + /** 666 + * pch_udc_ep_set_pd() - Set the poll demand bit for the endpoint 667 + * @ep: Reference to structure of type pch_udc_ep_regs 668 + */ 669 + static inline void pch_udc_ep_set_pd(struct pch_udc_ep *ep) 670 + { 671 + pch_udc_ep_bit_set(ep, UDC_EPCTL_ADDR, UDC_EPCTL_P); 672 + } 673 + 674 + /** 675 + * pch_udc_ep_set_rrdy() - Set the receive ready bit for the endpoint 676 + * @ep: Reference to structure of type pch_udc_ep_regs 677 + */ 678 + static inline void pch_udc_ep_set_rrdy(struct pch_udc_ep *ep) 679 + { 680 + pch_udc_ep_bit_set(ep, UDC_EPCTL_ADDR, UDC_EPCTL_RRDY); 681 + } 682 + 683 + /** 684 + * pch_udc_ep_clear_rrdy() - Clear the receive ready bit for the endpoint 685 + * @ep: Reference to structure of type pch_udc_ep_regs 686 + */ 687 + static inline void pch_udc_ep_clear_rrdy(struct pch_udc_ep *ep) 688 + { 689 + pch_udc_ep_bit_clr(ep, UDC_EPCTL_ADDR, UDC_EPCTL_RRDY); 690 + } 691 + 692 + /** 693 + * pch_udc_set_dma() - Set the 'TDE' or RDE bit of device control 694 + * register depending on the direction specified 695 + * @dev: Reference to structure of type pch_udc_regs 696 + * @dir: whether Tx or Rx 697 + * DMA_DIR_RX: Receive 698 + * DMA_DIR_TX: Transmit 699 + */ 700 + static inline void pch_udc_set_dma(struct pch_udc_dev *dev, int dir) 701 + { 702 + if (dir == DMA_DIR_RX) 703 + pch_udc_bit_set(dev, UDC_DEVCTL_ADDR, UDC_DEVCTL_RDE); 704 + else if (dir == DMA_DIR_TX) 705 + pch_udc_bit_set(dev, UDC_DEVCTL_ADDR, UDC_DEVCTL_TDE); 706 + } 707 + 708 + /** 709 + * pch_udc_clear_dma() - Clear the 'TDE' or RDE bit of device control 710 + * register depending on the direction specified 711 + * @dev: Reference to 
 *       structure of type pch_udc_regs
 * @dir: Whether Tx or Rx
 *         DMA_DIR_RX: Receive
 *         DMA_DIR_TX: Transmit
 */
static inline void pch_udc_clear_dma(struct pch_udc_dev *dev, int dir)
{
        if (dir == DMA_DIR_RX)
                pch_udc_bit_clr(dev, UDC_DEVCTL_ADDR, UDC_DEVCTL_RDE);
        else if (dir == DMA_DIR_TX)
                pch_udc_bit_clr(dev, UDC_DEVCTL_ADDR, UDC_DEVCTL_TDE);
}

/**
 * pch_udc_set_csr_done() - Set the device control register
 *                          CSR done field (bit 13)
 * @dev: reference to structure of type pch_udc_regs
 */
static inline void pch_udc_set_csr_done(struct pch_udc_dev *dev)
{
        pch_udc_bit_set(dev, UDC_DEVCTL_ADDR, UDC_DEVCTL_CSR_DONE);
}

/**
 * pch_udc_disable_interrupts() - Disables the specified interrupts
 * @dev: Reference to structure of type pch_udc_regs
 * @mask: Mask to disable interrupts
 */
static inline void pch_udc_disable_interrupts(struct pch_udc_dev *dev,
                                              u32 mask)
{
        pch_udc_bit_set(dev, UDC_DEVIRQMSK_ADDR, mask);
}

/**
 * pch_udc_enable_interrupts() - Enable the specified interrupts
 * @dev: Reference to structure of type pch_udc_regs
 * @mask: Mask to enable interrupts
 */
static inline void pch_udc_enable_interrupts(struct pch_udc_dev *dev,
                                             u32 mask)
{
        pch_udc_bit_clr(dev, UDC_DEVIRQMSK_ADDR, mask);
}

/**
 * pch_udc_disable_ep_interrupts() - Disable endpoint interrupts
 * @dev: Reference to structure of type pch_udc_regs
 * @mask: Mask to disable interrupts
 */
static inline void pch_udc_disable_ep_interrupts(struct pch_udc_dev *dev,
                                                 u32 mask)
{
        pch_udc_bit_set(dev, UDC_EPIRQMSK_ADDR, mask);
}

/**
 * pch_udc_enable_ep_interrupts() - Enable endpoint interrupts
 * @dev: Reference to structure of type pch_udc_regs
 * @mask: Mask to enable interrupts
 */
static inline void pch_udc_enable_ep_interrupts(struct pch_udc_dev *dev,
                                                u32 mask)
{
        pch_udc_bit_clr(dev, UDC_EPIRQMSK_ADDR, mask);
}

/**
 * pch_udc_read_device_interrupts() - Read the device interrupts
 * @dev: Reference to structure of type pch_udc_regs
 * Return: The device interrupts
 */
static inline u32 pch_udc_read_device_interrupts(struct pch_udc_dev *dev)
{
        return pch_udc_readl(dev, UDC_DEVIRQSTS_ADDR);
}

/**
 * pch_udc_write_device_interrupts() - Write device interrupts
 * @dev: Reference to structure of type pch_udc_regs
 * @val: The value to be written to interrupt register
 */
static inline void pch_udc_write_device_interrupts(struct pch_udc_dev *dev,
                                                   u32 val)
{
        pch_udc_writel(dev, val, UDC_DEVIRQSTS_ADDR);
}

/**
 * pch_udc_read_ep_interrupts() - Read the endpoint interrupts
 * @dev: Reference to structure of type pch_udc_regs
 * Return: The endpoint interrupts
 */
static inline u32 pch_udc_read_ep_interrupts(struct pch_udc_dev *dev)
{
        return pch_udc_readl(dev, UDC_EPIRQSTS_ADDR);
}

/**
 * pch_udc_write_ep_interrupts() - Clear endpoint interrupts
 * @dev: Reference to structure of type pch_udc_regs
 * @val: The value to be written to interrupt register
 */
static inline void pch_udc_write_ep_interrupts(struct pch_udc_dev *dev,
                                               u32 val)
{
        pch_udc_writel(dev, val, UDC_EPIRQSTS_ADDR);
}

/**
 * pch_udc_read_device_status() - Read the device status
 * @dev: Reference to structure of type pch_udc_regs
 * Return: The device status
 */
static inline u32 pch_udc_read_device_status(struct pch_udc_dev *dev)
{
        return pch_udc_readl(dev, UDC_DEVSTS_ADDR);
}

/**
 * pch_udc_read_ep_control() - Read the endpoint control
 * @ep:
 *       Reference to structure of type pch_udc_ep_regs
 * Return: The endpoint control register value
 */
static inline u32 pch_udc_read_ep_control(struct pch_udc_ep *ep)
{
        return pch_udc_ep_readl(ep, UDC_EPCTL_ADDR);
}

/**
 * pch_udc_clear_ep_control() - Clear the endpoint control register
 * @ep: Reference to structure of type pch_udc_ep_regs
 */
static inline void pch_udc_clear_ep_control(struct pch_udc_ep *ep)
{
        pch_udc_ep_writel(ep, 0, UDC_EPCTL_ADDR);
}

/**
 * pch_udc_read_ep_status() - Read the endpoint status
 * @ep: Reference to structure of type pch_udc_ep_regs
 * Return: The endpoint status
 */
static inline u32 pch_udc_read_ep_status(struct pch_udc_ep *ep)
{
        return pch_udc_ep_readl(ep, UDC_EPSTS_ADDR);
}

/**
 * pch_udc_clear_ep_status() - Clear the endpoint status
 * @ep: Reference to structure of type pch_udc_ep_regs
 * @stat: Endpoint status
 */
static inline void pch_udc_clear_ep_status(struct pch_udc_ep *ep,
                                           u32 stat)
{
        pch_udc_ep_writel(ep, stat, UDC_EPSTS_ADDR);
}

/**
 * pch_udc_ep_set_nak() - Set the bit 7 (SNAK field)
 *                        of the endpoint control register
 * @ep: Reference to structure of type pch_udc_ep_regs
 */
static inline void pch_udc_ep_set_nak(struct pch_udc_ep *ep)
{
        pch_udc_ep_bit_set(ep, UDC_EPCTL_ADDR, UDC_EPCTL_SNAK);
}

/**
 * pch_udc_ep_clear_nak() - Set the bit 8 (CNAK field)
 *                          of the endpoint control register
 * @ep: reference to structure of type pch_udc_ep_regs
 */
static void pch_udc_ep_clear_nak(struct pch_udc_ep *ep)
{
        unsigned int loopcnt = 0;
        struct pch_udc_dev *dev = ep->dev;

        if (!(pch_udc_ep_readl(ep, UDC_EPCTL_ADDR) & UDC_EPCTL_NAK))
                return;
        if (!ep->in) {
                loopcnt = 10000;
                while (!(pch_udc_read_ep_status(ep) & UDC_EPSTS_MRXFIFO_EMP) &&
                        --loopcnt)
                        udelay(5);
                if (!loopcnt)
                        dev_err(&dev->pdev->dev, "%s: RxFIFO not Empty\n",
                                __func__);
        }
        loopcnt = 10000;
        while ((pch_udc_read_ep_control(ep) & UDC_EPCTL_NAK) && --loopcnt) {
                pch_udc_ep_bit_set(ep, UDC_EPCTL_ADDR, UDC_EPCTL_CNAK);
                udelay(5);
        }
        if (!loopcnt)
                dev_err(&dev->pdev->dev, "%s: Clear NAK not set for ep%d%s\n",
                        __func__, ep->num, (ep->in ? "in" : "out"));
}

/**
 * pch_udc_ep_fifo_flush() - Flush the endpoint fifo
 * @ep: reference to structure of type pch_udc_ep_regs
 * @dir: direction of endpoint
 *         0: endpoint is OUT
 *        !0: endpoint is IN
 */
static void pch_udc_ep_fifo_flush(struct pch_udc_ep *ep, int dir)
{
        unsigned int loopcnt = 0;
        struct pch_udc_dev *dev = ep->dev;

        if (dir) {      /* IN ep */
                pch_udc_ep_bit_set(ep, UDC_EPCTL_ADDR, UDC_EPCTL_F);
                return;
        }

        if (pch_udc_read_ep_status(ep) & UDC_EPSTS_MRXFIFO_EMP)
                return;
        pch_udc_ep_bit_set(ep, UDC_EPCTL_ADDR, UDC_EPCTL_MRXFLUSH);
        /* Wait for RxFIFO Empty */
        loopcnt = 10000;
        while (!(pch_udc_read_ep_status(ep) & UDC_EPSTS_MRXFIFO_EMP) &&
                --loopcnt)
                udelay(5);
        if (!loopcnt)
                dev_err(&dev->pdev->dev, "RxFIFO not Empty\n");
        pch_udc_ep_bit_clr(ep, UDC_EPCTL_ADDR, UDC_EPCTL_MRXFLUSH);
}

/**
 * pch_udc_ep_enable() - This API enables the endpoint
 * @ep: Reference to structure of type pch_udc_ep
 * @cfg: Reference to structure of type pch_udc_cfg_data
 * @desc: endpoint descriptor
 */
static void pch_udc_ep_enable(struct pch_udc_ep *ep,
                              struct pch_udc_cfg_data *cfg,
                              const struct usb_endpoint_descriptor *desc)
{
        u32 val = 0;
        u32 buff_size = 0;

        pch_udc_ep_set_trfr_type(ep, desc->bmAttributes);
        if (ep->in)
                buff_size = UDC_EPIN_BUFF_SIZE;
        else
                buff_size = UDC_EPOUT_BUFF_SIZE;
        pch_udc_ep_set_bufsz(ep, buff_size, ep->in);
        pch_udc_ep_set_maxpkt(ep, le16_to_cpu(desc->wMaxPacketSize));
        pch_udc_ep_set_nak(ep);
        pch_udc_ep_fifo_flush(ep, ep->in);
        /* Configure the endpoint */
        val = ep->num << UDC_CSR_NE_NUM_SHIFT | ep->in << UDC_CSR_NE_DIR_SHIFT |
              ((desc->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) <<
                UDC_CSR_NE_TYPE_SHIFT) |
              (cfg->cur_cfg << UDC_CSR_NE_CFG_SHIFT) |
              (cfg->cur_intf << UDC_CSR_NE_INTF_SHIFT) |
              (cfg->cur_alt << UDC_CSR_NE_ALT_SHIFT) |
              le16_to_cpu(desc->wMaxPacketSize) << UDC_CSR_NE_MAX_PKT_SHIFT;

        if (ep->in)
                pch_udc_write_csr(ep->dev, val, UDC_EPIN_IDX(ep->num));
        else
                pch_udc_write_csr(ep->dev, val, UDC_EPOUT_IDX(ep->num));
}

/**
 * pch_udc_ep_disable() - This API disables the endpoint
 * @ep: Reference to structure of type pch_udc_ep
 */
static void pch_udc_ep_disable(struct pch_udc_ep *ep)
{
        if (ep->in) {
                /* flush the fifo */
                pch_udc_ep_writel(ep, UDC_EPCTL_F, UDC_EPCTL_ADDR);
                /* set NAK */
                pch_udc_ep_writel(ep, UDC_EPCTL_SNAK, UDC_EPCTL_ADDR);
                pch_udc_ep_bit_set(ep, UDC_EPSTS_ADDR, UDC_EPSTS_IN);
        } else {
                /* set NAK */
                pch_udc_ep_writel(ep, UDC_EPCTL_SNAK, UDC_EPCTL_ADDR);
        }
        /* reset desc pointer */
        pch_udc_ep_writel(ep, 0, UDC_DESPTR_ADDR);
}

/**
 * pch_udc_wait_ep_stall() - Wait for the endpoint to exit the stalled state.
 * @ep: Reference to structure of type pch_udc_ep
 */
static void pch_udc_wait_ep_stall(struct pch_udc_ep *ep)
{
        unsigned int count = 10000;

        /* Wait till idle */
        while ((pch_udc_read_ep_control(ep) & UDC_EPCTL_S) && --count)
                udelay(5);
        if (!count)
                dev_err(&ep->dev->pdev->dev, "%s: wait error\n", __func__);
}

/**
 * pch_udc_init() - This API initializes the USB device controller
 * @dev: Reference to pch_udc_regs structure
 */
static void pch_udc_init(struct pch_udc_dev *dev)
{
        if (NULL == dev) {
                pr_err("%s: Invalid address\n", __func__);
                return;
        }
        /* Soft Reset and Reset PHY */
        pch_udc_writel(dev, UDC_SRST, UDC_SRST_ADDR);
        pch_udc_writel(dev, UDC_SRST | UDC_PSRST, UDC_SRST_ADDR);
        mdelay(1);
        pch_udc_writel(dev, UDC_SRST, UDC_SRST_ADDR);
        pch_udc_writel(dev, 0x00, UDC_SRST_ADDR);
        mdelay(1);
        /* mask and clear all device interrupts */
        pch_udc_bit_set(dev, UDC_DEVIRQMSK_ADDR, UDC_DEVINT_MSK);
        pch_udc_bit_set(dev, UDC_DEVIRQSTS_ADDR, UDC_DEVINT_MSK);

        /* mask and clear all ep interrupts */
        pch_udc_bit_set(dev, UDC_EPIRQMSK_ADDR, UDC_EPINT_MSK_DISABLE_ALL);
        pch_udc_bit_set(dev, UDC_EPIRQSTS_ADDR, UDC_EPINT_MSK_DISABLE_ALL);

        /* enable dynamic CSR programming, self powered and device speed */
        if (speed_fs)
                pch_udc_bit_set(dev, UDC_DEVCFG_ADDR, UDC_DEVCFG_CSR_PRG |
                                UDC_DEVCFG_SP | UDC_DEVCFG_SPD_FS);
        else /* default: high speed */
                pch_udc_bit_set(dev, UDC_DEVCFG_ADDR, UDC_DEVCFG_CSR_PRG |
                                UDC_DEVCFG_SP | UDC_DEVCFG_SPD_HS);
        pch_udc_bit_set(dev, UDC_DEVCTL_ADDR,
                        (PCH_UDC_THLEN << UDC_DEVCTL_THLEN_SHIFT) |
                        (PCH_UDC_BRLEN << UDC_DEVCTL_BRLEN_SHIFT) |
                        UDC_DEVCTL_MODE | UDC_DEVCTL_BREN |
                        UDC_DEVCTL_THE);
}

/**
 *
 * pch_udc_exit() - This API exits the USB device controller
 * @dev: Reference to pch_udc_regs structure
 */
static void pch_udc_exit(struct pch_udc_dev *dev)
{
        /* mask all device interrupts */
        pch_udc_bit_set(dev, UDC_DEVIRQMSK_ADDR, UDC_DEVINT_MSK);
        /* mask all ep interrupts */
        pch_udc_bit_set(dev, UDC_EPIRQMSK_ADDR, UDC_EPINT_MSK_DISABLE_ALL);
        /* put device in disconnected state */
        pch_udc_set_disconnect(dev);
}

/**
 * pch_udc_pcd_get_frame() - This API is invoked to get the current frame number
 * @gadget: Reference to the gadget driver
 *
 * Return codes:
 *      0:              Success
 *      -EINVAL:        If the gadget passed is NULL
 */
static int pch_udc_pcd_get_frame(struct usb_gadget *gadget)
{
        struct pch_udc_dev *dev;

        if (!gadget)
                return -EINVAL;
        dev = container_of(gadget, struct pch_udc_dev, gadget);
        return pch_udc_get_frame(dev);
}

/**
 * pch_udc_pcd_wakeup() - This API is invoked to initiate a remote wakeup
 * @gadget: Reference to the gadget driver
 *
 * Return codes:
 *      0:              Success
 *      -EINVAL:        If the gadget passed is NULL
 */
static int pch_udc_pcd_wakeup(struct usb_gadget *gadget)
{
        struct pch_udc_dev *dev;
        unsigned long flags;

        if (!gadget)
                return -EINVAL;
        dev = container_of(gadget, struct pch_udc_dev, gadget);
        spin_lock_irqsave(&dev->lock, flags);
        pch_udc_rmt_wakeup(dev);
        spin_unlock_irqrestore(&dev->lock, flags);
        return 0;
}

/**
 * pch_udc_pcd_selfpowered() - This API is invoked to specify whether the device
 *                             is self powered or not
 * @gadget: Reference to the gadget driver
 * @value: Specifies self powered or not
 *
 * Return codes:
 *      0:              Success
 *      -EINVAL:        If the gadget passed is
 *                      NULL
 */
static int pch_udc_pcd_selfpowered(struct usb_gadget *gadget, int value)
{
        struct pch_udc_dev *dev;

        if (!gadget)
                return -EINVAL;
        dev = container_of(gadget, struct pch_udc_dev, gadget);
        if (value)
                pch_udc_set_selfpowered(dev);
        else
                pch_udc_clear_selfpowered(dev);
        return 0;
}

/**
 * pch_udc_pcd_pullup() - This API is invoked to make the device
 *                        visible/invisible to the host
 * @gadget: Reference to the gadget driver
 * @is_on: Specifies whether the pull up is made active or inactive
 *
 * Return codes:
 *      0:              Success
 *      -EINVAL:        If the gadget passed is NULL
 */
static int pch_udc_pcd_pullup(struct usb_gadget *gadget, int is_on)
{
        struct pch_udc_dev *dev;

        if (!gadget)
                return -EINVAL;
        dev = container_of(gadget, struct pch_udc_dev, gadget);
        pch_udc_vbus_session(dev, is_on);
        return 0;
}

/**
 * pch_udc_pcd_vbus_session() - This API is used by a driver for an external
 *                              transceiver (or GPIO) that
 *                              detects a VBUS power session starting/ending
 * @gadget: Reference to the gadget driver
 * @is_active: specifies whether the session is starting or ending
 *
 * Return codes:
 *      0:              Success
 *      -EINVAL:        If the gadget passed is NULL
 */
static int pch_udc_pcd_vbus_session(struct usb_gadget *gadget, int is_active)
{
        struct pch_udc_dev *dev;

        if (!gadget)
                return -EINVAL;
        dev = container_of(gadget, struct pch_udc_dev, gadget);
        pch_udc_vbus_session(dev, is_active);
        return 0;
}

/**
 * pch_udc_pcd_vbus_draw() - This API is used by gadget drivers during
 *                           SET_CONFIGURATION calls to
 *                           specify how much power the device can consume
 * @gadget: Reference to the gadget
 *          driver
 * @mA: specifies the current limit in 2mA unit
 *
 * Return codes:
 *      -EOPNOTSUPP:    Not supported
 */
static int pch_udc_pcd_vbus_draw(struct usb_gadget *gadget, unsigned int mA)
{
        return -EOPNOTSUPP;
}

static const struct usb_gadget_ops pch_udc_ops = {
        .get_frame = pch_udc_pcd_get_frame,
        .wakeup = pch_udc_pcd_wakeup,
        .set_selfpowered = pch_udc_pcd_selfpowered,
        .pullup = pch_udc_pcd_pullup,
        .vbus_session = pch_udc_pcd_vbus_session,
        .vbus_draw = pch_udc_pcd_vbus_draw,
};

/**
 * complete_req() - This API is invoked from the driver when processing
 *                  of a request is complete
 * @ep: Reference to the endpoint structure
 * @req: Reference to the request structure
 * @status: Indicates the success/failure of completion
 */
static void complete_req(struct pch_udc_ep *ep, struct pch_udc_request *req,
                         int status)
{
        struct pch_udc_dev *dev;
        unsigned halted = ep->halted;

        list_del_init(&req->queue);

        /* set new status if pending */
        if (req->req.status == -EINPROGRESS)
                req->req.status = status;
        else
                status = req->req.status;

        dev = ep->dev;
        if (req->dma_mapped) {
                if (ep->in)
                        pci_unmap_single(dev->pdev, req->req.dma,
                                         req->req.length, PCI_DMA_TODEVICE);
                else
                        pci_unmap_single(dev->pdev, req->req.dma,
                                         req->req.length, PCI_DMA_FROMDEVICE);
                req->dma_mapped = 0;
                req->req.dma = DMA_ADDR_INVALID;
        }
        ep->halted = 1;
        spin_unlock(&dev->lock);
        if (!ep->in)
                pch_udc_ep_clear_rrdy(ep);
        req->req.complete(&ep->ep, &req->req);
        spin_lock(&dev->lock);
        ep->halted = halted;
}

/**
 * empty_req_queue() - This API empties the request queue of an
 *                   endpoint
 * @ep: Reference to the endpoint structure
 */
static void empty_req_queue(struct pch_udc_ep *ep)
{
        struct pch_udc_request *req;

        ep->halted = 1;
        while (!list_empty(&ep->queue)) {
                req = list_entry(ep->queue.next, struct pch_udc_request, queue);
                complete_req(ep, req, -ESHUTDOWN);      /* Remove from list */
        }
}

/**
 * pch_udc_free_dma_chain() - This function frees the DMA chain created
 *                            for the request
 * @dev: Reference to the driver structure
 * @req: Reference to the request to be freed
 */
static void pch_udc_free_dma_chain(struct pch_udc_dev *dev,
                                   struct pch_udc_request *req)
{
        struct pch_udc_data_dma_desc *td = req->td_data;
        unsigned i = req->chain_len;

        for (; i > 1; --i) {
                dma_addr_t addr = (dma_addr_t)td->next;
                /* do not free first desc., will be done by free for request */
                td = phys_to_virt(addr);
                pci_pool_free(dev->data_requests, td, addr);
        }
}

/**
 * pch_udc_create_dma_chain() - This function creates or reinitializes
 *                              a DMA chain
 * @ep: Reference to the endpoint structure
 * @req: Reference to the request
 * @buf_len: The buffer length
 * @gfp_flags: Flags to be used while mapping the data buffer
 *
 * Return codes:
 *      0:              Success
 *      -ENOMEM:        pci_pool_alloc invocation fails
 */
static int pch_udc_create_dma_chain(struct pch_udc_ep *ep,
                                    struct pch_udc_request *req,
                                    unsigned long buf_len,
                                    gfp_t gfp_flags)
{
        struct pch_udc_data_dma_desc *td = req->td_data, *last;
        unsigned long bytes = req->req.length, i = 0;
        dma_addr_t dma_addr;
        unsigned len = 1;

        if (req->chain_len > 1)
                pch_udc_free_dma_chain(ep->dev, req);

        for
            (; ; bytes -= buf_len, ++len) {
                if (ep->in)
                        td->status = PCH_UDC_BS_HST_BSY | min(buf_len, bytes);
                else
                        td->status = PCH_UDC_BS_HST_BSY;

                if (bytes <= buf_len)
                        break;

                last = td;
                td = pci_pool_alloc(ep->dev->data_requests, gfp_flags,
                                    &dma_addr);
                if (!td)
                        goto nomem;

                i += buf_len;
                td->dataptr = req->req.dma + i;
                last->next = dma_addr;
        }

        req->td_data_last = td;
        td->status |= PCH_UDC_DMA_LAST;
        td->next = req->td_data_phys;
        req->chain_len = len;
        return 0;

nomem:
        if (len > 1) {
                req->chain_len = len;
                pch_udc_free_dma_chain(ep->dev, req);
        }
        req->chain_len = 1;
        return -ENOMEM;
}

/**
 * prepare_dma() - This function creates and initializes the DMA chain
 *                 for the request
 * @ep: Reference to the endpoint structure
 * @req: Reference to the request
 * @gfp: Flag to be used while mapping the data buffer
 *
 * Return codes:
 *      0:              Success
 *      Non-zero:       Linux error number on failure
 */
static int prepare_dma(struct pch_udc_ep *ep, struct pch_udc_request *req,
                       gfp_t gfp)
{
        int retval;

        req->td_data->dataptr = req->req.dma;
        req->td_data->status |= PCH_UDC_DMA_LAST;
        /* Allocate and create a DMA chain */
        retval = pch_udc_create_dma_chain(ep, req, ep->ep.maxpacket, gfp);
        if (retval) {
                pr_err("%s: could not create DMA chain: %d\n",
                       __func__, retval);
                return retval;
        }
        if (!ep->in)
                return 0;
        if (req->req.length <= ep->ep.maxpacket)
                req->td_data->status = PCH_UDC_DMA_LAST | PCH_UDC_BS_HST_BSY |
                                       req->req.length;
        /* if bytes < max packet then tx bytes must
         * be written in packet per buffer mode
         */
        if ((req->req.length <
             ep->ep.maxpacket) || !ep->num)
                req->td_data->status = (req->td_data->status &
                                        ~PCH_UDC_RXTX_BYTES) | req->req.length;
        req->td_data->status = (req->td_data->status &
                                ~PCH_UDC_BUFF_STS) | PCH_UDC_BS_HST_BSY;
        return 0;
}

/**
 * process_zlp() - This function processes zero-length packets
 *                 from the gadget driver
 * @ep: Reference to the endpoint structure
 * @req: Reference to the request
 */
static void process_zlp(struct pch_udc_ep *ep, struct pch_udc_request *req)
{
        struct pch_udc_dev *dev = ep->dev;

        /* IN zlp's are handled by hardware */
        complete_req(ep, req, 0);

        /* if set_config or set_intf is waiting for ack by zlp
         * then set CSR_DONE
         */
        if (dev->set_cfg_not_acked) {
                pch_udc_set_csr_done(dev);
                dev->set_cfg_not_acked = 0;
        }
        /* setup command is ACK'ed now by zlp */
        if (!dev->stall && dev->waiting_zlp_ack) {
                pch_udc_ep_clear_nak(&(dev->ep[UDC_EP0IN_IDX]));
                dev->waiting_zlp_ack = 0;
        }
}

/**
 * pch_udc_start_rxrequest() - This function starts a receive request.
 * @ep: Reference to the endpoint structure
 * @req: Reference to the request structure
 */
static void pch_udc_start_rxrequest(struct pch_udc_ep *ep,
                                    struct pch_udc_request *req)
{
        struct pch_udc_data_dma_desc *td_data;

        pch_udc_clear_dma(ep->dev, DMA_DIR_RX);
        td_data = req->td_data;
        ep->td_data = req->td_data;
        /* Set the status bits for all descriptors */
        while (1) {
                td_data->status = (td_data->status & ~PCH_UDC_BUFF_STS) |
                                  PCH_UDC_BS_HST_RDY;
                if ((td_data->status & PCH_UDC_DMA_LAST) == PCH_UDC_DMA_LAST)
                        break;
                td_data = phys_to_virt(td_data->next);
        }
        /* Write the descriptor pointer */
        pch_udc_ep_set_ddptr(ep, req->td_data_phys);
        req->dma_going = 1;
        pch_udc_enable_ep_interrupts(ep->dev, UDC_EPINT_OUT_EP0 << ep->num);
        pch_udc_set_dma(ep->dev, DMA_DIR_RX);
        pch_udc_ep_clear_nak(ep);
        pch_udc_ep_set_rrdy(ep);
}

/**
 * pch_udc_pcd_ep_enable() - This API enables the endpoint.
 *                           It is called
 * from gadget driver
 * @usbep: Reference to the USB endpoint structure
 * @desc: Reference to the USB endpoint descriptor structure
 *
 * Return codes:
 *      0:              Success
 *      -EINVAL:        If the endpoint or descriptor is invalid
 *      -ESHUTDOWN:     If no gadget driver is bound or the speed is unknown
 */
static int pch_udc_pcd_ep_enable(struct usb_ep *usbep,
                                 const struct usb_endpoint_descriptor *desc)
{
        struct pch_udc_ep *ep;
        struct pch_udc_dev *dev;
        unsigned long iflags;

        if (!usbep || (usbep->name == ep0_string) || !desc ||
            (desc->bDescriptorType != USB_DT_ENDPOINT) || !desc->wMaxPacketSize)
                return -EINVAL;

        ep = container_of(usbep, struct pch_udc_ep, ep);
        dev = ep->dev;
        if (!dev->driver || (dev->gadget.speed == USB_SPEED_UNKNOWN))
                return -ESHUTDOWN;
        spin_lock_irqsave(&dev->lock, iflags);
        ep->desc = desc;
        ep->halted = 0;
        pch_udc_ep_enable(ep, &ep->dev->cfg_data, desc);
        ep->ep.maxpacket = le16_to_cpu(desc->wMaxPacketSize);
        pch_udc_enable_ep_interrupts(ep->dev, PCH_UDC_EPINT(ep->in, ep->num));
        spin_unlock_irqrestore(&dev->lock, iflags);
        return 0;
}

/**
 * pch_udc_pcd_ep_disable() - This API disables endpoint and is called
 *                            from gadget driver
 * @usbep: Reference to the USB endpoint structure
 *
 * Return codes:
 *      0:              Success
 *      -EINVAL:        If the endpoint is invalid
 */
static int pch_udc_pcd_ep_disable(struct usb_ep *usbep)
{
        struct pch_udc_ep *ep;
        struct pch_udc_dev *dev;
        unsigned long iflags;

        if (!usbep)
                return -EINVAL;

        ep = container_of(usbep, struct pch_udc_ep, ep);
        dev = ep->dev;
        if ((usbep->name == ep0_string) || !ep->desc)
                return -EINVAL;

        spin_lock_irqsave(&ep->dev->lock, iflags);
        empty_req_queue(ep);
        ep->halted = 1;
        pch_udc_ep_disable(ep);
        pch_udc_disable_ep_interrupts(ep->dev, PCH_UDC_EPINT(ep->in, ep->num));
        ep->desc = NULL;
        INIT_LIST_HEAD(&ep->queue);
        spin_unlock_irqrestore(&ep->dev->lock, iflags);
        return 0;
}

/**
 * pch_udc_alloc_request() - This function allocates request structure.
 *                           It is called by gadget driver
 * @usbep: Reference to the USB endpoint structure
 * @gfp: Flag to be used while allocating memory
 *
 * Return codes:
 *      NULL:                   Failure
 *      Allocated address:      Success
 */
static struct usb_request *pch_udc_alloc_request(struct usb_ep *usbep,
                                                 gfp_t gfp)
{
        struct pch_udc_request *req;
        struct pch_udc_ep *ep;
        struct pch_udc_data_dma_desc *dma_desc;
        struct pch_udc_dev *dev;

        if (!usbep)
                return NULL;
        ep = container_of(usbep, struct pch_udc_ep, ep);
        dev = ep->dev;
        req = kzalloc(sizeof *req, gfp);
        if (!req)
                return NULL;
        req->req.dma = DMA_ADDR_INVALID;
        INIT_LIST_HEAD(&req->queue);
        if (!ep->dev->dma_addr)
                return &req->req;
        /* ep0 in requests are allocated from data pool here */
        dma_desc = pci_pool_alloc(ep->dev->data_requests, gfp,
                                  &req->td_data_phys);
        if (NULL == dma_desc) {
                kfree(req);
                return NULL;
        }
        /* prevent from using desc. - set HOST BUSY */
        dma_desc->status |= PCH_UDC_BS_HST_BSY;
        dma_desc->dataptr = __constant_cpu_to_le32(DMA_ADDR_INVALID);
        req->td_data = dma_desc;
        req->td_data_last = dma_desc;
        req->chain_len = 1;
        return &req->req;
}

/**
 * pch_udc_free_request() - This function frees request structure.
 *                          It is called by gadget driver
 * @usbep: Reference to the USB endpoint structure
 * @usbreq: Reference to the USB request
 */
static void pch_udc_free_request(struct usb_ep *usbep,
                                 struct usb_request *usbreq)
{
        struct pch_udc_ep *ep;
        struct pch_udc_request *req;
        struct pch_udc_dev *dev;

        if (!usbep || !usbreq)
                return;
        ep = container_of(usbep, struct pch_udc_ep, ep);
        req = container_of(usbreq, struct pch_udc_request, req);
        dev = ep->dev;
        if (!list_empty(&req->queue))
                dev_err(&dev->pdev->dev, "%s: %s req=0x%p queue not empty\n",
                        __func__, usbep->name, req);
        if (req->td_data != NULL) {
                if (req->chain_len > 1)
                        pch_udc_free_dma_chain(ep->dev, req);
                pci_pool_free(ep->dev->data_requests, req->td_data,
                              req->td_data_phys);
        }
        kfree(req);
}

/**
 * pch_udc_pcd_queue() - This function queues a request packet.
 *                       It is called
 * by gadget driver
 * @usbep: Reference to the USB endpoint structure
 * @usbreq: Reference to the USB request
 * @gfp: Flag to be used while mapping the data buffer
 *
 * Return codes:
 *      0:                      Success
 *      linux error number:     Failure
 */
static int pch_udc_pcd_queue(struct usb_ep *usbep, struct usb_request *usbreq,
                             gfp_t gfp)
{
        int retval = 0;
        struct pch_udc_ep *ep;
        struct pch_udc_dev *dev;
        struct pch_udc_request *req;
        unsigned long iflags;

        if (!usbep || !usbreq || !usbreq->complete || !usbreq->buf)
                return -EINVAL;
        ep = container_of(usbep, struct pch_udc_ep, ep);
        dev = ep->dev;
        if (!ep->desc && ep->num)
                return -EINVAL;
        req = container_of(usbreq, struct pch_udc_request, req);
        if (!list_empty(&req->queue))
                return -EINVAL;
        if (!dev->driver || (dev->gadget.speed == USB_SPEED_UNKNOWN))
                return -ESHUTDOWN;
        spin_lock_irqsave(&ep->dev->lock, iflags);
        /* map the buffer for dma */
        if (usbreq->length &&
            ((usbreq->dma == DMA_ADDR_INVALID) || !usbreq->dma)) {
                if (ep->in)
                        usbreq->dma = pci_map_single(dev->pdev, usbreq->buf,
                                        usbreq->length, PCI_DMA_TODEVICE);
                else
                        usbreq->dma = pci_map_single(dev->pdev, usbreq->buf,
                                        usbreq->length, PCI_DMA_FROMDEVICE);
                req->dma_mapped = 1;
        }
        if (usbreq->length > 0) {
                retval = prepare_dma(ep, req, gfp);
                if (retval)
                        goto probe_end;
        }
        usbreq->actual = 0;
        usbreq->status = -EINPROGRESS;
        req->dma_done = 0;
        if (list_empty(&ep->queue) && !ep->halted) {
                /* no pending transfer, so start this req */
                if (!usbreq->length) {
                        process_zlp(ep, req);
                        retval = 0;
                        goto probe_end;
                }
                if (!ep->in) {
                        pch_udc_start_rxrequest(ep, req);
                } else {
                        /*
                         *
                         * For IN trfr the descriptors will be programmed and
                         * P bit will be set when
                         * we get an IN token
                         */
                        pch_udc_wait_ep_stall(ep);
                        pch_udc_ep_clear_nak(ep);
                        pch_udc_enable_ep_interrupts(ep->dev, (1 << ep->num));
                        pch_udc_set_dma(dev, DMA_DIR_TX);
                }
        }
        /* Now add this request to the ep's pending requests */
        if (req != NULL)
                list_add_tail(&req->queue, &ep->queue);

probe_end:
        spin_unlock_irqrestore(&dev->lock, iflags);
        return retval;
}

/**
 * pch_udc_pcd_dequeue() - This function de-queues a request packet.
 *                         It is called by gadget driver
 * @usbep: Reference to the USB endpoint structure
 * @usbreq: Reference to the USB request
 *
 * Return codes:
 *      0:                      Success
 *      linux error number:     Failure
 */
static int pch_udc_pcd_dequeue(struct usb_ep *usbep,
                               struct usb_request *usbreq)
{
        struct pch_udc_ep *ep;
        struct pch_udc_request *req;
        struct pch_udc_dev *dev;
        unsigned long flags;
        int ret = -EINVAL;

        ep = container_of(usbep, struct pch_udc_ep, ep);
        dev = ep->dev;
        if (!usbep || !usbreq || (!ep->desc && ep->num))
                return ret;
        req = container_of(usbreq, struct pch_udc_request, req);
        spin_lock_irqsave(&ep->dev->lock, flags);
        /* make sure it's still queued on this endpoint */
        list_for_each_entry(req, &ep->queue, queue) {
                if (&req->req == usbreq) {
                        pch_udc_ep_set_nak(ep);
                        if (!list_empty(&req->queue))
                                complete_req(ep, req, -ECONNRESET);
                        ret = 0;
                        break;
                }
        }
        spin_unlock_irqrestore(&ep->dev->lock, flags);
        return ret;
}

/**
 * pch_udc_pcd_set_halt() - This function sets or clears the endpoint halt
 *                          feature
 * @usbep: Reference to the USB endpoint structure
 * @halt: Specifies
whether to set or clear the feature 1705 + * 1706 + * Return codes: 1707 + * 0: Success 1708 + * linux error number: Failure 1709 + */ 1710 + static int pch_udc_pcd_set_halt(struct usb_ep *usbep, int halt) 1711 + { 1712 + struct pch_udc_ep *ep; 1713 + struct pch_udc_dev *dev; 1714 + unsigned long iflags; 1715 + int ret; 1716 + 1717 + if (!usbep) 1718 + return -EINVAL; 1719 + ep = container_of(usbep, struct pch_udc_ep, ep); 1720 + dev = ep->dev; 1721 + if (!ep->desc && !ep->num) 1722 + return -EINVAL; 1723 + if (!ep->dev->driver || (ep->dev->gadget.speed == USB_SPEED_UNKNOWN)) 1724 + return -ESHUTDOWN; 1725 + spin_lock_irqsave(&udc_stall_spinlock, iflags); 1726 + if (list_empty(&ep->queue)) { 1727 + if (halt) { 1728 + if (ep->num == PCH_UDC_EP0) 1729 + ep->dev->stall = 1; 1730 + pch_udc_ep_set_stall(ep); 1731 + pch_udc_enable_ep_interrupts(ep->dev, 1732 + PCH_UDC_EPINT(ep->in, 1733 + ep->num)); 1734 + } else { 1735 + pch_udc_ep_clear_stall(ep); 1736 + } 1737 + ret = 0; 1738 + } else { 1739 + ret = -EAGAIN; 1740 + } 1741 + spin_unlock_irqrestore(&udc_stall_spinlock, iflags); 1742 + return ret; 1743 + } 1744 + 1745 + /** 1746 + * pch_udc_pcd_set_wedge() - This function Sets or clear the endpoint 1747 + * halt feature 1748 + * @usbep: Reference to the USB endpoint structure 1749 + * @halt: Specifies whether to set or clear the feature 1750 + * 1751 + * Return codes: 1752 + * 0: Success 1753 + * linux error number: Failure 1754 + */ 1755 + static int pch_udc_pcd_set_wedge(struct usb_ep *usbep) 1756 + { 1757 + struct pch_udc_ep *ep; 1758 + struct pch_udc_dev *dev; 1759 + unsigned long iflags; 1760 + int ret; 1761 + 1762 + if (!usbep) 1763 + return -EINVAL; 1764 + ep = container_of(usbep, struct pch_udc_ep, ep); 1765 + dev = ep->dev; 1766 + if (!ep->desc && !ep->num) 1767 + return -EINVAL; 1768 + if (!ep->dev->driver || (ep->dev->gadget.speed == USB_SPEED_UNKNOWN)) 1769 + return -ESHUTDOWN; 1770 + spin_lock_irqsave(&udc_stall_spinlock, iflags); 1771 + if 
(!list_empty(&ep->queue)) { 1772 + ret = -EAGAIN; 1773 + } else { 1774 + if (ep->num == PCH_UDC_EP0) 1775 + ep->dev->stall = 1; 1776 + pch_udc_ep_set_stall(ep); 1777 + pch_udc_enable_ep_interrupts(ep->dev, 1778 + PCH_UDC_EPINT(ep->in, ep->num)); 1779 + ep->dev->prot_stall = 1; 1780 + ret = 0; 1781 + } 1782 + spin_unlock_irqrestore(&udc_stall_spinlock, iflags); 1783 + return ret; 1784 + } 1785 + 1786 + /** 1787 + * pch_udc_pcd_fifo_flush() - This function Flush the FIFO of specified endpoint 1788 + * @usbep: Reference to the USB endpoint structure 1789 + */ 1790 + static void pch_udc_pcd_fifo_flush(struct usb_ep *usbep) 1791 + { 1792 + struct pch_udc_ep *ep; 1793 + 1794 + if (!usbep) 1795 + return; 1796 + 1797 + ep = container_of(usbep, struct pch_udc_ep, ep); 1798 + if (ep->desc || !ep->num) 1799 + pch_udc_ep_fifo_flush(ep, ep->in); 1800 + } 1801 + 1802 + static const struct usb_ep_ops pch_udc_ep_ops = { 1803 + .enable = pch_udc_pcd_ep_enable, 1804 + .disable = pch_udc_pcd_ep_disable, 1805 + .alloc_request = pch_udc_alloc_request, 1806 + .free_request = pch_udc_free_request, 1807 + .queue = pch_udc_pcd_queue, 1808 + .dequeue = pch_udc_pcd_dequeue, 1809 + .set_halt = pch_udc_pcd_set_halt, 1810 + .set_wedge = pch_udc_pcd_set_wedge, 1811 + .fifo_status = NULL, 1812 + .fifo_flush = pch_udc_pcd_fifo_flush, 1813 + }; 1814 + 1815 + /** 1816 + * pch_udc_init_setup_buff() - This function initializes the SETUP buffer 1817 + * @td_stp: Reference to the SETP buffer structure 1818 + */ 1819 + static void pch_udc_init_setup_buff(struct pch_udc_stp_dma_desc *td_stp) 1820 + { 1821 + static u32 pky_marker; 1822 + 1823 + if (!td_stp) 1824 + return; 1825 + td_stp->reserved = ++pky_marker; 1826 + memset(&td_stp->request, 0xFF, sizeof td_stp->request); 1827 + td_stp->status = PCH_UDC_BS_HST_RDY; 1828 + } 1829 + 1830 + /** 1831 + * pch_udc_start_next_txrequest() - This function starts 1832 + * the next transmission requirement 1833 + * @ep: Reference to the endpoint structure 1834 + */ 
static void pch_udc_start_next_txrequest(struct pch_udc_ep *ep)
{
	struct pch_udc_request *req;
	struct pch_udc_data_dma_desc *td_data;

	if (pch_udc_read_ep_control(ep) & UDC_EPCTL_P)
		return;

	if (list_empty(&ep->queue))
		return;

	/* next request */
	req = list_entry(ep->queue.next, struct pch_udc_request, queue);
	if (req->dma_going)
		return;
	if (!req->td_data)
		return;
	pch_udc_wait_ep_stall(ep);
	req->dma_going = 1;
	pch_udc_ep_set_ddptr(ep, 0);
	td_data = req->td_data;
	while (1) {
		td_data->status = (td_data->status & ~PCH_UDC_BUFF_STS) |
				  PCH_UDC_BS_HST_RDY;
		if ((td_data->status & PCH_UDC_DMA_LAST) == PCH_UDC_DMA_LAST)
			break;
		td_data = phys_to_virt(td_data->next);
	}
	pch_udc_ep_set_ddptr(ep, req->td_data_phys);
	pch_udc_set_dma(ep->dev, DMA_DIR_TX);
	pch_udc_ep_set_pd(ep);
	pch_udc_enable_ep_interrupts(ep->dev, PCH_UDC_EPINT(ep->in, ep->num));
	pch_udc_ep_clear_nak(ep);
}

/**
 * pch_udc_complete_transfer() - This function completes a transfer
 * @ep:	Reference to the endpoint structure
 */
static void pch_udc_complete_transfer(struct pch_udc_ep *ep)
{
	struct pch_udc_request *req;
	struct pch_udc_dev *dev = ep->dev;

	if (list_empty(&ep->queue))
		return;
	req = list_entry(ep->queue.next, struct pch_udc_request, queue);
	if ((req->td_data_last->status & PCH_UDC_BUFF_STS) !=
	    PCH_UDC_BS_DMA_DONE)
		return;
	if ((req->td_data_last->status & PCH_UDC_RXTX_STS) !=
	    PCH_UDC_RTS_SUCC) {
		dev_err(&dev->pdev->dev, "Invalid RXTX status (0x%08x) "
			"epstatus=0x%08x\n",
			(req->td_data_last->status & PCH_UDC_RXTX_STS),
			(int)(ep->epsts));
		return;
	}

	req->req.actual = req->req.length;
	req->td_data_last->status = PCH_UDC_BS_HST_BSY | PCH_UDC_DMA_LAST;
	req->td_data->status = PCH_UDC_BS_HST_BSY | PCH_UDC_DMA_LAST;
	complete_req(ep, req, 0);
	req->dma_going = 0;
	if (!list_empty(&ep->queue)) {
		pch_udc_wait_ep_stall(ep);
		pch_udc_ep_clear_nak(ep);
		pch_udc_enable_ep_interrupts(ep->dev,
					     PCH_UDC_EPINT(ep->in, ep->num));
	} else {
		pch_udc_disable_ep_interrupts(ep->dev,
					      PCH_UDC_EPINT(ep->in, ep->num));
	}
}

/**
 * pch_udc_complete_receiver() - This function completes a receiver
 * @ep:	Reference to the endpoint structure
 */
static void pch_udc_complete_receiver(struct pch_udc_ep *ep)
{
	struct pch_udc_request *req;
	struct pch_udc_dev *dev = ep->dev;
	unsigned int count;

	if (list_empty(&ep->queue))
		return;

	/* next request */
	req = list_entry(ep->queue.next, struct pch_udc_request, queue);
	if ((req->td_data_last->status & PCH_UDC_BUFF_STS) !=
	    PCH_UDC_BS_DMA_DONE)
		return;
	pch_udc_clear_dma(ep->dev, DMA_DIR_RX);
	if ((req->td_data_last->status & PCH_UDC_RXTX_STS) !=
	    PCH_UDC_RTS_SUCC) {
		dev_err(&dev->pdev->dev, "Invalid RXTX status (0x%08x) "
			"epstatus=0x%08x\n",
			(req->td_data_last->status & PCH_UDC_RXTX_STS),
			(int)(ep->epsts));
		return;
	}
	count = req->td_data_last->status & PCH_UDC_RXTX_BYTES;

	/* on 64k packets the RXBYTES field is zero */
	if (!count && (req->req.length == UDC_DMA_MAXPACKET))
		count = UDC_DMA_MAXPACKET;
	req->td_data->status |= PCH_UDC_DMA_LAST;
	req->td_data_last->status |= PCH_UDC_BS_HST_BSY;

	req->dma_going = 0;
	req->req.actual = count;
	complete_req(ep, req, 0);
	/* If there are new/failed requests, try them now */
	if (!list_empty(&ep->queue)) {
		req = list_entry(ep->queue.next, struct pch_udc_request,
				 queue);
		pch_udc_start_rxrequest(ep, req);
	}
}

/**
 * pch_udc_svc_data_in() - This function processes endpoint interrupts
 *			   for IN endpoints
 * @dev:	Reference to the device structure
 * @ep_num:	Endpoint that generated the interrupt
 */
static void pch_udc_svc_data_in(struct pch_udc_dev *dev, int ep_num)
{
	u32 epsts;
	struct pch_udc_ep *ep;

	ep = &dev->ep[2 * ep_num];
	epsts = ep->epsts;
	ep->epsts = 0;

	if (!(epsts & (UDC_EPSTS_IN | UDC_EPSTS_BNA | UDC_EPSTS_HE |
		       UDC_EPSTS_TDC | UDC_EPSTS_RCS | UDC_EPSTS_TXEMPTY |
		       UDC_EPSTS_RSS | UDC_EPSTS_XFERDONE)))
		return;
	if (epsts & UDC_EPSTS_BNA)
		return;
	if (epsts & UDC_EPSTS_HE)
		return;
	if (epsts & UDC_EPSTS_RSS) {
		pch_udc_ep_set_stall(ep);
		pch_udc_enable_ep_interrupts(ep->dev,
					     PCH_UDC_EPINT(ep->in, ep->num));
	}
	if (epsts & UDC_EPSTS_RCS) {
		if (!dev->prot_stall) {
			pch_udc_ep_clear_stall(ep);
		} else {
			pch_udc_ep_set_stall(ep);
			pch_udc_enable_ep_interrupts(ep->dev,
					PCH_UDC_EPINT(ep->in, ep->num));
		}
	}
	if (epsts & UDC_EPSTS_TDC)
		pch_udc_complete_transfer(ep);
	/* On IN interrupt, provide data if we have any */
	if ((epsts & UDC_EPSTS_IN) && !(epsts & UDC_EPSTS_RSS) &&
	    !(epsts & UDC_EPSTS_TDC) && !(epsts & UDC_EPSTS_TXEMPTY))
		pch_udc_start_next_txrequest(ep);
}

/**
 * pch_udc_svc_data_out() - Handles interrupts from OUT endpoint
 * @dev:	Reference to the device structure
 * @ep_num:	Endpoint that generated the interrupt
 */
static void pch_udc_svc_data_out(struct pch_udc_dev *dev, int ep_num)
{
	u32 epsts;
	struct pch_udc_ep *ep;
	struct pch_udc_request *req = NULL;

	ep = &dev->ep[2 * ep_num + 1];
	epsts = ep->epsts;
	ep->epsts = 0;

	if ((epsts & UDC_EPSTS_BNA) && (!list_empty(&ep->queue))) {
		/* next request */
		req = list_entry(ep->queue.next, struct pch_udc_request,
				 queue);
		if ((req->td_data_last->status & PCH_UDC_BUFF_STS) !=
		    PCH_UDC_BS_DMA_DONE) {
			if (!req->dma_going)
				pch_udc_start_rxrequest(ep, req);
			return;
		}
	}
	if (epsts & UDC_EPSTS_HE)
		return;
	if (epsts & UDC_EPSTS_RSS) {
		pch_udc_ep_set_stall(ep);
		pch_udc_enable_ep_interrupts(ep->dev,
					     PCH_UDC_EPINT(ep->in, ep->num));
	}
	if (epsts & UDC_EPSTS_RCS) {
		if (!dev->prot_stall) {
			pch_udc_ep_clear_stall(ep);
		} else {
			pch_udc_ep_set_stall(ep);
			pch_udc_enable_ep_interrupts(ep->dev,
					PCH_UDC_EPINT(ep->in, ep->num));
		}
	}
	if (((epsts & UDC_EPSTS_OUT_MASK) >> UDC_EPSTS_OUT_SHIFT) ==
	    UDC_EPSTS_OUT_DATA) {
		if (ep->dev->prot_stall == 1) {
			pch_udc_ep_set_stall(ep);
			pch_udc_enable_ep_interrupts(ep->dev,
					PCH_UDC_EPINT(ep->in, ep->num));
		} else {
			pch_udc_complete_receiver(ep);
		}
	}
	if (list_empty(&ep->queue))
		pch_udc_set_dma(dev, DMA_DIR_RX);
}

/**
 * pch_udc_svc_control_in() - Handle Control IN endpoint interrupts
 * @dev:	Reference to the device structure
 */
static void pch_udc_svc_control_in(struct pch_udc_dev *dev)
{
	u32 epsts;
	struct pch_udc_ep *ep;

	ep = &dev->ep[UDC_EP0IN_IDX];
	epsts = ep->epsts;
	ep->epsts = 0;

	if (!(epsts & (UDC_EPSTS_IN | UDC_EPSTS_BNA | UDC_EPSTS_HE |
		       UDC_EPSTS_TDC | UDC_EPSTS_RCS | UDC_EPSTS_TXEMPTY |
		       UDC_EPSTS_XFERDONE)))
		return;
	if (epsts & UDC_EPSTS_BNA)
		return;
	if (epsts & UDC_EPSTS_HE)
		return;
	if ((epsts & UDC_EPSTS_TDC) && (!dev->stall))
		pch_udc_complete_transfer(ep);
	/* On IN interrupt, provide data if we have any */
	if ((epsts & UDC_EPSTS_IN) && !(epsts & UDC_EPSTS_TDC) &&
	    !(epsts & UDC_EPSTS_TXEMPTY))
		pch_udc_start_next_txrequest(ep);
}

/**
 * pch_udc_svc_control_out() - Routine that handles Control OUT endpoint
 *			       interrupts
 * @dev:	Reference to the device structure
 */
static void pch_udc_svc_control_out(struct pch_udc_dev *dev)
{
	u32 stat;
	int setup_supported;
	struct pch_udc_ep *ep;

	ep = &dev->ep[UDC_EP0OUT_IDX];
	stat = ep->epsts;
	ep->epsts = 0;

	/* If setup data */
	if (((stat & UDC_EPSTS_OUT_MASK) >> UDC_EPSTS_OUT_SHIFT) ==
	    UDC_EPSTS_OUT_SETUP) {
		dev->stall = 0;
		dev->ep[UDC_EP0IN_IDX].halted = 0;
		dev->ep[UDC_EP0OUT_IDX].halted = 0;
		/* In data not ready */
		pch_udc_ep_set_nak(&(dev->ep[UDC_EP0IN_IDX]));
		dev->setup_data = ep->td_stp->request;
		pch_udc_init_setup_buff(ep->td_stp);
		pch_udc_clear_dma(dev, DMA_DIR_TX);
		pch_udc_ep_fifo_flush(&(dev->ep[UDC_EP0IN_IDX]),
				      dev->ep[UDC_EP0IN_IDX].in);
		if ((dev->setup_data.bRequestType & USB_DIR_IN))
			dev->gadget.ep0 = &dev->ep[UDC_EP0IN_IDX].ep;
		else /* OUT */
			dev->gadget.ep0 = &ep->ep;
		spin_unlock(&dev->lock);
		/* If Mass storage Reset */
		if ((dev->setup_data.bRequestType == 0x21) &&
		    (dev->setup_data.bRequest == 0xFF))
			dev->prot_stall = 0;
		/* call gadget with setup data received */
		setup_supported = dev->driver->setup(&dev->gadget,
						     &dev->setup_data);
		spin_lock(&dev->lock);
		/* ep0 in returns data on IN phase */
		if (setup_supported >= 0 &&
		    setup_supported < UDC_EP0IN_MAX_PKT_SIZE) {
			pch_udc_ep_clear_nak(&(dev->ep[UDC_EP0IN_IDX]));
			/* Gadget would have queued a request when
			 * we called the setup */
			pch_udc_set_dma(dev, DMA_DIR_RX);
			pch_udc_ep_clear_nak(ep);
		} else if (setup_supported < 0) {
			/* if unsupported request, then stall */
			pch_udc_ep_set_stall(&(dev->ep[UDC_EP0IN_IDX]));
			pch_udc_enable_ep_interrupts(ep->dev,
					PCH_UDC_EPINT(ep->in, ep->num));
			dev->stall = 0;
			pch_udc_set_dma(dev, DMA_DIR_RX);
		} else {
			dev->waiting_zlp_ack = 1;
		}
	} else if ((((stat & UDC_EPSTS_OUT_MASK) >> UDC_EPSTS_OUT_SHIFT) ==
		    UDC_EPSTS_OUT_DATA) && !dev->stall) {
		if (list_empty(&ep->queue)) {
			dev_err(&dev->pdev->dev, "%s: No request\n", __func__);
			ep->td_data->status = (ep->td_data->status &
					       ~PCH_UDC_BUFF_STS) |
					      PCH_UDC_BS_HST_RDY;
			pch_udc_set_dma(dev, DMA_DIR_RX);
		} else {
			/* control write */
			/* the next function will pick up and clear
			 * the status */
			ep->epsts = stat;

			pch_udc_svc_data_out(dev, 0);
			/* re-program desc. pointer for possible ZLPs */
			pch_udc_ep_set_ddptr(ep, ep->td_data_phys);
			pch_udc_set_dma(dev, DMA_DIR_RX);
		}
	}
	pch_udc_ep_set_rrdy(ep);
}


/**
 * pch_udc_postsvc_epinters() - This function enables endpoint interrupts
 *				and clears the NAK status
 * @dev:	Reference to the device structure
 * @ep_num:	Endpoint number
 */
static void pch_udc_postsvc_epinters(struct pch_udc_dev *dev, int ep_num)
{
	struct pch_udc_ep *ep;
	struct pch_udc_request *req;

	ep = &dev->ep[2 * ep_num];
	if (!list_empty(&ep->queue)) {
		req = list_entry(ep->queue.next, struct pch_udc_request,
				 queue);
		pch_udc_enable_ep_interrupts(ep->dev,
					     PCH_UDC_EPINT(ep->in, ep->num));
		pch_udc_ep_clear_nak(ep);
	}
}

/**
 * pch_udc_read_all_epstatus() - This function reads all endpoint statuses
 * @dev:	Reference to the device structure
 * @ep_intr:	Status of the endpoint interrupts
 */
static void pch_udc_read_all_epstatus(struct pch_udc_dev *dev, u32 ep_intr)
{
	int i;
	struct pch_udc_ep *ep;

	for (i = 0; i < PCH_UDC_USED_EP_NUM; i++) {
		/* IN */
		if (ep_intr & (0x1 << i)) {
			ep = &dev->ep[2 * i];
			ep->epsts = pch_udc_read_ep_status(ep);
			pch_udc_clear_ep_status(ep, ep->epsts);
		}
		/* OUT */
		if (ep_intr & (0x10000 << i)) {
			ep = &dev->ep[2 * i + 1];
			ep->epsts = pch_udc_read_ep_status(ep);
			pch_udc_clear_ep_status(ep, ep->epsts);
		}
	}
}

/**
 * pch_udc_activate_control_ep() - This function enables the control endpoints
 *				   for traffic after a reset
 * @dev:	Reference to the device structure
 */
static void pch_udc_activate_control_ep(struct pch_udc_dev *dev)
{
	struct pch_udc_ep *ep;
	u32 val;

	/* Setup the IN endpoint */
	ep = &dev->ep[UDC_EP0IN_IDX];
	pch_udc_clear_ep_control(ep);
	pch_udc_ep_fifo_flush(ep, ep->in);
	pch_udc_ep_set_bufsz(ep, UDC_EP0IN_BUFF_SIZE, ep->in);
	pch_udc_ep_set_maxpkt(ep, UDC_EP0IN_MAX_PKT_SIZE);
	/* Initialize the IN EP Descriptor */
	ep->td_data      = NULL;
	ep->td_stp       = NULL;
	ep->td_data_phys = 0;
	ep->td_stp_phys  = 0;

	/* Setup the OUT endpoint */
	ep = &dev->ep[UDC_EP0OUT_IDX];
	pch_udc_clear_ep_control(ep);
	pch_udc_ep_fifo_flush(ep, ep->in);
	pch_udc_ep_set_bufsz(ep, UDC_EP0OUT_BUFF_SIZE, ep->in);
	pch_udc_ep_set_maxpkt(ep, UDC_EP0OUT_MAX_PKT_SIZE);
	val = UDC_EP0OUT_MAX_PKT_SIZE << UDC_CSR_NE_MAX_PKT_SHIFT;
	pch_udc_write_csr(ep->dev, val, UDC_EP0OUT_IDX);

	/* Initialize the SETUP buffer */
	pch_udc_init_setup_buff(ep->td_stp);
	/* Write the pointer address of the setup descriptor */
	pch_udc_ep_set_subptr(ep, ep->td_stp_phys);
	/* Write the pointer address of the data dma descriptor */
	pch_udc_ep_set_ddptr(ep, ep->td_data_phys);

	/* Initialize the dma descriptor */
	ep->td_data->status  = PCH_UDC_DMA_LAST;
	ep->td_data->dataptr = dev->dma_addr;
	ep->td_data->next    = ep->td_data_phys;

	pch_udc_ep_clear_nak(ep);
}


/**
 * pch_udc_svc_ur_interrupt() - This function handles a USB reset interrupt
 * @dev:	Reference to the driver structure
 */
static void pch_udc_svc_ur_interrupt(struct pch_udc_dev *dev)
{
	struct pch_udc_ep *ep;
	int i;

	pch_udc_clear_dma(dev, DMA_DIR_TX);
	pch_udc_clear_dma(dev, DMA_DIR_RX);
	/* Mask all endpoint interrupts */
	pch_udc_disable_ep_interrupts(dev, UDC_EPINT_MSK_DISABLE_ALL);
	/* clear all endpoint interrupts */
	pch_udc_write_ep_interrupts(dev, UDC_EPINT_MSK_DISABLE_ALL);

	for (i = 0; i < PCH_UDC_EP_NUM; i++) {
		ep = &dev->ep[i];
		pch_udc_clear_ep_status(ep, UDC_EPSTS_ALL_CLR_MASK);
		pch_udc_clear_ep_control(ep);
		pch_udc_ep_set_ddptr(ep, 0);
		pch_udc_write_csr(ep->dev, 0x00, i);
	}
	dev->stall = 0;
	dev->prot_stall = 0;
	dev->waiting_zlp_ack = 0;
	dev->set_cfg_not_acked = 0;

	/* disable ep to empty the req queue; skip the control EPs */
	for (i = 0; i < (PCH_UDC_USED_EP_NUM * 2); i++) {
		ep = &dev->ep[i];
		pch_udc_ep_set_nak(ep);
		pch_udc_ep_fifo_flush(ep, ep->in);
		/* Complete request queue */
		empty_req_queue(ep);
	}
	if (dev->driver && dev->driver->disconnect)
		dev->driver->disconnect(&dev->gadget);
}

/**
 * pch_udc_svc_enum_interrupt() - This function handles a USB speed
 *				  enumeration done interrupt
 * @dev:	Reference to the driver structure
 */
static void pch_udc_svc_enum_interrupt(struct pch_udc_dev *dev)
{
	u32 dev_stat, dev_speed;
	u32 speed = USB_SPEED_FULL;

	dev_stat = pch_udc_read_device_status(dev);
	dev_speed = (dev_stat & UDC_DEVSTS_ENUM_SPEED_MASK) >>
		    UDC_DEVSTS_ENUM_SPEED_SHIFT;
	switch (dev_speed) {
	case UDC_DEVSTS_ENUM_SPEED_HIGH:
		speed = USB_SPEED_HIGH;
		break;
	case UDC_DEVSTS_ENUM_SPEED_FULL:
		speed = USB_SPEED_FULL;
		break;
	case UDC_DEVSTS_ENUM_SPEED_LOW:
		speed = USB_SPEED_LOW;
		break;
	default:
		BUG();
	}
	dev->gadget.speed = speed;
	pch_udc_activate_control_ep(dev);
	pch_udc_enable_ep_interrupts(dev, UDC_EPINT_IN_EP0 | UDC_EPINT_OUT_EP0);
	pch_udc_set_dma(dev, DMA_DIR_TX);
	pch_udc_set_dma(dev, DMA_DIR_RX);
	pch_udc_ep_set_rrdy(&(dev->ep[UDC_EP0OUT_IDX]));
}

/**
 * pch_udc_svc_intf_interrupt() - This function handles a set interface
 *				  interrupt
 * @dev:	Reference to the driver structure
 */
static void pch_udc_svc_intf_interrupt(struct pch_udc_dev *dev)
{
	u32 reg, dev_stat = 0;
	int i, ret;

	dev_stat = pch_udc_read_device_status(dev);
	dev->cfg_data.cur_intf = (dev_stat & UDC_DEVSTS_INTF_MASK) >>
				 UDC_DEVSTS_INTF_SHIFT;
	dev->cfg_data.cur_alt = (dev_stat & UDC_DEVSTS_ALT_MASK) >>
				UDC_DEVSTS_ALT_SHIFT;
	dev->set_cfg_not_acked = 1;
	/* Construct the usb request for the gadget driver and inform it */
	memset(&dev->setup_data, 0, sizeof dev->setup_data);
	dev->setup_data.bRequest = USB_REQ_SET_INTERFACE;
	dev->setup_data.bRequestType = USB_RECIP_INTERFACE;
	dev->setup_data.wValue = cpu_to_le16(dev->cfg_data.cur_alt);
	dev->setup_data.wIndex = cpu_to_le16(dev->cfg_data.cur_intf);
	/* program the Endpoint Cfg registers */
	/* Only one endpoint cfg register */
	reg = pch_udc_read_csr(dev, UDC_EP0OUT_IDX);
	reg = (reg & ~UDC_CSR_NE_INTF_MASK) |
	      (dev->cfg_data.cur_intf << UDC_CSR_NE_INTF_SHIFT);
	reg = (reg & ~UDC_CSR_NE_ALT_MASK) |
	      (dev->cfg_data.cur_alt << UDC_CSR_NE_ALT_SHIFT);
	pch_udc_write_csr(dev, reg, UDC_EP0OUT_IDX);
	for (i = 0; i < PCH_UDC_USED_EP_NUM * 2; i++) {
		/* clear stall bits */
		pch_udc_ep_clear_stall(&(dev->ep[i]));
		dev->ep[i].halted = 0;
	}
	dev->stall = 0;
	spin_unlock(&dev->lock);
	ret = dev->driver->setup(&dev->gadget, &dev->setup_data);
	spin_lock(&dev->lock);
}

/**
 * pch_udc_svc_cfg_interrupt() - This function handles a set configuration
 *				 interrupt
 * @dev:	Reference to the driver structure
 */
static void pch_udc_svc_cfg_interrupt(struct pch_udc_dev *dev)
{
	int i, ret;
	u32 reg, dev_stat = 0;

	dev_stat = pch_udc_read_device_status(dev);
	dev->set_cfg_not_acked = 1;
	dev->cfg_data.cur_cfg = (dev_stat & UDC_DEVSTS_CFG_MASK) >>
				UDC_DEVSTS_CFG_SHIFT;
	/* make usb request for gadget driver */
	memset(&dev->setup_data, 0, sizeof dev->setup_data);
	dev->setup_data.bRequest = USB_REQ_SET_CONFIGURATION;
	dev->setup_data.wValue = cpu_to_le16(dev->cfg_data.cur_cfg);
	/* program the NE registers */
	/* Only one endpoint cfg register */
	reg = pch_udc_read_csr(dev, UDC_EP0OUT_IDX);
	reg = (reg & ~UDC_CSR_NE_CFG_MASK) |
	      (dev->cfg_data.cur_cfg << UDC_CSR_NE_CFG_SHIFT);
	pch_udc_write_csr(dev, reg, UDC_EP0OUT_IDX);
	for (i = 0; i < PCH_UDC_USED_EP_NUM * 2; i++) {
		/* clear stall bits */
		pch_udc_ep_clear_stall(&(dev->ep[i]));
		dev->ep[i].halted = 0;
	}
	dev->stall = 0;

	/* call gadget zero with setup data received */
	spin_unlock(&dev->lock);
	ret = dev->driver->setup(&dev->gadget, &dev->setup_data);
	spin_lock(&dev->lock);
}

/**
 * pch_udc_dev_isr() - This function services device interrupts
 *		       by invoking the appropriate routines
 * @dev:	Reference to the device structure
 * @dev_intr:	The device interrupt status
 */
static void pch_udc_dev_isr(struct pch_udc_dev *dev, u32 dev_intr)
{
	/* USB Reset Interrupt */
	if (dev_intr & UDC_DEVINT_UR)
		pch_udc_svc_ur_interrupt(dev);
	/* Enumeration Done Interrupt */
	if (dev_intr & UDC_DEVINT_ENUM)
		pch_udc_svc_enum_interrupt(dev);
	/* Set Interface Interrupt */
	if (dev_intr & UDC_DEVINT_SI)
		pch_udc_svc_intf_interrupt(dev);
	/* Set Config Interrupt */
	if (dev_intr & UDC_DEVINT_SC)
		pch_udc_svc_cfg_interrupt(dev);
	/* USB Suspend interrupt */
	if (dev_intr & UDC_DEVINT_US)
		dev_dbg(&dev->pdev->dev, "USB_SUSPEND\n");
	/* Clear the SOF interrupt, if enabled */
	if (dev_intr & UDC_DEVINT_SOF)
		dev_dbg(&dev->pdev->dev, "SOF\n");
	/* ES interrupt, IDLE > 3ms on the USB */
	if (dev_intr & UDC_DEVINT_ES)
		dev_dbg(&dev->pdev->dev, "ES\n");
	/* RWKP interrupt */
	if (dev_intr & UDC_DEVINT_RWKP)
		dev_dbg(&dev->pdev->dev, "RWKP\n");
}

/**
 * pch_udc_isr() - This function handles interrupts from the PCH USB device
 * @irq:	Interrupt request number
 * @pdev:	Reference to the device structure
 */
static irqreturn_t pch_udc_isr(int irq, void *pdev)
{
	struct pch_udc_dev *dev = (struct pch_udc_dev *) pdev;
	u32 dev_intr, ep_intr;
	int i;

	dev_intr = pch_udc_read_device_interrupts(dev);
	ep_intr = pch_udc_read_ep_interrupts(dev);

	/* Clear device interrupts */
	if (dev_intr)
		pch_udc_write_device_interrupts(dev, dev_intr);
	/* Clear ep interrupts */
	if (ep_intr)
		pch_udc_write_ep_interrupts(dev, ep_intr);
	if (!dev_intr && !ep_intr)
		return IRQ_NONE;
	spin_lock(&dev->lock);
	if (dev_intr)
		pch_udc_dev_isr(dev, dev_intr);
	if (ep_intr) {
		pch_udc_read_all_epstatus(dev, ep_intr);
		/* Process Control In interrupts, if present */
		if (ep_intr & UDC_EPINT_IN_EP0) {
			pch_udc_svc_control_in(dev);
			pch_udc_postsvc_epinters(dev, 0);
		}
		/* Process Control Out interrupts, if present */
		if (ep_intr & UDC_EPINT_OUT_EP0)
			pch_udc_svc_control_out(dev);
		/* Process data in endpoint interrupts */
		for (i = 1; i < PCH_UDC_USED_EP_NUM; i++) {
			if (ep_intr & (1 << i)) {
				pch_udc_svc_data_in(dev, i);
				pch_udc_postsvc_epinters(dev, i);
			}
		}
		/* Process data out endpoint interrupts */
		for (i = UDC_EPINT_OUT_SHIFT + 1; i < (UDC_EPINT_OUT_SHIFT +
						       PCH_UDC_USED_EP_NUM);
		     i++)
			if (ep_intr & (1 << i))
				pch_udc_svc_data_out(dev, i -
						     UDC_EPINT_OUT_SHIFT);
	}
	spin_unlock(&dev->lock);
	return IRQ_HANDLED;
}

/**
 * pch_udc_setup_ep0() - This function enables the control endpoint for traffic
 * @dev:	Reference to the device structure
 */
static void pch_udc_setup_ep0(struct pch_udc_dev *dev)
{
	/* enable ep0 interrupts */
	pch_udc_enable_ep_interrupts(dev, UDC_EPINT_IN_EP0 |
					  UDC_EPINT_OUT_EP0);
	/* enable device interrupts */
	pch_udc_enable_interrupts(dev, UDC_DEVINT_UR | UDC_DEVINT_US |
				       UDC_DEVINT_ES | UDC_DEVINT_ENUM |
				       UDC_DEVINT_SI | UDC_DEVINT_SC);
}

/**
 * gadget_release() - Free the gadget driver private data
 * @pdev:	reference to struct device
 */
static void gadget_release(struct device *pdev)
{
	struct pch_udc_dev *dev = dev_get_drvdata(pdev);

	kfree(dev);
}

/**
 * pch_udc_pcd_reinit() - This API initializes the endpoint structures
 * @dev:	Reference to the driver structure
 */
static void pch_udc_pcd_reinit(struct pch_udc_dev *dev)
{
	const char *const ep_string[] = {
		ep0_string, "ep0out", "ep1in", "ep1out", "ep2in", "ep2out",
		"ep3in", "ep3out", "ep4in", "ep4out", "ep5in", "ep5out",
		"ep6in", "ep6out", "ep7in", "ep7out", "ep8in", "ep8out",
		"ep9in", "ep9out", "ep10in", "ep10out", "ep11in", "ep11out",
		"ep12in", "ep12out", "ep13in", "ep13out", "ep14in", "ep14out",
		"ep15in", "ep15out",
	};
	int i;

	dev->gadget.speed = USB_SPEED_UNKNOWN;
	INIT_LIST_HEAD(&dev->gadget.ep_list);

	/* Initialize the endpoint structures */
	memset(dev->ep, 0, sizeof dev->ep);
	for (i = 0; i < PCH_UDC_EP_NUM; i++) {
		struct pch_udc_ep *ep = &dev->ep[i];
		ep->dev = dev;
		ep->halted = 1;
		ep->num = i / 2;
		ep->in = ~i & 1;
		ep->ep.name = ep_string[i];
		ep->ep.ops = &pch_udc_ep_ops;
		if (ep->in)
			ep->offset_addr = ep->num * UDC_EP_REG_SHIFT;
		else
			ep->offset_addr = (UDC_EPINT_OUT_SHIFT + ep->num) *
					  UDC_EP_REG_SHIFT;
		/* need to set ep->ep.maxpacket and set Default Configuration? */
		ep->ep.maxpacket = UDC_BULK_MAX_PKT_SIZE;
		list_add_tail(&ep->ep.ep_list, &dev->gadget.ep_list);
		INIT_LIST_HEAD(&ep->queue);
	}
	dev->ep[UDC_EP0IN_IDX].ep.maxpacket = UDC_EP0IN_MAX_PKT_SIZE;
	dev->ep[UDC_EP0OUT_IDX].ep.maxpacket = UDC_EP0OUT_MAX_PKT_SIZE;

	dev->dma_addr = pci_map_single(dev->pdev, dev->ep0out_buf, 256,
				       PCI_DMA_FROMDEVICE);

	/* remove ep0 in and out from the list; they have their own pointers */
	list_del_init(&dev->ep[UDC_EP0IN_IDX].ep.ep_list);
	list_del_init(&dev->ep[UDC_EP0OUT_IDX].ep.ep_list);

	dev->gadget.ep0 = &dev->ep[UDC_EP0IN_IDX].ep;
	INIT_LIST_HEAD(&dev->gadget.ep0->ep_list);
}

/**
 * pch_udc_pcd_init() - This API initializes the driver structure
 * @dev:	Reference to the driver structure
 *
 * Return codes:
 *	0: Success
 */
static int pch_udc_pcd_init(struct pch_udc_dev *dev)
{
	pch_udc_init(dev);
	pch_udc_pcd_reinit(dev);
	return 0;
}

/**
 * init_dma_pools() - create dma pools during initialization
 * @dev:	reference to struct pch_udc_dev
 */
static int init_dma_pools(struct pch_udc_dev *dev)
{
	struct pch_udc_stp_dma_desc *td_stp;
	struct pch_udc_data_dma_desc *td_data;

	/* DMA setup */
	dev->data_requests = pci_pool_create("data_requests", dev->pdev,
					sizeof(struct pch_udc_data_dma_desc),
					0, 0);
	if (!dev->data_requests) {
		dev_err(&dev->pdev->dev, "%s: can't get request data pool\n",
			__func__);
		return -ENOMEM;
	}

	/* dma desc for setup data */
	dev->stp_requests = pci_pool_create("setup requests", dev->pdev,
					sizeof(struct pch_udc_stp_dma_desc),
					0, 0);
	if (!dev->stp_requests) {
		dev_err(&dev->pdev->dev, "%s: can't get setup request pool\n",
			__func__);
		return -ENOMEM;
	}
	/* setup */
	td_stp = pci_pool_alloc(dev->stp_requests, GFP_KERNEL,
				&dev->ep[UDC_EP0OUT_IDX].td_stp_phys);
	if (!td_stp) {
		dev_err(&dev->pdev->dev,
			"%s: can't allocate setup dma descriptor\n", __func__);
		return -ENOMEM;
	}
	dev->ep[UDC_EP0OUT_IDX].td_stp = td_stp;

	/* data: 0 packets !? */
	td_data = pci_pool_alloc(dev->data_requests, GFP_KERNEL,
				 &dev->ep[UDC_EP0OUT_IDX].td_data_phys);
	if (!td_data) {
		dev_err(&dev->pdev->dev,
			"%s: can't allocate data dma descriptor\n", __func__);
		return -ENOMEM;
	}
	dev->ep[UDC_EP0OUT_IDX].td_data = td_data;
	dev->ep[UDC_EP0IN_IDX].td_stp = NULL;
	dev->ep[UDC_EP0IN_IDX].td_stp_phys = 0;
	dev->ep[UDC_EP0IN_IDX].td_data = NULL;
	dev->ep[UDC_EP0IN_IDX].td_data_phys = 0;
	return 0;
}

int usb_gadget_probe_driver(struct usb_gadget_driver *driver,
			    int (*bind)(struct usb_gadget *))
{
	struct pch_udc_dev *dev = pch_udc;
	int retval;

	if (!dev)
		return -ENODEV;

	if (!driver || (driver->speed == USB_SPEED_UNKNOWN) || !bind ||
	    !driver->setup || !driver->unbind || !driver->disconnect) {
		dev_err(&dev->pdev->dev,
			"%s: invalid driver parameter\n", __func__);
		return -EINVAL;
	}

	if (dev->driver) {
		dev_err(&dev->pdev->dev, "%s: already bound\n", __func__);
		return -EBUSY;
	}
	driver->driver.bus = NULL;
	dev->driver = driver;
	dev->gadget.dev.driver = &driver->driver;

	/* Invoke the bind routine of the gadget driver */
	retval = bind(&dev->gadget);

	if (retval) {
		dev_err(&dev->pdev->dev, "%s: binding to %s returning %d\n",
			__func__, driver->driver.name, retval);
		dev->driver = NULL;
		dev->gadget.dev.driver = NULL;
		return retval;
	}
	/* get ready for ep0 traffic */
	pch_udc_setup_ep0(dev);

	/* clear SD */
	pch_udc_clear_disconnect(dev);

	dev->connected = 1;
	return
0; 2685 + } 2686 + EXPORT_SYMBOL(usb_gadget_probe_driver); 2687 + 2688 + int usb_gadget_unregister_driver(struct usb_gadget_driver *driver) 2689 + { 2690 + struct pch_udc_dev *dev = pch_udc; 2691 + 2692 + if (!dev) 2693 + return -ENODEV; 2694 + 2695 + if (!driver || (driver != dev->driver)) { 2696 + dev_err(&dev->pdev->dev, 2697 + "%s: invalid driver parameter\n", __func__); 2698 + return -EINVAL; 2699 + } 2700 + 2701 + pch_udc_disable_interrupts(dev, UDC_DEVINT_MSK); 2702 + 2703 + /* Assumes that there are no pending requests with this driver */ 2704 + driver->unbind(&dev->gadget); 2705 + dev->gadget.dev.driver = NULL; 2706 + dev->driver = NULL; 2707 + dev->connected = 0; 2708 + 2709 + /* set SD */ 2710 + pch_udc_set_disconnect(dev); 2711 + return 0; 2712 + } 2713 + EXPORT_SYMBOL(usb_gadget_unregister_driver); 2714 + 2715 + static void pch_udc_shutdown(struct pci_dev *pdev) 2716 + { 2717 + struct pch_udc_dev *dev = pci_get_drvdata(pdev); 2718 + 2719 + pch_udc_disable_interrupts(dev, UDC_DEVINT_MSK); 2720 + pch_udc_disable_ep_interrupts(dev, UDC_EPINT_MSK_DISABLE_ALL); 2721 + 2722 + /* disable the pullup so the host will think we're gone */ 2723 + pch_udc_set_disconnect(dev); 2724 + } 2725 + 2726 + static void pch_udc_remove(struct pci_dev *pdev) 2727 + { 2728 + struct pch_udc_dev *dev = pci_get_drvdata(pdev); 2729 + 2730 + /* gadget driver must not be registered */ 2731 + if (dev->driver) 2732 + dev_err(&pdev->dev, 2733 + "%s: gadget driver still bound!!!\n", __func__); 2734 + /* dma pool cleanup */ 2735 + if (dev->data_requests) 2736 + pci_pool_destroy(dev->data_requests); 2737 + 2738 + if (dev->stp_requests) { 2739 + /* cleanup DMA desc's for ep0out */ 2740 + if (dev->ep[UDC_EP0OUT_IDX].td_stp) { 2741 + pci_pool_free(dev->stp_requests, 2742 + dev->ep[UDC_EP0OUT_IDX].td_stp, 2743 + dev->ep[UDC_EP0OUT_IDX].td_stp_phys); 2744 + } 2745 + if (dev->ep[UDC_EP0OUT_IDX].td_data) { 2746 + pci_pool_free(dev->stp_requests, 2747 + dev->ep[UDC_EP0OUT_IDX].td_data, 2748 +
dev->ep[UDC_EP0OUT_IDX].td_data_phys); 2749 + } 2750 + pci_pool_destroy(dev->stp_requests); 2751 + } 2752 + 2753 + pch_udc_exit(dev); 2754 + 2755 + if (dev->irq_registered) 2756 + free_irq(pdev->irq, dev); 2757 + if (dev->base_addr) 2758 + iounmap(dev->base_addr); 2759 + if (dev->mem_region) 2760 + release_mem_region(dev->phys_addr, 2761 + pci_resource_len(pdev, PCH_UDC_PCI_BAR)); 2762 + if (dev->active) 2763 + pci_disable_device(pdev); 2764 + if (dev->registered) 2765 + device_unregister(&dev->gadget.dev); 2766 + kfree(dev); 2767 + pci_set_drvdata(pdev, NULL); 2768 + } 2769 + 2770 + #ifdef CONFIG_PM 2771 + static int pch_udc_suspend(struct pci_dev *pdev, pm_message_t state) 2772 + { 2773 + struct pch_udc_dev *dev = pci_get_drvdata(pdev); 2774 + 2775 + pch_udc_disable_interrupts(dev, UDC_DEVINT_MSK); 2776 + pch_udc_disable_ep_interrupts(dev, UDC_EPINT_MSK_DISABLE_ALL); 2777 + 2778 + pci_disable_device(pdev); 2779 + pci_enable_wake(pdev, PCI_D3hot, 0); 2780 + 2781 + if (pci_save_state(pdev)) { 2782 + dev_err(&pdev->dev, 2783 + "%s: could not save PCI config state\n", __func__); 2784 + return -ENOMEM; 2785 + } 2786 + pci_set_power_state(pdev, pci_choose_state(pdev, state)); 2787 + return 0; 2788 + } 2789 + 2790 + static int pch_udc_resume(struct pci_dev *pdev) 2791 + { 2792 + int ret; 2793 + 2794 + pci_set_power_state(pdev, PCI_D0); 2795 + ret = pci_restore_state(pdev); 2796 + if (ret) { 2797 + dev_err(&pdev->dev, "%s: pci_restore_state failed\n", __func__); 2798 + return ret; 2799 + } 2800 + ret = pci_enable_device(pdev); 2801 + if (ret) { 2802 + dev_err(&pdev->dev, "%s: pci_enable_device failed\n", __func__); 2803 + return ret; 2804 + } 2805 + pci_enable_wake(pdev, PCI_D3hot, 0); 2806 + return 0; 2807 + } 2808 + #else 2809 + #define pch_udc_suspend NULL 2810 + #define pch_udc_resume NULL 2811 + #endif /* CONFIG_PM */ 2812 + 2813 + static int pch_udc_probe(struct pci_dev *pdev, 2814 + const struct pci_device_id *id) 2815 + { 2816 + unsigned long resource; 2817 + 
unsigned long len; 2818 + int retval; 2819 + struct pch_udc_dev *dev; 2820 + 2821 + /* one udc only */ 2822 + if (pch_udc) { 2823 + pr_err("%s: already probed\n", __func__); 2824 + return -EBUSY; 2825 + } 2826 + /* init */ 2827 + dev = kzalloc(sizeof *dev, GFP_KERNEL); 2828 + if (!dev) { 2829 + pr_err("%s: no memory for device structure\n", __func__); 2830 + return -ENOMEM; 2831 + } 2832 + /* pci setup */ 2833 + if (pci_enable_device(pdev) < 0) { 2834 + kfree(dev); 2835 + pr_err("%s: pci_enable_device failed\n", __func__); 2836 + return -ENODEV; 2837 + } 2838 + dev->active = 1; 2839 + pci_set_drvdata(pdev, dev); 2840 + 2841 + /* PCI resource allocation */ 2842 + resource = pci_resource_start(pdev, 1); 2843 + len = pci_resource_len(pdev, 1); 2844 + 2845 + if (!request_mem_region(resource, len, KBUILD_MODNAME)) { 2846 + dev_err(&pdev->dev, "%s: pci device used already\n", __func__); 2847 + retval = -EBUSY; 2848 + goto finished; 2849 + } 2850 + dev->phys_addr = resource; 2851 + dev->mem_region = 1; 2852 + 2853 + dev->base_addr = ioremap_nocache(resource, len); 2854 + if (!dev->base_addr) { 2855 + pr_err("%s: device memory cannot be mapped\n", __func__); 2856 + retval = -ENOMEM; 2857 + goto finished; 2858 + } 2859 + if (!pdev->irq) { 2860 + dev_err(&pdev->dev, "%s: irq not set\n", __func__); 2861 + retval = -ENODEV; 2862 + goto finished; 2863 + } 2864 + pch_udc = dev; 2865 + /* initialize the hardware */ 2866 + if (pch_udc_pcd_init(dev)) 2867 + goto finished; 2868 + if (request_irq(pdev->irq, pch_udc_isr, IRQF_SHARED, KBUILD_MODNAME, 2869 + dev)) { 2870 + dev_err(&pdev->dev, "%s: request_irq(%d) fail\n", __func__, 2871 + pdev->irq); 2872 + retval = -ENODEV; 2873 + goto finished; 2874 + } 2875 + dev->irq = pdev->irq; 2876 + dev->irq_registered = 1; 2877 + 2878 + pci_set_master(pdev); 2879 + pci_try_set_mwi(pdev); 2880 + 2881 + /* device struct setup */ 2882 + spin_lock_init(&dev->lock); 2883 + dev->pdev = pdev; 2884 + dev->gadget.ops = &pch_udc_ops; 2885 + 2886 + retval 
= init_dma_pools(dev); 2887 + if (retval) 2888 + goto finished; 2889 + 2890 + dev_set_name(&dev->gadget.dev, "gadget"); 2891 + dev->gadget.dev.parent = &pdev->dev; 2892 + dev->gadget.dev.dma_mask = pdev->dev.dma_mask; 2893 + dev->gadget.dev.release = gadget_release; 2894 + dev->gadget.name = KBUILD_MODNAME; 2895 + dev->gadget.is_dualspeed = 1; 2896 + 2897 + retval = device_register(&dev->gadget.dev); 2898 + if (retval) 2899 + goto finished; 2900 + dev->registered = 1; 2901 + 2902 + /* Put the device in disconnected state till a driver is bound */ 2903 + pch_udc_set_disconnect(dev); 2904 + return 0; 2905 + 2906 + finished: 2907 + pch_udc_remove(pdev); 2908 + return retval; 2909 + } 2910 + 2911 + static DEFINE_PCI_DEVICE_TABLE(pch_udc_pcidev_id) = { 2912 + { 2913 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EG20T_UDC), 2914 + .class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe, 2915 + .class_mask = 0xffffffff, 2916 + }, 2917 + { 0 }, 2918 + }; 2919 + 2920 + MODULE_DEVICE_TABLE(pci, pch_udc_pcidev_id); 2921 + 2922 + 2923 + static struct pci_driver pch_udc_driver = { 2924 + .name = KBUILD_MODNAME, 2925 + .id_table = pch_udc_pcidev_id, 2926 + .probe = pch_udc_probe, 2927 + .remove = pch_udc_remove, 2928 + .suspend = pch_udc_suspend, 2929 + .resume = pch_udc_resume, 2930 + .shutdown = pch_udc_shutdown, 2931 + }; 2932 + 2933 + static int __init pch_udc_pci_init(void) 2934 + { 2935 + return pci_register_driver(&pch_udc_driver); 2936 + } 2937 + module_init(pch_udc_pci_init); 2938 + 2939 + static void __exit pch_udc_pci_exit(void) 2940 + { 2941 + pci_unregister_driver(&pch_udc_driver); 2942 + } 2943 + module_exit(pch_udc_pci_exit); 2944 + 2945 + MODULE_DESCRIPTION("Intel EG20T USB Device Controller"); 2946 + MODULE_AUTHOR("OKI SEMICONDUCTOR, <toshiharu-linux@dsn.okisemi.com>"); 2947 + MODULE_LICENSE("GPL");
+5 -5
drivers/usb/gadget/u_audio.c
··· 255 255 ERROR(card, "No such PCM capture device: %s\n", fn_cap); 256 256 snd->substream = NULL; 257 257 snd->card = NULL; 258 + snd->filp = NULL; 258 259 } else { 259 260 pcm_file = snd->filp->private_data; 260 261 snd->substream = pcm_file->substream; ··· 274 273 275 274 /* Close control device */ 276 275 snd = &gau->control; 277 - if (!IS_ERR(snd->filp)) 276 + if (snd->filp) 278 277 filp_close(snd->filp, current->files); 279 278 280 279 /* Close PCM playback device and setup substream */ 281 280 snd = &gau->playback; 282 - if (!IS_ERR(snd->filp)) 281 + if (snd->filp) 283 282 filp_close(snd->filp, current->files); 284 283 285 284 /* Close PCM capture device and setup substream */ 286 285 snd = &gau->capture; 287 - if (!IS_ERR(snd->filp)) 286 + if (snd->filp) 288 287 filp_close(snd->filp, current->files); 289 288 290 289 return 0; ··· 305 304 ret = gaudio_open_snd_dev(card); 306 305 if (ret) 307 306 ERROR(card, "we need at least one control device\n"); 308 - 309 - if (!the_card) 307 + else if (!the_card) 310 308 the_card = card; 311 309 312 310 return ret;
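The u_audio.c hunk above works because gaudio_open_snd_dev() now stores NULL in snd->filp when a device fails to open, so the close path can test the pointer directly. IS_ERR(NULL) is false, so the old `!IS_ERR(snd->filp)` check would have passed a never-opened NULL pointer straight to filp_close(). A minimal userspace sketch of that pointer convention (the helpers below are hypothetical stand-ins, not the kernel's err.h):

```c
#include <assert.h>
#include <stddef.h>

/* Error pointers encode -errno in the top 4095 addresses; a NULL
 * pointer is NOT an error pointer, which is why a close path guarding
 * a possibly-never-opened file must test for NULL, not IS_ERR().
 */
#define MAX_ERRNO 4095UL

static int is_err(const void *ptr)
{
	/* mirrors the kernel's IS_ERR() test */
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static int may_close(const void *filp)
{
	return filp != NULL;	/* the corrected check from the hunk */
}
```

Note that `is_err(NULL)` returns 0, so the old check alone could not distinguish "open failed" from "open succeeded" once the failure path stopped leaving an ERR_PTR value behind.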
+12 -2
drivers/usb/gadget/u_ether.c
··· 240 240 size += out->maxpacket - 1; 241 241 size -= size % out->maxpacket; 242 242 243 + if (dev->port_usb->is_fixed) 244 + size = max(size, dev->port_usb->fixed_out_len); 245 + 243 246 skb = alloc_skb(size + NET_IP_ALIGN, gfp_flags); 244 247 if (skb == NULL) { 245 248 DBG(dev, "no rx skb\n"); ··· 581 578 req->context = skb; 582 579 req->complete = tx_complete; 583 580 581 + /* NCM requires no zlp if transfer is dwNtbInMaxSize */ 582 + if (dev->port_usb->is_fixed && 583 + length == dev->port_usb->fixed_in_len && 584 + (length % in->maxpacket) == 0) 585 + req->zero = 0; 586 + else 587 + req->zero = 1; 588 + 584 589 /* use zlp framing on tx for strict CDC-Ether conformance, 585 590 * though any robust network rx path ignores extra padding. 586 591 * and some hardware doesn't like to write zlps. 587 592 */ 588 - req->zero = 1; 589 - if (!dev->zlp && (length % in->maxpacket) == 0) 593 + if (req->zero && !dev->zlp && (length % in->maxpacket) == 0) 590 594 length++; 591 595 592 596 req->length = length;
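The u_ether.c hunk above encodes a two-step framing decision: suppress the zlp only when an NCM-style fixed-size bundle exactly fills its transfer (dwNtbInMaxSize), otherwise request a zlp, and pad by one byte instead when the hardware cannot write zlps. A standalone sketch of that decision, using illustrative names rather than the driver's real structures:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative config (not the kernel's struct gether) */
struct link_cfg {
	bool is_fixed;		/* NCM-style fixed bundle size */
	unsigned fixed_in_len;	/* dwNtbInMaxSize */
	bool hw_zlp_ok;		/* hardware can write zlps (dev->zlp) */
};

/* Returns the (possibly padded) length to queue; *zero becomes req->zero */
static unsigned tx_framing(const struct link_cfg *cfg, unsigned length,
			   unsigned maxpacket, bool *zero)
{
	if (cfg->is_fixed && length == cfg->fixed_in_len &&
	    (length % maxpacket) == 0)
		*zero = false;	/* NCM: no zlp on a full fixed bundle */
	else
		*zero = true;	/* strict CDC-Ether: terminate with a zlp */

	/* some hardware dislikes writing zlps: pad by one byte instead */
	if (*zero && !cfg->hw_zlp_ok && (length % maxpacket) == 0)
		length++;
	return length;
}
```

The ordering matters: the padding fallback must only apply when a zlp was actually requested, which is exactly why the hunk moves the `req->zero` assignment ahead of the `!dev->zlp` padding check.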
+5
drivers/usb/gadget/u_ether.h
··· 62 62 63 63 /* hooks for added framing, as needed for RNDIS and EEM. */ 64 64 u32 header_len; 65 + /* NCM requires fixed size bundles */ 66 + bool is_fixed; 67 + u32 fixed_out_len; 68 + u32 fixed_in_len; 65 69 struct sk_buff *(*wrap)(struct gether *port, 66 70 struct sk_buff *skb); 67 71 int (*unwrap)(struct gether *port, ··· 107 103 /* each configuration may bind one instance of an ethernet link */ 108 104 int geth_bind_config(struct usb_configuration *c, u8 ethaddr[ETH_ALEN]); 109 105 int ecm_bind_config(struct usb_configuration *c, u8 ethaddr[ETH_ALEN]); 106 + int ncm_bind_config(struct usb_configuration *c, u8 ethaddr[ETH_ALEN]); 110 107 int eem_bind_config(struct usb_configuration *c); 111 108 112 109 #ifdef USB_ETH_RNDIS
+19
drivers/usb/host/Kconfig
··· 133 133 ---help--- 134 134 Variation of ARC USB block used in some Freescale chips. 135 135 136 + config USB_EHCI_HCD_OMAP 137 + bool "EHCI support for OMAP3 and later chips" 138 + depends on USB_EHCI_HCD && ARCH_OMAP 139 + default y 140 + ---help--- 141 + Enables support for the on-chip EHCI controller on 142 + OMAP3 and later chips. 143 + 144 + config USB_EHCI_MSM 145 + bool "Support for MSM on-chip EHCI USB controller" 146 + depends on USB_EHCI_HCD && ARCH_MSM 147 + select USB_EHCI_ROOT_HUB_TT 148 + select USB_MSM_OTG_72K 149 + ---help--- 150 + Enables support for the USB Host controller present on 151 + Qualcomm chipsets. The root hub has an inbuilt TT. 152 + This driver depends on the OTG driver for PHY initialization, 153 + clock management, powering up VBUS, and power management. 154 + 136 155 config USB_EHCI_HCD_PPC_OF 137 156 bool "EHCI support for PPC USB controller on OF platform bus" 138 157 depends on USB_EHCI_HCD && PPC_OF
+3
drivers/usb/host/ehci-atmel.c
··· 99 99 .urb_enqueue = ehci_urb_enqueue, 100 100 .urb_dequeue = ehci_urb_dequeue, 101 101 .endpoint_disable = ehci_endpoint_disable, 102 + .endpoint_reset = ehci_endpoint_reset, 102 103 103 104 /* scheduling support */ 104 105 .get_frame_number = ehci_get_frame, ··· 111 110 .bus_resume = ehci_bus_resume, 112 111 .relinquish_port = ehci_relinquish_port, 113 112 .port_handed_over = ehci_port_handed_over, 113 + 114 + .clear_tt_buffer_complete = ehci_clear_tt_buffer_complete, 114 115 }; 115 116 116 117 static int __init ehci_atmel_drv_probe(struct platform_device *pdev)
+1 -1
drivers/usb/host/ehci-dbg.c
··· 879 879 int ret = 0; 880 880 881 881 if (!buf->output_buf) 882 - buf->output_buf = (char *)vmalloc(buf->alloc_size); 882 + buf->output_buf = vmalloc(buf->alloc_size); 883 883 884 884 if (!buf->output_buf) { 885 885 ret = -ENOMEM;
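The one-line ehci-dbg.c change drops a needless cast: in C, `void *` converts implicitly to any object pointer type, and casting an allocator's return value can hide a missing prototype on older compilers (where the implicit `int` return would be silently converted). A userspace analogue of the same cleanup using malloc():

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper mirroring the debug-buffer allocation above:
 * malloc() returns void *, which assigns to char * without a cast.
 */
static char *make_output_buf(size_t alloc_size)
{
	char *buf = malloc(alloc_size);	/* no (char *) cast needed */

	if (buf)
		memset(buf, 0, alloc_size);	/* start with a clean buffer */
	return buf;
}
```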
+29 -1
drivers/usb/host/ehci-hcd.c
··· 114 114 115 115 #define INTR_MASK (STS_IAA | STS_FATAL | STS_PCD | STS_ERR | STS_INT) 116 116 117 + /* for ASPM quirk of ISOC on AMD SB800 */ 118 + static struct pci_dev *amd_nb_dev; 119 + 117 120 /*-------------------------------------------------------------------------*/ 118 121 119 122 #include "ehci.h" ··· 531 528 ehci_work (ehci); 532 529 spin_unlock_irq (&ehci->lock); 533 530 ehci_mem_cleanup (ehci); 531 + 532 + if (amd_nb_dev) { 533 + pci_dev_put(amd_nb_dev); 534 + amd_nb_dev = NULL; 535 + } 534 536 535 537 #ifdef EHCI_STATS 536 538 ehci_dbg (ehci, "irq normal %ld err %ld reclaim %ld (lost %ld)\n", ··· 1174 1166 #define PLATFORM_DRIVER ehci_mxc_driver 1175 1167 #endif 1176 1168 1169 + #ifdef CONFIG_CPU_SUBTYPE_SH7786 1170 + #include "ehci-sh.c" 1171 + #define PLATFORM_DRIVER ehci_hcd_sh_driver 1172 + #endif 1173 + 1177 1174 #ifdef CONFIG_SOC_AU1200 1178 1175 #include "ehci-au1xxx.c" 1179 1176 #define PLATFORM_DRIVER ehci_hcd_au1xxx_driver 1180 1177 #endif 1181 1178 1182 - #ifdef CONFIG_ARCH_OMAP3 1179 + #ifdef CONFIG_USB_EHCI_HCD_OMAP 1183 1180 #include "ehci-omap.c" 1184 1181 #define PLATFORM_DRIVER ehci_hcd_omap_driver 1185 1182 #endif ··· 1227 1214 #ifdef CONFIG_USB_OCTEON_EHCI 1228 1215 #include "ehci-octeon.c" 1229 1216 #define PLATFORM_DRIVER ehci_octeon_driver 1217 + #endif 1218 + 1219 + #ifdef CONFIG_ARCH_VT8500 1220 + #include "ehci-vt8500.c" 1221 + #define PLATFORM_DRIVER vt8500_ehci_driver 1222 + #endif 1223 + 1224 + #ifdef CONFIG_PLAT_SPEAR 1225 + #include "ehci-spear.c" 1226 + #define PLATFORM_DRIVER spear_ehci_hcd_driver 1227 + #endif 1228 + 1229 + #ifdef CONFIG_USB_EHCI_MSM 1230 + #include "ehci-msm.c" 1231 + #define PLATFORM_DRIVER ehci_msm_driver 1230 1232 #endif 1231 1233 1232 1234 #if !defined(PCI_DRIVER) && !defined(PLATFORM_DRIVER) && \
+345
drivers/usb/host/ehci-msm.c
··· 1 + /* ehci-msm.c - HSUSB Host Controller Driver Implementation 2 + * 3 + * Copyright (c) 2008-2010, Code Aurora Forum. All rights reserved. 4 + * 5 + * Partly derived from ehci-fsl.c and ehci-hcd.c 6 + * Copyright (c) 2000-2004 by David Brownell 7 + * Copyright (c) 2005 MontaVista Software 8 + * 9 + * All source code in this file is licensed under the following license except 10 + * where indicated. 11 + * 12 + * This program is free software; you can redistribute it and/or modify it 13 + * under the terms of the GNU General Public License version 2 as published 14 + * by the Free Software Foundation. 15 + * 16 + * This program is distributed in the hope that it will be useful, 17 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 18 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 19 + * 20 + * See the GNU General Public License for more details. 21 + * You should have received a copy of the GNU General Public License 22 + * along with this program; if not, you can find it at http://www.fsf.org 23 + */ 24 + 25 + #include <linux/platform_device.h> 26 + #include <linux/clk.h> 27 + #include <linux/err.h> 28 + #include <linux/pm_runtime.h> 29 + 30 + #include <linux/usb/otg.h> 31 + #include <linux/usb/msm_hsusb_hw.h> 32 + 33 + #define MSM_USB_BASE (hcd->regs) 34 + 35 + static struct otg_transceiver *otg; 36 + 37 + /* 38 + * ehci_run, defined in drivers/usb/host/ehci-hcd.c, resets the controller, 39 + * and the configuration settings made in ehci_msm_reset vanish after the 40 + * controller is reset. Resetting the controller in ehci_run is unnecessary 41 + * provided the HCD resets the controller before calling ehci_run. Most HCDs 42 + * do, but some do not. So this function is the same as ehci_run, except that 43 + * we don't reset the controller here.
44 + */ 45 + static int ehci_msm_run(struct usb_hcd *hcd) 46 + { 47 + struct ehci_hcd *ehci = hcd_to_ehci(hcd); 48 + u32 temp; 49 + u32 hcc_params; 50 + 51 + hcd->uses_new_polling = 1; 52 + 53 + ehci_writel(ehci, ehci->periodic_dma, &ehci->regs->frame_list); 54 + ehci_writel(ehci, (u32)ehci->async->qh_dma, &ehci->regs->async_next); 55 + 56 + /* 57 + * hcc_params controls whether ehci->regs->segment must (!!!) 58 + * be used; it constrains QH/ITD/SITD and QTD locations. 59 + * pci_pool consistent memory always uses segment zero. 60 + * streaming mappings for I/O buffers, like pci_map_single(), 61 + * can return segments above 4GB, if the device allows. 62 + * 63 + * NOTE: the dma mask is visible through dma_supported(), so 64 + * drivers can pass this info along ... like NETIF_F_HIGHDMA, 65 + * Scsi_Host.highmem_io, and so forth. It's readonly to all 66 + * host side drivers though. 67 + */ 68 + hcc_params = ehci_readl(ehci, &ehci->caps->hcc_params); 69 + if (HCC_64BIT_ADDR(hcc_params)) 70 + ehci_writel(ehci, 0, &ehci->regs->segment); 71 + 72 + /* 73 + * Philips, Intel, and maybe others need CMD_RUN before the 74 + * root hub will detect new devices (why?); NEC doesn't 75 + */ 76 + ehci->command &= ~(CMD_LRESET|CMD_IAAD|CMD_PSE|CMD_ASE|CMD_RESET); 77 + ehci->command |= CMD_RUN; 78 + ehci_writel(ehci, ehci->command, &ehci->regs->command); 79 + dbg_cmd(ehci, "init", ehci->command); 80 + 81 + /* 82 + * Start, enabling full USB 2.0 functionality ... usb 1.1 devices 83 + * are explicitly handed to companion controller(s), so no TT is 84 + * involved with the root hub. (Except where one is integrated, 85 + * and there's no companion controller unless maybe for USB OTG.) 86 + * 87 + * Turning on the CF flag will transfer ownership of all ports 88 + * from the companions to the EHCI controller. If any of the 89 + * companions are in the middle of a port reset at the time, it 90 + * could cause trouble. 
Write-locking ehci_cf_port_reset_rwsem 91 + * guarantees that no resets are in progress. After we set CF, 92 + * a short delay lets the hardware catch up; new resets shouldn't 93 + * be started before the port switching actions could complete. 94 + */ 95 + down_write(&ehci_cf_port_reset_rwsem); 96 + hcd->state = HC_STATE_RUNNING; 97 + ehci_writel(ehci, FLAG_CF, &ehci->regs->configured_flag); 98 + ehci_readl(ehci, &ehci->regs->command); /* unblock posted writes */ 99 + usleep_range(5000, 5500); 100 + up_write(&ehci_cf_port_reset_rwsem); 101 + ehci->last_periodic_enable = ktime_get_real(); 102 + 103 + temp = HC_VERSION(ehci_readl(ehci, &ehci->caps->hc_capbase)); 104 + ehci_info(ehci, 105 + "USB %x.%x started, EHCI %x.%02x%s\n", 106 + ((ehci->sbrn & 0xf0)>>4), (ehci->sbrn & 0x0f), 107 + temp >> 8, temp & 0xff, 108 + ignore_oc ? ", overcurrent ignored" : ""); 109 + 110 + ehci_writel(ehci, INTR_MASK, 111 + &ehci->regs->intr_enable); /* Turn On Interrupts */ 112 + 113 + /* GRR this is run-once init(), being done every time the HC starts. 114 + * So long as they're part of class devices, we can't do it init() 115 + * since the class device isn't created that early. 116 + */ 117 + create_debug_files(ehci); 118 + create_companion_file(ehci); 119 + 120 + return 0; 121 + } 122 + 123 + static int ehci_msm_reset(struct usb_hcd *hcd) 124 + { 125 + struct ehci_hcd *ehci = hcd_to_ehci(hcd); 126 + int retval; 127 + 128 + ehci->caps = USB_CAPLENGTH; 129 + ehci->regs = USB_CAPLENGTH + 130 + HC_LENGTH(ehci_readl(ehci, &ehci->caps->hc_capbase)); 131 + 132 + /* cache the data to minimize the chip reads*/ 133 + ehci->hcs_params = ehci_readl(ehci, &ehci->caps->hcs_params); 134 + 135 + hcd->has_tt = 1; 136 + ehci->sbrn = HCD_USB2; 137 + 138 + /* data structure init */ 139 + retval = ehci_init(hcd); 140 + if (retval) 141 + return retval; 142 + 143 + retval = ehci_reset(ehci); 144 + if (retval) 145 + return retval; 146 + 147 + /* bursts of unspecified length. 
*/ 148 + writel(0, USB_AHBBURST); 149 + /* Use the AHB transactor */ 150 + writel(0, USB_AHBMODE); 151 + /* Disable streaming mode and select host mode */ 152 + writel(0x13, USB_USBMODE); 153 + 154 + ehci_port_power(ehci, 1); 155 + return 0; 156 + } 157 + 158 + static struct hc_driver msm_hc_driver = { 159 + .description = hcd_name, 160 + .product_desc = "Qualcomm On-Chip EHCI Host Controller", 161 + .hcd_priv_size = sizeof(struct ehci_hcd), 162 + 163 + /* 164 + * generic hardware linkage 165 + */ 166 + .irq = ehci_irq, 167 + .flags = HCD_USB2 | HCD_MEMORY, 168 + 169 + .reset = ehci_msm_reset, 170 + .start = ehci_msm_run, 171 + 172 + .stop = ehci_stop, 173 + .shutdown = ehci_shutdown, 174 + 175 + /* 176 + * managing i/o requests and associated device resources 177 + */ 178 + .urb_enqueue = ehci_urb_enqueue, 179 + .urb_dequeue = ehci_urb_dequeue, 180 + .endpoint_disable = ehci_endpoint_disable, 181 + .endpoint_reset = ehci_endpoint_reset, 182 + .clear_tt_buffer_complete = ehci_clear_tt_buffer_complete, 183 + 184 + /* 185 + * scheduling support 186 + */ 187 + .get_frame_number = ehci_get_frame, 188 + 189 + /* 190 + * root hub support 191 + */ 192 + .hub_status_data = ehci_hub_status_data, 193 + .hub_control = ehci_hub_control, 194 + .relinquish_port = ehci_relinquish_port, 195 + .port_handed_over = ehci_port_handed_over, 196 + 197 + /* 198 + * PM support 199 + */ 200 + .bus_suspend = ehci_bus_suspend, 201 + .bus_resume = ehci_bus_resume, 202 + }; 203 + 204 + static int ehci_msm_probe(struct platform_device *pdev) 205 + { 206 + struct usb_hcd *hcd; 207 + struct resource *res; 208 + int ret; 209 + 210 + dev_dbg(&pdev->dev, "ehci_msm probe\n"); 211 + 212 + hcd = usb_create_hcd(&msm_hc_driver, &pdev->dev, dev_name(&pdev->dev)); 213 + if (!hcd) { 214 + dev_err(&pdev->dev, "Unable to create HCD\n"); 215 + return -ENOMEM; 216 + } 217 + 218 + hcd->irq = platform_get_irq(pdev, 0); 219 + if (hcd->irq < 0) { 220 + dev_err(&pdev->dev, "Unable to get IRQ resource\n"); 221 + ret
= hcd->irq; 222 + goto put_hcd; 223 + } 224 + 225 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 226 + if (!res) { 227 + dev_err(&pdev->dev, "Unable to get memory resource\n"); 228 + ret = -ENODEV; 229 + goto put_hcd; 230 + } 231 + 232 + hcd->rsrc_start = res->start; 233 + hcd->rsrc_len = resource_size(res); 234 + hcd->regs = ioremap(hcd->rsrc_start, hcd->rsrc_len); 235 + if (!hcd->regs) { 236 + dev_err(&pdev->dev, "ioremap failed\n"); 237 + ret = -ENOMEM; 238 + goto put_hcd; 239 + } 240 + 241 + /* 242 + * OTG driver takes care of PHY initialization, clock management, 243 + * powering up VBUS, mapping of registers address space and power 244 + * management. 245 + */ 246 + otg = otg_get_transceiver(); 247 + if (!otg) { 248 + dev_err(&pdev->dev, "unable to find transceiver\n"); 249 + ret = -ENODEV; 250 + goto unmap; 251 + } 252 + 253 + ret = otg_set_host(otg, &hcd->self); 254 + if (ret < 0) { 255 + dev_err(&pdev->dev, "unable to register with transceiver\n"); 256 + goto put_transceiver; 257 + } 258 + 259 + device_init_wakeup(&pdev->dev, 1); 260 + /* 261 + * OTG device parent of HCD takes care of putting 262 + * hardware into low power mode. 
263 + */ 264 + pm_runtime_no_callbacks(&pdev->dev); 265 + pm_runtime_enable(&pdev->dev); 266 + 267 + return 0; 268 + 269 + put_transceiver: 270 + otg_put_transceiver(otg); 271 + unmap: 272 + iounmap(hcd->regs); 273 + put_hcd: 274 + usb_put_hcd(hcd); 275 + 276 + return ret; 277 + } 278 + 279 + static int __devexit ehci_msm_remove(struct platform_device *pdev) 280 + { 281 + struct usb_hcd *hcd = platform_get_drvdata(pdev); 282 + 283 + device_init_wakeup(&pdev->dev, 0); 284 + pm_runtime_disable(&pdev->dev); 285 + pm_runtime_set_suspended(&pdev->dev); 286 + 287 + otg_set_host(otg, NULL); 288 + otg_put_transceiver(otg); 289 + 290 + usb_put_hcd(hcd); 291 + 292 + return 0; 293 + } 294 + 295 + #ifdef CONFIG_PM 296 + static int ehci_msm_pm_suspend(struct device *dev) 297 + { 298 + struct usb_hcd *hcd = dev_get_drvdata(dev); 299 + bool wakeup = device_may_wakeup(dev); 300 + 301 + dev_dbg(dev, "ehci-msm PM suspend\n"); 302 + 303 + /* 304 + * EHCI helper function has also the same check before manipulating 305 + * port wakeup flags. We do check here the same condition before 306 + * calling the same helper function to avoid bringing hardware 307 + * from Low power mode when there is no need for adjusting port 308 + * wakeup flags. 
309 + */ 310 + if (hcd->self.root_hub->do_remote_wakeup && !wakeup) { 311 + pm_runtime_resume(dev); 312 + ehci_prepare_ports_for_controller_suspend(hcd_to_ehci(hcd), 313 + wakeup); 314 + } 315 + 316 + return 0; 317 + } 318 + 319 + static int ehci_msm_pm_resume(struct device *dev) 320 + { 321 + struct usb_hcd *hcd = dev_get_drvdata(dev); 322 + 323 + dev_dbg(dev, "ehci-msm PM resume\n"); 324 + ehci_prepare_ports_for_controller_resume(hcd_to_ehci(hcd)); 325 + 326 + return 0; 327 + } 328 + #else 329 + #define ehci_msm_pm_suspend NULL 330 + #define ehci_msm_pm_resume NULL 331 + #endif 332 + 333 + static const struct dev_pm_ops ehci_msm_dev_pm_ops = { 334 + .suspend = ehci_msm_pm_suspend, 335 + .resume = ehci_msm_pm_resume, 336 + }; 337 + 338 + static struct platform_driver ehci_msm_driver = { 339 + .probe = ehci_msm_probe, 340 + .remove = __devexit_p(ehci_msm_remove), 341 + .driver = { 342 + .name = "msm_hsusb_host", 343 + .pm = &ehci_msm_dev_pm_ops, 344 + }, 345 + };
+3
drivers/usb/host/ehci-mxc.c
··· 100 100 .urb_enqueue = ehci_urb_enqueue, 101 101 .urb_dequeue = ehci_urb_dequeue, 102 102 .endpoint_disable = ehci_endpoint_disable, 103 + .endpoint_reset = ehci_endpoint_reset, 103 104 104 105 /* 105 106 * scheduling support ··· 116 115 .bus_resume = ehci_bus_resume, 117 116 .relinquish_port = ehci_relinquish_port, 118 117 .port_handed_over = ehci_port_handed_over, 118 + 119 + .clear_tt_buffer_complete = ehci_clear_tt_buffer_complete, 119 120 }; 120 121 121 122 static int ehci_mxc_drv_probe(struct platform_device *pdev)
+239 -65
drivers/usb/host/ehci-omap.c
··· 1 1 /* 2 - * ehci-omap.c - driver for USBHOST on OMAP 34xx processor 2 + * ehci-omap.c - driver for USBHOST on OMAP3/4 processors 3 3 * 4 - * Bus Glue for OMAP34xx USBHOST 3 port EHCI controller 5 - * Tested on OMAP3430 ES2.0 SDP 4 + * Bus Glue for the EHCI controllers in OMAP3/4 5 + * Tested on several OMAP3 boards, and OMAP4 Pandaboard 6 6 * 7 - * Copyright (C) 2007-2008 Texas Instruments, Inc. 7 + * Copyright (C) 2007-2010 Texas Instruments, Inc. 8 8 * Author: Vikram Pandita <vikram.pandita@ti.com> 9 + * Author: Anand Gadiyar <gadiyar@ti.com> 9 10 * 10 11 * Copyright (C) 2009 Nokia Corporation 11 12 * Contact: Felipe Balbi <felipe.balbi@nokia.com> ··· 27 26 * along with this program; if not, write to the Free Software 28 27 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 29 28 * 30 - * TODO (last updated Feb 12, 2010): 29 + * TODO (last updated Nov 21, 2010): 31 30 * - add kernel-doc 32 31 * - enable AUTOIDLE 33 32 * - add suspend/resume 34 33 * - move workarounds to board-files 34 + * - factor out code common to OHCI 35 + * - add HSIC and TLL support 36 + * - convert to use hwmod and runtime PM 35 37 */ 36 38 37 39 #include <linux/platform_device.h> ··· 118 114 #define OMAP_UHH_HOSTCONFIG_P2_CONNECT_STATUS (1 << 9) 119 115 #define OMAP_UHH_HOSTCONFIG_P3_CONNECT_STATUS (1 << 10) 120 116 117 + /* OMAP4-specific defines */ 118 + #define OMAP4_UHH_SYSCONFIG_IDLEMODE_CLEAR (3 << 2) 119 + #define OMAP4_UHH_SYSCONFIG_NOIDLE (1 << 2) 120 + 121 + #define OMAP4_UHH_SYSCONFIG_STDBYMODE_CLEAR (3 << 4) 122 + #define OMAP4_UHH_SYSCONFIG_NOSTDBY (1 << 4) 123 + #define OMAP4_UHH_SYSCONFIG_SOFTRESET (1 << 0) 124 + 125 + #define OMAP4_P1_MODE_CLEAR (3 << 16) 126 + #define OMAP4_P1_MODE_TLL (1 << 16) 127 + #define OMAP4_P1_MODE_HSIC (3 << 16) 128 + #define OMAP4_P2_MODE_CLEAR (3 << 18) 129 + #define OMAP4_P2_MODE_TLL (1 << 18) 130 + #define OMAP4_P2_MODE_HSIC (3 << 18) 131 + 132 + #define OMAP_REV2_TLL_CHANNEL_COUNT 2 133 + 121 134 #define 
OMAP_UHH_DEBUG_CSR (0x44) 122 135 123 136 /* EHCI Register Set */ ··· 147 126 #define EHCI_INSNREG05_ULPI_REGADD_SHIFT 16 148 127 #define EHCI_INSNREG05_ULPI_EXTREGADD_SHIFT 8 149 128 #define EHCI_INSNREG05_ULPI_WRDATA_SHIFT 0 129 + 130 + /* Values of UHH_REVISION - Note: these are not given in the TRM */ 131 + #define OMAP_EHCI_REV1 0x00000010 /* OMAP3 */ 132 + #define OMAP_EHCI_REV2 0x50700100 /* OMAP4 */ 133 + 134 + #define is_omap_ehci_rev1(x) (x->omap_ehci_rev == OMAP_EHCI_REV1) 135 + #define is_omap_ehci_rev2(x) (x->omap_ehci_rev == OMAP_EHCI_REV2) 136 + 137 + #define is_ehci_phy_mode(x) (x == EHCI_HCD_OMAP_MODE_PHY) 138 + #define is_ehci_tll_mode(x) (x == EHCI_HCD_OMAP_MODE_TLL) 139 + #define is_ehci_hsic_mode(x) (x == EHCI_HCD_OMAP_MODE_HSIC) 150 140 151 141 /*-------------------------------------------------------------------------*/ 152 142 ··· 188 156 struct device *dev; 189 157 190 158 struct clk *usbhost_ick; 191 - struct clk *usbhost2_120m_fck; 192 - struct clk *usbhost1_48m_fck; 159 + struct clk *usbhost_hs_fck; 160 + struct clk *usbhost_fs_fck; 193 161 struct clk *usbtll_fck; 194 162 struct clk *usbtll_ick; 163 + struct clk *xclk60mhsp1_ck; 164 + struct clk *xclk60mhsp2_ck; 165 + struct clk *utmi_p1_fck; 166 + struct clk *utmi_p2_fck; 195 167 196 168 /* FIXME the following two workarounds are 197 169 * board specific not silicon-specific so these ··· 212 176 /* phy reset workaround */ 213 177 int phy_reset; 214 178 179 + /* IP revision */ 180 + u32 omap_ehci_rev; 181 + 215 182 /* desired phy_mode: TLL, PHY */ 216 183 enum ehci_hcd_omap_mode port_mode[OMAP3_HS_USB_PORTS]; 217 184 ··· 230 191 231 192 /*-------------------------------------------------------------------------*/ 232 193 233 - static void omap_usb_utmi_init(struct ehci_hcd_omap *omap, u8 tll_channel_mask) 194 + static void omap_usb_utmi_init(struct ehci_hcd_omap *omap, u8 tll_channel_mask, 195 + u8 tll_channel_count) 234 196 { 235 197 unsigned reg; 236 198 int i; 237 199 238 200 /* 
Program the 3 TLL channels upfront */ 239 - for (i = 0; i < OMAP_TLL_CHANNEL_COUNT; i++) { 201 + for (i = 0; i < tll_channel_count; i++) { 240 202 reg = ehci_omap_readl(omap->tll_base, OMAP_TLL_CHANNEL_CONF(i)); 241 203 242 204 /* Disable AutoIdle, BitStuffing and use SDR Mode */ ··· 257 217 ehci_omap_writel(omap->tll_base, OMAP_TLL_SHARED_CONF, reg); 258 218 259 219 /* Enable channels now */ 260 - for (i = 0; i < OMAP_TLL_CHANNEL_COUNT; i++) { 220 + for (i = 0; i < tll_channel_count; i++) { 261 221 reg = ehci_omap_readl(omap->tll_base, OMAP_TLL_CHANNEL_CONF(i)); 262 222 263 223 /* Enable only the reg that is needed */ ··· 326 286 } 327 287 clk_enable(omap->usbhost_ick); 328 288 329 - omap->usbhost2_120m_fck = clk_get(omap->dev, "usbhost_120m_fck"); 330 - if (IS_ERR(omap->usbhost2_120m_fck)) { 331 - ret = PTR_ERR(omap->usbhost2_120m_fck); 289 + omap->usbhost_hs_fck = clk_get(omap->dev, "hs_fck"); 290 + if (IS_ERR(omap->usbhost_hs_fck)) { 291 + ret = PTR_ERR(omap->usbhost_hs_fck); 332 292 goto err_host_120m_fck; 333 293 } 334 - clk_enable(omap->usbhost2_120m_fck); 294 + clk_enable(omap->usbhost_hs_fck); 335 295 336 - omap->usbhost1_48m_fck = clk_get(omap->dev, "usbhost_48m_fck"); 337 - if (IS_ERR(omap->usbhost1_48m_fck)) { 338 - ret = PTR_ERR(omap->usbhost1_48m_fck); 296 + omap->usbhost_fs_fck = clk_get(omap->dev, "fs_fck"); 297 + if (IS_ERR(omap->usbhost_fs_fck)) { 298 + ret = PTR_ERR(omap->usbhost_fs_fck); 339 299 goto err_host_48m_fck; 340 300 } 341 - clk_enable(omap->usbhost1_48m_fck); 301 + clk_enable(omap->usbhost_fs_fck); 342 302 343 303 if (omap->phy_reset) { 344 304 /* Refer: ISSUE1 */ ··· 373 333 } 374 334 clk_enable(omap->usbtll_ick); 375 335 336 + omap->omap_ehci_rev = ehci_omap_readl(omap->uhh_base, 337 + OMAP_UHH_REVISION); 338 + dev_dbg(omap->dev, "OMAP UHH_REVISION 0x%x\n", 339 + omap->omap_ehci_rev); 340 + 341 + /* 342 + * Enable per-port clocks as needed (newer controllers only). 
343 + * - External ULPI clock for PHY mode 344 + * - Internal clocks for TLL and HSIC modes (TODO) 345 + */ 346 + if (is_omap_ehci_rev2(omap)) { 347 + switch (omap->port_mode[0]) { 348 + case EHCI_HCD_OMAP_MODE_PHY: 349 + omap->xclk60mhsp1_ck = clk_get(omap->dev, 350 + "xclk60mhsp1_ck"); 351 + if (IS_ERR(omap->xclk60mhsp1_ck)) { 352 + ret = PTR_ERR(omap->xclk60mhsp1_ck); 353 + dev_err(omap->dev, 354 + "Unable to get Port1 ULPI clock\n"); 355 + } 356 + 357 + omap->utmi_p1_fck = clk_get(omap->dev, 358 + "utmi_p1_gfclk"); 359 + if (IS_ERR(omap->utmi_p1_fck)) { 360 + ret = PTR_ERR(omap->utmi_p1_fck); 361 + dev_err(omap->dev, 362 + "Unable to get utmi_p1_fck\n"); 363 + } 364 + 365 + ret = clk_set_parent(omap->utmi_p1_fck, 366 + omap->xclk60mhsp1_ck); 367 + if (ret != 0) { 368 + dev_err(omap->dev, 369 + "Unable to set P1 f-clock\n"); 370 + } 371 + break; 372 + case EHCI_HCD_OMAP_MODE_TLL: 373 + /* TODO */ 374 + default: 375 + break; 376 + } 377 + switch (omap->port_mode[1]) { 378 + case EHCI_HCD_OMAP_MODE_PHY: 379 + omap->xclk60mhsp2_ck = clk_get(omap->dev, 380 + "xclk60mhsp2_ck"); 381 + if (IS_ERR(omap->xclk60mhsp2_ck)) { 382 + ret = PTR_ERR(omap->xclk60mhsp2_ck); 383 + dev_err(omap->dev, 384 + "Unable to get Port2 ULPI clock\n"); 385 + } 386 + 387 + omap->utmi_p2_fck = clk_get(omap->dev, 388 + "utmi_p2_gfclk"); 389 + if (IS_ERR(omap->utmi_p2_fck)) { 390 + ret = PTR_ERR(omap->utmi_p2_fck); 391 + dev_err(omap->dev, 392 + "Unable to get utmi_p2_fck\n"); 393 + } 394 + 395 + ret = clk_set_parent(omap->utmi_p2_fck, 396 + omap->xclk60mhsp2_ck); 397 + if (ret != 0) { 398 + dev_err(omap->dev, 399 + "Unable to set P2 f-clock\n"); 400 + } 401 + break; 402 + case EHCI_HCD_OMAP_MODE_TLL: 403 + /* TODO */ 404 + default: 405 + break; 406 + } 407 + } 408 + 409 + 376 410 /* perform TLL soft reset, and wait until reset is complete */ 377 411 ehci_omap_writel(omap->tll_base, OMAP_USBTLL_SYSCONFIG, 378 412 OMAP_USBTLL_SYSCONFIG_SOFTRESET); ··· 474 360 475 361 /* Put UHH in 
NoIdle/NoStandby mode */ 476 362 reg = ehci_omap_readl(omap->uhh_base, OMAP_UHH_SYSCONFIG); 477 - reg |= (OMAP_UHH_SYSCONFIG_ENAWAKEUP 478 - | OMAP_UHH_SYSCONFIG_SIDLEMODE 479 - | OMAP_UHH_SYSCONFIG_CACTIVITY 480 - | OMAP_UHH_SYSCONFIG_MIDLEMODE); 481 - reg &= ~OMAP_UHH_SYSCONFIG_AUTOIDLE; 363 + if (is_omap_ehci_rev1(omap)) { 364 + reg |= (OMAP_UHH_SYSCONFIG_ENAWAKEUP 365 + | OMAP_UHH_SYSCONFIG_SIDLEMODE 366 + | OMAP_UHH_SYSCONFIG_CACTIVITY 367 + | OMAP_UHH_SYSCONFIG_MIDLEMODE); 368 + reg &= ~OMAP_UHH_SYSCONFIG_AUTOIDLE; 482 369 370 + 371 + } else if (is_omap_ehci_rev2(omap)) { 372 + reg &= ~OMAP4_UHH_SYSCONFIG_IDLEMODE_CLEAR; 373 + reg |= OMAP4_UHH_SYSCONFIG_NOIDLE; 374 + reg &= ~OMAP4_UHH_SYSCONFIG_STDBYMODE_CLEAR; 375 + reg |= OMAP4_UHH_SYSCONFIG_NOSTDBY; 376 + } 483 377 ehci_omap_writel(omap->uhh_base, OMAP_UHH_SYSCONFIG, reg); 484 378 485 379 reg = ehci_omap_readl(omap->uhh_base, OMAP_UHH_HOSTCONFIG); ··· 498 376 | OMAP_UHH_HOSTCONFIG_INCR16_BURST_EN); 499 377 reg &= ~OMAP_UHH_HOSTCONFIG_INCRX_ALIGN_EN; 500 378 501 - if (omap->port_mode[0] == EHCI_HCD_OMAP_MODE_UNKNOWN) 502 - reg &= ~OMAP_UHH_HOSTCONFIG_P1_CONNECT_STATUS; 503 - if (omap->port_mode[1] == EHCI_HCD_OMAP_MODE_UNKNOWN) 504 - reg &= ~OMAP_UHH_HOSTCONFIG_P2_CONNECT_STATUS; 505 - if (omap->port_mode[2] == EHCI_HCD_OMAP_MODE_UNKNOWN) 506 - reg &= ~OMAP_UHH_HOSTCONFIG_P3_CONNECT_STATUS; 379 + if (is_omap_ehci_rev1(omap)) { 380 + if (omap->port_mode[0] == EHCI_HCD_OMAP_MODE_UNKNOWN) 381 + reg &= ~OMAP_UHH_HOSTCONFIG_P1_CONNECT_STATUS; 382 + if (omap->port_mode[1] == EHCI_HCD_OMAP_MODE_UNKNOWN) 383 + reg &= ~OMAP_UHH_HOSTCONFIG_P2_CONNECT_STATUS; 384 + if (omap->port_mode[2] == EHCI_HCD_OMAP_MODE_UNKNOWN) 385 + reg &= ~OMAP_UHH_HOSTCONFIG_P3_CONNECT_STATUS; 507 386 508 - /* Bypass the TLL module for PHY mode operation */ 509 - if (cpu_is_omap3430() && (omap_rev() <= OMAP3430_REV_ES2_1)) { 510 - dev_dbg(omap->dev, "OMAP3 ES version <= ES2.1\n"); 511 - if ((omap->port_mode[0] == EHCI_HCD_OMAP_MODE_PHY) || 
512 - (omap->port_mode[1] == EHCI_HCD_OMAP_MODE_PHY) || 513 - (omap->port_mode[2] == EHCI_HCD_OMAP_MODE_PHY)) 514 - reg &= ~OMAP_UHH_HOSTCONFIG_ULPI_BYPASS; 515 - else 516 - reg |= OMAP_UHH_HOSTCONFIG_ULPI_BYPASS; 517 - } else { 518 - dev_dbg(omap->dev, "OMAP3 ES version > ES2.1\n"); 519 - if (omap->port_mode[0] == EHCI_HCD_OMAP_MODE_PHY) 520 - reg &= ~OMAP_UHH_HOSTCONFIG_ULPI_P1_BYPASS; 521 - else if (omap->port_mode[0] == EHCI_HCD_OMAP_MODE_TLL) 522 - reg |= OMAP_UHH_HOSTCONFIG_ULPI_P1_BYPASS; 387 + /* Bypass the TLL module for PHY mode operation */ 388 + if (cpu_is_omap3430() && (omap_rev() <= OMAP3430_REV_ES2_1)) { 389 + dev_dbg(omap->dev, "OMAP3 ES version <= ES2.1\n"); 390 + if (is_ehci_phy_mode(omap->port_mode[0]) || 391 + is_ehci_phy_mode(omap->port_mode[1]) || 392 + is_ehci_phy_mode(omap->port_mode[2])) 393 + reg &= ~OMAP_UHH_HOSTCONFIG_ULPI_BYPASS; 394 + else 395 + reg |= OMAP_UHH_HOSTCONFIG_ULPI_BYPASS; 396 + } else { 397 + dev_dbg(omap->dev, "OMAP3 ES version > ES2.1\n"); 398 + if (is_ehci_phy_mode(omap->port_mode[0])) 399 + reg &= ~OMAP_UHH_HOSTCONFIG_ULPI_P1_BYPASS; 400 + else if (is_ehci_tll_mode(omap->port_mode[0])) 401 + reg |= OMAP_UHH_HOSTCONFIG_ULPI_P1_BYPASS; 523 402 524 - if (omap->port_mode[1] == EHCI_HCD_OMAP_MODE_PHY) 525 - reg &= ~OMAP_UHH_HOSTCONFIG_ULPI_P2_BYPASS; 526 - else if (omap->port_mode[1] == EHCI_HCD_OMAP_MODE_TLL) 527 - reg |= OMAP_UHH_HOSTCONFIG_ULPI_P2_BYPASS; 403 + if (is_ehci_phy_mode(omap->port_mode[1])) 404 + reg &= ~OMAP_UHH_HOSTCONFIG_ULPI_P2_BYPASS; 405 + else if (is_ehci_tll_mode(omap->port_mode[1])) 406 + reg |= OMAP_UHH_HOSTCONFIG_ULPI_P2_BYPASS; 528 407 529 - if (omap->port_mode[2] == EHCI_HCD_OMAP_MODE_PHY) 530 - reg &= ~OMAP_UHH_HOSTCONFIG_ULPI_P3_BYPASS; 531 - else if (omap->port_mode[2] == EHCI_HCD_OMAP_MODE_TLL) 532 - reg |= OMAP_UHH_HOSTCONFIG_ULPI_P3_BYPASS; 408 + if (is_ehci_phy_mode(omap->port_mode[2])) 409 + reg &= ~OMAP_UHH_HOSTCONFIG_ULPI_P3_BYPASS; 410 + else if (is_ehci_tll_mode(omap->port_mode[2])) 
411 + reg |= OMAP_UHH_HOSTCONFIG_ULPI_P3_BYPASS; 412 + } 413 + } else if (is_omap_ehci_rev2(omap)) { 414 + /* Clear port mode fields for PHY mode*/ 415 + reg &= ~OMAP4_P1_MODE_CLEAR; 416 + reg &= ~OMAP4_P2_MODE_CLEAR; 533 417 418 + if (is_ehci_tll_mode(omap->port_mode[0])) 419 + reg |= OMAP4_P1_MODE_TLL; 420 + else if (is_ehci_hsic_mode(omap->port_mode[0])) 421 + reg |= OMAP4_P1_MODE_HSIC; 422 + 423 + if (is_ehci_tll_mode(omap->port_mode[1])) 424 + reg |= OMAP4_P2_MODE_TLL; 425 + else if (is_ehci_hsic_mode(omap->port_mode[1])) 426 + reg |= OMAP4_P2_MODE_HSIC; 534 427 } 428 + 535 429 ehci_omap_writel(omap->uhh_base, OMAP_UHH_HOSTCONFIG, reg); 536 430 dev_dbg(omap->dev, "UHH setup done, uhh_hostconfig=%x\n", reg); 537 431 ··· 576 438 tll_ch_mask |= OMAP_TLL_CHANNEL_3_EN_MASK; 577 439 578 440 /* Enable UTMI mode for required TLL channels */ 579 - omap_usb_utmi_init(omap, tll_ch_mask); 441 + omap_usb_utmi_init(omap, tll_ch_mask, OMAP_TLL_CHANNEL_COUNT); 580 442 } 581 443 582 444 if (omap->phy_reset) { ··· 602 464 return 0; 603 465 604 466 err_sys_status: 467 + clk_disable(omap->utmi_p2_fck); 468 + clk_put(omap->utmi_p2_fck); 469 + clk_disable(omap->xclk60mhsp2_ck); 470 + clk_put(omap->xclk60mhsp2_ck); 471 + clk_disable(omap->utmi_p1_fck); 472 + clk_put(omap->utmi_p1_fck); 473 + clk_disable(omap->xclk60mhsp1_ck); 474 + clk_put(omap->xclk60mhsp1_ck); 605 475 clk_disable(omap->usbtll_ick); 606 476 clk_put(omap->usbtll_ick); 607 477 ··· 618 472 clk_put(omap->usbtll_fck); 619 473 620 474 err_tll_fck: 621 - clk_disable(omap->usbhost1_48m_fck); 622 - clk_put(omap->usbhost1_48m_fck); 475 + clk_disable(omap->usbhost_fs_fck); 476 + clk_put(omap->usbhost_fs_fck); 623 477 624 478 if (omap->phy_reset) { 625 479 if (gpio_is_valid(omap->reset_gpio_port[0])) ··· 630 484 } 631 485 632 486 err_host_48m_fck: 633 - clk_disable(omap->usbhost2_120m_fck); 634 - clk_put(omap->usbhost2_120m_fck); 487 + clk_disable(omap->usbhost_hs_fck); 488 + clk_put(omap->usbhost_hs_fck); 635 489 636 490 
err_host_120m_fck: 637 491 clk_disable(omap->usbhost_ick); ··· 649 503 650 504 /* Reset OMAP modules for insmod/rmmod to work */ 651 505 ehci_omap_writel(omap->uhh_base, OMAP_UHH_SYSCONFIG, 506 + is_omap_ehci_rev2(omap) ? 507 + OMAP4_UHH_SYSCONFIG_SOFTRESET : 652 508 OMAP_UHH_SYSCONFIG_SOFTRESET); 653 509 while (!(ehci_omap_readl(omap->uhh_base, OMAP_UHH_SYSSTATUS) 654 510 & (1 << 0))) { ··· 698 550 omap->usbhost_ick = NULL; 699 551 } 700 552 701 - if (omap->usbhost1_48m_fck != NULL) { 702 - clk_disable(omap->usbhost1_48m_fck); 703 - clk_put(omap->usbhost1_48m_fck); 704 - omap->usbhost1_48m_fck = NULL; 553 + if (omap->usbhost_fs_fck != NULL) { 554 + clk_disable(omap->usbhost_fs_fck); 555 + clk_put(omap->usbhost_fs_fck); 556 + omap->usbhost_fs_fck = NULL; 705 557 } 706 558 707 - if (omap->usbhost2_120m_fck != NULL) { 708 - clk_disable(omap->usbhost2_120m_fck); 709 - clk_put(omap->usbhost2_120m_fck); 710 - omap->usbhost2_120m_fck = NULL; 559 + if (omap->usbhost_hs_fck != NULL) { 560 + clk_disable(omap->usbhost_hs_fck); 561 + clk_put(omap->usbhost_hs_fck); 562 + omap->usbhost_hs_fck = NULL; 711 563 } 712 564 713 565 if (omap->usbtll_ick != NULL) { 714 566 clk_disable(omap->usbtll_ick); 715 567 clk_put(omap->usbtll_ick); 716 568 omap->usbtll_ick = NULL; 569 + } 570 + 571 + if (is_omap_ehci_rev2(omap)) { 572 + if (omap->xclk60mhsp1_ck != NULL) { 573 + clk_disable(omap->xclk60mhsp1_ck); 574 + clk_put(omap->xclk60mhsp1_ck); 575 + omap->xclk60mhsp1_ck = NULL; 576 + } 577 + 578 + if (omap->utmi_p1_fck != NULL) { 579 + clk_disable(omap->utmi_p1_fck); 580 + clk_put(omap->utmi_p1_fck); 581 + omap->utmi_p1_fck = NULL; 582 + } 583 + 584 + if (omap->xclk60mhsp2_ck != NULL) { 585 + clk_disable(omap->xclk60mhsp2_ck); 586 + clk_put(omap->xclk60mhsp2_ck); 587 + omap->xclk60mhsp2_ck = NULL; 588 + } 589 + 590 + if (omap->utmi_p2_fck != NULL) { 591 + clk_disable(omap->utmi_p2_fck); 592 + clk_put(omap->utmi_p2_fck); 593 + omap->utmi_p2_fck = NULL; 594 + } 717 595 } 718 596 719 597 if 
(omap->phy_reset) {
+39
drivers/usb/host/ehci-pci.c
··· 22 22 #error "This file is PCI bus glue. CONFIG_PCI must be defined." 23 23 #endif 24 24 25 + /* defined here to avoid adding to pci_ids.h for single instance use */ 26 + #define PCI_DEVICE_ID_INTEL_CE4100_USB 0x2e70 27 + 25 28 /*-------------------------------------------------------------------------*/ 26 29 27 30 /* called after powerup, by probe or system-pm "wakeup" */ ··· 42 39 ehci_dbg(ehci, "MWI active\n"); 43 40 44 41 return 0; 42 + } 43 + 44 + static int ehci_quirk_amd_SB800(struct ehci_hcd *ehci) 45 + { 46 + struct pci_dev *amd_smbus_dev; 47 + u8 rev = 0; 48 + 49 + amd_smbus_dev = pci_get_device(PCI_VENDOR_ID_ATI, 0x4385, NULL); 50 + if (!amd_smbus_dev) 51 + return 0; 52 + 53 + pci_read_config_byte(amd_smbus_dev, PCI_REVISION_ID, &rev); 54 + if (rev < 0x40) { 55 + pci_dev_put(amd_smbus_dev); 56 + amd_smbus_dev = NULL; 57 + return 0; 58 + } 59 + 60 + if (!amd_nb_dev) 61 + amd_nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x1510, NULL); 62 + if (!amd_nb_dev) 63 + ehci_err(ehci, "QUIRK: unable to get AMD NB device\n"); 64 + 65 + ehci_info(ehci, "QUIRK: Enable AMD SB800 L1 fix\n"); 66 + 67 + pci_dev_put(amd_smbus_dev); 68 + amd_smbus_dev = NULL; 69 + 70 + return 1; 45 71 } 46 72 47 73 /* called during probe() after chip reset completes */ ··· 131 99 /* cache this readonly data; minimize chip reads */ 132 100 ehci->hcs_params = ehci_readl(ehci, &ehci->caps->hcs_params); 133 101 102 + if (ehci_quirk_amd_SB800(ehci)) 103 + ehci->amd_l1_fix = 1; 104 + 134 105 retval = ehci_halt(ehci); 135 106 if (retval) 136 107 return retval; ··· 171 136 || pdev->device == 0x0829) { 172 137 ehci_info(ehci, "disable lpm for langwell/penwell\n"); 173 138 ehci->has_lpm = 0; 139 + } 140 + if (pdev->device == PCI_DEVICE_ID_INTEL_CE4100_USB) { 141 + hcd->has_tt = 1; 142 + tdi_reset(ehci); 174 143 } 175 144 break; 176 145 case PCI_VENDOR_ID_TDI:
+79
drivers/usb/host/ehci-sched.c
··· 1590 1590 *hw_p = cpu_to_hc32(ehci, itd->itd_dma | Q_TYPE_ITD); 1591 1591 } 1592 1592 1593 + #define AB_REG_BAR_LOW 0xe0 1594 + #define AB_REG_BAR_HIGH 0xe1 1595 + #define AB_INDX(addr) ((addr) + 0x00) 1596 + #define AB_DATA(addr) ((addr) + 0x04) 1597 + #define NB_PCIE_INDX_ADDR 0xe0 1598 + #define NB_PCIE_INDX_DATA 0xe4 1599 + #define NB_PIF0_PWRDOWN_0 0x01100012 1600 + #define NB_PIF0_PWRDOWN_1 0x01100013 1601 + 1602 + static void ehci_quirk_amd_L1(struct ehci_hcd *ehci, int disable) 1603 + { 1604 + u32 addr, addr_low, addr_high, val; 1605 + 1606 + outb_p(AB_REG_BAR_LOW, 0xcd6); 1607 + addr_low = inb_p(0xcd7); 1608 + outb_p(AB_REG_BAR_HIGH, 0xcd6); 1609 + addr_high = inb_p(0xcd7); 1610 + addr = addr_high << 8 | addr_low; 1611 + outl_p(0x30, AB_INDX(addr)); 1612 + outl_p(0x40, AB_DATA(addr)); 1613 + outl_p(0x34, AB_INDX(addr)); 1614 + val = inl_p(AB_DATA(addr)); 1615 + 1616 + if (disable) { 1617 + val &= ~0x8; 1618 + val |= (1 << 4) | (1 << 9); 1619 + } else { 1620 + val |= 0x8; 1621 + val &= ~((1 << 4) | (1 << 9)); 1622 + } 1623 + outl_p(val, AB_DATA(addr)); 1624 + 1625 + if (amd_nb_dev) { 1626 + addr = NB_PIF0_PWRDOWN_0; 1627 + pci_write_config_dword(amd_nb_dev, NB_PCIE_INDX_ADDR, addr); 1628 + pci_read_config_dword(amd_nb_dev, NB_PCIE_INDX_DATA, &val); 1629 + if (disable) 1630 + val &= ~(0x3f << 7); 1631 + else 1632 + val |= 0x3f << 7; 1633 + 1634 + pci_write_config_dword(amd_nb_dev, NB_PCIE_INDX_DATA, val); 1635 + 1636 + addr = NB_PIF0_PWRDOWN_1; 1637 + pci_write_config_dword(amd_nb_dev, NB_PCIE_INDX_ADDR, addr); 1638 + pci_read_config_dword(amd_nb_dev, NB_PCIE_INDX_DATA, &val); 1639 + if (disable) 1640 + val &= ~(0x3f << 7); 1641 + else 1642 + val |= 0x3f << 7; 1643 + 1644 + pci_write_config_dword(amd_nb_dev, NB_PCIE_INDX_DATA, val); 1645 + } 1646 + 1647 + return; 1648 + } 1649 + 1593 1650 /* fit urb's itds into the selected schedule slot; activate as needed */ 1594 1651 static int 1595 1652 itd_link_urb ( ··· 1673 1616 urb->interval, 1674 1617 
next_uframe >> 3, next_uframe & 0x7); 1675 1618 } 1619 + 1620 + if (ehci_to_hcd(ehci)->self.bandwidth_isoc_reqs == 0) { 1621 + if (ehci->amd_l1_fix == 1) 1622 + ehci_quirk_amd_L1(ehci, 1); 1623 + } 1624 + 1676 1625 ehci_to_hcd(ehci)->self.bandwidth_isoc_reqs++; 1677 1626 1678 1627 /* fill iTDs uframe by uframe */ ··· 1802 1739 urb = NULL; 1803 1740 (void) disable_periodic(ehci); 1804 1741 ehci_to_hcd(ehci)->self.bandwidth_isoc_reqs--; 1742 + 1743 + if (ehci_to_hcd(ehci)->self.bandwidth_isoc_reqs == 0) { 1744 + if (ehci->amd_l1_fix == 1) 1745 + ehci_quirk_amd_L1(ehci, 0); 1746 + } 1805 1747 1806 1748 if (unlikely(list_is_singular(&stream->td_list))) { 1807 1749 ehci_to_hcd(ehci)->self.bandwidth_allocated ··· 2093 2025 (next_uframe >> 3) & (ehci->periodic_size - 1), 2094 2026 stream->interval, hc32_to_cpu(ehci, stream->splits)); 2095 2027 } 2028 + 2029 + if (ehci_to_hcd(ehci)->self.bandwidth_isoc_reqs == 0) { 2030 + if (ehci->amd_l1_fix == 1) 2031 + ehci_quirk_amd_L1(ehci, 1); 2032 + } 2033 + 2096 2034 ehci_to_hcd(ehci)->self.bandwidth_isoc_reqs++; 2097 2035 2098 2036 /* fill sITDs frame by frame */ ··· 2198 2124 urb = NULL; 2199 2125 (void) disable_periodic(ehci); 2200 2126 ehci_to_hcd(ehci)->self.bandwidth_isoc_reqs--; 2127 + 2128 + if (ehci_to_hcd(ehci)->self.bandwidth_isoc_reqs == 0) { 2129 + if (ehci->amd_l1_fix == 1) 2130 + ehci_quirk_amd_L1(ehci, 0); 2131 + } 2201 2132 2202 2133 if (list_is_singular(&stream->td_list)) { 2203 2134 ehci_to_hcd(ehci)->self.bandwidth_allocated
+243
drivers/usb/host/ehci-sh.c
··· 1 + /* 2 + * SuperH EHCI host controller driver 3 + * 4 + * Copyright (C) 2010 Paul Mundt 5 + * 6 + * Based on ohci-sh.c and ehci-atmel.c. 7 + * 8 + * This file is subject to the terms and conditions of the GNU General Public 9 + * License. See the file "COPYING" in the main directory of this archive 10 + * for more details. 11 + */ 12 + #include <linux/platform_device.h> 13 + #include <linux/clk.h> 14 + 15 + struct ehci_sh_priv { 16 + struct clk *iclk, *fclk; 17 + struct usb_hcd *hcd; 18 + }; 19 + 20 + static int ehci_sh_reset(struct usb_hcd *hcd) 21 + { 22 + struct ehci_hcd *ehci = hcd_to_ehci(hcd); 23 + int ret; 24 + 25 + ehci->caps = hcd->regs; 26 + ehci->regs = hcd->regs + HC_LENGTH(ehci_readl(ehci, 27 + &ehci->caps->hc_capbase)); 28 + 29 + dbg_hcs_params(ehci, "reset"); 30 + dbg_hcc_params(ehci, "reset"); 31 + 32 + ehci->hcs_params = ehci_readl(ehci, &ehci->caps->hcs_params); 33 + 34 + ret = ehci_halt(ehci); 35 + if (unlikely(ret)) 36 + return ret; 37 + 38 + ret = ehci_init(hcd); 39 + if (unlikely(ret)) 40 + return ret; 41 + 42 + ehci->sbrn = 0x20; 43 + 44 + ehci_reset(ehci); 45 + ehci_port_power(ehci, 0); 46 + 47 + return ret; 48 + } 49 + 50 + static const struct hc_driver ehci_sh_hc_driver = { 51 + .description = hcd_name, 52 + .product_desc = "SuperH EHCI", 53 + .hcd_priv_size = sizeof(struct ehci_hcd), 54 + 55 + /* 56 + * generic hardware linkage 57 + */ 58 + .irq = ehci_irq, 59 + .flags = HCD_USB2 | HCD_MEMORY, 60 + 61 + /* 62 + * basic lifecycle operations 63 + */ 64 + .reset = ehci_sh_reset, 65 + .start = ehci_run, 66 + .stop = ehci_stop, 67 + .shutdown = ehci_shutdown, 68 + 69 + /* 70 + * managing i/o requests and associated device resources 71 + */ 72 + .urb_enqueue = ehci_urb_enqueue, 73 + .urb_dequeue = ehci_urb_dequeue, 74 + .endpoint_disable = ehci_endpoint_disable, 75 + .endpoint_reset = ehci_endpoint_reset, 76 + 77 + /* 78 + * scheduling support 79 + */ 80 + .get_frame_number = ehci_get_frame, 81 + 82 + /* 83 + * root hub support 84 + */ 85 
+ .hub_status_data = ehci_hub_status_data, 86 + .hub_control = ehci_hub_control, 87 + 88 + #ifdef CONFIG_PM 89 + .bus_suspend = ehci_bus_suspend, 90 + .bus_resume = ehci_bus_resume, 91 + #endif 92 + 93 + .relinquish_port = ehci_relinquish_port, 94 + .port_handed_over = ehci_port_handed_over, 95 + .clear_tt_buffer_complete = ehci_clear_tt_buffer_complete, 96 + }; 97 + 98 + static int ehci_hcd_sh_probe(struct platform_device *pdev) 99 + { 100 + const struct hc_driver *driver = &ehci_sh_hc_driver; 101 + struct resource *res; 102 + struct ehci_sh_priv *priv; 103 + struct usb_hcd *hcd; 104 + int irq, ret; 105 + 106 + if (usb_disabled()) 107 + return -ENODEV; 108 + 109 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 110 + if (!res) { 111 + dev_err(&pdev->dev, 112 + "Found HC with no register addr. Check %s setup!\n", 113 + dev_name(&pdev->dev)); 114 + ret = -ENODEV; 115 + goto fail_create_hcd; 116 + } 117 + 118 + irq = platform_get_irq(pdev, 0); 119 + if (irq <= 0) { 120 + dev_err(&pdev->dev, 121 + "Found HC with no IRQ. 
Check %s setup!\n", 122 + dev_name(&pdev->dev)); 123 + ret = -ENODEV; 124 + goto fail_create_hcd; 125 + } 126 + 127 + /* initialize hcd */ 128 + hcd = usb_create_hcd(&ehci_sh_hc_driver, &pdev->dev, 129 + dev_name(&pdev->dev)); 130 + if (!hcd) { 131 + ret = -ENOMEM; 132 + goto fail_create_hcd; 133 + } 134 + 135 + hcd->rsrc_start = res->start; 136 + hcd->rsrc_len = resource_size(res); 137 + 138 + if (!request_mem_region(hcd->rsrc_start, hcd->rsrc_len, 139 + driver->description)) { 140 + dev_dbg(&pdev->dev, "controller already in use\n"); 141 + ret = -EBUSY; 142 + goto fail_request_resource; 143 + } 144 + 145 + hcd->regs = ioremap_nocache(hcd->rsrc_start, hcd->rsrc_len); 146 + if (hcd->regs == NULL) { 147 + dev_dbg(&pdev->dev, "error mapping memory\n"); 148 + ret = -ENXIO; 149 + goto fail_ioremap; 150 + } 151 + 152 + priv = kmalloc(sizeof(struct ehci_sh_priv), GFP_KERNEL); 153 + if (!priv) { 154 + dev_dbg(&pdev->dev, "error allocating priv data\n"); 155 + ret = -ENOMEM; 156 + goto fail_alloc; 157 + } 158 + 159 + /* These are optional, we don't care if they fail */ 160 + priv->fclk = clk_get(&pdev->dev, "usb_fck"); 161 + if (IS_ERR(priv->fclk)) 162 + priv->fclk = NULL; 163 + 164 + priv->iclk = clk_get(&pdev->dev, "usb_ick"); 165 + if (IS_ERR(priv->iclk)) 166 + priv->iclk = NULL; 167 + 168 + clk_enable(priv->fclk); 169 + clk_enable(priv->iclk); 170 + 171 + ret = usb_add_hcd(hcd, irq, IRQF_DISABLED | IRQF_SHARED); 172 + if (ret != 0) { 173 + dev_err(&pdev->dev, "Failed to add hcd"); 174 + goto fail_add_hcd; 175 + } 176 + 177 + priv->hcd = hcd; 178 + platform_set_drvdata(pdev, priv); 179 + 180 + return ret; 181 + 182 + fail_add_hcd: 183 + clk_disable(priv->iclk); 184 + clk_disable(priv->fclk); 185 + 186 + clk_put(priv->iclk); 187 + clk_put(priv->fclk); 188 + 189 + kfree(priv); 190 + fail_alloc: 191 + iounmap(hcd->regs); 192 + fail_ioremap: 193 + release_mem_region(hcd->rsrc_start, hcd->rsrc_len); 194 + fail_request_resource: 195 + usb_put_hcd(hcd); 196 + fail_create_hcd: 
197 + dev_err(&pdev->dev, "init %s fail, %d\n", dev_name(&pdev->dev), ret); 198 + 199 + return ret; 200 + } 201 + 202 + static int __exit ehci_hcd_sh_remove(struct platform_device *pdev) 203 + { 204 + struct ehci_sh_priv *priv = platform_get_drvdata(pdev); 205 + struct usb_hcd *hcd = priv->hcd; 206 + 207 + usb_remove_hcd(hcd); 208 + iounmap(hcd->regs); 209 + release_mem_region(hcd->rsrc_start, hcd->rsrc_len); 210 + usb_put_hcd(hcd); 211 + platform_set_drvdata(pdev, NULL); 212 + 213 + clk_disable(priv->fclk); 214 + clk_disable(priv->iclk); 215 + 216 + clk_put(priv->fclk); 217 + clk_put(priv->iclk); 218 + 219 + kfree(priv); 220 + 221 + return 0; 222 + } 223 + 224 + static void ehci_hcd_sh_shutdown(struct platform_device *pdev) 225 + { 226 + struct ehci_sh_priv *priv = platform_get_drvdata(pdev); 227 + struct usb_hcd *hcd = priv->hcd; 228 + 229 + if (hcd->driver->shutdown) 230 + hcd->driver->shutdown(hcd); 231 + } 232 + 233 + static struct platform_driver ehci_hcd_sh_driver = { 234 + .probe = ehci_hcd_sh_probe, 235 + .remove = __exit_p(ehci_hcd_sh_remove), 236 + .shutdown = ehci_hcd_sh_shutdown, 237 + .driver = { 238 + .name = "sh_ehci", 239 + .owner = THIS_MODULE, 240 + }, 241 + }; 242 + 243 + MODULE_ALIAS("platform:sh_ehci");
+212
drivers/usb/host/ehci-spear.c
··· 1 + /* 2 + * Driver for EHCI HCD on SPEAR SOC 3 + * 4 + * Copyright (C) 2010 ST Micro Electronics, 5 + * Deepak Sikri <deepak.sikri@st.com> 6 + * 7 + * Based on various ehci-*.c drivers 8 + * 9 + * This file is subject to the terms and conditions of the GNU General Public 10 + * License. See the file COPYING in the main directory of this archive for 11 + * more details. 12 + */ 13 + 14 + #include <linux/platform_device.h> 15 + #include <linux/clk.h> 16 + 17 + struct spear_ehci { 18 + struct ehci_hcd ehci; 19 + struct clk *clk; 20 + }; 21 + 22 + #define to_spear_ehci(hcd) (struct spear_ehci *)hcd_to_ehci(hcd) 23 + 24 + static void spear_start_ehci(struct spear_ehci *ehci) 25 + { 26 + clk_enable(ehci->clk); 27 + } 28 + 29 + static void spear_stop_ehci(struct spear_ehci *ehci) 30 + { 31 + clk_disable(ehci->clk); 32 + } 33 + 34 + static int ehci_spear_setup(struct usb_hcd *hcd) 35 + { 36 + struct ehci_hcd *ehci = hcd_to_ehci(hcd); 37 + int retval = 0; 38 + 39 + /* registers start at offset 0x0 */ 40 + ehci->caps = hcd->regs; 41 + ehci->regs = hcd->regs + HC_LENGTH(ehci_readl(ehci, 42 + &ehci->caps->hc_capbase)); 43 + /* cache this readonly data; minimize chip reads */ 44 + ehci->hcs_params = ehci_readl(ehci, &ehci->caps->hcs_params); 45 + retval = ehci_halt(ehci); 46 + if (retval) 47 + return retval; 48 + 49 + retval = ehci_init(hcd); 50 + if (retval) 51 + return retval; 52 + 53 + ehci_reset(ehci); 54 + ehci_port_power(ehci, 0); 55 + 56 + return retval; 57 + } 58 + 59 + static const struct hc_driver ehci_spear_hc_driver = { 60 + .description = hcd_name, 61 + .product_desc = "SPEAr EHCI", 62 + .hcd_priv_size = sizeof(struct spear_ehci), 63 + 64 + /* generic hardware linkage */ 65 + .irq = ehci_irq, 66 + .flags = HCD_MEMORY | HCD_USB2, 67 + 68 + /* basic lifecycle operations */ 69 + .reset = ehci_spear_setup, 70 + .start = ehci_run, 71 + .stop = ehci_stop, 72 + .shutdown = ehci_shutdown, 73 + 74 + /* managing i/o requests and associated device resources */ 75 + 
.urb_enqueue = ehci_urb_enqueue, 76 + .urb_dequeue = ehci_urb_dequeue, 77 + .endpoint_disable = ehci_endpoint_disable, 78 + .endpoint_reset = ehci_endpoint_reset, 79 + 80 + /* scheduling support */ 81 + .get_frame_number = ehci_get_frame, 82 + 83 + /* root hub support */ 84 + .hub_status_data = ehci_hub_status_data, 85 + .hub_control = ehci_hub_control, 86 + .bus_suspend = ehci_bus_suspend, 87 + .bus_resume = ehci_bus_resume, 88 + .relinquish_port = ehci_relinquish_port, 89 + .port_handed_over = ehci_port_handed_over, 90 + .clear_tt_buffer_complete = ehci_clear_tt_buffer_complete, 91 + }; 92 + 93 + static int spear_ehci_hcd_drv_probe(struct platform_device *pdev) 94 + { 95 + struct usb_hcd *hcd ; 96 + struct spear_ehci *ehci; 97 + struct resource *res; 98 + struct clk *usbh_clk; 99 + const struct hc_driver *driver = &ehci_spear_hc_driver; 100 + int *pdata = pdev->dev.platform_data; 101 + int irq, retval; 102 + char clk_name[20] = "usbh_clk"; 103 + 104 + if (pdata == NULL) 105 + return -EFAULT; 106 + 107 + if (usb_disabled()) 108 + return -ENODEV; 109 + 110 + irq = platform_get_irq(pdev, 0); 111 + if (irq < 0) { 112 + retval = irq; 113 + goto fail_irq_get; 114 + } 115 + 116 + if (*pdata >= 0) 117 + sprintf(clk_name, "usbh.%01d_clk", *pdata); 118 + 119 + usbh_clk = clk_get(NULL, clk_name); 120 + if (IS_ERR(usbh_clk)) { 121 + dev_err(&pdev->dev, "Error getting interface clock\n"); 122 + retval = PTR_ERR(usbh_clk); 123 + goto fail_get_usbh_clk; 124 + } 125 + 126 + hcd = usb_create_hcd(driver, &pdev->dev, dev_name(&pdev->dev)); 127 + if (!hcd) { 128 + retval = -ENOMEM; 129 + goto fail_create_hcd; 130 + } 131 + 132 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 133 + if (!res) { 134 + retval = -ENODEV; 135 + goto fail_request_resource; 136 + } 137 + 138 + hcd->rsrc_start = res->start; 139 + hcd->rsrc_len = resource_size(res); 140 + if (!request_mem_region(hcd->rsrc_start, hcd->rsrc_len, 141 + driver->description)) { 142 + retval = -EBUSY; 143 + goto 
fail_request_resource; 144 + } 145 + 146 + hcd->regs = ioremap(hcd->rsrc_start, hcd->rsrc_len); 147 + if (hcd->regs == NULL) { 148 + dev_dbg(&pdev->dev, "error mapping memory\n"); 149 + retval = -ENOMEM; 150 + goto fail_ioremap; 151 + } 152 + 153 + ehci = (struct spear_ehci *)hcd_to_ehci(hcd); 154 + ehci->clk = usbh_clk; 155 + 156 + spear_start_ehci(ehci); 157 + retval = usb_add_hcd(hcd, irq, IRQF_SHARED | IRQF_DISABLED); 158 + if (retval) 159 + goto fail_add_hcd; 160 + 161 + return retval; 162 + 163 + fail_add_hcd: 164 + spear_stop_ehci(ehci); 165 + iounmap(hcd->regs); 166 + fail_ioremap: 167 + release_mem_region(hcd->rsrc_start, hcd->rsrc_len); 168 + fail_request_resource: 169 + usb_put_hcd(hcd); 170 + fail_create_hcd: 171 + clk_put(usbh_clk); 172 + fail_get_usbh_clk: 173 + fail_irq_get: 174 + dev_err(&pdev->dev, "init fail, %d\n", retval); 175 + 176 + return retval ; 177 + } 178 + 179 + static int spear_ehci_hcd_drv_remove(struct platform_device *pdev) 180 + { 181 + struct usb_hcd *hcd = platform_get_drvdata(pdev); 182 + struct spear_ehci *ehci_p = to_spear_ehci(hcd); 183 + 184 + if (!hcd) 185 + return 0; 186 + if (in_interrupt()) 187 + BUG(); 188 + usb_remove_hcd(hcd); 189 + 190 + if (ehci_p->clk) 191 + spear_stop_ehci(ehci_p); 192 + iounmap(hcd->regs); 193 + release_mem_region(hcd->rsrc_start, hcd->rsrc_len); 194 + usb_put_hcd(hcd); 195 + 196 + if (ehci_p->clk) 197 + clk_put(ehci_p->clk); 198 + 199 + return 0; 200 + } 201 + 202 + static struct platform_driver spear_ehci_hcd_driver = { 203 + .probe = spear_ehci_hcd_drv_probe, 204 + .remove = spear_ehci_hcd_drv_remove, 205 + .shutdown = usb_hcd_platform_shutdown, 206 + .driver = { 207 + .name = "spear-ehci", 208 + .bus = &platform_bus_type 209 + } 210 + }; 211 + 212 + MODULE_ALIAS("platform:spear-ehci");
+172
drivers/usb/host/ehci-vt8500.c
··· 1 + /* 2 + * drivers/usb/host/ehci-vt8500.c 3 + * 4 + * Copyright (C) 2010 Alexey Charkov <alchark@gmail.com> 5 + * 6 + * Based on ehci-au1xxx.c 7 + * 8 + * This software is licensed under the terms of the GNU General Public 9 + * License version 2, as published by the Free Software Foundation, and 10 + * may be copied, distributed, and modified under those terms. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + */ 18 + 19 + #include <linux/platform_device.h> 20 + 21 + static int ehci_update_device(struct usb_hcd *hcd, struct usb_device *udev) 22 + { 23 + struct ehci_hcd *ehci = hcd_to_ehci(hcd); 24 + int rc = 0; 25 + 26 + if (!udev->parent) /* udev is root hub itself, impossible */ 27 + rc = -1; 28 + /* we only support lpm device connected to root hub yet */ 29 + if (ehci->has_lpm && !udev->parent->parent) { 30 + rc = ehci_lpm_set_da(ehci, udev->devnum, udev->portnum); 31 + if (!rc) 32 + rc = ehci_lpm_check(ehci, udev->portnum); 33 + } 34 + return rc; 35 + } 36 + 37 + static const struct hc_driver vt8500_ehci_hc_driver = { 38 + .description = hcd_name, 39 + .product_desc = "VT8500 EHCI", 40 + .hcd_priv_size = sizeof(struct ehci_hcd), 41 + 42 + /* 43 + * generic hardware linkage 44 + */ 45 + .irq = ehci_irq, 46 + .flags = HCD_MEMORY | HCD_USB2, 47 + 48 + /* 49 + * basic lifecycle operations 50 + */ 51 + .reset = ehci_init, 52 + .start = ehci_run, 53 + .stop = ehci_stop, 54 + .shutdown = ehci_shutdown, 55 + 56 + /* 57 + * managing i/o requests and associated device resources 58 + */ 59 + .urb_enqueue = ehci_urb_enqueue, 60 + .urb_dequeue = ehci_urb_dequeue, 61 + .endpoint_disable = ehci_endpoint_disable, 62 + .endpoint_reset = ehci_endpoint_reset, 63 + 64 + /* 65 + * scheduling support 66 + */ 67 + .get_frame_number = ehci_get_frame, 
68 + 69 + /* 70 + * root hub support 71 + */ 72 + .hub_status_data = ehci_hub_status_data, 73 + .hub_control = ehci_hub_control, 74 + .bus_suspend = ehci_bus_suspend, 75 + .bus_resume = ehci_bus_resume, 76 + .relinquish_port = ehci_relinquish_port, 77 + .port_handed_over = ehci_port_handed_over, 78 + 79 + /* 80 + * call back when device connected and addressed 81 + */ 82 + .update_device = ehci_update_device, 83 + 84 + .clear_tt_buffer_complete = ehci_clear_tt_buffer_complete, 85 + }; 86 + 87 + static int vt8500_ehci_drv_probe(struct platform_device *pdev) 88 + { 89 + struct usb_hcd *hcd; 90 + struct ehci_hcd *ehci; 91 + struct resource *res; 92 + int ret; 93 + 94 + if (usb_disabled()) 95 + return -ENODEV; 96 + 97 + if (pdev->resource[1].flags != IORESOURCE_IRQ) { 98 + pr_debug("resource[1] is not IORESOURCE_IRQ"); 99 + return -ENOMEM; 100 + } 101 + hcd = usb_create_hcd(&vt8500_ehci_hc_driver, &pdev->dev, "VT8500"); 102 + if (!hcd) 103 + return -ENOMEM; 104 + 105 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 106 + hcd->rsrc_start = res->start; 107 + hcd->rsrc_len = resource_size(res); 108 + 109 + if (!request_mem_region(hcd->rsrc_start, hcd->rsrc_len, hcd_name)) { 110 + pr_debug("request_mem_region failed"); 111 + ret = -EBUSY; 112 + goto err1; 113 + } 114 + 115 + hcd->regs = ioremap(hcd->rsrc_start, hcd->rsrc_len); 116 + if (!hcd->regs) { 117 + pr_debug("ioremap failed"); 118 + ret = -ENOMEM; 119 + goto err2; 120 + } 121 + 122 + ehci = hcd_to_ehci(hcd); 123 + ehci->caps = hcd->regs; 124 + ehci->regs = hcd->regs + HC_LENGTH(readl(&ehci->caps->hc_capbase)); 125 + 126 + dbg_hcs_params(ehci, "reset"); 127 + dbg_hcc_params(ehci, "reset"); 128 + 129 + /* cache this readonly data; minimize chip reads */ 130 + ehci->hcs_params = readl(&ehci->caps->hcs_params); 131 + 132 + ehci_port_power(ehci, 1); 133 + 134 + ret = usb_add_hcd(hcd, pdev->resource[1].start, 135 + IRQF_DISABLED | IRQF_SHARED); 136 + if (ret == 0) { 137 + platform_set_drvdata(pdev, hcd); 138 + 
return ret; 139 + } 140 + 141 + iounmap(hcd->regs); 142 + err2: 143 + release_mem_region(hcd->rsrc_start, hcd->rsrc_len); 144 + err1: 145 + usb_put_hcd(hcd); 146 + return ret; 147 + } 148 + 149 + static int vt8500_ehci_drv_remove(struct platform_device *pdev) 150 + { 151 + struct usb_hcd *hcd = platform_get_drvdata(pdev); 152 + 153 + usb_remove_hcd(hcd); 154 + iounmap(hcd->regs); 155 + release_mem_region(hcd->rsrc_start, hcd->rsrc_len); 156 + usb_put_hcd(hcd); 157 + platform_set_drvdata(pdev, NULL); 158 + 159 + return 0; 160 + } 161 + 162 + static struct platform_driver vt8500_ehci_driver = { 163 + .probe = vt8500_ehci_drv_probe, 164 + .remove = vt8500_ehci_drv_remove, 165 + .shutdown = usb_hcd_platform_shutdown, 166 + .driver = { 167 + .name = "vt8500-ehci", 168 + .owner = THIS_MODULE, 169 + } 170 + }; 171 + 172 + MODULE_ALIAS("platform:vt8500-ehci");
+3
drivers/usb/host/ehci-w90x900.c
··· 130 130 .urb_enqueue = ehci_urb_enqueue, 131 131 .urb_dequeue = ehci_urb_dequeue, 132 132 .endpoint_disable = ehci_endpoint_disable, 133 + .endpoint_reset = ehci_endpoint_reset, 133 134 134 135 /* 135 136 * scheduling support ··· 148 147 #endif 149 148 .relinquish_port = ehci_relinquish_port, 150 149 .port_handed_over = ehci_port_handed_over, 150 + 151 + .clear_tt_buffer_complete = ehci_clear_tt_buffer_complete, 151 152 }; 152 153 153 154 static int __devinit ehci_w90x900_probe(struct platform_device *pdev)
+1
drivers/usb/host/ehci-xilinx-of.c
··· 117 117 .urb_enqueue = ehci_urb_enqueue, 118 118 .urb_dequeue = ehci_urb_dequeue, 119 119 .endpoint_disable = ehci_endpoint_disable, 120 + .endpoint_reset = ehci_endpoint_reset, 120 121 121 122 /* 122 123 * scheduling support
+1
drivers/usb/host/ehci.h
··· 131 131 unsigned has_amcc_usb23:1; 132 132 unsigned need_io_watchdog:1; 133 133 unsigned broken_periodic:1; 134 + unsigned amd_l1_fix:1; 134 135 unsigned fs_i_thresh:1; /* Intel iso scheduling */ 135 136 unsigned use_dummy_qh:1; /* AMD Frame List table quirk*/ 136 137
+5
drivers/usb/host/ohci-hcd.c
··· 1081 1081 #define OF_PLATFORM_DRIVER ohci_hcd_ppc_of_driver 1082 1082 #endif 1083 1083 1084 + #ifdef CONFIG_PLAT_SPEAR 1085 + #include "ohci-spear.c" 1086 + #define PLATFORM_DRIVER spear_ohci_hcd_driver 1087 + #endif 1088 + 1084 1089 #ifdef CONFIG_PPC_PS3 1085 1090 #include "ohci-ps3.c" 1086 1091 #define PS3_SYSTEM_BUS_DRIVER ps3_ohci_driver
+1 -1
drivers/usb/host/ohci-sh.c
··· 109 109 hcd->regs = (void __iomem *)res->start; 110 110 hcd->rsrc_start = res->start; 111 111 hcd->rsrc_len = resource_size(res); 112 - ret = usb_add_hcd(hcd, irq, IRQF_DISABLED); 112 + ret = usb_add_hcd(hcd, irq, IRQF_DISABLED | IRQF_SHARED); 113 113 if (ret != 0) { 114 114 err("Failed to add hcd"); 115 115 usb_put_hcd(hcd);
+240
drivers/usb/host/ohci-spear.c
··· 1 + /* 2 + * OHCI HCD (Host Controller Driver) for USB. 3 + * 4 + * Copyright (C) 2010 ST Microelectronics. 5 + * Deepak Sikri<deepak.sikri@st.com> 6 + * 7 + * Based on various ohci-*.c drivers 8 + * 9 + * This file is licensed under the terms of the GNU General Public 10 + * License version 2. This program is licensed "as is" without any 11 + * warranty of any kind, whether express or implied. 12 + */ 13 + 14 + #include <linux/signal.h> 15 + #include <linux/platform_device.h> 16 + #include <linux/clk.h> 17 + 18 + struct spear_ohci { 19 + struct ohci_hcd ohci; 20 + struct clk *clk; 21 + }; 22 + 23 + #define to_spear_ohci(hcd) (struct spear_ohci *)hcd_to_ohci(hcd) 24 + 25 + static void spear_start_ohci(struct spear_ohci *ohci) 26 + { 27 + clk_enable(ohci->clk); 28 + } 29 + 30 + static void spear_stop_ohci(struct spear_ohci *ohci) 31 + { 32 + clk_disable(ohci->clk); 33 + } 34 + 35 + static int __devinit ohci_spear_start(struct usb_hcd *hcd) 36 + { 37 + struct ohci_hcd *ohci = hcd_to_ohci(hcd); 38 + int ret; 39 + 40 + ret = ohci_init(ohci); 41 + if (ret < 0) 42 + return ret; 43 + ohci->regs = hcd->regs; 44 + 45 + ret = ohci_run(ohci); 46 + if (ret < 0) { 47 + dev_err(hcd->self.controller, "can't start\n"); 48 + ohci_stop(hcd); 49 + return ret; 50 + } 51 + 52 + create_debug_files(ohci); 53 + 54 + #ifdef DEBUG 55 + ohci_dump(ohci, 1); 56 + #endif 57 + return 0; 58 + } 59 + 60 + static const struct hc_driver ohci_spear_hc_driver = { 61 + .description = hcd_name, 62 + .product_desc = "SPEAr OHCI", 63 + .hcd_priv_size = sizeof(struct spear_ohci), 64 + 65 + /* generic hardware linkage */ 66 + .irq = ohci_irq, 67 + .flags = HCD_USB11 | HCD_MEMORY, 68 + 69 + /* basic lifecycle operations */ 70 + .start = ohci_spear_start, 71 + .stop = ohci_stop, 72 + .shutdown = ohci_shutdown, 73 + #ifdef CONFIG_PM 74 + .bus_suspend = ohci_bus_suspend, 75 + .bus_resume = ohci_bus_resume, 76 + #endif 77 + 78 + /* managing i/o requests and associated device resources */ 79 + .urb_enqueue = 
ohci_urb_enqueue, 80 + .urb_dequeue = ohci_urb_dequeue, 81 + .endpoint_disable = ohci_endpoint_disable, 82 + 83 + /* scheduling support */ 84 + .get_frame_number = ohci_get_frame, 85 + 86 + /* root hub support */ 87 + .hub_status_data = ohci_hub_status_data, 88 + .hub_control = ohci_hub_control, 89 + 90 + .start_port_reset = ohci_start_port_reset, 91 + }; 92 + 93 + static int spear_ohci_hcd_drv_probe(struct platform_device *pdev) 94 + { 95 + const struct hc_driver *driver = &ohci_spear_hc_driver; 96 + struct usb_hcd *hcd = NULL; 97 + struct clk *usbh_clk; 98 + struct spear_ohci *ohci_p; 99 + struct resource *res; 100 + int retval, irq; 101 + int *pdata = pdev->dev.platform_data; 102 + char clk_name[20] = "usbh_clk"; 103 + 104 + if (pdata == NULL) 105 + return -EFAULT; 106 + 107 + irq = platform_get_irq(pdev, 0); 108 + if (irq < 0) { 109 + retval = irq; 110 + goto fail_irq_get; 111 + } 112 + 113 + if (*pdata >= 0) 114 + sprintf(clk_name, "usbh.%01d_clk", *pdata); 115 + 116 + usbh_clk = clk_get(NULL, clk_name); 117 + if (IS_ERR(usbh_clk)) { 118 + dev_err(&pdev->dev, "Error getting interface clock\n"); 119 + retval = PTR_ERR(usbh_clk); 120 + goto fail_get_usbh_clk; 121 + } 122 + 123 + hcd = usb_create_hcd(driver, &pdev->dev, dev_name(&pdev->dev)); 124 + if (!hcd) { 125 + retval = -ENOMEM; 126 + goto fail_create_hcd; 127 + } 128 + 129 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 130 + if (!res) { 131 + retval = -ENODEV; 132 + goto fail_request_resource; 133 + } 134 + 135 + hcd->rsrc_start = pdev->resource[0].start; 136 + hcd->rsrc_len = resource_size(res); 137 + if (!request_mem_region(hcd->rsrc_start, hcd->rsrc_len, hcd_name)) { 138 + dev_dbg(&pdev->dev, "request_mem_region failed\n"); 139 + retval = -EBUSY; 140 + goto fail_request_resource; 141 + } 142 + 143 + hcd->regs = ioremap(hcd->rsrc_start, hcd->rsrc_len); 144 + if (!hcd->regs) { 145 + dev_dbg(&pdev->dev, "ioremap failed\n"); 146 + retval = -ENOMEM; 147 + goto fail_ioremap; 148 + } 149 + 150 + ohci_p 
= (struct spear_ohci *)hcd_to_ohci(hcd); 151 + ohci_p->clk = usbh_clk; 152 + spear_start_ohci(ohci_p); 153 + ohci_hcd_init(hcd_to_ohci(hcd)); 154 + 155 + retval = usb_add_hcd(hcd, platform_get_irq(pdev, 0), IRQF_DISABLED); 156 + if (retval == 0) 157 + return retval; 158 + 159 + spear_stop_ohci(ohci_p); 160 + iounmap(hcd->regs); 161 + fail_ioremap: 162 + release_mem_region(hcd->rsrc_start, hcd->rsrc_len); 163 + fail_request_resource: 164 + usb_put_hcd(hcd); 165 + fail_create_hcd: 166 + clk_put(usbh_clk); 167 + fail_get_usbh_clk: 168 + fail_irq_get: 169 + dev_err(&pdev->dev, "init fail, %d\n", retval); 170 + 171 + return retval; 172 + } 173 + 174 + static int spear_ohci_hcd_drv_remove(struct platform_device *pdev) 175 + { 176 + struct usb_hcd *hcd = platform_get_drvdata(pdev); 177 + struct spear_ohci *ohci_p = to_spear_ohci(hcd); 178 + 179 + usb_remove_hcd(hcd); 180 + if (ohci_p->clk) 181 + spear_stop_ohci(ohci_p); 182 + 183 + iounmap(hcd->regs); 184 + release_mem_region(hcd->rsrc_start, hcd->rsrc_len); 185 + usb_put_hcd(hcd); 186 + 187 + if (ohci_p->clk) 188 + clk_put(ohci_p->clk); 189 + platform_set_drvdata(pdev, NULL); 190 + return 0; 191 + } 192 + 193 + #if defined(CONFIG_PM) 194 + static int spear_ohci_hcd_drv_suspend(struct platform_device *dev, 195 + pm_message_t message) 196 + { 197 + struct usb_hcd *hcd = platform_get_drvdata(dev); 198 + struct ohci_hcd *ohci = hcd_to_ohci(hcd); 199 + struct spear_ohci *ohci_p = to_spear_ohci(hcd); 200 + 201 + if (time_before(jiffies, ohci->next_statechange)) 202 + msleep(5); 203 + ohci->next_statechange = jiffies; 204 + 205 + spear_stop_ohci(ohci_p); 206 + ohci_to_hcd(ohci)->state = HC_STATE_SUSPENDED; 207 + return 0; 208 + } 209 + 210 + static int spear_ohci_hcd_drv_resume(struct platform_device *dev) 211 + { 212 + struct usb_hcd *hcd = platform_get_drvdata(dev); 213 + struct ohci_hcd *ohci = hcd_to_ohci(hcd); 214 + struct spear_ohci *ohci_p = to_spear_ohci(hcd); 215 + 216 + if (time_before(jiffies, 
ohci->next_statechange)) 217 + msleep(5); 218 + ohci->next_statechange = jiffies; 219 + 220 + spear_start_ohci(ohci_p); 221 + ohci_finish_controller_resume(hcd); 222 + return 0; 223 + } 224 + #endif 225 + 226 + /* Driver definition to register with the platform bus */ 227 + static struct platform_driver spear_ohci_hcd_driver = { 228 + .probe = spear_ohci_hcd_drv_probe, 229 + .remove = spear_ohci_hcd_drv_remove, 230 + #ifdef CONFIG_PM 231 + .suspend = spear_ohci_hcd_drv_suspend, 232 + .resume = spear_ohci_hcd_drv_resume, 233 + #endif 234 + .driver = { 235 + .owner = THIS_MODULE, 236 + .name = "spear-ohci", 237 + }, 238 + }; 239 + 240 + MODULE_ALIAS("platform:spear-ohci");
+1 -1
drivers/usb/host/uhci-hcd.c
··· 569 569 */ 570 570 static void uhci_shutdown(struct pci_dev *pdev) 571 571 { 572 - struct usb_hcd *hcd = (struct usb_hcd *) pci_get_drvdata(pdev); 572 + struct usb_hcd *hcd = pci_get_drvdata(pdev); 573 573 574 574 uhci_hc_died(hcd_to_uhci(hcd)); 575 575 }
+7 -5
drivers/usb/host/uhci-q.c
··· 29 29 { 30 30 if (uhci->is_stopped) 31 31 mod_timer(&uhci_to_hcd(uhci)->rh_timer, jiffies); 32 - uhci->term_td->status |= cpu_to_le32(TD_CTRL_IOC); 32 + uhci->term_td->status |= cpu_to_le32(TD_CTRL_IOC); 33 33 } 34 34 35 35 static inline void uhci_clear_next_interrupt(struct uhci_hcd *uhci) ··· 195 195 } else { 196 196 struct uhci_td *ntd; 197 197 198 - ntd = list_entry(td->fl_list.next, struct uhci_td, fl_list); 198 + ntd = list_entry(td->fl_list.next, 199 + struct uhci_td, 200 + fl_list); 199 201 uhci->frame[td->frame] = LINK_TO_TD(ntd); 200 202 uhci->frame_cpu[td->frame] = ntd; 201 203 } ··· 730 728 731 729 urbp->urb = urb; 732 730 urb->hcpriv = urbp; 733 - 731 + 734 732 INIT_LIST_HEAD(&urbp->node); 735 733 INIT_LIST_HEAD(&urbp->td_list); 736 734 ··· 848 846 849 847 /* Alternate Data0/1 (start with Data1) */ 850 848 destination ^= TD_TOKEN_TOGGLE; 851 - 849 + 852 850 uhci_add_td_to_urbp(td, urbp); 853 851 uhci_fill_td(td, status, destination | uhci_explen(pktsze), 854 852 data); ··· 859 857 } 860 858 861 859 /* 862 - * Build the final TD for control status 860 + * Build the final TD for control status 863 861 */ 864 862 td = uhci_alloc_td(uhci); 865 863 if (!td)
+1 -1
drivers/usb/host/whci/hcd.c
··· 356 356 module_exit(whci_hc_driver_exit); 357 357 358 358 /* PCI device ID's that we handle (so it gets loaded) */ 359 - static struct pci_device_id whci_hcd_id_table[] = { 359 + static struct pci_device_id __used whci_hcd_id_table[] = { 360 360 { PCI_DEVICE_CLASS(PCI_CLASS_WIRELESS_WHCI, ~0) }, 361 361 { /* empty last entry */ } 362 362 };
+31 -3
drivers/usb/mon/mon_bin.c
··· 436 436 return length; 437 437 } 438 438 439 + /* 440 + * This is the look-ahead pass in case of 'C Zi', when actual_length cannot 441 + * be used to determine the length of the whole contiguous buffer. 442 + */ 443 + static unsigned int mon_bin_collate_isodesc(const struct mon_reader_bin *rp, 444 + struct urb *urb, unsigned int ndesc) 445 + { 446 + struct usb_iso_packet_descriptor *fp; 447 + unsigned int length; 448 + 449 + length = 0; 450 + fp = urb->iso_frame_desc; 451 + while (ndesc-- != 0) { 452 + if (fp->actual_length != 0) { 453 + if (fp->offset + fp->actual_length > length) 454 + length = fp->offset + fp->actual_length; 455 + } 456 + fp++; 457 + } 458 + return length; 459 + } 460 + 439 461 static void mon_bin_get_isodesc(const struct mon_reader_bin *rp, 440 462 unsigned int offset, struct urb *urb, char ev_type, unsigned int ndesc) 441 463 { ··· 500 478 /* 501 479 * Find the maximum allowable length, then allocate space. 502 480 */ 481 + urb_length = (ev_type == 'S') ? 482 + urb->transfer_buffer_length : urb->actual_length; 483 + length = urb_length; 484 + 503 485 if (usb_endpoint_xfer_isoc(epd)) { 504 486 if (urb->number_of_packets < 0) { 505 487 ndesc = 0; ··· 512 486 } else { 513 487 ndesc = urb->number_of_packets; 514 488 } 489 + if (ev_type == 'C' && usb_urb_dir_in(urb)) 490 + length = mon_bin_collate_isodesc(rp, urb, ndesc); 515 491 } else { 516 492 ndesc = 0; 517 493 } 518 494 lendesc = ndesc*sizeof(struct mon_bin_isodesc); 519 495 520 - urb_length = (ev_type == 'S') ? 521 - urb->transfer_buffer_length : urb->actual_length; 522 - length = urb_length; 496 + /* not an issue unless there's a subtle bug in a HCD somewhere */ 497 + if (length >= urb->transfer_buffer_length) 498 + length = urb->transfer_buffer_length; 523 499 524 500 if (length >= rp->b_size/5) 525 501 length = rp->b_size/5;
+31 -46
drivers/usb/musb/Kconfig
··· 12 12 depends on (ARM || (BF54x && !BF544) || (BF52x && !BF522 && !BF523)) 13 13 select NOP_USB_XCEIV if (ARCH_DAVINCI || MACH_OMAP3EVM || BLACKFIN) 14 14 select TWL4030_USB if MACH_OMAP_3430SDP 15 + select TWL6030_USB if MACH_OMAP_4430SDP || MACH_OMAP4_PANDA 15 16 select USB_OTG_UTILS 16 17 tristate 'Inventra Highspeed Dual Role Controller (TI, ADI, ...)' 17 18 help ··· 31 30 If you do not know what this is, please say N. 32 31 33 32 To compile this driver as a module, choose M here; the 34 - module will be called "musb_hdrc". 33 + module will be called "musb-hdrc". 35 34 36 - config USB_MUSB_SOC 37 - boolean 35 + choice 36 + prompt "Platform Glue Layer" 38 37 depends on USB_MUSB_HDRC 39 - default y if ARCH_DAVINCI 40 - default y if ARCH_OMAP2430 41 - default y if ARCH_OMAP3 42 - default y if ARCH_OMAP4 43 - default y if (BF54x && !BF544) 44 - default y if (BF52x && !BF522 && !BF523) 45 38 46 - comment "DaVinci 35x and 644x USB support" 47 - depends on USB_MUSB_HDRC && ARCH_DAVINCI_DMx 39 + config USB_MUSB_DAVINCI 40 + bool "DaVinci" 41 + depends on ARCH_DAVINCI_DMx 48 42 49 - comment "DA8xx/OMAP-L1x USB support" 50 - depends on USB_MUSB_HDRC && ARCH_DAVINCI_DA8XX 43 + config USB_MUSB_DA8XX 44 + bool "DA8xx/OMAP-L1x" 45 + depends on ARCH_DAVINCI_DA8XX 51 46 52 - comment "OMAP 243x high speed USB support" 53 - depends on USB_MUSB_HDRC && ARCH_OMAP2430 47 + config USB_MUSB_TUSB6010 48 + bool "TUSB6010" 49 + depends on ARCH_OMAP 54 50 55 - comment "OMAP 343x high speed USB support" 56 - depends on USB_MUSB_HDRC && ARCH_OMAP3 57 - 58 - comment "OMAP 44xx high speed USB support" 59 - depends on USB_MUSB_HDRC && ARCH_OMAP4 60 - 61 - comment "Blackfin high speed USB Support" 62 - depends on USB_MUSB_HDRC && ((BF54x && !BF544) || (BF52x && !BF522 && !BF523)) 51 + config USB_MUSB_OMAP2PLUS 52 + bool "OMAP2430 and onwards" 53 + depends on ARCH_OMAP2PLUS 63 54 64 55 config USB_MUSB_AM35X 65 - bool 66 - depends on USB_MUSB_HDRC && !ARCH_OMAP2430 && !ARCH_OMAP4 67 - select 
NOP_USB_XCEIV 68 - default MACH_OMAP3517EVM 69 - help 70 - Select this option if your platform is based on AM35x. As 71 - AM35x has an updated MUSB with CPPI4.1 DMA so this config 72 - is introduced to differentiate musb ip between OMAP3x and 73 - AM35x platforms. 56 + bool "AM35x" 57 + depends on ARCH_OMAP 74 58 75 - config USB_TUSB6010 76 - boolean "TUSB 6010 support" 77 - depends on USB_MUSB_HDRC && !USB_MUSB_SOC 78 - select NOP_USB_XCEIV 79 - default y 80 - help 81 - The TUSB 6010 chip, from Texas Instruments, connects a discrete 82 - HDRC core using a 16-bit parallel bus (NOR flash style) or VLYNQ 83 - (a high speed serial link). It can use system-specific external 84 - DMA controllers. 59 + config USB_MUSB_BLACKFIN 60 + bool "Blackfin" 61 + depends on (BF54x && !BF544) || (BF52x && ! BF522 && !BF523) 62 + 63 + config USB_MUSB_UX500 64 + bool "U8500 and U5500" 65 + depends on (ARCH_U8500 && AB8500_USB) || (ARCH_U5500) 66 + 67 + endchoice 85 68 86 69 choice 87 70 prompt "Driver Mode" ··· 143 158 config MUSB_PIO_ONLY 144 159 bool 'Disable DMA (always use PIO)' 145 160 depends on USB_MUSB_HDRC 146 - default USB_TUSB6010 || ARCH_DAVINCI_DA8XX || USB_MUSB_AM35X 161 + default USB_MUSB_TUSB6010 || USB_MUSB_DA8XX || USB_MUSB_AM35X 147 162 help 148 163 All data is copied between memory and FIFO by the CPU. 149 164 DMA controllers are ignored. ··· 156 171 config USB_INVENTRA_DMA 157 172 bool 158 173 depends on USB_MUSB_HDRC && !MUSB_PIO_ONLY 159 - default ARCH_OMAP2430 || ARCH_OMAP3 || BLACKFIN || ARCH_OMAP4 174 + default USB_MUSB_OMAP2PLUS || USB_MUSB_BLACKFIN 160 175 help 161 176 Enable DMA transfers using Mentor's engine. 162 177 163 178 config USB_TI_CPPI_DMA 164 179 bool 165 180 depends on USB_MUSB_HDRC && !MUSB_PIO_ONLY 166 - default ARCH_DAVINCI 181 + default USB_MUSB_DAVINCI 167 182 help 168 183 Enable DMA transfers when TI CPPI DMA is available. 
169 184 170 185 config USB_TUSB_OMAP_DMA 171 186 bool 172 187 depends on USB_MUSB_HDRC && !MUSB_PIO_ONLY 173 - depends on USB_TUSB6010 188 + depends on USB_MUSB_TUSB6010 174 189 depends on ARCH_OMAP 175 190 default y 176 191 help
+9 -12
drivers/usb/musb/Makefile
··· 8 8 9 9 musb_hdrc-y := musb_core.o 10 10 11 - musb_hdrc-$(CONFIG_ARCH_DAVINCI_DMx) += davinci.o 12 - musb_hdrc-$(CONFIG_ARCH_DAVINCI_DA8XX) += da8xx.o 13 - musb_hdrc-$(CONFIG_USB_TUSB6010) += tusb6010.o 14 - musb_hdrc-$(CONFIG_ARCH_OMAP2430) += omap2430.o 15 - ifeq ($(CONFIG_USB_MUSB_AM35X),y) 16 - musb_hdrc-$(CONFIG_ARCH_OMAP3430) += am35x.o 17 - else 18 - musb_hdrc-$(CONFIG_ARCH_OMAP3430) += omap2430.o 19 - endif 20 - musb_hdrc-$(CONFIG_ARCH_OMAP4) += omap2430.o 21 - musb_hdrc-$(CONFIG_BF54x) += blackfin.o 22 - musb_hdrc-$(CONFIG_BF52x) += blackfin.o 23 11 musb_hdrc-$(CONFIG_USB_GADGET_MUSB_HDRC) += musb_gadget_ep0.o musb_gadget.o 24 12 musb_hdrc-$(CONFIG_USB_MUSB_HDRC_HCD) += musb_virthub.o musb_host.o 25 13 musb_hdrc-$(CONFIG_DEBUG_FS) += musb_debugfs.o 14 + 15 + # Hardware Glue Layer 16 + obj-$(CONFIG_USB_MUSB_OMAP2PLUS) += omap2430.o 17 + obj-$(CONFIG_USB_MUSB_AM35X) += am35x.o 18 + obj-$(CONFIG_USB_MUSB_TUSB6010) += tusb6010.o 19 + obj-$(CONFIG_USB_MUSB_DAVINCI) += davinci.o 20 + obj-$(CONFIG_USB_MUSB_DA8XX) += da8xx.o 21 + obj-$(CONFIG_USB_MUSB_BLACKFIN) += blackfin.o 22 + obj-$(CONFIG_USB_MUSB_UX500) += ux500.o 26 23 27 24 # the kconfig must guarantee that only one of the 28 25 # possible I/O schemes will be enabled at a time ...
+270 -140
drivers/usb/musb/am35x.c
··· 29 29 #include <linux/init.h> 30 30 #include <linux/clk.h> 31 31 #include <linux/io.h> 32 + #include <linux/platform_device.h> 33 + #include <linux/dma-mapping.h> 32 34 33 - #include <plat/control.h> 34 35 #include <plat/usb.h> 35 36 36 37 #include "musb_core.h" ··· 81 80 82 81 #define USB_MENTOR_CORE_OFFSET 0x400 83 82 84 - static inline void phy_on(void) 85 - { 86 - unsigned long timeout = jiffies + msecs_to_jiffies(100); 87 - u32 devconf2; 88 - 89 - /* 90 - * Start the on-chip PHY and its PLL. 91 - */ 92 - devconf2 = omap_ctrl_readl(AM35XX_CONTROL_DEVCONF2); 93 - 94 - devconf2 &= ~(CONF2_RESET | CONF2_PHYPWRDN | CONF2_OTGPWRDN); 95 - devconf2 |= CONF2_PHY_PLLON; 96 - 97 - omap_ctrl_writel(devconf2, AM35XX_CONTROL_DEVCONF2); 98 - 99 - DBG(1, "Waiting for PHY clock good...\n"); 100 - while (!(omap_ctrl_readl(AM35XX_CONTROL_DEVCONF2) 101 - & CONF2_PHYCLKGD)) { 102 - cpu_relax(); 103 - 104 - if (time_after(jiffies, timeout)) { 105 - DBG(1, "musb PHY clock good timed out\n"); 106 - break; 107 - } 108 - } 109 - } 110 - 111 - static inline void phy_off(void) 112 - { 113 - u32 devconf2; 114 - 115 - /* 116 - * Power down the on-chip PHY. 
117 - */ 118 - devconf2 = omap_ctrl_readl(AM35XX_CONTROL_DEVCONF2); 119 - 120 - devconf2 &= ~CONF2_PHY_PLLON; 121 - devconf2 |= CONF2_PHYPWRDN | CONF2_OTGPWRDN; 122 - omap_ctrl_writel(devconf2, AM35XX_CONTROL_DEVCONF2); 123 - } 83 + struct am35x_glue { 84 + struct device *dev; 85 + struct platform_device *musb; 86 + struct clk *phy_clk; 87 + struct clk *clk; 88 + }; 89 + #define glue_to_musb(g) platform_get_drvdata(g->musb) 124 90 125 91 /* 126 - * musb_platform_enable - enable interrupts 92 + * am35x_musb_enable - enable interrupts 127 93 */ 128 - void musb_platform_enable(struct musb *musb) 94 + static void am35x_musb_enable(struct musb *musb) 129 95 { 130 96 void __iomem *reg_base = musb->ctrl_base; 131 97 u32 epmask; ··· 111 143 } 112 144 113 145 /* 114 - * musb_platform_disable - disable HDRC and flush interrupts 146 + * am35x_musb_disable - disable HDRC and flush interrupts 115 147 */ 116 - void musb_platform_disable(struct musb *musb) 148 + static void am35x_musb_disable(struct musb *musb) 117 149 { 118 150 void __iomem *reg_base = musb->ctrl_base; 119 151 ··· 130 162 #define portstate(stmt) 131 163 #endif 132 164 133 - static void am35x_set_vbus(struct musb *musb, int is_on) 165 + static void am35x_musb_set_vbus(struct musb *musb, int is_on) 134 166 { 135 167 WARN_ON(is_on && is_peripheral_active(musb)); 136 168 } ··· 189 221 spin_unlock_irqrestore(&musb->lock, flags); 190 222 } 191 223 192 - void musb_platform_try_idle(struct musb *musb, unsigned long timeout) 224 + static void am35x_musb_try_idle(struct musb *musb, unsigned long timeout) 193 225 { 194 226 static unsigned long last_timer; 195 227 ··· 219 251 mod_timer(&otg_workaround, timeout); 220 252 } 221 253 222 - static irqreturn_t am35x_interrupt(int irq, void *hci) 254 + static irqreturn_t am35x_musb_interrupt(int irq, void *hci) 223 255 { 224 256 struct musb *musb = hci; 225 257 void __iomem *reg_base = musb->ctrl_base; 258 + struct device *dev = musb->controller; 259 + struct 
musb_hdrc_platform_data *plat = dev->platform_data; 260 + struct omap_musb_board_data *data = plat->board_data; 226 261 unsigned long flags; 227 262 irqreturn_t ret = IRQ_NONE; 228 - u32 epintr, usbintr, lvl_intr; 263 + u32 epintr, usbintr; 229 264 230 265 spin_lock_irqsave(&musb->lock, flags); 231 266 ··· 317 346 /* EOI needs to be written for the IRQ to be re-asserted. */ 318 347 if (ret == IRQ_HANDLED || epintr || usbintr) { 319 348 /* clear level interrupt */ 320 - lvl_intr = omap_ctrl_readl(AM35XX_CONTROL_LVL_INTR_CLEAR); 321 - lvl_intr |= AM35XX_USBOTGSS_INT_CLR; 322 - omap_ctrl_writel(lvl_intr, AM35XX_CONTROL_LVL_INTR_CLEAR); 349 + if (data->clear_irq) 350 + data->clear_irq(); 323 351 /* write EOI */ 324 352 musb_writel(reg_base, USB_END_OF_INTR_REG, 0); 325 353 } ··· 332 362 return ret; 333 363 } 334 364 335 - int musb_platform_set_mode(struct musb *musb, u8 musb_mode) 365 + static int am35x_musb_set_mode(struct musb *musb, u8 musb_mode) 336 366 { 337 - u32 devconf2 = omap_ctrl_readl(AM35XX_CONTROL_DEVCONF2); 367 + struct device *dev = musb->controller; 368 + struct musb_hdrc_platform_data *plat = dev->platform_data; 369 + struct omap_musb_board_data *data = plat->board_data; 370 + int retval = 0; 338 371 339 - devconf2 &= ~CONF2_OTGMODE; 340 - switch (musb_mode) { 341 - #ifdef CONFIG_USB_MUSB_HDRC_HCD 342 - case MUSB_HOST: /* Force VBUS valid, ID = 0 */ 343 - devconf2 |= CONF2_FORCE_HOST; 344 - break; 345 - #endif 346 - #ifdef CONFIG_USB_GADGET_MUSB_HDRC 347 - case MUSB_PERIPHERAL: /* Force VBUS valid, ID = 1 */ 348 - devconf2 |= CONF2_FORCE_DEVICE; 349 - break; 350 - #endif 351 - #ifdef CONFIG_USB_MUSB_OTG 352 - case MUSB_OTG: /* Don't override the VBUS/ID comparators */ 353 - devconf2 |= CONF2_NO_OVERRIDE; 354 - break; 355 - #endif 356 - default: 357 - DBG(2, "Trying to set unsupported mode %u\n", musb_mode); 358 - } 372 + if (data->set_mode) 373 + data->set_mode(musb_mode); 374 + else 375 + retval = -EIO; 359 376 360 - omap_ctrl_writel(devconf2, 
AM35XX_CONTROL_DEVCONF2); 361 - return 0; 377 + return retval; 362 378 } 363 379 364 - int __init musb_platform_init(struct musb *musb, void *board_data) 380 + static int am35x_musb_init(struct musb *musb) 365 381 { 382 + struct device *dev = musb->controller; 383 + struct musb_hdrc_platform_data *plat = dev->platform_data; 384 + struct omap_musb_board_data *data = plat->board_data; 366 385 void __iomem *reg_base = musb->ctrl_base; 367 - u32 rev, lvl_intr, sw_reset; 368 - int status; 386 + u32 rev; 369 387 370 388 musb->mregs += USB_MENTOR_CORE_OFFSET; 371 389 372 - clk_enable(musb->clock); 373 - DBG(2, "musb->clock=%lud\n", clk_get_rate(musb->clock)); 374 - 375 - musb->phy_clock = clk_get(musb->controller, "fck"); 376 - if (IS_ERR(musb->phy_clock)) { 377 - status = PTR_ERR(musb->phy_clock); 378 - goto exit0; 379 - } 380 - clk_enable(musb->phy_clock); 381 - DBG(2, "musb->phy_clock=%lud\n", clk_get_rate(musb->phy_clock)); 382 - 383 390 /* Returns zero if e.g. not clocked */ 384 391 rev = musb_readl(reg_base, USB_REVISION_REG); 385 - if (!rev) { 386 - status = -ENODEV; 387 - goto exit1; 388 - } 392 + if (!rev) 393 + return -ENODEV; 389 394 390 395 usb_nop_xceiv_register(); 391 396 musb->xceiv = otg_get_transceiver(); 392 - if (!musb->xceiv) { 393 - status = -ENODEV; 394 - goto exit1; 395 - } 397 + if (!musb->xceiv) 398 + return -ENODEV; 396 399 397 400 if (is_host_enabled(musb)) 398 401 setup_timer(&otg_workaround, otg_timer, (unsigned long) musb); 399 402 400 - musb->board_set_vbus = am35x_set_vbus; 401 - 402 - /* Global reset */ 403 - sw_reset = omap_ctrl_readl(AM35XX_CONTROL_IP_SW_RESET); 404 - 405 - sw_reset |= AM35XX_USBOTGSS_SW_RST; 406 - omap_ctrl_writel(sw_reset, AM35XX_CONTROL_IP_SW_RESET); 407 - 408 - sw_reset &= ~AM35XX_USBOTGSS_SW_RST; 409 - omap_ctrl_writel(sw_reset, AM35XX_CONTROL_IP_SW_RESET); 403 + /* Reset the musb */ 404 + if (data->reset) 405 + data->reset(); 410 406 411 407 /* Reset the controller */ 412 408 musb_writel(reg_base, USB_CTRL_REG, 
AM35X_SOFT_RESET_MASK); 413 409 414 410 /* Start the on-chip PHY and its PLL. */ 415 - phy_on(); 411 + if (data->set_phy_power) 412 + data->set_phy_power(1); 416 413 417 414 msleep(5); 418 415 419 - musb->isr = am35x_interrupt; 416 + musb->isr = am35x_musb_interrupt; 420 417 421 418 /* clear level interrupt */ 422 - lvl_intr = omap_ctrl_readl(AM35XX_CONTROL_LVL_INTR_CLEAR); 423 - lvl_intr |= AM35XX_USBOTGSS_INT_CLR; 424 - omap_ctrl_writel(lvl_intr, AM35XX_CONTROL_LVL_INTR_CLEAR); 419 + if (data->clear_irq) 420 + data->clear_irq(); 421 + 425 422 return 0; 426 - exit1: 427 - clk_disable(musb->phy_clock); 428 - clk_put(musb->phy_clock); 429 - exit0: 430 - clk_disable(musb->clock); 431 - return status; 432 423 } 433 424 434 - int musb_platform_exit(struct musb *musb) 425 + static int am35x_musb_exit(struct musb *musb) 435 426 { 427 + struct device *dev = musb->controller; 428 + struct musb_hdrc_platform_data *plat = dev->platform_data; 429 + struct omap_musb_board_data *data = plat->board_data; 430 + 436 431 if (is_host_enabled(musb)) 437 432 del_timer_sync(&otg_workaround); 438 433 439 - phy_off(); 434 + /* Shutdown the on-chip PHY and its PLL. 
*/ 435 + if (data->set_phy_power) 436 + data->set_phy_power(0); 440 437 441 438 otg_put_transceiver(musb->xceiv); 442 439 usb_nop_xceiv_unregister(); 443 440 444 - clk_disable(musb->clock); 445 - 446 - clk_disable(musb->phy_clock); 447 - clk_put(musb->phy_clock); 448 - 449 441 return 0; 450 442 } 451 - 452 - #ifdef CONFIG_PM 453 - void musb_platform_save_context(struct musb *musb, 454 - struct musb_context_registers *musb_context) 455 - { 456 - phy_off(); 457 - } 458 - 459 - void musb_platform_restore_context(struct musb *musb, 460 - struct musb_context_registers *musb_context) 461 - { 462 - phy_on(); 463 - } 464 - #endif 465 443 466 444 /* AM35x supports only 32bit read operation */ 467 445 void musb_read_fifo(struct musb_hw_ep *hw_ep, u16 len, u8 *dst) ··· 440 522 memcpy(dst, &val, len); 441 523 } 442 524 } 525 + 526 + static const struct musb_platform_ops am35x_ops = { 527 + .init = am35x_musb_init, 528 + .exit = am35x_musb_exit, 529 + 530 + .enable = am35x_musb_enable, 531 + .disable = am35x_musb_disable, 532 + 533 + .set_mode = am35x_musb_set_mode, 534 + .try_idle = am35x_musb_try_idle, 535 + 536 + .set_vbus = am35x_musb_set_vbus, 537 + }; 538 + 539 + static u64 am35x_dmamask = DMA_BIT_MASK(32); 540 + 541 + static int __init am35x_probe(struct platform_device *pdev) 542 + { 543 + struct musb_hdrc_platform_data *pdata = pdev->dev.platform_data; 544 + struct platform_device *musb; 545 + struct am35x_glue *glue; 546 + 547 + struct clk *phy_clk; 548 + struct clk *clk; 549 + 550 + int ret = -ENOMEM; 551 + 552 + glue = kzalloc(sizeof(*glue), GFP_KERNEL); 553 + if (!glue) { 554 + dev_err(&pdev->dev, "failed to allocate glue context\n"); 555 + goto err0; 556 + } 557 + 558 + musb = platform_device_alloc("musb-hdrc", -1); 559 + if (!musb) { 560 + dev_err(&pdev->dev, "failed to allocate musb device\n"); 561 + goto err1; 562 + } 563 + 564 + phy_clk = clk_get(&pdev->dev, "fck"); 565 + if (IS_ERR(phy_clk)) { 566 + dev_err(&pdev->dev, "failed to get PHY clock\n"); 567 + ret 
= PTR_ERR(phy_clk); 568 + goto err2; 569 + } 570 + 571 + clk = clk_get(&pdev->dev, "ick"); 572 + if (IS_ERR(clk)) { 573 + dev_err(&pdev->dev, "failed to get clock\n"); 574 + ret = PTR_ERR(clk); 575 + goto err3; 576 + } 577 + 578 + ret = clk_enable(phy_clk); 579 + if (ret) { 580 + dev_err(&pdev->dev, "failed to enable PHY clock\n"); 581 + goto err4; 582 + } 583 + 584 + ret = clk_enable(clk); 585 + if (ret) { 586 + dev_err(&pdev->dev, "failed to enable clock\n"); 587 + goto err5; 588 + } 589 + 590 + musb->dev.parent = &pdev->dev; 591 + musb->dev.dma_mask = &am35x_dmamask; 592 + musb->dev.coherent_dma_mask = am35x_dmamask; 593 + 594 + glue->dev = &pdev->dev; 595 + glue->musb = musb; 596 + glue->phy_clk = phy_clk; 597 + glue->clk = clk; 598 + 599 + pdata->platform_ops = &am35x_ops; 600 + 601 + platform_set_drvdata(pdev, glue); 602 + 603 + ret = platform_device_add_resources(musb, pdev->resource, 604 + pdev->num_resources); 605 + if (ret) { 606 + dev_err(&pdev->dev, "failed to add resources\n"); 607 + goto err6; 608 + } 609 + 610 + ret = platform_device_add_data(musb, pdata, sizeof(*pdata)); 611 + if (ret) { 612 + dev_err(&pdev->dev, "failed to add platform_data\n"); 613 + goto err6; 614 + } 615 + 616 + ret = platform_device_add(musb); 617 + if (ret) { 618 + dev_err(&pdev->dev, "failed to register musb device\n"); 619 + goto err6; 620 + } 621 + 622 + return 0; 623 + 624 + err6: 625 + clk_disable(clk); 626 + 627 + err5: 628 + clk_disable(phy_clk); 629 + 630 + err4: 631 + clk_put(clk); 632 + 633 + err3: 634 + clk_put(phy_clk); 635 + 636 + err2: 637 + platform_device_put(musb); 638 + 639 + err1: 640 + kfree(glue); 641 + 642 + err0: 643 + return ret; 644 + } 645 + 646 + static int __exit am35x_remove(struct platform_device *pdev) 647 + { 648 + struct am35x_glue *glue = platform_get_drvdata(pdev); 649 + 650 + platform_device_del(glue->musb); 651 + platform_device_put(glue->musb); 652 + clk_disable(glue->clk); 653 + clk_disable(glue->phy_clk); 654 + clk_put(glue->clk); 655 + 
clk_put(glue->phy_clk); 656 + kfree(glue); 657 + 658 + return 0; 659 + } 660 + 661 + #ifdef CONFIG_PM 662 + static int am35x_suspend(struct device *dev) 663 + { 664 + struct am35x_glue *glue = dev_get_drvdata(dev); 665 + struct musb_hdrc_platform_data *plat = dev->platform_data; 666 + struct omap_musb_board_data *data = plat->board_data; 667 + 668 + /* Shutdown the on-chip PHY and its PLL. */ 669 + if (data->set_phy_power) 670 + data->set_phy_power(0); 671 + 672 + clk_disable(glue->phy_clk); 673 + clk_disable(glue->clk); 674 + 675 + return 0; 676 + } 677 + 678 + static int am35x_resume(struct device *dev) 679 + { 680 + struct am35x_glue *glue = dev_get_drvdata(dev); 681 + struct musb_hdrc_platform_data *plat = dev->platform_data; 682 + struct omap_musb_board_data *data = plat->board_data; 683 + int ret; 684 + 685 + /* Start the on-chip PHY and its PLL. */ 686 + if (data->set_phy_power) 687 + data->set_phy_power(1); 688 + 689 + ret = clk_enable(glue->phy_clk); 690 + if (ret) { 691 + dev_err(dev, "failed to enable PHY clock\n"); 692 + return ret; 693 + } 694 + 695 + ret = clk_enable(glue->clk); 696 + if (ret) { 697 + dev_err(dev, "failed to enable clock\n"); 698 + return ret; 699 + } 700 + 701 + return 0; 702 + } 703 + 704 + static struct dev_pm_ops am35x_pm_ops = { 705 + .suspend = am35x_suspend, 706 + .resume = am35x_resume, 707 + }; 708 + 709 + #define DEV_PM_OPS &am35x_pm_ops 710 + #else 711 + #define DEV_PM_OPS NULL 712 + #endif 713 + 714 + static struct platform_driver am35x_driver = { 715 + .remove = __exit_p(am35x_remove), 716 + .driver = { 717 + .name = "musb-am35x", 718 + .pm = DEV_PM_OPS, 719 + }, 720 + }; 721 + 722 + MODULE_DESCRIPTION("AM35x MUSB Glue Layer"); 723 + MODULE_AUTHOR("Ajay Kumar Gupta <ajay.gupta@ti.com>"); 724 + MODULE_LICENSE("GPL v2"); 725 + 726 + static int __init am35x_init(void) 727 + { 728 + return platform_driver_probe(&am35x_driver, am35x_probe); 729 + } 730 + subsys_initcall(am35x_init); 731 + 732 + static void __exit 
am35x_exit(void) 733 + { 734 + platform_driver_unregister(&am35x_driver); 735 + } 736 + module_exit(am35x_exit);
+196 -33
drivers/usb/musb/blackfin.c
··· 15 15 #include <linux/list.h> 16 16 #include <linux/gpio.h> 17 17 #include <linux/io.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/dma-mapping.h> 18 20 19 21 #include <asm/cacheflush.h> 20 22 21 23 #include "musb_core.h" 22 24 #include "blackfin.h" 25 + 26 + struct bfin_glue { 27 + struct device *dev; 28 + struct platform_device *musb; 29 + }; 30 + #define glue_to_musb(g) platform_get_drvdata(g->musb) 23 31 24 32 /* 25 33 * Load an endpoint's FIFO ··· 286 278 DBG(4, "state is %s\n", otg_state_string(musb)); 287 279 } 288 280 289 - void musb_platform_enable(struct musb *musb) 281 + static void bfin_musb_enable(struct musb *musb) 290 282 { 291 283 if (!is_otg_enabled(musb) && is_host_enabled(musb)) { 292 284 mod_timer(&musb_conn_timer, jiffies + TIMER_DELAY); ··· 294 286 } 295 287 } 296 288 297 - void musb_platform_disable(struct musb *musb) 289 + static void bfin_musb_disable(struct musb *musb) 298 290 { 299 291 } 300 292 301 - static void bfin_set_vbus(struct musb *musb, int is_on) 293 + static void bfin_musb_set_vbus(struct musb *musb, int is_on) 302 294 { 303 295 int value = musb->config->gpio_vrsel_active; 304 296 if (!is_on) ··· 311 303 musb_readb(musb->mregs, MUSB_DEVCTL)); 312 304 } 313 305 314 - static int bfin_set_power(struct otg_transceiver *x, unsigned mA) 306 + static int bfin_musb_set_power(struct otg_transceiver *x, unsigned mA) 315 307 { 316 308 return 0; 317 309 } 318 310 319 - void musb_platform_try_idle(struct musb *musb, unsigned long timeout) 311 + static void bfin_musb_try_idle(struct musb *musb, unsigned long timeout) 320 312 { 321 313 if (!is_otg_enabled(musb) && is_host_enabled(musb)) 322 314 mod_timer(&musb_conn_timer, jiffies + TIMER_DELAY); 323 315 } 324 316 325 - int musb_platform_get_vbus_status(struct musb *musb) 317 + static int bfin_musb_get_vbus_status(struct musb *musb) 326 318 { 327 319 return 0; 328 320 } 329 321 330 - int musb_platform_set_mode(struct musb *musb, u8 musb_mode) 322 + static int 
bfin_musb_set_mode(struct musb *musb, u8 musb_mode) 331 323 { 332 324 return -EIO; 333 325 } 334 326 335 - int __init musb_platform_init(struct musb *musb, void *board_data) 327 + static void bfin_musb_reg_init(struct musb *musb) 336 328 { 337 - 338 - /* 339 - * Rev 1.0 BF549 EZ-KITs require PE7 to be high for both DEVICE 340 - * and OTG HOST modes, while rev 1.1 and greater require PE7 to 341 - * be low for DEVICE mode and high for HOST mode. We set it high 342 - * here because we are in host mode 343 - */ 344 - 345 - if (gpio_request(musb->config->gpio_vrsel, "USB_VRSEL")) { 346 - printk(KERN_ERR "Failed ro request USB_VRSEL GPIO_%d \n", 347 - musb->config->gpio_vrsel); 348 - return -ENODEV; 349 - } 350 - gpio_direction_output(musb->config->gpio_vrsel, 0); 351 - 352 - usb_nop_xceiv_register(); 353 - musb->xceiv = otg_get_transceiver(); 354 - if (!musb->xceiv) { 355 - gpio_free(musb->config->gpio_vrsel); 356 - return -ENODEV; 357 - } 358 - 359 329 if (ANOMALY_05000346) { 360 330 bfin_write_USB_APHY_CALIB(ANOMALY_05000346_value); 361 331 SSYNC(); ··· 368 382 EP2_RX_ENA | EP3_RX_ENA | EP4_RX_ENA | 369 383 EP5_RX_ENA | EP6_RX_ENA | EP7_RX_ENA); 370 384 SSYNC(); 385 + } 386 + 387 + static int bfin_musb_init(struct musb *musb) 388 + { 389 + 390 + /* 391 + * Rev 1.0 BF549 EZ-KITs require PE7 to be high for both DEVICE 392 + * and OTG HOST modes, while rev 1.1 and greater require PE7 to 393 + * be low for DEVICE mode and high for HOST mode. 
We set it high 394 + * here because we are in host mode 395 + */ 396 + 397 + if (gpio_request(musb->config->gpio_vrsel, "USB_VRSEL")) { 398 + printk(KERN_ERR "Failed to request USB_VRSEL GPIO_%d\n", 399 + musb->config->gpio_vrsel); 400 + return -ENODEV; 401 + } 402 + gpio_direction_output(musb->config->gpio_vrsel, 0); 403 + 404 + usb_nop_xceiv_register(); 405 + musb->xceiv = otg_get_transceiver(); 406 + if (!musb->xceiv) { 407 + gpio_free(musb->config->gpio_vrsel); 408 + return -ENODEV; 409 + } 410 + 411 + bfin_musb_reg_init(musb); 371 412 372 413 if (is_host_enabled(musb)) { 373 - musb->board_set_vbus = bfin_set_vbus; 374 414 setup_timer(&musb_conn_timer, 375 415 musb_conn_timer_handler, (unsigned long) musb); 376 416 } 377 417 if (is_peripheral_enabled(musb)) 378 - musb->xceiv->set_power = bfin_set_power; 418 + musb->xceiv->set_power = bfin_musb_set_power; 379 419 380 420 musb->isr = blackfin_interrupt; 381 421 382 422 return 0; 383 423 } 384 424 385 - int musb_platform_exit(struct musb *musb) 425 + static int bfin_musb_exit(struct musb *musb) 386 426 { 387 427 gpio_free(musb->config->gpio_vrsel); 388 428 ··· 416 404 usb_nop_xceiv_unregister(); 417 405 return 0; 418 406 } 407 + 408 + static const struct musb_platform_ops bfin_ops = { 409 + .init = bfin_musb_init, 410 + .exit = bfin_musb_exit, 411 + 412 + .enable = bfin_musb_enable, 413 + .disable = bfin_musb_disable, 414 + 415 + .set_mode = bfin_musb_set_mode, 416 + .try_idle = bfin_musb_try_idle, 417 + 418 + .vbus_status = bfin_musb_get_vbus_status, 419 + .set_vbus = bfin_musb_set_vbus, 420 + }; 421 + 422 + static u64 bfin_dmamask = DMA_BIT_MASK(32); 423 + 424 + static int __init bfin_probe(struct platform_device *pdev) 425 + { 426 + struct musb_hdrc_platform_data *pdata = pdev->dev.platform_data; 427 + struct platform_device *musb; 428 + struct bfin_glue *glue; 429 + 430 + int ret = -ENOMEM; 431 + 432 + glue = kzalloc(sizeof(*glue), GFP_KERNEL); 433 + if (!glue) { 434 + dev_err(&pdev->dev, "failed to allocate glue
context\n"); 435 + goto err0; 436 + } 437 + 438 + musb = platform_device_alloc("musb-hdrc", -1); 439 + if (!musb) { 440 + dev_err(&pdev->dev, "failed to allocate musb device\n"); 441 + goto err1; 442 + } 443 + 444 + musb->dev.parent = &pdev->dev; 445 + musb->dev.dma_mask = &bfin_dmamask; 446 + musb->dev.coherent_dma_mask = bfin_dmamask; 447 + 448 + glue->dev = &pdev->dev; 449 + glue->musb = musb; 450 + 451 + pdata->platform_ops = &bfin_ops; 452 + 453 + platform_set_drvdata(pdev, glue); 454 + 455 + ret = platform_device_add_resources(musb, pdev->resource, 456 + pdev->num_resources); 457 + if (ret) { 458 + dev_err(&pdev->dev, "failed to add resources\n"); 459 + goto err2; 460 + } 461 + 462 + ret = platform_device_add_data(musb, pdata, sizeof(*pdata)); 463 + if (ret) { 464 + dev_err(&pdev->dev, "failed to add platform_data\n"); 465 + goto err2; 466 + } 467 + 468 + ret = platform_device_add(musb); 469 + if (ret) { 470 + dev_err(&pdev->dev, "failed to register musb device\n"); 471 + goto err2; 472 + } 473 + 474 + return 0; 475 + 476 + err2: 477 + platform_device_put(musb); 478 + 479 + err1: 480 + kfree(glue); 481 + 482 + err0: 483 + return ret; 484 + } 485 + 486 + static int __exit bfin_remove(struct platform_device *pdev) 487 + { 488 + struct bfin_glue *glue = platform_get_drvdata(pdev); 489 + 490 + platform_device_del(glue->musb); 491 + platform_device_put(glue->musb); 492 + kfree(glue); 493 + 494 + return 0; 495 + } 496 + 497 + #ifdef CONFIG_PM 498 + static int bfin_suspend(struct device *dev) 499 + { 500 + struct bfin_glue *glue = dev_get_drvdata(dev); 501 + struct musb *musb = glue_to_musb(glue); 502 + 503 + if (is_host_active(musb)) 504 + /* 505 + * During hibernate gpio_vrsel will change from high to low, 506 + * which will generate a wakeup event and resume the system 507 + * immediately. Set it to 0 before hibernate to avoid this 508 + * wakeup event. 
509 + */ 510 + gpio_set_value(musb->config->gpio_vrsel, 0); 511 + 512 + return 0; 513 + } 514 + 515 + static int bfin_resume(struct device *dev) 516 + { 517 + struct bfin_glue *glue = dev_get_drvdata(dev); 518 + struct musb *musb = glue_to_musb(glue); 519 + 520 + bfin_musb_reg_init(musb); 521 + 522 + return 0; 523 + } 524 + 525 + static struct dev_pm_ops bfin_pm_ops = { 526 + .suspend = bfin_suspend, 527 + .resume = bfin_resume, 528 + }; 529 + 530 + #define DEV_PM_OPS &bfin_pm_ops 531 + #else 532 + #define DEV_PM_OPS NULL 533 + #endif 534 + 535 + static struct platform_driver bfin_driver = { 536 + .remove = __exit_p(bfin_remove), 537 + .driver = { 538 + .name = "musb-bfin", 539 + .pm = DEV_PM_OPS, 540 + }, 541 + }; 542 + 543 + MODULE_DESCRIPTION("Blackfin MUSB Glue Layer"); 544 + MODULE_AUTHOR("Bryan Wu <cooloney@kernel.org>"); 545 + MODULE_LICENSE("GPL v2"); 546 + 547 + static int __init bfin_init(void) 548 + { 549 + return platform_driver_probe(&bfin_driver, bfin_probe); 550 + } 551 + subsys_initcall(bfin_init); 552 + 553 + static void __exit bfin_exit(void) 554 + { 555 + platform_driver_unregister(&bfin_driver); 556 + } 557 + module_exit(bfin_exit);
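The blackfin conversion above introduces the two-level drvdata chain that all of these glue layers share: the glue context hangs off the glue platform device, while the core's `struct musb` hangs off the child "musb-hdrc" device, so `glue_to_musb(g)` is simply `platform_get_drvdata(g->musb)`. Below is a minimal userspace sketch of that chain; every `mock_*` type and helper (and `demo()`) is a hypothetical stand-in for the kernel API, not kernel code.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical userspace mocks of the driver-model pieces involved. */
struct mock_device { void *drvdata; };
struct mock_platform_device { struct mock_device dev; const char *name; };

static void mock_set_drvdata(struct mock_platform_device *pdev, void *data)
{
	pdev->dev.drvdata = data;
}

static void *mock_get_drvdata(const struct mock_platform_device *pdev)
{
	return pdev->dev.drvdata;
}

struct mock_glue {
	struct mock_device *dev;		/* the glue pdev's own device */
	struct mock_platform_device *musb;	/* child "musb-hdrc" device */
};

/* Mirrors blackfin.c's glue_to_musb(): glue -> child pdev -> its drvdata. */
#define glue_to_musb(g)	mock_get_drvdata((g)->musb)

static int demo(void)
{
	static int musb_ctx;			/* stand-in for struct musb */
	struct mock_platform_device parent = { .name = "musb-bfin" };
	struct mock_platform_device child = { .name = "musb-hdrc" };
	struct mock_glue glue = { .dev = &parent.dev, .musb = &child };

	mock_set_drvdata(&parent, &glue);	/* glue probe: drvdata = glue */
	mock_set_drvdata(&child, &musb_ctx);	/* core: drvdata = struct musb */

	return glue_to_musb(&glue) == (void *)&musb_ctx;
}
```

This is why `bfin_suspend()` can go from `dev_get_drvdata(dev)` (the glue) to the core's context in one macro hop.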
+1 -1
drivers/usb/musb/cppi_dma.c
··· 1308 1308 struct cppi *controller; 1309 1309 struct device *dev = musb->controller; 1310 1310 struct platform_device *pdev = to_platform_device(dev); 1311 - int irq = platform_get_irq(pdev, 1); 1311 + int irq = platform_get_irq_byname(pdev, "dma"); 1312 1312 1313 1313 controller = kzalloc(sizeof *controller, GFP_KERNEL); 1314 1314 if (!controller)
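The cppi_dma.c hunk above swaps a positional `platform_get_irq(pdev, 1)` for `platform_get_irq_byname(pdev, "dma")`, matching the "DA8xx: assign name to MUSB IRQ resource" patch in this series: once the glue driver copies resources into the child device, a name is more robust than an index. Here is a rough userspace sketch of name-based lookup; the `mock_*` names are hypothetical and this is not the kernel implementation.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a platform device's IRQ resource table. */
struct mock_resource { const char *name; int irq; };

/* Sketch of platform_get_irq_byname() semantics: match by name,
 * negative value on failure (the kernel returns -ENXIO). */
static int mock_get_irq_byname(const struct mock_resource *r, size_t n,
			       const char *name)
{
	for (size_t i = 0; i < n; i++)
		if (strcmp(r[i].name, name) == 0)
			return r[i].irq;
	return -1;
}

/* Example table in the spirit of the DA8xx "mc"/"dma" IRQ names;
 * the numbers are made up. */
static const struct mock_resource demo_res[] = {
	{ "mc", 12 },		/* MUSB core interrupt */
	{ "dma", 13 },		/* CPPI DMA interrupt */
};
```

With this scheme, reordering the resource array no longer silently hands the DMA code the wrong interrupt.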
+153 -17
drivers/usb/musb/da8xx.c
··· 29 29 #include <linux/init.h> 30 30 #include <linux/clk.h> 31 31 #include <linux/io.h> 32 + #include <linux/platform_device.h> 33 + #include <linux/dma-mapping.h> 32 34 33 35 #include <mach/da8xx.h> 34 36 #include <mach/usb.h> ··· 79 77 #define DA8XX_MENTOR_CORE_OFFSET 0x400 80 78 81 79 #define CFGCHIP2 IO_ADDRESS(DA8XX_SYSCFG0_BASE + DA8XX_CFGCHIP2_REG) 80 + 81 + struct da8xx_glue { 82 + struct device *dev; 83 + struct platform_device *musb; 84 + struct clk *clk; 85 + }; 82 86 83 87 /* 84 88 * REVISIT (PM): we should be able to keep the PHY in low power mode most ··· 139 131 */ 140 132 141 133 /** 142 - * musb_platform_enable - enable interrupts 134 + * da8xx_musb_enable - enable interrupts 143 135 */ 144 - void musb_platform_enable(struct musb *musb) 136 + static void da8xx_musb_enable(struct musb *musb) 145 137 { 146 138 void __iomem *reg_base = musb->ctrl_base; 147 139 u32 mask; ··· 159 151 } 160 152 161 153 /** 162 - * musb_platform_disable - disable HDRC and flush interrupts 154 + * da8xx_musb_disable - disable HDRC and flush interrupts 163 155 */ 164 - void musb_platform_disable(struct musb *musb) 156 + static void da8xx_musb_disable(struct musb *musb) 165 157 { 166 158 void __iomem *reg_base = musb->ctrl_base; 167 159 ··· 178 170 #define portstate(stmt) 179 171 #endif 180 172 181 - static void da8xx_set_vbus(struct musb *musb, int is_on) 173 + static void da8xx_musb_set_vbus(struct musb *musb, int is_on) 182 174 { 183 175 WARN_ON(is_on && is_peripheral_active(musb)); 184 176 } ··· 260 252 spin_unlock_irqrestore(&musb->lock, flags); 261 253 } 262 254 263 - void musb_platform_try_idle(struct musb *musb, unsigned long timeout) 255 + static void da8xx_musb_try_idle(struct musb *musb, unsigned long timeout) 264 256 { 265 257 static unsigned long last_timer; 266 258 ··· 290 282 mod_timer(&otg_workaround, timeout); 291 283 } 292 284 293 - static irqreturn_t da8xx_interrupt(int irq, void *hci) 285 + static irqreturn_t da8xx_musb_interrupt(int irq, void *hci) 
294 286 { 295 287 struct musb *musb = hci; 296 288 void __iomem *reg_base = musb->ctrl_base; ··· 388 380 return ret; 389 381 } 390 382 391 - int musb_platform_set_mode(struct musb *musb, u8 musb_mode) 383 + static int da8xx_musb_set_mode(struct musb *musb, u8 musb_mode) 392 384 { 393 385 u32 cfgchip2 = __raw_readl(CFGCHIP2); 394 386 ··· 417 409 return 0; 418 410 } 419 411 420 - int __init musb_platform_init(struct musb *musb, void *board_data) 412 + static int da8xx_musb_init(struct musb *musb) 421 413 { 422 414 void __iomem *reg_base = musb->ctrl_base; 423 415 u32 rev; 424 416 425 417 musb->mregs += DA8XX_MENTOR_CORE_OFFSET; 426 - 427 - clk_enable(musb->clock); 428 418 429 419 /* Returns zero if e.g. not clocked */ 430 420 rev = musb_readl(reg_base, DA8XX_USB_REVISION_REG); ··· 437 431 if (is_host_enabled(musb)) 438 432 setup_timer(&otg_workaround, otg_timer, (unsigned long)musb); 439 433 440 - musb->board_set_vbus = da8xx_set_vbus; 441 - 442 434 /* Reset the controller */ 443 435 musb_writel(reg_base, DA8XX_USB_CTRL_REG, DA8XX_SOFT_RESET_MASK); 444 436 ··· 450 446 rev, __raw_readl(CFGCHIP2), 451 447 musb_readb(reg_base, DA8XX_USB_CTRL_REG)); 452 448 453 - musb->isr = da8xx_interrupt; 449 + musb->isr = da8xx_musb_interrupt; 454 450 return 0; 455 451 fail: 456 - clk_disable(musb->clock); 457 452 return -ENODEV; 458 453 } 459 454 460 - int musb_platform_exit(struct musb *musb) 455 + static int da8xx_musb_exit(struct musb *musb) 461 456 { 462 457 if (is_host_enabled(musb)) 463 458 del_timer_sync(&otg_workaround); ··· 466 463 otg_put_transceiver(musb->xceiv); 467 464 usb_nop_xceiv_unregister(); 468 465 469 - clk_disable(musb->clock); 466 + return 0; 467 + } 468 + 469 + static const struct musb_platform_ops da8xx_ops = { 470 + .init = da8xx_musb_init, 471 + .exit = da8xx_musb_exit, 472 + 473 + .enable = da8xx_musb_enable, 474 + .disable = da8xx_musb_disable, 475 + 476 + .set_mode = da8xx_musb_set_mode, 477 + .try_idle = da8xx_musb_try_idle, 478 + 479 + .set_vbus = 
da8xx_musb_set_vbus, 480 + }; 481 + 482 + static u64 da8xx_dmamask = DMA_BIT_MASK(32); 483 + 484 + static int __init da8xx_probe(struct platform_device *pdev) 485 + { 486 + struct musb_hdrc_platform_data *pdata = pdev->dev.platform_data; 487 + struct platform_device *musb; 488 + struct da8xx_glue *glue; 489 + 490 + struct clk *clk; 491 + 492 + int ret = -ENOMEM; 493 + 494 + glue = kzalloc(sizeof(*glue), GFP_KERNEL); 495 + if (!glue) { 496 + dev_err(&pdev->dev, "failed to allocate glue context\n"); 497 + goto err0; 498 + } 499 + 500 + musb = platform_device_alloc("musb-hdrc", -1); 501 + if (!musb) { 502 + dev_err(&pdev->dev, "failed to allocate musb device\n"); 503 + goto err1; 504 + } 505 + 506 + clk = clk_get(&pdev->dev, "usb20"); 507 + if (IS_ERR(clk)) { 508 + dev_err(&pdev->dev, "failed to get clock\n"); 509 + ret = PTR_ERR(clk); 510 + goto err2; 511 + } 512 + 513 + ret = clk_enable(clk); 514 + if (ret) { 515 + dev_err(&pdev->dev, "failed to enable clock\n"); 516 + goto err3; 517 + } 518 + 519 + musb->dev.parent = &pdev->dev; 520 + musb->dev.dma_mask = &da8xx_dmamask; 521 + musb->dev.coherent_dma_mask = da8xx_dmamask; 522 + 523 + glue->dev = &pdev->dev; 524 + glue->musb = musb; 525 + glue->clk = clk; 526 + 527 + pdata->platform_ops = &da8xx_ops; 528 + 529 + platform_set_drvdata(pdev, glue); 530 + 531 + ret = platform_device_add_resources(musb, pdev->resource, 532 + pdev->num_resources); 533 + if (ret) { 534 + dev_err(&pdev->dev, "failed to add resources\n"); 535 + goto err4; 536 + } 537 + 538 + ret = platform_device_add_data(musb, pdata, sizeof(*pdata)); 539 + if (ret) { 540 + dev_err(&pdev->dev, "failed to add platform_data\n"); 541 + goto err4; 542 + } 543 + 544 + ret = platform_device_add(musb); 545 + if (ret) { 546 + dev_err(&pdev->dev, "failed to register musb device\n"); 547 + goto err4; 548 + } 549 + 550 + return 0; 551 + 552 + err4: 553 + clk_disable(clk); 554 + 555 + err3: 556 + clk_put(clk); 557 + 558 + err2: 559 + platform_device_put(musb); 560 + 561 
+ err1: 562 + kfree(glue); 563 + 564 + err0: 565 + return ret; 566 + } 567 + 568 + static int __exit da8xx_remove(struct platform_device *pdev) 569 + { 570 + struct da8xx_glue *glue = platform_get_drvdata(pdev); 571 + 572 + platform_device_del(glue->musb); 573 + platform_device_put(glue->musb); 574 + clk_disable(glue->clk); 575 + clk_put(glue->clk); 576 + kfree(glue); 470 577 471 578 return 0; 472 579 } 580 + 581 + static struct platform_driver da8xx_driver = { 582 + .remove = __exit_p(da8xx_remove), 583 + .driver = { 584 + .name = "musb-da8xx", 585 + }, 586 + }; 587 + 588 + MODULE_DESCRIPTION("DA8xx/OMAP-L1x MUSB Glue Layer"); 589 + MODULE_AUTHOR("Sergei Shtylyov <sshtylyov@ru.mvista.com>"); 590 + MODULE_LICENSE("GPL v2"); 591 + 592 + static int __init da8xx_init(void) 593 + { 594 + return platform_driver_probe(&da8xx_driver, da8xx_probe); 595 + } 596 + subsys_initcall(da8xx_init); 597 + 598 + static void __exit da8xx_exit(void) 599 + { 600 + platform_driver_unregister(&da8xx_driver); 601 + } 602 + module_exit(da8xx_exit);
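The da8xx conversion above replaces the link-time `musb_platform_*` symbols with a `const` ops table handed over via `pdata->platform_ops`, so the core dispatches through `musb->ops` instead of being compiled against one glue layer. A small self-contained sketch of that indirection follows; all `mock_*`/`demo_*` names are hypothetical.

```c
#include <assert.h>
#include <stddef.h>

struct mock_musb;

/* Hypothetical miniature of struct musb_platform_ops. */
struct mock_platform_ops {
	int (*init)(struct mock_musb *musb);
	int (*exit)(struct mock_musb *musb);
};

struct mock_musb {
	const struct mock_platform_ops *ops;	/* filled from pdata */
	int initialized;
};

static int demo_init(struct mock_musb *musb) { musb->initialized = 1; return 0; }
static int demo_exit(struct mock_musb *musb) { musb->initialized = 0; return 0; }

/* What a glue layer exports, in the spirit of da8xx_ops. */
static const struct mock_platform_ops demo_ops = {
	.init = demo_init,
	.exit = demo_exit,
};

/* Core-side wrapper, in the spirit of musb_platform_init(musb). */
static int mock_platform_init(struct mock_musb *musb)
{
	return (musb->ops && musb->ops->init) ? musb->ops->init(musb) : 0;
}

static int demo(void)
{
	struct mock_musb musb = { .ops = &demo_ops, .initialized = 0 };

	if (mock_platform_init(&musb) || !musb.initialized)
		return 0;
	musb.ops->exit(&musb);		/* mirrors musb_platform_exit() */
	return musb.initialized == 0;
}
```

The payoff is visible in musb_core.c later in this merge: several `#ifdef`-selected platform hooks collapse into one indirect call.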
+154 -20
drivers/usb/musb/davinci.c
··· 30 30 #include <linux/clk.h> 31 31 #include <linux/io.h> 32 32 #include <linux/gpio.h> 33 + #include <linux/platform_device.h> 34 + #include <linux/dma-mapping.h> 33 35 34 36 #include <mach/hardware.h> 35 37 #include <mach/memory.h> ··· 52 50 53 51 #define USB_PHY_CTRL IO_ADDRESS(USBPHY_CTL_PADDR) 54 52 #define DM355_DEEPSLEEP IO_ADDRESS(DM355_DEEPSLEEP_PADDR) 53 + 54 + struct davinci_glue { 55 + struct device *dev; 56 + struct platform_device *musb; 57 + struct clk *clk; 58 + }; 55 59 56 60 /* REVISIT (PM) we should be able to keep the PHY in low power mode most 57 61 * of the time (24 MHZ oscillator and PLL off, etc) by setting POWER.D0 ··· 91 83 92 84 static int dma_off = 1; 93 85 94 - void musb_platform_enable(struct musb *musb) 86 + static void davinci_musb_enable(struct musb *musb) 95 87 { 96 88 u32 tmp, old, val; 97 89 ··· 124 116 /* 125 117 * Disable the HDRC and flush interrupts 126 118 */ 127 - void musb_platform_disable(struct musb *musb) 119 + static void davinci_musb_disable(struct musb *musb) 128 120 { 129 121 /* because we don't set CTRLR.UINT, "important" to: 130 122 * - not read/write INTRUSB/INTRUSBE ··· 175 167 176 168 #endif /* EVM */ 177 169 178 - static void davinci_source_power(struct musb *musb, int is_on, int immediate) 170 + static void davinci_musb_source_power(struct musb *musb, int is_on, int immediate) 179 171 { 180 172 #ifdef CONFIG_MACH_DAVINCI_EVM 181 173 if (is_on) ··· 198 190 #endif 199 191 } 200 192 201 - static void davinci_set_vbus(struct musb *musb, int is_on) 193 + static void davinci_musb_set_vbus(struct musb *musb, int is_on) 202 194 { 203 195 WARN_ON(is_on && is_peripheral_active(musb)); 204 - davinci_source_power(musb, is_on, 0); 196 + davinci_musb_source_power(musb, is_on, 0); 205 197 } 206 198 207 199 ··· 267 259 spin_unlock_irqrestore(&musb->lock, flags); 268 260 } 269 261 270 - static irqreturn_t davinci_interrupt(int irq, void *__hci) 262 + static irqreturn_t davinci_musb_interrupt(int irq, void *__hci) 271 263 { 
272 264 unsigned long flags; 273 265 irqreturn_t retval = IRQ_NONE; ··· 353 345 /* NOTE: this must complete poweron within 100 msec 354 346 * (OTG_TIME_A_WAIT_VRISE) but we don't check for that. 355 347 */ 356 - davinci_source_power(musb, drvvbus, 0); 348 + davinci_musb_source_power(musb, drvvbus, 0); 357 349 DBG(2, "VBUS %s (%s)%s, devctl %02x\n", 358 350 drvvbus ? "on" : "off", 359 351 otg_state_string(musb), ··· 378 370 return retval; 379 371 } 380 372 381 - int musb_platform_set_mode(struct musb *musb, u8 mode) 373 + static int davinci_musb_set_mode(struct musb *musb, u8 mode) 382 374 { 383 375 /* EVM can't do this (right?) */ 384 376 return -EIO; 385 377 } 386 378 387 - int __init musb_platform_init(struct musb *musb, void *board_data) 379 + static int davinci_musb_init(struct musb *musb) 388 380 { 389 381 void __iomem *tibase = musb->ctrl_base; 390 382 u32 revision; ··· 396 388 397 389 musb->mregs += DAVINCI_BASE_OFFSET; 398 390 399 - clk_enable(musb->clock); 400 - 401 391 /* returns zero if e.g. not clocked */ 402 392 revision = musb_readl(tibase, DAVINCI_USB_VERSION_REG); 403 393 if (revision == 0) ··· 404 398 if (is_host_enabled(musb)) 405 399 setup_timer(&otg_workaround, otg_timer, (unsigned long) musb); 406 400 407 - musb->board_set_vbus = davinci_set_vbus; 408 - davinci_source_power(musb, 0, 1); 401 + davinci_musb_source_power(musb, 0, 1); 409 402 410 403 /* dm355 EVM swaps D+/D- for signal integrity, and 411 404 * is clocked from the main 24 MHz crystal. 
··· 445 440 revision, __raw_readl(USB_PHY_CTRL), 446 441 musb_readb(tibase, DAVINCI_USB_CTRL_REG)); 447 442 448 - musb->isr = davinci_interrupt; 443 + musb->isr = davinci_musb_interrupt; 449 444 return 0; 450 445 451 446 fail: 452 - clk_disable(musb->clock); 453 - 454 447 otg_put_transceiver(musb->xceiv); 455 448 usb_nop_xceiv_unregister(); 456 449 return -ENODEV; 457 450 } 458 451 459 - int musb_platform_exit(struct musb *musb) 452 + static int davinci_musb_exit(struct musb *musb) 460 453 { 461 454 if (is_host_enabled(musb)) 462 455 del_timer_sync(&otg_workaround); ··· 468 465 __raw_writel(deepsleep, DM355_DEEPSLEEP); 469 466 } 470 467 471 - davinci_source_power(musb, 0 /*off*/, 1); 468 + davinci_musb_source_power(musb, 0 /*off*/, 1); 472 469 473 470 /* delay, to avoid problems with module reload */ 474 471 if (is_host_enabled(musb) && musb->xceiv->default_a) { ··· 498 495 499 496 phy_off(); 500 497 501 - clk_disable(musb->clock); 502 - 503 498 otg_put_transceiver(musb->xceiv); 504 499 usb_nop_xceiv_unregister(); 505 500 506 501 return 0; 507 502 } 503 + 504 + static const struct musb_platform_ops davinci_ops = { 505 + .init = davinci_musb_init, 506 + .exit = davinci_musb_exit, 507 + 508 + .enable = davinci_musb_enable, 509 + .disable = davinci_musb_disable, 510 + 511 + .set_mode = davinci_musb_set_mode, 512 + 513 + .set_vbus = davinci_musb_set_vbus, 514 + }; 515 + 516 + static u64 davinci_dmamask = DMA_BIT_MASK(32); 517 + 518 + static int __init davinci_probe(struct platform_device *pdev) 519 + { 520 + struct musb_hdrc_platform_data *pdata = pdev->dev.platform_data; 521 + struct platform_device *musb; 522 + struct davinci_glue *glue; 523 + struct clk *clk; 524 + 525 + int ret = -ENOMEM; 526 + 527 + glue = kzalloc(sizeof(*glue), GFP_KERNEL); 528 + if (!glue) { 529 + dev_err(&pdev->dev, "failed to allocate glue context\n"); 530 + goto err0; 531 + } 532 + 533 + musb = platform_device_alloc("musb-hdrc", -1); 534 + if (!musb) { 535 + dev_err(&pdev->dev, "failed to 
allocate musb device\n"); 536 + goto err1; 537 + } 538 + 539 + clk = clk_get(&pdev->dev, "usb"); 540 + if (IS_ERR(clk)) { 541 + dev_err(&pdev->dev, "failed to get clock\n"); 542 + ret = PTR_ERR(clk); 543 + goto err2; 544 + } 545 + 546 + ret = clk_enable(clk); 547 + if (ret) { 548 + dev_err(&pdev->dev, "failed to enable clock\n"); 549 + goto err3; 550 + } 551 + 552 + musb->dev.parent = &pdev->dev; 553 + musb->dev.dma_mask = &davinci_dmamask; 554 + musb->dev.coherent_dma_mask = davinci_dmamask; 555 + 556 + glue->dev = &pdev->dev; 557 + glue->musb = musb; 558 + glue->clk = clk; 559 + 560 + pdata->platform_ops = &davinci_ops; 561 + 562 + platform_set_drvdata(pdev, glue); 563 + 564 + ret = platform_device_add_resources(musb, pdev->resource, 565 + pdev->num_resources); 566 + if (ret) { 567 + dev_err(&pdev->dev, "failed to add resources\n"); 568 + goto err4; 569 + } 570 + 571 + ret = platform_device_add_data(musb, pdata, sizeof(*pdata)); 572 + if (ret) { 573 + dev_err(&pdev->dev, "failed to add platform_data\n"); 574 + goto err4; 575 + } 576 + 577 + ret = platform_device_add(musb); 578 + if (ret) { 579 + dev_err(&pdev->dev, "failed to register musb device\n"); 580 + goto err4; 581 + } 582 + 583 + return 0; 584 + 585 + err4: 586 + clk_disable(clk); 587 + 588 + err3: 589 + clk_put(clk); 590 + 591 + err2: 592 + platform_device_put(musb); 593 + 594 + err1: 595 + kfree(glue); 596 + 597 + err0: 598 + return ret; 599 + } 600 + 601 + static int __exit davinci_remove(struct platform_device *pdev) 602 + { 603 + struct davinci_glue *glue = platform_get_drvdata(pdev); 604 + 605 + platform_device_del(glue->musb); 606 + platform_device_put(glue->musb); 607 + clk_disable(glue->clk); 608 + clk_put(glue->clk); 609 + kfree(glue); 610 + 611 + return 0; 612 + } 613 + 614 + static struct platform_driver davinci_driver = { 615 + .remove = __exit_p(davinci_remove), 616 + .driver = { 617 + .name = "musb-davinci", 618 + }, 619 + }; 620 + 621 + MODULE_DESCRIPTION("DaVinci MUSB Glue Layer"); 622 + 
MODULE_AUTHOR("Felipe Balbi <balbi@ti.com>"); 623 + MODULE_LICENSE("GPL v2"); 624 + 625 + static int __init davinci_init(void) 626 + { 627 + return platform_driver_probe(&davinci_driver, davinci_probe); 628 + } 629 + subsys_initcall(davinci_init); 630 + 631 + static void __exit davinci_exit(void) 632 + { 633 + platform_driver_unregister(&davinci_driver); 634 + } 635 + module_exit(davinci_exit);
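Every probe function in this series (am35x, blackfin, da8xx, davinci) unwinds with the kernel's usual `errN` goto ladder: each label undoes exactly one acquisition and falls through to the labels below it, so a failure after N successful steps releases them in reverse order. A compact userspace sketch of that fall-through ordering, with `record()` as a hypothetical stand-in for calls like `clk_disable()`/`clk_put()`:

```c
#include <assert.h>

static int log_pos;
static char log_buf[8];

static void record(char c)	/* stand-in for one cleanup call */
{
	log_buf[log_pos++] = c;
}

/* fail_after = how many setup steps succeed before one fails;
 * a large value means the whole probe succeeds. */
static int mock_probe(int fail_after)
{
	int step = 0;

	if (++step > fail_after)	/* e.g. kzalloc() fails */
		goto err0;
	if (++step > fail_after)	/* e.g. clk_get() fails */
		goto err1;
	if (++step > fail_after)	/* e.g. clk_enable() fails */
		goto err2;
	return 0;

err2:
	record('B');	/* undo step 2, e.g. clk_put() */
err1:
	record('A');	/* undo step 1, e.g. kfree() */
err0:
	return -1;
}
```

The fall-through is the whole point: `err2` runs its own cleanup and then everything below it, which is why the labels must appear in reverse order of acquisition.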
+72 -123
drivers/usb/musb/musb_core.c
··· 99 99 #include <linux/platform_device.h> 100 100 #include <linux/io.h> 101 101 102 - #ifdef CONFIG_ARM 103 - #include <mach/hardware.h> 104 - #include <mach/memory.h> 105 - #include <asm/mach-types.h> 106 - #endif 107 - 108 102 #include "musb_core.h" 109 - 110 - 111 - #ifdef CONFIG_ARCH_DAVINCI 112 - #include "davinci.h" 113 - #endif 114 103 115 104 #define TA_WAIT_BCON(m) max_t(int, (m)->a_wait_bcon, OTG_TIME_A_WAIT_BCON) 116 105 ··· 115 126 116 127 #define DRIVER_INFO DRIVER_DESC ", v" MUSB_VERSION 117 128 118 - #define MUSB_DRIVER_NAME "musb_hdrc" 129 + #define MUSB_DRIVER_NAME "musb-hdrc" 119 130 const char musb_driver_name[] = MUSB_DRIVER_NAME; 120 131 121 132 MODULE_DESCRIPTION(DRIVER_INFO); ··· 219 230 220 231 /*-------------------------------------------------------------------------*/ 221 232 222 - #if !defined(CONFIG_USB_TUSB6010) && !defined(CONFIG_BLACKFIN) 233 + #if !defined(CONFIG_USB_MUSB_TUSB6010) && !defined(CONFIG_USB_MUSB_BLACKFIN) 223 234 224 235 /* 225 236 * Load an endpoint's FIFO ··· 379 390 case OTG_STATE_A_SUSPEND: 380 391 case OTG_STATE_A_WAIT_BCON: 381 392 DBG(1, "HNP: %s timeout\n", otg_state_string(musb)); 382 - musb_set_vbus(musb, 0); 393 + musb_platform_set_vbus(musb, 0); 383 394 musb->xceiv->state = OTG_STATE_A_WAIT_VFALL; 384 395 break; 385 396 default: ··· 560 571 musb->ep0_stage = MUSB_EP0_START; 561 572 musb->xceiv->state = OTG_STATE_A_IDLE; 562 573 MUSB_HST_MODE(musb); 563 - musb_set_vbus(musb, 1); 574 + musb_platform_set_vbus(musb, 1); 564 575 565 576 handled = IRQ_HANDLED; 566 577 } ··· 631 642 632 643 /* go through A_WAIT_VFALL then start a new session */ 633 644 if (!ignore) 634 - musb_set_vbus(musb, 0); 645 + musb_platform_set_vbus(musb, 0); 635 646 handled = IRQ_HANDLED; 636 647 } 637 648 ··· 1038 1049 spin_lock_irqsave(&musb->lock, flags); 1039 1050 musb_platform_disable(musb); 1040 1051 musb_generic_disable(musb); 1041 - if (musb->clock) 1042 - clk_put(musb->clock); 1043 1052 spin_unlock_irqrestore(&musb->lock, 
flags); 1044 1053 1045 1054 if (!is_otg_enabled(musb) && is_host_enabled(musb)) ··· 1061 1074 * We don't currently use dynamic fifo setup capability to do anything 1062 1075 * more than selecting one of a bunch of predefined configurations. 1063 1076 */ 1064 - #if defined(CONFIG_USB_TUSB6010) || \ 1065 - defined(CONFIG_ARCH_OMAP2430) || defined(CONFIG_ARCH_OMAP3) \ 1066 - || defined(CONFIG_ARCH_OMAP4) 1077 + #if defined(CONFIG_USB_MUSB_TUSB6010) || defined(CONFIG_USB_MUSB_OMAP2PLUS) \ 1078 + || defined(CONFIG_USB_MUSB_AM35X) 1067 1079 static ushort __initdata fifo_mode = 4; 1080 + #elif defined(CONFIG_USB_MUSB_UX500) 1081 + static ushort __initdata fifo_mode = 5; 1068 1082 #else 1069 1083 static ushort __initdata fifo_mode = 2; 1070 1084 #endif ··· 1489 1501 struct musb_hw_ep *hw_ep = musb->endpoints + i; 1490 1502 1491 1503 hw_ep->fifo = MUSB_FIFO_OFFSET(i) + mbase; 1492 - #ifdef CONFIG_USB_TUSB6010 1504 + #ifdef CONFIG_USB_MUSB_TUSB6010 1493 1505 hw_ep->fifo_async = musb->async + 0x400 + MUSB_FIFO_OFFSET(i); 1494 1506 hw_ep->fifo_sync = musb->sync + 0x400 + MUSB_FIFO_OFFSET(i); 1495 1507 hw_ep->fifo_sync_va = ··· 1536 1548 /*-------------------------------------------------------------------------*/ 1537 1549 1538 1550 #if defined(CONFIG_ARCH_OMAP2430) || defined(CONFIG_ARCH_OMAP3430) || \ 1539 - defined(CONFIG_ARCH_OMAP4) 1551 + defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_ARCH_U8500) || \ 1552 + defined(CONFIG_ARCH_U5500) 1540 1553 1541 1554 static irqreturn_t generic_interrupt(int irq, void *__hci) 1542 1555 { ··· 1893 1904 } 1894 1905 1895 1906 musb->controller = dev; 1907 + 1896 1908 return musb; 1897 1909 } 1898 1910 ··· 1990 2000 spin_lock_init(&musb->lock); 1991 2001 musb->board_mode = plat->mode; 1992 2002 musb->board_set_power = plat->set_power; 1993 - musb->set_clock = plat->set_clock; 1994 2003 musb->min_power = plat->min_power; 1995 - 1996 - /* Clock usage is chip-specific ... 
functional clock (DaVinci, 1997 - * OMAP2430), or PHY ref (some TUSB6010 boards). All this core 1998 - * code does is make sure a clock handle is available; platform 1999 - * code manages it during start/stop and suspend/resume. 2000 - */ 2001 - if (plat->clock) { 2002 - musb->clock = clk_get(dev, plat->clock); 2003 - if (IS_ERR(musb->clock)) { 2004 - status = PTR_ERR(musb->clock); 2005 - musb->clock = NULL; 2006 - goto fail1; 2007 - } 2008 - } 2004 + musb->ops = plat->platform_ops; 2009 2005 2010 2006 /* The musb_platform_init() call: 2011 2007 * - adjusts musb->mregs and musb->isr if needed, 2012 2008 * - may initialize an integrated tranceiver 2013 2009 * - initializes musb->xceiv, usually by otg_get_transceiver() 2014 - * - activates clocks. 2015 2010 * - stops powering VBUS 2016 - * - assigns musb->board_set_vbus if host mode is enabled 2017 2011 * 2018 2012 * There are various transciever configurations. Blackfin, 2019 2013 * DaVinci, TUSB60x0, and others integrate them. OMAP3 uses ··· 2005 2031 * isp1504, non-OTG, etc) mostly hooking up through ULPI. 
2006 2032 */ 2007 2033 musb->isr = generic_interrupt; 2008 - status = musb_platform_init(musb, plat->board_data); 2034 + status = musb_platform_init(musb); 2009 2035 if (status < 0) 2010 - goto fail2; 2036 + goto fail1; 2011 2037 2012 2038 if (!musb->isr) { 2013 2039 status = -ENODEV; ··· 2160 2186 device_init_wakeup(dev, 0); 2161 2187 musb_platform_exit(musb); 2162 2188 2163 - fail2: 2164 - if (musb->clock) 2165 - clk_put(musb->clock); 2166 - 2167 2189 fail1: 2168 2190 dev_err(musb->controller, 2169 2191 "musb_init_controller failed with status %d\n", status); ··· 2185 2215 static int __init musb_probe(struct platform_device *pdev) 2186 2216 { 2187 2217 struct device *dev = &pdev->dev; 2188 - int irq = platform_get_irq(pdev, 0); 2218 + int irq = platform_get_irq_byname(pdev, "mc"); 2189 2219 int status; 2190 2220 struct resource *iomem; 2191 2221 void __iomem *base; ··· 2235 2265 2236 2266 #ifdef CONFIG_PM 2237 2267 2238 - static struct musb_context_registers musb_context; 2239 - 2240 - void musb_save_context(struct musb *musb) 2268 + static void musb_save_context(struct musb *musb) 2241 2269 { 2242 2270 int i; 2243 2271 void __iomem *musb_base = musb->mregs; 2244 2272 void __iomem *epio; 2245 2273 2246 2274 if (is_host_enabled(musb)) { 2247 - musb_context.frame = musb_readw(musb_base, MUSB_FRAME); 2248 - musb_context.testmode = musb_readb(musb_base, MUSB_TESTMODE); 2249 - musb_context.busctl = musb_read_ulpi_buscontrol(musb->mregs); 2275 + musb->context.frame = musb_readw(musb_base, MUSB_FRAME); 2276 + musb->context.testmode = musb_readb(musb_base, MUSB_TESTMODE); 2277 + musb->context.busctl = musb_read_ulpi_buscontrol(musb->mregs); 2250 2278 } 2251 - musb_context.power = musb_readb(musb_base, MUSB_POWER); 2252 - musb_context.intrtxe = musb_readw(musb_base, MUSB_INTRTXE); 2253 - musb_context.intrrxe = musb_readw(musb_base, MUSB_INTRRXE); 2254 - musb_context.intrusbe = musb_readb(musb_base, MUSB_INTRUSBE); 2255 - musb_context.index = musb_readb(musb_base, 
MUSB_INDEX); 2256 - musb_context.devctl = musb_readb(musb_base, MUSB_DEVCTL); 2279 + musb->context.power = musb_readb(musb_base, MUSB_POWER); 2280 + musb->context.intrtxe = musb_readw(musb_base, MUSB_INTRTXE); 2281 + musb->context.intrrxe = musb_readw(musb_base, MUSB_INTRRXE); 2282 + musb->context.intrusbe = musb_readb(musb_base, MUSB_INTRUSBE); 2283 + musb->context.index = musb_readb(musb_base, MUSB_INDEX); 2284 + musb->context.devctl = musb_readb(musb_base, MUSB_DEVCTL); 2257 2285 2258 2286 for (i = 0; i < musb->config->num_eps; ++i) { 2259 2287 epio = musb->endpoints[i].regs; 2260 - musb_context.index_regs[i].txmaxp = 2288 + musb->context.index_regs[i].txmaxp = 2261 2289 musb_readw(epio, MUSB_TXMAXP); 2262 - musb_context.index_regs[i].txcsr = 2290 + musb->context.index_regs[i].txcsr = 2263 2291 musb_readw(epio, MUSB_TXCSR); 2264 - musb_context.index_regs[i].rxmaxp = 2292 + musb->context.index_regs[i].rxmaxp = 2265 2293 musb_readw(epio, MUSB_RXMAXP); 2266 - musb_context.index_regs[i].rxcsr = 2294 + musb->context.index_regs[i].rxcsr = 2267 2295 musb_readw(epio, MUSB_RXCSR); 2268 2296 2269 2297 if (musb->dyn_fifo) { 2270 - musb_context.index_regs[i].txfifoadd = 2298 + musb->context.index_regs[i].txfifoadd = 2271 2299 musb_read_txfifoadd(musb_base); 2272 - musb_context.index_regs[i].rxfifoadd = 2300 + musb->context.index_regs[i].rxfifoadd = 2273 2301 musb_read_rxfifoadd(musb_base); 2274 - musb_context.index_regs[i].txfifosz = 2302 + musb->context.index_regs[i].txfifosz = 2275 2303 musb_read_txfifosz(musb_base); 2276 - musb_context.index_regs[i].rxfifosz = 2304 + musb->context.index_regs[i].rxfifosz = 2277 2305 musb_read_rxfifosz(musb_base); 2278 2306 } 2279 2307 if (is_host_enabled(musb)) { 2280 - musb_context.index_regs[i].txtype = 2308 + musb->context.index_regs[i].txtype = 2281 2309 musb_readb(epio, MUSB_TXTYPE); 2282 - musb_context.index_regs[i].txinterval = 2310 + musb->context.index_regs[i].txinterval = 2283 2311 musb_readb(epio, MUSB_TXINTERVAL); 2284 - 
musb_context.index_regs[i].rxtype = 2312 + musb->context.index_regs[i].rxtype = 2285 2313 musb_readb(epio, MUSB_RXTYPE); 2286 - musb_context.index_regs[i].rxinterval = 2314 + musb->context.index_regs[i].rxinterval = 2287 2315 musb_readb(epio, MUSB_RXINTERVAL); 2288 2316 2289 - musb_context.index_regs[i].txfunaddr = 2317 + musb->context.index_regs[i].txfunaddr = 2290 2318 musb_read_txfunaddr(musb_base, i); 2291 - musb_context.index_regs[i].txhubaddr = 2319 + musb->context.index_regs[i].txhubaddr = 2292 2320 musb_read_txhubaddr(musb_base, i); 2293 - musb_context.index_regs[i].txhubport = 2321 + musb->context.index_regs[i].txhubport = 2294 2322 musb_read_txhubport(musb_base, i); 2295 2323 2296 - musb_context.index_regs[i].rxfunaddr = 2324 + musb->context.index_regs[i].rxfunaddr = 2297 2325 musb_read_rxfunaddr(musb_base, i); 2298 - musb_context.index_regs[i].rxhubaddr = 2326 + musb->context.index_regs[i].rxhubaddr = 2299 2327 musb_read_rxhubaddr(musb_base, i); 2300 - musb_context.index_regs[i].rxhubport = 2328 + musb->context.index_regs[i].rxhubport = 2301 2329 musb_read_rxhubport(musb_base, i); 2302 2330 } 2303 2331 } 2304 - 2305 - musb_platform_save_context(musb, &musb_context); 2306 2332 } 2307 2333 2308 - void musb_restore_context(struct musb *musb) 2334 + static void musb_restore_context(struct musb *musb) 2309 2335 { 2310 2336 int i; 2311 2337 void __iomem *musb_base = musb->mregs; 2312 2338 void __iomem *ep_target_regs; 2313 2339 void __iomem *epio; 2314 2340 2315 - musb_platform_restore_context(musb, &musb_context); 2316 - 2317 2341 if (is_host_enabled(musb)) { 2318 - musb_writew(musb_base, MUSB_FRAME, musb_context.frame); 2319 - musb_writeb(musb_base, MUSB_TESTMODE, musb_context.testmode); 2320 - musb_write_ulpi_buscontrol(musb->mregs, musb_context.busctl); 2342 + musb_writew(musb_base, MUSB_FRAME, musb->context.frame); 2343 + musb_writeb(musb_base, MUSB_TESTMODE, musb->context.testmode); 2344 + musb_write_ulpi_buscontrol(musb->mregs, musb->context.busctl); 
2321 2345 } 2322 - musb_writeb(musb_base, MUSB_POWER, musb_context.power); 2323 - musb_writew(musb_base, MUSB_INTRTXE, musb_context.intrtxe); 2324 - musb_writew(musb_base, MUSB_INTRRXE, musb_context.intrrxe); 2325 - musb_writeb(musb_base, MUSB_INTRUSBE, musb_context.intrusbe); 2326 - musb_writeb(musb_base, MUSB_DEVCTL, musb_context.devctl); 2346 + musb_writeb(musb_base, MUSB_POWER, musb->context.power); 2347 + musb_writew(musb_base, MUSB_INTRTXE, musb->context.intrtxe); 2348 + musb_writew(musb_base, MUSB_INTRRXE, musb->context.intrrxe); 2349 + musb_writeb(musb_base, MUSB_INTRUSBE, musb->context.intrusbe); 2350 + musb_writeb(musb_base, MUSB_DEVCTL, musb->context.devctl); 2327 2351 2328 2352 for (i = 0; i < musb->config->num_eps; ++i) { 2329 2353 epio = musb->endpoints[i].regs; 2330 2354 musb_writew(epio, MUSB_TXMAXP, 2331 - musb_context.index_regs[i].txmaxp); 2355 + musb->context.index_regs[i].txmaxp); 2332 2356 musb_writew(epio, MUSB_TXCSR, 2333 - musb_context.index_regs[i].txcsr); 2357 + musb->context.index_regs[i].txcsr); 2334 2358 musb_writew(epio, MUSB_RXMAXP, 2335 - musb_context.index_regs[i].rxmaxp); 2359 + musb->context.index_regs[i].rxmaxp); 2336 2360 musb_writew(epio, MUSB_RXCSR, 2337 - musb_context.index_regs[i].rxcsr); 2361 + musb->context.index_regs[i].rxcsr); 2338 2362 2339 2363 if (musb->dyn_fifo) { 2340 2364 musb_write_txfifosz(musb_base, 2341 - musb_context.index_regs[i].txfifosz); 2365 + musb->context.index_regs[i].txfifosz); 2342 2366 musb_write_rxfifosz(musb_base, 2343 - musb_context.index_regs[i].rxfifosz); 2367 + musb->context.index_regs[i].rxfifosz); 2344 2368 musb_write_txfifoadd(musb_base, 2345 - musb_context.index_regs[i].txfifoadd); 2369 + musb->context.index_regs[i].txfifoadd); 2346 2370 musb_write_rxfifoadd(musb_base, 2347 - musb_context.index_regs[i].rxfifoadd); 2371 + musb->context.index_regs[i].rxfifoadd); 2348 2372 } 2349 2373 2350 2374 if (is_host_enabled(musb)) { 2351 2375 musb_writeb(epio, MUSB_TXTYPE, 2352 - 
musb_context.index_regs[i].txtype); 2376 + musb->context.index_regs[i].txtype); 2353 2377 musb_writeb(epio, MUSB_TXINTERVAL, 2354 - musb_context.index_regs[i].txinterval); 2378 + musb->context.index_regs[i].txinterval); 2355 2379 musb_writeb(epio, MUSB_RXTYPE, 2356 - musb_context.index_regs[i].rxtype); 2380 + musb->context.index_regs[i].rxtype); 2357 2381 musb_writeb(epio, MUSB_RXINTERVAL, 2358 2382 2359 - musb_context.index_regs[i].rxinterval); 2383 + musb->context.index_regs[i].rxinterval); 2360 2384 musb_write_txfunaddr(musb_base, i, 2361 - musb_context.index_regs[i].txfunaddr); 2385 + musb->context.index_regs[i].txfunaddr); 2362 2386 musb_write_txhubaddr(musb_base, i, 2363 - musb_context.index_regs[i].txhubaddr); 2387 + musb->context.index_regs[i].txhubaddr); 2364 2388 musb_write_txhubport(musb_base, i, 2365 - musb_context.index_regs[i].txhubport); 2389 + musb->context.index_regs[i].txhubport); 2366 2390 2367 2391 ep_target_regs = 2368 2392 musb_read_target_reg_base(i, musb_base); 2369 2393 2370 2394 musb_write_rxfunaddr(ep_target_regs, 2371 - musb_context.index_regs[i].rxfunaddr); 2395 + musb->context.index_regs[i].rxfunaddr); 2372 2396 musb_write_rxhubaddr(ep_target_regs, 2373 - musb_context.index_regs[i].rxhubaddr); 2397 + musb->context.index_regs[i].rxhubaddr); 2374 2398 musb_write_rxhubport(ep_target_regs, 2375 - musb_context.index_regs[i].rxhubport); 2399 + musb->context.index_regs[i].rxhubport); 2376 2400 } 2377 2401 } 2378 2402 } ··· 2376 2412 struct platform_device *pdev = to_platform_device(dev); 2377 2413 unsigned long flags; 2378 2414 struct musb *musb = dev_to_musb(&pdev->dev); 2379 - 2380 - if (!musb->clock) 2381 - return 0; 2382 2415 2383 2416 spin_lock_irqsave(&musb->lock, flags); 2384 2417 ··· 2391 2430 2392 2431 musb_save_context(musb); 2393 2432 2394 - if (musb->set_clock) 2395 - musb->set_clock(musb->clock, 0); 2396 - else 2397 - clk_disable(musb->clock); 2398 2433 spin_unlock_irqrestore(&musb->lock, flags); 2399 2434 return 0; 2400 2435 } 
··· 2399 2442 { 2400 2443 struct platform_device *pdev = to_platform_device(dev); 2401 2444 struct musb *musb = dev_to_musb(&pdev->dev); 2402 - 2403 - if (!musb->clock) 2404 - return 0; 2405 - 2406 - if (musb->set_clock) 2407 - musb->set_clock(musb->clock, 1); 2408 - else 2409 - clk_enable(musb->clock); 2410 2445 2411 2446 musb_restore_context(musb); 2412 2447
+112 -78
drivers/usb/musb/musb_core.h
···
222 222 #endif
223 223 
224 224 /* TUSB mapping: "flat" plus ep0 special cases */
225 - #if defined(CONFIG_USB_TUSB6010)
225 + #if defined(CONFIG_USB_MUSB_TUSB6010)
226 226 #define musb_ep_select(_mbase, _epnum) \
227 227 musb_writeb((_mbase), MUSB_INDEX, (_epnum))
228 228 #define MUSB_EP_OFFSET MUSB_TUSB_OFFSET
···
253 253 
254 254 /******************************** TYPES *************************************/
255 255 
256 + /**
257 + * struct musb_platform_ops - Operations passed to musb_core by HW glue layer
258 + * @init: turns on clocks, sets up platform-specific registers, etc
259 + * @exit: undoes @init
260 + * @set_mode: forcefully changes operating mode
261 + * @try_idle: tries to idle the IP
262 + * @vbus_status: returns vbus status if possible
263 + * @set_vbus: forces vbus status
264 + */
265 + struct musb_platform_ops {
266 + 	int	(*init)(struct musb *musb);
267 + 	int	(*exit)(struct musb *musb);
268 + 
269 + 	void	(*enable)(struct musb *musb);
270 + 	void	(*disable)(struct musb *musb);
271 + 
272 + 	int	(*set_mode)(struct musb *musb, u8 mode);
273 + 	void	(*try_idle)(struct musb *musb, unsigned long timeout);
274 + 
275 + 	int	(*vbus_status)(struct musb *musb);
276 + 	void	(*set_vbus)(struct musb *musb, int on);
277 + };
278 + 
256 279 /*
257 280  * struct musb_hw_ep - endpoint hardware (bidirectional)
258 281  *
···
286 263 	void __iomem		*fifo;
287 264 	void __iomem		*regs;
288 265 
289 - #ifdef CONFIG_USB_TUSB6010
266 + #ifdef CONFIG_USB_MUSB_TUSB6010
290 267 	void __iomem		*conf;
291 268 #endif
292 269 
···
303 280 	struct dma_channel	*tx_channel;
304 281 	struct dma_channel	*rx_channel;
305 282 
306 - #ifdef CONFIG_USB_TUSB6010
283 + #ifdef CONFIG_USB_MUSB_TUSB6010
307 284 	/* TUSB has "asynchronous" and "synchronous" dma modes */
308 285 	dma_addr_t		fifo_async;
309 286 	dma_addr_t		fifo_sync;
···
346 323 #endif
347 324 }
348 325 
326 + struct musb_csr_regs {
327 + 	/* FIFO registers */
328 + 	u16 txmaxp, txcsr, rxmaxp, rxcsr;
329 + 	u16 rxfifoadd, txfifoadd;
330 + 	u8 txtype, txinterval, rxtype, rxinterval;
331 + 	u8 rxfifosz, txfifosz;
332 + 	u8 txfunaddr, txhubaddr, txhubport;
333 + 	u8 rxfunaddr, rxhubaddr, rxhubport;
334 + };
335 + 
336 + struct musb_context_registers {
337 + 
338 + #if defined(CONFIG_ARCH_OMAP2430) || defined(CONFIG_ARCH_OMAP3) || \
339 + 	defined(CONFIG_ARCH_OMAP4)
340 + 	u32 otg_sysconfig, otg_forcestandby;
341 + #endif
342 + 	u8 power;
343 + 	u16 intrtxe, intrrxe;
344 + 	u8 intrusbe;
345 + 	u16 frame;
346 + 	u8 index, testmode;
347 + 
348 + 	u8 devctl, busctl, misc;
349 + 
350 + 	struct musb_csr_regs index_regs[MUSB_C_NUM_EPS];
351 + };
352 + 
349 353 /*
350 354  * struct musb - Driver instance data.
351 355  */
352 356 struct musb {
353 357 	/* device lock */
354 358 	spinlock_t		lock;
355 - 	struct clk		*clock;
356 - 	struct clk		*phy_clock;
359 + 
360 + 	const struct musb_platform_ops *ops;
361 + 	struct musb_context_registers context;
362 + 
357 363 	irqreturn_t		(*isr)(int, void *);
358 364 	struct work_struct	irq_work;
359 365 	u16			hwvers;
···
411 359 
412 360 	struct timer_list	otg_timer;
413 361 #endif
414 - 
415 - 	/* called with IRQs blocked; ON/nonzero implies starting a session,
416 - 	 * and waiting at least a_wait_vrise_tmout.
417 - */ 418 - void (*board_set_vbus)(struct musb *, int is_on); 362 + struct notifier_block nb; 419 363 420 364 struct dma_controller *dma_controller; 421 365 ··· 419 371 void __iomem *ctrl_base; 420 372 void __iomem *mregs; 421 373 422 - #ifdef CONFIG_USB_TUSB6010 374 + #ifdef CONFIG_USB_MUSB_TUSB6010 423 375 dma_addr_t async; 424 376 dma_addr_t sync; 425 377 void __iomem *sync_va; ··· 445 397 446 398 u8 board_mode; /* enum musb_mode */ 447 399 int (*board_set_power)(int state); 448 - 449 - int (*set_clock)(struct clk *clk, int is_active); 450 400 451 401 u8 min_power; /* vbus for periph, in mA/2 */ 452 402 ··· 503 457 struct proc_dir_entry *proc_entry; 504 458 #endif 505 459 }; 506 - 507 - #ifdef CONFIG_PM 508 - struct musb_csr_regs { 509 - /* FIFO registers */ 510 - u16 txmaxp, txcsr, rxmaxp, rxcsr; 511 - u16 rxfifoadd, txfifoadd; 512 - u8 txtype, txinterval, rxtype, rxinterval; 513 - u8 rxfifosz, txfifosz; 514 - u8 txfunaddr, txhubaddr, txhubport; 515 - u8 rxfunaddr, rxhubaddr, rxhubport; 516 - }; 517 - 518 - struct musb_context_registers { 519 - 520 - #if defined(CONFIG_ARCH_OMAP2430) || defined(CONFIG_ARCH_OMAP3) || \ 521 - defined(CONFIG_ARCH_OMAP4) 522 - u32 otg_sysconfig, otg_forcestandby; 523 - #endif 524 - u8 power; 525 - u16 intrtxe, intrrxe; 526 - u8 intrusbe; 527 - u16 frame; 528 - u8 index, testmode; 529 - 530 - u8 devctl, busctl, misc; 531 - 532 - struct musb_csr_regs index_regs[MUSB_C_NUM_EPS]; 533 - }; 534 - 535 - #if defined(CONFIG_ARCH_OMAP2430) || defined(CONFIG_ARCH_OMAP3) || \ 536 - defined(CONFIG_ARCH_OMAP4) 537 - extern void musb_platform_save_context(struct musb *musb, 538 - struct musb_context_registers *musb_context); 539 - extern void musb_platform_restore_context(struct musb *musb, 540 - struct musb_context_registers *musb_context); 541 - #else 542 - #define musb_platform_save_context(m, x) do {} while (0) 543 - #define musb_platform_restore_context(m, x) do {} while (0) 544 - #endif 545 - 546 - #endif 547 - 548 - static inline void 
musb_set_vbus(struct musb *musb, int is_on) 549 - { 550 - musb->board_set_vbus(musb, is_on); 551 - } 552 460 553 461 #ifdef CONFIG_USB_GADGET_MUSB_HDRC 554 462 static inline struct musb *gadget_to_musb(struct usb_gadget *g) ··· 592 592 593 593 extern irqreturn_t musb_interrupt(struct musb *); 594 594 595 - extern void musb_platform_enable(struct musb *musb); 596 - extern void musb_platform_disable(struct musb *musb); 597 - 598 595 extern void musb_hnp_stop(struct musb *musb); 599 596 600 - extern int musb_platform_set_mode(struct musb *musb, u8 musb_mode); 597 + static inline void musb_platform_set_vbus(struct musb *musb, int is_on) 598 + { 599 + if (musb->ops->set_vbus) 600 + musb->ops->set_vbus(musb, is_on); 601 + } 601 602 602 - #if defined(CONFIG_USB_TUSB6010) || defined(CONFIG_BLACKFIN) || \ 603 - defined(CONFIG_ARCH_DAVINCI_DA8XX) || \ 604 - defined(CONFIG_ARCH_OMAP2430) || defined(CONFIG_ARCH_OMAP3) || \ 605 - defined(CONFIG_ARCH_OMAP4) 606 - extern void musb_platform_try_idle(struct musb *musb, unsigned long timeout); 607 - #else 608 - #define musb_platform_try_idle(x, y) do {} while (0) 609 - #endif 603 + static inline void musb_platform_enable(struct musb *musb) 604 + { 605 + if (musb->ops->enable) 606 + musb->ops->enable(musb); 607 + } 610 608 611 - #if defined(CONFIG_USB_TUSB6010) || defined(CONFIG_BLACKFIN) 612 - extern int musb_platform_get_vbus_status(struct musb *musb); 613 - #else 614 - #define musb_platform_get_vbus_status(x) 0 615 - #endif 609 + static inline void musb_platform_disable(struct musb *musb) 610 + { 611 + if (musb->ops->disable) 612 + musb->ops->disable(musb); 613 + } 616 614 617 - extern int __init musb_platform_init(struct musb *musb, void *board_data); 618 - extern int musb_platform_exit(struct musb *musb); 615 + static inline int musb_platform_set_mode(struct musb *musb, u8 mode) 616 + { 617 + if (!musb->ops->set_mode) 618 + return 0; 619 + 620 + return musb->ops->set_mode(musb, mode); 621 + } 622 + 623 + static inline void 
musb_platform_try_idle(struct musb *musb, 624 + unsigned long timeout) 625 + { 626 + if (musb->ops->try_idle) 627 + musb->ops->try_idle(musb, timeout); 628 + } 629 + 630 + static inline int musb_platform_get_vbus_status(struct musb *musb) 631 + { 632 + if (!musb->ops->vbus_status) 633 + return 0; 634 + 635 + return musb->ops->vbus_status(musb); 636 + } 637 + 638 + static inline int musb_platform_init(struct musb *musb) 639 + { 640 + if (!musb->ops->init) 641 + return -EINVAL; 642 + 643 + return musb->ops->init(musb); 644 + } 645 + 646 + static inline int musb_platform_exit(struct musb *musb) 647 + { 648 + if (!musb->ops->exit) 649 + return -EINVAL; 650 + 651 + return musb->ops->exit(musb); 652 + } 619 653 620 654 #endif /* __MUSB_CORE_H__ */
+8 -5
drivers/usb/musb/musb_gadget.c
···
1136 1136 	struct musb_request	*request = NULL;
1137 1137 
1138 1138 	request = kzalloc(sizeof *request, gfp_flags);
1139 - 	if (request) {
1140 - 		INIT_LIST_HEAD(&request->request.list);
1141 - 		request->request.dma = DMA_ADDR_INVALID;
1142 - 		request->epnum = musb_ep->current_epnum;
1143 - 		request->ep = musb_ep;
1139 + 	if (!request) {
1140 + 		DBG(4, "not enough memory\n");
1141 + 		return NULL;
1144 1142 	}
1143 + 
1144 + 	INIT_LIST_HEAD(&request->request.list);
1145 + 	request->request.dma = DMA_ADDR_INVALID;
1146 + 	request->epnum = musb_ep->current_epnum;
1147 + 	request->ep = musb_ep;
1145 1148 
1146 1149 	return &request->request;
1147 1150 }
+2 -2
drivers/usb/musb/musb_io.h
···
74 74 	{ __raw_writel(data, addr + offset); }
75 75 
76 76 
77 - #ifdef CONFIG_USB_TUSB6010
77 + #ifdef CONFIG_USB_MUSB_TUSB6010
78 78 
79 79 /*
80 80  * TUSB6010 doesn't allow 8-bit access; 16-bit access is the minimum.
···
114 114 static inline void musb_writeb(void __iomem *addr, unsigned offset, u8 data)
115 115 	{ __raw_writeb(data, addr + offset); }
116 116 
117 - #endif	/* CONFIG_USB_TUSB6010 */
117 + #endif	/* CONFIG_USB_MUSB_TUSB6010 */
118 118 
119 119 #else
120 120 
+2 -2
drivers/usb/musb/musb_regs.h
···
234 234 #define MUSB_TESTMODE		0x0F	/* 8 bit */
235 235 
236 236 /* Get offset for a given FIFO from musb->mregs */
237 - #ifdef CONFIG_USB_TUSB6010
237 + #ifdef CONFIG_USB_MUSB_TUSB6010
238 238 #define MUSB_FIFO_OFFSET(epnum)	(0x200 + ((epnum) * 0x20))
239 239 #else
240 240 #define MUSB_FIFO_OFFSET(epnum)	(0x20 + ((epnum) * 4))
···
295 295 #define MUSB_FLAT_OFFSET(_epnum, _offset)	\
296 296 	(0x100 + (0x10*(_epnum)) + (_offset))
297 297 
298 - #ifdef CONFIG_USB_TUSB6010
298 + #ifdef CONFIG_USB_MUSB_TUSB6010
299 299 /* TUSB6010 EP0 configuration register is special */
300 300 #define MUSB_TUSB_OFFSET(_epnum, _offset)	\
301 301 	(0x10 + _offset)
+1 -1
drivers/usb/musb/musb_virthub.c
···
276 276 		break;
277 277 	case USB_PORT_FEAT_POWER:
278 278 		if (!(is_otg_enabled(musb) && hcd->self.is_b_host))
279 - 			musb_set_vbus(musb, 0);
279 + 			musb_platform_set_vbus(musb, 0);
280 280 		break;
281 281 	case USB_PORT_FEAT_C_CONNECTION:
282 282 	case USB_PORT_FEAT_C_ENABLE:
+1 -1
drivers/usb/musb/musbhsdma.c
···
377 377 	struct musb_dma_controller *controller;
378 378 	struct device *dev = musb->controller;
379 379 	struct platform_device *pdev = to_platform_device(dev);
380 - 	int irq = platform_get_irq(pdev, 1);
380 + 	int irq = platform_get_irq_byname(pdev, "dma");
381 381 
382 382 	if (irq == 0) {
383 383 		dev_err(dev, "No DMA interrupt line!\n");
+316 -84
drivers/usb/musb/omap2430.c
··· 31 31 #include <linux/list.h> 32 32 #include <linux/clk.h> 33 33 #include <linux/io.h> 34 + #include <linux/platform_device.h> 35 + #include <linux/dma-mapping.h> 34 36 35 37 #include "musb_core.h" 36 38 #include "omap2430.h" 37 39 40 + struct omap2430_glue { 41 + struct device *dev; 42 + struct platform_device *musb; 43 + struct clk *clk; 44 + }; 45 + #define glue_to_musb(g) platform_get_drvdata(g->musb) 38 46 39 47 static struct timer_list musb_idle_timer; 40 48 ··· 57 49 58 50 spin_lock_irqsave(&musb->lock, flags); 59 51 60 - devctl = musb_readb(musb->mregs, MUSB_DEVCTL); 61 - 62 52 switch (musb->xceiv->state) { 63 53 case OTG_STATE_A_WAIT_BCON: 64 - devctl &= ~MUSB_DEVCTL_SESSION; 65 - musb_writeb(musb->mregs, MUSB_DEVCTL, devctl); 66 54 67 55 devctl = musb_readb(musb->mregs, MUSB_DEVCTL); 68 56 if (devctl & MUSB_DEVCTL_BDEVICE) { ··· 102 98 } 103 99 104 100 105 - void musb_platform_try_idle(struct musb *musb, unsigned long timeout) 101 + static void omap2430_musb_try_idle(struct musb *musb, unsigned long timeout) 106 102 { 107 103 unsigned long default_timeout = jiffies + msecs_to_jiffies(3); 108 104 static unsigned long last_timer; ··· 135 131 mod_timer(&musb_idle_timer, timeout); 136 132 } 137 133 138 - void musb_platform_enable(struct musb *musb) 139 - { 140 - } 141 - void musb_platform_disable(struct musb *musb) 142 - { 143 - } 144 - static void omap_set_vbus(struct musb *musb, int is_on) 134 + static void omap2430_musb_set_vbus(struct musb *musb, int is_on) 145 135 { 146 136 u8 devctl; 137 + unsigned long timeout = jiffies + msecs_to_jiffies(1000); 138 + int ret = 1; 147 139 /* HDRC controls CPEN, but beware current surges during device 148 140 * connect. They can trigger transient overcurrent conditions 149 141 * that must be ignored. 
··· 148 148 devctl = musb_readb(musb->mregs, MUSB_DEVCTL); 149 149 150 150 if (is_on) { 151 - musb->is_active = 1; 152 - musb->xceiv->default_a = 1; 153 - musb->xceiv->state = OTG_STATE_A_WAIT_VRISE; 154 - devctl |= MUSB_DEVCTL_SESSION; 151 + if (musb->xceiv->state == OTG_STATE_A_IDLE) { 152 + /* start the session */ 153 + devctl |= MUSB_DEVCTL_SESSION; 154 + musb_writeb(musb->mregs, MUSB_DEVCTL, devctl); 155 + /* 156 + * Wait for the musb to set as A device to enable the 157 + * VBUS 158 + */ 159 + while (musb_readb(musb->mregs, MUSB_DEVCTL) & 0x80) { 155 160 156 - MUSB_HST_MODE(musb); 161 + cpu_relax(); 162 + 163 + if (time_after(jiffies, timeout)) { 164 + dev_err(musb->controller, 165 + "configured as A device timeout"); 166 + ret = -EINVAL; 167 + break; 168 + } 169 + } 170 + 171 + if (ret && musb->xceiv->set_vbus) 172 + otg_set_vbus(musb->xceiv, 1); 173 + } else { 174 + musb->is_active = 1; 175 + musb->xceiv->default_a = 1; 176 + musb->xceiv->state = OTG_STATE_A_WAIT_VRISE; 177 + devctl |= MUSB_DEVCTL_SESSION; 178 + MUSB_HST_MODE(musb); 179 + } 157 180 } else { 158 181 musb->is_active = 0; 159 182 ··· 198 175 musb_readb(musb->mregs, MUSB_DEVCTL)); 199 176 } 200 177 201 - static int musb_platform_resume(struct musb *musb); 202 - 203 - int musb_platform_set_mode(struct musb *musb, u8 musb_mode) 178 + static int omap2430_musb_set_mode(struct musb *musb, u8 musb_mode) 204 179 { 205 180 u8 devctl = musb_readb(musb->mregs, MUSB_DEVCTL); 206 181 ··· 208 187 return 0; 209 188 } 210 189 211 - int __init musb_platform_init(struct musb *musb, void *board_data) 190 + static inline void omap2430_low_level_exit(struct musb *musb) 212 191 { 213 192 u32 l; 214 - struct omap_musb_board_data *data = board_data; 193 + 194 + /* in any role */ 195 + l = musb_readl(musb->mregs, OTG_FORCESTDBY); 196 + l |= ENABLEFORCE; /* enable MSTANDBY */ 197 + musb_writel(musb->mregs, OTG_FORCESTDBY, l); 198 + 199 + l = musb_readl(musb->mregs, OTG_SYSCONFIG); 200 + l |= ENABLEWAKEUP; /* enable 
wakeup */ 201 + musb_writel(musb->mregs, OTG_SYSCONFIG, l); 202 + } 203 + 204 + static inline void omap2430_low_level_init(struct musb *musb) 205 + { 206 + u32 l; 207 + 208 + l = musb_readl(musb->mregs, OTG_SYSCONFIG); 209 + l &= ~ENABLEWAKEUP; /* disable wakeup */ 210 + musb_writel(musb->mregs, OTG_SYSCONFIG, l); 211 + 212 + l = musb_readl(musb->mregs, OTG_FORCESTDBY); 213 + l &= ~ENABLEFORCE; /* disable MSTANDBY */ 214 + musb_writel(musb->mregs, OTG_FORCESTDBY, l); 215 + } 216 + 217 + /* blocking notifier support */ 218 + static int musb_otg_notifications(struct notifier_block *nb, 219 + unsigned long event, void *unused) 220 + { 221 + struct musb *musb = container_of(nb, struct musb, nb); 222 + struct device *dev = musb->controller; 223 + struct musb_hdrc_platform_data *pdata = dev->platform_data; 224 + struct omap_musb_board_data *data = pdata->board_data; 225 + 226 + switch (event) { 227 + case USB_EVENT_ID: 228 + DBG(4, "ID GND\n"); 229 + 230 + if (is_otg_enabled(musb)) { 231 + #ifdef CONFIG_USB_GADGET_MUSB_HDRC 232 + if (musb->gadget_driver) { 233 + otg_init(musb->xceiv); 234 + 235 + if (data->interface_type == 236 + MUSB_INTERFACE_UTMI) 237 + omap2430_musb_set_vbus(musb, 1); 238 + 239 + } 240 + #endif 241 + } else { 242 + otg_init(musb->xceiv); 243 + if (data->interface_type == 244 + MUSB_INTERFACE_UTMI) 245 + omap2430_musb_set_vbus(musb, 1); 246 + } 247 + break; 248 + 249 + case USB_EVENT_VBUS: 250 + DBG(4, "VBUS Connect\n"); 251 + 252 + otg_init(musb->xceiv); 253 + break; 254 + 255 + case USB_EVENT_NONE: 256 + DBG(4, "VBUS Disconnect\n"); 257 + 258 + if (data->interface_type == MUSB_INTERFACE_UTMI) { 259 + if (musb->xceiv->set_vbus) 260 + otg_set_vbus(musb->xceiv, 0); 261 + } 262 + otg_shutdown(musb->xceiv); 263 + break; 264 + default: 265 + DBG(4, "ID float\n"); 266 + return NOTIFY_DONE; 267 + } 268 + 269 + return NOTIFY_OK; 270 + } 271 + 272 + static int omap2430_musb_init(struct musb *musb) 273 + { 274 + u32 l, status = 0; 275 + struct device *dev = 
musb->controller; 276 + struct musb_hdrc_platform_data *plat = dev->platform_data; 277 + struct omap_musb_board_data *data = plat->board_data; 215 278 216 279 /* We require some kind of external transceiver, hooked 217 280 * up through ULPI. TWL4030-family PMICs include one, ··· 307 202 return -ENODEV; 308 203 } 309 204 310 - musb_platform_resume(musb); 205 + omap2430_low_level_init(musb); 311 206 312 207 l = musb_readl(musb->mregs, OTG_SYSCONFIG); 313 208 l &= ~ENABLEWAKEUP; /* disable wakeup */ ··· 344 239 musb_readl(musb->mregs, OTG_INTERFSEL), 345 240 musb_readl(musb->mregs, OTG_SIMENABLE)); 346 241 347 - if (is_host_enabled(musb)) 348 - musb->board_set_vbus = omap_set_vbus; 242 + musb->nb.notifier_call = musb_otg_notifications; 243 + status = otg_register_notifier(musb->xceiv, &musb->nb); 244 + 245 + if (status) 246 + DBG(1, "notification register failed\n"); 247 + 248 + /* check whether cable is already connected */ 249 + if (musb->xceiv->state ==OTG_STATE_B_IDLE) 250 + musb_otg_notifications(&musb->nb, 1, 251 + musb->xceiv->gadget); 349 252 350 253 setup_timer(&musb_idle_timer, musb_do_idle, (unsigned long) musb); 351 254 352 255 return 0; 353 256 } 354 257 355 - #ifdef CONFIG_PM 356 - void musb_platform_save_context(struct musb *musb, 357 - struct musb_context_registers *musb_context) 258 + static int omap2430_musb_exit(struct musb *musb) 358 259 { 359 - musb_context->otg_sysconfig = musb_readl(musb->mregs, OTG_SYSCONFIG); 360 - musb_context->otg_forcestandby = musb_readl(musb->mregs, OTG_FORCESTDBY); 361 - } 362 260 363 - void musb_platform_restore_context(struct musb *musb, 364 - struct musb_context_registers *musb_context) 365 - { 366 - musb_writel(musb->mregs, OTG_SYSCONFIG, musb_context->otg_sysconfig); 367 - musb_writel(musb->mregs, OTG_FORCESTDBY, musb_context->otg_forcestandby); 368 - } 369 - #endif 370 - 371 - static int musb_platform_suspend(struct musb *musb) 372 - { 373 - u32 l; 374 - 375 - if (!musb->clock) 376 - return 0; 377 - 378 - /* in any 
role */ 379 - l = musb_readl(musb->mregs, OTG_FORCESTDBY); 380 - l |= ENABLEFORCE; /* enable MSTANDBY */ 381 - musb_writel(musb->mregs, OTG_FORCESTDBY, l); 382 - 383 - l = musb_readl(musb->mregs, OTG_SYSCONFIG); 384 - l |= ENABLEWAKEUP; /* enable wakeup */ 385 - musb_writel(musb->mregs, OTG_SYSCONFIG, l); 386 - 387 - otg_set_suspend(musb->xceiv, 1); 388 - 389 - if (musb->set_clock) 390 - musb->set_clock(musb->clock, 0); 391 - else 392 - clk_disable(musb->clock); 261 + omap2430_low_level_exit(musb); 262 + otg_put_transceiver(musb->xceiv); 393 263 394 264 return 0; 395 265 } 396 266 397 - static int musb_platform_resume(struct musb *musb) 267 + static const struct musb_platform_ops omap2430_ops = { 268 + .init = omap2430_musb_init, 269 + .exit = omap2430_musb_exit, 270 + 271 + .set_mode = omap2430_musb_set_mode, 272 + .try_idle = omap2430_musb_try_idle, 273 + 274 + .set_vbus = omap2430_musb_set_vbus, 275 + }; 276 + 277 + static u64 omap2430_dmamask = DMA_BIT_MASK(32); 278 + 279 + static int __init omap2430_probe(struct platform_device *pdev) 398 280 { 399 - u32 l; 281 + struct musb_hdrc_platform_data *pdata = pdev->dev.platform_data; 282 + struct platform_device *musb; 283 + struct omap2430_glue *glue; 284 + struct clk *clk; 400 285 401 - if (!musb->clock) 402 - return 0; 286 + int ret = -ENOMEM; 403 287 288 + glue = kzalloc(sizeof(*glue), GFP_KERNEL); 289 + if (!glue) { 290 + dev_err(&pdev->dev, "failed to allocate glue context\n"); 291 + goto err0; 292 + } 293 + 294 + musb = platform_device_alloc("musb-hdrc", -1); 295 + if (!musb) { 296 + dev_err(&pdev->dev, "failed to allocate musb device\n"); 297 + goto err1; 298 + } 299 + 300 + clk = clk_get(&pdev->dev, "ick"); 301 + if (IS_ERR(clk)) { 302 + dev_err(&pdev->dev, "failed to get clock\n"); 303 + ret = PTR_ERR(clk); 304 + goto err2; 305 + } 306 + 307 + ret = clk_enable(clk); 308 + if (ret) { 309 + dev_err(&pdev->dev, "failed to enable clock\n"); 310 + goto err3; 311 + } 312 + 313 + musb->dev.parent = &pdev->dev; 314 
+ musb->dev.dma_mask = &omap2430_dmamask; 315 + musb->dev.coherent_dma_mask = omap2430_dmamask; 316 + 317 + glue->dev = &pdev->dev; 318 + glue->musb = musb; 319 + glue->clk = clk; 320 + 321 + pdata->platform_ops = &omap2430_ops; 322 + 323 + platform_set_drvdata(pdev, glue); 324 + 325 + ret = platform_device_add_resources(musb, pdev->resource, 326 + pdev->num_resources); 327 + if (ret) { 328 + dev_err(&pdev->dev, "failed to add resources\n"); 329 + goto err4; 330 + } 331 + 332 + ret = platform_device_add_data(musb, pdata, sizeof(*pdata)); 333 + if (ret) { 334 + dev_err(&pdev->dev, "failed to add platform_data\n"); 335 + goto err4; 336 + } 337 + 338 + ret = platform_device_add(musb); 339 + if (ret) { 340 + dev_err(&pdev->dev, "failed to register musb device\n"); 341 + goto err4; 342 + } 343 + 344 + return 0; 345 + 346 + err4: 347 + clk_disable(clk); 348 + 349 + err3: 350 + clk_put(clk); 351 + 352 + err2: 353 + platform_device_put(musb); 354 + 355 + err1: 356 + kfree(glue); 357 + 358 + err0: 359 + return ret; 360 + } 361 + 362 + static int __exit omap2430_remove(struct platform_device *pdev) 363 + { 364 + struct omap2430_glue *glue = platform_get_drvdata(pdev); 365 + 366 + platform_device_del(glue->musb); 367 + platform_device_put(glue->musb); 368 + clk_disable(glue->clk); 369 + clk_put(glue->clk); 370 + kfree(glue); 371 + 372 + return 0; 373 + } 374 + 375 + #ifdef CONFIG_PM 376 + static void omap2430_save_context(struct musb *musb) 377 + { 378 + musb->context.otg_sysconfig = musb_readl(musb->mregs, OTG_SYSCONFIG); 379 + musb->context.otg_forcestandby = musb_readl(musb->mregs, OTG_FORCESTDBY); 380 + } 381 + 382 + static void omap2430_restore_context(struct musb *musb) 383 + { 384 + musb_writel(musb->mregs, OTG_SYSCONFIG, musb->context.otg_sysconfig); 385 + musb_writel(musb->mregs, OTG_FORCESTDBY, musb->context.otg_forcestandby); 386 + } 387 + 388 + static int omap2430_suspend(struct device *dev) 389 + { 390 + struct omap2430_glue *glue = dev_get_drvdata(dev); 391 + 
struct musb *musb = glue_to_musb(glue);
392 + 
393 + 	omap2430_low_level_exit(musb);
394 + 	otg_set_suspend(musb->xceiv, 1);
395 + 	omap2430_save_context(musb);
396 + 	clk_disable(glue->clk);
397 + 
398 + 	return 0;
399 + }
400 + 
401 + static int omap2430_resume(struct device *dev)
402 + {
403 + 	struct omap2430_glue *glue = dev_get_drvdata(dev);
404 + 	struct musb *musb = glue_to_musb(glue);
405 + 	int ret;
406 + 
407 + 	ret = clk_enable(glue->clk);
408 + 	if (ret) {
409 + 		dev_err(dev, "failed to enable clock\n");
410 + 		return ret;
411 + 	}
412 + 
413 + 	omap2430_low_level_init(musb);
414 + 	omap2430_restore_context(musb);
404 415 	otg_set_suspend(musb->xceiv, 0);
405 416 
406 - 	if (musb->set_clock)
407 - 		musb->set_clock(musb->clock, 1);
408 - 	else
409 - 		clk_enable(musb->clock);
410 - 
411 - 	l = musb_readl(musb->mregs, OTG_SYSCONFIG);
412 - 	l &= ~ENABLEWAKEUP;	/* disable wakeup */
413 - 	musb_writel(musb->mregs, OTG_SYSCONFIG, l);
414 - 
415 - 	l = musb_readl(musb->mregs, OTG_FORCESTDBY);
416 - 	l &= ~ENABLEFORCE;	/* disable MSTANDBY */
417 - 	musb_writel(musb->mregs, OTG_FORCESTDBY, l);
418 - 
419 417 	return 0;
420 418 }
421 419 
420 + static struct dev_pm_ops omap2430_pm_ops = {
421 + 	.suspend	= omap2430_suspend,
422 + 	.resume		= omap2430_resume,
423 + };
422 424 
423 - int musb_platform_exit(struct musb *musb)
425 + #define DEV_PM_OPS	(&omap2430_pm_ops)
426 + #else
427 + #define DEV_PM_OPS	NULL
428 + #endif
429 + 
430 + static struct platform_driver omap2430_driver = {
431 + 	.remove		= __exit_p(omap2430_remove),
432 + 	.driver		= {
433 + 		.name	= "musb-omap2430",
434 + 		.pm	= DEV_PM_OPS,
435 + 	},
436 + };
437 + 
438 + MODULE_DESCRIPTION("OMAP2PLUS MUSB Glue Layer");
439 + MODULE_AUTHOR("Felipe Balbi <balbi@ti.com>");
440 + MODULE_LICENSE("GPL v2");
441 + 
442 + static int __init omap2430_init(void)
424 443 {
425 - 
426 - 	musb_platform_suspend(musb);
427 - 
428 - 	otg_put_transceiver(musb->xceiv);
429 - 	return 0;
444 + 	return platform_driver_probe(&omap2430_driver, omap2430_probe);
430 445 }
446 + 
subsys_initcall(omap2430_init); 447 + 448 + static void __exit omap2430_exit(void) 449 + { 450 + platform_driver_unregister(&omap2430_driver); 451 + } 452 + module_exit(omap2430_exit);
+141 -40
drivers/usb/musb/tusb6010.c
··· 21 21 #include <linux/usb.h> 22 22 #include <linux/irq.h> 23 23 #include <linux/platform_device.h> 24 + #include <linux/dma-mapping.h> 24 25 25 26 #include "musb_core.h" 26 27 27 - static void tusb_source_power(struct musb *musb, int is_on); 28 + struct tusb6010_glue { 29 + struct device *dev; 30 + struct platform_device *musb; 31 + }; 32 + 33 + static void tusb_musb_set_vbus(struct musb *musb, int is_on); 28 34 29 35 #define TUSB_REV_MAJOR(reg_val) ((reg_val >> 4) & 0xf) 30 36 #define TUSB_REV_MINOR(reg_val) (reg_val & 0xf) ··· 56 50 return rev; 57 51 } 58 52 59 - static int __init tusb_print_revision(struct musb *musb) 53 + static int tusb_print_revision(struct musb *musb) 60 54 { 61 55 void __iomem *tbase = musb->ctrl_base; 62 56 u8 rev; ··· 281 275 void __iomem *tbase = musb->ctrl_base; 282 276 u32 reg; 283 277 284 - /* 285 - * Keep clock active when enabled. Note that this is not tied to 286 - * drawing VBUS, as with OTG mA can be less than musb->min_power. 287 - */ 288 - if (musb->set_clock) { 289 - if (mA) 290 - musb->set_clock(musb->clock, 1); 291 - else 292 - musb->set_clock(musb->clock, 0); 293 - } 294 - 295 278 /* tps65030 seems to consume max 100mA, with maybe 60mA available 296 279 * (measured on one board) for things other than tps and tusb. 297 280 * ··· 343 348 * USB link is not suspended ... and tells us the relevant wakeup 344 349 * events. SW_EN for voltage is handled separately. 345 350 */ 346 - void tusb_allow_idle(struct musb *musb, u32 wakeup_enables) 351 + static void tusb_allow_idle(struct musb *musb, u32 wakeup_enables) 347 352 { 348 353 void __iomem *tbase = musb->ctrl_base; 349 354 u32 reg; ··· 380 385 /* 381 386 * Updates cable VBUS status. Caller must take care of locking. 
382 387 */ 383 - int musb_platform_get_vbus_status(struct musb *musb) 388 + static int tusb_musb_vbus_status(struct musb *musb) 384 389 { 385 390 void __iomem *tbase = musb->ctrl_base; 386 391 u32 otg_stat, prcm_mngmt; ··· 426 431 } 427 432 /* FALLTHROUGH */ 428 433 case OTG_STATE_A_IDLE: 429 - tusb_source_power(musb, 0); 434 + tusb_musb_set_vbus(musb, 0); 430 435 default: 431 436 break; 432 437 } ··· 470 475 * we don't want to treat that full speed J as a wakeup event. 471 476 * ... peripherals must draw only suspend current after 10 msec. 472 477 */ 473 - void musb_platform_try_idle(struct musb *musb, unsigned long timeout) 478 + static void tusb_musb_try_idle(struct musb *musb, unsigned long timeout) 474 479 { 475 480 unsigned long default_timeout = jiffies + msecs_to_jiffies(3); 476 481 static unsigned long last_timer; ··· 510 515 | TUSB_DEV_OTG_TIMER_ENABLE) \ 511 516 : 0) 512 517 513 - static void tusb_source_power(struct musb *musb, int is_on) 518 + static void tusb_musb_set_vbus(struct musb *musb, int is_on) 514 519 { 515 520 void __iomem *tbase = musb->ctrl_base; 516 521 u32 conf, prcm, timer; ··· 526 531 devctl = musb_readb(musb->mregs, MUSB_DEVCTL); 527 532 528 533 if (is_on) { 529 - if (musb->set_clock) 530 - musb->set_clock(musb->clock, 1); 531 534 timer = OTG_TIMER_MS(OTG_TIME_A_WAIT_VRISE); 532 535 musb->xceiv->default_a = 1; 533 536 musb->xceiv->state = OTG_STATE_A_WAIT_VRISE; ··· 564 571 565 572 devctl &= ~MUSB_DEVCTL_SESSION; 566 573 conf &= ~TUSB_DEV_CONF_USB_HOST_MODE; 567 - if (musb->set_clock) 568 - musb->set_clock(musb->clock, 0); 569 574 } 570 575 prcm &= ~(TUSB_PRCM_MNGMT_15_SW_EN | TUSB_PRCM_MNGMT_33_SW_EN); 571 576 ··· 590 599 * and peripheral modes in non-OTG configurations by reconfiguring hardware 591 600 * and then setting musb->board_mode. For now, only support OTG mode. 
592 601 */ 593 - int musb_platform_set_mode(struct musb *musb, u8 musb_mode) 602 + static int tusb_musb_set_mode(struct musb *musb, u8 musb_mode) 594 603 { 595 604 void __iomem *tbase = musb->ctrl_base; 596 605 u32 otg_stat, phy_otg_ctrl, phy_otg_ena, dev_conf; ··· 668 677 default_a = is_host_enabled(musb); 669 678 DBG(2, "Default-%c\n", default_a ? 'A' : 'B'); 670 679 musb->xceiv->default_a = default_a; 671 - tusb_source_power(musb, default_a); 680 + tusb_musb_set_vbus(musb, default_a); 672 681 673 682 /* Don't allow idling immediately */ 674 683 if (default_a) ··· 713 722 switch (musb->xceiv->state) { 714 723 case OTG_STATE_A_IDLE: 715 724 DBG(2, "Got SRP, turning on VBUS\n"); 716 - musb_set_vbus(musb, 1); 725 + musb_platform_set_vbus(musb, 1); 717 726 718 727 /* CONNECT can wake if a_wait_bcon is set */ 719 728 if (musb->a_wait_bcon != 0) ··· 739 748 */ 740 749 if (musb->vbuserr_retry) { 741 750 musb->vbuserr_retry--; 742 - tusb_source_power(musb, 1); 751 + tusb_musb_set_vbus(musb, 1); 743 752 } else { 744 753 musb->vbuserr_retry 745 754 = VBUSERR_RETRY_COUNT; 746 - tusb_source_power(musb, 0); 755 + tusb_musb_set_vbus(musb, 0); 747 756 } 748 757 break; 749 758 default: ··· 777 786 } else { 778 787 /* REVISIT report overcurrent to hub? 
*/ 779 788 ERR("vbus too slow, devctl %02x\n", devctl); 780 - tusb_source_power(musb, 0); 789 + tusb_musb_set_vbus(musb, 0); 781 790 } 782 791 break; 783 792 case OTG_STATE_A_WAIT_BCON: ··· 798 807 return idle_timeout; 799 808 } 800 809 801 - static irqreturn_t tusb_interrupt(int irq, void *__hci) 810 + static irqreturn_t tusb_musb_interrupt(int irq, void *__hci) 802 811 { 803 812 struct musb *musb = __hci; 804 813 void __iomem *tbase = musb->ctrl_base; ··· 902 911 musb_writel(tbase, TUSB_INT_SRC_CLEAR, 903 912 int_src & ~TUSB_INT_MASK_RESERVED_BITS); 904 913 905 - musb_platform_try_idle(musb, idle_timeout); 914 + tusb_musb_try_idle(musb, idle_timeout); 906 915 907 916 musb_writel(tbase, TUSB_INT_MASK, int_mask); 908 917 spin_unlock_irqrestore(&musb->lock, flags); ··· 917 926 * REVISIT: 918 927 * - Check what is unnecessary in MGC_HdrcStart() 919 928 */ 920 - void musb_platform_enable(struct musb *musb) 929 + static void tusb_musb_enable(struct musb *musb) 921 930 { 922 931 void __iomem *tbase = musb->ctrl_base; 923 932 ··· 961 970 /* 962 971 * Disables TUSB6010. Caller must take care of locking. 
963 972 */ 964 - void musb_platform_disable(struct musb *musb) 973 + static void tusb_musb_disable(struct musb *musb) 965 974 { 966 975 void __iomem *tbase = musb->ctrl_base; 967 976 ··· 986 995 * Sets up TUSB6010 CPU interface specific signals and registers 987 996 * Note: Settings optimized for OMAP24xx 988 997 */ 989 - static void __init tusb_setup_cpu_interface(struct musb *musb) 998 + static void tusb_setup_cpu_interface(struct musb *musb) 990 999 { 991 1000 void __iomem *tbase = musb->ctrl_base; 992 1001 ··· 1013 1022 musb_writel(tbase, TUSB_WAIT_COUNT, 1); 1014 1023 } 1015 1024 1016 - static int __init tusb_start(struct musb *musb) 1025 + static int tusb_musb_start(struct musb *musb) 1017 1026 { 1018 1027 void __iomem *tbase = musb->ctrl_base; 1019 1028 int ret = 0; ··· 1082 1091 return -ENODEV; 1083 1092 } 1084 1093 1085 - int __init musb_platform_init(struct musb *musb, void *board_data) 1094 + static int tusb_musb_init(struct musb *musb) 1086 1095 { 1087 1096 struct platform_device *pdev; 1088 1097 struct resource *mem; ··· 1122 1131 */ 1123 1132 musb->mregs += TUSB_BASE_OFFSET; 1124 1133 1125 - ret = tusb_start(musb); 1134 + ret = tusb_musb_start(musb); 1126 1135 if (ret) { 1127 1136 printk(KERN_ERR "Could not start tusb6010 (%d)\n", 1128 1137 ret); 1129 1138 goto done; 1130 1139 } 1131 - musb->isr = tusb_interrupt; 1140 + musb->isr = tusb_musb_interrupt; 1132 1141 1133 - if (is_host_enabled(musb)) 1134 - musb->board_set_vbus = tusb_source_power; 1135 1142 if (is_peripheral_enabled(musb)) { 1136 1143 musb->xceiv->set_power = tusb_draw_power; 1137 1144 the_musb = musb; ··· 1148 1159 return ret; 1149 1160 } 1150 1161 1151 - int musb_platform_exit(struct musb *musb) 1162 + static int tusb_musb_exit(struct musb *musb) 1152 1163 { 1153 1164 del_timer_sync(&musb_idle_timer); 1154 1165 the_musb = NULL; ··· 1162 1173 usb_nop_xceiv_unregister(); 1163 1174 return 0; 1164 1175 } 1176 + 1177 + static const struct musb_platform_ops tusb_ops = { 1178 + .init = 
tusb_musb_init, 1179 + .exit = tusb_musb_exit, 1180 + 1181 + .enable = tusb_musb_enable, 1182 + .disable = tusb_musb_disable, 1183 + 1184 + .set_mode = tusb_musb_set_mode, 1185 + .try_idle = tusb_musb_try_idle, 1186 + 1187 + .vbus_status = tusb_musb_vbus_status, 1188 + .set_vbus = tusb_musb_set_vbus, 1189 + }; 1190 + 1191 + static u64 tusb_dmamask = DMA_BIT_MASK(32); 1192 + 1193 + static int __init tusb_probe(struct platform_device *pdev) 1194 + { 1195 + struct musb_hdrc_platform_data *pdata = pdev->dev.platform_data; 1196 + struct platform_device *musb; 1197 + struct tusb6010_glue *glue; 1198 + 1199 + int ret = -ENOMEM; 1200 + 1201 + glue = kzalloc(sizeof(*glue), GFP_KERNEL); 1202 + if (!glue) { 1203 + dev_err(&pdev->dev, "failed to allocate glue context\n"); 1204 + goto err0; 1205 + } 1206 + 1207 + musb = platform_device_alloc("musb-hdrc", -1); 1208 + if (!musb) { 1209 + dev_err(&pdev->dev, "failed to allocate musb device\n"); 1210 + goto err1; 1211 + } 1212 + 1213 + musb->dev.parent = &pdev->dev; 1214 + musb->dev.dma_mask = &tusb_dmamask; 1215 + musb->dev.coherent_dma_mask = tusb_dmamask; 1216 + 1217 + glue->dev = &pdev->dev; 1218 + glue->musb = musb; 1219 + 1220 + pdata->platform_ops = &tusb_ops; 1221 + 1222 + platform_set_drvdata(pdev, glue); 1223 + 1224 + ret = platform_device_add_resources(musb, pdev->resource, 1225 + pdev->num_resources); 1226 + if (ret) { 1227 + dev_err(&pdev->dev, "failed to add resources\n"); 1228 + goto err2; 1229 + } 1230 + 1231 + ret = platform_device_add_data(musb, pdata, sizeof(*pdata)); 1232 + if (ret) { 1233 + dev_err(&pdev->dev, "failed to add platform_data\n"); 1234 + goto err2; 1235 + } 1236 + 1237 + ret = platform_device_add(musb); 1238 + if (ret) { 1239 + dev_err(&pdev->dev, "failed to register musb device\n"); 1240 + goto err1; 1241 + } 1242 + 1243 + return 0; 1244 + 1245 + err2: 1246 + platform_device_put(musb); 1247 + 1248 + err1: 1249 + kfree(glue); 1250 + 1251 + err0: 1252 + return ret; 1253 + } 1254 + 1255 + static int 
__exit tusb_remove(struct platform_device *pdev) 1256 + { 1257 + struct tusb6010_glue *glue = platform_get_drvdata(pdev); 1258 + 1259 + platform_device_del(glue->musb); 1260 + platform_device_put(glue->musb); 1261 + kfree(glue); 1262 + 1263 + return 0; 1264 + } 1265 + 1266 + static struct platform_driver tusb_driver = { 1267 + .remove = __exit_p(tusb_remove), 1268 + .driver = { 1269 + .name = "musb-tusb", 1270 + }, 1271 + }; 1272 + 1273 + MODULE_DESCRIPTION("TUSB6010 MUSB Glue Layer"); 1274 + MODULE_AUTHOR("Felipe Balbi <balbi@ti.com>"); 1275 + MODULE_LICENSE("GPL v2"); 1276 + 1277 + static int __init tusb_init(void) 1278 + { 1279 + return platform_driver_probe(&tusb_driver, tusb_probe); 1280 + } 1281 + subsys_initcall(tusb_init); 1282 + 1283 + static void __exit tusb_exit(void) 1284 + { 1285 + platform_driver_unregister(&tusb_driver); 1286 + } 1287 + module_exit(tusb_exit);
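tusb_probe() above uses the kernel's standard goto-ladder error handling: each acquisition gets a label, and a failure jumps to the point that unwinds everything acquired so far, in reverse order. A compilable user-space sketch of the idiom (the resource names and trace buffer are illustrative):

```c
#include <assert.h>
#include <string.h>

static char trace[64];			/* records acquire/release order */

static int acquire(const char *name, int fail)
{
	if (fail)
		return -1;		/* nothing recorded: nothing to undo */
	strcat(trace, "+");
	strcat(trace, name);
	return 0;
}

static void release(const char *name)
{
	strcat(trace, "-");
	strcat(trace, name);
}

/* fail_at: 0 = success, 1..3 = which acquisition fails */
static int probe_sketch(int fail_at)
{
	trace[0] = '\0';

	if (acquire("glue", fail_at == 1))
		goto err0;
	if (acquire("musb", fail_at == 2))
		goto err1;
	if (acquire("res", fail_at == 3))
		goto err2;
	return 0;

err2:					/* falls through: undo in reverse */
	release("musb");
err1:
	release("glue");
err0:
	return -1;
}
```

The fall-through between labels is the point: a failure at step N releases steps N-1 down to 1 with no duplicated cleanup code, exactly as err2/err1/err0 do in tusb_probe().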
+216
drivers/usb/musb/ux500.c
··· 1 + /* 2 + * Copyright (C) 2010 ST-Ericsson AB 3 + * Mian Yousaf Kaukab <mian.yousaf.kaukab@stericsson.com> 4 + * 5 + * Based on omap2430.c 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation; either version 2 of the License, or 10 + * (at your option) any later version. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + * You should have received a copy of the GNU General Public License 18 + * along with this program; if not, write to the Free Software 19 + * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 20 + */ 21 + 22 + #include <linux/module.h> 23 + #include <linux/kernel.h> 24 + #include <linux/init.h> 25 + #include <linux/clk.h> 26 + #include <linux/io.h> 27 + #include <linux/platform_device.h> 28 + 29 + #include "musb_core.h" 30 + 31 + struct ux500_glue { 32 + struct device *dev; 33 + struct platform_device *musb; 34 + struct clk *clk; 35 + }; 36 + #define glue_to_musb(g) platform_get_drvdata(g->musb) 37 + 38 + static int ux500_musb_init(struct musb *musb) 39 + { 40 + musb->xceiv = otg_get_transceiver(); 41 + if (!musb->xceiv) { 42 + pr_err("HS USB OTG: no transceiver configured\n"); 43 + return -ENODEV; 44 + } 45 + 46 + return 0; 47 + } 48 + 49 + static int ux500_musb_exit(struct musb *musb) 50 + { 51 + otg_put_transceiver(musb->xceiv); 52 + 53 + return 0; 54 + } 55 + 56 + static const struct musb_platform_ops ux500_ops = { 57 + .init = ux500_musb_init, 58 + .exit = ux500_musb_exit, 59 + }; 60 + 61 + static int __init ux500_probe(struct platform_device *pdev) 62 + { 63 + struct musb_hdrc_platform_data *pdata = pdev->dev.platform_data; 64 + struct platform_device *musb; 65 + struct 
ux500_glue *glue; 66 + struct clk *clk; 67 + 68 + int ret = -ENOMEM; 69 + 70 + glue = kzalloc(sizeof(*glue), GFP_KERNEL); 71 + if (!glue) { 72 + dev_err(&pdev->dev, "failed to allocate glue context\n"); 73 + goto err0; 74 + } 75 + 76 + musb = platform_device_alloc("musb-hdrc", -1); 77 + if (!musb) { 78 + dev_err(&pdev->dev, "failed to allocate musb device\n"); 79 + goto err1; 80 + } 81 + 82 + clk = clk_get(&pdev->dev, "usb"); 83 + if (IS_ERR(clk)) { 84 + dev_err(&pdev->dev, "failed to get clock\n"); 85 + ret = PTR_ERR(clk); 86 + goto err2; 87 + } 88 + 89 + ret = clk_enable(clk); 90 + if (ret) { 91 + dev_err(&pdev->dev, "failed to enable clock\n"); 92 + goto err3; 93 + } 94 + 95 + musb->dev.parent = &pdev->dev; 96 + 97 + glue->dev = &pdev->dev; 98 + glue->musb = musb; 99 + glue->clk = clk; 100 + 101 + pdata->platform_ops = &ux500_ops; 102 + 103 + platform_set_drvdata(pdev, glue); 104 + 105 + ret = platform_device_add_resources(musb, pdev->resource, 106 + pdev->num_resources); 107 + if (ret) { 108 + dev_err(&pdev->dev, "failed to add resources\n"); 109 + goto err4; 110 + } 111 + 112 + ret = platform_device_add_data(musb, pdata, sizeof(*pdata)); 113 + if (ret) { 114 + dev_err(&pdev->dev, "failed to add platform_data\n"); 115 + goto err4; 116 + } 117 + 118 + ret = platform_device_add(musb); 119 + if (ret) { 120 + dev_err(&pdev->dev, "failed to register musb device\n"); 121 + goto err4; 122 + } 123 + 124 + return 0; 125 + 126 + err4: 127 + clk_disable(clk); 128 + 129 + err3: 130 + clk_put(clk); 131 + 132 + err2: 133 + platform_device_put(musb); 134 + 135 + err1: 136 + kfree(glue); 137 + 138 + err0: 139 + return ret; 140 + } 141 + 142 + static int __exit ux500_remove(struct platform_device *pdev) 143 + { 144 + struct ux500_glue *glue = platform_get_drvdata(pdev); 145 + 146 + platform_device_del(glue->musb); 147 + platform_device_put(glue->musb); 148 + clk_disable(glue->clk); 149 + clk_put(glue->clk); 150 + kfree(glue); 151 + 152 + return 0; 153 + } 154 + 155 + #ifdef 
CONFIG_PM 156 + static int ux500_suspend(struct device *dev) 157 + { 158 + struct ux500_glue *glue = dev_get_drvdata(dev); 159 + struct musb *musb = glue_to_musb(glue); 160 + 161 + otg_set_suspend(musb->xceiv, 1); 162 + clk_disable(glue->clk); 163 + 164 + return 0; 165 + } 166 + 167 + static int ux500_resume(struct device *dev) 168 + { 169 + struct ux500_glue *glue = dev_get_drvdata(dev); 170 + struct musb *musb = glue_to_musb(glue); 171 + int ret; 172 + 173 + ret = clk_enable(glue->clk); 174 + if (ret) { 175 + dev_err(dev, "failed to enable clock\n"); 176 + return ret; 177 + } 178 + 179 + otg_set_suspend(musb->xceiv, 0); 180 + 181 + return 0; 182 + } 183 + 184 + static const struct dev_pm_ops ux500_pm_ops = { 185 + .suspend = ux500_suspend, 186 + .resume = ux500_resume, 187 + }; 188 + 189 + #define DEV_PM_OPS (&ux500_pm_ops) 190 + #else 191 + #define DEV_PM_OPS NULL 192 + #endif 193 + 194 + static struct platform_driver ux500_driver = { 195 + .remove = __exit_p(ux500_remove), 196 + .driver = { 197 + .name = "musb-ux500", 198 + .pm = DEV_PM_OPS, 199 + }, 200 + }; 201 + 202 + MODULE_DESCRIPTION("UX500 MUSB Glue Layer"); 203 + MODULE_AUTHOR("Mian Yousaf Kaukab <mian.yousaf.kaukab@stericsson.com>"); 204 + MODULE_LICENSE("GPL v2"); 205 + 206 + static int __init ux500_init(void) 207 + { 208 + return platform_driver_probe(&ux500_driver, ux500_probe); 209 + } 210 + subsys_initcall(ux500_init); 211 + 212 + static void __exit ux500_exit(void) 213 + { 214 + platform_driver_unregister(&ux500_driver); 215 + } 216 + module_exit(ux500_exit);
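ux500_probe() leans on the kernel's ERR_PTR convention: clk_get() returns either a valid pointer or a small negative errno folded into the top of the address space, which IS_ERR() detects and PTR_ERR() unfolds. A simplified user-space rendition of those macros (the real definitions live in include/linux/err.h and differ in detail):

```c
#include <assert.h>

#define MAX_ERRNO 4095		/* errnos occupy the last page of addresses */

static inline void *ERR_PTR(long error)
{
	return (void *)error;	/* e.g. -2 becomes 0xffff...fffe */
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;	/* undo the encoding */
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

This is why the probe path can write `ret = PTR_ERR(clk)` directly: one pointer-sized return value carries either the clock handle or the error code, with no out-parameter needed.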
+32
drivers/usb/otg/Kconfig
··· 59 59 This transceiver supports high and full speed devices plus, 60 60 in host mode, low speed. 61 61 62 + config TWL6030_USB 63 + tristate "TWL6030 USB Transceiver Driver" 64 + depends on TWL4030_CORE 65 + select USB_OTG_UTILS 66 + help 67 + Enable this to support the USB OTG transceiver on TWL6030 68 + family chips. The TWL6030 transceiver handles VBUS detection, ID GND 69 + detection, and OTG SRP events. All other transceiver functionality 70 + is provided by the UTMI PHY embedded in the OMAP4430. The internal PHY 71 + configuration APIs are hooked to this driver through the platform_data 72 + structure; the internal PHY APIs themselves are defined in the mach-omap2 layer. 73 + 62 74 config NOP_USB_XCEIV 63 75 tristate "NOP USB Transceiver Driver" 64 76 select USB_OTG_UTILS ··· 92 80 93 81 To compile this driver as a module, choose M here: the 94 82 module will be called langwell_otg. 83 + 84 + config USB_MSM_OTG_72K 85 + tristate "OTG support for Qualcomm on-chip USB controller" 86 + depends on (USB || USB_GADGET) && ARCH_MSM 87 + select USB_OTG_UTILS 88 + help 89 + Enable this to support the USB OTG transceiver on MSM chips. It 90 + handles PHY initialization, clock management, power management, and 91 + the workarounds required after resetting the hardware. 92 + This driver is required even for peripheral-only or host-only 93 + mode configurations. 94 + 95 + config AB8500_USB 96 + tristate "AB8500 USB Transceiver Driver" 97 + depends on AB8500_CORE 98 + select USB_OTG_UTILS 99 + help 100 + Enable this to support the USB OTG transceiver on the AB8500 chip. 101 + This transceiver supports high and full speed devices plus, 102 + in host mode, low speed. 95 103 96 104 endif # USB || OTG
+3
drivers/usb/otg/Makefile
··· 12 12 obj-$(CONFIG_USB_GPIO_VBUS) += gpio_vbus.o 13 13 obj-$(CONFIG_ISP1301_OMAP) += isp1301_omap.o 14 14 obj-$(CONFIG_TWL4030_USB) += twl4030-usb.o 15 + obj-$(CONFIG_TWL6030_USB) += twl6030-usb.o 15 16 obj-$(CONFIG_USB_LANGWELL_OTG) += langwell_otg.o 16 17 obj-$(CONFIG_NOP_USB_XCEIV) += nop-usb-xceiv.o 17 18 obj-$(CONFIG_USB_ULPI) += ulpi.o 19 + obj-$(CONFIG_USB_MSM_OTG_72K) += msm72k_otg.o 20 + obj-$(CONFIG_AB8500_USB) += ab8500-usb.o
+585
drivers/usb/otg/ab8500-usb.c
··· 1 + /* 2 + * drivers/usb/otg/ab8500_usb.c 3 + * 4 + * USB transceiver driver for AB8500 chip 5 + * 6 + * Copyright (C) 2010 ST-Ericsson AB 7 + * Mian Yousaf Kaukab <mian.yousaf.kaukab@stericsson.com> 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License as published by 11 + * the Free Software Foundation; either version 2 of the License, or 12 + * (at your option) any later version. 13 + * 14 + * This program is distributed in the hope that it will be useful, 15 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 16 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 17 + * GNU General Public License for more details. 18 + * 19 + * You should have received a copy of the GNU General Public License 20 + * along with this program; if not, write to the Free Software 21 + * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 22 + * 23 + */ 24 + 25 + #include <linux/module.h> 26 + #include <linux/platform_device.h> 27 + #include <linux/usb/otg.h> 28 + #include <linux/slab.h> 29 + #include <linux/notifier.h> 30 + #include <linux/interrupt.h> 31 + #include <linux/delay.h> 32 + #include <linux/mfd/abx500.h> 33 + #include <linux/mfd/ab8500.h> 34 + 35 + #define AB8500_MAIN_WD_CTRL_REG 0x01 36 + #define AB8500_USB_LINE_STAT_REG 0x80 37 + #define AB8500_USB_PHY_CTRL_REG 0x8A 38 + 39 + #define AB8500_BIT_OTG_STAT_ID (1 << 0) 40 + #define AB8500_BIT_PHY_CTRL_HOST_EN (1 << 0) 41 + #define AB8500_BIT_PHY_CTRL_DEVICE_EN (1 << 1) 42 + #define AB8500_BIT_WD_CTRL_ENABLE (1 << 0) 43 + #define AB8500_BIT_WD_CTRL_KICK (1 << 1) 44 + 45 + #define AB8500_V1x_LINK_STAT_WAIT (HZ/10) 46 + #define AB8500_WD_KICK_DELAY_US 100 /* usec */ 47 + #define AB8500_WD_V11_DISABLE_DELAY_US 100 /* usec */ 48 + #define AB8500_WD_V10_DISABLE_DELAY_MS 100 /* ms */ 49 + 50 + /* Usb line status register */ 51 + enum ab8500_usb_link_status { 52 + USB_LINK_NOT_CONFIGURED = 0, 53 + 
USB_LINK_STD_HOST_NC, 54 + USB_LINK_STD_HOST_C_NS, 55 + USB_LINK_STD_HOST_C_S, 56 + USB_LINK_HOST_CHG_NM, 57 + USB_LINK_HOST_CHG_HS, 58 + USB_LINK_HOST_CHG_HS_CHIRP, 59 + USB_LINK_DEDICATED_CHG, 60 + USB_LINK_ACA_RID_A, 61 + USB_LINK_ACA_RID_B, 62 + USB_LINK_ACA_RID_C_NM, 63 + USB_LINK_ACA_RID_C_HS, 64 + USB_LINK_ACA_RID_C_HS_CHIRP, 65 + USB_LINK_HM_IDGND, 66 + USB_LINK_RESERVED, 67 + USB_LINK_NOT_VALID_LINK 68 + }; 69 + 70 + struct ab8500_usb { 71 + struct otg_transceiver otg; 72 + struct device *dev; 73 + int irq_num_id_rise; 74 + int irq_num_id_fall; 75 + int irq_num_vbus_rise; 76 + int irq_num_vbus_fall; 77 + int irq_num_link_status; 78 + unsigned vbus_draw; 79 + struct delayed_work dwork; 80 + struct work_struct phy_dis_work; 81 + unsigned long link_status_wait; 82 + int rev; 83 + }; 84 + 85 + static inline struct ab8500_usb *xceiv_to_ab(struct otg_transceiver *x) 86 + { 87 + return container_of(x, struct ab8500_usb, otg); 88 + } 89 + 90 + static void ab8500_usb_wd_workaround(struct ab8500_usb *ab) 91 + { 92 + abx500_set_register_interruptible(ab->dev, 93 + AB8500_SYS_CTRL2_BLOCK, 94 + AB8500_MAIN_WD_CTRL_REG, 95 + AB8500_BIT_WD_CTRL_ENABLE); 96 + 97 + udelay(AB8500_WD_KICK_DELAY_US); 98 + 99 + abx500_set_register_interruptible(ab->dev, 100 + AB8500_SYS_CTRL2_BLOCK, 101 + AB8500_MAIN_WD_CTRL_REG, 102 + (AB8500_BIT_WD_CTRL_ENABLE 103 + | AB8500_BIT_WD_CTRL_KICK)); 104 + 105 + if (ab->rev > 0x10) /* v1.1 v2.0 */ 106 + udelay(AB8500_WD_V11_DISABLE_DELAY_US); 107 + else /* v1.0 */ 108 + msleep(AB8500_WD_V10_DISABLE_DELAY_MS); 109 + 110 + abx500_set_register_interruptible(ab->dev, 111 + AB8500_SYS_CTRL2_BLOCK, 112 + AB8500_MAIN_WD_CTRL_REG, 113 + 0); 114 + } 115 + 116 + static void ab8500_usb_phy_ctrl(struct ab8500_usb *ab, bool sel_host, 117 + bool enable) 118 + { 119 + u8 ctrl_reg; 120 + abx500_get_register_interruptible(ab->dev, 121 + AB8500_USB, 122 + AB8500_USB_PHY_CTRL_REG, 123 + &ctrl_reg); 124 + if (sel_host) { 125 + if (enable) 126 + ctrl_reg |= 
AB8500_BIT_PHY_CTRL_HOST_EN; 127 + else 128 + ctrl_reg &= ~AB8500_BIT_PHY_CTRL_HOST_EN; 129 + } else { 130 + if (enable) 131 + ctrl_reg |= AB8500_BIT_PHY_CTRL_DEVICE_EN; 132 + else 133 + ctrl_reg &= ~AB8500_BIT_PHY_CTRL_DEVICE_EN; 134 + } 135 + 136 + abx500_set_register_interruptible(ab->dev, 137 + AB8500_USB, 138 + AB8500_USB_PHY_CTRL_REG, 139 + ctrl_reg); 140 + 141 + /* Needed to enable the phy.*/ 142 + if (enable) 143 + ab8500_usb_wd_workaround(ab); 144 + } 145 + 146 + #define ab8500_usb_host_phy_en(ab) ab8500_usb_phy_ctrl(ab, true, true) 147 + #define ab8500_usb_host_phy_dis(ab) ab8500_usb_phy_ctrl(ab, true, false) 148 + #define ab8500_usb_peri_phy_en(ab) ab8500_usb_phy_ctrl(ab, false, true) 149 + #define ab8500_usb_peri_phy_dis(ab) ab8500_usb_phy_ctrl(ab, false, false) 150 + 151 + static int ab8500_usb_link_status_update(struct ab8500_usb *ab) 152 + { 153 + u8 reg; 154 + enum ab8500_usb_link_status lsts; 155 + void *v = NULL; 156 + enum usb_xceiv_events event; 157 + 158 + abx500_get_register_interruptible(ab->dev, 159 + AB8500_USB, 160 + AB8500_USB_LINE_STAT_REG, 161 + &reg); 162 + 163 + lsts = (reg >> 3) & 0x0F; 164 + 165 + switch (lsts) { 166 + case USB_LINK_NOT_CONFIGURED: 167 + case USB_LINK_RESERVED: 168 + case USB_LINK_NOT_VALID_LINK: 169 + /* TODO: Disable regulators. */ 170 + ab8500_usb_host_phy_dis(ab); 171 + ab8500_usb_peri_phy_dis(ab); 172 + ab->otg.state = OTG_STATE_B_IDLE; 173 + ab->otg.default_a = false; 174 + ab->vbus_draw = 0; 175 + event = USB_EVENT_NONE; 176 + break; 177 + 178 + case USB_LINK_STD_HOST_NC: 179 + case USB_LINK_STD_HOST_C_NS: 180 + case USB_LINK_STD_HOST_C_S: 181 + case USB_LINK_HOST_CHG_NM: 182 + case USB_LINK_HOST_CHG_HS: 183 + case USB_LINK_HOST_CHG_HS_CHIRP: 184 + if (ab->otg.gadget) { 185 + /* TODO: Enable regulators. */ 186 + ab8500_usb_peri_phy_en(ab); 187 + v = ab->otg.gadget; 188 + } 189 + event = USB_EVENT_VBUS; 190 + break; 191 + 192 + case USB_LINK_HM_IDGND: 193 + if (ab->otg.host) { 194 + /* TODO: Enable regulators. 
*/ 195 + ab8500_usb_host_phy_en(ab); 196 + v = ab->otg.host; 197 + } 198 + ab->otg.state = OTG_STATE_A_IDLE; 199 + ab->otg.default_a = true; 200 + event = USB_EVENT_ID; 201 + break; 202 + 203 + case USB_LINK_ACA_RID_A: 204 + case USB_LINK_ACA_RID_B: 205 + /* TODO */ 206 + case USB_LINK_ACA_RID_C_NM: 207 + case USB_LINK_ACA_RID_C_HS: 208 + case USB_LINK_ACA_RID_C_HS_CHIRP: 209 + case USB_LINK_DEDICATED_CHG: 210 + /* TODO: vbus_draw */ 211 + event = USB_EVENT_CHARGER; 212 + break; 213 + } 214 + 215 + blocking_notifier_call_chain(&ab->otg.notifier, event, v); 216 + 217 + return 0; 218 + } 219 + 220 + static void ab8500_usb_delayed_work(struct work_struct *work) 221 + { 222 + struct ab8500_usb *ab = container_of(work, struct ab8500_usb, 223 + dwork.work); 224 + 225 + ab8500_usb_link_status_update(ab); 226 + } 227 + 228 + static irqreturn_t ab8500_usb_v1x_common_irq(int irq, void *data) 229 + { 230 + struct ab8500_usb *ab = (struct ab8500_usb *) data; 231 + 232 + /* Wait for link status to become stable. */ 233 + schedule_delayed_work(&ab->dwork, ab->link_status_wait); 234 + 235 + return IRQ_HANDLED; 236 + } 237 + 238 + static irqreturn_t ab8500_usb_v1x_vbus_fall_irq(int irq, void *data) 239 + { 240 + struct ab8500_usb *ab = (struct ab8500_usb *) data; 241 + 242 + /* Link status will not be updated till phy is disabled. */ 243 + ab8500_usb_peri_phy_dis(ab); 244 + 245 + /* Wait for link status to become stable. 
*/ 246 + schedule_delayed_work(&ab->dwork, ab->link_status_wait); 247 + 248 + return IRQ_HANDLED; 249 + } 250 + 251 + static irqreturn_t ab8500_usb_v20_irq(int irq, void *data) 252 + { 253 + struct ab8500_usb *ab = (struct ab8500_usb *) data; 254 + 255 + ab8500_usb_link_status_update(ab); 256 + 257 + return IRQ_HANDLED; 258 + } 259 + 260 + static void ab8500_usb_phy_disable_work(struct work_struct *work) 261 + { 262 + struct ab8500_usb *ab = container_of(work, struct ab8500_usb, 263 + phy_dis_work); 264 + 265 + if (!ab->otg.host) 266 + ab8500_usb_host_phy_dis(ab); 267 + 268 + if (!ab->otg.gadget) 269 + ab8500_usb_peri_phy_dis(ab); 270 + } 271 + 272 + static int ab8500_usb_set_power(struct otg_transceiver *otg, unsigned mA) 273 + { 274 + struct ab8500_usb *ab; 275 + 276 + if (!otg) 277 + return -ENODEV; 278 + 279 + ab = xceiv_to_ab(otg); 280 + 281 + ab->vbus_draw = mA; 282 + 283 + if (mA) 284 + blocking_notifier_call_chain(&ab->otg.notifier, 285 + USB_EVENT_ENUMERATED, ab->otg.gadget); 286 + return 0; 287 + } 288 + 289 + /* TODO: Implement some way for charging or other drivers to read 290 + * ab->vbus_draw. 291 + */ 292 + 293 + static int ab8500_usb_set_suspend(struct otg_transceiver *x, int suspend) 294 + { 295 + /* TODO */ 296 + return 0; 297 + } 298 + 299 + static int ab8500_usb_set_peripheral(struct otg_transceiver *otg, 300 + struct usb_gadget *gadget) 301 + { 302 + struct ab8500_usb *ab; 303 + 304 + if (!otg) 305 + return -ENODEV; 306 + 307 + ab = xceiv_to_ab(otg); 308 + 309 + /* Some drivers call this function in atomic context. 310 + * Do not update ab8500 registers directly till this 311 + * is fixed. 312 + */ 313 + 314 + if (!gadget) { 315 + /* TODO: Disable regulators. */ 316 + ab->otg.gadget = NULL; 317 + schedule_work(&ab->phy_dis_work); 318 + } else { 319 + ab->otg.gadget = gadget; 320 + ab->otg.state = OTG_STATE_B_IDLE; 321 + 322 + /* Phy will not be enabled if cable is already 323 + * plugged-in. Schedule to enable phy. 
324 + * Use same delay to avoid any race condition. 325 + */ 326 + schedule_delayed_work(&ab->dwork, ab->link_status_wait); 327 + } 328 + 329 + return 0; 330 + } 331 + 332 + static int ab8500_usb_set_host(struct otg_transceiver *otg, 333 + struct usb_bus *host) 334 + { 335 + struct ab8500_usb *ab; 336 + 337 + if (!otg) 338 + return -ENODEV; 339 + 340 + ab = xceiv_to_ab(otg); 341 + 342 + /* Some drivers call this function in atomic context. 343 + * Do not update ab8500 registers directly till this 344 + * is fixed. 345 + */ 346 + 347 + if (!host) { 348 + /* TODO: Disable regulators. */ 349 + ab->otg.host = NULL; 350 + schedule_work(&ab->phy_dis_work); 351 + } else { 352 + ab->otg.host = host; 353 + /* Phy will not be enabled if cable is already 354 + * plugged-in. Schedule to enable phy. 355 + * Use same delay to avoid any race condition. 356 + */ 357 + schedule_delayed_work(&ab->dwork, ab->link_status_wait); 358 + } 359 + 360 + return 0; 361 + } 362 + 363 + static void ab8500_usb_irq_free(struct ab8500_usb *ab) 364 + { 365 + if (ab->rev < 0x20) { 366 + free_irq(ab->irq_num_id_rise, ab); 367 + free_irq(ab->irq_num_id_fall, ab); 368 + free_irq(ab->irq_num_vbus_rise, ab); 369 + free_irq(ab->irq_num_vbus_fall, ab); 370 + } else { 371 + free_irq(ab->irq_num_link_status, ab); 372 + } 373 + } 374 + 375 + static int ab8500_usb_v1x_res_setup(struct platform_device *pdev, 376 + struct ab8500_usb *ab) 377 + { 378 + int err; 379 + 380 + ab->irq_num_id_rise = platform_get_irq_byname(pdev, "ID_WAKEUP_R"); 381 + if (ab->irq_num_id_rise < 0) { 382 + dev_err(&pdev->dev, "ID rise irq not found\n"); 383 + return ab->irq_num_id_rise; 384 + } 385 + err = request_threaded_irq(ab->irq_num_id_rise, NULL, 386 + ab8500_usb_v1x_common_irq, 387 + IRQF_NO_SUSPEND | IRQF_SHARED, 388 + "usb-id-rise", ab); 389 + if (err < 0) { 390 + dev_err(ab->dev, "request_irq failed for ID rise irq\n"); 391 + goto fail0; 392 + } 393 + 394 + ab->irq_num_id_fall = platform_get_irq_byname(pdev, "ID_WAKEUP_F"); 
395 + if (ab->irq_num_id_fall < 0) { 396 + dev_err(&pdev->dev, "ID fall irq not found\n"); 397 + return ab->irq_num_id_fall; 398 + } 399 + err = request_threaded_irq(ab->irq_num_id_fall, NULL, 400 + ab8500_usb_v1x_common_irq, 401 + IRQF_NO_SUSPEND | IRQF_SHARED, 402 + "usb-id-fall", ab); 403 + if (err < 0) { 404 + dev_err(ab->dev, "request_irq failed for ID fall irq\n"); 405 + goto fail1; 406 + } 407 + 408 + ab->irq_num_vbus_rise = platform_get_irq_byname(pdev, "VBUS_DET_R"); 409 + if (ab->irq_num_vbus_rise < 0) { 410 + dev_err(&pdev->dev, "VBUS rise irq not found\n"); 411 + return ab->irq_num_vbus_rise; 412 + } 413 + err = request_threaded_irq(ab->irq_num_vbus_rise, NULL, 414 + ab8500_usb_v1x_common_irq, 415 + IRQF_NO_SUSPEND | IRQF_SHARED, 416 + "usb-vbus-rise", ab); 417 + if (err < 0) { 418 + dev_err(ab->dev, "request_irq failed for Vbus rise irq\n"); 419 + goto fail2; 420 + } 421 + 422 + ab->irq_num_vbus_fall = platform_get_irq_byname(pdev, "VBUS_DET_F"); 423 + if (ab->irq_num_vbus_fall < 0) { 424 + dev_err(&pdev->dev, "VBUS fall irq not found\n"); 425 + return ab->irq_num_vbus_fall; 426 + } 427 + err = request_threaded_irq(ab->irq_num_vbus_fall, NULL, 428 + ab8500_usb_v1x_vbus_fall_irq, 429 + IRQF_NO_SUSPEND | IRQF_SHARED, 430 + "usb-vbus-fall", ab); 431 + if (err < 0) { 432 + dev_err(ab->dev, "request_irq failed for Vbus fall irq\n"); 433 + goto fail3; 434 + } 435 + 436 + return 0; 437 + fail3: 438 + free_irq(ab->irq_num_vbus_rise, ab); 439 + fail2: 440 + free_irq(ab->irq_num_id_fall, ab); 441 + fail1: 442 + free_irq(ab->irq_num_id_rise, ab); 443 + fail0: 444 + return err; 445 + } 446 + 447 + static int ab8500_usb_v2_res_setup(struct platform_device *pdev, 448 + struct ab8500_usb *ab) 449 + { 450 + int err; 451 + 452 + ab->irq_num_link_status = platform_get_irq_byname(pdev, 453 + "USB_LINK_STATUS"); 454 + if (ab->irq_num_link_status < 0) { 455 + dev_err(&pdev->dev, "Link status irq not found\n"); 456 + return ab->irq_num_link_status; 457 + } 458 + 459 + err = 
request_threaded_irq(ab->irq_num_link_status, NULL, 460 + ab8500_usb_v20_irq, 461 + IRQF_NO_SUSPEND | IRQF_SHARED, 462 + "usb-link-status", ab); 463 + if (err < 0) { 464 + dev_err(ab->dev, 465 + "request_irq failed for link status irq\n"); 466 + return err; 467 + } 468 + 469 + return 0; 470 + } 471 + 472 + static int __devinit ab8500_usb_probe(struct platform_device *pdev) 473 + { 474 + struct ab8500_usb *ab; 475 + int err; 476 + int rev; 477 + 478 + rev = abx500_get_chip_id(&pdev->dev); 479 + if (rev < 0) { 480 + dev_err(&pdev->dev, "Chip id read failed\n"); 481 + return rev; 482 + } else if (rev < 0x10) { 483 + dev_err(&pdev->dev, "Unsupported AB8500 chip\n"); 484 + return -ENODEV; 485 + } 486 + 487 + ab = kzalloc(sizeof *ab, GFP_KERNEL); 488 + if (!ab) 489 + return -ENOMEM; 490 + 491 + ab->dev = &pdev->dev; 492 + ab->rev = rev; 493 + ab->otg.dev = ab->dev; 494 + ab->otg.label = "ab8500"; 495 + ab->otg.state = OTG_STATE_UNDEFINED; 496 + ab->otg.set_host = ab8500_usb_set_host; 497 + ab->otg.set_peripheral = ab8500_usb_set_peripheral; 498 + ab->otg.set_suspend = ab8500_usb_set_suspend; 499 + ab->otg.set_power = ab8500_usb_set_power; 500 + 501 + platform_set_drvdata(pdev, ab); 502 + 503 + BLOCKING_INIT_NOTIFIER_HEAD(&ab->otg.notifier); 504 + 505 + /* v1: Wait for link status to become stable. 506 + * all: Updates from set_host and set_peripheral as they are atomic.
507 + */
508 + INIT_DELAYED_WORK(&ab->dwork, ab8500_usb_delayed_work);
509 +
510 + /* all: Disable phy when called from set_host and set_peripheral */
511 + INIT_WORK(&ab->phy_dis_work, ab8500_usb_phy_disable_work);
512 +
513 + if (ab->rev < 0x20) {
514 + err = ab8500_usb_v1x_res_setup(pdev, ab);
515 + ab->link_status_wait = AB8500_V1x_LINK_STAT_WAIT;
516 + } else {
517 + err = ab8500_usb_v2_res_setup(pdev, ab);
518 + }
519 +
520 + if (err < 0)
521 + goto fail0;
522 +
523 + err = otg_set_transceiver(&ab->otg);
524 + if (err) {
525 + dev_err(&pdev->dev, "Can't register transceiver\n");
526 + goto fail1;
527 + }
528 +
529 + dev_info(&pdev->dev, "AB8500 usb driver initialized\n");
530 +
531 + return 0;
532 + fail1:
533 + ab8500_usb_irq_free(ab);
534 + fail0:
535 + kfree(ab);
536 + return err;
537 + }
538 +
539 + static int __devexit ab8500_usb_remove(struct platform_device *pdev)
540 + {
541 + struct ab8500_usb *ab = platform_get_drvdata(pdev);
542 +
543 + ab8500_usb_irq_free(ab);
544 +
545 + cancel_delayed_work_sync(&ab->dwork);
546 +
547 + cancel_work_sync(&ab->phy_dis_work);
548 +
549 + otg_set_transceiver(NULL);
550 +
551 + ab8500_usb_host_phy_dis(ab);
552 + ab8500_usb_peri_phy_dis(ab);
553 +
554 + platform_set_drvdata(pdev, NULL);
555 +
556 + kfree(ab);
557 +
558 + return 0;
559 + }
560 +
561 + static struct platform_driver ab8500_usb_driver = {
562 + .probe = ab8500_usb_probe,
563 + .remove = __devexit_p(ab8500_usb_remove),
564 + .driver = {
565 + .name = "ab8500-usb",
566 + .owner = THIS_MODULE,
567 + },
568 + };
569 +
570 + static int __init ab8500_usb_init(void)
571 + {
572 + return platform_driver_register(&ab8500_usb_driver);
573 + }
574 + subsys_initcall(ab8500_usb_init);
575 +
576 + static void __exit ab8500_usb_exit(void)
577 + {
578 + platform_driver_unregister(&ab8500_usb_driver);
579 + }
580 + module_exit(ab8500_usb_exit);
581 +
582 + MODULE_ALIAS("platform:ab8500_usb");
583 + MODULE_AUTHOR("ST-Ericsson AB");
584 + MODULE_DESCRIPTION("AB8500 usb transceiver driver");
585 + MODULE_LICENSE("GPL");
+1125
drivers/usb/otg/msm72k_otg.c
··· 1 + /* Copyright (c) 2009-2010, Code Aurora Forum. All rights reserved. 2 + * 3 + * This program is free software; you can redistribute it and/or modify 4 + * it under the terms of the GNU General Public License version 2 and 5 + * only version 2 as published by the Free Software Foundation. 6 + * 7 + * This program is distributed in the hope that it will be useful, 8 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 + * GNU General Public License for more details. 11 + * 12 + * You should have received a copy of the GNU General Public License 13 + * along with this program; if not, write to the Free Software 14 + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 15 + * 02110-1301, USA. 16 + * 17 + */ 18 + 19 + #include <linux/module.h> 20 + #include <linux/device.h> 21 + #include <linux/platform_device.h> 22 + #include <linux/clk.h> 23 + #include <linux/slab.h> 24 + #include <linux/interrupt.h> 25 + #include <linux/err.h> 26 + #include <linux/delay.h> 27 + #include <linux/io.h> 28 + #include <linux/ioport.h> 29 + #include <linux/uaccess.h> 30 + #include <linux/debugfs.h> 31 + #include <linux/seq_file.h> 32 + #include <linux/pm_runtime.h> 33 + 34 + #include <linux/usb.h> 35 + #include <linux/usb/otg.h> 36 + #include <linux/usb/ulpi.h> 37 + #include <linux/usb/gadget.h> 38 + #include <linux/usb/hcd.h> 39 + #include <linux/usb/msm_hsusb.h> 40 + #include <linux/usb/msm_hsusb_hw.h> 41 + 42 + #include <mach/clk.h> 43 + 44 + #define MSM_USB_BASE (motg->regs) 45 + #define DRIVER_NAME "msm_otg" 46 + 47 + #define ULPI_IO_TIMEOUT_USEC (10 * 1000) 48 + static int ulpi_read(struct otg_transceiver *otg, u32 reg) 49 + { 50 + struct msm_otg *motg = container_of(otg, struct msm_otg, otg); 51 + int cnt = 0; 52 + 53 + /* initiate read operation */ 54 + writel(ULPI_RUN | ULPI_READ | ULPI_ADDR(reg), 55 + USB_ULPI_VIEWPORT); 56 + 57 + /* wait for completion */ 58 + while (cnt < 
ULPI_IO_TIMEOUT_USEC) { 59 + if (!(readl(USB_ULPI_VIEWPORT) & ULPI_RUN)) 60 + break; 61 + udelay(1); 62 + cnt++; 63 + } 64 + 65 + if (cnt >= ULPI_IO_TIMEOUT_USEC) { 66 + dev_err(otg->dev, "ulpi_read: timeout %08x\n", 67 + readl(USB_ULPI_VIEWPORT)); 68 + return -ETIMEDOUT; 69 + } 70 + return ULPI_DATA_READ(readl(USB_ULPI_VIEWPORT)); 71 + } 72 + 73 + static int ulpi_write(struct otg_transceiver *otg, u32 val, u32 reg) 74 + { 75 + struct msm_otg *motg = container_of(otg, struct msm_otg, otg); 76 + int cnt = 0; 77 + 78 + /* initiate write operation */ 79 + writel(ULPI_RUN | ULPI_WRITE | 80 + ULPI_ADDR(reg) | ULPI_DATA(val), 81 + USB_ULPI_VIEWPORT); 82 + 83 + /* wait for completion */ 84 + while (cnt < ULPI_IO_TIMEOUT_USEC) { 85 + if (!(readl(USB_ULPI_VIEWPORT) & ULPI_RUN)) 86 + break; 87 + udelay(1); 88 + cnt++; 89 + } 90 + 91 + if (cnt >= ULPI_IO_TIMEOUT_USEC) { 92 + dev_err(otg->dev, "ulpi_write: timeout\n"); 93 + return -ETIMEDOUT; 94 + } 95 + return 0; 96 + } 97 + 98 + static struct otg_io_access_ops msm_otg_io_ops = { 99 + .read = ulpi_read, 100 + .write = ulpi_write, 101 + }; 102 + 103 + static void ulpi_init(struct msm_otg *motg) 104 + { 105 + struct msm_otg_platform_data *pdata = motg->pdata; 106 + int *seq = pdata->phy_init_seq; 107 + 108 + if (!seq) 109 + return; 110 + 111 + while (seq[0] >= 0) { 112 + dev_vdbg(motg->otg.dev, "ulpi: write 0x%02x to 0x%02x\n", 113 + seq[0], seq[1]); 114 + ulpi_write(&motg->otg, seq[0], seq[1]); 115 + seq += 2; 116 + } 117 + } 118 + 119 + static int msm_otg_link_clk_reset(struct msm_otg *motg, bool assert) 120 + { 121 + int ret; 122 + 123 + if (assert) { 124 + ret = clk_reset(motg->clk, CLK_RESET_ASSERT); 125 + if (ret) 126 + dev_err(motg->otg.dev, "usb hs_clk assert failed\n"); 127 + } else { 128 + ret = clk_reset(motg->clk, CLK_RESET_DEASSERT); 129 + if (ret) 130 + dev_err(motg->otg.dev, "usb hs_clk deassert failed\n"); 131 + } 132 + return ret; 133 + } 134 + 135 + static int msm_otg_phy_clk_reset(struct msm_otg *motg) 136 + 
{ 137 + int ret; 138 + 139 + ret = clk_reset(motg->phy_reset_clk, CLK_RESET_ASSERT); 140 + if (ret) { 141 + dev_err(motg->otg.dev, "usb phy clk assert failed\n"); 142 + return ret; 143 + } 144 + usleep_range(10000, 12000); 145 + ret = clk_reset(motg->phy_reset_clk, CLK_RESET_DEASSERT); 146 + if (ret) 147 + dev_err(motg->otg.dev, "usb phy clk deassert failed\n"); 148 + return ret; 149 + } 150 + 151 + static int msm_otg_phy_reset(struct msm_otg *motg) 152 + { 153 + u32 val; 154 + int ret; 155 + int retries; 156 + 157 + ret = msm_otg_link_clk_reset(motg, 1); 158 + if (ret) 159 + return ret; 160 + ret = msm_otg_phy_clk_reset(motg); 161 + if (ret) 162 + return ret; 163 + ret = msm_otg_link_clk_reset(motg, 0); 164 + if (ret) 165 + return ret; 166 + 167 + val = readl(USB_PORTSC) & ~PORTSC_PTS_MASK; 168 + writel(val | PORTSC_PTS_ULPI, USB_PORTSC); 169 + 170 + for (retries = 3; retries > 0; retries--) { 171 + ret = ulpi_write(&motg->otg, ULPI_FUNC_CTRL_SUSPENDM, 172 + ULPI_CLR(ULPI_FUNC_CTRL)); 173 + if (!ret) 174 + break; 175 + ret = msm_otg_phy_clk_reset(motg); 176 + if (ret) 177 + return ret; 178 + } 179 + if (!retries) 180 + return -ETIMEDOUT; 181 + 182 + /* This reset calibrates the phy, if the above write succeeded */ 183 + ret = msm_otg_phy_clk_reset(motg); 184 + if (ret) 185 + return ret; 186 + 187 + for (retries = 3; retries > 0; retries--) { 188 + ret = ulpi_read(&motg->otg, ULPI_DEBUG); 189 + if (ret != -ETIMEDOUT) 190 + break; 191 + ret = msm_otg_phy_clk_reset(motg); 192 + if (ret) 193 + return ret; 194 + } 195 + if (!retries) 196 + return -ETIMEDOUT; 197 + 198 + dev_info(motg->otg.dev, "phy_reset: success\n"); 199 + return 0; 200 + } 201 + 202 + #define LINK_RESET_TIMEOUT_USEC (250 * 1000) 203 + static int msm_otg_reset(struct otg_transceiver *otg) 204 + { 205 + struct msm_otg *motg = container_of(otg, struct msm_otg, otg); 206 + struct msm_otg_platform_data *pdata = motg->pdata; 207 + int cnt = 0; 208 + int ret; 209 + u32 val = 0; 210 + u32 ulpi_val = 0; 211 + 
212 + ret = msm_otg_phy_reset(motg); 213 + if (ret) { 214 + dev_err(otg->dev, "phy_reset failed\n"); 215 + return ret; 216 + } 217 + 218 + ulpi_init(motg); 219 + 220 + writel(USBCMD_RESET, USB_USBCMD); 221 + while (cnt < LINK_RESET_TIMEOUT_USEC) { 222 + if (!(readl(USB_USBCMD) & USBCMD_RESET)) 223 + break; 224 + udelay(1); 225 + cnt++; 226 + } 227 + if (cnt >= LINK_RESET_TIMEOUT_USEC) 228 + return -ETIMEDOUT; 229 + 230 + /* select ULPI phy */ 231 + writel(0x80000000, USB_PORTSC); 232 + 233 + msleep(100); 234 + 235 + writel(0x0, USB_AHBBURST); 236 + writel(0x00, USB_AHBMODE); 237 + 238 + if (pdata->otg_control == OTG_PHY_CONTROL) { 239 + val = readl(USB_OTGSC); 240 + if (pdata->mode == USB_OTG) { 241 + ulpi_val = ULPI_INT_IDGRD | ULPI_INT_SESS_VALID; 242 + val |= OTGSC_IDIE | OTGSC_BSVIE; 243 + } else if (pdata->mode == USB_PERIPHERAL) { 244 + ulpi_val = ULPI_INT_SESS_VALID; 245 + val |= OTGSC_BSVIE; 246 + } 247 + writel(val, USB_OTGSC); 248 + ulpi_write(otg, ulpi_val, ULPI_USB_INT_EN_RISE); 249 + ulpi_write(otg, ulpi_val, ULPI_USB_INT_EN_FALL); 250 + } 251 + 252 + return 0; 253 + } 254 + 255 + #define PHY_SUSPEND_TIMEOUT_USEC (500 * 1000) 256 + static int msm_otg_suspend(struct msm_otg *motg) 257 + { 258 + struct otg_transceiver *otg = &motg->otg; 259 + struct usb_bus *bus = otg->host; 260 + struct msm_otg_platform_data *pdata = motg->pdata; 261 + int cnt = 0; 262 + 263 + if (atomic_read(&motg->in_lpm)) 264 + return 0; 265 + 266 + disable_irq(motg->irq); 267 + /* 268 + * Interrupt Latch Register auto-clear feature is not present 269 + * in all PHY versions. Latch register is clear on read type. 270 + * Clear latch register to avoid spurious wakeup from 271 + * low power mode (LPM). 272 + */ 273 + ulpi_read(otg, 0x14); 274 + 275 + /* 276 + * PHY comparators are disabled when PHY enters into low power 277 + * mode (LPM). Keep PHY comparators ON in LPM only when we expect 278 + * VBUS/Id notifications from USB PHY. Otherwise turn off USB 279 + * PHY comparators. 
This saves a significant amount of power.
280 + */
281 + if (pdata->otg_control == OTG_PHY_CONTROL)
282 + ulpi_write(otg, 0x01, 0x30);
283 +
284 + /*
285 + * PLL is not turned off when PHY enters into low power mode (LPM).
286 + * Disable PLL for maximum power savings.
287 + */
288 + ulpi_write(otg, 0x08, 0x09);
289 +
290 + /*
291 + * PHY may take some time or even fail to enter into low power
292 + * mode (LPM). Hence poll for 500 msec and reset the PHY and link
293 + * in failure case.
294 + */
295 + writel(readl(USB_PORTSC) | PORTSC_PHCD, USB_PORTSC);
296 + while (cnt < PHY_SUSPEND_TIMEOUT_USEC) {
297 + if (readl(USB_PORTSC) & PORTSC_PHCD)
298 + break;
299 + udelay(1);
300 + cnt++;
301 + }
302 +
303 + if (cnt >= PHY_SUSPEND_TIMEOUT_USEC) {
304 + dev_err(otg->dev, "Unable to suspend PHY\n");
305 + msm_otg_reset(otg);
306 + enable_irq(motg->irq);
307 + return -ETIMEDOUT;
308 + }
309 +
310 + /*
311 + * PHY has capability to generate interrupt asynchronously in low
312 + * power mode (LPM). This interrupt is level triggered. So USB IRQ
313 + * line must be disabled till async interrupt enable bit is cleared
314 + * in USBCMD register. Assert STP (ULPI interface STOP signal) to
315 + * block data communication from PHY.
316 + */ 317 + writel(readl(USB_USBCMD) | ASYNC_INTR_CTRL | ULPI_STP_CTRL, USB_USBCMD); 318 + 319 + clk_disable(motg->pclk); 320 + clk_disable(motg->clk); 321 + if (motg->core_clk) 322 + clk_disable(motg->core_clk); 323 + 324 + if (device_may_wakeup(otg->dev)) 325 + enable_irq_wake(motg->irq); 326 + if (bus) 327 + clear_bit(HCD_FLAG_HW_ACCESSIBLE, &(bus_to_hcd(bus))->flags); 328 + 329 + atomic_set(&motg->in_lpm, 1); 330 + enable_irq(motg->irq); 331 + 332 + dev_info(otg->dev, "USB in low power mode\n"); 333 + 334 + return 0; 335 + } 336 + 337 + #define PHY_RESUME_TIMEOUT_USEC (100 * 1000) 338 + static int msm_otg_resume(struct msm_otg *motg) 339 + { 340 + struct otg_transceiver *otg = &motg->otg; 341 + struct usb_bus *bus = otg->host; 342 + int cnt = 0; 343 + unsigned temp; 344 + 345 + if (!atomic_read(&motg->in_lpm)) 346 + return 0; 347 + 348 + clk_enable(motg->pclk); 349 + clk_enable(motg->clk); 350 + if (motg->core_clk) 351 + clk_enable(motg->core_clk); 352 + 353 + temp = readl(USB_USBCMD); 354 + temp &= ~ASYNC_INTR_CTRL; 355 + temp &= ~ULPI_STP_CTRL; 356 + writel(temp, USB_USBCMD); 357 + 358 + /* 359 + * PHY comes out of low power mode (LPM) in case of wakeup 360 + * from asynchronous interrupt. 361 + */ 362 + if (!(readl(USB_PORTSC) & PORTSC_PHCD)) 363 + goto skip_phy_resume; 364 + 365 + writel(readl(USB_PORTSC) & ~PORTSC_PHCD, USB_PORTSC); 366 + while (cnt < PHY_RESUME_TIMEOUT_USEC) { 367 + if (!(readl(USB_PORTSC) & PORTSC_PHCD)) 368 + break; 369 + udelay(1); 370 + cnt++; 371 + } 372 + 373 + if (cnt >= PHY_RESUME_TIMEOUT_USEC) { 374 + /* 375 + * This is a fatal error. Reset the link and 376 + * PHY. USB state can not be restored. Re-insertion 377 + * of USB cable is the only way to get USB working. 378 + */ 379 + dev_err(otg->dev, "Unable to resume USB." 
380 + " Re-plug in the cable\n");
381 + msm_otg_reset(otg);
382 + }
383 +
384 + skip_phy_resume:
385 + if (device_may_wakeup(otg->dev))
386 + disable_irq_wake(motg->irq);
387 + if (bus)
388 + set_bit(HCD_FLAG_HW_ACCESSIBLE, &(bus_to_hcd(bus))->flags);
389 +
390 + if (motg->async_int) {
391 + motg->async_int = 0;
392 + pm_runtime_put(otg->dev);
393 + enable_irq(motg->irq);
394 + }
395 +
396 + atomic_set(&motg->in_lpm, 0);
397 +
398 + dev_info(otg->dev, "USB exited from low power mode\n");
399 +
400 + return 0;
401 + }
402 +
403 + static void msm_otg_start_host(struct otg_transceiver *otg, int on)
404 + {
405 + struct msm_otg *motg = container_of(otg, struct msm_otg, otg);
406 + struct msm_otg_platform_data *pdata = motg->pdata;
407 + struct usb_hcd *hcd;
408 +
409 + if (!otg->host)
410 + return;
411 +
412 + hcd = bus_to_hcd(otg->host);
413 +
414 + if (on) {
415 + dev_dbg(otg->dev, "host on\n");
416 +
417 + if (pdata->vbus_power)
418 + pdata->vbus_power(1);
419 + /*
420 + * Some boards have a switch controlled by gpio
421 + * to enable/disable internal HUB. Enable internal
422 + * HUB before kicking the host.
423 + */
424 + if (pdata->setup_gpio)
425 + pdata->setup_gpio(OTG_STATE_A_HOST);
426 + #ifdef CONFIG_USB
427 + usb_add_hcd(hcd, hcd->irq, IRQF_SHARED);
428 + #endif
429 + } else {
430 + dev_dbg(otg->dev, "host off\n");
431 +
432 + #ifdef CONFIG_USB
433 + usb_remove_hcd(hcd);
434 + #endif
435 + if (pdata->setup_gpio)
436 + pdata->setup_gpio(OTG_STATE_UNDEFINED);
437 + if (pdata->vbus_power)
438 + pdata->vbus_power(0);
439 + }
440 + }
441 +
442 + static int msm_otg_set_host(struct otg_transceiver *otg, struct usb_bus *host)
443 + {
444 + struct msm_otg *motg = container_of(otg, struct msm_otg, otg);
445 + struct usb_hcd *hcd;
446 +
447 + /*
448 + * Fail host registration if this board can support
449 + * only peripheral configuration.
450 + */
451 + if (motg->pdata->mode == USB_PERIPHERAL) {
452 + dev_info(otg->dev, "Host mode is not supported\n");
453 + return -ENODEV;
454 + }
455 +
456 + if (!host) {
457 + if (otg->state == OTG_STATE_A_HOST) {
458 + pm_runtime_get_sync(otg->dev);
459 + msm_otg_start_host(otg, 0);
460 + otg->host = NULL;
461 + otg->state = OTG_STATE_UNDEFINED;
462 + schedule_work(&motg->sm_work);
463 + } else {
464 + otg->host = NULL;
465 + }
466 +
467 + return 0;
468 + }
469 +
470 + hcd = bus_to_hcd(host);
471 + hcd->power_budget = motg->pdata->power_budget;
472 +
473 + otg->host = host;
474 + dev_dbg(otg->dev, "host driver registered w/ transceiver\n");
475 +
476 + /*
477 + * Kick the state machine work, if peripheral is not supported
478 + * or peripheral is already registered with us.
479 + */
480 + if (motg->pdata->mode == USB_HOST || otg->gadget) {
481 + pm_runtime_get_sync(otg->dev);
482 + schedule_work(&motg->sm_work);
483 + }
484 +
485 + return 0;
486 + }
487 +
488 + static void msm_otg_start_peripheral(struct otg_transceiver *otg, int on)
489 + {
490 + struct msm_otg *motg = container_of(otg, struct msm_otg, otg);
491 + struct msm_otg_platform_data *pdata = motg->pdata;
492 +
493 + if (!otg->gadget)
494 + return;
495 +
496 + if (on) {
497 + dev_dbg(otg->dev, "gadget on\n");
498 + /*
499 + * Some boards have a switch controlled by gpio
500 + * to enable/disable internal HUB. Disable internal
501 + * HUB before kicking the gadget.
502 + */
503 + if (pdata->setup_gpio)
504 + pdata->setup_gpio(OTG_STATE_B_PERIPHERAL);
505 + usb_gadget_vbus_connect(otg->gadget);
506 + } else {
507 + dev_dbg(otg->dev, "gadget off\n");
508 + usb_gadget_vbus_disconnect(otg->gadget);
509 + if (pdata->setup_gpio)
510 + pdata->setup_gpio(OTG_STATE_UNDEFINED);
511 + }
512 +
513 + }
514 +
515 + static int msm_otg_set_peripheral(struct otg_transceiver *otg,
516 + struct usb_gadget *gadget)
517 + {
518 + struct msm_otg *motg = container_of(otg, struct msm_otg, otg);
519 +
520 + /*
521 + * Fail peripheral registration if this board can support
522 + * only host configuration.
523 + */
524 + if (motg->pdata->mode == USB_HOST) {
525 + dev_info(otg->dev, "Peripheral mode is not supported\n");
526 + return -ENODEV;
527 + }
528 +
529 + if (!gadget) {
530 + if (otg->state == OTG_STATE_B_PERIPHERAL) {
531 + pm_runtime_get_sync(otg->dev);
532 + msm_otg_start_peripheral(otg, 0);
533 + otg->gadget = NULL;
534 + otg->state = OTG_STATE_UNDEFINED;
535 + schedule_work(&motg->sm_work);
536 + } else {
537 + otg->gadget = NULL;
538 + }
539 +
540 + return 0;
541 + }
542 + otg->gadget = gadget;
543 + dev_dbg(otg->dev, "peripheral driver registered w/ transceiver\n");
544 +
545 + /*
546 + * Kick the state machine work, if host is not supported
547 + * or host is already registered with us.
548 + */
549 + if (motg->pdata->mode == USB_PERIPHERAL || otg->host) {
550 + pm_runtime_get_sync(otg->dev);
551 + schedule_work(&motg->sm_work);
552 + }
553 +
554 + return 0;
555 + }
556 +
557 + /*
558 + * We support OTG, Peripheral only and Host only configurations. In case
559 + * of OTG, mode switch (host-->peripheral/peripheral-->host) can happen
560 + * via Id pin status or user request (debugfs). Id/BSV interrupts are not
561 + * enabled when switch is controlled by user and default mode is supplied
562 + * by board file, which can be changed by userspace later.
563 + */ 564 + static void msm_otg_init_sm(struct msm_otg *motg) 565 + { 566 + struct msm_otg_platform_data *pdata = motg->pdata; 567 + u32 otgsc = readl(USB_OTGSC); 568 + 569 + switch (pdata->mode) { 570 + case USB_OTG: 571 + if (pdata->otg_control == OTG_PHY_CONTROL) { 572 + if (otgsc & OTGSC_ID) 573 + set_bit(ID, &motg->inputs); 574 + else 575 + clear_bit(ID, &motg->inputs); 576 + 577 + if (otgsc & OTGSC_BSV) 578 + set_bit(B_SESS_VLD, &motg->inputs); 579 + else 580 + clear_bit(B_SESS_VLD, &motg->inputs); 581 + } else if (pdata->otg_control == OTG_USER_CONTROL) { 582 + if (pdata->default_mode == USB_HOST) { 583 + clear_bit(ID, &motg->inputs); 584 + } else if (pdata->default_mode == USB_PERIPHERAL) { 585 + set_bit(ID, &motg->inputs); 586 + set_bit(B_SESS_VLD, &motg->inputs); 587 + } else { 588 + set_bit(ID, &motg->inputs); 589 + clear_bit(B_SESS_VLD, &motg->inputs); 590 + } 591 + } 592 + break; 593 + case USB_HOST: 594 + clear_bit(ID, &motg->inputs); 595 + break; 596 + case USB_PERIPHERAL: 597 + set_bit(ID, &motg->inputs); 598 + if (otgsc & OTGSC_BSV) 599 + set_bit(B_SESS_VLD, &motg->inputs); 600 + else 601 + clear_bit(B_SESS_VLD, &motg->inputs); 602 + break; 603 + default: 604 + break; 605 + } 606 + } 607 + 608 + static void msm_otg_sm_work(struct work_struct *w) 609 + { 610 + struct msm_otg *motg = container_of(w, struct msm_otg, sm_work); 611 + struct otg_transceiver *otg = &motg->otg; 612 + 613 + switch (otg->state) { 614 + case OTG_STATE_UNDEFINED: 615 + dev_dbg(otg->dev, "OTG_STATE_UNDEFINED state\n"); 616 + msm_otg_reset(otg); 617 + msm_otg_init_sm(motg); 618 + otg->state = OTG_STATE_B_IDLE; 619 + /* FALL THROUGH */ 620 + case OTG_STATE_B_IDLE: 621 + dev_dbg(otg->dev, "OTG_STATE_B_IDLE state\n"); 622 + if (!test_bit(ID, &motg->inputs) && otg->host) { 623 + /* disable BSV bit */ 624 + writel(readl(USB_OTGSC) & ~OTGSC_BSVIE, USB_OTGSC); 625 + msm_otg_start_host(otg, 1); 626 + otg->state = OTG_STATE_A_HOST; 627 + } else if (test_bit(B_SESS_VLD, &motg->inputs) 
&& otg->gadget) { 628 + msm_otg_start_peripheral(otg, 1); 629 + otg->state = OTG_STATE_B_PERIPHERAL; 630 + } 631 + pm_runtime_put_sync(otg->dev); 632 + break; 633 + case OTG_STATE_B_PERIPHERAL: 634 + dev_dbg(otg->dev, "OTG_STATE_B_PERIPHERAL state\n"); 635 + if (!test_bit(B_SESS_VLD, &motg->inputs) || 636 + !test_bit(ID, &motg->inputs)) { 637 + msm_otg_start_peripheral(otg, 0); 638 + otg->state = OTG_STATE_B_IDLE; 639 + msm_otg_reset(otg); 640 + schedule_work(w); 641 + } 642 + break; 643 + case OTG_STATE_A_HOST: 644 + dev_dbg(otg->dev, "OTG_STATE_A_HOST state\n"); 645 + if (test_bit(ID, &motg->inputs)) { 646 + msm_otg_start_host(otg, 0); 647 + otg->state = OTG_STATE_B_IDLE; 648 + msm_otg_reset(otg); 649 + schedule_work(w); 650 + } 651 + break; 652 + default: 653 + break; 654 + } 655 + } 656 + 657 + static irqreturn_t msm_otg_irq(int irq, void *data) 658 + { 659 + struct msm_otg *motg = data; 660 + struct otg_transceiver *otg = &motg->otg; 661 + u32 otgsc = 0; 662 + 663 + if (atomic_read(&motg->in_lpm)) { 664 + disable_irq_nosync(irq); 665 + motg->async_int = 1; 666 + pm_runtime_get(otg->dev); 667 + return IRQ_HANDLED; 668 + } 669 + 670 + otgsc = readl(USB_OTGSC); 671 + if (!(otgsc & (OTGSC_IDIS | OTGSC_BSVIS))) 672 + return IRQ_NONE; 673 + 674 + if ((otgsc & OTGSC_IDIS) && (otgsc & OTGSC_IDIE)) { 675 + if (otgsc & OTGSC_ID) 676 + set_bit(ID, &motg->inputs); 677 + else 678 + clear_bit(ID, &motg->inputs); 679 + dev_dbg(otg->dev, "ID set/clear\n"); 680 + pm_runtime_get_noresume(otg->dev); 681 + } else if ((otgsc & OTGSC_BSVIS) && (otgsc & OTGSC_BSVIE)) { 682 + if (otgsc & OTGSC_BSV) 683 + set_bit(B_SESS_VLD, &motg->inputs); 684 + else 685 + clear_bit(B_SESS_VLD, &motg->inputs); 686 + dev_dbg(otg->dev, "BSV set/clear\n"); 687 + pm_runtime_get_noresume(otg->dev); 688 + } 689 + 690 + writel(otgsc, USB_OTGSC); 691 + schedule_work(&motg->sm_work); 692 + return IRQ_HANDLED; 693 + } 694 + 695 + static int msm_otg_mode_show(struct seq_file *s, void *unused) 696 + { 697 + 
struct msm_otg *motg = s->private; 698 + struct otg_transceiver *otg = &motg->otg; 699 + 700 + switch (otg->state) { 701 + case OTG_STATE_A_HOST: 702 + seq_printf(s, "host\n"); 703 + break; 704 + case OTG_STATE_B_PERIPHERAL: 705 + seq_printf(s, "peripheral\n"); 706 + break; 707 + default: 708 + seq_printf(s, "none\n"); 709 + break; 710 + } 711 + 712 + return 0; 713 + } 714 + 715 + static int msm_otg_mode_open(struct inode *inode, struct file *file) 716 + { 717 + return single_open(file, msm_otg_mode_show, inode->i_private); 718 + } 719 + 720 + static ssize_t msm_otg_mode_write(struct file *file, const char __user *ubuf, 721 + size_t count, loff_t *ppos) 722 + { 723 + struct msm_otg *motg = file->private_data; 724 + char buf[16]; 725 + struct otg_transceiver *otg = &motg->otg; 726 + int status = count; 727 + enum usb_mode_type req_mode; 728 + 729 + memset(buf, 0x00, sizeof(buf)); 730 + 731 + if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count))) { 732 + status = -EFAULT; 733 + goto out; 734 + } 735 + 736 + if (!strncmp(buf, "host", 4)) { 737 + req_mode = USB_HOST; 738 + } else if (!strncmp(buf, "peripheral", 10)) { 739 + req_mode = USB_PERIPHERAL; 740 + } else if (!strncmp(buf, "none", 4)) { 741 + req_mode = USB_NONE; 742 + } else { 743 + status = -EINVAL; 744 + goto out; 745 + } 746 + 747 + switch (req_mode) { 748 + case USB_NONE: 749 + switch (otg->state) { 750 + case OTG_STATE_A_HOST: 751 + case OTG_STATE_B_PERIPHERAL: 752 + set_bit(ID, &motg->inputs); 753 + clear_bit(B_SESS_VLD, &motg->inputs); 754 + break; 755 + default: 756 + goto out; 757 + } 758 + break; 759 + case USB_PERIPHERAL: 760 + switch (otg->state) { 761 + case OTG_STATE_B_IDLE: 762 + case OTG_STATE_A_HOST: 763 + set_bit(ID, &motg->inputs); 764 + set_bit(B_SESS_VLD, &motg->inputs); 765 + break; 766 + default: 767 + goto out; 768 + } 769 + break; 770 + case USB_HOST: 771 + switch (otg->state) { 772 + case OTG_STATE_B_IDLE: 773 + case OTG_STATE_B_PERIPHERAL: 774 + clear_bit(ID, 
&motg->inputs); 775 + break; 776 + default: 777 + goto out; 778 + } 779 + break; 780 + default: 781 + goto out; 782 + } 783 + 784 + pm_runtime_get_sync(otg->dev); 785 + schedule_work(&motg->sm_work); 786 + out: 787 + return status; 788 + } 789 + 790 + const struct file_operations msm_otg_mode_fops = { 791 + .open = msm_otg_mode_open, 792 + .read = seq_read, 793 + .write = msm_otg_mode_write, 794 + .llseek = seq_lseek, 795 + .release = single_release, 796 + }; 797 + 798 + static struct dentry *msm_otg_dbg_root; 799 + static struct dentry *msm_otg_dbg_mode; 800 + 801 + static int msm_otg_debugfs_init(struct msm_otg *motg) 802 + { 803 + msm_otg_dbg_root = debugfs_create_dir("msm_otg", NULL); 804 + 805 + if (!msm_otg_dbg_root || IS_ERR(msm_otg_dbg_root)) 806 + return -ENODEV; 807 + 808 + msm_otg_dbg_mode = debugfs_create_file("mode", S_IRUGO | S_IWUSR, 809 + msm_otg_dbg_root, motg, &msm_otg_mode_fops); 810 + if (!msm_otg_dbg_mode) { 811 + debugfs_remove(msm_otg_dbg_root); 812 + msm_otg_dbg_root = NULL; 813 + return -ENODEV; 814 + } 815 + 816 + return 0; 817 + } 818 + 819 + static void msm_otg_debugfs_cleanup(void) 820 + { 821 + debugfs_remove(msm_otg_dbg_mode); 822 + debugfs_remove(msm_otg_dbg_root); 823 + } 824 + 825 + static int __init msm_otg_probe(struct platform_device *pdev) 826 + { 827 + int ret = 0; 828 + struct resource *res; 829 + struct msm_otg *motg; 830 + struct otg_transceiver *otg; 831 + 832 + dev_info(&pdev->dev, "msm_otg probe\n"); 833 + if (!pdev->dev.platform_data) { 834 + dev_err(&pdev->dev, "No platform data given. 
Bailing out\n"); 835 + return -ENODEV; 836 + } 837 + 838 + motg = kzalloc(sizeof(struct msm_otg), GFP_KERNEL); 839 + if (!motg) { 840 + dev_err(&pdev->dev, "unable to allocate msm_otg\n"); 841 + return -ENOMEM; 842 + } 843 + 844 + motg->pdata = pdev->dev.platform_data; 845 + otg = &motg->otg; 846 + otg->dev = &pdev->dev; 847 + 848 + motg->phy_reset_clk = clk_get(&pdev->dev, "usb_phy_clk"); 849 + if (IS_ERR(motg->phy_reset_clk)) { 850 + dev_err(&pdev->dev, "failed to get usb_phy_clk\n"); 851 + ret = PTR_ERR(motg->phy_reset_clk); 852 + goto free_motg; 853 + } 854 + 855 + motg->clk = clk_get(&pdev->dev, "usb_hs_clk"); 856 + if (IS_ERR(motg->clk)) { 857 + dev_err(&pdev->dev, "failed to get usb_hs_clk\n"); 858 + ret = PTR_ERR(motg->clk); 859 + goto put_phy_reset_clk; 860 + } 861 + 862 + motg->pclk = clk_get(&pdev->dev, "usb_hs_pclk"); 863 + if (IS_ERR(motg->pclk)) { 864 + dev_err(&pdev->dev, "failed to get usb_hs_pclk\n"); 865 + ret = PTR_ERR(motg->pclk); 866 + goto put_clk; 867 + } 868 + 869 + /* 870 + * USB core clock is not present on all MSM chips. This 871 + * clock is introduced to remove the dependency on AXI 872 + * bus frequency. 
873 + */ 874 + motg->core_clk = clk_get(&pdev->dev, "usb_hs_core_clk"); 875 + if (IS_ERR(motg->core_clk)) 876 + motg->core_clk = NULL; 877 + 878 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 879 + if (!res) { 880 + dev_err(&pdev->dev, "failed to get platform resource mem\n"); 881 + ret = -ENODEV; 882 + goto put_core_clk; 883 + } 884 + 885 + motg->regs = ioremap(res->start, resource_size(res)); 886 + if (!motg->regs) { 887 + dev_err(&pdev->dev, "ioremap failed\n"); 888 + ret = -ENOMEM; 889 + goto put_core_clk; 890 + } 891 + dev_info(&pdev->dev, "OTG regs = %p\n", motg->regs); 892 + 893 + motg->irq = platform_get_irq(pdev, 0); 894 + if (!motg->irq) { 895 + dev_err(&pdev->dev, "platform_get_irq failed\n"); 896 + ret = -ENODEV; 897 + goto free_regs; 898 + } 899 + 900 + clk_enable(motg->clk); 901 + clk_enable(motg->pclk); 902 + if (motg->core_clk) 903 + clk_enable(motg->core_clk); 904 + 905 + writel(0, USB_USBINTR); 906 + writel(0, USB_OTGSC); 907 + 908 + INIT_WORK(&motg->sm_work, msm_otg_sm_work); 909 + ret = request_irq(motg->irq, msm_otg_irq, IRQF_SHARED, 910 + "msm_otg", motg); 911 + if (ret) { 912 + dev_err(&pdev->dev, "request irq failed\n"); 913 + goto disable_clks; 914 + } 915 + 916 + otg->init = msm_otg_reset; 917 + otg->set_host = msm_otg_set_host; 918 + otg->set_peripheral = msm_otg_set_peripheral; 919 + 920 + otg->io_ops = &msm_otg_io_ops; 921 + 922 + ret = otg_set_transceiver(&motg->otg); 923 + if (ret) { 924 + dev_err(&pdev->dev, "otg_set_transceiver failed\n"); 925 + goto free_irq; 926 + } 927 + 928 + platform_set_drvdata(pdev, motg); 929 + device_init_wakeup(&pdev->dev, 1); 930 + 931 + if (motg->pdata->mode == USB_OTG && 932 + motg->pdata->otg_control == OTG_USER_CONTROL) { 933 + ret = msm_otg_debugfs_init(motg); 934 + if (ret) 935 + dev_dbg(&pdev->dev, "mode debugfs file is" 936 + "not available\n"); 937 + } 938 + 939 + pm_runtime_set_active(&pdev->dev); 940 + pm_runtime_enable(&pdev->dev); 941 + 942 + return 0; 943 + free_irq: 944 + 
free_irq(motg->irq, motg); 945 + disable_clks: 946 + clk_disable(motg->pclk); 947 + clk_disable(motg->clk); 948 + free_regs: 949 + iounmap(motg->regs); 950 + put_core_clk: 951 + if (motg->core_clk) 952 + clk_put(motg->core_clk); 953 + clk_put(motg->pclk); 954 + put_clk: 955 + clk_put(motg->clk); 956 + put_phy_reset_clk: 957 + clk_put(motg->phy_reset_clk); 958 + free_motg: 959 + kfree(motg); 960 + return ret; 961 + } 962 + 963 + static int __devexit msm_otg_remove(struct platform_device *pdev) 964 + { 965 + struct msm_otg *motg = platform_get_drvdata(pdev); 966 + struct otg_transceiver *otg = &motg->otg; 967 + int cnt = 0; 968 + 969 + if (otg->host || otg->gadget) 970 + return -EBUSY; 971 + 972 + msm_otg_debugfs_cleanup(); 973 + cancel_work_sync(&motg->sm_work); 974 + 975 + msm_otg_resume(motg); 976 + 977 + device_init_wakeup(&pdev->dev, 0); 978 + pm_runtime_disable(&pdev->dev); 979 + 980 + otg_set_transceiver(NULL); 981 + free_irq(motg->irq, motg); 982 + 983 + /* 984 + * Put PHY in low power mode. 
985 + */ 986 + ulpi_read(otg, 0x14); 987 + ulpi_write(otg, 0x08, 0x09); 988 + 989 + writel(readl(USB_PORTSC) | PORTSC_PHCD, USB_PORTSC); 990 + while (cnt < PHY_SUSPEND_TIMEOUT_USEC) { 991 + if (readl(USB_PORTSC) & PORTSC_PHCD) 992 + break; 993 + udelay(1); 994 + cnt++; 995 + } 996 + if (cnt >= PHY_SUSPEND_TIMEOUT_USEC) 997 + dev_err(otg->dev, "Unable to suspend PHY\n"); 998 + 999 + clk_disable(motg->pclk); 1000 + clk_disable(motg->clk); 1001 + if (motg->core_clk) 1002 + clk_disable(motg->core_clk); 1003 + 1004 + iounmap(motg->regs); 1005 + pm_runtime_set_suspended(&pdev->dev); 1006 + 1007 + clk_put(motg->phy_reset_clk); 1008 + clk_put(motg->pclk); 1009 + clk_put(motg->clk); 1010 + if (motg->core_clk) 1011 + clk_put(motg->core_clk); 1012 + 1013 + kfree(motg); 1014 + 1015 + return 0; 1016 + } 1017 + 1018 + #ifdef CONFIG_PM_RUNTIME 1019 + static int msm_otg_runtime_idle(struct device *dev) 1020 + { 1021 + struct msm_otg *motg = dev_get_drvdata(dev); 1022 + struct otg_transceiver *otg = &motg->otg; 1023 + 1024 + dev_dbg(dev, "OTG runtime idle\n"); 1025 + 1026 + /* 1027 + * It is observed some times that a spurious interrupt 1028 + * comes when PHY is put into LPM immediately after PHY reset. 1029 + * This 1 sec delay also prevents entering into LPM immediately 1030 + * after asynchronous interrupt. 
1031 + */ 1032 + if (otg->state != OTG_STATE_UNDEFINED) 1033 + pm_schedule_suspend(dev, 1000); 1034 + 1035 + return -EAGAIN; 1036 + } 1037 + 1038 + static int msm_otg_runtime_suspend(struct device *dev) 1039 + { 1040 + struct msm_otg *motg = dev_get_drvdata(dev); 1041 + 1042 + dev_dbg(dev, "OTG runtime suspend\n"); 1043 + return msm_otg_suspend(motg); 1044 + } 1045 + 1046 + static int msm_otg_runtime_resume(struct device *dev) 1047 + { 1048 + struct msm_otg *motg = dev_get_drvdata(dev); 1049 + 1050 + dev_dbg(dev, "OTG runtime resume\n"); 1051 + return msm_otg_resume(motg); 1052 + } 1053 + #else 1054 + #define msm_otg_runtime_idle NULL 1055 + #define msm_otg_runtime_suspend NULL 1056 + #define msm_otg_runtime_resume NULL 1057 + #endif 1058 + 1059 + #ifdef CONFIG_PM 1060 + static int msm_otg_pm_suspend(struct device *dev) 1061 + { 1062 + struct msm_otg *motg = dev_get_drvdata(dev); 1063 + 1064 + dev_dbg(dev, "OTG PM suspend\n"); 1065 + return msm_otg_suspend(motg); 1066 + } 1067 + 1068 + static int msm_otg_pm_resume(struct device *dev) 1069 + { 1070 + struct msm_otg *motg = dev_get_drvdata(dev); 1071 + int ret; 1072 + 1073 + dev_dbg(dev, "OTG PM resume\n"); 1074 + 1075 + ret = msm_otg_resume(motg); 1076 + if (ret) 1077 + return ret; 1078 + 1079 + /* 1080 + * Runtime PM Documentation recommends bringing the 1081 + * device to full powered state upon resume. 
1082 + */ 1083 + pm_runtime_disable(dev); 1084 + pm_runtime_set_active(dev); 1085 + pm_runtime_enable(dev); 1086 + 1087 + return 0; 1088 + } 1089 + #else 1090 + #define msm_otg_pm_suspend NULL 1091 + #define msm_otg_pm_resume NULL 1092 + #endif 1093 + 1094 + static const struct dev_pm_ops msm_otg_dev_pm_ops = { 1095 + .runtime_suspend = msm_otg_runtime_suspend, 1096 + .runtime_resume = msm_otg_runtime_resume, 1097 + .runtime_idle = msm_otg_runtime_idle, 1098 + .suspend = msm_otg_pm_suspend, 1099 + .resume = msm_otg_pm_resume, 1100 + }; 1101 + 1102 + static struct platform_driver msm_otg_driver = { 1103 + .remove = __devexit_p(msm_otg_remove), 1104 + .driver = { 1105 + .name = DRIVER_NAME, 1106 + .owner = THIS_MODULE, 1107 + .pm = &msm_otg_dev_pm_ops, 1108 + }, 1109 + }; 1110 + 1111 + static int __init msm_otg_init(void) 1112 + { 1113 + return platform_driver_probe(&msm_otg_driver, msm_otg_probe); 1114 + } 1115 + 1116 + static void __exit msm_otg_exit(void) 1117 + { 1118 + platform_driver_unregister(&msm_otg_driver); 1119 + } 1120 + 1121 + module_init(msm_otg_init); 1122 + module_exit(msm_otg_exit); 1123 + 1124 + MODULE_LICENSE("GPL v2"); 1125 + MODULE_DESCRIPTION("MSM USB transceiver driver");
+2 -1
drivers/usb/otg/twl4030-usb.c
···
678 678 /* disable complete OTG block */
679 679 twl4030_usb_clear_bits(twl, POWER_CTRL, POWER_CTRL_OTG_ENAB);
680 680
681 - twl4030_phy_power(twl, 0);
681 + if (!twl->asleep)
682 + twl4030_phy_power(twl, 0);
682 683 regulator_put(twl->usb1v5);
683 684 regulator_put(twl->usb1v8);
684 685 regulator_put(twl->usb3v1);
+493
drivers/usb/otg/twl6030-usb.c
··· 1 + /* 2 + * twl6030_usb - TWL6030 USB transceiver, talking to OMAP OTG driver. 3 + * 4 + * Copyright (C) 2010 Texas Instruments Incorporated - http://www.ti.com 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License as published by 7 + * the Free Software Foundation; either version 2 of the License, or 8 + * (at your option) any later version. 9 + * 10 + * Author: Hema HK <hemahk@ti.com> 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + * You should have received a copy of the GNU General Public License 18 + * along with this program; if not, write to the Free Software 19 + * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 20 + * 21 + */ 22 + 23 + #include <linux/module.h> 24 + #include <linux/init.h> 25 + #include <linux/interrupt.h> 26 + #include <linux/platform_device.h> 27 + #include <linux/io.h> 28 + #include <linux/usb/otg.h> 29 + #include <linux/i2c/twl.h> 30 + #include <linux/regulator/consumer.h> 31 + #include <linux/err.h> 32 + #include <linux/notifier.h> 33 + #include <linux/slab.h> 34 + 35 + /* usb register definitions */ 36 + #define USB_VENDOR_ID_LSB 0x00 37 + #define USB_VENDOR_ID_MSB 0x01 38 + #define USB_PRODUCT_ID_LSB 0x02 39 + #define USB_PRODUCT_ID_MSB 0x03 40 + #define USB_VBUS_CTRL_SET 0x04 41 + #define USB_VBUS_CTRL_CLR 0x05 42 + #define USB_ID_CTRL_SET 0x06 43 + #define USB_ID_CTRL_CLR 0x07 44 + #define USB_VBUS_INT_SRC 0x08 45 + #define USB_VBUS_INT_LATCH_SET 0x09 46 + #define USB_VBUS_INT_LATCH_CLR 0x0A 47 + #define USB_VBUS_INT_EN_LO_SET 0x0B 48 + #define USB_VBUS_INT_EN_LO_CLR 0x0C 49 + #define USB_VBUS_INT_EN_HI_SET 0x0D 50 + #define USB_VBUS_INT_EN_HI_CLR 0x0E 51 + #define USB_ID_INT_SRC 0x0F 52 + #define 
USB_ID_INT_LATCH_SET 0x10 53 + #define USB_ID_INT_LATCH_CLR 0x11 54 + 55 + #define USB_ID_INT_EN_LO_SET 0x12 56 + #define USB_ID_INT_EN_LO_CLR 0x13 57 + #define USB_ID_INT_EN_HI_SET 0x14 58 + #define USB_ID_INT_EN_HI_CLR 0x15 59 + #define USB_OTG_ADP_CTRL 0x16 60 + #define USB_OTG_ADP_HIGH 0x17 61 + #define USB_OTG_ADP_LOW 0x18 62 + #define USB_OTG_ADP_RISE 0x19 63 + #define USB_OTG_REVISION 0x1A 64 + 65 + /* to be moved to LDO */ 66 + #define TWL6030_MISC2 0xE5 67 + #define TWL6030_CFG_LDO_PD2 0xF5 68 + #define TWL6030_BACKUP_REG 0xFA 69 + 70 + #define STS_HW_CONDITIONS 0x21 71 + 72 + /* In module TWL6030_MODULE_PM_MASTER */ 73 + #define STS_HW_CONDITIONS 0x21 74 + #define STS_USB_ID BIT(2) 75 + 76 + /* In module TWL6030_MODULE_PM_RECEIVER */ 77 + #define VUSB_CFG_TRANS 0x71 78 + #define VUSB_CFG_STATE 0x72 79 + #define VUSB_CFG_VOLTAGE 0x73 80 + 81 + /* in module TWL6030_MODULE_MAIN_CHARGE */ 82 + 83 + #define CHARGERUSB_CTRL1 0x8 84 + 85 + #define CONTROLLER_STAT1 0x03 86 + #define VBUS_DET BIT(2) 87 + 88 + struct twl6030_usb { 89 + struct otg_transceiver otg; 90 + struct device *dev; 91 + 92 + /* for vbus reporting with irqs disabled */ 93 + spinlock_t lock; 94 + 95 + struct regulator *usb3v3; 96 + 97 + int irq1; 98 + int irq2; 99 + u8 linkstat; 100 + u8 asleep; 101 + bool irq_enabled; 102 + }; 103 + 104 + #define xceiv_to_twl(x) container_of((x), struct twl6030_usb, otg); 105 + 106 + /*-------------------------------------------------------------------------*/ 107 + 108 + static inline int twl6030_writeb(struct twl6030_usb *twl, u8 module, 109 + u8 data, u8 address) 110 + { 111 + int ret = 0; 112 + 113 + ret = twl_i2c_write_u8(module, data, address); 114 + if (ret < 0) 115 + dev_err(twl->dev, 116 + "Write[0x%x] Error %d\n", address, ret); 117 + return ret; 118 + } 119 + 120 + static inline u8 twl6030_readb(struct twl6030_usb *twl, u8 module, u8 address) 121 + { 122 + u8 data, ret = 0; 123 + 124 + ret = twl_i2c_read_u8(module, &data, address); 125 + if (ret >= 
0) 126 + ret = data; 127 + else 128 + dev_err(twl->dev, 129 + "readb[0x%x,0x%x] Error %d\n", 130 + module, address, ret); 131 + return ret; 132 + } 133 + 134 + /*-------------------------------------------------------------------------*/ 135 + static int twl6030_set_phy_clk(struct otg_transceiver *x, int on) 136 + { 137 + struct twl6030_usb *twl; 138 + struct device *dev; 139 + struct twl4030_usb_data *pdata; 140 + 141 + twl = xceiv_to_twl(x); 142 + dev = twl->dev; 143 + pdata = dev->platform_data; 144 + 145 + pdata->phy_set_clock(twl->dev, on); 146 + 147 + return 0; 148 + } 149 + 150 + static int twl6030_phy_init(struct otg_transceiver *x) 151 + { 152 + u8 hw_state; 153 + struct twl6030_usb *twl; 154 + struct device *dev; 155 + struct twl4030_usb_data *pdata; 156 + 157 + twl = xceiv_to_twl(x); 158 + dev = twl->dev; 159 + pdata = dev->platform_data; 160 + 161 + regulator_enable(twl->usb3v3); 162 + 163 + hw_state = twl6030_readb(twl, TWL6030_MODULE_ID0, STS_HW_CONDITIONS); 164 + 165 + if (hw_state & STS_USB_ID) 166 + pdata->phy_power(twl->dev, 1, 1); 167 + else 168 + pdata->phy_power(twl->dev, 0, 1); 169 + 170 + return 0; 171 + } 172 + 173 + static void twl6030_phy_shutdown(struct otg_transceiver *x) 174 + { 175 + struct twl6030_usb *twl; 176 + struct device *dev; 177 + struct twl4030_usb_data *pdata; 178 + 179 + twl = xceiv_to_twl(x); 180 + dev = twl->dev; 181 + pdata = dev->platform_data; 182 + pdata->phy_power(twl->dev, 0, 0); 183 + regulator_disable(twl->usb3v3); 184 + } 185 + 186 + static int twl6030_usb_ldo_init(struct twl6030_usb *twl) 187 + { 188 + 189 + /* Set to OTG_REV 1.3 and turn on the ID_WAKEUP_COMP */ 190 + twl6030_writeb(twl, TWL6030_MODULE_ID0 , 0x1, TWL6030_BACKUP_REG); 191 + 192 + /* Program CFG_LDO_PD2 register and set VUSB bit */ 193 + twl6030_writeb(twl, TWL6030_MODULE_ID0 , 0x1, TWL6030_CFG_LDO_PD2); 194 + 195 + /* Program MISC2 register and set bit VUSB_IN_VBAT */ 196 + twl6030_writeb(twl, TWL6030_MODULE_ID0 , 0x10, TWL6030_MISC2); 197 + 198 
+ twl->usb3v3 = regulator_get(twl->dev, "vusb"); 199 + if (IS_ERR(twl->usb3v3)) 200 + return -ENODEV; 201 + 202 + regulator_enable(twl->usb3v3); 203 + 204 + /* Program the VUSB_CFG_TRANS for ACTIVE state. */ 205 + twl6030_writeb(twl, TWL_MODULE_PM_RECEIVER, 0x3F, 206 + VUSB_CFG_TRANS); 207 + 208 + /* Program the VUSB_CFG_STATE register to ON on all groups. */ 209 + twl6030_writeb(twl, TWL_MODULE_PM_RECEIVER, 0xE1, 210 + VUSB_CFG_STATE); 211 + 212 + /* Program the USB_VBUS_CTRL_SET and set VBUS_ACT_COMP bit */ 213 + twl6030_writeb(twl, TWL_MODULE_USB, 0x4, USB_VBUS_CTRL_SET); 214 + 215 + /* 216 + * Program the USB_ID_CTRL_SET register to enable GND drive 217 + * and the ID comparators 218 + */ 219 + twl6030_writeb(twl, TWL_MODULE_USB, 0x14, USB_ID_CTRL_SET); 220 + 221 + return 0; 222 + } 223 + 224 + static ssize_t twl6030_usb_vbus_show(struct device *dev, 225 + struct device_attribute *attr, char *buf) 226 + { 227 + struct twl6030_usb *twl = dev_get_drvdata(dev); 228 + unsigned long flags; 229 + int ret = -EINVAL; 230 + 231 + spin_lock_irqsave(&twl->lock, flags); 232 + 233 + switch (twl->linkstat) { 234 + case USB_EVENT_VBUS: 235 + ret = snprintf(buf, PAGE_SIZE, "vbus\n"); 236 + break; 237 + case USB_EVENT_ID: 238 + ret = snprintf(buf, PAGE_SIZE, "id\n"); 239 + break; 240 + case USB_EVENT_NONE: 241 + ret = snprintf(buf, PAGE_SIZE, "none\n"); 242 + break; 243 + default: 244 + ret = snprintf(buf, PAGE_SIZE, "UNKNOWN\n"); 245 + } 246 + spin_unlock_irqrestore(&twl->lock, flags); 247 + 248 + return ret; 249 + } 250 + static DEVICE_ATTR(vbus, 0444, twl6030_usb_vbus_show, NULL); 251 + 252 + static irqreturn_t twl6030_usb_irq(int irq, void *_twl) 253 + { 254 + struct twl6030_usb *twl = _twl; 255 + int status; 256 + u8 vbus_state, hw_state; 257 + 258 + hw_state = twl6030_readb(twl, TWL6030_MODULE_ID0, STS_HW_CONDITIONS); 259 + 260 + vbus_state = twl6030_readb(twl, TWL_MODULE_MAIN_CHARGE, 261 + CONTROLLER_STAT1); 262 + if (!(hw_state & STS_USB_ID)) { 263 + if (vbus_state & 
VBUS_DET) { 264 + status = USB_EVENT_VBUS; 265 + twl->otg.default_a = false; 266 + twl->otg.state = OTG_STATE_B_IDLE; 267 + } else { 268 + status = USB_EVENT_NONE; 269 + } 270 + if (status >= 0) { 271 + twl->linkstat = status; 272 + blocking_notifier_call_chain(&twl->otg.notifier, 273 + status, twl->otg.gadget); 274 + } 275 + } 276 + sysfs_notify(&twl->dev->kobj, NULL, "vbus"); 277 + 278 + return IRQ_HANDLED; 279 + } 280 + 281 + static irqreturn_t twl6030_usbotg_irq(int irq, void *_twl) 282 + { 283 + struct twl6030_usb *twl = _twl; 284 + int status = USB_EVENT_NONE; 285 + u8 hw_state; 286 + 287 + hw_state = twl6030_readb(twl, TWL6030_MODULE_ID0, STS_HW_CONDITIONS); 288 + 289 + if (hw_state & STS_USB_ID) { 290 + 291 + twl6030_writeb(twl, TWL_MODULE_USB, USB_ID_INT_EN_HI_CLR, 0x1); 292 + twl6030_writeb(twl, TWL_MODULE_USB, USB_ID_INT_EN_HI_SET, 293 + 0x10); 294 + status = USB_EVENT_ID; 295 + twl->otg.default_a = true; 296 + twl->otg.state = OTG_STATE_A_IDLE; 297 + blocking_notifier_call_chain(&twl->otg.notifier, status, 298 + twl->otg.gadget); 299 + } else { 300 + twl6030_writeb(twl, TWL_MODULE_USB, USB_ID_INT_EN_HI_CLR, 301 + 0x10); 302 + twl6030_writeb(twl, TWL_MODULE_USB, USB_ID_INT_EN_HI_SET, 303 + 0x1); 304 + } 305 + twl6030_writeb(twl, TWL_MODULE_USB, USB_ID_INT_LATCH_CLR, status); 306 + twl->linkstat = status; 307 + 308 + return IRQ_HANDLED; 309 + } 310 + 311 + static int twl6030_set_peripheral(struct otg_transceiver *x, 312 + struct usb_gadget *gadget) 313 + { 314 + struct twl6030_usb *twl; 315 + 316 + if (!x) 317 + return -ENODEV; 318 + 319 + twl = xceiv_to_twl(x); 320 + twl->otg.gadget = gadget; 321 + if (!gadget) 322 + twl->otg.state = OTG_STATE_UNDEFINED; 323 + 324 + return 0; 325 + } 326 + 327 + static int twl6030_enable_irq(struct otg_transceiver *x) 328 + { 329 + struct twl6030_usb *twl = xceiv_to_twl(x); 330 + 331 + twl6030_writeb(twl, TWL_MODULE_USB, USB_ID_INT_EN_HI_SET, 0x1); 332 + twl6030_interrupt_unmask(0x05, REG_INT_MSK_LINE_C); 333 + 
twl6030_interrupt_unmask(0x05, REG_INT_MSK_STS_C); 334 + 335 + twl6030_interrupt_unmask(TWL6030_CHARGER_CTRL_INT_MASK, 336 + REG_INT_MSK_LINE_C); 337 + twl6030_interrupt_unmask(TWL6030_CHARGER_CTRL_INT_MASK, 338 + REG_INT_MSK_STS_C); 339 + twl6030_usb_irq(twl->irq2, twl); 340 + twl6030_usbotg_irq(twl->irq1, twl); 341 + 342 + return 0; 343 + } 344 + 345 + static int twl6030_set_vbus(struct otg_transceiver *x, bool enabled) 346 + { 347 + struct twl6030_usb *twl = xceiv_to_twl(x); 348 + 349 + /* 350 + * Start driving VBUS. Set OPA_MODE bit in CHARGERUSB_CTRL1 351 + * register. This enables boost mode. 352 + */ 353 + if (enabled) 354 + twl6030_writeb(twl, TWL_MODULE_MAIN_CHARGE , 0x40, 355 + CHARGERUSB_CTRL1); 356 + else 357 + twl6030_writeb(twl, TWL_MODULE_MAIN_CHARGE , 0x00, 358 + CHARGERUSB_CTRL1); 359 + return 0; 360 + } 361 + 362 + static int twl6030_set_host(struct otg_transceiver *x, struct usb_bus *host) 363 + { 364 + struct twl6030_usb *twl; 365 + 366 + if (!x) 367 + return -ENODEV; 368 + 369 + twl = xceiv_to_twl(x); 370 + twl->otg.host = host; 371 + if (!host) 372 + twl->otg.state = OTG_STATE_UNDEFINED; 373 + return 0; 374 + } 375 + 376 + static int __devinit twl6030_usb_probe(struct platform_device *pdev) 377 + { 378 + struct twl6030_usb *twl; 379 + int status, err; 380 + struct twl4030_usb_data *pdata; 381 + struct device *dev = &pdev->dev; 382 + pdata = dev->platform_data; 383 + 384 + twl = kzalloc(sizeof *twl, GFP_KERNEL); 385 + if (!twl) 386 + return -ENOMEM; 387 + 388 + twl->dev = &pdev->dev; 389 + twl->irq1 = platform_get_irq(pdev, 0); 390 + twl->irq2 = platform_get_irq(pdev, 1); 391 + twl->otg.dev = twl->dev; 392 + twl->otg.label = "twl6030"; 393 + twl->otg.set_host = twl6030_set_host; 394 + twl->otg.set_peripheral = twl6030_set_peripheral; 395 + twl->otg.set_vbus = twl6030_set_vbus; 396 + twl->otg.init = twl6030_phy_init; 397 + twl->otg.shutdown = twl6030_phy_shutdown; 398 + 399 + /* init spinlock for workqueue */ 400 + spin_lock_init(&twl->lock); 
401 + 402 + err = twl6030_usb_ldo_init(twl); 403 + if (err) { 404 + dev_err(&pdev->dev, "ldo init failed\n"); 405 + kfree(twl); 406 + return err; 407 + } 408 + otg_set_transceiver(&twl->otg); 409 + 410 + platform_set_drvdata(pdev, twl); 411 + if (device_create_file(&pdev->dev, &dev_attr_vbus)) 412 + dev_warn(&pdev->dev, "could not create sysfs file\n"); 413 + 414 + BLOCKING_INIT_NOTIFIER_HEAD(&twl->otg.notifier); 415 + 416 + twl->irq_enabled = true; 417 + status = request_threaded_irq(twl->irq1, NULL, twl6030_usbotg_irq, 418 + IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING, 419 + "twl6030_usb", twl); 420 + if (status < 0) { 421 + dev_err(&pdev->dev, "can't get IRQ %d, err %d\n", 422 + twl->irq1, status); 423 + device_remove_file(twl->dev, &dev_attr_vbus); 424 + kfree(twl); 425 + return status; 426 + } 427 + 428 + status = request_threaded_irq(twl->irq2, NULL, twl6030_usb_irq, 429 + IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING, 430 + "twl6030_usb", twl); 431 + if (status < 0) { 432 + dev_err(&pdev->dev, "can't get IRQ %d, err %d\n", 433 + twl->irq2, status); 434 + free_irq(twl->irq1, twl); 435 + device_remove_file(twl->dev, &dev_attr_vbus); 436 + kfree(twl); 437 + return status; 438 + } 439 + 440 + pdata->phy_init(dev); 441 + twl6030_enable_irq(&twl->otg); 442 + dev_info(&pdev->dev, "Initialized TWL6030 USB module\n"); 443 + 444 + return 0; 445 + } 446 + 447 + static int __exit twl6030_usb_remove(struct platform_device *pdev) 448 + { 449 + struct twl6030_usb *twl = platform_get_drvdata(pdev); 450 + 451 + struct twl4030_usb_data *pdata; 452 + struct device *dev = &pdev->dev; 453 + pdata = dev->platform_data; 454 + 455 + twl6030_interrupt_mask(TWL6030_USBOTG_INT_MASK, 456 + REG_INT_MSK_LINE_C); 457 + twl6030_interrupt_mask(TWL6030_USBOTG_INT_MASK, 458 + REG_INT_MSK_STS_C); 459 + free_irq(twl->irq1, twl); 460 + free_irq(twl->irq2, twl); 461 + regulator_put(twl->usb3v3); 462 + pdata->phy_exit(twl->dev); 463 + device_remove_file(twl->dev, &dev_attr_vbus); 464 + kfree(twl); 465 
+ 466 + return 0; 467 + } 468 + 469 + static struct platform_driver twl6030_usb_driver = { 470 + .probe = twl6030_usb_probe, 471 + .remove = __exit_p(twl6030_usb_remove), 472 + .driver = { 473 + .name = "twl6030_usb", 474 + .owner = THIS_MODULE, 475 + }, 476 + }; 477 + 478 + static int __init twl6030_usb_init(void) 479 + { 480 + return platform_driver_register(&twl6030_usb_driver); 481 + } 482 + subsys_initcall(twl6030_usb_init); 483 + 484 + static void __exit twl6030_usb_exit(void) 485 + { 486 + platform_driver_unregister(&twl6030_usb_driver); 487 + } 488 + module_exit(twl6030_usb_exit); 489 + 490 + MODULE_ALIAS("platform:twl6030_usb"); 491 + MODULE_AUTHOR("Hema HK <hemahk@ti.com>"); 492 + MODULE_DESCRIPTION("TWL6030 USB transceiver driver"); 493 + MODULE_LICENSE("GPL");
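One detail worth flagging in the new twl6030-usb.c: `twl6030_readb()` declares its return slot as `u8`, so the `if (ret >= 0)` test can never observe a negative errno from `twl_i2c_read_u8()` — the sign bit is lost in the unsigned assignment. A minimal userspace sketch of the pitfall and the fix (the `fake_i2c_read` helper is hypothetical; it stands in for the I2C accessor and always fails with -5, i.e. -EIO):

```c
#include <assert.h>

/* Hypothetical I2C read that zeroes the buffer and fails with -EIO. */
static int fake_i2c_read(unsigned char *data)
{
	*data = 0;
	return -5;
}

/* Buggy pattern: a u8 return slot silently discards the sign, so
 * -5 wraps to 251, the >= 0 check always passes, and garbage data
 * is returned as if the read had succeeded. */
static int read_buggy(void)
{
	unsigned char data, ret;

	ret = fake_i2c_read(&data);
	if (ret >= 0)
		return data;
	return -1;
}

/* Fixed pattern: keep the errno in a signed int until it is checked. */
static int read_fixed(void)
{
	unsigned char data;
	int ret = fake_i2c_read(&data);

	return (ret < 0) ? ret : data;
}
```

The buggy variant reports success with a zero byte; the fixed one propagates the error.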
+1
drivers/usb/serial/option.c
··· 989 989 .set_termios = usb_wwan_set_termios, 990 990 .tiocmget = usb_wwan_tiocmget, 991 991 .tiocmset = usb_wwan_tiocmset, 992 + .ioctl = usb_wwan_ioctl, 992 993 .attach = usb_wwan_startup, 993 994 .disconnect = usb_wwan_disconnect, 994 995 .release = usb_wwan_release,
+8 -48
drivers/usb/serial/ssu100.c
··· 79 79 u8 shadowLSR; 80 80 u8 shadowMSR; 81 81 wait_queue_head_t delta_msr_wait; /* Used for TIOCMIWAIT */ 82 - unsigned short max_packet_size; 83 82 struct async_icount icount; 84 83 }; 85 84 ··· 463 464 return -ENOIOCTLCMD; 464 465 } 465 466 466 - static void ssu100_set_max_packet_size(struct usb_serial_port *port) 467 - { 468 - struct ssu100_port_private *priv = usb_get_serial_port_data(port); 469 - struct usb_serial *serial = port->serial; 470 - struct usb_device *udev = serial->dev; 471 - 472 - struct usb_interface *interface = serial->interface; 473 - struct usb_endpoint_descriptor *ep_desc = &interface->cur_altsetting->endpoint[1].desc; 474 - 475 - unsigned num_endpoints; 476 - int i; 477 - unsigned long flags; 478 - 479 - num_endpoints = interface->cur_altsetting->desc.bNumEndpoints; 480 - dev_info(&udev->dev, "Number of endpoints %d\n", num_endpoints); 481 - 482 - for (i = 0; i < num_endpoints; i++) { 483 - dev_info(&udev->dev, "Endpoint %d MaxPacketSize %d\n", i+1, 484 - interface->cur_altsetting->endpoint[i].desc.wMaxPacketSize); 485 - ep_desc = &interface->cur_altsetting->endpoint[i].desc; 486 - } 487 - 488 - /* set max packet size based on descriptor */ 489 - spin_lock_irqsave(&priv->status_lock, flags); 490 - priv->max_packet_size = ep_desc->wMaxPacketSize; 491 - spin_unlock_irqrestore(&priv->status_lock, flags); 492 - 493 - dev_info(&udev->dev, "Setting MaxPacketSize %d\n", priv->max_packet_size); 494 - } 495 - 496 467 static int ssu100_attach(struct usb_serial *serial) 497 468 { 498 469 struct ssu100_port_private *priv; ··· 480 511 spin_lock_init(&priv->status_lock); 481 512 init_waitqueue_head(&priv->delta_msr_wait); 482 513 usb_set_serial_port_data(port, priv); 483 - ssu100_set_max_packet_size(port); 484 514 485 515 return ssu100_initdevice(serial->dev); 486 516 } ··· 609 641 610 642 } 611 643 612 - static int ssu100_process_packet(struct tty_struct *tty, 613 - struct usb_serial_port *port, 614 - struct ssu100_port_private *priv, 615 - char 
*packet, int len) 644 + static int ssu100_process_packet(struct urb *urb, 645 + struct tty_struct *tty) 616 646 { 617 - int i; 647 + struct usb_serial_port *port = urb->context; 648 + char *packet = (char *)urb->transfer_buffer; 618 649 char flag = TTY_NORMAL; 650 + u32 len = urb->actual_length; 651 + int i; 619 652 char *ch; 620 653 621 654 dbg("%s - port %d", __func__, port->number); ··· 654 685 static void ssu100_process_read_urb(struct urb *urb) 655 686 { 656 687 struct usb_serial_port *port = urb->context; 657 - struct ssu100_port_private *priv = usb_get_serial_port_data(port); 658 - char *data = (char *)urb->transfer_buffer; 659 688 struct tty_struct *tty; 660 - int count = 0; 661 - int i; 662 - int len; 689 + int count; 663 690 664 691 dbg("%s", __func__); 665 692 ··· 663 698 if (!tty) 664 699 return; 665 700 666 - for (i = 0; i < urb->actual_length; i += priv->max_packet_size) { 667 - len = min_t(int, urb->actual_length - i, priv->max_packet_size); 668 - count += ssu100_process_packet(tty, port, priv, &data[i], len); 669 - } 701 + count = ssu100_process_packet(urb, tty); 670 702 671 703 if (count) 672 704 tty_flip_buffer_push(tty); ··· 679 717 .id_table = id_table, 680 718 .usb_driver = &ssu100_driver, 681 719 .num_ports = 1, 682 - .bulk_in_size = 256, 683 - .bulk_out_size = 256, 684 720 .open = ssu100_open, 685 721 .close = ssu100_close, 686 722 .attach = ssu100_attach,
+2
drivers/usb/serial/usb-wwan.h
··· 18 18 extern int usb_wwan_tiocmget(struct tty_struct *tty, struct file *file); 19 19 extern int usb_wwan_tiocmset(struct tty_struct *tty, struct file *file, 20 20 unsigned int set, unsigned int clear); 21 + extern int usb_wwan_ioctl(struct tty_struct *tty, struct file *file, 22 + unsigned int cmd, unsigned long arg); 21 23 extern int usb_wwan_send_setup(struct usb_serial_port *port); 22 24 extern int usb_wwan_write(struct tty_struct *tty, struct usb_serial_port *port, 23 25 const unsigned char *buf, int count);
+79
drivers/usb/serial/usb_wwan.c
··· 31 31 #include <linux/tty_flip.h> 32 32 #include <linux/module.h> 33 33 #include <linux/bitops.h> 34 + #include <linux/uaccess.h> 34 35 #include <linux/usb.h> 35 36 #include <linux/usb/serial.h> 37 + #include <linux/serial.h> 36 38 #include "usb-wwan.h" 37 39 38 40 static int debug; ··· 124 122 return intfdata->send_setup(port); 125 123 } 126 124 EXPORT_SYMBOL(usb_wwan_tiocmset); 125 + 126 + static int get_serial_info(struct usb_serial_port *port, 127 + struct serial_struct __user *retinfo) 128 + { 129 + struct serial_struct tmp; 130 + 131 + if (!retinfo) 132 + return -EFAULT; 133 + 134 + memset(&tmp, 0, sizeof(tmp)); 135 + tmp.line = port->serial->minor; 136 + tmp.port = port->number; 137 + tmp.baud_base = tty_get_baud_rate(port->port.tty); 138 + tmp.close_delay = port->port.close_delay / 10; 139 + tmp.closing_wait = port->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ? 140 + ASYNC_CLOSING_WAIT_NONE : 141 + port->port.closing_wait / 10; 142 + 143 + if (copy_to_user(retinfo, &tmp, sizeof(*retinfo))) 144 + return -EFAULT; 145 + return 0; 146 + } 147 + 148 + static int set_serial_info(struct usb_serial_port *port, 149 + struct serial_struct __user *newinfo) 150 + { 151 + struct serial_struct new_serial; 152 + unsigned int closing_wait, close_delay; 153 + int retval = 0; 154 + 155 + if (copy_from_user(&new_serial, newinfo, sizeof(new_serial))) 156 + return -EFAULT; 157 + 158 + close_delay = new_serial.close_delay * 10; 159 + closing_wait = new_serial.closing_wait == ASYNC_CLOSING_WAIT_NONE ? 
160 + ASYNC_CLOSING_WAIT_NONE : new_serial.closing_wait * 10; 161 + 162 + mutex_lock(&port->port.mutex); 163 + 164 + if (!capable(CAP_SYS_ADMIN)) { 165 + if ((close_delay != port->port.close_delay) || 166 + (closing_wait != port->port.closing_wait)) 167 + retval = -EPERM; 168 + else 169 + retval = -EOPNOTSUPP; 170 + } else { 171 + port->port.close_delay = close_delay; 172 + port->port.closing_wait = closing_wait; 173 + } 174 + 175 + mutex_unlock(&port->port.mutex); 176 + return retval; 177 + } 178 + 179 + int usb_wwan_ioctl(struct tty_struct *tty, struct file *file, 180 + unsigned int cmd, unsigned long arg) 181 + { 182 + struct usb_serial_port *port = tty->driver_data; 183 + 184 + dbg("%s cmd 0x%04x", __func__, cmd); 185 + 186 + switch (cmd) { 187 + case TIOCGSERIAL: 188 + return get_serial_info(port, 189 + (struct serial_struct __user *) arg); 190 + case TIOCSSERIAL: 191 + return set_serial_info(port, 192 + (struct serial_struct __user *) arg); 193 + default: 194 + break; 195 + } 196 + 197 + dbg("%s arg not supported", __func__); 198 + 199 + return -ENOIOCTLCMD; 200 + } 201 + EXPORT_SYMBOL(usb_wwan_ioctl); 127 202 128 203 /* Write */ 129 204 int usb_wwan_write(struct tty_struct *tty, struct usb_serial_port *port,
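The `set_serial_info()` hunk above enforces the classic TIOCSSERIAL permission rule: an unprivileged caller who tries to change the delay fields gets -EPERM, a caller who writes back unchanged values gets -EOPNOTSUPP, and only CAP_SYS_ADMIN may actually update them. A small userspace model of just that decision (the `fake_port` struct is illustrative; a plain `bool admin` stands in for `capable(CAP_SYS_ADMIN)`):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

struct fake_port {
	unsigned int close_delay;
	unsigned int closing_wait;
};

/* Mirrors the permission logic in the usb_wwan set_serial_info() hunk. */
static int set_delays(struct fake_port *p, bool admin,
		      unsigned int close_delay, unsigned int closing_wait)
{
	if (!admin) {
		if (close_delay != p->close_delay ||
		    closing_wait != p->closing_wait)
			return -EPERM;	/* tried to change a privileged field */
		return -EOPNOTSUPP;	/* no-op writes are refused too */
	}
	p->close_delay = close_delay;
	p->closing_wait = closing_wait;
	return 0;
}
```

The real code additionally converts between the `serial_struct` centisecond units and the kernel's internal units (the `* 10` / `/ 10` pairs in the hunk) before comparing.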
+54 -28
drivers/usb/storage/uas.c
··· 49 49 __u8 cdb[16]; /* XXX: Overflow-checking tools may misunderstand */ 50 50 }; 51 51 52 + /* 53 + * Also used for the Read Ready and Write Ready IUs since they have the 54 + * same first four bytes 55 + */ 52 56 struct sense_iu { 53 57 __u8 iu_id; 54 58 __u8 rsvd1; 55 59 __be16 tag; 56 60 __be16 status_qual; 57 61 __u8 status; 58 - __u8 service_response; 59 - __u8 rsvd8[6]; 62 + __u8 rsvd7[7]; 60 63 __be16 len; 61 64 __u8 sense[SCSI_SENSE_BUFFERSIZE]; 62 65 }; ··· 100 97 }; 101 98 102 99 enum { 103 - ALLOC_SENSE_URB = (1 << 0), 104 - SUBMIT_SENSE_URB = (1 << 1), 100 + ALLOC_STATUS_URB = (1 << 0), 101 + SUBMIT_STATUS_URB = (1 << 1), 105 102 ALLOC_DATA_IN_URB = (1 << 2), 106 103 SUBMIT_DATA_IN_URB = (1 << 3), 107 104 ALLOC_DATA_OUT_URB = (1 << 4), ··· 115 112 unsigned int state; 116 113 unsigned int stream; 117 114 struct urb *cmd_urb; 118 - struct urb *sense_urb; 115 + struct urb *status_urb; 119 116 struct urb *data_in_urb; 120 117 struct urb *data_out_urb; 121 118 struct list_head list; ··· 141 138 struct scsi_pointer *scp = (void *)cmdinfo; 142 139 struct scsi_cmnd *cmnd = container_of(scp, 143 140 struct scsi_cmnd, SCp); 144 - uas_submit_urbs(cmnd, cmnd->device->hostdata, GFP_KERNEL); 141 + uas_submit_urbs(cmnd, cmnd->device->hostdata, GFP_NOIO); 145 142 } 146 143 } 147 144 ··· 207 204 struct uas_cmd_info *cmdinfo = (void *)&cmnd->SCp; 208 205 int err; 209 206 210 - cmdinfo->state = direction | SUBMIT_SENSE_URB; 207 + cmdinfo->state = direction | SUBMIT_STATUS_URB; 211 208 err = uas_submit_urbs(cmnd, cmnd->device->hostdata, GFP_ATOMIC); 212 209 if (err) { 213 210 spin_lock(&uas_work_lock); ··· 297 294 if (!urb) 298 295 goto out; 299 296 300 - iu = kmalloc(sizeof(*iu), gfp); 297 + iu = kzalloc(sizeof(*iu), gfp); 301 298 if (!iu) 302 299 goto free; 303 300 ··· 328 325 if (len < 0) 329 326 len = 0; 330 327 len = ALIGN(len, 4); 331 - iu = kmalloc(sizeof(*iu) + len, gfp); 328 + iu = kzalloc(sizeof(*iu) + len, gfp); 332 329 if (!iu) 333 330 goto free; 334 331 
··· 360 357 { 361 358 struct uas_cmd_info *cmdinfo = (void *)&cmnd->SCp; 362 359 363 - if (cmdinfo->state & ALLOC_SENSE_URB) { 364 - cmdinfo->sense_urb = uas_alloc_sense_urb(devinfo, gfp, cmnd, 365 - cmdinfo->stream); 366 - if (!cmdinfo->sense_urb) 360 + if (cmdinfo->state & ALLOC_STATUS_URB) { 361 + cmdinfo->status_urb = uas_alloc_sense_urb(devinfo, gfp, cmnd, 362 + cmdinfo->stream); 363 + if (!cmdinfo->status_urb) 367 364 return SCSI_MLQUEUE_DEVICE_BUSY; 368 - cmdinfo->state &= ~ALLOC_SENSE_URB; 365 + cmdinfo->state &= ~ALLOC_STATUS_URB; 369 366 } 370 367 371 - if (cmdinfo->state & SUBMIT_SENSE_URB) { 372 - if (usb_submit_urb(cmdinfo->sense_urb, gfp)) { 368 + if (cmdinfo->state & SUBMIT_STATUS_URB) { 369 + if (usb_submit_urb(cmdinfo->status_urb, gfp)) { 373 370 scmd_printk(KERN_INFO, cmnd, 374 371 "sense urb submission failure\n"); 375 372 return SCSI_MLQUEUE_DEVICE_BUSY; 376 373 } 377 - cmdinfo->state &= ~SUBMIT_SENSE_URB; 374 + cmdinfo->state &= ~SUBMIT_STATUS_URB; 378 375 } 379 376 380 377 if (cmdinfo->state & ALLOC_DATA_IN_URB) { ··· 443 440 444 441 BUILD_BUG_ON(sizeof(struct uas_cmd_info) > sizeof(struct scsi_pointer)); 445 442 446 - if (!cmdinfo->sense_urb && sdev->current_cmnd) 443 + if (!cmdinfo->status_urb && sdev->current_cmnd) 447 444 return SCSI_MLQUEUE_DEVICE_BUSY; 448 445 449 446 if (blk_rq_tagged(cmnd->request)) { ··· 455 452 456 453 cmnd->scsi_done = done; 457 454 458 - cmdinfo->state = ALLOC_SENSE_URB | SUBMIT_SENSE_URB | 455 + cmdinfo->state = ALLOC_STATUS_URB | SUBMIT_STATUS_URB | 459 456 ALLOC_CMD_URB | SUBMIT_CMD_URB; 460 457 461 458 switch (cmnd->sc_data_direction) { ··· 478 475 err = uas_submit_urbs(cmnd, devinfo, GFP_ATOMIC); 479 476 if (err) { 480 477 /* If we did nothing, give up now */ 481 - if (cmdinfo->state & SUBMIT_SENSE_URB) { 482 - usb_free_urb(cmdinfo->sense_urb); 478 + if (cmdinfo->state & SUBMIT_STATUS_URB) { 479 + usb_free_urb(cmdinfo->status_urb); 483 480 return SCSI_MLQUEUE_DEVICE_BUSY; 484 481 } 485 482 
spin_lock(&uas_work_lock); ··· 581 578 }; 582 579 MODULE_DEVICE_TABLE(usb, uas_usb_ids); 583 580 581 + static int uas_is_interface(struct usb_host_interface *intf) 582 + { 583 + return (intf->desc.bInterfaceClass == USB_CLASS_MASS_STORAGE && 584 + intf->desc.bInterfaceSubClass == USB_SC_SCSI && 585 + intf->desc.bInterfaceProtocol == USB_PR_UAS); 586 + } 587 + 588 + static int uas_switch_interface(struct usb_device *udev, 589 + struct usb_interface *intf) 590 + { 591 + int i; 592 + 593 + if (uas_is_interface(intf->cur_altsetting)) 594 + return 0; 595 + 596 + for (i = 0; i < intf->num_altsetting; i++) { 597 + struct usb_host_interface *alt = &intf->altsetting[i]; 598 + if (alt == intf->cur_altsetting) 599 + continue; 600 + if (uas_is_interface(alt)) 601 + return usb_set_interface(udev, 602 + alt->desc.bInterfaceNumber, 603 + alt->desc.bAlternateSetting); 604 + } 605 + 606 + return -ENODEV; 607 + } 608 + 584 609 static void uas_configure_endpoints(struct uas_dev_info *devinfo) 585 610 { 586 611 struct usb_host_endpoint *eps[4] = { }; ··· 682 651 struct uas_dev_info *devinfo; 683 652 struct usb_device *udev = interface_to_usbdev(intf); 684 653 685 - if (id->bInterfaceProtocol == 0x50) { 686 - int ifnum = intf->cur_altsetting->desc.bInterfaceNumber; 687 - /* XXX: Shouldn't assume that 1 is the alternative we want */ 688 - int ret = usb_set_interface(udev, ifnum, 1); 689 - if (ret) 690 - return -ENODEV; 691 - } 654 + if (uas_switch_interface(udev, intf)) 655 + return -ENODEV; 692 656 693 657 devinfo = kmalloc(sizeof(struct uas_dev_info), GFP_KERNEL); 694 658 if (!devinfo)
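The uas.c changes keep the per-command bitmask state machine the driver already used: each ALLOC_*/SUBMIT_* stage clears its flag once it succeeds, so a submission that fails with SCSI_MLQUEUE_DEVICE_BUSY can be retried later and resumes exactly where it stopped. A simplified userspace model of that resumable loop (the `fail_mask` parameter is a test hook, not part of the real driver):

```c
#include <assert.h>

enum {
	ALLOC_STATUS_URB   = 1 << 0,
	SUBMIT_STATUS_URB  = 1 << 1,
	ALLOC_DATA_IN_URB  = 1 << 2,
	SUBMIT_DATA_IN_URB = 1 << 3,
};

/* Simplified model of uas_submit_urbs(): walk the stages in order,
 * clear each completed stage's bit, and bail out on the first failure
 * (returning -1 where the driver returns SCSI_MLQUEUE_DEVICE_BUSY).
 * Because completed bits are cleared, a retry skips finished work. */
static int submit_stages(unsigned int *state, unsigned int fail_mask)
{
	unsigned int stage;

	for (stage = ALLOC_STATUS_URB; stage <= SUBMIT_DATA_IN_URB; stage <<= 1) {
		if (!(*state & stage))
			continue;
		if (fail_mask & stage)
			return -1;
		*state &= ~stage;
	}
	return 0;
}
```

This is why the rename from `sense_urb` to `status_urb` touches the flag names too: the flags are the state, so every stage identifier must stay consistent.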
+1 -1
drivers/uwb/i1480/i1480-est.c
··· 91 91 * 92 92 * [so we are loaded when this kind device is connected] 93 93 */ 94 - static struct usb_device_id i1480_est_id_table[] = { 94 + static struct usb_device_id __used i1480_est_id_table[] = { 95 95 { USB_DEVICE(0x8086, 0xdf3b), }, 96 96 { USB_DEVICE(0x8086, 0x0c3b), }, 97 97 { },
+2 -5
drivers/uwb/umc-dev.c
··· 54 54 55 55 err = request_resource(umc->resource.parent, &umc->resource); 56 56 if (err < 0) { 57 - dev_err(&umc->dev, "can't allocate resource range " 58 - "%016Lx to %016Lx: %d\n", 59 - (unsigned long long)umc->resource.start, 60 - (unsigned long long)umc->resource.end, 61 - err); 57 + dev_err(&umc->dev, "can't allocate resource range %pR: %d\n", 58 + &umc->resource, err); 62 59 goto error_request_resource; 63 60 } 64 61
+1 -1
drivers/uwb/whc-rc.c
··· 449 449 } 450 450 451 451 /* PCI device ID's that we handle [so it gets loaded] */ 452 - static struct pci_device_id whcrc_id_table[] = { 452 + static struct pci_device_id __used whcrc_id_table[] = { 453 453 { PCI_DEVICE_CLASS(PCI_CLASS_WIRELESS_WHCI, ~0) }, 454 454 { /* empty last entry */ } 455 455 };
+7
include/linux/i2c/twl.h
··· 593 593 594 594 struct twl4030_usb_data { 595 595 enum twl4030_usb_mode usb_mode; 596 + 597 + int (*phy_init)(struct device *dev); 598 + int (*phy_exit)(struct device *dev); 599 + /* Power on/off the PHY */ 600 + int (*phy_power)(struct device *dev, int iD, int on); 601 + /* enable/disable phy clocks */ 602 + int (*phy_set_clock)(struct device *dev, int on); 596 603 }; 597 604 598 605 struct twl4030_ins {
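The twl.h hunk extends `twl4030_usb_data` with PHY callbacks (`phy_init`, `phy_exit`, `phy_power`, `phy_set_clock`) so board files can hand platform-specific PHY control to the transceiver driver, which is exactly how the new twl6030-usb.c consumes them. A hedged userspace sketch of the consumer side of that pattern (the `usb_phy_ops` struct and stub names are illustrative, not the kernel's types):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative subset of the callbacks the hunk adds to the platform data. */
struct usb_phy_ops {
	int (*phy_init)(void *dev);
	int (*phy_power)(void *dev, int id, int on);
};

static int calls;
static int stub_init(void *dev) { (void)dev; calls++; return 0; }
static int stub_power(void *dev, int id, int on)
{
	(void)dev; (void)id;
	if (on)
		calls++;
	return 0;
}

/* Driver side: invoke whichever hooks the board file provided,
 * tolerating NULL for hooks a given board does not need. */
static int phy_bringup(const struct usb_phy_ops *ops, void *dev, int host)
{
	int ret = ops->phy_init ? ops->phy_init(dev) : 0;

	if (ret)
		return ret;
	return ops->phy_power ? ops->phy_power(dev, host, 1) : 0;
}
```

Note that the real twl6030 probe dereferences `pdata->phy_init` and friends unconditionally, so boards using this driver must fill in all four pointers.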
+2 -5
include/linux/usb.h
··· 20 20 #include <linux/completion.h> /* for struct completion */ 21 21 #include <linux/sched.h> /* for current && schedule_timeout */ 22 22 #include <linux/mutex.h> /* for struct mutex */ 23 + #include <linux/pm_runtime.h> /* for runtime PM */ 23 24 24 25 struct usb_device; 25 26 struct usb_driver; ··· 412 411 * @quirks: quirks of the whole device 413 412 * @urbnum: number of URBs submitted for the whole device 414 413 * @active_duration: total time device is not suspended 415 - * @last_busy: time of last use 416 - * @autosuspend_delay: in jiffies 417 414 * @connect_time: time device was first connected 418 415 * @do_remote_wakeup: remote wakeup should be enabled 419 416 * @reset_resume: needs reset instead of resume ··· 484 485 unsigned long active_duration; 485 486 486 487 #ifdef CONFIG_PM 487 - unsigned long last_busy; 488 - int autosuspend_delay; 489 488 unsigned long connect_time; 490 489 491 490 unsigned do_remote_wakeup:1; ··· 528 531 529 532 static inline void usb_mark_last_busy(struct usb_device *udev) 530 533 { 531 - udev->last_busy = jiffies; 534 + pm_runtime_mark_last_busy(&udev->dev); 532 535 } 533 536 534 537 #else
+47
include/linux/usb/ch11.h
···
28 28 #define HUB_STOP_TT 11
29 29
30 30 /*
31 + * Hub class additional requests defined by USB 3.0 spec
32 + * See USB 3.0 spec Table 10-6
33 + */
34 + #define HUB_SET_DEPTH 12
35 + #define HUB_GET_PORT_ERR_COUNT 13
36 +
37 + /*
31 38 * Hub Class feature numbers
32 39 * See USB 2.0 spec Table 11-17
33 40 */
···
63 56 #define USB_PORT_FEAT_C_PORT_L1 23
64 57
65 58 /*
59 + * Port feature selectors added by USB 3.0 spec.
60 + * See USB 3.0 spec Table 10-7
61 + */
62 + #define USB_PORT_FEAT_LINK_STATE 5
63 + #define USB_PORT_FEAT_U1_TIMEOUT 23
64 + #define USB_PORT_FEAT_U2_TIMEOUT 24
65 + #define USB_PORT_FEAT_C_LINK_STATE 25
66 + #define USB_PORT_FEAT_C_CONFIG_ERR 26
67 + #define USB_PORT_FEAT_REMOTE_WAKE_MASK 27
68 + #define USB_PORT_FEAT_BH_PORT_RESET 28
69 + #define USB_PORT_FEAT_C_BH_PORT_RESET 29
70 + #define USB_PORT_FEAT_FORCE_LINKPM_ACCEPT 30
71 +
72 + /*
66 73 * Hub Status and Hub Change results
67 74 * See USB 2.0 spec Table 11-19 and Table 11-20
68 75 */
···
103 82 #define USB_PORT_STAT_INDICATOR 0x1000
104 83 /* bits 13 to 15 are reserved */
105 84 #define USB_PORT_STAT_SUPER_SPEED 0x8000 /* Linux-internal */
85 +
86 + /*
87 + * Additions to wPortStatus bit field from USB 3.0
88 + * See USB 3.0 spec Table 10-10
89 + */
90 + #define USB_PORT_STAT_LINK_STATE 0x01e0
91 + #define USB_SS_PORT_STAT_POWER 0x0200
92 + #define USB_PORT_STAT_SPEED_5GBPS 0x0000
93 + /* Valid only if port is enabled */
94 +
95 + /*
96 + * Definitions for PORT_LINK_STATE values
97 + * (bits 5-8) in wPortStatus
98 + */
99 + #define USB_SS_PORT_LS_U0 0x0000
100 + #define USB_SS_PORT_LS_U1 0x0020
101 + #define USB_SS_PORT_LS_U2 0x0040
102 + #define USB_SS_PORT_LS_U3 0x0060
103 + #define USB_SS_PORT_LS_SS_DISABLED 0x0080
104 + #define USB_SS_PORT_LS_RX_DETECT 0x00a0
105 + #define USB_SS_PORT_LS_SS_INACTIVE 0x00c0
106 + #define USB_SS_PORT_LS_POLLING 0x00e0
107 + #define USB_SS_PORT_LS_RECOVERY 0x0100
108 + #define USB_SS_PORT_LS_HOT_RESET 0x0120
109 + #define USB_SS_PORT_LS_COMP_MOD 0x0140
110 + #define USB_SS_PORT_LS_LOOPBACK 0x0160
106 111
107 112 /*
108 113 * wPortChange bit field
+10
include/linux/usb/ch9.h
···
124 124 #define USB_DEVICE_DEBUG_MODE 6 /* (special devices only) */
125 125
126 126 /*
127 + * Test Mode Selectors
128 + * See USB 2.0 spec Table 9-7
129 + */
130 + #define TEST_J 1
131 + #define TEST_K 2
132 + #define TEST_SE0_NAK 3
133 + #define TEST_PACKET 4
134 + #define TEST_FORCE_EN 5
135 +
136 + /*
127 137 * New Feature Selectors as added by USB 3.0
128 138 * See USB 3.0 spec Table 9-6
129 139 */
+4
include/linux/usb/hcd.h
···
471 471
472 472 /*-------------------------------------------------------------------------*/
473 473
474 + /* class requests from USB 3.0 hub spec, table 10-5 */
475 + #define SetHubDepth (0x3000 | HUB_SET_DEPTH)
476 + #define GetPortErrorCount (0x8000 | HUB_GET_PORT_ERR_COUNT)
477 +
474 478 /*
475 479 * Generic bandwidth allocation constants/support
476 480 */
+112
include/linux/usb/msm_hsusb.h
···
1 + /* linux/include/asm-arm/arch-msm/hsusb.h
2 + *
3 + * Copyright (C) 2008 Google, Inc.
4 + * Author: Brian Swetland <swetland@google.com>
5 + * Copyright (c) 2009-2010, Code Aurora Forum. All rights reserved.
6 + *
7 + * This software is licensed under the terms of the GNU General Public
8 + * License version 2, as published by the Free Software Foundation, and
9 + * may be copied, distributed, and modified under those terms.
10 + *
11 + * This program is distributed in the hope that it will be useful,
12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 + * GNU General Public License for more details.
15 + *
16 + */
17 +
18 + #ifndef __ASM_ARCH_MSM_HSUSB_H
19 + #define __ASM_ARCH_MSM_HSUSB_H
20 +
21 + #include <linux/types.h>
22 + #include <linux/usb/otg.h>
23 +
24 + /**
25 + * Supported USB modes
26 + *
27 + * USB_PERIPHERAL Only peripheral mode is supported.
28 + * USB_HOST Only host mode is supported.
29 + * USB_OTG OTG mode is supported.
30 + *
31 + */
32 + enum usb_mode_type {
33 + USB_NONE = 0,
34 + USB_PERIPHERAL,
35 + USB_HOST,
36 + USB_OTG,
37 + };
38 +
39 + /**
40 + * OTG control
41 + *
42 + * OTG_NO_CONTROL Id/VBUS notifications not required. Useful in host
43 + * only configuration.
44 + * OTG_PHY_CONTROL Id/VBUS notifications come from USB PHY.
45 + * OTG_PMIC_CONTROL Id/VBUS notifications come from PMIC hardware.
46 + * OTG_USER_CONTROL Id/VBUS notifications come from user via sysfs.
47 + *
48 + */
49 + enum otg_control_type {
50 + OTG_NO_CONTROL = 0,
51 + OTG_PHY_CONTROL,
52 + OTG_PMIC_CONTROL,
53 + OTG_USER_CONTROL,
54 + };
55 +
56 + /**
57 + * struct msm_otg_platform_data - platform device data
58 + * for msm72k_otg driver.
59 + * @phy_init_seq: PHY configuration sequence. val, reg pairs
60 + * terminated by -1.
61 + * @vbus_power: VBUS power on/off routine.
62 + * @power_budget: VBUS power budget in mA (0 will be treated as 500mA).
63 + * @mode: Supported mode (OTG/peripheral/host).
64 + * @otg_control: OTG switch controlled by user/Id pin.
65 + * @default_mode: Default operational mode. Applicable only if
66 + * OTG switch is controlled by user.
67 + *
68 + */
69 + struct msm_otg_platform_data {
70 + int *phy_init_seq;
71 + void (*vbus_power)(bool on);
72 + unsigned power_budget;
73 + enum usb_mode_type mode;
74 + enum otg_control_type otg_control;
75 + enum usb_mode_type default_mode;
76 + void (*setup_gpio)(enum usb_otg_state state);
77 + };
78 +
79 + /**
80 + * struct msm_otg: OTG driver data. Shared by HCD and DCD.
81 + * @otg: USB OTG Transceiver structure.
82 + * @pdata: otg device platform data.
83 + * @irq: IRQ number assigned for HSUSB controller.
84 + * @clk: clock struct of usb_hs_clk.
85 + * @pclk: clock struct of usb_hs_pclk.
86 + * @phy_reset_clk: clock struct of usb_phy_clk.
87 + * @core_clk: clock struct of usb_hs_core_clk.
88 + * @regs: ioremapped register base address.
89 + * @inputs: OTG state machine inputs(Id, SessValid etc).
90 + * @sm_work: OTG state machine work.
91 + * @in_lpm: indicates low power mode (LPM) state.
92 + * @async_int: Async interrupt arrived.
93 + *
94 + */
95 + struct msm_otg {
96 + struct otg_transceiver otg;
97 + struct msm_otg_platform_data *pdata;
98 + int irq;
99 + struct clk *clk;
100 + struct clk *pclk;
101 + struct clk *phy_reset_clk;
102 + struct clk *core_clk;
103 + void __iomem *regs;
104 + #define ID 0
105 + #define B_SESS_VLD 1
106 + unsigned long inputs;
107 + struct work_struct sm_work;
108 + atomic_t in_lpm;
109 + int async_int;
110 + };
111 +
112 + #endif
+59
include/linux/usb/msm_hsusb_hw.h
···
1 + /*
2 + * Copyright (C) 2007 Google, Inc.
3 + * Author: Brian Swetland <swetland@google.com>
4 + *
5 + * This software is licensed under the terms of the GNU General Public
6 + * License version 2, as published by the Free Software Foundation, and
7 + * may be copied, distributed, and modified under those terms.
8 + *
9 + * This program is distributed in the hope that it will be useful,
10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 + * GNU General Public License for more details.
13 + *
14 + */
15 +
16 + #ifndef __LINUX_USB_GADGET_MSM72K_UDC_H__
17 + #define __LINUX_USB_GADGET_MSM72K_UDC_H__
18 +
19 + #ifdef CONFIG_ARCH_MSM7X00A
20 + #define USB_SBUSCFG (MSM_USB_BASE + 0x0090)
21 + #else
22 + #define USB_AHBBURST (MSM_USB_BASE + 0x0090)
23 + #define USB_AHBMODE (MSM_USB_BASE + 0x0098)
24 + #endif
25 + #define USB_CAPLENGTH (MSM_USB_BASE + 0x0100) /* 8 bit */
26 +
27 + #define USB_USBCMD (MSM_USB_BASE + 0x0140)
28 + #define USB_PORTSC (MSM_USB_BASE + 0x0184)
29 + #define USB_OTGSC (MSM_USB_BASE + 0x01A4)
30 + #define USB_USBMODE (MSM_USB_BASE + 0x01A8)
31 +
32 + #define USBCMD_RESET 2
33 + #define USB_USBINTR (MSM_USB_BASE + 0x0148)
34 +
35 + #define PORTSC_PHCD (1 << 23) /* phy suspend mode */
36 + #define PORTSC_PTS_MASK (3 << 30)
37 + #define PORTSC_PTS_ULPI (3 << 30)
38 +
39 + #define USB_ULPI_VIEWPORT (MSM_USB_BASE + 0x0170)
40 + #define ULPI_RUN (1 << 30)
41 + #define ULPI_WRITE (1 << 29)
42 + #define ULPI_READ (0 << 29)
43 + #define ULPI_ADDR(n) (((n) & 255) << 16)
44 + #define ULPI_DATA(n) ((n) & 255)
45 + #define ULPI_DATA_READ(n) (((n) >> 8) & 255)
46 +
47 + #define ASYNC_INTR_CTRL (1 << 29) /* Enable async interrupt */
48 + #define ULPI_STP_CTRL (1 << 30) /* Block communication with PHY */
49 +
50 + /* OTG definitions */
51 + #define OTGSC_INTSTS_MASK (0x7f << 16)
52 + #define OTGSC_ID (1 << 8)
53 + #define OTGSC_BSV (1 << 11)
54 + #define OTGSC_IDIS (1 << 16)
55 + #define OTGSC_BSVIS (1 << 19)
56 + #define OTGSC_IDIE (1 << 24)
57 + #define OTGSC_BSVIE (1 << 27)
58 +
59 + #endif /* __LINUX_USB_GADGET_MSM72K_UDC_H__ */
+4 -4
include/linux/usb/musb.h
···
3 3 * Inventra (Multidrop) Highspeed Dual-Role Controllers: (M)HDRC.
4 4 *
5 5 * Board initialization should put one of these into dev->platform_data,
6 - * probably on some platform_device named "musb_hdrc". It encapsulates
6 + * probably on some platform_device named "musb-hdrc". It encapsulates
7 7 * key configuration differences between boards.
8 8 */
9 9
···
120 120 /* Power the device on or off */
121 121 int (*set_power)(int state);
122 122
123 - /* Turn device clock on or off */
124 - int (*set_clock)(struct clk *clock, int is_on);
125 -
126 123 /* MUSB configuration-specific details */
127 124 struct musb_hdrc_config *config;
128 125
129 126 /* Architecture specific board data */
130 127 void *board_data;
128 +
129 + /* Platform specific struct musb_ops pointer */
130 + const void *platform_ops;
131 131 };
132 132
133 133
+1 -1
include/linux/usb/otg.h
···
116 116 /* for board-specific init logic */
117 117 extern int otg_set_transceiver(struct otg_transceiver *);
118 118
119 - #if defined(CONFIG_NOP_USB_XCEIV) || defined(CONFIG_NOP_USB_XCEIV_MODULE)
119 + #if defined(CONFIG_NOP_USB_XCEIV) || (defined(CONFIG_NOP_USB_XCEIV_MODULE) && defined(MODULE))
120 120 /* sometimes transceivers are accessed only through e.g. ULPI */
121 121 extern void usb_nop_xceiv_register(void);
122 122 extern void usb_nop_xceiv_unregister(void);