Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'usb-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB / Thunderbolt updates from Greg KH:
"Here is the big set of USB and Thunderbolt patches for 5.14-rc1.

Nothing major here, just lots of little changes for new hardware and
features. Highlights are:

- more USB 4 support added to the thunderbolt core

- build warning fixes all over the place

- usb-serial driver updates and new device support

- mtu3 driver updates

- gadget driver updates

- dwc3 driver updates

- dwc2 driver updates

- isp1760 host driver updates

- musb driver updates

- lots of other tiny things.

Full details are in the shortlog.

All of these have been in linux-next for a while now with no reported
issues"

* tag 'usb-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (223 commits)
phy: qcom-qusb2: Add configuration for SM4250 and SM6115
dt-bindings: phy: qcom,qusb2: document sm4250/6115 compatible
dt-bindings: usb: qcom,dwc3: Add bindings for sm6115/4250
USB: cdc-acm: blacklist Heimann USB Appset device
usb: xhci-mtk: allow multiple Start-Split in a microframe
usb: ftdi-elan: remove redundant continue statement in a while-loop
usb: class: cdc-wdm: return the correct errno code
xhci: remove redundant continue statement
usb: dwc3: Fix debugfs creation flow
usb: gadget: hid: fix error return code in hid_bind()
usb: gadget: eem: fix echo command packet response issue
usb: gadget: f_hid: fix endianness issue with descriptors
Revert "USB: misc: Add onboard_usb_hub driver"
Revert "of/platform: Add stubs for of_platform_device_create/destroy()"
Revert "usb: host: xhci-plat: Create platform device for onboard hubs in probe()"
Revert "arm64: dts: qcom: sc7180-trogdor: Add nodes for onboard USB hub"
xhci: solve a double free problem while doing s4
xhci: handle failed buffer copy to URB sg list and fix a W=1 compiler warning
xhci: Add adaptive interrupt rate for isoch TRBs with XHCI_AVOID_BEI quirk
xhci: Remove unused defines for ERST_SIZE and ERST_ENTRIES
...

+8671 -2712
+2
Documentation/ABI/testing/configfs-usb-gadget-uac2
··· 8 8 c_chmask capture channel mask 9 9 c_srate capture sampling rate 10 10 c_ssize capture sample size (bytes) 11 + c_sync capture synchronization type (async/adaptive) 12 + fb_max maximum extra bandwidth in async mode 11 13 p_chmask playback channel mask 12 14 p_srate playback sampling rate 13 15 p_ssize playback sample size (bytes)
+59 -23
Documentation/ABI/testing/sysfs-bus-thunderbolt
··· 1 - What: /sys/bus/thunderbolt/devices/.../domainX/boot_acl 1 + What: /sys/bus/thunderbolt/devices/.../domainX/boot_acl 2 2 Date: Jun 2018 3 3 KernelVersion: 4.17 4 4 Contact: thunderbolt-software@lists.01.org ··· 21 21 If a device is authorized automatically during boot its 22 22 boot attribute is set to 1. 23 23 24 - What: /sys/bus/thunderbolt/devices/.../domainX/deauthorization 24 + What: /sys/bus/thunderbolt/devices/.../domainX/deauthorization 25 25 Date: May 2021 26 26 KernelVersion: 5.12 27 27 Contact: Mika Westerberg <mika.westerberg@linux.intel.com> ··· 30 30 de-authorize PCIe tunnel by writing 0 to authorized 31 31 attribute under each device. 32 32 33 - What: /sys/bus/thunderbolt/devices/.../domainX/iommu_dma_protection 33 + What: /sys/bus/thunderbolt/devices/.../domainX/iommu_dma_protection 34 34 Date: Mar 2019 35 35 KernelVersion: 4.21 36 36 Contact: thunderbolt-software@lists.01.org ··· 39 39 it is not (DMA protection is solely based on Thunderbolt 40 40 security levels). 41 41 42 - What: /sys/bus/thunderbolt/devices/.../domainX/security 42 + What: /sys/bus/thunderbolt/devices/.../domainX/security 43 43 Date: Sep 2017 44 44 KernelVersion: 4.13 45 45 Contact: thunderbolt-software@lists.01.org ··· 61 61 the BIOS. 62 62 ======= ================================================== 63 63 64 - What: /sys/bus/thunderbolt/devices/.../authorized 64 + What: /sys/bus/thunderbolt/devices/.../authorized 65 65 Date: Sep 2017 66 66 KernelVersion: 4.13 67 67 Contact: thunderbolt-software@lists.01.org ··· 95 95 EKEYREJECTED if the challenge response did not match. 96 96 == ======================================================== 97 97 98 - What: /sys/bus/thunderbolt/devices/.../boot 98 + What: /sys/bus/thunderbolt/devices/.../boot 99 99 Date: Jun 2018 100 100 KernelVersion: 4.17 101 101 Contact: thunderbolt-software@lists.01.org 102 102 Description: This attribute contains 1 if Thunderbolt device was already 103 103 authorized on boot and 0 otherwise. 
104 104 105 - What: /sys/bus/thunderbolt/devices/.../generation 105 + What: /sys/bus/thunderbolt/devices/.../generation 106 106 Date: Jan 2020 107 107 KernelVersion: 5.5 108 108 Contact: Christian Kellner <christian@kellner.me> ··· 110 110 controller associated with the device. It will contain 4 111 111 for USB4. 112 112 113 - What: /sys/bus/thunderbolt/devices/.../key 113 + What: /sys/bus/thunderbolt/devices/.../key 114 114 Date: Sep 2017 115 115 KernelVersion: 4.13 116 116 Contact: thunderbolt-software@lists.01.org ··· 213 213 restarted with the new NVM firmware. If the image 214 214 verification fails an error code is returned instead. 215 215 216 - This file will accept writing values "1" or "2" 216 + This file will accept writing values "1", "2" or "3". 217 217 218 218 - Writing "1" will flush the image to the storage 219 219 area and authenticate the image in one action. 220 220 - Writing "2" will run some basic validation on the image 221 221 and flush it to the storage area. 222 + - Writing "3" will authenticate the image that is 223 + currently written in the storage area. This is only 224 + supported with USB4 devices and retimers. 222 225 223 226 When read holds status of the last authentication 224 227 operation if an error occurred during the process. This 225 228 is directly the status value from the DMA configuration 226 229 based mailbox before the device is power cycled. Writing 227 230 0 here clears the status. 231 + 232 + What: /sys/bus/thunderbolt/devices/.../nvm_authenticate_on_disconnect 233 + Date: Oct 2020 234 + KernelVersion: v5.9 235 + Contact: Mario Limonciello <mario.limonciello@dell.com> 236 + Description: For supported devices, automatically authenticate the new Thunderbolt 237 + image when the device is disconnected from the host system. 238 + 239 + This file will accept writing values "1" or "2" 240 + 241 + - Writing "1" will flush the image to the storage 242 + area and prepare the device for authentication on disconnect. 
243 + - Writing "2" will run some basic validation on the image 244 + and flush it to the storage area. 228 245 229 246 What: /sys/bus/thunderbolt/devices/<xdomain>.<service>/key 230 247 Date: Jan 2018 ··· 293 276 Description: This contains XDomain service specific settings as 294 277 bitmask. Format: %x 295 278 279 + What: /sys/bus/thunderbolt/devices/usb4_portX/link 280 + Date: Sep 2021 281 + KernelVersion: v5.14 282 + Contact: Mika Westerberg <mika.westerberg@linux.intel.com> 283 + Description: Returns the current link mode. Possible values are 284 + "usb4", "tbt" and "none". 285 + 286 + What: /sys/bus/thunderbolt/devices/usb4_portX/offline 287 + Date: Sep 2021 288 + KernelVersion: v5.14 289 + Contact: Rajmohan Mani <rajmohan.mani@intel.com> 290 + Description: Writing 1 to this attribute puts the USB4 port into 291 + offline mode. Only allowed when there is nothing 292 + connected to the port (link attribute returns "none"). 293 + Once the port is in offline mode it does not receive any 294 + hotplug events. This is used to update NVM firmware of 295 + on-board retimers. Writing 0 puts the port back to 296 + online mode. 297 + 298 + This attribute is only visible if the platform supports 299 + powering on retimers when there is no cable connected. 300 + 301 + What: /sys/bus/thunderbolt/devices/usb4_portX/rescan 302 + Date: Sep 2021 303 + KernelVersion: v5.14 304 + Contact: Rajmohan Mani <rajmohan.mani@intel.com> 305 + Description: When the USB4 port is in offline mode, writing 1 to this 306 + attribute forces a rescan of the sideband for on-board 307 + retimers. Each retimer appears under the USB4 port as if 308 + the USB4 link was up. These retimers act in the same way 309 + as if the cable was connected, so upgrading their NVM 310 + firmware can be done the usual way.
311 + 296 312 What: /sys/bus/thunderbolt/devices/<device>:<port>.<index>/device 297 313 Date: Oct 2020 298 314 KernelVersion: v5.9 ··· 358 308 KernelVersion: v5.9 359 309 Contact: Mika Westerberg <mika.westerberg@linux.intel.com> 360 310 Description: Retimer vendor identifier read from the hardware. 361 - 362 - What: /sys/bus/thunderbolt/devices/.../nvm_authenticate_on_disconnect 363 - Date: Oct 2020 364 - KernelVersion: v5.9 365 - Contact: Mario Limonciello <mario.limonciello@dell.com> 366 - Description: For supported devices, automatically authenticate the new Thunderbolt 367 - image when the device is disconnected from the host system. 368 - 369 - This file will accept writing values "1" or "2" 370 - 371 - - Writing "1" will flush the image to the storage 372 - area and prepare the device for authentication on disconnect. 373 - - Writing "2" will run some basic validation on the image 374 - and flush it to the storage area.
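The "1"/"2"/"3" write interface of nvm_authenticate documented above can be modeled in userspace. This is a sketch only: the enum names and the parse_nvm_authenticate helper are illustrative and not kernel code; the numeric values come from the ABI text.

```c
#include <stdlib.h>

/*
 * Userspace model of the nvm_authenticate write values described above.
 * The enum names and helper are illustrative, not kernel code.
 */
enum nvm_action {
	NVM_FLUSH_AND_AUTH = 1,	/* "1": flush image, then authenticate */
	NVM_FLUSH_ONLY     = 2,	/* "2": validate and flush only */
	NVM_AUTH_ONLY      = 3,	/* "3": authenticate the already-stored
				 *      image (USB4 devices and retimers) */
};

static int parse_nvm_authenticate(const char *buf, enum nvm_action *action)
{
	char *end;
	long v = strtol(buf, &end, 10);

	if (end == buf || v < 1 || v > 3)
		return -1;	/* any other value is rejected */
	*action = (enum nvm_action)v;
	return 0;
}
```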
-11
Documentation/ABI/testing/sysfs-bus-usb
··· 154 154 files hold a string value (enable or disable) indicating whether 155 155 or not USB3 hardware LPM U1 or U2 is enabled for the device. 156 156 157 - What: /sys/bus/usb/devices/.../removable 158 - Date: February 2012 159 - Contact: Matthew Garrett <mjg@redhat.com> 160 - Description: 161 - Some information about whether a given USB device is 162 - physically fixed to the platform can be inferred from a 163 - combination of hub descriptor bits and platform-specific data 164 - such as ACPI. This file will read either "removable" or 165 - "fixed" if the information is available, and "unknown" 166 - otherwise. 167 - 168 157 What: /sys/bus/usb/devices/.../ltm_capable 169 158 Date: July 2012 170 159 Contact: Sarah Sharp <sarah.a.sharp@linux.intel.com>
+18
Documentation/ABI/testing/sysfs-devices-removable
··· 1 + What: /sys/devices/.../removable 2 + Date: May 2021 3 + Contact: Rajat Jain <rajatxjain@gmail.com> 4 + Description: 5 + Information about whether a given device can be removed from the 6 + platform by the user. This is determined by its subsystem in a 7 + bus / platform-specific way. This attribute is only present for 8 + devices that can support determining such information: 9 + 10 + "removable": device can be removed from the platform by the user 11 + "fixed": device is fixed to the platform / cannot be removed 12 + by the user. 13 + "unknown": The information is unavailable / cannot be deduced. 14 + 15 + Currently this is only supported by USB (which infers the 16 + information from a combination of hub descriptor bits and 17 + platform-specific data such as ACPI) and PCI (which gets this 18 + from ACPI / device tree).
+29
Documentation/admin-guide/thunderbolt.rst
··· 256 256 depend on the order they are registered in the NVMem subsystem. N in 257 257 the name is the identifier added by the NVMem subsystem. 258 258 259 + Upgrading on-board retimer NVM when there is no cable connected 260 + --------------------------------------------------------------- 261 + If the platform supports it, it may be possible to upgrade the retimer NVM 262 + firmware even when there is nothing connected to the USB4 263 + ports. When this is the case, the ``usb4_portX`` devices have two special 264 + attributes: ``offline`` and ``rescan``. The way to upgrade the firmware 265 + is to first put the USB4 port into offline mode:: 266 + 267 + # echo 1 > /sys/bus/thunderbolt/devices/0-0/usb4_port1/offline 268 + 269 + This step makes sure the port does not respond to any hotplug events, 270 + and also ensures the retimers are powered on. The next step is to scan 271 + for the retimers:: 272 + 273 + # echo 1 > /sys/bus/thunderbolt/devices/0-0/usb4_port1/rescan 274 + 275 + This enumerates and adds the on-board retimers. Now retimer NVM can be 276 + upgraded in the same way as with a cable connected (see previous 277 + section). However, the retimer is not disconnected (as we are in offline 278 + mode), so after writing ``1`` to ``nvm_authenticate`` one should wait for 279 + 5 or more seconds before running rescan again:: 280 + 281 + # echo 1 > /sys/bus/thunderbolt/devices/0-0/usb4_port1/rescan 282 + 283 + At this point, if everything went fine, the port can be put back to 284 + functional state again:: 285 + 286 + # echo 0 > /sys/bus/thunderbolt/devices/0-0/usb4_port1/offline 287 + 259 288 Upgrading NVM when host controller is in safe mode 260 289 -------------------------------------------------- 261 290 If the existing NVM is not properly authenticated (or is missing) the
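The manual shell procedure above can also be scripted. A minimal C sketch under stated assumptions: write_attr and retimer_offline_rescan are illustrative helper names, the attribute names come from the docs, and on a real system the base path would be something like /sys/bus/thunderbolt/devices/0-0/usb4_port1 (run as root).

```c
#include <stdio.h>
#include <string.h>

/*
 * Sketch of the offline -> rescan sequence described above. The helper
 * names are illustrative; the base path is a parameter so the logic can
 * be exercised outside sysfs too.
 */
static int write_attr(const char *base, const char *attr, const char *val)
{
	char path[512];
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", base, attr);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

static int retimer_offline_rescan(const char *port)
{
	/* stop hotplug handling and power on the on-board retimers */
	if (write_attr(port, "offline", "1"))
		return -1;
	/* enumerate the retimers over the sideband */
	return write_attr(port, "rescan", "1");
}
```

After flashing via nvm_authenticate one would wait a few seconds, write 1 to rescan once more, and finally write 0 to offline, mirroring the shell commands above.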
+3 -1
Documentation/devicetree/bindings/phy/allwinner,sun8i-h3-usb-phy.yaml
··· 15 15 const: 1 16 16 17 17 compatible: 18 - const: allwinner,sun8i-h3-usb-phy 18 + enum: 19 + - allwinner,sun8i-h3-usb-phy 20 + - allwinner,sun50i-h616-usb-phy 19 21 20 22 reg: 21 23 items:
+2
Documentation/devicetree/bindings/phy/qcom,qusb2-phy.yaml
··· 23 23 - qcom,msm8998-qusb2-phy 24 24 - qcom,sdm660-qusb2-phy 25 25 - qcom,ipq6018-qusb2-phy 26 + - qcom,sm4250-qusb2-phy 27 + - qcom,sm6115-qusb2-phy 26 28 - items: 27 29 - enum: 28 30 - qcom,sc7180-qusb2-phy
+3
Documentation/devicetree/bindings/usb/allwinner,sun4i-a10-musb.yaml
··· 22 22 - allwinner,sun8i-a83t-musb 23 23 - allwinner,sun50i-h6-musb 24 24 - const: allwinner,sun8i-a33-musb 25 + - items: 26 + - const: allwinner,sun50i-h616-musb 27 + - const: allwinner,sun8i-h3-musb 25 28 26 29 reg: 27 30 maxItems: 1
+1
Documentation/devicetree/bindings/usb/cdns,usb3.yaml
··· 75 75 - reg 76 76 - reg-names 77 77 - interrupts 78 + - interrupt-names 78 79 79 80 additionalProperties: false 80 81
+1
Documentation/devicetree/bindings/usb/dwc2.yaml
··· 24 24 - rockchip,rk3188-usb 25 25 - rockchip,rk3228-usb 26 26 - rockchip,rk3288-usb 27 + - rockchip,rk3308-usb 27 28 - rockchip,rk3328-usb 28 29 - rockchip,rk3368-usb 29 30 - rockchip,rv1108-usb
+69
Documentation/devicetree/bindings/usb/nxp,isp1760.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/usb/nxp,isp1760.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: NXP ISP1760 family controller bindings 8 + 9 + maintainers: 10 + - Sebastian Siewior <bigeasy@linutronix.de> 11 + - Laurent Pinchart <laurent.pinchart@ideasonboard.com> 12 + 13 + description: | 14 + NXP ISP1760 family, which includes ISP1760/1761/1763 devicetree controller 15 + bindings 16 + 17 + properties: 18 + compatible: 19 + enum: 20 + - nxp,usb-isp1760 21 + - nxp,usb-isp1761 22 + - nxp,usb-isp1763 23 + reg: 24 + maxItems: 1 25 + 26 + interrupts: 27 + minItems: 1 28 + maxItems: 2 29 + items: 30 + - description: Host controller interrupt 31 + - description: Device controller interrupt in isp1761 32 + 33 + interrupt-names: 34 + minItems: 1 35 + maxItems: 2 36 + items: 37 + - const: host 38 + - const: peripheral 39 + 40 + bus-width: 41 + description: 42 + Number of data lines. 43 + enum: [8, 16, 32] 44 + default: 32 45 + 46 + dr_mode: 47 + enum: 48 + - host 49 + - peripheral 50 + 51 + required: 52 + - compatible 53 + - reg 54 + - interrupts 55 + 56 + additionalProperties: false 57 + 58 + examples: 59 + - | 60 + #include <dt-bindings/interrupt-controller/arm-gic.h> 61 + usb@40200000 { 62 + compatible = "nxp,usb-isp1763"; 63 + reg = <0x40200000 0x100000>; 64 + interrupts = <GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>; 65 + bus-width = <16>; 66 + dr_mode = "host"; 67 + }; 68 + 69 + ...
+2
Documentation/devicetree/bindings/usb/qcom,dwc3.yaml
··· 19 19 - qcom,sc7280-dwc3 20 20 - qcom,sdm845-dwc3 21 21 - qcom,sdx55-dwc3 22 + - qcom,sm4250-dwc3 23 + - qcom,sm6115-dwc3 22 24 - qcom,sm8150-dwc3 23 25 - qcom,sm8250-dwc3 24 26 - qcom,sm8350-dwc3
+62
Documentation/devicetree/bindings/usb/realtek,rts5411.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0-only or BSD-2-Clause 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/usb/realtek,rts5411.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Binding for the Realtek RTS5411 USB 3.0 hub controller 8 + 9 + maintainers: 10 + - Matthias Kaehlcke <mka@chromium.org> 11 + 12 + allOf: 13 + - $ref: usb-device.yaml# 14 + 15 + properties: 16 + compatible: 17 + items: 18 + - enum: 19 + - usbbda,5411 20 + - usbbda,411 21 + 22 + reg: true 23 + 24 + vdd-supply: 25 + description: 26 + phandle to the regulator that provides power to the hub. 27 + 28 + companion-hub: 29 + $ref: '/schemas/types.yaml#/definitions/phandle' 30 + description: 31 + phandle to the companion hub on the controller. 32 + 33 + required: 34 + - companion-hub 35 + - compatible 36 + - reg 37 + 38 + additionalProperties: false 39 + 40 + examples: 41 + - | 42 + usb { 43 + dr_mode = "host"; 44 + #address-cells = <1>; 45 + #size-cells = <0>; 46 + 47 + /* 2.0 hub on port 1 */ 48 + hub_2_0: hub@1 { 49 + compatible = "usbbda,5411"; 50 + reg = <1>; 51 + vdd-supply = <&pp3300_hub>; 52 + companion-hub = <&hub_3_0>; 53 + }; 54 + 55 + /* 3.0 hub on port 2 */ 56 + hub_3_0: hub@2 { 57 + compatible = "usbbda,411"; 58 + reg = <2>; 59 + vdd-supply = <&pp3300_hub>; 60 + companion-hub = <&hub_2_0>; 61 + }; 62 + };
+3
Documentation/driver-api/usb/error-codes.rst
··· 61 61 (c) requested data transfer length is invalid: negative 62 62 or too large for the host controller. 63 63 64 + ``-EBADR`` The wLength value in a control URB's setup packet does 65 + not match the URB's transfer_buffer_length. 66 + 64 67 ``-ENOSPC`` This request would overcommit the usb bandwidth reserved 65 68 for periodic transfers (interrupt, isochronous). 66 69
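The new -EBADR case can be illustrated with a toy userspace model of the sanity check; this is not the kernel implementation, ctrl_urb_model and check_ctrl_urb are illustrative names, and 53 is the numeric value of EBADR on Linux (hard-coded so the model is self-contained).

```c
#include <stdint.h>

/* Numeric value of EBADR on Linux; hard-coded for this toy model. */
#define MODEL_EBADR 53

/* Toy model of a control URB: only the two lengths being compared. */
struct ctrl_urb_model {
	uint16_t wLength;		/* from the 8-byte SETUP packet */
	uint32_t transfer_buffer_length;
};

static int check_ctrl_urb(const struct ctrl_urb_model *urb)
{
	if (urb->wLength != urb->transfer_buffer_length)
		return -MODEL_EBADR;	/* lengths disagree: reject */
	return 0;
}
```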
+2
Documentation/usb/gadget-testing.rst
··· 728 728 c_chmask capture channel mask 729 729 c_srate capture sampling rate 730 730 c_ssize capture sample size (bytes) 731 + c_sync capture synchronization type (async/adaptive) 732 + fb_max maximum extra bandwidth in async mode 731 733 p_chmask playback channel mask 732 734 p_srate playback sampling rate 733 735 p_ssize playback sample size (bytes)
+1 -1
arch/arm/boot/dts/arm-realview-eb.dtsi
··· 148 148 usb: usb@4f000000 { 149 149 compatible = "nxp,usb-isp1761"; 150 150 reg = <0x4f000000 0x20000>; 151 - port1-otg; 151 + dr_mode = "peripheral"; 152 152 }; 153 153 154 154 bridge {
+1 -1
arch/arm/boot/dts/arm-realview-pb1176.dts
··· 166 166 reg = <0x3b000000 0x20000>; 167 167 interrupt-parent = <&intc_fpga1176>; 168 168 interrupts = <0 11 IRQ_TYPE_LEVEL_HIGH>; 169 - port1-otg; 169 + dr_mode = "peripheral"; 170 170 }; 171 171 172 172 bridge {
+1 -1
arch/arm/boot/dts/arm-realview-pb11mp.dts
··· 712 712 reg = <0x4f000000 0x20000>; 713 713 interrupt-parent = <&intc_tc11mp>; 714 714 interrupts = <0 3 IRQ_TYPE_LEVEL_HIGH>; 715 - port1-otg; 715 + dr_mode = "peripheral"; 716 716 }; 717 717 }; 718 718 };
+1 -1
arch/arm/boot/dts/arm-realview-pbx.dtsi
··· 164 164 usb: usb@4f000000 { 165 165 compatible = "nxp,usb-isp1761"; 166 166 reg = <0x4f000000 0x20000>; 167 - port1-otg; 167 + dr_mode = "peripheral"; 168 168 }; 169 169 170 170 bridge {
+1 -1
arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
··· 144 144 compatible = "nxp,usb-isp1761"; 145 145 reg = <2 0x03000000 0x20000>; 146 146 interrupts = <16>; 147 - port1-otg; 147 + dr_mode = "peripheral"; 148 148 }; 149 149 150 150 iofpga-bus@300000000 {
+1 -1
arch/arm/boot/dts/vexpress-v2m.dtsi
··· 62 62 compatible = "nxp,usb-isp1761"; 63 63 reg = <3 0x03000000 0x20000>; 64 64 interrupts = <16>; 65 - port1-otg; 65 + dr_mode = "peripheral"; 66 66 }; 67 67 68 68 iofpga@7,00000000 {
+28
drivers/base/core.c
··· 2409 2409 } 2410 2410 static DEVICE_ATTR_RW(online); 2411 2411 2412 + static ssize_t removable_show(struct device *dev, struct device_attribute *attr, 2413 + char *buf) 2414 + { 2415 + const char *loc; 2416 + 2417 + switch (dev->removable) { 2418 + case DEVICE_REMOVABLE: 2419 + loc = "removable"; 2420 + break; 2421 + case DEVICE_FIXED: 2422 + loc = "fixed"; 2423 + break; 2424 + default: 2425 + loc = "unknown"; 2426 + } 2427 + return sysfs_emit(buf, "%s\n", loc); 2428 + } 2429 + static DEVICE_ATTR_RO(removable); 2430 + 2412 2431 int device_add_groups(struct device *dev, const struct attribute_group **groups) 2413 2432 { 2414 2433 return sysfs_create_groups(&dev->kobj, groups); ··· 2605 2586 goto err_remove_dev_online; 2606 2587 } 2607 2588 2589 + if (dev_removable_is_valid(dev)) { 2590 + error = device_create_file(dev, &dev_attr_removable); 2591 + if (error) 2592 + goto err_remove_dev_waiting_for_supplier; 2593 + } 2594 + 2608 2595 return 0; 2609 2596 2597 + err_remove_dev_waiting_for_supplier: 2598 + device_remove_file(dev, &dev_attr_waiting_for_supplier); 2610 2599 err_remove_dev_online: 2611 2600 device_remove_file(dev, &dev_attr_online); 2612 2601 err_remove_dev_groups: ··· 2634 2607 struct class *class = dev->class; 2635 2608 const struct device_type *type = dev->type; 2636 2609 2610 + device_remove_file(dev, &dev_attr_removable); 2637 2611 device_remove_file(dev, &dev_attr_waiting_for_supplier); 2638 2612 device_remove_file(dev, &dev_attr_online); 2639 2613 device_remove_groups(dev, dev->groups);
+22
drivers/pci/probe.c
··· 1576 1576 dev->untrusted = true; 1577 1577 } 1578 1578 1579 + static void pci_set_removable(struct pci_dev *dev) 1580 + { 1581 + struct pci_dev *parent = pci_upstream_bridge(dev); 1582 + 1583 + /* 1584 + * We (only) consider everything downstream from an external_facing 1585 + * device to be removable by the user. We're mainly concerned with 1586 + * consumer platforms with user accessible thunderbolt ports that are 1587 + * vulnerable to DMA attacks, and we expect those ports to be marked by 1588 + * the firmware as external_facing. Devices in traditional hotplug 1589 + * slots can technically be removed, but the expectation is that unless 1590 + * the port is marked with external_facing, such devices are less 1591 + * accessible to user / may not be removed by end user, and thus not 1592 + * exposed as "removable" to userspace. 1593 + */ 1594 + if (parent && 1595 + (parent->external_facing || dev_is_removable(&parent->dev))) 1596 + dev_set_removable(&dev->dev, DEVICE_REMOVABLE); 1597 + } 1598 + 1579 1599 /** 1580 1600 * pci_ext_cfg_is_aliased - Is ext config space just an alias of std config? 1581 1601 * @dev: PCI device ··· 1842 1822 1843 1823 /* Early fixups, before probing the BARs */ 1844 1824 pci_fixup_device(pci_fixup_early, dev); 1825 + 1826 + pci_set_removable(dev); 1845 1827 1846 1828 pci_info(dev, "[%04x:%04x] type %02x class %#08x\n", 1847 1829 dev->vendor, dev->device, dev->hdr_type, dev->class);
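The propagation rule in the pci_set_removable() comment above can be sketched as a toy model (dev_model and set_removable are illustrative, not kernel API): a device is marked removable when its upstream bridge is either external-facing or already removable, so the property flows down the entire chain behind a Thunderbolt port.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the rule in pci_set_removable(); not the kernel API. */
struct dev_model {
	struct dev_model *parent;	/* upstream bridge, NULL for root */
	bool external_facing;		/* marked by firmware */
	bool removable;
};

/* Call in topology order (parents before children). */
static void set_removable(struct dev_model *dev)
{
	struct dev_model *parent = dev->parent;

	if (parent && (parent->external_facing || parent->removable))
		dev->removable = true;
}
```

Everything downstream of an external-facing bridge ends up removable, while the bridge itself and its siblings stay at their default state.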
+34
drivers/phy/qualcomm/phy-qcom-qusb2.c
··· 219 219 QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_DIGITAL_TIMERS_TWO, 0x19), 220 220 }; 221 221 222 + static const struct qusb2_phy_init_tbl sm6115_init_tbl[] = { 223 + QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE1, 0xf8), 224 + QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE2, 0x53), 225 + QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE3, 0x81), 226 + QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE4, 0x17), 227 + 228 + QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_TUNE, 0x30), 229 + QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_USER_CTL1, 0x79), 230 + QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_USER_CTL2, 0x21), 231 + 232 + QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TEST2, 0x14), 233 + 234 + QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_AUTOPGM_CTL1, 0x9f), 235 + QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_PWR_CTRL, 0x00), 236 + }; 237 + 222 238 static const unsigned int qusb2_v2_regs_layout[] = { 223 239 [QUSB2PHY_PLL_CORE_INPUT_OVERRIDE] = 0xa8, 224 240 [QUSB2PHY_PLL_STATUS] = 0x1a0, ··· 353 337 354 338 .has_pll_test = true, 355 339 .se_clk_scheme_default = false, 340 + .disable_ctrl = (CLAMP_N_EN | FREEZIO_N | POWER_DOWN), 341 + .mask_core_ready = PLL_LOCKED, 342 + .autoresume_en = BIT(3), 343 + }; 344 + 345 + static const struct qusb2_phy_cfg sm6115_phy_cfg = { 346 + .tbl = sm6115_init_tbl, 347 + .tbl_num = ARRAY_SIZE(sm6115_init_tbl), 348 + .regs = msm8996_regs_layout, 349 + 350 + .has_pll_test = true, 351 + .se_clk_scheme_default = true, 356 352 .disable_ctrl = (CLAMP_N_EN | FREEZIO_N | POWER_DOWN), 357 353 .mask_core_ready = PLL_LOCKED, 358 354 .autoresume_en = BIT(3), ··· 916 888 }, { 917 889 .compatible = "qcom,sdm660-qusb2-phy", 918 890 .data = &sdm660_phy_cfg, 891 + }, { 892 + .compatible = "qcom,sm4250-qusb2-phy", 893 + .data = &sm6115_phy_cfg, 894 + }, { 895 + .compatible = "qcom,sm6115-qusb2-phy", 896 + .data = &sm6115_phy_cfg, 919 897 }, { 920 898 /* 921 899 * Deprecated. Only here to support legacy device
+549 -1
drivers/phy/tegra/xusb-tegra186.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Copyright (c) 2016-2019, NVIDIA CORPORATION. All rights reserved. 3 + * Copyright (c) 2016-2020, NVIDIA CORPORATION. All rights reserved. 4 4 */ 5 5 6 6 #include <linux/delay.h> ··· 113 113 #define ID_OVERRIDE_FLOATING ID_OVERRIDE(8) 114 114 #define ID_OVERRIDE_GROUNDED ID_OVERRIDE(0) 115 115 116 + /* XUSB AO registers */ 117 + #define XUSB_AO_USB_DEBOUNCE_DEL (0x4) 118 + #define UHSIC_LINE_DEB_CNT(x) (((x) & 0xf) << 4) 119 + #define UTMIP_LINE_DEB_CNT(x) ((x) & 0xf) 120 + 121 + #define XUSB_AO_UTMIP_TRIGGERS(x) (0x40 + (x) * 4) 122 + #define CLR_WALK_PTR BIT(0) 123 + #define CAP_CFG BIT(1) 124 + #define CLR_WAKE_ALARM BIT(3) 125 + 126 + #define XUSB_AO_UHSIC_TRIGGERS(x) (0x60 + (x) * 4) 127 + #define HSIC_CLR_WALK_PTR BIT(0) 128 + #define HSIC_CLR_WAKE_ALARM BIT(3) 129 + #define HSIC_CAP_CFG BIT(4) 130 + 131 + #define XUSB_AO_UTMIP_SAVED_STATE(x) (0x70 + (x) * 4) 132 + #define SPEED(x) ((x) & 0x3) 133 + #define UTMI_HS SPEED(0) 134 + #define UTMI_FS SPEED(1) 135 + #define UTMI_LS SPEED(2) 136 + #define UTMI_RST SPEED(3) 137 + 138 + #define XUSB_AO_UHSIC_SAVED_STATE(x) (0x90 + (x) * 4) 139 + #define MODE(x) ((x) & 0x1) 140 + #define MODE_HS MODE(0) 141 + #define MODE_RST MODE(1) 142 + 143 + #define XUSB_AO_UTMIP_SLEEPWALK_CFG(x) (0xd0 + (x) * 4) 144 + #define XUSB_AO_UHSIC_SLEEPWALK_CFG(x) (0xf0 + (x) * 4) 145 + #define FAKE_USBOP_VAL BIT(0) 146 + #define FAKE_USBON_VAL BIT(1) 147 + #define FAKE_USBOP_EN BIT(2) 148 + #define FAKE_USBON_EN BIT(3) 149 + #define FAKE_STROBE_VAL BIT(0) 150 + #define FAKE_DATA_VAL BIT(1) 151 + #define FAKE_STROBE_EN BIT(2) 152 + #define FAKE_DATA_EN BIT(3) 153 + #define WAKE_WALK_EN BIT(14) 154 + #define MASTER_ENABLE BIT(15) 155 + #define LINEVAL_WALK_EN BIT(16) 156 + #define WAKE_VAL(x) (((x) & 0xf) << 17) 157 + #define WAKE_VAL_NONE WAKE_VAL(12) 158 + #define WAKE_VAL_ANY WAKE_VAL(15) 159 + #define WAKE_VAL_DS10 WAKE_VAL(2) 160 + #define LINE_WAKEUP_EN BIT(21) 161 + #define 
MASTER_CFG_SEL BIT(22) 162 + 163 + #define XUSB_AO_UTMIP_SLEEPWALK(x) (0x100 + (x) * 4) 164 + /* phase A */ 165 + #define USBOP_RPD_A BIT(0) 166 + #define USBON_RPD_A BIT(1) 167 + #define AP_A BIT(4) 168 + #define AN_A BIT(5) 169 + #define HIGHZ_A BIT(6) 170 + /* phase B */ 171 + #define USBOP_RPD_B BIT(8) 172 + #define USBON_RPD_B BIT(9) 173 + #define AP_B BIT(12) 174 + #define AN_B BIT(13) 175 + #define HIGHZ_B BIT(14) 176 + /* phase C */ 177 + #define USBOP_RPD_C BIT(16) 178 + #define USBON_RPD_C BIT(17) 179 + #define AP_C BIT(20) 180 + #define AN_C BIT(21) 181 + #define HIGHZ_C BIT(22) 182 + /* phase D */ 183 + #define USBOP_RPD_D BIT(24) 184 + #define USBON_RPD_D BIT(25) 185 + #define AP_D BIT(28) 186 + #define AN_D BIT(29) 187 + #define HIGHZ_D BIT(30) 188 + 189 + #define XUSB_AO_UHSIC_SLEEPWALK(x) (0x120 + (x) * 4) 190 + /* phase A */ 191 + #define RPD_STROBE_A BIT(0) 192 + #define RPD_DATA0_A BIT(1) 193 + #define RPU_STROBE_A BIT(2) 194 + #define RPU_DATA0_A BIT(3) 195 + /* phase B */ 196 + #define RPD_STROBE_B BIT(8) 197 + #define RPD_DATA0_B BIT(9) 198 + #define RPU_STROBE_B BIT(10) 199 + #define RPU_DATA0_B BIT(11) 200 + /* phase C */ 201 + #define RPD_STROBE_C BIT(16) 202 + #define RPD_DATA0_C BIT(17) 203 + #define RPU_STROBE_C BIT(18) 204 + #define RPU_DATA0_C BIT(19) 205 + /* phase D */ 206 + #define RPD_STROBE_D BIT(24) 207 + #define RPD_DATA0_D BIT(25) 208 + #define RPU_STROBE_D BIT(26) 209 + #define RPU_DATA0_D BIT(27) 210 + 211 + #define XUSB_AO_UTMIP_PAD_CFG(x) (0x130 + (x) * 4) 212 + #define FSLS_USE_XUSB_AO BIT(3) 213 + #define TRK_CTRL_USE_XUSB_AO BIT(4) 214 + #define RPD_CTRL_USE_XUSB_AO BIT(5) 215 + #define RPU_USE_XUSB_AO BIT(6) 216 + #define VREG_USE_XUSB_AO BIT(7) 217 + #define USBOP_VAL_PD BIT(8) 218 + #define USBON_VAL_PD BIT(9) 219 + #define E_DPD_OVRD_EN BIT(10) 220 + #define E_DPD_OVRD_VAL BIT(11) 221 + 222 + #define XUSB_AO_UHSIC_PAD_CFG(x) (0x150 + (x) * 4) 223 + #define STROBE_VAL_PD BIT(0) 224 + #define DATA0_VAL_PD BIT(1) 225 + 
#define USE_XUSB_AO BIT(4) 226 + 116 227 #define TEGRA186_LANE(_name, _offset, _shift, _mask, _type) \ 117 228 { \ 118 229 .name = _name, \ ··· 241 130 u32 rpd_ctrl; 242 131 }; 243 132 133 + struct tegra186_xusb_padctl_context { 134 + u32 vbus_id; 135 + u32 usb2_pad_mux; 136 + u32 usb2_port_cap; 137 + u32 ss_port_cap; 138 + }; 139 + 244 140 struct tegra186_xusb_padctl { 245 141 struct tegra_xusb_padctl base; 142 + void __iomem *ao_regs; 246 143 247 144 struct tegra_xusb_fuse_calibration calib; 248 145 249 146 /* UTMI bias and tracking */ 250 147 struct clk *usb2_trk_clk; 251 148 unsigned int bias_pad_enable; 149 + 150 + /* padctl context */ 151 + struct tegra186_xusb_padctl_context context; 252 152 }; 153 + 154 + static inline void ao_writel(struct tegra186_xusb_padctl *priv, u32 value, unsigned int offset) 155 + { 156 + writel(value, priv->ao_regs + offset); 157 + } 158 + 159 + static inline u32 ao_readl(struct tegra186_xusb_padctl *priv, unsigned int offset) 160 + { 161 + return readl(priv->ao_regs + offset); 162 + } 253 163 254 164 static inline struct tegra186_xusb_padctl * 255 165 to_tegra186_xusb_padctl(struct tegra_xusb_padctl *padctl) ··· 312 180 kfree(usb2); 313 181 } 314 182 183 + static int tegra186_utmi_enable_phy_sleepwalk(struct tegra_xusb_lane *lane, 184 + enum usb_device_speed speed) 185 + { 186 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 187 + struct tegra186_xusb_padctl *priv = to_tegra186_xusb_padctl(padctl); 188 + unsigned int index = lane->index; 189 + u32 value; 190 + 191 + mutex_lock(&padctl->lock); 192 + 193 + /* ensure sleepwalk logic is disabled */ 194 + value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 195 + value &= ~MASTER_ENABLE; 196 + ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 197 + 198 + /* ensure sleepwalk logics are in low power mode */ 199 + value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 200 + value |= MASTER_CFG_SEL; 201 + ao_writel(priv, value, 
XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 202 + 203 + /* set debounce time */ 204 + value = ao_readl(priv, XUSB_AO_USB_DEBOUNCE_DEL); 205 + value &= ~UTMIP_LINE_DEB_CNT(~0); 206 + value |= UTMIP_LINE_DEB_CNT(1); 207 + ao_writel(priv, value, XUSB_AO_USB_DEBOUNCE_DEL); 208 + 209 + /* ensure fake events of sleepwalk logic are disabled */ 210 + value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 211 + value &= ~(FAKE_USBOP_VAL | FAKE_USBON_VAL | 212 + FAKE_USBOP_EN | FAKE_USBON_EN); 213 + ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 214 + 215 + /* ensure wake events of sleepwalk logic are not latched */ 216 + value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 217 + value &= ~LINE_WAKEUP_EN; 218 + ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 219 + 220 + /* disable wake event triggers of sleepwalk logic */ 221 + value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 222 + value &= ~WAKE_VAL(~0); 223 + value |= WAKE_VAL_NONE; 224 + ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 225 + 226 + /* power down the line state detectors of the pad */ 227 + value = ao_readl(priv, XUSB_AO_UTMIP_PAD_CFG(index)); 228 + value |= (USBOP_VAL_PD | USBON_VAL_PD); 229 + ao_writel(priv, value, XUSB_AO_UTMIP_PAD_CFG(index)); 230 + 231 + /* save state per speed */ 232 + value = ao_readl(priv, XUSB_AO_UTMIP_SAVED_STATE(index)); 233 + value &= ~SPEED(~0); 234 + 235 + switch (speed) { 236 + case USB_SPEED_HIGH: 237 + value |= UTMI_HS; 238 + break; 239 + 240 + case USB_SPEED_FULL: 241 + value |= UTMI_FS; 242 + break; 243 + 244 + case USB_SPEED_LOW: 245 + value |= UTMI_LS; 246 + break; 247 + 248 + default: 249 + value |= UTMI_RST; 250 + break; 251 + } 252 + 253 + ao_writel(priv, value, XUSB_AO_UTMIP_SAVED_STATE(index)); 254 + 255 + /* enable the trigger of the sleepwalk logic */ 256 + value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 257 + value |= LINEVAL_WALK_EN; 258 + value &= ~WAKE_WALK_EN; 259 + ao_writel(priv, value, 
XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 260 + 261 + /* reset the walk pointer and clear the alarm of the sleepwalk logic, 262 + * as well as capture the configuration of the USB2.0 pad 263 + */ 264 + value = ao_readl(priv, XUSB_AO_UTMIP_TRIGGERS(index)); 265 + value |= (CLR_WALK_PTR | CLR_WAKE_ALARM | CAP_CFG); 266 + ao_writel(priv, value, XUSB_AO_UTMIP_TRIGGERS(index)); 267 + 268 + /* setup the pull-ups and pull-downs of the signals during the four 269 + * stages of sleepwalk. 270 + * if device is connected, program sleepwalk logic to maintain a J and 271 + * keep driving K upon seeing remote wake. 272 + */ 273 + value = USBOP_RPD_A | USBOP_RPD_B | USBOP_RPD_C | USBOP_RPD_D; 274 + value |= USBON_RPD_A | USBON_RPD_B | USBON_RPD_C | USBON_RPD_D; 275 + 276 + switch (speed) { 277 + case USB_SPEED_HIGH: 278 + case USB_SPEED_FULL: 279 + /* J state: D+/D- = high/low, K state: D+/D- = low/high */ 280 + value |= HIGHZ_A; 281 + value |= AP_A; 282 + value |= AN_B | AN_C | AN_D; 283 + break; 284 + 285 + case USB_SPEED_LOW: 286 + /* J state: D+/D- = low/high, K state: D+/D- = high/low */ 287 + value |= HIGHZ_A; 288 + value |= AN_A; 289 + value |= AP_B | AP_C | AP_D; 290 + break; 291 + 292 + default: 293 + value |= HIGHZ_A | HIGHZ_B | HIGHZ_C | HIGHZ_D; 294 + break; 295 + } 296 + 297 + ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK(index)); 298 + 299 + /* power up the line state detectors of the pad */ 300 + value = ao_readl(priv, XUSB_AO_UTMIP_PAD_CFG(index)); 301 + value &= ~(USBOP_VAL_PD | USBON_VAL_PD); 302 + ao_writel(priv, value, XUSB_AO_UTMIP_PAD_CFG(index)); 303 + 304 + usleep_range(150, 200); 305 + 306 + /* switch the electric control of the USB2.0 pad to XUSB_AO */ 307 + value = ao_readl(priv, XUSB_AO_UTMIP_PAD_CFG(index)); 308 + value |= FSLS_USE_XUSB_AO | TRK_CTRL_USE_XUSB_AO | RPD_CTRL_USE_XUSB_AO | 309 + RPU_USE_XUSB_AO | VREG_USE_XUSB_AO; 310 + ao_writel(priv, value, XUSB_AO_UTMIP_PAD_CFG(index)); 311 + 312 + /* set the wake signaling trigger events */ 313 + value = 
ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 314 + value &= ~WAKE_VAL(~0); 315 + value |= WAKE_VAL_ANY; 316 + ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 317 + 318 + /* enable the wake detection */ 319 + value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 320 + value |= MASTER_ENABLE | LINE_WAKEUP_EN; 321 + ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 322 + 323 + mutex_unlock(&padctl->lock); 324 + 325 + return 0; 326 + } 327 + 328 + static int tegra186_utmi_disable_phy_sleepwalk(struct tegra_xusb_lane *lane) 329 + { 330 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 331 + struct tegra186_xusb_padctl *priv = to_tegra186_xusb_padctl(padctl); 332 + unsigned int index = lane->index; 333 + u32 value; 334 + 335 + mutex_lock(&padctl->lock); 336 + 337 + /* disable the wake detection */ 338 + value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 339 + value &= ~(MASTER_ENABLE | LINE_WAKEUP_EN); 340 + ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 341 + 342 + /* switch the electric control of the USB2.0 pad to XUSB vcore logic */ 343 + value = ao_readl(priv, XUSB_AO_UTMIP_PAD_CFG(index)); 344 + value &= ~(FSLS_USE_XUSB_AO | TRK_CTRL_USE_XUSB_AO | RPD_CTRL_USE_XUSB_AO | 345 + RPU_USE_XUSB_AO | VREG_USE_XUSB_AO); 346 + ao_writel(priv, value, XUSB_AO_UTMIP_PAD_CFG(index)); 347 + 348 + /* disable wake event triggers of sleepwalk logic */ 349 + value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 350 + value &= ~WAKE_VAL(~0); 351 + value |= WAKE_VAL_NONE; 352 + ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index)); 353 + 354 + /* power down the line state detectors of the port */ 355 + value = ao_readl(priv, XUSB_AO_UTMIP_PAD_CFG(index)); 356 + value |= USBOP_VAL_PD | USBON_VAL_PD; 357 + ao_writel(priv, value, XUSB_AO_UTMIP_PAD_CFG(index)); 358 + 359 + /* clear alarm of the sleepwalk logic */ 360 + value = ao_readl(priv, XUSB_AO_UTMIP_TRIGGERS(index)); 361 + value |= CLR_WAKE_ALARM; 362 + 
ao_writel(priv, value, XUSB_AO_UTMIP_TRIGGERS(index)); 363 + 364 + mutex_unlock(&padctl->lock); 365 + 366 + return 0; 367 + } 368 + 369 + static int tegra186_utmi_enable_phy_wake(struct tegra_xusb_lane *lane) 370 + { 371 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 372 + unsigned int index = lane->index; 373 + u32 value; 374 + 375 + mutex_lock(&padctl->lock); 376 + 377 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM); 378 + value &= ~ALL_WAKE_EVENTS; 379 + value |= USB2_PORT_WAKEUP_EVENT(index); 380 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM); 381 + 382 + usleep_range(10, 20); 383 + 384 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM); 385 + value &= ~ALL_WAKE_EVENTS; 386 + value |= USB2_PORT_WAKE_INTERRUPT_ENABLE(index); 387 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM); 388 + 389 + mutex_unlock(&padctl->lock); 390 + 391 + return 0; 392 + } 393 + 394 + static int tegra186_utmi_disable_phy_wake(struct tegra_xusb_lane *lane) 395 + { 396 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 397 + unsigned int index = lane->index; 398 + u32 value; 399 + 400 + mutex_lock(&padctl->lock); 401 + 402 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM); 403 + value &= ~ALL_WAKE_EVENTS; 404 + value &= ~USB2_PORT_WAKE_INTERRUPT_ENABLE(index); 405 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM); 406 + 407 + usleep_range(10, 20); 408 + 409 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM); 410 + value &= ~ALL_WAKE_EVENTS; 411 + value |= USB2_PORT_WAKEUP_EVENT(index); 412 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM); 413 + 414 + mutex_unlock(&padctl->lock); 415 + 416 + return 0; 417 + } 418 + 419 + static bool tegra186_utmi_phy_remote_wake_detected(struct tegra_xusb_lane *lane) 420 + { 421 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 422 + unsigned int index = lane->index; 423 + u32 value; 424 + 425 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM); 426 + if ((value & 
USB2_PORT_WAKE_INTERRUPT_ENABLE(index)) && 427 + (value & USB2_PORT_WAKEUP_EVENT(index))) 428 + return true; 429 + 430 + return false; 431 + } 432 + 315 433 static const struct tegra_xusb_lane_ops tegra186_usb2_lane_ops = { 316 434 .probe = tegra186_usb2_lane_probe, 317 435 .remove = tegra186_usb2_lane_remove, 436 + .enable_phy_sleepwalk = tegra186_utmi_enable_phy_sleepwalk, 437 + .disable_phy_sleepwalk = tegra186_utmi_disable_phy_sleepwalk, 438 + .enable_phy_wake = tegra186_utmi_enable_phy_wake, 439 + .disable_phy_wake = tegra186_utmi_disable_phy_wake, 440 + .remote_wake_detected = tegra186_utmi_phy_remote_wake_detected, 318 441 }; 319 442 320 443 static void tegra186_utmi_bias_pad_power_on(struct tegra_xusb_padctl *padctl) ··· 1043 656 kfree(usb3); 1044 657 } 1045 658 659 + static int tegra186_usb3_enable_phy_sleepwalk(struct tegra_xusb_lane *lane, 660 + enum usb_device_speed speed) 661 + { 662 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 663 + unsigned int index = lane->index; 664 + u32 value; 665 + 666 + mutex_lock(&padctl->lock); 667 + 668 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_1); 669 + value |= SSPX_ELPG_CLAMP_EN_EARLY(index); 670 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_1); 671 + 672 + usleep_range(100, 200); 673 + 674 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_1); 675 + value |= SSPX_ELPG_CLAMP_EN(index); 676 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_1); 677 + 678 + usleep_range(250, 350); 679 + 680 + mutex_unlock(&padctl->lock); 681 + 682 + return 0; 683 + } 684 + 685 + static int tegra186_usb3_disable_phy_sleepwalk(struct tegra_xusb_lane *lane) 686 + { 687 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 688 + unsigned int index = lane->index; 689 + u32 value; 690 + 691 + mutex_lock(&padctl->lock); 692 + 693 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_1); 694 + value &= ~SSPX_ELPG_CLAMP_EN_EARLY(index); 695 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_1); 
696 + 697 + usleep_range(100, 200); 698 + 699 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_1); 700 + value &= ~SSPX_ELPG_CLAMP_EN(index); 701 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_1); 702 + 703 + mutex_unlock(&padctl->lock); 704 + 705 + return 0; 706 + } 707 + 708 + static int tegra186_usb3_enable_phy_wake(struct tegra_xusb_lane *lane) 709 + { 710 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 711 + unsigned int index = lane->index; 712 + u32 value; 713 + 714 + mutex_lock(&padctl->lock); 715 + 716 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM); 717 + value &= ~ALL_WAKE_EVENTS; 718 + value |= SS_PORT_WAKEUP_EVENT(index); 719 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM); 720 + 721 + usleep_range(10, 20); 722 + 723 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM); 724 + value &= ~ALL_WAKE_EVENTS; 725 + value |= SS_PORT_WAKE_INTERRUPT_ENABLE(index); 726 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM); 727 + 728 + mutex_unlock(&padctl->lock); 729 + 730 + return 0; 731 + } 732 + 733 + static int tegra186_usb3_disable_phy_wake(struct tegra_xusb_lane *lane) 734 + { 735 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 736 + unsigned int index = lane->index; 737 + u32 value; 738 + 739 + mutex_lock(&padctl->lock); 740 + 741 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM); 742 + value &= ~ALL_WAKE_EVENTS; 743 + value &= ~SS_PORT_WAKE_INTERRUPT_ENABLE(index); 744 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM); 745 + 746 + usleep_range(10, 20); 747 + 748 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM); 749 + value &= ~ALL_WAKE_EVENTS; 750 + value |= SS_PORT_WAKEUP_EVENT(index); 751 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM); 752 + 753 + mutex_unlock(&padctl->lock); 754 + 755 + return 0; 756 + } 757 + 758 + static bool tegra186_usb3_phy_remote_wake_detected(struct tegra_xusb_lane *lane) 759 + { 760 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 
761 + unsigned int index = lane->index; 762 + u32 value; 763 + 764 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM); 765 + if ((value & SS_PORT_WAKE_INTERRUPT_ENABLE(index)) && (value & SS_PORT_WAKEUP_EVENT(index))) 766 + return true; 767 + 768 + return false; 769 + } 770 + 1046 771 static const struct tegra_xusb_lane_ops tegra186_usb3_lane_ops = { 1047 772 .probe = tegra186_usb3_lane_probe, 1048 773 .remove = tegra186_usb3_lane_remove, 774 + .enable_phy_sleepwalk = tegra186_usb3_enable_phy_sleepwalk, 775 + .disable_phy_sleepwalk = tegra186_usb3_disable_phy_sleepwalk, 776 + .enable_phy_wake = tegra186_usb3_enable_phy_wake, 777 + .disable_phy_wake = tegra186_usb3_disable_phy_wake, 778 + .remote_wake_detected = tegra186_usb3_phy_remote_wake_detected, 1049 779 }; 780 + 1050 781 static int tegra186_usb3_port_enable(struct tegra_xusb_port *port) 1051 782 { 1052 783 return 0; ··· 1418 913 tegra186_xusb_padctl_probe(struct device *dev, 1419 914 const struct tegra_xusb_padctl_soc *soc) 1420 915 { 916 + struct platform_device *pdev = to_platform_device(dev); 1421 917 struct tegra186_xusb_padctl *priv; 918 + struct resource *res; 1422 919 int err; 1423 920 1424 921 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); ··· 1430 923 priv->base.dev = dev; 1431 924 priv->base.soc = soc; 1432 925 926 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ao"); 927 + priv->ao_regs = devm_ioremap_resource(dev, res); 928 + if (IS_ERR(priv->ao_regs)) 929 + return ERR_CAST(priv->ao_regs); 930 + 1433 931 err = tegra186_xusb_read_fuse_calibration(priv); 1434 932 if (err < 0) 1435 933 return ERR_PTR(err); 1436 934 1437 935 return &priv->base; 936 + } 937 + 938 + static void tegra186_xusb_padctl_save(struct tegra_xusb_padctl *padctl) 939 + { 940 + struct tegra186_xusb_padctl *priv = to_tegra186_xusb_padctl(padctl); 941 + 942 + priv->context.vbus_id = padctl_readl(padctl, USB2_VBUS_ID); 943 + priv->context.usb2_pad_mux = padctl_readl(padctl, XUSB_PADCTL_USB2_PAD_MUX); 944 + 
priv->context.usb2_port_cap = padctl_readl(padctl, XUSB_PADCTL_USB2_PORT_CAP); 945 + priv->context.ss_port_cap = padctl_readl(padctl, XUSB_PADCTL_SS_PORT_CAP); 946 + } 947 + 948 + static void tegra186_xusb_padctl_restore(struct tegra_xusb_padctl *padctl) 949 + { 950 + struct tegra186_xusb_padctl *priv = to_tegra186_xusb_padctl(padctl); 951 + 952 + padctl_writel(padctl, priv->context.usb2_pad_mux, XUSB_PADCTL_USB2_PAD_MUX); 953 + padctl_writel(padctl, priv->context.usb2_port_cap, XUSB_PADCTL_USB2_PORT_CAP); 954 + padctl_writel(padctl, priv->context.ss_port_cap, XUSB_PADCTL_SS_PORT_CAP); 955 + padctl_writel(padctl, priv->context.vbus_id, USB2_VBUS_ID); 956 + } 957 + 958 + static int tegra186_xusb_padctl_suspend_noirq(struct tegra_xusb_padctl *padctl) 959 + { 960 + tegra186_xusb_padctl_save(padctl); 961 + 962 + return 0; 963 + } 964 + 965 + static int tegra186_xusb_padctl_resume_noirq(struct tegra_xusb_padctl *padctl) 966 + { 967 + tegra186_xusb_padctl_restore(padctl); 968 + 969 + return 0; 1438 970 } 1439 971 1440 972 static void tegra186_xusb_padctl_remove(struct tegra_xusb_padctl *padctl) ··· 1483 937 static const struct tegra_xusb_padctl_ops tegra186_xusb_padctl_ops = { 1484 938 .probe = tegra186_xusb_padctl_probe, 1485 939 .remove = tegra186_xusb_padctl_remove, 940 + .suspend_noirq = tegra186_xusb_padctl_suspend_noirq, 941 + .resume_noirq = tegra186_xusb_padctl_resume_noirq, 1486 942 .vbus_override = tegra186_xusb_padctl_vbus_override, 1487 943 }; 1488 944
+1288 -245
drivers/phy/tegra/xusb-tegra210.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved. 3 + * Copyright (c) 2014-2020, NVIDIA CORPORATION. All rights reserved. 4 4 * Copyright (C) 2015 Google, Inc. 5 5 */ 6 6 ··· 11 11 #include <linux/mailbox_client.h> 12 12 #include <linux/module.h> 13 13 #include <linux/of.h> 14 + #include <linux/of_platform.h> 14 15 #include <linux/phy/phy.h> 15 16 #include <linux/platform_device.h> 17 + #include <linux/regmap.h> 16 18 #include <linux/regulator/consumer.h> 17 19 #include <linux/reset.h> 18 20 #include <linux/slab.h> ··· 54 52 #define XUSB_PADCTL_SS_PORT_MAP_PORTX_MAP(x, v) (((v) & 0x7) << ((x) * 5)) 55 53 #define XUSB_PADCTL_SS_PORT_MAP_PORT_DISABLED 0x7 56 54 55 + #define XUSB_PADCTL_ELPG_PROGRAM_0 0x20 56 + #define USB2_PORT_WAKE_INTERRUPT_ENABLE(x) BIT((x)) 57 + #define USB2_PORT_WAKEUP_EVENT(x) BIT((x) + 7) 58 + #define SS_PORT_WAKE_INTERRUPT_ENABLE(x) BIT((x) + 14) 59 + #define SS_PORT_WAKEUP_EVENT(x) BIT((x) + 21) 60 + #define USB2_HSIC_PORT_WAKE_INTERRUPT_ENABLE(x) BIT((x) + 28) 61 + #define USB2_HSIC_PORT_WAKEUP_EVENT(x) BIT((x) + 30) 62 + #define ALL_WAKE_EVENTS ( \ 63 + USB2_PORT_WAKEUP_EVENT(0) | USB2_PORT_WAKEUP_EVENT(1) | \ 64 + USB2_PORT_WAKEUP_EVENT(2) | USB2_PORT_WAKEUP_EVENT(3) | \ 65 + SS_PORT_WAKEUP_EVENT(0) | SS_PORT_WAKEUP_EVENT(1) | \ 66 + SS_PORT_WAKEUP_EVENT(2) | SS_PORT_WAKEUP_EVENT(3) | \ 67 + USB2_HSIC_PORT_WAKEUP_EVENT(0)) 68 + 57 69 #define XUSB_PADCTL_ELPG_PROGRAM1 0x024 58 70 #define XUSB_PADCTL_ELPG_PROGRAM1_AUX_MUX_LP0_VCORE_DOWN (1 << 31) 59 71 #define XUSB_PADCTL_ELPG_PROGRAM1_AUX_MUX_LP0_CLAMP_EN_EARLY (1 << 30) ··· 106 90 #define XUSB_PADCTL_USB2_OTG_PAD_CTL1_PD_DR (1 << 2) 107 91 #define XUSB_PADCTL_USB2_OTG_PAD_CTL1_PD_DISC_OVRD (1 << 1) 108 92 #define XUSB_PADCTL_USB2_OTG_PAD_CTL1_PD_CHRP_OVRD (1 << 0) 93 + #define RPD_CTRL(x) (((x) & 0x1f) << 26) 94 + #define RPD_CTRL_VALUE(x) (((x) >> 26) & 0x1f) 109 95 110 96 #define XUSB_PADCTL_USB2_BIAS_PAD_CTL0 
0x284 111 97 #define XUSB_PADCTL_USB2_BIAS_PAD_CTL0_PD (1 << 11) ··· 126 108 #define XUSB_PADCTL_USB2_BIAS_PAD_CTL1_TRK_START_TIMER_SHIFT 12 127 109 #define XUSB_PADCTL_USB2_BIAS_PAD_CTL1_TRK_START_TIMER_MASK 0x7f 128 110 #define XUSB_PADCTL_USB2_BIAS_PAD_CTL1_TRK_START_TIMER_VAL 0x1e 111 + #define TCTRL_VALUE(x) (((x) & 0x3f) >> 0) 112 + #define PCTRL_VALUE(x) (((x) >> 6) & 0x3f) 129 113 130 114 #define XUSB_PADCTL_HSIC_PADX_CTL0(x) (0x300 + (x) * 0x20) 131 115 #define XUSB_PADCTL_HSIC_PAD_CTL0_RPU_STROBE (1 << 18) ··· 218 198 #define XUSB_PADCTL_UPHY_MISC_PAD_CTL1_AUX_RX_TERM_EN BIT(18) 219 199 #define XUSB_PADCTL_UPHY_MISC_PAD_CTL1_AUX_RX_MODE_OVRD BIT(13) 220 200 201 + #define XUSB_PADCTL_UPHY_MISC_PAD_PX_CTL2(x) (0x464 + (x) * 0x40) 202 + #define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_IDDQ BIT(0) 203 + #define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_IDDQ_OVRD BIT(1) 204 + #define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_SLEEP_MASK GENMASK(5, 4) 205 + #define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_SLEEP_VAL GENMASK(5, 4) 206 + #define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_PWR_OVRD BIT(24) 207 + #define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_IDDQ BIT(8) 208 + #define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_IDDQ_OVRD BIT(9) 209 + #define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_SLEEP_MASK GENMASK(13, 12) 210 + #define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_SLEEP_VAL GENMASK(13, 12) 211 + #define XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_PWR_OVRD BIT(25) 212 + 221 213 #define XUSB_PADCTL_UPHY_PLL_S0_CTL1 0x860 222 214 223 215 #define XUSB_PADCTL_UPHY_PLL_S0_CTL2 0x864 ··· 241 209 #define XUSB_PADCTL_UPHY_PLL_S0_CTL8 0x87c 242 210 243 211 #define XUSB_PADCTL_UPHY_MISC_PAD_S0_CTL1 0x960 212 + #define XUSB_PADCTL_UPHY_MISC_PAD_S0_CTL2 0x964 244 213 245 214 #define XUSB_PADCTL_UPHY_USB3_PADX_ECTL1(x) (0xa60 + (x) * 0x40) 246 215 #define XUSB_PADCTL_UPHY_USB3_PAD_ECTL1_TX_TERM_CTRL_SHIFT 16 ··· 271 238 #define XUSB_PADCTL_USB2_VBUS_ID_OVERRIDE_FLOATING 8 272 239 #define XUSB_PADCTL_USB2_VBUS_ID_OVERRIDE_GROUNDED 0 273 240 241 
+ /* USB2 SLEEPWALK registers */ 242 + #define UTMIP(_port, _offset1, _offset2) \ 243 + (((_port) <= 2) ? (_offset1) : (_offset2)) 244 + 245 + #define PMC_UTMIP_UHSIC_SLEEP_CFG(x) UTMIP(x, 0x1fc, 0x4d0) 246 + #define UTMIP_MASTER_ENABLE(x) UTMIP(x, BIT(8 * (x)), BIT(0)) 247 + #define UTMIP_FSLS_USE_PMC(x) UTMIP(x, BIT(8 * (x) + 1), \ 248 + BIT(1)) 249 + #define UTMIP_PCTRL_USE_PMC(x) UTMIP(x, BIT(8 * (x) + 2), \ 250 + BIT(2)) 251 + #define UTMIP_TCTRL_USE_PMC(x) UTMIP(x, BIT(8 * (x) + 3), \ 252 + BIT(3)) 253 + #define UTMIP_WAKE_VAL(_port, _value) (((_value) & 0xf) << \ 254 + (UTMIP(_port, 8 * (_port) + 4, 4))) 255 + #define UTMIP_WAKE_VAL_NONE(_port) UTMIP_WAKE_VAL(_port, 12) 256 + #define UTMIP_WAKE_VAL_ANY(_port) UTMIP_WAKE_VAL(_port, 15) 257 + 258 + #define PMC_UTMIP_UHSIC_SLEEP_CFG1 (0x4d0) 259 + #define UTMIP_RPU_SWITC_LOW_USE_PMC_PX(x) BIT((x) + 8) 260 + #define UTMIP_RPD_CTRL_USE_PMC_PX(x) BIT((x) + 16) 261 + 262 + #define PMC_UTMIP_MASTER_CONFIG (0x274) 263 + #define UTMIP_PWR(x) UTMIP(x, BIT(x), BIT(4)) 264 + #define UHSIC_PWR BIT(3) 265 + 266 + #define PMC_USB_DEBOUNCE_DEL (0xec) 267 + #define DEBOUNCE_VAL(x) (((x) & 0xffff) << 0) 268 + #define UTMIP_LINE_DEB_CNT(x) (((x) & 0xf) << 16) 269 + #define UHSIC_LINE_DEB_CNT(x) (((x) & 0xf) << 20) 270 + 271 + #define PMC_UTMIP_UHSIC_FAKE(x) UTMIP(x, 0x218, 0x294) 272 + #define UTMIP_FAKE_USBOP_VAL(x) UTMIP(x, BIT(4 * (x)), BIT(8)) 273 + #define UTMIP_FAKE_USBON_VAL(x) UTMIP(x, BIT(4 * (x) + 1), \ 274 + BIT(9)) 275 + #define UTMIP_FAKE_USBOP_EN(x) UTMIP(x, BIT(4 * (x) + 2), \ 276 + BIT(10)) 277 + #define UTMIP_FAKE_USBON_EN(x) UTMIP(x, BIT(4 * (x) + 3), \ 278 + BIT(11)) 279 + 280 + #define PMC_UTMIP_UHSIC_SLEEPWALK_CFG(x) UTMIP(x, 0x200, 0x288) 281 + #define UTMIP_LINEVAL_WALK_EN(x) UTMIP(x, BIT(8 * (x) + 7), \ 282 + BIT(15)) 283 + 284 + #define PMC_USB_AO (0xf0) 285 + #define USBOP_VAL_PD(x) UTMIP(x, BIT(4 * (x)), BIT(20)) 286 + #define USBON_VAL_PD(x) UTMIP(x, BIT(4 * (x) + 1), \ 287 + BIT(21)) 288 + #define 
STROBE_VAL_PD BIT(12) 289 + #define DATA0_VAL_PD BIT(13) 290 + #define DATA1_VAL_PD BIT(24) 291 + 292 + #define PMC_UTMIP_UHSIC_SAVED_STATE(x) UTMIP(x, 0x1f0, 0x280) 293 + #define SPEED(_port, _value) (((_value) & 0x3) << \ 294 + (UTMIP(_port, 8 * (_port), 8))) 295 + #define UTMI_HS(_port) SPEED(_port, 0) 296 + #define UTMI_FS(_port) SPEED(_port, 1) 297 + #define UTMI_LS(_port) SPEED(_port, 2) 298 + #define UTMI_RST(_port) SPEED(_port, 3) 299 + 300 + #define PMC_UTMIP_UHSIC_TRIGGERS (0x1ec) 301 + #define UTMIP_CLR_WALK_PTR(x) UTMIP(x, BIT(x), BIT(16)) 302 + #define UTMIP_CAP_CFG(x) UTMIP(x, BIT((x) + 4), BIT(17)) 303 + #define UTMIP_CLR_WAKE_ALARM(x) UTMIP(x, BIT((x) + 12), \ 304 + BIT(19)) 305 + #define UHSIC_CLR_WALK_PTR BIT(3) 306 + #define UHSIC_CLR_WAKE_ALARM BIT(15) 307 + 308 + #define PMC_UTMIP_SLEEPWALK_PX(x) UTMIP(x, 0x204 + (4 * (x)), \ 309 + 0x4e0) 310 + /* phase A */ 311 + #define UTMIP_USBOP_RPD_A BIT(0) 312 + #define UTMIP_USBON_RPD_A BIT(1) 313 + #define UTMIP_AP_A BIT(4) 314 + #define UTMIP_AN_A BIT(5) 315 + #define UTMIP_HIGHZ_A BIT(6) 316 + /* phase B */ 317 + #define UTMIP_USBOP_RPD_B BIT(8) 318 + #define UTMIP_USBON_RPD_B BIT(9) 319 + #define UTMIP_AP_B BIT(12) 320 + #define UTMIP_AN_B BIT(13) 321 + #define UTMIP_HIGHZ_B BIT(14) 322 + /* phase C */ 323 + #define UTMIP_USBOP_RPD_C BIT(16) 324 + #define UTMIP_USBON_RPD_C BIT(17) 325 + #define UTMIP_AP_C BIT(20) 326 + #define UTMIP_AN_C BIT(21) 327 + #define UTMIP_HIGHZ_C BIT(22) 328 + /* phase D */ 329 + #define UTMIP_USBOP_RPD_D BIT(24) 330 + #define UTMIP_USBON_RPD_D BIT(25) 331 + #define UTMIP_AP_D BIT(28) 332 + #define UTMIP_AN_D BIT(29) 333 + #define UTMIP_HIGHZ_D BIT(30) 334 + 335 + #define PMC_UTMIP_UHSIC_LINE_WAKEUP (0x26c) 336 + #define UTMIP_LINE_WAKEUP_EN(x) UTMIP(x, BIT(x), BIT(4)) 337 + #define UHSIC_LINE_WAKEUP_EN BIT(3) 338 + 339 + #define PMC_UTMIP_TERM_PAD_CFG (0x1f8) 340 + #define PCTRL_VAL(x) (((x) & 0x3f) << 1) 341 + #define TCTRL_VAL(x) (((x) & 0x3f) << 7) 342 + 343 + #define 
PMC_UTMIP_PAD_CFGX(x) (0x4c0 + (4 * (x))) 344 + #define RPD_CTRL_PX(x) (((x) & 0x1f) << 22) 345 + 346 + #define PMC_UHSIC_SLEEP_CFG PMC_UTMIP_UHSIC_SLEEP_CFG(0) 347 + #define UHSIC_MASTER_ENABLE BIT(24) 348 + #define UHSIC_WAKE_VAL(_value) (((_value) & 0xf) << 28) 349 + #define UHSIC_WAKE_VAL_SD10 UHSIC_WAKE_VAL(2) 350 + #define UHSIC_WAKE_VAL_NONE UHSIC_WAKE_VAL(12) 351 + 352 + #define PMC_UHSIC_FAKE PMC_UTMIP_UHSIC_FAKE(0) 353 + #define UHSIC_FAKE_STROBE_VAL BIT(12) 354 + #define UHSIC_FAKE_DATA_VAL BIT(13) 355 + #define UHSIC_FAKE_STROBE_EN BIT(14) 356 + #define UHSIC_FAKE_DATA_EN BIT(15) 357 + 358 + #define PMC_UHSIC_SAVED_STATE PMC_UTMIP_UHSIC_SAVED_STATE(0) 359 + #define UHSIC_MODE(_value) (((_value) & 0x1) << 24) 360 + #define UHSIC_HS UHSIC_MODE(0) 361 + #define UHSIC_RST UHSIC_MODE(1) 362 + 363 + #define PMC_UHSIC_SLEEPWALK_CFG PMC_UTMIP_UHSIC_SLEEPWALK_CFG(0) 364 + #define UHSIC_WAKE_WALK_EN BIT(30) 365 + #define UHSIC_LINEVAL_WALK_EN BIT(31) 366 + 367 + #define PMC_UHSIC_SLEEPWALK_P0 (0x210) 368 + #define UHSIC_DATA0_RPD_A BIT(1) 369 + #define UHSIC_DATA0_RPU_B BIT(11) 370 + #define UHSIC_DATA0_RPU_C BIT(19) 371 + #define UHSIC_DATA0_RPU_D BIT(27) 372 + #define UHSIC_STROBE_RPU_A BIT(2) 373 + #define UHSIC_STROBE_RPD_B BIT(8) 374 + #define UHSIC_STROBE_RPD_C BIT(16) 375 + #define UHSIC_STROBE_RPD_D BIT(24) 376 + 274 377 struct tegra210_xusb_fuse_calibration { 275 378 u32 hs_curr_level[4]; 276 379 u32 hs_term_range_adj; 277 380 u32 rpd_ctrl; 278 381 }; 279 382 383 + struct tegra210_xusb_padctl_context { 384 + u32 usb2_pad_mux; 385 + u32 usb2_port_cap; 386 + u32 ss_port_map; 387 + u32 usb3_pad_mux; 388 + }; 389 + 280 390 struct tegra210_xusb_padctl { 281 391 struct tegra_xusb_padctl base; 392 + struct regmap *regmap; 282 393 283 394 struct tegra210_xusb_fuse_calibration fuse; 395 + struct tegra210_xusb_padctl_context context; 284 396 }; 285 397 286 398 static inline struct tegra210_xusb_padctl * ··· 434 256 return container_of(padctl, struct 
tegra210_xusb_padctl, base); 435 257 } 436 258 259 + static const struct tegra_xusb_lane_map tegra210_usb3_map[] = { 260 + { 0, "pcie", 6 }, 261 + { 1, "pcie", 5 }, 262 + { 2, "pcie", 0 }, 263 + { 2, "pcie", 3 }, 264 + { 3, "pcie", 4 }, 265 + { 3, "sata", 0 }, 266 + { 0, NULL, 0 } 267 + }; 268 + 269 + static int tegra210_usb3_lane_map(struct tegra_xusb_lane *lane) 270 + { 271 + const struct tegra_xusb_lane_map *map; 272 + 273 + for (map = tegra210_usb3_map; map->type; map++) { 274 + if (map->index == lane->index && 275 + strcmp(map->type, lane->pad->soc->name) == 0) { 276 + dev_dbg(lane->pad->padctl->dev, "lane = %s map to port = usb3-%d\n", 277 + lane->pad->soc->lanes[lane->index].name, map->port); 278 + return map->port; 279 + } 280 + } 281 + 282 + return -EINVAL; 283 + } 284 + 437 285 /* must be called under padctl->lock */ 438 286 static int tegra210_pex_uphy_enable(struct tegra_xusb_padctl *padctl) 439 287 { 440 288 struct tegra_xusb_pcie_pad *pcie = to_pcie_pad(padctl->pcie); 441 289 unsigned long timeout; 442 290 u32 value; 291 + unsigned int i; 443 292 int err; 444 293 445 - if (pcie->enable > 0) { 446 - pcie->enable++; 294 + if (pcie->enable) 447 295 return 0; 448 - } 449 296 450 297 err = clk_prepare_enable(pcie->pll); 451 298 if (err < 0) 452 299 return err; 300 + 301 + if (tegra210_plle_hw_sequence_is_enabled()) 302 + goto skip_pll_init; 453 303 454 304 err = reset_control_deassert(pcie->rst); 455 305 if (err < 0) ··· 661 455 662 456 tegra210_xusb_pll_hw_sequence_start(); 663 457 664 - pcie->enable++; 458 + skip_pll_init: 459 + pcie->enable = true; 460 + 461 + for (i = 0; i < padctl->pcie->soc->num_lanes; i++) { 462 + value = padctl_readl(padctl, XUSB_PADCTL_USB3_PAD_MUX); 463 + value |= XUSB_PADCTL_USB3_PAD_MUX_PCIE_IDDQ_DISABLE(i); 464 + padctl_writel(padctl, value, XUSB_PADCTL_USB3_PAD_MUX); 465 + } 665 466 666 467 return 0; 667 468 ··· 682 469 static void tegra210_pex_uphy_disable(struct tegra_xusb_padctl *padctl) 683 470 { 684 471 struct 
tegra_xusb_pcie_pad *pcie = to_pcie_pad(padctl->pcie); 472 + u32 value; 473 + unsigned int i; 685 474 686 - mutex_lock(&padctl->lock); 475 + if (WARN_ON(!pcie->enable)) 476 + return; 687 477 688 - if (WARN_ON(pcie->enable == 0)) 689 - goto unlock; 478 + pcie->enable = false; 690 479 691 - if (--pcie->enable > 0) 692 - goto unlock; 480 + for (i = 0; i < padctl->pcie->soc->num_lanes; i++) { 481 + value = padctl_readl(padctl, XUSB_PADCTL_USB3_PAD_MUX); 482 + value &= ~XUSB_PADCTL_USB3_PAD_MUX_PCIE_IDDQ_DISABLE(i); 483 + padctl_writel(padctl, value, XUSB_PADCTL_USB3_PAD_MUX); 484 + } 693 485 694 - reset_control_assert(pcie->rst); 695 486 clk_disable_unprepare(pcie->pll); 696 - 697 - unlock: 698 - mutex_unlock(&padctl->lock); 699 487 } 700 488 701 489 /* must be called under padctl->lock */ 702 - static int tegra210_sata_uphy_enable(struct tegra_xusb_padctl *padctl, bool usb) 490 + static int tegra210_sata_uphy_enable(struct tegra_xusb_padctl *padctl) 703 491 { 704 492 struct tegra_xusb_sata_pad *sata = to_sata_pad(padctl->sata); 493 + struct tegra_xusb_lane *lane = tegra_xusb_find_lane(padctl, "sata", 0); 705 494 unsigned long timeout; 706 495 u32 value; 496 + unsigned int i; 707 497 int err; 498 + bool usb; 708 499 709 - if (sata->enable > 0) { 710 - sata->enable++; 500 + if (sata->enable) 711 501 return 0; 712 - } 502 + 503 + if (IS_ERR(lane)) 504 + return 0; 505 + 506 + if (tegra210_plle_hw_sequence_is_enabled()) 507 + goto skip_pll_init; 508 + 509 + usb = tegra_xusb_lane_check(lane, "usb3-ss"); 713 510 714 511 err = clk_prepare_enable(sata->pll); 715 512 if (err < 0) ··· 920 697 921 698 tegra210_sata_pll_hw_sequence_start(); 922 699 923 - sata->enable++; 700 + skip_pll_init: 701 + sata->enable = true; 702 + 703 + for (i = 0; i < padctl->sata->soc->num_lanes; i++) { 704 + value = padctl_readl(padctl, XUSB_PADCTL_USB3_PAD_MUX); 705 + value |= XUSB_PADCTL_USB3_PAD_MUX_SATA_IDDQ_DISABLE(i); 706 + padctl_writel(padctl, value, XUSB_PADCTL_USB3_PAD_MUX); 707 + } 924 708 
925 709 return 0; 926 710 ··· 941 711 static void tegra210_sata_uphy_disable(struct tegra_xusb_padctl *padctl) 942 712 { 943 713 struct tegra_xusb_sata_pad *sata = to_sata_pad(padctl->sata); 714 + u32 value; 715 + unsigned int i; 944 716 945 - mutex_lock(&padctl->lock); 717 + if (WARN_ON(!sata->enable)) 718 + return; 946 719 947 - if (WARN_ON(sata->enable == 0)) 948 - goto unlock; 720 + sata->enable = false; 949 721 950 - if (--sata->enable > 0) 951 - goto unlock; 722 + for (i = 0; i < padctl->sata->soc->num_lanes; i++) { 723 + value = padctl_readl(padctl, XUSB_PADCTL_USB3_PAD_MUX); 724 + value &= ~XUSB_PADCTL_USB3_PAD_MUX_SATA_IDDQ_DISABLE(i); 725 + padctl_writel(padctl, value, XUSB_PADCTL_USB3_PAD_MUX); 726 + } 952 727 953 - reset_control_assert(sata->rst); 954 728 clk_disable_unprepare(sata->pll); 955 - 956 - unlock: 957 - mutex_unlock(&padctl->lock); 958 729 } 959 730 960 - static int tegra210_xusb_padctl_enable(struct tegra_xusb_padctl *padctl) 731 + static void tegra210_aux_mux_lp0_clamp_disable(struct tegra_xusb_padctl *padctl) 961 732 { 962 733 u32 value; 963 - 964 - mutex_lock(&padctl->lock); 965 - 966 - if (padctl->enable++ > 0) 967 - goto out; 968 734 969 735 value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1); 970 736 value &= ~XUSB_PADCTL_ELPG_PROGRAM1_AUX_MUX_LP0_CLAMP_EN; ··· 977 751 value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1); 978 752 value &= ~XUSB_PADCTL_ELPG_PROGRAM1_AUX_MUX_LP0_VCORE_DOWN; 979 753 padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1); 980 - 981 - out: 982 - mutex_unlock(&padctl->lock); 983 - return 0; 984 754 } 985 755 986 - static int tegra210_xusb_padctl_disable(struct tegra_xusb_padctl *padctl) 756 + static void tegra210_aux_mux_lp0_clamp_enable(struct tegra_xusb_padctl *padctl) 987 757 { 988 758 u32 value; 989 - 990 - mutex_lock(&padctl->lock); 991 - 992 - if (WARN_ON(padctl->enable == 0)) 993 - goto out; 994 - 995 - if (--padctl->enable > 0) 996 - goto out; 997 759 998 760 value = padctl_readl(padctl, 
XUSB_PADCTL_ELPG_PROGRAM1); 999 761 value |= XUSB_PADCTL_ELPG_PROGRAM1_AUX_MUX_LP0_VCORE_DOWN; ··· 998 784 value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1); 999 785 value |= XUSB_PADCTL_ELPG_PROGRAM1_AUX_MUX_LP0_CLAMP_EN; 1000 786 padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1); 787 + } 1001 788 1002 - out: 1003 - mutex_unlock(&padctl->lock); 789 + static int tegra210_uphy_init(struct tegra_xusb_padctl *padctl) 790 + { 791 + if (padctl->pcie) 792 + tegra210_pex_uphy_enable(padctl); 793 + 794 + if (padctl->sata) 795 + tegra210_sata_uphy_enable(padctl); 796 + 797 + if (!tegra210_plle_hw_sequence_is_enabled()) 798 + tegra210_plle_hw_sequence_start(); 799 + else 800 + dev_dbg(padctl->dev, "PLLE is already in HW control\n"); 801 + 802 + tegra210_aux_mux_lp0_clamp_disable(padctl); 803 + 1004 804 return 0; 805 + } 806 + 807 + static void __maybe_unused 808 + tegra210_uphy_deinit(struct tegra_xusb_padctl *padctl) 809 + { 810 + tegra210_aux_mux_lp0_clamp_enable(padctl); 811 + 812 + if (padctl->sata) 813 + tegra210_sata_uphy_disable(padctl); 814 + 815 + if (padctl->pcie) 816 + tegra210_pex_uphy_disable(padctl); 1005 817 } 1006 818 1007 819 static int tegra210_hsic_set_idle(struct tegra_xusb_padctl *padctl, ··· 1051 811 XUSB_PADCTL_HSIC_PAD_CTL0_RPU_STROBE); 1052 812 1053 813 padctl_writel(padctl, value, XUSB_PADCTL_HSIC_PADX_CTL0(index)); 814 + 815 + return 0; 816 + } 817 + 818 + static int tegra210_usb3_enable_phy_sleepwalk(struct tegra_xusb_lane *lane, 819 + enum usb_device_speed speed) 820 + { 821 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 822 + int port = tegra210_usb3_lane_map(lane); 823 + struct device *dev = padctl->dev; 824 + u32 value; 825 + 826 + if (port < 0) { 827 + dev_err(dev, "invalid usb3 port number\n"); 828 + return -EINVAL; 829 + } 830 + 831 + mutex_lock(&padctl->lock); 832 + 833 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1); 834 + value |= XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_CLAMP_EN_EARLY(port); 835 + 
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1); 836 + 837 + usleep_range(100, 200); 838 + 839 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1); 840 + value |= XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_CLAMP_EN(port); 841 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1); 842 + 843 + usleep_range(250, 350); 844 + 845 + mutex_unlock(&padctl->lock); 846 + 847 + return 0; 848 + } 849 + 850 + static int tegra210_usb3_disable_phy_sleepwalk(struct tegra_xusb_lane *lane) 851 + { 852 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 853 + int port = tegra210_usb3_lane_map(lane); 854 + struct device *dev = padctl->dev; 855 + u32 value; 856 + 857 + if (port < 0) { 858 + dev_err(dev, "invalid usb3 port number\n"); 859 + return -EINVAL; 860 + } 861 + 862 + mutex_lock(&padctl->lock); 863 + 864 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1); 865 + value &= ~XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_CLAMP_EN_EARLY(port); 866 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1); 867 + 868 + usleep_range(100, 200); 869 + 870 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1); 871 + value &= ~XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_CLAMP_EN(port); 872 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1); 873 + 874 + mutex_unlock(&padctl->lock); 875 + 876 + return 0; 877 + } 878 + 879 + static int tegra210_usb3_enable_phy_wake(struct tegra_xusb_lane *lane) 880 + { 881 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 882 + int port = tegra210_usb3_lane_map(lane); 883 + struct device *dev = padctl->dev; 884 + u32 value; 885 + 886 + if (port < 0) { 887 + dev_err(dev, "invalid usb3 port number\n"); 888 + return -EINVAL; 889 + } 890 + 891 + mutex_lock(&padctl->lock); 892 + 893 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 894 + value &= ~ALL_WAKE_EVENTS; 895 + value |= SS_PORT_WAKEUP_EVENT(port); 896 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_0); 897 + 898 + usleep_range(10, 20); 899 + 900 + value = 
padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 901 + value &= ~ALL_WAKE_EVENTS; 902 + value |= SS_PORT_WAKE_INTERRUPT_ENABLE(port); 903 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_0); 904 + 905 + mutex_unlock(&padctl->lock); 906 + 907 + return 0; 908 + } 909 + 910 + static int tegra210_usb3_disable_phy_wake(struct tegra_xusb_lane *lane) 911 + { 912 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 913 + int port = tegra210_usb3_lane_map(lane); 914 + struct device *dev = padctl->dev; 915 + u32 value; 916 + 917 + if (port < 0) { 918 + dev_err(dev, "invalid usb3 port number\n"); 919 + return -EINVAL; 920 + } 921 + 922 + mutex_lock(&padctl->lock); 923 + 924 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 925 + value &= ~ALL_WAKE_EVENTS; 926 + value &= ~SS_PORT_WAKE_INTERRUPT_ENABLE(port); 927 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_0); 928 + 929 + usleep_range(10, 20); 930 + 931 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 932 + value &= ~ALL_WAKE_EVENTS; 933 + value |= SS_PORT_WAKEUP_EVENT(port); 934 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_0); 935 + 936 + mutex_unlock(&padctl->lock); 937 + 938 + return 0; 939 + } 940 + 941 + static bool tegra210_usb3_phy_remote_wake_detected(struct tegra_xusb_lane *lane) 942 + { 943 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 944 + int index = tegra210_usb3_lane_map(lane); 945 + u32 value; 946 + 947 + if (index < 0) 948 + return false; 949 + 950 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 951 + if ((value & SS_PORT_WAKE_INTERRUPT_ENABLE(index)) && (value & SS_PORT_WAKEUP_EVENT(index))) 952 + return true; 953 + 954 + return false; 955 + } 956 + 957 + static int tegra210_utmi_enable_phy_wake(struct tegra_xusb_lane *lane) 958 + { 959 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 960 + unsigned int index = lane->index; 961 + u32 value; 962 + 963 + mutex_lock(&padctl->lock); 964 + 965 + value = padctl_readl(padctl, 
XUSB_PADCTL_ELPG_PROGRAM_0); 966 + value &= ~ALL_WAKE_EVENTS; 967 + value |= USB2_PORT_WAKEUP_EVENT(index); 968 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_0); 969 + 970 + usleep_range(10, 20); 971 + 972 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 973 + value &= ~ALL_WAKE_EVENTS; 974 + value |= USB2_PORT_WAKE_INTERRUPT_ENABLE(index); 975 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_0); 976 + 977 + mutex_unlock(&padctl->lock); 978 + 979 + return 0; 980 + } 981 + 982 + static int tegra210_utmi_disable_phy_wake(struct tegra_xusb_lane *lane) 983 + { 984 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 985 + unsigned int index = lane->index; 986 + u32 value; 987 + 988 + mutex_lock(&padctl->lock); 989 + 990 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 991 + value &= ~ALL_WAKE_EVENTS; 992 + value &= ~USB2_PORT_WAKE_INTERRUPT_ENABLE(index); 993 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_0); 994 + 995 + usleep_range(10, 20); 996 + 997 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 998 + value &= ~ALL_WAKE_EVENTS; 999 + value |= USB2_PORT_WAKEUP_EVENT(index); 1000 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_0); 1001 + 1002 + mutex_unlock(&padctl->lock); 1003 + 1004 + return 0; 1005 + } 1006 + 1007 + static bool tegra210_utmi_phy_remote_wake_detected(struct tegra_xusb_lane *lane) 1008 + { 1009 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 1010 + unsigned int index = lane->index; 1011 + u32 value; 1012 + 1013 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 1014 + if ((value & USB2_PORT_WAKE_INTERRUPT_ENABLE(index)) && 1015 + (value & USB2_PORT_WAKEUP_EVENT(index))) 1016 + return true; 1017 + 1018 + return false; 1019 + } 1020 + 1021 + static int tegra210_hsic_enable_phy_wake(struct tegra_xusb_lane *lane) 1022 + { 1023 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 1024 + unsigned int index = lane->index; 1025 + u32 value; 1026 + 1027 + 
mutex_lock(&padctl->lock); 1028 + 1029 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 1030 + value &= ~ALL_WAKE_EVENTS; 1031 + value |= USB2_HSIC_PORT_WAKEUP_EVENT(index); 1032 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_0); 1033 + 1034 + usleep_range(10, 20); 1035 + 1036 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 1037 + value &= ~ALL_WAKE_EVENTS; 1038 + value |= USB2_HSIC_PORT_WAKE_INTERRUPT_ENABLE(index); 1039 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_0); 1040 + 1041 + mutex_unlock(&padctl->lock); 1042 + 1043 + return 0; 1044 + } 1045 + 1046 + static int tegra210_hsic_disable_phy_wake(struct tegra_xusb_lane *lane) 1047 + { 1048 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 1049 + unsigned int index = lane->index; 1050 + u32 value; 1051 + 1052 + mutex_lock(&padctl->lock); 1053 + 1054 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 1055 + value &= ~ALL_WAKE_EVENTS; 1056 + value &= ~USB2_HSIC_PORT_WAKE_INTERRUPT_ENABLE(index); 1057 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_0); 1058 + 1059 + usleep_range(10, 20); 1060 + 1061 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 1062 + value &= ~ALL_WAKE_EVENTS; 1063 + value |= USB2_HSIC_PORT_WAKEUP_EVENT(index); 1064 + padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_0); 1065 + 1066 + mutex_unlock(&padctl->lock); 1067 + 1068 + return 0; 1069 + } 1070 + 1071 + static bool tegra210_hsic_phy_remote_wake_detected(struct tegra_xusb_lane *lane) 1072 + { 1073 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 1074 + unsigned int index = lane->index; 1075 + u32 value; 1076 + 1077 + value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_0); 1078 + if ((value & USB2_HSIC_PORT_WAKE_INTERRUPT_ENABLE(index)) && 1079 + (value & USB2_HSIC_PORT_WAKEUP_EVENT(index))) 1080 + return true; 1081 + 1082 + return false; 1083 + } 1084 + 1085 + #define padctl_pmc_readl(_priv, _offset) \ 1086 + ({ \ 1087 + u32 value; \ 1088 + 
WARN(regmap_read(_priv->regmap, _offset, &value), "read %s failed\n", #_offset);\ 1089 + value; \ 1090 + }) 1091 + 1092 + #define padctl_pmc_writel(_priv, _value, _offset) \ 1093 + WARN(regmap_write(_priv->regmap, _offset, _value), "write %s failed\n", #_offset) 1094 + 1095 + static int tegra210_pmc_utmi_enable_phy_sleepwalk(struct tegra_xusb_lane *lane, 1096 + enum usb_device_speed speed) 1097 + { 1098 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 1099 + struct tegra210_xusb_padctl *priv = to_tegra210_xusb_padctl(padctl); 1100 + unsigned int port = lane->index; 1101 + u32 value, tctrl, pctrl, rpd_ctrl; 1102 + 1103 + if (!priv->regmap) 1104 + return -EOPNOTSUPP; 1105 + 1106 + if (speed > USB_SPEED_HIGH) 1107 + return -EINVAL; 1108 + 1109 + value = padctl_readl(padctl, XUSB_PADCTL_USB2_BIAS_PAD_CTL1); 1110 + tctrl = TCTRL_VALUE(value); 1111 + pctrl = PCTRL_VALUE(value); 1112 + 1113 + value = padctl_readl(padctl, XUSB_PADCTL_USB2_OTG_PADX_CTL1(port)); 1114 + rpd_ctrl = RPD_CTRL_VALUE(value); 1115 + 1116 + /* ensure sleepwalk logic is disabled */ 1117 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1118 + value &= ~UTMIP_MASTER_ENABLE(port); 1119 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1120 + 1121 + /* ensure sleepwalk logic is in low-power mode */ 1122 + value = padctl_pmc_readl(priv, PMC_UTMIP_MASTER_CONFIG); 1123 + value |= UTMIP_PWR(port); 1124 + padctl_pmc_writel(priv, value, PMC_UTMIP_MASTER_CONFIG); 1125 + 1126 + /* set debounce time */ 1127 + value = padctl_pmc_readl(priv, PMC_USB_DEBOUNCE_DEL); 1128 + value &= ~UTMIP_LINE_DEB_CNT(~0); 1129 + value |= UTMIP_LINE_DEB_CNT(0x1); 1130 + padctl_pmc_writel(priv, value, PMC_USB_DEBOUNCE_DEL); 1131 + 1132 + /* ensure fake events of sleepwalk logic are disabled */ 1133 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_FAKE(port)); 1134 + value &= ~(UTMIP_FAKE_USBOP_VAL(port) | UTMIP_FAKE_USBON_VAL(port) | 1135 + UTMIP_FAKE_USBOP_EN(port) |
UTMIP_FAKE_USBON_EN(port)); 1136 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_FAKE(port)); 1137 + 1138 + /* ensure wake events of sleepwalk logic are not latched */ 1139 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_LINE_WAKEUP); 1140 + value &= ~UTMIP_LINE_WAKEUP_EN(port); 1141 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_LINE_WAKEUP); 1142 + 1143 + /* disable wake event triggers of sleepwalk logic */ 1144 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1145 + value &= ~UTMIP_WAKE_VAL(port, ~0); 1146 + value |= UTMIP_WAKE_VAL_NONE(port); 1147 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1148 + 1149 + /* power down the line state detectors of the pad */ 1150 + value = padctl_pmc_readl(priv, PMC_USB_AO); 1151 + value |= (USBOP_VAL_PD(port) | USBON_VAL_PD(port)); 1152 + padctl_pmc_writel(priv, value, PMC_USB_AO); 1153 + 1154 + /* save state per speed */ 1155 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_SAVED_STATE(port)); 1156 + value &= ~SPEED(port, ~0); 1157 + 1158 + switch (speed) { 1159 + case USB_SPEED_HIGH: 1160 + value |= UTMI_HS(port); 1161 + break; 1162 + 1163 + case USB_SPEED_FULL: 1164 + value |= UTMI_FS(port); 1165 + break; 1166 + 1167 + case USB_SPEED_LOW: 1168 + value |= UTMI_LS(port); 1169 + break; 1170 + 1171 + default: 1172 + value |= UTMI_RST(port); 1173 + break; 1174 + } 1175 + 1176 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_SAVED_STATE(port)); 1177 + 1178 + /* enable the trigger of the sleepwalk logic */ 1179 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_SLEEPWALK_CFG(port)); 1180 + value |= UTMIP_LINEVAL_WALK_EN(port); 1181 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_SLEEPWALK_CFG(port)); 1182 + 1183 + /* 1184 + * Reset the walk pointer and clear the alarm of the sleepwalk logic, 1185 + * as well as capture the configuration of the USB2.0 pad. 
1186 + */ 1187 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_TRIGGERS); 1188 + value |= UTMIP_CLR_WALK_PTR(port) | UTMIP_CLR_WAKE_ALARM(port) | UTMIP_CAP_CFG(port); 1189 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_TRIGGERS); 1190 + 1191 + /* program electrical parameters read from XUSB PADCTL */ 1192 + value = padctl_pmc_readl(priv, PMC_UTMIP_TERM_PAD_CFG); 1193 + value &= ~(TCTRL_VAL(~0) | PCTRL_VAL(~0)); 1194 + value |= (TCTRL_VAL(tctrl) | PCTRL_VAL(pctrl)); 1195 + padctl_pmc_writel(priv, value, PMC_UTMIP_TERM_PAD_CFG); 1196 + 1197 + value = padctl_pmc_readl(priv, PMC_UTMIP_PAD_CFGX(port)); 1198 + value &= ~RPD_CTRL_PX(~0); 1199 + value |= RPD_CTRL_PX(rpd_ctrl); 1200 + padctl_pmc_writel(priv, value, PMC_UTMIP_PAD_CFGX(port)); 1201 + 1202 + /* 1203 + * Set up the pull-ups and pull-downs of the signals during the four 1204 + * stages of sleepwalk. If a device is connected, program sleepwalk 1205 + * logic to maintain a J and keep driving K upon seeing remote wake. 1206 + */ 1207 + value = padctl_pmc_readl(priv, PMC_UTMIP_SLEEPWALK_PX(port)); 1208 + value = UTMIP_USBOP_RPD_A | UTMIP_USBOP_RPD_B | UTMIP_USBOP_RPD_C | UTMIP_USBOP_RPD_D; 1209 + value |= UTMIP_USBON_RPD_A | UTMIP_USBON_RPD_B | UTMIP_USBON_RPD_C | UTMIP_USBON_RPD_D; 1210 + 1211 + switch (speed) { 1212 + case USB_SPEED_HIGH: 1213 + case USB_SPEED_FULL: 1214 + /* J state: D+/D- = high/low, K state: D+/D- = low/high */ 1215 + value |= UTMIP_HIGHZ_A; 1216 + value |= UTMIP_AP_A; 1217 + value |= UTMIP_AN_B | UTMIP_AN_C | UTMIP_AN_D; 1218 + break; 1219 + 1220 + case USB_SPEED_LOW: 1221 + /* J state: D+/D- = low/high, K state: D+/D- = high/low */ 1222 + value |= UTMIP_HIGHZ_A; 1223 + value |= UTMIP_AN_A; 1224 + value |= UTMIP_AP_B | UTMIP_AP_C | UTMIP_AP_D; 1225 + break; 1226 + 1227 + default: 1228 + value |= UTMIP_HIGHZ_A | UTMIP_HIGHZ_B | UTMIP_HIGHZ_C | UTMIP_HIGHZ_D; 1229 + break; 1230 + } 1231 + 1232 + padctl_pmc_writel(priv, value, PMC_UTMIP_SLEEPWALK_PX(port)); 1233 + 1234 + /* power up the line 
state detectors of the pad */ 1235 + value = padctl_pmc_readl(priv, PMC_USB_AO); 1236 + value &= ~(USBOP_VAL_PD(port) | USBON_VAL_PD(port)); 1237 + padctl_pmc_writel(priv, value, PMC_USB_AO); 1238 + 1239 + usleep_range(50, 100); 1240 + 1241 + /* switch the electric control of the USB2.0 pad to PMC */ 1242 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1243 + value |= UTMIP_FSLS_USE_PMC(port) | UTMIP_PCTRL_USE_PMC(port) | UTMIP_TCTRL_USE_PMC(port); 1244 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1245 + 1246 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_SLEEP_CFG1); 1247 + value |= UTMIP_RPD_CTRL_USE_PMC_PX(port) | UTMIP_RPU_SWITC_LOW_USE_PMC_PX(port); 1248 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_SLEEP_CFG1); 1249 + 1250 + /* set the wake signaling trigger events */ 1251 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1252 + value &= ~UTMIP_WAKE_VAL(port, ~0); 1253 + value |= UTMIP_WAKE_VAL_ANY(port); 1254 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1255 + 1256 + /* enable the wake detection */ 1257 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1258 + value |= UTMIP_MASTER_ENABLE(port); 1259 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1260 + 1261 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_LINE_WAKEUP); 1262 + value |= UTMIP_LINE_WAKEUP_EN(port); 1263 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_LINE_WAKEUP); 1264 + 1265 + return 0; 1266 + } 1267 + 1268 + static int tegra210_pmc_utmi_disable_phy_sleepwalk(struct tegra_xusb_lane *lane) 1269 + { 1270 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 1271 + struct tegra210_xusb_padctl *priv = to_tegra210_xusb_padctl(padctl); 1272 + unsigned int port = lane->index; 1273 + u32 value; 1274 + 1275 + if (!priv->regmap) 1276 + return -EOPNOTSUPP; 1277 + 1278 + /* disable the wake detection */ 1279 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1280 + 
value &= ~UTMIP_MASTER_ENABLE(port); 1281 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1282 + 1283 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_LINE_WAKEUP); 1284 + value &= ~UTMIP_LINE_WAKEUP_EN(port); 1285 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_LINE_WAKEUP); 1286 + 1287 + /* switch the electric control of the USB2.0 pad to XUSB or USB2 */ 1288 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1289 + value &= ~(UTMIP_FSLS_USE_PMC(port) | UTMIP_PCTRL_USE_PMC(port) | 1290 + UTMIP_TCTRL_USE_PMC(port)); 1291 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1292 + 1293 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_SLEEP_CFG1); 1294 + value &= ~(UTMIP_RPD_CTRL_USE_PMC_PX(port) | UTMIP_RPU_SWITC_LOW_USE_PMC_PX(port)); 1295 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_SLEEP_CFG1); 1296 + 1297 + /* disable wake event triggers of sleepwalk logic */ 1298 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1299 + value &= ~UTMIP_WAKE_VAL(port, ~0); 1300 + value |= UTMIP_WAKE_VAL_NONE(port); 1301 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_SLEEP_CFG(port)); 1302 + 1303 + /* power down the line state detectors of the port */ 1304 + value = padctl_pmc_readl(priv, PMC_USB_AO); 1305 + value |= (USBOP_VAL_PD(port) | USBON_VAL_PD(port)); 1306 + padctl_pmc_writel(priv, value, PMC_USB_AO); 1307 + 1308 + /* clear alarm of the sleepwalk logic */ 1309 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_TRIGGERS); 1310 + value |= UTMIP_CLR_WAKE_ALARM(port); 1311 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_TRIGGERS); 1312 + 1313 + return 0; 1314 + } 1315 + 1316 + static int tegra210_pmc_hsic_enable_phy_sleepwalk(struct tegra_xusb_lane *lane, 1317 + enum usb_device_speed speed) 1318 + { 1319 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 1320 + struct tegra210_xusb_padctl *priv = to_tegra210_xusb_padctl(padctl); 1321 + u32 value; 1322 + 1323 + if (!priv->regmap) 1324 + return 
-EOPNOTSUPP; 1325 + 1326 + /* ensure sleepwalk logic is disabled */ 1327 + value = padctl_pmc_readl(priv, PMC_UHSIC_SLEEP_CFG); 1328 + value &= ~UHSIC_MASTER_ENABLE; 1329 + padctl_pmc_writel(priv, value, PMC_UHSIC_SLEEP_CFG); 1330 + 1331 + /* ensure sleepwalk logic is in low-power mode */ 1332 + value = padctl_pmc_readl(priv, PMC_UTMIP_MASTER_CONFIG); 1333 + value |= UHSIC_PWR; 1334 + padctl_pmc_writel(priv, value, PMC_UTMIP_MASTER_CONFIG); 1335 + 1336 + /* set debounce time */ 1337 + value = padctl_pmc_readl(priv, PMC_USB_DEBOUNCE_DEL); 1338 + value &= ~UHSIC_LINE_DEB_CNT(~0); 1339 + value |= UHSIC_LINE_DEB_CNT(0x1); 1340 + padctl_pmc_writel(priv, value, PMC_USB_DEBOUNCE_DEL); 1341 + 1342 + /* ensure fake events of sleepwalk logic are disabled */ 1343 + value = padctl_pmc_readl(priv, PMC_UHSIC_FAKE); 1344 + value &= ~(UHSIC_FAKE_STROBE_VAL | UHSIC_FAKE_DATA_VAL | 1345 + UHSIC_FAKE_STROBE_EN | UHSIC_FAKE_DATA_EN); 1346 + padctl_pmc_writel(priv, value, PMC_UHSIC_FAKE); 1347 + 1348 + /* ensure wake events of sleepwalk logic are not latched */ 1349 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_LINE_WAKEUP); 1350 + value &= ~UHSIC_LINE_WAKEUP_EN; 1351 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_LINE_WAKEUP); 1352 + 1353 + /* disable wake event triggers of sleepwalk logic */ 1354 + value = padctl_pmc_readl(priv, PMC_UHSIC_SLEEP_CFG); 1355 + value &= ~UHSIC_WAKE_VAL(~0); 1356 + value |= UHSIC_WAKE_VAL_NONE; 1357 + padctl_pmc_writel(priv, value, PMC_UHSIC_SLEEP_CFG); 1358 + 1359 + /* power down the line state detectors of the port */ 1360 + value = padctl_pmc_readl(priv, PMC_USB_AO); 1361 + value |= STROBE_VAL_PD | DATA0_VAL_PD | DATA1_VAL_PD; 1362 + padctl_pmc_writel(priv, value, PMC_USB_AO); 1363 + 1364 + /* save state, HSIC always comes up as HS */ 1365 + value = padctl_pmc_readl(priv, PMC_UHSIC_SAVED_STATE); 1366 + value &= ~UHSIC_MODE(~0); 1367 + value |= UHSIC_HS; 1368 + padctl_pmc_writel(priv, value, PMC_UHSIC_SAVED_STATE); 1369 + 1370 + /* enable the
trigger of the sleepwalk logic */ 1371 + value = padctl_pmc_readl(priv, PMC_UHSIC_SLEEPWALK_CFG); 1372 + value |= UHSIC_WAKE_WALK_EN | UHSIC_LINEVAL_WALK_EN; 1373 + padctl_pmc_writel(priv, value, PMC_UHSIC_SLEEPWALK_CFG); 1374 + 1375 + /* 1376 + * Reset the walk pointer and clear the alarm of the sleepwalk logic, 1377 + * as well as capture the configuration of the USB2.0 port. 1378 + */ 1379 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_TRIGGERS); 1380 + value |= UHSIC_CLR_WALK_PTR | UHSIC_CLR_WAKE_ALARM; 1381 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_TRIGGERS); 1382 + 1383 + /* 1384 + * Set up the pull-ups and pull-downs of the signals during the four 1385 + * stages of sleepwalk. Maintain a HSIC IDLE and keep driving HSIC 1386 + * RESUME upon remote wake. 1387 + */ 1388 + value = padctl_pmc_readl(priv, PMC_UHSIC_SLEEPWALK_P0); 1389 + value = UHSIC_DATA0_RPD_A | UHSIC_DATA0_RPU_B | UHSIC_DATA0_RPU_C | UHSIC_DATA0_RPU_D | 1390 + UHSIC_STROBE_RPU_A | UHSIC_STROBE_RPD_B | UHSIC_STROBE_RPD_C | UHSIC_STROBE_RPD_D; 1391 + padctl_pmc_writel(priv, value, PMC_UHSIC_SLEEPWALK_P0); 1392 + 1393 + /* power up the line state detectors of the port */ 1394 + value = padctl_pmc_readl(priv, PMC_USB_AO); 1395 + value &= ~(STROBE_VAL_PD | DATA0_VAL_PD | DATA1_VAL_PD); 1396 + padctl_pmc_writel(priv, value, PMC_USB_AO); 1397 + 1398 + usleep_range(50, 100); 1399 + 1400 + /* set the wake signaling trigger events */ 1401 + value = padctl_pmc_readl(priv, PMC_UHSIC_SLEEP_CFG); 1402 + value &= ~UHSIC_WAKE_VAL(~0); 1403 + value |= UHSIC_WAKE_VAL_SD10; 1404 + padctl_pmc_writel(priv, value, PMC_UHSIC_SLEEP_CFG); 1405 + 1406 + /* enable the wake detection */ 1407 + value = padctl_pmc_readl(priv, PMC_UHSIC_SLEEP_CFG); 1408 + value |= UHSIC_MASTER_ENABLE; 1409 + padctl_pmc_writel(priv, value, PMC_UHSIC_SLEEP_CFG); 1410 + 1411 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_LINE_WAKEUP); 1412 + value |= UHSIC_LINE_WAKEUP_EN; 1413 + padctl_pmc_writel(priv, value, 
PMC_UTMIP_UHSIC_LINE_WAKEUP); 1414 + 1415 + return 0; 1416 + } 1417 + 1418 + static int tegra210_pmc_hsic_disable_phy_sleepwalk(struct tegra_xusb_lane *lane) 1419 + { 1420 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 1421 + struct tegra210_xusb_padctl *priv = to_tegra210_xusb_padctl(padctl); 1422 + u32 value; 1423 + 1424 + if (!priv->regmap) 1425 + return -EOPNOTSUPP; 1426 + 1427 + /* disable the wake detection */ 1428 + value = padctl_pmc_readl(priv, PMC_UHSIC_SLEEP_CFG); 1429 + value &= ~UHSIC_MASTER_ENABLE; 1430 + padctl_pmc_writel(priv, value, PMC_UHSIC_SLEEP_CFG); 1431 + 1432 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_LINE_WAKEUP); 1433 + value &= ~UHSIC_LINE_WAKEUP_EN; 1434 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_LINE_WAKEUP); 1435 + 1436 + /* disable wake event triggers of sleepwalk logic */ 1437 + value = padctl_pmc_readl(priv, PMC_UHSIC_SLEEP_CFG); 1438 + value &= ~UHSIC_WAKE_VAL(~0); 1439 + value |= UHSIC_WAKE_VAL_NONE; 1440 + padctl_pmc_writel(priv, value, PMC_UHSIC_SLEEP_CFG); 1441 + 1442 + /* power down the line state detectors of the port */ 1443 + value = padctl_pmc_readl(priv, PMC_USB_AO); 1444 + value |= STROBE_VAL_PD | DATA0_VAL_PD | DATA1_VAL_PD; 1445 + padctl_pmc_writel(priv, value, PMC_USB_AO); 1446 + 1447 + /* clear alarm of the sleepwalk logic */ 1448 + value = padctl_pmc_readl(priv, PMC_UTMIP_UHSIC_TRIGGERS); 1449 + value |= UHSIC_CLR_WAKE_ALARM; 1450 + padctl_pmc_writel(priv, value, PMC_UTMIP_UHSIC_TRIGGERS); 1054 1451 1055 1452 return 0; 1056 1453 } ··· 1788 911 static const struct tegra_xusb_lane_ops tegra210_usb2_lane_ops = { 1789 912 .probe = tegra210_usb2_lane_probe, 1790 913 .remove = tegra210_usb2_lane_remove, 914 + .enable_phy_sleepwalk = tegra210_pmc_utmi_enable_phy_sleepwalk, 915 + .disable_phy_sleepwalk = tegra210_pmc_utmi_disable_phy_sleepwalk, 916 + .enable_phy_wake = tegra210_utmi_enable_phy_wake, 917 + .disable_phy_wake = tegra210_utmi_disable_phy_wake, 918 + .remote_wake_detected = 
tegra210_utmi_phy_remote_wake_detected, 1791 919 }; 1792 920 1793 921 static int tegra210_usb2_phy_init(struct phy *phy) 1794 922 { 1795 923 struct tegra_xusb_lane *lane = phy_get_drvdata(phy); 1796 924 struct tegra_xusb_padctl *padctl = lane->pad->padctl; 925 + unsigned int index = lane->index; 926 + struct tegra_xusb_usb2_port *port; 927 + int err; 1797 928 u32 value; 929 + 930 + port = tegra_xusb_find_usb2_port(padctl, index); 931 + if (!port) { 932 + dev_err(&phy->dev, "no port found for USB2 lane %u\n", index); 933 + return -ENODEV; 934 + } 935 + 936 + if (port->supply && port->mode == USB_DR_MODE_HOST) { 937 + err = regulator_enable(port->supply); 938 + if (err) 939 + return err; 940 + } 941 + 942 + mutex_lock(&padctl->lock); 1798 943 1799 944 value = padctl_readl(padctl, XUSB_PADCTL_USB2_PAD_MUX); 1800 945 value &= ~(XUSB_PADCTL_USB2_PAD_MUX_USB2_BIAS_PAD_MASK << ··· 1825 926 XUSB_PADCTL_USB2_PAD_MUX_USB2_BIAS_PAD_SHIFT; 1826 927 padctl_writel(padctl, value, XUSB_PADCTL_USB2_PAD_MUX); 1827 928 1828 - return tegra210_xusb_padctl_enable(padctl); 929 + mutex_unlock(&padctl->lock); 930 + 931 + return 0; 1829 932 } 1830 933 1831 934 static int tegra210_usb2_phy_exit(struct phy *phy) 1832 935 { 1833 936 struct tegra_xusb_lane *lane = phy_get_drvdata(phy); 937 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 938 + struct tegra_xusb_usb2_port *port; 939 + int err; 1834 940 1835 - return tegra210_xusb_padctl_disable(lane->pad->padctl); 941 + port = tegra_xusb_find_usb2_port(padctl, lane->index); 942 + if (!port) { 943 + dev_err(&phy->dev, "no port found for USB2 lane %u\n", lane->index); 944 + return -ENODEV; 945 + } 946 + 947 + if (port->supply && port->mode == USB_DR_MODE_HOST) { 948 + err = regulator_disable(port->supply); 949 + if (err) 950 + return err; 951 + } 952 + 953 + return 0; 1836 954 } 1837 955 1838 956 static int tegra210_xusb_padctl_vbus_override(struct tegra_xusb_padctl *padctl, ··· 1969 1053 1970 1054 priv = to_tegra210_xusb_padctl(padctl); 
1971 1055 1056 + mutex_lock(&padctl->lock); 1057 + 1972 1058 if (port->usb3_port_fake != -1) { 1973 1059 value = padctl_readl(padctl, XUSB_PADCTL_SS_PORT_MAP); 1974 1060 value &= ~XUSB_PADCTL_SS_PORT_MAP_PORTX_MAP_MASK( ··· 2064 1146 padctl_writel(padctl, value, 2065 1147 XUSB_PADCTL_USB2_BATTERY_CHRG_OTGPADX_CTL1(index)); 2066 1148 2067 - if (port->supply && port->mode == USB_DR_MODE_HOST) { 2068 - err = regulator_enable(port->supply); 2069 - if (err) 2070 - return err; 2071 - } 2072 - 2073 - mutex_lock(&padctl->lock); 2074 - 2075 1149 if (pad->enable > 0) { 2076 1150 pad->enable++; 2077 1151 mutex_unlock(&padctl->lock); ··· 2072 1162 2073 1163 err = clk_prepare_enable(pad->clk); 2074 1164 if (err) 2075 - goto disable_regulator; 1165 + goto out; 2076 1166 2077 1167 value = padctl_readl(padctl, XUSB_PADCTL_USB2_BIAS_PAD_CTL1); 2078 1168 value &= ~((XUSB_PADCTL_USB2_BIAS_PAD_CTL1_TRK_START_TIMER_MASK << ··· 2104 1194 2105 1195 return 0; 2106 1196 2107 - disable_regulator: 2108 - regulator_disable(port->supply); 1197 + out: 2109 1198 mutex_unlock(&padctl->lock); 2110 1199 return err; 2111 1200 } ··· 2163 1254 padctl_writel(padctl, value, XUSB_PADCTL_USB2_BIAS_PAD_CTL0); 2164 1255 2165 1256 out: 2166 - regulator_disable(port->supply); 2167 1257 mutex_unlock(&padctl->lock); 2168 1258 return 0; 2169 1259 } ··· 2284 1376 static const struct tegra_xusb_lane_ops tegra210_hsic_lane_ops = { 2285 1377 .probe = tegra210_hsic_lane_probe, 2286 1378 .remove = tegra210_hsic_lane_remove, 1379 + .enable_phy_sleepwalk = tegra210_pmc_hsic_enable_phy_sleepwalk, 1380 + .disable_phy_sleepwalk = tegra210_pmc_hsic_disable_phy_sleepwalk, 1381 + .enable_phy_wake = tegra210_hsic_enable_phy_wake, 1382 + .disable_phy_wake = tegra210_hsic_disable_phy_wake, 1383 + .remote_wake_detected = tegra210_hsic_phy_remote_wake_detected, 2287 1384 }; 2288 1385 2289 1386 static int tegra210_hsic_phy_init(struct phy *phy) ··· 2304 1391 XUSB_PADCTL_USB2_PAD_MUX_HSIC_PAD_TRK_SHIFT; 2305 1392 
padctl_writel(padctl, value, XUSB_PADCTL_USB2_PAD_MUX); 2306 1393 2307 - return tegra210_xusb_padctl_enable(padctl); 1394 + return 0; 2308 1395 } 2309 1396 2310 1397 static int tegra210_hsic_phy_exit(struct phy *phy) 2311 1398 { 2312 - struct tegra_xusb_lane *lane = phy_get_drvdata(phy); 2313 - 2314 - return tegra210_xusb_padctl_disable(lane->pad->padctl); 1399 + return 0; 2315 1400 } 2316 1401 2317 1402 static int tegra210_hsic_phy_power_on(struct phy *phy) ··· 2493 1582 .ops = &tegra210_hsic_ops, 2494 1583 }; 2495 1584 1585 + static void tegra210_uphy_lane_iddq_enable(struct tegra_xusb_lane *lane) 1586 + { 1587 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 1588 + u32 value; 1589 + 1590 + value = padctl_readl(padctl, lane->soc->regs.misc_ctl2); 1591 + value |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_IDDQ_OVRD; 1592 + value |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_IDDQ_OVRD; 1593 + value |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_PWR_OVRD; 1594 + value |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_PWR_OVRD; 1595 + value |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_IDDQ; 1596 + value &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_SLEEP_MASK; 1597 + value |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_SLEEP_VAL; 1598 + value |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_IDDQ; 1599 + value &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_SLEEP_MASK; 1600 + value |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_SLEEP_VAL; 1601 + padctl_writel(padctl, value, lane->soc->regs.misc_ctl2); 1602 + } 1603 + 1604 + static void tegra210_uphy_lane_iddq_disable(struct tegra_xusb_lane *lane) 1605 + { 1606 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 1607 + u32 value; 1608 + 1609 + value = padctl_readl(padctl, lane->soc->regs.misc_ctl2); 1610 + value &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_IDDQ_OVRD; 1611 + value &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_IDDQ_OVRD; 1612 + value &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_PWR_OVRD; 1613 + value &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_PWR_OVRD; 1614 + value |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_IDDQ; 1615 + 
value &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_SLEEP_MASK; 1616 + value |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_TX_SLEEP_VAL; 1617 + value |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_IDDQ; 1618 + value &= ~XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_SLEEP_MASK; 1619 + value |= XUSB_PADCTL_UPHY_MISC_PAD_CTL2_RX_SLEEP_VAL; 1620 + padctl_writel(padctl, value, lane->soc->regs.misc_ctl2); 1621 + } 1622 + 1623 + #define TEGRA210_UPHY_LANE(_name, _offset, _shift, _mask, _type, _misc) \ 1624 + { \ 1625 + .name = _name, \ 1626 + .offset = _offset, \ 1627 + .shift = _shift, \ 1628 + .mask = _mask, \ 1629 + .num_funcs = ARRAY_SIZE(tegra210_##_type##_functions), \ 1630 + .funcs = tegra210_##_type##_functions, \ 1631 + .regs.misc_ctl2 = _misc, \ 1632 + } 1633 + 2496 1634 static const char *tegra210_pcie_functions[] = { 2497 1635 "pcie-x1", 2498 1636 "usb3-ss", ··· 2550 1590 }; 2551 1591 2552 1592 static const struct tegra_xusb_lane_soc tegra210_pcie_lanes[] = { 2553 - TEGRA210_LANE("pcie-0", 0x028, 12, 0x3, pcie), 2554 - TEGRA210_LANE("pcie-1", 0x028, 14, 0x3, pcie), 2555 - TEGRA210_LANE("pcie-2", 0x028, 16, 0x3, pcie), 2556 - TEGRA210_LANE("pcie-3", 0x028, 18, 0x3, pcie), 2557 - TEGRA210_LANE("pcie-4", 0x028, 20, 0x3, pcie), 2558 - TEGRA210_LANE("pcie-5", 0x028, 22, 0x3, pcie), 2559 - TEGRA210_LANE("pcie-6", 0x028, 24, 0x3, pcie), 1593 + TEGRA210_UPHY_LANE("pcie-0", 0x028, 12, 0x3, pcie, XUSB_PADCTL_UPHY_MISC_PAD_PX_CTL2(0)), 1594 + TEGRA210_UPHY_LANE("pcie-1", 0x028, 14, 0x3, pcie, XUSB_PADCTL_UPHY_MISC_PAD_PX_CTL2(1)), 1595 + TEGRA210_UPHY_LANE("pcie-2", 0x028, 16, 0x3, pcie, XUSB_PADCTL_UPHY_MISC_PAD_PX_CTL2(2)), 1596 + TEGRA210_UPHY_LANE("pcie-3", 0x028, 18, 0x3, pcie, XUSB_PADCTL_UPHY_MISC_PAD_PX_CTL2(3)), 1597 + TEGRA210_UPHY_LANE("pcie-4", 0x028, 20, 0x3, pcie, XUSB_PADCTL_UPHY_MISC_PAD_PX_CTL2(4)), 1598 + TEGRA210_UPHY_LANE("pcie-5", 0x028, 22, 0x3, pcie, XUSB_PADCTL_UPHY_MISC_PAD_PX_CTL2(5)), 1599 + TEGRA210_UPHY_LANE("pcie-6", 0x028, 24, 0x3, pcie, XUSB_PADCTL_UPHY_MISC_PAD_PX_CTL2(6)), 2560 
1600 }; 2561 1601 1602 + static struct tegra_xusb_usb3_port * 1603 + tegra210_lane_to_usb3_port(struct tegra_xusb_lane *lane) 1604 + { 1605 + int port; 1606 + 1607 + if (!lane || !lane->pad || !lane->pad->padctl) 1608 + return NULL; 1609 + 1610 + port = tegra210_usb3_lane_map(lane); 1611 + if (port < 0) 1612 + return NULL; 1613 + 1614 + return tegra_xusb_find_usb3_port(lane->pad->padctl, port); 1615 + } 1616 + 1617 + static int tegra210_usb3_phy_power_on(struct phy *phy) 1618 + { 1619 + struct device *dev = &phy->dev; 1620 + struct tegra_xusb_lane *lane = phy_get_drvdata(phy); 1621 + struct tegra_xusb_padctl *padctl = lane->pad->padctl; 1622 + struct tegra_xusb_usb3_port *usb3 = tegra210_lane_to_usb3_port(lane); 1623 + unsigned int index; 1624 + u32 value; 1625 + 1626 + if (!usb3) { 1627 + dev_err(dev, "no USB3 port found for lane %u\n", lane->index); 1628 + return -ENODEV; 1629 + } 1630 + 1631 + index = usb3->base.index; 1632 + 1633 + value = padctl_readl(padctl, XUSB_PADCTL_SS_PORT_MAP); 1634 + 1635 + if (!usb3->internal) 1636 + value &= ~XUSB_PADCTL_SS_PORT_MAP_PORTX_INTERNAL(index); 1637 + else 1638 + value |= XUSB_PADCTL_SS_PORT_MAP_PORTX_INTERNAL(index); 1639 + 1640 + value &= ~XUSB_PADCTL_SS_PORT_MAP_PORTX_MAP_MASK(index); 1641 + value |= XUSB_PADCTL_SS_PORT_MAP_PORTX_MAP(index, usb3->port); 1642 + padctl_writel(padctl, value, XUSB_PADCTL_SS_PORT_MAP); 1643 + 1644 + value = padctl_readl(padctl, XUSB_PADCTL_UPHY_USB3_PADX_ECTL1(index)); 1645 + value &= ~(XUSB_PADCTL_UPHY_USB3_PAD_ECTL1_TX_TERM_CTRL_MASK << 1646 + XUSB_PADCTL_UPHY_USB3_PAD_ECTL1_TX_TERM_CTRL_SHIFT); 1647 + value |= XUSB_PADCTL_UPHY_USB3_PAD_ECTL1_TX_TERM_CTRL_VAL << 1648 + XUSB_PADCTL_UPHY_USB3_PAD_ECTL1_TX_TERM_CTRL_SHIFT; 1649 + padctl_writel(padctl, value, XUSB_PADCTL_UPHY_USB3_PADX_ECTL1(index)); 1650 + 1651 + value = padctl_readl(padctl, XUSB_PADCTL_UPHY_USB3_PADX_ECTL2(index)); 1652 + value &= ~(XUSB_PADCTL_UPHY_USB3_PAD_ECTL2_RX_CTLE_MASK << 1653 + 
+		   XUSB_PADCTL_UPHY_USB3_PAD_ECTL2_RX_CTLE_SHIFT);
+	value |= XUSB_PADCTL_UPHY_USB3_PAD_ECTL2_RX_CTLE_VAL <<
+		 XUSB_PADCTL_UPHY_USB3_PAD_ECTL2_RX_CTLE_SHIFT;
+	padctl_writel(padctl, value, XUSB_PADCTL_UPHY_USB3_PADX_ECTL2(index));
+
+	padctl_writel(padctl, XUSB_PADCTL_UPHY_USB3_PAD_ECTL3_RX_DFE_VAL,
+		      XUSB_PADCTL_UPHY_USB3_PADX_ECTL3(index));
+
+	value = padctl_readl(padctl, XUSB_PADCTL_UPHY_USB3_PADX_ECTL4(index));
+	value &= ~(XUSB_PADCTL_UPHY_USB3_PAD_ECTL4_RX_CDR_CTRL_MASK <<
+		   XUSB_PADCTL_UPHY_USB3_PAD_ECTL4_RX_CDR_CTRL_SHIFT);
+	value |= XUSB_PADCTL_UPHY_USB3_PAD_ECTL4_RX_CDR_CTRL_VAL <<
+		 XUSB_PADCTL_UPHY_USB3_PAD_ECTL4_RX_CDR_CTRL_SHIFT;
+	padctl_writel(padctl, value, XUSB_PADCTL_UPHY_USB3_PADX_ECTL4(index));
+
+	padctl_writel(padctl, XUSB_PADCTL_UPHY_USB3_PAD_ECTL6_RX_EQ_CTRL_H_VAL,
+		      XUSB_PADCTL_UPHY_USB3_PADX_ECTL6(index));
+
+	value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1);
+	value &= ~XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_VCORE_DOWN(index);
+	padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1);
+
+	usleep_range(100, 200);
+
+	value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1);
+	value &= ~XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_CLAMP_EN_EARLY(index);
+	padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1);
+
+	usleep_range(100, 200);
+
+	value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1);
+	value &= ~XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_CLAMP_EN(index);
+	padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1);
+
+	return 0;
+}
+
+static int tegra210_usb3_phy_power_off(struct phy *phy)
+{
+	struct device *dev = &phy->dev;
+	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
+	struct tegra_xusb_padctl *padctl = lane->pad->padctl;
+	struct tegra_xusb_usb3_port *usb3 = tegra210_lane_to_usb3_port(lane);
+	unsigned int index;
+	u32 value;
+
+	if (!usb3) {
+		dev_err(dev, "no USB3 port found for lane %u\n", lane->index);
+		return -ENODEV;
+	}
+
+	index = usb3->base.index;
+
+	value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1);
+	value |= XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_CLAMP_EN_EARLY(index);
+	padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1);
+
+	usleep_range(100, 200);
+
+	value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1);
+	value |= XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_CLAMP_EN(index);
+	padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1);
+
+	usleep_range(250, 350);
+
+	value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1);
+	value |= XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_VCORE_DOWN(index);
+	padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1);
+
+	return 0;
+}
+
 static struct tegra_xusb_lane *
 tegra210_pcie_lane_probe(struct tegra_xusb_pad *pad, struct device_node *np,
			 unsigned int index)
···
 static const struct tegra_xusb_lane_ops tegra210_pcie_lane_ops = {
 	.probe = tegra210_pcie_lane_probe,
 	.remove = tegra210_pcie_lane_remove,
+	.iddq_enable = tegra210_uphy_lane_iddq_enable,
+	.iddq_disable = tegra210_uphy_lane_iddq_disable,
+	.enable_phy_sleepwalk = tegra210_usb3_enable_phy_sleepwalk,
+	.disable_phy_sleepwalk = tegra210_usb3_disable_phy_sleepwalk,
+	.enable_phy_wake = tegra210_usb3_enable_phy_wake,
+	.disable_phy_wake = tegra210_usb3_disable_phy_wake,
+	.remote_wake_detected = tegra210_usb3_phy_remote_wake_detected,
 };
 
 static int tegra210_pcie_phy_init(struct phy *phy)
 {
 	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
+	struct tegra_xusb_padctl *padctl = lane->pad->padctl;
 
-	return tegra210_xusb_padctl_enable(lane->pad->padctl);
-}
+	mutex_lock(&padctl->lock);
 
-static int tegra210_pcie_phy_exit(struct phy *phy)
-{
-	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
+	tegra210_uphy_init(padctl);
 
-	return tegra210_xusb_padctl_disable(lane->pad->padctl);
+	mutex_unlock(&padctl->lock);
+
+	return 0;
 }
 
 static int tegra210_pcie_phy_power_on(struct phy *phy)
 {
 	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
 	struct tegra_xusb_padctl *padctl = lane->pad->padctl;
-	u32 value;
-	int err;
+	int err = 0;
 
 	mutex_lock(&padctl->lock);
 
-	err = tegra210_pex_uphy_enable(padctl);
-	if (err < 0)
-		goto unlock;
+	if (tegra_xusb_lane_check(lane, "usb3-ss"))
+		err = tegra210_usb3_phy_power_on(phy);
 
-	value = padctl_readl(padctl, XUSB_PADCTL_USB3_PAD_MUX);
-	value |= XUSB_PADCTL_USB3_PAD_MUX_PCIE_IDDQ_DISABLE(lane->index);
-	padctl_writel(padctl, value, XUSB_PADCTL_USB3_PAD_MUX);
-
-unlock:
 	mutex_unlock(&padctl->lock);
 	return err;
 }
···
 {
 	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
 	struct tegra_xusb_padctl *padctl = lane->pad->padctl;
-	u32 value;
+	int err = 0;
 
-	value = padctl_readl(padctl, XUSB_PADCTL_USB3_PAD_MUX);
-	value &= ~XUSB_PADCTL_USB3_PAD_MUX_PCIE_IDDQ_DISABLE(lane->index);
-	padctl_writel(padctl, value, XUSB_PADCTL_USB3_PAD_MUX);
+	mutex_lock(&padctl->lock);
 
-	tegra210_pex_uphy_disable(padctl);
+	if (tegra_xusb_lane_check(lane, "usb3-ss"))
+		err = tegra210_usb3_phy_power_off(phy);
 
-	return 0;
+	mutex_unlock(&padctl->lock);
+	return err;
 }
 
 static const struct phy_ops tegra210_pcie_phy_ops = {
 	.init = tegra210_pcie_phy_init,
-	.exit = tegra210_pcie_phy_exit,
 	.power_on = tegra210_pcie_phy_power_on,
 	.power_off = tegra210_pcie_phy_power_off,
 	.owner = THIS_MODULE,
···
 };
 
 static const struct tegra_xusb_lane_soc tegra210_sata_lanes[] = {
-	TEGRA210_LANE("sata-0", 0x028, 30, 0x3, pcie),
+	TEGRA210_UPHY_LANE("sata-0", 0x028, 30, 0x3, pcie, XUSB_PADCTL_UPHY_MISC_PAD_S0_CTL2),
 };
 
 static struct tegra_xusb_lane *
···
 static const struct tegra_xusb_lane_ops tegra210_sata_lane_ops = {
 	.probe = tegra210_sata_lane_probe,
 	.remove = tegra210_sata_lane_remove,
+	.iddq_enable = tegra210_uphy_lane_iddq_enable,
+	.iddq_disable = tegra210_uphy_lane_iddq_disable,
+	.enable_phy_sleepwalk = tegra210_usb3_enable_phy_sleepwalk,
+	.disable_phy_sleepwalk = tegra210_usb3_disable_phy_sleepwalk,
+	.enable_phy_wake = tegra210_usb3_enable_phy_wake,
+	.disable_phy_wake = tegra210_usb3_disable_phy_wake,
+	.remote_wake_detected = tegra210_usb3_phy_remote_wake_detected,
 };
 
 static int tegra210_sata_phy_init(struct phy *phy)
 {
 	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
+	struct tegra_xusb_padctl *padctl = lane->pad->padctl;
 
-	return tegra210_xusb_padctl_enable(lane->pad->padctl);
-}
+	mutex_lock(&padctl->lock);
 
-static int tegra210_sata_phy_exit(struct phy *phy)
-{
-	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
+	tegra210_uphy_init(padctl);
 
-	return tegra210_xusb_padctl_disable(lane->pad->padctl);
+	mutex_unlock(&padctl->lock);
+	return 0;
 }
 
 static int tegra210_sata_phy_power_on(struct phy *phy)
 {
 	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
 	struct tegra_xusb_padctl *padctl = lane->pad->padctl;
-	u32 value;
-	int err;
+	int err = 0;
 
 	mutex_lock(&padctl->lock);
 
-	err = tegra210_sata_uphy_enable(padctl, false);
-	if (err < 0)
-		goto unlock;
+	if (tegra_xusb_lane_check(lane, "usb3-ss"))
+		err = tegra210_usb3_phy_power_on(phy);
 
-	value = padctl_readl(padctl, XUSB_PADCTL_USB3_PAD_MUX);
-	value |= XUSB_PADCTL_USB3_PAD_MUX_SATA_IDDQ_DISABLE(lane->index);
-	padctl_writel(padctl, value, XUSB_PADCTL_USB3_PAD_MUX);
-
-unlock:
 	mutex_unlock(&padctl->lock);
 	return err;
 }
···
 {
 	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
 	struct tegra_xusb_padctl *padctl = lane->pad->padctl;
-	u32 value;
+	int err = 0;
 
-	value = padctl_readl(padctl, XUSB_PADCTL_USB3_PAD_MUX);
-	value &= ~XUSB_PADCTL_USB3_PAD_MUX_SATA_IDDQ_DISABLE(lane->index);
-	padctl_writel(padctl, value, XUSB_PADCTL_USB3_PAD_MUX);
+	mutex_lock(&padctl->lock);
 
-	tegra210_sata_uphy_disable(lane->pad->padctl);
+	if (tegra_xusb_lane_check(lane, "usb3-ss"))
+		err = tegra210_usb3_phy_power_off(phy);
 
-	return 0;
+	mutex_unlock(&padctl->lock);
+	return err;
 }
 
 static const struct phy_ops tegra210_sata_phy_ops = {
 	.init = tegra210_sata_phy_init,
-	.exit = tegra210_sata_phy_exit,
 	.power_on = tegra210_sata_phy_power_on,
 	.power_off = tegra210_sata_phy_power_off,
 	.owner = THIS_MODULE,
···
 
 static int tegra210_usb3_port_enable(struct tegra_xusb_port *port)
 {
-	struct tegra_xusb_usb3_port *usb3 = to_usb3_port(port);
-	struct tegra_xusb_padctl *padctl = port->padctl;
-	struct tegra_xusb_lane *lane = usb3->base.lane;
-	unsigned int index = port->index;
-	u32 value;
-	int err;
-
-	value = padctl_readl(padctl, XUSB_PADCTL_SS_PORT_MAP);
-
-	if (!usb3->internal)
-		value &= ~XUSB_PADCTL_SS_PORT_MAP_PORTX_INTERNAL(index);
-	else
-		value |= XUSB_PADCTL_SS_PORT_MAP_PORTX_INTERNAL(index);
-
-	value &= ~XUSB_PADCTL_SS_PORT_MAP_PORTX_MAP_MASK(index);
-	value |= XUSB_PADCTL_SS_PORT_MAP_PORTX_MAP(index, usb3->port);
-	padctl_writel(padctl, value, XUSB_PADCTL_SS_PORT_MAP);
-
-	/*
-	 * TODO: move this code into the PCIe/SATA PHY ->power_on() callbacks
-	 * and conditionalize based on mux function? This seems to work, but
-	 * might not be the exact proper sequence.
-	 */
-	err = regulator_enable(usb3->supply);
-	if (err < 0)
-		return err;
-
-	value = padctl_readl(padctl, XUSB_PADCTL_UPHY_USB3_PADX_ECTL1(index));
-	value &= ~(XUSB_PADCTL_UPHY_USB3_PAD_ECTL1_TX_TERM_CTRL_MASK <<
-		   XUSB_PADCTL_UPHY_USB3_PAD_ECTL1_TX_TERM_CTRL_SHIFT);
-	value |= XUSB_PADCTL_UPHY_USB3_PAD_ECTL1_TX_TERM_CTRL_VAL <<
-		 XUSB_PADCTL_UPHY_USB3_PAD_ECTL1_TX_TERM_CTRL_SHIFT;
-	padctl_writel(padctl, value, XUSB_PADCTL_UPHY_USB3_PADX_ECTL1(index));
-
-	value = padctl_readl(padctl, XUSB_PADCTL_UPHY_USB3_PADX_ECTL2(index));
-	value &= ~(XUSB_PADCTL_UPHY_USB3_PAD_ECTL2_RX_CTLE_MASK <<
-		   XUSB_PADCTL_UPHY_USB3_PAD_ECTL2_RX_CTLE_SHIFT);
-	value |= XUSB_PADCTL_UPHY_USB3_PAD_ECTL2_RX_CTLE_VAL <<
-		 XUSB_PADCTL_UPHY_USB3_PAD_ECTL2_RX_CTLE_SHIFT;
-	padctl_writel(padctl, value, XUSB_PADCTL_UPHY_USB3_PADX_ECTL2(index));
-
-	padctl_writel(padctl, XUSB_PADCTL_UPHY_USB3_PAD_ECTL3_RX_DFE_VAL,
-		      XUSB_PADCTL_UPHY_USB3_PADX_ECTL3(index));
-
-	value = padctl_readl(padctl, XUSB_PADCTL_UPHY_USB3_PADX_ECTL4(index));
-	value &= ~(XUSB_PADCTL_UPHY_USB3_PAD_ECTL4_RX_CDR_CTRL_MASK <<
-		   XUSB_PADCTL_UPHY_USB3_PAD_ECTL4_RX_CDR_CTRL_SHIFT);
-	value |= XUSB_PADCTL_UPHY_USB3_PAD_ECTL4_RX_CDR_CTRL_VAL <<
-		 XUSB_PADCTL_UPHY_USB3_PAD_ECTL4_RX_CDR_CTRL_SHIFT;
-	padctl_writel(padctl, value, XUSB_PADCTL_UPHY_USB3_PADX_ECTL4(index));
-
-	padctl_writel(padctl, XUSB_PADCTL_UPHY_USB3_PAD_ECTL6_RX_EQ_CTRL_H_VAL,
-		      XUSB_PADCTL_UPHY_USB3_PADX_ECTL6(index));
-
-	if (lane->pad == padctl->sata)
-		err = tegra210_sata_uphy_enable(padctl, true);
-	else
-		err = tegra210_pex_uphy_enable(padctl);
-
-	if (err) {
-		dev_err(&port->dev, "%s: failed to enable UPHY: %d\n",
-			__func__, err);
-		return err;
-	}
-
-	value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1);
-	value &= ~XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_VCORE_DOWN(index);
-	padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1);
-
-	usleep_range(100, 200);
-
-	value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1);
-	value &= ~XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_CLAMP_EN_EARLY(index);
-	padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1);
-
-	usleep_range(100, 200);
-
-	value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1);
-	value &= ~XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_CLAMP_EN(index);
-	padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1);
-
 	return 0;
 }
 
 static void tegra210_usb3_port_disable(struct tegra_xusb_port *port)
 {
-	struct tegra_xusb_usb3_port *usb3 = to_usb3_port(port);
-	struct tegra_xusb_padctl *padctl = port->padctl;
-	struct tegra_xusb_lane *lane = port->lane;
-	unsigned int index = port->index;
-	u32 value;
-
-	value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1);
-	value |= XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_CLAMP_EN_EARLY(index);
-	padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1);
-
-	usleep_range(100, 200);
-
-	value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1);
-	value |= XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_CLAMP_EN(index);
-	padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1);
-
-	usleep_range(250, 350);
-
-	value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM1);
-	value |= XUSB_PADCTL_ELPG_PROGRAM1_SSPX_ELPG_VCORE_DOWN(index);
-	padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM1);
-
-	if (lane->pad == padctl->sata)
-		tegra210_sata_uphy_disable(padctl);
-	else
-		tegra210_pex_uphy_disable(padctl);
-
-	regulator_disable(usb3->supply);
-
-	value = padctl_readl(padctl, XUSB_PADCTL_SS_PORT_MAP);
-	value &= ~XUSB_PADCTL_SS_PORT_MAP_PORTX_MAP_MASK(index);
-	value |= XUSB_PADCTL_SS_PORT_MAP_PORTX_MAP(index, 0x7);
-	padctl_writel(padctl, value, XUSB_PADCTL_SS_PORT_MAP);
 }
-
-static const struct tegra_xusb_lane_map tegra210_usb3_map[] = {
-	{ 0, "pcie", 6 },
-	{ 1, "pcie", 5 },
-	{ 2, "pcie", 0 },
-	{ 2, "pcie", 3 },
-	{ 3, "pcie", 4 },
-	{ 3, "pcie", 4 },
-	{ 0, NULL, 0 }
-};
 
 static struct tegra_xusb_lane *
 tegra210_usb3_port_map(struct tegra_xusb_port *port)
···
 			    const struct tegra_xusb_padctl_soc *soc)
 {
 	struct tegra210_xusb_padctl *padctl;
+	struct platform_device *pdev;
+	struct device_node *np;
 	int err;
 
 	padctl = devm_kzalloc(dev, sizeof(*padctl), GFP_KERNEL);
···
 	if (err < 0)
 		return ERR_PTR(err);
 
+	np = of_parse_phandle(dev->of_node, "nvidia,pmc", 0);
+	if (!np) {
+		dev_warn(dev, "nvidia,pmc property is missing\n");
+		goto out;
+	}
+
+	pdev = of_find_device_by_node(np);
+	if (!pdev) {
+		dev_warn(dev, "PMC device is not available\n");
+		goto out;
+	}
+
+	if (!platform_get_drvdata(pdev))
+		return ERR_PTR(-EPROBE_DEFER);
+
+	padctl->regmap = dev_get_regmap(&pdev->dev, "usb_sleepwalk");
+	if (!padctl->regmap)
+		dev_info(dev, "failed to find PMC regmap\n");
+
+out:
 	return &padctl->base;
 }
···
 {
 }
 
+static void tegra210_xusb_padctl_save(struct tegra_xusb_padctl *padctl)
+{
+	struct tegra210_xusb_padctl *priv = to_tegra210_xusb_padctl(padctl);
+
+	priv->context.usb2_pad_mux =
+		padctl_readl(padctl, XUSB_PADCTL_USB2_PAD_MUX);
+	priv->context.usb2_port_cap =
+		padctl_readl(padctl, XUSB_PADCTL_USB2_PORT_CAP);
+	priv->context.ss_port_map =
+		padctl_readl(padctl, XUSB_PADCTL_SS_PORT_MAP);
+	priv->context.usb3_pad_mux =
+		padctl_readl(padctl, XUSB_PADCTL_USB3_PAD_MUX);
+}
+
+static void tegra210_xusb_padctl_restore(struct tegra_xusb_padctl *padctl)
+{
+	struct tegra210_xusb_padctl *priv = to_tegra210_xusb_padctl(padctl);
+	struct tegra_xusb_lane *lane;
+
+	padctl_writel(padctl, priv->context.usb2_pad_mux,
+		      XUSB_PADCTL_USB2_PAD_MUX);
+	padctl_writel(padctl, priv->context.usb2_port_cap,
+		      XUSB_PADCTL_USB2_PORT_CAP);
+	padctl_writel(padctl, priv->context.ss_port_map,
+		      XUSB_PADCTL_SS_PORT_MAP);
+
+	list_for_each_entry(lane, &padctl->lanes, list) {
+		if (lane->pad->ops->iddq_enable)
+			tegra210_uphy_lane_iddq_enable(lane);
+	}
+
+	padctl_writel(padctl, priv->context.usb3_pad_mux,
+		      XUSB_PADCTL_USB3_PAD_MUX);
+
+	list_for_each_entry(lane, &padctl->lanes, list) {
+		if (lane->pad->ops->iddq_disable)
+			tegra210_uphy_lane_iddq_disable(lane);
+	}
+}
+
+static int tegra210_xusb_padctl_suspend_noirq(struct tegra_xusb_padctl *padctl)
+{
+	mutex_lock(&padctl->lock);
+
+	tegra210_uphy_deinit(padctl);
+
+	tegra210_xusb_padctl_save(padctl);
+
+	mutex_unlock(&padctl->lock);
+	return 0;
+}
+
+static int tegra210_xusb_padctl_resume_noirq(struct tegra_xusb_padctl *padctl)
+{
+	mutex_lock(&padctl->lock);
+
+	tegra210_xusb_padctl_restore(padctl);
+
+	tegra210_uphy_init(padctl);
+
+	mutex_unlock(&padctl->lock);
+	return 0;
+}
+
 static const struct tegra_xusb_padctl_ops tegra210_xusb_padctl_ops = {
 	.probe = tegra210_xusb_padctl_probe,
 	.remove = tegra210_xusb_padctl_remove,
+	.suspend_noirq = tegra210_xusb_padctl_suspend_noirq,
+	.resume_noirq = tegra210_xusb_padctl_resume_noirq,
 	.usb3_set_lfps_detect = tegra210_usb3_set_lfps_detect,
 	.hsic_set_idle = tegra210_hsic_set_idle,
 	.vbus_override = tegra210_xusb_padctl_vbus_override,
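The new suspend_noirq/resume_noirq path above snapshots four mux/map registers before suspend and replays them before re-running tegra210_uphy_init(). The shape of that save/restore pattern, reduced to a standalone sketch — the array-backed padctl_readl()/padctl_writel() and the register indices here are illustrative stand-ins for the driver's MMIO accessors, not the real ones:

```c
#include <stdint.h>

/* Array-backed stand-in for the padctl MMIO window. */
enum { USB2_PAD_MUX, USB2_PORT_CAP, SS_PORT_MAP, USB3_PAD_MUX, NUM_REGS };

static uint32_t regs[NUM_REGS];

static uint32_t padctl_readl(unsigned int offset)
{
	return regs[offset];
}

static void padctl_writel(uint32_t value, unsigned int offset)
{
	regs[offset] = value;
}

/* Context kept across suspend, mirroring priv->context in the patch. */
struct padctl_context {
	uint32_t usb2_pad_mux;
	uint32_t usb2_port_cap;
	uint32_t ss_port_map;
	uint32_t usb3_pad_mux;
};

static void padctl_save(struct padctl_context *ctx)
{
	ctx->usb2_pad_mux = padctl_readl(USB2_PAD_MUX);
	ctx->usb2_port_cap = padctl_readl(USB2_PORT_CAP);
	ctx->ss_port_map = padctl_readl(SS_PORT_MAP);
	ctx->usb3_pad_mux = padctl_readl(USB3_PAD_MUX);
}

static void padctl_restore(const struct padctl_context *ctx)
{
	/* USB2 state first; the USB3 mux is written last, as in the driver. */
	padctl_writel(ctx->usb2_pad_mux, USB2_PAD_MUX);
	padctl_writel(ctx->usb2_port_cap, USB2_PORT_CAP);
	padctl_writel(ctx->ss_port_map, SS_PORT_MAP);
	padctl_writel(ctx->usb3_pad_mux, USB3_PAD_MUX);
}
```

In the real driver the restore additionally brackets the USB3 mux write with the per-lane iddq_enable/iddq_disable walk, which the sketch omits.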
+90 -2
drivers/phy/tegra/xusb.c
···
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2014-2020, NVIDIA CORPORATION. All rights reserved.
  */
 
 #include <linux/delay.h>
···
 	if (soc->num_funcs < 2)
 		return;
 
+	if (lane->pad->ops->iddq_enable)
+		lane->pad->ops->iddq_enable(lane);
+
 	/* choose function */
 	value = padctl_readl(padctl, soc->offset);
 	value &= ~(soc->mask << soc->shift);
 	value |= lane->function << soc->shift;
 	padctl_writel(padctl, value, soc->offset);
+
+	if (lane->pad->ops->iddq_disable)
+		lane->pad->ops->iddq_disable(lane);
 }
 
 static void tegra_xusb_pad_program(struct tegra_xusb_pad *pad)
···
 	return 0;
 }
 
-static bool tegra_xusb_lane_check(struct tegra_xusb_lane *lane,
+bool tegra_xusb_lane_check(struct tegra_xusb_lane *lane,
 				  const char *function)
 {
 	const char *func = lane->soc->funcs[lane->function];
···
 	return err;
 }
 
+static int tegra_xusb_padctl_suspend_noirq(struct device *dev)
+{
+	struct tegra_xusb_padctl *padctl = dev_get_drvdata(dev);
+
+	if (padctl->soc && padctl->soc->ops && padctl->soc->ops->suspend_noirq)
+		return padctl->soc->ops->suspend_noirq(padctl);
+
+	return 0;
+}
+
+static int tegra_xusb_padctl_resume_noirq(struct device *dev)
+{
+	struct tegra_xusb_padctl *padctl = dev_get_drvdata(dev);
+
+	if (padctl->soc && padctl->soc->ops && padctl->soc->ops->resume_noirq)
+		return padctl->soc->ops->resume_noirq(padctl);
+
+	return 0;
+}
+
+static const struct dev_pm_ops tegra_xusb_padctl_pm_ops = {
+	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(tegra_xusb_padctl_suspend_noirq,
+				      tegra_xusb_padctl_resume_noirq)
+};
+
 static struct platform_driver tegra_xusb_padctl_driver = {
 	.driver = {
 		.name = "tegra-xusb-padctl",
 		.of_match_table = tegra_xusb_padctl_of_match,
+		.pm = &tegra_xusb_padctl_pm_ops,
 	},
 	.probe = tegra_xusb_padctl_probe,
 	.remove = tegra_xusb_padctl_remove,
···
 	return -ENOSYS;
 }
 EXPORT_SYMBOL_GPL(tegra_xusb_padctl_hsic_set_idle);
+
+int tegra_xusb_padctl_enable_phy_sleepwalk(struct tegra_xusb_padctl *padctl, struct phy *phy,
+					   enum usb_device_speed speed)
+{
+	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
+
+	if (lane->pad->ops->enable_phy_sleepwalk)
+		return lane->pad->ops->enable_phy_sleepwalk(lane, speed);
+
+	return -EOPNOTSUPP;
+}
+EXPORT_SYMBOL_GPL(tegra_xusb_padctl_enable_phy_sleepwalk);
+
+int tegra_xusb_padctl_disable_phy_sleepwalk(struct tegra_xusb_padctl *padctl, struct phy *phy)
+{
+	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
+
+	if (lane->pad->ops->disable_phy_sleepwalk)
+		return lane->pad->ops->disable_phy_sleepwalk(lane);
+
+	return -EOPNOTSUPP;
+}
+EXPORT_SYMBOL_GPL(tegra_xusb_padctl_disable_phy_sleepwalk);
+
+int tegra_xusb_padctl_enable_phy_wake(struct tegra_xusb_padctl *padctl, struct phy *phy)
+{
+	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
+
+	if (lane->pad->ops->enable_phy_wake)
+		return lane->pad->ops->enable_phy_wake(lane);
+
+	return -EOPNOTSUPP;
+}
+EXPORT_SYMBOL_GPL(tegra_xusb_padctl_enable_phy_wake);
+
+int tegra_xusb_padctl_disable_phy_wake(struct tegra_xusb_padctl *padctl, struct phy *phy)
+{
+	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
+
+	if (lane->pad->ops->disable_phy_wake)
+		return lane->pad->ops->disable_phy_wake(lane);
+
+	return -EOPNOTSUPP;
+}
+EXPORT_SYMBOL_GPL(tegra_xusb_padctl_disable_phy_wake);
+
+bool tegra_xusb_padctl_remote_wake_detected(struct tegra_xusb_padctl *padctl, struct phy *phy)
+{
+	struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
+
+	if (lane->pad->ops->remote_wake_detected)
+		return lane->pad->ops->remote_wake_detected(lane);
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(tegra_xusb_padctl_remote_wake_detected);
 
 int tegra_xusb_padctl_usb3_set_lfps_detect(struct tegra_xusb_padctl *padctl,
 					   unsigned int port, bool enable)
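The exported tegra_xusb_padctl_enable_phy_wake() family above all follow one pattern: look up the lane's pad ops and fall back to -EOPNOTSUPP when the SoC driver leaves the hook NULL. A minimal standalone sketch of that optional-ops dispatch — the struct names and the stub implementation are illustrative, not the driver's types:

```c
#include <stddef.h>

#define EOPNOTSUPP 95	/* errno value the kernel wrappers return */

struct lane;

/* Per-pad operations; a SoC driver may leave any hook NULL. */
struct lane_ops {
	int (*enable_phy_wake)(struct lane *lane);
	int (*disable_phy_wake)(struct lane *lane);
};

struct lane {
	const struct lane_ops *ops;
};

/* Wrapper: dispatch when the hook exists, report "not supported" otherwise. */
static int lane_enable_phy_wake(struct lane *lane)
{
	if (lane->ops->enable_phy_wake)
		return lane->ops->enable_phy_wake(lane);

	return -EOPNOTSUPP;
}

static int demo_enable(struct lane *lane)
{
	(void)lane;
	return 0;	/* pretend the hardware programming succeeded */
}

static const struct lane_ops ops_with_wake = { .enable_phy_wake = demo_enable };
static const struct lane_ops ops_without_wake = { 0 };
```

The upside of this shape is that callers (here, the xHCI host driver) can probe for the capability simply by checking the return value instead of knowing which SoC generation they run on.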
+19 -3
drivers/phy/tegra/xusb.h
···
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2014-2015, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2014-2020, NVIDIA CORPORATION. All rights reserved.
  * Copyright (c) 2015, Google Inc.
  */
 
···
 #include <linux/mutex.h>
 #include <linux/workqueue.h>
 
+#include <linux/usb/ch9.h>
 #include <linux/usb/otg.h>
 #include <linux/usb/role.h>
 
···
 
 	const char * const *funcs;
 	unsigned int num_funcs;
+
+	struct {
+		unsigned int misc_ctl2;
+	} regs;
 };
 
 struct tegra_xusb_lane {
···
 		     struct device_node *np,
 		     unsigned int index);
 	void (*remove)(struct tegra_xusb_lane *lane);
+	void (*iddq_enable)(struct tegra_xusb_lane *lane);
+	void (*iddq_disable)(struct tegra_xusb_lane *lane);
+	int (*enable_phy_sleepwalk)(struct tegra_xusb_lane *lane, enum usb_device_speed speed);
+	int (*disable_phy_sleepwalk)(struct tegra_xusb_lane *lane);
+	int (*enable_phy_wake)(struct tegra_xusb_lane *lane);
+	int (*disable_phy_wake)(struct tegra_xusb_lane *lane);
+	bool (*remote_wake_detected)(struct tegra_xusb_lane *lane);
 };
+
+bool tegra_xusb_lane_check(struct tegra_xusb_lane *lane, const char *function);
 
 /*
  * pads
···
 	struct reset_control *rst;
 	struct clk *pll;
 
-	unsigned int enable;
+	bool enable;
 };
 
 static inline struct tegra_xusb_pcie_pad *
···
 	struct reset_control *rst;
 	struct clk *pll;
 
-	unsigned int enable;
+	bool enable;
 };
 
 static inline struct tegra_xusb_sata_pad *
···
 		     const struct tegra_xusb_padctl_soc *soc);
 	void (*remove)(struct tegra_xusb_padctl *padctl);
 
+	int (*suspend_noirq)(struct tegra_xusb_padctl *padctl);
+	int (*resume_noirq)(struct tegra_xusb_padctl *padctl);
 	int (*usb3_save_context)(struct tegra_xusb_padctl *padctl,
 				 unsigned int index);
 	int (*hsic_set_idle)(struct tegra_xusb_padctl *padctl,
+1 -1
drivers/thunderbolt/Makefile
···
 obj-${CONFIG_USB4} := thunderbolt.o
 thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o
 thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o
-thunderbolt-objs += nvm.o retimer.o quirks.o
+thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o
 
 thunderbolt-${CONFIG_ACPI} += acpi.o
 thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o
+206
drivers/thunderbolt/acpi.c
···
 		return osc_sb_native_usb4_control & OSC_USB_XDOMAIN;
 	return true;
 }
+
+/* UUID for retimer _DSM: e0053122-795b-4122-8a5e-57be1d26acb3 */
+static const guid_t retimer_dsm_guid =
+	GUID_INIT(0xe0053122, 0x795b, 0x4122,
+		  0x8a, 0x5e, 0x57, 0xbe, 0x1d, 0x26, 0xac, 0xb3);
+
+#define RETIMER_DSM_QUERY_ONLINE_STATE	1
+#define RETIMER_DSM_SET_ONLINE_STATE	2
+
+static int tb_acpi_retimer_set_power(struct tb_port *port, bool power)
+{
+	struct usb4_port *usb4 = port->usb4;
+	union acpi_object argv4[2];
+	struct acpi_device *adev;
+	union acpi_object *obj;
+	int ret;
+
+	if (!usb4->can_offline)
+		return 0;
+
+	adev = ACPI_COMPANION(&usb4->dev);
+	if (WARN_ON(!adev))
+		return 0;
+
+	/* Check if we are already powered on (and in correct mode) */
+	obj = acpi_evaluate_dsm_typed(adev->handle, &retimer_dsm_guid, 1,
+				      RETIMER_DSM_QUERY_ONLINE_STATE, NULL,
+				      ACPI_TYPE_INTEGER);
+	if (!obj) {
+		tb_port_warn(port, "ACPI: query online _DSM failed\n");
+		return -EIO;
+	}
+
+	ret = obj->integer.value;
+	ACPI_FREE(obj);
+
+	if (power == ret)
+		return 0;
+
+	tb_port_dbg(port, "ACPI: calling _DSM to power %s retimers\n",
+		    power ? "on" : "off");
+
+	argv4[0].type = ACPI_TYPE_PACKAGE;
+	argv4[0].package.count = 1;
+	argv4[0].package.elements = &argv4[1];
+	argv4[1].integer.type = ACPI_TYPE_INTEGER;
+	argv4[1].integer.value = power;
+
+	obj = acpi_evaluate_dsm_typed(adev->handle, &retimer_dsm_guid, 1,
+				      RETIMER_DSM_SET_ONLINE_STATE, argv4,
+				      ACPI_TYPE_INTEGER);
+	if (!obj) {
+		tb_port_warn(port,
+			     "ACPI: set online state _DSM evaluation failed\n");
+		return -EIO;
+	}
+
+	ret = obj->integer.value;
+	ACPI_FREE(obj);
+
+	if (ret >= 0) {
+		if (power)
+			return ret == 1 ? 0 : -EBUSY;
+		return 0;
+	}
+
+	tb_port_warn(port, "ACPI: set online state _DSM failed with error %d\n", ret);
+	return -EIO;
+}
+
+/**
+ * tb_acpi_power_on_retimers() - Call platform to power on retimers
+ * @port: USB4 port
+ *
+ * Calls platform to turn on power to all retimers behind this USB4
+ * port. After this function returns successfully the caller can
+ * continue with the normal retimer flows (as specified in the USB4
+ * spec). Note if this returns %-EBUSY it means the type-C port is in
+ * non-USB4/TBT mode (there is non-USB4/TBT device connected).
+ *
+ * This should only be called if the USB4/TBT link is not up.
+ *
+ * Returns %0 on success.
+ */
+int tb_acpi_power_on_retimers(struct tb_port *port)
+{
+	return tb_acpi_retimer_set_power(port, true);
+}
+
+/**
+ * tb_acpi_power_off_retimers() - Call platform to power off retimers
+ * @port: USB4 port
+ *
+ * This is the opposite of tb_acpi_power_on_retimers(). After returning
+ * successfully the normal operations with the @port can continue.
+ *
+ * Returns %0 on success.
+ */
+int tb_acpi_power_off_retimers(struct tb_port *port)
+{
+	return tb_acpi_retimer_set_power(port, false);
+}
+
+static bool tb_acpi_bus_match(struct device *dev)
+{
+	return tb_is_switch(dev) || tb_is_usb4_port_device(dev);
+}
+
+static struct acpi_device *tb_acpi_find_port(struct acpi_device *adev,
+					     const struct tb_port *port)
+{
+	struct acpi_device *port_adev;
+
+	if (!adev)
+		return NULL;
+
+	/*
+	 * Device routers exists under the downstream facing USB4 port
+	 * of the parent router. Their _ADR is always 0.
+	 */
+	list_for_each_entry(port_adev, &adev->children, node) {
+		if (acpi_device_adr(port_adev) == port->port)
+			return port_adev;
+	}
+
+	return NULL;
+}
+
+static struct acpi_device *tb_acpi_switch_find_companion(struct tb_switch *sw)
+{
+	struct acpi_device *adev = NULL;
+	struct tb_switch *parent_sw;
+
+	parent_sw = tb_switch_parent(sw);
+	if (parent_sw) {
+		struct tb_port *port = tb_port_at(tb_route(sw), parent_sw);
+		struct acpi_device *port_adev;
+
+		port_adev = tb_acpi_find_port(ACPI_COMPANION(&parent_sw->dev), port);
+		if (port_adev)
+			adev = acpi_find_child_device(port_adev, 0, false);
+	} else {
+		struct tb_nhi *nhi = sw->tb->nhi;
+		struct acpi_device *parent_adev;
+
+		parent_adev = ACPI_COMPANION(&nhi->pdev->dev);
+		if (parent_adev)
+			adev = acpi_find_child_device(parent_adev, 0, false);
+	}
+
+	return adev;
+}
+
+static struct acpi_device *tb_acpi_find_companion(struct device *dev)
+{
+	/*
+	 * The Thunderbolt/USB4 hierarchy looks like following:
+	 *
+	 * Device (NHI)
+	 *   Device (HR)		// Host router _ADR == 0
+	 *     Device (DFP0)		// Downstream port _ADR == lane 0 adapter
+	 *       Device (DR)		// Device router _ADR == 0
+	 *         Device (UFP)		// Upstream port _ADR == lane 0 adapter
+	 *     Device (DFP1)		// Downstream port _ADR == lane 0 adapter number
+	 *
+	 * At the moment we bind the host router to the corresponding
+	 * Linux device.
+	 */
+	if (tb_is_switch(dev))
+		return tb_acpi_switch_find_companion(tb_to_switch(dev));
+	else if (tb_is_usb4_port_device(dev))
+		return tb_acpi_find_port(ACPI_COMPANION(dev->parent),
+					 tb_to_usb4_port_device(dev)->port);
+	return NULL;
+}
+
+static void tb_acpi_setup(struct device *dev)
+{
+	struct acpi_device *adev = ACPI_COMPANION(dev);
+	struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
+
+	if (!adev || !usb4)
+		return;
+
+	if (acpi_check_dsm(adev->handle, &retimer_dsm_guid, 1,
+			   BIT(RETIMER_DSM_QUERY_ONLINE_STATE) |
+			   BIT(RETIMER_DSM_SET_ONLINE_STATE)))
+		usb4->can_offline = true;
+}
+
+static struct acpi_bus_type tb_acpi_bus = {
+	.name = "thunderbolt",
+	.match = tb_acpi_bus_match,
+	.find_companion = tb_acpi_find_companion,
+	.setup = tb_acpi_setup,
+};
+
+int tb_acpi_init(void)
+{
+	return register_acpi_bus_type(&tb_acpi_bus);
+}
+
+void tb_acpi_exit(void)
+{
+	unregister_acpi_bus_type(&tb_acpi_bus);
+}
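tb_acpi_retimer_set_power() above first evaluates the query _DSM and only evaluates the set _DSM when the platform reports a different state, making the call idempotent. That query-before-set flow in isolation, with the ACPI evaluations replaced by a fake platform backend purely for illustration:

```c
#include <stdbool.h>

/* Fake platform state standing in for the _DSM-controlled retimer power. */
static bool platform_on;
static int set_calls;

static int query_online(void)
{
	return platform_on ? 1 : 0;	/* models the query _DSM result */
}

static int set_online(bool power)
{
	set_calls++;			/* each call models one set _DSM evaluation */
	platform_on = power;
	return 0;
}

/* Mirror of the driver's flow: query first, evaluate set only on a change. */
static int retimer_set_power(bool power)
{
	int state = query_online();

	if (state < 0)
		return state;

	if ((bool)state == power)
		return 0;	/* already in the requested state, skip the set */

	return set_online(power);
}
```

Skipping the redundant set matters here because each _DSM evaluation is a firmware round-trip, and the driver may be asked for the same state repeatedly during retimer scans.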
+15 -79
drivers/thunderbolt/dma_port.c
··· 299 299 return status_to_errno(out); 300 300 } 301 301 302 - static int dma_port_flash_read_block(struct tb_dma_port *dma, u32 address, 303 - void *buf, u32 size) 302 + static int dma_port_flash_read_block(void *data, unsigned int dwaddress, 303 + void *buf, size_t dwords) 304 304 { 305 + struct tb_dma_port *dma = data; 305 306 struct tb_switch *sw = dma->sw; 306 - u32 in, dwaddress, dwords; 307 307 int ret; 308 - 309 - dwaddress = address / 4; 310 - dwords = size / 4; 308 + u32 in; 311 309 312 310 in = MAIL_IN_CMD_FLASH_READ << MAIL_IN_CMD_SHIFT; 313 311 if (dwords < MAIL_DATA_DWORDS) ··· 321 323 dma->base + MAIL_DATA, dwords, DMA_PORT_TIMEOUT); 322 324 } 323 325 324 - static int dma_port_flash_write_block(struct tb_dma_port *dma, u32 address, 325 - const void *buf, u32 size) 326 + static int dma_port_flash_write_block(void *data, unsigned int dwaddress, 327 + const void *buf, size_t dwords) 326 328 { 329 + struct tb_dma_port *dma = data; 327 330 struct tb_switch *sw = dma->sw; 328 - u32 in, dwaddress, dwords; 329 331 int ret; 330 - 331 - dwords = size / 4; 332 + u32 in; 332 333 333 334 /* Write the block to MAIL_DATA registers */ 334 335 ret = dma_port_write(sw->tb->ctl, buf, tb_route(sw), dma->port, ··· 338 341 in = MAIL_IN_CMD_FLASH_WRITE << MAIL_IN_CMD_SHIFT; 339 342 340 343 /* CSS header write is always done to the same magic address */ 341 - if (address >= DMA_PORT_CSS_ADDRESS) { 342 - dwaddress = DMA_PORT_CSS_ADDRESS; 344 + if (dwaddress >= DMA_PORT_CSS_ADDRESS) 343 345 in |= MAIL_IN_CSS; 344 - } else { 345 - dwaddress = address / 4; 346 - } 347 346 348 347 in |= ((dwords - 1) << MAIL_IN_DWORDS_SHIFT) & MAIL_IN_DWORDS_MASK; 349 348 in |= (dwaddress << MAIL_IN_ADDRESS_SHIFT) & MAIL_IN_ADDRESS_MASK; ··· 358 365 int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address, 359 366 void *buf, size_t size) 360 367 { 361 - unsigned int retries = DMA_PORT_RETRIES; 362 - 363 - do { 364 - unsigned int offset; 365 - size_t nbytes; 366 - int ret; 367 - 
368 - offset = address & 3; 369 - nbytes = min_t(size_t, size + offset, MAIL_DATA_DWORDS * 4); 370 - 371 - ret = dma_port_flash_read_block(dma, address, dma->buf, 372 - ALIGN(nbytes, 4)); 373 - if (ret) { 374 - if (ret == -ETIMEDOUT) { 375 - if (retries--) 376 - continue; 377 - ret = -EIO; 378 - } 379 - return ret; 380 - } 381 - 382 - nbytes -= offset; 383 - memcpy(buf, dma->buf + offset, nbytes); 384 - 385 - size -= nbytes; 386 - address += nbytes; 387 - buf += nbytes; 388 - } while (size > 0); 389 - 390 - return 0; 368 + return tb_nvm_read_data(address, buf, size, DMA_PORT_RETRIES, 369 + dma_port_flash_read_block, dma); 391 370 } 392 371 393 372 /** ··· 376 411 int dma_port_flash_write(struct tb_dma_port *dma, unsigned int address, 377 412 const void *buf, size_t size) 378 413 { 379 - unsigned int retries = DMA_PORT_RETRIES; 380 - unsigned int offset; 414 + if (address >= DMA_PORT_CSS_ADDRESS && size > DMA_PORT_CSS_MAX_SIZE) 415 + return -E2BIG; 381 416 382 - if (address >= DMA_PORT_CSS_ADDRESS) { 383 - offset = 0; 384 - if (size > DMA_PORT_CSS_MAX_SIZE) 385 - return -E2BIG; 386 - } else { 387 - offset = address & 3; 388 - address = address & ~3; 389 - } 390 - 391 - do { 392 - u32 nbytes = min_t(u32, size, MAIL_DATA_DWORDS * 4); 393 - int ret; 394 - 395 - memcpy(dma->buf + offset, buf, nbytes); 396 - 397 - ret = dma_port_flash_write_block(dma, address, buf, nbytes); 398 - if (ret) { 399 - if (ret == -ETIMEDOUT) { 400 - if (retries--) 401 - continue; 402 - ret = -EIO; 403 - } 404 - return ret; 405 - } 406 - 407 - size -= nbytes; 408 - address += nbytes; 409 - buf += nbytes; 410 - } while (size > 0); 411 - 412 - return 0; 417 + return tb_nvm_write_data(address, buf, size, DMA_PORT_RETRIES, 418 + dma_port_flash_write_block, dma); 413 419 } 414 420 415 421 /**
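The dma_port.c hunks above convert the byte-addressed flash helpers into dword-addressed block callbacks for the new generic NVM code. The address arithmetic involved can be sketched in plain C; the helper names below are illustrative, not kernel symbols:

```c
#include <stddef.h>

/* Illustrative helpers (not kernel code) mirroring the dword math the
 * new block callbacks rely on: a byte address/size becomes a dword
 * address, a byte offset within the first dword, and a dword count
 * rounded up to cover the whole span. */
static unsigned int dw_offset(unsigned int address)
{
	return address & 3;	/* first useful byte inside dword 0 */
}

static unsigned int dw_address(unsigned int address)
{
	return address / 4;
}

static size_t dw_count(unsigned int address, size_t size)
{
	/* equivalent to ALIGN(size + offset, 4) / 4 in the kernel source */
	return (size + dw_offset(address) + 3) / 4;
}
```

For example, an 8-byte read at byte address 6 starts at dword address 1, spans 3 dwords, and the caller then copies out starting at offset 2 of the returned buffer.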
+6 -3
drivers/thunderbolt/domain.c
··· 881 881 int ret; 882 882 883 883 tb_test_init(); 884 - 885 884 tb_debugfs_init(); 885 + tb_acpi_init(); 886 + 886 887 ret = tb_xdomain_init(); 887 888 if (ret) 888 - goto err_debugfs; 889 + goto err_acpi; 889 890 ret = bus_register(&tb_bus_type); 890 891 if (ret) 891 892 goto err_xdomain; ··· 895 894 896 895 err_xdomain: 897 896 tb_xdomain_exit(); 898 - err_debugfs: 897 + err_acpi: 898 + tb_acpi_exit(); 899 899 tb_debugfs_exit(); 900 900 tb_test_exit(); 901 901 ··· 909 907 ida_destroy(&tb_domain_ida); 910 908 tb_nvm_exit(); 911 909 tb_xdomain_exit(); 910 + tb_acpi_exit(); 912 911 tb_debugfs_exit(); 913 912 tb_test_exit(); 914 913 }
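The domain.c change slots tb_acpi_init() into the bring-up sequence and renames the error label so that teardown stays the mirror image of init. A toy userspace model of that register-forward/unwind-backward pattern (all names here are illustrative, not the kernel's):

```c
/* Toy model of the tb_domain_init() ordering: each subsystem brought
 * up on the way in is torn down in reverse on the error path. */
enum { MAX_STEPS = 8 };
static int order[MAX_STEPS], count;

static void record(int step) { order[count++] = step; }

static int debugfs_init(void) { record(1); return 0; }
static void debugfs_exit(void) { record(-1); }
static int acpi_init(void) { record(2); return 0; }
static void acpi_exit(void) { record(-2); }
static int xdomain_init(int fail) { if (fail) return -1; record(3); return 0; }

static int domain_init(int fail_xdomain)
{
	int ret;

	debugfs_init();
	acpi_init();

	ret = xdomain_init(fail_xdomain);
	if (ret)
		goto err_acpi;
	return 0;

err_acpi:
	acpi_exit();
	debugfs_exit();
	return ret;
}
```

When the last init step fails, the unwind runs -2 then -1, exactly reversing the 1, 2 bring-up order, which is what the relabeled `err_acpi` path preserves.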
+11 -8
drivers/thunderbolt/eeprom.c
··· 214 214 return ~__crc32c_le(~0, data, len); 215 215 } 216 216 217 - #define TB_DROM_DATA_START 13 217 + #define TB_DROM_DATA_START 13 218 + #define TB_DROM_HEADER_SIZE 22 219 + #define USB4_DROM_HEADER_SIZE 16 220 + 218 221 struct tb_drom_header { 219 222 /* BYTE 0 */ 220 223 u8 uid_crc8; /* checksum for uid */ ··· 227 224 u32 data_crc32; /* checksum for data_len bytes starting at byte 13 */ 228 225 /* BYTE 13 */ 229 226 u8 device_rom_revision; /* should be <= 1 */ 230 - u16 data_len:10; 231 - u8 __unknown1:6; 232 - /* BYTES 16-21 */ 227 + u16 data_len:12; 228 + u8 reserved:4; 229 + /* BYTES 16-21 - Only for TBT DROM, nonexistent in USB4 DROM */ 233 230 u16 vendor_id; 234 231 u16 model_id; 235 232 u8 model_rev; ··· 404 401 * 405 402 * Drom must have been copied to sw->drom. 406 403 */ 407 - static int tb_drom_parse_entries(struct tb_switch *sw) 404 + static int tb_drom_parse_entries(struct tb_switch *sw, size_t header_size) 408 405 { 409 406 struct tb_drom_header *header = (void *) sw->drom; 410 - u16 pos = sizeof(*header); 407 + u16 pos = header_size; 411 408 u16 drom_size = header->data_len + TB_DROM_DATA_START; 412 409 int res; 413 410 ··· 569 566 header->data_crc32, crc); 570 567 } 571 568 572 - return tb_drom_parse_entries(sw); 569 + return tb_drom_parse_entries(sw, TB_DROM_HEADER_SIZE); 573 570 } 574 571 575 572 static int usb4_drom_parse(struct tb_switch *sw) ··· 586 583 return -EINVAL; 587 584 } 588 585 589 - return tb_drom_parse_entries(sw); 586 + return tb_drom_parse_entries(sw, USB4_DROM_HEADER_SIZE); 590 587 } 591 588 592 589 /**
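tb_drom_parse_entries() now takes the header size as a parameter because a USB4 DROM has no bytes 16-21 (the TBT vendor/model fields), so its entries start at byte 16 rather than 22. A minimal sketch of walking length-prefixed entries from a caller-supplied header size (the entry layout here is simplified to just a length byte, which is an assumption for illustration):

```c
#include <stddef.h>

/* Walk simplified DROM entries starting at header_size; each entry's
 * first byte is its total length. Returns the entry count, or -1 on a
 * malformed entry. Illustrative only, not the kernel parser. */
static int count_entries(const unsigned char *drom, size_t drom_size,
			 size_t header_size)
{
	size_t pos = header_size;
	int n = 0;

	while (pos + 1 < drom_size) {
		size_t len = drom[pos];

		if (len == 0 || pos + len > drom_size)
			return -1;	/* malformed entry */
		pos += len;
		n++;
	}
	return n;
}
```

The same walker serves both formats: callers pass 22 for a TBT DROM and 16 for a USB4 DROM, matching the new TB_DROM_HEADER_SIZE/USB4_DROM_HEADER_SIZE constants.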
+13 -7
drivers/thunderbolt/icm.c
··· 1677 1677 1678 1678 static bool icm_tgl_is_supported(struct tb *tb) 1679 1679 { 1680 - u32 val; 1680 + unsigned long end = jiffies + msecs_to_jiffies(10); 1681 1681 1682 - /* 1683 - * If the firmware is not running use software CM. This platform 1684 - * should fully support both. 1685 - */ 1686 - val = ioread32(tb->nhi->iobase + REG_FW_STS); 1687 - return !!(val & REG_FW_STS_NVM_AUTH_DONE); 1682 + do { 1683 + u32 val; 1684 + 1685 + val = ioread32(tb->nhi->iobase + REG_FW_STS); 1686 + if (val & REG_FW_STS_NVM_AUTH_DONE) 1687 + return true; 1688 + usleep_range(100, 500); 1689 + } while (time_before(jiffies, end)); 1690 + 1691 + return false; 1688 1692 } 1689 1693 1690 1694 static void icm_handle_notification(struct work_struct *work) ··· 2509 2505 case PCI_DEVICE_ID_INTEL_TGL_NHI1: 2510 2506 case PCI_DEVICE_ID_INTEL_TGL_H_NHI0: 2511 2507 case PCI_DEVICE_ID_INTEL_TGL_H_NHI1: 2508 + case PCI_DEVICE_ID_INTEL_ADL_NHI0: 2509 + case PCI_DEVICE_ID_INTEL_ADL_NHI1: 2512 2510 icm->is_supported = icm_tgl_is_supported; 2513 2511 icm->driver_ready = icm_icl_driver_ready; 2514 2512 icm->set_uuid = icm_icl_set_uuid;
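icm_tgl_is_supported() previously sampled REG_FW_STS once, but firmware may still be setting NVM_AUTH_DONE at probe time, so it now polls for up to 10 ms. The shape of that bounded poll, modeled with a deterministic stub register so the loop terminates (the bit value and names are illustrative, not the real register layout):

```c
#include <stdbool.h>

#define FW_STS_NVM_AUTH_DONE 0x4	/* illustrative bit position */

static int reads_until_ready;	/* stub: how many reads return "not done" */

static unsigned int read_fw_sts(void)
{
	if (reads_until_ready > 0) {
		reads_until_ready--;
		return 0;
	}
	return FW_STS_NVM_AUTH_DONE;
}

/* Re-read the status until the bit appears or the poll budget runs
 * out; the kernel version bounds this with jiffies and sleeps
 * 100-500 us between reads instead of counting iterations. */
static bool poll_auth_done(int max_polls)
{
	while (max_polls-- > 0) {
		if (read_fw_sts() & FW_STS_NVM_AUTH_DONE)
			return true;
	}
	return false;
}
```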
+4 -2
drivers/thunderbolt/lc.c
··· 208 208 if (ret) 209 209 return ret; 210 210 211 - ctrl &= ~(TB_LC_SX_CTRL_WOC | TB_LC_SX_CTRL_WOD | TB_LC_SX_CTRL_WOP | 212 - TB_LC_SX_CTRL_WOU4); 211 + ctrl &= ~(TB_LC_SX_CTRL_WOC | TB_LC_SX_CTRL_WOD | TB_LC_SX_CTRL_WODPC | 212 + TB_LC_SX_CTRL_WODPD | TB_LC_SX_CTRL_WOP | TB_LC_SX_CTRL_WOU4); 213 213 214 214 if (flags & TB_WAKE_ON_CONNECT) 215 215 ctrl |= TB_LC_SX_CTRL_WOC | TB_LC_SX_CTRL_WOD; ··· 217 217 ctrl |= TB_LC_SX_CTRL_WOU4; 218 218 if (flags & TB_WAKE_ON_PCIE) 219 219 ctrl |= TB_LC_SX_CTRL_WOP; 220 + if (flags & TB_WAKE_ON_DP) 221 + ctrl |= TB_LC_SX_CTRL_WODPC | TB_LC_SX_CTRL_WODPD; 220 222 221 223 return tb_sw_write(sw, &ctrl, TB_CFG_SWITCH, offset + TB_LC_SX_CTRL, 1); 222 224 }
+4 -67
drivers/thunderbolt/nhi.c
··· 17 17 #include <linux/module.h> 18 18 #include <linux/delay.h> 19 19 #include <linux/property.h> 20 - #include <linux/platform_data/x86/apple.h> 21 20 22 21 #include "nhi.h" 23 22 #include "nhi_regs.h" ··· 1126 1127 return true; 1127 1128 } 1128 1129 1129 - /* 1130 - * During suspend the Thunderbolt controller is reset and all PCIe 1131 - * tunnels are lost. The NHI driver will try to reestablish all tunnels 1132 - * during resume. This adds device links between the tunneled PCIe 1133 - * downstream ports and the NHI so that the device core will make sure 1134 - * NHI is resumed first before the rest. 1135 - */ 1136 - static void tb_apple_add_links(struct tb_nhi *nhi) 1137 - { 1138 - struct pci_dev *upstream, *pdev; 1139 - 1140 - if (!x86_apple_machine) 1141 - return; 1142 - 1143 - switch (nhi->pdev->device) { 1144 - case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE: 1145 - case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C: 1146 - case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI: 1147 - case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: 1148 - break; 1149 - default: 1150 - return; 1151 - } 1152 - 1153 - upstream = pci_upstream_bridge(nhi->pdev); 1154 - while (upstream) { 1155 - if (!pci_is_pcie(upstream)) 1156 - return; 1157 - if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM) 1158 - break; 1159 - upstream = pci_upstream_bridge(upstream); 1160 - } 1161 - 1162 - if (!upstream) 1163 - return; 1164 - 1165 - /* 1166 - * For each hotplug downstream port, create add device link 1167 - * back to NHI so that PCIe tunnels can be re-established after 1168 - * sleep. 
1169 - */ 1170 - for_each_pci_bridge(pdev, upstream->subordinate) { 1171 - const struct device_link *link; 1172 - 1173 - if (!pci_is_pcie(pdev)) 1174 - continue; 1175 - if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM || 1176 - !pdev->is_hotplug_bridge) 1177 - continue; 1178 - 1179 - link = device_link_add(&pdev->dev, &nhi->pdev->dev, 1180 - DL_FLAG_AUTOREMOVE_SUPPLIER | 1181 - DL_FLAG_PM_RUNTIME); 1182 - if (link) { 1183 - dev_dbg(&nhi->pdev->dev, "created link from %s\n", 1184 - dev_name(&pdev->dev)); 1185 - } else { 1186 - dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n", 1187 - dev_name(&pdev->dev)); 1188 - } 1189 - } 1190 - } 1191 - 1192 1130 static struct tb *nhi_select_cm(struct tb_nhi *nhi) 1193 1131 { 1194 1132 struct tb *tb; ··· 1213 1277 if (res) 1214 1278 return res; 1215 1279 } 1216 - 1217 - tb_apple_add_links(nhi); 1218 - tb_acpi_add_links(nhi); 1219 1280 1220 1281 tb = nhi_select_cm(nhi); 1221 1282 if (!tb) { ··· 1332 1399 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI0), 1333 1400 .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1334 1401 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI1), 1402 + .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1403 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI0), 1404 + .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1405 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI1), 1335 1406 .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1336 1407 1337 1408 /* Any USB4 compliant host */
+2
drivers/thunderbolt/nhi.h
··· 72 72 #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE 0x15ea 73 73 #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI 0x15eb 74 74 #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE 0x15ef 75 + #define PCI_DEVICE_ID_INTEL_ADL_NHI0 0x463e 76 + #define PCI_DEVICE_ID_INTEL_ADL_NHI1 0x466d 75 77 #define PCI_DEVICE_ID_INTEL_ICL_NHI1 0x8a0d 76 78 #define PCI_DEVICE_ID_INTEL_ICL_NHI0 0x8a17 77 79 #define PCI_DEVICE_ID_INTEL_TGL_NHI0 0x9a1b
+95
drivers/thunderbolt/nvm.c
··· 164 164 kfree(nvm); 165 165 } 166 166 167 + /** 168 + * tb_nvm_read_data() - Read data from NVM 169 + * @address: Start address on the flash 170 + * @buf: Buffer where the read data is copied 171 + * @size: Size of the buffer in bytes 172 + * @retries: Number of retries if block read fails 173 + * @read_block: Function that reads block from the flash 174 + * @read_block_data: Data passed to @read_block 175 + * 176 + * This is a generic function that reads data from NVM or NVM like 177 + * device. 178 + * 179 + * Returns %0 on success and negative errno otherwise. 180 + */ 181 + int tb_nvm_read_data(unsigned int address, void *buf, size_t size, 182 + unsigned int retries, read_block_fn read_block, 183 + void *read_block_data) 184 + { 185 + do { 186 + unsigned int dwaddress, dwords, offset; 187 + u8 data[NVM_DATA_DWORDS * 4]; 188 + size_t nbytes; 189 + int ret; 190 + 191 + offset = address & 3; 192 + nbytes = min_t(size_t, size + offset, NVM_DATA_DWORDS * 4); 193 + 194 + dwaddress = address / 4; 195 + dwords = ALIGN(nbytes, 4) / 4; 196 + 197 + ret = read_block(read_block_data, dwaddress, data, dwords); 198 + if (ret) { 199 + if (ret != -ENODEV && retries--) 200 + continue; 201 + return ret; 202 + } 203 + 204 + nbytes -= offset; 205 + memcpy(buf, data + offset, nbytes); 206 + 207 + size -= nbytes; 208 + address += nbytes; 209 + buf += nbytes; 210 + } while (size > 0); 211 + 212 + return 0; 213 + } 214 + 215 + /** 216 + * tb_nvm_write_data() - Write data to NVM 217 + * @address: Start address on the flash 218 + * @buf: Buffer where the data is copied from 219 + * @size: Size of the buffer in bytes 220 + * @retries: Number of retries if the block write fails 221 + * @write_block: Function that writes block to the flash 222 + * @write_block_data: Data passed to @write_block 223 + * 224 + * This is a generic function that writes data to NVM or NVM like device. 225 + * 226 + * Returns %0 on success and negative errno otherwise. 
227 + */ 228 + int tb_nvm_write_data(unsigned int address, const void *buf, size_t size, 229 + unsigned int retries, write_block_fn write_block, 230 + void *write_block_data) 231 + { 232 + do { 233 + unsigned int offset, dwaddress; 234 + u8 data[NVM_DATA_DWORDS * 4]; 235 + size_t nbytes; 236 + int ret; 237 + 238 + offset = address & 3; 239 + nbytes = min_t(u32, size + offset, NVM_DATA_DWORDS * 4); 240 + 241 + memcpy(data + offset, buf, nbytes); 242 + 243 + dwaddress = address / 4; 244 + ret = write_block(write_block_data, dwaddress, data, nbytes / 4); 245 + if (ret) { 246 + if (ret == -ETIMEDOUT) { 247 + if (retries--) 248 + continue; 249 + ret = -EIO; 250 + } 251 + return ret; 252 + } 253 + 254 + size -= nbytes; 255 + address += nbytes; 256 + buf += nbytes; 257 + } while (size > 0); 258 + 259 + return 0; 260 + } 261 + 167 262 void tb_nvm_exit(void) 168 263 { 169 264 ida_destroy(&nvm_ida);
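Both generic helpers in the nvm.c hunks retry the block callback when it fails; the read path gives up early only on -ENODEV, since that means the device itself is gone. A userspace model of that retry policy, with a stub block reader standing in for the flash access (the stub and its globals are illustrative):

```c
#include <errno.h>

static int fail_times, fail_errno;	/* stub failure injection */

static int read_block_stub(void)
{
	if (fail_times > 0) {
		fail_times--;
		return -fail_errno;
	}
	return 0;
}

/* Retry any error except -ENODEV until the retry budget is spent,
 * mirroring the error handling inside tb_nvm_read_data(). */
static int read_with_retries(unsigned int retries)
{
	do {
		int ret = read_block_stub();

		if (ret) {
			if (ret != -ENODEV && retries--)
				continue;
			return ret;
		}
		return 0;
	} while (1);
}
```

The write helper in the hunk above is stricter: it retries only -ETIMEDOUT and reports -EIO once the budget runs out.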
+2 -2
drivers/thunderbolt/path.c
··· 367 367 int i, res; 368 368 for (i = first_hop; i < path->path_length; i++) { 369 369 res = tb_port_add_nfc_credits(path->hops[i].in_port, 370 - -path->nfc_credits); 370 + -path->hops[i].nfc_credits); 371 371 if (res) 372 372 tb_port_warn(path->hops[i].in_port, 373 373 "nfc credits deallocation failed for hop %d\n", ··· 502 502 /* Add non flow controlled credits. */ 503 503 for (i = path->path_length - 1; i >= 0; i--) { 504 504 res = tb_port_add_nfc_credits(path->hops[i].in_port, 505 - path->nfc_credits); 505 + path->hops[i].nfc_credits); 506 506 if (res) { 507 507 __tb_path_deallocate_nfc(path, i); 508 508 goto err;
+27 -3
drivers/thunderbolt/quirks.c
··· 12 12 sw->quirks |= QUIRK_FORCE_POWER_LINK_CONTROLLER; 13 13 } 14 14 15 + static void quirk_dp_credit_allocation(struct tb_switch *sw) 16 + { 17 + if (sw->credit_allocation && sw->min_dp_main_credits == 56) { 18 + sw->min_dp_main_credits = 18; 19 + tb_sw_dbg(sw, "quirked DP main: %u\n", sw->min_dp_main_credits); 20 + } 21 + } 22 + 15 23 struct tb_quirk { 24 + u16 hw_vendor_id; 25 + u16 hw_device_id; 16 26 u16 vendor; 17 27 u16 device; 18 28 void (*hook)(struct tb_switch *sw); ··· 30 20 31 21 static const struct tb_quirk tb_quirks[] = { 32 22 /* Dell WD19TB supports self-authentication on unplug */ 33 - { 0x00d4, 0xb070, quirk_force_power_link }, 23 + { 0x0000, 0x0000, 0x00d4, 0xb070, quirk_force_power_link }, 24 + { 0x0000, 0x0000, 0x00d4, 0xb071, quirk_force_power_link }, 25 + /* 26 + * Intel Goshen Ridge NVM 27 and before report wrong number of 27 + * DP buffers. 28 + */ 29 + { 0x8087, 0x0b26, 0x0000, 0x0000, quirk_dp_credit_allocation }, 34 30 }; 35 31 36 32 /** ··· 52 36 for (i = 0; i < ARRAY_SIZE(tb_quirks); i++) { 53 37 const struct tb_quirk *q = &tb_quirks[i]; 54 38 55 - if (sw->device == q->device && sw->vendor == q->vendor) 56 - q->hook(sw); 39 + if (q->hw_vendor_id && q->hw_vendor_id != sw->config.vendor_id) 40 + continue; 41 + if (q->hw_device_id && q->hw_device_id != sw->config.device_id) 42 + continue; 43 + if (q->vendor && q->vendor != sw->vendor) 44 + continue; 45 + if (q->device && q->device != sw->device) 46 + continue; 47 + 48 + q->hook(sw); 57 49 } 58 50 }
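The quirk table gains hardware (config-space) vendor/device fields, and a zero in any field now acts as a wildcard, so one entry can key on hardware IDs, DROM IDs, or a mix. The matching rule, extracted into a standalone testable predicate (the ID values used in the tests are made up):

```c
#include <stdbool.h>
#include <stdint.h>

struct quirk {
	uint16_t hw_vendor_id;
	uint16_t hw_device_id;
	uint16_t vendor;
	uint16_t device;
};

/* A quirk applies when every nonzero field of the table entry matches
 * the candidate; zero fields match anything. */
static bool quirk_matches(const struct quirk *q, uint16_t hw_vendor,
			  uint16_t hw_device, uint16_t vendor, uint16_t device)
{
	if (q->hw_vendor_id && q->hw_vendor_id != hw_vendor)
		return false;
	if (q->hw_device_id && q->hw_device_id != hw_device)
		return false;
	if (q->vendor && q->vendor != vendor)
		return false;
	if (q->device && q->device != device)
		return false;
	return true;
}
```

This is why the Goshen Ridge entry can match on hardware IDs alone while the Dell WD19TB entries keep matching on DROM vendor/device alone.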
+80 -28
drivers/thunderbolt/retimer.c
··· 103 103 unsigned int image_size, hdr_size; 104 104 const u8 *buf = rt->nvm->buf; 105 105 u16 ds_size, device; 106 + int ret; 106 107 107 108 image_size = rt->nvm->buf_data_size; 108 109 if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE) ··· 141 140 buf += hdr_size; 142 141 image_size -= hdr_size; 143 142 144 - return usb4_port_retimer_nvm_write(rt->port, rt->index, 0, buf, 145 - image_size); 143 + ret = usb4_port_retimer_nvm_write(rt->port, rt->index, 0, buf, 144 + image_size); 145 + if (!ret) 146 + rt->nvm->flushed = true; 147 + 148 + return ret; 149 + } 150 + 151 + static int tb_retimer_nvm_authenticate(struct tb_retimer *rt, bool auth_only) 152 + { 153 + u32 status; 154 + int ret; 155 + 156 + if (auth_only) { 157 + ret = usb4_port_retimer_nvm_set_offset(rt->port, rt->index, 0); 158 + if (ret) 159 + return ret; 160 + } 161 + 162 + ret = usb4_port_retimer_nvm_authenticate(rt->port, rt->index); 163 + if (ret) 164 + return ret; 165 + 166 + usleep_range(100, 150); 167 + 168 + /* 169 + * Check the status now if we still can access the retimer. It 170 + * is expected that the below fails. 171 + */ 172 + ret = usb4_port_retimer_nvm_authenticate_status(rt->port, rt->index, 173 + &status); 174 + if (!ret) { 175 + rt->auth_status = status; 176 + return status ? 
-EINVAL : 0; 177 + } 178 + 179 + return 0; 146 180 } 147 181 148 182 static ssize_t device_show(struct device *dev, struct device_attribute *attr, ··· 212 176 struct device_attribute *attr, const char *buf, size_t count) 213 177 { 214 178 struct tb_retimer *rt = tb_to_retimer(dev); 215 - bool val; 216 - int ret; 179 + int val, ret; 217 180 218 181 pm_runtime_get_sync(&rt->dev); 219 182 ··· 226 191 goto exit_unlock; 227 192 } 228 193 229 - ret = kstrtobool(buf, &val); 194 + ret = kstrtoint(buf, 10, &val); 230 195 if (ret) 231 196 goto exit_unlock; 232 197 ··· 234 199 rt->auth_status = 0; 235 200 236 201 if (val) { 237 - if (!rt->nvm->buf) { 238 - ret = -EINVAL; 239 - goto exit_unlock; 202 + if (val == AUTHENTICATE_ONLY) { 203 + ret = tb_retimer_nvm_authenticate(rt, true); 204 + } else { 205 + if (!rt->nvm->flushed) { 206 + if (!rt->nvm->buf) { 207 + ret = -EINVAL; 208 + goto exit_unlock; 209 + } 210 + 211 + ret = tb_retimer_nvm_validate_and_write(rt); 212 + if (ret || val == WRITE_ONLY) 213 + goto exit_unlock; 214 + } 215 + if (val == WRITE_AND_AUTHENTICATE) 216 + ret = tb_retimer_nvm_authenticate(rt, false); 240 217 } 241 - 242 - ret = tb_retimer_nvm_validate_and_write(rt); 243 - if (ret) 244 - goto exit_unlock; 245 - 246 - ret = usb4_port_retimer_nvm_authenticate(rt->port, rt->index); 247 218 } 248 219 249 220 exit_unlock: ··· 324 283 325 284 static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status) 326 285 { 286 + struct usb4_port *usb4; 327 287 struct tb_retimer *rt; 328 288 u32 vendor, device; 329 289 int ret; 330 290 331 - if (!port->cap_usb4) 291 + usb4 = port->usb4; 292 + if (!usb4) 332 293 return -EINVAL; 333 294 334 295 ret = usb4_port_retimer_read(port, index, USB4_SB_VENDOR_ID, &vendor, ··· 374 331 rt->port = port; 375 332 rt->tb = port->sw->tb; 376 333 377 - rt->dev.parent = &port->sw->dev; 334 + rt->dev.parent = &usb4->dev; 378 335 rt->dev.bus = &tb_bus_type; 379 336 rt->dev.type = &tb_retimer_type; 380 337 dev_set_name(&rt->dev, 
"%s:%u.%u", dev_name(&port->sw->dev), ··· 432 389 struct tb_retimer_lookup lookup = { .port = port, .index = index }; 433 390 struct device *dev; 434 391 435 - dev = device_find_child(&port->sw->dev, &lookup, retimer_match); 392 + dev = device_find_child(&port->usb4->dev, &lookup, retimer_match); 436 393 if (dev) 437 394 return tb_to_retimer(dev); 438 395 ··· 442 399 /** 443 400 * tb_retimer_scan() - Scan for on-board retimers under port 444 401 * @port: USB4 port to scan 402 + * @add: If true also registers found retimers 445 403 * 446 - * Tries to enumerate on-board retimers connected to @port. Found 447 - * retimers are registered as children of @port. Does not scan for cable 448 - * retimers for now. 404 + * Brings the sideband into a state where retimers can be accessed. 405 + * Then tries to enumerate on-board retimers connected to @port. Found 406 + * retimers are registered as children of @port if @add is set. Does 407 + * not scan for cable retimers for now. 449 408 */ 450 - int tb_retimer_scan(struct tb_port *port) 409 + int tb_retimer_scan(struct tb_port *port, bool add) 451 410 { 452 411 u32 status[TB_MAX_RETIMER_INDEX + 1] = {}; 453 412 int ret, i, last_idx = 0; 454 413 455 - if (!port->cap_usb4) 456 - return 0; 457 414 458 415 /* 459 416 * Send broadcast RT to make sure retimer indices facing this ··· 461 419 ret = usb4_port_enumerate_retimers(port); 462 420 if (ret) 463 421 return ret; 422 + 423 + /* 424 + * Enable sideband channel for each retimer. We can do this 425 + * regardless of whether there is a device connected or not. 426 + */ 427 + for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++) 428 + usb4_port_retimer_set_inbound_sbtx(port, i); 464 429 465 430 /* 466 431 * Before doing anything else, read the authentication status. 
··· 500 451 rt = tb_port_find_retimer(port, i); 501 452 if (rt) { 502 453 put_device(&rt->dev); 503 - } else { 454 + } else if (add) { 504 455 ret = tb_retimer_add(port, i, status[i]); 505 456 if (ret && ret != -EOPNOTSUPP) 506 - return ret; 457 + break; 507 458 } 508 459 } 509 460 ··· 528 479 */ 529 480 void tb_retimer_remove_all(struct tb_port *port) 530 481 { 531 - if (port->cap_usb4) 532 - device_for_each_child_reverse(&port->sw->dev, port, 482 + struct usb4_port *usb4; 483 + 484 + usb4 = port->usb4; 485 + if (usb4) 486 + device_for_each_child_reverse(&usb4->dev, port, 533 487 remove_retimer); 534 488 }
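The new tb_retimer_nvm_authenticate() in the hunks above relies on an inverted expectation: once authentication is triggered the retimer normally drops off the sideband, so a status read that *succeeds* means authentication failed immediately and the returned status says why. A sketch of that check with a stubbed status read (the stub globals are illustrative):

```c
#include <errno.h>

static int sideband_alive;	/* stub: is the retimer still answering? */
static unsigned int stub_status;

static int read_auth_status(unsigned int *status)
{
	if (!sideband_alive)
		return -ENODEV;	/* expected: device went away */
	*status = stub_status;
	return 0;
}

/* If the status read succeeds, authentication failed right away:
 * record the status and return -EINVAL when it is nonzero. A failed
 * read means authentication is proceeding, which counts as success. */
static int check_authentication(unsigned int *saved_status)
{
	unsigned int status;

	if (!read_auth_status(&status)) {
		*saved_status = status;
		return status ? -EINVAL : 0;
	}
	return 0;
}
```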
+2
drivers/thunderbolt/sb_regs.h
··· 17 17 enum usb4_sb_opcode { 18 18 USB4_SB_OPCODE_ERR = 0x20525245, /* "ERR " */ 19 19 USB4_SB_OPCODE_ONS = 0x444d4321, /* "!CMD" */ 20 + USB4_SB_OPCODE_ROUTER_OFFLINE = 0x4e45534c, /* "LSEN" */ 20 21 USB4_SB_OPCODE_ENUMERATE_RETIMERS = 0x4d554e45, /* "ENUM" */ 22 + USB4_SB_OPCODE_SET_INBOUND_SBTX = 0x5055534c, /* "LSUP" */ 21 23 USB4_SB_OPCODE_QUERY_LAST_RETIMER = 0x5453414c, /* "LAST" */ 22 24 USB4_SB_OPCODE_GET_NVM_SECTOR_SIZE = 0x53534e47, /* "GNSS" */ 23 25 USB4_SB_OPCODE_NVM_SET_OFFSET = 0x53504f42, /* "BOPS" */
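As the trailing comments in sb_regs.h hint, each sideband opcode is a four-character mnemonic stored little-endian, so the new ROUTER_OFFLINE ("LSEN") and SET_INBOUND_SBTX ("LSUP") values follow directly from their strings. A quick encoder confirming that scheme (the function name is illustrative):

```c
#include <stdint.h>

/* Pack a four-character mnemonic into a 32-bit opcode, little-endian:
 * byte 0 of the string becomes the least significant byte. */
static uint32_t opcode_from_mnemonic(const char *m)
{
	return (uint32_t)(unsigned char)m[0] |
	       (uint32_t)(unsigned char)m[1] << 8 |
	       (uint32_t)(unsigned char)m[2] << 16 |
	       (uint32_t)(unsigned char)m[3] << 24;
}
```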
+221 -53
drivers/thunderbolt/switch.c
··· 26 26 u32 status; 27 27 }; 28 28 29 - enum nvm_write_ops { 30 - WRITE_AND_AUTHENTICATE = 1, 31 - WRITE_ONLY = 2, 32 - }; 33 - 34 29 /* 35 30 * Hold NVM authentication failure status per switch This information 36 31 * needs to stay around even when the switch gets power cycled so we ··· 303 308 return dma_port_flash_read(sw->dma_port, address, buf, size); 304 309 } 305 310 306 - static int nvm_authenticate(struct tb_switch *sw) 311 + static int nvm_authenticate(struct tb_switch *sw, bool auth_only) 307 312 { 308 313 int ret; 309 314 310 - if (tb_switch_is_usb4(sw)) 315 + if (tb_switch_is_usb4(sw)) { 316 + if (auth_only) { 317 + ret = usb4_switch_nvm_set_offset(sw, 0); 318 + if (ret) 319 + return ret; 320 + } 321 + sw->nvm->authenticating = true; 311 322 return usb4_switch_nvm_authenticate(sw); 323 + } else if (auth_only) { 324 + return -EOPNOTSUPP; 325 + } 312 326 327 + sw->nvm->authenticating = true; 313 328 if (!tb_route(sw)) { 314 329 nvm_authenticate_start_dma_port(sw); 315 330 ret = nvm_authenticate_host_dma_port(sw); ··· 464 459 465 460 /* port utility functions */ 466 461 467 - static const char *tb_port_type(struct tb_regs_port_header *port) 462 + static const char *tb_port_type(const struct tb_regs_port_header *port) 468 463 { 469 464 switch (port->type >> 16) { 470 465 case 0: ··· 493 488 } 494 489 } 495 490 496 - static void tb_dump_port(struct tb *tb, struct tb_regs_port_header *port) 491 + static void tb_dump_port(struct tb *tb, const struct tb_port *port) 497 492 { 493 + const struct tb_regs_port_header *regs = &port->config; 494 + 498 495 tb_dbg(tb, 499 496 " Port %d: %x:%x (Revision: %d, TB Version: %d, Type: %s (%#x))\n", 500 - port->port_number, port->vendor_id, port->device_id, 501 - port->revision, port->thunderbolt_version, tb_port_type(port), 502 - port->type); 497 + regs->port_number, regs->vendor_id, regs->device_id, 498 + regs->revision, regs->thunderbolt_version, tb_port_type(regs), 499 + regs->type); 503 500 tb_dbg(tb, " Max hop id 
(in/out): %d/%d\n", 504 - port->max_in_hop_id, port->max_out_hop_id); 505 - tb_dbg(tb, " Max counters: %d\n", port->max_counters); 506 - tb_dbg(tb, " NFC Credits: %#x\n", port->nfc_credits); 501 + regs->max_in_hop_id, regs->max_out_hop_id); 502 + tb_dbg(tb, " Max counters: %d\n", regs->max_counters); 503 + tb_dbg(tb, " NFC Credits: %#x\n", regs->nfc_credits); 504 + tb_dbg(tb, " Credits (total/control): %u/%u\n", port->total_credits, 505 + port->ctl_credits); 507 506 } 508 507 509 508 /** ··· 747 738 cap = tb_port_find_cap(port, TB_PORT_CAP_USB4); 748 739 if (cap > 0) 749 740 port->cap_usb4 = cap; 741 + 742 + /* 743 + * USB4 ports the buffers allocated for the control path 744 + * can be read from the path config space. Legacy 745 + * devices we use hard-coded value. 746 + */ 747 + if (tb_switch_is_usb4(port->sw)) { 748 + struct tb_regs_hop hop; 749 + 750 + if (!tb_port_read(port, &hop, TB_CFG_HOPS, 0, 2)) 751 + port->ctl_credits = hop.initial_credits; 752 + } 753 + if (!port->ctl_credits) 754 + port->ctl_credits = 2; 755 + 750 756 } else if (port->port != 0) { 751 757 cap = tb_port_find_cap(port, TB_PORT_CAP_ADAP); 752 758 if (cap > 0) 753 759 port->cap_adap = cap; 754 760 } 755 761 756 - tb_dump_port(port->sw->tb, &port->config); 762 + port->total_credits = 763 + (port->config.nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >> 764 + ADP_CS_4_TOTAL_BUFFERS_SHIFT; 765 + 766 + tb_dump_port(port->sw->tb, port); 757 767 758 768 INIT_LIST_HEAD(&port->list); 759 769 return 0; ··· 1019 991 * tb_port_lane_bonding_enable() - Enable bonding on port 1020 992 * @port: port to enable 1021 993 * 1022 - * Enable bonding by setting the link width of the port and the 1023 - * other port in case of dual link port. 994 + * Enable bonding by setting the link width of the port and the other 995 + * port in case of dual link port. 
Does not wait for the link to 996 + * actually reach the bonded state so caller needs to call 997 + * tb_port_wait_for_link_width() before enabling any paths through the 998 + * link to make sure the link is in expected state. 1024 999 * 1025 1000 * Return: %0 in case of success and negative errno in case of error 1026 1001 */ ··· 1074 1043 tb_port_set_link_width(port, 1); 1075 1044 } 1076 1045 1046 + /** 1047 + * tb_port_wait_for_link_width() - Wait until link reaches specific width 1048 + * @port: Port to wait for 1049 + * @width: Expected link width (%1 or %2) 1050 + * @timeout_msec: Timeout in ms how long to wait 1051 + * 1052 + * Should be used after both ends of the link have been bonded (or 1053 + * bonding has been disabled) to wait until the link actually reaches 1054 + * the expected state. Returns %-ETIMEDOUT if the @width was not reached 1055 + * within the given timeout, %0 if it did. 1056 + */ 1057 + int tb_port_wait_for_link_width(struct tb_port *port, int width, 1058 + int timeout_msec) 1059 + { 1060 + ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec); 1061 + int ret; 1062 + 1063 + do { 1064 + ret = tb_port_get_link_width(port); 1065 + if (ret < 0) 1066 + return ret; 1067 + else if (ret == width) 1068 + return 0; 1069 + 1070 + usleep_range(1000, 2000); 1071 + } while (ktime_before(ktime_get(), timeout)); 1072 + 1073 + return -ETIMEDOUT; 1074 + } 1075 + 1076 + static int tb_port_do_update_credits(struct tb_port *port) 1077 + { 1078 + u32 nfc_credits; 1079 + int ret; 1080 + 1081 + ret = tb_port_read(port, &nfc_credits, TB_CFG_PORT, ADP_CS_4, 1); 1082 + if (ret) 1083 + return ret; 1084 + 1085 + if (nfc_credits != port->config.nfc_credits) { 1086 + u32 total; 1087 + 1088 + total = (nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >> 1089 + ADP_CS_4_TOTAL_BUFFERS_SHIFT; 1090 + 1091 + tb_port_dbg(port, "total credits changed %u -> %u\n", 1092 + port->total_credits, total); 1093 + 1094 + port->config.nfc_credits = nfc_credits; 1095 + port->total_credits 
= total; 1096 + } 1097 + 1098 + return 0; 1099 + } 1100 + 1101 + /** 1102 + * tb_port_update_credits() - Re-read port total credits 1103 + * @port: Port to update 1104 + * 1105 + * After the link is bonded (or bonding was disabled) the port total 1106 + * credits may change, so this function needs to be called to re-read 1107 + * the credits. Updates also the second lane adapter. 1108 + */ 1109 + int tb_port_update_credits(struct tb_port *port) 1110 + { 1111 + int ret; 1112 + 1113 + ret = tb_port_do_update_credits(port); 1114 + if (ret) 1115 + return ret; 1116 + return tb_port_do_update_credits(port->dual_link_port); 1117 + } 1118 + 1077 1119 static int tb_port_start_lane_initialization(struct tb_port *port) 1078 1120 { 1079 1121 int ret; ··· 1156 1052 1157 1053 ret = tb_lc_start_lane_initialization(port); 1158 1054 return ret == -EINVAL ? 0 : ret; 1055 + } 1056 + 1057 + /* 1058 + * Returns true if the port had something (router, XDomain) connected 1059 + * before suspend. 1060 + */ 1061 + static bool tb_port_resume(struct tb_port *port) 1062 + { 1063 + bool has_remote = tb_port_has_remote(port); 1064 + 1065 + if (port->usb4) { 1066 + usb4_port_device_resume(port->usb4); 1067 + } else if (!has_remote) { 1068 + /* 1069 + * For disconnected downstream lane adapters start lane 1070 + * initialization now so we detect future connects. 1071 + * 1072 + * For XDomain start the lane initialzation now so the 1073 + * link gets re-established. 1074 + * 1075 + * This is only needed for non-USB4 ports. 
1076 + */ 1077 + if (!tb_is_upstream_port(port) || port->xdomain) 1078 + tb_port_start_lane_initialization(port); 1079 + } 1080 + 1081 + return has_remote || port->xdomain; 1159 1082 } 1160 1083 1161 1084 /** ··· 1723 1592 bool disconnect) 1724 1593 { 1725 1594 struct tb_switch *sw = tb_to_switch(dev); 1726 - int val; 1727 - int ret; 1595 + int val, ret; 1728 1596 1729 1597 pm_runtime_get_sync(&sw->dev); 1730 1598 ··· 1746 1616 nvm_clear_auth_status(sw); 1747 1617 1748 1618 if (val > 0) { 1749 - if (!sw->nvm->flushed) { 1750 - if (!sw->nvm->buf) { 1619 + if (val == AUTHENTICATE_ONLY) { 1620 + if (disconnect) 1751 1621 ret = -EINVAL; 1752 - goto exit_unlock; 1753 - } 1622 + else 1623 + ret = nvm_authenticate(sw, true); 1624 + } else { 1625 + if (!sw->nvm->flushed) { 1626 + if (!sw->nvm->buf) { 1627 + ret = -EINVAL; 1628 + goto exit_unlock; 1629 + } 1754 1630 1755 - ret = nvm_validate_and_write(sw); 1756 - if (ret || val == WRITE_ONLY) 1757 - goto exit_unlock; 1758 - } 1759 - if (val == WRITE_AND_AUTHENTICATE) { 1760 - if (disconnect) { 1761 - ret = tb_lc_force_power(sw); 1762 - } else { 1763 - sw->nvm->authenticating = true; 1764 - ret = nvm_authenticate(sw); 1631 + ret = nvm_validate_and_write(sw); 1632 + if (ret || val == WRITE_ONLY) 1633 + goto exit_unlock; 1634 + } 1635 + if (val == WRITE_AND_AUTHENTICATE) { 1636 + if (disconnect) 1637 + ret = tb_lc_force_power(sw); 1638 + else 1639 + ret = nvm_authenticate(sw, false); 1765 1640 } 1766 1641 } 1767 1642 } ··· 2567 2432 return ret; 2568 2433 } 2569 2434 2435 + ret = tb_port_wait_for_link_width(down, 2, 100); 2436 + if (ret) { 2437 + tb_port_warn(down, "timeout enabling lane bonding\n"); 2438 + return ret; 2439 + } 2440 + 2441 + tb_port_update_credits(down); 2442 + tb_port_update_credits(up); 2570 2443 tb_switch_update_link_attributes(sw); 2571 2444 2572 2445 tb_sw_dbg(sw, "lane bonding enabled\n"); ··· 2605 2462 tb_port_lane_bonding_disable(up); 2606 2463 tb_port_lane_bonding_disable(down); 2607 2464 2465 + /* 
2466 + * It is fine if we get other errors as the router might have 2467 + * been unplugged. 2468 + */ 2469 + if (tb_port_wait_for_link_width(down, 1, 100) == -ETIMEDOUT) 2470 + tb_sw_warn(sw, "timeout disabling lane bonding\n"); 2471 + 2472 + tb_port_update_credits(down); 2473 + tb_port_update_credits(up); 2608 2474 tb_switch_update_link_attributes(sw); 2475 + 2609 2476 tb_sw_dbg(sw, "lane bonding disabled\n"); 2610 2477 } 2611 2478 ··· 2682 2529 tb_lc_unconfigure_port(down); 2683 2530 } 2684 2531 2532 + static void tb_switch_credits_init(struct tb_switch *sw) 2533 + { 2534 + if (tb_switch_is_icm(sw)) 2535 + return; 2536 + if (!tb_switch_is_usb4(sw)) 2537 + return; 2538 + if (usb4_switch_credits_init(sw)) 2539 + tb_sw_info(sw, "failed to determine preferred buffer allocation, using defaults\n"); 2540 + } 2541 + 2685 2542 /** 2686 2543 * tb_switch_add() - Add a switch to the domain 2687 2544 * @sw: Switch to add ··· 2722 2559 } 2723 2560 2724 2561 if (!sw->safe_mode) { 2562 + tb_switch_credits_init(sw); 2563 + 2725 2564 /* read drom */ 2726 2565 ret = tb_drom_read(sw); 2727 2566 if (ret) { ··· 2777 2612 sw->device_name); 2778 2613 } 2779 2614 2615 + ret = usb4_switch_add_ports(sw); 2616 + if (ret) { 2617 + dev_err(&sw->dev, "failed to add USB4 ports\n"); 2618 + goto err_del; 2619 + } 2620 + 2780 2621 ret = tb_switch_nvm_add(sw); 2781 2622 if (ret) { 2782 2623 dev_err(&sw->dev, "failed to add NVM devices\n"); 2783 - device_del(&sw->dev); 2784 - return ret; 2624 + goto err_ports; 2785 2625 } 2786 2626 2787 2627 /* ··· 2807 2637 2808 2638 tb_switch_debugfs_init(sw); 2809 2639 return 0; 2640 + 2641 + err_ports: 2642 + usb4_switch_remove_ports(sw); 2643 + err_del: 2644 + device_del(&sw->dev); 2645 + 2646 + return ret; 2810 2647 } 2811 2648 2812 2649 /** ··· 2853 2676 tb_plug_events_active(sw, false); 2854 2677 2855 2678 tb_switch_nvm_remove(sw); 2679 + usb4_switch_remove_ports(sw); 2856 2680 2857 2681 if (tb_route(sw)) 2858 2682 dev_info(&sw->dev, "device 
disconnected\n"); ··· 2951 2773 2952 2774 /* check for surviving downstream switches */ 2953 2775 tb_switch_for_each_port(sw, port) { 2954 - if (!tb_port_has_remote(port) && !port->xdomain) { 2955 - /* 2956 - * For disconnected downstream lane adapters 2957 - * start lane initialization now so we detect 2958 - * future connects. 2959 - */ 2960 - if (!tb_is_upstream_port(port) && tb_port_is_null(port)) 2961 - tb_port_start_lane_initialization(port); 2776 + if (!tb_port_is_null(port)) 2962 2777 continue; 2963 - } else if (port->xdomain) { 2964 - /* 2965 - * Start lane initialization for XDomain so the 2966 - * link gets re-established. 2967 - */ 2968 - tb_port_start_lane_initialization(port); 2969 - } 2778 + 2779 + if (!tb_port_resume(port)) 2780 + continue; 2970 2781 2971 2782 if (tb_wait_for_port(port, true) <= 0) { 2972 2783 tb_port_warn(port, ··· 2964 2797 tb_sw_set_unplugged(port->remote->sw); 2965 2798 else if (port->xdomain) 2966 2799 port->xdomain->is_unplugged = true; 2967 - } else if (tb_port_has_remote(port) || port->xdomain) { 2800 + } else { 2968 2801 /* 2969 2802 * Always unlock the port so the downstream 2970 2803 * switch/domain is accessible. ··· 3011 2844 if (runtime) { 3012 2845 /* Trigger wake when something is plugged in/out */ 3013 2846 flags |= TB_WAKE_ON_CONNECT | TB_WAKE_ON_DISCONNECT; 3014 - flags |= TB_WAKE_ON_USB4 | TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE; 2847 + flags |= TB_WAKE_ON_USB4; 2848 + flags |= TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE | TB_WAKE_ON_DP; 3015 2849 } else if (device_may_wakeup(&sw->dev)) { 3016 2850 flags |= TB_WAKE_ON_USB4 | TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE; 3017 2851 }
drivers/thunderbolt/tb.c (+69 -2)
···
 #include <linux/errno.h>
 #include <linux/delay.h>
 #include <linux/pm_runtime.h>
+#include <linux/platform_data/x86/apple.h>
 
 #include "tb.h"
 #include "tb_regs.h"
···
 		return;
 	}
 
-	tb_retimer_scan(port);
+	tb_retimer_scan(port, true);
 
 	sw = tb_switch_alloc(port->sw->tb, &port->sw->dev,
 			    tb_downstream_route(port));
···
 		tb_sw_warn(sw, "failed to enable TMU\n");
 
 	/* Scan upstream retimers */
-	tb_retimer_scan(upstream_port);
+	tb_retimer_scan(upstream_port, true);
 
 	/*
 	 * Create USB 3.x tunnels only when the switch is plugged to the
···
 	.disconnect_xdomain_paths = tb_disconnect_xdomain_paths,
 };
 
+/*
+ * During suspend the Thunderbolt controller is reset and all PCIe
+ * tunnels are lost. The NHI driver will try to reestablish all tunnels
+ * during resume. This adds device links between the tunneled PCIe
+ * downstream ports and the NHI so that the device core will make sure
+ * NHI is resumed first before the rest.
+ */
+static void tb_apple_add_links(struct tb_nhi *nhi)
+{
+	struct pci_dev *upstream, *pdev;
+
+	if (!x86_apple_machine)
+		return;
+
+	switch (nhi->pdev->device) {
+	case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
+	case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
+	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
+	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
+		break;
+	default:
+		return;
+	}
+
+	upstream = pci_upstream_bridge(nhi->pdev);
+	while (upstream) {
+		if (!pci_is_pcie(upstream))
+			return;
+		if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM)
+			break;
+		upstream = pci_upstream_bridge(upstream);
+	}
+
+	if (!upstream)
+		return;
+
+	/*
+	 * For each hotplug downstream port, add a device link back to
+	 * NHI so that PCIe tunnels can be re-established after sleep.
+	 */
+	for_each_pci_bridge(pdev, upstream->subordinate) {
+		const struct device_link *link;
+
+		if (!pci_is_pcie(pdev))
+			continue;
+		if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
+		    !pdev->is_hotplug_bridge)
+			continue;
+
+		link = device_link_add(&pdev->dev, &nhi->pdev->dev,
+				       DL_FLAG_AUTOREMOVE_SUPPLIER |
+				       DL_FLAG_PM_RUNTIME);
+		if (link) {
+			dev_dbg(&nhi->pdev->dev, "created link from %s\n",
+				dev_name(&pdev->dev));
+		} else {
+			dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
+				 dev_name(&pdev->dev));
+		}
+	}
+}
+
 struct tb *tb_probe(struct tb_nhi *nhi)
 {
 	struct tb_cm *tcm;
···
 	INIT_DELAYED_WORK(&tcm->remove_work, tb_remove_work);
 
 	tb_dbg(tb, "using software connection manager\n");
+
+	tb_apple_add_links(nhi);
+	tb_acpi_add_links(nhi);
 
 	return tb;
 }
drivers/thunderbolt/tb.h (+113 -3)
···
 
 #define NVM_MIN_SIZE		SZ_32K
 #define NVM_MAX_SIZE		SZ_512K
+#define NVM_DATA_DWORDS		16
 
 /* Intel specific NVM offsets */
 #define NVM_DEVID		0x05
···
 	size_t buf_data_size;
 	bool authenticating;
 	bool flushed;
+};
+
+enum tb_nvm_write_ops {
+	WRITE_AND_AUTHENTICATE = 1,
+	WRITE_ONLY = 2,
+	AUTHENTICATE_ONLY = 3,
 };
 
 #define TB_SWITCH_KEY_SIZE	32
···
 * @rpm_complete: Completion used to wait for runtime resume to
 *		  complete (ICM only)
 * @quirks: Quirks used for this Thunderbolt switch
+ * @credit_allocation: Are the below buffer allocation parameters valid
+ * @max_usb3_credits: Router preferred number of buffers for USB 3.x
+ * @min_dp_aux_credits: Router preferred minimum number of buffers for DP AUX
+ * @min_dp_main_credits: Router preferred minimum number of buffers for DP MAIN
+ * @max_pcie_credits: Router preferred number of buffers for PCIe
+ * @max_dma_credits: Router preferred number of buffers for DMA/P2P
 *
 * When the switch is being added or removed to the domain (other
 * switches) you need to have domain lock held.
···
 	u8 depth;
 	struct completion rpm_complete;
 	unsigned long quirks;
+	bool credit_allocation;
+	unsigned int max_usb3_credits;
+	unsigned int min_dp_aux_credits;
+	unsigned int min_dp_main_credits;
+	unsigned int max_pcie_credits;
+	unsigned int max_dma_credits;
 };
 
 /**
···
 * @cap_tmu: Offset of the adapter specific TMU capability (%0 if not present)
 * @cap_adap: Offset of the adapter specific capability (%0 if not present)
 * @cap_usb4: Offset to the USB4 port capability (%0 if not present)
+ * @usb4: Pointer to the USB4 port structure (only if @cap_usb4 is != %0)
 * @port: Port number on switch
 * @disabled: Disabled by eeprom or enabled but not implemented
 * @bonded: true if the port is bonded (two lanes combined as one)
···
 * @in_hopids: Currently allocated input HopIDs
 * @out_hopids: Currently allocated output HopIDs
 * @list: Used to link ports to DP resources list
+ * @total_credits: Total number of buffers available for this port
+ * @ctl_credits: Buffers reserved for control path
+ * @dma_credits: Number of credits allocated for DMA tunneling for all
+ *		 DMA paths through this port.
 *
 * In USB4 terminology this structure represents an adapter (protocol or
 * lane adapter).
···
 	int cap_tmu;
 	int cap_adap;
 	int cap_usb4;
+	struct usb4_port *usb4;
 	u8 port;
 	bool disabled;
 	bool bonded;
···
 	struct ida in_hopids;
 	struct ida out_hopids;
 	struct list_head list;
+	unsigned int total_credits;
+	unsigned int ctl_credits;
+	unsigned int dma_credits;
+};
+
+/**
+ * struct usb4_port - USB4 port device
+ * @dev: Device for the port
+ * @port: Pointer to the lane 0 adapter
+ * @can_offline: Does the port have the necessary platform support to move
+ *		 it into offline mode and back
+ * @offline: The port is currently in offline mode
+ */
+struct usb4_port {
+	struct device dev;
+	struct tb_port *port;
+	bool can_offline;
+	bool offline;
 };
 
 /**
···
 * @next_hop_index: HopID of the packet when it is routed out from @out_port
 * @initial_credits: Number of initial flow control credits allocated for
 *		     the path
+ * @nfc_credits: Number of non-flow controlled buffers allocated for the
+ *		 @in_port.
 *
 * Hop configuration is always done on the IN port of a switch.
 * in_port and out_port have to be on the same switch. Packets arriving on
···
 	int in_counter_index;
 	int next_hop_index;
 	unsigned int initial_credits;
+	unsigned int nfc_credits;
 };
 
 /**
···
 * struct tb_path - a unidirectional path between two ports
 * @tb: Pointer to the domain structure
 * @name: Name of the path (used for debugging)
- * @nfc_credits: Number of non flow controlled credits allocated for the path
 * @ingress_shared_buffer: Shared buffering used for ingress ports on the path
 * @egress_shared_buffer: Shared buffering used for egress ports on the path
 * @ingress_fc_enable: Flow control for ingress ports on the path
···
 struct tb_path {
 	struct tb *tb;
 	const char *name;
-	int nfc_credits;
 	enum tb_path_port ingress_shared_buffer;
 	enum tb_path_port egress_shared_buffer;
 	enum tb_path_port ingress_fc_enable;
···
 #define TB_WAKE_ON_USB4		BIT(2)
 #define TB_WAKE_ON_USB3		BIT(3)
 #define TB_WAKE_ON_PCIE		BIT(4)
+#define TB_WAKE_ON_DP		BIT(5)
 
 /**
  * struct tb_cm_ops - Connection manager specific operations vector
···
 extern struct device_type tb_domain_type;
 extern struct device_type tb_retimer_type;
 extern struct device_type tb_switch_type;
+extern struct device_type usb4_port_device_type;
 
 int tb_domain_init(void);
 void tb_domain_exit(void);
···
 		     nvmem_reg_write_t reg_write);
 void tb_nvm_free(struct tb_nvm *nvm);
 void tb_nvm_exit(void);
+
+typedef int (*read_block_fn)(void *, unsigned int, void *, size_t);
+typedef int (*write_block_fn)(void *, unsigned int, const void *, size_t);
+
+int tb_nvm_read_data(unsigned int address, void *buf, size_t size,
+		     unsigned int retries, read_block_fn read_block,
+		     void *read_block_data);
+int tb_nvm_write_data(unsigned int address, const void *buf, size_t size,
+		      unsigned int retries, write_block_fn write_next_block,
+		      void *write_block_data);
 
 struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
 				  u64 route);
···
 struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
 				     struct tb_port *prev);
 
+static inline bool tb_port_use_credit_allocation(const struct tb_port *port)
+{
+	return tb_port_is_null(port) && port->sw->credit_allocation;
+}
+
 /**
  * tb_for_each_port_on_path() - Iterate over each port on path
  * @src: Source port
···
 int tb_port_state(struct tb_port *port);
 int tb_port_lane_bonding_enable(struct tb_port *port);
 void tb_port_lane_bonding_disable(struct tb_port *port);
+int tb_port_wait_for_link_width(struct tb_port *port, int width,
+				int timeout_msec);
+int tb_port_update_credits(struct tb_port *port);
 
 int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
 int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
···
 bool tb_path_is_invalid(struct tb_path *path);
 bool tb_path_port_on_path(const struct tb_path *path,
 			  const struct tb_port *port);
+
+/**
+ * tb_path_for_each_hop() - Iterate over each hop on path
+ * @path: Path whose hops to iterate
+ * @hop: Hop used as iterator
+ *
+ * Iterates over each hop on path.
+ */
+#define tb_path_for_each_hop(path, hop)					\
+	for ((hop) = &(path)->hops[0];					\
+	     (hop) <= &(path)->hops[(path)->path_length - 1]; (hop)++)
 
 int tb_drom_read(struct tb_switch *sw);
 int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid);
···
 struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8 link,
 						 u8 depth);
 
-int tb_retimer_scan(struct tb_port *port);
+int tb_retimer_scan(struct tb_port *port, bool add);
 void tb_retimer_remove_all(struct tb_port *port);
 
 static inline bool tb_is_retimer(const struct device *dev)
···
 int usb4_switch_nvm_sector_size(struct tb_switch *sw);
 int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
 			 size_t size);
+int usb4_switch_nvm_set_offset(struct tb_switch *sw, unsigned int address);
 int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
 			  const void *buf, size_t size);
 int usb4_switch_nvm_authenticate(struct tb_switch *sw);
 int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status);
+int usb4_switch_credits_init(struct tb_switch *sw);
 bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in);
 int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
 int usb4_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
···
 					  const struct tb_port *port);
 struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
 					  const struct tb_port *port);
+int usb4_switch_add_ports(struct tb_switch *sw);
+void usb4_switch_remove_ports(struct tb_switch *sw);
 
 int usb4_port_unlock(struct tb_port *port);
 int usb4_port_configure(struct tb_port *port);
 void usb4_port_unconfigure(struct tb_port *port);
 int usb4_port_configure_xdomain(struct tb_port *port);
 void usb4_port_unconfigure_xdomain(struct tb_port *port);
+int usb4_port_router_offline(struct tb_port *port);
+int usb4_port_router_online(struct tb_port *port);
 int usb4_port_enumerate_retimers(struct tb_port *port);
 
+int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index);
 int usb4_port_retimer_read(struct tb_port *port, u8 index, u8 reg, void *buf,
 			   u8 size);
 int usb4_port_retimer_write(struct tb_port *port, u8 index, u8 reg,
 			    const void *buf, u8 size);
 int usb4_port_retimer_is_last(struct tb_port *port, u8 index);
 int usb4_port_retimer_nvm_sector_size(struct tb_port *port, u8 index);
+int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index,
+				     unsigned int address);
 int usb4_port_retimer_nvm_write(struct tb_port *port, u8 index,
 				unsigned int address, const void *buf,
 				size_t size);
···
 int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw,
 				     int *downstream_bw);
 
+static inline bool tb_is_usb4_port_device(const struct device *dev)
+{
+	return dev->type == &usb4_port_device_type;
+}
+
+static inline struct usb4_port *tb_to_usb4_port_device(struct device *dev)
+{
+	if (tb_is_usb4_port_device(dev))
+		return container_of(dev, struct usb4_port, dev);
+	return NULL;
+}
+
+struct usb4_port *usb4_port_device_add(struct tb_port *port);
+void usb4_port_device_remove(struct usb4_port *usb4);
+int usb4_port_device_resume(struct usb4_port *usb4);
+
 /* Keep link controller awake during update */
 #define QUIRK_FORCE_POWER_LINK_CONTROLLER		BIT(0)
···
 bool tb_acpi_may_tunnel_dp(void);
 bool tb_acpi_may_tunnel_pcie(void);
 bool tb_acpi_is_xdomain_allowed(void);
+
+int tb_acpi_init(void);
+void tb_acpi_exit(void);
+int tb_acpi_power_on_retimers(struct tb_port *port);
+int tb_acpi_power_off_retimers(struct tb_port *port);
 #else
 static inline void tb_acpi_add_links(struct tb_nhi *nhi) { }
···
 static inline bool tb_acpi_may_tunnel_dp(void) { return true; }
 static inline bool tb_acpi_may_tunnel_pcie(void) { return true; }
 static inline bool tb_acpi_is_xdomain_allowed(void) { return true; }
+
+static inline int tb_acpi_init(void) { return 0; }
+static inline void tb_acpi_exit(void) { }
+static inline int tb_acpi_power_on_retimers(struct tb_port *port) { return 0; }
+static inline int tb_acpi_power_off_retimers(struct tb_port *port) { return 0; }
 #endif
 
 #ifdef CONFIG_DEBUG_FS
drivers/thunderbolt/tb_regs.h (+4)
···
 #define ROUTER_CS_5_SLP			BIT(0)
 #define ROUTER_CS_5_WOP			BIT(1)
 #define ROUTER_CS_5_WOU			BIT(2)
+#define ROUTER_CS_5_WOD			BIT(3)
 #define ROUTER_CS_5_C3S			BIT(23)
 #define ROUTER_CS_5_PTO			BIT(24)
 #define ROUTER_CS_5_UTO			BIT(25)
···
 	USB4_SWITCH_OP_NVM_SET_OFFSET = 0x23,
 	USB4_SWITCH_OP_DROM_READ = 0x24,
 	USB4_SWITCH_OP_NVM_SECTOR_SIZE = 0x25,
+	USB4_SWITCH_OP_BUFFER_ALLOC = 0x33,
 };
 
 /* Router TMU configuration */
···
 #define TB_LC_SX_CTRL			0x96
 #define TB_LC_SX_CTRL_WOC		BIT(1)
 #define TB_LC_SX_CTRL_WOD		BIT(2)
+#define TB_LC_SX_CTRL_WODPC		BIT(3)
+#define TB_LC_SX_CTRL_WODPD		BIT(4)
 #define TB_LC_SX_CTRL_WOU4		BIT(5)
 #define TB_LC_SX_CTRL_WOP		BIT(6)
 #define TB_LC_SX_CTRL_L1C		BIT(16)
drivers/thunderbolt/test.c (+552 -7)
···
 	sw->ports[1].config.type = TB_TYPE_PORT;
 	sw->ports[1].config.max_in_hop_id = 19;
 	sw->ports[1].config.max_out_hop_id = 19;
+	sw->ports[1].total_credits = 60;
+	sw->ports[1].ctl_credits = 2;
 	sw->ports[1].dual_link_port = &sw->ports[2];
 
 	sw->ports[2].config.type = TB_TYPE_PORT;
 	sw->ports[2].config.max_in_hop_id = 19;
 	sw->ports[2].config.max_out_hop_id = 19;
+	sw->ports[2].total_credits = 60;
+	sw->ports[2].ctl_credits = 2;
 	sw->ports[2].dual_link_port = &sw->ports[1];
 	sw->ports[2].link_nr = 1;
 
 	sw->ports[3].config.type = TB_TYPE_PORT;
 	sw->ports[3].config.max_in_hop_id = 19;
 	sw->ports[3].config.max_out_hop_id = 19;
+	sw->ports[3].total_credits = 60;
+	sw->ports[3].ctl_credits = 2;
 	sw->ports[3].dual_link_port = &sw->ports[4];
 
 	sw->ports[4].config.type = TB_TYPE_PORT;
 	sw->ports[4].config.max_in_hop_id = 19;
 	sw->ports[4].config.max_out_hop_id = 19;
+	sw->ports[4].total_credits = 60;
+	sw->ports[4].ctl_credits = 2;
 	sw->ports[4].dual_link_port = &sw->ports[3];
 	sw->ports[4].link_nr = 1;
 
···
 	return sw;
 }
 
+static struct tb_switch *alloc_host_usb4(struct kunit *test)
+{
+	struct tb_switch *sw;
+
+	sw = alloc_host(test);
+	if (!sw)
+		return NULL;
+
+	sw->generation = 4;
+	sw->credit_allocation = true;
+	sw->max_usb3_credits = 32;
+	sw->min_dp_aux_credits = 1;
+	sw->min_dp_main_credits = 0;
+	sw->max_pcie_credits = 64;
+	sw->max_dma_credits = 14;
+
+	return sw;
+}
+
 static struct tb_switch *alloc_dev_default(struct kunit *test,
 					   struct tb_switch *parent,
 					   u64 route, bool bonded)
···
 	sw->ports[1].config.type = TB_TYPE_PORT;
 	sw->ports[1].config.max_in_hop_id = 19;
 	sw->ports[1].config.max_out_hop_id = 19;
+	sw->ports[1].total_credits = 60;
+	sw->ports[1].ctl_credits = 2;
 	sw->ports[1].dual_link_port = &sw->ports[2];
 
 	sw->ports[2].config.type = TB_TYPE_PORT;
 	sw->ports[2].config.max_in_hop_id = 19;
 	sw->ports[2].config.max_out_hop_id = 19;
+	sw->ports[2].total_credits = 60;
+	sw->ports[2].ctl_credits = 2;
 	sw->ports[2].dual_link_port = &sw->ports[1];
 	sw->ports[2].link_nr = 1;
 
 	sw->ports[3].config.type = TB_TYPE_PORT;
 	sw->ports[3].config.max_in_hop_id = 19;
 	sw->ports[3].config.max_out_hop_id = 19;
+	sw->ports[3].total_credits = 60;
+	sw->ports[3].ctl_credits = 2;
 	sw->ports[3].dual_link_port = &sw->ports[4];
 
 	sw->ports[4].config.type = TB_TYPE_PORT;
 	sw->ports[4].config.max_in_hop_id = 19;
 	sw->ports[4].config.max_out_hop_id = 19;
+	sw->ports[4].total_credits = 60;
+	sw->ports[4].ctl_credits = 2;
 	sw->ports[4].dual_link_port = &sw->ports[3];
 	sw->ports[4].link_nr = 1;
 
 	sw->ports[5].config.type = TB_TYPE_PORT;
 	sw->ports[5].config.max_in_hop_id = 19;
 	sw->ports[5].config.max_out_hop_id = 19;
+	sw->ports[5].total_credits = 60;
+	sw->ports[5].ctl_credits = 2;
 	sw->ports[5].dual_link_port = &sw->ports[6];
 
 	sw->ports[6].config.type = TB_TYPE_PORT;
 	sw->ports[6].config.max_in_hop_id = 19;
 	sw->ports[6].config.max_out_hop_id = 19;
+	sw->ports[6].total_credits = 60;
+	sw->ports[6].ctl_credits = 2;
 	sw->ports[6].dual_link_port = &sw->ports[5];
 	sw->ports[6].link_nr = 1;
 
 	sw->ports[7].config.type = TB_TYPE_PORT;
 	sw->ports[7].config.max_in_hop_id = 19;
 	sw->ports[7].config.max_out_hop_id = 19;
+	sw->ports[7].total_credits = 60;
+	sw->ports[7].ctl_credits = 2;
 	sw->ports[7].dual_link_port = &sw->ports[8];
 
 	sw->ports[8].config.type = TB_TYPE_PORT;
 	sw->ports[8].config.max_in_hop_id = 19;
 	sw->ports[8].config.max_out_hop_id = 19;
+	sw->ports[8].total_credits = 60;
+	sw->ports[8].ctl_credits = 2;
 	sw->ports[8].dual_link_port = &sw->ports[7];
 	sw->ports[8].link_nr = 1;
 
···
 	if (port->dual_link_port && upstream_port->dual_link_port) {
 		port->dual_link_port->remote = upstream_port->dual_link_port;
 		upstream_port->dual_link_port->remote = port->dual_link_port;
-	}
 
-	if (bonded) {
-		/* Bonding is used */
-		port->bonded = true;
-		port->dual_link_port->bonded = true;
-		upstream_port->bonded = true;
-		upstream_port->dual_link_port->bonded = true;
+		if (bonded) {
+			/* Bonding is used */
+			port->bonded = true;
+			port->total_credits *= 2;
+			port->dual_link_port->bonded = true;
+			port->dual_link_port->total_credits = 0;
+			upstream_port->bonded = true;
+			upstream_port->total_credits *= 2;
+			upstream_port->dual_link_port->bonded = true;
+			upstream_port->dual_link_port->total_credits = 0;
+		}
 	}
 
 	return sw;
···
 	sw->ports[14].config.type = TB_TYPE_DP_HDMI_IN;
 	sw->ports[14].config.max_in_hop_id = 9;
 	sw->ports[14].config.max_out_hop_id = 9;
+
+	return sw;
+}
+
+static struct tb_switch *alloc_dev_usb4(struct kunit *test,
+					struct tb_switch *parent,
+					u64 route, bool bonded)
+{
+	struct tb_switch *sw;
+
+	sw = alloc_dev_default(test, parent, route, bonded);
+	if (!sw)
+		return NULL;
+
+	sw->generation = 4;
+	sw->credit_allocation = true;
+	sw->max_usb3_credits = 14;
+	sw->min_dp_aux_credits = 1;
+	sw->min_dp_main_credits = 18;
+	sw->max_pcie_credits = 32;
+	sw->max_dma_credits = 14;
 
 	return sw;
 }
···
 	tb_tunnel_free(tunnel);
 }
 
+static void tb_test_credit_alloc_legacy_not_bonded(struct kunit *test)
+{
+	struct tb_switch *host, *dev;
+	struct tb_port *up, *down;
+	struct tb_tunnel *tunnel;
+	struct tb_path *path;
+
+	host = alloc_host(test);
+	dev = alloc_dev_default(test, host, 0x1, false);
+
+	down = &host->ports[8];
+	up = &dev->ports[9];
+	tunnel = tb_tunnel_alloc_pci(NULL, up, down);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
+
+	path = tunnel->paths[0];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 16U);
+
+	path = tunnel->paths[1];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 16U);
+
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_credit_alloc_legacy_bonded(struct kunit *test)
+{
+	struct tb_switch *host, *dev;
+	struct tb_port *up, *down;
+	struct tb_tunnel *tunnel;
+	struct tb_path *path;
+
+	host = alloc_host(test);
+	dev = alloc_dev_default(test, host, 0x1, true);
+
+	down = &host->ports[8];
+	up = &dev->ports[9];
+	tunnel = tb_tunnel_alloc_pci(NULL, up, down);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
+
+	path = tunnel->paths[0];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);
+
+	path = tunnel->paths[1];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);
+
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_credit_alloc_pcie(struct kunit *test)
+{
+	struct tb_switch *host, *dev;
+	struct tb_port *up, *down;
+	struct tb_tunnel *tunnel;
+	struct tb_path *path;
+
+	host = alloc_host_usb4(test);
+	dev = alloc_dev_usb4(test, host, 0x1, true);
+
+	down = &host->ports[8];
+	up = &dev->ports[9];
+	tunnel = tb_tunnel_alloc_pci(NULL, up, down);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
+
+	path = tunnel->paths[0];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);
+
+	path = tunnel->paths[1];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 64U);
+
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_credit_alloc_dp(struct kunit *test)
+{
+	struct tb_switch *host, *dev;
+	struct tb_port *in, *out;
+	struct tb_tunnel *tunnel;
+	struct tb_path *path;
+
+	host = alloc_host_usb4(test);
+	dev = alloc_dev_usb4(test, host, 0x1, true);
+
+	in = &host->ports[5];
+	out = &dev->ports[14];
+
+	tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
+
+	/* Video (main) path */
+	path = tunnel->paths[0];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 12U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 18U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 0U);
+
+	/* AUX TX */
+	path = tunnel->paths[1];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
+
+	/* AUX RX */
+	path = tunnel->paths[2];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
+
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_credit_alloc_usb3(struct kunit *test)
+{
+	struct tb_switch *host, *dev;
+	struct tb_port *up, *down;
+	struct tb_tunnel *tunnel;
+	struct tb_path *path;
+
+	host = alloc_host_usb4(test);
+	dev = alloc_dev_usb4(test, host, 0x1, true);
+
+	down = &host->ports[12];
+	up = &dev->ports[16];
+	tunnel = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
+
+	path = tunnel->paths[0];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
+
+	path = tunnel->paths[1];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);
+
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_credit_alloc_dma(struct kunit *test)
+{
+	struct tb_switch *host, *dev;
+	struct tb_port *nhi, *port;
+	struct tb_tunnel *tunnel;
+	struct tb_path *path;
+
+	host = alloc_host_usb4(test);
+	dev = alloc_dev_usb4(test, host, 0x1, true);
+
+	nhi = &host->ports[7];
+	port = &dev->ports[3];
+
+	tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
+	KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+	KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
+
+	/* DMA RX */
+	path = tunnel->paths[0];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
+
+	/* DMA TX */
+	path = tunnel->paths[1];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
+
+	tb_tunnel_free(tunnel);
+}
+
+static void tb_test_credit_alloc_dma_multiple(struct kunit *test)
+{
+	struct tb_tunnel *tunnel1, *tunnel2, *tunnel3;
+	struct tb_switch *host, *dev;
+	struct tb_port *nhi, *port;
+	struct tb_path *path;
+
+	host = alloc_host_usb4(test);
+	dev = alloc_dev_usb4(test, host, 0x1, true);
+
+	nhi = &host->ports[7];
+	port = &dev->ports[3];
+
+	/*
+	 * Create three DMA tunnels through the same ports. With the
+	 * default buffers we should be able to create two and the last
+	 * one fails.
+	 *
+	 * For default host we have following buffers for DMA:
+	 *
+	 *   120 - (2 + 2 * (1 + 0) + 32 + 64 + spare) = 20
+	 *
+	 * For device we have following:
+	 *
+	 *  120 - (2 + 2 * (1 + 18) + 14 + 32 + spare) = 34
+	 *
+	 * spare = 14 + 1 = 15
+	 *
+	 * So on host the first tunnel gets 14 and the second gets the
+	 * remaining 1 and then we run out of buffers.
+	 */
+	tunnel1 = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
+	KUNIT_ASSERT_TRUE(test, tunnel1 != NULL);
+	KUNIT_ASSERT_EQ(test, tunnel1->npaths, (size_t)2);
+
+	path = tunnel1->paths[0];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
+
+	path = tunnel1->paths[1];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
+
+	tunnel2 = tb_tunnel_alloc_dma(NULL, nhi, port, 9, 2, 9, 2);
+	KUNIT_ASSERT_TRUE(test, tunnel2 != NULL);
+	KUNIT_ASSERT_EQ(test, tunnel2->npaths, (size_t)2);
+
+	path = tunnel2->paths[0];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
+
+	path = tunnel2->paths[1];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
+
+	tunnel3 = tb_tunnel_alloc_dma(NULL, nhi, port, 10, 3, 10, 3);
+	KUNIT_ASSERT_TRUE(test, tunnel3 == NULL);
+
+	/*
+	 * Release the first DMA tunnel. That should make 14 buffers
+	 * available for the next tunnel.
+	 */
+	tb_tunnel_free(tunnel1);
+
+	tunnel3 = tb_tunnel_alloc_dma(NULL, nhi, port, 10, 3, 10, 3);
+	KUNIT_ASSERT_TRUE(test, tunnel3 != NULL);
+
+	path = tunnel3->paths[0];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
+
+	path = tunnel3->paths[1];
+	KUNIT_ASSERT_EQ(test, path->path_length, 2);
+	KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
+	KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
+
+	tb_tunnel_free(tunnel3);
+	tb_tunnel_free(tunnel2);
+}
+
+static void tb_test_credit_alloc_all(struct kunit *test)
+{
+	struct tb_port *up, *down, *in, *out, *nhi, *port;
+	struct tb_tunnel *pcie_tunnel, *dp_tunnel1, *dp_tunnel2, *usb3_tunnel;
+	struct tb_tunnel *dma_tunnel1, *dma_tunnel2;
+	struct tb_switch *host, *dev;
+	struct tb_path *path;
+
+	/*
+	 * Create PCIe, 2 x DP, USB 3.x and two DMA tunnels from host to
+	 * device. Expectation is that all these can be established with
+	 * the default credit allocation found in Intel hardware.
2153 + */ 2154 + 2155 + host = alloc_host_usb4(test); 2156 + dev = alloc_dev_usb4(test, host, 0x1, true); 2157 + 2158 + down = &host->ports[8]; 2159 + up = &dev->ports[9]; 2160 + pcie_tunnel = tb_tunnel_alloc_pci(NULL, up, down); 2161 + KUNIT_ASSERT_TRUE(test, pcie_tunnel != NULL); 2162 + KUNIT_ASSERT_EQ(test, pcie_tunnel->npaths, (size_t)2); 2163 + 2164 + path = pcie_tunnel->paths[0]; 2165 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2166 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); 2167 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); 2168 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); 2169 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U); 2170 + 2171 + path = pcie_tunnel->paths[1]; 2172 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2173 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); 2174 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); 2175 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); 2176 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 64U); 2177 + 2178 + in = &host->ports[5]; 2179 + out = &dev->ports[13]; 2180 + 2181 + dp_tunnel1 = tb_tunnel_alloc_dp(NULL, in, out, 0, 0); 2182 + KUNIT_ASSERT_TRUE(test, dp_tunnel1 != NULL); 2183 + KUNIT_ASSERT_EQ(test, dp_tunnel1->npaths, (size_t)3); 2184 + 2185 + path = dp_tunnel1->paths[0]; 2186 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2187 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 12U); 2188 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U); 2189 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 18U); 2190 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 0U); 2191 + 2192 + path = dp_tunnel1->paths[1]; 2193 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2194 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); 2195 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U); 2196 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); 2197 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); 2198 + 2199 + 
path = dp_tunnel1->paths[2]; 2200 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2201 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); 2202 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U); 2203 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); 2204 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); 2205 + 2206 + in = &host->ports[6]; 2207 + out = &dev->ports[14]; 2208 + 2209 + dp_tunnel2 = tb_tunnel_alloc_dp(NULL, in, out, 0, 0); 2210 + KUNIT_ASSERT_TRUE(test, dp_tunnel2 != NULL); 2211 + KUNIT_ASSERT_EQ(test, dp_tunnel2->npaths, (size_t)3); 2212 + 2213 + path = dp_tunnel2->paths[0]; 2214 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2215 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 12U); 2216 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U); 2217 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 18U); 2218 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 0U); 2219 + 2220 + path = dp_tunnel2->paths[1]; 2221 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2222 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); 2223 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U); 2224 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); 2225 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); 2226 + 2227 + path = dp_tunnel2->paths[2]; 2228 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2229 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); 2230 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U); 2231 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); 2232 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); 2233 + 2234 + down = &host->ports[12]; 2235 + up = &dev->ports[16]; 2236 + usb3_tunnel = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0); 2237 + KUNIT_ASSERT_TRUE(test, usb3_tunnel != NULL); 2238 + KUNIT_ASSERT_EQ(test, usb3_tunnel->npaths, (size_t)2); 2239 + 2240 + path = usb3_tunnel->paths[0]; 2241 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2242 + KUNIT_EXPECT_EQ(test, 
path->hops[0].nfc_credits, 0U); 2243 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); 2244 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); 2245 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U); 2246 + 2247 + path = usb3_tunnel->paths[1]; 2248 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2249 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); 2250 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U); 2251 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); 2252 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U); 2253 + 2254 + nhi = &host->ports[7]; 2255 + port = &dev->ports[3]; 2256 + 2257 + dma_tunnel1 = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1); 2258 + KUNIT_ASSERT_TRUE(test, dma_tunnel1 != NULL); 2259 + KUNIT_ASSERT_EQ(test, dma_tunnel1->npaths, (size_t)2); 2260 + 2261 + path = dma_tunnel1->paths[0]; 2262 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2263 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); 2264 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U); 2265 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); 2266 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U); 2267 + 2268 + path = dma_tunnel1->paths[1]; 2269 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2270 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); 2271 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U); 2272 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); 2273 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U); 2274 + 2275 + dma_tunnel2 = tb_tunnel_alloc_dma(NULL, nhi, port, 9, 2, 9, 2); 2276 + KUNIT_ASSERT_TRUE(test, dma_tunnel2 != NULL); 2277 + KUNIT_ASSERT_EQ(test, dma_tunnel2->npaths, (size_t)2); 2278 + 2279 + path = dma_tunnel2->paths[0]; 2280 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2281 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); 2282 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U); 2283 + KUNIT_EXPECT_EQ(test, 
path->hops[1].nfc_credits, 0U); 2284 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); 2285 + 2286 + path = dma_tunnel2->paths[1]; 2287 + KUNIT_ASSERT_EQ(test, path->path_length, 2); 2288 + KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U); 2289 + KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U); 2290 + KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U); 2291 + KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U); 2292 + 2293 + tb_tunnel_free(dma_tunnel2); 2294 + tb_tunnel_free(dma_tunnel1); 2295 + tb_tunnel_free(usb3_tunnel); 2296 + tb_tunnel_free(dp_tunnel2); 2297 + tb_tunnel_free(dp_tunnel1); 2298 + tb_tunnel_free(pcie_tunnel); 2299 + } 2300 + 1900 2301 static const u32 root_directory[] = { 1901 2302 0x55584401, /* "UXD" v1 */ 1902 2303 0x00000018, /* Root directory length */ ··· 2642 2105 KUNIT_CASE(tb_test_tunnel_dma_tx), 2643 2106 KUNIT_CASE(tb_test_tunnel_dma_chain), 2644 2107 KUNIT_CASE(tb_test_tunnel_dma_match), 2108 + KUNIT_CASE(tb_test_credit_alloc_legacy_not_bonded), 2109 + KUNIT_CASE(tb_test_credit_alloc_legacy_bonded), 2110 + KUNIT_CASE(tb_test_credit_alloc_pcie), 2111 + KUNIT_CASE(tb_test_credit_alloc_dp), 2112 + KUNIT_CASE(tb_test_credit_alloc_usb3), 2113 + KUNIT_CASE(tb_test_credit_alloc_dma), 2114 + KUNIT_CASE(tb_test_credit_alloc_dma_multiple), 2115 + KUNIT_CASE(tb_test_credit_alloc_all), 2645 2116 KUNIT_CASE(tb_test_property_parse), 2646 2117 KUNIT_CASE(tb_test_property_format), 2647 2118 KUNIT_CASE(tb_test_property_copy),
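The expected per-hop credit counts in the KUnit cases above follow from the buffer-allocation arithmetic that `tb_available_credits()` in tunnel.c performs per lane adapter. A standalone userspace sketch of that bookkeeping, with illustrative parameter values (the struct and numbers are made up for the example, not read from hardware, and it assumes the reservations fit, unlike the ACPI-guarded kernel version):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative lane-adapter parameters; field names mirror the ones
 * tb_available_credits() uses but the values are invented. */
struct adapter {
	unsigned int total_credits;     /* all buffers on the adapter */
	unsigned int ctl_credits;       /* reserved for control traffic */
	unsigned int max_usb3_credits;
	unsigned int max_pcie_credits;
	unsigned int min_dp_aux_credits;
	unsigned int min_dp_main_credits;
	unsigned int dma_spare;         /* TB_DMA_CREDITS + TB_MIN_DMA_CREDITS */
};

/* Same shape as tb_available_credits(): after USB3, PCIe and DMA
 * reservations, as many DP streams as still fit get their minimum
 * credits; whatever remains is available for PCIe/DMA paths. */
static unsigned int available_credits(const struct adapter *a,
				      size_t *max_dp_streams)
{
	unsigned int per_dp = a->min_dp_aux_credits + a->min_dp_main_credits;
	int credits = a->total_credits - a->ctl_credits;
	size_t ndp;

	/* How many DP streams fit once the fixed reservations are taken */
	ndp = (credits - (a->max_usb3_credits + a->max_pcie_credits +
			  a->dma_spare)) / per_dp;
	credits -= ndp * per_dp;
	credits -= a->max_usb3_credits;

	if (max_dp_streams)
		*max_dp_streams = ndp;
	return credits > 0 ? credits : 0;
}
```

With, say, 120 total buffers, 2 for control, 32 for USB3, 6 for PCIe, a DP minimum of 1 aux + 17 main and a DMA spare of 15, three DP streams fit and 32 credits remain for PCIe/DMA.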
+330 -71
drivers/thunderbolt/tunnel.c
··· 34 34 #define TB_DP_AUX_PATH_OUT 1 35 35 #define TB_DP_AUX_PATH_IN 2 36 36 37 + /* Minimum number of credits needed for PCIe path */ 38 + #define TB_MIN_PCIE_CREDITS 6U 39 + /* 40 + * Number of credits we try to allocate for each DMA path if not limited 41 + * by the host router baMaxHI. 42 + */ 43 + #define TB_DMA_CREDITS 14U 44 + /* Minimum number of credits for DMA path */ 45 + #define TB_MIN_DMA_CREDITS 1U 46 + 37 47 static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" }; 38 48 39 49 #define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \ ··· 66 56 __TB_TUNNEL_PRINT(tb_info, tunnel, fmt, ##arg) 67 57 #define tb_tunnel_dbg(tunnel, fmt, arg...) \ 68 58 __TB_TUNNEL_PRINT(tb_dbg, tunnel, fmt, ##arg) 59 + 60 + static inline unsigned int tb_usable_credits(const struct tb_port *port) 61 + { 62 + return port->total_credits - port->ctl_credits; 63 + } 64 + 65 + /** 66 + * tb_available_credits() - Available credits for PCIe and DMA 67 + * @port: Lane adapter to check 68 + * @max_dp_streams: If non-%NULL stores maximum number of simultaneous DP 69 + * streams possible through this lane adapter 70 + */ 71 + static unsigned int tb_available_credits(const struct tb_port *port, 72 + size_t *max_dp_streams) 73 + { 74 + const struct tb_switch *sw = port->sw; 75 + int credits, usb3, pcie, spare; 76 + size_t ndp; 77 + 78 + usb3 = tb_acpi_may_tunnel_usb3() ? sw->max_usb3_credits : 0; 79 + pcie = tb_acpi_may_tunnel_pcie() ? sw->max_pcie_credits : 0; 80 + 81 + if (tb_acpi_is_xdomain_allowed()) { 82 + spare = min_not_zero(sw->max_dma_credits, TB_DMA_CREDITS); 83 + /* Add some credits for potential second DMA tunnel */ 84 + spare += TB_MIN_DMA_CREDITS; 85 + } else { 86 + spare = 0; 87 + } 88 + 89 + credits = tb_usable_credits(port); 90 + if (tb_acpi_may_tunnel_dp()) { 91 + /* 92 + * Maximum number of DP streams possible through the 93 + * lane adapter. 
94 + */ 95 + ndp = (credits - (usb3 + pcie + spare)) / 96 + (sw->min_dp_aux_credits + sw->min_dp_main_credits); 97 + } else { 98 + ndp = 0; 99 + } 100 + credits -= ndp * (sw->min_dp_aux_credits + sw->min_dp_main_credits); 101 + credits -= usb3; 102 + 103 + if (max_dp_streams) 104 + *max_dp_streams = ndp; 105 + 106 + return credits > 0 ? credits : 0; 107 + } 69 108 70 109 static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths, 71 110 enum tb_tunnel_type type) ··· 153 94 return 0; 154 95 } 155 96 156 - static int tb_initial_credits(const struct tb_switch *sw) 97 + static int tb_pci_init_credits(struct tb_path_hop *hop) 157 98 { 158 - /* If the path is complete sw is not NULL */ 159 - if (sw) { 160 - /* More credits for faster link */ 161 - switch (sw->link_speed * sw->link_width) { 162 - case 40: 163 - return 32; 164 - case 20: 165 - return 24; 166 - } 99 + struct tb_port *port = hop->in_port; 100 + struct tb_switch *sw = port->sw; 101 + unsigned int credits; 102 + 103 + if (tb_port_use_credit_allocation(port)) { 104 + unsigned int available; 105 + 106 + available = tb_available_credits(port, NULL); 107 + credits = min(sw->max_pcie_credits, available); 108 + 109 + if (credits < TB_MIN_PCIE_CREDITS) 110 + return -ENOSPC; 111 + 112 + credits = max(TB_MIN_PCIE_CREDITS, credits); 113 + } else { 114 + if (tb_port_is_null(port)) 115 + credits = port->bonded ? 
32 : 16; 116 + else 117 + credits = 7; 167 118 } 168 119 169 - return 16; 120 + hop->initial_credits = credits; 121 + return 0; 170 122 } 171 123 172 - static void tb_pci_init_path(struct tb_path *path) 124 + static int tb_pci_init_path(struct tb_path *path) 173 125 { 126 + struct tb_path_hop *hop; 127 + 174 128 path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL; 175 129 path->egress_shared_buffer = TB_PATH_NONE; 176 130 path->ingress_fc_enable = TB_PATH_ALL; ··· 191 119 path->priority = 3; 192 120 path->weight = 1; 193 121 path->drop_packages = 0; 194 - path->nfc_credits = 0; 195 - path->hops[0].initial_credits = 7; 196 - if (path->path_length > 1) 197 - path->hops[1].initial_credits = 198 - tb_initial_credits(path->hops[1].in_port->sw); 122 + 123 + tb_path_for_each_hop(path, hop) { 124 + int ret; 125 + 126 + ret = tb_pci_init_credits(hop); 127 + if (ret) 128 + return ret; 129 + } 130 + 131 + return 0; 199 132 } 200 133 201 134 /** ··· 240 163 goto err_free; 241 164 } 242 165 tunnel->paths[TB_PCI_PATH_UP] = path; 243 - tb_pci_init_path(tunnel->paths[TB_PCI_PATH_UP]); 166 + if (tb_pci_init_path(tunnel->paths[TB_PCI_PATH_UP])) 167 + goto err_free; 244 168 245 169 path = tb_path_discover(tunnel->dst_port, -1, down, TB_PCI_HOPID, NULL, 246 170 "PCIe Down"); 247 171 if (!path) 248 172 goto err_deactivate; 249 173 tunnel->paths[TB_PCI_PATH_DOWN] = path; 250 - tb_pci_init_path(tunnel->paths[TB_PCI_PATH_DOWN]); 174 + if (tb_pci_init_path(tunnel->paths[TB_PCI_PATH_DOWN])) 175 + goto err_deactivate; 251 176 252 177 /* Validate that the tunnel is complete */ 253 178 if (!tb_port_is_pcie_up(tunnel->dst_port)) { ··· 307 228 308 229 path = tb_path_alloc(tb, down, TB_PCI_HOPID, up, TB_PCI_HOPID, 0, 309 230 "PCIe Down"); 310 - if (!path) { 311 - tb_tunnel_free(tunnel); 312 - return NULL; 313 - } 314 - tb_pci_init_path(path); 231 + if (!path) 232 + goto err_free; 315 233 tunnel->paths[TB_PCI_PATH_DOWN] = path; 234 + if (tb_pci_init_path(path)) 235 + goto err_free; 316 236 
317 237 path = tb_path_alloc(tb, up, TB_PCI_HOPID, down, TB_PCI_HOPID, 0, 318 238 "PCIe Up"); 319 - if (!path) { 320 - tb_tunnel_free(tunnel); 321 - return NULL; 322 - } 323 - tb_pci_init_path(path); 239 + if (!path) 240 + goto err_free; 324 241 tunnel->paths[TB_PCI_PATH_UP] = path; 242 + if (tb_pci_init_path(path)) 243 + goto err_free; 325 244 326 245 return tunnel; 246 + 247 + err_free: 248 + tb_tunnel_free(tunnel); 249 + return NULL; 327 250 } 328 251 329 252 static bool tb_dp_is_usb4(const struct tb_switch *sw) ··· 680 599 return 0; 681 600 } 682 601 602 + static void tb_dp_init_aux_credits(struct tb_path_hop *hop) 603 + { 604 + struct tb_port *port = hop->in_port; 605 + struct tb_switch *sw = port->sw; 606 + 607 + if (tb_port_use_credit_allocation(port)) 608 + hop->initial_credits = sw->min_dp_aux_credits; 609 + else 610 + hop->initial_credits = 1; 611 + } 612 + 683 613 static void tb_dp_init_aux_path(struct tb_path *path) 684 614 { 685 - int i; 615 + struct tb_path_hop *hop; 686 616 687 617 path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL; 688 618 path->egress_shared_buffer = TB_PATH_NONE; ··· 702 610 path->priority = 2; 703 611 path->weight = 1; 704 612 705 - for (i = 0; i < path->path_length; i++) 706 - path->hops[i].initial_credits = 1; 613 + tb_path_for_each_hop(path, hop) 614 + tb_dp_init_aux_credits(hop); 707 615 } 708 616 709 - static void tb_dp_init_video_path(struct tb_path *path, bool discover) 617 + static int tb_dp_init_video_credits(struct tb_path_hop *hop) 710 618 { 711 - u32 nfc_credits = path->hops[0].in_port->config.nfc_credits; 619 + struct tb_port *port = hop->in_port; 620 + struct tb_switch *sw = port->sw; 621 + 622 + if (tb_port_use_credit_allocation(port)) { 623 + unsigned int nfc_credits; 624 + size_t max_dp_streams; 625 + 626 + tb_available_credits(port, &max_dp_streams); 627 + /* 628 + * Read the number of currently allocated NFC credits 629 + * from the lane adapter. 
Since we only use them for DP 630 + * tunneling we can use that to figure out how many DP 631 + * tunnels already go through the lane adapter. 632 + */ 633 + nfc_credits = port->config.nfc_credits & 634 + ADP_CS_4_NFC_BUFFERS_MASK; 635 + if (nfc_credits / sw->min_dp_main_credits > max_dp_streams) 636 + return -ENOSPC; 637 + 638 + hop->nfc_credits = sw->min_dp_main_credits; 639 + } else { 640 + hop->nfc_credits = min(port->total_credits - 2, 12U); 641 + } 642 + 643 + return 0; 644 + } 645 + 646 + static int tb_dp_init_video_path(struct tb_path *path) 647 + { 648 + struct tb_path_hop *hop; 712 649 713 650 path->egress_fc_enable = TB_PATH_NONE; 714 651 path->egress_shared_buffer = TB_PATH_NONE; ··· 746 625 path->priority = 1; 747 626 path->weight = 1; 748 627 749 - if (discover) { 750 - path->nfc_credits = nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK; 751 - } else { 752 - u32 max_credits; 628 + tb_path_for_each_hop(path, hop) { 629 + int ret; 753 630 754 - max_credits = (nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >> 755 - ADP_CS_4_TOTAL_BUFFERS_SHIFT; 756 - /* Leave some credits for AUX path */ 757 - path->nfc_credits = min(max_credits - 2, 12U); 631 + ret = tb_dp_init_video_credits(hop); 632 + if (ret) 633 + return ret; 758 634 } 635 + 636 + return 0; 759 637 } 760 638 761 639 /** ··· 794 674 goto err_free; 795 675 } 796 676 tunnel->paths[TB_DP_VIDEO_PATH_OUT] = path; 797 - tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT], true); 677 + if (tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT])) 678 + goto err_free; 798 679 799 680 path = tb_path_discover(in, TB_DP_AUX_TX_HOPID, NULL, -1, NULL, "AUX TX"); 800 681 if (!path) ··· 882 761 1, "Video"); 883 762 if (!path) 884 763 goto err_free; 885 - tb_dp_init_video_path(path, false); 764 + tb_dp_init_video_path(path); 886 765 paths[TB_DP_VIDEO_PATH_OUT] = path; 887 766 888 767 path = tb_path_alloc(tb, in, TB_DP_AUX_TX_HOPID, out, ··· 906 785 return NULL; 907 786 } 908 787 909 - static u32 tb_dma_credits(struct 
tb_port *nhi) 788 + static unsigned int tb_dma_available_credits(const struct tb_port *port) 910 789 { 911 - u32 max_credits; 790 + const struct tb_switch *sw = port->sw; 791 + int credits; 912 792 913 - max_credits = (nhi->config.nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >> 914 - ADP_CS_4_TOTAL_BUFFERS_SHIFT; 915 - return min(max_credits, 13U); 793 + credits = tb_available_credits(port, NULL); 794 + if (tb_acpi_may_tunnel_pcie()) 795 + credits -= sw->max_pcie_credits; 796 + credits -= port->dma_credits; 797 + 798 + return credits > 0 ? credits : 0; 916 799 } 917 800 918 - static void tb_dma_init_path(struct tb_path *path, unsigned int efc, u32 credits) 801 + static int tb_dma_reserve_credits(struct tb_path_hop *hop, unsigned int credits) 919 802 { 920 - int i; 803 + struct tb_port *port = hop->in_port; 921 804 922 - path->egress_fc_enable = efc; 805 + if (tb_port_use_credit_allocation(port)) { 806 + unsigned int available = tb_dma_available_credits(port); 807 + 808 + /* 809 + * Need to have at least TB_MIN_DMA_CREDITS, otherwise 810 + * DMA path cannot be established. 811 + */ 812 + if (available < TB_MIN_DMA_CREDITS) 813 + return -ENOSPC; 814 + 815 + while (credits > available) 816 + credits--; 817 + 818 + tb_port_dbg(port, "reserving %u credits for DMA path\n", 819 + credits); 820 + 821 + port->dma_credits += credits; 822 + } else { 823 + if (tb_port_is_null(port)) 824 + credits = port->bonded ? 
14 : 6; 825 + else 826 + credits = min(port->total_credits, credits); 827 + } 828 + 829 + hop->initial_credits = credits; 830 + return 0; 831 + } 832 + 833 + /* Path from lane adapter to NHI */ 834 + static int tb_dma_init_rx_path(struct tb_path *path, unsigned int credits) 835 + { 836 + struct tb_path_hop *hop; 837 + unsigned int i, tmp; 838 + 839 + path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL; 923 840 path->ingress_fc_enable = TB_PATH_ALL; 924 841 path->egress_shared_buffer = TB_PATH_NONE; 925 842 path->ingress_shared_buffer = TB_PATH_NONE; ··· 965 806 path->weight = 1; 966 807 path->clear_fc = true; 967 808 968 - for (i = 0; i < path->path_length; i++) 969 - path->hops[i].initial_credits = credits; 809 + /* 810 + * First lane adapter is the one connected to the remote host. 811 + * We don't tunnel other traffic over this link so can use all 812 + * the credits (except the ones reserved for control traffic). 813 + */ 814 + hop = &path->hops[0]; 815 + tmp = min(tb_usable_credits(hop->in_port), credits); 816 + hop->initial_credits = tmp; 817 + hop->in_port->dma_credits += tmp; 818 + 819 + for (i = 1; i < path->path_length; i++) { 820 + int ret; 821 + 822 + ret = tb_dma_reserve_credits(&path->hops[i], credits); 823 + if (ret) 824 + return ret; 825 + } 826 + 827 + return 0; 828 + } 829 + 830 + /* Path from NHI to lane adapter */ 831 + static int tb_dma_init_tx_path(struct tb_path *path, unsigned int credits) 832 + { 833 + struct tb_path_hop *hop; 834 + 835 + path->egress_fc_enable = TB_PATH_ALL; 836 + path->ingress_fc_enable = TB_PATH_ALL; 837 + path->egress_shared_buffer = TB_PATH_NONE; 838 + path->ingress_shared_buffer = TB_PATH_NONE; 839 + path->priority = 5; 840 + path->weight = 1; 841 + path->clear_fc = true; 842 + 843 + tb_path_for_each_hop(path, hop) { 844 + int ret; 845 + 846 + ret = tb_dma_reserve_credits(hop, credits); 847 + if (ret) 848 + return ret; 849 + } 850 + 851 + return 0; 852 + } 853 + 854 + static void tb_dma_release_credits(struct 
tb_path_hop *hop) 855 + { 856 + struct tb_port *port = hop->in_port; 857 + 858 + if (tb_port_use_credit_allocation(port)) { 859 + port->dma_credits -= hop->initial_credits; 860 + 861 + tb_port_dbg(port, "released %u DMA path credits\n", 862 + hop->initial_credits); 863 + } 864 + } 865 + 866 + static void tb_dma_deinit_path(struct tb_path *path) 867 + { 868 + struct tb_path_hop *hop; 869 + 870 + tb_path_for_each_hop(path, hop) 871 + tb_dma_release_credits(hop); 872 + } 873 + 874 + static void tb_dma_deinit(struct tb_tunnel *tunnel) 875 + { 876 + int i; 877 + 878 + for (i = 0; i < tunnel->npaths; i++) { 879 + if (!tunnel->paths[i]) 880 + continue; 881 + tb_dma_deinit_path(tunnel->paths[i]); 882 + } 970 883 } 971 884 972 885 /** ··· 1063 832 struct tb_tunnel *tunnel; 1064 833 size_t npaths = 0, i = 0; 1065 834 struct tb_path *path; 1066 - u32 credits; 835 + int credits; 1067 836 1068 837 if (receive_ring > 0) 1069 838 npaths++; ··· 1079 848 1080 849 tunnel->src_port = nhi; 1081 850 tunnel->dst_port = dst; 851 + tunnel->deinit = tb_dma_deinit; 1082 852 1083 - credits = tb_dma_credits(nhi); 853 + credits = min_not_zero(TB_DMA_CREDITS, nhi->sw->max_dma_credits); 1084 854 1085 855 if (receive_ring > 0) { 1086 856 path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0, 1087 857 "DMA RX"); 1088 - if (!path) { 1089 - tb_tunnel_free(tunnel); 1090 - return NULL; 1091 - } 1092 - tb_dma_init_path(path, TB_PATH_SOURCE | TB_PATH_INTERNAL, credits); 858 + if (!path) 859 + goto err_free; 1093 860 tunnel->paths[i++] = path; 861 + if (tb_dma_init_rx_path(path, credits)) { 862 + tb_tunnel_dbg(tunnel, "not enough buffers for RX path\n"); 863 + goto err_free; 864 + } 1094 865 } 1095 866 1096 867 if (transmit_ring > 0) { 1097 868 path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0, 1098 869 "DMA TX"); 1099 - if (!path) { 1100 - tb_tunnel_free(tunnel); 1101 - return NULL; 1102 - } 1103 - tb_dma_init_path(path, TB_PATH_ALL, credits); 870 + if (!path) 871 + goto 
err_free; 1104 872 tunnel->paths[i++] = path; 873 + if (tb_dma_init_tx_path(path, credits)) { 874 + tb_tunnel_dbg(tunnel, "not enough buffers for TX path\n"); 875 + goto err_free; 876 + } 1105 877 } 1106 878 1107 879 return tunnel; 880 + 881 + err_free: 882 + tb_tunnel_free(tunnel); 883 + return NULL; 1108 884 } 1109 885 1110 886 /** ··· 1305 1067 tunnel->allocated_up, tunnel->allocated_down); 1306 1068 } 1307 1069 1070 + static void tb_usb3_init_credits(struct tb_path_hop *hop) 1071 + { 1072 + struct tb_port *port = hop->in_port; 1073 + struct tb_switch *sw = port->sw; 1074 + unsigned int credits; 1075 + 1076 + if (tb_port_use_credit_allocation(port)) { 1077 + credits = sw->max_usb3_credits; 1078 + } else { 1079 + if (tb_port_is_null(port)) 1080 + credits = port->bonded ? 32 : 16; 1081 + else 1082 + credits = 7; 1083 + } 1084 + 1085 + hop->initial_credits = credits; 1086 + } 1087 + 1308 1088 static void tb_usb3_init_path(struct tb_path *path) 1309 1089 { 1090 + struct tb_path_hop *hop; 1091 + 1310 1092 path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL; 1311 1093 path->egress_shared_buffer = TB_PATH_NONE; 1312 1094 path->ingress_fc_enable = TB_PATH_ALL; ··· 1334 1076 path->priority = 3; 1335 1077 path->weight = 3; 1336 1078 path->drop_packages = 0; 1337 - path->nfc_credits = 0; 1338 - path->hops[0].initial_credits = 7; 1339 - if (path->path_length > 1) 1340 - path->hops[1].initial_credits = 1341 - tb_initial_credits(path->hops[1].in_port->sw); 1079 + 1080 + tb_path_for_each_hop(path, hop) 1081 + tb_usb3_init_credits(hop); 1342 1082 } 1343 1083 1344 1084 /** ··· 1535 1279 1536 1280 if (!tunnel) 1537 1281 return; 1282 + 1283 + if (tunnel->deinit) 1284 + tunnel->deinit(tunnel); 1538 1285 1539 1286 for (i = 0; i < tunnel->npaths; i++) { 1540 1287 if (tunnel->paths[i])
+2
drivers/thunderbolt/tunnel.h
··· 27 27 * @paths: All paths required by the tunnel 28 28 * @npaths: Number of paths in @paths 29 29 * @init: Optional tunnel specific initialization 30 + * @deinit: Optional tunnel specific de-initialization 30 31 * @activate: Optional tunnel specific activation/deactivation 31 32 * @consumed_bandwidth: Return how much bandwidth the tunnel consumes 32 33 * @release_unused_bandwidth: Release all unused bandwidth ··· 48 47 struct tb_path **paths; 49 48 size_t npaths; 50 49 int (*init)(struct tb_tunnel *tunnel); 50 + void (*deinit)(struct tb_tunnel *tunnel); 51 51 int (*activate)(struct tb_tunnel *tunnel, bool activate); 52 52 int (*consumed_bandwidth)(struct tb_tunnel *tunnel, int *consumed_up, 53 53 int *consumed_down);
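The new @deinit member gives a tunnel type an optional hook that tb_tunnel_free() runs before tearing the paths down; this is what lets the DMA tunnel give its reserved buffers back. A minimal userspace model of that optional-callback pattern (all names here are hypothetical stand-ins, not the driver's API):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical tunnel with an optional de-init hook, modeled on the
 * init/deinit pair added to struct tb_tunnel. */
struct tunnel {
	int credits_reserved;
	void (*deinit)(struct tunnel *tunnel);
};

/* Stands in for the per-port dma_credits bookkeeping */
static int reserved_total;

static void dma_deinit(struct tunnel *tunnel)
{
	/* Return the buffers this tunnel had reserved */
	reserved_total -= tunnel->credits_reserved;
}

static struct tunnel *tunnel_alloc_dma(int credits)
{
	struct tunnel *t = calloc(1, sizeof(*t));

	if (!t)
		return NULL;
	t->credits_reserved = credits;
	t->deinit = dma_deinit;	/* only DMA tunnels set the hook */
	reserved_total += credits;
	return t;
}

static void tunnel_free(struct tunnel *t)
{
	if (!t)
		return;
	if (t->deinit)		/* optional, like tb_tunnel_free() */
		t->deinit(t);
	free(t);
}
```

Keeping the hook optional means PCIe/DP/USB3 tunnels, which reserve nothing dynamically, need no changes to their free path.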
+335 -103
drivers/thunderbolt/usb4.c
··· 13 13 #include "sb_regs.h" 14 14 #include "tb.h" 15 15 16 - #define USB4_DATA_DWORDS 16 17 16 #define USB4_DATA_RETRIES 3 18 17 19 18 enum usb4_sb_target { ··· 36 37 37 38 #define USB4_NVM_SECTOR_SIZE_MASK GENMASK(23, 0) 38 39 39 - typedef int (*read_block_fn)(void *, unsigned int, void *, size_t); 40 - typedef int (*write_block_fn)(void *, const void *, size_t); 40 + #define USB4_BA_LENGTH_MASK GENMASK(7, 0) 41 + #define USB4_BA_INDEX_MASK GENMASK(15, 0) 42 + 43 + enum usb4_ba_index { 44 + USB4_BA_MAX_USB3 = 0x1, 45 + USB4_BA_MIN_DP_AUX = 0x2, 46 + USB4_BA_MIN_DP_MAIN = 0x3, 47 + USB4_BA_MAX_PCIE = 0x4, 48 + USB4_BA_MAX_HI = 0x5, 49 + }; 50 + 51 + #define USB4_BA_VALUE_MASK GENMASK(31, 16) 52 + #define USB4_BA_VALUE_SHIFT 16 41 53 42 54 static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit, 43 55 u32 value, int timeout_msec) ··· 70 60 } while (ktime_before(ktime_get(), timeout)); 71 61 72 62 return -ETIMEDOUT; 73 - } 74 - 75 - static int usb4_do_read_data(u16 address, void *buf, size_t size, 76 - read_block_fn read_block, void *read_block_data) 77 - { 78 - unsigned int retries = USB4_DATA_RETRIES; 79 - unsigned int offset; 80 - 81 - do { 82 - unsigned int dwaddress, dwords; 83 - u8 data[USB4_DATA_DWORDS * 4]; 84 - size_t nbytes; 85 - int ret; 86 - 87 - offset = address & 3; 88 - nbytes = min_t(size_t, size + offset, USB4_DATA_DWORDS * 4); 89 - 90 - dwaddress = address / 4; 91 - dwords = ALIGN(nbytes, 4) / 4; 92 - 93 - ret = read_block(read_block_data, dwaddress, data, dwords); 94 - if (ret) { 95 - if (ret != -ENODEV && retries--) 96 - continue; 97 - return ret; 98 - } 99 - 100 - nbytes -= offset; 101 - memcpy(buf, data + offset, nbytes); 102 - 103 - size -= nbytes; 104 - address += nbytes; 105 - buf += nbytes; 106 - } while (size > 0); 107 - 108 - return 0; 109 - } 110 - 111 - static int usb4_do_write_data(unsigned int address, const void *buf, size_t size, 112 - write_block_fn write_next_block, void *write_block_data) 113 - { 114 - 
unsigned int retries = USB4_DATA_RETRIES; 115 - unsigned int offset; 116 - 117 - offset = address & 3; 118 - address = address & ~3; 119 - 120 - do { 121 - u32 nbytes = min_t(u32, size, USB4_DATA_DWORDS * 4); 122 - u8 data[USB4_DATA_DWORDS * 4]; 123 - int ret; 124 - 125 - memcpy(data + offset, buf, nbytes); 126 - 127 - ret = write_next_block(write_block_data, data, nbytes / 4); 128 - if (ret) { 129 - if (ret == -ETIMEDOUT) { 130 - if (retries--) 131 - continue; 132 - ret = -EIO; 133 - } 134 - return ret; 135 - } 136 - 137 - size -= nbytes; 138 - address += nbytes; 139 - buf += nbytes; 140 - } while (size > 0); 141 - 142 - return 0; 143 63 } 144 64 145 65 static int usb4_native_switch_op(struct tb_switch *sw, u16 opcode, ··· 133 193 { 134 194 const struct tb_cm_ops *cm_ops = sw->tb->cm_ops; 135 195 136 - if (tx_dwords > USB4_DATA_DWORDS || rx_dwords > USB4_DATA_DWORDS) 196 + if (tx_dwords > NVM_DATA_DWORDS || rx_dwords > NVM_DATA_DWORDS) 137 197 return -EINVAL; 138 198 139 199 /* ··· 260 320 parent = tb_switch_parent(sw); 261 321 downstream_port = tb_port_at(tb_route(sw), parent); 262 322 sw->link_usb4 = link_is_usb4(downstream_port); 263 - tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ? "USB4" : "TBT3"); 323 + tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ? 
"USB4" : "TBT"); 264 324 265 325 xhci = val & ROUTER_CS_6_HCI; 266 326 tbt3 = !(val & ROUTER_CS_6_TNS); ··· 354 414 int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf, 355 415 size_t size) 356 416 { 357 - return usb4_do_read_data(address, buf, size, 358 - usb4_switch_drom_read_block, sw); 417 + return tb_nvm_read_data(address, buf, size, USB4_DATA_RETRIES, 418 + usb4_switch_drom_read_block, sw); 359 419 } 360 420 361 421 /** ··· 413 473 414 474 val &= ~(PORT_CS_19_WOC | PORT_CS_19_WOD | PORT_CS_19_WOU4); 415 475 416 - if (flags & TB_WAKE_ON_CONNECT) 417 - val |= PORT_CS_19_WOC; 418 - if (flags & TB_WAKE_ON_DISCONNECT) 419 - val |= PORT_CS_19_WOD; 420 - if (flags & TB_WAKE_ON_USB4) 476 + if (tb_is_upstream_port(port)) { 421 477 val |= PORT_CS_19_WOU4; 478 + } else { 479 + bool configured = val & PORT_CS_19_PC; 480 + 481 + if ((flags & TB_WAKE_ON_CONNECT) && !configured) 482 + val |= PORT_CS_19_WOC; 483 + if ((flags & TB_WAKE_ON_DISCONNECT) && configured) 484 + val |= PORT_CS_19_WOD; 485 + if ((flags & TB_WAKE_ON_USB4) && configured) 486 + val |= PORT_CS_19_WOU4; 487 + } 422 488 423 489 ret = tb_port_write(port, &val, TB_CFG_PORT, 424 490 port->cap_usb4 + PORT_CS_19, 1); ··· 433 487 } 434 488 435 489 /* 436 - * Enable wakes from PCIe and USB 3.x on this router. Only 490 + * Enable wakes from PCIe, USB 3.x and DP on this router. Only 437 491 * needed for device routers. 
438 492 */ 439 493 if (route) { ··· 441 495 if (ret) 442 496 return ret; 443 497 444 - val &= ~(ROUTER_CS_5_WOP | ROUTER_CS_5_WOU); 498 + val &= ~(ROUTER_CS_5_WOP | ROUTER_CS_5_WOU | ROUTER_CS_5_WOD); 445 499 if (flags & TB_WAKE_ON_USB3) 446 500 val |= ROUTER_CS_5_WOU; 447 501 if (flags & TB_WAKE_ON_PCIE) 448 502 val |= ROUTER_CS_5_WOP; 503 + if (flags & TB_WAKE_ON_DP) 504 + val |= ROUTER_CS_5_WOD; 449 505 450 506 ret = tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1); 451 507 if (ret) ··· 543 595 int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf, 544 596 size_t size) 545 597 { 546 - return usb4_do_read_data(address, buf, size, 547 - usb4_switch_nvm_read_block, sw); 598 + return tb_nvm_read_data(address, buf, size, USB4_DATA_RETRIES, 599 + usb4_switch_nvm_read_block, sw); 548 600 } 549 601 550 - static int usb4_switch_nvm_set_offset(struct tb_switch *sw, 551 - unsigned int address) 602 + /** 603 + * usb4_switch_nvm_set_offset() - Set NVM write offset 604 + * @sw: USB4 router 605 + * @address: Start offset 606 + * 607 + * Explicitly sets NVM write offset. Normally when writing to NVM this 608 + * is done automatically by usb4_switch_nvm_write(). 609 + * 610 + * Returns %0 on success and negative errno if there was a failure. 611 + */ 612 + int usb4_switch_nvm_set_offset(struct tb_switch *sw, unsigned int address) 552 613 { 553 614 u32 metadata, dwaddress; 554 615 u8 status = 0; ··· 575 618 return status ? 
-EIO : 0; 576 619 } 577 620 578 - static int usb4_switch_nvm_write_next_block(void *data, const void *buf, 579 - size_t dwords) 621 + static int usb4_switch_nvm_write_next_block(void *data, unsigned int dwaddress, 622 + const void *buf, size_t dwords) 580 623 { 581 624 struct tb_switch *sw = data; 582 625 u8 status; ··· 609 652 if (ret) 610 653 return ret; 611 654 612 - return usb4_do_write_data(address, buf, size, 613 - usb4_switch_nvm_write_next_block, sw); 655 + return tb_nvm_write_data(address, buf, size, USB4_DATA_RETRIES, 656 + usb4_switch_nvm_write_next_block, sw); 614 657 } 615 658 616 659 /** ··· 690 733 } 691 734 692 735 return 0; 736 + } 737 + 738 + /** 739 + * usb4_switch_credits_init() - Read buffer allocation parameters 740 + * @sw: USB4 router 741 + * 742 + * Reads @sw buffer allocation parameters and initializes @sw buffer 743 + * allocation fields accordingly. Specifically @sw->credit_allocation 744 + * is set to %true if these parameters can be used in tunneling. 745 + * 746 + * Returns %0 on success and negative errno otherwise. 
747 + */ 748 + int usb4_switch_credits_init(struct tb_switch *sw) 749 + { 750 + int max_usb3, min_dp_aux, min_dp_main, max_pcie, max_dma; 751 + int ret, length, i, nports; 752 + const struct tb_port *port; 753 + u32 data[NVM_DATA_DWORDS]; 754 + u32 metadata = 0; 755 + u8 status = 0; 756 + 757 + memset(data, 0, sizeof(data)); 758 + ret = usb4_switch_op_data(sw, USB4_SWITCH_OP_BUFFER_ALLOC, &metadata, 759 + &status, NULL, 0, data, ARRAY_SIZE(data)); 760 + if (ret) 761 + return ret; 762 + if (status) 763 + return -EIO; 764 + 765 + length = metadata & USB4_BA_LENGTH_MASK; 766 + if (WARN_ON(length > ARRAY_SIZE(data))) 767 + return -EMSGSIZE; 768 + 769 + max_usb3 = -1; 770 + min_dp_aux = -1; 771 + min_dp_main = -1; 772 + max_pcie = -1; 773 + max_dma = -1; 774 + 775 + tb_sw_dbg(sw, "credit allocation parameters:\n"); 776 + 777 + for (i = 0; i < length; i++) { 778 + u16 index, value; 779 + 780 + index = data[i] & USB4_BA_INDEX_MASK; 781 + value = (data[i] & USB4_BA_VALUE_MASK) >> USB4_BA_VALUE_SHIFT; 782 + 783 + switch (index) { 784 + case USB4_BA_MAX_USB3: 785 + tb_sw_dbg(sw, " USB3: %u\n", value); 786 + max_usb3 = value; 787 + break; 788 + case USB4_BA_MIN_DP_AUX: 789 + tb_sw_dbg(sw, " DP AUX: %u\n", value); 790 + min_dp_aux = value; 791 + break; 792 + case USB4_BA_MIN_DP_MAIN: 793 + tb_sw_dbg(sw, " DP main: %u\n", value); 794 + min_dp_main = value; 795 + break; 796 + case USB4_BA_MAX_PCIE: 797 + tb_sw_dbg(sw, " PCIe: %u\n", value); 798 + max_pcie = value; 799 + break; 800 + case USB4_BA_MAX_HI: 801 + tb_sw_dbg(sw, " DMA: %u\n", value); 802 + max_dma = value; 803 + break; 804 + default: 805 + tb_sw_dbg(sw, " unknown credit allocation index %#x, skipping\n", 806 + index); 807 + break; 808 + } 809 + } 810 + 811 + /* 812 + * Validate the buffer allocation preferences. If we find 813 + * issues, log a warning and fall back using the hard-coded 814 + * values. 
815 + */ 816 + 817 + /* Host router must report baMaxHI */ 818 + if (!tb_route(sw) && max_dma < 0) { 819 + tb_sw_warn(sw, "host router is missing baMaxHI\n"); 820 + goto err_invalid; 821 + } 822 + 823 + nports = 0; 824 + tb_switch_for_each_port(sw, port) { 825 + if (tb_port_is_null(port)) 826 + nports++; 827 + } 828 + 829 + /* Must have DP buffer allocation (multiple USB4 ports) */ 830 + if (nports > 2 && (min_dp_aux < 0 || min_dp_main < 0)) { 831 + tb_sw_warn(sw, "multiple USB4 ports require baMinDPaux/baMinDPmain\n"); 832 + goto err_invalid; 833 + } 834 + 835 + tb_switch_for_each_port(sw, port) { 836 + if (tb_port_is_dpout(port) && min_dp_main < 0) { 837 + tb_sw_warn(sw, "missing baMinDPmain"); 838 + goto err_invalid; 839 + } 840 + if ((tb_port_is_dpin(port) || tb_port_is_dpout(port)) && 841 + min_dp_aux < 0) { 842 + tb_sw_warn(sw, "missing baMinDPaux"); 843 + goto err_invalid; 844 + } 845 + if ((tb_port_is_usb3_down(port) || tb_port_is_usb3_up(port)) && 846 + max_usb3 < 0) { 847 + tb_sw_warn(sw, "missing baMaxUSB3"); 848 + goto err_invalid; 849 + } 850 + if ((tb_port_is_pcie_down(port) || tb_port_is_pcie_up(port)) && 851 + max_pcie < 0) { 852 + tb_sw_warn(sw, "missing baMaxPCIe"); 853 + goto err_invalid; 854 + } 855 + } 856 + 857 + /* 858 + * Buffer allocation passed the validation so we can use it in 859 + * path creation. 
860 + */ 861 + sw->credit_allocation = true; 862 + if (max_usb3 > 0) 863 + sw->max_usb3_credits = max_usb3; 864 + if (min_dp_aux > 0) 865 + sw->min_dp_aux_credits = min_dp_aux; 866 + if (min_dp_main > 0) 867 + sw->min_dp_main_credits = min_dp_main; 868 + if (max_pcie > 0) 869 + sw->max_pcie_credits = max_pcie; 870 + if (max_dma > 0) 871 + sw->max_dma_credits = max_dma; 872 + 873 + return 0; 874 + 875 + err_invalid: 876 + return -EINVAL; 693 877 } 694 878 695 879 /** ··· 995 897 } 996 898 997 899 /** 900 + * usb4_switch_add_ports() - Add USB4 ports for this router 901 + * @sw: USB4 router 902 + * 903 + * For a USB4 router, finds all USB4 ports and registers a device for each. 904 + * Can be called for any router. 905 + * 906 + * Return %0 in case of success and negative errno in case of failure. 907 + */ 908 + int usb4_switch_add_ports(struct tb_switch *sw) 909 + { 910 + struct tb_port *port; 911 + 912 + if (tb_switch_is_icm(sw) || !tb_switch_is_usb4(sw)) 913 + return 0; 914 + 915 + tb_switch_for_each_port(sw, port) { 916 + struct usb4_port *usb4; 917 + 918 + if (!tb_port_is_null(port)) 919 + continue; 920 + if (!port->cap_usb4) 921 + continue; 922 + 923 + usb4 = usb4_port_device_add(port); 924 + if (IS_ERR(usb4)) { 925 + usb4_switch_remove_ports(sw); 926 + return PTR_ERR(usb4); 927 + } 928 + 929 + port->usb4 = usb4; 930 + } 931 + 932 + return 0; 933 + } 934 + 935 + /** 936 + * usb4_switch_remove_ports() - Removes USB4 ports from this router 937 + * @sw: USB4 router 938 + * 939 + * Unregisters previously registered USB4 ports. 
940 + */ 941 + void usb4_switch_remove_ports(struct tb_switch *sw) 942 + { 943 + struct tb_port *port; 944 + 945 + tb_switch_for_each_port(sw, port) { 946 + if (port->usb4) { 947 + usb4_port_device_remove(port->usb4); 948 + port->usb4 = NULL; 949 + } 950 + } 951 + } 952 + 953 + /** 998 954 * usb4_port_unlock() - Unlock USB4 downstream port 999 955 * @port: USB4 port to unlock 1000 956 * ··· 1181 1029 1182 1030 static int usb4_port_read_data(struct tb_port *port, void *data, size_t dwords) 1183 1031 { 1184 - if (dwords > USB4_DATA_DWORDS) 1032 + if (dwords > NVM_DATA_DWORDS) 1185 1033 return -EINVAL; 1186 1034 1187 1035 return tb_port_read(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2, ··· 1191 1039 static int usb4_port_write_data(struct tb_port *port, const void *data, 1192 1040 size_t dwords) 1193 1041 { 1194 - if (dwords > USB4_DATA_DWORDS) 1042 + if (dwords > NVM_DATA_DWORDS) 1195 1043 return -EINVAL; 1196 1044 1197 1045 return tb_port_write(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2, ··· 1327 1175 return -ETIMEDOUT; 1328 1176 } 1329 1177 1178 + static int usb4_port_set_router_offline(struct tb_port *port, bool offline) 1179 + { 1180 + u32 val = !offline; 1181 + int ret; 1182 + 1183 + ret = usb4_port_sb_write(port, USB4_SB_TARGET_ROUTER, 0, 1184 + USB4_SB_METADATA, &val, sizeof(val)); 1185 + if (ret) 1186 + return ret; 1187 + 1188 + val = USB4_SB_OPCODE_ROUTER_OFFLINE; 1189 + return usb4_port_sb_write(port, USB4_SB_TARGET_ROUTER, 0, 1190 + USB4_SB_OPCODE, &val, sizeof(val)); 1191 + } 1192 + 1193 + /** 1194 + * usb4_port_router_offline() - Put the USB4 port to offline mode 1195 + * @port: USB4 port 1196 + * 1197 + * This function puts the USB4 port into offline mode. In this mode the 1198 + * port no longer reacts to hotplug events. This needs to be 1199 + * called before retimer access is done when the USB4 link is not up. 1200 + * 1201 + * Returns %0 in case of success and negative errno if there was an 1202 + * error. 
1203 + */ 1204 + int usb4_port_router_offline(struct tb_port *port) 1205 + { 1206 + return usb4_port_set_router_offline(port, true); 1207 + } 1208 + 1209 + /** 1210 + * usb4_port_router_online() - Put the USB4 port back to online 1211 + * @port: USB4 port 1212 + * 1213 + * Makes the USB4 port functional again. 1214 + */ 1215 + int usb4_port_router_online(struct tb_port *port) 1216 + { 1217 + return usb4_port_set_router_offline(port, false); 1218 + } 1219 + 1330 1220 /** 1331 1221 * usb4_port_enumerate_retimers() - Send RT broadcast transaction 1332 1222 * @port: USB4 port ··· 1392 1198 { 1393 1199 return usb4_port_sb_op(port, USB4_SB_TARGET_RETIMER, index, opcode, 1394 1200 timeout_msec); 1201 + } 1202 + 1203 + /** 1204 + * usb4_port_retimer_set_inbound_sbtx() - Enable sideband channel transactions 1205 + * @port: USB4 port 1206 + * @index: Retimer index 1207 + * 1208 + * Enables sideband channel transactions on SBTX. Can be used when USB4 1209 + * link does not go up, for example if there is no device connected. 1210 + */ 1211 + int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index) 1212 + { 1213 + int ret; 1214 + 1215 + ret = usb4_port_retimer_op(port, index, USB4_SB_OPCODE_SET_INBOUND_SBTX, 1216 + 500); 1217 + 1218 + if (ret != -ENODEV) 1219 + return ret; 1220 + 1221 + /* 1222 + * Per the USB4 retimer spec, the retimer is not required to 1223 + * send an RT (Retimer Transaction) response for the first 1224 + * SET_INBOUND_SBTX command. 1225 + */ 1226 + return usb4_port_retimer_op(port, index, USB4_SB_OPCODE_SET_INBOUND_SBTX, 1227 + 500); 1395 1228 } 1396 1229 1397 1230 /** ··· 1513 1292 return ret ? 
ret : metadata & USB4_NVM_SECTOR_SIZE_MASK; 1514 1293 } 1515 1294 1516 - static int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index, 1517 - unsigned int address) 1295 + /** 1296 + * usb4_port_retimer_nvm_set_offset() - Set NVM write offset 1297 + * @port: USB4 port 1298 + * @index: Retimer index 1299 + * @address: Start offset 1300 + * 1301 + * Explicitly sets NVM write offset. Normally when writing to NVM this is 1302 + * done automatically by usb4_port_retimer_nvm_write(). 1303 + * 1304 + * Returns %0 on success and negative errno if there was a failure. 1305 + */ 1306 + int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index, 1307 + unsigned int address) 1518 1308 { 1519 1309 u32 metadata, dwaddress; 1520 1310 int ret; ··· 1548 1316 u8 index; 1549 1317 }; 1550 1318 1551 - static int usb4_port_retimer_nvm_write_next_block(void *data, const void *buf, 1552 - size_t dwords) 1319 + static int usb4_port_retimer_nvm_write_next_block(void *data, 1320 + unsigned int dwaddress, const void *buf, size_t dwords) 1553 1321 1554 1322 { 1555 1323 const struct retimer_info *info = data; ··· 1589 1357 if (ret) 1590 1358 return ret; 1591 1359 1592 - return usb4_do_write_data(address, buf, size, 1593 - usb4_port_retimer_nvm_write_next_block, &info); 1360 + return tb_nvm_write_data(address, buf, size, USB4_DATA_RETRIES, 1361 + usb4_port_retimer_nvm_write_next_block, &info); 1594 1362 } 1595 1363 1596 1364 /** ··· 1674 1442 int ret; 1675 1443 1676 1444 metadata = dwaddress << USB4_NVM_READ_OFFSET_SHIFT; 1677 - if (dwords < USB4_DATA_DWORDS) 1445 + if (dwords < NVM_DATA_DWORDS) 1678 1446 metadata |= dwords << USB4_NVM_READ_LENGTH_SHIFT; 1679 1447 1680 1448 ret = usb4_port_retimer_write(port, index, USB4_SB_METADATA, &metadata, ··· 1707 1475 { 1708 1476 struct retimer_info info = { .port = port, .index = index }; 1709 1477 1710 - return usb4_do_read_data(address, buf, size, 1711 - usb4_port_retimer_nvm_read_block, &info); 1478 + return 
tb_nvm_read_data(address, buf, size, USB4_DATA_RETRIES, 1479 + usb4_port_retimer_nvm_read_block, &info); 1712 1480 } 1713 1481 1714 1482 /**
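The buffer allocation parameters that usb4_switch_credits_init() decodes arrive packed one per dword: a parameter index in one half of the word and its value in the other, which the loop above extracts with the USB4_BA_* masks before dispatching on the index. A standalone sketch of that unpacking (the mask layout below is an assumption for illustration, not the driver's actual register definitions):

```c
#include <stdint.h>

/* Illustrative layout: low 16 bits carry the parameter index, high 16
 * bits the value. The real driver derives these from its USB4_BA_*
 * mask/shift definitions. */
#define BA_INDEX_MASK  0x0000ffffu
#define BA_VALUE_MASK  0xffff0000u
#define BA_VALUE_SHIFT 16

static uint16_t ba_index(uint32_t dword)
{
	return dword & BA_INDEX_MASK;
}

static uint16_t ba_value(uint32_t dword)
{
	return (dword & BA_VALUE_MASK) >> BA_VALUE_SHIFT;
}
```

With this packing, a dword of 0x00200001 would describe parameter index 1 with value 0x20; the driver's switch statement then maps each index to the matching `sw` credits field.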
+280
drivers/thunderbolt/usb4_port.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * USB4 port device 4 + * 5 + * Copyright (C) 2021, Intel Corporation 6 + * Author: Mika Westerberg <mika.westerberg@linux.intel.com> 7 + */ 8 + 9 + #include <linux/pm_runtime.h> 10 + 11 + #include "tb.h" 12 + 13 + static ssize_t link_show(struct device *dev, struct device_attribute *attr, 14 + char *buf) 15 + { 16 + struct usb4_port *usb4 = tb_to_usb4_port_device(dev); 17 + struct tb_port *port = usb4->port; 18 + struct tb *tb = port->sw->tb; 19 + const char *link; 20 + 21 + if (mutex_lock_interruptible(&tb->lock)) 22 + return -ERESTARTSYS; 23 + 24 + if (tb_is_upstream_port(port)) 25 + link = port->sw->link_usb4 ? "usb4" : "tbt"; 26 + else if (tb_port_has_remote(port)) 27 + link = port->remote->sw->link_usb4 ? "usb4" : "tbt"; 28 + else 29 + link = "none"; 30 + 31 + mutex_unlock(&tb->lock); 32 + 33 + return sysfs_emit(buf, "%s\n", link); 34 + } 35 + static DEVICE_ATTR_RO(link); 36 + 37 + static struct attribute *common_attrs[] = { 38 + &dev_attr_link.attr, 39 + NULL 40 + }; 41 + 42 + static const struct attribute_group common_group = { 43 + .attrs = common_attrs, 44 + }; 45 + 46 + static int usb4_port_offline(struct usb4_port *usb4) 47 + { 48 + struct tb_port *port = usb4->port; 49 + int ret; 50 + 51 + ret = tb_acpi_power_on_retimers(port); 52 + if (ret) 53 + return ret; 54 + 55 + ret = usb4_port_router_offline(port); 56 + if (ret) { 57 + tb_acpi_power_off_retimers(port); 58 + return ret; 59 + } 60 + 61 + ret = tb_retimer_scan(port, false); 62 + if (ret) { 63 + usb4_port_router_online(port); 64 + tb_acpi_power_off_retimers(port); 65 + } 66 + 67 + return ret; 68 + } 69 + 70 + static void usb4_port_online(struct usb4_port *usb4) 71 + { 72 + struct tb_port *port = usb4->port; 73 + 74 + usb4_port_router_online(port); 75 + tb_acpi_power_off_retimers(port); 76 + } 77 + 78 + static ssize_t offline_show(struct device *dev, 79 + struct device_attribute *attr, char *buf) 80 + { 81 + struct usb4_port *usb4 = 
tb_to_usb4_port_device(dev); 82 + 83 + return sysfs_emit(buf, "%d\n", usb4->offline); 84 + } 85 + 86 + static ssize_t offline_store(struct device *dev, 87 + struct device_attribute *attr, const char *buf, size_t count) 88 + { 89 + struct usb4_port *usb4 = tb_to_usb4_port_device(dev); 90 + struct tb_port *port = usb4->port; 91 + struct tb *tb = port->sw->tb; 92 + bool val; 93 + int ret; 94 + 95 + ret = kstrtobool(buf, &val); 96 + if (ret) 97 + return ret; 98 + 99 + pm_runtime_get_sync(&usb4->dev); 100 + 101 + if (mutex_lock_interruptible(&tb->lock)) { 102 + ret = -ERESTARTSYS; 103 + goto out_rpm; 104 + } 105 + 106 + if (val == usb4->offline) 107 + goto out_unlock; 108 + 109 + /* Offline mode works only for ports that are not connected */ 110 + if (tb_port_has_remote(port)) { 111 + ret = -EBUSY; 112 + goto out_unlock; 113 + } 114 + 115 + if (val) { 116 + ret = usb4_port_offline(usb4); 117 + if (ret) 118 + goto out_unlock; 119 + } else { 120 + usb4_port_online(usb4); 121 + tb_retimer_remove_all(port); 122 + } 123 + 124 + usb4->offline = val; 125 + tb_port_dbg(port, "%s offline mode\n", val ? "enter" : "exit"); 126 + 127 + out_unlock: 128 + mutex_unlock(&tb->lock); 129 + out_rpm: 130 + pm_runtime_mark_last_busy(&usb4->dev); 131 + pm_runtime_put_autosuspend(&usb4->dev); 132 + 133 + return ret ? 
ret : count; 134 + } 135 + static DEVICE_ATTR_RW(offline); 136 + 137 + static ssize_t rescan_store(struct device *dev, 138 + struct device_attribute *attr, const char *buf, size_t count) 139 + { 140 + struct usb4_port *usb4 = tb_to_usb4_port_device(dev); 141 + struct tb_port *port = usb4->port; 142 + struct tb *tb = port->sw->tb; 143 + bool val; 144 + int ret; 145 + 146 + ret = kstrtobool(buf, &val); 147 + if (ret) 148 + return ret; 149 + 150 + if (!val) 151 + return count; 152 + 153 + pm_runtime_get_sync(&usb4->dev); 154 + 155 + if (mutex_lock_interruptible(&tb->lock)) { 156 + ret = -ERESTARTSYS; 157 + goto out_rpm; 158 + } 159 + 160 + /* Must be in offline mode already */ 161 + if (!usb4->offline) { 162 + ret = -EINVAL; 163 + goto out_unlock; 164 + } 165 + 166 + tb_retimer_remove_all(port); 167 + ret = tb_retimer_scan(port, true); 168 + 169 + out_unlock: 170 + mutex_unlock(&tb->lock); 171 + out_rpm: 172 + pm_runtime_mark_last_busy(&usb4->dev); 173 + pm_runtime_put_autosuspend(&usb4->dev); 174 + 175 + return ret ? ret : count; 176 + } 177 + static DEVICE_ATTR_WO(rescan); 178 + 179 + static struct attribute *service_attrs[] = { 180 + &dev_attr_offline.attr, 181 + &dev_attr_rescan.attr, 182 + NULL 183 + }; 184 + 185 + static umode_t service_attr_is_visible(struct kobject *kobj, 186 + struct attribute *attr, int n) 187 + { 188 + struct device *dev = kobj_to_dev(kobj); 189 + struct usb4_port *usb4 = tb_to_usb4_port_device(dev); 190 + 191 + /* 192 + * Always need some platform help to cycle the modes so that 193 + * retimers can be accessed through the sideband. 194 + */ 195 + return usb4->can_offline ? 
attr->mode : 0; 196 + } 197 + 198 + static const struct attribute_group service_group = { 199 + .attrs = service_attrs, 200 + .is_visible = service_attr_is_visible, 201 + }; 202 + 203 + static const struct attribute_group *usb4_port_device_groups[] = { 204 + &common_group, 205 + &service_group, 206 + NULL 207 + }; 208 + 209 + static void usb4_port_device_release(struct device *dev) 210 + { 211 + struct usb4_port *usb4 = container_of(dev, struct usb4_port, dev); 212 + 213 + kfree(usb4); 214 + } 215 + 216 + struct device_type usb4_port_device_type = { 217 + .name = "usb4_port", 218 + .groups = usb4_port_device_groups, 219 + .release = usb4_port_device_release, 220 + }; 221 + 222 + /** 223 + * usb4_port_device_add() - Add USB4 port device 224 + * @port: Lane 0 adapter port to add the USB4 port 225 + * 226 + * Creates and registers a USB4 port device for @port. Returns the new 227 + * USB4 port device pointer or ERR_PTR() in case of error. 228 + */ 229 + struct usb4_port *usb4_port_device_add(struct tb_port *port) 230 + { 231 + struct usb4_port *usb4; 232 + int ret; 233 + 234 + usb4 = kzalloc(sizeof(*usb4), GFP_KERNEL); 235 + if (!usb4) 236 + return ERR_PTR(-ENOMEM); 237 + 238 + usb4->port = port; 239 + usb4->dev.type = &usb4_port_device_type; 240 + usb4->dev.parent = &port->sw->dev; 241 + dev_set_name(&usb4->dev, "usb4_port%d", port->port); 242 + 243 + ret = device_register(&usb4->dev); 244 + if (ret) { 245 + put_device(&usb4->dev); 246 + return ERR_PTR(ret); 247 + } 248 + 249 + pm_runtime_no_callbacks(&usb4->dev); 250 + pm_runtime_set_active(&usb4->dev); 251 + pm_runtime_enable(&usb4->dev); 252 + pm_runtime_set_autosuspend_delay(&usb4->dev, TB_AUTOSUSPEND_DELAY); 253 + pm_runtime_mark_last_busy(&usb4->dev); 254 + pm_runtime_use_autosuspend(&usb4->dev); 255 + 256 + return usb4; 257 + } 258 + 259 + /** 260 + * usb4_port_device_remove() - Removes USB4 port device 261 + * @usb4: USB4 port device 262 + * 263 + * Unregisters the USB4 port device from the system. 
The device will be 264 + * released when the last reference is dropped. 265 + */ 266 + void usb4_port_device_remove(struct usb4_port *usb4) 267 + { 268 + device_unregister(&usb4->dev); 269 + } 270 + 271 + /** 272 + * usb4_port_device_resume() - Resumes USB4 port device 273 + * @usb4: USB4 port device 274 + * 275 + * Used to resume USB4 port device after sleep state. 276 + */ 277 + int usb4_port_device_resume(struct usb4_port *usb4) 278 + { 279 + return usb4->offline ? usb4_port_offline(usb4) : 0; 280 + }
+10
drivers/thunderbolt/xdomain.c
··· 1527 1527 return ret; 1528 1528 } 1529 1529 1530 + ret = tb_port_wait_for_link_width(port, 2, 100); 1531 + if (ret) { 1532 + tb_port_warn(port, "timeout enabling lane bonding\n"); 1533 + return ret; 1534 + } 1535 + 1536 + tb_port_update_credits(port); 1530 1537 tb_xdomain_update_link_attributes(xd); 1531 1538 1532 1539 dev_dbg(&xd->dev, "lane bonding enabled\n"); ··· 1555 1548 port = tb_port_at(xd->route, tb_xdomain_parent(xd)); 1556 1549 if (port->dual_link_port) { 1557 1550 tb_port_lane_bonding_disable(port); 1551 + if (tb_port_wait_for_link_width(port, 1, 100) == -ETIMEDOUT) 1552 + tb_port_warn(port, "timeout disabling lane bonding\n"); 1558 1553 tb_port_disable(port->dual_link_port); 1554 + tb_port_update_credits(port); 1559 1555 tb_xdomain_update_link_attributes(xd); 1560 1556 1561 1557 dev_dbg(&xd->dev, "lane bonding disabled\n");
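The xdomain change polls until the link actually reports the expected width (2 lanes when bonding, 1 when unbonding) before updating credits, warning on timeout. The pattern behind tb_port_wait_for_link_width() reduces to checking successive width readings against the target within a bounded budget; a sketch with an array of sampled widths standing in for timed register reads:

```c
/* Poll-until-width pattern; samples[] stands in for repeated register
 * reads, and the sample count for the millisecond timeout budget. */
#define SKETCH_ETIMEDOUT 62 /* numeric value of ETIMEDOUT, for the sketch */

static int wait_for_width(const int *samples, int nsamples, int target)
{
	for (int i = 0; i < nsamples; i++)
		if (samples[i] == target)
			return 0;              /* width reached */
	return -SKETCH_ETIMEDOUT;              /* budget exhausted */
}
```

In the real driver a timeout on bonding is a hard failure, while a timeout on unbonding only produces a warning, as the hunks above show.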
+1 -1
drivers/usb/atm/cxacru.c
··· 180 180 struct mutex poll_state_serialize; 181 181 enum cxacru_poll_state poll_state; 182 182 183 - /* contol handles */ 183 + /* control handles */ 184 184 struct mutex cm_serialize; 185 185 u8 *rcv_buf; 186 186 u8 *snd_buf;
+3 -3
drivers/usb/cdns3/cdns3-ep0.c
··· 677 677 } 678 678 679 679 /** 680 - * cdns3_gadget_ep0_queue Transfer data on endpoint zero 680 + * cdns3_gadget_ep0_queue - Transfer data on endpoint zero 681 681 * @ep: pointer to endpoint zero object 682 682 * @request: pointer to request object 683 683 * @gfp_flags: gfp flags ··· 772 772 } 773 773 774 774 /** 775 - * cdns3_gadget_ep_set_wedge Set wedge on selected endpoint 775 + * cdns3_gadget_ep_set_wedge - Set wedge on selected endpoint 776 776 * @ep: endpoint object 777 777 * 778 778 * Returns 0 ··· 865 865 } 866 866 867 867 /** 868 - * cdns3_init_ep0 Initializes software endpoint 0 of gadget 868 + * cdns3_init_ep0 - Initializes software endpoint 0 of gadget 869 869 * @priv_dev: extended gadget object 870 870 * @priv_ep: extended endpoint object 871 871 *
+19 -21
drivers/usb/cdns3/cdns3-gadget.c
··· 155 155 } 156 156 157 157 /** 158 - * select_ep - selects endpoint 158 + * cdns3_select_ep - selects endpoint 159 159 * @priv_dev: extended gadget object 160 160 * @ep: endpoint address 161 161 */ ··· 430 430 if (ret) 431 431 return ret; 432 432 433 - list_del(&request->list); 434 - list_add_tail(&request->list, 435 - &priv_ep->pending_req_list); 433 + list_move_tail(&request->list, &priv_ep->pending_req_list); 436 434 if (request->stream_id != 0 || (priv_ep->flags & EP_TDLCHK_EN)) 437 435 break; 438 436 } ··· 482 484 } 483 485 484 486 /** 485 - * cdns3_wa2_descmiss_copy_data copy data from internal requests to 487 + * cdns3_wa2_descmiss_copy_data - copy data from internal requests to 486 488 * request queued by class driver. 487 489 * @priv_ep: extended endpoint object 488 490 * @request: request object ··· 1833 1835 } 1834 1836 1835 1837 /** 1836 - * cdns3_device_irq_handler- interrupt handler for device part of controller 1838 + * cdns3_device_irq_handler - interrupt handler for device part of controller 1837 1839 * 1838 1840 * @irq: irq number for cdns3 core device 1839 1841 * @data: structure of cdns3 ··· 1877 1879 } 1878 1880 1879 1881 /** 1880 - * cdns3_device_thread_irq_handler- interrupt handler for device part 1882 + * cdns3_device_thread_irq_handler - interrupt handler for device part 1881 1883 * of controller 1882 1884 * 1883 1885 * @irq: irq number for cdns3 core device ··· 2020 2022 } 2021 2023 2022 2024 /** 2023 - * cdns3_ep_config Configure hardware endpoint 2025 + * cdns3_ep_config - Configure hardware endpoint 2024 2026 * @priv_ep: extended endpoint object 2025 2027 * @enable: set EP_CFG_ENABLE bit in ep_cfg register. 
2026 2028 */ ··· 2219 2221 } 2220 2222 2221 2223 /** 2222 - * cdns3_gadget_ep_alloc_request Allocates request 2224 + * cdns3_gadget_ep_alloc_request - Allocates request 2223 2225 * @ep: endpoint object associated with request 2224 2226 * @gfp_flags: gfp flags 2225 2227 * ··· 2242 2244 } 2243 2245 2244 2246 /** 2245 - * cdns3_gadget_ep_free_request Free memory occupied by request 2247 + * cdns3_gadget_ep_free_request - Free memory occupied by request 2246 2248 * @ep: endpoint object associated with request 2247 2249 * @request: request to free memory 2248 2250 */ ··· 2259 2261 } 2260 2262 2261 2263 /** 2262 - * cdns3_gadget_ep_enable Enable endpoint 2264 + * cdns3_gadget_ep_enable - Enable endpoint 2263 2265 * @ep: endpoint object 2264 2266 * @desc: endpoint descriptor 2265 2267 * ··· 2394 2396 } 2395 2397 2396 2398 /** 2397 - * cdns3_gadget_ep_disable Disable endpoint 2399 + * cdns3_gadget_ep_disable - Disable endpoint 2398 2400 * @ep: endpoint object 2399 2401 * 2400 2402 * Returns 0 on success, error code elsewhere ··· 2484 2486 } 2485 2487 2486 2488 /** 2487 - * cdns3_gadget_ep_queue Transfer data on endpoint 2489 + * __cdns3_gadget_ep_queue - Transfer data on endpoint 2488 2490 * @ep: endpoint object 2489 2491 * @request: request object 2490 2492 * @gfp_flags: gfp flags ··· 2584 2586 } 2585 2587 2586 2588 /** 2587 - * cdns3_gadget_ep_dequeue Remove request from transfer queue 2589 + * cdns3_gadget_ep_dequeue - Remove request from transfer queue 2588 2590 * @ep: endpoint object associated with request 2589 2591 * @request: request object 2590 2592 * ··· 2651 2653 } 2652 2654 2653 2655 /** 2654 - * __cdns3_gadget_ep_set_halt Sets stall on selected endpoint 2656 + * __cdns3_gadget_ep_set_halt - Sets stall on selected endpoint 2655 2657 * Should be called after acquiring spin_lock and selecting ep 2656 2658 * @priv_ep: endpoint object to set stall on. 
2657 2659 */ ··· 2672 2674 } 2673 2675 2674 2676 /** 2675 - * __cdns3_gadget_ep_clear_halt Clears stall on selected endpoint 2677 + * __cdns3_gadget_ep_clear_halt - Clears stall on selected endpoint 2676 2678 * Should be called after acquiring spin_lock and selecting ep 2677 2679 * @priv_ep: endpoint object to clear stall on 2678 2680 */ ··· 2717 2719 } 2718 2720 2719 2721 /** 2720 - * cdns3_gadget_ep_set_halt Sets/clears stall on selected endpoint 2722 + * cdns3_gadget_ep_set_halt - Sets/clears stall on selected endpoint 2721 2723 * @ep: endpoint object to set/clear stall on 2722 2724 * @value: 1 for set stall, 0 for clear stall 2723 2725 * ··· 2763 2765 }; 2764 2766 2765 2767 /** 2766 - * cdns3_gadget_get_frame Returns number of actual ITP frame 2768 + * cdns3_gadget_get_frame - Returns number of actual ITP frame 2767 2769 * @gadget: gadget object 2768 2770 * 2769 2771 * Returns number of actual ITP frame ··· 2872 2874 } 2873 2875 2874 2876 /** 2875 - * cdns3_gadget_udc_start Gadget start 2877 + * cdns3_gadget_udc_start - Gadget start 2876 2878 * @gadget: gadget object 2877 2879 * @driver: driver which operates on this gadget 2878 2880 * ··· 2918 2920 } 2919 2921 2920 2922 /** 2921 - * cdns3_gadget_udc_stop Stops gadget 2923 + * cdns3_gadget_udc_stop - Stops gadget 2922 2924 * @gadget: gadget object 2923 2925 * 2924 2926 * Returns 0 ··· 2981 2983 } 2982 2984 2983 2985 /** 2984 - * cdns3_init_eps Initializes software endpoints of gadget 2986 + * cdns3_init_eps - Initializes software endpoints of gadget 2985 2987 * @priv_dev: extended gadget object 2986 2988 * 2987 2989 * Returns 0 on success, error code elsewhere
+1 -1
drivers/usb/cdns3/cdns3-imx.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - /** 2 + /* 3 3 * cdns3-imx.c - NXP i.MX specific Glue layer for Cadence USB Controller 4 4 * 5 5 * Copyright (C) 2019 NXP
+1 -1
drivers/usb/cdns3/cdns3-plat.c
··· 170 170 } 171 171 172 172 /** 173 - * cdns3_remove - unbind drd driver and clean up 173 + * cdns3_plat_remove() - unbind drd driver and clean up 174 174 * @pdev: Pointer to Linux platform device 175 175 * 176 176 * Returns 0 on success otherwise negative errno
+1 -1
drivers/usb/cdns3/cdns3-ti.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - /** 2 + /* 3 3 * cdns3-ti.c - TI specific Glue layer for Cadence USB Controller 4 4 * 5 5 * Copyright (C) 2019 Texas Instruments Incorporated - https://www.ti.com
+4 -3
drivers/usb/cdns3/cdnsp-gadget.c
··· 56 56 } 57 57 58 58 /** 59 - * Find the offset of the extended capabilities with capability ID id. 59 + * cdnsp_find_next_ext_cap - Find the offset of the extended capabilities 60 + * with capability ID id. 60 61 * @base: PCI MMIO registers base address. 61 62 * @start: Address at which to start looking, (0 or HCC_PARAMS to start at 62 63 * beginning of list) ··· 1152 1151 struct cdnsp_ep *pep = to_cdnsp_ep(ep); 1153 1152 struct cdnsp_device *pdev = pep->pdev; 1154 1153 struct cdnsp_request *preq; 1155 - unsigned long flags = 0; 1154 + unsigned long flags; 1156 1155 int ret; 1157 1156 1158 1157 spin_lock_irqsave(&pdev->lock, flags); ··· 1177 1176 { 1178 1177 struct cdnsp_ep *pep = to_cdnsp_ep(ep); 1179 1178 struct cdnsp_device *pdev = pep->pdev; 1180 - unsigned long flags = 0; 1179 + unsigned long flags; 1181 1180 int ret; 1182 1181 1183 1182 spin_lock_irqsave(&pdev->lock, flags);
+2 -3
drivers/usb/cdns3/cdnsp-mem.c
··· 1082 1082 dma_pool_destroy(pdev->device_pool); 1083 1083 pdev->device_pool = NULL; 1084 1084 1085 - if (pdev->dcbaa) 1086 - dma_free_coherent(dev, sizeof(*pdev->dcbaa), 1087 - pdev->dcbaa, pdev->dcbaa->dma); 1085 + dma_free_coherent(dev, sizeof(*pdev->dcbaa), 1086 + pdev->dcbaa, pdev->dcbaa->dma); 1088 1087 1089 1088 pdev->dcbaa = NULL; 1090 1089
+2 -2
drivers/usb/cdns3/core.c
··· 332 332 } 333 333 334 334 /** 335 - * cdsn3_role_get - get current role of controller. 335 + * cdns_role_get - get current role of controller. 336 336 * 337 337 * @sw: pointer to USB role switch structure 338 338 * ··· 419 419 } 420 420 421 421 /** 422 - * cdns_probe - probe for cdns3/cdnsp core device 422 + * cdns_init - probe for cdns3/cdnsp core device 423 423 * @cdns: Pointer to cdns structure. 424 424 * 425 425 * Returns 0 on success otherwise negative errno
-2
drivers/usb/chipidea/ci.h
··· 195 195 * @phy: pointer to PHY, if any 196 196 * @usb_phy: pointer to USB PHY, if any and if using the USB PHY framework 197 197 * @hcd: pointer to usb_hcd for ehci host driver 198 - * @debugfs: root dentry for this controller in debugfs 199 198 * @id_event: indicates there is an id event, and handled at ci_otg_work 200 199 * @b_sess_valid_event: indicates there is a vbus event, and handled 201 200 * at ci_otg_work ··· 248 249 /* old usb_phy interface */ 249 250 struct usb_phy *usb_phy; 250 251 struct usb_hcd *hcd; 251 - struct dentry *debugfs; 252 252 bool id_event; 253 253 bool b_sess_valid_event; 254 254 bool imx28_write_fix;
+1 -1
drivers/usb/chipidea/core.c
··· 335 335 } 336 336 337 337 /** 338 - * _ci_usb_phy_exit: deinitialize phy taking in account both phy and usb_phy 338 + * ci_usb_phy_exit: deinitialize phy taking in account both phy and usb_phy 339 339 * interfaces 340 340 * @ci: the controller 341 341 */
+12 -18
drivers/usb/chipidea/debug.c
··· 342 342 */ 343 343 void dbg_create_files(struct ci_hdrc *ci) 344 344 { 345 - ci->debugfs = debugfs_create_dir(dev_name(ci->dev), usb_debug_root); 345 + struct dentry *dir; 346 346 347 - debugfs_create_file("device", S_IRUGO, ci->debugfs, ci, 348 - &ci_device_fops); 349 - debugfs_create_file("port_test", S_IRUGO | S_IWUSR, ci->debugfs, ci, 350 - &ci_port_test_fops); 351 - debugfs_create_file("qheads", S_IRUGO, ci->debugfs, ci, 352 - &ci_qheads_fops); 353 - debugfs_create_file("requests", S_IRUGO, ci->debugfs, ci, 354 - &ci_requests_fops); 347 + dir = debugfs_create_dir(dev_name(ci->dev), usb_debug_root); 355 348 356 - if (ci_otg_is_fsm_mode(ci)) { 357 - debugfs_create_file("otg", S_IRUGO, ci->debugfs, ci, 358 - &ci_otg_fops); 359 - } 349 + debugfs_create_file("device", S_IRUGO, dir, ci, &ci_device_fops); 350 + debugfs_create_file("port_test", S_IRUGO | S_IWUSR, dir, ci, &ci_port_test_fops); 351 + debugfs_create_file("qheads", S_IRUGO, dir, ci, &ci_qheads_fops); 352 + debugfs_create_file("requests", S_IRUGO, dir, ci, &ci_requests_fops); 360 353 361 - debugfs_create_file("role", S_IRUGO | S_IWUSR, ci->debugfs, ci, 362 - &ci_role_fops); 363 - debugfs_create_file("registers", S_IRUGO, ci->debugfs, ci, 364 - &ci_registers_fops); 354 + if (ci_otg_is_fsm_mode(ci)) 355 + debugfs_create_file("otg", S_IRUGO, dir, ci, &ci_otg_fops); 356 + 357 + debugfs_create_file("role", S_IRUGO | S_IWUSR, dir, ci, &ci_role_fops); 358 + debugfs_create_file("registers", S_IRUGO, dir, ci, &ci_registers_fops); 365 359 } 366 360 367 361 /** ··· 364 370 */ 365 371 void dbg_remove_files(struct ci_hdrc *ci) 366 372 { 367 - debugfs_remove_recursive(ci->debugfs); 373 + debugfs_remove(debugfs_lookup(dev_name(ci->dev), usb_debug_root)); 368 374 }
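The chipidea cleanup drops the cached dentry from struct ci_hdrc entirely: dbg_create_files() no longer stores the directory, and dbg_remove_files() re-finds it by name via debugfs_lookup(). The same idea in a userspace registry keyed by name (a stand-in for the debugfs tree, not the debugfs API):

```c
#include <string.h>

/* Name-keyed registry: create by name, remove by looking the name up
 * again instead of caching a handle in the owning structure. */
#define MAX_DIRS 4
static char dirs[MAX_DIRS][32];

static int dir_create(const char *name)
{
	for (int i = 0; i < MAX_DIRS; i++) {
		if (!dirs[i][0]) {
			strncpy(dirs[i], name, sizeof(dirs[i]) - 1);
			return i;
		}
	}
	return -1;
}

static int dir_lookup(const char *name)
{
	for (int i = 0; i < MAX_DIRS; i++)
		if (dirs[i][0] && !strcmp(dirs[i], name))
			return i;
	return -1;
}

static void dir_remove(const char *name)
{
	int i = dir_lookup(name);
	if (i >= 0)
		dirs[i][0] = '\0';
}
```

The payoff in the driver is one less field to keep in sync; the trade-off is that removal now depends on dev_name() being stable between create and remove.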
+5 -4
drivers/usb/chipidea/otg.c
··· 22 22 #include "otg_fsm.h" 23 23 24 24 /** 25 - * hw_read_otgsc returns otgsc register bits value. 25 + * hw_read_otgsc - returns otgsc register bits value. 26 26 * @ci: the controller 27 27 * @mask: bitfield mask 28 28 */ ··· 75 75 } 76 76 77 77 /** 78 - * hw_write_otgsc updates target bits of OTGSC register. 78 + * hw_write_otgsc - updates target bits of OTGSC register. 79 79 * @ci: the controller 80 80 * @mask: bitfield mask 81 81 * @data: to be written ··· 140 140 } 141 141 142 142 /** 143 - * When we switch to device mode, the vbus value should be lower 144 - * than OTGSC_BSV before connecting to host. 143 + * hw_wait_vbus_lower_bsv - When we switch to device mode, the vbus value 144 + * should be lower than OTGSC_BSV before connecting 145 + * to host. 145 146 * 146 147 * @ci: the controller 147 148 *
+1 -1
drivers/usb/chipidea/udc.c
··· 238 238 } 239 239 240 240 /** 241 - * hw_is_port_high_speed: test if port is high speed 241 + * hw_port_is_high_speed: test if port is high speed 242 242 * @ci: the controller 243 243 * 244 244 * This function returns true if high speed port
+5
drivers/usb/class/cdc-acm.c
··· 1946 1946 .driver_info = IGNORE_DEVICE, 1947 1947 }, 1948 1948 1949 + /* Exclude Heimann Sensor GmbH USB appset demo */ 1950 + { USB_DEVICE(0x32a7, 0x0000), 1951 + .driver_info = IGNORE_DEVICE, 1952 + }, 1953 + 1949 1954 /* control interfaces without any protocol set */ 1950 1955 { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM, 1951 1956 USB_CDC_PROTO_NONE) },
+3 -2
drivers/usb/class/cdc-wdm.c
··· 1035 1035 INIT_WORK(&desc->rxwork, wdm_rxwork); 1036 1036 INIT_WORK(&desc->service_outs_intr, service_interrupt_work); 1037 1037 1038 - rv = -EINVAL; 1039 - if (!usb_endpoint_is_int_in(ep)) 1038 + if (!usb_endpoint_is_int_in(ep)) { 1039 + rv = -EINVAL; 1040 1040 goto err; 1041 + } 1041 1042 1042 1043 desc->wMaxPacketSize = usb_endpoint_maxp(ep); 1043 1044
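The cdc-wdm hunk above moves the `rv = -EINVAL` assignment into the branch that actually fails, so a later success path cannot return a stale error code. A minimal userspace sketch of that pattern (the helper and its argument are invented for illustration, not kernel APIs):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Stand-in for usb_endpoint_is_int_in(); invented for this sketch. */
static bool is_int_in(bool ep_ok)
{
	return ep_ok;
}

/* Assign the error code inside the failing branch instead of
 * preloading it, so no success path can leak a stale -EINVAL. */
static int probe_like(bool ep_ok)
{
	int rv;

	if (!is_int_in(ep_ok)) {
		rv = -EINVAL;
		goto err;
	}

	return 0;	/* success path never sees a preset errno */
err:
	return rv;
}
```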
+1 -1
drivers/usb/common/ulpi.c
··· 141 141 /* -------------------------------------------------------------------------- */ 142 142 143 143 /** 144 - * ulpi_register_driver - register a driver with the ULPI bus 144 + * __ulpi_register_driver - register a driver with the ULPI bus 145 145 * @drv: driver being registered 146 146 * @module: ends up being THIS_MODULE 147 147 *
+33 -29
drivers/usb/common/usb-conn-gpio.c
··· 83 83 else 84 84 role = USB_ROLE_NONE; 85 85 86 - dev_dbg(info->dev, "role %d/%d, gpios: id %d, vbus %d\n", 87 - info->last_role, role, id, vbus); 86 + dev_dbg(info->dev, "role %s -> %s, gpios: id %d, vbus %d\n", 87 + usb_role_string(info->last_role), usb_role_string(role), id, vbus); 88 88 89 89 if (info->last_role == role) { 90 - dev_warn(info->dev, "repeated role: %d\n", role); 90 + dev_warn(info->dev, "repeated role: %s\n", usb_role_string(role)); 91 91 return; 92 92 } 93 93 ··· 149 149 return 0; 150 150 } 151 151 152 - static int usb_conn_probe(struct platform_device *pdev) 152 + static int usb_conn_psy_register(struct usb_conn_info *info) 153 153 { 154 - struct device *dev = &pdev->dev; 155 - struct power_supply_desc *desc; 156 - struct usb_conn_info *info; 154 + struct device *dev = info->dev; 155 + struct power_supply_desc *desc = &info->desc; 157 156 struct power_supply_config cfg = { 158 157 .of_node = dev->of_node, 159 158 }; 159 + 160 + desc->name = "usb-charger"; 161 + desc->properties = usb_charger_properties; 162 + desc->num_properties = ARRAY_SIZE(usb_charger_properties); 163 + desc->get_property = usb_charger_get_property; 164 + desc->type = POWER_SUPPLY_TYPE_USB; 165 + cfg.drv_data = info; 166 + 167 + info->charger = devm_power_supply_register(dev, desc, &cfg); 168 + if (IS_ERR(info->charger)) 169 + dev_err(dev, "Unable to register charger\n"); 170 + 171 + return PTR_ERR_OR_ZERO(info->charger); 172 + } 173 + 174 + static int usb_conn_probe(struct platform_device *pdev) 175 + { 176 + struct device *dev = &pdev->dev; 177 + struct usb_conn_info *info; 160 178 bool need_vbus = true; 161 179 int ret = 0; 162 180 ··· 223 205 } 224 206 225 207 if (IS_ERR(info->vbus)) { 226 - if (PTR_ERR(info->vbus) != -EPROBE_DEFER) 227 - dev_err(dev, "failed to get vbus: %ld\n", PTR_ERR(info->vbus)); 228 - return PTR_ERR(info->vbus); 208 + ret = PTR_ERR(info->vbus); 209 + return dev_err_probe(dev, ret, "failed to get vbus :%d\n", ret); 229 210 } 230 211 231 212 info->role_sw = usb_role_switch_get(dev); 232 - if (IS_ERR(info->role_sw)) { 233 - if (PTR_ERR(info->role_sw) != -EPROBE_DEFER) 234 - dev_err(dev, "failed to get role switch\n"); 213 + if (IS_ERR(info->role_sw)) 214 + return dev_err_probe(dev, PTR_ERR(info->role_sw), 215 + "failed to get role switch\n"); 235 216 236 - return PTR_ERR(info->role_sw); 237 - } 217 + ret = usb_conn_psy_register(info); 218 + if (ret) 219 + goto put_role_sw; 238 220 239 221 if (info->id_gpiod) { 240 222 info->id_irq = gpiod_to_irq(info->id_gpiod); ··· 268 250 dev_err(dev, "failed to request VBUS IRQ\n"); 269 251 goto put_role_sw; 270 252 } 271 - } 272 - 273 - desc = &info->desc; 274 - desc->name = "usb-charger"; 275 - desc->properties = usb_charger_properties; 276 - desc->num_properties = ARRAY_SIZE(usb_charger_properties); 277 - desc->get_property = usb_charger_get_property; 278 - desc->type = POWER_SUPPLY_TYPE_USB; 279 - cfg.drv_data = info; 280 - 281 - info->charger = devm_power_supply_register(dev, desc, &cfg); 282 - if (IS_ERR(info->charger)) { 283 - dev_err(dev, "Unable to register charger\n"); 284 - return PTR_ERR(info->charger); 285 253 } 286 254 287 255 platform_set_drvdata(pdev, info);
+1 -1
drivers/usb/core/devio.c
··· 1162 1162 tbuf, ctrl->wLength); 1163 1163 1164 1164 usb_unlock_device(dev); 1165 - i = usb_control_msg(dev, usb_sndctrlpipe(dev, 0), ctrl->bRequest, 1165 + i = usb_control_msg(dev, pipe, ctrl->bRequest, 1166 1166 ctrl->bRequestType, ctrl->wValue, ctrl->wIndex, 1167 1167 tbuf, ctrl->wLength, tmo); 1168 1168 usb_lock_device(dev);
+130
drivers/usb/core/hcd.c
··· 2111 2111 } 2112 2112 2113 2113 /*-------------------------------------------------------------------------*/ 2114 + #ifdef CONFIG_USB_HCD_TEST_MODE 2115 + 2116 + static void usb_ehset_completion(struct urb *urb) 2117 + { 2118 + struct completion *done = urb->context; 2119 + 2120 + complete(done); 2121 + } 2122 + /* 2123 + * Allocate and initialize a control URB. This request will be used by the 2124 + * EHSET SINGLE_STEP_SET_FEATURE test in which the DATA and STATUS stages 2125 + * of the GetDescriptor request are sent 15 seconds after the SETUP stage. 2126 + * Return NULL if failed. 2127 + */ 2128 + static struct urb *request_single_step_set_feature_urb( 2129 + struct usb_device *udev, 2130 + void *dr, 2131 + void *buf, 2132 + struct completion *done) 2133 + { 2134 + struct urb *urb; 2135 + struct usb_hcd *hcd = bus_to_hcd(udev->bus); 2136 + struct usb_host_endpoint *ep; 2137 + 2138 + urb = usb_alloc_urb(0, GFP_KERNEL); 2139 + if (!urb) 2140 + return NULL; 2141 + 2142 + urb->pipe = usb_rcvctrlpipe(udev, 0); 2143 + ep = (usb_pipein(urb->pipe) ? udev->ep_in : udev->ep_out) 2144 + [usb_pipeendpoint(urb->pipe)]; 2145 + if (!ep) { 2146 + usb_free_urb(urb); 2147 + return NULL; 2148 + } 2149 + 2150 + urb->ep = ep; 2151 + urb->dev = udev; 2152 + urb->setup_packet = (void *)dr; 2153 + urb->transfer_buffer = buf; 2154 + urb->transfer_buffer_length = USB_DT_DEVICE_SIZE; 2155 + urb->complete = usb_ehset_completion; 2156 + urb->status = -EINPROGRESS; 2157 + urb->actual_length = 0; 2158 + urb->transfer_flags = URB_DIR_IN; 2159 + usb_get_urb(urb); 2160 + atomic_inc(&urb->use_count); 2161 + atomic_inc(&urb->dev->urbnum); 2162 + if (map_urb_for_dma(hcd, urb, GFP_KERNEL)) { 2163 + usb_put_urb(urb); 2164 + usb_free_urb(urb); 2165 + return NULL; 2166 + } 2167 + 2168 + urb->context = done; 2169 + return urb; 2170 + } 2171 + 2172 + int ehset_single_step_set_feature(struct usb_hcd *hcd, int port) 2173 + { 2174 + int retval = -ENOMEM; 2175 + struct usb_ctrlrequest *dr; 2176 + struct urb *urb; 2177 + struct usb_device *udev; 2178 + struct usb_device_descriptor *buf; 2179 + DECLARE_COMPLETION_ONSTACK(done); 2180 + 2181 + /* Obtain udev of the rhub's child port */ 2182 + udev = usb_hub_find_child(hcd->self.root_hub, port); 2183 + if (!udev) { 2184 + dev_err(hcd->self.controller, "No device attached to the RootHub\n"); 2185 + return -ENODEV; 2186 + } 2187 + buf = kmalloc(USB_DT_DEVICE_SIZE, GFP_KERNEL); 2188 + if (!buf) 2189 + return -ENOMEM; 2190 + 2191 + dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_KERNEL); 2192 + if (!dr) { 2193 + kfree(buf); 2194 + return -ENOMEM; 2195 + } 2196 + 2197 + /* Fill Setup packet for GetDescriptor */ 2198 + dr->bRequestType = USB_DIR_IN; 2199 + dr->bRequest = USB_REQ_GET_DESCRIPTOR; 2200 + dr->wValue = cpu_to_le16(USB_DT_DEVICE << 8); 2201 + dr->wIndex = 0; 2202 + dr->wLength = cpu_to_le16(USB_DT_DEVICE_SIZE); 2203 + urb = request_single_step_set_feature_urb(udev, dr, buf, &done); 2204 + if (!urb) 2205 + goto cleanup; 2206 + 2207 + /* Submit just the SETUP stage */ 2208 + retval = hcd->driver->submit_single_step_set_feature(hcd, urb, 1); 2209 + if (retval) 2210 + goto out1; 2211 + if (!wait_for_completion_timeout(&done, msecs_to_jiffies(2000))) { 2212 + usb_kill_urb(urb); 2213 + retval = -ETIMEDOUT; 2214 + dev_err(hcd->self.controller, 2215 + "%s SETUP stage timed out on ep0\n", __func__); 2216 + goto out1; 2217 + } 2218 + msleep(15 * 1000); 2219 + 2220 + /* Complete remaining DATA and STATUS stages using the same URB */ 2221 + urb->status = -EINPROGRESS; 2222 + usb_get_urb(urb); 2223 + atomic_inc(&urb->use_count); 2224 + atomic_inc(&urb->dev->urbnum); 2225 + retval = hcd->driver->submit_single_step_set_feature(hcd, urb, 0); 2226 + if (!retval && !wait_for_completion_timeout(&done, 2227 + msecs_to_jiffies(2000))) { 2228 + usb_kill_urb(urb); 2229 + retval = -ETIMEDOUT; 2230 + dev_err(hcd->self.controller, 2231 + "%s IN stage timed out on ep0\n", __func__); 2232 + } 2233 + out1: 2234 + usb_free_urb(urb); 2235 + cleanup: 2236 + kfree(dr); 2237 + kfree(buf); 2238 + return retval; 2239 + } 2240 + EXPORT_SYMBOL_GPL(ehset_single_step_set_feature); 2241 + #endif /* CONFIG_USB_HCD_TEST_MODE */ 2242 + 2243 + /*-------------------------------------------------------------------------*/ 2114 2244 2115 2245 #ifdef CONFIG_PM 2116 2246
+28 -6
drivers/usb/core/hub.c
··· 2434 2434 u16 wHubCharacteristics; 2435 2435 bool removable = true; 2436 2436 2437 + dev_set_removable(&udev->dev, DEVICE_REMOVABLE_UNKNOWN); 2438 + 2437 2439 if (!hdev) 2438 2440 return; 2439 2441 ··· 2447 2445 */ 2448 2446 switch (hub->ports[udev->portnum - 1]->connect_type) { 2449 2447 case USB_PORT_CONNECT_TYPE_HOT_PLUG: 2450 - udev->removable = USB_DEVICE_REMOVABLE; 2448 + dev_set_removable(&udev->dev, DEVICE_REMOVABLE); 2451 2449 return; 2452 2450 case USB_PORT_CONNECT_TYPE_HARD_WIRED: 2453 2451 case USB_PORT_NOT_USED: 2454 - udev->removable = USB_DEVICE_FIXED; 2452 + dev_set_removable(&udev->dev, DEVICE_FIXED); 2455 2453 return; 2456 2454 default: 2457 2455 break; ··· 2476 2474 } 2477 2475 2478 2476 if (removable) 2479 - udev->removable = USB_DEVICE_REMOVABLE; 2477 + dev_set_removable(&udev->dev, DEVICE_REMOVABLE); 2480 2478 else 2481 - udev->removable = USB_DEVICE_FIXED; 2479 + dev_set_removable(&udev->dev, DEVICE_FIXED); 2482 2480 2483 2481 } 2484 2482 ··· 2550 2548 device_enable_async_suspend(&udev->dev); 2551 2549 2552 2550 /* check whether the hub or firmware marks this port as non-removable */ 2553 - if (udev->parent) 2554 - set_usb_port_removable(udev); 2551 + set_usb_port_removable(udev); 2555 2552 2556 2553 /* Register the device. The device driver is responsible 2557 2554 * for configuring the device and invoking the add-device ··· 3388 3387 status = 0; 3389 3388 } 3390 3389 if (status) { 3390 + /* Check if the port has been suspended for the timeout case 3391 + * to prevent the suspended port from incorrect handling. 3392 + */ 3393 + if (status == -ETIMEDOUT) { 3394 + int ret; 3395 + u16 portstatus, portchange; 3396 + 3397 + portstatus = portchange = 0; 3398 + ret = hub_port_status(hub, port1, &portstatus, 3399 + &portchange); 3400 + 3401 + dev_dbg(&port_dev->dev, 3402 + "suspend timeout, status %04x\n", portstatus); 3403 + 3404 + if (ret == 0 && port_is_suspended(hub, portstatus)) { 3405 + status = 0; 3406 + goto suspend_done; 3407 + } 3408 + } 3409 + 3391 3410 dev_dbg(&port_dev->dev, "can't suspend, status %d\n", status); 3392 3411 3393 3412 /* Try to enable USB3 LTM again */ ··· 3424 3403 if (!PMSG_IS_AUTO(msg)) 3425 3404 status = 0; 3426 3405 } else { 3406 + suspend_done: 3427 3407 dev_dbg(&udev->dev, "usb %ssuspend, wakeup %d\n", 3428 3408 (PMSG_IS_AUTO(msg) ? "auto-" : ""), 3429 3409 udev->do_remote_wakeup);
+6
drivers/usb/core/message.c
··· 783 783 int i; 784 784 int result; 785 785 786 + if (size <= 0) /* No point in asking for no data */ 787 + return -EINVAL; 788 + 786 789 memset(buf, 0, size); /* Make sure we parse really received data */ 787 790 788 791 for (i = 0; i < 3; ++i) { ··· 834 831 { 835 832 int i; 836 833 int result; 834 + 835 + if (size <= 0) /* No point in asking for no data */ 836 + return -EINVAL; 837 837 838 838 for (i = 0; i < 3; ++i) { 839 839 /* retry on length 0 or stall; some devices are flakey */
-1
drivers/usb/core/quirks.c
··· 406 406 407 407 /* Realtek hub in Dell WD19 (Type-C) */ 408 408 { USB_DEVICE(0x0bda, 0x0487), .driver_info = USB_QUIRK_NO_LPM }, 409 - { USB_DEVICE(0x0bda, 0x5487), .driver_info = USB_QUIRK_RESET_RESUME }, 410 409 411 410 /* Generic RTL8153 based ethernet adapters */ 412 411 { USB_DEVICE(0x0bda, 0x8153), .driver_info = USB_QUIRK_NO_LPM },
-24
drivers/usb/core/sysfs.c
··· 301 301 } 302 302 static DEVICE_ATTR_RO(urbnum); 303 303 304 - static ssize_t removable_show(struct device *dev, struct device_attribute *attr, 305 - char *buf) 306 - { 307 - struct usb_device *udev; 308 - char *state; 309 - 310 - udev = to_usb_device(dev); 311 - 312 - switch (udev->removable) { 313 - case USB_DEVICE_REMOVABLE: 314 - state = "removable"; 315 - break; 316 - case USB_DEVICE_FIXED: 317 - state = "fixed"; 318 - break; 319 - default: 320 - state = "unknown"; 321 - } 322 - 323 - return sprintf(buf, "%s\n", state); 324 - } 325 - static DEVICE_ATTR_RO(removable); 326 - 327 304 static ssize_t ltm_capable_show(struct device *dev, 328 305 struct device_attribute *attr, char *buf) 329 306 { ··· 805 828 &dev_attr_avoid_reset_quirk.attr, 806 829 &dev_attr_authorized.attr, 807 830 &dev_attr_remove.attr, 808 - &dev_attr_removable.attr, 809 831 &dev_attr_ltm_capable.attr, 810 832 #ifdef CONFIG_OF 811 833 &dev_attr_devspec.attr,
+9
drivers/usb/core/urb.c
··· 407 407 return -ENOEXEC; 408 408 is_out = !(setup->bRequestType & USB_DIR_IN) || 409 409 !setup->wLength; 410 + dev_WARN_ONCE(&dev->dev, (usb_pipeout(urb->pipe) != is_out), 411 + "BOGUS control dir, pipe %x doesn't match bRequestType %x\n", 412 + urb->pipe, setup->bRequestType); 413 + if (le16_to_cpu(setup->wLength) != urb->transfer_buffer_length) { 414 + dev_dbg(&dev->dev, "BOGUS control len %d doesn't match transfer length %d\n", 415 + le16_to_cpu(setup->wLength), 416 + urb->transfer_buffer_length); 417 + return -EBADR; 418 + } 410 419 } else { 411 420 is_out = usb_endpoint_dir_out(&ep->desc); 412 421 }
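The usb_submit_urb() hunk above rejects control URBs whose setup-packet wLength disagrees with the attached buffer length, decoding the little-endian wire field first. A hedged userspace sketch of that check (helper names are mine; the kernel uses le16_to_cpu() and returns -EBADR on mismatch):

```c
#include <assert.h>
#include <stdint.h>

/* wLength travels little-endian on the wire; decode it byte-by-byte
 * so the comparison is correct on any host byte order, which is what
 * the kernel's le16_to_cpu() does. */
static uint16_t le16_to_host(const uint8_t b[2])
{
	return (uint16_t)(b[0] | (b[1] << 8));
}

/* The length promised in the setup packet must match the transfer
 * buffer the caller actually supplied. */
static int ctrl_len_ok(const uint8_t wLength_le[2], uint32_t buf_len)
{
	return le16_to_host(wLength_le) == buf_len;
}
```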
+21 -9
drivers/usb/dwc2/core.c
··· 1111 1111 usbcfg &= ~(GUSBCFG_ULPI_UTMI_SEL | GUSBCFG_PHYIF16); 1112 1112 if (hsotg->params.phy_utmi_width == 16) 1113 1113 usbcfg |= GUSBCFG_PHYIF16; 1114 - 1115 - /* Set turnaround time */ 1116 - if (dwc2_is_device_mode(hsotg)) { 1117 - usbcfg &= ~GUSBCFG_USBTRDTIM_MASK; 1118 - if (hsotg->params.phy_utmi_width == 16) 1119 - usbcfg |= 5 << GUSBCFG_USBTRDTIM_SHIFT; 1120 - else 1121 - usbcfg |= 9 << GUSBCFG_USBTRDTIM_SHIFT; 1122 - } 1123 1114 break; 1124 1115 default: 1125 1116 dev_err(hsotg->dev, "FS PHY selected at HS!\n"); ··· 1132 1141 return retval; 1133 1142 } 1134 1143 1144 + static void dwc2_set_turnaround_time(struct dwc2_hsotg *hsotg) 1145 + { 1146 + u32 usbcfg; 1147 + 1148 + if (hsotg->params.phy_type != DWC2_PHY_TYPE_PARAM_UTMI) 1149 + return; 1150 + 1151 + usbcfg = dwc2_readl(hsotg, GUSBCFG); 1152 + 1153 + usbcfg &= ~GUSBCFG_USBTRDTIM_MASK; 1154 + if (hsotg->params.phy_utmi_width == 16) 1155 + usbcfg |= 5 << GUSBCFG_USBTRDTIM_SHIFT; 1156 + else 1157 + usbcfg |= 9 << GUSBCFG_USBTRDTIM_SHIFT; 1158 + 1159 + dwc2_writel(hsotg, usbcfg, GUSBCFG); 1160 + } 1161 + 1135 1162 int dwc2_phy_init(struct dwc2_hsotg *hsotg, bool select_phy) 1136 1163 { 1137 1164 u32 usbcfg; ··· 1167 1158 retval = dwc2_hs_phy_init(hsotg, select_phy); 1168 1159 if (retval) 1169 1160 return retval; 1161 + 1162 + if (dwc2_is_device_mode(hsotg)) 1163 + dwc2_set_turnaround_time(hsotg); 1170 1164 } 1171 1165 1172 1166 if (hsotg->hw_params.hs_phy_type == GHWCFG2_HS_PHY_TYPE_ULPI &&
+7 -7
drivers/usb/dwc2/gadget.c
··· 1496 1496 { 1497 1497 struct dwc2_hsotg_ep *hs_ep = our_ep(ep); 1498 1498 struct dwc2_hsotg *hs = hs_ep->parent; 1499 - unsigned long flags = 0; 1500 - int ret = 0; 1499 + unsigned long flags; 1500 + int ret; 1501 1501 1502 1502 spin_lock_irqsave(&hs->lock, flags); 1503 1503 ret = dwc2_hsotg_ep_queue(ep, req, gfp_flags); ··· 3338 3338 3339 3339 static int dwc2_hsotg_ep_disable(struct usb_ep *ep); 3340 3340 /** 3341 - * dwc2_hsotg_core_init - issue softreset to the core 3341 + * dwc2_hsotg_core_init_disconnected - issue softreset to the core 3342 3342 * @hsotg: The device state 3343 3343 * @is_usb_reset: Usb resetting flag 3344 3344 * ··· 4374 4374 { 4375 4375 struct dwc2_hsotg_ep *hs_ep = our_ep(ep); 4376 4376 struct dwc2_hsotg *hs = hs_ep->parent; 4377 - unsigned long flags = 0; 4378 - int ret = 0; 4377 + unsigned long flags; 4378 + int ret; 4379 4379 4380 4380 spin_lock_irqsave(&hs->lock, flags); 4381 4381 ret = dwc2_hsotg_ep_sethalt(ep, value, false); ··· 4505 4505 static int dwc2_hsotg_udc_stop(struct usb_gadget *gadget) 4506 4506 { 4507 4507 struct dwc2_hsotg *hsotg = to_hsotg(gadget); 4508 - unsigned long flags = 0; 4508 + unsigned long flags; 4509 4509 int ep; 4510 4510 4511 4511 if (!hsotg) ··· 4577 4577 static int dwc2_hsotg_pullup(struct usb_gadget *gadget, int is_on) 4578 4578 { 4579 4579 struct dwc2_hsotg *hsotg = to_hsotg(gadget); 4580 - unsigned long flags = 0; 4580 + unsigned long flags; 4581 4581 4582 4582 dev_dbg(hsotg->dev, "%s: is_on: %d op_state: %d\n", __func__, is_on, 4583 4583 hsotg->op_state);
+1 -1
drivers/usb/dwc2/hcd_queue.c
··· 675 675 } 676 676 677 677 /** 678 - * dwc2_ls_pmap_unschedule() - Undo work done by dwc2_hs_pmap_schedule() 678 + * dwc2_hs_pmap_unschedule() - Undo work done by dwc2_hs_pmap_schedule() 679 679 * 680 680 * @hsotg: The HCD state structure for the DWC OTG controller. 681 681 * @qh: QH for the periodic transfer.
+2 -2
drivers/usb/dwc2/params.c
··· 784 784 } 785 785 786 786 /** 787 - * During device initialization, read various hardware configuration 788 - * registers and interpret the contents. 787 + * dwc2_get_hwparams() - During device initialization, read various hardware 788 + * configuration registers and interpret the contents. 789 789 * 790 790 * @hsotg: Programming view of the DWC_otg controller 791 791 *
+1 -1
drivers/usb/dwc2/pci.c
··· 64 64 }; 65 65 66 66 /** 67 - * dwc2_pci_probe() - Provides the cleanup entry points for the DWC_otg PCI 67 + * dwc2_pci_remove() - Provides the cleanup entry points for the DWC_otg PCI 68 68 * driver 69 69 * 70 70 * @pci: The programming view of DWC_otg PCI
+1 -1
drivers/usb/dwc2/platform.c
··· 408 408 } 409 409 410 410 /** 411 - * Check core version 411 + * dwc2_check_core_version() - Check core version 412 412 * 413 413 * @hsotg: Programming view of the DWC_otg controller 414 414 *
+6 -1
drivers/usb/dwc3/core.c
··· 1545 1545 1546 1546 dwc3_get_properties(dwc); 1547 1547 1548 + ret = dma_set_mask_and_coherent(dwc->sysdev, DMA_BIT_MASK(64)); 1549 + if (ret) 1550 + return ret; 1551 + 1548 1552 dwc->reset = devm_reset_control_array_get_optional_shared(dev); 1549 1553 if (IS_ERR(dwc->reset)) 1550 1554 return PTR_ERR(dwc->reset); ··· 1620 1616 } 1621 1617 1622 1618 dwc3_check_params(dwc); 1619 + dwc3_debugfs_init(dwc); 1623 1620 1624 1621 ret = dwc3_core_init_mode(dwc); 1625 1622 if (ret) 1626 1623 goto err5; 1627 1624 1628 - dwc3_debugfs_init(dwc); 1629 1625 pm_runtime_put(dev); 1630 1626 1631 1627 return 0; 1632 1628 1633 1629 err5: 1630 + dwc3_debugfs_exit(dwc); 1634 1631 dwc3_event_buffers_cleanup(dwc); 1635 1632 1636 1633 usb_phy_shutdown(dwc->usb2_phy);
-2
drivers/usb/dwc3/core.h
··· 1013 1013 * @link_state: link state 1014 1014 * @speed: device speed (super, high, full, low) 1015 1015 * @hwparams: copy of hwparams registers 1016 - * @root: debugfs root folder pointer 1017 1016 * @regset: debugfs pointer to regdump file 1018 1017 * @dbg_lsp_select: current debug lsp mux register selection 1019 1018 * @test_mode: true when we're entering a USB test mode ··· 1221 1222 u8 num_eps; 1222 1223 1223 1224 struct dwc3_hwparams hwparams; 1224 - struct dentry *root; 1225 1225 struct debugfs_regset32 *regset; 1226 1226 1227 1227 u32 dbg_lsp_select;
+4 -4
drivers/usb/dwc3/debugfs.c
··· 889 889 void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep) 890 890 { 891 891 struct dentry *dir; 892 + struct dentry *root; 892 893 893 - dir = debugfs_create_dir(dep->name, dep->dwc->root); 894 + root = debugfs_lookup(dev_name(dep->dwc->dev), usb_debug_root); 895 + dir = debugfs_create_dir(dep->name, root); 894 896 dwc3_debugfs_create_endpoint_files(dep, dir); 895 897 } 896 898 ··· 911 909 dwc->regset->base = dwc->regs - DWC3_GLOBALS_REGS_START; 912 910 913 911 root = debugfs_create_dir(dev_name(dwc->dev), usb_debug_root); 914 - dwc->root = root; 915 - 916 912 debugfs_create_regset32("regdump", 0444, root, dwc->regset); 917 913 debugfs_create_file("lsp_dump", 0644, root, dwc, &dwc3_lsp_fops); 918 914 ··· 929 929 930 930 void dwc3_debugfs_exit(struct dwc3 *dwc) 931 931 { 932 - debugfs_remove_recursive(dwc->root); 932 + debugfs_remove(debugfs_lookup(dev_name(dwc->dev), usb_debug_root)); 933 933 kfree(dwc->regset); 934 934 }
-1
drivers/usb/dwc3/drd.c
··· 596 596 dwc3_drd_update(dwc); 597 597 } else { 598 598 dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG); 599 - dwc->current_dr_role = DWC3_GCTL_PRTCAP_OTG; 600 599 601 600 /* use OTG block to get ID event */ 602 601 irq = dwc3_otg_get_irq(dwc);
+4 -4
drivers/usb/dwc3/dwc3-pci.c
··· 36 36 #define PCI_DEVICE_ID_INTEL_CNPH 0xa36e 37 37 #define PCI_DEVICE_ID_INTEL_CNPV 0xa3b0 38 38 #define PCI_DEVICE_ID_INTEL_ICLLP 0x34ee 39 - #define PCI_DEVICE_ID_INTEL_EHLLP 0x4b7e 39 + #define PCI_DEVICE_ID_INTEL_EHL 0x4b7e 40 40 #define PCI_DEVICE_ID_INTEL_TGPLP 0xa0ee 41 41 #define PCI_DEVICE_ID_INTEL_TGPH 0x43ee 42 42 #define PCI_DEVICE_ID_INTEL_JSP 0x4dee ··· 167 167 if (pdev->vendor == PCI_VENDOR_ID_INTEL) { 168 168 if (pdev->device == PCI_DEVICE_ID_INTEL_BXT || 169 169 pdev->device == PCI_DEVICE_ID_INTEL_BXT_M || 170 - pdev->device == PCI_DEVICE_ID_INTEL_EHLLP) { 170 + pdev->device == PCI_DEVICE_ID_INTEL_EHL) { 171 171 guid_parse(PCI_INTEL_BXT_DSM_GUID, &dwc->guid); 172 172 dwc->has_dsm_for_pm = true; 173 173 } ··· 375 375 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICLLP), 376 376 (kernel_ulong_t) &dwc3_pci_intel_swnode, }, 377 377 378 - { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_EHLLP), 379 - (kernel_ulong_t) &dwc3_pci_intel_swnode }, 378 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_EHL), 379 + (kernel_ulong_t) &dwc3_pci_intel_swnode, }, 380 380 381 381 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGPLP), 382 382 (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+3 -1
drivers/usb/dwc3/gadget.c
··· 2798 2798 list_del(&dep->endpoint.ep_list); 2799 2799 } 2800 2800 2801 - debugfs_remove_recursive(debugfs_lookup(dep->name, dwc->root)); 2801 + debugfs_remove_recursive(debugfs_lookup(dep->name, 2802 + debugfs_lookup(dev_name(dep->dwc->dev), 2803 + usb_debug_root))); 2802 2804 kfree(dep); 2803 2805 } 2804 2806 }
-2
drivers/usb/dwc3/trace.h
··· 222 222 TP_STRUCT__entry( 223 223 __string(name, dep->name) 224 224 __field(struct dwc3_trb *, trb) 225 - __field(u32, allocated) 226 - __field(u32, queued) 227 225 __field(u32, bpl) 228 226 __field(u32, bph) 229 227 __field(u32, size)
+39 -4
drivers/usb/gadget/function/f_eem.c
··· 30 30 u8 ctrl_id; 31 31 }; 32 32 33 + struct in_context { 34 + struct sk_buff *skb; 35 + struct usb_ep *ep; 36 + }; 37 + 33 38 static inline struct f_eem *func_to_eem(struct usb_function *f) 34 39 { 35 40 return container_of(f, struct f_eem, port.func); ··· 325 320 326 321 static void eem_cmd_complete(struct usb_ep *ep, struct usb_request *req) 327 322 { 328 - struct sk_buff *skb = (struct sk_buff *)req->context; 323 + struct in_context *ctx = req->context; 329 324 330 - dev_kfree_skb_any(skb); 325 + dev_kfree_skb_any(ctx->skb); 326 + kfree(req->buf); 327 + usb_ep_free_request(ctx->ep, req); 328 + kfree(ctx); 331 329 } 332 330 333 331 /* ··· 418 410 * b15: bmType (0 == data, 1 == command) 419 411 */ 420 412 if (header & BIT(15)) { 421 - struct usb_request *req = cdev->req; 413 + struct usb_request *req; 414 + struct in_context *ctx; 415 + struct usb_ep *ep; 422 416 u16 bmEEMCmd; 423 417 424 418 /* EEM command packet format: ··· 449 439 skb_trim(skb2, len); 450 440 put_unaligned_le16(BIT(15) | BIT(11) | len, 451 441 skb_push(skb2, 2)); 442 + 443 + ep = port->in_ep; 444 + req = usb_ep_alloc_request(ep, GFP_ATOMIC); 445 + if (!req) { 446 + dev_kfree_skb_any(skb2); 447 + goto next; 448 + } 449 + 450 + req->buf = kmalloc(skb2->len, GFP_KERNEL); 451 + if (!req->buf) { 452 + usb_ep_free_request(ep, req); 453 + dev_kfree_skb_any(skb2); 454 + goto next; 455 + } 456 + 457 + ctx = kmalloc(sizeof(*ctx), GFP_KERNEL); 458 + if (!ctx) { 459 + kfree(req->buf); 460 + usb_ep_free_request(ep, req); 461 + dev_kfree_skb_any(skb2); 462 + goto next; 463 + } 464 + ctx->skb = skb2; 465 + ctx->ep = ep; 466 + 452 467 skb_copy_bits(skb2, 0, req->buf, skb2->len); 453 468 req->length = skb2->len; 454 469 req->complete = eem_cmd_complete; 455 470 req->zero = 1; 456 - req->context = skb2; 471 + req->context = ctx; 457 472 if (usb_ep_queue(port->in_ep, req, GFP_ATOMIC)) 458 473 DBG(cdev, "echo response queue fail\n"); 459 474 break;
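The f_eem hunk above stops reusing the shared cdev->req for echo responses: each response now gets its own request, buffer, and context, and the completion callback frees all of them. A userspace sketch of that ownership pattern (all structures and names here are invented for illustration, not the kernel API):

```c
#include <assert.h>
#include <stdlib.h>

/* Each queued response owns a request, a buffer, and a context; the
 * completion callback releases all three, mirroring eem_cmd_complete(). */
struct fake_req {
	void *buf;
	void *context;
	void (*complete)(struct fake_req *req);
};

struct echo_ctx {
	char *payload;		/* stands in for the skb */
};

static int completed;

static void echo_complete(struct fake_req *req)
{
	struct echo_ctx *ctx = req->context;

	free(ctx->payload);
	free(req->buf);
	free(ctx);
	free(req);
	completed++;
}

static int queue_echo(size_t len)
{
	struct fake_req *req = calloc(1, sizeof(*req));
	struct echo_ctx *ctx = calloc(1, sizeof(*ctx));
	char *payload = malloc(len);

	if (!req || !ctx || !payload) {
		free(payload);
		free(ctx);
		free(req);
		return -1;
	}
	req->buf = malloc(len);
	if (!req->buf) {
		free(payload);
		free(ctx);
		free(req);
		return -1;
	}
	ctx->payload = payload;
	req->context = ctx;
	req->complete = echo_complete;
	req->complete(req);	/* pretend the controller finished it */
	return 0;
}
```

Because every in-flight response carries its own allocations, two echo commands arriving back-to-back can no longer race on one shared request.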
+32 -33
drivers/usb/gadget/function/f_fs.c
··· 250 250 static struct ffs_dev *_ffs_find_dev(const char *name); 251 251 static struct ffs_dev *_ffs_alloc_dev(void); 252 252 static void _ffs_free_dev(struct ffs_dev *dev); 253 - static void *ffs_acquire_dev(const char *dev_name); 254 - static void ffs_release_dev(struct ffs_data *ffs_data); 253 + static int ffs_acquire_dev(const char *dev_name, struct ffs_data *ffs_data); 254 + static void ffs_release_dev(struct ffs_dev *ffs_dev); 255 255 static int ffs_ready(struct ffs_data *ffs); 256 256 static void ffs_closed(struct ffs_data *ffs); 257 257 ··· 1554 1554 static int ffs_fs_get_tree(struct fs_context *fc) 1555 1555 { 1556 1556 struct ffs_sb_fill_data *ctx = fc->fs_private; 1557 - void *ffs_dev; 1558 1557 struct ffs_data *ffs; 1558 + int ret; 1559 1559 1560 1560 ENTER(); 1561 1561 ··· 1574 1574 return -ENOMEM; 1575 1575 } 1576 1576 1577 - ffs_dev = ffs_acquire_dev(ffs->dev_name); 1578 - if (IS_ERR(ffs_dev)) { 1577 + ret = ffs_acquire_dev(ffs->dev_name, ffs); 1578 + if (ret) { 1579 1579 ffs_data_put(ffs); 1580 - return PTR_ERR(ffs_dev); 1580 + return ret; 1581 1581 } 1582 1582 1583 - ffs->private_data = ffs_dev; 1584 1583 ctx->ffs_data = ffs; 1585 1584 return get_tree_nodev(fc, ffs_sb_fill); 1586 1585 } ··· 1590 1591 1591 1592 if (ctx) { 1592 1593 if (ctx->ffs_data) { 1593 - ffs_release_dev(ctx->ffs_data); 1594 1594 ffs_data_put(ctx->ffs_data); 1595 1595 } 1596 1596 ··· 1628 1630 ENTER(); 1629 1631 1630 1632 kill_litter_super(sb); 1631 - if (sb->s_fs_info) { 1632 - ffs_release_dev(sb->s_fs_info); 1633 + if (sb->s_fs_info) 1633 1634 ffs_data_closed(sb->s_fs_info); 1634 - } 1635 1635 } 1636 1636 1637 1637 static struct file_system_type ffs_fs_type = { ··· 1699 1703 if (refcount_dec_and_test(&ffs->ref)) { 1700 1704 pr_info("%s(): freeing\n", __func__); 1701 1705 ffs_data_clear(ffs); 1706 + ffs_release_dev(ffs->private_data); 1702 1707 BUG_ON(waitqueue_active(&ffs->ev.waitq) || 1703 1708 swait_active(&ffs->ep0req_completion.wait) || 1704 1709 waitqueue_active(&ffs->wait)); ··· 3029 3032 struct ffs_function *func = ffs_func_from_usb(f); 3030 3033 struct f_fs_opts *ffs_opts = 3031 3034 container_of(f->fi, struct f_fs_opts, func_inst); 3035 + struct ffs_data *ffs_data; 3032 3036 int ret; 3033 3037 3034 3038 ENTER(); ··· 3044 3046 if (!ffs_opts->no_configfs) 3045 3047 ffs_dev_lock(); 3046 3048 ret = ffs_opts->dev->desc_ready ? 0 : -ENODEV; 3047 - func->ffs = ffs_opts->dev->ffs_data; 3049 + ffs_data = ffs_opts->dev->ffs_data; 3048 3050 if (!ffs_opts->no_configfs) 3049 3051 ffs_dev_unlock(); 3050 3052 if (ret) 3051 3053 return ERR_PTR(ret); 3052 3054 3055 + func->ffs = ffs_data; 3053 3056 func->conf = c; 3054 3057 func->gadget = c->cdev->gadget; 3055 3058 ··· 3505 3506 struct f_fs_opts *opts; 3506 3507 3507 3508 opts = to_f_fs_opts(f); 3509 + ffs_release_dev(opts->dev); 3508 3510 ffs_dev_lock(); 3509 3511 _ffs_free_dev(opts->dev); 3510 3512 ffs_dev_unlock(); ··· 3693 3693 { 3694 3694 list_del(&dev->entry); 3695 3695 3696 - /* Clear the private_data pointer to stop incorrect dev access */ 3697 - if (dev->ffs_data) 3698 - dev->ffs_data->private_data = NULL; 3699 - 3700 3696 kfree(dev); 3701 3697 if (list_empty(&ffs_devices)) 3702 3698 functionfs_cleanup(); 3703 3699 } 3704 3700 3705 - static void *ffs_acquire_dev(const char *dev_name) 3701 + static int ffs_acquire_dev(const char *dev_name, struct ffs_data *ffs_data) 3706 3702 { 3703 + int ret = 0; 3707 3704 struct ffs_dev *ffs_dev; 3708 3705 3709 3706 ENTER(); 3710 3707 ffs_dev_lock(); 3711 3708 3712 3709 ffs_dev = _ffs_find_dev(dev_name); 3713 - if (!ffs_dev) 3714 - ffs_dev = ERR_PTR(-ENOENT); 3715 - else if (ffs_dev->mounted) 3716 - ffs_dev = ERR_PTR(-EBUSY); 3717 - else if (ffs_dev->ffs_acquire_dev_callback && 3718 - ffs_dev->ffs_acquire_dev_callback(ffs_dev)) 3719 - ffs_dev = ERR_PTR(-ENOENT); 3720 - else 3710 + if (!ffs_dev) { 3711 + ret = -ENOENT; 3712 + } else if (ffs_dev->mounted) { 3713 + ret = -EBUSY; 3714 + } else if (ffs_dev->ffs_acquire_dev_callback && 3715 + ffs_dev->ffs_acquire_dev_callback(ffs_dev)) { 3716 + ret = -ENOENT; 3717 + } else { 3721 3718 ffs_dev->mounted = true; 3719 + ffs_dev->ffs_data = ffs_data; 3720 + ffs_data->private_data = ffs_dev; 3721 + } 3722 3722 3723 3723 ffs_dev_unlock(); 3724 - return ffs_dev; 3724 + return ret; 3725 3725 } 3726 3726 3727 - static void ffs_release_dev(struct ffs_data *ffs_data) 3727 + static void ffs_release_dev(struct ffs_dev *ffs_dev) 3728 3728 { 3729 - struct ffs_dev *ffs_dev; 3730 - 3731 3729 ENTER(); 3732 3730 ffs_dev_lock(); 3733 3731 3734 - ffs_dev = ffs_data->private_data; 3735 - if (ffs_dev) { 3732 + if (ffs_dev && ffs_dev->mounted) { 3736 3733 ffs_dev->mounted = false; 3734 + if (ffs_dev->ffs_data) { 3735 + ffs_dev->ffs_data->private_data = NULL; 3736 + ffs_dev->ffs_data = NULL; 3737 + } 3737 3738 3738 3739 if (ffs_dev->ffs_release_dev_callback) 3739 3740 ffs_dev->ffs_release_dev_callback(ffs_dev); ··· 3762 3761 } 3763 3762 3764 3763 ffs_obj->desc_ready = true; 3765 - ffs_obj->ffs_data = ffs; 3766 3764 3767 3765 if (ffs_obj->ffs_ready_callback) { 3768 3766 ret = ffs_obj->ffs_ready_callback(ffs); ··· 3789 3789 goto done; 3790 3790 3791 3791 ffs_obj->desc_ready = false; 3792 - ffs_obj->ffs_data = NULL; 3793 3792 3794 3793 if (test_and_clear_bit(FFS_FL_CALL_CLOSED_CALLBACK, &ffs->flags) && 3795 3794 ffs_obj->ffs_closed_callback)
+2 -2
drivers/usb/gadget/function/f_hid.c
··· 88 88 static struct hid_descriptor hidg_desc = { 89 89 .bLength = sizeof hidg_desc, 90 90 .bDescriptorType = HID_DT_HID, 91 - .bcdHID = 0x0101, 91 + .bcdHID = cpu_to_le16(0x0101), 92 92 .bCountryCode = 0x00, 93 93 .bNumDescriptors = 0x1, 94 94 /*.desc[0].bDescriptorType = DYNAMIC */ ··· 1118 1118 hidg->func.setup = hidg_setup; 1119 1119 hidg->func.free_func = hidg_free; 1120 1120 1121 - /* this could me made configurable at some point */ 1121 + /* this could be made configurable at some point */ 1122 1122 hidg->qlen = 4; 1123 1123 1124 1124 return &hidg->func;
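The f_hid hunk above wraps bcdHID in cpu_to_le16() because multi-byte USB descriptor fields are little-endian on the wire; a bare 0x0101 only happens to be correct on little-endian hosts. A small host-independent sketch of the same idea (helper names are mine, not kernel helpers):

```c
#include <assert.h>
#include <stdint.h>

/* Serialize a 16-bit descriptor field explicitly low byte first,
 * which is the layout cpu_to_le16() guarantees for in-struct fields
 * regardless of host endianness. */
static void put_le16(uint8_t out[2], uint16_t v)
{
	out[0] = (uint8_t)(v & 0xff);
	out[1] = (uint8_t)(v >> 8);
}

/* Check helper for the sketch: does v serialize to (lo, hi)? */
static int le16_bytes(uint16_t v, uint8_t lo, uint8_t hi)
{
	uint8_t b[2];

	put_le16(b, v);
	return b[0] == lo && b[1] == hi;
}
```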
+1 -2
drivers/usb/gadget/function/f_printer.c
··· 667 667 value = usb_ep_queue(dev->in_ep, req, GFP_ATOMIC); 668 668 spin_lock(&dev->lock); 669 669 if (value) { 670 - list_del(&req->list); 671 - list_add(&req->list, &dev->tx_reqs); 670 + list_move(&req->list, &dev->tx_reqs); 672 671 spin_unlock_irqrestore(&dev->lock, flags); 673 672 mutex_unlock(&dev->lock_printer_io); 674 673 return -EAGAIN;
+140 -4
drivers/usb/gadget/function/f_uac2.c
··· 44 44 45 45 #define EPIN_EN(_opts) ((_opts)->p_chmask != 0) 46 46 #define EPOUT_EN(_opts) ((_opts)->c_chmask != 0) 47 + #define EPOUT_FBACK_IN_EN(_opts) ((_opts)->c_sync == USB_ENDPOINT_SYNC_ASYNC) 47 48 48 49 struct f_uac2 { 49 50 struct g_audio g_audio; ··· 274 273 .bDescriptorType = USB_DT_ENDPOINT, 275 274 276 275 .bEndpointAddress = USB_DIR_OUT, 277 - .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC, 276 + /* .bmAttributes = DYNAMIC */ 278 277 /* .wMaxPacketSize = DYNAMIC */ 279 278 .bInterval = 1, 280 279 }; ··· 283 282 .bLength = USB_DT_ENDPOINT_SIZE, 284 283 .bDescriptorType = USB_DT_ENDPOINT, 285 284 286 - .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC, 285 + /* .bmAttributes = DYNAMIC */ 287 286 /* .wMaxPacketSize = DYNAMIC */ 288 287 .bInterval = 4, 289 288 }; ··· 293 292 .bDescriptorType = USB_DT_ENDPOINT, 294 293 295 294 .bEndpointAddress = USB_DIR_OUT, 296 - .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC, 295 + /* .bmAttributes = DYNAMIC */ 297 296 /* .wMaxPacketSize = DYNAMIC */ 298 297 .bInterval = 4, 299 298 }; ··· 317 316 .bLockDelayUnits = 0, 318 317 .wLockDelay = 0, 319 318 }; 319 + 320 + /* STD AS ISO IN Feedback Endpoint */ 321 + static struct usb_endpoint_descriptor fs_epin_fback_desc = { 322 + .bLength = USB_DT_ENDPOINT_SIZE, 323 + .bDescriptorType = USB_DT_ENDPOINT, 324 + 325 + .bEndpointAddress = USB_DIR_IN, 326 + .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_USAGE_FEEDBACK, 327 + .wMaxPacketSize = cpu_to_le16(3), 328 + .bInterval = 1, 329 + }; 330 + 331 + static struct usb_endpoint_descriptor hs_epin_fback_desc = { 332 + .bLength = USB_DT_ENDPOINT_SIZE, 333 + .bDescriptorType = USB_DT_ENDPOINT, 334 + 335 + .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_USAGE_FEEDBACK, 336 + .wMaxPacketSize = cpu_to_le16(4), 337 + .bInterval = 4, 338 + }; 339 + 340 + static struct usb_endpoint_descriptor ss_epin_fback_desc = { 341 + .bLength = USB_DT_ENDPOINT_SIZE, 342 + 
.bDescriptorType = USB_DT_ENDPOINT, 343 + 344 + .bEndpointAddress = USB_DIR_IN, 345 + .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_USAGE_FEEDBACK, 346 + .wMaxPacketSize = cpu_to_le16(4), 347 + .bInterval = 4, 348 + }; 349 + 320 350 321 351 /* Audio Streaming IN Interface - Alt0 */ 322 352 static struct usb_interface_descriptor std_as_in_if0_desc = { ··· 463 431 (struct usb_descriptor_header *)&as_out_fmt1_desc, 464 432 (struct usb_descriptor_header *)&fs_epout_desc, 465 433 (struct usb_descriptor_header *)&as_iso_out_desc, 434 + (struct usb_descriptor_header *)&fs_epin_fback_desc, 466 435 467 436 (struct usb_descriptor_header *)&std_as_in_if0_desc, 468 437 (struct usb_descriptor_header *)&std_as_in_if1_desc, ··· 494 461 (struct usb_descriptor_header *)&as_out_fmt1_desc, 495 462 (struct usb_descriptor_header *)&hs_epout_desc, 496 463 (struct usb_descriptor_header *)&as_iso_out_desc, 464 + (struct usb_descriptor_header *)&hs_epin_fback_desc, 497 465 498 466 (struct usb_descriptor_header *)&std_as_in_if0_desc, 499 467 (struct usb_descriptor_header *)&std_as_in_if1_desc, ··· 526 492 (struct usb_descriptor_header *)&ss_epout_desc, 527 493 (struct usb_descriptor_header *)&ss_epout_desc_comp, 528 494 (struct usb_descriptor_header *)&as_iso_out_desc, 495 + (struct usb_descriptor_header *)&ss_epin_fback_desc, 529 496 530 497 (struct usb_descriptor_header *)&std_as_in_if0_desc, 531 498 (struct usb_descriptor_header *)&std_as_in_if1_desc, ··· 584 549 ssize = uac2_opts->c_ssize; 585 550 } 586 551 552 + if (!is_playback && (uac2_opts->c_sync == USB_ENDPOINT_SYNC_ASYNC)) 553 + srate = srate * (1000 + uac2_opts->fb_max) / 1000; 554 + 587 555 max_size_bw = num_channels(chmask) * ssize * 588 - ((srate / (factor / (1 << (ep_desc->bInterval - 1)))) + 1); 556 + DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1))); 589 557 ep_desc->wMaxPacketSize = cpu_to_le16(min_t(u16, max_size_bw, 590 558 max_size_ep)); 591 559 ··· 606 568 struct usb_ss_ep_comp_descriptor 
*epin_desc_comp = NULL; 607 569 struct usb_endpoint_descriptor *epout_desc; 608 570 struct usb_endpoint_descriptor *epin_desc; 571 + struct usb_endpoint_descriptor *epin_fback_desc; 609 572 int i; 610 573 611 574 switch (speed) { 612 575 case USB_SPEED_FULL: 613 576 epout_desc = &fs_epout_desc; 614 577 epin_desc = &fs_epin_desc; 578 + epin_fback_desc = &fs_epin_fback_desc; 615 579 break; 616 580 case USB_SPEED_HIGH: 617 581 epout_desc = &hs_epout_desc; 618 582 epin_desc = &hs_epin_desc; 583 + epin_fback_desc = &hs_epin_fback_desc; 619 584 break; 620 585 default: 621 586 epout_desc = &ss_epout_desc; 622 587 epin_desc = &ss_epin_desc; 623 588 epout_desc_comp = &ss_epout_desc_comp; 624 589 epin_desc_comp = &ss_epin_desc_comp; 590 + epin_fback_desc = &ss_epin_fback_desc; 625 591 } 626 592 627 593 i = 0; ··· 653 611 headers[i++] = USBDHDR(epout_desc_comp); 654 612 655 613 headers[i++] = USBDHDR(&as_iso_out_desc); 614 + 615 + if (EPOUT_FBACK_IN_EN(opts)) 616 + headers[i++] = USBDHDR(epin_fback_desc); 656 617 } 657 618 if (EPIN_EN(opts)) { 658 619 headers[i++] = USBDHDR(&std_as_in_if0_desc); ··· 826 781 std_as_out_if1_desc.bInterfaceNumber = ret; 827 782 uac2->as_out_intf = ret; 828 783 uac2->as_out_alt = 0; 784 + 785 + if (EPOUT_FBACK_IN_EN(uac2_opts)) { 786 + fs_epout_desc.bmAttributes = 787 + USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC; 788 + hs_epout_desc.bmAttributes = 789 + USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC; 790 + ss_epout_desc.bmAttributes = 791 + USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC; 792 + std_as_out_if1_desc.bNumEndpoints++; 793 + } else { 794 + fs_epout_desc.bmAttributes = 795 + USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ADAPTIVE; 796 + hs_epout_desc.bmAttributes = 797 + USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ADAPTIVE; 798 + ss_epout_desc.bmAttributes = 799 + USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ADAPTIVE; 800 + } 829 801 } 830 802 831 803 if (EPIN_EN(uac2_opts)) { ··· 906 844 dev_err(dev, "%s:%d Error!\n", __func__, 
__LINE__); 907 845 return -ENODEV; 908 846 } 847 + if (EPOUT_FBACK_IN_EN(uac2_opts)) { 848 + agdev->in_ep_fback = usb_ep_autoconfig(gadget, 849 + &fs_epin_fback_desc); 850 + if (!agdev->in_ep_fback) { 851 + dev_err(dev, "%s:%d Error!\n", 852 + __func__, __LINE__); 853 + return -ENODEV; 854 + } 855 + } 909 856 } 910 857 911 858 if (EPIN_EN(uac2_opts)) { ··· 938 867 le16_to_cpu(ss_epout_desc.wMaxPacketSize)); 939 868 940 869 hs_epout_desc.bEndpointAddress = fs_epout_desc.bEndpointAddress; 870 + hs_epin_fback_desc.bEndpointAddress = fs_epin_fback_desc.bEndpointAddress; 941 871 hs_epin_desc.bEndpointAddress = fs_epin_desc.bEndpointAddress; 942 872 ss_epout_desc.bEndpointAddress = fs_epout_desc.bEndpointAddress; 873 + ss_epin_fback_desc.bEndpointAddress = fs_epin_fback_desc.bEndpointAddress; 943 874 ss_epin_desc.bEndpointAddress = fs_epin_desc.bEndpointAddress; 944 875 945 876 setup_descriptor(uac2_opts); ··· 960 887 agdev->params.c_srate = uac2_opts->c_srate; 961 888 agdev->params.c_ssize = uac2_opts->c_ssize; 962 889 agdev->params.req_number = uac2_opts->req_number; 890 + agdev->params.fb_max = uac2_opts->fb_max; 963 891 ret = g_audio_setup(agdev, "UAC2 PCM", "UAC2_Gadget"); 964 892 if (ret) 965 893 goto err_free_descs; ··· 1269 1195 \ 1270 1196 CONFIGFS_ATTR(f_uac2_opts_, name) 1271 1197 1198 + #define UAC2_ATTRIBUTE_SYNC(name) \ 1199 + static ssize_t f_uac2_opts_##name##_show(struct config_item *item, \ 1200 + char *page) \ 1201 + { \ 1202 + struct f_uac2_opts *opts = to_f_uac2_opts(item); \ 1203 + int result; \ 1204 + char *str; \ 1205 + \ 1206 + mutex_lock(&opts->lock); \ 1207 + switch (opts->name) { \ 1208 + case USB_ENDPOINT_SYNC_ASYNC: \ 1209 + str = "async"; \ 1210 + break; \ 1211 + case USB_ENDPOINT_SYNC_ADAPTIVE: \ 1212 + str = "adaptive"; \ 1213 + break; \ 1214 + default: \ 1215 + str = "unknown"; \ 1216 + break; \ 1217 + } \ 1218 + result = sprintf(page, "%s\n", str); \ 1219 + mutex_unlock(&opts->lock); \ 1220 + \ 1221 + return result; \ 1222 + } \ 1223 + 
\ 1224 + static ssize_t f_uac2_opts_##name##_store(struct config_item *item, \ 1225 + const char *page, size_t len) \ 1226 + { \ 1227 + struct f_uac2_opts *opts = to_f_uac2_opts(item); \ 1228 + int ret = 0; \ 1229 + \ 1230 + mutex_lock(&opts->lock); \ 1231 + if (opts->refcnt) { \ 1232 + ret = -EBUSY; \ 1233 + goto end; \ 1234 + } \ 1235 + \ 1236 + if (!strncmp(page, "async", 5)) \ 1237 + opts->name = USB_ENDPOINT_SYNC_ASYNC; \ 1238 + else if (!strncmp(page, "adaptive", 8)) \ 1239 + opts->name = USB_ENDPOINT_SYNC_ADAPTIVE; \ 1240 + else { \ 1241 + ret = -EINVAL; \ 1242 + goto end; \ 1243 + } \ 1244 + \ 1245 + ret = len; \ 1246 + \ 1247 + end: \ 1248 + mutex_unlock(&opts->lock); \ 1249 + return ret; \ 1250 + } \ 1251 + \ 1252 + CONFIGFS_ATTR(f_uac2_opts_, name) 1253 + 1272 1254 UAC2_ATTRIBUTE(p_chmask); 1273 1255 UAC2_ATTRIBUTE(p_srate); 1274 1256 UAC2_ATTRIBUTE(p_ssize); 1275 1257 UAC2_ATTRIBUTE(c_chmask); 1276 1258 UAC2_ATTRIBUTE(c_srate); 1259 + UAC2_ATTRIBUTE_SYNC(c_sync); 1277 1260 UAC2_ATTRIBUTE(c_ssize); 1278 1261 UAC2_ATTRIBUTE(req_number); 1262 + UAC2_ATTRIBUTE(fb_max); 1279 1263 1280 1264 static struct configfs_attribute *f_uac2_attrs[] = { 1281 1265 &f_uac2_opts_attr_p_chmask, ··· 1342 1210 &f_uac2_opts_attr_c_chmask, 1343 1211 &f_uac2_opts_attr_c_srate, 1344 1212 &f_uac2_opts_attr_c_ssize, 1213 + &f_uac2_opts_attr_c_sync, 1345 1214 &f_uac2_opts_attr_req_number, 1215 + &f_uac2_opts_attr_fb_max, 1346 1216 NULL, 1347 1217 }; 1348 1218 ··· 1382 1248 opts->c_chmask = UAC2_DEF_CCHMASK; 1383 1249 opts->c_srate = UAC2_DEF_CSRATE; 1384 1250 opts->c_ssize = UAC2_DEF_CSSIZE; 1251 + opts->c_sync = UAC2_DEF_CSYNC; 1385 1252 opts->req_number = UAC2_DEF_REQ_NUM; 1253 + opts->fb_max = UAC2_DEF_FB_MAX; 1386 1254 return &opts->func_inst; 1387 1255 } 1388 1256
+223 -2
drivers/usb/gadget/function/u_audio.c
··· 16 16 #include <sound/core.h> 17 17 #include <sound/pcm.h> 18 18 #include <sound/pcm_params.h> 19 + #include <sound/control.h> 19 20 20 21 #include "u_audio.h" 21 22 ··· 36 35 37 36 void *rbuf; 38 37 38 + unsigned int pitch; /* Stream pitch ratio to 1000000 */ 39 39 unsigned int max_psize; /* MaxPacketSize of endpoint */ 40 40 41 41 struct usb_request **reqs; 42 + 43 + struct usb_request *req_fback; /* Feedback endpoint request */ 44 + bool fb_ep_enabled; /* if the ep is enabled */ 42 45 }; 43 46 44 47 struct snd_uac_chip { ··· 74 69 .period_bytes_max = PRD_SIZE_MAX, 75 70 .periods_min = MIN_PERIODS, 76 71 }; 72 + 73 + static void u_audio_set_fback_frequency(enum usb_device_speed speed, 74 + unsigned long long freq, 75 + unsigned int pitch, 76 + void *buf) 77 + { 78 + u32 ff = 0; 79 + 80 + /* 81 + * Because the pitch base is 1000000, the final divider here 82 + * will be 1000 * 1000000 = 1953125 << 9 83 + * 84 + * Instead of dealing with big numbers lets fold this 9 left shift 85 + */ 86 + 87 + if (speed == USB_SPEED_FULL) { 88 + /* 89 + * Full-speed feedback endpoints report frequency 90 + * in samples/frame 91 + * Format is encoded in Q10.10 left-justified in the 24 bits, 92 + * so that it has a Q10.14 format. 93 + * 94 + * ff = (freq << 14) / 1000 95 + */ 96 + freq <<= 5; 97 + } else { 98 + /* 99 + * High-speed feedback endpoints report frequency 100 + * in samples/microframe. 
101 + * Format is encoded in Q12.13 fitted into four bytes so that 102 + * the binary point is located between the second and the third 103 + * byte format (that is Q16.16) 104 + * 105 + * ff = (freq << 16) / 8000 106 + */ 107 + freq <<= 4; 108 + } 109 + 110 + ff = DIV_ROUND_CLOSEST_ULL((freq * pitch), 1953125); 111 + 112 + *(__le32 *)buf = cpu_to_le32(ff); 113 + } 77 114 78 115 static void u_audio_iso_complete(struct usb_ep *ep, struct usb_request *req) 79 116 { ··· 216 169 snd_pcm_period_elapsed(substream); 217 170 218 171 exit: 172 + if (usb_ep_queue(ep, req, GFP_ATOMIC)) 173 + dev_err(uac->card->dev, "%d Error!\n", __LINE__); 174 + } 175 + 176 + static void u_audio_iso_fback_complete(struct usb_ep *ep, 177 + struct usb_request *req) 178 + { 179 + struct uac_rtd_params *prm = req->context; 180 + struct snd_uac_chip *uac = prm->uac; 181 + struct g_audio *audio_dev = uac->audio_dev; 182 + struct uac_params *params = &audio_dev->params; 183 + int status = req->status; 184 + 185 + /* i/f shutting down */ 186 + if (!prm->fb_ep_enabled || req->status == -ESHUTDOWN) 187 + return; 188 + 189 + /* 190 + * We can't really do much about bad xfers. 191 + * After all, the ISOCH xfers could fail legitimately. 
192 + */ 193 + if (status) 194 + pr_debug("%s: iso_complete status(%d) %d/%d\n", 195 + __func__, status, req->actual, req->length); 196 + 197 + u_audio_set_fback_frequency(audio_dev->gadget->speed, 198 + params->c_srate, prm->pitch, 199 + req->buf); 200 + 219 201 if (usb_ep_queue(ep, req, GFP_ATOMIC)) 220 202 dev_err(uac->card->dev, "%d Error!\n", __LINE__); 221 203 } ··· 411 335 dev_err(uac->card->dev, "%s:%d Error!\n", __func__, __LINE__); 412 336 } 413 337 338 + static inline void free_ep_fback(struct uac_rtd_params *prm, struct usb_ep *ep) 339 + { 340 + struct snd_uac_chip *uac = prm->uac; 341 + 342 + if (!prm->fb_ep_enabled) 343 + return; 344 + 345 + prm->fb_ep_enabled = false; 346 + 347 + if (prm->req_fback) { 348 + usb_ep_dequeue(ep, prm->req_fback); 349 + kfree(prm->req_fback->buf); 350 + usb_ep_free_request(ep, prm->req_fback); 351 + prm->req_fback = NULL; 352 + } 353 + 354 + if (usb_ep_disable(ep)) 355 + dev_err(uac->card->dev, "%s:%d Error!\n", __func__, __LINE__); 356 + } 414 357 415 358 int u_audio_start_capture(struct g_audio *audio_dev) 416 359 { 417 360 struct snd_uac_chip *uac = audio_dev->uac; 418 361 struct usb_gadget *gadget = audio_dev->gadget; 419 362 struct device *dev = &gadget->dev; 420 - struct usb_request *req; 421 - struct usb_ep *ep; 363 + struct usb_request *req, *req_fback; 364 + struct usb_ep *ep, *ep_fback; 422 365 struct uac_rtd_params *prm; 423 366 struct uac_params *params = &audio_dev->params; 424 367 int req_len, i; ··· 469 374 dev_err(dev, "%s:%d Error!\n", __func__, __LINE__); 470 375 } 471 376 377 + ep_fback = audio_dev->in_ep_fback; 378 + if (!ep_fback) 379 + return 0; 380 + 381 + /* Setup feedback endpoint */ 382 + config_ep_by_speed(gadget, &audio_dev->func, ep_fback); 383 + prm->fb_ep_enabled = true; 384 + usb_ep_enable(ep_fback); 385 + req_len = ep_fback->maxpacket; 386 + 387 + req_fback = usb_ep_alloc_request(ep_fback, GFP_ATOMIC); 388 + if (req_fback == NULL) 389 + return -ENOMEM; 390 + 391 + prm->req_fback = 
req_fback; 392 + req_fback->zero = 0; 393 + req_fback->context = prm; 394 + req_fback->length = req_len; 395 + req_fback->complete = u_audio_iso_fback_complete; 396 + 397 + req_fback->buf = kzalloc(req_len, GFP_ATOMIC); 398 + if (!req_fback->buf) 399 + return -ENOMEM; 400 + 401 + /* 402 + * Configure the feedback endpoint's reported frequency. 403 + * Always start with original frequency since its deviation can't 404 + * be measured at start of playback 405 + */ 406 + prm->pitch = 1000000; 407 + u_audio_set_fback_frequency(audio_dev->gadget->speed, 408 + params->c_srate, prm->pitch, 409 + req_fback->buf); 410 + 411 + if (usb_ep_queue(ep_fback, req_fback, GFP_ATOMIC)) 412 + dev_err(dev, "%s:%d Error!\n", __func__, __LINE__); 413 + 472 414 return 0; 473 415 } 474 416 EXPORT_SYMBOL_GPL(u_audio_start_capture); ··· 514 382 { 515 383 struct snd_uac_chip *uac = audio_dev->uac; 516 384 385 + if (audio_dev->in_ep_fback) 386 + free_ep_fback(&uac->c_prm, audio_dev->in_ep_fback); 517 387 free_ep(&uac->c_prm, audio_dev->out_ep); 518 388 } 519 389 EXPORT_SYMBOL_GPL(u_audio_stop_capture); ··· 597 463 } 598 464 EXPORT_SYMBOL_GPL(u_audio_stop_playback); 599 465 466 + static int u_audio_pitch_info(struct snd_kcontrol *kcontrol, 467 + struct snd_ctl_elem_info *uinfo) 468 + { 469 + struct uac_rtd_params *prm = snd_kcontrol_chip(kcontrol); 470 + struct snd_uac_chip *uac = prm->uac; 471 + struct g_audio *audio_dev = uac->audio_dev; 472 + struct uac_params *params = &audio_dev->params; 473 + unsigned int pitch_min, pitch_max; 474 + 475 + pitch_min = (1000 - FBACK_SLOW_MAX) * 1000; 476 + pitch_max = (1000 + params->fb_max) * 1000; 477 + 478 + uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER; 479 + uinfo->count = 1; 480 + uinfo->value.integer.min = pitch_min; 481 + uinfo->value.integer.max = pitch_max; 482 + uinfo->value.integer.step = 1; 483 + return 0; 484 + } 485 + 486 + static int u_audio_pitch_get(struct snd_kcontrol *kcontrol, 487 + struct snd_ctl_elem_value *ucontrol) 488 + { 489 + struct 
uac_rtd_params *prm = snd_kcontrol_chip(kcontrol); 490 + 491 + ucontrol->value.integer.value[0] = prm->pitch; 492 + 493 + return 0; 494 + } 495 + 496 + static int u_audio_pitch_put(struct snd_kcontrol *kcontrol, 497 + struct snd_ctl_elem_value *ucontrol) 498 + { 499 + struct uac_rtd_params *prm = snd_kcontrol_chip(kcontrol); 500 + struct snd_uac_chip *uac = prm->uac; 501 + struct g_audio *audio_dev = uac->audio_dev; 502 + struct uac_params *params = &audio_dev->params; 503 + unsigned int val; 504 + unsigned int pitch_min, pitch_max; 505 + int change = 0; 506 + 507 + pitch_min = (1000 - FBACK_SLOW_MAX) * 1000; 508 + pitch_max = (1000 + params->fb_max) * 1000; 509 + 510 + val = ucontrol->value.integer.value[0]; 511 + 512 + if (val < pitch_min) 513 + val = pitch_min; 514 + if (val > pitch_max) 515 + val = pitch_max; 516 + 517 + if (prm->pitch != val) { 518 + prm->pitch = val; 519 + change = 1; 520 + } 521 + 522 + return change; 523 + } 524 + 525 + static const struct snd_kcontrol_new u_audio_controls[] = { 526 + { 527 + .iface = SNDRV_CTL_ELEM_IFACE_PCM, 528 + .name = "Capture Pitch 1000000", 529 + .info = u_audio_pitch_info, 530 + .get = u_audio_pitch_get, 531 + .put = u_audio_pitch_put, 532 + }, 533 + }; 534 + 600 535 int g_audio_setup(struct g_audio *g_audio, const char *pcm_name, 601 536 const char *card_name) 602 537 { 603 538 struct snd_uac_chip *uac; 604 539 struct snd_card *card; 605 540 struct snd_pcm *pcm; 541 + struct snd_kcontrol *kctl; 606 542 struct uac_params *params; 607 543 int p_chmask, c_chmask; 608 544 int err; ··· 759 555 760 556 snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &uac_pcm_ops); 761 557 snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &uac_pcm_ops); 558 + 559 + if (c_chmask && g_audio->in_ep_fback) { 560 + strscpy(card->mixername, card_name, sizeof(card->driver)); 561 + 562 + kctl = snd_ctl_new1(&u_audio_controls[0], &uac->c_prm); 563 + if (!kctl) { 564 + err = -ENOMEM; 565 + goto snd_fail; 566 + } 567 + 568 + kctl->id.device = 
pcm->device; 569 + kctl->id.subdevice = 0; 570 + 571 + err = snd_ctl_add(card, kctl); 572 + if (err < 0) 573 + goto snd_fail; 574 + } 762 575 763 576 strscpy(card->driver, card_name, sizeof(card->driver)); 764 577 strscpy(card->shortname, card_name, sizeof(card->shortname));
+12
drivers/usb/gadget/function/u_audio.h
··· 11 11 12 12 #include <linux/usb/composite.h> 13 13 14 + /* 15 + * Same maximum frequency deviation on the slower side as in 16 + * sound/usb/endpoint.c. Value is expressed in per-mil deviation. 17 + * The maximum deviation on the faster side will be provided as 18 + * parameter, as it impacts the endpoint required bandwidth. 19 + */ 20 + #define FBACK_SLOW_MAX 250 21 + 14 22 struct uac_params { 15 23 /* playback */ 16 24 int p_chmask; /* channel mask */ ··· 31 23 int c_ssize; /* sample size */ 32 24 33 25 int req_number; /* number of preallocated requests */ 26 + int fb_max; /* upper frequency drift feedback limit per-mil */ 34 27 }; 35 28 36 29 struct g_audio { ··· 39 30 struct usb_gadget *gadget; 40 31 41 32 struct usb_ep *in_ep; 33 + 42 34 struct usb_ep *out_ep; 35 + /* feedback IN endpoint corresponding to out_ep */ 36 + struct usb_ep *in_ep_fback; 43 37 44 38 /* Max packet size for all in_ep possible speeds */ 45 39 unsigned int in_ep_maxpsize;
+2 -2
drivers/usb/gadget/function/u_hid.h
··· 29 29 * Protect the data form concurrent access by read/write 30 30 * and create symlink/remove symlink. 31 31 */ 32 - struct mutex lock; 33 - int refcnt; 32 + struct mutex lock; 33 + int refcnt; 34 34 }; 35 35 36 36 int ghid_setup(struct usb_gadget *g, int count);
+2 -2
drivers/usb/gadget/function/u_midi.h
··· 29 29 * Protect the data form concurrent access by read/write 30 30 * and create symlink/remove symlink. 31 31 */ 32 - struct mutex lock; 33 - int refcnt; 32 + struct mutex lock; 33 + int refcnt; 34 34 }; 35 35 36 36 #endif /* U_MIDI_H */
+4
drivers/usb/gadget/function/u_uac2.h
··· 21 21 #define UAC2_DEF_CCHMASK 0x3 22 22 #define UAC2_DEF_CSRATE 64000 23 23 #define UAC2_DEF_CSSIZE 2 24 + #define UAC2_DEF_CSYNC USB_ENDPOINT_SYNC_ASYNC 24 25 #define UAC2_DEF_REQ_NUM 2 26 + #define UAC2_DEF_FB_MAX 5 25 27 26 28 struct f_uac2_opts { 27 29 struct usb_function_instance func_inst; ··· 33 31 int c_chmask; 34 32 int c_srate; 35 33 int c_ssize; 34 + int c_sync; 36 35 int req_number; 36 + int fb_max; 37 37 bool bound; 38 38 39 39 struct mutex lock;
-5
drivers/usb/gadget/function/uvc_configfs.c
··· 914 914 915 915 target_fmt = container_of(to_config_group(target), struct uvcg_format, 916 916 group); 917 - if (!target_fmt) 918 - goto out; 919 917 920 918 uvcg_format_set_indices(to_config_group(target)); 921 919 ··· 953 955 mutex_lock(&opts->lock); 954 956 target_fmt = container_of(to_config_group(target), struct uvcg_format, 955 957 group); 956 - if (!target_fmt) 957 - goto out; 958 958 959 959 list_for_each_entry_safe(format_ptr, tmp, &src_hdr->formats, entry) 960 960 if (format_ptr->fmt == target_fmt) { ··· 964 968 965 969 --target_fmt->linked; 966 970 967 - out: 968 971 mutex_unlock(&opts->lock); 969 972 mutex_unlock(su_mutex); 970 973 }
+3 -1
drivers/usb/gadget/legacy/hid.c
··· 171 171 struct usb_descriptor_header *usb_desc; 172 172 173 173 usb_desc = usb_otg_descriptor_alloc(gadget); 174 - if (!usb_desc) 174 + if (!usb_desc) { 175 + status = -ENOMEM; 175 176 goto put; 177 + } 176 178 usb_otg_descriptor_init(gadget, usb_desc); 177 179 otg_desc[0] = usb_desc; 178 180 otg_desc[1] = NULL;
+1 -6
drivers/usb/gadget/udc/bcm63xx_udc.c
··· 288 288 * @ep0_req_completed: ep0 request has completed; worker has not seen it yet. 289 289 * @ep0_reply: Pending reply from gadget driver. 290 290 * @ep0_request: Outstanding ep0 request. 291 - * @debugfs_root: debugfs directory: /sys/kernel/debug/<DRV_MODULE_NAME>. 292 291 */ 293 292 struct bcm63xx_udc { 294 293 spinlock_t lock; ··· 326 327 unsigned ep0_req_completed:1; 327 328 struct usb_request *ep0_reply; 328 329 struct usb_request *ep0_request; 329 - 330 - struct dentry *debugfs_root; 331 330 }; 332 331 333 332 static const struct usb_ep_ops bcm63xx_udc_ep_ops; ··· 2247 2250 return; 2248 2251 2249 2252 root = debugfs_create_dir(udc->gadget.name, usb_debug_root); 2250 - udc->debugfs_root = root; 2251 - 2252 2253 debugfs_create_file("usbd", 0400, root, udc, &bcm63xx_usbd_dbg_fops); 2253 2254 debugfs_create_file("iudma", 0400, root, udc, &bcm63xx_iudma_dbg_fops); 2254 2255 } ··· 2259 2264 */ 2260 2265 static void bcm63xx_udc_cleanup_debugfs(struct bcm63xx_udc *udc) 2261 2266 { 2262 - debugfs_remove_recursive(udc->debugfs_root); 2267 + debugfs_remove(debugfs_lookup(udc->gadget.name, usb_debug_root)); 2263 2268 } 2264 2269 2265 2270 /***********************************************************************
+49
drivers/usb/gadget/udc/core.c
··· 1148 1148 } 1149 1149 1150 1150 /** 1151 + * usb_gadget_enable_async_callbacks - tell usb device controller to enable asynchronous callbacks 1152 + * @udc: The UDC which should enable async callbacks 1153 + * 1154 + * This routine is used when binding gadget drivers. It undoes the effect 1155 + * of usb_gadget_disable_async_callbacks(); the UDC driver should enable IRQs 1156 + * (if necessary) and resume issuing callbacks. 1157 + * 1158 + * This routine will always be called in process context. 1159 + */ 1160 + static inline void usb_gadget_enable_async_callbacks(struct usb_udc *udc) 1161 + { 1162 + struct usb_gadget *gadget = udc->gadget; 1163 + 1164 + if (gadget->ops->udc_async_callbacks) 1165 + gadget->ops->udc_async_callbacks(gadget, true); 1166 + } 1167 + 1168 + /** 1169 + * usb_gadget_disable_async_callbacks - tell usb device controller to disable asynchronous callbacks 1170 + * @udc: The UDC which should disable async callbacks 1171 + * 1172 + * This routine is used when unbinding gadget drivers. It prevents a race: 1173 + * The UDC driver doesn't know when the gadget driver's ->unbind callback 1174 + * runs, so unless it is told to disable asynchronous callbacks, it might 1175 + * issue a callback (such as ->disconnect) after the unbind has completed. 1176 + * 1177 + * After this function runs, the UDC driver must suppress all ->suspend, 1178 + * ->resume, ->disconnect, ->reset, and ->setup callbacks to the gadget driver 1179 + * until async callbacks are again enabled. A simple-minded but effective 1180 + * way to accomplish this is to tell the UDC hardware not to generate any 1181 + * more IRQs. 1182 + * 1183 + * Request completion callbacks must still be issued. However, it's okay 1184 + * to defer them until the request is cancelled, since the pull-up will be 1185 + * turned off during the time period when async callbacks are disabled. 1186 + * 1187 + * This routine will always be called in process context. 
1188 + */ 1189 + static inline void usb_gadget_disable_async_callbacks(struct usb_udc *udc) 1190 + { 1191 + struct usb_gadget *gadget = udc->gadget; 1192 + 1193 + if (gadget->ops->udc_async_callbacks) 1194 + gadget->ops->udc_async_callbacks(gadget, false); 1195 + } 1196 + 1197 + /** 1151 1198 * usb_udc_release - release the usb_udc struct 1152 1199 * @dev: the dev member within usb_udc 1153 1200 * ··· 1408 1361 kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE); 1409 1362 1410 1363 usb_gadget_disconnect(udc->gadget); 1364 + usb_gadget_disable_async_callbacks(udc); 1411 1365 if (udc->gadget->irq) 1412 1366 synchronize_irq(udc->gadget->irq); 1413 1367 udc->driver->unbind(udc->gadget); ··· 1490 1442 driver->unbind(udc->gadget); 1491 1443 goto err1; 1492 1444 } 1445 + usb_gadget_enable_async_callbacks(udc); 1493 1446 usb_udc_connect_control(udc); 1494 1447 1495 1448 kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
+10 -1
drivers/usb/gadget/udc/dummy_hcd.c
··· 934 934 dummy_udc_update_ep0(dum); 935 935 } 936 936 937 + static void dummy_udc_async_callbacks(struct usb_gadget *_gadget, bool enable) 938 + { 939 + struct dummy *dum = gadget_dev_to_dummy(&_gadget->dev); 940 + 941 + spin_lock_irq(&dum->lock); 942 + dum->ints_enabled = enable; 943 + spin_unlock_irq(&dum->lock); 944 + } 945 + 937 946 static int dummy_udc_start(struct usb_gadget *g, 938 947 struct usb_gadget_driver *driver); 939 948 static int dummy_udc_stop(struct usb_gadget *g); ··· 955 946 .udc_start = dummy_udc_start, 956 947 .udc_stop = dummy_udc_stop, 957 948 .udc_set_speed = dummy_udc_set_speed, 949 + .udc_async_callbacks = dummy_udc_async_callbacks, 958 950 }; 959 951 960 952 /*-------------------------------------------------------------------------*/ ··· 1015 1005 spin_lock_irq(&dum->lock); 1016 1006 dum->devstatus = 0; 1017 1007 dum->driver = driver; 1018 - dum->ints_enabled = 1; 1019 1008 spin_unlock_irq(&dum->lock); 1020 1009 1021 1010 return 0;
+5
drivers/usb/gadget/udc/fsl_qe_udc.c
··· 541 541 case USB_SPEED_HIGH: 542 542 if ((max == 128) || (max == 256) || (max == 512)) 543 543 break; 544 + fallthrough; 544 545 default: 545 546 switch (max) { 546 547 case 4: ··· 563 562 case USB_SPEED_HIGH: 564 563 if (max <= 1024) 565 564 break; 565 + fallthrough; 566 566 case USB_SPEED_FULL: 567 567 if (max <= 64) 568 568 break; 569 + fallthrough; 569 570 default: 570 571 if (max <= 8) 571 572 break; ··· 582 579 case USB_SPEED_HIGH: 583 580 if (max <= 1024) 584 581 break; 582 + fallthrough; 585 583 case USB_SPEED_FULL: 586 584 if (max <= 1023) 587 585 break; ··· 609 605 default: 610 606 goto en_done; 611 607 } 608 + fallthrough; 612 609 case USB_SPEED_LOW: 613 610 switch (max) { 614 611 case 1:
+10 -34
drivers/usb/gadget/udc/fsl_udc_core.c
··· 36 36 #include <linux/platform_device.h> 37 37 #include <linux/fsl_devices.h> 38 38 #include <linux/dmapool.h> 39 - #include <linux/delay.h> 40 39 #include <linux/of_device.h> 41 40 42 41 #include <asm/byteorder.h> ··· 322 323 fsl_writel(tmp, &dr_regs->endptctrl[ep_num]); 323 324 } 324 325 /* Config control enable i/o output, cpu endian register */ 325 - #ifndef CONFIG_ARCH_MXC 326 326 if (udc->pdata->have_sysif_regs) { 327 327 ctrl = __raw_readl(&usb_sys_regs->control); 328 328 ctrl |= USB_CTRL_IOENB; 329 329 __raw_writel(ctrl, &usb_sys_regs->control); 330 330 } 331 - #endif 332 331 333 332 #if defined(CONFIG_PPC32) && !defined(CONFIG_NOT_COHERENT_CACHE) 334 333 /* Turn on cache snooping hardware, since some PowerPC platforms ··· 544 547 unsigned short max = 0; 545 548 unsigned char mult = 0, zlt; 546 549 int retval = -EINVAL; 547 - unsigned long flags = 0; 550 + unsigned long flags; 548 551 549 552 ep = container_of(_ep, struct fsl_ep, ep); 550 553 ··· 628 631 { 629 632 struct fsl_udc *udc = NULL; 630 633 struct fsl_ep *ep = NULL; 631 - unsigned long flags = 0; 634 + unsigned long flags; 632 635 u32 epctrl; 633 636 int ep_num; 634 637 ··· 998 1001 static int fsl_ep_set_halt(struct usb_ep *_ep, int value) 999 1002 { 1000 1003 struct fsl_ep *ep = NULL; 1001 - unsigned long flags = 0; 1004 + unsigned long flags; 1002 1005 int status = -EOPNOTSUPP; /* operation not supported */ 1003 1006 unsigned char ep_dir = 0, ep_num = 0; 1004 1007 struct fsl_udc *udc = NULL; ··· 1935 1938 struct usb_gadget_driver *driver) 1936 1939 { 1937 1940 int retval = 0; 1938 - unsigned long flags = 0; 1941 + unsigned long flags; 1939 1942 1940 1943 /* lock is needed but whether should use this lock or another */ 1941 1944 spin_lock_irqsave(&udc_controller->lock, flags); ··· 2150 2153 tmp_reg = fsl_readl(&dr_regs->endpointprime); 2151 2154 seq_printf(m, "EP Prime Reg = [0x%x]\n\n", tmp_reg); 2152 2155 2153 - #ifndef CONFIG_ARCH_MXC 2154 2156 if (udc->pdata->have_sysif_regs) { 2155 2157 
tmp_reg = usb_sys_regs->snoop1; 2156 2158 seq_printf(m, "Snoop1 Reg : = [0x%x]\n\n", tmp_reg); ··· 2157 2161 tmp_reg = usb_sys_regs->control; 2158 2162 seq_printf(m, "General Control Reg : = [0x%x]\n\n", tmp_reg); 2159 2163 } 2160 - #endif 2161 2164 2162 2165 /* ------fsl_udc, fsl_ep, fsl_request structure information ----- */ 2163 2166 ep = &udc->eps[0]; ··· 2407 2412 */ 2408 2413 if (pdata->init && pdata->init(pdev)) { 2409 2414 ret = -ENODEV; 2410 - goto err_iounmap_noclk; 2415 + goto err_iounmap; 2411 2416 } 2412 2417 2413 2418 /* Set accessors only after pdata->init() ! */ 2414 2419 fsl_set_accessors(pdata); 2415 2420 2416 - #ifndef CONFIG_ARCH_MXC 2417 2421 if (pdata->have_sysif_regs) 2418 2422 usb_sys_regs = (void *)dr_regs + USB_DR_SYS_OFFSET; 2419 - #endif 2420 - 2421 - /* Initialize USB clocks */ 2422 - ret = fsl_udc_clk_init(pdev); 2423 - if (ret < 0) 2424 - goto err_iounmap_noclk; 2425 2423 2426 2424 /* Read Device Controller Capability Parameters register */ 2427 2425 dccparams = fsl_readl(&dr_regs->dccparams); 2428 2426 if (!(dccparams & DCCPARAMS_DC)) { 2429 2427 ERR("This SOC doesn't support device role\n"); 2430 2428 ret = -ENODEV; 2431 - goto err_iounmap; 2429 + goto err_exit; 2432 2430 } 2433 2431 /* Get max device endpoints */ 2434 2432 /* DEN is bidirectional ep number, max_ep doubles the number */ ··· 2430 2442 ret = platform_get_irq(pdev, 0); 2431 2443 if (ret <= 0) { 2432 2444 ret = ret ? 
: -ENODEV; 2433 - goto err_iounmap; 2445 + goto err_exit; 2434 2446 } 2435 2447 udc_controller->irq = ret; 2436 2448 ··· 2439 2451 if (ret != 0) { 2440 2452 ERR("cannot request irq %d err %d\n", 2441 2453 udc_controller->irq, ret); 2442 - goto err_iounmap; 2454 + goto err_exit; 2443 2455 } 2444 2456 2445 2457 /* Initialize the udc structure including QH member and other member */ ··· 2454 2466 * leave usbintr reg untouched */ 2455 2467 dr_controller_setup(udc_controller); 2456 2468 } 2457 - 2458 - ret = fsl_udc_clk_finalize(pdev); 2459 - if (ret) 2460 - goto err_free_irq; 2461 2469 2462 2470 /* Setup gadget structure */ 2463 2471 udc_controller->gadget.ops = &fsl_gadget_ops; ··· 2514 2530 dma_pool_destroy(udc_controller->td_pool); 2515 2531 err_free_irq: 2516 2532 free_irq(udc_controller->irq, udc_controller); 2517 - err_iounmap: 2533 + err_exit: 2518 2534 if (pdata->exit) 2519 2535 pdata->exit(pdev); 2520 - fsl_udc_clk_release(); 2521 - err_iounmap_noclk: 2536 + err_iounmap: 2522 2537 iounmap(dr_regs); 2523 2538 err_release_mem_region: 2524 2539 if (pdata->operating_mode == FSL_USB2_DR_DEVICE) ··· 2543 2560 2544 2561 udc_controller->done = &done; 2545 2562 usb_del_gadget_udc(&udc_controller->gadget); 2546 - 2547 - fsl_udc_clk_release(); 2548 2563 2549 2564 /* DR has been stopped in usb_gadget_unregister_driver() */ 2550 2565 remove_proc_file(); ··· 2658 2677 --------------------------------------------------------------------------*/ 2659 2678 static const struct platform_device_id fsl_udc_devtype[] = { 2660 2679 { 2661 - .name = "imx-udc-mx27", 2662 - }, { 2663 - .name = "imx-udc-mx51", 2664 - }, { 2665 2680 .name = "fsl-usb2-udc", 2666 2681 }, { 2667 2682 /* sentinel */ ··· 2666 2689 MODULE_DEVICE_TABLE(platform, fsl_udc_devtype); 2667 2690 static struct platform_driver udc_driver = { 2668 2691 .remove = fsl_udc_remove, 2669 - /* Just for FSL i.mx SoC currently */ 2670 2692 .id_table = fsl_udc_devtype, 2671 2693 /* these suspend and resume are not usb suspend 
and resume */ 2672 2694 .suspend = fsl_udc_suspend,
-19
drivers/usb/gadget/udc/fsl_usb2_udc.h
··· 588 588 USB_DIR_IN) ? 1 : 0]; 589 589 } 590 590 591 - struct platform_device; 592 - #ifdef CONFIG_ARCH_MXC 593 - int fsl_udc_clk_init(struct platform_device *pdev); 594 - int fsl_udc_clk_finalize(struct platform_device *pdev); 595 - void fsl_udc_clk_release(void); 596 - #else 597 - static inline int fsl_udc_clk_init(struct platform_device *pdev) 598 - { 599 - return 0; 600 - } 601 - static inline int fsl_udc_clk_finalize(struct platform_device *pdev) 602 - { 603 - return 0; 604 - } 605 - static inline void fsl_udc_clk_release(void) 606 - { 607 - } 608 - #endif 609 - 610 591 #endif
+4 -3
drivers/usb/gadget/udc/gr_udc.c
··· 207 207 static void gr_dfs_create(struct gr_udc *dev) 208 208 { 209 209 const char *name = "gr_udc_state"; 210 + struct dentry *root; 210 211 211 - dev->dfs_root = debugfs_create_dir(dev_name(dev->dev), usb_debug_root); 212 - debugfs_create_file(name, 0444, dev->dfs_root, dev, &gr_dfs_fops); 212 + root = debugfs_create_dir(dev_name(dev->dev), usb_debug_root); 213 + debugfs_create_file(name, 0444, root, dev, &gr_dfs_fops); 213 214 } 214 215 215 216 static void gr_dfs_delete(struct gr_udc *dev) 216 217 { 217 - debugfs_remove_recursive(dev->dfs_root); 218 + debugfs_remove(debugfs_lookup(dev_name(dev->dev), usb_debug_root)); 218 219 } 219 220 220 221 #else /* !CONFIG_USB_GADGET_DEBUG_FS */
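This gr_udc change is the first of several near-identical cleanups in this pull (lpc32xx, pxa25x, pxa27x, s3c2410, fotg210 follow the same shape): instead of caching the debugfs dentry in the device structure, the driver re-resolves the entry by name at teardown with `debugfs_lookup()`, so the cached field can be deleted. The idea, reduced to a userspace registry sketch (names and the flat table are illustrative, not the debugfs API):

```c
#include <string.h>

/* a toy flat "filesystem": name -> entry, like one debugfs directory */
#define MAX_ENTRIES 16

struct entry {
    char name[32];
    int used;
};

static struct entry fs[MAX_ENTRIES];

/* create's return value is ignored: callers no longer cache the handle */
static struct entry *fs_create(const char *name)
{
    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (!fs[i].used) {
            fs[i].used = 1;
            strncpy(fs[i].name, name, sizeof(fs[i].name) - 1);
            return &fs[i];
        }
    }
    return NULL;
}

static struct entry *fs_lookup(const char *name)
{
    for (int i = 0; i < MAX_ENTRIES; i++)
        if (fs[i].used && !strcmp(fs[i].name, name))
            return &fs[i];
    return NULL;
}

/* NULL-safe, like debugfs_remove() */
static void fs_remove(struct entry *e)
{
    if (e)
        e->used = 0;
}

/* teardown by name: the debugfs_remove(debugfs_lookup(...)) pattern */
static void fs_remove_by_name(const char *name)
{
    fs_remove(fs_lookup(name));
}
```

The trade is a name lookup at remove time for one less pointer field per device; removal stays safe even if creation failed, because the lookup simply returns NULL.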
-2
drivers/usb/gadget/udc/gr_udc.h
··· 215 215 struct list_head ep_list; 216 216 217 217 spinlock_t lock; /* General lock, a.k.a. "dev->lock" in comments */ 218 - 219 - struct dentry *dfs_root; 220 218 }; 221 219 222 220 #define to_gr_udc(gadget) (container_of((gadget), struct gr_udc, gadget))
+2 -3
drivers/usb/gadget/udc/lpc32xx_udc.c
··· 127 127 struct usb_gadget_driver *driver; 128 128 struct platform_device *pdev; 129 129 struct device *dev; 130 - struct dentry *pde; 131 130 spinlock_t lock; 132 131 struct i2c_client *isp1301_i2c_client; 133 132 ··· 527 528 528 529 static void create_debug_file(struct lpc32xx_udc *udc) 529 530 { 530 - udc->pde = debugfs_create_file(debug_filename, 0, NULL, udc, &udc_fops); 531 + debugfs_create_file(debug_filename, 0, NULL, udc, &udc_fops); 531 532 } 532 533 533 534 static void remove_debug_file(struct lpc32xx_udc *udc) 534 535 { 535 - debugfs_remove(udc->pde); 536 + debugfs_remove(debugfs_lookup(debug_filename, NULL)); 536 537 } 537 538 538 539 #else
+1 -1
drivers/usb/gadget/udc/mv_u3d_core.c
··· 941 941 static int mv_u3d_ep_set_halt_wedge(struct usb_ep *_ep, int halt, int wedge) 942 942 { 943 943 struct mv_u3d_ep *ep; 944 - unsigned long flags = 0; 944 + unsigned long flags; 945 945 int status = 0; 946 946 struct mv_u3d *u3d; 947 947
+1 -1
drivers/usb/gadget/udc/mv_udc_core.c
··· 888 888 static int mv_ep_set_halt_wedge(struct usb_ep *_ep, int halt, int wedge) 889 889 { 890 890 struct mv_ep *ep; 891 - unsigned long flags = 0; 891 + unsigned long flags; 892 892 int status = 0; 893 893 struct mv_udc *udc; 894 894
+27 -14
drivers/usb/gadget/udc/net2272.c
··· 1150 1150 static int net2272_start(struct usb_gadget *_gadget, 1151 1151 struct usb_gadget_driver *driver); 1152 1152 static int net2272_stop(struct usb_gadget *_gadget); 1153 + static void net2272_async_callbacks(struct usb_gadget *_gadget, bool enable); 1153 1154 1154 1155 static const struct usb_gadget_ops net2272_ops = { 1155 1156 .get_frame = net2272_get_frame, ··· 1159 1158 .pullup = net2272_pullup, 1160 1159 .udc_start = net2272_start, 1161 1160 .udc_stop = net2272_stop, 1161 + .udc_async_callbacks = net2272_async_callbacks, 1162 1162 }; 1163 1163 1164 1164 /*---------------------------------------------------------------------------*/ ··· 1478 1476 net2272_dequeue_all(&dev->ep[i]); 1479 1477 1480 1478 /* report disconnect; the driver is already quiesced */ 1481 - if (driver) { 1479 + if (dev->async_callbacks && driver) { 1482 1480 spin_unlock(&dev->lock); 1483 1481 driver->disconnect(&dev->gadget); 1484 1482 spin_lock(&dev->lock); ··· 1501 1499 dev->driver = NULL; 1502 1500 1503 1501 return 0; 1502 + } 1503 + 1504 + static void net2272_async_callbacks(struct usb_gadget *_gadget, bool enable) 1505 + { 1506 + struct net2272 *dev = container_of(_gadget, struct net2272, gadget); 1507 + 1508 + spin_lock_irq(&dev->lock); 1509 + dev->async_callbacks = enable; 1510 + spin_unlock_irq(&dev->lock); 1504 1511 } 1505 1512 1506 1513 /*---------------------------------------------------------------------------*/ ··· 1921 1910 u.r.bRequestType, u.r.bRequest, 1922 1911 u.r.wValue, u.r.wIndex, 1923 1912 net2272_ep_read(ep, EP_CFG)); 1924 - spin_unlock(&dev->lock); 1925 - tmp = dev->driver->setup(&dev->gadget, &u.r); 1926 - spin_lock(&dev->lock); 1913 + if (dev->async_callbacks) { 1914 + spin_unlock(&dev->lock); 1915 + tmp = dev->driver->setup(&dev->gadget, &u.r); 1916 + spin_lock(&dev->lock); 1917 + } 1927 1918 } 1928 1919 1929 1920 /* stall ep0 on error */ ··· 2007 1994 if (disconnect || reset) { 2008 1995 stop_activity(dev, dev->driver); 2009 1996 
net2272_ep0_start(dev); 2010 - spin_unlock(&dev->lock); 2011 - if (reset) 2012 - usb_gadget_udc_reset 2013 - (&dev->gadget, dev->driver); 2014 - else 2015 - (dev->driver->disconnect) 2016 - (&dev->gadget); 2017 - spin_lock(&dev->lock); 1997 + if (dev->async_callbacks) { 1998 + spin_unlock(&dev->lock); 1999 + if (reset) 2000 + usb_gadget_udc_reset(&dev->gadget, dev->driver); 2001 + else 2002 + (dev->driver->disconnect)(&dev->gadget); 2003 + spin_lock(&dev->lock); 2004 + } 2018 2005 return; 2019 2006 } 2020 2007 } ··· 2028 2015 if (stat & tmp) { 2029 2016 net2272_write(dev, IRQSTAT1, tmp); 2030 2017 if (stat & (1 << SUSPEND_REQUEST_INTERRUPT)) { 2031 - if (dev->driver->suspend) 2018 + if (dev->async_callbacks && dev->driver->suspend) 2032 2019 dev->driver->suspend(&dev->gadget); 2033 2020 if (!enable_suspend) { 2034 2021 stat &= ~(1 << SUSPEND_REQUEST_INTERRUPT); 2035 2022 dev_dbg(dev->dev, "Suspend disabled, ignoring\n"); 2036 2023 } 2037 2024 } else { 2038 - if (dev->driver->resume) 2025 + if (dev->async_callbacks && dev->driver->resume) 2039 2026 dev->driver->resume(&dev->gadget); 2040 2027 } 2041 2028 stat &= ~tmp;
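The net2272 hunks above all follow one pattern: every invocation of a gadget-driver callback (setup, disconnect, suspend, resume) is now gated on a `dev->async_callbacks` flag that the UDC core toggles through the new `udc_async_callbacks` op. A minimal single-threaded sketch of that gating (the kernel holds `dev->lock`, a spinlock, around the flag and drops it across the callback; struct and function names here are illustrative):

```c
#include <stdbool.h>
#include <stddef.h>

struct gadget_cbs {
    void (*disconnect)(void *ctx);
};

/* the kernel guards async_callbacks with dev->lock (a spinlock);
 * a plain bool is enough for a single-threaded sketch */
struct udc {
    bool async_callbacks;
    const struct gadget_cbs *driver;
    void *ctx;
};

/* the new udc_async_callbacks gadget op: the core toggles delivery */
static void udc_set_async_callbacks(struct udc *dev, bool enable)
{
    dev->async_callbacks = enable;
}

/* irq-path pattern: deliver the callback only when both a bound
 * driver and the async_callbacks flag are present */
static bool udc_report_disconnect(struct udc *dev)
{
    if (dev->async_callbacks && dev->driver) {
        dev->driver->disconnect(dev->ctx);
        return true;
    }
    return false;
}
```

This lets the core suppress callbacks during bind/unbind windows where the gadget driver is not ready to receive them, instead of each UDC driver inventing its own guard.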
+1
drivers/usb/gadget/udc/net2272.h
··· 442 442 softconnect:1, 443 443 wakeup:1, 444 444 added:1, 445 + async_callbacks:1, 445 446 dma_eot_polarity:1, 446 447 dma_dack_polarity:1, 447 448 dma_dreq_polarity:1,
+32 -19
drivers/usb/gadget/udc/net2280.c
··· 1617 1617 static int net2280_start(struct usb_gadget *_gadget, 1618 1618 struct usb_gadget_driver *driver); 1619 1619 static int net2280_stop(struct usb_gadget *_gadget); 1620 + static void net2280_async_callbacks(struct usb_gadget *_gadget, bool enable); 1620 1621 1621 1622 static const struct usb_gadget_ops net2280_ops = { 1622 1623 .get_frame = net2280_get_frame, ··· 1626 1625 .pullup = net2280_pullup, 1627 1626 .udc_start = net2280_start, 1628 1627 .udc_stop = net2280_stop, 1628 + .udc_async_callbacks = net2280_async_callbacks, 1629 1629 .match_ep = net2280_match_ep, 1630 1630 }; 1631 1631 ··· 2474 2472 nuke(&dev->ep[i]); 2475 2473 2476 2474 /* report disconnect; the driver is already quiesced */ 2477 - if (driver) { 2475 + if (dev->async_callbacks && driver) { 2478 2476 spin_unlock(&dev->lock); 2479 2477 driver->disconnect(&dev->gadget); 2480 2478 spin_lock(&dev->lock); ··· 2502 2500 dev->driver = NULL; 2503 2501 2504 2502 return 0; 2503 + } 2504 + 2505 + static void net2280_async_callbacks(struct usb_gadget *_gadget, bool enable) 2506 + { 2507 + struct net2280 *dev = container_of(_gadget, struct net2280, gadget); 2508 + 2509 + spin_lock_irq(&dev->lock); 2510 + dev->async_callbacks = enable; 2511 + spin_unlock_irq(&dev->lock); 2505 2512 } 2506 2513 2507 2514 /*-------------------------------------------------------------------------*/ ··· 2825 2814 * - Wait and try again. 
2826 2815 */ 2827 2816 udelay(DEFECT_7374_PROCESSOR_WAIT_TIME); 2828 - 2829 - continue; 2830 2817 } 2831 2818 2832 2819 ··· 3051 3042 readl(&ep->cfg->ep_cfg)); 3052 3043 3053 3044 ep->responded = 0; 3054 - spin_unlock(&dev->lock); 3055 - tmp = dev->driver->setup(&dev->gadget, &r); 3056 - spin_lock(&dev->lock); 3045 + if (dev->async_callbacks) { 3046 + spin_unlock(&dev->lock); 3047 + tmp = dev->driver->setup(&dev->gadget, &r); 3048 + spin_lock(&dev->lock); 3049 + } 3057 3050 } 3058 3051 do_stall3: 3059 3052 if (tmp < 0) { ··· 3295 3284 w_value, w_index, w_length, 3296 3285 readl(&ep->cfg->ep_cfg)); 3297 3286 ep->responded = 0; 3298 - spin_unlock(&dev->lock); 3299 - tmp = dev->driver->setup(&dev->gadget, &u.r); 3300 - spin_lock(&dev->lock); 3287 + if (dev->async_callbacks) { 3288 + spin_unlock(&dev->lock); 3289 + tmp = dev->driver->setup(&dev->gadget, &u.r); 3290 + spin_lock(&dev->lock); 3291 + } 3301 3292 } 3302 3293 3303 3294 /* stall ep0 on error */ ··· 3404 3391 if (disconnect || reset) { 3405 3392 stop_activity(dev, dev->driver); 3406 3393 ep0_start(dev); 3407 - spin_unlock(&dev->lock); 3408 - if (reset) 3409 - usb_gadget_udc_reset 3410 - (&dev->gadget, dev->driver); 3411 - else 3412 - (dev->driver->disconnect) 3413 - (&dev->gadget); 3414 - spin_lock(&dev->lock); 3394 + if (dev->async_callbacks) { 3395 + spin_unlock(&dev->lock); 3396 + if (reset) 3397 + usb_gadget_udc_reset(&dev->gadget, dev->driver); 3398 + else 3399 + (dev->driver->disconnect)(&dev->gadget); 3400 + spin_lock(&dev->lock); 3401 + } 3415 3402 return; 3416 3403 } 3417 3404 } ··· 3432 3419 writel(tmp, &dev->regs->irqstat1); 3433 3420 spin_unlock(&dev->lock); 3434 3421 if (stat & BIT(SUSPEND_REQUEST_INTERRUPT)) { 3435 - if (dev->driver->suspend) 3422 + if (dev->async_callbacks && dev->driver->suspend) 3436 3423 dev->driver->suspend(&dev->gadget); 3437 3424 if (!enable_suspend) 3438 3425 stat &= ~BIT(SUSPEND_REQUEST_INTERRUPT); 3439 3426 } else { 3440 - if (dev->driver->resume) 3427 + if 
(dev->async_callbacks && dev->driver->resume) 3441 3428 dev->driver->resume(&dev->gadget); 3442 3429 /* at high speed, note erratum 0133 */ 3443 3430 }
+1
drivers/usb/gadget/udc/net2280.h
··· 162 162 ltm_enable:1, 163 163 wakeup_enable:1, 164 164 addressed_state:1, 165 + async_callbacks:1, 165 166 bug7734_patched:1; 166 167 u16 chiprev; 167 168 int enhanced_mode;
+2 -2
drivers/usb/gadget/udc/pxa25x_udc.c
··· 1338 1338 1339 1339 #define create_debug_files(dev) \ 1340 1340 do { \ 1341 - dev->debugfs_udc = debugfs_create_file(dev->gadget.name, \ 1341 + debugfs_create_file(dev->gadget.name, \ 1342 1342 S_IRUGO, NULL, dev, &udc_debug_fops); \ 1343 1343 } while (0) 1344 - #define remove_debug_files(dev) debugfs_remove(dev->debugfs_udc) 1344 + #define remove_debug_files(dev) debugfs_remove(debugfs_lookup(dev->gadget.name, NULL)) 1345 1345 1346 1346 #else /* !CONFIG_USB_GADGET_DEBUG_FILES */ 1347 1347
-4
drivers/usb/gadget/udc/pxa25x_udc.h
··· 116 116 struct usb_phy *transceiver; 117 117 u64 dma_mask; 118 118 struct pxa25x_ep ep [PXA_UDC_NUM_ENDPOINTS]; 119 - 120 - #ifdef CONFIG_USB_GADGET_DEBUG_FS 121 - struct dentry *debugfs_udc; 122 - #endif 123 119 void __iomem *regs; 124 120 }; 125 121 #define to_pxa25x(g) (container_of((g), struct pxa25x_udc, gadget))
+2 -4
drivers/usb/gadget/udc/pxa27x_udc.c
··· 208 208 struct dentry *root; 209 209 210 210 root = debugfs_create_dir(udc->gadget.name, usb_debug_root); 211 - udc->debugfs_root = root; 212 - 213 211 debugfs_create_file("udcstate", 0400, root, udc, &state_dbg_fops); 214 212 debugfs_create_file("queues", 0400, root, udc, &queues_dbg_fops); 215 213 debugfs_create_file("epstate", 0400, root, udc, &eps_dbg_fops); ··· 215 217 216 218 static void pxa_cleanup_debugfs(struct pxa_udc *udc) 217 219 { 218 - debugfs_remove_recursive(udc->debugfs_root); 220 + debugfs_remove(debugfs_lookup(udc->gadget.name, usb_debug_root)); 219 221 } 220 222 221 223 #else ··· 1728 1730 } 1729 1731 1730 1732 /** 1731 - * pxa27x_start - Register gadget driver 1733 + * pxa27x_udc_start - Register gadget driver 1732 1734 * @g: gadget 1733 1735 * @driver: gadget driver 1734 1736 *
-4
drivers/usb/gadget/udc/pxa27x_udc.h
··· 440 440 * @last_interface: UDC interface of the last SET_INTERFACE host request 441 441 * @last_alternate: UDC altsetting of the last SET_INTERFACE host request 442 442 * @udccsr0: save of udccsr0 in case of suspend 443 - * @debugfs_root: root entry of debug filesystem 444 443 * @debugfs_state: debugfs entry for "udcstate" 445 444 * @debugfs_queues: debugfs entry for "queues" 446 445 * @debugfs_eps: debugfs entry for "epstate" ··· 472 473 473 474 #ifdef CONFIG_PM 474 475 unsigned udccsr0; 475 - #endif 476 - #ifdef CONFIG_USB_GADGET_DEBUG_FS 477 - struct dentry *debugfs_root; 478 476 #endif 479 477 }; 480 478 #define to_pxa(g) (container_of((g), struct pxa_udc, gadget))
+2 -3
drivers/usb/gadget/udc/s3c-hsudc.c
··· 1220 1220 struct s3c24xx_hsudc_platdata *pd = dev_get_platdata(&pdev->dev); 1221 1221 int ret, i; 1222 1222 1223 - hsudc = devm_kzalloc(&pdev->dev, sizeof(struct s3c_hsudc) + 1224 - sizeof(struct s3c_hsudc_ep) * pd->epnum, 1225 - GFP_KERNEL); 1223 + hsudc = devm_kzalloc(&pdev->dev, struct_size(hsudc, ep, pd->epnum), 1224 + GFP_KERNEL); 1226 1225 if (!hsudc) 1227 1226 return -ENOMEM; 1228 1227
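The s3c-hsudc hunk replaces an open-coded `sizeof(struct) + n * sizeof(elem)` with `struct_size()`, which sizes a structure ending in a flexible array member and saturates rather than wraps on overflow. A userspace sketch of the core calculation (the real macro lives in the kernel's `<linux/overflow.h>`; `STRUCT_SIZE` and the toy structs below are illustrative):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* toy stand-ins for the driver's structures */
struct ep { int index; };

struct hsudc {
    int irq;
    struct ep ep[];          /* flexible array member, sized at alloc time */
};

static size_t struct_size_impl(size_t base, size_t elem, size_t n)
{
    /* saturate to SIZE_MAX instead of wrapping on overflow */
    if (elem && n > (SIZE_MAX - base) / elem)
        return SIZE_MAX;
    return base + n * elem;
}

/* simplified struct_size(): offset of the flexible array + n elements */
#define STRUCT_SIZE(ptr, member, n) \
    struct_size_impl(offsetof(__typeof__(*(ptr)), member), \
                     sizeof((ptr)->member[0]), (n))

static struct hsudc *hsudc_alloc(size_t nep)
{
    size_t sz = STRUCT_SIZE((struct hsudc *)0, ep, nep);

    if (sz == SIZE_MAX)
        return NULL;         /* would have overflowed: refuse */
    return calloc(1, sz);
}
```

A saturated size fails the allocation cleanly, whereas the wrapped open-coded sum could silently produce a short buffer.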
+4 -5
drivers/usb/gadget/udc/s3c2410_udc.c
··· 198 198 udc_writeb(base, S3C2410_UDC_EP0_CSR_DE, S3C2410_UDC_EP0_CSR_REG); 199 199 } 200 200 201 - inline void s3c2410_udc_set_ep0_ss(void __iomem *b) 201 + static inline void s3c2410_udc_set_ep0_ss(void __iomem *b) 202 202 { 203 203 udc_writeb(b, S3C2410_UDC_INDEX_EP0, S3C2410_UDC_INDEX_REG); 204 204 udc_writeb(b, S3C2410_UDC_EP0_CSR_SENDSTL, S3C2410_UDC_EP0_CSR_REG); ··· 1843 1843 if (retval) 1844 1844 goto err_add_udc; 1845 1845 1846 - udc->regs_info = debugfs_create_file("registers", S_IRUGO, 1847 - s3c2410_udc_debugfs_root, udc, 1848 - &s3c2410_udc_debugfs_fops); 1846 + debugfs_create_file("registers", S_IRUGO, s3c2410_udc_debugfs_root, udc, 1847 + &s3c2410_udc_debugfs_fops); 1849 1848 1850 1849 dev_dbg(dev, "probe ok\n"); 1851 1850 ··· 1888 1889 return -EBUSY; 1889 1890 1890 1891 usb_del_gadget_udc(&udc->gadget); 1891 - debugfs_remove(udc->regs_info); 1892 + debugfs_remove(debugfs_lookup("registers", s3c2410_udc_debugfs_root)); 1892 1893 1893 1894 if (udc_info && !udc_info->udc_command && 1894 1895 gpio_is_valid(udc_info->pullup_pin))
-1
drivers/usb/gadget/udc/s3c2410_udc.h
··· 89 89 unsigned req_config : 1; 90 90 unsigned req_pending : 1; 91 91 u8 vbus; 92 - struct dentry *regs_info; 93 92 int irq; 94 93 }; 95 94 #define to_s3c2410(g) (container_of((g), struct s3c2410_udc, gadget))
+11 -19
drivers/usb/gadget/udc/tegra-xudc.c
··· 1907 1907 kfree(req); 1908 1908 } 1909 1909 1910 - static struct usb_ep_ops tegra_xudc_ep_ops = { 1910 + static const struct usb_ep_ops tegra_xudc_ep_ops = { 1911 1911 .enable = tegra_xudc_ep_enable, 1912 1912 .disable = tegra_xudc_ep_disable, 1913 1913 .alloc_request = tegra_xudc_ep_alloc_request, ··· 1928 1928 return -EBUSY; 1929 1929 } 1930 1930 1931 - static struct usb_ep_ops tegra_xudc_ep0_ops = { 1931 + static const struct usb_ep_ops tegra_xudc_ep0_ops = { 1932 1932 .enable = tegra_xudc_ep0_enable, 1933 1933 .disable = tegra_xudc_ep0_disable, 1934 1934 .alloc_request = tegra_xudc_ep_alloc_request, ··· 2168 2168 return 0; 2169 2169 } 2170 2170 2171 - static struct usb_gadget_ops tegra_xudc_gadget_ops = { 2171 + static const struct usb_gadget_ops tegra_xudc_gadget_ops = { 2172 2172 .get_frame = tegra_xudc_gadget_get_frame, 2173 2173 .wakeup = tegra_xudc_gadget_wakeup, 2174 2174 .pullup = tegra_xudc_gadget_pullup, ··· 3508 3508 xudc->utmi_phy[i] = devm_phy_optional_get(xudc->dev, phy_name); 3509 3509 if (IS_ERR(xudc->utmi_phy[i])) { 3510 3510 err = PTR_ERR(xudc->utmi_phy[i]); 3511 - if (err != -EPROBE_DEFER) 3512 - dev_err(xudc->dev, "failed to get usb2-%d PHY: %d\n", 3513 - i, err); 3514 - 3511 + dev_err_probe(xudc->dev, err, 3512 + "failed to get usb2-%d PHY\n", i); 3515 3513 goto clean_up; 3516 3514 } else if (xudc->utmi_phy[i]) { 3517 3515 /* Get usb-phy, if utmi phy is available */ ··· 3518 3520 &xudc->vbus_nb); 3519 3521 if (IS_ERR(xudc->usbphy[i])) { 3520 3522 err = PTR_ERR(xudc->usbphy[i]); 3521 - dev_err(xudc->dev, "failed to get usbphy-%d: %d\n", 3522 - i, err); 3523 + dev_err_probe(xudc->dev, err, 3524 + "failed to get usbphy-%d\n", i); 3523 3525 goto clean_up; 3524 3526 } 3525 3527 } else if (!xudc->utmi_phy[i]) { ··· 3536 3538 xudc->usb3_phy[i] = devm_phy_optional_get(xudc->dev, phy_name); 3537 3539 if (IS_ERR(xudc->usb3_phy[i])) { 3538 3540 err = PTR_ERR(xudc->usb3_phy[i]); 3539 - if (err != -EPROBE_DEFER) 3540 - dev_err(xudc->dev, "failed to 
get usb3-%d PHY: %d\n", 3541 - usb3, err); 3542 - 3541 + dev_err_probe(xudc->dev, err, 3542 + "failed to get usb3-%d PHY\n", usb3); 3543 3543 goto clean_up; 3544 3544 } else if (xudc->usb3_phy[i]) 3545 3545 dev_dbg(xudc->dev, "usb3-%d PHY registered", usb3); ··· 3777 3781 3778 3782 err = devm_clk_bulk_get(&pdev->dev, xudc->soc->num_clks, xudc->clks); 3779 3783 if (err) { 3780 - if (err != -EPROBE_DEFER) 3781 - dev_err(xudc->dev, "failed to request clocks: %d\n", err); 3782 - 3784 + dev_err_probe(xudc->dev, err, "failed to request clocks\n"); 3783 3785 return err; 3784 3786 } 3785 3787 ··· 3792 3798 err = devm_regulator_bulk_get(&pdev->dev, xudc->soc->num_supplies, 3793 3799 xudc->supplies); 3794 3800 if (err) { 3795 - if (err != -EPROBE_DEFER) 3796 - dev_err(xudc->dev, "failed to request regulators: %d\n", err); 3797 - 3801 + dev_err_probe(xudc->dev, err, "failed to request regulators\n"); 3798 3802 return err; 3799 3803 } 3800 3804
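The tegra-xudc hunks above collapse the repeated `if (err != -EPROBE_DEFER) dev_err(...)` dance into `dev_err_probe()`, which prints only for real errors and stays silent on probe deferral (the kernel helper additionally records the deferral reason for `devices_deferred`). A minimal sketch of that behavior (the constant and helper name are illustrative):

```c
#include <stdio.h>

#define EPROBE_DEFER 517   /* magnitude of the kernel's -EPROBE_DEFER */

/*
 * dev_err_probe()-style helper: return the error unchanged, but only
 * print for "real" errors -- deferral is expected and logged elsewhere.
 */
static int err_probe(int err, const char *msg)
{
    if (err != -EPROBE_DEFER)
        fprintf(stderr, "error %d: %s\n", err, msg);
    return err;
}
```

Because the helper returns its argument, call sites shrink to `return err_probe(err, "...")`-style one-liners, which is exactly the shape of the converted clock, regulator, and PHY lookups above.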
+1 -1
drivers/usb/gadget/udc/trace.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - /** 2 + /* 3 3 * trace.c - USB Gadget Framework Trace Support 4 4 * 5 5 * Copyright (C) 2016 Intel Corporation
+1 -1
drivers/usb/gadget/udc/trace.h
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - /** 2 + /* 3 3 * udc.c - Core UDC Framework 4 4 * 5 5 * Copyright (C) 2016 Intel Corporation
+2 -2
drivers/usb/gadget/udc/udc-xilinx.c
··· 791 791 } 792 792 793 793 /** 794 - * xudc_ep_enable - Enables the given endpoint. 794 + * __xudc_ep_enable - Enables the given endpoint. 795 795 * @ep: pointer to the xusb endpoint structure. 796 796 * @desc: pointer to usb endpoint descriptor. 797 797 * ··· 987 987 } 988 988 989 989 /** 990 - * xudc_ep0_queue - Adds the request to endpoint 0 queue. 990 + * __xudc_ep0_queue - Adds the request to endpoint 0 queue. 991 991 * @ep0: pointer to the xusb endpoint 0 structure. 992 992 * @req: pointer to the xusb request structure. 993 993 *
+4 -4
drivers/usb/host/ehci-fsl.c
··· 387 387 /* EHCI registers start at offset 0x100 */ 388 388 ehci->caps = hcd->regs + 0x100; 389 389 390 - #ifdef CONFIG_PPC_83xx 390 + #if defined(CONFIG_PPC_83xx) || defined(CONFIG_PPC_85xx) 391 391 /* 392 - * Deal with MPC834X that need port power to be cycled after the power 393 - * fault condition is removed. Otherwise the state machine does not 394 - * reflect PORTSC[CSC] correctly. 392 + * Deal with MPC834X/85XX that need port power to be cycled 393 + * after the power fault condition is removed. Otherwise the 394 + * state machine does not reflect PORTSC[CSC] correctly. 395 395 */ 396 396 ehci->need_oc_pp_cycle = 1; 397 397 #endif
+6 -2
drivers/usb/host/ehci-hcd.c
··· 76 76 #define EHCI_TUNE_FLS 1 /* (medium) 512-frame schedule */ 77 77 78 78 /* Initial IRQ latency: faster than hw default */ 79 - static int log2_irq_thresh = 0; // 0 to 6 79 + static int log2_irq_thresh; // 0 to 6 80 80 module_param (log2_irq_thresh, int, S_IRUGO); 81 81 MODULE_PARM_DESC (log2_irq_thresh, "log2 IRQ latency, 1-64 microframes"); 82 82 83 83 /* initial park setting: slower than hw default */ 84 - static unsigned park = 0; 84 + static unsigned park; 85 85 module_param (park, uint, S_IRUGO); 86 86 MODULE_PARM_DESC (park, "park setting; 1-3 back-to-back async packets"); 87 87 ··· 1238 1238 * device support 1239 1239 */ 1240 1240 .free_dev = ehci_remove_device, 1241 + #ifdef CONFIG_USB_HCD_TEST_MODE 1242 + /* EH SINGLE_STEP_SET_FEATURE test support */ 1243 + .submit_single_step_set_feature = ehci_submit_single_step_set_feature, 1244 + #endif 1241 1245 }; 1242 1246 1243 1247 void ehci_init_driver(struct hc_driver *drv,
-139
drivers/usb/host/ehci-hub.c
··· 727 727 } 728 728 729 729 /*-------------------------------------------------------------------------*/ 730 - #ifdef CONFIG_USB_HCD_TEST_MODE 731 - 732 - #define EHSET_TEST_SINGLE_STEP_SET_FEATURE 0x06 733 - 734 - static void usb_ehset_completion(struct urb *urb) 735 - { 736 - struct completion *done = urb->context; 737 - 738 - complete(done); 739 - } 740 - static int submit_single_step_set_feature( 741 - struct usb_hcd *hcd, 742 - struct urb *urb, 743 - int is_setup 744 - ); 745 - 746 - /* 747 - * Allocate and initialize a control URB. This request will be used by the 748 - * EHSET SINGLE_STEP_SET_FEATURE test in which the DATA and STATUS stages 749 - * of the GetDescriptor request are sent 15 seconds after the SETUP stage. 750 - * Return NULL if failed. 751 - */ 752 - static struct urb *request_single_step_set_feature_urb( 753 - struct usb_device *udev, 754 - void *dr, 755 - void *buf, 756 - struct completion *done 757 - ) { 758 - struct urb *urb; 759 - struct usb_hcd *hcd = bus_to_hcd(udev->bus); 760 - struct usb_host_endpoint *ep; 761 - 762 - urb = usb_alloc_urb(0, GFP_KERNEL); 763 - if (!urb) 764 - return NULL; 765 - 766 - urb->pipe = usb_rcvctrlpipe(udev, 0); 767 - ep = (usb_pipein(urb->pipe) ? 
udev->ep_in : udev->ep_out) 768 - [usb_pipeendpoint(urb->pipe)]; 769 - if (!ep) { 770 - usb_free_urb(urb); 771 - return NULL; 772 - } 773 - 774 - urb->ep = ep; 775 - urb->dev = udev; 776 - urb->setup_packet = (void *)dr; 777 - urb->transfer_buffer = buf; 778 - urb->transfer_buffer_length = USB_DT_DEVICE_SIZE; 779 - urb->complete = usb_ehset_completion; 780 - urb->status = -EINPROGRESS; 781 - urb->actual_length = 0; 782 - urb->transfer_flags = URB_DIR_IN; 783 - usb_get_urb(urb); 784 - atomic_inc(&urb->use_count); 785 - atomic_inc(&urb->dev->urbnum); 786 - urb->setup_dma = dma_map_single( 787 - hcd->self.sysdev, 788 - urb->setup_packet, 789 - sizeof(struct usb_ctrlrequest), 790 - DMA_TO_DEVICE); 791 - urb->transfer_dma = dma_map_single( 792 - hcd->self.sysdev, 793 - urb->transfer_buffer, 794 - urb->transfer_buffer_length, 795 - DMA_FROM_DEVICE); 796 - urb->context = done; 797 - return urb; 798 - } 799 - 800 - static int ehset_single_step_set_feature(struct usb_hcd *hcd, int port) 801 - { 802 - int retval = -ENOMEM; 803 - struct usb_ctrlrequest *dr; 804 - struct urb *urb; 805 - struct usb_device *udev; 806 - struct ehci_hcd *ehci = hcd_to_ehci(hcd); 807 - struct usb_device_descriptor *buf; 808 - DECLARE_COMPLETION_ONSTACK(done); 809 - 810 - /* Obtain udev of the rhub's child port */ 811 - udev = usb_hub_find_child(hcd->self.root_hub, port); 812 - if (!udev) { 813 - ehci_err(ehci, "No device attached to the RootHub\n"); 814 - return -ENODEV; 815 - } 816 - buf = kmalloc(USB_DT_DEVICE_SIZE, GFP_KERNEL); 817 - if (!buf) 818 - return -ENOMEM; 819 - 820 - dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_KERNEL); 821 - if (!dr) { 822 - kfree(buf); 823 - return -ENOMEM; 824 - } 825 - 826 - /* Fill Setup packet for GetDescriptor */ 827 - dr->bRequestType = USB_DIR_IN; 828 - dr->bRequest = USB_REQ_GET_DESCRIPTOR; 829 - dr->wValue = cpu_to_le16(USB_DT_DEVICE << 8); 830 - dr->wIndex = 0; 831 - dr->wLength = cpu_to_le16(USB_DT_DEVICE_SIZE); 832 - urb = 
request_single_step_set_feature_urb(udev, dr, buf, &done); 833 - if (!urb) 834 - goto cleanup; 835 - 836 - /* Submit just the SETUP stage */ 837 - retval = submit_single_step_set_feature(hcd, urb, 1); 838 - if (retval) 839 - goto out1; 840 - if (!wait_for_completion_timeout(&done, msecs_to_jiffies(2000))) { 841 - usb_kill_urb(urb); 842 - retval = -ETIMEDOUT; 843 - ehci_err(ehci, "%s SETUP stage timed out on ep0\n", __func__); 844 - goto out1; 845 - } 846 - msleep(15 * 1000); 847 - 848 - /* Complete remaining DATA and STATUS stages using the same URB */ 849 - urb->status = -EINPROGRESS; 850 - usb_get_urb(urb); 851 - atomic_inc(&urb->use_count); 852 - atomic_inc(&urb->dev->urbnum); 853 - retval = submit_single_step_set_feature(hcd, urb, 0); 854 - if (!retval && !wait_for_completion_timeout(&done, 855 - msecs_to_jiffies(2000))) { 856 - usb_kill_urb(urb); 857 - retval = -ETIMEDOUT; 858 - ehci_err(ehci, "%s IN stage timed out on ep0\n", __func__); 859 - } 860 - out1: 861 - usb_free_urb(urb); 862 - cleanup: 863 - kfree(dr); 864 - kfree(buf); 865 - return retval; 866 - } 867 - #endif /* CONFIG_USB_HCD_TEST_MODE */ 868 - /*-------------------------------------------------------------------------*/ 869 730 870 731 int ehci_hub_control( 871 732 struct usb_hcd *hcd,
+1 -1
drivers/usb/host/ehci-q.c
··· 1165 1165 * performed; TRUE - SETUP and FALSE - IN+STATUS 1166 1166 * Returns 0 if success 1167 1167 */ 1168 - static int submit_single_step_set_feature( 1168 + static int ehci_submit_single_step_set_feature( 1169 1169 struct usb_hcd *hcd, 1170 1170 struct urb *urb, 1171 1171 int is_setup
+3 -2
drivers/usb/host/fotg210-hcd.c
··· 850 850 struct dentry *root; 851 851 852 852 root = debugfs_create_dir(bus->bus_name, fotg210_debug_root); 853 - fotg210->debug_dir = root; 854 853 855 854 debugfs_create_file("async", S_IRUGO, root, bus, &debug_async_fops); 856 855 debugfs_create_file("periodic", S_IRUGO, root, bus, ··· 860 861 861 862 static inline void remove_debug_files(struct fotg210_hcd *fotg210) 862 863 { 863 - debugfs_remove_recursive(fotg210->debug_dir); 864 + struct usb_bus *bus = &fotg210_to_hcd(fotg210)->self; 865 + 866 + debugfs_remove(debugfs_lookup(bus->bus_name, fotg210_debug_root)); 864 867 } 865 868 866 869 /* handshake - spin reading hc until handshake completes or fails
-3
drivers/usb/host/fotg210.h
··· 184 184 185 185 /* silicon clock */ 186 186 struct clk *pclk; 187 - 188 - /* debug files */ 189 - struct dentry *debug_dir; 190 187 }; 191 188 192 189 /* convert between an HCD pointer and the corresponding FOTG210_HCD */
+2 -4
drivers/usb/host/u132-hcd.c
··· 2392 2392 urb->error_count = 0; 2393 2393 usb_hcd_giveback_urb(hcd, urb, 0); 2394 2394 return 0; 2395 - } else 2396 - continue; 2395 + } 2397 2396 } 2398 2397 dev_err(&u132->platform_dev->dev, "urb=%p not found in endp[%d]=%p ring" 2399 2398 "[%d] %c%c usb_endp=%d usb_addr=%d size=%d next=%04X last=%04X" ··· 2447 2448 urb_slot = &endp->urb_list[ENDP_QUEUE_MASK & 2448 2449 queue_scan]; 2449 2450 break; 2450 - } else 2451 - continue; 2451 + } 2452 2452 } 2453 2453 while (++queue_list < ENDP_QUEUE_SIZE && --queue_size > 0) { 2454 2454 *urb_slot = endp->urb_list[ENDP_QUEUE_MASK &
+3
drivers/usb/host/xhci-mem.c
··· 1924 1924 xhci->hw_ports = NULL; 1925 1925 xhci->rh_bw = NULL; 1926 1926 xhci->ext_caps = NULL; 1927 + xhci->port_caps = NULL; 1927 1928 1928 1929 xhci->page_size = 0; 1929 1930 xhci->page_shift = 0; ··· 2547 2546 xhci_set_hc_event_deq(xhci); 2548 2547 xhci_dbg_trace(xhci, trace_xhci_dbg_init, 2549 2548 "Wrote ERST address to ir_set 0."); 2549 + 2550 + xhci->isoc_bei_interval = AVOID_BEI_INTERVAL_MAX; 2550 2551 2551 2552 /* 2552 2553 * XXX: Might need to set the Interrupter Moderation Register to
+19 -41
drivers/usb/host/xhci-mtk-sch.c
··· 470 470 471 471 static int check_sch_tt(struct mu3h_sch_ep_info *sch_ep, u32 offset) 472 472 { 473 - struct mu3h_sch_tt *tt = sch_ep->sch_tt; 474 473 u32 extra_cs_count; 475 474 u32 start_ss, last_ss; 476 475 u32 start_cs, last_cs; 477 - int i; 476 + 477 + if (!sch_ep->sch_tt) 478 + return 0; 478 479 479 480 start_ss = offset % 8; 480 481 ··· 488 487 */ 489 488 if (!(start_ss == 7 || last_ss < 6)) 490 489 return -ESCH_SS_Y6; 491 - 492 - for (i = 0; i < sch_ep->cs_count; i++) 493 - if (test_bit(offset + i, tt->ss_bit_map)) 494 - return -ESCH_SS_OVERLAP; 495 490 496 491 } else { 497 492 u32 cs_count = DIV_ROUND_UP(sch_ep->maxpkt, FS_PAYLOAD_MAX); ··· 515 518 if (cs_count > 7) 516 519 cs_count = 7; /* HW limit */ 517 520 518 - if (test_bit(offset, tt->ss_bit_map)) 519 - return -ESCH_SS_OVERLAP; 520 - 521 521 sch_ep->cs_count = cs_count; 522 522 /* one for ss, the other for idle */ 523 523 sch_ep->num_budget_microframes = cs_count + 2; ··· 535 541 struct mu3h_sch_tt *tt = sch_ep->sch_tt; 536 542 u32 base, num_esit; 537 543 int bw_updated; 538 - int bits; 539 544 int i, j; 540 545 541 546 num_esit = XHCI_MTK_MAX_ESIT / sch_ep->esit; 542 - bits = (sch_ep->ep_type == ISOC_OUT_EP) ? 
sch_ep->cs_count : 1; 543 547 544 548 if (used) 545 549 bw_updated = sch_ep->bw_cost_per_microframe; ··· 546 554 547 555 for (i = 0; i < num_esit; i++) { 548 556 base = sch_ep->offset + i * sch_ep->esit; 549 - 550 - for (j = 0; j < bits; j++) { 551 - if (used) 552 - set_bit(base + j, tt->ss_bit_map); 553 - else 554 - clear_bit(base + j, tt->ss_bit_map); 555 - } 556 557 557 558 for (j = 0; j < sch_ep->cs_count; j++) 558 559 tt->fs_bus_bw[base + j] += bw_updated; ··· 588 603 static int check_sch_bw(struct mu3h_sch_bw_info *sch_bw, 589 604 struct mu3h_sch_ep_info *sch_ep) 590 605 { 606 + const u32 esit_boundary = get_esit_boundary(sch_ep); 607 + const u32 bw_boundary = get_bw_boundary(sch_ep->speed); 591 608 u32 offset; 592 - u32 min_bw; 593 - u32 min_index; 594 609 u32 worst_bw; 595 - u32 bw_boundary; 596 - u32 esit_boundary; 597 - u32 min_num_budget; 598 - u32 min_cs_count; 610 + u32 min_bw = ~0; 611 + int min_index = -1; 599 612 int ret = 0; 600 613 601 614 /* 602 615 * Search through all possible schedule microframes. 603 616 * and find a microframe where its worst bandwidth is minimum. 
604 617 */ 605 - min_bw = ~0; 606 - min_index = 0; 607 - min_cs_count = sch_ep->cs_count; 608 - min_num_budget = sch_ep->num_budget_microframes; 609 - esit_boundary = get_esit_boundary(sch_ep); 610 618 for (offset = 0; offset < sch_ep->esit; offset++) { 611 - if (sch_ep->sch_tt) { 612 - ret = check_sch_tt(sch_ep, offset); 613 - if (ret) 614 - continue; 615 - } 619 + ret = check_sch_tt(sch_ep, offset); 620 + if (ret) 621 + continue; 616 622 617 623 if ((offset + sch_ep->num_budget_microframes) > esit_boundary) 618 624 break; 619 625 620 626 worst_bw = get_max_bw(sch_bw, sch_ep, offset); 627 + if (worst_bw > bw_boundary) 628 + continue; 629 + 621 630 if (min_bw > worst_bw) { 622 631 min_bw = worst_bw; 623 632 min_index = offset; 624 - min_cs_count = sch_ep->cs_count; 625 - min_num_budget = sch_ep->num_budget_microframes; 626 633 } 634 + 635 + /* use first-fit for LS/FS */ 636 + if (sch_ep->sch_tt && min_index >= 0) 637 + break; 638 + 627 639 if (min_bw == 0) 628 640 break; 629 641 } 630 642 631 - bw_boundary = get_bw_boundary(sch_ep->speed); 632 - /* check bandwidth */ 633 - if (min_bw > bw_boundary) 643 + if (min_index < 0) 634 644 return ret ? ret : -ESCH_BW_OVERFLOW; 635 645 636 646 sch_ep->offset = min_index; 637 - sch_ep->cs_count = min_cs_count; 638 - sch_ep->num_budget_microframes = min_num_budget; 639 647 640 648 return load_ep_bw(sch_bw, sch_ep, true); 641 649 }
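The rewritten `check_sch_bw()` above drops the `ss_bit_map` bookkeeping and restructures the search: offsets whose worst-case load exceeds the bandwidth boundary are discarded as candidates, the minimum-load offset is kept otherwise, and LS/FS endpoints behind a TT take the first acceptable offset instead of the global minimum. A toy version of that loop (the arrays, bounds, and cost model are invented for illustration; the real code derives them per endpoint speed and ESIT):

```c
#include <stdint.h>

#define ESIT         8           /* toy service interval, in microframes */
#define BW_BOUNDARY  100         /* toy bandwidth budget per microframe */

/*
 * Scan all start offsets inside the interval; return the offset with the
 * smallest worst-case load, or -1 if every offset busts the budget.
 * 'first_fit' mimics the LS/FS-behind-TT case added by this patch.
 */
static int pick_offset(const uint32_t bus_bw[ESIT], uint32_t cost,
                       int first_fit)
{
    uint32_t min_bw = UINT32_MAX;
    int min_index = -1;

    for (int offset = 0; offset < ESIT; offset++) {
        uint32_t worst = bus_bw[offset] + cost;

        if (worst > BW_BOUNDARY)
            continue;                 /* over budget: not a candidate */

        if (worst < min_bw) {
            min_bw = worst;
            min_index = offset;
        }
        if (first_fit && min_index >= 0)
            break;                    /* first acceptable slot wins */
        if (min_bw == cost)
            break;                    /* an empty slot can't be beaten */
    }
    return min_index;
}
```

Filtering over-budget offsets inside the loop is what lets the patch return `-ESCH_BW_OVERFLOW` only when `min_index` stays negative, rather than checking `min_bw` against the boundary after the fact.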
-2
drivers/usb/host/xhci-mtk.c
··· 495 495 goto put_usb2_hcd; 496 496 } 497 497 mtk->has_ippc = true; 498 - } else { 499 - mtk->has_ippc = false; 500 498 } 501 499 502 500 device_init_wakeup(dev, true);
+4 -6
drivers/usb/host/xhci-mtk.h
··· 24 24 #define XHCI_MTK_MAX_ESIT 64 25 25 26 26 /** 27 - * @ss_bit_map: used to avoid start split microframes overlay 28 27 * @fs_bus_bw: array to keep track of bandwidth already used for FS 29 28 * @ep_list: Endpoints using this TT 30 29 */ 31 30 struct mu3h_sch_tt { 32 - DECLARE_BITMAP(ss_bit_map, XHCI_MTK_MAX_ESIT); 33 31 u32 fs_bus_bw[XHCI_MTK_MAX_ESIT]; 34 32 struct list_head ep_list; 35 33 }; ··· 136 138 struct mu3h_sch_bw_info *sch_array; 137 139 struct list_head bw_ep_chk_list; 138 140 struct mu3c_ippc_regs __iomem *ippc_regs; 139 - bool has_ippc; 140 141 int num_u2_ports; 141 142 int num_u3_ports; 142 143 int u3p_dis_msk; 143 144 struct regulator *vusb33; 144 145 struct regulator *vbus; 145 146 struct clk_bulk_data clks[BULK_CLKS_NUM]; 146 - bool lpm_support; 147 - bool u2_lpm_disable; 147 + unsigned int has_ippc:1; 148 + unsigned int lpm_support:1; 149 + unsigned int u2_lpm_disable:1; 148 150 /* usb remote wakeup */ 149 - bool uwk_en; 151 + unsigned int uwk_en:1; 150 152 struct regmap *uwk; 151 153 u32 uwk_reg_base; 152 154 u32 uwk_vers;
+9 -9
drivers/usb/host/xhci-pci-renesas.c
···
 	if (err)
 		return pcibios_err_to_errno(err);

-	if (rom_state & BIT(15)) {
+	if (rom_state & RENESAS_ROM_STATUS_ROM_EXISTS) {
 		/* ROM exists */
 		dev_dbg(&pdev->dev, "ROM exists\n");
···
 		return 0;

 	case RENESAS_ROM_STATUS_NO_RESULT: /* No result yet */
-		return 0;
+		dev_dbg(&pdev->dev, "Unknown ROM status ...\n");
+		break;

 	case RENESAS_ROM_STATUS_ERROR: /* Error State */
 	default: /* All other states are marked as "Reserved states" */
···
 	u8 fw_state;
 	int err;

-	/* Check if device has ROM and loaded, if so skip everything */
-	err = renesas_check_rom(pdev);
-	if (err) { /* we have rom */
-		err = renesas_check_rom_state(pdev);
-		if (!err)
-			return err;
-	}
+	/*
+	 * Only if device has ROM and loaded FW we can skip loading and
+	 * return success. Otherwise (even unknown state), attempt to load FW.
+	 */
+	if (renesas_check_rom(pdev) && !renesas_check_rom_state(pdev))
+		return 0;

 	/*
 	 * Test if the device is actually needing the firmware. As most
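The xhci-pci-renesas hunk collapses the nested ROM checks into one condition: firmware loading is skipped only when a ROM is present *and* its state check succeeds; every other combination (no ROM, error state, unknown state) falls through to a load attempt. A sketch of that decision, with hypothetical boolean stand-ins for `renesas_check_rom()` and `renesas_check_rom_state()`:

```c
#include <assert.h>

/* has_rom: nonzero if a ROM is present.
 * rom_state_err: 0 if the ROM state check passed, negative errno otherwise.
 * Returns 0 when the cached ROM firmware can be used as-is, 1 when a
 * fresh firmware load should be attempted. */
static int fw_load_needed(int has_rom, int rom_state_err)
{
	if (has_rom && !rom_state_err)
		return 0;	/* ROM already holds valid firmware */
	return 1;		/* any other state: attempt to load FW */
}
```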
+6 -1
drivers/usb/host/xhci-ring.c
···
 		if (event_loop++ < TRBS_PER_SEGMENT / 2)
 			continue;
 		xhci_update_erst_dequeue(xhci, event_ring_deq);
+
+		/* ring is half-full, force isoc trbs to interrupt more often */
+		if (xhci->isoc_bei_interval > AVOID_BEI_INTERVAL_MIN)
+			xhci->isoc_bei_interval = xhci->isoc_bei_interval / 2;
+
 		event_loop = 0;
 	}
···
 	 * generate an event at least every 8th TD to clear the event ring
 	 */
 	if (i && xhci->quirks & XHCI_AVOID_BEI)
-		return !!(i % 8);
+		return !!(i % xhci->isoc_bei_interval);

 	return true;
 }
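The xhci-ring hunk makes the XHCI_AVOID_BEI interrupt interval adaptive: it starts wide and is halved (down to a floor) whenever the event ring fills faster than it drains, while the per-TRB test simply becomes `i % interval` instead of a hard-coded `i % 8`. A self-contained sketch of both halves (`shrink_interval` and `trb_blocks_event` are illustrative names, not the driver's):

```c
#include <assert.h>

#define AVOID_BEI_INTERVAL_MIN	8
#define AVOID_BEI_INTERVAL_MAX	32

/* Halve the interval when the event ring is half-full, never going
 * below the minimum, so isoc TRBs interrupt more often under load. */
static unsigned int shrink_interval(unsigned int interval)
{
	if (interval > AVOID_BEI_INTERVAL_MIN)
		interval /= 2;
	return interval;
}

/* Nonzero when this isoc TRB may set BEI (block its event interrupt);
 * every interval-th TRB (and the first) still generates an event. */
static int trb_blocks_event(unsigned int i, unsigned int interval)
{
	return i && !!(i % interval);
}
```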
+488 -133
drivers/usb/host/xhci-tegra.c
··· 2 2 /* 3 3 * NVIDIA Tegra xHCI host controller driver 4 4 * 5 - * Copyright (C) 2014 NVIDIA Corporation 5 + * Copyright (c) 2014-2020, NVIDIA CORPORATION. All rights reserved. 6 6 * Copyright (C) 2014 Google, Inc. 7 7 */ 8 8 ··· 15 15 #include <linux/kernel.h> 16 16 #include <linux/module.h> 17 17 #include <linux/of_device.h> 18 + #include <linux/of_irq.h> 18 19 #include <linux/phy/phy.h> 19 20 #include <linux/phy/tegra/xusb.h> 20 21 #include <linux/platform_device.h> 22 + #include <linux/usb/ch9.h> 21 23 #include <linux/pm.h> 22 24 #include <linux/pm_domain.h> 23 25 #include <linux/pm_runtime.h> ··· 226 224 227 225 int xhci_irq; 228 226 int mbox_irq; 227 + int padctl_irq; 229 228 230 229 void __iomem *ipfs_base; 231 230 void __iomem *fpci_base; ··· 252 249 253 250 struct device *genpd_dev_host; 254 251 struct device *genpd_dev_ss; 255 - struct device_link *genpd_dl_host; 256 - struct device_link *genpd_dl_ss; 252 + bool use_genpd; 257 253 258 254 struct phy **phys; 259 255 unsigned int num_phys; ··· 272 270 dma_addr_t phys; 273 271 } fw; 274 272 273 + bool suspended; 275 274 struct tegra_xusb_context context; 276 275 }; 277 276 ··· 669 666 670 667 mutex_lock(&tegra->lock); 671 668 669 + if (pm_runtime_suspended(tegra->dev) || tegra->suspended) 670 + goto out; 671 + 672 672 value = fpci_readl(tegra, tegra->soc->mbox.data_out); 673 673 tegra_xusb_mbox_unpack(&msg, value); 674 674 ··· 685 679 686 680 tegra_xusb_mbox_handle(tegra, &msg); 687 681 682 + out: 688 683 mutex_unlock(&tegra->lock); 689 684 return IRQ_HANDLED; 690 685 } ··· 824 817 phy_power_off(tegra->phys[i]); 825 818 phy_exit(tegra->phys[i]); 826 819 } 827 - } 828 - 829 - static int tegra_xusb_runtime_suspend(struct device *dev) 830 - { 831 - struct tegra_xusb *tegra = dev_get_drvdata(dev); 832 - 833 - regulator_bulk_disable(tegra->soc->num_supplies, tegra->supplies); 834 - tegra_xusb_clk_disable(tegra); 835 - 836 - return 0; 837 - } 838 - 839 - static int tegra_xusb_runtime_resume(struct device *dev) 
840 - { 841 - struct tegra_xusb *tegra = dev_get_drvdata(dev); 842 - int err; 843 - 844 - err = tegra_xusb_clk_enable(tegra); 845 - if (err) { 846 - dev_err(dev, "failed to enable clocks: %d\n", err); 847 - return err; 848 - } 849 - 850 - err = regulator_bulk_enable(tegra->soc->num_supplies, tegra->supplies); 851 - if (err) { 852 - dev_err(dev, "failed to enable regulators: %d\n", err); 853 - goto disable_clk; 854 - } 855 - 856 - return 0; 857 - 858 - disable_clk: 859 - tegra_xusb_clk_disable(tegra); 860 - return err; 861 820 } 862 821 863 822 #ifdef CONFIG_PM_SLEEP ··· 995 1022 static void tegra_xusb_powerdomain_remove(struct device *dev, 996 1023 struct tegra_xusb *tegra) 997 1024 { 998 - if (tegra->genpd_dl_ss) 999 - device_link_del(tegra->genpd_dl_ss); 1000 - if (tegra->genpd_dl_host) 1001 - device_link_del(tegra->genpd_dl_host); 1025 + if (!tegra->use_genpd) 1026 + return; 1027 + 1002 1028 if (!IS_ERR_OR_NULL(tegra->genpd_dev_ss)) 1003 1029 dev_pm_domain_detach(tegra->genpd_dev_ss, true); 1004 1030 if (!IS_ERR_OR_NULL(tegra->genpd_dev_host)) ··· 1023 1051 return err; 1024 1052 } 1025 1053 1026 - tegra->genpd_dl_host = device_link_add(dev, tegra->genpd_dev_host, 1027 - DL_FLAG_PM_RUNTIME | 1028 - DL_FLAG_STATELESS); 1029 - if (!tegra->genpd_dl_host) { 1030 - dev_err(dev, "adding host device link failed!\n"); 1031 - return -ENODEV; 1054 + tegra->use_genpd = true; 1055 + 1056 + return 0; 1057 + } 1058 + 1059 + static int tegra_xusb_unpowergate_partitions(struct tegra_xusb *tegra) 1060 + { 1061 + struct device *dev = tegra->dev; 1062 + int rc; 1063 + 1064 + if (tegra->use_genpd) { 1065 + rc = pm_runtime_get_sync(tegra->genpd_dev_ss); 1066 + if (rc < 0) { 1067 + dev_err(dev, "failed to enable XUSB SS partition\n"); 1068 + return rc; 1069 + } 1070 + 1071 + rc = pm_runtime_get_sync(tegra->genpd_dev_host); 1072 + if (rc < 0) { 1073 + dev_err(dev, "failed to enable XUSB Host partition\n"); 1074 + pm_runtime_put_sync(tegra->genpd_dev_ss); 1075 + return rc; 1076 + } 1077 
+ } else { 1078 + rc = tegra_powergate_sequence_power_up(TEGRA_POWERGATE_XUSBA, 1079 + tegra->ss_clk, 1080 + tegra->ss_rst); 1081 + if (rc < 0) { 1082 + dev_err(dev, "failed to enable XUSB SS partition\n"); 1083 + return rc; 1084 + } 1085 + 1086 + rc = tegra_powergate_sequence_power_up(TEGRA_POWERGATE_XUSBC, 1087 + tegra->host_clk, 1088 + tegra->host_rst); 1089 + if (rc < 0) { 1090 + dev_err(dev, "failed to enable XUSB Host partition\n"); 1091 + tegra_powergate_power_off(TEGRA_POWERGATE_XUSBA); 1092 + return rc; 1093 + } 1032 1094 } 1033 1095 1034 - tegra->genpd_dl_ss = device_link_add(dev, tegra->genpd_dev_ss, 1035 - DL_FLAG_PM_RUNTIME | 1036 - DL_FLAG_STATELESS); 1037 - if (!tegra->genpd_dl_ss) { 1038 - dev_err(dev, "adding superspeed device link failed!\n"); 1039 - return -ENODEV; 1096 + return 0; 1097 + } 1098 + 1099 + static int tegra_xusb_powergate_partitions(struct tegra_xusb *tegra) 1100 + { 1101 + struct device *dev = tegra->dev; 1102 + int rc; 1103 + 1104 + if (tegra->use_genpd) { 1105 + rc = pm_runtime_put_sync(tegra->genpd_dev_host); 1106 + if (rc < 0) { 1107 + dev_err(dev, "failed to disable XUSB Host partition\n"); 1108 + return rc; 1109 + } 1110 + 1111 + rc = pm_runtime_put_sync(tegra->genpd_dev_ss); 1112 + if (rc < 0) { 1113 + dev_err(dev, "failed to disable XUSB SS partition\n"); 1114 + pm_runtime_get_sync(tegra->genpd_dev_host); 1115 + return rc; 1116 + } 1117 + } else { 1118 + rc = tegra_powergate_power_off(TEGRA_POWERGATE_XUSBC); 1119 + if (rc < 0) { 1120 + dev_err(dev, "failed to disable XUSB Host partition\n"); 1121 + return rc; 1122 + } 1123 + 1124 + rc = tegra_powergate_power_off(TEGRA_POWERGATE_XUSBA); 1125 + if (rc < 0) { 1126 + dev_err(dev, "failed to disable XUSB SS partition\n"); 1127 + tegra_powergate_sequence_power_up(TEGRA_POWERGATE_XUSBC, 1128 + tegra->host_clk, 1129 + tegra->host_rst); 1130 + return rc; 1131 + } 1040 1132 } 1041 1133 1042 1134 return 0; ··· 1120 1084 dev_err(tegra->dev, "failed to enable messages: %d\n", err); 1121 
1085 1122 1086 return err; 1087 + } 1088 + 1089 + static irqreturn_t tegra_xusb_padctl_irq(int irq, void *data) 1090 + { 1091 + struct tegra_xusb *tegra = data; 1092 + 1093 + mutex_lock(&tegra->lock); 1094 + 1095 + if (tegra->suspended) { 1096 + mutex_unlock(&tegra->lock); 1097 + return IRQ_HANDLED; 1098 + } 1099 + 1100 + mutex_unlock(&tegra->lock); 1101 + 1102 + pm_runtime_resume(tegra->dev); 1103 + 1104 + return IRQ_HANDLED; 1123 1105 } 1124 1106 1125 1107 static int tegra_xusb_enable_firmware_messages(struct tegra_xusb *tegra) ··· 1263 1209 } 1264 1210 } 1265 1211 1212 + #if IS_ENABLED(CONFIG_PM) || IS_ENABLED(CONFIG_PM_SLEEP) 1213 + static bool is_usb2_otg_phy(struct tegra_xusb *tegra, unsigned int index) 1214 + { 1215 + return (tegra->usbphy[index] != NULL); 1216 + } 1217 + 1218 + static bool is_usb3_otg_phy(struct tegra_xusb *tegra, unsigned int index) 1219 + { 1220 + struct tegra_xusb_padctl *padctl = tegra->padctl; 1221 + unsigned int i; 1222 + int port; 1223 + 1224 + for (i = 0; i < tegra->num_usb_phys; i++) { 1225 + if (is_usb2_otg_phy(tegra, i)) { 1226 + port = tegra_xusb_padctl_get_usb3_companion(padctl, i); 1227 + if ((port >= 0) && (index == (unsigned int)port)) 1228 + return true; 1229 + } 1230 + } 1231 + 1232 + return false; 1233 + } 1234 + 1235 + static bool is_host_mode_phy(struct tegra_xusb *tegra, unsigned int phy_type, unsigned int index) 1236 + { 1237 + if (strcmp(tegra->soc->phy_types[phy_type].name, "hsic") == 0) 1238 + return true; 1239 + 1240 + if (strcmp(tegra->soc->phy_types[phy_type].name, "usb2") == 0) { 1241 + if (is_usb2_otg_phy(tegra, index)) 1242 + return ((index == tegra->otg_usb2_port) && tegra->host_mode); 1243 + else 1244 + return true; 1245 + } 1246 + 1247 + if (strcmp(tegra->soc->phy_types[phy_type].name, "usb3") == 0) { 1248 + if (is_usb3_otg_phy(tegra, index)) 1249 + return ((index == tegra->otg_usb3_port) && tegra->host_mode); 1250 + else 1251 + return true; 1252 + } 1253 + 1254 + return false; 1255 + } 1256 + #endif 1257 
+ 1266 1258 static int tegra_xusb_get_usb2_port(struct tegra_xusb *tegra, 1267 1259 struct usb_phy *usbphy) 1268 1260 { ··· 1401 1301 static int tegra_xusb_probe(struct platform_device *pdev) 1402 1302 { 1403 1303 struct tegra_xusb *tegra; 1304 + struct device_node *np; 1404 1305 struct resource *regs; 1405 1306 struct xhci_hcd *xhci; 1406 1307 unsigned int i, j, k; ··· 1422 1321 if (err < 0) 1423 1322 return err; 1424 1323 1425 - regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1426 - tegra->regs = devm_ioremap_resource(&pdev->dev, regs); 1324 + tegra->regs = devm_platform_get_and_ioremap_resource(pdev, 0, &regs); 1427 1325 if (IS_ERR(tegra->regs)) 1428 1326 return PTR_ERR(tegra->regs); 1429 1327 ··· 1447 1347 tegra->padctl = tegra_xusb_padctl_get(&pdev->dev); 1448 1348 if (IS_ERR(tegra->padctl)) 1449 1349 return PTR_ERR(tegra->padctl); 1350 + 1351 + np = of_parse_phandle(pdev->dev.of_node, "nvidia,xusb-padctl", 0); 1352 + if (!np) { 1353 + err = -ENODEV; 1354 + goto put_padctl; 1355 + } 1356 + 1357 + tegra->padctl_irq = of_irq_get(np, 0); 1358 + if (tegra->padctl_irq <= 0) { 1359 + err = (tegra->padctl_irq == 0) ? 
-ENODEV : tegra->padctl_irq; 1360 + goto put_padctl; 1361 + } 1450 1362 1451 1363 tegra->host_clk = devm_clk_get(&pdev->dev, "xusb_host"); 1452 1364 if (IS_ERR(tegra->host_clk)) { ··· 1540 1428 err); 1541 1429 goto put_padctl; 1542 1430 } 1543 - 1544 - err = tegra_powergate_sequence_power_up(TEGRA_POWERGATE_XUSBA, 1545 - tegra->ss_clk, 1546 - tegra->ss_rst); 1547 - if (err) { 1548 - dev_err(&pdev->dev, 1549 - "failed to enable XUSBA domain: %d\n", err); 1550 - goto put_padctl; 1551 - } 1552 - 1553 - err = tegra_powergate_sequence_power_up(TEGRA_POWERGATE_XUSBC, 1554 - tegra->host_clk, 1555 - tegra->host_rst); 1556 - if (err) { 1557 - tegra_powergate_power_off(TEGRA_POWERGATE_XUSBA); 1558 - dev_err(&pdev->dev, 1559 - "failed to enable XUSBC domain: %d\n", err); 1560 - goto put_padctl; 1561 - } 1562 1431 } else { 1563 1432 err = tegra_xusb_powerdomain_init(&pdev->dev, tegra); 1564 1433 if (err) ··· 1604 1511 goto put_powerdomains; 1605 1512 } 1606 1513 1514 + tegra->hcd->skip_phy_initialization = 1; 1607 1515 tegra->hcd->regs = tegra->regs; 1608 1516 tegra->hcd->rsrc_start = regs->start; 1609 1517 tegra->hcd->rsrc_len = resource_size(regs); ··· 1615 1521 */ 1616 1522 platform_set_drvdata(pdev, tegra); 1617 1523 1524 + err = tegra_xusb_clk_enable(tegra); 1525 + if (err) { 1526 + dev_err(tegra->dev, "failed to enable clocks: %d\n", err); 1527 + goto put_hcd; 1528 + } 1529 + 1530 + err = regulator_bulk_enable(tegra->soc->num_supplies, tegra->supplies); 1531 + if (err) { 1532 + dev_err(tegra->dev, "failed to enable regulators: %d\n", err); 1533 + goto disable_clk; 1534 + } 1535 + 1618 1536 err = tegra_xusb_phy_enable(tegra); 1619 1537 if (err < 0) { 1620 1538 dev_err(&pdev->dev, "failed to enable PHYs: %d\n", err); 1621 - goto put_hcd; 1539 + goto disable_regulator; 1622 1540 } 1623 1541 1624 1542 /* ··· 1649 1543 goto disable_phy; 1650 1544 } 1651 1545 1652 - pm_runtime_enable(&pdev->dev); 1653 - 1654 - if (!pm_runtime_enabled(&pdev->dev)) 1655 - err = 
tegra_xusb_runtime_resume(&pdev->dev); 1656 - else 1657 - err = pm_runtime_get_sync(&pdev->dev); 1658 - 1659 - if (err < 0) { 1660 - dev_err(&pdev->dev, "failed to enable device: %d\n", err); 1546 + err = tegra_xusb_unpowergate_partitions(tegra); 1547 + if (err) 1661 1548 goto free_firmware; 1662 - } 1663 1549 1664 1550 tegra_xusb_config(tegra); 1665 1551 1666 1552 err = tegra_xusb_load_firmware(tegra); 1667 1553 if (err < 0) { 1668 1554 dev_err(&pdev->dev, "failed to load firmware: %d\n", err); 1669 - goto put_rpm; 1555 + goto powergate; 1670 1556 } 1671 1557 1672 1558 err = usb_add_hcd(tegra->hcd, tegra->xhci_irq, IRQF_SHARED); 1673 1559 if (err < 0) { 1674 1560 dev_err(&pdev->dev, "failed to add USB HCD: %d\n", err); 1675 - goto put_rpm; 1561 + goto powergate; 1676 1562 } 1677 1563 1678 1564 device_wakeup_enable(tegra->hcd->self.controller); ··· 1687 1589 goto put_usb3; 1688 1590 } 1689 1591 1690 - err = tegra_xusb_enable_firmware_messages(tegra); 1691 - if (err < 0) { 1692 - dev_err(&pdev->dev, "failed to enable messages: %d\n", err); 1693 - goto remove_usb3; 1694 - } 1695 - 1696 1592 err = devm_request_threaded_irq(&pdev->dev, tegra->mbox_irq, 1697 1593 tegra_xusb_mbox_irq, 1698 1594 tegra_xusb_mbox_thread, 0, ··· 1696 1604 goto remove_usb3; 1697 1605 } 1698 1606 1607 + err = devm_request_threaded_irq(&pdev->dev, tegra->padctl_irq, NULL, tegra_xusb_padctl_irq, 1608 + IRQF_ONESHOT, dev_name(&pdev->dev), tegra); 1609 + if (err < 0) { 1610 + dev_err(&pdev->dev, "failed to request padctl IRQ: %d\n", err); 1611 + goto remove_usb3; 1612 + } 1613 + 1614 + err = tegra_xusb_enable_firmware_messages(tegra); 1615 + if (err < 0) { 1616 + dev_err(&pdev->dev, "failed to enable messages: %d\n", err); 1617 + goto remove_usb3; 1618 + } 1619 + 1699 1620 err = tegra_xusb_init_usb_phy(tegra); 1700 1621 if (err < 0) { 1701 1622 dev_err(&pdev->dev, "failed to init USB PHY: %d\n", err); 1702 1623 goto remove_usb3; 1703 1624 } 1625 + 1626 + /* Enable wake for both USB 2.0 and USB 3.0 
roothubs */ 1627 + device_init_wakeup(&tegra->hcd->self.root_hub->dev, true); 1628 + device_init_wakeup(&xhci->shared_hcd->self.root_hub->dev, true); 1629 + device_init_wakeup(tegra->dev, true); 1630 + 1631 + pm_runtime_use_autosuspend(tegra->dev); 1632 + pm_runtime_set_autosuspend_delay(tegra->dev, 2000); 1633 + pm_runtime_mark_last_busy(tegra->dev); 1634 + pm_runtime_set_active(tegra->dev); 1635 + pm_runtime_enable(tegra->dev); 1704 1636 1705 1637 return 0; 1706 1638 ··· 1734 1618 usb_put_hcd(xhci->shared_hcd); 1735 1619 remove_usb2: 1736 1620 usb_remove_hcd(tegra->hcd); 1737 - put_rpm: 1738 - if (!pm_runtime_status_suspended(&pdev->dev)) 1739 - tegra_xusb_runtime_suspend(&pdev->dev); 1740 - put_hcd: 1741 - usb_put_hcd(tegra->hcd); 1621 + powergate: 1622 + tegra_xusb_powergate_partitions(tegra); 1742 1623 free_firmware: 1743 1624 dma_free_coherent(&pdev->dev, tegra->fw.size, tegra->fw.virt, 1744 1625 tegra->fw.phys); 1745 1626 disable_phy: 1746 1627 tegra_xusb_phy_disable(tegra); 1747 - pm_runtime_disable(&pdev->dev); 1628 + disable_regulator: 1629 + regulator_bulk_disable(tegra->soc->num_supplies, tegra->supplies); 1630 + disable_clk: 1631 + tegra_xusb_clk_disable(tegra); 1632 + put_hcd: 1633 + usb_put_hcd(tegra->hcd); 1748 1634 put_powerdomains: 1749 - if (!of_property_read_bool(pdev->dev.of_node, "power-domains")) { 1750 - tegra_powergate_power_off(TEGRA_POWERGATE_XUSBC); 1751 - tegra_powergate_power_off(TEGRA_POWERGATE_XUSBA); 1752 - } else { 1753 - tegra_xusb_powerdomain_remove(&pdev->dev, tegra); 1754 - } 1635 + tegra_xusb_powerdomain_remove(&pdev->dev, tegra); 1755 1636 put_padctl: 1637 + of_node_put(np); 1756 1638 tegra_xusb_padctl_put(tegra->padctl); 1757 1639 return err; 1758 1640 } ··· 1762 1648 1763 1649 tegra_xusb_deinit_usb_phy(tegra); 1764 1650 1651 + pm_runtime_get_sync(&pdev->dev); 1765 1652 usb_remove_hcd(xhci->shared_hcd); 1766 1653 usb_put_hcd(xhci->shared_hcd); 1767 1654 xhci->shared_hcd = NULL; ··· 1772 1657 dma_free_coherent(&pdev->dev, 
tegra->fw.size, tegra->fw.virt, 1773 1658 tegra->fw.phys); 1774 1659 1775 - pm_runtime_put_sync(&pdev->dev); 1776 1660 pm_runtime_disable(&pdev->dev); 1661 + pm_runtime_put(&pdev->dev); 1777 1662 1778 - if (!of_property_read_bool(pdev->dev.of_node, "power-domains")) { 1779 - tegra_powergate_power_off(TEGRA_POWERGATE_XUSBC); 1780 - tegra_powergate_power_off(TEGRA_POWERGATE_XUSBA); 1781 - } else { 1782 - tegra_xusb_powerdomain_remove(&pdev->dev, tegra); 1783 - } 1663 + tegra_xusb_powergate_partitions(tegra); 1664 + 1665 + tegra_xusb_powerdomain_remove(&pdev->dev, tegra); 1784 1666 1785 1667 tegra_xusb_phy_disable(tegra); 1786 - 1668 + tegra_xusb_clk_disable(tegra); 1669 + regulator_bulk_disable(tegra->soc->num_supplies, tegra->supplies); 1787 1670 tegra_xusb_padctl_put(tegra->padctl); 1788 1671 1789 1672 return 0; 1790 1673 } 1791 1674 1792 - #ifdef CONFIG_PM_SLEEP 1675 + #if IS_ENABLED(CONFIG_PM) || IS_ENABLED(CONFIG_PM_SLEEP) 1793 1676 static bool xhci_hub_ports_suspended(struct xhci_hub *hub) 1794 1677 { 1795 1678 struct device *dev = hub->hcd->self.controller; ··· 1813 1700 static int tegra_xusb_check_ports(struct tegra_xusb *tegra) 1814 1701 { 1815 1702 struct xhci_hcd *xhci = hcd_to_xhci(tegra->hcd); 1703 + struct xhci_bus_state *bus_state = &xhci->usb2_rhub.bus_state; 1816 1704 unsigned long flags; 1817 1705 int err = 0; 1706 + 1707 + if (bus_state->bus_suspended) { 1708 + /* xusb_hub_suspend() has just directed one or more USB2 port(s) 1709 + * to U3 state, it takes 3ms to enter U3. 
1710 + */ 1711 + usleep_range(3000, 4000); 1712 + } 1818 1713 1819 1714 spin_lock_irqsave(&xhci->lock, flags); 1820 1715 ··· 1869 1748 } 1870 1749 } 1871 1750 1872 - static int tegra_xusb_enter_elpg(struct tegra_xusb *tegra, bool wakeup) 1751 + static enum usb_device_speed tegra_xhci_portsc_to_speed(struct tegra_xusb *tegra, u32 portsc) 1752 + { 1753 + if (DEV_LOWSPEED(portsc)) 1754 + return USB_SPEED_LOW; 1755 + 1756 + if (DEV_HIGHSPEED(portsc)) 1757 + return USB_SPEED_HIGH; 1758 + 1759 + if (DEV_FULLSPEED(portsc)) 1760 + return USB_SPEED_FULL; 1761 + 1762 + if (DEV_SUPERSPEED_ANY(portsc)) 1763 + return USB_SPEED_SUPER; 1764 + 1765 + return USB_SPEED_UNKNOWN; 1766 + } 1767 + 1768 + static void tegra_xhci_enable_phy_sleepwalk_wake(struct tegra_xusb *tegra) 1769 + { 1770 + struct tegra_xusb_padctl *padctl = tegra->padctl; 1771 + struct xhci_hcd *xhci = hcd_to_xhci(tegra->hcd); 1772 + enum usb_device_speed speed; 1773 + struct phy *phy; 1774 + unsigned int index, offset; 1775 + unsigned int i, j, k; 1776 + struct xhci_hub *rhub; 1777 + u32 portsc; 1778 + 1779 + for (i = 0, k = 0; i < tegra->soc->num_types; i++) { 1780 + if (strcmp(tegra->soc->phy_types[i].name, "usb3") == 0) 1781 + rhub = &xhci->usb3_rhub; 1782 + else 1783 + rhub = &xhci->usb2_rhub; 1784 + 1785 + if (strcmp(tegra->soc->phy_types[i].name, "hsic") == 0) 1786 + offset = tegra->soc->ports.usb2.count; 1787 + else 1788 + offset = 0; 1789 + 1790 + for (j = 0; j < tegra->soc->phy_types[i].num; j++) { 1791 + phy = tegra->phys[k++]; 1792 + 1793 + if (!phy) 1794 + continue; 1795 + 1796 + index = j + offset; 1797 + 1798 + if (index >= rhub->num_ports) 1799 + continue; 1800 + 1801 + if (!is_host_mode_phy(tegra, i, j)) 1802 + continue; 1803 + 1804 + portsc = readl(rhub->ports[index]->addr); 1805 + speed = tegra_xhci_portsc_to_speed(tegra, portsc); 1806 + tegra_xusb_padctl_enable_phy_sleepwalk(padctl, phy, speed); 1807 + tegra_xusb_padctl_enable_phy_wake(padctl, phy); 1808 + } 1809 + } 1810 + } 1811 + 1812 + static 
void tegra_xhci_disable_phy_wake(struct tegra_xusb *tegra) 1813 + { 1814 + struct tegra_xusb_padctl *padctl = tegra->padctl; 1815 + unsigned int i; 1816 + 1817 + for (i = 0; i < tegra->num_phys; i++) { 1818 + if (!tegra->phys[i]) 1819 + continue; 1820 + 1821 + tegra_xusb_padctl_disable_phy_wake(padctl, tegra->phys[i]); 1822 + } 1823 + } 1824 + 1825 + static void tegra_xhci_disable_phy_sleepwalk(struct tegra_xusb *tegra) 1826 + { 1827 + struct tegra_xusb_padctl *padctl = tegra->padctl; 1828 + unsigned int i; 1829 + 1830 + for (i = 0; i < tegra->num_phys; i++) { 1831 + if (!tegra->phys[i]) 1832 + continue; 1833 + 1834 + tegra_xusb_padctl_disable_phy_sleepwalk(padctl, tegra->phys[i]); 1835 + } 1836 + } 1837 + 1838 + static int tegra_xusb_enter_elpg(struct tegra_xusb *tegra, bool runtime) 1873 1839 { 1874 1840 struct xhci_hcd *xhci = hcd_to_xhci(tegra->hcd); 1841 + struct device *dev = tegra->dev; 1842 + bool wakeup = runtime ? true : device_may_wakeup(dev); 1843 + unsigned int i; 1875 1844 int err; 1845 + u32 usbcmd; 1846 + 1847 + dev_dbg(dev, "entering ELPG\n"); 1848 + 1849 + usbcmd = readl(&xhci->op_regs->command); 1850 + usbcmd &= ~CMD_EIE; 1851 + writel(usbcmd, &xhci->op_regs->command); 1876 1852 1877 1853 err = tegra_xusb_check_ports(tegra); 1878 1854 if (err < 0) { 1879 1855 dev_err(tegra->dev, "not all ports suspended: %d\n", err); 1880 - return err; 1856 + goto out; 1881 1857 } 1882 1858 1883 1859 err = xhci_suspend(xhci, wakeup); 1884 1860 if (err < 0) { 1885 1861 dev_err(tegra->dev, "failed to suspend XHCI: %d\n", err); 1886 - return err; 1862 + goto out; 1887 1863 } 1888 1864 1889 1865 tegra_xusb_save_context(tegra); 1890 - tegra_xusb_phy_disable(tegra); 1866 + 1867 + if (wakeup) 1868 + tegra_xhci_enable_phy_sleepwalk_wake(tegra); 1869 + 1870 + tegra_xusb_powergate_partitions(tegra); 1871 + 1872 + for (i = 0; i < tegra->num_phys; i++) { 1873 + if (!tegra->phys[i]) 1874 + continue; 1875 + 1876 + phy_power_off(tegra->phys[i]); 1877 + if (!wakeup) 1878 + 
phy_exit(tegra->phys[i]); 1879 + } 1880 + 1891 1881 tegra_xusb_clk_disable(tegra); 1892 1882 1893 - return 0; 1883 + out: 1884 + if (!err) 1885 + dev_dbg(tegra->dev, "entering ELPG done\n"); 1886 + else { 1887 + usbcmd = readl(&xhci->op_regs->command); 1888 + usbcmd |= CMD_EIE; 1889 + writel(usbcmd, &xhci->op_regs->command); 1890 + 1891 + dev_dbg(tegra->dev, "entering ELPG failed\n"); 1892 + pm_runtime_mark_last_busy(tegra->dev); 1893 + } 1894 + 1895 + return err; 1894 1896 } 1895 1897 1896 - static int tegra_xusb_exit_elpg(struct tegra_xusb *tegra, bool wakeup) 1898 + static int tegra_xusb_exit_elpg(struct tegra_xusb *tegra, bool runtime) 1897 1899 { 1898 1900 struct xhci_hcd *xhci = hcd_to_xhci(tegra->hcd); 1901 + struct device *dev = tegra->dev; 1902 + bool wakeup = runtime ? true : device_may_wakeup(dev); 1903 + unsigned int i; 1904 + u32 usbcmd; 1899 1905 int err; 1906 + 1907 + dev_dbg(dev, "exiting ELPG\n"); 1908 + pm_runtime_mark_last_busy(tegra->dev); 1900 1909 1901 1910 err = tegra_xusb_clk_enable(tegra); 1902 1911 if (err < 0) { 1903 1912 dev_err(tegra->dev, "failed to enable clocks: %d\n", err); 1904 - return err; 1913 + goto out; 1905 1914 } 1906 1915 1907 - err = tegra_xusb_phy_enable(tegra); 1908 - if (err < 0) { 1909 - dev_err(tegra->dev, "failed to enable PHYs: %d\n", err); 1910 - goto disable_clk; 1916 + err = tegra_xusb_unpowergate_partitions(tegra); 1917 + if (err) 1918 + goto disable_clks; 1919 + 1920 + if (wakeup) 1921 + tegra_xhci_disable_phy_wake(tegra); 1922 + 1923 + for (i = 0; i < tegra->num_phys; i++) { 1924 + if (!tegra->phys[i]) 1925 + continue; 1926 + 1927 + if (!wakeup) 1928 + phy_init(tegra->phys[i]); 1929 + 1930 + phy_power_on(tegra->phys[i]); 1911 1931 } 1912 1932 1913 1933 tegra_xusb_config(tegra); ··· 2066 1804 goto disable_phy; 2067 1805 } 2068 1806 2069 - err = xhci_resume(xhci, true); 1807 + if (wakeup) 1808 + tegra_xhci_disable_phy_sleepwalk(tegra); 1809 + 1810 + err = xhci_resume(xhci, 0); 2070 1811 if (err < 0) { 2071 1812 
dev_err(tegra->dev, "failed to resume XHCI: %d\n", err); 2072 1813 goto disable_phy; 2073 1814 } 2074 1815 2075 - return 0; 1816 + usbcmd = readl(&xhci->op_regs->command); 1817 + usbcmd |= CMD_EIE; 1818 + writel(usbcmd, &xhci->op_regs->command); 1819 + 1820 + goto out; 2076 1821 2077 1822 disable_phy: 2078 - tegra_xusb_phy_disable(tegra); 2079 - disable_clk: 1823 + for (i = 0; i < tegra->num_phys; i++) { 1824 + if (!tegra->phys[i]) 1825 + continue; 1826 + 1827 + phy_power_off(tegra->phys[i]); 1828 + if (!wakeup) 1829 + phy_exit(tegra->phys[i]); 1830 + } 1831 + tegra_xusb_powergate_partitions(tegra); 1832 + disable_clks: 2080 1833 tegra_xusb_clk_disable(tegra); 1834 + out: 1835 + if (!err) 1836 + dev_dbg(dev, "exiting ELPG done\n"); 1837 + else 1838 + dev_dbg(dev, "exiting ELPG failed\n"); 1839 + 2081 1840 return err; 2082 1841 } 2083 1842 2084 1843 static int tegra_xusb_suspend(struct device *dev) 2085 1844 { 2086 1845 struct tegra_xusb *tegra = dev_get_drvdata(dev); 2087 - bool wakeup = device_may_wakeup(dev); 2088 1846 int err; 2089 1847 2090 1848 synchronize_irq(tegra->mbox_irq); 2091 1849 2092 1850 mutex_lock(&tegra->lock); 2093 - err = tegra_xusb_enter_elpg(tegra, wakeup); 1851 + 1852 + if (pm_runtime_suspended(dev)) { 1853 + err = tegra_xusb_exit_elpg(tegra, true); 1854 + if (err < 0) 1855 + goto out; 1856 + } 1857 + 1858 + err = tegra_xusb_enter_elpg(tegra, false); 1859 + if (err < 0) { 1860 + if (pm_runtime_suspended(dev)) { 1861 + pm_runtime_disable(dev); 1862 + pm_runtime_set_active(dev); 1863 + pm_runtime_enable(dev); 1864 + } 1865 + 1866 + goto out; 1867 + } 1868 + 1869 + out: 1870 + if (!err) { 1871 + tegra->suspended = true; 1872 + pm_runtime_disable(dev); 1873 + 1874 + if (device_may_wakeup(dev)) { 1875 + if (enable_irq_wake(tegra->padctl_irq)) 1876 + dev_err(dev, "failed to enable padctl wakes\n"); 1877 + } 1878 + } 1879 + 2094 1880 mutex_unlock(&tegra->lock); 2095 1881 2096 1882 return err; ··· 2147 1837 static int tegra_xusb_resume(struct device 
*dev) 2148 1838 { 2149 1839 struct tegra_xusb *tegra = dev_get_drvdata(dev); 2150 - bool wakeup = device_may_wakeup(dev); 2151 1840 int err; 2152 1841 2153 1842 mutex_lock(&tegra->lock); 2154 - err = tegra_xusb_exit_elpg(tegra, wakeup); 1843 + 1844 + if (!tegra->suspended) { 1845 + mutex_unlock(&tegra->lock); 1846 + return 0; 1847 + } 1848 + 1849 + err = tegra_xusb_exit_elpg(tegra, false); 1850 + if (err < 0) { 1851 + mutex_unlock(&tegra->lock); 1852 + return err; 1853 + } 1854 + 1855 + if (device_may_wakeup(dev)) { 1856 + if (disable_irq_wake(tegra->padctl_irq)) 1857 + dev_err(dev, "failed to disable padctl wakes\n"); 1858 + } 1859 + tegra->suspended = false; 1860 + mutex_unlock(&tegra->lock); 1861 + 1862 + pm_runtime_set_active(dev); 1863 + pm_runtime_enable(dev); 1864 + 1865 + return 0; 1866 + } 1867 + #endif 1868 + 1869 + #ifdef CONFIG_PM 1870 + static int tegra_xusb_runtime_suspend(struct device *dev) 1871 + { 1872 + struct tegra_xusb *tegra = dev_get_drvdata(dev); 1873 + int ret; 1874 + 1875 + synchronize_irq(tegra->mbox_irq); 1876 + mutex_lock(&tegra->lock); 1877 + ret = tegra_xusb_enter_elpg(tegra, true); 1878 + mutex_unlock(&tegra->lock); 1879 + 1880 + return ret; 1881 + } 1882 + 1883 + static int tegra_xusb_runtime_resume(struct device *dev) 1884 + { 1885 + struct tegra_xusb *tegra = dev_get_drvdata(dev); 1886 + int err; 1887 + 1888 + mutex_lock(&tegra->lock); 1889 + err = tegra_xusb_exit_elpg(tegra, true); 2155 1890 mutex_unlock(&tegra->lock); 2156 1891 2157 1892 return err;
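The large xhci-tegra rework above routes both runtime PM and system sleep through the same ELPG enter/exit helpers, with one policy difference: runtime ELPG always arms wakeup (so bus activity can resume the controller), while system sleep honors the device's may-wakeup setting, as in `wakeup = runtime ? true : device_may_wakeup(dev)`. A trivial standalone statement of that policy (hypothetical helper name):

```c
#include <assert.h>
#include <stdbool.h>

/* Runtime ELPG must always be able to wake; system suspend defers to
 * the device_may_wakeup() configuration. */
static bool elpg_wakeup(bool runtime, bool may_wakeup)
{
	return runtime ? true : may_wakeup;
}
```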
+7 -3
drivers/usb/host/xhci.c
···
 				 urb->transfer_buffer_length,
 				 dir);

-	if (usb_urb_dir_in(urb))
+	if (usb_urb_dir_in(urb)) {
 		len = sg_pcopy_from_buffer(urb->sg, urb->num_sgs,
 					   urb->transfer_buffer,
 					   buf_len,
 					   0);
-
+		if (len != buf_len) {
+			xhci_dbg(hcd_to_xhci(hcd),
+				 "Copy from tmp buf to urb sg list failed\n");
+			urb->actual_length = len;
+		}
+	}
 	urb->transfer_flags &= ~URB_DMA_MAP_SINGLE;
 	kfree(urb->transfer_buffer);
 	urb->transfer_buffer = NULL;
···
 			if (xhci_update_timeout_for_endpoint(xhci, udev,
 					&alt->endpoint[j].desc, state, timeout))
 				return -E2BIG;
-			continue;
 	}
 	return 0;
 }
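The first xhci.c hunk handles a short copy when draining the temporary bounce buffer back into the URB's scatter-gather list: instead of silently reporting the full length, it shrinks `urb->actual_length` to what was actually copied. A sketch of that pattern with a plain buffer standing in for the scatterlist (`copy_back` and its parameters are invented for illustration):

```c
#include <assert.h>
#include <string.h>

/* Copy buf_len bytes from the temporary buffer back to the destination,
 * bounded by the destination's capacity; on a short copy, record how
 * much really made it so the caller reports a truncated length. */
static size_t copy_back(char *dst, size_t dst_len,
			const char *tmp, size_t buf_len,
			size_t *actual_length)
{
	size_t len = buf_len < dst_len ? buf_len : dst_len;

	memcpy(dst, tmp, len);
	if (len != buf_len)
		*actual_length = len;	/* short copy: shrink actual_length */
	return len;
}
```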
+7 -4
drivers/usb/host/xhci.h
···
 #define TRB_BUFF_LEN_UP_TO_BOUNDARY(addr)	(TRB_MAX_BUFF_SIZE - \
 					(addr & (TRB_MAX_BUFF_SIZE - 1)))
 #define MAX_SOFT_RETRY		3
+/*
+ * Limits of consecutive isoc trbs that can Block Event Interrupt (BEI) if
+ * XHCI_AVOID_BEI quirk is in use.
+ */
+#define AVOID_BEI_INTERVAL_MIN	8
+#define AVOID_BEI_INTERVAL_MAX	32

 struct xhci_segment {
 	union xhci_trb		*trbs;
···
  * meaning 64 ring segments.
  * Initial allocated size of the ERST, in number of entries */
 #define	ERST_NUM_SEGS	1
-/* Initial allocated size of the ERST, in number of entries */
-#define	ERST_SIZE	64
-/* Initial number of event segment rings allocated */
-#define	ERST_ENTRIES	1
 /* Poll every 60 seconds */
 #define	POLL_TIMEOUT	60
 /* Stop endpoint command timeout (secs) for URB cancellation watchdog timer */
···
 	u8		isoc_threshold;
 	/* imod_interval in ns (I * 250ns) */
 	u32		imod_interval;
+	u32		isoc_bei_interval;
 	int		event_ring_max;
 	/* 4KB min, 128MB max */
 	int		page_size;
+3 -2
drivers/usb/isp1760/Kconfig
···
 # SPDX-License-Identifier: GPL-2.0

 config USB_ISP1760
-	tristate "NXP ISP 1760/1761 support"
+	tristate "NXP ISP 1760/1761/1763 support"
 	depends on USB || USB_GADGET
+	select REGMAP_MMIO
 	help
-	  Say Y or M here if your system as an ISP1760 USB host controller
+	  Say Y or M here if your system as an ISP1760/1763 USB host controller
 	  or an ISP1761 USB dual-role controller.

 	  This driver does not support isochronous transfers or OTG.
+470 -43
drivers/usb/isp1760/isp1760-core.c
··· 2 2 /* 3 3 * Driver for the NXP ISP1760 chip 4 4 * 5 + * Copyright 2021 Linaro, Rui Miguel Silva 5 6 * Copyright 2014 Laurent Pinchart 6 7 * Copyright 2007 Sebastian Siewior 7 8 * 8 9 * Contacts: 9 10 * Sebastian Siewior <bigeasy@linutronix.de> 10 11 * Laurent Pinchart <laurent.pinchart@ideasonboard.com> 12 + * Rui Miguel Silva <rui.silva@linaro.org> 11 13 */ 12 14 13 15 #include <linux/delay.h> ··· 17 15 #include <linux/io.h> 18 16 #include <linux/kernel.h> 19 17 #include <linux/module.h> 18 + #include <linux/regmap.h> 20 19 #include <linux/slab.h> 21 20 #include <linux/usb.h> 22 21 ··· 26 23 #include "isp1760-regs.h" 27 24 #include "isp1760-udc.h" 28 25 29 - static void isp1760_init_core(struct isp1760_device *isp) 26 + static int isp1760_init_core(struct isp1760_device *isp) 30 27 { 31 - u32 otgctrl; 32 - u32 hwmode; 28 + struct isp1760_hcd *hcd = &isp->hcd; 29 + struct isp1760_udc *udc = &isp->udc; 33 30 34 31 /* Low-level chip reset */ 35 32 if (isp->rst_gpio) { ··· 42 39 * Reset the host controller, including the CPU interface 43 40 * configuration. 
44 41 */ 45 - isp1760_write32(isp->regs, HC_RESET_REG, SW_RESET_RESET_ALL); 42 + isp1760_field_set(hcd->fields, SW_RESET_RESET_ALL); 46 43 msleep(100); 47 44 48 45 /* Setup HW Mode Control: This assumes a level active-low interrupt */ 49 - hwmode = HW_DATA_BUS_32BIT; 46 + if ((isp->devflags & ISP1760_FLAG_ANALOG_OC) && hcd->is_isp1763) { 47 + dev_err(isp->dev, "isp1763 analog overcurrent not available\n"); 48 + return -EINVAL; 49 + } 50 50 51 51 if (isp->devflags & ISP1760_FLAG_BUS_WIDTH_16) 52 - hwmode &= ~HW_DATA_BUS_32BIT; 52 + isp1760_field_clear(hcd->fields, HW_DATA_BUS_WIDTH); 53 + if (isp->devflags & ISP1760_FLAG_BUS_WIDTH_8) 54 + isp1760_field_set(hcd->fields, HW_DATA_BUS_WIDTH); 53 55 if (isp->devflags & ISP1760_FLAG_ANALOG_OC) 54 - hwmode |= HW_ANA_DIGI_OC; 56 + isp1760_field_set(hcd->fields, HW_ANA_DIGI_OC); 55 57 if (isp->devflags & ISP1760_FLAG_DACK_POL_HIGH) 56 - hwmode |= HW_DACK_POL_HIGH; 58 + isp1760_field_set(hcd->fields, HW_DACK_POL_HIGH); 57 59 if (isp->devflags & ISP1760_FLAG_DREQ_POL_HIGH) 58 - hwmode |= HW_DREQ_POL_HIGH; 60 + isp1760_field_set(hcd->fields, HW_DREQ_POL_HIGH); 59 61 if (isp->devflags & ISP1760_FLAG_INTR_POL_HIGH) 60 - hwmode |= HW_INTR_HIGH_ACT; 62 + isp1760_field_set(hcd->fields, HW_INTR_HIGH_ACT); 61 63 if (isp->devflags & ISP1760_FLAG_INTR_EDGE_TRIG) 62 - hwmode |= HW_INTR_EDGE_TRIG; 64 + isp1760_field_set(hcd->fields, HW_INTR_EDGE_TRIG); 63 65 64 66 /* 65 67 * The ISP1761 has a dedicated DC IRQ line but supports sharing the HC ··· 73 65 * spurious interrupts during HCD registration. 74 66 */ 75 67 if (isp->devflags & ISP1760_FLAG_ISP1761) { 76 - isp1760_write32(isp->regs, DC_MODE, 0); 77 - hwmode |= HW_COMN_IRQ; 68 + isp1760_reg_write(udc->regs, ISP176x_DC_MODE, 0); 69 + isp1760_field_set(hcd->fields, HW_COMN_IRQ); 78 70 } 79 - 80 - /* 81 - * We have to set this first in case we're in 16-bit mode. 82 - * Write it twice to ensure correct upper bits if switching 83 - * to 16-bit mode. 
84 - */ 85 - isp1760_write32(isp->regs, HC_HW_MODE_CTRL, hwmode); 86 - isp1760_write32(isp->regs, HC_HW_MODE_CTRL, hwmode); 87 71 88 72 /* 89 73 * PORT 1 Control register of the ISP1760 is the OTG control register 90 74 * on ISP1761. 91 75 * 92 76 * TODO: Really support OTG. For now we configure port 1 in device mode 93 - * when OTG is requested. 94 77 */ 95 - if ((isp->devflags & ISP1760_FLAG_ISP1761) && 96 - (isp->devflags & ISP1760_FLAG_OTG_EN)) 97 - otgctrl = ((HW_DM_PULLDOWN | HW_DP_PULLDOWN) << 16) 98 - | HW_OTG_DISABLE; 99 - else 100 - otgctrl = (HW_SW_SEL_HC_DC << 16) 101 - | (HW_VBUS_DRV | HW_SEL_CP_EXT); 78 + if (((isp->devflags & ISP1760_FLAG_ISP1761) || 79 + (isp->devflags & ISP1760_FLAG_ISP1763)) && 80 + (isp->devflags & ISP1760_FLAG_PERIPHERAL_EN)) { 81 + isp1760_field_set(hcd->fields, HW_DM_PULLDOWN); 82 + isp1760_field_set(hcd->fields, HW_DP_PULLDOWN); 83 + isp1760_field_set(hcd->fields, HW_OTG_DISABLE); 84 + } else { 85 + isp1760_field_set(hcd->fields, HW_SW_SEL_HC_DC); 86 + isp1760_field_set(hcd->fields, HW_VBUS_DRV); 87 + isp1760_field_set(hcd->fields, HW_SEL_CP_EXT); 88 + } 102 89 103 - isp1760_write32(isp->regs, HC_PORT1_CTRL, otgctrl); 104 - 105 - dev_info(isp->dev, "bus width: %u, oc: %s\n", 90 + dev_info(isp->dev, "%s bus width: %u, oc: %s\n", 91 + hcd->is_isp1763 ? "isp1763" : "isp1760", 92 + isp->devflags & ISP1760_FLAG_BUS_WIDTH_8 ? 8 : 106 93 isp->devflags & ISP1760_FLAG_BUS_WIDTH_16 ? 16 : 32, 94 + hcd->is_isp1763 ? "not available" : 107 95 isp->devflags & ISP1760_FLAG_ANALOG_OC ? "analog" : "digital"); 96 + 97 + return 0; 108 98 } 109 99 110 100 void isp1760_set_pullup(struct isp1760_device *isp, bool enable) 111 101 { 112 - isp1760_write32(isp->regs, HW_OTG_CTRL_SET, 113 - enable ? 
HW_DP_PULLUP : HW_DP_PULLUP << 16); 102 + struct isp1760_udc *udc = &isp->udc; 103 + 104 + if (enable) 105 + isp1760_field_set(udc->fields, HW_DP_PULLUP); 106 + else 107 + isp1760_field_set(udc->fields, HW_DP_PULLUP_CLEAR); 114 108 } 109 + 110 + /* 111 + * ISP1760/61: 112 + * 113 + * 60kb divided in: 114 + * - 32 blocks @ 256 bytes 115 + * - 20 blocks @ 1024 bytes 116 + * - 4 blocks @ 8192 bytes 117 + */ 118 + static const struct isp1760_memory_layout isp176x_memory_conf = { 119 + .blocks[0] = 32, 120 + .blocks_size[0] = 256, 121 + .blocks[1] = 20, 122 + .blocks_size[1] = 1024, 123 + .blocks[2] = 4, 124 + .blocks_size[2] = 8192, 125 + 126 + .slot_num = 32, 127 + .payload_blocks = 32 + 20 + 4, 128 + .payload_area_size = 0xf000, 129 + }; 130 + 131 + /* 132 + * ISP1763: 133 + * 134 + * 20kb divided in: 135 + * - 8 blocks @ 256 bytes 136 + * - 2 blocks @ 1024 bytes 137 + * - 4 blocks @ 4096 bytes 138 + */ 139 + static const struct isp1760_memory_layout isp1763_memory_conf = { 140 + .blocks[0] = 8, 141 + .blocks_size[0] = 256, 142 + .blocks[1] = 2, 143 + .blocks_size[1] = 1024, 144 + .blocks[2] = 4, 145 + .blocks_size[2] = 4096, 146 + 147 + .slot_num = 16, 148 + .payload_blocks = 8 + 2 + 4, 149 + .payload_area_size = 0x5000, 150 + }; 151 + 152 + static const struct regmap_range isp176x_hc_volatile_ranges[] = { 153 + regmap_reg_range(ISP176x_HC_USBCMD, ISP176x_HC_ATL_PTD_LASTPTD), 154 + regmap_reg_range(ISP176x_HC_BUFFER_STATUS, ISP176x_HC_MEMORY), 155 + regmap_reg_range(ISP176x_HC_INTERRUPT, ISP176x_HC_OTG_CTRL_CLEAR), 156 + }; 157 + 158 + static const struct regmap_access_table isp176x_hc_volatile_table = { 159 + .yes_ranges = isp176x_hc_volatile_ranges, 160 + .n_yes_ranges = ARRAY_SIZE(isp176x_hc_volatile_ranges), 161 + }; 162 + 163 + static const struct regmap_config isp1760_hc_regmap_conf = { 164 + .name = "isp1760-hc", 165 + .reg_bits = 16, 166 + .reg_stride = 4, 167 + .val_bits = 32, 168 + .fast_io = true, 169 + .max_register = ISP176x_HC_OTG_CTRL_CLEAR, 170 + 
.volatile_table = &isp176x_hc_volatile_table, 171 + }; 172 + 173 + static const struct reg_field isp1760_hc_reg_fields[] = { 174 + [HCS_PPC] = REG_FIELD(ISP176x_HC_HCSPARAMS, 4, 4), 175 + [HCS_N_PORTS] = REG_FIELD(ISP176x_HC_HCSPARAMS, 0, 3), 176 + [HCC_ISOC_CACHE] = REG_FIELD(ISP176x_HC_HCCPARAMS, 7, 7), 177 + [HCC_ISOC_THRES] = REG_FIELD(ISP176x_HC_HCCPARAMS, 4, 6), 178 + [CMD_LRESET] = REG_FIELD(ISP176x_HC_USBCMD, 7, 7), 179 + [CMD_RESET] = REG_FIELD(ISP176x_HC_USBCMD, 1, 1), 180 + [CMD_RUN] = REG_FIELD(ISP176x_HC_USBCMD, 0, 0), 181 + [STS_PCD] = REG_FIELD(ISP176x_HC_USBSTS, 2, 2), 182 + [HC_FRINDEX] = REG_FIELD(ISP176x_HC_FRINDEX, 0, 13), 183 + [FLAG_CF] = REG_FIELD(ISP176x_HC_CONFIGFLAG, 0, 0), 184 + [HC_ISO_PTD_DONEMAP] = REG_FIELD(ISP176x_HC_ISO_PTD_DONEMAP, 0, 31), 185 + [HC_ISO_PTD_SKIPMAP] = REG_FIELD(ISP176x_HC_ISO_PTD_SKIPMAP, 0, 31), 186 + [HC_ISO_PTD_LASTPTD] = REG_FIELD(ISP176x_HC_ISO_PTD_LASTPTD, 0, 31), 187 + [HC_INT_PTD_DONEMAP] = REG_FIELD(ISP176x_HC_INT_PTD_DONEMAP, 0, 31), 188 + [HC_INT_PTD_SKIPMAP] = REG_FIELD(ISP176x_HC_INT_PTD_SKIPMAP, 0, 31), 189 + [HC_INT_PTD_LASTPTD] = REG_FIELD(ISP176x_HC_INT_PTD_LASTPTD, 0, 31), 190 + [HC_ATL_PTD_DONEMAP] = REG_FIELD(ISP176x_HC_ATL_PTD_DONEMAP, 0, 31), 191 + [HC_ATL_PTD_SKIPMAP] = REG_FIELD(ISP176x_HC_ATL_PTD_SKIPMAP, 0, 31), 192 + [HC_ATL_PTD_LASTPTD] = REG_FIELD(ISP176x_HC_ATL_PTD_LASTPTD, 0, 31), 193 + [PORT_OWNER] = REG_FIELD(ISP176x_HC_PORTSC1, 13, 13), 194 + [PORT_POWER] = REG_FIELD(ISP176x_HC_PORTSC1, 12, 12), 195 + [PORT_LSTATUS] = REG_FIELD(ISP176x_HC_PORTSC1, 10, 11), 196 + [PORT_RESET] = REG_FIELD(ISP176x_HC_PORTSC1, 8, 8), 197 + [PORT_SUSPEND] = REG_FIELD(ISP176x_HC_PORTSC1, 7, 7), 198 + [PORT_RESUME] = REG_FIELD(ISP176x_HC_PORTSC1, 6, 6), 199 + [PORT_PE] = REG_FIELD(ISP176x_HC_PORTSC1, 2, 2), 200 + [PORT_CSC] = REG_FIELD(ISP176x_HC_PORTSC1, 1, 1), 201 + [PORT_CONNECT] = REG_FIELD(ISP176x_HC_PORTSC1, 0, 0), 202 + [ALL_ATX_RESET] = REG_FIELD(ISP176x_HC_HW_MODE_CTRL, 31, 31), 203 + 
[HW_ANA_DIGI_OC] = REG_FIELD(ISP176x_HC_HW_MODE_CTRL, 15, 15), 204 + [HW_COMN_IRQ] = REG_FIELD(ISP176x_HC_HW_MODE_CTRL, 10, 10), 205 + [HW_DATA_BUS_WIDTH] = REG_FIELD(ISP176x_HC_HW_MODE_CTRL, 8, 8), 206 + [HW_DACK_POL_HIGH] = REG_FIELD(ISP176x_HC_HW_MODE_CTRL, 6, 6), 207 + [HW_DREQ_POL_HIGH] = REG_FIELD(ISP176x_HC_HW_MODE_CTRL, 5, 5), 208 + [HW_INTR_HIGH_ACT] = REG_FIELD(ISP176x_HC_HW_MODE_CTRL, 2, 2), 209 + [HW_INTR_EDGE_TRIG] = REG_FIELD(ISP176x_HC_HW_MODE_CTRL, 1, 1), 210 + [HW_GLOBAL_INTR_EN] = REG_FIELD(ISP176x_HC_HW_MODE_CTRL, 0, 0), 211 + [HC_CHIP_REV] = REG_FIELD(ISP176x_HC_CHIP_ID, 16, 31), 212 + [HC_CHIP_ID_HIGH] = REG_FIELD(ISP176x_HC_CHIP_ID, 8, 15), 213 + [HC_CHIP_ID_LOW] = REG_FIELD(ISP176x_HC_CHIP_ID, 0, 7), 214 + [HC_SCRATCH] = REG_FIELD(ISP176x_HC_SCRATCH, 0, 31), 215 + [SW_RESET_RESET_ALL] = REG_FIELD(ISP176x_HC_RESET, 0, 0), 216 + [ISO_BUF_FILL] = REG_FIELD(ISP176x_HC_BUFFER_STATUS, 2, 2), 217 + [INT_BUF_FILL] = REG_FIELD(ISP176x_HC_BUFFER_STATUS, 1, 1), 218 + [ATL_BUF_FILL] = REG_FIELD(ISP176x_HC_BUFFER_STATUS, 0, 0), 219 + [MEM_BANK_SEL] = REG_FIELD(ISP176x_HC_MEMORY, 16, 17), 220 + [MEM_START_ADDR] = REG_FIELD(ISP176x_HC_MEMORY, 0, 15), 221 + [HC_INTERRUPT] = REG_FIELD(ISP176x_HC_INTERRUPT, 0, 9), 222 + [HC_ATL_IRQ_ENABLE] = REG_FIELD(ISP176x_HC_INTERRUPT_ENABLE, 8, 8), 223 + [HC_INT_IRQ_ENABLE] = REG_FIELD(ISP176x_HC_INTERRUPT_ENABLE, 7, 7), 224 + [HC_ISO_IRQ_MASK_OR] = REG_FIELD(ISP176x_HC_ISO_IRQ_MASK_OR, 0, 31), 225 + [HC_INT_IRQ_MASK_OR] = REG_FIELD(ISP176x_HC_INT_IRQ_MASK_OR, 0, 31), 226 + [HC_ATL_IRQ_MASK_OR] = REG_FIELD(ISP176x_HC_ATL_IRQ_MASK_OR, 0, 31), 227 + [HC_ISO_IRQ_MASK_AND] = REG_FIELD(ISP176x_HC_ISO_IRQ_MASK_AND, 0, 31), 228 + [HC_INT_IRQ_MASK_AND] = REG_FIELD(ISP176x_HC_INT_IRQ_MASK_AND, 0, 31), 229 + [HC_ATL_IRQ_MASK_AND] = REG_FIELD(ISP176x_HC_ATL_IRQ_MASK_AND, 0, 31), 230 + [HW_OTG_DISABLE] = REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 10, 10), 231 + [HW_SW_SEL_HC_DC] = REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 7, 7), 232 + [HW_VBUS_DRV] 
= REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 4, 4), 233 + [HW_SEL_CP_EXT] = REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 3, 3), 234 + [HW_DM_PULLDOWN] = REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 2, 2), 235 + [HW_DP_PULLDOWN] = REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 1, 1), 236 + [HW_DP_PULLUP] = REG_FIELD(ISP176x_HC_OTG_CTRL_SET, 0, 0), 237 + [HW_OTG_DISABLE_CLEAR] = REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 10, 10), 238 + [HW_SW_SEL_HC_DC_CLEAR] = REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 7, 7), 239 + [HW_VBUS_DRV_CLEAR] = REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 4, 4), 240 + [HW_SEL_CP_EXT_CLEAR] = REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 3, 3), 241 + [HW_DM_PULLDOWN_CLEAR] = REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 2, 2), 242 + [HW_DP_PULLDOWN_CLEAR] = REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 1, 1), 243 + [HW_DP_PULLUP_CLEAR] = REG_FIELD(ISP176x_HC_OTG_CTRL_CLEAR, 0, 0), 244 + }; 245 + 246 + static const struct reg_field isp1763_hc_reg_fields[] = { 247 + [CMD_LRESET] = REG_FIELD(ISP1763_HC_USBCMD, 7, 7), 248 + [CMD_RESET] = REG_FIELD(ISP1763_HC_USBCMD, 1, 1), 249 + [CMD_RUN] = REG_FIELD(ISP1763_HC_USBCMD, 0, 0), 250 + [STS_PCD] = REG_FIELD(ISP1763_HC_USBSTS, 2, 2), 251 + [HC_FRINDEX] = REG_FIELD(ISP1763_HC_FRINDEX, 0, 13), 252 + [FLAG_CF] = REG_FIELD(ISP1763_HC_CONFIGFLAG, 0, 0), 253 + [HC_ISO_PTD_DONEMAP] = REG_FIELD(ISP1763_HC_ISO_PTD_DONEMAP, 0, 15), 254 + [HC_ISO_PTD_SKIPMAP] = REG_FIELD(ISP1763_HC_ISO_PTD_SKIPMAP, 0, 15), 255 + [HC_ISO_PTD_LASTPTD] = REG_FIELD(ISP1763_HC_ISO_PTD_LASTPTD, 0, 15), 256 + [HC_INT_PTD_DONEMAP] = REG_FIELD(ISP1763_HC_INT_PTD_DONEMAP, 0, 15), 257 + [HC_INT_PTD_SKIPMAP] = REG_FIELD(ISP1763_HC_INT_PTD_SKIPMAP, 0, 15), 258 + [HC_INT_PTD_LASTPTD] = REG_FIELD(ISP1763_HC_INT_PTD_LASTPTD, 0, 15), 259 + [HC_ATL_PTD_DONEMAP] = REG_FIELD(ISP1763_HC_ATL_PTD_DONEMAP, 0, 15), 260 + [HC_ATL_PTD_SKIPMAP] = REG_FIELD(ISP1763_HC_ATL_PTD_SKIPMAP, 0, 15), 261 + [HC_ATL_PTD_LASTPTD] = REG_FIELD(ISP1763_HC_ATL_PTD_LASTPTD, 0, 15), 262 + [PORT_OWNER] = REG_FIELD(ISP1763_HC_PORTSC1, 13, 13), 263 + 
[PORT_POWER] = REG_FIELD(ISP1763_HC_PORTSC1, 12, 12), 264 + [PORT_LSTATUS] = REG_FIELD(ISP1763_HC_PORTSC1, 10, 11), 265 + [PORT_RESET] = REG_FIELD(ISP1763_HC_PORTSC1, 8, 8), 266 + [PORT_SUSPEND] = REG_FIELD(ISP1763_HC_PORTSC1, 7, 7), 267 + [PORT_RESUME] = REG_FIELD(ISP1763_HC_PORTSC1, 6, 6), 268 + [PORT_PE] = REG_FIELD(ISP1763_HC_PORTSC1, 2, 2), 269 + [PORT_CSC] = REG_FIELD(ISP1763_HC_PORTSC1, 1, 1), 270 + [PORT_CONNECT] = REG_FIELD(ISP1763_HC_PORTSC1, 0, 0), 271 + [HW_DATA_BUS_WIDTH] = REG_FIELD(ISP1763_HC_HW_MODE_CTRL, 4, 4), 272 + [HW_DACK_POL_HIGH] = REG_FIELD(ISP1763_HC_HW_MODE_CTRL, 6, 6), 273 + [HW_DREQ_POL_HIGH] = REG_FIELD(ISP1763_HC_HW_MODE_CTRL, 5, 5), 274 + [HW_INTF_LOCK] = REG_FIELD(ISP1763_HC_HW_MODE_CTRL, 3, 3), 275 + [HW_INTR_HIGH_ACT] = REG_FIELD(ISP1763_HC_HW_MODE_CTRL, 2, 2), 276 + [HW_INTR_EDGE_TRIG] = REG_FIELD(ISP1763_HC_HW_MODE_CTRL, 1, 1), 277 + [HW_GLOBAL_INTR_EN] = REG_FIELD(ISP1763_HC_HW_MODE_CTRL, 0, 0), 278 + [SW_RESET_RESET_ATX] = REG_FIELD(ISP1763_HC_RESET, 3, 3), 279 + [SW_RESET_RESET_ALL] = REG_FIELD(ISP1763_HC_RESET, 0, 0), 280 + [HC_CHIP_ID_HIGH] = REG_FIELD(ISP1763_HC_CHIP_ID, 0, 15), 281 + [HC_CHIP_ID_LOW] = REG_FIELD(ISP1763_HC_CHIP_REV, 8, 15), 282 + [HC_CHIP_REV] = REG_FIELD(ISP1763_HC_CHIP_REV, 0, 7), 283 + [HC_SCRATCH] = REG_FIELD(ISP1763_HC_SCRATCH, 0, 15), 284 + [ISO_BUF_FILL] = REG_FIELD(ISP1763_HC_BUFFER_STATUS, 2, 2), 285 + [INT_BUF_FILL] = REG_FIELD(ISP1763_HC_BUFFER_STATUS, 1, 1), 286 + [ATL_BUF_FILL] = REG_FIELD(ISP1763_HC_BUFFER_STATUS, 0, 0), 287 + [MEM_START_ADDR] = REG_FIELD(ISP1763_HC_MEMORY, 0, 15), 288 + [HC_DATA] = REG_FIELD(ISP1763_HC_DATA, 0, 15), 289 + [HC_INTERRUPT] = REG_FIELD(ISP1763_HC_INTERRUPT, 0, 10), 290 + [HC_ATL_IRQ_ENABLE] = REG_FIELD(ISP1763_HC_INTERRUPT_ENABLE, 8, 8), 291 + [HC_INT_IRQ_ENABLE] = REG_FIELD(ISP1763_HC_INTERRUPT_ENABLE, 7, 7), 292 + [HC_ISO_IRQ_MASK_OR] = REG_FIELD(ISP1763_HC_ISO_IRQ_MASK_OR, 0, 15), 293 + [HC_INT_IRQ_MASK_OR] = REG_FIELD(ISP1763_HC_INT_IRQ_MASK_OR, 0, 15), 294 
+ [HC_ATL_IRQ_MASK_OR] = REG_FIELD(ISP1763_HC_ATL_IRQ_MASK_OR, 0, 15), 295 + [HC_ISO_IRQ_MASK_AND] = REG_FIELD(ISP1763_HC_ISO_IRQ_MASK_AND, 0, 15), 296 + [HC_INT_IRQ_MASK_AND] = REG_FIELD(ISP1763_HC_INT_IRQ_MASK_AND, 0, 15), 297 + [HC_ATL_IRQ_MASK_AND] = REG_FIELD(ISP1763_HC_ATL_IRQ_MASK_AND, 0, 15), 298 + [HW_HC_2_DIS] = REG_FIELD(ISP1763_HC_OTG_CTRL_SET, 15, 15), 299 + [HW_OTG_DISABLE] = REG_FIELD(ISP1763_HC_OTG_CTRL_SET, 10, 10), 300 + [HW_SW_SEL_HC_DC] = REG_FIELD(ISP1763_HC_OTG_CTRL_SET, 7, 7), 301 + [HW_VBUS_DRV] = REG_FIELD(ISP1763_HC_OTG_CTRL_SET, 4, 4), 302 + [HW_SEL_CP_EXT] = REG_FIELD(ISP1763_HC_OTG_CTRL_SET, 3, 3), 303 + [HW_DM_PULLDOWN] = REG_FIELD(ISP1763_HC_OTG_CTRL_SET, 2, 2), 304 + [HW_DP_PULLDOWN] = REG_FIELD(ISP1763_HC_OTG_CTRL_SET, 1, 1), 305 + [HW_DP_PULLUP] = REG_FIELD(ISP1763_HC_OTG_CTRL_SET, 0, 0), 306 + [HW_HC_2_DIS_CLEAR] = REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 15, 15), 307 + [HW_OTG_DISABLE_CLEAR] = REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 10, 10), 308 + [HW_SW_SEL_HC_DC_CLEAR] = REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 7, 7), 309 + [HW_VBUS_DRV_CLEAR] = REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 4, 4), 310 + [HW_SEL_CP_EXT_CLEAR] = REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 3, 3), 311 + [HW_DM_PULLDOWN_CLEAR] = REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 2, 2), 312 + [HW_DP_PULLDOWN_CLEAR] = REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 1, 1), 313 + [HW_DP_PULLUP_CLEAR] = REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 0, 0), 314 + }; 315 + 316 + static const struct regmap_range isp1763_hc_volatile_ranges[] = { 317 + regmap_reg_range(ISP1763_HC_USBCMD, ISP1763_HC_ATL_PTD_LASTPTD), 318 + regmap_reg_range(ISP1763_HC_BUFFER_STATUS, ISP1763_HC_DATA), 319 + regmap_reg_range(ISP1763_HC_INTERRUPT, ISP1763_HC_OTG_CTRL_CLEAR), 320 + }; 321 + 322 + static const struct regmap_access_table isp1763_hc_volatile_table = { 323 + .yes_ranges = isp1763_hc_volatile_ranges, 324 + .n_yes_ranges = ARRAY_SIZE(isp1763_hc_volatile_ranges), 325 + }; 326 + 327 + static const struct regmap_config 
isp1763_hc_regmap_conf = { 328 + .name = "isp1763-hc", 329 + .reg_bits = 8, 330 + .reg_stride = 2, 331 + .val_bits = 16, 332 + .fast_io = true, 333 + .max_register = ISP1763_HC_OTG_CTRL_CLEAR, 334 + .volatile_table = &isp1763_hc_volatile_table, 335 + }; 336 + 337 + static const struct regmap_range isp176x_dc_volatile_ranges[] = { 338 + regmap_reg_range(ISP176x_DC_EPMAXPKTSZ, ISP176x_DC_EPTYPE), 339 + regmap_reg_range(ISP176x_DC_BUFLEN, ISP176x_DC_EPINDEX), 340 + }; 341 + 342 + static const struct regmap_access_table isp176x_dc_volatile_table = { 343 + .yes_ranges = isp176x_dc_volatile_ranges, 344 + .n_yes_ranges = ARRAY_SIZE(isp176x_dc_volatile_ranges), 345 + }; 346 + 347 + static const struct regmap_config isp1761_dc_regmap_conf = { 348 + .name = "isp1761-dc", 349 + .reg_bits = 16, 350 + .reg_stride = 4, 351 + .val_bits = 32, 352 + .fast_io = true, 353 + .max_register = ISP176x_DC_TESTMODE, 354 + .volatile_table = &isp176x_dc_volatile_table, 355 + }; 356 + 357 + static const struct reg_field isp1761_dc_reg_fields[] = { 358 + [DC_DEVEN] = REG_FIELD(ISP176x_DC_ADDRESS, 7, 7), 359 + [DC_DEVADDR] = REG_FIELD(ISP176x_DC_ADDRESS, 0, 6), 360 + [DC_VBUSSTAT] = REG_FIELD(ISP176x_DC_MODE, 8, 8), 361 + [DC_SFRESET] = REG_FIELD(ISP176x_DC_MODE, 4, 4), 362 + [DC_GLINTENA] = REG_FIELD(ISP176x_DC_MODE, 3, 3), 363 + [DC_CDBGMOD_ACK] = REG_FIELD(ISP176x_DC_INTCONF, 6, 6), 364 + [DC_DDBGMODIN_ACK] = REG_FIELD(ISP176x_DC_INTCONF, 4, 4), 365 + [DC_DDBGMODOUT_ACK] = REG_FIELD(ISP176x_DC_INTCONF, 2, 2), 366 + [DC_INTPOL] = REG_FIELD(ISP176x_DC_INTCONF, 0, 0), 367 + [DC_IEPRXTX_7] = REG_FIELD(ISP176x_DC_INTENABLE, 25, 25), 368 + [DC_IEPRXTX_6] = REG_FIELD(ISP176x_DC_INTENABLE, 23, 23), 369 + [DC_IEPRXTX_5] = REG_FIELD(ISP176x_DC_INTENABLE, 21, 21), 370 + [DC_IEPRXTX_4] = REG_FIELD(ISP176x_DC_INTENABLE, 19, 19), 371 + [DC_IEPRXTX_3] = REG_FIELD(ISP176x_DC_INTENABLE, 17, 17), 372 + [DC_IEPRXTX_2] = REG_FIELD(ISP176x_DC_INTENABLE, 15, 15), 373 + [DC_IEPRXTX_1] = 
REG_FIELD(ISP176x_DC_INTENABLE, 13, 13), 374 + [DC_IEPRXTX_0] = REG_FIELD(ISP176x_DC_INTENABLE, 11, 11), 375 + [DC_IEP0SETUP] = REG_FIELD(ISP176x_DC_INTENABLE, 8, 8), 376 + [DC_IEVBUS] = REG_FIELD(ISP176x_DC_INTENABLE, 7, 7), 377 + [DC_IEHS_STA] = REG_FIELD(ISP176x_DC_INTENABLE, 5, 5), 378 + [DC_IERESM] = REG_FIELD(ISP176x_DC_INTENABLE, 4, 4), 379 + [DC_IESUSP] = REG_FIELD(ISP176x_DC_INTENABLE, 3, 3), 380 + [DC_IEBRST] = REG_FIELD(ISP176x_DC_INTENABLE, 0, 0), 381 + [DC_EP0SETUP] = REG_FIELD(ISP176x_DC_EPINDEX, 5, 5), 382 + [DC_ENDPIDX] = REG_FIELD(ISP176x_DC_EPINDEX, 1, 4), 383 + [DC_EPDIR] = REG_FIELD(ISP176x_DC_EPINDEX, 0, 0), 384 + [DC_CLBUF] = REG_FIELD(ISP176x_DC_CTRLFUNC, 4, 4), 385 + [DC_VENDP] = REG_FIELD(ISP176x_DC_CTRLFUNC, 3, 3), 386 + [DC_DSEN] = REG_FIELD(ISP176x_DC_CTRLFUNC, 2, 2), 387 + [DC_STATUS] = REG_FIELD(ISP176x_DC_CTRLFUNC, 1, 1), 388 + [DC_STALL] = REG_FIELD(ISP176x_DC_CTRLFUNC, 0, 0), 389 + [DC_BUFLEN] = REG_FIELD(ISP176x_DC_BUFLEN, 0, 15), 390 + [DC_FFOSZ] = REG_FIELD(ISP176x_DC_EPMAXPKTSZ, 0, 10), 391 + [DC_EPENABLE] = REG_FIELD(ISP176x_DC_EPTYPE, 3, 3), 392 + [DC_ENDPTYP] = REG_FIELD(ISP176x_DC_EPTYPE, 0, 1), 393 + [DC_UFRAMENUM] = REG_FIELD(ISP176x_DC_FRAMENUM, 11, 13), 394 + [DC_FRAMENUM] = REG_FIELD(ISP176x_DC_FRAMENUM, 0, 10), 395 + [DC_CHIP_ID_HIGH] = REG_FIELD(ISP176x_DC_CHIPID, 16, 31), 396 + [DC_CHIP_ID_LOW] = REG_FIELD(ISP176x_DC_CHIPID, 0, 15), 397 + [DC_SCRATCH] = REG_FIELD(ISP176x_DC_SCRATCH, 0, 15), 398 + }; 399 + 400 + static const struct regmap_range isp1763_dc_volatile_ranges[] = { 401 + regmap_reg_range(ISP1763_DC_EPMAXPKTSZ, ISP1763_DC_EPTYPE), 402 + regmap_reg_range(ISP1763_DC_BUFLEN, ISP1763_DC_EPINDEX), 403 + }; 404 + 405 + static const struct regmap_access_table isp1763_dc_volatile_table = { 406 + .yes_ranges = isp1763_dc_volatile_ranges, 407 + .n_yes_ranges = ARRAY_SIZE(isp1763_dc_volatile_ranges), 408 + }; 409 + 410 + static const struct reg_field isp1763_dc_reg_fields[] = { 411 + [DC_DEVEN] = 
REG_FIELD(ISP1763_DC_ADDRESS, 7, 7), 412 + [DC_DEVADDR] = REG_FIELD(ISP1763_DC_ADDRESS, 0, 6), 413 + [DC_VBUSSTAT] = REG_FIELD(ISP1763_DC_MODE, 8, 8), 414 + [DC_SFRESET] = REG_FIELD(ISP1763_DC_MODE, 4, 4), 415 + [DC_GLINTENA] = REG_FIELD(ISP1763_DC_MODE, 3, 3), 416 + [DC_CDBGMOD_ACK] = REG_FIELD(ISP1763_DC_INTCONF, 6, 6), 417 + [DC_DDBGMODIN_ACK] = REG_FIELD(ISP1763_DC_INTCONF, 4, 4), 418 + [DC_DDBGMODOUT_ACK] = REG_FIELD(ISP1763_DC_INTCONF, 2, 2), 419 + [DC_INTPOL] = REG_FIELD(ISP1763_DC_INTCONF, 0, 0), 420 + [DC_IEPRXTX_7] = REG_FIELD(ISP1763_DC_INTENABLE, 25, 25), 421 + [DC_IEPRXTX_6] = REG_FIELD(ISP1763_DC_INTENABLE, 23, 23), 422 + [DC_IEPRXTX_5] = REG_FIELD(ISP1763_DC_INTENABLE, 21, 21), 423 + [DC_IEPRXTX_4] = REG_FIELD(ISP1763_DC_INTENABLE, 19, 19), 424 + [DC_IEPRXTX_3] = REG_FIELD(ISP1763_DC_INTENABLE, 17, 17), 425 + [DC_IEPRXTX_2] = REG_FIELD(ISP1763_DC_INTENABLE, 15, 15), 426 + [DC_IEPRXTX_1] = REG_FIELD(ISP1763_DC_INTENABLE, 13, 13), 427 + [DC_IEPRXTX_0] = REG_FIELD(ISP1763_DC_INTENABLE, 11, 11), 428 + [DC_IEP0SETUP] = REG_FIELD(ISP1763_DC_INTENABLE, 8, 8), 429 + [DC_IEVBUS] = REG_FIELD(ISP1763_DC_INTENABLE, 7, 7), 430 + [DC_IEHS_STA] = REG_FIELD(ISP1763_DC_INTENABLE, 5, 5), 431 + [DC_IERESM] = REG_FIELD(ISP1763_DC_INTENABLE, 4, 4), 432 + [DC_IESUSP] = REG_FIELD(ISP1763_DC_INTENABLE, 3, 3), 433 + [DC_IEBRST] = REG_FIELD(ISP1763_DC_INTENABLE, 0, 0), 434 + [DC_EP0SETUP] = REG_FIELD(ISP1763_DC_EPINDEX, 5, 5), 435 + [DC_ENDPIDX] = REG_FIELD(ISP1763_DC_EPINDEX, 1, 4), 436 + [DC_EPDIR] = REG_FIELD(ISP1763_DC_EPINDEX, 0, 0), 437 + [DC_CLBUF] = REG_FIELD(ISP1763_DC_CTRLFUNC, 4, 4), 438 + [DC_VENDP] = REG_FIELD(ISP1763_DC_CTRLFUNC, 3, 3), 439 + [DC_DSEN] = REG_FIELD(ISP1763_DC_CTRLFUNC, 2, 2), 440 + [DC_STATUS] = REG_FIELD(ISP1763_DC_CTRLFUNC, 1, 1), 441 + [DC_STALL] = REG_FIELD(ISP1763_DC_CTRLFUNC, 0, 0), 442 + [DC_BUFLEN] = REG_FIELD(ISP1763_DC_BUFLEN, 0, 15), 443 + [DC_FFOSZ] = REG_FIELD(ISP1763_DC_EPMAXPKTSZ, 0, 10), 444 + [DC_EPENABLE] = 
REG_FIELD(ISP1763_DC_EPTYPE, 3, 3), 445 + [DC_ENDPTYP] = REG_FIELD(ISP1763_DC_EPTYPE, 0, 1), 446 + [DC_UFRAMENUM] = REG_FIELD(ISP1763_DC_FRAMENUM, 11, 13), 447 + [DC_FRAMENUM] = REG_FIELD(ISP1763_DC_FRAMENUM, 0, 10), 448 + [DC_CHIP_ID_HIGH] = REG_FIELD(ISP1763_DC_CHIPID_HIGH, 0, 15), 449 + [DC_CHIP_ID_LOW] = REG_FIELD(ISP1763_DC_CHIPID_LOW, 0, 15), 450 + [DC_SCRATCH] = REG_FIELD(ISP1763_DC_SCRATCH, 0, 15), 451 + }; 452 + 453 + static const struct regmap_config isp1763_dc_regmap_conf = { 454 + .name = "isp1763-dc", 455 + .reg_bits = 8, 456 + .reg_stride = 2, 457 + .val_bits = 16, 458 + .fast_io = true, 459 + .max_register = ISP1763_DC_TESTMODE, 460 + .volatile_table = &isp1763_dc_volatile_table, 461 + }; 115 462 116 463 int isp1760_register(struct resource *mem, int irq, unsigned long irqflags, 117 464 struct device *dev, unsigned int devflags) 118 465 { 466 + const struct regmap_config *hc_regmap; 467 + const struct reg_field *hc_reg_fields; 468 + const struct regmap_config *dc_regmap; 469 + const struct reg_field *dc_reg_fields; 119 470 struct isp1760_device *isp; 120 - bool udc_disabled = !(devflags & ISP1760_FLAG_ISP1761); 471 + struct isp1760_hcd *hcd; 472 + struct isp1760_udc *udc; 473 + struct regmap_field *f; 474 + bool udc_enabled; 121 475 int ret; 476 + int i; 122 477 123 478 /* 124 479 * If neither the HCD nor the UDC is enabled, return an error, as no 125 480 * device would be registered. 
126 481 */ 482 + udc_enabled = ((devflags & ISP1760_FLAG_ISP1763) || 483 + (devflags & ISP1760_FLAG_ISP1761)); 484 + 127 485 if ((!IS_ENABLED(CONFIG_USB_ISP1760_HCD) || usb_disabled()) && 128 - (!IS_ENABLED(CONFIG_USB_ISP1761_UDC) || udc_disabled)) 486 + (!IS_ENABLED(CONFIG_USB_ISP1761_UDC) || !udc_enabled)) 129 487 return -ENODEV; 130 488 131 489 isp = devm_kzalloc(dev, sizeof(*isp), GFP_KERNEL); ··· 500 126 501 127 isp->dev = dev; 502 128 isp->devflags = devflags; 129 + hcd = &isp->hcd; 130 + udc = &isp->udc; 131 + 132 + hcd->is_isp1763 = !!(devflags & ISP1760_FLAG_ISP1763); 133 + udc->is_isp1763 = !!(devflags & ISP1760_FLAG_ISP1763); 134 + 135 + if (!hcd->is_isp1763 && (devflags & ISP1760_FLAG_BUS_WIDTH_8)) { 136 + dev_err(dev, "isp1760/61 do not support data width 8\n"); 137 + return -EINVAL; 138 + } 139 + 140 + if (hcd->is_isp1763) { 141 + hc_regmap = &isp1763_hc_regmap_conf; 142 + hc_reg_fields = &isp1763_hc_reg_fields[0]; 143 + dc_regmap = &isp1763_dc_regmap_conf; 144 + dc_reg_fields = &isp1763_dc_reg_fields[0]; 145 + } else { 146 + hc_regmap = &isp1760_hc_regmap_conf; 147 + hc_reg_fields = &isp1760_hc_reg_fields[0]; 148 + dc_regmap = &isp1761_dc_regmap_conf; 149 + dc_reg_fields = &isp1761_dc_reg_fields[0]; 150 + } 503 151 504 152 isp->rst_gpio = devm_gpiod_get_optional(dev, NULL, GPIOD_OUT_HIGH); 505 153 if (IS_ERR(isp->rst_gpio)) 506 154 return PTR_ERR(isp->rst_gpio); 507 155 508 - isp->regs = devm_ioremap_resource(dev, mem); 509 - if (IS_ERR(isp->regs)) 510 - return PTR_ERR(isp->regs); 156 + hcd->base = devm_ioremap_resource(dev, mem); 157 + if (IS_ERR(hcd->base)) 158 + return PTR_ERR(hcd->base); 511 159 512 - isp1760_init_core(isp); 160 + hcd->regs = devm_regmap_init_mmio(dev, hcd->base, hc_regmap); 161 + if (IS_ERR(hcd->regs)) 162 + return PTR_ERR(hcd->regs); 163 + 164 + for (i = 0; i < HC_FIELD_MAX; i++) { 165 + f = devm_regmap_field_alloc(dev, hcd->regs, hc_reg_fields[i]); 166 + if (IS_ERR(f)) 167 + return PTR_ERR(f); 168 + 169 + hcd->fields[i] = f; 
170 + } 171 + 172 + udc->regs = devm_regmap_init_mmio(dev, hcd->base, dc_regmap); 173 + if (IS_ERR(udc->regs)) 174 + return PTR_ERR(udc->regs); 175 + 176 + for (i = 0; i < DC_FIELD_MAX; i++) { 177 + f = devm_regmap_field_alloc(dev, udc->regs, dc_reg_fields[i]); 178 + if (IS_ERR(f)) 179 + return PTR_ERR(f); 180 + 181 + udc->fields[i] = f; 182 + } 183 + 184 + if (hcd->is_isp1763) 185 + hcd->memory_layout = &isp1763_memory_conf; 186 + else 187 + hcd->memory_layout = &isp176x_memory_conf; 188 + 189 + ret = isp1760_init_core(isp); 190 + if (ret < 0) 191 + return ret; 513 192 514 193 if (IS_ENABLED(CONFIG_USB_ISP1760_HCD) && !usb_disabled()) { 515 - ret = isp1760_hcd_register(&isp->hcd, isp->regs, mem, irq, 194 + ret = isp1760_hcd_register(hcd, mem, irq, 516 195 irqflags | IRQF_SHARED, dev); 517 196 if (ret < 0) 518 197 return ret; 519 198 } 520 199 521 - if (IS_ENABLED(CONFIG_USB_ISP1761_UDC) && !udc_disabled) { 200 + if (IS_ENABLED(CONFIG_USB_ISP1761_UDC) && udc_enabled) { 522 201 ret = isp1760_udc_register(isp, irq, irqflags); 523 202 if (ret < 0) { 524 - isp1760_hcd_unregister(&isp->hcd); 203 + isp1760_hcd_unregister(hcd); 525 204 return ret; 526 205 } 527 206 }
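The hunks above replace open-coded read/modify/write of registers such as HC_HW_MODE_CTRL with per-bit `REG_FIELD()` descriptors, so `isp1760_init_core()` can flip HW_ANA_DIGI_OC, HW_DACK_POL_HIGH and friends independently. As a rough userspace sketch of the arithmetic behind `REG_FIELD(reg, lsb, msb)` and `regmap_field_write()` (the struct and function names here are invented for illustration; the real implementation lives in drivers/base/regmap/):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative userspace model of a regmap field: a register offset is
 * implied, and the field itself is just an lsb..msb bit range.
 */
struct field_sketch {
	unsigned int lsb;	/* lowest bit of the field */
	unsigned int msb;	/* highest bit of the field */
};

static uint32_t field_mask(struct field_sketch f)
{
	/* equivalent of the kernel's GENMASK(msb, lsb) */
	return (uint32_t)((((uint64_t)1 << (f.msb - f.lsb + 1)) - 1) << f.lsb);
}

/* masked read-modify-write, as regmap_field_write() performs on the bus */
static uint32_t field_update(uint32_t regval, struct field_sketch f,
			     uint32_t val)
{
	uint32_t mask = field_mask(f);

	return (regval & ~mask) | (((uint32_t)val << f.lsb) & mask);
}

/* shift-and-mask extraction, as regmap_field_read() returns */
static uint32_t field_extract(uint32_t regval, struct field_sketch f)
{
	return (regval & field_mask(f)) >> f.lsb;
}
```

With HW_DATA_BUS_WIDTH declared at bit 8 of HC_HW_MODE_CTRL, an update of that field touches only bit 8 and leaves the rest of the register alone, which is the property the `isp1760_field_set()`/`isp1760_field_clear()` calls in the converted init path rely on.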
+38 -6
drivers/usb/isp1760/isp1760-core.h
··· 2 2 /* 3 3 * Driver for the NXP ISP1760 chip 4 4 * 5 + * Copyright 2021 Linaro, Rui Miguel Silva 5 6 * Copyright 2014 Laurent Pinchart 6 7 * Copyright 2007 Sebastian Siewior 7 8 * 8 9 * Contacts: 9 10 * Sebastian Siewior <bigeasy@linutronix.de> 10 11 * Laurent Pinchart <laurent.pinchart@ideasonboard.com> 12 + * Rui Miguel Silva <rui.silva@linaro.org> 11 13 */ 12 14 13 15 #ifndef _ISP1760_CORE_H_ 14 16 #define _ISP1760_CORE_H_ 15 17 16 18 #include <linux/ioport.h> 19 + #include <linux/regmap.h> 17 20 18 21 #include "isp1760-hcd.h" 19 22 #include "isp1760-udc.h" ··· 30 27 * a sane default configuration. 31 28 */ 32 29 #define ISP1760_FLAG_BUS_WIDTH_16 0x00000002 /* 16-bit data bus width */ 33 - #define ISP1760_FLAG_OTG_EN 0x00000004 /* Port 1 supports OTG */ 30 + #define ISP1760_FLAG_PERIPHERAL_EN 0x00000004 /* Port 1 supports Peripheral mode*/ 34 31 #define ISP1760_FLAG_ANALOG_OC 0x00000008 /* Analog overcurrent */ 35 32 #define ISP1760_FLAG_DACK_POL_HIGH 0x00000010 /* DACK active high */ 36 33 #define ISP1760_FLAG_DREQ_POL_HIGH 0x00000020 /* DREQ active high */ 37 34 #define ISP1760_FLAG_ISP1761 0x00000040 /* Chip is ISP1761 */ 38 35 #define ISP1760_FLAG_INTR_POL_HIGH 0x00000080 /* Interrupt polarity active high */ 39 36 #define ISP1760_FLAG_INTR_EDGE_TRIG 0x00000100 /* Interrupt edge triggered */ 37 + #define ISP1760_FLAG_ISP1763 0x00000200 /* Chip is ISP1763 */ 38 + #define ISP1760_FLAG_BUS_WIDTH_8 0x00000400 /* 8-bit data bus width */ 40 39 41 40 struct isp1760_device { 42 41 struct device *dev; 43 42 44 - void __iomem *regs; 45 43 unsigned int devflags; 46 44 struct gpio_desc *rst_gpio; 47 45 ··· 56 52 57 53 void isp1760_set_pullup(struct isp1760_device *isp, bool enable); 58 54 59 - static inline u32 isp1760_read32(void __iomem *base, u32 reg) 55 + static inline u32 isp1760_field_read(struct regmap_field **fields, u32 field) 60 56 { 61 - return readl(base + reg); 57 + unsigned int val; 58 + 59 + regmap_field_read(fields[field], &val); 60 + 61 + return val; 
62 62 } 63 63 64 - static inline void isp1760_write32(void __iomem *base, u32 reg, u32 val) 64 + static inline void isp1760_field_write(struct regmap_field **fields, u32 field, 65 + u32 val) 65 66 { 66 - writel(val, base + reg); 67 + regmap_field_write(fields[field], val); 67 68 } 68 69 70 + static inline void isp1760_field_set(struct regmap_field **fields, u32 field) 71 + { 72 + isp1760_field_write(fields, field, 0xFFFFFFFF); 73 + } 74 + 75 + static inline void isp1760_field_clear(struct regmap_field **fields, u32 field) 76 + { 77 + isp1760_field_write(fields, field, 0); 78 + } 79 + 80 + static inline u32 isp1760_reg_read(struct regmap *regs, u32 reg) 81 + { 82 + unsigned int val; 83 + 84 + regmap_read(regs, reg, &val); 85 + 86 + return val; 87 + } 88 + 89 + static inline void isp1760_reg_write(struct regmap *regs, u32 reg, u32 val) 90 + { 91 + regmap_write(regs, reg, val); 92 + } 69 93 #endif
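The new `isp1760_field_set()` helper above writes `0xFFFFFFFF` through the field and `isp1760_field_clear()` writes 0. That works because regmap masks the value to the field's width and position before it reaches the bus, so all-ones sets every bit of that field, whatever its width, without disturbing neighbouring bits. A minimal sketch of that property (invented function name, not the kernel code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Why writing ~0 through a field acts as "set" and 0 as "clear": the
 * value is shifted to the field's position and clipped by the field
 * mask, so only the field's own bits can change.
 */
static uint32_t masked_write(uint32_t regval, unsigned int lsb,
			     unsigned int msb, uint32_t val)
{
	uint32_t mask = (uint32_t)((((uint64_t)1 << (msb - lsb + 1)) - 1) << lsb);

	return (regval & ~mask) | ((val << lsb) & mask);
}
```

For a two-bit field at bits 10..11, `masked_write(reg, 10, 11, 0xFFFFFFFF)` sets exactly bits 10 and 11; with the OTG control SET/CLEAR register pair used here, clearing instead goes through the matching `*_CLEAR` field, which is why `isp1760_set_pullup()` sets either HW_DP_PULLUP or HW_DP_PULLUP_CLEAR.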
+704 -318
drivers/usb/isp1760/isp1760-hcd.c
··· 11 11 * 12 12 * (c) 2011 Arvid Brodin <arvid.brodin@enea.com> 13 13 * 14 + * Copyright 2021 Linaro, Rui Miguel Silva <rui.silva@linaro.org> 15 + * 14 16 */ 15 17 #include <linux/gpio/consumer.h> 16 18 #include <linux/module.h> ··· 46 44 return *(struct isp1760_hcd **)hcd->hcd_priv; 47 45 } 48 46 47 + #define dw_to_le32(x) (cpu_to_le32((__force u32)x)) 48 + #define le32_to_dw(x) ((__force __dw)(le32_to_cpu(x))) 49 + 49 50 /* urb state*/ 50 51 #define DELETE_URB (0x0008) 51 52 #define NO_TRANSFER_ACTIVE (0xffffffff) ··· 65 60 __dw dw6; 66 61 __dw dw7; 67 62 }; 63 + 64 + struct ptd_le32 { 65 + __le32 dw0; 66 + __le32 dw1; 67 + __le32 dw2; 68 + __le32 dw3; 69 + __le32 dw4; 70 + __le32 dw5; 71 + __le32 dw6; 72 + __le32 dw7; 73 + }; 74 + 68 75 #define PTD_OFFSET 0x0400 69 76 #define ISO_PTD_OFFSET 0x0400 70 77 #define INT_PTD_OFFSET 0x0800 71 78 #define ATL_PTD_OFFSET 0x0c00 72 79 #define PAYLOAD_OFFSET 0x1000 73 80 81 + #define ISP_BANK_0 0x00 82 + #define ISP_BANK_1 0x01 83 + #define ISP_BANK_2 0x02 84 + #define ISP_BANK_3 0x03 74 85 75 - /* ATL */ 76 - /* DW0 */ 77 - #define DW0_VALID_BIT 1 78 - #define FROM_DW0_VALID(x) ((x) & 0x01) 79 - #define TO_DW0_LENGTH(x) (((u32) x) << 3) 80 - #define TO_DW0_MAXPACKET(x) (((u32) x) << 18) 81 - #define TO_DW0_MULTI(x) (((u32) x) << 29) 82 - #define TO_DW0_ENDPOINT(x) (((u32) x) << 31) 86 + #define TO_DW(x) ((__force __dw)x) 87 + #define TO_U32(x) ((__force u32)x) 88 + 89 + /* ATL */ 90 + /* DW0 */ 91 + #define DW0_VALID_BIT TO_DW(1) 92 + #define FROM_DW0_VALID(x) (TO_U32(x) & 0x01) 93 + #define TO_DW0_LENGTH(x) TO_DW((((u32)x) << 3)) 94 + #define TO_DW0_MAXPACKET(x) TO_DW((((u32)x) << 18)) 95 + #define TO_DW0_MULTI(x) TO_DW((((u32)x) << 29)) 96 + #define TO_DW0_ENDPOINT(x) TO_DW((((u32)x) << 31)) 83 97 /* DW1 */ 84 - #define TO_DW1_DEVICE_ADDR(x) (((u32) x) << 3) 85 - #define TO_DW1_PID_TOKEN(x) (((u32) x) << 10) 86 - #define DW1_TRANS_BULK ((u32) 2 << 12) 87 - #define DW1_TRANS_INT ((u32) 3 << 12) 88 - #define 
DW1_TRANS_SPLIT ((u32) 1 << 14) 89 - #define DW1_SE_USB_LOSPEED ((u32) 2 << 16) 90 - #define TO_DW1_PORT_NUM(x) (((u32) x) << 18) 91 - #define TO_DW1_HUB_NUM(x) (((u32) x) << 25) 98 + #define TO_DW1_DEVICE_ADDR(x) TO_DW((((u32)x) << 3)) 99 + #define TO_DW1_PID_TOKEN(x) TO_DW((((u32)x) << 10)) 100 + #define DW1_TRANS_BULK TO_DW(((u32)2 << 12)) 101 + #define DW1_TRANS_INT TO_DW(((u32)3 << 12)) 102 + #define DW1_TRANS_SPLIT TO_DW(((u32)1 << 14)) 103 + #define DW1_SE_USB_LOSPEED TO_DW(((u32)2 << 16)) 104 + #define TO_DW1_PORT_NUM(x) TO_DW((((u32)x) << 18)) 105 + #define TO_DW1_HUB_NUM(x) TO_DW((((u32)x) << 25)) 92 106 /* DW2 */ 93 - #define TO_DW2_DATA_START_ADDR(x) (((u32) x) << 8) 94 - #define TO_DW2_RL(x) ((x) << 25) 95 - #define FROM_DW2_RL(x) (((x) >> 25) & 0xf) 107 + #define TO_DW2_DATA_START_ADDR(x) TO_DW((((u32)x) << 8)) 108 + #define TO_DW2_RL(x) TO_DW(((x) << 25)) 109 + #define FROM_DW2_RL(x) ((TO_U32(x) >> 25) & 0xf) 96 110 /* DW3 */ 97 - #define FROM_DW3_NRBYTESTRANSFERRED(x) ((x) & 0x7fff) 98 - #define FROM_DW3_SCS_NRBYTESTRANSFERRED(x) ((x) & 0x07ff) 99 - #define TO_DW3_NAKCOUNT(x) ((x) << 19) 100 - #define FROM_DW3_NAKCOUNT(x) (((x) >> 19) & 0xf) 101 - #define TO_DW3_CERR(x) ((x) << 23) 102 - #define FROM_DW3_CERR(x) (((x) >> 23) & 0x3) 103 - #define TO_DW3_DATA_TOGGLE(x) ((x) << 25) 104 - #define FROM_DW3_DATA_TOGGLE(x) (((x) >> 25) & 0x1) 105 - #define TO_DW3_PING(x) ((x) << 26) 106 - #define FROM_DW3_PING(x) (((x) >> 26) & 0x1) 107 - #define DW3_ERROR_BIT (1 << 28) 108 - #define DW3_BABBLE_BIT (1 << 29) 109 - #define DW3_HALT_BIT (1 << 30) 110 - #define DW3_ACTIVE_BIT (1 << 31) 111 - #define FROM_DW3_ACTIVE(x) (((x) >> 31) & 0x01) 111 + #define FROM_DW3_NRBYTESTRANSFERRED(x) TO_U32((x) & 0x3fff) 112 + #define FROM_DW3_SCS_NRBYTESTRANSFERRED(x) TO_U32((x) & 0x07ff) 113 + #define TO_DW3_NAKCOUNT(x) TO_DW(((x) << 19)) 114 + #define FROM_DW3_NAKCOUNT(x) ((TO_U32(x) >> 19) & 0xf) 115 + #define TO_DW3_CERR(x) TO_DW(((x) << 23)) 116 + #define 
FROM_DW3_CERR(x) ((TO_U32(x) >> 23) & 0x3) 117 + #define TO_DW3_DATA_TOGGLE(x) TO_DW(((x) << 25)) 118 + #define FROM_DW3_DATA_TOGGLE(x) ((TO_U32(x) >> 25) & 0x1) 119 + #define TO_DW3_PING(x) TO_DW(((x) << 26)) 120 + #define FROM_DW3_PING(x) ((TO_U32(x) >> 26) & 0x1) 121 + #define DW3_ERROR_BIT TO_DW((1 << 28)) 122 + #define DW3_BABBLE_BIT TO_DW((1 << 29)) 123 + #define DW3_HALT_BIT TO_DW((1 << 30)) 124 + #define DW3_ACTIVE_BIT TO_DW((1 << 31)) 125 + #define FROM_DW3_ACTIVE(x) ((TO_U32(x) >> 31) & 0x01) 112 126 113 127 #define INT_UNDERRUN (1 << 2) 114 128 #define INT_BABBLE (1 << 1) ··· 140 116 /* Errata 1 */ 141 117 #define RL_COUNTER (0) 142 118 #define NAK_COUNTER (0) 143 - #define ERR_COUNTER (2) 119 + #define ERR_COUNTER (3) 144 120 145 121 struct isp1760_qtd { 146 122 u8 packet_type; ··· 182 158 struct urb *urb; 183 159 }; 184 160 161 + static const u32 isp1763_hc_portsc1_fields[] = { 162 + [PORT_OWNER] = BIT(13), 163 + [PORT_POWER] = BIT(12), 164 + [PORT_LSTATUS] = BIT(10), 165 + [PORT_RESET] = BIT(8), 166 + [PORT_SUSPEND] = BIT(7), 167 + [PORT_RESUME] = BIT(6), 168 + [PORT_PE] = BIT(2), 169 + [PORT_CSC] = BIT(1), 170 + [PORT_CONNECT] = BIT(0), 171 + }; 172 + 185 173 /* 186 - * Access functions for isp176x registers (addresses 0..0x03FF). 174 + * Access functions for isp176x registers regmap fields 187 175 */ 188 - static u32 reg_read32(void __iomem *base, u32 reg) 176 + static u32 isp1760_hcd_read(struct usb_hcd *hcd, u32 field) 189 177 { 190 - return isp1760_read32(base, reg); 178 + struct isp1760_hcd *priv = hcd_to_priv(hcd); 179 + 180 + return isp1760_field_read(priv->fields, field); 191 181 } 192 182 193 - static void reg_write32(void __iomem *base, u32 reg, u32 val) 183 + /* 184 + * We need, in isp1763, to write directly the values to the portsc1 185 + * register so it will make the other values to trigger. 
+ */
+static void isp1760_hcd_portsc1_set_clear(struct isp1760_hcd *priv, u32 field,
+					  u32 val)
 {
-	isp1760_write32(base, reg, val);
+	u32 bit = isp1763_hc_portsc1_fields[field];
+	u32 port_status = readl(priv->base + ISP1763_HC_PORTSC1);
+
+	if (val)
+		writel(port_status | bit, priv->base + ISP1763_HC_PORTSC1);
+	else
+		writel(port_status & ~bit, priv->base + ISP1763_HC_PORTSC1);
+}
+
+static void isp1760_hcd_write(struct usb_hcd *hcd, u32 field, u32 val)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+
+	if (unlikely(priv->is_isp1763 &&
+		     (field >= PORT_OWNER && field <= PORT_CONNECT)))
+		return isp1760_hcd_portsc1_set_clear(priv, field, val);
+
+	isp1760_field_write(priv->fields, field, val);
+}
+
+static void isp1760_hcd_set(struct usb_hcd *hcd, u32 field)
+{
+	isp1760_hcd_write(hcd, field, 0xFFFFFFFF);
+}
+
+static void isp1760_hcd_clear(struct usb_hcd *hcd, u32 field)
+{
+	isp1760_hcd_write(hcd, field, 0);
+}
+
+static int isp1760_hcd_set_and_wait(struct usb_hcd *hcd, u32 field,
+				    u32 timeout_us)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+	u32 val;
+
+	isp1760_hcd_set(hcd, field);
+
+	return regmap_field_read_poll_timeout(priv->fields[field], val,
+					      val, 10, timeout_us);
+}
+
+static int isp1760_hcd_set_and_wait_swap(struct usb_hcd *hcd, u32 field,
+					 u32 timeout_us)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+	u32 val;
+
+	isp1760_hcd_set(hcd, field);
+
+	return regmap_field_read_poll_timeout(priv->fields[field], val,
+					      !val, 10, timeout_us);
+}
+
+static int isp1760_hcd_clear_and_wait(struct usb_hcd *hcd, u32 field,
+				      u32 timeout_us)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+	u32 val;
+
+	isp1760_hcd_clear(hcd, field);
+
+	return regmap_field_read_poll_timeout(priv->fields[field], val,
+					      !val, 10, timeout_us);
+}
+
+static bool isp1760_hcd_is_set(struct usb_hcd *hcd, u32 field)
+{
+	return !!isp1760_hcd_read(hcd, field);
+}
+
+static bool isp1760_hcd_ppc_is_set(struct usb_hcd *hcd)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+
+	if (priv->is_isp1763)
+		return true;
+
+	return isp1760_hcd_is_set(hcd, HCS_PPC);
+}
+
+static u32 isp1760_hcd_n_ports(struct usb_hcd *hcd)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+
+	if (priv->is_isp1763)
+		return 1;
+
+	return isp1760_hcd_read(hcd, HCS_N_PORTS);
 }
 
 /*
···
  *
  * bank_reads8() reads memory locations prefetched by an earlier write to
  * HC_MEMORY_REG (see isp176x datasheet). Unless you want to do fancy multi-
- * bank optimizations, you should use the more generic mem_reads8() below.
+ * bank optimizations, you should use the more generic mem_read() below.
  *
  * For access to ptd memory, use the specialized ptd_read() and ptd_write()
  * below.
···
 
 	if (src_offset < PAYLOAD_OFFSET) {
 		while (bytes >= 4) {
-			*dst = le32_to_cpu(__raw_readl(src));
+			*dst = readl_relaxed(src);
 			bytes -= 4;
 			src++;
 			dst++;
···
 	 * allocated.
 	 */
 	if (src_offset < PAYLOAD_OFFSET)
-		val = le32_to_cpu(__raw_readl(src));
+		val = readl_relaxed(src);
 	else
 		val = __raw_readl(src);
···
 	}
 }
 
-static void mem_reads8(void __iomem *src_base, u32 src_offset, void *dst,
-		       u32 bytes)
+static void isp1760_mem_read(struct usb_hcd *hcd, u32 src_offset, void *dst,
+			     u32 bytes)
 {
-	reg_write32(src_base, HC_MEMORY_REG, src_offset + ISP_BANK(0));
-	ndelay(90);
-	bank_reads8(src_base, src_offset, ISP_BANK(0), dst, bytes);
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+
+	isp1760_hcd_write(hcd, MEM_BANK_SEL, ISP_BANK_0);
+	isp1760_hcd_write(hcd, MEM_START_ADDR, src_offset);
+	ndelay(100);
+
+	bank_reads8(priv->base, src_offset, ISP_BANK_0, dst, bytes);
 }
 
-static void mem_writes8(void __iomem *dst_base, u32 dst_offset,
-			__u32 const *src, u32 bytes)
+/*
+ * ISP1763 does not have the banks direct host controller memory access,
+ * needs to use the HC_DATA register. Add data read/write according to this,
+ * and also adjust 16bit access.
+ */
+static void isp1763_mem_read(struct usb_hcd *hcd, u16 srcaddr,
+			     u16 *dstptr, u32 bytes)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+
+	/* Write the starting device address to the hcd memory register */
+	isp1760_reg_write(priv->regs, ISP1763_HC_MEMORY, srcaddr);
+	ndelay(100); /* Delay between consecutive access */
+
+	/* As long there are at least 16-bit to read ... */
+	while (bytes >= 2) {
+		*dstptr = __raw_readw(priv->base + ISP1763_HC_DATA);
+		bytes -= 2;
+		dstptr++;
+	}
+
+	/* If there are no more bytes to read, return */
+	if (bytes <= 0)
+		return;
+
+	*((u8 *)dstptr) = (u8)(readw(priv->base + ISP1763_HC_DATA) & 0xFF);
+}
+
+static void mem_read(struct usb_hcd *hcd, u32 src_offset, __u32 *dst,
+		     u32 bytes)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+
+	if (!priv->is_isp1763)
+		return isp1760_mem_read(hcd, src_offset, (u16 *)dst, bytes);
+
+	isp1763_mem_read(hcd, (u16)src_offset, (u16 *)dst, bytes);
+}
+
+static void isp1760_mem_write(void __iomem *dst_base, u32 dst_offset,
+			      __u32 const *src, u32 bytes)
 {
 	__u32 __iomem *dst;
 
···
 	if (dst_offset < PAYLOAD_OFFSET) {
 		while (bytes >= 4) {
-			__raw_writel(cpu_to_le32(*src), dst);
+			writel_relaxed(*src, dst);
 			bytes -= 4;
 			src++;
 			dst++;
···
 	 */
 
 	if (dst_offset < PAYLOAD_OFFSET)
-		__raw_writel(cpu_to_le32(*src), dst);
+		writel_relaxed(*src, dst);
 	else
 		__raw_writel(*src, dst);
+}
+
+static void isp1763_mem_write(struct usb_hcd *hcd, u16 dstaddr, u16 *src,
+			      u32 bytes)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+
+	/* Write the starting device address to the hcd memory register */
+	isp1760_reg_write(priv->regs, ISP1763_HC_MEMORY, dstaddr);
+	ndelay(100); /* Delay between consecutive access */
+
+	while (bytes >= 2) {
+		/* Get and write the data; then adjust the data ptr and len */
+		__raw_writew(*src, priv->base + ISP1763_HC_DATA);
+		bytes -= 2;
+		src++;
+	}
+
+	/* If there are no more bytes to process, return */
+	if (bytes <= 0)
+		return;
+
+	/*
+	 * The only way to get here is if there is a single byte left,
+	 * get it and write it to the data reg;
+	 */
+	writew(*((u8 *)src), priv->base + ISP1763_HC_DATA);
+}
+
+static void mem_write(struct usb_hcd *hcd, u32 dst_offset, __u32 *src,
+		      u32 bytes)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+
+	if (!priv->is_isp1763)
+		return isp1760_mem_write(priv->base, dst_offset, src, bytes);
+
+	isp1763_mem_write(hcd, dst_offset, (u16 *)src, bytes);
 }
 
 /*
  * Read and write ptds. 'ptd_offset' should be one of ISO_PTD_OFFSET,
  * INT_PTD_OFFSET, and ATL_PTD_OFFSET. 'slot' should be less than 32.
  */
-static void ptd_read(void __iomem *base, u32 ptd_offset, u32 slot,
-		     struct ptd *ptd)
+static void isp1760_ptd_read(struct usb_hcd *hcd, u32 ptd_offset, u32 slot,
+			     struct ptd *ptd)
 {
-	reg_write32(base, HC_MEMORY_REG,
-		    ISP_BANK(0) + ptd_offset + slot*sizeof(*ptd));
+	u16 src_offset = ptd_offset + slot * sizeof(*ptd);
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+
+	isp1760_hcd_write(hcd, MEM_BANK_SEL, ISP_BANK_0);
+	isp1760_hcd_write(hcd, MEM_START_ADDR, src_offset);
 	ndelay(90);
-	bank_reads8(base, ptd_offset + slot*sizeof(*ptd), ISP_BANK(0),
-		    (void *) ptd, sizeof(*ptd));
+
+	bank_reads8(priv->base, src_offset, ISP_BANK_0, (void *)ptd,
+		    sizeof(*ptd));
 }
 
-static void ptd_write(void __iomem *base, u32 ptd_offset, u32 slot,
-		      struct ptd *ptd)
+static void isp1763_ptd_read(struct usb_hcd *hcd, u32 ptd_offset, u32 slot,
+			     struct ptd *ptd)
 {
-	mem_writes8(base, ptd_offset + slot*sizeof(*ptd) + sizeof(ptd->dw0),
-		    &ptd->dw1, 7*sizeof(ptd->dw1));
-	/* Make sure dw0 gets written last (after other dw's and after payload)
-	   since it contains the enable bit */
-	wmb();
-	mem_writes8(base, ptd_offset + slot*sizeof(*ptd), &ptd->dw0,
-		    sizeof(ptd->dw0));
+	u16 src_offset = ptd_offset + slot * sizeof(*ptd);
+	struct ptd_le32 le32_ptd;
+
+	isp1763_mem_read(hcd, src_offset, (u16 *)&le32_ptd, sizeof(le32_ptd));
+	/* Normalize the data obtained */
+	ptd->dw0 = le32_to_dw(le32_ptd.dw0);
+	ptd->dw1 = le32_to_dw(le32_ptd.dw1);
+	ptd->dw2 = le32_to_dw(le32_ptd.dw2);
+	ptd->dw3 = le32_to_dw(le32_ptd.dw3);
+	ptd->dw4 = le32_to_dw(le32_ptd.dw4);
+	ptd->dw5 = le32_to_dw(le32_ptd.dw5);
+	ptd->dw6 = le32_to_dw(le32_ptd.dw6);
+	ptd->dw7 = le32_to_dw(le32_ptd.dw7);
 }
 
+static void ptd_read(struct usb_hcd *hcd, u32 ptd_offset, u32 slot,
+		     struct ptd *ptd)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+
+	if (!priv->is_isp1763)
+		return isp1760_ptd_read(hcd, ptd_offset, slot, ptd);
+
+	isp1763_ptd_read(hcd, ptd_offset, slot, ptd);
+}
+
+static void isp1763_ptd_write(struct usb_hcd *hcd, u32 ptd_offset, u32 slot,
+			      struct ptd *cpu_ptd)
+{
+	u16 dst_offset = ptd_offset + slot * sizeof(*cpu_ptd);
+	struct ptd_le32 ptd;
+
+	ptd.dw0 = dw_to_le32(cpu_ptd->dw0);
+	ptd.dw1 = dw_to_le32(cpu_ptd->dw1);
+	ptd.dw2 = dw_to_le32(cpu_ptd->dw2);
+	ptd.dw3 = dw_to_le32(cpu_ptd->dw3);
+	ptd.dw4 = dw_to_le32(cpu_ptd->dw4);
+	ptd.dw5 = dw_to_le32(cpu_ptd->dw5);
+	ptd.dw6 = dw_to_le32(cpu_ptd->dw6);
+	ptd.dw7 = dw_to_le32(cpu_ptd->dw7);
+
+	isp1763_mem_write(hcd, dst_offset, (u16 *)&ptd.dw0,
+			  8 * sizeof(ptd.dw0));
+}
+
+static void isp1760_ptd_write(void __iomem *base, u32 ptd_offset, u32 slot,
+			      struct ptd *ptd)
+{
+	u32 dst_offset = ptd_offset + slot * sizeof(*ptd);
+
+	/*
+	 * Make sure dw0 gets written last (after other dw's and after payload)
+	 * since it contains the enable bit
+	 */
+	isp1760_mem_write(base, dst_offset + sizeof(ptd->dw0),
+			  (__force u32 *)&ptd->dw1, 7 * sizeof(ptd->dw1));
+	wmb();
+	isp1760_mem_write(base, dst_offset, (__force u32 *)&ptd->dw0,
+			  sizeof(ptd->dw0));
+}
+
+static void ptd_write(struct usb_hcd *hcd, u32 ptd_offset, u32 slot,
+		      struct ptd *ptd)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+
+	if (!priv->is_isp1763)
+		return isp1760_ptd_write(priv->base, ptd_offset, slot, ptd);
+
+	isp1763_ptd_write(hcd, ptd_offset, slot, ptd);
+}
 
 /* memory management of the 60kb on the chip from 0x1000 to 0xffff */
 static void init_memory(struct isp1760_hcd *priv)
 {
-	int i, curr;
+	const struct isp1760_memory_layout *mem = priv->memory_layout;
+	int i, j, curr;
 	u32 payload_addr;
 
 	payload_addr = PAYLOAD_OFFSET;
-	for (i = 0; i < BLOCK_1_NUM; i++) {
-		priv->memory_pool[i].start = payload_addr;
-		priv->memory_pool[i].size = BLOCK_1_SIZE;
-		priv->memory_pool[i].free = 1;
-		payload_addr += priv->memory_pool[i].size;
+
+	for (i = 0, curr = 0; i < ARRAY_SIZE(mem->blocks); i++) {
+		for (j = 0; j < mem->blocks[i]; j++, curr++) {
+			priv->memory_pool[curr + j].start = payload_addr;
+			priv->memory_pool[curr + j].size = mem->blocks_size[i];
+			priv->memory_pool[curr + j].free = 1;
+			payload_addr += priv->memory_pool[curr + j].size;
+		}
 	}
 
-	curr = i;
-	for (i = 0; i < BLOCK_2_NUM; i++) {
-		priv->memory_pool[curr + i].start = payload_addr;
-		priv->memory_pool[curr + i].size = BLOCK_2_SIZE;
-		priv->memory_pool[curr + i].free = 1;
-		payload_addr += priv->memory_pool[curr + i].size;
-	}
-
-	curr = i;
-	for (i = 0; i < BLOCK_3_NUM; i++) {
-		priv->memory_pool[curr + i].start = payload_addr;
-		priv->memory_pool[curr + i].size = BLOCK_3_SIZE;
-		priv->memory_pool[curr + i].free = 1;
-		payload_addr += priv->memory_pool[curr + i].size;
-	}
-
-	WARN_ON(payload_addr - priv->memory_pool[0].start > PAYLOAD_AREA_SIZE);
+	WARN_ON(payload_addr - priv->memory_pool[0].start >
+		mem->payload_area_size);
 }
 
 static void alloc_mem(struct usb_hcd *hcd, struct isp1760_qtd *qtd)
 {
 	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+	const struct isp1760_memory_layout *mem = priv->memory_layout;
 	int i;
 
 	WARN_ON(qtd->payload_addr);
···
 	if (!qtd->length)
 		return;
 
-	for (i = 0; i < BLOCKS; i++) {
+	for (i = 0; i < mem->payload_blocks; i++) {
 		if (priv->memory_pool[i].size >= qtd->length &&
 		    priv->memory_pool[i].free) {
 			priv->memory_pool[i].free = 0;
···
 static void free_mem(struct usb_hcd *hcd, struct isp1760_qtd *qtd)
 {
 	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+	const struct isp1760_memory_layout *mem = priv->memory_layout;
 	int i;
 
 	if (!qtd->payload_addr)
 		return;
 
-	for (i = 0; i < BLOCKS; i++) {
+	for (i = 0; i < mem->payload_blocks; i++) {
 		if (priv->memory_pool[i].start == qtd->payload_addr) {
 			WARN_ON(priv->memory_pool[i].free);
 			priv->memory_pool[i].free = 1;
···
 	qtd->payload_addr = 0;
 }
 
-static int handshake(struct usb_hcd *hcd, u32 reg,
-		     u32 mask, u32 done, int usec)
-{
-	u32 result;
-	int ret;
-
-	ret = readl_poll_timeout_atomic(hcd->regs + reg, result,
-					((result & mask) == done ||
-					 result == U32_MAX), 1, usec);
-	if (result == U32_MAX)
-		return -ENODEV;
-
-	return ret;
-}
-
 /* reset a non-running (STS_HALT == 1) controller */
 static int ehci_reset(struct usb_hcd *hcd)
 {
 	struct isp1760_hcd *priv = hcd_to_priv(hcd);
 
-	u32 command = reg_read32(hcd->regs, HC_USBCMD);
-
-	command |= CMD_RESET;
-	reg_write32(hcd->regs, HC_USBCMD, command);
 	hcd->state = HC_STATE_HALT;
 	priv->next_statechange = jiffies;
 
-	return handshake(hcd, HC_USBCMD, CMD_RESET, 0, 250 * 1000);
+	return isp1760_hcd_set_and_wait_swap(hcd, CMD_RESET, 250 * 1000);
 }
 
 static struct isp1760_qh *qh_alloc(gfp_t flags)
···
 /* one-time init, only for memory state */
 static int priv_init(struct usb_hcd *hcd)
 {
-	struct isp1760_hcd *priv = hcd_to_priv(hcd);
-	u32 hcc_params;
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+	u32 isoc_cache;
+	u32 isoc_thres;
 	int i;
 
 	spin_lock_init(&priv->lock);
···
 	 */
 	priv->periodic_size = DEFAULT_I_TDPS;
 
+	if (priv->is_isp1763) {
+		priv->i_thresh = 2;
+		return 0;
+	}
+
 	/* controllers may cache some of the periodic schedule ... */
-	hcc_params = reg_read32(hcd->regs, HC_HCCPARAMS);
+	isoc_cache = isp1760_hcd_read(hcd, HCC_ISOC_CACHE);
+	isoc_thres = isp1760_hcd_read(hcd, HCC_ISOC_THRES);
+
 	/* full frame cache */
-	if (HCC_ISOC_CACHE(hcc_params))
+	if (isoc_cache)
 		priv->i_thresh = 8;
 	else /* N microframes cached */
-		priv->i_thresh = 2 + HCC_ISOC_THRES(hcc_params);
+		priv->i_thresh = 2 + isoc_thres;
 
 	return 0;
 }
···
 static int isp1760_hc_setup(struct usb_hcd *hcd)
 {
 	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+	u32 atx_reset;
 	int result;
-	u32 scratch, hwmode;
+	u32 scratch;
+	u32 pattern;
 
-	reg_write32(hcd->regs, HC_SCRATCH_REG, 0xdeadbabe);
+	if (priv->is_isp1763)
+		pattern = 0xcafe;
+	else
+		pattern = 0xdeadcafe;
+
+	isp1760_hcd_write(hcd, HC_SCRATCH, pattern);
+
 	/* Change bus pattern */
-	scratch = reg_read32(hcd->regs, HC_CHIP_ID_REG);
-	scratch = reg_read32(hcd->regs, HC_SCRATCH_REG);
-	if (scratch != 0xdeadbabe) {
-		dev_err(hcd->self.controller, "Scratch test failed.\n");
+	scratch = isp1760_hcd_read(hcd, HC_CHIP_ID_HIGH);
+	dev_err(hcd->self.controller, "Scratch test 0x%08x\n", scratch);
+	scratch = isp1760_hcd_read(hcd, HC_SCRATCH);
+	if (scratch != pattern) {
+		dev_err(hcd->self.controller, "Scratch test failed. 0x%08x\n", scratch);
 		return -ENODEV;
 	}
 
···
 	 * the host controller through the EHCI USB Command register. The device
 	 * has been reset in core code anyway, so this shouldn't matter.
 	 */
-	reg_write32(hcd->regs, HC_BUFFER_STATUS_REG, 0);
-	reg_write32(hcd->regs, HC_ATL_PTD_SKIPMAP_REG, NO_TRANSFER_ACTIVE);
-	reg_write32(hcd->regs, HC_INT_PTD_SKIPMAP_REG, NO_TRANSFER_ACTIVE);
-	reg_write32(hcd->regs, HC_ISO_PTD_SKIPMAP_REG, NO_TRANSFER_ACTIVE);
+	isp1760_hcd_clear(hcd, ISO_BUF_FILL);
+	isp1760_hcd_clear(hcd, INT_BUF_FILL);
+	isp1760_hcd_clear(hcd, ATL_BUF_FILL);
+
+	isp1760_hcd_set(hcd, HC_ATL_PTD_SKIPMAP);
+	isp1760_hcd_set(hcd, HC_INT_PTD_SKIPMAP);
+	isp1760_hcd_set(hcd, HC_ISO_PTD_SKIPMAP);
 
 	result = ehci_reset(hcd);
 	if (result)
···
 	/* Step 11 passed */
 
 	/* ATL reset */
-	hwmode = reg_read32(hcd->regs, HC_HW_MODE_CTRL) & ~ALL_ATX_RESET;
-	reg_write32(hcd->regs, HC_HW_MODE_CTRL, hwmode | ALL_ATX_RESET);
+	if (priv->is_isp1763)
+		atx_reset = SW_RESET_RESET_ATX;
+	else
+		atx_reset = ALL_ATX_RESET;
+
+	isp1760_hcd_set(hcd, atx_reset);
 	mdelay(10);
-	reg_write32(hcd->regs, HC_HW_MODE_CTRL, hwmode);
+	isp1760_hcd_clear(hcd, atx_reset);
 
-	reg_write32(hcd->regs, HC_INTERRUPT_ENABLE, INTERRUPT_ENABLE_MASK);
+	if (priv->is_isp1763) {
+		isp1760_hcd_set(hcd, HW_OTG_DISABLE);
+		isp1760_hcd_set(hcd, HW_SW_SEL_HC_DC_CLEAR);
+		isp1760_hcd_set(hcd, HW_HC_2_DIS_CLEAR);
+		mdelay(10);
 
-	priv->hcs_params = reg_read32(hcd->regs, HC_HCSPARAMS);
+		isp1760_hcd_set(hcd, HW_INTF_LOCK);
+	}
+
+	isp1760_hcd_set(hcd, HC_INT_IRQ_ENABLE);
+	isp1760_hcd_set(hcd, HC_ATL_IRQ_ENABLE);
 
 	return priv_init(hcd);
 }
···
 	ptd->dw0 |= TO_DW0_ENDPOINT(usb_pipeendpoint(qtd->urb->pipe));
 
 	/* DW1 */
-	ptd->dw1 = usb_pipeendpoint(qtd->urb->pipe) >> 1;
+	ptd->dw1 = TO_DW((usb_pipeendpoint(qtd->urb->pipe) >> 1));
 	ptd->dw1 |= TO_DW1_DEVICE_ADDR(usb_pipedevice(qtd->urb->pipe));
 	ptd->dw1 |= TO_DW1_PID_TOKEN(qtd->packet_type);
 
···
 	/* SE bit for Split INT transfers */
 	if (usb_pipeint(qtd->urb->pipe) &&
 	    (qtd->urb->dev->speed == USB_SPEED_LOW))
-		ptd->dw1 |= 2 << 16;
+		ptd->dw1 |= DW1_SE_USB_LOSPEED;
 
 	rl = 0;
 	nak = 0;
···
 	 * that number come from? 0xff seems to work fine...
 	 */
 	/* ptd->dw5 = 0x1c; */
-	ptd->dw5 = 0xff; /* Execute Complete Split on any uFrame */
+	ptd->dw5 = TO_DW(0xff); /* Execute Complete Split on any uFrame */
 	}
 
 	period = period >> 1;/* Ensure equal or shorter period than requested */
 	period &= 0xf8; /* Mask off too large values and lowest unused 3 bits */
 
-	ptd->dw2 |= period;
-	ptd->dw4 = usof;
+	ptd->dw2 |= TO_DW(period);
+	ptd->dw4 = TO_DW(usof);
 }
 
 static void create_ptd_int(struct isp1760_qh *qh,
···
 			  struct ptd *ptd)
 {
 	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+	const struct isp1760_memory_layout *mem = priv->memory_layout;
 	int skip_map;
 
-	WARN_ON((slot < 0) || (slot > 31));
+	WARN_ON((slot < 0) || (slot > mem->slot_num - 1));
 	WARN_ON(qtd->length && !qtd->payload_addr);
 	WARN_ON(slots[slot].qtd);
 	WARN_ON(slots[slot].qh);
 	WARN_ON(qtd->status != QTD_PAYLOAD_ALLOC);
 
+	if (priv->is_isp1763)
+		ndelay(100);
+
 	/* Make sure done map has not triggered from some unlinked transfer */
 	if (ptd_offset == ATL_PTD_OFFSET) {
-		priv->atl_done_map |= reg_read32(hcd->regs,
-						 HC_ATL_PTD_DONEMAP_REG);
+		skip_map = isp1760_hcd_read(hcd, HC_ATL_PTD_SKIPMAP);
+		isp1760_hcd_write(hcd, HC_ATL_PTD_SKIPMAP,
+				  skip_map | (1 << slot));
+		priv->atl_done_map |= isp1760_hcd_read(hcd, HC_ATL_PTD_DONEMAP);
 		priv->atl_done_map &= ~(1 << slot);
 	} else {
-		priv->int_done_map |= reg_read32(hcd->regs,
-						 HC_INT_PTD_DONEMAP_REG);
+		skip_map = isp1760_hcd_read(hcd, HC_INT_PTD_SKIPMAP);
+		isp1760_hcd_write(hcd, HC_INT_PTD_SKIPMAP,
+				  skip_map | (1 << slot));
+		priv->int_done_map |= isp1760_hcd_read(hcd, HC_INT_PTD_DONEMAP);
 		priv->int_done_map &= ~(1 << slot);
 	}
 
+	skip_map &= ~(1 << slot);
 	qh->slot = slot;
 	qtd->status = QTD_XFER_STARTED;
 	slots[slot].timestamp = jiffies;
 	slots[slot].qtd = qtd;
 	slots[slot].qh = qh;
-	ptd_write(hcd->regs, ptd_offset, slot, ptd);
+	ptd_write(hcd, ptd_offset, slot, ptd);
 
-	if (ptd_offset == ATL_PTD_OFFSET) {
-		skip_map = reg_read32(hcd->regs, HC_ATL_PTD_SKIPMAP_REG);
-		skip_map &= ~(1 << qh->slot);
-		reg_write32(hcd->regs, HC_ATL_PTD_SKIPMAP_REG, skip_map);
-	} else {
-		skip_map = reg_read32(hcd->regs, HC_INT_PTD_SKIPMAP_REG);
-		skip_map &= ~(1 << qh->slot);
-		reg_write32(hcd->regs, HC_INT_PTD_SKIPMAP_REG, skip_map);
-	}
+	if (ptd_offset == ATL_PTD_OFFSET)
+		isp1760_hcd_write(hcd, HC_ATL_PTD_SKIPMAP, skip_map);
+	else
+		isp1760_hcd_write(hcd, HC_INT_PTD_SKIPMAP, skip_map);
 }
 
 static int is_short_bulk(struct isp1760_qtd *qtd)
···
 static void collect_qtds(struct usb_hcd *hcd, struct isp1760_qh *qh,
 			 struct list_head *urb_list)
 {
-	int last_qtd;
 	struct isp1760_qtd *qtd, *qtd_next;
 	struct urb_listitem *urb_listitem;
+	int last_qtd;
 
 	list_for_each_entry_safe(qtd, qtd_next, &qh->qtd_list, qtd_list) {
 		if (qtd->status < QTD_XFER_COMPLETE)
···
 		if (qtd->actual_length) {
 			switch (qtd->packet_type) {
 			case IN_PID:
-				mem_reads8(hcd->regs, qtd->payload_addr,
-					   qtd->data_buffer,
-					   qtd->actual_length);
+				mem_read(hcd, qtd->payload_addr,
+					 qtd->data_buffer,
+					 qtd->actual_length);
 				fallthrough;
 			case OUT_PID:
 				qtd->urb->actual_length +=
···
 static void enqueue_qtds(struct usb_hcd *hcd, struct isp1760_qh *qh)
 {
 	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+	const struct isp1760_memory_layout *mem = priv->memory_layout;
+	int slot_num = mem->slot_num;
 	int ptd_offset;
 	struct isp1760_slotinfo *slots;
 	int curr_slot, free_slot;
···
 	}
 
 	free_slot = -1;
-	for (curr_slot = 0; curr_slot < 32; curr_slot++) {
+	for (curr_slot = 0; curr_slot < slot_num; curr_slot++) {
 		if ((free_slot == -1) && (slots[curr_slot].qtd == NULL))
 			free_slot = curr_slot;
 		if (slots[curr_slot].qh == qh)
···
 		if ((qtd->length) && (!qtd->payload_addr))
 			break;
 
-		if ((qtd->length) &&
-		    ((qtd->packet_type == SETUP_PID) ||
-		     (qtd->packet_type == OUT_PID))) {
-			mem_writes8(hcd->regs, qtd->payload_addr,
-				    qtd->data_buffer, qtd->length);
+		if (qtd->length && (qtd->packet_type == SETUP_PID ||
+				    qtd->packet_type == OUT_PID)) {
+			mem_write(hcd, qtd->payload_addr,
+				  qtd->data_buffer, qtd->length);
 		}
 
 		qtd->status = QTD_PAYLOAD_ALLOC;
···
 		   "available for transfer\n", __func__);
 	*/
 	/* Start xfer for this endpoint if not already done */
-	if ((curr_slot > 31) && (free_slot > -1)) {
+	if ((curr_slot > slot_num - 1) && (free_slot > -1)) {
 		if (usb_pipeint(qtd->urb->pipe))
 			create_ptd_int(qh, qtd, &ptd);
 		else
···
 static int check_int_transfer(struct usb_hcd *hcd, struct ptd *ptd,
 			      struct urb *urb)
 {
-	__dw dw4;
+	u32 dw4;
 	int i;
 
-	dw4 = ptd->dw4;
+	dw4 = TO_U32(ptd->dw4);
 	dw4 >>= 8;
 
 	/* FIXME: ISP1761 datasheet does not say what to do with these. Do we
···
 	int modified;
 	int skip_map;
 
-	skip_map = reg_read32(hcd->regs, HC_INT_PTD_SKIPMAP_REG);
+	skip_map = isp1760_hcd_read(hcd, HC_INT_PTD_SKIPMAP);
 	priv->int_done_map &= ~skip_map;
-	skip_map = reg_read32(hcd->regs, HC_ATL_PTD_SKIPMAP_REG);
+	skip_map = isp1760_hcd_read(hcd, HC_ATL_PTD_SKIPMAP);
 	priv->atl_done_map &= ~skip_map;
 
 	modified = priv->int_done_map || priv->atl_done_map;
···
 				continue;
 			}
 			ptd_offset = INT_PTD_OFFSET;
-			ptd_read(hcd->regs, INT_PTD_OFFSET, slot, &ptd);
+			ptd_read(hcd, INT_PTD_OFFSET, slot, &ptd);
 			state = check_int_transfer(hcd, &ptd,
 						   slots[slot].qtd->urb);
 		} else {
···
 				continue;
 			}
 			ptd_offset = ATL_PTD_OFFSET;
-			ptd_read(hcd->regs, ATL_PTD_OFFSET, slot, &ptd);
+			ptd_read(hcd, ATL_PTD_OFFSET, slot, &ptd);
 			state = check_atl_transfer(hcd, &ptd,
 						   slots[slot].qtd->urb);
 		}
···
 
 			qtd->status = QTD_XFER_COMPLETE;
 			if (list_is_last(&qtd->qtd_list, &qh->qtd_list) ||
-					is_short_bulk(qtd))
+			    is_short_bulk(qtd))
 				qtd = NULL;
 			else
 				qtd = list_entry(qtd->qtd_list.next,
···
 static irqreturn_t isp1760_irq(struct usb_hcd *hcd)
 {
 	struct isp1760_hcd *priv = hcd_to_priv(hcd);
-	u32 imask;
 	irqreturn_t irqret = IRQ_NONE;
+	u32 int_reg;
+	u32 imask;
 
 	spin_lock(&priv->lock);
 
 	if (!(hcd->state & HC_STATE_RUNNING))
 		goto leave;
 
-	imask = reg_read32(hcd->regs, HC_INTERRUPT_REG);
+	imask = isp1760_hcd_read(hcd, HC_INTERRUPT);
 	if (unlikely(!imask))
 		goto leave;
-	reg_write32(hcd->regs, HC_INTERRUPT_REG, imask); /* Clear */
 
-	priv->int_done_map |= reg_read32(hcd->regs, HC_INT_PTD_DONEMAP_REG);
-	priv->atl_done_map |= reg_read32(hcd->regs, HC_ATL_PTD_DONEMAP_REG);
+	int_reg = priv->is_isp1763 ? ISP1763_HC_INTERRUPT :
+		ISP176x_HC_INTERRUPT;
+	isp1760_reg_write(priv->regs, int_reg, imask);
+
+	priv->int_done_map |= isp1760_hcd_read(hcd, HC_INT_PTD_DONEMAP);
+	priv->atl_done_map |= isp1760_hcd_read(hcd, HC_ATL_PTD_DONEMAP);
 
 	handle_done_ptds(hcd);
 
 	irqret = IRQ_HANDLED;
+
 leave:
 	spin_unlock(&priv->lock);
 
···
 {
 	struct usb_hcd *hcd = errata2_timer_hcd;
 	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+	const struct isp1760_memory_layout *mem = priv->memory_layout;
 	int slot;
 	struct ptd ptd;
 	unsigned long spinflags;
 
 	spin_lock_irqsave(&priv->lock, spinflags);
 
-	for (slot = 0; slot < 32; slot++)
+	for (slot = 0; slot < mem->slot_num; slot++)
 		if (priv->atl_slots[slot].qh && time_after(jiffies,
 				priv->atl_slots[slot].timestamp +
 				msecs_to_jiffies(SLOT_TIMEOUT))) {
-			ptd_read(hcd->regs, ATL_PTD_OFFSET, slot, &ptd);
+			ptd_read(hcd, ATL_PTD_OFFSET, slot, &ptd);
 			if (!FROM_DW0_VALID(ptd.dw0) &&
 			    !FROM_DW3_ACTIVE(ptd.dw3))
 				priv->atl_done_map |= 1 << slot;
···
 	add_timer(&errata2_timer);
 }
 
+static int isp1763_run(struct usb_hcd *hcd)
+{
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
+	int retval;
+	u32 chipid_h;
+	u32 chipid_l;
+	u32 chip_rev;
+	u32 ptd_atl_int;
+	u32 ptd_iso;
+
+	hcd->uses_new_polling = 1;
+	hcd->state = HC_STATE_RUNNING;
+
+	chipid_h = isp1760_hcd_read(hcd, HC_CHIP_ID_HIGH);
+	chipid_l = isp1760_hcd_read(hcd, HC_CHIP_ID_LOW);
+	chip_rev = isp1760_hcd_read(hcd, HC_CHIP_REV);
+	dev_info(hcd->self.controller, "USB ISP %02x%02x HW rev. %d started\n",
+		 chipid_h, chipid_l, chip_rev);
+
+	isp1760_hcd_clear(hcd, ISO_BUF_FILL);
+	isp1760_hcd_clear(hcd, INT_BUF_FILL);
+	isp1760_hcd_clear(hcd, ATL_BUF_FILL);
+
+	isp1760_hcd_set(hcd, HC_ATL_PTD_SKIPMAP);
+	isp1760_hcd_set(hcd, HC_INT_PTD_SKIPMAP);
+	isp1760_hcd_set(hcd, HC_ISO_PTD_SKIPMAP);
+	ndelay(100);
+	isp1760_hcd_clear(hcd, HC_ATL_PTD_DONEMAP);
+	isp1760_hcd_clear(hcd, HC_INT_PTD_DONEMAP);
+	isp1760_hcd_clear(hcd, HC_ISO_PTD_DONEMAP);
+
+	isp1760_hcd_set(hcd, HW_OTG_DISABLE);
+	isp1760_reg_write(priv->regs, ISP1763_HC_OTG_CTRL_CLEAR, BIT(7));
+	isp1760_reg_write(priv->regs, ISP1763_HC_OTG_CTRL_CLEAR, BIT(15));
+	mdelay(10);
+
+	isp1760_hcd_set(hcd, HC_INT_IRQ_ENABLE);
+	isp1760_hcd_set(hcd, HC_ATL_IRQ_ENABLE);
+
+	isp1760_hcd_set(hcd, HW_GLOBAL_INTR_EN);
+
+	isp1760_hcd_clear(hcd, HC_ATL_IRQ_MASK_AND);
+	isp1760_hcd_clear(hcd, HC_INT_IRQ_MASK_AND);
+	isp1760_hcd_clear(hcd, HC_ISO_IRQ_MASK_AND);
+
+	isp1760_hcd_set(hcd, HC_ATL_IRQ_MASK_OR);
+	isp1760_hcd_set(hcd, HC_INT_IRQ_MASK_OR);
+	isp1760_hcd_set(hcd, HC_ISO_IRQ_MASK_OR);
+
+	ptd_atl_int = 0x8000;
+	ptd_iso = 0x0001;
+
+	isp1760_hcd_write(hcd, HC_ATL_PTD_LASTPTD, ptd_atl_int);
+	isp1760_hcd_write(hcd, HC_INT_PTD_LASTPTD, ptd_atl_int);
+	isp1760_hcd_write(hcd, HC_ISO_PTD_LASTPTD, ptd_iso);
+
+	isp1760_hcd_set(hcd, ATL_BUF_FILL);
+	isp1760_hcd_set(hcd, INT_BUF_FILL);
+
+	isp1760_hcd_clear(hcd, CMD_LRESET);
+	isp1760_hcd_clear(hcd, CMD_RESET);
+
+	retval = isp1760_hcd_set_and_wait(hcd, CMD_RUN, 250 * 1000);
+	if (retval)
+		return retval;
+
+	down_write(&ehci_cf_port_reset_rwsem);
+	retval = isp1760_hcd_set_and_wait(hcd, FLAG_CF, 250 * 1000);
+	up_write(&ehci_cf_port_reset_rwsem);
+	if (retval)
+		return retval;
+
+	return 0;
+}
+
 static int isp1760_run(struct usb_hcd *hcd)
 {
+	struct isp1760_hcd *priv = hcd_to_priv(hcd);
 	int retval;
-	u32 temp;
-	u32 command;
-	u32 chipid;
+	u32 chipid_h;
+	u32 chipid_l;
+	u32 chip_rev;
+	u32 ptd_atl_int;
+	u32 ptd_iso;
+
+	/*
+	 * ISP1763 have some differences in the setup and order to enable
+	 * the ports, disable otg, setup buffers, and ATL, INT, ISO status.
+	 * So, just handle it a separate sequence.
+	 */
+	if (priv->is_isp1763)
+		return isp1763_run(hcd);
 
 	hcd->uses_new_polling = 1;
 
 	hcd->state = HC_STATE_RUNNING;
 
 	/* Set PTD interrupt AND & OR maps */
-	reg_write32(hcd->regs, HC_ATL_IRQ_MASK_AND_REG, 0);
-	reg_write32(hcd->regs, HC_ATL_IRQ_MASK_OR_REG, 0xffffffff);
-	reg_write32(hcd->regs, HC_INT_IRQ_MASK_AND_REG, 0);
-	reg_write32(hcd->regs, HC_INT_IRQ_MASK_OR_REG, 0xffffffff);
-	reg_write32(hcd->regs, HC_ISO_IRQ_MASK_AND_REG, 0);
-	reg_write32(hcd->regs, HC_ISO_IRQ_MASK_OR_REG, 0xffffffff);
+	isp1760_hcd_clear(hcd, HC_ATL_IRQ_MASK_AND);
+	isp1760_hcd_clear(hcd, HC_INT_IRQ_MASK_AND);
+	isp1760_hcd_clear(hcd, HC_ISO_IRQ_MASK_AND);
+
+	isp1760_hcd_set(hcd, HC_ATL_IRQ_MASK_OR);
+	isp1760_hcd_set(hcd, HC_INT_IRQ_MASK_OR);
+	isp1760_hcd_set(hcd, HC_ISO_IRQ_MASK_OR);
+
 	/* step 23 passed */
 
-	temp = reg_read32(hcd->regs, HC_HW_MODE_CTRL);
-	reg_write32(hcd->regs, HC_HW_MODE_CTRL, temp | HW_GLOBAL_INTR_EN);
+	isp1760_hcd_set(hcd, HW_GLOBAL_INTR_EN);
 
-	command = reg_read32(hcd->regs, HC_USBCMD);
-	command &= ~(CMD_LRESET|CMD_RESET);
-	command |= CMD_RUN;
-	reg_write32(hcd->regs, HC_USBCMD, command);
+	isp1760_hcd_clear(hcd, CMD_LRESET);
+	isp1760_hcd_clear(hcd, CMD_RESET);
 
-	retval = handshake(hcd, HC_USBCMD, CMD_RUN, CMD_RUN, 250 * 1000);
+	retval = isp1760_hcd_set_and_wait(hcd, CMD_RUN, 250 * 1000);
 	if (retval)
 		return retval;
 
···
 	 * the semaphore while doing so.
 	 */
 	down_write(&ehci_cf_port_reset_rwsem);
-	reg_write32(hcd->regs, HC_CONFIGFLAG, FLAG_CF);
 
-	retval = handshake(hcd, HC_CONFIGFLAG, FLAG_CF, FLAG_CF, 250 * 1000);
+	retval = isp1760_hcd_set_and_wait(hcd, FLAG_CF, 250 * 1000);
 	up_write(&ehci_cf_port_reset_rwsem);
 	if (retval)
 		return retval;
···
 	errata2_timer.expires = jiffies + msecs_to_jiffies(SLOT_CHECK_PERIOD);
 	add_timer(&errata2_timer);
 
-	chipid = reg_read32(hcd->regs, HC_CHIP_ID_REG);
-	dev_info(hcd->self.controller, "USB ISP %04x HW rev. %d started\n",
-		 chipid & 0xffff, chipid >> 16);
+	chipid_h = isp1760_hcd_read(hcd, HC_CHIP_ID_HIGH);
+	chipid_l = isp1760_hcd_read(hcd, HC_CHIP_ID_LOW);
+	chip_rev = isp1760_hcd_read(hcd, HC_CHIP_REV);
+	dev_info(hcd->self.controller, "USB ISP %02x%02x HW rev. %d started\n",
+		 chipid_h, chipid_l, chip_rev);
 
 	/* PTD Register Init Part 2, Step 28 */
 
 	/* Setup registers controlling PTD checking */
-	reg_write32(hcd->regs, HC_ATL_PTD_LASTPTD_REG, 0x80000000);
-	reg_write32(hcd->regs, HC_INT_PTD_LASTPTD_REG, 0x80000000);
-	reg_write32(hcd->regs, HC_ISO_PTD_LASTPTD_REG, 0x00000001);
-	reg_write32(hcd->regs, HC_ATL_PTD_SKIPMAP_REG, 0xffffffff);
-	reg_write32(hcd->regs, HC_INT_PTD_SKIPMAP_REG, 0xffffffff);
-	reg_write32(hcd->regs, HC_ISO_PTD_SKIPMAP_REG, 0xffffffff);
-	reg_write32(hcd->regs, HC_BUFFER_STATUS_REG,
-		    ATL_BUF_FILL | INT_BUF_FILL);
+	ptd_atl_int = 0x80000000;
+	ptd_iso = 0x00000001;
+
+	isp1760_hcd_write(hcd, HC_ATL_PTD_LASTPTD, ptd_atl_int);
+	isp1760_hcd_write(hcd, HC_INT_PTD_LASTPTD, ptd_atl_int);
+	isp1760_hcd_write(hcd, HC_ISO_PTD_LASTPTD, ptd_iso);
+
+	isp1760_hcd_set(hcd, HC_ATL_PTD_SKIPMAP);
+	isp1760_hcd_set(hcd, HC_INT_PTD_SKIPMAP);
+	isp1760_hcd_set(hcd, HC_ISO_PTD_SKIPMAP);
+
+	isp1760_hcd_set(hcd, ATL_BUF_FILL);
+	isp1760_hcd_set(hcd, INT_BUF_FILL);
 
 	/* GRR this is run-once init(), being done every time the HC starts.
1733 1363 * So long as they're part of class devices, we can't do it init() ··· 1747 1363 { 1748 1364 qtd->data_buffer = databuffer; 1749 1365 1750 - if (len > MAX_PAYLOAD_SIZE) 1751 - len = MAX_PAYLOAD_SIZE; 1752 1366 qtd->length = len; 1753 1367 1754 1368 return qtd->length; ··· 1770 1388 static void packetize_urb(struct usb_hcd *hcd, 1771 1389 struct urb *urb, struct list_head *head, gfp_t flags) 1772 1390 { 1391 + struct isp1760_hcd *priv = hcd_to_priv(hcd); 1392 + const struct isp1760_memory_layout *mem = priv->memory_layout; 1773 1393 struct isp1760_qtd *qtd; 1774 1394 void *buf; 1775 1395 int len, maxpacketsize; ··· 1824 1440 qtd = qtd_alloc(flags, urb, packet_type); 1825 1441 if (!qtd) 1826 1442 goto cleanup; 1443 + 1444 + if (len > mem->blocks_size[ISP176x_BLOCK_NUM - 1]) 1445 + len = mem->blocks_size[ISP176x_BLOCK_NUM - 1]; 1446 + 1827 1447 this_qtd_len = qtd_fill(qtd, buf, len); 1828 1448 list_add_tail(&qtd->qtd_list, head); 1829 1449 ··· 1972 1584 /* We need to forcefully reclaim the slot since some transfers never 1973 1585 return, e.g. interrupt transfers and NAKed bulk transfers. 
*/ 1974 1586 if (usb_pipecontrol(urb->pipe) || usb_pipebulk(urb->pipe)) { 1975 - skip_map = reg_read32(hcd->regs, HC_ATL_PTD_SKIPMAP_REG); 1587 + skip_map = isp1760_hcd_read(hcd, HC_ATL_PTD_SKIPMAP); 1976 1588 skip_map |= (1 << qh->slot); 1977 - reg_write32(hcd->regs, HC_ATL_PTD_SKIPMAP_REG, skip_map); 1589 + isp1760_hcd_write(hcd, HC_ATL_PTD_SKIPMAP, skip_map); 1590 + ndelay(100); 1978 1591 priv->atl_slots[qh->slot].qh = NULL; 1979 1592 priv->atl_slots[qh->slot].qtd = NULL; 1980 1593 } else { 1981 - skip_map = reg_read32(hcd->regs, HC_INT_PTD_SKIPMAP_REG); 1594 + skip_map = isp1760_hcd_read(hcd, HC_INT_PTD_SKIPMAP); 1982 1595 skip_map |= (1 << qh->slot); 1983 - reg_write32(hcd->regs, HC_INT_PTD_SKIPMAP_REG, skip_map); 1596 + isp1760_hcd_write(hcd, HC_INT_PTD_SKIPMAP, skip_map); 1984 1597 priv->int_slots[qh->slot].qh = NULL; 1985 1598 priv->int_slots[qh->slot].qtd = NULL; 1986 1599 } ··· 2094 1705 static int isp1760_hub_status_data(struct usb_hcd *hcd, char *buf) 2095 1706 { 2096 1707 struct isp1760_hcd *priv = hcd_to_priv(hcd); 2097 - u32 temp, status = 0; 2098 - u32 mask; 1708 + u32 status = 0; 2099 1709 int retval = 1; 2100 1710 unsigned long flags; 2101 1711 ··· 2104 1716 2105 1717 /* init status to no-changes */ 2106 1718 buf[0] = 0; 2107 - mask = PORT_CSC; 2108 1719 2109 1720 spin_lock_irqsave(&priv->lock, flags); 2110 - temp = reg_read32(hcd->regs, HC_PORTSC1); 2111 1721 2112 - if (temp & PORT_OWNER) { 2113 - if (temp & PORT_CSC) { 2114 - temp &= ~PORT_CSC; 2115 - reg_write32(hcd->regs, HC_PORTSC1, temp); 2116 - goto done; 2117 - } 1722 + if (isp1760_hcd_is_set(hcd, PORT_OWNER) && 1723 + isp1760_hcd_is_set(hcd, PORT_CSC)) { 1724 + isp1760_hcd_clear(hcd, PORT_CSC); 1725 + goto done; 2118 1726 } 2119 1727 2120 1728 /* ··· 2119 1735 * high-speed device is switched over to the companion 2120 1736 * controller by the user. 
2121 1737 */ 2122 - 2123 - if ((temp & mask) != 0 2124 - || ((temp & PORT_RESUME) != 0 2125 - && time_after_eq(jiffies, 2126 - priv->reset_done))) { 1738 + if (isp1760_hcd_is_set(hcd, PORT_CSC) || 1739 + (isp1760_hcd_is_set(hcd, PORT_RESUME) && 1740 + time_after_eq(jiffies, priv->reset_done))) { 2127 1741 buf [0] |= 1 << (0 + 1); 2128 1742 status = STS_PCD; 2129 1743 } ··· 2134 1752 static void isp1760_hub_descriptor(struct isp1760_hcd *priv, 2135 1753 struct usb_hub_descriptor *desc) 2136 1754 { 2137 - int ports = HCS_N_PORTS(priv->hcs_params); 1755 + int ports; 2138 1756 u16 temp; 1757 + 1758 + ports = isp1760_hcd_n_ports(priv->hcd); 2139 1759 2140 1760 desc->bDescriptorType = USB_DT_HUB; 2141 1761 /* priv 1.0, 2.3.9 says 20ms max */ ··· 2154 1770 2155 1771 /* per-port overcurrent reporting */ 2156 1772 temp = HUB_CHAR_INDV_PORT_OCPM; 2157 - if (HCS_PPC(priv->hcs_params)) 1773 + if (isp1760_hcd_ppc_is_set(priv->hcd)) 2158 1774 /* per-port power control */ 2159 1775 temp |= HUB_CHAR_INDV_PORT_LPSM; 2160 1776 else ··· 2165 1781 2166 1782 #define PORT_WAKE_BITS (PORT_WKOC_E|PORT_WKDISC_E|PORT_WKCONN_E) 2167 1783 2168 - static int check_reset_complete(struct usb_hcd *hcd, int index, 2169 - int port_status) 1784 + static void check_reset_complete(struct usb_hcd *hcd, int index) 2170 1785 { 2171 - if (!(port_status & PORT_CONNECT)) 2172 - return port_status; 1786 + if (!(isp1760_hcd_is_set(hcd, PORT_CONNECT))) 1787 + return; 2173 1788 2174 1789 /* if reset finished and it's still not enabled -- handoff */ 2175 - if (!(port_status & PORT_PE)) { 2176 - 1790 + if (!isp1760_hcd_is_set(hcd, PORT_PE)) { 2177 1791 dev_info(hcd->self.controller, 2178 - "port %d full speed --> companion\n", 2179 - index + 1); 1792 + "port %d full speed --> companion\n", index + 1); 2180 1793 2181 - port_status |= PORT_OWNER; 2182 - port_status &= ~PORT_RWC_BITS; 2183 - reg_write32(hcd->regs, HC_PORTSC1, port_status); 1794 + isp1760_hcd_set(hcd, PORT_OWNER); 2184 1795 2185 - } else 1796 + 
isp1760_hcd_clear(hcd, PORT_CSC); 1797 + } else { 2186 1798 dev_info(hcd->self.controller, "port %d high speed\n", 2187 - index + 1); 1799 + index + 1); 1800 + } 2188 1801 2189 - return port_status; 1802 + return; 2190 1803 } 2191 1804 2192 1805 static int isp1760_hub_control(struct usb_hcd *hcd, u16 typeReq, 2193 1806 u16 wValue, u16 wIndex, char *buf, u16 wLength) 2194 1807 { 2195 1808 struct isp1760_hcd *priv = hcd_to_priv(hcd); 2196 - int ports = HCS_N_PORTS(priv->hcs_params); 2197 - u32 temp, status; 1809 + u32 status; 2198 1810 unsigned long flags; 2199 1811 int retval = 0; 1812 + int ports; 1813 + 1814 + ports = isp1760_hcd_n_ports(hcd); 2200 1815 2201 1816 /* 2202 1817 * FIXME: support SetPortFeatures USB_PORT_FEAT_INDICATOR. ··· 2220 1837 if (!wIndex || wIndex > ports) 2221 1838 goto error; 2222 1839 wIndex--; 2223 - temp = reg_read32(hcd->regs, HC_PORTSC1); 2224 1840 2225 1841 /* 2226 1842 * Even if OWNER is set, so the port is owned by the ··· 2230 1848 2231 1849 switch (wValue) { 2232 1850 case USB_PORT_FEAT_ENABLE: 2233 - reg_write32(hcd->regs, HC_PORTSC1, temp & ~PORT_PE); 1851 + isp1760_hcd_clear(hcd, PORT_PE); 2234 1852 break; 2235 1853 case USB_PORT_FEAT_C_ENABLE: 2236 1854 /* XXX error? 
*/ 2237 1855 break; 2238 1856 case USB_PORT_FEAT_SUSPEND: 2239 - if (temp & PORT_RESET) 1857 + if (isp1760_hcd_is_set(hcd, PORT_RESET)) 2240 1858 goto error; 2241 1859 2242 - if (temp & PORT_SUSPEND) { 2243 - if ((temp & PORT_PE) == 0) 1860 + if (isp1760_hcd_is_set(hcd, PORT_SUSPEND)) { 1861 + if (!isp1760_hcd_is_set(hcd, PORT_PE)) 2244 1862 goto error; 2245 1863 /* resume signaling for 20 msec */ 2246 - temp &= ~(PORT_RWC_BITS); 2247 - reg_write32(hcd->regs, HC_PORTSC1, 2248 - temp | PORT_RESUME); 1864 + isp1760_hcd_clear(hcd, PORT_CSC); 1865 + isp1760_hcd_set(hcd, PORT_RESUME); 1866 + 2249 1867 priv->reset_done = jiffies + 2250 1868 msecs_to_jiffies(USB_RESUME_TIMEOUT); 2251 1869 } ··· 2254 1872 /* we auto-clear this feature */ 2255 1873 break; 2256 1874 case USB_PORT_FEAT_POWER: 2257 - if (HCS_PPC(priv->hcs_params)) 2258 - reg_write32(hcd->regs, HC_PORTSC1, 2259 - temp & ~PORT_POWER); 1875 + if (isp1760_hcd_ppc_is_set(hcd)) 1876 + isp1760_hcd_clear(hcd, PORT_POWER); 2260 1877 break; 2261 1878 case USB_PORT_FEAT_C_CONNECTION: 2262 - reg_write32(hcd->regs, HC_PORTSC1, temp | PORT_CSC); 1879 + isp1760_hcd_set(hcd, PORT_CSC); 2263 1880 break; 2264 1881 case USB_PORT_FEAT_C_OVER_CURRENT: 2265 1882 /* XXX error ?*/ ··· 2269 1888 default: 2270 1889 goto error; 2271 1890 } 2272 - reg_read32(hcd->regs, HC_USBCMD); 1891 + isp1760_hcd_read(hcd, CMD_RUN); 2273 1892 break; 2274 1893 case GetHubDescriptor: 2275 1894 isp1760_hub_descriptor(priv, (struct usb_hub_descriptor *) ··· 2284 1903 goto error; 2285 1904 wIndex--; 2286 1905 status = 0; 2287 - temp = reg_read32(hcd->regs, HC_PORTSC1); 2288 1906 2289 1907 /* wPortChange bits */ 2290 - if (temp & PORT_CSC) 1908 + if (isp1760_hcd_is_set(hcd, PORT_CSC)) 2291 1909 status |= USB_PORT_STAT_C_CONNECTION << 16; 2292 1910 2293 - 2294 1911 /* whoever resumes must GetPortStatus to complete it!! 
*/ 2295 - if (temp & PORT_RESUME) { 1912 + if (isp1760_hcd_is_set(hcd, PORT_RESUME)) { 2296 1913 dev_err(hcd->self.controller, "Port resume should be skipped.\n"); 2297 1914 2298 1915 /* Remote Wakeup received? */ ··· 2309 1930 priv->reset_done = 0; 2310 1931 2311 1932 /* stop resume signaling */ 2312 - temp = reg_read32(hcd->regs, HC_PORTSC1); 2313 - reg_write32(hcd->regs, HC_PORTSC1, 2314 - temp & ~(PORT_RWC_BITS | PORT_RESUME)); 2315 - retval = handshake(hcd, HC_PORTSC1, 2316 - PORT_RESUME, 0, 2000 /* 2msec */); 1933 + isp1760_hcd_clear(hcd, PORT_CSC); 1934 + 1935 + retval = isp1760_hcd_clear_and_wait(hcd, 1936 + PORT_RESUME, 2000); 2317 1937 if (retval != 0) { 2318 1938 dev_err(hcd->self.controller, 2319 1939 "port %d resume error %d\n", 2320 1940 wIndex + 1, retval); 2321 1941 goto error; 2322 1942 } 2323 - temp &= ~(PORT_SUSPEND|PORT_RESUME|(3<<10)); 2324 1943 } 2325 1944 } 2326 1945 2327 1946 /* whoever resets must GetPortStatus to complete it!! */ 2328 - if ((temp & PORT_RESET) 2329 - && time_after_eq(jiffies, 2330 - priv->reset_done)) { 1947 + if (isp1760_hcd_is_set(hcd, PORT_RESET) && 1948 + time_after_eq(jiffies, priv->reset_done)) { 2331 1949 status |= USB_PORT_STAT_C_RESET << 16; 2332 1950 priv->reset_done = 0; 2333 1951 2334 1952 /* force reset to complete */ 2335 - reg_write32(hcd->regs, HC_PORTSC1, temp & ~PORT_RESET); 2336 1953 /* REVISIT: some hardware needs 550+ usec to clear 2337 1954 * this bit; seems too long to spin routinely... 
2338 1955 */ 2339 - retval = handshake(hcd, HC_PORTSC1, 2340 - PORT_RESET, 0, 750); 1956 + retval = isp1760_hcd_clear_and_wait(hcd, PORT_RESET, 1957 + 750); 2341 1958 if (retval != 0) { 2342 1959 dev_err(hcd->self.controller, "port %d reset error %d\n", 2343 - wIndex + 1, retval); 1960 + wIndex + 1, retval); 2344 1961 goto error; 2345 1962 } 2346 1963 2347 1964 /* see what we found out */ 2348 - temp = check_reset_complete(hcd, wIndex, 2349 - reg_read32(hcd->regs, HC_PORTSC1)); 1965 + check_reset_complete(hcd, wIndex); 2350 1966 } 2351 1967 /* 2352 1968 * Even if OWNER is set, there's no harm letting hub_wq ··· 2349 1975 * for PORT_POWER anyway). 2350 1976 */ 2351 1977 2352 - if (temp & PORT_OWNER) 1978 + if (isp1760_hcd_is_set(hcd, PORT_OWNER)) 2353 1979 dev_err(hcd->self.controller, "PORT_OWNER is set\n"); 2354 1980 2355 - if (temp & PORT_CONNECT) { 1981 + if (isp1760_hcd_is_set(hcd, PORT_CONNECT)) { 2356 1982 status |= USB_PORT_STAT_CONNECTION; 2357 1983 /* status may be from integrated TT */ 2358 1984 status |= USB_PORT_STAT_HIGH_SPEED; 2359 1985 } 2360 - if (temp & PORT_PE) 1986 + if (isp1760_hcd_is_set(hcd, PORT_PE)) 2361 1987 status |= USB_PORT_STAT_ENABLE; 2362 - if (temp & (PORT_SUSPEND|PORT_RESUME)) 1988 + if (isp1760_hcd_is_set(hcd, PORT_SUSPEND) && 1989 + isp1760_hcd_is_set(hcd, PORT_RESUME)) 2363 1990 status |= USB_PORT_STAT_SUSPEND; 2364 - if (temp & PORT_RESET) 1991 + if (isp1760_hcd_is_set(hcd, PORT_RESET)) 2365 1992 status |= USB_PORT_STAT_RESET; 2366 - if (temp & PORT_POWER) 1993 + if (isp1760_hcd_is_set(hcd, PORT_POWER)) 2367 1994 status |= USB_PORT_STAT_POWER; 2368 1995 2369 1996 put_unaligned(cpu_to_le32(status), (__le32 *) buf); ··· 2384 2009 if (!wIndex || wIndex > ports) 2385 2010 goto error; 2386 2011 wIndex--; 2387 - temp = reg_read32(hcd->regs, HC_PORTSC1); 2388 - if (temp & PORT_OWNER) 2012 + 2013 + if (isp1760_hcd_is_set(hcd, PORT_OWNER)) 2389 2014 break; 2390 2015 2391 - /* temp &= ~PORT_RWC_BITS; */ 2392 2016 switch (wValue) { 2393 
2017 case USB_PORT_FEAT_ENABLE: 2394 - reg_write32(hcd->regs, HC_PORTSC1, temp | PORT_PE); 2018 + isp1760_hcd_set(hcd, PORT_PE); 2395 2019 break; 2396 2020 2397 2021 case USB_PORT_FEAT_SUSPEND: 2398 - if ((temp & PORT_PE) == 0 2399 - || (temp & PORT_RESET) != 0) 2022 + if (!isp1760_hcd_is_set(hcd, PORT_PE) || 2023 + isp1760_hcd_is_set(hcd, PORT_RESET)) 2400 2024 goto error; 2401 2025 2402 - reg_write32(hcd->regs, HC_PORTSC1, temp | PORT_SUSPEND); 2026 + isp1760_hcd_set(hcd, PORT_SUSPEND); 2403 2027 break; 2404 2028 case USB_PORT_FEAT_POWER: 2405 - if (HCS_PPC(priv->hcs_params)) 2406 - reg_write32(hcd->regs, HC_PORTSC1, 2407 - temp | PORT_POWER); 2029 + if (isp1760_hcd_ppc_is_set(hcd)) 2030 + isp1760_hcd_set(hcd, PORT_POWER); 2408 2031 break; 2409 2032 case USB_PORT_FEAT_RESET: 2410 - if (temp & PORT_RESUME) 2033 + if (isp1760_hcd_is_set(hcd, PORT_RESUME)) 2411 2034 goto error; 2412 2035 /* line status bits may report this as low speed, 2413 2036 * which can be fine if this root hub has a 2414 2037 * transaction translator built in. 
2415 2038 */ 2416 - if ((temp & (PORT_PE|PORT_CONNECT)) == PORT_CONNECT 2417 - && PORT_USB11(temp)) { 2418 - temp |= PORT_OWNER; 2039 + if ((isp1760_hcd_is_set(hcd, PORT_CONNECT) && 2040 + !isp1760_hcd_is_set(hcd, PORT_PE)) && 2041 + (isp1760_hcd_read(hcd, PORT_LSTATUS) == 1)) { 2042 + isp1760_hcd_set(hcd, PORT_OWNER); 2419 2043 } else { 2420 - temp |= PORT_RESET; 2421 - temp &= ~PORT_PE; 2044 + isp1760_hcd_set(hcd, PORT_RESET); 2045 + isp1760_hcd_clear(hcd, PORT_PE); 2422 2046 2423 2047 /* 2424 2048 * caller must wait, then call GetPortStatus ··· 2426 2052 priv->reset_done = jiffies + 2427 2053 msecs_to_jiffies(50); 2428 2054 } 2429 - reg_write32(hcd->regs, HC_PORTSC1, temp); 2430 2055 break; 2431 2056 default: 2432 2057 goto error; 2433 2058 } 2434 - reg_read32(hcd->regs, HC_USBCMD); 2435 2059 break; 2436 2060 2437 2061 default: ··· 2446 2074 struct isp1760_hcd *priv = hcd_to_priv(hcd); 2447 2075 u32 fr; 2448 2076 2449 - fr = reg_read32(hcd->regs, HC_FRINDEX); 2077 + fr = isp1760_hcd_read(hcd, HC_FRINDEX); 2450 2078 return (fr >> 3) % priv->periodic_size; 2451 2079 } 2452 2080 2453 2081 static void isp1760_stop(struct usb_hcd *hcd) 2454 2082 { 2455 2083 struct isp1760_hcd *priv = hcd_to_priv(hcd); 2456 - u32 temp; 2457 2084 2458 2085 del_timer(&errata2_timer); 2459 2086 ··· 2463 2092 spin_lock_irq(&priv->lock); 2464 2093 ehci_reset(hcd); 2465 2094 /* Disable IRQ */ 2466 - temp = reg_read32(hcd->regs, HC_HW_MODE_CTRL); 2467 - reg_write32(hcd->regs, HC_HW_MODE_CTRL, temp &= ~HW_GLOBAL_INTR_EN); 2095 + isp1760_hcd_clear(hcd, HW_GLOBAL_INTR_EN); 2468 2096 spin_unlock_irq(&priv->lock); 2469 2097 2470 - reg_write32(hcd->regs, HC_CONFIGFLAG, 0); 2098 + isp1760_hcd_clear(hcd, FLAG_CF); 2471 2099 } 2472 2100 2473 2101 static void isp1760_shutdown(struct usb_hcd *hcd) 2474 2102 { 2475 - u32 command, temp; 2476 - 2477 2103 isp1760_stop(hcd); 2478 - temp = reg_read32(hcd->regs, HC_HW_MODE_CTRL); 2479 - reg_write32(hcd->regs, HC_HW_MODE_CTRL, temp &= ~HW_GLOBAL_INTR_EN); 2480 
2104 2481 - command = reg_read32(hcd->regs, HC_USBCMD); 2482 - command &= ~CMD_RUN; 2483 - reg_write32(hcd->regs, HC_USBCMD, command); 2105 + isp1760_hcd_clear(hcd, HW_GLOBAL_INTR_EN); 2106 + 2107 + isp1760_hcd_clear(hcd, CMD_RUN); 2484 2108 } 2485 2109 2486 2110 static void isp1760_clear_tt_buffer_complete(struct usb_hcd *hcd, ··· 2548 2182 kmem_cache_destroy(urb_listitem_cachep); 2549 2183 } 2550 2184 2551 - int isp1760_hcd_register(struct isp1760_hcd *priv, void __iomem *regs, 2552 - struct resource *mem, int irq, unsigned long irqflags, 2185 + int isp1760_hcd_register(struct isp1760_hcd *priv, struct resource *mem, 2186 + int irq, unsigned long irqflags, 2553 2187 struct device *dev) 2554 2188 { 2189 + const struct isp1760_memory_layout *mem_layout = priv->memory_layout; 2555 2190 struct usb_hcd *hcd; 2556 2191 int ret; 2557 2192 ··· 2564 2197 2565 2198 priv->hcd = hcd; 2566 2199 2200 + priv->atl_slots = kcalloc(mem_layout->slot_num, 2201 + sizeof(struct isp1760_slotinfo), GFP_KERNEL); 2202 + if (!priv->atl_slots) { 2203 + ret = -ENOMEM; 2204 + goto put_hcd; 2205 + } 2206 + 2207 + priv->int_slots = kcalloc(mem_layout->slot_num, 2208 + sizeof(struct isp1760_slotinfo), GFP_KERNEL); 2209 + if (!priv->int_slots) { 2210 + ret = -ENOMEM; 2211 + goto free_atl_slots; 2212 + } 2213 + 2567 2214 init_memory(priv); 2568 2215 2569 2216 hcd->irq = irq; 2570 - hcd->regs = regs; 2571 2217 hcd->rsrc_start = mem->start; 2572 2218 hcd->rsrc_len = resource_size(mem); 2573 2219 ··· 2589 2209 2590 2210 ret = usb_add_hcd(hcd, irq, irqflags); 2591 2211 if (ret) 2592 - goto error; 2212 + goto free_int_slots; 2593 2213 2594 2214 device_wakeup_enable(hcd->self.controller); 2595 2215 2596 2216 return 0; 2597 2217 2598 - error: 2218 + free_int_slots: 2219 + kfree(priv->int_slots); 2220 + free_atl_slots: 2221 + kfree(priv->atl_slots); 2222 + put_hcd: 2599 2223 usb_put_hcd(hcd); 2600 2224 return ret; 2601 2225 } ··· 2611 2227 2612 2228 usb_remove_hcd(priv->hcd); 2613 2229 
usb_put_hcd(priv->hcd); 2230 + kfree(priv->atl_slots); 2231 + kfree(priv->int_slots); 2614 2232 }
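Throughout the isp1760-hcd.c hunks above, open-coded `reg_read32()`/`reg_write32()` read-modify-write sequences followed by `handshake()` are replaced with `isp1760_hcd_set_and_wait()`-style helpers: set a register bit, then poll until the controller reflects it. The following is a minimal userspace sketch of that pattern, not the driver code itself — the register model and the `set_and_wait()` name here are illustrative, and the real helper sleeps between polls and returns `-ETIMEDOUT` on expiry:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for a memory-mapped register (hypothetical model). */
static uint32_t fake_reg;

static uint32_t reg_read(void) { return fake_reg; }
static void reg_set(uint32_t mask) { fake_reg |= mask; }

/*
 * Set a command bit, then poll (bounded) until the hardware echoes it
 * back. Returns 0 on success, -1 on timeout.
 */
static int set_and_wait(uint32_t mask, int max_polls)
{
	reg_set(mask);
	while (max_polls--) {
		if (reg_read() & mask)
			return 0;
	}
	return -1; /* timed out */
}
```

The driver uses this shape for `CMD_RUN` (wait for the controller to start) and `FLAG_CF` (wait for port routing to take effect), with a 250 ms budget in both call sites above.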
+29 -28
drivers/usb/isp1760/isp1760-hcd.h
··· 3 3 #define _ISP1760_HCD_H_ 4 4 5 5 #include <linux/spinlock.h> 6 + #include <linux/regmap.h> 7 + 8 + #include "isp1760-regs.h" 6 9 7 10 struct isp1760_qh; 8 11 struct isp1760_qtd; 9 12 struct resource; 10 13 struct usb_hcd; 11 - 12 - /* 13 - * 60kb divided in: 14 - * - 32 blocks @ 256 bytes 15 - * - 20 blocks @ 1024 bytes 16 - * - 4 blocks @ 8192 bytes 17 - */ 18 - 19 - #define BLOCK_1_NUM 32 20 - #define BLOCK_2_NUM 20 21 - #define BLOCK_3_NUM 4 22 - 23 - #define BLOCK_1_SIZE 256 24 - #define BLOCK_2_SIZE 1024 25 - #define BLOCK_3_SIZE 8192 26 - #define BLOCKS (BLOCK_1_NUM + BLOCK_2_NUM + BLOCK_3_NUM) 27 - #define MAX_PAYLOAD_SIZE BLOCK_3_SIZE 28 - #define PAYLOAD_AREA_SIZE 0xf000 29 14 30 15 struct isp1760_slotinfo { 31 16 struct isp1760_qh *qh; ··· 19 34 }; 20 35 21 36 /* chip memory management */ 37 + #define ISP176x_BLOCK_MAX (32 + 20 + 4) 38 + #define ISP176x_BLOCK_NUM 3 39 + 40 + struct isp1760_memory_layout { 41 + unsigned int blocks[ISP176x_BLOCK_NUM]; 42 + unsigned int blocks_size[ISP176x_BLOCK_NUM]; 43 + 44 + unsigned int slot_num; 45 + unsigned int payload_blocks; 46 + unsigned int payload_area_size; 47 + }; 48 + 22 49 struct isp1760_memory_chunk { 23 50 unsigned int start; 24 51 unsigned int size; ··· 45 48 }; 46 49 47 50 struct isp1760_hcd { 48 - #ifdef CONFIG_USB_ISP1760_HCD 49 51 struct usb_hcd *hcd; 50 52 51 - u32 hcs_params; 53 + void __iomem *base; 54 + 55 + struct regmap *regs; 56 + struct regmap_field *fields[HC_FIELD_MAX]; 57 + 58 + bool is_isp1763; 59 + const struct isp1760_memory_layout *memory_layout; 60 + 52 61 spinlock_t lock; 53 - struct isp1760_slotinfo atl_slots[32]; 62 + struct isp1760_slotinfo *atl_slots; 54 63 int atl_done_map; 55 - struct isp1760_slotinfo int_slots[32]; 64 + struct isp1760_slotinfo *int_slots; 56 65 int int_done_map; 57 - struct isp1760_memory_chunk memory_pool[BLOCKS]; 66 + struct isp1760_memory_chunk memory_pool[ISP176x_BLOCK_MAX]; 58 67 struct list_head qh_list[QH_END]; 59 68 60 69 /* periodic schedule 
support */ ··· 69 66 unsigned i_thresh; 70 67 unsigned long reset_done; 71 68 unsigned long next_statechange; 72 - #endif 73 69 }; 74 70 75 71 #ifdef CONFIG_USB_ISP1760_HCD 76 - int isp1760_hcd_register(struct isp1760_hcd *priv, void __iomem *regs, 77 - struct resource *mem, int irq, unsigned long irqflags, 78 - struct device *dev); 72 + int isp1760_hcd_register(struct isp1760_hcd *priv, struct resource *mem, 73 + int irq, unsigned long irqflags, struct device *dev); 79 74 void isp1760_hcd_unregister(struct isp1760_hcd *priv); 80 75 81 76 int isp1760_init_kmem_once(void); 82 77 void isp1760_deinit_kmem_cache(void); 83 78 #else 84 79 static inline int isp1760_hcd_register(struct isp1760_hcd *priv, 85 - void __iomem *regs, struct resource *mem, 80 + struct resource *mem, 86 81 int irq, unsigned long irqflags, 87 82 struct device *dev) 88 83 {
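The isp1760-hcd.h hunk replaces the fixed `BLOCK_*_NUM`/`BLOCK_*_SIZE` defines with a per-chip `struct isp1760_memory_layout`, so the ISP1763's smaller payload memory can share the same code paths. The removed comment documented the ISP1760 layout as 60 KiB split into 32 blocks of 256 bytes, 20 of 1024, and 4 of 8192. A small sketch (struct and function names here are illustrative, not the driver's) shows how the old `PAYLOAD_AREA_SIZE` of 0xf000 falls out of the block table:

```c
#include <assert.h>

#define BLOCK_NUM 3

/* Mirrors the shape of the new isp1760_memory_layout. */
struct mem_layout {
	unsigned int blocks[BLOCK_NUM];
	unsigned int blocks_size[BLOCK_NUM];
};

/* ISP1760 values from the removed header comment. */
static const struct mem_layout isp1760_layout = {
	.blocks      = { 32, 20, 4 },
	.blocks_size = { 256, 1024, 8192 },
};

/* Total on-chip payload memory implied by the block table. */
static unsigned int payload_area_size(const struct mem_layout *m)
{
	unsigned int i, total = 0;

	for (i = 0; i < BLOCK_NUM; i++)
		total += m->blocks[i] * m->blocks_size[i];
	return total;
}
```

This is also why `packetize_urb()` in the hcd.c diff clamps `len` to `mem->blocks_size[ISP176x_BLOCK_NUM - 1]` — the largest block class bounds a single qtd's payload, replacing the old fixed `MAX_PAYLOAD_SIZE`.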
+19 -22
drivers/usb/isp1760/isp1760-if.c
··· 7 7 * - PDEV (generic platform device centralized driver model) 8 8 * 9 9 * (c) 2007 Sebastian Siewior <bigeasy@linutronix.de> 10 + * Copyright 2021 Linaro, Rui Miguel Silva <rui.silva@linaro.org> 10 11 * 11 12 */ 12 13 ··· 17 16 #include <linux/of.h> 18 17 #include <linux/platform_device.h> 19 18 #include <linux/slab.h> 20 - #include <linux/usb/isp1760.h> 21 19 #include <linux/usb/hcd.h> 20 + #include <linux/usb/otg.h> 22 21 23 22 #include "isp1760-core.h" 24 23 #include "isp1760-regs.h" ··· 76 75 /*by default host is in 16bit mode, so 77 76 * io operations at this stage must be 16 bit 78 77 * */ 79 - writel(0xface, iobase + HC_SCRATCH_REG); 78 + writel(0xface, iobase + ISP176x_HC_SCRATCH); 80 79 udelay(100); 81 - reg_data = readl(iobase + HC_SCRATCH_REG) & 0x0000ffff; 80 + reg_data = readl(iobase + ISP176x_HC_SCRATCH) & 0x0000ffff; 82 81 retry_count--; 83 82 } 84 83 ··· 210 209 if (of_device_is_compatible(dp, "nxp,usb-isp1761")) 211 210 devflags |= ISP1760_FLAG_ISP1761; 212 211 213 - /* Some systems wire up only 16 of the 32 data lines */ 212 + if (of_device_is_compatible(dp, "nxp,usb-isp1763")) 213 + devflags |= ISP1760_FLAG_ISP1763; 214 + 215 + /* 216 + * Some systems wire up only 8 of 16 data lines or 217 + * 16 of the 32 data lines 218 + */ 214 219 of_property_read_u32(dp, "bus-width", &bus_width); 215 220 if (bus_width == 16) 216 221 devflags |= ISP1760_FLAG_BUS_WIDTH_16; 222 + else if (bus_width == 8) 223 + devflags |= ISP1760_FLAG_BUS_WIDTH_8; 217 224 218 - if (of_property_read_bool(dp, "port1-otg")) 219 - devflags |= ISP1760_FLAG_OTG_EN; 225 + if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) 226 + devflags |= ISP1760_FLAG_PERIPHERAL_EN; 220 227 221 228 if (of_property_read_bool(dp, "analog-oc")) 222 229 devflags |= ISP1760_FLAG_ANALOG_OC; ··· 234 225 235 226 if (of_property_read_bool(dp, "dreq-polarity")) 236 227 devflags |= ISP1760_FLAG_DREQ_POL_HIGH; 237 - } else if (dev_get_platdata(&pdev->dev)) { 238 - struct isp1760_platform_data *pdata 
= 239 - dev_get_platdata(&pdev->dev); 240 - 241 - if (pdata->is_isp1761) 242 - devflags |= ISP1760_FLAG_ISP1761; 243 - if (pdata->bus_width_16) 244 - devflags |= ISP1760_FLAG_BUS_WIDTH_16; 245 - if (pdata->port1_otg) 246 - devflags |= ISP1760_FLAG_OTG_EN; 247 - if (pdata->analog_oc) 248 - devflags |= ISP1760_FLAG_ANALOG_OC; 249 - if (pdata->dack_polarity_high) 250 - devflags |= ISP1760_FLAG_DACK_POL_HIGH; 251 - if (pdata->dreq_polarity_high) 252 - devflags |= ISP1760_FLAG_DREQ_POL_HIGH; 228 + } else { 229 + pr_err("isp1760: no platform data\n"); 230 + return -ENXIO; 253 231 } 254 232 255 233 ret = isp1760_register(mem_res, irq_res->start, irqflags, &pdev->dev, ··· 259 263 static const struct of_device_id isp1760_of_match[] = { 260 264 { .compatible = "nxp,usb-isp1760", }, 261 265 { .compatible = "nxp,usb-isp1761", }, 266 + { .compatible = "nxp,usb-isp1763", }, 262 267 { }, 263 268 }; 264 269 MODULE_DEVICE_TABLE(of, isp1760_of_match);
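The isp1760-if.c hunk keeps the probe-time bus-width check: the chip comes up in 16-bit mode, so the driver writes `0xface` to the scratch register, reads back the low 16 bits, and retries until the pattern sticks. A userspace sketch of that check, under an assumed 16-bit-bus model (the `bus16_*` helpers are stand-ins; the real code uses `writel`/`readl` on `ISP176x_HC_SCRATCH` with a `udelay(100)` between attempts):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical 16-bit bus: only the low half of a write is latched. */
static uint32_t scratch;
static void bus16_write(uint32_t v) { scratch = v & 0xffff; }
static uint32_t bus16_read(void) { return scratch; }

/*
 * Write a test pattern to the scratch register and verify it reads
 * back, retrying a bounded number of times. 0 on success, -1 on failure.
 */
static int scratch_test(uint32_t pattern, int retries)
{
	while (retries--) {
		bus16_write(pattern);
		if ((bus16_read() & 0xffff) == (pattern & 0xffff))
			return 0;
	}
	return -1;
}
```

The same hunk extends the devicetree `bus-width` handling so that 8-bit wiring (new for the ISP1763) sets `ISP1760_FLAG_BUS_WIDTH_8` alongside the existing 16-bit case.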
+246 -181
drivers/usb/isp1760/isp1760-regs.h
··· 2 2 /* 3 3 * Driver for the NXP ISP1760 chip 4 4 * 5 + * Copyright 2021 Linaro, Rui Miguel Silva 5 6 * Copyright 2014 Laurent Pinchart 6 7 * Copyright 2007 Sebastian Siewior 7 8 * 8 9 * Contacts: 9 10 * Sebastian Siewior <bigeasy@linutronix.de> 10 11 * Laurent Pinchart <laurent.pinchart@ideasonboard.com> 12 + * Rui Miguel Silva <rui.silva@linaro.org> 11 13 */ 12 14 13 - #ifndef _ISP1760_REGS_H_ 14 - #define _ISP1760_REGS_H_ 15 + #ifndef _ISP176x_REGS_H_ 16 + #define _ISP176x_REGS_H_ 15 17 16 18 /* ----------------------------------------------------------------------------- 17 19 * Host Controller 18 20 */ 19 21 22 + /* ISP1760/31 */ 20 23 /* EHCI capability registers */ 21 - #define HC_CAPLENGTH 0x000 22 - #define HC_LENGTH(p) (((p) >> 00) & 0x00ff) /* bits 7:0 */ 23 - #define HC_VERSION(p) (((p) >> 16) & 0xffff) /* bits 31:16 */ 24 - 25 - #define HC_HCSPARAMS 0x004 26 - #define HCS_INDICATOR(p) ((p) & (1 << 16)) /* true: has port indicators */ 27 - #define HCS_PPC(p) ((p) & (1 << 4)) /* true: port power control */ 28 - #define HCS_N_PORTS(p) (((p) >> 0) & 0xf) /* bits 3:0, ports on HC */ 29 - 30 - #define HC_HCCPARAMS 0x008 31 - #define HCC_ISOC_CACHE(p) ((p) & (1 << 7)) /* true: can cache isoc frame */ 32 - #define HCC_ISOC_THRES(p) (((p) >> 4) & 0x7) /* bits 6:4, uframes cached */ 24 + #define ISP176x_HC_VERSION 0x002 25 + #define ISP176x_HC_HCSPARAMS 0x004 26 + #define ISP176x_HC_HCCPARAMS 0x008 33 27 34 28 /* EHCI operational registers */ 35 - #define HC_USBCMD 0x020 36 - #define CMD_LRESET (1 << 7) /* partial reset (no ports, etc) */ 37 - #define CMD_RESET (1 << 1) /* reset HC not bus */ 38 - #define CMD_RUN (1 << 0) /* start/stop HC */ 29 + #define ISP176x_HC_USBCMD 0x020 30 + #define ISP176x_HC_USBSTS 0x024 31 + #define ISP176x_HC_FRINDEX 0x02c 39 32 40 - #define HC_USBSTS 0x024 41 - #define STS_PCD (1 << 2) /* port change detect */ 33 + #define ISP176x_HC_CONFIGFLAG 0x060 34 + #define ISP176x_HC_PORTSC1 0x064 42 35 43 - #define HC_FRINDEX 0x02c 44 - 
45 - #define HC_CONFIGFLAG 0x060 46 - #define FLAG_CF (1 << 0) /* true: we'll support "high speed" */ 47 - 48 - #define HC_PORTSC1 0x064 49 - #define PORT_OWNER (1 << 13) /* true: companion hc owns this port */ 50 - #define PORT_POWER (1 << 12) /* true: has power (see PPC) */ 51 - #define PORT_USB11(x) (((x) & (3 << 10)) == (1 << 10)) /* USB 1.1 device */ 52 - #define PORT_RESET (1 << 8) /* reset port */ 53 - #define PORT_SUSPEND (1 << 7) /* suspend port */ 54 - #define PORT_RESUME (1 << 6) /* resume it */ 55 - #define PORT_PE (1 << 2) /* port enable */ 56 - #define PORT_CSC (1 << 1) /* connect status change */ 57 - #define PORT_CONNECT (1 << 0) /* device connected */ 58 - #define PORT_RWC_BITS (PORT_CSC) 59 - 60 - #define HC_ISO_PTD_DONEMAP_REG 0x130 61 - #define HC_ISO_PTD_SKIPMAP_REG 0x134 62 - #define HC_ISO_PTD_LASTPTD_REG 0x138 63 - #define HC_INT_PTD_DONEMAP_REG 0x140 64 - #define HC_INT_PTD_SKIPMAP_REG 0x144 65 - #define HC_INT_PTD_LASTPTD_REG 0x148 66 - #define HC_ATL_PTD_DONEMAP_REG 0x150 67 - #define HC_ATL_PTD_SKIPMAP_REG 0x154 68 - #define HC_ATL_PTD_LASTPTD_REG 0x158 36 + #define ISP176x_HC_ISO_PTD_DONEMAP 0x130 37 + #define ISP176x_HC_ISO_PTD_SKIPMAP 0x134 38 + #define ISP176x_HC_ISO_PTD_LASTPTD 0x138 39 + #define ISP176x_HC_INT_PTD_DONEMAP 0x140 40 + #define ISP176x_HC_INT_PTD_SKIPMAP 0x144 41 + #define ISP176x_HC_INT_PTD_LASTPTD 0x148 42 + #define ISP176x_HC_ATL_PTD_DONEMAP 0x150 43 + #define ISP176x_HC_ATL_PTD_SKIPMAP 0x154 44 + #define ISP176x_HC_ATL_PTD_LASTPTD 0x158 69 45 70 46 /* Configuration Register */ 71 - #define HC_HW_MODE_CTRL 0x300 72 - #define ALL_ATX_RESET (1 << 31) 73 - #define HW_ANA_DIGI_OC (1 << 15) 74 - #define HW_DEV_DMA (1 << 11) 75 - #define HW_COMN_IRQ (1 << 10) 76 - #define HW_COMN_DMA (1 << 9) 77 - #define HW_DATA_BUS_32BIT (1 << 8) 78 - #define HW_DACK_POL_HIGH (1 << 6) 79 - #define HW_DREQ_POL_HIGH (1 << 5) 80 - #define HW_INTR_HIGH_ACT (1 << 2) 81 - #define HW_INTR_EDGE_TRIG (1 << 1) 82 - #define HW_GLOBAL_INTR_EN (1 << 
0)
-
-#define HC_CHIP_ID_REG        0x304
-#define HC_SCRATCH_REG        0x308
-
-#define HC_RESET_REG          0x30c
-#define SW_RESET_RESET_HC     (1 << 1)
-#define SW_RESET_RESET_ALL    (1 << 0)
-
-#define HC_BUFFER_STATUS_REG  0x334
-#define ISO_BUF_FILL          (1 << 2)
-#define INT_BUF_FILL          (1 << 1)
-#define ATL_BUF_FILL          (1 << 0)
-
-#define HC_MEMORY_REG         0x33c
-#define ISP_BANK(x)           ((x) << 16)
-
-#define HC_PORT1_CTRL         0x374
-#define PORT1_POWER           (3 << 3)
-#define PORT1_INIT1           (1 << 7)
-#define PORT1_INIT2           (1 << 23)
-#define HW_OTG_CTRL_SET       0x374
-#define HW_OTG_CTRL_CLR       0x376
-#define HW_OTG_DISABLE        (1 << 10)
-#define HW_OTG_SE0_EN         (1 << 9)
-#define HW_BDIS_ACON_EN       (1 << 8)
-#define HW_SW_SEL_HC_DC       (1 << 7)
-#define HW_VBUS_CHRG          (1 << 6)
-#define HW_VBUS_DISCHRG       (1 << 5)
-#define HW_VBUS_DRV           (1 << 4)
-#define HW_SEL_CP_EXT         (1 << 3)
-#define HW_DM_PULLDOWN        (1 << 2)
-#define HW_DP_PULLDOWN        (1 << 1)
-#define HW_DP_PULLUP          (1 << 0)
+#define ISP176x_HC_HW_MODE_CTRL  0x300
+#define ISP176x_HC_CHIP_ID       0x304
+#define ISP176x_HC_SCRATCH       0x308
+#define ISP176x_HC_RESET         0x30c
+#define ISP176x_HC_BUFFER_STATUS 0x334
+#define ISP176x_HC_MEMORY        0x33c
 
 /* Interrupt Register */
-#define HC_INTERRUPT_REG      0x310
+#define ISP176x_HC_INTERRUPT        0x310
+#define ISP176x_HC_INTERRUPT_ENABLE 0x314
+#define ISP176x_HC_ISO_IRQ_MASK_OR  0x318
+#define ISP176x_HC_INT_IRQ_MASK_OR  0x31c
+#define ISP176x_HC_ATL_IRQ_MASK_OR  0x320
+#define ISP176x_HC_ISO_IRQ_MASK_AND 0x324
+#define ISP176x_HC_INT_IRQ_MASK_AND 0x328
+#define ISP176x_HC_ATL_IRQ_MASK_AND 0x32c
 
-#define HC_INTERRUPT_ENABLE   0x314
-#define HC_ISO_INT            (1 << 9)
-#define HC_ATL_INT            (1 << 8)
-#define HC_INTL_INT           (1 << 7)
-#define HC_EOT_INT            (1 << 3)
-#define HC_SOT_INT            (1 << 1)
-#define INTERRUPT_ENABLE_MASK (HC_INTL_INT | HC_ATL_INT)
+#define ISP176x_HC_OTG_CTRL_SET   0x374
+#define ISP176x_HC_OTG_CTRL_CLEAR 0x376
 
-#define HC_ISO_IRQ_MASK_OR_REG  0x318
-#define HC_INT_IRQ_MASK_OR_REG  0x31c
-#define HC_ATL_IRQ_MASK_OR_REG  0x320
-#define HC_ISO_IRQ_MASK_AND_REG 0x324
-#define HC_INT_IRQ_MASK_AND_REG 0x328
-#define HC_ATL_IRQ_MASK_AND_REG 0x32c
+enum isp176x_host_controller_fields {
+	/* HC_PORTSC1 */
+	PORT_OWNER, PORT_POWER, PORT_LSTATUS, PORT_RESET, PORT_SUSPEND,
+	PORT_RESUME, PORT_PE, PORT_CSC, PORT_CONNECT,
+	/* HC_HCSPARAMS */
+	HCS_PPC, HCS_N_PORTS,
+	/* HC_HCCPARAMS */
+	HCC_ISOC_CACHE, HCC_ISOC_THRES,
+	/* HC_USBCMD */
+	CMD_LRESET, CMD_RESET, CMD_RUN,
+	/* HC_USBSTS */
+	STS_PCD,
+	/* HC_FRINDEX */
+	HC_FRINDEX,
+	/* HC_CONFIGFLAG */
+	FLAG_CF,
+	/* ISO/INT/ATL PTD */
+	HC_ISO_PTD_DONEMAP, HC_ISO_PTD_SKIPMAP, HC_ISO_PTD_LASTPTD,
+	HC_INT_PTD_DONEMAP, HC_INT_PTD_SKIPMAP, HC_INT_PTD_LASTPTD,
+	HC_ATL_PTD_DONEMAP, HC_ATL_PTD_SKIPMAP, HC_ATL_PTD_LASTPTD,
+	/* HC_HW_MODE_CTRL */
+	ALL_ATX_RESET, HW_ANA_DIGI_OC, HW_DEV_DMA, HW_COMN_IRQ, HW_COMN_DMA,
+	HW_DATA_BUS_WIDTH, HW_DACK_POL_HIGH, HW_DREQ_POL_HIGH, HW_INTR_HIGH_ACT,
+	HW_INTF_LOCK, HW_INTR_EDGE_TRIG, HW_GLOBAL_INTR_EN,
+	/* HC_CHIP_ID */
+	HC_CHIP_ID_HIGH, HC_CHIP_ID_LOW, HC_CHIP_REV,
+	/* HC_SCRATCH */
+	HC_SCRATCH,
+	/* HC_RESET */
+	SW_RESET_RESET_ATX, SW_RESET_RESET_HC, SW_RESET_RESET_ALL,
+	/* HC_BUFFER_STATUS */
+	ISO_BUF_FILL, INT_BUF_FILL, ATL_BUF_FILL,
+	/* HC_MEMORY */
+	MEM_BANK_SEL, MEM_START_ADDR,
+	/* HC_DATA */
+	HC_DATA,
+	/* HC_INTERRUPT */
+	HC_INTERRUPT,
+	/* HC_INTERRUPT_ENABLE */
+	HC_INT_IRQ_ENABLE, HC_ATL_IRQ_ENABLE,
+	/* INTERRUPT MASKS */
+	HC_ISO_IRQ_MASK_OR, HC_INT_IRQ_MASK_OR, HC_ATL_IRQ_MASK_OR,
+	HC_ISO_IRQ_MASK_AND, HC_INT_IRQ_MASK_AND, HC_ATL_IRQ_MASK_AND,
+	/* HW_OTG_CTRL_SET */
+	HW_OTG_DISABLE, HW_SW_SEL_HC_DC, HW_VBUS_DRV, HW_SEL_CP_EXT,
+	HW_DM_PULLDOWN, HW_DP_PULLDOWN, HW_DP_PULLUP, HW_HC_2_DIS,
+	/* HW_OTG_CTRL_CLR */
+	HW_OTG_DISABLE_CLEAR, HW_SW_SEL_HC_DC_CLEAR, HW_VBUS_DRV_CLEAR,
+	HW_SEL_CP_EXT_CLEAR, HW_DM_PULLDOWN_CLEAR, HW_DP_PULLDOWN_CLEAR,
+	HW_DP_PULLUP_CLEAR, HW_HC_2_DIS_CLEAR,
+	/* Last element */
+	HC_FIELD_MAX,
+};
+
+/* ISP1763 */
+/* EHCI operational registers */
+#define ISP1763_HC_USBCMD     0x8c
+#define ISP1763_HC_USBSTS     0x90
+#define ISP1763_HC_FRINDEX    0x98
+
+#define ISP1763_HC_CONFIGFLAG 0x9c
+#define ISP1763_HC_PORTSC1    0xa0
+
+#define ISP1763_HC_ISO_PTD_DONEMAP 0xa4
+#define ISP1763_HC_ISO_PTD_SKIPMAP 0xa6
+#define ISP1763_HC_ISO_PTD_LASTPTD 0xa8
+#define ISP1763_HC_INT_PTD_DONEMAP 0xaa
+#define ISP1763_HC_INT_PTD_SKIPMAP 0xac
+#define ISP1763_HC_INT_PTD_LASTPTD 0xae
+#define ISP1763_HC_ATL_PTD_DONEMAP 0xb0
+#define ISP1763_HC_ATL_PTD_SKIPMAP 0xb2
+#define ISP1763_HC_ATL_PTD_LASTPTD 0xb4
+
+/* Configuration Register */
+#define ISP1763_HC_HW_MODE_CTRL  0xb6
+#define ISP1763_HC_CHIP_REV      0x70
+#define ISP1763_HC_CHIP_ID       0x72
+#define ISP1763_HC_SCRATCH       0x78
+#define ISP1763_HC_RESET         0xb8
+#define ISP1763_HC_BUFFER_STATUS 0xba
+#define ISP1763_HC_MEMORY        0xc4
+#define ISP1763_HC_DATA          0xc6
+
+/* Interrupt Register */
+#define ISP1763_HC_INTERRUPT        0xd4
+#define ISP1763_HC_INTERRUPT_ENABLE 0xd6
+#define ISP1763_HC_ISO_IRQ_MASK_OR  0xd8
+#define ISP1763_HC_INT_IRQ_MASK_OR  0xda
+#define ISP1763_HC_ATL_IRQ_MASK_OR  0xdc
+#define ISP1763_HC_ISO_IRQ_MASK_AND 0xde
+#define ISP1763_HC_INT_IRQ_MASK_AND 0xe0
+#define ISP1763_HC_ATL_IRQ_MASK_AND 0xe2
+
+#define ISP1763_HC_OTG_CTRL_SET   0xe4
+#define ISP1763_HC_OTG_CTRL_CLEAR 0xe6
 
 /* -----------------------------------------------------------------------------
  * Peripheral Controller
  */
 
-/* Initialization Registers */
-#define DC_ADDRESS      0x0200
-#define DC_DEVEN        (1 << 7)
-
-#define DC_MODE         0x020c
-#define DC_DMACLKON     (1 << 9)
-#define DC_VBUSSTAT     (1 << 8)
-#define DC_CLKAON       (1 << 7)
-#define DC_SNDRSU       (1 << 6)
-#define DC_GOSUSP       (1 << 5)
-#define DC_SFRESET      (1 << 4)
-#define DC_GLINTENA     (1 << 3)
-#define DC_WKUPCS       (1 << 2)
-
-#define DC_INTCONF      0x0210
-#define DC_CDBGMOD_ACK_NAK          (0 << 6)
-#define DC_CDBGMOD_ACK              (1 << 6)
-#define DC_CDBGMOD_ACK_1NAK         (2 << 6)
-#define DC_DDBGMODIN_ACK_NAK        (0 << 4)
-#define DC_DDBGMODIN_ACK            (1 << 4)
-#define DC_DDBGMODIN_ACK_1NAK       (2 << 4)
-#define DC_DDBGMODOUT_ACK_NYET_NAK  (0 << 2)
-#define DC_DDBGMODOUT_ACK_NYET      (1 << 2)
-#define DC_DDBGMODOUT_ACK_NYET_1NAK (2 << 2)
-#define DC_INTLVL       (1 << 1)
-#define DC_INTPOL       (1 << 0)
-
-#define DC_DEBUG        0x0212
-#define DC_INTENABLE    0x0214
 #define DC_IEPTX(n)     (1 << (11 + 2 * (n)))
 #define DC_IEPRX(n)     (1 << (10 + 2 * (n)))
 #define DC_IEPRXTX(n)   (3 << (10 + 2 * (n)))
-#define DC_IEP0SETUP    (1 << 8)
-#define DC_IEVBUS       (1 << 7)
-#define DC_IEDMA        (1 << 6)
-#define DC_IEHS_STA     (1 << 5)
-#define DC_IERESM       (1 << 4)
-#define DC_IESUSP       (1 << 3)
-#define DC_IEPSOF       (1 << 2)
-#define DC_IESOF        (1 << 1)
-#define DC_IEBRST       (1 << 0)
+
+#define ISP176x_DC_CDBGMOD_ACK    BIT(6)
+#define ISP176x_DC_DDBGMODIN_ACK  BIT(4)
+#define ISP176x_DC_DDBGMODOUT_ACK BIT(2)
+
+#define ISP176x_DC_IEP0SETUP BIT(8)
+#define ISP176x_DC_IEVBUS    BIT(7)
+#define ISP176x_DC_IEHS_STA  BIT(5)
+#define ISP176x_DC_IERESM    BIT(4)
+#define ISP176x_DC_IESUSP    BIT(3)
+#define ISP176x_DC_IEBRST    BIT(0)
+
+#define ISP176x_DC_ENDPTYP_ISOC      0x01
+#define ISP176x_DC_ENDPTYP_BULK      0x02
+#define ISP176x_DC_ENDPTYP_INTERRUPT 0x03
+
+/* Initialization Registers */
+#define ISP176x_DC_ADDRESS   0x0200
+#define ISP176x_DC_MODE      0x020c
+#define ISP176x_DC_INTCONF   0x0210
+#define ISP176x_DC_DEBUG     0x0212
+#define ISP176x_DC_INTENABLE 0x0214
 
 /* Data Flow Registers */
-#define DC_EPINDEX      0x022c
-#define DC_EP0SETUP     (1 << 5)
-#define DC_ENDPIDX(n)   ((n) << 1)
-#define DC_EPDIR        (1 << 0)
+#define ISP176x_DC_EPMAXPKTSZ 0x0204
+#define ISP176x_DC_EPTYPE     0x0208
 
-#define DC_CTRLFUNC     0x0228
-#define DC_CLBUF        (1 << 4)
-#define DC_VENDP        (1 << 3)
-#define DC_DSEN         (1 << 2)
-#define DC_STATUS       (1 << 1)
-#define DC_STALL        (1 << 0)
+#define ISP176x_DC_BUFLEN   0x021c
+#define ISP176x_DC_BUFSTAT  0x021e
+#define ISP176x_DC_DATAPORT 0x0220
 
-#define DC_DATAPORT     0x0220
-#define DC_BUFLEN       0x021c
-#define DC_DATACOUNT_MASK 0xffff
-#define DC_BUFSTAT      0x021e
-#define DC_EPMAXPKTSZ   0x0204
-
-#define DC_EPTYPE       0x0208
-#define DC_NOEMPKT      (1 << 4)
-#define DC_EPENABLE     (1 << 3)
-#define DC_DBLBUF       (1 << 2)
-#define DC_ENDPTYP_ISOC      (1 << 0)
-#define DC_ENDPTYP_BULK      (2 << 0)
-#define DC_ENDPTYP_INTERRUPT (3 << 0)
+#define ISP176x_DC_CTRLFUNC 0x0228
+#define ISP176x_DC_EPINDEX  0x022c
 
 /* DMA Registers */
-#define DC_DMACMD         0x0230
-#define DC_DMATXCOUNT     0x0234
-#define DC_DMACONF        0x0238
-#define DC_DMAHW          0x023c
-#define DC_DMAINTREASON   0x0250
-#define DC_DMAINTEN       0x0254
-#define DC_DMAEP          0x0258
-#define DC_DMABURSTCOUNT  0x0264
+#define ISP176x_DC_DMACMD        0x0230
+#define ISP176x_DC_DMATXCOUNT    0x0234
+#define ISP176x_DC_DMACONF       0x0238
+#define ISP176x_DC_DMAHW         0x023c
+#define ISP176x_DC_DMAINTREASON  0x0250
+#define ISP176x_DC_DMAINTEN      0x0254
+#define ISP176x_DC_DMAEP         0x0258
+#define ISP176x_DC_DMABURSTCOUNT 0x0264
 
 /* General Registers */
-#define DC_INTERRUPT      0x0218
-#define DC_CHIPID         0x0270
-#define DC_FRAMENUM       0x0274
-#define DC_SCRATCH        0x0278
-#define DC_UNLOCKDEV      0x027c
-#define DC_INTPULSEWIDTH  0x0280
-#define DC_TESTMODE       0x0284
+#define ISP176x_DC_INTERRUPT     0x0218
+#define ISP176x_DC_CHIPID        0x0270
+#define ISP176x_DC_FRAMENUM      0x0274
+#define ISP176x_DC_SCRATCH       0x0278
+#define ISP176x_DC_UNLOCKDEV     0x027c
+#define ISP176x_DC_INTPULSEWIDTH 0x0280
+#define ISP176x_DC_TESTMODE      0x0284
+
+enum isp176x_device_controller_fields {
+	/* DC_ADDRESS */
+	DC_DEVEN, DC_DEVADDR,
+	/* DC_MODE */
+	DC_VBUSSTAT, DC_SFRESET, DC_GLINTENA,
+	/* DC_INTCONF */
+	DC_CDBGMOD_ACK, DC_DDBGMODIN_ACK, DC_DDBGMODOUT_ACK, DC_INTPOL,
+	/* DC_INTENABLE */
+	DC_IEPRXTX_7, DC_IEPRXTX_6, DC_IEPRXTX_5, DC_IEPRXTX_4, DC_IEPRXTX_3,
+	DC_IEPRXTX_2, DC_IEPRXTX_1, DC_IEPRXTX_0,
+	DC_IEP0SETUP, DC_IEVBUS, DC_IEHS_STA, DC_IERESM, DC_IESUSP, DC_IEBRST,
+	/* DC_EPINDEX */
+	DC_EP0SETUP, DC_ENDPIDX, DC_EPDIR,
+	/* DC_CTRLFUNC */
+	DC_CLBUF, DC_VENDP, DC_DSEN, DC_STATUS, DC_STALL,
+	/* DC_BUFLEN */
+	DC_BUFLEN,
+	/* DC_EPMAXPKTSZ */
+	DC_FFOSZ,
+	/* DC_EPTYPE */
+	DC_EPENABLE, DC_ENDPTYP,
+	/* DC_FRAMENUM */
+	DC_FRAMENUM, DC_UFRAMENUM,
+	/* DC_CHIP_ID */
+	DC_CHIP_ID_HIGH, DC_CHIP_ID_LOW,
+	/* DC_SCRATCH */
+	DC_SCRATCH,
+	/* Last element */
+	DC_FIELD_MAX,
+};
+
+/* ISP1763 */
+/* Initialization Registers */
+#define ISP1763_DC_ADDRESS   0x00
+#define ISP1763_DC_MODE      0x0c
+#define ISP1763_DC_INTCONF   0x10
+#define ISP1763_DC_INTENABLE 0x14
+
+/* Data Flow Registers */
+#define ISP1763_DC_EPMAXPKTSZ 0x04
+#define ISP1763_DC_EPTYPE     0x08
+
+#define ISP1763_DC_BUFLEN   0x1c
+#define ISP1763_DC_BUFSTAT  0x1e
+#define ISP1763_DC_DATAPORT 0x20
+
+#define ISP1763_DC_CTRLFUNC 0x28
+#define ISP1763_DC_EPINDEX  0x2c
+
+/* DMA Registers */
+#define ISP1763_DC_DMACMD        0x30
+#define ISP1763_DC_DMATXCOUNT    0x34
+#define ISP1763_DC_DMACONF       0x38
+#define ISP1763_DC_DMAHW         0x3c
+#define ISP1763_DC_DMAINTREASON  0x50
+#define ISP1763_DC_DMAINTEN      0x54
+#define ISP1763_DC_DMAEP         0x58
+#define ISP1763_DC_DMABURSTCOUNT 0x64
+
+/* General Registers */
+#define ISP1763_DC_INTERRUPT     0x18
+#define ISP1763_DC_CHIPID_LOW    0x70
+#define ISP1763_DC_CHIPID_HIGH   0x72
+#define ISP1763_DC_FRAMENUM      0x74
+#define ISP1763_DC_SCRATCH       0x78
+#define ISP1763_DC_UNLOCKDEV     0x7c
+#define ISP1763_DC_INTPULSEWIDTH 0x80
+#define ISP1763_DC_TESTMODE      0x84
 
 #endif
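The register header above stops encoding bit positions in `#define`s and instead enumerates named fields (`HC_FIELD_MAX`, `DC_FIELD_MAX`) that the driver later binds to `regmap_field`s. The mechanics behind that API, a field described by a register offset plus an lsb..msb bit span, read and written by masking and shifting, can be sketched in plain userspace C. The struct and helper names here are illustrative stand-ins, not the kernel's implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for a regmap_field: register offset plus bit span. */
struct reg_field_sketch {
	uint16_t reg;	/* byte offset into the register file */
	uint8_t lsb;	/* least significant bit of the field */
	uint8_t msb;	/* most significant bit of the field */
};

static uint32_t regfile[0x100];	/* fake 32-bit register file, word-indexed */

static uint32_t field_mask(const struct reg_field_sketch *f)
{
	return (uint32_t)(((1ULL << (f->msb - f->lsb + 1)) - 1) << f->lsb);
}

static uint32_t field_read(const struct reg_field_sketch *f)
{
	return (regfile[f->reg >> 2] & field_mask(f)) >> f->lsb;
}

static void field_write(const struct reg_field_sketch *f, uint32_t val)
{
	uint32_t reg = regfile[f->reg >> 2] & ~field_mask(f);

	/* only the bits belonging to this field are touched */
	regfile[f->reg >> 2] = reg | ((val << f->lsb) & field_mask(f));
}
```

Writing all-ones to a single-bit field sets it and writing zero clears it, which is exactly why the UDC patch below can implement `isp1760_udc_set()`/`isp1760_udc_clear()` as field writes of `0xFFFFFFFF` and `0`.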
+172 -77
drivers/usb/isp1760/isp1760-udc.c
···
 /*
  * Driver for the NXP ISP1761 device controller
  *
+ * Copyright 2021 Linaro, Rui Miguel Silva
  * Copyright 2014 Ideas on Board Oy
  *
  * Contacts:
  *	Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *	Rui Miguel Silva <rui.silva@linaro.org>
  */
 
 #include <linux/interrupt.h>
···
 	return container_of(req, struct isp1760_request, req);
 }
 
-static inline u32 isp1760_udc_read(struct isp1760_udc *udc, u16 reg)
+static u32 isp1760_udc_read(struct isp1760_udc *udc, u16 field)
 {
-	return isp1760_read32(udc->regs, reg);
+	return isp1760_field_read(udc->fields, field);
 }
 
-static inline void isp1760_udc_write(struct isp1760_udc *udc, u16 reg, u32 val)
+static void isp1760_udc_write(struct isp1760_udc *udc, u16 field, u32 val)
 {
-	isp1760_write32(udc->regs, reg, val);
+	isp1760_field_write(udc->fields, field, val);
 }
 
+static u32 isp1760_udc_read_raw(struct isp1760_udc *udc, u16 reg)
+{
+	__le32 val;
+
+	regmap_raw_read(udc->regs, reg, &val, 4);
+
+	return le32_to_cpu(val);
+}
+
+static u16 isp1760_udc_read_raw16(struct isp1760_udc *udc, u16 reg)
+{
+	__le16 val;
+
+	regmap_raw_read(udc->regs, reg, &val, 2);
+
+	return le16_to_cpu(val);
+}
+
+static void isp1760_udc_write_raw(struct isp1760_udc *udc, u16 reg, u32 val)
+{
+	__le32 val_le = cpu_to_le32(val);
+
+	regmap_raw_write(udc->regs, reg, &val_le, 4);
+}
+
+static void isp1760_udc_write_raw16(struct isp1760_udc *udc, u16 reg, u16 val)
+{
+	__le16 val_le = cpu_to_le16(val);
+
+	regmap_raw_write(udc->regs, reg, &val_le, 2);
+}
+
+static void isp1760_udc_set(struct isp1760_udc *udc, u32 field)
+{
+	isp1760_udc_write(udc, field, 0xFFFFFFFF);
+}
+
+static void isp1760_udc_clear(struct isp1760_udc *udc, u32 field)
+{
+	isp1760_udc_write(udc, field, 0);
+}
+
+static bool isp1760_udc_is_set(struct isp1760_udc *udc, u32 field)
+{
+	return !!isp1760_udc_read(udc, field);
+}
 /* -----------------------------------------------------------------------------
  * Endpoint Management
  */
···
 	return NULL;
 }
 
-static void __isp1760_udc_select_ep(struct isp1760_ep *ep, int dir)
+static void __isp1760_udc_select_ep(struct isp1760_udc *udc,
+				    struct isp1760_ep *ep, int dir)
 {
-	isp1760_udc_write(ep->udc, DC_EPINDEX,
-			  DC_ENDPIDX(ep->addr & USB_ENDPOINT_NUMBER_MASK) |
-			  (dir == USB_DIR_IN ? DC_EPDIR : 0));
+	isp1760_udc_write(udc, DC_ENDPIDX, ep->addr & USB_ENDPOINT_NUMBER_MASK);
+
+	if (dir == USB_DIR_IN)
+		isp1760_udc_set(udc, DC_EPDIR);
+	else
+		isp1760_udc_clear(udc, DC_EPDIR);
 }
 
 /**
  * isp1760_udc_select_ep - Select an endpoint for register access
  * @ep: The endpoint
+ * @udc: Reference to the device controller
  *
  * The ISP1761 endpoint registers are banked. This function selects the target
  * endpoint for banked register access. The selection remains valid until the
···
  *
  * Called with the UDC spinlock held.
  */
-static void isp1760_udc_select_ep(struct isp1760_ep *ep)
+static void isp1760_udc_select_ep(struct isp1760_udc *udc,
+				  struct isp1760_ep *ep)
 {
-	__isp1760_udc_select_ep(ep, ep->addr & USB_ENDPOINT_DIR_MASK);
+	__isp1760_udc_select_ep(udc, ep, ep->addr & USB_ENDPOINT_DIR_MASK);
 }
 
 /* Called with the UDC spinlock held. */
···
 	 * the direction opposite to the data stage data packets, we thus need
 	 * to select the OUT/IN endpoint for IN/OUT transfers.
 	 */
-	isp1760_udc_write(udc, DC_EPINDEX, DC_ENDPIDX(0) |
-			  (dir == USB_DIR_IN ? 0 : DC_EPDIR));
-	isp1760_udc_write(udc, DC_CTRLFUNC, DC_STATUS);
+	if (dir == USB_DIR_IN)
+		isp1760_udc_clear(udc, DC_EPDIR);
+	else
+		isp1760_udc_set(udc, DC_EPDIR);
+
+	isp1760_udc_write(udc, DC_ENDPIDX, 1);
+	isp1760_udc_set(udc, DC_STATUS);
 
 	/*
 	 * The hardware will terminate the request automatically and go back to
···
 	spin_lock_irqsave(&udc->lock, flags);
 
 	/* Stall both the IN and OUT endpoints. */
-	__isp1760_udc_select_ep(ep, USB_DIR_OUT);
-	isp1760_udc_write(udc, DC_CTRLFUNC, DC_STALL);
-	__isp1760_udc_select_ep(ep, USB_DIR_IN);
-	isp1760_udc_write(udc, DC_CTRLFUNC, DC_STALL);
+	__isp1760_udc_select_ep(udc, ep, USB_DIR_OUT);
+	isp1760_udc_set(udc, DC_STALL);
+	__isp1760_udc_select_ep(udc, ep, USB_DIR_IN);
+	isp1760_udc_set(udc, DC_STALL);
 
 	/* A protocol stall completes the control transaction. */
 	udc->ep0_state = ISP1760_CTRL_SETUP;
···
 	u32 *buf;
 	int i;
 
-	isp1760_udc_select_ep(ep);
-	len = isp1760_udc_read(udc, DC_BUFLEN) & DC_DATACOUNT_MASK;
+	isp1760_udc_select_ep(udc, ep);
+	len = isp1760_udc_read(udc, DC_BUFLEN);
 
 	dev_dbg(udc->isp->dev, "%s: received %u bytes (%u/%u done)\n",
 		__func__, len, req->req.actual, req->req.length);
···
 		 * datasheet doesn't clearly document how this should be
 		 * handled.
 		 */
-		isp1760_udc_write(udc, DC_CTRLFUNC, DC_CLBUF);
+		isp1760_udc_set(udc, DC_CLBUF);
 		return false;
 	}
 
···
 	 * the next packet might be removed from the FIFO.
 	 */
 	for (i = len; i > 2; i -= 4, ++buf)
-		*buf = le32_to_cpu(isp1760_udc_read(udc, DC_DATAPORT));
+		*buf = isp1760_udc_read_raw(udc, ISP176x_DC_DATAPORT);
 	if (i > 0)
-		*(u16 *)buf = le16_to_cpu(readw(udc->regs + DC_DATAPORT));
+		*(u16 *)buf = isp1760_udc_read_raw16(udc, ISP176x_DC_DATAPORT);
 
 	req->req.actual += len;
 
···
 		__func__, req->packet_size, req->req.actual,
 		req->req.length);
 
-	__isp1760_udc_select_ep(ep, USB_DIR_IN);
+	__isp1760_udc_select_ep(udc, ep, USB_DIR_IN);
 
 	if (req->packet_size)
 		isp1760_udc_write(udc, DC_BUFLEN, req->packet_size);
···
 	 * the FIFO for this kind of conditions, but doesn't seem to work.
 	 */
 	for (i = req->packet_size; i > 2; i -= 4, ++buf)
-		isp1760_udc_write(udc, DC_DATAPORT, cpu_to_le32(*buf));
+		isp1760_udc_write_raw(udc, ISP176x_DC_DATAPORT, *buf);
 	if (i > 0)
-		writew(cpu_to_le16(*(u16 *)buf), udc->regs + DC_DATAPORT);
+		isp1760_udc_write_raw16(udc, ISP176x_DC_DATAPORT, *(u16 *)buf);
 
 	if (ep->addr == 0)
-		isp1760_udc_write(udc, DC_CTRLFUNC, DC_DSEN);
+		isp1760_udc_set(udc, DC_DSEN);
 	if (!req->packet_size)
-		isp1760_udc_write(udc, DC_CTRLFUNC, DC_VENDP);
+		isp1760_udc_set(udc, DC_VENDP);
 }
 
 static void isp1760_ep_rx_ready(struct isp1760_ep *ep)
···
 		return -EINVAL;
 	}
 
-	isp1760_udc_select_ep(ep);
-	isp1760_udc_write(udc, DC_CTRLFUNC, halt ? DC_STALL : 0);
+	isp1760_udc_select_ep(udc, ep);
+
+	if (halt)
+		isp1760_udc_set(udc, DC_STALL);
+	else
+		isp1760_udc_clear(udc, DC_STALL);
 
 	if (ep->addr == 0) {
 		/* When halting the control endpoint, stall both IN and OUT. */
-		__isp1760_udc_select_ep(ep, USB_DIR_IN);
-		isp1760_udc_write(udc, DC_CTRLFUNC, halt ? DC_STALL : 0);
+		__isp1760_udc_select_ep(udc, ep, USB_DIR_IN);
+		if (halt)
+			isp1760_udc_set(udc, DC_STALL);
+		else
+			isp1760_udc_clear(udc, DC_STALL);
 	} else if (!halt) {
 		/* Reset the data PID by cycling the endpoint enable bit. */
-		u16 eptype = isp1760_udc_read(udc, DC_EPTYPE);
-
-		isp1760_udc_write(udc, DC_EPTYPE, eptype & ~DC_EPENABLE);
-		isp1760_udc_write(udc, DC_EPTYPE, eptype);
+		isp1760_udc_clear(udc, DC_EPENABLE);
+		isp1760_udc_set(udc, DC_EPENABLE);
 
 		/*
 		 * Disabling the endpoint emptied the transmit FIFO, fill it
···
 		return -EINVAL;
 	}
 
-	isp1760_udc_write(udc, DC_EPINDEX, DC_ENDPIDX(0) | DC_EPDIR);
+	isp1760_udc_set(udc, DC_EPDIR);
+	isp1760_udc_write(udc, DC_ENDPIDX, 1);
+
 	isp1760_udc_write(udc, DC_BUFLEN, 2);
 
-	writew(cpu_to_le16(status), udc->regs + DC_DATAPORT);
+	isp1760_udc_write_raw16(udc, ISP176x_DC_DATAPORT, status);
 
-	isp1760_udc_write(udc, DC_CTRLFUNC, DC_DSEN);
+	isp1760_udc_set(udc, DC_DSEN);
 
 	dev_dbg(udc->isp->dev, "%s: status 0x%04x\n", __func__, status);
 
···
 	usb_gadget_set_state(&udc->gadget, addr ? USB_STATE_ADDRESS :
 			     USB_STATE_DEFAULT);
 
-	isp1760_udc_write(udc, DC_ADDRESS, DC_DEVEN | addr);
+	isp1760_udc_write(udc, DC_DEVADDR, addr);
+	isp1760_udc_set(udc, DC_DEVEN);
 
 	spin_lock(&udc->lock);
 	isp1760_udc_ctrl_send_status(&udc->ep[0], USB_DIR_OUT);
···
 
 	spin_lock(&udc->lock);
 
-	isp1760_udc_write(udc, DC_EPINDEX, DC_EP0SETUP);
+	isp1760_udc_set(udc, DC_EP0SETUP);
 
-	count = isp1760_udc_read(udc, DC_BUFLEN) & DC_DATACOUNT_MASK;
+	count = isp1760_udc_read(udc, DC_BUFLEN);
 	if (count != sizeof(req)) {
 		spin_unlock(&udc->lock);
 
···
 		return;
 	}
 
-	req.data[0] = isp1760_udc_read(udc, DC_DATAPORT);
-	req.data[1] = isp1760_udc_read(udc, DC_DATAPORT);
+	req.data[0] = isp1760_udc_read_raw(udc, ISP176x_DC_DATAPORT);
+	req.data[1] = isp1760_udc_read_raw(udc, ISP176x_DC_DATAPORT);
 
 	if (udc->ep0_state != ISP1760_CTRL_SETUP) {
 		spin_unlock(&udc->lock);
···
 
 	switch (usb_endpoint_type(desc)) {
 	case USB_ENDPOINT_XFER_ISOC:
-		type = DC_ENDPTYP_ISOC;
+		type = ISP176x_DC_ENDPTYP_ISOC;
 		break;
 	case USB_ENDPOINT_XFER_BULK:
-		type = DC_ENDPTYP_BULK;
+		type = ISP176x_DC_ENDPTYP_BULK;
 		break;
 	case USB_ENDPOINT_XFER_INT:
-		type = DC_ENDPTYP_INTERRUPT;
+		type = ISP176x_DC_ENDPTYP_INTERRUPT;
 		break;
 	case USB_ENDPOINT_XFER_CONTROL:
 	default:
···
 	uep->halted = false;
 	uep->wedged = false;
 
-	isp1760_udc_select_ep(uep);
-	isp1760_udc_write(udc, DC_EPMAXPKTSZ, uep->maxpacket);
+	isp1760_udc_select_ep(udc, uep);
+
+	isp1760_udc_write(udc, DC_FFOSZ, uep->maxpacket);
 	isp1760_udc_write(udc, DC_BUFLEN, uep->maxpacket);
-	isp1760_udc_write(udc, DC_EPTYPE, DC_EPENABLE | type);
+
+	isp1760_udc_write(udc, DC_ENDPTYP, type);
+	isp1760_udc_set(udc, DC_EPENABLE);
 
 	spin_unlock_irqrestore(&udc->lock, flags);
 
···
 	uep->desc = NULL;
 	uep->maxpacket = 0;
 
-	isp1760_udc_select_ep(uep);
-	isp1760_udc_write(udc, DC_EPTYPE, 0);
+	isp1760_udc_select_ep(udc, uep);
+	isp1760_udc_clear(udc, DC_EPENABLE);
+	isp1760_udc_clear(udc, DC_ENDPTYP);
 
 	/* TODO Synchronize with the IRQ handler */
 
···
 
 	case ISP1760_CTRL_DATA_OUT:
 		list_add_tail(&req->queue, &uep->queue);
-		__isp1760_udc_select_ep(uep, USB_DIR_OUT);
-		isp1760_udc_write(udc, DC_CTRLFUNC, DC_DSEN);
+		__isp1760_udc_select_ep(udc, uep, USB_DIR_OUT);
+		isp1760_udc_set(udc, DC_DSEN);
 		break;
 
 	case ISP1760_CTRL_STATUS:
···
 
 	spin_lock_irqsave(&udc->lock, flags);
 
-	isp1760_udc_select_ep(uep);
+	isp1760_udc_select_ep(udc, uep);
 
 	/*
 	 * Set the CLBUF bit twice to flush both buffers in case double
 	 * buffering is enabled.
 	 */
-	isp1760_udc_write(udc, DC_CTRLFUNC, DC_CLBUF);
-	isp1760_udc_write(udc, DC_CTRLFUNC, DC_CLBUF);
+	isp1760_udc_set(udc, DC_CLBUF);
+	isp1760_udc_set(udc, DC_CLBUF);
 
 	spin_unlock_irqrestore(&udc->lock, flags);
 }
···
 
 static void isp1760_udc_init_hw(struct isp1760_udc *udc)
 {
+	u32 intconf = udc->is_isp1763 ? ISP1763_DC_INTCONF : ISP176x_DC_INTCONF;
+	u32 intena = udc->is_isp1763 ? ISP1763_DC_INTENABLE :
+				       ISP176x_DC_INTENABLE;
+
 	/*
 	 * The device controller currently shares its interrupt with the host
 	 * controller, the DC_IRQ polarity and signaling mode are ignored. Set
···
 	 * ACK tokens only (and NYET for the out pipe). The default
 	 * configuration also generates an interrupt on the first NACK token.
 	 */
-	isp1760_udc_write(udc, DC_INTCONF, DC_CDBGMOD_ACK | DC_DDBGMODIN_ACK |
-			  DC_DDBGMODOUT_ACK_NYET);
+	isp1760_reg_write(udc->regs, intconf,
+			  ISP176x_DC_CDBGMOD_ACK | ISP176x_DC_DDBGMODIN_ACK |
+			  ISP176x_DC_DDBGMODOUT_ACK);
 
-	isp1760_udc_write(udc, DC_INTENABLE, DC_IEPRXTX(7) | DC_IEPRXTX(6) |
-			  DC_IEPRXTX(5) | DC_IEPRXTX(4) | DC_IEPRXTX(3) |
-			  DC_IEPRXTX(2) | DC_IEPRXTX(1) | DC_IEPRXTX(0) |
-			  DC_IEP0SETUP | DC_IEVBUS | DC_IERESM | DC_IESUSP |
-			  DC_IEHS_STA | DC_IEBRST);
+	isp1760_reg_write(udc->regs, intena, DC_IEPRXTX(7) |
+			  DC_IEPRXTX(6) | DC_IEPRXTX(5) | DC_IEPRXTX(4) |
+			  DC_IEPRXTX(3) | DC_IEPRXTX(2) | DC_IEPRXTX(1) |
+			  DC_IEPRXTX(0) | ISP176x_DC_IEP0SETUP |
+			  ISP176x_DC_IEVBUS | ISP176x_DC_IERESM |
+			  ISP176x_DC_IESUSP | ISP176x_DC_IEHS_STA |
+			  ISP176x_DC_IEBRST);
 
 	if (udc->connected)
 		isp1760_set_pullup(udc->isp, true);
 
-	isp1760_udc_write(udc, DC_ADDRESS, DC_DEVEN);
+	isp1760_udc_set(udc, DC_DEVEN);
 }
 
 static void isp1760_udc_reset(struct isp1760_udc *udc)
···
 {
 	struct isp1760_udc *udc = gadget_to_udc(gadget);
 
-	return isp1760_udc_read(udc, DC_FRAMENUM) & ((1 << 11) - 1);
+	return isp1760_udc_read(udc, DC_FRAMENUM);
 }
 
 static int isp1760_udc_wakeup(struct usb_gadget *gadget)
···
 	usb_gadget_set_state(&udc->gadget, USB_STATE_ATTACHED);
 
 	/* DMA isn't supported yet, don't enable the DMA clock. */
-	isp1760_udc_write(udc, DC_MODE, DC_GLINTENA);
+	isp1760_udc_set(udc, DC_GLINTENA);
 
 	isp1760_udc_init_hw(udc);
 
···
 static int isp1760_udc_stop(struct usb_gadget *gadget)
 {
 	struct isp1760_udc *udc = gadget_to_udc(gadget);
+	u32 mode_reg = udc->is_isp1763 ? ISP1763_DC_MODE : ISP176x_DC_MODE;
 	unsigned long flags;
 
 	dev_dbg(udc->isp->dev, "%s\n", __func__);
 
 	del_timer_sync(&udc->vbus_timer);
 
-	isp1760_udc_write(udc, DC_MODE, 0);
+	isp1760_reg_write(udc->regs, mode_reg, 0);
 
 	spin_lock_irqsave(&udc->lock, flags);
 	udc->driver = NULL;
···
  * Interrupt Handling
  */
 
+static u32 isp1760_udc_irq_get_status(struct isp1760_udc *udc)
+{
+	u32 status;
+
+	if (udc->is_isp1763) {
+		status = isp1760_reg_read(udc->regs, ISP1763_DC_INTERRUPT)
+			& isp1760_reg_read(udc->regs, ISP1763_DC_INTENABLE);
+		isp1760_reg_write(udc->regs, ISP1763_DC_INTERRUPT, status);
+	} else {
+		status = isp1760_reg_read(udc->regs, ISP176x_DC_INTERRUPT)
+			& isp1760_reg_read(udc->regs, ISP176x_DC_INTENABLE);
+		isp1760_reg_write(udc->regs, ISP176x_DC_INTERRUPT, status);
+	}
+
+	return status;
+}
+
 static irqreturn_t isp1760_udc_irq(int irq, void *dev)
 {
 	struct isp1760_udc *udc = dev;
 	unsigned int i;
 	u32 status;
 
-	status = isp1760_udc_read(udc, DC_INTERRUPT)
-		& isp1760_udc_read(udc, DC_INTENABLE);
-	isp1760_udc_write(udc, DC_INTERRUPT, status);
+	status = isp1760_udc_irq_get_status(udc);
 
 	if (status & DC_IEVBUS) {
 		dev_dbg(udc->isp->dev, "%s(VBUS)\n", __func__);
···
 		dev_dbg(udc->isp->dev, "%s(SUSP)\n", __func__);
 
 		spin_lock(&udc->lock);
-		if (!(isp1760_udc_read(udc, DC_MODE) & DC_VBUSSTAT))
+		if (!isp1760_udc_is_set(udc, DC_VBUSSTAT))
 			isp1760_udc_disconnect(udc);
 		else
 			isp1760_udc_suspend(udc);
···
 
 	spin_lock_irqsave(&udc->lock, flags);
 
-	if (!(isp1760_udc_read(udc, DC_MODE) & DC_VBUSSTAT))
+	if (!(isp1760_udc_is_set(udc, DC_VBUSSTAT)))
 		isp1760_udc_disconnect(udc);
 	else if (udc->gadget.state >= USB_STATE_POWERED)
 		mod_timer(&udc->vbus_timer,
···
 
 static int isp1760_udc_init(struct isp1760_udc *udc)
 {
+	u32 mode_reg = udc->is_isp1763 ? ISP1763_DC_MODE : ISP176x_DC_MODE;
 	u16 scratch;
 	u32 chipid;
 
···
 	 * and scratch register contents must match the expected values.
 	 */
 	isp1760_udc_write(udc, DC_SCRATCH, 0xbabe);
-	chipid = isp1760_udc_read(udc, DC_CHIPID);
+	chipid = isp1760_udc_read(udc, DC_CHIP_ID_HIGH) << 16;
+	chipid |= isp1760_udc_read(udc, DC_CHIP_ID_LOW);
 	scratch = isp1760_udc_read(udc, DC_SCRATCH);
 
 	if (scratch != 0xbabe) {
···
 		return -ENODEV;
 	}
 
-	if (chipid != 0x00011582 && chipid != 0x00158210) {
+	if (chipid != 0x00011582 && chipid != 0x00158210 &&
+	    chipid != 0x00176320) {
 		dev_err(udc->isp->dev, "udc: invalid chip ID 0x%08x\n", chipid);
 		return -ENODEV;
 	}
 
 	/* Reset the device controller. */
-	isp1760_udc_write(udc, DC_MODE, DC_SFRESET);
+	isp1760_udc_set(udc, DC_SFRESET);
 	usleep_range(10000, 11000);
-	isp1760_udc_write(udc, DC_MODE, 0);
+	isp1760_reg_write(udc->regs, mode_reg, 0);
 	usleep_range(10000, 11000);
 
 	return 0;
···
 
 	udc->irq = -1;
 	udc->isp = isp;
-	udc->regs = isp->regs;
 
 	spin_lock_init(&udc->lock);
 	timer_setup(&udc->vbus_timer, isp1760_udc_vbus_poll, 0);
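The hunks above funnel FIFO data through raw regmap accessors (`regmap_raw_read()` into a `__le32`, then `le32_to_cpu()`), because the ISP176x data port carries little-endian bytes regardless of host endianness. The conversion step can be sketched in self-contained userspace C; `raw_read()` and `udc_read_raw()` below are hypothetical stand-ins for the regmap call and the driver helper, operating on a plain byte array instead of MMIO:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for regmap_raw_read(): copy raw bus-order bytes. */
static void raw_read(const uint8_t *regs, uint16_t reg, void *val, size_t len)
{
	memcpy(val, regs + reg, len);
}

/* le32_to_cpu()-style conversion: byte 0 is least significant on the bus. */
static uint32_t le32_to_host(uint32_t le)
{
	uint8_t b[4];

	memcpy(b, &le, 4);	/* 'le' holds bytes exactly as read */
	return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
	       ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

/* Mirrors the shape of isp1760_udc_read_raw(): raw read, then convert. */
static uint32_t udc_read_raw(const uint8_t *regs, uint16_t reg)
{
	uint32_t val;

	raw_read(regs, reg, &val, 4);
	return le32_to_host(val);
}
```

A data port holding the bytes `78 56 34 12` reads back as `0x12345678` on either endianness; the 16-bit variant in the patch follows the same pattern with `__le16`/`le16_to_cpu()`.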
+9 -4
drivers/usb/isp1760/isp1760-udc.h
···
 /*
  * Driver for the NXP ISP1761 device controller
  *
+ * Copyright 2021 Linaro, Rui Miguel Silva
  * Copyright 2014 Ideas on Board Oy
  *
  * Contacts:
  *	Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *	Rui Miguel Silva <rui.silva@linaro.org>
  */
 
 #ifndef _ISP1760_UDC_H_
···
 #include <linux/spinlock.h>
 #include <linux/timer.h>
 #include <linux/usb/gadget.h>
+
+#include "isp1760-regs.h"
 
 struct isp1760_device;
 struct isp1760_udc;
···
  * struct isp1760_udc - UDC state information
  * irq: IRQ number
  * irqname: IRQ name (as passed to request_irq)
- * regs: Base address of the UDC registers
+ * regs: regmap for UDC registers
  * driver: Gadget driver
  * gadget: Gadget device
  * lock: Protects driver, vbus_timer, ep, ep0_*, DC_EPINDEX register
···
  * connected: Tracks gadget driver bus connection state
  */
 struct isp1760_udc {
-#ifdef CONFIG_USB_ISP1761_UDC
 	struct isp1760_device *isp;
 
 	int irq;
 	char *irqname;
-	void __iomem *regs;
+
+	struct regmap *regs;
+	struct regmap_field *fields[DC_FIELD_MAX];
 
 	struct usb_gadget_driver *driver;
 	struct usb_gadget gadget;
···
 	u16 ep0_length;
 
 	bool connected;
+	bool is_isp1763;
 
 	unsigned int devstatus;
-#endif
 };
 
 #ifdef CONFIG_USB_ISP1761_UDC
-1
drivers/usb/misc/ftdi-elan.c
···
 		} else
 			d += sprintf(d, " ..");
 		bytes_read += 1;
-		continue;
 	}
 	goto more;
 } else if (packet_bytes > 1) {
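The ftdi-elan hunk drops a `continue` that was the last statement executed in its loop body, so removing it cannot change control flow. A minimal demonstration of why such a `continue` is a no-op (the functions here are illustrative, not from the driver):

```c
#include <assert.h>

/* Pre-patch shape: 'continue' is the final statement before the loop
 * re-tests its condition, so it does nothing. */
static int sum_with_trailing_continue(const int *v, int n)
{
	int sum = 0;

	for (int i = 0; i < n; i++) {
		sum += v[i];
		continue;	/* redundant: the body ends here anyway */
	}
	return sum;
}

/* Post-patch shape: identical behavior without the no-op statement. */
static int sum_plain(const int *v, int n)
{
	int sum = 0;

	for (int i = 0; i < n; i++)
		sum += v[i];
	return sum;
}
```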
+9 -21
drivers/usb/mtu3/mtu3.h
···
 #ifndef __MTU3_H__
 #define __MTU3_H__
 
+#include <linux/clk.h>
 #include <linux/device.h>
 #include <linux/dmapool.h>
 #include <linux/extcon.h>
···
 #include <linux/usb/ch9.h>
 #include <linux/usb/gadget.h>
 #include <linux/usb/otg.h>
+#include <linux/usb/role.h>
 
 struct mtu3;
 struct mtu3_ep;
···
  * the SET_SEL request uses 6 so far, and GET_STATUS is 2
  */
 #define EP0_RESPONSE_BUF	6
+
+#define BULK_CLKS_CNT	4
 
 /* device operated link and speed got from DEVICE_CONF register */
 enum mtu3_speed {
···
 /**
  * @vbus: vbus 5V used by host mode
  * @edev: external connector used to detect vbus and iddig changes
- * @vbus_nb: notifier for vbus detection
- * @vbus_work : work of vbus detection notifier, used to avoid sleep in
- *		notifier callback which is atomic context
- * @vbus_event : event of vbus detecion notifier
  * @id_nb : notifier for iddig(idpin) detection
- * @id_work : work of iddig detection notifier
- * @id_event : event of iddig detecion notifier
+ * @dr_work : work for drd mode switch, used to avoid sleep in atomic context
+ * @desired_role : role desired to switch
  * @role_sw : use USB Role Switch to support dual-role switch, can't use
  *		extcon at the same time, and extcon is deprecated.
  * @role_sw_used : true when the USB Role Switch is used.
···
 struct otg_switch_mtk {
 	struct regulator *vbus;
 	struct extcon_dev *edev;
-	struct notifier_block vbus_nb;
-	struct work_struct vbus_work;
-	unsigned long vbus_event;
 	struct notifier_block id_nb;
-	struct work_struct id_work;
-	unsigned long id_event;
+	struct work_struct dr_work;
+	enum usb_role desired_role;
 	struct usb_role_switch *role_sw;
 	bool role_sw_used;
 	bool is_u3_drd;
···
  * @mac_base: register base address of device MAC, exclude xHCI's
  * @ippc_base: register base address of IP Power and Clock interface (IPPC)
  * @vusb33: usb3.3V shared by device/host IP
- * @sys_clk: system clock of mtu3, shared by device/host IP
- * @ref_clk: reference clock
- * @mcu_clk: mcu_bus_ck clock for AHB bus etc
- * @dma_clk: dma_bus_ck clock for AXI bus etc
  * @dr_mode: works in which mode:
  *		host only, device only or dual-role mode
  * @u2_ports: number of usb2.0 host ports
···
 	int num_phys;
 	/* common power & clock */
 	struct regulator *vusb33;
-	struct clk *sys_clk;
-	struct clk *ref_clk;
-	struct clk *mcu_clk;
-	struct clk *dma_clk;
+	struct clk_bulk_data clks[BULK_CLKS_CNT];
 	/* otg */
 	struct otg_switch_mtk otg_switch;
 	enum usb_dr_mode dr_mode;
···
 		int interval, int burst, int mult);
 void mtu3_deconfig_ep(struct mtu3 *mtu, struct mtu3_ep *mep);
 void mtu3_ep_stall_set(struct mtu3_ep *mep, bool set);
-void mtu3_ep0_setup(struct mtu3 *mtu);
 void mtu3_start(struct mtu3 *mtu);
 void mtu3_stop(struct mtu3 *mtu);
 void mtu3_dev_on_off(struct mtu3 *mtu, int is_on);
-void mtu3_set_speed(struct mtu3 *mtu, enum usb_device_speed speed);
 
 int mtu3_gadget_setup(struct mtu3 *mtu);
 void mtu3_gadget_cleanup(struct mtu3 *mtu);
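The mtu3.h hunk collapses two per-event notifier work items (`vbus_work`, `id_work`) into a single `dr_work` plus a `desired_role` variable: the atomic notifier only records the most recent requested role and schedules the worker, which applies it in process context. That coalescing pattern can be sketched in self-contained C (all names here are illustrative stand-ins, not the driver's types):

```c
#include <assert.h>

enum usb_role_sketch { ROLE_NONE, ROLE_HOST, ROLE_DEVICE };

/* Stand-in for otg_switch_mtk: one work item, one desired-state variable. */
struct drd_switch_sketch {
	enum usb_role_sketch desired_role;	/* written by notifiers */
	enum usb_role_sketch current_role;	/* applied by the worker */
	int work_pending;
};

/* Notifier path: may not sleep, so just record the request and kick work. */
static void request_role(struct drd_switch_sketch *sw, enum usb_role_sketch r)
{
	sw->desired_role = r;
	sw->work_pending = 1;	/* schedule_work() in the real driver */
}

/* Worker: runs in process context and applies only the latest request. */
static void dr_work_fn(struct drd_switch_sketch *sw)
{
	if (!sw->work_pending)
		return;
	sw->work_pending = 0;
	sw->current_role = sw->desired_role;
}
```

If two role changes race in before the worker runs, only the final one is applied, which is the behavior a single desired-state variable buys over queuing one work item per event.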
+14 -6
drivers/usb/mtu3/mtu3_core.c
··· 207 207 mtu3_writel(mbase, U3D_DEV_LINK_INTR_ENABLE, SSUSB_DEV_SPEED_CHG_INTR); 208 208 } 209 209 210 - void mtu3_set_speed(struct mtu3 *mtu, enum usb_device_speed speed) 210 + static void mtu3_set_speed(struct mtu3 *mtu, enum usb_device_speed speed) 211 211 { 212 212 void __iomem *mbase = mtu->mac_base; 213 213 ··· 334 334 mtu3_readl(mbase, U3D_DEVICE_CONTROL)); 335 335 336 336 mtu3_clrbits(mtu->ippc_base, U3D_SSUSB_IP_PW_CTRL2, SSUSB_IP_DEV_PDN); 337 + if (mtu->is_u3_ip) 338 + mtu3_clrbits(mtu->ippc_base, SSUSB_U3_CTRL(0), SSUSB_U3_PORT_PDN); 339 + 340 + mtu3_clrbits(mtu->ippc_base, SSUSB_U2_CTRL(0), SSUSB_U2_PORT_PDN); 337 341 338 342 mtu3_csr_init(mtu); 339 343 mtu3_set_speed(mtu, mtu->speed); ··· 360 356 mtu3_dev_on_off(mtu, 0); 361 357 362 358 mtu->is_active = 0; 359 + 360 + if (mtu->is_u3_ip) 361 + mtu3_setbits(mtu->ippc_base, SSUSB_U3_CTRL(0), SSUSB_U3_PORT_PDN); 362 + 363 + mtu3_setbits(mtu->ippc_base, SSUSB_U2_CTRL(0), SSUSB_U2_PORT_PDN); 363 364 mtu3_setbits(mtu->ippc_base, U3D_SSUSB_IP_PW_CTRL2, SSUSB_IP_DEV_PDN); 364 365 } 365 366 ··· 545 536 rx_fifo->base, rx_fifo->limit); 546 537 } 547 538 548 - void mtu3_ep0_setup(struct mtu3 *mtu) 539 + static void mtu3_ep0_setup(struct mtu3 *mtu) 549 540 { 550 541 u32 maxpacket = mtu->g.ep0->maxpacket; 551 542 u32 csr; ··· 930 921 931 922 device_init_wakeup(dev, true); 932 923 924 + /* power down device IP for power saving by default */ 925 + mtu3_stop(mtu); 926 + 933 927 ret = mtu3_gadget_setup(mtu); 934 928 if (ret) { 935 929 dev_err(dev, "mtu3 gadget init failed:%d\n", ret); 936 930 goto gadget_err; 937 931 } 938 - 939 - /* init as host mode, power down device IP for power saving */ 940 - if (mtu->ssusb->dr_mode == USB_DR_MODE_OTG) 941 - mtu3_stop(mtu); 942 932 943 933 ssusb_dev_debugfs_init(ssusb); 944 934
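The mtu3_core.c hunks above power the U2/U3 ports down or up by toggling PDN bits with the driver's read-modify-write helpers. A minimal userspace sketch of that set/clear-bits pattern (the bit position here is illustrative, not the real IPPC layout):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for a port power-down bit; not the real register layout. */
#define PORT_PDN (1u << 1)

/* Read-modify-write helpers mirroring mtu3_setbits()/mtu3_clrbits(). */
static void setbits(uint32_t *reg, uint32_t bits) { *reg |= bits; }
static void clrbits(uint32_t *reg, uint32_t bits) { *reg &= ~bits; }
```

The diff uses exactly this shape: set the PDN bits when disabling the device IP, clear them again on enable.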
+1
drivers/usb/mtu3/mtu3_debugfs.c
··· 30 30 dump_register(SSUSB_IP_PW_CTRL1), 31 31 dump_register(SSUSB_IP_PW_CTRL2), 32 32 dump_register(SSUSB_IP_PW_CTRL3), 33 + dump_register(SSUSB_IP_PW_STS1), 33 34 dump_register(SSUSB_OTG_STS), 34 35 dump_register(SSUSB_IP_XHCI_CAP), 35 36 dump_register(SSUSB_IP_DEV_CAP),
+47 -124
drivers/usb/mtu3/mtu3_dr.c
··· 7 7 * Author: Chunfeng Yun <chunfeng.yun@mediatek.com> 8 8 */ 9 9 10 - #include <linux/usb/role.h> 11 - 12 10 #include "mtu3.h" 13 11 #include "mtu3_dr.h" 14 12 #include "mtu3_debug.h" ··· 14 16 #define USB2_PORT 2 15 17 #define USB3_PORT 3 16 18 17 - enum mtu3_vbus_id_state { 18 - MTU3_ID_FLOAT = 1, 19 - MTU3_ID_GROUND, 20 - MTU3_VBUS_OFF, 21 - MTU3_VBUS_VALID, 22 - }; 23 - 24 - static char *mailbox_state_string(enum mtu3_vbus_id_state state) 19 + static inline struct ssusb_mtk *otg_sx_to_ssusb(struct otg_switch_mtk *otg_sx) 25 20 { 26 - switch (state) { 27 - case MTU3_ID_FLOAT: 28 - return "ID_FLOAT"; 29 - case MTU3_ID_GROUND: 30 - return "ID_GROUND"; 31 - case MTU3_VBUS_OFF: 32 - return "VBUS_OFF"; 33 - case MTU3_VBUS_VALID: 34 - return "VBUS_VALID"; 35 - default: 36 - return "UNKNOWN"; 37 - } 21 + return container_of(otg_sx, struct ssusb_mtk, otg_switch); 38 22 } 39 23 40 24 static void toggle_opstate(struct ssusb_mtk *ssusb) ··· 103 123 104 124 int ssusb_set_vbus(struct otg_switch_mtk *otg_sx, int is_on) 105 125 { 106 - struct ssusb_mtk *ssusb = 107 - container_of(otg_sx, struct ssusb_mtk, otg_switch); 126 + struct ssusb_mtk *ssusb = otg_sx_to_ssusb(otg_sx); 108 127 struct regulator *vbus = otg_sx->vbus; 109 128 int ret; 110 129 ··· 126 147 return 0; 127 148 } 128 149 129 - /* 130 - * switch to host: -> MTU3_VBUS_OFF --> MTU3_ID_GROUND 131 - * switch to device: -> MTU3_ID_FLOAT --> MTU3_VBUS_VALID 132 - */ 133 - static void ssusb_set_mailbox(struct otg_switch_mtk *otg_sx, 134 - enum mtu3_vbus_id_state status) 150 + static void ssusb_mode_sw_work(struct work_struct *work) 135 151 { 136 - struct ssusb_mtk *ssusb = 137 - container_of(otg_sx, struct ssusb_mtk, otg_switch); 152 + struct otg_switch_mtk *otg_sx = 153 + container_of(work, struct otg_switch_mtk, dr_work); 154 + struct ssusb_mtk *ssusb = otg_sx_to_ssusb(otg_sx); 138 155 struct mtu3 *mtu = ssusb->u3d; 156 + enum usb_role desired_role = otg_sx->desired_role; 157 + enum usb_role current_role; 139 158 
140 - dev_dbg(ssusb->dev, "mailbox %s\n", mailbox_state_string(status)); 141 - mtu3_dbg_trace(ssusb->dev, "mailbox %s", mailbox_state_string(status)); 159 + current_role = ssusb->is_host ? USB_ROLE_HOST : USB_ROLE_DEVICE; 142 160 143 - switch (status) { 144 - case MTU3_ID_GROUND: 161 + if (desired_role == USB_ROLE_NONE) 162 + desired_role = USB_ROLE_HOST; 163 + 164 + if (current_role == desired_role) 165 + return; 166 + 167 + dev_dbg(ssusb->dev, "set role : %s\n", usb_role_string(desired_role)); 168 + mtu3_dbg_trace(ssusb->dev, "set role : %s", usb_role_string(desired_role)); 169 + 170 + switch (desired_role) { 171 + case USB_ROLE_HOST: 172 + ssusb_set_force_mode(ssusb, MTU3_DR_FORCE_HOST); 173 + mtu3_stop(mtu); 145 174 switch_port_to_host(ssusb); 146 175 ssusb_set_vbus(otg_sx, 1); 147 176 ssusb->is_host = true; 148 177 break; 149 - case MTU3_ID_FLOAT: 178 + case USB_ROLE_DEVICE: 179 + ssusb_set_force_mode(ssusb, MTU3_DR_FORCE_DEVICE); 150 180 ssusb->is_host = false; 151 181 ssusb_set_vbus(otg_sx, 0); 152 182 switch_port_to_device(ssusb); 153 - break; 154 - case MTU3_VBUS_OFF: 155 - mtu3_stop(mtu); 156 - pm_relax(ssusb->dev); 157 - break; 158 - case MTU3_VBUS_VALID: 159 - /* avoid suspend when works as device */ 160 - pm_stay_awake(ssusb->dev); 161 183 mtu3_start(mtu); 162 184 break; 185 + case USB_ROLE_NONE: 163 186 default: 164 - dev_err(ssusb->dev, "invalid state\n"); 187 + dev_err(ssusb->dev, "invalid role\n"); 165 188 } 166 189 } 167 190 168 - static void ssusb_id_work(struct work_struct *work) 191 + static void ssusb_set_mode(struct otg_switch_mtk *otg_sx, enum usb_role role) 169 192 { 170 - struct otg_switch_mtk *otg_sx = 171 - container_of(work, struct otg_switch_mtk, id_work); 193 + struct ssusb_mtk *ssusb = otg_sx_to_ssusb(otg_sx); 172 194 173 - if (otg_sx->id_event) 174 - ssusb_set_mailbox(otg_sx, MTU3_ID_GROUND); 175 - else 176 - ssusb_set_mailbox(otg_sx, MTU3_ID_FLOAT); 195 + if (ssusb->dr_mode != USB_DR_MODE_OTG) 196 + return; 197 + 198 + 
otg_sx->desired_role = role; 199 + queue_work(system_freezable_wq, &otg_sx->dr_work); 177 200 } 178 201 179 - static void ssusb_vbus_work(struct work_struct *work) 180 - { 181 - struct otg_switch_mtk *otg_sx = 182 - container_of(work, struct otg_switch_mtk, vbus_work); 183 - 184 - if (otg_sx->vbus_event) 185 - ssusb_set_mailbox(otg_sx, MTU3_VBUS_VALID); 186 - else 187 - ssusb_set_mailbox(otg_sx, MTU3_VBUS_OFF); 188 - } 189 - 190 - /* 191 - * @ssusb_id_notifier is called in atomic context, but @ssusb_set_mailbox 192 - * may sleep, so use work queue here 193 - */ 194 202 static int ssusb_id_notifier(struct notifier_block *nb, 195 203 unsigned long event, void *ptr) 196 204 { 197 205 struct otg_switch_mtk *otg_sx = 198 206 container_of(nb, struct otg_switch_mtk, id_nb); 199 207 200 - otg_sx->id_event = event; 201 - schedule_work(&otg_sx->id_work); 202 - 203 - return NOTIFY_DONE; 204 - } 205 - 206 - static int ssusb_vbus_notifier(struct notifier_block *nb, 207 - unsigned long event, void *ptr) 208 - { 209 - struct otg_switch_mtk *otg_sx = 210 - container_of(nb, struct otg_switch_mtk, vbus_nb); 211 - 212 - otg_sx->vbus_event = event; 213 - schedule_work(&otg_sx->vbus_work); 208 + ssusb_set_mode(otg_sx, event ? 
USB_ROLE_HOST : USB_ROLE_DEVICE); 214 209 215 210 return NOTIFY_DONE; 216 211 } 217 212 218 213 static int ssusb_extcon_register(struct otg_switch_mtk *otg_sx) 219 214 { 220 - struct ssusb_mtk *ssusb = 221 - container_of(otg_sx, struct ssusb_mtk, otg_switch); 215 + struct ssusb_mtk *ssusb = otg_sx_to_ssusb(otg_sx); 222 216 struct extcon_dev *edev = otg_sx->edev; 223 217 int ret; 224 218 225 219 /* extcon is optional */ 226 220 if (!edev) 227 221 return 0; 228 - 229 - otg_sx->vbus_nb.notifier_call = ssusb_vbus_notifier; 230 - ret = devm_extcon_register_notifier(ssusb->dev, edev, EXTCON_USB, 231 - &otg_sx->vbus_nb); 232 - if (ret < 0) { 233 - dev_err(ssusb->dev, "failed to register notifier for USB\n"); 234 - return ret; 235 - } 236 222 237 223 otg_sx->id_nb.notifier_call = ssusb_id_notifier; 238 224 ret = devm_extcon_register_notifier(ssusb->dev, edev, EXTCON_USB_HOST, ··· 207 263 return ret; 208 264 } 209 265 210 - dev_dbg(ssusb->dev, "EXTCON_USB: %d, EXTCON_USB_HOST: %d\n", 211 - extcon_get_state(edev, EXTCON_USB), 212 - extcon_get_state(edev, EXTCON_USB_HOST)); 266 + ret = extcon_get_state(edev, EXTCON_USB_HOST); 267 + dev_dbg(ssusb->dev, "EXTCON_USB_HOST: %d\n", ret); 213 268 214 269 /* default as host, switch to device mode if needed */ 215 - if (extcon_get_state(edev, EXTCON_USB_HOST) == false) 216 - ssusb_set_mailbox(otg_sx, MTU3_ID_FLOAT); 217 - if (extcon_get_state(edev, EXTCON_USB) == true) 218 - ssusb_set_mailbox(otg_sx, MTU3_VBUS_VALID); 270 + if (!ret) 271 + ssusb_set_mode(otg_sx, USB_ROLE_DEVICE); 219 272 220 273 return 0; 221 274 } ··· 227 286 { 228 287 struct otg_switch_mtk *otg_sx = &ssusb->otg_switch; 229 288 230 - if (to_host) { 231 - ssusb_set_force_mode(ssusb, MTU3_DR_FORCE_HOST); 232 - ssusb_set_mailbox(otg_sx, MTU3_VBUS_OFF); 233 - ssusb_set_mailbox(otg_sx, MTU3_ID_GROUND); 234 - } else { 235 - ssusb_set_force_mode(ssusb, MTU3_DR_FORCE_DEVICE); 236 - ssusb_set_mailbox(otg_sx, MTU3_ID_FLOAT); 237 - ssusb_set_mailbox(otg_sx, MTU3_VBUS_VALID); 
238 - } 289 + ssusb_set_mode(otg_sx, to_host ? USB_ROLE_HOST : USB_ROLE_DEVICE); 239 290 } 240 291 241 292 void ssusb_set_force_mode(struct ssusb_mtk *ssusb, ··· 256 323 static int ssusb_role_sw_set(struct usb_role_switch *sw, enum usb_role role) 257 324 { 258 325 struct ssusb_mtk *ssusb = usb_role_switch_get_drvdata(sw); 259 - bool to_host = false; 326 + struct otg_switch_mtk *otg_sx = &ssusb->otg_switch; 260 327 261 - if (role == USB_ROLE_HOST) 262 - to_host = true; 263 - 264 - if (to_host ^ ssusb->is_host) 265 - ssusb_mode_switch(ssusb, to_host); 328 + ssusb_set_mode(otg_sx, role); 266 329 267 330 return 0; 268 331 } ··· 266 337 static enum usb_role ssusb_role_sw_get(struct usb_role_switch *sw) 267 338 { 268 339 struct ssusb_mtk *ssusb = usb_role_switch_get_drvdata(sw); 269 - enum usb_role role; 270 340 271 - role = ssusb->is_host ? USB_ROLE_HOST : USB_ROLE_DEVICE; 272 - 273 - return role; 341 + return ssusb->is_host ? USB_ROLE_HOST : USB_ROLE_DEVICE; 274 342 } 275 343 276 344 static int ssusb_role_sw_register(struct otg_switch_mtk *otg_sx) 277 345 { 278 346 struct usb_role_switch_desc role_sx_desc = { 0 }; 279 - struct ssusb_mtk *ssusb = 280 - container_of(otg_sx, struct ssusb_mtk, otg_switch); 347 + struct ssusb_mtk *ssusb = otg_sx_to_ssusb(otg_sx); 281 348 282 349 if (!otg_sx->role_sw_used) 283 350 return 0; ··· 292 367 struct otg_switch_mtk *otg_sx = &ssusb->otg_switch; 293 368 int ret = 0; 294 369 295 - INIT_WORK(&otg_sx->id_work, ssusb_id_work); 296 - INIT_WORK(&otg_sx->vbus_work, ssusb_vbus_work); 370 + INIT_WORK(&otg_sx->dr_work, ssusb_mode_sw_work); 297 371 298 372 if (otg_sx->manual_drd_enabled) 299 373 ssusb_dr_debugfs_init(ssusb); ··· 308 384 { 309 385 struct otg_switch_mtk *otg_sx = &ssusb->otg_switch; 310 386 311 - cancel_work_sync(&otg_sx->id_work); 312 - cancel_work_sync(&otg_sx->vbus_work); 387 + cancel_work_sync(&otg_sx->dr_work); 313 388 usb_role_switch_unregister(otg_sx->role_sw); 314 389 }
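The mtu3_dr.c rework above replaces the vbus/id mailbox with a single deferred work item: the notifier (atomic context) only records the desired role, and the worker coerces `USB_ROLE_NONE` to host and skips the switch when the current role already matches. A sketch of that decision logic as a pure function (names mirror the kernel enum, but this is an illustrative model, not the driver code):

```c
#include <assert.h>
#include <stdbool.h>

enum usb_role { USB_ROLE_NONE, USB_ROLE_HOST, USB_ROLE_DEVICE };

/* Returns the role to switch to, or USB_ROLE_NONE when no switch is
 * needed, modeling the early returns in ssusb_mode_sw_work(). */
static enum usb_role pick_switch(bool is_host, enum usb_role desired)
{
    enum usb_role current = is_host ? USB_ROLE_HOST : USB_ROLE_DEVICE;

    if (desired == USB_ROLE_NONE)   /* "none" defaults to host */
        desired = USB_ROLE_HOST;

    return (current == desired) ? USB_ROLE_NONE : desired;
}
```

Deferring to `system_freezable_wq` is what lets the worker sleep (regulator and PHY calls) even though the extcon notifier fires in atomic context.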
+1 -1
drivers/usb/mtu3/mtu3_gadget.c
··· 577 577 dev_dbg(mtu->dev, "%s %s\n", __func__, usb_speed_string(speed)); 578 578 579 579 spin_lock_irqsave(&mtu->lock, flags); 580 - mtu3_set_speed(mtu, speed); 580 + mtu->speed = speed; 581 581 spin_unlock_irqrestore(&mtu->lock, flags); 582 582 } 583 583
+1 -5
drivers/usb/mtu3/mtu3_host.c
··· 213 213 214 214 static void ssusb_host_setup(struct ssusb_mtk *ssusb) 215 215 { 216 - struct otg_switch_mtk *otg_sx = &ssusb->otg_switch; 217 - 218 216 host_ports_num_get(ssusb); 219 217 220 218 /* ··· 220 222 * if support OTG, gadget driver will switch port0 to device mode 221 223 */ 222 224 ssusb_host_enable(ssusb); 223 - 224 - if (otg_sx->manual_drd_enabled) 225 - ssusb_set_force_mode(ssusb, MTU3_DR_FORCE_HOST); 225 + ssusb_set_force_mode(ssusb, MTU3_DR_FORCE_HOST); 226 226 227 227 /* if port0 supports dual-role, works as host mode by default */ 228 228 ssusb_set_vbus(&ssusb->otg_switch, 1);
+20 -75
drivers/usb/mtu3/mtu3_plat.c
··· 5 5 * Author: Chunfeng Yun <chunfeng.yun@mediatek.com> 6 6 */ 7 7 8 - #include <linux/clk.h> 9 8 #include <linux/dma-mapping.h> 10 9 #include <linux/iopoll.h> 11 10 #include <linux/kernel.h> ··· 100 101 phy_power_off(ssusb->phys[i]); 101 102 } 102 103 103 - static int ssusb_clks_enable(struct ssusb_mtk *ssusb) 104 - { 105 - int ret; 106 - 107 - ret = clk_prepare_enable(ssusb->sys_clk); 108 - if (ret) { 109 - dev_err(ssusb->dev, "failed to enable sys_clk\n"); 110 - goto sys_clk_err; 111 - } 112 - 113 - ret = clk_prepare_enable(ssusb->ref_clk); 114 - if (ret) { 115 - dev_err(ssusb->dev, "failed to enable ref_clk\n"); 116 - goto ref_clk_err; 117 - } 118 - 119 - ret = clk_prepare_enable(ssusb->mcu_clk); 120 - if (ret) { 121 - dev_err(ssusb->dev, "failed to enable mcu_clk\n"); 122 - goto mcu_clk_err; 123 - } 124 - 125 - ret = clk_prepare_enable(ssusb->dma_clk); 126 - if (ret) { 127 - dev_err(ssusb->dev, "failed to enable dma_clk\n"); 128 - goto dma_clk_err; 129 - } 130 - 131 - return 0; 132 - 133 - dma_clk_err: 134 - clk_disable_unprepare(ssusb->mcu_clk); 135 - mcu_clk_err: 136 - clk_disable_unprepare(ssusb->ref_clk); 137 - ref_clk_err: 138 - clk_disable_unprepare(ssusb->sys_clk); 139 - sys_clk_err: 140 - return ret; 141 - } 142 - 143 - static void ssusb_clks_disable(struct ssusb_mtk *ssusb) 144 - { 145 - clk_disable_unprepare(ssusb->dma_clk); 146 - clk_disable_unprepare(ssusb->mcu_clk); 147 - clk_disable_unprepare(ssusb->ref_clk); 148 - clk_disable_unprepare(ssusb->sys_clk); 149 - } 150 - 151 104 static int ssusb_rscs_init(struct ssusb_mtk *ssusb) 152 105 { 153 106 int ret = 0; ··· 110 159 goto vusb33_err; 111 160 } 112 161 113 - ret = ssusb_clks_enable(ssusb); 162 + ret = clk_bulk_prepare_enable(BULK_CLKS_CNT, ssusb->clks); 114 163 if (ret) 115 164 goto clks_err; 116 165 ··· 131 180 phy_err: 132 181 ssusb_phy_exit(ssusb); 133 182 phy_init_err: 134 - ssusb_clks_disable(ssusb); 183 + clk_bulk_disable_unprepare(BULK_CLKS_CNT, ssusb->clks); 135 184 clks_err: 136 185 
regulator_disable(ssusb->vusb33); 137 186 vusb33_err: ··· 140 189 141 190 static void ssusb_rscs_exit(struct ssusb_mtk *ssusb) 142 191 { 143 - ssusb_clks_disable(ssusb); 192 + clk_bulk_disable_unprepare(BULK_CLKS_CNT, ssusb->clks); 144 193 regulator_disable(ssusb->vusb33); 145 194 ssusb_phy_power_off(ssusb); 146 195 ssusb_phy_exit(ssusb); ··· 166 215 { 167 216 struct device_node *node = pdev->dev.of_node; 168 217 struct otg_switch_mtk *otg_sx = &ssusb->otg_switch; 218 + struct clk_bulk_data *clks = ssusb->clks; 169 219 struct device *dev = &pdev->dev; 170 220 int i; 171 221 int ret; ··· 177 225 return PTR_ERR(ssusb->vusb33); 178 226 } 179 227 180 - ssusb->sys_clk = devm_clk_get(dev, "sys_ck"); 181 - if (IS_ERR(ssusb->sys_clk)) { 182 - dev_err(dev, "failed to get sys clock\n"); 183 - return PTR_ERR(ssusb->sys_clk); 184 - } 185 - 186 - ssusb->ref_clk = devm_clk_get_optional(dev, "ref_ck"); 187 - if (IS_ERR(ssusb->ref_clk)) 188 - return PTR_ERR(ssusb->ref_clk); 189 - 190 - ssusb->mcu_clk = devm_clk_get_optional(dev, "mcu_ck"); 191 - if (IS_ERR(ssusb->mcu_clk)) 192 - return PTR_ERR(ssusb->mcu_clk); 193 - 194 - ssusb->dma_clk = devm_clk_get_optional(dev, "dma_ck"); 195 - if (IS_ERR(ssusb->dma_clk)) 196 - return PTR_ERR(ssusb->dma_clk); 228 + clks[0].id = "sys_ck"; 229 + clks[1].id = "ref_ck"; 230 + clks[2].id = "mcu_ck"; 231 + clks[3].id = "dma_ck"; 232 + ret = devm_clk_bulk_get_optional(dev, BULK_CLKS_CNT, clks); 233 + if (ret) 234 + return ret; 197 235 198 236 ssusb->num_phys = of_count_phandle_with_args(node, 199 237 "phys", "#phy-cells"); ··· 241 299 of_property_read_bool(node, "enable-manual-drd"); 242 300 otg_sx->role_sw_used = of_property_read_bool(node, "usb-role-switch"); 243 301 244 - if (!otg_sx->role_sw_used && of_property_read_bool(node, "extcon")) { 302 + if (otg_sx->role_sw_used || otg_sx->manual_drd_enabled) 303 + goto out; 304 + 305 + if (of_property_read_bool(node, "extcon")) { 245 306 otg_sx->edev = extcon_get_edev_by_phandle(ssusb->dev, 0); 246 307 
if (IS_ERR(otg_sx->edev)) { 247 - dev_err(ssusb->dev, "couldn't get extcon device\n"); 248 - return PTR_ERR(otg_sx->edev); 308 + return dev_err_probe(dev, PTR_ERR(otg_sx->edev), 309 + "couldn't get extcon device\n"); 249 310 } 250 311 } 251 312 ··· 406 461 407 462 ssusb_host_disable(ssusb, true); 408 463 ssusb_phy_power_off(ssusb); 409 - ssusb_clks_disable(ssusb); 464 + clk_bulk_disable_unprepare(BULK_CLKS_CNT, ssusb->clks); 410 465 ssusb_wakeup_set(ssusb, true); 411 466 412 467 return 0; ··· 423 478 return 0; 424 479 425 480 ssusb_wakeup_set(ssusb, false); 426 - ret = ssusb_clks_enable(ssusb); 481 + ret = clk_bulk_prepare_enable(BULK_CLKS_CNT, ssusb->clks); 427 482 if (ret) 428 483 goto clks_err; 429 484 ··· 436 491 return 0; 437 492 438 493 phy_err: 439 - ssusb_clks_disable(ssusb); 494 + clk_bulk_disable_unprepare(BULK_CLKS_CNT, ssusb->clks); 440 495 clks_err: 441 496 return ret; 442 497 }
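The mtu3_plat.c conversion above drops the hand-rolled enable/goto-unwind chain in favor of `clk_bulk_prepare_enable()`, which enables the array in order and rolls back already-enabled clocks on failure. A self-contained sketch of what that bulk helper does internally (the `clk` struct here is a stand-in, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct clk { const char *id; bool enabled; bool broken; };

static int clk_enable_one(struct clk *c)
{
    if (c->broken)
        return -1;
    c->enabled = true;
    return 0;
}

/* Enable all clocks in order; on failure, unwind in reverse order so
 * the caller sees all-or-nothing, like clk_bulk_prepare_enable(). */
static int bulk_enable(size_t n, struct clk *clks)
{
    for (size_t i = 0; i < n; i++) {
        if (clk_enable_one(&clks[i])) {
            while (i--)
                clks[i].enabled = false;
            return -1;
        }
    }
    return 0;
}
```

`devm_clk_bulk_get_optional()` fills the same `clk_bulk_data` array by `id`, which is why the probe path shrinks to four assignments plus one call.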
+41 -30
drivers/usb/musb/musb_core.c
··· 480 480 481 481 devctl = musb_read_devctl(musb); 482 482 if (!(devctl & MUSB_DEVCTL_BDEVICE)) { 483 - dev_info(musb->controller, 484 - "%s: already in host mode: %02x\n", 485 - __func__, devctl); 483 + trace_musb_state(musb, devctl, "Already in host mode"); 486 484 goto init_data; 487 485 } 488 486 ··· 496 498 497 499 return error; 498 500 } 501 + 502 + devctl = musb_read_devctl(musb); 503 + trace_musb_state(musb, devctl, "Host mode set"); 499 504 500 505 init_data: 501 506 musb->is_active = 1; ··· 527 526 528 527 devctl = musb_read_devctl(musb); 529 528 if (devctl & MUSB_DEVCTL_BDEVICE) { 530 - dev_info(musb->controller, 531 - "%s: already in peripheral mode: %02x\n", 532 - __func__, devctl); 533 - 529 + trace_musb_state(musb, devctl, "Already in peripheral mode"); 534 530 goto init_data; 535 531 } 536 532 ··· 543 545 544 546 return error; 545 547 } 548 + 549 + devctl = musb_read_devctl(musb); 550 + trace_musb_state(musb, devctl, "Peripheral mode set"); 546 551 547 552 init_data: 548 553 musb->is_active = 0; ··· 1985 1984 #define MUSB_QUIRK_A_DISCONNECT_19 ((3 << MUSB_DEVCTL_VBUS_SHIFT) | \ 1986 1985 MUSB_DEVCTL_SESSION) 1987 1986 1987 + static bool musb_state_needs_recheck(struct musb *musb, u8 devctl, 1988 + const char *desc) 1989 + { 1990 + if (musb->quirk_retries && !musb->flush_irq_work) { 1991 + trace_musb_state(musb, devctl, desc); 1992 + schedule_delayed_work(&musb->irq_work, 1993 + msecs_to_jiffies(1000)); 1994 + musb->quirk_retries--; 1995 + 1996 + return true; 1997 + } 1998 + 1999 + return false; 2000 + } 2001 + 1988 2002 /* 1989 2003 * Check the musb devctl session bit to determine if we want to 1990 2004 * allow PM runtime for the device. 
In general, we want to keep things ··· 2020 2004 MUSB_DEVCTL_HR; 2021 2005 switch (devctl & ~s) { 2022 2006 case MUSB_QUIRK_B_DISCONNECT_99: 2023 - if (musb->quirk_retries && !musb->flush_irq_work) { 2024 - musb_dbg(musb, "Poll devctl in case of suspend after disconnect\n"); 2025 - schedule_delayed_work(&musb->irq_work, 2026 - msecs_to_jiffies(1000)); 2027 - musb->quirk_retries--; 2028 - } 2007 + musb_state_needs_recheck(musb, devctl, 2008 + "Poll devctl in case of suspend after disconnect"); 2029 2009 break; 2030 2010 case MUSB_QUIRK_B_INVALID_VBUS_91: 2031 - if (musb->quirk_retries && !musb->flush_irq_work) { 2032 - musb_dbg(musb, 2033 - "Poll devctl on invalid vbus, assume no session"); 2034 - schedule_delayed_work(&musb->irq_work, 2035 - msecs_to_jiffies(1000)); 2036 - musb->quirk_retries--; 2011 + if (musb_state_needs_recheck(musb, devctl, 2012 + "Poll devctl on invalid vbus, assume no session")) 2037 2013 return; 2038 - } 2039 2014 fallthrough; 2040 2015 case MUSB_QUIRK_A_DISCONNECT_19: 2041 - if (musb->quirk_retries && !musb->flush_irq_work) { 2042 - musb_dbg(musb, 2043 - "Poll devctl on possible host mode disconnect"); 2044 - schedule_delayed_work(&musb->irq_work, 2045 - msecs_to_jiffies(1000)); 2046 - musb->quirk_retries--; 2016 + if (musb_state_needs_recheck(musb, devctl, 2017 + "Poll devctl on possible host mode disconnect")) 2047 2018 return; 2048 - } 2049 2019 if (!musb->session) 2050 2020 break; 2051 - musb_dbg(musb, "Allow PM on possible host mode disconnect"); 2021 + trace_musb_state(musb, devctl, "Allow PM on possible host mode disconnect"); 2052 2022 pm_runtime_mark_last_busy(musb->controller); 2053 2023 pm_runtime_put_autosuspend(musb->controller); 2054 2024 musb->session = false; ··· 2050 2048 2051 2049 /* Block PM or allow PM? 
*/ 2052 2050 if (s) { 2053 - musb_dbg(musb, "Block PM on active session: %02x", devctl); 2051 + trace_musb_state(musb, devctl, "Block PM on active session"); 2054 2052 error = pm_runtime_get_sync(musb->controller); 2055 2053 if (error < 0) 2056 2054 dev_err(musb->controller, "Could not enable: %i\n", 2057 2055 error); 2058 2056 musb->quirk_retries = 3; 2057 + 2058 + /* 2059 + * We can get a spurious MUSB_INTR_SESSREQ interrupt on start-up 2060 + * in B-peripheral mode with nothing connected and the session 2061 + * bit clears silently. Check status again in 3 seconds. 2062 + */ 2063 + if (devctl & MUSB_DEVCTL_BDEVICE) 2064 + schedule_delayed_work(&musb->irq_work, 2065 + msecs_to_jiffies(3000)); 2059 2066 } else { 2060 - musb_dbg(musb, "Allow PM with no session: %02x", devctl); 2067 + trace_musb_state(musb, devctl, "Allow PM with no session"); 2061 2068 pm_runtime_mark_last_busy(musb->controller); 2062 2069 pm_runtime_put_autosuspend(musb->controller); 2063 2070 }
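The musb_core.c hunk above deduplicates three copies of the "schedule a delayed devctl recheck" quirk into `musb_state_needs_recheck()`, which returns whether a recheck was queued. A sketch of that helper's retry accounting (pure model, no workqueue):

```c
#include <assert.h>
#include <stdbool.h>

struct quirk { int retries; bool flush_irq_work; int scheduled; };

/* Schedule a recheck only while retries remain and no IRQ-work flush
 * is in flight; each scheduled recheck consumes one retry. */
static bool needs_recheck(struct quirk *q)
{
    if (q->retries && !q->flush_irq_work) {
        q->scheduled++;     /* schedule_delayed_work() in the driver */
        q->retries--;
        return true;
    }
    return false;
}
```

Returning a bool lets two of the three quirk cases bail out of the state machine early, which is exactly how the switch cases above use it.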
+1 -1
drivers/usb/musb/musb_gadget.c
··· 611 611 * mode 0 only. So we do not get endpoint interrupts due to DMA 612 612 * completion. We only get interrupts from DMA controller. 613 613 * 614 - * We could operate in DMA mode 1 if we knew the size of the tranfer 614 + * We could operate in DMA mode 1 if we knew the size of the transfer 615 615 * in advance. For mass storage class, request->length = what the host 616 616 * sends, so that'd work. But for pretty much everything else, 617 617 * request->length is routinely more than what the host sends. For
+6 -12
drivers/usb/musb/musb_host.c
··· 563 563 ep->rx_reinit = 0; 564 564 } 565 565 566 - static void musb_tx_dma_set_mode_mentor(struct dma_controller *dma, 567 - struct musb_hw_ep *hw_ep, struct musb_qh *qh, 568 - struct urb *urb, u32 offset, 569 - u32 *length, u8 *mode) 566 + static void musb_tx_dma_set_mode_mentor(struct musb_hw_ep *hw_ep, 567 + struct musb_qh *qh, 568 + u32 *length, u8 *mode) 570 569 { 571 570 struct dma_channel *channel = hw_ep->tx_channel; 572 571 void __iomem *epio = hw_ep->regs; ··· 601 602 musb_writew(epio, MUSB_TXCSR, csr); 602 603 } 603 604 604 - static void musb_tx_dma_set_mode_cppi_tusb(struct dma_controller *dma, 605 - struct musb_hw_ep *hw_ep, 606 - struct musb_qh *qh, 605 + static void musb_tx_dma_set_mode_cppi_tusb(struct musb_hw_ep *hw_ep, 607 606 struct urb *urb, 608 - u32 offset, 609 - u32 *length, 610 607 u8 *mode) 611 608 { 612 609 struct dma_channel *channel = hw_ep->tx_channel; ··· 625 630 u8 mode; 626 631 627 632 if (musb_dma_inventra(hw_ep->musb) || musb_dma_ux500(hw_ep->musb)) 628 - musb_tx_dma_set_mode_mentor(dma, hw_ep, qh, urb, offset, 633 + musb_tx_dma_set_mode_mentor(hw_ep, qh, 629 634 &length, &mode); 630 635 else if (is_cppi_enabled(hw_ep->musb) || tusb_dma_omap(hw_ep->musb)) 631 - musb_tx_dma_set_mode_cppi_tusb(dma, hw_ep, qh, urb, offset, 632 - &length, &mode); 636 + musb_tx_dma_set_mode_cppi_tusb(hw_ep, urb, &mode); 633 637 else 634 638 return false; 635 639
-4
drivers/usb/musb/musb_host.h
··· 61 61 extern void musb_host_rx(struct musb *, u8); 62 62 extern void musb_root_disconnect(struct musb *musb); 63 63 extern void musb_host_free(struct musb *); 64 - extern void musb_host_cleanup(struct musb *); 65 - extern void musb_host_tx(struct musb *, u8); 66 - extern void musb_host_rx(struct musb *, u8); 67 - extern void musb_root_disconnect(struct musb *musb); 68 64 extern void musb_host_resume_root_hub(struct musb *musb); 69 65 extern void musb_host_poke_root_hub(struct musb *musb); 70 66 extern int musb_port_suspend(struct musb *musb, bool do_suspend);
+17
drivers/usb/musb/musb_trace.h
··· 37 37 TP_printk("%s: %s", __get_str(name), __get_str(msg)) 38 38 ); 39 39 40 + TRACE_EVENT(musb_state, 41 + TP_PROTO(struct musb *musb, u8 devctl, const char *desc), 42 + TP_ARGS(musb, devctl, desc), 43 + TP_STRUCT__entry( 44 + __string(name, dev_name(musb->controller)) 45 + __field(u8, devctl) 46 + __string(desc, desc) 47 + ), 48 + TP_fast_assign( 49 + __assign_str(name, dev_name(musb->controller)); 50 + __entry->devctl = devctl; 51 + __assign_str(desc, desc); 52 + ), 53 + TP_printk("%s: devctl: %02x %s", __get_str(name), __entry->devctl, 54 + __get_str(desc)) 55 + ); 56 + 40 57 DECLARE_EVENT_CLASS(musb_regb, 41 58 TP_PROTO(void *caller, const void __iomem *addr, 42 59 unsigned int offset, u8 data),
+32
drivers/usb/musb/omap2430.c
··· 33 33 enum musb_vbus_id_status status; 34 34 struct work_struct omap_musb_mailbox_work; 35 35 struct device *control_otghs; 36 + unsigned int is_runtime_suspended:1; 37 + unsigned int needs_resume:1; 36 38 }; 37 39 #define glue_to_musb(g) platform_get_drvdata(g->musb) 38 40 ··· 461 459 phy_power_off(musb->phy); 462 460 phy_exit(musb->phy); 463 461 462 + glue->is_runtime_suspended = 1; 463 + 464 464 return 0; 465 465 } 466 466 ··· 484 480 /* Wait for musb to get oriented. Otherwise we can get babble */ 485 481 usleep_range(200000, 250000); 486 482 483 + glue->is_runtime_suspended = 0; 484 + 487 485 return 0; 486 + } 487 + 488 + static int omap2430_suspend(struct device *dev) 489 + { 490 + struct omap2430_glue *glue = dev_get_drvdata(dev); 491 + 492 + if (glue->is_runtime_suspended) 493 + return 0; 494 + 495 + glue->needs_resume = 1; 496 + 497 + return omap2430_runtime_suspend(dev); 498 + } 499 + 500 + static int omap2430_resume(struct device *dev) 501 + { 502 + struct omap2430_glue *glue = dev_get_drvdata(dev); 503 + 504 + if (!glue->needs_resume) 505 + return 0; 506 + 507 + glue->needs_resume = 0; 508 + 509 + return omap2430_runtime_resume(dev); 488 510 } 489 511 490 512 static const struct dev_pm_ops omap2430_pm_ops = { 491 513 .runtime_suspend = omap2430_runtime_suspend, 492 514 .runtime_resume = omap2430_runtime_resume, 515 + .suspend = omap2430_suspend, 516 + .resume = omap2430_resume, 493 517 }; 494 518 495 519 #define DEV_PM_OPS (&omap2430_pm_ops)
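The omap2430.c change above adds system suspend/resume hooks that are careful not to double-suspend: system suspend is a no-op when the device is already runtime-suspended, and resume only undoes a suspend this path itself performed. That two-flag handshake as a standalone model:

```c
#include <assert.h>
#include <stdbool.h>

struct glue { bool runtime_suspended; bool needs_resume; };

static void rt_suspend(struct glue *g) { g->runtime_suspended = true; }
static void rt_resume(struct glue *g)  { g->runtime_suspended = false; }

/* System suspend: skip if runtime PM already suspended the device,
 * otherwise suspend it and remember to resume it ourselves. */
static void sys_suspend(struct glue *g)
{
    if (g->runtime_suspended)
        return;
    g->needs_resume = true;
    rt_suspend(g);
}

/* System resume: only undo a suspend that sys_suspend() performed. */
static void sys_resume(struct glue *g)
{
    if (!g->needs_resume)
        return;
    g->needs_resume = false;
    rt_resume(g);
}
```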
+1 -1
drivers/usb/phy/phy-isp1301-omap.c
··· 555 555 case OTG_STATE_A_PERIPHERAL: 556 556 if (otg_ctrl & OTG_PULLUP) 557 557 goto pullup; 558 - /* FALLTHROUGH */ 558 + fallthrough; 559 559 // case OTG_STATE_B_WAIT_ACON: 560 560 default: 561 561 pulldown:
+9 -16
drivers/usb/phy/phy-isp1301.c
··· 142 142 143 143 module_i2c_driver(isp1301_driver); 144 144 145 - static int match(struct device *dev, const void *data) 146 - { 147 - const struct device_node *node = (const struct device_node *)data; 148 - return (dev->of_node == node) && 149 - (dev->driver == &isp1301_driver.driver); 150 - } 151 - 152 145 struct i2c_client *isp1301_get_client(struct device_node *node) 153 146 { 154 - if (node) { /* reference of ISP1301 I2C node via DT */ 155 - struct device *dev = bus_find_device(&i2c_bus_type, NULL, 156 - node, match); 157 - if (!dev) 158 - return NULL; 159 - return to_i2c_client(dev); 160 - } else { /* non-DT: only one ISP1301 chip supported */ 161 - return isp1301_i2c_client; 162 - } 147 + struct i2c_client *client; 148 + 149 + /* reference of ISP1301 I2C node via DT */ 150 + client = of_find_i2c_device_by_node(node); 151 + if (client) 152 + return client; 153 + 154 + /* non-DT: only one ISP1301 chip supported */ 155 + return isp1301_i2c_client; 163 156 } 164 157 EXPORT_SYMBOL_GPL(isp1301_get_client); 165 158
+12 -3
drivers/usb/phy/phy-tegra-usb.c
··· 58 58 #define USB_WAKEUP_DEBOUNCE_COUNT(x) (((x) & 0x7) << 16) 59 59 60 60 #define USB_PHY_VBUS_SENSORS 0x404 61 - #define B_SESS_VLD_WAKEUP_EN BIT(6) 62 - #define B_VBUS_VLD_WAKEUP_EN BIT(14) 61 + #define B_SESS_VLD_WAKEUP_EN BIT(14) 63 62 #define A_SESS_VLD_WAKEUP_EN BIT(22) 64 63 #define A_VBUS_VLD_WAKEUP_EN BIT(30) 65 64 66 65 #define USB_PHY_VBUS_WAKEUP_ID 0x408 66 + #define VBUS_WAKEUP_STS BIT(10) 67 67 #define VBUS_WAKEUP_WAKEUP_EN BIT(30) 68 68 69 69 #define USB1_LEGACY_CTRL 0x410 ··· 544 544 545 545 val = readl_relaxed(base + USB_PHY_VBUS_SENSORS); 546 546 val &= ~(A_VBUS_VLD_WAKEUP_EN | A_SESS_VLD_WAKEUP_EN); 547 - val &= ~(B_VBUS_VLD_WAKEUP_EN | B_SESS_VLD_WAKEUP_EN); 547 + val &= ~(B_SESS_VLD_WAKEUP_EN); 548 548 writel_relaxed(val, base + USB_PHY_VBUS_SENSORS); 549 549 550 550 val = readl_relaxed(base + UTMIP_BAT_CHRG_CFG0); ··· 641 641 { 642 642 void __iomem *base = phy->regs; 643 643 u32 val; 644 + 645 + /* 646 + * Give hardware time to settle down after VBUS disconnection, 647 + * otherwise PHY will immediately wake up from suspend. 648 + */ 649 + if (phy->wakeup_enabled && phy->mode != USB_DR_MODE_HOST) 650 + readl_relaxed_poll_timeout(base + USB_PHY_VBUS_WAKEUP_ID, 651 + val, !(val & VBUS_WAKEUP_STS), 652 + 5000, 100000); 644 653 645 654 utmi_phy_clk_disable(phy); 646 655
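The tegra PHY fix above uses `readl_relaxed_poll_timeout()` to wait for `VBUS_WAKEUP_STS` to clear before suspending, so a stale VBUS status can't immediately wake the PHY. A userspace analogue of that bounded poll loop (the sleep between polls is elided; only the loop shape is shown):

```c
#include <assert.h>
#include <stdint.h>

#define VBUS_WAKEUP_STS (1u << 10)

/* Re-read a status word until the bit clears or the poll budget runs
 * out; returns 0 on success, -1 on timeout (-ETIMEDOUT in the kernel
 * helper). In the driver the budget is expressed as sleep/timeout
 * microseconds rather than an iteration count. */
static int poll_bit_clear(volatile const uint32_t *reg, uint32_t bit,
                          int max_polls)
{
    for (int i = 0; i < max_polls; i++) {
        if (!(*reg & bit))
            return 0;
        /* usleep_range(5000, 100000) would go here in the driver */
    }
    return -1;
}
```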
+47 -8
drivers/usb/phy/phy.c
··· 42 42 [ACA_TYPE] = "USB_CHARGER_ACA_TYPE", 43 43 }; 44 44 45 + static const char *const usb_chger_state[] = { 46 + [USB_CHARGER_DEFAULT] = "USB_CHARGER_DEFAULT", 47 + [USB_CHARGER_PRESENT] = "USB_CHARGER_PRESENT", 48 + [USB_CHARGER_ABSENT] = "USB_CHARGER_ABSENT", 49 + }; 50 + 45 51 static struct usb_phy *__usb_find_phy(struct list_head *list, 46 52 enum usb_phy_type type) 47 53 { ··· 78 72 } 79 73 80 74 return ERR_PTR(-EPROBE_DEFER); 75 + } 76 + 77 + static struct usb_phy *__device_to_usb_phy(struct device *dev) 78 + { 79 + struct usb_phy *usb_phy; 80 + 81 + list_for_each_entry(usb_phy, &phy_list, head) { 82 + if (usb_phy->dev == dev) 83 + break; 84 + } 85 + 86 + return usb_phy; 81 87 } 82 88 83 89 static void usb_phy_set_default_current(struct usb_phy *usb_phy) ··· 123 105 static void usb_phy_notify_charger_work(struct work_struct *work) 124 106 { 125 107 struct usb_phy *usb_phy = container_of(work, struct usb_phy, chg_work); 126 - char uchger_state[50] = { 0 }; 127 - char uchger_type[50] = { 0 }; 128 - char *envp[] = { uchger_state, uchger_type, NULL }; 129 108 unsigned int min, max; 130 109 131 110 switch (usb_phy->chg_state) { ··· 130 115 usb_phy_get_charger_current(usb_phy, &min, &max); 131 116 132 117 atomic_notifier_call_chain(&usb_phy->notifier, max, usb_phy); 133 - snprintf(uchger_state, ARRAY_SIZE(uchger_state), 134 - "USB_CHARGER_STATE=%s", "USB_CHARGER_PRESENT"); 135 118 break; 136 119 case USB_CHARGER_ABSENT: 137 120 usb_phy_set_default_current(usb_phy); 138 121 139 122 atomic_notifier_call_chain(&usb_phy->notifier, 0, usb_phy); 140 - snprintf(uchger_state, ARRAY_SIZE(uchger_state), 141 - "USB_CHARGER_STATE=%s", "USB_CHARGER_ABSENT"); 142 123 break; 143 124 default: 144 125 dev_warn(usb_phy->dev, "Unknown USB charger state: %d\n", ··· 142 131 return; 143 132 } 144 133 134 + kobject_uevent(&usb_phy->dev->kobj, KOBJ_CHANGE); 135 + } 136 + 137 + static int usb_phy_uevent(struct device *dev, struct kobj_uevent_env *env) 138 + { 139 + struct usb_phy 
*usb_phy; 140 + char uchger_state[50] = { 0 }; 141 + char uchger_type[50] = { 0 }; 142 + 143 + usb_phy = __device_to_usb_phy(dev); 144 + 145 + snprintf(uchger_state, ARRAY_SIZE(uchger_state), 146 + "USB_CHARGER_STATE=%s", usb_chger_state[usb_phy->chg_state]); 147 + 145 148 snprintf(uchger_type, ARRAY_SIZE(uchger_type), 146 149 "USB_CHARGER_TYPE=%s", usb_chger_type[usb_phy->chg_type]); 147 - kobject_uevent_env(&usb_phy->dev->kobj, KOBJ_CHANGE, envp); 150 + 151 + if (add_uevent_var(env, uchger_state)) 152 + return -ENOMEM; 153 + 154 + if (add_uevent_var(env, uchger_type)) 155 + return -ENOMEM; 156 + 157 + return 0; 148 158 } 149 159 150 160 static void __usb_phy_get_charger_type(struct usb_phy *usb_phy) ··· 693 661 } 694 662 EXPORT_SYMBOL_GPL(usb_add_phy); 695 663 664 + static struct device_type usb_phy_dev_type = { 665 + .name = "usb_phy", 666 + .uevent = usb_phy_uevent, 667 + }; 668 + 696 669 /** 697 670 * usb_add_phy_dev - declare the USB PHY 698 671 * @x: the USB phy to be used; or NULL ··· 720 683 ret = usb_add_extcon(x); 721 684 if (ret) 722 685 return ret; 686 + 687 + x->dev->type = &usb_phy_dev_type; 723 688 724 689 ATOMIC_INIT_NOTIFIER_HEAD(&x->notifier); 725 690
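The phy.c rework above moves the charger uevent variables out of the work handler and into a `device_type.uevent` callback, composing `KEY=VALUE` strings from indexed lookup tables. A sketch of that composition step (the state table mirrors the one added in the diff; the function name is illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <stddef.h>

enum chg_state { CHG_DEFAULT, CHG_PRESENT, CHG_ABSENT };

/* Designated-initializer string table, as in usb_chger_state[]. */
static const char *const chg_state_str[] = {
    [CHG_DEFAULT] = "USB_CHARGER_DEFAULT",
    [CHG_PRESENT] = "USB_CHARGER_PRESENT",
    [CHG_ABSENT]  = "USB_CHARGER_ABSENT",
};

/* Format one uevent variable; the driver hands the result to
 * add_uevent_var() instead of building an envp array by hand. */
static int format_state_var(char *buf, size_t len, enum chg_state s)
{
    return snprintf(buf, len, "USB_CHARGER_STATE=%s", chg_state_str[s]);
}
```

Using the `.uevent` callback means udev sees the variables on every KOBJ_CHANGE event, not only on the ones the charger work emitted itself.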
+9
drivers/usb/roles/class.c
··· 214 214 [USB_ROLE_DEVICE] = "device", 215 215 }; 216 216 217 + const char *usb_role_string(enum usb_role role) 218 + { 219 + if (role < 0 || role >= ARRAY_SIZE(usb_roles)) 220 + return "unknown"; 221 + 222 + return usb_roles[role]; 223 + } 224 + EXPORT_SYMBOL_GPL(usb_role_string); 225 + 217 226 static ssize_t 218 227 role_show(struct device *dev, struct device_attribute *attr, char *buf) 219 228 {
+170 -19
drivers/usb/serial/cp210x.c
··· 247 247 #ifdef CONFIG_GPIOLIB 248 248 struct gpio_chip gc; 249 249 bool gpio_registered; 250 - u8 gpio_pushpull; 251 - u8 gpio_altfunc; 252 - u8 gpio_input; 250 + u16 gpio_pushpull; 251 + u16 gpio_altfunc; 252 + u16 gpio_input; 253 253 #endif 254 254 u8 partnum; 255 255 u32 fw_version; ··· 534 534 #define CP2104_GPIO1_RXLED_MODE BIT(1) 535 535 #define CP2104_GPIO2_RS485_MODE BIT(2) 536 536 537 + struct cp210x_quad_port_state { 538 + __le16 gpio_mode_pb0; 539 + __le16 gpio_mode_pb1; 540 + __le16 gpio_mode_pb2; 541 + __le16 gpio_mode_pb3; 542 + __le16 gpio_mode_pb4; 543 + 544 + __le16 gpio_lowpower_pb0; 545 + __le16 gpio_lowpower_pb1; 546 + __le16 gpio_lowpower_pb2; 547 + __le16 gpio_lowpower_pb3; 548 + __le16 gpio_lowpower_pb4; 549 + 550 + __le16 gpio_latch_pb0; 551 + __le16 gpio_latch_pb1; 552 + __le16 gpio_latch_pb2; 553 + __le16 gpio_latch_pb3; 554 + __le16 gpio_latch_pb4; 555 + }; 556 + 557 + /* 558 + * CP210X_VENDOR_SPECIFIC, CP210X_GET_PORTCONFIG call reads these 0x49 bytes 559 + * on a CP2108 chip. 
560 + * 561 + * See https://www.silabs.com/documents/public/application-notes/an978-cp210x-usb-to-uart-api-specification.pdf 562 + */ 563 + struct cp210x_quad_port_config { 564 + struct cp210x_quad_port_state reset_state; 565 + struct cp210x_quad_port_state suspend_state; 566 + u8 ipdelay_ifc[4]; 567 + u8 enhancedfxn_ifc[4]; 568 + u8 enhancedfxn_device; 569 + u8 extclkfreq[4]; 570 + } __packed; 571 + 572 + #define CP2108_EF_IFC_GPIO_TXLED 0x01 573 + #define CP2108_EF_IFC_GPIO_RXLED 0x02 574 + #define CP2108_EF_IFC_GPIO_RS485 0x04 575 + #define CP2108_EF_IFC_GPIO_RS485_LOGIC 0x08 576 + #define CP2108_EF_IFC_GPIO_CLOCK 0x10 577 + #define CP2108_EF_IFC_DYNAMIC_SUSPEND 0x40 578 + 537 579 /* CP2102N configuration array indices */ 538 580 #define CP210X_2NCONFIG_CONFIG_VERSION_IDX 2 539 581 #define CP210X_2NCONFIG_GPIO_MODE_IDX 581 ··· 588 546 #define CP2102N_QFN20_GPIO1_RS485_MODE BIT(4) 589 547 #define CP2102N_QFN20_GPIO0_CLK_MODE BIT(6) 590 548 591 - /* CP210X_VENDOR_SPECIFIC, CP210X_WRITE_LATCH call writes these 0x2 bytes. */ 549 + /* 550 + * CP210X_VENDOR_SPECIFIC, CP210X_WRITE_LATCH call writes these 0x02 bytes 551 + * for CP2102N, CP2103, CP2104 and CP2105. 552 + */ 592 553 struct cp210x_gpio_write { 593 554 u8 mask; 594 555 u8 state; 556 + }; 557 + 558 + /* 559 + * CP210X_VENDOR_SPECIFIC, CP210X_WRITE_LATCH call writes these 0x04 bytes 560 + * for CP2108. 
561 + */ 562 + struct cp210x_gpio_write16 { 563 + __le16 mask; 564 + __le16 state; 595 565 }; 596 566 597 567 /* ··· 1488 1434 { 1489 1435 struct usb_serial *serial = gpiochip_get_data(gc); 1490 1436 struct cp210x_serial_private *priv = usb_get_serial_data(serial); 1491 - u8 req_type = REQTYPE_DEVICE_TO_HOST; 1437 + u8 req_type; 1438 + u16 mask; 1492 1439 int result; 1493 - u8 buf; 1494 - 1495 - if (priv->partnum == CP210X_PARTNUM_CP2105) 1496 - req_type = REQTYPE_INTERFACE_TO_HOST; 1440 + int len; 1497 1441 1498 1442 result = usb_autopm_get_interface(serial->interface); 1499 1443 if (result) 1500 1444 return result; 1501 1445 1502 - result = cp210x_read_vendor_block(serial, req_type, 1503 - CP210X_READ_LATCH, &buf, sizeof(buf)); 1446 + switch (priv->partnum) { 1447 + case CP210X_PARTNUM_CP2105: 1448 + req_type = REQTYPE_INTERFACE_TO_HOST; 1449 + len = 1; 1450 + break; 1451 + case CP210X_PARTNUM_CP2108: 1452 + req_type = REQTYPE_INTERFACE_TO_HOST; 1453 + len = 2; 1454 + break; 1455 + default: 1456 + req_type = REQTYPE_DEVICE_TO_HOST; 1457 + len = 1; 1458 + break; 1459 + } 1460 + 1461 + mask = 0; 1462 + result = cp210x_read_vendor_block(serial, req_type, CP210X_READ_LATCH, 1463 + &mask, len); 1464 + 1504 1465 usb_autopm_put_interface(serial->interface); 1466 + 1505 1467 if (result < 0) 1506 1468 return result; 1507 1469 1508 - return !!(buf & BIT(gpio)); 1470 + le16_to_cpus(&mask); 1471 + 1472 + return !!(mask & BIT(gpio)); 1509 1473 } 1510 1474 1511 1475 static void cp210x_gpio_set(struct gpio_chip *gc, unsigned int gpio, int value) 1512 1476 { 1513 1477 struct usb_serial *serial = gpiochip_get_data(gc); 1514 1478 struct cp210x_serial_private *priv = usb_get_serial_data(serial); 1479 + struct cp210x_gpio_write16 buf16; 1515 1480 struct cp210x_gpio_write buf; 1481 + u16 mask, state; 1482 + u16 wIndex; 1516 1483 int result; 1517 1484 1518 1485 if (value == 1) 1519 - buf.state = BIT(gpio); 1486 + state = BIT(gpio); 1520 1487 else 1521 - buf.state = 0; 1488 + state = 0;
1522 1489 1523 - buf.mask = BIT(gpio); 1490 + mask = BIT(gpio); 1524 1491 1525 1492 result = usb_autopm_get_interface(serial->interface); 1526 1493 if (result) 1527 1494 goto out; 1528 1495 1529 - if (priv->partnum == CP210X_PARTNUM_CP2105) { 1496 + switch (priv->partnum) { 1497 + case CP210X_PARTNUM_CP2105: 1498 + buf.mask = (u8)mask; 1499 + buf.state = (u8)state; 1530 1500 result = cp210x_write_vendor_block(serial, 1531 1501 REQTYPE_HOST_TO_INTERFACE, 1532 1502 CP210X_WRITE_LATCH, &buf, 1533 1503 sizeof(buf)); 1534 - } else { 1535 - u16 wIndex = buf.state << 8 | buf.mask; 1536 - 1504 + break; 1505 + case CP210X_PARTNUM_CP2108: 1506 + buf16.mask = cpu_to_le16(mask); 1507 + buf16.state = cpu_to_le16(state); 1508 + result = cp210x_write_vendor_block(serial, 1509 + REQTYPE_HOST_TO_INTERFACE, 1510 + CP210X_WRITE_LATCH, &buf16, 1511 + sizeof(buf16)); 1512 + break; 1513 + default: 1514 + wIndex = state << 8 | mask; 1537 1515 result = usb_control_msg(serial->dev, 1538 1516 usb_sndctrlpipe(serial->dev, 0), 1539 1517 CP210X_VENDOR_SPECIFIC, ··· 1573 1487 CP210X_WRITE_LATCH, 1574 1488 wIndex, 1575 1489 NULL, 0, USB_CTRL_SET_TIMEOUT); 1490 + break; 1576 1491 } 1577 1492 1578 1493 usb_autopm_put_interface(serial->interface); ··· 1783 1696 return 0; 1784 1697 } 1785 1698 1699 + static int cp2108_gpio_init(struct usb_serial *serial) 1700 + { 1701 + struct cp210x_serial_private *priv = usb_get_serial_data(serial); 1702 + struct cp210x_quad_port_config config; 1703 + u16 gpio_latch; 1704 + int result; 1705 + u8 i; 1706 + 1707 + result = cp210x_read_vendor_block(serial, REQTYPE_DEVICE_TO_HOST, 1708 + CP210X_GET_PORTCONFIG, &config, 1709 + sizeof(config)); 1710 + if (result < 0) 1711 + return result; 1712 + 1713 + priv->gc.ngpio = 16; 1714 + priv->gpio_pushpull = le16_to_cpu(config.reset_state.gpio_mode_pb1); 1715 + gpio_latch = le16_to_cpu(config.reset_state.gpio_latch_pb1); 1716 + 1717 + /* 1718 + * Mark all pins which are not in GPIO mode. 
1719 + * 1720 + * Refer to table 9.1 "GPIO Mode alternate Functions" in the datasheet: 1721 + * https://www.silabs.com/documents/public/data-sheets/cp2108-datasheet.pdf 1722 + * 1723 + * Alternate functions of GPIO0 to GPIO3 are determined by enhancedfxn_ifc[0] 1724 + * and similarly for the other pins; enhancedfxn_ifc[1]: GPIO4 to GPIO7, 1725 + * enhancedfxn_ifc[2]: GPIO8 to GPIO11, enhancedfxn_ifc[3]: GPIO12 to GPIO15. 1726 + */ 1727 + for (i = 0; i < 4; i++) { 1728 + if (config.enhancedfxn_ifc[i] & CP2108_EF_IFC_GPIO_TXLED) 1729 + priv->gpio_altfunc |= BIT(i * 4); 1730 + if (config.enhancedfxn_ifc[i] & CP2108_EF_IFC_GPIO_RXLED) 1731 + priv->gpio_altfunc |= BIT((i * 4) + 1); 1732 + if (config.enhancedfxn_ifc[i] & CP2108_EF_IFC_GPIO_RS485) 1733 + priv->gpio_altfunc |= BIT((i * 4) + 2); 1734 + if (config.enhancedfxn_ifc[i] & CP2108_EF_IFC_GPIO_CLOCK) 1735 + priv->gpio_altfunc |= BIT((i * 4) + 3); 1736 + } 1737 + 1738 + /* 1739 + * Like the CP2102N, the CP2108 also has no strict input and output pin 1740 + * modes. Do the same input mode emulation as CP2102N. 1741 + */ 1742 + for (i = 0; i < priv->gc.ngpio; ++i) { 1743 + /* 1744 + * Set direction to "input" iff pin is open-drain and reset 1745 + * value is 1. 1746 + */ 1747 + if (!(priv->gpio_pushpull & BIT(i)) && (gpio_latch & BIT(i))) 1748 + priv->gpio_input |= BIT(i); 1749 + } 1750 + 1751 + return 0; 1752 + } 1753 + 1786 1754 static int cp2102n_gpioconf_init(struct usb_serial *serial) 1787 1755 { 1788 1756 struct cp210x_serial_private *priv = usb_get_serial_data(serial); ··· 1953 1811 break; 1954 1812 case CP210X_PARTNUM_CP2105: 1955 1813 result = cp2105_gpioconf_init(serial); 1814 + break; 1815 + case CP210X_PARTNUM_CP2108: 1816 + /* 1817 + * The GPIOs are not tied to any specific port, so only register 1818 + * once for interface 0. 
1819 + */ 1820 + if (cp210x_interface_num(serial) != 0) 1821 + return 0; 1822 + result = cp2108_gpio_init(serial); 1956 1823 break; 1957 1824 case CP210X_PARTNUM_CP2102N_QFN28: 1958 1825 case CP210X_PARTNUM_CP2102N_QFN24:
+2 -2
drivers/usb/serial/cyberjack.c
··· 53 53 static void cyberjack_close(struct usb_serial_port *port); 54 54 static int cyberjack_write(struct tty_struct *tty, 55 55 struct usb_serial_port *port, const unsigned char *buf, int count); 56 - static int cyberjack_write_room(struct tty_struct *tty); 56 + static unsigned int cyberjack_write_room(struct tty_struct *tty); 57 57 static void cyberjack_read_int_callback(struct urb *urb); 58 58 static void cyberjack_read_bulk_callback(struct urb *urb); 59 59 static void cyberjack_write_bulk_callback(struct urb *urb); ··· 240 240 return count; 241 241 } 242 242 243 - static int cyberjack_write_room(struct tty_struct *tty) 243 + static unsigned int cyberjack_write_room(struct tty_struct *tty) 244 244 { 245 245 /* FIXME: .... */ 246 246 return CYBERJACK_LOCAL_BUF_SIZE;
+8 -8
drivers/usb/serial/cypress_m8.c
··· 122 122 static int cypress_write(struct tty_struct *tty, struct usb_serial_port *port, 123 123 const unsigned char *buf, int count); 124 124 static void cypress_send(struct usb_serial_port *port); 125 - static int cypress_write_room(struct tty_struct *tty); 125 + static unsigned int cypress_write_room(struct tty_struct *tty); 126 126 static void cypress_earthmate_init_termios(struct tty_struct *tty); 127 127 static void cypress_set_termios(struct tty_struct *tty, 128 128 struct usb_serial_port *port, struct ktermios *old); 129 129 static int cypress_tiocmget(struct tty_struct *tty); 130 130 static int cypress_tiocmset(struct tty_struct *tty, 131 131 unsigned int set, unsigned int clear); 132 - static int cypress_chars_in_buffer(struct tty_struct *tty); 132 + static unsigned int cypress_chars_in_buffer(struct tty_struct *tty); 133 133 static void cypress_throttle(struct tty_struct *tty); 134 134 static void cypress_unthrottle(struct tty_struct *tty); 135 135 static void cypress_set_dead(struct usb_serial_port *port); ··· 789 789 790 790 791 791 /* returns how much space is available in the soft buffer */ 792 - static int cypress_write_room(struct tty_struct *tty) 792 + static unsigned int cypress_write_room(struct tty_struct *tty) 793 793 { 794 794 struct usb_serial_port *port = tty->driver_data; 795 795 struct cypress_private *priv = usb_get_serial_port_data(port); 796 - int room = 0; 796 + unsigned int room; 797 797 unsigned long flags; 798 798 799 799 spin_lock_irqsave(&priv->lock, flags); 800 800 room = kfifo_avail(&priv->write_fifo); 801 801 spin_unlock_irqrestore(&priv->lock, flags); 802 802 803 - dev_dbg(&port->dev, "%s - returns %d\n", __func__, room); 803 + dev_dbg(&port->dev, "%s - returns %u\n", __func__, room); 804 804 return room; 805 805 } 806 806 ··· 955 955 956 956 957 957 /* returns amount of data still left in soft buffer */ 958 - static int cypress_chars_in_buffer(struct tty_struct *tty) 958 + static unsigned int cypress_chars_in_buffer(struct tty_struct *tty)
959 959 { 960 960 struct usb_serial_port *port = tty->driver_data; 961 961 struct cypress_private *priv = usb_get_serial_port_data(port); 962 - int chars = 0; 962 + unsigned int chars; 963 963 unsigned long flags; 964 964 965 965 spin_lock_irqsave(&priv->lock, flags); 966 966 chars = kfifo_len(&priv->write_fifo); 967 967 spin_unlock_irqrestore(&priv->lock, flags); 968 968 969 - dev_dbg(&port->dev, "%s - returns %d\n", __func__, chars); 969 + dev_dbg(&port->dev, "%s - returns %u\n", __func__, chars); 970 970 return chars; 971 971 } 972 972
+23 -23
drivers/usb/serial/digi_acceleport.c
··· 223 223 static int digi_write(struct tty_struct *tty, struct usb_serial_port *port, 224 224 const unsigned char *buf, int count); 225 225 static void digi_write_bulk_callback(struct urb *urb); 226 - static int digi_write_room(struct tty_struct *tty); 227 - static int digi_chars_in_buffer(struct tty_struct *tty); 226 + static unsigned int digi_write_room(struct tty_struct *tty); 227 + static unsigned int digi_chars_in_buffer(struct tty_struct *tty); 228 228 static int digi_open(struct tty_struct *tty, struct usb_serial_port *port); 229 229 static void digi_close(struct usb_serial_port *port); 230 230 static void digi_dtr_rts(struct usb_serial_port *port, int on); ··· 372 372 int len; 373 373 struct usb_serial_port *oob_port = (struct usb_serial_port *)((struct digi_serial *)(usb_get_serial_data(port->serial)))->ds_oob_port; 374 374 struct digi_port *oob_priv = usb_get_serial_port_data(oob_port); 375 - unsigned long flags = 0; 375 + unsigned long flags; 376 376 377 377 dev_dbg(&port->dev, 378 378 "digi_write_oob_command: TOP: port=%d, count=%d\n", ··· 430 430 int len; 431 431 struct digi_port *priv = usb_get_serial_port_data(port); 432 432 unsigned char *data = port->write_urb->transfer_buffer; 433 - unsigned long flags = 0; 433 + unsigned long flags; 434 434 435 435 dev_dbg(&port->dev, "digi_write_inb_command: TOP: port=%d, count=%d\n", 436 436 priv->dp_port_num, count); ··· 511 511 struct usb_serial_port *oob_port = (struct usb_serial_port *) ((struct digi_serial *)(usb_get_serial_data(port->serial)))->ds_oob_port; 512 512 struct digi_port *oob_priv = usb_get_serial_port_data(oob_port); 513 513 unsigned char *data = oob_port->write_urb->transfer_buffer; 514 - unsigned long flags = 0; 515 - 514 + unsigned long flags; 516 515 517 516 dev_dbg(&port->dev, 518 517 "digi_set_modem_signals: TOP: port=%d, modem_signals=0x%x\n", ··· 576 577 int ret; 577 578 unsigned char buf[2]; 578 579 struct digi_port *priv = usb_get_serial_port_data(port); 579 - unsigned long flags = 0; 580 + unsigned long flags;
580 581 581 582 spin_lock_irqsave(&priv->dp_port_lock, flags); 582 583 priv->dp_transmit_idle = 0; ··· 886 887 int ret, data_len, new_len; 887 888 struct digi_port *priv = usb_get_serial_port_data(port); 888 889 unsigned char *data = port->write_urb->transfer_buffer; 889 - unsigned long flags = 0; 890 + unsigned long flags; 890 891 891 892 dev_dbg(&port->dev, "digi_write: TOP: port=%d, count=%d\n", 892 893 priv->dp_port_num, count); ··· 1019 1020 tty_port_tty_wakeup(&port->port); 1020 1021 } 1021 1022 1022 - static int digi_write_room(struct tty_struct *tty) 1023 + static unsigned int digi_write_room(struct tty_struct *tty) 1023 1024 { 1024 1025 struct usb_serial_port *port = tty->driver_data; 1025 1026 struct digi_port *priv = usb_get_serial_port_data(port); 1026 - int room; 1027 - unsigned long flags = 0; 1027 + unsigned long flags; 1028 + unsigned int room; 1028 1029 1029 1030 spin_lock_irqsave(&priv->dp_port_lock, flags); 1030 1031 ··· 1034 1035 room = port->bulk_out_size - 2 - priv->dp_out_buf_len; 1035 1036 1036 1037 spin_unlock_irqrestore(&priv->dp_port_lock, flags); 1037 - dev_dbg(&port->dev, "digi_write_room: port=%d, room=%d\n", priv->dp_port_num, room); 1038 + dev_dbg(&port->dev, "digi_write_room: port=%d, room=%u\n", priv->dp_port_num, room); 1038 1039 return room; 1039 1040 1040 1041 } 1041 1042 1042 - static int digi_chars_in_buffer(struct tty_struct *tty) 1043 + static unsigned int digi_chars_in_buffer(struct tty_struct *tty) 1043 1044 { 1044 1045 struct usb_serial_port *port = tty->driver_data; 1045 1046 struct digi_port *priv = usb_get_serial_port_data(port); 1047 + unsigned long flags; 1048 + unsigned int chars; 1046 1049 1047 - if (priv->dp_write_urb_in_use) { 1048 - dev_dbg(&port->dev, "digi_chars_in_buffer: port=%d, chars=%d\n", 1049 - priv->dp_port_num, port->bulk_out_size - 2); 1050 - /* return(port->bulk_out_size - 2); */ 1051 - return 256; 1052 - } else { 1053 - dev_dbg(&port->dev, "digi_chars_in_buffer: port=%d, chars=%d\n", 1054 - priv->dp_port_num, priv->dp_out_buf_len); 1055 - return priv->dp_out_buf_len; 1056 - } 1050 + spin_lock_irqsave(&priv->dp_port_lock, flags); 1051 + if (priv->dp_write_urb_in_use) 1052 + chars = port->bulk_out_size - 2; 1053 + else 1054 + chars = priv->dp_out_buf_len; 1055 + spin_unlock_irqrestore(&priv->dp_port_lock, flags); 1057 1056 1057 + dev_dbg(&port->dev, "%s: port=%d, chars=%d\n", __func__, 1058 + priv->dp_port_num, chars); 1059 + return chars; 1058 1060 } 1059 1061 1060 1062 static void digi_dtr_rts(struct usb_serial_port *port, int on)
+1 -1
drivers/usb/serial/garmin_gps.c
··· 1113 1113 } 1114 1114 1115 1115 1116 - static int garmin_write_room(struct tty_struct *tty) 1116 + static unsigned int garmin_write_room(struct tty_struct *tty) 1117 1117 { 1118 1118 struct usb_serial_port *port = tty->driver_data; 1119 1119 /*
+6 -6
drivers/usb/serial/generic.c
··· 230 230 } 231 231 EXPORT_SYMBOL_GPL(usb_serial_generic_write); 232 232 233 - int usb_serial_generic_write_room(struct tty_struct *tty) 233 + unsigned int usb_serial_generic_write_room(struct tty_struct *tty) 234 234 { 235 235 struct usb_serial_port *port = tty->driver_data; 236 236 unsigned long flags; 237 - int room; 237 + unsigned int room; 238 238 239 239 if (!port->bulk_out_size) 240 240 return 0; ··· 243 243 room = kfifo_avail(&port->write_fifo); 244 244 spin_unlock_irqrestore(&port->lock, flags); 245 245 246 - dev_dbg(&port->dev, "%s - returns %d\n", __func__, room); 246 + dev_dbg(&port->dev, "%s - returns %u\n", __func__, room); 247 247 return room; 248 248 } 249 249 250 - int usb_serial_generic_chars_in_buffer(struct tty_struct *tty) 250 + unsigned int usb_serial_generic_chars_in_buffer(struct tty_struct *tty) 251 251 { 252 252 struct usb_serial_port *port = tty->driver_data; 253 253 unsigned long flags; 254 - int chars; 254 + unsigned int chars; 255 255 256 256 if (!port->bulk_out_size) 257 257 return 0; ··· 260 260 chars = kfifo_len(&port->write_fifo) + port->tx_bytes; 261 261 spin_unlock_irqrestore(&port->lock, flags); 262 262 263 - dev_dbg(&port->dev, "%s - returns %d\n", __func__, chars); 263 + dev_dbg(&port->dev, "%s - returns %u\n", __func__, chars); 264 264 return chars; 265 265 } 266 266 EXPORT_SYMBOL_GPL(usb_serial_generic_chars_in_buffer);
+7 -32
drivers/usb/serial/io_edgeport.c
··· 1351 1351 /***************************************************************************** 1352 1352 * edge_write_room 1353 1353 * this function is called by the tty driver when it wants to know how 1354 - * many bytes of data we can accept for a specific port. If successful, 1355 - * we return the amount of room that we have for this port (the txCredits) 1356 - * otherwise we return a negative error number. 1354 + * many bytes of data we can accept for a specific port. 1357 1355 *****************************************************************************/ 1358 - static int edge_write_room(struct tty_struct *tty) 1356 + static unsigned int edge_write_room(struct tty_struct *tty) 1359 1357 { 1360 1358 struct usb_serial_port *port = tty->driver_data; 1361 1359 struct edgeport_port *edge_port = usb_get_serial_port_data(port); 1362 - int room; 1360 + unsigned int room; 1363 1361 unsigned long flags; 1364 - 1365 - if (edge_port == NULL) 1366 - return 0; 1367 - if (edge_port->closePending) 1368 - return 0; 1369 - 1370 - if (!edge_port->open) { 1371 - dev_dbg(&port->dev, "%s - port not opened\n", __func__); 1372 - return 0; 1373 - } 1374 1362 1375 1363 /* total of both buffers is still txCredit */ 1376 1364 spin_lock_irqsave(&edge_port->ep_lock, flags); 1377 1365 room = edge_port->txCredits - edge_port->txfifo.count; 1378 1366 spin_unlock_irqrestore(&edge_port->ep_lock, flags); 1379 1367 1380 - dev_dbg(&port->dev, "%s - returns %d\n", __func__, room); 1368 + dev_dbg(&port->dev, "%s - returns %u\n", __func__, room); 1381 1369 return room; 1382 1370 } 1383 1371 ··· 1375 1387 * this function is called by the tty driver when it wants to know how 1376 1388 * many bytes of data we currently have outstanding in the port (data that 1377 1389 * has been written, but hasn't made it out the port yet) 1378 - * If successful, we return the number of bytes left to be written in the 1379 - * system, 1380 - * Otherwise we return a negative error number. 
1381 1390 *****************************************************************************/ 1382 - static int edge_chars_in_buffer(struct tty_struct *tty) 1391 + static unsigned int edge_chars_in_buffer(struct tty_struct *tty) 1383 1392 { 1384 1393 struct usb_serial_port *port = tty->driver_data; 1385 1394 struct edgeport_port *edge_port = usb_get_serial_port_data(port); 1386 - int num_chars; 1395 + unsigned int num_chars; 1387 1396 unsigned long flags; 1388 - 1389 - if (edge_port == NULL) 1390 - return 0; 1391 - if (edge_port->closePending) 1392 - return 0; 1393 - 1394 - if (!edge_port->open) { 1395 - dev_dbg(&port->dev, "%s - port not opened\n", __func__); 1396 - return 0; 1397 - } 1398 1397 1399 1398 spin_lock_irqsave(&edge_port->ep_lock, flags); 1400 1399 num_chars = edge_port->maxTxCredits - edge_port->txCredits + 1401 1400 edge_port->txfifo.count; 1402 1401 spin_unlock_irqrestore(&edge_port->ep_lock, flags); 1403 1402 if (num_chars) { 1404 - dev_dbg(&port->dev, "%s - returns %d\n", __func__, num_chars); 1403 + dev_dbg(&port->dev, "%s - returns %u\n", __func__, num_chars); 1405 1404 } 1406 1405 1407 1406 return num_chars;
+6 -6
drivers/usb/serial/io_ti.c
··· 2067 2067 tty_wakeup(tty); 2068 2068 } 2069 2069 2070 - static int edge_write_room(struct tty_struct *tty) 2070 + static unsigned int edge_write_room(struct tty_struct *tty) 2071 2071 { 2072 2072 struct usb_serial_port *port = tty->driver_data; 2073 2073 struct edgeport_port *edge_port = usb_get_serial_port_data(port); 2074 - int room = 0; 2074 + unsigned int room; 2075 2075 unsigned long flags; 2076 2076 2077 2077 if (edge_port == NULL) ··· 2083 2083 room = kfifo_avail(&port->write_fifo); 2084 2084 spin_unlock_irqrestore(&edge_port->ep_lock, flags); 2085 2085 2086 - dev_dbg(&port->dev, "%s - returns %d\n", __func__, room); 2086 + dev_dbg(&port->dev, "%s - returns %u\n", __func__, room); 2087 2087 return room; 2088 2088 } 2089 2089 2090 - static int edge_chars_in_buffer(struct tty_struct *tty) 2090 + static unsigned int edge_chars_in_buffer(struct tty_struct *tty) 2091 2091 { 2092 2092 struct usb_serial_port *port = tty->driver_data; 2093 2093 struct edgeport_port *edge_port = usb_get_serial_port_data(port); 2094 - int chars = 0; 2094 + unsigned int chars; 2095 2095 unsigned long flags; 2096 2096 if (edge_port == NULL) 2097 2097 return 0; ··· 2100 2100 chars = kfifo_len(&port->write_fifo); 2101 2101 spin_unlock_irqrestore(&edge_port->ep_lock, flags); 2102 2102 2103 - dev_dbg(&port->dev, "%s - returns %d\n", __func__, chars); 2103 + dev_dbg(&port->dev, "%s - returns %u\n", __func__, chars); 2104 2104 return chars; 2105 2105 } 2106 2106
+3 -3
drivers/usb/serial/ir-usb.c
··· 47 47 static int ir_startup (struct usb_serial *serial); 48 48 static int ir_write(struct tty_struct *tty, struct usb_serial_port *port, 49 49 const unsigned char *buf, int count); 50 - static int ir_write_room(struct tty_struct *tty); 50 + static unsigned int ir_write_room(struct tty_struct *tty); 51 51 static void ir_write_bulk_callback(struct urb *urb); 52 52 static void ir_process_read_urb(struct urb *urb); 53 53 static void ir_set_termios(struct tty_struct *tty, ··· 339 339 usb_serial_port_softint(port); 340 340 } 341 341 342 - static int ir_write_room(struct tty_struct *tty) 342 + static unsigned int ir_write_room(struct tty_struct *tty) 343 343 { 344 344 struct usb_serial_port *port = tty->driver_data; 345 - int count = 0; 345 + unsigned int count = 0; 346 346 347 347 if (port->bulk_out_size == 0) 348 348 return 0;
+2 -2
drivers/usb/serial/keyspan.c
··· 1453 1453 } 1454 1454 } 1455 1455 1456 - static int keyspan_write_room(struct tty_struct *tty) 1456 + static unsigned int keyspan_write_room(struct tty_struct *tty) 1457 1457 { 1458 1458 struct usb_serial_port *port = tty->driver_data; 1459 1459 struct keyspan_port_private *p_priv; 1460 1460 const struct keyspan_device_details *d_details; 1461 1461 int flip; 1462 - int data_len; 1462 + unsigned int data_len; 1463 1463 struct urb *this_urb; 1464 1464 1465 1465 p_priv = usb_get_serial_port_data(port);
+2 -2
drivers/usb/serial/kobil_sct.c
··· 53 53 static void kobil_close(struct usb_serial_port *port); 54 54 static int kobil_write(struct tty_struct *tty, struct usb_serial_port *port, 55 55 const unsigned char *buf, int count); 56 - static int kobil_write_room(struct tty_struct *tty); 56 + static unsigned int kobil_write_room(struct tty_struct *tty); 57 57 static int kobil_ioctl(struct tty_struct *tty, 58 58 unsigned int cmd, unsigned long arg); 59 59 static int kobil_tiocmget(struct tty_struct *tty); ··· 358 358 } 359 359 360 360 361 - static int kobil_write_room(struct tty_struct *tty) 361 + static unsigned int kobil_write_room(struct tty_struct *tty) 362 362 { 363 363 /* FIXME */ 364 364 return 8;
+6 -6
drivers/usb/serial/metro-usb.c
··· 109 109 struct usb_serial_port *port = urb->context; 110 110 struct metrousb_private *metro_priv = usb_get_serial_port_data(port); 111 111 unsigned char *data = urb->transfer_buffer; 112 + unsigned long flags; 112 113 int throttled = 0; 113 114 int result = 0; 114 - unsigned long flags = 0; 115 115 116 116 dev_dbg(&port->dev, "%s\n", __func__); 117 117 ··· 171 171 { 172 172 struct usb_serial *serial = port->serial; 173 173 struct metrousb_private *metro_priv = usb_get_serial_port_data(port); 174 - unsigned long flags = 0; 174 + unsigned long flags; 175 175 int result = 0; 176 176 177 177 /* Set the private data information for the port. */ ··· 268 268 { 269 269 struct usb_serial_port *port = tty->driver_data; 270 270 struct metrousb_private *metro_priv = usb_get_serial_port_data(port); 271 - unsigned long flags = 0; 271 + unsigned long flags; 272 272 273 273 /* Set the private information for the port to stop reading data. */ 274 274 spin_lock_irqsave(&metro_priv->lock, flags); ··· 281 281 unsigned long control_state = 0; 282 282 struct usb_serial_port *port = tty->driver_data; 283 283 struct metrousb_private *metro_priv = usb_get_serial_port_data(port); 284 - unsigned long flags = 0; 284 + unsigned long flags; 285 285 286 286 spin_lock_irqsave(&metro_priv->lock, flags); 287 287 control_state = metro_priv->control_state; ··· 296 296 struct usb_serial_port *port = tty->driver_data; 297 297 struct usb_serial *serial = port->serial; 298 298 struct metrousb_private *metro_priv = usb_get_serial_port_data(port); 299 - unsigned long flags = 0; 299 + unsigned long flags; 300 300 unsigned long control_state = 0; 301 301 302 302 dev_dbg(&port->dev, "%s - set=%d, clear=%d\n", __func__, set, clear); ··· 323 323 { 324 324 struct usb_serial_port *port = tty->driver_data; 325 325 struct metrousb_private *metro_priv = usb_get_serial_port_data(port); 326 - unsigned long flags = 0; 326 + unsigned long flags; 327 327 int result = 0; 328 328 329 329 /* Set the private information for the port to resume reading data. */
+8 -21
drivers/usb/serial/mos7720.c
··· 945 945 * this function is called by the tty driver when it wants to know how many 946 946 * bytes of data we currently have outstanding in the port (data that has 947 947 * been written, but hasn't made it out the port yet) 948 - * If successful, we return the number of bytes left to be written in the 949 - * system, 950 - * Otherwise we return a negative error number. 951 948 */ 952 - static int mos7720_chars_in_buffer(struct tty_struct *tty) 949 + static unsigned int mos7720_chars_in_buffer(struct tty_struct *tty) 953 950 { 954 951 struct usb_serial_port *port = tty->driver_data; 952 + struct moschip_port *mos7720_port = usb_get_serial_port_data(port); 955 953 int i; 956 - int chars = 0; 957 - struct moschip_port *mos7720_port; 958 - 959 - mos7720_port = usb_get_serial_port_data(port); 960 - if (mos7720_port == NULL) 961 - return 0; 954 + unsigned int chars = 0; 962 955 963 956 for (i = 0; i < NUM_URBS; ++i) { 964 957 if (mos7720_port->write_urb_pool[i] && 965 958 mos7720_port->write_urb_pool[i]->status == -EINPROGRESS) 966 959 chars += URB_TRANSFER_BUFFER_SIZE; 967 960 } 968 - dev_dbg(&port->dev, "%s - returns %d\n", __func__, chars); 961 + dev_dbg(&port->dev, "%s - returns %u\n", __func__, chars); 969 962 return chars; 970 963 } 971 964 ··· 1023 1030 * mos7720_write_room 1024 1031 * this function is called by the tty driver when it wants to know how many 1025 1032 * bytes of data we can accept for a specific port. 1026 - * If successful, we return the amount of room that we have for this port 1027 - * Otherwise we return a negative error number. 
1028 1033 */ 1029 - static int mos7720_write_room(struct tty_struct *tty) 1034 + static unsigned int mos7720_write_room(struct tty_struct *tty) 1030 1035 { 1031 1036 struct usb_serial_port *port = tty->driver_data; 1032 - struct moschip_port *mos7720_port; 1033 - int room = 0; 1037 + struct moschip_port *mos7720_port = usb_get_serial_port_data(port); 1038 + unsigned int room = 0; 1034 1039 int i; 1035 - 1036 - mos7720_port = usb_get_serial_port_data(port); 1037 - if (mos7720_port == NULL) 1038 - return 0; 1039 1040 1040 1041 /* FIXME: Locking */ 1041 1042 for (i = 0; i < NUM_URBS; ++i) { ··· 1038 1051 room += URB_TRANSFER_BUFFER_SIZE; 1039 1052 } 1040 1053 1041 - dev_dbg(&port->dev, "%s - returns %d\n", __func__, room); 1054 + dev_dbg(&port->dev, "%s - returns %u\n", __func__, room); 1042 1055 return room; 1043 1056 } 1044 1057
+6 -11
drivers/usb/serial/mos7840.c
··· 730 730 * this function is called by the tty driver when it wants to know how many 731 731 * bytes of data we currently have outstanding in the port (data that has 732 732 * been written, but hasn't made it out the port yet) 733 - * If successful, we return the number of bytes left to be written in the 734 - * system, 735 - * Otherwise we return zero. 736 733 *****************************************************************************/ 737 734 738 - static int mos7840_chars_in_buffer(struct tty_struct *tty) 735 + static unsigned int mos7840_chars_in_buffer(struct tty_struct *tty) 739 736 { 740 737 struct usb_serial_port *port = tty->driver_data; 741 738 struct moschip_port *mos7840_port = usb_get_serial_port_data(port); 742 739 int i; 743 - int chars = 0; 740 + unsigned int chars = 0; 744 741 unsigned long flags; 745 742 746 743 spin_lock_irqsave(&mos7840_port->pool_lock, flags); ··· 748 751 } 749 752 } 750 753 spin_unlock_irqrestore(&mos7840_port->pool_lock, flags); 751 - dev_dbg(&port->dev, "%s - returns %d\n", __func__, chars); 754 + dev_dbg(&port->dev, "%s - returns %u\n", __func__, chars); 752 755 return chars; 753 756 754 757 } ··· 811 814 * mos7840_write_room 812 815 * this function is called by the tty driver when it wants to know how many 813 816 * bytes of data we can accept for a specific port. 814 - * If successful, we return the amount of room that we have for this port 815 - * Otherwise we return a negative error number. 
816 817 *****************************************************************************/ 817 818 818 - static int mos7840_write_room(struct tty_struct *tty) 819 + static unsigned int mos7840_write_room(struct tty_struct *tty) 819 820 { 820 821 struct usb_serial_port *port = tty->driver_data; 821 822 struct moschip_port *mos7840_port = usb_get_serial_port_data(port); 822 823 int i; 823 - int room = 0; 824 + unsigned int room = 0; 824 825 unsigned long flags; 825 826 826 827 spin_lock_irqsave(&mos7840_port->pool_lock, flags); ··· 829 834 spin_unlock_irqrestore(&mos7840_port->pool_lock, flags); 830 835 831 836 room = (room == 0) ? 0 : room - URB_TRANSFER_BUFFER_SIZE + 1; 832 - dev_dbg(&mos7840_port->port->dev, "%s - returns %d\n", __func__, room); 837 + dev_dbg(&mos7840_port->port->dev, "%s - returns %u\n", __func__, room); 833 838 return room; 834 839 835 840 }
+3 -3
drivers/usb/serial/opticon.c
···
         return ret;
 }

-static int opticon_write_room(struct tty_struct *tty)
+static unsigned int opticon_write_room(struct tty_struct *tty)
 {
         struct usb_serial_port *port = tty->driver_data;
         struct opticon_private *priv = usb_get_serial_port_data(port);
···
         return 2048;
 }

-static int opticon_chars_in_buffer(struct tty_struct *tty)
+static unsigned int opticon_chars_in_buffer(struct tty_struct *tty)
 {
         struct usb_serial_port *port = tty->driver_data;
         struct opticon_private *priv = usb_get_serial_port_data(port);
         unsigned long flags;
-        int count;
+        unsigned int count;

         spin_lock_irqsave(&priv->lock, flags);
         count = priv->outstanding_bytes;
+6 -6
drivers/usb/serial/oti6858.c
···
 static void oti6858_write_bulk_callback(struct urb *urb);
 static int oti6858_write(struct tty_struct *tty, struct usb_serial_port *port,
                         const unsigned char *buf, int count);
-static int oti6858_write_room(struct tty_struct *tty);
-static int oti6858_chars_in_buffer(struct tty_struct *tty);
+static unsigned int oti6858_write_room(struct tty_struct *tty);
+static unsigned int oti6858_chars_in_buffer(struct tty_struct *tty);
 static int oti6858_tiocmget(struct tty_struct *tty);
 static int oti6858_tiocmset(struct tty_struct *tty,
                         unsigned int set, unsigned int clear);
···
         return count;
 }

-static int oti6858_write_room(struct tty_struct *tty)
+static unsigned int oti6858_write_room(struct tty_struct *tty)
 {
         struct usb_serial_port *port = tty->driver_data;
-        int room = 0;
+        unsigned int room;
         unsigned long flags;

         spin_lock_irqsave(&port->lock, flags);
···
         return room;
 }

-static int oti6858_chars_in_buffer(struct tty_struct *tty)
+static unsigned int oti6858_chars_in_buffer(struct tty_struct *tty)
 {
         struct usb_serial_port *port = tty->driver_data;
-        int chars = 0;
+        unsigned int chars;
         unsigned long flags;

         spin_lock_irqsave(&port->lock, flags);
+3 -3
drivers/usb/serial/quatech2.c
···

 }

-static int qt2_write_room(struct tty_struct *tty)
+static unsigned int qt2_write_room(struct tty_struct *tty)
 {
         struct usb_serial_port *port = tty->driver_data;
         struct qt2_port_private *port_priv;
-        unsigned long flags = 0;
-        int r;
+        unsigned long flags;
+        unsigned int r;

         port_priv = usb_get_serial_port_data(port);

+4 -4
drivers/usb/serial/sierra.c
···
         }
 }

-static int sierra_write_room(struct tty_struct *tty)
+static unsigned int sierra_write_room(struct tty_struct *tty)
 {
         struct usb_serial_port *port = tty->driver_data;
         struct sierra_port_private *portdata = usb_get_serial_port_data(port);
···
         return 2048;
 }

-static int sierra_chars_in_buffer(struct tty_struct *tty)
+static unsigned int sierra_chars_in_buffer(struct tty_struct *tty)
 {
         struct usb_serial_port *port = tty->driver_data;
         struct sierra_port_private *portdata = usb_get_serial_port_data(port);
         unsigned long flags;
-        int chars;
+        unsigned int chars;

         /* NOTE: This overcounts somewhat. */
         spin_lock_irqsave(&portdata->lock, flags);
         chars = portdata->outstanding_urbs * MAX_TRANSFER;
         spin_unlock_irqrestore(&portdata->lock, flags);

-        dev_dbg(&port->dev, "%s - %d\n", __func__, chars);
+        dev_dbg(&port->dev, "%s - %u\n", __func__, chars);

         return chars;
 }
+8 -8
drivers/usb/serial/ti_usb_3410_5052.c
···
 static void ti_close(struct usb_serial_port *port);
 static int ti_write(struct tty_struct *tty, struct usb_serial_port *port,
                 const unsigned char *data, int count);
-static int ti_write_room(struct tty_struct *tty);
-static int ti_chars_in_buffer(struct tty_struct *tty);
+static unsigned int ti_write_room(struct tty_struct *tty);
+static unsigned int ti_chars_in_buffer(struct tty_struct *tty);
 static bool ti_tx_empty(struct usb_serial_port *port);
 static void ti_throttle(struct tty_struct *tty);
 static void ti_unthrottle(struct tty_struct *tty);
···
 }


-static int ti_write_room(struct tty_struct *tty)
+static unsigned int ti_write_room(struct tty_struct *tty)
 {
         struct usb_serial_port *port = tty->driver_data;
         struct ti_port *tport = usb_get_serial_port_data(port);
-        int room = 0;
+        unsigned int room;
         unsigned long flags;

         spin_lock_irqsave(&tport->tp_lock, flags);
         room = kfifo_avail(&port->write_fifo);
         spin_unlock_irqrestore(&tport->tp_lock, flags);

-        dev_dbg(&port->dev, "%s - returns %d\n", __func__, room);
+        dev_dbg(&port->dev, "%s - returns %u\n", __func__, room);
         return room;
 }


-static int ti_chars_in_buffer(struct tty_struct *tty)
+static unsigned int ti_chars_in_buffer(struct tty_struct *tty)
 {
         struct usb_serial_port *port = tty->driver_data;
         struct ti_port *tport = usb_get_serial_port_data(port);
-        int chars = 0;
+        unsigned int chars;
         unsigned long flags;

         spin_lock_irqsave(&tport->tp_lock, flags);
         chars = kfifo_len(&port->write_fifo);
         spin_unlock_irqrestore(&tport->tp_lock, flags);

-        dev_dbg(&port->dev, "%s - returns %d\n", __func__, chars);
+        dev_dbg(&port->dev, "%s - returns %u\n", __func__, chars);
         return chars;
 }

+2 -2
drivers/usb/serial/usb-wwan.h
···
 extern void usb_wwan_close(struct usb_serial_port *port);
 extern int usb_wwan_port_probe(struct usb_serial_port *port);
 extern void usb_wwan_port_remove(struct usb_serial_port *port);
-extern int usb_wwan_write_room(struct tty_struct *tty);
+extern unsigned int usb_wwan_write_room(struct tty_struct *tty);
 extern int usb_wwan_tiocmget(struct tty_struct *tty);
 extern int usb_wwan_tiocmset(struct tty_struct *tty,
                         unsigned int set, unsigned int clear);
 extern int usb_wwan_write(struct tty_struct *tty, struct usb_serial_port *port,
                         const unsigned char *buf, int count);
-extern int usb_wwan_chars_in_buffer(struct tty_struct *tty);
+extern unsigned int usb_wwan_chars_in_buffer(struct tty_struct *tty);
 #ifdef CONFIG_PM
 extern int usb_wwan_suspend(struct usb_serial *serial, pm_message_t message);
 extern int usb_wwan_resume(struct usb_serial *serial);
+6 -6
drivers/usb/serial/usb_wwan.c
···
         }
 }

-int usb_wwan_write_room(struct tty_struct *tty)
+unsigned int usb_wwan_write_room(struct tty_struct *tty)
 {
         struct usb_serial_port *port = tty->driver_data;
         struct usb_wwan_port_private *portdata;
         int i;
-        int data_len = 0;
+        unsigned int data_len = 0;
         struct urb *this_urb;

         portdata = usb_get_serial_port_data(port);
···
                 data_len += OUT_BUFLEN;
         }

-        dev_dbg(&port->dev, "%s: %d\n", __func__, data_len);
+        dev_dbg(&port->dev, "%s: %u\n", __func__, data_len);
         return data_len;
 }
 EXPORT_SYMBOL(usb_wwan_write_room);

-int usb_wwan_chars_in_buffer(struct tty_struct *tty)
+unsigned int usb_wwan_chars_in_buffer(struct tty_struct *tty)
 {
         struct usb_serial_port *port = tty->driver_data;
         struct usb_wwan_port_private *portdata;
         int i;
-        int data_len = 0;
+        unsigned int data_len = 0;
         struct urb *this_urb;

         portdata = usb_get_serial_port_data(port);
···
                 if (this_urb && test_bit(i, &portdata->out_busy))
                         data_len += this_urb->transfer_buffer_length;
         }
-        dev_dbg(&port->dev, "%s: %d\n", __func__, data_len);
+        dev_dbg(&port->dev, "%s: %u\n", __func__, data_len);
         return data_len;
 }
 EXPORT_SYMBOL(usb_wwan_chars_in_buffer);
+3 -1
drivers/usb/typec/class.c
···
         int ret;

         alt = kzalloc(sizeof(*alt), GFP_KERNEL);
-        if (!alt)
+        if (!alt) {
+                altmode_id_remove(parent, id);
                 return ERR_PTR(-ENOMEM);
+        }

         alt->adev.svid = desc->svid;
         alt->adev.mode = desc->mode;
+23 -16
drivers/usb/typec/mux.c
···
 #include "class.h"
 #include "mux.h"

-static bool dev_name_ends_with(struct device *dev, const char *suffix)
-{
-        const char *name = dev_name(dev);
-        const int name_len = strlen(name);
-        const int suffix_len = strlen(suffix);
-
-        if (suffix_len > name_len)
-                return false;
-
-        return strcmp(name + (name_len - suffix_len), suffix) == 0;
-}
-
 static int switch_fwnode_match(struct device *dev, const void *fwnode)
 {
-        return dev_fwnode(dev) == fwnode && dev_name_ends_with(dev, "-switch");
+        if (!is_typec_switch(dev))
+                return 0;
+
+        return dev_fwnode(dev) == fwnode;
 }

 static void *typec_switch_match(struct fwnode_handle *fwnode, const char *id,
···
 {
         struct device *dev;

+        /*
+         * Device graph (OF graph) does not give any means to identify the
+         * device type or the device class of the remote port parent that @fwnode
+         * represents, so in order to identify the type or the class of @fwnode
+         * an additional device property is needed. With typec switches the
+         * property is named "orientation-switch" (@id). The value of the device
+         * property is ignored.
+         */
         if (id && !fwnode_property_present(fwnode, id))
                 return NULL;

+        /*
+         * At this point we are sure that @fwnode is a typec switch in all
+         * cases. If the switch hasn't yet been registered for some reason, the
+         * function "defers probe" for now.
+         */
         dev = class_find_device(&typec_mux_class, NULL, fwnode,
                                 switch_fwnode_match);

···
         kfree(to_typec_switch(dev));
 }

-static const struct device_type typec_switch_dev_type = {
+const struct device_type typec_switch_dev_type = {
         .name = "orientation_switch",
         .release = typec_switch_release,
 };
···

 static int mux_fwnode_match(struct device *dev, const void *fwnode)
 {
-        return dev_fwnode(dev) == fwnode && dev_name_ends_with(dev, "-mux");
+        if (!is_typec_mux(dev))
+                return 0;
+
+        return dev_fwnode(dev) == fwnode;
 }

 static void *typec_mux_match(struct fwnode_handle *fwnode, const char *id,
···
         kfree(to_typec_mux(dev));
 }

-static const struct device_type typec_mux_dev_type = {
+const struct device_type typec_mux_dev_type = {
         .name = "mode_switch",
         .release = typec_mux_release,
 };
+6
drivers/usb/typec/mux.h
···
 #define to_typec_switch(_dev_) container_of(_dev_, struct typec_switch, dev)
 #define to_typec_mux(_dev_) container_of(_dev_, struct typec_mux, dev)

+extern const struct device_type typec_switch_dev_type;
+extern const struct device_type typec_mux_dev_type;
+
+#define is_typec_switch(dev) ((dev)->type == &typec_switch_dev_type)
+#define is_typec_mux(dev) ((dev)->type == &typec_mux_dev_type)
+
 #endif /* __USB_TYPEC_MUX__ */
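The mux.c change replaces matching on a device-name suffix ("-switch"/"-mux") with a check of the device's `device_type`, which mux.h now exports. The idea generalizes: tag each object with a pointer to a shared static type descriptor, and class membership becomes a pointer comparison. A self-contained sketch (the struct layouts are simplified stand-ins, not the kernel's definitions):

```c
#include <assert.h>

/* Simplified model of the driver-core device_type tagging used above. */
struct device_type {
        const char *name;
};

struct device {
        const char *name;
        const struct device_type *type;
};

static const struct device_type switch_type = { .name = "orientation_switch" };
static const struct device_type mux_type    = { .name = "mode_switch" };

/* Pointer comparison: robust against devices whose names merely happen
 * to end in "-switch", which the old suffix match could not rule out. */
static int is_switch(const struct device *dev)
{
        return dev->type == &switch_type;
}
```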
+23 -5
drivers/usb/typec/mux/intel_pmc_mux.c
···
 /*
  * Input Output Manager (IOM) PORT STATUS
  */
-#define IOM_PORT_STATUS_OFFSET                          0x560
-
 #define IOM_PORT_STATUS_ACTIVITY_TYPE_MASK              GENMASK(9, 6)
 #define IOM_PORT_STATUS_ACTIVITY_TYPE_SHIFT             6
 #define IOM_PORT_STATUS_ACTIVITY_TYPE_USB               0x03
···
         struct pmc_usb_port *port;
         struct acpi_device *iom_adev;
         void __iomem *iom_base;
+        u32 iom_port_status_offset;
 };

 static void update_port_status(struct pmc_usb_port *port)
···
         /* SoC expects the USB Type-C port numbers to start with 0 */
         port_num = port->usb3_port - 1;

-        port->iom_status = readl(port->pmc->iom_base + IOM_PORT_STATUS_OFFSET +
+        port->iom_status = readl(port->pmc->iom_base +
+                                 port->pmc->iom_port_status_offset +
                                  port_num * sizeof(u32));
 }
···
         return !acpi_dev_resource_memory(res, &r);
 }

+/* IOM ACPI IDs and IOM_PORT_STATUS_OFFSET */
+static const struct acpi_device_id iom_acpi_ids[] = {
+        /* TigerLake */
+        { "INTC1072", 0x560, },
+
+        /* AlderLake */
+        { "INTC1079", 0x160, },
+        {}
+};
+
 static int pmc_usb_probe_iom(struct pmc_usb *pmc)
 {
         struct list_head resource_list;
         struct resource_entry *rentry;
-        struct acpi_device *adev;
+        static const struct acpi_device_id *dev_id;
+        struct acpi_device *adev = NULL;
         int ret;

-        adev = acpi_dev_get_first_match_dev("INTC1072", NULL, -1);
+        for (dev_id = &iom_acpi_ids[0]; dev_id->id[0]; dev_id++) {
+                if (acpi_dev_present(dev_id->id, NULL, -1)) {
+                        pmc->iom_port_status_offset = (u32)dev_id->driver_data;
+                        adev = acpi_dev_get_first_match_dev(dev_id->id, NULL, -1);
+                        break;
+                }
+        }
+
         if (!adev)
                 return -ENODEV;

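The probe change above generalizes a hard-coded ACPI ID into a table walk where each entry's `driver_data` carries the per-platform IOM register offset. Stripped of the ACPI machinery, the lookup reduces to the sketch below (the string compare stands in for `acpi_dev_present()`; the `-1` return stands in for `-ENODEV`):

```c
#include <assert.h>
#include <string.h>

/* Table of platform IDs with a per-entry register offset, mirroring
 * iom_acpi_ids[] in the patch: TigerLake uses 0x560, AlderLake 0x160. */
struct id_entry {
        const char *id;
        unsigned long driver_data;      /* IOM port-status offset */
};

static const struct id_entry iom_ids[] = {
        { "INTC1072", 0x560 },  /* TigerLake */
        { "INTC1079", 0x160 },  /* AlderLake */
        { NULL, 0 }             /* sentinel */
};

/* Walk the table until the sentinel; return the matching offset,
 * or -1 when no listed device is present. */
static long lookup_iom_offset(const char *present_id)
{
        const struct id_entry *e;

        for (e = &iom_ids[0]; e->id; e++)
                if (strcmp(e->id, present_id) == 0)
                        return (long)e->driver_data;
        return -1;
}
```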
+40 -6
drivers/usb/typec/tcpm/tcpci.c
···
 #define PD_RETRY_COUNT_DEFAULT                  3
 #define PD_RETRY_COUNT_3_0_OR_HIGHER            2
 #define AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV     3500
-#define AUTO_DISCHARGE_PD_HEADROOM_MV           850
-#define AUTO_DISCHARGE_PPS_HEADROOM_MV          1250
+#define VSINKPD_MIN_IR_DROP_MV                  750
+#define VSRC_NEW_MIN_PERCENT                    95
+#define VSRC_VALID_MIN_MV                       500
+#define VPPS_NEW_MIN_PERCENT                    95
+#define VPPS_VALID_MIN_MV                       100
+#define VSINKDISCONNECT_PD_MIN_PERCENT          90

 #define tcpc_presenting_rd(reg, cc) \
         (!(TCPC_ROLE_CTRL_DRP & (reg)) && \
···
                 return ret;

         return 0;
+}
+
+static int tcpci_apply_rc(struct tcpc_dev *tcpc, enum typec_cc_status cc,
+                          enum typec_cc_polarity polarity)
+{
+        struct tcpci *tcpci = tcpc_to_tcpci(tcpc);
+        unsigned int reg;
+        int ret;
+
+        ret = regmap_read(tcpci->regmap, TCPC_ROLE_CTRL, &reg);
+        if (ret < 0)
+                return ret;
+
+        /*
+         * APPLY_RC state is when ROLE_CONTROL.CC1 != ROLE_CONTROL.CC2 and vbus autodischarge on
+         * disconnect is disabled. Bail out when ROLE_CONTROL.CC1 != ROLE_CONTROL.CC2.
+         */
+        if (((reg & (TCPC_ROLE_CTRL_CC2_MASK << TCPC_ROLE_CTRL_CC2_SHIFT)) >>
+             TCPC_ROLE_CTRL_CC2_SHIFT) !=
+            ((reg & (TCPC_ROLE_CTRL_CC1_MASK << TCPC_ROLE_CTRL_CC1_SHIFT)) >>
+             TCPC_ROLE_CTRL_CC1_SHIFT))
+                return 0;
+
+        return regmap_update_bits(tcpci->regmap, TCPC_ROLE_CTRL, polarity == TYPEC_POLARITY_CC1 ?
+                                  TCPC_ROLE_CTRL_CC2_MASK << TCPC_ROLE_CTRL_CC2_SHIFT :
+                                  TCPC_ROLE_CTRL_CC1_MASK << TCPC_ROLE_CTRL_CC1_SHIFT,
+                                  TCPC_ROLE_CTRL_CC_OPEN);
 }

 static int tcpci_start_toggling(struct tcpc_dev *tcpc,
···
                 threshold = AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV;
         } else if (mode == TYPEC_PWR_MODE_PD) {
                 if (pps_active)
-                        threshold = (95 * requested_vbus_voltage_mv / 100) -
-                                AUTO_DISCHARGE_PD_HEADROOM_MV;
+                        threshold = ((VPPS_NEW_MIN_PERCENT * requested_vbus_voltage_mv / 100) -
+                                     VSINKPD_MIN_IR_DROP_MV - VPPS_VALID_MIN_MV) *
+                                     VSINKDISCONNECT_PD_MIN_PERCENT / 100;
                 else
-                        threshold = (95 * requested_vbus_voltage_mv / 100) -
-                                AUTO_DISCHARGE_PPS_HEADROOM_MV;
+                        threshold = ((VSRC_NEW_MIN_PERCENT * requested_vbus_voltage_mv / 100) -
+                                     VSINKPD_MIN_IR_DROP_MV - VSRC_VALID_MIN_MV) *
+                                     VSINKDISCONNECT_PD_MIN_PERCENT / 100;
         } else {
                 /* 3.5V for non-pd sink */
                 threshold = AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV;
···
         tcpci->tcpc.get_vbus = tcpci_get_vbus;
         tcpci->tcpc.set_vbus = tcpci_set_vbus;
         tcpci->tcpc.set_cc = tcpci_set_cc;
+        tcpci->tcpc.apply_rc = tcpci_apply_rc;
         tcpci->tcpc.get_cc = tcpci_get_cc;
         tcpci->tcpc.set_polarity = tcpci_set_polarity;
         tcpci->tcpc.set_vconn = tcpci_set_vconn;
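The tcpci.c hunk above replaces fixed headroom constants with a threshold derived from PD specification minimums: take the lowest voltage the source may legitimately present (95% of the request), subtract the worst-case sink IR drop and the valid-voltage margin, then arm the comparator at 90% of that. The PPS branch of that arithmetic, lifted into a plain function for inspection:

```c
#include <assert.h>

/* Constants as introduced by the patch (millivolts / percent). */
#define VSINKPD_MIN_IR_DROP_MV          750
#define VPPS_NEW_MIN_PERCENT            95
#define VPPS_VALID_MIN_MV               100
#define VSINKDISCONNECT_PD_MIN_PERCENT  90

/* Auto-discharge disconnect threshold for an active PPS contract,
 * using the same integer arithmetic as the pps_active branch above. */
static unsigned int pps_discharge_threshold_mv(unsigned int requested_mv)
{
        return ((VPPS_NEW_MIN_PERCENT * requested_mv / 100) -
                VSINKPD_MIN_IR_DROP_MV - VPPS_VALID_MIN_MV) *
                VSINKDISCONNECT_PD_MIN_PERCENT / 100;
}
```

For a 5000 mV request this yields (4750 − 750 − 100) × 90 / 100 = 3510 mV.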
+65 -21
drivers/usb/typec/tcpm/tcpm.c
···
         port->tcpc->set_cc(port->tcpc, cc);
 }

+static int tcpm_enable_auto_vbus_discharge(struct tcpm_port *port, bool enable)
+{
+        int ret = 0;
+
+        if (port->tcpc->enable_auto_vbus_discharge) {
+                ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, enable);
+                tcpm_log_force(port, "%s vbus discharge ret:%d", enable ? "enable" : "disable",
+                               ret);
+                if (!ret)
+                        port->auto_vbus_discharge_enabled = enable;
+        }
+
+        return ret;
+}
+
+static void tcpm_apply_rc(struct tcpm_port *port)
+{
+        /*
+         * TCPCI: Move to APPLY_RC state to prevent disconnect during PR_SWAP
+         * when Vbus auto discharge on disconnect is enabled.
+         */
+        if (port->tcpc->enable_auto_vbus_discharge && port->tcpc->apply_rc) {
+                tcpm_log(port, "Apply_RC");
+                port->tcpc->apply_rc(port->tcpc, port->cc_req, port->polarity);
+                tcpm_enable_auto_vbus_discharge(port, false);
+        }
+}
+
 /*
  * Determine RP value to set based on maximum current supported
  * by a port if configured as source.
···
         } else {
                 next_state = SNK_WAIT_CAPABILITIES;
         }
+
+        /* Threshold was relaxed before sending Request. Restore it back. */
+        tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
+                                               port->pps_data.active,
+                                               port->supply_voltage);
         tcpm_set_state(port, next_state, 0);
         break;
 case SNK_NEGOTIATE_PPS_CAPABILITIES:
···
         if (port->data_role == TYPEC_HOST &&
             port->send_discover)
                 port->vdm_sm_running = true;
+
+        /* Threshold was relaxed before sending Request. Restore it back. */
+        tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
+                                               port->pps_data.active,
+                                               port->supply_voltage);

         tcpm_set_state(port, SNK_READY, 0);
         break;
···
         if (ret < 0)
                 return ret;

+        /*
+         * Relax the threshold as voltage will be adjusted after Accept Message plus tSrcTransition.
+         * It is safer to modify the threshold here.
+         */
+        tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, 0);
+
         memset(&msg, 0, sizeof(msg));
         msg.header = PD_HEADER_LE(PD_DATA_REQUEST,
                                   port->pwr_role,
···
         ret = tcpm_pd_build_pps_request(port, &rdo);
         if (ret < 0)
                 return ret;
+
+        /* Relax the threshold as voltage will be adjusted right after Accept Message. */
+        tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, 0);

         memset(&msg, 0, sizeof(msg));
         msg.header = PD_HEADER_LE(PD_DATA_REQUEST,
···
         if (ret < 0)
                 return ret;

-        if (port->tcpc->enable_auto_vbus_discharge) {
-                ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, true);
-                tcpm_log_force(port, "enable vbus discharge ret:%d", ret);
-                if (!ret)
-                        port->auto_vbus_discharge_enabled = true;
-        }
+        tcpm_enable_auto_vbus_discharge(port, true);

         ret = tcpm_set_roles(port, true, TYPEC_SOURCE, tcpm_data_role_for_source(port));
         if (ret < 0)
···

 static void tcpm_reset_port(struct tcpm_port *port)
 {
-        int ret;
-
-        if (port->tcpc->enable_auto_vbus_discharge) {
-                ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, false);
-                tcpm_log_force(port, "Disable vbus discharge ret:%d", ret);
-                if (!ret)
-                        port->auto_vbus_discharge_enabled = false;
-        }
+        tcpm_enable_auto_vbus_discharge(port, false);
         port->in_ams = false;
         port->ams = NONE_AMS;
         port->vdm_sm_running = false;
···
         if (ret < 0)
                 return ret;

-        if (port->tcpc->enable_auto_vbus_discharge) {
-                tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, VSAFE5V);
-                ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, true);
-                tcpm_log_force(port, "enable vbus discharge ret:%d", ret);
-                if (!ret)
-                        port->auto_vbus_discharge_enabled = true;
-        }
+        tcpm_enable_auto_vbus_discharge(port, true);

         ret = tcpm_set_roles(port, true, TYPEC_SINK, tcpm_data_role_for_sink(port));
         if (ret < 0)
···
         port->hard_reset_count = 0;
         ret = tcpm_pd_send_request(port);
         if (ret < 0) {
+                /* Restore back to the original state */
+                tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
+                                                       port->pps_data.active,
+                                                       port->supply_voltage);
                 /* Let the Source send capabilities again. */
                 tcpm_set_state(port, SNK_WAIT_CAPABILITIES, 0);
         } else {
···
 case SNK_NEGOTIATE_PPS_CAPABILITIES:
         ret = tcpm_pd_send_pps_request(port);
         if (ret < 0) {
+                /* Restore back to the original state */
+                tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
+                                                       port->pps_data.active,
+                                                       port->supply_voltage);
                 port->pps_status = ret;
                 /*
                  * If this was called due to updates to sink
···
         tcpm_set_state(port, ready_state(port), 0);
         break;
 case PR_SWAP_START:
+        tcpm_apply_rc(port);
         if (port->pwr_role == TYPEC_SOURCE)
                 tcpm_set_state(port, PR_SWAP_SRC_SNK_TRANSITION_OFF,
                                PD_T_SRC_TRANSITION);
···
         tcpm_set_state(port, ERROR_RECOVERY, PD_T_PS_SOURCE_ON_PRS);
         break;
 case PR_SWAP_SRC_SNK_SINK_ON:
+        tcpm_enable_auto_vbus_discharge(port, true);
         /* Set the vbus disconnect threshold for implicit contract */
         tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, VSAFE5V);
         tcpm_set_state(port, SNK_STARTUP, 0);
···
                        PD_T_PS_SOURCE_OFF);
         break;
 case PR_SWAP_SNK_SRC_SOURCE_ON:
+        tcpm_enable_auto_vbus_discharge(port, true);
         tcpm_set_cc(port, tcpm_rp_cc(port));
         tcpm_set_vbus(port, true);
         /*
···
                 else
                         tcpm_set_state(port, SNK_UNATTACHED, 0);
         }
+        break;
+case PR_SWAP_SNK_SRC_SINK_OFF:
+case PR_SWAP_SNK_SRC_SOURCE_ON:
+        /* Do nothing, vsafe0v is expected during transition */
         break;
 default:
         if (port->pwr_role == TYPEC_SINK && port->auto_vbus_discharge_enabled)
+1 -1
drivers/usb/typec/tcpm/wcove.c
···
 // SPDX-License-Identifier: GPL-2.0
-/**
+/*
  * typec_wcove.c - WhiskeyCove PMIC USB Type-C PHY driver
  *
  * Copyright (C) 2017 Intel Corporation
+2 -2
drivers/usb/typec/ucsi/ucsi.c
···
                 goto err_reset;
         }

-        /* Allocate the connectors. Released in ucsi_unregister_ppm() */
+        /* Allocate the connectors. Released in ucsi_unregister() */
         ucsi->connector = kcalloc(ucsi->cap.num_connectors + 1,
                                   sizeof(*ucsi->connector), GFP_KERNEL);
         if (!ucsi->connector) {
···
 EXPORT_SYMBOL_GPL(ucsi_get_drvdata);

 /**
- * ucsi_get_drvdata - Assign private driver data pointer
+ * ucsi_set_drvdata - Assign private driver data pointer
  * @ucsi: UCSI interface
  * @data: Private data pointer
  */
+37
include/linux/device.h
···
 };

 /**
+ * enum device_removable - Whether the device is removable. The criteria for a
+ * device to be classified as removable is determined by its subsystem or bus.
+ * @DEVICE_REMOVABLE_NOT_SUPPORTED: This attribute is not supported for this
+ *                                  device (default).
+ * @DEVICE_REMOVABLE_UNKNOWN: Device location is Unknown.
+ * @DEVICE_FIXED: Device is not removable by the user.
+ * @DEVICE_REMOVABLE: Device is removable by the user.
+ */
+enum device_removable {
+        DEVICE_REMOVABLE_NOT_SUPPORTED = 0, /* must be 0 */
+        DEVICE_REMOVABLE_UNKNOWN,
+        DEVICE_FIXED,
+        DEVICE_REMOVABLE,
+};
+
+/**
  * struct dev_links_info - Device data related to device links.
  * @suppliers: List of links to supplier devices.
  * @consumers: List of links to consumer devices.
···
  * device (i.e. the bus driver that discovered the device).
  * @iommu_group: IOMMU group the device belongs to.
  * @iommu: Per device generic IOMMU runtime data
+ * @removable: Whether the device can be removed from the system. This
+ *             should be set by the subsystem / bus driver that discovered
+ *             the device.
  *
  * @offline_disabled: If set, the device is permanently online.
  * @offline: Set after successful invocation of bus type's .offline().
···
         void (*release)(struct device *dev);
         struct iommu_group *iommu_group;
         struct dev_iommu *iommu;
+
+        enum device_removable removable;

         bool offline_disabled:1;
         bool offline:1;
···
         if (dev->bus && dev->bus->sync_state)
                 return true;
         return false;
+}
+
+static inline void dev_set_removable(struct device *dev,
+                                     enum device_removable removable)
+{
+        dev->removable = removable;
+}
+
+static inline bool dev_is_removable(struct device *dev)
+{
+        return dev->removable == DEVICE_REMOVABLE;
+}
+
+static inline bool dev_removable_is_valid(struct device *dev)
+{
+        return dev->removable != DEVICE_REMOVABLE_NOT_SUPPORTED;
 }

 /*
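The `device_removable` enum above reserves 0 for "not supported" deliberately: a zero-initialized `struct device` then reports an honest default without any subsystem opt-in. The enum and helpers behave the same outside the kernel; a standalone copy with a reduced `struct device` for illustration:

```c
#include <assert.h>

/* Enum values as in the patch; "must be 0" ensures a kzalloc'd device
 * defaults to NOT_SUPPORTED with no explicit initialization. */
enum device_removable {
        DEVICE_REMOVABLE_NOT_SUPPORTED = 0,
        DEVICE_REMOVABLE_UNKNOWN,
        DEVICE_FIXED,
        DEVICE_REMOVABLE,
};

/* Reduced stand-in for struct device. */
struct device {
        enum device_removable removable;
};

static int dev_is_removable(const struct device *dev)
{
        return dev->removable == DEVICE_REMOVABLE;
}

static int dev_removable_is_valid(const struct device *dev)
{
        return dev->removable != DEVICE_REMOVABLE_NOT_SUPPORTED;
}
```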
+9 -1
include/linux/phy/tegra/xusb.h
···
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2016, NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (c) 2016-2020, NVIDIA CORPORATION.  All rights reserved.
  */

 #ifndef PHY_TEGRA_XUSB_H
···
 struct tegra_xusb_padctl;
 struct device;
+enum usb_device_speed;

 struct tegra_xusb_padctl *tegra_xusb_padctl_get(struct device *dev);
 void tegra_xusb_padctl_put(struct tegra_xusb_padctl *padctl);
···
 int tegra_phy_xusb_utmi_port_reset(struct phy *phy);
 int tegra_xusb_padctl_get_usb3_companion(struct tegra_xusb_padctl *padctl,
                                          unsigned int port);
+int tegra_xusb_padctl_enable_phy_sleepwalk(struct tegra_xusb_padctl *padctl, struct phy *phy,
+                                           enum usb_device_speed speed);
+int tegra_xusb_padctl_disable_phy_sleepwalk(struct tegra_xusb_padctl *padctl, struct phy *phy);
+int tegra_xusb_padctl_enable_phy_wake(struct tegra_xusb_padctl *padctl, struct phy *phy);
+int tegra_xusb_padctl_disable_phy_wake(struct tegra_xusb_padctl *padctl, struct phy *phy);
+bool tegra_xusb_padctl_remote_wake_detected(struct tegra_xusb_padctl *padctl, struct phy *phy);
+
 #endif /* PHY_TEGRA_XUSB_H */
+1 -8
include/linux/usb.h
···

 struct usb_tt;

-enum usb_device_removable {
-        USB_DEVICE_REMOVABLE_UNKNOWN = 0,
-        USB_DEVICE_REMOVABLE,
-        USB_DEVICE_FIXED,
-};
-
 enum usb_port_connect_type {
         USB_PORT_CONNECT_TYPE_UNKNOWN = 0,
         USB_PORT_CONNECT_TYPE_HOT_PLUG,
···
 #endif
         struct wusb_dev *wusb_dev;
         int slot_id;
-        enum usb_device_removable removable;
         struct usb2_lpm_parameters l1_params;
         struct usb3_lpm_parameters u1_params;
         struct usb3_lpm_parameters u2_params;
···
  *
  * Note that transfer_buffer must still be set if the controller
  * does not support DMA (as indicated by hcd_uses_dma()) and when talking
- * to root hub. If you have to trasfer between highmem zone and the device
+ * to root hub. If you have to transfer between highmem zone and the device
  * on such controller, create a bounce buffer or bail out with an error.
  * If transfer_buffer cannot be set (is in highmem) and the controller is DMA
  * capable, assign NULL to it, so that usbmon knows not to use the value.
+1 -1
include/linux/usb/composite.h
···
  * @bConfigurationValue: Copied into configuration descriptor.
  * @iConfiguration: Copied into configuration descriptor.
  * @bmAttributes: Copied into configuration descriptor.
- * @MaxPower: Power consumtion in mA. Used to compute bMaxPower in the
+ * @MaxPower: Power consumption in mA. Used to compute bMaxPower in the
  *      configuration descriptor after considering the bus speed.
  * @cdev: assigned by @usb_add_config() before calling @bind(); this is
  *      the device associated with this configuration.
+2 -1
include/linux/usb/gadget.h
···
  * @name:identifier for the endpoint, such as "ep-a" or "ep9in-bulk"
  * @ops: Function pointers used to access hardware-specific operations.
  * @ep_list:the gadget's ep_list holds all of its endpoints
- * @caps:The structure describing types and directions supported by endoint.
+ * @caps:The structure describing types and directions supported by endpoint.
  * @enabled: The current endpoint enabled/disabled state.
  * @claimed: True if this endpoint is claimed by a function.
  * @maxpacket:The maximum packet size used on this endpoint. The initial
···
         void (*udc_set_speed)(struct usb_gadget *, enum usb_device_speed);
         void (*udc_set_ssp_rate)(struct usb_gadget *gadget,
                         enum usb_ssp_rate rate);
+        void (*udc_async_callbacks)(struct usb_gadget *gadget, bool enable);
         struct usb_ep *(*match_ep)(struct usb_gadget *,
                         struct usb_endpoint_descriptor *,
                         struct usb_ss_ep_comp_descriptor *);
+14 -3
include/linux/usb/hcd.h
···
  * USB Host Controller Driver (usb_hcd) framework
  *
  * Since "struct usb_bus" is so thin, you can't share much code in it.
- * This framework is a layer over that, and should be more sharable.
+ * This framework is a layer over that, and should be more shareable.
  */

 /*-------------------------------------------------------------------------*/
···
  * (optional) these hooks allow an HCD to override the default DMA
  * mapping and unmapping routines. In general, they shouldn't be
  * necessary unless the host controller has special DMA requirements,
- * such as alignment contraints. If these are not specified, the
+ * such as alignment constraints. If these are not specified, the
  * general usb_hcd_(un)?map_urb_for_dma functions will be used instead
  * (and it may be a good idea to call these functions in your HCD
  * implementation)
···
         int (*find_raw_port_number)(struct usb_hcd *, int);
         /* Call for power on/off the port if necessary */
         int (*port_power)(struct usb_hcd *hcd, int portnum, bool enable);
-
+        /* Call for SINGLE_STEP_SET_FEATURE Test for USB2 EH certification */
+#define EHSET_TEST_SINGLE_STEP_SET_FEATURE      0x06
+        int (*submit_single_step_set_feature)(struct usb_hcd *,
+                        struct urb *, int);
 };

 static inline int hcd_giveback_urb_in_bh(struct usb_hcd *hcd)
···

 struct platform_device;
 extern void usb_hcd_platform_shutdown(struct platform_device *dev);
+#ifdef CONFIG_USB_HCD_TEST_MODE
+extern int ehset_single_step_set_feature(struct usb_hcd *hcd, int port);
+#else
+static inline int ehset_single_step_set_feature(struct usb_hcd *hcd, int port)
+{
+        return 0;
+}
+#endif /* CONFIG_USB_HCD_TEST_MODE */

 #ifdef CONFIG_USB_PCI
 struct pci_dev;
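The hcd.h addition uses a common kernel idiom: when `CONFIG_USB_HCD_TEST_MODE` is compiled out, callers get a `static inline` no-op that returns 0, so call sites need no `#ifdef`s of their own. A compilable demonstration of the stub shape (the macro and function names here are stand-ins, not the kernel's):

```c
#include <assert.h>

/* Feature-gated API with a no-op fallback. DEMO_TEST_MODE stands in
 * for CONFIG_USB_HCD_TEST_MODE; with it undefined, the inline stub
 * below is what callers compile against. */
#ifdef DEMO_TEST_MODE
int demo_single_step_set_feature(int port);     /* real implementation */
#else
static inline int demo_single_step_set_feature(int port)
{
        (void)port;
        return 0;       /* feature compiled out: succeed as a no-op */
}
#endif
```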
-19
include/linux/usb/isp1760.h
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * board initialization should put one of these into dev->platform_data
- * and place the isp1760 onto platform_bus named "isp1760-hcd".
- */
-
-#ifndef __LINUX_USB_ISP1760_H
-#define __LINUX_USB_ISP1760_H
-
-struct isp1760_platform_data {
-	unsigned is_isp1761:1;			/* Chip is ISP1761 */
-	unsigned bus_width_16:1;		/* 16/32-bit data bus width */
-	unsigned port1_otg:1;			/* Port 1 supports OTG */
-	unsigned analog_oc:1;			/* Analog overcurrent */
-	unsigned dack_polarity_high:1;		/* DACK active high */
-	unsigned dreq_polarity_high:1;		/* DREQ active high */
-};
-
-#endif /* __LINUX_USB_ISP1760_H */
+3 -3
include/linux/usb/otg-fsm.h
···
  * @b_bus_req:	TRUE during the time that the Application running on the
  *		B-device wants to use the bus
  *
- * Auxilary inputs (OTG v1.3 only. Obsolete now.)
+ * Auxiliary inputs (OTG v1.3 only. Obsolete now.)
  * @a_sess_vld:	TRUE if the A-device detects that VBUS is above VA_SESS_VLD
  * @b_bus_suspend:	TRUE when the A-device detects that the B-device has put
  *		the bus into suspend
···
 	int a_bus_req;
 	int b_bus_req;

-	/* Auxilary inputs */
+	/* Auxiliary inputs */
 	int a_sess_vld;
 	int b_bus_resume;
 	int b_bus_suspend;
···
 	int a_bus_req_inf;
 	int a_clr_err_inf;
 	int b_bus_req_inf;
-	/* Auxilary informative variables */
+	/* Auxiliary informative variables */
 	int a_suspend_req_inf;

 	/* Timeout indicator for timers */
+1 -1
include/linux/usb/otg.h
···
  * @dev: Pointer to the given device
  *
  * The function gets phy interface string from property 'dr_mode',
- * and returns the correspondig enum usb_dr_mode
+ * and returns the corresponding enum usb_dr_mode
  */
 extern enum usb_dr_mode usb_get_dr_mode(struct device *dev);
+1 -1
include/linux/usb/quirks.h
···
 #define USB_QUIRK_DELAY_INIT		BIT(6)

 /*
- * For high speed and super speed interupt endpoints, the USB 2.0 and
+ * For high speed and super speed interrupt endpoints, the USB 2.0 and
  * USB 3.0 spec require the interval in microframes
  * (1 microframe = 125 microseconds) to be calculated as
  * interval = 2 ^ (bInterval-1).
+6
include/linux/usb/role.h
···

 void usb_role_switch_set_drvdata(struct usb_role_switch *sw, void *data);
 void *usb_role_switch_get_drvdata(struct usb_role_switch *sw);
+const char *usb_role_string(enum usb_role role);
 #else
 static inline int usb_role_switch_set_role(struct usb_role_switch *sw,
 		enum usb_role role)
···
 static inline void *usb_role_switch_get_drvdata(struct usb_role_switch *sw)
 {
 	return NULL;
 }

+static inline const char *usb_role_string(enum usb_role role)
+{
+	return "unknown";
+}
+
 #endif
+5 -5
include/linux/usb/serial.h
···
 	int  (*write)(struct tty_struct *tty, struct usb_serial_port *port,
 			const unsigned char *buf, int count);
 	/* Called only by the tty layer */
-	int  (*write_room)(struct tty_struct *tty);
+	unsigned int (*write_room)(struct tty_struct *tty);
 	int  (*ioctl)(struct tty_struct *tty,
 			unsigned int cmd, unsigned long arg);
 	void (*get_serial)(struct tty_struct *tty, struct serial_struct *ss);
···
 	void (*set_termios)(struct tty_struct *tty,
 			struct usb_serial_port *port, struct ktermios *old);
 	void (*break_ctl)(struct tty_struct *tty, int break_state);
-	int  (*chars_in_buffer)(struct tty_struct *tty);
+	unsigned int (*chars_in_buffer)(struct tty_struct *tty);
 	void (*wait_until_sent)(struct tty_struct *tty, long timeout);
 	bool (*tx_empty)(struct usb_serial_port *port);
 	void (*throttle)(struct tty_struct *tty);
···
 		const unsigned char *buf, int count);
 void usb_serial_generic_close(struct usb_serial_port *port);
 int usb_serial_generic_resume(struct usb_serial *serial);
-int usb_serial_generic_write_room(struct tty_struct *tty);
-int usb_serial_generic_chars_in_buffer(struct tty_struct *tty);
+unsigned int usb_serial_generic_write_room(struct tty_struct *tty);
+unsigned int usb_serial_generic_chars_in_buffer(struct tty_struct *tty);
 void usb_serial_generic_wait_until_sent(struct tty_struct *tty, long timeout);
 void usb_serial_generic_read_bulk_callback(struct urb *urb);
 void usb_serial_generic_write_bulk_callback(struct urb *urb);
···
 }

 /*
- * Macro for reporting errors in write path to avoid inifinite loop
+ * Macro for reporting errors in write path to avoid infinite loop
  * when port is used as a console.
  */
 #define dev_err_console(usport, fmt, ...)	\
+4
include/linux/usb/tcpm.h
···
  *		For example, some tcpcs may include BC1.2 charger detection
  *		and use that in this case.
  * @set_cc:	Called to set value of CC pins
+ * @apply_rc:	Optional; Needed to move TCPCI based chipset to APPLY_RC state
+ *		as stated by the TCPCI specification.
  * @get_cc:	Called to read current CC pin values
  * @set_polarity:
  *		Called to set polarity
···
 	int (*get_vbus)(struct tcpc_dev *dev);
 	int (*get_current_limit)(struct tcpc_dev *dev);
 	int (*set_cc)(struct tcpc_dev *dev, enum typec_cc_status cc);
+	int (*apply_rc)(struct tcpc_dev *dev, enum typec_cc_status cc,
+			enum typec_cc_polarity polarity);
 	int (*get_cc)(struct tcpc_dev *dev, enum typec_cc_status *cc1,
 		      enum typec_cc_status *cc2);
 	int (*set_polarity)(struct tcpc_dev *dev,
+1 -1
include/linux/usb/typec_dp.h
···
 #define DP_CONF_PIN_ASSIGNEMENT_SHIFT	8
 #define DP_CONF_PIN_ASSIGNEMENT_MASK	GENMASK(15, 8)

-/* Helper for setting/getting the pin assignement value to the configuration */
+/* Helper for setting/getting the pin assignment value to the configuration */
 #define DP_CONF_SET_PIN_ASSIGN(_a_)	((_a_) << 8)
 #define DP_CONF_GET_PIN_ASSIGN(_conf_)	(((_conf_) & GENMASK(15, 8)) >> 8)