Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'usb-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB and Thunderbolt updates from Greg KH:
"Here is the big set of USB and Thunderbolt driver changes for
5.17-rc1.

Nothing major in here, just lots of little updates and cleanups. These
include:

- some USB header fixes picked from Ingo's header-splitup work

- more USB4/Thunderbolt hardware support added

- USB gadget driver updates and additions

- USB typec additions (includes some acpi changes, which were acked
by the ACPI maintainer)

- core USB fixes as found by syzbot that were too late for 5.16-final

- USB dwc3 driver updates

- USB dwc2 driver updates

- platform_get_irq() conversions of some USB drivers

- other minor USB driver updates and additions

All of these have been in linux-next for a while with no reported
issues"

* tag 'usb-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (111 commits)
docs: ABI: fixed formatting in configfs-usb-gadget-uac2
usb: gadget: u_audio: Subdevice 0 for capture ctls
usb: gadget: u_audio: fix calculations for small bInterval
usb: dwc2: gadget: initialize max_speed from params
usb: dwc2: do not gate off the hardware if it does not support clock gating
usb: dwc3: qcom: Fix NULL vs IS_ERR checking in dwc3_qcom_probe
headers/deps: USB: Optimize <linux/usb/ch9.h> dependencies, remove <linux/device.h>
USB: common: debug: add needed kernel.h include
headers/prep: Fix non-standard header section: drivers/usb/host/ohci-tmio.c
headers/prep: Fix non-standard header section: drivers/usb/cdns3/core.h
headers/prep: usb: gadget: Fix namespace collision
USB: core: Fix bug in resuming hub's handling of wakeup requests
USB: Fix "slab-out-of-bounds Write" bug in usb_hcd_poll_rh_status
usb: dwc3: dwc3-qcom: Add missing platform_device_put() in dwc3_qcom_acpi_register_core
usb: gadget: clear related members when goto fail
usb: gadget: don't release an existing dev->buf
usb: dwc2: Simplify a bitmap declaration
usb: Remove usb_for_each_port()
usb: typec: port-mapper: Convert to the component framework
usb: Link the ports to the connectors they are attached to
...

+4184 -990
+1 -1
Documentation/ABI/testing/configfs-usb-gadget-uac1
···
 27  27        (in 1/256 dB)
 28  28    p_volume_res    playback volume control resolution
 29  29        (in 1/256 dB)
 30      -  req_number      the number of pre-allocated request
     30  +  req_number      the number of pre-allocated requests
 31  31        for both capture and playback
 32  32    =====================  =======================================
+2
Documentation/ABI/testing/configfs-usb-gadget-uac2
···
 30  30        (in 1/256 dB)
 31  31    p_volume_res    playback volume control resolution
 32  32        (in 1/256 dB)
     33  +  req_number      the number of pre-allocated requests
     34  +      for both capture and playback
 33  35    =====================  =======================================
+9
Documentation/ABI/testing/sysfs-bus-usb
···
 244  244      is permitted, "u2" if only u2 is permitted, "u1_u2" if both u1 and
 245  245      u2 are permitted.
 246  246
      247 +  What:        /sys/bus/usb/devices/.../<hub_interface>/port<X>/connector
      248 +  Date:        December 2021
      249 +  Contact:     Heikki Krogerus <heikki.krogerus@linux.intel.com>
      250 +  Description:
      251 +      Link to the USB Type-C connector when available. This link is
      252 +      only created when USB Type-C Connector Class is enabled, and
      253 +      only if the system firmware is capable of describing the
      254 +      connection between a port and its connector.
      255 +
 247  256  What:        /sys/bus/usb/devices/.../power/usb2_lpm_l1_timeout
 248  257  Date:        May 2013
 249  258  Contact:     Mathias Nyman <mathias.nyman@linux.intel.com>
+13
Documentation/devicetree/bindings/usb/dwc2.yaml
··· 114 114 115 115 usb-role-switch: true 116 116 117 + role-switch-default-mode: true 118 + 117 119 g-rx-fifo-size: 118 120 $ref: /schemas/types.yaml#/definitions/uint32 119 121 description: size of rx fifo size in gadget mode. ··· 137 135 $ref: /schemas/types.yaml#/definitions/flag 138 136 description: If present indicates that we need to reset the PHY when we 139 137 detect a wakeup. This is due to a hardware errata. 138 + 139 + port: 140 + description: 141 + Any connector to the data bus of this controller should be modelled 142 + using the OF graph bindings specified, if the "usb-role-switch" 143 + property is used. 144 + $ref: /schemas/graph.yaml#/properties/port 145 + 146 + dependencies: 147 + port: [ usb-role-switch ] 148 + role-switch-default-mode: [ usb-role-switch ] 140 149 141 150 required: 142 151 - compatible
-56
Documentation/devicetree/bindings/usb/dwc3-xilinx.txt
··· 1 - Xilinx SuperSpeed DWC3 USB SoC controller 2 - 3 - Required properties: 4 - - compatible: May contain "xlnx,zynqmp-dwc3" or "xlnx,versal-dwc3" 5 - - reg: Base address and length of the register control block 6 - - clocks: A list of phandles for the clocks listed in clock-names 7 - - clock-names: Should contain the following: 8 - "bus_clk" Master/Core clock, have to be >= 125 MHz for SS 9 - operation and >= 60MHz for HS operation 10 - 11 - "ref_clk" Clock source to core during PHY power down 12 - - resets: A list of phandles for resets listed in reset-names 13 - - reset-names: 14 - "usb_crst" USB core reset 15 - "usb_hibrst" USB hibernation reset 16 - "usb_apbrst" USB APB reset 17 - 18 - Required child node: 19 - A child node must exist to represent the core DWC3 IP block. The name of 20 - the node is not important. The content of the node is defined in dwc3.txt. 21 - 22 - Optional properties for snps,dwc3: 23 - - dma-coherent: Enable this flag if CCI is enabled in design. Adding this 24 - flag configures Global SoC bus Configuration Register and 25 - Xilinx USB 3.0 IP - USB coherency register to enable CCI. 
26 - - interrupt-names: Should contain the following: 27 - "dwc_usb3" USB gadget mode interrupts 28 - "otg" USB OTG mode interrupts 29 - "hiber" USB hibernation interrupts 30 - 31 - Example device node: 32 - 33 - usb@0 { 34 - #address-cells = <0x2>; 35 - #size-cells = <0x1>; 36 - compatible = "xlnx,zynqmp-dwc3"; 37 - reg = <0x0 0xff9d0000 0x0 0x100>; 38 - clock-names = "bus_clk", "ref_clk"; 39 - clocks = <&clk125>, <&clk125>; 40 - resets = <&zynqmp_reset ZYNQMP_RESET_USB1_CORERESET>, 41 - <&zynqmp_reset ZYNQMP_RESET_USB1_HIBERRESET>, 42 - <&zynqmp_reset ZYNQMP_RESET_USB1_APB>; 43 - reset-names = "usb_crst", "usb_hibrst", "usb_apbrst"; 44 - ranges; 45 - 46 - dwc3@fe200000 { 47 - compatible = "snps,dwc3"; 48 - reg = <0x0 0xfe200000 0x40000>; 49 - interrupt-names = "dwc_usb3", "otg", "hiber"; 50 - interrupts = <0 65 4>, <0 69 4>, <0 75 4>; 51 - phys = <&psgtr 2 PHY_TYPE_USB3 0 2>; 52 - phy-names = "usb3-phy"; 53 - dr_mode = "host"; 54 - dma-coherent; 55 - }; 56 - };
+131
Documentation/devicetree/bindings/usb/dwc3-xilinx.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/usb/dwc3-xilinx.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Xilinx SuperSpeed DWC3 USB SoC controller 8 + 9 + maintainers: 10 + - Manish Narani <manish.narani@xilinx.com> 11 + 12 + properties: 13 + compatible: 14 + items: 15 + - enum: 16 + - xlnx,zynqmp-dwc3 17 + - xlnx,versal-dwc3 18 + reg: 19 + maxItems: 1 20 + 21 + "#address-cells": 22 + enum: [ 1, 2 ] 23 + 24 + "#size-cells": 25 + enum: [ 1, 2 ] 26 + 27 + ranges: true 28 + 29 + power-domains: 30 + description: specifies a phandle to PM domain provider node 31 + maxItems: 1 32 + 33 + clocks: 34 + description: 35 + A list of phandle and clock-specifier pairs for the clocks 36 + listed in clock-names. 37 + items: 38 + - description: Master/Core clock, has to be >= 125 MHz 39 + for SS operation and >= 60MHz for HS operation. 40 + - description: Clock source to core during PHY power down. 41 + 42 + clock-names: 43 + items: 44 + - const: bus_clk 45 + - const: ref_clk 46 + 47 + resets: 48 + description: 49 + A list of phandles for resets listed in reset-names. 
50 + 51 + items: 52 + - description: USB core reset 53 + - description: USB hibernation reset 54 + - description: USB APB reset 55 + 56 + reset-names: 57 + items: 58 + - const: usb_crst 59 + - const: usb_hibrst 60 + - const: usb_apbrst 61 + 62 + phys: 63 + minItems: 1 64 + maxItems: 2 65 + 66 + phy-names: 67 + minItems: 1 68 + maxItems: 2 69 + items: 70 + enum: 71 + - usb2-phy 72 + - usb3-phy 73 + 74 + # Required child node: 75 + 76 + patternProperties: 77 + "^usb@[0-9a-f]+$": 78 + $ref: snps,dwc3.yaml# 79 + 80 + required: 81 + - compatible 82 + - reg 83 + - "#address-cells" 84 + - "#size-cells" 85 + - ranges 86 + - power-domains 87 + - clocks 88 + - clock-names 89 + - resets 90 + - reset-names 91 + 92 + additionalProperties: false 93 + 94 + examples: 95 + - | 96 + #include <dt-bindings/dma/xlnx-zynqmp-dpdma.h> 97 + #include <dt-bindings/power/xlnx-zynqmp-power.h> 98 + #include <dt-bindings/reset/xlnx-zynqmp-resets.h> 99 + #include <dt-bindings/clock/xlnx-zynqmp-clk.h> 100 + #include <dt-bindings/reset/xlnx-zynqmp-resets.h> 101 + #include <dt-bindings/phy/phy.h> 102 + axi { 103 + #address-cells = <2>; 104 + #size-cells = <2>; 105 + 106 + usb@0 { 107 + #address-cells = <0x2>; 108 + #size-cells = <0x2>; 109 + compatible = "xlnx,zynqmp-dwc3"; 110 + reg = <0x0 0xff9d0000 0x0 0x100>; 111 + clocks = <&zynqmp_clk USB0_BUS_REF>, <&zynqmp_clk USB3_DUAL_REF>; 112 + clock-names = "bus_clk", "ref_clk"; 113 + power-domains = <&zynqmp_firmware PD_USB_0>; 114 + resets = <&zynqmp_reset ZYNQMP_RESET_USB1_CORERESET>, 115 + <&zynqmp_reset ZYNQMP_RESET_USB1_HIBERRESET>, 116 + <&zynqmp_reset ZYNQMP_RESET_USB1_APB>; 117 + reset-names = "usb_crst", "usb_hibrst", "usb_apbrst"; 118 + phys = <&psgtr 2 PHY_TYPE_USB3 0 2>; 119 + phy-names = "usb3-phy"; 120 + ranges; 121 + 122 + usb@fe200000 { 123 + compatible = "snps,dwc3"; 124 + reg = <0x0 0xfe200000 0x0 0x40000>; 125 + interrupt-names = "host", "otg"; 126 + interrupts = <0 65 4>, <0 69 4>; 127 + dr_mode = "host"; 128 + dma-coherent; 129 + 
}; 130 + }; 131 + };
+4
Documentation/devicetree/bindings/usb/qcom,dwc3.yaml
···
 13  13    compatible:
 14  14      items:
 15  15        - enum:
     16  +        - qcom,ipq4019-dwc3
 16  17          - qcom,ipq6018-dwc3
     18  +        - qcom,ipq8064-dwc3
 17  19          - qcom,msm8996-dwc3
 18  20          - qcom,msm8998-dwc3
 19  21          - qcom,sc7180-dwc3
···
 25  23          - qcom,sdx55-dwc3
 26  24          - qcom,sm4250-dwc3
 27  25          - qcom,sm6115-dwc3
     26  +        - qcom,sm6350-dwc3
 28  27          - qcom,sm8150-dwc3
 29  28          - qcom,sm8250-dwc3
 30  29          - qcom,sm8350-dwc3
     30  +        - qcom,sm8450-dwc3
 31  31        - const: qcom,dwc3
 32  32
 33  33    reg:
+17 -15
Documentation/driver-api/usb/writing_usb_driver.rst
··· 94 94 /* register this driver with the USB subsystem */ 95 95 result = usb_register(&skel_driver); 96 96 if (result < 0) { 97 - err("usb_register failed for the "__FILE__ "driver." 98 - "Error number %d", result); 97 + pr_err("usb_register failed for the %s driver. Error number %d\n", 98 + skel_driver.name, result); 99 99 return -1; 100 100 } 101 101 ··· 170 170 enable the driver to determine which device the user is addressing. All 171 171 of this is done with the following code:: 172 172 173 - /* increment our usage count for the module */ 174 - ++skel->open_count; 173 + /* increment our usage count for the device */ 174 + kref_get(&dev->kref); 175 175 176 176 /* save our object in the file's private structure */ 177 177 file->private_data = dev; ··· 188 188 subsystem. This can be seen in the following code:: 189 189 190 190 /* we can only write as much as 1 urb will hold */ 191 - bytes_written = (count > skel->bulk_out_size) ? skel->bulk_out_size : count; 191 + size_t writesize = min_t(size_t, count, MAX_TRANSFER); 192 192 193 193 /* copy the data from user space into our urb */ 194 - copy_from_user(skel->write_urb->transfer_buffer, buffer, bytes_written); 194 + copy_from_user(buf, user_buffer, writesize); 195 195 196 196 /* set up our urb */ 197 - usb_fill_bulk_urb(skel->write_urb, 198 - skel->dev, 199 - usb_sndbulkpipe(skel->dev, skel->bulk_out_endpointAddr), 200 - skel->write_urb->transfer_buffer, 201 - bytes_written, 197 + usb_fill_bulk_urb(urb, 198 + dev->udev, 199 + usb_sndbulkpipe(dev->udev, dev->bulk_out_endpointAddr), 200 + buf, 201 + writesize, 202 202 skel_write_bulk_callback, 203 - skel); 203 + dev); 204 204 205 205 /* send the data out the bulk port */ 206 - result = usb_submit_urb(skel->write_urb); 207 - if (result) { 208 - err("Failed submitting write urb, error %d", result); 206 + retval = usb_submit_urb(urb, GFP_KERNEL); 207 + if (retval) { 208 + dev_err(&dev->interface->dev, 209 + "%s - failed submitting write urb, error %d\n", 210 + 
__func__, retval); 209 211 } 210 212 211 213
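The rewritten skeleton snippet bounds each write to the URB buffer size before copying. A minimal userspace sketch of that clamp-then-copy pattern (the MAX_TRANSFER value and helper name are illustrative; memcpy() stands in for copy_from_user()):

```c
#include <string.h>

#define MAX_TRANSFER (8 * 1024)  /* assumed cap, like the skeleton driver's limit */

/* Clamp a requested write to the URB buffer size, then copy that much. */
static size_t bounded_copy(char *buf, const char *user_buffer, size_t count)
{
    /* equivalent to writesize = min_t(size_t, count, MAX_TRANSFER) */
    size_t writesize = count < MAX_TRANSFER ? count : MAX_TRANSFER;

    memcpy(buf, user_buffer, writesize);  /* stands in for copy_from_user() */
    return writesize;
}
```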
+1 -1
Documentation/usb/gadget-testing.rst
···
 931  931  p_volume_min    playback volume control min value (in 1/256 dB)
 932  932  p_volume_max    playback volume control max value (in 1/256 dB)
 933  933  p_volume_res    playback volume control resolution (in 1/256 dB)
 934      - req_number      the number of pre-allocated request for both capture
      934 + req_number      the number of pre-allocated requests for both capture
 935  935                  and playback
 936  936  ================  ====================================================
 937  937
+8
MAINTAINERS
···
 21033  21033  F:  drivers/xen/xen-scsiback.c
 21034  21034  F:  include/xen/interface/io/vscsiif.h
 21035  21035
        21036 + XEN PVUSB DRIVER
        21037 + M:  Juergen Gross <jgross@suse.com>
        21038 + L:  xen-devel@lists.xenproject.org (moderated for non-subscribers)
        21039 + L:  linux-usb@vger.kernel.org
        21040 + S:  Supported
        21041 + F:  drivers/usb/host/xen*
        21042 + F:  include/xen/interface/io/usbif.h
        21043 +
 21036  21044  XEN SOUND FRONTEND DRIVER
 21037  21045  M:  Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
 21038  21046  L:  xen-devel@lists.xenproject.org (moderated for non-subscribers)
+1
drivers/acpi/bus.c
···
 1043  1043      .remove = acpi_device_remove,
 1044  1044      .uevent = acpi_device_uevent,
 1045  1045  };
       1046 + EXPORT_SYMBOL_GPL(acpi_bus_type);
 1046  1047
 1047  1048  /* --------------------------------------------------------------------------
 1048  1049                               Initialization/Cleanup
+16
drivers/acpi/scan.c
···
 19  19  #include <linux/dma-map-ops.h>
 20  20  #include <linux/platform_data/x86/apple.h>
 21  21  #include <linux/pgtable.h>
     22 + #include <linux/crc32.h>
 22  23
 23  24  #include "internal.h"
···
 668  667      return 0;
 669  668  }
 670  669
      670 + static void acpi_store_pld_crc(struct acpi_device *adev)
      671 + {
      672 +     struct acpi_pld_info *pld;
      673 +     acpi_status status;
      674 +
      675 +     status = acpi_get_physical_device_location(adev->handle, &pld);
      676 +     if (ACPI_FAILURE(status))
      677 +         return;
      678 +
      679 +     adev->pld_crc = crc32(~0, pld, sizeof(*pld));
      680 +     ACPI_FREE(pld);
      681 + }
      682 +
 671  683  static int __acpi_device_add(struct acpi_device *device,
 672  684                               void (*release)(struct device *))
 673  685  {
···
 738  724
 739  725      if (device->wakeup.flags.valid)
 740  726          list_add_tail(&device->wakeup_list, &acpi_wakeup_device_list);
      727 +
      728 +     acpi_store_pld_crc(device);
 741  729
 742  730      mutex_unlock(&acpi_device_lock);
 743  731
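The new acpi_store_pld_crc() above hashes the raw _PLD buffer with the kernel's crc32(), which is CRC-32 with the reflected polynomial 0xEDB88320, a caller-supplied seed (~0 here), and no final inversion. A self-contained userspace sketch of the same computation (bitwise, no lookup table):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Bitwise CRC-32 matching the kernel's crc32()/crc32_le(): reflected
 * polynomial 0xEDB88320, caller-supplied seed, no final XOR.
 */
static uint32_t crc32_le(uint32_t crc, const void *data, size_t len)
{
    const uint8_t *p = data;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0);
    }
    return crc;
}
```

Because there is no final inversion, crc32_le(~0u, "123456789", 9) is the standard CRC-32 check value 0xCBF43926 XORed with 0xFFFFFFFF, i.e. 0x340BC6D9.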
+14 -1
drivers/thunderbolt/acpi.c
··· 7 7 */ 8 8 9 9 #include <linux/acpi.h> 10 + #include <linux/pm_runtime.h> 10 11 11 12 #include "tb.h" 12 13 ··· 32 31 return AE_OK; 33 32 34 33 /* It needs to reference this NHI */ 35 - if (nhi->pdev->dev.fwnode != args.fwnode) 34 + if (dev_fwnode(&nhi->pdev->dev) != args.fwnode) 36 35 goto out_put; 37 36 38 37 /* ··· 75 74 pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM))) { 76 75 const struct device_link *link; 77 76 77 + /* 78 + * Make them both active first to make sure the NHI does 79 + * not runtime suspend before the consumer. The 80 + * pm_runtime_put() below then allows the consumer to 81 + * runtime suspend again (which then allows NHI runtime 82 + * suspend too now that the device link is established). 83 + */ 84 + pm_runtime_get_sync(&pdev->dev); 85 + 78 86 link = device_link_add(&pdev->dev, &nhi->pdev->dev, 79 87 DL_FLAG_AUTOREMOVE_SUPPLIER | 88 + DL_FLAG_RPM_ACTIVE | 80 89 DL_FLAG_PM_RUNTIME); 81 90 if (link) { 82 91 dev_dbg(&nhi->pdev->dev, "created link from %s\n", ··· 95 84 dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n", 96 85 dev_name(&pdev->dev)); 97 86 } 87 + 88 + pm_runtime_put(&pdev->dev); 98 89 } 99 90 100 91 out_put:
+6 -1
drivers/thunderbolt/icm.c
···
 1741  1741      if (!n)
 1742  1742          return;
 1743  1743
 1744       -    INIT_WORK(&n->work, icm_handle_notification);
 1745  1744      n->pkg = kmemdup(buf, size, GFP_KERNEL);
       1745 +    if (!n->pkg) {
       1746 +        kfree(n);
       1747 +        return;
       1748 +    }
       1749 +
       1750 +    INIT_WORK(&n->work, icm_handle_notification);
 1746  1751      n->tb = tb;
 1747  1752
 1748  1753      queue_work(tb->wq, &n->work);
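The icm.c change checks the kmemdup() result before doing anything else with the notification, and frees the container on failure instead of queueing work with a NULL payload. A userspace sketch of that duplicate-first, initialize-on-success pattern (the struct and function names are illustrative; malloc()/memcpy() stand in for kmemdup()):

```c
#include <stdlib.h>
#include <string.h>

struct notification {
    void *pkg;
    size_t size;
};

/* Duplicate the payload first; only finish initializing on success. */
static struct notification *notification_create(const void *buf, size_t size)
{
    struct notification *n = malloc(sizeof(*n));

    if (!n)
        return NULL;

    n->pkg = malloc(size);      /* stands in for kmemdup() */
    if (!n->pkg) {
        free(n);                /* don't leak the container */
        return NULL;
    }
    memcpy(n->pkg, buf, size);

    n->size = size;             /* remaining setup happens only on success */
    return n;
}
```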
+24
drivers/thunderbolt/lc.c
···
 193  193      return tb_sw_write(sw, &ctrl, TB_CFG_SWITCH, cap + TB_LC_SX_CTRL, 1);
 194  194  }
 195  195
      196 + /**
      197 +  * tb_lc_is_clx_supported() - Check whether CLx is supported by the lane adapter
      198 +  * @port: Lane adapter
      199 +  *
      200 +  * TB_LC_LINK_ATTR_CPS bit reflects if the link supports CLx including
      201 +  * active cables (if connected on the link).
      202 +  */
      203 + bool tb_lc_is_clx_supported(struct tb_port *port)
      204 + {
      205 +     struct tb_switch *sw = port->sw;
      206 +     int cap, ret;
      207 +     u32 val;
      208 +
      209 +     cap = find_port_lc_cap(port);
      210 +     if (cap < 0)
      211 +         return false;
      212 +
      213 +     ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, cap + TB_LC_LINK_ATTR, 1);
      214 +     if (ret)
      215 +         return false;
      216 +
      217 +     return !!(val & TB_LC_LINK_ATTR_CPS);
      218 + }
      219 +
 196  220  static int tb_lc_set_wake_one(struct tb_switch *sw, unsigned int offset,
 197  221                                unsigned int flags)
 198  222  {
+23 -15
drivers/thunderbolt/path.c
··· 85 85 * @dst_hopid: HopID to the @dst (%-1 if don't care) 86 86 * @last: Last port is filled here if not %NULL 87 87 * @name: Name of the path 88 + * @alloc_hopid: Allocate HopIDs for the ports 88 89 * 89 90 * Follows a path starting from @src and @src_hopid to the last output 90 - * port of the path. Allocates HopIDs for the visited ports. Call 91 - * tb_path_free() to release the path and allocated HopIDs when the path 92 - * is not needed anymore. 91 + * port of the path. Allocates HopIDs for the visited ports (if 92 + * @alloc_hopid is true). Call tb_path_free() to release the path and 93 + * allocated HopIDs when the path is not needed anymore. 93 94 * 94 95 * Note function discovers also incomplete paths so caller should check 95 96 * that the @dst port is the expected one. If it is not, the path can be ··· 100 99 */ 101 100 struct tb_path *tb_path_discover(struct tb_port *src, int src_hopid, 102 101 struct tb_port *dst, int dst_hopid, 103 - struct tb_port **last, const char *name) 102 + struct tb_port **last, const char *name, 103 + bool alloc_hopid) 104 104 { 105 105 struct tb_port *out_port; 106 106 struct tb_regs_hop hop; ··· 158 156 path->tb = src->sw->tb; 159 157 path->path_length = num_hops; 160 158 path->activated = true; 159 + path->alloc_hopid = alloc_hopid; 161 160 162 161 path->hops = kcalloc(num_hops, sizeof(*path->hops), GFP_KERNEL); 163 162 if (!path->hops) { ··· 180 177 goto err; 181 178 } 182 179 183 - if (tb_port_alloc_in_hopid(p, h, h) < 0) 180 + if (alloc_hopid && tb_port_alloc_in_hopid(p, h, h) < 0) 184 181 goto err; 185 182 186 183 out_port = &sw->ports[hop.out_port]; 187 184 next_hop = hop.next_hop; 188 185 189 - if (tb_port_alloc_out_hopid(out_port, next_hop, next_hop) < 0) { 186 + if (alloc_hopid && 187 + tb_port_alloc_out_hopid(out_port, next_hop, next_hop) < 0) { 190 188 tb_port_release_in_hopid(p, h); 191 189 goto err; 192 190 } ··· 266 262 kfree(path); 267 263 return NULL; 268 264 } 265 + 266 + path->alloc_hopid = true; 269 
267 270 268 in_hopid = src_hopid; 271 269 out_port = NULL; ··· 351 345 */ 352 346 void tb_path_free(struct tb_path *path) 353 347 { 354 - int i; 348 + if (path->alloc_hopid) { 349 + int i; 355 350 356 - for (i = 0; i < path->path_length; i++) { 357 - const struct tb_path_hop *hop = &path->hops[i]; 351 + for (i = 0; i < path->path_length; i++) { 352 + const struct tb_path_hop *hop = &path->hops[i]; 358 353 359 - if (hop->in_port) 360 - tb_port_release_in_hopid(hop->in_port, 361 - hop->in_hop_index); 362 - if (hop->out_port) 363 - tb_port_release_out_hopid(hop->out_port, 364 - hop->next_hop_index); 354 + if (hop->in_port) 355 + tb_port_release_in_hopid(hop->in_port, 356 + hop->in_hop_index); 357 + if (hop->out_port) 358 + tb_port_release_out_hopid(hop->out_port, 359 + hop->next_hop_index); 360 + } 365 361 } 366 362 367 363 kfree(path->hops);
+18 -10
drivers/thunderbolt/retimer.c
··· 324 324 325 325 static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status) 326 326 { 327 - struct usb4_port *usb4; 328 327 struct tb_retimer *rt; 329 328 u32 vendor, device; 330 329 int ret; 331 - 332 - usb4 = port->usb4; 333 - if (!usb4) 334 - return -EINVAL; 335 330 336 331 ret = usb4_port_retimer_read(port, index, USB4_SB_VENDOR_ID, &vendor, 337 332 sizeof(vendor)); ··· 369 374 rt->port = port; 370 375 rt->tb = port->sw->tb; 371 376 372 - rt->dev.parent = &usb4->dev; 377 + rt->dev.parent = &port->usb4->dev; 373 378 rt->dev.bus = &tb_bus_type; 374 379 rt->dev.type = &tb_retimer_type; 375 380 dev_set_name(&rt->dev, "%s:%u.%u", dev_name(&port->sw->dev), ··· 448 453 { 449 454 u32 status[TB_MAX_RETIMER_INDEX + 1] = {}; 450 455 int ret, i, last_idx = 0; 456 + struct usb4_port *usb4; 457 + 458 + usb4 = port->usb4; 459 + if (!usb4) 460 + return 0; 461 + 462 + pm_runtime_get_sync(&usb4->dev); 451 463 452 464 /* 453 465 * Send broadcast RT to make sure retimer indices facing this ··· 462 460 */ 463 461 ret = usb4_port_enumerate_retimers(port); 464 462 if (ret) 465 - return ret; 463 + goto out; 466 464 467 465 /* 468 466 * Enable sideband channel for each retimer. We can do this ··· 492 490 break; 493 491 } 494 492 495 - if (!last_idx) 496 - return 0; 493 + if (!last_idx) { 494 + ret = 0; 495 + goto out; 496 + } 497 497 498 498 /* Add on-board retimers if they do not exist already */ 499 499 for (i = 1; i <= last_idx; i++) { ··· 511 507 } 512 508 } 513 509 514 - return 0; 510 + out: 511 + pm_runtime_mark_last_busy(&usb4->dev); 512 + pm_runtime_put_autosuspend(&usb4->dev); 513 + 514 + return ret; 515 515 } 516 516 517 517 static int remove_retimer(struct device *dev, void *data)
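The retimer change wraps the scan in pm_runtime_get_sync()/pm_runtime_put_autosuspend() and funnels every early exit through a single `out` label so the reference count stays balanced. A toy userspace sketch of that pattern (the counter and function names are illustrative stand-ins for the runtime-PM calls):

```c
/* Toy usage counter standing in for pm_runtime_get_sync()/put(). */
static int usage_count;

static void pm_get(void) { usage_count++; }
static void pm_put(void) { usage_count--; }

/* Mirrors tb_retimer_scan(): every exit after pm_get() goes through out. */
static int scan(int enumerate_ret, int last_idx)
{
    int ret;

    pm_get();

    ret = enumerate_ret;    /* stands in for usb4_port_enumerate_retimers() */
    if (ret)
        goto out;

    if (!last_idx) {        /* no retimers found */
        ret = 0;
        goto out;
    }

    ret = 0;                /* on-board retimers would be added here */
out:
    pm_put();               /* always balances the pm_get() above */
    return ret;
}
```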
+490 -3
drivers/thunderbolt/switch.c
··· 13 13 #include <linux/sched/signal.h> 14 14 #include <linux/sizes.h> 15 15 #include <linux/slab.h> 16 + #include <linux/module.h> 16 17 17 18 #include "tb.h" 18 19 ··· 26 25 uuid_t uuid; 27 26 u32 status; 28 27 }; 28 + 29 + static bool clx_enabled = true; 30 + module_param_named(clx, clx_enabled, bool, 0444); 31 + MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)"); 29 32 30 33 /* 31 34 * Hold NVM authentication failure status per switch This information ··· 628 623 return 0; 629 624 630 625 nfc_credits = port->config.nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK; 626 + if (credits < 0) 627 + credits = max_t(int, -nfc_credits, credits); 628 + 631 629 nfc_credits += credits; 632 630 633 631 tb_port_dbg(port, "adding %d NFC credits to %lu", credits, ··· 1327 1319 * @aux_tx: AUX TX Hop ID 1328 1320 * @aux_rx: AUX RX Hop ID 1329 1321 * 1330 - * Programs specified Hop IDs for DP IN/OUT port. 1322 + * Programs specified Hop IDs for DP IN/OUT port. Can be called for USB4 1323 + * router DP adapters too but does not program the values as the fields 1324 + * are read-only. 
1331 1325 */ 1332 1326 int tb_dp_port_set_hops(struct tb_port *port, unsigned int video, 1333 1327 unsigned int aux_tx, unsigned int aux_rx) 1334 1328 { 1335 1329 u32 data[2]; 1336 1330 int ret; 1331 + 1332 + if (tb_switch_is_usb4(port->sw)) 1333 + return 0; 1337 1334 1338 1335 ret = tb_port_read(port, data, TB_CFG_PORT, 1339 1336 port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data)); ··· 1460 1447 if (res.err > 0) 1461 1448 return -EIO; 1462 1449 return res.err; 1450 + } 1451 + 1452 + /** 1453 + * tb_switch_wait_for_bit() - Wait for specified value of bits in offset 1454 + * @sw: Router to read the offset value from 1455 + * @offset: Offset in the router config space to read from 1456 + * @bit: Bit mask in the offset to wait for 1457 + * @value: Value of the bits to wait for 1458 + * @timeout_msec: Timeout in ms how long to wait 1459 + * 1460 + * Wait till the specified bits in specified offset reach specified value. 1461 + * Returns %0 in case of success, %-ETIMEDOUT if the @value was not reached 1462 + * within the given timeout or a negative errno in case of failure. 
1463 + */ 1464 + int tb_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit, 1465 + u32 value, int timeout_msec) 1466 + { 1467 + ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec); 1468 + 1469 + do { 1470 + u32 val; 1471 + int ret; 1472 + 1473 + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1); 1474 + if (ret) 1475 + return ret; 1476 + 1477 + if ((val & bit) == value) 1478 + return 0; 1479 + 1480 + usleep_range(50, 100); 1481 + } while (ktime_before(ktime_get(), timeout)); 1482 + 1483 + return -ETIMEDOUT; 1463 1484 } 1464 1485 1465 1486 /* ··· 2233 2186 if (ret > 0) 2234 2187 sw->cap_plug_events = ret; 2235 2188 2189 + ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_TIME2); 2190 + if (ret > 0) 2191 + sw->cap_vsec_tmu = ret; 2192 + 2236 2193 ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER); 2237 2194 if (ret > 0) 2238 2195 sw->cap_lc = ret; 2196 + 2197 + ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_CP_LP); 2198 + if (ret > 0) 2199 + sw->cap_lp = ret; 2239 2200 2240 2201 /* Root switch is always authorized */ 2241 2202 if (!route) ··· 3051 2996 3052 2997 tb_sw_dbg(sw, "suspending switch\n"); 3053 2998 2999 + /* 3000 + * Actually only needed for Titan Ridge but for simplicity can be 3001 + * done for USB4 device too as CLx is re-enabled at resume. 
3002 + */ 3003 + if (tb_switch_disable_clx(sw, TB_CL0S)) 3004 + tb_sw_warn(sw, "failed to disable CLx on upstream port\n"); 3005 + 3054 3006 err = tb_plug_events_active(sw, false); 3055 3007 if (err) 3056 3008 return; ··· 3110 3048 */ 3111 3049 int tb_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in) 3112 3050 { 3051 + int ret; 3052 + 3113 3053 if (tb_switch_is_usb4(sw)) 3114 - return usb4_switch_alloc_dp_resource(sw, in); 3115 - return tb_lc_dp_sink_alloc(sw, in); 3054 + ret = usb4_switch_alloc_dp_resource(sw, in); 3055 + else 3056 + ret = tb_lc_dp_sink_alloc(sw, in); 3057 + 3058 + if (ret) 3059 + tb_sw_warn(sw, "failed to allocate DP resource for port %d\n", 3060 + in->port); 3061 + else 3062 + tb_sw_dbg(sw, "allocated DP resource for port %d\n", in->port); 3063 + 3064 + return ret; 3116 3065 } 3117 3066 3118 3067 /** ··· 3146 3073 if (ret) 3147 3074 tb_sw_warn(sw, "failed to de-allocate DP resource for port %d\n", 3148 3075 in->port); 3076 + else 3077 + tb_sw_dbg(sw, "released DP resource for port %d\n", in->port); 3149 3078 } 3150 3079 3151 3080 struct tb_sw_lookup { ··· 3276 3201 } 3277 3202 3278 3203 return NULL; 3204 + } 3205 + 3206 + static int __tb_port_pm_secondary_set(struct tb_port *port, bool secondary) 3207 + { 3208 + u32 phy; 3209 + int ret; 3210 + 3211 + ret = tb_port_read(port, &phy, TB_CFG_PORT, 3212 + port->cap_phy + LANE_ADP_CS_1, 1); 3213 + if (ret) 3214 + return ret; 3215 + 3216 + if (secondary) 3217 + phy |= LANE_ADP_CS_1_PMS; 3218 + else 3219 + phy &= ~LANE_ADP_CS_1_PMS; 3220 + 3221 + return tb_port_write(port, &phy, TB_CFG_PORT, 3222 + port->cap_phy + LANE_ADP_CS_1, 1); 3223 + } 3224 + 3225 + static int tb_port_pm_secondary_enable(struct tb_port *port) 3226 + { 3227 + return __tb_port_pm_secondary_set(port, true); 3228 + } 3229 + 3230 + static int tb_port_pm_secondary_disable(struct tb_port *port) 3231 + { 3232 + return __tb_port_pm_secondary_set(port, false); 3233 + } 3234 + 3235 + static int 
tb_switch_pm_secondary_resolve(struct tb_switch *sw) 3236 + { 3237 + struct tb_switch *parent = tb_switch_parent(sw); 3238 + struct tb_port *up, *down; 3239 + int ret; 3240 + 3241 + if (!tb_route(sw)) 3242 + return 0; 3243 + 3244 + up = tb_upstream_port(sw); 3245 + down = tb_port_at(tb_route(sw), parent); 3246 + ret = tb_port_pm_secondary_enable(up); 3247 + if (ret) 3248 + return ret; 3249 + 3250 + return tb_port_pm_secondary_disable(down); 3251 + } 3252 + 3253 + /* Called for USB4 or Titan Ridge routers only */ 3254 + static bool tb_port_clx_supported(struct tb_port *port, enum tb_clx clx) 3255 + { 3256 + u32 mask, val; 3257 + bool ret; 3258 + 3259 + /* Don't enable CLx in case of two single-lane links */ 3260 + if (!port->bonded && port->dual_link_port) 3261 + return false; 3262 + 3263 + /* Don't enable CLx in case of inter-domain link */ 3264 + if (port->xdomain) 3265 + return false; 3266 + 3267 + if (tb_switch_is_usb4(port->sw)) { 3268 + if (!usb4_port_clx_supported(port)) 3269 + return false; 3270 + } else if (!tb_lc_is_clx_supported(port)) { 3271 + return false; 3272 + } 3273 + 3274 + switch (clx) { 3275 + case TB_CL0S: 3276 + /* CL0s support requires also CL1 support */ 3277 + mask = LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT; 3278 + break; 3279 + 3280 + /* For now we support only CL0s. 
Not CL1, CL2 */ 3281 + case TB_CL1: 3282 + case TB_CL2: 3283 + default: 3284 + return false; 3285 + } 3286 + 3287 + ret = tb_port_read(port, &val, TB_CFG_PORT, 3288 + port->cap_phy + LANE_ADP_CS_0, 1); 3289 + if (ret) 3290 + return false; 3291 + 3292 + return !!(val & mask); 3293 + } 3294 + 3295 + static inline bool tb_port_cl0s_supported(struct tb_port *port) 3296 + { 3297 + return tb_port_clx_supported(port, TB_CL0S); 3298 + } 3299 + 3300 + static int __tb_port_cl0s_set(struct tb_port *port, bool enable) 3301 + { 3302 + u32 phy, mask; 3303 + int ret; 3304 + 3305 + /* To enable CL0s also required to enable CL1 */ 3306 + mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE; 3307 + ret = tb_port_read(port, &phy, TB_CFG_PORT, 3308 + port->cap_phy + LANE_ADP_CS_1, 1); 3309 + if (ret) 3310 + return ret; 3311 + 3312 + if (enable) 3313 + phy |= mask; 3314 + else 3315 + phy &= ~mask; 3316 + 3317 + return tb_port_write(port, &phy, TB_CFG_PORT, 3318 + port->cap_phy + LANE_ADP_CS_1, 1); 3319 + } 3320 + 3321 + static int tb_port_cl0s_disable(struct tb_port *port) 3322 + { 3323 + return __tb_port_cl0s_set(port, false); 3324 + } 3325 + 3326 + static int tb_port_cl0s_enable(struct tb_port *port) 3327 + { 3328 + return __tb_port_cl0s_set(port, true); 3329 + } 3330 + 3331 + static int tb_switch_enable_cl0s(struct tb_switch *sw) 3332 + { 3333 + struct tb_switch *parent = tb_switch_parent(sw); 3334 + bool up_cl0s_support, down_cl0s_support; 3335 + struct tb_port *up, *down; 3336 + int ret; 3337 + 3338 + if (!tb_switch_is_clx_supported(sw)) 3339 + return 0; 3340 + 3341 + /* 3342 + * Enable CLx for host router's downstream port as part of the 3343 + * downstream router enabling procedure. 
3344 + */ 3345 + if (!tb_route(sw)) 3346 + return 0; 3347 + 3348 + /* Enable CLx only for first hop router (depth = 1) */ 3349 + if (tb_route(parent)) 3350 + return 0; 3351 + 3352 + ret = tb_switch_pm_secondary_resolve(sw); 3353 + if (ret) 3354 + return ret; 3355 + 3356 + up = tb_upstream_port(sw); 3357 + down = tb_port_at(tb_route(sw), parent); 3358 + 3359 + up_cl0s_support = tb_port_cl0s_supported(up); 3360 + down_cl0s_support = tb_port_cl0s_supported(down); 3361 + 3362 + tb_port_dbg(up, "CL0s %ssupported\n", 3363 + up_cl0s_support ? "" : "not "); 3364 + tb_port_dbg(down, "CL0s %ssupported\n", 3365 + down_cl0s_support ? "" : "not "); 3366 + 3367 + if (!up_cl0s_support || !down_cl0s_support) 3368 + return -EOPNOTSUPP; 3369 + 3370 + ret = tb_port_cl0s_enable(up); 3371 + if (ret) 3372 + return ret; 3373 + 3374 + ret = tb_port_cl0s_enable(down); 3375 + if (ret) { 3376 + tb_port_cl0s_disable(up); 3377 + return ret; 3378 + } 3379 + 3380 + ret = tb_switch_mask_clx_objections(sw); 3381 + if (ret) { 3382 + tb_port_cl0s_disable(up); 3383 + tb_port_cl0s_disable(down); 3384 + return ret; 3385 + } 3386 + 3387 + sw->clx = TB_CL0S; 3388 + 3389 + tb_port_dbg(up, "CL0s enabled\n"); 3390 + return 0; 3391 + } 3392 + 3393 + /** 3394 + * tb_switch_enable_clx() - Enable CLx on upstream port of specified router 3395 + * @sw: Router to enable CLx for 3396 + * @clx: The CLx state to enable 3397 + * 3398 + * Enable CLx state only for first hop router. That is the most common 3399 + * use-case, that is intended for better thermal management, and so helps 3400 + * to improve performance. CLx is enabled only if both sides of the link 3401 + * support CLx, and if both sides of the link are not configured as two 3402 + * single lane links and only if the link is not inter-domain link. The 3403 + * complete set of conditions is descibed in CM Guide 1.0 section 8.1. 3404 + * 3405 + * Return: Returns 0 on success or an error code on failure. 
3406 + */ 3407 + int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx) 3408 + { 3409 + struct tb_switch *root_sw = sw->tb->root_switch; 3410 + 3411 + if (!clx_enabled) 3412 + return 0; 3413 + 3414 + /* 3415 + * CLx is not enabled and validated on Intel USB4 platforms before 3416 + * Alder Lake. 3417 + */ 3418 + if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw)) 3419 + return 0; 3420 + 3421 + switch (clx) { 3422 + case TB_CL0S: 3423 + return tb_switch_enable_cl0s(sw); 3424 + 3425 + default: 3426 + return -EOPNOTSUPP; 3427 + } 3428 + } 3429 + 3430 + static int tb_switch_disable_cl0s(struct tb_switch *sw) 3431 + { 3432 + struct tb_switch *parent = tb_switch_parent(sw); 3433 + struct tb_port *up, *down; 3434 + int ret; 3435 + 3436 + if (!tb_switch_is_clx_supported(sw)) 3437 + return 0; 3438 + 3439 + /* 3440 + * Disable CLx for host router's downstream port as part of the 3441 + * downstream router enabling procedure. 3442 + */ 3443 + if (!tb_route(sw)) 3444 + return 0; 3445 + 3446 + /* Disable CLx only for first hop router (depth = 1) */ 3447 + if (tb_route(parent)) 3448 + return 0; 3449 + 3450 + up = tb_upstream_port(sw); 3451 + down = tb_port_at(tb_route(sw), parent); 3452 + ret = tb_port_cl0s_disable(up); 3453 + if (ret) 3454 + return ret; 3455 + 3456 + ret = tb_port_cl0s_disable(down); 3457 + if (ret) 3458 + return ret; 3459 + 3460 + sw->clx = TB_CLX_DISABLE; 3461 + 3462 + tb_port_dbg(up, "CL0s disabled\n"); 3463 + return 0; 3464 + } 3465 + 3466 + /** 3467 + * tb_switch_disable_clx() - Disable CLx on upstream port of specified router 3468 + * @sw: Router to disable CLx for 3469 + * @clx: The CLx state to disable 3470 + * 3471 + * Return: Returns 0 on success or an error code on failure. 
3472 + */ 3473 + int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx) 3474 + { 3475 + if (!clx_enabled) 3476 + return 0; 3477 + 3478 + switch (clx) { 3479 + case TB_CL0S: 3480 + return tb_switch_disable_cl0s(sw); 3481 + 3482 + default: 3483 + return -EOPNOTSUPP; 3484 + } 3485 + } 3486 + 3487 + /** 3488 + * tb_switch_mask_clx_objections() - Mask CLx objections for a router 3489 + * @sw: Router to mask objections for 3490 + * 3491 + * Mask the objections coming from the second depth routers in order to 3492 + * stop these objections from interfering with the CLx states of the first 3493 + * depth link. 3494 + */ 3495 + int tb_switch_mask_clx_objections(struct tb_switch *sw) 3496 + { 3497 + int up_port = sw->config.upstream_port_number; 3498 + u32 offset, val[2], mask_obj, unmask_obj; 3499 + int ret, i; 3500 + 3501 + /* Only Titan Ridge of pre-USB4 devices support CLx states */ 3502 + if (!tb_switch_is_titan_ridge(sw)) 3503 + return 0; 3504 + 3505 + if (!tb_route(sw)) 3506 + return 0; 3507 + 3508 + /* 3509 + * In Titan Ridge there are only 2 dual-lane Thunderbolt ports: 3510 + * Port A consists of lane adapters 1,2 and 3511 + * Port B consists of lane adapters 3,4 3512 + * If upstream port is A, (lanes are 1,2), we mask objections from 3513 + * port B (lanes 3,4) and unmask objections from Port A and vice-versa. 
3514 + */ 3515 + if (up_port == 1) { 3516 + mask_obj = TB_LOW_PWR_C0_PORT_B_MASK; 3517 + unmask_obj = TB_LOW_PWR_C1_PORT_A_MASK; 3518 + offset = TB_LOW_PWR_C1_CL1; 3519 + } else { 3520 + mask_obj = TB_LOW_PWR_C1_PORT_A_MASK; 3521 + unmask_obj = TB_LOW_PWR_C0_PORT_B_MASK; 3522 + offset = TB_LOW_PWR_C3_CL1; 3523 + } 3524 + 3525 + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, 3526 + sw->cap_lp + offset, ARRAY_SIZE(val)); 3527 + if (ret) 3528 + return ret; 3529 + 3530 + for (i = 0; i < ARRAY_SIZE(val); i++) { 3531 + val[i] |= mask_obj; 3532 + val[i] &= ~unmask_obj; 3533 + } 3534 + 3535 + return tb_sw_write(sw, &val, TB_CFG_SWITCH, 3536 + sw->cap_lp + offset, ARRAY_SIZE(val)); 3537 + } 3538 + 3539 + /* 3540 + * Can be used for read/write a specified PCIe bridge for any Thunderbolt 3 3541 + * device. For now used only for Titan Ridge. 3542 + */ 3543 + static int tb_switch_pcie_bridge_write(struct tb_switch *sw, unsigned int bridge, 3544 + unsigned int pcie_offset, u32 value) 3545 + { 3546 + u32 offset, command, val; 3547 + int ret; 3548 + 3549 + if (sw->generation != 3) 3550 + return -EOPNOTSUPP; 3551 + 3552 + offset = sw->cap_plug_events + TB_PLUG_EVENTS_PCIE_WR_DATA; 3553 + ret = tb_sw_write(sw, &value, TB_CFG_SWITCH, offset, 1); 3554 + if (ret) 3555 + return ret; 3556 + 3557 + command = pcie_offset & TB_PLUG_EVENTS_PCIE_CMD_DW_OFFSET_MASK; 3558 + command |= BIT(bridge + TB_PLUG_EVENTS_PCIE_CMD_BR_SHIFT); 3559 + command |= TB_PLUG_EVENTS_PCIE_CMD_RD_WR_MASK; 3560 + command |= TB_PLUG_EVENTS_PCIE_CMD_COMMAND_VAL 3561 + << TB_PLUG_EVENTS_PCIE_CMD_COMMAND_SHIFT; 3562 + command |= TB_PLUG_EVENTS_PCIE_CMD_REQ_ACK_MASK; 3563 + 3564 + offset = sw->cap_plug_events + TB_PLUG_EVENTS_PCIE_CMD; 3565 + 3566 + ret = tb_sw_write(sw, &command, TB_CFG_SWITCH, offset, 1); 3567 + if (ret) 3568 + return ret; 3569 + 3570 + ret = tb_switch_wait_for_bit(sw, offset, 3571 + TB_PLUG_EVENTS_PCIE_CMD_REQ_ACK_MASK, 0, 100); 3572 + if (ret) 3573 + return ret; 3574 + 3575 + ret = tb_sw_read(sw, &val, 
TB_CFG_SWITCH, offset, 1); 3576 + if (ret) 3577 + return ret; 3578 + 3579 + if (val & TB_PLUG_EVENTS_PCIE_CMD_TIMEOUT_MASK) 3580 + return -ETIMEDOUT; 3581 + 3582 + return 0; 3583 + } 3584 + 3585 + /** 3586 + * tb_switch_pcie_l1_enable() - Enable PCIe link to enter L1 state 3587 + * @sw: Router to enable PCIe L1 3588 + * 3589 + * For Titan Ridge switch to enter CLx state, its PCIe bridges shall enable 3590 + * entry to PCIe L1 state. Shall be called after the upstream PCIe tunnel 3591 + * was configured. Due to Intel platforms limitation, shall be called only 3592 + * for first hop switch. 3593 + */ 3594 + int tb_switch_pcie_l1_enable(struct tb_switch *sw) 3595 + { 3596 + struct tb_switch *parent = tb_switch_parent(sw); 3597 + int ret; 3598 + 3599 + if (!tb_route(sw)) 3600 + return 0; 3601 + 3602 + if (!tb_switch_is_titan_ridge(sw)) 3603 + return 0; 3604 + 3605 + /* Enable PCIe L1 enable only for first hop router (depth = 1) */ 3606 + if (tb_route(parent)) 3607 + return 0; 3608 + 3609 + /* Write to downstream PCIe bridge #5 aka Dn4 */ 3610 + ret = tb_switch_pcie_bridge_write(sw, 5, 0x143, 0x0c7806b1); 3611 + if (ret) 3612 + return ret; 3613 + 3614 + /* Write to Upstream PCIe bridge #0 aka Up0 */ 3615 + return tb_switch_pcie_bridge_write(sw, 0, 0x143, 0x0c5806b1); 3279 3616 }
+74 -17
drivers/thunderbolt/tb.c
··· 105 105 } 106 106 } 107 107 108 - static void tb_discover_tunnels(struct tb_switch *sw) 108 + static void tb_switch_discover_tunnels(struct tb_switch *sw, 109 + struct list_head *list, 110 + bool alloc_hopids) 109 111 { 110 112 struct tb *tb = sw->tb; 111 - struct tb_cm *tcm = tb_priv(tb); 112 113 struct tb_port *port; 113 114 114 115 tb_switch_for_each_port(sw, port) { ··· 117 116 118 117 switch (port->config.type) { 119 118 case TB_TYPE_DP_HDMI_IN: 120 - tunnel = tb_tunnel_discover_dp(tb, port); 119 + tunnel = tb_tunnel_discover_dp(tb, port, alloc_hopids); 121 120 break; 122 121 123 122 case TB_TYPE_PCIE_DOWN: 124 - tunnel = tb_tunnel_discover_pci(tb, port); 123 + tunnel = tb_tunnel_discover_pci(tb, port, alloc_hopids); 125 124 break; 126 125 127 126 case TB_TYPE_USB3_DOWN: 128 - tunnel = tb_tunnel_discover_usb3(tb, port); 127 + tunnel = tb_tunnel_discover_usb3(tb, port, alloc_hopids); 129 128 break; 130 129 131 130 default: 132 131 break; 133 132 } 134 133 135 - if (!tunnel) 136 - continue; 134 + if (tunnel) 135 + list_add_tail(&tunnel->list, list); 136 + } 137 137 138 + tb_switch_for_each_port(sw, port) { 139 + if (tb_port_has_remote(port)) { 140 + tb_switch_discover_tunnels(port->remote->sw, list, 141 + alloc_hopids); 142 + } 143 + } 144 + } 145 + 146 + static void tb_discover_tunnels(struct tb *tb) 147 + { 148 + struct tb_cm *tcm = tb_priv(tb); 149 + struct tb_tunnel *tunnel; 150 + 151 + tb_switch_discover_tunnels(tb->root_switch, &tcm->tunnel_list, true); 152 + 153 + list_for_each_entry(tunnel, &tcm->tunnel_list, list) { 138 154 if (tb_tunnel_is_pci(tunnel)) { 139 155 struct tb_switch *parent = tunnel->dst_port->sw; 140 156 ··· 164 146 pm_runtime_get_sync(&tunnel->src_port->sw->dev); 165 147 pm_runtime_get_sync(&tunnel->dst_port->sw->dev); 166 148 } 167 - 168 - list_add_tail(&tunnel->list, &tcm->tunnel_list); 169 - } 170 - 171 - tb_switch_for_each_port(sw, port) { 172 - if (tb_port_has_remote(port)) 173 - tb_discover_tunnels(port->remote->sw); 174 149 } 
175 150 } 176 151 ··· 221 210 int ret; 222 211 223 212 /* If it is already enabled in correct mode, don't touch it */ 224 - if (tb_switch_tmu_is_enabled(sw)) 213 + if (tb_switch_tmu_hifi_is_enabled(sw, sw->tmu.unidirectional_request)) 225 214 return 0; 226 215 227 216 ret = tb_switch_tmu_disable(sw); ··· 669 658 tb_switch_lane_bonding_enable(sw); 670 659 /* Set the link configured */ 671 660 tb_switch_configure_link(sw); 661 + if (tb_switch_enable_clx(sw, TB_CL0S)) 662 + tb_sw_warn(sw, "failed to enable CLx on upstream port\n"); 663 + 664 + tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, 665 + tb_switch_is_clx_enabled(sw)); 672 666 673 667 if (tb_enable_tmu(sw)) 674 668 tb_sw_warn(sw, "failed to enable TMU\n"); ··· 1092 1076 return -EIO; 1093 1077 } 1094 1078 1079 + /* 1080 + * PCIe L1 is needed to enable CL0s for Titan Ridge so enable it 1081 + * here. 1082 + */ 1083 + if (tb_switch_pcie_l1_enable(sw)) 1084 + tb_sw_warn(sw, "failed to enable PCIe L1 for Titan Ridge\n"); 1085 + 1095 1086 list_add_tail(&tunnel->list, &tcm->tunnel_list); 1096 1087 return 0; 1097 1088 } ··· 1387 1364 return ret; 1388 1365 } 1389 1366 1367 + tb_switch_tmu_configure(tb->root_switch, TB_SWITCH_TMU_RATE_HIFI, false); 1390 1368 /* Enable TMU if it is off */ 1391 1369 tb_switch_tmu_enable(tb->root_switch); 1392 1370 /* Full scan to discover devices added before the driver was loaded. */ 1393 1371 tb_scan_switch(tb->root_switch); 1394 1372 /* Find out tunnels created by the boot firmware */ 1395 - tb_discover_tunnels(tb->root_switch); 1373 + tb_discover_tunnels(tb); 1396 1374 /* 1397 1375 * If the boot firmware did not create USB 3.x tunnels create them 1398 1376 * now for the whole topology. 
··· 1431 1407 if (sw->is_unplugged) 1432 1408 return; 1433 1409 1410 + if (tb_switch_enable_clx(sw, TB_CL0S)) 1411 + tb_sw_warn(sw, "failed to re-enable CLx on upstream port\n"); 1412 + 1413 + /* 1414 + * tb_switch_tmu_configure() was already called when the switch was 1415 + * added before entering system sleep or runtime suspend, 1416 + * so no need to call it again before enabling TMU. 1417 + */ 1434 1418 if (tb_enable_tmu(sw)) 1435 1419 tb_sw_warn(sw, "failed to restore TMU configuration\n"); 1436 1420 ··· 1461 1429 { 1462 1430 struct tb_cm *tcm = tb_priv(tb); 1463 1431 struct tb_tunnel *tunnel, *n; 1432 + unsigned int usb3_delay = 0; 1433 + LIST_HEAD(tunnels); 1464 1434 1465 1435 tb_dbg(tb, "resuming...\n"); 1466 1436 ··· 1473 1439 tb_free_invalid_tunnels(tb); 1474 1440 tb_free_unplugged_children(tb->root_switch); 1475 1441 tb_restore_children(tb->root_switch); 1476 - list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) 1442 + 1443 + /* 1444 + * If we get here from suspend to disk the boot firmware or the 1445 + * restore kernel might have created tunnels of its own. Since 1446 + * we cannot be sure they are usable for us we find and tear 1447 + * them down. 1448 + */ 1449 + tb_switch_discover_tunnels(tb->root_switch, &tunnels, false); 1450 + list_for_each_entry_safe_reverse(tunnel, n, &tunnels, list) { 1451 + if (tb_tunnel_is_usb3(tunnel)) 1452 + usb3_delay = 500; 1453 + tb_tunnel_deactivate(tunnel); 1454 + tb_tunnel_free(tunnel); 1455 + } 1456 + 1457 + /* Re-create our tunnels now */ 1458 + list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) { 1459 + /* USB3 requires delay before it can be re-activated */ 1460 + if (tb_tunnel_is_usb3(tunnel)) { 1461 + msleep(usb3_delay); 1462 + /* Only need to do it once */ 1463 + usb3_delay = 0; 1464 + } 1477 1465 tb_tunnel_restart(tunnel); 1466 + } 1478 1467 if (!list_empty(&tcm->tunnel_list)) { 1479 1468 /* 1480 1469 * the pcie links need some time to get going.
+100 -6
drivers/thunderbolt/tb.h
··· 89 89 * @cap: Offset to the TMU capability (%0 if not found) 90 90 * @has_ucap: Does the switch support uni-directional mode 91 91 * @rate: TMU refresh rate related to upstream switch. In case of root 92 - * switch this holds the domain rate. 92 + * switch this holds the domain rate. Reflects the HW setting. 93 93 * @unidirectional: Is the TMU in uni-directional or bi-directional mode 94 - * related to upstream switch. Don't case for root switch. 94 + * related to upstream switch. Don't care for root switch. 95 + * Reflects the HW setting. 96 + * @unidirectional_request: Is the new TMU mode: uni-directional or bi-directional 97 + * that is requested to be set. Related to upstream switch. 98 + * Don't care for root switch. 99 + * @rate_request: TMU new refresh rate related to upstream switch that is 100 + * requested to be set. In case of root switch, this holds 101 + * the new domain rate that is requested to be set. 95 102 */ 96 103 struct tb_switch_tmu { 97 104 int cap; 98 105 bool has_ucap; 99 106 enum tb_switch_tmu_rate rate; 100 107 bool unidirectional; 108 + bool unidirectional_request; 109 + enum tb_switch_tmu_rate rate_request; 110 + }; 111 + 112 + enum tb_clx { 113 + TB_CLX_DISABLE, 114 + TB_CL0S, 115 + TB_CL1, 116 + TB_CL2, 101 117 }; 102 118 103 119 /** ··· 138 122 * @link_usb4: Upstream link is USB4 139 123 * @generation: Switch Thunderbolt generation 140 124 * @cap_plug_events: Offset to the plug events capability (%0 if not found) 125 + * @cap_vsec_tmu: Offset to the TMU vendor specific capability (%0 if not found) 141 126 * @cap_lc: Offset to the link controller capability (%0 if not found) 127 + * @cap_lp: Offset to the low power (CLx for TBT) capability (%0 if not found) 142 128 * @is_unplugged: The switch is going away 143 129 * @drom: DROM of the switch (%NULL if not found) 144 130 * @nvm: Pointer to the NVM if the switch has one (%NULL otherwise) ··· 166 148 * @min_dp_main_credits: Router preferred minimum number of buffers for DP MAIN 167 
149 * @max_pcie_credits: Router preferred number of buffers for PCIe 168 150 * @max_dma_credits: Router preferred number of buffers for DMA/P2P 151 + * @clx: CLx state on the upstream link of the router 169 152 * 170 153 * When the switch is being added or removed to the domain (other 171 154 * switches) you need to have domain lock held. ··· 191 172 bool link_usb4; 192 173 unsigned int generation; 193 174 int cap_plug_events; 175 + int cap_vsec_tmu; 194 176 int cap_lc; 177 + int cap_lp; 195 178 bool is_unplugged; 196 179 u8 *drom; 197 180 struct tb_nvm *nvm; ··· 217 196 unsigned int min_dp_main_credits; 218 197 unsigned int max_pcie_credits; 219 198 unsigned int max_dma_credits; 199 + enum tb_clx clx; 220 200 }; 221 201 222 202 /** ··· 376 354 * when deactivating this path 377 355 * @hops: Path hops 378 356 * @path_length: How many hops the path uses 357 + * @alloc_hopid: Does this path consume port HopID 379 358 * 380 359 * A path consists of a number of hops (see &struct tb_path_hop). 
To 381 360 * establish a PCIe tunnel two paths have to be created between the two ··· 397 374 bool clear_fc; 398 375 struct tb_path_hop *hops; 399 376 int path_length; 377 + bool alloc_hopid; 400 378 }; 401 379 402 380 /* HopIDs 0-7 are reserved by the Thunderbolt protocol */ ··· 764 740 void tb_switch_suspend(struct tb_switch *sw, bool runtime); 765 741 int tb_switch_resume(struct tb_switch *sw); 766 742 int tb_switch_reset(struct tb_switch *sw); 743 + int tb_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit, 744 + u32 value, int timeout_msec); 767 745 void tb_sw_set_unplugged(struct tb_switch *sw); 768 746 struct tb_port *tb_switch_find_port(struct tb_switch *sw, 769 747 enum tb_port_type type); ··· 877 851 return false; 878 852 } 879 853 854 + static inline bool tb_switch_is_tiger_lake(const struct tb_switch *sw) 855 + { 856 + if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) { 857 + switch (sw->config.device_id) { 858 + case PCI_DEVICE_ID_INTEL_TGL_NHI0: 859 + case PCI_DEVICE_ID_INTEL_TGL_NHI1: 860 + case PCI_DEVICE_ID_INTEL_TGL_H_NHI0: 861 + case PCI_DEVICE_ID_INTEL_TGL_H_NHI1: 862 + return true; 863 + } 864 + } 865 + return false; 866 + } 867 + 880 868 /** 881 869 * tb_switch_is_usb4() - Is the switch USB4 compliant 882 870 * @sw: Switch to check ··· 929 889 int tb_switch_tmu_post_time(struct tb_switch *sw); 930 890 int tb_switch_tmu_disable(struct tb_switch *sw); 931 891 int tb_switch_tmu_enable(struct tb_switch *sw); 932 - 933 - static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw) 892 + void tb_switch_tmu_configure(struct tb_switch *sw, 893 + enum tb_switch_tmu_rate rate, 894 + bool unidirectional); 895 + /** 896 + * tb_switch_tmu_hifi_is_enabled() - Checks if the specified TMU mode is enabled 897 + * @sw: Router whose TMU mode to check 898 + * @unidirectional: If uni-directional (bi-directional otherwise) 899 + * 900 + * Return true if hardware TMU configuration matches the one passed in 901 + * as parameter. 
That is HiFi and either uni-directional or bi-directional. 902 + */ 903 + static inline bool tb_switch_tmu_hifi_is_enabled(const struct tb_switch *sw, 904 + bool unidirectional) 934 905 { 935 906 return sw->tmu.rate == TB_SWITCH_TMU_RATE_HIFI && 936 - !sw->tmu.unidirectional; 907 + sw->tmu.unidirectional == unidirectional; 937 908 } 909 + 910 + int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx); 911 + int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx); 912 + 913 + /** 914 + * tb_switch_is_clx_enabled() - Checks if the CLx is enabled 915 + * @sw: Router to check the CLx state for 916 + * 917 + * Checks if the CLx is enabled on the router upstream link. 918 + * Not applicable for a host router. 919 + */ 920 + static inline bool tb_switch_is_clx_enabled(const struct tb_switch *sw) 921 + { 922 + return sw->clx != TB_CLX_DISABLE; 923 + } 924 + 925 + /** 926 + * tb_switch_is_cl0s_enabled() - Checks if the CL0s is enabled 927 + * @sw: Router to check for the CL0s 928 + * 929 + * Checks if the CL0s is enabled on the router upstream link. 930 + * Not applicable for a host router. 
931 + */ 932 + static inline bool tb_switch_is_cl0s_enabled(const struct tb_switch *sw) 933 + { 934 + return sw->clx == TB_CL0S; 935 + } 936 + 937 + /** 938 + * tb_switch_is_clx_supported() - Is CLx supported on this type of router 939 + * @sw: The router to check CLx support for 940 + */ 941 + static inline bool tb_switch_is_clx_supported(const struct tb_switch *sw) 942 + { 943 + return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw); 944 + } 945 + 946 + int tb_switch_mask_clx_objections(struct tb_switch *sw); 947 + 948 + int tb_switch_pcie_l1_enable(struct tb_switch *sw); 938 949 939 950 int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged); 940 951 int tb_port_add_nfc_credits(struct tb_port *port, int credits); ··· 1048 957 1049 958 struct tb_path *tb_path_discover(struct tb_port *src, int src_hopid, 1050 959 struct tb_port *dst, int dst_hopid, 1051 - struct tb_port **last, const char *name); 960 + struct tb_port **last, const char *name, 961 + bool alloc_hopid); 1052 962 struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid, 1053 963 struct tb_port *dst, int dst_hopid, int link_nr, 1054 964 const char *name); ··· 1080 988 int tb_lc_configure_xdomain(struct tb_port *port); 1081 989 void tb_lc_unconfigure_xdomain(struct tb_port *port); 1082 990 int tb_lc_start_lane_initialization(struct tb_port *port); 991 + bool tb_lc_is_clx_supported(struct tb_port *port); 1083 992 int tb_lc_set_wake(struct tb_switch *sw, unsigned int flags); 1084 993 int tb_lc_set_sleep(struct tb_switch *sw); 1085 994 bool tb_lc_lane_bonding_possible(struct tb_switch *sw); ··· 1167 1074 int usb4_port_router_offline(struct tb_port *port); 1168 1075 int usb4_port_router_online(struct tb_port *port); 1169 1076 int usb4_port_enumerate_retimers(struct tb_port *port); 1077 + bool usb4_port_clx_supported(struct tb_port *port); 1170 1078 1171 1079 int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index); 1172 1080 int 
usb4_port_retimer_read(struct tb_port *port, u8 index, u8 reg, void *buf,
+30 -17
drivers/thunderbolt/tb_msgs.h
··· 535 535 u32 type; 536 536 }; 537 537 538 + struct tb_xdp_error_response { 539 + struct tb_xdp_header hdr; 540 + u32 error; 541 + }; 542 + 538 543 struct tb_xdp_uuid { 539 544 struct tb_xdp_header hdr; 540 545 }; 541 546 542 547 struct tb_xdp_uuid_response { 543 - struct tb_xdp_header hdr; 544 - uuid_t src_uuid; 545 - u32 src_route_hi; 546 - u32 src_route_lo; 548 + union { 549 + struct tb_xdp_error_response err; 550 + struct { 551 + struct tb_xdp_header hdr; 552 + uuid_t src_uuid; 553 + u32 src_route_hi; 554 + u32 src_route_lo; 555 + }; 556 + }; 547 557 }; 548 558 549 559 struct tb_xdp_properties { ··· 565 555 }; 566 556 567 557 struct tb_xdp_properties_response { 568 - struct tb_xdp_header hdr; 569 - uuid_t src_uuid; 570 - uuid_t dst_uuid; 571 - u16 offset; 572 - u16 data_length; 573 - u32 generation; 574 - u32 data[0]; 558 + union { 559 + struct tb_xdp_error_response err; 560 + struct { 561 + struct tb_xdp_header hdr; 562 + uuid_t src_uuid; 563 + uuid_t dst_uuid; 564 + u16 offset; 565 + u16 data_length; 566 + u32 generation; 567 + u32 data[]; 568 + }; 569 + }; 575 570 }; 576 571 577 572 /* ··· 595 580 }; 596 581 597 582 struct tb_xdp_properties_changed_response { 598 - struct tb_xdp_header hdr; 583 + union { 584 + struct tb_xdp_error_response err; 585 + struct tb_xdp_header hdr; 586 + }; 599 587 }; 600 588 601 589 enum tb_xdp_error { ··· 607 589 ERROR_UNKNOWN_DOMAIN, 608 590 ERROR_NOT_SUPPORTED, 609 591 ERROR_NOT_READY, 610 - }; 611 - 612 - struct tb_xdp_error_response { 613 - struct tb_xdp_header hdr; 614 - u32 error; 615 592 }; 616 593 617 594 #endif
+80 -31
drivers/thunderbolt/tb_regs.h
··· 33 33 enum tb_switch_vse_cap { 34 34 TB_VSE_CAP_PLUG_EVENTS = 0x01, /* also EEPROM */ 35 35 TB_VSE_CAP_TIME2 = 0x03, 36 - TB_VSE_CAP_IECS = 0x04, 36 + TB_VSE_CAP_CP_LP = 0x04, 37 37 TB_VSE_CAP_LINK_CONTROLLER = 0x06, /* also IECS */ 38 38 }; 39 39 ··· 246 246 #define TMU_RTR_CS_3_TS_PACKET_INTERVAL_SHIFT 16 247 247 #define TMU_RTR_CS_22 0x16 248 248 #define TMU_RTR_CS_24 0x18 249 + #define TMU_RTR_CS_25 0x19 249 250 250 251 enum tb_port_type { 251 252 TB_TYPE_INACTIVE = 0x000000, ··· 306 305 /* TMU adapter registers */ 307 306 #define TMU_ADP_CS_3 0x03 308 307 #define TMU_ADP_CS_3_UDM BIT(29) 308 + #define TMU_ADP_CS_6 0x06 309 + #define TMU_ADP_CS_6_DTS BIT(1) 309 310 310 311 /* Lane adapter registers */ 311 312 #define LANE_ADP_CS_0 0x00 312 313 #define LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK GENMASK(25, 20) 313 314 #define LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT 20 315 + #define LANE_ADP_CS_0_CL0S_SUPPORT BIT(26) 316 + #define LANE_ADP_CS_0_CL1_SUPPORT BIT(27) 314 317 #define LANE_ADP_CS_1 0x01 315 318 #define LANE_ADP_CS_1_TARGET_WIDTH_MASK GENMASK(9, 4) 316 319 #define LANE_ADP_CS_1_TARGET_WIDTH_SHIFT 4 317 320 #define LANE_ADP_CS_1_TARGET_WIDTH_SINGLE 0x1 318 321 #define LANE_ADP_CS_1_TARGET_WIDTH_DUAL 0x3 322 + #define LANE_ADP_CS_1_CL0S_ENABLE BIT(10) 323 + #define LANE_ADP_CS_1_CL1_ENABLE BIT(11) 319 324 #define LANE_ADP_CS_1_LD BIT(14) 320 325 #define LANE_ADP_CS_1_LB BIT(15) 321 326 #define LANE_ADP_CS_1_CURRENT_SPEED_MASK GENMASK(19, 16) ··· 330 323 #define LANE_ADP_CS_1_CURRENT_SPEED_GEN3 0x4 331 324 #define LANE_ADP_CS_1_CURRENT_WIDTH_MASK GENMASK(25, 20) 332 325 #define LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT 20 326 + #define LANE_ADP_CS_1_PMS BIT(30) 333 327 334 328 /* USB4 port registers */ 335 329 #define PORT_CS_1 0x01 ··· 346 338 #define PORT_CS_18 0x12 347 339 #define PORT_CS_18_BE BIT(8) 348 340 #define PORT_CS_18_TCM BIT(9) 341 + #define PORT_CS_18_CPS BIT(10) 349 342 #define PORT_CS_18_WOU4S BIT(18) 350 343 #define PORT_CS_19 0x13 351 344 #define 
PORT_CS_19_PC BIT(3) ··· 446 437 u32 unknown3:3; /* set to zero */ 447 438 } __packed; 448 439 440 + /* TMU Thunderbolt 3 registers */ 441 + #define TB_TIME_VSEC_3_CS_9 0x9 442 + #define TB_TIME_VSEC_3_CS_9_TMU_OBJ_MASK GENMASK(17, 16) 443 + #define TB_TIME_VSEC_3_CS_26 0x1a 444 + #define TB_TIME_VSEC_3_CS_26_TD BIT(22) 445 + 446 + /* 447 + * Used for Titan Ridge only. Bits are part of the same register: TMU_ADP_CS_6 448 + * (see above) as in USB4 spec, but these specific bits used for Titan Ridge 449 + * only and reserved in USB4 spec. 450 + */ 451 + #define TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK GENMASK(3, 2) 452 + #define TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL1 BIT(2) 453 + #define TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL2 BIT(3) 454 + 455 + /* Plug Events registers */ 456 + #define TB_PLUG_EVENTS_PCIE_WR_DATA 0x1b 457 + #define TB_PLUG_EVENTS_PCIE_CMD 0x1c 458 + #define TB_PLUG_EVENTS_PCIE_CMD_DW_OFFSET_MASK GENMASK(9, 0) 459 + #define TB_PLUG_EVENTS_PCIE_CMD_BR_SHIFT 10 460 + #define TB_PLUG_EVENTS_PCIE_CMD_BR_MASK GENMASK(17, 10) 461 + #define TB_PLUG_EVENTS_PCIE_CMD_RD_WR_MASK BIT(21) 462 + #define TB_PLUG_EVENTS_PCIE_CMD_WR 0x1 463 + #define TB_PLUG_EVENTS_PCIE_CMD_COMMAND_SHIFT 22 464 + #define TB_PLUG_EVENTS_PCIE_CMD_COMMAND_MASK GENMASK(24, 22) 465 + #define TB_PLUG_EVENTS_PCIE_CMD_COMMAND_VAL 0x2 466 + #define TB_PLUG_EVENTS_PCIE_CMD_REQ_ACK_MASK BIT(30) 467 + #define TB_PLUG_EVENTS_PCIE_CMD_TIMEOUT_MASK BIT(31) 468 + #define TB_PLUG_EVENTS_PCIE_CMD_RD_DATA 0x1d 469 + 470 + /* CP Low Power registers */ 471 + #define TB_LOW_PWR_C1_CL1 0x1 472 + #define TB_LOW_PWR_C1_CL1_OBJ_MASK GENMASK(4, 1) 473 + #define TB_LOW_PWR_C1_CL2_OBJ_MASK GENMASK(4, 1) 474 + #define TB_LOW_PWR_C1_PORT_A_MASK GENMASK(2, 1) 475 + #define TB_LOW_PWR_C0_PORT_B_MASK GENMASK(4, 3) 476 + #define TB_LOW_PWR_C3_CL1 0x3 477 + 449 478 /* Common link controller registers */ 450 - #define TB_LC_DESC 0x02 451 - #define TB_LC_DESC_NLC_MASK GENMASK(3, 0) 452 - #define TB_LC_DESC_SIZE_SHIFT 8 453 - #define 
TB_LC_DESC_SIZE_MASK GENMASK(15, 8) 454 - #define TB_LC_DESC_PORT_SIZE_SHIFT 16 455 - #define TB_LC_DESC_PORT_SIZE_MASK GENMASK(27, 16) 456 - #define TB_LC_FUSE 0x03 457 - #define TB_LC_SNK_ALLOCATION 0x10 458 - #define TB_LC_SNK_ALLOCATION_SNK0_MASK GENMASK(3, 0) 459 - #define TB_LC_SNK_ALLOCATION_SNK0_CM 0x1 460 - #define TB_LC_SNK_ALLOCATION_SNK1_SHIFT 4 461 - #define TB_LC_SNK_ALLOCATION_SNK1_MASK GENMASK(7, 4) 462 - #define TB_LC_SNK_ALLOCATION_SNK1_CM 0x1 463 - #define TB_LC_POWER 0x740 479 + #define TB_LC_DESC 0x02 480 + #define TB_LC_DESC_NLC_MASK GENMASK(3, 0) 481 + #define TB_LC_DESC_SIZE_SHIFT 8 482 + #define TB_LC_DESC_SIZE_MASK GENMASK(15, 8) 483 + #define TB_LC_DESC_PORT_SIZE_SHIFT 16 484 + #define TB_LC_DESC_PORT_SIZE_MASK GENMASK(27, 16) 485 + #define TB_LC_FUSE 0x03 486 + #define TB_LC_SNK_ALLOCATION 0x10 487 + #define TB_LC_SNK_ALLOCATION_SNK0_MASK GENMASK(3, 0) 488 + #define TB_LC_SNK_ALLOCATION_SNK0_CM 0x1 489 + #define TB_LC_SNK_ALLOCATION_SNK1_SHIFT 4 490 + #define TB_LC_SNK_ALLOCATION_SNK1_MASK GENMASK(7, 4) 491 + #define TB_LC_SNK_ALLOCATION_SNK1_CM 0x1 492 + #define TB_LC_POWER 0x740 464 493 465 494 /* Link controller registers */ 466 - #define TB_LC_PORT_ATTR 0x8d 467 - #define TB_LC_PORT_ATTR_BE BIT(12) 495 + #define TB_LC_PORT_ATTR 0x8d 496 + #define TB_LC_PORT_ATTR_BE BIT(12) 468 497 469 - #define TB_LC_SX_CTRL 0x96 470 - #define TB_LC_SX_CTRL_WOC BIT(1) 471 - #define TB_LC_SX_CTRL_WOD BIT(2) 472 - #define TB_LC_SX_CTRL_WODPC BIT(3) 473 - #define TB_LC_SX_CTRL_WODPD BIT(4) 474 - #define TB_LC_SX_CTRL_WOU4 BIT(5) 475 - #define TB_LC_SX_CTRL_WOP BIT(6) 476 - #define TB_LC_SX_CTRL_L1C BIT(16) 477 - #define TB_LC_SX_CTRL_L1D BIT(17) 478 - #define TB_LC_SX_CTRL_L2C BIT(20) 479 - #define TB_LC_SX_CTRL_L2D BIT(21) 480 - #define TB_LC_SX_CTRL_SLI BIT(29) 481 - #define TB_LC_SX_CTRL_UPSTREAM BIT(30) 482 - #define TB_LC_SX_CTRL_SLP BIT(31) 498 + #define TB_LC_SX_CTRL 0x96 499 + #define TB_LC_SX_CTRL_WOC BIT(1) 500 + #define TB_LC_SX_CTRL_WOD 
BIT(2) 501 + #define TB_LC_SX_CTRL_WODPC BIT(3) 502 + #define TB_LC_SX_CTRL_WODPD BIT(4) 503 + #define TB_LC_SX_CTRL_WOU4 BIT(5) 504 + #define TB_LC_SX_CTRL_WOP BIT(6) 505 + #define TB_LC_SX_CTRL_L1C BIT(16) 506 + #define TB_LC_SX_CTRL_L1D BIT(17) 507 + #define TB_LC_SX_CTRL_L2C BIT(20) 508 + #define TB_LC_SX_CTRL_L2D BIT(21) 509 + #define TB_LC_SX_CTRL_SLI BIT(29) 510 + #define TB_LC_SX_CTRL_UPSTREAM BIT(30) 511 + #define TB_LC_SX_CTRL_SLP BIT(31) 512 + #define TB_LC_LINK_ATTR 0x97 513 + #define TB_LC_LINK_ATTR_CPS BIT(18) 483 514 484 515 #endif
+292 -55
drivers/thunderbolt/tmu.c
··· 115 115 return tb_port_tmu_set_unidirectional(port, false); 116 116 } 117 117 118 + static inline int tb_port_tmu_unidirectional_enable(struct tb_port *port) 119 + { 120 + return tb_port_tmu_set_unidirectional(port, true); 121 + } 122 + 118 123 static bool tb_port_tmu_is_unidirectional(struct tb_port *port) 119 124 { 120 125 int ret; ··· 133 128 return val & TMU_ADP_CS_3_UDM; 134 129 } 135 130 131 + static int tb_port_tmu_time_sync(struct tb_port *port, bool time_sync) 132 + { 133 + u32 val = time_sync ? TMU_ADP_CS_6_DTS : 0; 134 + 135 + return tb_port_tmu_write(port, TMU_ADP_CS_6, TMU_ADP_CS_6_DTS, val); 136 + } 137 + 138 + static int tb_port_tmu_time_sync_disable(struct tb_port *port) 139 + { 140 + return tb_port_tmu_time_sync(port, true); 141 + } 142 + 143 + static int tb_port_tmu_time_sync_enable(struct tb_port *port) 144 + { 145 + return tb_port_tmu_time_sync(port, false); 146 + } 147 + 136 148 static int tb_switch_tmu_set_time_disruption(struct tb_switch *sw, bool set) 137 149 { 150 + u32 val, offset, bit; 138 151 int ret; 139 - u32 val; 140 152 141 - ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, 142 - sw->tmu.cap + TMU_RTR_CS_0, 1); 153 + if (tb_switch_is_usb4(sw)) { 154 + offset = sw->tmu.cap + TMU_RTR_CS_0; 155 + bit = TMU_RTR_CS_0_TD; 156 + } else { 157 + offset = sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_26; 158 + bit = TB_TIME_VSEC_3_CS_26_TD; 159 + } 160 + 161 + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1); 143 162 if (ret) 144 163 return ret; 145 164 146 165 if (set) 147 - val |= TMU_RTR_CS_0_TD; 166 + val |= bit; 148 167 else 149 - val &= ~TMU_RTR_CS_0_TD; 168 + val &= ~bit; 150 169 151 - return tb_sw_write(sw, &val, TB_CFG_SWITCH, 152 - sw->tmu.cap + TMU_RTR_CS_0, 1); 170 + return tb_sw_write(sw, &val, TB_CFG_SWITCH, offset, 1); 153 171 } 154 172 155 173 /** ··· 235 207 */ 236 208 int tb_switch_tmu_post_time(struct tb_switch *sw) 237 209 { 238 - unsigned int post_local_time_offset, post_time_offset; 210 + unsigned int post_time_high_offset, 
post_time_high = 0; 211 + unsigned int post_local_time_offset, post_time_offset; 239 212 struct tb_switch *root_switch = sw->tb->root_switch; 240 213 u64 hi, mid, lo, local_time, post_time; 241 214 int i, ret, retries = 100; ··· 276 247 277 248 post_local_time_offset = sw->tmu.cap + TMU_RTR_CS_22; 278 249 post_time_offset = sw->tmu.cap + TMU_RTR_CS_24; 250 + post_time_high_offset = sw->tmu.cap + TMU_RTR_CS_25; 279 251 280 252 /* 281 253 * Write the Grandmaster time to the Post Local Time registers ··· 288 258 goto out; 289 259 290 260 /* 291 - * Have the new switch update its local time (by writing 1 to 292 - * the post_time registers) and wait for the completion of the 293 - * same (post_time register becomes 0). This means the time has 294 - * been converged properly. 261 + * Have the new switch update its local time by: 262 + * 1) writing 0x1 to the Post Time Low register and 0xffffffff to 263 + * Post Time High register. 264 + * 2) write 0 to Post Time High register and then wait for 265 + * the completion of the post_time register becomes 0. 266 + * This means the time has been converged properly. 295 267 */ 296 - post_time = 1; 268 + post_time = 0xffffffff00000001ULL; 297 269 298 270 ret = tb_sw_write(sw, &post_time, TB_CFG_SWITCH, post_time_offset, 2); 271 + if (ret) 272 + goto out; 273 + 274 + ret = tb_sw_write(sw, &post_time_high, TB_CFG_SWITCH, 275 + post_time_high_offset, 1); 299 276 if (ret) 300 277 goto out; 301 278 ··· 334 297 */ 335 298 int tb_switch_tmu_disable(struct tb_switch *sw) 336 299 { 337 - int ret; 338 - 339 - if (!tb_switch_is_usb4(sw)) 300 + /* 301 + * No need to disable TMU on devices that don't support CLx since 302 + * on these devices e.g. Alpine Ridge and earlier, the TMU mode 303 + * HiFi bi-directional is enabled by default and we don't change it. 304 + */ 305 + if (!tb_switch_is_clx_supported(sw)) 340 306 return 0; 341 307 342 308 /* Already disabled? 
*/ 343 309 if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF) 344 310 return 0; 345 311 346 - if (sw->tmu.unidirectional) { 312 + 313 + if (tb_route(sw)) { 314 + bool unidirectional = tb_switch_tmu_hifi_is_enabled(sw, true); 347 315 struct tb_switch *parent = tb_switch_parent(sw); 348 - struct tb_port *up, *down; 316 + struct tb_port *down, *up; 317 + int ret; 349 318 350 - up = tb_upstream_port(sw); 351 319 down = tb_port_at(tb_route(sw), parent); 320 + up = tb_upstream_port(sw); 321 + /* 322 + * In case of uni-directional time sync, TMU handshake is 323 + * initiated by upstream router. In case of bi-directional 324 + * time sync, TMU handshake is initiated by downstream router. 325 + * Therefore, we change the rate to off in the respective 326 + * router. 327 + */ 328 + if (unidirectional) 329 + tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_OFF); 330 + else 331 + tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF); 352 332 353 - /* The switch may be unplugged so ignore any errors */ 354 - tb_port_tmu_unidirectional_disable(up); 355 - ret = tb_port_tmu_unidirectional_disable(down); 333 + tb_port_tmu_time_sync_disable(up); 334 + ret = tb_port_tmu_time_sync_disable(down); 356 335 if (ret) 357 336 return ret; 358 - } 359 337 360 - tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF); 338 + if (unidirectional) { 339 + /* The switch may be unplugged so ignore any errors */ 340 + tb_port_tmu_unidirectional_disable(up); 341 + ret = tb_port_tmu_unidirectional_disable(down); 342 + if (ret) 343 + return ret; 344 + } 345 + } else { 346 + tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF); 347 + } 361 348 362 349 sw->tmu.unidirectional = false; 363 350 sw->tmu.rate = TB_SWITCH_TMU_RATE_OFF; ··· 390 329 return 0; 391 330 } 392 331 393 - /** 394 - * tb_switch_tmu_enable() - Enable TMU on a switch 395 - * @sw: Switch whose TMU to enable 396 - * 397 - * Enables TMU of a switch to be in bi-directional, HiFi mode. In this mode 398 - * all tunneling should work. 
399 - */ 400 - int tb_switch_tmu_enable(struct tb_switch *sw) 332 + static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional) 401 333 { 334 + struct tb_switch *parent = tb_switch_parent(sw); 335 + struct tb_port *down, *up; 336 + 337 + down = tb_port_at(tb_route(sw), parent); 338 + up = tb_upstream_port(sw); 339 + /* 340 + * In case of any failure in one of the steps when setting 341 + * bi-directional or uni-directional TMU mode, get back to the TMU 342 + * configurations in off mode. In case of additional failures in 343 + * the functions below, ignore them since the caller shall already 344 + * report a failure. 345 + */ 346 + tb_port_tmu_time_sync_disable(down); 347 + tb_port_tmu_time_sync_disable(up); 348 + if (unidirectional) 349 + tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_OFF); 350 + else 351 + tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF); 352 + 353 + tb_port_tmu_unidirectional_disable(down); 354 + tb_port_tmu_unidirectional_disable(up); 355 + } 356 + 357 + /* 358 + * This function is called when the previous TMU mode was 359 + * TB_SWITCH_TMU_RATE_OFF. 
360 + */ 361 + static int __tb_switch_tmu_enable_bidirectional(struct tb_switch *sw) 362 + { 363 + struct tb_switch *parent = tb_switch_parent(sw); 364 + struct tb_port *up, *down; 402 365 int ret; 403 366 404 - if (!tb_switch_is_usb4(sw)) 367 + up = tb_upstream_port(sw); 368 + down = tb_port_at(tb_route(sw), parent); 369 + 370 + ret = tb_port_tmu_unidirectional_disable(up); 371 + if (ret) 372 + return ret; 373 + 374 + ret = tb_port_tmu_unidirectional_disable(down); 375 + if (ret) 376 + goto out; 377 + 378 + ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI); 379 + if (ret) 380 + goto out; 381 + 382 + ret = tb_port_tmu_time_sync_enable(up); 383 + if (ret) 384 + goto out; 385 + 386 + ret = tb_port_tmu_time_sync_enable(down); 387 + if (ret) 388 + goto out; 389 + 390 + return 0; 391 + 392 + out: 393 + __tb_switch_tmu_off(sw, false); 394 + return ret; 395 + } 396 + 397 + static int tb_switch_tmu_objection_mask(struct tb_switch *sw) 398 + { 399 + u32 val; 400 + int ret; 401 + 402 + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, 403 + sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1); 404 + if (ret) 405 + return ret; 406 + 407 + val &= ~TB_TIME_VSEC_3_CS_9_TMU_OBJ_MASK; 408 + 409 + return tb_sw_write(sw, &val, TB_CFG_SWITCH, 410 + sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1); 411 + } 412 + 413 + static int tb_switch_tmu_unidirectional_enable(struct tb_switch *sw) 414 + { 415 + struct tb_port *up = tb_upstream_port(sw); 416 + 417 + return tb_port_tmu_write(up, TMU_ADP_CS_6, 418 + TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK, 419 + TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK); 420 + } 421 + 422 + /* 423 + * This function is called when the previous TMU mode was 424 + * TB_SWITCH_TMU_RATE_OFF. 
425 + */ 426 + static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw) 427 + { 428 + struct tb_switch *parent = tb_switch_parent(sw); 429 + struct tb_port *up, *down; 430 + int ret; 431 + 432 + up = tb_upstream_port(sw); 433 + down = tb_port_at(tb_route(sw), parent); 434 + ret = tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_HIFI); 435 + if (ret) 436 + return ret; 437 + 438 + ret = tb_port_tmu_unidirectional_enable(up); 439 + if (ret) 440 + goto out; 441 + 442 + ret = tb_port_tmu_time_sync_enable(up); 443 + if (ret) 444 + goto out; 445 + 446 + ret = tb_port_tmu_unidirectional_enable(down); 447 + if (ret) 448 + goto out; 449 + 450 + ret = tb_port_tmu_time_sync_enable(down); 451 + if (ret) 452 + goto out; 453 + 454 + return 0; 455 + 456 + out: 457 + __tb_switch_tmu_off(sw, true); 458 + return ret; 459 + } 460 + 461 + static int tb_switch_tmu_hifi_enable(struct tb_switch *sw) 462 + { 463 + bool unidirectional = sw->tmu.unidirectional_request; 464 + int ret; 465 + 466 + if (unidirectional && !sw->tmu.has_ucap) 467 + return -EOPNOTSUPP; 468 + 469 + /* 470 + * No need to enable TMU on devices that don't support CLx since on 471 + * these devices e.g. Alpine Ridge and earlier, the TMU mode HiFi 472 + * bi-directional is enabled by default. 
473 + */ 474 + if (!tb_switch_is_clx_supported(sw)) 405 475 return 0; 406 476 407 - if (tb_switch_tmu_is_enabled(sw)) 477 + if (tb_switch_tmu_hifi_is_enabled(sw, sw->tmu.unidirectional_request)) 408 478 return 0; 479 + 480 + if (tb_switch_is_titan_ridge(sw) && unidirectional) { 481 + /* Titan Ridge supports only CL0s */ 482 + if (!tb_switch_is_cl0s_enabled(sw)) 483 + return -EOPNOTSUPP; 484 + 485 + ret = tb_switch_tmu_objection_mask(sw); 486 + if (ret) 487 + return ret; 488 + 489 + ret = tb_switch_tmu_unidirectional_enable(sw); 490 + if (ret) 491 + return ret; 492 + } 409 493 410 494 ret = tb_switch_tmu_set_time_disruption(sw, true); 411 495 if (ret) 412 496 return ret; 413 497 414 - /* Change mode to bi-directional */ 415 - if (tb_route(sw) && sw->tmu.unidirectional) { 416 - struct tb_switch *parent = tb_switch_parent(sw); 417 - struct tb_port *up, *down; 418 - 419 - up = tb_upstream_port(sw); 420 - down = tb_port_at(tb_route(sw), parent); 421 - 422 - ret = tb_port_tmu_unidirectional_disable(down); 423 - if (ret) 424 - return ret; 425 - 426 - ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI); 427 - if (ret) 428 - return ret; 429 - 430 - ret = tb_port_tmu_unidirectional_disable(up); 431 - if (ret) 432 - return ret; 498 + if (tb_route(sw)) { 499 + /* The used mode changes are from OFF to HiFi-Uni/HiFi-BiDir */ 500 + if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF) { 501 + if (unidirectional) 502 + ret = __tb_switch_tmu_enable_unidirectional(sw); 503 + else 504 + ret = __tb_switch_tmu_enable_bidirectional(sw); 505 + if (ret) 506 + return ret; 507 + } 508 + sw->tmu.unidirectional = unidirectional; 433 509 } else { 510 + /* 511 + * Host router port configurations are written as 512 + * part of configurations for downstream port of the parent 513 + * of the child node - see above. 514 + * Here only the host router' rate configuration is written. 
515 + */ 434 516 ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI); 435 517 if (ret) 436 518 return ret; 437 519 } 438 520 439 - sw->tmu.unidirectional = false; 440 521 sw->tmu.rate = TB_SWITCH_TMU_RATE_HIFI; 441 - tb_sw_dbg(sw, "TMU: mode set to: %s\n", tb_switch_tmu_mode_name(sw)); 442 522 523 + tb_sw_dbg(sw, "TMU: mode set to: %s\n", tb_switch_tmu_mode_name(sw)); 443 524 return tb_switch_tmu_set_time_disruption(sw, false); 525 + } 526 + 527 + /** 528 + * tb_switch_tmu_enable() - Enable TMU on a router 529 + * @sw: Router whose TMU to enable 530 + * 531 + * Enables TMU of a router to be in uni-directional or bi-directional HiFi mode. 532 + * Calling tb_switch_tmu_configure() is required before calling this function, 533 + * to select the mode HiFi and directionality (uni-directional/bi-directional). 534 + * In both modes all tunneling should work. Uni-directional mode is required for 535 + * CLx (Link Low-Power) to work. 536 + */ 537 + int tb_switch_tmu_enable(struct tb_switch *sw) 538 + { 539 + if (sw->tmu.rate_request == TB_SWITCH_TMU_RATE_NORMAL) 540 + return -EOPNOTSUPP; 541 + 542 + return tb_switch_tmu_hifi_enable(sw); 543 + } 544 + 545 + /** 546 + * tb_switch_tmu_configure() - Configure the TMU rate and directionality 547 + * @sw: Router whose mode to change 548 + * @rate: Rate to configure Off/LowRes/HiFi 549 + * @unidirectional: If uni-directional (bi-directional otherwise) 550 + * 551 + * Selects the rate of the TMU and directionality (uni-directional or 552 + * bi-directional). Must be called before tb_switch_tmu_enable(). 553 + */ 554 + void tb_switch_tmu_configure(struct tb_switch *sw, 555 + enum tb_switch_tmu_rate rate, bool unidirectional) 556 + { 557 + sw->tmu.unidirectional_request = unidirectional; 558 + sw->tmu.rate_request = rate; 444 559 }
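The tmu.c hunk above generalizes `tb_switch_tmu_set_time_disruption()` so the register offset and bit are chosen per hardware flavor (USB4 vs. the vendor-specific TMU capability) before a single read-modify-write. As an editorial aside, the pattern can be sketched in standalone userspace C; the offsets, bit positions, and `regs[]` array below are illustrative stand-ins, not the kernel's config-space accessors:

```c
#include <stdbool.h>
#include <stdint.h>

/* Standalone sketch (not kernel code) of the read-modify-write pattern:
 * pick a register offset and bit depending on the hardware flavor, then
 * set or clear just that bit. regs[] stands in for config-space access. */
enum hw_flavor { HW_USB4, HW_LEGACY };

#define REG_USB4_TD_OFFSET   0x00        /* illustrative offsets/bits */
#define REG_USB4_TD_BIT      (1u << 27)
#define REG_LEGACY_TD_OFFSET 0x01
#define REG_LEGACY_TD_BIT    (1u << 22)

static void set_time_disruption(uint32_t *regs, enum hw_flavor hw, bool set)
{
        uint32_t offset = (hw == HW_USB4) ? REG_USB4_TD_OFFSET : REG_LEGACY_TD_OFFSET;
        uint32_t bit    = (hw == HW_USB4) ? REG_USB4_TD_BIT : REG_LEGACY_TD_BIT;
        uint32_t val = regs[offset];     /* read */

        if (set)
                val |= bit;              /* modify */
        else
                val &= ~bit;

        regs[offset] = val;              /* write back */
}
```

The key point of the kernel change is that only the offset/bit selection is conditional; the read-modify-write itself is shared.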
+17 -10
drivers/thunderbolt/tunnel.c
··· 207 207 * tb_tunnel_discover_pci() - Discover existing PCIe tunnels 208 208 * @tb: Pointer to the domain structure 209 209 * @down: PCIe downstream adapter 210 + * @alloc_hopid: Allocate HopIDs from visited ports 210 211 * 211 212 * If @down adapter is active, follows the tunnel to the PCIe upstream 212 213 * adapter and back. Returns the discovered tunnel or %NULL if there was 213 214 * no tunnel. 214 215 */ 215 - struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down) 216 + struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down, 217 + bool alloc_hopid) 216 218 { 217 219 struct tb_tunnel *tunnel; 218 220 struct tb_path *path; ··· 235 233 * case. 236 234 */ 237 235 path = tb_path_discover(down, TB_PCI_HOPID, NULL, -1, 238 - &tunnel->dst_port, "PCIe Up"); 236 + &tunnel->dst_port, "PCIe Up", alloc_hopid); 239 237 if (!path) { 240 238 /* Just disable the downstream port */ 241 239 tb_pci_port_enable(down, false); ··· 246 244 goto err_free; 247 245 248 246 path = tb_path_discover(tunnel->dst_port, -1, down, TB_PCI_HOPID, NULL, 249 - "PCIe Down"); 247 + "PCIe Down", alloc_hopid); 250 248 if (!path) 251 249 goto err_deactivate; 252 250 tunnel->paths[TB_PCI_PATH_DOWN] = path; ··· 763 761 * tb_tunnel_discover_dp() - Discover existing Display Port tunnels 764 762 * @tb: Pointer to the domain structure 765 763 * @in: DP in adapter 764 + * @alloc_hopid: Allocate HopIDs from visited ports 766 765 * 767 766 * If @in adapter is active, follows the tunnel to the DP out adapter 768 767 * and back. Returns the discovered tunnel or %NULL if there was no ··· 771 768 * 772 769 * Return: DP tunnel or %NULL if no tunnel found. 
773 770 */ 774 - struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in) 771 + struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in, 772 + bool alloc_hopid) 775 773 { 776 774 struct tb_tunnel *tunnel; 777 775 struct tb_port *port; ··· 791 787 tunnel->src_port = in; 792 788 793 789 path = tb_path_discover(in, TB_DP_VIDEO_HOPID, NULL, -1, 794 - &tunnel->dst_port, "Video"); 790 + &tunnel->dst_port, "Video", alloc_hopid); 795 791 if (!path) { 796 792 /* Just disable the DP IN port */ 797 793 tb_dp_port_enable(in, false); ··· 801 797 if (tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT])) 802 798 goto err_free; 803 799 804 - path = tb_path_discover(in, TB_DP_AUX_TX_HOPID, NULL, -1, NULL, "AUX TX"); 800 + path = tb_path_discover(in, TB_DP_AUX_TX_HOPID, NULL, -1, NULL, "AUX TX", 801 + alloc_hopid); 805 802 if (!path) 806 803 goto err_deactivate; 807 804 tunnel->paths[TB_DP_AUX_PATH_OUT] = path; 808 805 tb_dp_init_aux_path(tunnel->paths[TB_DP_AUX_PATH_OUT]); 809 806 810 807 path = tb_path_discover(tunnel->dst_port, -1, in, TB_DP_AUX_RX_HOPID, 811 - &port, "AUX RX"); 808 + &port, "AUX RX", alloc_hopid); 812 809 if (!path) 813 810 goto err_deactivate; 814 811 tunnel->paths[TB_DP_AUX_PATH_IN] = path; ··· 1348 1343 * tb_tunnel_discover_usb3() - Discover existing USB3 tunnels 1349 1344 * @tb: Pointer to the domain structure 1350 1345 * @down: USB3 downstream adapter 1346 + * @alloc_hopid: Allocate HopIDs from visited ports 1351 1347 * 1352 1348 * If @down adapter is active, follows the tunnel to the USB3 upstream 1353 1349 * adapter and back. Returns the discovered tunnel or %NULL if there was 1354 1350 * no tunnel. 1355 1351 */ 1356 - struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down) 1352 + struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down, 1353 + bool alloc_hopid) 1357 1354 { 1358 1355 struct tb_tunnel *tunnel; 1359 1356 struct tb_path *path; ··· 1376 1369 * case. 
1377 1370 */ 1378 1371 path = tb_path_discover(down, TB_USB3_HOPID, NULL, -1, 1379 - &tunnel->dst_port, "USB3 Down"); 1372 + &tunnel->dst_port, "USB3 Down", alloc_hopid); 1380 1373 if (!path) { 1381 1374 /* Just disable the downstream port */ 1382 1375 tb_usb3_port_enable(down, false); ··· 1386 1379 tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_DOWN]); 1387 1380 1388 1381 path = tb_path_discover(tunnel->dst_port, -1, down, TB_USB3_HOPID, NULL, 1389 - "USB3 Up"); 1382 + "USB3 Up", alloc_hopid); 1390 1383 if (!path) 1391 1384 goto err_deactivate; 1392 1385 tunnel->paths[TB_USB3_PATH_UP] = path;
+6 -3
drivers/thunderbolt/tunnel.h
··· 64 64 int allocated_down; 65 65 }; 66 66 67 - struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down); 67 + struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down, 68 + bool alloc_hopid); 68 69 struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up, 69 70 struct tb_port *down); 70 - struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in); 71 + struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in, 72 + bool alloc_hopid); 71 73 struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in, 72 74 struct tb_port *out, int max_up, 73 75 int max_down); ··· 79 77 int receive_ring); 80 78 bool tb_tunnel_match_dma(const struct tb_tunnel *tunnel, int transmit_path, 81 79 int transmit_ring, int receive_path, int receive_ring); 82 - struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down); 80 + struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down, 81 + bool alloc_hopid); 83 82 struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up, 84 83 struct tb_port *down, int max_up, 85 84 int max_down);
+25 -27
drivers/thunderbolt/usb4.c
··· 50 50 #define USB4_BA_VALUE_MASK GENMASK(31, 16) 51 51 #define USB4_BA_VALUE_SHIFT 16 52 52 53 - static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit, 54 - u32 value, int timeout_msec) 55 - { 56 - ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec); 57 - 58 - do { 59 - u32 val; 60 - int ret; 61 - 62 - ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1); 63 - if (ret) 64 - return ret; 65 - 66 - if ((val & bit) == value) 67 - return 0; 68 - 69 - usleep_range(50, 100); 70 - } while (ktime_before(ktime_get(), timeout)); 71 - 72 - return -ETIMEDOUT; 73 - } 74 - 75 53 static int usb4_native_switch_op(struct tb_switch *sw, u16 opcode, 76 54 u32 *metadata, u8 *status, 77 55 const void *tx_data, size_t tx_dwords, ··· 75 97 if (ret) 76 98 return ret; 77 99 78 - ret = usb4_switch_wait_for_bit(sw, ROUTER_CS_26, ROUTER_CS_26_OV, 0, 500); 100 + ret = tb_switch_wait_for_bit(sw, ROUTER_CS_26, ROUTER_CS_26_OV, 0, 500); 79 101 if (ret) 80 102 return ret; 81 103 ··· 281 303 if (ret) 282 304 return ret; 283 305 284 - return usb4_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_CR, 285 - ROUTER_CS_6_CR, 50); 306 + return tb_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_CR, 307 + ROUTER_CS_6_CR, 50); 286 308 } 287 309 288 310 /** ··· 458 480 if (ret) 459 481 return ret; 460 482 461 - return usb4_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_SLPR, 462 - ROUTER_CS_6_SLPR, 500); 483 + return tb_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_SLPR, 484 + ROUTER_CS_6_SLPR, 500); 463 485 } 464 486 465 487 /** ··· 1362 1384 val = USB4_SB_OPCODE_ENUMERATE_RETIMERS; 1363 1385 return usb4_port_sb_write(port, USB4_SB_TARGET_ROUTER, 0, 1364 1386 USB4_SB_OPCODE, &val, sizeof(val)); 1387 + } 1388 + 1389 + /** 1390 + * usb4_port_clx_supported() - Check if CLx is supported by the link 1391 + * @port: Port to check for CLx support for 1392 + * 1393 + * PORT_CS_18_CPS bit reflects if the link supports CLx including 1394 + * active cables (if connected on the 
link). 1395 + */ 1396 + bool usb4_port_clx_supported(struct tb_port *port) 1397 + { 1398 + int ret; 1399 + u32 val; 1400 + 1401 + ret = tb_port_read(port, &val, TB_CFG_PORT, 1402 + port->cap_usb4 + PORT_CS_18, 1); 1403 + if (ret) 1404 + return false; 1405 + 1406 + return !!(val & PORT_CS_18_CPS); 1365 1407 } 1366 1408 1367 1409 static inline int usb4_port_retimer_op(struct tb_port *port, u8 index,
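The usb4.c hunk above drops the file-local `usb4_switch_wait_for_bit()` in favor of a shared `tb_switch_wait_for_bit()` helper. The general poll-until-bit-matches-or-timeout idiom can be sketched in plain C; the kernel version bounds the loop with `ktime` and sleeps 50–100 µs between reads, while this sketch uses a deterministic retry budget and a callback so it stays testable (all names here are illustrative, not kernel APIs):

```c
#include <stddef.h>
#include <stdint.h>

#define SKETCH_ETIMEDOUT 110     /* stand-in for -ETIMEDOUT */

typedef uint32_t (*read_reg_fn)(void *ctx);

/* Poll a register until (val & bit) == value or the attempt budget runs
 * out. The real helper sleeps between reads; see the comment below. */
static int wait_for_bit(read_reg_fn read_reg, void *ctx,
                        uint32_t bit, uint32_t value, int max_tries)
{
        int i;

        for (i = 0; i < max_tries; i++) {
                uint32_t val = read_reg(ctx);

                if ((val & bit) == value)
                        return 0;
                /* kernel version: usleep_range(50, 100); */
        }
        return -SKETCH_ETIMEDOUT;
}

/* Tiny fake register for demonstration: reads 0 twice, then the bit. */
static int fake_calls;
static uint32_t fake_read(void *ctx)
{
        (void)ctx;
        return (++fake_calls >= 3) ? 0x4u : 0x0u;
}
```

Factoring the loop out of usb4.c lets non-USB4 paths (like the TMU code above) reuse the same bounded wait instead of open-coding it.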
+6 -10
drivers/thunderbolt/xdomain.c
··· 214 214 memcpy(&hdr->uuid, &tb_xdp_uuid, sizeof(tb_xdp_uuid)); 215 215 } 216 216 217 - static int tb_xdp_handle_error(const struct tb_xdp_header *hdr) 217 + static int tb_xdp_handle_error(const struct tb_xdp_error_response *res) 218 218 { 219 - const struct tb_xdp_error_response *error; 220 - 221 - if (hdr->type != ERROR_RESPONSE) 219 + if (res->hdr.type != ERROR_RESPONSE) 222 220 return 0; 223 221 224 - error = (const struct tb_xdp_error_response *)hdr; 225 - 226 - switch (error->error) { 222 + switch (res->error) { 227 223 case ERROR_UNKNOWN_PACKET: 228 224 case ERROR_UNKNOWN_DOMAIN: 229 225 return -EIO; ··· 253 257 if (ret) 254 258 return ret; 255 259 256 - ret = tb_xdp_handle_error(&res.hdr); 260 + ret = tb_xdp_handle_error(&res.err); 257 261 if (ret) 258 262 return ret; 259 263 ··· 325 329 if (ret) 326 330 goto err; 327 331 328 - ret = tb_xdp_handle_error(&res->hdr); 332 + ret = tb_xdp_handle_error(&res->err); 329 333 if (ret) 330 334 goto err; 331 335 ··· 458 462 if (ret) 459 463 return ret; 460 464 461 - return tb_xdp_handle_error(&res.hdr); 465 + return tb_xdp_handle_error(&res.err); 462 466 } 463 467 464 468 static int
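The xdomain.c hunk above changes `tb_xdp_handle_error()` to take the error-response struct directly, removing the cast from the generic header. The shape of that error-to-errno mapping can be sketched standalone; the struct layout, type value, and the `-EPROTO` fallback below are illustrative assumptions, only the two `-EIO` cases are visible in the diff:

```c
#include <errno.h>
#include <stdint.h>

enum { ERROR_RESPONSE = 3 };    /* illustrative packet-type value */
enum { ERROR_UNKNOWN_PACKET = 0, ERROR_UNKNOWN_DOMAIN = 1 };

struct xdp_header { uint32_t type; };
struct xdp_error_response {
        struct xdp_header hdr;
        uint32_t error;
};

/* Only packets whose header marks them as an error response are mapped;
 * anything else means "no error" so the caller keeps going. */
static int handle_error(const struct xdp_error_response *res)
{
        if (res->hdr.type != ERROR_RESPONSE)
                return 0;

        switch (res->error) {
        case ERROR_UNKNOWN_PACKET:
        case ERROR_UNKNOWN_DOMAIN:
                return -EIO;
        default:
                return -EPROTO;  /* sketch fallback, not from the diff */
        }
}
```

Passing the concrete response type also lets callers hand in `&res.err` directly, which is exactly what the call-site changes in the hunk do.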
+8 -6
drivers/usb/cdns3/cdns3-plat.c
··· 13 13 */ 14 14 15 15 #include <linux/module.h> 16 + #include <linux/irq.h> 16 17 #include <linux/kernel.h> 17 18 #include <linux/platform_device.h> 18 19 #include <linux/pm_runtime.h> ··· 66 65 67 66 platform_set_drvdata(pdev, cdns); 68 67 69 - res = platform_get_resource_byname(pdev, IORESOURCE_IRQ, "host"); 70 - if (!res) { 71 - dev_err(dev, "missing host IRQ\n"); 72 - return -ENODEV; 73 - } 68 + ret = platform_get_irq_byname(pdev, "host"); 69 + if (ret < 0) 70 + return ret; 74 71 75 - cdns->xhci_res[0] = *res; 72 + cdns->xhci_res[0].start = ret; 73 + cdns->xhci_res[0].end = ret; 74 + cdns->xhci_res[0].flags = IORESOURCE_IRQ | irq_get_trigger_type(ret); 75 + cdns->xhci_res[0].name = "host"; 76 76 77 77 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "xhci"); 78 78 if (!res) {
+1 -1
drivers/usb/cdns3/cdnsp-gadget.c
··· 81 81 offset = HCC_EXT_CAPS(val) << 2; 82 82 if (!offset) 83 83 return 0; 84 - }; 84 + } 85 85 86 86 do { 87 87 val = readl(base + offset);
+3 -3
drivers/usb/cdns3/core.h
··· 8 8 * Authors: Peter Chen <peter.chen@nxp.com> 9 9 * Pawel Laszczak <pawell@cadence.com> 10 10 */ 11 - #include <linux/usb/otg.h> 12 - #include <linux/usb/role.h> 13 - 14 11 #ifndef __LINUX_CDNS3_CORE_H 15 12 #define __LINUX_CDNS3_CORE_H 13 + 14 + #include <linux/usb/otg.h> 15 + #include <linux/usb/role.h> 16 16 17 17 struct cdns; 18 18
+1
drivers/usb/chipidea/core.c
··· 864 864 } 865 865 866 866 pdev->dev.parent = dev; 867 + device_set_of_node_from_dev(&pdev->dev, dev); 867 868 868 869 ret = platform_device_add_resources(pdev, res, nres); 869 870 if (ret)
+2 -3
drivers/usb/chipidea/otg.c
··· 255 255 */ 256 256 void ci_hdrc_otg_destroy(struct ci_hdrc *ci) 257 257 { 258 - if (ci->wq) { 259 - flush_workqueue(ci->wq); 258 + if (ci->wq) 260 259 destroy_workqueue(ci->wq); 261 - } 260 + 262 261 /* Disable all OTG irq and clear status */ 263 262 hw_write_otgsc(ci, OTGSC_INT_EN_BITS | OTGSC_INT_STATUS_BITS, 264 263 OTGSC_INT_STATUS_BITS);
+1
drivers/usb/common/debug.c
··· 8 8 * Sebastian Andrzej Siewior <bigeasy@linutronix.de> 9 9 */ 10 10 11 + #include <linux/kernel.h> 11 12 #include <linux/usb/ch9.h> 12 13 13 14 static void usb_decode_get_status(__u8 bRequestType, __u16 wIndex,
+2 -1
drivers/usb/core/driver.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * drivers/usb/driver.c - most of the driver model stuff for usb 3 + * drivers/usb/core/driver.c - most of the driver model stuff for usb 4 4 * 5 5 * (C) Copyright 2005 Greg Kroah-Hartman <gregkh@suse.de> 6 6 * ··· 834 834 835 835 return NULL; 836 836 } 837 + EXPORT_SYMBOL_GPL(usb_device_match_id); 837 838 838 839 bool usb_driver_applicable(struct usb_device *udev, 839 840 struct usb_device_driver *udrv)
+1 -1
drivers/usb/core/generic.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * drivers/usb/generic.c - generic driver for USB devices (not interfaces) 3 + * drivers/usb/core/generic.c - generic driver for USB devices (not interfaces) 4 4 * 5 5 * (C) Copyright 2005 Greg Kroah-Hartman <gregkh@suse.de> 6 6 *
+9 -2
drivers/usb/core/hcd.c
··· 753 753 { 754 754 struct urb *urb; 755 755 int length; 756 + int status; 756 757 unsigned long flags; 757 758 char buffer[6]; /* Any root hubs with > 31 ports? */ 758 759 ··· 771 770 if (urb) { 772 771 clear_bit(HCD_FLAG_POLL_PENDING, &hcd->flags); 773 772 hcd->status_urb = NULL; 773 + if (urb->transfer_buffer_length >= length) { 774 + status = 0; 775 + } else { 776 + status = -EOVERFLOW; 777 + length = urb->transfer_buffer_length; 778 + } 774 779 urb->actual_length = length; 775 780 memcpy(urb->transfer_buffer, buffer, length); 776 781 777 782 usb_hcd_unlink_urb_from_ep(hcd, urb); 778 - usb_hcd_giveback_urb(hcd, urb, 0); 783 + usb_hcd_giveback_urb(hcd, urb, status); 779 784 } else { 780 785 length = 0; 781 786 set_bit(HCD_FLAG_POLL_PENDING, &hcd->flags); ··· 1288 1281 return -EFAULT; 1289 1282 } 1290 1283 1291 - vaddr = hcd_buffer_alloc(bus, size + sizeof(vaddr), 1284 + vaddr = hcd_buffer_alloc(bus, size + sizeof(unsigned long), 1292 1285 mem_flags, dma_handle); 1293 1286 if (!vaddr) 1294 1287 return -ENOMEM;
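The hcd.c hunk above is the fix for the syzbot-reported "slab-out-of-bounds Write" in `usb_hcd_poll_rh_status()`: before copying root-hub status bytes into the URB's transfer buffer, the length is clamped to the buffer size and the URB completes with `-EOVERFLOW` instead of writing past the end. A minimal sketch of that bounds check, with types simplified for illustration:

```c
#include <errno.h>
#include <string.h>

/* Clamp the copy to the destination size; report overflow rather than
 * writing out of bounds. 'actual' mirrors urb->actual_length. */
static int copy_rh_status(char *dst, size_t dst_len,
                          const char *src, size_t src_len,
                          size_t *actual)
{
        int status = 0;

        if (src_len > dst_len) {
                status = -EOVERFLOW;
                src_len = dst_len;      /* copy only what fits */
        }
        memcpy(dst, src, src_len);
        *actual = src_len;
        return status;
}
```

The caller still gets the truncated data, but the non-zero completion status tells it the root-hub bitmap did not fit.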
+27 -10
drivers/usb/core/hub.c
··· 1110 1110 } else { 1111 1111 hub_power_on(hub, true); 1112 1112 } 1113 - } 1113 + /* Give some time on remote wakeup to let links to transit to U0 */ 1114 + } else if (hub_is_superspeed(hub->hdev)) 1115 + msleep(20); 1116 + 1114 1117 init2: 1115 1118 1116 1119 /* ··· 1228 1225 */ 1229 1226 if (portchange || (hub_is_superspeed(hub->hdev) && 1230 1227 port_resumed)) 1231 - set_bit(port1, hub->change_bits); 1228 + set_bit(port1, hub->event_bits); 1232 1229 1233 1230 } else if (udev->persist_enabled) { 1234 1231 #ifdef CONFIG_PM ··· 2780 2777 #define PORT_INIT_TRIES 4 2781 2778 #endif /* CONFIG_USB_FEW_INIT_RETRIES */ 2782 2779 2780 + #define DETECT_DISCONNECT_TRIES 5 2781 + 2783 2782 #define HUB_ROOT_RESET_TIME 60 /* times are in msec */ 2784 2783 #define HUB_SHORT_RESET_TIME 10 2785 2784 #define HUB_BH_RESET_TIME 50 ··· 3575 3570 * This routine should only be called when persist is enabled. 3576 3571 */ 3577 3572 static int wait_for_connected(struct usb_device *udev, 3578 - struct usb_hub *hub, int *port1, 3573 + struct usb_hub *hub, int port1, 3579 3574 u16 *portchange, u16 *portstatus) 3580 3575 { 3581 3576 int status = 0, delay_ms = 0; ··· 3589 3584 } 3590 3585 msleep(20); 3591 3586 delay_ms += 20; 3592 - status = hub_port_status(hub, *port1, portstatus, portchange); 3587 + status = hub_port_status(hub, port1, portstatus, portchange); 3593 3588 } 3594 3589 dev_dbg(&udev->dev, "Waited %dms for CONNECT\n", delay_ms); 3595 3590 return status; ··· 3695 3690 } 3696 3691 3697 3692 if (udev->persist_enabled) 3698 - status = wait_for_connected(udev, hub, &port1, &portchange, 3693 + status = wait_for_connected(udev, hub, port1, &portchange, 3699 3694 &portstatus); 3700 3695 3701 3696 status = check_port_resume_type(udev, ··· 5548 5543 struct usb_device *udev = port_dev->child; 5549 5544 struct usb_device *hdev = hub->hdev; 5550 5545 u16 portstatus, portchange; 5546 + int i = 0; 5551 5547 5552 5548 connect_change = test_bit(port1, hub->change_bits); 5553 5549 
clear_bit(port1, hub->event_bits); ··· 5625 5619 connect_change = 1; 5626 5620 5627 5621 /* 5628 - * Warm reset a USB3 protocol port if it's in 5629 - * SS.Inactive state. 5622 + * Avoid trying to recover a USB3 SS.Inactive port with a warm reset if 5623 + * the device was disconnected. A 12ms disconnect detect timer in 5624 + * SS.Inactive state transitions the port to RxDetect automatically. 5625 + * SS.Inactive link error state is common during device disconnect. 5630 5626 */ 5631 - if (hub_port_warm_reset_required(hub, port1, portstatus)) { 5632 - dev_dbg(&port_dev->dev, "do warm reset\n"); 5633 - if (!udev || !(portstatus & USB_PORT_STAT_CONNECTION) 5627 + while (hub_port_warm_reset_required(hub, port1, portstatus)) { 5628 + if ((i++ < DETECT_DISCONNECT_TRIES) && udev) { 5629 + u16 unused; 5630 + 5631 + msleep(20); 5632 + hub_port_status(hub, port1, &portstatus, &unused); 5633 + dev_dbg(&port_dev->dev, "Wait for inactive link disconnect detect\n"); 5634 + continue; 5635 + } else if (!udev || !(portstatus & USB_PORT_STAT_CONNECTION) 5634 5636 || udev->state == USB_STATE_NOTATTACHED) { 5637 + dev_dbg(&port_dev->dev, "do warm reset, port only\n"); 5635 5638 if (hub_port_reset(hub, port1, NULL, 5636 5639 HUB_BH_RESET_TIME, true) < 0) 5637 5640 hub_port_disable(hub, port1, 1); 5638 5641 } else { 5642 + dev_dbg(&port_dev->dev, "do warm reset, full device\n"); 5639 5643 usb_unlock_port(port_dev); 5640 5644 usb_lock_device(udev); 5641 5645 usb_reset_device(udev); ··· 5653 5637 usb_lock_port(port_dev); 5654 5638 connect_change = 0; 5655 5639 } 5640 + break; 5656 5641 } 5657 5642 5658 5643 if (connect_change)
+32
drivers/usb/core/port.c
··· 9 9 10 10 #include <linux/slab.h> 11 11 #include <linux/pm_qos.h> 12 + #include <linux/component.h> 12 13 13 14 #include "hub.h" 14 15 ··· 529 528 link_peers_report(port_dev, peer); 530 529 } 531 530 531 + static int connector_bind(struct device *dev, struct device *connector, void *data) 532 + { 533 + int ret; 534 + 535 + ret = sysfs_create_link(&dev->kobj, &connector->kobj, "connector"); 536 + if (ret) 537 + return ret; 538 + 539 + ret = sysfs_create_link(&connector->kobj, &dev->kobj, dev_name(dev)); 540 + if (ret) 541 + sysfs_remove_link(&dev->kobj, "connector"); 542 + 543 + return ret; 544 + } 545 + 546 + static void connector_unbind(struct device *dev, struct device *connector, void *data) 547 + { 548 + sysfs_remove_link(&connector->kobj, dev_name(dev)); 549 + sysfs_remove_link(&dev->kobj, "connector"); 550 + } 551 + 552 + static const struct component_ops connector_ops = { 553 + .bind = connector_bind, 554 + .unbind = connector_unbind, 555 + }; 556 + 532 557 int usb_hub_create_port_device(struct usb_hub *hub, int port1) 533 558 { 534 559 struct usb_port *port_dev; ··· 604 577 605 578 find_and_link_peer(hub, port1); 606 579 580 + retval = component_add(&port_dev->dev, &connector_ops); 581 + if (retval) 582 + dev_warn(&port_dev->dev, "failed to add component\n"); 583 + 607 584 /* 608 585 * Enable runtime pm and hold a refernce that hub_configure() 609 586 * will drop once the PM_QOS_NO_POWER_OFF flag state has been set ··· 650 619 peer = port_dev->peer; 651 620 if (peer) 652 621 unlink_peers(port_dev, peer); 622 + component_del(&port_dev->dev, &connector_ops); 653 623 device_unregister(&port_dev->dev); 654 624 }
-46
drivers/usb/core/usb.c
··· 398 398 } 399 399 EXPORT_SYMBOL_GPL(usb_for_each_dev); 400 400 401 - struct each_hub_arg { 402 - void *data; 403 - int (*fn)(struct device *, void *); 404 - }; 405 - 406 - static int __each_hub(struct usb_device *hdev, void *data) 407 - { 408 - struct each_hub_arg *arg = (struct each_hub_arg *)data; 409 - struct usb_hub *hub; 410 - int ret = 0; 411 - int i; 412 - 413 - hub = usb_hub_to_struct_hub(hdev); 414 - if (!hub) 415 - return 0; 416 - 417 - mutex_lock(&usb_port_peer_mutex); 418 - 419 - for (i = 0; i < hdev->maxchild; i++) { 420 - ret = arg->fn(&hub->ports[i]->dev, arg->data); 421 - if (ret) 422 - break; 423 - } 424 - 425 - mutex_unlock(&usb_port_peer_mutex); 426 - 427 - return ret; 428 - } 429 - 430 - /** 431 - * usb_for_each_port - interate over all USB ports in the system 432 - * @data: data pointer that will be handed to the callback function 433 - * @fn: callback function to be called for each USB port 434 - * 435 - * Iterate over all USB ports and call @fn for each, passing it @data. If it 436 - * returns anything other than 0, we break the iteration prematurely and return 437 - * that value. 438 - */ 439 - int usb_for_each_port(void *data, int (*fn)(struct device *, void *)) 440 - { 441 - struct each_hub_arg arg = {data, fn}; 442 - 443 - return usb_for_each_dev(&arg, __each_hub); 444 - } 445 - EXPORT_SYMBOL_GPL(usb_for_each_port); 446 - 447 401 /** 448 402 * usb_release_dev - free a usb device structure when all users of it are finished. 449 403 * @dev: device that's been disconnected
+4 -2
drivers/usb/dwc2/core.h
··· 869 869 * - USB_DR_MODE_HOST 870 870 * - USB_DR_MODE_OTG 871 871 * @role_sw: usb_role_switch handle 872 + * @role_sw_default_mode: default operation mode of controller while usb role 873 + * is USB_ROLE_NONE 872 874 * @hcd_enabled: Host mode sub-driver initialization indicator. 873 875 * @gadget_enabled: Peripheral mode sub-driver initialization indicator. 874 876 * @ll_hw_enabled: Status of low-level hardware resources. ··· 1067 1065 enum usb_otg_state op_state; 1068 1066 enum usb_dr_mode dr_mode; 1069 1067 struct usb_role_switch *role_sw; 1068 + enum usb_dr_mode role_sw_default_mode; 1070 1069 unsigned int hcd_enabled:1; 1071 1070 unsigned int gadget_enabled:1; 1072 1071 unsigned int ll_hw_enabled:1; ··· 1154 1151 struct list_head periodic_sched_queued; 1155 1152 struct list_head split_order; 1156 1153 u16 periodic_usecs; 1157 - unsigned long hs_periodic_bitmap[ 1158 - DIV_ROUND_UP(DWC2_HS_SCHEDULE_US, BITS_PER_LONG)]; 1154 + DECLARE_BITMAP(hs_periodic_bitmap, DWC2_HS_SCHEDULE_US); 1159 1155 u16 periodic_qh_count; 1160 1156 bool new_connection; 1161 1157
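The dwc2 core.h hunk above replaces an open-coded `unsigned long hs_periodic_bitmap[DIV_ROUND_UP(...)]` with `DECLARE_BITMAP(hs_periodic_bitmap, DWC2_HS_SCHEDULE_US)`, which expands to the same array but states the size in bits. A userspace analogue of that macro and the usual bit helpers (local sketches, not the kernel's implementations):

```c
#include <limits.h>
#include <string.h>

/* Userspace analogue of the kernel's DECLARE_BITMAP(): a bitmap of
 * 'bits' bits stored as an array of unsigned long, rounded up. */
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define DECLARE_BITMAP(name, bits) \
        unsigned long name[DIV_ROUND_UP(bits, BITS_PER_LONG)]

static void bitmap_set_bit(unsigned long *map, unsigned int bit)
{
        map[bit / BITS_PER_LONG] |= 1UL << (bit % BITS_PER_LONG);
}

static int bitmap_test_bit(const unsigned long *map, unsigned int bit)
{
        return !!(map[bit / BITS_PER_LONG] & (1UL << (bit % BITS_PER_LONG)));
}
```

The behavior is identical; the macro just makes the declared capacity (in bits) explicit at the declaration site.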
+49 -2
drivers/usb/dwc2/drd.c
··· 13 13 #include <linux/usb/role.h> 14 14 #include "core.h" 15 15 16 + #define dwc2_ovr_gotgctl(gotgctl) \ 17 + ((gotgctl) |= GOTGCTL_BVALOEN | GOTGCTL_AVALOEN | GOTGCTL_VBVALOEN | \ 18 + GOTGCTL_DBNCE_FLTR_BYPASS) 19 + 16 20 static void dwc2_ovr_init(struct dwc2_hsotg *hsotg) 17 21 { 18 22 unsigned long flags; ··· 25 21 spin_lock_irqsave(&hsotg->lock, flags); 26 22 27 23 gotgctl = dwc2_readl(hsotg, GOTGCTL); 28 - gotgctl |= GOTGCTL_BVALOEN | GOTGCTL_AVALOEN | GOTGCTL_VBVALOEN; 29 - gotgctl |= GOTGCTL_DBNCE_FLTR_BYPASS; 24 + dwc2_ovr_gotgctl(gotgctl); 30 25 gotgctl &= ~(GOTGCTL_BVALOVAL | GOTGCTL_AVALOVAL | GOTGCTL_VBVALOVAL); 26 + if (hsotg->role_sw_default_mode == USB_DR_MODE_HOST) 27 + gotgctl |= GOTGCTL_AVALOVAL | GOTGCTL_VBVALOVAL; 28 + else if (hsotg->role_sw_default_mode == USB_DR_MODE_PERIPHERAL) 29 + gotgctl |= GOTGCTL_BVALOVAL | GOTGCTL_VBVALOVAL; 31 30 dwc2_writel(hsotg, gotgctl, GOTGCTL); 32 31 33 32 spin_unlock_irqrestore(&hsotg->lock, flags); ··· 46 39 if ((valid && (gotgctl & GOTGCTL_ASESVLD)) || 47 40 (!valid && !(gotgctl & GOTGCTL_ASESVLD))) 48 41 return -EALREADY; 42 + 43 + /* Always enable overrides to handle the resume case */ 44 + dwc2_ovr_gotgctl(gotgctl); 49 45 50 46 gotgctl &= ~GOTGCTL_BVALOVAL; 51 47 if (valid) ··· 68 58 if ((valid && (gotgctl & GOTGCTL_BSESVLD)) || 69 59 (!valid && !(gotgctl & GOTGCTL_BSESVLD))) 70 60 return -EALREADY; 61 + 62 + /* Always enable overrides to handle the resume case */ 63 + dwc2_ovr_gotgctl(gotgctl); 71 64 72 65 gotgctl &= ~GOTGCTL_AVALOVAL; 73 66 if (valid) ··· 118 105 119 106 spin_lock_irqsave(&hsotg->lock, flags); 120 107 108 + if (role == USB_ROLE_NONE) { 109 + /* default operation mode when usb role is USB_ROLE_NONE */ 110 + if (hsotg->role_sw_default_mode == USB_DR_MODE_HOST) 111 + role = USB_ROLE_HOST; 112 + else if (hsotg->role_sw_default_mode == USB_DR_MODE_PERIPHERAL) 113 + role = USB_ROLE_DEVICE; 114 + } 115 + 121 116 if (role == USB_ROLE_HOST) { 122 117 already = dwc2_ovr_avalid(hsotg, true); 
123 118 } else if (role == USB_ROLE_DEVICE) { ··· 167 146 if (!device_property_read_bool(hsotg->dev, "usb-role-switch")) 168 147 return 0; 169 148 149 + hsotg->role_sw_default_mode = usb_get_role_switch_default_mode(hsotg->dev); 170 150 role_sw_desc.driver_data = hsotg; 171 151 role_sw_desc.fwnode = dev_fwnode(hsotg->dev); 172 152 role_sw_desc.set = dwc2_drd_role_sw_set; ··· 205 183 void dwc2_drd_resume(struct dwc2_hsotg *hsotg) 206 184 { 207 185 u32 gintsts, gintmsk; 186 + enum usb_role role; 187 + 188 + if (hsotg->role_sw) { 189 + /* get last known role (as the get ops isn't implemented by this driver) */ 190 + role = usb_role_switch_get_role(hsotg->role_sw); 191 + 192 + if (role == USB_ROLE_NONE) { 193 + if (hsotg->role_sw_default_mode == USB_DR_MODE_HOST) 194 + role = USB_ROLE_HOST; 195 + else if (hsotg->role_sw_default_mode == USB_DR_MODE_PERIPHERAL) 196 + role = USB_ROLE_DEVICE; 197 + } 198 + 199 + /* restore last role that may have been lost */ 200 + if (role == USB_ROLE_HOST) 201 + dwc2_ovr_avalid(hsotg, true); 202 + else if (role == USB_ROLE_DEVICE) 203 + dwc2_ovr_bvalid(hsotg, true); 204 + 205 + dwc2_force_mode(hsotg, role == USB_ROLE_HOST); 206 + 207 + dev_dbg(hsotg->dev, "resuming %s-session valid\n", 208 + role == USB_ROLE_NONE ? "No" : 209 + role == USB_ROLE_HOST ? "A" : "B"); 210 + } 208 211 209 212 if (hsotg->role_sw && !hsotg->params.external_id_pin_ctl) { 210 213 gintsts = dwc2_readl(hsotg, GINTSTS);
+14 -3
drivers/usb/dwc2/gadget.c
··· 4974 4974 hsotg->params.g_np_tx_fifo_size); 4975 4975 dev_dbg(dev, "RXFIFO size: %d\n", hsotg->params.g_rx_fifo_size); 4976 4976 4977 - hsotg->gadget.max_speed = USB_SPEED_HIGH; 4977 + switch (hsotg->params.speed) { 4978 + case DWC2_SPEED_PARAM_LOW: 4979 + hsotg->gadget.max_speed = USB_SPEED_LOW; 4980 + break; 4981 + case DWC2_SPEED_PARAM_FULL: 4982 + hsotg->gadget.max_speed = USB_SPEED_FULL; 4983 + break; 4984 + default: 4985 + hsotg->gadget.max_speed = USB_SPEED_HIGH; 4986 + break; 4987 + } 4988 + 4978 4989 hsotg->gadget.ops = &dwc2_hsotg_gadget_ops; 4979 4990 hsotg->gadget.name = dev_name(dev); 4980 4991 hsotg->gadget.otg_caps = &hsotg->params.otg_caps; ··· 5228 5217 * as result BNA interrupt asserted on hibernation exit 5229 5218 * by restoring from saved area. 5230 5219 */ 5231 - if (hsotg->params.g_dma_desc && 5220 + if (using_desc_dma(hsotg) && 5232 5221 (dr->diepctl[i] & DXEPCTL_EPENA)) 5233 5222 dr->diepdma[i] = hsotg->eps_in[i]->desc_list_dma; 5234 5223 dwc2_writel(hsotg, dr->dtxfsiz[i], DPTXFSIZN(i)); ··· 5240 5229 * as result BNA interrupt asserted on hibernation exit 5241 5230 * by restoring from saved area. 5242 5231 */ 5243 - if (hsotg->params.g_dma_desc && 5232 + if (using_desc_dma(hsotg) && 5244 5233 (dr->doepctl[i] & DXEPCTL_EPENA)) 5245 5234 dr->doepdma[i] = hsotg->eps_out[i]->desc_list_dma; 5246 5235 dwc2_writel(hsotg, dr->doepdma[i], DOEPDMA(i));
+4 -3
drivers/usb/dwc2/hcd.c
··· 4399 4399 * If not hibernation nor partial power down are supported, 4400 4400 * clock gating is used to save power. 4401 4401 */ 4402 - if (!hsotg->params.no_clock_gating) 4402 + if (!hsotg->params.no_clock_gating) { 4403 4403 dwc2_host_enter_clock_gating(hsotg); 4404 4404 4405 - /* After entering suspend, hardware is not accessible */ 4406 - clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags); 4405 + /* After entering suspend, hardware is not accessible */ 4406 + clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags); 4407 + } 4407 4408 break; 4408 4409 default: 4409 4410 goto skip_power_saving;
+22 -41
drivers/usb/dwc2/platform.c
··· 222 222 int i, ret; 223 223 224 224 hsotg->reset = devm_reset_control_get_optional(hsotg->dev, "dwc2"); 225 - if (IS_ERR(hsotg->reset)) { 226 - ret = PTR_ERR(hsotg->reset); 227 - dev_err(hsotg->dev, "error getting reset control %d\n", ret); 228 - return ret; 229 - } 225 + if (IS_ERR(hsotg->reset)) 226 + return dev_err_probe(hsotg->dev, PTR_ERR(hsotg->reset), 227 + "error getting reset control\n"); 230 228 231 229 reset_control_deassert(hsotg->reset); 232 230 233 231 hsotg->reset_ecc = devm_reset_control_get_optional(hsotg->dev, "dwc2-ecc"); 234 - if (IS_ERR(hsotg->reset_ecc)) { 235 - ret = PTR_ERR(hsotg->reset_ecc); 236 - dev_err(hsotg->dev, "error getting reset control for ecc %d\n", ret); 237 - return ret; 238 - } 232 + if (IS_ERR(hsotg->reset_ecc)) 233 + return dev_err_probe(hsotg->dev, PTR_ERR(hsotg->reset_ecc), 234 + "error getting reset control for ecc\n"); 239 235 240 236 reset_control_deassert(hsotg->reset_ecc); 241 237 ··· 247 251 case -ENOSYS: 248 252 hsotg->phy = NULL; 249 253 break; 250 - case -EPROBE_DEFER: 251 - return ret; 252 254 default: 253 - dev_err(hsotg->dev, "error getting phy %d\n", ret); 254 - return ret; 255 + return dev_err_probe(hsotg->dev, ret, "error getting phy\n"); 255 256 } 256 257 } 257 258 ··· 261 268 case -ENXIO: 262 269 hsotg->uphy = NULL; 263 270 break; 264 - case -EPROBE_DEFER: 265 - return ret; 266 271 default: 267 - dev_err(hsotg->dev, "error getting usb phy %d\n", 268 - ret); 269 - return ret; 272 + return dev_err_probe(hsotg->dev, ret, "error getting usb phy\n"); 270 273 } 271 274 } 272 275 } ··· 271 282 272 283 /* Clock */ 273 284 hsotg->clk = devm_clk_get_optional(hsotg->dev, "otg"); 274 - if (IS_ERR(hsotg->clk)) { 275 - dev_err(hsotg->dev, "cannot get otg clock\n"); 276 - return PTR_ERR(hsotg->clk); 277 - } 285 + if (IS_ERR(hsotg->clk)) 286 + return dev_err_probe(hsotg->dev, PTR_ERR(hsotg->clk), "cannot get otg clock\n"); 278 287 279 288 /* Regulators */ 280 289 for (i = 0; i < ARRAY_SIZE(hsotg->supplies); i++) ··· 
280 293 281 294 ret = devm_regulator_bulk_get(hsotg->dev, ARRAY_SIZE(hsotg->supplies), 282 295 hsotg->supplies); 283 - if (ret) { 284 - if (ret != -EPROBE_DEFER) 285 - dev_err(hsotg->dev, "failed to request supplies: %d\n", 286 - ret); 287 - return ret; 288 - } 296 + if (ret) 297 + return dev_err_probe(hsotg->dev, ret, "failed to request supplies\n"); 298 + 289 299 return 0; 290 300 } 291 301 ··· 542 558 hsotg->usb33d = devm_regulator_get(hsotg->dev, "usb33d"); 543 559 if (IS_ERR(hsotg->usb33d)) { 544 560 retval = PTR_ERR(hsotg->usb33d); 545 - if (retval != -EPROBE_DEFER) 546 - dev_err(hsotg->dev, 547 - "failed to request usb33d supply: %d\n", 548 - retval); 561 + dev_err_probe(hsotg->dev, retval, "failed to request usb33d supply\n"); 549 562 goto error; 550 563 } 551 564 retval = regulator_enable(hsotg->usb33d); 552 565 if (retval) { 553 - dev_err(hsotg->dev, 554 - "failed to enable usb33d supply: %d\n", retval); 566 + dev_err_probe(hsotg->dev, retval, "failed to enable usb33d supply\n"); 555 567 goto error; 556 568 } 557 569 ··· 562 582 563 583 retval = dwc2_drd_init(hsotg); 564 584 if (retval) { 565 - if (retval != -EPROBE_DEFER) 566 - dev_err(hsotg->dev, "failed to initialize dual-role\n"); 585 + dev_err_probe(hsotg->dev, retval, "failed to initialize dual-role\n"); 567 586 goto error_init; 568 587 } 569 588 ··· 730 751 spin_unlock_irqrestore(&dwc2->lock, flags); 731 752 } 732 753 733 - /* Need to restore FORCEDEVMODE/FORCEHOSTMODE */ 734 - dwc2_force_dr_mode(dwc2); 735 - 736 - dwc2_drd_resume(dwc2); 754 + if (!dwc2->role_sw) { 755 + /* Need to restore FORCEDEVMODE/FORCEHOSTMODE */ 756 + dwc2_force_dr_mode(dwc2); 757 + } else { 758 + dwc2_drd_resume(dwc2); 759 + } 737 760 738 761 if (dwc2_is_device_mode(dwc2)) 739 762 ret = dwc2_hsotg_resume(dwc2);
+9
drivers/usb/dwc3/core.h
··· 153 153 #define DWC3_DGCMDPAR 0xc710 154 154 #define DWC3_DGCMD 0xc714 155 155 #define DWC3_DALEPENA 0xc720 156 + #define DWC3_DCFG1 0xc740 /* DWC_usb32 only */ 156 157 157 158 #define DWC3_DEP_BASE(n) (0xc800 + ((n) * 0x10)) 158 159 #define DWC3_DEPCMDPAR2 0x00 ··· 383 382 384 383 /* Global HWPARAMS9 Register */ 385 384 #define DWC3_GHWPARAMS9_DEV_TXF_FLUSH_BYPASS BIT(0) 385 + #define DWC3_GHWPARAMS9_DEV_MST BIT(1) 386 386 387 387 /* Global Frame Length Adjustment Register */ 388 388 #define DWC3_GFLADJ_30MHZ_SDBND_SEL BIT(7) ··· 559 557 560 558 /* The EP number goes 0..31 so ep0 is always out and ep1 is always in */ 561 559 #define DWC3_DALEPENA_EP(n) BIT(n) 560 + 561 + /* DWC_usb32 DCFG1 config */ 562 + #define DWC3_DCFG1_DIS_MST_ENH BIT(1) 562 563 563 564 #define DWC3_DEPCMD_TYPE_CONTROL 0 564 565 #define DWC3_DEPCMD_TYPE_ISOC 1 ··· 892 887 893 888 /* HWPARAMS7 */ 894 889 #define DWC3_RAM1_DEPTH(n) ((n) & 0xffff) 890 + 891 + /* HWPARAMS9 */ 892 + #define DWC3_MST_CAPABLE(p) (!!((p)->hwparams9 & \ 893 + DWC3_GHWPARAMS9_DEV_MST)) 895 894 896 895 /** 897 896 * struct dwc3_request - representation of a transfer request
+12 -5
drivers/usb/dwc3/dwc3-meson-g12a.c
··· 755 755 756 756 ret = dwc3_meson_g12a_get_phys(priv); 757 757 if (ret) 758 - goto err_disable_clks; 758 + goto err_rearm; 759 759 760 760 ret = priv->drvdata->setup_regmaps(priv, base); 761 761 if (ret) 762 - goto err_disable_clks; 762 + goto err_rearm; 763 763 764 764 if (priv->vbus) { 765 765 ret = regulator_enable(priv->vbus); 766 766 if (ret) 767 - goto err_disable_clks; 767 + goto err_rearm; 768 768 } 769 769 770 770 /* Get dr_mode */ ··· 825 825 if (priv->vbus) 826 826 regulator_disable(priv->vbus); 827 827 828 + err_rearm: 829 + reset_control_rearm(priv->reset); 830 + 828 831 err_disable_clks: 829 832 clk_bulk_disable_unprepare(priv->drvdata->num_clks, 830 833 priv->drvdata->clks); ··· 854 851 pm_runtime_disable(dev); 855 852 pm_runtime_put_noidle(dev); 856 853 pm_runtime_set_suspended(dev); 854 + 855 + reset_control_rearm(priv->reset); 857 856 858 857 clk_bulk_disable_unprepare(priv->drvdata->num_clks, 859 858 priv->drvdata->clks); ··· 897 892 phy_exit(priv->phys[i]); 898 893 } 899 894 900 - reset_control_assert(priv->reset); 895 + reset_control_rearm(priv->reset); 901 896 902 897 return 0; 903 898 } ··· 907 902 struct dwc3_meson_g12a *priv = dev_get_drvdata(dev); 908 903 int i, ret; 909 904 910 - reset_control_deassert(priv->reset); 905 + ret = reset_control_reset(priv->reset); 906 + if (ret) 907 + return ret; 911 908 912 909 ret = priv->drvdata->usb_init(priv); 913 910 if (ret)
+12 -3
drivers/usb/dwc3/dwc3-qcom.c
··· 598 598 qcom->dwc3->dev.coherent_dma_mask = dev->coherent_dma_mask; 599 599 600 600 child_res = kcalloc(2, sizeof(*child_res), GFP_KERNEL); 601 - if (!child_res) 601 + if (!child_res) { 602 + platform_device_put(qcom->dwc3); 602 603 return -ENOMEM; 604 + } 603 605 604 606 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 605 607 if (!res) { ··· 639 637 if (ret) { 640 638 dev_err(&pdev->dev, "failed to add device\n"); 641 639 device_remove_software_node(&qcom->dwc3->dev); 640 + goto out; 642 641 } 642 + kfree(child_res); 643 + return 0; 643 644 644 645 out: 646 + platform_device_put(qcom->dwc3); 645 647 kfree(child_res); 646 648 return ret; 647 649 } ··· 775 769 776 770 if (qcom->acpi_pdata->is_urs) { 777 771 qcom->urs_usb = dwc3_qcom_create_urs_usb_platdev(dev); 778 - if (!qcom->urs_usb) { 772 + if (IS_ERR_OR_NULL(qcom->urs_usb)) { 779 773 dev_err(dev, "failed to create URS USB platdev\n"); 780 - return -ENODEV; 774 + if (!qcom->urs_usb) 775 + return -ENODEV; 776 + else 777 + return PTR_ERR(qcom->urs_usb); 781 778 } 782 779 } 783 780 }
+43 -16
drivers/usb/dwc3/gadget.c
··· 331 331 } 332 332 } 333 333 334 - dwc3_writel(dep->regs, DWC3_DEPCMDPAR0, params->param0); 335 - dwc3_writel(dep->regs, DWC3_DEPCMDPAR1, params->param1); 336 - dwc3_writel(dep->regs, DWC3_DEPCMDPAR2, params->param2); 334 + /* 335 + * For some commands such as Update Transfer command, DEPCMDPARn 336 + * registers are reserved. Since the driver often sends Update Transfer 337 + * command, don't write to DEPCMDPARn to avoid register write delays and 338 + * improve performance. 339 + */ 340 + if (DWC3_DEPCMD_CMD(cmd) != DWC3_DEPCMD_UPDATETRANSFER) { 341 + dwc3_writel(dep->regs, DWC3_DEPCMDPAR0, params->param0); 342 + dwc3_writel(dep->regs, DWC3_DEPCMDPAR1, params->param1); 343 + dwc3_writel(dep->regs, DWC3_DEPCMDPAR2, params->param2); 344 + } 337 345 338 346 /* 339 347 * Synopsys Databook 2.60a states in section 6.3.2.5.6 of that if we're ··· 365 357 cmd |= DWC3_DEPCMD_CMDACT; 366 358 367 359 dwc3_writel(dep->regs, DWC3_DEPCMD, cmd); 360 + 361 + if (!(cmd & DWC3_DEPCMD_CMDACT)) { 362 + ret = 0; 363 + goto skip_status; 364 + } 365 + 368 366 do { 369 367 reg = dwc3_readl(dep->regs, DWC3_DEPCMD); 370 368 if (!(reg & DWC3_DEPCMD_CMDACT)) { ··· 412 398 cmd_status = -ETIMEDOUT; 413 399 } 414 400 401 + skip_status: 415 402 trace_dwc3_gadget_ep_cmd(dep, cmd, params, cmd_status); 416 403 417 404 if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_STARTTRANSFER) { ··· 1275 1260 trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI; 1276 1261 } 1277 1262 1263 + /* All TRBs setup for MST must set CSP=1 when LST=0 */ 1264 + if (dep->stream_capable && DWC3_MST_CAPABLE(&dwc->hwparams)) 1265 + trb->ctrl |= DWC3_TRB_CTRL_CSP; 1266 + 1278 1267 if ((!no_interrupt && !chain) || must_interrupt) 1279 1268 trb->ctrl |= DWC3_TRB_CTRL_IOC; 1280 1269 1281 1270 if (chain) 1282 1271 trb->ctrl |= DWC3_TRB_CTRL_CHN; 1283 - else if (dep->stream_capable && is_last) 1272 + else if (dep->stream_capable && is_last && 1273 + !DWC3_MST_CAPABLE(&dwc->hwparams)) 1284 1274 trb->ctrl |= DWC3_TRB_CTRL_LST; 1285 1275 1286 1276 if 
(usb_endpoint_xfer_bulk(dep->endpoint.desc) && dep->stream_capable) ··· 1533 1513 * burst capability may try to read and use TRBs beyond the 1534 1514 * active transfer instead of stopping. 1535 1515 */ 1536 - if (dep->stream_capable && req->request.is_last) 1516 + if (dep->stream_capable && req->request.is_last && 1517 + !DWC3_MST_CAPABLE(&dep->dwc->hwparams)) 1537 1518 return ret; 1538 1519 } 1539 1520 ··· 1567 1546 * burst capability may try to read and use TRBs beyond the 1568 1547 * active transfer instead of stopping. 1569 1548 */ 1570 - if (dep->stream_capable && req->request.is_last) 1549 + if (dep->stream_capable && req->request.is_last && 1550 + !DWC3_MST_CAPABLE(&dwc->hwparams)) 1571 1551 return ret; 1572 1552 } 1573 1553 ··· 1645 1623 return ret; 1646 1624 } 1647 1625 1648 - if (dep->stream_capable && req->request.is_last) 1626 + if (dep->stream_capable && req->request.is_last && 1627 + !DWC3_MST_CAPABLE(&dep->dwc->hwparams)) 1649 1628 dep->flags |= DWC3_EP_WAIT_TRANSFER_COMPLETE; 1650 1629 1651 1630 return 0; ··· 2661 2638 reg |= DWC3_DCFG_IGNSTRMPP; 2662 2639 dwc3_writel(dwc->regs, DWC3_DCFG, reg); 2663 2640 2641 + /* Enable MST by default if the device is capable of MST */ 2642 + if (DWC3_MST_CAPABLE(&dwc->hwparams)) { 2643 + reg = dwc3_readl(dwc->regs, DWC3_DCFG1); 2644 + reg &= ~DWC3_DCFG1_DIS_MST_ENH; 2645 + dwc3_writel(dwc->regs, DWC3_DCFG1, reg); 2646 + } 2647 + 2664 2648 /* Start with SuperSpeed Default */ 2665 2649 dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512); 2666 2650 ··· 3467 3437 case DEPEVT_STREAM_NOSTREAM: 3468 3438 if ((dep->flags & DWC3_EP_IGNORE_NEXT_NOSTREAM) || 3469 3439 !(dep->flags & DWC3_EP_FORCE_RESTART_STREAM) || 3470 - !(dep->flags & DWC3_EP_WAIT_TRANSFER_COMPLETE)) 3440 + (!DWC3_MST_CAPABLE(&dwc->hwparams) && 3441 + !(dep->flags & DWC3_EP_WAIT_TRANSFER_COMPLETE))) 3471 3442 break; 3472 3443 3473 3444 /* ··· 4098 4067 struct dwc3 *dwc = evt->dwc; 4099 4068 irqreturn_t ret = IRQ_NONE; 4100 4069 int left; 4101 - u32 
reg; 4102 4070 4103 4071 left = evt->count; 4104 4072 ··· 4129 4099 ret = IRQ_HANDLED; 4130 4100 4131 4101 /* Unmask interrupt */ 4132 - reg = dwc3_readl(dwc->regs, DWC3_GEVNTSIZ(0)); 4133 - reg &= ~DWC3_GEVNTSIZ_INTMASK; 4134 - dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0), reg); 4102 + dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0), 4103 + DWC3_GEVNTSIZ_SIZE(evt->length)); 4135 4104 4136 4105 if (dwc->imod_interval) { 4137 4106 dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), DWC3_GEVNTCOUNT_EHB); ··· 4159 4130 struct dwc3 *dwc = evt->dwc; 4160 4131 u32 amount; 4161 4132 u32 count; 4162 - u32 reg; 4163 4133 4164 4134 if (pm_runtime_suspended(dwc->dev)) { 4165 4135 pm_runtime_get(dwc->dev); ··· 4185 4157 evt->flags |= DWC3_EVENT_PENDING; 4186 4158 4187 4159 /* Mask interrupt */ 4188 - reg = dwc3_readl(dwc->regs, DWC3_GEVNTSIZ(0)); 4189 - reg |= DWC3_GEVNTSIZ_INTMASK; 4190 - dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0), reg); 4160 + dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0), 4161 + DWC3_GEVNTSIZ_INTMASK | DWC3_GEVNTSIZ_SIZE(evt->length)); 4191 4162 4192 4163 amount = min(count, evt->length - evt->lpos); 4193 4164 memcpy(evt->cache + evt->lpos, evt->buf + evt->lpos, amount);
+26 -19
drivers/usb/dwc3/host.c
··· 8 8 */ 9 9 10 10 #include <linux/acpi.h> 11 + #include <linux/irq.h> 12 + #include <linux/of.h> 11 13 #include <linux/platform_device.h> 12 14 13 15 #include "core.h" 16 + 17 + static void dwc3_host_fill_xhci_irq_res(struct dwc3 *dwc, 18 + int irq, char *name) 19 + { 20 + struct platform_device *pdev = to_platform_device(dwc->dev); 21 + struct device_node *np = dev_of_node(&pdev->dev); 22 + 23 + dwc->xhci_resources[1].start = irq; 24 + dwc->xhci_resources[1].end = irq; 25 + dwc->xhci_resources[1].flags = IORESOURCE_IRQ | irq_get_trigger_type(irq); 26 + if (!name && np) 27 + dwc->xhci_resources[1].name = of_node_full_name(pdev->dev.of_node); 28 + else 29 + dwc->xhci_resources[1].name = name; 30 + } 14 31 15 32 static int dwc3_host_get_irq(struct dwc3 *dwc) 16 33 { ··· 35 18 int irq; 36 19 37 20 irq = platform_get_irq_byname_optional(dwc3_pdev, "host"); 38 - if (irq > 0) 21 + if (irq > 0) { 22 + dwc3_host_fill_xhci_irq_res(dwc, irq, "host"); 39 23 goto out; 24 + } 40 25 41 26 if (irq == -EPROBE_DEFER) 42 27 goto out; 43 28 44 29 irq = platform_get_irq_byname_optional(dwc3_pdev, "dwc_usb3"); 45 - if (irq > 0) 30 + if (irq > 0) { 31 + dwc3_host_fill_xhci_irq_res(dwc, irq, "dwc_usb3"); 46 32 goto out; 33 + } 47 34 48 35 if (irq == -EPROBE_DEFER) 49 36 goto out; 50 37 51 38 irq = platform_get_irq(dwc3_pdev, 0); 52 - if (irq > 0) 39 + if (irq > 0) { 40 + dwc3_host_fill_xhci_irq_res(dwc, irq, NULL); 53 41 goto out; 42 + } 54 43 55 44 if (!irq) 56 45 irq = -EINVAL; ··· 70 47 struct property_entry props[4]; 71 48 struct platform_device *xhci; 72 49 int ret, irq; 73 - struct resource *res; 74 - struct platform_device *dwc3_pdev = to_platform_device(dwc->dev); 75 50 int prop_idx = 0; 76 51 77 52 irq = dwc3_host_get_irq(dwc); 78 53 if (irq < 0) 79 54 return irq; 80 - 81 - res = platform_get_resource_byname(dwc3_pdev, IORESOURCE_IRQ, "host"); 82 - if (!res) 83 - res = platform_get_resource_byname(dwc3_pdev, IORESOURCE_IRQ, 84 - "dwc_usb3"); 85 - if (!res) 86 - res = 
platform_get_resource(dwc3_pdev, IORESOURCE_IRQ, 0); 87 - if (!res) 88 - return -ENOMEM; 89 - 90 - dwc->xhci_resources[1].start = irq; 91 - dwc->xhci_resources[1].end = irq; 92 - dwc->xhci_resources[1].flags = res->flags; 93 - dwc->xhci_resources[1].name = res->name; 94 55 95 56 xhci = platform_device_alloc("xhci-hcd", PLATFORM_DEVID_AUTO); 96 57 if (!xhci) {
+26 -13
drivers/usb/gadget/composite.c
··· 159 159 int want_comp_desc = 0; 160 160 161 161 struct usb_descriptor_header **d_spd; /* cursor for speed desc */ 162 + struct usb_composite_dev *cdev; 163 + bool incomplete_desc = false; 162 164 163 165 if (!g || !f || !_ep) 164 166 return -EIO; ··· 169 167 switch (g->speed) { 170 168 case USB_SPEED_SUPER_PLUS: 171 169 if (gadget_is_superspeed_plus(g)) { 172 - speed_desc = f->ssp_descriptors; 173 - want_comp_desc = 1; 174 - break; 170 + if (f->ssp_descriptors) { 171 + speed_desc = f->ssp_descriptors; 172 + want_comp_desc = 1; 173 + break; 174 + } 175 + incomplete_desc = true; 175 176 } 176 177 fallthrough; 177 178 case USB_SPEED_SUPER: 178 179 if (gadget_is_superspeed(g)) { 179 - speed_desc = f->ss_descriptors; 180 - want_comp_desc = 1; 181 - break; 180 + if (f->ss_descriptors) { 181 + speed_desc = f->ss_descriptors; 182 + want_comp_desc = 1; 183 + break; 184 + } 185 + incomplete_desc = true; 182 186 } 183 187 fallthrough; 184 188 case USB_SPEED_HIGH: 185 189 if (gadget_is_dualspeed(g)) { 186 - speed_desc = f->hs_descriptors; 187 - break; 190 + if (f->hs_descriptors) { 191 + speed_desc = f->hs_descriptors; 192 + break; 193 + } 194 + incomplete_desc = true; 188 195 } 189 196 fallthrough; 190 197 default: 191 198 speed_desc = f->fs_descriptors; 192 199 } 200 + 201 + cdev = get_gadget_data(g); 202 + if (incomplete_desc) 203 + WARNING(cdev, 204 + "%s doesn't hold the descriptors for current speed\n", 205 + f->name); 193 206 194 207 /* find correct alternate setting descriptor */ 195 208 for_each_desc(speed_desc, d_spd, USB_DT_INTERFACE) { ··· 261 244 _ep->maxburst = comp_desc->bMaxBurst + 1; 262 245 break; 263 246 default: 264 - if (comp_desc->bMaxBurst != 0) { 265 - struct usb_composite_dev *cdev; 266 - 267 - cdev = get_gadget_data(g); 247 + if (comp_desc->bMaxBurst != 0) 268 248 ERROR(cdev, "ep0 bMaxBurst must be 0\n"); 269 - } 270 249 _ep->maxburst = 1; 271 250 break; 272 251 }
+9 -30
drivers/usb/gadget/configfs.c
··· 89 89 struct list_head list; 90 90 }; 91 91 92 - struct os_desc { 93 - struct config_group group; 94 - }; 95 - 96 92 struct gadget_config_name { 97 93 struct usb_gadget_strings stringtab_dev; 98 94 struct usb_string strings; ··· 416 420 struct config_usb_cfg *cfg = to_config_usb_cfg(usb_cfg_ci); 417 421 struct gadget_info *gi = cfg_to_gadget_info(cfg); 418 422 419 - struct config_group *group = to_config_group(usb_func_ci); 420 - struct usb_function_instance *fi = container_of(group, 421 - struct usb_function_instance, group); 423 + struct usb_function_instance *fi = 424 + to_usb_function_instance(usb_func_ci); 422 425 struct usb_function_instance *a_fi; 423 426 struct usb_function *f; 424 427 int ret; ··· 465 470 struct config_usb_cfg *cfg = to_config_usb_cfg(usb_cfg_ci); 466 471 struct gadget_info *gi = cfg_to_gadget_info(cfg); 467 472 468 - struct config_group *group = to_config_group(usb_func_ci); 469 - struct usb_function_instance *fi = container_of(group, 470 - struct usb_function_instance, group); 473 + struct usb_function_instance *fi = 474 + to_usb_function_instance(usb_func_ci); 471 475 struct usb_function *f; 472 476 473 477 /* ··· 777 783 USB_CONFIG_STRING_RW_OPS(gadget_strings); 778 784 USB_CONFIG_STRINGS_LANG(gadget_strings, gadget_info); 779 785 780 - static inline struct os_desc *to_os_desc(struct config_item *item) 781 - { 782 - return container_of(to_config_group(item), struct os_desc, group); 783 - } 784 - 785 786 static inline struct gadget_info *os_desc_item_to_gadget_info( 786 787 struct config_item *item) 787 788 { 788 - return to_gadget_info(to_os_desc(item)->group.cg_item.ci_parent); 789 + return container_of(to_config_group(item), 790 + struct gadget_info, os_desc_group); 789 791 } 790 792 791 793 static ssize_t os_desc_use_show(struct config_item *item, char *page) ··· 876 886 NULL, 877 887 }; 878 888 879 - static void os_desc_attr_release(struct config_item *item) 880 - { 881 - struct os_desc *os_desc = to_os_desc(item); 882 - 
kfree(os_desc); 883 - } 884 - 885 889 static int os_desc_link(struct config_item *os_desc_ci, 886 890 struct config_item *usb_cfg_ci) 887 891 { 888 - struct gadget_info *gi = container_of(to_config_group(os_desc_ci), 889 - struct gadget_info, os_desc_group); 892 + struct gadget_info *gi = os_desc_item_to_gadget_info(os_desc_ci); 890 893 struct usb_composite_dev *cdev = &gi->cdev; 891 - struct config_usb_cfg *c_target = 892 - container_of(to_config_group(usb_cfg_ci), 893 - struct config_usb_cfg, group); 894 + struct config_usb_cfg *c_target = to_config_usb_cfg(usb_cfg_ci); 894 895 struct usb_configuration *c; 895 896 int ret; 896 897 ··· 911 930 static void os_desc_unlink(struct config_item *os_desc_ci, 912 931 struct config_item *usb_cfg_ci) 913 932 { 914 - struct gadget_info *gi = container_of(to_config_group(os_desc_ci), 915 - struct gadget_info, os_desc_group); 933 + struct gadget_info *gi = os_desc_item_to_gadget_info(os_desc_ci); 916 934 struct usb_composite_dev *cdev = &gi->cdev; 917 935 918 936 mutex_lock(&gi->lock); ··· 923 943 } 924 944 925 945 static struct configfs_item_operations os_desc_ops = { 926 - .release = os_desc_attr_release, 927 946 .allow_link = os_desc_link, 928 947 .drop_link = os_desc_unlink, 929 948 };
+2 -2
drivers/usb/gadget/function/f_fs.c
··· 614 614 file->private_data = ffs; 615 615 ffs_data_opened(ffs); 616 616 617 - return 0; 617 + return stream_open(inode, file); 618 618 } 619 619 620 620 static int ffs_ep0_release(struct inode *inode, struct file *file) ··· 1154 1154 file->private_data = epfile; 1155 1155 ffs_data_opened(epfile->ffs); 1156 1156 1157 - return 0; 1157 + return stream_open(inode, file); 1158 1158 } 1159 1159 1160 1160 static int ffs_aio_cancel(struct kiocb *kiocb)
+46 -2
drivers/usb/gadget/function/f_midi.c
··· 1097 1097 int result; \ 1098 1098 \ 1099 1099 mutex_lock(&opts->lock); \ 1100 - result = sprintf(page, "%d\n", opts->name); \ 1100 + result = sprintf(page, "%u\n", opts->name); \ 1101 1101 mutex_unlock(&opts->lock); \ 1102 1102 \ 1103 1103 return result; \ ··· 1134 1134 \ 1135 1135 CONFIGFS_ATTR(f_midi_opts_, name); 1136 1136 1137 - F_MIDI_OPT(index, true, SNDRV_CARDS); 1137 + #define F_MIDI_OPT_SIGNED(name, test_limit, limit) \ 1138 + static ssize_t f_midi_opts_##name##_show(struct config_item *item, char *page) \ 1139 + { \ 1140 + struct f_midi_opts *opts = to_f_midi_opts(item); \ 1141 + int result; \ 1142 + \ 1143 + mutex_lock(&opts->lock); \ 1144 + result = sprintf(page, "%d\n", opts->name); \ 1145 + mutex_unlock(&opts->lock); \ 1146 + \ 1147 + return result; \ 1148 + } \ 1149 + \ 1150 + static ssize_t f_midi_opts_##name##_store(struct config_item *item, \ 1151 + const char *page, size_t len) \ 1152 + { \ 1153 + struct f_midi_opts *opts = to_f_midi_opts(item); \ 1154 + int ret; \ 1155 + s32 num; \ 1156 + \ 1157 + mutex_lock(&opts->lock); \ 1158 + if (opts->refcnt > 1) { \ 1159 + ret = -EBUSY; \ 1160 + goto end; \ 1161 + } \ 1162 + \ 1163 + ret = kstrtos32(page, 0, &num); \ 1164 + if (ret) \ 1165 + goto end; \ 1166 + \ 1167 + if (test_limit && num > limit) { \ 1168 + ret = -EINVAL; \ 1169 + goto end; \ 1170 + } \ 1171 + opts->name = num; \ 1172 + ret = len; \ 1173 + \ 1174 + end: \ 1175 + mutex_unlock(&opts->lock); \ 1176 + return ret; \ 1177 + } \ 1178 + \ 1179 + CONFIGFS_ATTR(f_midi_opts_, name); 1180 + 1181 + F_MIDI_OPT_SIGNED(index, true, SNDRV_CARDS); 1138 1182 F_MIDI_OPT(buflen, false, 0); 1139 1183 F_MIDI_OPT(qlen, false, 0); 1140 1184 F_MIDI_OPT(in_ports, true, MAX_PORTS);
+15 -13
drivers/usb/gadget/function/u_audio.c
··· 76 76 struct snd_pcm *pcm; 77 77 78 78 /* pre-calculated values for playback iso completion */ 79 - unsigned long long p_interval_mil; 80 79 unsigned long long p_residue_mil; 80 + unsigned int p_interval; 81 81 unsigned int p_framesize; 82 82 }; 83 83 ··· 194 194 * If there is a residue from this division, add it to the 195 195 * residue accumulator. 196 196 */ 197 + unsigned long long p_interval_mil = uac->p_interval * 1000000ULL; 198 + 197 199 pitched_rate_mil = (unsigned long long) 198 200 params->p_srate * prm->pitch; 199 201 div_result = pitched_rate_mil; 200 - do_div(div_result, uac->p_interval_mil); 202 + do_div(div_result, uac->p_interval); 203 + do_div(div_result, 1000000); 201 204 frames = (unsigned int) div_result; 202 205 203 206 pr_debug("p_srate %d, pitch %d, interval_mil %llu, frames %d\n", 204 - params->p_srate, prm->pitch, uac->p_interval_mil, frames); 207 + params->p_srate, prm->pitch, p_interval_mil, frames); 205 208 206 209 p_pktsize = min_t(unsigned int, 207 210 uac->p_framesize * frames, 208 211 ep->maxpacket); 209 212 210 213 if (p_pktsize < ep->maxpacket) { 211 - residue_frames_mil = pitched_rate_mil - frames * uac->p_interval_mil; 214 + residue_frames_mil = pitched_rate_mil - frames * p_interval_mil; 212 215 p_pktsize_residue_mil = uac->p_framesize * residue_frames_mil; 213 216 } else 214 217 p_pktsize_residue_mil = 0; ··· 225 222 * size and decrease the accumulator. 
226 223 */ 227 224 div_result = uac->p_residue_mil; 228 - do_div(div_result, uac->p_interval_mil); 225 + do_div(div_result, uac->p_interval); 226 + do_div(div_result, 1000000); 229 227 if ((unsigned int) div_result >= uac->p_framesize) { 230 228 req->length += uac->p_framesize; 231 - uac->p_residue_mil -= uac->p_framesize * 232 - uac->p_interval_mil; 229 + uac->p_residue_mil -= uac->p_framesize * p_interval_mil; 233 230 pr_debug("increased req length to %d\n", req->length); 234 231 } 235 232 pr_debug("remains uac->p_residue_mil %llu\n", uac->p_residue_mil); ··· 594 591 unsigned int factor; 595 592 const struct usb_endpoint_descriptor *ep_desc; 596 593 int req_len, i; 597 - unsigned int p_interval, p_pktsize; 594 + unsigned int p_pktsize; 598 595 599 596 ep = audio_dev->in_ep; 600 597 prm = &uac->p_prm; ··· 615 612 /* pre-compute some values for iso_complete() */ 616 613 uac->p_framesize = params->p_ssize * 617 614 num_channels(params->p_chmask); 618 - p_interval = factor / (1 << (ep_desc->bInterval - 1)); 619 - uac->p_interval_mil = (unsigned long long) p_interval * 1000000; 615 + uac->p_interval = factor / (1 << (ep_desc->bInterval - 1)); 620 616 p_pktsize = min_t(unsigned int, 621 617 uac->p_framesize * 622 - (params->p_srate / p_interval), 618 + (params->p_srate / uac->p_interval), 623 619 ep->maxpacket); 624 620 625 621 req_len = p_pktsize; ··· 1147 1145 } 1148 1146 1149 1147 kctl->id.device = pcm->device; 1150 - kctl->id.subdevice = i; 1148 + kctl->id.subdevice = 0; 1151 1149 1152 1150 err = snd_ctl_add(card, kctl); 1153 1151 if (err < 0) ··· 1170 1168 } 1171 1169 1172 1170 kctl->id.device = pcm->device; 1173 - kctl->id.subdevice = i; 1171 + kctl->id.subdevice = 0; 1174 1172 1175 1173 1176 1174 kctl->tlv.c = u_audio_volume_tlv;
+11 -7
drivers/usb/gadget/legacy/inode.c
··· 1242 1242 return mask; 1243 1243 } 1244 1244 1245 - static long dev_ioctl (struct file *fd, unsigned code, unsigned long value) 1245 + static long gadget_dev_ioctl (struct file *fd, unsigned code, unsigned long value) 1246 1246 { 1247 1247 struct dev_data *dev = fd->private_data; 1248 1248 struct usb_gadget *gadget = dev->gadget; ··· 1826 1826 spin_lock_irq (&dev->lock); 1827 1827 value = -EINVAL; 1828 1828 if (dev->buf) { 1829 + spin_unlock_irq(&dev->lock); 1829 1830 kfree(kbuf); 1830 - goto fail; 1831 + return value; 1831 1832 } 1832 1833 dev->buf = kbuf; 1833 1834 ··· 1875 1874 1876 1875 value = usb_gadget_probe_driver(&gadgetfs_driver); 1877 1876 if (value != 0) { 1878 - kfree (dev->buf); 1879 - dev->buf = NULL; 1877 + spin_lock_irq(&dev->lock); 1878 + goto fail; 1880 1879 } else { 1881 1880 /* at this point "good" hardware has for the first time 1882 1881 * let the USB the host see us. alternatively, if users ··· 1893 1892 return value; 1894 1893 1895 1894 fail: 1895 + dev->config = NULL; 1896 + dev->hs_config = NULL; 1897 + dev->dev = NULL; 1896 1898 spin_unlock_irq (&dev->lock); 1897 1899 pr_debug ("%s: %s fail %zd, %p\n", shortname, __func__, value, dev); 1898 1900 kfree (dev->buf); ··· 1904 1900 } 1905 1901 1906 1902 static int 1907 - dev_open (struct inode *inode, struct file *fd) 1903 + gadget_dev_open (struct inode *inode, struct file *fd) 1908 1904 { 1909 1905 struct dev_data *dev = inode->i_private; 1910 1906 int value = -EBUSY; ··· 1924 1920 static const struct file_operations ep0_operations = { 1925 1921 .llseek = no_llseek, 1926 1922 1927 - .open = dev_open, 1923 + .open = gadget_dev_open, 1928 1924 .read = ep0_read, 1929 1925 .write = dev_config, 1930 1926 .fasync = ep0_fasync, 1931 1927 .poll = ep0_poll, 1932 - .unlocked_ioctl = dev_ioctl, 1928 + .unlocked_ioctl = gadget_dev_ioctl, 1933 1929 .release = dev_release, 1934 1930 }; 1935 1931
+15 -4
drivers/usb/gadget/udc/aspeed-vhub/dev.c
··· 110 110 u16 wIndex, u16 wValue, 111 111 bool is_set) 112 112 { 113 + u32 val; 114 + 113 115 DDBG(d, "%s_FEATURE(dev val=%02x)\n", 114 116 is_set ? "SET" : "CLEAR", wValue); 115 117 116 - if (wValue != USB_DEVICE_REMOTE_WAKEUP) 117 - return std_req_driver; 118 + if (wValue == USB_DEVICE_REMOTE_WAKEUP) { 119 + d->wakeup_en = is_set; 120 + return std_req_complete; 121 + } 118 122 119 - d->wakeup_en = is_set; 123 + if (wValue == USB_DEVICE_TEST_MODE) { 124 + val = readl(d->vhub->regs + AST_VHUB_CTRL); 125 + val &= ~GENMASK(10, 8); 126 + val |= VHUB_CTRL_SET_TEST_MODE((wIndex >> 8) & 0x7); 127 + writel(val, d->vhub->regs + AST_VHUB_CTRL); 120 128 121 - return std_req_complete; 129 + return std_req_complete; 130 + } 131 + 132 + return std_req_driver; 122 133 } 123 134 124 135 static int ast_vhub_ep_feature(struct ast_vhub_dev *d,
+7
drivers/usb/gadget/udc/aspeed-vhub/ep0.c
··· 251 251 len = remain; 252 252 rc = -EOVERFLOW; 253 253 } 254 + 255 + /* Hardware returned wrong data len */ 256 + if (len < ep->ep.maxpacket && len != remain) { 257 + EPDBG(ep, "using expected data len instead\n"); 258 + len = remain; 259 + } 260 + 254 261 if (len && req->req.buf) 255 262 memcpy(req->req.buf + req->req.actual, ep->buf, len); 256 263 req->req.actual += len;
+41 -6
drivers/usb/gadget/udc/aspeed-vhub/hub.c
··· 68 68 .bNumConfigurations = 1, 69 69 }; 70 70 71 + static const struct usb_qualifier_descriptor ast_vhub_qual_desc = { 72 + .bLength = 0xA, 73 + .bDescriptorType = USB_DT_DEVICE_QUALIFIER, 74 + .bcdUSB = cpu_to_le16(0x0200), 75 + .bDeviceClass = USB_CLASS_HUB, 76 + .bDeviceSubClass = 0, 77 + .bDeviceProtocol = 0, 78 + .bMaxPacketSize0 = 64, 79 + .bNumConfigurations = 1, 80 + .bRESERVED = 0, 81 + }; 82 + 71 83 /* 72 84 * Configuration descriptor: same comments as above 73 85 * regarding handling USB1 mode. ··· 212 200 u16 wIndex, u16 wValue, 213 201 bool is_set) 214 202 { 203 + u32 val; 204 + 215 205 EPDBG(ep, "%s_FEATURE(dev val=%02x)\n", 216 206 is_set ? "SET" : "CLEAR", wValue); 217 207 218 - if (wValue != USB_DEVICE_REMOTE_WAKEUP) 219 - return std_req_stall; 208 + if (wValue == USB_DEVICE_REMOTE_WAKEUP) { 209 + ep->vhub->wakeup_en = is_set; 210 + EPDBG(ep, "Hub remote wakeup %s\n", 211 + is_set ? "enabled" : "disabled"); 212 + return std_req_complete; 213 + } 220 214 221 - ep->vhub->wakeup_en = is_set; 222 - EPDBG(ep, "Hub remote wakeup %s\n", 223 - is_set ? 
"enabled" : "disabled"); 215 + if (wValue == USB_DEVICE_TEST_MODE) { 216 + val = readl(ep->vhub->regs + AST_VHUB_CTRL); 217 + val &= ~GENMASK(10, 8); 218 + val |= VHUB_CTRL_SET_TEST_MODE((wIndex >> 8) & 0x7); 219 + writel(val, ep->vhub->regs + AST_VHUB_CTRL); 224 220 225 - return std_req_complete; 221 + return std_req_complete; 222 + } 223 + 224 + return std_req_stall; 226 225 } 227 226 228 227 static int ast_vhub_hub_ep_feature(struct ast_vhub_ep *ep, ··· 294 271 BUILD_BUG_ON(dsize > sizeof(vhub->vhub_dev_desc)); 295 272 BUILD_BUG_ON(USB_DT_DEVICE_SIZE >= AST_VHUB_EP0_MAX_PACKET); 296 273 break; 274 + case USB_DT_OTHER_SPEED_CONFIG: 297 275 case USB_DT_CONFIG: 298 276 dsize = AST_VHUB_CONF_DESC_SIZE; 299 277 memcpy(ep->buf, &vhub->vhub_conf_desc, dsize); 278 + ((u8 *)ep->buf)[1] = desc_type; 300 279 BUILD_BUG_ON(dsize > sizeof(vhub->vhub_conf_desc)); 301 280 BUILD_BUG_ON(AST_VHUB_CONF_DESC_SIZE >= AST_VHUB_EP0_MAX_PACKET); 302 281 break; ··· 307 282 memcpy(ep->buf, &vhub->vhub_hub_desc, dsize); 308 283 BUILD_BUG_ON(dsize > sizeof(vhub->vhub_hub_desc)); 309 284 BUILD_BUG_ON(AST_VHUB_HUB_DESC_SIZE >= AST_VHUB_EP0_MAX_PACKET); 285 + break; 286 + case USB_DT_DEVICE_QUALIFIER: 287 + dsize = sizeof(vhub->vhub_qual_desc); 288 + memcpy(ep->buf, &vhub->vhub_qual_desc, dsize); 310 289 break; 311 290 default: 312 291 return std_req_stall; ··· 457 428 switch (wValue >> 8) { 458 429 case USB_DT_DEVICE: 459 430 case USB_DT_CONFIG: 431 + case USB_DT_DEVICE_QUALIFIER: 432 + case USB_DT_OTHER_SPEED_CONFIG: 460 433 return ast_vhub_rep_desc(ep, wValue >> 8, 461 434 wLength); 462 435 case USB_DT_STRING: ··· 1063 1032 ret = ast_vhub_of_parse_str_desc(vhub, desc_np); 1064 1033 else 1065 1034 ret = ast_vhub_str_alloc_add(vhub, &ast_vhub_strings); 1035 + 1036 + /* Initialize vhub Qualifier Descriptor. */ 1037 + memcpy(&vhub->vhub_qual_desc, &ast_vhub_qual_desc, 1038 + sizeof(vhub->vhub_qual_desc)); 1066 1039 1067 1040 return ret; 1068 1041 }
+1
drivers/usb/gadget/udc/aspeed-vhub/vhub.h
··· 425 425 struct ast_vhub_full_cdesc vhub_conf_desc; 426 426 struct usb_hub_descriptor vhub_hub_desc; 427 427 struct list_head vhub_str_desc; 428 + struct usb_qualifier_descriptor vhub_qual_desc; 428 429 }; 429 430 430 431 /* Standard request handlers result codes */
+24 -43
drivers/usb/gadget/udc/at91_udc.c
··· 25 25 #include <linux/usb/ch9.h> 26 26 #include <linux/usb/gadget.h> 27 27 #include <linux/of.h> 28 - #include <linux/of_gpio.h> 28 + #include <linux/gpio/consumer.h> 29 29 #include <linux/platform_data/atmel.h> 30 30 #include <linux/regmap.h> 31 31 #include <linux/mfd/syscon.h> ··· 1510 1510 1511 1511 static void at91_vbus_update(struct at91_udc *udc, unsigned value) 1512 1512 { 1513 - value ^= udc->board.vbus_active_low; 1514 1513 if (value != udc->vbus) 1515 1514 at91_vbus_session(&udc->gadget, value); 1516 1515 } ··· 1520 1521 1521 1522 /* vbus needs at least brief debouncing */ 1522 1523 udelay(10); 1523 - at91_vbus_update(udc, gpio_get_value(udc->board.vbus_pin)); 1524 + at91_vbus_update(udc, gpiod_get_value(udc->board.vbus_pin)); 1524 1525 1525 1526 return IRQ_HANDLED; 1526 1527 } ··· 1530 1531 struct at91_udc *udc = container_of(work, struct at91_udc, 1531 1532 vbus_timer_work); 1532 1533 1533 - at91_vbus_update(udc, gpio_get_value_cansleep(udc->board.vbus_pin)); 1534 + at91_vbus_update(udc, gpiod_get_value_cansleep(udc->board.vbus_pin)); 1534 1535 1535 1536 if (!timer_pending(&udc->vbus_timer)) 1536 1537 mod_timer(&udc->vbus_timer, jiffies + VBUS_POLL_TIMEOUT); ··· 1594 1595 static int at91rm9200_udc_init(struct at91_udc *udc) 1595 1596 { 1596 1597 struct at91_ep *ep; 1597 - int ret; 1598 1598 int i; 1599 1599 1600 1600 for (i = 0; i < NUM_ENDPOINTS; i++) { ··· 1613 1615 } 1614 1616 } 1615 1617 1616 - if (!gpio_is_valid(udc->board.pullup_pin)) { 1618 + if (!udc->board.pullup_pin) { 1617 1619 DBG("no D+ pullup?\n"); 1618 1620 return -ENODEV; 1619 1621 } 1620 1622 1621 - ret = devm_gpio_request(&udc->pdev->dev, udc->board.pullup_pin, 1622 - "udc_pullup"); 1623 - if (ret) { 1624 - DBG("D+ pullup is busy\n"); 1625 - return ret; 1626 - } 1627 - 1628 - gpio_direction_output(udc->board.pullup_pin, 1629 - udc->board.pullup_active_low); 1623 + gpiod_direction_output(udc->board.pullup_pin, 1624 + gpiod_is_active_low(udc->board.pullup_pin)); 1630 1625 1631 1626 
return 0; 1632 1627 } 1633 1628 1634 1629 static void at91rm9200_udc_pullup(struct at91_udc *udc, int is_on) 1635 1630 { 1636 - int active = !udc->board.pullup_active_low; 1637 - 1638 1631 if (is_on) 1639 - gpio_set_value(udc->board.pullup_pin, active); 1632 + gpiod_set_value(udc->board.pullup_pin, 1); 1640 1633 else 1641 - gpio_set_value(udc->board.pullup_pin, !active); 1634 + gpiod_set_value(udc->board.pullup_pin, 0); 1642 1635 } 1643 1636 1644 1637 static const struct at91_udc_caps at91rm9200_udc_caps = { ··· 1772 1783 { 1773 1784 struct at91_udc_data *board = &udc->board; 1774 1785 const struct of_device_id *match; 1775 - enum of_gpio_flags flags; 1776 1786 u32 val; 1777 1787 1778 1788 if (of_property_read_u32(np, "atmel,vbus-polled", &val) == 0) 1779 1789 board->vbus_polled = 1; 1780 1790 1781 - board->vbus_pin = of_get_named_gpio_flags(np, "atmel,vbus-gpio", 0, 1782 - &flags); 1783 - board->vbus_active_low = (flags & OF_GPIO_ACTIVE_LOW) ? 1 : 0; 1791 + board->vbus_pin = gpiod_get_from_of_node(np, "atmel,vbus-gpio", 0, 1792 + GPIOD_IN, "udc_vbus"); 1793 + if (IS_ERR(board->vbus_pin)) 1794 + board->vbus_pin = NULL; 1784 1795 1785 - board->pullup_pin = of_get_named_gpio_flags(np, "atmel,pullup-gpio", 0, 1786 - &flags); 1787 - 1788 - board->pullup_active_low = (flags & OF_GPIO_ACTIVE_LOW) ? 
1 : 0; 1796 + board->pullup_pin = gpiod_get_from_of_node(np, "atmel,pullup-gpio", 0, 1797 + GPIOD_ASIS, "udc_pullup"); 1798 + if (IS_ERR(board->pullup_pin)) 1799 + board->pullup_pin = NULL; 1789 1800 1790 1801 match = of_match_node(at91_udc_dt_ids, np); 1791 1802 if (match) ··· 1875 1886 goto err_unprepare_iclk; 1876 1887 } 1877 1888 1878 - if (gpio_is_valid(udc->board.vbus_pin)) { 1879 - retval = devm_gpio_request(dev, udc->board.vbus_pin, 1880 - "udc_vbus"); 1881 - if (retval) { 1882 - DBG("request vbus pin failed\n"); 1883 - goto err_unprepare_iclk; 1884 - } 1885 - 1886 - gpio_direction_input(udc->board.vbus_pin); 1889 + if (udc->board.vbus_pin) { 1890 + gpiod_direction_input(udc->board.vbus_pin); 1887 1891 1888 1892 /* 1889 1893 * Get the initial state of VBUS - we cannot expect 1890 1894 * a pending interrupt. 1891 1895 */ 1892 - udc->vbus = gpio_get_value_cansleep(udc->board.vbus_pin) ^ 1893 - udc->board.vbus_active_low; 1896 + udc->vbus = gpiod_get_value_cansleep(udc->board.vbus_pin); 1894 1897 1895 1898 if (udc->board.vbus_polled) { 1896 1899 INIT_WORK(&udc->vbus_timer_work, at91_vbus_timer_work); ··· 1891 1910 jiffies + VBUS_POLL_TIMEOUT); 1892 1911 } else { 1893 1912 retval = devm_request_irq(dev, 1894 - gpio_to_irq(udc->board.vbus_pin), 1913 + gpiod_to_irq(udc->board.vbus_pin), 1895 1914 at91_vbus_irq, 0, driver_name, udc); 1896 1915 if (retval) { 1897 1916 DBG("request vbus irq %d failed\n", ··· 1969 1988 enable_irq_wake(udc->udp_irq); 1970 1989 1971 1990 udc->active_suspend = wake; 1972 - if (gpio_is_valid(udc->board.vbus_pin) && !udc->board.vbus_polled && wake) 1973 - enable_irq_wake(udc->board.vbus_pin); 1991 + if (udc->board.vbus_pin && !udc->board.vbus_polled && wake) 1992 + enable_irq_wake(gpiod_to_irq(udc->board.vbus_pin)); 1974 1993 return 0; 1975 1994 } 1976 1995 ··· 1979 1998 struct at91_udc *udc = platform_get_drvdata(pdev); 1980 1999 unsigned long flags; 1981 2000 1982 - if (gpio_is_valid(udc->board.vbus_pin) && !udc->board.vbus_polled && 
2001 + if (udc->board.vbus_pin && !udc->board.vbus_polled && 1983 2002 udc->active_suspend) 1984 - disable_irq_wake(udc->board.vbus_pin); 2003 + disable_irq_wake(gpiod_to_irq(udc->board.vbus_pin)); 1985 2004 1986 2005 /* maybe reconnect to host; if so, clocks on */ 1987 2006 if (udc->active_suspend)
+3 -5
drivers/usb/gadget/udc/at91_udc.h
··· 109 109 }; 110 110 111 111 struct at91_udc_data { 112 - int vbus_pin; /* high == host powering us */ 113 - u8 vbus_active_low; /* vbus polarity */ 114 - u8 vbus_polled; /* Use polling, not interrupt */ 115 - int pullup_pin; /* active == D+ pulled up */ 116 - u8 pullup_active_low; /* true == pullup_pin is active low */ 112 + struct gpio_desc *vbus_pin; /* high == host powering us */ 113 + u8 vbus_polled; /* Use polling, not interrupt */ 114 + struct gpio_desc *pullup_pin; /* active == D+ pulled up */ 117 115 }; 118 116 119 117 /*
+6 -2
drivers/usb/gadget/udc/bcm63xx_udc.c
··· 2321 2321 2322 2322 /* IRQ resource #0: control interrupt (VBUS, speed, etc.) */ 2323 2323 irq = platform_get_irq(pdev, 0); 2324 - if (irq < 0) 2324 + if (irq < 0) { 2325 + rc = irq; 2325 2326 goto out_uninit; 2327 + } 2326 2328 if (devm_request_irq(dev, irq, &bcm63xx_udc_ctrl_isr, 0, 2327 2329 dev_name(dev), udc) < 0) 2328 2330 goto report_request_failure; ··· 2332 2330 /* IRQ resources #1-6: data interrupts for IUDMA channels 0-5 */ 2333 2331 for (i = 0; i < BCM63XX_NUM_IUDMA; i++) { 2334 2332 irq = platform_get_irq(pdev, i + 1); 2335 - if (irq < 0) 2333 + if (irq < 0) { 2334 + rc = irq; 2336 2335 goto out_uninit; 2336 + } 2337 2337 if (devm_request_irq(dev, irq, &bcm63xx_udc_data_isr, 0, 2338 2338 dev_name(dev), &udc->iudma[i]) < 0) 2339 2339 goto report_request_failure;
+1
drivers/usb/gadget/udc/bdc/bdc_core.c
··· 623 623 ret = bdc_reinit(bdc); 624 624 if (ret) { 625 625 dev_err(bdc->dev, "err in bdc reinit\n"); 626 + clk_disable_unprepare(bdc->clk); 626 627 return ret; 627 628 } 628 629
+1 -3
drivers/usb/gadget/udc/mv_udc_core.c
··· 2084 2084 2085 2085 usb_del_gadget_udc(&udc->gadget); 2086 2086 2087 - if (udc->qwork) { 2088 - flush_workqueue(udc->qwork); 2087 + if (udc->qwork) 2089 2088 destroy_workqueue(udc->qwork); 2090 - } 2091 2089 2092 2090 /* free memory allocated in probe */ 2093 2091 dma_pool_destroy(udc->dtd_pool);
+1 -1
drivers/usb/gadget/udc/pxa25x_udc.c
··· 2364 2364 2365 2365 irq = platform_get_irq(pdev, 0); 2366 2366 if (irq < 0) 2367 - return -ENODEV; 2367 + return irq; 2368 2368 2369 2369 dev->regs = devm_platform_ioremap_resource(pdev, 0); 2370 2370 if (IS_ERR(dev->regs))
+56
drivers/usb/gadget/udc/udc-xilinx.c
··· 2179 2179 return 0; 2180 2180 } 2181 2181 2182 + #ifdef CONFIG_PM_SLEEP 2183 + static int xudc_suspend(struct device *dev) 2184 + { 2185 + struct xusb_udc *udc; 2186 + u32 crtlreg; 2187 + unsigned long flags; 2188 + 2189 + udc = dev_get_drvdata(dev); 2190 + 2191 + spin_lock_irqsave(&udc->lock, flags); 2192 + 2193 + crtlreg = udc->read_fn(udc->addr + XUSB_CONTROL_OFFSET); 2194 + crtlreg &= ~XUSB_CONTROL_USB_READY_MASK; 2195 + 2196 + udc->write_fn(udc->addr, XUSB_CONTROL_OFFSET, crtlreg); 2197 + 2198 + spin_unlock_irqrestore(&udc->lock, flags); 2199 + if (udc->driver && udc->driver->suspend) 2200 + udc->driver->suspend(&udc->gadget); 2201 + 2202 + clk_disable(udc->clk); 2203 + 2204 + return 0; 2205 + } 2206 + 2207 + static int xudc_resume(struct device *dev) 2208 + { 2209 + struct xusb_udc *udc; 2210 + u32 crtlreg; 2211 + unsigned long flags; 2212 + int ret; 2213 + 2214 + udc = dev_get_drvdata(dev); 2215 + 2216 + ret = clk_enable(udc->clk); 2217 + if (ret < 0) 2218 + return ret; 2219 + 2220 + spin_lock_irqsave(&udc->lock, flags); 2221 + 2222 + crtlreg = udc->read_fn(udc->addr + XUSB_CONTROL_OFFSET); 2223 + crtlreg |= XUSB_CONTROL_USB_READY_MASK; 2224 + 2225 + udc->write_fn(udc->addr, XUSB_CONTROL_OFFSET, crtlreg); 2226 + 2227 + spin_unlock_irqrestore(&udc->lock, flags); 2228 + 2229 + return 0; 2230 + } 2231 + #endif /* CONFIG_PM_SLEEP */ 2232 + 2233 + static const struct dev_pm_ops xudc_pm_ops = { 2234 + SET_SYSTEM_SLEEP_PM_OPS(xudc_suspend, xudc_resume) 2235 + }; 2236 + 2182 2237 /* Match table for of_platform binding */ 2183 2238 static const struct of_device_id usb_of_match[] = { 2184 2239 { .compatible = "xlnx,usb2-device-4.00.a", }, ··· 2245 2190 .driver = { 2246 2191 .name = driver_name, 2247 2192 .of_match_table = usb_of_match, 2193 + .pm = &xudc_pm_ops, 2248 2194 }, 2249 2195 .probe = xudc_probe, 2250 2196 .remove = xudc_remove,
+11
drivers/usb/host/Kconfig
··· 772 772 This option is of interest only to developers who need to validate 773 773 their USB hardware designs. It is not needed for normal use. If 774 774 unsure, say N. 775 + 776 + config USB_XEN_HCD 777 + tristate "Xen usb virtual host driver" 778 + depends on XEN 779 + select XEN_XENBUS_FRONTEND 780 + help 781 + The Xen usb virtual host driver serves as a frontend driver enabling 782 + a Xen guest system to access USB Devices passed through to the guest 783 + by the Xen host (usually Dom0). 784 + Only needed if the kernel is running in a Xen guest and generic 785 + access to a USB device is needed.
+1
drivers/usb/host/Makefile
··· 85 85 obj-$(CONFIG_USB_HCD_SSB) += ssb-hcd.o 86 86 obj-$(CONFIG_USB_FOTG210_HCD) += fotg210-hcd.o 87 87 obj-$(CONFIG_USB_MAX3421_HCD) += max3421-hcd.o 88 + obj-$(CONFIG_USB_XEN_HCD) += xen-hcd.o
+5 -1
drivers/usb/host/ehci-brcm.c
··· 62 62 u32 __iomem *status_reg; 63 63 unsigned long flags; 64 64 int retval, irq_disabled = 0; 65 + u32 temp; 65 66 66 - status_reg = &ehci->regs->port_status[(wIndex & 0xff) - 1]; 67 + temp = (wIndex & 0xff) - 1; 68 + if (temp >= HCS_N_PORTS_MAX) /* Avoid index-out-of-bounds warning */ 69 + temp = 0; 70 + status_reg = &ehci->regs->port_status[temp]; 67 71 68 72 /* 69 73 * RESUME is cleared when GetPortStatus() is called 20ms after start
+3 -8
drivers/usb/host/fotg210-hcd.c
··· 5576 5576 5577 5577 pdev->dev.power.power_state = PMSG_ON; 5578 5578 5579 - res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 5580 - if (!res) { 5581 - dev_err(dev, "Found HC with no IRQ. Check %s setup!\n", 5582 - dev_name(dev)); 5583 - return -ENODEV; 5584 - } 5585 - 5586 - irq = res->start; 5579 + irq = platform_get_irq(pdev, 0); 5580 + if (irq < 0) 5581 + return irq; 5587 5582 5588 5583 hcd = usb_create_hcd(&fotg210_fotg210_hc_driver, dev, 5589 5584 dev_name(dev));
+1 -1
drivers/usb/host/ohci-omap.c
··· 306 306 307 307 irq = platform_get_irq(pdev, 0); 308 308 if (irq < 0) { 309 - retval = -ENXIO; 309 + retval = irq; 310 310 goto err3; 311 311 } 312 312 retval = usb_add_hcd(hcd, irq, 0);
+8 -2
drivers/usb/host/ohci-s3c2410.c
··· 356 356 { 357 357 struct usb_hcd *hcd = NULL; 358 358 struct s3c2410_hcd_info *info = dev_get_platdata(&dev->dev); 359 - int retval; 359 + int retval, irq; 360 360 361 361 s3c2410_usb_set_power(info, 1, 1); 362 362 s3c2410_usb_set_power(info, 2, 1); ··· 388 388 goto err_put; 389 389 } 390 390 391 + irq = platform_get_irq(dev, 0); 392 + if (irq < 0) { 393 + retval = irq; 394 + goto err_put; 395 + } 396 + 391 397 s3c2410_start_hc(dev, hcd); 392 398 393 - retval = usb_add_hcd(hcd, dev->resource[1].start, 0); 399 + retval = usb_add_hcd(hcd, irq, 0); 394 400 if (retval != 0) 395 401 goto err_ioremap; 396 402
+1 -1
drivers/usb/host/ohci-spear.c
··· 76 76 goto err_put_hcd; 77 77 } 78 78 79 - hcd->rsrc_start = pdev->resource[0].start; 79 + hcd->rsrc_start = res->start; 80 80 hcd->rsrc_len = resource_size(res); 81 81 82 82 sohci_p = to_spear_ohci(hcd);
-5
drivers/usb/host/ohci-tmio.c
··· 21 21 * usb-ohci-tc6393.c(C) Copyright 2004 Lineo Solutions, Inc. 22 22 */ 23 23 24 - /*#include <linux/fs.h> 25 - #include <linux/mount.h> 26 - #include <linux/pagemap.h> 27 - #include <linux/namei.h> 28 - #include <linux/sched.h>*/ 29 24 #include <linux/platform_device.h> 30 25 #include <linux/mfd/core.h> 31 26 #include <linux/mfd/tmio.h>
-1
drivers/usb/host/u132-hcd.c
··· 3211 3211 platform_driver_unregister(&u132_platform_driver); 3212 3212 printk(KERN_INFO "u132-hcd driver deregistered\n"); 3213 3213 wait_event(u132_hcd_wait, u132_instances == 0); 3214 - flush_workqueue(workqueue); 3215 3214 destroy_workqueue(workqueue); 3216 3215 } 3217 3216
+7 -2
drivers/usb/host/uhci-platform.c
··· 113 113 num_ports); 114 114 } 115 115 if (of_device_is_compatible(np, "aspeed,ast2400-uhci") || 116 - of_device_is_compatible(np, "aspeed,ast2500-uhci")) { 116 + of_device_is_compatible(np, "aspeed,ast2500-uhci") || 117 + of_device_is_compatible(np, "aspeed,ast2600-uhci")) { 117 118 uhci->is_aspeed = 1; 118 119 dev_info(&pdev->dev, 119 120 "Enabled Aspeed implementation workarounds\n"); ··· 133 132 goto err_rmr; 134 133 } 135 134 136 - ret = usb_add_hcd(hcd, pdev->resource[1].start, IRQF_SHARED); 135 + ret = platform_get_irq(pdev, 0); 136 + if (ret < 0) 137 + goto err_clk; 138 + 139 + ret = usb_add_hcd(hcd, ret, IRQF_SHARED); 137 140 if (ret) 138 141 goto err_clk; 139 142
+1609
drivers/usb/host/xen-hcd.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * xen-hcd.c 4 + * 5 + * Xen USB Virtual Host Controller driver 6 + * 7 + * Copyright (C) 2009, FUJITSU LABORATORIES LTD. 8 + * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com> 9 + */ 10 + 11 + #include <linux/module.h> 12 + #include <linux/usb.h> 13 + #include <linux/list.h> 14 + #include <linux/usb/hcd.h> 15 + #include <linux/io.h> 16 + 17 + #include <xen/xen.h> 18 + #include <xen/xenbus.h> 19 + #include <xen/grant_table.h> 20 + #include <xen/events.h> 21 + #include <xen/page.h> 22 + 23 + #include <xen/interface/io/usbif.h> 24 + 25 + /* Private per-URB data */ 26 + struct urb_priv { 27 + struct list_head list; 28 + struct urb *urb; 29 + int req_id; /* RING_REQUEST id for submitting */ 30 + int unlink_req_id; /* RING_REQUEST id for unlinking */ 31 + int status; 32 + bool unlinked; /* dequeued marker */ 33 + }; 34 + 35 + /* virtual roothub port status */ 36 + struct rhport_status { 37 + __u32 status; 38 + bool resuming; /* in resuming */ 39 + bool c_connection; /* connection changed */ 40 + unsigned long timeout; 41 + }; 42 + 43 + /* status of attached device */ 44 + struct vdevice_status { 45 + int devnum; 46 + enum usb_device_state status; 47 + enum usb_device_speed speed; 48 + }; 49 + 50 + /* RING request shadow */ 51 + struct usb_shadow { 52 + struct xenusb_urb_request req; 53 + struct urb *urb; 54 + }; 55 + 56 + struct xenhcd_info { 57 + /* Virtual Host Controller has 4 urb queues */ 58 + struct list_head pending_submit_list; 59 + struct list_head pending_unlink_list; 60 + struct list_head in_progress_list; 61 + struct list_head giveback_waiting_list; 62 + 63 + spinlock_t lock; 64 + 65 + /* timer that kick pending and giveback waiting urbs */ 66 + struct timer_list watchdog; 67 + unsigned long actions; 68 + 69 + /* virtual root hub */ 70 + int rh_numports; 71 + struct rhport_status ports[XENUSB_MAX_PORTNR]; 72 + struct vdevice_status devices[XENUSB_MAX_PORTNR]; 73 + 74 + /* Xen related staff */ 
75 + struct xenbus_device *xbdev; 76 + int urb_ring_ref; 77 + int conn_ring_ref; 78 + struct xenusb_urb_front_ring urb_ring; 79 + struct xenusb_conn_front_ring conn_ring; 80 + 81 + unsigned int evtchn; 82 + unsigned int irq; 83 + struct usb_shadow shadow[XENUSB_URB_RING_SIZE]; 84 + unsigned int shadow_free; 85 + 86 + bool error; 87 + }; 88 + 89 + #define GRANT_INVALID_REF 0 90 + 91 + #define XENHCD_RING_JIFFIES (HZ/200) 92 + #define XENHCD_SCAN_JIFFIES 1 93 + 94 + enum xenhcd_timer_action { 95 + TIMER_RING_WATCHDOG, 96 + TIMER_SCAN_PENDING_URBS, 97 + }; 98 + 99 + static struct kmem_cache *xenhcd_urbp_cachep; 100 + 101 + static inline struct xenhcd_info *xenhcd_hcd_to_info(struct usb_hcd *hcd) 102 + { 103 + return (struct xenhcd_info *)hcd->hcd_priv; 104 + } 105 + 106 + static inline struct usb_hcd *xenhcd_info_to_hcd(struct xenhcd_info *info) 107 + { 108 + return container_of((void *)info, struct usb_hcd, hcd_priv); 109 + } 110 + 111 + static void xenhcd_set_error(struct xenhcd_info *info, const char *msg) 112 + { 113 + info->error = true; 114 + 115 + pr_alert("xen-hcd: protocol error: %s!\n", msg); 116 + } 117 + 118 + static inline void xenhcd_timer_action_done(struct xenhcd_info *info, 119 + enum xenhcd_timer_action action) 120 + { 121 + clear_bit(action, &info->actions); 122 + } 123 + 124 + static void xenhcd_timer_action(struct xenhcd_info *info, 125 + enum xenhcd_timer_action action) 126 + { 127 + if (timer_pending(&info->watchdog) && 128 + test_bit(TIMER_SCAN_PENDING_URBS, &info->actions)) 129 + return; 130 + 131 + if (!test_and_set_bit(action, &info->actions)) { 132 + unsigned long t; 133 + 134 + switch (action) { 135 + case TIMER_RING_WATCHDOG: 136 + t = XENHCD_RING_JIFFIES; 137 + break; 138 + default: 139 + t = XENHCD_SCAN_JIFFIES; 140 + break; 141 + } 142 + mod_timer(&info->watchdog, t + jiffies); 143 + } 144 + } 145 + 146 + /* 147 + * set virtual port connection status 148 + */ 149 + static void xenhcd_set_connect_state(struct xenhcd_info *info, int 
portnum) 150 + { 151 + int port; 152 + 153 + port = portnum - 1; 154 + if (info->ports[port].status & USB_PORT_STAT_POWER) { 155 + switch (info->devices[port].speed) { 156 + case XENUSB_SPEED_NONE: 157 + info->ports[port].status &= 158 + ~(USB_PORT_STAT_CONNECTION | 159 + USB_PORT_STAT_ENABLE | 160 + USB_PORT_STAT_LOW_SPEED | 161 + USB_PORT_STAT_HIGH_SPEED | 162 + USB_PORT_STAT_SUSPEND); 163 + break; 164 + case XENUSB_SPEED_LOW: 165 + info->ports[port].status |= USB_PORT_STAT_CONNECTION; 166 + info->ports[port].status |= USB_PORT_STAT_LOW_SPEED; 167 + break; 168 + case XENUSB_SPEED_FULL: 169 + info->ports[port].status |= USB_PORT_STAT_CONNECTION; 170 + break; 171 + case XENUSB_SPEED_HIGH: 172 + info->ports[port].status |= USB_PORT_STAT_CONNECTION; 173 + info->ports[port].status |= USB_PORT_STAT_HIGH_SPEED; 174 + break; 175 + default: /* error */ 176 + return; 177 + } 178 + info->ports[port].status |= (USB_PORT_STAT_C_CONNECTION << 16); 179 + } 180 + } 181 + 182 + /* 183 + * set virtual device connection status 184 + */ 185 + static int xenhcd_rhport_connect(struct xenhcd_info *info, __u8 portnum, 186 + __u8 speed) 187 + { 188 + int port; 189 + 190 + if (portnum < 1 || portnum > info->rh_numports) 191 + return -EINVAL; /* invalid port number */ 192 + 193 + port = portnum - 1; 194 + if (info->devices[port].speed != speed) { 195 + switch (speed) { 196 + case XENUSB_SPEED_NONE: /* disconnect */ 197 + info->devices[port].status = USB_STATE_NOTATTACHED; 198 + break; 199 + case XENUSB_SPEED_LOW: 200 + case XENUSB_SPEED_FULL: 201 + case XENUSB_SPEED_HIGH: 202 + info->devices[port].status = USB_STATE_ATTACHED; 203 + break; 204 + default: /* error */ 205 + return -EINVAL; 206 + } 207 + info->devices[port].speed = speed; 208 + info->ports[port].c_connection = true; 209 + 210 + xenhcd_set_connect_state(info, portnum); 211 + } 212 + 213 + return 0; 214 + } 215 + 216 + /* 217 + * SetPortFeature(PORT_SUSPENDED) 218 + */ 219 + static void xenhcd_rhport_suspend(struct xenhcd_info 
*info, int portnum) 220 + { 221 + int port; 222 + 223 + port = portnum - 1; 224 + info->ports[port].status |= USB_PORT_STAT_SUSPEND; 225 + info->devices[port].status = USB_STATE_SUSPENDED; 226 + } 227 + 228 + /* 229 + * ClearPortFeature(PORT_SUSPENDED) 230 + */ 231 + static void xenhcd_rhport_resume(struct xenhcd_info *info, int portnum) 232 + { 233 + int port; 234 + 235 + port = portnum - 1; 236 + if (info->ports[port].status & USB_PORT_STAT_SUSPEND) { 237 + info->ports[port].resuming = true; 238 + info->ports[port].timeout = jiffies + msecs_to_jiffies(20); 239 + } 240 + } 241 + 242 + /* 243 + * SetPortFeature(PORT_POWER) 244 + */ 245 + static void xenhcd_rhport_power_on(struct xenhcd_info *info, int portnum) 246 + { 247 + int port; 248 + 249 + port = portnum - 1; 250 + if ((info->ports[port].status & USB_PORT_STAT_POWER) == 0) { 251 + info->ports[port].status |= USB_PORT_STAT_POWER; 252 + if (info->devices[port].status != USB_STATE_NOTATTACHED) 253 + info->devices[port].status = USB_STATE_POWERED; 254 + if (info->ports[port].c_connection) 255 + xenhcd_set_connect_state(info, portnum); 256 + } 257 + } 258 + 259 + /* 260 + * ClearPortFeature(PORT_POWER) 261 + * SetConfiguration(non-zero) 262 + * Power_Source_Off 263 + * Over-current 264 + */ 265 + static void xenhcd_rhport_power_off(struct xenhcd_info *info, int portnum) 266 + { 267 + int port; 268 + 269 + port = portnum - 1; 270 + if (info->ports[port].status & USB_PORT_STAT_POWER) { 271 + info->ports[port].status = 0; 272 + if (info->devices[port].status != USB_STATE_NOTATTACHED) 273 + info->devices[port].status = USB_STATE_ATTACHED; 274 + } 275 + } 276 + 277 + /* 278 + * ClearPortFeature(PORT_ENABLE) 279 + */ 280 + static void xenhcd_rhport_disable(struct xenhcd_info *info, int portnum) 281 + { 282 + int port; 283 + 284 + port = portnum - 1; 285 + info->ports[port].status &= ~USB_PORT_STAT_ENABLE; 286 + info->ports[port].status &= ~USB_PORT_STAT_SUSPEND; 287 + info->ports[port].resuming = false; 288 + if 
(info->devices[port].status != USB_STATE_NOTATTACHED) 289 + info->devices[port].status = USB_STATE_POWERED; 290 + } 291 + 292 + /* 293 + * SetPortFeature(PORT_RESET) 294 + */ 295 + static void xenhcd_rhport_reset(struct xenhcd_info *info, int portnum) 296 + { 297 + int port; 298 + 299 + port = portnum - 1; 300 + info->ports[port].status &= ~(USB_PORT_STAT_ENABLE | 301 + USB_PORT_STAT_LOW_SPEED | 302 + USB_PORT_STAT_HIGH_SPEED); 303 + info->ports[port].status |= USB_PORT_STAT_RESET; 304 + 305 + if (info->devices[port].status != USB_STATE_NOTATTACHED) 306 + info->devices[port].status = USB_STATE_ATTACHED; 307 + 308 + /* 10msec reset signaling */ 309 + info->ports[port].timeout = jiffies + msecs_to_jiffies(10); 310 + } 311 + 312 + #ifdef CONFIG_PM 313 + static int xenhcd_bus_suspend(struct usb_hcd *hcd) 314 + { 315 + struct xenhcd_info *info = xenhcd_hcd_to_info(hcd); 316 + int ret = 0; 317 + int i, ports; 318 + 319 + ports = info->rh_numports; 320 + 321 + spin_lock_irq(&info->lock); 322 + if (!test_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags)) { 323 + ret = -ESHUTDOWN; 324 + } else { 325 + /* suspend any active ports*/ 326 + for (i = 1; i <= ports; i++) 327 + xenhcd_rhport_suspend(info, i); 328 + } 329 + spin_unlock_irq(&info->lock); 330 + 331 + del_timer_sync(&info->watchdog); 332 + 333 + return ret; 334 + } 335 + 336 + static int xenhcd_bus_resume(struct usb_hcd *hcd) 337 + { 338 + struct xenhcd_info *info = xenhcd_hcd_to_info(hcd); 339 + int ret = 0; 340 + int i, ports; 341 + 342 + ports = info->rh_numports; 343 + 344 + spin_lock_irq(&info->lock); 345 + if (!test_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags)) { 346 + ret = -ESHUTDOWN; 347 + } else { 348 + /* resume any suspended ports*/ 349 + for (i = 1; i <= ports; i++) 350 + xenhcd_rhport_resume(info, i); 351 + } 352 + spin_unlock_irq(&info->lock); 353 + 354 + return ret; 355 + } 356 + #endif 357 + 358 + static void xenhcd_hub_descriptor(struct xenhcd_info *info, 359 + struct usb_hub_descriptor *desc) 360 + { 361 + __u16 
temp; 362 + int ports = info->rh_numports; 363 + 364 + desc->bDescriptorType = 0x29; 365 + desc->bPwrOn2PwrGood = 10; /* EHCI says 20ms max */ 366 + desc->bHubContrCurrent = 0; 367 + desc->bNbrPorts = ports; 368 + 369 + /* size of DeviceRemovable and PortPwrCtrlMask fields */ 370 + temp = 1 + (ports / 8); 371 + desc->bDescLength = 7 + 2 * temp; 372 + 373 + /* bitmaps for DeviceRemovable and PortPwrCtrlMask */ 374 + memset(&desc->u.hs.DeviceRemovable[0], 0, temp); 375 + memset(&desc->u.hs.DeviceRemovable[temp], 0xff, temp); 376 + 377 + /* per-port over current reporting and no power switching */ 378 + temp = 0x000a; 379 + desc->wHubCharacteristics = cpu_to_le16(temp); 380 + } 381 + 382 + /* port status change mask for hub_status_data */ 383 + #define PORT_C_MASK ((USB_PORT_STAT_C_CONNECTION | \ 384 + USB_PORT_STAT_C_ENABLE | \ 385 + USB_PORT_STAT_C_SUSPEND | \ 386 + USB_PORT_STAT_C_OVERCURRENT | \ 387 + USB_PORT_STAT_C_RESET) << 16) 388 + 389 + /* 390 + * See USB 2.0 Spec, 11.12.4 Hub and Port Status Change Bitmap. 391 + * If port status changed, writes the bitmap to buf and return 392 + * that length(number of bytes). 393 + * If Nothing changed, return 0. 
394 + */ 395 + static int xenhcd_hub_status_data(struct usb_hcd *hcd, char *buf) 396 + { 397 + struct xenhcd_info *info = xenhcd_hcd_to_info(hcd); 398 + int ports; 399 + int i; 400 + unsigned long flags; 401 + int ret; 402 + int changed = 0; 403 + 404 + /* initialize the status to no-changes */ 405 + ports = info->rh_numports; 406 + ret = 1 + (ports / 8); 407 + memset(buf, 0, ret); 408 + 409 + spin_lock_irqsave(&info->lock, flags); 410 + 411 + for (i = 0; i < ports; i++) { 412 + /* check status for each port */ 413 + if (info->ports[i].status & PORT_C_MASK) { 414 + buf[(i + 1) / 8] |= 1 << (i + 1) % 8; 415 + changed = 1; 416 + } 417 + } 418 + 419 + if ((hcd->state == HC_STATE_SUSPENDED) && (changed == 1)) 420 + usb_hcd_resume_root_hub(hcd); 421 + 422 + spin_unlock_irqrestore(&info->lock, flags); 423 + 424 + return changed ? ret : 0; 425 + } 426 + 427 + static int xenhcd_hub_control(struct usb_hcd *hcd, __u16 typeReq, __u16 wValue, 428 + __u16 wIndex, char *buf, __u16 wLength) 429 + { 430 + struct xenhcd_info *info = xenhcd_hcd_to_info(hcd); 431 + int ports = info->rh_numports; 432 + unsigned long flags; 433 + int ret = 0; 434 + int i; 435 + int changed = 0; 436 + 437 + spin_lock_irqsave(&info->lock, flags); 438 + switch (typeReq) { 439 + case ClearHubFeature: 440 + /* ignore this request */ 441 + break; 442 + case ClearPortFeature: 443 + if (!wIndex || wIndex > ports) 444 + goto error; 445 + 446 + switch (wValue) { 447 + case USB_PORT_FEAT_SUSPEND: 448 + xenhcd_rhport_resume(info, wIndex); 449 + break; 450 + case USB_PORT_FEAT_POWER: 451 + xenhcd_rhport_power_off(info, wIndex); 452 + break; 453 + case USB_PORT_FEAT_ENABLE: 454 + xenhcd_rhport_disable(info, wIndex); 455 + break; 456 + case USB_PORT_FEAT_C_CONNECTION: 457 + info->ports[wIndex - 1].c_connection = false; 458 + fallthrough; 459 + default: 460 + info->ports[wIndex - 1].status &= ~(1 << wValue); 461 + break; 462 + } 463 + break; 464 + case GetHubDescriptor: 465 + xenhcd_hub_descriptor(info, (struct 
usb_hub_descriptor *)buf); 466 + break; 467 + case GetHubStatus: 468 + /* always local power supply good and no over-current exists. */ 469 + *(__le32 *)buf = cpu_to_le32(0); 470 + break; 471 + case GetPortStatus: 472 + if (!wIndex || wIndex > ports) 473 + goto error; 474 + 475 + wIndex--; 476 + 477 + /* resume completion */ 478 + if (info->ports[wIndex].resuming && 479 + time_after_eq(jiffies, info->ports[wIndex].timeout)) { 480 + info->ports[wIndex].status |= 481 + USB_PORT_STAT_C_SUSPEND << 16; 482 + info->ports[wIndex].status &= ~USB_PORT_STAT_SUSPEND; 483 + } 484 + 485 + /* reset completion */ 486 + if ((info->ports[wIndex].status & USB_PORT_STAT_RESET) != 0 && 487 + time_after_eq(jiffies, info->ports[wIndex].timeout)) { 488 + info->ports[wIndex].status |= 489 + USB_PORT_STAT_C_RESET << 16; 490 + info->ports[wIndex].status &= ~USB_PORT_STAT_RESET; 491 + 492 + if (info->devices[wIndex].status != 493 + USB_STATE_NOTATTACHED) { 494 + info->ports[wIndex].status |= 495 + USB_PORT_STAT_ENABLE; 496 + info->devices[wIndex].status = 497 + USB_STATE_DEFAULT; 498 + } 499 + 500 + switch (info->devices[wIndex].speed) { 501 + case XENUSB_SPEED_LOW: 502 + info->ports[wIndex].status |= 503 + USB_PORT_STAT_LOW_SPEED; 504 + break; 505 + case XENUSB_SPEED_HIGH: 506 + info->ports[wIndex].status |= 507 + USB_PORT_STAT_HIGH_SPEED; 508 + break; 509 + default: 510 + break; 511 + } 512 + } 513 + 514 + *(__le32 *)buf = cpu_to_le32(info->ports[wIndex].status); 515 + break; 516 + case SetPortFeature: 517 + if (!wIndex || wIndex > ports) 518 + goto error; 519 + 520 + switch (wValue) { 521 + case USB_PORT_FEAT_POWER: 522 + xenhcd_rhport_power_on(info, wIndex); 523 + break; 524 + case USB_PORT_FEAT_RESET: 525 + xenhcd_rhport_reset(info, wIndex); 526 + break; 527 + case USB_PORT_FEAT_SUSPEND: 528 + xenhcd_rhport_suspend(info, wIndex); 529 + break; 530 + default: 531 + if (info->ports[wIndex-1].status & USB_PORT_STAT_POWER) 532 + info->ports[wIndex-1].status |= (1 << wValue); 533 + } 534 + 
break; 535 + 536 + case SetHubFeature: 537 + /* not supported */ 538 + default: 539 + error: 540 + ret = -EPIPE; 541 + } 542 + spin_unlock_irqrestore(&info->lock, flags); 543 + 544 + /* check status for each port */ 545 + for (i = 0; i < ports; i++) { 546 + if (info->ports[i].status & PORT_C_MASK) 547 + changed = 1; 548 + } 549 + if (changed) 550 + usb_hcd_poll_rh_status(hcd); 551 + 552 + return ret; 553 + } 554 + 555 + static void xenhcd_free_urb_priv(struct urb_priv *urbp) 556 + { 557 + urbp->urb->hcpriv = NULL; 558 + kmem_cache_free(xenhcd_urbp_cachep, urbp); 559 + } 560 + 561 + static inline unsigned int xenhcd_get_id_from_freelist(struct xenhcd_info *info) 562 + { 563 + unsigned int free; 564 + 565 + free = info->shadow_free; 566 + info->shadow_free = info->shadow[free].req.id; 567 + info->shadow[free].req.id = 0x0fff; /* debug */ 568 + return free; 569 + } 570 + 571 + static inline void xenhcd_add_id_to_freelist(struct xenhcd_info *info, 572 + unsigned int id) 573 + { 574 + info->shadow[id].req.id = info->shadow_free; 575 + info->shadow[id].urb = NULL; 576 + info->shadow_free = id; 577 + } 578 + 579 + static inline int xenhcd_count_pages(void *addr, int length) 580 + { 581 + unsigned long vaddr = (unsigned long)addr; 582 + 583 + return PFN_UP(vaddr + length) - PFN_DOWN(vaddr); 584 + } 585 + 586 + static void xenhcd_gnttab_map(struct xenhcd_info *info, void *addr, int length, 587 + grant_ref_t *gref_head, 588 + struct xenusb_request_segment *seg, 589 + int nr_pages, int flags) 590 + { 591 + grant_ref_t ref; 592 + unsigned long buffer_mfn; 593 + unsigned int offset; 594 + unsigned int len = length; 595 + unsigned int bytes; 596 + int i; 597 + 598 + for (i = 0; i < nr_pages; i++) { 599 + buffer_mfn = PFN_DOWN(arbitrary_virt_to_machine(addr).maddr); 600 + offset = offset_in_page(addr); 601 + 602 + bytes = PAGE_SIZE - offset; 603 + if (bytes > len) 604 + bytes = len; 605 + 606 + ref = gnttab_claim_grant_reference(gref_head); 607 + 
gnttab_grant_foreign_access_ref(ref, info->xbdev->otherend_id, 608 + buffer_mfn, flags); 609 + seg[i].gref = ref; 610 + seg[i].offset = (__u16)offset; 611 + seg[i].length = (__u16)bytes; 612 + 613 + addr += bytes; 614 + len -= bytes; 615 + } 616 + } 617 + 618 + static __u32 xenhcd_pipe_urb_to_xenusb(__u32 urb_pipe, __u8 port) 619 + { 620 + static __u32 pipe; 621 + 622 + pipe = usb_pipedevice(urb_pipe) << XENUSB_PIPE_DEV_SHIFT; 623 + pipe |= usb_pipeendpoint(urb_pipe) << XENUSB_PIPE_EP_SHIFT; 624 + if (usb_pipein(urb_pipe)) 625 + pipe |= XENUSB_PIPE_DIR; 626 + switch (usb_pipetype(urb_pipe)) { 627 + case PIPE_ISOCHRONOUS: 628 + pipe |= XENUSB_PIPE_TYPE_ISOC << XENUSB_PIPE_TYPE_SHIFT; 629 + break; 630 + case PIPE_INTERRUPT: 631 + pipe |= XENUSB_PIPE_TYPE_INT << XENUSB_PIPE_TYPE_SHIFT; 632 + break; 633 + case PIPE_CONTROL: 634 + pipe |= XENUSB_PIPE_TYPE_CTRL << XENUSB_PIPE_TYPE_SHIFT; 635 + break; 636 + case PIPE_BULK: 637 + pipe |= XENUSB_PIPE_TYPE_BULK << XENUSB_PIPE_TYPE_SHIFT; 638 + break; 639 + } 640 + pipe = xenusb_setportnum_pipe(pipe, port); 641 + 642 + return pipe; 643 + } 644 + 645 + static int xenhcd_map_urb_for_request(struct xenhcd_info *info, struct urb *urb, 646 + struct xenusb_urb_request *req) 647 + { 648 + grant_ref_t gref_head; 649 + int nr_buff_pages = 0; 650 + int nr_isodesc_pages = 0; 651 + int nr_grants = 0; 652 + 653 + if (urb->transfer_buffer_length) { 654 + nr_buff_pages = xenhcd_count_pages(urb->transfer_buffer, 655 + urb->transfer_buffer_length); 656 + 657 + if (usb_pipeisoc(urb->pipe)) 658 + nr_isodesc_pages = xenhcd_count_pages( 659 + &urb->iso_frame_desc[0], 660 + sizeof(struct usb_iso_packet_descriptor) * 661 + urb->number_of_packets); 662 + 663 + nr_grants = nr_buff_pages + nr_isodesc_pages; 664 + if (nr_grants > XENUSB_MAX_SEGMENTS_PER_REQUEST) { 665 + pr_err("xenhcd: error: %d grants\n", nr_grants); 666 + return -E2BIG; 667 + } 668 + 669 + if (gnttab_alloc_grant_references(nr_grants, &gref_head)) { 670 + pr_err("xenhcd: 
gnttab_alloc_grant_references() error\n"); 671 + return -ENOMEM; 672 + } 673 + 674 + xenhcd_gnttab_map(info, urb->transfer_buffer, 675 + urb->transfer_buffer_length, &gref_head, 676 + &req->seg[0], nr_buff_pages, 677 + usb_pipein(urb->pipe) ? 0 : GTF_readonly); 678 + } 679 + 680 + req->pipe = xenhcd_pipe_urb_to_xenusb(urb->pipe, urb->dev->portnum); 681 + req->transfer_flags = 0; 682 + if (urb->transfer_flags & URB_SHORT_NOT_OK) 683 + req->transfer_flags |= XENUSB_SHORT_NOT_OK; 684 + req->buffer_length = urb->transfer_buffer_length; 685 + req->nr_buffer_segs = nr_buff_pages; 686 + 687 + switch (usb_pipetype(urb->pipe)) { 688 + case PIPE_ISOCHRONOUS: 689 + req->u.isoc.interval = urb->interval; 690 + req->u.isoc.start_frame = urb->start_frame; 691 + req->u.isoc.number_of_packets = urb->number_of_packets; 692 + req->u.isoc.nr_frame_desc_segs = nr_isodesc_pages; 693 + 694 + xenhcd_gnttab_map(info, &urb->iso_frame_desc[0], 695 + sizeof(struct usb_iso_packet_descriptor) * 696 + urb->number_of_packets, 697 + &gref_head, &req->seg[nr_buff_pages], 698 + nr_isodesc_pages, 0); 699 + break; 700 + case PIPE_INTERRUPT: 701 + req->u.intr.interval = urb->interval; 702 + break; 703 + case PIPE_CONTROL: 704 + if (urb->setup_packet) 705 + memcpy(req->u.ctrl, urb->setup_packet, 8); 706 + break; 707 + case PIPE_BULK: 708 + break; 709 + default: 710 + break; 711 + } 712 + 713 + if (nr_grants) 714 + gnttab_free_grant_references(gref_head); 715 + 716 + return 0; 717 + } 718 + 719 + static void xenhcd_gnttab_done(struct usb_shadow *shadow) 720 + { 721 + int nr_segs = 0; 722 + int i; 723 + 724 + nr_segs = shadow->req.nr_buffer_segs; 725 + 726 + if (xenusb_pipeisoc(shadow->req.pipe)) 727 + nr_segs += shadow->req.u.isoc.nr_frame_desc_segs; 728 + 729 + for (i = 0; i < nr_segs; i++) 730 + gnttab_end_foreign_access(shadow->req.seg[i].gref, 0, 0UL); 731 + 732 + shadow->req.nr_buffer_segs = 0; 733 + shadow->req.u.isoc.nr_frame_desc_segs = 0; 734 + } 735 + 736 + static int 
xenhcd_translate_status(int status) 737 + { 738 + switch (status) { 739 + case XENUSB_STATUS_OK: 740 + return 0; 741 + case XENUSB_STATUS_NODEV: 742 + return -ENODEV; 743 + case XENUSB_STATUS_INVAL: 744 + return -EINVAL; 745 + case XENUSB_STATUS_STALL: 746 + return -EPIPE; 747 + case XENUSB_STATUS_IOERROR: 748 + return -EPROTO; 749 + case XENUSB_STATUS_BABBLE: 750 + return -EOVERFLOW; 751 + default: 752 + return -ESHUTDOWN; 753 + } 754 + } 755 + 756 + static void xenhcd_giveback_urb(struct xenhcd_info *info, struct urb *urb, 757 + int status) 758 + { 759 + struct urb_priv *urbp = (struct urb_priv *)urb->hcpriv; 760 + int priv_status = urbp->status; 761 + 762 + list_del_init(&urbp->list); 763 + xenhcd_free_urb_priv(urbp); 764 + 765 + if (urb->status == -EINPROGRESS) 766 + urb->status = xenhcd_translate_status(status); 767 + 768 + spin_unlock(&info->lock); 769 + usb_hcd_giveback_urb(xenhcd_info_to_hcd(info), urb, 770 + priv_status <= 0 ? priv_status : urb->status); 771 + spin_lock(&info->lock); 772 + } 773 + 774 + static int xenhcd_do_request(struct xenhcd_info *info, struct urb_priv *urbp) 775 + { 776 + struct xenusb_urb_request *req; 777 + struct urb *urb = urbp->urb; 778 + unsigned int id; 779 + int notify; 780 + int ret; 781 + 782 + id = xenhcd_get_id_from_freelist(info); 783 + req = &info->shadow[id].req; 784 + req->id = id; 785 + 786 + if (unlikely(urbp->unlinked)) { 787 + req->u.unlink.unlink_id = urbp->req_id; 788 + req->pipe = xenusb_setunlink_pipe(xenhcd_pipe_urb_to_xenusb( 789 + urb->pipe, urb->dev->portnum)); 790 + urbp->unlink_req_id = id; 791 + } else { 792 + ret = xenhcd_map_urb_for_request(info, urb, req); 793 + if (ret) { 794 + xenhcd_add_id_to_freelist(info, id); 795 + return ret; 796 + } 797 + urbp->req_id = id; 798 + } 799 + 800 + req = RING_GET_REQUEST(&info->urb_ring, info->urb_ring.req_prod_pvt); 801 + *req = info->shadow[id].req; 802 + 803 + info->urb_ring.req_prod_pvt++; 804 + info->shadow[id].urb = urb; 805 + 806 + 
RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->urb_ring, notify); 807 + if (notify) 808 + notify_remote_via_irq(info->irq); 809 + 810 + return 0; 811 + } 812 + 813 + static void xenhcd_kick_pending_urbs(struct xenhcd_info *info) 814 + { 815 + struct urb_priv *urbp; 816 + 817 + while (!list_empty(&info->pending_submit_list)) { 818 + if (RING_FULL(&info->urb_ring)) { 819 + xenhcd_timer_action(info, TIMER_RING_WATCHDOG); 820 + return; 821 + } 822 + 823 + urbp = list_entry(info->pending_submit_list.next, 824 + struct urb_priv, list); 825 + if (!xenhcd_do_request(info, urbp)) 826 + list_move_tail(&urbp->list, &info->in_progress_list); 827 + else 828 + xenhcd_giveback_urb(info, urbp->urb, -ESHUTDOWN); 829 + } 830 + xenhcd_timer_action_done(info, TIMER_SCAN_PENDING_URBS); 831 + } 832 + 833 + /* 834 + * caller must lock info->lock 835 + */ 836 + static void xenhcd_cancel_all_enqueued_urbs(struct xenhcd_info *info) 837 + { 838 + struct urb_priv *urbp, *tmp; 839 + int req_id; 840 + 841 + list_for_each_entry_safe(urbp, tmp, &info->in_progress_list, list) { 842 + req_id = urbp->req_id; 843 + if (!urbp->unlinked) { 844 + xenhcd_gnttab_done(&info->shadow[req_id]); 845 + if (urbp->urb->status == -EINPROGRESS) 846 + /* not dequeued */ 847 + xenhcd_giveback_urb(info, urbp->urb, 848 + -ESHUTDOWN); 849 + else /* dequeued */ 850 + xenhcd_giveback_urb(info, urbp->urb, 851 + urbp->urb->status); 852 + } 853 + info->shadow[req_id].urb = NULL; 854 + } 855 + 856 + list_for_each_entry_safe(urbp, tmp, &info->pending_submit_list, list) 857 + xenhcd_giveback_urb(info, urbp->urb, -ESHUTDOWN); 858 + } 859 + 860 + /* 861 + * caller must lock info->lock 862 + */ 863 + static void xenhcd_giveback_unlinked_urbs(struct xenhcd_info *info) 864 + { 865 + struct urb_priv *urbp, *tmp; 866 + 867 + list_for_each_entry_safe(urbp, tmp, &info->giveback_waiting_list, list) 868 + xenhcd_giveback_urb(info, urbp->urb, urbp->urb->status); 869 + } 870 + 871 + static int xenhcd_submit_urb(struct xenhcd_info *info, struct 
urb_priv *urbp) 872 + { 873 + int ret; 874 + 875 + if (RING_FULL(&info->urb_ring)) { 876 + list_add_tail(&urbp->list, &info->pending_submit_list); 877 + xenhcd_timer_action(info, TIMER_RING_WATCHDOG); 878 + return 0; 879 + } 880 + 881 + if (!list_empty(&info->pending_submit_list)) { 882 + list_add_tail(&urbp->list, &info->pending_submit_list); 883 + xenhcd_timer_action(info, TIMER_SCAN_PENDING_URBS); 884 + return 0; 885 + } 886 + 887 + ret = xenhcd_do_request(info, urbp); 888 + if (ret == 0) 889 + list_add_tail(&urbp->list, &info->in_progress_list); 890 + 891 + return ret; 892 + } 893 + 894 + static int xenhcd_unlink_urb(struct xenhcd_info *info, struct urb_priv *urbp) 895 + { 896 + int ret; 897 + 898 + /* already unlinked? */ 899 + if (urbp->unlinked) 900 + return -EBUSY; 901 + 902 + urbp->unlinked = true; 903 + 904 + /* the urb is still in pending_submit queue */ 905 + if (urbp->req_id == ~0) { 906 + list_move_tail(&urbp->list, &info->giveback_waiting_list); 907 + xenhcd_timer_action(info, TIMER_SCAN_PENDING_URBS); 908 + return 0; 909 + } 910 + 911 + /* send unlink request to backend */ 912 + if (RING_FULL(&info->urb_ring)) { 913 + list_move_tail(&urbp->list, &info->pending_unlink_list); 914 + xenhcd_timer_action(info, TIMER_RING_WATCHDOG); 915 + return 0; 916 + } 917 + 918 + if (!list_empty(&info->pending_unlink_list)) { 919 + list_move_tail(&urbp->list, &info->pending_unlink_list); 920 + xenhcd_timer_action(info, TIMER_SCAN_PENDING_URBS); 921 + return 0; 922 + } 923 + 924 + ret = xenhcd_do_request(info, urbp); 925 + if (ret == 0) 926 + list_move_tail(&urbp->list, &info->in_progress_list); 927 + 928 + return ret; 929 + } 930 + 931 + static int xenhcd_urb_request_done(struct xenhcd_info *info) 932 + { 933 + struct xenusb_urb_response res; 934 + struct urb *urb; 935 + RING_IDX i, rp; 936 + __u16 id; 937 + int more_to_do = 0; 938 + unsigned long flags; 939 + 940 + spin_lock_irqsave(&info->lock, flags); 941 + 942 + rp = info->urb_ring.sring->rsp_prod; 943 + if 
(RING_RESPONSE_PROD_OVERFLOW(&info->urb_ring, rp)) { 944 + xenhcd_set_error(info, "Illegal index on urb-ring"); 945 + spin_unlock_irqrestore(&info->lock, flags); 946 + return 0; 947 + } 948 + rmb(); /* ensure we see queued responses up to "rp" */ 949 + 950 + for (i = info->urb_ring.rsp_cons; i != rp; i++) { 951 + RING_COPY_RESPONSE(&info->urb_ring, i, &res); 952 + id = res.id; 953 + if (id >= XENUSB_URB_RING_SIZE) { 954 + xenhcd_set_error(info, "Illegal data on urb-ring"); 955 + continue; 956 + } 957 + 958 + if (likely(xenusb_pipesubmit(info->shadow[id].req.pipe))) { 959 + xenhcd_gnttab_done(&info->shadow[id]); 960 + urb = info->shadow[id].urb; 961 + if (likely(urb)) { 962 + urb->actual_length = res.actual_length; 963 + urb->error_count = res.error_count; 964 + urb->start_frame = res.start_frame; 965 + xenhcd_giveback_urb(info, urb, res.status); 966 + } 967 + } 968 + 969 + xenhcd_add_id_to_freelist(info, id); 970 + } 971 + info->urb_ring.rsp_cons = i; 972 + 973 + if (i != info->urb_ring.req_prod_pvt) 974 + RING_FINAL_CHECK_FOR_RESPONSES(&info->urb_ring, more_to_do); 975 + else 976 + info->urb_ring.sring->rsp_event = i + 1; 977 + 978 + spin_unlock_irqrestore(&info->lock, flags); 979 + 980 + return more_to_do; 981 + } 982 + 983 + static int xenhcd_conn_notify(struct xenhcd_info *info) 984 + { 985 + struct xenusb_conn_response res; 986 + struct xenusb_conn_request *req; 987 + RING_IDX rc, rp; 988 + __u16 id; 989 + __u8 portnum, speed; 990 + int more_to_do = 0; 991 + int notify; 992 + int port_changed = 0; 993 + unsigned long flags; 994 + 995 + spin_lock_irqsave(&info->lock, flags); 996 + 997 + rc = info->conn_ring.rsp_cons; 998 + rp = info->conn_ring.sring->rsp_prod; 999 + if (RING_RESPONSE_PROD_OVERFLOW(&info->conn_ring, rp)) { 1000 + xenhcd_set_error(info, "Illegal index on conn-ring"); 1001 + spin_unlock_irqrestore(&info->lock, flags); 1002 + return 0; 1003 + } 1004 + rmb(); /* ensure we see queued responses up to "rp" */ 1005 + 1006 + while (rc != rp) { 1007 + 
RING_COPY_RESPONSE(&info->conn_ring, rc, &res); 1008 + id = res.id; 1009 + portnum = res.portnum; 1010 + speed = res.speed; 1011 + info->conn_ring.rsp_cons = ++rc; 1012 + 1013 + if (xenhcd_rhport_connect(info, portnum, speed)) { 1014 + xenhcd_set_error(info, "Illegal data on conn-ring"); 1015 + spin_unlock_irqrestore(&info->lock, flags); 1016 + return 0; 1017 + } 1018 + 1019 + if (info->ports[portnum - 1].c_connection) 1020 + port_changed = 1; 1021 + 1022 + barrier(); 1023 + 1024 + req = RING_GET_REQUEST(&info->conn_ring, 1025 + info->conn_ring.req_prod_pvt); 1026 + req->id = id; 1027 + info->conn_ring.req_prod_pvt++; 1028 + } 1029 + 1030 + if (rc != info->conn_ring.req_prod_pvt) 1031 + RING_FINAL_CHECK_FOR_RESPONSES(&info->conn_ring, more_to_do); 1032 + else 1033 + info->conn_ring.sring->rsp_event = rc + 1; 1034 + 1035 + RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->conn_ring, notify); 1036 + if (notify) 1037 + notify_remote_via_irq(info->irq); 1038 + 1039 + spin_unlock_irqrestore(&info->lock, flags); 1040 + 1041 + if (port_changed) 1042 + usb_hcd_poll_rh_status(xenhcd_info_to_hcd(info)); 1043 + 1044 + return more_to_do; 1045 + } 1046 + 1047 + static irqreturn_t xenhcd_int(int irq, void *dev_id) 1048 + { 1049 + struct xenhcd_info *info = (struct xenhcd_info *)dev_id; 1050 + 1051 + if (unlikely(info->error)) 1052 + return IRQ_HANDLED; 1053 + 1054 + while (xenhcd_urb_request_done(info) | xenhcd_conn_notify(info)) 1055 + /* Yield point for this unbounded loop. 
*/ 1056 + cond_resched(); 1057 + 1058 + return IRQ_HANDLED; 1059 + } 1060 + 1061 + static void xenhcd_destroy_rings(struct xenhcd_info *info) 1062 + { 1063 + if (info->irq) 1064 + unbind_from_irqhandler(info->irq, info); 1065 + info->irq = 0; 1066 + 1067 + if (info->urb_ring_ref != GRANT_INVALID_REF) { 1068 + gnttab_end_foreign_access(info->urb_ring_ref, 0, 1069 + (unsigned long)info->urb_ring.sring); 1070 + info->urb_ring_ref = GRANT_INVALID_REF; 1071 + } 1072 + info->urb_ring.sring = NULL; 1073 + 1074 + if (info->conn_ring_ref != GRANT_INVALID_REF) { 1075 + gnttab_end_foreign_access(info->conn_ring_ref, 0, 1076 + (unsigned long)info->conn_ring.sring); 1077 + info->conn_ring_ref = GRANT_INVALID_REF; 1078 + } 1079 + info->conn_ring.sring = NULL; 1080 + } 1081 + 1082 + static int xenhcd_setup_rings(struct xenbus_device *dev, 1083 + struct xenhcd_info *info) 1084 + { 1085 + struct xenusb_urb_sring *urb_sring; 1086 + struct xenusb_conn_sring *conn_sring; 1087 + grant_ref_t gref; 1088 + int err; 1089 + 1090 + info->urb_ring_ref = GRANT_INVALID_REF; 1091 + info->conn_ring_ref = GRANT_INVALID_REF; 1092 + 1093 + urb_sring = (struct xenusb_urb_sring *)get_zeroed_page( 1094 + GFP_NOIO | __GFP_HIGH); 1095 + if (!urb_sring) { 1096 + xenbus_dev_fatal(dev, -ENOMEM, "allocating urb ring"); 1097 + return -ENOMEM; 1098 + } 1099 + SHARED_RING_INIT(urb_sring); 1100 + FRONT_RING_INIT(&info->urb_ring, urb_sring, PAGE_SIZE); 1101 + 1102 + err = xenbus_grant_ring(dev, urb_sring, 1, &gref); 1103 + if (err < 0) { 1104 + free_page((unsigned long)urb_sring); 1105 + info->urb_ring.sring = NULL; 1106 + goto fail; 1107 + } 1108 + info->urb_ring_ref = gref; 1109 + 1110 + conn_sring = (struct xenusb_conn_sring *)get_zeroed_page( 1111 + GFP_NOIO | __GFP_HIGH); 1112 + if (!conn_sring) { 1113 + xenbus_dev_fatal(dev, -ENOMEM, "allocating conn ring"); 1114 + err = -ENOMEM; 1115 + goto fail; 1116 + } 1117 + SHARED_RING_INIT(conn_sring); 1118 + FRONT_RING_INIT(&info->conn_ring, conn_sring, PAGE_SIZE); 
1119 + 1120 + err = xenbus_grant_ring(dev, conn_sring, 1, &gref); 1121 + if (err < 0) { 1122 + free_page((unsigned long)conn_sring); 1123 + info->conn_ring.sring = NULL; 1124 + goto fail; 1125 + } 1126 + info->conn_ring_ref = gref; 1127 + 1128 + err = xenbus_alloc_evtchn(dev, &info->evtchn); 1129 + if (err) { 1130 + xenbus_dev_fatal(dev, err, "xenbus_alloc_evtchn"); 1131 + goto fail; 1132 + } 1133 + 1134 + err = bind_evtchn_to_irq(info->evtchn); 1135 + if (err <= 0) { 1136 + xenbus_dev_fatal(dev, err, "bind_evtchn_to_irq"); 1137 + goto fail; 1138 + } 1139 + 1140 + info->irq = err; 1141 + 1142 + err = request_threaded_irq(info->irq, NULL, xenhcd_int, 1143 + IRQF_ONESHOT, "xenhcd", info); 1144 + if (err) { 1145 + xenbus_dev_fatal(dev, err, "request_threaded_irq"); 1146 + goto free_irq; 1147 + } 1148 + 1149 + return 0; 1150 + 1151 + free_irq: 1152 + unbind_from_irqhandler(info->irq, info); 1153 + fail: 1154 + xenhcd_destroy_rings(info); 1155 + return err; 1156 + } 1157 + 1158 + static int xenhcd_talk_to_backend(struct xenbus_device *dev, 1159 + struct xenhcd_info *info) 1160 + { 1161 + const char *message; 1162 + struct xenbus_transaction xbt; 1163 + int err; 1164 + 1165 + err = xenhcd_setup_rings(dev, info); 1166 + if (err) 1167 + return err; 1168 + 1169 + again: 1170 + err = xenbus_transaction_start(&xbt); 1171 + if (err) { 1172 + xenbus_dev_fatal(dev, err, "starting transaction"); 1173 + goto destroy_ring; 1174 + } 1175 + 1176 + err = xenbus_printf(xbt, dev->nodename, "urb-ring-ref", "%u", 1177 + info->urb_ring_ref); 1178 + if (err) { 1179 + message = "writing urb-ring-ref"; 1180 + goto abort_transaction; 1181 + } 1182 + 1183 + err = xenbus_printf(xbt, dev->nodename, "conn-ring-ref", "%u", 1184 + info->conn_ring_ref); 1185 + if (err) { 1186 + message = "writing conn-ring-ref"; 1187 + goto abort_transaction; 1188 + } 1189 + 1190 + err = xenbus_printf(xbt, dev->nodename, "event-channel", "%u", 1191 + info->evtchn); 1192 + if (err) { 1193 + message = "writing 
event-channel"; 1194 + goto abort_transaction; 1195 + } 1196 + 1197 + err = xenbus_transaction_end(xbt, 0); 1198 + if (err) { 1199 + if (err == -EAGAIN) 1200 + goto again; 1201 + xenbus_dev_fatal(dev, err, "completing transaction"); 1202 + goto destroy_ring; 1203 + } 1204 + 1205 + return 0; 1206 + 1207 + abort_transaction: 1208 + xenbus_transaction_end(xbt, 1); 1209 + xenbus_dev_fatal(dev, err, "%s", message); 1210 + 1211 + destroy_ring: 1212 + xenhcd_destroy_rings(info); 1213 + 1214 + return err; 1215 + } 1216 + 1217 + static int xenhcd_connect(struct xenbus_device *dev) 1218 + { 1219 + struct xenhcd_info *info = dev_get_drvdata(&dev->dev); 1220 + struct xenusb_conn_request *req; 1221 + int idx, err; 1222 + int notify; 1223 + char name[TASK_COMM_LEN]; 1224 + struct usb_hcd *hcd; 1225 + 1226 + hcd = xenhcd_info_to_hcd(info); 1227 + snprintf(name, TASK_COMM_LEN, "xenhcd.%d", hcd->self.busnum); 1228 + 1229 + err = xenhcd_talk_to_backend(dev, info); 1230 + if (err) 1231 + return err; 1232 + 1233 + /* prepare ring for hotplug notification */ 1234 + for (idx = 0; idx < XENUSB_CONN_RING_SIZE; idx++) { 1235 + req = RING_GET_REQUEST(&info->conn_ring, idx); 1236 + req->id = idx; 1237 + } 1238 + info->conn_ring.req_prod_pvt = idx; 1239 + 1240 + RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->conn_ring, notify); 1241 + if (notify) 1242 + notify_remote_via_irq(info->irq); 1243 + 1244 + return 0; 1245 + } 1246 + 1247 + static void xenhcd_disconnect(struct xenbus_device *dev) 1248 + { 1249 + struct xenhcd_info *info = dev_get_drvdata(&dev->dev); 1250 + struct usb_hcd *hcd = xenhcd_info_to_hcd(info); 1251 + 1252 + usb_remove_hcd(hcd); 1253 + xenbus_frontend_closed(dev); 1254 + } 1255 + 1256 + static void xenhcd_watchdog(struct timer_list *timer) 1257 + { 1258 + struct xenhcd_info *info = from_timer(info, timer, watchdog); 1259 + unsigned long flags; 1260 + 1261 + spin_lock_irqsave(&info->lock, flags); 1262 + if (likely(HC_IS_RUNNING(xenhcd_info_to_hcd(info)->state))) { 1263 + 
xenhcd_timer_action_done(info, TIMER_RING_WATCHDOG); 1264 + xenhcd_giveback_unlinked_urbs(info); 1265 + xenhcd_kick_pending_urbs(info); 1266 + } 1267 + spin_unlock_irqrestore(&info->lock, flags); 1268 + } 1269 + 1270 + /* 1271 + * one-time HC init 1272 + */ 1273 + static int xenhcd_setup(struct usb_hcd *hcd) 1274 + { 1275 + struct xenhcd_info *info = xenhcd_hcd_to_info(hcd); 1276 + 1277 + spin_lock_init(&info->lock); 1278 + INIT_LIST_HEAD(&info->pending_submit_list); 1279 + INIT_LIST_HEAD(&info->pending_unlink_list); 1280 + INIT_LIST_HEAD(&info->in_progress_list); 1281 + INIT_LIST_HEAD(&info->giveback_waiting_list); 1282 + timer_setup(&info->watchdog, xenhcd_watchdog, 0); 1283 + 1284 + hcd->has_tt = (hcd->driver->flags & HCD_MASK) != HCD_USB11; 1285 + 1286 + return 0; 1287 + } 1288 + 1289 + /* 1290 + * start HC running 1291 + */ 1292 + static int xenhcd_run(struct usb_hcd *hcd) 1293 + { 1294 + hcd->uses_new_polling = 1; 1295 + clear_bit(HCD_FLAG_POLL_RH, &hcd->flags); 1296 + hcd->state = HC_STATE_RUNNING; 1297 + return 0; 1298 + } 1299 + 1300 + /* 1301 + * stop running HC 1302 + */ 1303 + static void xenhcd_stop(struct usb_hcd *hcd) 1304 + { 1305 + struct xenhcd_info *info = xenhcd_hcd_to_info(hcd); 1306 + 1307 + del_timer_sync(&info->watchdog); 1308 + spin_lock_irq(&info->lock); 1309 + /* cancel all urbs */ 1310 + hcd->state = HC_STATE_HALT; 1311 + xenhcd_cancel_all_enqueued_urbs(info); 1312 + xenhcd_giveback_unlinked_urbs(info); 1313 + spin_unlock_irq(&info->lock); 1314 + } 1315 + 1316 + /* 1317 + * called as .urb_enqueue() 1318 + * non-error returns are promise to giveback the urb later 1319 + */ 1320 + static int xenhcd_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, 1321 + gfp_t mem_flags) 1322 + { 1323 + struct xenhcd_info *info = xenhcd_hcd_to_info(hcd); 1324 + struct urb_priv *urbp; 1325 + unsigned long flags; 1326 + int ret; 1327 + 1328 + if (unlikely(info->error)) 1329 + return -ESHUTDOWN; 1330 + 1331 + urbp = kmem_cache_zalloc(xenhcd_urbp_cachep, 
mem_flags);
1332 + 	if (!urbp)
1333 + 		return -ENOMEM;
1334 + 
1335 + 	spin_lock_irqsave(&info->lock, flags);
1336 + 
1337 + 	urbp->urb = urb;
1338 + 	urb->hcpriv = urbp;
1339 + 	urbp->req_id = ~0;
1340 + 	urbp->unlink_req_id = ~0;
1341 + 	INIT_LIST_HEAD(&urbp->list);
1342 + 	urbp->status = 1;
1343 + 	urb->unlinked = false;
1344 + 
1345 + 	ret = xenhcd_submit_urb(info, urbp);
1346 + 
1347 + 	if (ret)
1348 + 		xenhcd_free_urb_priv(urbp);
1349 + 
1350 + 	spin_unlock_irqrestore(&info->lock, flags);
1351 + 
1352 + 	return ret;
1353 + }
1354 + 
1355 + /*
1356 +  * called as .urb_dequeue()
1357 +  */
1358 + static int xenhcd_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
1359 + {
1360 + 	struct xenhcd_info *info = xenhcd_hcd_to_info(hcd);
1361 + 	struct urb_priv *urbp;
1362 + 	unsigned long flags;
1363 + 	int ret = 0;
1364 + 
1365 + 	spin_lock_irqsave(&info->lock, flags);
1366 + 
1367 + 	urbp = urb->hcpriv;
1368 + 	if (urbp) {
1369 + 		urbp->status = status;
1370 + 		ret = xenhcd_unlink_urb(info, urbp);
1371 + 	}
1372 + 
1373 + 	spin_unlock_irqrestore(&info->lock, flags);
1374 + 
1375 + 	return ret;
1376 + }
1377 + 
1378 + /*
1379 +  * called from usb_get_current_frame_number(),
1380 +  * but almost all drivers do not use this function.
1381 +  */
1382 + static int xenhcd_get_frame(struct usb_hcd *hcd)
1383 + {
1384 + 	/* it means an error, but that is probably no problem :-) */
1385 + 	return 0;
1386 + }
1387 + 
1388 + static struct hc_driver xenhcd_usb20_hc_driver = {
1389 + 	.description = "xen-hcd",
1390 + 	.product_desc = "Xen USB2.0 Virtual Host Controller",
1391 + 	.hcd_priv_size = sizeof(struct xenhcd_info),
1392 + 	.flags = HCD_USB2,
1393 + 
1394 + 	/* basic HC lifecycle operations */
1395 + 	.reset = xenhcd_setup,
1396 + 	.start = xenhcd_run,
1397 + 	.stop = xenhcd_stop,
1398 + 
1399 + 	/* managing urb I/O */
1400 + 	.urb_enqueue = xenhcd_urb_enqueue,
1401 + 	.urb_dequeue = xenhcd_urb_dequeue,
1402 + 	.get_frame_number = xenhcd_get_frame,
1403 + 
1404 + 	/* root hub operations */
1405 + 	.hub_status_data = xenhcd_hub_status_data,
1406 + 	.hub_control = xenhcd_hub_control,
1407 + #ifdef CONFIG_PM
1408 + 	.bus_suspend = xenhcd_bus_suspend,
1409 + 	.bus_resume = xenhcd_bus_resume,
1410 + #endif
1411 + };
1412 + 
1413 + static struct hc_driver xenhcd_usb11_hc_driver = {
1414 + 	.description = "xen-hcd",
1415 + 	.product_desc = "Xen USB1.1 Virtual Host Controller",
1416 + 	.hcd_priv_size = sizeof(struct xenhcd_info),
1417 + 	.flags = HCD_USB11,
1418 + 
1419 + 	/* basic HC lifecycle operations */
1420 + 	.reset = xenhcd_setup,
1421 + 	.start = xenhcd_run,
1422 + 	.stop = xenhcd_stop,
1423 + 
1424 + 	/* managing urb I/O */
1425 + 	.urb_enqueue = xenhcd_urb_enqueue,
1426 + 	.urb_dequeue = xenhcd_urb_dequeue,
1427 + 	.get_frame_number = xenhcd_get_frame,
1428 + 
1429 + 	/* root hub operations */
1430 + 	.hub_status_data = xenhcd_hub_status_data,
1431 + 	.hub_control = xenhcd_hub_control,
1432 + #ifdef CONFIG_PM
1433 + 	.bus_suspend = xenhcd_bus_suspend,
1434 + 	.bus_resume = xenhcd_bus_resume,
1435 + #endif
1436 + };
1437 + 
1438 + static struct usb_hcd *xenhcd_create_hcd(struct xenbus_device *dev)
1439 + {
1440 + 	int i;
1441 + 	int err = 0;
1442 + 	int num_ports;
1443 + 	int usb_ver;
1444 + 	struct usb_hcd *hcd = NULL;
1445 + 	struct xenhcd_info *info;
1446 + 1447 + err = xenbus_scanf(XBT_NIL, dev->otherend, "num-ports", "%d", 1448 + &num_ports); 1449 + if (err != 1) { 1450 + xenbus_dev_fatal(dev, err, "reading num-ports"); 1451 + return ERR_PTR(-EINVAL); 1452 + } 1453 + if (num_ports < 1 || num_ports > XENUSB_MAX_PORTNR) { 1454 + xenbus_dev_fatal(dev, err, "invalid num-ports"); 1455 + return ERR_PTR(-EINVAL); 1456 + } 1457 + 1458 + err = xenbus_scanf(XBT_NIL, dev->otherend, "usb-ver", "%d", &usb_ver); 1459 + if (err != 1) { 1460 + xenbus_dev_fatal(dev, err, "reading usb-ver"); 1461 + return ERR_PTR(-EINVAL); 1462 + } 1463 + switch (usb_ver) { 1464 + case XENUSB_VER_USB11: 1465 + hcd = usb_create_hcd(&xenhcd_usb11_hc_driver, &dev->dev, 1466 + dev_name(&dev->dev)); 1467 + break; 1468 + case XENUSB_VER_USB20: 1469 + hcd = usb_create_hcd(&xenhcd_usb20_hc_driver, &dev->dev, 1470 + dev_name(&dev->dev)); 1471 + break; 1472 + default: 1473 + xenbus_dev_fatal(dev, err, "invalid usb-ver"); 1474 + return ERR_PTR(-EINVAL); 1475 + } 1476 + if (!hcd) { 1477 + xenbus_dev_fatal(dev, err, 1478 + "fail to allocate USB host controller"); 1479 + return ERR_PTR(-ENOMEM); 1480 + } 1481 + 1482 + info = xenhcd_hcd_to_info(hcd); 1483 + info->xbdev = dev; 1484 + info->rh_numports = num_ports; 1485 + 1486 + for (i = 0; i < XENUSB_URB_RING_SIZE; i++) { 1487 + info->shadow[i].req.id = i + 1; 1488 + info->shadow[i].urb = NULL; 1489 + } 1490 + info->shadow[XENUSB_URB_RING_SIZE - 1].req.id = 0x0fff; 1491 + 1492 + return hcd; 1493 + } 1494 + 1495 + static void xenhcd_backend_changed(struct xenbus_device *dev, 1496 + enum xenbus_state backend_state) 1497 + { 1498 + switch (backend_state) { 1499 + case XenbusStateInitialising: 1500 + case XenbusStateReconfiguring: 1501 + case XenbusStateReconfigured: 1502 + case XenbusStateUnknown: 1503 + break; 1504 + 1505 + case XenbusStateInitWait: 1506 + case XenbusStateInitialised: 1507 + case XenbusStateConnected: 1508 + if (dev->state != XenbusStateInitialising) 1509 + break; 1510 + if 
(!xenhcd_connect(dev)) 1511 + xenbus_switch_state(dev, XenbusStateConnected); 1512 + break; 1513 + 1514 + case XenbusStateClosed: 1515 + if (dev->state == XenbusStateClosed) 1516 + break; 1517 + fallthrough; /* Missed the backend's Closing state. */ 1518 + case XenbusStateClosing: 1519 + xenhcd_disconnect(dev); 1520 + break; 1521 + 1522 + default: 1523 + xenbus_dev_fatal(dev, -EINVAL, "saw state %d at frontend", 1524 + backend_state); 1525 + break; 1526 + } 1527 + } 1528 + 1529 + static int xenhcd_remove(struct xenbus_device *dev) 1530 + { 1531 + struct xenhcd_info *info = dev_get_drvdata(&dev->dev); 1532 + struct usb_hcd *hcd = xenhcd_info_to_hcd(info); 1533 + 1534 + xenhcd_destroy_rings(info); 1535 + usb_put_hcd(hcd); 1536 + 1537 + return 0; 1538 + } 1539 + 1540 + static int xenhcd_probe(struct xenbus_device *dev, 1541 + const struct xenbus_device_id *id) 1542 + { 1543 + int err; 1544 + struct usb_hcd *hcd; 1545 + struct xenhcd_info *info; 1546 + 1547 + if (usb_disabled()) 1548 + return -ENODEV; 1549 + 1550 + hcd = xenhcd_create_hcd(dev); 1551 + if (IS_ERR(hcd)) { 1552 + err = PTR_ERR(hcd); 1553 + xenbus_dev_fatal(dev, err, 1554 + "fail to create usb host controller"); 1555 + return err; 1556 + } 1557 + 1558 + info = xenhcd_hcd_to_info(hcd); 1559 + dev_set_drvdata(&dev->dev, info); 1560 + 1561 + err = usb_add_hcd(hcd, 0, 0); 1562 + if (err) { 1563 + xenbus_dev_fatal(dev, err, "fail to add USB host controller"); 1564 + usb_put_hcd(hcd); 1565 + dev_set_drvdata(&dev->dev, NULL); 1566 + } 1567 + 1568 + return err; 1569 + } 1570 + 1571 + static const struct xenbus_device_id xenhcd_ids[] = { 1572 + { "vusb" }, 1573 + { "" }, 1574 + }; 1575 + 1576 + static struct xenbus_driver xenhcd_driver = { 1577 + .ids = xenhcd_ids, 1578 + .probe = xenhcd_probe, 1579 + .otherend_changed = xenhcd_backend_changed, 1580 + .remove = xenhcd_remove, 1581 + }; 1582 + 1583 + static int __init xenhcd_init(void) 1584 + { 1585 + if (!xen_domain()) 1586 + return -ENODEV; 1587 + 1588 + 
xenhcd_urbp_cachep = kmem_cache_create("xenhcd_urb_priv", 1589 + sizeof(struct urb_priv), 0, 0, NULL); 1590 + if (!xenhcd_urbp_cachep) { 1591 + pr_err("xenhcd failed to create kmem cache\n"); 1592 + return -ENOMEM; 1593 + } 1594 + 1595 + return xenbus_register_frontend(&xenhcd_driver); 1596 + } 1597 + module_init(xenhcd_init); 1598 + 1599 + static void __exit xenhcd_exit(void) 1600 + { 1601 + kmem_cache_destroy(xenhcd_urbp_cachep); 1602 + xenbus_unregister_driver(&xenhcd_driver); 1603 + } 1604 + module_exit(xenhcd_exit); 1605 + 1606 + MODULE_ALIAS("xen:vusb"); 1607 + MODULE_AUTHOR("Juergen Gross <jgross@suse.com>"); 1608 + MODULE_DESCRIPTION("Xen USB Virtual Host Controller driver (xen-hcd)"); 1609 + MODULE_LICENSE("Dual BSD/GPL");
+7 -9
drivers/usb/host/xhci-mtk.c
···
245 245 	/* wait for host ip to sleep */
246 246 	ret = readl_poll_timeout(&ippc->ip_pw_sts1, value,
247 247 			(value & STS1_IP_SLEEP_STS), 100, 100000);
248     -	if (ret) {
    248 +	if (ret)
249 249 		dev_err(mtk->dev, "ip sleep failed!!!\n");
250     -		return ret;
251     -	}
252     -	return 0;
    250 +	else /* workaround for platforms using low level latch */
    251 +		usleep_range(100, 200);
    252 +
    253 +	return ret;
253 254 }
254 255 
255 256 static int xhci_mtk_ssusb_config(struct xhci_hcd_mtk *mtk)
···
301 300 	case SSUSB_UWK_V1_1:
302 301 		reg = mtk->uwk_reg_base + PERI_WK_CTRL0;
303 302 		msk = WC0_IS_EN | WC0_IS_C(0xf) | WC0_IS_P;
304     -		val = enable ? (WC0_IS_EN | WC0_IS_C(0x8)) : 0;
    303 +		val = enable ? (WC0_IS_EN | WC0_IS_C(0x1)) : 0;
305 304 		break;
306 305 	case SSUSB_UWK_V1_2:
307 306 		reg = mtk->uwk_reg_base + PERI_WK_CTRL0;
···
438 437 	if (ret)
439 438 		return ret;
440 439 
441     -	if (usb_hcd_is_primary_hcd(hcd)) {
    440 +	if (usb_hcd_is_primary_hcd(hcd))
442 441 		ret = xhci_mtk_sch_init(mtk);
443     -		if (ret)
444     -			return ret;
445     -	}
446 442 
447 443 	return ret;
448 444 }
+2 -4
drivers/usb/host/xhci.c
··· 4998 4998 enabling_u2) 4999 4999 u2_mel_us = DIV_ROUND_UP(udev->u2_params.mel, 1000); 5000 5000 5001 - if (u1_mel_us > u2_mel_us) 5002 - mel_us = u1_mel_us; 5003 - else 5004 - mel_us = u2_mel_us; 5001 + mel_us = max(u1_mel_us, u2_mel_us); 5002 + 5005 5003 /* xHCI host controller max exit latency field is only 16 bits wide. */ 5006 5004 if (mel_us > MAX_EXIT) { 5007 5005 dev_warn(&udev->dev, "Link PM max exit latency of %lluus "
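The xhci.c hunk collapses an open-coded comparison into the kernel's max() helper; the surrounding code rounds each exit latency up to microseconds and range-checks the result against the 16-bit register field. A standalone sketch of that computation, with userspace stand-ins for the kernel's DIV_ROUND_UP and max() macros:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel macros used in the hunk
 * (linux/math.h DIV_ROUND_UP and linux/minmax.h max()). */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define max(a, b) ((a) > (b) ? (a) : (b))

/* The xHCI max exit latency field is only 16 bits wide. */
#define MAX_EXIT 0xffff

/* Mirror of the simplified flow: round each latency up, take the
 * larger one, then range-check it against the 16-bit field. */
static int compute_mel_us(uint64_t u1_mel, uint64_t u2_mel, uint64_t *mel_us)
{
	uint64_t u1_mel_us = DIV_ROUND_UP(u1_mel, 1000);
	uint64_t u2_mel_us = DIV_ROUND_UP(u2_mel, 1000);

	*mel_us = max(u1_mel_us, u2_mel_us);
	if (*mel_us > MAX_EXIT)
		return -1;	/* too large for the register field */
	return 0;
}
```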
+7 -9
drivers/usb/isp1760/isp1760-if.c
··· 13 13 14 14 #include <linux/usb.h> 15 15 #include <linux/io.h> 16 + #include <linux/irq.h> 16 17 #include <linux/module.h> 17 18 #include <linux/of.h> 18 19 #include <linux/platform_device.h> ··· 192 191 unsigned long irqflags; 193 192 unsigned int devflags = 0; 194 193 struct resource *mem_res; 195 - struct resource *irq_res; 194 + int irq; 196 195 int ret; 197 196 198 197 mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 199 198 200 - irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 201 - if (!irq_res) { 202 - pr_warn("isp1760: IRQ resource not available\n"); 203 - return -ENODEV; 204 - } 205 - irqflags = irq_res->flags & IRQF_TRIGGER_MASK; 199 + irq = platform_get_irq(pdev, 0); 200 + if (irq < 0) 201 + return irq; 202 + irqflags = irq_get_trigger_type(irq); 206 203 207 204 if (IS_ENABLED(CONFIG_OF) && pdev->dev.of_node) { 208 205 struct device_node *dp = pdev->dev.of_node; ··· 238 239 return -ENXIO; 239 240 } 240 241 241 - ret = isp1760_register(mem_res, irq_res->start, irqflags, &pdev->dev, 242 - devflags); 242 + ret = isp1760_register(mem_res, irq, irqflags, &pdev->dev, devflags); 243 243 if (ret < 0) 244 244 return ret; 245 245
+58
drivers/usb/misc/ehset.c
··· 18 18 #define TEST_SINGLE_STEP_GET_DEV_DESC 0x0107 19 19 #define TEST_SINGLE_STEP_SET_FEATURE 0x0108 20 20 21 + extern const struct usb_device_id *usb_device_match_id(struct usb_device *udev, 22 + const struct usb_device_id *id); 23 + 24 + /* 25 + * A list of USB hubs that require the port power 26 + * to be disabled before starting the testing procedures. 27 + */ 28 + static const struct usb_device_id ehset_hub_list[] = { 29 + { USB_DEVICE(0x0424, 0x4502) }, 30 + { USB_DEVICE(0x0424, 0x4913) }, 31 + { USB_DEVICE(0x0451, 0x8027) }, 32 + { } 33 + }; 34 + 35 + static int ehset_prepare_port_for_testing(struct usb_device *hub_udev, u16 portnum) 36 + { 37 + int ret = 0; 38 + 39 + /* 40 + * The USB2.0 spec chapter 11.24.2.13 says that the USB port under 41 + * test needs to be put in suspend before sending the 42 + * test command. Most hubs don't enforce this precondition, but 43 + * some hubs need the port power disabled before 44 + * starting the test. 45 + */ 46 + if (usb_device_match_id(hub_udev, ehset_hub_list)) { 47 + ret = usb_control_msg_send(hub_udev, 0, USB_REQ_CLEAR_FEATURE, 48 + USB_RT_PORT, USB_PORT_FEAT_ENABLE, 49 + portnum, NULL, 0, 1000, GFP_KERNEL); 50 + /* 51 + * Wait for the port to be disabled. It's an arbitrary value 52 + * that has worked every time. 53 + */ 54 + msleep(100); 55 + } else { 56 + /* 57 + * For hubs that are compliant with the spec, 58 + * put the port in SUSPEND. 
59 + */ 60 + ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE, 61 + USB_RT_PORT, USB_PORT_FEAT_SUSPEND, 62 + portnum, NULL, 0, 1000, GFP_KERNEL); 63 + } 64 + return ret; 65 + } 66 + 21 67 static int ehset_probe(struct usb_interface *intf, 22 68 const struct usb_device_id *id) 23 69 { ··· 76 30 77 31 switch (test_pid) { 78 32 case TEST_SE0_NAK_PID: 33 + ret = ehset_prepare_port_for_testing(hub_udev, portnum); 34 + if (!ret) 35 + break; 79 36 ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE, 80 37 USB_RT_PORT, USB_PORT_FEAT_TEST, 81 38 (USB_TEST_SE0_NAK << 8) | portnum, 82 39 NULL, 0, 1000, GFP_KERNEL); 83 40 break; 84 41 case TEST_J_PID: 42 + ret = ehset_prepare_port_for_testing(hub_udev, portnum); 43 + if (!ret) 44 + break; 85 45 ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE, 86 46 USB_RT_PORT, USB_PORT_FEAT_TEST, 87 47 (USB_TEST_J << 8) | portnum, NULL, 0, 88 48 1000, GFP_KERNEL); 89 49 break; 90 50 case TEST_K_PID: 51 + ret = ehset_prepare_port_for_testing(hub_udev, portnum); 52 + if (!ret) 53 + break; 91 54 ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE, 92 55 USB_RT_PORT, USB_PORT_FEAT_TEST, 93 56 (USB_TEST_K << 8) | portnum, NULL, 0, 94 57 1000, GFP_KERNEL); 95 58 break; 96 59 case TEST_PACKET_PID: 60 + ret = ehset_prepare_port_for_testing(hub_udev, portnum); 61 + if (!ret) 62 + break; 97 63 ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE, 98 64 USB_RT_PORT, USB_PORT_FEAT_TEST, 99 65 (USB_TEST_PACKET << 8) | portnum,
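In each test case above the driver issues SetFeature(PORT_TEST) with the test selector in the high byte of wIndex and the port number in the low byte. A small userspace sketch of that encoding; the selector values below are from USB 2.0 Table 9-7, which the kernel exposes as USB_TEST_* in ch9.h:

```c
#include <assert.h>
#include <stdint.h>

/* Test mode selectors from the USB 2.0 specification (Table 9-7);
 * the kernel defines the same values as USB_TEST_* in ch9.h. */
enum {
	USB_TEST_J		= 1,
	USB_TEST_K		= 2,
	USB_TEST_SE0_NAK	= 3,
	USB_TEST_PACKET		= 4,
};

/* For SetFeature(PORT_TEST) the selector rides in the high byte of
 * wIndex and the port number in the low byte, exactly as the
 * "(USB_TEST_x << 8) | portnum" expressions in the driver build it. */
static uint16_t port_test_windex(uint8_t selector, uint8_t portnum)
{
	return (uint16_t)((selector << 8) | portnum);
}
```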
+1
drivers/usb/misc/ftdi-elan.c
··· 202 202 mutex_unlock(&ftdi_module_lock); 203 203 kfree(ftdi->bulk_in_buffer); 204 204 ftdi->bulk_in_buffer = NULL; 205 + kfree(ftdi); 205 206 } 206 207 207 208 static void ftdi_elan_put_kref(struct usb_ftdi *ftdi)
+2
drivers/usb/musb/am35x.c
··· 500 500 pinfo.num_res = pdev->num_resources; 501 501 pinfo.data = pdata; 502 502 pinfo.size_data = sizeof(*pdata); 503 + pinfo.fwnode = of_fwnode_handle(pdev->dev.of_node); 504 + pinfo.of_node_reused = true; 503 505 504 506 glue->musb = musb = platform_device_register_full(&pinfo); 505 507 if (IS_ERR(musb)) {
+4 -16
drivers/usb/musb/da8xx.c
··· 505 505 506 506 static int da8xx_probe(struct platform_device *pdev) 507 507 { 508 - struct resource musb_resources[2]; 509 508 struct musb_hdrc_platform_data *pdata = dev_get_platdata(&pdev->dev); 510 509 struct da8xx_glue *glue; 511 510 struct platform_device_info pinfo; ··· 557 558 if (ret) 558 559 return ret; 559 560 560 - memset(musb_resources, 0x00, sizeof(*musb_resources) * 561 - ARRAY_SIZE(musb_resources)); 562 - 563 - musb_resources[0].name = pdev->resource[0].name; 564 - musb_resources[0].start = pdev->resource[0].start; 565 - musb_resources[0].end = pdev->resource[0].end; 566 - musb_resources[0].flags = pdev->resource[0].flags; 567 - 568 - musb_resources[1].name = pdev->resource[1].name; 569 - musb_resources[1].start = pdev->resource[1].start; 570 - musb_resources[1].end = pdev->resource[1].end; 571 - musb_resources[1].flags = pdev->resource[1].flags; 572 - 573 561 pinfo = da8xx_dev_info; 574 562 pinfo.parent = &pdev->dev; 575 - pinfo.res = musb_resources; 576 - pinfo.num_res = ARRAY_SIZE(musb_resources); 563 + pinfo.res = pdev->resource; 564 + pinfo.num_res = pdev->num_resources; 577 565 pinfo.data = pdata; 578 566 pinfo.size_data = sizeof(*pdata); 567 + pinfo.fwnode = of_fwnode_handle(np); 568 + pinfo.of_node_reused = true; 579 569 580 570 glue->musb = platform_device_register_full(&pinfo); 581 571 ret = PTR_ERR_OR_ZERO(glue->musb);
+1
drivers/usb/musb/jz4740.c
··· 231 231 musb->dev.parent = dev; 232 232 musb->dev.dma_mask = &musb->dev.coherent_dma_mask; 233 233 musb->dev.coherent_dma_mask = DMA_BIT_MASK(32); 234 + device_set_of_node_from_dev(&musb->dev, dev); 234 235 235 236 glue->pdev = musb; 236 237 glue->clk = clk;
+2
drivers/usb/musb/mediatek.c
··· 538 538 pinfo.num_res = pdev->num_resources; 539 539 pinfo.data = pdata; 540 540 pinfo.size_data = sizeof(*pdata); 541 + pinfo.fwnode = of_fwnode_handle(np); 542 + pinfo.of_node_reused = true; 541 543 542 544 glue->musb_pdev = platform_device_register_full(&pinfo); 543 545 if (IS_ERR(glue->musb_pdev)) {
+9 -6
drivers/usb/musb/musb_dsps.c
··· 15 15 */ 16 16 17 17 #include <linux/io.h> 18 + #include <linux/irq.h> 18 19 #include <linux/err.h> 19 20 #include <linux/platform_device.h> 20 21 #include <linux/dma-mapping.h> ··· 740 739 } 741 740 resources[0] = *res; 742 741 743 - res = platform_get_resource_byname(parent, IORESOURCE_IRQ, "mc"); 744 - if (!res) { 745 - dev_err(dev, "failed to get irq.\n"); 746 - return -EINVAL; 747 - } 748 - resources[1] = *res; 742 + ret = platform_get_irq_byname(parent, "mc"); 743 + if (ret < 0) 744 + return ret; 745 + 746 + resources[1].start = ret; 747 + resources[1].end = ret; 748 + resources[1].flags = IORESOURCE_IRQ | irq_get_trigger_type(ret); 749 + resources[1].name = "mc"; 749 750 750 751 /* allocate the child platform device */ 751 752 musb = platform_device_alloc("musb-hdrc",
+2 -21
drivers/usb/musb/omap2430.c
··· 301 301 302 302 static int omap2430_probe(struct platform_device *pdev) 303 303 { 304 - struct resource musb_resources[3]; 305 304 struct musb_hdrc_platform_data *pdata = dev_get_platdata(&pdev->dev); 306 305 struct omap_musb_board_data *data; 307 306 struct platform_device *musb; ··· 327 328 musb->dev.parent = &pdev->dev; 328 329 musb->dev.dma_mask = &omap2430_dmamask; 329 330 musb->dev.coherent_dma_mask = omap2430_dmamask; 331 + device_set_of_node_from_dev(&musb->dev, &pdev->dev); 330 332 331 333 glue->dev = &pdev->dev; 332 334 glue->musb = musb; ··· 383 383 384 384 INIT_WORK(&glue->omap_musb_mailbox_work, omap_musb_mailbox_work); 385 385 386 - memset(musb_resources, 0x00, sizeof(*musb_resources) * 387 - ARRAY_SIZE(musb_resources)); 388 - 389 - musb_resources[0].name = pdev->resource[0].name; 390 - musb_resources[0].start = pdev->resource[0].start; 391 - musb_resources[0].end = pdev->resource[0].end; 392 - musb_resources[0].flags = pdev->resource[0].flags; 393 - 394 - musb_resources[1].name = pdev->resource[1].name; 395 - musb_resources[1].start = pdev->resource[1].start; 396 - musb_resources[1].end = pdev->resource[1].end; 397 - musb_resources[1].flags = pdev->resource[1].flags; 398 - 399 - musb_resources[2].name = pdev->resource[2].name; 400 - musb_resources[2].start = pdev->resource[2].start; 401 - musb_resources[2].end = pdev->resource[2].end; 402 - musb_resources[2].flags = pdev->resource[2].flags; 403 - 404 - ret = platform_device_add_resources(musb, musb_resources, 405 - ARRAY_SIZE(musb_resources)); 386 + ret = platform_device_add_resources(musb, pdev->resource, pdev->num_resources); 406 387 if (ret) { 407 388 dev_err(&pdev->dev, "failed to add resources\n"); 408 389 goto err2;
+2 -16
drivers/usb/musb/ux500.c
··· 216 216 217 217 static int ux500_probe(struct platform_device *pdev) 218 218 { 219 - struct resource musb_resources[2]; 220 219 struct musb_hdrc_platform_data *pdata = dev_get_platdata(&pdev->dev); 221 220 struct device_node *np = pdev->dev.of_node; 222 221 struct platform_device *musb; ··· 262 263 musb->dev.parent = &pdev->dev; 263 264 musb->dev.dma_mask = &pdev->dev.coherent_dma_mask; 264 265 musb->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask; 266 + device_set_of_node_from_dev(&musb->dev, &pdev->dev); 265 267 266 268 glue->dev = &pdev->dev; 267 269 glue->musb = musb; ··· 273 273 274 274 platform_set_drvdata(pdev, glue); 275 275 276 - memset(musb_resources, 0x00, sizeof(*musb_resources) * 277 - ARRAY_SIZE(musb_resources)); 278 - 279 - musb_resources[0].name = pdev->resource[0].name; 280 - musb_resources[0].start = pdev->resource[0].start; 281 - musb_resources[0].end = pdev->resource[0].end; 282 - musb_resources[0].flags = pdev->resource[0].flags; 283 - 284 - musb_resources[1].name = pdev->resource[1].name; 285 - musb_resources[1].start = pdev->resource[1].start; 286 - musb_resources[1].end = pdev->resource[1].end; 287 - musb_resources[1].flags = pdev->resource[1].flags; 288 - 289 - ret = platform_device_add_resources(musb, musb_resources, 290 - ARRAY_SIZE(musb_resources)); 276 + ret = platform_device_add_resources(musb, pdev->resource, pdev->num_resources); 291 277 if (ret) { 292 278 dev_err(&pdev->dev, "failed to add resources\n"); 293 279 goto err2;
+1 -4
drivers/usb/phy/phy-mv-usb.c
··· 648 648 { 649 649 struct mv_otg *mvotg = platform_get_drvdata(pdev); 650 650 651 - if (mvotg->qwork) { 652 - flush_workqueue(mvotg->qwork); 651 + if (mvotg->qwork) 653 652 destroy_workqueue(mvotg->qwork); 654 - } 655 653 656 654 mv_otg_disable(mvotg); 657 655 ··· 823 825 err_disable_clk: 824 826 mv_otg_disable_internal(mvotg); 825 827 err_destroy_workqueue: 826 - flush_workqueue(mvotg->qwork); 827 828 destroy_workqueue(mvotg->qwork); 828 829 829 830 return retval;
+5 -9
drivers/usb/renesas_usbhs/common.c
··· 589 589 { 590 590 const struct renesas_usbhs_platform_info *info; 591 591 struct usbhs_priv *priv; 592 - struct resource *irq_res; 593 592 struct device *dev = &pdev->dev; 594 593 struct gpio_desc *gpiod; 595 594 int ret; 596 595 u32 tmp; 596 + int irq; 597 597 598 598 /* check device node */ 599 599 if (dev_of_node(dev)) ··· 608 608 } 609 609 610 610 /* platform data */ 611 - irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 612 - if (!irq_res) { 613 - dev_err(dev, "Not enough Renesas USB platform resources.\n"); 614 - return -ENODEV; 615 - } 611 + irq = platform_get_irq(pdev, 0); 612 + if (irq < 0) 613 + return irq; 616 614 617 615 /* usb private data */ 618 616 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); ··· 667 669 /* 668 670 * priv settings 669 671 */ 670 - priv->irq = irq_res->start; 671 - if (irq_res->flags & IORESOURCE_IRQ_SHAREABLE) 672 - priv->irqflags = IRQF_SHARED; 672 + priv->irq = irq; 673 673 priv->pdev = pdev; 674 674 INIT_DELAYED_WORK(&priv->notify_hotplug_work, usbhsc_notify_hotplug); 675 675 spin_lock_init(usbhs_priv_to_lock(priv));
-1
drivers/usb/renesas_usbhs/common.h
··· 252 252 253 253 void __iomem *base; 254 254 unsigned int irq; 255 - unsigned long irqflags; 256 255 257 256 const struct renesas_usbhs_platform_callback *pfunc; 258 257 struct renesas_usbhs_driver_param dparam;
+1 -13
drivers/usb/renesas_usbhs/mod.c
··· 142 142 143 143 /* irq settings */ 144 144 ret = devm_request_irq(dev, priv->irq, usbhs_interrupt, 145 - priv->irqflags, dev_name(dev), priv); 145 + 0, dev_name(dev), priv); 146 146 if (ret) { 147 147 dev_err(dev, "irq request err\n"); 148 148 goto mod_init_gadget_err; ··· 218 218 } 219 219 usbhs_unlock(priv, flags); 220 220 /******************** spin unlock ******************/ 221 - 222 - /* 223 - * Check whether the irq enable registers and the irq status are set 224 - * when IRQF_SHARED is set. 225 - */ 226 - if (priv->irqflags & IRQF_SHARED) { 227 - if (!(intenb0 & state->intsts0) && 228 - !(intenb1 & state->intsts1) && 229 - !(state->bempsts) && 230 - !(state->brdysts)) 231 - return -EIO; 232 - } 233 221 234 222 return 0; 235 223 }
-2
drivers/usb/storage/sierra_ms.c
··· 130 130 struct swoc_info *swocInfo; 131 131 struct usb_device *udev; 132 132 133 - retries = 3; 134 - result = 0; 135 133 udev = us->pusb_dev; 136 134 137 135 /* Force Modem mode */
+2 -1
drivers/usb/typec/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 obj-$(CONFIG_TYPEC) += typec.o 3 - typec-y := class.o mux.o bus.o port-mapper.o 3 + typec-y := class.o mux.o bus.o 4 + typec-$(CONFIG_ACPI) += port-mapper.o 4 5 obj-$(CONFIG_TYPEC) += altmodes/ 5 6 obj-$(CONFIG_TYPEC_TCPM) += tcpm/ 6 7 obj-$(CONFIG_TYPEC_UCSI) += ucsi/
-2
drivers/usb/typec/class.c
··· 2039 2039 2040 2040 ida_init(&port->mode_ids); 2041 2041 mutex_init(&port->port_type_lock); 2042 - mutex_init(&port->port_list_lock); 2043 - INIT_LIST_HEAD(&port->port_list); 2044 2042 2045 2043 port->id = id; 2046 2044 port->ops = cap->ops;
+5 -5
drivers/usb/typec/class.h
··· 54 54 55 55 const struct typec_capability *cap; 56 56 const struct typec_operations *ops; 57 - 58 - struct list_head port_list; 59 - struct mutex port_list_lock; /* Port list lock */ 60 - 61 - void *pld; 62 57 }; 63 58 64 59 #define to_typec_port(_dev_) container_of(_dev_, struct typec_port, dev) ··· 74 79 extern struct class typec_mux_class; 75 80 extern struct class typec_class; 76 81 82 + #if defined(CONFIG_ACPI) 77 83 int typec_link_ports(struct typec_port *connector); 78 84 void typec_unlink_ports(struct typec_port *connector); 85 + #else 86 + static inline int typec_link_ports(struct typec_port *connector) { return 0; } 87 + static inline void typec_unlink_ports(struct typec_port *connector) { } 88 + #endif 79 89 80 90 #endif /* __USB_TYPEC_CLASS__ */
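The class.h hunk switches typec_link_ports()/typec_unlink_ports() to the usual Kconfig stub idiom: real prototypes under CONFIG_ACPI, trivially succeeding static inline stubs otherwise, so call sites need no preprocessor conditionals. A minimal userspace illustration of the idiom (FEATURE_X is a made-up stand-in for a Kconfig symbol, not anything from the patch):

```c
#include <assert.h>

/* Stub-on-disable idiom: when the feature is compiled out, callers
 * still link against a trivially succeeding inline stub, keeping
 * every call site free of #ifdefs. FEATURE_X stands in for a
 * Kconfig option such as CONFIG_ACPI. */
#ifdef FEATURE_X
int feature_link(void);		/* real implementation elsewhere */
#else
static inline int feature_link(void) { return 0; }
#endif

static int driver_probe(void)
{
	/* No conditional needed here: the stub makes this a no-op. */
	return feature_link();
}
```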
+49 -250
drivers/usb/typec/port-mapper.c
··· 7 7 */ 8 8 9 9 #include <linux/acpi.h> 10 - #include <linux/usb.h> 11 - #include <linux/usb/typec.h> 10 + #include <linux/component.h> 12 11 13 12 #include "class.h" 14 13 15 - struct port_node { 16 - struct list_head list; 17 - struct device *dev; 18 - void *pld; 14 + static int typec_aggregate_bind(struct device *dev) 15 + { 16 + return component_bind_all(dev, NULL); 17 + } 18 + 19 + static void typec_aggregate_unbind(struct device *dev) 20 + { 21 + component_unbind_all(dev, NULL); 22 + } 23 + 24 + static const struct component_master_ops typec_aggregate_ops = { 25 + .bind = typec_aggregate_bind, 26 + .unbind = typec_aggregate_unbind, 19 27 }; 20 28 21 - static int acpi_pld_match(const struct acpi_pld_info *pld1, 22 - const struct acpi_pld_info *pld2) 29 + struct each_port_arg { 30 + struct typec_port *port; 31 + struct component_match *match; 32 + }; 33 + 34 + static int typec_port_compare(struct device *dev, void *fwnode) 23 35 { 24 - if (!pld1 || !pld2) 36 + return device_match_fwnode(dev, fwnode); 37 + } 38 + 39 + static int typec_port_match(struct device *dev, void *data) 40 + { 41 + struct acpi_device *adev = to_acpi_device(dev); 42 + struct each_port_arg *arg = data; 43 + struct acpi_device *con_adev; 44 + 45 + con_adev = ACPI_COMPANION(&arg->port->dev); 46 + if (con_adev == adev) 25 47 return 0; 26 48 27 - /* 28 - * To speed things up, first checking only the group_position. It seems 29 - * to often have the first unique value in the _PLD. 
30 - */ 31 - if (pld1->group_position == pld2->group_position) 32 - return !memcmp(pld1, pld2, sizeof(struct acpi_pld_info)); 33 - 34 - return 0; 35 - } 36 - 37 - static void *get_pld(struct device *dev) 38 - { 39 - #ifdef CONFIG_ACPI 40 - struct acpi_pld_info *pld; 41 - acpi_status status; 42 - 43 - if (!has_acpi_companion(dev)) 44 - return NULL; 45 - 46 - status = acpi_get_physical_device_location(ACPI_HANDLE(dev), &pld); 47 - if (ACPI_FAILURE(status)) 48 - return NULL; 49 - 50 - return pld; 51 - #else 52 - return NULL; 53 - #endif 54 - } 55 - 56 - static void free_pld(void *pld) 57 - { 58 - #ifdef CONFIG_ACPI 59 - ACPI_FREE(pld); 60 - #endif 61 - } 62 - 63 - static int __link_port(struct typec_port *con, struct port_node *node) 64 - { 65 - int ret; 66 - 67 - ret = sysfs_create_link(&node->dev->kobj, &con->dev.kobj, "connector"); 68 - if (ret) 69 - return ret; 70 - 71 - ret = sysfs_create_link(&con->dev.kobj, &node->dev->kobj, 72 - dev_name(node->dev)); 73 - if (ret) { 74 - sysfs_remove_link(&node->dev->kobj, "connector"); 75 - return ret; 76 - } 77 - 78 - list_add_tail(&node->list, &con->port_list); 79 - 80 - return 0; 81 - } 82 - 83 - static int link_port(struct typec_port *con, struct port_node *node) 84 - { 85 - int ret; 86 - 87 - mutex_lock(&con->port_list_lock); 88 - ret = __link_port(con, node); 89 - mutex_unlock(&con->port_list_lock); 90 - 91 - return ret; 92 - } 93 - 94 - static void __unlink_port(struct typec_port *con, struct port_node *node) 95 - { 96 - sysfs_remove_link(&con->dev.kobj, dev_name(node->dev)); 97 - sysfs_remove_link(&node->dev->kobj, "connector"); 98 - list_del(&node->list); 99 - } 100 - 101 - static void unlink_port(struct typec_port *con, struct port_node *node) 102 - { 103 - mutex_lock(&con->port_list_lock); 104 - __unlink_port(con, node); 105 - mutex_unlock(&con->port_list_lock); 106 - } 107 - 108 - static struct port_node *create_port_node(struct device *port) 109 - { 110 - struct port_node *node; 111 - 112 - node = 
kzalloc(sizeof(*node), GFP_KERNEL); 113 - if (!node) 114 - return ERR_PTR(-ENOMEM); 115 - 116 - node->dev = get_device(port); 117 - node->pld = get_pld(port); 118 - 119 - return node; 120 - } 121 - 122 - static void remove_port_node(struct port_node *node) 123 - { 124 - put_device(node->dev); 125 - free_pld(node->pld); 126 - kfree(node); 127 - } 128 - 129 - static int connector_match(struct device *dev, const void *data) 130 - { 131 - const struct port_node *node = data; 132 - 133 - if (!is_typec_port(dev)) 134 - return 0; 135 - 136 - return acpi_pld_match(to_typec_port(dev)->pld, node->pld); 137 - } 138 - 139 - static struct device *find_connector(struct port_node *node) 140 - { 141 - if (!node->pld) 142 - return NULL; 143 - 144 - return class_find_device(&typec_class, NULL, node, connector_match); 145 - } 146 - 147 - /** 148 - * typec_link_port - Link a port to its connector 149 - * @port: The port device 150 - * 151 - * Find the connector of @port and create symlink named "connector" for it. 152 - * Returns 0 on success, or errno in case of a failure. 153 - * 154 - * NOTE. The function increments the reference count of @port on success. 
155 - */ 156 - int typec_link_port(struct device *port) 157 - { 158 - struct device *connector; 159 - struct port_node *node; 160 - int ret; 161 - 162 - node = create_port_node(port); 163 - if (IS_ERR(node)) 164 - return PTR_ERR(node); 165 - 166 - connector = find_connector(node); 167 - if (!connector) { 168 - ret = 0; 169 - goto remove_node; 170 - } 171 - 172 - ret = link_port(to_typec_port(connector), node); 173 - if (ret) 174 - goto put_connector; 175 - 176 - return 0; 177 - 178 - put_connector: 179 - put_device(connector); 180 - remove_node: 181 - remove_port_node(node); 182 - 183 - return ret; 184 - } 185 - EXPORT_SYMBOL_GPL(typec_link_port); 186 - 187 - static int port_match_and_unlink(struct device *connector, void *port) 188 - { 189 - struct port_node *node; 190 - struct port_node *tmp; 191 - int ret = 0; 192 - 193 - if (!is_typec_port(connector)) 194 - return 0; 195 - 196 - mutex_lock(&to_typec_port(connector)->port_list_lock); 197 - list_for_each_entry_safe(node, tmp, &to_typec_port(connector)->port_list, list) { 198 - ret = node->dev == port; 199 - if (ret) { 200 - unlink_port(to_typec_port(connector), node); 201 - remove_port_node(node); 202 - put_device(connector); 203 - break; 204 - } 205 - } 206 - mutex_unlock(&to_typec_port(connector)->port_list_lock); 207 - 208 - return ret; 209 - } 210 - 211 - /** 212 - * typec_unlink_port - Unlink port from its connector 213 - * @port: The port device 214 - * 215 - * Removes the symlink "connector" and decrements the reference count of @port. 
216 - */ 217 - void typec_unlink_port(struct device *port) 218 - { 219 - class_for_each_device(&typec_class, NULL, port, port_match_and_unlink); 220 - } 221 - EXPORT_SYMBOL_GPL(typec_unlink_port); 222 - 223 - static int each_port(struct device *port, void *connector) 224 - { 225 - struct port_node *node; 226 - int ret; 227 - 228 - node = create_port_node(port); 229 - if (IS_ERR(node)) 230 - return PTR_ERR(node); 231 - 232 - if (!connector_match(connector, node)) { 233 - remove_port_node(node); 234 - return 0; 235 - } 236 - 237 - ret = link_port(to_typec_port(connector), node); 238 - if (ret) { 239 - remove_port_node(node->pld); 240 - return ret; 241 - } 242 - 243 - get_device(connector); 244 - 49 + if (con_adev->pld_crc == adev->pld_crc) 50 + component_match_add(&arg->port->dev, &arg->match, typec_port_compare, 51 + acpi_fwnode_handle(adev)); 245 52 return 0; 246 53 } 247 54 248 55 int typec_link_ports(struct typec_port *con) 249 56 { 250 - int ret = 0; 57 + struct each_port_arg arg = { .port = con, .match = NULL }; 251 58 252 - con->pld = get_pld(&con->dev); 253 - if (!con->pld) 254 - return 0; 59 + bus_for_each_dev(&acpi_bus_type, NULL, &arg, typec_port_match); 255 60 256 - ret = usb_for_each_port(&con->dev, each_port); 257 - if (ret) 258 - typec_unlink_ports(con); 259 - 260 - return ret; 61 + /* 62 + * REVISIT: Now each connector can have only a single component master. 63 + * So far only the USB ports connected to the USB Type-C connector share 64 + * the _PLD with it, but if there one day is something else (like maybe 65 + * the DisplayPort ACPI device object) that also shares the _PLD with 66 + * the connector, every one of those needs to have its own component 67 + * master, because each different type of component needs to be bind to 68 + * the connector independently of the other components. That requires 69 + * improvements to the component framework. Right now you can only have 70 + * one master per device. 
71 + */ 72 + return component_master_add_with_match(&con->dev, &typec_aggregate_ops, arg.match); 261 73 } 262 74 263 75 void typec_unlink_ports(struct typec_port *con) 264 76 { 265 - struct port_node *node; 266 - struct port_node *tmp; 267 - 268 - mutex_lock(&con->port_list_lock); 269 - 270 - list_for_each_entry_safe(node, tmp, &con->port_list, list) { 271 - __unlink_port(con, node); 272 - remove_port_node(node); 273 - put_device(&con->dev); 274 - } 275 - 276 - mutex_unlock(&con->port_list_lock); 277 - 278 - free_pld(con->pld); 77 + component_master_del(&con->dev, &typec_aggregate_ops); 279 78 }
+15 -1
drivers/usb/typec/ucsi/ucsi.c
··· 303 303 return -ENOENT; 304 304 } 305 305 306 + static int ucsi_get_num_altmode(struct typec_altmode **alt) 307 + { 308 + int i; 309 + 310 + for (i = 0; i < UCSI_MAX_ALTMODES; i++) 311 + if (!alt[i]) 312 + break; 313 + 314 + return i; 315 + } 316 + 306 317 static int ucsi_register_altmode(struct ucsi_connector *con, 307 318 struct typec_altmode_desc *desc, 308 319 u8 recipient) ··· 618 607 619 608 static int ucsi_check_altmodes(struct ucsi_connector *con) 620 609 { 621 - int ret; 610 + int ret, num_partner_am; 622 611 623 612 ret = ucsi_register_altmodes(con, UCSI_RECIPIENT_SOP); 624 613 if (ret && ret != -ETIMEDOUT) ··· 628 617 629 618 /* Ignoring the errors in this case. */ 630 619 if (con->partner_altmode[0]) { 620 + num_partner_am = ucsi_get_num_altmode(con->partner_altmode); 621 + if (num_partner_am > 0) 622 + typec_partner_set_num_altmodes(con->partner, num_partner_am); 631 623 ucsi_altmode_update_active(con); 632 624 return 0; 633 625 }
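ucsi_get_num_altmode() above simply counts the leading non-NULL slots of the fixed-size partner_altmode array so the total can be reported via typec_partner_set_num_altmodes(). The same scan in isolation; MAX_ALTMODES here is an illustrative stand-in for the driver's UCSI_MAX_ALTMODES:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ALTMODES 4	/* stand-in for the driver's UCSI_MAX_ALTMODES */

/* Count the populated entries at the front of a fixed-size pointer
 * array, stopping at the first NULL slot, as ucsi_get_num_altmode()
 * does for the partner's registered alternate modes. */
static int num_altmodes(void *alt[MAX_ALTMODES])
{
	int i;

	for (i = 0; i < MAX_ALTMODES; i++)
		if (!alt[i])
			break;
	return i;
}
```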
-1
drivers/usb/usbip/usbip_event.c
··· 137 137 138 138 void usbip_finish_eh(void) 139 139 { 140 - flush_workqueue(usbip_queue); 141 140 destroy_workqueue(usbip_queue); 142 141 usbip_queue = NULL; 143 142 }
+1
include/acpi/acpi_bus.h
··· 360 360 361 361 /* Device */ 362 362 struct acpi_device { 363 + u32 pld_crc; 363 364 int device_type; 364 365 acpi_handle handle; /* no handle for fixed hardware */ 365 366 struct fwnode_handle fwnode;
-9
include/linux/usb.h
··· 875 875 unsigned int iface_num, 876 876 unsigned int alt_num); 877 877 878 - #if IS_REACHABLE(CONFIG_USB) 879 - int usb_for_each_port(void *data, int (*fn)(struct device *, void *)); 880 - #else 881 - static inline int usb_for_each_port(void *data, int (*fn)(struct device *, void *)) 882 - { 883 - return 0; 884 - } 885 - #endif 886 - 887 878 /* port claiming functions */ 888 879 int usb_hub_claim_port(struct usb_device *hdev, unsigned port1, 889 880 struct usb_dev_state *owner);
+2 -1
include/linux/usb/ch9.h
··· 33 33 #ifndef __LINUX_USB_CH9_H 34 34 #define __LINUX_USB_CH9_H 35 35 36 - #include <linux/device.h> 37 36 #include <uapi/linux/usb/ch9.h> 38 37 39 38 /* USB 3.2 SuperSpeed Plus phy signaling rate generation and lane count */ ··· 43 44 USB_SSP_GEN_1x2, 44 45 USB_SSP_GEN_2x2, 45 46 }; 47 + 48 + struct device; 46 49 47 50 extern const char *usb_ep_type_string(int ep_type); 48 51 extern const char *usb_speed_string(enum usb_device_speed speed);
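The ch9.h hunk is part of the header-splitup work: because the header only passes struct device around by pointer, the heavyweight <linux/device.h> include can be replaced by a bare forward declaration. A compact userspace illustration of why that works (the widget/device types below are illustrative stand-ins):

```c
#include <assert.h>
#include <stddef.h>

/* A header that only stores or passes pointers to a struct needs just
 * a forward declaration, not the full definition -- so the heavy
 * include can be dropped, as the ch9.h hunk does for struct device. */
struct device;			/* forward declaration is enough... */

struct widget {
	struct device *parent;	/* ...for pointer members */
};

/* Only code that actually dereferences the pointer needs the real
 * definition; in the kernel that lives in <linux/device.h>. */
struct device {
	const char *name;
};

static const char *widget_parent_name(const struct widget *w)
{
	return w->parent ? w->parent->name : NULL;
}
```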
-12
include/linux/usb/typec.h
··· 305 305 enum usb_pd_svdm_ver svdm_version); 306 306 int typec_get_negotiated_svdm_version(struct typec_port *port); 307 307 308 - #if IS_REACHABLE(CONFIG_TYPEC) 309 - int typec_link_port(struct device *port); 310 - void typec_unlink_port(struct device *port); 311 - #else 312 - static inline int typec_link_port(struct device *port) 313 - { 314 - return 0; 315 - } 316 - 317 - static inline void typec_unlink_port(struct device *port) { } 318 - #endif 319 - 320 308 #endif /* __LINUX_USB_TYPEC_H */
+405
include/xen/interface/io/usbif.h
··· 1 + /* SPDX-License-Identifier: MIT */ 2 + 3 + /* 4 + * usbif.h 5 + * 6 + * USB I/O interface for Xen guest OSes. 7 + * 8 + * Copyright (C) 2009, FUJITSU LABORATORIES LTD. 9 + * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com> 10 + */ 11 + 12 + #ifndef __XEN_PUBLIC_IO_USBIF_H__ 13 + #define __XEN_PUBLIC_IO_USBIF_H__ 14 + 15 + #include "ring.h" 16 + #include "../grant_table.h" 17 + 18 + /* 19 + * Detailed Interface Description 20 + * ============================== 21 + * The pvUSB interface is using a split driver design: a frontend driver in 22 + * the guest and a backend driver in a driver domain (normally dom0) having 23 + * access to the physical USB device(s) being passed to the guest. 24 + * 25 + * The frontend and backend drivers use XenStore to initiate the connection 26 + * between them, the I/O activity is handled via two shared ring pages and an 27 + * event channel. As the interface between frontend and backend is at the USB 28 + * host connector level, multiple (up to 31) physical USB devices can be 29 + * handled by a single connection. 30 + * 31 + * The Xen pvUSB device name is "qusb", so the frontend's XenStore entries are 32 + * to be found under "device/qusb", while the backend's XenStore entries are 33 + * under "backend/<guest-dom-id>/qusb". 34 + * 35 + * When a new pvUSB connection is established, the frontend needs to setup the 36 + * two shared ring pages for communication and the event channel. The ring 37 + * pages need to be made available to the backend via the grant table 38 + * interface. 39 + * 40 + * One of the shared ring pages is used by the backend to inform the frontend 41 + * about USB device plug events (device to be added or removed). This is the 42 + * "conn-ring". 43 + * 44 + * The other ring page is used for USB I/O communication (requests and 45 + * responses). This is the "urb-ring". 
 *
 * Feature and Parameter Negotiation
 * =================================
 * The two halves of a Xen pvUSB driver utilize nodes within the XenStore to
 * communicate capabilities and to negotiate operating parameters. This
 * section enumerates these nodes which reside in the respective front and
 * backend portions of the XenStore, following the XenBus convention.
 *
 * Any specified default value is in effect if the corresponding XenBus node
 * is not present in the XenStore.
 *
 * XenStore nodes in sections marked "PRIVATE" are solely for use by the
 * driver side whose XenBus tree contains them.
 *
 *****************************************************************************
 *                            Backend XenBus Nodes
 *****************************************************************************
 *
 *------------------ Backend Device Identification (PRIVATE) ------------------
 *
 * num-ports
 *      Values:         unsigned [1...31]
 *
 *      Number of ports for this (virtual) USB host connector.
 *
 * usb-ver
 *      Values:         unsigned [1...2]
 *
 *      USB version of this host connector: 1 = USB 1.1, 2 = USB 2.0.
 *
 * port/[1...31]
 *      Values:         string
 *
 *      Physical USB device connected to the given port, e.g. "3-1.5".
 *
 *****************************************************************************
 *                            Frontend XenBus Nodes
 *****************************************************************************
 *
 *----------------------- Request Transport Parameters -----------------------
 *
 * event-channel
 *      Values:         unsigned
 *
 *      The identifier of the Xen event channel used to signal activity
 *      in the ring buffer.
 *
 * urb-ring-ref
 *      Values:         unsigned
 *
 *      The Xen grant reference granting permission for the backend to map
 *      the sole page in a single page sized ring buffer. This is the ring
 *      buffer for urb requests.
 *
 * conn-ring-ref
 *      Values:         unsigned
 *
 *      The Xen grant reference granting permission for the backend to map
 *      the sole page in a single page sized ring buffer. This is the ring
 *      buffer for connection/disconnection requests.
 *
 * protocol
 *      Values:         string (XEN_IO_PROTO_ABI_*)
 *      Default Value:  XEN_IO_PROTO_ABI_NATIVE
 *
 *      The machine ABI rules governing the format of all ring request and
 *      response structures.
 *
 * Protocol Description
 * ====================
 *
 *-------------------------- USB device plug events ---------------------------
 *
 * USB device plug events are sent via the "conn-ring" shared page. As only
 * events are being sent, the respective requests from the frontend to the
 * backend are just dummy ones.
 * The events sent to the frontend have the following layout:
 *         0                1                 2               3        octet
 * +----------------+----------------+----------------+----------------+
 * |               id                |    portnum     |     speed      | 4
 * +----------------+----------------+----------------+----------------+
 * id - uint16_t, event id (taken from the actual frontend dummy request)
 * portnum - uint8_t, port number (1 ... 31)
 * speed - uint8_t, device XENUSB_SPEED_*, XENUSB_SPEED_NONE == unplug
 *
 * The dummy request:
 *         0                1        octet
 * +----------------+----------------+
 * |               id                | 2
 * +----------------+----------------+
 * id - uint16_t, guest supplied value (no need for being unique)
 *
 *-------------------------- USB I/O request ----------------------------------
 *
 * A single USB I/O request on the "urb-ring" has the following layout:
 *         0                1                 2               3        octet
 * +----------------+----------------+----------------+----------------+
 * |               id                |         nr_buffer_segs          | 4
 * +----------------+----------------+----------------+----------------+
 * |                               pipe                                | 8
 * +----------------+----------------+----------------+----------------+
 * |         transfer_flags          |          buffer_length          | 12
 * +----------------+----------------+----------------+----------------+
 * |                       request type specific                       | 16
 * |                               data                                | 20
 * +----------------+----------------+----------------+----------------+
 * |                              seg[0]                               | 24
 * |                               data                                | 28
 * +----------------+----------------+----------------+----------------+
 * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\|
 * +----------------+----------------+----------------+----------------+
 * |             seg[XENUSB_MAX_SEGMENTS_PER_REQUEST - 1]              | 144
 * |                               data                                | 148
 * +----------------+----------------+----------------+----------------+
 * Bit field bit number 0 is always least significant bit, undefined bits must
 * be zero.
 * id - uint16_t, guest supplied value
 * nr_buffer_segs - uint16_t, number of segment entries in seg[] array
 * pipe - uint32_t, bit field with multiple information:
 *   bits 0-4: port request to send to
 *   bit 5: unlink request with specified id (cancel I/O) if set (see below)
 *   bit 7: direction (1 = read from device)
 *   bits 8-14: device number on port
 *   bits 15-18: endpoint of device
 *   bits 30-31: request type: 00 = isochronous, 01 = interrupt,
 *               10 = control, 11 = bulk
 * transfer_flags - uint16_t, bit field with processing flags:
 *   bit 0: less data than specified allowed
 * buffer_length - uint16_t, total length of data
 * request type specific data - 8 bytes, see below
 * seg[] - array with 8 byte elements, see below
 *
 * Request type specific data for isochronous request:
 *         0                1                 2               3        octet
 * +----------------+----------------+----------------+----------------+
 * |            interval             |           start_frame           | 4
 * +----------------+----------------+----------------+----------------+
 * |        number_of_packets        |       nr_frame_desc_segs        | 8
 * +----------------+----------------+----------------+----------------+
 * interval - uint16_t, time interval in msecs between frames
 * start_frame - uint16_t, start frame number
 * number_of_packets - uint16_t, number of packets to transfer
 * nr_frame_desc_segs - uint16_t, number of seg[] frame descriptor elements
 *
 * Request type specific data for interrupt request:
 *         0                1                 2               3        octet
 * +----------------+----------------+----------------+----------------+
 * |            interval             |                0                | 4
 * +----------------+----------------+----------------+----------------+
 * |                                0                                  | 8
 * +----------------+----------------+----------------+----------------+
 * interval - uint16_t, time in msecs until interruption
 *
 * Request type specific data for control request:
 *         0                1                 2               3        octet
 * +----------------+----------------+----------------+----------------+
 * |                      data of setup packet                         | 4
 * |                                                                   | 8
 * +----------------+----------------+----------------+----------------+
 *
 * Request type specific data for bulk request:
 *         0                1                 2               3        octet
 * +----------------+----------------+----------------+----------------+
 * |                                0                                  | 4
 * |                                0                                  | 8
 * +----------------+----------------+----------------+----------------+
 *
 * Request type specific data for unlink request:
 *         0                1                 2               3        octet
 * +----------------+----------------+----------------+----------------+
 * |            unlink_id            |                0                | 4
 * +----------------+----------------+----------------+----------------+
 * |                                0                                  | 8
 * +----------------+----------------+----------------+----------------+
 * unlink_id - uint16_t, request id of request to terminate
 *
 * seg[] array element layout:
 *         0                1                 2               3        octet
 * +----------------+----------------+----------------+----------------+
 * |                               gref                                | 4
 * +----------------+----------------+----------------+----------------+
 * |             offset              |             length              | 8
 * +----------------+----------------+----------------+----------------+
 * gref - uint32_t, grant reference of buffer page
 * offset - uint16_t, offset of buffer start in page
 * length - uint16_t, length of buffer in page
 *
 *-------------------------- USB I/O response ---------------------------------
 *
 *         0                1                 2               3        octet
 * +----------------+----------------+----------------+----------------+
 * |               id                |           start_frame           | 4
 * +----------------+----------------+----------------+----------------+
 * |                              status                               | 8
 * +----------------+----------------+----------------+----------------+
 * |                           actual_length                           | 12
 * +----------------+----------------+----------------+----------------+
 * |                            error_count                            | 16
 * +----------------+----------------+----------------+----------------+
 * id - uint16_t, id of the request this response belongs to
 * start_frame - uint16_t, start frame of this response (iso requests only)
 * status - int32_t, XENUSB_STATUS_* (non-iso requests)
 * actual_length - uint32_t, actual size of data transferred
 * error_count - uint32_t, number of errors (iso requests)
 */

enum xenusb_spec_version {
	XENUSB_VER_UNKNOWN = 0,
	XENUSB_VER_USB11,
	XENUSB_VER_USB20,
	XENUSB_VER_USB30,	/* not supported yet */
};

/*
 * USB pipe in xenusb_request
 *
 *  - port number:	bits 0-4
 *			(USB_MAXCHILDREN is 31)
 *
 *  - operation flag:	bit 5
 *			(0 = submit urb,
 *			 1 = unlink urb)
 *
 *  - direction:	bit 7
 *			(0 = Host-to-Device [Out],
 *			 1 = Device-to-Host [In])
 *
 *  - device address:	bits 8-14
 *
 *  - endpoint:		bits 15-18
 *
 *  - pipe type:	bits 30-31
 *			(00 = isochronous, 01 = interrupt,
 *			 10 = control, 11 = bulk)
 */

#define XENUSB_PIPE_PORT_MASK	0x0000001f
#define XENUSB_PIPE_UNLINK	0x00000020
#define XENUSB_PIPE_DIR		0x00000080
#define XENUSB_PIPE_DEV_MASK	0x0000007f
#define XENUSB_PIPE_DEV_SHIFT	8
#define XENUSB_PIPE_EP_MASK	0x0000000f
#define XENUSB_PIPE_EP_SHIFT	15
#define XENUSB_PIPE_TYPE_MASK	0x00000003
#define XENUSB_PIPE_TYPE_SHIFT	30
#define XENUSB_PIPE_TYPE_ISOC	0
#define XENUSB_PIPE_TYPE_INT	1
#define XENUSB_PIPE_TYPE_CTRL	2
#define XENUSB_PIPE_TYPE_BULK	3

#define xenusb_pipeportnum(pipe)		((pipe) & XENUSB_PIPE_PORT_MASK)
#define xenusb_setportnum_pipe(pipe, portnum)	((pipe) | (portnum))

#define xenusb_pipeunlink(pipe)			((pipe) & XENUSB_PIPE_UNLINK)
#define xenusb_pipesubmit(pipe)			(!xenusb_pipeunlink(pipe))
#define xenusb_setunlink_pipe(pipe)		((pipe) | XENUSB_PIPE_UNLINK)

#define xenusb_pipein(pipe)			((pipe) & XENUSB_PIPE_DIR)
#define xenusb_pipeout(pipe)			(!xenusb_pipein(pipe))

#define xenusb_pipedevice(pipe)			\
	(((pipe) >> XENUSB_PIPE_DEV_SHIFT) & XENUSB_PIPE_DEV_MASK)

#define xenusb_pipeendpoint(pipe)		\
	(((pipe) >> XENUSB_PIPE_EP_SHIFT) & XENUSB_PIPE_EP_MASK)

#define xenusb_pipetype(pipe)			\
	(((pipe) >> XENUSB_PIPE_TYPE_SHIFT) & XENUSB_PIPE_TYPE_MASK)
#define xenusb_pipeisoc(pipe)	(xenusb_pipetype(pipe) == XENUSB_PIPE_TYPE_ISOC)
#define xenusb_pipeint(pipe)	(xenusb_pipetype(pipe) == XENUSB_PIPE_TYPE_INT)
#define xenusb_pipectrl(pipe)	(xenusb_pipetype(pipe) == XENUSB_PIPE_TYPE_CTRL)
#define xenusb_pipebulk(pipe)	(xenusb_pipetype(pipe) == XENUSB_PIPE_TYPE_BULK)

#define XENUSB_MAX_SEGMENTS_PER_REQUEST	(16)
#define XENUSB_MAX_PORTNR		31
#define XENUSB_RING_SIZE		4096

/*
 * RING for transferring urbs.
 */
struct xenusb_request_segment {
	grant_ref_t gref;
	uint16_t offset;
	uint16_t length;
};

struct xenusb_urb_request {
	uint16_t id;			/* request id */
	uint16_t nr_buffer_segs;	/* number of urb->transfer_buffer segments */

	/* basic urb parameters */
	uint32_t pipe;
	uint16_t transfer_flags;
#define XENUSB_SHORT_NOT_OK	0x0001
	uint16_t buffer_length;
	union {
		uint8_t ctrl[8];	/* setup_packet (Ctrl) */

		struct {
			uint16_t interval;		/* maximum (1024*8) in usb core */
			uint16_t start_frame;		/* start frame */
			uint16_t number_of_packets;	/* number of ISO packets */
			uint16_t nr_frame_desc_segs;	/* number of iso_frame_desc segments */
		} isoc;

		struct {
			uint16_t interval;		/* maximum (1024*8) in usb core */
			uint16_t pad[3];
		} intr;

		struct {
			uint16_t unlink_id;		/* unlink request id */
			uint16_t pad[3];
		} unlink;

	} u;

	/* urb data segments */
	struct xenusb_request_segment seg[XENUSB_MAX_SEGMENTS_PER_REQUEST];
};

struct xenusb_urb_response {
	uint16_t id;		/* request id */
	uint16_t start_frame;	/* start frame (ISO) */
	int32_t status;		/* status (non-ISO) */
#define XENUSB_STATUS_OK	0
#define XENUSB_STATUS_NODEV	(-19)
#define XENUSB_STATUS_INVAL	(-22)
#define XENUSB_STATUS_STALL	(-32)
#define XENUSB_STATUS_IOERROR	(-71)
#define XENUSB_STATUS_BABBLE	(-75)
#define XENUSB_STATUS_SHUTDOWN	(-108)
	int32_t actual_length;	/* actual transfer length */
	int32_t error_count;	/* number of ISO errors */
};

DEFINE_RING_TYPES(xenusb_urb, struct xenusb_urb_request, struct xenusb_urb_response);
#define XENUSB_URB_RING_SIZE __CONST_RING_SIZE(xenusb_urb, XENUSB_RING_SIZE)

/*
 * RING for notifying connect/disconnect events to frontend
 */
struct xenusb_conn_request {
	uint16_t id;
};

struct xenusb_conn_response {
	uint16_t id;		/* request id */
	uint8_t portnum;	/* port number */
	uint8_t speed;		/* usb_device_speed */
#define XENUSB_SPEED_NONE	0
#define XENUSB_SPEED_LOW	1
#define XENUSB_SPEED_FULL	2
#define XENUSB_SPEED_HIGH	3
};

DEFINE_RING_TYPES(xenusb_conn, struct xenusb_conn_request, struct xenusb_conn_response);
#define XENUSB_CONN_RING_SIZE __CONST_RING_SIZE(xenusb_conn, XENUSB_RING_SIZE)

#endif /* __XEN_PUBLIC_IO_USBIF_H__ */