Merge tag 'usb-6.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB / Thunderbolt updates from Greg KH:
"Here is the big set of USB and Thunderbolt driver updates for
6.15-rc1. Included in here are:

- Thunderbolt driver and core API updates for new hardware and
features

- usb-storage const array cleanups

- typec driver updates

- dwc3 driver updates

- xhci driver updates and bugfixes

- small USB documentation updates

- usb cdns3 driver updates

- usb gadget driver updates

- other small driver updates and fixes

All of these have been in linux-next for a while with no reported
issues"

* tag 'usb-6.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (92 commits)
thunderbolt: Do not add non-active NVM if NVM upgrade is disabled for retimer
thunderbolt: Scan retimers after device router has been enumerated
usb: host: cdns3: forward lost power information to xhci
usb: host: xhci-plat: allow upper layers to signal power loss
usb: xhci: change xhci_resume() parameters to explicit the desired info
usb: cdns3-ti: run HW init at resume() if HW was reset
usb: cdns3-ti: move reg writes to separate function
usb: cdns3: call cdns_power_is_lost() only once in cdns_resume()
usb: cdns3: rename hibernated argument of role->resume() to lost_power
usb: xhci: tegra: rename `runtime` boolean to `is_auto_runtime`
usb: host: xhci-plat: mvebu: use ->quirks instead of ->init_quirk() func
usb: dwc3: Don't use %pK through printk
usb: core: Don't use %pK through printk
usb: gadget: aspeed: Add NULL pointer check in ast_vhub_init_dev()
dt-bindings: usb: qcom,dwc3: Synchronize minItems for interrupts and -names
usb: common: usb-conn-gpio: switch psy_cfg from of_node to fwnode
usb: xhci: Avoid Stop Endpoint retry loop if the endpoint seems Running
usb: xhci: Don't change the status of stalled TDs on failed Stop EP
xhci: Avoid queuing redundant Stop Endpoint command for stalled endpoint
xhci: Handle spurious events on Etron host isoc enpoints
...

+1638 -700
+1 -1
Documentation/devicetree/bindings/mfd/aspeed-lpc.yaml
··· 1 1 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 - # # Copyright (c) 2021 Aspeed Tehchnology Inc. 2 + # # Copyright (c) 2021 Aspeed Technology Inc. 3 3 %YAML 1.2 4 4 --- 5 5 $id: http://devicetree.org/schemas/mfd/aspeed-lpc.yaml#
+2
Documentation/devicetree/bindings/usb/generic-xhci.yaml
··· 51 51 - const: core 52 52 - const: reg 53 53 54 + dma-coherent: true 55 + 54 56 power-domains: 55 57 maxItems: 1 56 58
+27 -8
Documentation/devicetree/bindings/usb/microchip,usb2514.yaml
··· 9 9 maintainers: 10 10 - Fabio Estevam <festevam@gmail.com> 11 11 12 - allOf: 13 - - $ref: usb-device.yaml# 14 - 15 12 properties: 16 13 compatible: 17 - enum: 18 - - usb424,2412 19 - - usb424,2417 20 - - usb424,2514 21 - - usb424,2517 14 + oneOf: 15 + - enum: 16 + - usb424,2412 17 + - usb424,2417 18 + - usb424,2514 19 + - usb424,2517 20 + - items: 21 + - enum: 22 + - usb424,2512 23 + - usb424,2513 24 + - const: usb424,2514 22 25 23 26 reg: true 24 27 ··· 30 27 31 28 vdd-supply: 32 29 description: 3.3V power supply. 30 + 31 + vdda-supply: 32 + description: 3.3V analog power supply. 33 33 34 34 clocks: 35 35 description: External 24MHz clock connected to the CLKIN pin. ··· 48 42 type: object 49 43 $ref: /schemas/usb/usb-device.yaml 50 44 additionalProperties: true 45 + 46 + allOf: 47 + - $ref: usb-device.yaml# 48 + - if: 49 + not: 50 + properties: 51 + compatible: 52 + contains: 53 + const: usb424,2514 54 + then: 55 + properties: 56 + vdda-supply: false 51 57 52 58 unevaluatedProperties: false 53 59 ··· 78 60 clocks = <&clks IMX6QDL_CLK_CKO>; 79 61 reset-gpios = <&gpio7 12 GPIO_ACTIVE_LOW>; 80 62 vdd-supply = <&reg_3v3_hub>; 63 + vdda-supply = <&reg_3v3a_hub>; 81 64 #address-cells = <1>; 82 65 #size-cells = <0>; 83 66
+140
Documentation/devicetree/bindings/usb/parade,ps8830.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/usb/parade,ps8830.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Parade PS883x USB and DisplayPort Retimer 8 + 9 + maintainers: 10 + - Abel Vesa <abel.vesa@linaro.org> 11 + 12 + properties: 13 + compatible: 14 + enum: 15 + - parade,ps8830 16 + 17 + reg: 18 + maxItems: 1 19 + 20 + clocks: 21 + items: 22 + - description: XO Clock 23 + 24 + reset-gpios: 25 + maxItems: 1 26 + 27 + vdd-supply: 28 + description: power supply (1.07V) 29 + 30 + vdd33-supply: 31 + description: power supply (3.3V) 32 + 33 + vdd33-cap-supply: 34 + description: power supply (3.3V) 35 + 36 + vddar-supply: 37 + description: power supply (1.07V) 38 + 39 + vddat-supply: 40 + description: power supply (1.07V) 41 + 42 + vddio-supply: 43 + description: power supply (1.2V or 1.8V) 44 + 45 + orientation-switch: true 46 + retimer-switch: true 47 + 48 + ports: 49 + $ref: /schemas/graph.yaml#/properties/ports 50 + properties: 51 + port@0: 52 + $ref: /schemas/graph.yaml#/properties/port 53 + description: Super Speed (SS) Output endpoint to the Type-C connector 54 + 55 + port@1: 56 + $ref: /schemas/graph.yaml#/$defs/port-base 57 + description: Super Speed (SS) Input endpoint from the Super-Speed PHY 58 + unevaluatedProperties: false 59 + 60 + port@2: 61 + $ref: /schemas/graph.yaml#/properties/port 62 + description: 63 + Sideband Use (SBU) AUX lines endpoint to the Type-C connector for the purpose of 64 + handling altmode muxing and orientation switching. 65 + 66 + required: 67 + - compatible 68 + - reg 69 + - clocks 70 + - reset-gpios 71 + - vdd-supply 72 + - vdd33-supply 73 + - vdd33-cap-supply 74 + - vddat-supply 75 + - vddio-supply 76 + - orientation-switch 77 + - retimer-switch 78 + 79 + allOf: 80 + - $ref: usb-switch.yaml# 81 + 82 + additionalProperties: false 83 + 84 + examples: 85 + - | 86 + #include <dt-bindings/gpio/gpio.h> 87 + 88 + i2c { 89 + #address-cells = <1>; 90 + #size-cells = <0>; 91 + 92 + typec-mux@8 { 93 + compatible = "parade,ps8830"; 94 + reg = <0x8>; 95 + 96 + clocks = <&clk_rtmr_xo>; 97 + 98 + vdd-supply = <&vreg_rtmr_1p15>; 99 + vdd33-supply = <&vreg_rtmr_3p3>; 100 + vdd33-cap-supply = <&vreg_rtmr_3p3>; 101 + vddar-supply = <&vreg_rtmr_1p15>; 102 + vddat-supply = <&vreg_rtmr_1p15>; 103 + vddio-supply = <&vreg_rtmr_1p8>; 104 + 105 + reset-gpios = <&tlmm 10 GPIO_ACTIVE_LOW>; 106 + 107 + retimer-switch; 108 + orientation-switch; 109 + 110 + ports { 111 + #address-cells = <1>; 112 + #size-cells = <0>; 113 + 114 + port@0 { 115 + reg = <0>; 116 + 117 + endpoint { 118 + remote-endpoint = <&typec_con_ss>; 119 + }; 120 + }; 121 + 122 + port@1 { 123 + reg = <1>; 124 + 125 + endpoint { 126 + remote-endpoint = <&usb_phy_ss>; 127 + }; 128 + }; 129 + 130 + port@2 { 131 + reg = <2>; 132 + 133 + endpoint { 134 + remote-endpoint = <&typec_dp_aux>; 135 + }; 136 + }; 137 + }; 138 + }; 139 + }; 140 + ...
+2
Documentation/devicetree/bindings/usb/qcom,dwc3.yaml
··· 404 404 minItems: 2 405 405 maxItems: 3 406 406 interrupt-names: 407 + minItems: 2 407 408 items: 408 409 - const: pwr_event 409 410 - const: qusb2_phy ··· 426 425 minItems: 3 427 426 maxItems: 4 428 427 interrupt-names: 428 + minItems: 3 429 429 items: 430 430 - const: pwr_event 431 431 - const: qusb2_phy
+3
Documentation/devicetree/bindings/usb/richtek,rt1711h.yaml
··· 30 30 interrupts: 31 31 maxItems: 1 32 32 33 + vbus-supply: 34 + description: VBUS power supply 35 + 33 36 wakeup-source: 34 37 type: boolean 35 38
+19
Documentation/devicetree/bindings/usb/rockchip,dwc3.yaml
··· 26 26 contains: 27 27 enum: 28 28 - rockchip,rk3328-dwc3 29 + - rockchip,rk3562-dwc3 29 30 - rockchip,rk3568-dwc3 30 31 - rockchip,rk3576-dwc3 31 32 - rockchip,rk3588-dwc3 ··· 38 37 items: 39 38 - enum: 40 39 - rockchip,rk3328-dwc3 40 + - rockchip,rk3562-dwc3 41 41 - rockchip,rk3568-dwc3 42 42 - rockchip,rk3576-dwc3 43 43 - rockchip,rk3588-dwc3 ··· 74 72 - enum: 75 73 - grf_clk 76 74 - utmi 75 + - pipe 77 76 - const: pipe 78 77 79 78 power-domains: ··· 114 111 - const: suspend_clk 115 112 - const: bus_clk 116 113 - const: grf_clk 114 + - if: 115 + properties: 116 + compatible: 117 + contains: 118 + const: rockchip,rk3562-dwc3 119 + then: 120 + properties: 121 + clocks: 122 + minItems: 4 123 + maxItems: 4 124 + clock-names: 125 + items: 126 + - const: ref_clk 127 + - const: suspend_clk 128 + - const: bus_clk 129 + - const: pipe 117 130 - if: 118 131 properties: 119 132 compatible:
+37 -7
Documentation/devicetree/bindings/usb/samsung,exynos-dwc3.yaml
··· 11 11 12 12 properties: 13 13 compatible: 14 - enum: 15 - - google,gs101-dwusb3 16 - - samsung,exynos5250-dwusb3 17 - - samsung,exynos5433-dwusb3 18 - - samsung,exynos7-dwusb3 19 - - samsung,exynos850-dwusb3 14 + oneOf: 15 + - enum: 16 + - google,gs101-dwusb3 17 + - samsung,exynos5250-dwusb3 18 + - samsung,exynos5433-dwusb3 19 + - samsung,exynos7-dwusb3 20 + - samsung,exynos7870-dwusb3 21 + - samsung,exynos850-dwusb3 22 + - items: 23 + - const: samsung,exynos990-dwusb3 24 + - const: samsung,exynos850-dwusb3 20 25 21 26 '#address-cells': 22 27 const: 1 ··· 57 52 - clock-names 58 53 - ranges 59 54 - '#size-cells' 60 - - vdd10-supply 61 55 - vdd33-supply 62 56 63 57 allOf: ··· 76 72 - const: susp_clk 77 73 - const: link_aclk 78 74 - const: link_pclk 75 + required: 76 + - vdd10-supply 79 77 80 78 - if: 81 79 properties: ··· 92 86 clock-names: 93 87 items: 94 88 - const: usbdrd30 89 + required: 90 + - vdd10-supply 95 91 96 92 - if: 97 93 properties: ··· 111 103 - const: susp_clk 112 104 - const: phyclk 113 105 - const: pipe_pclk 106 + required: 107 + - vdd10-supply 114 108 115 109 - if: 116 110 properties: ··· 129 119 - const: usbdrd30 130 120 - const: usbdrd30_susp_clk 131 121 - const: usbdrd30_axius_clk 122 + required: 123 + - vdd10-supply 124 + 125 + - if: 126 + properties: 127 + compatible: 128 + contains: 129 + const: samsung,exynos7870-dwusb3 130 + then: 131 + properties: 132 + clocks: 133 + minItems: 3 134 + maxItems: 3 135 + clock-names: 136 + items: 137 + - const: bus_early 138 + - const: ref 139 + - const: ctrl 132 140 133 141 - if: 134 142 properties: ··· 162 134 items: 163 135 - const: bus_early 164 136 - const: ref 137 + required: 138 + - vdd10-supply 165 139 166 140 additionalProperties: false 167 141
+11
Documentation/devicetree/bindings/usb/snps,dwc3-common.yaml
··· 65 65 mode. 66 66 type: boolean 67 67 68 + snps,reserved-endpoints: 69 + description: 70 + Reserve endpoints for other needs, e.g, for tracing control and output. 71 + When set, the driver will avoid using them for the regular USB transfers. 72 + $ref: /schemas/types.yaml#/definitions/uint8-array 73 + minItems: 1 74 + maxItems: 30 75 + items: 76 + minimum: 2 77 + maximum: 31 78 + 68 79 snps,dis-start-transfer-quirk: 69 80 description: 70 81 When set, disable isoc START TRANSFER command failure SW work-around
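The new snps,reserved-endpoints property above lets a board carve endpoints out of the pool the driver uses for regular transfers. A hypothetical userspace C sketch of the selection logic a driver could apply (the helper names and the flat reserved-list representation are invented for illustration; this is not the dwc3 implementation):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Return true if endpoint number ep appears in the reserved list. */
static bool ep_is_reserved(const uint8_t *reserved, size_t n, uint8_t ep)
{
	for (size_t i = 0; i < n; i++)
		if (reserved[i] == ep)
			return true;
	return false;
}

/*
 * Pick the first endpoint usable for a regular transfer, skipping
 * reserved ones. The 2..31 range mirrors the binding's constraints
 * (ep0 and ep1 are never up for grabs). Returns -1 if all reserved.
 */
int pick_free_endpoint(const uint8_t *reserved, size_t n)
{
	for (uint8_t ep = 2; ep <= 31; ep++)
		if (!ep_is_reserved(reserved, n, ep))
			return ep;
	return -1;
}
```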
+4 -2
Documentation/devicetree/bindings/usb/usb-device.yaml
··· 39 39 40 40 reg: 41 41 description: the number of the USB hub port or the USB host-controller 42 - port to which this device is attached. The range is 1-255. 43 - maxItems: 1 42 + port to which this device is attached. 43 + items: 44 + - minimum: 1 45 + maximum: 255 44 46 45 47 "#address-cells": 46 48 description: should be 1 for hub nodes with device nodes,
+1 -1
Documentation/driver-api/usb/writing_musb_glue_layer.rst
··· 613 613 to bypass reading the configuration from silicon, and rely on a 614 614 hard-coded table that describes the endpoints configuration instead:: 615 615 616 - static struct musb_fifo_cfg jz4740_musb_fifo_cfg[] = { 616 + static const struct musb_fifo_cfg jz4740_musb_fifo_cfg[] = { 617 617 { .hw_ep_num = 1, .style = FIFO_TX, .maxpacket = 512, }, 618 618 { .hw_ep_num = 1, .style = FIFO_RX, .maxpacket = 512, }, 619 619 { .hw_ep_num = 2, .style = FIFO_TX, .maxpacket = 64, },
+1 -1
Documentation/usb/CREDITS
··· 161 161 - The people at the linux-usb mailing list, for reading so 162 162 many messages :) Ok, no more kidding; for all your advises! 163 163 164 - - All the people at the USB Implementors Forum for their 164 + - All the people at the USB Implementers Forum for their 165 165 help and assistance. 166 166 167 167 - Nathan Myers <ncm@cantrip.org>, for his advice! (hope you
+2 -2
MAINTAINERS
··· 24023 24023 THUNDERBOLT DRIVER 24024 24024 M: Andreas Noever <andreas.noever@gmail.com> 24025 24025 M: Michael Jamet <michael.jamet@intel.com> 24026 - M: Mika Westerberg <mika.westerberg@linux.intel.com> 24026 + M: Mika Westerberg <westeri@kernel.org> 24027 24027 M: Yehezkel Bernat <YehezkelShB@gmail.com> 24028 24028 L: linux-usb@vger.kernel.org 24029 24029 S: Maintained ··· 24034 24034 24035 24035 THUNDERBOLT NETWORK DRIVER 24036 24036 M: Michael Jamet <michael.jamet@intel.com> 24037 - M: Mika Westerberg <mika.westerberg@linux.intel.com> 24037 + M: Mika Westerberg <westeri@kernel.org> 24038 24038 M: Yehezkel Bernat <YehezkelShB@gmail.com> 24039 24039 L: netdev@vger.kernel.org 24040 24040 S: Maintained
+5 -3
drivers/thunderbolt/retimer.c
··· 93 93 if (ret) 94 94 goto err_nvm; 95 95 96 - ret = tb_nvm_add_non_active(nvm, nvm_write); 97 - if (ret) 98 - goto err_nvm; 96 + if (!rt->no_nvm_upgrade) { 97 + ret = tb_nvm_add_non_active(nvm, nvm_write); 98 + if (ret) 99 + goto err_nvm; 100 + } 99 101 100 102 rt->nvm = nvm; 101 103 dev_dbg(&rt->dev, "NVM version %x.%x\n", nvm->major, nvm->minor);
+14 -2
drivers/thunderbolt/tb.c
··· 1305 1305 goto out_rpm_put; 1306 1306 } 1307 1307 1308 - tb_retimer_scan(port, true); 1309 - 1310 1308 sw = tb_switch_alloc(port->sw->tb, &port->sw->dev, 1311 1309 tb_downstream_route(port)); 1312 1310 if (IS_ERR(sw)) { 1311 + /* 1312 + * Make the downstream retimers available even if there 1313 + * is no router connected. 1314 + */ 1315 + tb_retimer_scan(port, true); 1316 + 1313 1317 /* 1314 1318 * If there is an error accessing the connected switch 1315 1319 * it may be connected to another domain. Also we allow ··· 1362 1358 1363 1359 upstream_port = tb_upstream_port(sw); 1364 1360 tb_configure_link(port, upstream_port, sw); 1361 + 1362 + /* 1363 + * Scan for downstream retimers. We only scan them after the 1364 + * router has been enumerated to avoid issues with certain 1365 + * Pluggable devices that expect the host to enumerate them 1366 + * within certain timeout. 1367 + */ 1368 + tb_retimer_scan(port, true); 1365 1369 1366 1370 /* 1367 1371 * CL0s and CL1 are enabled and supported together.
+8 -8
drivers/thunderbolt/tunnel.c
··· 2229 2229 2230 2230 path = tb_path_alloc(tb, down, TB_USB3_HOPID, up, TB_USB3_HOPID, 0, 2231 2231 "USB3 Down"); 2232 - if (!path) { 2233 - tb_tunnel_put(tunnel); 2234 - return NULL; 2235 - } 2232 + if (!path) 2233 + goto err_free; 2236 2234 tb_usb3_init_path(path); 2237 2235 tunnel->paths[TB_USB3_PATH_DOWN] = path; 2238 2236 2239 2237 path = tb_path_alloc(tb, up, TB_USB3_HOPID, down, TB_USB3_HOPID, 0, 2240 2238 "USB3 Up"); 2241 - if (!path) { 2242 - tb_tunnel_put(tunnel); 2243 - return NULL; 2244 - } 2239 + if (!path) 2240 + goto err_free; 2245 2241 tb_usb3_init_path(path); 2246 2242 tunnel->paths[TB_USB3_PATH_UP] = path; 2247 2243 ··· 2254 2258 } 2255 2259 2256 2260 return tunnel; 2261 + 2262 + err_free: 2263 + tb_tunnel_put(tunnel); 2264 + return NULL; 2257 2265 } 2258 2266 2259 2267 /**
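The tunnel.c hunk above funnels both path-allocation failures through a single err_free label instead of duplicating the put-and-return block. A minimal userspace C sketch of that goto-unwind pattern (all names hypothetical, loosely modelled on the up/down USB3 paths):

```c
#include <stdlib.h>

struct tunnel {
	int *path_down;
	int *path_up;
};

/* Release a partially or fully built tunnel; tolerates NULL members. */
static void tunnel_put(struct tunnel *t)
{
	if (!t)
		return;
	free(t->path_down);	/* free(NULL) is a no-op */
	free(t->path_up);
	free(t);
}

/*
 * Two-stage constructor: every failure jumps to one err_free label,
 * so the teardown exists in exactly one place. fail_up_path simulates
 * the second allocation failing.
 */
struct tunnel *tunnel_alloc(int fail_up_path)
{
	struct tunnel *t = calloc(1, sizeof(*t));

	if (!t)
		return NULL;

	t->path_down = malloc(sizeof(int));
	if (!t->path_down)
		goto err_free;

	t->path_up = fail_up_path ? NULL : malloc(sizeof(int));
	if (!t->path_up)
		goto err_free;

	return t;

err_free:
	tunnel_put(t);		/* single exit point for all failures */
	return NULL;
}
```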
+2 -2
drivers/usb/cdns3/cdns3-gadget.c
··· 3468 3468 return 0; 3469 3469 } 3470 3470 3471 - static int cdns3_gadget_resume(struct cdns *cdns, bool hibernated) 3471 + static int cdns3_gadget_resume(struct cdns *cdns, bool lost_power) 3472 3472 { 3473 3473 struct cdns3_device *priv_dev = cdns->gadget_dev; 3474 3474 ··· 3476 3476 return 0; 3477 3477 3478 3478 cdns3_gadget_config(priv_dev); 3479 - if (hibernated) 3479 + if (lost_power) 3480 3480 writel(USB_CONF_DEVEN, &priv_dev->regs->usb_conf); 3481 3481 3482 3482 return 0;
+69 -38
drivers/usb/cdns3/cdns3-ti.c
··· 58 58 unsigned vbus_divider:1; 59 59 struct clk *usb2_refclk; 60 60 struct clk *lpm_clk; 61 + int usb2_refclk_rate_code; 61 62 }; 62 63 63 64 static const int cdns_ti_rate_table[] = { /* in KHZ */ ··· 99 98 {}, 100 99 }; 101 100 101 + static void cdns_ti_reset_and_init_hw(struct cdns_ti *data) 102 + { 103 + u32 reg; 104 + 105 + /* assert RESET */ 106 + reg = cdns_ti_readl(data, USBSS_W1); 107 + reg &= ~USBSS_W1_PWRUP_RST; 108 + cdns_ti_writel(data, USBSS_W1, reg); 109 + 110 + /* set static config */ 111 + reg = cdns_ti_readl(data, USBSS_STATIC_CONFIG); 112 + reg &= ~USBSS1_STATIC_PLL_REF_SEL_MASK; 113 + reg |= data->usb2_refclk_rate_code << USBSS1_STATIC_PLL_REF_SEL_SHIFT; 114 + 115 + reg &= ~USBSS1_STATIC_VBUS_SEL_MASK; 116 + if (data->vbus_divider) 117 + reg |= 1 << USBSS1_STATIC_VBUS_SEL_SHIFT; 118 + 119 + cdns_ti_writel(data, USBSS_STATIC_CONFIG, reg); 120 + reg = cdns_ti_readl(data, USBSS_STATIC_CONFIG); 121 + 122 + /* set USB2_ONLY mode if requested */ 123 + reg = cdns_ti_readl(data, USBSS_W1); 124 + if (data->usb2_only) 125 + reg |= USBSS_W1_USB2_ONLY; 126 + 127 + /* set default modestrap */ 128 + reg |= USBSS_W1_MODESTRAP_SEL; 129 + reg &= ~USBSS_W1_MODESTRAP_MASK; 130 + reg |= USBSS_MODESTRAP_MODE_NONE << USBSS_W1_MODESTRAP_SHIFT; 131 + cdns_ti_writel(data, USBSS_W1, reg); 132 + 133 + /* de-assert RESET */ 134 + reg |= USBSS_W1_PWRUP_RST; 135 + cdns_ti_writel(data, USBSS_W1, reg); 136 + } 137 + 102 138 static int cdns_ti_probe(struct platform_device *pdev) 103 139 { 104 140 struct device *dev = &pdev->dev; 105 141 struct device_node *node = pdev->dev.of_node; 106 142 struct cdns_ti *data; 107 - int error; 108 - u32 reg; 109 - int rate_code, i; 110 143 unsigned long rate; 144 + int error, i; 111 145 112 146 data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 113 147 if (!data) ··· 182 146 return -EINVAL; 183 147 } 184 148 185 - rate_code = i; 149 + data->usb2_refclk_rate_code = i; 150 + data->vbus_divider = device_property_read_bool(dev, "ti,vbus-divider"); 151 + data->usb2_only = device_property_read_bool(dev, "ti,usb2-only"); 152 + 153 + /* 154 + * The call below to pm_runtime_get_sync() MIGHT reset hardware, if it 155 + * detects it as uninitialised. We want to enforce a reset at probe, 156 + * and so do it manually here. This means the first runtime_resume() 157 + * will be a no-op. 158 + */ 159 + cdns_ti_reset_and_init_hw(data); 186 160 187 161 pm_runtime_enable(dev); 188 162 error = pm_runtime_get_sync(dev); ··· 200 154 dev_err(dev, "pm_runtime_get_sync failed: %d\n", error); 201 155 goto err; 202 156 } 203 - 204 - /* assert RESET */ 205 - reg = cdns_ti_readl(data, USBSS_W1); 206 - reg &= ~USBSS_W1_PWRUP_RST; 207 - cdns_ti_writel(data, USBSS_W1, reg); 208 - 209 - /* set static config */ 210 - reg = cdns_ti_readl(data, USBSS_STATIC_CONFIG); 211 - reg &= ~USBSS1_STATIC_PLL_REF_SEL_MASK; 212 - reg |= rate_code << USBSS1_STATIC_PLL_REF_SEL_SHIFT; 213 - 214 - reg &= ~USBSS1_STATIC_VBUS_SEL_MASK; 215 - data->vbus_divider = device_property_read_bool(dev, "ti,vbus-divider"); 216 - if (data->vbus_divider) 217 - reg |= 1 << USBSS1_STATIC_VBUS_SEL_SHIFT; 218 - 219 - cdns_ti_writel(data, USBSS_STATIC_CONFIG, reg); 220 - reg = cdns_ti_readl(data, USBSS_STATIC_CONFIG); 221 - 222 - /* set USB2_ONLY mode if requested */ 223 - reg = cdns_ti_readl(data, USBSS_W1); 224 - data->usb2_only = device_property_read_bool(dev, "ti,usb2-only"); 225 - if (data->usb2_only) 226 - reg |= USBSS_W1_USB2_ONLY; 227 - 228 - /* set default modestrap */ 229 - reg |= USBSS_W1_MODESTRAP_SEL; 230 - reg &= ~USBSS_W1_MODESTRAP_MASK; 231 - reg |= USBSS_MODESTRAP_MODE_NONE << USBSS_W1_MODESTRAP_SHIFT; 232 - cdns_ti_writel(data, USBSS_W1, reg); 233 - 234 - /* de-assert RESET */ 235 - reg |= USBSS_W1_PWRUP_RST; 236 - cdns_ti_writel(data, USBSS_W1, reg); 237 157 238 158 error = of_platform_populate(node, NULL, cdns_ti_auxdata, dev); 239 159 if (error) { ··· 236 224 platform_set_drvdata(pdev, NULL); 237 225 } 238 226 227 + static int cdns_ti_runtime_resume(struct device *dev) 228 + { 229 + const u32 mask = USBSS_W1_PWRUP_RST | USBSS_W1_MODESTRAP_SEL; 230 + struct cdns_ti *data = dev_get_drvdata(dev); 231 + u32 w1; 232 + 233 + w1 = cdns_ti_readl(data, USBSS_W1); 234 + if ((w1 & mask) != mask) 235 + cdns_ti_reset_and_init_hw(data); 236 + 237 + return 0; 238 + } 239 + 240 + static const struct dev_pm_ops cdns_ti_pm_ops = { 241 + RUNTIME_PM_OPS(NULL, cdns_ti_runtime_resume, NULL) 242 + SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume) 243 + }; 244 + 239 245 static const struct of_device_id cdns_ti_of_match[] = { 240 246 { .compatible = "ti,j721e-usb", }, 241 247 { .compatible = "ti,am64-usb", }, ··· 267 237 .driver = { 268 238 .name = "cdns3-ti", 269 239 .of_match_table = cdns_ti_of_match, 240 + .pm = pm_ptr(&cdns_ti_pm_ops), 270 241 }, 271 242 }; 272 243
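The cdns3-ti runtime_resume() above re-runs the full init only when sticky W1 bits read back clear, which indicates the wrapper was reset while suspended. A userspace sketch of that detect-and-reinit check (the register is modelled as a plain word and the bit positions are made up; the real driver programs actual hardware registers):

```c
#include <stdbool.h>
#include <stdint.h>

/* Bits the init sequence always leaves set (values hypothetical). */
#define W1_PWRUP_RST		(1u << 0)
#define W1_MODESTRAP_SEL	(1u << 1)

static void reset_and_init_hw(uint32_t *w1)
{
	/* a real driver would program PLL refclk, modestrap, etc. here */
	*w1 |= W1_PWRUP_RST | W1_MODESTRAP_SEL;
}

/*
 * Mirrors the runtime-resume logic: if either sticky bit is clear,
 * the hardware lost state over suspend and must be re-initialised.
 * Returns true when a re-init was needed.
 */
bool runtime_resume(uint32_t *w1)
{
	const uint32_t mask = W1_PWRUP_RST | W1_MODESTRAP_SEL;

	if ((*w1 & mask) != mask) {
		reset_and_init_hw(w1);
		return true;
	}
	return false;
}
```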
+1 -1
drivers/usb/cdns3/cdnsp-gadget.c
··· 1974 1974 return 0; 1975 1975 } 1976 1976 1977 - static int cdnsp_gadget_resume(struct cdns *cdns, bool hibernated) 1977 + static int cdnsp_gadget_resume(struct cdns *cdns, bool lost_power) 1978 1978 { 1979 1979 struct cdnsp_device *pdev = cdns->gadget_dev; 1980 1980 enum usb_device_speed max_speed;
+3 -2
drivers/usb/cdns3/core.c
··· 524 524 525 525 int cdns_resume(struct cdns *cdns) 526 526 { 527 + bool power_lost = cdns_power_is_lost(cdns); 527 528 enum usb_role real_role; 528 529 bool role_changed = false; 529 530 int ret = 0; 530 531 531 - if (cdns_power_is_lost(cdns)) { 532 + if (power_lost) { 532 533 if (!cdns->role_sw) { 533 534 real_role = cdns_hw_role_state_machine(cdns); 534 535 if (real_role != cdns->role) { ··· 552 551 } 553 552 554 553 if (cdns->roles[cdns->role]->resume) 555 - cdns->roles[cdns->role]->resume(cdns, cdns_power_is_lost(cdns)); 554 + cdns->roles[cdns->role]->resume(cdns, power_lost); 556 555 557 556 return 0; 558 557 }
+1 -1
drivers/usb/cdns3/core.h
··· 30 30 int (*start)(struct cdns *cdns); 31 31 void (*stop)(struct cdns *cdns); 32 32 int (*suspend)(struct cdns *cdns, bool do_wakeup); 33 - int (*resume)(struct cdns *cdns, bool hibernated); 33 + int (*resume)(struct cdns *cdns, bool lost_power); 34 34 const char *name; 35 35 #define CDNS_ROLE_STATE_INACTIVE 0 36 36 #define CDNS_ROLE_STATE_ACTIVE 1
+11
drivers/usb/cdns3/host.c
··· 138 138 cdns_drd_host_off(cdns); 139 139 } 140 140 141 + static int cdns_host_resume(struct cdns *cdns, bool power_lost) 142 + { 143 + struct usb_hcd *hcd = platform_get_drvdata(cdns->host_dev); 144 + struct xhci_plat_priv *priv = hcd_to_xhci_priv(hcd); 145 + 146 + priv->power_lost = power_lost; 147 + 148 + return 0; 149 + } 150 + 141 151 int cdns_host_init(struct cdns *cdns) 142 152 { 143 153 struct cdns_role_driver *rdrv; ··· 158 148 159 149 rdrv->start = __cdns_host_init; 160 150 rdrv->stop = cdns_host_exit; 151 + rdrv->resume = cdns_host_resume; 161 152 rdrv->state = CDNS_ROLE_STATE_INACTIVE; 162 153 rdrv->name = "host"; 163 154
+5 -5
drivers/usb/chipidea/usbmisc_imx.c
··· 440 440 else if (data->oc_pol_configured) 441 441 reg &= ~MX6_BM_OVER_CUR_POLARITY; 442 442 } 443 - /* If the polarity is not set keep it as setup by the bootlader */ 443 + /* If the polarity is not set keep it as setup by the bootloader */ 444 444 if (data->pwr_pol == 1) 445 445 reg |= MX6_BM_PWR_POLARITY; 446 446 writel(reg, usbmisc->base + data->index * 4); ··· 645 645 else if (data->oc_pol_configured) 646 646 reg &= ~MX6_BM_OVER_CUR_POLARITY; 647 647 } 648 - /* If the polarity is not set keep it as setup by the bootlader */ 648 + /* If the polarity is not set keep it as setup by the bootloader */ 649 649 if (data->pwr_pol == 1) 650 650 reg |= MX6_BM_PWR_POLARITY; 651 651 writel(reg, usbmisc->base); ··· 939 939 else if (data->oc_pol_configured) 940 940 reg &= ~MX6_BM_OVER_CUR_POLARITY; 941 941 } 942 - /* If the polarity is not set keep it as setup by the bootlader */ 942 + /* If the polarity is not set keep it as setup by the bootloader */ 943 943 if (data->pwr_pol == 1) 944 944 reg |= MX6_BM_PWR_POLARITY; 945 945 ··· 1185 1185 if (usbmisc->ops->hsic_set_clk && data->hsic) 1186 1186 ret = usbmisc->ops->hsic_set_clk(data, false); 1187 1187 if (ret) { 1188 - dev_err(data->dev, "set_wakeup failed, ret=%d\n", ret); 1188 + dev_err(data->dev, "hsic_set_clk failed, ret=%d\n", ret); 1189 1189 return ret; 1190 1190 } 1191 1191 ··· 1224 1224 if (usbmisc->ops->hsic_set_clk && data->hsic) 1225 1225 ret = usbmisc->ops->hsic_set_clk(data, true); 1226 1226 if (ret) { 1227 - dev_err(data->dev, "set_wakeup failed, ret=%d\n", ret); 1227 + dev_err(data->dev, "hsic_set_clk failed, ret=%d\n", ret); 1228 1228 goto hsic_set_clk_fail; 1229 1229 } 1230 1230
+1 -1
drivers/usb/common/usb-conn-gpio.c
··· 158 158 struct device *dev = info->dev; 159 159 struct power_supply_desc *desc = &info->desc; 160 160 struct power_supply_config cfg = { 161 - .of_node = dev->of_node, 161 + .fwnode = dev_fwnode(dev), 162 162 }; 163 163 164 164 desc->name = "usb-charger";
+46 -5
drivers/usb/core/config.c
··· 64 64 memcpy(&ep->ssp_isoc_ep_comp, desc, USB_DT_SSP_ISOC_EP_COMP_SIZE); 65 65 } 66 66 67 + static void usb_parse_eusb2_isoc_endpoint_companion(struct device *ddev, 68 + int cfgno, int inum, int asnum, struct usb_host_endpoint *ep, 69 + unsigned char *buffer, int size) 70 + { 71 + struct usb_eusb2_isoc_ep_comp_descriptor *desc; 72 + struct usb_descriptor_header *h; 73 + 74 + /* 75 + * eUSB2 isochronous endpoint companion descriptor for this endpoint 76 + * shall be declared before the next endpoint or interface descriptor 77 + */ 78 + while (size >= USB_DT_EUSB2_ISOC_EP_COMP_SIZE) { 79 + h = (struct usb_descriptor_header *)buffer; 80 + 81 + if (h->bDescriptorType == USB_DT_EUSB2_ISOC_ENDPOINT_COMP) { 82 + desc = (struct usb_eusb2_isoc_ep_comp_descriptor *)buffer; 83 + ep->eusb2_isoc_ep_comp = *desc; 84 + return; 85 + } 86 + if (h->bDescriptorType == USB_DT_ENDPOINT || 87 + h->bDescriptorType == USB_DT_INTERFACE) 88 + break; 89 + 90 + buffer += h->bLength; 91 + size -= h->bLength; 92 + } 93 + 94 + dev_notice(ddev, "No eUSB2 isoc ep %d companion for config %d interface %d altsetting %d\n", 95 + ep->desc.bEndpointAddress, cfgno, inum, asnum); 96 + } 97 + 67 98 static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno, 68 99 int inum, int asnum, struct usb_host_endpoint *ep, 69 100 unsigned char *buffer, int size) ··· 289 258 int n, i, j, retval; 290 259 unsigned int maxp; 291 260 const unsigned short *maxpacket_maxes; 261 + u16 bcdUSB; 292 262 293 263 d = (struct usb_endpoint_descriptor *) buffer; 264 + bcdUSB = le16_to_cpu(udev->descriptor.bcdUSB); 294 265 buffer += d->bLength; 295 266 size -= d->bLength; 296 267 ··· 442 409 443 410 /* 444 411 * Validate the wMaxPacketSize field. 445 - * Some devices have isochronous endpoints in altsetting 0; 446 - * the USB-2 spec requires such endpoints to have wMaxPacketSize = 0 447 - * (see the end of section 5.6.3), so don't warn about them. 412 + * eUSB2 devices (see USB 2.0 Double Isochronous IN ECN 9.6.6 Endpoint) 413 + * and devices with isochronous endpoints in altsetting 0 (see USB 2.0 414 + * end of section 5.6.3) have wMaxPacketSize = 0. 415 + * So don't warn about those. 448 416 */ 449 417 maxp = le16_to_cpu(endpoint->desc.wMaxPacketSize); 450 - if (maxp == 0 && !(usb_endpoint_xfer_isoc(d) && asnum == 0)) { 418 + 419 + if (maxp == 0 && bcdUSB != 0x0220 && 420 + !(usb_endpoint_xfer_isoc(d) && asnum == 0)) 451 421 dev_notice(ddev, "config %d interface %d altsetting %d endpoint 0x%X has invalid wMaxPacketSize 0\n", 452 422 cfgno, inum, asnum, d->bEndpointAddress); 453 - } 454 423 455 424 /* Find the highest legal maxpacket size for this endpoint */ 456 425 i = 0; /* additional transactions per microframe */ ··· 499 464 cfgno, inum, asnum, d->bEndpointAddress, 500 465 maxp); 501 466 } 467 + 468 + /* Parse a possible eUSB2 periodic endpoint companion descriptor */ 469 + if (bcdUSB == 0x0220 && d->wMaxPacketSize == 0 && 470 + (usb_endpoint_xfer_isoc(d) || usb_endpoint_xfer_int(d))) 471 + usb_parse_eusb2_isoc_endpoint_companion(ddev, cfgno, inum, asnum, 472 + endpoint, buffer, size); 502 473 503 474 /* Parse a possible SuperSpeed endpoint companion descriptor */ 504 475 if (udev->speed >= USB_SPEED_SUPER)
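The companion search added in config.c walks the length-prefixed descriptor stream until it finds the eUSB2 companion or hits the next endpoint/interface descriptor, since the spec requires the companion to come first. A self-contained model of that walk (DT_ENDPOINT/DT_INTERFACE are the standard USB type codes; 0x31 is only a stand-in for the companion type, and the buffers in the usage test are invented):

```c
#include <stddef.h>
#include <stdint.h>

#define DT_INTERFACE	0x04	/* real USB descriptor type */
#define DT_ENDPOINT	0x05	/* real USB descriptor type */
#define DT_COMPANION	0x31	/* stand-in for the companion type */

/*
 * Each descriptor starts with { bLength, bDescriptorType }. Scan
 * forward descriptor by descriptor; return a pointer to the companion
 * if it precedes the next endpoint/interface, else NULL.
 */
const uint8_t *find_companion(const uint8_t *buf, size_t size)
{
	while (size >= 2) {
		uint8_t len = buf[0];
		uint8_t type = buf[1];

		if (type == DT_COMPANION)
			return buf;
		if (type == DT_ENDPOINT || type == DT_INTERFACE)
			break;	/* companion must come first; give up */
		if (len < 2 || len > size)
			break;	/* malformed or truncated descriptor */
		buf += len;
		size -= len;
	}
	return NULL;
}
```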
+2 -2
drivers/usb/core/hcd.c
··· 1609 1609 if (retval == 0) 1610 1610 retval = -EINPROGRESS; 1611 1611 else if (retval != -EIDRM && retval != -EBUSY) 1612 - dev_dbg(&udev->dev, "hcd_unlink_urb %pK fail %d\n", 1612 + dev_dbg(&udev->dev, "hcd_unlink_urb %p fail %d\n", 1613 1613 urb, retval); 1614 1614 usb_put_dev(udev); 1615 1615 } ··· 1786 1786 /* kick hcd */ 1787 1787 unlink1(hcd, urb, -ESHUTDOWN); 1788 1788 dev_dbg (hcd->self.controller, 1789 - "shutdown urb %pK ep%d%s-%s\n", 1789 + "shutdown urb %p ep%d%s-%s\n", 1790 1790 urb, usb_endpoint_num(&ep->desc), 1791 1791 is_in ? "in" : "out", 1792 1792 usb_ep_type_string(usb_endpoint_type(&ep->desc)));
+1 -3
drivers/usb/core/hub.c
··· 4708 4708 } 4709 4709 EXPORT_SYMBOL_GPL(usb_ep0_reinit); 4710 4710 4711 - #define usb_sndaddr0pipe() (PIPE_CONTROL << 30) 4712 - 4713 4711 static int hub_set_address(struct usb_device *udev, int devnum) 4714 4712 { 4715 4713 int retval; ··· 4731 4733 if (hcd->driver->address_device) 4732 4734 retval = hcd->driver->address_device(hcd, udev, timeout_ms); 4733 4735 else 4734 - retval = usb_control_msg(udev, usb_sndaddr0pipe(), 4736 + retval = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 4735 4737 USB_REQ_SET_ADDRESS, 0, devnum, 0, 4736 4738 NULL, 0, timeout_ms); 4737 4739 if (retval == 0) {
+1 -1
drivers/usb/core/urb.c
··· 376 376 if (!urb || !urb->complete) 377 377 return -EINVAL; 378 378 if (urb->hcpriv) { 379 - WARN_ONCE(1, "URB %pK submitted while active\n", urb); 379 + WARN_ONCE(1, "URB %p submitted while active\n", urb); 380 380 return -EBUSY; 381 381 } 382 382
+1
drivers/usb/dwc2/core.c
··· 43 43 /* Backup global regs */ 44 44 gr = &hsotg->gr_backup; 45 45 46 + gr->gintsts = dwc2_readl(hsotg, GINTSTS); 46 47 gr->gotgctl = dwc2_readl(hsotg, GOTGCTL); 47 48 gr->gintmsk = dwc2_readl(hsotg, GINTMSK); 48 49 gr->gahbcfg = dwc2_readl(hsotg, GAHBCFG);
+21 -2
drivers/usb/dwc2/core.h
··· 667 667 /** 668 668 * struct dwc2_gregs_backup - Holds global registers state before 669 669 * entering partial power down 670 + * @gintsts: Backup of GINTSTS register 670 671 * @gotgctl: Backup of GOTGCTL register 671 672 * @gintmsk: Backup of GINTMSK register 672 673 * @gahbcfg: Backup of GAHBCFG register ··· 684 683 * @valid: True if registers values backuped. 685 684 */ 686 685 struct dwc2_gregs_backup { 686 + u32 gintsts; 687 687 u32 gotgctl; 688 688 u32 gintmsk; 689 689 u32 gahbcfg; ··· 1129 1127 #define DWC2_FS_IOT_ID 0x55310000 1130 1128 #define DWC2_HS_IOT_ID 0x55320000 1131 1129 1130 + #define DWC2_RESTORE_DCTL BIT(0) 1131 + #define DWC2_RESTORE_DCFG BIT(1) 1132 + 1132 1133 #if IS_ENABLED(CONFIG_USB_DWC2_HOST) || IS_ENABLED(CONFIG_USB_DWC2_DUAL_ROLE) 1133 1134 union dwc2_hcd_internal_flags { 1134 1135 u32 d32; ··· 1425 1420 #define dwc2_is_device_connected(hsotg) (hsotg->connected) 1426 1421 #define dwc2_is_device_enabled(hsotg) (hsotg->enabled) 1427 1422 int dwc2_backup_device_registers(struct dwc2_hsotg *hsotg); 1428 - int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, int remote_wakeup); 1423 + int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, unsigned int flags); 1429 1424 int dwc2_gadget_enter_hibernation(struct dwc2_hsotg *hsotg); 1430 1425 int dwc2_gadget_exit_hibernation(struct dwc2_hsotg *hsotg, 1431 1426 int rem_wakeup, int reset); ··· 1440 1435 int dwc2_hsotg_tx_fifo_average_depth(struct dwc2_hsotg *hsotg); 1441 1436 void dwc2_gadget_init_lpm(struct dwc2_hsotg *hsotg); 1442 1437 void dwc2_gadget_program_ref_clk(struct dwc2_hsotg *hsotg); 1438 + int dwc2_gadget_backup_critical_registers(struct dwc2_hsotg *hsotg); 1439 + int dwc2_gadget_restore_critical_registers(struct dwc2_hsotg *hsotg, 1440 + unsigned int flags); 1443 1441 static inline void dwc2_clear_fifo_map(struct dwc2_hsotg *hsotg) 1444 1442 { hsotg->fifo_map = 0; } 1445 1443 #else ··· 1467 1459 static inline int dwc2_backup_device_registers(struct dwc2_hsotg *hsotg) 1468 1460 { return 0; } 1469 1461 static inline int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, 1470 - int remote_wakeup) 1462 + unsigned int flags) 1471 1463 { return 0; } 1472 1464 static inline int dwc2_gadget_enter_hibernation(struct dwc2_hsotg *hsotg) 1473 1465 { return 0; } ··· 1490 1482 { return 0; } 1491 1483 static inline void dwc2_gadget_init_lpm(struct dwc2_hsotg *hsotg) {} 1492 1484 static inline void dwc2_gadget_program_ref_clk(struct dwc2_hsotg *hsotg) {} 1485 + static inline int dwc2_gadget_backup_critical_registers(struct dwc2_hsotg *hsotg) 1486 + { return 0; } 1487 + static inline int dwc2_gadget_restore_critical_registers(struct dwc2_hsotg *hsotg, 1488 + unsigned int flags) 1489 + { return 0; } 1493 1490 static inline void dwc2_clear_fifo_map(struct dwc2_hsotg *hsotg) {} 1494 1491 #endif 1495 1492 ··· 1518 1505 void dwc2_host_enter_clock_gating(struct dwc2_hsotg *hsotg); 1519 1506 void dwc2_host_exit_clock_gating(struct dwc2_hsotg *hsotg, int rem_wakeup); 1520 1507 bool dwc2_host_can_poweroff_phy(struct dwc2_hsotg *dwc2); 1508 + int dwc2_host_backup_critical_registers(struct dwc2_hsotg *hsotg); 1509 + int dwc2_host_restore_critical_registers(struct dwc2_hsotg *hsotg); 1521 1510 static inline void dwc2_host_schedule_phy_reset(struct dwc2_hsotg *hsotg) 1522 1511 { schedule_work(&hsotg->phy_reset_work); } 1523 1512 #else ··· 1559 1544 int rem_wakeup) {} 1560 1545 static inline bool dwc2_host_can_poweroff_phy(struct dwc2_hsotg *dwc2) 1561 1546 { return false; } 1547 + static inline int dwc2_host_backup_critical_registers(struct dwc2_hsotg *hsotg) 1548 + { return 0; } 1549 + static inline int dwc2_host_restore_critical_registers(struct dwc2_hsotg *hsotg) 1550 + { return 0; } 1562 1551 static inline void dwc2_host_schedule_phy_reset(struct dwc2_hsotg *hsotg) {} 1563 1552 1564 1553 #endif
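The dwc2 interface change above swaps the remote_wakeup boolean for a flags word so each resume path can restore exactly the register groups it needs. A userspace sketch of flag-gated restore (the flag names follow the patch's DWC2_RESTORE_* defines, but the "registers" here are plain variables, not the controller's DCTL/DCFG):

```c
#include <stdint.h>

#define RESTORE_DCTL	(1u << 0)
#define RESTORE_DCFG	(1u << 1)

struct dregs_backup {
	uint32_t dctl;
	uint32_t dcfg;
};

/*
 * Write back only the register groups selected by flags, leaving the
 * others untouched. Callers compose flags per resume scenario.
 */
void restore_device_regs(const struct dregs_backup *bk,
			 uint32_t *dctl, uint32_t *dcfg,
			 unsigned int flags)
{
	if (flags & RESTORE_DCFG)
		*dcfg = bk->dcfg;
	if (flags & RESTORE_DCTL)
		*dctl = bk->dctl;
}
```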
+60 -56
drivers/usb/dwc2/gadget.c
··· 5204 5204 * if controller power were disabled. 5205 5205 * 5206 5206 * @hsotg: Programming view of the DWC_otg controller 5207 - * @remote_wakeup: Indicates whether resume is initiated by Device or Host. 5207 + * @flags: Defines which registers should be restored. 5208 5208 * 5209 5209 * Return: 0 if successful, negative error code otherwise 5210 5210 */ 5211 - int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, int remote_wakeup) 5211 + int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, unsigned int flags) 5212 5212 { 5213 5213 struct dwc2_dregs_backup *dr; 5214 5214 int i; ··· 5224 5224 } 5225 5225 dr->valid = false; 5226 5226 5227 - if (!remote_wakeup) 5227 + if (flags & DWC2_RESTORE_DCFG) 5228 + dwc2_writel(hsotg, dr->dcfg, DCFG); 5229 + 5230 + if (flags & DWC2_RESTORE_DCTL) 5228 5231 dwc2_writel(hsotg, dr->dctl, DCTL); 5229 5232 5230 5233 dwc2_writel(hsotg, dr->daintmsk, DAINTMSK); ··· 5313 5310 dev_dbg(hsotg->dev, "GREFCLK=0x%08x\n", dwc2_readl(hsotg, GREFCLK)); 5314 5311 } 5315 5312 5313 + int dwc2_gadget_backup_critical_registers(struct dwc2_hsotg *hsotg) 5314 + { 5315 + int ret; 5316 + 5317 + /* Backup all registers */ 5318 + ret = dwc2_backup_global_registers(hsotg); 5319 + if (ret) { 5320 + dev_err(hsotg->dev, "%s: failed to backup global registers\n", 5321 + __func__); 5322 + return ret; 5323 + } 5324 + 5325 + ret = dwc2_backup_device_registers(hsotg); 5326 + if (ret) { 5327 + dev_err(hsotg->dev, "%s: failed to backup device registers\n", 5328 + __func__); 5329 + return ret; 5330 + } 5331 + 5332 + return 0; 5333 + } 5334 + 5335 + int dwc2_gadget_restore_critical_registers(struct dwc2_hsotg *hsotg, 5336 + unsigned int flags) 5337 + { 5338 + int ret; 5339 + 5340 + ret = dwc2_restore_global_registers(hsotg); 5341 + if (ret) { 5342 + dev_err(hsotg->dev, "%s: failed to restore registers\n", 5343 + __func__); 5344 + return ret; 5345 + } 5346 + ret = dwc2_restore_device_registers(hsotg, flags); 5347 + if (ret) { 5348 + dev_err(hsotg->dev, 
"%s: failed to restore device registers\n", 5349 + __func__); 5350 + return ret; 5351 + } 5352 + 5353 + return 0; 5354 + } 5355 + 5316 5356 /** 5317 5357 * dwc2_gadget_enter_hibernation() - Put controller in Hibernation. 5318 5358 * ··· 5373 5327 /* Change to L2(suspend) state */ 5374 5328 hsotg->lx_state = DWC2_L2; 5375 5329 dev_dbg(hsotg->dev, "Start of hibernation completed\n"); 5376 - ret = dwc2_backup_global_registers(hsotg); 5377 - if (ret) { 5378 - dev_err(hsotg->dev, "%s: failed to backup global registers\n", 5379 - __func__); 5330 + ret = dwc2_gadget_backup_critical_registers(hsotg); 5331 + if (ret) 5380 5332 return ret; 5381 - } 5382 - ret = dwc2_backup_device_registers(hsotg); 5383 - if (ret) { 5384 - dev_err(hsotg->dev, "%s: failed to backup device registers\n", 5385 - __func__); 5386 - return ret; 5387 - } 5388 5333 5389 5334 gpwrdn = GPWRDN_PWRDNRSTN; 5390 5335 udelay(10); ··· 5452 5415 u32 gpwrdn; 5453 5416 u32 dctl; 5454 5417 int ret = 0; 5418 + unsigned int flags = 0; 5455 5419 struct dwc2_gregs_backup *gr; 5456 5420 struct dwc2_dregs_backup *dr; 5457 5421 ··· 5515 5477 dctl = dwc2_readl(hsotg, DCTL); 5516 5478 dctl |= DCTL_PWRONPRGDONE; 5517 5479 dwc2_writel(hsotg, dctl, DCTL); 5480 + flags |= DWC2_RESTORE_DCTL; 5518 5481 } 5519 5482 /* Wait for interrupts which must be cleared */ 5520 5483 mdelay(2); ··· 5523 5484 dwc2_writel(hsotg, 0xffffffff, GINTSTS); 5524 5485 5525 5486 /* Restore global registers */ 5526 - ret = dwc2_restore_global_registers(hsotg); 5527 - if (ret) { 5528 - dev_err(hsotg->dev, "%s: failed to restore registers\n", 5529 - __func__); 5487 + ret = dwc2_gadget_restore_critical_registers(hsotg, flags); 5488 + if (ret) 5530 5489 return ret; 5531 - } 5532 - 5533 - /* Restore device registers */ 5534 - ret = dwc2_restore_device_registers(hsotg, rem_wakeup); 5535 - if (ret) { 5536 - dev_err(hsotg->dev, "%s: failed to restore device registers\n", 5537 - __func__); 5538 - return ret; 5539 - } 5540 5490 5541 5491 if (rem_wakeup) { 5542 
5492 mdelay(10); ··· 5559 5531 dev_dbg(hsotg->dev, "Entering device partial power down started.\n"); 5560 5532 5561 5533 /* Backup all registers */ 5562 - ret = dwc2_backup_global_registers(hsotg); 5563 - if (ret) { 5564 - dev_err(hsotg->dev, "%s: failed to backup global registers\n", 5565 - __func__); 5534 + ret = dwc2_gadget_backup_critical_registers(hsotg); 5535 + if (ret) 5566 5536 return ret; 5567 - } 5568 - 5569 - ret = dwc2_backup_device_registers(hsotg); 5570 - if (ret) { 5571 - dev_err(hsotg->dev, "%s: failed to backup device registers\n", 5572 - __func__); 5573 - return ret; 5574 - } 5575 5537 5576 5538 /* 5577 5539 * Clear any pending interrupts since dwc2 will not be able to ··· 5608 5590 { 5609 5591 u32 pcgcctl; 5610 5592 u32 dctl; 5611 - struct dwc2_dregs_backup *dr; 5612 5593 int ret = 0; 5613 - 5614 - dr = &hsotg->dr_backup; 5615 5594 5616 5595 dev_dbg(hsotg->dev, "Exiting device partial Power Down started.\n"); 5617 5596 ··· 5626 5611 5627 5612 udelay(100); 5628 5613 if (restore) { 5629 - ret = dwc2_restore_global_registers(hsotg); 5630 - if (ret) { 5631 - dev_err(hsotg->dev, "%s: failed to restore registers\n", 5632 - __func__); 5614 + ret = dwc2_gadget_restore_critical_registers(hsotg, DWC2_RESTORE_DCTL | 5615 + DWC2_RESTORE_DCFG); 5616 + if (ret) 5633 5617 return ret; 5634 - } 5635 - /* Restore DCFG */ 5636 - dwc2_writel(hsotg, dr->dcfg, DCFG); 5637 - 5638 - ret = dwc2_restore_device_registers(hsotg, 0); 5639 - if (ret) { 5640 - dev_err(hsotg->dev, "%s: failed to restore device registers\n", 5641 - __func__); 5642 - return ret; 5643 - } 5644 5618 } 5645 5619 5646 5620 /* Set the Power-On Programming done bit */
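The dwc2 gadget changes above replace the old `remote_wakeup` parameter with a `flags` bitmask (`DWC2_RESTORE_DCTL`, `DWC2_RESTORE_DCFG`) so each resume path states exactly which registers the shared restore helper should rewrite. A minimal standalone sketch of that pattern, with register I/O replaced by a hypothetical backup struct (not the kernel implementation):

```c
#include <assert.h>
#include <stdint.h>

#define RESTORE_DCTL (1u << 0)  /* mirrors DWC2_RESTORE_DCTL */
#define RESTORE_DCFG (1u << 1)  /* mirrors DWC2_RESTORE_DCFG */

struct dev_regs   { uint32_t dctl, dcfg, daintmsk; };
struct dev_backup { uint32_t dctl, dcfg, daintmsk; int valid; };

/*
 * Restore only the registers the caller asked for; DAINTMSK is restored
 * unconditionally, matching the unconditional writes in the kernel version.
 * The backup is single-use: it is invalidated on first restore.
 */
static int restore_device_regs(struct dev_regs *hw, struct dev_backup *bk,
                               unsigned int flags)
{
	if (!bk->valid)
		return -1;
	bk->valid = 0;

	if (flags & RESTORE_DCFG)
		hw->dcfg = bk->dcfg;
	if (flags & RESTORE_DCTL)
		hw->dctl = bk->dctl;
	hw->daintmsk = bk->daintmsk;
	return 0;
}
```

In the diff, partial-power-down exit passes `DWC2_RESTORE_DCTL | DWC2_RESTORE_DCFG`, while the hibernation-exit path only adds `DWC2_RESTORE_DCTL` on the branch where it programmed `DCTL_PWRONPRGDONE` itself.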
+51 -48
drivers/usb/dwc2/hcd.c
··· 5474 5474 return 0; 5475 5475 } 5476 5476 5477 + int dwc2_host_backup_critical_registers(struct dwc2_hsotg *hsotg) 5478 + { 5479 + int ret; 5480 + 5481 + /* Backup all registers */ 5482 + ret = dwc2_backup_global_registers(hsotg); 5483 + if (ret) { 5484 + dev_err(hsotg->dev, "%s: failed to backup global registers\n", 5485 + __func__); 5486 + return ret; 5487 + } 5488 + 5489 + ret = dwc2_backup_host_registers(hsotg); 5490 + if (ret) { 5491 + dev_err(hsotg->dev, "%s: failed to backup host registers\n", 5492 + __func__); 5493 + return ret; 5494 + } 5495 + 5496 + return 0; 5497 + } 5498 + 5499 + int dwc2_host_restore_critical_registers(struct dwc2_hsotg *hsotg) 5500 + { 5501 + int ret; 5502 + 5503 + ret = dwc2_restore_global_registers(hsotg); 5504 + if (ret) { 5505 + dev_err(hsotg->dev, "%s: failed to restore registers\n", 5506 + __func__); 5507 + return ret; 5508 + } 5509 + 5510 + ret = dwc2_restore_host_registers(hsotg); 5511 + if (ret) { 5512 + dev_err(hsotg->dev, "%s: failed to restore host registers\n", 5513 + __func__); 5514 + return ret; 5515 + } 5516 + 5517 + return 0; 5518 + } 5519 + 5477 5520 /** 5478 5521 * dwc2_host_enter_hibernation() - Put controller in Hibernation. 
5479 5522 * ··· 5532 5489 u32 gpwrdn; 5533 5490 5534 5491 dev_dbg(hsotg->dev, "Preparing host for hibernation\n"); 5535 - ret = dwc2_backup_global_registers(hsotg); 5536 - if (ret) { 5537 - dev_err(hsotg->dev, "%s: failed to backup global registers\n", 5538 - __func__); 5492 + ret = dwc2_host_backup_critical_registers(hsotg); 5493 + if (ret) 5539 5494 return ret; 5540 - } 5541 - ret = dwc2_backup_host_registers(hsotg); 5542 - if (ret) { 5543 - dev_err(hsotg->dev, "%s: failed to backup host registers\n", 5544 - __func__); 5545 - return ret; 5546 - } 5547 5495 5548 5496 /* Enter USB Suspend Mode */ 5549 5497 hprt0 = dwc2_readl(hsotg, HPRT0); ··· 5728 5694 dwc2_writel(hsotg, 0xffffffff, GINTSTS); 5729 5695 5730 5696 /* Restore global registers */ 5731 - ret = dwc2_restore_global_registers(hsotg); 5732 - if (ret) { 5733 - dev_err(hsotg->dev, "%s: failed to restore registers\n", 5734 - __func__); 5697 + ret = dwc2_host_restore_critical_registers(hsotg); 5698 + if (ret) 5735 5699 return ret; 5736 - } 5737 - 5738 - /* Restore host registers */ 5739 - ret = dwc2_restore_host_registers(hsotg); 5740 - if (ret) { 5741 - dev_err(hsotg->dev, "%s: failed to restore host registers\n", 5742 - __func__); 5743 - return ret; 5744 - } 5745 5700 5746 5701 if (rem_wakeup) { 5747 5702 dwc2_hcd_rem_wakeup(hsotg); ··· 5797 5774 dev_warn(hsotg->dev, "Suspend wasn't generated\n"); 5798 5775 5799 5776 /* Backup all registers */ 5800 - ret = dwc2_backup_global_registers(hsotg); 5801 - if (ret) { 5802 - dev_err(hsotg->dev, "%s: failed to backup global registers\n", 5803 - __func__); 5777 + ret = dwc2_host_backup_critical_registers(hsotg); 5778 + if (ret) 5804 5779 return ret; 5805 - } 5806 - 5807 - ret = dwc2_backup_host_registers(hsotg); 5808 - if (ret) { 5809 - dev_err(hsotg->dev, "%s: failed to backup host registers\n", 5810 - __func__); 5811 - return ret; 5812 - } 5813 5780 5814 5781 /* 5815 5782 * Clear any pending interrupts since dwc2 will not be able to ··· 5868 5855 5869 5856 
udelay(100); 5870 5857 if (restore) { 5871 - ret = dwc2_restore_global_registers(hsotg); 5872 - if (ret) { 5873 - dev_err(hsotg->dev, "%s: failed to restore registers\n", 5874 - __func__); 5858 + ret = dwc2_host_restore_critical_registers(hsotg); 5859 + if (ret) 5875 5860 return ret; 5876 - } 5877 - 5878 - ret = dwc2_restore_host_registers(hsotg); 5879 - if (ret) { 5880 - dev_err(hsotg->dev, "%s: failed to restore host registers\n", 5881 - __func__); 5882 - return ret; 5883 - } 5884 5861 } 5885 5862 5886 5863 /* Drive resume signaling and exit suspend mode on the port. */
+38
drivers/usb/dwc2/platform.c
··· 685 685 regulator_disable(dwc2->usb33d); 686 686 } 687 687 688 + if (is_device_mode) 689 + ret = dwc2_gadget_backup_critical_registers(dwc2); 690 + else 691 + ret = dwc2_host_backup_critical_registers(dwc2); 692 + 693 + if (ret) 694 + return ret; 695 + 688 696 if (dwc2->ll_hw_enabled && 689 697 (is_device_mode || dwc2_host_can_poweroff_phy(dwc2))) { 690 698 ret = __dwc2_lowlevel_hw_disable(dwc2); ··· 700 692 } 701 693 702 694 return ret; 695 + } 696 + 697 + static int dwc2_restore_critical_registers(struct dwc2_hsotg *hsotg) 698 + { 699 + struct dwc2_gregs_backup *gr; 700 + 701 + gr = &hsotg->gr_backup; 702 + 703 + if (!gr->valid) { 704 + dev_err(hsotg->dev, "No valid register backup, failed to restore\n"); 705 + return -EINVAL; 706 + } 707 + 708 + if (gr->gintsts & GINTSTS_CURMODE_HOST) 709 + return dwc2_host_restore_critical_registers(hsotg); 710 + 711 + return dwc2_gadget_restore_critical_registers(hsotg, DWC2_RESTORE_DCTL | 712 + DWC2_RESTORE_DCFG); 703 713 } 704 714 705 715 static int __maybe_unused dwc2_resume(struct device *dev) ··· 731 705 return ret; 732 706 } 733 707 dwc2->phy_off_for_suspend = false; 708 + 709 + /* 710 + * During suspend it's possible that the power domain for the 711 + * DWC2 controller is disabled and all register values get lost. 712 + * In case the GUSBCFG register is not initialized, it's clear the 713 + * registers must be restored. 714 + */ 715 + if (!(dwc2_readl(dwc2, GUSBCFG) & GUSBCFG_TOUTCAL_MASK)) { 716 + ret = dwc2_restore_critical_registers(dwc2); 717 + if (ret) 718 + return ret; 719 + } 734 720 735 721 if (dwc2->params.activate_stm_id_vb_detection) { 736 722 unsigned long flags;
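The platform.c hunk above adds two decisions on resume: detect power loss by checking whether the `GUSBCFG` timeout-calibration field reads back as zero (the driver treats a zeroed field as "never initialized"), then pick the host or gadget restore path from the mode bit saved in the `GINTSTS` backup. A simplified sketch of both predicates as pure functions, using the register field values from dwc2's `hw.h` but none of its I/O:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define GUSBCFG_TOUTCAL_MASK 0x7u       /* FS timeout-calibration field, bits 2:0 */
#define GINTSTS_CURMODE_HOST (1u << 0)  /* current-mode bit in GINTSTS */

/* Registers were lost if the timeout-calibration field was reset to 0. */
static int dwc2_registers_lost(uint32_t gusbcfg)
{
	return !(gusbcfg & GUSBCFG_TOUTCAL_MASK);
}

/*
 * Choose the restore path from the backed-up GINTSTS mode bit, as the new
 * dwc2_restore_critical_registers() does.
 */
static const char *restore_path(uint32_t saved_gintsts)
{
	return (saved_gintsts & GINTSTS_CURMODE_HOST) ? "host" : "gadget";
}
```

This is also why the core.h hunk adds `gintsts` to `struct dwc2_gregs_backup`: the mode at backup time must survive the power loss that wiped the live register.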
+2 -10
drivers/usb/dwc3/dwc3-am62.c
··· 153 153 { 154 154 struct device *dev = am62->dev; 155 155 struct device_node *node = dev->of_node; 156 - struct of_phandle_args args; 157 156 struct regmap *syscon; 158 157 int ret; 159 158 160 - syscon = syscon_regmap_lookup_by_phandle(node, "ti,syscon-phy-pll-refclk"); 159 + syscon = syscon_regmap_lookup_by_phandle_args(node, "ti,syscon-phy-pll-refclk", 160 + 1, &am62->offset); 161 161 if (IS_ERR(syscon)) { 162 162 dev_err(dev, "unable to get ti,syscon-phy-pll-refclk regmap\n"); 163 163 return PTR_ERR(syscon); 164 164 } 165 165 166 166 am62->syscon = syscon; 167 - 168 - ret = of_parse_phandle_with_fixed_args(node, "ti,syscon-phy-pll-refclk", 1, 169 - 0, &args); 170 - if (ret) 171 - return ret; 172 - 173 - of_node_put(args.np); 174 - am62->offset = args.args[0]; 175 167 176 168 /* Core voltage. PHY_CORE_VOLTAGE bit Recommended to be 0 always */ 177 169 ret = regmap_update_bits(am62->syscon, am62->offset, PHY_CORE_VOLTAGE_MASK, 0);
+9
drivers/usb/dwc3/dwc3-exynos.c
··· 163 163 .suspend_clk_idx = 1, 164 164 }; 165 165 166 + static const struct dwc3_exynos_driverdata exynos7870_drvdata = { 167 + .clk_names = { "bus_early", "ref", "ctrl" }, 168 + .num_clks = 3, 169 + .suspend_clk_idx = -1, 170 + }; 171 + 166 172 static const struct dwc3_exynos_driverdata exynos850_drvdata = { 167 173 .clk_names = { "bus_early", "ref" }, 168 174 .num_clks = 2, ··· 191 185 }, { 192 186 .compatible = "samsung,exynos7-dwusb3", 193 187 .data = &exynos7_drvdata, 188 + }, { 189 + .compatible = "samsung,exynos7870-dwusb3", 190 + .data = &exynos7870_drvdata, 194 191 }, { 195 192 .compatible = "samsung,exynos850-dwusb3", 196 193 .data = &exynos850_drvdata,
+10
drivers/usb/dwc3/dwc3-pci.c
··· 148 148 {} 149 149 }; 150 150 151 + /* 152 + * Intel Merrifield SoC uses these endpoints for tracing and they cannot 153 + * be re-allocated if being used because the side band flow control signals 154 + * are hard wired to certain endpoints: 155 + * - 1 High BW Bulk IN (IN#1) (RTIT) 156 + * - 1 1KB BW Bulk IN (IN#8) + 1 1KB BW Bulk OUT (Run Control) (OUT#8) 157 + */ 158 + static const u8 dwc3_pci_mrfld_reserved_endpoints[] = { 3, 16, 17 }; 159 + 151 160 static const struct property_entry dwc3_pci_mrfld_properties[] = { 152 161 PROPERTY_ENTRY_STRING("dr_mode", "otg"), 153 162 PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"), 154 163 PROPERTY_ENTRY_BOOL("snps,dis_u3_susphy_quirk"), 155 164 PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"), 165 + PROPERTY_ENTRY_U8_ARRAY("snps,reserved-endpoints", dwc3_pci_mrfld_reserved_endpoints), 156 166 PROPERTY_ENTRY_BOOL("snps,usb2-gadget-lpm-disable"), 157 167 PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"), 158 168 {}
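The Merrifield reserved list `{ 3, 16, 17 }` uses dwc3's flat endpoint indexing, where even indices are OUT endpoints and odd indices are IN, i.e. index = (endpoint number × 2) + direction. A one-line sketch of the mapping, which reproduces the comment's IN#1 / OUT#8 / IN#8 assignments:

```c
#include <assert.h>

/* dwc3 addresses endpoints with a single index: even = OUT, odd = IN. */
static unsigned int dwc3_ep_index(unsigned int usb_ep_num, int is_in)
{
	return usb_ep_num * 2 + (is_in ? 1 : 0);
}
```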
+1 -1
drivers/usb/dwc3/dwc3-st.c
··· 225 225 226 226 dwc3_data->syscfg_reg_off = res->start; 227 227 228 - dev_vdbg(dev, "glue-logic addr 0x%pK, syscfg-reg offset 0x%x\n", 228 + dev_vdbg(dev, "glue-logic addr 0x%p, syscfg-reg offset 0x%x\n", 229 229 dwc3_data->glue_base, dwc3_data->syscfg_reg_off); 230 230 231 231 struct device_node *child __free(device_node) = of_get_compatible_child(node,
+61 -8
drivers/usb/dwc3/gadget.c
··· 547 547 int dwc3_gadget_start_config(struct dwc3 *dwc, unsigned int resource_index) 548 548 { 549 549 struct dwc3_gadget_ep_cmd_params params; 550 + struct dwc3_ep *dep; 550 551 u32 cmd; 551 552 int i; 552 553 int ret; ··· 564 563 return ret; 565 564 566 565 /* Reset resource allocation flags */ 567 - for (i = resource_index; i < dwc->num_eps && dwc->eps[i]; i++) 568 - dwc->eps[i]->flags &= ~DWC3_EP_RESOURCE_ALLOCATED; 566 + for (i = resource_index; i < dwc->num_eps; i++) { 567 + dep = dwc->eps[i]; 568 + if (!dep) 569 + continue; 570 + 571 + dep->flags &= ~DWC3_EP_RESOURCE_ALLOCATED; 572 + } 569 573 570 574 return 0; 571 575 } ··· 757 751 758 752 dwc->last_fifo_depth = fifo_depth; 759 753 /* Clear existing TXFIFO for all IN eps except ep0 */ 760 - for (num = 3; num < min_t(int, dwc->num_eps, DWC3_ENDPOINTS_NUM); 761 - num += 2) { 754 + for (num = 3; num < min_t(int, dwc->num_eps, DWC3_ENDPOINTS_NUM); num += 2) { 762 755 dep = dwc->eps[num]; 756 + if (!dep) 757 + continue; 758 + 763 759 /* Don't change TXFRAMNUM on usb31 version */ 764 760 size = DWC3_IP_IS(DWC3) ? 
0 : 765 761 dwc3_readl(dwc->regs, DWC3_GTXFIFOSIZ(num >> 1)) & ··· 1979 1971 return -ESHUTDOWN; 1980 1972 } 1981 1973 1982 - if (WARN(req->dep != dep, "request %pK belongs to '%s'\n", 1974 + if (WARN(req->dep != dep, "request %p belongs to '%s'\n", 1983 1975 &req->request, req->dep->name)) 1984 1976 return -EINVAL; 1985 1977 1986 1978 if (WARN(req->status < DWC3_REQUEST_STATUS_COMPLETED, 1987 - "%s: request %pK already in flight\n", 1979 + "%s: request %p already in flight\n", 1988 1980 dep->name, &req->request)) 1989 1981 return -EINVAL; 1990 1982 ··· 2173 2165 } 2174 2166 } 2175 2167 2176 - dev_err(dwc->dev, "request %pK was not queued to %s\n", 2168 + dev_err(dwc->dev, "request %p was not queued to %s\n", 2177 2169 request, ep->name); 2178 2170 ret = -EINVAL; 2179 2171 out: ··· 3437 3429 return 0; 3438 3430 } 3439 3431 3432 + static int dwc3_gadget_get_reserved_endpoints(struct dwc3 *dwc, const char *propname, 3433 + u8 *eps, u8 num) 3434 + { 3435 + u8 count; 3436 + int ret; 3437 + 3438 + if (!device_property_present(dwc->dev, propname)) 3439 + return 0; 3440 + 3441 + ret = device_property_count_u8(dwc->dev, propname); 3442 + if (ret < 0) 3443 + return ret; 3444 + count = ret; 3445 + 3446 + ret = device_property_read_u8_array(dwc->dev, propname, eps, min(num, count)); 3447 + if (ret) 3448 + return ret; 3449 + 3450 + return count; 3451 + } 3452 + 3440 3453 static int dwc3_gadget_init_endpoints(struct dwc3 *dwc, u8 total) 3441 3454 { 3455 + const char *propname = "snps,reserved-endpoints"; 3442 3456 u8 epnum; 3457 + u8 reserved_eps[DWC3_ENDPOINTS_NUM]; 3458 + u8 count; 3459 + u8 num; 3460 + int ret; 3443 3461 3444 3462 INIT_LIST_HEAD(&dwc->gadget->ep_list); 3445 3463 3464 + ret = dwc3_gadget_get_reserved_endpoints(dwc, propname, 3465 + reserved_eps, ARRAY_SIZE(reserved_eps)); 3466 + if (ret < 0) { 3467 + dev_err(dwc->dev, "failed to read %s\n", propname); 3468 + return ret; 3469 + } 3470 + count = ret; 3471 + 3446 3472 for (epnum = 0; epnum < total; epnum++) { 
3447 - int ret; 3473 + for (num = 0; num < count; num++) { 3474 + if (epnum == reserved_eps[num]) 3475 + break; 3476 + } 3477 + if (num < count) 3478 + continue; 3448 3479 3449 3480 ret = dwc3_gadget_init_endpoint(dwc, epnum); 3450 3481 if (ret) ··· 3750 3703 3751 3704 for (i = 0; i < DWC3_ENDPOINTS_NUM; i++) { 3752 3705 dep = dwc->eps[i]; 3706 + if (!dep) 3707 + continue; 3753 3708 3754 3709 if (!(dep->flags & DWC3_EP_ENABLED)) 3755 3710 continue; ··· 3901 3852 u8 epnum = event->endpoint_number; 3902 3853 3903 3854 dep = dwc->eps[epnum]; 3855 + if (!dep) { 3856 + dev_warn(dwc->dev, "spurious event, endpoint %u is not allocated\n", epnum); 3857 + return; 3858 + } 3904 3859 3905 3860 if (!(dep->flags & DWC3_EP_ENABLED)) { 3906 3861 if ((epnum > 1) && !(dep->flags & DWC3_EP_TRANSFER_STARTED))
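The new inner loop in dwc3_gadget_init_endpoints() skips any index that appears in the `snps,reserved-endpoints` array. A standalone sketch of that filter with plain arrays in place of device properties (hypothetical helper names, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Return 1 if epnum appears in the reserved list, mirroring the inner
 * match loop added to dwc3_gadget_init_endpoints().
 */
static int ep_is_reserved(const uint8_t *reserved, size_t count, uint8_t epnum)
{
	size_t i;

	for (i = 0; i < count; i++)
		if (reserved[i] == epnum)
			return 1;
	return 0;
}

/* Count how many of 'total' endpoints would actually be initialized. */
static size_t count_initialized(const uint8_t *reserved, size_t count,
				uint8_t total)
{
	size_t n = 0;
	uint8_t epnum;

	for (epnum = 0; epnum < total; epnum++)
		if (!ep_is_reserved(reserved, count, epnum))
			n++;
	return n;
}
```

Skipping init entirely is why the other hunks in this file now NULL-check `dwc->eps[i]` before use: reserved slots stay unallocated, and a spurious transfer event for one must not be dereferenced.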
-2
drivers/usb/gadget/function/uvc_queue.c
··· 122 122 .queue_setup = uvc_queue_setup, 123 123 .buf_prepare = uvc_buffer_prepare, 124 124 .buf_queue = uvc_buffer_queue, 125 - .wait_prepare = vb2_ops_wait_prepare, 126 - .wait_finish = vb2_ops_wait_finish, 127 125 }; 128 126 129 127 int uvcg_queue_init(struct uvc_video_queue *queue, struct device *dev, enum v4l2_buf_type type,
+3
drivers/usb/gadget/udc/aspeed-vhub/dev.c
··· 548 548 d->vhub = vhub; 549 549 d->index = idx; 550 550 d->name = devm_kasprintf(parent, GFP_KERNEL, "port%d", idx+1); 551 + if (!d->name) 552 + return -ENOMEM; 553 + 551 554 d->regs = vhub->regs + 0x100 + 0x10 * idx; 552 555 553 556 ast_vhub_init_ep0(vhub, &d->ep0, d);
+7
drivers/usb/host/max3421-hcd.c
··· 1946 1946 usb_put_hcd(hcd); 1947 1947 } 1948 1948 1949 + static const struct spi_device_id max3421_spi_ids[] = { 1950 + { "max3421" }, 1951 + { }, 1952 + }; 1953 + MODULE_DEVICE_TABLE(spi, max3421_spi_ids); 1954 + 1949 1955 static const struct of_device_id max3421_of_match_table[] = { 1950 1956 { .compatible = "maxim,max3421", }, 1951 1957 {}, ··· 1961 1955 static struct spi_driver max3421_driver = { 1962 1956 .probe = max3421_probe, 1963 1957 .remove = max3421_remove, 1958 + .id_table = max3421_spi_ids, 1964 1959 .driver = { 1965 1960 .name = "max3421-hcd", 1966 1961 .of_match_table = max3421_of_match_table,
+1 -1
drivers/usb/host/xhci-histb.c
··· 355 355 if (!device_may_wakeup(dev)) 356 356 xhci_histb_host_enable(histb); 357 357 358 - return xhci_resume(xhci, PMSG_RESUME); 358 + return xhci_resume(xhci, false, false); 359 359 } 360 360 361 361 static const struct dev_pm_ops xhci_histb_pm_ops = {
+18 -16
drivers/usb/host/xhci-mem.c
··· 1953 1953 xhci->interrupters = NULL; 1954 1954 1955 1955 xhci->page_size = 0; 1956 - xhci->page_shift = 0; 1957 1956 xhci->usb2_rhub.bus_state.bus_suspended = 0; 1958 1957 xhci->usb3_rhub.bus_state.bus_suspended = 0; 1959 1958 } ··· 2371 2372 } 2372 2373 EXPORT_SYMBOL_GPL(xhci_create_secondary_interrupter); 2373 2374 2375 + static void xhci_hcd_page_size(struct xhci_hcd *xhci) 2376 + { 2377 + u32 page_size; 2378 + 2379 + page_size = readl(&xhci->op_regs->page_size) & XHCI_PAGE_SIZE_MASK; 2380 + if (!is_power_of_2(page_size)) { 2381 + xhci_warn(xhci, "Invalid page size register = 0x%x\n", page_size); 2382 + /* Fallback to 4K page size, since that's common */ 2383 + page_size = 1; 2384 + } 2385 + 2386 + xhci->page_size = page_size << 12; 2387 + xhci_dbg_trace(xhci, trace_xhci_dbg_init, "HCD page size set to %iK", 2388 + xhci->page_size >> 10); 2389 + } 2390 + 2374 2391 int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags) 2375 2392 { 2376 2393 struct xhci_interrupter *ir; ··· 2394 2379 dma_addr_t dma; 2395 2380 unsigned int val, val2; 2396 2381 u64 val_64; 2397 - u32 page_size, temp; 2382 + u32 temp; 2398 2383 int i; 2399 2384 2400 2385 INIT_LIST_HEAD(&xhci->cmd_list); ··· 2403 2388 INIT_DELAYED_WORK(&xhci->cmd_timer, xhci_handle_command_timeout); 2404 2389 init_completion(&xhci->cmd_ring_stop_completion); 2405 2390 2406 - page_size = readl(&xhci->op_regs->page_size); 2407 - xhci_dbg_trace(xhci, trace_xhci_dbg_init, 2408 - "Supported page size register = 0x%x", page_size); 2409 - i = ffs(page_size); 2410 - if (i < 16) 2411 - xhci_dbg_trace(xhci, trace_xhci_dbg_init, 2412 - "Supported page size of %iK", (1 << (i+12)) / 1024); 2413 - else 2414 - xhci_warn(xhci, "WARN: no supported page size\n"); 2415 - /* Use 4K pages, since that's common and the minimum the HC supports */ 2416 - xhci->page_shift = 12; 2417 - xhci->page_size = 1 << xhci->page_shift; 2418 - xhci_dbg_trace(xhci, trace_xhci_dbg_init, 2419 - "HCD page size set to %iK", xhci->page_size / 1024); 2391 + 
xhci_hcd_page_size(xhci); 2420 2392 2421 2393 /* 2422 2394 * Program the Number of Device Slots Enabled field in the CONFIG
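The rewritten xhci_hcd_page_size() only trusts the PAGE_SIZE register when exactly one bit is set (bit n set means a page size of 2^(n+12) bytes) and falls back to 4K otherwise, instead of the old code's unconditional 4K with a debug print. A self-contained sketch of the decoding:

```c
#include <assert.h>
#include <stdint.h>

#define XHCI_PAGE_SIZE_MASK 0xffffu

static int is_power_of_2(uint32_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/*
 * Decode the xHCI PAGE_SIZE operational register: bit n set means a page
 * size of 2^(n+12) bytes; a zero or multi-bit value falls back to 4K,
 * as in the new helper.
 */
static uint32_t xhci_page_bytes(uint32_t reg)
{
	uint32_t ps = reg & XHCI_PAGE_SIZE_MASK;

	if (!is_power_of_2(ps))
		ps = 1;  /* 4K fallback */
	return ps << 12;
}
```

This also lets the struct drop the redundant `page_shift` field, which the hunk above removes from xhci_mem_cleanup().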
-10
drivers/usb/host/xhci-mvebu.c
··· 73 73 74 74 return 0; 75 75 } 76 - 77 - int xhci_mvebu_a3700_init_quirk(struct usb_hcd *hcd) 78 - { 79 - struct xhci_hcd *xhci = hcd_to_xhci(hcd); 80 - 81 - /* Without reset on resume, the HC won't work at all */ 82 - xhci->quirks |= XHCI_RESET_ON_RESUME; 83 - 84 - return 0; 85 - }
-6
drivers/usb/host/xhci-mvebu.h
··· 12 12 13 13 #if IS_ENABLED(CONFIG_USB_XHCI_MVEBU) 14 14 int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd); 15 - int xhci_mvebu_a3700_init_quirk(struct usb_hcd *hcd); 16 15 #else 17 16 static inline int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd) 18 - { 19 - return 0; 20 - } 21 - 22 - static inline int xhci_mvebu_a3700_init_quirk(struct usb_hcd *hcd) 23 17 { 24 18 return 0; 25 19 }
+5 -3
drivers/usb/host/xhci-pci.c
··· 807 807 808 808 static int xhci_pci_resume(struct usb_hcd *hcd, pm_message_t msg) 809 809 { 810 - struct xhci_hcd *xhci = hcd_to_xhci(hcd); 811 - struct pci_dev *pdev = to_pci_dev(hcd->self.controller); 810 + struct xhci_hcd *xhci = hcd_to_xhci(hcd); 811 + struct pci_dev *pdev = to_pci_dev(hcd->self.controller); 812 + bool power_lost = msg.event == PM_EVENT_RESTORE; 813 + bool is_auto_resume = msg.event == PM_EVENT_AUTO_RESUME; 812 814 813 815 reset_control_reset(xhci->reset); 814 816 ··· 841 839 if (xhci->quirks & XHCI_PME_STUCK_QUIRK) 842 840 xhci_pme_quirk(hcd); 843 841 844 - return xhci_resume(xhci, msg); 842 + return xhci_resume(xhci, power_lost, is_auto_resume); 845 843 } 846 844 847 845 static int xhci_pci_poweroff_late(struct usb_hcd *hcd, bool do_wakeup)
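With xhci_resume() now taking explicit `power_lost` and `is_auto_resume` booleans instead of a `pm_message_t`, each glue layer derives the flags itself, as the PCI hunk above does from `msg.event`. A minimal sketch of that translation (the enum values here are illustrative stand-ins, not the kernel's PM event codes):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the kernel's PM event codes. */
enum pm_event { PM_EVENT_RESUME, PM_EVENT_RESTORE, PM_EVENT_AUTO_RESUME };

struct resume_args { bool power_lost, is_auto_resume; };

/* Translate a PM event into the two explicit flags xhci_resume() takes. */
static struct resume_args xhci_resume_args(enum pm_event event)
{
	struct resume_args a = {
		.power_lost     = (event == PM_EVENT_RESTORE),
		.is_auto_resume = (event == PM_EVENT_AUTO_RESUME),
	};
	return a;
}
```

The explicit flag is what lets the platform glue OR in its own `priv->power_lost` (set by upper layers such as cdns3) rather than faking a PM message.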
+7 -6
drivers/usb/host/xhci-plat.c
··· 106 106 }; 107 107 108 108 static const struct xhci_plat_priv xhci_plat_marvell_armada3700 = { 109 - .init_quirk = xhci_mvebu_a3700_init_quirk, 109 + .quirks = XHCI_RESET_ON_RESUME, 110 110 }; 111 111 112 112 static const struct xhci_plat_priv xhci_plat_brcm = { ··· 479 479 return 0; 480 480 } 481 481 482 - static int xhci_plat_resume_common(struct device *dev, struct pm_message pmsg) 482 + static int xhci_plat_resume_common(struct device *dev, bool power_lost) 483 483 { 484 484 struct usb_hcd *hcd = dev_get_drvdata(dev); 485 + struct xhci_plat_priv *priv = hcd_to_xhci_priv(hcd); 485 486 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 486 487 int ret; 487 488 ··· 502 501 if (ret) 503 502 goto disable_clks; 504 503 505 - ret = xhci_resume(xhci, pmsg); 504 + ret = xhci_resume(xhci, power_lost || priv->power_lost, false); 506 505 if (ret) 507 506 goto disable_clks; 508 507 ··· 523 522 524 523 static int xhci_plat_resume(struct device *dev) 525 524 { 526 - return xhci_plat_resume_common(dev, PMSG_RESUME); 525 + return xhci_plat_resume_common(dev, false); 527 526 } 528 527 529 528 static int xhci_plat_restore(struct device *dev) 530 529 { 531 - return xhci_plat_resume_common(dev, PMSG_RESTORE); 530 + return xhci_plat_resume_common(dev, true); 532 531 } 533 532 534 533 static int __maybe_unused xhci_plat_runtime_suspend(struct device *dev) ··· 549 548 struct usb_hcd *hcd = dev_get_drvdata(dev); 550 549 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 551 550 552 - return xhci_resume(xhci, PMSG_AUTO_RESUME); 551 + return xhci_resume(xhci, false, true); 553 552 } 554 553 555 554 const struct dev_pm_ops xhci_plat_pm_ops = {
+1
drivers/usb/host/xhci-plat.h
··· 15 15 struct xhci_plat_priv { 16 16 const char *firmware_name; 17 17 unsigned long long quirks; 18 + bool power_lost; 18 19 void (*plat_start)(struct usb_hcd *); 19 20 int (*init_quirk)(struct usb_hcd *); 20 21 int (*suspend_quirk)(struct usb_hcd *);
+213 -209
drivers/usb/host/xhci-ring.c
··· 204 204 } 205 205 206 206 /* 207 + * If enqueue points at a link TRB, follow links until an ordinary TRB is reached. 208 + * Toggle the cycle bit of passed link TRBs and optionally chain them. 209 + */ 210 + static void inc_enq_past_link(struct xhci_hcd *xhci, struct xhci_ring *ring, u32 chain) 211 + { 212 + unsigned int link_trb_count = 0; 213 + 214 + while (trb_is_link(ring->enqueue)) { 215 + 216 + /* 217 + * Section 6.4.4.1 of the 0.95 spec says link TRBs cannot have the chain bit 218 + * set, but other sections talk about dealing with the chain bit set. This was 219 + * fixed in the 0.96 specification errata, but we have to assume that all 0.95 220 + * xHCI hardware can't handle the chain bit being cleared on a link TRB. 221 + * 222 + * On 0.95 and some 0.96 HCs the chain bit is set once at segment initalization 223 + * and never changed here. On all others, modify it as requested by the caller. 224 + */ 225 + if (!xhci_link_chain_quirk(xhci, ring->type)) { 226 + ring->enqueue->link.control &= cpu_to_le32(~TRB_CHAIN); 227 + ring->enqueue->link.control |= cpu_to_le32(chain); 228 + } 229 + 230 + /* Give this link TRB to the hardware */ 231 + wmb(); 232 + ring->enqueue->link.control ^= cpu_to_le32(TRB_CYCLE); 233 + 234 + /* Toggle the cycle bit after the last ring segment. */ 235 + if (link_trb_toggles_cycle(ring->enqueue)) 236 + ring->cycle_state ^= 1; 237 + 238 + ring->enq_seg = ring->enq_seg->next; 239 + ring->enqueue = ring->enq_seg->trbs; 240 + 241 + trace_xhci_inc_enq(ring); 242 + 243 + if (link_trb_count++ > ring->num_segs) { 244 + xhci_warn(xhci, "Link TRB loop at enqueue\n"); 245 + break; 246 + } 247 + } 248 + } 249 + 250 + /* 207 251 * See Cycle bit rules. SW is the consumer for the event ring only. 208 252 * 209 253 * If we've just enqueued a TRB that is in the middle of a TD (meaning the 210 254 * chain bit is set), then set the chain bit in all the following link TRBs. 
211 255 * If we've enqueued the last TRB in a TD, make sure the following link TRBs 212 256 * have their chain bit cleared (so that each Link TRB is a separate TD). 213 - * 214 - * Section 6.4.4.1 of the 0.95 spec says link TRBs cannot have the chain bit 215 - * set, but other sections talk about dealing with the chain bit set. This was 216 - * fixed in the 0.96 specification errata, but we have to assume that all 0.95 217 - * xHCI hardware can't handle the chain bit being cleared on a link TRB. 218 257 * 219 258 * @more_trbs_coming: Will you enqueue more TRBs before calling 220 259 * prepare_transfer()? ··· 262 223 bool more_trbs_coming) 263 224 { 264 225 u32 chain; 265 - union xhci_trb *next; 266 - unsigned int link_trb_count = 0; 267 226 268 227 chain = le32_to_cpu(ring->enqueue->generic.field[3]) & TRB_CHAIN; 269 228 ··· 270 233 return; 271 234 } 272 235 273 - next = ++(ring->enqueue); 236 + ring->enqueue++; 274 237 275 - /* Update the dequeue pointer further if that was a link TRB */ 276 - while (trb_is_link(next)) { 238 + /* 239 + * If we are in the middle of a TD or the caller plans to enqueue more 240 + * TDs as one transfer (eg. control), traverse any link TRBs right now. 241 + * Otherwise, enqueue can stay on a link until the next prepare_ring(). 242 + * This avoids enqueue entering deq_seg and simplifies ring expansion. 243 + */ 244 + if (trb_is_link(ring->enqueue) && (chain || more_trbs_coming)) 245 + inc_enq_past_link(xhci, ring, chain); 246 + } 277 247 278 - /* 279 - * If the caller doesn't plan on enqueueing more TDs before 280 - * ringing the doorbell, then we don't want to give the link TRB 281 - * to the hardware just yet. We'll give the link TRB back in 282 - * prepare_ring() just before we enqueue the TD at the top of 283 - * the ring. 284 - */ 285 - if (!chain && !more_trbs_coming) 286 - break; 248 + /* 249 + * If the suspect DMA address is a TRB in this TD, this function returns that 250 + * TRB's segment. Otherwise it returns 0. 
251 + */ 252 + static struct xhci_segment *trb_in_td(struct xhci_td *td, dma_addr_t suspect_dma) 253 + { 254 + dma_addr_t start_dma; 255 + dma_addr_t end_seg_dma; 256 + dma_addr_t end_trb_dma; 257 + struct xhci_segment *cur_seg; 287 258 288 - /* If we're not dealing with 0.95 hardware or isoc rings on 289 - * AMD 0.96 host, carry over the chain bit of the previous TRB 290 - * (which may mean the chain bit is cleared). 291 - */ 292 - if (!xhci_link_chain_quirk(xhci, ring->type)) { 293 - next->link.control &= cpu_to_le32(~TRB_CHAIN); 294 - next->link.control |= cpu_to_le32(chain); 259 + start_dma = xhci_trb_virt_to_dma(td->start_seg, td->start_trb); 260 + cur_seg = td->start_seg; 261 + 262 + do { 263 + if (start_dma == 0) 264 + return NULL; 265 + /* We may get an event for a Link TRB in the middle of a TD */ 266 + end_seg_dma = xhci_trb_virt_to_dma(cur_seg, 267 + &cur_seg->trbs[TRBS_PER_SEGMENT - 1]); 268 + /* If the end TRB isn't in this segment, this is set to 0 */ 269 + end_trb_dma = xhci_trb_virt_to_dma(cur_seg, td->end_trb); 270 + 271 + if (end_trb_dma > 0) { 272 + /* The end TRB is in this segment, so suspect should be here */ 273 + if (start_dma <= end_trb_dma) { 274 + if (suspect_dma >= start_dma && suspect_dma <= end_trb_dma) 275 + return cur_seg; 276 + } else { 277 + /* Case for one segment with 278 + * a TD wrapped around to the top 279 + */ 280 + if ((suspect_dma >= start_dma && 281 + suspect_dma <= end_seg_dma) || 282 + (suspect_dma >= cur_seg->dma && 283 + suspect_dma <= end_trb_dma)) 284 + return cur_seg; 285 + } 286 + return NULL; 295 287 } 296 - /* Give this link TRB to the hardware */ 297 - wmb(); 298 - next->link.control ^= cpu_to_le32(TRB_CYCLE); 288 + /* Might still be somewhere in this segment */ 289 + if (suspect_dma >= start_dma && suspect_dma <= end_seg_dma) 290 + return cur_seg; 299 291 300 - /* Toggle the cycle bit after the last ring segment. 
*/ 301 - if (link_trb_toggles_cycle(next)) 302 - ring->cycle_state ^= 1; 292 + cur_seg = cur_seg->next; 293 + start_dma = xhci_trb_virt_to_dma(cur_seg, &cur_seg->trbs[0]); 294 + } while (cur_seg != td->start_seg); 303 295 304 - ring->enq_seg = ring->enq_seg->next; 305 - ring->enqueue = ring->enq_seg->trbs; 306 - next = ring->enqueue; 307 - 308 - trace_xhci_inc_enq(ring); 309 - 310 - if (link_trb_count++ > ring->num_segs) { 311 - xhci_warn(xhci, "%s: Ring link TRB loop\n", __func__); 312 - break; 313 - } 314 - } 296 + return NULL; 315 297 } 316 298 317 299 /* ··· 561 505 * pointer command pending because the device can choose to start any 562 506 * stream once the endpoint is on the HW schedule. 563 507 */ 564 - if ((ep_state & EP_STOP_CMD_PENDING) || (ep_state & SET_DEQ_PENDING) || 565 - (ep_state & EP_HALTED) || (ep_state & EP_CLEARING_TT)) 508 + if (ep_state & (EP_STOP_CMD_PENDING | SET_DEQ_PENDING | EP_HALTED | 509 + EP_CLEARING_TT | EP_STALLED)) 566 510 return; 567 511 568 512 trace_xhci_ring_ep_doorbell(slot_id, DB_VALUE(ep_index, stream_id)); ··· 1070 1014 td->urb->stream_id); 1071 1015 hw_deq &= ~0xf; 1072 1016 1073 - if (td->cancel_status == TD_HALTED || trb_in_td(xhci, td, hw_deq, false)) { 1017 + if (td->cancel_status == TD_HALTED || trb_in_td(td, hw_deq)) { 1074 1018 switch (td->cancel_status) { 1075 1019 case TD_CLEARED: /* TD is already no-op */ 1076 1020 case TD_CLEARING_CACHE: /* set TR deq command already queued */ ··· 1160 1104 hw_deq = xhci_get_hw_deq(ep->xhci, ep->vdev, ep->ep_index, 0); 1161 1105 hw_deq &= ~0xf; 1162 1106 td = list_first_entry(&ep->ring->td_list, struct xhci_td, td_list); 1163 - if (trb_in_td(ep->xhci, td, hw_deq, false)) 1107 + if (trb_in_td(td, hw_deq)) 1164 1108 return td; 1165 1109 } 1166 1110 return NULL; ··· 1220 1164 */ 1221 1165 switch (GET_EP_CTX_STATE(ep_ctx)) { 1222 1166 case EP_STATE_HALTED: 1223 - xhci_dbg(xhci, "Stop ep completion raced with stall, reset ep\n"); 1167 + xhci_dbg(xhci, "Stop ep completion raced with 
stall\n"); 1168 + /* 1169 + * If the halt happened before Stop Endpoint failed, its transfer event 1170 + * should have already been handled and Reset Endpoint should be pending. 1171 + */ 1172 + if (ep->ep_state & EP_HALTED) 1173 + goto reset_done; 1174 + 1224 1175 if (ep->ep_state & EP_HAS_STREAMS) { 1225 1176 reset_type = EP_SOFT_RESET; 1226 1177 } else { ··· 1238 1175 } 1239 1176 /* reset ep, reset handler cleans up cancelled tds */ 1240 1177 err = xhci_handle_halted_endpoint(xhci, ep, td, reset_type); 1178 + xhci_dbg(xhci, "Stop ep completion resetting ep, status %d\n", err); 1241 1179 if (err) 1242 1180 break; 1181 + reset_done: 1182 + /* Reset EP handler will clean up cancelled TDs */ 1243 1183 ep->ep_state &= ~EP_STOP_CMD_PENDING; 1244 1184 return; 1245 1185 case EP_STATE_STOPPED: ··· 1264 1198 * Stopped state, but it will soon change to Running. 1265 1199 * 1266 1200 * Assume this bug on unexpected Stop Endpoint failures. 1267 - * Keep retrying until the EP starts and stops again, on 1268 - * chips where this is known to help. Wait for 100ms. 1201 + * Keep retrying until the EP starts and stops again. 1269 1202 */ 1270 - if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100))) 1271 - break; 1272 1203 fallthrough; 1273 1204 case EP_STATE_RUNNING: 1274 1205 /* Race, HW handled stop ep cmd before ep was running */ 1275 1206 xhci_dbg(xhci, "Stop ep completion ctx error, ctx_state %d\n", 1276 1207 GET_EP_CTX_STATE(ep_ctx)); 1208 + /* 1209 + * Don't retry forever if we guessed wrong or a defective HC never starts 1210 + * the EP or says 'Running' but fails the command. We must give back TDs. 
1211 + */ 1212 + if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100))) 1213 + break; 1277 1214 1278 1215 command = xhci_alloc_command(xhci, false, GFP_ATOMIC); 1279 1216 if (!command) { ··· 1401 1332 usb_hc_died(xhci_to_hcd(xhci)); 1402 1333 } 1403 1334 1404 - static void update_ring_for_set_deq_completion(struct xhci_hcd *xhci, 1405 - struct xhci_virt_device *dev, 1406 - struct xhci_ring *ep_ring, 1407 - unsigned int ep_index) 1408 - { 1409 - union xhci_trb *dequeue_temp; 1410 - 1411 - dequeue_temp = ep_ring->dequeue; 1412 - 1413 - /* If we get two back-to-back stalls, and the first stalled transfer 1414 - * ends just before a link TRB, the dequeue pointer will be left on 1415 - * the link TRB by the code in the while loop. So we have to update 1416 - * the dequeue pointer one segment further, or we'll jump off 1417 - * the segment into la-la-land. 1418 - */ 1419 - if (trb_is_link(ep_ring->dequeue)) { 1420 - ep_ring->deq_seg = ep_ring->deq_seg->next; 1421 - ep_ring->dequeue = ep_ring->deq_seg->trbs; 1422 - } 1423 - 1424 - while (ep_ring->dequeue != dev->eps[ep_index].queued_deq_ptr) { 1425 - /* We have more usable TRBs */ 1426 - ep_ring->dequeue++; 1427 - if (trb_is_link(ep_ring->dequeue)) { 1428 - if (ep_ring->dequeue == 1429 - dev->eps[ep_index].queued_deq_ptr) 1430 - break; 1431 - ep_ring->deq_seg = ep_ring->deq_seg->next; 1432 - ep_ring->dequeue = ep_ring->deq_seg->trbs; 1433 - } 1434 - if (ep_ring->dequeue == dequeue_temp) { 1435 - xhci_dbg(xhci, "Unable to find new dequeue pointer\n"); 1436 - break; 1437 - } 1438 - } 1439 - } 1440 - 1441 1335 /* 1442 1336 * When we get a completion for a Set Transfer Ring Dequeue Pointer command, 1443 1337 * we need to clear the set deq pending flag in the endpoint ring state, so that ··· 1505 1473 /* Update the ring's dequeue segment and dequeue pointer 1506 1474 * to reflect the new position. 
1507 1475 */ 1508 - update_ring_for_set_deq_completion(xhci, ep->vdev, 1509 - ep_ring, ep_index); 1476 + ep_ring->deq_seg = ep->queued_deq_seg; 1477 + ep_ring->dequeue = ep->queued_deq_ptr; 1510 1478 } else { 1511 1479 xhci_warn(xhci, "Mismatch between completed Set TR Deq Ptr command & xHCI internal state.\n"); 1512 1480 xhci_warn(xhci, "ep deq seg = %p, deq ptr = %p\n", ··· 2148 2116 spin_lock(&xhci->lock); 2149 2117 } 2150 2118 2151 - /* 2152 - * If the suspect DMA address is a TRB in this TD, this function returns that 2153 - * TRB's segment. Otherwise it returns 0. 2154 - */ 2155 - struct xhci_segment *trb_in_td(struct xhci_hcd *xhci, struct xhci_td *td, dma_addr_t suspect_dma, 2156 - bool debug) 2157 - { 2158 - dma_addr_t start_dma; 2159 - dma_addr_t end_seg_dma; 2160 - dma_addr_t end_trb_dma; 2161 - struct xhci_segment *cur_seg; 2162 - 2163 - start_dma = xhci_trb_virt_to_dma(td->start_seg, td->start_trb); 2164 - cur_seg = td->start_seg; 2165 - 2166 - do { 2167 - if (start_dma == 0) 2168 - return NULL; 2169 - /* We may get an event for a Link TRB in the middle of a TD */ 2170 - end_seg_dma = xhci_trb_virt_to_dma(cur_seg, 2171 - &cur_seg->trbs[TRBS_PER_SEGMENT - 1]); 2172 - /* If the end TRB isn't in this segment, this is set to 0 */ 2173 - end_trb_dma = xhci_trb_virt_to_dma(cur_seg, td->end_trb); 2174 - 2175 - if (debug) 2176 - xhci_warn(xhci, 2177 - "Looking for event-dma %016llx trb-start %016llx trb-end %016llx seg-start %016llx seg-end %016llx\n", 2178 - (unsigned long long)suspect_dma, 2179 - (unsigned long long)start_dma, 2180 - (unsigned long long)end_trb_dma, 2181 - (unsigned long long)cur_seg->dma, 2182 - (unsigned long long)end_seg_dma); 2183 - 2184 - if (end_trb_dma > 0) { 2185 - /* The end TRB is in this segment, so suspect should be here */ 2186 - if (start_dma <= end_trb_dma) { 2187 - if (suspect_dma >= start_dma && suspect_dma <= end_trb_dma) 2188 - return cur_seg; 2189 - } else { 2190 - /* Case for one segment with 2191 - * a TD wrapped around 
to the top 2192 - */ 2193 - if ((suspect_dma >= start_dma && 2194 - suspect_dma <= end_seg_dma) || 2195 - (suspect_dma >= cur_seg->dma && 2196 - suspect_dma <= end_trb_dma)) 2197 - return cur_seg; 2198 - } 2199 - return NULL; 2200 - } else { 2201 - /* Might still be somewhere in this segment */ 2202 - if (suspect_dma >= start_dma && suspect_dma <= end_seg_dma) 2203 - return cur_seg; 2204 - } 2205 - cur_seg = cur_seg->next; 2206 - start_dma = xhci_trb_virt_to_dma(cur_seg, &cur_seg->trbs[0]); 2207 - } while (cur_seg != td->start_seg); 2208 - 2209 - return NULL; 2210 - } 2211 - 2212 2119 static void xhci_clear_hub_tt_buffer(struct xhci_hcd *xhci, struct xhci_td *td, 2213 2120 struct xhci_virt_ep *ep) 2214 2121 { ··· 2447 2476 if (ep_trb != td->end_trb) 2448 2477 td->error_mid_td = true; 2449 2478 break; 2479 + case COMP_MISSED_SERVICE_ERROR: 2480 + frame->status = -EXDEV; 2481 + sum_trbs_for_length = true; 2482 + if (ep_trb != td->end_trb) 2483 + td->error_mid_td = true; 2484 + break; 2450 2485 case COMP_INCOMPATIBLE_DEVICE_ERROR: 2451 2486 case COMP_STALL_ERROR: 2452 2487 frame->status = -EPROTO; ··· 2573 2596 2574 2597 xhci_handle_halted_endpoint(xhci, ep, td, EP_SOFT_RESET); 2575 2598 return; 2599 + case COMP_STALL_ERROR: 2600 + ep->ep_state |= EP_STALLED; 2601 + break; 2576 2602 default: 2577 2603 /* do nothing */ 2578 2604 break; ··· 2624 2644 return 0; 2625 2645 } 2626 2646 2647 + static bool xhci_spurious_success_tx_event(struct xhci_hcd *xhci, 2648 + struct xhci_ring *ring) 2649 + { 2650 + switch (ring->old_trb_comp_code) { 2651 + case COMP_SHORT_PACKET: 2652 + return xhci->quirks & XHCI_SPURIOUS_SUCCESS; 2653 + case COMP_USB_TRANSACTION_ERROR: 2654 + case COMP_BABBLE_DETECTED_ERROR: 2655 + case COMP_ISOCH_BUFFER_OVERRUN: 2656 + return xhci->quirks & XHCI_ETRON_HOST && 2657 + ring->type == TYPE_ISOC; 2658 + default: 2659 + return false; 2660 + } 2661 + } 2662 + 2627 2663 /* 2628 2664 * If this function returns an error condition, it means it got a Transfer 
2629 2665 * event with a corrupted Slot ID, Endpoint ID, or TRB DMA address. ··· 2660 2664 int status = -EINPROGRESS; 2661 2665 struct xhci_ep_ctx *ep_ctx; 2662 2666 u32 trb_comp_code; 2667 + bool ring_xrun_event = false; 2663 2668 2664 2669 slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags)); 2665 2670 ep_index = TRB_TO_EP_ID(le32_to_cpu(event->flags)) - 1; ··· 2694 2697 case COMP_SUCCESS: 2695 2698 if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) != 0) { 2696 2699 trb_comp_code = COMP_SHORT_PACKET; 2697 - xhci_dbg(xhci, "Successful completion on short TX for slot %u ep %u with last td short %d\n", 2698 - slot_id, ep_index, ep_ring->last_td_was_short); 2700 + xhci_dbg(xhci, "Successful completion on short TX for slot %u ep %u with last td comp code %d\n", 2701 + slot_id, ep_index, ep_ring->old_trb_comp_code); 2699 2702 } 2700 2703 break; 2701 2704 case COMP_SHORT_PACKET: ··· 2767 2770 * Underrun Event for OUT Isoch endpoint. 2768 2771 */ 2769 2772 xhci_dbg(xhci, "Underrun event on slot %u ep %u\n", slot_id, ep_index); 2770 - if (ep->skip) 2771 - break; 2772 - return 0; 2773 + ring_xrun_event = true; 2774 + break; 2773 2775 case COMP_RING_OVERRUN: 2774 2776 xhci_dbg(xhci, "Overrun event on slot %u ep %u\n", slot_id, ep_index); 2775 - if (ep->skip) 2776 - break; 2777 - return 0; 2777 + ring_xrun_event = true; 2778 + break; 2778 2779 case COMP_MISSED_SERVICE_ERROR: 2779 2780 /* 2780 2781 * When encounter missed service error, one or more isoc tds ··· 2782 2787 */ 2783 2788 ep->skip = true; 2784 2789 xhci_dbg(xhci, 2785 - "Miss service interval error for slot %u ep %u, set skip flag\n", 2786 - slot_id, ep_index); 2787 - return 0; 2790 + "Miss service interval error for slot %u ep %u, set skip flag%s\n", 2791 + slot_id, ep_index, ep_trb_dma ? 
", skip now" : ""); 2792 + break; 2788 2793 case COMP_NO_PING_RESPONSE_ERROR: 2789 2794 ep->skip = true; 2790 2795 xhci_dbg(xhci, ··· 2827 2832 */ 2828 2833 td = list_first_entry_or_null(&ep_ring->td_list, struct xhci_td, td_list); 2829 2834 2830 - if (td && td->error_mid_td && !trb_in_td(xhci, td, ep_trb_dma, false)) { 2835 + if (td && td->error_mid_td && !trb_in_td(td, ep_trb_dma)) { 2831 2836 xhci_dbg(xhci, "Missing TD completion event after mid TD error\n"); 2832 2837 xhci_dequeue_td(xhci, td, ep_ring, td->status); 2833 2838 } 2839 + 2840 + /* If the TRB pointer is NULL, missed TDs will be skipped on the next event */ 2841 + if (trb_comp_code == COMP_MISSED_SERVICE_ERROR && !ep_trb_dma) 2842 + return 0; 2834 2843 2835 2844 if (list_empty(&ep_ring->td_list)) { 2836 2845 /* ··· 2845 2846 */ 2846 2847 if (trb_comp_code != COMP_STOPPED && 2847 2848 trb_comp_code != COMP_STOPPED_LENGTH_INVALID && 2848 - !ep_ring->last_td_was_short) { 2849 + !ring_xrun_event && 2850 + !xhci_spurious_success_tx_event(xhci, ep_ring)) { 2849 2851 xhci_warn(xhci, "Event TRB for slot %u ep %u with no TDs queued\n", 2850 2852 slot_id, ep_index); 2851 2853 } ··· 2860 2860 td_list); 2861 2861 2862 2862 /* Is this a TRB in the currently executing TD? */ 2863 - ep_seg = trb_in_td(xhci, td, ep_trb_dma, false); 2863 + ep_seg = trb_in_td(td, ep_trb_dma); 2864 2864 2865 2865 if (!ep_seg) { 2866 2866 2867 2867 if (ep->skip && usb_endpoint_xfer_isoc(&td->urb->ep->desc)) { 2868 + /* this event is unlikely to match any TD, don't skip them all */ 2869 + if (trb_comp_code == COMP_STOPPED_LENGTH_INVALID) 2870 + return 0; 2871 + 2868 2872 skip_isoc_td(xhci, td, ep, status); 2869 - if (!list_empty(&ep_ring->td_list)) 2873 + 2874 + if (!list_empty(&ep_ring->td_list)) { 2875 + if (ring_xrun_event) { 2876 + /* 2877 + * If we are here, we are on xHCI 1.0 host with no 2878 + * idea how many TDs were missed or where the xrun 2879 + * occurred. 
New TDs may have been added after the 2880 + * xrun, so skip only one TD to be safe. 2881 + */ 2882 + xhci_dbg(xhci, "Skipped one TD for slot %u ep %u", 2883 + slot_id, ep_index); 2884 + return 0; 2885 + } 2870 2886 continue; 2887 + } 2871 2888 2872 2889 xhci_dbg(xhci, "All TDs skipped for slot %u ep %u. Clear skip flag.\n", 2873 2890 slot_id, ep_index); ··· 2892 2875 td = NULL; 2893 2876 goto check_endpoint_halted; 2894 2877 } 2878 + 2879 + /* TD was queued after xrun, maybe xrun was on a link, don't panic yet */ 2880 + if (ring_xrun_event) 2881 + return 0; 2895 2882 2896 2883 /* 2897 2884 * Skip the Force Stopped Event. The 'ep_trb' of FSE is not in the current ··· 2911 2890 2912 2891 /* 2913 2892 * Some hosts give a spurious success event after a short 2914 - * transfer. Ignore it. 2893 + * transfer or error on last TRB. Ignore it. 2915 2894 */ 2916 - if ((xhci->quirks & XHCI_SPURIOUS_SUCCESS) && 2917 - ep_ring->last_td_was_short) { 2918 - ep_ring->last_td_was_short = false; 2895 + if (xhci_spurious_success_tx_event(xhci, ep_ring)) { 2896 + xhci_dbg(xhci, "Spurious event dma %pad, comp_code %u after %u\n", 2897 + &ep_trb_dma, trb_comp_code, ep_ring->old_trb_comp_code); 2898 + ep_ring->old_trb_comp_code = trb_comp_code; 2919 2899 return 0; 2920 2900 } 2921 2901 2922 2902 /* HC is busted, give up! 
*/ 2923 - xhci_err(xhci, 2924 - "ERROR Transfer event TRB DMA ptr not part of current TD ep_index %d comp_code %u\n", 2925 - ep_index, trb_comp_code); 2926 - trb_in_td(xhci, td, ep_trb_dma, true); 2927 - 2928 - return -ESHUTDOWN; 2903 + goto debug_finding_td; 2929 2904 } 2930 2905 2931 2906 if (ep->skip) { ··· 2939 2922 */ 2940 2923 } while (ep->skip); 2941 2924 2942 - if (trb_comp_code == COMP_SHORT_PACKET) 2943 - ep_ring->last_td_was_short = true; 2944 - else 2945 - ep_ring->last_td_was_short = false; 2925 + ep_ring->old_trb_comp_code = trb_comp_code; 2926 + 2927 + /* Get out if a TD was queued at enqueue after the xrun occurred */ 2928 + if (ring_xrun_event) 2929 + return 0; 2946 2930 2947 2931 ep_trb = &ep_seg->trbs[(ep_trb_dma - ep_seg->dma) / sizeof(*ep_trb)]; 2948 2932 trace_xhci_handle_transfer(ep_ring, (struct xhci_generic_trb *) ep_trb, ep_trb_dma); ··· 2974 2956 xhci_handle_halted_endpoint(xhci, ep, td, EP_HARD_RESET); 2975 2957 2976 2958 return 0; 2959 + 2960 + debug_finding_td: 2961 + xhci_err(xhci, "Event dma %pad for ep %d status %d not part of TD at %016llx - %016llx\n", 2962 + &ep_trb_dma, ep_index, trb_comp_code, 2963 + (unsigned long long)xhci_trb_virt_to_dma(td->start_seg, td->start_trb), 2964 + (unsigned long long)xhci_trb_virt_to_dma(td->end_seg, td->end_trb)); 2965 + 2966 + xhci_for_each_ring_seg(ep_ring->first_seg, ep_seg) 2967 + xhci_warn(xhci, "Ring seg %u dma %pad\n", ep_seg->num, &ep_seg->dma); 2968 + 2969 + return -ESHUTDOWN; 2977 2970 2978 2971 err_out: 2979 2972 xhci_err(xhci, "@%016llx %08x %08x %08x %08x\n", ··· 3245 3216 static int prepare_ring(struct xhci_hcd *xhci, struct xhci_ring *ep_ring, 3246 3217 u32 ep_state, unsigned int num_trbs, gfp_t mem_flags) 3247 3218 { 3248 - unsigned int link_trb_count = 0; 3249 3219 unsigned int new_segs = 0; 3250 3220 3251 3221 /* Make sure the endpoint has been added to xHC schedule */ ··· 3292 3264 } 3293 3265 } 3294 3266 3295 - while (trb_is_link(ep_ring->enqueue)) { 3296 - /* If we're not 
dealing with 0.95 hardware or isoc rings 3297 - * on AMD 0.96 host, clear the chain bit. 3298 - */ 3299 - if (!xhci_link_chain_quirk(xhci, ep_ring->type)) 3300 - ep_ring->enqueue->link.control &= 3301 - cpu_to_le32(~TRB_CHAIN); 3302 - else 3303 - ep_ring->enqueue->link.control |= 3304 - cpu_to_le32(TRB_CHAIN); 3305 - 3306 - wmb(); 3307 - ep_ring->enqueue->link.control ^= cpu_to_le32(TRB_CYCLE); 3308 - 3309 - /* Toggle the cycle bit after the last ring segment. */ 3310 - if (link_trb_toggles_cycle(ep_ring->enqueue)) 3311 - ep_ring->cycle_state ^= 1; 3312 - 3313 - ep_ring->enq_seg = ep_ring->enq_seg->next; 3314 - ep_ring->enqueue = ep_ring->enq_seg->trbs; 3315 - 3316 - /* prevent infinite loop if all first trbs are link trbs */ 3317 - if (link_trb_count++ > ep_ring->num_segs) { 3318 - xhci_warn(xhci, "Ring is an endless link TRB loop\n"); 3319 - return -EINVAL; 3320 - } 3321 - } 3267 + /* Ensure that new TRBs won't overwrite a link */ 3268 + if (trb_is_link(ep_ring->enqueue)) 3269 + inc_enq_past_link(xhci, ep_ring, 0); 3322 3270 3323 3271 if (last_trb_on_seg(ep_ring->enq_seg, ep_ring->enqueue)) { 3324 3272 xhci_warn(xhci, "Missing link TRB at end of ring segment\n");
+5 -5
drivers/usb/host/xhci-tegra.c
··· 2162 2162 } 2163 2163 } 2164 2164 2165 - static int tegra_xusb_enter_elpg(struct tegra_xusb *tegra, bool runtime) 2165 + static int tegra_xusb_enter_elpg(struct tegra_xusb *tegra, bool is_auto_resume) 2166 2166 { 2167 2167 struct xhci_hcd *xhci = hcd_to_xhci(tegra->hcd); 2168 2168 struct device *dev = tegra->dev; 2169 - bool wakeup = runtime ? true : device_may_wakeup(dev); 2169 + bool wakeup = is_auto_resume ? true : device_may_wakeup(dev); 2170 2170 unsigned int i; 2171 2171 int err; 2172 2172 u32 usbcmd; ··· 2232 2232 return err; 2233 2233 } 2234 2234 2235 - static int tegra_xusb_exit_elpg(struct tegra_xusb *tegra, bool runtime) 2235 + static int tegra_xusb_exit_elpg(struct tegra_xusb *tegra, bool is_auto_resume) 2236 2236 { 2237 2237 struct xhci_hcd *xhci = hcd_to_xhci(tegra->hcd); 2238 2238 struct device *dev = tegra->dev; 2239 - bool wakeup = runtime ? true : device_may_wakeup(dev); 2239 + bool wakeup = is_auto_resume ? true : device_may_wakeup(dev); 2240 2240 unsigned int i; 2241 2241 u32 usbcmd; 2242 2242 int err; ··· 2287 2287 if (wakeup) 2288 2288 tegra_xhci_disable_phy_sleepwalk(tegra); 2289 2289 2290 - err = xhci_resume(xhci, runtime ? PMSG_AUTO_RESUME : PMSG_RESUME); 2290 + err = xhci_resume(xhci, false, is_auto_resume); 2291 2291 if (err < 0) { 2292 2292 dev_err(tegra->dev, "failed to resume XHCI: %d\n", err); 2293 2293 goto disable_phy;
+23 -18
drivers/usb/host/xhci.c
··· 994 994 * This is called when the machine transition from S3/S4 mode. 995 995 * 996 996 */ 997 - int xhci_resume(struct xhci_hcd *xhci, pm_message_t msg) 997 + int xhci_resume(struct xhci_hcd *xhci, bool power_lost, bool is_auto_resume) 998 998 { 999 - bool hibernated = (msg.event == PM_EVENT_RESTORE); 1000 999 u32 command, temp = 0; 1001 1000 struct usb_hcd *hcd = xhci_to_hcd(xhci); 1002 1001 int retval = 0; 1003 1002 bool comp_timer_running = false; 1004 1003 bool pending_portevent = false; 1005 1004 bool suspended_usb3_devs = false; 1006 - bool reinit_xhc = false; 1007 1005 1008 1006 if (!hcd->state) 1009 1007 return 0; ··· 1020 1022 1021 1023 spin_lock_irq(&xhci->lock); 1022 1024 1023 - if (hibernated || xhci->quirks & XHCI_RESET_ON_RESUME || xhci->broken_suspend) 1024 - reinit_xhc = true; 1025 + if (xhci->quirks & XHCI_RESET_ON_RESUME || xhci->broken_suspend) 1026 + power_lost = true; 1025 1027 1026 - if (!reinit_xhc) { 1028 + if (!power_lost) { 1027 1029 /* 1028 1030 * Some controllers might lose power during suspend, so wait 1029 1031 * for controller not ready bit to clear, just as in xHC init. 
··· 1063 1065 /* re-initialize the HC on Restore Error, or Host Controller Error */ 1064 1066 if ((temp & (STS_SRE | STS_HCE)) && 1065 1067 !(xhci->xhc_state & XHCI_STATE_REMOVING)) { 1066 - reinit_xhc = true; 1067 - if (!xhci->broken_suspend) 1068 + if (!power_lost) 1068 1069 xhci_warn(xhci, "xHC error in resume, USBSTS 0x%x, Reinit\n", temp); 1070 + power_lost = true; 1069 1071 } 1070 1072 1071 - if (reinit_xhc) { 1073 + if (power_lost) { 1072 1074 if ((xhci->quirks & XHCI_COMP_MODE_QUIRK) && 1073 1075 !(xhci_all_ports_seen_u0(xhci))) { 1074 1076 del_timer_sync(&xhci->comp_mode_recovery_timer); ··· 1166 1168 1167 1169 pending_portevent = xhci_pending_portevent(xhci); 1168 1170 1169 - if (suspended_usb3_devs && !pending_portevent && 1170 - msg.event == PM_EVENT_AUTO_RESUME) { 1171 + if (suspended_usb3_devs && !pending_portevent && is_auto_resume) { 1171 1172 msleep(120); 1172 1173 pending_portevent = xhci_pending_portevent(xhci); 1173 1174 } ··· 1605 1608 goto free_priv; 1606 1609 } 1607 1610 1611 + /* Class driver might not be aware ep halted due to async URB giveback */ 1612 + if (*ep_state & EP_STALLED) 1613 + dev_dbg(&urb->dev->dev, "URB %p queued before clearing halt\n", 1614 + urb); 1615 + 1608 1616 switch (usb_endpoint_type(&urb->ep->desc)) { 1609 1617 1610 1618 case USB_ENDPOINT_XFER_CONTROL: ··· 1770 1768 goto done; 1771 1769 } 1772 1770 1773 - /* In this case no commands are pending but the endpoint is stopped */ 1774 - if (ep->ep_state & EP_CLEARING_TT) { 1771 + /* In these cases no commands are pending but the endpoint is stopped */ 1772 + if (ep->ep_state & (EP_CLEARING_TT | EP_STALLED)) { 1775 1773 /* and cancelled TDs can be given back right away */ 1776 1774 xhci_dbg(xhci, "Invalidating TDs instantly on slot %d ep %d in state 0x%x\n", 1777 1775 urb->dev->slot_id, ep_index, ep->ep_state); ··· 3209 3207 3210 3208 ep = &vdev->eps[ep_index]; 3211 3209 3212 - /* Bail out if toggle is already being cleared by a endpoint reset */ 3213 3210 
spin_lock_irqsave(&xhci->lock, flags); 3211 + 3212 + ep->ep_state &= ~EP_STALLED; 3213 + 3214 + /* Bail out if toggle is already being cleared by a endpoint reset */ 3214 3215 if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) { 3215 3216 ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE; 3216 3217 spin_unlock_irqrestore(&xhci->lock, flags); ··· 4764 4759 */ 4765 4760 if (timeout_ns <= USB3_LPM_U1_MAX_TIMEOUT) 4766 4761 return timeout_ns; 4767 - dev_dbg(&udev->dev, "Hub-initiated U1 disabled " 4768 - "due to long timeout %llu ms\n", timeout_ns); 4762 + dev_dbg(&udev->dev, "Hub-initiated U1 disabled due to long timeout %lluus\n", 4763 + timeout_ns); 4769 4764 return xhci_get_timeout_no_hub_lpm(udev, USB3_LPM_U1); 4770 4765 } 4771 4766 ··· 4822 4817 */ 4823 4818 if (timeout_ns <= USB3_LPM_U2_MAX_TIMEOUT) 4824 4819 return timeout_ns; 4825 - dev_dbg(&udev->dev, "Hub-initiated U2 disabled " 4826 - "due to long timeout %llu ms\n", timeout_ns); 4820 + dev_dbg(&udev->dev, "Hub-initiated U2 disabled due to long timeout %lluus\n", 4821 + timeout_ns * 256); 4827 4822 return xhci_get_timeout_no_hub_lpm(udev, USB3_LPM_U2); 4828 4823 } 4829 4824
+19 -11
drivers/usb/host/xhci.h
··· 211 211 #define CONFIG_CIE (1 << 9) 212 212 /* bits 10:31 - reserved and should be preserved */ 213 213 214 + /* bits 15:0 - HCD page shift bit */ 215 + #define XHCI_PAGE_SIZE_MASK 0xffff 216 + 214 217 /** 215 218 * struct xhci_intr_reg - Interrupt Register Set 216 219 * @irq_pending: IMAN - Interrupt Management Register. Used to enable ··· 664 661 unsigned int err_count; 665 662 unsigned int ep_state; 666 663 #define SET_DEQ_PENDING (1 << 0) 667 - #define EP_HALTED (1 << 1) /* For stall handling */ 664 + #define EP_HALTED (1 << 1) /* Halted host ep handling */ 668 665 #define EP_STOP_CMD_PENDING (1 << 2) /* For URB cancellation */ 669 666 /* Transitioning the endpoint to using streams, don't enqueue URBs */ 670 667 #define EP_GETTING_STREAMS (1 << 3) ··· 675 672 #define EP_SOFT_CLEAR_TOGGLE (1 << 7) 676 673 /* usb_hub_clear_tt_buffer is in progress */ 677 674 #define EP_CLEARING_TT (1 << 8) 675 + #define EP_STALLED (1 << 9) /* For stall handling */ 678 676 /* ---- Related to URB cancellation ---- */ 679 677 struct list_head cancelled_td_list; 680 678 struct xhci_hcd *xhci; ··· 1375 1371 unsigned int num_trbs_free; /* used only by xhci DbC */ 1376 1372 unsigned int bounce_buf_len; 1377 1373 enum xhci_ring_type type; 1378 - bool last_td_was_short; 1374 + u32 old_trb_comp_code; 1379 1375 struct radix_tree_root *trb_address_map; 1380 1376 }; 1381 1377 ··· 1518 1514 u16 max_interrupters; 1519 1515 /* imod_interval in ns (I * 250ns) */ 1520 1516 u32 imod_interval; 1521 - /* 4KB min, 128MB max */ 1522 - int page_size; 1523 - /* Valid values are 12 to 20, inclusive */ 1524 - int page_shift; 1517 + u32 page_size; 1525 1518 /* MSI-X/MSI vectors */ 1526 1519 int nvecs; 1527 1520 /* optional clocks */ ··· 1760 1759 } 1761 1760 1762 1761 1763 - /* Link TRB chain should always be set on 0.95 hosts, and AMD 0.96 ISOC rings */ 1762 + /* 1763 + * Reportedly, some chapters of v0.95 spec said that Link TRB always has its chain bit set. 
1764 + * Other chapters and later specs say that it should only be set if the link is inside a TD 1765 + * which continues from the end of one segment to the next segment. 1766 + * 1767 + * Some 0.95 hardware was found to misbehave if any link TRB doesn't have the chain bit set. 1768 + * 1769 + * 0.96 hardware from AMD and NEC was found to ignore unchained isochronous link TRBs when 1770 + * "resynchronizing the pipe" after a Missed Service Error. 1771 + */ 1764 1772 static inline bool xhci_link_chain_quirk(struct xhci_hcd *xhci, enum xhci_ring_type type) 1765 1773 { 1766 1774 return (xhci->quirks & XHCI_LINK_TRB_QUIRK) || 1767 - (type == TYPE_ISOC && (xhci->quirks & XHCI_AMD_0x96_HOST)); 1775 + (type == TYPE_ISOC && (xhci->quirks & (XHCI_AMD_0x96_HOST | XHCI_NEC_HOST))); 1768 1776 } 1769 1777 1770 1778 /* xHCI debugging */ ··· 1880 1870 int xhci_ext_cap_init(struct xhci_hcd *xhci); 1881 1871 1882 1872 int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup); 1883 - int xhci_resume(struct xhci_hcd *xhci, pm_message_t msg); 1873 + int xhci_resume(struct xhci_hcd *xhci, bool power_lost, bool is_auto_resume); 1884 1874 1885 1875 irqreturn_t xhci_irq(struct usb_hcd *hcd); 1886 1876 irqreturn_t xhci_msi_irq(int irq, void *hcd); ··· 1894 1884 1895 1885 /* xHCI ring, segment, TRB, and TD functions */ 1896 1886 dma_addr_t xhci_trb_virt_to_dma(struct xhci_segment *seg, union xhci_trb *trb); 1897 - struct xhci_segment *trb_in_td(struct xhci_hcd *xhci, struct xhci_td *td, 1898 - dma_addr_t suspect_dma, bool debug); 1899 1887 int xhci_is_vendor_info_code(struct xhci_hcd *xhci, unsigned int trb_comp_code); 1900 1888 void xhci_ring_cmd_db(struct xhci_hcd *xhci); 1901 1889 int xhci_queue_slot_control(struct xhci_hcd *xhci, struct xhci_command *cmd,
+8 -1
drivers/usb/misc/onboard_usb_dev.h
··· 23 23 .is_hub = true, 24 24 }; 25 25 26 + static const struct onboard_dev_pdata microchip_usb2514_data = { 27 + .reset_us = 1, 28 + .num_supplies = 2, 29 + .supply_names = { "vdd", "vdda" }, 30 + .is_hub = true, 31 + }; 32 + 26 33 static const struct onboard_dev_pdata microchip_usb5744_data = { 27 34 .reset_us = 0, 28 35 .power_on_delay_us = 10000, ··· 103 96 104 97 static const struct of_device_id onboard_dev_match[] = { 105 98 { .compatible = "usb424,2412", .data = &microchip_usb424_data, }, 106 - { .compatible = "usb424,2514", .data = &microchip_usb424_data, }, 99 + { .compatible = "usb424,2514", .data = &microchip_usb2514_data, }, 107 100 { .compatible = "usb424,2517", .data = &microchip_usb424_data, }, 108 101 { .compatible = "usb424,2744", .data = &microchip_usb5744_data, }, 109 102 { .compatible = "usb424,5744", .data = &microchip_usb5744_data, },
+2 -4
drivers/usb/misc/usb251xb.c
··· 636 636 637 637 if (np && usb_data) { 638 638 err = usb251xb_get_ofdata(hub, usb_data); 639 - if (err) { 640 - dev_err(dev, "failed to get ofdata: %d\n", err); 641 - return err; 642 - } 639 + if (err) 640 + return dev_err_probe(dev, err, "failed to get ofdata\n"); 643 641 } 644 642 645 643 /*
+2 -2
drivers/usb/musb/jz4740.c
··· 59 59 return IRQ_NONE; 60 60 } 61 61 62 - static struct musb_fifo_cfg jz4740_musb_fifo_cfg[] = { 62 + static const struct musb_fifo_cfg jz4740_musb_fifo_cfg[] = { 63 63 { .hw_ep_num = 1, .style = FIFO_TX, .maxpacket = 512, }, 64 64 { .hw_ep_num = 1, .style = FIFO_RX, .maxpacket = 512, }, 65 65 { .hw_ep_num = 2, .style = FIFO_TX, .maxpacket = 64, }, ··· 205 205 .platform_ops = &jz4740_musb_ops, 206 206 }; 207 207 208 - static struct musb_fifo_cfg jz4770_musb_fifo_cfg[] = { 208 + static const struct musb_fifo_cfg jz4770_musb_fifo_cfg[] = { 209 209 { .hw_ep_num = 1, .style = FIFO_TX, .maxpacket = 512, }, 210 210 { .hw_ep_num = 1, .style = FIFO_RX, .maxpacket = 512, }, 211 211 { .hw_ep_num = 2, .style = FIFO_TX, .maxpacket = 512, },
+1 -1
drivers/usb/musb/mediatek.c
··· 365 365 #define MTK_MUSB_MAX_EP_NUM 8 366 366 #define MTK_MUSB_RAM_BITS 11 367 367 368 - static struct musb_fifo_cfg mtk_musb_mode_cfg[] = { 368 + static const struct musb_fifo_cfg mtk_musb_mode_cfg[] = { 369 369 { .hw_ep_num = 1, .style = FIFO_TX, .maxpacket = 512, }, 370 370 { .hw_ep_num = 1, .style = FIFO_RX, .maxpacket = 512, }, 371 371 { .hw_ep_num = 2, .style = FIFO_TX, .maxpacket = 512, },
+1 -1
drivers/usb/musb/mpfs.c
··· 29 29 struct clk *clk; 30 30 }; 31 31 32 - static struct musb_fifo_cfg mpfs_musb_mode_cfg[] = { 32 + static const struct musb_fifo_cfg mpfs_musb_mode_cfg[] = { 33 33 { .hw_ep_num = 1, .style = FIFO_TX, .maxpacket = 512, }, 34 34 { .hw_ep_num = 1, .style = FIFO_RX, .maxpacket = 512, }, 35 35 { .hw_ep_num = 2, .style = FIFO_TX, .maxpacket = 512, },
+7 -7
drivers/usb/musb/musb_core.c
··· 1271 1271 */ 1272 1272 1273 1273 /* mode 0 - fits in 2KB */ 1274 - static struct musb_fifo_cfg mode_0_cfg[] = { 1274 + static const struct musb_fifo_cfg mode_0_cfg[] = { 1275 1275 { .hw_ep_num = 1, .style = FIFO_TX, .maxpacket = 512, }, 1276 1276 { .hw_ep_num = 1, .style = FIFO_RX, .maxpacket = 512, }, 1277 1277 { .hw_ep_num = 2, .style = FIFO_RXTX, .maxpacket = 512, }, ··· 1280 1280 }; 1281 1281 1282 1282 /* mode 1 - fits in 4KB */ 1283 - static struct musb_fifo_cfg mode_1_cfg[] = { 1283 + static const struct musb_fifo_cfg mode_1_cfg[] = { 1284 1284 { .hw_ep_num = 1, .style = FIFO_TX, .maxpacket = 512, .mode = BUF_DOUBLE, }, 1285 1285 { .hw_ep_num = 1, .style = FIFO_RX, .maxpacket = 512, .mode = BUF_DOUBLE, }, 1286 1286 { .hw_ep_num = 2, .style = FIFO_RXTX, .maxpacket = 512, .mode = BUF_DOUBLE, }, ··· 1289 1289 }; 1290 1290 1291 1291 /* mode 2 - fits in 4KB */ 1292 - static struct musb_fifo_cfg mode_2_cfg[] = { 1292 + static const struct musb_fifo_cfg mode_2_cfg[] = { 1293 1293 { .hw_ep_num = 1, .style = FIFO_TX, .maxpacket = 512, }, 1294 1294 { .hw_ep_num = 1, .style = FIFO_RX, .maxpacket = 512, }, 1295 1295 { .hw_ep_num = 2, .style = FIFO_TX, .maxpacket = 512, }, ··· 1299 1299 }; 1300 1300 1301 1301 /* mode 3 - fits in 4KB */ 1302 - static struct musb_fifo_cfg mode_3_cfg[] = { 1302 + static const struct musb_fifo_cfg mode_3_cfg[] = { 1303 1303 { .hw_ep_num = 1, .style = FIFO_TX, .maxpacket = 512, .mode = BUF_DOUBLE, }, 1304 1304 { .hw_ep_num = 1, .style = FIFO_RX, .maxpacket = 512, .mode = BUF_DOUBLE, }, 1305 1305 { .hw_ep_num = 2, .style = FIFO_TX, .maxpacket = 512, }, ··· 1309 1309 }; 1310 1310 1311 1311 /* mode 4 - fits in 16KB */ 1312 - static struct musb_fifo_cfg mode_4_cfg[] = { 1312 + static const struct musb_fifo_cfg mode_4_cfg[] = { 1313 1313 { .hw_ep_num = 1, .style = FIFO_TX, .maxpacket = 512, }, 1314 1314 { .hw_ep_num = 1, .style = FIFO_RX, .maxpacket = 512, }, 1315 1315 { .hw_ep_num = 2, .style = FIFO_TX, .maxpacket = 512, }, ··· 1340 1340 }; 
1341 1341 1342 1342 /* mode 5 - fits in 8KB */ 1343 - static struct musb_fifo_cfg mode_5_cfg[] = { 1343 + static const struct musb_fifo_cfg mode_5_cfg[] = { 1344 1344 { .hw_ep_num = 1, .style = FIFO_TX, .maxpacket = 512, }, 1345 1345 { .hw_ep_num = 1, .style = FIFO_RX, .maxpacket = 512, }, 1346 1346 { .hw_ep_num = 2, .style = FIFO_TX, .maxpacket = 512, }, ··· 1447 1447 return offset + (maxpacket << ((c_size & MUSB_FIFOSZ_DPB) ? 1 : 0)); 1448 1448 } 1449 1449 1450 - static struct musb_fifo_cfg ep0_cfg = { 1450 + static const struct musb_fifo_cfg ep0_cfg = { 1451 1451 .style = FIFO_RXTX, .maxpacket = 64, 1452 1452 }; 1453 1453
+2 -2
drivers/usb/musb/sunxi.c
··· 629 629 #define SUNXI_MUSB_RAM_BITS 11 630 630 631 631 /* Allwinner OTG supports up to 5 endpoints */ 632 - static struct musb_fifo_cfg sunxi_musb_mode_cfg_5eps[] = { 632 + static const struct musb_fifo_cfg sunxi_musb_mode_cfg_5eps[] = { 633 633 MUSB_EP_FIFO_SINGLE(1, FIFO_TX, 512), 634 634 MUSB_EP_FIFO_SINGLE(1, FIFO_RX, 512), 635 635 MUSB_EP_FIFO_SINGLE(2, FIFO_TX, 512), ··· 643 643 }; 644 644 645 645 /* H3/V3s OTG supports only 4 endpoints */ 646 - static struct musb_fifo_cfg sunxi_musb_mode_cfg_4eps[] = { 646 + static const struct musb_fifo_cfg sunxi_musb_mode_cfg_4eps[] = { 647 647 MUSB_EP_FIFO_SINGLE(1, FIFO_TX, 512), 648 648 MUSB_EP_FIFO_SINGLE(1, FIFO_RX, 512), 649 649 MUSB_EP_FIFO_SINGLE(2, FIFO_TX, 512),
+3 -5
drivers/usb/phy/phy-mxs-usb.c
··· 769 769 return PTR_ERR(base); 770 770 771 771 clk = devm_clk_get(&pdev->dev, NULL); 772 - if (IS_ERR(clk)) { 773 - dev_err(&pdev->dev, 774 - "can't get the clock, err=%ld", PTR_ERR(clk)); 775 - return PTR_ERR(clk); 776 - } 772 + if (IS_ERR(clk)) 773 + return dev_err_probe(&pdev->dev, PTR_ERR(clk), 774 + "can't get the clock\n"); 777 775 778 776 mxs_phy = devm_kzalloc(&pdev->dev, sizeof(*mxs_phy), GFP_KERNEL); 779 777 if (!mxs_phy)
-23
drivers/usb/phy/phy-ulpi.c
··· 256 256 } 257 257 258 258 struct usb_phy * 259 - otg_ulpi_create(struct usb_phy_io_ops *ops, 260 - unsigned int flags) 261 - { 262 - struct usb_phy *phy; 263 - struct usb_otg *otg; 264 - 265 - phy = kzalloc(sizeof(*phy), GFP_KERNEL); 266 - if (!phy) 267 - return NULL; 268 - 269 - otg = kzalloc(sizeof(*otg), GFP_KERNEL); 270 - if (!otg) { 271 - kfree(phy); 272 - return NULL; 273 - } 274 - 275 - otg_ulpi_init(phy, otg, ops, flags); 276 - 277 - return phy; 278 - } 279 - EXPORT_SYMBOL_GPL(otg_ulpi_create); 280 - 281 - struct usb_phy * 282 259 devm_otg_ulpi_create(struct device *dev, 283 260 struct usb_phy_io_ops *ops, 284 261 unsigned int flags)
-13
drivers/usb/serial/mos7840.c
··· 66 66 67 67 #define MOS_WDR_TIMEOUT 5000 /* default urb timeout */ 68 68 69 - #define MOS_PORT1 0x0200 70 - #define MOS_PORT2 0x0300 71 - #define MOS_VENREG 0x0000 72 - #define MOS_MAX_PORT 0x02 73 - #define MOS_WRITE 0x0E 74 - #define MOS_READ 0x0D 75 - 76 69 /* Requests */ 77 70 #define MCS_RD_RTYPE 0xC0 78 71 #define MCS_WR_RTYPE 0x40 79 72 #define MCS_RDREQ 0x0D 80 73 #define MCS_WRREQ 0x0E 81 - #define MCS_CTRL_TIMEOUT 500 82 74 #define VENDOR_READ_LENGTH (0x01) 83 - 84 - #define MAX_NAME_LEN 64 85 75 86 76 #define ZLP_REG1 0x3A /* Zero_Flag_Reg1 58 */ 87 77 #define ZLP_REG5 0x3E /* Zero_Flag_Reg5 62 */ 88 - 89 - /* For higher baud Rates use TIOCEXBAUD */ 90 - #define TIOCEXBAUD 0x5462 91 78 92 79 /* 93 80 * Vendor id and device id defines
+4 -4
drivers/usb/storage/alauda.c
··· 174 174 unsigned char zoneshift; /* 1<<zs blocks per zone */ 175 175 }; 176 176 177 - static struct alauda_card_info alauda_card_ids[] = { 177 + static const struct alauda_card_info alauda_card_ids[] = { 178 178 /* NAND flash */ 179 179 { 0x6e, 20, 8, 4, 8}, /* 1 MB */ 180 180 { 0xe8, 20, 8, 4, 8}, /* 1 MB */ ··· 200 200 { 0,} 201 201 }; 202 202 203 - static struct alauda_card_info *alauda_card_find_id(unsigned char id) 203 + static const struct alauda_card_info *alauda_card_find_id(unsigned char id) 204 204 { 205 205 int i; 206 206 ··· 383 383 { 384 384 unsigned char *data = us->iobuf; 385 385 int ready = 0; 386 - struct alauda_card_info *media_info; 386 + const struct alauda_card_info *media_info; 387 387 unsigned int num_zones; 388 388 389 389 while (ready == 0) { ··· 1132 1132 int rc; 1133 1133 struct alauda_info *info = (struct alauda_info *) us->extra; 1134 1134 unsigned char *ptr = us->iobuf; 1135 - static unsigned char inquiry_response[36] = { 1135 + static const unsigned char inquiry_response[36] = { 1136 1136 0x00, 0x80, 0x00, 0x01, 0x1F, 0x00, 0x00, 0x00 1137 1137 }; 1138 1138
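The alauda.c changes above (and the datafab, jumpshot, realtek_cr, sddr09, and sddr55 hunks that follow) all apply one pattern: sentinel-terminated lookup tables that are never written become `static const`, so they move into read-only data. A minimal standalone sketch of the same table-plus-finder shape; the IDs and field values here are made up, not the driver's real geometry data.

```c
#include <stddef.h>

/* Illustrative media-ID table in the style of alauda_card_ids:
 * const, so it lands in .rodata, terminated by an all-zero entry. */
struct card_info {
	unsigned char id;
	unsigned char chipshift;
};

static const struct card_info card_ids[] = {
	{ 0x6e, 20 },
	{ 0xe8, 20 },
	{ 0xda, 28 },
	{ 0, 0 }	/* terminator, mirrors "{ 0, }" in the driver */
};

/* Same shape as alauda_card_find_id(): walk until the sentinel.
 * Note the return type must pick up const too, which is why the
 * hunk also touches the finder's signature and its callers. */
static const struct card_info *card_find_id(unsigned char id)
{
	int i;

	for (i = 0; card_ids[i].id != 0; i++)
		if (card_ids[i].id == id)
			return &card_ids[i];
	return NULL;
}
```

Const-ifying the table forces the `const` qualifier to propagate through every pointer that refers to it, which is most of the churn in these hunks.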
+7 -7
drivers/usb/storage/datafab.c
··· 319 319 // 320 320 // There might be a better way of doing this? 321 321 322 - static unsigned char scommand[8] = { 0, 1, 0, 0, 0, 0xa0, 0xec, 1 }; 322 + static const unsigned char scommand[8] = { 0, 1, 0, 0, 0, 0xa0, 0xec, 1 }; 323 323 unsigned char *command = us->iobuf; 324 324 unsigned char *buf; 325 325 int count = 0, rc; ··· 384 384 // to the ATA spec, 'Sector Count' isn't used but the Windows driver 385 385 // sets this bit so we do too... 386 386 // 387 - static unsigned char scommand[8] = { 0, 1, 0, 0, 0, 0xa0, 0xec, 1 }; 387 + static const unsigned char scommand[8] = { 0, 1, 0, 0, 0, 0xa0, 0xec, 1 }; 388 388 unsigned char *command = us->iobuf; 389 389 unsigned char *reply; 390 390 int rc; ··· 437 437 struct scsi_cmnd * srb, 438 438 int sense_6) 439 439 { 440 - static unsigned char rw_err_page[12] = { 440 + static const unsigned char rw_err_page[12] = { 441 441 0x1, 0xA, 0x21, 1, 0, 0, 0, 0, 1, 0, 0, 0 442 442 }; 443 - static unsigned char cache_page[12] = { 443 + static const unsigned char cache_page[12] = { 444 444 0x8, 0xA, 0x1, 0, 0, 0, 0, 0, 0, 0, 0, 0 445 445 }; 446 - static unsigned char rbac_page[12] = { 446 + static const unsigned char rbac_page[12] = { 447 447 0x1B, 0xA, 0, 0x81, 0, 0, 0, 0, 0, 0, 0, 0 448 448 }; 449 - static unsigned char timer_page[8] = { 449 + static const unsigned char timer_page[8] = { 450 450 0x1C, 0x6, 0, 0, 0, 0 451 451 }; 452 452 unsigned char pc, page_code; ··· 550 550 int rc; 551 551 unsigned long block, blocks; 552 552 unsigned char *ptr = us->iobuf; 553 - static unsigned char inquiry_reply[8] = { 553 + static const unsigned char inquiry_reply[8] = { 554 554 0x00, 0x80, 0x00, 0x01, 0x1F, 0x00, 0x00, 0x00 555 555 }; 556 556
+1 -1
drivers/usb/storage/initializers.c
··· 54 54 struct bulk_cs_wrap *bcs = (struct bulk_cs_wrap*) us->iobuf; 55 55 int res; 56 56 unsigned int partial; 57 - static char init_string[] = "\xec\x0a\x06\x00$PCCHIPS"; 57 + static const char init_string[] = "\xec\x0a\x06\x00$PCCHIPS"; 58 58 59 59 usb_stor_dbg(us, "Sending UCR-61S2B initialization packet...\n"); 60 60
+5 -5
drivers/usb/storage/jumpshot.c
··· 367 367 struct scsi_cmnd * srb, 368 368 int sense_6) 369 369 { 370 - static unsigned char rw_err_page[12] = { 370 + static const unsigned char rw_err_page[12] = { 371 371 0x1, 0xA, 0x21, 1, 0, 0, 0, 0, 1, 0, 0, 0 372 372 }; 373 - static unsigned char cache_page[12] = { 373 + static const unsigned char cache_page[12] = { 374 374 0x8, 0xA, 0x1, 0, 0, 0, 0, 0, 0, 0, 0, 0 375 375 }; 376 - static unsigned char rbac_page[12] = { 376 + static const unsigned char rbac_page[12] = { 377 377 0x1B, 0xA, 0, 0x81, 0, 0, 0, 0, 0, 0, 0, 0 378 378 }; 379 - static unsigned char timer_page[8] = { 379 + static const unsigned char timer_page[8] = { 380 380 0x1C, 0x6, 0, 0, 0, 0 381 381 }; 382 382 unsigned char pc, page_code; ··· 477 477 int rc; 478 478 unsigned long block, blocks; 479 479 unsigned char *ptr = us->iobuf; 480 - static unsigned char inquiry_response[8] = { 480 + static const unsigned char inquiry_response[8] = { 481 481 0x00, 0x80, 0x00, 0x01, 0x1F, 0x00, 0x00, 0x00 482 482 }; 483 483
+3 -3
drivers/usb/storage/realtek_cr.c
··· 191 191 .initFunction = init_function, \ 192 192 } 193 193 194 - static struct us_unusual_dev realtek_cr_unusual_dev_list[] = { 194 + static const struct us_unusual_dev realtek_cr_unusual_dev_list[] = { 195 195 # include "unusual_realtek.h" 196 196 {} /* Terminating entry */ 197 197 }; ··· 797 797 { 798 798 struct rts51x_chip *chip = (struct rts51x_chip *)(us->extra); 799 799 static int card_first_show = 1; 800 - static u8 media_not_present[] = { 0x70, 0, 0x02, 0, 0, 0, 0, 800 + static const u8 media_not_present[] = { 0x70, 0, 0x02, 0, 0, 0, 0, 801 801 10, 0, 0, 0, 0, 0x3A, 0, 0, 0, 0, 0 802 802 }; 803 - static u8 invalid_cmd_field[] = { 0x70, 0, 0x05, 0, 0, 0, 0, 803 + static const u8 invalid_cmd_field[] = { 0x70, 0, 0x05, 0, 0, 0, 0, 804 804 10, 0, 0, 0, 0, 0x24, 0, 0, 0, 0, 0 805 805 }; 806 806 int ret;
+7 -7
drivers/usb/storage/sddr09.c
··· 144 144 * 256 MB NAND flash has a 5-byte ID with 2nd byte 0xaa, 0xba, 0xca or 0xda. 145 145 */ 146 146 147 - static struct nand_flash_dev nand_flash_ids[] = { 147 + static const struct nand_flash_dev nand_flash_ids[] = { 148 148 /* NAND flash */ 149 149 { 0x6e, 20, 8, 4, 8, 2}, /* 1 MB */ 150 150 { 0xe8, 20, 8, 4, 8, 2}, /* 1 MB */ ··· 169 169 { 0,} 170 170 }; 171 171 172 - static struct nand_flash_dev * 172 + static const struct nand_flash_dev * 173 173 nand_find_id(unsigned char id) { 174 174 int i; 175 175 ··· 1133 1133 } 1134 1134 #endif 1135 1135 1136 - static struct nand_flash_dev * 1136 + static const struct nand_flash_dev * 1137 1137 sddr09_get_cardinfo(struct us_data *us, unsigned char flags) { 1138 - struct nand_flash_dev *cardinfo; 1138 + const struct nand_flash_dev *cardinfo; 1139 1139 unsigned char deviceID[4]; 1140 1140 char blurbtxt[256]; 1141 1141 int result; ··· 1545 1545 1546 1546 struct sddr09_card_info *info; 1547 1547 1548 - static unsigned char inquiry_response[8] = { 1548 + static const unsigned char inquiry_response[8] = { 1549 1549 0x00, 0x80, 0x00, 0x02, 0x1F, 0x00, 0x00, 0x00 1550 1550 }; 1551 1551 1552 1552 /* note: no block descriptor support */ 1553 - static unsigned char mode_page_01[19] = { 1553 + static const unsigned char mode_page_01[19] = { 1554 1554 0x00, 0x0F, 0x00, 0x0, 0x0, 0x0, 0x00, 1555 1555 0x01, 0x0A, 1556 1556 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 ··· 1584 1584 } 1585 1585 1586 1586 if (srb->cmnd[0] == READ_CAPACITY) { 1587 - struct nand_flash_dev *cardinfo; 1587 + const struct nand_flash_dev *cardinfo; 1588 1588 1589 1589 sddr09_get_wp(us, info); /* read WP bit */ 1590 1590
+2 -2
drivers/usb/storage/sddr55.c
··· 775 775 static int sddr55_transport(struct scsi_cmnd *srb, struct us_data *us) 776 776 { 777 777 int result; 778 - static unsigned char inquiry_response[8] = { 778 + static const unsigned char inquiry_response[8] = { 779 779 0x00, 0x80, 0x00, 0x02, 0x1F, 0x00, 0x00, 0x00 780 780 }; 781 781 // write-protected for now, no block descriptor support 782 - static unsigned char mode_page_01[20] = { 782 + static const unsigned char mode_page_01[20] = { 783 783 0x0, 0x12, 0x00, 0x80, 0x0, 0x0, 0x0, 0x0, 784 784 0x01, 0x0A, 785 785 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+1 -1
drivers/usb/storage/shuttle_usbat.c
··· 1683 1683 struct usbat_info *info = (struct usbat_info *) (us->extra); 1684 1684 unsigned long block, blocks; 1685 1685 unsigned char *ptr = us->iobuf; 1686 - static unsigned char inquiry_response[36] = { 1686 + static const unsigned char inquiry_response[36] = { 1687 1687 0x00, 0x80, 0x00, 0x01, 0x1F, 0x00, 0x00, 0x00 1688 1688 }; 1689 1689
+1 -1
drivers/usb/storage/transport.c
··· 528 528 u32 sector; 529 529 530 530 /* To Report "Medium Error: Record Not Found */ 531 - static unsigned char record_not_found[18] = { 531 + static const unsigned char record_not_found[18] = { 532 532 [0] = 0x70, /* current error */ 533 533 [2] = MEDIUM_ERROR, /* = 0x03 */ 534 534 [7] = 0x0a, /* additional length */
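The transport.c hunk uses designated array initializers to build a fixed-format SCSI sense buffer: only the meaningful bytes are named, and C guarantees every unnamed element is zero. A self-contained sketch with the same byte values; `MEDIUM_ERROR` is written out as its numeric value 0x03 here since the kernel header is not available standalone.

```c
/* Fixed-format sense data, as in the record_not_found hunk: bytes
 * not listed in the initializer default to zero per C semantics. */
static const unsigned char sense[18] = {
	[0] = 0x70,	/* response code: current error */
	[2] = 0x03,	/* sense key: MEDIUM_ERROR */
	[7] = 0x0a,	/* additional sense length */
};
```

Making such a buffer `const` (as the hunk does) also documents that the transport layer copies it rather than patching it in place.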
+5 -5
drivers/usb/typec/altmodes/thunderbolt.c
··· 112 112 return; 113 113 114 114 disable_plugs: 115 - for (int i = TYPEC_PLUG_SOP_PP; i > 0; --i) { 115 + for (int i = TYPEC_PLUG_SOP_PP; i >= 0; --i) { 116 116 if (tbt->plug[i]) 117 117 typec_altmode_put_plug(tbt->plug[i]); 118 118 ··· 143 143 if (tbt->plug[TYPEC_PLUG_SOP_P]) { 144 144 ret = typec_cable_altmode_enter(alt, TYPEC_PLUG_SOP_P, NULL); 145 145 if (ret < 0) { 146 - for (int i = TYPEC_PLUG_SOP_PP; i > 0; --i) { 146 + for (int i = TYPEC_PLUG_SOP_PP; i >= 0; --i) { 147 147 if (tbt->plug[i]) 148 148 typec_altmode_put_plug(tbt->plug[i]); 149 149 ··· 324 324 { 325 325 struct tbt_altmode *tbt = typec_altmode_get_drvdata(alt); 326 326 327 - for (int i = TYPEC_PLUG_SOP_PP; i > 0; --i) { 327 + for (int i = TYPEC_PLUG_SOP_PP; i >= 0; --i) { 328 328 if (tbt->plug[i]) 329 329 typec_altmode_put_plug(tbt->plug[i]); 330 330 } ··· 351 351 */ 352 352 for (int i = 0; i < TYPEC_PLUG_SOP_PP + 1; i++) { 353 353 plug = typec_altmode_get_plug(tbt->alt, i); 354 - if (IS_ERR(plug)) 354 + if (!plug) 355 355 continue; 356 356 357 - if (!plug || plug->svid != USB_TYPEC_TBT_SID) 357 + if (plug->svid != USB_TYPEC_TBT_SID) 358 358 break; 359 359 360 360 plug->desc = "Thunderbolt3";
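The thunderbolt.c altmode hunk fixes two reference-handling bugs: the cleanup loops changed `i > 0` to `i >= 0` so the plug at index 0 (SOP') is released rather than leaked, and the plug probe now checks for NULL instead of misusing `IS_ERR()`. The loop-bound fix can be modeled in isolation; the index macros, `plug_put()`, and the counter below are local stand-ins for the kernel's `TYPEC_PLUG_SOP_P`/`TYPEC_PLUG_SOP_PP` enums and `typec_altmode_put_plug()`.

```c
/* Toy model of the plug array: index 0 is SOP' and index 1 is SOP''. */
#define PLUG_SOP_P	0
#define PLUG_SOP_PP	1

static int put_count;

static void plug_put(void *plug)
{
	if (plug)
		put_count++;
}

/* With the old "i > 0" bound the loop stopped before index 0 and the
 * SOP' reference was never dropped; ">= 0" visits both entries. */
static int release_plugs(void *plug[2])
{
	int i;

	put_count = 0;
	for (i = PLUG_SOP_PP; i >= 0; --i)
		plug_put(plug[i]);
	return put_count;
}
```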
+10
drivers/usb/typec/mux/Kconfig
··· 56 56 Say Y or M if your system has a On Semiconductor NB7VPQ904M Type-C 57 57 redriver chip found on some devices with a Type-C port. 58 58 59 + config TYPEC_MUX_PS883X 60 + tristate "Parade PS883x Type-C retimer driver" 61 + depends on I2C 62 + depends on DRM || DRM=n 63 + select DRM_AUX_BRIDGE if DRM_BRIDGE && OF 64 + select REGMAP_I2C 65 + help 66 + Say Y or M if your system has a Parade PS883x Type-C retimer chip 67 + found on some devices with a Type-C port. 68 + 59 69 config TYPEC_MUX_PTN36502 60 70 tristate "NXP PTN36502 Type-C redriver driver" 61 71 depends on I2C
+1
drivers/usb/typec/mux/Makefile
··· 6 6 obj-$(CONFIG_TYPEC_MUX_INTEL_PMC) += intel_pmc_mux.o 7 7 obj-$(CONFIG_TYPEC_MUX_IT5205) += it5205.o 8 8 obj-$(CONFIG_TYPEC_MUX_NB7VPQ904M) += nb7vpq904m.o 9 + obj-$(CONFIG_TYPEC_MUX_PS883X) += ps883x.o 9 10 obj-$(CONFIG_TYPEC_MUX_PTN36502) += ptn36502.o 10 11 obj-$(CONFIG_TYPEC_MUX_TUSB1046) += tusb1046.o 11 12 obj-$(CONFIG_TYPEC_MUX_WCD939X_USBSS) += wcd939x-usbss.o
+466
drivers/usb/typec/mux/ps883x.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Parade ps883x usb retimer driver 4 + * 5 + * Copyright (C) 2024 Linaro Ltd. 6 + */ 7 + 8 + #include <drm/bridge/aux-bridge.h> 9 + #include <linux/clk.h> 10 + #include <linux/gpio/consumer.h> 11 + #include <linux/i2c.h> 12 + #include <linux/kernel.h> 13 + #include <linux/module.h> 14 + #include <linux/mutex.h> 15 + #include <linux/regmap.h> 16 + #include <linux/regulator/consumer.h> 17 + #include <linux/usb/typec_altmode.h> 18 + #include <linux/usb/typec_dp.h> 19 + #include <linux/usb/typec_mux.h> 20 + #include <linux/usb/typec_retimer.h> 21 + 22 + #define REG_USB_PORT_CONN_STATUS_0 0x00 23 + 24 + #define CONN_STATUS_0_CONNECTION_PRESENT BIT(0) 25 + #define CONN_STATUS_0_ORIENTATION_REVERSED BIT(1) 26 + #define CONN_STATUS_0_USB_3_1_CONNECTED BIT(5) 27 + 28 + #define REG_USB_PORT_CONN_STATUS_1 0x01 29 + 30 + #define CONN_STATUS_1_DP_CONNECTED BIT(0) 31 + #define CONN_STATUS_1_DP_SINK_REQUESTED BIT(1) 32 + #define CONN_STATUS_1_DP_PIN_ASSIGNMENT_C_D BIT(2) 33 + #define CONN_STATUS_1_DP_HPD_LEVEL BIT(7) 34 + 35 + #define REG_USB_PORT_CONN_STATUS_2 0x02 36 + 37 + struct ps883x_retimer { 38 + struct i2c_client *client; 39 + struct gpio_desc *reset_gpio; 40 + struct regmap *regmap; 41 + struct typec_switch_dev *sw; 42 + struct typec_retimer *retimer; 43 + struct clk *xo_clk; 44 + struct regulator *vdd_supply; 45 + struct regulator *vdd33_supply; 46 + struct regulator *vdd33_cap_supply; 47 + struct regulator *vddat_supply; 48 + struct regulator *vddar_supply; 49 + struct regulator *vddio_supply; 50 + 51 + struct typec_switch *typec_switch; 52 + struct typec_mux *typec_mux; 53 + 54 + struct mutex lock; /* protect non-concurrent retimer & switch */ 55 + 56 + enum typec_orientation orientation; 57 + unsigned long mode; 58 + unsigned int svid; 59 + }; 60 + 61 + static int ps883x_configure(struct ps883x_retimer *retimer, int cfg0, 62 + int cfg1, int cfg2) 63 + { 64 + struct device *dev = &retimer->client->dev; 65 + int 
ret; 66 + 67 + ret = regmap_write(retimer->regmap, REG_USB_PORT_CONN_STATUS_0, cfg0); 68 + if (ret) { 69 + dev_err(dev, "failed to write conn_status_0: %d\n", ret); 70 + return ret; 71 + } 72 + 73 + ret = regmap_write(retimer->regmap, REG_USB_PORT_CONN_STATUS_1, cfg1); 74 + if (ret) { 75 + dev_err(dev, "failed to write conn_status_1: %d\n", ret); 76 + return ret; 77 + } 78 + 79 + ret = regmap_write(retimer->regmap, REG_USB_PORT_CONN_STATUS_2, cfg2); 80 + if (ret) { 81 + dev_err(dev, "failed to write conn_status_2: %d\n", ret); 82 + return ret; 83 + } 84 + 85 + return 0; 86 + } 87 + 88 + static int ps883x_set(struct ps883x_retimer *retimer) 89 + { 90 + int cfg0 = CONN_STATUS_0_CONNECTION_PRESENT; 91 + int cfg1 = 0x00; 92 + int cfg2 = 0x00; 93 + 94 + if (retimer->orientation == TYPEC_ORIENTATION_NONE || 95 + retimer->mode == TYPEC_STATE_SAFE) { 96 + return ps883x_configure(retimer, cfg0, cfg1, cfg2); 97 + } 98 + 99 + if (retimer->mode != TYPEC_STATE_USB && retimer->svid != USB_TYPEC_DP_SID) 100 + return -EINVAL; 101 + 102 + if (retimer->orientation == TYPEC_ORIENTATION_REVERSE) 103 + cfg0 |= CONN_STATUS_0_ORIENTATION_REVERSED; 104 + 105 + switch (retimer->mode) { 106 + case TYPEC_STATE_USB: 107 + cfg0 |= CONN_STATUS_0_USB_3_1_CONNECTED; 108 + break; 109 + 110 + case TYPEC_DP_STATE_C: 111 + cfg1 = CONN_STATUS_1_DP_CONNECTED | 112 + CONN_STATUS_1_DP_SINK_REQUESTED | 113 + CONN_STATUS_1_DP_PIN_ASSIGNMENT_C_D | 114 + CONN_STATUS_1_DP_HPD_LEVEL; 115 + break; 116 + 117 + case TYPEC_DP_STATE_D: 118 + cfg0 |= CONN_STATUS_0_USB_3_1_CONNECTED; 119 + cfg1 = CONN_STATUS_1_DP_CONNECTED | 120 + CONN_STATUS_1_DP_SINK_REQUESTED | 121 + CONN_STATUS_1_DP_PIN_ASSIGNMENT_C_D | 122 + CONN_STATUS_1_DP_HPD_LEVEL; 123 + break; 124 + 125 + case TYPEC_DP_STATE_E: 126 + cfg1 = CONN_STATUS_1_DP_CONNECTED | 127 + CONN_STATUS_1_DP_HPD_LEVEL; 128 + break; 129 + 130 + default: 131 + return -EOPNOTSUPP; 132 + } 133 + 134 + return ps883x_configure(retimer, cfg0, cfg1, cfg2); 135 + } 136 + 137 + 
static int ps883x_sw_set(struct typec_switch_dev *sw, 138 + enum typec_orientation orientation) 139 + { 140 + struct ps883x_retimer *retimer = typec_switch_get_drvdata(sw); 141 + int ret = 0; 142 + 143 + ret = typec_switch_set(retimer->typec_switch, orientation); 144 + if (ret) 145 + return ret; 146 + 147 + mutex_lock(&retimer->lock); 148 + 149 + if (retimer->orientation != orientation) { 150 + retimer->orientation = orientation; 151 + 152 + ret = ps883x_set(retimer); 153 + } 154 + 155 + mutex_unlock(&retimer->lock); 156 + 157 + return ret; 158 + } 159 + 160 + static int ps883x_retimer_set(struct typec_retimer *rtmr, 161 + struct typec_retimer_state *state) 162 + { 163 + struct ps883x_retimer *retimer = typec_retimer_get_drvdata(rtmr); 164 + struct typec_mux_state mux_state; 165 + int ret = 0; 166 + 167 + mutex_lock(&retimer->lock); 168 + 169 + if (state->mode != retimer->mode) { 170 + retimer->mode = state->mode; 171 + 172 + if (state->alt) 173 + retimer->svid = state->alt->svid; 174 + else 175 + retimer->svid = 0; 176 + 177 + ret = ps883x_set(retimer); 178 + } 179 + 180 + mutex_unlock(&retimer->lock); 181 + 182 + if (ret) 183 + return ret; 184 + 185 + mux_state.alt = state->alt; 186 + mux_state.data = state->data; 187 + mux_state.mode = state->mode; 188 + 189 + return typec_mux_set(retimer->typec_mux, &mux_state); 190 + } 191 + 192 + static int ps883x_enable_vregs(struct ps883x_retimer *retimer) 193 + { 194 + struct device *dev = &retimer->client->dev; 195 + int ret; 196 + 197 + ret = regulator_enable(retimer->vdd33_supply); 198 + if (ret) { 199 + dev_err(dev, "cannot enable VDD 3.3V regulator: %d\n", ret); 200 + return ret; 201 + } 202 + 203 + ret = regulator_enable(retimer->vdd33_cap_supply); 204 + if (ret) { 205 + dev_err(dev, "cannot enable VDD 3.3V CAP regulator: %d\n", ret); 206 + goto err_vdd33_disable; 207 + } 208 + 209 + usleep_range(4000, 10000); 210 + 211 + ret = regulator_enable(retimer->vdd_supply); 212 + if (ret) { 213 + dev_err(dev, "cannot enable 
VDD regulator: %d\n", ret); 214 + goto err_vdd33_cap_disable; 215 + } 216 + 217 + ret = regulator_enable(retimer->vddar_supply); 218 + if (ret) { 219 + dev_err(dev, "cannot enable VDD AR regulator: %d\n", ret); 220 + goto err_vdd_disable; 221 + } 222 + 223 + ret = regulator_enable(retimer->vddat_supply); 224 + if (ret) { 225 + dev_err(dev, "cannot enable VDD AT regulator: %d\n", ret); 226 + goto err_vddar_disable; 227 + } 228 + 229 + ret = regulator_enable(retimer->vddio_supply); 230 + if (ret) { 231 + dev_err(dev, "cannot enable VDD IO regulator: %d\n", ret); 232 + goto err_vddat_disable; 233 + } 234 + 235 + return 0; 236 + 237 + err_vddat_disable: 238 + regulator_disable(retimer->vddat_supply); 239 + err_vddar_disable: 240 + regulator_disable(retimer->vddar_supply); 241 + err_vdd_disable: 242 + regulator_disable(retimer->vdd_supply); 243 + err_vdd33_cap_disable: 244 + regulator_disable(retimer->vdd33_cap_supply); 245 + err_vdd33_disable: 246 + regulator_disable(retimer->vdd33_supply); 247 + 248 + return ret; 249 + } 250 + 251 + static void ps883x_disable_vregs(struct ps883x_retimer *retimer) 252 + { 253 + regulator_disable(retimer->vddio_supply); 254 + regulator_disable(retimer->vddat_supply); 255 + regulator_disable(retimer->vddar_supply); 256 + regulator_disable(retimer->vdd_supply); 257 + regulator_disable(retimer->vdd33_cap_supply); 258 + regulator_disable(retimer->vdd33_supply); 259 + } 260 + 261 + static int ps883x_get_vregs(struct ps883x_retimer *retimer) 262 + { 263 + struct device *dev = &retimer->client->dev; 264 + 265 + retimer->vdd_supply = devm_regulator_get(dev, "vdd"); 266 + if (IS_ERR(retimer->vdd_supply)) 267 + return dev_err_probe(dev, PTR_ERR(retimer->vdd_supply), 268 + "failed to get VDD\n"); 269 + 270 + retimer->vdd33_supply = devm_regulator_get(dev, "vdd33"); 271 + if (IS_ERR(retimer->vdd33_supply)) 272 + return dev_err_probe(dev, PTR_ERR(retimer->vdd33_supply), 273 + "failed to get VDD 3.3V\n"); 274 + 275 + retimer->vdd33_cap_supply = 
devm_regulator_get(dev, "vdd33-cap"); 276 + if (IS_ERR(retimer->vdd33_cap_supply)) 277 + return dev_err_probe(dev, PTR_ERR(retimer->vdd33_cap_supply), 278 + "failed to get VDD CAP 3.3V\n"); 279 + 280 + retimer->vddat_supply = devm_regulator_get(dev, "vddat"); 281 + if (IS_ERR(retimer->vddat_supply)) 282 + return dev_err_probe(dev, PTR_ERR(retimer->vddat_supply), 283 + "failed to get VDD AT\n"); 284 + 285 + retimer->vddar_supply = devm_regulator_get(dev, "vddar"); 286 + if (IS_ERR(retimer->vddar_supply)) 287 + return dev_err_probe(dev, PTR_ERR(retimer->vddar_supply), 288 + "failed to get VDD AR\n"); 289 + 290 + retimer->vddio_supply = devm_regulator_get(dev, "vddio"); 291 + if (IS_ERR(retimer->vddio_supply)) 292 + return dev_err_probe(dev, PTR_ERR(retimer->vddio_supply), 293 + "failed to get VDD IO\n"); 294 + 295 + return 0; 296 + } 297 + 298 + static const struct regmap_config ps883x_retimer_regmap = { 299 + .max_register = 0x1f, 300 + .reg_bits = 8, 301 + .val_bits = 8, 302 + }; 303 + 304 + static int ps883x_retimer_probe(struct i2c_client *client) 305 + { 306 + struct device *dev = &client->dev; 307 + struct typec_switch_desc sw_desc = { }; 308 + struct typec_retimer_desc rtmr_desc = { }; 309 + struct ps883x_retimer *retimer; 310 + unsigned int val; 311 + int ret; 312 + 313 + retimer = devm_kzalloc(dev, sizeof(*retimer), GFP_KERNEL); 314 + if (!retimer) 315 + return -ENOMEM; 316 + 317 + retimer->client = client; 318 + 319 + mutex_init(&retimer->lock); 320 + 321 + retimer->regmap = devm_regmap_init_i2c(client, &ps883x_retimer_regmap); 322 + if (IS_ERR(retimer->regmap)) 323 + return dev_err_probe(dev, PTR_ERR(retimer->regmap), 324 + "failed to allocate register map\n"); 325 + 326 + ret = ps883x_get_vregs(retimer); 327 + if (ret) 328 + return ret; 329 + 330 + retimer->xo_clk = devm_clk_get(dev, NULL); 331 + if (IS_ERR(retimer->xo_clk)) 332 + return dev_err_probe(dev, PTR_ERR(retimer->xo_clk), 333 + "failed to get xo clock\n"); 334 + 335 + retimer->reset_gpio = 
devm_gpiod_get(dev, "reset", GPIOD_ASIS); 336 + if (IS_ERR(retimer->reset_gpio)) 337 + return dev_err_probe(dev, PTR_ERR(retimer->reset_gpio), 338 + "failed to get reset gpio\n"); 339 + 340 + retimer->typec_switch = typec_switch_get(dev); 341 + if (IS_ERR(retimer->typec_switch)) 342 + return dev_err_probe(dev, PTR_ERR(retimer->typec_switch), 343 + "failed to acquire orientation-switch\n"); 344 + 345 + retimer->typec_mux = typec_mux_get(dev); 346 + if (IS_ERR(retimer->typec_mux)) { 347 + ret = dev_err_probe(dev, PTR_ERR(retimer->typec_mux), 348 + "failed to acquire mode-mux\n"); 349 + goto err_switch_put; 350 + } 351 + 352 + ret = drm_aux_bridge_register(dev); 353 + if (ret) 354 + goto err_mux_put; 355 + 356 + ret = ps883x_enable_vregs(retimer); 357 + if (ret) 358 + goto err_mux_put; 359 + 360 + ret = clk_prepare_enable(retimer->xo_clk); 361 + if (ret) { 362 + dev_err(dev, "failed to enable XO: %d\n", ret); 363 + goto err_vregs_disable; 364 + } 365 + 366 + /* skip resetting if already configured */ 367 + if (regmap_test_bits(retimer->regmap, REG_USB_PORT_CONN_STATUS_0, 368 + CONN_STATUS_0_CONNECTION_PRESENT) == 1) { 369 + gpiod_direction_output(retimer->reset_gpio, 0); 370 + } else { 371 + gpiod_direction_output(retimer->reset_gpio, 1); 372 + 373 + /* VDD IO supply enable to reset release delay */ 374 + usleep_range(4000, 14000); 375 + 376 + gpiod_set_value(retimer->reset_gpio, 0); 377 + 378 + /* firmware initialization delay */ 379 + msleep(60); 380 + 381 + /* make sure device is accessible */ 382 + ret = regmap_read(retimer->regmap, REG_USB_PORT_CONN_STATUS_0, 383 + &val); 384 + if (ret) { 385 + dev_err(dev, "failed to read conn_status_0: %d\n", ret); 386 + if (ret == -ENXIO) 387 + ret = -EIO; 388 + goto err_clk_disable; 389 + } 390 + } 391 + 392 + sw_desc.drvdata = retimer; 393 + sw_desc.fwnode = dev_fwnode(dev); 394 + sw_desc.set = ps883x_sw_set; 395 + 396 + retimer->sw = typec_switch_register(dev, &sw_desc); 397 + if (IS_ERR(retimer->sw)) { 398 + ret = 
PTR_ERR(retimer->sw); 399 + dev_err(dev, "failed to register typec switch: %d\n", ret); 400 + goto err_clk_disable; 401 + } 402 + 403 + rtmr_desc.drvdata = retimer; 404 + rtmr_desc.fwnode = dev_fwnode(dev); 405 + rtmr_desc.set = ps883x_retimer_set; 406 + 407 + retimer->retimer = typec_retimer_register(dev, &rtmr_desc); 408 + if (IS_ERR(retimer->retimer)) { 409 + ret = PTR_ERR(retimer->retimer); 410 + dev_err(dev, "failed to register typec retimer: %d\n", ret); 411 + goto err_switch_unregister; 412 + } 413 + 414 + return 0; 415 + 416 + err_switch_unregister: 417 + typec_switch_unregister(retimer->sw); 418 + err_clk_disable: 419 + clk_disable_unprepare(retimer->xo_clk); 420 + err_vregs_disable: 421 + gpiod_set_value(retimer->reset_gpio, 1); 422 + ps883x_disable_vregs(retimer); 423 + err_mux_put: 424 + typec_mux_put(retimer->typec_mux); 425 + err_switch_put: 426 + typec_switch_put(retimer->typec_switch); 427 + 428 + return ret; 429 + } 430 + 431 + static void ps883x_retimer_remove(struct i2c_client *client) 432 + { 433 + struct ps883x_retimer *retimer = i2c_get_clientdata(client); 434 + 435 + typec_retimer_unregister(retimer->retimer); 436 + typec_switch_unregister(retimer->sw); 437 + 438 + gpiod_set_value(retimer->reset_gpio, 1); 439 + 440 + clk_disable_unprepare(retimer->xo_clk); 441 + 442 + ps883x_disable_vregs(retimer); 443 + 444 + typec_mux_put(retimer->typec_mux); 445 + typec_switch_put(retimer->typec_switch); 446 + } 447 + 448 + static const struct of_device_id ps883x_retimer_of_table[] = { 449 + { .compatible = "parade,ps8830" }, 450 + { } 451 + }; 452 + MODULE_DEVICE_TABLE(of, ps883x_retimer_of_table); 453 + 454 + static struct i2c_driver ps883x_retimer_driver = { 455 + .driver = { 456 + .name = "ps883x_retimer", 457 + .of_match_table = ps883x_retimer_of_table, 458 + }, 459 + .probe = ps883x_retimer_probe, 460 + .remove = ps883x_retimer_remove, 461 + }; 462 + 463 + module_i2c_driver(ps883x_retimer_driver); 464 + 465 + MODULE_DESCRIPTION("Parade ps883x Type-C 
Retimer driver"); 466 + MODULE_LICENSE("GPL");
+15 -7
drivers/usb/typec/ucsi/cros_ec_ucsi.c
··· 105 105 return 0; 106 106 } 107 107 108 - static int cros_ucsi_sync_control(struct ucsi *ucsi, u64 cmd) 108 + static int cros_ucsi_sync_control(struct ucsi *ucsi, u64 cmd, u32 *cci, 109 + void *data, size_t size) 109 110 { 110 111 struct cros_ucsi_data *udata = ucsi_get_drvdata(ucsi); 111 112 int ret; 112 113 113 - ret = ucsi_sync_control_common(ucsi, cmd); 114 + ret = ucsi_sync_control_common(ucsi, cmd, cci, data, size); 114 115 switch (ret) { 115 116 case -EBUSY: 116 117 /* EC may return -EBUSY if CCI.busy is set. ··· 206 205 { 207 206 struct cros_ucsi_data *udata = container_of(nb, struct cros_ucsi_data, nb); 208 207 209 - if (!(host_event & PD_EVENT_PPM)) 210 - return NOTIFY_OK; 208 + if (host_event & PD_EVENT_INIT) { 209 + /* Late init event received from ChromeOS EC. Treat this as a 210 + * system resume to re-enable communication with the PPM. 211 + */ 212 + dev_dbg(udata->dev, "Late PD init received\n"); 213 + ucsi_resume(udata->ucsi); 214 + } 211 215 212 - dev_dbg(udata->dev, "UCSI notification received\n"); 213 - flush_work(&udata->work); 214 - schedule_work(&udata->work); 216 + if (host_event & PD_EVENT_PPM) { 217 + dev_dbg(udata->dev, "UCSI notification received\n"); 218 + flush_work(&udata->work); 219 + schedule_work(&udata->work); 220 + } 215 221 216 222 return NOTIFY_OK; 217 223 }
+5 -1
drivers/usb/typec/ucsi/debugfs.c
··· 28 28 ucsi->debugfs->status = 0; 29 29 30 30 switch (UCSI_COMMAND(val)) { 31 - case UCSI_SET_UOM: 31 + case UCSI_SET_CCOM: 32 32 case UCSI_SET_UOR: 33 33 case UCSI_SET_PDR: 34 34 case UCSI_CONNECTOR_RESET: 35 35 case UCSI_SET_SINK_PATH: 36 + case UCSI_SET_NEW_CAM: 36 37 ret = ucsi_send_command(ucsi, val, NULL, 0); 37 38 break; 38 39 case UCSI_GET_CAPABILITY: ··· 43 42 case UCSI_GET_PDOS: 44 43 case UCSI_GET_CABLE_PROPERTY: 45 44 case UCSI_GET_CONNECTOR_STATUS: 45 + case UCSI_GET_ERROR_STATUS: 46 + case UCSI_GET_CAM_CS: 47 + case UCSI_GET_LPM_PPM_INFO: 46 48 ret = ucsi_send_command(ucsi, val, 47 49 &ucsi->debugfs->response, 48 50 sizeof(ucsi->debugfs->response));
+1 -1
drivers/usb/typec/ucsi/trace.c
··· 12 12 [UCSI_SET_NOTIFICATION_ENABLE] = "SET_NOTIFICATION_ENABLE", 13 13 [UCSI_GET_CAPABILITY] = "GET_CAPABILITY", 14 14 [UCSI_GET_CONNECTOR_CAPABILITY] = "GET_CONNECTOR_CAPABILITY", 15 - [UCSI_SET_UOM] = "SET_UOM", 15 + [UCSI_SET_CCOM] = "SET_CCOM", 16 16 [UCSI_SET_UOR] = "SET_UOR", 17 17 [UCSI_SET_PDM] = "SET_PDM", 18 18 [UCSI_SET_PDR] = "SET_PDR",
+11 -8
drivers/usb/typec/ucsi/ucsi.c
··· 55 55 } 56 56 EXPORT_SYMBOL_GPL(ucsi_notify_common); 57 57 58 - int ucsi_sync_control_common(struct ucsi *ucsi, u64 command) 58 + int ucsi_sync_control_common(struct ucsi *ucsi, u64 command, u32 *cci, 59 + void *data, size_t size) 59 60 { 60 61 bool ack = UCSI_COMMAND(command) == UCSI_ACK_CC_CI; 61 62 int ret; ··· 81 80 else 82 81 clear_bit(COMMAND_PENDING, &ucsi->flags); 83 82 83 + if (!ret && cci) 84 + ret = ucsi->ops->read_cci(ucsi, cci); 85 + 86 + if (!ret && data && 87 + (*cci & UCSI_CCI_COMMAND_COMPLETE)) 88 + ret = ucsi->ops->read_message_in(ucsi, data, size); 89 + 84 90 return ret; 85 91 } 86 92 EXPORT_SYMBOL_GPL(ucsi_sync_control_common); ··· 103 95 ctrl |= UCSI_ACK_CONNECTOR_CHANGE; 104 96 } 105 97 106 - return ucsi->ops->sync_control(ucsi, ctrl); 98 + return ucsi->ops->sync_control(ucsi, ctrl, NULL, NULL, 0); 107 99 } 108 100 109 101 static int ucsi_run_command(struct ucsi *ucsi, u64 command, u32 *cci, ··· 116 108 if (size > UCSI_MAX_DATA_LENGTH(ucsi)) 117 109 return -EINVAL; 118 110 119 - ret = ucsi->ops->sync_control(ucsi, command); 120 - if (ucsi->ops->read_cci(ucsi, cci)) 121 - return -EIO; 111 + ret = ucsi->ops->sync_control(ucsi, command, cci, data, size); 122 112 123 113 if (*cci & UCSI_CCI_BUSY) 124 114 return ucsi_run_command(ucsi, UCSI_CANCEL, cci, NULL, 0, false) ?: -EBUSY; ··· 132 126 err = -EIO; 133 127 else 134 128 err = 0; 135 - 136 - if (!err && data && UCSI_CCI_LENGTH(*cci)) 137 - err = ucsi->ops->read_message_in(ucsi, data, size); 138 129 139 130 /* 140 131 * Don't ACK connection change if there was an error.
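The ucsi.c and ucsi.h hunks widen the `sync_control` contract: instead of the core issuing the command and then separately reading CCI and MESSAGE_IN, the backend's `sync_control` now returns all three in one call, so drivers like cros_ec_ucsi can intercept the whole exchange. A toy model of the new contract with stand-in types; the `1u << 29` completion bit matches `BIT(29)` used for `UCSI_CCI_COMMAND_COMPLETE`, but everything else here is illustrative.

```c
#include <string.h>

#define CCI_COMMAND_COMPLETE (1u << 29)	/* stand-in for UCSI_CCI_COMMAND_COMPLETE */

/* Toy backend state: last command written plus a canned response. */
struct toy_ucsi {
	unsigned long long cmd;
	unsigned int cci;
	char message_in[16];
};

/* Models the reworked sync_control contract: issue the command, then,
 * still inside the same call, hand back CCI and (when the command
 * completed) the MESSAGE_IN payload. */
static int toy_sync_control(struct toy_ucsi *u, unsigned long long cmd,
			    unsigned int *cci, void *data, size_t size)
{
	u->cmd = cmd;
	u->cci = CCI_COMMAND_COMPLETE;	/* pretend the PPM completed it */

	if (cci)
		*cci = u->cci;
	if (data && (u->cci & CCI_COMMAND_COMPLETE))
		memcpy(data, u->message_in,
		       size < sizeof(u->message_in) ? size : sizeof(u->message_in));
	return 0;
}
```

With this shape, a wrapper backend can post-process the payload (as the gram and cros_ec variants do) without needing a separate hooked `read_message_in`.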
+7 -3
drivers/usb/typec/ucsi/ucsi.h
··· 79 79 int (*read_cci)(struct ucsi *ucsi, u32 *cci); 80 80 int (*poll_cci)(struct ucsi *ucsi, u32 *cci); 81 81 int (*read_message_in)(struct ucsi *ucsi, void *val, size_t val_len); 82 - int (*sync_control)(struct ucsi *ucsi, u64 command); 82 + int (*sync_control)(struct ucsi *ucsi, u64 command, u32 *cci, 83 + void *data, size_t size); 83 84 int (*async_control)(struct ucsi *ucsi, u64 command); 84 85 bool (*update_altmodes)(struct ucsi *ucsi, struct ucsi_altmode *orig, 85 86 struct ucsi_altmode *updated); ··· 109 108 #define UCSI_GET_CAPABILITY_SIZE 128 110 109 #define UCSI_GET_CONNECTOR_CAPABILITY 0x07 111 110 #define UCSI_GET_CONNECTOR_CAPABILITY_SIZE 32 112 - #define UCSI_SET_UOM 0x08 111 + #define UCSI_SET_CCOM 0x08 113 112 #define UCSI_SET_UOR 0x09 114 113 #define UCSI_SET_PDM 0x0a 115 114 #define UCSI_SET_PDR 0x0b ··· 124 123 #define UCSI_GET_CONNECTOR_STATUS_SIZE 152 125 124 #define UCSI_GET_ERROR_STATUS 0x13 126 125 #define UCSI_GET_PD_MESSAGE 0x15 126 + #define UCSI_GET_CAM_CS 0x18 127 127 #define UCSI_SET_SINK_PATH 0x1c 128 + #define UCSI_GET_LPM_PPM_INFO 0x22 128 129 129 130 #define UCSI_CONNECTOR_NUMBER(_num_) ((u64)(_num_) << 16) 130 131 #define UCSI_COMMAND(_cmd_) ((_cmd_) & 0xff) ··· 534 531 int ucsi_resume(struct ucsi *ucsi); 535 532 536 533 void ucsi_notify_common(struct ucsi *ucsi, u32 cci); 537 - int ucsi_sync_control_common(struct ucsi *ucsi, u64 command); 534 + int ucsi_sync_control_common(struct ucsi *ucsi, u64 command, u32 *cci, 535 + void *data, size_t size); 538 536 539 537 #if IS_ENABLED(CONFIG_POWER_SUPPLY) 540 538 int ucsi_register_port_psy(struct ucsi_connector *con);
+9 -20
drivers/usb/typec/ucsi/ucsi_acpi.c
··· 105 105 .async_control = ucsi_acpi_async_control 106 106 }; 107 107 108 - static int ucsi_gram_read_message_in(struct ucsi *ucsi, void *val, size_t val_len) 108 + static int ucsi_gram_sync_control(struct ucsi *ucsi, u64 command, u32 *cci, 109 + void *val, size_t len) 109 110 { 110 111 u16 bogus_change = UCSI_CONSTAT_POWER_LEVEL_CHANGE | 111 112 UCSI_CONSTAT_PDOS_CHANGE; 112 113 struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi); 113 114 int ret; 114 115 115 - ret = ucsi_acpi_read_message_in(ucsi, val, val_len); 116 + ret = ucsi_sync_control_common(ucsi, command, cci, val, len); 116 117 if (ret < 0) 117 118 return ret; 119 + 120 + if (UCSI_COMMAND(ua->cmd) == UCSI_GET_PDOS && 121 + ua->cmd & UCSI_GET_PDOS_PARTNER_PDO(1) && 122 + ua->cmd & UCSI_GET_PDOS_SRC_PDOS) 123 + ua->check_bogus_event = true; 118 124 119 125 if (UCSI_COMMAND(ua->cmd) == UCSI_GET_CONNECTOR_STATUS && 120 126 ua->check_bogus_event) { ··· 134 128 return ret; 135 129 } 136 130 137 - static int ucsi_gram_sync_control(struct ucsi *ucsi, u64 command) 138 - { 139 - struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi); 140 - int ret; 141 - 142 - ret = ucsi_sync_control_common(ucsi, command); 143 - if (ret < 0) 144 - return ret; 145 - 146 - if (UCSI_COMMAND(ua->cmd) == UCSI_GET_PDOS && 147 - ua->cmd & UCSI_GET_PDOS_PARTNER_PDO(1) && 148 - ua->cmd & UCSI_GET_PDOS_SRC_PDOS) 149 - ua->check_bogus_event = true; 150 - 151 - return ret; 152 - } 153 - 154 131 static const struct ucsi_operations ucsi_gram_ops = { 155 132 .read_version = ucsi_acpi_read_version, 156 133 .read_cci = ucsi_acpi_read_cci, 157 134 .poll_cci = ucsi_acpi_poll_cci, 158 - .read_message_in = ucsi_gram_read_message_in, 135 + .read_message_in = ucsi_acpi_read_message_in, 159 136 .sync_control = ucsi_gram_sync_control, 160 137 .async_control = ucsi_acpi_async_control 161 138 };
+53 -44
drivers/usb/typec/ucsi/ucsi_ccg.c
···
 	u16 fw_build;
 	struct work_struct pm_work;
 
-	u64 last_cmd_sent;
 	bool has_multiple_dp;
 	struct ucsi_ccg_altmode orig[UCSI_MAX_ALTMODES];
 	struct ucsi_ccg_altmode updated[UCSI_MAX_ALTMODES];
···
  * first and then vdo=0x3
  */
 static void ucsi_ccg_nvidia_altmode(struct ucsi_ccg *uc,
-				    struct ucsi_altmode *alt)
+				    struct ucsi_altmode *alt,
+				    u64 command)
 {
-	switch (UCSI_ALTMODE_OFFSET(uc->last_cmd_sent)) {
+	switch (UCSI_ALTMODE_OFFSET(command)) {
 	case NVIDIA_FTB_DP_OFFSET:
 		if (alt[0].mid == USB_TYPEC_NVIDIA_VLINK_DBG_VDO)
 			alt[0].mid = USB_TYPEC_NVIDIA_VLINK_DP_VDO |
···
 static int ucsi_ccg_read_message_in(struct ucsi *ucsi, void *val, size_t val_len)
 {
 	struct ucsi_ccg *uc = ucsi_get_drvdata(ucsi);
-	struct ucsi_capability *cap;
-	struct ucsi_altmode *alt;
 
 	spin_lock(&uc->op_lock);
 	memcpy(val, uc->op_data.message_in, val_len);
 	spin_unlock(&uc->op_lock);
-
-	switch (UCSI_COMMAND(uc->last_cmd_sent)) {
-	case UCSI_GET_CURRENT_CAM:
-		if (uc->has_multiple_dp)
-			ucsi_ccg_update_get_current_cam_cmd(uc, (u8 *)val);
-		break;
-	case UCSI_GET_ALTERNATE_MODES:
-		if (UCSI_ALTMODE_RECIPIENT(uc->last_cmd_sent) ==
-		    UCSI_RECIPIENT_SOP) {
-			alt = val;
-			if (alt[0].svid == USB_TYPEC_NVIDIA_VLINK_SID)
-				ucsi_ccg_nvidia_altmode(uc, alt);
-		}
-		break;
-	case UCSI_GET_CAPABILITY:
-		if (uc->fw_build == CCG_FW_BUILD_NVIDIA_TEGRA) {
-			cap = val;
-			cap->features &= ~UCSI_CAP_ALT_MODE_DETAILS;
-		}
-		break;
-	default:
-		break;
-	}
-	uc->last_cmd_sent = 0;
 
 	return 0;
 }
···
 	return ccg_write(uc, reg, (u8 *)&command, sizeof(command));
 }
 
-static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command, u32 *cci,
+				 void *data, size_t size)
 {
 	struct ucsi_ccg *uc = ucsi_get_drvdata(ucsi);
 	struct ucsi_connector *con;
···
 	mutex_lock(&uc->lock);
 	pm_runtime_get_sync(uc->dev);
 
-	uc->last_cmd_sent = command;
-
-	if (UCSI_COMMAND(uc->last_cmd_sent) == UCSI_SET_NEW_CAM &&
+	if (UCSI_COMMAND(command) == UCSI_SET_NEW_CAM &&
 	    uc->has_multiple_dp) {
-		con_index = (uc->last_cmd_sent >> 16) &
+		con_index = (command >> 16) &
 			UCSI_CMD_CONNECTOR_MASK;
 		if (con_index == 0) {
 			ret = -EINVAL;
···
 		ucsi_ccg_update_set_new_cam_cmd(uc, con, &command);
 	}
 
-	ret = ucsi_sync_control_common(ucsi, command);
+	ret = ucsi_sync_control_common(ucsi, command, cci, data, size);
+
+	switch (UCSI_COMMAND(command)) {
+	case UCSI_GET_CURRENT_CAM:
+		if (uc->has_multiple_dp)
+			ucsi_ccg_update_get_current_cam_cmd(uc, (u8 *)data);
+		break;
+	case UCSI_GET_ALTERNATE_MODES:
+		if (UCSI_ALTMODE_RECIPIENT(command) == UCSI_RECIPIENT_SOP) {
+			struct ucsi_altmode *alt = data;
+
+			if (alt[0].svid == USB_TYPEC_NVIDIA_VLINK_SID)
+				ucsi_ccg_nvidia_altmode(uc, alt, command);
+		}
+		break;
+	case UCSI_GET_CAPABILITY:
+		if (uc->fw_build == CCG_FW_BUILD_NVIDIA_TEGRA) {
+			struct ucsi_capability *cap = data;
+
+			cap->features &= ~UCSI_CAP_ALT_MODE_DETAILS;
+		}
+		break;
+	default:
+		break;
+	}
 
 err_put:
 	pm_runtime_put_sync(uc->dev);
···
 	if (!flash)
 		return n;
 
-	if (uc->fw_build == 0x0) {
-		dev_err(dev, "fail to flash FW due to missing FW build info\n");
-		return -EINVAL;
-	}
-
 	schedule_work(&uc->work);
 	return n;
+}
+
+static umode_t ucsi_ccg_attrs_is_visible(struct kobject *kobj, struct attribute *attr, int idx)
+{
+	struct device *dev = kobj_to_dev(kobj);
+	struct ucsi_ccg *uc = i2c_get_clientdata(to_i2c_client(dev));
+
+	if (!uc->fw_build)
+		return 0;
+
+	return attr->mode;
 }
 
 static DEVICE_ATTR_WO(do_flash);
···
 	&dev_attr_do_flash.attr,
 	NULL,
 };
-ATTRIBUTE_GROUPS(ucsi_ccg);
+static struct attribute_group ucsi_ccg_attr_group = {
+	.attrs = ucsi_ccg_attrs,
+	.is_visible = ucsi_ccg_attrs_is_visible,
+};
+static const struct attribute_group *ucsi_ccg_groups[] = {
+	&ucsi_ccg_attr_group,
+	NULL,
+};
 
 static int ucsi_ccg_probe(struct i2c_client *client)
 {
···
 			uc->fw_build = CCG_FW_BUILD_NVIDIA_TEGRA;
 		else if (!strcmp(fw_name, "nvidia,gpu"))
 			uc->fw_build = CCG_FW_BUILD_NVIDIA;
+		if (!uc->fw_build)
+			dev_err(uc->dev, "failed to get FW build information\n");
 	}
-
-	if (!uc->fw_build)
-		dev_err(uc->dev, "failed to get FW build information\n");
 
 	/* reset ccg device and initialize ucsi */
 	status = ucsi_ccg_init(uc);
+1
include/linux/platform_data/cros_ec_commands.h
···
 #define PD_EVENT_DATA_SWAP	BIT(3)
 #define PD_EVENT_TYPEC		BIT(4)
 #define PD_EVENT_PPM		BIT(5)
+#define PD_EVENT_INIT		BIT(6)
 
 struct ec_response_host_event_status {
 	uint32_t status; /* PD MCU host event status */
+5 -3
include/linux/usb.h
···
  * @desc: descriptor for this endpoint, wMaxPacketSize in native byteorder
  * @ss_ep_comp: SuperSpeed companion descriptor for this endpoint
  * @ssp_isoc_ep_comp: SuperSpeedPlus isoc companion descriptor for this endpoint
+ * @eusb2_isoc_ep_comp: eUSB2 isoc companion descriptor for this endpoint
  * @urb_list: urbs queued to this endpoint; maintained by usbcore
  * @hcpriv: for use by HCD; typically holds hardware dma queue head (QH)
  *	with one or more transfer descriptors (TDs) per urb
···
  * descriptor within an active interface in a given USB configuration.
  */
 struct usb_host_endpoint {
-	struct usb_endpoint_descriptor		desc;
-	struct usb_ss_ep_comp_descriptor	ss_ep_comp;
-	struct usb_ssp_isoc_ep_comp_descriptor	ssp_isoc_ep_comp;
+	struct usb_endpoint_descriptor		desc;
+	struct usb_ss_ep_comp_descriptor	ss_ep_comp;
+	struct usb_ssp_isoc_ep_comp_descriptor	ssp_isoc_ep_comp;
+	struct usb_eusb2_isoc_ep_comp_descriptor eusb2_isoc_ep_comp;
 	struct list_head		urb_list;
 	void				*hcpriv;
 	struct ep_device		*ep_dev;	/* For sysfs info */
+1 -1
include/linux/usb/musb.h
···
 };
 
 struct musb_hdrc_config {
-	struct musb_fifo_cfg	*fifo_cfg;	/* board fifo configuration */
+	const struct musb_fifo_cfg *fifo_cfg;	/* board fifo configuration */
 	unsigned		fifo_cfg_size;	/* size of the fifo configuration */
 
 	/* MUSB configuration-specific details */
-9
include/linux/usb/ulpi.h
···
 /*-------------------------------------------------------------------------*/
 
 #if IS_ENABLED(CONFIG_USB_ULPI)
-struct usb_phy *otg_ulpi_create(struct usb_phy_io_ops *ops,
-				unsigned int flags);
-
 struct usb_phy *devm_otg_ulpi_create(struct device *dev,
 				     struct usb_phy_io_ops *ops,
 				     unsigned int flags);
 #else
-static inline struct usb_phy *otg_ulpi_create(struct usb_phy_io_ops *ops,
-					      unsigned int flags)
-{
-	return NULL;
-}
-
 static inline struct usb_phy *devm_otg_ulpi_create(struct device *dev,
 						   struct usb_phy_io_ops *ops,
 						   unsigned int flags)
+15
include/uapi/linux/usb/ch9.h
···
 #define USB_DT_BOS			0x0f
 #define USB_DT_DEVICE_CAPABILITY	0x10
 #define USB_DT_WIRELESS_ENDPOINT_COMP	0x11
+/* From the eUSB2 spec */
+#define USB_DT_EUSB2_ISOC_ENDPOINT_COMP	0x12
+/* From Wireless USB spec */
 #define USB_DT_WIRE_ADAPTER		0x21
 /* From USB Device Firmware Upgrade Specification, Revision 1.1 */
 #define USB_DT_DFU_FUNCTIONAL		0x21
···
 {
 	return epd->bmAttributes & USB_ENDPOINT_INTRTYPE;
 }
+
+/*-------------------------------------------------------------------------*/
+
+/* USB_DT_EUSB2_ISOC_ENDPOINT_COMP: eUSB2 Isoch Endpoint Companion descriptor */
+struct usb_eusb2_isoc_ep_comp_descriptor {
+	__u8  bLength;
+	__u8  bDescriptorType;
+	__le16 wMaxPacketSize;
+	__le32 dwBytesPerInterval;
+} __attribute__ ((packed));
+
+#define USB_DT_EUSB2_ISOC_EP_COMP_SIZE	8
 
 /*-------------------------------------------------------------------------*/
 
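The new `usb_eusb2_isoc_ep_comp_descriptor` added to ch9.h above is an 8-byte blob with little-endian multi-byte fields. As a rough user-space sketch (the helper name and decoding approach here are illustrative, not the kernel's code), such a descriptor could be validated and decoded endian-safely like this:

```c
#include <stddef.h>
#include <stdint.h>

/* Host-order mirror of the descriptor added to ch9.h in this merge. */
struct eusb2_isoc_ep_comp {
	uint8_t  bLength;
	uint8_t  bDescriptorType;
	uint16_t wMaxPacketSize;
	uint32_t dwBytesPerInterval;
};

#define DT_EUSB2_ISOC_ENDPOINT_COMP	0x12	/* USB_DT_EUSB2_ISOC_ENDPOINT_COMP */
#define EUSB2_ISOC_EP_COMP_SIZE		8	/* USB_DT_EUSB2_ISOC_EP_COMP_SIZE */

/* Decode one companion descriptor from a raw descriptor buffer.
 * Returns 0 on success, -1 if the buffer is too short or the
 * length/type fields do not match the companion descriptor. */
static int parse_eusb2_isoc_comp(const uint8_t *buf, size_t len,
				 struct eusb2_isoc_ep_comp *out)
{
	if (len < EUSB2_ISOC_EP_COMP_SIZE ||
	    buf[0] != EUSB2_ISOC_EP_COMP_SIZE ||
	    buf[1] != DT_EUSB2_ISOC_ENDPOINT_COMP)
		return -1;

	out->bLength = buf[0];
	out->bDescriptorType = buf[1];
	/* Assemble multi-byte fields byte-by-byte so the result is
	 * correct regardless of host endianness. */
	out->wMaxPacketSize = (uint16_t)(buf[2] | (buf[3] << 8));
	out->dwBytesPerInterval = (uint32_t)buf[4] |
				  ((uint32_t)buf[5] << 8) |
				  ((uint32_t)buf[6] << 16) |
				  ((uint32_t)buf[7] << 24);
	return 0;
}
```

The length and type checks mirror how descriptor parsers typically guard against truncated or mislabeled descriptors before trusting the remaining fields.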