Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'usb-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB / Thunderbolt updates from Greg KH:
"Here is the big USB and thunderbolt pull request for 5.11-rc1.

Nothing major in here, just the grind of constant development to
support new hardware and fix old issues:

- thunderbolt updates for new USB4 hardware

- cdns3 major driver updates

- lots of typec updates and additions as more hardware is available

- usb serial driver updates and fixes

- other tiny USB driver updates

All have been in linux-next with no reported issues"

* tag 'usb-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (172 commits)
usb: phy: convert comma to semicolon
usb: ucsi: convert comma to semicolon
usb: typec: tcpm: convert comma to semicolon
usb: typec: tcpm: Update vbus_vsafe0v on init
usb: typec: tcpci: Enable bleed discharge when auto discharge is enabled
usb: typec: Add class for plug alt mode device
USB: typec: tcpci: Add Bleed discharge to POWER_CONTROL definition
USB: typec: tcpm: Add a 30ms room for tPSSourceOn in PR_SWAP
USB: typec: tcpm: Fix PR_SWAP error handling
USB: typec: tcpm: Hard Reset after not receiving a Request
USB: gadget: f_fs: remove likely/unlikely
usb: gadget: f_fs: Re-use SS descriptors for SuperSpeedPlus
USB: gadget: f_midi: setup SuperSpeed Plus descriptors
USB: gadget: f_acm: add support for SuperSpeed Plus
USB: gadget: f_rndis: fix bitrate for SuperSpeed and above
usb: typec: intel_pmc_mux: Configure cable generation value for USB4
MAINTAINERS: Add myself as a reviewer for CADENCE USB3 DRD IP DRIVER
usb: chipidea: ci_hdrc_imx: Use of_device_get_match_data()
usb: chipidea: usbmisc_imx: Use of_device_get_match_data()
usb: cdns3: fix NULL pointer dereference on no platform data
...

+4301 -4711
Documentation/ABI/testing/sysfs-bus-thunderbolt (+28)

+What:		/sys/bus/thunderbolt/devices/<xdomain>/rx_speed
+Date:		Feb 2021
+KernelVersion:	5.11
+Contact:	Isaac Hazan <isaac.hazan@intel.com>
+Description:	This attribute reports the XDomain RX speed per lane.
+		All RX lanes run at the same speed.
+
+What:		/sys/bus/thunderbolt/devices/<xdomain>/rx_lanes
+Date:		Feb 2021
+KernelVersion:	5.11
+Contact:	Isaac Hazan <isaac.hazan@intel.com>
+Description:	This attribute reports the number of RX lanes the XDomain
+		is using simultaneously through its upstream port.
+
+What:		/sys/bus/thunderbolt/devices/<xdomain>/tx_speed
+Date:		Feb 2021
+KernelVersion:	5.11
+Contact:	Isaac Hazan <isaac.hazan@intel.com>
+Description:	This attribute reports the XDomain TX speed per lane.
+		All TX lanes run at the same speed.
+
+What:		/sys/bus/thunderbolt/devices/<xdomain>/tx_lanes
+Date:		Feb 2021
+KernelVersion:	5.11
+Contact:	Isaac Hazan <isaac.hazan@intel.com>
+Description:	This attribute reports number of TX lanes the XDomain
+		is using simultaneously through its upstream port.
+
 What:		/sys/bus/thunderbolt/devices/.../domainX/boot_acl
 Date:		Jun 2018
 KernelVersion:	4.17
Documentation/ABI/testing/sysfs-class-typec (+108 -34)

···
 		Shows if the partner supports USB Power Delivery communication:
 		Valid values: yes, no
 
+What:		/sys/class/typec/<port>-partner/number_of_alternate_modes
+Date:		November 2020
+Contact:	Prashant Malani <pmalani@chromium.org>
+Description:
+		Shows the number of alternate modes which are advertised by the partner
+		during Power Delivery discovery. This file remains hidden until a value
+		greater than or equal to 0 is set by Type C port driver.
+
+What:		/sys/class/typec/<port>-partner/type
+Date:		December 2020
+Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
+Description:	USB Power Delivery Specification defines a set of product types
+		for the partner devices. This file will show the product type of
+		the partner if it is known. Dual-role capable partners will have
+		both UFP and DFP product types defined, but only one that
+		matches the current role will be active at the time. If the
+		product type of the partner is not visible to the device driver,
+		this file will not exist.
+
+		When the partner product type is detected, or changed with role
+		swap, uvevent is also raised that contains PRODUCT_TYPE=<product
+		type> (for example PRODUCT_TYPE=hub).
+
+		Valid values:
+
+		UFP / device role
+		====================== ==========================
+		undefined              -
+		hub                    PDUSB Hub
+		peripheral             PDUSB Peripheral
+		psd                    Power Bank
+		ama                    Alternate Mode Adapter
+		====================== ==========================
+
+		DFP / host role
+		====================== ==========================
+		undefined              -
+		hub                    PDUSB Hub
+		host                   PDUSB Host
+		power_brick            Power Brick
+		amc                    Alternate Mode Controller
+		====================== ==========================
+
 What:		/sys/class/typec/<port>-partner>/identity/
 Date:		April 2017
 Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
···
 		directory exists, it will have an attribute file for every VDO
 		in Discover Identity command result.
 
-What:		/sys/class/typec/<port>-partner/identity/id_header
-Date:		April 2017
-Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
-Description:
-		ID Header VDO part of Discover Identity command result. The
-		value will show 0 until Discover Identity command result becomes
-		available. The value can be polled.
-
-What:		/sys/class/typec/<port>-partner/identity/cert_stat
-Date:		April 2017
-Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
-Description:
-		Cert Stat VDO part of Discover Identity command result. The
-		value will show 0 until Discover Identity command result becomes
-		available. The value can be polled.
-
-What:		/sys/class/typec/<port>-partner/identity/product
-Date:		April 2017
-Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
-Description:
-		Product VDO part of Discover Identity command result. The value
-		will show 0 until Discover Identity command result becomes
-		available. The value can be polled.
-
-
 USB Type-C cable devices (eg. /sys/class/typec/port0-cable/)
 
 Note: Electronically Marked Cables will have a device also for one cable plug
···
 What:		/sys/class/typec/<port>-cable/type
 Date:		April 2017
 Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
-Description:
-		Shows if the cable is active.
-		Valid values: active, passive
+Description:	USB Power Delivery Specification defines a set of product types
+		for the cables. This file will show the product type of the
+		cable if it is known. If the product type of the cable is not
+		visible to the device driver, this file will not exist.
+
+		When the cable product type is detected, uvevent is also raised
+		with PRODUCT_TYPE showing the product type of the cable.
+
+		Valid values:
+
+		====================== ==========================
+		undefined              -
+		active                 Active Cable
+		passive                Passive Cable
+		====================== ==========================
 
 What:		/sys/class/typec/<port>-cable/plug_type
 Date:		April 2017
···
 		- type-c
 		- captive
 
-What:		/sys/class/typec/<port>-cable/identity/
+What:		/sys/class/typec/<port>-<plug>/number_of_alternate_modes
+Date:		November 2020
+Contact:	Prashant Malani <pmalani@chromium.org>
+Description:
+		Shows the number of alternate modes which are advertised by the plug
+		associated with a particular cable during Power Delivery discovery.
+		This file remains hidden until a value greater than or equal to 0
+		is set by Type C port driver.
+
+
+USB Type-C partner/cable Power Delivery Identity objects
+
+NOTE: The following attributes will be applicable to both
+partner (e.g /sys/class/typec/port0-partner/) and
+cable (e.g /sys/class/typec/port0-cable/) devices. Consequently, the example file
+paths below are prefixed with "/sys/class/typec/<port>-{partner|cable}/" to
+reflect this.
+
+What:		/sys/class/typec/<port>-{partner|cable}/identity/
 Date:		April 2017
 Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
 Description:
 		This directory appears only if the port device driver is capable
 		of showing the result of Discover Identity USB power delivery
 		command. That will not always be possible even when USB power
-		delivery is supported. If the directory exists, it will have an
-		attribute for every VDO returned by Discover Identity command.
+		delivery is supported, for example when USB power delivery
+		communication for the port is mostly handled in firmware. If the
+		directory exists, it will have an attribute file for every VDO
+		in Discover Identity command result.
 
-What:		/sys/class/typec/<port>-cable/identity/id_header
+What:		/sys/class/typec/<port>-{partner|cable}/identity/id_header
 Date:		April 2017
 Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
 Description:
···
 		value will show 0 until Discover Identity command result becomes
 		available. The value can be polled.
 
-What:		/sys/class/typec/<port>-cable/identity/cert_stat
+What:		/sys/class/typec/<port>-{partner|cable}/identity/cert_stat
 Date:		April 2017
 Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
 Description:
···
 		value will show 0 until Discover Identity command result becomes
 		available. The value can be polled.
 
-What:		/sys/class/typec/<port>-cable/identity/product
+What:		/sys/class/typec/<port>-{partner|cable}/identity/product
 Date:		April 2017
 Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
 Description:
 		Product VDO part of Discover Identity command result. The value
 		will show 0 until Discover Identity command result becomes
 		available. The value can be polled.
+
+What:		/sys/class/typec/<port>-{partner|cable}/identity/product_type_vdo1
+Date:		October 2020
+Contact:	Prashant Malani <pmalani@chromium.org>
+Description:
+		1st Product Type VDO of Discover Identity command result.
+		The value will show 0 until Discover Identity command result becomes
+		available and a valid Product Type VDO is returned.
+
+What:		/sys/class/typec/<port>-{partner|cable}/identity/product_type_vdo2
+Date:		October 2020
+Contact:	Prashant Malani <pmalani@chromium.org>
+Description:
+		2nd Product Type VDO of Discover Identity command result.
+		The value will show 0 until Discover Identity command result becomes
+		available and a valid Product Type VDO is returned.
+
+What:		/sys/class/typec/<port>-{partner|cable}/identity/product_type_vdo3
+Date:		October 2020
+Contact:	Prashant Malani <pmalani@chromium.org>
+Description:
+		3rd Product Type VDO of Discover Identity command result.
+		The value will show 0 until Discover Identity command result becomes
+		available and a valid Product Type VDO is returned.
 
 
 USB Type-C port alternate mode devices.
Documentation/admin-guide/kernel-parameters.txt (+1)

···
 				device);
 			j = NO_REPORT_LUNS (don't use report luns
 				command, uas only);
+			k = NO_SAME (do not use WRITE_SAME, uas only)
 			l = NOT_LOCKABLE (don't try to lock and
 				unlock ejectable media, not on uas);
 			m = MAX_SECTORS_64 (don't transfer more
Documentation/devicetree/bindings/connector/usb-connector.yaml (+19)

···
     required:
       - port@0
 
+  new-source-frs-typec-current:
+    description: Initial current capability of the new source when vSafe5V
+      is applied during PD3.0 Fast Role Swap. "Table 6-14 Fixed Supply PDO - Sink"
+      of "USB Power Delivery Specification Revision 3.0, Version 1.2" provides the
+      different power levels and "6.4.1.3.1.6 Fast Role Swap USB Type-C Current"
+      provides a detailed description of the field. The sink PDO from current source
+      reflects the current source's (i.e. transmitter of the FRS signal) power
+      requirement during fr swap. The current sink (i.e. receiver of the FRS signal),
+      a.k.a new source, should check if it will be able to satisfy the current source's,
+      new sink's, requirement during frswap before enabling the frs signal reception.
+      This property refers to maximum current capability that the current sink can
+      satisfy. During FRS, VBUS voltage is at 5V, as the partners are in implicit
+      contract, hence, the power level is only a function of the current capability.
+      "1" refers to default USB power level as described by "Table 6-14 Fixed Supply PDO - Sink".
+      "2" refers to 1.5A@5V.
+      "3" refers to 3.0A@5V.
+    $ref: /schemas/types.yaml#/definitions/uint32
+    enum: [1, 2, 3]
+
 required:
   - compatible
 
Documentation/devicetree/bindings/usb/brcm,usb-pinmap.yaml (+70, new file)

+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/usb/brcm,usb-pinmap.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Broadcom USB pin map Controller Device Tree Bindings
+
+maintainers:
+  - Al Cooper <alcooperx@gmail.com>
+
+properties:
+  compatible:
+    items:
+      - const: brcm,usb-pinmap
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+    description: Interrupt for signals mirrored to out-gpios.
+
+  in-gpios:
+    description: Array of one or two GPIO pins used for input signals.
+
+  brcm,in-functions:
+    $ref: /schemas/types.yaml#/definitions/string-array
+    description: Array of input signal names, one per gpio in in-gpios.
+
+  brcm,in-masks:
+    $ref: /schemas/types.yaml#/definitions/uint32-array
+    description: Array of enable and mask pairs, one per gpio in-gpios.
+
+  out-gpios:
+    description: Array of one GPIO pin used for output signals.
+
+  brcm,out-functions:
+    $ref: /schemas/types.yaml#/definitions/string-array
+    description: Array of output signal names, one per gpio in out-gpios.
+
+  brcm,out-masks:
+    $ref: /schemas/types.yaml#/definitions/uint32-array
+    description: Array of enable, value, changed and clear masks, one
+      per gpio in out-gpios.
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+dependencies:
+  in-gpios: [ interrupts ]
+
+examples:
+  - |
+    usb_pinmap: usb-pinmap@22000d0 {
+        compatible = "brcm,usb-pinmap";
+        reg = <0x22000d0 0x4>;
+        in-gpios = <&gpio 18 0>, <&gpio 19 0>;
+        brcm,in-functions = "VBUS", "PWRFLT";
+        brcm,in-masks = <0x8000 0x40000 0x10000 0x80000>;
+        out-gpios = <&gpio 20 0>;
+        brcm,out-functions = "PWRON";
+        brcm,out-masks = <0x20000 0x800000 0x400000 0x200000>;
+        interrupts = <0x0 0xb2 0x4>;
+    };
+
+...
Documentation/devicetree/bindings/usb/cdns,usb3.yaml (+5)

···
       - const: dev
 
   interrupts:
+    minItems: 3
     items:
       - description: OTG/DRD controller interrupt
       - description: XHCI host controller interrupt
       - description: Device controller interrupt
+      - description: interrupt used to wake up core, e.g when usbcmd.rs is
+          cleared by xhci core, this interrupt is optional
 
   interrupt-names:
+    minItems: 3
     items:
       - const: host
       - const: peripheral
       - const: otg
+      - const: wakeup
 
   dr_mode:
     enum: [host, otg, peripheral]
Documentation/devicetree/bindings/usb/maxim,max33359.yaml (+75, new file)

+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/usb/maxim,max33359.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Maxim TCPCI Type-C PD controller DT bindings
+
+maintainers:
+  - Badhri Jagan Sridharan <badhri@google.com>
+
+description: Maxim TCPCI Type-C PD controller
+
+properties:
+  compatible:
+    enum:
+      - maxim,max33359
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  connector:
+    type: object
+    $ref: ../connector/usb-connector.yaml#
+    description:
+      Properties for usb c connector.
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - connector
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+    #include <dt-bindings/usb/pd.h>
+    i2c0 {
+        #address-cells = <1>;
+        #size-cells = <0>;
+
+        maxtcpc@25 {
+            compatible = "maxim,max33359";
+            reg = <0x25>;
+            interrupt-parent = <&gpa8>;
+            interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+
+            connector {
+                compatible = "usb-c-connector";
+                label = "USB-C";
+                data-role = "dual";
+                power-role = "dual";
+                try-power-role = "sink";
+                self-powered;
+                op-sink-microwatt = <2600000>;
+                new-source-frs-typec-current = <FRS_5V_1P5A>;
+                source-pdos = <PDO_FIXED(5000, 900,
+                               PDO_FIXED_SUSPEND |
+                               PDO_FIXED_USB_COMM |
+                               PDO_FIXED_DATA_SWAP |
+                               PDO_FIXED_DUAL_ROLE)>;
+                sink-pdos = <PDO_FIXED(5000, 3000,
+                             PDO_FIXED_USB_COMM |
+                             PDO_FIXED_DATA_SWAP |
+                             PDO_FIXED_DUAL_ROLE)
+                             PDO_FIXED(9000, 2000, 0)>;
+            };
+        };
+    };
+...
MAINTAINERS (+15)

···
 F:	Documentation/devicetree/bindings/usb/brcm,bcm7445-ehci.yaml
 F:	drivers/usb/host/ehci-brcm.*
 
+BROADCOM BRCMSTB USB PIN MAP DRIVER
+M:	Al Cooper <alcooperx@gmail.com>
+L:	linux-usb@vger.kernel.org
+L:	bcm-kernel-feedback-list@broadcom.com
+S:	Maintained
+F:	Documentation/devicetree/bindings/usb/brcm,usb-pinmap.yaml
+F:	drivers/usb/misc/brcmstb-usb-pinmap.c
+
 BROADCOM BRCMSTB USB2 and USB3 PHY DRIVER
 M:	Al Cooper <alcooperx@gmail.com>
 L:	linux-kernel@vger.kernel.org
···
 M:	Peter Chen <peter.chen@nxp.com>
 M:	Pawel Laszczak <pawell@cadence.com>
 M:	Roger Quadros <rogerq@ti.com>
+R:	Aswath Govindraju <a-govindraju@ti.com>
 L:	linux-usb@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb.git
···
 W:	http://thinkwiki.org/wiki/Ibm-acpi
 T:	git git://repo.or.cz/linux-2.6/linux-acpi-2.6/ibm-acpi-2.6.git
 F:	drivers/platform/x86/thinkpad_acpi.c
+
+THUNDERBOLT DMA TRAFFIC TEST DRIVER
+M:	Isaac Hazan <isaac.hazan@intel.com>
+L:	linux-usb@vger.kernel.org
+S:	Maintained
+F:	drivers/thunderbolt/dma_test.c
 
 THUNDERBOLT DRIVER
 M:	Andreas Noever <andreas.noever@gmail.com>
arch/arm/configs/badge4_defconfig (-1)

···
 CONFIG_USB_SERIAL_MCT_U232=m
 CONFIG_USB_SERIAL_PL2303=m
 CONFIG_USB_SERIAL_CYBERJACK=m
-CONFIG_USB_SERIAL_XIRCOM=m
 CONFIG_USB_SERIAL_OMNINET=m
 CONFIG_EXT2_FS=m
 CONFIG_EXT3_FS=m
arch/arm/configs/corgi_defconfig (-1)

···
 CONFIG_USB_SERIAL_SAFE=m
 CONFIG_USB_SERIAL_TI=m
 CONFIG_USB_SERIAL_CYBERJACK=m
-CONFIG_USB_SERIAL_XIRCOM=m
 CONFIG_USB_SERIAL_OMNINET=m
 CONFIG_USB_EMI62=m
 CONFIG_USB_EMI26=m
arch/arm/configs/pxa_defconfig (-1)

···
 CONFIG_USB_SERIAL_SAFE=m
 CONFIG_USB_SERIAL_TI=m
 CONFIG_USB_SERIAL_CYBERJACK=m
-CONFIG_USB_SERIAL_XIRCOM=m
 CONFIG_USB_SERIAL_OMNINET=m
 CONFIG_USB_EMI62=m
 CONFIG_USB_EMI26=m
arch/arm/configs/spitz_defconfig (-1)

···
 CONFIG_USB_SERIAL_SAFE=m
 CONFIG_USB_SERIAL_TI=m
 CONFIG_USB_SERIAL_CYBERJACK=m
-CONFIG_USB_SERIAL_XIRCOM=m
 CONFIG_USB_SERIAL_OMNINET=m
 CONFIG_USB_EMI62=m
 CONFIG_USB_EMI26=m
arch/arm/mach-omap1/board-h2.c (+20 -2)

···
  * Copyright (C) 2004 Nokia Corporation by Imre Deak <imre.deak@nokia.com>
  */
 #include <linux/gpio.h>
+#include <linux/gpio/machine.h>
 #include <linux/kernel.h>
 #include <linux/platform_device.h>
 #include <linux/delay.h>
···
 
 #include "common.h"
 #include "board-h2.h"
+
+/* The first 16 SoC GPIO lines are on this GPIO chip */
+#define OMAP_GPIO_LABEL	"gpio-0-15"
 
 /* At OMAP1610 Innovator the Ethernet is directly connected to CS1 */
 #define OMAP1610_ETHR_START	0x04000300
···
 		I2C_BOARD_INFO("tps65010", 0x48),
 		.platform_data	= &tps_board,
 	}, {
-		I2C_BOARD_INFO("isp1301_omap", 0x2d),
+		.type		= "isp1301_omap",
+		.addr		= 0x2d,
+		.dev_name	= "isp1301",
+	},
+};
+
+static struct gpiod_lookup_table isp1301_gpiod_table = {
+	.dev_id		= "isp1301",
+	.table		= {
+		/* Active low since the irq triggers on falling edge */
+		GPIO_LOOKUP(OMAP_GPIO_LABEL, 2,
+			    NULL, GPIO_ACTIVE_LOW),
+		{ },
 	},
 };
···
 	h2_smc91x_resources[1].end = gpio_to_irq(0);
 	platform_add_devices(h2_devices, ARRAY_SIZE(h2_devices));
 	omap_serial_init();
+
+	/* ISP1301 IRQ wired at M14 */
+	omap_cfg_reg(M14_1510_GPIO2);
 	h2_i2c_board_info[0].irq = gpio_to_irq(58);
-	h2_i2c_board_info[1].irq = gpio_to_irq(2);
 	omap_register_i2c_bus(1, 100, h2_i2c_board_info,
 			ARRAY_SIZE(h2_i2c_board_info));
 	omap1_usb_init(&h2_usb_config);
arch/mips/configs/mtx1_defconfig (-1)

···
 CONFIG_USB_SERIAL_SIERRAWIRELESS=m
 CONFIG_USB_SERIAL_TI=m
 CONFIG_USB_SERIAL_CYBERJACK=m
-CONFIG_USB_SERIAL_XIRCOM=m
 CONFIG_USB_SERIAL_OPTION=m
 CONFIG_USB_SERIAL_OMNINET=m
 CONFIG_USB_EMI62=m
arch/mips/configs/rm200_defconfig (-1)

···
 CONFIG_USB_SERIAL_SAFE=m
 CONFIG_USB_SERIAL_SAFE_PADDED=y
 CONFIG_USB_SERIAL_CYBERJACK=m
-CONFIG_USB_SERIAL_XIRCOM=m
 CONFIG_USB_SERIAL_OMNINET=m
 CONFIG_USB_LEGOTOWER=m
 CONFIG_USB_LCD=m
arch/powerpc/configs/g5_defconfig (-1)

···
 CONFIG_USB_SERIAL_SAFE_PADDED=y
 CONFIG_USB_SERIAL_TI=m
 CONFIG_USB_SERIAL_CYBERJACK=m
-CONFIG_USB_SERIAL_XIRCOM=m
 CONFIG_USB_SERIAL_OMNINET=m
 CONFIG_USB_APPLEDISPLAY=m
 CONFIG_EXT2_FS=y
arch/powerpc/configs/ppc6xx_defconfig (-1)

···
 CONFIG_USB_SERIAL_SIERRAWIRELESS=m
 CONFIG_USB_SERIAL_TI=m
 CONFIG_USB_SERIAL_CYBERJACK=m
-CONFIG_USB_SERIAL_XIRCOM=m
 CONFIG_USB_SERIAL_OPTION=m
 CONFIG_USB_SERIAL_OMNINET=m
 CONFIG_USB_SERIAL_DEBUG=m
drivers/net/thunderbolt.c (+1 -1)

···
 	eof_mask = BIT(TBIP_PDF_FRAME_END);
 
 	ring = tb_ring_alloc_rx(xd->tb->nhi, -1, TBNET_RING_SIZE,
-				RING_FLAG_FRAME, sof_mask, eof_mask,
+				RING_FLAG_FRAME, 0, sof_mask, eof_mask,
 				tbnet_start_poll, net);
 	if (!ring) {
 		netdev_err(dev, "failed to allocate Rx ring\n");
drivers/platform/chrome/cros_ec_typec.c (+1 -2)

···
 	if (pd_ctrl->control_flags & USB_PD_CTRL_ACTIVE_LINK_UNIDIR)
 		data.cable_mode |= TBT_CABLE_LINK_TRAINING;
 
-	if (pd_ctrl->cable_gen)
-		data.cable_mode |= TBT_CABLE_ROUNDED;
+	data.cable_mode |= TBT_SET_CABLE_ROUNDED(pd_ctrl->cable_gen);
 
 	/* Enter Mode VDO */
 	data.enter_vdo = TBT_SET_CABLE_SPEED(pd_ctrl->cable_speed);
drivers/thunderbolt/Kconfig (+13)

···
 	bool "KUnit tests"
 	depends on KUNIT=y
 
+config USB4_DMA_TEST
+	tristate "DMA traffic test driver"
+	depends on DEBUG_FS
+	help
+	  This allows sending and receiving DMA traffic through loopback
+	  connection. Loopback connection can be done by either special
+	  dongle that has TX/RX lines crossed, or by simply connecting a
+	  cable back to the host. Only enable this if you know what you
+	  are doing. Normal users and distro kernels should say N here.
+
+	  To compile this driver a module, choose M here. The module will be
+	  called thunderbolt_dma_test.
+
 endif # USB4
drivers/thunderbolt/Makefile (+3)

···
 thunderbolt-${CONFIG_ACPI} += acpi.o
 thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o
 thunderbolt-${CONFIG_USB4_KUNIT_TEST} += test.o
+
+thunderbolt_dma_test-${CONFIG_USB4_DMA_TEST} += dma_test.o
+obj-$(CONFIG_USB4_DMA_TEST) += thunderbolt_dma_test.o
drivers/thunderbolt/ctl.c (+5 -2)

···
 	if (!ctl->tx)
 		goto err;
 
-	ctl->rx = tb_ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND, 0xffff,
-				   0xffff, NULL, NULL);
+	ctl->rx = tb_ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND, 0, 0xffff,
+				   0xffff, NULL, NULL);
 	if (!ctl->rx)
 		goto err;
···
 
 	if (res->tb_error == TB_CFG_ERROR_LOCK)
 		return -EACCES;
+	else if (res->tb_error == TB_CFG_ERROR_PORT_NOT_CONNECTED)
+		return -ENOTCONN;
+
 	return -EIO;
drivers/thunderbolt/debugfs.c (+24)

···
 	debugfs_remove_recursive(sw->debugfs_dir);
 }
 
+/**
+ * tb_service_debugfs_init() - Add debugfs directory for service
+ * @svc: Thunderbolt service pointer
+ *
+ * Adds debugfs directory for service.
+ */
+void tb_service_debugfs_init(struct tb_service *svc)
+{
+	svc->debugfs_dir = debugfs_create_dir(dev_name(&svc->dev),
+					      tb_debugfs_root);
+}
+
+/**
+ * tb_service_debugfs_remove() - Remove service debugfs directory
+ * @svc: Thunderbolt service pointer
+ *
+ * Removes the previously created debugfs directory for @svc.
+ */
+void tb_service_debugfs_remove(struct tb_service *svc)
+{
+	debugfs_remove_recursive(svc->debugfs_dir);
+	svc->debugfs_dir = NULL;
+}
+
 void tb_debugfs_init(void)
 {
 	tb_debugfs_root = debugfs_create_dir("thunderbolt", NULL);
drivers/thunderbolt/dma_test.c (+736, new file; listing cut off in this capture)

+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMA traffic test driver
+ *
+ * Copyright (C) 2020, Intel Corporation
+ * Authors: Isaac Hazan <isaac.hazan@intel.com>
+ *	    Mika Westerberg <mika.westerberg@linux.intel.com>
+ */
+
+#include <linux/acpi.h>
+#include <linux/completion.h>
+#include <linux/debugfs.h>
+#include <linux/module.h>
+#include <linux/sizes.h>
+#include <linux/thunderbolt.h>
+
+#define DMA_TEST_HOPID			8
+#define DMA_TEST_TX_RING_SIZE		64
+#define DMA_TEST_RX_RING_SIZE		256
+#define DMA_TEST_FRAME_SIZE		SZ_4K
+#define DMA_TEST_DATA_PATTERN		0x0123456789abcdefLL
+#define DMA_TEST_MAX_PACKETS		1000
+
+enum dma_test_frame_pdf {
+	DMA_TEST_PDF_FRAME_START = 1,
+	DMA_TEST_PDF_FRAME_END,
+};
+
+struct dma_test_frame {
+	struct dma_test *dma_test;
+	void *data;
+	struct ring_frame frame;
+};
+
+enum dma_test_test_error {
+	DMA_TEST_NO_ERROR,
+	DMA_TEST_INTERRUPTED,
+	DMA_TEST_BUFFER_ERROR,
+	DMA_TEST_DMA_ERROR,
+	DMA_TEST_CONFIG_ERROR,
+	DMA_TEST_SPEED_ERROR,
+	DMA_TEST_WIDTH_ERROR,
+	DMA_TEST_BONDING_ERROR,
+	DMA_TEST_PACKET_ERROR,
+};
+
+static const char * const dma_test_error_names[] = {
+	[DMA_TEST_NO_ERROR] = "no errors",
+	[DMA_TEST_INTERRUPTED] = "interrupted by signal",
+	[DMA_TEST_BUFFER_ERROR] = "no memory for packet buffers",
+	[DMA_TEST_DMA_ERROR] = "DMA ring setup failed",
+	[DMA_TEST_CONFIG_ERROR] = "configuration is not valid",
+	[DMA_TEST_SPEED_ERROR] = "unexpected link speed",
+	[DMA_TEST_WIDTH_ERROR] = "unexpected link width",
+	[DMA_TEST_BONDING_ERROR] = "lane bonding configuration error",
+	[DMA_TEST_PACKET_ERROR] = "packet check failed",
+};
+
+enum dma_test_result {
+	DMA_TEST_NOT_RUN,
+	DMA_TEST_SUCCESS,
+	DMA_TEST_FAIL,
+};
+
+static const char * const dma_test_result_names[] = {
+	[DMA_TEST_NOT_RUN] = "not run",
+	[DMA_TEST_SUCCESS] = "success",
+	[DMA_TEST_FAIL] = "failed",
+};
+
+/**
+ * struct dma_test - DMA test device driver private data
+ * @svc: XDomain service the driver is bound to
+ * @xd: XDomain the service belongs to
+ * @rx_ring: Software ring holding RX frames
+ * @tx_ring: Software ring holding TX frames
+ * @packets_to_send: Number of packets to send
+ * @packets_to_receive: Number of packets to receive
+ * @packets_sent: Actual number of packets sent
+ * @packets_received: Actual number of packets received
+ * @link_speed: Expected link speed (Gb/s), %0 to use whatever is negotiated
+ * @link_width: Expected link width (Gb/s), %0 to use whatever is negotiated
+ * @crc_errors: Number of CRC errors during the test run
+ * @buffer_overflow_errors: Number of buffer overflow errors during the test
+ *			    run
+ * @result: Result of the last run
+ * @error_code: Error code of the last run
+ * @complete: Used to wait for the Rx to complete
+ * @lock: Lock serializing access to this structure
+ * @debugfs_dir: dentry of this dma_test
+ */
+struct dma_test {
+	const struct tb_service *svc;
+	struct tb_xdomain *xd;
+	struct tb_ring *rx_ring;
+	struct tb_ring *tx_ring;
+	unsigned int packets_to_send;
+	unsigned int packets_to_receive;
+	unsigned int packets_sent;
+	unsigned int packets_received;
+	unsigned int link_speed;
+	unsigned int link_width;
+	unsigned int crc_errors;
+	unsigned int buffer_overflow_errors;
+	enum dma_test_result result;
+	enum dma_test_test_error error_code;
+	struct completion complete;
+	struct mutex lock;
+	struct dentry *debugfs_dir;
+};
+
+/* DMA test property directory UUID: 3188cd10-6523-4a5a-a682-fdca07a248d8 */
+static const uuid_t dma_test_dir_uuid =
+	UUID_INIT(0x3188cd10, 0x6523, 0x4a5a,
+		  0xa6, 0x82, 0xfd, 0xca, 0x07, 0xa2, 0x48, 0xd8);
+
+static struct tb_property_dir *dma_test_dir;
+static void *dma_test_pattern;
+
+static void dma_test_free_rings(struct dma_test *dt)
+{
+	if (dt->rx_ring) {
+		tb_ring_free(dt->rx_ring);
+		dt->rx_ring = NULL;
+	}
+	if (dt->tx_ring) {
+		tb_ring_free(dt->tx_ring);
+		dt->tx_ring = NULL;
+	}
+}
+
+static int dma_test_start_rings(struct dma_test *dt)
+{
+	unsigned int flags = RING_FLAG_FRAME;
+	struct tb_xdomain *xd = dt->xd;
+	int ret, e2e_tx_hop = 0;
+	struct tb_ring *ring;
+
+	/*
+	 * If we are both sender and receiver (traffic goes over a
+	 * special loopback dongle) enable E2E flow control. This avoids
+	 * losing packets.
+	 */
+	if (dt->packets_to_send && dt->packets_to_receive)
+		flags |= RING_FLAG_E2E;
+
+	if (dt->packets_to_send) {
+		ring = tb_ring_alloc_tx(xd->tb->nhi, -1, DMA_TEST_TX_RING_SIZE,
+					flags);
+		if (!ring)
+			return -ENOMEM;
+
+		dt->tx_ring = ring;
+		e2e_tx_hop = ring->hop;
+	}
+
+	if (dt->packets_to_receive) {
+		u16 sof_mask, eof_mask;
+
+		sof_mask = BIT(DMA_TEST_PDF_FRAME_START);
+		eof_mask = BIT(DMA_TEST_PDF_FRAME_END);
+
+		ring = tb_ring_alloc_rx(xd->tb->nhi, -1, DMA_TEST_RX_RING_SIZE,
+					flags, e2e_tx_hop, sof_mask, eof_mask,
+					NULL, NULL);
+		if (!ring) {
+			dma_test_free_rings(dt);
+			return -ENOMEM;
+		}
+
+		dt->rx_ring = ring;
+	}
+
+	ret = tb_xdomain_enable_paths(dt->xd, DMA_TEST_HOPID,
+				      dt->tx_ring ? dt->tx_ring->hop : 0,
+				      DMA_TEST_HOPID,
+				      dt->rx_ring ? dt->rx_ring->hop : 0);
+	if (ret) {
+		dma_test_free_rings(dt);
+		return ret;
+	}
+
+	if (dt->tx_ring)
+		tb_ring_start(dt->tx_ring);
+	if (dt->rx_ring)
+		tb_ring_start(dt->rx_ring);
+
+	return 0;
+}
+
+static void dma_test_stop_rings(struct dma_test *dt)
+{
+	if (dt->rx_ring)
+		tb_ring_stop(dt->rx_ring);
+	if (dt->tx_ring)
+		tb_ring_stop(dt->tx_ring);
+
+	if (tb_xdomain_disable_paths(dt->xd))
+		dev_warn(&dt->svc->dev, "failed to disable DMA paths\n");
+
+	dma_test_free_rings(dt);
+}
+
+static void dma_test_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
+				 bool canceled)
+{
+	struct dma_test_frame *tf = container_of(frame, typeof(*tf), frame);
+	struct dma_test *dt = tf->dma_test;
+	struct device *dma_dev = tb_ring_dma_device(dt->rx_ring);
+
+	dma_unmap_single(dma_dev, tf->frame.buffer_phy, DMA_TEST_FRAME_SIZE,
+			 DMA_FROM_DEVICE);
+	kfree(tf->data);
+
+	if (canceled) {
+		kfree(tf);
+		return;
+	}
+
+	dt->packets_received++;
+	dev_dbg(&dt->svc->dev, "packet %u/%u received\n", dt->packets_received,
+		dt->packets_to_receive);
+
+	if (tf->frame.flags & RING_DESC_CRC_ERROR)
+		dt->crc_errors++;
+	if (tf->frame.flags & RING_DESC_BUFFER_OVERRUN)
+		dt->buffer_overflow_errors++;
+
+	kfree(tf);
+
+	if (dt->packets_received == dt->packets_to_receive)
+		complete(&dt->complete);
+}
+
+static int dma_test_submit_rx(struct dma_test *dt, size_t npackets)
+{
+	struct device *dma_dev = tb_ring_dma_device(dt->rx_ring);
+	int i;
+
+	for (i = 0; i < npackets; i++) {
+		struct dma_test_frame *tf;
+		dma_addr_t dma_addr;
+
+		tf = kzalloc(sizeof(*tf), GFP_KERNEL);
+		if (!tf)
+			return -ENOMEM;
+
+		tf->data = kzalloc(DMA_TEST_FRAME_SIZE, GFP_KERNEL);
+		if (!tf->data) {
+			kfree(tf);
+			return -ENOMEM;
+		}
+
+		dma_addr = dma_map_single(dma_dev, tf->data, DMA_TEST_FRAME_SIZE,
+					  DMA_FROM_DEVICE);
+		if (dma_mapping_error(dma_dev, dma_addr)) {
+			kfree(tf->data);
+			kfree(tf);
+			return -ENOMEM;
+		}
+
+		tf->frame.buffer_phy = dma_addr;
+		tf->frame.callback = dma_test_rx_callback;
+		tf->dma_test = dt;
+		INIT_LIST_HEAD(&tf->frame.list);
+
+		tb_ring_rx(dt->rx_ring, &tf->frame);
+	}
+
+	return 0;
+}
+
+static void dma_test_tx_callback(struct tb_ring *ring, struct ring_frame *frame,
+				 bool canceled)
+{
+	struct dma_test_frame *tf = container_of(frame, typeof(*tf), frame);
+	struct dma_test *dt = tf->dma_test;
+	struct device *dma_dev = tb_ring_dma_device(dt->tx_ring);
+
+	dma_unmap_single(dma_dev, tf->frame.buffer_phy, DMA_TEST_FRAME_SIZE,
+			 DMA_TO_DEVICE);
+	kfree(tf->data);
+	kfree(tf);
+}
+
+static int dma_test_submit_tx(struct dma_test *dt, size_t npackets)
+{
+	struct device *dma_dev = tb_ring_dma_device(dt->tx_ring);
+	int i;
+
+	for (i = 0; i < npackets; i++) {
+		struct dma_test_frame *tf;
+		dma_addr_t dma_addr;
+
+		tf = kzalloc(sizeof(*tf), GFP_KERNEL);
+		if (!tf)
+			return -ENOMEM;
+
+		tf->frame.size = 0; /* means 4096 */
+		tf->dma_test = dt;
+
+		tf->data = kzalloc(DMA_TEST_FRAME_SIZE, GFP_KERNEL);
+		if (!tf->data) {
+			kfree(tf);
+			return -ENOMEM;
+		}
+
+		memcpy(tf->data, dma_test_pattern, DMA_TEST_FRAME_SIZE);
+
+		dma_addr = dma_map_single(dma_dev, tf->data, DMA_TEST_FRAME_SIZE,
+					  DMA_TO_DEVICE);
+		if (dma_mapping_error(dma_dev, dma_addr)) {
+			kfree(tf->data);
+			kfree(tf);
+			return -ENOMEM;
+		}
+
+		tf->frame.buffer_phy = dma_addr;
+		tf->frame.callback = dma_test_tx_callback;
+		tf->frame.sof = DMA_TEST_PDF_FRAME_START;
+		tf->frame.eof = DMA_TEST_PDF_FRAME_END;
+		INIT_LIST_HEAD(&tf->frame.list);
+
+		dt->packets_sent++;
+		dev_dbg(&dt->svc->dev, "packet %u/%u sent\n", dt->packets_sent,
+			dt->packets_to_send);
+
+		tb_ring_tx(dt->tx_ring, &tf->frame);
+	}
+
+	return 0;
+}
+
+#define DMA_TEST_DEBUGFS_ATTR(__fops, __get, __validate, __set)	\
+static int __fops ## _show(void *data, u64 *val)		\
+{								\
+	struct tb_service *svc = data;				\
+	struct dma_test *dt = tb_service_get_drvdata(svc);	\
+	int ret;						\
+								\
+	ret = mutex_lock_interruptible(&dt->lock);		\
+	if (ret)						\
+		return ret;					\
+	__get(dt, val);						\
+	mutex_unlock(&dt->lock);				\
+	return 0;						\
+}								\
+static int __fops ## _store(void *data, u64 val)		\
+{								\
+	struct tb_service *svc = data;				\
+	struct dma_test *dt = tb_service_get_drvdata(svc);	\
+	int ret;						\
+								\
+	ret = __validate(val);					\
+	if (ret)						\
+		return ret;					\
+	ret = mutex_lock_interruptible(&dt->lock);		\
+	if (ret)						\
+		return ret;					\
+	__set(dt, val);						\
+	mutex_unlock(&dt->lock);				\
+	return 0;						\
+}								\
+DEFINE_DEBUGFS_ATTRIBUTE(__fops ## _fops, __fops ## _show,	\
+			 __fops ## _store, "%llu\n")
+
+static void lanes_get(const struct dma_test *dt, u64 *val)
+{
+	*val = dt->link_width;
+}
+
+static int lanes_validate(u64 val)
+{
+	return val > 2 ?
-EINVAL : 0; 375 + } 376 + 377 + static void lanes_set(struct dma_test *dt, u64 val) 378 + { 379 + dt->link_width = val; 380 + } 381 + DMA_TEST_DEBUGFS_ATTR(lanes, lanes_get, lanes_validate, lanes_set); 382 + 383 + static void speed_get(const struct dma_test *dt, u64 *val) 384 + { 385 + *val = dt->link_speed; 386 + } 387 + 388 + static int speed_validate(u64 val) 389 + { 390 + switch (val) { 391 + case 20: 392 + case 10: 393 + case 0: 394 + return 0; 395 + default: 396 + return -EINVAL; 397 + } 398 + } 399 + 400 + static void speed_set(struct dma_test *dt, u64 val) 401 + { 402 + dt->link_speed = val; 403 + } 404 + DMA_TEST_DEBUGFS_ATTR(speed, speed_get, speed_validate, speed_set); 405 + 406 + static void packets_to_receive_get(const struct dma_test *dt, u64 *val) 407 + { 408 + *val = dt->packets_to_receive; 409 + } 410 + 411 + static int packets_to_receive_validate(u64 val) 412 + { 413 + return val > DMA_TEST_MAX_PACKETS ? -EINVAL : 0; 414 + } 415 + 416 + static void packets_to_receive_set(struct dma_test *dt, u64 val) 417 + { 418 + dt->packets_to_receive = val; 419 + } 420 + DMA_TEST_DEBUGFS_ATTR(packets_to_receive, packets_to_receive_get, 421 + packets_to_receive_validate, packets_to_receive_set); 422 + 423 + static void packets_to_send_get(const struct dma_test *dt, u64 *val) 424 + { 425 + *val = dt->packets_to_send; 426 + } 427 + 428 + static int packets_to_send_validate(u64 val) 429 + { 430 + return val > DMA_TEST_MAX_PACKETS ? 
-EINVAL : 0; 431 + } 432 + 433 + static void packets_to_send_set(struct dma_test *dt, u64 val) 434 + { 435 + dt->packets_to_send = val; 436 + } 437 + DMA_TEST_DEBUGFS_ATTR(packets_to_send, packets_to_send_get, 438 + packets_to_send_validate, packets_to_send_set); 439 + 440 + static int dma_test_set_bonding(struct dma_test *dt) 441 + { 442 + switch (dt->link_width) { 443 + case 2: 444 + return tb_xdomain_lane_bonding_enable(dt->xd); 445 + case 1: 446 + tb_xdomain_lane_bonding_disable(dt->xd); 447 + fallthrough; 448 + default: 449 + return 0; 450 + } 451 + } 452 + 453 + static bool dma_test_validate_config(struct dma_test *dt) 454 + { 455 + if (!dt->packets_to_send && !dt->packets_to_receive) 456 + return false; 457 + if (dt->packets_to_send && dt->packets_to_receive && 458 + dt->packets_to_send != dt->packets_to_receive) 459 + return false; 460 + return true; 461 + } 462 + 463 + static void dma_test_check_errors(struct dma_test *dt, int ret) 464 + { 465 + if (!dt->error_code) { 466 + if (dt->link_speed && dt->xd->link_speed != dt->link_speed) { 467 + dt->error_code = DMA_TEST_SPEED_ERROR; 468 + } else if (dt->link_width && 469 + dt->xd->link_width != dt->link_width) { 470 + dt->error_code = DMA_TEST_WIDTH_ERROR; 471 + } else if (dt->packets_to_send != dt->packets_sent || 472 + dt->packets_to_receive != dt->packets_received || 473 + dt->crc_errors || dt->buffer_overflow_errors) { 474 + dt->error_code = DMA_TEST_PACKET_ERROR; 475 + } else { 476 + return; 477 + } 478 + } 479 + 480 + dt->result = DMA_TEST_FAIL; 481 + } 482 + 483 + static int test_store(void *data, u64 val) 484 + { 485 + struct tb_service *svc = data; 486 + struct dma_test *dt = tb_service_get_drvdata(svc); 487 + int ret; 488 + 489 + if (val != 1) 490 + return -EINVAL; 491 + 492 + ret = mutex_lock_interruptible(&dt->lock); 493 + if (ret) 494 + return ret; 495 + 496 + dt->packets_sent = 0; 497 + dt->packets_received = 0; 498 + dt->crc_errors = 0; 499 + dt->buffer_overflow_errors = 0; 500 + dt->result = 
DMA_TEST_SUCCESS; 501 + dt->error_code = DMA_TEST_NO_ERROR; 502 + 503 + dev_dbg(&svc->dev, "DMA test starting\n"); 504 + if (dt->link_speed) 505 + dev_dbg(&svc->dev, "link_speed: %u Gb/s\n", dt->link_speed); 506 + if (dt->link_width) 507 + dev_dbg(&svc->dev, "link_width: %u\n", dt->link_width); 508 + dev_dbg(&svc->dev, "packets_to_send: %u\n", dt->packets_to_send); 509 + dev_dbg(&svc->dev, "packets_to_receive: %u\n", dt->packets_to_receive); 510 + 511 + if (!dma_test_validate_config(dt)) { 512 + dev_err(&svc->dev, "invalid test configuration\n"); 513 + dt->error_code = DMA_TEST_CONFIG_ERROR; 514 + goto out_unlock; 515 + } 516 + 517 + ret = dma_test_set_bonding(dt); 518 + if (ret) { 519 + dev_err(&svc->dev, "failed to set lanes\n"); 520 + dt->error_code = DMA_TEST_BONDING_ERROR; 521 + goto out_unlock; 522 + } 523 + 524 + ret = dma_test_start_rings(dt); 525 + if (ret) { 526 + dev_err(&svc->dev, "failed to enable DMA rings\n"); 527 + dt->error_code = DMA_TEST_DMA_ERROR; 528 + goto out_unlock; 529 + } 530 + 531 + if (dt->packets_to_receive) { 532 + reinit_completion(&dt->complete); 533 + ret = dma_test_submit_rx(dt, dt->packets_to_receive); 534 + if (ret) { 535 + dev_err(&svc->dev, "failed to submit receive buffers\n"); 536 + dt->error_code = DMA_TEST_BUFFER_ERROR; 537 + goto out_stop; 538 + } 539 + } 540 + 541 + if (dt->packets_to_send) { 542 + ret = dma_test_submit_tx(dt, dt->packets_to_send); 543 + if (ret) { 544 + dev_err(&svc->dev, "failed to submit transmit buffers\n"); 545 + dt->error_code = DMA_TEST_BUFFER_ERROR; 546 + goto out_stop; 547 + } 548 + } 549 + 550 + if (dt->packets_to_receive) { 551 + ret = wait_for_completion_interruptible(&dt->complete); 552 + if (ret) { 553 + dt->error_code = DMA_TEST_INTERRUPTED; 554 + goto out_stop; 555 + } 556 + } 557 + 558 + out_stop: 559 + dma_test_stop_rings(dt); 560 + out_unlock: 561 + dma_test_check_errors(dt, ret); 562 + mutex_unlock(&dt->lock); 563 + 564 + dev_dbg(&svc->dev, "DMA test %s\n", 
dma_test_result_names[dt->result]); 565 + return ret; 566 + } 567 + DEFINE_DEBUGFS_ATTRIBUTE(test_fops, NULL, test_store, "%llu\n"); 568 + 569 + static int status_show(struct seq_file *s, void *not_used) 570 + { 571 + struct tb_service *svc = s->private; 572 + struct dma_test *dt = tb_service_get_drvdata(svc); 573 + int ret; 574 + 575 + ret = mutex_lock_interruptible(&dt->lock); 576 + if (ret) 577 + return ret; 578 + 579 + seq_printf(s, "result: %s\n", dma_test_result_names[dt->result]); 580 + if (dt->result == DMA_TEST_NOT_RUN) 581 + goto out_unlock; 582 + 583 + seq_printf(s, "packets received: %u\n", dt->packets_received); 584 + seq_printf(s, "packets sent: %u\n", dt->packets_sent); 585 + seq_printf(s, "CRC errors: %u\n", dt->crc_errors); 586 + seq_printf(s, "buffer overflow errors: %u\n", 587 + dt->buffer_overflow_errors); 588 + seq_printf(s, "error: %s\n", dma_test_error_names[dt->error_code]); 589 + 590 + out_unlock: 591 + mutex_unlock(&dt->lock); 592 + return 0; 593 + } 594 + DEFINE_SHOW_ATTRIBUTE(status); 595 + 596 + static void dma_test_debugfs_init(struct tb_service *svc) 597 + { 598 + struct dma_test *dt = tb_service_get_drvdata(svc); 599 + 600 + dt->debugfs_dir = debugfs_create_dir("dma_test", svc->debugfs_dir); 601 + 602 + debugfs_create_file("lanes", 0600, dt->debugfs_dir, svc, &lanes_fops); 603 + debugfs_create_file("speed", 0600, dt->debugfs_dir, svc, &speed_fops); 604 + debugfs_create_file("packets_to_receive", 0600, dt->debugfs_dir, svc, 605 + &packets_to_receive_fops); 606 + debugfs_create_file("packets_to_send", 0600, dt->debugfs_dir, svc, 607 + &packets_to_send_fops); 608 + debugfs_create_file("status", 0400, dt->debugfs_dir, svc, &status_fops); 609 + debugfs_create_file("test", 0200, dt->debugfs_dir, svc, &test_fops); 610 + } 611 + 612 + static int dma_test_probe(struct tb_service *svc, const struct tb_service_id *id) 613 + { 614 + struct tb_xdomain *xd = tb_service_parent(svc); 615 + struct dma_test *dt; 616 + 617 + dt = 
devm_kzalloc(&svc->dev, sizeof(*dt), GFP_KERNEL); 618 + if (!dt) 619 + return -ENOMEM; 620 + 621 + dt->svc = svc; 622 + dt->xd = xd; 623 + mutex_init(&dt->lock); 624 + init_completion(&dt->complete); 625 + 626 + tb_service_set_drvdata(svc, dt); 627 + dma_test_debugfs_init(svc); 628 + 629 + return 0; 630 + } 631 + 632 + static void dma_test_remove(struct tb_service *svc) 633 + { 634 + struct dma_test *dt = tb_service_get_drvdata(svc); 635 + 636 + mutex_lock(&dt->lock); 637 + debugfs_remove_recursive(dt->debugfs_dir); 638 + mutex_unlock(&dt->lock); 639 + } 640 + 641 + static int __maybe_unused dma_test_suspend(struct device *dev) 642 + { 643 + /* 644 + * No need to do anything special here. If userspace is writing 645 + * to the test attribute when suspend started, it comes out from 646 + * wait_for_completion_interruptible() with -ERESTARTSYS and the 647 + * DMA test fails tearing down the rings. Once userspace is 648 + * thawed the kernel restarts the write syscall effectively 649 + * re-running the test. 
650 + */ 651 + return 0; 652 + } 653 + 654 + static int __maybe_unused dma_test_resume(struct device *dev) 655 + { 656 + return 0; 657 + } 658 + 659 + static const struct dev_pm_ops dma_test_pm_ops = { 660 + SET_SYSTEM_SLEEP_PM_OPS(dma_test_suspend, dma_test_resume) 661 + }; 662 + 663 + static const struct tb_service_id dma_test_ids[] = { 664 + { TB_SERVICE("dma_test", 1) }, 665 + { }, 666 + }; 667 + MODULE_DEVICE_TABLE(tbsvc, dma_test_ids); 668 + 669 + static struct tb_service_driver dma_test_driver = { 670 + .driver = { 671 + .owner = THIS_MODULE, 672 + .name = "thunderbolt_dma_test", 673 + .pm = &dma_test_pm_ops, 674 + }, 675 + .probe = dma_test_probe, 676 + .remove = dma_test_remove, 677 + .id_table = dma_test_ids, 678 + }; 679 + 680 + static int __init dma_test_init(void) 681 + { 682 + u64 data_value = DMA_TEST_DATA_PATTERN; 683 + int i, ret; 684 + 685 + dma_test_pattern = kmalloc(DMA_TEST_FRAME_SIZE, GFP_KERNEL); 686 + if (!dma_test_pattern) 687 + return -ENOMEM; 688 + 689 + for (i = 0; i < DMA_TEST_FRAME_SIZE / sizeof(data_value); i++) 690 + ((u32 *)dma_test_pattern)[i] = data_value++; 691 + 692 + dma_test_dir = tb_property_create_dir(&dma_test_dir_uuid); 693 + if (!dma_test_dir) { 694 + ret = -ENOMEM; 695 + goto err_free_pattern; 696 + } 697 + 698 + tb_property_add_immediate(dma_test_dir, "prtcid", 1); 699 + tb_property_add_immediate(dma_test_dir, "prtcvers", 1); 700 + tb_property_add_immediate(dma_test_dir, "prtcrevs", 0); 701 + tb_property_add_immediate(dma_test_dir, "prtcstns", 0); 702 + 703 + ret = tb_register_property_dir("dma_test", dma_test_dir); 704 + if (ret) 705 + goto err_free_dir; 706 + 707 + ret = tb_register_service_driver(&dma_test_driver); 708 + if (ret) 709 + goto err_unregister_dir; 710 + 711 + return 0; 712 + 713 + err_unregister_dir: 714 + tb_unregister_property_dir("dma_test", dma_test_dir); 715 + err_free_dir: 716 + tb_property_free_dir(dma_test_dir); 717 + err_free_pattern: 718 + kfree(dma_test_pattern); 719 + 720 + return ret; 721 + 
} 722 + module_init(dma_test_init); 723 + 724 + static void __exit dma_test_exit(void) 725 + { 726 + tb_unregister_service_driver(&dma_test_driver); 727 + tb_unregister_property_dir("dma_test", dma_test_dir); 728 + tb_property_free_dir(dma_test_dir); 729 + kfree(dma_test_pattern); 730 + } 731 + module_exit(dma_test_exit); 732 + 733 + MODULE_AUTHOR("Isaac Hazan <isaac.hazan@intel.com>"); 734 + MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>"); 735 + MODULE_DESCRIPTION("DMA traffic test driver"); 736 + MODULE_LICENSE("GPL v2");
+229 -11
drivers/thunderbolt/icm.c
··· 49 49 MODULE_PARM_DESC(start_icm, "start ICM firmware if it is not running (default: false)"); 50 50 51 51 /** 52 + * struct usb4_switch_nvm_auth - Holds USB4 NVM_AUTH status 53 + * @reply: Reply from ICM firmware is placed here 54 + * @request: Request that is sent to ICM firmware 55 + * @icm: Pointer to ICM private data 56 + */ 57 + struct usb4_switch_nvm_auth { 58 + struct icm_usb4_switch_op_response reply; 59 + struct icm_usb4_switch_op request; 60 + struct icm *icm; 61 + }; 62 + 63 + /** 52 64 * struct icm - Internal connection manager private data 53 65 * @request_lock: Makes sure only one message is sent to ICM at a time 54 66 * @rescan_work: Work used to rescan the surviving switches after resume ··· 73 61 * @max_boot_acl: Maximum number of preboot ACL entries (%0 if not supported) 74 62 * @rpm: Does the controller support runtime PM (RTD3) 75 63 * @can_upgrade_nvm: Can the NVM firmware be upgraded on this controller 64 + * @proto_version: Firmware protocol version 65 + * @last_nvm_auth: Last USB4 router NVM_AUTH result (or %NULL if not set) 76 66 * @veto: Is RTD3 veto in effect 77 67 * @is_supported: Checks if we can support ICM on this controller 78 68 * @cio_reset: Trigger CIO reset ··· 93 79 struct mutex request_lock; 94 80 struct delayed_work rescan_work; 95 81 struct pci_dev *upstream_port; 96 - size_t max_boot_acl; 97 82 int vnd_cap; 98 83 bool safe_mode; 84 + size_t max_boot_acl; 99 85 bool rpm; 100 86 bool can_upgrade_nvm; 87 + u8 proto_version; 88 + struct usb4_switch_nvm_auth *last_nvm_auth; 101 89 bool veto; 102 90 bool (*is_supported)(struct tb *tb); 103 91 int (*cio_reset)(struct tb *tb); ··· 108 92 void (*save_devices)(struct tb *tb); 109 93 int (*driver_ready)(struct tb *tb, 110 94 enum tb_security_level *security_level, 111 - size_t *nboot_acl, bool *rpm); 95 + u8 *proto_version, size_t *nboot_acl, bool *rpm); 112 96 void (*set_uuid)(struct tb *tb); 113 97 void (*device_connected)(struct tb *tb, 114 98 const struct icm_pkg_header *hdr); 
··· 453 437 454 438 static int 455 439 icm_fr_driver_ready(struct tb *tb, enum tb_security_level *security_level, 456 - size_t *nboot_acl, bool *rpm) 440 + u8 *proto_version, size_t *nboot_acl, bool *rpm) 457 441 { 458 442 struct icm_fr_pkg_driver_ready_response reply; 459 443 struct icm_pkg_driver_ready request = { ··· 886 870 return; 887 871 } 888 872 873 + pm_runtime_get_sync(sw->dev.parent); 874 + 889 875 remove_switch(sw); 876 + 877 + pm_runtime_mark_last_busy(sw->dev.parent); 878 + pm_runtime_put_autosuspend(sw->dev.parent); 879 + 890 880 tb_switch_put(sw); 891 881 } 892 882 ··· 1008 986 1009 987 static int 1010 988 icm_tr_driver_ready(struct tb *tb, enum tb_security_level *security_level, 1011 - size_t *nboot_acl, bool *rpm) 989 + u8 *proto_version, size_t *nboot_acl, bool *rpm) 1012 990 { 1013 991 struct icm_tr_pkg_driver_ready_response reply; 1014 992 struct icm_pkg_driver_ready request = { ··· 1024 1002 1025 1003 if (security_level) 1026 1004 *security_level = reply.info & ICM_TR_INFO_SLEVEL_MASK; 1005 + if (proto_version) 1006 + *proto_version = (reply.info & ICM_TR_INFO_PROTO_VERSION_MASK) >> 1007 + ICM_TR_INFO_PROTO_VERSION_SHIFT; 1027 1008 if (nboot_acl) 1028 1009 *nboot_acl = (reply.info & ICM_TR_INFO_BOOT_ACL_MASK) >> 1029 1010 ICM_TR_INFO_BOOT_ACL_SHIFT; ··· 1305 1280 tb_warn(tb, "no switch exists at %llx, ignoring\n", route); 1306 1281 return; 1307 1282 } 1283 + pm_runtime_get_sync(sw->dev.parent); 1308 1284 1309 1285 remove_switch(sw); 1286 + 1287 + pm_runtime_mark_last_busy(sw->dev.parent); 1288 + pm_runtime_put_autosuspend(sw->dev.parent); 1289 + 1310 1290 tb_switch_put(sw); 1311 1291 } 1312 1292 ··· 1480 1450 1481 1451 static int 1482 1452 icm_ar_driver_ready(struct tb *tb, enum tb_security_level *security_level, 1483 - size_t *nboot_acl, bool *rpm) 1453 + u8 *proto_version, size_t *nboot_acl, bool *rpm) 1484 1454 { 1485 1455 struct icm_ar_pkg_driver_ready_response reply; 1486 1456 struct icm_pkg_driver_ready request = { ··· 1610 1580 1611 
1581 static int 1612 1582 icm_icl_driver_ready(struct tb *tb, enum tb_security_level *security_level, 1613 - size_t *nboot_acl, bool *rpm) 1583 + u8 *proto_version, size_t *nboot_acl, bool *rpm) 1614 1584 { 1615 1585 struct icm_tr_pkg_driver_ready_response reply; 1616 1586 struct icm_pkg_driver_ready request = { ··· 1623 1593 1, 20000); 1624 1594 if (ret) 1625 1595 return ret; 1596 + 1597 + if (proto_version) 1598 + *proto_version = (reply.info & ICM_TR_INFO_PROTO_VERSION_MASK) >> 1599 + ICM_TR_INFO_PROTO_VERSION_SHIFT; 1626 1600 1627 1601 /* Ice Lake always supports RTD3 */ 1628 1602 if (rpm) ··· 1736 1702 1737 1703 static int 1738 1704 __icm_driver_ready(struct tb *tb, enum tb_security_level *security_level, 1739 - size_t *nboot_acl, bool *rpm) 1705 + u8 *proto_version, size_t *nboot_acl, bool *rpm) 1740 1706 { 1741 1707 struct icm *icm = tb_priv(tb); 1742 1708 unsigned int retries = 50; 1743 1709 int ret; 1744 1710 1745 - ret = icm->driver_ready(tb, security_level, nboot_acl, rpm); 1711 + ret = icm->driver_ready(tb, security_level, proto_version, nboot_acl, 1712 + rpm); 1746 1713 if (ret) { 1747 1714 tb_err(tb, "failed to send driver ready to ICM\n"); 1748 1715 return ret; ··· 1953 1918 return 0; 1954 1919 } 1955 1920 1956 - ret = __icm_driver_ready(tb, &tb->security_level, &tb->nboot_acl, 1957 - &icm->rpm); 1921 + ret = __icm_driver_ready(tb, &tb->security_level, &icm->proto_version, 1922 + &tb->nboot_acl, &icm->rpm); 1958 1923 if (ret) 1959 1924 return ret; 1960 1925 ··· 1964 1929 */ 1965 1930 if (tb->nboot_acl > icm->max_boot_acl) 1966 1931 tb->nboot_acl = 0; 1932 + 1933 + if (icm->proto_version >= 3) 1934 + tb_dbg(tb, "USB4 proxy operations supported\n"); 1967 1935 1968 1936 return 0; 1969 1937 } ··· 2083 2045 * Now all existing children should be resumed, start events 2084 2046 * from ICM to get updated status. 
2085 2047 */ 2086 - __icm_driver_ready(tb, NULL, NULL, NULL); 2048 + __icm_driver_ready(tb, NULL, NULL, NULL, NULL); 2087 2049 2088 2050 /* 2089 2051 * We do not get notifications of devices that have been ··· 2162 2124 tb_switch_remove(tb->root_switch); 2163 2125 tb->root_switch = NULL; 2164 2126 nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DRV_UNLOADS, 0); 2127 + kfree(icm->last_nvm_auth); 2128 + icm->last_nvm_auth = NULL; 2165 2129 } 2166 2130 2167 2131 static int icm_disconnect_pcie_paths(struct tb *tb) 2168 2132 { 2169 2133 return nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DISCONNECT_PCIE_PATHS, 0); 2134 + } 2135 + 2136 + static void icm_usb4_switch_nvm_auth_complete(void *data) 2137 + { 2138 + struct usb4_switch_nvm_auth *auth = data; 2139 + struct icm *icm = auth->icm; 2140 + struct tb *tb = icm_to_tb(icm); 2141 + 2142 + tb_dbg(tb, "NVM_AUTH response for %llx flags %#x status %#x\n", 2143 + get_route(auth->reply.route_hi, auth->reply.route_lo), 2144 + auth->reply.hdr.flags, auth->reply.status); 2145 + 2146 + mutex_lock(&tb->lock); 2147 + if (WARN_ON(icm->last_nvm_auth)) 2148 + kfree(icm->last_nvm_auth); 2149 + icm->last_nvm_auth = auth; 2150 + mutex_unlock(&tb->lock); 2151 + } 2152 + 2153 + static int icm_usb4_switch_nvm_authenticate(struct tb *tb, u64 route) 2154 + { 2155 + struct usb4_switch_nvm_auth *auth; 2156 + struct icm *icm = tb_priv(tb); 2157 + struct tb_cfg_request *req; 2158 + int ret; 2159 + 2160 + auth = kzalloc(sizeof(*auth), GFP_KERNEL); 2161 + if (!auth) 2162 + return -ENOMEM; 2163 + 2164 + auth->icm = icm; 2165 + auth->request.hdr.code = ICM_USB4_SWITCH_OP; 2166 + auth->request.route_hi = upper_32_bits(route); 2167 + auth->request.route_lo = lower_32_bits(route); 2168 + auth->request.opcode = USB4_SWITCH_OP_NVM_AUTH; 2169 + 2170 + req = tb_cfg_request_alloc(); 2171 + if (!req) { 2172 + ret = -ENOMEM; 2173 + goto err_free_auth; 2174 + } 2175 + 2176 + req->match = icm_match; 2177 + req->copy = icm_copy; 2178 + req->request = &auth->request; 2179 + 
req->request_size = sizeof(auth->request); 2180 + req->request_type = TB_CFG_PKG_ICM_CMD; 2181 + req->response = &auth->reply; 2182 + req->npackets = 1; 2183 + req->response_size = sizeof(auth->reply); 2184 + req->response_type = TB_CFG_PKG_ICM_RESP; 2185 + 2186 + tb_dbg(tb, "NVM_AUTH request for %llx\n", route); 2187 + 2188 + mutex_lock(&icm->request_lock); 2189 + ret = tb_cfg_request(tb->ctl, req, icm_usb4_switch_nvm_auth_complete, 2190 + auth); 2191 + mutex_unlock(&icm->request_lock); 2192 + 2193 + tb_cfg_request_put(req); 2194 + if (ret) 2195 + goto err_free_auth; 2196 + return 0; 2197 + 2198 + err_free_auth: 2199 + kfree(auth); 2200 + return ret; 2201 + } 2202 + 2203 + static int icm_usb4_switch_op(struct tb_switch *sw, u16 opcode, u32 *metadata, 2204 + u8 *status, const void *tx_data, size_t tx_data_len, 2205 + void *rx_data, size_t rx_data_len) 2206 + { 2207 + struct icm_usb4_switch_op_response reply; 2208 + struct icm_usb4_switch_op request; 2209 + struct tb *tb = sw->tb; 2210 + struct icm *icm = tb_priv(tb); 2211 + u64 route = tb_route(sw); 2212 + int ret; 2213 + 2214 + /* 2215 + * USB4 router operation proxy is supported in firmware if the 2216 + * protocol version is 3 or higher. 2217 + */ 2218 + if (icm->proto_version < 3) 2219 + return -EOPNOTSUPP; 2220 + 2221 + /* 2222 + * NVM_AUTH is a special USB4 proxy operation that does not 2223 + * return immediately so handle it separately. 
2224 + */ 2225 + if (opcode == USB4_SWITCH_OP_NVM_AUTH) 2226 + return icm_usb4_switch_nvm_authenticate(tb, route); 2227 + 2228 + memset(&request, 0, sizeof(request)); 2229 + request.hdr.code = ICM_USB4_SWITCH_OP; 2230 + request.route_hi = upper_32_bits(route); 2231 + request.route_lo = lower_32_bits(route); 2232 + request.opcode = opcode; 2233 + if (metadata) 2234 + request.metadata = *metadata; 2235 + 2236 + if (tx_data_len) { 2237 + request.data_len_valid = ICM_USB4_SWITCH_DATA_VALID; 2238 + if (tx_data_len < ARRAY_SIZE(request.data)) 2239 + request.data_len_valid |= 2240 + tx_data_len & ICM_USB4_SWITCH_DATA_LEN_MASK; 2241 + memcpy(request.data, tx_data, tx_data_len * sizeof(u32)); 2242 + } 2243 + 2244 + memset(&reply, 0, sizeof(reply)); 2245 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 2246 + 1, ICM_TIMEOUT); 2247 + if (ret) 2248 + return ret; 2249 + 2250 + if (reply.hdr.flags & ICM_FLAGS_ERROR) 2251 + return -EIO; 2252 + 2253 + if (status) 2254 + *status = reply.status; 2255 + 2256 + if (metadata) 2257 + *metadata = reply.metadata; 2258 + 2259 + if (rx_data_len) 2260 + memcpy(rx_data, reply.data, rx_data_len * sizeof(u32)); 2261 + 2262 + return 0; 2263 + } 2264 + 2265 + static int icm_usb4_switch_nvm_authenticate_status(struct tb_switch *sw, 2266 + u32 *status) 2267 + { 2268 + struct usb4_switch_nvm_auth *auth; 2269 + struct tb *tb = sw->tb; 2270 + struct icm *icm = tb_priv(tb); 2271 + int ret = 0; 2272 + 2273 + if (icm->proto_version < 3) 2274 + return -EOPNOTSUPP; 2275 + 2276 + auth = icm->last_nvm_auth; 2277 + icm->last_nvm_auth = NULL; 2278 + 2279 + if (auth && auth->reply.route_hi == sw->config.route_hi && 2280 + auth->reply.route_lo == sw->config.route_lo) { 2281 + tb_dbg(tb, "NVM_AUTH found for %llx flags %#x status %#x\n", 2282 + tb_route(sw), auth->reply.hdr.flags, auth->reply.status); 2283 + if (auth->reply.hdr.flags & ICM_FLAGS_ERROR) 2284 + ret = -EIO; 2285 + else 2286 + *status = auth->reply.status; 2287 + } else { 
2288 + *status = 0; 2289 + } 2290 + 2291 + kfree(auth); 2292 + return ret; 2170 2293 } 2171 2294 2172 2295 /* Falcon Ridge */ ··· 2388 2189 .disconnect_pcie_paths = icm_disconnect_pcie_paths, 2389 2190 .approve_xdomain_paths = icm_tr_approve_xdomain_paths, 2390 2191 .disconnect_xdomain_paths = icm_tr_disconnect_xdomain_paths, 2192 + .usb4_switch_op = icm_usb4_switch_op, 2193 + .usb4_switch_nvm_authenticate_status = 2194 + icm_usb4_switch_nvm_authenticate_status, 2391 2195 }; 2392 2196 2393 2197 /* Ice Lake */ ··· 2404 2202 .handle_event = icm_handle_event, 2405 2203 .approve_xdomain_paths = icm_tr_approve_xdomain_paths, 2406 2204 .disconnect_xdomain_paths = icm_tr_disconnect_xdomain_paths, 2205 + .usb4_switch_op = icm_usb4_switch_op, 2206 + .usb4_switch_nvm_authenticate_status = 2207 + icm_usb4_switch_nvm_authenticate_status, 2407 2208 }; 2408 2209 2409 2210 struct tb *icm_probe(struct tb_nhi *nhi) ··· 2505 2300 icm->rtd3_veto = icm_icl_rtd3_veto; 2506 2301 tb->cm_ops = &icm_icl_ops; 2507 2302 break; 2303 + 2304 + case PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_4C_NHI: 2305 + icm->is_supported = icm_tgl_is_supported; 2306 + icm->get_mode = icm_ar_get_mode; 2307 + icm->driver_ready = icm_tr_driver_ready; 2308 + icm->device_connected = icm_tr_device_connected; 2309 + icm->device_disconnected = icm_tr_device_disconnected; 2310 + icm->xdomain_connected = icm_tr_xdomain_connected; 2311 + icm->xdomain_disconnected = icm_tr_xdomain_disconnected; 2312 + tb->cm_ops = &icm_tr_ops; 2313 + break; 2508 2314 } 2509 2315 2510 2316 if (!icm->is_supported || !icm->is_supported(tb)) { ··· 2523 2307 tb_domain_put(tb); 2524 2308 return NULL; 2525 2309 } 2310 + 2311 + tb_dbg(tb, "using firmware connection manager\n"); 2526 2312 2527 2313 return tb; 2528 2314 }
+32 -4
drivers/thunderbolt/nhi.c
··· 494 494 495 495 static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size, 496 496 bool transmit, unsigned int flags, 497 - u16 sof_mask, u16 eof_mask, 497 + int e2e_tx_hop, u16 sof_mask, u16 eof_mask, 498 498 void (*start_poll)(void *), 499 499 void *poll_data) 500 500 { ··· 517 517 ring->is_tx = transmit; 518 518 ring->size = size; 519 519 ring->flags = flags; 520 + ring->e2e_tx_hop = e2e_tx_hop; 520 521 ring->sof_mask = sof_mask; 521 522 ring->eof_mask = eof_mask; 522 523 ring->head = 0; ··· 562 561 struct tb_ring *tb_ring_alloc_tx(struct tb_nhi *nhi, int hop, int size, 563 562 unsigned int flags) 564 563 { 565 - return tb_ring_alloc(nhi, hop, size, true, flags, 0, 0, NULL, NULL); 564 + return tb_ring_alloc(nhi, hop, size, true, flags, 0, 0, 0, NULL, NULL); 566 565 } 567 566 EXPORT_SYMBOL_GPL(tb_ring_alloc_tx); 568 567 ··· 572 571 * @hop: HopID (ring) to allocate. Pass %-1 for automatic allocation. 573 572 * @size: Number of entries in the ring 574 573 * @flags: Flags for the ring 574 + * @e2e_tx_hop: Transmit HopID when E2E is enabled in @flags 575 575 * @sof_mask: Mask of PDF values that start a frame 576 576 * @eof_mask: Mask of PDF values that end a frame 577 577 * @start_poll: If not %NULL the ring will call this function when an ··· 581 579 * @poll_data: Optional data passed to @start_poll 582 580 */ 583 581 struct tb_ring *tb_ring_alloc_rx(struct tb_nhi *nhi, int hop, int size, 584 - unsigned int flags, u16 sof_mask, u16 eof_mask, 582 + unsigned int flags, int e2e_tx_hop, 583 + u16 sof_mask, u16 eof_mask, 585 584 void (*start_poll)(void *), void *poll_data) 586 585 { 587 - return tb_ring_alloc(nhi, hop, size, false, flags, sof_mask, eof_mask, 586 + return tb_ring_alloc(nhi, hop, size, false, flags, e2e_tx_hop, sof_mask, eof_mask, 588 587 start_poll, poll_data); 589 588 } 590 589 EXPORT_SYMBOL_GPL(tb_ring_alloc_rx); ··· 632 629 ring_iowrite32options(ring, sof_eof_mask, 4); 633 630 ring_iowrite32options(ring, flags, 0); 634 631 } 632 + 
633 + /* 634 + * Now that the ring valid bit is set we can configure E2E if 635 + * enabled for the ring. 636 + */ 637 + if (ring->flags & RING_FLAG_E2E) { 638 + if (!ring->is_tx) { 639 + u32 hop; 640 + 641 + hop = ring->e2e_tx_hop << REG_RX_OPTIONS_E2E_HOP_SHIFT; 642 + hop &= REG_RX_OPTIONS_E2E_HOP_MASK; 643 + flags |= hop; 644 + 645 + dev_dbg(&ring->nhi->pdev->dev, 646 + "enabling E2E for %s %d with TX HopID %d\n", 647 + RING_TYPE(ring), ring->hop, ring->e2e_tx_hop); 648 + } else { 649 + dev_dbg(&ring->nhi->pdev->dev, "enabling E2E for %s %d\n", 650 + RING_TYPE(ring), ring->hop); 651 + } 652 + 653 + flags |= RING_FLAG_E2E_FLOW_CONTROL; 654 + ring_iowrite32options(ring, flags, 0); 655 + } 656 + 635 657 ring_interrupt_active(ring, true); 636 658 ring->running = true; 637 659 err:
+1
drivers/thunderbolt/nhi.h
··· 55 55 * need for the PCI quirk anymore as we will use ICM also on Apple 56 56 * hardware. 57 57 */ 58 + #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_4C_NHI 0x1137 58 59 #define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_NHI 0x157d 59 60 #define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_BRIDGE 0x157e 60 61 #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI 0x15bf
+12 -5
drivers/thunderbolt/path.c
··· 406 406 407 407 if (!hop.pending) { 408 408 if (clear_fc) { 409 - /* Clear flow control */ 410 - hop.ingress_fc = 0; 409 + /* 410 + * Clear flow control. Protocol adapters 411 + * IFC and ISE bits are vendor defined 412 + * in the USB4 spec so we clear them 413 + * only for pre-USB4 adapters. 414 + */ 415 + if (!tb_switch_is_usb4(port->sw)) { 416 + hop.ingress_fc = 0; 417 + hop.ingress_shared_buffer = 0; 418 + } 411 419 hop.egress_fc = 0; 412 - hop.ingress_shared_buffer = 0; 413 420 hop.egress_shared_buffer = 0; 414 421 415 422 return tb_port_write(port, &hop, TB_CFG_HOPS, ··· 454 447 return; 455 448 } 456 449 tb_dbg(path->tb, 457 - "deactivating %s path from %llx:%x to %llx:%x\n", 450 + "deactivating %s path from %llx:%u to %llx:%u\n", 458 451 path->name, tb_route(path->hops[0].in_port->sw), 459 452 path->hops[0].in_port->port, 460 453 tb_route(path->hops[path->path_length - 1].out_port->sw), ··· 482 475 } 483 476 484 477 tb_dbg(path->tb, 485 - "activating %s path from %llx:%x to %llx:%x\n", 478 + "activating %s path from %llx:%u to %llx:%u\n", 486 479 path->name, tb_route(path->hops[0].in_port->sw), 487 480 path->hops[0].in_port->port, 488 481 tb_route(path->hops[path->path_length - 1].out_port->sw),
+46 -7
drivers/thunderbolt/switch.c
··· 503 503 504 504 /** 505 505 * tb_port_state() - get connectedness state of a port 506 + * @port: the port to check 506 507 * 507 508 * The port must have a TB_CAP_PHY (i.e. it should be a real port). 508 509 * 509 510 * Return: Returns an enum tb_port_state on success or an error code on failure. 510 511 */ 511 - static int tb_port_state(struct tb_port *port) 512 + int tb_port_state(struct tb_port *port) 512 513 { 513 514 struct tb_cap_phy phy; 514 515 int res; ··· 933 932 return speed == LANE_ADP_CS_1_CURRENT_SPEED_GEN3 ? 20 : 10; 934 933 } 935 934 936 - static int tb_port_get_link_width(struct tb_port *port) 935 + /** 936 + * tb_port_get_link_width() - Get current link width 937 + * @port: Port to check (USB4 or CIO) 938 + * 939 + * Returns link width. Return values can be 1 (Single-Lane), 2 (Dual-Lane) 940 + * or negative errno in case of failure. 941 + */ 942 + int tb_port_get_link_width(struct tb_port *port) 937 943 { 938 944 u32 val; 939 945 int ret; ··· 1009 1001 port->cap_phy + LANE_ADP_CS_1, 1); 1010 1002 } 1011 1003 1012 - static int tb_port_lane_bonding_enable(struct tb_port *port) 1004 + /** 1005 + * tb_port_lane_bonding_enable() - Enable bonding on port 1006 + * @port: port to enable 1007 + * 1008 + * Enable bonding by setting the link width of the port and the 1009 + * other port in case of dual link port. 1010 + * 1011 + * Return: %0 in case of success and negative errno in case of error 1012 + */ 1013 + int tb_port_lane_bonding_enable(struct tb_port *port) 1013 1014 { 1014 1015 int ret; 1015 1016 ··· 1048 1031 return 0; 1049 1032 } 1050 1033 1051 - static void tb_port_lane_bonding_disable(struct tb_port *port) 1034 + /** 1035 + * tb_port_lane_bonding_disable() - Disable bonding on port 1036 + * @port: port to disable 1037 + * 1038 + * Disable bonding by setting the link width of the port and the 1039 + * other port in case of dual link port. 
1040 + * 1041 + */ 1042 + void tb_port_lane_bonding_disable(struct tb_port *port) 1052 1043 { 1053 1044 port->dual_link_port->bonded = false; 1054 1045 port->bonded = false; ··· 2160 2135 2161 2136 fallthrough; 2162 2137 case 3: 2138 + case 4: 2163 2139 ret = tb_switch_set_uuid(sw); 2164 2140 if (ret) 2165 2141 return ret; ··· 2176 2150 break; 2177 2151 } 2178 2152 2153 + if (sw->no_nvm_upgrade) 2154 + return 0; 2155 + 2156 + if (tb_switch_is_usb4(sw)) { 2157 + ret = usb4_switch_nvm_authenticate_status(sw, &status); 2158 + if (ret) 2159 + return ret; 2160 + 2161 + if (status) { 2162 + tb_sw_info(sw, "switch flash authentication failed\n"); 2163 + nvm_set_auth_status(sw, status); 2164 + } 2165 + 2166 + return 0; 2167 + } 2168 + 2179 2169 /* Root switch DMA port requires running firmware */ 2180 2170 if (!tb_route(sw) && !tb_switch_is_icm(sw)) 2181 2171 return 0; 2182 2172 2183 2173 sw->dma_port = dma_port_alloc(sw); 2184 2174 if (!sw->dma_port) 2185 - return 0; 2186 - 2187 - if (sw->no_nvm_upgrade) 2188 2175 return 0; 2189 2176 2190 2177 /*
+2
drivers/thunderbolt/tb.c
··· 1534 1534 INIT_LIST_HEAD(&tcm->dp_resources); 1535 1535 INIT_DELAYED_WORK(&tcm->remove_work, tb_remove_work); 1536 1536 1537 + tb_dbg(tb, "using software connection manager\n"); 1538 + 1537 1539 return tb; 1538 1540 }
+22
drivers/thunderbolt/tb.h
··· 367 367 * @disconnect_pcie_paths: Disconnects PCIe paths before NVM update 368 368 * @approve_xdomain_paths: Approve (establish) XDomain DMA paths 369 369 * @disconnect_xdomain_paths: Disconnect XDomain DMA paths 370 + * @usb4_switch_op: Optional proxy for USB4 router operations. If set 371 + * this will be called whenever USB4 router operation is 372 + * performed. If this returns %-EOPNOTSUPP then the 373 + * native USB4 router operation is called. 374 + * @usb4_switch_nvm_authenticate_status: Optional callback that the CM 375 + * implementation can be used to 376 + * return status of USB4 NVM_AUTH 377 + * router operation. 370 378 */ 371 379 struct tb_cm_ops { 372 380 int (*driver_ready)(struct tb *tb); ··· 401 393 int (*disconnect_pcie_paths)(struct tb *tb); 402 394 int (*approve_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd); 403 395 int (*disconnect_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd); 396 + int (*usb4_switch_op)(struct tb_switch *sw, u16 opcode, u32 *metadata, 397 + u8 *status, const void *tx_data, size_t tx_data_len, 398 + void *rx_data, size_t rx_data_len); 399 + int (*usb4_switch_nvm_authenticate_status)(struct tb_switch *sw, 400 + u32 *status); 404 401 }; 405 402 406 403 static inline void *tb_priv(struct tb *tb) ··· 877 864 (p) = tb_next_port_on_path((src), (dst), (p))) 878 865 879 866 int tb_port_get_link_speed(struct tb_port *port); 867 + int tb_port_get_link_width(struct tb_port *port); 868 + int tb_port_state(struct tb_port *port); 869 + int tb_port_lane_bonding_enable(struct tb_port *port); 870 + void tb_port_lane_bonding_disable(struct tb_port *port); 880 871 881 872 int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec); 882 873 int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap); ··· 987 970 int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address, 988 971 const void *buf, size_t size); 989 972 int usb4_switch_nvm_authenticate(struct tb_switch *sw); 973 + int 
usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status); 990 974 bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in); 991 975 int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in); 992 976 int usb4_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in); ··· 1043 1025 void tb_debugfs_exit(void); 1044 1026 void tb_switch_debugfs_init(struct tb_switch *sw); 1045 1027 void tb_switch_debugfs_remove(struct tb_switch *sw); 1028 + void tb_service_debugfs_init(struct tb_service *svc); 1029 + void tb_service_debugfs_remove(struct tb_service *svc); 1046 1030 #else 1047 1031 static inline void tb_debugfs_init(void) { } 1048 1032 static inline void tb_debugfs_exit(void) { } 1049 1033 static inline void tb_switch_debugfs_init(struct tb_switch *sw) { } 1050 1034 static inline void tb_switch_debugfs_remove(struct tb_switch *sw) { } 1035 + static inline void tb_service_debugfs_init(struct tb_service *svc) { } 1036 + static inline void tb_service_debugfs_remove(struct tb_service *svc) { } 1051 1037 #endif 1052 1038 1053 1039 #ifdef CONFIG_USB4_KUNIT_TEST
+28
drivers/thunderbolt/tb_msgs.h
··· 106 106 ICM_APPROVE_XDOMAIN = 0x10, 107 107 ICM_DISCONNECT_XDOMAIN = 0x11, 108 108 ICM_PREBOOT_ACL = 0x18, 109 + ICM_USB4_SWITCH_OP = 0x20, 109 110 }; 110 111 111 112 enum icm_event_code { ··· 344 343 #define ICM_TR_FLAGS_RTD3 BIT(6) 345 344 346 345 #define ICM_TR_INFO_SLEVEL_MASK GENMASK(2, 0) 346 + #define ICM_TR_INFO_PROTO_VERSION_MASK GENMASK(6, 4) 347 + #define ICM_TR_INFO_PROTO_VERSION_SHIFT 4 347 348 #define ICM_TR_INFO_BOOT_ACL_SHIFT 7 348 349 #define ICM_TR_INFO_BOOT_ACL_MASK GENMASK(12, 7) 349 350 ··· 479 476 struct icm_icl_event_rtd3_veto { 480 477 struct icm_pkg_header hdr; 481 478 u32 veto_reason; 479 + }; 480 + 481 + /* USB4 ICM messages */ 482 + 483 + struct icm_usb4_switch_op { 484 + struct icm_pkg_header hdr; 485 + u32 route_hi; 486 + u32 route_lo; 487 + u32 metadata; 488 + u16 opcode; 489 + u16 data_len_valid; 490 + u32 data[16]; 491 + }; 492 + 493 + #define ICM_USB4_SWITCH_DATA_LEN_MASK GENMASK(3, 0) 494 + #define ICM_USB4_SWITCH_DATA_VALID BIT(4) 495 + 496 + struct icm_usb4_switch_op_response { 497 + struct icm_pkg_header hdr; 498 + u32 route_hi; 499 + u32 route_lo; 500 + u32 metadata; 501 + u16 opcode; 502 + u16 status; 503 + u32 data[16]; 482 504 }; 483 505 484 506 /* XDomain messages */
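The new `icm_usb4_switch_op` message packs the payload length and a valid flag into one 16-bit `data_len_valid` field (`GENMASK(3, 0)` for the dword count, `BIT(4)` for valid). A hypothetical encoder for that field, just to make the layout concrete:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ICM_USB4_SWITCH_DATA_LEN_MASK	0x0fu	/* GENMASK(3, 0) */
#define ICM_USB4_SWITCH_DATA_VALID	0x10u	/* BIT(4) */

/* Hypothetical helper: encode the dword count of an ICM_USB4_SWITCH_OP
 * request into data_len_valid, setting the valid bit only when the
 * request actually carries payload data. */
static uint16_t icm_usb4_encode_data_len(size_t dwords)
{
	uint16_t v = dwords & ICM_USB4_SWITCH_DATA_LEN_MASK;

	if (dwords)
		v |= ICM_USB4_SWITCH_DATA_VALID;
	return v;
}
```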
+14
drivers/thunderbolt/tb_regs.h
··· 211 211 #define ROUTER_CS_9 0x09 212 212 #define ROUTER_CS_25 0x19 213 213 #define ROUTER_CS_26 0x1a 214 + #define ROUTER_CS_26_OPCODE_MASK GENMASK(15, 0) 214 215 #define ROUTER_CS_26_STATUS_MASK GENMASK(29, 24) 215 216 #define ROUTER_CS_26_STATUS_SHIFT 24 216 217 #define ROUTER_CS_26_ONS BIT(30) 217 218 #define ROUTER_CS_26_OV BIT(31) 219 + 220 + /* USB4 router operations opcodes */ 221 + enum usb4_switch_op { 222 + USB4_SWITCH_OP_QUERY_DP_RESOURCE = 0x10, 223 + USB4_SWITCH_OP_ALLOC_DP_RESOURCE = 0x11, 224 + USB4_SWITCH_OP_DEALLOC_DP_RESOURCE = 0x12, 225 + USB4_SWITCH_OP_NVM_WRITE = 0x20, 226 + USB4_SWITCH_OP_NVM_AUTH = 0x21, 227 + USB4_SWITCH_OP_NVM_READ = 0x22, 228 + USB4_SWITCH_OP_NVM_SET_OFFSET = 0x23, 229 + USB4_SWITCH_OP_DROM_READ = 0x24, 230 + USB4_SWITCH_OP_NVM_SECTOR_SIZE = 0x25, 231 + }; 218 232 219 233 /* Router TMU configuration */ 220 234 #define TMU_RTR_CS_0 0x00
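With the opcode mask added above, ROUTER_CS_26 carries the opcode in bits 15:0, the status in bits 29:24, ONS in bit 30 and OV in bit 31. A small sketch of the field extraction (plain C mirrors of the kernel's GENMASK values, not the kernel macros themselves):

```c
#include <assert.h>
#include <stdint.h>

#define ROUTER_CS_26_OPCODE_MASK	0xffffu		/* GENMASK(15, 0) */
#define ROUTER_CS_26_STATUS_MASK	0x3f000000u	/* GENMASK(29, 24) */
#define ROUTER_CS_26_STATUS_SHIFT	24
#define ROUTER_CS_26_ONS		(1u << 30)
#define ROUTER_CS_26_OV			(1u << 31)

/* Extract the opcode of the last router operation from ROUTER_CS_26. */
static uint16_t cs26_opcode(uint32_t val)
{
	return val & ROUTER_CS_26_OPCODE_MASK;
}

/* Extract the completion status field from ROUTER_CS_26. */
static uint8_t cs26_status(uint32_t val)
{
	return (val & ROUTER_CS_26_STATUS_MASK) >> ROUTER_CS_26_STATUS_SHIFT;
}
```

This is the layout usb4_switch_nvm_authenticate_status() relies on further down: it reads ROUTER_CS_26, checks that the opcode still says NVM_AUTH (0x21), and only then trusts the status bits.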
+31 -19
drivers/thunderbolt/tunnel.c
··· 34 34 #define TB_DP_AUX_PATH_OUT 1 35 35 #define TB_DP_AUX_PATH_IN 2 36 36 37 - #define TB_DMA_PATH_OUT 0 38 - #define TB_DMA_PATH_IN 1 39 - 40 37 static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" }; 41 38 42 39 #define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \ ··· 826 829 * @nhi: Host controller port 827 830 * @dst: Destination null port which the other domain is connected to 828 831 * @transmit_ring: NHI ring number used to send packets towards the 829 - * other domain 832 + * other domain. Set to %0 if TX path is not needed. 830 833 * @transmit_path: HopID used for transmitting packets 831 834 * @receive_ring: NHI ring number used to receive packets from the 832 - * other domain 835 + * other domain. Set to %0 if RX path is not needed. 833 836 * @reveive_path: HopID used for receiving packets 834 837 * 835 838 * Return: Returns a tb_tunnel on success or NULL on failure. ··· 840 843 int receive_path) 841 844 { 842 845 struct tb_tunnel *tunnel; 846 + size_t npaths = 0, i = 0; 843 847 struct tb_path *path; 844 848 u32 credits; 845 849 846 - tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_DMA); 850 + if (receive_ring) 851 + npaths++; 852 + if (transmit_ring) 853 + npaths++; 854 + 855 + if (WARN_ON(!npaths)) 856 + return NULL; 857 + 858 + tunnel = tb_tunnel_alloc(tb, npaths, TB_TUNNEL_DMA); 847 859 if (!tunnel) 848 860 return NULL; 849 861 ··· 862 856 863 857 credits = tb_dma_credits(nhi); 864 858 865 - path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0, "DMA RX"); 866 - if (!path) { 867 - tb_tunnel_free(tunnel); 868 - return NULL; 859 + if (receive_ring) { 860 + path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0, 861 + "DMA RX"); 862 + if (!path) { 863 + tb_tunnel_free(tunnel); 864 + return NULL; 865 + } 866 + tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL, 867 + credits); 868 + tunnel->paths[i++] = path; 869 869 } 870 - tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | 
TB_PATH_INTERNAL, 871 - credits); 872 - tunnel->paths[TB_DMA_PATH_IN] = path; 873 870 874 - path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0, "DMA TX"); 875 - if (!path) { 876 - tb_tunnel_free(tunnel); 877 - return NULL; 871 + if (transmit_ring) { 872 + path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0, 873 + "DMA TX"); 874 + if (!path) { 875 + tb_tunnel_free(tunnel); 876 + return NULL; 877 + } 878 + tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits); 879 + tunnel->paths[i++] = path; 878 880 } 879 - tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits); 880 - tunnel->paths[TB_DMA_PATH_OUT] = path; 881 881 882 882 return tunnel; 883 883 }
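The tunnel.c change above makes DMA tunnels allocate one path per direction actually requested instead of a fixed two, with a ring number of 0 meaning "skip this direction". A hypothetical condensation of just the sizing rule:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of the path-count logic tb_tunnel_alloc_dma() now
 * uses: count a path for each non-zero ring, and refuse a tunnel that
 * would carry no paths at all (the WARN_ON(!npaths) case above). */
static int dma_tunnel_npaths(int transmit_ring, int receive_ring)
{
	size_t npaths = 0;

	if (receive_ring)
		npaths++;
	if (transmit_ring)
		npaths++;

	return npaths ? (int)npaths : -1;	/* at least one direction */
}
```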
+163 -106
drivers/thunderbolt/usb4.c
··· 16 16 #define USB4_DATA_DWORDS 16 17 17 #define USB4_DATA_RETRIES 3 18 18 19 - enum usb4_switch_op { 20 - USB4_SWITCH_OP_QUERY_DP_RESOURCE = 0x10, 21 - USB4_SWITCH_OP_ALLOC_DP_RESOURCE = 0x11, 22 - USB4_SWITCH_OP_DEALLOC_DP_RESOURCE = 0x12, 23 - USB4_SWITCH_OP_NVM_WRITE = 0x20, 24 - USB4_SWITCH_OP_NVM_AUTH = 0x21, 25 - USB4_SWITCH_OP_NVM_READ = 0x22, 26 - USB4_SWITCH_OP_NVM_SET_OFFSET = 0x23, 27 - USB4_SWITCH_OP_DROM_READ = 0x24, 28 - USB4_SWITCH_OP_NVM_SECTOR_SIZE = 0x25, 29 - }; 30 - 31 19 enum usb4_sb_target { 32 20 USB4_SB_TARGET_ROUTER, 33 21 USB4_SB_TARGET_PARTNER, ··· 60 72 } while (ktime_before(ktime_get(), timeout)); 61 73 62 74 return -ETIMEDOUT; 63 - } 64 - 65 - static int usb4_switch_op_read_data(struct tb_switch *sw, void *data, 66 - size_t dwords) 67 - { 68 - if (dwords > USB4_DATA_DWORDS) 69 - return -EINVAL; 70 - 71 - return tb_sw_read(sw, data, TB_CFG_SWITCH, ROUTER_CS_9, dwords); 72 - } 73 - 74 - static int usb4_switch_op_write_data(struct tb_switch *sw, const void *data, 75 - size_t dwords) 76 - { 77 - if (dwords > USB4_DATA_DWORDS) 78 - return -EINVAL; 79 - 80 - return tb_sw_write(sw, data, TB_CFG_SWITCH, ROUTER_CS_9, dwords); 81 - } 82 - 83 - static int usb4_switch_op_read_metadata(struct tb_switch *sw, u32 *metadata) 84 - { 85 - return tb_sw_read(sw, metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1); 86 - } 87 - 88 - static int usb4_switch_op_write_metadata(struct tb_switch *sw, u32 metadata) 89 - { 90 - return tb_sw_write(sw, &metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1); 91 75 } 92 76 93 77 static int usb4_do_read_data(u16 address, void *buf, size_t size, ··· 131 171 return 0; 132 172 } 133 173 134 - static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status) 174 + static int usb4_native_switch_op(struct tb_switch *sw, u16 opcode, 175 + u32 *metadata, u8 *status, 176 + const void *tx_data, size_t tx_dwords, 177 + void *rx_data, size_t rx_dwords) 135 178 { 136 179 u32 val; 137 180 int ret; 181 + 182 + if (metadata) { 183 + ret = 
tb_sw_write(sw, metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1); 184 + if (ret) 185 + return ret; 186 + } 187 + if (tx_dwords) { 188 + ret = tb_sw_write(sw, tx_data, TB_CFG_SWITCH, ROUTER_CS_9, 189 + tx_dwords); 190 + if (ret) 191 + return ret; 192 + } 138 193 139 194 val = opcode | ROUTER_CS_26_OV; 140 195 ret = tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_26, 1); ··· 167 192 if (val & ROUTER_CS_26_ONS) 168 193 return -EOPNOTSUPP; 169 194 170 - *status = (val & ROUTER_CS_26_STATUS_MASK) >> ROUTER_CS_26_STATUS_SHIFT; 195 + if (status) 196 + *status = (val & ROUTER_CS_26_STATUS_MASK) >> 197 + ROUTER_CS_26_STATUS_SHIFT; 198 + 199 + if (metadata) { 200 + ret = tb_sw_read(sw, metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1); 201 + if (ret) 202 + return ret; 203 + } 204 + if (rx_dwords) { 205 + ret = tb_sw_read(sw, rx_data, TB_CFG_SWITCH, ROUTER_CS_9, 206 + rx_dwords); 207 + if (ret) 208 + return ret; 209 + } 210 + 171 211 return 0; 212 + } 213 + 214 + static int __usb4_switch_op(struct tb_switch *sw, u16 opcode, u32 *metadata, 215 + u8 *status, const void *tx_data, size_t tx_dwords, 216 + void *rx_data, size_t rx_dwords) 217 + { 218 + const struct tb_cm_ops *cm_ops = sw->tb->cm_ops; 219 + 220 + if (tx_dwords > USB4_DATA_DWORDS || rx_dwords > USB4_DATA_DWORDS) 221 + return -EINVAL; 222 + 223 + /* 224 + * If the connection manager implementation provides USB4 router 225 + * operation proxy callback, call it here instead of running the 226 + * operation natively. 227 + */ 228 + if (cm_ops->usb4_switch_op) { 229 + int ret; 230 + 231 + ret = cm_ops->usb4_switch_op(sw, opcode, metadata, status, 232 + tx_data, tx_dwords, rx_data, 233 + rx_dwords); 234 + if (ret != -EOPNOTSUPP) 235 + return ret; 236 + 237 + /* 238 + * If the proxy was not supported then run the native 239 + * router operation instead. 
240 + */ 241 + } 242 + 243 + return usb4_native_switch_op(sw, opcode, metadata, status, tx_data, 244 + tx_dwords, rx_data, rx_dwords); 245 + } 246 + 247 + static inline int usb4_switch_op(struct tb_switch *sw, u16 opcode, 248 + u32 *metadata, u8 *status) 249 + { 250 + return __usb4_switch_op(sw, opcode, metadata, status, NULL, 0, NULL, 0); 251 + } 252 + 253 + static inline int usb4_switch_op_data(struct tb_switch *sw, u16 opcode, 254 + u32 *metadata, u8 *status, 255 + const void *tx_data, size_t tx_dwords, 256 + void *rx_data, size_t rx_dwords) 257 + { 258 + return __usb4_switch_op(sw, opcode, metadata, status, tx_data, 259 + tx_dwords, rx_data, rx_dwords); 172 260 } 173 261 174 262 static void usb4_switch_check_wakes(struct tb_switch *sw) ··· 386 348 metadata |= (dwaddress << USB4_DROM_ADDRESS_SHIFT) & 387 349 USB4_DROM_ADDRESS_MASK; 388 350 389 - ret = usb4_switch_op_write_metadata(sw, metadata); 351 + ret = usb4_switch_op_data(sw, USB4_SWITCH_OP_DROM_READ, &metadata, 352 + &status, NULL, 0, buf, dwords); 390 353 if (ret) 391 354 return ret; 392 355 393 - ret = usb4_switch_op(sw, USB4_SWITCH_OP_DROM_READ, &status); 394 - if (ret) 395 - return ret; 396 - 397 - if (status) 398 - return -EIO; 399 - 400 - return usb4_switch_op_read_data(sw, buf, dwords); 356 + return status ? -EIO : 0; 401 357 } 402 358 403 359 /** ··· 544 512 u8 status; 545 513 int ret; 546 514 547 - ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_SECTOR_SIZE, &status); 515 + ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_SECTOR_SIZE, &metadata, 516 + &status); 548 517 if (ret) 549 518 return ret; 550 519 551 520 if (status) 552 521 return status == 0x2 ? 
-EOPNOTSUPP : -EIO; 553 - 554 - ret = usb4_switch_op_read_metadata(sw, &metadata); 555 - if (ret) 556 - return ret; 557 522 558 523 return metadata & USB4_NVM_SECTOR_SIZE_MASK; 559 524 } ··· 568 539 metadata |= (dwaddress << USB4_NVM_READ_OFFSET_SHIFT) & 569 540 USB4_NVM_READ_OFFSET_MASK; 570 541 571 - ret = usb4_switch_op_write_metadata(sw, metadata); 542 + ret = usb4_switch_op_data(sw, USB4_SWITCH_OP_NVM_READ, &metadata, 543 + &status, NULL, 0, buf, dwords); 572 544 if (ret) 573 545 return ret; 574 546 575 - ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_READ, &status); 576 - if (ret) 577 - return ret; 578 - 579 - if (status) 580 - return -EIO; 581 - 582 - return usb4_switch_op_read_data(sw, buf, dwords); 547 + return status ? -EIO : 0; 583 548 } 584 549 585 550 /** ··· 604 581 metadata = (dwaddress << USB4_NVM_SET_OFFSET_SHIFT) & 605 582 USB4_NVM_SET_OFFSET_MASK; 606 583 607 - ret = usb4_switch_op_write_metadata(sw, metadata); 608 - if (ret) 609 - return ret; 610 - 611 - ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_SET_OFFSET, &status); 584 + ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_SET_OFFSET, &metadata, 585 + &status); 612 586 if (ret) 613 587 return ret; 614 588 ··· 619 599 u8 status; 620 600 int ret; 621 601 622 - ret = usb4_switch_op_write_data(sw, buf, dwords); 623 - if (ret) 624 - return ret; 625 - 626 - ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_WRITE, &status); 602 + ret = usb4_switch_op_data(sw, USB4_SWITCH_OP_NVM_WRITE, NULL, &status, 603 + buf, dwords, NULL, 0); 627 604 if (ret) 628 605 return ret; 629 606 ··· 655 638 * @sw: USB4 router 656 639 * 657 640 * After the new NVM has been written via usb4_switch_nvm_write(), this 658 - * function triggers NVM authentication process. If the authentication 659 - * is successful the router is power cycled and the new NVM starts 641 + * function triggers NVM authentication process. The router gets power 642 + * cycled and if the authentication is successful the new NVM starts 660 643 * running. 
In case of failure returns negative errno. 644 + * 645 + * The caller should call usb4_switch_nvm_authenticate_status() to read 646 + * the status of the authentication after power cycle. It should be the 647 + * first router operation to avoid the status being lost. 661 648 */ 662 649 int usb4_switch_nvm_authenticate(struct tb_switch *sw) 663 650 { 664 - u8 status = 0; 665 651 int ret; 666 652 667 - ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_AUTH, &status); 653 + ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_AUTH, NULL, NULL); 654 + switch (ret) { 655 + /* 656 + * The router is power cycled once NVM_AUTH is started so it is 657 + * expected to get any of the following errors back. 658 + */ 659 + case -EACCES: 660 + case -ENOTCONN: 661 + case -ETIMEDOUT: 662 + return 0; 663 + 664 + default: 665 + return ret; 666 + } 667 + } 668 + 669 + /** 670 + * usb4_switch_nvm_authenticate_status() - Read status of last NVM authenticate 671 + * @sw: USB4 router 672 + * @status: Status code of the operation 673 + * 674 + * The function checks if there is status available from the last NVM 675 + * authenticate router operation. If there is status then %0 is returned 676 + * and the status code is placed in @status. Returns negative errno in case 677 + * of failure. 678 + * 679 + * Must be called before any other router operation. 
680 + */ 681 + int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status) 682 + { 683 + const struct tb_cm_ops *cm_ops = sw->tb->cm_ops; 684 + u16 opcode; 685 + u32 val; 686 + int ret; 687 + 688 + if (cm_ops->usb4_switch_nvm_authenticate_status) { 689 + ret = cm_ops->usb4_switch_nvm_authenticate_status(sw, status); 690 + if (ret != -EOPNOTSUPP) 691 + return ret; 692 + } 693 + 694 + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_26, 1); 668 695 if (ret) 669 696 return ret; 670 697 671 - switch (status) { 672 - case 0x0: 673 - tb_sw_dbg(sw, "NVM authentication successful\n"); 674 - return 0; 675 - case 0x1: 676 - return -EINVAL; 677 - case 0x2: 678 - return -EAGAIN; 679 - case 0x3: 680 - return -EOPNOTSUPP; 681 - default: 682 - return -EIO; 698 + /* Check that the opcode is correct */ 699 + opcode = val & ROUTER_CS_26_OPCODE_MASK; 700 + if (opcode == USB4_SWITCH_OP_NVM_AUTH) { 701 + if (val & ROUTER_CS_26_OV) 702 + return -EBUSY; 703 + if (val & ROUTER_CS_26_ONS) 704 + return -EOPNOTSUPP; 705 + 706 + *status = (val & ROUTER_CS_26_STATUS_MASK) >> 707 + ROUTER_CS_26_STATUS_SHIFT; 708 + } else { 709 + *status = 0; 683 710 } 711 + 712 + return 0; 684 713 } 685 714 686 715 /** ··· 740 677 */ 741 678 bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in) 742 679 { 680 + u32 metadata = in->port; 743 681 u8 status; 744 682 int ret; 745 683 746 - ret = usb4_switch_op_write_metadata(sw, in->port); 747 - if (ret) 748 - return false; 749 - 750 - ret = usb4_switch_op(sw, USB4_SWITCH_OP_QUERY_DP_RESOURCE, &status); 684 + ret = usb4_switch_op(sw, USB4_SWITCH_OP_QUERY_DP_RESOURCE, &metadata, 685 + &status); 751 686 /* 752 687 * If DP resource allocation is not supported assume it is 753 688 * always available. 
··· 770 709 */ 771 710 int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in) 772 711 { 712 + u32 metadata = in->port; 773 713 u8 status; 774 714 int ret; 775 715 776 - ret = usb4_switch_op_write_metadata(sw, in->port); 777 - if (ret) 778 - return ret; 779 - 780 - ret = usb4_switch_op(sw, USB4_SWITCH_OP_ALLOC_DP_RESOURCE, &status); 716 + ret = usb4_switch_op(sw, USB4_SWITCH_OP_ALLOC_DP_RESOURCE, &metadata, 717 + &status); 781 718 if (ret == -EOPNOTSUPP) 782 719 return 0; 783 720 else if (ret) ··· 793 734 */ 794 735 int usb4_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in) 795 736 { 737 + u32 metadata = in->port; 796 738 u8 status; 797 739 int ret; 798 740 799 - ret = usb4_switch_op_write_metadata(sw, in->port); 800 - if (ret) 801 - return ret; 802 - 803 - ret = usb4_switch_op(sw, USB4_SWITCH_OP_DEALLOC_DP_RESOURCE, &status); 741 + ret = usb4_switch_op(sw, USB4_SWITCH_OP_DEALLOC_DP_RESOURCE, &metadata, 742 + &status); 804 743 if (ret == -EOPNOTSUPP) 805 744 return 0; 806 745 else if (ret)
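The usb4.c rework above funnels every router operation through one dispatcher: if the connection manager registered a proxy callback, try it first, and fall back to the native register-level path only when the proxy answers -EOPNOTSUPP; any other proxy result (success or a real error) is final. A hypothetical distillation of that dispatch pattern with stand-in callbacks:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

typedef int (*op_fn)(int opcode);

/* Proxy-with-fallback dispatch, as in __usb4_switch_op(): the proxy gets
 * first refusal and only -EOPNOTSUPP routes the call to the native path. */
static int dispatch(op_fn proxy, op_fn native, int opcode)
{
	if (proxy) {
		int ret = proxy(opcode);

		if (ret != -EOPNOTSUPP)
			return ret;
		/* Proxy declined; run the native operation instead. */
	}
	return native(opcode);
}

/* Stand-in callbacks for illustration only. */
static int native_ok(int opcode)       { (void)opcode; return 0; }
static int proxy_declines(int opcode)  { (void)opcode; return -EOPNOTSUPP; }
static int proxy_fails(int opcode)     { (void)opcode; return -EIO; }
```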
+140 -8
drivers/thunderbolt/xdomain.c
··· 8 8 */ 9 9 10 10 #include <linux/device.h> 11 + #include <linux/delay.h> 11 12 #include <linux/kmod.h> 12 13 #include <linux/module.h> 13 14 #include <linux/pm_runtime.h> ··· 22 21 #define XDOMAIN_UUID_RETRIES 10 23 22 #define XDOMAIN_PROPERTIES_RETRIES 60 24 23 #define XDOMAIN_PROPERTIES_CHANGED_RETRIES 10 24 + #define XDOMAIN_BONDING_WAIT 100 /* ms */ 25 25 26 26 struct xdomain_request_work { 27 27 struct work_struct work; ··· 589 587 break; 590 588 591 589 case PROPERTIES_CHANGED_REQUEST: { 592 - const struct tb_xdp_properties_changed *xchg = 593 - (const struct tb_xdp_properties_changed *)pkg; 594 590 struct tb_xdomain *xd; 595 591 596 592 ret = tb_xdp_properties_changed_response(ctl, route, sequence); ··· 598 598 * the xdomain related to this connection as well in 599 599 * case there is a change in services it offers. 600 600 */ 601 - xd = tb_xdomain_find_by_uuid_locked(tb, &xchg->src_uuid); 601 + xd = tb_xdomain_find_by_route_locked(tb, route); 602 602 if (xd) { 603 - queue_delayed_work(tb->wq, &xd->get_properties_work, 604 - msecs_to_jiffies(50)); 603 + if (device_is_registered(&xd->dev)) { 604 + queue_delayed_work(tb->wq, &xd->get_properties_work, 605 + msecs_to_jiffies(50)); 606 + } 605 607 tb_xdomain_put(xd); 606 608 } 607 609 ··· 779 777 struct tb_service *svc = container_of(dev, struct tb_service, dev); 780 778 struct tb_xdomain *xd = tb_service_parent(svc); 781 779 780 + tb_service_debugfs_remove(svc); 782 781 ida_simple_remove(&xd->service_ids, svc->id); 783 782 kfree(svc->key); 784 783 kfree(svc); ··· 894 891 svc->dev.parent = &xd->dev; 895 892 dev_set_name(&svc->dev, "%s.%d", dev_name(&xd->dev), svc->id); 896 893 894 + tb_service_debugfs_init(svc); 895 + 897 896 if (device_register(&svc->dev)) { 898 897 put_device(&svc->dev); 899 898 break; ··· 948 943 } 949 944 } 950 945 946 + static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd) 947 + { 948 + return tb_to_switch(xd->dev.parent); 949 + } 950 + 951 + static int 
tb_xdomain_update_link_attributes(struct tb_xdomain *xd) 952 + { 953 + bool change = false; 954 + struct tb_port *port; 955 + int ret; 956 + 957 + port = tb_port_at(xd->route, tb_xdomain_parent(xd)); 958 + 959 + ret = tb_port_get_link_speed(port); 960 + if (ret < 0) 961 + return ret; 962 + 963 + if (xd->link_speed != ret) 964 + change = true; 965 + 966 + xd->link_speed = ret; 967 + 968 + ret = tb_port_get_link_width(port); 969 + if (ret < 0) 970 + return ret; 971 + 972 + if (xd->link_width != ret) 973 + change = true; 974 + 975 + xd->link_width = ret; 976 + 977 + if (change) 978 + kobject_uevent(&xd->dev.kobj, KOBJ_CHANGE); 979 + 980 + return 0; 981 + } 982 + 951 983 static void tb_xdomain_get_uuid(struct work_struct *work) 952 984 { 953 985 struct tb_xdomain *xd = container_of(work, typeof(*xd), ··· 1004 962 return; 1005 963 } 1006 964 1007 - if (uuid_equal(&uuid, xd->local_uuid)) { 965 + if (uuid_equal(&uuid, xd->local_uuid)) 1008 966 dev_dbg(&xd->dev, "intra-domain loop detected\n"); 1009 - return; 1010 - } 1011 967 1012 968 /* 1013 969 * If the UUID is different, there is another domain connected ··· 1095 1055 1096 1056 xd->properties = dir; 1097 1057 xd->property_block_gen = gen; 1058 + 1059 + tb_xdomain_update_link_attributes(xd); 1098 1060 1099 1061 tb_xdomain_restore_paths(xd); 1100 1062 ··· 1204 1162 } 1205 1163 static DEVICE_ATTR_RO(unique_id); 1206 1164 1165 + static ssize_t speed_show(struct device *dev, struct device_attribute *attr, 1166 + char *buf) 1167 + { 1168 + struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev); 1169 + 1170 + return sprintf(buf, "%u.0 Gb/s\n", xd->link_speed); 1171 + } 1172 + 1173 + static DEVICE_ATTR(rx_speed, 0444, speed_show, NULL); 1174 + static DEVICE_ATTR(tx_speed, 0444, speed_show, NULL); 1175 + 1176 + static ssize_t lanes_show(struct device *dev, struct device_attribute *attr, 1177 + char *buf) 1178 + { 1179 + struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev); 1180 + 1181 + return 
sprintf(buf, "%u\n", xd->link_width); 1182 + } 1183 + 1184 + static DEVICE_ATTR(rx_lanes, 0444, lanes_show, NULL); 1185 + static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL); 1186 + 1207 1187 static struct attribute *xdomain_attrs[] = { 1208 1188 &dev_attr_device.attr, 1209 1189 &dev_attr_device_name.attr, 1190 + &dev_attr_rx_lanes.attr, 1191 + &dev_attr_rx_speed.attr, 1192 + &dev_attr_tx_lanes.attr, 1193 + &dev_attr_tx_speed.attr, 1210 1194 &dev_attr_unique_id.attr, 1211 1195 &dev_attr_vendor.attr, 1212 1196 &dev_attr_vendor_name.attr, ··· 1448 1380 else 1449 1381 device_unregister(&xd->dev); 1450 1382 } 1383 + 1384 + /** 1385 + * tb_xdomain_lane_bonding_enable() - Enable lane bonding on XDomain 1386 + * @xd: XDomain connection 1387 + * 1388 + * Lane bonding is disabled by default for XDomains. This function tries 1389 + * to enable bonding by first enabling the port and waiting for the CL0 1390 + * state. 1391 + * 1392 + * Return: %0 in case of success and negative errno in case of error. 
1393 + */ 1394 + int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd) 1395 + { 1396 + struct tb_port *port; 1397 + int ret; 1398 + 1399 + port = tb_port_at(xd->route, tb_xdomain_parent(xd)); 1400 + if (!port->dual_link_port) 1401 + return -ENODEV; 1402 + 1403 + ret = tb_port_enable(port->dual_link_port); 1404 + if (ret) 1405 + return ret; 1406 + 1407 + ret = tb_wait_for_port(port->dual_link_port, true); 1408 + if (ret < 0) 1409 + return ret; 1410 + if (!ret) 1411 + return -ENOTCONN; 1412 + 1413 + ret = tb_port_lane_bonding_enable(port); 1414 + if (ret) { 1415 + tb_port_warn(port, "failed to enable lane bonding\n"); 1416 + return ret; 1417 + } 1418 + 1419 + tb_xdomain_update_link_attributes(xd); 1420 + 1421 + dev_dbg(&xd->dev, "lane bonding enabled\n"); 1422 + return 0; 1423 + } 1424 + EXPORT_SYMBOL_GPL(tb_xdomain_lane_bonding_enable); 1425 + 1426 + /** 1427 + * tb_xdomain_lane_bonding_disable() - Disable lane bonding 1428 + * @xd: XDomain connection 1429 + * 1430 + * Lane bonding is disabled by default for XDomains. If bonding has been 1431 + * enabled, this function can be used to disable it. 1432 + */ 1433 + void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd) 1434 + { 1435 + struct tb_port *port; 1436 + 1437 + port = tb_port_at(xd->route, tb_xdomain_parent(xd)); 1438 + if (port->dual_link_port) { 1439 + tb_port_lane_bonding_disable(port); 1440 + tb_port_disable(port->dual_link_port); 1441 + tb_xdomain_update_link_attributes(xd); 1442 + 1443 + dev_dbg(&xd->dev, "lane bonding disabled\n"); 1444 + } 1445 + } 1446 + EXPORT_SYMBOL_GPL(tb_xdomain_lane_bonding_disable); 1451 1447 1452 1448 /** 1453 1449 * tb_xdomain_enable_paths() - Enable DMA paths for XDomain connection
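The new rx_speed/tx_speed attributes in xdomain.c share one show routine because the link runs symmetrically, and `xd->link_speed` is kept in Gb/s per lane (10 or 20, matching tb_port_get_link_speed()). A hypothetical user-space rendering of the same `"%u.0 Gb/s"` format, just to show what the sysfs files emit:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical mirror of speed_show(): format a per-lane link speed the
 * way the rx_speed/tx_speed sysfs attributes do. Returns a static buffer
 * for illustration; the kernel writes into the sysfs page instead. */
static const char *format_speed(unsigned int link_speed_gbps)
{
	static char buf[32];

	snprintf(buf, sizeof(buf), "%u.0 Gb/s\n", link_speed_gbps);
	return buf;
}
```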
-1
drivers/usb/Makefile
··· 30 30 obj-$(CONFIG_USB_U132_HCD) += host/ 31 31 obj-$(CONFIG_USB_R8A66597_HCD) += host/ 32 32 obj-$(CONFIG_USB_HWA_HCD) += host/ 33 - obj-$(CONFIG_USB_IMX21_HCD) += host/ 34 33 obj-$(CONFIG_USB_FSL_USB2) += host/ 35 34 obj-$(CONFIG_USB_FOTG210_HCD) += host/ 36 35 obj-$(CONFIG_USB_MAX3421_HCD) += host/
+3 -6
drivers/usb/atm/cxacru.c
··· 810 810 mutex_unlock(&instance->poll_state_serialize); 811 811 mutex_unlock(&instance->adsl_state_serialize); 812 812 813 - printk(KERN_INFO "%s%d: %s %pM\n", atm_dev->type, atm_dev->number, 814 - usbatm_instance->description, atm_dev->esi); 815 - 816 813 if (start_polling) 817 814 cxacru_poll_status(&instance->poll_work.work); 818 815 return 0; ··· 849 852 850 853 switch (instance->adsl_status) { 851 854 case 0: 852 - atm_printk(KERN_INFO, usbatm, "ADSL state: running\n"); 855 + atm_info(usbatm, "ADSL state: running\n"); 853 856 break; 854 857 855 858 case 1: 856 - atm_printk(KERN_INFO, usbatm, "ADSL state: stopped\n"); 859 + atm_info(usbatm, "ADSL state: stopped\n"); 857 860 break; 858 861 859 862 default: 860 - atm_printk(KERN_INFO, usbatm, "Unknown adsl status %02x\n", instance->adsl_status); 863 + atm_info(usbatm, "Unknown adsl status %02x\n", instance->adsl_status); 861 864 break; 862 865 } 863 866 }
+2 -2
drivers/usb/atm/usbatm.c
··· 249 249 /* vdbg("%s: urb 0x%p, status %d, actual_length %d", 250 250 __func__, urb, status, urb->actual_length); */ 251 251 252 - /* usually in_interrupt(), but not always */ 252 + /* Can be invoked from task context, protect against interrupts */ 253 253 spin_lock_irqsave(&channel->lock, flags); 254 254 255 255 /* must add to the back when receiving; doesn't matter when sending */ ··· 1278 1278 static int __init usbatm_usb_init(void) 1279 1279 { 1280 1280 if (sizeof(struct usbatm_control) > sizeof_field(struct sk_buff, cb)) { 1281 - printk(KERN_ERR "%s unusable with this kernel!\n", usbatm_driver_name); 1281 + pr_err("%s unusable with this kernel!\n", usbatm_driver_name); 1282 1282 return -EIO; 1283 1283 } 1284 1284
+1 -1
drivers/usb/atm/xusbatm.c
··· 179 179 num_vendor != num_product || 180 180 num_vendor != num_rx_endpoint || 181 181 num_vendor != num_tx_endpoint) { 182 - printk(KERN_WARNING "xusbatm: malformed module parameters\n"); 182 + pr_warn("xusbatm: malformed module parameters\n"); 183 183 return -EINVAL; 184 184 } 185 185
+1 -1
drivers/usb/cdns3/cdns3-imx.c
··· 151 151 bool suspend, bool wakeup); 152 152 static struct cdns3_platform_data cdns_imx_pdata = { 153 153 .platform_suspend = cdns_imx_platform_suspend, 154 + .quirks = CDNS3_DEFAULT_PM_RUNTIME_ALLOW, 154 155 }; 155 156 156 157 static const struct of_dev_auxdata cdns_imx_auxdata[] = { ··· 207 206 device_set_wakeup_capable(dev, true); 208 207 pm_runtime_set_active(dev); 209 208 pm_runtime_enable(dev); 210 - pm_runtime_forbid(dev); 211 209 212 210 return ret; 213 211 err:
+4 -11
drivers/usb/cdns3/core.c
··· 465 465 cdns->xhci_res[1] = *res; 466 466 467 467 cdns->dev_irq = platform_get_irq_byname(pdev, "peripheral"); 468 - if (cdns->dev_irq == -EPROBE_DEFER) 469 - return cdns->dev_irq; 470 - 471 468 if (cdns->dev_irq < 0) 472 - dev_err(dev, "couldn't get peripheral irq\n"); 469 + return cdns->dev_irq; 473 470 474 471 regs = devm_platform_ioremap_resource_byname(pdev, "dev"); 475 472 if (IS_ERR(regs)) ··· 474 477 cdns->dev_regs = regs; 475 478 476 479 cdns->otg_irq = platform_get_irq_byname(pdev, "otg"); 477 - if (cdns->otg_irq == -EPROBE_DEFER) 480 + if (cdns->otg_irq < 0) 478 481 return cdns->otg_irq; 479 - 480 - if (cdns->otg_irq < 0) { 481 - dev_err(dev, "couldn't get otg irq\n"); 482 - return cdns->otg_irq; 483 - } 484 482 485 483 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "otg"); 486 484 if (!res) { ··· 561 569 device_set_wakeup_capable(dev, true); 562 570 pm_runtime_set_active(dev); 563 571 pm_runtime_enable(dev); 564 - pm_runtime_forbid(dev); 572 + if (!(cdns->pdata && (cdns->pdata->quirks & CDNS3_DEFAULT_PM_RUNTIME_ALLOW))) 573 + pm_runtime_forbid(dev); 565 574 566 575 /* 567 576 * The controller needs less time between bus and controller suspend,
+4
drivers/usb/cdns3/core.h
··· 42 42 struct cdns3_platform_data { 43 43 int (*platform_suspend)(struct device *dev, 44 44 bool suspend, bool wakeup); 45 + unsigned long quirks; 46 + #define CDNS3_DEFAULT_PM_RUNTIME_ALLOW BIT(0) 45 47 }; 46 48 47 49 /** ··· 75 73 * @wakeup_pending: wakeup interrupt pending 76 74 * @pdata: platform data from glue layer 77 75 * @lock: spinlock structure 76 + * @xhci_plat_data: xhci private data structure pointer 78 77 */ 79 78 struct cdns3 { 80 79 struct device *dev; ··· 109 106 bool wakeup_pending; 110 107 struct cdns3_platform_data *pdata; 111 108 spinlock_t lock; 109 + struct xhci_plat_priv *xhci_plat_data; 112 110 }; 113 111 114 112 int cdns3_hw_role_switch(struct cdns3 *cdns);
-3
drivers/usb/cdns3/gadget-export.h
··· 13 13 #ifdef CONFIG_USB_CDNS3_GADGET 14 14 15 15 int cdns3_gadget_init(struct cdns3 *cdns); 16 - void cdns3_gadget_exit(struct cdns3 *cdns); 17 16 #else 18 17 19 18 static inline int cdns3_gadget_init(struct cdns3 *cdns) 20 19 { 21 20 return -ENXIO; 22 21 } 23 - 24 - static inline void cdns3_gadget_exit(struct cdns3 *cdns) { } 25 22 26 23 #endif 27 24
+1 -1
drivers/usb/cdns3/gadget.c
··· 3084 3084 kfree(priv_dev); 3085 3085 } 3086 3086 3087 - void cdns3_gadget_exit(struct cdns3 *cdns) 3087 + static void cdns3_gadget_exit(struct cdns3 *cdns) 3088 3088 { 3089 3089 struct cdns3_device *priv_dev; 3090 3090
+6
drivers/usb/cdns3/host-export.h
··· 9 9 #ifndef __LINUX_CDNS3_HOST_EXPORT 10 10 #define __LINUX_CDNS3_HOST_EXPORT 11 11 12 + struct usb_hcd; 12 13 #ifdef CONFIG_USB_CDNS3_HOST 13 14 14 15 int cdns3_host_init(struct cdns3 *cdns); 16 + int xhci_cdns3_suspend_quirk(struct usb_hcd *hcd); 15 17 16 18 #else 17 19 ··· 23 21 } 24 22 25 23 static inline void cdns3_host_exit(struct cdns3 *cdns) { } 24 + static inline int xhci_cdns3_suspend_quirk(struct usb_hcd *hcd) 25 + { 26 + return 0; 27 + } 26 28 27 29 #endif /* CONFIG_USB_CDNS3_HOST */ 28 30
+59 -1
drivers/usb/cdns3/host.c
··· 14 14 #include "drd.h" 15 15 #include "host-export.h" 16 16 #include <linux/usb/hcd.h> 17 + #include "../host/xhci.h" 18 + #include "../host/xhci-plat.h" 19 + 20 + #define XECP_PORT_CAP_REG 0x8000 21 + #define XECP_AUX_CTRL_REG1 0x8120 22 + 23 + #define CFG_RXDET_P3_EN BIT(15) 24 + #define LPM_2_STB_SWITCH_EN BIT(25) 25 + 26 + static const struct xhci_plat_priv xhci_plat_cdns3_xhci = { 27 + .quirks = XHCI_SKIP_PHY_INIT | XHCI_AVOID_BEI, 28 + .suspend_quirk = xhci_cdns3_suspend_quirk, 29 + }; 17 30 18 31 static int __cdns3_host_init(struct cdns3 *cdns) 19 32 { ··· 52 39 goto err1; 53 40 } 54 41 42 + cdns->xhci_plat_data = kmemdup(&xhci_plat_cdns3_xhci, 43 + sizeof(struct xhci_plat_priv), GFP_KERNEL); 44 + if (!cdns->xhci_plat_data) { 45 + ret = -ENOMEM; 46 + goto err1; 47 + } 48 + 49 + if (cdns->pdata && (cdns->pdata->quirks & CDNS3_DEFAULT_PM_RUNTIME_ALLOW)) 50 + cdns->xhci_plat_data->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW; 51 + 52 + ret = platform_device_add_data(xhci, cdns->xhci_plat_data, 53 + sizeof(struct xhci_plat_priv)); 54 + if (ret) 55 + goto free_memory; 56 + 55 57 ret = platform_device_add(xhci); 56 58 if (ret) { 57 59 dev_err(cdns->dev, "failed to register xHCI device\n"); 58 - goto err1; 60 + goto free_memory; 59 61 } 60 62 61 63 /* Glue needs to access xHCI region register for Power management */ ··· 79 51 cdns->xhci_regs = hcd->regs; 80 52 81 53 return 0; 54 + 55 + free_memory: 56 + kfree(cdns->xhci_plat_data); 82 57 err1: 83 58 platform_device_put(xhci); 84 59 return ret; 85 60 } 86 61 62 + int xhci_cdns3_suspend_quirk(struct usb_hcd *hcd) 63 + { 64 + struct xhci_hcd *xhci = hcd_to_xhci(hcd); 65 + u32 value; 66 + 67 + if (pm_runtime_status_suspended(hcd->self.controller)) 68 + return 0; 69 + 70 + /* set usbcmd.EU3S */ 71 + value = readl(&xhci->op_regs->command); 72 + value |= CMD_PM_INDEX; 73 + writel(value, &xhci->op_regs->command); 74 + 75 + if (hcd->regs) { 76 + value = readl(hcd->regs + XECP_AUX_CTRL_REG1); 77 + value |= CFG_RXDET_P3_EN; 78 + writel(value, hcd->regs + XECP_AUX_CTRL_REG1); 79 + 80 + value = readl(hcd->regs + XECP_PORT_CAP_REG); 81 + value |= LPM_2_STB_SWITCH_EN; 82 + writel(value, hcd->regs + XECP_PORT_CAP_REG); 83 + } 84 + 85 + return 0; 86 + } 87 + 87 88 static void cdns3_host_exit(struct cdns3 *cdns) 88 89 { 90 + kfree(cdns->xhci_plat_data); 89 91 platform_device_unregister(cdns->host_dev); 90 92 cdns->host_dev = NULL; 91 93 cdns3_drd_host_off(cdns);
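`xhci_cdns3_suspend_quirk()` sets `CFG_RXDET_P3_EN` and `LPM_2_STB_SWITCH_EN` with the usual read-modify-write sequence on the controller's extended-capability registers. A self-contained sketch of that pattern, with a plain `uint32_t` standing in for the MMIO register behind `readl()`/`writel()`:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1U << (n))
#define CFG_RXDET_P3_EN     BIT(15)
#define LPM_2_STB_SWITCH_EN BIT(25)

/*
 * Stand-ins for readl()/writel(): here a "register" is just a uint32_t
 * in ordinary memory, whereas the driver goes through MMIO accessors.
 */
static uint32_t reg_read(const uint32_t *reg) { return *reg; }
static void reg_write(uint32_t *reg, uint32_t v) { *reg = v; }

/* The read-modify-write idiom used by the suspend quirk above. */
static void reg_set_bits(uint32_t *reg, uint32_t bits)
{
	uint32_t value = reg_read(reg);

	value |= bits;	/* preserve the other bits, set only ours */
	reg_write(reg, value);
}
```

Reading first and OR-ing in the new bits matters because the same register carries unrelated fields that must survive the write.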
+4 -1
drivers/usb/chipidea/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 + 3 + # define_trace.h needs to know how to find our header 4 + CFLAGS_trace.o := -I$(src) 2 5 obj-$(CONFIG_USB_CHIPIDEA) += ci_hdrc.o 3 6 4 7 ci_hdrc-y := core.o otg.o debug.o ulpi.o 5 - ci_hdrc-$(CONFIG_USB_CHIPIDEA_UDC) += udc.o 8 + ci_hdrc-$(CONFIG_USB_CHIPIDEA_UDC) += udc.o trace.o 6 9 ci_hdrc-$(CONFIG_USB_CHIPIDEA_HOST) += host.o 7 10 ci_hdrc-$(CONFIG_USB_OTG_FSM) += otg_fsm.o 8 11
+3 -7
drivers/usb/chipidea/ci_hdrc_imx.c
··· 57 57 58 58 static const struct ci_hdrc_imx_platform_flag imx6ul_usb_data = { 59 59 .flags = CI_HDRC_SUPPORTS_RUNTIME_PM | 60 - CI_HDRC_TURN_VBUS_EARLY_ON, 60 + CI_HDRC_TURN_VBUS_EARLY_ON | 61 + CI_HDRC_DISABLE_DEVICE_STREAMING, 61 62 }; 62 63 63 64 static const struct ci_hdrc_imx_platform_flag imx7d_usb_data = { ··· 320 319 .notify_event = ci_hdrc_imx_notify_event, 321 320 }; 322 321 int ret; 323 - const struct of_device_id *of_id; 324 322 const struct ci_hdrc_imx_platform_flag *imx_platform_flag; 325 323 struct device_node *np = pdev->dev.of_node; 326 324 struct device *dev = &pdev->dev; 327 325 328 - of_id = of_match_device(ci_hdrc_imx_dt_ids, dev); 329 - if (!of_id) 330 - return -ENODEV; 331 - 332 - imx_platform_flag = of_id->data; 326 + imx_platform_flag = of_device_get_match_data(&pdev->dev); 333 327 334 328 data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 335 329 if (!data)
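`of_device_get_match_data()` folds the `of_match_device()` call and the `->data` dereference into one step, which is what lets the hunk above drop the `of_id` local and its NULL check. The sketch below approximates what the helper does for this driver — the compatible strings and flag values are illustrative, not the real imx match table:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace stand-in for the OF match entry; not the kernel struct. */
struct of_device_id {
	const char *compatible;
	const void *data;
};

/* Illustrative per-SoC data, playing the role of the platform flags. */
static const int imx6ul_flags = 1, imx7d_flags = 2;

static const struct of_device_id ci_hdrc_imx_dt_ids[] = {
	{ .compatible = "fsl,imx6ul-usb", .data = &imx6ul_flags },
	{ .compatible = "fsl,imx7d-usb",  .data = &imx7d_flags },
	{ /* sentinel */ 0 }
};

/*
 * Rough equivalent of of_device_get_match_data(): match the device's
 * compatible string against the table and hand back the .data pointer,
 * or NULL when nothing matches.
 */
static const void *get_match_data(const char *compatible)
{
	const struct of_device_id *id;

	for (id = ci_hdrc_imx_dt_ids; id->compatible; id++)
		if (!strcmp(id->compatible, compatible))
			return id->data;
	return NULL;
}
```

The real helper walks the device's matched OF node rather than taking a string, but the lookup-then-return-data shape is the same.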
+23
drivers/usb/chipidea/trace.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Chipidea Device Mode Trace Support 4 + * 5 + * Copyright (C) 2020 NXP 6 + * 7 + * Author: Peter Chen <peter.chen@nxp.com> 8 + */ 9 + 10 + #define CREATE_TRACE_POINTS 11 + #include "trace.h" 12 + 13 + void ci_log(struct ci_hdrc *ci, const char *fmt, ...) 14 + { 15 + struct va_format vaf; 16 + va_list args; 17 + 18 + va_start(args, fmt); 19 + vaf.fmt = fmt; 20 + vaf.va = &args; 21 + trace_ci_log(ci, &vaf); 22 + va_end(args); 23 + }
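`ci_log()` packages its printf-style arguments into a `struct va_format` so the tracepoint can expand them exactly once. The same varargs-forwarding shape, writing into an ordinary buffer instead of a trace event's dynamic array (a userspace stand-in, not the kernel helper):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

#define CHIPIDEA_MSG_MAX 500

/*
 * Mirrors ci_log()'s va_start/va_end bracketing: the variadic
 * arguments are captured into a va_list and consumed once by a
 * single vsnprintf(), just as the trace event's TP_fast_assign does.
 */
static void ci_log_to_buf(char *buf, const char *fmt, ...)
{
	va_list args;

	va_start(args, fmt);
	vsnprintf(buf, CHIPIDEA_MSG_MAX, fmt, args);
	va_end(args);
}
```

Deferring the formatting into one `vsnprintf()` call is what makes the wrapper cheap when tracing is the only consumer.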
+92
drivers/usb/chipidea/trace.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Trace support header file for device mode 4 + * 5 + * Copyright (C) 2020 NXP 6 + * 7 + * Author: Peter Chen <peter.chen@nxp.com> 8 + */ 9 + 10 + #undef TRACE_SYSTEM 11 + #define TRACE_SYSTEM chipidea 12 + 13 + #if !defined(__LINUX_CHIPIDEA_TRACE) || defined(TRACE_HEADER_MULTI_READ) 14 + #define __LINUX_CHIPIDEA_TRACE 15 + 16 + #include <linux/types.h> 17 + #include <linux/tracepoint.h> 18 + #include <linux/usb/chipidea.h> 19 + #include "ci.h" 20 + #include "udc.h" 21 + 22 + #define CHIPIDEA_MSG_MAX 500 23 + 24 + void ci_log(struct ci_hdrc *ci, const char *fmt, ...); 25 + 26 + TRACE_EVENT(ci_log, 27 + TP_PROTO(struct ci_hdrc *ci, struct va_format *vaf), 28 + TP_ARGS(ci, vaf), 29 + TP_STRUCT__entry( 30 + __string(name, dev_name(ci->dev)) 31 + __dynamic_array(char, msg, CHIPIDEA_MSG_MAX) 32 + ), 33 + TP_fast_assign( 34 + __assign_str(name, dev_name(ci->dev)); 35 + vsnprintf(__get_str(msg), CHIPIDEA_MSG_MAX, vaf->fmt, *vaf->va); 36 + ), 37 + TP_printk("%s: %s", __get_str(name), __get_str(msg)) 38 + ); 39 + 40 + DECLARE_EVENT_CLASS(ci_log_trb, 41 + TP_PROTO(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq, struct td_node *td), 42 + TP_ARGS(hwep, hwreq, td), 43 + TP_STRUCT__entry( 44 + __string(name, hwep->name) 45 + __field(struct td_node *, td) 46 + __field(struct usb_request *, req) 47 + __field(dma_addr_t, dma) 48 + __field(s32, td_remaining_size) 49 + __field(u32, next) 50 + __field(u32, token) 51 + __field(u32, type) 52 + ), 53 + TP_fast_assign( 54 + __assign_str(name, hwep->name); 55 + __entry->req = &hwreq->req; 56 + __entry->td = td; 57 + __entry->dma = td->dma; 58 + __entry->td_remaining_size = td->td_remaining_size; 59 + __entry->next = le32_to_cpu(td->ptr->next); 60 + __entry->token = le32_to_cpu(td->ptr->token); 61 + __entry->type = usb_endpoint_type(hwep->ep.desc); 62 + ), 63 + TP_printk("%s: req: %p, td: %p, td_dma_address: %pad, remaining_size: %d, " 64 + "next: %x, total bytes: %d, status: %lx", 65 + __get_str(name), __entry->req, __entry->td, &__entry->dma, 66 + __entry->td_remaining_size, __entry->next, 67 + (int)((__entry->token & TD_TOTAL_BYTES) >> __ffs(TD_TOTAL_BYTES)), 68 + __entry->token & TD_STATUS 69 + ) 70 + ); 71 + 72 + DEFINE_EVENT(ci_log_trb, ci_prepare_td, 73 + TP_PROTO(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq, struct td_node *td), 74 + TP_ARGS(hwep, hwreq, td) 75 + ); 76 + 77 + DEFINE_EVENT(ci_log_trb, ci_complete_td, 78 + TP_PROTO(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq, struct td_node *td), 79 + TP_ARGS(hwep, hwreq, td) 80 + ); 81 + 82 + #endif /* __LINUX_CHIPIDEA_TRACE */ 83 + 84 + /* this part must be outside header guard */ 85 + 86 + #undef TRACE_INCLUDE_PATH 87 + #define TRACE_INCLUDE_PATH . 88 + 89 + #undef TRACE_INCLUDE_FILE 90 + #define TRACE_INCLUDE_FILE trace 91 + 92 + #include <trace/define_trace.h>
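The `TP_printk()` in the trace header recovers the total-bytes field with `(token & TD_TOTAL_BYTES) >> __ffs(TD_TOTAL_BYTES)` — mask the field out, then shift it down by the position of the mask's lowest set bit. A sketch of that idiom with an illustrative mask value (the real `TD_TOTAL_BYTES` is defined in the driver's headers) and `__builtin_ctz()` standing in for the kernel's `__ffs()`:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative field layout: 15 bits of byte count starting at bit 16. */
#define TD_TOTAL_BYTES (0x7fffU << 16)

/* Userspace stand-in for __ffs(): index of the lowest set bit. */
static unsigned int my_ffs(uint32_t mask)
{
	return (unsigned int)__builtin_ctz(mask);
}

/*
 * The extraction idiom from the TP_printk(): because the shift count
 * is derived from the mask itself, the field can move without the
 * extraction code changing.
 */
static uint32_t td_total_bytes(uint32_t token)
{
	return (token & TD_TOTAL_BYTES) >> my_ffs(TD_TOTAL_BYTES);
}
```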
+8 -2
drivers/usb/chipidea/udc.c
··· 26 26 #include "bits.h" 27 27 #include "otg.h" 28 28 #include "otg_fsm.h" 29 + #include "trace.h" 29 30 30 31 /* control endpoint description */ 31 32 static const struct usb_endpoint_descriptor ··· 570 569 if (ret) 571 570 return ret; 572 571 573 - firstnode = list_first_entry(&hwreq->tds, struct td_node, td); 574 - 575 572 lastnode = list_entry(hwreq->tds.prev, 576 573 struct td_node, td); 577 574 578 575 lastnode->ptr->next = cpu_to_le32(TD_TERMINATE); 579 576 if (!hwreq->req.no_interrupt) 580 577 lastnode->ptr->token |= cpu_to_le32(TD_IOC); 578 + 579 + list_for_each_entry_safe(firstnode, lastnode, &hwreq->tds, td) 580 + trace_ci_prepare_td(hwep, hwreq, firstnode); 581 + 582 + firstnode = list_first_entry(&hwreq->tds, struct td_node, td); 583 + 581 584 wmb(); 582 585 583 586 hwreq->req.actual = 0; ··· 676 671 677 672 list_for_each_entry_safe(node, tmpnode, &hwreq->tds, td) { 678 673 tmptoken = le32_to_cpu(node->ptr->token); 674 + trace_ci_complete_td(hwep, hwreq, node); 679 675 if ((TD_STATUS_ACTIVE & tmptoken) != 0) { 680 676 int n = hw_ep_bit(hwep->num, hwep->dir); 681 677
+1 -6
drivers/usb/chipidea/usbmisc_imx.c
··· 1134 1134 static int usbmisc_imx_probe(struct platform_device *pdev) 1135 1135 { 1136 1136 struct imx_usbmisc *data; 1137 - const struct of_device_id *of_id; 1138 - 1139 - of_id = of_match_device(usbmisc_imx_dt_ids, &pdev->dev); 1140 - if (!of_id) 1141 - return -ENODEV; 1142 1137 1143 1138 data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 1144 1139 if (!data) ··· 1145 1150 if (IS_ERR(data->base)) 1146 1151 return PTR_ERR(data->base); 1147 1152 1148 - data->ops = (const struct usbmisc_ops *)of_id->data; 1153 + data->ops = of_device_get_match_data(&pdev->dev); 1149 1154 platform_set_drvdata(pdev, data); 1150 1155 1151 1156 return 0;
+1 -1
drivers/usb/common/ulpi.c
··· 118 118 NULL 119 119 }; 120 120 121 - static struct attribute_group ulpi_dev_attr_group = { 121 + static const struct attribute_group ulpi_dev_attr_group = { 122 122 .attrs = ulpi_dev_attrs, 123 123 }; 124 124
+4 -2
drivers/usb/core/buffer.c
··· 51 51 /** 52 52 * hcd_buffer_create - initialize buffer pools 53 53 * @hcd: the bus whose buffer pools are to be initialized 54 - * Context: !in_interrupt() 54 + * 55 + * Context: task context, might sleep 55 56 * 56 57 * Call this as part of initializing a host controller that uses the dma 57 58 * memory allocators. It initializes some pools of dma-coherent memory that ··· 89 88 /** 90 89 * hcd_buffer_destroy - deallocate buffer pools 91 90 * @hcd: the bus whose buffer pools are to be destroyed 92 - * Context: !in_interrupt() 91 + * 92 + * Context: task context, might sleep 93 93 * 94 94 * This frees the buffer pools created by hcd_buffer_create(). 95 95 */
+1
drivers/usb/core/config.c
··· 1076 1076 case USB_PTM_CAP_TYPE: 1077 1077 dev->bos->ptm_cap = 1078 1078 (struct usb_ptm_cap_descriptor *)buffer; 1079 + break; 1079 1080 default: 1080 1081 break; 1081 1082 }
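The added `break` stops the `USB_PTM_CAP_TYPE` case from falling through into `default`. A toy illustration of why an implicit fallthrough in such a switch is a bug — the enum and counters here are invented for the example, not taken from the USB core:

```c
#include <assert.h>

enum cap { PTM, OTHER };

struct counts {
	int ptm, other;
};

/*
 * With the break in place each descriptor type bumps exactly one
 * counter; without it, a PTM capability would fall through and
 * wrongly bump "other" as well.
 */
static void count_cap(struct counts *c, enum cap type)
{
	switch (type) {
	case PTM:
		c->ptm++;
		break;	/* the missing break the patch adds */
	default:
		c->other++;
		break;
	}
}
```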
+1 -1
drivers/usb/core/endpoint.c
··· 153 153 &dev_attr_direction.attr, 154 154 NULL, 155 155 }; 156 - static struct attribute_group ep_dev_attr_grp = { 156 + static const struct attribute_group ep_dev_attr_grp = { 157 157 .attrs = ep_dev_attrs, 158 158 }; 159 159 static const struct attribute_group *ep_dev_groups[] = {
+4 -2
drivers/usb/core/hcd-pci.c
··· 160 160 * @dev: USB Host Controller being probed 161 161 * @id: pci hotplug id connecting controller to HCD framework 162 162 * @driver: USB HC driver handle 163 - * Context: !in_interrupt() 163 + * 164 + * Context: task context, might sleep 164 165 * 165 166 * Allocates basic PCI resources for this USB host controller, and 166 167 * then invokes the start() method for the HCD associated with it ··· 305 304 /** 306 305 * usb_hcd_pci_remove - shutdown processing for PCI-based HCDs 307 306 * @dev: USB Host Controller being removed 308 - * Context: !in_interrupt() 307 + * 308 + * Context: task context, might sleep 309 309 * 310 310 * Reverses the effect of usb_hcd_pci_probe(), first invoking 311 311 * the HCD's stop() method. It is always called from a thread
+26 -11
drivers/usb/core/hcd.c
··· 747 747 * driver requests it; otherwise the driver is responsible for 748 748 * calling usb_hcd_poll_rh_status() when an event occurs. 749 749 * 750 - * Completions are called in_interrupt(), but they may or may not 751 - * be in_irq(). 750 + * Completion handler may not sleep. See usb_hcd_giveback_urb() for details. 752 751 */ 753 752 void usb_hcd_poll_rh_status(struct usb_hcd *hcd) 754 753 { ··· 903 904 /** 904 905 * usb_register_bus - registers the USB host controller with the usb core 905 906 * @bus: pointer to the bus to register 906 - * Context: !in_interrupt() 907 + * 908 + * Context: task context, might sleep. 907 909 * 908 910 * Assigns a bus number, and links the controller into usbcore data 909 911 * structures so that it can be seen by scanning the bus list. ··· 939 939 /** 940 940 * usb_deregister_bus - deregisters the USB host controller 941 941 * @bus: pointer to the bus to deregister 942 - * Context: !in_interrupt() 942 + * 943 + * Context: task context, might sleep. 943 944 * 944 945 * Recycles the bus number, and unlinks the controller from usbcore data 945 946 * structures so that it won't be seen by scanning the bus list. ··· 1647 1646 1648 1647 /* pass ownership to the completion handler */ 1649 1648 urb->status = status; 1650 - kcov_remote_start_usb((u64)urb->dev->bus->busnum); 1649 + /* 1650 + * This function can be called in task context inside another remote 1651 + * coverage collection section, but KCOV doesn't support that kind of 1652 + * recursion yet. Only collect coverage in softirq context for now. 1653 + */ 1654 + if (in_serving_softirq()) 1655 + kcov_remote_start_usb((u64)urb->dev->bus->busnum); 1651 1656 urb->complete(urb); 1652 - kcov_remote_stop(); 1657 + if (in_serving_softirq()) 1658 + kcov_remote_stop(); 1653 1659 1654 1660 usb_anchor_resume_wakeups(anchor); 1655 1661 atomic_dec(&urb->use_count); ··· 1699 1691 * @hcd: host controller returning the URB 1700 1692 * @urb: urb being returned to the USB device driver. 
1701 1693 * @status: completion status code for the URB. 1702 - * Context: in_interrupt() 1694 + * 1695 + * Context: atomic. The completion callback is invoked in caller's context. 1696 + * For HCDs with HCD_BH flag set, the completion callback is invoked in tasklet 1697 + * context (except for URBs submitted to the root hub which always complete in 1698 + * caller's context). 1703 1699 * 1704 1700 * This hands the URB from HCD to its USB device driver, using its 1705 1701 * completion function. The HCD has freed all per-urb resources ··· 2280 2268 * usb_bus_start_enum - start immediate enumeration (for OTG) 2281 2269 * @bus: the bus (must use hcd framework) 2282 2270 * @port_num: 1-based number of port; usually bus->otg_port 2283 - * Context: in_interrupt() 2271 + * Context: atomic 2284 2272 * 2285 2273 * Starts enumeration, with an immediate reset followed later by 2286 2274 * hub_wq identifying and possibly configuring the device. ··· 2486 2474 * @bus_name: value to store in hcd->self.bus_name 2487 2475 * @primary_hcd: a pointer to the usb_hcd structure that is sharing the 2488 2476 * PCI device. Only allocate certain resources for the primary HCD 2489 - * Context: !in_interrupt() 2477 + * 2478 + * Context: task context, might sleep. 2490 2479 * 2491 2480 * Allocate a struct usb_hcd, with extra space at the end for the 2492 2481 * HC driver's private data. Initialize the generic members of the ··· 2509 2496 * @driver: HC driver that will use this hcd 2510 2497 * @dev: device for this HC, stored in hcd->self.controller 2511 2498 * @bus_name: value to store in hcd->self.bus_name 2512 - * Context: !in_interrupt () 2499 + * 2500 + * Context: task context, might sleep. 2513 2501 * 2514 2502 * Allocate a struct usb_hcd, with extra space at the end for the 2515 2503 * HC driver's private data. Initialize the generic members of the ··· 2844 2830 /** 2845 2831 * usb_remove_hcd - shutdown processing for generic HCDs 2846 2832 * @hcd: the usb_hcd structure to remove 2847 - * Context: !in_interrupt() 2833 + * 2834 + * Context: task context, might sleep. 2848 2835 * 2849 2836 * Disconnects the root hub, then reverses the effects of usb_add_hcd(), 2850 2837 * invoking the HCD's stop() method.
+2 -1
drivers/usb/core/hub.c
··· 2171 2171 /** 2172 2172 * usb_disconnect - disconnect a device (usbcore-internal) 2173 2173 * @pdev: pointer to device being disconnected 2174 - * Context: !in_interrupt () 2174 + * 2175 + * Context: task context, might sleep 2175 2176 * 2176 2177 * Something got disconnected. Get rid of it and all of its children. 2177 2178 *
+26 -21
drivers/usb/core/message.c
··· 119 119 * @timeout: time in msecs to wait for the message to complete before timing 120 120 * out (if 0 the wait is forever) 121 121 * 122 - * Context: !in_interrupt () 122 + * Context: task context, might sleep. 123 123 * 124 124 * This function sends a simple control message to a specified endpoint and 125 125 * waits for the message to complete, or timeout. ··· 204 204 int ret; 205 205 u8 *data = NULL; 206 206 207 - if (usb_pipe_type_check(dev, pipe)) 208 - return -EINVAL; 209 - 210 207 if (size) { 211 208 data = kmemdup(driver_data, size, memflags); 212 209 if (!data) ··· 216 219 217 220 if (ret < 0) 218 221 return ret; 219 - if (ret == size) 220 - return 0; 221 - return -EINVAL; 222 + 223 + return 0; 222 224 } 223 225 EXPORT_SYMBOL_GPL(usb_control_msg_send); 224 226 ··· 269 273 int ret; 270 274 u8 *data; 271 275 272 - if (!size || !driver_data || usb_pipe_type_check(dev, pipe)) 276 + if (!size || !driver_data) 273 277 return -EINVAL; 274 278 275 279 data = kmalloc(size, memflags); ··· 286 290 memcpy(driver_data, data, size); 287 291 ret = 0; 288 292 } else { 289 - ret = -EINVAL; 293 + ret = -EREMOTEIO; 290 294 } 291 295 292 296 exit: ··· 306 310 * @timeout: time in msecs to wait for the message to complete before 307 311 * timing out (if 0 the wait is forever) 308 312 * 309 - * Context: !in_interrupt () 313 + * Context: task context, might sleep. 310 314 * 311 315 * This function sends a simple interrupt message to a specified endpoint and 312 316 * waits for the message to complete, or timeout. ··· 339 343 * @timeout: time in msecs to wait for the message to complete before 340 344 * timing out (if 0 the wait is forever) 341 345 * 342 - * Context: !in_interrupt () 346 + * Context: task context, might sleep. 343 347 * 344 348 * This function sends a simple bulk message to a specified endpoint 345 349 * and waits for the message to complete, or timeout. 
··· 606 610 * usb_sg_wait - synchronously execute scatter/gather request 607 611 * @io: request block handle, as initialized with usb_sg_init(). 608 612 * some fields become accessible when this call returns. 609 - * Context: !in_interrupt () 613 + * 614 + * Context: task context, might sleep. 610 615 * 611 616 * This function blocks until the specified I/O operation completes. It 612 617 * leverages the grouping of the related I/O requests to get good transfer ··· 761 764 * @index: the number of the descriptor 762 765 * @buf: where to put the descriptor 763 766 * @size: how big is "buf"? 764 - * Context: !in_interrupt () 767 + * 768 + * Context: task context, might sleep. 765 769 * 766 770 * Gets a USB descriptor. Convenience functions exist to simplify 767 771 * getting some types of descriptors. Use ··· 810 812 * @index: the number of the descriptor 811 813 * @buf: where to put the string 812 814 * @size: how big is "buf"? 813 - * Context: !in_interrupt () 815 + * 816 + * Context: task context, might sleep. 814 817 * 815 818 * Retrieves a string, encoded using UTF-16LE (Unicode, 16 bits per character, 816 819 * in little-endian byte order). ··· 946 947 * @index: the number of the descriptor 947 948 * @buf: where to put the string 948 949 * @size: how big is "buf"? 949 - * Context: !in_interrupt () 950 + * 951 + * Context: task context, might sleep. 950 952 * 951 953 * This converts the UTF-16LE encoded strings returned by devices, from 952 954 * usb_get_string_descriptor(), to null-terminated UTF-8 encoded ones ··· 1036 1036 * usb_get_device_descriptor - (re)reads the device descriptor (usbcore) 1037 1037 * @dev: the device whose device descriptor is being updated 1038 1038 * @size: how much of the descriptor to read 1039 - * Context: !in_interrupt () 1039 + * 1040 + * Context: task context, might sleep. 1040 1041 * 1041 1042 * Updates the copy of the device descriptor stored in the device structure, 1042 1043 * which dedicates space for this purpose. 
··· 1072 1071 /* 1073 1072 * usb_set_isoch_delay - informs the device of the packet transmit delay 1074 1073 * @dev: the device whose delay is to be informed 1075 - * Context: !in_interrupt() 1074 + * Context: task context, might sleep 1076 1075 * 1077 1076 * Since this is an optional request, we don't bother if it fails. 1078 1077 */ ··· 1101 1100 * @type: USB_STATUS_TYPE_*; for standard or PTM status types 1102 1101 * @target: zero (for device), else interface or endpoint number 1103 1102 * @data: pointer to two bytes of bitmap data 1104 - * Context: !in_interrupt () 1103 + * 1104 + * Context: task context, might sleep. 1105 1105 * 1106 1106 * Returns device, interface, or endpoint status. Normally only of 1107 1107 * interest to see if the device is self powered, or has enabled the ··· 1179 1177 * usb_clear_halt - tells device to clear endpoint halt/stall condition 1180 1178 * @dev: device whose endpoint is halted 1181 1179 * @pipe: endpoint "pipe" being cleared 1182 - * Context: !in_interrupt () 1180 + * 1181 + * Context: task context, might sleep. 1183 1182 * 1184 1183 * This is used to clear halt conditions for bulk and interrupt endpoints, 1185 1184 * as reported by URB completion status. Endpoints that are halted are ··· 1484 1481 * @dev: the device whose interface is being updated 1485 1482 * @interface: the interface being updated 1486 1483 * @alternate: the setting being chosen. 1487 - * Context: !in_interrupt () 1484 + * 1485 + * Context: task context, might sleep. 1488 1486 * 1489 1487 * This is used to enable data transfers on interfaces that may not 1490 1488 * be enabled by default. Not all devices support such configurability. ··· 1906 1902 * usb_set_configuration - Makes a particular device setting be current 1907 1903 * @dev: the device whose configuration is being updated 1908 1904 * @configuration: the configuration being chosen. 1909 - * Context: !in_interrupt(), caller owns the device lock 1905 + * 1906 + * Context: task context, might sleep. 
Caller holds device lock. 1910 1907 * 1911 1908 * This is used to enable non-default device modes. Not all devices 1912 1909 * use this kind of configurability; many devices only have one
+2 -2
drivers/usb/core/port.c
··· 155 155 NULL, 156 156 }; 157 157 158 - static struct attribute_group port_dev_attr_grp = { 158 + static const struct attribute_group port_dev_attr_grp = { 159 159 .attrs = port_dev_attrs, 160 160 }; 161 161 ··· 169 169 NULL, 170 170 }; 171 171 172 - static struct attribute_group port_dev_usb3_attr_grp = { 172 + static const struct attribute_group port_dev_usb3_attr_grp = { 173 173 .attrs = port_dev_usb3_attrs, 174 174 }; 175 175
+3
drivers/usb/core/quirks.c
··· 342 342 { USB_DEVICE(0x06a3, 0x0006), .driver_info = 343 343 USB_QUIRK_CONFIG_INTF_STRINGS }, 344 344 345 + /* Agfa SNAPSCAN 1212U */ 346 + { USB_DEVICE(0x06bd, 0x0001), .driver_info = USB_QUIRK_RESET_RESUME }, 347 + 345 348 /* Guillemot Webcam Hercules Dualpix Exchange (2nd ID) */ 346 349 { USB_DEVICE(0x06f8, 0x0804), .driver_info = USB_QUIRK_RESET_RESUME }, 347 350
+7 -7
drivers/usb/core/sysfs.c
··· 641 641 &dev_attr_usb2_lpm_besl.attr, 642 642 NULL, 643 643 }; 644 - static struct attribute_group usb2_hardware_lpm_attr_group = { 644 + static const struct attribute_group usb2_hardware_lpm_attr_group = { 645 645 .name = power_group_name, 646 646 .attrs = usb2_hardware_lpm_attr, 647 647 }; ··· 651 651 &dev_attr_usb3_hardware_lpm_u2.attr, 652 652 NULL, 653 653 }; 654 - static struct attribute_group usb3_hardware_lpm_attr_group = { 654 + static const struct attribute_group usb3_hardware_lpm_attr_group = { 655 655 .name = power_group_name, 656 656 .attrs = usb3_hardware_lpm_attr, 657 657 }; ··· 663 663 &dev_attr_active_duration.attr, 664 664 NULL, 665 665 }; 666 - static struct attribute_group power_attr_group = { 666 + static const struct attribute_group power_attr_group = { 667 667 .name = power_group_name, 668 668 .attrs = power_attrs, 669 669 }; ··· 832 832 #endif 833 833 NULL, 834 834 }; 835 - static struct attribute_group dev_attr_grp = { 835 + static const struct attribute_group dev_attr_grp = { 836 836 .attrs = dev_attrs, 837 837 }; 838 838 ··· 865 865 return a->mode; 866 866 } 867 867 868 - static struct attribute_group dev_string_attr_grp = { 868 + static const struct attribute_group dev_string_attr_grp = { 869 869 .attrs = dev_string_attrs, 870 870 .is_visible = dev_string_attrs_are_visible, 871 871 }; ··· 1222 1222 &dev_attr_interface_authorized.attr, 1223 1223 NULL, 1224 1224 }; 1225 - static struct attribute_group intf_attr_grp = { 1225 + static const struct attribute_group intf_attr_grp = { 1226 1226 .attrs = intf_attrs, 1227 1227 }; 1228 1228 ··· 1246 1246 return a->mode; 1247 1247 } 1248 1248 1249 - static struct attribute_group intf_assoc_attr_grp = { 1249 + static const struct attribute_group intf_assoc_attr_grp = { 1250 1250 .attrs = intf_assoc_attrs, 1251 1251 .is_visible = intf_assoc_attrs_are_visible, 1252 1252 };
+2 -2
drivers/usb/core/usb.c
··· 28 28 #include <linux/string.h> 29 29 #include <linux/bitops.h> 30 30 #include <linux/slab.h> 31 - #include <linux/interrupt.h> /* for in_interrupt() */ 32 31 #include <linux/kmod.h> 33 32 #include <linux/init.h> 34 33 #include <linux/spinlock.h> ··· 560 561 * @parent: hub to which device is connected; null to allocate a root hub 561 562 * @bus: bus used to access the device 562 563 * @port1: one-based index of port; ignored for root hubs 563 - * Context: !in_interrupt() 564 + * 565 + * Context: task context, might sleep. 564 566 * 565 567 * Only hub drivers (including virtual root hub drivers for host 566 568 * controllers) should ever call this.
+1 -1
drivers/usb/gadget/function/f_acm.c
··· 686 686 acm_ss_out_desc.bEndpointAddress = acm_fs_out_desc.bEndpointAddress; 687 687 688 688 status = usb_assign_descriptors(f, acm_fs_function, acm_hs_function, 689 - acm_ss_function, NULL); 689 + acm_ss_function, acm_ss_function); 690 690 if (status) 691 691 goto fail; 692 692
+94 -90
drivers/usb/gadget/function/f_fs.c
··· 296 296 reinit_completion(&ffs->ep0req_completion); 297 297 298 298 ret = usb_ep_queue(ffs->gadget->ep0, req, GFP_ATOMIC); 299 - if (unlikely(ret < 0)) 299 + if (ret < 0) 300 300 return ret; 301 301 302 302 ret = wait_for_completion_interruptible(&ffs->ep0req_completion); 303 - if (unlikely(ret)) { 303 + if (ret) { 304 304 usb_ep_dequeue(ffs->gadget->ep0, req); 305 305 return -EINTR; 306 306 } ··· 337 337 338 338 /* Acquire mutex */ 339 339 ret = ffs_mutex_lock(&ffs->mutex, file->f_flags & O_NONBLOCK); 340 - if (unlikely(ret < 0)) 340 + if (ret < 0) 341 341 return ret; 342 342 343 343 /* Check state */ ··· 345 345 case FFS_READ_DESCRIPTORS: 346 346 case FFS_READ_STRINGS: 347 347 /* Copy data */ 348 - if (unlikely(len < 16)) { 348 + if (len < 16) { 349 349 ret = -EINVAL; 350 350 break; 351 351 } ··· 360 360 if (ffs->state == FFS_READ_DESCRIPTORS) { 361 361 pr_info("read descriptors\n"); 362 362 ret = __ffs_data_got_descs(ffs, data, len); 363 - if (unlikely(ret < 0)) 363 + if (ret < 0) 364 364 break; 365 365 366 366 ffs->state = FFS_READ_STRINGS; ··· 368 368 } else { 369 369 pr_info("read strings\n"); 370 370 ret = __ffs_data_got_strings(ffs, data, len); 371 - if (unlikely(ret < 0)) 371 + if (ret < 0) 372 372 break; 373 373 374 374 ret = ffs_epfiles_create(ffs); 375 - if (unlikely(ret)) { 375 + if (ret) { 376 376 ffs->state = FFS_CLOSING; 377 377 break; 378 378 } ··· 381 381 mutex_unlock(&ffs->mutex); 382 382 383 383 ret = ffs_ready(ffs); 384 - if (unlikely(ret < 0)) { 384 + if (ret < 0) { 385 385 ffs->state = FFS_CLOSING; 386 386 return ret; 387 387 } ··· 495 495 spin_unlock_irq(&ffs->ev.waitq.lock); 496 496 mutex_unlock(&ffs->mutex); 497 497 498 - return unlikely(copy_to_user(buf, events, size)) ? -EFAULT : size; 498 + return copy_to_user(buf, events, size) ? -EFAULT : size; 499 499 } 500 500 501 501 static ssize_t ffs_ep0_read(struct file *file, char __user *buf, ··· 514 514 515 515 /* Acquire mutex */ 516 516 ret = ffs_mutex_lock(&ffs->mutex, file->f_flags & O_NONBLOCK); 517 - if (unlikely(ret < 0)) 517 + if (ret < 0) 518 518 return ret; 519 519 520 520 /* Check state */ ··· 536 536 537 537 case FFS_NO_SETUP: 538 538 n = len / sizeof(struct usb_functionfs_event); 539 - if (unlikely(!n)) { 539 + if (!n) { 540 540 ret = -EINVAL; 541 541 break; 542 542 } ··· 567 567 568 568 spin_unlock_irq(&ffs->ev.waitq.lock); 569 569 570 - if (likely(len)) { 570 + if (len) { 571 571 data = kmalloc(len, GFP_KERNEL); 572 - if (unlikely(!data)) { 572 + if (!data) { 573 573 ret = -ENOMEM; 574 574 goto done_mutex; 575 575 } ··· 586 586 587 587 /* unlocks spinlock */ 588 588 ret = __ffs_ep0_queue_wait(ffs, data, len); 589 - if (likely(ret > 0) && unlikely(copy_to_user(buf, data, len))) 589 + if ((ret > 0) && (copy_to_user(buf, data, len))) 590 590 ret = -EFAULT; 591 591 goto done_mutex; 592 592 ··· 608 608 609 609 ENTER(); 610 610 611 - if (unlikely(ffs->state == FFS_CLOSING)) 611 + if (ffs->state == FFS_CLOSING) 612 612 return -EBUSY; 613 613 614 614 file->private_data = ffs; ··· 657 657 poll_wait(file, &ffs->ev.waitq, wait); 658 658 659 659 ret = ffs_mutex_lock(&ffs->mutex, file->f_flags & O_NONBLOCK); 660 - if (unlikely(ret < 0)) 660 + if (ret < 0) 661 661 return mask; 662 662 663 663 switch (ffs->state) { ··· 678 678 mask |= (EPOLLIN | EPOLLOUT); 679 679 break; 680 680 } 681 + break; 682 + 681 683 case FFS_CLOSING: 682 684 break; 683 685 case FFS_DEACTIVATED: ··· 708 706 static void ffs_epfile_io_complete(struct usb_ep *_ep, struct usb_request *req) 709 707 { 710 708 ENTER(); 711 - if (likely(req->context)) { 709 + if (req->context) { 712 710 struct ffs_ep *ep = _ep->driver_data; 713 711 ep->status = req->status ? req->status : req->actual; 714 712 complete(req->context); ··· 718 716 static ssize_t ffs_copy_to_iter(void *data, int data_len, struct iov_iter *iter) 719 717 { 720 718 ssize_t ret = copy_to_iter(data, data_len, iter); 721 - if (likely(ret == data_len)) 719 + if (ret == data_len) 722 720 return ret; 723 721 724 - if (unlikely(iov_iter_count(iter))) 722 + if (iov_iter_count(iter)) 725 723 return -EFAULT; 726 724 727 725 /* ··· 887 885 return ret; 888 886 } 889 887 890 - if (unlikely(iov_iter_count(iter))) { 888 + if (iov_iter_count(iter)) { 891 889 ret = -EFAULT; 892 890 } else { 893 891 buf->length -= ret; ··· 908 906 struct ffs_buffer *buf; 909 907 910 908 ssize_t ret = copy_to_iter(data, data_len, iter); 911 - if (likely(data_len == ret)) 909 + if (data_len == ret) 912 910 return ret; 913 911 914 - if (unlikely(iov_iter_count(iter))) 912 + if (iov_iter_count(iter)) 915 913 return -EFAULT; 916 914 917 915 /* See ffs_copy_to_iter for more context. */ ··· 932 930 * in struct ffs_epfile for full read_buffer pointer synchronisation 933 931 * story.
934 932 */ 935 - if (unlikely(cmpxchg(&epfile->read_buffer, NULL, buf))) 933 + if (cmpxchg(&epfile->read_buffer, NULL, buf)) 936 934 kfree(buf); 937 935 938 936 return ret; ··· 970 968 971 969 /* We will be using request and read_buffer */ 972 970 ret = ffs_mutex_lock(&epfile->mutex, file->f_flags & O_NONBLOCK); 973 - if (unlikely(ret)) 971 + if (ret) 974 972 goto error; 975 973 976 974 /* Allocate & copy */ ··· 1015 1013 spin_unlock_irq(&epfile->ffs->eps_lock); 1016 1014 1017 1015 data = ffs_alloc_buffer(io_data, data_len); 1018 - if (unlikely(!data)) { 1016 + if (!data) { 1019 1017 ret = -ENOMEM; 1020 1018 goto error_mutex; 1021 1019 } ··· 1035 1033 ret = usb_ep_set_halt(ep->ep); 1036 1034 if (!ret) 1037 1035 ret = -EBADMSG; 1038 - } else if (unlikely(data_len == -EINVAL)) { 1036 + } else if (data_len == -EINVAL) { 1039 1037 /* 1040 1038 * Sanity Check: even though data_len can't be used 1041 1039 * uninitialized at the time I write this comment, some ··· 1070 1068 req->complete = ffs_epfile_io_complete; 1071 1069 1072 1070 ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC); 1073 - if (unlikely(ret < 0)) 1071 + if (ret < 0) 1074 1072 goto error_lock; 1075 1073 1076 1074 spin_unlock_irq(&epfile->ffs->eps_lock); 1077 1075 1078 - if (unlikely(wait_for_completion_interruptible(&done))) { 1076 + if (wait_for_completion_interruptible(&done)) { 1079 1077 /* 1080 1078 * To avoid race condition with ffs_epfile_io_complete, 1081 1079 * dequeue the request first then check ··· 1117 1115 req->complete = ffs_epfile_async_io_complete; 1118 1116 1119 1117 ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC); 1120 - if (unlikely(ret)) { 1118 + if (ret) { 1121 1119 io_data->req = NULL; 1122 1120 usb_ep_free_request(ep->ep, req); 1123 1121 goto error_lock; ··· 1168 1166 1169 1167 spin_lock_irqsave(&epfile->ffs->eps_lock, flags); 1170 1168 1171 - if (likely(io_data && io_data->ep && io_data->req)) 1169 + if (io_data && io_data->ep && io_data->req) 1172 1170 value = usb_ep_dequeue(io_data->ep, io_data->req); 1173 1171 else 1174 1172 value = -EINVAL; ··· 1187 1185 1188 1186 if (!is_sync_kiocb(kiocb)) { 1189 1187 p = kzalloc(sizeof(io_data), GFP_KERNEL); 1190 - if (unlikely(!p)) 1188 + if (!p) 1191 1189 return -ENOMEM; 1192 1190 p->aio = true; 1193 1191 } else { ··· 1224 1222 1225 1223 if (!is_sync_kiocb(kiocb)) { 1226 1224 p = kzalloc(sizeof(io_data), GFP_KERNEL); 1227 - if (unlikely(!p)) 1225 + if (!p) 1228 1226 return -ENOMEM; 1229 1227 p->aio = true; 1230 1228 } else { ··· 1330 1328 1331 1329 switch (epfile->ffs->gadget->speed) { 1332 1330 case USB_SPEED_SUPER: 1331 + case USB_SPEED_SUPER_PLUS: 1333 1332 desc_idx = 2; 1334 1333 break; 1335 1334 case USB_SPEED_HIGH: ··· 1388 1385 1389 1386 inode = new_inode(sb); 1390 1387 1391 - if (likely(inode)) { 1388 + if (inode) { 1392 1389 struct timespec64 ts = current_time(inode); 1393 1390 1394 1391 inode->i_ino = get_next_ino(); ··· 1420 1417 ENTER(); 1421 1418 1422 1419 dentry = d_alloc_name(sb->s_root, name); 1423 - if (unlikely(!dentry)) 1420 + if (!dentry) 1424 1421 return NULL; 1425 1422 1426 1423 inode = ffs_sb_make_inode(sb, data, fops, NULL, &ffs->file_perms); 1427 - if (unlikely(!inode)) { 1424 + if (!inode) { 1428 1425 dput(dentry); 1429 1426 return NULL; 1430 1427 } ··· 1471 1468 &simple_dir_inode_operations, 1472 1469 &data->perms); 1473 1470 sb->s_root = d_make_root(inode); 1474 - if (unlikely(!sb->s_root)) 1471 + if (!sb->s_root) 1475 1472 return -ENOMEM; 1476 1473 1477 1474 /* EP0 file */ 1478 - if (unlikely(!ffs_sb_create_file(sb, "ep0", ffs, 1479 - &ffs_ep0_operations))) 1475 + if (!ffs_sb_create_file(sb, "ep0", ffs, &ffs_ep0_operations)) 1480 1476 return -ENOMEM; 1481 1477 1482 1478 return 0; ··· 1563 1561 return invalf(fc, "No source specified"); 1564 1562 1565 1563 ffs = ffs_data_new(fc->source); 1566 - if (unlikely(!ffs)) 1564 + if (!ffs) 1567 1565 return -ENOMEM; 1568 1566 ffs->file_perms = ctx->perms; 1569 1567 ffs->no_disconnect = ctx->no_disconnect; 1570 1568 1571 1569 ffs->dev_name =
kstrdup(fc->source, GFP_KERNEL); 1572 - if (unlikely(!ffs->dev_name)) { 1570 + if (!ffs->dev_name) { 1573 1571 ffs_data_put(ffs); 1574 1572 return -ENOMEM; 1575 1573 } ··· 1655 1653 ENTER(); 1656 1654 1657 1655 ret = register_filesystem(&ffs_fs_type); 1658 - if (likely(!ret)) 1656 + if (!ret) 1659 1657 pr_info("file system registered\n"); 1660 1658 else 1661 1659 pr_err("failed registering file system (%d)\n", ret); ··· 1700 1698 { 1701 1699 ENTER(); 1702 1700 1703 - if (unlikely(refcount_dec_and_test(&ffs->ref))) { 1701 + if (refcount_dec_and_test(&ffs->ref)) { 1704 1702 pr_info("%s(): freeing\n", __func__); 1705 1703 ffs_data_clear(ffs); 1706 1704 BUG_ON(waitqueue_active(&ffs->ev.waitq) || ··· 1742 1740 static struct ffs_data *ffs_data_new(const char *dev_name) 1743 1741 { 1744 1742 struct ffs_data *ffs = kzalloc(sizeof *ffs, GFP_KERNEL); 1745 - if (unlikely(!ffs)) 1743 + if (!ffs) 1746 1744 return NULL; 1747 1745 1748 1746 ENTER(); ··· 1832 1830 return -EBADFD; 1833 1831 1834 1832 first_id = usb_string_ids_n(cdev, ffs->strings_count); 1835 - if (unlikely(first_id < 0)) 1833 + if (first_id < 0) 1836 1834 return first_id; 1837 1835 1838 1836 ffs->ep0req = usb_ep_alloc_request(cdev->gadget->ep0, GFP_KERNEL); 1839 - if (unlikely(!ffs->ep0req)) 1837 + if (!ffs->ep0req) 1840 1838 return -ENOMEM; 1841 1839 ffs->ep0req->complete = ffs_ep0_complete; 1842 1840 ffs->ep0req->context = ffs; ··· 1892 1890 epfile->dentry = ffs_sb_create_file(ffs->sb, epfile->name, 1893 1891 epfile, 1894 1892 &ffs_epfile_operations); 1895 - if (unlikely(!epfile->dentry)) { 1893 + if (!epfile->dentry) { 1896 1894 ffs_epfiles_destroy(epfiles, i - 1); 1897 1895 return -ENOMEM; 1898 1896 } ··· 1930 1928 spin_lock_irqsave(&func->ffs->eps_lock, flags); 1931 1929 while (count--) { 1932 1930 /* pending requests get nuked */ 1933 - if (likely(ep->ep)) 1931 + if (ep->ep) 1934 1932 usb_ep_disable(ep->ep); 1935 1933 ++ep; 1936 1934 ··· 1964 1962 } 1965 1963 1966 1964 ret = usb_ep_enable(ep->ep); 1967 - if 
(likely(!ret)) { 1965 + if (!ret) { 1968 1966 epfile->ep = ep; 1969 1967 epfile->in = usb_endpoint_dir_in(ep->ep->desc); 1970 1968 epfile->isoc = usb_endpoint_xfer_isoc(ep->ep->desc); ··· 2037 2035 #define __entity_check_ENDPOINT(val) ((val) & USB_ENDPOINT_NUMBER_MASK) 2038 2036 #define __entity(type, val) do { \ 2039 2037 pr_vdebug("entity " #type "(%02x)\n", (val)); \ 2040 - if (unlikely(!__entity_check_ ##type(val))) { \ 2038 + if (!__entity_check_ ##type(val)) { \ 2041 2039 pr_vdebug("invalid entity's value\n"); \ 2042 2040 return -EINVAL; \ 2043 2041 } \ 2044 2042 ret = entity(FFS_ ##type, &val, _ds, priv); \ 2045 - if (unlikely(ret < 0)) { \ 2043 + if (ret < 0) { \ 2046 2044 pr_debug("entity " #type "(%02x); ret = %d\n", \ 2047 2045 (val), ret); \ 2048 2046 return ret; \ ··· 2167 2165 2168 2166 /* Record "descriptor" entity */ 2169 2167 ret = entity(FFS_DESCRIPTOR, (u8 *)num, (void *)data, priv); 2170 - if (unlikely(ret < 0)) { 2168 + if (ret < 0) { 2171 2169 pr_debug("entity DESCRIPTOR(%02lx); ret = %d\n", 2172 2170 num, ret); 2173 2171 return ret; ··· 2178 2176 2179 2177 ret = ffs_do_single_desc(data, len, entity, priv, 2180 2178 &current_class); 2181 - if (unlikely(ret < 0)) { 2179 + if (ret < 0) { 2182 2180 pr_debug("%s returns %d\n", __func__, ret); 2183 2181 return ret; 2184 2182 } ··· 2284 2282 /* loop over all ext compat/ext prop descriptors */ 2285 2283 while (feature_count--) { 2286 2284 ret = entity(type, h, data, len, priv); 2287 - if (unlikely(ret < 0)) { 2285 + if (ret < 0) { 2288 2286 pr_debug("bad OS descriptor, type: %d\n", type); 2289 2287 return ret; 2290 2288 } ··· 2324 2322 return -EINVAL; 2325 2323 2326 2324 ret = __ffs_do_os_desc_header(&type, desc); 2327 - if (unlikely(ret < 0)) { 2325 + if (ret < 0) { 2328 2326 pr_debug("entity OS_DESCRIPTOR(%02lx); ret = %d\n", 2329 2327 num, ret); 2330 2328 return ret; ··· 2345 2343 */ 2346 2344 ret = ffs_do_single_os_desc(data, len, type, 2347 2345 feature_count, entity, priv, desc); 2348 - if 
(unlikely(ret < 0)) { 2346 + if (ret < 0) { 2349 2347 pr_debug("%s returns %d\n", __func__, ret); 2350 2348 return ret; 2351 2349 } ··· 2577 2575 2578 2576 ENTER(); 2579 2577 2580 - if (unlikely(len < 16 || 2581 - get_unaligned_le32(data) != FUNCTIONFS_STRINGS_MAGIC || 2582 - get_unaligned_le32(data + 4) != len)) 2578 + if (len < 16 || 2579 + get_unaligned_le32(data) != FUNCTIONFS_STRINGS_MAGIC || 2580 + get_unaligned_le32(data + 4) != len) 2583 2581 goto error; 2584 2582 str_count = get_unaligned_le32(data + 8); 2585 2583 lang_count = get_unaligned_le32(data + 12); 2586 2584 2587 2585 /* if one is zero the other must be zero */ 2588 - if (unlikely(!str_count != !lang_count)) 2586 + if (!str_count != !lang_count) 2589 2587 goto error; 2590 2588 2591 2589 /* Do we have at least as many strings as descriptors need? */ 2592 2590 needed_count = ffs->strings_count; 2593 - if (unlikely(str_count < needed_count)) 2591 + if (str_count < needed_count) 2594 2592 goto error; 2595 2593 2596 2594 /* ··· 2614 2612 2615 2613 char *vlabuf = kmalloc(vla_group_size(d), GFP_KERNEL); 2616 2614 2617 - if (unlikely(!vlabuf)) { 2615 + if (!vlabuf) { 2618 2616 kfree(_data); 2619 2617 return -ENOMEM; 2620 2618 } ··· 2641 2639 do { /* lang_count > 0 so we can use do-while */ 2642 2640 unsigned needed = needed_count; 2643 2641 2644 - if (unlikely(len < 3)) 2642 + if (len < 3) 2645 2643 goto error_free; 2646 2644 t->language = get_unaligned_le16(data); 2647 2645 t->strings = s; ··· 2654 2652 do { /* str_count > 0 so we can use do-while */ 2655 2653 size_t length = strnlen(data, len); 2656 2654 2657 - if (unlikely(length == len)) 2655 + if (length == len) 2658 2656 goto error_free; 2659 2657 2660 2658 /* ··· 2662 2660 * if that's the case we simply ignore the 2663 2661 * rest 2664 2662 */ 2665 - if (likely(needed)) { 2663 + if (needed) { 2666 2664 /* 2667 2665 * s->id will be set while adding 2668 2666 * function to configuration so for ··· 2684 2682 } while (--lang_count); 2685 2683 2686 2684 
/* Some garbage left? */ 2687 - if (unlikely(len)) 2685 + if (len) 2688 2686 goto error_free; 2689 2687 2690 2688 /* Done! */ ··· 2831 2829 2832 2830 ffs_ep = func->eps + idx; 2833 2831 2834 - if (unlikely(ffs_ep->descs[ep_desc_id])) { 2832 + if (ffs_ep->descs[ep_desc_id]) { 2835 2833 pr_err("two %sspeed descriptors for EP %d\n", 2836 2834 speed_names[ep_desc_id], 2837 2835 ds->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK); ··· 2862 2860 wMaxPacketSize = ds->wMaxPacketSize; 2863 2861 pr_vdebug("autoconfig\n"); 2864 2862 ep = usb_ep_autoconfig(func->gadget, ds); 2865 - if (unlikely(!ep)) 2863 + if (!ep) 2866 2864 return -ENOTSUPP; 2867 2865 ep->driver_data = func->eps + idx; 2868 2866 2869 2867 req = usb_ep_alloc_request(ep, GFP_KERNEL); 2870 - if (unlikely(!req)) 2868 + if (!req) 2871 2869 return -ENOMEM; 2872 2870 2873 2871 ffs_ep->ep = ep; ··· 2909 2907 idx = *valuep; 2910 2908 if (func->interfaces_nums[idx] < 0) { 2911 2909 int id = usb_interface_id(func->conf, &func->function); 2912 - if (unlikely(id < 0)) 2910 + if (id < 0) 2913 2911 return id; 2914 2912 func->interfaces_nums[idx] = id; 2915 2913 } ··· 2930 2928 return 0; 2931 2929 2932 2930 idx = (*valuep & USB_ENDPOINT_NUMBER_MASK) - 1; 2933 - if (unlikely(!func->eps[idx].ep)) 2931 + if (!func->eps[idx].ep) 2934 2932 return -EINVAL; 2935 2933 2936 2934 { ··· 3113 3111 ENTER(); 3114 3112 3115 3113 /* Has descriptors only for speeds gadget does not support */ 3116 - if (unlikely(!(full | high | super))) 3114 + if (!(full | high | super)) 3117 3115 return -ENOTSUPP; 3118 3116 3119 3117 /* Allocate a single chunk, less management later on */ 3120 3118 vlabuf = kzalloc(vla_group_size(d), GFP_KERNEL); 3121 - if (unlikely(!vlabuf)) 3119 + if (!vlabuf) 3122 3120 return -ENOMEM; 3123 3121 3124 3122 ffs->ms_os_descs_ext_prop_avail = vla_ptr(vlabuf, d, ext_prop); ··· 3147 3145 * endpoints first, so that later we can rewrite the endpoint 3148 3146 * numbers without worrying that it may be described later on. 
3149 3147 */ 3150 - if (likely(full)) { 3148 + if (full) { 3151 3149 func->function.fs_descriptors = vla_ptr(vlabuf, d, fs_descs); 3152 3150 fs_len = ffs_do_descs(ffs->fs_descs_count, 3153 3151 vla_ptr(vlabuf, d, raw_descs), 3154 3152 d_raw_descs__sz, 3155 3153 __ffs_func_bind_do_descs, func); 3156 - if (unlikely(fs_len < 0)) { 3154 + if (fs_len < 0) { 3157 3155 ret = fs_len; 3158 3156 goto error; 3159 3157 } ··· 3161 3159 fs_len = 0; 3162 3160 } 3163 3161 3164 - if (likely(high)) { 3162 + if (high) { 3165 3163 func->function.hs_descriptors = vla_ptr(vlabuf, d, hs_descs); 3166 3164 hs_len = ffs_do_descs(ffs->hs_descs_count, 3167 3165 vla_ptr(vlabuf, d, raw_descs) + fs_len, 3168 3166 d_raw_descs__sz - fs_len, 3169 3167 __ffs_func_bind_do_descs, func); 3170 - if (unlikely(hs_len < 0)) { 3168 + if (hs_len < 0) { 3171 3169 ret = hs_len; 3172 3170 goto error; 3173 3171 } ··· 3175 3173 hs_len = 0; 3176 3174 } 3177 3175 3178 - if (likely(super)) { 3179 - func->function.ss_descriptors = vla_ptr(vlabuf, d, ss_descs); 3176 + if (super) { 3177 + func->function.ss_descriptors = func->function.ssp_descriptors = 3178 + vla_ptr(vlabuf, d, ss_descs); 3180 3179 ss_len = ffs_do_descs(ffs->ss_descs_count, 3181 3180 vla_ptr(vlabuf, d, raw_descs) + fs_len + hs_len, 3182 3181 d_raw_descs__sz - fs_len - hs_len, 3183 3182 __ffs_func_bind_do_descs, func); 3184 - if (unlikely(ss_len < 0)) { 3183 + if (ss_len < 0) { 3185 3184 ret = ss_len; 3186 3185 goto error; 3187 3186 } ··· 3200 3197 (super ? 
ffs->ss_descs_count : 0), 3201 3198 vla_ptr(vlabuf, d, raw_descs), d_raw_descs__sz, 3202 3199 __ffs_func_bind_do_nums, func); 3203 - if (unlikely(ret < 0)) 3200 + if (ret < 0) 3204 3201 goto error; 3205 3202 3206 3203 func->function.os_desc_table = vla_ptr(vlabuf, d, os_desc_table); ··· 3221 3218 d_raw_descs__sz - fs_len - hs_len - 3222 3219 ss_len, 3223 3220 __ffs_func_bind_do_os_desc, func); 3224 - if (unlikely(ret < 0)) 3221 + if (ret < 0) 3225 3222 goto error; 3226 3223 } 3227 3224 func->function.os_desc_n = ··· 3272 3269 3273 3270 if (alt != (unsigned)-1) { 3274 3271 intf = ffs_func_revmap_intf(func, interface); 3275 - if (unlikely(intf < 0)) 3272 + if (intf < 0) 3276 3273 return intf; 3277 3274 } 3278 3275 ··· 3297 3294 3298 3295 ffs->func = func; 3299 3296 ret = ffs_func_eps_enable(func); 3300 - if (likely(ret >= 0)) 3297 + if (ret >= 0) 3301 3298 ffs_event_add(ffs, FUNCTIONFS_ENABLE); 3302 3299 return ret; 3303 3300 } ··· 3339 3336 switch (creq->bRequestType & USB_RECIP_MASK) { 3340 3337 case USB_RECIP_INTERFACE: 3341 3338 ret = ffs_func_revmap_intf(func, le16_to_cpu(creq->wIndex)); 3342 - if (unlikely(ret < 0)) 3339 + if (ret < 0) 3343 3340 return ret; 3344 3341 break; 3345 3342 3346 3343 case USB_RECIP_ENDPOINT: 3347 3344 ret = ffs_func_revmap_ep(func, le16_to_cpu(creq->wIndex)); 3348 - if (unlikely(ret < 0)) 3345 + if (ret < 0) 3349 3346 return ret; 3350 3347 if (func->ffs->user_flags & FUNCTIONFS_VIRTUAL_ADDR) 3351 3348 ret = func->ffs->eps_addrmap[ret]; ··· 3587 3584 func->function.fs_descriptors = NULL; 3588 3585 func->function.hs_descriptors = NULL; 3589 3586 func->function.ss_descriptors = NULL; 3587 + func->function.ssp_descriptors = NULL; 3590 3588 func->interfaces_nums = NULL; 3591 3589 3592 3590 ffs_event_add(ffs, FUNCTIONFS_UNBIND); ··· 3600 3596 ENTER(); 3601 3597 3602 3598 func = kzalloc(sizeof(*func), GFP_KERNEL); 3603 - if (unlikely(!func)) 3599 + if (!func) 3604 3600 return ERR_PTR(-ENOMEM); 3605 3601 3606 3602 func->function.name = 
"Function FS Gadget"; ··· 3815 3811 static int ffs_mutex_lock(struct mutex *mutex, unsigned nonblock) 3816 3812 { 3817 3813 return nonblock 3818 - ? likely(mutex_trylock(mutex)) ? 0 : -EAGAIN 3814 + ? mutex_trylock(mutex) ? 0 : -EAGAIN 3819 3815 : mutex_lock_interruptible(mutex); 3820 3816 } 3821 3817 ··· 3823 3819 { 3824 3820 char *data; 3825 3821 3826 - if (unlikely(!len)) 3822 + if (!len) 3827 3823 return NULL; 3828 3824 3829 3825 data = kmalloc(len, GFP_KERNEL); 3830 - if (unlikely(!data)) 3826 + if (!data) 3831 3827 return ERR_PTR(-ENOMEM); 3832 3828 3833 - if (unlikely(copy_from_user(data, buf, len))) { 3829 + if (copy_from_user(data, buf, len)) { 3834 3830 kfree(data); 3835 3831 return ERR_PTR(-EFAULT); 3836 3832 }
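The f_fs.c hunks above strip `likely()`/`unlikely()` annotations from paths that are not performance-critical. A minimal sketch of why this is behaviour-preserving: the kernel macros (in `<linux/compiler.h>`) wrap `__builtin_expect`, which only hints the compiler's branch layout and never changes the value of the condition. The helper names below are hypothetical, for illustration only.

```c
#include <assert.h>

/* Simplified stand-ins for the kernel's branch-prediction hints.
 * They steer block placement in the generated code; the boolean
 * result of the condition is unchanged. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* The same check written with and without the hint returns the same
 * result, which is why the diff can drop the annotations freely. */
static int check_len_hinted(int ret, int want)
{
	if (unlikely(ret != want))
		return -1;
	return 0;
}

static int check_len_plain(int ret, int want)
{
	if (ret != want)
		return -1;
	return 0;
}
```

Dropping the hints on cold paths also improves readability without measurable cost, since these are not hot loops.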
+1 -1
drivers/usb/gadget/function/f_loopback.c
··· 274 274 default: 275 275 ERROR(cdev, "%s loop complete --> %d, %d/%d\n", ep->name, 276 276 status, req->actual, req->length); 277 - /* FALLTHROUGH */ 277 + fallthrough; 278 278 279 279 /* NOTE: since this driver doesn't maintain an explicit record 280 280 * of requests it submitted (just maintains qlen count), we
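Several hunks in this pull (here and in ehci-hcd.c, fotg210-hcd.c, dummy_hcd.c) replace `/* FALLTHROUGH */` comments with the `fallthrough;` pseudo-keyword, which the compiler can actually verify under `-Wimplicit-fallthrough`. A self-contained approximation of the kernel's definition from `<linux/compiler_attributes.h>` (the fallback branch is an assumption for older compilers):

```c
#include <assert.h>

/* Approximation of the kernel's fallthrough pseudo-keyword; on
 * compilers without the attribute it degrades to a no-op statement. */
#ifndef __has_attribute
# define __has_attribute(x) 0
#endif
#if __has_attribute(__fallthrough__)
# define fallthrough __attribute__((__fallthrough__))
#else
# define fallthrough do {} while (0)
#endif

/* Deliberate fall-through from case 1 into case 2, now visible to the
 * compiler instead of hidden in a comment it cannot check. */
static int classify(int status)
{
	int handled = 0;

	switch (status) {
	case 1:
		handled++;
		fallthrough;
	case 2:
		handled++;
		break;
	default:
		break;
	}
	return handled;
}
```

Unlike a comment, a stray `fallthrough;` that is not immediately followed by a case label is itself a compile error, so the annotation cannot silently rot.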
+6
drivers/usb/gadget/function/f_midi.c
··· 1048 1048 f->ss_descriptors = usb_copy_descriptors(midi_function); 1049 1049 if (!f->ss_descriptors) 1050 1050 goto fail_f_midi; 1051 + 1052 + if (gadget_is_superspeed_plus(c->cdev->gadget)) { 1053 + f->ssp_descriptors = usb_copy_descriptors(midi_function); 1054 + if (!f->ssp_descriptors) 1055 + goto fail_f_midi; 1056 + } 1051 1057 } 1052 1058 1053 1059 kfree(midi_function);
+3 -1
drivers/usb/gadget/function/f_rndis.c
··· 87 87 /* peak (theoretical) bulk transfer rate in bits-per-second */ 88 88 static unsigned int bitrate(struct usb_gadget *g) 89 89 { 90 + if (gadget_is_superspeed(g) && g->speed >= USB_SPEED_SUPER_PLUS) 91 + return 4250000000U; 90 92 if (gadget_is_superspeed(g) && g->speed == USB_SPEED_SUPER) 91 - return 13 * 1024 * 8 * 1000 * 8; 93 + return 3750000000U; 92 94 else if (gadget_is_dualspeed(g) && g->speed == USB_SPEED_HIGH) 93 95 return 13 * 512 * 8 * 1000 * 8; 94 96 else
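The f_rndis fix above replaces a SuperSpeed constant that was far too low. Spelled out, the old value reused the high-speed formula (13 bulk packets per 125 µs microframe, 8000 microframes/s, 8 bits per byte) with 1024-byte SuperSpeed packets substituted in, which still yields under a gigabit; the patch switches to flat estimates of peak bulk throughput (3 750 000 000 for SuperSpeed, 4 250 000 000 for SuperSpeed Plus). The arithmetic, as a sketch:

```c
#include <assert.h>
#include <stdint.h>

/* High-speed peak bulk rate, as in the unchanged branch of bitrate():
 * 13 packets * 512 bytes per microframe * 8000 microframes/s * 8 bits. */
static uint64_t hs_bitrate(void)
{
	return 13ULL * 512 * 8 * 1000 * 8;	/* = 425 984 000 b/s */
}

/* The old SuperSpeed constant simply swapped in 1024-byte packets but
 * kept the 8000-microframes/s assumption, giving roughly 852 Mb/s --
 * nowhere near SuperSpeed's 5 Gb/s signaling rate. */
static uint64_t old_ss_bitrate(void)
{
	return 13ULL * 1024 * 8 * 1000 * 8;	/* = 851 968 000 b/s */
}
```

The exact derivation of the new 3.75/4.25 Gb/s figures is not shown in the diff; they are the patch's estimates of theoretical peak bulk throughput after link encoding and protocol overhead.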
+1
drivers/usb/gadget/function/f_sourcesink.c
··· 559 559 #if 1 560 560 DBG(cdev, "%s complete --> %d, %d/%d\n", ep->name, 561 561 status, req->actual, req->length); 562 + break; 562 563 #endif 563 564 case -EREMOTEIO: /* short read */ 564 565 break;
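The f_sourcesink one-liner fixes an accidental fall-through: when the `#if 1` debug block was compiled in, the case ended without a `break` and control fell into the `-EREMOTEIO` case below. A hypothetical sketch of the fixed control flow (names invented for illustration):

```c
#include <assert.h>

/* The fix adds a break inside the conditionally-compiled block, so the
 * status-0 case no longer falls through into the short-read case. */
static int count_handlers(int status, int verbose)
{
	int ran = 0;

	switch (status) {
	case 0:
#if 1
		if (verbose)
			ran++;		/* stands in for the DBG() call */
		break;			/* the fix: stop here */
#endif
	case -1:			/* short read */
		ran++;
		break;
	default:
		break;
	}
	return ran;
}
```

Bugs of exactly this shape are why the same series converts intentional fall-throughs to the checkable `fallthrough;` statement elsewhere.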
-2
drivers/usb/gadget/udc/core.c
··· 897 897 * @ep: the endpoint to be used with with the request 898 898 * @req: the request being given back 899 899 * 900 - * Context: in_interrupt() 901 - * 902 900 * This is called by device controller drivers in order to return the 903 901 * completed request back to the gadget layer. 904 902 */
+7 -3
drivers/usb/gadget/udc/dummy_hcd.c
··· 553 553 /* we'll fake any legal size */ 554 554 break; 555 555 /* save a return statement */ 556 + fallthrough; 556 557 default: 557 558 goto done; 558 559 } ··· 596 595 if (max <= 1023) 597 596 break; 598 597 /* save a return statement */ 598 + fallthrough; 599 599 default: 600 600 goto done; 601 601 } ··· 1756 1754 return ret_val; 1757 1755 } 1758 1756 1759 - /* drive both sides of the transfers; looks like irq handlers to 1760 - * both drivers except the callbacks aren't in_irq(). 1757 + /* 1758 + * Drive both sides of the transfers; looks like irq handlers to both 1759 + * drivers except that the callbacks are invoked from soft interrupt 1760 + * context. 1761 1761 */ 1762 1762 static void dummy_timer(struct timer_list *t) 1763 1763 { ··· 2738 2734 { 2739 2735 int retval = -ENOMEM; 2740 2736 int i; 2741 - struct dummy *dum[MAX_NUM_UDC]; 2737 + struct dummy *dum[MAX_NUM_UDC] = {}; 2742 2738 2743 2739 if (usb_disabled()) 2744 2740 return -ENODEV;
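The last dummy_hcd hunk changes `struct dummy *dum[MAX_NUM_UDC];` to `... = {};`, zero-initializing the array so that cleanup code can safely treat never-filled slots as NULL. A small sketch of the pattern, assuming invented helper names and an illustrative array size:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define MAX_NUM_UDC 32	/* illustrative value, not the driver's */

/* Fills slots until n or until a simulated failure at fail_at; on
 * failure the remaining slots are never written. */
static int alloc_some(void **slots, size_t n, size_t fail_at)
{
	for (size_t i = 0; i < n; i++) {
		if (i == fail_at)
			return -1;	/* simulate a probe failure */
		slots[i] = malloc(16);
		if (!slots[i])
			return -1;
	}
	return 0;
}

/* Because every slot started out NULL, the unwind path can free
 * unconditionally: free(NULL) is defined to be a no-op. Without the
 * initializer the tail of the array would be indeterminate. */
static void unwind_all(void **slots, size_t n)
{
	for (size_t i = 0; i < n; i++)
		free(slots[i]);
}
```

The kernel uses the GNU `= {}` spelling; in portable C before C23 the equivalent is `= {0}`.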
+10 -9
drivers/usb/gadget/udc/pxa27x_udc.c
··· 304 304 * update_pxa_ep_matches - update pxa_ep cached values in all udc_usb_ep 305 305 * @udc: pxa udc 306 306 * 307 - * Context: in_interrupt() 307 + * Context: interrupt handler 308 308 * 309 309 * Updates all pxa_ep fields in udc_usb_ep structures, if this field was 310 310 * previously set up (and is not NULL). The update is necessary is a ··· 859 859 * @ep: pxa physical endpoint 860 860 * @req: usb request 861 861 * 862 - * Context: callable when in_interrupt() 862 + * Context: interrupt handler 863 863 * 864 864 * Unload as many packets as possible from the fifo we use for usb OUT 865 865 * transfers and put them into the request. Caller should have made sure ··· 997 997 * @ep: control endpoint 998 998 * @req: request 999 999 * 1000 - * Context: callable when in_interrupt() 1000 + * Context: interrupt handler 1001 1001 * 1002 1002 * Sends a request (or a part of the request) to the control endpoint (ep0 in). 1003 1003 * If the request doesn't fit, the remaining part will be sent from irq. 
··· 1036 1036 * @_req: usb request 1037 1037 * @gfp_flags: flags 1038 1038 * 1039 - * Context: normally called when !in_interrupt, but callable when in_interrupt() 1040 - * in the special case of ep0 setup : 1039 + * Context: thread context or from the interrupt handler in the 1040 + * special case of ep0 setup : 1041 1041 * (irq->handle_ep0_ctrl_req->gadget_setup->pxa_ep_queue) 1042 1042 * 1043 1043 * Returns 0 if succedeed, error otherwise ··· 1512 1512 * pxa_udc_pullup - Offer manual D+ pullup control 1513 1513 * @_gadget: usb gadget using the control 1514 1514 * @is_active: 0 if disconnect, else connect D+ pullup resistor 1515 - * Context: !in_interrupt() 1515 + * 1516 + * Context: task context, might sleep 1516 1517 * 1517 1518 * Returns 0 if OK, -EOPNOTSUPP if udc driver doesn't handle D+ pullup 1518 1519 */ ··· 1561 1560 * @_gadget: usb gadget 1562 1561 * @mA: current drawn 1563 1562 * 1564 - * Context: !in_interrupt() 1563 + * Context: task context, might sleep 1565 1564 * 1566 1565 * Called after a configuration was chosen by a USB host, to inform how much 1567 1566 * current can be drawn by the device from VBus line. ··· 1887 1886 * @fifo_irq: 1 if triggered by fifo service type irq 1888 1887 * @opc_irq: 1 if triggered by output packet complete type irq 1889 1888 * 1890 - * Context : when in_interrupt() or with ep->lock held 1889 + * Context : interrupt handler 1891 1890 * 1892 1891 * Tries to transfer all pending request data into the endpoint and/or 1893 1892 * transfer all pending data in the endpoint into usb requests. ··· 2012 2011 * Tries to transfer all pending request data into the endpoint and/or 2013 2012 * transfer all pending data in the endpoint into usb requests. 2014 2013 * 2015 - * Is always called when in_interrupt() and with ep->lock released. 2014 + * Is always called from the interrupt handler. ep->lock must not be held. 2016 2015 */ 2017 2016 static void handle_ep(struct pxa_ep *ep) 2018 2017 {
-17
drivers/usb/host/Kconfig
··· 213 213 help 214 214 Variation of ARC USB block used in some Freescale chips. 215 215 216 - config USB_EHCI_MXC 217 - tristate "Support for Freescale i.MX on-chip EHCI USB controller" 218 - depends on ARCH_MXC || COMPILE_TEST 219 - select USB_EHCI_ROOT_HUB_TT 220 - help 221 - Variation of ARC USB block used in some Freescale chips. 222 - 223 216 config USB_EHCI_HCD_NPCM7XX 224 217 tristate "Support for Nuvoton NPCM7XX on-chip EHCI USB controller" 225 218 depends on (USB_EHCI_HCD && ARCH_NPCM7XX) || COMPILE_TEST ··· 733 740 734 741 To compile this driver as a module, choose M here: the 735 742 module will be called renesas-usbhs. 736 - 737 - config USB_IMX21_HCD 738 - tristate "i.MX21 HCD support" 739 - depends on ARM && ARCH_MXC 740 - help 741 - This driver enables support for the on-chip USB host in the 742 - i.MX21 processor. 743 - 744 - To compile this driver as a module, choose M here: the 745 - module will be called "imx21-hcd". 746 743 747 744 config USB_HCD_BCMA 748 745 tristate "BCMA usb host driver"
-2
drivers/usb/host/Makefile
··· 40 40 obj-$(CONFIG_USB_EHCI_HCD) += ehci-hcd.o 41 41 obj-$(CONFIG_USB_EHCI_PCI) += ehci-pci.o 42 42 obj-$(CONFIG_USB_EHCI_HCD_PLATFORM) += ehci-platform.o 43 - obj-$(CONFIG_USB_EHCI_MXC) += ehci-mxc.o 44 43 obj-$(CONFIG_USB_EHCI_HCD_NPCM7XX) += ehci-npcm7xx.o 45 44 obj-$(CONFIG_USB_EHCI_HCD_OMAP) += ehci-omap.o 46 45 obj-$(CONFIG_USB_EHCI_HCD_ORION) += ehci-orion.o ··· 80 81 obj-$(CONFIG_USB_SL811_CS) += sl811_cs.o 81 82 obj-$(CONFIG_USB_U132_HCD) += u132-hcd.o 82 83 obj-$(CONFIG_USB_R8A66597_HCD) += r8a66597-hcd.o 83 - obj-$(CONFIG_USB_IMX21_HCD) += imx21-hcd.o 84 84 obj-$(CONFIG_USB_FSL_USB2) += fsl-mph-dr-of.o 85 85 obj-$(CONFIG_USB_EHCI_FSL) += fsl-mph-dr-of.o 86 86 obj-$(CONFIG_USB_EHCI_FSL) += ehci-fsl.o
+4 -5
drivers/usb/host/ehci-fsl.c
··· 39 39 /* 40 40 * fsl_ehci_drv_probe - initialize FSL-based HCDs 41 41 * @pdev: USB Host Controller being probed 42 - * Context: !in_interrupt() 42 + * 43 + * Context: task context, might sleep 43 44 * 44 45 * Allocates basic resources for this USB host controller. 45 - * 46 46 */ 47 47 static int fsl_ehci_drv_probe(struct platform_device *pdev) 48 48 { ··· 684 684 /** 685 685 * fsl_ehci_drv_remove - shutdown processing for FSL-based HCDs 686 686 * @pdev: USB Host Controller being removed 687 - * Context: !in_interrupt() 687 + * 688 + * Context: task context, might sleep 688 689 * 689 690 * Reverses the effect of usb_hcd_fsl_probe(). 690 - * 691 691 */ 692 - 693 692 static int fsl_ehci_drv_remove(struct platform_device *pdev) 694 693 { 695 694 struct fsl_usb2_platform_data *pdata = dev_get_platdata(&pdev->dev);
+1 -1
drivers/usb/host/ehci-hcd.c
··· 867 867 */ 868 868 if (urb->transfer_buffer_length > (16 * 1024)) 869 869 return -EMSGSIZE; 870 - /* FALLTHROUGH */ 870 + fallthrough; 871 871 /* case PIPE_BULK: */ 872 872 default: 873 873 if (!qh_urb_transaction (ehci, urb, &qtd_list, mem_flags))
-213
drivers/usb/host/ehci-mxc.c
··· 1 - // SPDX-License-Identifier: GPL-2.0+ 2 - /* 3 - * Copyright (c) 2008 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix 4 - * Copyright (c) 2009 Daniel Mack <daniel@caiaq.de> 5 - */ 6 - 7 - #include <linux/kernel.h> 8 - #include <linux/module.h> 9 - #include <linux/io.h> 10 - #include <linux/platform_device.h> 11 - #include <linux/clk.h> 12 - #include <linux/delay.h> 13 - #include <linux/usb/otg.h> 14 - #include <linux/usb/ulpi.h> 15 - #include <linux/slab.h> 16 - #include <linux/usb.h> 17 - #include <linux/usb/hcd.h> 18 - #include <linux/platform_data/usb-ehci-mxc.h> 19 - #include "ehci.h" 20 - 21 - #define DRIVER_DESC "Freescale On-Chip EHCI Host driver" 22 - 23 - static const char hcd_name[] = "ehci-mxc"; 24 - 25 - #define ULPI_VIEWPORT_OFFSET 0x170 26 - 27 - struct ehci_mxc_priv { 28 - struct clk *usbclk, *ahbclk, *phyclk; 29 - }; 30 - 31 - static struct hc_driver __read_mostly ehci_mxc_hc_driver; 32 - 33 - static const struct ehci_driver_overrides ehci_mxc_overrides __initconst = { 34 - .extra_priv_size = sizeof(struct ehci_mxc_priv), 35 - }; 36 - 37 - static int ehci_mxc_drv_probe(struct platform_device *pdev) 38 - { 39 - struct device *dev = &pdev->dev; 40 - struct mxc_usbh_platform_data *pdata = dev_get_platdata(dev); 41 - struct usb_hcd *hcd; 42 - struct resource *res; 43 - int irq, ret; 44 - struct ehci_mxc_priv *priv; 45 - struct ehci_hcd *ehci; 46 - 47 - if (!pdata) { 48 - dev_err(dev, "No platform data given, bailing out.\n"); 49 - return -EINVAL; 50 - } 51 - 52 - irq = platform_get_irq(pdev, 0); 53 - if (irq < 0) 54 - return irq; 55 - 56 - hcd = usb_create_hcd(&ehci_mxc_hc_driver, dev, dev_name(dev)); 57 - if (!hcd) 58 - return -ENOMEM; 59 - 60 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 61 - hcd->regs = devm_ioremap_resource(dev, res); 62 - if (IS_ERR(hcd->regs)) { 63 - ret = PTR_ERR(hcd->regs); 64 - goto err_alloc; 65 - } 66 - hcd->rsrc_start = res->start; 67 - hcd->rsrc_len = resource_size(res); 68 - 69 - hcd->has_tt = 1; 70 - 
ehci = hcd_to_ehci(hcd); 71 - priv = (struct ehci_mxc_priv *) ehci->priv; 72 - 73 - /* enable clocks */ 74 - priv->usbclk = devm_clk_get(dev, "ipg"); 75 - if (IS_ERR(priv->usbclk)) { 76 - ret = PTR_ERR(priv->usbclk); 77 - goto err_alloc; 78 - } 79 - clk_prepare_enable(priv->usbclk); 80 - 81 - priv->ahbclk = devm_clk_get(dev, "ahb"); 82 - if (IS_ERR(priv->ahbclk)) { 83 - ret = PTR_ERR(priv->ahbclk); 84 - goto err_clk_ahb; 85 - } 86 - clk_prepare_enable(priv->ahbclk); 87 - 88 - /* "dr" device has its own clock on i.MX51 */ 89 - priv->phyclk = devm_clk_get(dev, "phy"); 90 - if (IS_ERR(priv->phyclk)) 91 - priv->phyclk = NULL; 92 - if (priv->phyclk) 93 - clk_prepare_enable(priv->phyclk); 94 - 95 - /* call platform specific init function */ 96 - if (pdata->init) { 97 - ret = pdata->init(pdev); 98 - if (ret) { 99 - dev_err(dev, "platform init failed\n"); 100 - goto err_init; 101 - } 102 - /* platforms need some time to settle changed IO settings */ 103 - mdelay(10); 104 - } 105 - 106 - /* EHCI registers start at offset 0x100 */ 107 - ehci->caps = hcd->regs + 0x100; 108 - ehci->regs = hcd->regs + 0x100 + 109 - HC_LENGTH(ehci, ehci_readl(ehci, &ehci->caps->hc_capbase)); 110 - 111 - /* set up the PORTSCx register */ 112 - ehci_writel(ehci, pdata->portsc, &ehci->regs->port_status[0]); 113 - 114 - /* is this really needed? 
*/ 115 - msleep(10); 116 - 117 - /* Initialize the transceiver */ 118 - if (pdata->otg) { 119 - pdata->otg->io_priv = hcd->regs + ULPI_VIEWPORT_OFFSET; 120 - ret = usb_phy_init(pdata->otg); 121 - if (ret) { 122 - dev_err(dev, "unable to init transceiver, probably missing\n"); 123 - ret = -ENODEV; 124 - goto err_add; 125 - } 126 - ret = otg_set_vbus(pdata->otg->otg, 1); 127 - if (ret) { 128 - dev_err(dev, "unable to enable vbus on transceiver\n"); 129 - goto err_add; 130 - } 131 - } 132 - 133 - platform_set_drvdata(pdev, hcd); 134 - 135 - ret = usb_add_hcd(hcd, irq, IRQF_SHARED); 136 - if (ret) 137 - goto err_add; 138 - 139 - device_wakeup_enable(hcd->self.controller); 140 - return 0; 141 - 142 - err_add: 143 - if (pdata && pdata->exit) 144 - pdata->exit(pdev); 145 - err_init: 146 - if (priv->phyclk) 147 - clk_disable_unprepare(priv->phyclk); 148 - 149 - clk_disable_unprepare(priv->ahbclk); 150 - err_clk_ahb: 151 - clk_disable_unprepare(priv->usbclk); 152 - err_alloc: 153 - usb_put_hcd(hcd); 154 - return ret; 155 - } 156 - 157 - static int ehci_mxc_drv_remove(struct platform_device *pdev) 158 - { 159 - struct mxc_usbh_platform_data *pdata = dev_get_platdata(&pdev->dev); 160 - struct usb_hcd *hcd = platform_get_drvdata(pdev); 161 - struct ehci_hcd *ehci = hcd_to_ehci(hcd); 162 - struct ehci_mxc_priv *priv = (struct ehci_mxc_priv *) ehci->priv; 163 - 164 - usb_remove_hcd(hcd); 165 - 166 - if (pdata && pdata->exit) 167 - pdata->exit(pdev); 168 - 169 - if (pdata && pdata->otg) 170 - usb_phy_shutdown(pdata->otg); 171 - 172 - clk_disable_unprepare(priv->usbclk); 173 - clk_disable_unprepare(priv->ahbclk); 174 - 175 - if (priv->phyclk) 176 - clk_disable_unprepare(priv->phyclk); 177 - 178 - usb_put_hcd(hcd); 179 - return 0; 180 - } 181 - 182 - MODULE_ALIAS("platform:mxc-ehci"); 183 - 184 - static struct platform_driver ehci_mxc_driver = { 185 - .probe = ehci_mxc_drv_probe, 186 - .remove = ehci_mxc_drv_remove, 187 - .shutdown = usb_hcd_platform_shutdown, 188 - .driver = { 189 
- .name = "mxc-ehci", 190 - }, 191 - }; 192 - 193 - static int __init ehci_mxc_init(void) 194 - { 195 - if (usb_disabled()) 196 - return -ENODEV; 197 - 198 - pr_info("%s: " DRIVER_DESC "\n", hcd_name); 199 - 200 - ehci_init_driver(&ehci_mxc_hc_driver, &ehci_mxc_overrides); 201 - return platform_driver_register(&ehci_mxc_driver); 202 - } 203 - module_init(ehci_mxc_init); 204 - 205 - static void __exit ehci_mxc_cleanup(void) 206 - { 207 - platform_driver_unregister(&ehci_mxc_driver); 208 - } 209 - module_exit(ehci_mxc_cleanup); 210 - 211 - MODULE_DESCRIPTION(DRIVER_DESC); 212 - MODULE_AUTHOR("Sascha Hauer"); 213 - MODULE_LICENSE("GPL");
+1
drivers/usb/host/ehci-omap.c
··· 220 220 221 221 err_pm_runtime: 222 222 pm_runtime_put_sync(dev); 223 + pm_runtime_disable(dev); 223 224 224 225 err_phy: 225 226 for (i = 0; i < omap->nports; i++) {
+9 -6
drivers/usb/host/ehci-pmcmsp.c
··· 147 147 148 148 /** 149 149 * usb_hcd_msp_probe - initialize PMC MSP-based HCDs 150 - * Context: !in_interrupt() 150 + * @driver: Pointer to hc driver instance 151 + * @dev: USB controller to probe 152 + * 153 + * Context: task context, might sleep 151 154 * 152 155 * Allocates basic resources for this USB host controller, and 153 156 * then invokes the start() method for the HCD associated with it 154 157 * through the hotplug entry's driver_data. 155 - * 156 158 */ 157 159 int usb_hcd_msp_probe(const struct hc_driver *driver, 158 160 struct platform_device *dev) ··· 225 223 226 224 /** 227 225 * usb_hcd_msp_remove - shutdown processing for PMC MSP-based HCDs 228 - * @dev: USB Host Controller being removed 229 - * Context: !in_interrupt() 226 + * @hcd: USB Host Controller being removed 227 + * 228 + * Context: task context, might sleep 230 229 * 231 230 * Reverses the effect of usb_hcd_msp_probe(), first invoking 232 231 * the HCD's stop() method. It is always called from a thread ··· 236 233 * may be called without controller electrically present 237 234 * may be called with controller, bus, and devices active 238 235 */ 239 - void usb_hcd_msp_remove(struct usb_hcd *hcd, struct platform_device *dev) 236 + static void usb_hcd_msp_remove(struct usb_hcd *hcd) 240 237 { 241 238 usb_remove_hcd(hcd); 242 239 iounmap(hcd->regs); ··· 309 306 { 310 307 struct usb_hcd *hcd = platform_get_drvdata(pdev); 311 308 312 - usb_hcd_msp_remove(hcd, pdev); 309 + usb_hcd_msp_remove(hcd); 313 310 314 311 /* free TWI GPIO USB_HOST_DEV pin */ 315 312 gpio_free(MSP_PIN_USB0_HOST_DEV);
+12
drivers/usb/host/ehci-sched.c
··· 244 244 245 245 /* FS/LS bus bandwidth */ 246 246 if (tt_usecs) { 247 + /* 248 + * find_tt() will not return any error here as we have 249 + * already called find_tt() before calling this function 250 + * and checked for any error return. The previous call 251 + * would have created the data structure. 252 + */ 247 253 tt = find_tt(qh->ps.udev); 248 254 if (sign > 0) 249 255 list_add_tail(&qh->ps.ps_list, &tt->ps_list); ··· 1343 1337 } 1344 1338 } 1345 1339 1340 + /* 1341 + * find_tt() will not return any error here as we have 1342 + * already called find_tt() before calling this function 1343 + * and checked for any error return. The previous call 1344 + * would have created the data structure. 1345 + */ 1346 1346 tt = find_tt(stream->ps.udev); 1347 1347 if (sign > 0) 1348 1348 list_add_tail(&stream->ps.ps_list, &tt->ps_list);
+2 -2
drivers/usb/host/fotg210-hcd.c
··· 1951 1951 goto fail; 1952 1952 1953 1953 /* Hardware periodic table */ 1954 - fotg210->periodic = (__le32 *) 1954 + fotg210->periodic = 1955 1955 dma_alloc_coherent(fotg210_to_hcd(fotg210)->self.controller, 1956 1956 fotg210->periodic_size * sizeof(__le32), 1957 1957 &fotg210->periodic_dma, 0); ··· 5276 5276 */ 5277 5277 if (urb->transfer_buffer_length > (16 * 1024)) 5278 5278 return -EMSGSIZE; 5279 - /* FALLTHROUGH */ 5279 + fallthrough; 5280 5280 /* case PIPE_BULK: */ 5281 5281 default: 5282 5282 if (!qh_urb_transaction(fotg210, urb, &qtd_list, mem_flags))
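The first fotg210 hunk drops a `(__le32 *)` cast on the return of `dma_alloc_coherent()`. In C (unlike C++), `void *` converts implicitly to any object pointer type, so such casts add nothing and can mask a missing prototype. The same property shown with a standard allocator (the helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* calloc() returns void *, which assigns directly to uint32_t * with
 * no cast -- the same reason the (__le32 *) cast above was redundant. */
static uint32_t *alloc_table(size_t entries)
{
	uint32_t *table = calloc(entries, sizeof(*table));
	return table;
}
```

Keeping allocator returns uncast is also the kernel's preferred style, since a cast would silence the warning that appears if the pointer types ever drift apart.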
-439
drivers/usb/host/imx21-dbg.c
··· 1 - // SPDX-License-Identifier: GPL-2.0+ 2 - /* 3 - * Copyright (c) 2009 by Martin Fuzzey 4 - */ 5 - 6 - /* this file is part of imx21-hcd.c */ 7 - 8 - #ifdef CONFIG_DYNAMIC_DEBUG 9 - #define DEBUG 10 - #endif 11 - 12 - #ifndef DEBUG 13 - 14 - static inline void create_debug_files(struct imx21 *imx21) { } 15 - static inline void remove_debug_files(struct imx21 *imx21) { } 16 - static inline void debug_urb_submitted(struct imx21 *imx21, struct urb *urb) {} 17 - static inline void debug_urb_completed(struct imx21 *imx21, struct urb *urb, 18 - int status) {} 19 - static inline void debug_urb_unlinked(struct imx21 *imx21, struct urb *urb) {} 20 - static inline void debug_urb_queued_for_etd(struct imx21 *imx21, 21 - struct urb *urb) {} 22 - static inline void debug_urb_queued_for_dmem(struct imx21 *imx21, 23 - struct urb *urb) {} 24 - static inline void debug_etd_allocated(struct imx21 *imx21) {} 25 - static inline void debug_etd_freed(struct imx21 *imx21) {} 26 - static inline void debug_dmem_allocated(struct imx21 *imx21, int size) {} 27 - static inline void debug_dmem_freed(struct imx21 *imx21, int size) {} 28 - static inline void debug_isoc_submitted(struct imx21 *imx21, 29 - int frame, struct td *td) {} 30 - static inline void debug_isoc_completed(struct imx21 *imx21, 31 - int frame, struct td *td, int cc, int len) {} 32 - 33 - #else 34 - 35 - #include <linux/debugfs.h> 36 - #include <linux/seq_file.h> 37 - 38 - static const char *dir_labels[] = { 39 - "TD 0", 40 - "OUT", 41 - "IN", 42 - "TD 1" 43 - }; 44 - 45 - static const char *speed_labels[] = { 46 - "Full", 47 - "Low" 48 - }; 49 - 50 - static const char *format_labels[] = { 51 - "Control", 52 - "ISO", 53 - "Bulk", 54 - "Interrupt" 55 - }; 56 - 57 - static inline struct debug_stats *stats_for_urb(struct imx21 *imx21, 58 - struct urb *urb) 59 - { 60 - return usb_pipeisoc(urb->pipe) ? 
61 - &imx21->isoc_stats : &imx21->nonisoc_stats; 62 - } 63 - 64 - static void debug_urb_submitted(struct imx21 *imx21, struct urb *urb) 65 - { 66 - stats_for_urb(imx21, urb)->submitted++; 67 - } 68 - 69 - static void debug_urb_completed(struct imx21 *imx21, struct urb *urb, int st) 70 - { 71 - if (st) 72 - stats_for_urb(imx21, urb)->completed_failed++; 73 - else 74 - stats_for_urb(imx21, urb)->completed_ok++; 75 - } 76 - 77 - static void debug_urb_unlinked(struct imx21 *imx21, struct urb *urb) 78 - { 79 - stats_for_urb(imx21, urb)->unlinked++; 80 - } 81 - 82 - static void debug_urb_queued_for_etd(struct imx21 *imx21, struct urb *urb) 83 - { 84 - stats_for_urb(imx21, urb)->queue_etd++; 85 - } 86 - 87 - static void debug_urb_queued_for_dmem(struct imx21 *imx21, struct urb *urb) 88 - { 89 - stats_for_urb(imx21, urb)->queue_dmem++; 90 - } 91 - 92 - static inline void debug_etd_allocated(struct imx21 *imx21) 93 - { 94 - imx21->etd_usage.maximum = max( 95 - ++(imx21->etd_usage.value), 96 - imx21->etd_usage.maximum); 97 - } 98 - 99 - static inline void debug_etd_freed(struct imx21 *imx21) 100 - { 101 - imx21->etd_usage.value--; 102 - } 103 - 104 - static inline void debug_dmem_allocated(struct imx21 *imx21, int size) 105 - { 106 - imx21->dmem_usage.value += size; 107 - imx21->dmem_usage.maximum = max( 108 - imx21->dmem_usage.value, 109 - imx21->dmem_usage.maximum); 110 - } 111 - 112 - static inline void debug_dmem_freed(struct imx21 *imx21, int size) 113 - { 114 - imx21->dmem_usage.value -= size; 115 - } 116 - 117 - 118 - static void debug_isoc_submitted(struct imx21 *imx21, 119 - int frame, struct td *td) 120 - { 121 - struct debug_isoc_trace *trace = &imx21->isoc_trace[ 122 - imx21->isoc_trace_index++]; 123 - 124 - imx21->isoc_trace_index %= ARRAY_SIZE(imx21->isoc_trace); 125 - trace->schedule_frame = td->frame; 126 - trace->submit_frame = frame; 127 - trace->request_len = td->len; 128 - trace->td = td; 129 - } 130 - 131 - static inline void debug_isoc_completed(struct 
imx21 *imx21, 132 - int frame, struct td *td, int cc, int len) 133 - { 134 - struct debug_isoc_trace *trace, *trace_failed; 135 - int i; 136 - int found = 0; 137 - 138 - trace = imx21->isoc_trace; 139 - for (i = 0; i < ARRAY_SIZE(imx21->isoc_trace); i++, trace++) { 140 - if (trace->td == td) { 141 - trace->done_frame = frame; 142 - trace->done_len = len; 143 - trace->cc = cc; 144 - trace->td = NULL; 145 - found = 1; 146 - break; 147 - } 148 - } 149 - 150 - if (found && cc) { 151 - trace_failed = &imx21->isoc_trace_failed[ 152 - imx21->isoc_trace_index_failed++]; 153 - 154 - imx21->isoc_trace_index_failed %= ARRAY_SIZE( 155 - imx21->isoc_trace_failed); 156 - *trace_failed = *trace; 157 - } 158 - } 159 - 160 - 161 - static char *format_ep(struct usb_host_endpoint *ep, char *buf, int bufsize) 162 - { 163 - if (ep) 164 - snprintf(buf, bufsize, "ep_%02x (type:%02X kaddr:%p)", 165 - ep->desc.bEndpointAddress, 166 - usb_endpoint_type(&ep->desc), 167 - ep); 168 - else 169 - snprintf(buf, bufsize, "none"); 170 - return buf; 171 - } 172 - 173 - static char *format_etd_dword0(u32 value, char *buf, int bufsize) 174 - { 175 - snprintf(buf, bufsize, 176 - "addr=%d ep=%d dir=%s speed=%s format=%s halted=%d", 177 - value & 0x7F, 178 - (value >> DW0_ENDPNT) & 0x0F, 179 - dir_labels[(value >> DW0_DIRECT) & 0x03], 180 - speed_labels[(value >> DW0_SPEED) & 0x01], 181 - format_labels[(value >> DW0_FORMAT) & 0x03], 182 - (value >> DW0_HALTED) & 0x01); 183 - return buf; 184 - } 185 - 186 - static int debug_status_show(struct seq_file *s, void *v) 187 - { 188 - struct imx21 *imx21 = s->private; 189 - int etds_allocated = 0; 190 - int etds_sw_busy = 0; 191 - int etds_hw_busy = 0; 192 - int dmem_blocks = 0; 193 - int queued_for_etd = 0; 194 - int queued_for_dmem = 0; 195 - unsigned int dmem_bytes = 0; 196 - int i; 197 - struct etd_priv *etd; 198 - u32 etd_enable_mask; 199 - unsigned long flags; 200 - struct imx21_dmem_area *dmem; 201 - struct ep_priv *ep_priv; 202 - 203 - 
spin_lock_irqsave(&imx21->lock, flags); 204 - 205 - etd_enable_mask = readl(imx21->regs + USBH_ETDENSET); 206 - for (i = 0, etd = imx21->etd; i < USB_NUM_ETD; i++, etd++) { 207 - if (etd->alloc) 208 - etds_allocated++; 209 - if (etd->urb) 210 - etds_sw_busy++; 211 - if (etd_enable_mask & (1<<i)) 212 - etds_hw_busy++; 213 - } 214 - 215 - list_for_each_entry(dmem, &imx21->dmem_list, list) { 216 - dmem_bytes += dmem->size; 217 - dmem_blocks++; 218 - } 219 - 220 - list_for_each_entry(ep_priv, &imx21->queue_for_etd, queue) 221 - queued_for_etd++; 222 - 223 - list_for_each_entry(etd, &imx21->queue_for_dmem, queue) 224 - queued_for_dmem++; 225 - 226 - spin_unlock_irqrestore(&imx21->lock, flags); 227 - 228 - seq_printf(s, 229 - "Frame: %d\n" 230 - "ETDs allocated: %d/%d (max=%d)\n" 231 - "ETDs in use sw: %d\n" 232 - "ETDs in use hw: %d\n" 233 - "DMEM allocated: %d/%d (max=%d)\n" 234 - "DMEM blocks: %d\n" 235 - "Queued waiting for ETD: %d\n" 236 - "Queued waiting for DMEM: %d\n", 237 - readl(imx21->regs + USBH_FRMNUB) & 0xFFFF, 238 - etds_allocated, USB_NUM_ETD, imx21->etd_usage.maximum, 239 - etds_sw_busy, 240 - etds_hw_busy, 241 - dmem_bytes, DMEM_SIZE, imx21->dmem_usage.maximum, 242 - dmem_blocks, 243 - queued_for_etd, 244 - queued_for_dmem); 245 - 246 - return 0; 247 - } 248 - DEFINE_SHOW_ATTRIBUTE(debug_status); 249 - 250 - static int debug_dmem_show(struct seq_file *s, void *v) 251 - { 252 - struct imx21 *imx21 = s->private; 253 - struct imx21_dmem_area *dmem; 254 - unsigned long flags; 255 - char ep_text[40]; 256 - 257 - spin_lock_irqsave(&imx21->lock, flags); 258 - 259 - list_for_each_entry(dmem, &imx21->dmem_list, list) 260 - seq_printf(s, 261 - "%04X: size=0x%X " 262 - "ep=%s\n", 263 - dmem->offset, dmem->size, 264 - format_ep(dmem->ep, ep_text, sizeof(ep_text))); 265 - 266 - spin_unlock_irqrestore(&imx21->lock, flags); 267 - 268 - return 0; 269 - } 270 - DEFINE_SHOW_ATTRIBUTE(debug_dmem); 271 - 272 - static int debug_etd_show(struct seq_file *s, void *v) 273 - { 
274 - struct imx21 *imx21 = s->private; 275 - struct etd_priv *etd; 276 - char buf[60]; 277 - u32 dword; 278 - int i, j; 279 - unsigned long flags; 280 - 281 - spin_lock_irqsave(&imx21->lock, flags); 282 - 283 - for (i = 0, etd = imx21->etd; i < USB_NUM_ETD; i++, etd++) { 284 - int state = -1; 285 - struct urb_priv *urb_priv; 286 - if (etd->urb) { 287 - urb_priv = etd->urb->hcpriv; 288 - if (urb_priv) 289 - state = urb_priv->state; 290 - } 291 - 292 - seq_printf(s, 293 - "etd_num: %d\n" 294 - "ep: %s\n" 295 - "alloc: %d\n" 296 - "len: %d\n" 297 - "busy sw: %d\n" 298 - "busy hw: %d\n" 299 - "urb state: %d\n" 300 - "current urb: %p\n", 301 - 302 - i, 303 - format_ep(etd->ep, buf, sizeof(buf)), 304 - etd->alloc, 305 - etd->len, 306 - etd->urb != NULL, 307 - (readl(imx21->regs + USBH_ETDENSET) & (1 << i)) > 0, 308 - state, 309 - etd->urb); 310 - 311 - for (j = 0; j < 4; j++) { 312 - dword = etd_readl(imx21, i, j); 313 - switch (j) { 314 - case 0: 315 - format_etd_dword0(dword, buf, sizeof(buf)); 316 - break; 317 - case 2: 318 - snprintf(buf, sizeof(buf), 319 - "cc=0X%02X", dword >> DW2_COMPCODE); 320 - break; 321 - default: 322 - *buf = 0; 323 - break; 324 - } 325 - seq_printf(s, 326 - "dword %d: submitted=%08X cur=%08X [%s]\n", 327 - j, 328 - etd->submitted_dwords[j], 329 - dword, 330 - buf); 331 - } 332 - seq_printf(s, "\n"); 333 - } 334 - 335 - spin_unlock_irqrestore(&imx21->lock, flags); 336 - 337 - return 0; 338 - } 339 - DEFINE_SHOW_ATTRIBUTE(debug_etd); 340 - 341 - static void debug_statistics_show_one(struct seq_file *s, 342 - const char *name, struct debug_stats *stats) 343 - { 344 - seq_printf(s, "%s:\n" 345 - "submitted URBs: %lu\n" 346 - "completed OK: %lu\n" 347 - "completed failed: %lu\n" 348 - "unlinked: %lu\n" 349 - "queued for ETD: %lu\n" 350 - "queued for DMEM: %lu\n\n", 351 - name, 352 - stats->submitted, 353 - stats->completed_ok, 354 - stats->completed_failed, 355 - stats->unlinked, 356 - stats->queue_etd, 357 - stats->queue_dmem); 358 - } 359 - 
360 - static int debug_statistics_show(struct seq_file *s, void *v) 361 - { 362 - struct imx21 *imx21 = s->private; 363 - unsigned long flags; 364 - 365 - spin_lock_irqsave(&imx21->lock, flags); 366 - 367 - debug_statistics_show_one(s, "nonisoc", &imx21->nonisoc_stats); 368 - debug_statistics_show_one(s, "isoc", &imx21->isoc_stats); 369 - seq_printf(s, "unblock kludge triggers: %lu\n", imx21->debug_unblocks); 370 - spin_unlock_irqrestore(&imx21->lock, flags); 371 - 372 - return 0; 373 - } 374 - DEFINE_SHOW_ATTRIBUTE(debug_statistics); 375 - 376 - static void debug_isoc_show_one(struct seq_file *s, 377 - const char *name, int index, struct debug_isoc_trace *trace) 378 - { 379 - seq_printf(s, "%s %d:\n" 380 - "cc=0X%02X\n" 381 - "scheduled frame %d (%d)\n" 382 - "submitted frame %d (%d)\n" 383 - "completed frame %d (%d)\n" 384 - "requested length=%d\n" 385 - "completed length=%d\n\n", 386 - name, index, 387 - trace->cc, 388 - trace->schedule_frame, trace->schedule_frame & 0xFFFF, 389 - trace->submit_frame, trace->submit_frame & 0xFFFF, 390 - trace->done_frame, trace->done_frame & 0xFFFF, 391 - trace->request_len, 392 - trace->done_len); 393 - } 394 - 395 - static int debug_isoc_show(struct seq_file *s, void *v) 396 - { 397 - struct imx21 *imx21 = s->private; 398 - struct debug_isoc_trace *trace; 399 - unsigned long flags; 400 - int i; 401 - 402 - spin_lock_irqsave(&imx21->lock, flags); 403 - 404 - trace = imx21->isoc_trace_failed; 405 - for (i = 0; i < ARRAY_SIZE(imx21->isoc_trace_failed); i++, trace++) 406 - debug_isoc_show_one(s, "isoc failed", i, trace); 407 - 408 - trace = imx21->isoc_trace; 409 - for (i = 0; i < ARRAY_SIZE(imx21->isoc_trace); i++, trace++) 410 - debug_isoc_show_one(s, "isoc", i, trace); 411 - 412 - spin_unlock_irqrestore(&imx21->lock, flags); 413 - 414 - return 0; 415 - } 416 - DEFINE_SHOW_ATTRIBUTE(debug_isoc); 417 - 418 - static void create_debug_files(struct imx21 *imx21) 419 - { 420 - struct dentry *root; 421 - 422 - root = 
debugfs_create_dir(dev_name(imx21->dev), usb_debug_root); 423 - imx21->debug_root = root; 424 - 425 - debugfs_create_file("status", S_IRUGO, root, imx21, &debug_status_fops); 426 - debugfs_create_file("dmem", S_IRUGO, root, imx21, &debug_dmem_fops); 427 - debugfs_create_file("etd", S_IRUGO, root, imx21, &debug_etd_fops); 428 - debugfs_create_file("statistics", S_IRUGO, root, imx21, 429 - &debug_statistics_fops); 430 - debugfs_create_file("isoc", S_IRUGO, root, imx21, &debug_isoc_fops); 431 - } 432 - 433 - static void remove_debug_files(struct imx21 *imx21) 434 - { 435 - debugfs_remove_recursive(imx21->debug_root); 436 - } 437 - 438 - #endif 439 -
-1933
drivers/usb/host/imx21-hcd.c
··· 1 - // SPDX-License-Identifier: GPL-2.0+ 2 - /* 3 - * USB Host Controller Driver for IMX21 4 - * 5 - * Copyright (C) 2006 Loping Dog Embedded Systems 6 - * Copyright (C) 2009 Martin Fuzzey 7 - * Originally written by Jay Monkman <jtm@lopingdog.com> 8 - * Ported to 2.6.30, debugged and enhanced by Martin Fuzzey 9 - */ 10 - 11 - 12 - /* 13 - * The i.MX21 USB hardware contains 14 - * * 32 transfer descriptors (called ETDs) 15 - * * 4Kb of Data memory 16 - * 17 - * The data memory is shared between the host and function controllers 18 - * (but this driver only supports the host controller) 19 - * 20 - * So setting up a transfer involves: 21 - * * Allocating a ETD 22 - * * Fill in ETD with appropriate information 23 - * * Allocating data memory (and putting the offset in the ETD) 24 - * * Activate the ETD 25 - * * Get interrupt when done. 26 - * 27 - * An ETD is assigned to each active endpoint. 28 - * 29 - * Low resource (ETD and Data memory) situations are handled differently for 30 - * isochronous and non insosynchronous transactions : 31 - * 32 - * Non ISOC transfers are queued if either ETDs or Data memory are unavailable 33 - * 34 - * ISOC transfers use 2 ETDs per endpoint to achieve double buffering. 35 - * They allocate both ETDs and Data memory during URB submission 36 - * (and fail if unavailable). 
37 - */ 38 - 39 - #include <linux/clk.h> 40 - #include <linux/io.h> 41 - #include <linux/kernel.h> 42 - #include <linux/list.h> 43 - #include <linux/platform_device.h> 44 - #include <linux/slab.h> 45 - #include <linux/usb.h> 46 - #include <linux/usb/hcd.h> 47 - #include <linux/dma-mapping.h> 48 - #include <linux/module.h> 49 - 50 - #include "imx21-hcd.h" 51 - 52 - #ifdef CONFIG_DYNAMIC_DEBUG 53 - #define DEBUG 54 - #endif 55 - 56 - #ifdef DEBUG 57 - #define DEBUG_LOG_FRAME(imx21, etd, event) \ 58 - (etd)->event##_frame = readl((imx21)->regs + USBH_FRMNUB) 59 - #else 60 - #define DEBUG_LOG_FRAME(imx21, etd, event) do { } while (0) 61 - #endif 62 - 63 - static const char hcd_name[] = "imx21-hcd"; 64 - 65 - static inline struct imx21 *hcd_to_imx21(struct usb_hcd *hcd) 66 - { 67 - return (struct imx21 *)hcd->hcd_priv; 68 - } 69 - 70 - 71 - /* =========================================== */ 72 - /* Hardware access helpers */ 73 - /* =========================================== */ 74 - 75 - static inline void set_register_bits(struct imx21 *imx21, u32 offset, u32 mask) 76 - { 77 - void __iomem *reg = imx21->regs + offset; 78 - writel(readl(reg) | mask, reg); 79 - } 80 - 81 - static inline void clear_register_bits(struct imx21 *imx21, 82 - u32 offset, u32 mask) 83 - { 84 - void __iomem *reg = imx21->regs + offset; 85 - writel(readl(reg) & ~mask, reg); 86 - } 87 - 88 - static inline void clear_toggle_bit(struct imx21 *imx21, u32 offset, u32 mask) 89 - { 90 - void __iomem *reg = imx21->regs + offset; 91 - 92 - if (readl(reg) & mask) 93 - writel(mask, reg); 94 - } 95 - 96 - static inline void set_toggle_bit(struct imx21 *imx21, u32 offset, u32 mask) 97 - { 98 - void __iomem *reg = imx21->regs + offset; 99 - 100 - if (!(readl(reg) & mask)) 101 - writel(mask, reg); 102 - } 103 - 104 - static void etd_writel(struct imx21 *imx21, int etd_num, int dword, u32 value) 105 - { 106 - writel(value, imx21->regs + USB_ETD_DWORD(etd_num, dword)); 107 - } 108 - 109 - static u32 
etd_readl(struct imx21 *imx21, int etd_num, int dword) 110 - { 111 - return readl(imx21->regs + USB_ETD_DWORD(etd_num, dword)); 112 - } 113 - 114 - static inline int wrap_frame(int counter) 115 - { 116 - return counter & 0xFFFF; 117 - } 118 - 119 - static inline int frame_after(int frame, int after) 120 - { 121 - /* handle wrapping like jiffies time_afer */ 122 - return (s16)((s16)after - (s16)frame) < 0; 123 - } 124 - 125 - static int imx21_hc_get_frame(struct usb_hcd *hcd) 126 - { 127 - struct imx21 *imx21 = hcd_to_imx21(hcd); 128 - 129 - return wrap_frame(readl(imx21->regs + USBH_FRMNUB)); 130 - } 131 - 132 - static inline bool unsuitable_for_dma(dma_addr_t addr) 133 - { 134 - return (addr & 3) != 0; 135 - } 136 - 137 - #include "imx21-dbg.c" 138 - 139 - static void nonisoc_urb_completed_for_etd( 140 - struct imx21 *imx21, struct etd_priv *etd, int status); 141 - static void schedule_nonisoc_etd(struct imx21 *imx21, struct urb *urb); 142 - static void free_dmem(struct imx21 *imx21, struct etd_priv *etd); 143 - 144 - /* =========================================== */ 145 - /* ETD management */ 146 - /* =========================================== */ 147 - 148 - static int alloc_etd(struct imx21 *imx21) 149 - { 150 - int i; 151 - struct etd_priv *etd = imx21->etd; 152 - 153 - for (i = 0; i < USB_NUM_ETD; i++, etd++) { 154 - if (etd->alloc == 0) { 155 - memset(etd, 0, sizeof(imx21->etd[0])); 156 - etd->alloc = 1; 157 - debug_etd_allocated(imx21); 158 - return i; 159 - } 160 - } 161 - return -1; 162 - } 163 - 164 - static void disactivate_etd(struct imx21 *imx21, int num) 165 - { 166 - int etd_mask = (1 << num); 167 - struct etd_priv *etd = &imx21->etd[num]; 168 - 169 - writel(etd_mask, imx21->regs + USBH_ETDENCLR); 170 - clear_register_bits(imx21, USBH_ETDDONEEN, etd_mask); 171 - writel(etd_mask, imx21->regs + USB_ETDDMACHANLCLR); 172 - clear_toggle_bit(imx21, USBH_ETDDONESTAT, etd_mask); 173 - 174 - etd->active_count = 0; 175 - 176 - DEBUG_LOG_FRAME(imx21, etd, 
disactivated); 177 - } 178 - 179 - static void reset_etd(struct imx21 *imx21, int num) 180 - { 181 - struct etd_priv *etd = imx21->etd + num; 182 - int i; 183 - 184 - disactivate_etd(imx21, num); 185 - 186 - for (i = 0; i < 4; i++) 187 - etd_writel(imx21, num, i, 0); 188 - etd->urb = NULL; 189 - etd->ep = NULL; 190 - etd->td = NULL; 191 - etd->bounce_buffer = NULL; 192 - } 193 - 194 - static void free_etd(struct imx21 *imx21, int num) 195 - { 196 - if (num < 0) 197 - return; 198 - 199 - if (num >= USB_NUM_ETD) { 200 - dev_err(imx21->dev, "BAD etd=%d!\n", num); 201 - return; 202 - } 203 - if (imx21->etd[num].alloc == 0) { 204 - dev_err(imx21->dev, "ETD %d already free!\n", num); 205 - return; 206 - } 207 - 208 - debug_etd_freed(imx21); 209 - reset_etd(imx21, num); 210 - memset(&imx21->etd[num], 0, sizeof(imx21->etd[0])); 211 - } 212 - 213 - 214 - static void setup_etd_dword0(struct imx21 *imx21, 215 - int etd_num, struct urb *urb, u8 dir, u16 maxpacket) 216 - { 217 - etd_writel(imx21, etd_num, 0, 218 - ((u32) usb_pipedevice(urb->pipe)) << DW0_ADDRESS | 219 - ((u32) usb_pipeendpoint(urb->pipe) << DW0_ENDPNT) | 220 - ((u32) dir << DW0_DIRECT) | 221 - ((u32) ((urb->dev->speed == USB_SPEED_LOW) ? 222 - 1 : 0) << DW0_SPEED) | 223 - ((u32) fmt_urb_to_etd[usb_pipetype(urb->pipe)] << DW0_FORMAT) | 224 - ((u32) maxpacket << DW0_MAXPKTSIZ)); 225 - } 226 - 227 - /* 228 - * Copy buffer to data controller data memory. 
229 - * We cannot use memcpy_toio() because the hardware requires 32bit writes 230 - */ 231 - static void copy_to_dmem( 232 - struct imx21 *imx21, int dmem_offset, void *src, int count) 233 - { 234 - void __iomem *dmem = imx21->regs + USBOTG_DMEM + dmem_offset; 235 - u32 word = 0; 236 - u8 *p = src; 237 - int byte = 0; 238 - int i; 239 - 240 - for (i = 0; i < count; i++) { 241 - byte = i % 4; 242 - word += (*p++ << (byte * 8)); 243 - if (byte == 3) { 244 - writel(word, dmem); 245 - dmem += 4; 246 - word = 0; 247 - } 248 - } 249 - 250 - if (count && byte != 3) 251 - writel(word, dmem); 252 - } 253 - 254 - static void activate_etd(struct imx21 *imx21, int etd_num, u8 dir) 255 - { 256 - u32 etd_mask = 1 << etd_num; 257 - struct etd_priv *etd = &imx21->etd[etd_num]; 258 - 259 - if (etd->dma_handle && unsuitable_for_dma(etd->dma_handle)) { 260 - /* For non aligned isoc the condition below is always true */ 261 - if (etd->len <= etd->dmem_size) { 262 - /* Fits into data memory, use PIO */ 263 - if (dir != TD_DIR_IN) { 264 - copy_to_dmem(imx21, 265 - etd->dmem_offset, 266 - etd->cpu_buffer, etd->len); 267 - } 268 - etd->dma_handle = 0; 269 - 270 - } else { 271 - /* Too big for data memory, use bounce buffer */ 272 - enum dma_data_direction dmadir; 273 - 274 - if (dir == TD_DIR_IN) { 275 - dmadir = DMA_FROM_DEVICE; 276 - etd->bounce_buffer = kmalloc(etd->len, 277 - GFP_ATOMIC); 278 - } else { 279 - dmadir = DMA_TO_DEVICE; 280 - etd->bounce_buffer = kmemdup(etd->cpu_buffer, 281 - etd->len, 282 - GFP_ATOMIC); 283 - } 284 - if (!etd->bounce_buffer) { 285 - dev_err(imx21->dev, "failed bounce alloc\n"); 286 - goto err_bounce_alloc; 287 - } 288 - 289 - etd->dma_handle = 290 - dma_map_single(imx21->dev, 291 - etd->bounce_buffer, 292 - etd->len, 293 - dmadir); 294 - if (dma_mapping_error(imx21->dev, etd->dma_handle)) { 295 - dev_err(imx21->dev, "failed bounce map\n"); 296 - goto err_bounce_map; 297 - } 298 - } 299 - } 300 - 301 - clear_toggle_bit(imx21, USBH_ETDDONESTAT, 
etd_mask); 302 - set_register_bits(imx21, USBH_ETDDONEEN, etd_mask); 303 - clear_toggle_bit(imx21, USBH_XFILLSTAT, etd_mask); 304 - clear_toggle_bit(imx21, USBH_YFILLSTAT, etd_mask); 305 - 306 - if (etd->dma_handle) { 307 - set_register_bits(imx21, USB_ETDDMACHANLCLR, etd_mask); 308 - clear_toggle_bit(imx21, USBH_XBUFSTAT, etd_mask); 309 - clear_toggle_bit(imx21, USBH_YBUFSTAT, etd_mask); 310 - writel(etd->dma_handle, imx21->regs + USB_ETDSMSA(etd_num)); 311 - set_register_bits(imx21, USB_ETDDMAEN, etd_mask); 312 - } else { 313 - if (dir != TD_DIR_IN) { 314 - /* need to set for ZLP and PIO */ 315 - set_toggle_bit(imx21, USBH_XFILLSTAT, etd_mask); 316 - set_toggle_bit(imx21, USBH_YFILLSTAT, etd_mask); 317 - } 318 - } 319 - 320 - DEBUG_LOG_FRAME(imx21, etd, activated); 321 - 322 - #ifdef DEBUG 323 - if (!etd->active_count) { 324 - int i; 325 - etd->activated_frame = readl(imx21->regs + USBH_FRMNUB); 326 - etd->disactivated_frame = -1; 327 - etd->last_int_frame = -1; 328 - etd->last_req_frame = -1; 329 - 330 - for (i = 0; i < 4; i++) 331 - etd->submitted_dwords[i] = etd_readl(imx21, etd_num, i); 332 - } 333 - #endif 334 - 335 - etd->active_count = 1; 336 - writel(etd_mask, imx21->regs + USBH_ETDENSET); 337 - return; 338 - 339 - err_bounce_map: 340 - kfree(etd->bounce_buffer); 341 - 342 - err_bounce_alloc: 343 - free_dmem(imx21, etd); 344 - nonisoc_urb_completed_for_etd(imx21, etd, -ENOMEM); 345 - } 346 - 347 - /* =========================================== */ 348 - /* Data memory management */ 349 - /* =========================================== */ 350 - 351 - static int alloc_dmem(struct imx21 *imx21, unsigned int size, 352 - struct usb_host_endpoint *ep) 353 - { 354 - unsigned int offset = 0; 355 - struct imx21_dmem_area *area; 356 - struct imx21_dmem_area *tmp; 357 - 358 - size += (~size + 1) & 0x3; /* Round to 4 byte multiple */ 359 - 360 - if (size > DMEM_SIZE) { 361 - dev_err(imx21->dev, "size=%d > DMEM_SIZE(%d)\n", 362 - size, DMEM_SIZE); 363 - return -EINVAL; 
364 - } 365 - 366 - list_for_each_entry(tmp, &imx21->dmem_list, list) { 367 - if ((size + offset) < offset) 368 - goto fail; 369 - if ((size + offset) <= tmp->offset) 370 - break; 371 - offset = tmp->size + tmp->offset; 372 - if ((offset + size) > DMEM_SIZE) 373 - goto fail; 374 - } 375 - 376 - area = kmalloc(sizeof(struct imx21_dmem_area), GFP_ATOMIC); 377 - if (area == NULL) 378 - return -ENOMEM; 379 - 380 - area->ep = ep; 381 - area->offset = offset; 382 - area->size = size; 383 - list_add_tail(&area->list, &tmp->list); 384 - debug_dmem_allocated(imx21, size); 385 - return offset; 386 - 387 - fail: 388 - return -ENOMEM; 389 - } 390 - 391 - /* Memory now available for a queued ETD - activate it */ 392 - static void activate_queued_etd(struct imx21 *imx21, 393 - struct etd_priv *etd, u32 dmem_offset) 394 - { 395 - struct urb_priv *urb_priv = etd->urb->hcpriv; 396 - int etd_num = etd - &imx21->etd[0]; 397 - u32 maxpacket = etd_readl(imx21, etd_num, 1) >> DW1_YBUFSRTAD; 398 - u8 dir = (etd_readl(imx21, etd_num, 2) >> DW2_DIRPID) & 0x03; 399 - 400 - dev_dbg(imx21->dev, "activating queued ETD %d now DMEM available\n", 401 - etd_num); 402 - etd_writel(imx21, etd_num, 1, 403 - ((dmem_offset + maxpacket) << DW1_YBUFSRTAD) | dmem_offset); 404 - 405 - etd->dmem_offset = dmem_offset; 406 - urb_priv->active = 1; 407 - activate_etd(imx21, etd_num, dir); 408 - } 409 - 410 - static void free_dmem(struct imx21 *imx21, struct etd_priv *etd) 411 - { 412 - struct imx21_dmem_area *area; 413 - struct etd_priv *tmp; 414 - int found = 0; 415 - int offset; 416 - 417 - if (!etd->dmem_size) 418 - return; 419 - etd->dmem_size = 0; 420 - 421 - offset = etd->dmem_offset; 422 - list_for_each_entry(area, &imx21->dmem_list, list) { 423 - if (area->offset == offset) { 424 - debug_dmem_freed(imx21, area->size); 425 - list_del(&area->list); 426 - kfree(area); 427 - found = 1; 428 - break; 429 - } 430 - } 431 - 432 - if (!found) { 433 - dev_err(imx21->dev, 434 - "Trying to free unallocated DMEM 
%d\n", offset); 435 - return; 436 - } 437 - 438 - /* Try again to allocate memory for anything we've queued */ 439 - list_for_each_entry_safe(etd, tmp, &imx21->queue_for_dmem, queue) { 440 - offset = alloc_dmem(imx21, etd->dmem_size, etd->ep); 441 - if (offset >= 0) { 442 - list_del(&etd->queue); 443 - activate_queued_etd(imx21, etd, (u32)offset); 444 - } 445 - } 446 - } 447 - 448 - static void free_epdmem(struct imx21 *imx21, struct usb_host_endpoint *ep) 449 - { 450 - struct imx21_dmem_area *area, *tmp; 451 - 452 - list_for_each_entry_safe(area, tmp, &imx21->dmem_list, list) { 453 - if (area->ep == ep) { 454 - dev_err(imx21->dev, 455 - "Active DMEM %d for disabled ep=%p\n", 456 - area->offset, ep); 457 - list_del(&area->list); 458 - kfree(area); 459 - } 460 - } 461 - } 462 - 463 - 464 - /* =========================================== */ 465 - /* End handling */ 466 - /* =========================================== */ 467 - 468 - /* Endpoint now idle - release its ETD(s) or assign to queued request */ 469 - static void ep_idle(struct imx21 *imx21, struct ep_priv *ep_priv) 470 - { 471 - int i; 472 - 473 - for (i = 0; i < NUM_ISO_ETDS; i++) { 474 - int etd_num = ep_priv->etd[i]; 475 - struct etd_priv *etd; 476 - if (etd_num < 0) 477 - continue; 478 - 479 - etd = &imx21->etd[etd_num]; 480 - ep_priv->etd[i] = -1; 481 - 482 - free_dmem(imx21, etd); /* for isoc */ 483 - 484 - if (list_empty(&imx21->queue_for_etd)) { 485 - free_etd(imx21, etd_num); 486 - continue; 487 - } 488 - 489 - dev_dbg(imx21->dev, 490 - "assigning idle etd %d for queued request\n", etd_num); 491 - ep_priv = list_first_entry(&imx21->queue_for_etd, 492 - struct ep_priv, queue); 493 - list_del(&ep_priv->queue); 494 - reset_etd(imx21, etd_num); 495 - ep_priv->waiting_etd = 0; 496 - ep_priv->etd[i] = etd_num; 497 - 498 - if (list_empty(&ep_priv->ep->urb_list)) { 499 - dev_err(imx21->dev, "No urb for queued ep!\n"); 500 - continue; 501 - } 502 - schedule_nonisoc_etd(imx21, list_first_entry( 503 - 
&ep_priv->ep->urb_list, struct urb, urb_list)); 504 - } 505 - } 506 - 507 - static void urb_done(struct usb_hcd *hcd, struct urb *urb, int status) 508 - __releases(imx21->lock) 509 - __acquires(imx21->lock) 510 - { 511 - struct imx21 *imx21 = hcd_to_imx21(hcd); 512 - struct ep_priv *ep_priv = urb->ep->hcpriv; 513 - struct urb_priv *urb_priv = urb->hcpriv; 514 - 515 - debug_urb_completed(imx21, urb, status); 516 - dev_vdbg(imx21->dev, "urb %p done %d\n", urb, status); 517 - 518 - kfree(urb_priv->isoc_td); 519 - kfree(urb->hcpriv); 520 - urb->hcpriv = NULL; 521 - usb_hcd_unlink_urb_from_ep(hcd, urb); 522 - spin_unlock(&imx21->lock); 523 - usb_hcd_giveback_urb(hcd, urb, status); 524 - spin_lock(&imx21->lock); 525 - if (list_empty(&ep_priv->ep->urb_list)) 526 - ep_idle(imx21, ep_priv); 527 - } 528 - 529 - static void nonisoc_urb_completed_for_etd( 530 - struct imx21 *imx21, struct etd_priv *etd, int status) 531 - { 532 - struct usb_host_endpoint *ep = etd->ep; 533 - 534 - urb_done(imx21->hcd, etd->urb, status); 535 - etd->urb = NULL; 536 - 537 - if (!list_empty(&ep->urb_list)) { 538 - struct urb *urb = list_first_entry( 539 - &ep->urb_list, struct urb, urb_list); 540 - 541 - dev_vdbg(imx21->dev, "next URB %p\n", urb); 542 - schedule_nonisoc_etd(imx21, urb); 543 - } 544 - } 545 - 546 - 547 - /* =========================================== */ 548 - /* ISOC Handling ... 
*/ 549 - /* =========================================== */ 550 - 551 - static void schedule_isoc_etds(struct usb_hcd *hcd, 552 - struct usb_host_endpoint *ep) 553 - { 554 - struct imx21 *imx21 = hcd_to_imx21(hcd); 555 - struct ep_priv *ep_priv = ep->hcpriv; 556 - struct etd_priv *etd; 557 - struct urb_priv *urb_priv; 558 - struct td *td; 559 - int etd_num; 560 - int i; 561 - int cur_frame; 562 - u8 dir; 563 - 564 - for (i = 0; i < NUM_ISO_ETDS; i++) { 565 - too_late: 566 - if (list_empty(&ep_priv->td_list)) 567 - break; 568 - 569 - etd_num = ep_priv->etd[i]; 570 - if (etd_num < 0) 571 - break; 572 - 573 - etd = &imx21->etd[etd_num]; 574 - if (etd->urb) 575 - continue; 576 - 577 - td = list_entry(ep_priv->td_list.next, struct td, list); 578 - list_del(&td->list); 579 - urb_priv = td->urb->hcpriv; 580 - 581 - cur_frame = imx21_hc_get_frame(hcd); 582 - if (frame_after(cur_frame, td->frame)) { 583 - dev_dbg(imx21->dev, "isoc too late frame %d > %d\n", 584 - cur_frame, td->frame); 585 - urb_priv->isoc_status = -EXDEV; 586 - td->urb->iso_frame_desc[ 587 - td->isoc_index].actual_length = 0; 588 - td->urb->iso_frame_desc[td->isoc_index].status = -EXDEV; 589 - if (--urb_priv->isoc_remaining == 0) 590 - urb_done(hcd, td->urb, urb_priv->isoc_status); 591 - goto too_late; 592 - } 593 - 594 - urb_priv->active = 1; 595 - etd->td = td; 596 - etd->ep = td->ep; 597 - etd->urb = td->urb; 598 - etd->len = td->len; 599 - etd->dma_handle = td->dma_handle; 600 - etd->cpu_buffer = td->cpu_buffer; 601 - 602 - debug_isoc_submitted(imx21, cur_frame, td); 603 - 604 - dir = usb_pipeout(td->urb->pipe) ? 
TD_DIR_OUT : TD_DIR_IN; 605 - setup_etd_dword0(imx21, etd_num, td->urb, dir, etd->dmem_size); 606 - etd_writel(imx21, etd_num, 1, etd->dmem_offset); 607 - etd_writel(imx21, etd_num, 2, 608 - (TD_NOTACCESSED << DW2_COMPCODE) | 609 - ((td->frame & 0xFFFF) << DW2_STARTFRM)); 610 - etd_writel(imx21, etd_num, 3, 611 - (TD_NOTACCESSED << DW3_COMPCODE0) | 612 - (td->len << DW3_PKTLEN0)); 613 - 614 - activate_etd(imx21, etd_num, dir); 615 - } 616 - } 617 - 618 - static void isoc_etd_done(struct usb_hcd *hcd, int etd_num) 619 - { 620 - struct imx21 *imx21 = hcd_to_imx21(hcd); 621 - int etd_mask = 1 << etd_num; 622 - struct etd_priv *etd = imx21->etd + etd_num; 623 - struct urb *urb = etd->urb; 624 - struct urb_priv *urb_priv = urb->hcpriv; 625 - struct td *td = etd->td; 626 - struct usb_host_endpoint *ep = etd->ep; 627 - int isoc_index = td->isoc_index; 628 - unsigned int pipe = urb->pipe; 629 - int dir_in = usb_pipein(pipe); 630 - int cc; 631 - int bytes_xfrd; 632 - 633 - disactivate_etd(imx21, etd_num); 634 - 635 - cc = (etd_readl(imx21, etd_num, 3) >> DW3_COMPCODE0) & 0xf; 636 - bytes_xfrd = etd_readl(imx21, etd_num, 3) & 0x3ff; 637 - 638 - /* Input doesn't always fill the buffer, don't generate an error 639 - * when this happens. 
640 - */ 641 - if (dir_in && (cc == TD_DATAUNDERRUN)) 642 - cc = TD_CC_NOERROR; 643 - 644 - if (cc == TD_NOTACCESSED) 645 - bytes_xfrd = 0; 646 - 647 - debug_isoc_completed(imx21, 648 - imx21_hc_get_frame(hcd), td, cc, bytes_xfrd); 649 - if (cc) { 650 - urb_priv->isoc_status = -EXDEV; 651 - dev_dbg(imx21->dev, 652 - "bad iso cc=0x%X frame=%d sched frame=%d " 653 - "cnt=%d len=%d urb=%p etd=%d index=%d\n", 654 - cc, imx21_hc_get_frame(hcd), td->frame, 655 - bytes_xfrd, td->len, urb, etd_num, isoc_index); 656 - } 657 - 658 - if (dir_in) { 659 - clear_toggle_bit(imx21, USBH_XFILLSTAT, etd_mask); 660 - if (!etd->dma_handle) 661 - memcpy_fromio(etd->cpu_buffer, 662 - imx21->regs + USBOTG_DMEM + etd->dmem_offset, 663 - bytes_xfrd); 664 - } 665 - 666 - urb->actual_length += bytes_xfrd; 667 - urb->iso_frame_desc[isoc_index].actual_length = bytes_xfrd; 668 - urb->iso_frame_desc[isoc_index].status = cc_to_error[cc]; 669 - 670 - etd->td = NULL; 671 - etd->urb = NULL; 672 - etd->ep = NULL; 673 - 674 - if (--urb_priv->isoc_remaining == 0) 675 - urb_done(hcd, urb, urb_priv->isoc_status); 676 - 677 - schedule_isoc_etds(hcd, ep); 678 - } 679 - 680 - static struct ep_priv *alloc_isoc_ep( 681 - struct imx21 *imx21, struct usb_host_endpoint *ep) 682 - { 683 - struct ep_priv *ep_priv; 684 - int i; 685 - 686 - ep_priv = kzalloc(sizeof(struct ep_priv), GFP_ATOMIC); 687 - if (!ep_priv) 688 - return NULL; 689 - 690 - for (i = 0; i < NUM_ISO_ETDS; i++) 691 - ep_priv->etd[i] = -1; 692 - 693 - INIT_LIST_HEAD(&ep_priv->td_list); 694 - ep_priv->ep = ep; 695 - ep->hcpriv = ep_priv; 696 - return ep_priv; 697 - } 698 - 699 - static int alloc_isoc_etds(struct imx21 *imx21, struct ep_priv *ep_priv) 700 - { 701 - int i, j; 702 - int etd_num; 703 - 704 - /* Allocate the ETDs if required */ 705 - for (i = 0; i < NUM_ISO_ETDS; i++) { 706 - if (ep_priv->etd[i] < 0) { 707 - etd_num = alloc_etd(imx21); 708 - if (etd_num < 0) 709 - goto alloc_etd_failed; 710 - 711 - ep_priv->etd[i] = etd_num; 712 - 
imx21->etd[etd_num].ep = ep_priv->ep; 713 - } 714 - } 715 - return 0; 716 - 717 - alloc_etd_failed: 718 - dev_err(imx21->dev, "isoc: Couldn't allocate etd\n"); 719 - for (j = 0; j < i; j++) { 720 - free_etd(imx21, ep_priv->etd[j]); 721 - ep_priv->etd[j] = -1; 722 - } 723 - return -ENOMEM; 724 - } 725 - 726 - static int imx21_hc_urb_enqueue_isoc(struct usb_hcd *hcd, 727 - struct usb_host_endpoint *ep, 728 - struct urb *urb, gfp_t mem_flags) 729 - { 730 - struct imx21 *imx21 = hcd_to_imx21(hcd); 731 - struct urb_priv *urb_priv; 732 - unsigned long flags; 733 - struct ep_priv *ep_priv; 734 - struct td *td = NULL; 735 - int i; 736 - int ret; 737 - int cur_frame; 738 - u16 maxpacket; 739 - 740 - urb_priv = kzalloc(sizeof(struct urb_priv), mem_flags); 741 - if (urb_priv == NULL) 742 - return -ENOMEM; 743 - 744 - urb_priv->isoc_td = kcalloc(urb->number_of_packets, sizeof(struct td), 745 - mem_flags); 746 - if (urb_priv->isoc_td == NULL) { 747 - ret = -ENOMEM; 748 - goto alloc_td_failed; 749 - } 750 - 751 - spin_lock_irqsave(&imx21->lock, flags); 752 - 753 - if (ep->hcpriv == NULL) { 754 - ep_priv = alloc_isoc_ep(imx21, ep); 755 - if (ep_priv == NULL) { 756 - ret = -ENOMEM; 757 - goto alloc_ep_failed; 758 - } 759 - } else { 760 - ep_priv = ep->hcpriv; 761 - } 762 - 763 - ret = alloc_isoc_etds(imx21, ep_priv); 764 - if (ret) 765 - goto alloc_etd_failed; 766 - 767 - ret = usb_hcd_link_urb_to_ep(hcd, urb); 768 - if (ret) 769 - goto link_failed; 770 - 771 - urb->status = -EINPROGRESS; 772 - urb->actual_length = 0; 773 - urb->error_count = 0; 774 - urb->hcpriv = urb_priv; 775 - urb_priv->ep = ep; 776 - 777 - /* allocate data memory for largest packets if not already done */ 778 - maxpacket = usb_maxpacket(urb->dev, urb->pipe, usb_pipeout(urb->pipe)); 779 - for (i = 0; i < NUM_ISO_ETDS; i++) { 780 - struct etd_priv *etd = &imx21->etd[ep_priv->etd[i]]; 781 - 782 - if (etd->dmem_size > 0 && etd->dmem_size < maxpacket) { 783 - /* not sure if this can really occur.... 
*/ 784 - dev_err(imx21->dev, "increasing isoc buffer %d->%d\n", 785 - etd->dmem_size, maxpacket); 786 - ret = -EMSGSIZE; 787 - goto alloc_dmem_failed; 788 - } 789 - 790 - if (etd->dmem_size == 0) { 791 - etd->dmem_offset = alloc_dmem(imx21, maxpacket, ep); 792 - if (etd->dmem_offset < 0) { 793 - dev_dbg(imx21->dev, "failed alloc isoc dmem\n"); 794 - ret = -EAGAIN; 795 - goto alloc_dmem_failed; 796 - } 797 - etd->dmem_size = maxpacket; 798 - } 799 - } 800 - 801 - /* calculate frame */ 802 - cur_frame = imx21_hc_get_frame(hcd); 803 - i = 0; 804 - if (list_empty(&ep_priv->td_list)) { 805 - urb->start_frame = wrap_frame(cur_frame + 5); 806 - } else { 807 - urb->start_frame = wrap_frame(list_entry(ep_priv->td_list.prev, 808 - struct td, list)->frame + urb->interval); 809 - 810 - if (frame_after(cur_frame, urb->start_frame)) { 811 - dev_dbg(imx21->dev, 812 - "enqueue: adjusting iso start %d (cur=%d) asap=%d\n", 813 - urb->start_frame, cur_frame, 814 - (urb->transfer_flags & URB_ISO_ASAP) != 0); 815 - i = DIV_ROUND_UP(wrap_frame( 816 - cur_frame - urb->start_frame), 817 - urb->interval); 818 - 819 - /* Treat underruns as if URB_ISO_ASAP was set */ 820 - if ((urb->transfer_flags & URB_ISO_ASAP) || 821 - i >= urb->number_of_packets) { 822 - urb->start_frame = wrap_frame(urb->start_frame 823 - + i * urb->interval); 824 - i = 0; 825 - } 826 - } 827 - } 828 - 829 - /* set up transfers */ 830 - urb_priv->isoc_remaining = urb->number_of_packets - i; 831 - td = urb_priv->isoc_td; 832 - for (; i < urb->number_of_packets; i++, td++) { 833 - unsigned int offset = urb->iso_frame_desc[i].offset; 834 - td->ep = ep; 835 - td->urb = urb; 836 - td->len = urb->iso_frame_desc[i].length; 837 - td->isoc_index = i; 838 - td->frame = wrap_frame(urb->start_frame + urb->interval * i); 839 - td->dma_handle = urb->transfer_dma + offset; 840 - td->cpu_buffer = urb->transfer_buffer + offset; 841 - list_add_tail(&td->list, &ep_priv->td_list); 842 - } 843 - 844 - dev_vdbg(imx21->dev, "setup %d packets 
for iso frame %d->%d\n", 845 - urb->number_of_packets, urb->start_frame, td->frame); 846 - 847 - debug_urb_submitted(imx21, urb); 848 - schedule_isoc_etds(hcd, ep); 849 - 850 - spin_unlock_irqrestore(&imx21->lock, flags); 851 - return 0; 852 - 853 - alloc_dmem_failed: 854 - usb_hcd_unlink_urb_from_ep(hcd, urb); 855 - 856 - link_failed: 857 - alloc_etd_failed: 858 - alloc_ep_failed: 859 - spin_unlock_irqrestore(&imx21->lock, flags); 860 - kfree(urb_priv->isoc_td); 861 - 862 - alloc_td_failed: 863 - kfree(urb_priv); 864 - return ret; 865 - } 866 - 867 - static void dequeue_isoc_urb(struct imx21 *imx21, 868 - struct urb *urb, struct ep_priv *ep_priv) 869 - { 870 - struct urb_priv *urb_priv = urb->hcpriv; 871 - struct td *td, *tmp; 872 - int i; 873 - 874 - if (urb_priv->active) { 875 - for (i = 0; i < NUM_ISO_ETDS; i++) { 876 - int etd_num = ep_priv->etd[i]; 877 - if (etd_num != -1 && imx21->etd[etd_num].urb == urb) { 878 - struct etd_priv *etd = imx21->etd + etd_num; 879 - 880 - reset_etd(imx21, etd_num); 881 - free_dmem(imx21, etd); 882 - } 883 - } 884 - } 885 - 886 - list_for_each_entry_safe(td, tmp, &ep_priv->td_list, list) { 887 - if (td->urb == urb) { 888 - dev_vdbg(imx21->dev, "removing td %p\n", td); 889 - list_del(&td->list); 890 - } 891 - } 892 - } 893 - 894 - /* =========================================== */ 895 - /* NON ISOC Handling ... 
*/ 896 - /* =========================================== */ 897 - 898 - static void schedule_nonisoc_etd(struct imx21 *imx21, struct urb *urb) 899 - { 900 - unsigned int pipe = urb->pipe; 901 - struct urb_priv *urb_priv = urb->hcpriv; 902 - struct ep_priv *ep_priv = urb_priv->ep->hcpriv; 903 - int state = urb_priv->state; 904 - int etd_num = ep_priv->etd[0]; 905 - struct etd_priv *etd; 906 - u32 count; 907 - u16 etd_buf_size; 908 - u16 maxpacket; 909 - u8 dir; 910 - u8 bufround; 911 - u8 datatoggle; 912 - u8 interval = 0; 913 - u8 relpolpos = 0; 914 - 915 - if (etd_num < 0) { 916 - dev_err(imx21->dev, "No valid ETD\n"); 917 - return; 918 - } 919 - if (readl(imx21->regs + USBH_ETDENSET) & (1 << etd_num)) 920 - dev_err(imx21->dev, "submitting to active ETD %d\n", etd_num); 921 - 922 - etd = &imx21->etd[etd_num]; 923 - maxpacket = usb_maxpacket(urb->dev, pipe, usb_pipeout(pipe)); 924 - if (!maxpacket) 925 - maxpacket = 8; 926 - 927 - if (usb_pipecontrol(pipe) && (state != US_CTRL_DATA)) { 928 - if (state == US_CTRL_SETUP) { 929 - dir = TD_DIR_SETUP; 930 - if (unsuitable_for_dma(urb->setup_dma)) 931 - usb_hcd_unmap_urb_setup_for_dma(imx21->hcd, 932 - urb); 933 - etd->dma_handle = urb->setup_dma; 934 - etd->cpu_buffer = urb->setup_packet; 935 - bufround = 0; 936 - count = 8; 937 - datatoggle = TD_TOGGLE_DATA0; 938 - } else { /* US_CTRL_ACK */ 939 - dir = usb_pipeout(pipe) ? TD_DIR_IN : TD_DIR_OUT; 940 - bufround = 0; 941 - count = 0; 942 - datatoggle = TD_TOGGLE_DATA1; 943 - } 944 - } else { 945 - dir = usb_pipeout(pipe) ? TD_DIR_OUT : TD_DIR_IN; 946 - bufround = (dir == TD_DIR_IN) ? 
1 : 0; 947 - if (unsuitable_for_dma(urb->transfer_dma)) 948 - usb_hcd_unmap_urb_for_dma(imx21->hcd, urb); 949 - 950 - etd->dma_handle = urb->transfer_dma; 951 - etd->cpu_buffer = urb->transfer_buffer; 952 - if (usb_pipebulk(pipe) && (state == US_BULK0)) 953 - count = 0; 954 - else 955 - count = urb->transfer_buffer_length; 956 - 957 - if (usb_pipecontrol(pipe)) { 958 - datatoggle = TD_TOGGLE_DATA1; 959 - } else { 960 - if (usb_gettoggle( 961 - urb->dev, 962 - usb_pipeendpoint(urb->pipe), 963 - usb_pipeout(urb->pipe))) 964 - datatoggle = TD_TOGGLE_DATA1; 965 - else 966 - datatoggle = TD_TOGGLE_DATA0; 967 - } 968 - } 969 - 970 - etd->urb = urb; 971 - etd->ep = urb_priv->ep; 972 - etd->len = count; 973 - 974 - if (usb_pipeint(pipe)) { 975 - interval = urb->interval; 976 - relpolpos = (readl(imx21->regs + USBH_FRMNUB) + 1) & 0xff; 977 - } 978 - 979 - /* Write ETD to device memory */ 980 - setup_etd_dword0(imx21, etd_num, urb, dir, maxpacket); 981 - 982 - etd_writel(imx21, etd_num, 2, 983 - (u32) interval << DW2_POLINTERV | 984 - ((u32) relpolpos << DW2_RELPOLPOS) | 985 - ((u32) dir << DW2_DIRPID) | 986 - ((u32) bufround << DW2_BUFROUND) | 987 - ((u32) datatoggle << DW2_DATATOG) | 988 - ((u32) TD_NOTACCESSED << DW2_COMPCODE)); 989 - 990 - /* DMA will always transfer buffer size even if TOBYCNT in DWORD3 991 - is smaller. Make sure we don't overrun the buffer! 992 - */ 993 - if (count && count < maxpacket) 994 - etd_buf_size = count; 995 - else 996 - etd_buf_size = maxpacket; 997 - 998 - etd_writel(imx21, etd_num, 3, 999 - ((u32) (etd_buf_size - 1) << DW3_BUFSIZE) | (u32) count); 1000 - 1001 - if (!count) 1002 - etd->dma_handle = 0; 1003 - 1004 - /* allocate x and y buffer space at once */ 1005 - etd->dmem_size = (count > maxpacket) ? 
maxpacket * 2 : maxpacket; 1006 - etd->dmem_offset = alloc_dmem(imx21, etd->dmem_size, urb_priv->ep); 1007 - if (etd->dmem_offset < 0) { 1008 - /* Setup everything we can in HW and update when we get DMEM */ 1009 - etd_writel(imx21, etd_num, 1, (u32)maxpacket << 16); 1010 - 1011 - dev_dbg(imx21->dev, "Queuing etd %d for DMEM\n", etd_num); 1012 - debug_urb_queued_for_dmem(imx21, urb); 1013 - list_add_tail(&etd->queue, &imx21->queue_for_dmem); 1014 - return; 1015 - } 1016 - 1017 - etd_writel(imx21, etd_num, 1, 1018 - (((u32) etd->dmem_offset + (u32) maxpacket) << DW1_YBUFSRTAD) | 1019 - (u32) etd->dmem_offset); 1020 - 1021 - urb_priv->active = 1; 1022 - 1023 - /* enable the ETD to kick off transfer */ 1024 - dev_vdbg(imx21->dev, "Activating etd %d for %d bytes %s\n", 1025 - etd_num, count, dir != TD_DIR_IN ? "out" : "in"); 1026 - activate_etd(imx21, etd_num, dir); 1027 - 1028 - } 1029 - 1030 - static void nonisoc_etd_done(struct usb_hcd *hcd, int etd_num) 1031 - { 1032 - struct imx21 *imx21 = hcd_to_imx21(hcd); 1033 - struct etd_priv *etd = &imx21->etd[etd_num]; 1034 - struct urb *urb = etd->urb; 1035 - u32 etd_mask = 1 << etd_num; 1036 - struct urb_priv *urb_priv = urb->hcpriv; 1037 - int dir; 1038 - int cc; 1039 - u32 bytes_xfrd; 1040 - int etd_done; 1041 - 1042 - disactivate_etd(imx21, etd_num); 1043 - 1044 - dir = (etd_readl(imx21, etd_num, 0) >> DW0_DIRECT) & 0x3; 1045 - cc = (etd_readl(imx21, etd_num, 2) >> DW2_COMPCODE) & 0xf; 1046 - bytes_xfrd = etd->len - (etd_readl(imx21, etd_num, 3) & 0x1fffff); 1047 - 1048 - /* save toggle carry */ 1049 - usb_settoggle(urb->dev, usb_pipeendpoint(urb->pipe), 1050 - usb_pipeout(urb->pipe), 1051 - (etd_readl(imx21, etd_num, 0) >> DW0_TOGCRY) & 0x1); 1052 - 1053 - if (dir == TD_DIR_IN) { 1054 - clear_toggle_bit(imx21, USBH_XFILLSTAT, etd_mask); 1055 - clear_toggle_bit(imx21, USBH_YFILLSTAT, etd_mask); 1056 - 1057 - if (etd->bounce_buffer) { 1058 - memcpy(etd->cpu_buffer, etd->bounce_buffer, bytes_xfrd); 1059 - 
dma_unmap_single(imx21->dev, 1060 - etd->dma_handle, etd->len, DMA_FROM_DEVICE); 1061 - } else if (!etd->dma_handle && bytes_xfrd) {/* PIO */ 1062 - memcpy_fromio(etd->cpu_buffer, 1063 - imx21->regs + USBOTG_DMEM + etd->dmem_offset, 1064 - bytes_xfrd); 1065 - } 1066 - } 1067 - 1068 - kfree(etd->bounce_buffer); 1069 - etd->bounce_buffer = NULL; 1070 - free_dmem(imx21, etd); 1071 - 1072 - urb->error_count = 0; 1073 - if (!(urb->transfer_flags & URB_SHORT_NOT_OK) 1074 - && (cc == TD_DATAUNDERRUN)) 1075 - cc = TD_CC_NOERROR; 1076 - 1077 - if (cc != 0) 1078 - dev_vdbg(imx21->dev, "cc is 0x%x\n", cc); 1079 - 1080 - etd_done = (cc_to_error[cc] != 0); /* stop if error */ 1081 - 1082 - switch (usb_pipetype(urb->pipe)) { 1083 - case PIPE_CONTROL: 1084 - switch (urb_priv->state) { 1085 - case US_CTRL_SETUP: 1086 - if (urb->transfer_buffer_length > 0) 1087 - urb_priv->state = US_CTRL_DATA; 1088 - else 1089 - urb_priv->state = US_CTRL_ACK; 1090 - break; 1091 - case US_CTRL_DATA: 1092 - urb->actual_length += bytes_xfrd; 1093 - urb_priv->state = US_CTRL_ACK; 1094 - break; 1095 - case US_CTRL_ACK: 1096 - etd_done = 1; 1097 - break; 1098 - default: 1099 - dev_err(imx21->dev, 1100 - "Invalid pipe state %d\n", urb_priv->state); 1101 - etd_done = 1; 1102 - break; 1103 - } 1104 - break; 1105 - 1106 - case PIPE_BULK: 1107 - urb->actual_length += bytes_xfrd; 1108 - if ((urb_priv->state == US_BULK) 1109 - && (urb->transfer_flags & URB_ZERO_PACKET) 1110 - && urb->transfer_buffer_length > 0 1111 - && ((urb->transfer_buffer_length % 1112 - usb_maxpacket(urb->dev, urb->pipe, 1113 - usb_pipeout(urb->pipe))) == 0)) { 1114 - /* need a 0-packet */ 1115 - urb_priv->state = US_BULK0; 1116 - } else { 1117 - etd_done = 1; 1118 - } 1119 - break; 1120 - 1121 - case PIPE_INTERRUPT: 1122 - urb->actual_length += bytes_xfrd; 1123 - etd_done = 1; 1124 - break; 1125 - } 1126 - 1127 - if (etd_done) 1128 - nonisoc_urb_completed_for_etd(imx21, etd, cc_to_error[cc]); 1129 - else { 1130 - dev_vdbg(imx21->dev, 
"next state=%d\n", urb_priv->state); 1131 - schedule_nonisoc_etd(imx21, urb); 1132 - } 1133 - } 1134 - 1135 - 1136 - static struct ep_priv *alloc_ep(void) 1137 - { 1138 - int i; 1139 - struct ep_priv *ep_priv; 1140 - 1141 - ep_priv = kzalloc(sizeof(struct ep_priv), GFP_ATOMIC); 1142 - if (!ep_priv) 1143 - return NULL; 1144 - 1145 - for (i = 0; i < NUM_ISO_ETDS; ++i) 1146 - ep_priv->etd[i] = -1; 1147 - 1148 - return ep_priv; 1149 - } 1150 - 1151 - static int imx21_hc_urb_enqueue(struct usb_hcd *hcd, 1152 - struct urb *urb, gfp_t mem_flags) 1153 - { 1154 - struct imx21 *imx21 = hcd_to_imx21(hcd); 1155 - struct usb_host_endpoint *ep = urb->ep; 1156 - struct urb_priv *urb_priv; 1157 - struct ep_priv *ep_priv; 1158 - struct etd_priv *etd; 1159 - int ret; 1160 - unsigned long flags; 1161 - 1162 - dev_vdbg(imx21->dev, 1163 - "enqueue urb=%p ep=%p len=%d " 1164 - "buffer=%p dma=%pad setupBuf=%p setupDma=%pad\n", 1165 - urb, ep, 1166 - urb->transfer_buffer_length, 1167 - urb->transfer_buffer, &urb->transfer_dma, 1168 - urb->setup_packet, &urb->setup_dma); 1169 - 1170 - if (usb_pipeisoc(urb->pipe)) 1171 - return imx21_hc_urb_enqueue_isoc(hcd, ep, urb, mem_flags); 1172 - 1173 - urb_priv = kzalloc(sizeof(struct urb_priv), mem_flags); 1174 - if (!urb_priv) 1175 - return -ENOMEM; 1176 - 1177 - spin_lock_irqsave(&imx21->lock, flags); 1178 - 1179 - ep_priv = ep->hcpriv; 1180 - if (ep_priv == NULL) { 1181 - ep_priv = alloc_ep(); 1182 - if (!ep_priv) { 1183 - ret = -ENOMEM; 1184 - goto failed_alloc_ep; 1185 - } 1186 - ep->hcpriv = ep_priv; 1187 - ep_priv->ep = ep; 1188 - } 1189 - 1190 - ret = usb_hcd_link_urb_to_ep(hcd, urb); 1191 - if (ret) 1192 - goto failed_link; 1193 - 1194 - urb->status = -EINPROGRESS; 1195 - urb->actual_length = 0; 1196 - urb->error_count = 0; 1197 - urb->hcpriv = urb_priv; 1198 - urb_priv->ep = ep; 1199 - 1200 - switch (usb_pipetype(urb->pipe)) { 1201 - case PIPE_CONTROL: 1202 - urb_priv->state = US_CTRL_SETUP; 1203 - break; 1204 - case PIPE_BULK: 1205 - 
urb_priv->state = US_BULK; 1206 - break; 1207 - } 1208 - 1209 - debug_urb_submitted(imx21, urb); 1210 - if (ep_priv->etd[0] < 0) { 1211 - if (ep_priv->waiting_etd) { 1212 - dev_dbg(imx21->dev, 1213 - "no ETD available already queued %p\n", 1214 - ep_priv); 1215 - debug_urb_queued_for_etd(imx21, urb); 1216 - goto out; 1217 - } 1218 - ep_priv->etd[0] = alloc_etd(imx21); 1219 - if (ep_priv->etd[0] < 0) { 1220 - dev_dbg(imx21->dev, 1221 - "no ETD available queueing %p\n", ep_priv); 1222 - debug_urb_queued_for_etd(imx21, urb); 1223 - list_add_tail(&ep_priv->queue, &imx21->queue_for_etd); 1224 - ep_priv->waiting_etd = 1; 1225 - goto out; 1226 - } 1227 - } 1228 - 1229 - /* Schedule if no URB already active for this endpoint */ 1230 - etd = &imx21->etd[ep_priv->etd[0]]; 1231 - if (etd->urb == NULL) { 1232 - DEBUG_LOG_FRAME(imx21, etd, last_req); 1233 - schedule_nonisoc_etd(imx21, urb); 1234 - } 1235 - 1236 - out: 1237 - spin_unlock_irqrestore(&imx21->lock, flags); 1238 - return 0; 1239 - 1240 - failed_link: 1241 - failed_alloc_ep: 1242 - spin_unlock_irqrestore(&imx21->lock, flags); 1243 - kfree(urb_priv); 1244 - return ret; 1245 - } 1246 - 1247 - static int imx21_hc_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, 1248 - int status) 1249 - { 1250 - struct imx21 *imx21 = hcd_to_imx21(hcd); 1251 - unsigned long flags; 1252 - struct usb_host_endpoint *ep; 1253 - struct ep_priv *ep_priv; 1254 - struct urb_priv *urb_priv = urb->hcpriv; 1255 - int ret = -EINVAL; 1256 - 1257 - dev_vdbg(imx21->dev, "dequeue urb=%p iso=%d status=%d\n", 1258 - urb, usb_pipeisoc(urb->pipe), status); 1259 - 1260 - spin_lock_irqsave(&imx21->lock, flags); 1261 - 1262 - ret = usb_hcd_check_unlink_urb(hcd, urb, status); 1263 - if (ret) 1264 - goto fail; 1265 - ep = urb_priv->ep; 1266 - ep_priv = ep->hcpriv; 1267 - 1268 - debug_urb_unlinked(imx21, urb); 1269 - 1270 - if (usb_pipeisoc(urb->pipe)) { 1271 - dequeue_isoc_urb(imx21, urb, ep_priv); 1272 - schedule_isoc_etds(hcd, ep); 1273 - } else if 
(urb_priv->active) { 1274 - int etd_num = ep_priv->etd[0]; 1275 - if (etd_num != -1) { 1276 - struct etd_priv *etd = &imx21->etd[etd_num]; 1277 - 1278 - disactivate_etd(imx21, etd_num); 1279 - free_dmem(imx21, etd); 1280 - etd->urb = NULL; 1281 - kfree(etd->bounce_buffer); 1282 - etd->bounce_buffer = NULL; 1283 - } 1284 - } 1285 - 1286 - urb_done(hcd, urb, status); 1287 - 1288 - spin_unlock_irqrestore(&imx21->lock, flags); 1289 - return 0; 1290 - 1291 - fail: 1292 - spin_unlock_irqrestore(&imx21->lock, flags); 1293 - return ret; 1294 - } 1295 - 1296 - /* =========================================== */ 1297 - /* Interrupt dispatch */ 1298 - /* =========================================== */ 1299 - 1300 - static void process_etds(struct usb_hcd *hcd, struct imx21 *imx21, int sof) 1301 - { 1302 - int etd_num; 1303 - int enable_sof_int = 0; 1304 - unsigned long flags; 1305 - 1306 - spin_lock_irqsave(&imx21->lock, flags); 1307 - 1308 - for (etd_num = 0; etd_num < USB_NUM_ETD; etd_num++) { 1309 - u32 etd_mask = 1 << etd_num; 1310 - u32 enabled = readl(imx21->regs + USBH_ETDENSET) & etd_mask; 1311 - u32 done = readl(imx21->regs + USBH_ETDDONESTAT) & etd_mask; 1312 - struct etd_priv *etd = &imx21->etd[etd_num]; 1313 - 1314 - 1315 - if (done) { 1316 - DEBUG_LOG_FRAME(imx21, etd, last_int); 1317 - } else { 1318 - /* 1319 - * Kludge warning! 1320 - * 1321 - * When multiple transfers are using the bus we sometimes get into a state 1322 - * where the transfer has completed (the CC field of the ETD is != 0x0F), 1323 - * the ETD has self disabled but the ETDDONESTAT flag is not set 1324 - * (and hence no interrupt occurs). 1325 - * This causes the transfer in question to hang. 1326 - * The kludge below checks for this condition at each SOF and processes any 1327 - * blocked ETDs (after an arbitrary 10 frame wait) 1328 - * 1329 - * With a single active transfer the usbtest test suite will run for days 1330 - * without the kludge. 
1331 - * With other bus activity (eg mass storage) even just test1 will hang without 1332 - * the kludge. 1333 - */ 1334 - u32 dword0; 1335 - int cc; 1336 - 1337 - if (etd->active_count && !enabled) /* suspicious... */ 1338 - enable_sof_int = 1; 1339 - 1340 - if (!sof || enabled || !etd->active_count) 1341 - continue; 1342 - 1343 - cc = etd_readl(imx21, etd_num, 2) >> DW2_COMPCODE; 1344 - if (cc == TD_NOTACCESSED) 1345 - continue; 1346 - 1347 - if (++etd->active_count < 10) 1348 - continue; 1349 - 1350 - dword0 = etd_readl(imx21, etd_num, 0); 1351 - dev_dbg(imx21->dev, 1352 - "unblock ETD %d dev=0x%X ep=0x%X cc=0x%02X!\n", 1353 - etd_num, dword0 & 0x7F, 1354 - (dword0 >> DW0_ENDPNT) & 0x0F, 1355 - cc); 1356 - 1357 - #ifdef DEBUG 1358 - dev_dbg(imx21->dev, 1359 - "frame: act=%d disact=%d" 1360 - " int=%d req=%d cur=%d\n", 1361 - etd->activated_frame, 1362 - etd->disactivated_frame, 1363 - etd->last_int_frame, 1364 - etd->last_req_frame, 1365 - readl(imx21->regs + USBH_FRMNUB)); 1366 - imx21->debug_unblocks++; 1367 - #endif 1368 - etd->active_count = 0; 1369 - /* End of kludge */ 1370 - } 1371 - 1372 - if (etd->ep == NULL || etd->urb == NULL) { 1373 - dev_dbg(imx21->dev, 1374 - "Interrupt for unexpected etd %d" 1375 - " ep=%p urb=%p\n", 1376 - etd_num, etd->ep, etd->urb); 1377 - disactivate_etd(imx21, etd_num); 1378 - continue; 1379 - } 1380 - 1381 - if (usb_pipeisoc(etd->urb->pipe)) 1382 - isoc_etd_done(hcd, etd_num); 1383 - else 1384 - nonisoc_etd_done(hcd, etd_num); 1385 - } 1386 - 1387 - /* only enable SOF interrupt if it may be needed for the kludge */ 1388 - if (enable_sof_int) 1389 - set_register_bits(imx21, USBH_SYSIEN, USBH_SYSIEN_SOFINT); 1390 - else 1391 - clear_register_bits(imx21, USBH_SYSIEN, USBH_SYSIEN_SOFINT); 1392 - 1393 - 1394 - spin_unlock_irqrestore(&imx21->lock, flags); 1395 - } 1396 - 1397 - static irqreturn_t imx21_irq(struct usb_hcd *hcd) 1398 - { 1399 - struct imx21 *imx21 = hcd_to_imx21(hcd); 1400 - u32 ints = readl(imx21->regs + 
USBH_SYSISR); 1401 - 1402 - if (ints & USBH_SYSIEN_HERRINT) 1403 - dev_dbg(imx21->dev, "Scheduling error\n"); 1404 - 1405 - if (ints & USBH_SYSIEN_SORINT) 1406 - dev_dbg(imx21->dev, "Scheduling overrun\n"); 1407 - 1408 - if (ints & (USBH_SYSISR_DONEINT | USBH_SYSISR_SOFINT)) 1409 - process_etds(hcd, imx21, ints & USBH_SYSISR_SOFINT); 1410 - 1411 - writel(ints, imx21->regs + USBH_SYSISR); 1412 - return IRQ_HANDLED; 1413 - } 1414 - 1415 - static void imx21_hc_endpoint_disable(struct usb_hcd *hcd, 1416 - struct usb_host_endpoint *ep) 1417 - { 1418 - struct imx21 *imx21 = hcd_to_imx21(hcd); 1419 - unsigned long flags; 1420 - struct ep_priv *ep_priv; 1421 - int i; 1422 - 1423 - if (ep == NULL) 1424 - return; 1425 - 1426 - spin_lock_irqsave(&imx21->lock, flags); 1427 - ep_priv = ep->hcpriv; 1428 - dev_vdbg(imx21->dev, "disable ep=%p, ep->hcpriv=%p\n", ep, ep_priv); 1429 - 1430 - if (!list_empty(&ep->urb_list)) 1431 - dev_dbg(imx21->dev, "ep's URB list is not empty\n"); 1432 - 1433 - if (ep_priv != NULL) { 1434 - for (i = 0; i < NUM_ISO_ETDS; i++) { 1435 - if (ep_priv->etd[i] > -1) 1436 - dev_dbg(imx21->dev, "free etd %d for disable\n", 1437 - ep_priv->etd[i]); 1438 - 1439 - free_etd(imx21, ep_priv->etd[i]); 1440 - } 1441 - kfree(ep_priv); 1442 - ep->hcpriv = NULL; 1443 - } 1444 - 1445 - for (i = 0; i < USB_NUM_ETD; i++) { 1446 - if (imx21->etd[i].alloc && imx21->etd[i].ep == ep) { 1447 - dev_err(imx21->dev, 1448 - "Active etd %d for disabled ep=%p!\n", i, ep); 1449 - free_etd(imx21, i); 1450 - } 1451 - } 1452 - free_epdmem(imx21, ep); 1453 - spin_unlock_irqrestore(&imx21->lock, flags); 1454 - } 1455 - 1456 - /* =========================================== */ 1457 - /* Hub handling */ 1458 - /* =========================================== */ 1459 - 1460 - static int get_hub_descriptor(struct usb_hcd *hcd, 1461 - struct usb_hub_descriptor *desc) 1462 - { 1463 - struct imx21 *imx21 = hcd_to_imx21(hcd); 1464 - desc->bDescriptorType = USB_DT_HUB; /* HUB descriptor */ 1465 - 
desc->bHubContrCurrent = 0; 1466 - 1467 - desc->bNbrPorts = readl(imx21->regs + USBH_ROOTHUBA) 1468 - & USBH_ROOTHUBA_NDNSTMPRT_MASK; 1469 - desc->bDescLength = 9; 1470 - desc->bPwrOn2PwrGood = 0; 1471 - desc->wHubCharacteristics = (__force __u16) cpu_to_le16( 1472 - HUB_CHAR_NO_LPSM | /* No power switching */ 1473 - HUB_CHAR_NO_OCPM); /* No over current protection */ 1474 - 1475 - desc->u.hs.DeviceRemovable[0] = 1 << 1; 1476 - desc->u.hs.DeviceRemovable[1] = ~0; 1477 - return 0; 1478 - } 1479 - 1480 - static int imx21_hc_hub_status_data(struct usb_hcd *hcd, char *buf) 1481 - { 1482 - struct imx21 *imx21 = hcd_to_imx21(hcd); 1483 - int ports; 1484 - int changed = 0; 1485 - int i; 1486 - unsigned long flags; 1487 - 1488 - spin_lock_irqsave(&imx21->lock, flags); 1489 - ports = readl(imx21->regs + USBH_ROOTHUBA) 1490 - & USBH_ROOTHUBA_NDNSTMPRT_MASK; 1491 - if (ports > 7) { 1492 - ports = 7; 1493 - dev_err(imx21->dev, "ports %d > 7\n", ports); 1494 - } 1495 - for (i = 0; i < ports; i++) { 1496 - if (readl(imx21->regs + USBH_PORTSTAT(i)) & 1497 - (USBH_PORTSTAT_CONNECTSC | 1498 - USBH_PORTSTAT_PRTENBLSC | 1499 - USBH_PORTSTAT_PRTSTATSC | 1500 - USBH_PORTSTAT_OVRCURIC | 1501 - USBH_PORTSTAT_PRTRSTSC)) { 1502 - 1503 - changed = 1; 1504 - buf[0] |= 1 << (i + 1); 1505 - } 1506 - } 1507 - spin_unlock_irqrestore(&imx21->lock, flags); 1508 - 1509 - if (changed) 1510 - dev_info(imx21->dev, "Hub status changed\n"); 1511 - return changed; 1512 - } 1513 - 1514 - static int imx21_hc_hub_control(struct usb_hcd *hcd, 1515 - u16 typeReq, 1516 - u16 wValue, u16 wIndex, char *buf, u16 wLength) 1517 - { 1518 - struct imx21 *imx21 = hcd_to_imx21(hcd); 1519 - int rc = 0; 1520 - u32 status_write = 0; 1521 - 1522 - switch (typeReq) { 1523 - case ClearHubFeature: 1524 - dev_dbg(imx21->dev, "ClearHubFeature\n"); 1525 - switch (wValue) { 1526 - case C_HUB_OVER_CURRENT: 1527 - dev_dbg(imx21->dev, " OVER_CURRENT\n"); 1528 - break; 1529 - case C_HUB_LOCAL_POWER: 1530 - dev_dbg(imx21->dev, " 
LOCAL_POWER\n"); 1531 - break; 1532 - default: 1533 - dev_dbg(imx21->dev, " unknown\n"); 1534 - rc = -EINVAL; 1535 - break; 1536 - } 1537 - break; 1538 - 1539 - case ClearPortFeature: 1540 - dev_dbg(imx21->dev, "ClearPortFeature\n"); 1541 - switch (wValue) { 1542 - case USB_PORT_FEAT_ENABLE: 1543 - dev_dbg(imx21->dev, " ENABLE\n"); 1544 - status_write = USBH_PORTSTAT_CURCONST; 1545 - break; 1546 - case USB_PORT_FEAT_SUSPEND: 1547 - dev_dbg(imx21->dev, " SUSPEND\n"); 1548 - status_write = USBH_PORTSTAT_PRTOVRCURI; 1549 - break; 1550 - case USB_PORT_FEAT_POWER: 1551 - dev_dbg(imx21->dev, " POWER\n"); 1552 - status_write = USBH_PORTSTAT_LSDEVCON; 1553 - break; 1554 - case USB_PORT_FEAT_C_ENABLE: 1555 - dev_dbg(imx21->dev, " C_ENABLE\n"); 1556 - status_write = USBH_PORTSTAT_PRTENBLSC; 1557 - break; 1558 - case USB_PORT_FEAT_C_SUSPEND: 1559 - dev_dbg(imx21->dev, " C_SUSPEND\n"); 1560 - status_write = USBH_PORTSTAT_PRTSTATSC; 1561 - break; 1562 - case USB_PORT_FEAT_C_CONNECTION: 1563 - dev_dbg(imx21->dev, " C_CONNECTION\n"); 1564 - status_write = USBH_PORTSTAT_CONNECTSC; 1565 - break; 1566 - case USB_PORT_FEAT_C_OVER_CURRENT: 1567 - dev_dbg(imx21->dev, " C_OVER_CURRENT\n"); 1568 - status_write = USBH_PORTSTAT_OVRCURIC; 1569 - break; 1570 - case USB_PORT_FEAT_C_RESET: 1571 - dev_dbg(imx21->dev, " C_RESET\n"); 1572 - status_write = USBH_PORTSTAT_PRTRSTSC; 1573 - break; 1574 - default: 1575 - dev_dbg(imx21->dev, " unknown\n"); 1576 - rc = -EINVAL; 1577 - break; 1578 - } 1579 - 1580 - break; 1581 - 1582 - case GetHubDescriptor: 1583 - dev_dbg(imx21->dev, "GetHubDescriptor\n"); 1584 - rc = get_hub_descriptor(hcd, (void *)buf); 1585 - break; 1586 - 1587 - case GetHubStatus: 1588 - dev_dbg(imx21->dev, " GetHubStatus\n"); 1589 - *(__le32 *) buf = 0; 1590 - break; 1591 - 1592 - case GetPortStatus: 1593 - dev_dbg(imx21->dev, "GetPortStatus: port: %d, 0x%x\n", 1594 - wIndex, USBH_PORTSTAT(wIndex - 1)); 1595 - *(__le32 *) buf = readl(imx21->regs + 1596 - USBH_PORTSTAT(wIndex - 1)); 
1597 - break; 1598 - 1599 - case SetHubFeature: 1600 - dev_dbg(imx21->dev, "SetHubFeature\n"); 1601 - switch (wValue) { 1602 - case C_HUB_OVER_CURRENT: 1603 - dev_dbg(imx21->dev, " OVER_CURRENT\n"); 1604 - break; 1605 - 1606 - case C_HUB_LOCAL_POWER: 1607 - dev_dbg(imx21->dev, " LOCAL_POWER\n"); 1608 - break; 1609 - default: 1610 - dev_dbg(imx21->dev, " unknown\n"); 1611 - rc = -EINVAL; 1612 - break; 1613 - } 1614 - 1615 - break; 1616 - 1617 - case SetPortFeature: 1618 - dev_dbg(imx21->dev, "SetPortFeature\n"); 1619 - switch (wValue) { 1620 - case USB_PORT_FEAT_SUSPEND: 1621 - dev_dbg(imx21->dev, " SUSPEND\n"); 1622 - status_write = USBH_PORTSTAT_PRTSUSPST; 1623 - break; 1624 - case USB_PORT_FEAT_POWER: 1625 - dev_dbg(imx21->dev, " POWER\n"); 1626 - status_write = USBH_PORTSTAT_PRTPWRST; 1627 - break; 1628 - case USB_PORT_FEAT_RESET: 1629 - dev_dbg(imx21->dev, " RESET\n"); 1630 - status_write = USBH_PORTSTAT_PRTRSTST; 1631 - break; 1632 - default: 1633 - dev_dbg(imx21->dev, " unknown\n"); 1634 - rc = -EINVAL; 1635 - break; 1636 - } 1637 - break; 1638 - 1639 - default: 1640 - dev_dbg(imx21->dev, " unknown\n"); 1641 - rc = -EINVAL; 1642 - break; 1643 - } 1644 - 1645 - if (status_write) 1646 - writel(status_write, imx21->regs + USBH_PORTSTAT(wIndex - 1)); 1647 - return rc; 1648 - } 1649 - 1650 - /* =========================================== */ 1651 - /* Host controller management */ 1652 - /* =========================================== */ 1653 - 1654 - static int imx21_hc_reset(struct usb_hcd *hcd) 1655 - { 1656 - struct imx21 *imx21 = hcd_to_imx21(hcd); 1657 - unsigned long timeout; 1658 - unsigned long flags; 1659 - 1660 - spin_lock_irqsave(&imx21->lock, flags); 1661 - 1662 - /* Reset the Host controller modules */ 1663 - writel(USBOTG_RST_RSTCTRL | USBOTG_RST_RSTRH | 1664 - USBOTG_RST_RSTHSIE | USBOTG_RST_RSTHC, 1665 - imx21->regs + USBOTG_RST_CTRL); 1666 - 1667 - /* Wait for reset to finish */ 1668 - timeout = jiffies + HZ; 1669 - while (readl(imx21->regs + 
USBOTG_RST_CTRL) != 0) { 1670 - if (time_after(jiffies, timeout)) { 1671 - spin_unlock_irqrestore(&imx21->lock, flags); 1672 - dev_err(imx21->dev, "timeout waiting for reset\n"); 1673 - return -ETIMEDOUT; 1674 - } 1675 - spin_unlock_irq(&imx21->lock); 1676 - schedule_timeout_uninterruptible(1); 1677 - spin_lock_irq(&imx21->lock); 1678 - } 1679 - spin_unlock_irqrestore(&imx21->lock, flags); 1680 - return 0; 1681 - } 1682 - 1683 - static int imx21_hc_start(struct usb_hcd *hcd) 1684 - { 1685 - struct imx21 *imx21 = hcd_to_imx21(hcd); 1686 - unsigned long flags; 1687 - int i, j; 1688 - u32 hw_mode = USBOTG_HWMODE_CRECFG_HOST; 1689 - u32 usb_control = 0; 1690 - 1691 - hw_mode |= ((imx21->pdata->host_xcvr << USBOTG_HWMODE_HOSTXCVR_SHIFT) & 1692 - USBOTG_HWMODE_HOSTXCVR_MASK); 1693 - hw_mode |= ((imx21->pdata->otg_xcvr << USBOTG_HWMODE_OTGXCVR_SHIFT) & 1694 - USBOTG_HWMODE_OTGXCVR_MASK); 1695 - 1696 - if (imx21->pdata->host1_txenoe) 1697 - usb_control |= USBCTRL_HOST1_TXEN_OE; 1698 - 1699 - if (!imx21->pdata->host1_xcverless) 1700 - usb_control |= USBCTRL_HOST1_BYP_TLL; 1701 - 1702 - if (imx21->pdata->otg_ext_xcvr) 1703 - usb_control |= USBCTRL_OTC_RCV_RXDP; 1704 - 1705 - 1706 - spin_lock_irqsave(&imx21->lock, flags); 1707 - 1708 - writel((USBOTG_CLK_CTRL_HST | USBOTG_CLK_CTRL_MAIN), 1709 - imx21->regs + USBOTG_CLK_CTRL); 1710 - writel(hw_mode, imx21->regs + USBOTG_HWMODE); 1711 - writel(usb_control, imx21->regs + USBCTRL); 1712 - writel(USB_MISCCONTROL_SKPRTRY | USB_MISCCONTROL_ARBMODE, 1713 - imx21->regs + USB_MISCCONTROL); 1714 - 1715 - /* Clear the ETDs */ 1716 - for (i = 0; i < USB_NUM_ETD; i++) 1717 - for (j = 0; j < 4; j++) 1718 - etd_writel(imx21, i, j, 0); 1719 - 1720 - /* Take the HC out of reset */ 1721 - writel(USBH_HOST_CTRL_HCUSBSTE_OPERATIONAL | USBH_HOST_CTRL_CTLBLKSR_1, 1722 - imx21->regs + USBH_HOST_CTRL); 1723 - 1724 - /* Enable ports */ 1725 - if (imx21->pdata->enable_otg_host) 1726 - writel(USBH_PORTSTAT_PRTPWRST | USBH_PORTSTAT_PRTENABST, 1727 - 
-		imx21->regs + USBH_PORTSTAT(0));
-
-	if (imx21->pdata->enable_host1)
-		writel(USBH_PORTSTAT_PRTPWRST | USBH_PORTSTAT_PRTENABST,
-			imx21->regs + USBH_PORTSTAT(1));
-
-	if (imx21->pdata->enable_host2)
-		writel(USBH_PORTSTAT_PRTPWRST | USBH_PORTSTAT_PRTENABST,
-			imx21->regs + USBH_PORTSTAT(2));
-
-
-	hcd->state = HC_STATE_RUNNING;
-
-	/* Enable host controller interrupts */
-	set_register_bits(imx21, USBH_SYSIEN,
-		USBH_SYSIEN_HERRINT |
-		USBH_SYSIEN_DONEINT | USBH_SYSIEN_SORINT);
-	set_register_bits(imx21, USBOTG_CINT_STEN, USBOTG_HCINT);
-
-	spin_unlock_irqrestore(&imx21->lock, flags);
-
-	return 0;
-}
-
-static void imx21_hc_stop(struct usb_hcd *hcd)
-{
-	struct imx21 *imx21 = hcd_to_imx21(hcd);
-	unsigned long flags;
-
-	spin_lock_irqsave(&imx21->lock, flags);
-
-	writel(0, imx21->regs + USBH_SYSIEN);
-	clear_register_bits(imx21, USBOTG_CINT_STEN, USBOTG_HCINT);
-	clear_register_bits(imx21, USBOTG_CLK_CTRL_HST | USBOTG_CLK_CTRL_MAIN,
-		USBOTG_CLK_CTRL);
-	spin_unlock_irqrestore(&imx21->lock, flags);
-}
-
-/* =========================================== */
-/* Driver glue                                 */
-/* =========================================== */
-
-static const struct hc_driver imx21_hc_driver = {
-	.description = hcd_name,
-	.product_desc = "IMX21 USB Host Controller",
-	.hcd_priv_size = sizeof(struct imx21),
-
-	.flags = HCD_DMA | HCD_USB11,
-	.irq = imx21_irq,
-
-	.reset = imx21_hc_reset,
-	.start = imx21_hc_start,
-	.stop = imx21_hc_stop,
-
-	/* I/O requests */
-	.urb_enqueue = imx21_hc_urb_enqueue,
-	.urb_dequeue = imx21_hc_urb_dequeue,
-	.endpoint_disable = imx21_hc_endpoint_disable,
-
-	/* scheduling support */
-	.get_frame_number = imx21_hc_get_frame,
-
-	/* Root hub support */
-	.hub_status_data = imx21_hc_hub_status_data,
-	.hub_control = imx21_hc_hub_control,
-
-};
-
-static struct mx21_usbh_platform_data default_pdata = {
-	.host_xcvr = MX21_USBXCVR_TXDIF_RXDIF,
-	.otg_xcvr = MX21_USBXCVR_TXDIF_RXDIF,
-	.enable_host1 = 1,
-	.enable_host2 = 1,
-	.enable_otg_host = 1,
-
-};
-
-static int imx21_remove(struct platform_device *pdev)
-{
-	struct usb_hcd *hcd = platform_get_drvdata(pdev);
-	struct imx21 *imx21 = hcd_to_imx21(hcd);
-	struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-
-	remove_debug_files(imx21);
-	usb_remove_hcd(hcd);
-
-	if (res != NULL) {
-		clk_disable_unprepare(imx21->clk);
-		clk_put(imx21->clk);
-		iounmap(imx21->regs);
-		release_mem_region(res->start, resource_size(res));
-	}
-
-	kfree(hcd);
-	return 0;
-}
-
-
-static int imx21_probe(struct platform_device *pdev)
-{
-	struct usb_hcd *hcd;
-	struct imx21 *imx21;
-	struct resource *res;
-	int ret;
-	int irq;
-
-	printk(KERN_INFO "%s\n", imx21_hc_driver.product_desc);
-
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (!res)
-		return -ENODEV;
-	irq = platform_get_irq(pdev, 0);
-	if (irq < 0)
-		return irq;
-
-	hcd = usb_create_hcd(&imx21_hc_driver,
-		&pdev->dev, dev_name(&pdev->dev));
-	if (hcd == NULL) {
-		dev_err(&pdev->dev, "Cannot create hcd (%s)\n",
-		    dev_name(&pdev->dev));
-		return -ENOMEM;
-	}
-
-	imx21 = hcd_to_imx21(hcd);
-	imx21->hcd = hcd;
-	imx21->dev = &pdev->dev;
-	imx21->pdata = dev_get_platdata(&pdev->dev);
-	if (!imx21->pdata)
-		imx21->pdata = &default_pdata;
-
-	spin_lock_init(&imx21->lock);
-	INIT_LIST_HEAD(&imx21->dmem_list);
-	INIT_LIST_HEAD(&imx21->queue_for_etd);
-	INIT_LIST_HEAD(&imx21->queue_for_dmem);
-	create_debug_files(imx21);
-
-	res = request_mem_region(res->start, resource_size(res), hcd_name);
-	if (!res) {
-		ret = -EBUSY;
-		goto failed_request_mem;
-	}
-
-	imx21->regs = ioremap(res->start, resource_size(res));
-	if (imx21->regs == NULL) {
-		dev_err(imx21->dev, "Cannot map registers\n");
-		ret = -ENOMEM;
-		goto failed_ioremap;
-	}
-
-	/* Enable clocks source */
-	imx21->clk = clk_get(imx21->dev, NULL);
-	if (IS_ERR(imx21->clk)) {
-		dev_err(imx21->dev, "no clock found\n");
-		ret = PTR_ERR(imx21->clk);
-		goto failed_clock_get;
-	}
-
-	ret = clk_set_rate(imx21->clk, clk_round_rate(imx21->clk, 48000000));
-	if (ret)
-		goto failed_clock_set;
-	ret = clk_prepare_enable(imx21->clk);
-	if (ret)
-		goto failed_clock_enable;
-
-	dev_info(imx21->dev, "Hardware HC revision: 0x%02X\n",
-		(readl(imx21->regs + USBOTG_HWMODE) >> 16) & 0xFF);
-
-	ret = usb_add_hcd(hcd, irq, 0);
-	if (ret != 0) {
-		dev_err(imx21->dev, "usb_add_hcd() returned %d\n", ret);
-		goto failed_add_hcd;
-	}
-	device_wakeup_enable(hcd->self.controller);
-
-	return 0;
-
-failed_add_hcd:
-	clk_disable_unprepare(imx21->clk);
-failed_clock_enable:
-failed_clock_set:
-	clk_put(imx21->clk);
-failed_clock_get:
-	iounmap(imx21->regs);
-failed_ioremap:
-	release_mem_region(res->start, resource_size(res));
-failed_request_mem:
-	remove_debug_files(imx21);
-	usb_put_hcd(hcd);
-	return ret;
-}
-
-static struct platform_driver imx21_hcd_driver = {
-	.driver = {
-		.name = hcd_name,
-	},
-	.probe = imx21_probe,
-	.remove = imx21_remove,
-	.suspend = NULL,
-	.resume = NULL,
-};
-
-module_platform_driver(imx21_hcd_driver);
-
-MODULE_DESCRIPTION("i.MX21 USB Host controller");
-MODULE_AUTHOR("Martin Fuzzey");
-MODULE_LICENSE("GPL");
-MODULE_ALIAS("platform:imx21-hcd");
-431
drivers/usb/host/imx21-hcd.h
···
-/* SPDX-License-Identifier: GPL-2.0+ */
-/*
- * Macros and prototypes for i.MX21
- *
- * Copyright (C) 2006 Loping Dog Embedded Systems
- * Copyright (C) 2009 Martin Fuzzey
- * Originally written by Jay Monkman <jtm@lopingdog.com>
- * Ported to 2.6.30, debugged and enhanced by Martin Fuzzey
- */
-
-#ifndef __LINUX_IMX21_HCD_H__
-#define __LINUX_IMX21_HCD_H__
-
-#ifdef CONFIG_DYNAMIC_DEBUG
-#define DEBUG
-#endif
-
-#include <linux/platform_data/usb-mx2.h>
-
-#define NUM_ISO_ETDS	2
-#define USB_NUM_ETD	32
-#define DMEM_SIZE	4096
-
-/* Register definitions */
-#define USBOTG_HWMODE		0x00
-#define USBOTG_HWMODE_ANASDBEN		(1 << 14)
-#define USBOTG_HWMODE_OTGXCVR_SHIFT	6
-#define USBOTG_HWMODE_OTGXCVR_MASK	(3 << 6)
-#define USBOTG_HWMODE_OTGXCVR_TD_RD	(0 << 6)
-#define USBOTG_HWMODE_OTGXCVR_TS_RD	(2 << 6)
-#define USBOTG_HWMODE_OTGXCVR_TD_RS	(1 << 6)
-#define USBOTG_HWMODE_OTGXCVR_TS_RS	(3 << 6)
-#define USBOTG_HWMODE_HOSTXCVR_SHIFT	4
-#define USBOTG_HWMODE_HOSTXCVR_MASK	(3 << 4)
-#define USBOTG_HWMODE_HOSTXCVR_TD_RD	(0 << 4)
-#define USBOTG_HWMODE_HOSTXCVR_TS_RD	(2 << 4)
-#define USBOTG_HWMODE_HOSTXCVR_TD_RS	(1 << 4)
-#define USBOTG_HWMODE_HOSTXCVR_TS_RS	(3 << 4)
-#define USBOTG_HWMODE_CRECFG_MASK	(3 << 0)
-#define USBOTG_HWMODE_CRECFG_HOST	(1 << 0)
-#define USBOTG_HWMODE_CRECFG_FUNC	(2 << 0)
-#define USBOTG_HWMODE_CRECFG_HNP	(3 << 0)
-
-#define USBOTG_CINT_STAT	0x04
-#define USBOTG_CINT_STEN	0x08
-#define USBOTG_ASHNPINT		(1 << 5)
-#define USBOTG_ASFCINT		(1 << 4)
-#define USBOTG_ASHCINT		(1 << 3)
-#define USBOTG_SHNPINT		(1 << 2)
-#define USBOTG_FCINT		(1 << 1)
-#define USBOTG_HCINT		(1 << 0)
-
-#define USBOTG_CLK_CTRL		0x0c
-#define USBOTG_CLK_CTRL_FUNC	(1 << 2)
-#define USBOTG_CLK_CTRL_HST	(1 << 1)
-#define USBOTG_CLK_CTRL_MAIN	(1 << 0)
-
-#define USBOTG_RST_CTRL		0x10
-#define USBOTG_RST_RSTI2C	(1 << 15)
-#define USBOTG_RST_RSTCTRL	(1 << 5)
-#define USBOTG_RST_RSTFC	(1 << 4)
-#define USBOTG_RST_RSTFSKE	(1 << 3)
-#define USBOTG_RST_RSTRH	(1 << 2)
-#define USBOTG_RST_RSTHSIE	(1 << 1)
-#define USBOTG_RST_RSTHC	(1 << 0)
-
-#define USBOTG_FRM_INTVL	0x14
-#define USBOTG_FRM_REMAIN	0x18
-#define USBOTG_HNP_CSR		0x1c
-#define USBOTG_HNP_ISR		0x2c
-#define USBOTG_HNP_IEN		0x30
-
-#define USBOTG_I2C_TXCVR_REG(x)		(0x100 + (x))
-#define USBOTG_I2C_XCVR_DEVAD		0x118
-#define USBOTG_I2C_SEQ_OP_REG		0x119
-#define USBOTG_I2C_SEQ_RD_STARTAD	0x11a
-#define USBOTG_I2C_OP_CTRL_REG		0x11b
-#define USBOTG_I2C_SCLK_TO_SCK_HPER	0x11e
-#define USBOTG_I2C_MASTER_INT_REG	0x11f
-
-#define USBH_HOST_CTRL		0x80
-#define USBH_HOST_CTRL_HCRESET			(1 << 31)
-#define USBH_HOST_CTRL_SCHDOVR(x)		((x) << 16)
-#define USBH_HOST_CTRL_RMTWUEN			(1 << 4)
-#define USBH_HOST_CTRL_HCUSBSTE_RESET		(0 << 2)
-#define USBH_HOST_CTRL_HCUSBSTE_RESUME		(1 << 2)
-#define USBH_HOST_CTRL_HCUSBSTE_OPERATIONAL	(2 << 2)
-#define USBH_HOST_CTRL_HCUSBSTE_SUSPEND		(3 << 2)
-#define USBH_HOST_CTRL_CTLBLKSR_1		(0 << 0)
-#define USBH_HOST_CTRL_CTLBLKSR_2		(1 << 0)
-#define USBH_HOST_CTRL_CTLBLKSR_3		(2 << 0)
-#define USBH_HOST_CTRL_CTLBLKSR_4		(3 << 0)
-
-#define USBH_SYSISR		0x88
-#define USBH_SYSISR_PSCINT	(1 << 6)
-#define USBH_SYSISR_FMOFINT	(1 << 5)
-#define USBH_SYSISR_HERRINT	(1 << 4)
-#define USBH_SYSISR_RESDETINT	(1 << 3)
-#define USBH_SYSISR_SOFINT	(1 << 2)
-#define USBH_SYSISR_DONEINT	(1 << 1)
-#define USBH_SYSISR_SORINT	(1 << 0)
-
-#define USBH_SYSIEN		0x8c
-#define USBH_SYSIEN_PSCINT	(1 << 6)
-#define USBH_SYSIEN_FMOFINT	(1 << 5)
-#define USBH_SYSIEN_HERRINT	(1 << 4)
-#define USBH_SYSIEN_RESDETINT	(1 << 3)
-#define USBH_SYSIEN_SOFINT	(1 << 2)
-#define USBH_SYSIEN_DONEINT	(1 << 1)
-#define USBH_SYSIEN_SORINT	(1 << 0)
-
-#define USBH_XBUFSTAT		0x98
-#define USBH_YBUFSTAT		0x9c
-#define USBH_XYINTEN		0xa0
-#define USBH_XFILLSTAT		0xa8
-#define USBH_YFILLSTAT		0xac
-#define USBH_ETDENSET		0xc0
-#define USBH_ETDENCLR		0xc4
-#define USBH_IMMEDINT		0xcc
-#define USBH_ETDDONESTAT	0xd0
-#define USBH_ETDDONEEN		0xd4
-#define USBH_FRMNUB		0xe0
-#define USBH_LSTHRESH		0xe4
-
-#define USBH_ROOTHUBA		0xe8
-#define USBH_ROOTHUBA_PWRTOGOOD_MASK	(0xff)
-#define USBH_ROOTHUBA_PWRTOGOOD_SHIFT	(24)
-#define USBH_ROOTHUBA_NOOVRCURP		(1 << 12)
-#define USBH_ROOTHUBA_OVRCURPM		(1 << 11)
-#define USBH_ROOTHUBA_DEVTYPE		(1 << 10)
-#define USBH_ROOTHUBA_PWRSWTMD		(1 << 9)
-#define USBH_ROOTHUBA_NOPWRSWT		(1 << 8)
-#define USBH_ROOTHUBA_NDNSTMPRT_MASK	(0xff)
-
-#define USBH_ROOTHUBB		0xec
-#define USBH_ROOTHUBB_PRTPWRCM(x)	(1 << ((x) + 16))
-#define USBH_ROOTHUBB_DEVREMOVE(x)	(1 << (x))
-
-#define USBH_ROOTSTAT		0xf0
-#define USBH_ROOTSTAT_CLRRMTWUE	(1 << 31)
-#define USBH_ROOTSTAT_OVRCURCHG	(1 << 17)
-#define USBH_ROOTSTAT_DEVCONWUE	(1 << 15)
-#define USBH_ROOTSTAT_OVRCURI	(1 << 1)
-#define USBH_ROOTSTAT_LOCPWRS	(1 << 0)
-
-#define USBH_PORTSTAT(x)	(0xf4 + ((x) * 4))
-#define USBH_PORTSTAT_PRTRSTSC		(1 << 20)
-#define USBH_PORTSTAT_OVRCURIC		(1 << 19)
-#define USBH_PORTSTAT_PRTSTATSC		(1 << 18)
-#define USBH_PORTSTAT_PRTENBLSC		(1 << 17)
-#define USBH_PORTSTAT_CONNECTSC		(1 << 16)
-#define USBH_PORTSTAT_LSDEVCON		(1 << 9)
-#define USBH_PORTSTAT_PRTPWRST		(1 << 8)
-#define USBH_PORTSTAT_PRTRSTST		(1 << 4)
-#define USBH_PORTSTAT_PRTOVRCURI	(1 << 3)
-#define USBH_PORTSTAT_PRTSUSPST		(1 << 2)
-#define USBH_PORTSTAT_PRTENABST		(1 << 1)
-#define USBH_PORTSTAT_CURCONST		(1 << 0)
-
-#define USB_DMAREV		0x800
-#define USB_DMAINTSTAT		0x804
-#define USB_DMAINTSTAT_EPERR	(1 << 1)
-#define USB_DMAINTSTAT_ETDERR	(1 << 0)
-
-#define USB_DMAINTEN		0x808
-#define USB_DMAINTEN_EPERRINTEN		(1 << 1)
-#define USB_DMAINTEN_ETDERRINTEN	(1 << 0)
-
-#define USB_ETDDMAERSTAT	0x80c
-#define USB_EPDMAERSTAT		0x810
-#define USB_ETDDMAEN		0x820
-#define USB_EPDMAEN		0x824
-#define USB_ETDDMAXTEN		0x828
-#define USB_EPDMAXTEN		0x82c
-#define USB_ETDDMAENXYT		0x830
-#define USB_EPDMAENXYT		0x834
-#define USB_ETDDMABST4EN	0x838
-#define USB_EPDMABST4EN		0x83c
-
-#define USB_MISCCONTROL		0x840
-#define USB_MISCCONTROL_ISOPREVFRM	(1 << 3)
-#define USB_MISCCONTROL_SKPRTRY		(1 << 2)
-#define USB_MISCCONTROL_ARBMODE		(1 << 1)
-#define USB_MISCCONTROL_FILTCC		(1 << 0)
-
-#define USB_ETDDMACHANLCLR	0x848
-#define USB_EPDMACHANLCLR	0x84c
-#define USB_ETDSMSA(x)		(0x900 + ((x) * 4))
-#define USB_EPSMSA(x)		(0x980 + ((x) * 4))
-#define USB_ETDDMABUFPTR(x)	(0xa00 + ((x) * 4))
-#define USB_EPDMABUFPTR(x)	(0xa80 + ((x) * 4))
-
-#define USB_ETD_DWORD(x, w)	(0x200 + ((x) * 16) + ((w) * 4))
-#define DW0_ADDRESS	0
-#define DW0_ENDPNT	7
-#define DW0_DIRECT	11
-#define DW0_SPEED	13
-#define DW0_FORMAT	14
-#define DW0_MAXPKTSIZ	16
-#define DW0_HALTED	27
-#define DW0_TOGCRY	28
-#define DW0_SNDNAK	30
-
-#define DW1_XBUFSRTAD	0
-#define DW1_YBUFSRTAD	16
-
-#define DW2_RTRYDELAY	0
-#define DW2_POLINTERV	0
-#define DW2_STARTFRM	0
-#define DW2_RELPOLPOS	8
-#define DW2_DIRPID	16
-#define DW2_BUFROUND	18
-#define DW2_DELAYINT	19
-#define DW2_DATATOG	22
-#define DW2_ERRORCNT	24
-#define DW2_COMPCODE	28
-
-#define DW3_TOTBYECNT	0
-#define DW3_PKTLEN0	0
-#define DW3_COMPCODE0	12
-#define DW3_PKTLEN1	16
-#define DW3_BUFSIZE	21
-#define DW3_COMPCODE1	28
-
-#define USBCTRL			0x600
-#define USBCTRL_I2C_WU_INT_STAT		(1 << 27)
-#define USBCTRL_OTG_WU_INT_STAT		(1 << 26)
-#define USBCTRL_HOST_WU_INT_STAT	(1 << 25)
-#define USBCTRL_FNT_WU_INT_STAT		(1 << 24)
-#define USBCTRL_I2C_WU_INT_EN		(1 << 19)
-#define USBCTRL_OTG_WU_INT_EN		(1 << 18)
-#define USBCTRL_HOST_WU_INT_EN		(1 << 17)
-#define USBCTRL_FNT_WU_INT_EN		(1 << 16)
-#define USBCTRL_OTC_RCV_RXDP		(1 << 13)
-#define USBCTRL_HOST1_BYP_TLL		(1 << 12)
-#define USBCTRL_OTG_BYP_VAL(x)		((x) << 10)
-#define USBCTRL_HOST1_BYP_VAL(x)	((x) << 8)
-#define USBCTRL_OTG_PWR_MASK		(1 << 6)
-#define USBCTRL_HOST1_PWR_MASK		(1 << 5)
-#define USBCTRL_HOST2_PWR_MASK		(1 << 4)
-#define USBCTRL_USB_BYP			(1 << 2)
-#define USBCTRL_HOST1_TXEN_OE		(1 << 1)
-
-#define USBOTG_DMEM		0x1000
-
-/* Values in TD blocks */
-#define TD_DIR_SETUP	0
-#define TD_DIR_OUT	1
-#define TD_DIR_IN	2
-#define TD_FORMAT_CONTROL	0
-#define TD_FORMAT_ISO		1
-#define TD_FORMAT_BULK		2
-#define TD_FORMAT_INT		3
-#define TD_TOGGLE_CARRY	0
-#define TD_TOGGLE_DATA0	2
-#define TD_TOGGLE_DATA1	3
-
-/* control transfer states */
-#define US_CTRL_SETUP	2
-#define US_CTRL_DATA	1
-#define US_CTRL_ACK	0
-
-/* bulk transfer main state and 0-length packet */
-#define US_BULK		1
-#define US_BULK0	0
-
-/*ETD format description*/
-#define IMX_FMT_CTRL	0x0
-#define IMX_FMT_ISO	0x1
-#define IMX_FMT_BULK	0x2
-#define IMX_FMT_INT	0x3
-
-static char fmt_urb_to_etd[4] = {
-/*PIPE_ISOCHRONOUS*/ IMX_FMT_ISO,
-/*PIPE_INTERRUPT*/ IMX_FMT_INT,
-/*PIPE_CONTROL*/ IMX_FMT_CTRL,
-/*PIPE_BULK*/ IMX_FMT_BULK
-};
-
-/* condition (error) CC codes and mapping (OHCI like) */
-
-#define TD_CC_NOERROR		0x00
-#define TD_CC_CRC		0x01
-#define TD_CC_BITSTUFFING	0x02
-#define TD_CC_DATATOGGLEM	0x03
-#define TD_CC_STALL		0x04
-#define TD_DEVNOTRESP		0x05
-#define TD_PIDCHECKFAIL		0x06
-/*#define TD_UNEXPECTEDPID	0x07 - reserved, not active on MX2*/
-#define TD_DATAOVERRUN		0x08
-#define TD_DATAUNDERRUN		0x09
-#define TD_BUFFEROVERRUN	0x0C
-#define TD_BUFFERUNDERRUN	0x0D
-#define TD_SCHEDULEOVERRUN	0x0E
-#define TD_NOTACCESSED		0x0F
-
-static const int cc_to_error[16] = {
-	/* No  Error  */	0,
-	/* CRC Error  */	-EILSEQ,
-	/* Bit Stuff  */	-EPROTO,
-	/* Data Togg  */	-EILSEQ,
-	/* Stall      */	-EPIPE,
-	/* DevNotResp */	-ETIMEDOUT,
-	/* PIDCheck   */	-EPROTO,
-	/* UnExpPID   */	-EPROTO,
-	/* DataOver   */	-EOVERFLOW,
-	/* DataUnder  */	-EREMOTEIO,
-	/* (for hw)   */	-EIO,
-	/* (for hw)   */	-EIO,
-	/* BufferOver */	-ECOMM,
-	/* BuffUnder  */	-ENOSR,
-	/* (for HCD)  */	-ENOSPC,
-	/* (for HCD)  */	-EALREADY
-};
-
-/* HCD data associated with a usb core URB */
-struct urb_priv {
-	struct urb *urb;
-	struct usb_host_endpoint *ep;
-	int active;
-	int state;
-	struct td *isoc_td;
-	int isoc_remaining;
-	int isoc_status;
-};
-
-/* HCD data associated with a usb core endpoint */
-struct ep_priv {
-	struct usb_host_endpoint *ep;
-	struct list_head td_list;
-	struct list_head queue;
-	int etd[NUM_ISO_ETDS];
-	int waiting_etd;
-};
-
-/* isoc packet */
-struct td {
-	struct list_head list;
-	struct urb *urb;
-	struct usb_host_endpoint *ep;
-	dma_addr_t dma_handle;
-	void *cpu_buffer;
-	int len;
-	int frame;
-	int isoc_index;
-};
-
-/* HCD data associated with a hardware ETD */
-struct etd_priv {
-	struct usb_host_endpoint *ep;
-	struct urb *urb;
-	struct td *td;
-	struct list_head queue;
-	dma_addr_t dma_handle;
-	void *cpu_buffer;
-	void *bounce_buffer;
-	int alloc;
-	int len;
-	int dmem_size;
-	int dmem_offset;
-	int active_count;
-#ifdef DEBUG
-	int activated_frame;
-	int disactivated_frame;
-	int last_int_frame;
-	int last_req_frame;
-	u32 submitted_dwords[4];
-#endif
-};
-
-/* Hardware data memory info */
-struct imx21_dmem_area {
-	struct usb_host_endpoint *ep;
-	unsigned int offset;
-	unsigned int size;
-	struct list_head list;
-};
-
-#ifdef DEBUG
-struct debug_usage_stats {
-	unsigned int value;
-	unsigned int maximum;
-};
-
-struct debug_stats {
-	unsigned long submitted;
-	unsigned long completed_ok;
-	unsigned long completed_failed;
-	unsigned long unlinked;
-	unsigned long queue_etd;
-	unsigned long queue_dmem;
-};
-
-struct debug_isoc_trace {
-	int schedule_frame;
-	int submit_frame;
-	int request_len;
-	int done_frame;
-	int done_len;
-	int cc;
-	struct td *td;
-};
-#endif
-
-/* HCD data structure */
-struct imx21 {
-	spinlock_t lock;
-	struct device *dev;
-	struct usb_hcd *hcd;
-	struct mx21_usbh_platform_data *pdata;
-	struct list_head dmem_list;
-	struct list_head queue_for_etd; /* eps queued due to etd shortage */
-	struct list_head queue_for_dmem; /* etds queued due to dmem shortage */
-	struct etd_priv etd[USB_NUM_ETD];
-	struct clk *clk;
-	void __iomem *regs;
-#ifdef DEBUG
-	struct dentry *debug_root;
-	struct debug_stats nonisoc_stats;
-	struct debug_stats isoc_stats;
-	struct debug_usage_stats etd_usage;
-	struct debug_usage_stats dmem_usage;
-	struct debug_isoc_trace isoc_trace[20];
-	struct debug_isoc_trace isoc_trace_failed[20];
-	unsigned long debug_unblocks;
-	int isoc_trace_index;
-	int isoc_trace_index_failed;
-#endif
-};
-
-#endif
+1
drivers/usb/host/isp116x-hcd.c
···
 		val &= ~HCCONTROL_HCFS;
 		val |= HCCONTROL_USB_RESUME;
 		isp116x_write_reg32(isp116x, HCCONTROL, val);
+		break;
 	case HCCONTROL_USB_RESUME:
 		break;
 	case HCCONTROL_USB_OPER:
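Several hunks in this pull (isp116x, ohci-hub, oxu210hp, xhci-ring) fix the same bug class: a `case` that does real work and then falls through into the next `case` because a `break` is missing. A minimal userspace sketch of the pattern, using hypothetical `hc_state` values that only stand in for the driver's `HCCONTROL_USB_*` constants:

```c
#include <assert.h>

/* Hypothetical controller states, standing in for HCCONTROL_USB_*. */
enum hc_state { HC_SUSPEND, HC_RESUME, HC_OPER };

/* Count how many register writes a transition performs.  Without the
 * break after HC_SUSPEND, control would fall through into HC_RESUME's
 * (empty) case -- harmless here, but in the real drivers the next case
 * did extra work, which is exactly what these patches stop. */
static int writes_for(enum hc_state s)
{
	int writes = 0;

	switch (s) {
	case HC_SUSPEND:
		writes++;	/* program the resume bits */
		break;		/* the fix: stop before the next case */
	case HC_RESUME:
		break;		/* already resuming, nothing to do */
	case HC_OPER:
		writes += 2;	/* hypothetical: full reprogram */
		break;
	}
	return writes;
}
```

With `-Wimplicit-fallthrough` enabled, GCC and Clang flag the un-annotated fallthrough that the missing `break` used to create.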
-54
drivers/usb/host/isp1362.h
···
 		ISP1362_REG_NO(ISP1362_REG_##r), isp1362_read_reg16(d, r)); \
 }
 
-static void __attribute__((__unused__)) isp1362_show_regs(struct isp1362_hcd *isp1362_hcd)
-{
-	isp1362_show_reg(isp1362_hcd, HCREVISION);
-	isp1362_show_reg(isp1362_hcd, HCCONTROL);
-	isp1362_show_reg(isp1362_hcd, HCCMDSTAT);
-	isp1362_show_reg(isp1362_hcd, HCINTSTAT);
-	isp1362_show_reg(isp1362_hcd, HCINTENB);
-	isp1362_show_reg(isp1362_hcd, HCFMINTVL);
-	isp1362_show_reg(isp1362_hcd, HCFMREM);
-	isp1362_show_reg(isp1362_hcd, HCFMNUM);
-	isp1362_show_reg(isp1362_hcd, HCLSTHRESH);
-	isp1362_show_reg(isp1362_hcd, HCRHDESCA);
-	isp1362_show_reg(isp1362_hcd, HCRHDESCB);
-	isp1362_show_reg(isp1362_hcd, HCRHSTATUS);
-	isp1362_show_reg(isp1362_hcd, HCRHPORT1);
-	isp1362_show_reg(isp1362_hcd, HCRHPORT2);
-
-	isp1362_show_reg(isp1362_hcd, HCHWCFG);
-	isp1362_show_reg(isp1362_hcd, HCDMACFG);
-	isp1362_show_reg(isp1362_hcd, HCXFERCTR);
-	isp1362_show_reg(isp1362_hcd, HCuPINT);
-
-	if (in_interrupt())
-		DBG(0, "%-12s[%02x]: %04x\n", "HCuPINTENB",
-		     ISP1362_REG_NO(ISP1362_REG_HCuPINTENB), isp1362_hcd->irqenb);
-	else
-		isp1362_show_reg(isp1362_hcd, HCuPINTENB);
-	isp1362_show_reg(isp1362_hcd, HCCHIPID);
-	isp1362_show_reg(isp1362_hcd, HCSCRATCH);
-	isp1362_show_reg(isp1362_hcd, HCBUFSTAT);
-	isp1362_show_reg(isp1362_hcd, HCDIRADDR);
-	/* Access would advance fifo
-	 * isp1362_show_reg(isp1362_hcd, HCDIRDATA);
-	 */
-	isp1362_show_reg(isp1362_hcd, HCISTLBUFSZ);
-	isp1362_show_reg(isp1362_hcd, HCISTLRATE);
-	isp1362_show_reg(isp1362_hcd, HCINTLBUFSZ);
-	isp1362_show_reg(isp1362_hcd, HCINTLBLKSZ);
-	isp1362_show_reg(isp1362_hcd, HCINTLDONE);
-	isp1362_show_reg(isp1362_hcd, HCINTLSKIP);
-	isp1362_show_reg(isp1362_hcd, HCINTLLAST);
-	isp1362_show_reg(isp1362_hcd, HCINTLCURR);
-	isp1362_show_reg(isp1362_hcd, HCATLBUFSZ);
-	isp1362_show_reg(isp1362_hcd, HCATLBLKSZ);
-	/* only valid after ATL_DONE interrupt
-	 * isp1362_show_reg(isp1362_hcd, HCATLDONE);
-	 */
-	isp1362_show_reg(isp1362_hcd, HCATLSKIP);
-	isp1362_show_reg(isp1362_hcd, HCATLLAST);
-	isp1362_show_reg(isp1362_hcd, HCATLCURR);
-	isp1362_show_reg(isp1362_hcd, HCATLDTC);
-	isp1362_show_reg(isp1362_hcd, HCATLDTCTO);
-}
-
 static void isp1362_write_diraddr(struct isp1362_hcd *isp1362_hcd, u16 offset, u16 len)
 {
 	len = (len + 1) & ~1;
+3 -1
drivers/usb/host/max3421-hcd.c
···
 			  __func__, urb->interval);
 			return -EINVAL;
 		}
+		break;
 	default:
 		break;
 	}
···
 	struct max3421_hcd *max3421_hcd;
 	struct usb_hcd *hcd = NULL;
 	struct max3421_hcd_platform_data *pdata = NULL;
-	int retval = -ENOMEM;
+	int retval;
 
 	if (spi_setup(spi) < 0) {
 		dev_err(&spi->dev, "Unable to setup SPI bus");
···
 		goto error;
 	}
 
+	retval = -ENOMEM;
 	hcd = usb_create_hcd(&max3421_hcd_desc, &spi->dev,
 			     dev_name(&spi->dev));
 	if (!hcd) {
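The max3421 change moves `retval = -ENOMEM` from the declaration down to just before the allocation that can actually fail that way, so earlier failure paths cannot accidentally return `-ENOMEM`. A userspace sketch of the idiom, with a hypothetical two-step `fake_probe()` standing in for the driver's probe:

```c
#include <assert.h>
#include <stdlib.h>

#define ENOMEM 12
#define EINVAL 22

/* Hypothetical probe: validate configuration, then allocate state.
 * Assigning the error code immediately before the step that can
 * produce it keeps every goto-error path returning the right value. */
static int fake_probe(int cfg_ok, int alloc_ok)
{
	int retval;
	void *state = NULL;

	if (!cfg_ok) {
		retval = -EINVAL;	/* a config error, never -ENOMEM */
		goto error;
	}

	retval = -ENOMEM;		/* only the allocation below fails this way */
	state = alloc_ok ? malloc(16) : NULL;
	if (!state)
		goto error;

	free(state);
	return 0;
error:
	return retval;
}
```

Had `retval` been initialized to `-ENOMEM` at declaration, a later refactor that forgets to set it on a new error path would silently report an out-of-memory condition.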
+8 -3
drivers/usb/host/ohci-at91.c
···
 
 /*
  * usb_hcd_at91_probe - initialize AT91-based HCDs
- * Context: !in_interrupt()
+ * @driver: Pointer to hc driver instance
+ * @pdev: USB controller to probe
+ *
+ * Context: task context, might sleep
  *
  * Allocates basic resources for this USB host controller, and
  * then invokes the start() method for the HCD associated with it
···
 
 /*
  * usb_hcd_at91_remove - shutdown processing for AT91-based HCDs
- * Context: !in_interrupt()
+ * @hcd: USB controller to remove
+ * @pdev: Platform device required for cleanup
+ *
+ * Context: task context, might sleep
  *
  * Reverses the effect of usb_hcd_at91_probe(), first invoking
  * the HCD's stop() method. It is always called from a thread
  * context, "rmmod" or something similar.
- *
  */
 static void usb_hcd_at91_remove(struct usb_hcd *hcd,
 		struct platform_device *pdev)
+1 -1
drivers/usb/host/ohci-hcd.c
···
 
 	/* 1 TD for setup, 1 for ACK, plus ... */
 	size = 2;
-	/* FALLTHROUGH */
+	fallthrough;
 	// case PIPE_INTERRUPT:
 	// case PIPE_BULK:
 	default:
+1
drivers/usb/host/ohci-hub.c
···
 	case C_HUB_OVER_CURRENT:
 		ohci_writel (ohci, RH_HS_OCIC,
 				&ohci->regs->roothub.status);
+		break;
 	case C_HUB_LOCAL_POWER:
 		break;
 	default:
+6 -3
drivers/usb/host/ohci-omap.c
···
 
 /**
  * ohci_hcd_omap_probe - initialize OMAP-based HCDs
- * Context: !in_interrupt()
+ * @pdev: USB controller to probe
+ *
+ * Context: task context, might sleep
  *
  * Allocates basic resources for this USB host controller, and
  * then invokes the start() method for the HCD associated with it
···
 
 /**
  * ohci_hcd_omap_remove - shutdown processing for OMAP-based HCDs
- * @dev: USB Host Controller being removed
- * Context: !in_interrupt()
+ * @pdev: USB Host Controller being removed
+ *
+ * Context: task context, might sleep
  *
  * Reverses the effect of ohci_hcd_omap_probe(), first invoking
  * the HCD's stop() method. It is always called from a thread
+6 -5
drivers/usb/host/ohci-pxa27x.c
···
 
 /**
  * ohci_hcd_pxa27x_probe - initialize pxa27x-based HCDs
- * Context: !in_interrupt()
+ * @pdev: USB Host controller to probe
+ *
+ * Context: task context, might sleep
  *
  * Allocates basic resources for this USB host controller, and
  * then invokes the start() method for the HCD associated with it
  * through the hotplug entry's driver_data.
- *
  */
 static int ohci_hcd_pxa27x_probe(struct platform_device *pdev)
 {
···
 
 /**
  * ohci_hcd_pxa27x_remove - shutdown processing for pxa27x-based HCDs
- * @dev: USB Host Controller being removed
- * Context: !in_interrupt()
+ * @pdev: USB Host Controller being removed
+ *
+ * Context: task context, might sleep
  *
  * Reverses the effect of ohci_hcd_pxa27x_probe(), first invoking
  * the HCD's stop() method. It is always called from a thread
  * context, normally "rmmod", "apmd", or something similar.
- *
  */
 static int ohci_hcd_pxa27x_remove(struct platform_device *pdev)
 {
+6 -6
drivers/usb/host/ohci-s3c2410.c
···
 /*
  * ohci_hcd_s3c2410_remove - shutdown processing for HCD
  * @dev: USB Host Controller being removed
- * Context: !in_interrupt()
+ *
+ * Context: task context, might sleep
  *
  * Reverses the effect of ohci_hcd_3c2410_probe(), first invoking
  * the HCD's stop() method. It is always called from a thread
  * context, normally "rmmod", "apmd", or something similar.
- *
- */
-
+ */
 static int
 ohci_hcd_s3c2410_remove(struct platform_device *dev)
 {
···
 
 /*
  * ohci_hcd_s3c2410_probe - initialize S3C2410-based HCDs
- * Context: !in_interrupt()
+ * @dev: USB Host Controller to be probed
+ *
+ * Context: task context, might sleep
  *
  * Allocates basic resources for this USB host controller, and
  * then invokes the start() method for the HCD associated with it
  * through the hotplug entry's driver_data.
- *
  */
 static int ohci_hcd_s3c2410_probe(struct platform_device *dev)
 {
+4 -1
drivers/usb/host/oxu210hp-hcd.c
···
 	switch (urb->status) {
 	case -EINPROGRESS:		/* success */
 		urb->status = 0;
+		break;
 	default:		/* fault */
 		break;
 	case -EREMOTEIO:		/* fault or normal */
···
 	oxu->is_otg = otg;
 
 	ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
-	if (ret < 0)
+	if (ret < 0) {
+		usb_put_hcd(hcd);
 		return ERR_PTR(ret);
+	}
 
 	device_wakeup_enable(hcd->self.controller);
 	return hcd;
+3 -3
drivers/usb/host/u132-hcd.c
···
 #define ftdi_read_pcimem(pdev, member, data) usb_ftdi_elan_read_pcimem(pdev, \
 	offsetof(struct ohci_regs, member), 0, data);
 #define ftdi_write_pcimem(pdev, member, data) usb_ftdi_elan_write_pcimem(pdev, \
-	offsetof(struct ohci_regs, member), 0, data);
+	offsetof(struct ohci_regs, member), 0, data)
 #define u132_read_pcimem(u132, member, data) \
 	usb_ftdi_elan_read_pcimem(u132->platform_dev, offsetof(struct \
-	ohci_regs, member), 0, data);
+	ohci_regs, member), 0, data)
 #define u132_write_pcimem(u132, member, data) \
 	usb_ftdi_elan_write_pcimem(u132->platform_dev, offsetof(struct \
-	ohci_regs, member), 0, data);
+	ohci_regs, member), 0, data)
 static inline struct u132 *udev_to_u132(struct u132_udev *udev)
 {
 	u8 udev_number = udev->udev_number;
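The u132-hcd change strips trailing semicolons from function-like macros. The classic failure mode they cause is in `if/else`: `macro(x);` at the call site expands to `call(x);;`, and the stray empty statement orphans the `else`, breaking the build. A userspace sketch of the corrected style, with a hypothetical `hw_write()` standing in for usb_ftdi_elan_write_pcimem():

```c
#include <assert.h>

/* Hypothetical register-write stub standing in for the FTDI helper. */
static int write_count;
static void hw_write(int v) { write_count += v; }

/* Correct style: no trailing ';' in the macro body, so the caller's
 * own semicolon completes the statement.  Had the macro ended in
 * "hw_write(v);", the if/else below would expand to
 * "hw_write(1);; else ..." and fail to compile. */
#define u132_write(v) hw_write(v)

static int maybe_write(int cond)
{
	if (cond)
		u132_write(1);
	else
		u132_write(2);
	return write_count;
}
```

The same reasoning is why multi-statement macros are conventionally wrapped in `do { ... } while (0)`.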
+4
drivers/usb/host/xhci-hub.c
···
 	hcd->state = HC_STATE_SUSPENDED;
 	bus_state->next_statechange = jiffies + msecs_to_jiffies(10);
 	spin_unlock_irqrestore(&xhci->lock, flags);
+
+	if (bus_state->bus_suspended)
+		usleep_range(5000, 10000);
+
 	return 0;
 }
+1 -2
drivers/usb/host/xhci-mem.c
···
 	case USB_SPEED_WIRELESS:
 		xhci_dbg(xhci, "FIXME xHCI doesn't support wireless speeds\n");
 		return -EINVAL;
-		break;
 	default:
 		/* Speed was set earlier, this shouldn't happen. */
 		return -EINVAL;
···
 
 	deq = xhci_trb_virt_to_dma(xhci->event_ring->deq_seg,
 			xhci->event_ring->dequeue);
-	if (deq == 0 && !in_interrupt())
+	if (!deq)
 		xhci_warn(xhci, "WARN something wrong with SW event ring "
 				"dequeue ptr.\n");
 	/* Update HC event ring dequeue pointer */
+5 -1
drivers/usb/host/xhci-pci.c
···
 #define PCI_DEVICE_ID_INTEL_DNV_XHCI			0x19d0
 #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_XHCI	0x15b5
 #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_XHCI	0x15b6
+#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_XHCI	0x15c1
 #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_XHCI	0x15db
 #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_XHCI	0x15d4
 #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_XHCI		0x15e9
···
 #define PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI		0x8a13
 #define PCI_DEVICE_ID_INTEL_CML_XHCI			0xa3af
 #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI		0x9a13
+#define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI		0x1138
 
 #define PCI_DEVICE_ID_AMD_PROMONTORYA_4			0x43b9
 #define PCI_DEVICE_ID_AMD_PROMONTORYA_3			0x43ba
···
 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
 	    (pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_XHCI ||
 	     pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_XHCI ||
+	     pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_XHCI ||
 	     pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_XHCI ||
 	     pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_XHCI ||
 	     pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_XHCI ||
 	     pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_XHCI ||
 	     pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI ||
 	     pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI ||
-	     pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI))
+	     pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
+	     pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI))
 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
 
 	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+3
drivers/usb/host/xhci-plat.c
···
 	if (priv && (priv->quirks & XHCI_SKIP_PHY_INIT))
 		hcd->skip_phy_initialization = 1;
 
+	if (priv && (priv->quirks & XHCI_SG_TRB_CACHE_SIZE_QUIRK))
+		xhci->quirks |= XHCI_SG_TRB_CACHE_SIZE_QUIRK;
+
 	ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
 	if (ret)
 		goto disable_usb_phy;
+3 -1
drivers/usb/host/xhci-ring.c
···
 		xhci_warn_ratelimited(xhci,
 				      "WARN Successful completion on short TX for slot %u ep %u: needs XHCI_TRUST_TX_LENGTH quirk?\n",
 				      slot_id, ep_index);
+		break;
 	case COMP_SHORT_PACKET:
 		break;
 	/* Completion codes for endpoint stopped state */
···
 		return -EINVAL;
 	case EP_STATE_HALTED:
 		xhci_dbg(xhci, "WARN halted endpoint, queueing URB anyway.\n");
+		break;
 	case EP_STATE_STOPPED:
 	case EP_STATE_RUNNING:
 		break;
···
 
 	full_len = urb->transfer_buffer_length;
 	/* If we have scatter/gather list, we use it. */
-	if (urb->num_sgs) {
+	if (urb->num_sgs && !(urb->transfer_flags & URB_DMA_MAP_SINGLE)) {
 		num_sgs = urb->num_mapped_sgs;
 		sg = urb->sg;
 		addr = (u64) sg_dma_address(sg);
+130 -5
drivers/usb/host/xhci.c
···
 
 /*-------------------------------------------------------------------------*/
 
+static int xhci_map_temp_buffer(struct usb_hcd *hcd, struct urb *urb)
+{
+	void *temp;
+	int ret = 0;
+	unsigned int buf_len;
+	enum dma_data_direction dir;
+
+	dir = usb_urb_dir_in(urb) ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
+	buf_len = urb->transfer_buffer_length;
+
+	temp = kzalloc_node(buf_len, GFP_ATOMIC,
+			    dev_to_node(hcd->self.sysdev));
+
+	if (usb_urb_dir_out(urb))
+		sg_pcopy_to_buffer(urb->sg, urb->num_sgs,
+				   temp, buf_len, 0);
+
+	urb->transfer_buffer = temp;
+	urb->transfer_dma = dma_map_single(hcd->self.sysdev,
+					   urb->transfer_buffer,
+					   urb->transfer_buffer_length,
+					   dir);
+
+	if (dma_mapping_error(hcd->self.sysdev,
+			      urb->transfer_dma)) {
+		ret = -EAGAIN;
+		kfree(temp);
+	} else {
+		urb->transfer_flags |= URB_DMA_MAP_SINGLE;
+	}
+
+	return ret;
+}
+
+static bool xhci_urb_temp_buffer_required(struct usb_hcd *hcd,
+					  struct urb *urb)
+{
+	bool ret = false;
+	unsigned int i;
+	unsigned int len = 0;
+	unsigned int trb_size;
+	unsigned int max_pkt;
+	struct scatterlist *sg;
+	struct scatterlist *tail_sg;
+
+	tail_sg = urb->sg;
+	max_pkt = usb_endpoint_maxp(&urb->ep->desc);
+
+	if (!urb->num_sgs)
+		return ret;
+
+	if (urb->dev->speed >= USB_SPEED_SUPER)
+		trb_size = TRB_CACHE_SIZE_SS;
+	else
+		trb_size = TRB_CACHE_SIZE_HS;
+
+	if (urb->transfer_buffer_length != 0 &&
+	    !(urb->transfer_flags & URB_NO_TRANSFER_DMA_MAP)) {
+		for_each_sg(urb->sg, sg, urb->num_sgs, i) {
+			len = len + sg->length;
+			if (i > trb_size - 2) {
+				len = len - tail_sg->length;
+				if (len < max_pkt) {
+					ret = true;
+					break;
+				}
+
+				tail_sg = sg_next(tail_sg);
+			}
+		}
+	}
+	return ret;
+}
+
+static void xhci_unmap_temp_buf(struct usb_hcd *hcd, struct urb *urb)
+{
+	unsigned int len;
+	unsigned int buf_len;
+	enum dma_data_direction dir;
+
+	dir = usb_urb_dir_in(urb) ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
+
+	buf_len = urb->transfer_buffer_length;
+
+	if (IS_ENABLED(CONFIG_HAS_DMA) &&
+	    (urb->transfer_flags & URB_DMA_MAP_SINGLE))
+		dma_unmap_single(hcd->self.sysdev,
+				 urb->transfer_dma,
+				 urb->transfer_buffer_length,
+				 dir);
+
+	if (usb_urb_dir_in(urb))
+		len = sg_pcopy_from_buffer(urb->sg, urb->num_sgs,
+					   urb->transfer_buffer,
+					   buf_len,
+					   0);
+
+	urb->transfer_flags &= ~URB_DMA_MAP_SINGLE;
+	kfree(urb->transfer_buffer);
+	urb->transfer_buffer = NULL;
+}
+
 /*
  * Bypass the DMA mapping if URB is suitable for Immediate Transfer (IDT),
  * we'll copy the actual data into the TRB address register. This is limited to
···
 static int xhci_map_urb_for_dma(struct usb_hcd *hcd, struct urb *urb,
 				gfp_t mem_flags)
 {
+	struct xhci_hcd *xhci;
+
+	xhci = hcd_to_xhci(hcd);
+
 	if (xhci_urb_suitable_for_idt(urb))
 		return 0;
 
+	if (xhci->quirks & XHCI_SG_TRB_CACHE_SIZE_QUIRK) {
+		if (xhci_urb_temp_buffer_required(hcd, urb))
+			return xhci_map_temp_buffer(hcd, urb);
+	}
 	return usb_hcd_map_urb_for_dma(hcd, urb, mem_flags);
 }
 
-/*
+static void xhci_unmap_urb_for_dma(struct usb_hcd *hcd, struct urb *urb)
+{
+	struct xhci_hcd *xhci;
+	bool unmap_temp_buf = false;
+
+	xhci = hcd_to_xhci(hcd);
+
+	if (urb->num_sgs && (urb->transfer_flags & URB_DMA_MAP_SINGLE))
+		unmap_temp_buf = true;
+
+	if ((xhci->quirks & XHCI_SG_TRB_CACHE_SIZE_QUIRK) && unmap_temp_buf)
+		xhci_unmap_temp_buf(hcd, urb);
+	else
+		usb_hcd_unmap_urb_for_dma(hcd, urb);
+}
+
+/**
  * xhci_get_endpoint_index - Used for passing endpoint bitmasks between the core and
  * HCDs. Find the index for an endpoint given its descriptor. Use the return
  * value to right shift 1 for the bitmask.
··· 1602 1476 ep_index = xhci_get_endpoint_index(&urb->ep->desc); 1603 1477 ep_state = &xhci->devs[slot_id]->eps[ep_index].ep_state; 1604 1478 1605 - if (!HCD_HW_ACCESSIBLE(hcd)) { 1606 - if (!in_interrupt()) 1607 - xhci_dbg(xhci, "urb submitted during PCI suspend\n"); 1479 + if (!HCD_HW_ACCESSIBLE(hcd)) 1608 1480 return -ESHUTDOWN; 1609 - } 1481 + 1610 1482 if (xhci->devs[slot_id]->flags & VDEV_PORT_ERROR) { 1611 1483 xhci_dbg(xhci, "Can't queue urb, port error, link inactive\n"); 1612 1484 return -ENODEV; ··· 5453 5329 * managing i/o requests and associated device resources 5454 5330 */ 5455 5331 .map_urb_for_dma = xhci_map_urb_for_dma, 5332 + .unmap_urb_for_dma = xhci_unmap_urb_for_dma, 5456 5333 .urb_enqueue = xhci_urb_enqueue, 5457 5334 .urb_dequeue = xhci_urb_dequeue, 5458 5335 .alloc_dev = xhci_alloc_dev,
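The `XHCI_SG_TRB_CACHE_SIZE_QUIRK` path above falls back to a contiguous bounce buffer when a scatter-gather list is longer than the controller's TRB cache and the tail of the list carries less than one max-packet. A minimal userspace sketch of that decision, mirroring `xhci_urb_temp_buffer_required()` with an array of segment lengths standing in for the scatterlist (all names here are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Walk the segment lengths; once more segments are queued than the TRB
 * cache (TRB_CACHE_SIZE_SS/HS in the patch) can hold, keep a running
 * count of the bytes carried by the tail of the list, and require a
 * bounce buffer if that tail ever drops below one max-packet.
 */
static bool needs_temp_buffer(const unsigned int *seg_len,
			      unsigned int num_segs,
			      unsigned int trb_cache,
			      unsigned int max_pkt)
{
	unsigned int len = 0;
	unsigned int tail = 0;
	unsigned int i;

	for (i = 0; i < num_segs; i++) {
		len += seg_len[i];
		if (i > trb_cache - 2) {
			len -= seg_len[tail];	/* drop the first tail segment */
			if (len < max_pkt)
				return true;
			tail++;
		}
	}
	return false;
}
```

With a 4-entry cache and 512-byte packets, a list whose trailing segments total less than 512 bytes triggers the bounce buffer, while large uniform segments do not.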
+5
drivers/usb/host/xhci.h
··· 1330 1330 #define TRB_SIA (1<<31) 1331 1331 #define TRB_FRAME_ID(p) (((p) & 0x7ff) << 20) 1332 1332 1333 + /* TRB cache size for xHC with TRB cache */ 1334 + #define TRB_CACHE_SIZE_HS 8 1335 + #define TRB_CACHE_SIZE_SS 16 1336 + 1333 1337 struct xhci_generic_trb { 1334 1338 __le32 field[4]; 1335 1339 }; ··· 1882 1878 #define XHCI_RENESAS_FW_QUIRK BIT_ULL(36) 1883 1879 #define XHCI_SKIP_PHY_INIT BIT_ULL(37) 1884 1880 #define XHCI_DISABLE_SPARSE BIT_ULL(38) 1881 + #define XHCI_SG_TRB_CACHE_SIZE_QUIRK BIT_ULL(39) 1885 1882 1886 1883 unsigned int num_active_eps; 1887 1884 unsigned int limit_active_eps;
+9
drivers/usb/misc/Kconfig
··· 275 275 276 276 To compile this driver as a module, choose M here: the 277 277 module will be called chaoskey. 278 + 279 + config BRCM_USB_PINMAP 280 + tristate "Broadcom pinmap driver support" 281 + depends on (ARCH_BRCMSTB && PHY_BRCM_USB) || COMPILE_TEST 282 + default ARCH_BRCMSTB && PHY_BRCM_USB 283 + help 284 + This option enables support for remapping some USB external 285 + signals, which are typically on dedicated pins on the chip, 286 + to any gpio.
+1
drivers/usb/misc/Makefile
··· 31 31 32 32 obj-$(CONFIG_USB_SISUSBVGA) += sisusbvga/ 33 33 obj-$(CONFIG_USB_LINK_LAYER_TEST) += lvstest.o 34 + obj-$(CONFIG_BRCM_USB_PINMAP) += brcmstb-usb-pinmap.o
+4 -9
drivers/usb/misc/apple-mfi-fastcharge.c
··· 184 184 return -ENODEV; 185 185 186 186 mfi = kzalloc(sizeof(struct mfi_device), GFP_KERNEL); 187 - if (!mfi) { 188 - err = -ENOMEM; 189 - goto error; 190 - } 187 + if (!mfi) 188 + return -ENOMEM; 191 189 192 190 battery_cfg.drv_data = mfi; 193 191 ··· 196 198 if (IS_ERR(mfi->battery)) { 197 199 dev_err(&udev->dev, "Can't register battery\n"); 198 200 err = PTR_ERR(mfi->battery); 199 - goto error; 201 + kfree(mfi); 202 + return err; 200 203 } 201 204 202 205 mfi->udev = usb_get_dev(udev); 203 206 dev_set_drvdata(&udev->dev, mfi); 204 207 205 208 return 0; 206 - 207 - error: 208 - kfree(mfi); 209 - return err; 210 209 } 211 210 212 211 static void mfi_fc_disconnect(struct usb_device *udev)
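The apple-mfi-fastcharge hunk collapses the shared `error:` label into direct unwinding, since only one failure path still needs cleanup after the refactor. A hedged sketch of the resulting shape, with a stand-in allocation and hypothetical error values:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/*
 * Allocation failure returns directly (nothing to unwind yet); the one
 * later failure frees in place instead of jumping to a shared label.
 * The "registration" step here is a made-up stand-in.
 */
static int fake_probe(int fail_register)
{
	int *mfi = calloc(1, sizeof(*mfi));

	if (!mfi)
		return -ENOMEM;

	if (fail_register) {
		free(mfi);	/* unwind the single allocation right here */
		return -EIO;	/* hypothetical registration error */
	}

	free(mfi);	/* demo only: the real driver keeps this until disconnect */
	return 0;
}
```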
+351
drivers/usb/misc/brcmstb-usb-pinmap.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2020, Broadcom */ 3 + 4 + #include <linux/init.h> 5 + #include <linux/types.h> 6 + #include <linux/module.h> 7 + #include <linux/platform_device.h> 8 + #include <linux/interrupt.h> 9 + #include <linux/io.h> 10 + #include <linux/device.h> 11 + #include <linux/of.h> 12 + #include <linux/kernel.h> 13 + #include <linux/kdebug.h> 14 + #include <linux/gpio/consumer.h> 15 + 16 + struct out_pin { 17 + u32 enable_mask; 18 + u32 value_mask; 19 + u32 changed_mask; 20 + u32 clr_changed_mask; 21 + struct gpio_desc *gpiod; 22 + const char *name; 23 + }; 24 + 25 + struct in_pin { 26 + u32 enable_mask; 27 + u32 value_mask; 28 + struct gpio_desc *gpiod; 29 + const char *name; 30 + struct brcmstb_usb_pinmap_data *pdata; 31 + }; 32 + 33 + struct brcmstb_usb_pinmap_data { 34 + void __iomem *regs; 35 + int in_count; 36 + struct in_pin *in_pins; 37 + int out_count; 38 + struct out_pin *out_pins; 39 + }; 40 + 41 + 42 + static void pinmap_set(void __iomem *reg, u32 mask) 43 + { 44 + u32 val; 45 + 46 + val = readl(reg); 47 + val |= mask; 48 + writel(val, reg); 49 + } 50 + 51 + static void pinmap_unset(void __iomem *reg, u32 mask) 52 + { 53 + u32 val; 54 + 55 + val = readl(reg); 56 + val &= ~mask; 57 + writel(val, reg); 58 + } 59 + 60 + static void sync_in_pin(struct in_pin *pin) 61 + { 62 + u32 val; 63 + 64 + val = gpiod_get_value(pin->gpiod); 65 + if (val) 66 + pinmap_set(pin->pdata->regs, pin->value_mask); 67 + else 68 + pinmap_unset(pin->pdata->regs, pin->value_mask); 69 + } 70 + 71 + /* 72 + * Interrupt from override register, propagate from override bit 73 + * to GPIO. 
74 + */ 75 + static irqreturn_t brcmstb_usb_pinmap_ovr_isr(int irq, void *dev_id) 76 + { 77 + struct brcmstb_usb_pinmap_data *pdata = dev_id; 78 + struct out_pin *pout; 79 + u32 val; 80 + u32 bit; 81 + int x; 82 + 83 + pr_debug("%s: reg: 0x%x\n", __func__, readl(pdata->regs)); 84 + pout = pdata->out_pins; 85 + for (x = 0; x < pdata->out_count; x++) { 86 + val = readl(pdata->regs); 87 + if (val & pout->changed_mask) { 88 + pinmap_set(pdata->regs, pout->clr_changed_mask); 89 + pinmap_unset(pdata->regs, pout->clr_changed_mask); 90 + bit = val & pout->value_mask; 91 + gpiod_set_value(pout->gpiod, bit ? 1 : 0); 92 + pr_debug("%s: %s bit changed state to %d\n", 93 + __func__, pout->name, bit ? 1 : 0); 94 + } 95 + } 96 + return IRQ_HANDLED; 97 + } 98 + 99 + /* 100 + * Interrupt from GPIO, propagate from GPIO to override bit. 101 + */ 102 + static irqreturn_t brcmstb_usb_pinmap_gpio_isr(int irq, void *dev_id) 103 + { 104 + struct in_pin *pin = dev_id; 105 + 106 + pr_debug("%s: %s pin changed state\n", __func__, pin->name); 107 + sync_in_pin(pin); 108 + return IRQ_HANDLED; 109 + } 110 + 111 + 112 + static void get_pin_counts(struct device_node *dn, int *in_count, 113 + int *out_count) 114 + { 115 + int in; 116 + int out; 117 + 118 + *in_count = 0; 119 + *out_count = 0; 120 + in = of_property_count_strings(dn, "brcm,in-functions"); 121 + if (in < 0) 122 + return; 123 + out = of_property_count_strings(dn, "brcm,out-functions"); 124 + if (out < 0) 125 + return; 126 + *in_count = in; 127 + *out_count = out; 128 + } 129 + 130 + static int parse_pins(struct device *dev, struct device_node *dn, 131 + struct brcmstb_usb_pinmap_data *pdata) 132 + { 133 + struct out_pin *pout; 134 + struct in_pin *pin; 135 + int index; 136 + int res; 137 + int x; 138 + 139 + pin = pdata->in_pins; 140 + for (x = 0, index = 0; x < pdata->in_count; x++) { 141 + pin->gpiod = devm_gpiod_get_index(dev, "in", x, GPIOD_IN); 142 + if (IS_ERR(pin->gpiod)) { 143 + dev_err(dev, "Error getting gpio %s\n", 
pin->name); 144 + return PTR_ERR(pin->gpiod); 145 + 146 + } 147 + res = of_property_read_string_index(dn, "brcm,in-functions", x, 148 + &pin->name); 149 + if (res < 0) { 150 + dev_err(dev, "Error getting brcm,in-functions for %s\n", 151 + pin->name); 152 + return res; 153 + } 154 + res = of_property_read_u32_index(dn, "brcm,in-masks", index++, 155 + &pin->enable_mask); 156 + if (res < 0) { 157 + dev_err(dev, "Error getting 1st brcm,in-masks for %s\n", 158 + pin->name); 159 + return res; 160 + } 161 + res = of_property_read_u32_index(dn, "brcm,in-masks", index++, 162 + &pin->value_mask); 163 + if (res < 0) { 164 + dev_err(dev, "Error getting 2nd brcm,in-masks for %s\n", 165 + pin->name); 166 + return res; 167 + } 168 + pin->pdata = pdata; 169 + pin++; 170 + } 171 + pout = pdata->out_pins; 172 + for (x = 0, index = 0; x < pdata->out_count; x++) { 173 + pout->gpiod = devm_gpiod_get_index(dev, "out", x, 174 + GPIOD_OUT_HIGH); 175 + if (IS_ERR(pout->gpiod)) { 176 + dev_err(dev, "Error getting gpio %s\n", pin->name); 177 + return PTR_ERR(pout->gpiod); 178 + } 179 + res = of_property_read_string_index(dn, "brcm,out-functions", x, 180 + &pout->name); 181 + if (res < 0) { 182 + dev_err(dev, "Error getting brcm,out-functions for %s\n", 183 + pout->name); 184 + return res; 185 + } 186 + res = of_property_read_u32_index(dn, "brcm,out-masks", index++, 187 + &pout->enable_mask); 188 + if (res < 0) { 189 + dev_err(dev, "Error getting 1st brcm,out-masks for %s\n", 190 + pout->name); 191 + return res; 192 + } 193 + res = of_property_read_u32_index(dn, "brcm,out-masks", index++, 194 + &pout->value_mask); 195 + if (res < 0) { 196 + dev_err(dev, "Error getting 2nd brcm,out-masks for %s\n", 197 + pout->name); 198 + return res; 199 + } 200 + res = of_property_read_u32_index(dn, "brcm,out-masks", index++, 201 + &pout->changed_mask); 202 + if (res < 0) { 203 + dev_err(dev, "Error getting 3rd brcm,out-masks for %s\n", 204 + pout->name); 205 + return res; 206 + } 207 + res = 
of_property_read_u32_index(dn, "brcm,out-masks", index++, 208 + &pout->clr_changed_mask); 209 + if (res < 0) { 210 + dev_err(dev, "Error getting 4th out-masks for %s\n", 211 + pout->name); 212 + return res; 213 + } 214 + pout++; 215 + } 216 + return 0; 217 + } 218 + 219 + static void sync_all_pins(struct brcmstb_usb_pinmap_data *pdata) 220 + { 221 + struct out_pin *pout; 222 + struct in_pin *pin; 223 + int val; 224 + int x; 225 + 226 + /* 227 + * Enable the override, clear any changed condition and 228 + * propagate the state to the GPIO for all out pins. 229 + */ 230 + pout = pdata->out_pins; 231 + for (x = 0; x < pdata->out_count; x++) { 232 + pinmap_set(pdata->regs, pout->enable_mask); 233 + pinmap_set(pdata->regs, pout->clr_changed_mask); 234 + pinmap_unset(pdata->regs, pout->clr_changed_mask); 235 + val = readl(pdata->regs) & pout->value_mask; 236 + gpiod_set_value(pout->gpiod, val ? 1 : 0); 237 + pout++; 238 + } 239 + 240 + /* sync and enable all in pins. */ 241 + pin = pdata->in_pins; 242 + for (x = 0; x < pdata->in_count; x++) { 243 + sync_in_pin(pin); 244 + pinmap_set(pdata->regs, pin->enable_mask); 245 + pin++; 246 + } 247 + } 248 + 249 + static int __init brcmstb_usb_pinmap_probe(struct platform_device *pdev) 250 + { 251 + struct device_node *dn = pdev->dev.of_node; 252 + struct brcmstb_usb_pinmap_data *pdata; 253 + struct in_pin *pin; 254 + struct resource *r; 255 + int out_count; 256 + int in_count; 257 + int err; 258 + int irq; 259 + int x; 260 + 261 + get_pin_counts(dn, &in_count, &out_count); 262 + if ((in_count + out_count) == 0) 263 + return -EINVAL; 264 + 265 + r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 266 + 267 + pdata = devm_kzalloc(&pdev->dev, 268 + sizeof(*pdata) + 269 + (sizeof(struct in_pin) * in_count) + 270 + (sizeof(struct out_pin) * out_count), GFP_KERNEL); 271 + if (!pdata) 272 + return -ENOMEM; 273 + 274 + pdata->in_count = in_count; 275 + pdata->out_count = out_count; 276 + pdata->in_pins = (struct in_pin *)(pdata + 1); 277 
+ pdata->out_pins = (struct out_pin *)(pdata->in_pins + in_count); 278 + 279 + pdata->regs = devm_ioremap(&pdev->dev, r->start, resource_size(r)); 280 + if (!pdata->regs) 281 + return -ENOMEM; 282 + platform_set_drvdata(pdev, pdata); 283 + 284 + err = parse_pins(&pdev->dev, dn, pdata); 285 + if (err) 286 + return err; 287 + 288 + sync_all_pins(pdata); 289 + 290 + if (out_count) { 291 + 292 + /* Enable interrupt for out pins */ 293 + irq = platform_get_irq(pdev, 0); 294 + err = devm_request_irq(&pdev->dev, irq, 295 + brcmstb_usb_pinmap_ovr_isr, 296 + IRQF_TRIGGER_RISING, 297 + pdev->name, pdata); 298 + if (err < 0) { 299 + dev_err(&pdev->dev, "Error requesting IRQ\n"); 300 + return err; 301 + } 302 + } 303 + 304 + for (x = 0, pin = pdata->in_pins; x < pdata->in_count; x++, pin++) { 305 + irq = gpiod_to_irq(pin->gpiod); 306 + if (irq < 0) { 307 + dev_err(&pdev->dev, "Error getting IRQ for %s pin\n", 308 + pin->name); 309 + return irq; 310 + } 311 + err = devm_request_irq(&pdev->dev, irq, 312 + brcmstb_usb_pinmap_gpio_isr, 313 + IRQF_SHARED | IRQF_TRIGGER_RISING | 314 + IRQF_TRIGGER_FALLING, 315 + pdev->name, pin); 316 + if (err < 0) { 317 + dev_err(&pdev->dev, "Error requesting IRQ for %s pin\n", 318 + pin->name); 319 + return err; 320 + } 321 + } 322 + 323 + dev_dbg(&pdev->dev, "Driver probe succeeded\n"); 324 + dev_dbg(&pdev->dev, "In pin count: %d, out pin count: %d\n", 325 + pdata->in_count, pdata->out_count); 326 + return 0; 327 + } 328 + 329 + 330 + static const struct of_device_id brcmstb_usb_pinmap_of_match[] = { 331 + { .compatible = "brcm,usb-pinmap" }, 332 + { }, 333 + }; 334 + 335 + static struct platform_driver brcmstb_usb_pinmap_driver = { 336 + .driver = { 337 + .name = "brcm-usb-pinmap", 338 + .of_match_table = brcmstb_usb_pinmap_of_match, 339 + }, 340 + }; 341 + 342 + static int __init brcmstb_usb_pinmap_init(void) 343 + { 344 + return platform_driver_probe(&brcmstb_usb_pinmap_driver, 345 + brcmstb_usb_pinmap_probe); 346 + } 347 + 348 + 
module_init(brcmstb_usb_pinmap_init); 349 + MODULE_AUTHOR("Al Cooper <alcooperx@gmail.com>"); 350 + MODULE_DESCRIPTION("Broadcom USB Pinmap Driver"); 351 + MODULE_LICENSE("GPL");
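The new driver's `pinmap_set()`/`pinmap_unset()` are plain read-modify-write helpers, and the ISR clears a latched "changed" bit by raising then dropping its clear mask. A userspace sketch of those helpers, with an ordinary variable standing in for the MMIO register the real code touches via `readl()`/`writel()`:

```c
#include <assert.h>
#include <stdint.h>

static void pinmap_set(uint32_t *reg, uint32_t mask)
{
	*reg |= mask;		/* read-modify-write: set the mask bits */
}

static void pinmap_unset(uint32_t *reg, uint32_t mask)
{
	*reg &= ~mask;		/* read-modify-write: clear the mask bits */
}

/*
 * The "clear changed" pulse from the override ISR: setting then
 * clearing the clr_changed bit leaves every other register bit intact.
 */
static uint32_t clr_changed_pulse(uint32_t reg, uint32_t clr_mask)
{
	pinmap_set(&reg, clr_mask);
	pinmap_unset(&reg, clr_mask);
	return reg;
}
```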
-3
drivers/usb/misc/iowarrior.c
··· 384 384 retval = usb_set_report(dev->interface, 2, 0, buf, count); 385 385 kfree(buf); 386 386 goto exit; 387 - break; 388 387 case USB_DEVICE_ID_CODEMERCS_IOW56: 389 388 case USB_DEVICE_ID_CODEMERCS_IOW56AM: 390 389 case USB_DEVICE_ID_CODEMERCS_IOW28: ··· 453 454 retval = count; 454 455 usb_free_urb(int_out_urb); 455 456 goto exit; 456 - break; 457 457 default: 458 458 /* what do we have here ? An unsupported Product-ID ? */ 459 459 dev_err(&dev->interface->dev, "%s - not supported for product=0x%x\n", 460 460 __func__, dev->product_id); 461 461 retval = -EFAULT; 462 462 goto exit; 463 - break; 464 463 } 465 464 error: 466 465 usb_free_coherent(dev->udev, dev->report_size, buf,
+1 -1
drivers/usb/misc/legousbtower.c
··· 797 797 &get_version_reply, 798 798 sizeof(get_version_reply), 799 799 1000, GFP_KERNEL); 800 - if (!result) { 800 + if (result) { 801 801 dev_err(idev, "get version request failed: %d\n", result); 802 802 retval = result; 803 803 goto error;
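The legousbtower fix restores the usual kernel convention that transfer helpers return 0 on success and a negative errno on failure; the old `if (!result)` test had that inverted, flagging the successful zero return as an error. A small sketch of the convention (helper names hypothetical):

```c
#include <assert.h>
#include <errno.h>

/* Stand-in following the kernel convention: 0 on success,
 * negative errno on failure. */
static int fake_get_version(int fail)
{
	return fail ? -EIO : 0;
}

static int tower_probe_check(int fail)
{
	int result = fake_get_version(fail);

	/* The fix: "if (result)" flags failure; "if (!result)" would
	 * have bailed out on success instead. */
	if (result)
		return result;
	return 0;
}
```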
+1 -1
drivers/usb/misc/sisusbvga/Kconfig
··· 16 16 17 17 config USB_SISUSBVGA_CON 18 18 bool "Text console and mode switching support" if USB_SISUSBVGA 19 - depends on VT 19 + depends on VT && BROKEN 20 20 select FONT_8x16 21 21 help 22 22 Say Y here if you want a VGA text console via the USB dongle or
+1
drivers/usb/misc/yurex.c
··· 137 137 dev_err(&dev->interface->dev, 138 138 "%s - overflow with length %d, actual length is %d\n", 139 139 __func__, YUREX_BUF_SIZE, dev->urb->actual_length); 140 + return; 140 141 case -ECONNRESET: 141 142 case -ENOENT: 142 143 case -ESHUTDOWN:
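The yurex hunk adds a `return` so the overflow report no longer falls through into the disconnect-style status handling below it. A sketch of the control flow, using errno values as hypothetical stand-ins for the URB statuses involved:

```c
#include <assert.h>
#include <errno.h>

/*
 * Without the explicit return, the first case would fall through and
 * also run the "silently give up" handling meant for shutdown-type
 * statuses. Return values here just label which branch executed.
 */
static int handle_status(int status)
{
	switch (status) {
	case -EOVERFLOW:
		return 1;	/* the added return stops the fallthrough */
	case -ECONNRESET:
	case -ENOENT:
	case -ESHUTDOWN:
		return 2;	/* disconnect-style: give up quietly */
	default:
		return 0;
	}
}
```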
-1
drivers/usb/mtu3/mtu3_debug.h
··· 19 19 struct mtu3_regset { 20 20 char name[MTU3_DEBUGFS_NAME_LEN]; 21 21 struct debugfs_regset32 regset; 22 - size_t nregs; 23 22 }; 24 23 25 24 struct mtu3_file_map {
+1 -1
drivers/usb/mtu3/mtu3_debugfs.c
··· 127 127 struct debugfs_regset32 *regset; 128 128 struct mtu3_regset *mregs; 129 129 130 - mregs = devm_kzalloc(mtu->dev, sizeof(*regset), GFP_KERNEL); 130 + mregs = devm_kzalloc(mtu->dev, sizeof(*mregs), GFP_KERNEL); 131 131 if (!mregs) 132 132 return; 133 133
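The mtu3 debugfs fix is the classic `sizeof(*ptr)` allocation idiom: size the allocation from the pointer being assigned, never from a related type. The old `sizeof(*regset)` named the smaller embedded struct and silently under-allocated. A sketch with stand-in struct layouts (fields are illustrative, not the real definitions):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-ins: the outer struct embeds the smaller one, so sizing the
 * allocation from the inner type truncates the outer object. */
struct regset {
	void *base;
	unsigned int nregs;
};

struct mtu3_regset {
	char name[32];
	struct regset regset;
};

static struct mtu3_regset *mregs_alloc(void)
{
	/* sizeof(*mregs) tracks the pointed-to type automatically even
	 * if the declaration changes later. */
	struct mtu3_regset *mregs = calloc(1, sizeof(*mregs));

	return mregs;
}
```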
+1
drivers/usb/musb/tusb6010.c
··· 467 467 fallthrough; 468 468 case OTG_STATE_A_IDLE: 469 469 tusb_musb_set_vbus(musb, 0); 470 + break; 470 471 default: 471 472 break; 472 473 }
+1 -1
drivers/usb/phy/Kconfig
··· 144 144 depends on USB_GADGET || !USB_GADGET # if USB_GADGET=m, this can't be 'y' 145 145 select USB_PHY 146 146 help 147 - Say Y here if you want to build Marvell USB OTG transciever 147 + Say Y here if you want to build Marvell USB OTG transceiver 148 148 driver in kernel (including PXA and MMP series). This driver 149 149 implements role switch between EHCI host driver and gadget driver. 150 150
+18 -13
drivers/usb/phy/phy-isp1301-omap.c
··· 1208 1208 #ifdef CONFIG_USB_OTG 1209 1209 otg_unbind(isp); 1210 1210 #endif 1211 - if (machine_is_omap_h2()) 1212 - gpio_free(2); 1213 - 1214 1211 set_bit(WORK_STOP, &isp->todo); 1215 1212 del_timer_sync(&isp->timer); 1216 1213 flush_work(&isp->work); ··· 1477 1480 { 1478 1481 int status; 1479 1482 struct isp1301 *isp; 1483 + int irq; 1480 1484 1481 1485 if (the_transceiver) 1482 1486 return 0; ··· 1541 1543 #endif 1542 1544 1543 1545 if (machine_is_omap_h2()) { 1546 + struct gpio_desc *gpiod; 1547 + 1544 1548 /* full speed signaling by default */ 1545 1549 isp1301_set_bits(isp, ISP1301_MODE_CONTROL_1, 1546 1550 MC1_SPEED); 1547 1551 isp1301_set_bits(isp, ISP1301_MODE_CONTROL_2, 1548 1552 MC2_SPD_SUSP_CTRL); 1549 1553 1550 - /* IRQ wired at M14 */ 1551 - omap_cfg_reg(M14_1510_GPIO2); 1552 - if (gpio_request(2, "isp1301") == 0) 1553 - gpio_direction_input(2); 1554 + gpiod = devm_gpiod_get(&i2c->dev, NULL, GPIOD_IN); 1555 + if (IS_ERR(gpiod)) { 1556 + dev_err(&i2c->dev, "cannot obtain H2 GPIO\n"); 1557 + goto fail; 1558 + } 1559 + gpiod_set_consumer_name(gpiod, "isp1301"); 1560 + irq = gpiod_to_irq(gpiod); 1554 1561 isp->irq_type = IRQF_TRIGGER_FALLING; 1562 + } else { 1563 + irq = i2c->irq; 1555 1564 } 1556 1565 1557 - status = request_irq(i2c->irq, isp1301_irq, 1566 + status = request_irq(irq, isp1301_irq, 1558 1567 isp->irq_type, DRIVER_NAME, isp); 1559 1568 if (status < 0) { 1560 1569 dev_dbg(&i2c->dev, "can't get IRQ %d, err %d\n", ··· 1571 1566 1572 1567 isp->phy.dev = &i2c->dev; 1573 1568 isp->phy.label = DRIVER_NAME; 1574 - isp->phy.set_power = isp1301_set_power, 1569 + isp->phy.set_power = isp1301_set_power; 1575 1570 1576 1571 isp->phy.otg->usb_phy = &isp->phy; 1577 - isp->phy.otg->set_host = isp1301_set_host, 1578 - isp->phy.otg->set_peripheral = isp1301_set_peripheral, 1579 - isp->phy.otg->start_srp = isp1301_start_srp, 1580 - isp->phy.otg->start_hnp = isp1301_start_hnp, 1572 + isp->phy.otg->set_host = isp1301_set_host; 1573 + 
isp->phy.otg->set_peripheral = isp1301_set_peripheral; 1574 + isp->phy.otg->start_srp = isp1301_start_srp; 1575 + isp->phy.otg->start_hnp = isp1301_start_hnp; 1581 1576 1582 1577 enable_vbus_draw(isp, 0); 1583 1578 power_down(isp);
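The isp1301 hunk replaces trailing commas with semicolons on the ops assignments. Thanks to the comma operator, the old code still executed every assignment as one chained expression statement, so this is a readability fix rather than a behavior change. A tiny demonstration of why it happened to work:

```c
#include <assert.h>

/*
 * A trailing comma chains the next line into the same expression
 * statement via the comma operator: both assignments run, but the
 * stray comma is easy to misread and can bite in other contexts
 * (initializers, return expressions). Semicolons make each
 * assignment a statement of its own.
 */
static int comma_chain(void)
{
	int set_power, set_host;

	set_power = 1,	/* legal: comma operator, not a typo'd statement end */
	set_host = 2;	/* still executes */

	return set_power + set_host;
}
```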
+4 -15
drivers/usb/serial/Kconfig
··· 298 298 module will be called iuu_phoenix.o 299 299 300 300 config USB_SERIAL_KEYSPAN_PDA 301 - tristate "USB Keyspan PDA Single Port Serial Driver" 301 + tristate "USB Keyspan PDA / Xircom Single Port Serial Driver" 302 302 select USB_EZUSB_FX2 303 303 help 304 - Say Y here if you want to use a Keyspan PDA single port USB to 305 - serial converter device. This driver makes use of firmware 306 - developed from scratch by Brian Warner. 304 + Say Y here if you want to use a Keyspan PDA, Xircom or Entrega single 305 + port USB to serial converter device. This driver makes use of 306 + firmware developed from scratch by Brian Warner. 307 307 308 308 To compile this driver as a module, choose M here: the 309 309 module will be called keyspan_pda. ··· 537 537 module will be called cyberjack. 538 538 539 539 If unsure, say N. 540 - 541 - config USB_SERIAL_XIRCOM 542 - tristate "USB Xircom / Entrega Single Port Serial Driver" 543 - select USB_EZUSB_FX2 544 - help 545 - Say Y here if you want to use a Xircom or Entrega single port USB to 546 - serial converter device. This driver makes use of firmware 547 - developed from scratch by Brian Warner. 548 - 549 - To compile this driver as a module, choose M here: the 550 - module will be called keyspan_pda. 551 540 552 541 config USB_SERIAL_WWAN 553 542 tristate
-1
drivers/usb/serial/Makefile
··· 61 61 obj-$(CONFIG_USB_SERIAL_VISOR) += visor.o 62 62 obj-$(CONFIG_USB_SERIAL_WISHBONE) += wishbone-serial.o 63 63 obj-$(CONFIG_USB_SERIAL_WHITEHEAT) += whiteheat.o 64 - obj-$(CONFIG_USB_SERIAL_XIRCOM) += keyspan_pda.o 65 64 obj-$(CONFIG_USB_SERIAL_XSENS_MT) += xsens_mt.o
+104 -395
drivers/usb/serial/cp210x.c
··· 31 31 */ 32 32 static int cp210x_open(struct tty_struct *tty, struct usb_serial_port *); 33 33 static void cp210x_close(struct usb_serial_port *); 34 - static void cp210x_get_termios(struct tty_struct *, struct usb_serial_port *); 35 - static void cp210x_get_termios_port(struct usb_serial_port *port, 36 - tcflag_t *cflagp, unsigned int *baudp); 37 34 static void cp210x_change_speed(struct tty_struct *, struct usb_serial_port *, 38 35 struct ktermios *); 39 36 static void cp210x_set_termios(struct tty_struct *, struct usb_serial_port *, ··· 46 49 static void cp210x_release(struct usb_serial *); 47 50 static int cp210x_port_probe(struct usb_serial_port *); 48 51 static int cp210x_port_remove(struct usb_serial_port *); 49 - static void cp210x_dtr_rts(struct usb_serial_port *p, int on); 52 + static void cp210x_dtr_rts(struct usb_serial_port *port, int on); 50 53 static void cp210x_process_read_urb(struct urb *urb); 51 54 static void cp210x_enable_event_mode(struct usb_serial_port *port); 52 55 static void cp210x_disable_event_mode(struct usb_serial_port *port); ··· 264 267 265 268 struct cp210x_port_private { 266 269 u8 bInterfaceNumber; 267 - bool has_swapped_line_ctl; 268 270 bool event_mode; 269 271 enum cp210x_event_state event_state; 270 272 u8 lsr; ··· 555 559 int result; 556 560 557 561 dmabuf = kmalloc(bufsize, GFP_KERNEL); 558 - if (!dmabuf) { 559 - /* 560 - * FIXME Some callers don't bother to check for error, 561 - * at least give them consistent junk until they are fixed 562 - */ 563 - memset(buf, 0, bufsize); 562 + if (!dmabuf) 564 563 return -ENOMEM; 565 - } 566 564 567 565 result = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0), 568 566 req, REQTYPE_INTERFACE_TO_HOST, 0, ··· 570 580 req, bufsize, result); 571 581 if (result >= 0) 572 582 result = -EIO; 573 - 574 - /* 575 - * FIXME Some callers don't bother to check for error, 576 - * at least give them consistent junk until they are fixed 577 - */ 578 - memset(buf, 0, bufsize); 579 583 
} 580 584 581 585 kfree(dmabuf); 582 586 583 587 return result; 584 - } 585 - 586 - /* 587 - * Reads any 32-bit CP210X_ register identified by req. 588 - */ 589 - static int cp210x_read_u32_reg(struct usb_serial_port *port, u8 req, u32 *val) 590 - { 591 - __le32 le32_val; 592 - int err; 593 - 594 - err = cp210x_read_reg_block(port, req, &le32_val, sizeof(le32_val)); 595 - if (err) { 596 - /* 597 - * FIXME Some callers don't bother to check for error, 598 - * at least give them consistent junk until they are fixed 599 - */ 600 - *val = 0; 601 - return err; 602 - } 603 - 604 - *val = le32_to_cpu(le32_val); 605 - 606 - return 0; 607 - } 608 - 609 - /* 610 - * Reads any 16-bit CP210X_ register identified by req. 611 - */ 612 - static int cp210x_read_u16_reg(struct usb_serial_port *port, u8 req, u16 *val) 613 - { 614 - __le16 le16_val; 615 - int err; 616 - 617 - err = cp210x_read_reg_block(port, req, &le16_val, sizeof(le16_val)); 618 - if (err) 619 - return err; 620 - 621 - *val = le16_to_cpu(le16_val); 622 - 623 - return 0; 624 588 } 625 589 626 590 /* ··· 724 780 } 725 781 #endif 726 782 727 - /* 728 - * Detect CP2108 GET_LINE_CTL bug and activate workaround. 729 - * Write a known good value 0x800, read it back. 730 - * If it comes back swapped the bug is detected. 731 - * Preserve the original register value. 
732 - */ 733 - static int cp210x_detect_swapped_line_ctl(struct usb_serial_port *port) 734 - { 735 - struct cp210x_port_private *port_priv = usb_get_serial_port_data(port); 736 - u16 line_ctl_save; 737 - u16 line_ctl_test; 738 - int err; 739 - 740 - err = cp210x_read_u16_reg(port, CP210X_GET_LINE_CTL, &line_ctl_save); 741 - if (err) 742 - return err; 743 - 744 - err = cp210x_write_u16_reg(port, CP210X_SET_LINE_CTL, 0x800); 745 - if (err) 746 - return err; 747 - 748 - err = cp210x_read_u16_reg(port, CP210X_GET_LINE_CTL, &line_ctl_test); 749 - if (err) 750 - return err; 751 - 752 - if (line_ctl_test == 8) { 753 - port_priv->has_swapped_line_ctl = true; 754 - line_ctl_save = swab16(line_ctl_save); 755 - } 756 - 757 - return cp210x_write_u16_reg(port, CP210X_SET_LINE_CTL, line_ctl_save); 758 - } 759 - 760 - /* 761 - * Must always be called instead of cp210x_read_u16_reg(CP210X_GET_LINE_CTL) 762 - * to workaround cp2108 bug and get correct value. 763 - */ 764 - static int cp210x_get_line_ctl(struct usb_serial_port *port, u16 *ctl) 765 - { 766 - struct cp210x_port_private *port_priv = usb_get_serial_port_data(port); 767 - int err; 768 - 769 - err = cp210x_read_u16_reg(port, CP210X_GET_LINE_CTL, ctl); 770 - if (err) 771 - return err; 772 - 773 - /* Workaround swapped bytes in 16-bit value from CP210X_GET_LINE_CTL */ 774 - if (port_priv->has_swapped_line_ctl) 775 - *ctl = swab16(*ctl); 776 - 777 - return 0; 778 - } 779 - 780 783 static int cp210x_open(struct tty_struct *tty, struct usb_serial_port *port) 781 784 { 782 785 struct cp210x_port_private *port_priv = usb_get_serial_port_data(port); ··· 735 844 return result; 736 845 } 737 846 738 - /* Configure the termios structure */ 739 - cp210x_get_termios(tty, port); 740 - 741 - if (tty) { 742 - /* The baud rate must be initialised on cp2104 */ 743 - cp210x_change_speed(tty, port, NULL); 744 - 745 - if (I_INPCK(tty)) 746 - cp210x_enable_event_mode(port); 747 - } 847 + if (tty) 848 + cp210x_set_termios(tty, port, NULL); 748 
849 749 850 result = usb_serial_generic_open(tty, port); 750 851 if (result) ··· 915 1032 return !count; 916 1033 } 917 1034 918 - /* 919 - * cp210x_get_termios 920 - * Reads the baud rate, data bits, parity, stop bits and flow control mode 921 - * from the device, corrects any unsupported values, and configures the 922 - * termios structure to reflect the state of the device 923 - */ 924 - static void cp210x_get_termios(struct tty_struct *tty, 925 - struct usb_serial_port *port) 926 - { 927 - unsigned int baud; 928 - 929 - if (tty) { 930 - cp210x_get_termios_port(tty->driver_data, 931 - &tty->termios.c_cflag, &baud); 932 - tty_encode_baud_rate(tty, baud, baud); 933 - } else { 934 - tcflag_t cflag; 935 - cflag = 0; 936 - cp210x_get_termios_port(port, &cflag, &baud); 937 - } 938 - } 939 - 940 - /* 941 - * cp210x_get_termios_port 942 - * This is the heart of cp210x_get_termios which always uses a &usb_serial_port. 943 - */ 944 - static void cp210x_get_termios_port(struct usb_serial_port *port, 945 - tcflag_t *cflagp, unsigned int *baudp) 946 - { 947 - struct device *dev = &port->dev; 948 - tcflag_t cflag; 949 - struct cp210x_flow_ctl flow_ctl; 950 - u32 baud; 951 - u16 bits; 952 - u32 ctl_hs; 953 - u32 flow_repl; 954 - 955 - cp210x_read_u32_reg(port, CP210X_GET_BAUDRATE, &baud); 956 - 957 - dev_dbg(dev, "%s - baud rate = %d\n", __func__, baud); 958 - *baudp = baud; 959 - 960 - cflag = *cflagp; 961 - 962 - cp210x_get_line_ctl(port, &bits); 963 - cflag &= ~CSIZE; 964 - switch (bits & BITS_DATA_MASK) { 965 - case BITS_DATA_5: 966 - dev_dbg(dev, "%s - data bits = 5\n", __func__); 967 - cflag |= CS5; 968 - break; 969 - case BITS_DATA_6: 970 - dev_dbg(dev, "%s - data bits = 6\n", __func__); 971 - cflag |= CS6; 972 - break; 973 - case BITS_DATA_7: 974 - dev_dbg(dev, "%s - data bits = 7\n", __func__); 975 - cflag |= CS7; 976 - break; 977 - case BITS_DATA_8: 978 - dev_dbg(dev, "%s - data bits = 8\n", __func__); 979 - cflag |= CS8; 980 - break; 981 - case BITS_DATA_9: 982 - 
dev_dbg(dev, "%s - data bits = 9 (not supported, using 8 data bits)\n", __func__);
983 - cflag |= CS8;
984 - bits &= ~BITS_DATA_MASK;
985 - bits |= BITS_DATA_8;
986 - cp210x_write_u16_reg(port, CP210X_SET_LINE_CTL, bits);
987 - break;
988 - default:
989 - dev_dbg(dev, "%s - Unknown number of data bits, using 8\n", __func__);
990 - cflag |= CS8;
991 - bits &= ~BITS_DATA_MASK;
992 - bits |= BITS_DATA_8;
993 - cp210x_write_u16_reg(port, CP210X_SET_LINE_CTL, bits);
994 - break;
995 - }
996 -
997 - switch (bits & BITS_PARITY_MASK) {
998 - case BITS_PARITY_NONE:
999 - dev_dbg(dev, "%s - parity = NONE\n", __func__);
1000 - cflag &= ~PARENB;
1001 - break;
1002 - case BITS_PARITY_ODD:
1003 - dev_dbg(dev, "%s - parity = ODD\n", __func__);
1004 - cflag |= (PARENB|PARODD);
1005 - break;
1006 - case BITS_PARITY_EVEN:
1007 - dev_dbg(dev, "%s - parity = EVEN\n", __func__);
1008 - cflag &= ~PARODD;
1009 - cflag |= PARENB;
1010 - break;
1011 - case BITS_PARITY_MARK:
1012 - dev_dbg(dev, "%s - parity = MARK\n", __func__);
1013 - cflag |= (PARENB|PARODD|CMSPAR);
1014 - break;
1015 - case BITS_PARITY_SPACE:
1016 - dev_dbg(dev, "%s - parity = SPACE\n", __func__);
1017 - cflag &= ~PARODD;
1018 - cflag |= (PARENB|CMSPAR);
1019 - break;
1020 - default:
1021 - dev_dbg(dev, "%s - Unknown parity mode, disabling parity\n", __func__);
1022 - cflag &= ~PARENB;
1023 - bits &= ~BITS_PARITY_MASK;
1024 - cp210x_write_u16_reg(port, CP210X_SET_LINE_CTL, bits);
1025 - break;
1026 - }
1027 -
1028 - cflag &= ~CSTOPB;
1029 - switch (bits & BITS_STOP_MASK) {
1030 - case BITS_STOP_1:
1031 - dev_dbg(dev, "%s - stop bits = 1\n", __func__);
1032 - break;
1033 - case BITS_STOP_1_5:
1034 - dev_dbg(dev, "%s - stop bits = 1.5 (not supported, using 1 stop bit)\n", __func__);
1035 - bits &= ~BITS_STOP_MASK;
1036 - cp210x_write_u16_reg(port, CP210X_SET_LINE_CTL, bits);
1037 - break;
1038 - case BITS_STOP_2:
1039 - dev_dbg(dev, "%s - stop bits = 2\n", __func__);
1040 - cflag |= CSTOPB;
1041 - break;
1042 - default:
1043 - dev_dbg(dev, "%s - Unknown number of stop bits, using 1 stop bit\n", __func__);
1044 - bits &= ~BITS_STOP_MASK;
1045 - cp210x_write_u16_reg(port, CP210X_SET_LINE_CTL, bits);
1046 - break;
1047 - }
1048 -
1049 - cp210x_read_reg_block(port, CP210X_GET_FLOW, &flow_ctl,
1050 - sizeof(flow_ctl));
1051 - ctl_hs = le32_to_cpu(flow_ctl.ulControlHandshake);
1052 - if (ctl_hs & CP210X_SERIAL_CTS_HANDSHAKE) {
1053 - dev_dbg(dev, "%s - flow control = CRTSCTS\n", __func__);
1054 - /*
1055 - * When the port is closed, the CP210x hardware disables
1056 - * auto-RTS and RTS is deasserted but it leaves auto-CTS when
1057 - * in hardware flow control mode. When re-opening the port, if
1058 - * auto-CTS is enabled on the cp210x, then auto-RTS must be
1059 - * re-enabled in the driver.
1060 - */
1061 - flow_repl = le32_to_cpu(flow_ctl.ulFlowReplace);
1062 - flow_repl &= ~CP210X_SERIAL_RTS_MASK;
1063 - flow_repl |= CP210X_SERIAL_RTS_SHIFT(CP210X_SERIAL_RTS_FLOW_CTL);
1064 - flow_ctl.ulFlowReplace = cpu_to_le32(flow_repl);
1065 - cp210x_write_reg_block(port,
1066 - CP210X_SET_FLOW,
1067 - &flow_ctl,
1068 - sizeof(flow_ctl));
1069 -
1070 - cflag |= CRTSCTS;
1071 - } else {
1072 - dev_dbg(dev, "%s - flow control = NONE\n", __func__);
1073 - cflag &= ~CRTSCTS;
1074 - }
1075 -
1076 - *cflagp = cflag;
1077 - }
1078 -
1079 1035 struct cp210x_rate {
1080 1036 speed_t rate;
1081 1037 speed_t high;
··· 1074 1352 port_priv->event_mode = false;
1075 1353 }
1076 1354
1355 + static bool cp210x_termios_change(const struct ktermios *a, const struct ktermios *b)
1356 + {
1357 + bool iflag_change;
1358 +
1359 + iflag_change = ((a->c_iflag ^ b->c_iflag) & INPCK);
1360 +
1361 + return tty_termios_hw_change(a, b) || iflag_change;
1362 + }
1363 +
1364 + static void cp210x_set_flow_control(struct tty_struct *tty,
1365 + struct usb_serial_port *port, struct ktermios *old_termios)
1366 + {
1367 + struct cp210x_flow_ctl flow_ctl;
1368 + u32 flow_repl;
1369 + u32 ctl_hs;
1370 + int ret;
1371 +
1372 + if (old_termios && C_CRTSCTS(tty) == (old_termios->c_cflag & CRTSCTS))
1373 + return;
1374 +
1375 + ret = cp210x_read_reg_block(port, CP210X_GET_FLOW, &flow_ctl,
1376 + sizeof(flow_ctl));
1377 + if (ret)
1378 + return;
1379 +
1380 + ctl_hs = le32_to_cpu(flow_ctl.ulControlHandshake);
1381 + flow_repl = le32_to_cpu(flow_ctl.ulFlowReplace);
1382 +
1383 + ctl_hs &= ~CP210X_SERIAL_DSR_HANDSHAKE;
1384 + ctl_hs &= ~CP210X_SERIAL_DCD_HANDSHAKE;
1385 + ctl_hs &= ~CP210X_SERIAL_DSR_SENSITIVITY;
1386 + ctl_hs &= ~CP210X_SERIAL_DTR_MASK;
1387 + ctl_hs |= CP210X_SERIAL_DTR_SHIFT(CP210X_SERIAL_DTR_ACTIVE);
1388 +
1389 + if (C_CRTSCTS(tty)) {
1390 + ctl_hs |= CP210X_SERIAL_CTS_HANDSHAKE;
1391 + flow_repl &= ~CP210X_SERIAL_RTS_MASK;
1392 + flow_repl |= CP210X_SERIAL_RTS_SHIFT(CP210X_SERIAL_RTS_FLOW_CTL);
1393 + } else {
1394 + ctl_hs &= ~CP210X_SERIAL_CTS_HANDSHAKE;
1395 + flow_repl &= ~CP210X_SERIAL_RTS_MASK;
1396 + flow_repl |= CP210X_SERIAL_RTS_SHIFT(CP210X_SERIAL_RTS_ACTIVE);
1397 + }
1398 +
1399 + dev_dbg(&port->dev, "%s - ulControlHandshake=0x%08x, ulFlowReplace=0x%08x\n",
1400 + __func__, ctl_hs, flow_repl);
1401 +
1402 + flow_ctl.ulControlHandshake = cpu_to_le32(ctl_hs);
1403 + flow_ctl.ulFlowReplace = cpu_to_le32(flow_repl);
1404 +
1405 + cp210x_write_reg_block(port, CP210X_SET_FLOW, &flow_ctl,
1406 + sizeof(flow_ctl));
1407 + }
1408 +
1077 1409 static void cp210x_set_termios(struct tty_struct *tty,
1078 1410 struct usb_serial_port *port, struct ktermios *old_termios)
1079 1411 {
1080 - struct device *dev = &port->dev;
1081 - unsigned int cflag, old_cflag;
1412 + struct cp210x_serial_private *priv = usb_get_serial_data(port->serial);
1082 1413 u16 bits;
1414 + int ret;
1083 1415
1084 - cflag = tty->termios.c_cflag;
1085 - old_cflag = old_termios->c_cflag;
1416 + if (old_termios && !cp210x_termios_change(&tty->termios, old_termios))
1417 + return;
1086 1418
1087 - if (tty->termios.c_ospeed != old_termios->c_ospeed)
1419 + if (!old_termios || tty->termios.c_ospeed != old_termios->c_ospeed)
1088 1420 cp210x_change_speed(tty, port, old_termios);
1089 1421
1090 - /* If the number of data bits is to be updated */
1091 - if ((cflag & CSIZE) != (old_cflag & CSIZE)) {
1092 - cp210x_get_line_ctl(port, &bits);
1093 - bits &= ~BITS_DATA_MASK;
1094 - switch (cflag & CSIZE) {
1095 - case CS5:
1096 - bits |= BITS_DATA_5;
1097 - dev_dbg(dev, "%s - data bits = 5\n", __func__);
1098 - break;
1099 - case CS6:
1100 - bits |= BITS_DATA_6;
1101 - dev_dbg(dev, "%s - data bits = 6\n", __func__);
1102 - break;
1103 - case CS7:
1104 - bits |= BITS_DATA_7;
1105 - dev_dbg(dev, "%s - data bits = 7\n", __func__);
1106 - break;
1107 - case CS8:
1108 - default:
1109 - bits |= BITS_DATA_8;
1110 - dev_dbg(dev, "%s - data bits = 8\n", __func__);
1111 - break;
1112 - }
1113 - if (cp210x_write_u16_reg(port, CP210X_SET_LINE_CTL, bits))
1114 - dev_dbg(dev, "Number of data bits requested not supported by device\n");
1422 + /* CP2101 only supports CS8, 1 stop bit and non-stick parity. */
1423 + if (priv->partnum == CP210X_PARTNUM_CP2101) {
1424 + tty->termios.c_cflag &= ~(CSIZE | CSTOPB | CMSPAR);
1425 + tty->termios.c_cflag |= CS8;
1115 1426 }
1116 1427
1117 - if ((cflag & (PARENB|PARODD|CMSPAR)) !=
1118 - (old_cflag & (PARENB|PARODD|CMSPAR))) {
1119 - cp210x_get_line_ctl(port, &bits);
1120 - bits &= ~BITS_PARITY_MASK;
1121 - if (cflag & PARENB) {
1122 - if (cflag & CMSPAR) {
1123 - if (cflag & PARODD) {
1124 - bits |= BITS_PARITY_MARK;
1125 - dev_dbg(dev, "%s - parity = MARK\n", __func__);
1126 - } else {
1127 - bits |= BITS_PARITY_SPACE;
1128 - dev_dbg(dev, "%s - parity = SPACE\n", __func__);
1129 - }
1130 - } else {
1131 - if (cflag & PARODD) {
1132 - bits |= BITS_PARITY_ODD;
1133 - dev_dbg(dev, "%s - parity = ODD\n", __func__);
1134 - } else {
1135 - bits |= BITS_PARITY_EVEN;
1136 - dev_dbg(dev, "%s - parity = EVEN\n", __func__);
1137 - }
1138 - }
1139 - }
1140 - if (cp210x_write_u16_reg(port, CP210X_SET_LINE_CTL, bits))
1141 - dev_dbg(dev, "Parity mode not supported by device\n");
1428 + bits = 0;
1429 +
1430 + switch (C_CSIZE(tty)) {
1431 + case CS5:
1432 + bits |= BITS_DATA_5;
1433 + break;
1434 + case CS6:
1435 + bits |= BITS_DATA_6;
1436 + break;
1437 + case CS7:
1438 + bits |= BITS_DATA_7;
1439 + break;
1440 + case CS8:
1441 + default:
1442 + bits |= BITS_DATA_8;
1443 + break;
1142 1444 }
1143 1445
1144 - if ((cflag & CSTOPB) != (old_cflag & CSTOPB)) {
1145 - cp210x_get_line_ctl(port, &bits);
1146 - bits &= ~BITS_STOP_MASK;
1147 - if (cflag & CSTOPB) {
1148 - bits |= BITS_STOP_2;
1149 - dev_dbg(dev, "%s - stop bits = 2\n", __func__);
1446 + if (C_PARENB(tty)) {
1447 + if (C_CMSPAR(tty)) {
1448 + if (C_PARODD(tty))
1449 + bits |= BITS_PARITY_MARK;
1450 + else
1451 + bits |= BITS_PARITY_SPACE;
1150 1452 } else {
1151 - bits |= BITS_STOP_1;
1152 - dev_dbg(dev, "%s - stop bits = 1\n", __func__);
1453 + if (C_PARODD(tty))
1454 + bits |= BITS_PARITY_ODD;
1455 + else
1456 + bits |= BITS_PARITY_EVEN;
1153 1457 }
1154 - if (cp210x_write_u16_reg(port, CP210X_SET_LINE_CTL, bits))
1155 - dev_dbg(dev, "Number of stop bits requested not supported by device\n");
1156 1458 }
1157 1459
1158 - if ((cflag & CRTSCTS) != (old_cflag & CRTSCTS)) {
1159 - struct cp210x_flow_ctl flow_ctl;
1160 - u32 ctl_hs;
1161 - u32 flow_repl;
1460 + if (C_CSTOPB(tty))
1461 + bits |= BITS_STOP_2;
1462 + else
1463 + bits |= BITS_STOP_1;
1162 1464
1163 - cp210x_read_reg_block(port, CP210X_GET_FLOW, &flow_ctl,
1164 - sizeof(flow_ctl));
1165 - ctl_hs = le32_to_cpu(flow_ctl.ulControlHandshake);
1166 - flow_repl = le32_to_cpu(flow_ctl.ulFlowReplace);
1167 - dev_dbg(dev, "%s - read ulControlHandshake=0x%08x, ulFlowReplace=0x%08x\n",
1168 - __func__, ctl_hs, flow_repl);
1465 + ret = cp210x_write_u16_reg(port, CP210X_SET_LINE_CTL, bits);
1466 + if (ret)
1467 + dev_err(&port->dev, "failed to set line control: %d\n", ret);
1169 1468
1170 - ctl_hs &= ~CP210X_SERIAL_DSR_HANDSHAKE;
1171 - ctl_hs &= ~CP210X_SERIAL_DCD_HANDSHAKE;
1172 - ctl_hs &= ~CP210X_SERIAL_DSR_SENSITIVITY;
1173 - ctl_hs &= ~CP210X_SERIAL_DTR_MASK;
1174 - ctl_hs |= CP210X_SERIAL_DTR_SHIFT(CP210X_SERIAL_DTR_ACTIVE);
1175 - if (cflag & CRTSCTS) {
1176 - ctl_hs |= CP210X_SERIAL_CTS_HANDSHAKE;
1177 -
1178 - flow_repl &= ~CP210X_SERIAL_RTS_MASK;
1179 - flow_repl |= CP210X_SERIAL_RTS_SHIFT(
1180 - CP210X_SERIAL_RTS_FLOW_CTL);
1181 - dev_dbg(dev, "%s - flow control = CRTSCTS\n", __func__);
1182 - } else {
1183 - ctl_hs &= ~CP210X_SERIAL_CTS_HANDSHAKE;
1184 -
1185 - flow_repl &= ~CP210X_SERIAL_RTS_MASK;
1186 - flow_repl |= CP210X_SERIAL_RTS_SHIFT(
1187 - CP210X_SERIAL_RTS_ACTIVE);
1188 - dev_dbg(dev, "%s - flow control = NONE\n", __func__);
1189 - }
1190 -
1191 - dev_dbg(dev, "%s - write ulControlHandshake=0x%08x, ulFlowReplace=0x%08x\n",
1192 - __func__, ctl_hs, flow_repl);
1193 - flow_ctl.ulControlHandshake = cpu_to_le32(ctl_hs);
1194 - flow_ctl.ulFlowReplace = cpu_to_le32(flow_repl);
1195 - cp210x_write_reg_block(port, CP210X_SET_FLOW, &flow_ctl,
1196 - sizeof(flow_ctl));
1197 - }
1469 + cp210x_set_flow_control(tty, port, old_termios);
1198 1470
1199 1471 /*
1200 1472 * Enable event-insertion mode only if input parity checking is
··· 1234 1518 return cp210x_write_u16_reg(port, CP210X_SET_MHS, control);
1235 1519 }
1236 1520
1237 - static void cp210x_dtr_rts(struct usb_serial_port *p, int on)
1521 + static void cp210x_dtr_rts(struct usb_serial_port *port, int on)
1238 1522 {
1239 1523 if (on)
1240 - cp210x_tiocmset_port(p, TIOCM_DTR|TIOCM_RTS, 0);
1524 + cp210x_tiocmset_port(port, TIOCM_DTR | TIOCM_RTS, 0);
1241 1525 else
1242 - cp210x_tiocmset_port(p, 0, TIOCM_DTR|TIOCM_RTS);
1526 + cp210x_tiocmset_port(port, 0, TIOCM_DTR | TIOCM_RTS);
1243 1527 }
1244 1528
1245 1529 static int cp210x_tiocmget(struct tty_struct *tty)
··· 1702 1986 {
1703 1987 struct usb_serial *serial = port->serial;
1704 1988 struct cp210x_port_private *port_priv;
1705 - int ret;
1706 1989
1707 1990 port_priv = kzalloc(sizeof(*port_priv), GFP_KERNEL);
1708 1991 if (!port_priv)
··· 1710 1995 port_priv->bInterfaceNumber = cp210x_interface_num(serial);
1711 1996
1712 1997 usb_set_serial_port_data(port, port_priv);
1713 -
1714 - ret = cp210x_detect_swapped_line_ctl(port);
1715 - if (ret) {
1716 - kfree(port_priv);
1717 - return ret;
1718 - }
1719 1998
1720 1999 return 0;
1721 2000 }
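The new cp210x_termios_change() helper above gates the whole set_termios path on whether anything hardware-relevant (or the INPCK input-parity bit) actually changed. A standalone sketch of that change-detection idea, using a simplified stand-in struct and illustrative flag values rather than the kernel's ktermios and tty_termios_hw_change():

```c
#include <stdbool.h>

#define INPCK   0x0010U         /* input parity check (illustrative value) */
#define HW_MASK 0x0fffU         /* stand-in for the c_cflag bits a hw-change check compares */

struct fake_termios {
	unsigned int c_iflag;
	unsigned int c_cflag;
	unsigned int c_ospeed;
};

/* Stand-in for tty_termios_hw_change(): hardware flags or output speed differ. */
static bool hw_change(const struct fake_termios *a, const struct fake_termios *b)
{
	return ((a->c_cflag ^ b->c_cflag) & HW_MASK) || a->c_ospeed != b->c_ospeed;
}

/* Mirror of the helper's logic: INPCK is the one c_iflag bit this driver
 * cares about (it selects event-insertion mode), so fold it in. */
bool termios_change(const struct fake_termios *a, const struct fake_termios *b)
{
	bool iflag_change = (a->c_iflag ^ b->c_iflag) & INPCK;

	return hw_change(a, b) || iflag_change;
}
```

The payoff is the early return in cp210x_set_termios(): redundant line-control and flow-control register writes are skipped entirely when termios is unchanged.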
+21 -41
drivers/usb/serial/digi_acceleport.c
··· 19 19 #include <linux/tty_flip.h>
20 20 #include <linux/module.h>
21 21 #include <linux/spinlock.h>
22 - #include <linux/workqueue.h>
23 22 #include <linux/uaccess.h>
24 23 #include <linux/usb.h>
25 24 #include <linux/wait.h>
··· 197 198 int dp_throttle_restart;
198 199 wait_queue_head_t dp_flush_wait;
199 200 wait_queue_head_t dp_close_wait; /* wait queue for close */
200 - struct work_struct dp_wakeup_work;
201 + wait_queue_head_t write_wait;
201 202 struct usb_serial_port *dp_port;
202 203 };
203 204
204 205
205 206 /* Local Function Declarations */
206 207
207 - static void digi_wakeup_write_lock(struct work_struct *work);
208 208 static int digi_write_oob_command(struct usb_serial_port *port,
209 209 unsigned char *buf, int count, int interruptible);
210 210 static int digi_write_inb_command(struct usb_serial_port *port,
··· 354 356 return timeout;
355 357 }
356 358
357 -
358 - /*
359 - * Digi Wakeup Write
360 - *
361 - * Wake up port, line discipline, and tty processes sleeping
362 - * on writes.
363 - */
364 -
365 - static void digi_wakeup_write_lock(struct work_struct *work)
366 - {
367 - struct digi_port *priv =
368 - container_of(work, struct digi_port, dp_wakeup_work);
369 - struct usb_serial_port *port = priv->dp_port;
370 - unsigned long flags;
371 -
372 - spin_lock_irqsave(&priv->dp_port_lock, flags);
373 - tty_port_tty_wakeup(&port->port);
374 - spin_unlock_irqrestore(&priv->dp_port_lock, flags);
375 - }
376 -
377 359 /*
378 360 * Digi Write OOB Command
379 361 *
··· 382 404 while (count > 0) {
383 405 while (oob_priv->dp_write_urb_in_use) {
384 406 cond_wait_interruptible_timeout_irqrestore(
385 - &oob_port->write_wait, DIGI_RETRY_TIMEOUT,
407 + &oob_priv->write_wait, DIGI_RETRY_TIMEOUT,
386 408 &oob_priv->dp_port_lock, flags);
387 409 if (interruptible && signal_pending(current))
388 410 return -EINTR;
··· 445 467 while (priv->dp_write_urb_in_use &&
446 468 time_before(jiffies, timeout)) {
447 469 cond_wait_interruptible_timeout_irqrestore(
448 - &port->write_wait, DIGI_RETRY_TIMEOUT,
470 + &priv->write_wait, DIGI_RETRY_TIMEOUT,
449 471 &priv->dp_port_lock, flags);
450 472 if (signal_pending(current))
451 473 return -EINTR;
··· 524 546 while (oob_priv->dp_write_urb_in_use) {
525 547 spin_unlock(&port_priv->dp_port_lock);
526 548 cond_wait_interruptible_timeout_irqrestore(
527 - &oob_port->write_wait, DIGI_RETRY_TIMEOUT,
549 + &oob_priv->write_wait, DIGI_RETRY_TIMEOUT,
528 550 &oob_priv->dp_port_lock, flags);
529 551 if (interruptible && signal_pending(current))
530 552 return -EINTR;
··· 889 911 unsigned char *data = port->write_urb->transfer_buffer;
890 912 unsigned long flags = 0;
891 913
892 - dev_dbg(&port->dev,
893 - "digi_write: TOP: port=%d, count=%d, in_interrupt=%ld\n",
894 - priv->dp_port_num, count, in_interrupt());
914 + dev_dbg(&port->dev, "digi_write: TOP: port=%d, count=%d\n",
915 + priv->dp_port_num, count);
895 916
896 917 /* copy user data (which can sleep) before getting spin lock */
897 918 count = min(count, port->bulk_out_size-2);
··· 963 986 unsigned long flags;
964 987 int ret = 0;
965 988 int status = urb->status;
989 + bool wakeup;
966 990
967 991 /* port and serial sanity check */
968 992 if (port == NULL || (priv = usb_get_serial_port_data(port)) == NULL) {
··· 984 1006 dev_dbg(&port->dev, "digi_write_bulk_callback: oob callback\n");
985 1007 spin_lock_irqsave(&priv->dp_port_lock, flags);
986 1008 priv->dp_write_urb_in_use = 0;
987 - wake_up_interruptible(&port->write_wait);
1009 + wake_up_interruptible(&priv->write_wait);
988 1010 spin_unlock_irqrestore(&priv->dp_port_lock, flags);
989 1011 return;
990 1012 }
991 1013
992 1014 /* try to send any buffered data on this port */
1015 + wakeup = true;
993 1016 spin_lock_irqsave(&priv->dp_port_lock, flags);
994 1017 priv->dp_write_urb_in_use = 0;
995 1018 if (priv->dp_out_buf_len > 0) {
··· 1006 1027 if (ret == 0) {
1007 1028 priv->dp_write_urb_in_use = 1;
1008 1029 priv->dp_out_buf_len = 0;
1030 + wakeup = false;
1009 1031 }
1010 1032 }
1011 - /* wake up processes sleeping on writes immediately */
1012 - tty_port_tty_wakeup(&port->port);
1013 - /* also queue up a wakeup at scheduler time, in case we */
1014 - /* lost the race in write_chan(). */
1015 - schedule_work(&priv->dp_wakeup_work);
1016 -
1017 1033 spin_unlock_irqrestore(&priv->dp_port_lock, flags);
1034 +
1018 1035 if (ret && ret != -EPERM)
1019 1036 dev_err_console(port,
1020 1037 "%s: usb_submit_urb failed, ret=%d, port=%d\n",
1021 1038 __func__, ret, priv->dp_port_num);
1039 +
1040 + if (wakeup)
1041 + tty_port_tty_wakeup(&port->port);
1022 1042 }
1023 1043
1024 1044 static int digi_write_room(struct tty_struct *tty)
··· 1217 1239 init_waitqueue_head(&priv->dp_transmit_idle_wait);
1218 1240 init_waitqueue_head(&priv->dp_flush_wait);
1219 1241 init_waitqueue_head(&priv->dp_close_wait);
1220 - INIT_WORK(&priv->dp_wakeup_work, digi_wakeup_write_lock);
1242 + init_waitqueue_head(&priv->write_wait);
1221 1243 priv->dp_port = port;
1222 -
1223 - init_waitqueue_head(&port->write_wait);
1224 1244
1225 1245 usb_set_serial_port_data(port, priv);
1226 1246
··· 1484 1508 rts = C_CRTSCTS(tty);
1485 1509
1486 1510 if (tty && opcode == DIGI_CMD_READ_INPUT_SIGNALS) {
1511 + bool wakeup = false;
1512 +
1487 1513 spin_lock_irqsave(&priv->dp_port_lock, flags);
1488 1514 /* convert from digi flags to termiox flags */
1489 1515 if (val & DIGI_READ_INPUT_SIGNALS_CTS) {
1490 1516 priv->dp_modem_signals |= TIOCM_CTS;
1491 - /* port must be open to use tty struct */
1492 1517 if (rts)
1493 - tty_port_tty_wakeup(&port->port);
1518 + wakeup = true;
1494 1519 } else {
1495 1520 priv->dp_modem_signals &= ~TIOCM_CTS;
1496 1521 /* port must be open to use tty struct */
··· 1510 1533 priv->dp_modem_signals &= ~TIOCM_CD;
1511 1534
1512 1535 spin_unlock_irqrestore(&priv->dp_port_lock, flags);
1536 +
1537 + if (wakeup)
1538 + tty_port_tty_wakeup(&port->port);
1513 1539 } else if (opcode == DIGI_CMD_TRANSMIT_IDLE) {
1514 1540 spin_lock_irqsave(&priv->dp_port_lock, flags);
1515 1541 priv->dp_transmit_idle = 1;
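The digi_acceleport hunks above replace a wakeup-from-workqueue scheme with a recurring kernel pattern: decide under the spinlock whether waiters need waking, but invoke the wakeup only after dropping the lock. A hedged userspace sketch of just that pattern, with a pthread mutex standing in for the spinlock and invented names (tx_complete, resubmit_ok) that are not from the driver:

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int buffered;        /* bytes still queued for the next transfer */
static int wakeups;         /* stands in for tty_port_tty_wakeup() calls */

/* Completion handler: if buffered data was handed off to a new transfer,
 * writers must keep waiting; otherwise wake them -- after unlocking. */
void tx_complete(bool resubmit_ok)
{
	bool do_wakeup = true;

	pthread_mutex_lock(&lock);
	if (buffered > 0 && resubmit_ok) {
		buffered = 0;           /* data resubmitted, no room freed yet */
		do_wakeup = false;
	}
	pthread_mutex_unlock(&lock);

	/* The callback runs only once the lock is released. */
	if (do_wakeup)
		wakeups++;
}

int wakeup_count(void)
{
	return wakeups;
}
```

Calling wakeup helpers outside the lock avoids re-entering tty code while holding a driver spinlock, which is what let the diff drop the deferred dp_wakeup_work entirely.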
+20 -3
drivers/usb/serial/ftdi_sio.c
··· 1841 1841 struct ftdi_private *priv = usb_get_serial_port_data(port);
1842 1842 int result;
1843 1843
1844 - if (priv->gpio_altfunc & BIT(offset))
1845 - return -ENODEV;
1846 -
1847 1844 mutex_lock(&priv->gpio_lock);
1848 1845 if (!priv->gpio_used) {
1849 1846 /* Set default pin states, as we cannot get them from device */
··· 1997 2000 mutex_unlock(&priv->gpio_lock);
1998 2001
1999 2002 return result;
2003 + }
2004 +
2005 + static int ftdi_gpio_init_valid_mask(struct gpio_chip *gc,
2006 + unsigned long *valid_mask,
2007 + unsigned int ngpios)
2008 + {
2009 + struct usb_serial_port *port = gpiochip_get_data(gc);
2010 + struct ftdi_private *priv = usb_get_serial_port_data(port);
2011 + unsigned long map = priv->gpio_altfunc;
2012 +
2013 + bitmap_complement(valid_mask, &map, ngpios);
2014 +
2015 + if (bitmap_empty(valid_mask, ngpios))
2016 + dev_dbg(&port->dev, "no CBUS pin configured for GPIO\n");
2017 + else
2018 + dev_dbg(&port->dev, "CBUS%*pbl configured for GPIO\n", ngpios,
2019 + valid_mask);
2020 +
2021 + return 0;
2000 2022 }
2001 2023
2002 2024 static int ftdi_read_eeprom(struct usb_serial *serial, void *dst, u16 addr,
··· 2189 2173 priv->gc.get_direction = ftdi_gpio_direction_get;
2190 2174 priv->gc.direction_input = ftdi_gpio_direction_input;
2191 2175 priv->gc.direction_output = ftdi_gpio_direction_output;
2176 + priv->gc.init_valid_mask = ftdi_gpio_init_valid_mask;
2192 2177 priv->gc.get = ftdi_gpio_get;
2193 2178 priv->gc.set = ftdi_gpio_set;
2194 2179 priv->gc.get_multiple = ftdi_gpio_get_multiple;
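The ftdi_sio hunk above moves per-pin altfunc checks into a gpio_chip init_valid_mask callback: the valid-GPIO mask is simply the complement of the altfunc bitmap over the chip's pin count. A minimal plain-C sketch of that computation, standing in for the kernel's bitmap_complement() (single-word case only; names are illustrative):

```c
/* Pins whose CBUS function is something other than GPIO are set in
 * "altfunc"; the usable-GPIO mask is the complement over the first
 * ngpios bits. */
unsigned long gpio_valid_mask(unsigned long altfunc, unsigned int ngpios)
{
	unsigned long width_mask;

	/* Build a mask covering ngpios bits without shifting by the
	 * full word width (undefined behaviour in C). */
	if (ngpios >= sizeof(unsigned long) * 8)
		width_mask = ~0UL;
	else
		width_mask = (1UL << ngpios) - 1;

	return ~altfunc & width_mask;
}
```

With the valid mask in place, gpiolib rejects requests for masked-out lines itself, so the driver no longer needs the explicit `-ENODEV` check that the diff removes.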
-2
drivers/usb/serial/iuu_phoenix.c
··· 850 850 default:
851 851 kfree(dataout);
852 852 return IUU_INVALID_PARAMETER;
853 - break;
854 853 }
855 854
856 855 switch (parity & 0xF0) {
··· 863 864 default:
864 865 kfree(dataout);
865 866 return IUU_INVALID_PARAMETER;
866 - break;
867 867 }
868 868
869 869 status = bulk_immediate(port, dataout, DataCount);
+244 -318
drivers/usb/serial/keyspan_pda.c
··· 5 5 * Copyright (C) 1999 - 2001 Greg Kroah-Hartman <greg@kroah.com>
6 6 * Copyright (C) 1999, 2000 Brian Warner <warner@lothar.com>
7 7 * Copyright (C) 2000 Al Borchers <borchers@steinerpoint.com>
8 + * Copyright (C) 2020 Johan Hovold <johan@kernel.org>
8 9 *
9 10 * See Documentation/usb/usb-serial.rst for more information on using this
10 11 * driver
11 12 */
12 -
13 13
14 14 #include <linux/kernel.h>
15 15 #include <linux/errno.h>
··· 25 25 #include <linux/usb/serial.h>
26 26 #include <linux/usb/ezusb.h>
27 27
28 - /* make a simple define to handle if we are compiling keyspan_pda or xircom support */
29 - #if IS_ENABLED(CONFIG_USB_SERIAL_KEYSPAN_PDA)
30 - #define KEYSPAN
31 - #else
32 - #undef KEYSPAN
33 - #endif
34 - #if IS_ENABLED(CONFIG_USB_SERIAL_XIRCOM)
35 - #define XIRCOM
36 - #else
37 - #undef XIRCOM
38 - #endif
39 -
40 - #define DRIVER_AUTHOR "Brian Warner <warner@lothar.com>"
28 + #define DRIVER_AUTHOR "Brian Warner <warner@lothar.com>, Johan Hovold <johan@kernel.org>"
41 29 #define DRIVER_DESC "USB Keyspan PDA Converter driver"
30 +
31 + #define KEYSPAN_TX_THRESHOLD 128
42 32
43 33 struct keyspan_pda_private {
44 34 int tx_room;
45 - int tx_throttled;
46 - struct work_struct wakeup_work;
47 - struct work_struct unthrottle_work;
35 + struct work_struct unthrottle_work;
48 36 struct usb_serial *serial;
49 37 struct usb_serial_port *port;
50 38 };
51 39
40 + static int keyspan_pda_write_start(struct usb_serial_port *port);
52 41
53 42 #define KEYSPAN_VENDOR_ID 0x06cd
54 43 #define KEYSPAN_PDA_FAKE_ID 0x0103
··· 51 62 #define ENTREGA_FAKE_ID 0x8093
52 63
53 64 static const struct usb_device_id id_table_combined[] = {
54 - #ifdef KEYSPAN
55 65 { USB_DEVICE(KEYSPAN_VENDOR_ID, KEYSPAN_PDA_FAKE_ID) },
56 - #endif
57 - #ifdef XIRCOM
58 66 { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID) },
59 67 { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID_2) },
60 68 { USB_DEVICE(ENTREGA_VENDOR_ID, ENTREGA_FAKE_ID) },
61 - #endif
62 69 { USB_DEVICE(KEYSPAN_VENDOR_ID, KEYSPAN_PDA_ID) },
63 70 { } /* Terminating entry */
64 71 };
65 -
66 72 MODULE_DEVICE_TABLE(usb, id_table_combined);
67 73
68 74 static const struct usb_device_id id_table_std[] = {
··· 65 81 { } /* Terminating entry */
66 82 };
67 83
68 - #ifdef KEYSPAN
69 84 static const struct usb_device_id id_table_fake[] = {
70 85 { USB_DEVICE(KEYSPAN_VENDOR_ID, KEYSPAN_PDA_FAKE_ID) },
71 - { } /* Terminating entry */
72 - };
73 - #endif
74 -
75 - #ifdef XIRCOM
76 - static const struct usb_device_id id_table_fake_xircom[] = {
77 86 { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID) },
78 87 { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID_2) },
79 88 { USB_DEVICE(ENTREGA_VENDOR_ID, ENTREGA_FAKE_ID) },
80 - { }
89 + { } /* Terminating entry */
81 90 };
82 - #endif
83 91
84 - static void keyspan_pda_wakeup_write(struct work_struct *work)
92 + static int keyspan_pda_get_write_room(struct keyspan_pda_private *priv)
85 93 {
86 - struct keyspan_pda_private *priv =
87 - container_of(work, struct keyspan_pda_private, wakeup_work);
88 94 struct usb_serial_port *port = priv->port;
95 + struct usb_serial *serial = port->serial;
96 + u8 *room;
97 + int rc;
89 98
90 - tty_port_tty_wakeup(&port->port);
99 + room = kmalloc(1, GFP_KERNEL);
100 + if (!room)
101 + return -ENOMEM;
102 +
103 + rc = usb_control_msg(serial->dev,
104 + usb_rcvctrlpipe(serial->dev, 0),
105 + 6, /* write_room */
106 + USB_TYPE_VENDOR | USB_RECIP_INTERFACE
107 + | USB_DIR_IN,
108 + 0, /* value: 0 means "remaining room" */
109 + 0, /* index */
110 + room,
111 + 1,
112 + 2000);
113 + if (rc != 1) {
114 + if (rc >= 0)
115 + rc = -EIO;
116 + dev_dbg(&port->dev, "roomquery failed: %d\n", rc);
117 + goto out_free;
118 + }
119 +
120 + dev_dbg(&port->dev, "roomquery says %d\n", *room);
121 + rc = *room;
122 + out_free:
123 + kfree(room);
124 +
125 + return rc;
91 126 }
92 127
93 128 static void keyspan_pda_request_unthrottle(struct work_struct *work)
94 129 {
95 130 struct keyspan_pda_private *priv =
96 131 container_of(work, struct keyspan_pda_private, unthrottle_work);
97 - struct usb_serial *serial = priv->serial;
132 + struct usb_serial_port *port = priv->port;
133 + struct usb_serial *serial = port->serial;
134 + unsigned long flags;
98 135 int result;
99 136
100 - /* ask the device to tell us when the tx buffer becomes
101 - sufficiently empty */
137 + dev_dbg(&port->dev, "%s\n", __func__);
138 +
139 + /*
140 + * Ask the device to tell us when the tx buffer becomes
141 + * sufficiently empty.
142 + */
102 143 result = usb_control_msg(serial->dev,
103 144 usb_sndctrlpipe(serial->dev, 0),
104 145 7, /* request_unthrottle */
105 146 USB_TYPE_VENDOR | USB_RECIP_INTERFACE
106 147 | USB_DIR_OUT,
107 - 16, /* value: threshold */
148 + KEYSPAN_TX_THRESHOLD,
108 149 0, /* index */
109 150 NULL,
110 151 0,
··· 137 128 if (result < 0)
138 129 dev_dbg(&serial->dev->dev, "%s - error %d from usb_control_msg\n",
139 130 __func__, result);
140 - }
131 + /*
132 + * Need to check available space after requesting notification in case
133 + * buffer is already empty so that no notification is sent.
134 + */
135 + result = keyspan_pda_get_write_room(priv);
136 + if (result > KEYSPAN_TX_THRESHOLD) {
137 + spin_lock_irqsave(&port->lock, flags);
138 + priv->tx_room = max(priv->tx_room, result);
139 + spin_unlock_irqrestore(&port->lock, flags);
141 140
141 + usb_serial_port_softint(port);
142 + }
143 + }
142 144
143 145 static void keyspan_pda_rx_interrupt(struct urb *urb)
144 146 {
··· 159 139 int retval;
160 140 int status = urb->status;
161 141 struct keyspan_pda_private *priv;
142 + unsigned long flags;
143 +
162 144 priv = usb_get_serial_port_data(port);
163 145
164 146 switch (status) {
··· 194 172 break;
195 173 case 1:
196 174 /* status interrupt */
197 - if (len < 3) {
175 + if (len < 2) {
198 176 dev_warn(&port->dev, "short interrupt message received\n");
199 177 break;
200 178 }
201 - dev_dbg(&port->dev, "rx int, d1=%d, d2=%d\n", data[1], data[2]);
179 + dev_dbg(&port->dev, "rx int, d1=%d\n", data[1]);
202 180 switch (data[1]) {
203 181 case 1: /* modemline change */
204 182 break;
205 183 case 2: /* tx unthrottle interrupt */
206 - priv->tx_throttled = 0;
207 - /* queue up a wakeup at scheduler time */
208 - schedule_work(&priv->wakeup_work);
184 + spin_lock_irqsave(&port->lock, flags);
185 + priv->tx_room = max(priv->tx_room, KEYSPAN_TX_THRESHOLD);
186 + spin_unlock_irqrestore(&port->lock, flags);
187 +
188 + keyspan_pda_write_start(port);
189 +
190 + usb_serial_port_softint(port);
209 191 break;
210 192 default:
211 193 break;
··· 227 201 __func__, retval);
228 202 }
229 203
230 -
231 204 static void keyspan_pda_rx_throttle(struct tty_struct *tty)
232 205 {
233 - /* stop receiving characters. We just turn off the URB request, and
234 - let chars pile up in the device. If we're doing hardware
235 - flowcontrol, the device will signal the other end when its buffer
236 - fills up. If we're doing XON/XOFF, this would be a good time to
237 - send an XOFF, although it might make sense to foist that off
238 - upon the device too. */
239 206 struct usb_serial_port *port = tty->driver_data;
240 207
208 + /*
209 + * Stop receiving characters. We just turn off the URB request, and
210 + * let chars pile up in the device. If we're doing hardware
211 + * flowcontrol, the device will signal the other end when its buffer
212 + * fills up. If we're doing XON/XOFF, this would be a good time to
213 + * send an XOFF, although it might make sense to foist that off upon
214 + * the device too.
215 + */
241 216 usb_kill_urb(port->interrupt_in_urb);
242 217 }
243 -
244 218
245 219 static void keyspan_pda_rx_unthrottle(struct tty_struct *tty)
246 220 {
247 221 struct usb_serial_port *port = tty->driver_data;
248 - /* just restart the receive interrupt URB */
249 222
223 + /* just restart the receive interrupt URB */
250 224 if (usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL))
251 225 dev_dbg(&port->dev, "usb_submit_urb(read urb) failed\n");
252 226 }
253 -
254 227
255 228 static speed_t keyspan_pda_setbaud(struct usb_serial *serial, speed_t baud)
256 229 {
··· 292 267 baud = 9600;
293 268 }
294 269
295 - /* rather than figure out how to sleep while waiting for this
296 - to complete, I just use the "legacy" API. */
297 270 rc = usb_control_msg(serial->dev, usb_sndctrlpipe(serial->dev, 0),
298 271 0, /* set baud */
299 272 USB_TYPE_VENDOR
··· 304 281 2000); /* timeout */
305 282 if (rc < 0)
306 283 return 0;
284 +
307 285 return baud;
308 286 }
309 -
310 287
311 288 static void keyspan_pda_break_ctl(struct tty_struct *tty, int break_state)
312 289 {
··· 319 296 value = 1; /* start break */
320 297 else
321 298 value = 0; /* clear break */
299 +
322 300 result = usb_control_msg(serial->dev, usb_sndctrlpipe(serial->dev, 0),
323 301 4, /* set break */
324 302 USB_TYPE_VENDOR | USB_RECIP_INTERFACE | USB_DIR_OUT,
··· 327 303 if (result < 0)
328 304 dev_dbg(&port->dev, "%s - error %d from usb_control_msg\n",
329 305 __func__, result);
330 - /* there is something funky about this.. the TCSBRK that 'cu' performs
331 - ought to translate into a break_ctl(-1),break_ctl(0) pair HZ/4
332 - seconds apart, but it feels like the break sent isn't as long as it
333 - is on /dev/ttyS0 */
334 306 }
335 -
336 307
337 308 static void keyspan_pda_set_termios(struct tty_struct *tty,
338 309 struct usb_serial_port *port, struct ktermios *old_termios)
··· 335 316 struct usb_serial *serial = port->serial;
336 317 speed_t speed;
337 318
338 - /* cflag specifies lots of stuff: number of stop bits, parity, number
339 - of data bits, baud. What can the device actually handle?:
340 - CSTOPB (1 stop bit or 2)
341 - PARENB (parity)
342 - CSIZE (5bit .. 8bit)
343 - There is minimal hw support for parity (a PSW bit seems to hold the
344 - parity of whatever is in the accumulator). The UART either deals
345 - with 10 bits (start, 8 data, stop) or 11 bits (start, 8 data,
346 - 1 special, stop). So, with firmware changes, we could do:
347 - 8N1: 10 bit
348 - 8N2: 11 bit, extra bit always (mark?)
349 - 8[EOMS]1: 11 bit, extra bit is parity
350 - 7[EOMS]1: 10 bit, b0/b7 is parity
351 - 7[EOMS]2: 11 bit, b0/b7 is parity, extra bit always (mark?)
352 -
353 - HW flow control is dictated by the tty->termios.c_cflags & CRTSCTS
354 - bit.
355 -
356 - For now, just do baud. */
357 -
319 + /*
320 + * cflag specifies lots of stuff: number of stop bits, parity, number
321 + * of data bits, baud. What can the device actually handle?:
322 + * CSTOPB (1 stop bit or 2)
323 + * PARENB (parity)
324 + * CSIZE (5bit .. 8bit)
325 + * There is minimal hw support for parity (a PSW bit seems to hold the
326 + * parity of whatever is in the accumulator). The UART either deals
327 + * with 10 bits (start, 8 data, stop) or 11 bits (start, 8 data,
328 + * 1 special, stop). So, with firmware changes, we could do:
329 + * 8N1: 10 bit
330 + * 8N2: 11 bit, extra bit always (mark?)
331 + * 8[EOMS]1: 11 bit, extra bit is parity
332 + * 7[EOMS]1: 10 bit, b0/b7 is parity
333 + * 7[EOMS]2: 11 bit, b0/b7 is parity, extra bit always (mark?)
334 + *
335 + * HW flow control is dictated by the tty->termios.c_cflags & CRTSCTS
336 + * bit.
337 + *
338 + * For now, just do baud.
339 + */
358 340 speed = tty_get_baud_rate(tty);
359 341 speed = keyspan_pda_setbaud(serial, speed);
360 342
··· 364 344 /* It hasn't changed so.. */
365 345 speed = tty_termios_baud_rate(old_termios);
366 346 }
367 - /* Only speed can change so copy the old h/w parameters
368 - then encode the new speed */
347 + /*
348 + * Only speed can change so copy the old h/w parameters then encode
349 + * the new speed.
350 + */
369 351 tty_termios_copy_hw(&tty->termios, old_termios);
370 352 tty_encode_baud_rate(tty, speed, speed);
371 353 }
372 354
373 -
374 - /* modem control pins: DTR and RTS are outputs and can be controlled.
375 - DCD, RI, DSR, CTS are inputs and can be read. All outputs can also be
376 - read. The byte passed is: DTR(b7) DCD RI DSR CTS RTS(b2) unused unused */
377 -
355 + /*
356 + * Modem control pins: DTR and RTS are outputs and can be controlled.
357 + * DCD, RI, DSR, CTS are inputs and can be read. All outputs can also be
358 + * read. The byte passed is: DTR(b7) DCD RI DSR CTS RTS(b2) unused unused.
359 + */
378 360 static int keyspan_pda_get_modem_info(struct usb_serial *serial,
379 361 unsigned char *value)
380 362 {
··· 400 378 return rc;
401 379 }
402 380
403 -
404 381 static int keyspan_pda_set_modem_info(struct usb_serial *serial,
405 382 unsigned char value)
406 383 {
··· 422 401 rc = keyspan_pda_get_modem_info(serial, &status);
423 402 if (rc < 0)
424 403 return rc;
425 - value =
426 - ((status & (1<<7)) ? TIOCM_DTR : 0) |
427 - ((status & (1<<6)) ? TIOCM_CAR : 0) |
428 - ((status & (1<<5)) ? TIOCM_RNG : 0) |
429 - ((status & (1<<4)) ? TIOCM_DSR : 0) |
430 - ((status & (1<<3)) ? TIOCM_CTS : 0) |
431 - ((status & (1<<2)) ? TIOCM_RTS : 0);
404 +
405 + value = ((status & BIT(7)) ? TIOCM_DTR : 0) |
406 + ((status & BIT(6)) ? TIOCM_CAR : 0) |
407 + ((status & BIT(5)) ? TIOCM_RNG : 0) |
408 + ((status & BIT(4)) ? TIOCM_DSR : 0) |
409 + ((status & BIT(3)) ? TIOCM_CTS : 0) |
410 + ((status & BIT(2)) ? TIOCM_RTS : 0);
411 +
432 412 return value;
433 413 }
434 414
··· 446 424 return rc;
447 425
448 426 if (set & TIOCM_RTS)
449 - status |= (1<<2);
427 + status |= BIT(2);
450 428 if (set & TIOCM_DTR)
451 - status |= (1<<7);
429 + status |= BIT(7);
452 430
453 431 if (clear & TIOCM_RTS)
454 - status &= ~(1<<2);
432 + status &= ~BIT(2);
455 433 if (clear & TIOCM_DTR)
456 - status &= ~(1<<7);
434 + status &= ~BIT(7);
457 435 rc = keyspan_pda_set_modem_info(serial, status);
458 436 return rc;
459 437 }
460 438
461 - static int keyspan_pda_write(struct tty_struct *tty,
462 - struct usb_serial_port *port, const unsigned char *buf, int count)
439 + static int keyspan_pda_write_start(struct usb_serial_port *port)
463 440 {
464 - struct usb_serial *serial = port->serial;
465 - int request_unthrottle = 0;
466 - int rc = 0;
467 - struct keyspan_pda_private *priv;
441 + struct keyspan_pda_private *priv = usb_get_serial_port_data(port);
442 + unsigned long flags;
443 + struct urb *urb;
444 + int count;
445 + int room;
446 + int rc;
468 447
469 - priv = usb_get_serial_port_data(port);
470 - /* guess how much room is left in the device's ring buffer, and if we
471 - want to send more than that, check first, updating our notion of
472 - what is left. If our write will result in no room left, ask the
473 - device to give us an interrupt when the room available rises above
474 - a threshold, and hold off all writers (eventually, those using
475 - select() or poll() too) until we receive that unthrottle interrupt.
476 - Block if we can't write anything at all, otherwise write as much as
477 - we can. */
478 - if (count == 0) {
479 - dev_dbg(&port->dev, "write request of 0 bytes\n");
448 + /*
449 + * Guess how much room is left in the device's ring buffer. If our
450 + * write will result in no room left, ask the device to give us an
451 + * interrupt when the room available rises above a threshold but also
452 + * query how much room is currently available (in case our guess was
453 + * too conservative and the buffer is already empty when the
454 + * unthrottle work is scheduled).
455 + */
456 +
457 + /*
458 + * We might block because of:
459 + * the TX urb is in-flight (wait until it completes)
460 + * the device is full (wait until it says there is room)
461 + */
462 + spin_lock_irqsave(&port->lock, flags);
463 +
464 + room = priv->tx_room;
465 + count = kfifo_len(&port->write_fifo);
466 +
467 + if (!test_bit(0, &port->write_urbs_free) || count == 0 || room == 0) {
468 + spin_unlock_irqrestore(&port->lock, flags);
480 469 return 0;
481 470 }
471 + __clear_bit(0, &port->write_urbs_free);
482 472
483 - /* we might block because of:
484 - the TX urb is in-flight (wait until it completes)
485 - the device is full (wait until it says there is room)
486 - */
487 - spin_lock_bh(&port->lock);
488 - if (!test_bit(0, &port->write_urbs_free) || priv->tx_throttled) {
489 - spin_unlock_bh(&port->lock);
490 - return 0;
491 - }
492 - clear_bit(0, &port->write_urbs_free);
493 - spin_unlock_bh(&port->lock);
473 + if (count > room)
474 + count = room;
475 + if (count > port->bulk_out_size)
476 + count = port->bulk_out_size;
494 477
495 - /* At this point the URB is in our control, nobody else can submit it
496 - again (the only sudden transition was the one from EINPROGRESS to
497 - finished). Also, the tx process is not throttled. So we are
498 - ready to write. */
478 + urb = port->write_urb;
479 + count = kfifo_out(&port->write_fifo, urb->transfer_buffer, count);
480 + urb->transfer_buffer_length = count;
499 481
500 - count = (count > port->bulk_out_size) ? port->bulk_out_size : count;
482 + port->tx_bytes += count;
483 + priv->tx_room -= count;
501 484
502 - /* Check if we might overrun the Tx buffer. If so, ask the
503 - device how much room it really has. This is done only on
504 - scheduler time, since usb_control_msg() sleeps. */
505 - if (count > priv->tx_room && !in_interrupt()) {
506 - u8 *room;
485 + spin_unlock_irqrestore(&port->lock, flags);
507 486
508 - room = kmalloc(1, GFP_KERNEL);
509 - if (!room) {
510 - rc = -ENOMEM;
511 - goto exit;
512 - }
487 + dev_dbg(&port->dev, "%s - count = %d, txroom = %d\n", __func__, count, room);
513 488
514 - rc = usb_control_msg(serial->dev,
515 - usb_rcvctrlpipe(serial->dev, 0),
516 - 6, /* write_room */
517 - USB_TYPE_VENDOR | USB_RECIP_INTERFACE
518 - | USB_DIR_IN,
519 - 0, /* value: 0 means "remaining room" */
520 - 0, /* index */
521 - room,
522 - 1,
523 - 2000);
524 - if (rc > 0) {
525 - dev_dbg(&port->dev, "roomquery says %d\n", *room);
526 - priv->tx_room = *room;
527 - }
528 - kfree(room);
529 - if (rc < 0) {
530 - dev_dbg(&port->dev, "roomquery failed\n");
531 - goto exit;
532 - }
533 - if (rc == 0) {
534 - dev_dbg(&port->dev, "roomquery returned 0 bytes\n");
535 - rc = -EIO; /* device didn't return any data */
536 - goto exit;
537 - }
538 - }
539 - if (count > priv->tx_room) {
540 - /* we're about to completely fill the Tx buffer, so
541 - we'll be throttled afterwards. */
542 - count = priv->tx_room;
543 - request_unthrottle = 1;
489 + rc = usb_submit_urb(urb, GFP_ATOMIC);
490 + if (rc) {
491 + dev_dbg(&port->dev, "usb_submit_urb(write bulk) failed\n");
492 +
493 + spin_lock_irqsave(&port->lock, flags);
494 + port->tx_bytes -= count;
495 + priv->tx_room = max(priv->tx_room, room + count);
496 + __set_bit(0, &port->write_urbs_free);
497 + spin_unlock_irqrestore(&port->lock, flags);
498 +
499 + return rc;
544 500 }
545 501
546 - if (count) {
547 - /* now transfer data */
548 - memcpy(port->write_urb->transfer_buffer, buf, count);
549 - /* send the data out the bulk port */
550 - port->write_urb->transfer_buffer_length = count;
551 -
552 - priv->tx_room -= count;
553 -
554 - rc = usb_submit_urb(port->write_urb, GFP_ATOMIC);
555 - if (rc) {
556 - dev_dbg(&port->dev, "usb_submit_urb(write bulk) failed\n");
557 - goto exit;
558 - }
559 - } else {
560 - /* There wasn't any room left, so we are throttled until
561 - the buffer empties a bit */
562 - request_unthrottle = 1;
563 - }
564 -
565 - if (request_unthrottle) {
566 - priv->tx_throttled = 1; /* block writers */
502 + if (count == room)
567 503 schedule_work(&priv->unthrottle_work);
568 - }
569 504
570 - rc = count;
571 - exit:
572 - if (rc < 0)
573 - set_bit(0, &port->write_urbs_free);
574 - return rc;
505 + return count;
575 506 }
576 -
577 507
578 508 static void keyspan_pda_write_bulk_callback(struct urb *urb)
579 509 {
580 510 struct usb_serial_port *port = urb->context;
581 - struct keyspan_pda_private *priv;
582 -
583 - set_bit(0, &port->write_urbs_free);
584 - priv = usb_get_serial_port_data(port);
585 -
586 - /* queue up a wakeup at scheduler time */
587 - schedule_work(&priv->wakeup_work);
588 - }
589 -
590 -
591 - static int keyspan_pda_write_room(struct tty_struct *tty)
592 - {
593 - struct usb_serial_port *port = tty->driver_data;
594 - struct keyspan_pda_private *priv;
595 - priv = usb_get_serial_port_data(port);
596 - /* used by n_tty.c for processing of tabs and
such. Giving it our 597 - conservative guess is probably good enough, but needs testing by 598 - running a console through the device. */ 599 - return priv->tx_room; 600 - } 601 - 602 - 603 - static int keyspan_pda_chars_in_buffer(struct tty_struct *tty) 604 - { 605 - struct usb_serial_port *port = tty->driver_data; 606 - struct keyspan_pda_private *priv; 607 511 unsigned long flags; 608 - int ret = 0; 609 - 610 - priv = usb_get_serial_port_data(port); 611 - 612 - /* when throttled, return at least WAKEUP_CHARS to tell select() (via 613 - n_tty.c:normal_poll() ) that we're not writeable. */ 614 512 615 513 spin_lock_irqsave(&port->lock, flags); 616 - if (!test_bit(0, &port->write_urbs_free) || priv->tx_throttled) 617 - ret = 256; 514 + port->tx_bytes -= urb->transfer_buffer_length; 515 + __set_bit(0, &port->write_urbs_free); 618 516 spin_unlock_irqrestore(&port->lock, flags); 619 - return ret; 517 + 518 + keyspan_pda_write_start(port); 519 + 520 + usb_serial_port_softint(port); 620 521 } 621 522 523 + static int keyspan_pda_write(struct tty_struct *tty, struct usb_serial_port *port, 524 + const unsigned char *buf, int count) 525 + { 526 + int rc; 527 + 528 + dev_dbg(&port->dev, "%s - count = %d\n", __func__, count); 529 + 530 + if (!count) 531 + return 0; 532 + 533 + count = kfifo_in_locked(&port->write_fifo, buf, count, &port->lock); 534 + 535 + rc = keyspan_pda_write_start(port); 536 + if (rc) 537 + return rc; 538 + 539 + return count; 540 + } 622 541 623 542 static void keyspan_pda_dtr_rts(struct usb_serial_port *port, int on) 624 543 { 625 544 struct usb_serial *serial = port->serial; 626 545 627 546 if (on) 628 - keyspan_pda_set_modem_info(serial, (1 << 7) | (1 << 2)); 547 + keyspan_pda_set_modem_info(serial, BIT(7) | BIT(2)); 629 548 else 630 549 keyspan_pda_set_modem_info(serial, 0); 631 550 } ··· 575 612 static int keyspan_pda_open(struct tty_struct *tty, 576 613 struct usb_serial_port *port) 577 614 { 578 - struct usb_serial *serial = port->serial; 579 - 
u8 *room; 580 - int rc = 0; 581 - struct keyspan_pda_private *priv; 615 + struct keyspan_pda_private *priv = usb_get_serial_port_data(port); 616 + int rc; 582 617 583 618 /* find out how much room is in the Tx ring */ 584 - room = kmalloc(1, GFP_KERNEL); 585 - if (!room) 586 - return -ENOMEM; 619 + rc = keyspan_pda_get_write_room(priv); 620 + if (rc < 0) 621 + return rc; 587 622 588 - rc = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0), 589 - 6, /* write_room */ 590 - USB_TYPE_VENDOR | USB_RECIP_INTERFACE 591 - | USB_DIR_IN, 592 - 0, /* value */ 593 - 0, /* index */ 594 - room, 595 - 1, 596 - 2000); 597 - if (rc < 0) { 598 - dev_dbg(&port->dev, "%s - roomquery failed\n", __func__); 599 - goto error; 600 - } 601 - if (rc == 0) { 602 - dev_dbg(&port->dev, "%s - roomquery returned 0 bytes\n", __func__); 603 - rc = -EIO; 604 - goto error; 605 - } 606 - priv = usb_get_serial_port_data(port); 607 - priv->tx_room = *room; 608 - priv->tx_throttled = *room ? 0 : 1; 623 + spin_lock_irq(&port->lock); 624 + priv->tx_room = rc; 625 + spin_unlock_irq(&port->lock); 609 626 610 - /*Start reading from the device*/ 611 627 rc = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL); 612 628 if (rc) { 613 629 dev_dbg(&port->dev, "%s - usb_submit_urb(read int) failed\n", __func__); 614 - goto error; 630 + return rc; 615 631 } 616 - error: 617 - kfree(room); 618 - return rc; 619 - } 620 - static void keyspan_pda_close(struct usb_serial_port *port) 621 - { 622 - usb_kill_urb(port->write_urb); 623 - usb_kill_urb(port->interrupt_in_urb); 632 + 633 + return 0; 624 634 } 625 635 636 + static void keyspan_pda_close(struct usb_serial_port *port) 637 + { 638 + struct keyspan_pda_private *priv = usb_get_serial_port_data(port); 639 + 640 + /* 641 + * Stop the interrupt URB first as its completion handler may submit 642 + * the write URB. 
643 + */ 644 + usb_kill_urb(port->interrupt_in_urb); 645 + usb_kill_urb(port->write_urb); 646 + 647 + cancel_work_sync(&priv->unthrottle_work); 648 + 649 + spin_lock_irq(&port->lock); 650 + kfifo_reset(&port->write_fifo); 651 + spin_unlock_irq(&port->lock); 652 + } 626 653 627 654 /* download the firmware to a "fake" device (pre-renumeration) */ 628 655 static int keyspan_pda_fake_startup(struct usb_serial *serial) 629 656 { 657 + unsigned int vid = le16_to_cpu(serial->dev->descriptor.idVendor); 630 658 const char *fw_name; 631 659 632 660 /* download the firmware here ... */ 633 661 ezusb_fx1_set_reset(serial->dev, 1); 634 662 635 - if (0) { ; } 636 - #ifdef KEYSPAN 637 - else if (le16_to_cpu(serial->dev->descriptor.idVendor) == KEYSPAN_VENDOR_ID) 663 + switch (vid) { 664 + case KEYSPAN_VENDOR_ID: 638 665 fw_name = "keyspan_pda/keyspan_pda.fw"; 639 - #endif 640 - #ifdef XIRCOM 641 - else if ((le16_to_cpu(serial->dev->descriptor.idVendor) == XIRCOM_VENDOR_ID) || 642 - (le16_to_cpu(serial->dev->descriptor.idVendor) == ENTREGA_VENDOR_ID)) 666 + break; 667 + case XIRCOM_VENDOR_ID: 668 + case ENTREGA_VENDOR_ID: 643 669 fw_name = "keyspan_pda/xircom_pgs.fw"; 644 - #endif 645 - else { 670 + break; 671 + default: 646 672 dev_err(&serial->dev->dev, "%s: unknown vendor, aborting.\n", 647 673 __func__); 648 674 return -ENODEV; ··· 643 691 return -ENOENT; 644 692 } 645 693 646 - /* after downloading firmware Renumeration will occur in a 647 - moment and the new device will bind to the real driver */ 694 + /* 695 + * After downloading firmware renumeration will occur in a moment and 696 + * the new device will bind to the real driver. 697 + */ 648 698 649 - /* we want this device to fail to have a driver assigned to it. */ 699 + /* We want this device to fail to have a driver assigned to it. 
*/ 650 700 return 1; 651 701 } 652 702 653 - #ifdef KEYSPAN 654 703 MODULE_FIRMWARE("keyspan_pda/keyspan_pda.fw"); 655 - #endif 656 - #ifdef XIRCOM 657 704 MODULE_FIRMWARE("keyspan_pda/xircom_pgs.fw"); 658 - #endif 659 705 660 706 static int keyspan_pda_port_probe(struct usb_serial_port *port) 661 707 { ··· 664 714 if (!priv) 665 715 return -ENOMEM; 666 716 667 - INIT_WORK(&priv->wakeup_work, keyspan_pda_wakeup_write); 668 717 INIT_WORK(&priv->unthrottle_work, keyspan_pda_request_unthrottle); 669 - priv->serial = port->serial; 670 718 priv->port = port; 671 719 672 720 usb_set_serial_port_data(port, priv); ··· 682 734 return 0; 683 735 } 684 736 685 - #ifdef KEYSPAN 686 737 static struct usb_serial_driver keyspan_pda_fake_device = { 687 738 .driver = { 688 739 .owner = THIS_MODULE, ··· 692 745 .num_ports = 1, 693 746 .attach = keyspan_pda_fake_startup, 694 747 }; 695 - #endif 696 - 697 - #ifdef XIRCOM 698 - static struct usb_serial_driver xircom_pgs_fake_device = { 699 - .driver = { 700 - .owner = THIS_MODULE, 701 - .name = "xircom_no_firm", 702 - }, 703 - .description = "Xircom / Entrega PGS - (prerenumeration)", 704 - .id_table = id_table_fake_xircom, 705 - .num_ports = 1, 706 - .attach = keyspan_pda_fake_startup, 707 - }; 708 - #endif 709 748 710 749 static struct usb_serial_driver keyspan_pda_device = { 711 750 .driver = { ··· 707 774 .open = keyspan_pda_open, 708 775 .close = keyspan_pda_close, 709 776 .write = keyspan_pda_write, 710 - .write_room = keyspan_pda_write_room, 711 - .write_bulk_callback = keyspan_pda_write_bulk_callback, 777 + .write_bulk_callback = keyspan_pda_write_bulk_callback, 712 778 .read_int_callback = keyspan_pda_rx_interrupt, 713 - .chars_in_buffer = keyspan_pda_chars_in_buffer, 714 779 .throttle = keyspan_pda_rx_throttle, 715 780 .unthrottle = keyspan_pda_rx_unthrottle, 716 781 .set_termios = keyspan_pda_set_termios, ··· 721 790 722 791 static struct usb_serial_driver * const serial_drivers[] = { 723 792 &keyspan_pda_device, 724 - 
#ifdef KEYSPAN 725 793 &keyspan_pda_fake_device, 726 - #endif 727 - #ifdef XIRCOM 728 - &xircom_pgs_fake_device, 729 - #endif 730 794 NULL 731 795 }; 732 796
+37 -199
drivers/usb/serial/mos7720.c
··· 79 79 #define DCR_INIT_VAL 0x0c /* SLCTIN, nINIT */ 80 80 #define ECR_INIT_VAL 0x00 /* SPP mode */ 81 81 82 - struct urbtracker { 83 - struct mos7715_parport *mos_parport; 84 - struct list_head urblist_entry; 85 - struct kref ref_count; 86 - struct urb *urb; 87 - struct usb_ctrlrequest *setup; 88 - }; 89 - 90 82 enum mos7715_pp_modes { 91 83 SPP = 0<<5, 92 84 PS2 = 1<<5, /* moschip calls this 'NIBBLE' mode */ ··· 88 96 struct mos7715_parport { 89 97 struct parport *pp; /* back to containing struct */ 90 98 struct kref ref_count; /* to instance of this struct */ 91 - struct list_head deferred_urbs; /* list deferred async urbs */ 92 - struct list_head active_urbs; /* list async urbs in flight */ 93 - spinlock_t listlock; /* protects list access */ 94 99 bool msg_pending; /* usb sync call pending */ 95 100 struct completion syncmsg_compl; /* usb sync call completed */ 96 - struct tasklet_struct urb_tasklet; /* for sending deferred urbs */ 101 + struct work_struct work; /* restore deferred writes */ 97 102 struct usb_serial *serial; /* back to containing struct */ 98 103 __u8 shadowECR; /* parallel port regs... */ 99 104 __u8 shadowDCR; ··· 254 265 kfree(mos_parport); 255 266 } 256 267 257 - static void destroy_urbtracker(struct kref *kref) 258 - { 259 - struct urbtracker *urbtrack = 260 - container_of(kref, struct urbtracker, ref_count); 261 - struct mos7715_parport *mos_parport = urbtrack->mos_parport; 262 - 263 - usb_free_urb(urbtrack->urb); 264 - kfree(urbtrack->setup); 265 - kfree(urbtrack); 266 - kref_put(&mos_parport->ref_count, destroy_mos_parport); 267 - } 268 - 269 268 /* 270 - * This runs as a tasklet when sending an urb in a non-blocking parallel 271 - * port callback had to be deferred because the disconnect mutex could not be 272 - * obtained at the time. 
273 - */ 274 - static void send_deferred_urbs(struct tasklet_struct *t) 275 - { 276 - int ret_val; 277 - unsigned long flags; 278 - struct mos7715_parport *mos_parport = from_tasklet(mos_parport, t, 279 - urb_tasklet); 280 - struct urbtracker *urbtrack, *tmp; 281 - struct list_head *cursor, *next; 282 - struct device *dev; 283 - 284 - /* if release function ran, game over */ 285 - if (unlikely(mos_parport->serial == NULL)) 286 - return; 287 - 288 - dev = &mos_parport->serial->dev->dev; 289 - 290 - /* try again to get the mutex */ 291 - if (!mutex_trylock(&mos_parport->serial->disc_mutex)) { 292 - dev_dbg(dev, "%s: rescheduling tasklet\n", __func__); 293 - tasklet_schedule(&mos_parport->urb_tasklet); 294 - return; 295 - } 296 - 297 - /* if device disconnected, game over */ 298 - if (unlikely(mos_parport->serial->disconnected)) { 299 - mutex_unlock(&mos_parport->serial->disc_mutex); 300 - return; 301 - } 302 - 303 - spin_lock_irqsave(&mos_parport->listlock, flags); 304 - if (list_empty(&mos_parport->deferred_urbs)) { 305 - spin_unlock_irqrestore(&mos_parport->listlock, flags); 306 - mutex_unlock(&mos_parport->serial->disc_mutex); 307 - dev_dbg(dev, "%s: deferred_urbs list empty\n", __func__); 308 - return; 309 - } 310 - 311 - /* move contents of deferred_urbs list to active_urbs list and submit */ 312 - list_for_each_safe(cursor, next, &mos_parport->deferred_urbs) 313 - list_move_tail(cursor, &mos_parport->active_urbs); 314 - list_for_each_entry_safe(urbtrack, tmp, &mos_parport->active_urbs, 315 - urblist_entry) { 316 - ret_val = usb_submit_urb(urbtrack->urb, GFP_ATOMIC); 317 - dev_dbg(dev, "%s: urb submitted\n", __func__); 318 - if (ret_val) { 319 - dev_err(dev, "usb_submit_urb() failed: %d\n", ret_val); 320 - list_del(&urbtrack->urblist_entry); 321 - kref_put(&urbtrack->ref_count, destroy_urbtracker); 322 - } 323 - } 324 - spin_unlock_irqrestore(&mos_parport->listlock, flags); 325 - mutex_unlock(&mos_parport->serial->disc_mutex); 326 - } 327 - 328 - /* callback for 
parallel port control urbs submitted asynchronously */ 329 - static void async_complete(struct urb *urb) 330 - { 331 - struct urbtracker *urbtrack = urb->context; 332 - int status = urb->status; 333 - unsigned long flags; 334 - 335 - if (unlikely(status)) 336 - dev_dbg(&urb->dev->dev, "%s - nonzero urb status received: %d\n", __func__, status); 337 - 338 - /* remove the urbtracker from the active_urbs list */ 339 - spin_lock_irqsave(&urbtrack->mos_parport->listlock, flags); 340 - list_del(&urbtrack->urblist_entry); 341 - spin_unlock_irqrestore(&urbtrack->mos_parport->listlock, flags); 342 - kref_put(&urbtrack->ref_count, destroy_urbtracker); 343 - } 344 - 345 - static int write_parport_reg_nonblock(struct mos7715_parport *mos_parport, 346 - enum mos_regs reg, __u8 data) 347 - { 348 - struct urbtracker *urbtrack; 349 - int ret_val; 350 - unsigned long flags; 351 - struct usb_serial *serial = mos_parport->serial; 352 - struct usb_device *usbdev = serial->dev; 353 - 354 - /* create and initialize the control urb and containing urbtracker */ 355 - urbtrack = kmalloc(sizeof(struct urbtracker), GFP_ATOMIC); 356 - if (!urbtrack) 357 - return -ENOMEM; 358 - 359 - urbtrack->urb = usb_alloc_urb(0, GFP_ATOMIC); 360 - if (!urbtrack->urb) { 361 - kfree(urbtrack); 362 - return -ENOMEM; 363 - } 364 - urbtrack->setup = kmalloc(sizeof(*urbtrack->setup), GFP_ATOMIC); 365 - if (!urbtrack->setup) { 366 - usb_free_urb(urbtrack->urb); 367 - kfree(urbtrack); 368 - return -ENOMEM; 369 - } 370 - urbtrack->setup->bRequestType = (__u8)0x40; 371 - urbtrack->setup->bRequest = (__u8)0x0e; 372 - urbtrack->setup->wValue = cpu_to_le16(get_reg_value(reg, dummy)); 373 - urbtrack->setup->wIndex = cpu_to_le16(get_reg_index(reg)); 374 - urbtrack->setup->wLength = 0; 375 - usb_fill_control_urb(urbtrack->urb, usbdev, 376 - usb_sndctrlpipe(usbdev, 0), 377 - (unsigned char *)urbtrack->setup, 378 - NULL, 0, async_complete, urbtrack); 379 - kref_get(&mos_parport->ref_count); 380 - urbtrack->mos_parport = 
mos_parport; 381 - kref_init(&urbtrack->ref_count); 382 - INIT_LIST_HEAD(&urbtrack->urblist_entry); 383 - 384 - /* 385 - * get the disconnect mutex, or add tracker to the deferred_urbs list 386 - * and schedule a tasklet to try again later 387 - */ 388 - if (!mutex_trylock(&serial->disc_mutex)) { 389 - spin_lock_irqsave(&mos_parport->listlock, flags); 390 - list_add_tail(&urbtrack->urblist_entry, 391 - &mos_parport->deferred_urbs); 392 - spin_unlock_irqrestore(&mos_parport->listlock, flags); 393 - tasklet_schedule(&mos_parport->urb_tasklet); 394 - dev_dbg(&usbdev->dev, "tasklet scheduled\n"); 395 - return 0; 396 - } 397 - 398 - /* bail if device disconnected */ 399 - if (serial->disconnected) { 400 - kref_put(&urbtrack->ref_count, destroy_urbtracker); 401 - mutex_unlock(&serial->disc_mutex); 402 - return -ENODEV; 403 - } 404 - 405 - /* add the tracker to the active_urbs list and submit */ 406 - spin_lock_irqsave(&mos_parport->listlock, flags); 407 - list_add_tail(&urbtrack->urblist_entry, &mos_parport->active_urbs); 408 - spin_unlock_irqrestore(&mos_parport->listlock, flags); 409 - ret_val = usb_submit_urb(urbtrack->urb, GFP_ATOMIC); 410 - mutex_unlock(&serial->disc_mutex); 411 - if (ret_val) { 412 - dev_err(&usbdev->dev, 413 - "%s: submit_urb() failed: %d\n", __func__, ret_val); 414 - spin_lock_irqsave(&mos_parport->listlock, flags); 415 - list_del(&urbtrack->urblist_entry); 416 - spin_unlock_irqrestore(&mos_parport->listlock, flags); 417 - kref_put(&urbtrack->ref_count, destroy_urbtracker); 418 - return ret_val; 419 - } 420 - return 0; 421 - } 422 - 423 - /* 424 - * This is the the common top part of all parallel port callback operations that 269 + * This is the common top part of all parallel port callback operations that 425 270 * send synchronous messages to the device. 
This implements convoluted locking 426 271 * that avoids two scenarios: (1) a port operation is called after usbserial 427 272 * has called our release function, at which point struct mos7715_parport has ··· 281 458 reinit_completion(&mos_parport->syncmsg_compl); 282 459 spin_unlock(&release_lock); 283 460 461 + /* ensure writes from restore are submitted before new requests */ 462 + if (work_pending(&mos_parport->work)) 463 + flush_work(&mos_parport->work); 464 + 284 465 mutex_lock(&mos_parport->serial->disc_mutex); 285 466 if (mos_parport->serial->disconnected) { 286 467 /* device disconnected */ ··· 307 480 mutex_unlock(&mos_parport->serial->disc_mutex); 308 481 mos_parport->msg_pending = false; 309 482 complete(&mos_parport->syncmsg_compl); 483 + } 484 + 485 + static void deferred_restore_writes(struct work_struct *work) 486 + { 487 + struct mos7715_parport *mos_parport; 488 + 489 + mos_parport = container_of(work, struct mos7715_parport, work); 490 + 491 + mutex_lock(&mos_parport->serial->disc_mutex); 492 + 493 + /* if device disconnected, game over */ 494 + if (mos_parport->serial->disconnected) 495 + goto done; 496 + 497 + write_mos_reg(mos_parport->serial, dummy, MOS7720_DCR, 498 + mos_parport->shadowDCR); 499 + write_mos_reg(mos_parport->serial, dummy, MOS7720_ECR, 500 + mos_parport->shadowECR); 501 + done: 502 + mutex_unlock(&mos_parport->serial->disc_mutex); 310 503 } 311 504 312 505 static void parport_mos7715_write_data(struct parport *pp, unsigned char d) ··· 486 639 spin_unlock(&release_lock); 487 640 return; 488 641 } 489 - write_parport_reg_nonblock(mos_parport, MOS7720_DCR, 490 - mos_parport->shadowDCR); 491 - write_parport_reg_nonblock(mos_parport, MOS7720_ECR, 492 - mos_parport->shadowECR); 642 + mos_parport->shadowDCR = s->u.pc.ctr; 643 + mos_parport->shadowECR = s->u.pc.ecr; 644 + 645 + schedule_work(&mos_parport->work); 493 646 spin_unlock(&release_lock); 494 647 } 495 648 ··· 559 712 560 713 mos_parport->msg_pending = false; 561 714 
kref_init(&mos_parport->ref_count); 562 - spin_lock_init(&mos_parport->listlock); 563 - INIT_LIST_HEAD(&mos_parport->active_urbs); 564 - INIT_LIST_HEAD(&mos_parport->deferred_urbs); 565 715 usb_set_serial_data(serial, mos_parport); /* hijack private pointer */ 566 716 mos_parport->serial = serial; 567 - tasklet_setup(&mos_parport->urb_tasklet, send_deferred_urbs); 717 + INIT_WORK(&mos_parport->work, deferred_restore_writes); 568 718 init_completion(&mos_parport->syncmsg_compl); 569 719 570 720 /* cycle parallel port reset bit */ ··· 1711 1867 1712 1868 if (le16_to_cpu(serial->dev->descriptor.idProduct) 1713 1869 == MOSCHIP_DEVICE_ID_7715) { 1714 - struct urbtracker *urbtrack; 1715 - unsigned long flags; 1716 1870 struct mos7715_parport *mos_parport = 1717 1871 usb_get_serial_data(serial); 1718 1872 ··· 1723 1881 if (mos_parport->msg_pending) 1724 1882 wait_for_completion_timeout(&mos_parport->syncmsg_compl, 1725 1883 msecs_to_jiffies(MOS_WDR_TIMEOUT)); 1884 + /* 1885 + * If delayed work is currently scheduled, wait for it to 1886 + * complete. This also implies barriers that ensure the 1887 + * below serial clearing is not hoisted above the ->work. 1888 + */ 1889 + cancel_work_sync(&mos_parport->work); 1726 1890 1727 1891 parport_remove_port(mos_parport->pp); 1728 1892 usb_set_serial_data(serial, NULL); 1729 1893 mos_parport->serial = NULL; 1730 1894 1731 - /* if tasklet currently scheduled, wait for it to complete */ 1732 - tasklet_kill(&mos_parport->urb_tasklet); 1733 - 1734 - /* unlink any urbs sent by the tasklet */ 1735 - spin_lock_irqsave(&mos_parport->listlock, flags); 1736 - list_for_each_entry(urbtrack, 1737 - &mos_parport->active_urbs, 1738 - urblist_entry) 1739 - usb_unlink_urb(urbtrack->urb); 1740 - spin_unlock_irqrestore(&mos_parport->listlock, flags); 1741 1895 parport_del_port(mos_parport->pp); 1742 1896 1743 1897 kref_put(&mos_parport->ref_count, destroy_mos_parport);
+21 -2
drivers/usb/serial/option.c
··· 563 563 564 564 /* Device flags */ 565 565 566 + /* Highest interface number which can be used with NCTRL() and RSVD() */ 567 + #define FLAG_IFNUM_MAX 7 568 + 566 569 /* Interface does not support modem-control requests */ 567 570 #define NCTRL(ifnum) ((BIT(ifnum) & 0xff) << 8) 568 571 ··· 2104 2101 2105 2102 module_usb_serial_driver(serial_drivers, option_ids); 2106 2103 2104 + static bool iface_is_reserved(unsigned long device_flags, u8 ifnum) 2105 + { 2106 + if (ifnum > FLAG_IFNUM_MAX) 2107 + return false; 2108 + 2109 + return device_flags & RSVD(ifnum); 2110 + } 2111 + 2107 2112 static int option_probe(struct usb_serial *serial, 2108 2113 const struct usb_device_id *id) 2109 2114 { ··· 2128 2117 * the same class/subclass/protocol as the serial interfaces. Look at 2129 2118 * the Windows driver .INF files for reserved interface numbers. 2130 2119 */ 2131 - if (device_flags & RSVD(iface_desc->bInterfaceNumber)) 2120 + if (iface_is_reserved(device_flags, iface_desc->bInterfaceNumber)) 2132 2121 return -ENODEV; 2133 2122 2134 2123 /* ··· 2142 2131 usb_set_serial_data(serial, (void *)device_flags); 2143 2132 2144 2133 return 0; 2134 + } 2135 + 2136 + static bool iface_no_modem_control(unsigned long device_flags, u8 ifnum) 2137 + { 2138 + if (ifnum > FLAG_IFNUM_MAX) 2139 + return false; 2140 + 2141 + return device_flags & NCTRL(ifnum); 2145 2142 } 2146 2143 2147 2144 static int option_attach(struct usb_serial *serial) ··· 2167 2148 2168 2149 iface_desc = &serial->interface->cur_altsetting->desc; 2169 2150 2170 - if (!(device_flags & NCTRL(iface_desc->bInterfaceNumber))) 2151 + if (!iface_no_modem_control(device_flags, iface_desc->bInterfaceNumber)) 2171 2152 data->use_send_setup = 1; 2172 2153 2173 2154 if (device_flags & ZLP)
+1
drivers/usb/storage/ene_ub6250.c
··· 861 861 case MS_LB_NOT_USED: 862 862 case MS_LB_NOT_USED_ERASED: 863 863 Count++; 864 + break; 864 865 default: 865 866 break; 866 867 }
-1
drivers/usb/storage/freecom.c
··· 431 431 us->srb->sc_data_direction); 432 432 /* Return fail, SCSI seems to handle this better. */ 433 433 return USB_STOR_TRANSPORT_FAILED; 434 - break; 435 434 } 436 435 437 436 return USB_STOR_TRANSPORT_GOOD;
+7 -2
drivers/usb/storage/transport.c
··· 416 416 417 417 /* don't submit s-g requests during abort processing */ 418 418 if (test_bit(US_FLIDX_ABORTING, &us->dflags)) 419 - return USB_STOR_XFER_ERROR; 419 + goto usb_stor_xfer_error; 420 420 421 421 /* initialize the scatter-gather request block */ 422 422 usb_stor_dbg(us, "xfer %u bytes, %d entries\n", length, num_sg); ··· 424 424 sg, num_sg, length, GFP_NOIO); 425 425 if (result) { 426 426 usb_stor_dbg(us, "usb_sg_init returned %d\n", result); 427 - return USB_STOR_XFER_ERROR; 427 + goto usb_stor_xfer_error; 428 428 } 429 429 430 430 /* ··· 452 452 *act_len = us->current_sg.bytes; 453 453 return interpret_urb_result(us, pipe, length, result, 454 454 us->current_sg.bytes); 455 + 456 + usb_stor_xfer_error: 457 + if (act_len) 458 + *act_len = 0; 459 + return USB_STOR_XFER_ERROR; 455 460 } 456 461 457 462 /*
+4
drivers/usb/storage/uas.c
··· 690 690 fallthrough; 691 691 case DMA_TO_DEVICE: 692 692 cmdinfo->state |= ALLOC_DATA_OUT_URB | SUBMIT_DATA_OUT_URB; 693 + break; 693 694 case DMA_NONE: 694 695 break; 695 696 } ··· 868 867 if (devinfo->flags & US_FL_NO_READ_CAPACITY_16) 869 868 sdev->no_read_capacity_16 = 1; 870 869 870 + /* Some disks cannot handle WRITE_SAME */ 871 + if (devinfo->flags & US_FL_NO_SAME) 872 + sdev->no_write_same = 1; 871 873 /* 872 874 * Some disks return the total number of blocks in response 873 875 * to READ CAPACITY rather than the highest block number.
+5 -2
drivers/usb/storage/unusual_uas.h
··· 35 35 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 36 36 US_FL_NO_REPORT_OPCODES), 37 37 38 - /* Reported-by: Julian Groß <julian.g@posteo.de> */ 38 + /* 39 + * Initially Reported-by: Julian Groß <julian.g@posteo.de> 40 + * Further reports David C. Partridge <david.partridge@perdrix.co.uk> 41 + */ 39 42 UNUSUAL_DEV(0x059f, 0x105f, 0x0000, 0x9999, 40 43 "LaCie", 41 44 "2Big Quadra USB3", 42 45 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 43 - US_FL_NO_REPORT_OPCODES), 46 + US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME), 44 47 45 48 /* 46 49 * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
+3
drivers/usb/storage/usb.c
··· 541 541 case 'j': 542 542 f |= US_FL_NO_REPORT_LUNS; 543 543 break; 544 + case 'k': 545 + f |= US_FL_NO_SAME; 546 + break; 544 547 case 'l': 545 548 f |= US_FL_NOT_LOCKABLE; 546 549 break;
+3 -2
drivers/usb/typec/Kconfig
··· 64 64 config TYPEC_TPS6598X 65 65 tristate "TI TPS6598x USB Power Delivery controller driver" 66 66 depends on I2C 67 - depends on REGMAP_I2C 68 - depends on USB_ROLE_SWITCH || !USB_ROLE_SWITCH 67 + select POWER_SUPPLY 68 + select REGMAP_I2C 69 + select USB_ROLE_SWITCH 69 70 help 70 71 Say Y or M here if your system has TI TPS65982 or TPS65983 USB Power 71 72 Delivery controller.
+281 -17
drivers/usb/typec/class.c
··· 11 11 #include <linux/mutex.h> 12 12 #include <linux/property.h> 13 13 #include <linux/slab.h> 14 + #include <linux/usb/pd_vdo.h> 14 15 15 16 #include "bus.h" 16 17 ··· 19 18 struct device dev; 20 19 enum typec_plug_index index; 21 20 struct ida mode_ids; 21 + int num_altmodes; 22 22 }; 23 23 24 24 struct typec_cable { ··· 35 33 struct usb_pd_identity *identity; 36 34 enum typec_accessory accessory; 37 35 struct ida mode_ids; 36 + int num_altmodes; 38 37 }; 39 38 40 39 struct typec_port { ··· 84 81 [TYPEC_ACCESSORY_DEBUG] = "debug", 85 82 }; 86 83 84 + /* Product types defined in USB PD Specification R3.0 V2.0 */ 85 + static const char * const product_type_ufp[8] = { 86 + [IDH_PTYPE_UNDEF] = "undefined", 87 + [IDH_PTYPE_HUB] = "hub", 88 + [IDH_PTYPE_PERIPH] = "peripheral", 89 + [IDH_PTYPE_PSD] = "psd", 90 + [IDH_PTYPE_AMA] = "ama", 91 + }; 92 + 93 + static const char * const product_type_dfp[8] = { 94 + [IDH_PTYPE_DFP_UNDEF] = "undefined", 95 + [IDH_PTYPE_DFP_HUB] = "hub", 96 + [IDH_PTYPE_DFP_HOST] = "host", 97 + [IDH_PTYPE_DFP_PB] = "power_brick", 98 + [IDH_PTYPE_DFP_AMC] = "amc", 99 + }; 100 + 101 + static const char * const product_type_cable[8] = { 102 + [IDH_PTYPE_UNDEF] = "undefined", 103 + [IDH_PTYPE_PCABLE] = "passive", 104 + [IDH_PTYPE_ACABLE] = "active", 105 + }; 106 + 87 107 static struct usb_pd_identity *get_pd_identity(struct device *dev) 88 108 { 89 109 if (is_typec_partner(dev)) { ··· 119 93 return cable->identity; 120 94 } 121 95 return NULL; 96 + } 97 + 98 + static const char *get_pd_product_type(struct device *dev) 99 + { 100 + struct typec_port *port = to_typec_port(dev->parent); 101 + struct usb_pd_identity *id = get_pd_identity(dev); 102 + const char *ptype = NULL; 103 + 104 + if (is_typec_partner(dev)) { 105 + if (!id) 106 + return NULL; 107 + 108 + if (port->data_role == TYPEC_HOST) 109 + ptype = product_type_ufp[PD_IDH_PTYPE(id->id_header)]; 110 + else 111 + ptype = product_type_dfp[PD_IDH_DFP_PTYPE(id->id_header)]; 112 + } else if 
(is_typec_cable(dev)) { 113 + if (id) 114 + ptype = product_type_cable[PD_IDH_PTYPE(id->id_header)]; 115 + else 116 + ptype = to_typec_cable(dev)->active ? 117 + product_type_cable[IDH_PTYPE_ACABLE] : 118 + product_type_cable[IDH_PTYPE_PCABLE]; 119 + } 120 + 121 + return ptype; 122 122 } 123 123 124 124 static ssize_t id_header_show(struct device *dev, struct device_attribute *attr, ··· 174 122 } 175 123 static DEVICE_ATTR_RO(product); 176 124 125 + static ssize_t product_type_vdo1_show(struct device *dev, struct device_attribute *attr, 126 + char *buf) 127 + { 128 + struct usb_pd_identity *id = get_pd_identity(dev); 129 + 130 + return sysfs_emit(buf, "0x%08x\n", id->vdo[0]); 131 + } 132 + static DEVICE_ATTR_RO(product_type_vdo1); 133 + 134 + static ssize_t product_type_vdo2_show(struct device *dev, struct device_attribute *attr, 135 + char *buf) 136 + { 137 + struct usb_pd_identity *id = get_pd_identity(dev); 138 + 139 + return sysfs_emit(buf, "0x%08x\n", id->vdo[1]); 140 + } 141 + static DEVICE_ATTR_RO(product_type_vdo2); 142 + 143 + static ssize_t product_type_vdo3_show(struct device *dev, struct device_attribute *attr, 144 + char *buf) 145 + { 146 + struct usb_pd_identity *id = get_pd_identity(dev); 147 + 148 + return sysfs_emit(buf, "0x%08x\n", id->vdo[2]); 149 + } 150 + static DEVICE_ATTR_RO(product_type_vdo3); 151 + 177 152 static struct attribute *usb_pd_id_attrs[] = { 178 153 &dev_attr_id_header.attr, 179 154 &dev_attr_cert_stat.attr, 180 155 &dev_attr_product.attr, 156 + &dev_attr_product_type_vdo1.attr, 157 + &dev_attr_product_type_vdo2.attr, 158 + &dev_attr_product_type_vdo3.attr, 181 159 NULL 182 160 }; 183 161 ··· 221 139 NULL, 222 140 }; 223 141 142 + static void typec_product_type_notify(struct device *dev) 143 + { 144 + char *envp[2] = { }; 145 + const char *ptype; 146 + 147 + ptype = get_pd_product_type(dev); 148 + if (!ptype) 149 + return; 150 + 151 + sysfs_notify(&dev->kobj, NULL, "type"); 152 + 153 + envp[0] = kasprintf(GFP_KERNEL, 
"PRODUCT_TYPE=%s", ptype); 154 + if (!envp[0]) 155 + return; 156 + 157 + kobject_uevent_env(&dev->kobj, KOBJ_CHANGE, envp); 158 + kfree(envp[0]); 159 + } 160 + 224 161 static void typec_report_identity(struct device *dev) 225 162 { 226 163 sysfs_notify(&dev->kobj, "identity", "id_header"); 227 164 sysfs_notify(&dev->kobj, "identity", "cert_stat"); 228 165 sysfs_notify(&dev->kobj, "identity", "product"); 166 + sysfs_notify(&dev->kobj, "identity", "product_type_vdo1"); 167 + sysfs_notify(&dev->kobj, "identity", "product_type_vdo2"); 168 + sysfs_notify(&dev->kobj, "identity", "product_type_vdo3"); 169 + typec_product_type_notify(dev); 229 170 } 171 + 172 + static ssize_t 173 + type_show(struct device *dev, struct device_attribute *attr, char *buf) 174 + { 175 + const char *ptype; 176 + 177 + ptype = get_pd_product_type(dev); 178 + if (!ptype) 179 + return 0; 180 + 181 + return sysfs_emit(buf, "%s\n", ptype); 182 + } 183 + static DEVICE_ATTR_RO(type); 230 184 231 185 /* ------------------------------------------------------------------------- */ 232 186 /* Alternate Modes */ ··· 500 382 return attr->mode; 501 383 } 502 384 503 - static struct attribute_group typec_altmode_group = { 385 + static const struct attribute_group typec_altmode_group = { 504 386 .is_visible = typec_altmode_attr_is_visible, 505 387 .attrs = typec_altmode_attrs, 506 388 }; ··· 600 482 if (is_typec_partner(parent)) 601 483 alt->adev.dev.bus = &typec_bus; 602 484 485 + /* Plug alt modes need a class to generate udev events. 
*/ 486 + if (is_typec_plug(parent)) 487 + alt->adev.dev.class = typec_class; 488 + 603 489 ret = device_register(&alt->adev.dev); 604 490 if (ret) { 605 491 dev_err(parent, "failed to register alternate mode (%d)\n", ··· 654 532 } 655 533 static DEVICE_ATTR_RO(supports_usb_power_delivery); 656 534 535 + static ssize_t number_of_alternate_modes_show(struct device *dev, struct device_attribute *attr, 536 + char *buf) 537 + { 538 + struct typec_partner *partner; 539 + struct typec_plug *plug; 540 + int num_altmodes; 541 + 542 + if (is_typec_partner(dev)) { 543 + partner = to_typec_partner(dev); 544 + num_altmodes = partner->num_altmodes; 545 + } else if (is_typec_plug(dev)) { 546 + plug = to_typec_plug(dev); 547 + num_altmodes = plug->num_altmodes; 548 + } else { 549 + return 0; 550 + } 551 + 552 + return sysfs_emit(buf, "%d\n", num_altmodes); 553 + } 554 + static DEVICE_ATTR_RO(number_of_alternate_modes); 555 + 657 556 static struct attribute *typec_partner_attrs[] = { 658 557 &dev_attr_accessory_mode.attr, 659 558 &dev_attr_supports_usb_power_delivery.attr, 559 + &dev_attr_number_of_alternate_modes.attr, 560 + &dev_attr_type.attr, 660 561 NULL 661 562 }; 662 - ATTRIBUTE_GROUPS(typec_partner); 563 + 564 + static umode_t typec_partner_attr_is_visible(struct kobject *kobj, struct attribute *attr, int n) 565 + { 566 + struct typec_partner *partner = to_typec_partner(kobj_to_dev(kobj)); 567 + 568 + if (attr == &dev_attr_number_of_alternate_modes.attr) { 569 + if (partner->num_altmodes < 0) 570 + return 0; 571 + } 572 + 573 + if (attr == &dev_attr_type.attr) 574 + if (!get_pd_product_type(kobj_to_dev(kobj))) 575 + return 0; 576 + 577 + return attr->mode; 578 + } 579 + 580 + static const struct attribute_group typec_partner_group = { 581 + .is_visible = typec_partner_attr_is_visible, 582 + .attrs = typec_partner_attrs 583 + }; 584 + 585 + static const struct attribute_group *typec_partner_groups[] = { 586 + &typec_partner_group, 587 + NULL 588 + }; 663 589 664 590 static 
void typec_partner_release(struct device *dev) 665 591 { ··· 739 569 return 0; 740 570 } 741 571 EXPORT_SYMBOL_GPL(typec_partner_set_identity); 572 + 573 + /** 574 + * typec_partner_set_num_altmodes - Set the number of available partner altmodes 575 + * @partner: The partner to be updated. 576 + * @num_altmodes: The number of altmodes we want to specify as available. 577 + * 578 + * This routine is used to report the number of alternate modes supported by the 579 + * partner. This value is *not* enforced in alternate mode registration routines. 580 + * 581 + * @partner.num_altmodes is set to -1 on partner registration, denoting that 582 + * a valid value has not been set for it yet. 583 + * 584 + * Returns 0 on success or negative error number on failure. 585 + */ 586 + int typec_partner_set_num_altmodes(struct typec_partner *partner, int num_altmodes) 587 + { 588 + int ret; 589 + 590 + if (num_altmodes < 0) 591 + return -EINVAL; 592 + 593 + partner->num_altmodes = num_altmodes; 594 + ret = sysfs_update_group(&partner->dev.kobj, &typec_partner_group); 595 + if (ret < 0) 596 + return ret; 597 + 598 + sysfs_notify(&partner->dev.kobj, NULL, "number_of_alternate_modes"); 599 + 600 + return 0; 601 + } 602 + EXPORT_SYMBOL_GPL(typec_partner_set_num_altmodes); 742 603 743 604 /** 744 605 * typec_partner_register_altmode - Register USB Type-C Partner Alternate Mode ··· 813 612 ida_init(&partner->mode_ids); 814 613 partner->usb_pd = desc->usb_pd; 815 614 partner->accessory = desc->accessory; 615 + partner->num_altmodes = -1; 816 616 817 617 if (desc->identity) { 818 618 /* ··· 864 662 kfree(plug); 865 663 } 866 664 665 + static struct attribute *typec_plug_attrs[] = { 666 + &dev_attr_number_of_alternate_modes.attr, 667 + NULL 668 + }; 669 + 670 + static umode_t typec_plug_attr_is_visible(struct kobject *kobj, struct attribute *attr, int n) 671 + { 672 + struct typec_plug *plug = to_typec_plug(kobj_to_dev(kobj)); 673 + 674 + if (attr == 
&dev_attr_number_of_alternate_modes.attr) { 675 + if (plug->num_altmodes < 0) 676 + return 0; 677 + } 678 + 679 + return attr->mode; 680 + } 681 + 682 + static const struct attribute_group typec_plug_group = { 683 + .is_visible = typec_plug_attr_is_visible, 684 + .attrs = typec_plug_attrs 685 + }; 686 + 687 + static const struct attribute_group *typec_plug_groups[] = { 688 + &typec_plug_group, 689 + NULL 690 + }; 691 + 867 692 static const struct device_type typec_plug_dev_type = { 868 693 .name = "typec_plug", 694 + .groups = typec_plug_groups, 869 695 .release = typec_plug_release, 870 696 }; 697 + 698 + /** 699 + * typec_plug_set_num_altmodes - Set the number of available plug altmodes 700 + * @plug: The plug to be updated. 701 + * @num_altmodes: The number of altmodes we want to specify as available. 702 + * 703 + * This routine is used to report the number of alternate modes supported by the 704 + * plug. This value is *not* enforced in alternate mode registration routines. 705 + * 706 + * @plug.num_altmodes is set to -1 on plug registration, denoting that 707 + * a valid value has not been set for it yet. 708 + * 709 + * Returns 0 on success or negative error number on failure. 
710 + */ 711 + int typec_plug_set_num_altmodes(struct typec_plug *plug, int num_altmodes) 712 + { 713 + int ret; 714 + 715 + if (num_altmodes < 0) 716 + return -EINVAL; 717 + 718 + plug->num_altmodes = num_altmodes; 719 + ret = sysfs_update_group(&plug->dev.kobj, &typec_plug_group); 720 + if (ret < 0) 721 + return ret; 722 + 723 + sysfs_notify(&plug->dev.kobj, NULL, "number_of_alternate_modes"); 724 + 725 + return 0; 726 + } 727 + EXPORT_SYMBOL_GPL(typec_plug_set_num_altmodes); 871 728 872 729 /** 873 730 * typec_plug_register_altmode - Register USB Type-C Cable Plug Alternate Mode ··· 973 712 sprintf(name, "plug%d", desc->index); 974 713 975 714 ida_init(&plug->mode_ids); 715 + plug->num_altmodes = -1; 976 716 plug->index = desc->index; 977 717 plug->dev.class = typec_class; 978 718 plug->dev.parent = &cable->dev; ··· 1005 743 EXPORT_SYMBOL_GPL(typec_unregister_plug); 1006 744 1007 745 /* Type-C Cables */ 1008 - 1009 - static ssize_t 1010 - type_show(struct device *dev, struct device_attribute *attr, char *buf) 1011 - { 1012 - struct typec_cable *cable = to_typec_cable(dev); 1013 - 1014 - return sprintf(buf, "%s\n", cable->active ? 
"active" : "passive"); 1015 - } 1016 - static DEVICE_ATTR_RO(type); 1017 746 1018 747 static const char * const typec_plug_types[] = { 1019 748 [USB_PLUG_NONE] = "unknown", ··· 1562 1309 return attr->mode; 1563 1310 } 1564 1311 1565 - static struct attribute_group typec_group = { 1312 + static const struct attribute_group typec_group = { 1566 1313 .is_visible = typec_attr_is_visible, 1567 1314 .attrs = typec_attrs, 1568 1315 }; ··· 1605 1352 /* --------------------------------------- */ 1606 1353 /* Driver callbacks to report role updates */ 1607 1354 1355 + static int partner_match(struct device *dev, void *data) 1356 + { 1357 + return is_typec_partner(dev); 1358 + } 1359 + 1608 1360 /** 1609 1361 * typec_set_data_role - Report data role change 1610 1362 * @port: The USB Type-C Port where the role was changed ··· 1619 1361 */ 1620 1362 void typec_set_data_role(struct typec_port *port, enum typec_data_role role) 1621 1363 { 1364 + struct device *partner_dev; 1365 + 1622 1366 if (port->data_role == role) 1623 1367 return; 1624 1368 1625 1369 port->data_role = role; 1626 1370 sysfs_notify(&port->dev.kobj, NULL, "data_role"); 1627 1371 kobject_uevent(&port->dev.kobj, KOBJ_CHANGE); 1372 + 1373 + partner_dev = device_find_child(&port->dev, NULL, partner_match); 1374 + if (!partner_dev) 1375 + return; 1376 + 1377 + if (to_typec_partner(partner_dev)->identity) 1378 + typec_product_type_notify(partner_dev); 1379 + 1380 + put_device(partner_dev); 1628 1381 } 1629 1382 EXPORT_SYMBOL_GPL(typec_set_data_role); 1630 1383 ··· 1675 1406 kobject_uevent(&port->dev.kobj, KOBJ_CHANGE); 1676 1407 } 1677 1408 EXPORT_SYMBOL_GPL(typec_set_vconn_role); 1678 - 1679 - static int partner_match(struct device *dev, void *data) 1680 - { 1681 - return is_typec_partner(dev); 1682 - } 1683 1409 1684 1410 /** 1685 1411 * typec_set_pwr_opmode - Report changed power operation mode
+15 -2
drivers/usb/typec/mux/intel_pmc_mux.c
···
176 176 static int pmc_usb_command(struct pmc_usb_port *port, u8 *msg, u32 len)
177 177 {
178 178 	u8 response[4];
179     + 	u8 status_res;
179 180 	int ret;
180 181 
181 182 	/*
···
190 189 	if (ret)
191 190 		return ret;
192 191 
193     - 	if (response[2] & PMC_USB_RESP_STATUS_FAILURE) {
194     - 		if (response[2] & PMC_USB_RESP_STATUS_FATAL)
192     + 	status_res = (msg[0] & 0xf) < PMC_USB_SAFE_MODE ?
193     + 		     response[2] : response[1];
194     + 
195     + 	if (status_res & PMC_USB_RESP_STATUS_FAILURE) {
196     + 		if (status_res & PMC_USB_RESP_STATUS_FATAL)
195 197 			return -EIO;
198     + 
196 199 		return -EBUSY;
197 200 	}
198 201 
···
261 256 pmc_usb_mux_tbt(struct pmc_usb_port *port, struct typec_mux_state *state)
262 257 {
263 258 	struct typec_thunderbolt_data *data = state->data;
259     + 	u8 cable_rounded = TBT_CABLE_ROUNDED_SUPPORT(data->cable_mode);
264 260 	u8 cable_speed = TBT_CABLE_SPEED(data->cable_mode);
265 261 	struct altmode_req req = { };
266 262 
···
289 283 	req.mode_data |= PMC_USB_ALTMODE_ACTIVE_CABLE;
290 284 
291 285 	req.mode_data |= PMC_USB_ALTMODE_CABLE_SPD(cable_speed);
286     + 
287     + 	req.mode_data |= PMC_USB_ALTMODE_TBT_GEN(cable_rounded);
292 288 
293 289 	return pmc_usb_command(port, (void *)&req, sizeof(req));
294 290 }
···
327 319 		fallthrough;
328 320 	default:
329 321 		req.mode_data |= PMC_USB_ALTMODE_ACTIVE_CABLE;
322     + 
323     + 		/* Configure data rate to rounded in the case of Active TBT3
324     + 		 * and USB4 cables.
325     + 		 */
326     + 		req.mode_data |= PMC_USB_ALTMODE_TBT_GEN(1);
330 327 		break;
331 328 	}
332 329 
+11 -5
drivers/usb/typec/tcpm/fusb302.c
···
343 343 	return ret;
344 344 }
345 345 
346     - static int fusb302_enable_tx_auto_retries(struct fusb302_chip *chip)
346     + static int fusb302_enable_tx_auto_retries(struct fusb302_chip *chip, u8 retry_count)
347 347 {
348 348 	int ret = 0;
349 349 
350     - 	ret = fusb302_i2c_set_bits(chip, FUSB_REG_CONTROL3,
351     - 				   FUSB_REG_CONTROL3_N_RETRIES_3 |
350     + 	ret = fusb302_i2c_set_bits(chip, FUSB_REG_CONTROL3, retry_count |
352 351 				   FUSB_REG_CONTROL3_AUTO_RETRY);
353 352 
354 353 	return ret;
···
398 399 	ret = fusb302_sw_reset(chip);
399 400 	if (ret < 0)
400 401 		return ret;
401     - 	ret = fusb302_enable_tx_auto_retries(chip);
402     + 	ret = fusb302_enable_tx_auto_retries(chip, FUSB_REG_CONTROL3_N_RETRIES_3);
402 403 	if (ret < 0)
403 404 		return ret;
404 405 	ret = fusb302_init_interrupt(chip);
···
1016 1017 };
1017 1018 
1018 1019 static int tcpm_pd_transmit(struct tcpc_dev *dev, enum tcpm_transmit_type type,
1019      - 			    const struct pd_message *msg)
1020      + 			    const struct pd_message *msg, unsigned int negotiated_rev)
1020 1021 {
1021 1022 	struct fusb302_chip *chip = container_of(dev, struct fusb302_chip,
1022 1023 						 tcpc_dev);
···
1025 1026 	mutex_lock(&chip->lock);
1026 1027 	switch (type) {
1027 1028 	case TCPC_TX_SOP:
1029      + 		/* nRetryCount 3 in PD2.0 spec, whereas 2 in PD3.0 spec */
1030      + 		ret = fusb302_enable_tx_auto_retries(chip, negotiated_rev > PD_REV20 ?
1031      + 						     FUSB_REG_CONTROL3_N_RETRIES_2 :
1032      + 						     FUSB_REG_CONTROL3_N_RETRIES_3);
1033      + 		if (ret < 0)
1034      + 			fusb302_log(chip, "Cannot update retry count ret=%d", ret);
1035      + 
1028 1036 		ret = fusb302_pd_send_message(chip, msg);
1029 1037 		if (ret < 0)
1030 1038 			fusb302_log(chip,
+113 -10
drivers/usb/typec/tcpm/tcpci.c
··· 18 18 19 19 #include "tcpci.h" 20 20 21 - #define PD_RETRY_COUNT 3 21 + #define PD_RETRY_COUNT_DEFAULT 3 22 + #define PD_RETRY_COUNT_3_0_OR_HIGHER 2 23 + #define AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV 3500 24 + #define AUTO_DISCHARGE_PD_HEADROOM_MV 850 25 + #define AUTO_DISCHARGE_PPS_HEADROOM_MV 1250 22 26 23 27 struct tcpci { 24 28 struct device *dev; ··· 272 268 enable ? TCPC_POWER_CTRL_VCONN_ENABLE : 0); 273 269 } 274 270 271 + static int tcpci_enable_auto_vbus_discharge(struct tcpc_dev *dev, bool enable) 272 + { 273 + struct tcpci *tcpci = tcpc_to_tcpci(dev); 274 + int ret; 275 + 276 + ret = regmap_update_bits(tcpci->regmap, TCPC_POWER_CTRL, TCPC_POWER_CTRL_AUTO_DISCHARGE, 277 + enable ? TCPC_POWER_CTRL_AUTO_DISCHARGE : 0); 278 + return ret; 279 + } 280 + 281 + static int tcpci_set_auto_vbus_discharge_threshold(struct tcpc_dev *dev, enum typec_pwr_opmode mode, 282 + bool pps_active, u32 requested_vbus_voltage_mv) 283 + { 284 + struct tcpci *tcpci = tcpc_to_tcpci(dev); 285 + unsigned int pwr_ctrl, threshold = 0; 286 + int ret; 287 + 288 + /* 289 + * Indicates that vbus is going to go away due PR_SWAP, hard reset etc. 290 + * Do not discharge vbus here. 291 + */ 292 + if (requested_vbus_voltage_mv == 0) 293 + goto write_thresh; 294 + 295 + ret = regmap_read(tcpci->regmap, TCPC_POWER_CTRL, &pwr_ctrl); 296 + if (ret < 0) 297 + return ret; 298 + 299 + if (pwr_ctrl & TCPC_FAST_ROLE_SWAP_EN) { 300 + /* To prevent disconnect when the source is fast role swap is capable. 
*/ 301 + threshold = AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV; 302 + } else if (mode == TYPEC_PWR_MODE_PD) { 303 + if (pps_active) 304 + threshold = (95 * requested_vbus_voltage_mv / 100) - 305 + AUTO_DISCHARGE_PD_HEADROOM_MV; 306 + else 307 + threshold = (95 * requested_vbus_voltage_mv / 100) - 308 + AUTO_DISCHARGE_PPS_HEADROOM_MV; 309 + } else { 310 + /* 3.5V for non-pd sink */ 311 + threshold = AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV; 312 + } 313 + 314 + threshold = threshold / TCPC_VBUS_SINK_DISCONNECT_THRESH_LSB_MV; 315 + 316 + if (threshold > TCPC_VBUS_SINK_DISCONNECT_THRESH_MAX) 317 + return -EINVAL; 318 + 319 + write_thresh: 320 + return tcpci_write16(tcpci, TCPC_VBUS_SINK_DISCONNECT_THRESH, threshold); 321 + } 322 + 275 323 static int tcpci_enable_frs(struct tcpc_dev *dev, bool enable) 276 324 { 277 325 struct tcpci *tcpci = tcpc_to_tcpci(dev); ··· 338 282 TCPC_FAST_ROLE_SWAP_EN : 0); 339 283 340 284 return ret; 285 + } 286 + 287 + static void tcpci_frs_sourcing_vbus(struct tcpc_dev *dev) 288 + { 289 + struct tcpci *tcpci = tcpc_to_tcpci(dev); 290 + 291 + if (tcpci->data->frs_sourcing_vbus) 292 + tcpci->data->frs_sourcing_vbus(tcpci, tcpci->data); 341 293 } 342 294 343 295 static int tcpci_set_bist_data(struct tcpc_dev *tcpc, bool enable) ··· 403 339 return !!(reg & TCPC_POWER_STATUS_VBUS_PRES); 404 340 } 405 341 342 + static bool tcpci_is_vbus_vsafe0v(struct tcpc_dev *tcpc) 343 + { 344 + struct tcpci *tcpci = tcpc_to_tcpci(tcpc); 345 + unsigned int reg; 346 + int ret; 347 + 348 + ret = regmap_read(tcpci->regmap, TCPC_EXTENDED_STATUS, &reg); 349 + if (ret < 0) 350 + return false; 351 + 352 + return !!(reg & TCPC_EXTENDED_STATUS_VSAFE0V); 353 + } 354 + 406 355 static int tcpci_set_vbus(struct tcpc_dev *tcpc, bool source, bool sink) 407 356 { 408 357 struct tcpci *tcpci = tcpc_to_tcpci(tcpc); ··· 461 384 return 0; 462 385 } 463 386 464 - static int tcpci_pd_transmit(struct tcpc_dev *tcpc, 465 - enum tcpm_transmit_type type, 466 - const struct pd_message *msg) 387 + 
static int tcpci_pd_transmit(struct tcpc_dev *tcpc, enum tcpm_transmit_type type, 388 + const struct pd_message *msg, unsigned int negotiated_rev) 467 389 { 468 390 struct tcpci *tcpci = tcpc_to_tcpci(tcpc); 469 391 u16 header = msg ? le16_to_cpu(msg->header) : 0; ··· 510 434 } 511 435 } 512 436 513 - reg = (PD_RETRY_COUNT << TCPC_TRANSMIT_RETRY_SHIFT) | (type << TCPC_TRANSMIT_TYPE_SHIFT); 437 + /* nRetryCount is 3 in PD2.0 spec where 2 in PD3.0 spec */ 438 + reg = ((negotiated_rev > PD_REV20 ? PD_RETRY_COUNT_3_0_OR_HIGHER : PD_RETRY_COUNT_DEFAULT) 439 + << TCPC_TRANSMIT_RETRY_SHIFT) | (type << TCPC_TRANSMIT_TYPE_SHIFT); 514 440 ret = regmap_write(tcpci->regmap, TCPC_TRANSMIT, reg); 515 441 if (ret < 0) 516 442 return ret; ··· 569 491 TCPC_ALERT_RX_HARD_RST | TCPC_ALERT_CC_STATUS; 570 492 if (tcpci->controls_vbus) 571 493 reg |= TCPC_ALERT_POWER_STATUS; 494 + /* Enable VSAFE0V status interrupt when detecting VSAFE0V is supported */ 495 + if (tcpci->data->vbus_vsafe0v) { 496 + reg |= TCPC_ALERT_EXTENDED_STATUS; 497 + ret = regmap_write(tcpci->regmap, TCPC_EXTENDED_STATUS_MASK, 498 + TCPC_EXTENDED_STATUS_VSAFE0V); 499 + if (ret < 0) 500 + return ret; 501 + } 572 502 return tcpci_write16(tcpci, TCPC_ALERT_MASK, reg); 573 503 } 574 504 575 505 irqreturn_t tcpci_irq(struct tcpci *tcpci) 576 506 { 577 507 u16 status; 508 + int ret; 509 + unsigned int raw; 578 510 579 511 tcpci_read16(tcpci, TCPC_ALERT, &status); 580 512 ··· 600 512 tcpm_cc_change(tcpci->port); 601 513 602 514 if (status & TCPC_ALERT_POWER_STATUS) { 603 - unsigned int reg; 604 - 605 - regmap_read(tcpci->regmap, TCPC_POWER_STATUS_MASK, &reg); 606 - 515 + regmap_read(tcpci->regmap, TCPC_POWER_STATUS_MASK, &raw); 607 516 /* 608 517 * If power status mask has been reset, then the TCPC 609 518 * has reset. 
610 519 */ 611 - if (reg == 0xff) 520 + if (raw == 0xff) 612 521 tcpm_tcpc_reset(tcpci->port); 613 522 else 614 523 tcpm_vbus_change(tcpci->port); ··· 642 557 tcpci_write16(tcpci, TCPC_ALERT, TCPC_ALERT_RX_STATUS); 643 558 644 559 tcpm_pd_receive(tcpci->port, &msg); 560 + } 561 + 562 + if (status & TCPC_ALERT_EXTENDED_STATUS) { 563 + ret = regmap_read(tcpci->regmap, TCPC_EXTENDED_STATUS, &raw); 564 + if (!ret && (raw & TCPC_EXTENDED_STATUS_VSAFE0V)) 565 + tcpm_vbus_change(tcpci->port); 645 566 } 646 567 647 568 if (status & TCPC_ALERT_RX_HARD_RST) ··· 719 628 tcpci->tcpc.pd_transmit = tcpci_pd_transmit; 720 629 tcpci->tcpc.set_bist_data = tcpci_set_bist_data; 721 630 tcpci->tcpc.enable_frs = tcpci_enable_frs; 631 + tcpci->tcpc.frs_sourcing_vbus = tcpci_frs_sourcing_vbus; 632 + 633 + if (tcpci->data->auto_discharge_disconnect) { 634 + tcpci->tcpc.enable_auto_vbus_discharge = tcpci_enable_auto_vbus_discharge; 635 + tcpci->tcpc.set_auto_vbus_discharge_threshold = 636 + tcpci_set_auto_vbus_discharge_threshold; 637 + regmap_update_bits(tcpci->regmap, TCPC_POWER_CTRL, TCPC_POWER_CTRL_BLEED_DISCHARGE, 638 + TCPC_POWER_CTRL_BLEED_DISCHARGE); 639 + } 640 + 641 + if (tcpci->data->vbus_vsafe0v) 642 + tcpci->tcpc.is_vbus_vsafe0v = tcpci_is_vbus_vsafe0v; 722 643 723 644 err = tcpci_parse_config(tcpci); 724 645 if (err < 0)
+25 -4
drivers/usb/typec/tcpm/tcpci.h
···
8 8 #ifndef __LINUX_USB_TCPCI_H
9 9 #define __LINUX_USB_TCPCI_H
10 10 
11     + #include <linux/usb/typec.h>
12     + 
11 13 #define TCPC_VENDOR_ID			0x0
12 14 #define TCPC_PRODUCT_ID			0x2
13 15 #define TCPC_BCD_DEV			0x4
···
49 47 #define TCPC_TCPC_CTRL_ORIENTATION	BIT(0)
50 48 #define TCPC_TCPC_CTRL_BIST_TM		BIT(1)
51 49 
50     + #define TCPC_EXTENDED_STATUS		0x20
51     + #define TCPC_EXTENDED_STATUS_VSAFE0V	BIT(0)
52     + 
52 53 #define TCPC_ROLE_CTRL			0x1a
53 54 #define TCPC_ROLE_CTRL_DRP		BIT(6)
54 55 #define TCPC_ROLE_CTRL_RP_VAL_SHIFT	4
···
72 67 
73 68 #define TCPC_POWER_CTRL			0x1c
74 69 #define TCPC_POWER_CTRL_VCONN_ENABLE	BIT(0)
70     + #define TCPC_POWER_CTRL_BLEED_DISCHARGE	BIT(3)
71     + #define TCPC_POWER_CTRL_AUTO_DISCHARGE	BIT(4)
75 72 #define TCPC_FAST_ROLE_SWAP_EN		BIT(7)
76 73 
77 74 #define TCPC_CC_STATUS			0x1d
···
140 133 
141 134 #define TCPC_VBUS_VOLTAGE			0x70
142 135 #define TCPC_VBUS_SINK_DISCONNECT_THRESH	0x72
136     + #define TCPC_VBUS_SINK_DISCONNECT_THRESH_LSB_MV	25
137     + #define TCPC_VBUS_SINK_DISCONNECT_THRESH_MAX	0x3ff
143 138 #define TCPC_VBUS_STOP_DISCHARGE_THRESH		0x74
144 139 #define TCPC_VBUS_VOLTAGE_ALARM_HI_CFG		0x76
145 140 #define TCPC_VBUS_VOLTAGE_ALARM_LO_CFG		0x78
···
149 140 /* I2C_WRITE_BYTE_COUNT + 1 when TX_BUF_BYTE_x is only accessible I2C_WRITE_BYTE_COUNT */
150 141 #define TCPC_TRANSMIT_BUFFER_MAX_LEN		31
151 142 
152     - /*
153     -  * @TX_BUF_BYTE_x_hidden
154     -  *	optional; Set when TX_BUF_BYTE_x can only be accessed through I2C_WRITE_BYTE_COUNT.
155     -  */
156 143 struct tcpci;
144     + 
145     + /*
146     +  * @TX_BUF_BYTE_x_hidden:
147     +  *	optional; Set when TX_BUF_BYTE_x can only be accessed through I2C_WRITE_BYTE_COUNT.
148     +  * @frs_sourcing_vbus:
149     +  *	Optional; Callback to perform chip specific operations when FRS
150     +  *	is sourcing vbus.
151     +  * @auto_discharge_disconnect:
152     +  *	Optional; Enables TCPC to autonomously discharge vbus on disconnect.
153     +  * @vbus_vsafe0v:
154     +  *	optional; Set when TCPC can detect whether vbus is at VSAFE0V.
155     +  */
157 156 struct tcpci_data {
158 157 	struct regmap *regmap;
159 158 	unsigned char TX_BUF_BYTE_x_hidden:1;
159     + 	unsigned char auto_discharge_disconnect:1;
160     + 	unsigned char vbus_vsafe0v:1;
161     + 
160 162 	int (*init)(struct tcpci *tcpci, struct tcpci_data *data);
161 163 	int (*set_vconn)(struct tcpci *tcpci, struct tcpci_data *data,
162 164 			 bool enable);
163 165 	int (*start_drp_toggling)(struct tcpci *tcpci, struct tcpci_data *data,
164 166 				  enum typec_cc_status cc);
165 167 	int (*set_vbus)(struct tcpci *tcpci, struct tcpci_data *data, bool source, bool sink);
168     + 	void (*frs_sourcing_vbus)(struct tcpci *tcpci, struct tcpci_data *data);
166 169 };
167 170 
168 171 struct tcpci *tcpci_register_port(struct device *dev, struct tcpci_data *data);
+33 -18
drivers/usb/typec/tcpm/tcpci_maxim.c
··· 112 112 return; 113 113 } 114 114 115 + /* Enable VSAFE0V detection */ 116 + ret = max_tcpci_write8(chip, TCPC_EXTENDED_STATUS_MASK, TCPC_EXTENDED_STATUS_VSAFE0V); 117 + if (ret < 0) { 118 + dev_err(chip->dev, "Unable to unmask TCPC_EXTENDED_STATUS_VSAFE0V ret:%d\n", ret); 119 + return; 120 + } 121 + 115 122 alert_mask = TCPC_ALERT_TX_SUCCESS | TCPC_ALERT_TX_DISCARDED | TCPC_ALERT_TX_FAILED | 116 123 TCPC_ALERT_RX_HARD_RST | TCPC_ALERT_RX_STATUS | TCPC_ALERT_CC_STATUS | 117 124 TCPC_ALERT_VBUS_DISCNCT | TCPC_ALERT_RX_BUF_OVF | TCPC_ALERT_POWER_STATUS | 118 125 /* Enable Extended alert for detecting Fast Role Swap Signal */ 119 - TCPC_ALERT_EXTND; 126 + TCPC_ALERT_EXTND | TCPC_ALERT_EXTENDED_STATUS; 120 127 121 128 ret = max_tcpci_write16(chip, TCPC_ALERT_MASK, alert_mask); 122 129 if (ret < 0) { ··· 245 238 if (ret < 0) 246 239 return; 247 240 248 - if (pwr_status == 0xff) { 241 + if (pwr_status == 0xff) 249 242 max_tcpci_init_regs(chip); 250 - } else if (pwr_status & TCPC_POWER_STATUS_SOURCING_VBUS) { 243 + else if (pwr_status & TCPC_POWER_STATUS_SOURCING_VBUS) 251 244 tcpm_sourcing_vbus(chip->port); 252 - /* 253 - * Alawys re-enable boost here. 254 - * In normal case, when say an headset is attached, TCPM would 255 - * have instructed to TCPC to enable boost, so the call is a 256 - * no-op. 257 - * But for Fast Role Swap case, Boost turns on autonomously without 258 - * AP intervention, but, needs AP to enable source mode explicitly 259 - * for AP to regain control. 260 - */ 261 - max_tcpci_set_vbus(chip->tcpci, &chip->data, true, false); 262 - } else { 245 + else 263 246 tcpm_vbus_change(chip->port); 264 - } 247 + } 248 + 249 + static void max_tcpci_frs_sourcing_vbus(struct tcpci *tcpci, struct tcpci_data *tdata) 250 + { 251 + /* 252 + * For Fast Role Swap case, Boost turns on autonomously without 253 + * AP intervention, but, needs AP to enable source mode explicitly 254 + * for AP to regain control. 
255 + */ 256 + max_tcpci_set_vbus(tcpci, tdata, true, false); 265 257 } 266 258 267 259 static void process_tx(struct max_tcpci_chip *chip, u16 status) ··· 322 316 } 323 317 } 324 318 319 + if (status & TCPC_ALERT_EXTENDED_STATUS) { 320 + ret = max_tcpci_read8(chip, TCPC_EXTENDED_STATUS, (u8 *)&reg_status); 321 + if (ret >= 0 && (reg_status & TCPC_EXTENDED_STATUS_VSAFE0V)) 322 + tcpm_vbus_change(chip->port); 323 + } 324 + 325 325 if (status & TCPC_ALERT_RX_STATUS) 326 326 process_rx(chip, status); 327 327 ··· 356 344 { 357 345 struct max_tcpci_chip *chip = dev_id; 358 346 u16 status; 359 - irqreturn_t irq_return; 347 + irqreturn_t irq_return = IRQ_HANDLED; 360 348 int ret; 361 349 362 350 if (!chip->port) ··· 453 441 chip->data.start_drp_toggling = max_tcpci_start_toggling; 454 442 chip->data.TX_BUF_BYTE_x_hidden = true; 455 443 chip->data.init = tcpci_init; 444 + chip->data.frs_sourcing_vbus = max_tcpci_frs_sourcing_vbus; 445 + chip->data.auto_discharge_disconnect = true; 446 + chip->data.vbus_vsafe0v = true; 456 447 457 448 max_tcpci_init_regs(chip); 458 449 chip->tcpci = tcpci_register_port(chip->dev, &chip->data); 459 - if (IS_ERR_OR_NULL(chip->tcpci)) { 450 + if (IS_ERR(chip->tcpci)) { 460 451 dev_err(&client->dev, "TCPCI port registration failed"); 461 452 ret = PTR_ERR(chip->tcpci); 462 453 return PTR_ERR(chip->tcpci); ··· 496 481 497 482 #ifdef CONFIG_OF 498 483 static const struct of_device_id max_tcpci_of_match[] = { 499 - { .compatible = "maxim,tcpc", }, 484 + { .compatible = "maxim,max33359", }, 500 485 {}, 501 486 }; 502 487 MODULE_DEVICE_TABLE(of, max_tcpci_of_match);
+187 -40
drivers/usb/typec/tcpm/tcpm.c
··· 258 258 bool attached; 259 259 bool connected; 260 260 enum typec_port_type port_type; 261 + 262 + /* 263 + * Set to true when vbus is greater than VSAFE5V min. 264 + * Set to false when vbus falls below vSinkDisconnect max threshold. 265 + */ 261 266 bool vbus_present; 267 + 268 + /* 269 + * Set to true when vbus is less than VSAFE0V max. 270 + * Set to false when vbus is greater than VSAFE0V max. 271 + */ 272 + bool vbus_vsafe0v; 273 + 262 274 bool vbus_never_low; 263 275 bool vbus_source; 264 276 bool vbus_charge; ··· 375 363 /* port belongs to a self powered device */ 376 364 bool self_powered; 377 365 378 - /* FRS */ 379 - enum frs_typec_current frs_current; 366 + /* Sink FRS */ 367 + enum frs_typec_current new_source_frs_current; 380 368 381 369 /* Sink caps have been queried */ 382 370 bool sink_cap_done; ··· 679 667 tcpm_log(port, "PD TX, type: %#x", type); 680 668 681 669 reinit_completion(&port->tx_complete); 682 - ret = port->tcpc->pd_transmit(port->tcpc, type, msg); 670 + ret = port->tcpc->pd_transmit(port->tcpc, type, msg, port->negotiated_rev); 683 671 if (ret < 0) 684 672 return ret; 685 673 ··· 1718 1706 } 1719 1707 } 1720 1708 1709 + static int tcpm_set_auto_vbus_discharge_threshold(struct tcpm_port *port, 1710 + enum typec_pwr_opmode mode, bool pps_active, 1711 + u32 requested_vbus_voltage) 1712 + { 1713 + int ret; 1714 + 1715 + if (!port->tcpc->set_auto_vbus_discharge_threshold) 1716 + return 0; 1717 + 1718 + ret = port->tcpc->set_auto_vbus_discharge_threshold(port->tcpc, mode, pps_active, 1719 + requested_vbus_voltage); 1720 + tcpm_log_force(port, 1721 + "set_auto_vbus_discharge_threshold mode:%d pps_active:%c vbus:%u ret:%d", 1722 + mode, pps_active ? 
'y' : 'n', requested_vbus_voltage, ret); 1723 + 1724 + return ret; 1725 + } 1726 + 1721 1727 static void tcpm_pd_data_request(struct tcpm_port *port, 1722 1728 const struct pd_message *msg) 1723 1729 { ··· 1743 1713 unsigned int cnt = pd_header_cnt_le(msg->header); 1744 1714 unsigned int rev = pd_header_rev_le(msg->header); 1745 1715 unsigned int i; 1746 - enum frs_typec_current frs_current; 1716 + enum frs_typec_current partner_frs_current; 1747 1717 bool frs_enable; 1748 1718 int ret; 1749 1719 ··· 1816 1786 for (i = 0; i < cnt; i++) 1817 1787 port->sink_caps[i] = le32_to_cpu(msg->payload[i]); 1818 1788 1819 - frs_current = (port->sink_caps[0] & PDO_FIXED_FRS_CURR_MASK) >> 1789 + partner_frs_current = (port->sink_caps[0] & PDO_FIXED_FRS_CURR_MASK) >> 1820 1790 PDO_FIXED_FRS_CURR_SHIFT; 1821 - frs_enable = frs_current && (frs_current <= port->frs_current); 1791 + frs_enable = partner_frs_current && (partner_frs_current <= 1792 + port->new_source_frs_current); 1822 1793 tcpm_log(port, 1823 1794 "Port partner FRS capable partner_frs_current:%u port_frs_current:%u enable:%c", 1824 - frs_current, port->frs_current, frs_enable ? 'y' : 'n'); 1795 + partner_frs_current, port->new_source_frs_current, frs_enable ? 'y' : 'n'); 1825 1796 if (frs_enable) { 1826 1797 ret = port->tcpc->enable_frs(port->tcpc, true); 1827 1798 tcpm_log(port, "Enable FRS %s, ret:%d\n", ret ? 
"fail" : "success", ret); ··· 1906 1875 port->current_limit, 1907 1876 port->supply_voltage); 1908 1877 port->explicit_contract = true; 1878 + tcpm_set_auto_vbus_discharge_threshold(port, 1879 + TYPEC_PWR_MODE_PD, 1880 + port->pps_data.active, 1881 + port->supply_voltage); 1909 1882 tcpm_set_state(port, SNK_READY, 0); 1910 1883 } else { 1911 1884 /* ··· 2228 2193 static bool tcpm_send_queued_message(struct tcpm_port *port) 2229 2194 { 2230 2195 enum pd_msg_request queued_message; 2196 + int ret; 2231 2197 2232 2198 do { 2233 2199 queued_message = port->queued_message; ··· 2248 2212 tcpm_pd_send_sink_caps(port); 2249 2213 break; 2250 2214 case PD_MSG_DATA_SOURCE_CAP: 2251 - tcpm_pd_send_source_caps(port); 2215 + ret = tcpm_pd_send_source_caps(port); 2216 + if (ret < 0) { 2217 + tcpm_log(port, 2218 + "Unable to send src caps, ret=%d", 2219 + ret); 2220 + tcpm_set_state(port, SOFT_RESET_SEND, 0); 2221 + } else if (port->pwr_role == TYPEC_SOURCE) { 2222 + tcpm_set_state(port, HARD_RESET_SEND, 2223 + PD_T_SENDER_RESPONSE); 2224 + } 2252 2225 break; 2253 2226 default: 2254 2227 break; ··· 2834 2789 if (ret < 0) 2835 2790 return ret; 2836 2791 2837 - ret = tcpm_set_roles(port, true, TYPEC_SOURCE, 2838 - tcpm_data_role_for_source(port)); 2792 + if (port->tcpc->enable_auto_vbus_discharge) { 2793 + ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, true); 2794 + tcpm_log_force(port, "enable vbus discharge ret:%d", ret); 2795 + } 2796 + 2797 + ret = tcpm_set_roles(port, true, TYPEC_SOURCE, tcpm_data_role_for_source(port)); 2839 2798 if (ret < 0) 2840 2799 return ret; 2841 2800 ··· 2906 2857 2907 2858 static void tcpm_reset_port(struct tcpm_port *port) 2908 2859 { 2860 + int ret; 2861 + 2862 + if (port->tcpc->enable_auto_vbus_discharge) { 2863 + ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, false); 2864 + tcpm_log_force(port, "Disable vbus discharge ret:%d", ret); 2865 + } 2909 2866 tcpm_unregister_altmodes(port); 2910 2867 tcpm_typec_disconnect(port); 2911 
2868 port->attached = false; ··· 2976 2921 if (ret < 0) 2977 2922 return ret; 2978 2923 2979 - ret = tcpm_set_roles(port, true, TYPEC_SINK, 2980 - tcpm_data_role_for_sink(port)); 2924 + if (port->tcpc->enable_auto_vbus_discharge) { 2925 + tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, VSAFE5V); 2926 + ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, true); 2927 + tcpm_log_force(port, "enable vbus discharge ret:%d", ret); 2928 + } 2929 + 2930 + ret = tcpm_set_roles(port, true, TYPEC_SINK, tcpm_data_role_for_sink(port)); 2981 2931 if (ret < 0) 2982 2932 return ret; 2983 2933 ··· 3057 2997 static void tcpm_check_send_discover(struct tcpm_port *port) 3058 2998 { 3059 2999 if (port->data_role == TYPEC_HOST && port->send_discover && 3060 - port->pd_capable) { 3000 + port->pd_capable) 3061 3001 tcpm_send_vdm(port, USB_SID_PD, CMD_DISCOVER_IDENT, NULL, 0); 3062 - port->send_discover = false; 3063 - } 3002 + port->send_discover = false; 3064 3003 } 3065 3004 3066 3005 static void tcpm_swap_complete(struct tcpm_port *port, int result) ··· 3115 3056 else if (tcpm_port_is_audio(port)) 3116 3057 tcpm_set_state(port, AUDIO_ACC_ATTACHED, 3117 3058 PD_T_CC_DEBOUNCE); 3118 - else if (tcpm_port_is_source(port)) 3059 + else if (tcpm_port_is_source(port) && port->vbus_vsafe0v) 3119 3060 tcpm_set_state(port, 3120 3061 tcpm_try_snk(port) ? 
SNK_TRY 3121 3062 : SRC_ATTACHED, ··· 3145 3086 break; 3146 3087 case SNK_TRY_WAIT_DEBOUNCE: 3147 3088 tcpm_set_state(port, SNK_TRY_WAIT_DEBOUNCE_CHECK_VBUS, 3148 - PD_T_PD_DEBOUNCE); 3089 + PD_T_TRY_CC_DEBOUNCE); 3149 3090 break; 3150 3091 case SNK_TRY_WAIT_DEBOUNCE_CHECK_VBUS: 3151 - if (port->vbus_present && tcpm_port_is_sink(port)) { 3092 + if (port->vbus_present && tcpm_port_is_sink(port)) 3152 3093 tcpm_set_state(port, SNK_ATTACHED, 0); 3153 - } else { 3154 - tcpm_set_state(port, SRC_TRYWAIT, 0); 3094 + else 3155 3095 port->max_wait = 0; 3156 - } 3157 3096 break; 3158 3097 case SRC_TRYWAIT: 3159 3098 tcpm_set_cc(port, tcpm_rp_cc(port)); ··· 3563 3506 tcpm_set_state(port, SRC_UNATTACHED, PD_T_PS_SOURCE_ON); 3564 3507 break; 3565 3508 case SNK_HARD_RESET_SINK_OFF: 3509 + /* Do not discharge/disconnect during hard reseet */ 3510 + tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, 0); 3566 3511 memset(&port->pps_data, 0, sizeof(port->pps_data)); 3567 3512 tcpm_set_vconn(port, false); 3568 3513 if (port->pd_capable) ··· 3607 3548 tcpm_set_charge(port, true); 3608 3549 } 3609 3550 tcpm_set_attached_state(port, true); 3551 + tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, VSAFE5V); 3610 3552 tcpm_set_state(port, SNK_STARTUP, 0); 3611 3553 break; 3612 3554 ··· 3709 3649 tcpm_set_state(port, PR_SWAP_SNK_SRC_SINK_OFF, 0); 3710 3650 break; 3711 3651 case PR_SWAP_SRC_SNK_TRANSITION_OFF: 3652 + /* 3653 + * Prevent vbus discharge circuit from turning on during PR_SWAP 3654 + * as this is not a disconnect. 
3655 + */ 3712 3656 tcpm_set_vbus(port, false); 3713 3657 port->explicit_contract = false; 3714 3658 /* allow time for Vbus discharge, must be < tSrcSwapStdby */ ··· 3738 3674 tcpm_set_state(port, ERROR_RECOVERY, 0); 3739 3675 break; 3740 3676 } 3741 - tcpm_set_state_cond(port, SNK_UNATTACHED, PD_T_PS_SOURCE_ON); 3677 + tcpm_set_state(port, ERROR_RECOVERY, PD_T_PS_SOURCE_ON_PRS); 3742 3678 break; 3743 3679 case PR_SWAP_SRC_SNK_SINK_ON: 3680 + /* Set the vbus disconnect threshold for implicit contract */ 3681 + tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, VSAFE5V); 3744 3682 tcpm_set_state(port, SNK_STARTUP, 0); 3745 3683 break; 3746 3684 case PR_SWAP_SNK_SRC_SINK_OFF: 3685 + /* 3686 + * Prevent vbus discharge circuit from turning on during PR_SWAP 3687 + * as this is not a disconnect. 3688 + */ 3689 + tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, 3690 + port->pps_data.active, 0); 3747 3691 tcpm_set_charge(port, false); 3748 3692 tcpm_set_state(port, hard_reset_state(port), 3749 3693 PD_T_PS_SOURCE_OFF); ··· 4072 4000 if (!tcpm_port_is_sink(port)) 4073 4001 tcpm_set_state(port, SNK_TRYWAIT_DEBOUNCE, 0); 4074 4002 break; 4003 + case SNK_TRY_WAIT_DEBOUNCE_CHECK_VBUS: 4004 + if (!tcpm_port_is_sink(port)) 4005 + tcpm_set_state(port, SRC_TRYWAIT, PD_T_TRY_CC_DEBOUNCE); 4006 + else 4007 + tcpm_set_state(port, SNK_TRY_WAIT_DEBOUNCE_CHECK_VBUS, 0); 4008 + break; 4075 4009 case SNK_TRYWAIT: 4076 4010 /* Do nothing, waiting for tCCDebounce */ 4077 4011 break; ··· 4118 4040 { 4119 4041 tcpm_log_force(port, "VBUS on"); 4120 4042 port->vbus_present = true; 4043 + /* 4044 + * When vbus_present is true, i.e. the voltage at VBUS is greater than VSAFE5V, vbus 4045 + * is implicitly not at VSAFE0V; hence clear the vbus_vsafe0v flag here.
4046 + */ 4047 + port->vbus_vsafe0v = false; 4048 + 4121 4049 switch (port->state) { 4122 4050 case SNK_TRANSITION_SINK_VBUS: 4123 4051 port->explicit_contract = true; ··· 4170 4086 case SNK_TRYWAIT_DEBOUNCE: 4171 4087 /* Do nothing, waiting for Rp */ 4172 4088 break; 4089 + case SNK_TRY_WAIT_DEBOUNCE_CHECK_VBUS: 4090 + if (port->vbus_present && tcpm_port_is_sink(port)) 4091 + tcpm_set_state(port, SNK_ATTACHED, 0); 4092 + break; 4173 4093 case SRC_TRY_WAIT: 4174 4094 case SRC_TRY_DEBOUNCE: 4175 4095 /* Do nothing, waiting for sink detection */ 4176 4096 break; 4097 + case FR_SWAP_SEND: 4098 + case FR_SWAP_SEND_TIMEOUT: 4099 + case FR_SWAP_SNK_SRC_TRANSITION_TO_OFF: 4100 + case FR_SWAP_SNK_SRC_SOURCE_VBUS_APPLIED: 4101 + if (port->tcpc->frs_sourcing_vbus) 4102 + port->tcpc->frs_sourcing_vbus(port->tcpc); 4103 + break; 4177 4104 case FR_SWAP_SNK_SRC_NEW_SINK_READY: 4105 + if (port->tcpc->frs_sourcing_vbus) 4106 + port->tcpc->frs_sourcing_vbus(port->tcpc); 4178 4107 tcpm_set_state(port, FR_SWAP_SNK_SRC_SOURCE_VBUS_APPLIED, 0); 4179 4108 break; 4180 4109 ··· 4213 4116 case SNK_HARD_RESET_SINK_OFF: 4214 4117 tcpm_set_state(port, SNK_HARD_RESET_WAIT_VBUS, 0); 4215 4118 break; 4216 - case SRC_HARD_RESET_VBUS_OFF: 4217 - /* 4218 - * After establishing the vSafe0V voltage condition on VBUS, the Source Shall wait 4219 - * tSrcRecover before re-applying VCONN and restoring VBUS to vSafe5V. 4220 - */ 4221 - tcpm_set_state(port, SRC_HARD_RESET_VBUS_ON, PD_T_SRC_RECOVER); 4222 - break; 4223 4119 case HARD_RESET_SEND: 4224 4120 break; 4225 - 4226 4121 case SNK_TRY: 4227 4122 /* Do nothing, waiting for timeout */ 4228 4123 break; ··· 4245 4156 /* Do nothing, expected */ 4246 4157 break; 4247 4158 4159 + case PR_SWAP_SNK_SRC_SOURCE_ON: 4160 + /* 4161 + * Do nothing when vbus off notification is received. 4162 + * TCPM can wait for PD_T_NEWSRC in PR_SWAP_SNK_SRC_SOURCE_ON 4163 + * for the vbus source to ramp up. 
4164 + */ 4165 + break; 4166 + 4248 4167 case PORT_RESET_WAIT_OFF: 4249 4168 tcpm_set_state(port, tcpm_default_state(port), 0); 4250 4169 break; ··· 4281 4184 if (port->pwr_role == TYPEC_SINK && 4282 4185 port->attached) 4283 4186 tcpm_set_state(port, SNK_UNATTACHED, 0); 4187 + break; 4188 + } 4189 + } 4190 + 4191 + static void _tcpm_pd_vbus_vsafe0v(struct tcpm_port *port) 4192 + { 4193 + tcpm_log_force(port, "VBUS VSAFE0V"); 4194 + port->vbus_vsafe0v = true; 4195 + switch (port->state) { 4196 + case SRC_HARD_RESET_VBUS_OFF: 4197 + /* 4198 + * After establishing the vSafe0V voltage condition on VBUS, the Source Shall wait 4199 + * tSrcRecover before re-applying VCONN and restoring VBUS to vSafe5V. 4200 + */ 4201 + tcpm_set_state(port, SRC_HARD_RESET_VBUS_ON, PD_T_SRC_RECOVER); 4202 + break; 4203 + case SRC_ATTACH_WAIT: 4204 + if (tcpm_port_is_source(port)) 4205 + tcpm_set_state(port, tcpm_try_snk(port) ? SNK_TRY : SRC_ATTACHED, 4206 + PD_T_CC_DEBOUNCE); 4207 + break; 4208 + default: 4284 4209 break; 4285 4210 } 4286 4211 } ··· 4342 4223 bool vbus; 4343 4224 4344 4225 vbus = port->tcpc->get_vbus(port->tcpc); 4345 - if (vbus) 4226 + if (vbus) { 4346 4227 _tcpm_pd_vbus_on(port); 4347 - else 4228 + } else { 4348 4229 _tcpm_pd_vbus_off(port); 4230 + /* 4231 + * When TCPC does not support detecting vsafe0v voltage level, 4232 + * treat vbus absent as vsafe0v. Else invoke is_vbus_vsafe0v 4233 + * to see if vbus has discharge to VSAFE0V. 4234 + */ 4235 + if (!port->tcpc->is_vbus_vsafe0v || 4236 + port->tcpc->is_vbus_vsafe0v(port->tcpc)) 4237 + _tcpm_pd_vbus_vsafe0v(port); 4238 + } 4349 4239 } 4350 4240 if (events & TCPM_CC_EVENT) { 4351 4241 enum typec_cc_status cc1, cc2; ··· 4804 4676 if (port->vbus_present) 4805 4677 port->vbus_never_low = true; 4806 4678 4679 + /* 4680 + * 1. When vbus_present is true, voltage on VBUS is already at VSAFE5V. 4681 + * So implicitly vbus_vsafe0v = false. 4682 + * 4683 + * 2. 
When vbus_present is false and TCPC does NOT support querying 4684 + vsafe0v status, then, it's best to assume vbus is at VSAFE0V i.e. 4685 + vbus_vsafe0v is true. 4686 + * 4687 + * 3. When vbus_present is false and TCPC does support querying vsafe0v, 4688 + * then, query tcpc for vsafe0v status. 4689 + */ 4690 + if (port->vbus_present) 4691 + port->vbus_vsafe0v = false; 4692 + else if (!port->tcpc->is_vbus_vsafe0v) 4693 + port->vbus_vsafe0v = true; 4694 + else 4695 + port->vbus_vsafe0v = port->tcpc->is_vbus_vsafe0v(port->tcpc); 4696 + 4807 4697 tcpm_set_state(port, tcpm_default_state(port), 0); 4808 4698 4809 4699 if (port->tcpc->get_cc(port->tcpc, &cc1, &cc2) == 0) ··· 4954 4808 4955 4809 /* FRS can only be supported by DRP ports */ 4956 4810 if (port->port_type == TYPEC_PORT_DRP) { 4957 - ret = fwnode_property_read_u32(fwnode, "frs-typec-current", &frs_current); 4811 + ret = fwnode_property_read_u32(fwnode, "new-source-frs-typec-current", 4812 + &frs_current); 4958 4813 if (ret >= 0 && frs_current <= FRS_5V_3A) 4959 - port->frs_current = frs_current; 4814 + port->new_source_frs_current = frs_current; 4960 4815 } 4961 4816 4962 4817 return 0; ··· 5171 5024 snprintf(psy_name, psy_name_len, "%s%s", tcpm_psy_name_prefix, 5172 5025 port_dev_name); 5173 5026 port->psy_desc.name = psy_name; 5174 - port->psy_desc.type = POWER_SUPPLY_TYPE_USB, 5027 + port->psy_desc.type = POWER_SUPPLY_TYPE_USB; 5175 5028 port->psy_desc.usb_types = tcpm_psy_usb_types; 5176 5029 port->psy_desc.num_usb_types = ARRAY_SIZE(tcpm_psy_usb_types); 5177 - port->psy_desc.properties = tcpm_psy_props, 5178 - port->psy_desc.num_properties = ARRAY_SIZE(tcpm_psy_props), 5179 - port->psy_desc.get_property = tcpm_psy_get_prop, 5180 - port->psy_desc.set_property = tcpm_psy_set_prop, 5181 - port->psy_desc.property_is_writeable = tcpm_psy_prop_writeable, 5030 + port->psy_desc.properties = tcpm_psy_props; 5031 + port->psy_desc.num_properties = ARRAY_SIZE(tcpm_psy_props); 5032 +
port->psy_desc.get_property = tcpm_psy_get_prop; 5033 + port->psy_desc.set_property = tcpm_psy_set_prop; 5034 + port->psy_desc.property_is_writeable = tcpm_psy_prop_writeable; 5182 5035 5183 5036 port->usb_type = POWER_SUPPLY_USB_TYPE_C; 5184 5037
+2 -1
drivers/usb/typec/tcpm/wcove.c
··· 356 356 357 357 static int wcove_pd_transmit(struct tcpc_dev *tcpc, 358 358 enum tcpm_transmit_type type, 359 - const struct pd_message *msg) 359 + const struct pd_message *msg, 360 + unsigned int negotiated_rev) 360 361 { 361 362 struct wcove_typec *wcove = tcpc_to_wcove(tcpc); 362 363 unsigned int info = 0;
+103
drivers/usb/typec/tps6598x.c
··· 9 9 #include <linux/i2c.h> 10 10 #include <linux/acpi.h> 11 11 #include <linux/module.h> 12 + #include <linux/power_supply.h> 12 13 #include <linux/regmap.h> 13 14 #include <linux/interrupt.h> 14 15 #include <linux/usb/typec.h> ··· 56 55 }; 57 56 58 57 /* TPS_REG_POWER_STATUS bits */ 58 + #define TPS_POWER_STATUS_CONNECTION BIT(0) 59 59 #define TPS_POWER_STATUS_SOURCESINK BIT(1) 60 60 #define TPS_POWER_STATUS_PWROPMODE(p) (((p) & GENMASK(3, 2)) >> 2) 61 61 ··· 98 96 struct typec_partner *partner; 99 97 struct usb_pd_identity partner_identity; 100 98 struct usb_role_switch *role_sw; 99 + struct typec_capability typec_cap; 100 + 101 + struct power_supply *psy; 102 + struct power_supply_desc psy_desc; 103 + enum power_supply_usb_type usb_type; 101 104 }; 105 + 106 + static enum power_supply_property tps6598x_psy_props[] = { 107 + POWER_SUPPLY_PROP_USB_TYPE, 108 + POWER_SUPPLY_PROP_ONLINE, 109 + }; 110 + 111 + static enum power_supply_usb_type tps6598x_psy_usb_types[] = { 112 + POWER_SUPPLY_USB_TYPE_C, 113 + POWER_SUPPLY_USB_TYPE_PD, 114 + }; 115 + 116 + static const char *tps6598x_psy_name_prefix = "tps6598x-source-psy-"; 102 117 103 118 /* 104 119 * Max data bytes for Data1, Data2, and other registers. 
See ch 1.3.2: ··· 267 248 if (desc.identity) 268 249 typec_partner_set_identity(tps->partner); 269 250 251 + power_supply_changed(tps->psy); 252 + 270 253 return 0; 271 254 } 272 255 ··· 281 260 typec_set_pwr_role(tps->port, TPS_STATUS_PORTROLE(status)); 282 261 typec_set_vconn_role(tps->port, TPS_STATUS_VCONN(status)); 283 262 tps6598x_set_data_role(tps, TPS_STATUS_DATAROLE(status), false); 263 + power_supply_changed(tps->psy); 284 264 } 285 265 286 266 static int tps6598x_exec_cmd(struct tps6598x *tps, const char *cmd, ··· 489 467 .max_register = 0x7F, 490 468 }; 491 469 470 + static int tps6598x_psy_get_online(struct tps6598x *tps, 471 + union power_supply_propval *val) 472 + { 473 + int ret; 474 + u16 pwr_status; 475 + 476 + ret = tps6598x_read16(tps, TPS_REG_POWER_STATUS, &pwr_status); 477 + if (ret < 0) 478 + return ret; 479 + 480 + if ((pwr_status & TPS_POWER_STATUS_CONNECTION) && 481 + (pwr_status & TPS_POWER_STATUS_SOURCESINK)) { 482 + val->intval = 1; 483 + } else { 484 + val->intval = 0; 485 + } 486 + return 0; 487 + } 488 + 489 + static int tps6598x_psy_get_prop(struct power_supply *psy, 490 + enum power_supply_property psp, 491 + union power_supply_propval *val) 492 + { 493 + struct tps6598x *tps = power_supply_get_drvdata(psy); 494 + u16 pwr_status; 495 + int ret = 0; 496 + 497 + switch (psp) { 498 + case POWER_SUPPLY_PROP_USB_TYPE: 499 + ret = tps6598x_read16(tps, TPS_REG_POWER_STATUS, &pwr_status); 500 + if (ret < 0) 501 + return ret; 502 + if (TPS_POWER_STATUS_PWROPMODE(pwr_status) == TYPEC_PWR_MODE_PD) 503 + val->intval = POWER_SUPPLY_USB_TYPE_PD; 504 + else 505 + val->intval = POWER_SUPPLY_USB_TYPE_C; 506 + break; 507 + case POWER_SUPPLY_PROP_ONLINE: 508 + ret = tps6598x_psy_get_online(tps, val); 509 + break; 510 + default: 511 + ret = -EINVAL; 512 + break; 513 + } 514 + 515 + return ret; 516 + } 517 + 518 + static int devm_tps6598_psy_register(struct tps6598x *tps) 519 + { 520 + struct power_supply_config psy_cfg = {}; 521 + const char 
*port_dev_name = dev_name(tps->dev); 522 + char *psy_name; 523 + 524 + psy_cfg.drv_data = tps; 525 + psy_cfg.fwnode = dev_fwnode(tps->dev); 526 + 527 + psy_name = devm_kasprintf(tps->dev, GFP_KERNEL, "%s%s", tps6598x_psy_name_prefix, 528 + port_dev_name); 529 + if (!psy_name) 530 + return -ENOMEM; 531 + 532 + tps->psy_desc.name = psy_name; 533 + tps->psy_desc.type = POWER_SUPPLY_TYPE_USB; 534 + tps->psy_desc.usb_types = tps6598x_psy_usb_types; 535 + tps->psy_desc.num_usb_types = ARRAY_SIZE(tps6598x_psy_usb_types); 536 + tps->psy_desc.properties = tps6598x_psy_props; 537 + tps->psy_desc.num_properties = ARRAY_SIZE(tps6598x_psy_props); 538 + tps->psy_desc.get_property = tps6598x_psy_get_prop; 539 + 540 + tps->usb_type = POWER_SUPPLY_USB_TYPE_C; 541 + 542 + tps->psy = devm_power_supply_register(tps->dev, &tps->psy_desc, 543 + &psy_cfg); 544 + return PTR_ERR_OR_ZERO(tps->psy); 545 + } 546 + 492 547 static int tps6598x_probe(struct i2c_client *client) 493 548 { 494 549 struct typec_capability typec_cap = { }; ··· 658 559 ret = -ENODEV; 659 560 goto err_role_put; 660 561 } 562 + 563 + ret = devm_tps6598_psy_register(tps); 564 + if (ret) 565 + return ret; 661 566 662 567 tps->port = typec_register_port(&client->dev, &typec_cap); 663 568 if (IS_ERR(tps->port)) {
+3 -3
drivers/usb/typec/ucsi/psy.c
··· 220 220 return -ENOMEM; 221 221 222 222 con->psy_desc.name = psy_name; 223 - con->psy_desc.type = POWER_SUPPLY_TYPE_USB, 223 + con->psy_desc.type = POWER_SUPPLY_TYPE_USB; 224 224 con->psy_desc.usb_types = ucsi_psy_usb_types; 225 225 con->psy_desc.num_usb_types = ARRAY_SIZE(ucsi_psy_usb_types); 226 - con->psy_desc.properties = ucsi_psy_props, 227 - con->psy_desc.num_properties = ARRAY_SIZE(ucsi_psy_props), 226 + con->psy_desc.properties = ucsi_psy_props; 227 + con->psy_desc.num_properties = ARRAY_SIZE(ucsi_psy_props); 228 228 con->psy_desc.get_property = ucsi_psy_get_prop; 229 229 230 230 con->psy = power_supply_register(dev, &con->psy_desc, &psy_cfg);
+105 -20
drivers/usb/typec/ucsi/ucsi.c
··· 53 53 ctrl = UCSI_ACK_CC_CI; 54 54 ctrl |= UCSI_ACK_CONNECTOR_CHANGE; 55 55 56 - return ucsi->ops->async_write(ucsi, UCSI_CONTROL, &ctrl, sizeof(ctrl)); 56 + return ucsi->ops->sync_write(ucsi, UCSI_CONTROL, &ctrl, sizeof(ctrl)); 57 57 58 58 static int ucsi_exec_command(struct ucsi *ucsi, u64 command); ··· 625 625 struct ucsi_connector *con = container_of(work, struct ucsi_connector, 626 626 work); 627 627 struct ucsi *ucsi = con->ucsi; 628 + struct ucsi_connector_status pre_ack_status; 629 + struct ucsi_connector_status post_ack_status; 628 630 enum typec_role role; 631 + u16 inferred_changes; 632 + u16 changed_flags; 629 633 u64 command; 630 634 int ret; 631 635 632 636 mutex_lock(&con->lock); 633 637 638 + /* 639 + * Some/many PPMs have an issue where all fields in the change bitfield 640 + * are cleared when an ACK is sent. This causes any change 641 + * between GET_CONNECTOR_STATUS and ACK to be lost. 642 + * 643 + * We work around this by re-fetching the connector status afterwards. 644 + * We then infer any changes that we see have happened but that may not 645 + * be represented in the change bitfield. 646 + * 647 + * Also, even though we don't need to know the currently supported alt 648 + * modes, we run the GET_CAM_SUPPORTED command to ensure the PPM does 649 + * not get stuck in case it assumes we do. 650 + * Always do this, rather than relying on UCSI_CONSTAT_CAM_CHANGE to be 651 + * set in the change bitfield. 652 + * 653 + * We end up with the following actions: 654 + * 1. UCSI_GET_CONNECTOR_STATUS, store result, update unprocessed_changes 655 + * 2. UCSI_GET_CAM_SUPPORTED, discard result 656 + * 3. ACK connector change 657 + * 4. UCSI_GET_CONNECTOR_STATUS, store result 658 + * 5. Infer lost changes by comparing UCSI_GET_CONNECTOR_STATUS results 659 + * 6. If PPM reported a new change, then restart in order to ACK 660 + * 7. Process everything as usual.
661 + * 662 + * We may end up seeing a change twice, but we can only miss extremely 663 + * short transitional changes. 664 + */ 665 + 666 + /* 1. First UCSI_GET_CONNECTOR_STATUS */ 634 667 command = UCSI_GET_CONNECTOR_STATUS | UCSI_CONNECTOR_NUMBER(con->num); 635 - ret = ucsi_send_command(ucsi, command, &con->status, 636 - sizeof(con->status)); 668 + ret = ucsi_send_command(ucsi, command, &pre_ack_status, 669 + sizeof(pre_ack_status)); 637 670 if (ret < 0) { 638 671 dev_err(ucsi->dev, "%s: GET_CONNECTOR_STATUS failed (%d)\n", 639 672 __func__, ret); 640 673 goto out_unlock; 641 674 } 675 + con->unprocessed_changes |= pre_ack_status.change; 676 + 677 + /* 2. Run UCSI_GET_CAM_SUPPORTED and discard the result. */ 678 + command = UCSI_GET_CAM_SUPPORTED; 679 + command |= UCSI_CONNECTOR_NUMBER(con->num); 680 + ucsi_send_command(con->ucsi, command, NULL, 0); 681 + 682 + /* 3. ACK connector change */ 683 + clear_bit(EVENT_PENDING, &ucsi->flags); 684 + ret = ucsi_acknowledge_connector_change(ucsi); 685 + if (ret) { 686 + dev_err(ucsi->dev, "%s: ACK failed (%d)", __func__, ret); 687 + goto out_unlock; 688 + } 689 + 690 + /* 4. Second UCSI_GET_CONNECTOR_STATUS */ 691 + command = UCSI_GET_CONNECTOR_STATUS | UCSI_CONNECTOR_NUMBER(con->num); 692 + ret = ucsi_send_command(ucsi, command, &post_ack_status, 693 + sizeof(post_ack_status)); 694 + if (ret < 0) { 695 + dev_err(ucsi->dev, "%s: GET_CONNECTOR_STATUS failed (%d)\n", 696 + __func__, ret); 697 + goto out_unlock; 698 + } 699 + 700 + /* 5. 
Infer any missing changes */ 701 + changed_flags = pre_ack_status.flags ^ post_ack_status.flags; 702 + inferred_changes = 0; 703 + if (UCSI_CONSTAT_PWR_OPMODE(changed_flags) != 0) 704 + inferred_changes |= UCSI_CONSTAT_POWER_OPMODE_CHANGE; 705 + 706 + if (changed_flags & UCSI_CONSTAT_CONNECTED) 707 + inferred_changes |= UCSI_CONSTAT_CONNECT_CHANGE; 708 + 709 + if (changed_flags & UCSI_CONSTAT_PWR_DIR) 710 + inferred_changes |= UCSI_CONSTAT_POWER_DIR_CHANGE; 711 + 712 + if (UCSI_CONSTAT_PARTNER_FLAGS(changed_flags) != 0) 713 + inferred_changes |= UCSI_CONSTAT_PARTNER_CHANGE; 714 + 715 + if (UCSI_CONSTAT_PARTNER_TYPE(changed_flags) != 0) 716 + inferred_changes |= UCSI_CONSTAT_PARTNER_CHANGE; 717 + 718 + /* Mask out anything that was correctly notified in the later call. */ 719 + inferred_changes &= ~post_ack_status.change; 720 + if (inferred_changes) 721 + dev_dbg(ucsi->dev, "%s: Inferred changes that would have been lost: 0x%04x\n", 722 + __func__, inferred_changes); 723 + 724 + con->unprocessed_changes |= inferred_changes; 725 + 726 + /* 6. If PPM reported a new change, then restart in order to ACK */ 727 + if (post_ack_status.change) 728 + goto out_unlock; 729 + 730 + /* 7. Continue as if nothing happened */ 731 + con->status = post_ack_status; 732 + con->status.change = con->unprocessed_changes; 733 + con->unprocessed_changes = 0; 642 734 643 735 role = !!(con->status.flags & UCSI_CONSTAT_PWR_DIR); 644 736
780 - */ 781 - command = UCSI_GET_CAM_SUPPORTED; 782 - command |= UCSI_CONNECTOR_NUMBER(con->num); 783 - ucsi_send_command(con->ucsi, command, NULL, 0); 784 - } 785 - 786 683 if (con->status.change & UCSI_CONSTAT_PARTNER_CHANGE) 787 684 ucsi_partner_change(con); 788 - 789 - ret = ucsi_acknowledge_connector_change(ucsi); 790 - if (ret) 791 - dev_err(ucsi->dev, "%s: ACK failed (%d)", __func__, ret); 792 685 793 686 trace_ucsi_connector_change(con->num, &con->status); 794 687 795 688 out_unlock: 796 - clear_bit(EVENT_PENDING, &ucsi->flags); 689 + if (test_and_clear_bit(EVENT_PENDING, &ucsi->flags)) { 690 + schedule_work(&con->work); 691 + mutex_unlock(&con->lock); 692 + return; 693 + } 694 + 695 + clear_bit(EVENT_PROCESSING, &ucsi->flags); 797 696 mutex_unlock(&con->lock); 798 697 } 799 698 ··· 802 719 return; 803 720 } 804 721 805 - if (!test_and_set_bit(EVENT_PENDING, &ucsi->flags)) 722 + set_bit(EVENT_PENDING, &ucsi->flags); 723 + 724 + if (!test_and_set_bit(EVENT_PROCESSING, &ucsi->flags)) 806 725 schedule_work(&con->work); 807 726 } 808 727 EXPORT_SYMBOL_GPL(ucsi_connector_change);
+2
drivers/usb/typec/ucsi/ucsi.h
··· 296 296 #define EVENT_PENDING 0 297 297 #define COMMAND_PENDING 1 298 298 #define ACK_PENDING 2 299 + #define EVENT_PROCESSING 3 299 300 }; 300 301 301 302 #define UCSI_MAX_SVID 5 ··· 323 322 324 323 struct typec_capability typec_cap; 325 324 325 + u16 unprocessed_changes; 326 326 struct ucsi_connector_status status; 327 327 struct ucsi_connector_capability cap; 328 328 struct power_supply *psy;
+3 -2
drivers/usb/typec/ucsi/ucsi_acpi.c
··· 103 103 if (ret) 104 104 return; 105 105 106 + if (UCSI_CCI_CONNECTOR(cci)) 107 + ucsi_connector_change(ua->ucsi, UCSI_CCI_CONNECTOR(cci)); 108 + 106 109 if (test_bit(COMMAND_PENDING, &ua->flags) && 107 110 cci & (UCSI_CCI_ACK_COMPLETE | UCSI_CCI_COMMAND_COMPLETE)) 108 111 complete(&ua->complete); 109 - else if (UCSI_CCI_CONNECTOR(cci)) 110 - ucsi_connector_change(ua->ucsi, UCSI_CCI_CONNECTOR(cci)); 111 112 } 112 113 113 114 static int ucsi_acpi_probe(struct platform_device *pdev)
-5
drivers/usb/usbip/usbip_common.c
··· 324 324 } while (msg_data_left(&msg)); 325 325 326 326 if (usbip_dbg_flag_xmit) { 327 - if (!in_interrupt()) 328 - pr_debug("%-10s:", current->comm); 329 - else 330 - pr_debug("interrupt :"); 331 - 332 327 pr_debug("receiving....\n"); 333 328 usbip_dump_buffer(buf, size); 334 329 pr_debug("received, osize %d ret %d size %zd total %d\n",
+8
include/dt-bindings/usb/pd.h
··· 85 85 PDO_PPS_APDO_MIN_VOLT(min_mv) | PDO_PPS_APDO_MAX_VOLT(max_mv) | \ 86 86 PDO_PPS_APDO_MAX_CURR(max_ma)) 87 87 88 + /* 89 + * Based on "Table 6-14 Fixed Supply PDO - Sink" of "USB Power Delivery Specification Revision 3.0, 90 + * Version 1.2" 91 + * Initial current capability of the new source when vSafe5V is applied. 92 + */ 93 + #define FRS_DEFAULT_POWER 1 94 + #define FRS_5V_1P5A 2 95 + #define FRS_5V_3A 3 88 96 #endif /* __DT_POWER_DELIVERY_H */
-14
include/linux/platform_data/usb-ehci-mxc.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef __INCLUDE_ASM_ARCH_MXC_EHCI_H 3 - #define __INCLUDE_ASM_ARCH_MXC_EHCI_H 4 - 5 - struct mxc_usbh_platform_data { 6 - int (*init)(struct platform_device *pdev); 7 - int (*exit)(struct platform_device *pdev); 8 - 9 - unsigned int portsc; 10 - struct usb_phy *otg; 11 - }; 12 - 13 - #endif /* __INCLUDE_ASM_ARCH_MXC_EHCI_H */ 14 -
+17 -1
include/linux/thunderbolt.h
··· 179 179 * @lock: Lock to serialize access to the following fields of this structure 180 180 * @vendor_name: Name of the vendor (or %NULL if not known) 181 181 * @device_name: Name of the device (or %NULL if not known) 182 + * @link_speed: Speed of the link in Gb/s 183 + * @link_width: Width of the link (1 or 2) 182 184 * @is_unplugged: The XDomain is unplugged 183 185 * @resume: The XDomain is being resumed 184 186 * @needs_uuid: If the XDomain does not have @remote_uuid it will be ··· 225 223 struct mutex lock; 226 224 const char *vendor_name; 227 225 const char *device_name; 226 + unsigned int link_speed; 227 + unsigned int link_width; 228 228 bool is_unplugged; 229 229 bool resume; 230 230 bool needs_uuid; ··· 247 243 u8 depth; 248 244 }; 249 245 246 + int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd); 247 + void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd); 250 248 int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16 transmit_path, 251 249 u16 transmit_ring, u16 receive_path, 252 250 u16 receive_ring); ··· 350 344 * @prtcvers: Protocol version from the properties directory 351 345 * @prtcrevs: Protocol software revision from the properties directory 352 346 * @prtcstns: Protocol settings mask from the properties directory 347 + * @debugfs_dir: Pointer to the service debugfs directory. Always created 348 + * when debugfs is enabled. Can be used by service drivers to 349 + * add their own entries under the service. 353 350 * 354 351 * Each domain exposes set of services it supports as collection of 355 352 * properties. For each service there will be one corresponding ··· 366 357 u32 prtcvers; 367 358 u32 prtcrevs; 368 359 u32 prtcstns; 360 + struct dentry *debugfs_dir; 369 361 }; 370 362 371 363 static inline struct tb_service *tb_service_get(struct tb_service *svc) ··· 481 471 * @irq: MSI-X irq number if the ring uses MSI-X. %0 otherwise. 
482 472 * @vector: MSI-X vector number the ring uses (only set if @irq is > 0) 483 473 * @flags: Ring specific flags 474 + * @e2e_tx_hop: Transmit HopID when E2E is enabled. Only applicable to 475 + * RX ring. For TX ring this should be set to %0. 484 476 * @sof_mask: Bit mask used to detect start of frame PDF 485 477 * @eof_mask: Bit mask used to detect end of frame PDF 486 478 * @start_poll: Called when ring interrupt is triggered to start ··· 506 494 int irq; 507 495 u8 vector; 508 496 unsigned int flags; 497 + int e2e_tx_hop; 509 498 u16 sof_mask; 510 499 u16 eof_mask; 511 500 void (*start_poll)(void *data); ··· 517 504 #define RING_FLAG_NO_SUSPEND BIT(0) 518 505 /* Configure the ring to be in frame mode */ 519 506 #define RING_FLAG_FRAME BIT(1) 507 + /* Enable end-to-end flow control */ 508 + #define RING_FLAG_E2E BIT(2) 520 509 521 510 struct ring_frame; 522 511 typedef void (*ring_cb)(struct tb_ring *, struct ring_frame *, bool canceled); ··· 567 552 struct tb_ring *tb_ring_alloc_tx(struct tb_nhi *nhi, int hop, int size, 568 553 unsigned int flags); 569 554 struct tb_ring *tb_ring_alloc_rx(struct tb_nhi *nhi, int hop, int size, 570 - unsigned int flags, u16 sof_mask, u16 eof_mask, 555 + unsigned int flags, int e2e_tx_hop, 556 + u16 sof_mask, u16 eof_mask, 571 557 void (*start_poll)(void *), void *poll_data); 572 558 void tb_ring_start(struct tb_ring *ring); 573 559 void tb_ring_stop(struct tb_ring *ring);
-4
include/linux/usb/hcd.h
··· 734 734 735 735 /* random stuff */ 736 736 737 - #define RUN_CONTEXT (in_irq() ? "in_irq" \ 738 - : (in_interrupt() ? "in_interrupt" : "can sleep")) 739 - 740 - 741 737 /* This rwsem is for use only by the hub driver and ehci-hcd. 742 738 * Nobody else should touch it. 743 739 */
+2
include/linux/usb/pd.h
··· 466 466 #define PD_T_DRP_SRC 30 467 467 #define PD_T_PS_SOURCE_OFF 920 468 468 #define PD_T_PS_SOURCE_ON 480 469 + #define PD_T_PS_SOURCE_ON_PRS 450 /* 390 - 480ms */ 469 470 #define PD_T_PS_HARD_RESET 30 470 471 #define PD_T_SRC_RECOVER 760 471 472 #define PD_T_SRC_RECOVER_MAX 1000 ··· 485 484 486 485 #define PD_T_CC_DEBOUNCE 200 /* 100 - 200 ms */ 487 486 #define PD_T_PD_DEBOUNCE 20 /* 10 - 20 ms */ 487 + #define PD_T_TRY_CC_DEBOUNCE 15 /* 10 - 20 ms */ 488 488 489 489 #define PD_N_CAPS_COUNT (PD_T_NO_RESPONSE / PD_T_SEND_SOURCE_CAP) 490 490 #define PD_N_HARD_RESET_COUNT 2
+15 -4
include/linux/usb/pd_vdo.h
··· 103 103 * -------------------- 104 104 * <31> :: data capable as a USB host 105 105 * <30> :: data capable as a USB device 106 - * <29:27> :: product type 106 + * <29:27> :: product type (UFP / Cable) 107 107 * <26> :: modal operation supported (1b == yes) 108 - * <25:16> :: Reserved, Shall be set to zero 108 + * <25:16> :: product type (DFP) 109 109 * <15:0> :: USB-IF assigned VID for this cable vendor 110 110 */ 111 111 #define IDH_PTYPE_UNDEF 0 112 112 #define IDH_PTYPE_HUB 1 113 113 #define IDH_PTYPE_PERIPH 2 114 + #define IDH_PTYPE_PSD 3 115 + #define IDH_PTYPE_AMA 5 116 + 114 117 #define IDH_PTYPE_PCABLE 3 115 118 #define IDH_PTYPE_ACABLE 4 116 - #define IDH_PTYPE_AMA 5 119 + 120 + #define IDH_PTYPE_DFP_UNDEF 0 121 + #define IDH_PTYPE_DFP_HUB 1 122 + #define IDH_PTYPE_DFP_HOST 2 123 + #define IDH_PTYPE_DFP_PB 3 124 + #define IDH_PTYPE_DFP_AMC 4 117 125 118 126 #define VDO_IDH(usbh, usbd, ptype, is_modal, vid) \ 119 127 ((usbh) << 31 | (usbd) << 30 | ((ptype) & 0x7) << 27 \ ··· 130 122 #define PD_IDH_PTYPE(vdo) (((vdo) >> 27) & 0x7) 131 123 #define PD_IDH_VID(vdo) ((vdo) & 0xffff) 132 124 #define PD_IDH_MODAL_SUPP(vdo) ((vdo) & (1 << 26)) 125 + #define PD_IDH_DFP_PTYPE(vdo) (((vdo) >> 23) & 0x7) 133 126 134 127 /* 135 128 * Cert Stat VDO ··· 186 177 * <31:28> :: Cable HW version 187 178 * <27:24> :: Cable FW version 188 179 * <23:20> :: Reserved, Shall be set to zero 189 - * <19:18> :: type-C to Type-A/B/C (00b == A, 01 == B, 10 == C) 180 + * <19:18> :: type-C to Type-A/B/C/Captive (00b == A, 01 == B, 10 == C, 11 == Captive) 190 181 * <17> :: Type-C to Plug/Receptacle (0b == plug, 1b == receptacle) 191 182 * <16:13> :: cable latency (0001 == <10ns(~1m length)) 192 183 * <12:11> :: cable termination type (11b == both ends active VCONN req) ··· 202 193 #define CABLE_ATYPE 0 203 194 #define CABLE_BTYPE 1 204 195 #define CABLE_CTYPE 2 196 + #define CABLE_CAPTIVE 3 205 197 #define CABLE_PLUG 0 206 198 #define CABLE_RECEPTACLE 1 207 199 #define CABLE_CURR_1A5 0 
··· 218 208 | (tx1d) << 10 | (tx2d) << 9 | (rx1d) << 8 | (rx2d) << 7 \ 219 209 | ((cur) & 0x3) << 5 | (vps) << 4 | (sopp) << 3 \ 220 210 | ((usbss) & 0x7)) 211 + #define VDO_TYPEC_CABLE_TYPE(vdo) (((vdo) >> 18) & 0x3) 221 212 222 213 /* 223 214 * AMA VDO
-2
include/linux/usb/serial.h
··· 62 62 * @bulk_out_endpointAddress: endpoint address for the bulk out pipe for this 63 63 * port. 64 64 * @flags: usb serial port flags 65 - * @write_wait: a wait_queue_head_t used by the port. 66 65 * @work: work queue entry for the line discipline waking up. 67 66 * @dev: pointer to the serial device 68 67 * ··· 107 108 int tx_bytes; 108 109 109 110 unsigned long flags; 110 - wait_queue_head_t write_wait; 111 111 struct work_struct work; 112 112 unsigned long sysrq; /* sysrq timeout */ 113 113 struct device dev;
+27 -1
include/linux/usb/tcpm.h
··· 83 83 * Optional; Called to enable/disable PD 3.0 fast role swap. 84 84 * Enabling frs is accessory dependent as not all PD3.0 85 85 * accessories support fast role swap. 86 + * @frs_sourcing_vbus: 87 + * Optional; Called to notify that vbus is now being sourced. 88 + * Low level drivers can perform chip specific operations, if any. 89 + * @enable_auto_vbus_discharge: 90 + * Optional; TCPCI spec based TCPC implementations can optionally 91 + * support hardware to autonomously discharge vbus upon disconnecting 92 + * as sink or source. TCPM signals TCPC to enable the mechanism upon 93 + * entering connected state and signals disabling upon disconnect. 94 + * @set_auto_vbus_discharge_threshold: 95 + * Mandatory when enable_auto_vbus_discharge is implemented. TCPM 96 + * calls this function to allow lower level drivers to program the 97 + * vbus threshold voltage below which the vbus discharge circuit 98 + * will be turned on. requested_vbus_voltage is set to 0 when vbus 99 + * is going to disappear knowingly i.e. during PR_SWAP and 100 + * HARD_RESET etc. 101 + * @is_vbus_vsafe0v: 102 + * Optional; TCPCI spec based TCPC implementations are expected to 103 + * detect VSAFE0V voltage level at vbus. When detection of VSAFE0V 104 + * is supported by TCPC, set this callback for TCPM to query 105 + * whether vbus is at VSAFE0V when needed. 106 + * Returns true when vbus is at VSAFE0V, false otherwise.
86 107 */ 87 108 struct tcpc_dev { 88 109 struct fwnode_handle *fwnode; ··· 127 106 enum typec_cc_status cc); 128 107 int (*try_role)(struct tcpc_dev *dev, int role); 129 108 int (*pd_transmit)(struct tcpc_dev *dev, enum tcpm_transmit_type type, 130 - const struct pd_message *msg); 109 + const struct pd_message *msg, unsigned int negotiated_rev); 131 110 int (*set_bist_data)(struct tcpc_dev *dev, bool on); 132 111 int (*enable_frs)(struct tcpc_dev *dev, bool enable); 112 + void (*frs_sourcing_vbus)(struct tcpc_dev *dev); 113 + int (*enable_auto_vbus_discharge)(struct tcpc_dev *dev, bool enable); 114 + int (*set_auto_vbus_discharge_threshold)(struct tcpc_dev *dev, enum typec_pwr_opmode mode, 115 + bool pps_active, u32 requested_vbus_voltage); 116 + bool (*is_vbus_vsafe0v)(struct tcpc_dev *dev); 133 117 }; 134 118 135 119 struct tcpm_port;
+2
include/linux/usb/typec.h
··· 126 126 enum typec_port_data roles; 127 127 }; 128 128 129 + int typec_partner_set_num_altmodes(struct typec_partner *partner, int num_altmodes); 129 130 struct typec_altmode 130 131 *typec_partner_register_altmode(struct typec_partner *partner, 131 132 const struct typec_altmode_desc *desc); 133 + int typec_plug_set_num_altmodes(struct typec_plug *plug, int num_altmodes); 132 134 struct typec_altmode 133 135 *typec_plug_register_altmode(struct typec_plug *plug, 134 136 const struct typec_altmode_desc *desc);
+5 -1
include/linux/usb/typec_tbt.h
··· 39 39 #define TBT_CABLE_USB3_GEN1 1 40 40 #define TBT_CABLE_USB3_PASSIVE 2 41 41 #define TBT_CABLE_10_AND_20GBPS 3 42 - #define TBT_CABLE_ROUNDED BIT(19) 42 + #define TBT_CABLE_ROUNDED_SUPPORT(_vdo_) \ 43 + (((_vdo_) & GENMASK(20, 19)) >> 19) 44 + #define TBT_GEN3_NON_ROUNDED 0 45 + #define TBT_GEN3_GEN4_ROUNDED_NON_ROUNDED 1 43 46 #define TBT_CABLE_OPTICAL BIT(21) 44 47 #define TBT_CABLE_RETIMER BIT(22) 45 48 #define TBT_CABLE_LINK_TRAINING BIT(23) 46 49 47 50 #define TBT_SET_CABLE_SPEED(_s_) (((_s_) & GENMASK(2, 0)) << 16) 51 + #define TBT_SET_CABLE_ROUNDED(_g_) (((_g_) & GENMASK(1, 0)) << 19) 48 52 49 53 /* TBT3 Device Enter Mode VDO bits */ 50 54 #define TBT_ENTER_MODE_CABLE_SPEED(s) TBT_SET_CABLE_SPEED(s)
+2
include/linux/usb_usual.h
··· 84 84 /* Cannot handle REPORT_LUNS */ \ 85 85 US_FLAG(ALWAYS_SYNC, 0x20000000) \ 86 86 /* lies about caching, so always sync */ \ 87 + US_FLAG(NO_SAME, 0x40000000) \ 88 + /* Cannot handle WRITE_SAME */ \ 87 89 88 90 #define US_FLAG(name, value) US_FL_##name = value , 89 91 enum { US_DO_ALL_FLAGS };