Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'usb-6.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB / Thunderbolt updates from Greg KH:
"Here is the big set of USB and Thunderbolt changes for 6.13-rc1.

Overall, a pretty slow development cycle, the majority of the work
going into the debugfs interface for the thunderbolt (i.e. USB4) code,
to help with debugging the myriad ways that hardware vendors get their
interfaces messed up. Other than that, here are the highlights:

- thunderbolt changes and additions to debugfs interfaces

- lots of device tree updates for new and old hardware

- UVC configfs gadget updates and new apis for features

- xhci driver updates and fixes

- dwc3 driver updates and fixes

- typec driver updates and fixes

- lots of other small updates and fixes, full details in the shortlog

All of these have been in linux-next for a while with no reported
problems"

* tag 'usb-6.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (148 commits)
usb: typec: tcpm: Add support for sink-bc12-completion-time-ms DT property
dt-bindings: usb: maxim,max33359: add usage of sink bc12 time property
dt-bindings: connector: Add time property for Sink BC12 detection completion
usb: dwc3: gadget: Remove dwc3_request->needs_extra_trb
usb: dwc3: gadget: Cleanup SG handling
usb: dwc3: gadget: Fix looping of queued SG entries
usb: dwc3: gadget: Fix checking for number of TRBs left
usb: dwc3: ep0: Don't clear ep0 DWC3_EP_TRANSFER_STARTED
Revert "usb: gadget: composite: fix OS descriptors w_value logic"
usb: ehci-spear: fix call balance of sehci clk handling routines
USB: make to_usb_device_driver() use container_of_const()
USB: make to_usb_driver() use container_of_const()
USB: properly lock dynamic id list when showing an id
USB: make single lock for all usb dynamic id lists
drivers/usb/storage: refactor min with min_t
drivers/usb/serial: refactor min with min_t
drivers/usb/musb: refactor min/max with min_t/max_t
drivers/usb/mon: refactor min with min_t
drivers/usb/misc: refactor min with min_t
drivers/usb/host: refactor min/max with min_t/max_t
...

+3507 -1354
+64
Documentation/ABI/testing/configfs-usb-gadget-uvc
···
342 342 support
343 343 ========================= =====================================
344 344
345 + What: /config/usb-gadget/gadget/functions/uvc.name/streaming/framebased
346 + Date: Sept 2024
347 + KernelVersion: 5.15
348 + Description: Framebased format descriptors
349 +
350 + What: /config/usb-gadget/gadget/functions/uvc.name/streaming/framebased/name
351 + Date: Sept 2024
352 + KernelVersion: 5.15
353 + Description: Specific framebased format descriptors
354 +
355 + ================== =======================================
356 + bFormatIndex unique id for this format descriptor;
357 + only defined after parent header is
358 + linked into the streaming class;
359 + read-only
360 + bmaControls this format's data for bmaControls in
361 + the streaming header
362 + bmInterlaceFlags specifies interlace information,
363 + read-only
364 + bAspectRatioY the Y dimension of the picture aspect
365 + ratio, read-only
366 + bAspectRatioX the X dimension of the picture aspect
367 + ratio, read-only
368 + bDefaultFrameIndex optimum frame index for this stream
369 + bBitsPerPixel number of bits per pixel used to
370 + specify color in the decoded video
371 + frame
372 + guidFormat globally unique id used to identify
373 + stream-encoding format
374 + ================== =======================================
375 +
376 + What: /config/usb-gadget/gadget/functions/uvc.name/streaming/framebased/name/name
377 + Date: Sept 2024
378 + KernelVersion: 5.15
379 + Description: Specific framebased frame descriptors
380 +
381 + ========================= =====================================
382 + bFrameIndex unique id for this frame descriptor;
383 + only defined after parent format is
384 + linked into the streaming header;
385 + read-only
386 + dwFrameInterval indicates how frame interval can be
387 + programmed; a number of values
388 + separated by newline can be specified
389 + dwDefaultFrameInterval the frame interval the device would
390 + like to use as default
391 + dwBytesPerLine Specifies the number of bytes per line
392 + of video for packed fixed frame size
393 + formats, allowing the receiver to
394 + perform stride alignment of the video.
395 + If the bVariableSize value (above) is
396 + TRUE (1), or if the format does not
397 + permit such alignment, this value shall
398 + be set to zero (0).
399 + dwMaxBitRate the maximum bit rate at the shortest
400 + frame interval in bps
401 + dwMinBitRate the minimum bit rate at the longest
402 + frame interval in bps
403 + wHeight height of decoded bitmap frame in px
404 + wWidth width of decoded bitmap frame in px
405 + bmCapabilities still image support, fixed frame-rate
406 + support
407 + ========================= =====================================
408 +
345 409 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/header
346 410 Date: Dec 2014
347 411 KernelVersion: 4.0
+27
Documentation/ABI/testing/sysfs-class-typec
···
149 149 advertise to the partner. The currently used capabilities are in
150 150 brackets. Selection happens by writing to the file.
151 151
152 + What: /sys/class/typec/<port>/usb_capability
153 + Date: November 2024
154 + Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
155 + Description: Lists the supported USB Modes. The default USB mode that is used
156 + next time with the Enter_USB Message is in brackets. The default
157 + mode can be changed by writing to the file when supported by the
158 + driver.
159 +
160 + Valid values:
161 + - usb2 (USB 2.0)
162 + - usb3 (USB 3.2)
163 + - usb4 (USB4)
164 +
152 165 USB Type-C partner devices (eg. /sys/class/typec/port0-partner/)
153 166
154 167 What: /sys/class/typec/<port>-partner/accessory_mode
···
232 219 communication for the port is mostly handled in firmware. If the
233 220 directory exists, it will have an attribute file for every VDO
234 221 in Discover Identity command result.
222 +
223 + What: /sys/class/typec/<port>-partner/usb_mode
224 + Date: November 2024
225 + Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
226 + Description: The USB Modes that the partner device supports. The active mode
227 + is displayed in brackets. The active USB mode can be changed by
228 + writing to this file when the port driver is able to send Data
229 + Reset Message to the partner. That requires USB Power Delivery
230 + contract between the partner and the port.
231 +
232 + Valid values:
233 + - usb2 (USB 2.0)
234 + - usb3 (USB 3.2)
235 + - usb4 (USB4)
235 236
236 237 USB Type-C cable devices (eg. /sys/class/typec/port0-cable/)
237 238
+45 -1
Documentation/devicetree/bindings/connector/usb-connector.yaml
···
253 253
254 254 additionalProperties: false
255 255
256 + sink-wait-cap-time-ms:
257 + description: Represents the max time in ms that USB Type-C port (in sink
258 + role) should wait for the port partner (source role) to send source caps.
259 + SinkWaitCap timer starts when port in sink role attaches to the source.
260 + This timer will stop when sink receives PD source cap advertisement before
261 + timeout in which case it'll move to capability negotiation stage. A
262 + timeout leads to a hard reset message by the port.
263 + minimum: 310
264 + maximum: 620
265 + default: 310
266 +
267 + ps-source-off-time-ms:
268 + description: Represents the max time in ms that a DRP in source role should
269 + take to turn off power after the PsSourceOff timer starts. PsSourceOff
270 + timer starts when a sink's PHY layer receives EOP of the GoodCRC message
271 + (corresponding to an Accept message sent in response to a PR_Swap or a
272 + FR_Swap request). This timer stops when last bit of GoodCRC EOP
273 + corresponding to the received PS_RDY message is transmitted by the PHY
274 + layer. A timeout shall lead to error recovery in the type-c port.
275 + minimum: 750
276 + maximum: 920
277 + default: 920
278 +
279 + cc-debounce-time-ms:
280 + description: Represents the max time in ms that a port shall wait to
281 + determine if it's attached to a partner.
282 + minimum: 100
283 + maximum: 200
284 + default: 200
285 +
286 + sink-bc12-completion-time-ms:
287 + description: Represents the max time in ms that a port in sink role takes
288 + to complete Battery Charger (BC1.2) Detection. BC1.2 detection is a
289 + hardware mechanism, which in some TCPC implementations, can run in
290 + parallel once the Type-C connection state machine reaches the "potential
291 + connect as sink" state. In TCPCs where this causes delays to respond to
292 + the incoming PD messages, sink-bc12-completion-time-ms is used to delay
293 + PD negotiation till BC1.2 detection completes.
294 + default: 0
295 +
256 296 dependencies:
257 297 sink-vdos-v1: [ sink-vdos ]
258 298 sink-vdos: [ sink-vdos-v1 ]
···
420 380 };
421 381
422 382 # USB-C connector attached to a typec port controller(ptn5110), which has
423 - # power delivery support and enables drp.
383 + # power delivery support, explicitly defines time properties and enables drp.
424 384 - |
425 385 #include <dt-bindings/usb/pd.h>
426 386 typec: ptn5110 {
···
433 393 sink-pdos = <PDO_FIXED(5000, 2000, PDO_FIXED_USB_COMM)
434 394 PDO_VAR(5000, 12000, 2000)>;
435 395 op-sink-microwatt = <10000000>;
396 + sink-wait-cap-time-ms = <465>;
397 + ps-source-off-time-ms = <835>;
398 + cc-debounce-time-ms = <101>;
399 + sink-bc12-completion-time-ms = <500>;
436 400 };
437 401 };
438 402
+37 -5
Documentation/devicetree/bindings/phy/fsl,imx8mq-usb-phy.yaml
···
11 11
12 12 properties:
13 13 compatible:
14 - enum:
15 - - fsl,imx8mq-usb-phy
16 - - fsl,imx8mp-usb-phy
14 + oneOf:
15 + - enum:
16 + - fsl,imx8mq-usb-phy
17 + - fsl,imx8mp-usb-phy
18 + - items:
19 + - const: fsl,imx95-usb-phy
20 + - const: fsl,imx8mp-usb-phy
17 21
18 22 reg:
19 - maxItems: 1
23 + minItems: 1
24 + maxItems: 2
20 25
21 26 "#phy-cells":
22 27 const: 0
···
94 89 - clocks
95 90 - clock-names
96 91
97 - additionalProperties: false
92 + allOf:
93 + - if:
94 + properties:
95 + compatible:
96 + contains:
97 + enum:
98 + - fsl,imx95-usb-phy
99 + then:
100 + properties:
101 + reg:
102 + items:
103 + - description: USB PHY Control range
104 + - description: USB PHY TCA Block range
105 + else:
106 + properties:
107 + reg:
108 + maxItems: 1
109 +
110 + - if:
111 + properties:
112 + compatible:
113 + contains:
114 + enum:
115 + - fsl,imx95-usb-phy
116 + then:
117 + $ref: /schemas/usb/usb-switch.yaml#
118 +
119 + unevaluatedProperties: false
98 120
99 121 examples:
100 122 - |
+2
Documentation/devicetree/bindings/phy/qcom,msm8998-qmp-usb3-phy.yaml
···
18 18 enum:
19 19 - qcom,msm8998-qmp-usb3-phy
20 20 - qcom,qcm2290-qmp-usb3-phy
21 + - qcom,qcs615-qmp-usb3-phy
21 22 - qcom,sdm660-qmp-usb3-phy
22 23 - qcom,sm6115-qmp-usb3-phy
23 24
···
97 96 contains:
98 97 enum:
99 98 - qcom,msm8998-qmp-usb3-phy
99 + - qcom,qcs615-qmp-usb3-phy
100 100 - qcom,sdm660-qmp-usb3-phy
101 101 then:
102 102 properties:
+1
Documentation/devicetree/bindings/phy/qcom,qusb2-phy.yaml
···
25 25 - qcom,msm8996-qusb2-phy
26 26 - qcom,msm8998-qusb2-phy
27 27 - qcom,qcm2290-qusb2-phy
28 + - qcom,qcs615-qusb2-phy
28 29 - qcom,sdm660-qusb2-phy
29 30 - qcom,sm4250-qusb2-phy
30 31 - qcom,sm6115-qusb2-phy
+1
Documentation/devicetree/bindings/usb/allwinner,sun4i-a10-musb.yaml
···
25 25 - allwinner,sun20i-d1-musb
26 26 - allwinner,sun50i-a100-musb
27 27 - allwinner,sun50i-h6-musb
28 + - allwinner,sun55i-a523-musb
28 29 - const: allwinner,sun8i-a33-musb
29 30 - items:
30 31 - const: allwinner,sun50i-h616-musb
+1 -4
Documentation/devicetree/bindings/usb/cypress,cypd4226.yaml
···
61 61
62 62 examples:
63 63 - |
64 - #include <dt-bindings/gpio/tegra194-gpio.h>
65 64 #include <dt-bindings/interrupt-controller/arm-gic.h>
66 65 i2c {
67 66 #address-cells = <1>;
68 67 #size-cells = <0>;
69 - #interrupt-cells = <2>;
70 68
71 69 typec@8 {
72 70 compatible = "cypress,cypd4226";
73 71 reg = <0x08>;
74 - interrupt-parent = <&gpio_aon>;
75 - interrupts = <TEGRA194_AON_GPIO(BB, 2) IRQ_TYPE_LEVEL_LOW>;
72 + interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
76 73 firmware-name = "nvidia,jetson-agx-xavier";
77 74 #address-cells = <1>;
78 75 #size-cells = <0>;
+5 -1
Documentation/devicetree/bindings/usb/fsl,imx8mp-dwc3.yaml
···
12 12
13 13 properties:
14 14 compatible:
15 - const: fsl,imx8mp-dwc3
15 + oneOf:
16 + - items:
17 + - const: fsl,imx95-dwc3
18 + - const: fsl,imx8mp-dwc3
19 + - const: fsl,imx8mp-dwc3
16 20
17 21 reg:
18 22 items:
+1
Documentation/devicetree/bindings/usb/generic-ehci.yaml
···
32 32 - allwinner,sun50i-a64-ehci
33 33 - allwinner,sun50i-h6-ehci
34 34 - allwinner,sun50i-h616-ehci
35 + - allwinner,sun55i-a523-ehci
35 36 - allwinner,sun5i-a13-ehci
36 37 - allwinner,sun6i-a31-ehci
37 38 - allwinner,sun7i-a20-ehci
+1
Documentation/devicetree/bindings/usb/generic-ohci.yaml
···
19 19 - allwinner,sun50i-a64-ohci
20 20 - allwinner,sun50i-h6-ohci
21 21 - allwinner,sun50i-h616-ohci
22 + - allwinner,sun55i-a523-ohci
22 23 - allwinner,sun5i-a13-ohci
23 24 - allwinner,sun6i-a31-ohci
24 25 - allwinner,sun7i-a20-ohci
+8 -1
Documentation/devicetree/bindings/usb/genesys,gl850g.yaml
···
62 62 peer-hub: true
63 63 vdd-supply: true
64 64
65 - additionalProperties: false
65 + patternProperties:
66 + "^.*@[0-9a-f]{1,2}$":
67 + description: The hard wired USB devices
68 + type: object
69 + $ref: /schemas/usb/usb-device.yaml
70 + additionalProperties: true
71 +
72 + unevaluatedProperties: false
66 73
67 74 examples:
68 75 - |
+1
Documentation/devicetree/bindings/usb/maxim,max33359.yaml
···
69 69 PDO_FIXED_DATA_SWAP |
70 70 PDO_FIXED_DUAL_ROLE)
71 71 PDO_FIXED(9000, 2000, 0)>;
72 + sink-bc12-completion-time-ms = <500>;
72 73 };
73 74 };
74 75 };
+5 -2
Documentation/devicetree/bindings/usb/microchip,mpfs-musb.yaml
···
14 14
15 15 properties:
16 16 compatible:
17 - enum:
18 - - microchip,mpfs-musb
17 + oneOf:
18 + - items:
19 + - const: microchip,pic64gx-musb
20 + - const: microchip,mpfs-musb
21 + - const: microchip,mpfs-musb
19 22
20 23 dr_mode: true
21 24
+2
Documentation/devicetree/bindings/usb/qcom,dwc3.yaml
···
29 29 - qcom,qcs8300-dwc3
30 30 - qcom,qdu1000-dwc3
31 31 - qcom,sa8775p-dwc3
32 + - qcom,sar2130p-dwc3
32 33 - qcom,sc7180-dwc3
33 34 - qcom,sc7280-dwc3
34 35 - qcom,sc8180x-dwc3
···
341 340 contains:
342 341 enum:
343 342 - qcom,qcm2290-dwc3
343 + - qcom,sar2130p-dwc3
344 344 - qcom,sc8180x-dwc3
345 345 - qcom,sc8180x-dwc3-mp
346 346 - qcom,sm6115-dwc3
+4
Documentation/devicetree/bindings/usb/renesas,usbhs.yaml
···
76 76 Integer to use BUSWAIT register.
77 77
78 78 renesas,enable-gpio:
79 + deprecated: true
80 + maxItems: 1
81 +
82 + renesas,enable-gpios:
79 83 maxItems: 1
80 84 description: |
81 85 gpio specifier to check GPIO determining if USB function should be
+5 -1
Documentation/devicetree/bindings/usb/rockchip,dwc3.yaml
···
27 27 enum:
28 28 - rockchip,rk3328-dwc3
29 29 - rockchip,rk3568-dwc3
30 + - rockchip,rk3576-dwc3
30 31 - rockchip,rk3588-dwc3
31 32 required:
32 33 - compatible
···
38 37 - enum:
39 38 - rockchip,rk3328-dwc3
40 39 - rockchip,rk3568-dwc3
40 + - rockchip,rk3576-dwc3
41 41 - rockchip,rk3588-dwc3
42 42 - const: snps,dwc3
43 43
···
115 113 properties:
116 114 compatible:
117 115 contains:
118 - const: rockchip,rk3568-dwc3
116 + enum:
117 + - rockchip,rk3568-dwc3
118 + - rockchip,rk3576-dwc3
119 119 then:
120 120 properties:
121 121 clocks:
+49
Documentation/devicetree/bindings/usb/ti,tusb1046.yaml
···
1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
2 + %YAML 1.2
3 + ---
4 + $id: http://devicetree.org/schemas/usb/ti,tusb1046.yaml#
5 + $schema: http://devicetree.org/meta-schemas/core.yaml#
6 +
7 + title: Texas Instruments TUSB1046-DCI Type-C crosspoint switch
8 +
9 + maintainers:
10 + - Romain Gantois <romain.gantois@bootlin.com>
11 +
12 + allOf:
13 + - $ref: usb-switch.yaml#
14 +
15 + properties:
16 + compatible:
17 + const: ti,tusb1046
18 +
19 + reg:
20 + maxItems: 1
21 +
22 + required:
23 + - compatible
24 + - reg
25 + - port
26 +
27 + unevaluatedProperties: false
28 +
29 + examples:
30 + - |
31 + i2c {
32 + #address-cells = <1>;
33 + #size-cells = <0>;
34 +
35 + typec-mux@44 {
36 + compatible = "ti,tusb1046";
37 + reg = <0x44>;
38 +
39 + mode-switch;
40 + orientation-switch;
41 +
42 + port {
43 + endpoint {
44 + remote-endpoint = <&typec_controller>;
45 + };
46 + };
47 + };
48 + };
49 + ...
+55
Documentation/devicetree/bindings/usb/ti,tusb73x0-pci.yaml
···
1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
2 + %YAML 1.2
3 + ---
4 + $id: http://devicetree.org/schemas/usb/ti,tusb73x0-pci.yaml#
5 + $schema: http://devicetree.org/meta-schemas/core.yaml#
6 +
7 + title: TUSB73x0 USB 3.0 xHCI Host Controller (PCIe)
8 +
9 + maintainers:
10 + - Francesco Dolcini <francesco.dolcini@toradex.com>
11 +
12 + description:
13 + TUSB73x0 USB 3.0 xHCI Host Controller via PCIe x1 Gen2 interface.
14 + The TUSB7320 supports up to two downstream ports, the TUSB7340 supports up
15 + to four downstream ports, both variants share the same PCI device ID.
16 +
17 + properties:
18 + compatible:
19 + const: pci104c,8241
20 +
21 + reg:
22 + maxItems: 1
23 +
24 + ti,pwron-active-high:
25 + $ref: /schemas/types.yaml#/definitions/flag
26 + description:
27 + Configure the polarity of the PWRONx# signals. When this is present, the
28 + PWRONx# pins are active high and their internal pull-down resistors are
29 + disabled. When this is absent, the PWRONx# pins are active low (default)
30 + and their internal pull-down resistors are enabled.
31 +
32 + required:
33 + - compatible
34 + - reg
35 +
36 + allOf:
37 + - $ref: usb-xhci.yaml
38 +
39 + additionalProperties: false
40 +
41 + examples:
42 + - |
43 + pcie@0 {
44 + reg = <0x0 0x1000>;
45 + ranges = <0x02000000 0x0 0x100000 0x10000000 0x0 0x0>;
46 + #address-cells = <3>;
47 + #size-cells = <2>;
48 + device_type = "pci";
49 +
50 + usb@0 {
51 + compatible = "pci104c,8241";
52 + reg = <0x0 0x0 0x0 0x0 0x0>;
53 + ti,pwron-active-high;
54 + };
55 + };
+7
MAINTAINERS
···
24352 24352 S: Orphan
24353 24353 F: drivers/usb/typec/tcpm/
24354 24354
24355 + USB TYPEC TUSB1046 MUX DRIVER
24356 + M: Romain Gantois <romain.gantois@bootlin.com>
24357 + L: linux-usb@vger.kernel.org
24358 + S: Maintained
24359 + F: Documentation/devicetree/bindings/usb/ti,tusb1046.yaml
24360 + F: drivers/usb/typec/mux/tusb1046.c
24361 +
24355 24362 USB UHCI DRIVER
24356 24363 M: Alan Stern <stern@rowland.harvard.edu>
24357 24364 L: linux-usb@vger.kernel.org
+2
drivers/phy/realtek/phy-rtk-usb2.c
···
1023 1023
1024 1024 rtk_phy->dev = &pdev->dev;
1025 1025 rtk_phy->phy_cfg = devm_kzalloc(dev, sizeof(*phy_cfg), GFP_KERNEL);
1026 + if (!rtk_phy->phy_cfg)
1027 + return -ENOMEM;
1026 1028
1027 1029 memcpy(rtk_phy->phy_cfg, phy_cfg, sizeof(*phy_cfg));
1028 1030
+2
drivers/phy/realtek/phy-rtk-usb3.c
···
577 577
578 578 rtk_phy->dev = &pdev->dev;
579 579 rtk_phy->phy_cfg = devm_kzalloc(dev, sizeof(*phy_cfg), GFP_KERNEL);
580 + if (!rtk_phy->phy_cfg)
581 + return -ENOMEM;
580 582
581 583 memcpy(rtk_phy->phy_cfg, phy_cfg, sizeof(*phy_cfg));
582 584
+367 -135
drivers/thunderbolt/debugfs.c
···
7 7 * Mika Westerberg <mika.westerberg@linux.intel.com>
8 8 */
9 9
10 + #include <linux/array_size.h>
10 11 #include <linux/bitfield.h>
11 12 #include <linux/debugfs.h>
12 13 #include <linux/delay.h>
···
43 42 #define MIN_DWELL_TIME 100 /* ms */
44 43 #define MAX_DWELL_TIME 500 /* ms */
45 44 #define DWELL_SAMPLE_INTERVAL 10
45 +
46 + enum usb4_margin_cap_voltage_indp {
47 + USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_2_3_MIN,
48 + USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_2_3_HL,
49 + USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_2_3_BOTH,
50 + USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_4_MIN,
51 + USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_4_BOTH,
52 + USB4_MARGIN_CAP_VOLTAGE_INDP_UNKNOWN,
53 + };
54 +
55 + enum usb4_margin_cap_time_indp {
56 + USB4_MARGIN_CAP_TIME_INDP_GEN_2_3_MIN,
57 + USB4_MARGIN_CAP_TIME_INDP_GEN_2_3_LR,
58 + USB4_MARGIN_CAP_TIME_INDP_GEN_2_3_BOTH,
59 + USB4_MARGIN_CAP_TIME_INDP_GEN_4_MIN,
60 + USB4_MARGIN_CAP_TIME_INDP_GEN_4_BOTH,
61 + USB4_MARGIN_CAP_TIME_INDP_UNKNOWN,
62 + };
46 63
47 64 /* Sideband registers and their sizes as defined in the USB4 spec */
48 65 struct sb_reg {
···
414 395 * @target: Sideband target
415 396 * @index: Retimer index if target is %USB4_SB_TARGET_RETIMER
416 397 * @dev: Pointer to the device that is the target (USB4 port or retimer)
398 + * @gen: Link generation
399 + * @asym_rx: %true if @port supports asymmetric link with 3 Rx
417 400 * @caps: Port lane margining capabilities
418 401 * @results: Last lane margining results
419 402 * @lanes: %0, %1 or %7 (all)
···
437 416 * @time: %true if time margining is used instead of voltage
438 417 * @right_high: %false if left/low margin test is performed, %true if
439 418 * right/high
419 + * @upper_eye: %false if the lower PAM3 eye is used, %true if the upper
420 + * eye is used
440 421 */
441 422 struct tb_margining {
442 423 struct tb_port *port;
443 424 enum usb4_sb_target target;
444 425 u8 index;
445 426 struct device *dev;
446 - u32 caps[2];
447 - u32 results[2];
448 - unsigned int lanes;
427 + unsigned int gen;
428 + bool asym_rx;
429 + u32 caps[3];
430 + u32 results[3];
431 + enum usb4_margining_lane lanes;
449 432 unsigned int min_ber_level;
450 433 unsigned int max_ber_level;
451 434 unsigned int ber_level;
···
466 441 bool software;
467 442 bool time;
468 443 bool right_high;
444 + bool upper_eye;
469 445 };
470 446
471 447 static int margining_modify_error_counter(struct tb_margining *margining,
···
489 463
490 464 static bool supports_software(const struct tb_margining *margining)
491 465 {
492 - return margining->caps[0] & USB4_MARGIN_CAP_0_MODES_SW;
466 + if (margining->gen < 4)
467 + return margining->caps[0] & USB4_MARGIN_CAP_0_MODES_SW;
468 + return margining->caps[2] & USB4_MARGIN_CAP_2_MODES_SW;
493 469 }
494 470
495 471 static bool supports_hardware(const struct tb_margining *margining)
496 472 {
497 - return margining->caps[0] & USB4_MARGIN_CAP_0_MODES_HW;
473 + if (margining->gen < 4)
474 + return margining->caps[0] & USB4_MARGIN_CAP_0_MODES_HW;
475 + return margining->caps[2] & USB4_MARGIN_CAP_2_MODES_HW;
498 476 }
499 477
500 - static bool both_lanes(const struct tb_margining *margining)
478 + static bool all_lanes(const struct tb_margining *margining)
501 479 {
502 - return margining->caps[0] & USB4_MARGIN_CAP_0_2_LANES;
480 + return margining->caps[0] & USB4_MARGIN_CAP_0_ALL_LANES;
503 481 }
504 482
505 - static unsigned int
483 + static enum usb4_margin_cap_voltage_indp
506 484 independent_voltage_margins(const struct tb_margining *margining)
507 485 {
508 - return FIELD_GET(USB4_MARGIN_CAP_0_VOLTAGE_INDP_MASK, margining->caps[0]);
486 + if (margining->gen < 4) {
487 + switch (FIELD_GET(USB4_MARGIN_CAP_0_VOLTAGE_INDP_MASK, margining->caps[0])) {
488 + case USB4_MARGIN_CAP_0_VOLTAGE_MIN:
489 + return USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_2_3_MIN;
490 + case USB4_MARGIN_CAP_0_VOLTAGE_HL:
491 + return USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_2_3_HL;
492 + case USB4_MARGIN_CAP_0_VOLTAGE_BOTH:
493 + return USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_2_3_BOTH;
494 + }
495 + } else {
496 + switch (FIELD_GET(USB4_MARGIN_CAP_2_VOLTAGE_INDP_MASK, margining->caps[2])) {
497 + case USB4_MARGIN_CAP_2_VOLTAGE_MIN:
498 + return USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_4_MIN;
499 + case USB4_MARGIN_CAP_2_VOLTAGE_BOTH:
500 + return USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_4_BOTH;
501 + }
502 + }
503 + return USB4_MARGIN_CAP_VOLTAGE_INDP_UNKNOWN;
509 504 }
510 505
511 506 static bool supports_time(const struct tb_margining *margining)
512 507 {
513 - return margining->caps[0] & USB4_MARGIN_CAP_0_TIME;
508 + if (margining->gen < 4)
509 + return margining->caps[0] & USB4_MARGIN_CAP_0_TIME;
510 + return margining->caps[2] & USB4_MARGIN_CAP_2_TIME;
514 511 }
515 512
516 513 /* Only applicable if supports_time() returns true */
517 - static unsigned int
514 + static enum usb4_margin_cap_time_indp
518 515 independent_time_margins(const struct tb_margining *margining)
519 516 {
520 - return FIELD_GET(USB4_MARGIN_CAP_1_TIME_INDP_MASK, margining->caps[1]);
517 + if (margining->gen < 4) {
518 + switch (FIELD_GET(USB4_MARGIN_CAP_1_TIME_INDP_MASK, margining->caps[1])) {
519 + case USB4_MARGIN_CAP_1_TIME_MIN:
520 + return USB4_MARGIN_CAP_TIME_INDP_GEN_2_3_MIN;
521 + case USB4_MARGIN_CAP_1_TIME_LR:
522 + return USB4_MARGIN_CAP_TIME_INDP_GEN_2_3_LR;
523 + case USB4_MARGIN_CAP_1_TIME_BOTH:
524 + return USB4_MARGIN_CAP_TIME_INDP_GEN_2_3_BOTH;
525 + }
526 + } else {
527 + switch (FIELD_GET(USB4_MARGIN_CAP_2_TIME_INDP_MASK, margining->caps[2])) {
528 + case USB4_MARGIN_CAP_2_TIME_MIN:
529 + return USB4_MARGIN_CAP_TIME_INDP_GEN_4_MIN;
530 + case USB4_MARGIN_CAP_2_TIME_BOTH:
531 + return USB4_MARGIN_CAP_TIME_INDP_GEN_4_BOTH;
532 + }
533 + }
534 + return USB4_MARGIN_CAP_TIME_INDP_UNKNOWN;
521 535 }
522 536
523 537 static bool
···
636 570 {
637 571 struct tb_margining *margining = s->private;
638 572 struct tb *tb = margining->port->sw->tb;
639 - u32 cap0, cap1;
573 + int ret = 0;
640 574
641 575 if (mutex_lock_interruptible(&tb->lock))
642 576 return -ERESTARTSYS;
643 577
644 578 /* Dump the raw caps first */
645 - cap0 = margining->caps[0];
646 - seq_printf(s, "0x%08x\n", cap0);
647 - cap1 = margining->caps[1];
648 - seq_printf(s, "0x%08x\n", cap1);
579 + for (int i = 0; i < ARRAY_SIZE(margining->caps); i++)
580 + seq_printf(s, "0x%08x\n", margining->caps[i]);
649 581
650 582 seq_printf(s, "# software margining: %s\n",
651 583 supports_software(margining) ? "yes" : "no");
···
657 593 seq_puts(s, "# hardware margining: no\n");
658 594 }
659 595
660 - seq_printf(s, "# both lanes simultaneously: %s\n",
661 - both_lanes(margining) ? "yes" : "no");
596 + seq_printf(s, "# all lanes simultaneously: %s\n",
597 + str_yes_no(all_lanes(margining)));
662 598 seq_printf(s, "# voltage margin steps: %u\n",
663 599 margining->voltage_steps);
664 600 seq_printf(s, "# maximum voltage offset: %u mV\n",
···
673 609 }
674 610
675 611 switch (independent_voltage_margins(margining)) {
676 - case USB4_MARGIN_CAP_0_VOLTAGE_MIN:
612 + case USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_2_3_MIN:
677 613 seq_puts(s, "# returns minimum between high and low voltage margins\n");
678 614 break;
679 - case USB4_MARGIN_CAP_0_VOLTAGE_HL:
615 + case USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_2_3_HL:
680 616 seq_puts(s, "# returns high or low voltage margin\n");
681 617 break;
682 - case USB4_MARGIN_CAP_0_VOLTAGE_BOTH:
618 + case USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_2_3_BOTH:
683 619 seq_puts(s, "# returns both high and low margins\n");
684 620 break;
621 + case USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_4_MIN:
622 + seq_puts(s, "# returns minimum between high and low voltage margins in both lower and upper eye\n");
623 + break;
624 + case USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_4_BOTH:
625 + seq_puts(s, "# returns both high and low margins of both upper and lower eye\n");
626 + break;
627 + case USB4_MARGIN_CAP_VOLTAGE_INDP_UNKNOWN:
628 + tb_port_warn(margining->port,
629 + "failed to parse independent voltage margining capabilities\n");
630 + ret = -EIO;
631 + goto out;
685 632 }
686 633
687 634 if (supports_time(margining)) {
688 635 seq_puts(s, "# time margining: yes\n");
689 636 seq_printf(s, "# time margining is destructive: %s\n",
690 - cap1 & USB4_MARGIN_CAP_1_TIME_DESTR ? "yes" : "no");
637 + str_yes_no(margining->caps[1] & USB4_MARGIN_CAP_1_TIME_DESTR));
691 638
692 639 switch (independent_time_margins(margining)) {
693 - case USB4_MARGIN_CAP_1_TIME_MIN:
640 + case USB4_MARGIN_CAP_TIME_INDP_GEN_2_3_MIN:
694 641 seq_puts(s, "# returns minimum between left and right time margins\n");
695 642 break;
696 - case USB4_MARGIN_CAP_1_TIME_LR:
643 + case USB4_MARGIN_CAP_TIME_INDP_GEN_2_3_LR:
697 644 seq_puts(s, "# returns left or right margin\n");
698 645 break;
699 - case USB4_MARGIN_CAP_1_TIME_BOTH:
646 + case USB4_MARGIN_CAP_TIME_INDP_GEN_2_3_BOTH:
700 647 seq_puts(s, "# returns both left and right margins\n");
701 648 break;
649 + case USB4_MARGIN_CAP_TIME_INDP_GEN_4_MIN:
650 + seq_puts(s, "# returns minimum between left and right time margins in both lower and upper eye\n");
651 + break;
652 + case USB4_MARGIN_CAP_TIME_INDP_GEN_4_BOTH:
653 + seq_puts(s, "# returns both left and right margins of both upper and lower eye\n");
654 + break;
655 + case USB4_MARGIN_CAP_TIME_INDP_UNKNOWN:
656 + tb_port_warn(margining->port,
657 + "failed to parse independent time margining capabilities\n");
658 + ret = -EIO;
659 + goto out;
702 660 }
703 661
704 662 seq_printf(s, "# time margin steps: %u\n",
···
731 645 seq_puts(s, "# time margining: no\n");
732 646 }
733 647
648 + out:
734 649 mutex_unlock(&tb->lock);
735 - return 0;
650 + return ret;
736 651 }
737 652 DEBUGFS_ATTR_RO(margining_caps);
653 +
654 + static const struct {
655 + enum usb4_margining_lane lane;
656 + const char *name;
657 + } lane_names[] = {
658 + {
659 + .lane = USB4_MARGINING_LANE_RX0,
660 + .name = "0",
661 + },
662 + {
663 + .lane = USB4_MARGINING_LANE_RX1,
664 + .name = "1",
665 + },
666 + {
667 + .lane = USB4_MARGINING_LANE_RX2,
668 + .name = "2",
669 + },
670 + {
671 + .lane = USB4_MARGINING_LANE_ALL,
672 + .name = "all",
673 + },
674 + };
738 675
739 676 static ssize_t
740 677 margining_lanes_write(struct file *file, const char __user *user_buf,
···
765 656 {
766 657 struct seq_file *s = file->private_data;
767 658 struct tb_margining *margining = s->private;
768 - struct tb *tb = margining->port->sw->tb;
769 - int ret = 0;
659 + struct tb_port *port = margining->port;
660 + struct tb *tb = port->sw->tb;
661 + int lane = -1;
770 662 char *buf;
771 663
772 664 buf = validate_and_copy_from_user(user_buf, &count);
···
776 666
777 667 buf[count - 1] = '\0';
778 668
779 - if (mutex_lock_interruptible(&tb->lock)) {
780 - ret = -ERESTARTSYS;
781 - goto out_free;
669 + for (int i = 0; i < ARRAY_SIZE(lane_names); i++) {
670 + if (!strcmp(buf, lane_names[i].name)) {
671 + lane = lane_names[i].lane;
672 + break;
673 + }
782 674 }
783 675
784 - if (!strcmp(buf, "0")) {
785 - margining->lanes = 0;
786 - } else if (!strcmp(buf, "1")) {
787 - margining->lanes = 1;
788 - } else if (!strcmp(buf, "all")) {
789 - /* Needs to be supported */
790 - if (both_lanes(margining))
791 - margining->lanes = 7;
792 - else
793 - ret = -EINVAL;
794 - } else {
795 - ret = -EINVAL;
796 - }
797 -
798 - mutex_unlock(&tb->lock);
799 -
800 - out_free:
801 676 free_page((unsigned long)buf);
802 - return ret < 0 ? ret : count;
677 +
678 + if (lane == -1)
679 + return -EINVAL;
680 +
681 + scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &tb->lock) {
682 + if (lane == USB4_MARGINING_LANE_ALL && !all_lanes(margining))
683 + return -EINVAL;
684 + /*
685 + * Enabling on RX2 requires that it is supported by the
686 + * USB4 port.
687 + */
688 + if (lane == USB4_MARGINING_LANE_RX2 && !margining->asym_rx)
689 + return -EINVAL;
690 +
691 + margining->lanes = lane;
692 + }
693 +
694 + return count;
803 695 }
804 696
805 697 static int margining_lanes_show(struct seq_file *s, void *not_used)
806 698 {
807 699 struct tb_margining *margining = s->private;
808 - struct tb *tb = margining->port->sw->tb;
809 - unsigned int lanes;
700 + struct tb_port *port = margining->port;
701 + struct tb *tb = port->sw->tb;
810 702
811 - if (mutex_lock_interruptible(&tb->lock))
812 - return -ERESTARTSYS;
703 + scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &tb->lock) {
704 + for (int i = 0; i < ARRAY_SIZE(lane_names); i++) {
705 + if (lane_names[i].lane == USB4_MARGINING_LANE_ALL &&
706 + !all_lanes(margining))
707 + continue;
708 + if (lane_names[i].lane == USB4_MARGINING_LANE_RX2 &&
709 + !margining->asym_rx)
710 + continue;
813 711
814 - lanes = margining->lanes;
815 - if (both_lanes(margining)) {
816 - if (!lanes)
817 - seq_puts(s, "[0] 1 all\n");
818 - else if (lanes == 1)
819 - seq_puts(s, "0 [1] all\n");
820 - else
821 - seq_puts(s, "0 1 [all]\n");
822 - } else {
823 - if (!lanes)
824 - seq_puts(s, "[0] 1\n");
825 - else
826 - seq_puts(s, "0 [1]\n");
712 + if (i != 0)
713 + seq_putc(s, ' ');
714 +
715 + if (lane_names[i].lane == margining->lanes)
716 + seq_printf(s, "[%s]", lane_names[i].name);
717 + else
718 + seq_printf(s, "%s", lane_names[i].name);
719 + }
720 + seq_puts(s, "\n");
827 721 }
828 722
829 - mutex_unlock(&tb->lock);
830 723 return 0;
831 724 }
832 725 DEBUGFS_ATTR_RW(margining_lanes);
···
1117 1004 if (ret)
1118 1005 break;
1119 1006
1120 - if (margining->lanes == USB4_MARGIN_SW_LANE_0)
1007 + if (margining->lanes == USB4_MARGINING_LANE_RX0)
1121 1008 errors = FIELD_GET(USB4_MARGIN_SW_ERR_COUNTER_LANE_0_MASK,
1122 1009 margining->results[1]);
1123 - else if (margining->lanes == USB4_MARGIN_SW_LANE_1)
1010 + else if (margining->lanes == USB4_MARGINING_LANE_RX1)
1124 1011 errors = FIELD_GET(USB4_MARGIN_SW_ERR_COUNTER_LANE_1_MASK,
1125 1012 margining->results[1]);
1126 - else if (margining->lanes == USB4_MARGIN_SW_ALL_LANES)
1013 + else if (margining->lanes == USB4_MARGINING_LANE_RX2)
1014 + errors = FIELD_GET(USB4_MARGIN_SW_ERR_COUNTER_LANE_2_MASK,
1015 + margining->results[1]);
1016 + else if (margining->lanes == USB4_MARGINING_LANE_ALL)
1127 1017 errors = margining->results[1];
1128 1018
1129 1019 /* Any errors stop the test */
···
1144 1028 margining_modify_error_counter(margining, margining->lanes,
1145 1029 USB4_MARGIN_SW_ERROR_COUNTER_STOP);
1146 1030 return ret;
1031 + }
1032 +
1033 + static int validate_margining(struct tb_margining *margining)
1034 + {
1035 + /*
1036 + * For running on RX2 the link must be asymmetric with 3
1037 + * receivers. Because this can change dynamically, check it
1038 + * here before we start the margining and report back error if
1039 + * expectations are not met.
1040 + */
1041 + if (margining->lanes == USB4_MARGINING_LANE_RX2) {
1042 + int ret;
1043 +
1044 + ret = tb_port_get_link_width(margining->port);
1045 + if (ret < 0)
1046 + return ret;
1047 + if (ret != TB_LINK_WIDTH_ASYM_RX) {
1048 + tb_port_warn(margining->port, "link is %s expected %s",
1049 + tb_width_name(ret),
1050 + tb_width_name(TB_LINK_WIDTH_ASYM_RX));
1051 + return -EINVAL;
1052 + }
1053 + }
1054 +
1055 + return 0;
1147 1056 }
1148 1057
1149 1058 static int margining_run_write(void *data, u64 val)
···
1190 1049 ret = -ERESTARTSYS;
1191 1050 goto out_rpm_put;
1192 1051 }
1052 +
1053 + ret = validate_margining(margining);
1054 + if (ret)
1055 + goto out_unlock;
1193 1056
1194 1057 if (tb_is_upstream_port(port))
1195 1058 down_sw = sw;
···
1225 1080 .time = margining->time,
1226 1081 .voltage_time_offset = margining->voltage_time_offset,
1227 1082 .right_high = margining->right_high,
1083 + .upper_eye = margining->upper_eye,
1228 1084 .optional_voltage_offset_range = margining->optional_voltage_offset_range,
1229 1085 };
1230 1086
···
1241 1095 .lanes = margining->lanes,
1242 1096 .time = margining->time,
1243 1097 .right_high = margining->right_high,
1098 + .upper_eye = margining->upper_eye,
1244 1099 .optional_voltage_offset_range = margining->optional_voltage_offset_range,
1245 1100 };
1246 1101
···
1251 1104 margining->lanes);
1252 1105
1253 1106 ret = usb4_port_hw_margin(port, margining->target, margining->index, &params,
1254 - margining->results);
1107 + margining->results, ARRAY_SIZE(margining->results));
1255 1108 }
1256 1109
1257 1110 if (down_sw)
···
1279 1132 return -ERESTARTSYS;
1280 1133
1281 1134 /* Just clear the results */
1282 - margining->results[0] = 0;
1283 - margining->results[1] = 0;
1135 + memset(margining->results, 0, sizeof(margining->results));
1284 1136
1285 1137 if (margining->software) {
1286 1138 /* Clear the error counters */
1287 1139 margining_modify_error_counter(margining,
1288 - USB4_MARGIN_SW_ALL_LANES,
1140 + USB4_MARGINING_LANE_ALL,
1289 1141 USB4_MARGIN_SW_ERROR_COUNTER_CLEAR);
···
1297 1151 {
1298 1152 unsigned int tmp, voltage;
1299 1153
1300 - tmp = FIELD_GET(USB4_MARGIN_HW_RES_1_MARGIN_MASK, val);
1154 + tmp = FIELD_GET(USB4_MARGIN_HW_RES_MARGIN_MASK, val);
1301 1155 voltage = tmp * margining->max_voltage_offset / margining->voltage_steps;
1302 1156 seq_printf(s, "%u mV (%u)", voltage, tmp);
1303 - if (val & USB4_MARGIN_HW_RES_1_EXCEEDS)
1157 + if (val & USB4_MARGIN_HW_RES_EXCEEDS)
1304 1158 seq_puts(s, " exceeds maximum");
1305 1159 seq_puts(s, "\n");
1306 1160 if (margining->optional_voltage_offset_range)
···
1312 1166 {
1313 1167 unsigned int tmp, interval;
1314 1168
1315 - tmp = FIELD_GET(USB4_MARGIN_HW_RES_1_MARGIN_MASK, val);
1169 + tmp = FIELD_GET(USB4_MARGIN_HW_RES_MARGIN_MASK, val);
1316 1170 interval = tmp * margining->max_time_offset / margining->time_steps;
1317 1171 seq_printf(s, "%u mUI (%u)", interval, tmp);
1318 - if (val & USB4_MARGIN_HW_RES_1_EXCEEDS)
seq_puts(s, " exceeds maximum"); 1320 1174 seq_puts(s, "\n"); 1175 + } 1176 + 1177 + static u8 margining_hw_result_val(const u32 *results, 1178 + enum usb4_margining_lane lane, 1179 + bool right_high) 1180 + { 1181 + u32 val; 1182 + 1183 + if (lane == USB4_MARGINING_LANE_RX0) 1184 + val = results[1]; 1185 + else if (lane == USB4_MARGINING_LANE_RX1) 1186 + val = results[1] >> USB4_MARGIN_HW_RES_LANE_SHIFT; 1187 + else if (lane == USB4_MARGINING_LANE_RX2) 1188 + val = results[2]; 1189 + else 1190 + val = 0; 1191 + 1192 + return right_high ? val : val >> USB4_MARGIN_HW_RES_LL_SHIFT; 1193 + } 1194 + 1195 + static void margining_hw_result_format(struct seq_file *s, 1196 + const struct tb_margining *margining, 1197 + enum usb4_margining_lane lane) 1198 + { 1199 + u8 val; 1200 + 1201 + if (margining->time) { 1202 + val = margining_hw_result_val(margining->results, lane, true); 1203 + seq_printf(s, "# lane %u right time margin: ", lane); 1204 + time_margin_show(s, margining, val); 1205 + val = margining_hw_result_val(margining->results, lane, false); 1206 + seq_printf(s, "# lane %u left time margin: ", lane); 1207 + time_margin_show(s, margining, val); 1208 + } else { 1209 + val = margining_hw_result_val(margining->results, lane, true); 1210 + seq_printf(s, "# lane %u high voltage margin: ", lane); 1211 + voltage_margin_show(s, margining, val); 1212 + val = margining_hw_result_val(margining->results, lane, false); 1213 + seq_printf(s, "# lane %u low voltage margin: ", lane); 1214 + voltage_margin_show(s, margining, val); 1215 + } 1321 1216 } 1322 1217 1323 1218 static int margining_results_show(struct seq_file *s, void *not_used) ··· 1373 1186 seq_printf(s, "0x%08x\n", margining->results[0]); 1374 1187 /* Only the hardware margining has two result dwords */ 1375 1188 if (!margining->software) { 1376 - unsigned int val; 1189 + for (int i = 1; i < ARRAY_SIZE(margining->results); i++) 1190 + seq_printf(s, "0x%08x\n", margining->results[i]); 1377 1191 1378 - seq_printf(s, 
"0x%08x\n", margining->results[1]); 1379 - 1380 - if (margining->time) { 1381 - if (!margining->lanes || margining->lanes == 7) { 1382 - val = margining->results[1]; 1383 - seq_puts(s, "# lane 0 right time margin: "); 1384 - time_margin_show(s, margining, val); 1385 - val = margining->results[1] >> 1386 - USB4_MARGIN_HW_RES_1_L0_LL_MARGIN_SHIFT; 1387 - seq_puts(s, "# lane 0 left time margin: "); 1388 - time_margin_show(s, margining, val); 1389 - } 1390 - if (margining->lanes == 1 || margining->lanes == 7) { 1391 - val = margining->results[1] >> 1392 - USB4_MARGIN_HW_RES_1_L1_RH_MARGIN_SHIFT; 1393 - seq_puts(s, "# lane 1 right time margin: "); 1394 - time_margin_show(s, margining, val); 1395 - val = margining->results[1] >> 1396 - USB4_MARGIN_HW_RES_1_L1_LL_MARGIN_SHIFT; 1397 - seq_puts(s, "# lane 1 left time margin: "); 1398 - time_margin_show(s, margining, val); 1399 - } 1192 + if (margining->lanes == USB4_MARGINING_LANE_ALL) { 1193 + margining_hw_result_format(s, margining, 1194 + USB4_MARGINING_LANE_RX0); 1195 + margining_hw_result_format(s, margining, 1196 + USB4_MARGINING_LANE_RX1); 1197 + if (margining->asym_rx) 1198 + margining_hw_result_format(s, margining, 1199 + USB4_MARGINING_LANE_RX2); 1400 1200 } else { 1401 - if (!margining->lanes || margining->lanes == 7) { 1402 - val = margining->results[1]; 1403 - seq_puts(s, "# lane 0 high voltage margin: "); 1404 - voltage_margin_show(s, margining, val); 1405 - val = margining->results[1] >> 1406 - USB4_MARGIN_HW_RES_1_L0_LL_MARGIN_SHIFT; 1407 - seq_puts(s, "# lane 0 low voltage margin: "); 1408 - voltage_margin_show(s, margining, val); 1409 - } 1410 - if (margining->lanes == 1 || margining->lanes == 7) { 1411 - val = margining->results[1] >> 1412 - USB4_MARGIN_HW_RES_1_L1_RH_MARGIN_SHIFT; 1413 - seq_puts(s, "# lane 1 high voltage margin: "); 1414 - voltage_margin_show(s, margining, val); 1415 - val = margining->results[1] >> 1416 - USB4_MARGIN_HW_RES_1_L1_LL_MARGIN_SHIFT; 1417 - seq_puts(s, "# lane 1 low voltage 
margin: "); 1418 - voltage_margin_show(s, margining, val); 1419 - } 1201 + margining_hw_result_format(s, margining, 1202 + margining->lanes); 1420 1203 } 1421 1204 } else { 1422 1205 u32 lane_errors, result; 1423 1206 1424 1207 seq_printf(s, "0x%08x\n", margining->results[1]); 1425 - result = FIELD_GET(USB4_MARGIN_SW_LANES_MASK, margining->results[0]); 1426 1208 1427 - if (result == USB4_MARGIN_SW_LANE_0 || 1428 - result == USB4_MARGIN_SW_ALL_LANES) { 1209 + result = FIELD_GET(USB4_MARGIN_SW_LANES_MASK, margining->results[0]); 1210 + if (result == USB4_MARGINING_LANE_RX0 || 1211 + result == USB4_MARGINING_LANE_ALL) { 1429 1212 lane_errors = FIELD_GET(USB4_MARGIN_SW_ERR_COUNTER_LANE_0_MASK, 1430 1213 margining->results[1]); 1431 1214 seq_printf(s, "# lane 0 errors: %u\n", lane_errors); 1432 1215 } 1433 - if (result == USB4_MARGIN_SW_LANE_1 || 1434 - result == USB4_MARGIN_SW_ALL_LANES) { 1216 + if (result == USB4_MARGINING_LANE_RX1 || 1217 + result == USB4_MARGINING_LANE_ALL) { 1435 1218 lane_errors = FIELD_GET(USB4_MARGIN_SW_ERR_COUNTER_LANE_1_MASK, 1436 1219 margining->results[1]); 1437 1220 seq_printf(s, "# lane 1 errors: %u\n", lane_errors); 1221 + } 1222 + if (margining->asym_rx && 1223 + (result == USB4_MARGINING_LANE_RX2 || 1224 + result == USB4_MARGINING_LANE_ALL)) { 1225 + lane_errors = FIELD_GET(USB4_MARGIN_SW_ERR_COUNTER_LANE_2_MASK, 1226 + margining->results[1]); 1227 + seq_printf(s, "# lane 2 errors: %u\n", lane_errors); 1438 1228 } 1439 1229 } 1440 1230 ··· 1546 1382 } 1547 1383 DEBUGFS_ATTR_RW(margining_margin); 1548 1384 1385 + static ssize_t margining_eye_write(struct file *file, 1386 + const char __user *user_buf, 1387 + size_t count, loff_t *ppos) 1388 + { 1389 + struct seq_file *s = file->private_data; 1390 + struct tb_port *port = s->private; 1391 + struct usb4_port *usb4 = port->usb4; 1392 + struct tb *tb = port->sw->tb; 1393 + int ret = 0; 1394 + char *buf; 1395 + 1396 + buf = validate_and_copy_from_user(user_buf, &count); 1397 + if 
(IS_ERR(buf)) 1398 + return PTR_ERR(buf); 1399 + 1400 + buf[count - 1] = '\0'; 1401 + 1402 + scoped_cond_guard(mutex_intr, ret = -ERESTARTSYS, &tb->lock) { 1403 + if (!strcmp(buf, "lower")) 1404 + usb4->margining->upper_eye = false; 1405 + else if (!strcmp(buf, "upper")) 1406 + usb4->margining->upper_eye = true; 1407 + else 1408 + ret = -EINVAL; 1409 + } 1410 + 1411 + free_page((unsigned long)buf); 1412 + return ret ? ret : count; 1413 + } 1414 + 1415 + static int margining_eye_show(struct seq_file *s, void *not_used) 1416 + { 1417 + struct tb_port *port = s->private; 1418 + struct usb4_port *usb4 = port->usb4; 1419 + struct tb *tb = port->sw->tb; 1420 + 1421 + scoped_guard(mutex_intr, &tb->lock) { 1422 + if (usb4->margining->upper_eye) 1423 + seq_puts(s, "lower [upper]\n"); 1424 + else 1425 + seq_puts(s, "[lower] upper\n"); 1426 + 1427 + return 0; 1428 + } 1429 + 1430 + return -ERESTARTSYS; 1431 + } 1432 + DEBUGFS_ATTR_RW(margining_eye); 1433 + 1549 1434 static struct tb_margining *margining_alloc(struct tb_port *port, 1550 1435 struct device *dev, 1551 1436 enum usb4_sb_target target, ··· 1605 1392 unsigned int val; 1606 1393 int ret; 1607 1394 1395 + ret = tb_port_get_link_generation(port); 1396 + if (ret < 0) { 1397 + tb_port_warn(port, "failed to read link generation\n"); 1398 + return NULL; 1399 + } 1400 + 1608 1401 margining = kzalloc(sizeof(*margining), GFP_KERNEL); 1609 1402 if (!margining) 1610 1403 return NULL; ··· 1619 1400 margining->target = target; 1620 1401 margining->index = index; 1621 1402 margining->dev = dev; 1403 + margining->gen = ret; 1404 + margining->asym_rx = tb_port_width_supported(port, TB_LINK_WIDTH_ASYM_RX); 1622 1405 1623 - ret = usb4_port_margining_caps(port, target, index, margining->caps); 1406 + ret = usb4_port_margining_caps(port, target, index, margining->caps, 1407 + ARRAY_SIZE(margining->caps)); 1624 1408 if (ret) { 1625 1409 kfree(margining); 1626 1410 return NULL; ··· 1633 1411 if (supports_software(margining)) 1634 1412 
margining->software = true; 1635 1413 1636 - val = FIELD_GET(USB4_MARGIN_CAP_0_VOLTAGE_STEPS_MASK, margining->caps[0]); 1637 - margining->voltage_steps = val; 1638 - val = FIELD_GET(USB4_MARGIN_CAP_0_MAX_VOLTAGE_OFFSET_MASK, margining->caps[0]); 1639 - margining->max_voltage_offset = 74 + val * 2; 1414 + if (margining->gen < 4) { 1415 + val = FIELD_GET(USB4_MARGIN_CAP_0_VOLTAGE_STEPS_MASK, margining->caps[0]); 1416 + margining->voltage_steps = val; 1417 + val = FIELD_GET(USB4_MARGIN_CAP_0_MAX_VOLTAGE_OFFSET_MASK, margining->caps[0]); 1418 + margining->max_voltage_offset = 74 + val * 2; 1419 + } else { 1420 + val = FIELD_GET(USB4_MARGIN_CAP_2_VOLTAGE_STEPS_MASK, margining->caps[2]); 1421 + margining->voltage_steps = val; 1422 + val = FIELD_GET(USB4_MARGIN_CAP_2_MAX_VOLTAGE_OFFSET_MASK, margining->caps[2]); 1423 + margining->max_voltage_offset = 74 + val * 2; 1424 + } 1640 1425 1641 1426 if (supports_optional_voltage_offset_range(margining)) { 1642 1427 val = FIELD_GET(USB4_MARGIN_CAP_0_VOLT_STEPS_OPT_MASK, ··· 1685 1456 debugfs_create_file("results", 0600, dir, margining, 1686 1457 &margining_results_fops); 1687 1458 debugfs_create_file("test", 0600, dir, margining, &margining_test_fops); 1688 - if (independent_voltage_margins(margining) == USB4_MARGIN_CAP_0_VOLTAGE_HL || 1459 + if (independent_voltage_margins(margining) == USB4_MARGIN_CAP_VOLTAGE_INDP_GEN_2_3_HL || 1689 1460 (supports_time(margining) && 1690 - independent_time_margins(margining) == USB4_MARGIN_CAP_1_TIME_LR)) 1691 - debugfs_create_file("margin", 0600, dir, margining, 1692 - &margining_margin_fops); 1461 + independent_time_margins(margining) == USB4_MARGIN_CAP_TIME_INDP_GEN_2_3_LR)) 1462 + debugfs_create_file("margin", 0600, dir, margining, &margining_margin_fops); 1693 1463 1694 1464 margining->error_counter = USB4_MARGIN_SW_ERROR_COUNTER_CLEAR; 1695 1465 margining->dwell_time = MIN_DWELL_TIME; ··· 1705 1477 debugfs_create_file("dwell_time", DEBUGFS_MODE, dir, margining, 1706 1478 
&margining_dwell_time_fops); 1707 1479 } 1480 + 1481 + if (margining->gen >= 4) 1482 + debugfs_create_file("eye", 0600, dir, port, &margining_eye_fops); 1483 + 1708 1484 return margining; 1709 1485 } 1710 1486
+6 -6
drivers/thunderbolt/nhi.c
···
 	if (res)
 		return dev_err_probe(dev, res, "cannot enable PCI device, aborting\n");
 
-	res = pcim_iomap_regions(pdev, 1 << 0, "thunderbolt");
-	if (res)
-		return dev_err_probe(dev, res, "cannot obtain PCI resources, aborting\n");
-
 	nhi = devm_kzalloc(&pdev->dev, sizeof(*nhi), GFP_KERNEL);
 	if (!nhi)
 		return -ENOMEM;
 
 	nhi->pdev = pdev;
 	nhi->ops = (const struct tb_nhi_ops *)id->driver_data;
-	/* cannot fail - table is allocated in pcim_iomap_regions */
-	nhi->iobase = pcim_iomap_table(pdev)[0];
+
+	nhi->iobase = pcim_iomap_region(pdev, 0, "thunderbolt");
+	res = PTR_ERR_OR_ZERO(nhi->iobase);
+	if (res)
+		return dev_err_probe(dev, res, "cannot obtain PCI resources, aborting\n");
+
 	nhi->hop_count = ioread32(nhi->iobase + REG_CAPS) & 0x3ff;
 	dev_dbg(dev, "total paths: %d\n", nhi->hop_count);
+21 -11
drivers/thunderbolt/sb_regs.h
···
 /* USB4_SB_OPCODE_READ_LANE_MARGINING_CAP */
 #define USB4_MARGIN_CAP_0_MODES_HW			BIT(0)
 #define USB4_MARGIN_CAP_0_MODES_SW			BIT(1)
-#define USB4_MARGIN_CAP_0_2_LANES			BIT(2)
+#define USB4_MARGIN_CAP_0_ALL_LANES			BIT(2)
 #define USB4_MARGIN_CAP_0_VOLTAGE_INDP_MASK		GENMASK(4, 3)
 #define USB4_MARGIN_CAP_0_VOLTAGE_MIN			0x0
 #define USB4_MARGIN_CAP_0_VOLTAGE_HL			0x1
···
 #define USB4_MARGIN_CAP_1_TIME_OFFSET_MASK		GENMASK(20, 16)
 #define USB4_MARGIN_CAP_1_MIN_BER_MASK			GENMASK(25, 21)
 #define USB4_MARGIN_CAP_1_MAX_BER_MASK			GENMASK(30, 26)
+#define USB4_MARGIN_CAP_2_MODES_HW			BIT(0)
+#define USB4_MARGIN_CAP_2_MODES_SW			BIT(1)
+#define USB4_MARGIN_CAP_2_TIME				BIT(2)
+#define USB4_MARGIN_CAP_2_MAX_VOLTAGE_OFFSET_MASK	GENMASK(8, 3)
+#define USB4_MARGIN_CAP_2_VOLTAGE_STEPS_MASK		GENMASK(15, 9)
+#define USB4_MARGIN_CAP_2_VOLTAGE_INDP_MASK		GENMASK(17, 16)
+#define USB4_MARGIN_CAP_2_VOLTAGE_MIN			0x0
+#define USB4_MARGIN_CAP_2_VOLTAGE_BOTH			0x1
+#define USB4_MARGIN_CAP_2_TIME_INDP_MASK		GENMASK(19, 18)
+#define USB4_MARGIN_CAP_2_TIME_MIN			0x0
+#define USB4_MARGIN_CAP_2_TIME_BOTH			0x1
 
 /* USB4_SB_OPCODE_RUN_HW_LANE_MARGINING */
 #define USB4_MARGIN_HW_TIME				BIT(3)
-#define USB4_MARGIN_HW_RH				BIT(4)
+#define USB4_MARGIN_HW_RHU				BIT(4)
 #define USB4_MARGIN_HW_BER_MASK				GENMASK(9, 5)
 #define USB4_MARGIN_HW_BER_SHIFT			5
 #define USB4_MARGIN_HW_OPT_VOLTAGE			BIT(10)
 
 /* Applicable to all margin values */
-#define USB4_MARGIN_HW_RES_1_MARGIN_MASK		GENMASK(6, 0)
-#define USB4_MARGIN_HW_RES_1_EXCEEDS			BIT(7)
-/* Different lane margin shifts */
-#define USB4_MARGIN_HW_RES_1_L0_LL_MARGIN_SHIFT		8
-#define USB4_MARGIN_HW_RES_1_L1_RH_MARGIN_SHIFT		16
-#define USB4_MARGIN_HW_RES_1_L1_LL_MARGIN_SHIFT		24
+#define USB4_MARGIN_HW_RES_MARGIN_MASK			GENMASK(6, 0)
+#define USB4_MARGIN_HW_RES_EXCEEDS			BIT(7)
+
+/* Shifts for parsing the lane results */
+#define USB4_MARGIN_HW_RES_LANE_SHIFT			16
+#define USB4_MARGIN_HW_RES_LL_SHIFT			8
 
 /* USB4_SB_OPCODE_RUN_SW_LANE_MARGINING */
 #define USB4_MARGIN_SW_LANES_MASK			GENMASK(2, 0)
-#define USB4_MARGIN_SW_LANE_0				0x0
-#define USB4_MARGIN_SW_LANE_1				0x1
-#define USB4_MARGIN_SW_ALL_LANES			0x7
 #define USB4_MARGIN_SW_TIME				BIT(3)
 #define USB4_MARGIN_SW_RH				BIT(4)
 #define USB4_MARGIN_SW_OPT_VOLTAGE			BIT(5)
 #define USB4_MARGIN_SW_VT_MASK				GENMASK(12, 6)
 #define USB4_MARGIN_SW_COUNTER_MASK			GENMASK(14, 13)
+#define USB4_MARGIN_SW_UPPER_EYE			BIT(15)
 
 #define USB4_MARGIN_SW_ERR_COUNTER_LANE_0_MASK		GENMASK(3, 0)
 #define USB4_MARGIN_SW_ERR_COUNTER_LANE_1_MASK		GENMASK(7, 4)
+#define USB4_MARGIN_SW_ERR_COUNTER_LANE_2_MASK		GENMASK(11, 8)
 
 #endif
+12 -4
drivers/thunderbolt/tb.h
···
 	USB4_MARGIN_SW_ERROR_COUNTER_STOP,
 };
 
+enum usb4_margining_lane {
+	USB4_MARGINING_LANE_RX0 = 0,
+	USB4_MARGINING_LANE_RX1 = 1,
+	USB4_MARGINING_LANE_RX2 = 2,
+	USB4_MARGINING_LANE_ALL = 7,
+};
+
 /**
  * struct usb4_port_margining_params - USB4 margining parameters
  * @error_counter: Error counter operation for software margining
  * @ber_level: Current BER level contour value
- * @lanes: %0, %1 or %7 (all)
+ * @lanes: Lanes to enable for the margining operation
  * @voltage_time_offset: Offset for voltage / time for software margining
  * @optional_voltage_offset_range: Enable optional extended voltage range
  * @right_high: %false if left/low margin test is performed, %true if right/high
···
 struct usb4_port_margining_params {
 	enum usb4_margin_sw_error_counter error_counter;
 	u32 ber_level;
-	u32 lanes;
+	enum usb4_margining_lane lanes;
 	u32 voltage_time_offset;
 	bool optional_voltage_offset_range;
 	bool right_high;
+	bool upper_eye;
 	bool time;
 };
 
 int usb4_port_margining_caps(struct tb_port *port, enum usb4_sb_target target,
-			     u8 index, u32 *caps);
+			     u8 index, u32 *caps, size_t ncaps);
 int usb4_port_hw_margin(struct tb_port *port, enum usb4_sb_target target,
 			u8 index, const struct usb4_port_margining_params *params,
-			u32 *results);
+			u32 *results, size_t nresults);
 int usb4_port_sw_margin(struct tb_port *port, enum usb4_sb_target target,
 			u8 index, const struct usb4_port_margining_params *params,
 			u32 *results);
+11 -7
drivers/thunderbolt/usb4.c
···
  * @target: Sideband target
  * @index: Retimer index if target is %USB4_SB_TARGET_RETIMER
  * @caps: Array with at least two elements to hold the results
+ * @ncaps: Number of elements in the caps array
  *
  * Reads the USB4 port lane margining capabilities into @caps.
  */
 int usb4_port_margining_caps(struct tb_port *port, enum usb4_sb_target target,
-			     u8 index, u32 *caps)
+			     u8 index, u32 *caps, size_t ncaps)
 {
 	int ret;
···
 		return ret;
 
 	return usb4_port_sb_read(port, target, index, USB4_SB_DATA, caps,
-				 sizeof(*caps) * 2);
+				 sizeof(*caps) * ncaps);
 }
 
 /**
···
  * @target: Sideband target
  * @index: Retimer index if target is %USB4_SB_TARGET_RETIMER
  * @params: Parameters for USB4 hardware margining
- * @results: Array with at least two elements to hold the results
+ * @results: Array to hold the results
+ * @nresults: Number of elements in the results array
  *
  * Runs hardware lane margining on USB4 port and returns the result in
  * @results.
  */
 int usb4_port_hw_margin(struct tb_port *port, enum usb4_sb_target target,
 			u8 index, const struct usb4_port_margining_params *params,
-			u32 *results)
+			u32 *results, size_t nresults)
 {
 	u32 val;
 	int ret;
···
 	val = params->lanes;
 	if (params->time)
 		val |= USB4_MARGIN_HW_TIME;
-	if (params->right_high)
-		val |= USB4_MARGIN_HW_RH;
+	if (params->right_high || params->upper_eye)
+		val |= USB4_MARGIN_HW_RHU;
 	if (params->ber_level)
 		val |= FIELD_PREP(USB4_MARGIN_HW_BER_MASK, params->ber_level);
 	if (params->optional_voltage_offset_range)
···
 		return ret;
 
 	return usb4_port_sb_read(port, target, index, USB4_SB_DATA, results,
-				 sizeof(*results) * 2);
+				 sizeof(*results) * nresults);
 }
 
 /**
···
 		val |= USB4_MARGIN_SW_OPT_VOLTAGE;
 	if (params->right_high)
 		val |= USB4_MARGIN_SW_RH;
+	if (params->upper_eye)
+		val |= USB4_MARGIN_SW_UPPER_EYE;
 	val |= FIELD_PREP(USB4_MARGIN_SW_COUNTER_MASK, params->error_counter);
 	val |= FIELD_PREP(USB4_MARGIN_SW_VT_MASK, params->voltage_time_offset);
+3 -3
drivers/usb/atm/ueagle-atm.c
···
 	if (l > len)
 		return 1;
 
-	/* zero is zero regardless endianes */
+	/* zero is zero regardless endianness */
 	} while (blockidx->NotLastBlock);
 }
···
 	    sc->stats.phy.dsrate == dsrate)
 		return;
 
-	/* Original timming (1Mbit/s) from ADI (used in windows driver) */
+	/* Original timing (1Mbit/s) from ADI (used in windows driver) */
 	timeout = (dsrate <= 1024*1024) ? 0 : 1;
 	ret = uea_request(sc, UEA_SET_TIMEOUT, timeout, 0, NULL);
 	uea_info(INS_TO_USBDEV(sc), "setting new timeout %d%s\n",
···
 	if (cmv->bDirection != E1_MODEMTOHOST)
 		goto bad1;
 
-	/* FIXME : ADI930 reply wrong preambule (func = 2, sub = 2) to
+	/* FIXME : ADI930 reply wrong preamble (func = 2, sub = 2) to
 	 * the first MEMACCESS cmv. Ignore it...
 	 */
 	if (cmv->bFunction != dsc->function) {
+1 -1
drivers/usb/atm/usbatm.c
···
 		if (i >= num_rcv_urbs)
 			list_add_tail(&urb->urb_list, &channel->list);
 
-		vdbg(&intf->dev, "%s: alloced buffer 0x%p buf size %u urb 0x%p",
+		vdbg(&intf->dev, "%s: allocated buffer 0x%p buf size %u urb 0x%p",
 		     __func__, urb->transfer_buffer, urb->transfer_buffer_length, urb);
 	}
+1 -1
drivers/usb/c67x00/c67x00-drv.c
···
 
 static struct platform_driver c67x00_driver = {
 	.probe	= c67x00_drv_probe,
-	.remove_new = c67x00_drv_remove,
+	.remove	= c67x00_drv_remove,
 	.driver	= {
 		.name = "c67x00",
 	},
+1 -1
drivers/usb/cdns3/cdns3-imx.c
···
 
 static struct platform_driver cdns_imx_driver = {
 	.probe = cdns_imx_probe,
-	.remove_new = cdns_imx_remove,
+	.remove = cdns_imx_remove,
 	.driver = {
 		.name = "cdns3-imx",
 		.of_match_table = cdns_imx_of_match,
+1 -3
drivers/usb/cdns3/cdns3-pci-wrap.c
···
 #define PCI_DRIVER_NAME		"cdns3-pci-usbss"
 #define PLAT_DRIVER_NAME	"cdns-usb3"
 
-#define PCI_DEVICE_ID_CDNS_USB3	0x0100
-
 static struct pci_dev *cdns3_get_second_fun(struct pci_dev *pdev)
 {
 	struct pci_dev *func;
···
 }
 
 static const struct pci_device_id cdns3_pci_ids[] = {
-	{ PCI_VDEVICE(CDNS, PCI_DEVICE_ID_CDNS_USB3) },
+	{ PCI_VDEVICE(CDNS, PCI_DEVICE_ID_CDNS_USBSS) },
 	{ 0, }
 };
+1 -1
drivers/usb/cdns3/cdns3-plat.c
···
 
 static struct platform_driver cdns3_driver = {
 	.probe = cdns3_plat_probe,
-	.remove_new = cdns3_plat_remove,
+	.remove = cdns3_plat_remove,
 	.driver = {
 		.name = "cdns-usb3",
 		.of_match_table = of_match_ptr(of_cdns3_match),
+1 -1
drivers/usb/cdns3/cdns3-starfive.c
···
 
 static struct platform_driver cdns_starfive_driver = {
 	.probe = cdns_starfive_probe,
-	.remove_new = cdns_starfive_remove,
+	.remove = cdns_starfive_remove,
 	.driver = {
 		.name = "cdns3-starfive",
 		.of_match_table = cdns_starfive_of_match,
+1 -1
drivers/usb/cdns3/cdns3-ti.c
···
 
 static struct platform_driver cdns_ti_driver = {
 	.probe = cdns_ti_probe,
-	.remove_new = cdns_ti_remove,
+	.remove = cdns_ti_remove,
 	.driver = {
 		.name = "cdns3-ti",
 		.of_match_table = cdns_ti_of_match,
+10 -16
drivers/usb/cdns3/cdnsp-pci.c
···
 #define PCI_DRIVER_NAME		"cdns-pci-usbssp"
 #define PLAT_DRIVER_NAME	"cdns-usbssp"
 
-#define PCI_DEVICE_ID_CDNS_USB3		0x0100
-#define PCI_DEVICE_ID_CDNS_UDC		0x0200
-
-#define PCI_CLASS_SERIAL_USB_CDNS_USB3	(PCI_CLASS_SERIAL_USB << 8 | 0x80)
-#define PCI_CLASS_SERIAL_USB_CDNS_UDC	PCI_CLASS_SERIAL_USB_DEVICE
-
 static struct pci_dev *cdnsp_get_second_fun(struct pci_dev *pdev)
 {
 	/*
···
 	 * Platform has two functions. The first keeps resources for
 	 * Host/Device while the second keeps resources for DRD/OTG.
 	 */
-	if (pdev->device == PCI_DEVICE_ID_CDNS_UDC)
-		return pci_get_device(pdev->vendor, PCI_DEVICE_ID_CDNS_USB3, NULL);
-	if (pdev->device == PCI_DEVICE_ID_CDNS_USB3)
-		return pci_get_device(pdev->vendor, PCI_DEVICE_ID_CDNS_UDC, NULL);
+	if (pdev->device == PCI_DEVICE_ID_CDNS_USBSSP)
+		return pci_get_device(pdev->vendor, PCI_DEVICE_ID_CDNS_USBSS, NULL);
+	if (pdev->device == PCI_DEVICE_ID_CDNS_USBSS)
+		return pci_get_device(pdev->vendor, PCI_DEVICE_ID_CDNS_USBSSP, NULL);
 
 	return NULL;
 }
···
 };
 
 static const struct pci_device_id cdnsp_pci_ids[] = {
-	{ PCI_DEVICE(PCI_VENDOR_ID_CDNS, PCI_DEVICE_ID_CDNS_UDC),
-	  .class = PCI_CLASS_SERIAL_USB_CDNS_UDC },
-	{ PCI_DEVICE(PCI_VENDOR_ID_CDNS, PCI_DEVICE_ID_CDNS_UDC),
-	  .class = PCI_CLASS_SERIAL_USB_CDNS_USB3 },
-	{ PCI_DEVICE(PCI_VENDOR_ID_CDNS, PCI_DEVICE_ID_CDNS_USB3),
-	  .class = PCI_CLASS_SERIAL_USB_CDNS_USB3 },
+	{ PCI_DEVICE(PCI_VENDOR_ID_CDNS, PCI_DEVICE_ID_CDNS_USBSSP),
+	  .class = PCI_CLASS_SERIAL_USB_DEVICE },
+	{ PCI_DEVICE(PCI_VENDOR_ID_CDNS, PCI_DEVICE_ID_CDNS_USBSSP),
+	  .class = PCI_CLASS_SERIAL_USB_CDNS },
+	{ PCI_DEVICE(PCI_VENDOR_ID_CDNS, PCI_DEVICE_ID_CDNS_USBSS),
+	  .class = PCI_CLASS_SERIAL_USB_CDNS },
 	{ 0, }
 };
+2
drivers/usb/chipidea/ci.h
···
 #define TD_PAGE_COUNT		5
 #define CI_HDRC_PAGE_SIZE	4096ul /* page size for TD's */
 #define ENDPT_MAX		32
+#define CI_MAX_REQ_SIZE		(4 * CI_HDRC_PAGE_SIZE)
 #define CI_MAX_BUF_SIZE		(TD_PAGE_COUNT * CI_HDRC_PAGE_SIZE)
 
 /******************************************************************************
···
 	bool b_sess_valid_event;
 	bool imx28_write_fix;
 	bool has_portsc_pec_bug;
+	bool has_short_pkt_limit;
 	bool supports_runtime_pm;
 	bool in_lpm;
 	bool wakeup_int;
+2 -1
drivers/usb/chipidea/ci_hdrc_imx.c
···
 	struct ci_hdrc_platform_data pdata = {
 		.name		= dev_name(&pdev->dev),
 		.capoffset	= DEF_CAPOFFSET,
+		.flags		= CI_HDRC_HAS_SHORT_PKT_LIMIT,
 		.notify_event	= ci_hdrc_imx_notify_event,
 	};
 	int ret;
···
 };
 static struct platform_driver ci_hdrc_imx_driver = {
 	.probe = ci_hdrc_imx_probe,
-	.remove_new = ci_hdrc_imx_remove,
+	.remove = ci_hdrc_imx_remove,
 	.shutdown = ci_hdrc_imx_shutdown,
 	.driver = {
 		.name = "imx_usb",
+1 -1
drivers/usb/chipidea/ci_hdrc_msm.c
···
 
 static struct platform_driver ci_hdrc_msm_driver = {
 	.probe = ci_hdrc_msm_probe,
-	.remove_new = ci_hdrc_msm_remove,
+	.remove = ci_hdrc_msm_remove,
 	.driver = {
 		.name = "msm_hsusb",
 		.of_match_table = msm_ci_dt_match,
+1 -1
drivers/usb/chipidea/ci_hdrc_npcm.c
···
 
 static struct platform_driver npcm_udc_driver = {
 	.probe = npcm_udc_probe,
-	.remove_new = npcm_udc_remove,
+	.remove = npcm_udc_remove,
 	.driver = {
 		.name = "npcm_udc",
 		.of_match_table = npcm_udc_dt_match,
+1 -1
drivers/usb/chipidea/ci_hdrc_tegra.c
···
 		.pm = pm_ptr(&tegra_usb_pm),
 	},
 	.probe = tegra_usb_probe,
-	.remove_new = tegra_usb_remove,
+	.remove = tegra_usb_remove,
 };
 module_platform_driver(tegra_usb_driver);
 
+1 -1
drivers/usb/chipidea/ci_hdrc_usb2.c
···
 
 static struct platform_driver ci_hdrc_usb2_driver = {
 	.probe	= ci_hdrc_usb2_probe,
-	.remove_new = ci_hdrc_usb2_remove,
+	.remove	= ci_hdrc_usb2_remove,
 	.driver	= {
 		.name	= "chipidea-usb2",
 		.of_match_table = ci_hdrc_usb2_of_match,
+4 -2
drivers/usb/chipidea/core.c
···
 	ext_id = ERR_PTR(-ENODEV);
 	ext_vbus = ERR_PTR(-ENODEV);
-	if (of_property_read_bool(dev->of_node, "extcon")) {
+	if (of_property_present(dev->of_node, "extcon")) {
 		/* Each one of them is not mandatory */
 		ext_vbus = extcon_get_edev_by_phandle(dev, 0);
 		if (IS_ERR(ext_vbus) && PTR_ERR(ext_vbus) != -ENODEV)
···
 	ci->supports_runtime_pm = !!(ci->platdata->flags &
 		CI_HDRC_SUPPORTS_RUNTIME_PM);
 	ci->has_portsc_pec_bug = !!(ci->platdata->flags &
 		CI_HDRC_HAS_PORTSC_PEC_MISSED);
+	ci->has_short_pkt_limit = !!(ci->platdata->flags &
+		CI_HDRC_HAS_SHORT_PKT_LIMIT);
 	platform_set_drvdata(pdev, ci);
 
 	ret = hw_device_init(ci, base);
···
 
 static struct platform_driver ci_hdrc_driver = {
 	.probe	= ci_hdrc_probe,
-	.remove_new = ci_hdrc_remove,
+	.remove	= ci_hdrc_remove,
 	.driver	= {
 		.name	= "ci_hdrc",
 		.pm	= &ci_pm_ops,
+172 -6
drivers/usb/chipidea/udc.c
··· 10 10 #include <linux/delay.h> 11 11 #include <linux/device.h> 12 12 #include <linux/dmapool.h> 13 + #include <linux/dma-direct.h> 13 14 #include <linux/err.h> 14 15 #include <linux/irqreturn.h> 15 16 #include <linux/kernel.h> ··· 541 540 return ret; 542 541 } 543 542 543 + /* 544 + * Verify if the scatterlist is valid by iterating each sg entry. 545 + * Return invalid sg entry index which is less than num_sgs. 546 + */ 547 + static int sglist_get_invalid_entry(struct device *dma_dev, u8 dir, 548 + struct usb_request *req) 549 + { 550 + int i; 551 + struct scatterlist *s = req->sg; 552 + 553 + if (req->num_sgs == 1) 554 + return 1; 555 + 556 + dir = dir ? DMA_TO_DEVICE : DMA_FROM_DEVICE; 557 + 558 + for (i = 0; i < req->num_sgs; i++, s = sg_next(s)) { 559 + /* Only small sg (generally last sg) may be bounced. If 560 + * that happens. we can't ensure the addr is page-aligned 561 + * after dma map. 562 + */ 563 + if (dma_kmalloc_needs_bounce(dma_dev, s->length, dir)) 564 + break; 565 + 566 + /* Make sure each sg start address (except first sg) is 567 + * page-aligned and end address (except last sg) is also 568 + * page-aligned. 
569 + */ 570 + if (i == 0) { 571 + if (!IS_ALIGNED(s->offset + s->length, 572 + CI_HDRC_PAGE_SIZE)) 573 + break; 574 + } else { 575 + if (s->offset) 576 + break; 577 + if (!sg_is_last(s) && !IS_ALIGNED(s->length, 578 + CI_HDRC_PAGE_SIZE)) 579 + break; 580 + } 581 + } 582 + 583 + return i; 584 + } 585 + 586 + static int sglist_do_bounce(struct ci_hw_req *hwreq, int index, 587 + bool copy, unsigned int *bounced) 588 + { 589 + void *buf; 590 + int i, ret, nents, num_sgs; 591 + unsigned int rest, rounded; 592 + struct scatterlist *sg, *src, *dst; 593 + 594 + nents = index + 1; 595 + ret = sg_alloc_table(&hwreq->sgt, nents, GFP_KERNEL); 596 + if (ret) 597 + return ret; 598 + 599 + sg = src = hwreq->req.sg; 600 + num_sgs = hwreq->req.num_sgs; 601 + rest = hwreq->req.length; 602 + dst = hwreq->sgt.sgl; 603 + 604 + for (i = 0; i < index; i++) { 605 + memcpy(dst, src, sizeof(*src)); 606 + rest -= src->length; 607 + src = sg_next(src); 608 + dst = sg_next(dst); 609 + } 610 + 611 + /* create one bounce buffer */ 612 + rounded = round_up(rest, CI_HDRC_PAGE_SIZE); 613 + buf = kmalloc(rounded, GFP_KERNEL); 614 + if (!buf) { 615 + sg_free_table(&hwreq->sgt); 616 + return -ENOMEM; 617 + } 618 + 619 + sg_set_buf(dst, buf, rounded); 620 + 621 + hwreq->req.sg = hwreq->sgt.sgl; 622 + hwreq->req.num_sgs = nents; 623 + hwreq->sgt.sgl = sg; 624 + hwreq->sgt.nents = num_sgs; 625 + 626 + if (copy) 627 + sg_copy_to_buffer(src, num_sgs - index, buf, rest); 628 + 629 + *bounced = rest; 630 + 631 + return 0; 632 + } 633 + 634 + static void sglist_do_debounce(struct ci_hw_req *hwreq, bool copy) 635 + { 636 + void *buf; 637 + int i, nents, num_sgs; 638 + struct scatterlist *sg, *src, *dst; 639 + 640 + sg = hwreq->req.sg; 641 + num_sgs = hwreq->req.num_sgs; 642 + src = sg_last(sg, num_sgs); 643 + buf = sg_virt(src); 644 + 645 + if (copy) { 646 + dst = hwreq->sgt.sgl; 647 + for (i = 0; i < num_sgs - 1; i++) 648 + dst = sg_next(dst); 649 + 650 + nents = hwreq->sgt.nents - num_sgs + 1; 651 + 
sg_copy_from_buffer(dst, nents, buf, sg_dma_len(src)); 652 + } 653 + 654 + hwreq->req.sg = hwreq->sgt.sgl; 655 + hwreq->req.num_sgs = hwreq->sgt.nents; 656 + hwreq->sgt.sgl = sg; 657 + hwreq->sgt.nents = num_sgs; 658 + 659 + kfree(buf); 660 + sg_free_table(&hwreq->sgt); 661 + } 662 + 544 663 /** 545 664 * _hardware_enqueue: configures a request at hardware level 546 665 * @hwep: endpoint ··· 673 552 struct ci_hdrc *ci = hwep->ci; 674 553 int ret = 0; 675 554 struct td_node *firstnode, *lastnode; 555 + unsigned int bounced_size; 556 + struct scatterlist *sg; 676 557 677 558 /* don't queue twice */ 678 559 if (hwreq->req.status == -EALREADY) ··· 682 559 683 560 hwreq->req.status = -EALREADY; 684 561 562 + if (hwreq->req.num_sgs && hwreq->req.length && 563 + ci->has_short_pkt_limit) { 564 + ret = sglist_get_invalid_entry(ci->dev->parent, hwep->dir, 565 + &hwreq->req); 566 + if (ret < hwreq->req.num_sgs) { 567 + ret = sglist_do_bounce(hwreq, ret, hwep->dir == TX, 568 + &bounced_size); 569 + if (ret) 570 + return ret; 571 + } 572 + } 573 + 685 574 ret = usb_gadget_map_request_by_dev(ci->dev->parent, 686 575 &hwreq->req, hwep->dir); 687 576 if (ret) 688 577 return ret; 578 + 579 + if (hwreq->sgt.sgl) { 580 + /* We've mapped a bigger buffer, now recover the actual size */ 581 + sg = sg_last(hwreq->req.sg, hwreq->req.num_sgs); 582 + sg_dma_len(sg) = min(sg_dma_len(sg), bounced_size); 583 + } 689 584 690 585 if (hwreq->req.num_mapped_sgs) 691 586 ret = prepare_td_for_sg(hwep, hwreq); ··· 753 612 do { 754 613 hw_write(ci, OP_USBCMD, USBCMD_ATDTW, USBCMD_ATDTW); 755 614 tmp_stat = hw_read(ci, OP_ENDPTSTAT, BIT(n)); 756 - } while (!hw_read(ci, OP_USBCMD, USBCMD_ATDTW)); 615 + } while (!hw_read(ci, OP_USBCMD, USBCMD_ATDTW) && tmp_stat); 757 616 hw_write(ci, OP_USBCMD, USBCMD_ATDTW, 0); 758 617 if (tmp_stat) 618 + goto done; 619 + 620 + /* OP_ENDPTSTAT will be cleared by HW when the endpoint meets 621 + * an error. 
The dTD is not pushed to the dQH if the current dTD pointer is 622 + * not the last one in the previous request. 623 + */ 624 + if (hwep->qh.ptr->curr != cpu_to_le32(prevlastnode->dma)) 759 625 goto done; 760 626 } 761 627 ··· 824 676 unsigned remaining_length; 825 677 unsigned actual = hwreq->req.length; 826 678 struct ci_hdrc *ci = hwep->ci; 679 + bool is_isoc = hwep->type == USB_ENDPOINT_XFER_ISOC; 827 680 828 681 if (hwreq->req.status != -EALREADY) 829 682 return -EINVAL; ··· 838 689 int n = hw_ep_bit(hwep->num, hwep->dir); 839 690 840 691 if (ci->rev == CI_REVISION_24 || 841 - ci->rev == CI_REVISION_22) 692 + ci->rev == CI_REVISION_22 || is_isoc) 842 693 if (!hw_read(ci, OP_ENDPTSTAT, BIT(n))) 843 694 reprime_dtd(ci, hwep, node); 844 695 hwreq->req.status = -EALREADY; ··· 857 708 hwreq->req.status = -EPROTO; 858 709 break; 859 710 } else if ((TD_STATUS_TR_ERR & hwreq->req.status)) { 860 - hwreq->req.status = -EILSEQ; 861 - break; 711 + if (is_isoc) { 712 + hwreq->req.status = 0; 713 + } else { 714 + hwreq->req.status = -EILSEQ; 715 + break; 716 + } 862 717 } 863 718 864 - if (remaining_length) { 719 + if (remaining_length && !is_isoc) { 865 720 if (hwep->dir == TX) { 866 721 hwreq->req.status = -EPROTO; 867 722 break; ··· 885 732 886 733 usb_gadget_unmap_request_by_dev(hwep->ci->dev->parent, 887 734 &hwreq->req, hwep->dir); 735 + 736 + /* sglist bounced */ 737 + if (hwreq->sgt.sgl) 738 + sglist_do_debounce(hwreq, hwep->dir == RX); 888 739 889 740 hwreq->req.actual += actual; 890 741 ··· 1114 957 if (usb_endpoint_xfer_isoc(hwep->ep.desc) && 1115 958 hwreq->req.length > hwep->ep.mult * hwep->ep.maxpacket) { 1116 959 dev_err(hwep->ci->dev, "request length too big for isochronous\n"); 960 + return -EMSGSIZE; 961 + } 962 + 963 + if (ci->has_short_pkt_limit && 964 + hwreq->req.length > CI_MAX_REQ_SIZE) { 965 + dev_err(hwep->ci->dev, "request length too big (max 16KB)\n"); 1117 966 return -EMSGSIZE; 1118 967 } 1119 968 ··· 1737 1574 1738 1575 
usb_gadget_unmap_request(&hwep->ci->gadget, req, hwep->dir); 1739 1576 1577 + if (hwreq->sgt.sgl) 1578 + sglist_do_debounce(hwreq, false); 1579 + 1740 1580 req->status = -ECONNRESET; 1741 1581 1742 1582 if (hwreq->req.complete != NULL) { ··· 2229 2063 } 2230 2064 } 2231 2065 2232 - if (USBi_UI & intr) 2066 + if ((USBi_UI | USBi_UEI) & intr) 2233 2067 isr_tr_complete_handler(ci); 2234 2068 2235 2069 if ((USBi_SLI & intr) && !(ci->suspended)) {
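The chipidea bounce path added above hinges on one rule: on a short-packet-limited controller, every sg boundary that falls inside the request must sit on a page boundary, otherwise the tail of the list is copied into one contiguous bounce buffer. A minimal userspace sketch of that validity check, with an illustrative flat segment array instead of a real scatterlist (the `dma_kmalloc_needs_bounce()` check is omitted here):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for one scatterlist entry. */
struct seg {
	size_t offset;	/* start offset within its page(s) */
	size_t length;
};

/*
 * Mirror of the alignment rule in sglist_get_invalid_entry(): return the
 * index of the first entry that breaks the rule, or nsegs if the whole
 * list can be used as-is (a single entry is always acceptable).
 */
static size_t first_invalid_seg(const struct seg *s, size_t nsegs,
				size_t page_size)
{
	size_t i;

	if (nsegs == 1)
		return 1;

	for (i = 0; i < nsegs; i++) {
		if (i == 0) {
			/* first entry must end on a page boundary */
			if ((s[0].offset + s[0].length) % page_size)
				break;
		} else {
			/* later entries must start at offset 0 ... */
			if (s[i].offset)
				break;
			/* ... and all but the last must end page-aligned */
			if (i != nsegs - 1 && s[i].length % page_size)
				break;
		}
	}
	return i;
}
```

Entries from the returned index onward would then be collapsed into a single kmalloc'd bounce buffer, as `sglist_do_bounce()` does in the patch.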
+2
drivers/usb/chipidea/udc.h
··· 69 69 * @req: request structure for gadget drivers 70 70 * @queue: link to QH list 71 71 * @tds: link to TD list 72 + * @sgt: holds the original sglist while the request's sglist is bounced 72 73 */ 73 74 struct ci_hw_req { 74 75 struct usb_request req; 75 76 struct list_head queue; 76 77 struct list_head tds; 78 + struct sg_table sgt; 77 79 }; 78 80 79 81 #ifdef CONFIG_USB_CHIPIDEA_UDC
+4
drivers/usb/chipidea/usbmisc_imx.c
··· 1285 1285 .compatible = "fsl,imx7ulp-usbmisc", 1286 1286 .data = &imx7ulp_usbmisc_ops, 1287 1287 }, 1288 + { 1289 + .compatible = "fsl,imx8ulp-usbmisc", 1290 + .data = &imx7ulp_usbmisc_ops, 1291 + }, 1288 1292 { /* sentinel */ } 1289 1293 }; 1290 1294 MODULE_DEVICE_TABLE(of, usbmisc_imx_dt_ids);
+3
drivers/usb/common/common.c
··· 415 415 struct dentry *usb_debug_root; 416 416 EXPORT_SYMBOL_GPL(usb_debug_root); 417 417 418 + DEFINE_MUTEX(usb_dynids_lock); 419 + EXPORT_SYMBOL_GPL(usb_dynids_lock); 420 + 418 421 static int __init usb_common_init(void) 419 422 { 420 423 usb_debug_root = debugfs_create_dir("usb", NULL);
+1 -1
drivers/usb/common/usb-conn-gpio.c
··· 340 340 341 341 static struct platform_driver usb_conn_driver = { 342 342 .probe = usb_conn_probe, 343 - .remove_new = usb_conn_remove, 343 + .remove = usb_conn_remove, 344 344 .driver = { 345 345 .name = "usb-conn-gpio", 346 346 .pm = &usb_conn_pm_ops,
+1 -1
drivers/usb/core/config.c
··· 924 924 result = -EINVAL; 925 925 goto err; 926 926 } 927 - length = max((int) le16_to_cpu(desc->wTotalLength), 927 + length = max_t(int, le16_to_cpu(desc->wTotalLength), 928 928 USB_DT_CONFIG_SIZE); 929 929 930 930 /* Now that we know the length, get the whole thing */
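The `max()` → `max_t()` conversion above (like the min/min_t refactors in the shortlog) forces both operands to one explicit type before comparing; otherwise C's usual arithmetic conversions can silently turn a signed value into a huge unsigned one. A userspace sketch of the pitfall, using a simplified `max_t`-style macro rather than the kernel's type-checked implementation:

```c
#include <assert.h>

/* Simplified max_t(): cast both sides to the named type first. */
#define max_t(type, a, b) \
	((type)(a) > (type)(b) ? (type)(a) : (type)(b))

/*
 * With a plain mixed-signedness comparison, -1 is converted to
 * UINT_MAX, so it compares as *larger* than any small unsigned value.
 */
static int naive_compare(int a, unsigned int b)
{
	return a > b;	/* naive_compare(-1, 9) evaluates to 1 */
}
```

Casting via `max_t(int, ...)` keeps the descriptor's `wTotalLength` comparison in plain signed arithmetic, which is what the open-coded `(int)` cast in the old line was trying to do.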
+4 -1
drivers/usb/core/devio.c
··· 238 238 dma_addr_t dma_handle = DMA_MAPPING_ERROR; 239 239 int ret; 240 240 241 + if (!(file->f_mode & FMODE_WRITE)) 242 + return -EPERM; 243 + 241 244 ret = usbfs_increase_memory_usage(size + sizeof(struct usb_memory)); 242 245 if (ret) 243 246 goto error; ··· 1298 1295 return ret; 1299 1296 1300 1297 len1 = bulk->len; 1301 - if (len1 < 0 || len1 >= (INT_MAX - sizeof(struct urb))) 1298 + if (len1 >= (INT_MAX - sizeof(struct urb))) 1302 1299 return -EINVAL; 1303 1300 1304 1301 if (bulk->ep & USB_DIR_IN)
+10 -14
drivers/usb/core/driver.c
··· 95 95 } 96 96 } 97 97 98 - spin_lock(&dynids->lock); 98 + mutex_lock(&usb_dynids_lock); 99 99 list_add_tail(&dynid->node, &dynids->list); 100 - spin_unlock(&dynids->lock); 100 + mutex_unlock(&usb_dynids_lock); 101 101 102 102 retval = driver_attach(driver); 103 103 ··· 116 116 struct usb_dynid *dynid; 117 117 size_t count = 0; 118 118 119 + guard(mutex)(&usb_dynids_lock); 119 120 list_for_each_entry(dynid, &dynids->list, node) 120 121 if (dynid->id.bInterfaceClass != 0) 121 122 count += scnprintf(&buf[count], PAGE_SIZE - count, "%04x %04x %02x\n", ··· 161 160 if (fields < 2) 162 161 return -EINVAL; 163 162 164 - spin_lock(&usb_driver->dynids.lock); 163 + guard(mutex)(&usb_dynids_lock); 165 164 list_for_each_entry_safe(dynid, n, &usb_driver->dynids.list, node) { 166 165 struct usb_device_id *id = &dynid->id; 167 166 ··· 172 171 break; 173 172 } 174 173 } 175 - spin_unlock(&usb_driver->dynids.lock); 176 174 return count; 177 175 } 178 176 ··· 220 220 { 221 221 struct usb_dynid *dynid, *n; 222 222 223 - spin_lock(&usb_drv->dynids.lock); 223 + guard(mutex)(&usb_dynids_lock); 224 224 list_for_each_entry_safe(dynid, n, &usb_drv->dynids.list, node) { 225 225 list_del(&dynid->node); 226 226 kfree(dynid); 227 227 } 228 - spin_unlock(&usb_drv->dynids.lock); 229 228 } 230 229 231 230 static const struct usb_device_id *usb_match_dynamic_id(struct usb_interface *intf, 232 - struct usb_driver *drv) 231 + const struct usb_driver *drv) 233 232 { 234 233 struct usb_dynid *dynid; 235 234 236 - spin_lock(&drv->dynids.lock); 235 + guard(mutex)(&usb_dynids_lock); 237 236 list_for_each_entry(dynid, &drv->dynids.list, node) { 238 237 if (usb_match_one_id(intf, &dynid->id)) { 239 - spin_unlock(&drv->dynids.lock); 240 238 return &dynid->id; 241 239 } 242 240 } 243 - spin_unlock(&drv->dynids.lock); 244 241 return NULL; 245 242 } 246 243 ··· 850 853 EXPORT_SYMBOL_GPL(usb_device_match_id); 851 854 852 855 bool usb_driver_applicable(struct usb_device *udev, 853 - struct usb_device_driver 
*udrv) 856 + const struct usb_device_driver *udrv) 854 857 { 855 858 if (udrv->id_table && udrv->match) 856 859 return usb_device_match_id(udev, udrv->id_table) != NULL && ··· 870 873 /* devices and interfaces are handled separately */ 871 874 if (is_usb_device(dev)) { 872 875 struct usb_device *udev; 873 - struct usb_device_driver *udrv; 876 + const struct usb_device_driver *udrv; 874 877 875 878 /* interface drivers never match devices */ 876 879 if (!is_usb_device_driver(drv)) ··· 890 893 891 894 } else if (is_usb_interface(dev)) { 892 895 struct usb_interface *intf; 893 - struct usb_driver *usb_drv; 896 + const struct usb_driver *usb_drv; 894 897 const struct usb_device_id *id; 895 898 896 899 /* device drivers never match interfaces */ ··· 1073 1076 new_driver->driver.owner = owner; 1074 1077 new_driver->driver.mod_name = mod_name; 1075 1078 new_driver->driver.dev_groups = new_driver->dev_groups; 1076 - spin_lock_init(&new_driver->dynids.lock); 1077 1079 INIT_LIST_HEAD(&new_driver->dynids.list); 1078 1080 1079 1081 retval = driver_register(&new_driver->driver);
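The `guard(mutex)(&usb_dynids_lock)` calls above come from the kernel's scope-based cleanup helpers: the lock is released automatically when the enclosing scope ends, which is why the explicit unlock calls, including the ones on early-return paths, disappear from the diff. The mechanism can be sketched in userspace with the gcc/clang `cleanup` attribute, here with a toy lock flag instead of a real mutex:

```c
#include <assert.h>

static int lock_held;	/* toy stand-in for a mutex */

static void release(int **l)
{
	**l = 0;	/* "unlock" runs automatically at scope exit */
}

/* Acquire on declaration, release via the cleanup attribute. */
#define guard_lock(l) \
	int *_guard __attribute__((cleanup(release))) = (*(l) = 1, (l))

static int read_under_guard(void)
{
	guard_lock(&lock_held);
	/* any return path from here on drops the lock */
	return lock_held;
}
```

Because every exit path releases the lock, the show, store, and match paths in the diff above no longer need an unlock before each `return`.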
+6 -5
drivers/usb/core/endpoint.c
··· 14 14 #include <linux/kernel.h> 15 15 #include <linux/spinlock.h> 16 16 #include <linux/slab.h> 17 + #include <linux/sysfs.h> 17 18 #include <linux/usb.h> 18 19 #include "usb.h" 19 20 ··· 40 39 char *buf) \ 41 40 { \ 42 41 struct ep_device *ep = to_ep_device(dev); \ 43 - return sprintf(buf, format_string, ep->desc->field); \ 42 + return sysfs_emit(buf, format_string, ep->desc->field); \ 44 43 } \ 45 44 static DEVICE_ATTR_RO(field) 46 45 ··· 53 52 struct device_attribute *attr, char *buf) 54 53 { 55 54 struct ep_device *ep = to_ep_device(dev); 56 - return sprintf(buf, "%04x\n", usb_endpoint_maxp(ep->desc)); 55 + return sysfs_emit(buf, "%04x\n", usb_endpoint_maxp(ep->desc)); 57 56 } 58 57 static DEVICE_ATTR_RO(wMaxPacketSize); 59 58 ··· 77 76 type = "Interrupt"; 78 77 break; 79 78 } 80 - return sprintf(buf, "%s\n", type); 79 + return sysfs_emit(buf, "%s\n", type); 81 80 } 82 81 static DEVICE_ATTR_RO(type); 83 82 ··· 96 95 interval /= 1000; 97 96 } 98 97 99 - return sprintf(buf, "%d%cs\n", interval, unit); 98 + return sysfs_emit(buf, "%d%cs\n", interval, unit); 100 99 } 101 100 static DEVICE_ATTR_RO(interval); 102 101 ··· 112 111 direction = "in"; 113 112 else 114 113 direction = "out"; 115 - return sprintf(buf, "%s\n", direction); 114 + return sysfs_emit(buf, "%s\n", direction); 116 115 } 117 116 static DEVICE_ATTR_RO(direction); 118 117
+2 -1
drivers/usb/core/ledtrig-usbport.c
··· 10 10 #include <linux/module.h> 11 11 #include <linux/of.h> 12 12 #include <linux/slab.h> 13 + #include <linux/sysfs.h> 13 14 #include <linux/usb.h> 14 15 #include <linux/usb/of.h> 15 16 ··· 88 87 struct usbport_trig_port, 89 88 attr); 90 89 91 - return sprintf(buf, "%d\n", port->observed) + 1; 90 + return sysfs_emit(buf, "%d\n", port->observed) + 1; 92 91 } 93 92 94 93 static ssize_t usbport_trig_port_store(struct device *dev,
+6 -5
drivers/usb/core/port.c
··· 9 9 10 10 #include <linux/kstrtox.h> 11 11 #include <linux/slab.h> 12 + #include <linux/sysfs.h> 12 13 #include <linux/pm_qos.h> 13 14 #include <linux/component.h> 14 15 #include <linux/usb/of.h> ··· 167 166 { 168 167 struct usb_port *port_dev = to_usb_port(dev); 169 168 170 - return sprintf(buf, "0x%08x\n", port_dev->location); 169 + return sysfs_emit(buf, "0x%08x\n", port_dev->location); 171 170 } 172 171 static DEVICE_ATTR_RO(location); 173 172 ··· 192 191 break; 193 192 } 194 193 195 - return sprintf(buf, "%s\n", result); 194 + return sysfs_emit(buf, "%s\n", result); 196 195 } 197 196 static DEVICE_ATTR_RO(connect_type); 198 197 ··· 211 210 { 212 211 struct usb_port *port_dev = to_usb_port(dev); 213 212 214 - return sprintf(buf, "%u\n", port_dev->over_current_count); 213 + return sysfs_emit(buf, "%u\n", port_dev->over_current_count); 215 214 } 216 215 static DEVICE_ATTR_RO(over_current_count); 217 216 ··· 220 219 { 221 220 struct usb_port *port_dev = to_usb_port(dev); 222 221 223 - return sprintf(buf, "%08x\n", port_dev->quirks); 222 + return sysfs_emit(buf, "%08x\n", port_dev->quirks); 224 223 } 225 224 226 225 static ssize_t quirks_store(struct device *dev, struct device_attribute *attr, ··· 255 254 p = "0"; 256 255 } 257 256 258 - return sprintf(buf, "%s\n", p); 257 + return sysfs_emit(buf, "%s\n", p); 259 258 } 260 259 261 260 static ssize_t usb3_lpm_permit_store(struct device *dev,
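The `sprintf()` → `sysfs_emit()` conversions above are part of a tree-wide cleanup: a sysfs `show()` callback receives exactly one page of buffer, and `sysfs_emit()` formats into it with that bound enforced instead of trusting `sprintf()` not to overrun. Roughly, as a hedged userspace analogue (the kernel implementation also checks that `buf` is page-aligned):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>

#define SKETCH_PAGE_SIZE 4096	/* sysfs hands show() one page */

/* Bounded, truncating formatter in the spirit of sysfs_emit(). */
static int emit_sketch(char *buf, const char *fmt, ...)
{
	va_list args;
	int len;

	va_start(args, fmt);
	len = vsnprintf(buf, SKETCH_PAGE_SIZE, fmt, args);
	va_end(args);

	/* report what was actually stored, scnprintf()-style */
	if (len >= SKETCH_PAGE_SIZE)
		len = SKETCH_PAGE_SIZE - 1;
	return len;
}
```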
+1 -1
drivers/usb/core/usb.h
··· 75 75 extern const struct usb_device_id *usb_device_match_id(struct usb_device *udev, 76 76 const struct usb_device_id *id); 77 77 extern bool usb_driver_applicable(struct usb_device *udev, 78 - struct usb_device_driver *udrv); 78 + const struct usb_device_driver *udrv); 79 79 extern void usb_forced_unbind_intf(struct usb_interface *intf); 80 80 extern void usb_unbind_and_rebind_marked_interfaces(struct usb_device *udev); 81 81
+1 -1
drivers/usb/dwc2/platform.c
··· 756 756 .pm = &dwc2_dev_pm_ops, 757 757 }, 758 758 .probe = dwc2_driver_probe, 759 - .remove_new = dwc2_driver_remove, 759 + .remove = dwc2_driver_remove, 760 760 .shutdown = dwc2_driver_shutdown, 761 761 }; 762 762
+10 -6
drivers/usb/dwc3/core.c
··· 1409 1409 1410 1410 /* 1411 1411 * When configured in HOST mode, after issuing U3/L2 exit controller 1412 - * fails to send proper CRC checksum in CRC5 feild. Because of this 1412 + * fails to send proper CRC checksum in CRC5 field. Because of this 1413 1413 * behaviour Transaction Error is generated, resulting in reset and 1414 1414 * re-enumeration of usb device attached. All the termsel, xcvrsel, 1415 1415 * opmode becomes 0 during end of resume. Enabling bit 10 of GUCTL1 ··· 1470 1470 if (hw_mode != DWC3_GHWPARAMS0_MODE_GADGET && 1471 1471 (DWC3_IP_IS(DWC31)) && 1472 1472 dwc->maximum_speed == USB_SPEED_SUPER) { 1473 - reg = dwc3_readl(dwc->regs, DWC3_LLUCTL); 1474 - reg |= DWC3_LLUCTL_FORCE_GEN1; 1475 - dwc3_writel(dwc->regs, DWC3_LLUCTL, reg); 1473 + int i; 1474 + 1475 + for (i = 0; i < dwc->num_usb3_ports; i++) { 1476 + reg = dwc3_readl(dwc->regs, DWC3_LLUCTL(i)); 1477 + reg |= DWC3_LLUCTL_FORCE_GEN1; 1478 + dwc3_writel(dwc->regs, DWC3_LLUCTL(i), reg); 1479 + } 1476 1480 } 1477 1481 1478 1482 return 0; ··· 1945 1941 struct extcon_dev *edev = NULL; 1946 1942 const char *name; 1947 1943 1948 - if (device_property_read_bool(dev, "extcon")) 1944 + if (device_property_present(dev, "extcon")) 1949 1945 return extcon_get_edev_by_phandle(dev, 0); 1950 1946 1951 1947 /* ··· 2655 2651 2656 2652 static struct platform_driver dwc3_driver = { 2657 2653 .probe = dwc3_probe, 2658 - .remove_new = dwc3_remove, 2654 + .remove = dwc3_remove, 2659 2655 .driver = { 2660 2656 .name = "dwc3", 2661 2657 .of_match_table = of_match_ptr(of_dwc3_match),
+6 -8
drivers/usb/dwc3/core.h
··· 81 81 #define DWC3_GSNPSREV_MASK 0xffff 82 82 #define DWC3_GSNPS_ID(p) (((p) & DWC3_GSNPSID_MASK) >> 16) 83 83 84 - /* DWC3 registers memory space boundries */ 84 + /* DWC3 registers memory space boundaries */ 85 85 #define DWC3_XHCI_REGS_START 0x0 86 86 #define DWC3_XHCI_REGS_END 0x7fff 87 87 #define DWC3_GLOBALS_REGS_START 0xc100 ··· 179 179 #define DWC3_OEVTEN 0xcc0C 180 180 #define DWC3_OSTS 0xcc10 181 181 182 - #define DWC3_LLUCTL 0xd024 182 + #define DWC3_LLUCTL(n) (0xd024 + ((n) * 0x80)) 183 183 184 184 /* Bit fields */ 185 185 ··· 915 915 #define DWC3_MODE(n) ((n) & 0x7) 916 916 917 917 /* HWPARAMS1 */ 918 + #define DWC3_SPRAM_TYPE(n) (((n) >> 23) & 1) 918 919 #define DWC3_NUM_INT(n) (((n) & (0x3f << 15)) >> 15) 919 920 920 921 /* HWPARAMS3 */ ··· 925 924 (DWC3_NUM_EPS_MASK)) >> 12) 926 925 #define DWC3_NUM_IN_EPS(p) (((p)->hwparams3 & \ 927 926 (DWC3_NUM_IN_EPS_MASK)) >> 18) 927 + 928 + /* HWPARAMS6 */ 929 + #define DWC3_RAM0_DEPTH(n) (((n) & (0xffff0000)) >> 16) 928 930 929 931 /* HWPARAMS7 */ 930 932 #define DWC3_RAM1_DEPTH(n) ((n) & 0xffff) ··· 941 937 * @request: struct usb_request to be transferred 942 938 * @list: a list_head used for request queueing 943 939 * @dep: struct dwc3_ep owning this request 944 - * @sg: pointer to first incomplete sg 945 940 * @start_sg: pointer to the sg which should be queued next 946 941 * @num_pending_sgs: counter to pending sgs 947 - * @num_queued_sgs: counter to the number of sgs which already got queued 948 942 * @remaining: amount of data remaining 949 943 * @status: internal dwc3 request status tracking 950 944 * @epnum: endpoint number to which this request refers 951 945 * @trb: pointer to struct dwc3_trb 952 946 * @trb_dma: DMA address of @trb 953 947 * @num_trbs: number of TRBs used by this request 954 - * @needs_extra_trb: true when request needs one extra TRB (either due to ZLP 955 - * or unaligned OUT) 956 948 * @direction: IN or OUT direction flag 957 949 * @mapped: true when request has been 
dma-mapped 958 950 */ ··· 960 960 struct scatterlist *start_sg; 961 961 962 962 unsigned int num_pending_sgs; 963 - unsigned int num_queued_sgs; 964 963 unsigned int remaining; 965 964 966 965 unsigned int status; ··· 977 978 978 979 unsigned int num_trbs; 979 980 980 - unsigned int needs_extra_trb:1; 981 981 unsigned int direction:1; 982 982 unsigned int mapped:1; 983 983 };
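Part of the dwc3 multi-port rework above is turning a fixed register address into a per-port macro (`DWC3_LLUCTL` gains a port index with an 0x80-byte stride) and adding a HWPARAMS6 field extractor for the RAM0 depth. The arithmetic in isolation, with the offsets as they appear in the patch:

```c
#include <assert.h>

/* Per-port link-layer control register, 0x80 bytes apart. */
#define DWC3_LLUCTL(n)		(0xd024 + ((n) * 0x80))

/* RAM0 depth lives in the upper 16 bits of HWPARAMS6. */
#define DWC3_RAM0_DEPTH(n)	(((n) & (0xffff0000)) >> 16)
```

`dwc3_core_init()` can then apply `DWC3_LLUCTL_FORCE_GEN1` to each of `dwc->num_usb3_ports` instead of only port 0, and the gadget code can size TxFIFOs from RAM0 on single-port-RAM configurations.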
+1 -1
drivers/usb/dwc3/dwc3-am62.c
··· 377 377 378 378 static struct platform_driver dwc3_ti_driver = { 379 379 .probe = dwc3_ti_probe, 380 - .remove_new = dwc3_ti_remove, 380 + .remove = dwc3_ti_remove, 381 381 .driver = { 382 382 .name = "dwc3-am62", 383 383 .pm = DEV_PM_OPS,
+1 -1
drivers/usb/dwc3/dwc3-exynos.c
··· 243 243 244 244 static struct platform_driver dwc3_exynos_driver = { 245 245 .probe = dwc3_exynos_probe, 246 - .remove_new = dwc3_exynos_remove, 246 + .remove = dwc3_exynos_remove, 247 247 .driver = { 248 248 .name = "exynos-dwc3", 249 249 .of_match_table = exynos_dwc3_match,
+1 -1
drivers/usb/dwc3/dwc3-imx8mp.c
··· 400 400 401 401 static struct platform_driver dwc3_imx8mp_driver = { 402 402 .probe = dwc3_imx8mp_probe, 403 - .remove_new = dwc3_imx8mp_remove, 403 + .remove = dwc3_imx8mp_remove, 404 404 .driver = { 405 405 .name = "imx8mp-dwc3", 406 406 .pm = pm_ptr(&dwc3_imx8mp_dev_pm_ops),
+1 -1
drivers/usb/dwc3/dwc3-keystone.c
··· 208 208 209 209 static struct platform_driver kdwc3_driver = { 210 210 .probe = kdwc3_probe, 211 - .remove_new = kdwc3_remove, 211 + .remove = kdwc3_remove, 212 212 .driver = { 213 213 .name = "keystone-dwc3", 214 214 .of_match_table = kdwc3_of_match,
+1 -1
drivers/usb/dwc3/dwc3-meson-g12a.c
··· 968 968 969 969 static struct platform_driver dwc3_meson_g12a_driver = { 970 970 .probe = dwc3_meson_g12a_probe, 971 - .remove_new = dwc3_meson_g12a_remove, 971 + .remove = dwc3_meson_g12a_remove, 972 972 .driver = { 973 973 .name = "dwc3-meson-g12a", 974 974 .of_match_table = dwc3_meson_g12a_match,
+1 -1
drivers/usb/dwc3/dwc3-octeon.c
··· 520 520 521 521 static struct platform_driver dwc3_octeon_driver = { 522 522 .probe = dwc3_octeon_probe, 523 - .remove_new = dwc3_octeon_remove, 523 + .remove = dwc3_octeon_remove, 524 524 .driver = { 525 525 .name = "dwc3-octeon", 526 526 .of_match_table = dwc3_octeon_of_match,
+1 -1
drivers/usb/dwc3/dwc3-of-simple.c
··· 180 180 181 181 static struct platform_driver dwc3_of_simple_driver = { 182 182 .probe = dwc3_of_simple_probe, 183 - .remove_new = dwc3_of_simple_remove, 183 + .remove = dwc3_of_simple_remove, 184 184 .shutdown = dwc3_of_simple_shutdown, 185 185 .driver = { 186 186 .name = "dwc3-of-simple",
+2 -2
drivers/usb/dwc3/dwc3-omap.c
··· 416 416 struct device_node *node = omap->dev->of_node; 417 417 struct extcon_dev *edev; 418 418 419 - if (of_property_read_bool(node, "extcon")) { 419 + if (of_property_present(node, "extcon")) { 420 420 edev = extcon_get_edev_by_phandle(omap->dev, 0); 421 421 if (IS_ERR(edev)) { 422 422 dev_vdbg(omap->dev, "couldn't get extcon device\n"); ··· 611 611 612 612 static struct platform_driver dwc3_omap_driver = { 613 613 .probe = dwc3_omap_probe, 614 - .remove_new = dwc3_omap_remove, 614 + .remove = dwc3_omap_remove, 615 615 .driver = { 616 616 .name = "omap-dwc3", 617 617 .of_match_table = of_dwc3_match,
+2 -2
drivers/usb/dwc3/dwc3-qcom.c
··· 161 161 struct extcon_dev *host_edev; 162 162 int ret; 163 163 164 - if (!of_property_read_bool(dev->of_node, "extcon")) 164 + if (!of_property_present(dev->of_node, "extcon")) 165 165 return 0; 166 166 167 167 qcom->edev = extcon_get_edev_by_phandle(dev, 0); ··· 921 921 922 922 static struct platform_driver dwc3_qcom_driver = { 923 923 .probe = dwc3_qcom_probe, 924 - .remove_new = dwc3_qcom_remove, 924 + .remove = dwc3_qcom_remove, 925 925 .driver = { 926 926 .name = "dwc3-qcom", 927 927 .pm = &dwc3_qcom_dev_pm_ops,
+1 -1
drivers/usb/dwc3/dwc3-rtk.c
··· 441 441 442 442 static struct platform_driver dwc3_rtk_driver = { 443 443 .probe = dwc3_rtk_probe, 444 - .remove_new = dwc3_rtk_remove, 444 + .remove = dwc3_rtk_remove, 445 445 .driver = { 446 446 .name = "rtk-dwc3", 447 447 .of_match_table = rtk_dwc3_match,
+1 -1
drivers/usb/dwc3/dwc3-st.c
··· 356 356 357 357 static struct platform_driver st_dwc3_driver = { 358 358 .probe = st_dwc3_probe, 359 - .remove_new = st_dwc3_remove, 359 + .remove = st_dwc3_remove, 360 360 .driver = { 361 361 .name = "usb-st-dwc3", 362 362 .of_match_table = st_dwc3_match,
+1 -1
drivers/usb/dwc3/dwc3-xilinx.c
··· 420 420 421 421 static struct platform_driver dwc3_xlnx_driver = { 422 422 .probe = dwc3_xlnx_probe, 423 - .remove_new = dwc3_xlnx_remove, 423 + .remove = dwc3_xlnx_remove, 424 424 .driver = { 425 425 .name = "dwc3-xilinx", 426 426 .of_match_table = dwc3_xlnx_of_match,
+2 -2
drivers/usb/dwc3/ep0.c
··· 145 145 * Unfortunately we have uncovered a limitation wrt the Data Phase. 146 146 * 147 147 * Section 9.4 says we can wait for the XferNotReady(DATA) event to 148 - * come before issueing Start Transfer command, but if we do, we will 148 + * come before issuing Start Transfer command, but if we do, we will 149 149 * miss situations where the host starts another SETUP phase instead of 150 150 * the DATA phase. Such cases happen at least on TD.7.6 of the Link 151 151 * Layer Compliance Suite. ··· 232 232 /* stall is always issued on EP0 */ 233 233 dep = dwc->eps[0]; 234 234 __dwc3_gadget_ep_set_halt(dep, 1, false); 235 - dep->flags &= DWC3_EP_RESOURCE_ALLOCATED; 235 + dep->flags &= DWC3_EP_RESOURCE_ALLOCATED | DWC3_EP_TRANSFER_STARTED; 236 236 dep->flags |= DWC3_EP_ENABLED; 237 237 dwc->delayed_status = false; 238 238
+90 -62
drivers/usb/dwc3/gadget.c
··· 197 197 198 198 list_del(&req->list); 199 199 req->remaining = 0; 200 - req->needs_extra_trb = false; 201 200 req->num_trbs = 0; 202 201 203 202 if (req->request.status == -EINPROGRESS) ··· 687 688 } 688 689 689 690 /** 691 + * dwc3_gadget_calc_ram_depth - calculates the ram depth for txfifo 692 + * @dwc: pointer to the DWC3 context 693 + */ 694 + static int dwc3_gadget_calc_ram_depth(struct dwc3 *dwc) 695 + { 696 + int ram_depth; 697 + int fifo_0_start; 698 + bool is_single_port_ram; 699 + 700 + /* Check which RAM type the HW supports */ 701 + is_single_port_ram = DWC3_SPRAM_TYPE(dwc->hwparams.hwparams1); 702 + 703 + /* 704 + * If a single port RAM is utilized, then allocate TxFIFOs from 705 + * RAM0; otherwise, allocate them from RAM1. 706 + */ 707 + ram_depth = is_single_port_ram ? DWC3_RAM0_DEPTH(dwc->hwparams.hwparams6) : 708 + DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7); 709 + 710 + /* 711 + * In a single port RAM configuration, the available RAM is shared 712 + * between the RX and TX FIFOs. This means that the txfifo can begin 713 + * at a non-zero address. 
714 + */ 715 + if (is_single_port_ram) { 716 + u32 reg; 717 + 718 + /* Check if TXFIFOs start at non-zero addr */ 719 + reg = dwc3_readl(dwc->regs, DWC3_GTXFIFOSIZ(0)); 720 + fifo_0_start = DWC3_GTXFIFOSIZ_TXFSTADDR(reg); 721 + 722 + ram_depth -= (fifo_0_start >> 16); 723 + } 724 + 725 + return ram_depth; 726 + } 727 + 728 + /** 690 729 * dwc3_gadget_clear_tx_fifos - Clears txfifo allocation 691 730 * @dwc: pointer to the DWC3 context 692 731 * ··· 790 753 { 791 754 struct dwc3 *dwc = dep->dwc; 792 755 int fifo_0_start; 793 - int ram1_depth; 756 + int ram_depth; 794 757 int fifo_size; 795 758 int min_depth; 796 759 int num_in_ep; ··· 810 773 if (dep->flags & DWC3_EP_TXFIFO_RESIZED) 811 774 return 0; 812 775 813 - ram1_depth = DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7); 776 + ram_depth = dwc3_gadget_calc_ram_depth(dwc); 814 777 815 - if ((dep->endpoint.maxburst > 1 && 816 - usb_endpoint_xfer_bulk(dep->endpoint.desc)) || 817 - usb_endpoint_xfer_isoc(dep->endpoint.desc)) 818 - num_fifos = 3; 819 - 820 - if (dep->endpoint.maxburst > 6 && 821 - (usb_endpoint_xfer_bulk(dep->endpoint.desc) || 822 - usb_endpoint_xfer_isoc(dep->endpoint.desc)) && DWC3_IP_IS(DWC31)) 823 - num_fifos = dwc->tx_fifo_resize_max_num; 778 + switch (dwc->gadget->speed) { 779 + case USB_SPEED_SUPER_PLUS: 780 + case USB_SPEED_SUPER: 781 + if (usb_endpoint_xfer_bulk(dep->endpoint.desc) || 782 + usb_endpoint_xfer_isoc(dep->endpoint.desc)) 783 + num_fifos = min_t(unsigned int, 784 + dep->endpoint.maxburst, 785 + dwc->tx_fifo_resize_max_num); 786 + break; 787 + case USB_SPEED_HIGH: 788 + if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) { 789 + num_fifos = min_t(unsigned int, 790 + usb_endpoint_maxp_mult(dep->endpoint.desc) + 1, 791 + dwc->tx_fifo_resize_max_num); 792 + break; 793 + } 794 + fallthrough; 795 + case USB_SPEED_FULL: 796 + if (usb_endpoint_xfer_bulk(dep->endpoint.desc)) 797 + num_fifos = 2; 798 + break; 799 + default: 800 + break; 801 + } 824 802 825 803 /* FIFO size for a single buffer */ 826 
804 fifo = dwc3_gadget_calc_tx_fifo_size(dwc, 1); ··· 846 794 847 795 /* Reserve at least one FIFO for the number of IN EPs */ 848 796 min_depth = num_in_ep * (fifo + 1); 849 - remaining = ram1_depth - min_depth - dwc->last_fifo_depth; 797 + remaining = ram_depth - min_depth - dwc->last_fifo_depth; 850 798 remaining = max_t(int, 0, remaining); 851 799 /* 852 800 * We've already reserved 1 FIFO per EP, so check what we can fit in ··· 872 820 dwc->last_fifo_depth += DWC31_GTXFIFOSIZ_TXFDEP(fifo_size); 873 821 874 822 /* Check fifo size allocation doesn't exceed available RAM size. */ 875 - if (dwc->last_fifo_depth >= ram1_depth) { 823 + if (dwc->last_fifo_depth >= ram_depth) { 876 824 dev_err(dwc->dev, "Fifosize(%d) > RAM size(%d) %s depth:%d\n", 877 - dwc->last_fifo_depth, ram1_depth, 825 + dwc->last_fifo_depth, ram_depth, 878 826 dep->endpoint.name, fifo_size); 879 827 if (DWC3_IP_IS(DWC3)) 880 828 fifo_size = DWC3_GTXFIFOSIZ_TXFDEP(fifo_size); ··· 1229 1177 * pending to be processed by the driver. 1230 1178 */ 1231 1179 if (dep->trb_enqueue == dep->trb_dequeue) { 1180 + struct dwc3_request *req; 1181 + 1232 1182 /* 1233 - * If there is any request remained in the started_list at 1234 - * this point, that means there is no TRB available. 1183 + * If there is any request remained in the started_list with 1184 + * active TRBs at this point, then there is no TRB available. 
1235 1185 */ 1236 - if (!list_empty(&dep->started_list)) 1186 + req = next_request(&dep->started_list); 1187 + if (req && req->num_trbs) 1237 1188 return 0; 1238 1189 1239 1190 return DWC3_TRB_NUM - 1; ··· 1439 1384 unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc); 1440 1385 unsigned int rem = req->request.length % maxp; 1441 1386 unsigned int num_trbs = 1; 1387 + bool needs_extra_trb; 1442 1388 1443 1389 if (dwc3_needs_extra_trb(dep, req)) 1444 1390 num_trbs++; ··· 1447 1391 if (dwc3_calc_trbs_left(dep) < num_trbs) 1448 1392 return 0; 1449 1393 1450 - req->needs_extra_trb = num_trbs > 1; 1394 + needs_extra_trb = num_trbs > 1; 1451 1395 1452 1396 /* Prepare a normal TRB */ 1453 1397 if (req->direction || req->request.length) 1454 1398 dwc3_prepare_one_trb(dep, req, entry_length, 1455 - req->needs_extra_trb, node, false, false); 1399 + needs_extra_trb, node, false, false); 1456 1400 1457 1401 /* Prepare extra TRBs for ZLP and MPS OUT transfer alignment */ 1458 - if ((!req->direction && !req->request.length) || req->needs_extra_trb) 1402 + if ((!req->direction && !req->request.length) || needs_extra_trb) 1459 1403 dwc3_prepare_one_trb(dep, req, 1460 1404 req->direction ? 0 : maxp - rem, 1461 1405 false, 1, true, false); ··· 1470 1414 struct scatterlist *s; 1471 1415 int i; 1472 1416 unsigned int length = req->request.length; 1473 - unsigned int remaining = req->request.num_mapped_sgs 1474 - - req->num_queued_sgs; 1417 + unsigned int remaining = req->num_pending_sgs; 1418 + unsigned int num_queued_sgs = req->request.num_mapped_sgs - remaining; 1475 1419 unsigned int num_trbs = req->num_trbs; 1476 1420 bool needs_extra_trb = dwc3_needs_extra_trb(dep, req); 1477 1421 ··· 1479 1423 * If we resume preparing the request, then get the remaining length of 1480 1424 * the request and resume where we left off. 
1481 1425 */ 1482 - for_each_sg(req->request.sg, s, req->num_queued_sgs, i) 1426 + for_each_sg(req->request.sg, s, num_queued_sgs, i) 1483 1427 length -= sg_dma_len(s); 1484 1428 1485 1429 for_each_sg(sg, s, remaining, i) { ··· 1544 1488 if (!last_sg) 1545 1489 req->start_sg = sg_next(s); 1546 1490 1547 - req->num_queued_sgs++; 1548 1491 req->num_pending_sgs--; 1549 1492 1550 1493 /* ··· 1624 1569 if (ret) 1625 1570 return ret; 1626 1571 1627 - req->sg = req->request.sg; 1628 - req->start_sg = req->sg; 1629 - req->num_queued_sgs = 0; 1572 + req->start_sg = req->request.sg; 1630 1573 req->num_pending_sgs = req->request.num_mapped_sgs; 1631 1574 1632 1575 if (req->num_pending_sgs > 0) { ··· 3128 3075 struct dwc3 *dwc = gadget_to_dwc(g); 3129 3076 struct usb_ep *ep; 3130 3077 int fifo_size = 0; 3131 - int ram1_depth; 3078 + int ram_depth; 3132 3079 int ep_num = 0; 3133 3080 3134 3081 if (!dwc->do_fifo_resize) ··· 3151 3098 fifo_size += dwc->max_cfg_eps; 3152 3099 3153 3100 /* Check if we can fit a single fifo per endpoint */ 3154 - ram1_depth = DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7); 3155 - if (fifo_size > ram1_depth) 3101 + ram_depth = dwc3_gadget_calc_ram_depth(dwc); 3102 + if (fifo_size > ram_depth) 3156 3103 return -ENOMEM; 3157 3104 3158 3105 return 0; ··· 3469 3416 int status) 3470 3417 { 3471 3418 struct dwc3_trb *trb; 3472 - struct scatterlist *sg = req->sg; 3473 - struct scatterlist *s; 3474 - unsigned int num_queued = req->num_queued_sgs; 3419 + unsigned int num_completed_trbs = req->num_trbs; 3475 3420 unsigned int i; 3476 3421 int ret = 0; 3477 3422 3478 - for_each_sg(sg, s, num_queued, i) { 3423 + for (i = 0; i < num_completed_trbs; i++) { 3479 3424 trb = &dep->trb_pool[dep->trb_dequeue]; 3480 3425 3481 - req->sg = sg_next(s); 3482 - req->num_queued_sgs--; 3483 - 3484 3426 ret = dwc3_gadget_ep_reclaim_completed_trb(dep, req, 3485 - trb, event, status, true); 3427 + trb, event, status, 3428 + !!(trb->ctrl & DWC3_TRB_CTRL_CHN)); 3486 3429 if (ret) 3487 
3430 break; 3488 3431 } ··· 3486 3437 return ret; 3487 3438 } 3488 3439 3489 - static int dwc3_gadget_ep_reclaim_trb_linear(struct dwc3_ep *dep, 3490 - struct dwc3_request *req, const struct dwc3_event_depevt *event, 3491 - int status) 3492 - { 3493 - struct dwc3_trb *trb = &dep->trb_pool[dep->trb_dequeue]; 3494 - 3495 - return dwc3_gadget_ep_reclaim_completed_trb(dep, req, trb, 3496 - event, status, false); 3497 - } 3498 - 3499 3440 static bool dwc3_gadget_ep_request_completed(struct dwc3_request *req) 3500 3441 { 3501 - return req->num_pending_sgs == 0 && req->num_queued_sgs == 0; 3442 + return req->num_pending_sgs == 0 && req->num_trbs == 0; 3502 3443 } 3503 3444 3504 3445 static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep, ··· 3498 3459 int request_status; 3499 3460 int ret; 3500 3461 3501 - if (req->request.num_mapped_sgs) 3502 - ret = dwc3_gadget_ep_reclaim_trb_sg(dep, req, event, 3503 - status); 3504 - else 3505 - ret = dwc3_gadget_ep_reclaim_trb_linear(dep, req, event, 3506 - status); 3462 + ret = dwc3_gadget_ep_reclaim_trb_sg(dep, req, event, status); 3507 3463 3508 3464 req->request.actual = req->request.length - req->remaining; 3509 3465 3510 3466 if (!dwc3_gadget_ep_request_completed(req)) 3511 3467 goto out; 3512 - 3513 - if (req->needs_extra_trb) { 3514 - ret = dwc3_gadget_ep_reclaim_trb_linear(dep, req, event, 3515 - status); 3516 - req->needs_extra_trb = false; 3517 - } 3518 3468 3519 3469 /* 3520 3470 * The event status only reflects the status of the TRB with IOC set.
+1 -1
drivers/usb/dwc3/host.c
··· 35 35 u32 reg; 36 36 int i; 37 37 38 - /* xhci regs is not mapped yet, do it temperary here */ 38 + /* xhci regs are not mapped yet, do it temporarily here */ 39 39 if (dwc->xhci_resources[0].start) { 40 40 xhci_regs = ioremap(dwc->xhci_resources[0].start, DWC3_XHCI_REGS_END); 41 41 if (!xhci_regs) {
+1 -1
drivers/usb/fotg210/fotg210-core.c
··· 195 195 .of_match_table = of_match_ptr(fotg210_of_match), 196 196 }, 197 197 .probe = fotg210_probe, 198 - .remove_new = fotg210_remove, 198 + .remove = fotg210_remove, 199 199 }; 200 200 201 201 static int __init fotg210_init(void)
+21 -9
drivers/usb/gadget/composite.c
··· 1844 1844 cdev->desc.bcdUSB = cpu_to_le16(0x0200); 1845 1845 } 1846 1846 1847 - value = min(w_length, (u16) sizeof cdev->desc); 1847 + value = min_t(u16, w_length, sizeof(cdev->desc)); 1848 1848 memcpy(req->buf, &cdev->desc, value); 1849 1849 break; 1850 1850 case USB_DT_DEVICE_QUALIFIER: ··· 1863 1863 case USB_DT_CONFIG: 1864 1864 value = config_desc(cdev, w_value); 1865 1865 if (value >= 0) 1866 - value = min(w_length, (u16) value); 1866 + value = min_t(u16, w_length, value); 1867 1867 break; 1868 1868 case USB_DT_STRING: 1869 1869 value = get_string(cdev, req->buf, 1870 1870 w_index, w_value & 0xff); 1871 1871 if (value >= 0) 1872 - value = min(w_length, (u16) value); 1872 + value = min_t(u16, w_length, value); 1873 1873 break; 1874 1874 case USB_DT_BOS: 1875 1875 if (gadget_is_superspeed(gadget) || 1876 1876 gadget->lpm_capable || cdev->use_webusb) { 1877 1877 value = bos_desc(cdev); 1878 - value = min(w_length, (u16) value); 1878 + value = min_t(u16, w_length, value); 1879 1879 } 1880 1880 break; 1881 1881 case USB_DT_OTG: ··· 1930 1930 *(u8 *)req->buf = cdev->config->bConfigurationValue; 1931 1931 else 1932 1932 *(u8 *)req->buf = 0; 1933 - value = min(w_length, (u16) 1); 1933 + value = min_t(u16, w_length, 1); 1934 1934 break; 1935 1935 1936 1936 /* function drivers must handle get/set altsetting */ ··· 1976 1976 if (value < 0) 1977 1977 break; 1978 1978 *((u8 *)req->buf) = value; 1979 - value = min(w_length, (u16) 1); 1979 + value = min_t(u16, w_length, 1); 1980 1980 break; 1981 1981 case USB_REQ_GET_STATUS: 1982 1982 if (gadget_is_otg(gadget) && gadget->hnp_polling_support && ··· 2111 2111 memset(buf, 0, w_length); 2112 2112 buf[5] = 0x01; 2113 2113 switch (ctrl->bRequestType & USB_RECIP_MASK) { 2114 + /* 2115 + * The Microsoft CompatID OS Descriptor Spec(w_index = 0x4) and 2116 + * Extended Prop OS Desc Spec(w_index = 0x5) state that the 2117 + * HighByte of wValue is the InterfaceNumber and the LowByte is 2118 + * the PageNumber. 
This high/low byte ordering is incorrectly 2119 + * documented in the Spec. USB analyzer output on the below 2120 + * request packets show the high/low byte inverted i.e. LowByte 2121 + * is the InterfaceNumber and the HighByte is the PageNumber. 2122 + * Since we don't support >64KB CompatID/ExtendedProp descriptors, 2123 + * PageNumber is set to 0. Hence verify that the HighByte is 0 2124 + * for below two cases. 2125 + */ 2114 2126 case USB_RECIP_DEVICE: 2115 - if (w_index != 0x4 || (w_value & 0xff)) 2127 + if (w_index != 0x4 || (w_value >> 8)) 2116 2128 break; 2117 2129 buf[6] = w_index; 2118 2130 /* Number of ext compat interfaces */ ··· 2140 2128 } 2141 2129 break; 2142 2130 case USB_RECIP_INTERFACE: 2143 - if (w_index != 0x5 || (w_value & 0xff)) 2131 + if (w_index != 0x5 || (w_value >> 8)) 2144 2132 break; 2145 - interface = w_value >> 8; 2133 + interface = w_value & 0xFF; 2146 2134 if (interface >= MAX_CONFIG_INTERFACES || 2147 2135 !os_desc_cfg->interface[interface]) 2148 2136 break;
+2 -2
drivers/usb/gadget/config.c
··· 57 57 * usb_gadget_config_buf - builts a complete configuration descriptor 58 58 * @config: Header for the descriptor, including characteristics such 59 59 * as power requirements and number of interfaces. 60 - * @desc: Null-terminated vector of pointers to the descriptors (interface, 61 - * endpoint, etc) defining all functions in this device configuration. 62 60 * @buf: Buffer for the resulting configuration descriptor. 63 61 * @length: Length of buffer. If this is not big enough to hold the 64 62 * entire configuration descriptor, an error code will be returned. 63 + * @desc: Null-terminated vector of pointers to the descriptors (interface, 64 + * endpoint, etc) defining all functions in this device configuration. 65 65 * 66 66 * This copies descriptors into the response buffer, building a descriptor 67 67 * for that configuration. It returns the buffer length or a negative
+1 -1
drivers/usb/gadget/configfs.c
··· 1184 1184 struct gadget_info *gi = os_desc_item_to_gadget_info(item); 1185 1185 int res, l; 1186 1186 1187 - l = min((int)len, OS_STRING_QW_SIGN_LEN >> 1); 1187 + l = min_t(int, len, OS_STRING_QW_SIGN_LEN >> 1); 1188 1188 if (page[l - 1] == '\n') 1189 1189 --l; 1190 1190
+4
drivers/usb/gadget/function/Makefile
··· 41 41 usb_f_uac2-y := f_uac2.o 42 42 obj-$(CONFIG_USB_F_UAC2) += usb_f_uac2.o 43 43 usb_f_uvc-y := f_uvc.o uvc_queue.o uvc_v4l2.o uvc_video.o uvc_configfs.o 44 + ifneq ($(CONFIG_TRACING),) 45 + CFLAGS_uvc_trace.o := -I$(src) 46 + usb_f_uvc-y += uvc_trace.o 47 + endif 44 48 obj-$(CONFIG_USB_F_UVC) += usb_f_uvc.o 45 49 usb_f_midi-y := f_midi.o 46 50 obj-$(CONFIG_USB_F_MIDI) += usb_f_midi.o
+3 -3
drivers/usb/gadget/function/f_fs.c
··· 456 456 } 457 457 458 458 /* FFS_SETUP_PENDING and not stall */ 459 - len = min(len, (size_t)le16_to_cpu(ffs->ev.setup.wLength)); 459 + len = min_t(size_t, len, le16_to_cpu(ffs->ev.setup.wLength)); 460 460 461 461 spin_unlock_irq(&ffs->ev.waitq.lock); 462 462 ··· 590 590 591 591 /* unlocks spinlock */ 592 592 return __ffs_ep0_read_events(ffs, buf, 593 - min(n, (size_t)ffs->ev.count)); 593 + min_t(size_t, n, ffs->ev.count)); 594 594 595 595 case FFS_SETUP_PENDING: 596 596 if (ffs->ev.setup.bRequestType & USB_DIR_IN) { ··· 599 599 goto done_mutex; 600 600 } 601 601 602 - len = min(len, (size_t)le16_to_cpu(ffs->ev.setup.wLength)); 602 + len = min_t(size_t, len, le16_to_cpu(ffs->ev.setup.wLength)); 603 603 604 604 spin_unlock_irq(&ffs->ev.waitq.lock); 605 605
+4 -4
drivers/usb/gadget/function/f_mass_storage.c
··· 500 500 *(u8 *)req->buf = _fsg_common_get_max_lun(fsg->common); 501 501 502 502 /* Respond with data/status */ 503 - req->length = min((u16)1, w_length); 503 + req->length = min_t(u16, 1, w_length); 504 504 return ep0_queue(fsg->common); 505 505 } 506 506 ··· 655 655 * And don't try to read past the end of the file. 656 656 */ 657 657 amount = min(amount_left, FSG_BUFLEN); 658 - amount = min((loff_t)amount, 658 + amount = min_t(loff_t, amount, 659 659 curlun->file_length - file_offset); 660 660 661 661 /* Wait for the next buffer to become available */ ··· 1005 1005 * And don't try to read past the end of the file. 1006 1006 */ 1007 1007 amount = min(amount_left, FSG_BUFLEN); 1008 - amount = min((loff_t)amount, 1008 + amount = min_t(loff_t, amount, 1009 1009 curlun->file_length - file_offset); 1010 1010 if (amount == 0) { 1011 1011 curlun->sense_data = ··· 2167 2167 if (reply == -EINVAL) 2168 2168 reply = 0; /* Error reply length */ 2169 2169 if (reply >= 0 && common->data_dir == DATA_DIR_TO_HOST) { 2170 - reply = min((u32)reply, common->data_size_from_cmnd); 2170 + reply = min_t(u32, reply, common->data_size_from_cmnd); 2171 2171 bh->inreq->length = reply; 2172 2172 bh->state = BUF_STATE_FULL; 2173 2173 common->residue -= reply;
+4 -4
drivers/usb/gadget/function/f_midi.c
··· 819 819 goto fail; 820 820 } 821 821 822 - strcpy(card->driver, f_midi_longname); 823 - strcpy(card->longname, f_midi_longname); 824 - strcpy(card->shortname, f_midi_shortname); 822 + strscpy(card->driver, f_midi_longname); 823 + strscpy(card->longname, f_midi_longname); 824 + strscpy(card->shortname, f_midi_shortname); 825 825 826 826 /* Set up rawmidi */ 827 827 snd_component_add(card, "MIDI"); ··· 833 833 } 834 834 midi->rmidi = rmidi; 835 835 midi->in_last_port = 0; 836 - strcpy(rmidi->name, card->shortname); 836 + strscpy(rmidi->name, card->shortname); 837 837 rmidi->info_flags = SNDRV_RAWMIDI_INFO_OUTPUT | 838 838 SNDRV_RAWMIDI_INFO_INPUT | 839 839 SNDRV_RAWMIDI_INFO_DUPLEX;
+1 -3
drivers/usb/gadget/function/f_midi2.c
··· 1285 1285 1286 1286 if (alt == 0) 1287 1287 op_mode = MIDI_OP_MODE_MIDI1; 1288 - else if (alt == 1) 1289 - op_mode = MIDI_OP_MODE_MIDI2; 1290 1288 else 1291 - op_mode = MIDI_OP_MODE_UNSET; 1289 + op_mode = MIDI_OP_MODE_MIDI2; 1292 1290 1293 1291 if (midi2->operation_mode == op_mode) 1294 1292 return 0;
+3 -1
drivers/usb/gadget/function/f_uvc.c
··· 465 465 memcpy(mem, desc, (desc)->bLength); \ 466 466 *(dst)++ = mem; \ 467 467 mem += (desc)->bLength; \ 468 - } while (0); 468 + } while (0) 469 469 470 470 #define UVC_COPY_DESCRIPTORS(mem, dst, src) \ 471 471 do { \ ··· 990 990 long wait_ret = 1; 991 991 992 992 uvcg_info(f, "%s()\n", __func__); 993 + 994 + kthread_cancel_work_sync(&video->hw_submit); 993 995 994 996 if (video->async_wq) 995 997 destroy_workqueue(video->async_wq);
+13
drivers/usb/gadget/function/uvc.h
··· 71 71 72 72 #define UVCG_REQUEST_HEADER_LEN 12 73 73 74 + #define UVCG_REQ_MAX_INT_COUNT 16 75 + #define UVCG_REQ_MAX_ZERO_COUNT (2 * UVCG_REQ_MAX_INT_COUNT) 76 + 77 + #define UVCG_STREAMING_MIN_BUFFERS 2 78 + 74 79 /* ------------------------------------------------------------------------ 75 80 * Structures 76 81 */ ··· 96 91 struct work_struct pump; 97 92 struct workqueue_struct *async_wq; 98 93 94 + struct kthread_worker *kworker; 95 + struct kthread_work hw_submit; 96 + 97 + atomic_t queued; 98 + 99 99 /* Frame parameters */ 100 100 u8 bpp; 101 101 u32 fcc; 102 102 unsigned int width; 103 103 unsigned int height; 104 104 unsigned int imagesize; 105 + unsigned int interval; 105 106 struct mutex mutex; /* protects frame parameters */ 106 107 107 108 unsigned int uvc_num_requests; 109 + 110 + unsigned int reqs_per_frame; 108 111 109 112 /* Requests */ 110 113 bool is_enabled; /* tracks whether video stream is enabled */
+337 -11
drivers/usb/gadget/function/uvc_configfs.c
··· 1566 1566 /* ----------------------------------------------------------------------------- 1567 1567 * streaming/uncompressed 1568 1568 * streaming/mjpeg 1569 + * streaming/framebased 1569 1570 */ 1570 1571 1571 1572 static const char * const uvcg_format_names[] = { 1572 1573 "uncompressed", 1573 1574 "mjpeg", 1575 + "framebased", 1574 1576 }; 1575 1577 1576 1578 static struct uvcg_color_matching * ··· 1779 1777 target_fmt = container_of(to_config_group(target), struct uvcg_format, 1780 1778 group); 1781 1779 1780 + if (!target_fmt) 1781 + goto out; 1782 + 1782 1783 uvcg_format_set_indices(to_config_group(target)); 1783 1784 1784 1785 format_ptr = kzalloc(sizeof(*format_ptr), GFP_KERNEL); ··· 1821 1816 target_fmt = container_of(to_config_group(target), struct uvcg_format, 1822 1817 group); 1823 1818 1819 + if (!target_fmt) 1820 + goto out; 1821 + 1824 1822 list_for_each_entry_safe(format_ptr, tmp, &src_hdr->formats, entry) 1825 1823 if (format_ptr->fmt == target_fmt) { 1826 1824 list_del(&format_ptr->entry); ··· 1834 1826 1835 1827 --target_fmt->linked; 1836 1828 1829 + out: 1837 1830 mutex_unlock(&opts->lock); 1838 1831 mutex_unlock(su_mutex); 1839 1832 } ··· 2031 2022 UVCG_FRAME_ATTR(dw_max_bit_rate, dwMaxBitRate, 32); 2032 2023 UVCG_FRAME_ATTR(dw_max_video_frame_buffer_size, dwMaxVideoFrameBufferSize, 32); 2033 2024 UVCG_FRAME_ATTR(dw_default_frame_interval, dwDefaultFrameInterval, 32); 2025 + UVCG_FRAME_ATTR(dw_bytes_perline, dwBytesPerLine, 32); 2034 2026 2035 2027 #undef UVCG_FRAME_ATTR 2036 2028 ··· 2045 2035 int result, i; 2046 2036 char *pg = page; 2047 2037 2048 - mutex_lock(su_mutex); /* for navigating configfs hierarchy */ 2038 + mutex_lock(su_mutex); /* for navigating configfs hierarchy */ 2049 2039 2050 2040 opts_item = frm->item.ci_parent->ci_parent->ci_parent->ci_parent; 2051 2041 opts = to_f_uvc_opts(opts_item); ··· 2115 2105 2116 2106 UVC_ATTR(uvcg_frame_, dw_frame_interval, dwFrameInterval); 2117 2107 2118 - static struct configfs_attribute 
*uvcg_frame_attrs[] = { 2108 + static struct configfs_attribute *uvcg_frame_attrs1[] = { 2119 2109 &uvcg_frame_attr_b_frame_index, 2120 2110 &uvcg_frame_attr_bm_capabilities, 2121 2111 &uvcg_frame_attr_w_width, ··· 2128 2118 NULL, 2129 2119 }; 2130 2120 2131 - static const struct config_item_type uvcg_frame_type = { 2121 + static struct configfs_attribute *uvcg_frame_attrs2[] = { 2122 + &uvcg_frame_attr_b_frame_index, 2123 + &uvcg_frame_attr_bm_capabilities, 2124 + &uvcg_frame_attr_w_width, 2125 + &uvcg_frame_attr_w_height, 2126 + &uvcg_frame_attr_dw_min_bit_rate, 2127 + &uvcg_frame_attr_dw_max_bit_rate, 2128 + &uvcg_frame_attr_dw_default_frame_interval, 2129 + &uvcg_frame_attr_dw_frame_interval, 2130 + &uvcg_frame_attr_dw_bytes_perline, 2131 + NULL, 2132 + }; 2133 + 2134 + static const struct config_item_type uvcg_frame_type1 = { 2132 2135 .ct_item_ops = &uvcg_config_item_ops, 2133 - .ct_attrs = uvcg_frame_attrs, 2136 + .ct_attrs = uvcg_frame_attrs1, 2134 2137 .ct_owner = THIS_MODULE, 2138 + }; 2139 + 2140 + static const struct config_item_type uvcg_frame_type2 = { 2141 + .ct_item_ops = &uvcg_config_item_ops, 2142 + .ct_attrs = uvcg_frame_attrs2, 2143 + .ct_owner = THIS_MODULE, 2135 2144 }; 2136 2145 2137 2146 static struct config_item *uvcg_frame_make(struct config_group *group, ··· 2174 2145 h->frame.dw_max_bit_rate = 55296000; 2175 2146 h->frame.dw_max_video_frame_buffer_size = 460800; 2176 2147 h->frame.dw_default_frame_interval = 666666; 2148 + h->frame.dw_bytes_perline = 0; 2177 2149 2178 2150 opts_item = group->cg_item.ci_parent->ci_parent->ci_parent; 2179 2151 opts = to_f_uvc_opts(opts_item); ··· 2187 2157 } else if (fmt->type == UVCG_MJPEG) { 2188 2158 h->frame.b_descriptor_subtype = UVC_VS_FRAME_MJPEG; 2189 2159 h->fmt_type = UVCG_MJPEG; 2160 + } else if (fmt->type == UVCG_FRAMEBASED) { 2161 + h->frame.b_descriptor_subtype = UVC_VS_FRAME_FRAME_BASED; 2162 + h->fmt_type = UVCG_FRAMEBASED; 2190 2163 } else { 2191 2164 mutex_unlock(&opts->lock); 2192 2165 
kfree(h); ··· 2208 2175 ++fmt->num_frames; 2209 2176 mutex_unlock(&opts->lock); 2210 2177 2211 - config_item_init_type_name(&h->item, name, &uvcg_frame_type); 2178 + if (fmt->type == UVCG_FRAMEBASED) 2179 + config_item_init_type_name(&h->item, name, &uvcg_frame_type2); 2180 + else 2181 + config_item_init_type_name(&h->item, name, &uvcg_frame_type1); 2212 2182 2213 2183 return &h->item; 2214 2184 } ··· 2250 2214 2251 2215 list_for_each_entry(ci, &fmt->cg_children, ci_entry) { 2252 2216 struct uvcg_frame *frm; 2253 - 2254 - if (ci->ci_type != &uvcg_frame_type) 2255 - continue; 2256 2217 2257 2218 frm = to_uvcg_frame(ci); 2258 2219 frm->frame.b_frame_index = i++; ··· 2711 2678 }; 2712 2679 2713 2680 /* ----------------------------------------------------------------------------- 2681 + * streaming/framebased/<NAME> 2682 + */ 2683 + 2684 + static struct configfs_group_operations uvcg_framebased_group_ops = { 2685 + .make_item = uvcg_frame_make, 2686 + .drop_item = uvcg_frame_drop, 2687 + }; 2688 + 2689 + #define UVCG_FRAMEBASED_ATTR_RO(cname, aname, bits) \ 2690 + static ssize_t uvcg_framebased_##cname##_show(struct config_item *item, \ 2691 + char *page) \ 2692 + { \ 2693 + struct uvcg_framebased *u = to_uvcg_framebased(item); \ 2694 + struct f_uvc_opts *opts; \ 2695 + struct config_item *opts_item; \ 2696 + struct mutex *su_mutex = &u->fmt.group.cg_subsys->su_mutex; \ 2697 + int result; \ 2698 + \ 2699 + mutex_lock(su_mutex); /* for navigating configfs hierarchy */ \ 2700 + \ 2701 + opts_item = u->fmt.group.cg_item.ci_parent->ci_parent->ci_parent;\ 2702 + opts = to_f_uvc_opts(opts_item); \ 2703 + \ 2704 + mutex_lock(&opts->lock); \ 2705 + result = sprintf(page, "%u\n", le##bits##_to_cpu(u->desc.aname));\ 2706 + mutex_unlock(&opts->lock); \ 2707 + \ 2708 + mutex_unlock(su_mutex); \ 2709 + return result; \ 2710 + } \ 2711 + \ 2712 + UVC_ATTR_RO(uvcg_framebased_, cname, aname) 2713 + 2714 + #define UVCG_FRAMEBASED_ATTR(cname, aname, bits) \ 2715 + static ssize_t 
uvcg_framebased_##cname##_show(struct config_item *item, \ 2716 + char *page) \ 2717 + { \ 2718 + struct uvcg_framebased *u = to_uvcg_framebased(item); \ 2719 + struct f_uvc_opts *opts; \ 2720 + struct config_item *opts_item; \ 2721 + struct mutex *su_mutex = &u->fmt.group.cg_subsys->su_mutex; \ 2722 + int result; \ 2723 + \ 2724 + mutex_lock(su_mutex); /* for navigating configfs hierarchy */ \ 2725 + \ 2726 + opts_item = u->fmt.group.cg_item.ci_parent->ci_parent->ci_parent;\ 2727 + opts = to_f_uvc_opts(opts_item); \ 2728 + \ 2729 + mutex_lock(&opts->lock); \ 2730 + result = sprintf(page, "%u\n", le##bits##_to_cpu(u->desc.aname));\ 2731 + mutex_unlock(&opts->lock); \ 2732 + \ 2733 + mutex_unlock(su_mutex); \ 2734 + return result; \ 2735 + } \ 2736 + \ 2737 + static ssize_t \ 2738 + uvcg_framebased_##cname##_store(struct config_item *item, \ 2739 + const char *page, size_t len) \ 2740 + { \ 2741 + struct uvcg_framebased *u = to_uvcg_framebased(item); \ 2742 + struct f_uvc_opts *opts; \ 2743 + struct config_item *opts_item; \ 2744 + struct mutex *su_mutex = &u->fmt.group.cg_subsys->su_mutex; \ 2745 + int ret; \ 2746 + u8 num; \ 2747 + \ 2748 + mutex_lock(su_mutex); /* for navigating configfs hierarchy */ \ 2749 + \ 2750 + opts_item = u->fmt.group.cg_item.ci_parent->ci_parent->ci_parent;\ 2751 + opts = to_f_uvc_opts(opts_item); \ 2752 + \ 2753 + mutex_lock(&opts->lock); \ 2754 + if (u->fmt.linked || opts->refcnt) { \ 2755 + ret = -EBUSY; \ 2756 + goto end; \ 2757 + } \ 2758 + \ 2759 + ret = kstrtou8(page, 0, &num); \ 2760 + if (ret) \ 2761 + goto end; \ 2762 + \ 2763 + if (num > 255) { \ 2764 + ret = -EINVAL; \ 2765 + goto end; \ 2766 + } \ 2767 + u->desc.aname = num; \ 2768 + ret = len; \ 2769 + end: \ 2770 + mutex_unlock(&opts->lock); \ 2771 + mutex_unlock(su_mutex); \ 2772 + return ret; \ 2773 + } \ 2774 + \ 2775 + UVC_ATTR(uvcg_framebased_, cname, aname) 2776 + 2777 + UVCG_FRAMEBASED_ATTR_RO(b_format_index, bFormatIndex, 8); 2778 + 
UVCG_FRAMEBASED_ATTR_RO(b_bits_per_pixel, bBitsPerPixel, 8); 2779 + UVCG_FRAMEBASED_ATTR(b_default_frame_index, bDefaultFrameIndex, 8); 2780 + UVCG_FRAMEBASED_ATTR_RO(b_aspect_ratio_x, bAspectRatioX, 8); 2781 + UVCG_FRAMEBASED_ATTR_RO(b_aspect_ratio_y, bAspectRatioY, 8); 2782 + UVCG_FRAMEBASED_ATTR_RO(bm_interface_flags, bmInterfaceFlags, 8); 2783 + 2784 + #undef UVCG_FRAMEBASED_ATTR 2785 + #undef UVCG_FRAMEBASED_ATTR_RO 2786 + 2787 + static ssize_t uvcg_framebased_guid_format_show(struct config_item *item, 2788 + char *page) 2789 + { 2790 + struct uvcg_framebased *ch = to_uvcg_framebased(item); 2791 + struct f_uvc_opts *opts; 2792 + struct config_item *opts_item; 2793 + struct mutex *su_mutex = &ch->fmt.group.cg_subsys->su_mutex; 2794 + 2795 + mutex_lock(su_mutex); /* for navigating configfs hierarchy */ 2796 + 2797 + opts_item = ch->fmt.group.cg_item.ci_parent->ci_parent->ci_parent; 2798 + opts = to_f_uvc_opts(opts_item); 2799 + 2800 + mutex_lock(&opts->lock); 2801 + memcpy(page, ch->desc.guidFormat, sizeof(ch->desc.guidFormat)); 2802 + mutex_unlock(&opts->lock); 2803 + 2804 + mutex_unlock(su_mutex); 2805 + 2806 + return sizeof(ch->desc.guidFormat); 2807 + } 2808 + 2809 + static ssize_t uvcg_framebased_guid_format_store(struct config_item *item, 2810 + const char *page, size_t len) 2811 + { 2812 + struct uvcg_framebased *ch = to_uvcg_framebased(item); 2813 + struct f_uvc_opts *opts; 2814 + struct config_item *opts_item; 2815 + struct mutex *su_mutex = &ch->fmt.group.cg_subsys->su_mutex; 2816 + int ret; 2817 + 2818 + mutex_lock(su_mutex); /* for navigating configfs hierarchy */ 2819 + 2820 + opts_item = ch->fmt.group.cg_item.ci_parent->ci_parent->ci_parent; 2821 + opts = to_f_uvc_opts(opts_item); 2822 + 2823 + mutex_lock(&opts->lock); 2824 + if (ch->fmt.linked || opts->refcnt) { 2825 + ret = -EBUSY; 2826 + goto end; 2827 + } 2828 + 2829 + memcpy(ch->desc.guidFormat, page, 2830 + min(sizeof(ch->desc.guidFormat), len)); 2831 + ret = sizeof(ch->desc.guidFormat); 2832 
+ 2833 + end: 2834 + mutex_unlock(&opts->lock); 2835 + mutex_unlock(su_mutex); 2836 + return ret; 2837 + } 2838 + 2839 + UVC_ATTR(uvcg_framebased_, guid_format, guidFormat); 2840 + 2841 + static inline ssize_t 2842 + uvcg_framebased_bma_controls_show(struct config_item *item, char *page) 2843 + { 2844 + struct uvcg_framebased *u = to_uvcg_framebased(item); 2845 + 2846 + return uvcg_format_bma_controls_show(&u->fmt, page); 2847 + } 2848 + 2849 + static inline ssize_t 2850 + uvcg_framebased_bma_controls_store(struct config_item *item, 2851 + const char *page, size_t len) 2852 + { 2853 + struct uvcg_framebased *u = to_uvcg_framebased(item); 2854 + 2855 + return uvcg_format_bma_controls_store(&u->fmt, page, len); 2856 + } 2857 + 2858 + UVC_ATTR(uvcg_framebased_, bma_controls, bmaControls); 2859 + 2860 + static struct configfs_attribute *uvcg_framebased_attrs[] = { 2861 + &uvcg_framebased_attr_b_format_index, 2862 + &uvcg_framebased_attr_b_default_frame_index, 2863 + &uvcg_framebased_attr_b_bits_per_pixel, 2864 + &uvcg_framebased_attr_b_aspect_ratio_x, 2865 + &uvcg_framebased_attr_b_aspect_ratio_y, 2866 + &uvcg_framebased_attr_bm_interface_flags, 2867 + &uvcg_framebased_attr_bma_controls, 2868 + &uvcg_framebased_attr_guid_format, 2869 + NULL, 2870 + }; 2871 + 2872 + static const struct config_item_type uvcg_framebased_type = { 2873 + .ct_item_ops = &uvcg_config_item_ops, 2874 + .ct_group_ops = &uvcg_framebased_group_ops, 2875 + .ct_attrs = uvcg_framebased_attrs, 2876 + .ct_owner = THIS_MODULE, 2877 + }; 2878 + 2879 + static struct config_group *uvcg_framebased_make(struct config_group *group, 2880 + const char *name) 2881 + { 2882 + static char guid[] = { /*Declear frame based as H264 format*/ 2883 + 'H', '2', '6', '4', 0x00, 0x00, 0x10, 0x00, 2884 + 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71 2885 + }; 2886 + struct uvcg_framebased *h; 2887 + 2888 + h = kzalloc(sizeof(*h), GFP_KERNEL); 2889 + if (!h) 2890 + return ERR_PTR(-ENOMEM); 2891 + 2892 + h->desc.bLength = 
UVC_DT_FORMAT_FRAMEBASED_SIZE; 2893 + h->desc.bDescriptorType = USB_DT_CS_INTERFACE; 2894 + h->desc.bDescriptorSubType = UVC_VS_FORMAT_FRAME_BASED; 2895 + memcpy(h->desc.guidFormat, guid, sizeof(guid)); 2896 + h->desc.bBitsPerPixel = 0; 2897 + h->desc.bDefaultFrameIndex = 1; 2898 + h->desc.bAspectRatioX = 0; 2899 + h->desc.bAspectRatioY = 0; 2900 + h->desc.bmInterfaceFlags = 0; 2901 + h->desc.bCopyProtect = 0; 2902 + h->desc.bVariableSize = 1; 2903 + 2904 + INIT_LIST_HEAD(&h->fmt.frames); 2905 + h->fmt.type = UVCG_FRAMEBASED; 2906 + config_group_init_type_name(&h->fmt.group, name, 2907 + &uvcg_framebased_type); 2908 + 2909 + return &h->fmt.group; 2910 + } 2911 + 2912 + static struct configfs_group_operations uvcg_framebased_grp_ops = { 2913 + .make_group = uvcg_framebased_make, 2914 + }; 2915 + 2916 + static const struct uvcg_config_group_type uvcg_framebased_grp_type = { 2917 + .type = { 2918 + .ct_item_ops = &uvcg_config_item_ops, 2919 + .ct_group_ops = &uvcg_framebased_grp_ops, 2920 + .ct_owner = THIS_MODULE, 2921 + }, 2922 + .name = "framebased", 2923 + }; 2924 + 2925 + /* ----------------------------------------------------------------------------- 2714 2926 * streaming/color_matching/default 2715 2927 */ 2716 2928 ··· 3190 2912 if (ret) 3191 2913 return ret; 3192 2914 grp = &f->fmt->group; 2915 + j = 0; 3193 2916 list_for_each_entry(item, &grp->cg_children, ci_entry) { 3194 2917 frm = to_uvcg_frame(item); 3195 2918 ret = fun(frm, priv2, priv3, j++, UVCG_FRAME); ··· 3244 2965 container_of(fmt, struct uvcg_mjpeg, fmt); 3245 2966 3246 2967 *size += sizeof(m->desc); 2968 + } else if (fmt->type == UVCG_FRAMEBASED) { 2969 + struct uvcg_framebased *f = 2970 + container_of(fmt, struct uvcg_framebased, fmt); 2971 + 2972 + *size += sizeof(f->desc); 3247 2973 } else { 3248 2974 return -EINVAL; 3249 2975 } ··· 3259 2975 int sz = sizeof(frm->dw_frame_interval); 3260 2976 3261 2977 *size += sizeof(frm->frame); 2978 + /* 2979 + * framebased has duplicate member with 
uncompressed and 2980 + * mjpeg, so minus it 2981 + */ 2982 + *size -= sizeof(u32); 3262 2983 *size += frm->frame.b_frame_interval_type * sz; 3263 2984 } 3264 2985 break; ··· 3276 2987 } 3277 2988 3278 2989 ++*count; 2990 + 2991 + return 0; 2992 + } 2993 + 2994 + static int __uvcg_copy_framebased_desc(void *dest, struct uvcg_frame *frm, 2995 + int sz) 2996 + { 2997 + struct uvc_frame_framebased *desc = dest; 2998 + 2999 + desc->bLength = frm->frame.b_length; 3000 + desc->bDescriptorType = frm->frame.b_descriptor_type; 3001 + desc->bDescriptorSubType = frm->frame.b_descriptor_subtype; 3002 + desc->bFrameIndex = frm->frame.b_frame_index; 3003 + desc->bmCapabilities = frm->frame.bm_capabilities; 3004 + desc->wWidth = frm->frame.w_width; 3005 + desc->wHeight = frm->frame.w_height; 3006 + desc->dwMinBitRate = frm->frame.dw_min_bit_rate; 3007 + desc->dwMaxBitRate = frm->frame.dw_max_bit_rate; 3008 + desc->dwDefaultFrameInterval = frm->frame.dw_default_frame_interval; 3009 + desc->bFrameIntervalType = frm->frame.b_frame_interval_type; 3010 + desc->dwBytesPerLine = frm->frame.dw_bytes_perline; 3279 3011 3280 3012 return 0; 3281 3013 } ··· 3355 3045 m->desc.bNumFrameDescriptors = fmt->num_frames; 3356 3046 memcpy(*dest, &m->desc, sizeof(m->desc)); 3357 3047 *dest += sizeof(m->desc); 3048 + } else if (fmt->type == UVCG_FRAMEBASED) { 3049 + struct uvcg_framebased *f = 3050 + container_of(fmt, struct uvcg_framebased, 3051 + fmt); 3052 + 3053 + f->desc.bFormatIndex = n + 1; 3054 + f->desc.bNumFrameDescriptors = fmt->num_frames; 3055 + memcpy(*dest, &f->desc, sizeof(f->desc)); 3056 + *dest += sizeof(f->desc); 3358 3057 } else { 3359 3058 return -EINVAL; 3360 3059 } ··· 3373 3054 struct uvcg_frame *frm = priv1; 3374 3055 struct uvc_descriptor_header *h = *dest; 3375 3056 3376 - sz = sizeof(frm->frame); 3377 - memcpy(*dest, &frm->frame, sz); 3057 + sz = sizeof(frm->frame) - 4; 3058 + if (frm->fmt_type != UVCG_FRAMEBASED) 3059 + memcpy(*dest, &frm->frame, sz); 3060 + else 3061 + 
__uvcg_copy_framebased_desc(*dest, frm, sz); 3378 3062 *dest += sz; 3379 3063 sz = frm->frame.b_frame_interval_type * 3380 3064 sizeof(*frm->dw_frame_interval); ··· 3388 3066 frm->frame.b_frame_interval_type); 3389 3067 else if (frm->fmt_type == UVCG_MJPEG) 3390 3068 h->bLength = UVC_DT_FRAME_MJPEG_SIZE( 3391 - frm->frame.b_frame_interval_type); 3069 + frm->frame.b_frame_interval_type); 3070 + else if (frm->fmt_type == UVCG_FRAMEBASED) 3071 + h->bLength = UVC_DT_FRAME_FRAMEBASED_SIZE( 3072 + frm->frame.b_frame_interval_type); 3392 3073 } 3393 3074 break; 3394 3075 case UVCG_COLOR_MATCHING: { ··· 3610 3285 &uvcg_streaming_header_grp_type, 3611 3286 &uvcg_uncompressed_grp_type, 3612 3287 &uvcg_mjpeg_grp_type, 3288 + &uvcg_framebased_grp_type, 3613 3289 &uvcg_color_matching_grp_type, 3614 3290 &uvcg_streaming_class_grp_type, 3615 3291 NULL,
+16
drivers/usb/gadget/function/uvc_configfs.h
··· 49 49 enum uvcg_format_type { 50 50 UVCG_UNCOMPRESSED = 0, 51 51 UVCG_MJPEG, 52 + UVCG_FRAMEBASED, 52 53 }; 53 54 54 55 struct uvcg_format { ··· 106 105 u32 dw_max_video_frame_buffer_size; 107 106 u32 dw_default_frame_interval; 108 107 u8 b_frame_interval_type; 108 + u32 dw_bytes_perline; 109 109 } __attribute__((packed)) frame; 110 110 u32 *dw_frame_interval; 111 111 }; ··· 142 140 static inline struct uvcg_mjpeg *to_uvcg_mjpeg(struct config_item *item) 143 141 { 144 142 return container_of(to_uvcg_format(item), struct uvcg_mjpeg, fmt); 143 + } 144 + 145 + /* ----------------------------------------------------------------------------- 146 + * streaming/framebased/<NAME> 147 + */ 148 + 149 + struct uvcg_framebased { 150 + struct uvcg_format fmt; 151 + struct uvc_format_framebased desc; 152 + }; 153 + 154 + static inline struct uvcg_framebased *to_uvcg_framebased(struct config_item *item) 155 + { 156 + return container_of(to_uvcg_format(item), struct uvcg_framebased, fmt); 145 157 } 146 158 147 159 /* -----------------------------------------------------------------------------
+11 -15
drivers/usb/gadget/function/uvc_queue.c
··· 21 21 #include <media/videobuf2-vmalloc.h> 22 22 23 23 #include "uvc.h" 24 + #include "uvc_video.h" 24 25 25 26 /* ------------------------------------------------------------------------ 26 27 * Video buffers queue management. ··· 45 44 { 46 45 struct uvc_video_queue *queue = vb2_get_drv_priv(vq); 47 46 struct uvc_video *video = container_of(queue, struct uvc_video, queue); 48 - unsigned int req_size; 49 - unsigned int nreq; 50 47 51 48 if (*nbuffers > UVC_MAX_VIDEO_BUFFERS) 52 49 *nbuffers = UVC_MAX_VIDEO_BUFFERS; 50 + if (*nbuffers < UVCG_STREAMING_MIN_BUFFERS) 51 + *nbuffers = UVCG_STREAMING_MIN_BUFFERS; 53 52 54 53 *nplanes = 1; 55 54 56 55 sizes[0] = video->imagesize; 57 - 58 - req_size = video->ep->maxpacket 59 - * max_t(unsigned int, video->ep->maxburst, 1) 60 - * (video->ep->mult); 61 - 62 - /* We divide by two, to increase the chance to run 63 - * into fewer requests for smaller framesizes. 64 - */ 65 - nreq = DIV_ROUND_UP(DIV_ROUND_UP(sizes[0], 2), req_size); 66 - nreq = clamp(nreq, 4U, 64U); 67 - video->uvc_num_requests = nreq; 68 56 69 57 return 0; 70 58 } ··· 61 71 static int uvc_buffer_prepare(struct vb2_buffer *vb) 62 72 { 63 73 struct uvc_video_queue *queue = vb2_get_drv_priv(vb->vb2_queue); 74 + struct uvc_video *video = container_of(queue, struct uvc_video, queue); 64 75 struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); 65 76 struct uvc_buffer *buf = container_of(vbuf, struct uvc_buffer, buf); 66 77 ··· 82 91 buf->mem = vb2_plane_vaddr(vb, 0); 83 92 } 84 93 buf->length = vb2_plane_size(vb, 0); 85 - if (vb->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) 94 + if (vb->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) { 86 95 buf->bytesused = 0; 87 - else 96 + } else { 88 97 buf->bytesused = vb2_get_plane_payload(vb, 0); 98 + buf->req_payload_size = 99 + DIV_ROUND_UP(buf->bytesused + 100 + (video->reqs_per_frame * UVCG_REQUEST_HEADER_LEN), 101 + video->reqs_per_frame); 102 + } 89 103 90 104 return 0; 91 105 }
+2
drivers/usb/gadget/function/uvc_queue.h
··· 39 39 unsigned int offset; 40 40 unsigned int length; 41 41 unsigned int bytesused; 42 + /* req_payload_size: only used with isoc */ 43 + unsigned int req_payload_size; 42 44 }; 43 45 44 46 #define UVC_QUEUE_DISCONNECTED (1 << 0)
+11
drivers/usb/gadget/function/uvc_trace.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * trace.c - USB UVC Gadget Trace Support 4 + * 5 + * Copyright (C) 2024 Pengutronix e.K. 6 + * 7 + * Author: Michael Grzeschik <m.grzeschik@pengutronix.de> 8 + */ 9 + 10 + #define CREATE_TRACE_POINTS 11 + #include "uvc_trace.h"
+60
drivers/usb/gadget/function/uvc_trace.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * trace.h - USB UVC Gadget Trace Support 4 + * 5 + * Copyright (C) 2024 Pengutronix e.K. 6 + * 7 + * Author: Michael Grzeschik <m.grzeschik@pengutronix.de> 8 + */ 9 + 10 + #undef TRACE_SYSTEM 11 + #define TRACE_SYSTEM uvcg 12 + 13 + #if !defined(__UVCG_TRACE_H) || defined(TRACE_HEADER_MULTI_READ) 14 + #define __UVCG_TRACE_H 15 + 16 + #include <linux/types.h> 17 + #include <linux/tracepoint.h> 18 + #include <linux/usb/gadget.h> 19 + #include <asm/byteorder.h> 20 + 21 + DECLARE_EVENT_CLASS(uvcg_video_req, 22 + TP_PROTO(struct usb_request *req, u32 queued), 23 + TP_ARGS(req, queued), 24 + TP_STRUCT__entry( 25 + __field(struct usb_request *, req) 26 + __field(u32, length) 27 + __field(u32, queued) 28 + ), 29 + TP_fast_assign( 30 + __entry->req = req; 31 + __entry->length = req->length; 32 + __entry->queued = queued; 33 + ), 34 + TP_printk("req %p length %u queued %u", 35 + __entry->req, 36 + __entry->length, 37 + __entry->queued) 38 + ); 39 + 40 + DEFINE_EVENT(uvcg_video_req, uvcg_video_complete, 41 + TP_PROTO(struct usb_request *req, u32 queued), 42 + TP_ARGS(req, queued) 43 + ); 44 + 45 + DEFINE_EVENT(uvcg_video_req, uvcg_video_queue, 46 + TP_PROTO(struct usb_request *req, u32 queued), 47 + TP_ARGS(req, queued) 48 + ); 49 + 50 + #endif /* __UVCG_TRACE_H */ 51 + 52 + /* this part has to be here */ 53 + 54 + #undef TRACE_INCLUDE_PATH 55 + #define TRACE_INCLUDE_PATH . 56 + 57 + #undef TRACE_INCLUDE_FILE 58 + #define TRACE_INCLUDE_FILE uvc_trace 59 + 60 + #include <trace/define_trace.h>
+65 -1
drivers/usb/gadget/function/uvc_v4l2.c
··· 31 31 { 32 32 char guid[16] = UVC_GUID_FORMAT_MJPEG; 33 33 const struct uvc_format_desc *format; 34 - struct uvcg_uncompressed *unc; 35 34 36 35 if (uformat->type == UVCG_UNCOMPRESSED) { 36 + struct uvcg_uncompressed *unc; 37 + 37 38 unc = to_uvcg_uncompressed(&uformat->group.cg_item); 39 + if (!unc) 40 + return ERR_PTR(-EINVAL); 41 + 42 + memcpy(guid, unc->desc.guidFormat, sizeof(guid)); 43 + } else if (uformat->type == UVCG_FRAMEBASED) { 44 + struct uvcg_framebased *unc; 45 + 46 + unc = to_uvcg_framebased(&uformat->group.cg_item); 38 47 if (!unc) 39 48 return ERR_PTR(-EINVAL); 40 49 ··· 323 314 return ret; 324 315 } 325 316 317 + static int uvc_v4l2_g_parm(struct file *file, void *fh, 318 + struct v4l2_streamparm *parm) 319 + { 320 + struct video_device *vdev = video_devdata(file); 321 + struct uvc_device *uvc = video_get_drvdata(vdev); 322 + struct uvc_video *video = &uvc->video; 323 + struct v4l2_fract timeperframe; 324 + 325 + if (!V4L2_TYPE_IS_OUTPUT(parm->type)) 326 + return -EINVAL; 327 + 328 + /* Return the actual frame period. 
*/ 329 + timeperframe.numerator = video->interval; 330 + timeperframe.denominator = 10000000; 331 + v4l2_simplify_fraction(&timeperframe.numerator, 332 + &timeperframe.denominator, 8, 333); 333 + 334 + uvcg_dbg(&uvc->func, "Getting frame interval of %u/%u (%u)\n", 335 + timeperframe.numerator, timeperframe.denominator, 336 + video->interval); 337 + 338 + parm->parm.output.timeperframe = timeperframe; 339 + parm->parm.output.capability = V4L2_CAP_TIMEPERFRAME; 340 + 341 + return 0; 342 + } 343 + 344 + static int uvc_v4l2_s_parm(struct file *file, void *fh, 345 + struct v4l2_streamparm *parm) 346 + { 347 + struct video_device *vdev = video_devdata(file); 348 + struct uvc_device *uvc = video_get_drvdata(vdev); 349 + struct uvc_video *video = &uvc->video; 350 + struct v4l2_fract timeperframe; 351 + 352 + if (!V4L2_TYPE_IS_OUTPUT(parm->type)) 353 + return -EINVAL; 354 + 355 + timeperframe = parm->parm.output.timeperframe; 356 + 357 + video->interval = v4l2_fraction_to_interval(timeperframe.numerator, 358 + timeperframe.denominator); 359 + 360 + uvcg_dbg(&uvc->func, "Setting frame interval to %u/%u (%u)\n", 361 + timeperframe.numerator, timeperframe.denominator, 362 + video->interval); 363 + 364 + return 0; 365 + } 366 + 326 367 static int 327 368 uvc_v4l2_enum_frameintervals(struct file *file, void *fh, 328 369 struct v4l2_frmivalenum *fival) ··· 555 496 if (ret < 0) 556 497 return ret; 557 498 499 + if (uvc->state != UVC_STATE_STREAMING) 500 + return 0; 501 + 558 502 uvc->state = UVC_STATE_CONNECTED; 559 503 uvc_function_setup_continue(uvc, 1); 560 504 return 0; ··· 649 587 .vidioc_dqbuf = uvc_v4l2_dqbuf, 650 588 .vidioc_streamon = uvc_v4l2_streamon, 651 589 .vidioc_streamoff = uvc_v4l2_streamoff, 590 + .vidioc_s_parm = uvc_v4l2_s_parm, 591 + .vidioc_g_parm = uvc_v4l2_g_parm, 652 592 .vidioc_subscribe_event = uvc_v4l2_subscribe_event, 653 593 .vidioc_unsubscribe_event = uvc_v4l2_unsubscribe_event, 654 594 .vidioc_default = uvc_v4l2_ioctl_default,
+164 -110
drivers/usb/gadget/function/uvc_video.c
··· 19 19 #include "uvc.h" 20 20 #include "uvc_queue.h" 21 21 #include "uvc_video.h" 22 + #include "uvc_trace.h" 22 23 23 24 /* -------------------------------------------------------------------------- 24 25 * Video codecs ··· 79 78 80 79 /* Copy video data to the USB buffer. */ 81 80 mem = buf->mem + queue->buf_used; 82 - nbytes = min((unsigned int)len, buf->bytesused - queue->buf_used); 81 + nbytes = min_t(unsigned int, len, buf->bytesused - queue->buf_used); 83 82 84 83 memcpy(data, mem, nbytes); 85 84 queue->buf_used += nbytes; ··· 105 104 } 106 105 107 106 /* Process video data. */ 108 - len = min((int)(video->max_payload_size - video->payload_size), len); 107 + len = min_t(int, video->max_payload_size - video->payload_size, len); 109 108 ret = uvc_video_encode_data(video, buf, mem, len); 110 109 111 110 video->payload_size += ret; ··· 137 136 unsigned int pending = buf->bytesused - video->queue.buf_used; 138 137 struct uvc_request *ureq = req->context; 139 138 struct scatterlist *sg, *iter; 140 - unsigned int len = video->req_size; 139 + unsigned int len = buf->req_payload_size; 141 140 unsigned int sg_left, part = 0; 142 141 unsigned int i; 143 142 int header_len; ··· 147 146 148 147 /* Init the header. */ 149 148 header_len = uvc_video_encode_header(video, buf, ureq->header, 150 - video->req_size); 149 + buf->req_payload_size); 151 150 sg_set_buf(sg, ureq->header, header_len); 152 151 len -= header_len; 153 152 154 153 if (pending <= len) 155 154 len = pending; 156 155 157 - req->length = (len == pending) ? 158 - len + header_len : video->req_size; 156 + req->length = (len == pending) ? len + header_len : 157 + buf->req_payload_size; 159 158 160 159 /* Init the pending sgs with payload */ 161 160 sg = sg_next(sg); ··· 203 202 { 204 203 void *mem = req->buf; 205 204 struct uvc_request *ureq = req->context; 206 - int len = video->req_size; 205 + int len = buf->req_payload_size; 207 206 int ret; 208 207 209 208 /* Add the header. 
*/ ··· 215 214 ret = uvc_video_encode_data(video, buf, mem, len); 216 215 len -= ret; 217 216 218 - req->length = video->req_size - len; 217 + req->length = buf->req_payload_size - len; 219 218 220 219 if (buf->bytesused == video->queue.buf_used || 221 220 video->queue.flags & UVC_QUEUE_DROP_INCOMPLETE) { ··· 270 269 } 271 270 } 272 271 272 + atomic_inc(&video->queued); 273 + 274 + trace_uvcg_video_queue(req, atomic_read(&video->queued)); 275 + 273 276 return ret; 274 277 } 275 278 ··· 309 304 */ 310 305 if (list_empty(&video->req_free) || ureq->last_buf || 311 306 !(video->req_int_count % 312 - DIV_ROUND_UP(video->uvc_num_requests, 4))) { 307 + min(DIV_ROUND_UP(video->uvc_num_requests, 4), UVCG_REQ_MAX_INT_COUNT))) { 313 308 video->req_int_count = 0; 314 309 req->no_interrupt = 0; 315 310 } else { ··· 327 322 return 0; 328 323 } 329 324 330 - /* 331 - * Must only be called from uvcg_video_enable - since after that we only want to 332 - * queue requests to the endpoint from the uvc_video_complete complete handler. 333 - * This function is needed in order to 'kick start' the flow of requests from 334 - * gadget driver to the usb controller. 335 - */ 336 - static void uvc_video_ep_queue_initial_requests(struct uvc_video *video) 337 - { 338 - struct usb_request *req = NULL; 339 - unsigned long flags = 0; 340 - unsigned int count = 0; 341 - int ret = 0; 342 - 343 - /* 344 - * We only queue half of the free list since we still want to have 345 - * some free usb_requests in the free list for the video_pump async_wq 346 - * thread to encode uvc buffers into. Otherwise we could get into a 347 - * situation where the free list does not have any usb requests to 348 - * encode into - we always end up queueing 0 length requests to the 349 - * end point. 
350 - */ 351 - unsigned int half_list_size = video->uvc_num_requests / 2; 352 - 353 - spin_lock_irqsave(&video->req_lock, flags); 354 - /* 355 - * Take these requests off the free list and queue them all to the 356 - * endpoint. Since we queue 0 length requests with the req_lock held, 357 - * there isn't any 'data' race involved here with the complete handler. 358 - */ 359 - while (count < half_list_size) { 360 - req = list_first_entry(&video->req_free, struct usb_request, 361 - list); 362 - list_del(&req->list); 363 - req->length = 0; 364 - ret = uvcg_video_ep_queue(video, req); 365 - if (ret < 0) { 366 - uvcg_queue_cancel(&video->queue, 0); 367 - break; 368 - } 369 - count++; 370 - } 371 - spin_unlock_irqrestore(&video->req_lock, flags); 372 - } 373 - 374 325 static void 375 326 uvc_video_complete(struct usb_ep *ep, struct usb_request *req) 376 327 { ··· 334 373 struct uvc_video *video = ureq->video; 335 374 struct uvc_video_queue *queue = &video->queue; 336 375 struct uvc_buffer *last_buf; 337 - struct usb_request *to_queue = req; 338 376 unsigned long flags; 339 - bool is_bulk = video->max_payload_size; 340 - int ret = 0; 341 377 342 378 spin_lock_irqsave(&video->req_lock, flags); 379 + atomic_dec(&video->queued); 343 380 if (!video->is_enabled) { 344 381 /* 345 382 * When is_enabled is false, uvcg_video_disable() ensures ··· 397 438 return; 398 439 } 399 440 441 + list_add_tail(&req->list, &video->req_free); 400 442 /* 401 - * Here we check whether any request is available in the ready 402 - * list. If it is, queue it to the ep and add the current 403 - * usb_request to the req_free list - for video_pump to fill in. 404 - * Otherwise, just use the current usb_request to queue a 0 405 - * length request to the ep. Since we always add to the req_free 406 - * list if we dequeue from the ready list, there will never 407 - * be a situation where the req_free list is completely out of 408 - * requests and cannot recover. 
443 + * Queue work to the wq as well since it is possible that a 444 + * buffer may not have been completely encoded with the set of 445 + * in-flight usb requests for which the complete callbacks are 446 + * firing. 447 + * In that case, if we do not queue work to the worker thread, 448 + * the buffer will never be marked as complete - and therefore 449 + * not be returned to userspace. As a result, 450 + * dequeue -> queue -> dequeue flow of uvc buffers will not 451 + * happen. Since there is a new free request, wake up the pump. 409 452 */ 410 - to_queue->length = 0; 411 - if (!list_empty(&video->req_ready)) { 412 - to_queue = list_first_entry(&video->req_ready, 413 - struct usb_request, list); 414 - list_del(&to_queue->list); 415 - list_add_tail(&req->list, &video->req_free); 416 - /* 417 - * Queue work to the wq as well since it is possible that a 418 - * buffer may not have been completely encoded with the set of 419 - * in-flight usb requests for which the complete callbacks are 420 - * firing. 421 - * In that case, if we do not queue work to the worker thread, 422 - * the buffer will never be marked as complete - and therefore 423 - * not be returned to userspace. As a result, 424 - * dequeue -> queue -> dequeue flow of uvc buffers will not 425 - * happen. 426 - */ 427 - queue_work(video->async_wq, &video->pump); 428 - } 429 - /* 430 - * Queue to the endpoint. The actual queueing to ep will 431 - * only happen on one thread - the async_wq for bulk endpoints 432 - * and this thread for isoc endpoints. 433 - */ 434 - ret = uvcg_video_usb_req_queue(video, to_queue, !is_bulk); 435 - if (ret < 0) { 436 - /* 437 - * Endpoint error, but the stream is still enabled. 438 - * Put request back in req_free for it to be cleaned 439 - * up later. 
440 - */ 441 - list_add_tail(&to_queue->list, &video->req_free); 442 - } 453 + queue_work(video->async_wq, &video->pump); 454 + 455 + trace_uvcg_video_complete(req, atomic_read(&video->queued)); 443 456 444 457 spin_unlock_irqrestore(&video->req_lock, flags); 458 + 459 + kthread_queue_work(video->kworker, &video->hw_submit); 460 + } 461 + 462 + static void uvcg_video_hw_submit(struct kthread_work *work) 463 + { 464 + struct uvc_video *video = container_of(work, struct uvc_video, hw_submit); 465 + bool is_bulk = video->max_payload_size; 466 + unsigned long flags; 467 + struct usb_request *req; 468 + int ret = 0; 469 + 470 + while (true) { 471 + if (!video->ep->enabled) 472 + return; 473 + spin_lock_irqsave(&video->req_lock, flags); 474 + /* 475 + * Here we check whether any request is available in the ready 476 + * list. If it is, queue it to the ep and add the current 477 + * usb_request to the req_free list - for video_pump to fill in. 478 + * Otherwise, just use the current usb_request to queue a 0 479 + * length request to the ep. Since we always add to the req_free 480 + * list if we dequeue from the ready list, there will never 481 + * be a situation where the req_free list is completely out of 482 + * requests and cannot recover. 483 + */ 484 + if (!list_empty(&video->req_ready)) { 485 + req = list_first_entry(&video->req_ready, 486 + struct usb_request, list); 487 + } else { 488 + if (list_empty(&video->req_free) || 489 + (atomic_read(&video->queued) > UVCG_REQ_MAX_ZERO_COUNT)) { 490 + spin_unlock_irqrestore(&video->req_lock, flags); 491 + 492 + return; 493 + } 494 + req = list_first_entry(&video->req_free, struct usb_request, 495 + list); 496 + req->length = 0; 497 + } 498 + list_del(&req->list); 499 + 500 + /* 501 + * Queue to the endpoint. The actual queueing to ep will 502 + * only happen on one thread - the async_wq for bulk endpoints 503 + * and this thread for isoc endpoints. 
504 + */ 505 + ret = uvcg_video_usb_req_queue(video, req, !is_bulk); 506 + if (ret < 0) { 507 + /* 508 + * Endpoint error, but the stream is still enabled. 509 + * Put request back in req_free for it to be cleaned 510 + * up later. 511 + */ 512 + list_add_tail(&req->list, &video->req_free); 513 + /* 514 + * There is a new free request - wake up the pump. 515 + */ 516 + queue_work(video->async_wq, &video->pump); 517 + 518 + } 519 + 520 + spin_unlock_irqrestore(&video->req_lock, flags); 521 + } 445 522 } 446 523 447 524 static int ··· 491 496 INIT_LIST_HEAD(&video->ureqs); 492 497 INIT_LIST_HEAD(&video->req_free); 493 498 INIT_LIST_HEAD(&video->req_ready); 494 - video->req_size = 0; 495 499 return 0; 500 + } 501 + 502 + static void 503 + uvc_video_prep_requests(struct uvc_video *video) 504 + { 505 + struct uvc_device *uvc = container_of(video, struct uvc_device, video); 506 + struct usb_composite_dev *cdev = uvc->func.config->cdev; 507 + unsigned int interval_duration = video->ep->desc->bInterval * 1250; 508 + unsigned int max_req_size, req_size, header_size; 509 + unsigned int nreq; 510 + 511 + max_req_size = video->ep->maxpacket 512 + * max_t(unsigned int, video->ep->maxburst, 1) 513 + * (video->ep->mult); 514 + 515 + if (!usb_endpoint_xfer_isoc(video->ep->desc)) { 516 + video->req_size = max_req_size; 517 + video->reqs_per_frame = video->uvc_num_requests = 518 + DIV_ROUND_UP(video->imagesize, max_req_size); 519 + 520 + return; 521 + } 522 + 523 + if (cdev->gadget->speed < USB_SPEED_HIGH) 524 + interval_duration = video->ep->desc->bInterval * 10000; 525 + 526 + nreq = DIV_ROUND_UP(video->interval, interval_duration); 527 + 528 + header_size = nreq * UVCG_REQUEST_HEADER_LEN; 529 + 530 + req_size = DIV_ROUND_UP(video->imagesize + header_size, nreq); 531 + 532 + if (req_size > max_req_size) { 533 + /* The prepared interval length and expected buffer size 534 + * is not possible to stream with the currently configured 535 + * isoc bandwidth. Fallback to the maximum. 
536 + */ 537 + req_size = max_req_size; 538 + } 539 + video->req_size = req_size; 540 + 541 + /* We need to compensate the amount of requests to be 542 + * allocated with the maximum amount of zero length requests. 543 + * Since it is possible that hw_submit will initially 544 + * enqueue some zero length requests and we then will not be 545 + * able to fully encode one frame. 546 + */ 547 + video->uvc_num_requests = nreq + UVCG_REQ_MAX_ZERO_COUNT; 548 + video->reqs_per_frame = nreq; 496 549 } 497 550 498 551 static int 499 552 uvc_video_alloc_requests(struct uvc_video *video) 500 553 { 501 554 struct uvc_request *ureq; 502 - unsigned int req_size; 503 555 unsigned int i; 504 556 int ret = -ENOMEM; 505 557 506 - BUG_ON(video->req_size); 507 - 508 - req_size = video->ep->maxpacket 509 - * max_t(unsigned int, video->ep->maxburst, 1) 510 - * (video->ep->mult); 558 + /* 559 + * calculate in uvc_video_prep_requests 560 + * - video->uvc_num_requests 561 + * - video->req_size 562 + */ 563 + uvc_video_prep_requests(video); 511 564 512 565 for (i = 0; i < video->uvc_num_requests; i++) { 513 566 ureq = kzalloc(sizeof(struct uvc_request), GFP_KERNEL); ··· 566 523 567 524 list_add_tail(&ureq->list, &video->ureqs); 568 525 569 - ureq->req_buffer = kmalloc(req_size, GFP_KERNEL); 526 + ureq->req_buffer = kmalloc(video->req_size, GFP_KERNEL); 570 527 if (ureq->req_buffer == NULL) 571 528 goto error; 572 529 ··· 584 541 list_add_tail(&ureq->req->list, &video->req_free); 585 542 /* req_size/PAGE_SIZE + 1 for overruns and + 1 for header */ 586 543 sg_alloc_table(&ureq->sgt, 587 - DIV_ROUND_UP(req_size - UVCG_REQUEST_HEADER_LEN, 544 + DIV_ROUND_UP(video->req_size - UVCG_REQUEST_HEADER_LEN, 588 545 PAGE_SIZE) + 2, GFP_KERNEL); 589 546 } 590 - 591 - video->req_size = req_size; 592 547 593 548 return 0; 594 549 ··· 740 699 INIT_LIST_HEAD(&video->ureqs); 741 700 INIT_LIST_HEAD(&video->req_free); 742 701 INIT_LIST_HEAD(&video->req_ready); 743 - video->req_size = 0; 744 702 
spin_unlock_irqrestore(&video->req_lock, flags); 745 703 746 704 /* ··· 792 752 793 753 video->req_int_count = 0; 794 754 795 - uvc_video_ep_queue_initial_requests(video); 755 + atomic_set(&video->queued, 0); 756 + 757 + kthread_queue_work(video->kworker, &video->hw_submit); 796 758 queue_work(video->async_wq, &video->pump); 797 759 798 760 return ret; ··· 817 775 if (!video->async_wq) 818 776 return -EINVAL; 819 777 778 + /* Allocate a kthread for asynchronous hw submit handler. */ 779 + video->kworker = kthread_create_worker(0, "UVCG"); 780 + if (IS_ERR(video->kworker)) { 781 + uvcg_err(&video->uvc->func, "failed to create UVCG kworker\n"); 782 + return PTR_ERR(video->kworker); 783 + } 784 + 785 + kthread_init_work(&video->hw_submit, uvcg_video_hw_submit); 786 + 787 + sched_set_fifo(video->kworker->task); 788 + 820 789 video->uvc = uvc; 821 790 video->fcc = V4L2_PIX_FMT_YUYV; 822 791 video->bpp = 16; 823 792 video->width = 320; 824 793 video->height = 240; 825 794 video->imagesize = 320 * 240 * 2; 795 + video->interval = 666666; 826 796 827 797 /* Initialize the video buffers queue. */ 828 798 uvcg_queue_init(&video->queue, uvc->v4l2_dev.dev->parent,
+1 -1
drivers/usb/gadget/legacy/hid.c
··· 261 261 }; 262 262 263 263 static struct platform_driver hidg_plat_driver = { 264 - .remove_new = hidg_plat_driver_remove, 264 + .remove = hidg_plat_driver_remove, 265 265 .driver = { 266 266 .name = "hidg", 267 267 },
+2 -2
drivers/usb/gadget/legacy/raw_gadget.c
··· 782 782 if (ret < 0) 783 783 goto free; 784 784 785 - length = min(io.length, (unsigned int)ret); 785 + length = min_t(unsigned int, io.length, ret); 786 786 if (copy_to_user((void __user *)(value + sizeof(io)), data, length)) 787 787 ret = -EFAULT; 788 788 else ··· 1168 1168 if (ret < 0) 1169 1169 goto free; 1170 1170 1171 - length = min(io.length, (unsigned int)ret); 1171 + length = min_t(unsigned int, io.length, ret); 1172 1172 if (copy_to_user((void __user *)(value + sizeof(io)), data, length)) 1173 1173 ret = -EFAULT; 1174 1174 else
+1 -1
drivers/usb/gadget/udc/aspeed-vhub/core.c
··· 428 428 429 429 static struct platform_driver ast_vhub_driver = { 430 430 .probe = ast_vhub_probe, 431 - .remove_new = ast_vhub_remove, 431 + .remove = ast_vhub_remove, 432 432 .driver = { 433 433 .name = KBUILD_MODNAME, 434 434 .of_match_table = ast_vhub_dt_ids,
+2 -2
drivers/usb/gadget/udc/aspeed_udc.c
··· 156 156 #define AST_EP_DMA_DESC_PID_DATA1 (2 << 14) 157 157 #define AST_EP_DMA_DESC_PID_MDATA (3 << 14) 158 158 #define EP_DESC1_IN_LEN(x) ((x) & 0x1fff) 159 - #define AST_EP_DMA_DESC_MAX_LEN (7680) /* Max packet length for trasmit in 1 desc */ 159 + #define AST_EP_DMA_DESC_MAX_LEN (7680) /* Max packet length for transmit in 1 desc */ 160 160 161 161 struct ast_udc_request { 162 162 struct usb_request req; ··· 1590 1590 1591 1591 static struct platform_driver ast_udc_driver = { 1592 1592 .probe = ast_udc_probe, 1593 - .remove_new = ast_udc_remove, 1593 + .remove = ast_udc_remove, 1594 1594 .driver = { 1595 1595 .name = KBUILD_MODNAME, 1596 1596 .of_match_table = ast_udc_of_dt_ids,
+1 -1
drivers/usb/gadget/udc/at91_udc.c
··· 2002 2002 2003 2003 static struct platform_driver at91_udc_driver = { 2004 2004 .probe = at91udc_probe, 2005 - .remove_new = at91udc_remove, 2005 + .remove = at91udc_remove, 2006 2006 .shutdown = at91udc_shutdown, 2007 2007 .suspend = at91udc_suspend, 2008 2008 .resume = at91udc_resume,
+1 -1
drivers/usb/gadget/udc/atmel_usba_udc.c
··· 2444 2444 2445 2445 static struct platform_driver udc_driver = { 2446 2446 .probe = usba_udc_probe, 2447 - .remove_new = usba_udc_remove, 2447 + .remove = usba_udc_remove, 2448 2448 .driver = { 2449 2449 .name = "atmel_usba_udc", 2450 2450 .pm = &usba_udc_pm_ops,
+1 -1
drivers/usb/gadget/udc/bcm63xx_udc.c
··· 2367 2367 2368 2368 static struct platform_driver bcm63xx_udc_driver = { 2369 2369 .probe = bcm63xx_udc_probe, 2370 - .remove_new = bcm63xx_udc_remove, 2370 + .remove = bcm63xx_udc_remove, 2371 2371 .driver = { 2372 2372 .name = DRV_MODULE_NAME, 2373 2373 },
+1 -1
drivers/usb/gadget/udc/bdc/bdc_core.c
··· 648 648 .of_match_table = bdc_of_match, 649 649 }, 650 650 .probe = bdc_probe, 651 - .remove_new = bdc_remove, 651 + .remove = bdc_remove, 652 652 }; 653 653 654 654 module_platform_driver(bdc_driver);
+1 -2
drivers/usb/gadget/udc/cdns2/cdns2-pci.c
··· 15 15 #include "cdns2-gadget.h" 16 16 17 17 #define PCI_DRIVER_NAME "cdns-pci-usbhs" 18 - #define PCI_DEVICE_ID_CDNS_USB2 0x0120 19 18 #define PCI_BAR_DEV 0 20 19 #define PCI_DEV_FN_DEVICE 0 21 20 ··· 112 113 }; 113 114 114 115 static const struct pci_device_id cdns2_pci_ids[] = { 115 - { PCI_DEVICE(PCI_VENDOR_ID_CDNS, PCI_DEVICE_ID_CDNS_USB2), 116 + { PCI_DEVICE(PCI_VENDOR_ID_CDNS, PCI_DEVICE_ID_CDNS_USB), 116 117 .class = PCI_CLASS_SERIAL_USB_DEVICE }, 117 118 { 0, } 118 119 };
+3 -3
drivers/usb/gadget/udc/dummy_hcd.c
··· 80 80 MODULE_PARM_DESC(num, "number of emulated controllers"); 81 81 /*-------------------------------------------------------------------------*/ 82 82 83 - /* gadget side driver data structres */ 83 + /* gadget side driver data structures */ 84 84 struct dummy_ep { 85 85 struct list_head queue; 86 86 unsigned long last_io; /* jiffies timestamp */ ··· 1152 1152 1153 1153 static struct platform_driver dummy_udc_driver = { 1154 1154 .probe = dummy_udc_probe, 1155 - .remove_new = dummy_udc_remove, 1155 + .remove = dummy_udc_remove, 1156 1156 .suspend = dummy_udc_suspend, 1157 1157 .resume = dummy_udc_resume, 1158 1158 .driver = { ··· 2769 2769 2770 2770 static struct platform_driver dummy_hcd_driver = { 2771 2771 .probe = dummy_hcd_probe, 2772 - .remove_new = dummy_hcd_remove, 2772 + .remove = dummy_hcd_remove, 2773 2773 .suspend = dummy_hcd_suspend, 2774 2774 .resume = dummy_hcd_resume, 2775 2775 .driver = {
+4 -4
drivers/usb/gadget/udc/fsl_qe_udc.c
··· 511 511 out_8(&epparam->tbmr, rtfcr); 512 512 513 513 tmp = (u16)(ep->ep.maxpacket + USB_CRC_SIZE); 514 - /* MRBLR must be divisble by 4 */ 514 + /* MRBLR must be divisible by 4 */ 515 515 tmp = (u16)(((tmp >> 2) << 2) + 4); 516 516 out_be16(&epparam->mrblr, tmp); 517 517 ··· 1413 1413 return 0; 1414 1414 } 1415 1415 1416 - /* confirm the already trainsmited bd */ 1416 + /* confirm the already transmitted bd */ 1417 1417 static int qe_ep_txconf(struct qe_ep *ep) 1418 1418 { 1419 1419 struct qe_bd __iomem *bd; ··· 2196 2196 } 2197 2197 2198 2198 2199 - /* setup packect's rx is handle in the function too */ 2199 + /* setup packet's rx is handle in the function too */ 2200 2200 static void rx_irq(struct qe_udc *udc) 2201 2201 { 2202 2202 struct qe_ep *ep; ··· 2704 2704 .of_match_table = qe_udc_match, 2705 2705 }, 2706 2706 .probe = qe_udc_probe, 2707 - .remove_new = qe_udc_remove, 2707 + .remove = qe_udc_remove, 2708 2708 #ifdef CONFIG_PM 2709 2709 .suspend = qe_udc_suspend, 2710 2710 .resume = qe_udc_resume,
+1 -1
drivers/usb/gadget/udc/fsl_udc_core.c
··· 2685 2685 2686 2686 static struct platform_driver udc_driver = { 2687 2687 .probe = fsl_udc_probe, 2688 - .remove_new = fsl_udc_remove, 2688 + .remove = fsl_udc_remove, 2689 2689 .id_table = fsl_udc_devtype, 2690 2690 /* these suspend and resume are not usb suspend and resume */ 2691 2691 .suspend = fsl_udc_suspend,
+2 -2
drivers/usb/gadget/udc/fusb300_udc.c
··· 1297 1297 reg |= val; 1298 1298 iowrite32(reg, fusb300->reg + FUSB300_OFFSET_HSCR); 1299 1299 1300 - /*set u1 u2 timmer*/ 1300 + /*set u1 u2 timer*/ 1301 1301 fusb300_set_u2_timeout(fusb300, 0xff); 1302 1302 fusb300_set_u1_timeout(fusb300, 0xff); 1303 1303 ··· 1507 1507 1508 1508 static struct platform_driver fusb300_driver = { 1509 1509 .probe = fusb300_probe, 1510 - .remove_new = fusb300_remove, 1510 + .remove = fusb300_remove, 1511 1511 .driver = { 1512 1512 .name = udc_name, 1513 1513 },
+1 -1
drivers/usb/gadget/udc/gr_udc.c
··· 2249 2249 .of_match_table = gr_match, 2250 2250 }, 2251 2251 .probe = gr_probe, 2252 - .remove_new = gr_remove, 2252 + .remove = gr_remove, 2253 2253 }; 2254 2254 module_platform_driver(gr_driver); 2255 2255
+1 -1
drivers/usb/gadget/udc/lpc32xx_udc.c
··· 3249 3249 3250 3250 static struct platform_driver lpc32xx_udc_driver = { 3251 3251 .probe = lpc32xx_udc_probe, 3252 - .remove_new = lpc32xx_udc_remove, 3252 + .remove = lpc32xx_udc_remove, 3253 3253 .shutdown = lpc32xx_udc_shutdown, 3254 3254 .suspend = lpc32xx_udc_suspend, 3255 3255 .resume = lpc32xx_udc_resume,
+1 -1
drivers/usb/gadget/udc/m66592-udc.c
··· 1688 1688 /*-------------------------------------------------------------------------*/ 1689 1689 static struct platform_driver m66592_driver = { 1690 1690 .probe = m66592_probe, 1691 - .remove_new = m66592_remove, 1691 + .remove = m66592_remove, 1692 1692 .driver = { 1693 1693 .name = udc_name, 1694 1694 },
+1 -1
drivers/usb/gadget/udc/mv_u3d_core.c
··· 2047 2047 2048 2048 static struct platform_driver mv_u3d_driver = { 2049 2049 .probe = mv_u3d_probe, 2050 - .remove_new = mv_u3d_remove, 2050 + .remove = mv_u3d_remove, 2051 2051 .shutdown = mv_u3d_shutdown, 2052 2052 .driver = { 2053 2053 .name = "mv-u3d",
+1 -1
drivers/usb/gadget/udc/mv_udc_core.c
··· 2409 2409 2410 2410 static struct platform_driver udc_driver = { 2411 2411 .probe = mv_udc_probe, 2412 - .remove_new = mv_udc_remove, 2412 + .remove = mv_udc_remove, 2413 2413 .shutdown = mv_udc_shutdown, 2414 2414 .driver = { 2415 2415 .name = "mv-udc",
+2 -2
drivers/usb/gadget/udc/net2272.c
··· 2097 2097 } 2098 2098 /* check dma interrupts */ 2099 2099 #endif 2100 - /* Platform/devcice interrupt handler */ 2100 + /* Platform/device interrupt handler */ 2101 2101 #if !defined(PLX_PCI_RDK) 2102 2102 net2272_handle_stat1_irqs(dev, net2272_read(dev, IRQSTAT1)); 2103 2103 net2272_handle_stat0_irqs(dev, net2272_read(dev, IRQSTAT0)); ··· 2685 2685 2686 2686 static struct platform_driver net2272_plat_driver = { 2687 2687 .probe = net2272_plat_probe, 2688 - .remove_new = net2272_plat_remove, 2688 + .remove = net2272_plat_remove, 2689 2689 .driver = { 2690 2690 .name = driver_name, 2691 2691 },
+3 -3
drivers/usb/gadget/udc/omap_udc.c
··· 576 576 577 577 static void next_out_dma(struct omap_ep *ep, struct omap_req *req) 578 578 { 579 - unsigned packets = req->req.length - req->req.actual; 579 + unsigned int packets = req->req.length - req->req.actual; 580 580 int dma_trigger = 0; 581 581 u16 w; 582 582 583 583 /* set up this DMA transfer, enable the fifo, start */ 584 584 packets /= ep->ep.maxpacket; 585 - packets = min(packets, (unsigned)UDC_RXN_TC + 1); 585 + packets = min_t(unsigned int, packets, UDC_RXN_TC + 1); 586 586 req->dma_bytes = packets * ep->ep.maxpacket; 587 587 omap_set_dma_transfer_params(ep->lch, OMAP_DMA_DATA_TYPE_S16, 588 588 ep->ep.maxpacket >> 1, packets, ··· 2980 2980 2981 2981 static struct platform_driver udc_driver = { 2982 2982 .probe = omap_udc_probe, 2983 - .remove_new = omap_udc_remove, 2983 + .remove = omap_udc_remove, 2984 2984 .suspend = omap_udc_suspend, 2985 2985 .resume = omap_udc_resume, 2986 2986 .driver = {
+1 -1
drivers/usb/gadget/udc/pxa25x_udc.c
··· 2474 2474 static struct platform_driver udc_driver = { 2475 2475 .shutdown = pxa25x_udc_shutdown, 2476 2476 .probe = pxa25x_udc_probe, 2477 - .remove_new = pxa25x_udc_remove, 2477 + .remove = pxa25x_udc_remove, 2478 2478 .suspend = pxa25x_udc_suspend, 2479 2479 .resume = pxa25x_udc_resume, 2480 2480 .driver = {
+1 -1
drivers/usb/gadget/udc/pxa27x_udc.c
··· 2539 2539 .of_match_table = of_match_ptr(udc_pxa_dt_ids), 2540 2540 }, 2541 2541 .probe = pxa_udc_probe, 2542 - .remove_new = pxa_udc_remove, 2542 + .remove = pxa_udc_remove, 2543 2543 .shutdown = pxa_udc_shutdown, 2544 2544 #ifdef CONFIG_PM 2545 2545 .suspend = pxa_udc_suspend,
+1 -1
drivers/usb/gadget/udc/r8a66597-udc.c
··· 1965 1965 /*-------------------------------------------------------------------------*/ 1966 1966 static struct platform_driver r8a66597_driver = { 1967 1967 .probe = r8a66597_probe, 1968 - .remove_new = r8a66597_remove, 1968 + .remove = r8a66597_remove, 1969 1969 .driver = { 1970 1970 .name = udc_name, 1971 1971 },
+1 -1
drivers/usb/gadget/udc/renesas_usb3.c
··· 3013 3013 3014 3014 static struct platform_driver renesas_usb3_driver = { 3015 3015 .probe = renesas_usb3_probe, 3016 - .remove_new = renesas_usb3_remove, 3016 + .remove = renesas_usb3_remove, 3017 3017 .driver = { 3018 3018 .name = udc_name, 3019 3019 .pm = &renesas_usb3_pm_ops,
+2 -2
drivers/usb/gadget/udc/renesas_usbf.c
··· 2482 2482 ep0->delayed_status = 0; 2483 2483 2484 2484 if ((crq.ctrlreq.bRequestType & USB_TYPE_MASK) != USB_TYPE_STANDARD) { 2485 - /* This is not a USB standard request -> delelate */ 2485 + /* This is not a USB standard request -> delegate */ 2486 2486 goto delegate; 2487 2487 } 2488 2488 ··· 3381 3381 .of_match_table = usbf_match, 3382 3382 }, 3383 3383 .probe = usbf_probe, 3384 - .remove_new = usbf_remove, 3384 + .remove = usbf_remove, 3385 3385 }; 3386 3386 3387 3387 module_platform_driver(udc_driver);
+1 -1
drivers/usb/gadget/udc/rzv2m_usb3drd.c
··· 127 127 .of_match_table = rzv2m_usb3drd_of_match, 128 128 }, 129 129 .probe = rzv2m_usb3drd_probe, 130 - .remove_new = rzv2m_usb3drd_remove, 130 + .remove = rzv2m_usb3drd_remove, 131 131 }; 132 132 module_platform_driver(rzv2m_usb3drd_driver); 133 133
+1 -1
drivers/usb/gadget/udc/snps_udc_core.c
··· 2707 2707 /* write fifo */ 2708 2708 udc_txfifo_write(ep, &req->req); 2709 2709 2710 - /* lengh bytes transferred */ 2710 + /* length bytes transferred */ 2711 2711 len = req->req.length - req->req.actual; 2712 2712 if (len > ep->ep.maxpacket) 2713 2713 len = ep->ep.maxpacket;
+1 -1
drivers/usb/gadget/udc/snps_udc_plat.c
··· 309 309 310 310 static struct platform_driver udc_plat_driver = { 311 311 .probe = udc_plat_probe, 312 - .remove_new = udc_plat_remove, 312 + .remove = udc_plat_remove, 313 313 .driver = { 314 314 .name = "snps-udc-plat", 315 315 .of_match_table = of_udc_match,
+1 -1
drivers/usb/gadget/udc/tegra-xudc.c
··· 4071 4071 4072 4072 static struct platform_driver tegra_xudc_driver = { 4073 4073 .probe = tegra_xudc_probe, 4074 - .remove_new = tegra_xudc_remove, 4074 + .remove = tegra_xudc_remove, 4075 4075 .driver = { 4076 4076 .name = "tegra-xudc", 4077 4077 .pm = &tegra_xudc_pm_ops,
+1 -1
drivers/usb/gadget/udc/udc-xilinx.c
··· 2258 2258 .pm = &xudc_pm_ops, 2259 2259 }, 2260 2260 .probe = xudc_probe, 2261 - .remove_new = xudc_remove, 2261 + .remove = xudc_remove, 2262 2262 }; 2263 2263 2264 2264 module_platform_driver(xudc_driver);
+1 -1
drivers/usb/gadget/usbstring.c
··· 55 55 return -EINVAL; 56 56 57 57 /* string descriptors have length, tag, then UTF16-LE text */ 58 - len = min((size_t)USB_MAX_STRING_LEN, strlen(s->s)); 58 + len = min_t(size_t, USB_MAX_STRING_LEN, strlen(s->s)); 59 59 len = utf8s_to_utf16s(s->s, len, UTF16_LITTLE_ENDIAN, 60 60 (wchar_t *) &buf[2], USB_MAX_STRING_LEN); 61 61 if (len < 0)
-1
drivers/usb/host/bcma-hcd.c
··· 25 25 #include <linux/module.h> 26 26 #include <linux/slab.h> 27 27 #include <linux/of.h> 28 - #include <linux/of_gpio.h> 29 28 #include <linux/of_platform.h> 30 29 #include <linux/usb/ehci_pdriver.h> 31 30 #include <linux/usb/ohci_pdriver.h>
+1 -1
drivers/usb/host/ehci-atmel.c
··· 220 220 221 221 static struct platform_driver ehci_atmel_driver = { 222 222 .probe = ehci_atmel_drv_probe, 223 - .remove_new = ehci_atmel_drv_remove, 223 + .remove = ehci_atmel_drv_remove, 224 224 .shutdown = usb_hcd_platform_shutdown, 225 225 .driver = { 226 226 .name = "atmel-ehci",
+1 -1
drivers/usb/host/ehci-brcm.c
··· 250 250 251 251 static struct platform_driver ehci_brcm_driver = { 252 252 .probe = ehci_brcm_probe, 253 - .remove_new = ehci_brcm_remove, 253 + .remove = ehci_brcm_remove, 254 254 .shutdown = usb_hcd_platform_shutdown, 255 255 .driver = { 256 256 .name = "ehci-brcm",
+1 -1
drivers/usb/host/ehci-exynos.c
··· 288 288 289 289 static struct platform_driver exynos_ehci_driver = { 290 290 .probe = exynos_ehci_probe, 291 - .remove_new = exynos_ehci_remove, 291 + .remove = exynos_ehci_remove, 292 292 .shutdown = usb_hcd_platform_shutdown, 293 293 .driver = { 294 294 .name = "exynos-ehci",
+1 -1
drivers/usb/host/ehci-fsl.c
··· 706 706 707 707 static struct platform_driver ehci_fsl_driver = { 708 708 .probe = fsl_ehci_drv_probe, 709 - .remove_new = fsl_ehci_drv_remove, 709 + .remove = fsl_ehci_drv_remove, 710 710 .shutdown = usb_hcd_platform_shutdown, 711 711 .driver = { 712 712 .name = DRV_NAME,
+1 -1
drivers/usb/host/ehci-grlib.c
··· 168 168 169 169 static struct platform_driver ehci_grlib_driver = { 170 170 .probe = ehci_hcd_grlib_probe, 171 - .remove_new = ehci_hcd_grlib_remove, 171 + .remove = ehci_hcd_grlib_remove, 172 172 .shutdown = usb_hcd_platform_shutdown, 173 173 .driver = { 174 174 .name = "grlib-ehci",
+1 -1
drivers/usb/host/ehci-hcd.c
··· 547 547 * make problems: throughput reduction (!), data errors... 548 548 */ 549 549 if (park) { 550 - park = min(park, (unsigned) 3); 550 + park = min_t(unsigned int, park, 3); 551 551 temp |= CMD_PARK; 552 552 temp |= park << 8; 553 553 }
+1 -1
drivers/usb/host/ehci-mv.c
··· 279 279 280 280 static struct platform_driver ehci_mv_driver = { 281 281 .probe = mv_ehci_probe, 282 - .remove_new = mv_ehci_remove, 282 + .remove = mv_ehci_remove, 283 283 .shutdown = mv_ehci_shutdown, 284 284 .driver = { 285 285 .name = "mv-ehci",
+1 -1
drivers/usb/host/ehci-npcm7xx.c
··· 122 122 123 123 static struct platform_driver npcm7xx_ehci_hcd_driver = { 124 124 .probe = npcm7xx_ehci_hcd_drv_probe, 125 - .remove_new = npcm7xx_ehci_hcd_drv_remove, 125 + .remove = npcm7xx_ehci_hcd_drv_remove, 126 126 .shutdown = usb_hcd_platform_shutdown, 127 127 .driver = { 128 128 .name = "npcm7xx-ehci",
+1 -1
drivers/usb/host/ehci-omap.c
··· 264 264 265 265 static struct platform_driver ehci_hcd_omap_driver = { 266 266 .probe = ehci_hcd_omap_probe, 267 - .remove_new = ehci_hcd_omap_remove, 267 + .remove = ehci_hcd_omap_remove, 268 268 .shutdown = usb_hcd_platform_shutdown, 269 269 /*.suspend = ehci_hcd_omap_suspend, */ 270 270 /*.resume = ehci_hcd_omap_resume, */
+1 -1
drivers/usb/host/ehci-orion.c
··· 352 352 353 353 static struct platform_driver ehci_orion_driver = { 354 354 .probe = ehci_orion_drv_probe, 355 - .remove_new = ehci_orion_drv_remove, 355 + .remove = ehci_orion_drv_remove, 356 356 .shutdown = usb_hcd_platform_shutdown, 357 357 .driver = { 358 358 .name = "orion-ehci",
+1 -1
drivers/usb/host/ehci-platform.c
··· 508 508 static struct platform_driver ehci_platform_driver = { 509 509 .id_table = ehci_platform_table, 510 510 .probe = ehci_platform_probe, 511 - .remove_new = ehci_platform_remove, 511 + .remove = ehci_platform_remove, 512 512 .shutdown = usb_hcd_platform_shutdown, 513 513 .driver = { 514 514 .name = "ehci-platform",
+1 -1
drivers/usb/host/ehci-ppc-of.c
··· 230 230 231 231 static struct platform_driver ehci_hcd_ppc_of_driver = { 232 232 .probe = ehci_hcd_ppc_of_probe, 233 - .remove_new = ehci_hcd_ppc_of_remove, 233 + .remove = ehci_hcd_ppc_of_remove, 234 234 .shutdown = usb_hcd_platform_shutdown, 235 235 .driver = { 236 236 .name = "ppc-of-ehci",
+1 -1
drivers/usb/host/ehci-sh.c
··· 169 169 170 170 static struct platform_driver ehci_hcd_sh_driver = { 171 171 .probe = ehci_hcd_sh_probe, 172 - .remove_new = ehci_hcd_sh_remove, 172 + .remove = ehci_hcd_sh_remove, 173 173 .shutdown = ehci_hcd_sh_shutdown, 174 174 .driver = { 175 175 .name = "sh_ehci",
+5 -4
drivers/usb/host/ehci-spear.c
··· 105 105 /* registers start at offset 0x0 */ 106 106 hcd_to_ehci(hcd)->caps = hcd->regs; 107 107 108 - clk_prepare_enable(sehci->clk); 108 + retval = clk_prepare_enable(sehci->clk); 109 + if (retval) 110 + goto err_put_hcd; 109 111 retval = usb_add_hcd(hcd, irq, IRQF_SHARED); 110 112 if (retval) 111 113 goto err_stop_ehci; ··· 132 130 133 131 usb_remove_hcd(hcd); 134 132 135 - if (sehci->clk) 136 - clk_disable_unprepare(sehci->clk); 133 + clk_disable_unprepare(sehci->clk); 137 134 usb_put_hcd(hcd); 138 135 } 139 136 ··· 144 143 145 144 static struct platform_driver spear_ehci_hcd_driver = { 146 145 .probe = spear_ehci_hcd_drv_probe, 147 - .remove_new = spear_ehci_hcd_drv_remove, 146 + .remove = spear_ehci_hcd_drv_remove, 148 147 .shutdown = usb_hcd_platform_shutdown, 149 148 .driver = { 150 149 .name = "spear-ehci",
+1 -1
drivers/usb/host/ehci-st.c
··· 320 320 321 321 static struct platform_driver ehci_platform_driver = { 322 322 .probe = st_ehci_platform_probe, 323 - .remove_new = st_ehci_platform_remove, 323 + .remove = st_ehci_platform_remove, 324 324 .shutdown = usb_hcd_platform_shutdown, 325 325 .driver = { 326 326 .name = "st-ehci",
+1 -1
drivers/usb/host/ehci-xilinx-of.c
··· 220 220 221 221 static struct platform_driver ehci_hcd_xilinx_of_driver = { 222 222 .probe = ehci_hcd_xilinx_of_probe, 223 - .remove_new = ehci_hcd_xilinx_of_remove, 223 + .remove = ehci_hcd_xilinx_of_remove, 224 224 .shutdown = usb_hcd_platform_shutdown, 225 225 .driver = { 226 226 .name = "xilinx-of-ehci",
+1 -1
drivers/usb/host/fhci-hcd.c
··· 791 791 .of_match_table = of_fhci_match, 792 792 }, 793 793 .probe = of_fhci_probe, 794 - .remove_new = of_fhci_remove, 794 + .remove = of_fhci_remove, 795 795 }; 796 796 797 797 module_platform_driver(of_fhci_driver);
+2 -2
drivers/usb/host/fhci-sched.c
··· 158 158 struct packet *pkt; 159 159 u8 *data = NULL; 160 160 161 - /* calcalate data address,len and toggle and then add the transaction */ 161 + /* calculate data address,len and toggle and then add the transaction */ 162 162 if (td->toggle == USB_TD_TOGGLE_CARRY) 163 163 td->toggle = ed->toggle_carry; 164 164 ··· 679 679 680 680 DECLARE_TASKLET_OLD(fhci_tasklet, process_done_list); 681 681 682 - /* transfer complted callback */ 682 + /* transfer completed callback */ 683 683 u32 fhci_transfer_confirm_callback(struct fhci_hcd *fhci) 684 684 { 685 685 if (!fhci->process_done_task->state)
+1 -1
drivers/usb/host/fsl-mph-dr-of.c
··· 362 362 .of_match_table = fsl_usb2_mph_dr_of_match, 363 363 }, 364 364 .probe = fsl_usb2_mph_dr_of_probe, 365 - .remove_new = fsl_usb2_mph_dr_of_remove, 365 + .remove = fsl_usb2_mph_dr_of_remove, 366 366 }; 367 367 368 368 module_platform_driver(fsl_usb2_mph_dr_driver);
+1 -1
drivers/usb/host/isp116x-hcd.c
··· 1684 1684 1685 1685 static struct platform_driver isp116x_driver = { 1686 1686 .probe = isp116x_probe, 1687 - .remove_new = isp116x_remove, 1687 + .remove = isp116x_remove, 1688 1688 .suspend = isp116x_suspend, 1689 1689 .resume = isp116x_resume, 1690 1690 .driver = {
+1 -1
drivers/usb/host/isp1362-hcd.c
··· 2757 2757 2758 2758 static struct platform_driver isp1362_driver = { 2759 2759 .probe = isp1362_probe, 2760 - .remove_new = isp1362_remove, 2760 + .remove = isp1362_remove, 2761 2761 2762 2762 .suspend = isp1362_suspend, 2763 2763 .resume = isp1362_resume,
+3 -3
drivers/usb/host/octeon-hcd.c
··· 3346 3346 break; 3347 3347 case USB_PORT_FEAT_INDICATOR: 3348 3348 dev_dbg(dev, " INDICATOR\n"); 3349 - /* Port inidicator not supported */ 3349 + /* Port indicator not supported */ 3350 3350 break; 3351 3351 case USB_PORT_FEAT_C_CONNECTION: 3352 3352 dev_dbg(dev, " C_CONNECTION\n"); ··· 3711 3711 .name = "octeon-hcd", 3712 3712 .of_match_table = octeon_usb_match, 3713 3713 }, 3714 - .probe = octeon_usb_probe, 3715 - .remove_new = octeon_usb_remove, 3714 + .probe = octeon_usb_probe, 3715 + .remove = octeon_usb_remove, 3716 3716 }; 3717 3717 3718 3718 static int __init octeon_usb_driver_init(void)
+1 -1
drivers/usb/host/ohci-at91.c
··· 685 685 686 686 static struct platform_driver ohci_hcd_at91_driver = { 687 687 .probe = ohci_hcd_at91_drv_probe, 688 - .remove_new = ohci_hcd_at91_drv_remove, 688 + .remove = ohci_hcd_at91_drv_remove, 689 689 .shutdown = usb_hcd_platform_shutdown, 690 690 .driver = { 691 691 .name = "at91_ohci",
+1 -1
drivers/usb/host/ohci-da8xx.c
··· 531 531 */ 532 532 static struct platform_driver ohci_hcd_da8xx_driver = { 533 533 .probe = ohci_da8xx_probe, 534 - .remove_new = ohci_da8xx_remove, 534 + .remove = ohci_da8xx_remove, 535 535 .shutdown = usb_hcd_platform_shutdown, 536 536 #ifdef CONFIG_PM 537 537 .suspend = ohci_da8xx_suspend,
+1 -1
drivers/usb/host/ohci-exynos.c
··· 262 262 263 263 static struct platform_driver exynos_ohci_driver = { 264 264 .probe = exynos_ohci_probe, 265 - .remove_new = exynos_ohci_remove, 265 + .remove = exynos_ohci_remove, 266 266 .shutdown = exynos_ohci_shutdown, 267 267 .driver = { 268 268 .name = "exynos-ohci",
+1 -1
drivers/usb/host/ohci-nxp.c
··· 254 254 .of_match_table = of_match_ptr(ohci_hcd_nxp_match), 255 255 }, 256 256 .probe = ohci_hcd_nxp_probe, 257 - .remove_new = ohci_hcd_nxp_remove, 257 + .remove = ohci_hcd_nxp_remove, 258 258 }; 259 259 260 260 static int __init ohci_nxp_init(void)
+2 -2
drivers/usb/host/ohci-omap.c
··· 152 152 153 153 rh &= ~RH_A_NOCP; 154 154 155 - /* gpio9 for overcurrent detction */ 155 + /* gpio9 for overcurrent detection */ 156 156 omap_cfg_reg(W8_1610_GPIO9); 157 157 158 158 /* for paranoia's sake: disable USB.PUEN */ ··· 390 390 */ 391 391 static struct platform_driver ohci_hcd_omap_driver = { 392 392 .probe = ohci_hcd_omap_probe, 393 - .remove_new = ohci_hcd_omap_remove, 393 + .remove = ohci_hcd_omap_remove, 394 394 .shutdown = usb_hcd_platform_shutdown, 395 395 #ifdef CONFIG_PM 396 396 .suspend = ohci_omap_suspend,
+1 -1
drivers/usb/host/ohci-platform.c
··· 344 344 static struct platform_driver ohci_platform_driver = { 345 345 .id_table = ohci_platform_table, 346 346 .probe = ohci_platform_probe, 347 - .remove_new = ohci_platform_remove, 347 + .remove = ohci_platform_remove, 348 348 .shutdown = usb_hcd_platform_shutdown, 349 349 .driver = { 350 350 .name = "ohci-platform",
+1 -1
drivers/usb/host/ohci-ppc-of.c
··· 219 219 220 220 static struct platform_driver ohci_hcd_ppc_of_driver = { 221 221 .probe = ohci_hcd_ppc_of_probe, 222 - .remove_new = ohci_hcd_ppc_of_remove, 222 + .remove = ohci_hcd_ppc_of_remove, 223 223 .shutdown = usb_hcd_platform_shutdown, 224 224 .driver = { 225 225 .name = "ppc-of-ohci",
+1 -1
drivers/usb/host/ohci-pxa27x.c
··· 569 569 570 570 static struct platform_driver ohci_hcd_pxa27x_driver = { 571 571 .probe = ohci_hcd_pxa27x_probe, 572 - .remove_new = ohci_hcd_pxa27x_remove, 572 + .remove = ohci_hcd_pxa27x_remove, 573 573 .shutdown = usb_hcd_platform_shutdown, 574 574 .driver = { 575 575 .name = "pxa27x-ohci",
+1 -1
drivers/usb/host/ohci-s3c2410.c
··· 457 457 458 458 static struct platform_driver ohci_hcd_s3c2410_driver = { 459 459 .probe = ohci_hcd_s3c2410_probe, 460 - .remove_new = ohci_hcd_s3c2410_remove, 460 + .remove = ohci_hcd_s3c2410_remove, 461 461 .shutdown = usb_hcd_platform_shutdown, 462 462 .driver = { 463 463 .name = "s3c2410-ohci",
+1 -1
drivers/usb/host/ohci-sm501.c
··· 252 252 */ 253 253 static struct platform_driver ohci_hcd_sm501_driver = { 254 254 .probe = ohci_hcd_sm501_drv_probe, 255 - .remove_new = ohci_hcd_sm501_drv_remove, 255 + .remove = ohci_hcd_sm501_drv_remove, 256 256 .shutdown = usb_hcd_platform_shutdown, 257 257 .suspend = ohci_sm501_suspend, 258 258 .resume = ohci_sm501_resume,
+1 -1
drivers/usb/host/ohci-spear.c
··· 157 157 /* Driver definition to register with the platform bus */ 158 158 static struct platform_driver spear_ohci_hcd_driver = { 159 159 .probe = spear_ohci_hcd_drv_probe, 160 - .remove_new = spear_ohci_hcd_drv_remove, 160 + .remove = spear_ohci_hcd_drv_remove, 161 161 #ifdef CONFIG_PM 162 162 .suspend = spear_ohci_hcd_drv_suspend, 163 163 .resume = spear_ohci_hcd_drv_resume,
+1 -1
drivers/usb/host/ohci-st.c
··· 298 298 299 299 static struct platform_driver ohci_platform_driver = { 300 300 .probe = st_ohci_platform_probe, 301 - .remove_new = st_ohci_platform_remove, 301 + .remove = st_ohci_platform_remove, 302 302 .shutdown = usb_hcd_platform_shutdown, 303 303 .driver = { 304 304 .name = "st-ohci",
+4 -4
drivers/usb/host/oxu210hp-hcd.c
··· 885 885 int a_blocks; /* blocks allocated */ 886 886 int i, j; 887 887 888 - /* Don't allocte bigger than supported */ 888 + /* Don't allocate bigger than supported */ 889 889 if (len > BUFFER_SIZE * BUFFER_NUM) { 890 890 oxu_err(oxu, "buffer too big (%d)\n", len); 891 891 return -ENOMEM; ··· 902 902 903 903 /* Find a suitable available data buffer */ 904 904 for (i = 0; i < BUFFER_NUM; 905 - i += max(a_blocks, (int)oxu->db_used[i])) { 905 + i += max_t(int, a_blocks, oxu->db_used[i])) { 906 906 907 907 /* Check all the required blocks are available */ 908 908 for (j = 0; j < a_blocks; j++) ··· 3040 3040 * make problems: throughput reduction (!), data errors... 3041 3041 */ 3042 3042 if (park) { 3043 - park = min(park, (unsigned) 3); 3043 + park = min_t(unsigned int, park, 3); 3044 3044 temp |= CMD_PARK; 3045 3045 temp |= park << 8; 3046 3046 } ··· 4289 4289 4290 4290 static struct platform_driver oxu_driver = { 4291 4291 .probe = oxu_drv_probe, 4292 - .remove_new = oxu_drv_remove, 4292 + .remove = oxu_drv_remove, 4293 4293 .shutdown = oxu_drv_shutdown, 4294 4294 .suspend = oxu_drv_suspend, 4295 4295 .resume = oxu_drv_resume,
+3 -3
drivers/usb/host/r8a66597-hcd.c
··· 759 759 struct r8a66597_pipe_info *info = &pipe->info; 760 760 unsigned short mbw = mbw_value(r8a66597); 761 761 762 - /* pipe dma is only for external controlles */ 762 + /* pipe dma is only for external controllers */ 763 763 if (r8a66597->pdata->on_chip) 764 764 return; 765 765 ··· 1336 1336 buf = (void *)urb->transfer_buffer + urb->actual_length; 1337 1337 urb_len = urb->transfer_buffer_length - urb->actual_length; 1338 1338 } 1339 - bufsize = min(urb_len, (int) td->maxpacket); 1339 + bufsize = min_t(int, urb_len, td->maxpacket); 1340 1340 if (rcv_len <= bufsize) { 1341 1341 size = rcv_len; 1342 1342 } else { ··· 2510 2510 2511 2511 static struct platform_driver r8a66597_driver = { 2512 2512 .probe = r8a66597_probe, 2513 - .remove_new = r8a66597_remove, 2513 + .remove = r8a66597_remove, 2514 2514 .driver = { 2515 2515 .name = hcd_name, 2516 2516 .pm = R8A66597_DEV_PM_OPS,
+1 -1
drivers/usb/host/sl811-hcd.c
··· 1784 1784 /* this driver is exported so sl811_cs can depend on it */ 1785 1785 struct platform_driver sl811h_driver = { 1786 1786 .probe = sl811h_probe, 1787 - .remove_new = sl811h_remove, 1787 + .remove = sl811h_remove, 1788 1788 1789 1789 .suspend = sl811h_suspend, 1790 1790 .resume = sl811h_resume,
+1 -1
drivers/usb/host/uhci-grlib.c
··· 184 184 185 185 static struct platform_driver uhci_grlib_driver = { 186 186 .probe = uhci_hcd_grlib_probe, 187 - .remove_new = uhci_hcd_grlib_remove, 187 + .remove = uhci_hcd_grlib_remove, 188 188 .shutdown = uhci_hcd_grlib_shutdown, 189 189 .driver = { 190 190 .name = "grlib-uhci",
+1 -1
drivers/usb/host/uhci-platform.c
··· 184 184 185 185 static struct platform_driver uhci_platform_driver = { 186 186 .probe = uhci_hcd_platform_probe, 187 - .remove_new = uhci_hcd_platform_remove, 187 + .remove = uhci_hcd_platform_remove, 188 188 .shutdown = uhci_hcd_platform_shutdown, 189 189 .driver = { 190 190 .name = "platform-uhci",
+8 -5
drivers/usb/host/xhci-dbgcap.c
··· 248 248 trb->generic.field[2] = cpu_to_le32(field3); 249 249 trb->generic.field[3] = cpu_to_le32(field4); 250 250 251 - trace_xhci_dbc_gadget_ep_queue(ring, &trb->generic); 252 - 251 + trace_xhci_dbc_gadget_ep_queue(ring, &trb->generic, 252 + xhci_trb_virt_to_dma(ring->enq_seg, 253 + ring->enqueue)); 253 254 ring->num_trbs_free--; 254 255 next = ++(ring->enqueue); 255 256 if (TRB_TYPE_LINK_LE32(next->link.control)) { ··· 472 471 trb->link.control = cpu_to_le32(LINK_TOGGLE | TRB_TYPE(TRB_LINK)); 473 472 } 474 473 INIT_LIST_HEAD(&ring->td_list); 475 - xhci_initialize_ring_info(ring, 1); 474 + xhci_initialize_ring_info(ring); 476 475 return ring; 477 476 dma_fail: 478 477 kfree(seg); ··· 748 747 return; 749 748 } 750 749 751 - trace_xhci_dbc_handle_transfer(ring, &req->trb->generic); 750 + trace_xhci_dbc_handle_transfer(ring, &req->trb->generic, req->trb_dma); 752 751 753 752 switch (comp_code) { 754 753 case COMP_SUCCESS: ··· 899 898 */ 900 899 rmb(); 901 900 902 - trace_xhci_dbc_handle_event(dbc->ring_evt, &evt->generic); 901 + trace_xhci_dbc_handle_event(dbc->ring_evt, &evt->generic, 902 + xhci_trb_virt_to_dma(dbc->ring_evt->deq_seg, 903 + dbc->ring_evt->dequeue)); 903 904 904 905 switch (le32_to_cpu(evt->event_cmd.flags) & TRB_TYPE_BITMASK) { 905 906 case TRB_TYPE(TRB_PORT_STATUS):
+4 -6
drivers/usb/host/xhci-debugfs.c
··· 214 214 215 215 static int xhci_ring_trb_show(struct seq_file *s, void *unused) 216 216 { 217 - int i; 218 217 struct xhci_ring *ring = *(struct xhci_ring **)s->private; 219 218 struct xhci_segment *seg = ring->first_seg; 220 219 221 - for (i = 0; i < ring->num_segs; i++) { 220 + xhci_for_each_ring_seg(ring->first_seg, seg) 222 221 xhci_ring_dump_segment(s, seg); 223 - seg = seg->next; 224 - } 225 222 226 223 return 0; 227 224 } ··· 288 291 for (ep_index = 0; ep_index < 31; ep_index++) { 289 292 ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index); 290 293 dma = dev->out_ctx->dma + (ep_index + 1) * CTX_SIZE(xhci->hcc_params); 291 - seq_printf(s, "%pad: %s\n", &dma, 294 + seq_printf(s, "%pad: %s, virt_state:%#x\n", &dma, 292 295 xhci_decode_ep_context(str, 293 296 le32_to_cpu(ep_ctx->ep_info), 294 297 le32_to_cpu(ep_ctx->ep_info2), 295 298 le64_to_cpu(ep_ctx->deq), 296 - le32_to_cpu(ep_ctx->tx_info))); 299 + le32_to_cpu(ep_ctx->tx_info)), 300 + dev->eps[ep_index].ep_state); 297 301 } 298 302 299 303 return 0;
+1 -1
drivers/usb/host/xhci-histb.c
··· 373 373 374 374 static struct platform_driver histb_xhci_driver = { 375 375 .probe = xhci_histb_probe, 376 - .remove_new = xhci_histb_remove, 376 + .remove = xhci_histb_remove, 377 377 .driver = { 378 378 .name = "xhci-histb", 379 379 .pm = DEV_PM_OPS,
+3 -3
drivers/usb/host/xhci-hub.c
··· 946 946 } 947 947 /* did port event handler already start resume timing? */ 948 948 if (!port->resume_timestamp) { 949 - /* If not, maybe we are in a host initated resume? */ 949 + /* If not, maybe we are in a host initiated resume? */ 950 950 if (test_bit(wIndex, &bus_state->resuming_ports)) { 951 - /* Host initated resume doesn't time the resume 951 + /* Host initiated resume doesn't time the resume 952 952 * signalling using resume_done[]. 953 953 * It manually sets RESUME state, sleeps 20ms 954 954 * and sets U0 state. This should probably be ··· 1924 1924 /* resume already initiated */ 1925 1925 break; 1926 1926 default: 1927 - /* not in a resumeable state, ignore it */ 1927 + /* not in a resumable state, ignore it */ 1928 1928 clear_bit(port_index, 1929 1929 &bus_state->bus_suspended); 1930 1930 break;
+114 -125
drivers/usb/host/xhci-mem.c
··· 27 27 * "All components of all Command and Transfer TRBs shall be initialized to '0'" 28 28 */ 29 29 static struct xhci_segment *xhci_segment_alloc(struct xhci_hcd *xhci, 30 - unsigned int cycle_state, 31 30 unsigned int max_packet, 32 31 unsigned int num, 33 32 gfp_t flags) 34 33 { 35 34 struct xhci_segment *seg; 36 35 dma_addr_t dma; 37 - int i; 38 36 struct device *dev = xhci_to_hcd(xhci)->self.sysdev; 39 37 40 38 seg = kzalloc_node(sizeof(*seg), flags, dev_to_node(dev)); ··· 54 56 return NULL; 55 57 } 56 58 } 57 - /* If the cycle state is 0, set the cycle bit to 1 for all the TRBs */ 58 - if (cycle_state == 0) { 59 - for (i = 0; i < TRBS_PER_SEGMENT; i++) 60 - seg->trbs[i].link.control = cpu_to_le32(TRB_CYCLE); 61 - } 62 59 seg->num = num; 63 60 seg->dma = dma; 64 61 seg->next = NULL; ··· 71 78 kfree(seg); 72 79 } 73 80 74 - static void xhci_free_segments_for_ring(struct xhci_hcd *xhci, 75 - struct xhci_segment *first) 81 + static void xhci_ring_segments_free(struct xhci_hcd *xhci, struct xhci_ring *ring) 76 82 { 77 - struct xhci_segment *seg; 83 + struct xhci_segment *seg, *next; 78 84 79 - seg = first->next; 80 - while (seg && seg != first) { 81 - struct xhci_segment *next = seg->next; 85 + ring->last_seg->next = NULL; 86 + seg = ring->first_seg; 87 + 88 + while (seg) { 89 + next = seg->next; 82 90 xhci_segment_free(xhci, seg); 83 91 seg = next; 84 92 } 85 - xhci_segment_free(xhci, first); 86 93 } 87 94 88 95 /* 89 - * Make the prev segment point to the next segment. 96 + * Only for transfer and command rings where driver is the producer, not for 97 + * event rings. 90 98 * 91 - * Change the last TRB in the prev segment to be a Link TRB which points to the 99 + * Change the last TRB in the segment to be a Link TRB which points to the 92 100 * DMA address of the next segment. The caller needs to set any Link TRB 93 101 * related flags, such as End TRB, Toggle Cycle, and no snoop. 
94 102 */ 95 - static void xhci_link_segments(struct xhci_segment *prev, 96 - struct xhci_segment *next, 97 - enum xhci_ring_type type, bool chain_links) 103 + static void xhci_set_link_trb(struct xhci_segment *seg, bool chain_links) 98 104 { 105 + union xhci_trb *trb; 99 106 u32 val; 100 107 101 - if (!prev || !next) 108 + if (!seg || !seg->next) 102 109 return; 103 - prev->next = next; 104 - if (type != TYPE_EVENT) { 105 - prev->trbs[TRBS_PER_SEGMENT-1].link.segment_ptr = 106 - cpu_to_le64(next->dma); 107 110 108 - /* Set the last TRB in the segment to have a TRB type ID of Link TRB */ 109 - val = le32_to_cpu(prev->trbs[TRBS_PER_SEGMENT-1].link.control); 110 - val &= ~TRB_TYPE_BITMASK; 111 - val |= TRB_TYPE(TRB_LINK); 112 - if (chain_links) 113 - val |= TRB_CHAIN; 114 - prev->trbs[TRBS_PER_SEGMENT-1].link.control = cpu_to_le32(val); 115 - } 111 + trb = &seg->trbs[TRBS_PER_SEGMENT - 1]; 112 + 113 + /* Set the last TRB in the segment to have a TRB type ID of Link TRB */ 114 + val = le32_to_cpu(trb->link.control); 115 + val &= ~TRB_TYPE_BITMASK; 116 + val |= TRB_TYPE(TRB_LINK); 117 + if (chain_links) 118 + val |= TRB_CHAIN; 119 + trb->link.control = cpu_to_le32(val); 120 + trb->link.segment_ptr = cpu_to_le64(seg->next->dma); 116 121 } 117 122 118 - /* 119 - * Link the ring to the new segments. 120 - * Set Toggle Cycle for the new ring if needed. 
121 - */ 122 - static void xhci_link_rings(struct xhci_hcd *xhci, struct xhci_ring *ring, 123 - struct xhci_segment *first, struct xhci_segment *last, 124 - unsigned int num_segs) 123 + static void xhci_initialize_ring_segments(struct xhci_hcd *xhci, struct xhci_ring *ring) 125 124 { 126 - struct xhci_segment *next, *seg; 125 + struct xhci_segment *seg; 127 126 bool chain_links; 128 127 129 - if (!ring || !first || !last) 128 + if (ring->type == TYPE_EVENT) 130 129 return; 131 130 132 131 chain_links = xhci_link_chain_quirk(xhci, ring->type); 132 + xhci_for_each_ring_seg(ring->first_seg, seg) 133 + xhci_set_link_trb(seg, chain_links); 133 134 134 - next = ring->enq_seg->next; 135 - xhci_link_segments(ring->enq_seg, first, ring->type, chain_links); 136 - xhci_link_segments(last, next, ring->type, chain_links); 137 - ring->num_segs += num_segs; 135 + /* See section 4.9.2.1 and 6.4.4.1 */ 136 + ring->last_seg->trbs[TRBS_PER_SEGMENT - 1].link.control |= cpu_to_le32(LINK_TOGGLE); 137 + } 138 138 139 - if (ring->enq_seg == ring->last_seg) { 140 - if (ring->type != TYPE_EVENT) { 141 - ring->last_seg->trbs[TRBS_PER_SEGMENT-1].link.control 142 - &= ~cpu_to_le32(LINK_TOGGLE); 143 - last->trbs[TRBS_PER_SEGMENT-1].link.control 144 - |= cpu_to_le32(LINK_TOGGLE); 139 + /* 140 + * Link the src ring segments to the dst ring. 141 + * Set Toggle Cycle for the new ring if needed. 
142 + */ 143 + static void xhci_link_rings(struct xhci_hcd *xhci, struct xhci_ring *src, struct xhci_ring *dst) 144 + { 145 + struct xhci_segment *seg; 146 + bool chain_links; 147 + 148 + if (!src || !dst) 149 + return; 150 + 151 + /* If the cycle state is 0, set the cycle bit to 1 for all the TRBs */ 152 + if (dst->cycle_state == 0) { 153 + xhci_for_each_ring_seg(src->first_seg, seg) { 154 + for (int i = 0; i < TRBS_PER_SEGMENT; i++) 155 + seg->trbs[i].link.control |= cpu_to_le32(TRB_CYCLE); 145 156 } 146 - ring->last_seg = last; 147 157 } 148 158 149 - for (seg = ring->enq_seg; seg != ring->last_seg; seg = seg->next) 159 + src->last_seg->next = dst->enq_seg->next; 160 + dst->enq_seg->next = src->first_seg; 161 + if (dst->type != TYPE_EVENT) { 162 + chain_links = xhci_link_chain_quirk(xhci, dst->type); 163 + xhci_set_link_trb(dst->enq_seg, chain_links); 164 + xhci_set_link_trb(src->last_seg, chain_links); 165 + } 166 + dst->num_segs += src->num_segs; 167 + 168 + if (dst->enq_seg == dst->last_seg) { 169 + if (dst->type != TYPE_EVENT) 170 + dst->last_seg->trbs[TRBS_PER_SEGMENT-1].link.control 171 + &= ~cpu_to_le32(LINK_TOGGLE); 172 + 173 + dst->last_seg = src->last_seg; 174 + } else if (dst->type != TYPE_EVENT) { 175 + src->last_seg->trbs[TRBS_PER_SEGMENT-1].link.control &= ~cpu_to_le32(LINK_TOGGLE); 176 + } 177 + 178 + for (seg = dst->enq_seg; seg != dst->last_seg; seg = seg->next) 150 179 seg->next->num = seg->num + 1; 151 180 } 152 181 ··· 239 224 struct radix_tree_root *trb_address_map, 240 225 struct xhci_ring *ring, 241 226 struct xhci_segment *first_seg, 242 - struct xhci_segment *last_seg, 243 227 gfp_t mem_flags) 244 228 { 245 229 struct xhci_segment *seg; ··· 248 234 if (WARN_ON_ONCE(trb_address_map == NULL)) 249 235 return 0; 250 236 251 - seg = first_seg; 252 - do { 237 + xhci_for_each_ring_seg(first_seg, seg) { 253 238 ret = xhci_insert_segment_mapping(trb_address_map, 254 239 ring, seg, mem_flags); 255 240 if (ret) 256 241 goto remove_streams; 257 - if 
(seg == last_seg) 258 - return 0; 259 - seg = seg->next; 260 - } while (seg != first_seg); 242 + } 261 243 262 244 return 0; 263 245 264 246 remove_streams: 265 247 failed_seg = seg; 266 - seg = first_seg; 267 - do { 248 + xhci_for_each_ring_seg(first_seg, seg) { 268 249 xhci_remove_segment_mapping(trb_address_map, seg); 269 250 if (seg == failed_seg) 270 251 return ret; 271 - seg = seg->next; 272 - } while (seg != first_seg); 252 + } 273 253 274 254 return ret; 275 255 } ··· 275 267 if (WARN_ON_ONCE(ring->trb_address_map == NULL)) 276 268 return; 277 269 278 - seg = ring->first_seg; 279 - do { 270 + xhci_for_each_ring_seg(ring->first_seg, seg) 280 271 xhci_remove_segment_mapping(ring->trb_address_map, seg); 281 - seg = seg->next; 282 - } while (seg != ring->first_seg); 283 272 } 284 273 285 274 static int xhci_update_stream_mapping(struct xhci_ring *ring, gfp_t mem_flags) 286 275 { 287 276 return xhci_update_stream_segment_mapping(ring->trb_address_map, ring, 288 - ring->first_seg, ring->last_seg, mem_flags); 277 + ring->first_seg, mem_flags); 289 278 } 290 279 291 280 /* XXX: Do we need the hcd structure in all these functions? */ ··· 296 291 if (ring->first_seg) { 297 292 if (ring->type == TYPE_STREAM) 298 293 xhci_remove_stream_mapping(ring); 299 - xhci_free_segments_for_ring(xhci, ring->first_seg); 294 + xhci_ring_segments_free(xhci, ring); 300 295 } 301 296 302 297 kfree(ring); 303 298 } 304 299 305 - void xhci_initialize_ring_info(struct xhci_ring *ring, 306 - unsigned int cycle_state) 300 + void xhci_initialize_ring_info(struct xhci_ring *ring) 307 301 { 308 302 /* The ring is empty, so the enqueue pointer == dequeue pointer */ 309 303 ring->enqueue = ring->first_seg->trbs; ··· 316 312 * New rings are initialized with cycle state equal to 1; if we are 317 313 * handling ring expansion, set the cycle state equal to the old ring. 
 	 */
-	ring->cycle_state = cycle_state;
+	ring->cycle_state = 1;
 
 	/*
 	 * Each segment has a link TRB, and leave an extra TRB for SW
···
 EXPORT_SYMBOL_GPL(xhci_initialize_ring_info);
 
 /* Allocate segments and link them for a ring */
-static int xhci_alloc_segments_for_ring(struct xhci_hcd *xhci,
-		struct xhci_segment **first,
-		struct xhci_segment **last,
-		unsigned int num_segs,
-		unsigned int cycle_state,
-		enum xhci_ring_type type,
-		unsigned int max_packet,
-		gfp_t flags)
+static int xhci_alloc_segments_for_ring(struct xhci_hcd *xhci, struct xhci_ring *ring, gfp_t flags)
 {
 	struct xhci_segment *prev;
 	unsigned int num = 0;
-	bool chain_links;
 
-	chain_links = xhci_link_chain_quirk(xhci, type);
-
-	prev = xhci_segment_alloc(xhci, cycle_state, max_packet, num, flags);
+	prev = xhci_segment_alloc(xhci, ring->bounce_buf_len, num, flags);
 	if (!prev)
 		return -ENOMEM;
 	num++;
 
-	*first = prev;
-	while (num < num_segs) {
+	ring->first_seg = prev;
+	while (num < ring->num_segs) {
 		struct xhci_segment *next;
 
-		next = xhci_segment_alloc(xhci, cycle_state, max_packet, num,
-					  flags);
+		next = xhci_segment_alloc(xhci, ring->bounce_buf_len, num, flags);
 		if (!next)
 			goto free_segments;
 
-		xhci_link_segments(prev, next, type, chain_links);
+		prev->next = next;
 		prev = next;
 		num++;
 	}
-	xhci_link_segments(prev, *first, type, chain_links);
-	*last = prev;
+	ring->last_seg = prev;
 
+	ring->last_seg->next = ring->first_seg;
 	return 0;
 
 free_segments:
-	xhci_free_segments_for_ring(xhci, *first);
+	ring->last_seg = prev;
+	xhci_ring_segments_free(xhci, ring);
 	return -ENOMEM;
 }
···
 	 * Set the end flag and the cycle toggle bit on the last segment.
 	 * See section 4.9.1 and figures 15 and 16.
 	 */
-struct xhci_ring *xhci_ring_alloc(struct xhci_hcd *xhci,
-		unsigned int num_segs, unsigned int cycle_state,
-		enum xhci_ring_type type, unsigned int max_packet, gfp_t flags)
+struct xhci_ring *xhci_ring_alloc(struct xhci_hcd *xhci, unsigned int num_segs,
+		enum xhci_ring_type type, unsigned int max_packet, gfp_t flags)
 {
 	struct xhci_ring *ring;
 	int ret;
···
 	if (num_segs == 0)
 		return ring;
 
-	ret = xhci_alloc_segments_for_ring(xhci, &ring->first_seg, &ring->last_seg, num_segs,
-			cycle_state, type, max_packet, flags);
+	ret = xhci_alloc_segments_for_ring(xhci, ring, flags);
 	if (ret)
 		goto fail;
 
-	/* Only event ring does not use link TRB */
-	if (type != TYPE_EVENT) {
-		/* See section 4.9.2.1 and 6.4.4.1 */
-		ring->last_seg->trbs[TRBS_PER_SEGMENT - 1].link.control |=
-			cpu_to_le32(LINK_TOGGLE);
-	}
-	xhci_initialize_ring_info(ring, cycle_state);
+	xhci_initialize_ring_segments(xhci, ring);
+	xhci_initialize_ring_info(ring);
 	trace_xhci_ring_alloc(ring);
 	return ring;
···
 int xhci_ring_expansion(struct xhci_hcd *xhci, struct xhci_ring *ring,
 		unsigned int num_new_segs, gfp_t flags)
 {
-	struct xhci_segment *first;
-	struct xhci_segment *last;
-	int ret;
+	struct xhci_ring new_ring;
+	int ret;
 
-	ret = xhci_alloc_segments_for_ring(xhci, &first, &last, num_new_segs, ring->cycle_state,
-			ring->type, ring->bounce_buf_len, flags);
+	if (num_new_segs == 0)
+		return 0;
+
+	new_ring.num_segs = num_new_segs;
+	new_ring.bounce_buf_len = ring->bounce_buf_len;
+	new_ring.type = ring->type;
+	ret = xhci_alloc_segments_for_ring(xhci, &new_ring, flags);
 	if (ret)
 		return -ENOMEM;
 
+	xhci_initialize_ring_segments(xhci, &new_ring);
+
 	if (ring->type == TYPE_STREAM) {
-		ret = xhci_update_stream_segment_mapping(ring->trb_address_map,
-				ring, first, last, flags);
+		ret = xhci_update_stream_segment_mapping(ring->trb_address_map, ring,
+				new_ring.first_seg, flags);
 		if (ret)
 			goto free_segments;
 	}
 
-	xhci_link_rings(xhci, ring, first, last, num_new_segs);
+	xhci_link_rings(xhci, ring, &new_ring);
 	trace_xhci_ring_expansion(ring);
 	xhci_dbg_trace(xhci, trace_xhci_dbg_ring_expansion,
 			"ring expansion succeed, now has %d segments",
···
 	return 0;
 
 free_segments:
-	xhci_free_segments_for_ring(xhci, first);
+	xhci_ring_segments_free(xhci, &new_ring);
 	return ret;
 }
···
 	for (cur_stream = 1; cur_stream < num_streams; cur_stream++) {
 		stream_info->stream_rings[cur_stream] =
-			xhci_ring_alloc(xhci, 2, 1, TYPE_STREAM, max_packet,
-					mem_flags);
+			xhci_ring_alloc(xhci, 2, TYPE_STREAM, max_packet, mem_flags);
 		cur_ring = stream_info->stream_rings[cur_stream];
 		if (!cur_ring)
 			goto cleanup_rings;
···
 		xhci_dbg(xhci, "Setting stream %d ring ptr to 0x%08llx\n", cur_stream, addr);
 
 		ret = xhci_update_stream_mapping(cur_ring, mem_flags);
+
+		trace_xhci_alloc_stream_info_ctx(stream_info, cur_stream);
 		if (ret) {
 			xhci_ring_free(xhci, cur_ring);
 			stream_info->stream_rings[cur_stream] = NULL;
···
 	}
 
 	/* Allocate endpoint 0 ring */
-	dev->eps[0].ring = xhci_ring_alloc(xhci, 2, 1, TYPE_CTRL, 0, flags);
+	dev->eps[0].ring = xhci_ring_alloc(xhci, 2, TYPE_CTRL, 0, flags);
 	if (!dev->eps[0].ring)
 		goto fail;
···
 	/* Set up the endpoint ring */
 	virt_dev->eps[ep_index].new_ring =
-		xhci_ring_alloc(xhci, 2, 1, ring_type, max_packet, mem_flags);
+		xhci_ring_alloc(xhci, 2, ring_type, max_packet, mem_flags);
 	if (!virt_dev->eps[ep_index].new_ring)
 		return -ENOMEM;
···
 	if (!ir)
 		return NULL;
 
-	ir->event_ring = xhci_ring_alloc(xhci, segs, 1, TYPE_EVENT, 0, flags);
+	ir->event_ring = xhci_ring_alloc(xhci, segs, TYPE_EVENT, 0, flags);
 	if (!ir->event_ring) {
 		xhci_warn(xhci, "Failed to allocate interrupter event ring\n");
 		kfree(ir);
···
 		goto fail;
 
 	/* Set up the command ring to have one segments for now. */
-	xhci->cmd_ring = xhci_ring_alloc(xhci, 1, 1, TYPE_COMMAND, 0, flags);
+	xhci->cmd_ring = xhci_ring_alloc(xhci, 1, TYPE_COMMAND, 0, flags);
 	if (!xhci->cmd_ring)
 		goto fail;
 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
···
 	ir->isoc_bei_interval = AVOID_BEI_INTERVAL_MAX;
 
-	/*
-	 * XXX: Might need to set the Interrupter Moderation Register to
-	 * something other than the default (~1ms minimum between interrupts).
-	 * See section 5.5.1.2.
-	 */
 	for (i = 0; i < MAX_HC_SLOTS; i++)
 		xhci->devs[i] = NULL;
+1 -1
drivers/usb/host/xhci-mtk.c
···
 
 static struct platform_driver mtk_xhci_driver = {
 	.probe = xhci_mtk_probe,
-	.remove_new = xhci_mtk_remove,
+	.remove = xhci_mtk_remove,
 	.driver = {
 		.name = "xhci-mtk",
 		.pm = DEV_PM_OPS,
+20 -25
drivers/usb/host/xhci-pci.c
···
 #define SPARSE_CNTL_ENABLE	0xC12C
 
 /* Device for a quirk */
-#define PCI_VENDOR_ID_FRESCO_LOGIC	0x1b73
-#define PCI_DEVICE_ID_FRESCO_LOGIC_PDK	0x1000
+#define PCI_VENDOR_ID_FRESCO_LOGIC		0x1b73
+#define PCI_DEVICE_ID_FRESCO_LOGIC_PDK		0x1000
 #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1009	0x1009
 #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1100	0x1100
 #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1400	0x1400
 
-#define PCI_VENDOR_ID_ETRON		0x1b6f
-#define PCI_DEVICE_ID_EJ168		0x7023
-#define PCI_DEVICE_ID_EJ188		0x7052
+#define PCI_VENDOR_ID_ETRON			0x1b6f
+#define PCI_DEVICE_ID_ETRON_EJ168		0x7023
+#define PCI_DEVICE_ID_ETRON_EJ188		0x7052
 
-#define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI	0x8c31
-#define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI	0x9c31
+#define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI		0x8c31
+#define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI		0x9c31
 #define PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_XHCI	0x9cb1
 #define PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI		0x22b5
 #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI		0xa12f
···
 #define PCI_DEVICE_ID_ASMEDIA_2142_XHCI		0x2142
 #define PCI_DEVICE_ID_ASMEDIA_3042_XHCI		0x3042
 #define PCI_DEVICE_ID_ASMEDIA_3242_XHCI		0x3242
-
-#define PCI_DEVICE_ID_CADENCE		0x17CD
-#define PCI_DEVICE_ID_CADENCE_SSP	0x0200
 
 static const char hcd_name[] = "xhci_hcd";
···
 	hcd->irq = 0;
 
 	/*
-	 * calculate number of MSI-X vectors supported.
-	 * - HCS_MAX_INTRS: the max number of interrupts the host can handle,
-	 *   with max number of interrupters based on the xhci HCSPARAMS1.
-	 * - num_online_cpus: maximum MSI-X vectors per CPUs core.
-	 *   Add additional 1 vector to ensure always available interrupt.
+	 * Calculate number of MSI/MSI-X vectors supported.
+	 * - max_interrupters: the max number of interrupts requested, capped to xhci HCSPARAMS1.
+	 * - num_online_cpus: one vector per CPUs core, with at least one overall.
 	 */
-	xhci->nvecs = min(num_online_cpus() + 1,
-			  HCS_MAX_INTRS(xhci->hcs_params1));
+	xhci->nvecs = min(num_online_cpus() + 1, xhci->max_interrupters);
 
 	/* TODO: Check with MSI Soc for sysdev */
 	xhci->nvecs = pci_alloc_irq_vectors(pdev, 1, xhci->nvecs,
···
 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
 
 	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
-	    pdev->device == PCI_DEVICE_ID_EJ168) {
+	    (pdev->device == PCI_DEVICE_ID_ETRON_EJ168 ||
+	     pdev->device == PCI_DEVICE_ID_ETRON_EJ188)) {
+		xhci->quirks |= XHCI_ETRON_HOST;
 		xhci->quirks |= XHCI_RESET_ON_RESUME;
 		xhci->quirks |= XHCI_BROKEN_STREAMS;
-	}
-	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
-	    pdev->device == PCI_DEVICE_ID_EJ188) {
-		xhci->quirks |= XHCI_RESET_ON_RESUME;
-		xhci->quirks |= XHCI_BROKEN_STREAMS;
+		xhci->quirks |= XHCI_NO_SOFT_RETRY;
 	}
 
 	if (pdev->vendor == PCI_VENDOR_ID_RENESAS &&
···
 		xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH;
 	}
 
-	if (pdev->vendor == PCI_DEVICE_ID_CADENCE &&
-	    pdev->device == PCI_DEVICE_ID_CADENCE_SSP)
+	if (pdev->vendor == PCI_VENDOR_ID_CDNS &&
+	    pdev->device == PCI_DEVICE_ID_CDNS_USBSSP)
 		xhci->quirks |= XHCI_CDNS_SCTX_QUIRK;
 
 	/* xHC spec requires PCI devices to support D3hot and D3cold */
···
 	pm_runtime_allow(&dev->dev);
 
 	dma_set_max_seg_size(&dev->dev, UINT_MAX);
+
+	if (device_property_read_bool(&dev->dev, "ti,pwron-active-high"))
+		pci_clear_and_set_config_dword(dev, 0xE0, 0, 1 << 22);
 
 	return 0;
+1 -1
drivers/usb/host/xhci-plat.c
···
 
 static struct platform_driver usb_generic_xhci_driver = {
 	.probe = xhci_generic_plat_probe,
-	.remove_new = xhci_plat_remove,
+	.remove = xhci_plat_remove,
 	.shutdown = usb_hcd_platform_shutdown,
 	.driver = {
 		.name = "xhci-hcd",
+1 -1
drivers/usb/host/xhci-rcar.c
···
 
 static struct platform_driver usb_xhci_renesas_driver = {
 	.probe = xhci_renesas_probe,
-	.remove_new = xhci_plat_remove,
+	.remove = xhci_plat_remove,
 	.shutdown = usb_hcd_platform_shutdown,
 	.driver = {
 		.name = "xhci-renesas-hcd",
+178 -124
drivers/usb/host/xhci-ring.c
··· 52 52 * endpoint rings; it generates events on the event ring for these. 53 53 */ 54 54 55 + #include <linux/jiffies.h> 55 56 #include <linux/scatterlist.h> 56 57 #include <linux/slab.h> 57 58 #include <linux/dma-mapping.h> ··· 146 145 * TRB is in a new segment. This does not skip over link TRBs, and it does not 147 146 * effect the ring dequeue or enqueue pointers. 148 147 */ 149 - static void next_trb(struct xhci_hcd *xhci, 150 - struct xhci_ring *ring, 151 - struct xhci_segment **seg, 152 - union xhci_trb **trb) 148 + static void next_trb(struct xhci_segment **seg, 149 + union xhci_trb **trb) 153 150 { 154 151 if (trb_is_link(*trb) || last_trb_on_seg(*seg, *trb)) { 155 152 *seg = (*seg)->next; ··· 168 169 if (ring->type == TYPE_EVENT) { 169 170 if (!last_trb_on_seg(ring->deq_seg, ring->dequeue)) { 170 171 ring->dequeue++; 171 - goto out; 172 + return; 172 173 } 173 174 if (last_trb_on_ring(ring, ring->deq_seg, ring->dequeue)) 174 175 ring->cycle_state ^= 1; 175 176 ring->deq_seg = ring->deq_seg->next; 176 177 ring->dequeue = ring->deq_seg->trbs; 177 - goto out; 178 + 179 + trace_xhci_inc_deq(ring); 180 + 181 + return; 178 182 } 179 183 180 184 /* All other rings have link trbs */ ··· 192 190 ring->deq_seg = ring->deq_seg->next; 193 191 ring->dequeue = ring->deq_seg->trbs; 194 192 193 + trace_xhci_inc_deq(ring); 194 + 195 195 if (link_trb_count++ > ring->num_segs) { 196 196 xhci_warn(xhci, "Ring is an endless link TRB loop\n"); 197 197 break; 198 198 } 199 199 } 200 - out: 201 - trace_xhci_inc_deq(ring); 202 - 203 200 return; 204 201 } 205 202 ··· 267 266 ring->enqueue = ring->enq_seg->trbs; 268 267 next = ring->enqueue; 269 268 269 + trace_xhci_inc_enq(ring); 270 + 270 271 if (link_trb_count++ > ring->num_segs) { 271 272 xhci_warn(xhci, "%s: Ring link TRB loop\n", __func__); 272 273 break; 273 274 } 274 275 } 275 - 276 - trace_xhci_inc_enq(ring); 277 276 } 278 277 279 278 /* ··· 427 426 } 428 427 } 429 428 430 - /* Must be called with xhci->lock held, 
releases and aquires lock back */ 429 + /* Must be called with xhci->lock held, releases and acquires lock back */ 431 430 static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags) 432 431 { 433 432 struct xhci_segment *new_seg = xhci->cmd_ring->deq_seg; ··· 447 446 * avoiding corrupting the command ring pointer in case the command ring 448 447 * is stopped by the time the upper dword is written. 449 448 */ 450 - next_trb(xhci, NULL, &new_seg, &new_deq); 449 + next_trb(&new_seg, &new_deq); 451 450 if (trb_is_link(new_deq)) 452 - next_trb(xhci, NULL, &new_seg, &new_deq); 451 + next_trb(&new_seg, &new_deq); 453 452 454 453 crcr = xhci_trb_virt_to_dma(new_seg, new_deq); 455 454 xhci_write_64(xhci, crcr | CMD_RING_ABORT, &xhci->op_regs->cmd_ring); ··· 661 660 662 661 /* 663 662 * We want to find the pointer, segment and cycle state of the new trb 664 - * (the one after current TD's last_trb). We know the cycle state at 665 - * hw_dequeue, so walk the ring until both hw_dequeue and last_trb are 663 + * (the one after current TD's end_trb). We know the cycle state at 664 + * hw_dequeue, so walk the ring until both hw_dequeue and end_trb are 666 665 * found. 667 666 */ 668 667 do { ··· 672 671 if (td_last_trb_found) 673 672 break; 674 673 } 675 - if (new_deq == td->last_trb) 674 + if (new_deq == td->end_trb) 676 675 td_last_trb_found = true; 677 676 678 677 if (cycle_found && trb_is_link(new_deq) && 679 678 link_trb_toggles_cycle(new_deq)) 680 679 new_cycle ^= 0x1; 681 680 682 - next_trb(xhci, ep_ring, &new_seg, &new_deq); 681 + next_trb(&new_seg, &new_deq); 683 682 684 683 /* Search wrapped around, bail out */ 685 684 if (new_deq == ep->ring->dequeue) { ··· 741 740 * (The last TRB actually points to the ring enqueue pointer, which is not part 742 741 * of this TD.) This is used to remove partially enqueued isoc TDs from a ring. 
743 742 */ 744 - static void td_to_noop(struct xhci_hcd *xhci, struct xhci_ring *ep_ring, 745 - struct xhci_td *td, bool flip_cycle) 743 + static void td_to_noop(struct xhci_td *td, bool flip_cycle) 746 744 { 747 745 struct xhci_segment *seg = td->start_seg; 748 - union xhci_trb *trb = td->first_trb; 746 + union xhci_trb *trb = td->start_trb; 749 747 750 748 while (1) { 751 749 trb_to_noop(trb, TRB_TR_NOOP); 752 750 753 751 /* flip cycle if asked to */ 754 - if (flip_cycle && trb != td->first_trb && trb != td->last_trb) 752 + if (flip_cycle && trb != td->start_trb && trb != td->end_trb) 755 753 trb->generic.field[3] ^= cpu_to_le32(TRB_CYCLE); 756 754 757 - if (trb == td->last_trb) 755 + if (trb == td->end_trb) 758 756 break; 759 757 760 - next_trb(xhci, ep_ring, &seg, &trb); 758 + next_trb(&seg, &trb); 761 759 } 762 760 } 763 761 ··· 799 799 800 800 dma_unmap_single(dev, seg->bounce_dma, ring->bounce_buf_len, 801 801 DMA_FROM_DEVICE); 802 - /* for in tranfers we need to copy the data from bounce to sg */ 802 + /* for in transfers we need to copy the data from bounce to sg */ 803 803 if (urb->num_sgs) { 804 804 len = sg_pcopy_from_buffer(urb->sg, urb->num_sgs, seg->bounce_buf, 805 805 seg->bounce_len, seg->bounce_offs); ··· 814 814 seg->bounce_offs = 0; 815 815 } 816 816 817 - static int xhci_td_cleanup(struct xhci_hcd *xhci, struct xhci_td *td, 818 - struct xhci_ring *ep_ring, int status) 817 + static void xhci_td_cleanup(struct xhci_hcd *xhci, struct xhci_td *td, 818 + struct xhci_ring *ep_ring, int status) 819 819 { 820 820 struct urb *urb = NULL; 821 821 ··· 858 858 status = 0; 859 859 xhci_giveback_urb_in_irq(xhci, td, status); 860 860 } 861 - 862 - return 0; 863 861 } 864 862 863 + /* Give back previous TD and move on to the next TD. 
*/ 864 + static void xhci_dequeue_td(struct xhci_hcd *xhci, struct xhci_td *td, struct xhci_ring *ring, 865 + u32 status) 866 + { 867 + ring->dequeue = td->end_trb; 868 + ring->deq_seg = td->end_seg; 869 + inc_deq(xhci, ring); 870 + 871 + xhci_td_cleanup(xhci, td, ring, status); 872 + } 865 873 866 874 /* Complete the cancelled URBs we unlinked from td_list. */ 867 875 static void xhci_giveback_invalidated_tds(struct xhci_virt_ep *ep) ··· 980 972 unsigned int slot_id = ep->vdev->slot_id; 981 973 int err; 982 974 975 + /* 976 + * This is not going to work if the hardware is changing its dequeue 977 + * pointers as we look at them. Completion handler will call us later. 978 + */ 979 + if (ep->ep_state & SET_DEQ_PENDING) 980 + return 0; 981 + 983 982 xhci = ep->xhci; 984 983 985 984 list_for_each_entry_safe(td, tmp_td, &ep->cancelled_td_list, cancelled_td_list) { 986 985 xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb, 987 986 "Removing canceled TD starting at 0x%llx (dma) in stream %u URB %p", 988 987 (unsigned long long)xhci_trb_virt_to_dma( 989 - td->start_seg, td->first_trb), 988 + td->start_seg, td->start_trb), 990 989 td->urb->stream_id, td->urb); 991 990 list_del_init(&td->td_list); 992 991 ring = xhci_urb_to_transfer_ring(xhci, td->urb); ··· 1035 1020 "Found multiple active URBs %p and %p in stream %u?\n", 1036 1021 td->urb, cached_td->urb, 1037 1022 td->urb->stream_id); 1038 - td_to_noop(xhci, ring, cached_td, false); 1023 + td_to_noop(cached_td, false); 1039 1024 cached_td->cancel_status = TD_CLEARED; 1040 1025 } 1041 - td_to_noop(xhci, ring, td, false); 1026 + td_to_noop(td, false); 1042 1027 td->cancel_status = TD_CLEARING_CACHE; 1043 1028 cached_td = td; 1044 1029 break; 1045 1030 } 1046 1031 } else { 1047 - td_to_noop(xhci, ring, td, false); 1032 + td_to_noop(td, false); 1048 1033 td->cancel_status = TD_CLEARED; 1049 1034 } 1050 1035 } ··· 1069 1054 continue; 1070 1055 xhci_warn(xhci, "Failed to clear cancelled cached URB %p, mark clear anyway\n", 1071 
1056 td->urb); 1072 - td_to_noop(xhci, ring, td, false); 1057 + td_to_noop(td, false); 1073 1058 td->cancel_status = TD_CLEARED; 1074 1059 } 1075 1060 } 1076 1061 return 0; 1062 + } 1063 + 1064 + /* 1065 + * Erase queued TDs from transfer ring(s) and give back those the xHC didn't 1066 + * stop on. If necessary, queue commands to move the xHC off cancelled TDs it 1067 + * stopped on. Those will be given back later when the commands complete. 1068 + * 1069 + * Call under xhci->lock on a stopped endpoint. 1070 + */ 1071 + void xhci_process_cancelled_tds(struct xhci_virt_ep *ep) 1072 + { 1073 + xhci_invalidate_cancelled_tds(ep); 1074 + xhci_giveback_invalidated_tds(ep); 1077 1075 } 1078 1076 1079 1077 /* ··· 1179 1151 return; 1180 1152 case EP_STATE_STOPPED: 1181 1153 /* 1182 - * NEC uPD720200 sometimes sets this state and fails with 1183 - * Context Error while continuing to process TRBs. 1184 - * Be conservative and trust EP_CTX_STATE on other chips. 1154 + * Per xHCI 4.6.9, Stop Endpoint command on a Stopped 1155 + * EP is a Context State Error, and EP stays Stopped. 1156 + * 1157 + * But maybe it failed on Halted, and somebody ran Reset 1158 + * Endpoint later. EP state is now Stopped and EP_HALTED 1159 + * still set because Reset EP handler will run after us. 1160 + */ 1161 + if (ep->ep_state & EP_HALTED) 1162 + break; 1163 + /* 1164 + * On some HCs EP state remains Stopped for some tens of 1165 + * us to a few ms or more after a doorbell ring, and any 1166 + * new Stop Endpoint fails without aborting the restart. 1167 + * This handler may run quickly enough to still see this 1168 + * Stopped state, but it will soon change to Running. 1169 + * 1170 + * Assume this bug on unexpected Stop Endpoint failures. 1171 + * Keep retrying until the EP starts and stops again, on 1172 + * chips where this is known to help. Wait for 100ms. 
1185 1173 */ 1186 1174 if (!(xhci->quirks & XHCI_NEC_HOST)) 1175 + break; 1176 + if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100))) 1187 1177 break; 1188 1178 fallthrough; 1189 1179 case EP_STATE_RUNNING: 1190 1180 /* Race, HW handled stop ep cmd before ep was running */ 1191 - xhci_dbg(xhci, "Stop ep completion ctx error, ep is running\n"); 1181 + xhci_dbg(xhci, "Stop ep completion ctx error, ctx_state %d\n", 1182 + GET_EP_CTX_STATE(ep_ctx)); 1192 1183 1193 1184 command = xhci_alloc_command(xhci, false, GFP_ATOMIC); 1194 1185 if (!command) { ··· 1385 1338 struct xhci_virt_ep *ep; 1386 1339 struct xhci_ep_ctx *ep_ctx; 1387 1340 struct xhci_slot_ctx *slot_ctx; 1341 + struct xhci_stream_ctx *stream_ctx; 1388 1342 struct xhci_td *td, *tmp_td; 1389 - bool deferred = false; 1390 1343 1391 1344 ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3])); 1392 1345 stream_id = TRB_TO_STREAM_ID(le32_to_cpu(trb->generic.field[2])); ··· 1406 1359 slot_ctx = xhci_get_slot_ctx(xhci, ep->vdev->out_ctx); 1407 1360 trace_xhci_handle_cmd_set_deq(slot_ctx); 1408 1361 trace_xhci_handle_cmd_set_deq_ep(ep_ctx); 1362 + 1363 + if (ep->ep_state & EP_HAS_STREAMS) { 1364 + stream_ctx = &ep->stream_info->stream_ctx_array[stream_id]; 1365 + trace_xhci_handle_cmd_set_deq_stream(ep->stream_info, stream_id); 1366 + } 1409 1367 1410 1368 if (cmd_comp_code != COMP_SUCCESS) { 1411 1369 unsigned int ep_state; ··· 1448 1396 u64 deq; 1449 1397 /* 4.6.10 deq ptr is written to the stream ctx for streams */ 1450 1398 if (ep->ep_state & EP_HAS_STREAMS) { 1451 - struct xhci_stream_ctx *ctx = 1452 - &ep->stream_info->stream_ctx_array[stream_id]; 1453 - deq = le64_to_cpu(ctx->stream_ring) & SCTX_DEQ_MASK; 1399 + deq = le64_to_cpu(stream_ctx->stream_ring) & SCTX_DEQ_MASK; 1454 1400 1455 1401 /* 1456 1402 * Cadence xHCI controllers store some endpoint state ··· 1460 1410 * To fix this issue driver must clear Rsvd0 field. 
1461 1411 */ 1462 1412 if (xhci->quirks & XHCI_CDNS_SCTX_QUIRK) { 1463 - ctx->reserved[0] = 0; 1464 - ctx->reserved[1] = 0; 1413 + stream_ctx->reserved[0] = 0; 1414 + stream_ctx->reserved[1] = 0; 1465 1415 } 1466 1416 } else { 1467 1417 deq = le64_to_cpu(ep_ctx->deq) & ~EP_CTX_CYCLE_MASK; ··· 1490 1440 xhci_dbg(ep->xhci, "%s: Giveback cancelled URB %p TD\n", 1491 1441 __func__, td->urb); 1492 1442 xhci_td_cleanup(ep->xhci, td, ep_ring, td->status); 1493 - } else if (td->cancel_status == TD_CLEARING_CACHE_DEFERRED) { 1494 - deferred = true; 1495 1443 } else { 1496 1444 xhci_dbg(ep->xhci, "%s: Keep cancelled URB %p TD as cancel_status is %d\n", 1497 1445 __func__, td->urb, td->cancel_status); ··· 1500 1452 ep->queued_deq_seg = NULL; 1501 1453 ep->queued_deq_ptr = NULL; 1502 1454 1503 - if (deferred) { 1504 - /* We have more streams to clear */ 1455 + /* Check for deferred or newly cancelled TDs */ 1456 + if (!list_empty(&ep->cancelled_td_list)) { 1505 1457 xhci_dbg(ep->xhci, "%s: Pending TDs to clear, continuing with invalidation\n", 1506 1458 __func__); 1507 1459 xhci_invalidate_cancelled_tds(ep); 1460 + /* Try to restart the endpoint if all is done */ 1461 + ring_doorbell_for_active_rings(xhci, slot_id, ep_index); 1462 + /* Start giving back any TDs invalidated above */ 1463 + xhci_giveback_invalidated_tds(ep); 1508 1464 } else { 1509 1465 /* Restart any rings with pending URBs */ 1510 1466 xhci_dbg(ep->xhci, "%s: All TDs cleared, ring doorbell\n", __func__); ··· 1768 1716 cmd_dma = le64_to_cpu(event->cmd_trb); 1769 1717 cmd_trb = xhci->cmd_ring->dequeue; 1770 1718 1771 - trace_xhci_handle_command(xhci->cmd_ring, &cmd_trb->generic); 1719 + trace_xhci_handle_command(xhci->cmd_ring, &cmd_trb->generic, cmd_dma); 1772 1720 1773 1721 cmd_comp_code = GET_COMP_CODE(le32_to_cpu(event->status)); 1774 1722 ··· 2126 2074 dma_addr_t end_trb_dma; 2127 2075 struct xhci_segment *cur_seg; 2128 2076 2129 - start_dma = xhci_trb_virt_to_dma(td->start_seg, td->first_trb); 2077 + 
start_dma = xhci_trb_virt_to_dma(td->start_seg, td->start_trb); 2130 2078 cur_seg = td->start_seg; 2131 2079 2132 2080 do { ··· 2136 2084 end_seg_dma = xhci_trb_virt_to_dma(cur_seg, 2137 2085 &cur_seg->trbs[TRBS_PER_SEGMENT - 1]); 2138 2086 /* If the end TRB isn't in this segment, this is set to 0 */ 2139 - end_trb_dma = xhci_trb_virt_to_dma(cur_seg, td->last_trb); 2087 + end_trb_dma = xhci_trb_virt_to_dma(cur_seg, td->end_trb); 2140 2088 2141 2089 if (debug) 2142 2090 xhci_warn(xhci, ··· 2236 2184 return 0; 2237 2185 } 2238 2186 2239 - static int finish_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep, 2240 - struct xhci_ring *ep_ring, struct xhci_td *td, 2241 - u32 trb_comp_code) 2187 + static void finish_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep, 2188 + struct xhci_ring *ep_ring, struct xhci_td *td, 2189 + u32 trb_comp_code) 2242 2190 { 2243 2191 struct xhci_ep_ctx *ep_ctx; 2244 2192 ··· 2253 2201 * stopped TDs. A stopped TD may be restarted, so don't update 2254 2202 * the ring dequeue pointer or take this TD off any lists yet. 
2255 2203 */ 2256 - return 0; 2204 + return; 2257 2205 case COMP_USB_TRANSACTION_ERROR: 2258 2206 case COMP_BABBLE_DETECTED_ERROR: 2259 2207 case COMP_SPLIT_TRANSACTION_ERROR: ··· 2278 2226 !list_empty(&td->cancelled_td_list)) { 2279 2227 xhci_dbg(xhci, "Already resolving halted ep for 0x%llx\n", 2280 2228 (unsigned long long)xhci_trb_virt_to_dma( 2281 - td->start_seg, td->first_trb)); 2282 - return 0; 2229 + td->start_seg, td->start_trb)); 2230 + return; 2283 2231 } 2284 2232 /* endpoint not halted, don't reset it */ 2285 2233 break; ··· 2287 2235 /* Almost same procedure as for STALL_ERROR below */ 2288 2236 xhci_clear_hub_tt_buffer(xhci, td, ep); 2289 2237 xhci_handle_halted_endpoint(xhci, ep, td, EP_HARD_RESET); 2290 - return 0; 2238 + return; 2291 2239 case COMP_STALL_ERROR: 2292 2240 /* 2293 2241 * xhci internal endpoint state will go to a "halt" state for ··· 2304 2252 2305 2253 xhci_handle_halted_endpoint(xhci, ep, td, EP_HARD_RESET); 2306 2254 2307 - return 0; /* xhci_handle_halted_endpoint marked td cancelled */ 2255 + return; /* xhci_handle_halted_endpoint marked td cancelled */ 2308 2256 default: 2309 2257 break; 2310 2258 } 2311 2259 2312 - /* Update ring dequeue pointer */ 2313 - ep_ring->dequeue = td->last_trb; 2314 - ep_ring->deq_seg = td->last_trb_seg; 2315 - inc_deq(xhci, ep_ring); 2316 - 2317 - return xhci_td_cleanup(xhci, td, ep_ring, td->status); 2260 + xhci_dequeue_td(xhci, td, ep_ring, td->status); 2318 2261 } 2319 2262 2320 - /* sum trb lengths from ring dequeue up to stop_trb, _excluding_ stop_trb */ 2321 - static int sum_trb_lengths(struct xhci_hcd *xhci, struct xhci_ring *ring, 2322 - union xhci_trb *stop_trb) 2263 + /* sum trb lengths from the first trb up to stop_trb, _excluding_ stop_trb */ 2264 + static u32 sum_trb_lengths(struct xhci_td *td, union xhci_trb *stop_trb) 2323 2265 { 2324 2266 u32 sum; 2325 - union xhci_trb *trb = ring->dequeue; 2326 - struct xhci_segment *seg = ring->deq_seg; 2267 + union xhci_trb *trb = td->start_trb; 
2268 + struct xhci_segment *seg = td->start_seg; 2327 2269 2328 - for (sum = 0; trb != stop_trb; next_trb(xhci, ring, &seg, &trb)) { 2270 + for (sum = 0; trb != stop_trb; next_trb(&seg, &trb)) { 2329 2271 if (!trb_is_noop(trb) && !trb_is_link(trb)) 2330 2272 sum += TRB_LEN(le32_to_cpu(trb->generic.field[2])); 2331 2273 } ··· 2329 2283 /* 2330 2284 * Process control tds, update urb status and actual_length. 2331 2285 */ 2332 - static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep, 2333 - struct xhci_ring *ep_ring, struct xhci_td *td, 2334 - union xhci_trb *ep_trb, struct xhci_transfer_event *event) 2286 + static void process_ctrl_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep, 2287 + struct xhci_ring *ep_ring, struct xhci_td *td, 2288 + union xhci_trb *ep_trb, struct xhci_transfer_event *event) 2335 2289 { 2336 2290 struct xhci_ep_ctx *ep_ctx; 2337 2291 u32 trb_comp_code; ··· 2410 2364 td->urb_length_set = true; 2411 2365 td->urb->actual_length = requested - remaining; 2412 2366 xhci_dbg(xhci, "Waiting for status stage event\n"); 2413 - return 0; 2367 + return; 2414 2368 } 2415 2369 2416 2370 /* at status stage */ ··· 2418 2372 td->urb->actual_length = requested; 2419 2373 2420 2374 finish_td: 2421 - return finish_td(xhci, ep, ep_ring, td, trb_comp_code); 2375 + finish_td(xhci, ep, ep_ring, td, trb_comp_code); 2422 2376 } 2423 2377 2424 2378 /* 2425 2379 * Process isochronous tds, update urb packet status and actual_length. 
2426 2380 */ 2427 - static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep, 2428 - struct xhci_ring *ep_ring, struct xhci_td *td, 2429 - union xhci_trb *ep_trb, struct xhci_transfer_event *event) 2381 + static void process_isoc_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep, 2382 + struct xhci_ring *ep_ring, struct xhci_td *td, 2383 + union xhci_trb *ep_trb, struct xhci_transfer_event *event) 2430 2384 { 2431 2385 struct urb_priv *urb_priv; 2432 2386 int idx; ··· 2471 2425 fallthrough; 2472 2426 case COMP_ISOCH_BUFFER_OVERRUN: 2473 2427 frame->status = -EOVERFLOW; 2474 - if (ep_trb != td->last_trb) 2428 + if (ep_trb != td->end_trb) 2475 2429 td->error_mid_td = true; 2476 2430 break; 2477 2431 case COMP_INCOMPATIBLE_DEVICE_ERROR: ··· 2481 2435 case COMP_USB_TRANSACTION_ERROR: 2482 2436 frame->status = -EPROTO; 2483 2437 sum_trbs_for_length = true; 2484 - if (ep_trb != td->last_trb) 2438 + if (ep_trb != td->end_trb) 2485 2439 td->error_mid_td = true; 2486 2440 break; 2487 2441 case COMP_STOPPED: 2488 2442 sum_trbs_for_length = true; 2489 2443 break; 2490 2444 case COMP_STOPPED_SHORT_PACKET: 2491 - /* field normally containing residue now contains tranferred */ 2445 + /* field normally containing residue now contains transferred */ 2492 2446 frame->status = short_framestatus; 2493 2447 requested = remaining; 2494 2448 break; ··· 2508 2462 goto finish_td; 2509 2463 2510 2464 if (sum_trbs_for_length) 2511 - frame->actual_length = sum_trb_lengths(xhci, ep->ring, ep_trb) + 2465 + frame->actual_length = sum_trb_lengths(td, ep_trb) + 2512 2466 ep_trb_len - remaining; 2513 2467 else 2514 2468 frame->actual_length = requested; ··· 2517 2471 2518 2472 finish_td: 2519 2473 /* Don't give back TD yet if we encountered an error mid TD */ 2520 - if (td->error_mid_td && ep_trb != td->last_trb) { 2474 + if (td->error_mid_td && ep_trb != td->end_trb) { 2521 2475 xhci_dbg(xhci, "Error mid isoc TD, wait for final completion event\n"); 2522 2476 td->urb_length_set = 
true; 2523 - return 0; 2477 + return; 2524 2478 } 2525 - 2526 - return finish_td(xhci, ep, ep_ring, td, trb_comp_code); 2479 + finish_td(xhci, ep, ep_ring, td, trb_comp_code); 2527 2480 } 2528 2481 2529 - static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, 2530 - struct xhci_virt_ep *ep, int status) 2482 + static void skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, 2483 + struct xhci_virt_ep *ep, int status) 2531 2484 { 2532 2485 struct urb_priv *urb_priv; 2533 2486 struct usb_iso_packet_descriptor *frame; ··· 2542 2497 /* calc actual length */ 2543 2498 frame->actual_length = 0; 2544 2499 2545 - /* Update ring dequeue pointer */ 2546 - ep->ring->dequeue = td->last_trb; 2547 - ep->ring->deq_seg = td->last_trb_seg; 2548 - inc_deq(xhci, ep->ring); 2549 - 2550 - return xhci_td_cleanup(xhci, td, ep->ring, status); 2500 + xhci_dequeue_td(xhci, td, ep->ring, status); 2551 2501 } 2552 2502 2553 2503 /* 2554 2504 * Process bulk and interrupt tds, update urb status and actual_length. 
2555 2505 */ 2556 - static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep, 2557 - struct xhci_ring *ep_ring, struct xhci_td *td, 2558 - union xhci_trb *ep_trb, struct xhci_transfer_event *event) 2506 + static void process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep, 2507 + struct xhci_ring *ep_ring, struct xhci_td *td, 2508 + union xhci_trb *ep_trb, struct xhci_transfer_event *event) 2559 2509 { 2560 2510 struct xhci_slot_ctx *slot_ctx; 2561 2511 u32 trb_comp_code; ··· 2566 2526 case COMP_SUCCESS: 2567 2527 ep->err_count = 0; 2568 2528 /* handle success with untransferred data as short packet */ 2569 - if (ep_trb != td->last_trb || remaining) { 2529 + if (ep_trb != td->end_trb || remaining) { 2570 2530 xhci_warn(xhci, "WARN Successful completion on short TX\n"); 2571 2531 xhci_dbg(xhci, "ep %#x - asked for %d bytes, %d bytes untransferred\n", 2572 2532 td->urb->ep->desc.bEndpointAddress, ··· 2582 2542 goto finish_td; 2583 2543 case COMP_STOPPED_LENGTH_INVALID: 2584 2544 /* stopped on ep trb with invalid length, exclude it */ 2585 - td->urb->actual_length = sum_trb_lengths(xhci, ep_ring, ep_trb); 2545 + td->urb->actual_length = sum_trb_lengths(td, ep_trb); 2586 2546 goto finish_td; 2587 2547 case COMP_USB_TRANSACTION_ERROR: 2588 2548 if (xhci->quirks & XHCI_NO_SOFT_RETRY || ··· 2593 2553 td->status = 0; 2594 2554 2595 2555 xhci_handle_halted_endpoint(xhci, ep, td, EP_SOFT_RESET); 2596 - return 0; 2556 + return; 2597 2557 default: 2598 2558 /* do nothing */ 2599 2559 break; 2600 2560 } 2601 2561 2602 - if (ep_trb == td->last_trb) 2562 + if (ep_trb == td->end_trb) 2603 2563 td->urb->actual_length = requested - remaining; 2604 2564 else 2605 2565 td->urb->actual_length = 2606 - sum_trb_lengths(xhci, ep_ring, ep_trb) + 2566 + sum_trb_lengths(td, ep_trb) + 2607 2567 ep_trb_len - remaining; 2608 2568 finish_td: 2609 2569 if (remaining > requested) { ··· 2612 2572 td->urb->actual_length = 0; 2613 2573 } 2614 2574 2615 - return 
finish_td(xhci, ep, ep_ring, td, trb_comp_code); 2575 + finish_td(xhci, ep, ep_ring, td, trb_comp_code); 2616 2576 } 2617 2577 2618 2578 /* Transfer events which don't point to a transfer TRB, see xhci 4.17.4 */ ··· 2832 2792 2833 2793 if (td && td->error_mid_td && !trb_in_td(xhci, td, ep_trb_dma, false)) { 2834 2794 xhci_dbg(xhci, "Missing TD completion event after mid TD error\n"); 2835 - ep_ring->dequeue = td->last_trb; 2836 - ep_ring->deq_seg = td->last_trb_seg; 2837 - inc_deq(xhci, ep_ring); 2838 - xhci_td_cleanup(xhci, td, ep_ring, td->status); 2795 + xhci_dequeue_td(xhci, td, ep_ring, td->status); 2839 2796 } 2840 2797 2841 2798 if (list_empty(&ep_ring->td_list)) { ··· 2926 2889 ep_ring->last_td_was_short = false; 2927 2890 2928 2891 ep_trb = &ep_seg->trbs[(ep_trb_dma - ep_seg->dma) / sizeof(*ep_trb)]; 2929 - trace_xhci_handle_transfer(ep_ring, (struct xhci_generic_trb *) ep_trb); 2892 + trace_xhci_handle_transfer(ep_ring, (struct xhci_generic_trb *) ep_trb, ep_trb_dma); 2930 2893 2931 2894 /* 2932 2895 * No-op TRB could trigger interrupts in a case where a URB was killed ··· 2976 2939 { 2977 2940 u32 trb_type; 2978 2941 2979 - trace_xhci_handle_event(ir->event_ring, &event->generic); 2942 + trace_xhci_handle_event(ir->event_ring, &event->generic, 2943 + xhci_trb_virt_to_dma(ir->event_ring->deq_seg, 2944 + ir->event_ring->dequeue)); 2980 2945 2981 2946 /* 2982 2947 * Barrier between reading the TRB_CYCLE (valid) flag before, and any ··· 3201 3162 wmb(); 3202 3163 trb->field[3] = cpu_to_le32(field4); 3203 3164 3204 - trace_xhci_queue_trb(ring, trb); 3165 + trace_xhci_queue_trb(ring, trb, 3166 + xhci_trb_virt_to_dma(ring->enq_seg, ring->enqueue)); 3205 3167 3206 3168 inc_enq(xhci, ring, more_trbs_coming); 3207 3169 } ··· 3342 3302 /* Add this TD to the tail of the endpoint ring's TD list */ 3343 3303 list_add_tail(&td->td_list, &ep_ring->td_list); 3344 3304 td->start_seg = ep_ring->enq_seg; 3345 - td->first_trb = ep_ring->enqueue; 3305 + td->start_trb = 
ep_ring->enqueue; 3346 3306 3347 3307 return 0; 3348 3308 } ··· 3681 3641 field &= ~TRB_CHAIN; 3682 3642 field |= TRB_IOC; 3683 3643 more_trbs_coming = false; 3684 - td->last_trb = ring->enqueue; 3685 - td->last_trb_seg = ring->enq_seg; 3644 + td->end_trb = ring->enqueue; 3645 + td->end_seg = ring->enq_seg; 3686 3646 if (xhci_urb_suitable_for_idt(urb)) { 3687 3647 memcpy(&send_addr, urb->transfer_buffer, 3688 3648 trb_buff_len); ··· 3730 3690 ret = prepare_transfer(xhci, xhci->devs[slot_id], 3731 3691 ep_index, urb->stream_id, 3732 3692 1, urb, 1, mem_flags); 3733 - urb_priv->td[1].last_trb = ring->enqueue; 3734 - urb_priv->td[1].last_trb_seg = ring->enq_seg; 3693 + urb_priv->td[1].end_trb = ring->enqueue; 3694 + urb_priv->td[1].end_seg = ring->enq_seg; 3735 3695 field = TRB_TYPE(TRB_NORMAL) | ring->cycle_state | TRB_IOC; 3736 3696 queue_trb(xhci, ring, 0, 0, 0, TRB_INTR_TARGET(0), field); 3737 3697 } ··· 3766 3726 */ 3767 3727 if (!urb->setup_packet) 3768 3728 return -EINVAL; 3729 + 3730 + if ((xhci->quirks & XHCI_ETRON_HOST) && 3731 + urb->dev->speed >= USB_SPEED_SUPER) { 3732 + /* 3733 + * If the next available TRB is the Link TRB in the ring segment, then 3734 + * enqueue a No Op TRB; this prevents the Setup and Data Stage 3735 + * TRBs from being split by the Link TRB. 
3736 + */ 3737 + if (trb_is_link(ep_ring->enqueue + 1)) { 3738 + field = TRB_TYPE(TRB_TR_NOOP) | ep_ring->cycle_state; 3739 + queue_trb(xhci, ep_ring, false, 0, 0, 3740 + TRB_INTR_TARGET(0), field); 3741 + } 3742 + } 3769 3743 3770 3744 /* 1 TRB for setup, 1 for status */ 3771 3745 num_trbs = 2; ··· 3869 3815 } 3870 3816 3871 3817 /* Save the DMA address of the last TRB in the TD */ 3872 - td->last_trb = ep_ring->enqueue; 3873 - td->last_trb_seg = ep_ring->enq_seg; 3818 + td->end_trb = ep_ring->enqueue; 3819 + td->end_seg = ep_ring->enq_seg; 3874 3820 3875 3821 /* Queue status TRB - see Table 7 and sections 4.11.2.2 and 6.4.1.2.3 */ 3876 3822 /* If the device sent data, the status stage is an OUT transfer */ ··· 4155 4101 field |= TRB_CHAIN; 4156 4102 } else { 4157 4103 more_trbs_coming = false; 4158 - td->last_trb = ep_ring->enqueue; 4159 - td->last_trb_seg = ep_ring->enq_seg; 4104 + td->end_trb = ep_ring->enqueue; 4105 + td->end_seg = ep_ring->enq_seg; 4160 4106 field |= TRB_IOC; 4161 4107 if (trb_block_event_intr(xhci, num_tds, i, ir)) 4162 4108 field |= TRB_BEI; ··· 4222 4168 /* Use the first TD as a temporary variable to turn the TDs we've queued 4223 4169 * into No-ops with a software-owned cycle bit. That way the hardware 4224 4170 * won't accidentally start executing bogus TDs when we partially 4225 - * overwrite them. td->first_trb and td->start_seg are already set. 4171 + * overwrite them. td->start_trb and td->start_seg are already set. 4226 4172 */ 4227 - urb_priv->td[0].last_trb = ep_ring->enqueue; 4173 + urb_priv->td[0].end_trb = ep_ring->enqueue; 4228 4174 /* Every TRB except the first & last will have its cycle bit flipped. */ 4229 - td_to_noop(xhci, ep_ring, &urb_priv->td[0], true); 4175 + td_to_noop(&urb_priv->td[0], true); 4230 4176 4231 4177 /* Reset the ring enqueue back to the first TRB and its cycle bit. 
*/ 4232 - ep_ring->enqueue = urb_priv->td[0].first_trb; 4178 + ep_ring->enqueue = urb_priv->td[0].start_trb; 4233 4179 ep_ring->enq_seg = urb_priv->td[0].start_seg; 4234 4180 ep_ring->cycle_state = start_cycle; 4235 4181 usb_hcd_unlink_urb_from_ep(bus_to_hcd(urb->dev->bus), urb);
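The XHCI_ETRON_HOST hunk above pads the control-transfer path with a No Op TRB whenever the slot after the next enqueue position is the segment's Link TRB, so the Setup and following stage TRBs stay contiguous. A minimal userspace sketch of that idea; the ring layout, type names, and helpers here are invented for illustration, not the kernel's real structures:

```c
/* Toy model of one xHCI transfer ring segment whose last slot is a
 * Link TRB. All names are simplified stand-ins for illustration. */
enum trb_type { TRB_FREE, TRB_NOOP, TRB_SETUP, TRB_STATUS, TRB_LINK };

#define SEG_TRBS 8

struct toy_ring {
	enum trb_type trbs[SEG_TRBS];
	int enqueue;		/* index of the next free TRB */
};

static void queue_trb(struct toy_ring *r, enum trb_type t)
{
	r->trbs[r->enqueue] = t;
	r->enqueue = (r->enqueue + 1) % SEG_TRBS;
	if (r->trbs[r->enqueue] == TRB_LINK)
		r->enqueue = 0;	/* follow the link TRB back to segment start */
}

static void queue_ctrl_transfer(struct toy_ring *r)
{
	/* Etron-style workaround: if the slot after the next enqueue
	 * position is the link TRB, pad with a no-op so the setup and
	 * status TRBs are not split across the segment boundary. */
	if (r->trbs[(r->enqueue + 1) % SEG_TRBS] == TRB_LINK)
		queue_trb(r, TRB_NOOP);
	queue_trb(r, TRB_SETUP);
	queue_trb(r, TRB_STATUS);
}

/* Returns 1 when the pad no-op landed before the link TRB and the
 * setup/status pair stayed contiguous after wrapping to slot 0. */
static int etron_demo(void)
{
	struct toy_ring r = { .enqueue = SEG_TRBS - 2 };

	r.trbs[SEG_TRBS - 1] = TRB_LINK;
	queue_ctrl_transfer(&r);
	return r.trbs[SEG_TRBS - 2] == TRB_NOOP &&
	       r.trbs[0] == TRB_SETUP && r.trbs[1] == TRB_STATUS;
}
```

Without the pad, the setup TRB would land in the last data slot before the link and the next stage would start at slot 0, which is exactly the split the quirk avoids.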
+1 -1
drivers/usb/host/xhci-tegra.c
··· 2664 2664 2665 2665 static struct platform_driver tegra_xusb_driver = { 2666 2666 .probe = tegra_xusb_probe, 2667 - .remove_new = tegra_xusb_remove, 2667 + .remove = tegra_xusb_remove, 2668 2668 .shutdown = tegra_xusb_shutdown, 2669 2669 .driver = { 2670 2670 .name = "tegra-xusb",
+55 -24
drivers/usb/host/xhci-trace.h
··· 108 108 ); 109 109 110 110 DECLARE_EVENT_CLASS(xhci_log_trb, 111 - TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb), 112 - TP_ARGS(ring, trb), 111 + TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb, dma_addr_t dma), 112 + TP_ARGS(ring, trb, dma), 113 113 TP_STRUCT__entry( 114 + __field(dma_addr_t, dma) 114 115 __field(u32, type) 115 116 __field(u32, field0) 116 117 __field(u32, field1) ··· 119 118 __field(u32, field3) 120 119 ), 121 120 TP_fast_assign( 121 + __entry->dma = dma; 122 122 __entry->type = ring->type; 123 123 __entry->field0 = le32_to_cpu(trb->field[0]); 124 124 __entry->field1 = le32_to_cpu(trb->field[1]); 125 125 __entry->field2 = le32_to_cpu(trb->field[2]); 126 126 __entry->field3 = le32_to_cpu(trb->field[3]); 127 127 ), 128 - TP_printk("%s: %s", xhci_ring_type_string(__entry->type), 128 + TP_printk("%s: @%pad %s", 129 + xhci_ring_type_string(__entry->type), &__entry->dma, 129 130 xhci_decode_trb(__get_buf(XHCI_MSG_MAX), XHCI_MSG_MAX, __entry->field0, 130 131 __entry->field1, __entry->field2, __entry->field3) 131 132 ) 132 133 ); 133 134 134 135 DEFINE_EVENT(xhci_log_trb, xhci_handle_event, 135 - TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb), 136 - TP_ARGS(ring, trb) 136 + TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb, dma_addr_t dma), 137 + TP_ARGS(ring, trb, dma) 137 138 ); 138 139 139 140 DEFINE_EVENT(xhci_log_trb, xhci_handle_command, 140 - TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb), 141 - TP_ARGS(ring, trb) 141 + TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb, dma_addr_t dma), 142 + TP_ARGS(ring, trb, dma) 142 143 ); 143 144 144 145 DEFINE_EVENT(xhci_log_trb, xhci_handle_transfer, 145 - TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb), 146 - TP_ARGS(ring, trb) 146 + TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb, dma_addr_t dma), 147 + TP_ARGS(ring, trb, dma) 147 148 ); 148 149 149 150 DEFINE_EVENT(xhci_log_trb, 
xhci_queue_trb, 150 - TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb), 151 - TP_ARGS(ring, trb) 151 + TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb, dma_addr_t dma), 152 + TP_ARGS(ring, trb, dma) 153 + 152 154 ); 153 155 154 156 DEFINE_EVENT(xhci_log_trb, xhci_dbc_handle_event, 155 - TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb), 156 - TP_ARGS(ring, trb) 157 + TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb, dma_addr_t dma), 158 + TP_ARGS(ring, trb, dma) 157 159 ); 158 160 159 161 DEFINE_EVENT(xhci_log_trb, xhci_dbc_handle_transfer, 160 - TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb), 161 - TP_ARGS(ring, trb) 162 + TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb, dma_addr_t dma), 163 + TP_ARGS(ring, trb, dma) 162 164 ); 163 165 164 166 DEFINE_EVENT(xhci_log_trb, xhci_dbc_gadget_ep_queue, 165 - TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb), 166 - TP_ARGS(ring, trb) 167 + TP_PROTO(struct xhci_ring *ring, struct xhci_generic_trb *trb, dma_addr_t dma), 168 + TP_ARGS(ring, trb, dma) 167 169 ); 168 170 169 171 DECLARE_EVENT_CLASS(xhci_log_free_virt_dev, ··· 314 310 TP_ARGS(urb) 315 311 ); 316 312 313 + DECLARE_EVENT_CLASS(xhci_log_stream_ctx, 314 + TP_PROTO(struct xhci_stream_info *info, unsigned int stream_id), 315 + TP_ARGS(info, stream_id), 316 + TP_STRUCT__entry( 317 + __field(unsigned int, stream_id) 318 + __field(u64, stream_ring) 319 + __field(dma_addr_t, ctx_array_dma) 320 + 321 + ), 322 + TP_fast_assign( 323 + __entry->stream_id = stream_id; 324 + __entry->stream_ring = le64_to_cpu(info->stream_ctx_array[stream_id].stream_ring); 325 + __entry->ctx_array_dma = info->ctx_array_dma + stream_id * 16; 326 + 327 + ), 328 + TP_printk("stream %u ctx @%pad: SCT %llu deq %llx", __entry->stream_id, 329 + &__entry->ctx_array_dma, CTX_TO_SCT(__entry->stream_ring), 330 + __entry->stream_ring 331 + ) 332 + ); 333 + 334 + DEFINE_EVENT(xhci_log_stream_ctx, 
xhci_alloc_stream_info_ctx, 335 + TP_PROTO(struct xhci_stream_info *info, unsigned int stream_id), 336 + TP_ARGS(info, stream_id) 337 + ); 338 + 339 + DEFINE_EVENT(xhci_log_stream_ctx, xhci_handle_cmd_set_deq_stream, 340 + TP_PROTO(struct xhci_stream_info *info, unsigned int stream_id), 341 + TP_ARGS(info, stream_id) 342 + ); 343 + 317 344 DECLARE_EVENT_CLASS(xhci_log_ep_ctx, 318 345 TP_PROTO(struct xhci_ep_ctx *ctx), 319 346 TP_ARGS(ctx), ··· 489 454 __field(void *, ring) 490 455 __field(dma_addr_t, enq) 491 456 __field(dma_addr_t, deq) 492 - __field(dma_addr_t, enq_seg) 493 - __field(dma_addr_t, deq_seg) 494 457 __field(unsigned int, num_segs) 495 458 __field(unsigned int, stream_id) 496 459 __field(unsigned int, cycle_state) ··· 499 466 __entry->type = ring->type; 500 467 __entry->num_segs = ring->num_segs; 501 468 __entry->stream_id = ring->stream_id; 502 - __entry->enq_seg = ring->enq_seg->dma; 503 - __entry->deq_seg = ring->deq_seg->dma; 504 469 __entry->cycle_state = ring->cycle_state; 505 470 __entry->bounce_buf_len = ring->bounce_buf_len; 506 471 __entry->enq = xhci_trb_virt_to_dma(ring->enq_seg, ring->enqueue); 507 472 __entry->deq = xhci_trb_virt_to_dma(ring->deq_seg, ring->dequeue); 508 473 ), 509 - TP_printk("%s %p: enq %pad(%pad) deq %pad(%pad) segs %d stream %d bounce %d cycle %d", 474 + TP_printk("%s %p: enq %pad deq %pad segs %d stream %d bounce %d cycle %d", 510 475 xhci_ring_type_string(__entry->type), __entry->ring, 511 - &__entry->enq, &__entry->enq_seg, 512 - &__entry->deq, &__entry->deq_seg, 476 + &__entry->enq, 477 + &__entry->deq, 513 478 __entry->num_segs, 514 479 __entry->stream_id, 515 480 __entry->bounce_buf_len,
+95 -26
drivers/usb/host/xhci.c
··· 8 8 * Some code borrowed from the Linux EHCI driver. 9 9 */ 10 10 11 + #include <linux/jiffies.h> 11 12 #include <linux/pci.h> 12 13 #include <linux/iommu.h> 13 14 #include <linux/iopoll.h> ··· 41 40 42 41 static bool td_on_ring(struct xhci_td *td, struct xhci_ring *ring) 43 42 { 44 - struct xhci_segment *seg = ring->first_seg; 43 + struct xhci_segment *seg; 45 44 46 45 if (!td || !td->start_seg) 47 46 return false; 48 - do { 47 + 48 + xhci_for_each_ring_seg(ring->first_seg, seg) { 49 49 if (seg == td->start_seg) 50 50 return true; 51 - seg = seg->next; 52 - } while (seg && seg != ring->first_seg); 51 + } 53 52 54 53 return false; 55 54 } ··· 474 473 475 474 xhci_dbg_trace(xhci, trace_xhci_dbg_init, "xhci_init"); 476 475 spin_lock_init(&xhci->lock); 477 - if (xhci->hci_version == 0x95 && link_quirk) { 478 - xhci_dbg_trace(xhci, trace_xhci_dbg_quirks, 479 - "QUIRK: Not clearing Link TRB chain bits."); 480 - xhci->quirks |= XHCI_LINK_TRB_QUIRK; 481 - } else { 482 - xhci_dbg_trace(xhci, trace_xhci_dbg_init, 483 - "xHCI doesn't need link TRB QUIRK"); 484 - } 476 + 485 477 retval = xhci_mem_init(xhci, GFP_KERNEL); 486 478 xhci_dbg_trace(xhci, trace_xhci_dbg_init, "Finished xhci_init"); 487 479 ··· 779 785 struct xhci_segment *seg; 780 786 781 787 ring = xhci->cmd_ring; 782 - seg = ring->deq_seg; 783 - do { 784 - memset(seg->trbs, 0, 785 - sizeof(union xhci_trb) * (TRBS_PER_SEGMENT - 1)); 786 - seg->trbs[TRBS_PER_SEGMENT - 1].link.control &= 787 - cpu_to_le32(~TRB_CYCLE); 788 - seg = seg->next; 789 - } while (seg != ring->deq_seg); 788 + xhci_for_each_ring_seg(ring->first_seg, seg) 789 + memset(seg->trbs, 0, sizeof(union xhci_trb) * (TRBS_PER_SEGMENT - 1)); 790 790 791 - xhci_initialize_ring_info(ring, 1); 791 + xhci_initialize_ring_info(ring); 792 792 /* 793 793 * Reset the hardware dequeue pointer. 
794 794 * Yes, this will need to be re-written after resume, but we're paranoid ··· 1744 1756 urb->ep->desc.bEndpointAddress, 1745 1757 (unsigned long long) xhci_trb_virt_to_dma( 1746 1758 urb_priv->td[i].start_seg, 1747 - urb_priv->td[i].first_trb)); 1759 + urb_priv->td[i].start_trb)); 1748 1760 1749 1761 for (; i < urb_priv->num_tds; i++) { 1750 1762 td = &urb_priv->td[i]; ··· 1756 1768 } 1757 1769 } 1758 1770 1759 - /* Queue a stop endpoint command, but only if this is 1760 - * the first cancellation to be handled. 1761 - */ 1762 - if (!(ep->ep_state & EP_STOP_CMD_PENDING)) { 1771 + /* These completion handlers will sort out cancelled TDs for us */ 1772 + if (ep->ep_state & (EP_STOP_CMD_PENDING | EP_HALTED | SET_DEQ_PENDING)) { 1773 + xhci_dbg(xhci, "Not queuing Stop Endpoint on slot %d ep %d in state 0x%x\n", 1774 + urb->dev->slot_id, ep_index, ep->ep_state); 1775 + goto done; 1776 + } 1777 + 1778 + /* In this case no commands are pending but the endpoint is stopped */ 1779 + if (ep->ep_state & EP_CLEARING_TT) { 1780 + /* and cancelled TDs can be given back right away */ 1781 + xhci_dbg(xhci, "Invalidating TDs instantly on slot %d ep %d in state 0x%x\n", 1782 + urb->dev->slot_id, ep_index, ep->ep_state); 1783 + xhci_process_cancelled_tds(ep); 1784 + } else { 1785 + /* Otherwise, queue a new Stop Endpoint command */ 1763 1786 command = xhci_alloc_command(xhci, false, GFP_ATOMIC); 1764 1787 if (!command) { 1765 1788 ret = -ENOMEM; 1766 1789 goto done; 1767 1790 } 1791 + ep->stop_time = jiffies; 1768 1792 ep->ep_state |= EP_STOP_CMD_PENDING; 1769 1793 xhci_queue_stop_endpoint(xhci, command, urb->dev->slot_id, 1770 1794 ep_index, 0); ··· 2794 2794 return -ENOMEM; 2795 2795 } 2796 2796 2797 + /* 2798 + * Synchronous XHCI stop endpoint helper. Issues the stop endpoint command and 2799 + * waits for the command completion before returning. 
This does not call 2800 + * xhci_handle_cmd_stop_ep(), which has additional handling for 'context error' 2801 + * cases, along with transfer ring cleanup. 2802 + * 2803 + * xhci_stop_endpoint_sync() is intended to be utilized by clients that manage 2804 + * their own transfer ring, such as offload situations. 2805 + */ 2806 + int xhci_stop_endpoint_sync(struct xhci_hcd *xhci, struct xhci_virt_ep *ep, int suspend, 2807 + gfp_t gfp_flags) 2808 + { 2809 + struct xhci_command *command; 2810 + unsigned long flags; 2811 + int ret; 2812 + 2813 + command = xhci_alloc_command(xhci, true, gfp_flags); 2814 + if (!command) 2815 + return -ENOMEM; 2816 + 2817 + spin_lock_irqsave(&xhci->lock, flags); 2818 + ret = xhci_queue_stop_endpoint(xhci, command, ep->vdev->slot_id, 2819 + ep->ep_index, suspend); 2820 + if (ret < 0) { 2821 + spin_unlock_irqrestore(&xhci->lock, flags); 2822 + goto out; 2823 + } 2824 + 2825 + xhci_ring_cmd_db(xhci); 2826 + spin_unlock_irqrestore(&xhci->lock, flags); 2827 + 2828 + wait_for_completion(command->completion); 2829 + 2830 + /* No handling for COMP_CONTEXT_STATE_ERROR done at command completion*/ 2831 + if (command->status == COMP_COMMAND_ABORTED || 2832 + command->status == COMP_COMMAND_RING_STOPPED) { 2833 + xhci_warn(xhci, "Timeout while waiting for stop endpoint command\n"); 2834 + ret = -ETIME; 2835 + } 2836 + out: 2837 + xhci_free_command(xhci, command); 2838 + 2839 + return ret; 2840 + } 2841 + EXPORT_SYMBOL_GPL(xhci_stop_endpoint_sync); 2797 2842 2798 2843 /* Issue a configure endpoint command or evaluate context command 2799 2844 * and wait for it to finish. 
··· 3737 3692 xhci->num_active_eps); 3738 3693 } 3739 3694 3695 + static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev); 3696 + 3740 3697 /* 3741 3698 * This submits a Reset Device Command, which will set the device state to 0, 3742 3699 * set the device address to 0, and disable all the endpoints except the default ··· 3808 3761 if (GET_SLOT_STATE(le32_to_cpu(slot_ctx->dev_state)) == 3809 3762 SLOT_STATE_DISABLED) 3810 3763 return 0; 3764 + 3765 + if (xhci->quirks & XHCI_ETRON_HOST) { 3766 + /* 3767 + * Obtaining a new device slot to inform the xHCI host that 3768 + * the USB device has been reset. 3769 + */ 3770 + ret = xhci_disable_slot(xhci, udev->slot_id); 3771 + xhci_free_virt_device(xhci, udev->slot_id); 3772 + if (!ret) { 3773 + ret = xhci_alloc_dev(hcd, udev); 3774 + if (ret == 1) 3775 + ret = 0; 3776 + else 3777 + ret = -EINVAL; 3778 + } 3779 + return ret; 3780 + } 3811 3781 3812 3782 trace_xhci_discover_or_reset_device(slot_ctx); 3813 3783 ··· 5314 5250 */ 5315 5251 if (xhci->hci_version > 0x96) 5316 5252 xhci->quirks |= XHCI_SPURIOUS_SUCCESS; 5253 + 5254 + if (xhci->hci_version == 0x95 && link_quirk) { 5255 + xhci_dbg(xhci, "QUIRK: Not clearing Link TRB chain bits"); 5256 + xhci->quirks |= XHCI_LINK_TRB_QUIRK; 5257 + } 5317 5258 5318 5259 /* Make sure the HC is halted. */ 5319 5260 retval = xhci_halt(xhci);
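The new xhci_stop_endpoint_sync() above follows a common kernel shape: queue a command, ring the doorbell, then sleep on a completion until the command-completion event fires and reports a status. A rough pthreads analogue of that completion pattern; the struct and function names here are illustrative, not kernel API:

```c
#include <pthread.h>

/* Userspace analogue of the kernel's completion used by the
 * "queue command, ring doorbell, wait_for_completion" pattern. */
struct completion {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	int done;
	int status;
};

static void complete(struct completion *c, int status)
{
	pthread_mutex_lock(&c->lock);
	c->status = status;
	c->done = 1;
	pthread_cond_signal(&c->cond);
	pthread_mutex_unlock(&c->lock);
}

static int wait_for_completion(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	while (!c->done)
		pthread_cond_wait(&c->cond, &c->lock);
	pthread_mutex_unlock(&c->lock);
	return c->status;
}

/* Stands in for the host controller's command-completion interrupt. */
static void *cmd_handler(void *arg)
{
	complete(arg, 0);	/* 0 plays the role of a success code */
	return NULL;
}

static int stop_endpoint_sync_demo(void)
{
	struct completion c = { PTHREAD_MUTEX_INITIALIZER,
				PTHREAD_COND_INITIALIZER, 0, -1 };
	pthread_t irq;
	int status;

	/* "queue the command and ring the doorbell": here, start the
	 * thread that will eventually signal completion */
	if (pthread_create(&irq, NULL, cmd_handler, &c) != 0)
		return -1;
	status = wait_for_completion(&c);
	pthread_join(irq, NULL);
	return status;
}
```

In the driver the same wait is followed by a status check, mapping COMP_COMMAND_ABORTED / COMP_COMMAND_RING_STOPPED to -ETIME.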
+39 -12
drivers/usb/host/xhci.h
··· 554 554 555 555 /* Stream Context Types (section 6.4.1) - bits 3:1 of stream ctx deq ptr */ 556 556 #define SCT_FOR_CTX(p) (((p) & 0x7) << 1) 557 + #define CTX_TO_SCT(p) (((p) >> 1) & 0x7) 557 558 /* Secondary stream array type, dequeue pointer is to a transfer ring */ 558 559 #define SCT_SEC_TR 0 559 560 /* Primary stream array type, dequeue pointer is to a transfer ring */ ··· 691 690 /* Bandwidth checking storage */ 692 691 struct xhci_bw_info bw_info; 693 692 struct list_head bw_endpoint_list; 693 + unsigned long stop_time; 694 694 /* Isoch Frame ID checking storage */ 695 695 int next_frame_id; 696 696 /* Use new Isoch TRB layout needed for extended TBC support */ ··· 1025 1023 /* Interrupter Target - which MSI-X vector to target the completion event at */ 1026 1024 #define TRB_INTR_TARGET(p) (((p) & 0x3ff) << 22) 1027 1025 #define GET_INTR_TARGET(p) (((p) >> 22) & 0x3ff) 1028 - /* Total burst count field, Rsvdz on xhci 1.1 with Extended TBC enabled (ETE) */ 1029 - #define TRB_TBC(p) (((p) & 0x3) << 7) 1030 - #define TRB_TLBPC(p) (((p) & 0xf) << 16) 1031 1026 1032 1027 /* Cycle bit - indicates TRB ownership by HC or HCD */ 1033 1028 #define TRB_CYCLE (1<<0) ··· 1058 1059 /* Isochronous TRB specific fields */ 1059 1060 #define TRB_SIA (1<<31) 1060 1061 #define TRB_FRAME_ID(p) (((p) & 0x7ff) << 20) 1062 + #define GET_FRAME_ID(p) (((p) >> 20) & 0x7ff) 1063 + /* Total burst count field, Rsvdz on xhci 1.1 with Extended TBC enabled (ETE) */ 1064 + #define TRB_TBC(p) (((p) & 0x3) << 7) 1065 + #define GET_TBC(p) (((p) >> 7) & 0x3) 1066 + #define TRB_TLBPC(p) (((p) & 0xf) << 16) 1067 + #define GET_TLBPC(p) (((p) >> 16) & 0xf) 1061 1068 1062 1069 /* TRB cache size for xHC with TRB cache */ 1063 1070 #define TRB_CACHE_SIZE_HS 8 ··· 1264 1259 #define AVOID_BEI_INTERVAL_MIN 8 1265 1260 #define AVOID_BEI_INTERVAL_MAX 32 1266 1261 1262 + #define xhci_for_each_ring_seg(head, seg) \ 1263 + for (seg = head; seg != NULL; seg = (seg->next != head ? 
seg->next : NULL)) 1264 + 1267 1265 struct xhci_segment { 1268 1266 union xhci_trb *trbs; 1269 1267 /* private to HCD */ ··· 1295 1287 enum xhci_cancelled_td_status cancel_status; 1296 1288 struct urb *urb; 1297 1289 struct xhci_segment *start_seg; 1298 - union xhci_trb *first_trb; 1299 - union xhci_trb *last_trb; 1300 - struct xhci_segment *last_trb_seg; 1290 + union xhci_trb *start_trb; 1291 + struct xhci_segment *end_seg; 1292 + union xhci_trb *end_trb; 1301 1293 struct xhci_segment *bounce_seg; 1302 1294 /* actual_length of the URB has already been set */ 1303 1295 bool urb_length_set; ··· 1632 1624 #define XHCI_ZHAOXIN_HOST BIT_ULL(46) 1633 1625 #define XHCI_WRITE_64_HI_LO BIT_ULL(47) 1634 1626 #define XHCI_CDNS_SCTX_QUIRK BIT_ULL(48) 1627 + #define XHCI_ETRON_HOST BIT_ULL(49) 1635 1628 1636 1629 unsigned int num_active_eps; 1637 1630 unsigned int limit_active_eps; ··· 1797 1788 int xhci_endpoint_init(struct xhci_hcd *xhci, struct xhci_virt_device *virt_dev, 1798 1789 struct usb_device *udev, struct usb_host_endpoint *ep, 1799 1790 gfp_t mem_flags); 1800 - struct xhci_ring *xhci_ring_alloc(struct xhci_hcd *xhci, 1801 - unsigned int num_segs, unsigned int cycle_state, 1791 + struct xhci_ring *xhci_ring_alloc(struct xhci_hcd *xhci, unsigned int num_segs, 1802 1792 enum xhci_ring_type type, unsigned int max_packet, gfp_t flags); 1803 1793 void xhci_ring_free(struct xhci_hcd *xhci, struct xhci_ring *ring); 1804 1794 int xhci_ring_expansion(struct xhci_hcd *xhci, struct xhci_ring *ring, 1805 1795 unsigned int num_trbs, gfp_t flags); 1806 - void xhci_initialize_ring_info(struct xhci_ring *ring, 1807 - unsigned int cycle_state); 1796 + void xhci_initialize_ring_info(struct xhci_ring *ring); 1808 1797 void xhci_free_endpoint_ring(struct xhci_hcd *xhci, 1809 1798 struct xhci_virt_device *virt_dev, 1810 1799 unsigned int ep_index); ··· 1920 1913 void xhci_cleanup_command_queue(struct xhci_hcd *xhci); 1921 1914 void inc_deq(struct xhci_hcd *xhci, struct xhci_ring *ring); 
1922 1915 unsigned int count_trbs(u64 addr, u64 len); 1916 + int xhci_stop_endpoint_sync(struct xhci_hcd *xhci, struct xhci_virt_ep *ep, 1917 + int suspend, gfp_t gfp_flags); 1918 + void xhci_process_cancelled_tds(struct xhci_virt_ep *ep); 1923 1919 1924 1920 /* xHCI roothub code */ 1925 1921 void xhci_set_link_state(struct xhci_hcd *xhci, struct xhci_port *port, ··· 2080 2070 field3 & TRB_CYCLE ? 'C' : 'c'); 2081 2071 break; 2082 2072 case TRB_NORMAL: 2083 - case TRB_ISOC: 2084 2073 case TRB_EVENT_DATA: 2085 2074 case TRB_TR_NOOP: 2086 2075 snprintf(str, size, ··· 2096 2087 field3 & TRB_ENT ? 'E' : 'e', 2097 2088 field3 & TRB_CYCLE ? 'C' : 'c'); 2098 2089 break; 2099 - 2090 + case TRB_ISOC: 2091 + snprintf(str, size, 2092 + "Buffer %08x%08x length %d TD size/TBC %d intr %d type '%s' TBC %u TLBPC %u frame_id %u flags %c:%c:%c:%c:%c:%c:%c:%c:%c", 2093 + field1, field0, TRB_LEN(field2), GET_TD_SIZE(field2), 2094 + GET_INTR_TARGET(field2), 2095 + xhci_trb_type_string(type), 2096 + GET_TBC(field3), 2097 + GET_TLBPC(field3), 2098 + GET_FRAME_ID(field3), 2099 + field3 & TRB_SIA ? 'S' : 's', 2100 + field3 & TRB_BEI ? 'B' : 'b', 2101 + field3 & TRB_IDT ? 'I' : 'i', 2102 + field3 & TRB_IOC ? 'I' : 'i', 2103 + field3 & TRB_CHAIN ? 'C' : 'c', 2104 + field3 & TRB_NO_SNOOP ? 'S' : 's', 2105 + field3 & TRB_ISP ? 'I' : 'i', 2106 + field3 & TRB_ENT ? 'E' : 'e', 2107 + field3 & TRB_CYCLE ? 'C' : 'c'); 2108 + break; 2100 2109 case TRB_CMD_NOOP: 2101 2110 case TRB_ENABLE_SLOT: 2102 2111 snprintf(str, size,
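The new xhci_for_each_ring_seg() macro above walks a circular singly linked segment list exactly once, terminating when the next pointer would wrap back to the head. A standalone re-creation of that loop shape; `struct seg` and the helper below are toys, not the kernel's xhci_segment:

```c
#include <stddef.h>

/* Toy circular segment list mirroring xhci_for_each_ring_seg(): visit
 * every node once, stopping when ->next points back at the head. */
struct seg { struct seg *next; };

#define for_each_ring_seg(head, seg) \
	for (seg = (head); seg != NULL; \
	     seg = ((seg)->next != (head) ? (seg)->next : NULL))

static int count_segs(struct seg *head)
{
	struct seg *s;
	int n = 0;

	for_each_ring_seg(head, s)
		n++;
	return n;
}

/* Builds a 3-node ring and a self-linked 1-node ring; encodes both
 * counts as tens and units so a single return value covers both. */
static int demo_counts(void)
{
	struct seg a, b, c;
	struct seg solo = { .next = &solo };

	a.next = &b;
	b.next = &c;
	c.next = &a;
	return count_segs(&a) * 10 + count_segs(&solo);
}
```

This shape replaces the older open-coded `do { ... seg = seg->next; } while (seg != ring->first_seg)` loops removed from xhci.c in this series.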
+1 -1
drivers/usb/isp1760/isp1760-if.c
··· 263 263 264 264 static struct platform_driver isp1760_plat_driver = { 265 265 .probe = isp1760_plat_probe, 266 - .remove_new = isp1760_plat_remove, 266 + .remove = isp1760_plat_remove, 267 267 .driver = { 268 268 .name = "isp1760", 269 269 .of_match_table = of_match_ptr(isp1760_of_match),
+24 -11
drivers/usb/misc/chaoskey.c
··· 27 27 static int chaoskey_rng_read(struct hwrng *rng, void *data, 28 28 size_t max, bool wait); 29 29 30 + static DEFINE_MUTEX(chaoskey_list_lock); 31 + 30 32 #define usb_dbg(usb_if, format, arg...) \ 31 33 dev_dbg(&(usb_if)->dev, format, ## arg) 32 34 ··· 235 233 usb_deregister_dev(interface, &chaoskey_class); 236 234 237 235 usb_set_intfdata(interface, NULL); 236 + mutex_lock(&chaoskey_list_lock); 238 237 mutex_lock(&dev->lock); 239 238 240 239 dev->present = false; ··· 247 244 } else 248 245 mutex_unlock(&dev->lock); 249 246 247 + mutex_unlock(&chaoskey_list_lock); 250 248 usb_dbg(interface, "disconnect done"); 251 249 } 252 250 ··· 255 251 { 256 252 struct chaoskey *dev; 257 253 struct usb_interface *interface; 254 + int rv = 0; 258 255 259 256 /* get the interface from minor number and driver information */ 260 257 interface = usb_find_interface(&chaoskey_driver, iminor(inode)); ··· 271 266 } 272 267 273 268 file->private_data = dev; 269 + mutex_lock(&chaoskey_list_lock); 274 270 mutex_lock(&dev->lock); 275 - ++dev->open; 271 + if (dev->present) 272 + ++dev->open; 273 + else 274 + rv = -ENODEV; 276 275 mutex_unlock(&dev->lock); 276 + mutex_unlock(&chaoskey_list_lock); 277 277 278 - usb_dbg(interface, "open success"); 279 - return 0; 278 + return rv; 280 279 } 281 280 282 281 static int chaoskey_release(struct inode *inode, struct file *file) 283 282 { 284 283 struct chaoskey *dev = file->private_data; 285 284 struct usb_interface *interface; 285 + int rv = 0; 286 286 287 287 if (dev == NULL) 288 288 return -ENODEV; ··· 296 286 297 287 usb_dbg(interface, "release"); 298 288 289 + mutex_lock(&chaoskey_list_lock); 299 290 mutex_lock(&dev->lock); 300 291 301 292 usb_dbg(interface, "open count at release is %d", dev->open); 302 293 303 294 if (dev->open <= 0) { 304 295 usb_dbg(interface, "invalid open count (%d)", dev->open); 305 - mutex_unlock(&dev->lock); 306 - return -ENODEV; 296 + rv = -ENODEV; 297 + goto bail; 307 298 } 308 299 309 300 --dev->open; ··· 313 
302 if (dev->open == 0) { 314 303 mutex_unlock(&dev->lock); 315 304 chaoskey_free(dev); 316 - } else 317 - mutex_unlock(&dev->lock); 318 - } else 319 - mutex_unlock(&dev->lock); 320 - 305 + goto destruction; 306 + } 307 + } 308 + bail: 309 + mutex_unlock(&dev->lock); 310 + destruction: 311 + mutex_unlock(&chaoskey_list_lock); 321 312 usb_dbg(interface, "release success"); 322 - return 0; 313 + return rv; 323 314 } 324 315 325 316 static void chaos_read_callback(struct urb *urb)
+36 -14
drivers/usb/misc/iowarrior.c
··· 277 277 struct iowarrior *dev; 278 278 int read_idx; 279 279 int offset; 280 + int retval; 280 281 281 282 dev = file->private_data; 282 283 284 + if (file->f_flags & O_NONBLOCK) { 285 + retval = mutex_trylock(&dev->mutex); 286 + if (!retval) 287 + return -EAGAIN; 288 + } else { 289 + retval = mutex_lock_interruptible(&dev->mutex); 290 + if (retval) 291 + return -ERESTARTSYS; 292 + } 293 + 283 294 /* verify that the device wasn't unplugged */ 284 - if (!dev || !dev->present) 285 - return -ENODEV; 295 + if (!dev->present) { 296 + retval = -ENODEV; 297 + goto exit; 298 + } 286 299 287 300 dev_dbg(&dev->interface->dev, "minor %d, count = %zd\n", 288 301 dev->minor, count); 289 302 290 303 /* read count must be packet size (+ time stamp) */ 291 304 if ((count != dev->report_size) 292 - && (count != (dev->report_size + 1))) 293 - return -EINVAL; 305 + && (count != (dev->report_size + 1))) { 306 + retval = -EINVAL; 307 + goto exit; 308 + } 294 309 295 310 /* repeat until no buffer overrun in callback handler occur */ 296 311 do { 297 312 atomic_set(&dev->overflow_flag, 0); 298 313 if ((read_idx = read_index(dev)) == -1) { 299 314 /* queue empty */ 300 - if (file->f_flags & O_NONBLOCK) 301 - return -EAGAIN; 315 + if (file->f_flags & O_NONBLOCK) { 316 + retval = -EAGAIN; 317 + goto exit; 318 + } 302 319 else { 303 320 //next line will return when there is either new data, or the device is unplugged 304 321 int r = wait_event_interruptible(dev->read_wait, ··· 326 309 -1)); 327 310 if (r) { 328 311 //we were interrupted by a signal 329 - return -ERESTART; 312 + retval = -ERESTART; 313 + goto exit; 330 314 } 331 315 if (!dev->present) { 332 316 //The device was unplugged 333 - return -ENODEV; 317 + retval = -ENODEV; 318 + goto exit; 334 319 } 335 320 if (read_idx == -1) { 336 321 // Can this happen ??? 
337 - return 0; 322 + retval = 0; 323 + goto exit; 338 324 } 339 325 } 340 326 } 341 327 342 328 offset = read_idx * (dev->report_size + 1); 343 329 if (copy_to_user(buffer, dev->read_queue + offset, count)) { 344 - return -EFAULT; 330 + retval = -EFAULT; 331 + goto exit; 345 332 } 346 333 } while (atomic_read(&dev->overflow_flag)); 347 334 348 335 read_idx = ++read_idx == MAX_INTERRUPT_BUFFER ? 0 : read_idx; 349 336 atomic_set(&dev->read_idx, read_idx); 337 + mutex_unlock(&dev->mutex); 350 338 return count; 339 + 340 + exit: 341 + mutex_unlock(&dev->mutex); 342 + return retval; 351 343 } 352 344 353 345 /* ··· 911 885 static void iowarrior_disconnect(struct usb_interface *interface) 912 886 { 913 887 struct iowarrior *dev = usb_get_intfdata(interface); 914 - int minor = dev->minor; 915 888 916 889 usb_deregister_dev(interface, &iowarrior_class); 917 890 ··· 934 909 mutex_unlock(&dev->mutex); 935 910 iowarrior_delete(dev); 936 911 } 937 - 938 - dev_info(&interface->dev, "I/O-Warror #%d now disconnected\n", 939 - minor - IOWARRIOR_MINOR_BASE); 940 912 } 941 913 942 914 /* usb specific object needed to register this driver with the usb subsystem */
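The reworked iowarrior read path above distinguishes O_NONBLOCK callers, which try the mutex and bail out with -EAGAIN if it is contended, from blocking callers that take it interruptibly. A hedged pthreads sketch of that locking split; `acquire()`/`release()` are invented helpers, not driver code:

```c
#include <pthread.h>
#include <errno.h>

static pthread_mutex_t dev_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Nonblocking callers (O_NONBLOCK) must not sleep on the lock: try it
 * and report -EAGAIN when it is already held. Blocking callers just
 * take it (the kernel version uses mutex_lock_interruptible()). */
static int acquire(int nonblock)
{
	if (nonblock) {
		if (pthread_mutex_trylock(&dev_mutex) != 0)
			return -EAGAIN;	/* lock busy: would block */
		return 0;
	}
	pthread_mutex_lock(&dev_mutex);
	return 0;
}

static void release(void)
{
	pthread_mutex_unlock(&dev_mutex);
}
```

The driver pairs every early return with the matching unlock via `goto exit`, which is what the converted error paths in the hunk above implement.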
+1 -1
drivers/usb/misc/onboard_usb_dev.c
··· 473 473 474 474 static struct platform_driver onboard_dev_driver = { 475 475 .probe = onboard_dev_probe, 476 - .remove_new = onboard_dev_remove, 476 + .remove = onboard_dev_remove, 477 477 478 478 .driver = { 479 479 .name = "onboard-usb-dev",
+1 -1
drivers/usb/misc/qcom_eud.c
··· 239 239 240 240 static struct platform_driver eud_driver = { 241 241 .probe = eud_probe, 242 - .remove_new = eud_remove, 242 + .remove = eud_remove, 243 243 .driver = { 244 244 .name = "qcom_eud", 245 245 .dev_groups = eud_groups,
+14 -6
drivers/usb/misc/usb-ljca.c
··· 332 332 333 333 ret = usb_bulk_msg(adap->usb_dev, adap->tx_pipe, header, 334 334 msg_len, &transferred, LJCA_WRITE_TIMEOUT_MS); 335 - 336 - usb_autopm_put_interface(adap->intf); 337 - 338 335 if (ret < 0) 339 - goto out; 336 + goto out_put; 340 337 if (transferred != msg_len) { 341 338 ret = -EIO; 342 - goto out; 339 + goto out_put; 343 340 } 344 341 345 342 if (ack) { ··· 344 347 timeout); 345 348 if (!ret) { 346 349 ret = -ETIMEDOUT; 347 - goto out; 350 + goto out_put; 348 351 } 349 352 } 350 353 ret = adap->actual_length; 354 + 355 + out_put: 356 + usb_autopm_put_interface(adap->intf); 351 357 352 358 out: 353 359 spin_lock_irqsave(&adap->lock, flags); ··· 810 810 ret = ljca_enumerate_clients(adap); 811 811 if (ret) 812 812 goto err_free; 813 + 814 + /* 815 + * This works around problems with ov2740 initialization on some 816 + * Lenovo platforms. The autosuspend delay, has to be smaller than 817 + * the delay after setting the reset_gpio line in ov2740_resume(). 818 + * Otherwise the sensor randomly fails to initialize. 819 + */ 820 + pm_runtime_set_autosuspend_delay(&usb_dev->dev, 10); 813 821 814 822 usb_enable_autosuspend(usb_dev); 815 823
+1 -1
drivers/usb/misc/usb3503.c
··· 423 423 .pm = pm_ptr(&usb3503_platform_pm_ops), 424 424 }, 425 425 .probe = usb3503_platform_probe, 426 - .remove_new = usb3503_platform_remove, 426 + .remove = usb3503_platform_remove, 427 427 }; 428 428 429 429 static int __init usb3503_init(void)
+2 -1
drivers/usb/misc/usbtest.c
··· 2021 2021 2022 2022 for (i = 0; i < packets; i++) { 2023 2023 /* here, only the last packet will be short */ 2024 - urb->iso_frame_desc[i].length = min((unsigned) bytes, maxp); 2024 + urb->iso_frame_desc[i].length = min_t(unsigned int, 2025 + bytes, maxp); 2025 2026 bytes -= urb->iso_frame_desc[i].length; 2026 2027 2027 2028 urb->iso_frame_desc[i].offset = maxp * i;
+4 -1
drivers/usb/misc/yurex.c
··· 441 441 if (count == 0) 442 442 goto error; 443 443 444 - mutex_lock(&dev->io_mutex); 444 + retval = mutex_lock_interruptible(&dev->io_mutex); 445 + if (retval < 0) 446 + return -EINTR; 447 + 445 448 if (dev->disconnected) { /* already disconnected */ 446 449 mutex_unlock(&dev->io_mutex); 447 450 retval = -ENODEV;
+1 -1
drivers/usb/mon/mon_bin.c
··· 823 823 ep = MON_OFF2HDR(rp, rp->b_out); 824 824 825 825 if (rp->b_read < hdrbytes) { 826 - step_len = min(nbytes, (size_t)(hdrbytes - rp->b_read)); 826 + step_len = min_t(size_t, nbytes, hdrbytes - rp->b_read); 827 827 ptr = ((char *)ep) + rp->b_read; 828 828 if (step_len && copy_to_user(buf, ptr, step_len)) { 829 829 mutex_unlock(&rp->fetch_lock);
+2 -2
drivers/usb/mtu3/mtu3_plat.c
··· 307 307 if (otg_sx->role_sw_used || otg_sx->manual_drd_enabled) 308 308 goto out; 309 309 310 - if (of_property_read_bool(node, "extcon")) { 310 + if (of_property_present(node, "extcon")) { 311 311 otg_sx->edev = extcon_get_edev_by_phandle(ssusb->dev, 0); 312 312 if (IS_ERR(otg_sx->edev)) { 313 313 return dev_err_probe(dev, PTR_ERR(otg_sx->edev), ··· 621 621 622 622 static struct platform_driver mtu3_driver = { 623 623 .probe = mtu3_probe, 624 - .remove_new = mtu3_remove, 624 + .remove = mtu3_remove, 625 625 .driver = { 626 626 .name = MTU3_DRIVER_NAME, 627 627 .pm = DEV_PM_OPS,
+1 -1
drivers/usb/musb/da8xx.c
··· 633 633 634 634 static struct platform_driver da8xx_driver = { 635 635 .probe = da8xx_probe, 636 - .remove_new = da8xx_remove, 636 + .remove = da8xx_remove, 637 637 .driver = { 638 638 .name = "musb-da8xx", 639 639 .pm = &da8xx_pm_ops,
+1 -1
drivers/usb/musb/jz4740.c
··· 325 325 326 326 static struct platform_driver jz4740_driver = { 327 327 .probe = jz4740_probe, 328 - .remove_new = jz4740_remove, 328 + .remove = jz4740_remove, 329 329 .driver = { 330 330 .name = "musb-jz4740", 331 331 .of_match_table = jz4740_musb_of_match,
+1 -1
drivers/usb/musb/mediatek.c
··· 523 523 524 524 static struct platform_driver mtk_musb_driver = { 525 525 .probe = mtk_musb_probe, 526 - .remove_new = mtk_musb_remove, 526 + .remove = mtk_musb_remove, 527 527 .driver = { 528 528 .name = "musb-mtk", 529 529 .of_match_table = of_match_ptr(mtk_musb_match),
+1 -1
drivers/usb/musb/mpfs.c
··· 369 369 370 370 static struct platform_driver mpfs_musb_driver = { 371 371 .probe = mpfs_probe, 372 - .remove_new = mpfs_remove, 372 + .remove = mpfs_remove, 373 373 .driver = { 374 374 .name = "mpfs-musb", 375 375 .of_match_table = of_match_ptr(mpfs_id_table)
+2 -2
drivers/usb/musb/musb_core.c
··· 1387 1387 1388 1388 /* expect hw_ep has already been zero-initialized */ 1389 1389 1390 - size = ffs(max(maxpacket, (u16) 8)) - 1; 1390 + size = ffs(max_t(u16, maxpacket, 8)) - 1; 1391 1391 maxpacket = 1 << size; 1392 1392 1393 1393 c_size = size - 3; ··· 2953 2953 .dev_groups = musb_groups, 2954 2954 }, 2955 2955 .probe = musb_probe, 2956 - .remove_new = musb_remove, 2956 + .remove = musb_remove, 2957 2957 }; 2958 2958 2959 2959 module_platform_driver(musb_driver);
+1 -1
drivers/usb/musb/musb_dsps.c
··· 1032 1032 1033 1033 static struct platform_driver dsps_usbss_driver = { 1034 1034 .probe = dsps_probe, 1035 - .remove_new = dsps_remove, 1035 + .remove = dsps_remove, 1036 1036 .driver = { 1037 1037 .name = "musb-dsps", 1038 1038 .pm = &dsps_pm_ops,
+10 -3
drivers/usb/musb/musb_gadget.c
··· 1161 1161 */ 1162 1162 void musb_ep_restart(struct musb *musb, struct musb_request *req) 1163 1163 { 1164 + u16 csr; 1165 + void __iomem *epio = req->ep->hw_ep->regs; 1166 + 1164 1167 trace_musb_req_start(req); 1165 1168 musb_ep_select(musb->mregs, req->epnum); 1166 - if (req->tx) 1169 + if (req->tx) { 1167 1170 txstate(musb, req); 1168 - else 1169 - rxstate(musb, req); 1171 + } else { 1172 + csr = musb_readw(epio, MUSB_RXCSR); 1173 + csr |= MUSB_RXCSR_FLUSHFIFO | MUSB_RXCSR_P_WZC_BITS; 1174 + musb_writew(epio, MUSB_RXCSR, csr); 1175 + musb_writew(epio, MUSB_RXCSR, csr); 1176 + } 1170 1177 } 1171 1178 1172 1179 static int musb_ep_restart_resume_work(struct musb *musb, void *data)
+1 -1
drivers/usb/musb/musb_gadget_ep0.c
··· 533 533 534 534 /* load the data */ 535 535 fifo_src = (u8 *) request->buf + request->actual; 536 - fifo_count = min((unsigned) MUSB_EP0_FIFOSIZE, 536 + fifo_count = min_t(unsigned, MUSB_EP0_FIFOSIZE, 537 537 request->length - request->actual); 538 538 musb_write_fifo(&musb->endpoints[0], fifo_count, fifo_src); 539 539 request->actual += fifo_count;
+2 -3
drivers/usb/musb/musb_host.c
··· 798 798 } 799 799 800 800 if (can_bulk_split(musb, qh->type)) 801 - load_count = min((u32) hw_ep->max_packet_sz_tx, 802 - len); 801 + load_count = min_t(u32, hw_ep->max_packet_sz_tx, len); 803 802 else 804 - load_count = min((u32) packet_sz, len); 803 + load_count = min_t(u32, packet_sz, len); 805 804 806 805 if (dma_channel && musb_tx_dma_program(dma_controller, 807 806 hw_ep, qh, urb, offset, len))
+1 -1
drivers/usb/musb/omap2430.c
··· 608 608 609 609 static struct platform_driver omap2430_driver = { 610 610 .probe = omap2430_probe, 611 - .remove_new = omap2430_remove, 611 + .remove = omap2430_remove, 612 612 .driver = { 613 613 .name = "musb-omap2430", 614 614 .pm = DEV_PM_OPS,
+1 -1
drivers/usb/musb/sunxi.c
··· 857 857 858 858 static struct platform_driver sunxi_musb_driver = { 859 859 .probe = sunxi_musb_probe, 860 - .remove_new = sunxi_musb_remove, 860 + .remove = sunxi_musb_remove, 861 861 .driver = { 862 862 .name = "musb-sunxi", 863 863 .of_match_table = sunxi_musb_match,
+1 -1
drivers/usb/musb/tusb6010.c
··· 1290 1290 1291 1291 static struct platform_driver tusb_driver = { 1292 1292 .probe = tusb_probe, 1293 - .remove_new = tusb_remove, 1293 + .remove = tusb_remove, 1294 1294 .driver = { 1295 1295 .name = "musb-tusb", 1296 1296 },
+1 -1
drivers/usb/musb/ux500.c
··· 355 355 356 356 static struct platform_driver ux500_driver = { 357 357 .probe = ux500_probe, 358 - .remove_new = ux500_remove, 358 + .remove = ux500_remove, 359 359 .driver = { 360 360 .name = "musb-ux500", 361 361 .pm = &ux500_pm_ops,
+1 -1
drivers/usb/phy/phy-ab8500-usb.c
··· 987 987 988 988 static struct platform_driver ab8500_usb_driver = { 989 989 .probe = ab8500_usb_probe, 990 - .remove_new = ab8500_usb_remove, 990 + .remove = ab8500_usb_remove, 991 991 .id_table = ab8500_usb_devtype, 992 992 .driver = { 993 993 .name = "abx5x0-usb",
+1 -1
drivers/usb/phy/phy-am335x.c
··· 133 133 134 134 static struct platform_driver am335x_phy_driver = { 135 135 .probe = am335x_phy_probe, 136 - .remove_new = am335x_phy_remove, 136 + .remove = am335x_phy_remove, 137 137 .driver = { 138 138 .name = "am335x-phy-driver", 139 139 .pm = &am335x_pm_ops,
+1 -1
drivers/usb/phy/phy-fsl-usb.c
··· 1002 1002 1003 1003 struct platform_driver fsl_otg_driver = { 1004 1004 .probe = fsl_otg_probe, 1005 - .remove_new = fsl_otg_remove, 1005 + .remove = fsl_otg_remove, 1006 1006 .driver = { 1007 1007 .name = driver_name, 1008 1008 },
+1 -1
drivers/usb/phy/phy-generic.c
··· 345 345 346 346 static struct platform_driver usb_phy_generic_driver = { 347 347 .probe = usb_phy_generic_probe, 348 - .remove_new = usb_phy_generic_remove, 348 + .remove = usb_phy_generic_remove, 349 349 .driver = { 350 350 .name = "usb_phy_generic", 351 351 .of_match_table = nop_xceiv_dt_ids,
+1 -1
drivers/usb/phy/phy-gpio-vbus-usb.c
··· 385 385 .of_match_table = gpio_vbus_of_match, 386 386 }, 387 387 .probe = gpio_vbus_probe, 388 - .remove_new = gpio_vbus_remove, 388 + .remove = gpio_vbus_remove, 389 389 }; 390 390 391 391 module_platform_driver(gpio_vbus_driver);
+1 -1
drivers/usb/phy/phy-isp1301.c
··· 25 25 #define phy_to_isp(p) (container_of((p), struct isp1301, phy)) 26 26 27 27 static const struct i2c_device_id isp1301_id[] = { 28 - { "isp1301", 0 }, 28 + { "isp1301" }, 29 29 { } 30 30 }; 31 31 MODULE_DEVICE_TABLE(i2c, isp1301_id);
+1 -1
drivers/usb/phy/phy-keystone.c
··· 103 103 104 104 static struct platform_driver keystone_usbphy_driver = { 105 105 .probe = keystone_usbphy_probe, 106 - .remove_new = keystone_usbphy_remove, 106 + .remove = keystone_usbphy_remove, 107 107 .driver = { 108 108 .name = "keystone-usbphy", 109 109 .of_match_table = keystone_usbphy_ids,
+1 -1
drivers/usb/phy/phy-mv-usb.c
··· 867 867 868 868 static struct platform_driver mv_otg_driver = { 869 869 .probe = mv_otg_probe, 870 - .remove_new = mv_otg_remove, 870 + .remove = mv_otg_remove, 871 871 .driver = { 872 872 .name = driver_name, 873 873 .dev_groups = mv_otg_groups,
+1 -1
drivers/usb/phy/phy-mxs-usb.c
··· 952 952 953 953 static struct platform_driver mxs_phy_driver = { 954 954 .probe = mxs_phy_probe, 955 - .remove_new = mxs_phy_remove, 955 + .remove = mxs_phy_remove, 956 956 .driver = { 957 957 .name = DRIVER_NAME, 958 958 .of_match_table = mxs_phy_dt_ids,
+1 -1
drivers/usb/phy/phy-tahvo.c
··· 424 424 425 425 static struct platform_driver tahvo_usb_driver = { 426 426 .probe = tahvo_usb_probe, 427 - .remove_new = tahvo_usb_remove, 427 + .remove = tahvo_usb_remove, 428 428 .driver = { 429 429 .name = "tahvo-usb", 430 430 .dev_groups = tahvo_groups,
+1 -1
drivers/usb/phy/phy-tegra-usb.c
··· 1495 1495 1496 1496 static struct platform_driver tegra_usb_phy_driver = { 1497 1497 .probe = tegra_usb_phy_probe, 1498 - .remove_new = tegra_usb_phy_remove, 1498 + .remove = tegra_usb_phy_remove, 1499 1499 .driver = { 1500 1500 .name = "tegra-phy", 1501 1501 .of_match_table = tegra_usb_phy_id_table,
+1 -1
drivers/usb/phy/phy-twl6030-usb.c
··· 432 432 433 433 static struct platform_driver twl6030_usb_driver = { 434 434 .probe = twl6030_usb_probe, 435 - .remove_new = twl6030_usb_remove, 435 + .remove = twl6030_usb_remove, 436 436 .driver = { 437 437 .name = "twl6030_usb", 438 438 .of_match_table = of_match_ptr(twl6030_usb_id_table),
+1 -1
drivers/usb/phy/phy.c
··· 365 365 { 366 366 int ret; 367 367 368 - if (of_property_read_bool(x->dev->of_node, "extcon")) { 368 + if (of_property_present(x->dev->of_node, "extcon")) { 369 369 x->edev = extcon_get_edev_by_phandle(x->dev, 0); 370 370 if (IS_ERR(x->edev)) 371 371 return PTR_ERR(x->edev);
+2 -2
drivers/usb/renesas_usbhs/common.c
··· 632 632 if (IS_ERR(priv->base)) 633 633 return PTR_ERR(priv->base); 634 634 635 - if (of_property_read_bool(dev_of_node(dev), "extcon")) { 635 + if (of_property_present(dev_of_node(dev), "extcon")) { 636 636 priv->edev = extcon_get_edev_by_phandle(dev, 0); 637 637 if (IS_ERR(priv->edev)) 638 638 return PTR_ERR(priv->edev); ··· 835 835 .of_match_table = usbhs_of_match, 836 836 }, 837 837 .probe = usbhs_probe, 838 - .remove_new = usbhs_remove, 838 + .remove = usbhs_remove, 839 839 }; 840 840 841 841 module_platform_driver(renesas_usbhs_driver);
+1 -1
drivers/usb/roles/intel-xhci-usb-role-switch.c
··· 217 217 }, 218 218 .id_table = intel_xhci_usb_table, 219 219 .probe = intel_xhci_usb_probe, 220 - .remove_new = intel_xhci_usb_remove, 220 + .remove = intel_xhci_usb_remove, 221 221 }; 222 222 223 223 module_platform_driver(intel_xhci_usb_driver);
+1 -3
drivers/usb/serial/bus.c
··· 136 136 { 137 137 struct usb_dynid *dynid, *n; 138 138 139 - spin_lock(&drv->dynids.lock); 139 + guard(mutex)(&usb_dynids_lock); 140 140 list_for_each_entry_safe(dynid, n, &drv->dynids.list, node) { 141 141 list_del(&dynid->node); 142 142 kfree(dynid); 143 143 } 144 - spin_unlock(&drv->dynids.lock); 145 144 } 146 145 147 146 const struct bus_type usb_serial_bus_type = { ··· 156 157 int retval; 157 158 158 159 driver->driver.bus = &usb_serial_bus_type; 159 - spin_lock_init(&driver->dynids.lock); 160 160 INIT_LIST_HEAD(&driver->dynids.list); 161 161 162 162 retval = driver_register(&driver->driver);
+2
drivers/usb/serial/ftdi_sio.c
··· 1443 1443 struct usb_serial_port *port = tty->driver_data; 1444 1444 struct ftdi_private *priv = usb_get_serial_port_data(port); 1445 1445 1446 + mutex_lock(&priv->cfg_lock); 1446 1447 ss->flags = priv->flags; 1447 1448 ss->baud_base = priv->baud_base; 1448 1449 ss->custom_divisor = priv->custom_divisor; 1450 + mutex_unlock(&priv->cfg_lock); 1449 1451 } 1450 1452 1451 1453 static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
+1 -1
drivers/usb/serial/io_edgeport.c
··· 1129 1129 spin_lock_irqsave(&edge_port->ep_lock, flags); 1130 1130 1131 1131 /* calculate number of bytes to put in fifo */ 1132 - copySize = min((unsigned int)count, 1132 + copySize = min_t(unsigned int, count, 1133 1133 (edge_port->txCredits - fifo->count)); 1134 1134 1135 1135 dev_dbg(&port->dev, "%s of %d byte(s) Fifo room %d -- will copy %d bytes\n",
+37 -1
drivers/usb/serial/pl2303.c
··· 31 31 #define PL2303_QUIRK_UART_STATE_IDX0 BIT(0) 32 32 #define PL2303_QUIRK_LEGACY BIT(1) 33 33 #define PL2303_QUIRK_ENDPOINT_HACK BIT(2) 34 + #define PL2303_QUIRK_NO_BREAK_GETLINE BIT(3) 34 35 35 36 static const struct usb_device_id id_table[] = { 36 37 { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID), ··· 468 467 return -ENODEV; 469 468 } 470 469 470 + static bool pl2303_is_hxd_clone(struct usb_serial *serial) 471 + { 472 + struct usb_device *udev = serial->dev; 473 + unsigned char *buf; 474 + int ret; 475 + 476 + buf = kmalloc(7, GFP_KERNEL); 477 + if (!buf) 478 + return false; 479 + 480 + ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 481 + GET_LINE_REQUEST, GET_LINE_REQUEST_TYPE, 482 + 0, 0, buf, 7, 100); 483 + 484 + kfree(buf); 485 + 486 + return ret == -EPIPE; 487 + } 488 + 471 489 static int pl2303_startup(struct usb_serial *serial) 472 490 { 473 491 struct pl2303_serial_private *spriv; ··· 508 488 spriv->type = &pl2303_type_data[type]; 509 489 spriv->quirks = (unsigned long)usb_get_serial_data(serial); 510 490 spriv->quirks |= spriv->type->quirks; 491 + 492 + if (type == TYPE_HXD && pl2303_is_hxd_clone(serial)) 493 + spriv->quirks |= PL2303_QUIRK_NO_BREAK_GETLINE; 511 494 512 495 usb_set_serial_data(serial, spriv); 513 496 ··· 748 725 static int pl2303_get_line_request(struct usb_serial_port *port, 749 726 unsigned char buf[7]) 750 727 { 751 - struct usb_device *udev = port->serial->dev; 728 + struct usb_serial *serial = port->serial; 729 + struct pl2303_serial_private *spriv = usb_get_serial_data(serial); 730 + struct usb_device *udev = serial->dev; 752 731 int ret; 732 + 733 + if (spriv->quirks & PL2303_QUIRK_NO_BREAK_GETLINE) { 734 + struct pl2303_private *priv = usb_get_serial_port_data(port); 735 + 736 + memcpy(buf, priv->line_settings, 7); 737 + return 0; 738 + } 753 739 754 740 ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 755 741 GET_LINE_REQUEST, GET_LINE_REQUEST_TYPE, ··· 1096 1064 static int pl2303_set_break(struct 
usb_serial_port *port, bool enable) 1097 1065 { 1098 1066 struct usb_serial *serial = port->serial; 1067 + struct pl2303_serial_private *spriv = usb_get_serial_data(serial); 1099 1068 u16 state; 1100 1069 int result; 1070 + 1071 + if (spriv->quirks & PL2303_QUIRK_NO_BREAK_GETLINE) 1072 + return -ENOTTY; 1101 1073 1102 1074 if (enable) 1103 1075 state = BREAK_ON;
+1 -1
drivers/usb/serial/sierra.c
··· 421 421 unsigned long flags; 422 422 unsigned char *buffer; 423 423 struct urb *urb; 424 - size_t writesize = min((size_t)count, (size_t)MAX_TRANSFER); 424 + size_t writesize = min_t(size_t, count, MAX_TRANSFER); 425 425 int retval = 0; 426 426 427 427 /* verify that we actually have some data to write */
+1 -3
drivers/usb/serial/usb-serial.c
··· 706 706 { 707 707 struct usb_dynid *dynid; 708 708 709 - spin_lock(&drv->dynids.lock); 709 + guard(mutex)(&usb_dynids_lock); 710 710 list_for_each_entry(dynid, &drv->dynids.list, node) { 711 711 if (usb_match_one_id(intf, &dynid->id)) { 712 - spin_unlock(&drv->dynids.lock); 713 712 return &dynid->id; 714 713 } 715 714 } 716 - spin_unlock(&drv->dynids.lock); 717 715 return NULL; 718 716 } 719 717
+4 -4
drivers/usb/storage/ene_ub6250.c
··· 737 737 memset(bcb, 0, sizeof(struct bulk_cb_wrap)); 738 738 bcb->Signature = cpu_to_le32(US_BULK_CB_SIGN); 739 739 bcb->DataTransferLength = blenByte; 740 - bcb->Flags = 0x00; 740 + bcb->Flags = US_BULK_FLAG_OUT; 741 741 bcb->CDB[0] = 0xF0; 742 742 bcb->CDB[5] = (unsigned char)(bnByte); 743 743 bcb->CDB[4] = (unsigned char)(bnByte>>8); ··· 1163 1163 memset(bcb, 0, sizeof(struct bulk_cb_wrap)); 1164 1164 bcb->Signature = cpu_to_le32(US_BULK_CB_SIGN); 1165 1165 bcb->DataTransferLength = 0x200*len; 1166 - bcb->Flags = 0x00; 1166 + bcb->Flags = US_BULK_FLAG_OUT; 1167 1167 bcb->CDB[0] = 0xF0; 1168 1168 bcb->CDB[1] = 0x08; 1169 1169 bcb->CDB[4] = (unsigned char)(oldphy); ··· 1759 1759 memset(bcb, 0, sizeof(struct bulk_cb_wrap)); 1760 1760 bcb->Signature = cpu_to_le32(US_BULK_CB_SIGN); 1761 1761 bcb->DataTransferLength = blenByte; 1762 - bcb->Flags = 0x00; 1762 + bcb->Flags = US_BULK_FLAG_OUT; 1763 1763 bcb->CDB[0] = 0xF0; 1764 1764 bcb->CDB[1] = 0x04; 1765 1765 bcb->CDB[5] = (unsigned char)(bn); ··· 1931 1931 memset(bcb, 0, sizeof(struct bulk_cb_wrap)); 1932 1932 bcb->Signature = cpu_to_le32(US_BULK_CB_SIGN); 1933 1933 bcb->DataTransferLength = sd_fw->size; 1934 - bcb->Flags = 0x00; 1934 + bcb->Flags = US_BULK_FLAG_OUT; 1935 1935 bcb->CDB[0] = 0xEF; 1936 1936 1937 1937 result = ene_send_scsi_cmd(us, FDIR_WRITE, buf, 0);
+2 -2
drivers/usb/storage/realtek_cr.c
··· 212 212 /* set up the command wrapper */ 213 213 bcb->Signature = cpu_to_le32(US_BULK_CB_SIGN); 214 214 bcb->DataTransferLength = cpu_to_le32(buf_len); 215 - bcb->Flags = (dir == DMA_FROM_DEVICE) ? US_BULK_FLAG_IN : 0; 215 + bcb->Flags = (dir == DMA_FROM_DEVICE) ? US_BULK_FLAG_IN : US_BULK_FLAG_OUT; 216 216 bcb->Tag = ++us->tag; 217 217 bcb->Lun = lun; 218 218 bcb->Length = cmd_len; ··· 301 301 /* set up the command wrapper */ 302 302 bcb->Signature = cpu_to_le32(US_BULK_CB_SIGN); 303 303 bcb->DataTransferLength = cpu_to_le32(buf_len); 304 - bcb->Flags = (dir == DMA_FROM_DEVICE) ? US_BULK_FLAG_IN : 0; 304 + bcb->Flags = (dir == DMA_FROM_DEVICE) ? US_BULK_FLAG_IN : US_BULK_FLAG_OUT; 305 305 bcb->Tag = ++us->tag; 306 306 bcb->Lun = lun; 307 307 bcb->Length = cmd_len;
+2 -2
drivers/usb/storage/sddr09.c
··· 752 752 // a bounce buffer and move the data a piece at a time between the 753 753 // bounce buffer and the actual transfer buffer. 754 754 755 - len = min(sectors, (unsigned int) info->blocksize) * info->pagesize; 755 + len = min_t(unsigned int, sectors, info->blocksize) * info->pagesize; 756 756 buffer = kmalloc(len, GFP_NOIO); 757 757 if (!buffer) 758 758 return -ENOMEM; ··· 997 997 * at a time between the bounce buffer and the actual transfer buffer. 998 998 */ 999 999 1000 - len = min(sectors, (unsigned int) info->blocksize) * info->pagesize; 1000 + len = min_t(unsigned int, sectors, info->blocksize) * info->pagesize; 1001 1001 buffer = kmalloc(len, GFP_NOIO); 1002 1002 if (!buffer) { 1003 1003 kfree(blockbuffer);
+4 -4
drivers/usb/storage/sddr55.c
··· 206 206 // a bounce buffer and move the data a piece at a time between the 207 207 // bounce buffer and the actual transfer buffer. 208 208 209 - len = min((unsigned int) sectors, (unsigned int) info->blocksize >> 209 + len = min_t(unsigned int, sectors, info->blocksize >> 210 210 info->smallpageshift) * PAGESIZE; 211 211 buffer = kmalloc(len, GFP_NOIO); 212 212 if (buffer == NULL) ··· 224 224 225 225 // Read as many sectors as possible in this block 226 226 227 - pages = min((unsigned int) sectors << info->smallpageshift, 227 + pages = min_t(unsigned int, sectors << info->smallpageshift, 228 228 info->blocksize - page); 229 229 len = pages << info->pageshift; 230 230 ··· 333 333 // a bounce buffer and move the data a piece at a time between the 334 334 // bounce buffer and the actual transfer buffer. 335 335 336 - len = min((unsigned int) sectors, (unsigned int) info->blocksize >> 336 + len = min_t(unsigned int, sectors, info->blocksize >> 337 337 info->smallpageshift) * PAGESIZE; 338 338 buffer = kmalloc(len, GFP_NOIO); 339 339 if (buffer == NULL) ··· 351 351 352 352 // Write as many sectors as possible in this block 353 353 354 - pages = min((unsigned int) sectors << info->smallpageshift, 354 + pages = min_t(unsigned int, sectors << info->smallpageshift, 355 355 info->blocksize - page); 356 356 len = pages << info->pageshift; 357 357
+1 -1
drivers/usb/storage/transport.c
··· 1133 1133 bcb->Signature = cpu_to_le32(US_BULK_CB_SIGN); 1134 1134 bcb->DataTransferLength = cpu_to_le32(transfer_length); 1135 1135 bcb->Flags = srb->sc_data_direction == DMA_FROM_DEVICE ? 1136 - US_BULK_FLAG_IN : 0; 1136 + US_BULK_FLAG_IN : US_BULK_FLAG_OUT; 1137 1137 bcb->Tag = ++us->tag; 1138 1138 bcb->Lun = srb->device->lun; 1139 1139 if (us->fflags & US_FL_SCM_MULT_TARG)
+1 -1
drivers/usb/typec/altmodes/displayport.c
··· 729 729 730 730 /* FIXME: Port can only be DFP_U. */ 731 731 732 - /* Make sure we have compatiple pin configurations */ 732 + /* Make sure we have compatible pin configurations */ 733 733 if (!(DP_CAP_PIN_ASSIGN_DFP_D(port->vdo) & 734 734 DP_CAP_PIN_ASSIGN_UFP_D(alt->vdo)) && 735 735 !(DP_CAP_PIN_ASSIGN_UFP_D(port->vdo) &
+201 -4
drivers/usb/typec/class.c
··· 219 219 char *buf); 220 220 static DEVICE_ATTR_RO(usb_power_delivery_revision); 221 221 222 + static const char * const usb_modes[] = { 223 + [USB_MODE_NONE] = "none", 224 + [USB_MODE_USB2] = "usb2", 225 + [USB_MODE_USB3] = "usb3", 226 + [USB_MODE_USB4] = "usb4" 227 + }; 228 + 222 229 /* ------------------------------------------------------------------------- */ 223 230 /* Alternate Modes */ 224 231 ··· 621 614 /* ------------------------------------------------------------------------- */ 622 615 /* Type-C Partners */ 623 616 617 + /** 618 + * typec_partner_set_usb_mode - Assign active USB Mode for the partner 619 + * @partner: USB Type-C partner 620 + * @mode: USB Mode (USB2, USB3 or USB4) 621 + * 622 + * The port drivers can use this function to assign the active USB Mode to 623 + * @partner. The USB Mode can change for example due to Data Reset. 624 + */ 625 + void typec_partner_set_usb_mode(struct typec_partner *partner, enum usb_mode mode) 626 + { 627 + if (!partner || partner->usb_mode == mode) 628 + return; 629 + 630 + partner->usb_capability |= BIT(mode - 1); 631 + partner->usb_mode = mode; 632 + sysfs_notify(&partner->dev.kobj, NULL, "usb_mode"); 633 + } 634 + EXPORT_SYMBOL_GPL(typec_partner_set_usb_mode); 635 + 636 + static ssize_t 637 + usb_mode_show(struct device *dev, struct device_attribute *attr, char *buf) 638 + { 639 + struct typec_partner *partner = to_typec_partner(dev); 640 + int len = 0; 641 + int i; 642 + 643 + for (i = USB_MODE_USB2; i < USB_MODE_USB4 + 1; i++) { 644 + if (!(BIT(i - 1) & partner->usb_capability)) 645 + continue; 646 + 647 + if (i == partner->usb_mode) 648 + len += sysfs_emit_at(buf, len, "[%s] ", usb_modes[i]); 649 + else 650 + len += sysfs_emit_at(buf, len, "%s ", usb_modes[i]); 651 + } 652 + 653 + sysfs_emit_at(buf, len - 1, "\n"); 654 + 655 + return len; 656 + } 657 + 658 + static ssize_t usb_mode_store(struct device *dev, struct device_attribute *attr, 659 + const char *buf, size_t size) 660 + { 661 + struct 
typec_partner *partner = to_typec_partner(dev); 662 + struct typec_port *port = to_typec_port(dev->parent); 663 + int mode; 664 + int ret; 665 + 666 + if (!port->ops || !port->ops->enter_usb_mode) 667 + return -EOPNOTSUPP; 668 + 669 + mode = sysfs_match_string(usb_modes, buf); 670 + if (mode < 0) 671 + return mode; 672 + 673 + if (mode == partner->usb_mode) 674 + return size; 675 + 676 + ret = port->ops->enter_usb_mode(port, mode); 677 + if (ret) 678 + return ret; 679 + 680 + typec_partner_set_usb_mode(partner, mode); 681 + 682 + return size; 683 + } 684 + static DEVICE_ATTR_RW(usb_mode); 685 + 624 686 static ssize_t accessory_mode_show(struct device *dev, 625 687 struct device_attribute *attr, 626 688 char *buf) ··· 736 660 &dev_attr_supports_usb_power_delivery.attr, 737 661 &dev_attr_number_of_alternate_modes.attr, 738 662 &dev_attr_type.attr, 663 + &dev_attr_usb_mode.attr, 739 664 &dev_attr_usb_power_delivery_revision.attr, 740 665 NULL 741 666 }; ··· 744 667 static umode_t typec_partner_attr_is_visible(struct kobject *kobj, struct attribute *attr, int n) 745 668 { 746 669 struct typec_partner *partner = to_typec_partner(kobj_to_dev(kobj)); 670 + struct typec_port *port = to_typec_port(partner->dev.parent); 671 + 672 + if (attr == &dev_attr_usb_mode.attr) { 673 + if (!partner->usb_capability) 674 + return 0; 675 + if (!port->ops || !port->ops->enter_usb_mode) 676 + return 0444; 677 + } 747 678 748 679 if (attr == &dev_attr_number_of_alternate_modes.attr) { 749 680 if (partner->num_altmodes < 0) ··· 825 740 */ 826 741 int typec_partner_set_identity(struct typec_partner *partner) 827 742 { 828 - if (!partner->identity) 743 + u8 usb_capability = partner->usb_capability; 744 + struct device *dev = &partner->dev; 745 + struct usb_pd_identity *id; 746 + 747 + id = get_pd_identity(dev); 748 + if (!id) 829 749 return -EINVAL; 830 750 831 - typec_report_identity(&partner->dev); 751 + if (to_typec_port(dev->parent)->data_role == TYPEC_HOST) { 752 + u32 devcap = 
PD_VDO_UFP_DEVCAP(id->vdo[0]); 753 + 754 + if (devcap & (DEV_USB2_CAPABLE | DEV_USB2_BILLBOARD)) 755 + usb_capability |= USB_CAPABILITY_USB2; 756 + if (devcap & DEV_USB3_CAPABLE) 757 + usb_capability |= USB_CAPABILITY_USB3; 758 + if (devcap & DEV_USB4_CAPABLE) 759 + usb_capability |= USB_CAPABILITY_USB4; 760 + } else { 761 + usb_capability = PD_VDO_DFP_HOSTCAP(id->vdo[0]); 762 + } 763 + 764 + if (partner->usb_capability != usb_capability) { 765 + partner->usb_capability = usb_capability; 766 + sysfs_notify(&dev->kobj, NULL, "usb_mode"); 767 + } 768 + 769 + typec_report_identity(dev); 832 770 return 0; 833 771 } 834 772 EXPORT_SYMBOL_GPL(typec_partner_set_identity); ··· 1021 913 partner->usb_pd = desc->usb_pd; 1022 914 partner->accessory = desc->accessory; 1023 915 partner->num_altmodes = -1; 916 + partner->usb_capability = desc->usb_capability; 1024 917 partner->pd_revision = desc->pd_revision; 1025 918 partner->svdm_version = port->cap->svdm_version; 1026 919 partner->attach = desc->attach; ··· 1040 931 partner->dev.parent = &port->dev; 1041 932 partner->dev.type = &typec_partner_dev_type; 1042 933 dev_set_name(&partner->dev, "%s-partner", dev_name(&port->dev)); 934 + 935 + if (port->usb2_dev) { 936 + partner->usb_capability |= USB_CAPABILITY_USB2; 937 + partner->usb_mode = USB_MODE_USB2; 938 + } 939 + if (port->usb3_dev) { 940 + partner->usb_capability |= USB_CAPABILITY_USB2 | USB_CAPABILITY_USB3; 941 + partner->usb_mode = USB_MODE_USB3; 942 + } 1043 943 1044 944 ret = device_register(&partner->dev); 1045 945 if (ret) { ··· 1409 1291 1410 1292 /* ------------------------------------------------------------------------- */ 1411 1293 /* USB Type-C ports */ 1294 + 1295 + /** 1296 + * typec_port_set_usb_mode - Set the operational USB mode for the port 1297 + * @port: USB Type-C port 1298 + * @mode: USB Mode (USB2, USB3 or USB4) 1299 + * 1300 + * @mode will be used with the next Enter_USB message. Existing connections are 1301 + * not affected. 
1302 + */ 1303 + void typec_port_set_usb_mode(struct typec_port *port, enum usb_mode mode) 1304 + { 1305 + port->usb_mode = mode; 1306 + } 1307 + EXPORT_SYMBOL_GPL(typec_port_set_usb_mode); 1308 + 1309 + static ssize_t 1310 + usb_capability_show(struct device *dev, struct device_attribute *attr, char *buf) 1311 + { 1312 + struct typec_port *port = to_typec_port(dev); 1313 + int len = 0; 1314 + int i; 1315 + 1316 + for (i = USB_MODE_USB2; i < USB_MODE_USB4 + 1; i++) { 1317 + if (!(BIT(i - 1) & port->cap->usb_capability)) 1318 + continue; 1319 + 1320 + if (i == port->usb_mode) 1321 + len += sysfs_emit_at(buf, len, "[%s] ", usb_modes[i]); 1322 + else 1323 + len += sysfs_emit_at(buf, len, "%s ", usb_modes[i]); 1324 + } 1325 + 1326 + sysfs_emit_at(buf, len - 1, "\n"); 1327 + 1328 + return len; 1329 + } 1330 + 1331 + static ssize_t 1332 + usb_capability_store(struct device *dev, struct device_attribute *attr, 1333 + const char *buf, size_t size) 1334 + { 1335 + struct typec_port *port = to_typec_port(dev); 1336 + int ret = 0; 1337 + int mode; 1338 + 1339 + if (!port->ops || !port->ops->default_usb_mode_set) 1340 + return -EOPNOTSUPP; 1341 + 1342 + mode = sysfs_match_string(usb_modes, buf); 1343 + if (mode < 0) 1344 + return mode; 1345 + 1346 + ret = port->ops->default_usb_mode_set(port, mode); 1347 + if (ret) 1348 + return ret; 1349 + 1350 + port->usb_mode = mode; 1351 + 1352 + return size; 1353 + } 1354 + static DEVICE_ATTR_RW(usb_capability); 1412 1355 1413 1356 /** 1414 1357 * typec_port_set_usb_power_delivery - Assign USB PD for port. 
··· 1939 1760 &dev_attr_vconn_source.attr, 1940 1761 &dev_attr_port_type.attr, 1941 1762 &dev_attr_orientation.attr, 1763 + &dev_attr_usb_capability.attr, 1942 1764 NULL, 1943 1765 }; 1944 1766 ··· 1973 1793 if (port->cap->orientation_aware) 1974 1794 return 0444; 1975 1795 return 0; 1796 + } else if (attr == &dev_attr_usb_capability.attr) { 1797 + if (!port->cap->usb_capability) 1798 + return 0; 1799 + if (!port->ops || !port->ops->default_usb_mode_set) 1800 + return 0444; 1976 1801 } 1977 1802 1978 1803 return attr->mode; ··· 2049 1864 struct typec_port *port = container_of(con, struct typec_port, con); 2050 1865 struct typec_partner *partner = typec_get_partner(port); 2051 1866 struct usb_device *udev = to_usb_device(dev); 1867 + enum usb_mode usb_mode; 2052 1868 2053 - if (udev->speed < USB_SPEED_SUPER) 1869 + if (udev->speed < USB_SPEED_SUPER) { 1870 + usb_mode = USB_MODE_USB2; 2054 1871 port->usb2_dev = dev; 2055 - else 1872 + } else { 1873 + usb_mode = USB_MODE_USB3; 2056 1874 port->usb3_dev = dev; 1875 + } 2057 1876 2058 1877 if (partner) { 1878 + typec_partner_set_usb_mode(partner, usb_mode); 2059 1879 typec_partner_link_device(partner, dev); 2060 1880 put_device(&partner->dev); 2061 1881 } ··· 2622 2432 port->prefer_role = cap->prefer_role; 2623 2433 port->con.attach = typec_partner_attach; 2624 2434 port->con.deattach = typec_partner_deattach; 2435 + 2436 + if (cap->usb_capability & USB_CAPABILITY_USB4) 2437 + port->usb_mode = USB_MODE_USB4; 2438 + else if (cap->usb_capability & USB_CAPABILITY_USB3) 2439 + port->usb_mode = USB_MODE_USB3; 2440 + else if (cap->usb_capability & USB_CAPABILITY_USB2) 2441 + port->usb_mode = USB_MODE_USB2; 2625 2442 2626 2443 device_initialize(&port->dev); 2627 2444 port->dev.class = &typec_class;
+3
drivers/usb/typec/class.h
··· 35 35 int num_altmodes; 36 36 u16 pd_revision; /* 0300H = "3.0" */ 37 37 enum usb_pd_svdm_ver svdm_version; 38 + enum usb_mode usb_mode; 39 + u8 usb_capability; 38 40 39 41 struct usb_power_delivery *pd; 40 42 ··· 57 55 enum typec_role vconn_role; 58 56 enum typec_pwr_opmode pwr_opmode; 59 57 enum typec_port_type port_type; 58 + enum usb_mode usb_mode; 60 59 struct mutex port_type_lock; 61 60 62 61 enum typec_orientation orientation;
+9
drivers/usb/typec/mux/Kconfig
··· 66 66 Say Y or M if your system has a NXP PTN36502 Type-C redriver chip 67 67 found on some devices with a Type-C port. 68 68 69 + config TYPEC_MUX_TUSB1046 70 + tristate "TI TUSB1046 Type-C crosspoint switch driver" 71 + depends on I2C 72 + help 73 + Driver for the Texas Instruments TUSB1046-DCI crosspoint switch. 74 + Supports flipping USB-C SuperSpeed lanes to adapt to orientation 75 + changes, as well as muxing DisplayPort and sideband signals to a 76 + common Type-C connector. 77 + 69 78 config TYPEC_MUX_WCD939X_USBSS 70 79 tristate "Qualcomm WCD939x USBSS Analog Audio Switch driver" 71 80 depends on I2C
+1
drivers/usb/typec/mux/Makefile
··· 7 7 obj-$(CONFIG_TYPEC_MUX_IT5205) += it5205.o 8 8 obj-$(CONFIG_TYPEC_MUX_NB7VPQ904M) += nb7vpq904m.o 9 9 obj-$(CONFIG_TYPEC_MUX_PTN36502) += ptn36502.o 10 + obj-$(CONFIG_TYPEC_MUX_TUSB1046) += tusb1046.o 10 11 obj-$(CONFIG_TYPEC_MUX_WCD939X_USBSS) += wcd939x-usbss.o
+1 -1
drivers/usb/typec/mux/gpio-sbu-mux.c
··· 159 159 160 160 static struct platform_driver gpio_sbu_mux_driver = { 161 161 .probe = gpio_sbu_mux_probe, 162 - .remove_new = gpio_sbu_mux_remove, 162 + .remove = gpio_sbu_mux_remove, 163 163 .driver = { 164 164 .name = "gpio_sbu_mux", 165 165 .of_match_table = gpio_sbu_mux_match,
+1 -1
drivers/usb/typec/mux/intel_pmc_mux.c
··· 828 828 .acpi_match_table = ACPI_PTR(pmc_usb_acpi_ids), 829 829 }, 830 830 .probe = pmc_usb_probe, 831 - .remove_new = pmc_usb_remove, 831 + .remove = pmc_usb_remove, 832 832 }; 833 833 834 834 static int __init pmc_usb_init(void)
+196
drivers/usb/typec/mux/tusb1046.c
···
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+  * Driver for the TUSB1046-DCI USB Type-C crosspoint switch
+  *
+  * Copyright (C) 2024 Bootlin
+  */
+
+ #include <linux/bits.h>
+ #include <linux/i2c.h>
+ #include <linux/usb/typec_mux.h>
+ #include <linux/usb/typec_dp.h>
+ #include <linux/usb/typec_altmode.h>
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/err.h>
+ #include <linux/of_device.h>
+ #include <linux/device.h>
+ #include <linux/mutex.h>
+
+ #define TUSB1046_REG_GENERAL 0xa
+
+ /* General register bits */
+ #define TUSB1046_GENERAL_FLIPSEL BIT(2)
+ #define TUSB1046_GENERAL_CTLSEL  GENMASK(1, 0)
+
+ /* Mux modes */
+ #define TUSB1046_CTLSEL_DISABLED          0x0
+ #define TUSB1046_CTLSEL_USB3              0x1
+ #define TUSB1046_CTLSEL_4LANE_DP          0x2
+ #define TUSB1046_CTLSEL_USB3_AND_2LANE_DP 0x3
+
+ struct tusb1046_priv {
+     struct i2c_client *client;
+     struct typec_switch_dev *sw;
+     struct typec_mux_dev *mux;
+
+     /* Lock General register during accesses */
+     struct mutex general_reg_lock;
+ };
+
+ static int tusb1046_mux_set(struct typec_mux_dev *mux,
+                             struct typec_mux_state *state)
+ {
+     struct tusb1046_priv *priv = typec_mux_get_drvdata(mux);
+     struct i2c_client *client = priv->client;
+     struct device *dev = &client->dev;
+     int mode, val, ret = 0;
+
+     if (state->mode >= TYPEC_STATE_MODAL &&
+         state->alt->svid != USB_TYPEC_DP_SID)
+         return -EINVAL;
+
+     dev_dbg(dev, "mux mode requested: %lu\n", state->mode);
+
+     mutex_lock(&priv->general_reg_lock);
+
+     val = i2c_smbus_read_byte_data(client, TUSB1046_REG_GENERAL);
+     if (val < 0) {
+         dev_err(dev, "failed to read ctlsel status, err %d\n", val);
+         ret = val;
+         goto out_unlock;
+     }
+
+     switch (state->mode) {
+     case TYPEC_STATE_USB:
+         mode = TUSB1046_CTLSEL_USB3;
+         break;
+     case TYPEC_DP_STATE_C:
+     case TYPEC_DP_STATE_E:
+         mode = TUSB1046_CTLSEL_4LANE_DP;
+         break;
+     case TYPEC_DP_STATE_D:
+         mode = TUSB1046_CTLSEL_USB3_AND_2LANE_DP;
+         break;
+     case TYPEC_STATE_SAFE:
+     default:
+         mode = TUSB1046_CTLSEL_DISABLED;
+         break;
+     }
+
+     val &= ~TUSB1046_GENERAL_CTLSEL;
+     val |= mode;
+
+     ret = i2c_smbus_write_byte_data(client, TUSB1046_REG_GENERAL, val);
+
+ out_unlock:
+     mutex_unlock(&priv->general_reg_lock);
+     return ret;
+ }
+
+ static int tusb1046_switch_set(struct typec_switch_dev *sw,
+                                enum typec_orientation orientation)
+ {
+     struct tusb1046_priv *priv = typec_switch_get_drvdata(sw);
+     struct i2c_client *client = priv->client;
+     struct device *dev = &client->dev;
+     int val, ret = 0;
+
+     dev_dbg(dev, "setting USB3.0 lane flip for orientation %d\n", orientation);
+
+     mutex_lock(&priv->general_reg_lock);
+
+     val = i2c_smbus_read_byte_data(client, TUSB1046_REG_GENERAL);
+     if (val < 0) {
+         dev_err(dev, "failed to read flipsel status, err %d\n", val);
+         ret = val;
+         goto out_unlock;
+     }
+
+     if (orientation == TYPEC_ORIENTATION_REVERSE)
+         val |= TUSB1046_GENERAL_FLIPSEL;
+     else
+         val &= ~TUSB1046_GENERAL_FLIPSEL;
+
+     ret = i2c_smbus_write_byte_data(client, TUSB1046_REG_GENERAL, val);
+
+ out_unlock:
+     mutex_unlock(&priv->general_reg_lock);
+     return ret;
+ }
+
+ static int tusb1046_i2c_probe(struct i2c_client *client)
+ {
+     struct typec_switch_desc sw_desc = { };
+     struct typec_mux_desc mux_desc = { };
+     struct device *dev = &client->dev;
+     struct tusb1046_priv *priv;
+     int ret = 0;
+
+     priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+     if (!priv)
+         return dev_err_probe(dev, -ENOMEM, "failed to allocate driver data\n");
+
+     priv->client = client;
+
+     mutex_init(&priv->general_reg_lock);
+
+     sw_desc.drvdata = priv;
+     sw_desc.fwnode = dev_fwnode(dev);
+     sw_desc.set = tusb1046_switch_set;
+
+     priv->sw = typec_switch_register(dev, &sw_desc);
+     if (IS_ERR(priv->sw)) {
+         ret = dev_err_probe(dev, PTR_ERR(priv->sw), "failed to register type-c switch\n");
+         goto err_destroy_mutex;
+     }
+
+     mux_desc.drvdata = priv;
+     mux_desc.fwnode = dev_fwnode(dev);
+     mux_desc.set = tusb1046_mux_set;
+
+     priv->mux = typec_mux_register(dev, &mux_desc);
+     if (IS_ERR(priv->mux)) {
+         ret = dev_err_probe(dev, PTR_ERR(priv->mux), "failed to register type-c mux\n");
+         goto err_unregister_switch;
+     }
+
+     i2c_set_clientdata(client, priv);
+
+     return 0;
+
+ err_unregister_switch:
+     typec_switch_unregister(priv->sw);
+ err_destroy_mutex:
+     mutex_destroy(&priv->general_reg_lock);
+     return ret;
+ }
+
+ static void tusb1046_i2c_remove(struct i2c_client *client)
+ {
+     struct tusb1046_priv *priv = i2c_get_clientdata(client);
+
+     typec_switch_unregister(priv->sw);
+     typec_mux_unregister(priv->mux);
+     mutex_destroy(&priv->general_reg_lock);
+ }
+
+ static const struct of_device_id tusb1046_match_table[] = {
+     {.compatible = "ti,tusb1046"},
+     {},
+ };
+
+ static struct i2c_driver tusb1046_driver = {
+     .driver = {
+         .name = "tusb1046",
+         .of_match_table = tusb1046_match_table,
+     },
+     .probe = tusb1046_i2c_probe,
+     .remove = tusb1046_i2c_remove,
+ };
+
+ module_i2c_driver(tusb1046_driver);
+
+ MODULE_DESCRIPTION("TUSB1046 USB Type-C switch driver");
+ MODULE_AUTHOR("Romain Gantois <romain.gantois@bootlin.com>");
+ MODULE_LICENSE("GPL");
+3 -4
drivers/usb/typec/stusb160x.c
···
  static int stusb160x_probe(struct i2c_client *client)
  {
+     const struct regmap_config *regmap_config;
      struct stusb160x *chip;
-     const struct of_device_id *match;
-     struct regmap_config *regmap_config;
      struct fwnode_handle *fwnode;
      int ret;
···
      i2c_set_clientdata(client, chip);

-     match = i2c_of_match_device(stusb160x_of_match, client);
-     regmap_config = (struct regmap_config *)match->data;
+     regmap_config = i2c_get_match_data(client);
+
      chip->regmap = devm_regmap_init_i2c(client, regmap_config);
      if (IS_ERR(chip->regmap)) {
          ret = PTR_ERR(chip->regmap);
+1 -1
drivers/usb/typec/tcpm/qcom/qcom_pmic_typec.c
···
          .of_match_table = qcom_pmic_typec_table,
      },
      .probe = qcom_pmic_typec_probe,
-     .remove_new = qcom_pmic_typec_remove,
+     .remove = qcom_pmic_typec_remove,
  };

  module_platform_driver(qcom_pmic_typec_driver);
+1 -1
drivers/usb/typec/tcpm/tcpci_mt6360.c
···
          .of_match_table = mt6360_tcpc_of_id,
      },
      .probe = mt6360_tcpc_probe,
-     .remove_new = mt6360_tcpc_remove,
+     .remove = mt6360_tcpc_remove,
  };
  module_platform_driver(mt6360_tcpc_driver);
+1 -1
drivers/usb/typec/tcpm/tcpci_mt6370.c
···
          .of_match_table = mt6370_tcpc_devid_table,
      },
      .probe = mt6370_tcpc_probe,
-     .remove_new = mt6370_tcpc_remove,
+     .remove = mt6370_tcpc_remove,
  };
  module_platform_driver(mt6370_tcpc_driver);
+76 -20
drivers/usb/typec/tcpm/tcpm.c
···
      unsigned int operating_snk_mw;
  };

+ /*
+  * @sink_wait_cap_time: Deadline (in ms) for tTypeCSinkWaitCap timer
+  * @ps_src_off_time: Deadline (in ms) for tPSSourceOff timer
+  * @cc_debounce_time: Deadline (in ms) for tCCDebounce timer
+  */
+ struct pd_timings {
+     u32 sink_wait_cap_time;
+     u32 ps_src_off_time;
+     u32 cc_debounce_time;
+     u32 snk_bc12_cmpletion_time;
+ };
+
  struct tcpm_port {
      struct device *dev;
···
       */
      unsigned int message_id_prime;
      unsigned int rx_msgid_prime;
+
+     /* Timer deadline values configured at runtime */
+     struct pd_timings timings;
  #ifdef CONFIG_DEBUG_FS
      struct dentry *dentry;
      struct mutex logbuffer_lock;    /* log buffer access lock */
···
      case SRC_ATTACH_WAIT:
          if (tcpm_port_is_debug(port))
              tcpm_set_state(port, DEBUG_ACC_ATTACHED,
-                            PD_T_CC_DEBOUNCE);
+                            port->timings.cc_debounce_time);
          else if (tcpm_port_is_audio(port))
              tcpm_set_state(port, AUDIO_ACC_ATTACHED,
-                            PD_T_CC_DEBOUNCE);
+                            port->timings.cc_debounce_time);
          else if (tcpm_port_is_source(port) && port->vbus_vsafe0v)
              tcpm_set_state(port,
                             tcpm_try_snk(port) ? SNK_TRY
                                                : SRC_ATTACHED,
-                            PD_T_CC_DEBOUNCE);
+                            port->timings.cc_debounce_time);
          break;

      case SNK_TRY:
···
          }
          break;
      case SRC_TRYWAIT_DEBOUNCE:
-         tcpm_set_state(port, SRC_ATTACHED, PD_T_CC_DEBOUNCE);
+         tcpm_set_state(port, SRC_ATTACHED, port->timings.cc_debounce_time);
          break;
      case SRC_TRYWAIT_UNATTACHED:
          tcpm_set_state(port, SNK_UNATTACHED, 0);
···
          (port->cc1 != TYPEC_CC_OPEN &&
           port->cc2 == TYPEC_CC_OPEN))
              tcpm_set_state(port, SNK_DEBOUNCED,
-                            PD_T_CC_DEBOUNCE);
+                            port->timings.cc_debounce_time);
          else if (tcpm_port_is_disconnected(port))
              tcpm_set_state(port, SNK_UNATTACHED,
                             PD_T_PD_DEBOUNCE);
···
          break;
      case SNK_TRYWAIT:
          tcpm_set_cc(port, TYPEC_CC_RD);
-         tcpm_set_state(port, SNK_TRYWAIT_VBUS, PD_T_CC_DEBOUNCE);
+         tcpm_set_state(port, SNK_TRYWAIT_VBUS, port->timings.cc_debounce_time);
          break;
      case SNK_TRYWAIT_VBUS:
          /*
···
          if (ret < 0)
              tcpm_set_state(port, SNK_UNATTACHED, 0);
          else
-             tcpm_set_state(port, SNK_STARTUP, 0);
+             /*
+              * For Type C port controllers that use Battery Charging
+              * Detection (based on BCv1.2 spec) to detect USB
+              * charger type, add a delay of "snk_bc12_cmpletion_time"
+              * before transitioning to SNK_STARTUP to allow BC1.2
+              * detection to complete before PD is eventually enabled
+              * in later states.
+              */
+             tcpm_set_state(port, SNK_STARTUP,
+                            port->timings.snk_bc12_cmpletion_time);
          break;
      case SNK_STARTUP:
          opmode = tcpm_get_pwr_opmode(port->polarity ?
···
          break;
      case SNK_DISCOVERY_DEBOUNCE:
          tcpm_set_state(port, SNK_DISCOVERY_DEBOUNCE_DONE,
-                        PD_T_CC_DEBOUNCE);
+                        port->timings.cc_debounce_time);
          break;
      case SNK_DISCOVERY_DEBOUNCE_DONE:
          if (!tcpm_port_is_disconnected(port) &&
···
          if (port->vbus_never_low) {
              port->vbus_never_low = false;
              tcpm_set_state(port, SNK_SOFT_RESET,
-                            PD_T_SINK_WAIT_CAP);
+                            port->timings.sink_wait_cap_time);
          } else {
              if (!port->self_powered)
                  upcoming_state = SNK_WAIT_CAPABILITIES_TIMEOUT;
              else
                  upcoming_state = hard_reset_state(port);
-             tcpm_set_state(port, upcoming_state, PD_T_SINK_WAIT_CAP);
+             tcpm_set_state(port, SNK_WAIT_CAPABILITIES_TIMEOUT,
+                            port->timings.sink_wait_cap_time);
          }
          break;
      case SNK_WAIT_CAPABILITIES_TIMEOUT:
···
          if (tcpm_pd_send_control(port, PD_CTRL_GET_SOURCE_CAP, TCPC_TX_SOP))
              tcpm_set_state_cond(port, hard_reset_state(port), 0);
          else
-             tcpm_set_state(port, hard_reset_state(port), PD_T_SINK_WAIT_CAP);
+             tcpm_set_state(port, hard_reset_state(port),
+                            port->timings.sink_wait_cap_time);
          break;
      case SNK_NEGOTIATE_CAPABILITIES:
          port->pd_capable = true;
···
          tcpm_set_state(port, ACC_UNATTACHED, 0);
          break;
      case AUDIO_ACC_DEBOUNCE:
-         tcpm_set_state(port, ACC_UNATTACHED, PD_T_CC_DEBOUNCE);
+         tcpm_set_state(port, ACC_UNATTACHED, port->timings.cc_debounce_time);
          break;

      /* Hard_Reset states */
···
          tcpm_set_state(port, SRC_UNATTACHED, PD_T_PS_SOURCE_ON);
          break;
      case SNK_HARD_RESET_SINK_OFF:
-         /* Do not discharge/disconnect during hard reseet */
+         /* Do not discharge/disconnect during hard reset */
          tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, 0);
          memset(&port->pps_data, 0, sizeof(port->pps_data));
          tcpm_set_vconn(port, false);
···
          tcpm_set_state(port, ERROR_RECOVERY, 0);
          break;
      case FR_SWAP_SNK_SRC_TRANSITION_TO_OFF:
-         tcpm_set_state(port, ERROR_RECOVERY, PD_T_PS_SOURCE_OFF);
+         tcpm_set_state(port, ERROR_RECOVERY, port->timings.ps_src_off_time);
          break;
      case FR_SWAP_SNK_SRC_NEW_SINK_READY:
          if (port->vbus_source)
···
          tcpm_set_cc(port, TYPEC_CC_RD);
          /* allow CC debounce */
          tcpm_set_state(port, PR_SWAP_SRC_SNK_SOURCE_OFF_CC_DEBOUNCED,
-                        PD_T_CC_DEBOUNCE);
+                        port->timings.cc_debounce_time);
          break;
      case PR_SWAP_SRC_SNK_SOURCE_OFF_CC_DEBOUNCED:
          /*
···
                 port->pps_data.active, 0);
          tcpm_set_charge(port, false);
          tcpm_set_state(port, hard_reset_state(port),
-                        PD_T_PS_SOURCE_OFF);
+                        port->timings.ps_src_off_time);
          break;
      case PR_SWAP_SNK_SRC_SOURCE_ON:
          tcpm_enable_auto_vbus_discharge(port, true);
···
      case PORT_RESET_WAIT_OFF:
          tcpm_set_state(port,
                         tcpm_default_state(port),
-                        port->vbus_present ? PD_T_PS_SOURCE_OFF : 0);
+                        port->vbus_present ? port->timings.ps_src_off_time : 0);
          break;

      /* AMS intermediate state */
···
          break;
      case SNK_ATTACH_WAIT:
      case SNK_DEBOUNCED:
-         /* Do nothing, as TCPM is still waiting for vbus to reaach VSAFE5V to connect */
+         /* Do nothing, as TCPM is still waiting for vbus to reach VSAFE5V to connect */
          break;

      case SNK_NEGOTIATE_CAPABILITIES:
···
      case SRC_ATTACH_WAIT:
          if (tcpm_port_is_source(port))
              tcpm_set_state(port, tcpm_try_snk(port) ? SNK_TRY : SRC_ATTACHED,
-                            PD_T_CC_DEBOUNCE);
+                            port->timings.cc_debounce_time);
          break;
      case SRC_STARTUP:
      case SRC_SEND_CAPABILITIES:
···
          return ret;
  }

+ static void tcpm_fw_get_timings(struct tcpm_port *port, struct fwnode_handle *fwnode)
+ {
+     int ret;
+     u32 val;
+
+     ret = fwnode_property_read_u32(fwnode, "sink-wait-cap-time-ms", &val);
+     if (!ret)
+         port->timings.sink_wait_cap_time = val;
+     else
+         port->timings.sink_wait_cap_time = PD_T_SINK_WAIT_CAP;
+
+     ret = fwnode_property_read_u32(fwnode, "ps-source-off-time-ms", &val);
+     if (!ret)
+         port->timings.ps_src_off_time = val;
+     else
+         port->timings.ps_src_off_time = PD_T_PS_SOURCE_OFF;
+
+     ret = fwnode_property_read_u32(fwnode, "cc-debounce-time-ms", &val);
+     if (!ret)
+         port->timings.cc_debounce_time = val;
+     else
+         port->timings.cc_debounce_time = PD_T_CC_DEBOUNCE;
+
+     ret = fwnode_property_read_u32(fwnode, "sink-bc12-completion-time-ms", &val);
+     if (!ret)
+         port->timings.snk_bc12_cmpletion_time = val;
+ }
+
  static int tcpm_fw_get_caps(struct tcpm_port *port, struct fwnode_handle *fwnode)
  {
      struct fwnode_handle *capabilities, *child, *caps = NULL;
···
          src_mv = pdo_fixed_voltage(pdo);
          src_ma = pdo_max_current(pdo);
          tmp = src_mv * src_ma;
-         max_src_uw = tmp > max_src_uw ? tmp : max_src_uw;
+         max_src_uw = max(tmp, max_src_uw);
      }
  }
···
      err = tcpm_fw_get_snk_vdos(port, tcpc->fwnode);
      if (err < 0)
          goto out_destroy_wq;
+
+     tcpm_fw_get_timings(port, tcpc->fwnode);

      port->try_role = port->typec_caps.prefer_role;
+1 -1
drivers/usb/typec/tcpm/wcove.c
···
          .name = "bxt_wcove_usbc",
      },
      .probe = wcove_typec_probe,
-     .remove_new = wcove_typec_remove,
+     .remove = wcove_typec_remove,
  };

  module_platform_driver(wcove_typec_driver);
+1
drivers/usb/typec/ucsi/debugfs.c
···
      case UCSI_SET_UOR:
      case UCSI_SET_PDR:
      case UCSI_CONNECTOR_RESET:
+     case UCSI_SET_SINK_PATH:
          ret = ucsi_send_command(ucsi, val, NULL, 0);
          break;
      case UCSI_GET_CAPABILITY:
+12 -16
drivers/usb/typec/ucsi/psy.c
···
                                  union power_supply_propval *val)
  {
      val->intval = UCSI_PSY_OFFLINE;
-     if (con->status.flags & UCSI_CONSTAT_CONNECTED &&
-         (con->status.flags & UCSI_CONSTAT_PWR_DIR) == TYPEC_SINK)
+     if (UCSI_CONSTAT(con, CONNECTED) &&
+         (UCSI_CONSTAT(con, PWR_DIR) == TYPEC_SINK))
          val->intval = UCSI_PSY_FIXED_ONLINE;
      return 0;
  }
···
  {
      u32 pdo;

-     switch (UCSI_CONSTAT_PWR_OPMODE(con->status.flags)) {
+     switch (UCSI_CONSTAT(con, PWR_OPMODE)) {
      case UCSI_CONSTAT_PWR_OPMODE_PD:
          pdo = con->src_pdos[0];
          val->intval = pdo_fixed_voltage(pdo) * 1000;
···
  {
      u32 pdo;

-     switch (UCSI_CONSTAT_PWR_OPMODE(con->status.flags)) {
+     switch (UCSI_CONSTAT(con, PWR_OPMODE)) {
      case UCSI_CONSTAT_PWR_OPMODE_PD:
          if (con->num_pdos > 0) {
              pdo = con->src_pdos[con->num_pdos - 1];
···
      int index;
      u32 pdo;

-     switch (UCSI_CONSTAT_PWR_OPMODE(con->status.flags)) {
+     switch (UCSI_CONSTAT(con, PWR_OPMODE)) {
      case UCSI_CONSTAT_PWR_OPMODE_PD:
          index = rdo_index(con->rdo);
          if (index > 0) {
···
  {
      u32 pdo;

-     switch (UCSI_CONSTAT_PWR_OPMODE(con->status.flags)) {
+     switch (UCSI_CONSTAT(con, PWR_OPMODE)) {
      case UCSI_CONSTAT_PWR_OPMODE_PD:
          if (con->num_pdos > 0) {
              pdo = con->src_pdos[con->num_pdos - 1];
···
  static int ucsi_psy_get_current_now(struct ucsi_connector *con,
                                      union power_supply_propval *val)
  {
-     u16 flags = con->status.flags;
-
-     if (UCSI_CONSTAT_PWR_OPMODE(flags) == UCSI_CONSTAT_PWR_OPMODE_PD)
+     if (UCSI_CONSTAT(con, PWR_OPMODE) == UCSI_CONSTAT_PWR_OPMODE_PD)
          val->intval = rdo_op_current(con->rdo) * 1000;
      else
          val->intval = 0;
···
  static int ucsi_psy_get_usb_type(struct ucsi_connector *con,
                                   union power_supply_propval *val)
  {
-     u16 flags = con->status.flags;
-
      val->intval = POWER_SUPPLY_USB_TYPE_C;
-     if (flags & UCSI_CONSTAT_CONNECTED &&
-         UCSI_CONSTAT_PWR_OPMODE(flags) == UCSI_CONSTAT_PWR_OPMODE_PD)
+     if (UCSI_CONSTAT(con, CONNECTED) &&
+         UCSI_CONSTAT(con, PWR_OPMODE) == UCSI_CONSTAT_PWR_OPMODE_PD)
          val->intval = POWER_SUPPLY_USB_TYPE_PD;

      return 0;
···
  static int ucsi_psy_get_charge_type(struct ucsi_connector *con, union power_supply_propval *val)
  {
-     if (!(con->status.flags & UCSI_CONSTAT_CONNECTED)) {
+     if (!(UCSI_CONSTAT(con, CONNECTED))) {
          val->intval = POWER_SUPPLY_CHARGE_TYPE_NONE;
          return 0;
      }

      /* The Battery Charging Cabability Status field is only valid in sink role. */
-     if ((con->status.flags & UCSI_CONSTAT_PWR_DIR) != TYPEC_SINK) {
+     if (UCSI_CONSTAT(con, PWR_DIR) != TYPEC_SINK) {
          val->intval = POWER_SUPPLY_CHARGE_TYPE_UNKNOWN;
          return 0;
      }

-     switch (UCSI_CONSTAT_BC_STATUS(con->status.pwr_status)) {
+     switch (UCSI_CONSTAT(con, BC_STATUS)) {
      case UCSI_CONSTAT_BC_NOMINAL_CHARGING:
          val->intval = POWER_SUPPLY_CHARGE_TYPE_STANDARD;
          break;
+14 -14
drivers/usb/typec/ucsi/trace.h
···
  );

  DECLARE_EVENT_CLASS(ucsi_log_connector_status,
-     TP_PROTO(int port, struct ucsi_connector_status *status),
-     TP_ARGS(port, status),
+     TP_PROTO(int port, struct ucsi_connector *con),
+     TP_ARGS(port, con),
      TP_STRUCT__entry(
          __field(int, port)
          __field(u16, change)
···
      ),
      TP_fast_assign(
          __entry->port = port - 1;
-         __entry->change = status->change;
-         __entry->opmode = UCSI_CONSTAT_PWR_OPMODE(status->flags);
-         __entry->connected = !!(status->flags & UCSI_CONSTAT_CONNECTED);
-         __entry->pwr_dir = !!(status->flags & UCSI_CONSTAT_PWR_DIR);
-         __entry->partner_flags = UCSI_CONSTAT_PARTNER_FLAGS(status->flags);
-         __entry->partner_type = UCSI_CONSTAT_PARTNER_TYPE(status->flags);
-         __entry->request_data_obj = status->request_data_obj;
-         __entry->bc_status = UCSI_CONSTAT_BC_STATUS(status->pwr_status);
+         __entry->change = UCSI_CONSTAT(con, CHANGE);
+         __entry->opmode = UCSI_CONSTAT(con, PWR_OPMODE);
+         __entry->connected = UCSI_CONSTAT(con, CONNECTED);
+         __entry->pwr_dir = UCSI_CONSTAT(con, PWR_DIR);
+         __entry->partner_flags = UCSI_CONSTAT(con, PARTNER_FLAGS);
+         __entry->partner_type = UCSI_CONSTAT(con, PARTNER_TYPE);
+         __entry->request_data_obj = UCSI_CONSTAT(con, RDO);
+         __entry->bc_status = UCSI_CONSTAT(con, BC_STATUS);
      ),
      TP_printk("port%d status: change=%04x, opmode=%x, connected=%d, "
                "sourcing=%d, partner_flags=%x, partner_type=%x, "
···
  );

  DEFINE_EVENT(ucsi_log_connector_status, ucsi_connector_change,
-     TP_PROTO(int port, struct ucsi_connector_status *status),
-     TP_ARGS(port, status)
+     TP_PROTO(int port, struct ucsi_connector *con),
+     TP_ARGS(port, con)
  );

  DEFINE_EVENT(ucsi_log_connector_status, ucsi_register_port,
-     TP_PROTO(int port, struct ucsi_connector_status *status),
-     TP_ARGS(port, status)
+     TP_PROTO(int port, struct ucsi_connector *con),
+     TP_ARGS(port, con)
  );

  DECLARE_EVENT_CLASS(ucsi_log_register_altmode,
+84 -69
drivers/usb/typec/ucsi/ucsi.c
···
      }
  }

+ static int ucsi_get_connector_status(struct ucsi_connector *con, bool conn_ack)
+ {
+     u64 command = UCSI_GET_CONNECTOR_STATUS | UCSI_CONNECTOR_NUMBER(con->num);
+     size_t size = min(UCSI_GET_CONNECTOR_STATUS_SIZE, UCSI_MAX_DATA_LENGTH(con->ucsi));
+     int ret;
+
+     ret = ucsi_send_command_common(con->ucsi, command, &con->status, size, conn_ack);
+
+     return ret < 0 ? ret : 0;
+ }
+
  static int ucsi_read_pdos(struct ucsi_connector *con,
                            enum typec_role role, int is_partner,
                            u32 *pdos, int offset, int num_pdos)
···
      if (is_partner &&
          ucsi->quirks & UCSI_NO_PARTNER_PDOS &&
-         ((con->status.flags & UCSI_CONSTAT_PWR_DIR) ||
-          !is_source(role)))
+         (UCSI_CONSTAT(con, PWR_DIR) || !is_source(role)))
          return 0;

      command = UCSI_COMMAND(UCSI_GET_PDOS) | UCSI_CONNECTOR_NUMBER(con->num);
···
  static int ucsi_check_connector_capability(struct ucsi_connector *con)
  {
+     u64 pd_revision;
      u64 command;
      int ret;
···
          return ret;
      }

-     typec_partner_set_pd_revision(con->partner,
-         UCSI_CONCAP_FLAG_PARTNER_PD_MAJOR_REV_AS_BCD(con->cap.flags));
+     pd_revision = UCSI_CONCAP(con, PARTNER_PD_REVISION_V2_1);
+     typec_partner_set_pd_revision(con->partner, UCSI_SPEC_REVISION_TO_BCD(pd_revision));

      return ret;
  }

  static void ucsi_pwr_opmode_change(struct ucsi_connector *con)
  {
-     switch (UCSI_CONSTAT_PWR_OPMODE(con->status.flags)) {
+     switch (UCSI_CONSTAT(con, PWR_OPMODE)) {
      case UCSI_CONSTAT_PWR_OPMODE_PD:
-         con->rdo = con->status.request_data_obj;
+         con->rdo = UCSI_CONSTAT(con, RDO);
          typec_set_pwr_opmode(con->port, TYPEC_PWR_MODE_PD);
          ucsi_partner_task(con, ucsi_get_src_pdos, 30, 0);
          ucsi_partner_task(con, ucsi_check_altmodes, 30, HZ);
···
  static int ucsi_register_partner(struct ucsi_connector *con)
  {
-     u8 pwr_opmode = UCSI_CONSTAT_PWR_OPMODE(con->status.flags);
+     u8 pwr_opmode = UCSI_CONSTAT(con, PWR_OPMODE);
      struct typec_partner_desc desc;
      struct typec_partner *partner;
···
      memset(&desc, 0, sizeof(desc));

-     switch (UCSI_CONSTAT_PARTNER_TYPE(con->status.flags)) {
+     switch (UCSI_CONSTAT(con, PARTNER_TYPE)) {
      case UCSI_CONSTAT_PARTNER_TYPE_DEBUG:
          desc.accessory = TYPEC_ACCESSORY_DEBUG;
          break;
···
      desc.identity = &con->partner_identity;
      desc.usb_pd = pwr_opmode == UCSI_CONSTAT_PWR_OPMODE_PD;

+     if (con->ucsi->version >= UCSI_VERSION_2_1) {
+         u64 pd_revision = UCSI_CONCAP(con, PARTNER_PD_REVISION_V2_1);
+         desc.pd_revision = UCSI_SPEC_REVISION_TO_BCD(pd_revision);
+     }
+
      partner = typec_register_partner(con->port, &desc);
      if (IS_ERR(partner)) {
          dev_err(con->ucsi->dev,
···
      }

      con->partner = partner;
+
+     if (con->ucsi->version >= UCSI_VERSION_3_0 &&
+         UCSI_CONSTAT(con, PARTNER_FLAG_USB4_GEN4))
+         typec_partner_set_usb_mode(partner, USB_MODE_USB4);
+     else if (con->ucsi->version >= UCSI_VERSION_2_0 &&
+              UCSI_CONSTAT(con, PARTNER_FLAG_USB4_GEN3))
+         typec_partner_set_usb_mode(partner, USB_MODE_USB4);

      return 0;
  }
···
      enum usb_role u_role = USB_ROLE_NONE;
      int ret;

-     switch (UCSI_CONSTAT_PARTNER_TYPE(con->status.flags)) {
+     switch (UCSI_CONSTAT(con, PARTNER_TYPE)) {
      case UCSI_CONSTAT_PARTNER_TYPE_UFP:
      case UCSI_CONSTAT_PARTNER_TYPE_CABLE_AND_UFP:
          u_role = USB_ROLE_HOST;
···
          break;
      }

-     if (con->status.flags & UCSI_CONSTAT_CONNECTED) {
-         switch (UCSI_CONSTAT_PARTNER_TYPE(con->status.flags)) {
+     if (UCSI_CONSTAT(con, CONNECTED)) {
+         switch (UCSI_CONSTAT(con, PARTNER_TYPE)) {
          case UCSI_CONSTAT_PARTNER_TYPE_DEBUG:
              typec_set_mode(con->port, TYPEC_MODE_DEBUG);
              break;
···
              typec_set_mode(con->port, TYPEC_MODE_AUDIO);
              break;
          default:
-             if (UCSI_CONSTAT_PARTNER_FLAGS(con->status.flags) ==
-                 UCSI_CONSTAT_PARTNER_FLAG_USB)
+             if (UCSI_CONSTAT(con, PARTNER_FLAG_USB))
                  typec_set_mode(con->port, TYPEC_STATE_USB);
          }
      }

      /* Only notify USB controller if partner supports USB data */
-     if (!(UCSI_CONSTAT_PARTNER_FLAGS(con->status.flags) & UCSI_CONSTAT_PARTNER_FLAG_USB))
+     if (!(UCSI_CONSTAT(con, PARTNER_FLAG_USB)))
          u_role = USB_ROLE_NONE;

      ret = usb_role_switch_set_role(con->usb_role_sw, u_role);
···
  static int ucsi_check_connection(struct ucsi_connector *con)
  {
-     u8 prev_flags = con->status.flags;
-     u64 command;
+     u8 prev_state = UCSI_CONSTAT(con, CONNECTED);
      int ret;

-     command = UCSI_GET_CONNECTOR_STATUS | UCSI_CONNECTOR_NUMBER(con->num);
-     ret = ucsi_send_command(con->ucsi, command, &con->status, sizeof(con->status));
-     if (ret < 0) {
+     ret = ucsi_get_connector_status(con, false);
+     if (ret) {
          dev_err(con->ucsi->dev, "GET_CONNECTOR_STATUS failed (%d)\n", ret);
          return ret;
      }

-     if (con->status.flags == prev_flags)
-         return 0;
-
-     if (con->status.flags & UCSI_CONSTAT_CONNECTED) {
+     if (UCSI_CONSTAT(con, CONNECTED)) {
+         if (prev_state)
+             return 0;
          ucsi_register_partner(con);
          ucsi_pwr_opmode_change(con);
          ucsi_partner_change(con);
···
                          work);
      struct ucsi *ucsi = con->ucsi;
      enum typec_role role;
-     u64 command;
+     u16 change;
      int ret;

      mutex_lock(&con->lock);
···
          dev_err_once(ucsi->dev, "%s entered without EVENT_PENDING\n",
                       __func__);

-     command = UCSI_GET_CONNECTOR_STATUS | UCSI_CONNECTOR_NUMBER(con->num);
-
-     ret = ucsi_send_command_common(ucsi, command, &con->status,
-                                    sizeof(con->status), true);
-     if (ret < 0) {
+     ret = ucsi_get_connector_status(con, true);
+     if (ret) {
          dev_err(ucsi->dev, "%s: GET_CONNECTOR_STATUS failed (%d)\n",
                  __func__, ret);
          clear_bit(EVENT_PENDING, &con->ucsi->flags);
          goto out_unlock;
      }

-     trace_ucsi_connector_change(con->num, &con->status);
+     trace_ucsi_connector_change(con->num, con);

      if (ucsi->ops->connector_status)
          ucsi->ops->connector_status(con);

-     role = !!(con->status.flags & UCSI_CONSTAT_PWR_DIR);
+     change = UCSI_CONSTAT(con, CHANGE);
+     role = UCSI_CONSTAT(con, PWR_DIR);

-     if (con->status.change & UCSI_CONSTAT_POWER_DIR_CHANGE) {
+     if (change & UCSI_CONSTAT_POWER_DIR_CHANGE) {
          typec_set_pwr_role(con->port, role);

          /* Complete pending power role swap */
···
          complete(&con->complete);
      }

-     if (con->status.change & UCSI_CONSTAT_CONNECT_CHANGE) {
+     if (change & UCSI_CONSTAT_CONNECT_CHANGE) {
          typec_set_pwr_role(con->port, role);
          ucsi_port_psy_changed(con);
          ucsi_partner_change(con);

-         if (con->status.flags & UCSI_CONSTAT_CONNECTED) {
+         if (UCSI_CONSTAT(con, CONNECTED)) {
              ucsi_register_partner(con);
              ucsi_partner_task(con, ucsi_check_connection, 1, HZ);
              if (con->ucsi->cap.features & UCSI_CAP_GET_PD_MESSAGE)
···
              if (con->ucsi->cap.features & UCSI_CAP_CABLE_DETAILS)
                  ucsi_partner_task(con, ucsi_check_cable, 1, HZ);

-             if (UCSI_CONSTAT_PWR_OPMODE(con->status.flags) ==
-                 UCSI_CONSTAT_PWR_OPMODE_PD) {
+             if (UCSI_CONSTAT(con, PWR_OPMODE) == UCSI_CONSTAT_PWR_OPMODE_PD) {
                  ucsi_partner_task(con, ucsi_register_partner_pdos, 1, HZ);
                  ucsi_partner_task(con, ucsi_check_connector_capability, 1, HZ);
              }
···
          }
      }

-     if (con->status.change & UCSI_CONSTAT_POWER_OPMODE_CHANGE ||
-         con->status.change & UCSI_CONSTAT_POWER_LEVEL_CHANGE)
+     if (change & (UCSI_CONSTAT_POWER_OPMODE_CHANGE | UCSI_CONSTAT_POWER_LEVEL_CHANGE))
          ucsi_pwr_opmode_change(con);

-     if (con->partner && con->status.change & UCSI_CONSTAT_PARTNER_CHANGE) {
+     if (con->partner && (change & UCSI_CONSTAT_PARTNER_CHANGE)) {
          ucsi_partner_change(con);

          /* Complete pending data role swap */
···
          complete(&con->complete);
      }

-     if (con->status.change & UCSI_CONSTAT_CAM_CHANGE)
+     if (change & UCSI_CONSTAT_CAM_CHANGE)
          ucsi_partner_task(con, ucsi_check_altmodes, 1, HZ);

-     if (con->status.change & UCSI_CONSTAT_BC_CHANGE)
+     if (change & UCSI_CONSTAT_BC_CHANGE)
          ucsi_port_psy_changed(con);

  out_unlock:
···
          goto out_unlock;
      }

-     partner_type = UCSI_CONSTAT_PARTNER_TYPE(con->status.flags);
+     partner_type = UCSI_CONSTAT(con, PARTNER_TYPE);
      if ((partner_type == UCSI_CONSTAT_PARTNER_TYPE_DFP &&
           role == TYPEC_DEVICE) ||
          (partner_type == UCSI_CONSTAT_PARTNER_TYPE_UFP &&
···
          goto out_unlock;
      }

-     cur_role = !!(con->status.flags & UCSI_CONSTAT_PWR_DIR);
+     cur_role = UCSI_CONSTAT(con, PWR_DIR);

      if (cur_role == role)
          goto out_unlock;
···
      mutex_lock(&con->lock);

      /* Something has gone wrong while swapping the role */
-     if (UCSI_CONSTAT_PWR_OPMODE(con->status.flags) !=
-         UCSI_CONSTAT_PWR_OPMODE_PD) {
+     if (UCSI_CONSTAT(con, PWR_OPMODE) != UCSI_CONSTAT_PWR_OPMODE_PD) {
          ucsi_reset_connector(con, true);
          ret = -EPROTO;
      }
···
      if (ret < 0)
          goto out_unlock;

-     if (con->cap.op_mode & UCSI_CONCAP_OPMODE_DRP)
+     if (UCSI_CONCAP(con, OPMODE_DRP))
          cap->data = TYPEC_PORT_DRD;
-     else if (con->cap.op_mode & UCSI_CONCAP_OPMODE_DFP)
+     else if (UCSI_CONCAP(con, OPMODE_DFP))
          cap->data = TYPEC_PORT_DFP;
-     else if (con->cap.op_mode & UCSI_CONCAP_OPMODE_UFP)
+     else if (UCSI_CONCAP(con, OPMODE_UFP))
          cap->data = TYPEC_PORT_UFP;

-     if ((con->cap.flags & UCSI_CONCAP_FLAG_PROVIDER) &&
-         (con->cap.flags & UCSI_CONCAP_FLAG_CONSUMER))
+     if (UCSI_CONCAP(con, PROVIDER) && UCSI_CONCAP(con, CONSUMER))
          cap->type = TYPEC_PORT_DRP;
-     else if (con->cap.flags & UCSI_CONCAP_FLAG_PROVIDER)
+     else if (UCSI_CONCAP(con, PROVIDER))
          cap->type = TYPEC_PORT_SRC;
-     else if (con->cap.flags & UCSI_CONCAP_FLAG_CONSUMER)
+     else if (UCSI_CONCAP(con, CONSUMER))
          cap->type = TYPEC_PORT_SNK;

      cap->revision = ucsi->cap.typec_version;
···
      cap->svdm_version = SVDM_VER_2_0;
      cap->prefer_role = TYPEC_NO_PREFERRED_ROLE;

-     if (con->cap.op_mode & UCSI_CONCAP_OPMODE_AUDIO_ACCESSORY)
+     if (UCSI_CONCAP(con, OPMODE_AUDIO_ACCESSORY))
          *accessory++ = TYPEC_ACCESSORY_AUDIO;
-     if (con->cap.op_mode & UCSI_CONCAP_OPMODE_DEBUG_ACCESSORY)
+     if (UCSI_CONCAP(con, OPMODE_DEBUG_ACCESSORY))
          *accessory = TYPEC_ACCESSORY_DEBUG;
+
+     if (UCSI_CONCAP_USB2_SUPPORT(con))
+         cap->usb_capability |= USB_CAPABILITY_USB2;
+     if (UCSI_CONCAP_USB3_SUPPORT(con))
+         cap->usb_capability |= USB_CAPABILITY_USB3;
+     if (UCSI_CONCAP_USB4_SUPPORT(con))
+         cap->usb_capability |= USB_CAPABILITY_USB4;

      cap->driver_data = con;
      cap->ops = &ucsi_ops;
···
      }

      /* Get the status */
-     command = UCSI_GET_CONNECTOR_STATUS | UCSI_CONNECTOR_NUMBER(con->num);
-     ret = ucsi_send_command(ucsi, command, &con->status, sizeof(con->status));
-     if (ret < 0) {
+     ret = ucsi_get_connector_status(con, false);
+     if (ret) {
          dev_err(ucsi->dev, "con%d: failed to get status\n", con->num);
-         ret = 0;
          goto out;
      }
-     ret = 0; /* ucsi_send_command() returns length on success */

      if (ucsi->ops->connector_status)
          ucsi->ops->connector_status(con);

-     switch (UCSI_CONSTAT_PARTNER_TYPE(con->status.flags)) {
+     switch (UCSI_CONSTAT(con, PARTNER_TYPE)) {
      case UCSI_CONSTAT_PARTNER_TYPE_UFP:
      case UCSI_CONSTAT_PARTNER_TYPE_CABLE_AND_UFP:
          u_role = USB_ROLE_HOST;
···
      }

      /* Check if there is already something connected */
-     if (con->status.flags & UCSI_CONSTAT_CONNECTED) {
-         typec_set_pwr_role(con->port,
-                            !!(con->status.flags & UCSI_CONSTAT_PWR_DIR));
+     if (UCSI_CONSTAT(con, CONNECTED)) {
+         typec_set_pwr_role(con->port, UCSI_CONSTAT(con, PWR_DIR));
          ucsi_register_partner(con);
          ucsi_pwr_opmode_change(con);
          ucsi_port_psy_changed(con);
···
      }

      /* Only notify USB controller if partner supports USB data */
-     if (!(UCSI_CONSTAT_PARTNER_FLAGS(con->status.flags) & UCSI_CONSTAT_PARTNER_FLAG_USB))
+     if (!(UCSI_CONSTAT(con, PARTNER_FLAG_USB)))
          u_role = USB_ROLE_NONE;

      ret = usb_role_switch_set_role(con->usb_role_sw, u_role);
···
          ret = 0;
      }

-     if (con->partner &&
-         UCSI_CONSTAT_PWR_OPMODE(con->status.flags) ==
-         UCSI_CONSTAT_PWR_OPMODE_PD) {
+     if (con->partner && UCSI_CONSTAT(con, PWR_OPMODE) == UCSI_CONSTAT_PWR_OPMODE_PD) {
          ucsi_register_device_pdos(con);
          ucsi_get_src_pdos(con);
          ucsi_check_altmodes(con);
          ucsi_check_connector_capability(con);
      }

-     trace_ucsi_register_port(con->num, &con->status);
+     trace_ucsi_register_port(con->num, con);

  out:
      fwnode_handle_put(cap->fwnode);
···
      /* Get PPM capabilities */
      command = UCSI_GET_CAPABILITY;
-     ret = ucsi_send_command(ucsi, command, &ucsi->cap, sizeof(ucsi->cap));
+     ret = ucsi_send_command(ucsi, command, &ucsi->cap,
+                             BITS_TO_BYTES(UCSI_GET_CAPABILITY_SIZE));
      if (ret < 0)
          goto err_reset;
+155 -90
drivers/usb/typec/ucsi/ucsi.h
···
 #define __DRIVER_USB_TYPEC_UCSI_H
 
 #include <linux/bitops.h>
+#include <linux/bitmap.h>
 #include <linux/completion.h>
 #include <linux/device.h>
 #include <linux/power_supply.h>
···
 /* -------------------------------------------------------------------------- */
 
 /* Commands */
-#define UCSI_PPM_RESET			0x01
-#define UCSI_CANCEL			0x02
-#define UCSI_CONNECTOR_RESET		0x03
-#define UCSI_ACK_CC_CI			0x04
-#define UCSI_SET_NOTIFICATION_ENABLE	0x05
-#define UCSI_GET_CAPABILITY		0x06
-#define UCSI_GET_CONNECTOR_CAPABILITY	0x07
-#define UCSI_SET_UOM			0x08
-#define UCSI_SET_UOR			0x09
-#define UCSI_SET_PDM			0x0a
-#define UCSI_SET_PDR			0x0b
-#define UCSI_GET_ALTERNATE_MODES	0x0c
-#define UCSI_GET_CAM_SUPPORTED		0x0d
-#define UCSI_GET_CURRENT_CAM		0x0e
-#define UCSI_SET_NEW_CAM		0x0f
-#define UCSI_GET_PDOS			0x10
-#define UCSI_GET_CABLE_PROPERTY		0x11
-#define UCSI_GET_CONNECTOR_STATUS	0x12
-#define UCSI_GET_ERROR_STATUS		0x13
-#define UCSI_GET_PD_MESSAGE		0x15
+#define UCSI_PPM_RESET				0x01
+#define UCSI_CANCEL				0x02
+#define UCSI_CONNECTOR_RESET			0x03
+#define UCSI_ACK_CC_CI				0x04
+#define UCSI_SET_NOTIFICATION_ENABLE		0x05
+#define UCSI_GET_CAPABILITY			0x06
+#define UCSI_GET_CAPABILITY_SIZE		128
+#define UCSI_GET_CONNECTOR_CAPABILITY		0x07
+#define UCSI_GET_CONNECTOR_CAPABILITY_SIZE	32
+#define UCSI_SET_UOM				0x08
+#define UCSI_SET_UOR				0x09
+#define UCSI_SET_PDM				0x0a
+#define UCSI_SET_PDR				0x0b
+#define UCSI_GET_ALTERNATE_MODES		0x0c
+#define UCSI_GET_CAM_SUPPORTED			0x0d
+#define UCSI_GET_CURRENT_CAM			0x0e
+#define UCSI_SET_NEW_CAM			0x0f
+#define UCSI_GET_PDOS				0x10
+#define UCSI_GET_CABLE_PROPERTY			0x11
+#define UCSI_GET_CABLE_PROPERTY_SIZE		64
+#define UCSI_GET_CONNECTOR_STATUS		0x12
+#define UCSI_GET_CONNECTOR_STATUS_SIZE		152
+#define UCSI_GET_ERROR_STATUS			0x13
+#define UCSI_GET_PD_MESSAGE			0x15
+#define UCSI_SET_SINK_PATH			0x1c
 
 #define UCSI_CONNECTOR_NUMBER(_num_)		((u64)(_num_) << 16)
 #define UCSI_COMMAND(_cmd_)			((_cmd_) & 0xff)
···
 /* CONNECTOR_RESET command bits */
 #define UCSI_CONNECTOR_RESET_HARD_VER_1_0	BIT(23) /* Deprecated in v1.1 */
 #define UCSI_CONNECTOR_RESET_DATA_VER_2_0	BIT(23) /* Redefined in v2.0 */
-
 
 /* ACK_CC_CI bits */
 #define UCSI_ACK_CONNECTOR_CHANGE		BIT(16)
···
 	u16 typec_version;
 } __packed;
 
-/* Data structure filled by PPM in response to GET_CONNECTOR_CAPABILITY cmd. */
-struct ucsi_connector_capability {
-	u8 op_mode;
-#define UCSI_CONCAP_OPMODE_DFP			BIT(0)
-#define UCSI_CONCAP_OPMODE_UFP			BIT(1)
-#define UCSI_CONCAP_OPMODE_DRP			BIT(2)
-#define UCSI_CONCAP_OPMODE_AUDIO_ACCESSORY	BIT(3)
-#define UCSI_CONCAP_OPMODE_DEBUG_ACCESSORY	BIT(4)
-#define UCSI_CONCAP_OPMODE_USB2			BIT(5)
-#define UCSI_CONCAP_OPMODE_USB3			BIT(6)
-#define UCSI_CONCAP_OPMODE_ALT_MODE		BIT(7)
-	u32 flags;
-#define UCSI_CONCAP_FLAG_PROVIDER		BIT(0)
-#define UCSI_CONCAP_FLAG_CONSUMER		BIT(1)
-#define UCSI_CONCAP_FLAG_SWAP_TO_DFP		BIT(2)
-#define UCSI_CONCAP_FLAG_SWAP_TO_UFP		BIT(3)
-#define UCSI_CONCAP_FLAG_SWAP_TO_SRC		BIT(4)
-#define UCSI_CONCAP_FLAG_SWAP_TO_SINK		BIT(5)
-#define UCSI_CONCAP_FLAG_EX_OP_MODE(_f_) \
-	(((_f_) & GENMASK(13, 6)) >> 6)
-#define UCSI_CONCAP_EX_OP_MODE_USB4_GEN2	BIT(0)
-#define UCSI_CONCAP_EX_OP_MODE_EPR_SRC		BIT(1)
-#define UCSI_CONCAP_EX_OP_MODE_EPR_SINK		BIT(2)
-#define UCSI_CONCAP_EX_OP_MODE_USB4_GEN3	BIT(3)
-#define UCSI_CONCAP_EX_OP_MODE_USB4_GEN4	BIT(4)
-#define UCSI_CONCAP_FLAG_MISC_CAPS(_f_) \
-	(((_f_) & GENMASK(17, 14)) >> 14)
-#define UCSI_CONCAP_MISC_CAP_FW_UPDATE		BIT(0)
-#define UCSI_CONCAP_MISC_CAP_SECURITY		BIT(1)
-#define UCSI_CONCAP_FLAG_REV_CURR_PROT_SUPPORT	BIT(18)
-#define UCSI_CONCAP_FLAG_PARTNER_PD_MAJOR_REV(_f_) \
-	(((_f_) & GENMASK(20, 19)) >> 19)
-#define UCSI_CONCAP_FLAG_PARTNER_PD_MAJOR_REV_AS_BCD(_f_) \
-	UCSI_SPEC_REVISION_TO_BCD(UCSI_CONCAP_FLAG_PARTNER_PD_MAJOR_REV(_f_))
-} __packed;
-
 struct ucsi_altmode {
 	u16 svid;
 	u32 mid;
···
 	u8 latency;
 } __packed;
 
-/* Data structure filled by PPM in response to GET_CONNECTOR_STATUS command. */
-struct ucsi_connector_status {
-	u16 change;
+/* Get Connector Capability Fields. */
+#define UCSI_CONCAP_OPMODE			UCSI_DECLARE_BITFIELD(0, 0, 8)
+#define UCSI_CONCAP_OPMODE_DFP			UCSI_DECLARE_BITFIELD(0, 0, 1)
+#define UCSI_CONCAP_OPMODE_UFP			UCSI_DECLARE_BITFIELD(0, 1, 1)
+#define UCSI_CONCAP_OPMODE_DRP			UCSI_DECLARE_BITFIELD(0, 2, 1)
+#define UCSI_CONCAP_OPMODE_AUDIO_ACCESSORY	UCSI_DECLARE_BITFIELD(0, 3, 1)
+#define UCSI_CONCAP_OPMODE_DEBUG_ACCESSORY	UCSI_DECLARE_BITFIELD(0, 4, 1)
+#define UCSI_CONCAP_OPMODE_USB2			UCSI_DECLARE_BITFIELD(0, 5, 1)
+#define UCSI_CONCAP_OPMODE_USB3			UCSI_DECLARE_BITFIELD(0, 6, 1)
+#define UCSI_CONCAP_OPMODE_ALT_MODE		UCSI_DECLARE_BITFIELD(0, 7, 1)
+#define UCSI_CONCAP_PROVIDER			UCSI_DECLARE_BITFIELD(0, 8, 1)
+#define UCSI_CONCAP_CONSUMER			UCSI_DECLARE_BITFIELD(0, 9, 1)
+#define UCSI_CONCAP_SWAP_TO_DFP_V1_1		UCSI_DECLARE_BITFIELD_V1_1(10, 1)
+#define UCSI_CONCAP_SWAP_TO_UFP_V1_1		UCSI_DECLARE_BITFIELD_V1_1(11, 1)
+#define UCSI_CONCAP_SWAP_TO_SRC_V1_1		UCSI_DECLARE_BITFIELD_V1_1(12, 1)
+#define UCSI_CONCAP_SWAP_TO_SNK_V1_1		UCSI_DECLARE_BITFIELD_V1_1(13, 1)
+#define UCSI_CONCAP_EXT_OPMODE_V2_0		UCSI_DECLARE_BITFIELD_V2_0(14, 8)
+#define UCSI_CONCAP_EXT_OPMODE_USB4_GEN2_V2_0	UCSI_DECLARE_BITFIELD_V2_0(14, 1)
+#define UCSI_CONCAP_EXT_OPMODE_EPR_SRC_V2_0	UCSI_DECLARE_BITFIELD_V2_0(15, 1)
+#define UCSI_CONCAP_EXT_OPMODE_EPR_SINK_V2_0	UCSI_DECLARE_BITFIELD_V2_0(16, 1)
+#define UCSI_CONCAP_EXT_OPMODE_USB4_GEN3_V2_0	UCSI_DECLARE_BITFIELD_V2_0(17, 1)
+#define UCSI_CONCAP_EXT_OPMODE_USB4_GEN4_V2_0	UCSI_DECLARE_BITFIELD_V2_0(18, 1)
+#define UCSI_CONCAP_MISC_V2_0			UCSI_DECLARE_BITFIELD_V2_0(22, 4)
+#define UCSI_CONCAP_MISC_FW_UPDATE_V2_0		UCSI_DECLARE_BITFIELD_V2_0(22, 1)
+#define UCSI_CONCAP_MISC_SECURITY_V2_0		UCSI_DECLARE_BITFIELD_V2_0(23, 1)
+#define UCSI_CONCAP_REV_CURR_PROT_SUPPORT_V2_0	UCSI_DECLARE_BITFIELD_V2_0(26, 1)
+#define UCSI_CONCAP_PARTNER_PD_REVISION_V2_1	UCSI_DECLARE_BITFIELD_V2_1(27, 2)
+
+/* Helpers for USB capability checks. */
+#define UCSI_CONCAP_USB2_SUPPORT(_con_)	UCSI_CONCAP((_con_), OPMODE_USB2)
+#define UCSI_CONCAP_USB3_SUPPORT(_con_)	UCSI_CONCAP((_con_), OPMODE_USB3)
+#define UCSI_CONCAP_USB4_SUPPORT(_con_)					\
+	((_con_)->ucsi->version >= UCSI_VERSION_2_0 &&			\
+	 (UCSI_CONCAP((_con_), EXT_OPMODE_USB4_GEN2_V2_0) |		\
+	  UCSI_CONCAP((_con_), EXT_OPMODE_USB4_GEN3_V2_0) |		\
+	  UCSI_CONCAP((_con_), EXT_OPMODE_USB4_GEN4_V2_0)))
+
+/* Get Connector Status Fields. */
+#define UCSI_CONSTAT_CHANGE			UCSI_DECLARE_BITFIELD(0, 0, 16)
+#define UCSI_CONSTAT_PWR_OPMODE			UCSI_DECLARE_BITFIELD(0, 16, 3)
+#define UCSI_CONSTAT_PWR_OPMODE_NONE		0
+#define UCSI_CONSTAT_PWR_OPMODE_DEFAULT		1
+#define UCSI_CONSTAT_PWR_OPMODE_BC		2
+#define UCSI_CONSTAT_PWR_OPMODE_PD		3
+#define UCSI_CONSTAT_PWR_OPMODE_TYPEC1_5	4
+#define UCSI_CONSTAT_PWR_OPMODE_TYPEC3_0	5
+#define UCSI_CONSTAT_CONNECTED			UCSI_DECLARE_BITFIELD(0, 19, 1)
+#define UCSI_CONSTAT_PWR_DIR			UCSI_DECLARE_BITFIELD(0, 20, 1)
+#define UCSI_CONSTAT_PARTNER_FLAGS		UCSI_DECLARE_BITFIELD(0, 21, 8)
+#define UCSI_CONSTAT_PARTNER_FLAG_USB		UCSI_DECLARE_BITFIELD(0, 21, 1)
+#define UCSI_CONSTAT_PARTNER_FLAG_ALT_MODE	UCSI_DECLARE_BITFIELD(0, 22, 1)
+#define UCSI_CONSTAT_PARTNER_FLAG_USB4_GEN3	UCSI_DECLARE_BITFIELD(0, 23, 1)
+#define UCSI_CONSTAT_PARTNER_FLAG_USB4_GEN4	UCSI_DECLARE_BITFIELD(0, 24, 1)
+#define UCSI_CONSTAT_PARTNER_TYPE		UCSI_DECLARE_BITFIELD(0, 29, 3)
+#define UCSI_CONSTAT_PARTNER_TYPE_DFP		1
+#define UCSI_CONSTAT_PARTNER_TYPE_UFP		2
+#define UCSI_CONSTAT_PARTNER_TYPE_CABLE		3 /* Powered Cable */
+#define UCSI_CONSTAT_PARTNER_TYPE_CABLE_AND_UFP	4 /* Powered Cable */
+#define UCSI_CONSTAT_PARTNER_TYPE_DEBUG		5
+#define UCSI_CONSTAT_PARTNER_TYPE_AUDIO		6
+#define UCSI_CONSTAT_RDO			UCSI_DECLARE_BITFIELD(0, 32, 32)
+#define UCSI_CONSTAT_BC_STATUS			UCSI_DECLARE_BITFIELD(0, 64, 2)
+#define UCSI_CONSTAT_BC_NOT_CHARGING		0
+#define UCSI_CONSTAT_BC_NOMINAL_CHARGING	1
+#define UCSI_CONSTAT_BC_SLOW_CHARGING		2
+#define UCSI_CONSTAT_BC_TRICKLE_CHARGING	3
+#define UCSI_CONSTAT_PD_VERSION_V1_2		UCSI_DECLARE_BITFIELD_V1_2(70, 16)
+
+/* Connector Status Change Bits. */
 #define UCSI_CONSTAT_EXT_SUPPLY_CHANGE		BIT(1)
 #define UCSI_CONSTAT_POWER_OPMODE_CHANGE	BIT(2)
 #define UCSI_CONSTAT_PDOS_CHANGE		BIT(5)
···
 #define UCSI_CONSTAT_POWER_DIR_CHANGE		BIT(12)
 #define UCSI_CONSTAT_CONNECT_CHANGE		BIT(14)
 #define UCSI_CONSTAT_ERROR			BIT(15)
-	u16 flags;
-#define UCSI_CONSTAT_PWR_OPMODE(_f_)		((_f_) & GENMASK(2, 0))
-#define UCSI_CONSTAT_PWR_OPMODE_NONE		0
-#define UCSI_CONSTAT_PWR_OPMODE_DEFAULT		1
-#define UCSI_CONSTAT_PWR_OPMODE_BC		2
-#define UCSI_CONSTAT_PWR_OPMODE_PD		3
-#define UCSI_CONSTAT_PWR_OPMODE_TYPEC1_5	4
-#define UCSI_CONSTAT_PWR_OPMODE_TYPEC3_0	5
-#define UCSI_CONSTAT_CONNECTED			BIT(3)
-#define UCSI_CONSTAT_PWR_DIR			BIT(4)
-#define UCSI_CONSTAT_PARTNER_FLAGS(_f_)		(((_f_) & GENMASK(12, 5)) >> 5)
-#define UCSI_CONSTAT_PARTNER_FLAG_USB		1
-#define UCSI_CONSTAT_PARTNER_FLAG_ALT_MODE	2
-#define UCSI_CONSTAT_PARTNER_TYPE(_f_)		(((_f_) & GENMASK(15, 13)) >> 13)
-#define UCSI_CONSTAT_PARTNER_TYPE_DFP		1
-#define UCSI_CONSTAT_PARTNER_TYPE_UFP		2
-#define UCSI_CONSTAT_PARTNER_TYPE_CABLE		3 /* Powered Cable */
-#define UCSI_CONSTAT_PARTNER_TYPE_CABLE_AND_UFP	4 /* Powered Cable */
-#define UCSI_CONSTAT_PARTNER_TYPE_DEBUG		5
-#define UCSI_CONSTAT_PARTNER_TYPE_AUDIO		6
-	u32 request_data_obj;
 
-	u8 pwr_status;
-#define UCSI_CONSTAT_BC_STATUS(_p_)		((_p_) & GENMASK(1, 0))
-#define UCSI_CONSTAT_BC_NOT_CHARGING		0
-#define UCSI_CONSTAT_BC_NOMINAL_CHARGING	1
-#define UCSI_CONSTAT_BC_SLOW_CHARGING		2
-#define UCSI_CONSTAT_BC_TRICKLE_CHARGING	3
-} __packed;
+#define UCSI_DECLARE_BITFIELD_V1_1(_offset_, _size_)	\
+	UCSI_DECLARE_BITFIELD(UCSI_VERSION_1_1, (_offset_), (_size_))
+#define UCSI_DECLARE_BITFIELD_V1_2(_offset_, _size_)	\
+	UCSI_DECLARE_BITFIELD(UCSI_VERSION_1_2, (_offset_), (_size_))
+#define UCSI_DECLARE_BITFIELD_V2_0(_offset_, _size_)	\
+	UCSI_DECLARE_BITFIELD(UCSI_VERSION_2_0, (_offset_), (_size_))
+#define UCSI_DECLARE_BITFIELD_V2_1(_offset_, _size_)	\
+	UCSI_DECLARE_BITFIELD(UCSI_VERSION_2_1, (_offset_), (_size_))
+#define UCSI_DECLARE_BITFIELD_V3_0(_offset_, _size_)	\
+	UCSI_DECLARE_BITFIELD(UCSI_VERSION_3_0, (_offset_), (_size_))
+
+#define UCSI_DECLARE_BITFIELD(_ver_, _offset_, _size_)	\
+	(struct ucsi_bitfield) {			\
+		.version = _ver_,			\
+		.offset = _offset_,			\
+		.size = _size_,				\
+	}
+
+struct ucsi_bitfield {
+	const u16 version;
+	const u8 offset;
+	const u8 size;
+};
+
+/**
+ * ucsi_bitfield_read - Read a field from UCSI command response
+ * @_map_: UCSI command response
+ * @_field_: The field offset in the response data structure
+ * @_ver_: UCSI version of the interface
+ *
+ * Reads the fields in the command responses by first checking that the field is
+ * valid with the UCSI interface version that is used in the system.
+ * Every @_field_ carries the minimum UCSI version where it was introduced; if
+ * the interface version @_ver_ is older than that, a warning is generated.
+ *
+ * Caveats:
+ * - Removed fields are not checked - the field version is just the minimum
+ *   UCSI version.
+ *
+ * Returns the value of @_field_, or 0 when the UCSI interface is older than
+ * the version of the field.
+ */
+#define ucsi_bitfield_read(_map_, _field_, _ver_)			\
+	({								\
+		struct ucsi_bitfield f = (_field_);			\
+		WARN((_ver_) < f.version,				\
+		     "Access to unsupported field at offset 0x%x (need version %04x)", \
+		     f.offset, f.version) ? 0 :				\
+			bitmap_read((_map_), f.offset, f.size);		\
+	})
+
+/* Helpers to access cached command responses. */
+#define UCSI_CONCAP(_con_, _field_)					\
+	ucsi_bitfield_read((_con_)->cap, UCSI_CONCAP_##_field_, (_con_)->ucsi->version)
+
+#define UCSI_CONSTAT(_con_, _field_)					\
+	ucsi_bitfield_read((_con_)->status, UCSI_CONSTAT_##_field_, (_con_)->ucsi->version)
 
 /* -------------------------------------------------------------------------- */
···
 
 	struct typec_capability typec_cap;
 
-	struct ucsi_connector_status status;
-	struct ucsi_connector_capability cap;
+	/* Cached command responses. */
+	DECLARE_BITMAP(cap, UCSI_GET_CONNECTOR_CAPABILITY_SIZE);
+	DECLARE_BITMAP(status, UCSI_GET_CONNECTOR_STATUS_SIZE);
+
 	struct power_supply *psy;
 	struct power_supply_desc psy_desc;
 	u32 rdo;
+8 -57
drivers/usb/typec/ucsi/ucsi_acpi.c
···
 	struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
 	int ret;
 
-	ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
-	if (ret)
-		return ret;
+	if (UCSI_COMMAND(ua->cmd) == UCSI_PPM_RESET) {
+		ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
+		if (ret)
+			return ret;
+	}
 
 	memcpy(cci, ua->base + UCSI_CCI, sizeof(*cci));
 
···
 static int ucsi_acpi_read_message_in(struct ucsi *ucsi, void *val, size_t val_len)
 {
 	struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
-	int ret;
-
-	ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
-	if (ret)
-		return ret;
 
 	memcpy(val, ua->base + UCSI_MESSAGE_IN, val_len);
 
···
 	.async_control = ucsi_acpi_async_control
 };
 
-static int
-ucsi_zenbook_read_cci(struct ucsi *ucsi, u32 *cci)
-{
-	struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
-	int ret;
-
-	if (UCSI_COMMAND(ua->cmd) == UCSI_PPM_RESET) {
-		ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
-		if (ret)
-			return ret;
-	}
-
-	memcpy(cci, ua->base + UCSI_CCI, sizeof(*cci));
-
-	return 0;
-}
-
-static int
-ucsi_zenbook_read_message_in(struct ucsi *ucsi, void *val, size_t val_len)
-{
-	struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
-
-	/* UCSI_MESSAGE_IN is never read for PPM_RESET, return stored data */
-	memcpy(val, ua->base + UCSI_MESSAGE_IN, val_len);
-
-	return 0;
-}
-
-static const struct ucsi_operations ucsi_zenbook_ops = {
-	.read_version = ucsi_acpi_read_version,
-	.read_cci = ucsi_zenbook_read_cci,
-	.read_message_in = ucsi_zenbook_read_message_in,
-	.sync_control = ucsi_sync_control_common,
-	.async_control = ucsi_acpi_async_control
-};
-
 static int ucsi_gram_read_message_in(struct ucsi *ucsi, void *val, size_t val_len)
 {
 	u16 bogus_change = UCSI_CONSTAT_POWER_LEVEL_CHANGE |
 			   UCSI_CONSTAT_PDOS_CHANGE;
 	struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
-	struct ucsi_connector_status *status;
 	int ret;
 
 	ret = ucsi_acpi_read_message_in(ucsi, val, val_len);
···
 
 	if (UCSI_COMMAND(ua->cmd) == UCSI_GET_CONNECTOR_STATUS &&
 	    ua->check_bogus_event) {
-		status = (struct ucsi_connector_status *)val;
-
 		/* Clear the bogus change */
-		if (status->change == bogus_change)
-			status->change = 0;
+		if (*(u16 *)val == bogus_change)
+			*(u16 *)val = 0;
 
 		ua->check_bogus_event = false;
 	}
···
 };
 
 static const struct dmi_system_id ucsi_acpi_quirks[] = {
-	{
-		.matches = {
-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
-			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),
-		},
-		.driver_data = (void *)&ucsi_zenbook_ops,
-	},
 	{
 		.matches = {
 			DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"),
···
 		.acpi_match_table = ACPI_PTR(ucsi_acpi_match),
 	},
 	.probe = ucsi_acpi_probe,
-	.remove_new = ucsi_acpi_remove,
+	.remove = ucsi_acpi_remove,
 };
 
 module_platform_driver(ucsi_acpi_platform_driver);
+5
drivers/usb/typec/ucsi/ucsi_ccg.c
···
 	    uc->has_multiple_dp) {
 		con_index = (uc->last_cmd_sent >> 16) &
 			    UCSI_CMD_CONNECTOR_MASK;
+		if (con_index == 0) {
+			ret = -EINVAL;
+			goto unlock;
+		}
 		con = &uc->ucsi->connector[con_index - 1];
 		ucsi_ccg_update_set_new_cam_cmd(uc, con, &command);
 	}
···
 	ret = ucsi_sync_control_common(ucsi, command);
 
 	pm_runtime_put_sync(uc->dev);
+unlock:
 	mutex_unlock(&uc->lock);
 
 	return ret;
+9 -12
drivers/usb/typec/ucsi/ucsi_glink.c
···
 static void pmic_glink_ucsi_update_connector(struct ucsi_connector *con)
 {
 	struct pmic_glink_ucsi *ucsi = ucsi_get_drvdata(con->ucsi);
-	int i;
 
-	for (i = 0; i < PMIC_GLINK_MAX_PORTS; i++) {
-		if (ucsi->port_orientation[i])
-			con->typec_cap.orientation_aware = true;
-	}
+	if (con->num > PMIC_GLINK_MAX_PORTS ||
+	    !ucsi->port_orientation[con->num - 1])
+		return;
+
+	con->typec_cap.orientation_aware = true;
 }
 
 static void pmic_glink_ucsi_connector_status(struct ucsi_connector *con)
···
 	struct pmic_glink_ucsi *ucsi = ucsi_get_drvdata(con->ucsi);
 	int orientation;
 
-	if (con->num >= PMIC_GLINK_MAX_PORTS ||
+	if (con->num > PMIC_GLINK_MAX_PORTS ||
 	    !ucsi->port_orientation[con->num - 1])
 		return;
 
···
 	struct pmic_glink_ucsi *ucsi;
 	struct device *dev = &adev->dev;
 	const struct of_device_id *match;
-	struct fwnode_handle *fwnode;
 	int ret;
 
 	ucsi = devm_kzalloc(dev, sizeof(*ucsi), GFP_KERNEL);
···
 
 	ucsi_set_drvdata(ucsi->ucsi, ucsi);
 
-	device_for_each_child_node(dev, fwnode) {
+	device_for_each_child_node_scoped(dev, fwnode) {
 		struct gpio_desc *desc;
 		u32 port;
 
 		ret = fwnode_property_read_u32(fwnode, "reg", &port);
 		if (ret < 0) {
 			dev_err(dev, "missing reg property of %pOFn\n", fwnode);
-			fwnode_handle_put(fwnode);
 			return ret;
 		}
···
 		if (!desc)
 			continue;
 
-		if (IS_ERR(desc)) {
-			fwnode_handle_put(fwnode);
+		if (IS_ERR(desc))
 			return dev_err_probe(dev, PTR_ERR(desc),
 					     "unable to acquire orientation gpio\n");
-		}
+
 		ucsi->port_orientation[port] = desc;
 	}
+1 -1
drivers/usb/usbip/vhci_hcd.c
···
 
 static struct platform_driver vhci_driver = {
 	.probe	= vhci_hcd_probe,
-	.remove_new = vhci_hcd_remove,
+	.remove = vhci_hcd_remove,
 	.suspend = vhci_hcd_suspend,
 	.resume	= vhci_hcd_resume,
 	.driver	= {
+1 -1
drivers/usb/usbip/vudc_main.c
···
 
 static struct platform_driver vudc_driver = {
 	.probe = vudc_probe,
-	.remove_new = vudc_remove,
+	.remove = vudc_remove,
 	.driver = {
 		.name = GADGET_NAME,
 		.dev_groups = vudc_groups,
+4
include/linux/pci_ids.h
···
 #define PCI_CLASS_SERIAL_USB_OHCI	0x0c0310
 #define PCI_CLASS_SERIAL_USB_EHCI	0x0c0320
 #define PCI_CLASS_SERIAL_USB_XHCI	0x0c0330
+#define PCI_CLASS_SERIAL_USB_CDNS	0x0c0380
 #define PCI_CLASS_SERIAL_USB_DEVICE	0x0c03fe
 #define PCI_CLASS_SERIAL_FIBER		0x0c04
 #define PCI_CLASS_SERIAL_SMBUS		0x0c05
···
 #define PCI_VENDOR_ID_QCOM		0x17cb
 
 #define PCI_VENDOR_ID_CDNS		0x17cd
+#define PCI_DEVICE_ID_CDNS_USBSS	0x0100
+#define PCI_DEVICE_ID_CDNS_USB		0x0120
+#define PCI_DEVICE_ID_CDNS_USBSSP	0x0200
 
 #define PCI_VENDOR_ID_ARECA		0x17d3
 #define PCI_DEVICE_ID_ARECA_1110	0x1110
+3 -4
include/linux/usb.h
···
 /* ----------------------------------------------------------------------- */
 
 /* Stuff for dynamic usb ids */
+extern struct mutex usb_dynids_lock;
 struct usb_dynids {
-	spinlock_t lock;
 	struct list_head list;
 };
···
 	unsigned int disable_hub_initiated_lpm:1;
 	unsigned int soft_unbind:1;
 };
-#define	to_usb_driver(d) container_of(d, struct usb_driver, driver)
+#define	to_usb_driver(d) container_of_const(d, struct usb_driver, driver)
 
 /**
  * struct usb_device_driver - identifies USB device driver to usbcore
···
 	unsigned int supports_autosuspend:1;
 	unsigned int generic_subclass:1;
 };
-#define	to_usb_device_driver(d) container_of(d, struct usb_device_driver, \
-		driver)
+#define	to_usb_device_driver(d) container_of_const(d, struct usb_device_driver, driver)
 
 /**
  * struct usb_class_driver - identifies a USB driver that wants to use the USB major number
+1
include/linux/usb/chipidea.h
···
 #define CI_HDRC_PHY_VBUS_CONTROL	BIT(16)
 #define CI_HDRC_HAS_PORTSC_PEC_MISSED	BIT(17)
 #define CI_HDRC_FORCE_VBUS_ACTIVE_ALWAYS	BIT(18)
+#define CI_HDRC_HAS_SHORT_PKT_LIMIT	BIT(19)
 	enum usb_dr_mode	dr_mode;
 #define CI_HDRC_CONTROLLER_RESET_EVENT		0
 #define CI_HDRC_CONTROLLER_STOPPED_EVENT	1
+1 -1
include/linux/usb/storage.h
···
 	__le32 Signature;		/* contains 'USBC' */
 	__u32 Tag;			/* unique per command id */
 	__le32 DataTransferLength;	/* size of data */
-	__u8 Flags;			/* direction in bit 0 */
+	__u8 Flags;			/* direction in bit 7 */
 	__u8 Lun;			/* LUN normally 0 */
 	__u8 Length;			/* length of the CDB */
 	__u8 CDB[16];			/* max command */
+22
include/linux/usb/typec.h
···
 	TYPEC_ORIENTATION_REVERSE,
 };
 
+enum usb_mode {
+	USB_MODE_NONE,
+	USB_MODE_USB2,
+	USB_MODE_USB3,
+	USB_MODE_USB4
+};
+
+#define USB_CAPABILITY_USB2	BIT(0)
+#define USB_CAPABILITY_USB3	BIT(1)
+#define USB_CAPABILITY_USB4	BIT(2)
+
 /*
  * struct enter_usb_data - Enter_USB Message details
  * @eudo: Enter_USB Data Object
···
  * @accessory: Audio, Debug or none.
  * @identity: Discover Identity command data
  * @pd_revision: USB Power Delivery Specification Revision if supported
+ * @usb_capability: Supported USB Modes
  * @attach: Notification about attached USB device
  * @deattach: Notification about removed USB device
  *
···
 	enum typec_accessory accessory;
 	struct usb_pd_identity *identity;
 	u16 pd_revision; /* 0300H = "3.0" */
+	u8 usb_capability;
 
 	void (*attach)(struct typec_partner *partner, struct device *dev);
 	void (*deattach)(struct typec_partner *partner, struct device *dev);
···
  * @port_type_set: Set port type
  * @pd_get: Get available USB Power Delivery Capabilities.
  * @pd_set: Set USB Power Delivery Capabilities.
+ * @default_usb_mode_set: USB Mode to be used by default with Enter_USB Message
+ * @enter_usb_mode: Change the active USB Mode
  */
 struct typec_operations {
 	int (*try_role)(struct typec_port *port, int role);
···
 			     enum typec_port_type type);
 	struct usb_power_delivery **(*pd_get)(struct typec_port *port);
 	int (*pd_set)(struct typec_port *port, struct usb_power_delivery *pd);
+	int (*default_usb_mode_set)(struct typec_port *port, enum usb_mode mode);
+	int (*enter_usb_mode)(struct typec_port *port, enum usb_mode mode);
 };
 
 enum usb_pd_svdm_ver {
···
  * @svdm_version: USB PD Structured VDM version if supported
  * @prefer_role: Initial role preference (DRP ports).
  * @accessory: Supported Accessory Modes
+ * @usb_capability: Supported USB Modes
  * @fwnode: Optional fwnode of the port
  * @driver_data: Private pointer for driver specific info
  * @pd: Optional USB Power Delivery Support
···
 	int prefer_role;
 	enum typec_accessory accessory[TYPEC_MAX_ACCESSORY];
 	unsigned int orientation_aware:1;
+	u8 usb_capability;
 
 	struct fwnode_handle *fwnode;
 	void *driver_data;
···
 int typec_port_set_usb_power_delivery(struct typec_port *port, struct usb_power_delivery *pd);
 int typec_partner_set_usb_power_delivery(struct typec_partner *partner,
 					 struct usb_power_delivery *pd);
+
+void typec_partner_set_usb_mode(struct typec_partner *partner, enum usb_mode usb_mode);
+void typec_port_set_usb_mode(struct typec_port *port, enum usb_mode mode);
 
 /**
  * struct typec_connector - Representation of Type-C port for external drivers
+58
include/uapi/linux/usb/video.h
··· 597 597 __le32 dwFrameInterval[n]; \ 598 598 } __attribute__ ((packed)) 599 599 600 + /* Frame Based Payload - 3.1.1. Frame Based Video Format Descriptor */ 601 + struct uvc_format_framebased { 602 + __u8 bLength; 603 + __u8 bDescriptorType; 604 + __u8 bDescriptorSubType; 605 + __u8 bFormatIndex; 606 + __u8 bNumFrameDescriptors; 607 + __u8 guidFormat[16]; 608 + __u8 bBitsPerPixel; 609 + __u8 bDefaultFrameIndex; 610 + __u8 bAspectRatioX; 611 + __u8 bAspectRatioY; 612 + __u8 bmInterfaceFlags; 613 + __u8 bCopyProtect; 614 + __u8 bVariableSize; 615 + } __attribute__((__packed__)); 616 + 617 + #define UVC_DT_FORMAT_FRAMEBASED_SIZE 28 618 + 619 + /* Frame Based Payload - 3.1.2. Frame Based Video Frame Descriptor */ 620 + struct uvc_frame_framebased { 621 + __u8 bLength; 622 + __u8 bDescriptorType; 623 + __u8 bDescriptorSubType; 624 + __u8 bFrameIndex; 625 + __u8 bmCapabilities; 626 + __u16 wWidth; 627 + __u16 wHeight; 628 + __u32 dwMinBitRate; 629 + __u32 dwMaxBitRate; 630 + __u32 dwDefaultFrameInterval; 631 + __u8 bFrameIntervalType; 632 + __u32 dwBytesPerLine; 633 + __u32 dwFrameInterval[]; 634 + } __attribute__((__packed__)); 635 + 636 + #define UVC_DT_FRAME_FRAMEBASED_SIZE(n) (26+4*(n)) 637 + 638 + #define UVC_FRAME_FRAMEBASED(n) \ 639 + uvc_frame_framebased_##n 640 + 641 + #define DECLARE_UVC_FRAME_FRAMEBASED(n) \ 642 + struct UVC_FRAME_FRAMEBASED(n) { \ 643 + __u8 bLength; \ 644 + __u8 bDescriptorType; \ 645 + __u8 bDescriptorSubType; \ 646 + __u8 bFrameIndex; \ 647 + __u8 bmCapabilities; \ 648 + __u16 wWidth; \ 649 + __u16 wHeight; \ 650 + __u32 dwMinBitRate; \ 651 + __u32 dwMaxBitRate; \ 652 + __u32 dwDefaultFrameInterval; \ 653 + __u8 bFrameIntervalType; \ 654 + __u32 dwBytesPerLine; \ 655 + __u32 dwFrameInterval[n]; \ 656 + } __attribute__ ((packed)) 657 + 600 658 #endif /* __LINUX_USB_VIDEO_H */ 601 659