USB/Thunderbolt patches for 5.13-rc1

Merge tag 'usb-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB and Thunderbolt updates from Greg KH:
 "Here is the big set of USB and Thunderbolt driver updates for
  5.13-rc1.

  Lots of little things in here, with loads of tiny fixes and cleanups
  over these drivers, as well as these "larger" changes:

   - thunderbolt updates and new features added

   - xhci driver updates, including splitting a mediatek-specific xhci
     driver out of the main xhci module to make it easier to work with
     (something I have been wanting for a while)

   - loads of typec feature additions and updates

   - dwc2 driver updates

   - dwc3 driver updates

   - gadget driver fixes and minor updates

   - loads of usb-serial cleanups and fixes and updates

   - usbip documentation updates and fixes

   - lots of other tiny USB driver updates

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'usb-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (371 commits)
  usb: Fix up movement of USB core kerneldoc location
  usb: dwc3: gadget: Handle DEV_TXF_FLUSH_BYPASS capability
  usb: dwc3: Capture new capability register GHWPARAMS9
  usb: gadget: prevent a ternary sign expansion bug
  usb: dwc3: core: Do core softreset when switch mode
  usb: dwc2: Get rid of useless error checks in suspend interrupt
  usb: dwc2: Update dwc2_handle_usb_suspend_intr function.
  usb: dwc2: Add exit hibernation mode before removing drive
  usb: dwc2: Add hibernation exiting flow by system resume
  usb: dwc2: Add hibernation entering flow by system suspend
  usb: dwc2: Allow exit hibernation in urb enqueue
  usb: dwc2: Move exit hibernation to dwc2_port_resume() function
  usb: dwc2: Move enter hibernation to dwc2_port_suspend() function
  usb: dwc2: Clear GINTSTS_RESTOREDONE bit after restore is generated.
  usb: dwc2: Clear fifo_map when resetting core.
  usb: dwc2: Allow exiting hibernation from gpwrdn rst detect
  usb: dwc2: Fix hibernation between host and device modes.
  usb: dwc2: Fix host mode hibernation exit with remote wakeup flow.
  usb: dwc2: Reset DEVADDR after exiting gadget hibernation.
  usb: dwc2: Update exit hibernation when port reset is asserted
  ...
Linus Torvalds 2021-04-26 11:32:23 -07:00
commit ef12441243
211 changed files with 7611 additions and 3448 deletions

@@ -1,31 +1,3 @@
What: /sys/bus/thunderbolt/devices/<xdomain>/rx_speed
Date: Feb 2021
KernelVersion: 5.11
Contact: Isaac Hazan <isaac.hazan@intel.com>
Description: This attribute reports the XDomain RX speed per lane.
All RX lanes run at the same speed.
What: /sys/bus/thunderbolt/devices/<xdomain>/rx_lanes
Date: Feb 2021
KernelVersion: 5.11
Contact: Isaac Hazan <isaac.hazan@intel.com>
Description: This attribute reports the number of RX lanes the XDomain
is using simultaneously through its upstream port.
What: /sys/bus/thunderbolt/devices/<xdomain>/tx_speed
Date: Feb 2021
KernelVersion: 5.11
Contact: Isaac Hazan <isaac.hazan@intel.com>
Description: This attribute reports the XDomain TX speed per lane.
All TX lanes run at the same speed.
What: /sys/bus/thunderbolt/devices/<xdomain>/tx_lanes
Date: Feb 2021
KernelVersion: 5.11
Contact: Isaac Hazan <isaac.hazan@intel.com>
Description: This attribute reports number of TX lanes the XDomain
is using simultaneously through its upstream port.
What: /sys/bus/thunderbolt/devices/.../domainX/boot_acl
Date: Jun 2018
KernelVersion: 4.17
@@ -162,6 +134,13 @@ Contact: thunderbolt-software@lists.01.org
Description: This attribute contains name of this device extracted from
the device DROM.
What: /sys/bus/thunderbolt/devices/.../maxhopid
Date: Jul 2021
KernelVersion: 5.13
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: Only set for XDomains. The maximum HopID the other host
supports as its input HopID.
What: /sys/bus/thunderbolt/devices/.../rx_speed
Date: Jan 2020
KernelVersion: 5.5
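
Since these are ordinary sysfs attributes, they can be read with plain file I/O; below is a minimal userspace sketch (the device path is a made-up example, not a fixed name):

    #include <stdio.h>

    int main(void)
    {
            /* Hypothetical XDomain device path; substitute a real one. */
            const char *path = "/sys/bus/thunderbolt/devices/0-1/maxhopid";
            FILE *f = fopen(path, "r");
            unsigned int maxhopid;

            if (!f || fscanf(f, "%u", &maxhopid) != 1) {
                    perror(path);
                    return 1;
            }
            fclose(f);
            printf("peer accepts input HopIDs up to %u\n", maxhopid);
            return 0;
    }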

@@ -197,6 +197,16 @@ properties:
$ref: /schemas/types.yaml#/definitions/uint32
enum: [1, 2, 3]
slow-charger-loop:
description: Allows PMIC charger loops which are slow (i.e. cannot meet the 15 ms deadline) to
still comply with pSnkStby, i.e. the maximum power that can be consumed by the sink while in
the Sink Standby state as defined in 7.4.2 Sink Electrical Parameters of USB Power Delivery
Specification Revision 3.0, Version 1.2. When the property is set, the port requests pSnkStby
(2.5 W - 5V@500mA) upon entering SNK_DISCOVERY (instead of the 3 A or 1.5 A Rp current
advertised during SNK_DISCOVERY), and the actual current limit after reception of PS_Ready
for a PD link or during SNK_READY for a non-PD link.
type: boolean
required:
- compatible

@@ -1,32 +1,56 @@
Xilinx SuperSpeed DWC3 USB SoC controller
Required properties:
- compatible: Should contain "xlnx,zynqmp-dwc3"
- compatible: May contain "xlnx,zynqmp-dwc3" or "xlnx,versal-dwc3"
- reg: Base address and length of the register control block
- clocks: A list of phandles for the clocks listed in clock-names
- clock-names: Should contain the following:
"bus_clk" Master/Core clock, have to be >= 125 MHz for SS
operation and >= 60MHz for HS operation
"ref_clk" Clock source to core during PHY power down
- resets: A list of phandles for resets listed in reset-names
- reset-names:
"usb_crst" USB core reset
"usb_hibrst" USB hibernation reset
"usb_apbrst" USB APB reset
Required child node:
A child node must exist to represent the core DWC3 IP block. The name of
the node is not important. The content of the node is defined in dwc3.txt.
Optional properties for snps,dwc3:
- dma-coherent: Enable this flag if CCI is enabled in design. Adding this
flag configures Global SoC bus Configuration Register and
Xilinx USB 3.0 IP - USB coherency register to enable CCI.
- interrupt-names: Should contain the following:
"dwc_usb3" USB gadget mode interrupts
"otg" USB OTG mode interrupts
"hiber" USB hibernation interrupts
Example device node:
usb@0 {
#address-cells = <0x2>;
#size-cells = <0x1>;
compatible = "xlnx,zynqmp-dwc3";
reg = <0x0 0xff9d0000 0x0 0x100>;
clock-names = "bus_clk", "ref_clk";
clocks = <&clk125>, <&clk125>;
resets = <&zynqmp_reset ZYNQMP_RESET_USB1_CORERESET>,
<&zynqmp_reset ZYNQMP_RESET_USB1_HIBERRESET>,
<&zynqmp_reset ZYNQMP_RESET_USB1_APB>;
reset-names = "usb_crst", "usb_hibrst", "usb_apbrst";
ranges;
dwc3@fe200000 {
compatible = "snps,dwc3";
reg = <0x0 0xfe200000 0x40000>;
interrupts = <0x0 0x41 0x4>;
interrupt-names = "dwc_usb3", "otg", "hiber";
interrupts = <0 65 4>, <0 69 4>, <0 75 4>;
phys = <&psgtr 2 PHY_TYPE_USB3 0 2>;
phy-names = "usb3-phy";
dr_mode = "host";
dma-coherent;
};
};

@@ -52,11 +52,8 @@ properties:
# Required child node:
patternProperties:
"^dwc3@[0-9a-f]+$":
type: object
description:
A child node must exist to represent the core DWC3 IP block
The content of the node is defined in dwc3.txt.
"^usb@[0-9a-f]+$":
$ref: snps,dwc3.yaml#
required:
- compatible
@@ -87,7 +84,7 @@ examples:
dma-ranges = <0x40000000 0x40000000 0xc0000000>;
ranges;
dwc3@38100000 {
usb@38100000 {
compatible = "snps,dwc3";
reg = <0x38100000 0x10000>;
clocks = <&clk IMX8MP_CLK_HSIO_AXI>,

@@ -122,6 +122,12 @@ properties:
description:
Set this flag to force EHCI reset after resume.
spurious-oc:
$ref: /schemas/types.yaml#/definitions/flag
description:
Set this flag to indicate that the hardware sometimes turns on
the OC bit when an over-current isn't actually present.
companion:
$ref: /schemas/types.yaml#/definitions/phandle
description:

@@ -30,6 +30,7 @@ properties:
- mediatek,mt7629-xhci
- mediatek,mt8173-xhci
- mediatek,mt8183-xhci
- mediatek,mt8192-xhci
- const: mediatek,mtk-xhci
reg:
@@ -45,7 +46,18 @@ properties:
- const: ippc # optional, only needed for case 1.
interrupts:
maxItems: 1
description:
use "interrupts-extended" when the interrupts are connected to
separate interrupt controllers
minItems: 1
items:
- description: xHCI host controller interrupt
- description: optional, wakeup interrupt used to support runtime PM
interrupt-names:
items:
- const: host
- const: wakeup
power-domains:
description: A phandle to USB power domain node to control USB's MTCMOS
@@ -99,9 +111,9 @@ properties:
vbus-supply:
description: Regulator of USB VBUS5v
usb3-lpm-capable:
description: supports USB3.0 LPM
type: boolean
usb3-lpm-capable: true
usb2-lpm-disable: true
imod-interval-ns:
description:
@@ -127,10 +139,13 @@ properties:
- description:
The second cell represents the register base address of the glue
layer in syscon
- description:
- description: |
The third cell represents the hardware version of the glue layer,
1 is used by mt8173 etc, 2 is used by mt2712 etc
enum: [1, 2]
1 - used by mt8173 etc, revision 1 without following IPM rule;
2 - used by mt2712 etc, revision 2 following IPM rule;
101 - used by mt8183, specific 1.01;
102 - used by mt8192, specific 1.02;
enum: [1, 2, 101, 102]
mediatek,u3p-dis-msk:
$ref: /schemas/types.yaml#/definitions/uint32

@@ -24,6 +24,7 @@ properties:
- mediatek,mt2712-mtu3
- mediatek,mt8173-mtu3
- mediatek,mt8183-mtu3
- mediatek,mt8192-mtu3
- const: mediatek,mtu3
reg:
@@ -126,7 +127,7 @@ properties:
Any connector to the data bus of this controller should be modelled
using the OF graph bindings specified, if the "usb-role-switch"
property is used. See graph.txt
type: object
$ref: /schemas/graph.yaml#/properties/port
enable-manual-drd:
$ref: /schemas/types.yaml#/definitions/flag
@@ -152,10 +153,13 @@ properties:
- description:
The second cell represents the register base address of the glue
layer in syscon
- description:
- description: |
The third cell represents the hardware version of the glue layer,
1 is used by mt8173 etc, 2 is used by mt2712 etc
enum: [1, 2]
1 - used by mt8173 etc, revision 1 without following IPM rule;
2 - used by mt2712 etc, revision 2 following IPM rule;
101 - used by mt8183, specific 1.01;
102 - used by mt8192, specific 1.02;
enum: [1, 2, 101, 102]
mediatek,u3p-dis-msk:
$ref: /schemas/types.yaml#/definitions/uint32

@@ -16,6 +16,7 @@ properties:
- qcom,msm8996-dwc3
- qcom,msm8998-dwc3
- qcom,sc7180-dwc3
- qcom,sc7280-dwc3
- qcom,sdm845-dwc3
- qcom,sdx55-dwc3
- qcom,sm8150-dwc3

@@ -87,13 +87,19 @@ properties:
minItems: 1
snps,usb2-lpm-disable:
description: Indicate if we don't want to enable USB2 HW LPM
description: Indicate if we don't want to enable USB2 HW LPM for host
mode.
type: boolean
snps,usb3_lpm_capable:
description: Determines if platform is USB3 LPM capable
type: boolean
snps,usb2-gadget-lpm-disable:
description: Indicate if we don't want to enable USB2 HW LPM for gadget
mode.
type: boolean
snps,dis-start-transfer-quirk:
description:
When set, disable isoc START TRANSFER command failure SW work-around

@@ -82,9 +82,9 @@ required:
additionalProperties: true
examples:
#hub connected to port 1
#device connected to port 2
#device connected to port 3
# hub connected to port 1
# device connected to port 2
# device connected to port 3
# interface 0 of configuration 1
# interface 0 of configuration 2
- |

@@ -1,43 +0,0 @@
USB NOP PHY
Required properties:
- compatible: should be usb-nop-xceiv
- #phy-cells: Must be 0
Optional properties:
- clocks: phandle to the PHY clock. Use as per Documentation/devicetree
/bindings/clock/clock-bindings.txt
This property is required if clock-frequency is specified.
- clock-names: Should be "main_clk"
- clock-frequency: the clock frequency (in Hz) that the PHY clock must
be configured to.
- vcc-supply: phandle to the regulator that provides power to the PHY.
- reset-gpios: Should specify the GPIO for reset.
- vbus-detect-gpio: should specify the GPIO detecting a VBus insertion
(see Documentation/devicetree/bindings/gpio/gpio.txt)
- vbus-regulator : should specifiy the regulator supplying current drawn from
the VBus line (see Documentation/devicetree/bindings/regulator/regulator.txt).
Example:
hsusb1_phy {
compatible = "usb-nop-xceiv";
clock-frequency = <19200000>;
clocks = <&osc 0>;
clock-names = "main_clk";
vcc-supply = <&hsusb1_vcc_regulator>;
reset-gpios = <&gpio1 7 GPIO_ACTIVE_LOW>;
vbus-detect-gpio = <&gpio2 13 GPIO_ACTIVE_HIGH>;
vbus-regulator = <&vbus_regulator>;
#phy-cells = <0>;
};
hsusb1_phy is a NOP USB PHY device that gets its clock from an oscillator
and expects that clock to be configured to 19.2MHz by the NOP PHY driver.
hsusb1_vcc_regulator provides power to the PHY and GPIO 7 controls RESET.
GPIO 13 detects VBus insertion, and accordingly notifies the vbus-regulator.

@@ -0,0 +1,64 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/usb-nop-xceiv.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: USB NOP PHY
maintainers:
- Rob Herring <robh@kernel.org>
properties:
compatible:
const: usb-nop-xceiv
clocks:
maxItems: 1
clock-names:
const: main_clk
clock-frequency: true
'#phy-cells':
const: 0
vcc-supply:
description: phandle to the regulator that provides power to the PHY.
reset-gpios:
maxItems: 1
vbus-detect-gpio:
description: Should specify the GPIO detecting a VBus insertion
maxItems: 1
vbus-regulator:
description: Should specify the regulator supplying current drawn from
the VBus line.
$ref: /schemas/types.yaml#/definitions/phandle
required:
- compatible
- '#phy-cells'
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
hsusb1_phy {
compatible = "usb-nop-xceiv";
clock-frequency = <19200000>;
clocks = <&osc 0>;
clock-names = "main_clk";
vcc-supply = <&hsusb1_vcc_regulator>;
reset-gpios = <&gpio1 7 GPIO_ACTIVE_LOW>;
vbus-detect-gpio = <&gpio2 13 GPIO_ACTIVE_HIGH>;
vbus-regulator = <&vbus_regulator>;
#phy-cells = <0>;
};
...

@@ -109,15 +109,16 @@ well as to make sure they aren't relying on some HCD-specific behavior.
USB-Standard Types
==================
In ``<linux/usb/ch9.h>`` you will find the USB data types defined in
chapter 9 of the USB specification. These data types are used throughout
USB, and in APIs including this host side API, gadget APIs, usb character
devices and debugfs interfaces.
In ``drivers/usb/common/common.c`` and ``drivers/usb/common/debug.c`` you
will find the USB data types defined in chapter 9 of the USB specification.
These data types are used throughout USB, and in APIs including this host
side API, gadget APIs, usb character devices and debugfs interfaces.
.. kernel-doc:: include/linux/usb/ch9.h
:internal:
.. kernel-doc:: drivers/usb/common/common.c
:export:
.. _usb_header:
.. kernel-doc:: drivers/usb/common/debug.c
:export:
Host-Side Data Types and Macros
===============================

@@ -2,15 +2,15 @@
USB/IP protocol
===============
PRELIMINARY DRAFT, MAY CONTAIN MISTAKES!
28 Jun 2011
Architecture
============
The USB/IP protocol follows a server/client architecture. The server exports the
USB devices and the clients imports them. The device driver for the exported
USB devices and the clients import them. The device driver for the exported
USB device runs on the client machine.
The client may ask for the list of the exported USB devices. To get the list the
client opens a TCP/IP connection towards the server, and sends an OP_REQ_DEVLIST
client opens a TCP/IP connection to the server, and sends an OP_REQ_DEVLIST
packet on top of the TCP/IP connection (so the actual OP_REQ_DEVLIST may be sent
in one or more pieces at the low level transport layer). The server sends back
the OP_REP_DEVLIST packet which lists the exported USB devices. Finally the
@@ -30,7 +30,7 @@ TCP/IP connection is closed.
| |
Once the client knows the list of exported USB devices it may decide to use one
of them. First the client opens a TCP/IP connection towards the server and
of them. First the client opens a TCP/IP connection to the server and
sends an OP_REQ_IMPORT packet. The server replies with OP_REP_IMPORT. If the
import was successful the TCP/IP connection remains open and will be used
to transfer the URB traffic between the client and the server. The client may
@@ -84,17 +84,61 @@ server may be USBIP_RET_SUBMIT and USBIP_RET_UNLINK respectively.
| <---------------------------------------------- |
| . |
| : |
For UNLINK, note that after a successful USBIP_RET_UNLINK, the unlinked URB
submission would not have a corresponding USBIP_RET_SUBMIT (this is explained in
function stub_recv_cmd_unlink of drivers/usb/usbip/stub_rx.c).
::
virtual host controller usb host
"client" "server"
(imports USB devices) (exports USB devices)
| |
| USBIP_CMD_SUBMIT(seqnum = p) |
| ----------------------------------------------> |
| |
| USBIP_CMD_UNLINK |
| (seqnum = p+1, unlink_seqnum = p) |
| ----------------------------------------------> |
| |
| USBIP_RET_UNLINK |
| (seqnum = p+1, status = -ECONNRESET) |
| <---------------------------------------------- |
| |
| Note: No USBIP_RET_SUBMIT(seqnum = p) |
| <--X---X---X---X---X---X---X---X---X---X---X--- |
| . |
| : |
| |
| USBIP_CMD_SUBMIT(seqnum = q) |
| ----------------------------------------------> |
| |
| USBIP_RET_SUBMIT(seqnum = q) |
| <---------------------------------------------- |
| |
| USBIP_CMD_UNLINK |
| (seqnum = q+1, unlink_seqnum = q) |
| ----------------------------------------------> |
| |
| USBIP_RET_UNLINK |
| (seqnum = q+1, status = 0) |
| <---------------------------------------------- |
| |
The fields are in network (big endian) byte order meaning that the most significant
byte (MSB) is stored at the lowest address.
Protocol Version
================
The documented USBIP version is v1.1.1. The binary representation of this
version in message headers is 0x0111.
This is defined in tools/usb/usbip/configure.ac
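
The version field in the operation headers below carries this constant. As an illustration, a client might fill the common operation header like this; the layout mirrors the op_common header used by the usbip userspace tools, but treat the field names as an assumption rather than a normative definition::

    #include <stdint.h>
    #include <arpa/inet.h>      /* htons(), htonl() */

    #define USBIP_VERSION   0x0111  /* v1.1.1, as documented above */
    #define OP_REQ_DEVLIST  0x8005  /* command code from the tables below */

    /* Assumed layout of the OP_* handshake header. */
    struct op_common {
            uint16_t version;   /* USBIP version, big endian on the wire */
            uint16_t code;      /* command or reply code */
            uint32_t status;    /* 0 unless an error is reported */
    } __attribute__((packed));

    static void fill_op_common(struct op_common *op, uint16_t code)
    {
            op->version = htons(USBIP_VERSION);
            op->code = htons(code);
            op->status = htonl(0);
    }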
Message Format
==============
OP_REQ_DEVLIST:
Retrieve the list of exported USB devices.
@@ -102,7 +146,7 @@ OP_REQ_DEVLIST:
+-----------+--------+------------+---------------------------------------------------+
| Offset | Length | Value | Description |
+===========+========+============+===================================================+
| 0 | 2 | 0x0100 | Binary-coded decimal USBIP version number: v1.0.0 |
| 0 | 2 | | USBIP version |
+-----------+--------+------------+---------------------------------------------------+
| 2 | 2 | 0x8005 | Command code: Retrieve the list of exported USB |
| | | | devices. |
@@ -116,7 +160,7 @@ OP_REP_DEVLIST:
+-----------+--------+------------+---------------------------------------------------+
| Offset | Length | Value | Description |
+===========+========+============+===================================================+
| 0 | 2 | 0x0100 | Binary-coded decimal USBIP version number: v1.0.0.|
| 0 | 2 | | USBIP version |
+-----------+--------+------------+---------------------------------------------------+
| 2 | 2 | 0x0005 | Reply code: The list of exported USB devices. |
+-----------+--------+------------+---------------------------------------------------+
@@ -165,8 +209,8 @@ OP_REP_DEVLIST:
| 0x143 | 1 | | bNumInterfaces |
+-----------+--------+------------+---------------------------------------------------+
| 0x144 | | m_0 | From now on each interface is described, all |
| | | | together bNumInterfaces times, with the |
| | | | the following 4 fields: |
| | | | together bNumInterfaces times, with the following |
| | | | 4 fields: |
+-----------+--------+------------+---------------------------------------------------+
| | 1 | | bInterfaceClass |
+-----------+--------+------------+---------------------------------------------------+
@@ -177,7 +221,7 @@ OP_REP_DEVLIST:
| 0x147 | 1 | | padding byte for alignment, shall be set to zero |
+-----------+--------+------------+---------------------------------------------------+
| 0xC + | | | The second exported USB device starts at i=1 |
| i*0x138 + | | | with the busid field. |
| i*0x138 + | | | with the path field. |
| m_(i-1)*4 | | | |
+-----------+--------+------------+---------------------------------------------------+
@@ -187,7 +231,7 @@ OP_REQ_IMPORT:
+-----------+--------+------------+---------------------------------------------------+
| Offset | Length | Value | Description |
+===========+========+============+===================================================+
| 0 | 2 | 0x0100 | Binary-coded decimal USBIP version number: v1.0.0 |
| 0 | 2 | | USBIP version |
+-----------+--------+------------+---------------------------------------------------+
| 2 | 2 | 0x8003 | Command code: import a remote USB device. |
+-----------+--------+------------+---------------------------------------------------+
@@ -206,7 +250,7 @@ OP_REP_IMPORT:
+-----------+--------+------------+---------------------------------------------------+
| Offset | Length | Value | Description |
+===========+========+============+===================================================+
| 0 | 2 | 0x0100 | Binary-coded decimal USBIP version number: v1.0.0 |
| 0 | 2 | | USBIP version |
+-----------+--------+------------+---------------------------------------------------+
| 2 | 2 | 0x0003 | Reply code: Reply to import. |
+-----------+--------+------------+---------------------------------------------------+
@@ -254,158 +298,156 @@ OP_REP_IMPORT:
| 0x13E | 1 | | bNumInterfaces |
+-----------+--------+------------+---------------------------------------------------+
The following four commands have a common basic header called
'usbip_header_basic', and their headers, called 'usbip_header' (before
transfer_buffer payload), have the same length, therefore paddings are needed.
usbip_header_basic:
+-----------+--------+---------------------------------------------------+
| Offset | Length | Description |
+===========+========+===================================================+
| 0 | 4 | command |
+-----------+--------+---------------------------------------------------+
| 4 | 4 | seqnum: sequential number that identifies requests|
| | | and corresponding responses; |
| | | incremented per connection |
+-----------+--------+---------------------------------------------------+
| 8 | 4 | devid: specifies a remote USB device uniquely |
| | | instead of busnum and devnum; |
| | | for client (request), this value is |
| | | ((busnum << 16) | devnum); |
| | | for server (response), this shall be set to 0 |
+-----------+--------+---------------------------------------------------+
| 0xC | 4 | direction: |
| | | |
| | | - 0: USBIP_DIR_OUT |
| | | - 1: USBIP_DIR_IN |
| | | |
| | | only used by client, for server this shall be 0 |
+-----------+--------+---------------------------------------------------+
| 0x10 | 4 | ep: endpoint number |
| | | only used by client, for server this shall be 0; |
| | | for UNLINK, this shall be 0 |
+-----------+--------+---------------------------------------------------+
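
Rendered as C, the 20-byte basic header looks as follows; this is a sketch (the kernel keeps an equivalent structure in drivers/usb/usbip/usbip_common.h), and every field is big endian on the wire::

    #include <stdint.h>

    #define USBIP_CMD_SUBMIT 0x00000001
    #define USBIP_CMD_UNLINK 0x00000002
    #define USBIP_RET_SUBMIT 0x00000003
    #define USBIP_RET_UNLINK 0x00000004

    #define USBIP_DIR_OUT 0
    #define USBIP_DIR_IN  1

    /* Common 20-byte prefix of the four URB commands described below. */
    struct usbip_header_basic {
            uint32_t command;
            uint32_t seqnum;    /* incremented per connection */
            uint32_t devid;     /* (busnum << 16) | devnum; 0 in responses */
            uint32_t direction; /* USBIP_DIR_*; 0 in responses */
            uint32_t ep;        /* endpoint; 0 in responses and for UNLINK */
    } __attribute__((packed));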
USBIP_CMD_SUBMIT:
Submit an URB
+-----------+--------+------------+---------------------------------------------------+
| Offset | Length | Value | Description |
+===========+========+============+===================================================+
| 0 | 4 | 0x00000001 | command: Submit an URB |
+-----------+--------+------------+---------------------------------------------------+
| 4 | 4 | | seqnum: the sequence number of the URB to submit |
+-----------+--------+------------+---------------------------------------------------+
| 8 | 4 | | devid |
+-----------+--------+------------+---------------------------------------------------+
| 0xC | 4 | | direction: |
| | | | |
| | | | - 0: USBIP_DIR_OUT |
| | | | - 1: USBIP_DIR_IN |
+-----------+--------+------------+---------------------------------------------------+
| 0x10 | 4 | | ep: endpoint number, possible values are: 0...15 |
+-----------+--------+------------+---------------------------------------------------+
| 0x14 | 4 | | transfer_flags: possible values depend on the |
| | | | URB transfer type, see below |
+-----------+--------+------------+---------------------------------------------------+
| 0x18 | 4 | | transfer_buffer_length |
+-----------+--------+------------+---------------------------------------------------+
| 0x1C | 4 | | start_frame: specify the selected frame to |
| | | | transmit an ISO frame, ignored if URB_ISO_ASAP |
| | | | is specified at transfer_flags |
+-----------+--------+------------+---------------------------------------------------+
| 0x20 | 4 | | number_of_packets: number of ISO packets |
+-----------+--------+------------+---------------------------------------------------+
| 0x24 | 4 | | interval: maximum time for the request on the |
| | | | server-side host controller |
+-----------+--------+------------+---------------------------------------------------+
| 0x28 | 8 | | setup: data bytes for USB setup, filled with |
| | | | zeros if not used |
+-----------+--------+------------+---------------------------------------------------+
| 0x30 | | | URB data. For ISO transfers the padding between |
| | | | each ISO packets is not transmitted. |
+-----------+--------+------------+---------------------------------------------------+
+-------------------------+------------+---------+-----------+----------+-------------+
| Allowed transfer_flags | value | control | interrupt | bulk | isochronous |
+=========================+============+=========+===========+==========+=============+
| URB_SHORT_NOT_OK | 0x00000001 | only in | only in | only in | no |
+-------------------------+------------+---------+-----------+----------+-------------+
| URB_ISO_ASAP | 0x00000002 | no | no | no | yes |
+-------------------------+------------+---------+-----------+----------+-------------+
| URB_NO_TRANSFER_DMA_MAP | 0x00000004 | yes | yes | yes | yes |
+-------------------------+------------+---------+-----------+----------+-------------+
| URB_ZERO_PACKET | 0x00000040 | no | no | only out | no |
+-------------------------+------------+---------+-----------+----------+-------------+
| URB_NO_INTERRUPT | 0x00000080 | yes | yes | yes | yes |
+-------------------------+------------+---------+-----------+----------+-------------+
| URB_FREE_BUFFER | 0x00000100 | yes | yes | yes | yes |
+-------------------------+------------+---------+-----------+----------+-------------+
| URB_DIR_MASK | 0x00000200 | yes | yes | yes | yes |
+-------------------------+------------+---------+-----------+----------+-------------+
+-----------+--------+---------------------------------------------------+
| Offset | Length | Description |
+===========+========+===================================================+
| 0 | 20 | usbip_header_basic, 'command' shall be 0x00000001 |
+-----------+--------+---------------------------------------------------+
| 0x14 | 4 | transfer_flags: possible values depend on the |
| | | URB transfer_flags (refer to URB doc in |
| | | Documentation/driver-api/usb/URB.rst) |
| | | but with URB_NO_TRANSFER_DMA_MAP masked. Refer to |
| | | function usbip_pack_cmd_submit and function |
| | | tweak_transfer_flags in drivers/usb/usbip/ |
| | | usbip_common.c. The following fields may also ref |
| | | to function usbip_pack_cmd_submit and URB doc |
+-----------+--------+---------------------------------------------------+
| 0x18 | 4 | transfer_buffer_length: |
| | | use URB transfer_buffer_length |
+-----------+--------+---------------------------------------------------+
| 0x1C | 4 | start_frame: use URB start_frame; |
| | | initial frame for ISO transfer; |
| | | shall be set to 0 if not ISO transfer |
+-----------+--------+---------------------------------------------------+
| 0x20 | 4 | number_of_packets: number of ISO packets; |
| | | shall be set to 0xffffffff if not ISO transfer |
+-----------+--------+---------------------------------------------------+
| 0x24 | 4 | interval: maximum time for the request on the |
| | | server-side host controller |
+-----------+--------+---------------------------------------------------+
| 0x28 | 8 | setup: data bytes for USB setup, filled with |
| | | zeros if not used. |
+-----------+--------+---------------------------------------------------+
| 0x30 | n | transfer_buffer. |
| | | If direction is USBIP_DIR_OUT then n equals |
| | | transfer_buffer_length; otherwise n equals 0. |
| | | For ISO transfers the padding between each ISO |
| | | packets is not transmitted. |
+-----------+--------+---------------------------------------------------+
| 0x30+n | m | iso_packet_descriptor |
+-----------+--------+---------------------------------------------------+
USBIP_RET_SUBMIT:
Reply for submitting an URB
+-----------+--------+------------+---------------------------------------------------+
| Offset | Length | Value | Description |
+===========+========+============+===================================================+
| 0 | 4 | 0x00000003 | command |
+-----------+--------+------------+---------------------------------------------------+
| 4 | 4 | | seqnum: URB sequence number |
+-----------+--------+------------+---------------------------------------------------+
| 8 | 4 | | devid |
+-----------+--------+------------+---------------------------------------------------+
| 0xC | 4 | | direction: |
| | | | |
| | | | - 0: USBIP_DIR_OUT |
| | | | - 1: USBIP_DIR_IN |
+-----------+--------+------------+---------------------------------------------------+
| 0x10 | 4 | | ep: endpoint number |
+-----------+--------+------------+---------------------------------------------------+
| 0x14 | 4 | | status: zero for successful URB transaction, |
| | | | otherwise some kind of error happened. |
+-----------+--------+------------+---------------------------------------------------+
| 0x18 | 4 | n | actual_length: number of URB data bytes |
+-----------+--------+------------+---------------------------------------------------+
| 0x1C | 4 | | start_frame: for an ISO frame the actually |
| | | | selected frame for transmit. |
+-----------+--------+------------+---------------------------------------------------+
| 0x20 | 4 | | number_of_packets |
+-----------+--------+------------+---------------------------------------------------+
| 0x24 | 4 | | error_count |
+-----------+--------+------------+---------------------------------------------------+
| 0x28 | 8 | | setup: data bytes for USB setup, filled with |
| | | | zeros if not used |
+-----------+--------+------------+---------------------------------------------------+
| 0x30 | n | | URB data bytes. For ISO transfers the padding |
| | | | between each ISO packets is not transmitted. |
+-----------+--------+------------+---------------------------------------------------+
+-----------+--------+---------------------------------------------------+
| Offset | Length | Description |
+===========+========+===================================================+
| 0 | 20 | usbip_header_basic, 'command' shall be 0x00000003 |
+-----------+--------+---------------------------------------------------+
| 0x14 | 4 | status: zero for successful URB transaction, |
| | | otherwise some kind of error happened. |
+-----------+--------+---------------------------------------------------+
| 0x18 | 4 | actual_length: number of URB data bytes; |
| | | use URB actual_length |
+-----------+--------+---------------------------------------------------+
| 0x1C | 4 | start_frame: use URB start_frame; |
| | | initial frame for ISO transfer; |
| | | shall be set to 0 if not ISO transfer |
+-----------+--------+---------------------------------------------------+
| 0x20 | 4 | number_of_packets: number of ISO packets; |
| | | shall be set to 0xffffffff if not ISO transfer |
+-----------+--------+---------------------------------------------------+
| 0x24 | 4 | error_count |
+-----------+--------+---------------------------------------------------+
| 0x28 | 8 | padding, shall be set to 0 |
+-----------+--------+---------------------------------------------------+
| 0x30 | n | transfer_buffer. |
| | | If direction is USBIP_DIR_IN then n equals |
| | | actual_length; otherwise n equals 0. |
| | | For ISO transfers the padding between each ISO |
| | | packets is not transmitted. |
+-----------+--------+---------------------------------------------------+
| 0x30+n | m | iso_packet_descriptor |
+-----------+--------+---------------------------------------------------+
USBIP_CMD_UNLINK:
Unlink an URB
+-----------+--------+------------+---------------------------------------------------+
| Offset | Length | Value | Description |
+===========+========+============+===================================================+
| 0 | 4 | 0x00000002 | command: URB unlink command |
+-----------+--------+------------+---------------------------------------------------+
| 4 | 4 | | seqnum: URB sequence number to unlink: |
| | | | |
| | | | FIXME: |
| | | | is this so? |
+-----------+--------+------------+---------------------------------------------------+
| 8 | 4 | | devid |
+-----------+--------+------------+---------------------------------------------------+
| 0xC | 4 | | direction: |
| | | | |
| | | | - 0: USBIP_DIR_OUT |
| | | | - 1: USBIP_DIR_IN |
+-----------+--------+------------+---------------------------------------------------+
| 0x10 | 4 | | ep: endpoint number: zero |
+-----------+--------+------------+---------------------------------------------------+
| 0x14 | 4 | | seqnum: the URB sequence number given previously |
| | | | at USBIP_CMD_SUBMIT.seqnum field |
+-----------+--------+------------+---------------------------------------------------+
| 0x30 | n | | URB data bytes. For ISO transfers the padding |
| | | | between each ISO packets is not transmitted. |
+-----------+--------+------------+---------------------------------------------------+
+-----------+--------+---------------------------------------------------+
| Offset | Length | Description |
+===========+========+===================================================+
| 0 | 20 | usbip_header_basic, 'command' shall be 0x00000002 |
+-----------+--------+---------------------------------------------------+
| 0x14 | 4 | unlink_seqnum, of the SUBMIT request to unlink |
+-----------+--------+---------------------------------------------------+
| 0x18 | 24 | padding, shall be set to 0 |
+-----------+--------+---------------------------------------------------+
USBIP_RET_UNLINK:
Reply for URB unlink
+-----------+--------+------------+---------------------------------------------------+
| Offset | Length | Value | Description |
+===========+========+============+===================================================+
| 0 | 4 | 0x00000004 | command: reply for the URB unlink command |
+-----------+--------+------------+---------------------------------------------------+
| 4 | 4 | | seqnum: the unlinked URB sequence number |
+-----------+--------+------------+---------------------------------------------------+
| 8 | 4 | | devid |
+-----------+--------+------------+---------------------------------------------------+
| 0xC | 4 | | direction: |
| | | | |
| | | | - 0: USBIP_DIR_OUT |
| | | | - 1: USBIP_DIR_IN |
+-----------+--------+------------+---------------------------------------------------+
| 0x10 | 4 | | ep: endpoint number |
+-----------+--------+------------+---------------------------------------------------+
| 0x14 | 4 | | status: This is the value contained in the |
| | | | urb->status in the URB completition handler. |
| | | | |
| | | | FIXME: |
| | | | a better explanation needed. |
+-----------+--------+------------+---------------------------------------------------+
| 0x30 | n | | URB data bytes. For ISO transfers the padding |
| | | | between each ISO packets is not transmitted. |
+-----------+--------+------------+---------------------------------------------------+
+-----------+--------+---------------------------------------------------+
| Offset | Length | Description |
+===========+========+===================================================+
| 0 | 20 | usbip_header_basic, 'command' shall be 0x00000004 |
+-----------+--------+---------------------------------------------------+
| 0x14 | 4 | status: This is similar to the status of |
| | | USBIP_RET_SUBMIT (share the same memory offset). |
| | | When UNLINK is successful, status is -ECONNRESET; |
| | | when USBIP_CMD_UNLINK is after USBIP_RET_SUBMIT |
| | | status is 0 |
+-----------+--------+---------------------------------------------------+
| 0x18 | 24 | padding, shall be set to 0 |
+-----------+--------+---------------------------------------------------+
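
On the client side the status rule above reduces to one check; a hedged sketch (the helper name is illustrative, not from the kernel sources)::

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Interpret USBIP_RET_UNLINK.status per the table above. */
    static bool unlink_cancelled_urb(int32_t status)
    {
            /*
             * -ECONNRESET: the UNLINK cancelled the URB, so no
             * USBIP_RET_SUBMIT follows for the unlinked seqnum.
             * 0: the URB had already completed and its USBIP_RET_SUBMIT
             * was (or will be) delivered normally.
             */
            return status == -ECONNRESET;
    }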
EXAMPLE
=======
The following data is captured from wire with Human Interface Devices (HID)
payload
::
CmdIntrIN: 00000001 00000d05 0001000f 00000001 00000001 00000200 00000040 ffffffff 00000000 00000004 00000000 00000000
CmdIntrOUT: 00000001 00000d06 0001000f 00000000 00000001 00000000 00000040 ffffffff 00000000 00000004 00000000 00000000
ffffffff860008a784ce5ae212376300000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
RetIntrOut: 00000003 00000d06 00000000 00000000 00000000 00000000 00000040 ffffffff 00000000 00000000 00000000 00000000
RetIntrIn: 00000003 00000d05 00000000 00000000 00000000 00000000 00000040 ffffffff 00000000 00000000 00000000 00000000
ffffffff860011a784ce5ae2123763612891b1020100000400000000000000000000000000000000000000000000000000000000000000000000000000000000
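
The first five 32-bit words of each line are the usbip_header_basic sketched earlier: decoding CmdIntrIN by hand gives command 0x00000001 (USBIP_CMD_SUBMIT), seqnum 0x00000d05, devid 0x0001000f (bus 1, device 15), direction 1 (USBIP_DIR_IN) and endpoint 1. The same decoding in C, over a received byte buffer::

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>      /* ntohl() */

    static void dump_basic_header(const uint8_t *buf)
    {
            uint32_t w[5];      /* command, seqnum, devid, direction, ep */
            int i;

            for (i = 0; i < 5; i++) {
                    memcpy(&w[i], buf + 4 * i, 4);
                    w[i] = ntohl(w[i]);     /* wire format is big endian */
            }
            printf("command %#x seqnum %u bus %u dev %u dir %u ep %u\n",
                   w[0], w[1], w[2] >> 16, w[2] & 0xffff, w[3], w[4]);
    }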

@@ -791,7 +791,6 @@ CONFIG_USB_XHCI_MVEBU=y
CONFIG_USB_XHCI_TEGRA=m
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_HCD_STI=y
CONFIG_USB_EHCI_TEGRA=y
CONFIG_USB_EHCI_EXYNOS=m
CONFIG_USB_EHCI_MV=m
CONFIG_USB_OHCI_HCD=y
@@ -817,6 +816,7 @@ CONFIG_USB_DWC2=y
CONFIG_USB_CHIPIDEA=y
CONFIG_USB_CHIPIDEA_UDC=y
CONFIG_USB_CHIPIDEA_HOST=y
CONFIG_USB_CHIPIDEA_TEGRA=y
CONFIG_USB_ISP1760=y
CONFIG_USB_HSIC_USB3503=y
CONFIG_AB8500_USB=y

@@ -828,7 +828,7 @@ usb3_0: usb@32f10100 {
ranges;
status = "disabled";
usb_dwc3_0: dwc3@38100000 {
usb_dwc3_0: usb@38100000 {
compatible = "snps,dwc3";
reg = <0x38100000 0x10000>;
clocks = <&clk IMX8MP_CLK_HSIO_AXI>,
@@ -869,7 +869,7 @@ usb3_1: usb@32f10108 {
ranges;
status = "disabled";
usb_dwc3_1: dwc3@38200000 {
usb_dwc3_1: usb@38200000 {
compatible = "snps,dwc3";
reg = <0x38200000 0x10000>;
clocks = <&clk IMX8MP_CLK_HSIO_AXI>,

@@ -874,7 +874,7 @@ ssusb: usb@11201000 {
clocks = <&infracfg CLK_INFRA_UNIPRO_SCK>,
<&infracfg CLK_INFRA_USB>;
clock-names = "sys_ck", "ref_ck";
mediatek,syscon-wakeup = <&pericfg 0x400 0>;
mediatek,syscon-wakeup = <&pericfg 0x420 101>;
#address-cells = <2>;
#size-cells = <2>;
ranges;

@@ -25,13 +25,13 @@
/* Protocol timeouts in ms */
#define TBNET_LOGIN_DELAY 4500
#define TBNET_LOGIN_TIMEOUT 500
#define TBNET_LOGOUT_TIMEOUT 100
#define TBNET_LOGOUT_TIMEOUT 1000
#define TBNET_RING_SIZE 256
#define TBNET_LOCAL_PATH 0xf
#define TBNET_LOGIN_RETRIES 60
#define TBNET_LOGOUT_RETRIES 5
#define TBNET_LOGOUT_RETRIES 10
#define TBNET_MATCH_FRAGS_ID BIT(1)
#define TBNET_64K_FRAMES BIT(2)
#define TBNET_MAX_MTU SZ_64K
#define TBNET_FRAME_SIZE SZ_4K
#define TBNET_MAX_PAYLOAD_SIZE \
@@ -154,8 +154,8 @@ struct tbnet_ring {
* @login_sent: ThunderboltIP login message successfully sent
* @login_received: ThunderboltIP login message received from the remote
* host
* @transmit_path: HopID the other end needs to use building the
* opposite side path.
* @local_transmit_path: HopID we are using to send out packets
* @remote_transmit_path: HopID the other end is using to send packets to us
* @connection_lock: Lock serializing access to @login_sent,
* @login_received and @transmit_path.
* @login_retries: Number of login retries currently done
@@ -184,7 +184,8 @@ struct tbnet {
atomic_t command_id;
bool login_sent;
bool login_received;
u32 transmit_path;
int local_transmit_path;
int remote_transmit_path;
struct mutex connection_lock;
int login_retries;
struct delayed_work login_work;
@@ -257,7 +258,7 @@ static int tbnet_login_request(struct tbnet *net, u8 sequence)
atomic_inc_return(&net->command_id));
request.proto_version = TBIP_LOGIN_PROTO_VERSION;
request.transmit_path = TBNET_LOCAL_PATH;
request.transmit_path = net->local_transmit_path;
return tb_xdomain_request(xd, &request, sizeof(request),
TB_CFG_PKG_XDOMAIN_RESP, &reply,
@@ -364,10 +365,10 @@ static void tbnet_tear_down(struct tbnet *net, bool send_logout)
mutex_lock(&net->connection_lock);
if (net->login_sent && net->login_received) {
int retries = TBNET_LOGOUT_RETRIES;
int ret, retries = TBNET_LOGOUT_RETRIES;
while (send_logout && retries-- > 0) {
int ret = tbnet_logout_request(net);
ret = tbnet_logout_request(net);
if (ret != -ETIMEDOUT)
break;
}
@@ -377,8 +378,16 @@ static void tbnet_tear_down(struct tbnet *net, bool send_logout)
tbnet_free_buffers(&net->rx_ring);
tbnet_free_buffers(&net->tx_ring);
if (tb_xdomain_disable_paths(net->xd))
ret = tb_xdomain_disable_paths(net->xd,
net->local_transmit_path,
net->rx_ring.ring->hop,
net->remote_transmit_path,
net->tx_ring.ring->hop);
if (ret)
netdev_warn(net->dev, "failed to disable DMA paths\n");
tb_xdomain_release_in_hopid(net->xd, net->remote_transmit_path);
net->remote_transmit_path = 0;
}
net->login_retries = 0;
@@ -424,7 +433,7 @@ static int tbnet_handle_packet(const void *buf, size_t size, void *data)
if (!ret) {
mutex_lock(&net->connection_lock);
net->login_received = true;
net->transmit_path = pkg->transmit_path;
net->remote_transmit_path = pkg->transmit_path;
/* If we reached the number of max retries or
* previous logout, schedule another round of
@@ -597,12 +606,18 @@ static void tbnet_connected_work(struct work_struct *work)
if (!connected)
return;
ret = tb_xdomain_alloc_in_hopid(net->xd, net->remote_transmit_path);
if (ret != net->remote_transmit_path) {
netdev_err(net->dev, "failed to allocate Rx HopID\n");
return;
}
/* Both logins successful so enable the high-speed DMA paths and
* start the network device queue.
*/
ret = tb_xdomain_enable_paths(net->xd, TBNET_LOCAL_PATH,
ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
net->rx_ring.ring->hop,
net->transmit_path,
net->remote_transmit_path,
net->tx_ring.ring->hop);
if (ret) {
netdev_err(net->dev, "failed to enable DMA paths\n");
@@ -629,6 +644,7 @@ static void tbnet_connected_work(struct work_struct *work)
err_stop_rings:
tb_ring_stop(net->rx_ring.ring);
tb_ring_stop(net->tx_ring.ring);
tb_xdomain_release_in_hopid(net->xd, net->remote_transmit_path);
}
static void tbnet_login_work(struct work_struct *work)
@@ -851,6 +867,7 @@ static int tbnet_open(struct net_device *dev)
struct tb_xdomain *xd = net->xd;
u16 sof_mask, eof_mask;
struct tb_ring *ring;
int hopid;
netif_carrier_off(dev);
@@ -862,6 +879,15 @@ }
}
net->tx_ring.ring = ring;
hopid = tb_xdomain_alloc_out_hopid(xd, -1);
if (hopid < 0) {
netdev_err(dev, "failed to allocate Tx HopID\n");
tb_ring_free(net->tx_ring.ring);
net->tx_ring.ring = NULL;
return hopid;
}
net->local_transmit_path = hopid;
sof_mask = BIT(TBIP_PDF_FRAME_START);
eof_mask = BIT(TBIP_PDF_FRAME_END);
@@ -893,6 +919,8 @@ static int tbnet_stop(struct net_device *dev)
tb_ring_free(net->rx_ring.ring);
net->rx_ring.ring = NULL;
tb_xdomain_release_out_hopid(net->xd, net->local_transmit_path);
tb_ring_free(net->tx_ring.ring);
net->tx_ring.ring = NULL;
@@ -1340,7 +1368,7 @@ static int __init tbnet_init(void)
* the moment.
*/
tb_property_add_immediate(tbnet_dir, "prtcstns",
TBNET_MATCH_FRAGS_ID);
TBNET_MATCH_FRAGS_ID | TBNET_64K_FRAMES);
ret = tb_register_property_dir("network", tbnet_dir);
if (ret) {
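
Taken together, the flow this diff introduces is: allocate an output HopID for our own transmit path at open time, learn the peer's transmit HopID from its login packet, reserve that as our input HopID, and only then enable the two DMA paths. A condensed sketch of that sequence, using the tb_xdomain helpers exactly as they appear above (error handling trimmed, function name illustrative; the path/ring pairing follows the tbnet call):

    #include <linux/thunderbolt.h>

    static int example_enable_paths(struct tb_xdomain *xd,
                                    struct tb_ring *tx_ring,
                                    struct tb_ring *rx_ring,
                                    int *local_path, int remote_path)
    {
            int ret;

            /* Any free output HopID works for the frames we transmit. */
            ret = tb_xdomain_alloc_out_hopid(xd, -1);
            if (ret < 0)
                    return ret;
            *local_path = ret;

            /* remote_path came from the peer's login message; reserve it. */
            ret = tb_xdomain_alloc_in_hopid(xd, remote_path);
            if (ret != remote_path) {
                    tb_xdomain_release_out_hopid(xd, *local_path);
                    return ret < 0 ? ret : -EBUSY;
            }

            /* Wire both directions as (path, ring) pairs. */
            ret = tb_xdomain_enable_paths(xd, *local_path, rx_ring->hop,
                                          remote_path, tx_ring->hop);
            if (ret) {
                    tb_xdomain_release_in_hopid(xd, remote_path);
                    tb_xdomain_release_out_hopid(xd, *local_path);
            }
            return ret;
    }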

@@ -124,12 +124,31 @@ static const struct software_node usb_connector_node = {
.properties = usb_connector_properties,
};
static const struct software_node altmodes_node = {
.name = "altmodes",
.parent = &usb_connector_node,
};
static const struct property_entry dp_altmode_properties[] = {
PROPERTY_ENTRY_U32("svid", 0xff01),
PROPERTY_ENTRY_U32("vdo", 0x0c0086),
{ }
};
static const struct software_node dp_altmode_node = {
.name = "displayport-altmode",
.parent = &altmodes_node,
.properties = dp_altmode_properties,
};
static const struct software_node *node_group[] = {
&fusb302_node,
&max17047_node,
&pi3usb30532_node,
&displayport_node,
&usb_connector_node,
&altmodes_node,
&dp_altmode_node,
NULL
};

@@ -17,7 +17,7 @@
#define TB_CTL_RX_PKG_COUNT 10
#define TB_CTL_RETRIES 4
#define TB_CTL_RETRIES 1
/**
* struct tb_ctl - Thunderbolt control channel
@@ -29,6 +29,7 @@
* @request_queue_lock: Lock protecting @request_queue
* @request_queue: List of outstanding requests
* @running: Is the control channel running at the moment
* @timeout_msec: Default timeout for non-raw control messages
* @callback: Callback called when hotplug message is received
* @callback_data: Data passed to @callback
*/
@@ -43,6 +44,7 @@ struct tb_ctl {
struct list_head request_queue;
bool running;
int timeout_msec;
event_cb callback;
void *callback_data;
};
@@ -613,6 +615,7 @@ struct tb_cfg_result tb_cfg_request_sync(struct tb_ctl *ctl,
/**
* tb_ctl_alloc() - allocate a control channel
* @nhi: Pointer to NHI
* @timeout_msec: Default timeout used with non-raw control messages
* @cb: Callback called for plug events
* @cb_data: Data passed to @cb
*
@@ -620,13 +623,15 @@ struct tb_cfg_result tb_cfg_request_sync(struct tb_ctl *ctl,
*
* Return: Returns a pointer on success or NULL on failure.
*/
struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data)
struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int timeout_msec, event_cb cb,
void *cb_data)
{
int i;
struct tb_ctl *ctl = kzalloc(sizeof(*ctl), GFP_KERNEL);
if (!ctl)
return NULL;
ctl->nhi = nhi;
ctl->timeout_msec = timeout_msec;
ctl->callback = cb;
ctl->callback_data = cb_data;
@@ -802,14 +807,12 @@ static bool tb_cfg_copy(struct tb_cfg_request *req, const struct ctl_pkg *pkg)
* tb_cfg_reset() - send a reset packet and wait for a response
* @ctl: Control channel pointer
* @route: Router string for the router to send reset
* @timeout_msec: Timeout in ms how long to wait for the response
*
* If the switch at route is incorrectly configured then we will not receive a
* reply (even though the switch will reset). The caller should check for
* -ETIMEDOUT and attempt to reconfigure the switch.
*/
struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
int timeout_msec)
struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route)
{
struct cfg_reset_pkg request = { .header = tb_cfg_make_header(route) };
struct tb_cfg_result res = { 0 };
@@ -831,7 +834,7 @@ struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
req->response_size = sizeof(reply);
req->response_type = TB_CFG_PKG_RESET;
res = tb_cfg_request_sync(ctl, req, timeout_msec);
res = tb_cfg_request_sync(ctl, req, ctl->timeout_msec);
tb_cfg_request_put(req);
@@ -1007,7 +1010,7 @@ int tb_cfg_read(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
enum tb_cfg_space space, u32 offset, u32 length)
{
struct tb_cfg_result res = tb_cfg_read_raw(ctl, buffer, route, port,
space, offset, length, TB_CFG_DEFAULT_TIMEOUT);
space, offset, length, ctl->timeout_msec);
switch (res.err) {
case 0:
/* Success */
@@ -1033,7 +1036,7 @@ int tb_cfg_write(struct tb_ctl *ctl, const void *buffer, u64 route, u32 port,
enum tb_cfg_space space, u32 offset, u32 length)
{
struct tb_cfg_result res = tb_cfg_write_raw(ctl, buffer, route, port,
space, offset, length, TB_CFG_DEFAULT_TIMEOUT);
space, offset, length, ctl->timeout_msec);
switch (res.err) {
case 0:
/* Success */
@@ -1071,7 +1074,7 @@ int tb_cfg_get_upstream_port(struct tb_ctl *ctl, u64 route)
u32 dummy;
struct tb_cfg_result res = tb_cfg_read_raw(ctl, &dummy, route, 0,
TB_CFG_SWITCH, 0, 1,
TB_CFG_DEFAULT_TIMEOUT);
ctl->timeout_msec);
if (res.err == 1)
return -EIO;
if (res.err)

@@ -21,15 +21,14 @@ struct tb_ctl;
typedef bool (*event_cb)(void *data, enum tb_cfg_pkg_type type,
const void *buf, size_t size);
struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data);
struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int timeout_msec, event_cb cb,
void *cb_data);
void tb_ctl_start(struct tb_ctl *ctl);
void tb_ctl_stop(struct tb_ctl *ctl);
void tb_ctl_free(struct tb_ctl *ctl);
/* configuration commands */
#define TB_CFG_DEFAULT_TIMEOUT 5000 /* msec */
struct tb_cfg_result {
u64 response_route;
u32 response_port; /*
@@ -124,8 +123,7 @@ static inline struct tb_cfg_header tb_cfg_make_header(u64 route)
}
int tb_cfg_ack_plug(struct tb_ctl *ctl, u64 route, u32 port, bool unplug);
struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
int timeout_msec);
struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route);
struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
u64 route, u32 port,
enum tb_cfg_space space, u32 offset,
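
The practical effect of this header change is that the timeout now travels with the control channel: it is supplied once to tb_ctl_alloc() and used internally by tb_cfg_reset(), tb_cfg_read() and tb_cfg_write(), instead of being passed at every call site. A sketch of the updated call pattern (the constant and function names here are assumed for illustration):

    #define EXAMPLE_TIMEOUT 5000    /* ms; assumed caller-side default */

    /* Allocate and start a control channel with one uniform timeout. */
    static struct tb_ctl *example_ctl_setup(struct tb_nhi *nhi, event_cb cb,
                                            void *cb_data)
    {
            struct tb_ctl *ctl = tb_ctl_alloc(nhi, EXAMPLE_TIMEOUT, cb, cb_data);

            if (!ctl)
                    return NULL;
            tb_ctl_start(ctl);
            return ctl;
    }

    /* tb_cfg_reset() no longer takes a timeout argument. */
    static bool example_reset(struct tb_ctl *ctl, u64 route)
    {
            struct tb_cfg_result res = tb_cfg_reset(ctl, route);

            /* -ETIMEDOUT may mean the router is misconfigured (see above). */
            return res.err == 0;
    }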

@@ -251,6 +251,29 @@ static ssize_t counters_write(struct file *file, const char __user *user_buf,
return ret < 0 ? ret : count;
}
static void cap_show_by_dw(struct seq_file *s, struct tb_switch *sw,
struct tb_port *port, unsigned int cap,
unsigned int offset, u8 cap_id, u8 vsec_id,
int dwords)
{
int i, ret;
u32 data;
for (i = 0; i < dwords; i++) {
if (port)
ret = tb_port_read(port, &data, TB_CFG_PORT, cap + offset + i, 1);
else
ret = tb_sw_read(sw, &data, TB_CFG_SWITCH, cap + offset + i, 1);
if (ret) {
seq_printf(s, "0x%04x <not accessible>\n", cap + offset + i);
continue;
}
seq_printf(s, "0x%04x %4d 0x%02x 0x%02x 0x%08x\n", cap + offset + i,
offset + i, cap_id, vsec_id, data);
}
}
static void cap_show(struct seq_file *s, struct tb_switch *sw,
struct tb_port *port, unsigned int cap, u8 cap_id,
u8 vsec_id, int length)
@@ -267,10 +290,7 @@ static void cap_show(struct seq_file *s, struct tb_switch *sw,
else
ret = tb_sw_read(sw, data, TB_CFG_SWITCH, cap + offset, dwords);
if (ret) {
seq_printf(s, "0x%04x <not accessible>\n",
cap + offset);
if (dwords > 1)
seq_printf(s, "0x%04x ...\n", cap + offset + 1);
cap_show_by_dw(s, sw, port, cap, offset, cap_id, vsec_id, length);
return;
}
@@ -341,15 +361,6 @@ static void port_cap_show(struct tb_port *port, struct seq_file *s,
} else {
length = header.extended_short.length;
vsec_id = header.extended_short.vsec_id;
/*
* Ice Lake and Tiger Lake do not implement the
* full length of the capability, only first 32
* dwords so hard-code it here.
*/
if (!vsec_id &&
(tb_switch_is_ice_lake(port->sw) ||
tb_switch_is_tiger_lake(port->sw)))
length = 32;
}
break;

@@ -13,7 +13,6 @@
#include <linux/sizes.h>
#include <linux/thunderbolt.h>
#define DMA_TEST_HOPID 8
#define DMA_TEST_TX_RING_SIZE 64
#define DMA_TEST_RX_RING_SIZE 256
#define DMA_TEST_FRAME_SIZE SZ_4K
@@ -72,7 +71,9 @@ static const char * const dma_test_result_names[] = {
* @svc: XDomain service the driver is bound to
* @xd: XDomain the service belongs to
* @rx_ring: Software ring holding RX frames
* @rx_hopid: HopID used for receiving frames
* @tx_ring: Software ring holding TX frames
* @tx_hopid: HopID used for sending frames
* @packets_to_send: Number of packets to send
* @packets_to_receive: Number of packets to receive
* @packets_sent: Actual number of packets sent
@@ -92,7 +93,9 @@ struct dma_test {
const struct tb_service *svc;
struct tb_xdomain *xd;
struct tb_ring *rx_ring;
int rx_hopid;
struct tb_ring *tx_ring;
int tx_hopid;
unsigned int packets_to_send;
unsigned int packets_to_receive;
unsigned int packets_sent;
@@ -119,10 +122,12 @@ static void *dma_test_pattern;
static void dma_test_free_rings(struct dma_test *dt)
{
if (dt->rx_ring) {
tb_xdomain_release_in_hopid(dt->xd, dt->rx_hopid);
tb_ring_free(dt->rx_ring);
dt->rx_ring = NULL;
}
if (dt->tx_ring) {
tb_xdomain_release_out_hopid(dt->xd, dt->tx_hopid);
tb_ring_free(dt->tx_ring);
dt->tx_ring = NULL;
}
@@ -151,6 +156,14 @@ static int dma_test_start_rings(struct dma_test *dt)
dt->tx_ring = ring;
e2e_tx_hop = ring->hop;
ret = tb_xdomain_alloc_out_hopid(xd, -1);
if (ret < 0) {
dma_test_free_rings(dt);
return ret;
}
dt->tx_hopid = ret;
}
if (dt->packets_to_receive) {
@@ -168,11 +181,19 @@ }
}
dt->rx_ring = ring;
ret = tb_xdomain_alloc_in_hopid(xd, -1);
if (ret < 0) {
dma_test_free_rings(dt);
return ret;
}
dt->rx_hopid = ret;
}
ret = tb_xdomain_enable_paths(dt->xd, DMA_TEST_HOPID,
ret = tb_xdomain_enable_paths(dt->xd, dt->tx_hopid,
dt->tx_ring ? dt->tx_ring->hop : 0,
DMA_TEST_HOPID,
dt->rx_hopid,
dt->rx_ring ? dt->rx_ring->hop : 0);
if (ret) {
dma_test_free_rings(dt);
@@ -189,12 +210,18 @@ static int dma_test_start_rings(struct dma_test *dt)
static void dma_test_stop_rings(struct dma_test *dt)
{
int ret;
if (dt->rx_ring)
tb_ring_stop(dt->rx_ring);
if (dt->tx_ring)
tb_ring_stop(dt->tx_ring);
if (tb_xdomain_disable_paths(dt->xd))
ret = tb_xdomain_disable_paths(dt->xd, dt->tx_hopid,
dt->tx_ring ? dt->tx_ring->hop : 0,
dt->rx_hopid,
dt->rx_ring ? dt->rx_ring->hop : 0);
if (ret)
dev_warn(&dt->svc->dev, "failed to disable DMA paths\n");
dma_test_free_rings(dt);

@@ -341,9 +341,34 @@ struct device_type tb_domain_type = {
.release = tb_domain_release,
};
static bool tb_domain_event_cb(void *data, enum tb_cfg_pkg_type type,
const void *buf, size_t size)
{
struct tb *tb = data;
if (!tb->cm_ops->handle_event) {
tb_warn(tb, "domain does not have event handler\n");
return true;
}
switch (type) {
case TB_CFG_PKG_XDOMAIN_REQ:
case TB_CFG_PKG_XDOMAIN_RESP:
if (tb_is_xdomain_enabled())
return tb_xdomain_handle_request(tb, type, buf, size);
break;
default:
tb->cm_ops->handle_event(tb, type, buf, size);
}
return true;
}
/**
* tb_domain_alloc() - Allocate a domain
* @nhi: Pointer to the host controller
* @timeout_msec: Control channel timeout for non-raw messages
* @privsize: Size of the connection manager private data
*
* Allocates and initializes a new Thunderbolt domain. Connection
@@ -355,7 +380,7 @@ struct device_type tb_domain_type = {
*
* Return: allocated domain structure or %NULL in case of error
*/
struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize)
{
struct tb *tb;
@@ -382,6 +407,10 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
if (!tb->wq)
goto err_remove_ida;
tb->ctl = tb_ctl_alloc(nhi, timeout_msec, tb_domain_event_cb, tb);
if (!tb->ctl)
goto err_destroy_wq;
tb->dev.parent = &nhi->pdev->dev;
tb->dev.bus = &tb_bus_type;
tb->dev.type = &tb_domain_type;
@@ -391,6 +420,8 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
return tb;
err_destroy_wq:
destroy_workqueue(tb->wq);
err_remove_ida:
ida_simple_remove(&tb_domain_ida, tb->index);
err_free:
@@ -399,30 +430,6 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
return NULL;
}
static bool tb_domain_event_cb(void *data, enum tb_cfg_pkg_type type,
const void *buf, size_t size)
{
struct tb *tb = data;
if (!tb->cm_ops->handle_event) {
tb_warn(tb, "domain does not have event handler\n");
return true;
}
switch (type) {
case TB_CFG_PKG_XDOMAIN_REQ:
case TB_CFG_PKG_XDOMAIN_RESP:
if (tb_is_xdomain_enabled())
return tb_xdomain_handle_request(tb, type, buf, size);
break;
default:
tb->cm_ops->handle_event(tb, type, buf, size);
}
return true;
}
/**
* tb_domain_add() - Add domain to the system
* @tb: Domain to add
@@ -442,13 +449,6 @@ int tb_domain_add(struct tb *tb)
return -EINVAL;
mutex_lock(&tb->lock);
tb->ctl = tb_ctl_alloc(tb->nhi, tb_domain_event_cb, tb);
if (!tb->ctl) {
ret = -ENOMEM;
goto err_unlock;
}
/*
* tb_schedule_hotplug_handler may be called as soon as the config
* channel is started. That's why we have to hold the lock here.
@@ -493,7 +493,6 @@ int tb_domain_add(struct tb *tb)
device_del(&tb->dev);
err_ctl_stop:
tb_ctl_stop(tb->ctl);
err_unlock:
mutex_unlock(&tb->lock);
return ret;
@@ -793,6 +792,10 @@ int tb_domain_disconnect_pcie_paths(struct tb *tb)
* tb_domain_approve_xdomain_paths() - Enable DMA paths for XDomain
* @tb: Domain enabling the DMA paths
* @xd: XDomain DMA paths are created to
* @transmit_path: HopID we are using to send out packets
* @transmit_ring: DMA ring used to send out packets
* @receive_path: HopID the other end is using to send packets to us
* @receive_ring: DMA ring used to receive packets from @receive_path
*
* Calls connection manager specific method to enable DMA paths to the
* XDomain in question.
@@ -801,18 +804,25 @@ int tb_domain_disconnect_pcie_paths(struct tb *tb)
* particular returns %-ENOTSUPP if the connection manager
* implementation does not support XDomains.
*/
int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
if (!tb->cm_ops->approve_xdomain_paths)
return -ENOTSUPP;
return tb->cm_ops->approve_xdomain_paths(tb, xd);
return tb->cm_ops->approve_xdomain_paths(tb, xd, transmit_path,
transmit_ring, receive_path, receive_ring);
}
/**
* tb_domain_disconnect_xdomain_paths() - Disable DMA paths for XDomain
* @tb: Domain disabling the DMA paths
* @xd: XDomain whose DMA paths are disconnected
* @transmit_path: HopID we are using to send out packets
* @transmit_ring: DMA ring used to send out packets
* @receive_path: HopID the other end is using to send packets to us
* @receive_ring: DMA ring used to receive packets from @receive_path
*
* Calls connection manager specific method to disconnect DMA paths to
* the XDomain in question.
@@ -821,12 +831,15 @@ int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
* particular returns %-ENOTSUPP if the connection manager
* implementation does not support XDomains.
*/
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
if (!tb->cm_ops->disconnect_xdomain_paths)
return -ENOTSUPP;
return tb->cm_ops->disconnect_xdomain_paths(tb, xd);
return tb->cm_ops->disconnect_xdomain_paths(tb, xd, transmit_path,
transmit_ring, receive_path, receive_ring);
}
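
A hedged sketch of the widened calling convention (the HopID/ring values are illustrative only; real callers obtain them from the XDomain HopID allocators and from their NHI rings, as the dma_test changes above do):

	static int example_xdomain_dma(struct tb *tb, struct tb_xdomain *xd)
	{
		int ret;

		/* Transmit HopID 8 on ring 1, receive HopID 8 on ring 1 */
		ret = tb_domain_approve_xdomain_paths(tb, xd, 8, 1, 8, 1);
		if (ret)
			return ret;

		/* ... DMA traffic flows over the tunnel here ... */

		/* Tear down exactly the paths that were set up */
		return tb_domain_disconnect_xdomain_paths(tb, xd, 8, 1, 8, 1);
	}
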
static int disconnect_xdomain(struct device *dev, void *data)
@@ -837,7 +850,7 @@ static int disconnect_xdomain(struct device *dev, void *data)
xd = tb_to_xdomain(dev);
if (xd && xd->tb == tb)
ret = tb_xdomain_disable_paths(xd);
ret = tb_xdomain_disable_all_paths(xd);
return ret;
}

View File

@@ -277,6 +277,16 @@ struct tb_drom_entry_port {
u8 unknown4:2;
} __packed;
/* USB4 product descriptor */
struct tb_drom_entry_desc {
struct tb_drom_entry_header header;
u16 bcdUSBSpec;
u16 idVendor;
u16 idProduct;
u16 bcdProductFWRevision;
u32 TID;
u8 productHWRevision;
};
/**
* tb_drom_read_uid_only() - Read UID directly from DROM
@@ -329,6 +339,16 @@ static int tb_drom_parse_entry_generic(struct tb_switch *sw,
if (!sw->device_name)
return -ENOMEM;
break;
case 9: {
const struct tb_drom_entry_desc *desc =
(const struct tb_drom_entry_desc *)entry;
if (!sw->vendor && !sw->device) {
sw->vendor = desc->idVendor;
sw->device = desc->idProduct;
}
break;
}
}
return 0;
@@ -521,6 +541,51 @@ static int tb_drom_read_n(struct tb_switch *sw, u16 offset, u8 *val,
return tb_eeprom_read_n(sw, offset, val, count);
}
static int tb_drom_parse(struct tb_switch *sw)
{
const struct tb_drom_header *header =
(const struct tb_drom_header *)sw->drom;
u32 crc;
crc = tb_crc8((u8 *) &header->uid, 8);
if (crc != header->uid_crc8) {
tb_sw_warn(sw,
"DROM UID CRC8 mismatch (expected: %#x, got: %#x), aborting\n",
header->uid_crc8, crc);
return -EINVAL;
}
if (!sw->uid)
sw->uid = header->uid;
sw->vendor = header->vendor_id;
sw->device = header->model_id;
crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
if (crc != header->data_crc32) {
tb_sw_warn(sw,
"DROM data CRC32 mismatch (expected: %#x, got: %#x), continuing\n",
header->data_crc32, crc);
}
return tb_drom_parse_entries(sw);
}
static int usb4_drom_parse(struct tb_switch *sw)
{
const struct tb_drom_header *header =
(const struct tb_drom_header *)sw->drom;
u32 crc;
crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
if (crc != header->data_crc32) {
tb_sw_warn(sw,
"DROM data CRC32 mismatch (expected: %#x, got: %#x), aborting\n",
header->data_crc32, crc);
return -EINVAL;
}
return tb_drom_parse_entries(sw);
}
/**
* tb_drom_read() - Copy DROM to sw->drom and parse it
* @sw: Router whose DROM to read and parse
@@ -534,7 +599,6 @@ static int tb_drom_read_n(struct tb_switch *sw, u16 offset, u8 *val,
int tb_drom_read(struct tb_switch *sw)
{
u16 size;
u32 crc;
struct tb_drom_header *header;
int res, retries = 1;
@@ -599,31 +663,21 @@ int tb_drom_read(struct tb_switch *sw)
goto err;
}
crc = tb_crc8((u8 *) &header->uid, 8);
if (crc != header->uid_crc8) {
tb_sw_warn(sw,
"drom uid crc8 mismatch (expected: %#x, got: %#x), aborting\n",
header->uid_crc8, crc);
goto err;
}
if (!sw->uid)
sw->uid = header->uid;
sw->vendor = header->vendor_id;
sw->device = header->model_id;
tb_check_quirks(sw);
tb_sw_dbg(sw, "DROM version: %d\n", header->device_rom_revision);
crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
if (crc != header->data_crc32) {
tb_sw_warn(sw,
"drom data crc32 mismatch (expected: %#x, got: %#x), continuing\n",
header->data_crc32, crc);
switch (header->device_rom_revision) {
case 3:
res = usb4_drom_parse(sw);
break;
default:
tb_sw_warn(sw, "DROM device_rom_revision %#x unknown\n",
header->device_rom_revision);
fallthrough;
case 1:
res = tb_drom_parse(sw);
break;
}
if (header->device_rom_revision > 2)
tb_sw_warn(sw, "drom device_rom_revision %#x unknown\n",
header->device_rom_revision);
res = tb_drom_parse_entries(sw);
/* If the DROM parsing fails, wait a moment and retry once */
if (res == -EILSEQ && retries--) {
tb_sw_warn(sw, "parsing DROM failed, retrying\n");
@@ -633,10 +687,11 @@ int tb_drom_read(struct tb_switch *sw)
goto parse;
}
return res;
if (!res)
return 0;
err:
kfree(sw->drom);
sw->drom = NULL;
return -EIO;
}

View File

@@ -557,7 +557,9 @@ static int icm_fr_challenge_switch_key(struct tb *tb, struct tb_switch *sw,
return 0;
}
static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
struct icm_fr_pkg_approve_xdomain_response reply;
struct icm_fr_pkg_approve_xdomain request;
@@ -568,10 +570,10 @@ static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
request.link_info = xd->depth << ICM_LINK_INFO_DEPTH_SHIFT | xd->link;
memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid));
request.transmit_path = xd->transmit_path;
request.transmit_ring = xd->transmit_ring;
request.receive_path = xd->receive_path;
request.receive_ring = xd->receive_ring;
request.transmit_path = transmit_path;
request.transmit_ring = transmit_ring;
request.receive_path = receive_path;
request.receive_ring = receive_ring;
memset(&reply, 0, sizeof(reply));
ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
@@ -585,7 +587,9 @@ static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
return 0;
}
static int icm_fr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static int icm_fr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
u8 phy_port;
u8 cmd;
@@ -1122,7 +1126,9 @@ static int icm_tr_challenge_switch_key(struct tb *tb, struct tb_switch *sw,
return 0;
}
static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
struct icm_tr_pkg_approve_xdomain_response reply;
struct icm_tr_pkg_approve_xdomain request;
@@ -1132,10 +1138,10 @@ static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
request.hdr.code = ICM_APPROVE_XDOMAIN;
request.route_hi = upper_32_bits(xd->route);
request.route_lo = lower_32_bits(xd->route);
request.transmit_path = xd->transmit_path;
request.transmit_ring = xd->transmit_ring;
request.receive_path = xd->receive_path;
request.receive_ring = xd->receive_ring;
request.transmit_path = transmit_path;
request.transmit_ring = transmit_ring;
request.receive_path = receive_path;
request.receive_ring = receive_ring;
memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid));
memset(&reply, 0, sizeof(reply));
@@ -1176,7 +1182,9 @@ static int icm_tr_xdomain_tear_down(struct tb *tb, struct tb_xdomain *xd,
return 0;
}
static int icm_tr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static int icm_tr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
int ret;
@@ -2416,7 +2424,7 @@ struct tb *icm_probe(struct tb_nhi *nhi)
struct icm *icm;
struct tb *tb;
tb = tb_domain_alloc(nhi, sizeof(struct icm));
tb = tb_domain_alloc(nhi, ICM_TIMEOUT, sizeof(struct icm));
if (!tb)
return NULL;

View File

@@ -501,6 +501,77 @@ ssize_t tb_property_format_dir(const struct tb_property_dir *dir, u32 *block,
return ret < 0 ? ret : 0;
}
/**
* tb_property_copy_dir() - Take a deep copy of directory
* @dir: Directory to copy
*
* This function takes a deep copy of @dir and returns the copy. In
* case of error returns %NULL. The resulting directory needs to be
* released by calling tb_property_free_dir().
*/
struct tb_property_dir *tb_property_copy_dir(const struct tb_property_dir *dir)
{
struct tb_property *property, *p = NULL;
struct tb_property_dir *d;
if (!dir)
return NULL;
d = tb_property_create_dir(dir->uuid);
if (!d)
return NULL;
list_for_each_entry(property, &dir->properties, list) {
p = tb_property_alloc(property->key, property->type);
if (!p)
goto err_free;
p->length = property->length;
switch (property->type) {
case TB_PROPERTY_TYPE_DIRECTORY:
p->value.dir = tb_property_copy_dir(property->value.dir);
if (!p->value.dir)
goto err_free;
break;
case TB_PROPERTY_TYPE_DATA:
p->value.data = kmemdup(property->value.data,
property->length * 4,
GFP_KERNEL);
if (!p->value.data)
goto err_free;
break;
case TB_PROPERTY_TYPE_TEXT:
p->value.text = kzalloc(p->length * 4, GFP_KERNEL);
if (!p->value.text)
goto err_free;
strcpy(p->value.text, property->value.text);
break;
case TB_PROPERTY_TYPE_VALUE:
p->value.immediate = property->value.immediate;
break;
default:
break;
}
list_add_tail(&p->list, &d->properties);
}
return d;
err_free:
kfree(p);
tb_property_free_dir(d);
return NULL;
}
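
A short sketch of the intended use (illustrative; this mirrors how update_property_block() in xdomain.c later instantiates a per-connection copy of the global template):

	static struct tb_property_dir *
	example_instantiate(const struct tb_property_dir *tmpl, int max_hopid)
	{
		struct tb_property_dir *dir;

		dir = tb_property_copy_dir(tmpl);
		if (!dir)
			return NULL;

		/* The copy can be modified without touching @tmpl */
		tb_property_add_immediate(dir, "maxhopid", max_hopid);
		return dir;
	}

The caller releases the copy with tb_property_free_dir() when done.
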
/**
* tb_property_add_immediate() - Add immediate property to directory
* @parent: Directory to add the property

View File

@@ -626,28 +626,6 @@ int tb_port_add_nfc_credits(struct tb_port *port, int credits)
TB_CFG_PORT, ADP_CS_4, 1);
}
/**
* tb_port_set_initial_credits() - Set initial port link credits allocated
* @port: Port to set the initial credits
* @credits: Number of credits to allocate
*
* Set initial credits value to be used for ingress shared buffering.
*/
int tb_port_set_initial_credits(struct tb_port *port, u32 credits)
{
u32 data;
int ret;
ret = tb_port_read(port, &data, TB_CFG_PORT, ADP_CS_5, 1);
if (ret)
return ret;
data &= ~ADP_CS_5_LCA_MASK;
data |= (credits << ADP_CS_5_LCA_SHIFT) & ADP_CS_5_LCA_MASK;
return tb_port_write(port, &data, TB_CFG_PORT, ADP_CS_5, 1);
}
/**
* tb_port_clear_counter() - clear a counter in TB_CFG_COUNTER
* @port: Port whose counters to clear
@@ -1331,7 +1309,7 @@ int tb_switch_reset(struct tb_switch *sw)
TB_CFG_SWITCH, 2, 2);
if (res.err)
return res.err;
res = tb_cfg_reset(sw->tb->ctl, tb_route(sw), TB_CFG_DEFAULT_TIMEOUT);
res = tb_cfg_reset(sw->tb->ctl, tb_route(sw));
if (res.err > 0)
return -EIO;
return res.err;
@@ -1762,6 +1740,18 @@ static struct attribute *switch_attrs[] = {
NULL,
};
static bool has_port(const struct tb_switch *sw, enum tb_port_type type)
{
const struct tb_port *port;
tb_switch_for_each_port(sw, port) {
if (!port->disabled && port->config.type == type)
return true;
}
return false;
}
static umode_t switch_attr_is_visible(struct kobject *kobj,
struct attribute *attr, int n)
{
@@ -1770,7 +1760,8 @@ static umode_t switch_attr_is_visible(struct kobject *kobj,
if (attr == &dev_attr_authorized.attr) {
if (sw->tb->security_level == TB_SECURITY_NOPCIE ||
sw->tb->security_level == TB_SECURITY_DPONLY)
sw->tb->security_level == TB_SECURITY_DPONLY ||
!has_port(sw, TB_TYPE_PCIE_UP))
return 0;
} else if (attr == &dev_attr_device.attr) {
if (!sw->device)
@@ -1849,6 +1840,39 @@ static void tb_switch_release(struct device *dev)
kfree(sw);
}
static int tb_switch_uevent(struct device *dev, struct kobj_uevent_env *env)
{
struct tb_switch *sw = tb_to_switch(dev);
const char *type;
if (sw->config.thunderbolt_version == USB4_VERSION_1_0) {
if (add_uevent_var(env, "USB4_VERSION=1.0"))
return -ENOMEM;
}
if (!tb_route(sw)) {
type = "host";
} else {
const struct tb_port *port;
bool hub = false;
/* Device is hub if it has any downstream ports */
tb_switch_for_each_port(sw, port) {
if (!port->disabled && !tb_is_upstream_port(port) &&
tb_port_is_null(port)) {
hub = true;
break;
}
}
type = hub ? "hub" : "device";
}
if (add_uevent_var(env, "USB4_TYPE=%s", type))
return -ENOMEM;
return 0;
}
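
For illustration (not part of the diff), the uevent payload this callback produces for a USB4 hub attached to the domain would look roughly like:

	USB4_VERSION=1.0
	USB4_TYPE=hub

A host router reports USB4_TYPE=host instead, a leaf device with no enabled downstream null ports reports USB4_TYPE=device, and USB4_VERSION is only added when the router's config reports USB4_VERSION_1_0.
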
/*
* Currently only need to provide the callbacks. Everything else is handled
* in the connection manager.
@@ -1882,6 +1906,7 @@ static const struct dev_pm_ops tb_switch_pm_ops = {
struct device_type tb_switch_type = {
.name = "thunderbolt_device",
.release = tb_switch_release,
.uevent = tb_switch_uevent,
.pm = &tb_switch_pm_ops,
};
@@ -2542,6 +2567,8 @@ int tb_switch_add(struct tb_switch *sw)
}
tb_sw_dbg(sw, "uid: %#llx\n", sw->uid);
tb_check_quirks(sw);
ret = tb_switch_set_uuid(sw);
if (ret) {
dev_err(&sw->dev, "failed to set UUID\n");

View File

@@ -15,6 +15,8 @@
#include "tb_regs.h"
#include "tunnel.h"
#define TB_TIMEOUT 100 /* ms */
/**
* struct tb_cm - Simple Thunderbolt connection manager
* @tunnel_list: List of active tunnels
@@ -1077,7 +1079,9 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
return 0;
}
static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
struct tb_cm *tcm = tb_priv(tb);
struct tb_port *nhi_port, *dst_port;
@@ -1089,9 +1093,8 @@ static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
nhi_port = tb_switch_find_port(tb->root_switch, TB_TYPE_NHI);
mutex_lock(&tb->lock);
tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, xd->transmit_ring,
xd->transmit_path, xd->receive_ring,
xd->receive_path);
tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, transmit_path,
transmit_ring, receive_path, receive_ring);
if (!tunnel) {
mutex_unlock(&tb->lock);
return -ENOMEM;
@@ -1110,29 +1113,40 @@ static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
return 0;
}
static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
struct tb_port *dst_port;
struct tb_tunnel *tunnel;
struct tb_cm *tcm = tb_priv(tb);
struct tb_port *nhi_port, *dst_port;
struct tb_tunnel *tunnel, *n;
struct tb_switch *sw;
sw = tb_to_switch(xd->dev.parent);
dst_port = tb_port_at(xd->route, sw);
nhi_port = tb_switch_find_port(tb->root_switch, TB_TYPE_NHI);
/*
* It is possible that the tunnel was already torn down (in
* case of cable disconnect) so it is fine if we cannot find it
* here anymore.
*/
tunnel = tb_find_tunnel(tb, TB_TUNNEL_DMA, NULL, dst_port);
tb_deactivate_and_free_tunnel(tunnel);
list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) {
if (!tb_tunnel_is_dma(tunnel))
continue;
if (tunnel->src_port != nhi_port || tunnel->dst_port != dst_port)
continue;
if (tb_tunnel_match_dma(tunnel, transmit_path, transmit_ring,
receive_path, receive_ring))
tb_deactivate_and_free_tunnel(tunnel);
}
}
static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
if (!xd->is_unplugged) {
mutex_lock(&tb->lock);
__tb_disconnect_xdomain_paths(tb, xd);
__tb_disconnect_xdomain_paths(tb, xd, transmit_path,
transmit_ring, receive_path,
receive_ring);
mutex_unlock(&tb->lock);
}
return 0;
@@ -1208,12 +1222,12 @@ static void tb_handle_hotplug(struct work_struct *work)
* tb_xdomain_remove() so setting XDomain as
* unplugged here prevents deadlock if they call
* tb_xdomain_disable_paths(). We will tear down
* the path below.
* all the tunnels below.
*/
xd->is_unplugged = true;
tb_xdomain_remove(xd);
port->xdomain = NULL;
__tb_disconnect_xdomain_paths(tb, xd);
__tb_disconnect_xdomain_paths(tb, xd, -1, -1, -1, -1);
tb_xdomain_put(xd);
tb_port_unconfigure_xdomain(port);
} else if (tb_port_is_dpout(port) || tb_port_is_dpin(port)) {
@@ -1562,7 +1576,7 @@ struct tb *tb_probe(struct tb_nhi *nhi)
struct tb_cm *tcm;
struct tb *tb;
tb = tb_domain_alloc(nhi, sizeof(*tcm));
tb = tb_domain_alloc(nhi, TB_TIMEOUT, sizeof(*tcm));
if (!tb)
return NULL;

View File

@@ -406,8 +406,12 @@ struct tb_cm_ops {
int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw,
const u8 *challenge, u8 *response);
int (*disconnect_pcie_paths)(struct tb *tb);
int (*approve_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd);
int (*disconnect_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd);
int (*approve_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring);
int (*disconnect_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring);
int (*usb4_switch_op)(struct tb_switch *sw, u16 opcode, u32 *metadata,
u8 *status, const void *tx_data, size_t tx_data_len,
void *rx_data, size_t rx_data_len);
@@ -625,7 +629,7 @@ void tb_domain_exit(void);
int tb_xdomain_init(void);
void tb_xdomain_exit(void);
struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize);
struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize);
int tb_domain_add(struct tb *tb);
void tb_domain_remove(struct tb *tb);
int tb_domain_suspend_noirq(struct tb *tb);
@@ -641,8 +645,12 @@ int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw);
int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw);
int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw);
int tb_domain_disconnect_pcie_paths(struct tb *tb);
int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd);
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd);
int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring);
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring);
int tb_domain_disconnect_all_paths(struct tb *tb);
static inline struct tb *tb_domain_get(struct tb *tb)
@@ -787,32 +795,6 @@ static inline bool tb_switch_is_titan_ridge(const struct tb_switch *sw)
return false;
}
static inline bool tb_switch_is_ice_lake(const struct tb_switch *sw)
{
if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) {
switch (sw->config.device_id) {
case PCI_DEVICE_ID_INTEL_ICL_NHI0:
case PCI_DEVICE_ID_INTEL_ICL_NHI1:
return true;
}
}
return false;
}
static inline bool tb_switch_is_tiger_lake(const struct tb_switch *sw)
{
if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) {
switch (sw->config.device_id) {
case PCI_DEVICE_ID_INTEL_TGL_NHI0:
case PCI_DEVICE_ID_INTEL_TGL_NHI1:
case PCI_DEVICE_ID_INTEL_TGL_H_NHI0:
case PCI_DEVICE_ID_INTEL_TGL_H_NHI1:
return true;
}
}
return false;
}
/**
* tb_switch_is_usb4() - Is the switch USB4 compliant
* @sw: Switch to check
@@ -860,7 +842,6 @@ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged);
int tb_port_add_nfc_credits(struct tb_port *port, int credits);
int tb_port_set_initial_credits(struct tb_port *port, u32 credits);
int tb_port_clear_counter(struct tb_port *port, int counter);
int tb_port_unlock(struct tb_port *port);
int tb_port_enable(struct tb_port *port);

View File

@@ -119,6 +119,7 @@ static struct tb_switch *alloc_host(struct kunit *test)
sw->ports[7].config.type = TB_TYPE_NHI;
sw->ports[7].config.max_in_hop_id = 11;
sw->ports[7].config.max_out_hop_id = 11;
sw->ports[7].config.nfc_credits = 0x41800000;
sw->ports[8].config.type = TB_TYPE_PCIE_DOWN;
sw->ports[8].config.max_in_hop_id = 8;
@@ -1594,6 +1595,489 @@ static void tb_test_tunnel_port_on_path(struct kunit *test)
tb_tunnel_free(dp_tunnel);
}
static void tb_test_tunnel_dma(struct kunit *test)
{
struct tb_port *nhi, *port;
struct tb_tunnel *tunnel;
struct tb_switch *host;
/*
* Create DMA tunnel from NHI to port 1 and back.
*
* [Host 1]
* 1 ^ In HopID 1 -> Out HopID 8
* |
* v In HopID 8 -> Out HopID 1
* ............ Domain border
* |
* [Host 2]
*/
host = alloc_host(test);
nhi = &host->ports[7];
port = &host->ports[1];
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA);
KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi);
KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
/* RX path */
KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 1);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, port);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 8);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, nhi);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].next_hop_index, 1);
/* TX path */
KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 1);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, nhi);
KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[0].in_hop_index, 1);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].out_port, port);
KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[0].next_hop_index, 8);
tb_tunnel_free(tunnel);
}
static void tb_test_tunnel_dma_rx(struct kunit *test)
{
struct tb_port *nhi, *port;
struct tb_tunnel *tunnel;
struct tb_switch *host;
/*
* Create DMA RX tunnel from port 1 to NHI.
*
* [Host 1]
* 1 ^
* |
* | In HopID 15 -> Out HopID 2
* ............ Domain border
* |
* [Host 2]
*/
host = alloc_host(test);
nhi = &host->ports[7];
port = &host->ports[1];
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, -1, -1, 15, 2);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA);
KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi);
KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)1);
/* RX path */
KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 1);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, port);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 15);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, nhi);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].next_hop_index, 2);
tb_tunnel_free(tunnel);
}
static void tb_test_tunnel_dma_tx(struct kunit *test)
{
struct tb_port *nhi, *port;
struct tb_tunnel *tunnel;
struct tb_switch *host;
/*
* Create DMA TX tunnel from NHI to port 1.
*
* [Host 1]
* 1 | In HopID 2 -> Out HopID 15
* |
* v
* ............ Domain border
* |
* [Host 2]
*/
host = alloc_host(test);
nhi = &host->ports[7];
port = &host->ports[1];
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 15, 2, -1, -1);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA);
KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi);
KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)1);
/* TX path */
KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 1);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, nhi);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, port);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].next_hop_index, 15);
tb_tunnel_free(tunnel);
}
static void tb_test_tunnel_dma_chain(struct kunit *test)
{
struct tb_switch *host, *dev1, *dev2;
struct tb_port *nhi, *port;
struct tb_tunnel *tunnel;
/*
* Create DMA tunnel from NHI to Device #2 port 3 and back.
*
* [Host 1]
* 1 ^ In HopID 1 -> Out HopID x
* |
* 1 | In HopID x -> Out HopID 1
* [Device #1]
* 7 \
* 1 \
* [Device #2]
* 3 | In HopID x -> Out HopID 8
* |
* v In HopID 8 -> Out HopID x
* ............ Domain border
* |
* [Host 2]
*/
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, true);
dev2 = alloc_dev_default(test, dev1, 0x701, true);
nhi = &host->ports[7];
port = &dev2->ports[3];
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA);
KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi);
KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
/* RX path */
KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 3);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, port);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 8);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port,
&dev2->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[1].in_port,
&dev1->ports[7]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[1].out_port,
&dev1->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[2].in_port,
&host->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[2].out_port, nhi);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[2].next_hop_index, 1);
/* TX path */
KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 3);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, nhi);
KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[0].in_hop_index, 1);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[1].in_port,
&dev1->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[1].out_port,
&dev1->ports[7]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[2].in_port,
&dev2->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[2].out_port, port);
KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[2].next_hop_index, 8);
tb_tunnel_free(tunnel);
}
static void tb_test_tunnel_dma_match(struct kunit *test)
{
struct tb_port *nhi, *port;
struct tb_tunnel *tunnel;
struct tb_switch *host;
host = alloc_host(test);
nhi = &host->ports[7];
port = &host->ports[1];
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 15, 1, 15, 1);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, 1, 15, 1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 8, 1, 15, 1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, 1, -1, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, -1, -1, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, 1, -1, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, 1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, -1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 8, -1, 8, -1));
tb_tunnel_free(tunnel);
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 15, 1, -1, -1);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, 1, -1, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, -1, -1, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, 1, -1, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, -1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 15, 1, 15, 1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 15, 11, -1, -1));
tb_tunnel_free(tunnel);
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, -1, -1, 15, 11);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 11));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, 11));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, -1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, -1, -1, 10, 11));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 15, 11, -1, -1));
tb_tunnel_free(tunnel);
}
static const u32 root_directory[] = {
0x55584401, /* "UXD" v1 */
0x00000018, /* Root directory length */
0x76656e64, /* "vend" */
0x6f726964, /* "orid" */
0x76000001, /* "v" R 1 */
0x00000a27, /* Immediate value, ! Vendor ID */
0x76656e64, /* "vend" */
0x6f726964, /* "orid" */
0x74000003, /* "t" R 3 */
0x0000001a, /* Text leaf offset, (“Apple Inc.”) */
0x64657669, /* "devi" */
0x63656964, /* "ceid" */
0x76000001, /* "v" R 1 */
0x0000000a, /* Immediate value, ! Device ID */
0x64657669, /* "devi" */
0x63656964, /* "ceid" */
0x74000003, /* "t" R 3 */
0x0000001d, /* Text leaf offset, (“Macintosh”) */
0x64657669, /* "devi" */
0x63657276, /* "cerv" */
0x76000001, /* "v" R 1 */
0x80000100, /* Immediate value, Device Revision */
0x6e657477, /* "netw" */
0x6f726b00, /* "ork" */
0x44000014, /* "D" R 20 */
0x00000021, /* Directory data offset, (Network Directory) */
0x4170706c, /* "Appl" */
0x6520496e, /* "e In" */
0x632e0000, /* "c." ! */
0x4d616369, /* "Maci" */
0x6e746f73, /* "ntos" */
0x68000000, /* "h" */
0x00000000, /* padding */
0xca8961c6, /* Directory UUID, Network Directory */
0x9541ce1c, /* Directory UUID, Network Directory */
0x5949b8bd, /* Directory UUID, Network Directory */
0x4f5a5f2e, /* Directory UUID, Network Directory */
0x70727463, /* "prtc" */
0x69640000, /* "id" */
0x76000001, /* "v" R 1 */
0x00000001, /* Immediate value, Network Protocol ID */
0x70727463, /* "prtc" */
0x76657273, /* "vers" */
0x76000001, /* "v" R 1 */
0x00000001, /* Immediate value, Network Protocol Version */
0x70727463, /* "prtc" */
0x72657673, /* "revs" */
0x76000001, /* "v" R 1 */
0x00000001, /* Immediate value, Network Protocol Revision */
0x70727463, /* "prtc" */
0x73746e73, /* "stns" */
0x76000001, /* "v" R 1 */
0x00000000, /* Immediate value, Network Protocol Settings */
};
static const uuid_t network_dir_uuid =
UUID_INIT(0xc66189ca, 0x1cce, 0x4195,
0xbd, 0xb8, 0x49, 0x59, 0x2e, 0x5f, 0x5a, 0x4f);
static void tb_test_property_parse(struct kunit *test)
{
struct tb_property_dir *dir, *network_dir;
struct tb_property *p;
dir = tb_property_parse_dir(root_directory, ARRAY_SIZE(root_directory));
KUNIT_ASSERT_TRUE(test, dir != NULL);
p = tb_property_find(dir, "foo", TB_PROPERTY_TYPE_TEXT);
KUNIT_ASSERT_TRUE(test, !p);
p = tb_property_find(dir, "vendorid", TB_PROPERTY_TYPE_TEXT);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_STREQ(test, p->value.text, "Apple Inc.");
p = tb_property_find(dir, "vendorid", TB_PROPERTY_TYPE_VALUE);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0xa27);
p = tb_property_find(dir, "deviceid", TB_PROPERTY_TYPE_TEXT);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_STREQ(test, p->value.text, "Macintosh");
p = tb_property_find(dir, "deviceid", TB_PROPERTY_TYPE_VALUE);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0xa);
p = tb_property_find(dir, "missing", TB_PROPERTY_TYPE_DIRECTORY);
KUNIT_ASSERT_TRUE(test, !p);
p = tb_property_find(dir, "network", TB_PROPERTY_TYPE_DIRECTORY);
KUNIT_ASSERT_TRUE(test, p != NULL);
network_dir = p->value.dir;
KUNIT_EXPECT_TRUE(test, uuid_equal(network_dir->uuid, &network_dir_uuid));
p = tb_property_find(network_dir, "prtcid", TB_PROPERTY_TYPE_VALUE);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x1);
p = tb_property_find(network_dir, "prtcvers", TB_PROPERTY_TYPE_VALUE);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x1);
p = tb_property_find(network_dir, "prtcrevs", TB_PROPERTY_TYPE_VALUE);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x1);
p = tb_property_find(network_dir, "prtcstns", TB_PROPERTY_TYPE_VALUE);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x0);
p = tb_property_find(network_dir, "deviceid", TB_PROPERTY_TYPE_VALUE);
KUNIT_EXPECT_TRUE(test, !p);
p = tb_property_find(network_dir, "deviceid", TB_PROPERTY_TYPE_TEXT);
KUNIT_EXPECT_TRUE(test, !p);
tb_property_free_dir(dir);
}
static void tb_test_property_format(struct kunit *test)
{
struct tb_property_dir *dir;
ssize_t block_len;
u32 *block;
int ret, i;
dir = tb_property_parse_dir(root_directory, ARRAY_SIZE(root_directory));
KUNIT_ASSERT_TRUE(test, dir != NULL);
ret = tb_property_format_dir(dir, NULL, 0);
KUNIT_ASSERT_EQ(test, ret, (int)ARRAY_SIZE(root_directory));
block_len = ret;
block = kunit_kzalloc(test, block_len * sizeof(u32), GFP_KERNEL);
KUNIT_ASSERT_TRUE(test, block != NULL);
ret = tb_property_format_dir(dir, block, block_len);
KUNIT_EXPECT_EQ(test, ret, 0);
for (i = 0; i < ARRAY_SIZE(root_directory); i++)
KUNIT_EXPECT_EQ(test, root_directory[i], block[i]);
tb_property_free_dir(dir);
}
static void compare_dirs(struct kunit *test, struct tb_property_dir *d1,
struct tb_property_dir *d2)
{
struct tb_property *p1, *p2, *tmp;
int n1, n2, i;
if (d1->uuid) {
KUNIT_ASSERT_TRUE(test, d2->uuid != NULL);
KUNIT_ASSERT_TRUE(test, uuid_equal(d1->uuid, d2->uuid));
} else {
KUNIT_ASSERT_TRUE(test, d2->uuid == NULL);
}
n1 = 0;
tb_property_for_each(d1, tmp)
n1++;
KUNIT_ASSERT_NE(test, n1, 0);
n2 = 0;
tb_property_for_each(d2, tmp)
n2++;
KUNIT_ASSERT_NE(test, n2, 0);
KUNIT_ASSERT_EQ(test, n1, n2);
p1 = NULL;
p2 = NULL;
for (i = 0; i < n1; i++) {
p1 = tb_property_get_next(d1, p1);
KUNIT_ASSERT_TRUE(test, p1 != NULL);
p2 = tb_property_get_next(d2, p2);
KUNIT_ASSERT_TRUE(test, p2 != NULL);
KUNIT_ASSERT_STREQ(test, &p1->key[0], &p2->key[0]);
KUNIT_ASSERT_EQ(test, p1->type, p2->type);
KUNIT_ASSERT_EQ(test, p1->length, p2->length);
switch (p1->type) {
case TB_PROPERTY_TYPE_DIRECTORY:
KUNIT_ASSERT_TRUE(test, p1->value.dir != NULL);
KUNIT_ASSERT_TRUE(test, p2->value.dir != NULL);
compare_dirs(test, p1->value.dir, p2->value.dir);
break;
case TB_PROPERTY_TYPE_DATA:
KUNIT_ASSERT_TRUE(test, p1->value.data != NULL);
KUNIT_ASSERT_TRUE(test, p2->value.data != NULL);
KUNIT_ASSERT_TRUE(test,
!memcmp(p1->value.data, p2->value.data,
p1->length * 4)
);
break;
case TB_PROPERTY_TYPE_TEXT:
KUNIT_ASSERT_TRUE(test, p1->value.text != NULL);
KUNIT_ASSERT_TRUE(test, p2->value.text != NULL);
KUNIT_ASSERT_STREQ(test, p1->value.text, p2->value.text);
break;
case TB_PROPERTY_TYPE_VALUE:
KUNIT_ASSERT_EQ(test, p1->value.immediate,
p2->value.immediate);
break;
default:
KUNIT_FAIL(test, "unexpected property type");
break;
}
}
}
static void tb_test_property_copy(struct kunit *test)
{
struct tb_property_dir *src, *dst;
u32 *block;
int ret, i;
src = tb_property_parse_dir(root_directory, ARRAY_SIZE(root_directory));
KUNIT_ASSERT_TRUE(test, src != NULL);
dst = tb_property_copy_dir(src);
KUNIT_ASSERT_TRUE(test, dst != NULL);
/* Compare the structures */
compare_dirs(test, src, dst);
/* Compare the resulting property block */
ret = tb_property_format_dir(dst, NULL, 0);
KUNIT_ASSERT_EQ(test, ret, (int)ARRAY_SIZE(root_directory));
block = kunit_kzalloc(test, sizeof(root_directory), GFP_KERNEL);
KUNIT_ASSERT_TRUE(test, block != NULL);
ret = tb_property_format_dir(dst, block, ARRAY_SIZE(root_directory));
KUNIT_EXPECT_TRUE(test, !ret);
for (i = 0; i < ARRAY_SIZE(root_directory); i++)
KUNIT_EXPECT_EQ(test, root_directory[i], block[i]);
tb_property_free_dir(dst);
tb_property_free_dir(src);
}
static struct kunit_case tb_test_cases[] = {
KUNIT_CASE(tb_test_path_basic),
KUNIT_CASE(tb_test_path_not_connected_walk),
@@ -1616,6 +2100,14 @@ static struct kunit_case tb_test_cases[] = {
KUNIT_CASE(tb_test_tunnel_dp_max_length),
KUNIT_CASE(tb_test_tunnel_port_on_path),
KUNIT_CASE(tb_test_tunnel_usb3),
KUNIT_CASE(tb_test_tunnel_dma),
KUNIT_CASE(tb_test_tunnel_dma_rx),
KUNIT_CASE(tb_test_tunnel_dma_tx),
KUNIT_CASE(tb_test_tunnel_dma_chain),
KUNIT_CASE(tb_test_tunnel_dma_match),
KUNIT_CASE(tb_test_property_parse),
KUNIT_CASE(tb_test_property_format),
KUNIT_CASE(tb_test_property_copy),
{ }
};

View File

@@ -794,24 +794,14 @@ static u32 tb_dma_credits(struct tb_port *nhi)
return min(max_credits, 13U);
}
static int tb_dma_activate(struct tb_tunnel *tunnel, bool active)
{
struct tb_port *nhi = tunnel->src_port;
u32 credits;
credits = active ? tb_dma_credits(nhi) : 0;
return tb_port_set_initial_credits(nhi, credits);
}
static void tb_dma_init_path(struct tb_path *path, unsigned int isb,
unsigned int efc, u32 credits)
static void tb_dma_init_path(struct tb_path *path, unsigned int efc, u32 credits)
{
int i;
path->egress_fc_enable = efc;
path->ingress_fc_enable = TB_PATH_ALL;
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_shared_buffer = isb;
path->ingress_shared_buffer = TB_PATH_NONE;
path->priority = 5;
path->weight = 1;
path->clear_fc = true;
@@ -825,28 +815,28 @@ static void tb_dma_init_path(struct tb_path *path, unsigned int isb,
* @tb: Pointer to the domain structure
* @nhi: Host controller port
* @dst: Destination null port which the other domain is connected to
* @transmit_ring: NHI ring number used to send packets towards the
* other domain. Set to %0 if TX path is not needed.
* @transmit_path: HopID used for transmitting packets
* @receive_ring: NHI ring number used to receive packets from the
* other domain. Set to %0 if RX path is not needed.
* @transmit_ring: NHI ring number used to send packets towards the
* other domain. Set to %-1 if TX path is not needed.
* @receive_path: HopID used for receiving packets
* @receive_ring: NHI ring number used to receive packets from the
* other domain. Set to %-1 if RX path is not needed.
*
* Return: Returns a tb_tunnel on success or NULL on failure.
*/
struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
struct tb_port *dst, int transmit_ring,
int transmit_path, int receive_ring,
int receive_path)
struct tb_port *dst, int transmit_path,
int transmit_ring, int receive_path,
int receive_ring)
{
struct tb_tunnel *tunnel;
size_t npaths = 0, i = 0;
struct tb_path *path;
u32 credits;
if (receive_ring)
if (receive_ring > 0)
npaths++;
if (transmit_ring)
if (transmit_ring > 0)
npaths++;
if (WARN_ON(!npaths))
@@ -856,38 +846,96 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
if (!tunnel)
return NULL;
tunnel->activate = tb_dma_activate;
tunnel->src_port = nhi;
tunnel->dst_port = dst;
credits = tb_dma_credits(nhi);
if (receive_ring) {
if (receive_ring > 0) {
path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0,
"DMA RX");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL,
credits);
tb_dma_init_path(path, TB_PATH_SOURCE | TB_PATH_INTERNAL, credits);
tunnel->paths[i++] = path;
}
if (transmit_ring) {
if (transmit_ring > 0) {
path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0,
"DMA TX");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits);
tb_dma_init_path(path, TB_PATH_ALL, credits);
tunnel->paths[i++] = path;
}
return tunnel;
}
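
Note that the argument order changed (each path HopID now precedes its ring) and that %-1, not %0, now disables a direction. A sketch using the same values as the TX-only KUnit test added in this series:

	static struct tb_tunnel *example_tx_only(struct tb *tb,
						 struct tb_port *nhi,
						 struct tb_port *dst)
	{
		/* transmit_path 15 on transmit_ring 2; RX disabled with -1 */
		return tb_tunnel_alloc_dma(tb, nhi, dst, 15, 2, -1, -1);
	}
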
/**
* tb_tunnel_match_dma() - Match DMA tunnel
* @tunnel: Tunnel to match
* @transmit_path: HopID used for transmitting packets. Pass %-1 to ignore.
* @transmit_ring: NHI ring number used to send packets towards the
* other domain. Pass %-1 to ignore.
* @receive_path: HopID used for receiving packets. Pass %-1 to ignore.
* @receive_ring: NHI ring number used to receive packets from the
* other domain. Pass %-1 to ignore.
*
* This function can be used to match a specific DMA tunnel, if there are
* multiple DMA tunnels going through the same XDomain connection.
* Returns true if there is a match and false otherwise.
*/
bool tb_tunnel_match_dma(const struct tb_tunnel *tunnel, int transmit_path,
int transmit_ring, int receive_path, int receive_ring)
{
const struct tb_path *tx_path = NULL, *rx_path = NULL;
int i;
if (!receive_ring || !transmit_ring)
return false;
for (i = 0; i < tunnel->npaths; i++) {
const struct tb_path *path = tunnel->paths[i];
if (!path)
continue;
if (tb_port_is_nhi(path->hops[0].in_port))
tx_path = path;
else if (tb_port_is_nhi(path->hops[path->path_length - 1].out_port))
rx_path = path;
}
if (transmit_ring > 0 || transmit_path > 0) {
if (!tx_path)
return false;
if (transmit_ring > 0 &&
(tx_path->hops[0].in_hop_index != transmit_ring))
return false;
if (transmit_path > 0 &&
(tx_path->hops[tx_path->path_length - 1].next_hop_index != transmit_path))
return false;
}
if (receive_ring > 0 || receive_path > 0) {
if (!rx_path)
return false;
if (receive_path > 0 &&
(rx_path->hops[0].in_hop_index != receive_path))
return false;
if (receive_ring > 0 &&
(rx_path->hops[rx_path->path_length - 1].next_hop_index != receive_ring))
return false;
}
return true;
}
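
A sketch of selective teardown built on the matcher, condensed from what __tb_disconnect_xdomain_paths() in tb.c (shown earlier) does; the freeing helper is local to tb.c and the matched values are illustrative:

	static void example_free_tx_tunnel(struct tb_cm *tcm)
	{
		struct tb_tunnel *tunnel, *n;

		/* %-1 wildcards the fields we do not care about */
		list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) {
			if (tb_tunnel_is_dma(tunnel) &&
			    tb_tunnel_match_dma(tunnel, 15, 1, -1, -1))
				tb_deactivate_and_free_tunnel(tunnel);
		}
	}
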
static int tb_usb3_max_link_rate(struct tb_port *up, struct tb_port *down)
{
int ret, up_max_rate, down_max_rate;

View File

@@ -70,9 +70,11 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
struct tb_port *out, int max_up,
int max_down);
struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
struct tb_port *dst, int transmit_ring,
int transmit_path, int receive_ring,
int receive_path);
struct tb_port *dst, int transmit_path,
int transmit_ring, int receive_path,
int receive_ring);
bool tb_tunnel_match_dma(const struct tb_tunnel *tunnel, int transmit_path,
int transmit_ring, int receive_path, int receive_ring);
struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down);
struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
struct tb_port *down, int max_up,

View File

@@ -12,17 +12,19 @@
#include <linux/kmod.h>
#include <linux/module.h>
#include <linux/pm_runtime.h>
#include <linux/prandom.h>
#include <linux/utsname.h>
#include <linux/uuid.h>
#include <linux/workqueue.h>
#include "tb.h"
#define XDOMAIN_DEFAULT_TIMEOUT 5000 /* ms */
#define XDOMAIN_DEFAULT_TIMEOUT 1000 /* ms */
#define XDOMAIN_UUID_RETRIES 10
#define XDOMAIN_PROPERTIES_RETRIES 60
#define XDOMAIN_PROPERTIES_RETRIES 10
#define XDOMAIN_PROPERTIES_CHANGED_RETRIES 10
#define XDOMAIN_BONDING_WAIT 100 /* ms */
#define XDOMAIN_DEFAULT_MAX_HOPID 15
struct xdomain_request_work {
struct work_struct work;
@@ -34,13 +36,15 @@ static bool tb_xdomain_enabled = true;
module_param_named(xdomain, tb_xdomain_enabled, bool, 0444);
MODULE_PARM_DESC(xdomain, "allow XDomain protocol (default: true)");
/* Serializes access to the properties and protocol handlers below */
/*
* Serializes access to the properties and protocol handlers below. If
* you need to take both this lock and the struct tb_xdomain lock, take
* this one first.
*/
static DEFINE_MUTEX(xdomain_lock);
/* Properties exposed to the remote domains */
static struct tb_property_dir *xdomain_property_dir;
static u32 *xdomain_property_block;
static u32 xdomain_property_block_len;
static u32 xdomain_property_block_gen;
/* Additional protocol handlers */
@@ -385,8 +389,7 @@ static int tb_xdp_properties_request(struct tb_ctl *ctl, u64 route,
}
static int tb_xdp_properties_response(struct tb *tb, struct tb_ctl *ctl,
u64 route, u8 sequence, const uuid_t *src_uuid,
const struct tb_xdp_properties *req)
struct tb_xdomain *xd, u8 sequence, const struct tb_xdp_properties *req)
{
struct tb_xdp_properties_response *res;
size_t total_size;
@@ -398,39 +401,39 @@ static int tb_xdp_properties_response(struct tb *tb, struct tb_ctl *ctl,
* protocol supports forwarding, though, for which we might add
* support later on.
*/
if (!uuid_equal(src_uuid, &req->dst_uuid)) {
tb_xdp_error_response(ctl, route, sequence,
if (!uuid_equal(xd->local_uuid, &req->dst_uuid)) {
tb_xdp_error_response(ctl, xd->route, sequence,
ERROR_UNKNOWN_DOMAIN);
return 0;
}
mutex_lock(&xdomain_lock);
mutex_lock(&xd->lock);
if (req->offset >= xdomain_property_block_len) {
mutex_unlock(&xdomain_lock);
if (req->offset >= xd->local_property_block_len) {
mutex_unlock(&xd->lock);
return -EINVAL;
}
len = xdomain_property_block_len - req->offset;
len = xd->local_property_block_len - req->offset;
len = min_t(u16, len, TB_XDP_PROPERTIES_MAX_DATA_LENGTH);
total_size = sizeof(*res) + len * 4;
res = kzalloc(total_size, GFP_KERNEL);
if (!res) {
mutex_unlock(&xdomain_lock);
mutex_unlock(&xd->lock);
return -ENOMEM;
}
tb_xdp_fill_header(&res->hdr, route, sequence, PROPERTIES_RESPONSE,
tb_xdp_fill_header(&res->hdr, xd->route, sequence, PROPERTIES_RESPONSE,
total_size);
res->generation = xdomain_property_block_gen;
res->data_length = xdomain_property_block_len;
res->generation = xd->local_property_block_gen;
res->data_length = xd->local_property_block_len;
res->offset = req->offset;
uuid_copy(&res->src_uuid, src_uuid);
uuid_copy(&res->src_uuid, xd->local_uuid);
uuid_copy(&res->dst_uuid, &req->src_uuid);
memcpy(res->data, &xdomain_property_block[req->offset], len * 4);
memcpy(res->data, &xd->local_property_block[req->offset], len * 4);
mutex_unlock(&xdomain_lock);
mutex_unlock(&xd->lock);
ret = __tb_xdomain_response(ctl, res, total_size,
TB_CFG_PKG_XDOMAIN_RESP);
@@ -512,52 +515,63 @@ void tb_unregister_protocol_handler(struct tb_protocol_handler *handler)
}
EXPORT_SYMBOL_GPL(tb_unregister_protocol_handler);
static int rebuild_property_block(void)
static void update_property_block(struct tb_xdomain *xd)
{
u32 *block, len;
int ret;
ret = tb_property_format_dir(xdomain_property_dir, NULL, 0);
if (ret < 0)
return ret;
len = ret;
block = kcalloc(len, sizeof(u32), GFP_KERNEL);
if (!block)
return -ENOMEM;
ret = tb_property_format_dir(xdomain_property_dir, block, len);
if (ret) {
kfree(block);
return ret;
}
kfree(xdomain_property_block);
xdomain_property_block = block;
xdomain_property_block_len = len;
xdomain_property_block_gen++;
return 0;
}
static void finalize_property_block(void)
{
const struct tb_property *nodename;
/*
* On first XDomain connection we set up the system
* nodename. This is delayed here because userspace may not have it
* set when the driver is first probed.
*/
mutex_lock(&xdomain_lock);
nodename = tb_property_find(xdomain_property_dir, "deviceid",
TB_PROPERTY_TYPE_TEXT);
if (!nodename) {
tb_property_add_text(xdomain_property_dir, "deviceid",
utsname()->nodename);
rebuild_property_block();
mutex_lock(&xd->lock);
/*
* If the local property block is not up-to-date, rebuild it now
* based on the global property template.
*/
if (!xd->local_property_block ||
xd->local_property_block_gen < xdomain_property_block_gen) {
struct tb_property_dir *dir;
int ret, block_len;
u32 *block;
dir = tb_property_copy_dir(xdomain_property_dir);
if (!dir) {
dev_warn(&xd->dev, "failed to copy properties\n");
goto out_unlock;
}
/* Fill in non-static properties now */
tb_property_add_text(dir, "deviceid", utsname()->nodename);
tb_property_add_immediate(dir, "maxhopid", xd->local_max_hopid);
ret = tb_property_format_dir(dir, NULL, 0);
if (ret < 0) {
dev_warn(&xd->dev, "local property block creation failed\n");
tb_property_free_dir(dir);
goto out_unlock;
}
block_len = ret;
block = kcalloc(block_len, sizeof(*block), GFP_KERNEL);
if (!block) {
tb_property_free_dir(dir);
goto out_unlock;
}
ret = tb_property_format_dir(dir, block, block_len);
if (ret) {
dev_warn(&xd->dev, "property block generation failed\n");
tb_property_free_dir(dir);
kfree(block);
goto out_unlock;
}
tb_property_free_dir(dir);
/* Release the previous block */
kfree(xd->local_property_block);
/* Assign new one */
xd->local_property_block = block;
xd->local_property_block_len = block_len;
xd->local_property_block_gen = xdomain_property_block_gen;
}
out_unlock:
mutex_unlock(&xd->lock);
mutex_unlock(&xdomain_lock);
}
@@ -568,6 +582,7 @@ static void tb_xdp_handle_request(struct work_struct *work)
const struct tb_xdomain_header *xhdr = &pkg->xd_hdr;
struct tb *tb = xw->tb;
struct tb_ctl *ctl = tb->ctl;
struct tb_xdomain *xd;
const uuid_t *uuid;
int ret = 0;
u32 sequence;
@@ -589,17 +604,21 @@ static void tb_xdp_handle_request(struct work_struct *work)
goto out;
}
finalize_property_block();
tb_dbg(tb, "%llx: received XDomain request %#x\n", route, pkg->type);
xd = tb_xdomain_find_by_route_locked(tb, route);
if (xd)
update_property_block(xd);
switch (pkg->type) {
case PROPERTIES_REQUEST:
ret = tb_xdp_properties_response(tb, ctl, route, sequence, uuid,
(const struct tb_xdp_properties *)pkg);
if (xd) {
ret = tb_xdp_properties_response(tb, ctl, xd, sequence,
(const struct tb_xdp_properties *)pkg);
}
break;
case PROPERTIES_CHANGED_REQUEST: {
struct tb_xdomain *xd;
case PROPERTIES_CHANGED_REQUEST:
ret = tb_xdp_properties_changed_response(ctl, route, sequence);
/*
@@ -607,17 +626,11 @@ static void tb_xdp_handle_request(struct work_struct *work)
* the xdomain related to this connection as well in
* case there is a change in services it offers.
*/
xd = tb_xdomain_find_by_route_locked(tb, route);
if (xd) {
if (device_is_registered(&xd->dev)) {
queue_delayed_work(tb->wq, &xd->get_properties_work,
msecs_to_jiffies(50));
}
tb_xdomain_put(xd);
if (xd && device_is_registered(&xd->dev)) {
queue_delayed_work(tb->wq, &xd->get_properties_work,
msecs_to_jiffies(50));
}
break;
}
case UUID_REQUEST_OLD:
case UUID_REQUEST:
@@ -630,6 +643,8 @@ static void tb_xdp_handle_request(struct work_struct *work)
break;
}
tb_xdomain_put(xd);
if (ret) {
tb_warn(tb, "failed to send XDomain response for %#x\n",
pkg->type);
@@ -811,7 +826,7 @@ static int remove_missing_service(struct device *dev, void *data)
if (!svc)
return 0;
if (!tb_property_find(xd->properties, svc->key,
if (!tb_property_find(xd->remote_properties, svc->key,
TB_PROPERTY_TYPE_DIRECTORY))
device_unregister(dev);
@@ -871,7 +886,7 @@ static void enumerate_services(struct tb_xdomain *xd)
device_for_each_child_reverse(&xd->dev, xd, remove_missing_service);
/* Then re-enumerate properties creating new services as we go */
tb_property_for_each(xd->properties, p) {
tb_property_for_each(xd->remote_properties, p) {
if (p->type != TB_PROPERTY_TYPE_DIRECTORY)
continue;
@@ -928,6 +943,14 @@ static int populate_properties(struct tb_xdomain *xd,
return -EINVAL;
xd->vendor = p->value.immediate;
p = tb_property_find(dir, "maxhopid", TB_PROPERTY_TYPE_VALUE);
/*
* USB4 inter-domain spec suggests using 15 as HopID if the
* other end does not announce it in a property. This is for
* TBT3 compatibility.
*/
xd->remote_max_hopid = p ? p->value.immediate : XDOMAIN_DEFAULT_MAX_HOPID;
kfree(xd->device_name);
xd->device_name = NULL;
kfree(xd->vendor_name);
@@ -944,19 +967,6 @@ static int populate_properties(struct tb_xdomain *xd,
return 0;
}
/* Called with @xd->lock held */
static void tb_xdomain_restore_paths(struct tb_xdomain *xd)
{
if (!xd->resume)
return;
xd->resume = false;
if (xd->transmit_path) {
dev_dbg(&xd->dev, "re-establishing DMA path\n");
tb_domain_approve_xdomain_paths(xd->tb, xd);
}
}
static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd)
{
return tb_to_switch(xd->dev.parent);
@@ -1002,9 +1012,12 @@ static void tb_xdomain_get_uuid(struct work_struct *work)
uuid_t uuid;
int ret;
dev_dbg(&xd->dev, "requesting remote UUID\n");
ret = tb_xdp_uuid_request(tb->ctl, xd->route, xd->uuid_retries, &uuid);
if (ret < 0) {
if (xd->uuid_retries-- > 0) {
dev_dbg(&xd->dev, "failed to request UUID, retrying\n");
queue_delayed_work(xd->tb->wq, &xd->get_uuid_work,
msecs_to_jiffies(100));
} else {
@@ -1013,6 +1026,8 @@ static void tb_xdomain_get_uuid(struct work_struct *work)
return;
}
dev_dbg(&xd->dev, "got remote UUID %pUb\n", &uuid);
if (uuid_equal(&uuid, xd->local_uuid))
dev_dbg(&xd->dev, "intra-domain loop detected\n");
@@ -1052,11 +1067,15 @@ static void tb_xdomain_get_properties(struct work_struct *work)
u32 gen = 0;
int ret;
dev_dbg(&xd->dev, "requesting remote properties\n");
ret = tb_xdp_properties_request(tb->ctl, xd->route, xd->local_uuid,
xd->remote_uuid, xd->properties_retries,
&block, &gen);
if (ret < 0) {
if (xd->properties_retries-- > 0) {
dev_dbg(&xd->dev,
"failed to request remote properties, retrying\n");
queue_delayed_work(xd->tb->wq, &xd->get_properties_work,
msecs_to_jiffies(1000));
} else {
@@ -1073,16 +1092,8 @@ static void tb_xdomain_get_properties(struct work_struct *work)
mutex_lock(&xd->lock);
/* Only accept newer generation properties */
if (xd->properties && gen <= xd->property_block_gen) {
/*
* On resume it is likely that the properties block is
* not changed (unless the other end added or removed
* services). However, we need to make sure the existing
* DMA paths are restored properly.
*/
tb_xdomain_restore_paths(xd);
if (xd->remote_properties && gen <= xd->remote_property_block_gen)
goto err_free_block;
}
dir = tb_property_parse_dir(block, ret);
if (!dir) {
@@ -1097,18 +1108,16 @@ static void tb_xdomain_get_properties(struct work_struct *work)
}
/* Release the existing one */
if (xd->properties) {
tb_property_free_dir(xd->properties);
if (xd->remote_properties) {
tb_property_free_dir(xd->remote_properties);
update = true;
}
xd->properties = dir;
xd->property_block_gen = gen;
xd->remote_properties = dir;
xd->remote_property_block_gen = gen;
tb_xdomain_update_link_attributes(xd);
tb_xdomain_restore_paths(xd);
mutex_unlock(&xd->lock);
kfree(block);
@@ -1123,6 +1132,11 @@ static void tb_xdomain_get_properties(struct work_struct *work)
dev_err(&xd->dev, "failed to add XDomain device\n");
return;
}
dev_info(&xd->dev, "new host found, vendor=%#x device=%#x\n",
xd->vendor, xd->device);
if (xd->vendor_name && xd->device_name)
dev_info(&xd->dev, "%s %s\n", xd->vendor_name,
xd->device_name);
} else {
kobject_uevent(&xd->dev.kobj, KOBJ_CHANGE);
}
@@ -1143,13 +1157,19 @@ static void tb_xdomain_properties_changed(struct work_struct *work)
properties_changed_work.work);
int ret;
dev_dbg(&xd->dev, "sending properties changed notification\n");
ret = tb_xdp_properties_changed_request(xd->tb->ctl, xd->route,
xd->properties_changed_retries, xd->local_uuid);
if (ret) {
if (xd->properties_changed_retries-- > 0)
if (xd->properties_changed_retries-- > 0) {
dev_dbg(&xd->dev,
"failed to send properties changed notification, retrying\n");
queue_delayed_work(xd->tb->wq,
&xd->properties_changed_work,
msecs_to_jiffies(1000));
}
dev_err(&xd->dev, "failed to send properties changed notification\n");
return;
}
@@ -1180,6 +1200,15 @@ device_name_show(struct device *dev, struct device_attribute *attr, char *buf)
}
static DEVICE_ATTR_RO(device_name);
static ssize_t maxhopid_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev);
return sprintf(buf, "%d\n", xd->remote_max_hopid);
}
static DEVICE_ATTR_RO(maxhopid);
static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
@@ -1238,6 +1267,7 @@ static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL);
static struct attribute *xdomain_attrs[] = {
&dev_attr_device.attr,
&dev_attr_device_name.attr,
&dev_attr_maxhopid.attr,
&dev_attr_rx_lanes.attr,
&dev_attr_rx_speed.attr,
&dev_attr_tx_lanes.attr,
@@ -1263,7 +1293,10 @@ static void tb_xdomain_release(struct device *dev)
put_device(xd->dev.parent);
tb_property_free_dir(xd->properties);
kfree(xd->local_property_block);
tb_property_free_dir(xd->remote_properties);
ida_destroy(&xd->out_hopids);
ida_destroy(&xd->in_hopids);
ida_destroy(&xd->service_ids);
kfree(xd->local_uuid);
@@ -1310,15 +1343,7 @@ static int __maybe_unused tb_xdomain_suspend(struct device *dev)
static int __maybe_unused tb_xdomain_resume(struct device *dev)
{
struct tb_xdomain *xd = tb_to_xdomain(dev);
/*
* Ask tb_xdomain_get_properties() to restore any existing DMA
* paths after properties are re-read.
*/
xd->resume = true;
start_handshake(xd);
start_handshake(tb_to_xdomain(dev));
return 0;
}
@@ -1363,7 +1388,10 @@ struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent,
xd->tb = tb;
xd->route = route;
xd->local_max_hopid = down->config.max_in_hop_id;
ida_init(&xd->service_ids);
ida_init(&xd->in_hopids);
ida_init(&xd->out_hopids);
mutex_init(&xd->lock);
INIT_DELAYED_WORK(&xd->get_uuid_work, tb_xdomain_get_uuid);
INIT_DELAYED_WORK(&xd->get_properties_work, tb_xdomain_get_properties);
@@ -1390,6 +1418,10 @@ struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent,
xd->dev.groups = xdomain_attr_groups;
dev_set_name(&xd->dev, "%u-%llx", tb->index, route);
dev_dbg(&xd->dev, "local UUID %pUb\n", local_uuid);
if (remote_uuid)
dev_dbg(&xd->dev, "remote UUID %pUb\n", remote_uuid);
/*
* This keeps the DMA powered on as long as we have active
* connection to another host.
@@ -1452,10 +1484,12 @@ void tb_xdomain_remove(struct tb_xdomain *xd)
pm_runtime_put_noidle(&xd->dev);
pm_runtime_set_suspended(&xd->dev);
if (!device_is_registered(&xd->dev))
if (!device_is_registered(&xd->dev)) {
put_device(&xd->dev);
else
} else {
dev_info(&xd->dev, "host disconnected\n");
device_unregister(&xd->dev);
}
}
/**
@ -1522,74 +1556,119 @@ void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd)
}
EXPORT_SYMBOL_GPL(tb_xdomain_lane_bonding_disable);
/**
* tb_xdomain_alloc_in_hopid() - Allocate input HopID for tunneling
* @xd: XDomain connection
* @hopid: Preferred HopID or %-1 for next available
*
* Returns allocated HopID or negative errno. Specifically returns
* %-ENOSPC if there are no more available HopIDs. Returned HopID is
* guaranteed to be within range supported by the input lane adapter.
* Call tb_xdomain_release_in_hopid() to release the allocated HopID.
*/
int tb_xdomain_alloc_in_hopid(struct tb_xdomain *xd, int hopid)
{
if (hopid < 0)
hopid = TB_PATH_MIN_HOPID;
if (hopid < TB_PATH_MIN_HOPID || hopid > xd->local_max_hopid)
return -EINVAL;
return ida_alloc_range(&xd->in_hopids, hopid, xd->local_max_hopid,
GFP_KERNEL);
}
EXPORT_SYMBOL_GPL(tb_xdomain_alloc_in_hopid);
/**
* tb_xdomain_alloc_out_hopid() - Allocate output HopID for tunneling
* @xd: XDomain connection
* @hopid: Preferred HopID or %-1 for next available
*
* Returns allocated HopID or negative errno. Specifically returns
* %-ENOSPC if there are no more available HopIDs. Returned HopID is
* guaranteed to be within range supported by the output lane adapter.
* Call tb_xdomain_release_out_hopid() to release the allocated HopID.
*/
int tb_xdomain_alloc_out_hopid(struct tb_xdomain *xd, int hopid)
{
if (hopid < 0)
hopid = TB_PATH_MIN_HOPID;
if (hopid < TB_PATH_MIN_HOPID || hopid > xd->remote_max_hopid)
return -EINVAL;
return ida_alloc_range(&xd->out_hopids, hopid, xd->remote_max_hopid,
GFP_KERNEL);
}
EXPORT_SYMBOL_GPL(tb_xdomain_alloc_out_hopid);
/**
* tb_xdomain_release_in_hopid() - Release input HopID
* @xd: XDomain connection
* @hopid: HopID to release
*/
void tb_xdomain_release_in_hopid(struct tb_xdomain *xd, int hopid)
{
ida_free(&xd->in_hopids, hopid);
}
EXPORT_SYMBOL_GPL(tb_xdomain_release_in_hopid);
/**
* tb_xdomain_release_out_hopid() - Release output HopID
* @xd: XDomain connection
* @hopid: HopID to release
*/
void tb_xdomain_release_out_hopid(struct tb_xdomain *xd, int hopid)
{
ida_free(&xd->out_hopids, hopid);
}
EXPORT_SYMBOL_GPL(tb_xdomain_release_out_hopid);
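For illustration, a minimal sketch of how a service driver might pair these
allocators (the function and its callers are hypothetical, not part of this
series):

static int example_reserve_hopids(struct tb_xdomain *xd,
				  int *in_hopid, int *out_hopid)
{
	int in, out;

	in = tb_xdomain_alloc_in_hopid(xd, -1);		/* next available */
	if (in < 0)
		return in;				/* e.g. %-ENOSPC */

	out = tb_xdomain_alloc_out_hopid(xd, -1);
	if (out < 0) {
		tb_xdomain_release_in_hopid(xd, in);
		return out;
	}

	*in_hopid = in;
	*out_hopid = out;
	return 0;
}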
/**
* tb_xdomain_enable_paths() - Enable DMA paths for XDomain connection
* @xd: XDomain connection
* @transmit_path: HopID of the transmit path the other end is using to
* send packets
* @transmit_ring: DMA ring used to receive packets from the other end
* @receive_path: HopID of the receive path the other end is using to
* receive packets
* @receive_ring: DMA ring used to send packets to the other end
* @transmit_path: HopID we are using to send out packets
* @transmit_ring: DMA ring used to send out packets
* @receive_path: HopID the other end is using to send packets to us
* @receive_ring: DMA ring used to receive packets from @receive_path
*
* The function enables DMA paths accordingly so that after successful
* return the caller can send and receive packets using high-speed DMA
* path.
* path. If a transmit or receive path is not needed, pass %-1 for those
* parameters.
*
* Return: %0 in case of success and negative errno in case of error
*/
int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16 transmit_path,
u16 transmit_ring, u16 receive_path,
u16 receive_ring)
int tb_xdomain_enable_paths(struct tb_xdomain *xd, int transmit_path,
int transmit_ring, int receive_path,
int receive_ring)
{
int ret;
mutex_lock(&xd->lock);
if (xd->transmit_path) {
ret = xd->transmit_path == transmit_path ? 0 : -EBUSY;
goto exit_unlock;
}
xd->transmit_path = transmit_path;
xd->transmit_ring = transmit_ring;
xd->receive_path = receive_path;
xd->receive_ring = receive_ring;
ret = tb_domain_approve_xdomain_paths(xd->tb, xd);
exit_unlock:
mutex_unlock(&xd->lock);
return ret;
return tb_domain_approve_xdomain_paths(xd->tb, xd, transmit_path,
transmit_ring, receive_path,
receive_ring);
}
EXPORT_SYMBOL_GPL(tb_xdomain_enable_paths);
/**
* tb_xdomain_disable_paths() - Disable DMA paths for XDomain connection
* @xd: XDomain connection
* @transmit_path: HopID we are using to send out packets
* @transmit_ring: DMA ring used to send out packets
* @receive_path: HopID the other end is using to send packets to us
* @receive_ring: DMA ring used to receive packets from @receive_path
*
* This does the opposite of tb_xdomain_enable_paths(). After call to
* this the caller is not expected to use the rings anymore.
* this, the caller is not expected to use the rings anymore. Passing %-1
* as a path/ring parameter means "don't care". Normally callers should
* pass the same values here as they did when the paths were enabled.
*
* Return: %0 in case of success and negative errno in case of error
*/
int tb_xdomain_disable_paths(struct tb_xdomain *xd)
int tb_xdomain_disable_paths(struct tb_xdomain *xd, int transmit_path,
int transmit_ring, int receive_path,
int receive_ring)
{
int ret = 0;
mutex_lock(&xd->lock);
if (xd->transmit_path) {
xd->transmit_path = 0;
xd->transmit_ring = 0;
xd->receive_path = 0;
xd->receive_ring = 0;
ret = tb_domain_disconnect_xdomain_paths(xd->tb, xd);
}
mutex_unlock(&xd->lock);
return ret;
return tb_domain_disconnect_xdomain_paths(xd->tb, xd, transmit_path,
transmit_ring, receive_path,
receive_ring);
}
EXPORT_SYMBOL_GPL(tb_xdomain_disable_paths);
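A sketch of the enable/disable pairing under the new signatures; the HopID and
ring variables are hypothetical, and per the kerneldoc the same values should
be passed to both calls:

	ret = tb_xdomain_enable_paths(xd, tx_path, tx_ring, rx_path, rx_ring);
	if (ret)
		return ret;

	/* exchange packets over the DMA rings, then tear down */

	tb_xdomain_disable_paths(xd, tx_path, tx_ring, rx_path, rx_ring);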
@ -1826,11 +1905,7 @@ int tb_register_property_dir(const char *key, struct tb_property_dir *dir)
if (ret)
goto err_unlock;
ret = rebuild_property_block();
if (ret) {
remove_directory(key, dir);
goto err_unlock;
}
xdomain_property_block_gen++;
mutex_unlock(&xdomain_lock);
update_all_xdomains();
@ -1856,7 +1931,7 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir)
mutex_lock(&xdomain_lock);
if (remove_directory(key, dir))
ret = rebuild_property_block();
xdomain_property_block_gen++;
mutex_unlock(&xdomain_lock);
if (!ret)
@ -1875,7 +1950,8 @@ int tb_xdomain_init(void)
* directories. Those will be added by service drivers
* themselves when they are loaded.
*
* We also add node name later when first connection is made.
* The rest of the properties are filled in dynamically, based on
* these, when the P2P connection is made.
*/
tb_property_add_immediate(xdomain_property_dir, "vendorid",
PCI_VENDOR_ID_INTEL);
@ -1883,11 +1959,11 @@ int tb_xdomain_init(void)
tb_property_add_immediate(xdomain_property_dir, "deviceid", 0x1);
tb_property_add_immediate(xdomain_property_dir, "devicerv", 0x80000100);
xdomain_property_block_gen = prandom_u32();
return 0;
}
void tb_xdomain_exit(void)
{
kfree(xdomain_property_block);
tb_property_free_dir(xdomain_property_dir);
}

View File

@ -59,6 +59,7 @@
#include <linux/dma-mapping.h>
#include <linux/usb/gadget.h>
#include <linux/module.h>
#include <linux/dmapool.h>
#include <linux/iopoll.h>
#include "core.h"
@ -190,29 +191,13 @@ dma_addr_t cdns3_trb_virt_to_dma(struct cdns3_endpoint *priv_ep,
return priv_ep->trb_pool_dma + offset;
}
static int cdns3_ring_size(struct cdns3_endpoint *priv_ep)
{
switch (priv_ep->type) {
case USB_ENDPOINT_XFER_ISOC:
return TRB_ISO_RING_SIZE;
case USB_ENDPOINT_XFER_CONTROL:
return TRB_CTRL_RING_SIZE;
default:
if (priv_ep->use_streams)
return TRB_STREAM_RING_SIZE;
else
return TRB_RING_SIZE;
}
}
static void cdns3_free_trb_pool(struct cdns3_endpoint *priv_ep)
{
struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
if (priv_ep->trb_pool) {
dma_free_coherent(priv_dev->sysdev,
cdns3_ring_size(priv_ep),
priv_ep->trb_pool, priv_ep->trb_pool_dma);
dma_pool_free(priv_dev->eps_dma_pool,
priv_ep->trb_pool, priv_ep->trb_pool_dma);
priv_ep->trb_pool = NULL;
}
}
@ -226,7 +211,7 @@ static void cdns3_free_trb_pool(struct cdns3_endpoint *priv_ep)
int cdns3_allocate_trb_pool(struct cdns3_endpoint *priv_ep)
{
struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
int ring_size = cdns3_ring_size(priv_ep);
int ring_size = TRB_RING_SIZE;
int num_trbs = ring_size / TRB_SIZE;
struct cdns3_trb *link_trb;
@ -234,10 +219,10 @@ int cdns3_allocate_trb_pool(struct cdns3_endpoint *priv_ep)
cdns3_free_trb_pool(priv_ep);
if (!priv_ep->trb_pool) {
priv_ep->trb_pool = dma_alloc_coherent(priv_dev->sysdev,
ring_size,
&priv_ep->trb_pool_dma,
GFP_DMA32 | GFP_ATOMIC);
priv_ep->trb_pool = dma_pool_alloc(priv_dev->eps_dma_pool,
GFP_DMA32 | GFP_ATOMIC,
&priv_ep->trb_pool_dma);
if (!priv_ep->trb_pool)
return -ENOMEM;
@ -834,9 +819,15 @@ void cdns3_gadget_giveback(struct cdns3_endpoint *priv_ep,
priv_ep->dir);
if ((priv_req->flags & REQUEST_UNALIGNED) &&
priv_ep->dir == USB_DIR_OUT && !request->status)
priv_ep->dir == USB_DIR_OUT && !request->status) {
/* Make DMA buffer CPU accessible */
dma_sync_single_for_cpu(priv_dev->sysdev,
priv_req->aligned_buf->dma,
priv_req->aligned_buf->size,
priv_req->aligned_buf->dir);
memcpy(request->buf, priv_req->aligned_buf->buf,
request->length);
}
priv_req->flags &= ~(REQUEST_PENDING | REQUEST_UNALIGNED);
/* All TRBs have finished, clear the counter */
@ -898,8 +889,8 @@ static void cdns3_free_aligned_request_buf(struct work_struct *work)
* interrupts.
*/
spin_unlock_irqrestore(&priv_dev->lock, flags);
dma_free_coherent(priv_dev->sysdev, buf->size,
buf->buf, buf->dma);
dma_free_noncoherent(priv_dev->sysdev, buf->size,
buf->buf, buf->dma, buf->dir);
kfree(buf);
spin_lock_irqsave(&priv_dev->lock, flags);
}
@ -926,10 +917,13 @@ static int cdns3_prepare_aligned_request_buf(struct cdns3_request *priv_req)
return -ENOMEM;
buf->size = priv_req->request.length;
buf->dir = usb_endpoint_dir_in(priv_ep->endpoint.desc) ?
DMA_TO_DEVICE : DMA_FROM_DEVICE;
buf->buf = dma_alloc_coherent(priv_dev->sysdev,
buf->buf = dma_alloc_noncoherent(priv_dev->sysdev,
buf->size,
&buf->dma,
buf->dir,
GFP_ATOMIC);
if (!buf->buf) {
kfree(buf);
@ -951,10 +945,17 @@ static int cdns3_prepare_aligned_request_buf(struct cdns3_request *priv_req)
}
if (priv_ep->dir == USB_DIR_IN) {
/* Make DMA buffer CPU accessible */
dma_sync_single_for_cpu(priv_dev->sysdev,
buf->dma, buf->size, buf->dir);
memcpy(buf->buf, priv_req->request.buf,
priv_req->request.length);
}
/* Transfer DMA buffer ownership back to device */
dma_sync_single_for_device(priv_dev->sysdev,
buf->dma, buf->size, buf->dir);
priv_req->flags |= REQUEST_UNALIGNED;
trace_cdns3_prepare_aligned_request(priv_req);
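The hunk above follows the standard ownership protocol for non-coherent DMA
memory; a condensed sketch, with dev, size and src being hypothetical:

	dma_addr_t dma_handle;
	void *cpu;

	cpu = dma_alloc_noncoherent(dev, size, &dma_handle,
				    DMA_TO_DEVICE, GFP_ATOMIC);
	if (!cpu)
		return -ENOMEM;

	/* take ownership back to the CPU before filling the buffer */
	dma_sync_single_for_cpu(dev, dma_handle, size, DMA_TO_DEVICE);
	memcpy(cpu, src, size);

	/* hand ownership to the device before starting the transfer */
	dma_sync_single_for_device(dev, dma_handle, size, DMA_TO_DEVICE);

	/* later, once the hardware is done with the buffer */
	dma_free_noncoherent(dev, size, cpu, dma_handle, DMA_TO_DEVICE);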
@ -3103,9 +3104,10 @@ static void cdns3_gadget_exit(struct cdns *cdns)
struct cdns3_aligned_buf *buf;
buf = cdns3_next_align_buf(&priv_dev->aligned_buf_list);
dma_free_coherent(priv_dev->sysdev, buf->size,
dma_free_noncoherent(priv_dev->sysdev, buf->size,
buf->buf,
buf->dma);
buf->dma,
buf->dir);
list_del(&buf->list);
kfree(buf);
@ -3113,6 +3115,7 @@ static void cdns3_gadget_exit(struct cdns *cdns)
dma_free_coherent(priv_dev->sysdev, 8, priv_dev->setup_buf,
priv_dev->setup_dma);
dma_pool_destroy(priv_dev->eps_dma_pool);
kfree(priv_dev->zlp_buf);
usb_put_gadget(&priv_dev->gadget);
@ -3185,6 +3188,14 @@ static int cdns3_gadget_start(struct cdns *cdns)
/* initialize endpoint container */
INIT_LIST_HEAD(&priv_dev->gadget.ep_list);
INIT_LIST_HEAD(&priv_dev->aligned_buf_list);
priv_dev->eps_dma_pool = dma_pool_create("cdns3_eps_dma_pool",
priv_dev->sysdev,
TRB_RING_SIZE, 8, 0);
if (!priv_dev->eps_dma_pool) {
dev_err(priv_dev->dev, "Failed to create TRB dma pool\n");
ret = -ENOMEM;
goto err1;
}
ret = cdns3_init_eps(priv_dev);
if (ret) {
@ -3235,6 +3246,8 @@ static int cdns3_gadget_start(struct cdns *cdns)
err2:
cdns3_free_all_eps(priv_dev);
err1:
dma_pool_destroy(priv_dev->eps_dma_pool);
usb_put_gadget(&priv_dev->gadget);
cdns->gadget_dev = NULL;
return ret;
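For reference, the dma_pool lifecycle the driver now relies on, as a hedged
sketch with symbolic names:

	struct dma_pool *pool;
	dma_addr_t dma;
	void *trbs;

	pool = dma_pool_create("example_pool", dev, TRB_RING_SIZE, 8, 0);
	if (!pool)
		return -ENOMEM;

	trbs = dma_pool_alloc(pool, GFP_ATOMIC, &dma);	/* one TRB ring */
	if (trbs)
		dma_pool_free(pool, trbs, dma);
	dma_pool_destroy(pool);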
@ -3304,6 +3317,8 @@ static int cdns3_gadget_resume(struct cdns *cdns, bool hibernated)
return 0;
cdns3_gadget_config(priv_dev);
if (hibernated)
writel(USB_CONF_DEVEN, &priv_dev->regs->usb_conf);
return 0;
}

View File

@ -12,6 +12,7 @@
#ifndef __LINUX_CDNS3_GADGET
#define __LINUX_CDNS3_GADGET
#include <linux/usb/gadget.h>
#include <linux/dma-direction.h>
/*
* USBSS-DEV register interface.
@ -1205,6 +1206,7 @@ struct cdns3_aligned_buf {
void *buf;
dma_addr_t dma;
u32 size;
enum dma_data_direction dir;
unsigned in_use:1;
struct list_head list;
};
@ -1298,6 +1300,7 @@ struct cdns3_device {
struct cdns3_usb_regs __iomem *regs;
struct dma_pool *eps_dma_pool;
struct usb_ctrlrequest *setup_buf;
dma_addr_t setup_dma;
void *zlp_buf;

View File

@ -361,6 +361,39 @@ static int cdns_imx_suspend(struct device *dev)
return 0;
}
/* Indicate whether the controller lost power earlier */
static inline bool cdns_imx_is_power_lost(struct cdns_imx *data)
{
u32 value;
value = cdns_imx_readl(data, USB3_CORE_CTRL1);
if ((value & SW_RESET_MASK) == ALL_SW_RESET)
return true;
else
return false;
}
static int __maybe_unused cdns_imx_system_resume(struct device *dev)
{
struct cdns_imx *data = dev_get_drvdata(dev);
int ret;
ret = cdns_imx_resume(dev);
if (ret)
return ret;
if (cdns_imx_is_power_lost(data)) {
dev_dbg(dev, "resume from power lost\n");
ret = cdns_imx_noncore_init(data);
if (ret)
cdns_imx_suspend(dev);
}
return ret;
}
#else
static int cdns_imx_platform_suspend(struct device *dev,
bool suspend, bool wakeup)
@ -372,6 +405,7 @@ static int cdns_imx_platform_suspend(struct device *dev,
static const struct dev_pm_ops cdns_imx_pm_ops = {
SET_RUNTIME_PM_OPS(cdns_imx_suspend, cdns_imx_resume, NULL)
SET_SYSTEM_SLEEP_PM_OPS(cdns_imx_suspend, cdns_imx_system_resume)
};
static const struct of_device_id cdns_imx_of_match[] = {

View File

@ -19,6 +19,7 @@
#include "core.h"
#include "gadget-export.h"
#include "drd.h"
static int set_phy_power_on(struct cdns *cdns)
{
@ -236,6 +237,18 @@ static int cdns3_controller_resume(struct device *dev, pm_message_t msg)
if (!cdns->in_lpm)
return 0;
if (cdns_power_is_lost(cdns)) {
phy_exit(cdns->usb2_phy);
ret = phy_init(cdns->usb2_phy);
if (ret)
return ret;
phy_exit(cdns->usb3_phy);
ret = phy_init(cdns->usb3_phy);
if (ret)
return ret;
}
ret = set_phy_power_on(cdns);
if (ret)
return ret;
@ -270,10 +283,18 @@ static int cdns3_plat_runtime_resume(struct device *dev)
static int cdns3_plat_suspend(struct device *dev)
{
struct cdns *cdns = dev_get_drvdata(dev);
int ret;
cdns_suspend(cdns);
return cdns3_controller_suspend(dev, PMSG_SUSPEND);
ret = cdns3_controller_suspend(dev, PMSG_SUSPEND);
if (ret)
return ret;
if (device_may_wakeup(dev) && cdns->wakeup_irq)
enable_irq_wake(cdns->wakeup_irq);
return ret;
}
static int cdns3_plat_resume(struct device *dev)

View File

@ -214,7 +214,6 @@ DECLARE_EVENT_CLASS(cdns3_log_request,
__field(int, no_interrupt)
__field(int, start_trb)
__field(int, end_trb)
__field(struct cdns3_trb *, start_trb_addr)
__field(int, flags)
__field(unsigned int, stream_id)
),
@ -230,12 +229,11 @@ DECLARE_EVENT_CLASS(cdns3_log_request,
__entry->no_interrupt = req->request.no_interrupt;
__entry->start_trb = req->start_trb;
__entry->end_trb = req->end_trb;
__entry->start_trb_addr = req->trb;
__entry->flags = req->flags;
__entry->stream_id = req->request.stream_id;
),
TP_printk("%s: req: %p, req buff %p, length: %u/%u %s%s%s, status: %d,"
" trb: [start:%d, end:%d: virt addr %pa], flags:%x SID: %u",
" trb: [start:%d, end:%d], flags:%x SID: %u",
__get_str(name), __entry->req, __entry->buf, __entry->actual,
__entry->length,
__entry->zero ? "Z" : "z",
@ -244,7 +242,6 @@ DECLARE_EVENT_CLASS(cdns3_log_request,
__entry->status,
__entry->start_trb,
__entry->end_trb,
__entry->start_trb_addr,
__entry->flags,
__entry->stream_id
)

View File

@ -727,7 +727,7 @@ int cdnsp_reset_device(struct cdnsp_device *pdev)
* are in Disabled state.
*/
for (i = 1; i < CDNSP_ENDPOINTS_NUM; ++i)
pdev->eps[i].ep_state |= EP_STOPPED;
pdev->eps[i].ep_state |= EP_STOPPED | EP_UNCONFIGURED;
trace_cdnsp_handle_cmd_reset_dev(slot_ctx);
@ -942,6 +942,7 @@ static int cdnsp_gadget_ep_enable(struct usb_ep *ep,
pep = to_cdnsp_ep(ep);
pdev = pep->pdev;
pep->ep_state &= ~EP_UNCONFIGURED;
if (dev_WARN_ONCE(pdev->dev, pep->ep_state & EP_ENABLED,
"%s is already enabled\n", pep->name))
@ -1023,9 +1024,13 @@ static int cdnsp_gadget_ep_disable(struct usb_ep *ep)
goto finish;
}
cdnsp_cmd_stop_ep(pdev, pep);
pep->ep_state |= EP_DIS_IN_RROGRESS;
cdnsp_cmd_flush_ep(pdev, pep);
/* Endpoint was unconfigured by Reset Device command. */
if (!(pep->ep_state & EP_UNCONFIGURED)) {
cdnsp_cmd_stop_ep(pdev, pep);
cdnsp_cmd_flush_ep(pdev, pep);
}
/* Remove all queued USB requests. */
while (!list_empty(&pep->pending_list)) {
@ -1043,10 +1048,12 @@ static int cdnsp_gadget_ep_disable(struct usb_ep *ep)
cdnsp_endpoint_zero(pdev, pep);
ret = cdnsp_update_eps_configuration(pdev, pep);
if (!(pep->ep_state & EP_UNCONFIGURED))
ret = cdnsp_update_eps_configuration(pdev, pep);
cdnsp_free_endpoint_rings(pdev, pep);
pep->ep_state &= ~EP_ENABLED;
pep->ep_state &= ~(EP_ENABLED | EP_UNCONFIGURED);
pep->ep_state |= EP_STOPPED;
finish:

View File

@ -835,6 +835,7 @@ struct cdnsp_ep {
#define EP_WEDGE BIT(4)
#define EP0_HALTED_STATUS BIT(5)
#define EP_HAS_STREAMS BIT(6)
#define EP_UNCONFIGURED BIT(7)
bool skip;
};

View File

@ -686,7 +686,7 @@ static void cdnsp_free_priv_device(struct cdnsp_device *pdev)
static int cdnsp_alloc_priv_device(struct cdnsp_device *pdev)
{
int ret = -ENOMEM;
int ret;
ret = cdnsp_init_device_ctx(pdev);
if (ret)
@ -1231,7 +1231,6 @@ int cdnsp_mem_init(struct cdnsp_device *pdev)
if (!pdev->dcbaa)
return -ENOMEM;
memset(pdev->dcbaa, 0, sizeof(*pdev->dcbaa));
pdev->dcbaa->dma = dma;
cdnsp_write_64(dma, &pdev->op_regs->dcbaa_ptr);

View File

@ -525,9 +525,36 @@ EXPORT_SYMBOL_GPL(cdns_suspend);
int cdns_resume(struct cdns *cdns, u8 set_active)
{
struct device *dev = cdns->dev;
enum usb_role real_role;
bool role_changed = false;
int ret = 0;
if (cdns_power_is_lost(cdns)) {
if (cdns->role_sw) {
cdns->role = cdns_role_get(cdns->role_sw);
} else {
real_role = cdns_hw_role_state_machine(cdns);
if (real_role != cdns->role) {
ret = cdns_hw_role_switch(cdns);
if (ret)
return ret;
role_changed = true;
}
}
if (!role_changed) {
if (cdns->role == USB_ROLE_HOST)
ret = cdns_drd_host_on(cdns);
else if (cdns->role == USB_ROLE_DEVICE)
ret = cdns_drd_gadget_on(cdns);
if (ret)
return ret;
}
}
if (cdns->roles[cdns->role]->resume)
cdns->roles[cdns->role]->resume(cdns, false);
cdns->roles[cdns->role]->resume(cdns, cdns_power_is_lost(cdns));
if (set_active) {
pm_runtime_disable(dev);

View File

@ -478,3 +478,18 @@ int cdns_drd_exit(struct cdns *cdns)
return 0;
}
/* Indicate whether the cdns3 core lost power earlier */
bool cdns_power_is_lost(struct cdns *cdns)
{
if (cdns->version == CDNS3_CONTROLLER_V1) {
if (!(readl(&cdns->otg_v1_regs->simulate) & BIT(0)))
return true;
} else {
if (!(readl(&cdns->otg_v0_regs->simulate) & BIT(0)))
return true;
}
return false;
}
EXPORT_SYMBOL_GPL(cdns_power_is_lost);

View File

@ -215,5 +215,5 @@ int cdns_drd_gadget_on(struct cdns *cdns);
void cdns_drd_gadget_off(struct cdns *cdns);
int cdns_drd_host_on(struct cdns *cdns);
void cdns_drd_host_off(struct cdns *cdns);
bool cdns_power_is_lost(struct cdns *cdns);
#endif /* __LINUX_CDNS3_DRD */

View File

@ -285,11 +285,9 @@ static int tegra_usb_probe(struct platform_device *pdev)
}
usb->phy = devm_usb_get_phy_by_phandle(&pdev->dev, "nvidia,phy", 0);
if (IS_ERR(usb->phy)) {
err = PTR_ERR(usb->phy);
dev_err(&pdev->dev, "failed to get PHY: %d\n", err);
return err;
}
if (IS_ERR(usb->phy))
return dev_err_probe(&pdev->dev, PTR_ERR(usb->phy),
"failed to get PHY\n");
usb->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(usb->clk)) {

View File

@ -32,7 +32,7 @@ struct ehci_ci_priv {
struct ci_hdrc_dma_aligned_buffer {
void *kmalloc_ptr;
void *old_xfer_buffer;
u8 data[0];
u8 data[];
};
static int ehci_ci_portpower(struct usb_hcd *hcd, int portnum, bool enable)

View File

@ -929,8 +929,7 @@ static int get_serial_info(struct tty_struct *tty, struct serial_struct *ss)
{
struct acm *acm = tty->driver_data;
ss->xmit_fifo_size = acm->writesize;
ss->baud_base = le32_to_cpu(acm->line.dwDTERate);
ss->line = acm->minor;
ss->close_delay = jiffies_to_msecs(acm->port.close_delay) / 10;
ss->closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
ASYNC_CLOSING_WAIT_NONE :
@ -942,7 +941,6 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
{
struct acm *acm = tty->driver_data;
unsigned int closing_wait, close_delay;
unsigned int old_closing_wait, old_close_delay;
int retval = 0;
close_delay = msecs_to_jiffies(ss->close_delay * 10);
@ -950,20 +948,12 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
ASYNC_CLOSING_WAIT_NONE :
msecs_to_jiffies(ss->closing_wait * 10);
/* we must redo the rounding here, so that the values match */
old_close_delay = jiffies_to_msecs(acm->port.close_delay) / 10;
old_closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
ASYNC_CLOSING_WAIT_NONE :
jiffies_to_msecs(acm->port.closing_wait) / 10;
mutex_lock(&acm->port.mutex);
if (!capable(CAP_SYS_ADMIN)) {
if ((ss->close_delay != old_close_delay) ||
(ss->closing_wait != old_closing_wait))
if ((close_delay != acm->port.close_delay) ||
(closing_wait != acm->port.closing_wait))
retval = -EPERM;
else
retval = -EOPNOTSUPP;
} else {
acm->port.close_delay = close_delay;
acm->port.closing_wait = closing_wait;
@ -1634,12 +1624,13 @@ static int acm_resume(struct usb_interface *intf)
struct urb *urb;
int rv = 0;
acm_unpoison_urbs(acm);
spin_lock_irq(&acm->write_lock);
if (--acm->susp_count)
goto out;
acm_unpoison_urbs(acm);
if (tty_port_initialized(&acm->port)) {
rv = usb_submit_urb(acm->ctrlurb, GFP_ATOMIC);
@ -1922,9 +1913,17 @@ static const struct usb_device_id acm_ids[] = {
#endif
#if IS_ENABLED(CONFIG_USB_SERIAL_XR)
{ USB_DEVICE(0x04e2, 0x1410), /* Ignore XR21V141X USB to Serial converter */
.driver_info = IGNORE_DEVICE,
},
{ USB_DEVICE(0x04e2, 0x1400), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1401), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1402), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1403), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1410), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1411), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1412), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1414), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1420), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1422), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1424), .driver_info = IGNORE_DEVICE },
#endif
/*Samsung phone in firmware update mode */

View File

@ -25,6 +25,12 @@ static const char *const ep_type_names[] = {
[USB_ENDPOINT_XFER_INT] = "intr",
};
/**
* usb_ep_type_string() - Returns a human-readable name of the endpoint type.
* @ep_type: The endpoint type to return the human-readable name for. If it's
* not any of the types USB_ENDPOINT_XFER_{CONTROL, ISOC, BULK, INT},
* usually obtained via usb_endpoint_type(), the string 'unknown' is returned.
*/
const char *usb_ep_type_string(int ep_type)
{
if (ep_type < 0 || ep_type >= ARRAY_SIZE(ep_type_names))
@ -76,6 +82,12 @@ static const char *const ssp_rate[] = {
[USB_SSP_GEN_2x2] = "super-speed-plus-gen2x2",
};
/**
* usb_speed_string() - Returns a human-readable name of the speed.
* @speed: The speed to return the human-readable name for. If it's not
* any of the speeds defined in the usb_device_speed enum, the string for
* USB_SPEED_UNKNOWN is returned.
*/
const char *usb_speed_string(enum usb_device_speed speed)
{
if (speed < 0 || speed >= ARRAY_SIZE(speed_names))
@ -84,6 +96,14 @@ const char *usb_speed_string(enum usb_device_speed speed)
}
EXPORT_SYMBOL_GPL(usb_speed_string);
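Assuming the usual entries in these lookup tables, the helpers map enum values
to short strings, e.g.:

	const char *type = usb_ep_type_string(USB_ENDPOINT_XFER_BULK);	/* "bulk" */
	const char *spd = usb_speed_string(USB_SPEED_SUPER_PLUS);	/* "super-speed-plus" */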
/**
* usb_get_maximum_speed - Get maximum requested speed for a given USB
* controller.
* @dev: Pointer to the given USB controller device
*
* The function gets the maximum speed string from property "maximum-speed",
* and returns the corresponding enum usb_device_speed.
*/
enum usb_device_speed usb_get_maximum_speed(struct device *dev)
{
const char *maximum_speed;
@ -102,6 +122,15 @@ enum usb_device_speed usb_get_maximum_speed(struct device *dev)
}
EXPORT_SYMBOL_GPL(usb_get_maximum_speed);
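A sketch of a consumer, assuming a hypothetical firmware node carrying
maximum-speed = "high-speed":

	enum usb_device_speed max_speed = usb_get_maximum_speed(dev);

	if (max_speed == USB_SPEED_UNKNOWN)
		max_speed = USB_SPEED_SUPER;	/* hypothetical hardware default */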
/**
* usb_get_maximum_ssp_rate - Get the signaling rate generation and lane count
* of a SuperSpeed Plus capable device.
* @dev: Pointer to the given USB controller device
*
* If the string from "maximum-speed" property is super-speed-plus-genXxY where
* 'X' is the generation number and 'Y' is the number of lanes, then this
* function returns the corresponding enum usb_ssp_rate.
*/
enum usb_ssp_rate usb_get_maximum_ssp_rate(struct device *dev)
{
const char *maximum_speed;
@ -116,6 +145,12 @@ enum usb_ssp_rate usb_get_maximum_ssp_rate(struct device *dev)
}
EXPORT_SYMBOL_GPL(usb_get_maximum_ssp_rate);
/**
* usb_state_string - Returns a human-readable name for the state.
* @state: The state to return a human-readable name for. If it's not
* any of the states in the usb_device_state enum,
* the string UNKNOWN is returned.
*/
const char *usb_state_string(enum usb_device_state state)
{
static const char *const names[] = {
@ -165,6 +200,47 @@ enum usb_dr_mode usb_get_dr_mode(struct device *dev)
}
EXPORT_SYMBOL_GPL(usb_get_dr_mode);
/**
* usb_decode_interval - Decode bInterval into the time expressed in 1us unit
* @epd: The descriptor of the endpoint
* @speed: The speed at which the endpoint operates
*
* Returns the servicing interval for the endpoint's data transfers,
* expressed in 1 us units.
*/
unsigned int usb_decode_interval(const struct usb_endpoint_descriptor *epd,
enum usb_device_speed speed)
{
unsigned int interval = 0;
switch (usb_endpoint_type(epd)) {
case USB_ENDPOINT_XFER_CONTROL:
/* uframes per NAK */
if (speed == USB_SPEED_HIGH)
interval = epd->bInterval;
break;
case USB_ENDPOINT_XFER_ISOC:
interval = 1 << (epd->bInterval - 1);
break;
case USB_ENDPOINT_XFER_BULK:
/* uframes per NAK */
if (speed == USB_SPEED_HIGH && usb_endpoint_dir_out(epd))
interval = epd->bInterval;
break;
case USB_ENDPOINT_XFER_INT:
if (speed >= USB_SPEED_HIGH)
interval = 1 << (epd->bInterval - 1);
else
interval = epd->bInterval;
break;
}
interval *= (speed >= USB_SPEED_HIGH) ? 125 : 1000;
return interval;
}
EXPORT_SYMBOL_GPL(usb_decode_interval);
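A worked example of the rules above: a hypothetical high-speed interrupt
endpoint with bInterval = 4 is serviced every (1 << (4 - 1)) * 125 = 1000 us:

	struct usb_endpoint_descriptor desc = {
		.bmAttributes     = USB_ENDPOINT_XFER_INT,
		.bEndpointAddress = USB_DIR_IN | 1,
		.bInterval        = 4,
	};
	unsigned int us = usb_decode_interval(&desc, USB_SPEED_HIGH);	/* 1000 */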
#ifdef CONFIG_OF
/**
* of_usb_get_dr_mode_by_phy - Get dual role mode for the controller device

View File

@ -207,8 +207,26 @@ static void usb_decode_set_isoch_delay(__u8 wValue, char *str, size_t size)
snprintf(str, size, "Set Isochronous Delay(Delay = %d ns)", wValue);
}
/*
* usb_decode_ctrl - returns a string representation of ctrl request
/**
* usb_decode_ctrl - Returns a human-readable representation of a control request.
* @str: buffer in which to return the human-readable representation of the
* control request. This buffer should be about 200 bytes.
* @size: size of str buffer.
* @bRequestType: matches the USB bmRequestType field
* @bRequest: matches the USB bRequest field
* @wValue: matches the USB wValue field (CPU byte order)
* @wIndex: matches the USB wIndex field (CPU byte order)
* @wLength: matches the USB wLength field (CPU byte order)
*
* Function returns decoded, formatted and human-readable description of
* control request packet.
*
* The usage scenario for this is tracepoints: the function returns the same
* buffer it was given via @str, which allows it to be used directly inside
* TP_printk().
*
* Important: the wValue, wIndex and wLength parameters must be converted
* with the le16_to_cpu() macro before this function is invoked.
*/
const char *usb_decode_ctrl(char *str, size_t size, __u8 bRequestType,
__u8 bRequest, __u16 wValue, __u16 wIndex,

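A hedged usage sketch for tracepoint-style formatting; setup is a hypothetical
pointer to a struct usb_ctrlrequest:

	char buf[200];	/* sized per the kerneldoc above */

	const char *msg = usb_decode_ctrl(buf, sizeof(buf),
					  setup->bRequestType, setup->bRequest,
					  le16_to_cpu(setup->wValue),
					  le16_to_cpu(setup->wIndex),
					  le16_to_cpu(setup->wLength));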
View File

@ -157,38 +157,25 @@ static char *usb_dump_endpoint_descriptor(int speed, char *start, char *end,
switch (usb_endpoint_type(desc)) {
case USB_ENDPOINT_XFER_CONTROL:
type = "Ctrl";
if (speed == USB_SPEED_HIGH) /* uframes per NAK */
interval = desc->bInterval;
else
interval = 0;
dir = 'B'; /* ctrl is bidirectional */
break;
case USB_ENDPOINT_XFER_ISOC:
type = "Isoc";
interval = 1 << (desc->bInterval - 1);
break;
case USB_ENDPOINT_XFER_BULK:
type = "Bulk";
if (speed == USB_SPEED_HIGH && dir == 'O') /* uframes per NAK */
interval = desc->bInterval;
else
interval = 0;
break;
case USB_ENDPOINT_XFER_INT:
type = "Int.";
if (speed == USB_SPEED_HIGH || speed >= USB_SPEED_SUPER)
interval = 1 << (desc->bInterval - 1);
else
interval = desc->bInterval;
break;
default: /* "can't happen" */
return start;
}
interval *= (speed == USB_SPEED_HIGH ||
speed >= USB_SPEED_SUPER) ? 125 : 1000;
if (interval % 1000)
interval = usb_decode_interval(desc, speed);
if (interval % 1000) {
unit = 'u';
else {
} else {
unit = 'm';
interval /= 1000;
}

View File

@ -519,17 +519,13 @@ static int usb_unbind_interface(struct device *dev)
* @driver: the driver to be bound
* @iface: the interface to which it will be bound; must be in the
* usb device's active configuration
* @priv: driver data associated with that interface
* @data: driver data associated with that interface
*
* This is used by usb device drivers that need to claim more than one
* interface on a device when probing (audio and acm are current examples).
* No device driver should directly modify internal usb_interface or
* usb_device structure members.
*
* Few drivers should need to use this routine, since the most natural
* way to bind to an interface is to return the private data from
* the driver's probe() method.
*
* Callers must own the device lock, so driver probe() entries don't need
* extra locking, but other call contexts may need to explicitly claim that
* lock.
@ -537,7 +533,7 @@ static int usb_unbind_interface(struct device *dev)
* Return: 0 on success.
*/
int usb_driver_claim_interface(struct usb_driver *driver,
struct usb_interface *iface, void *priv)
struct usb_interface *iface, void *data)
{
struct device *dev;
int retval = 0;
@ -554,7 +550,7 @@ int usb_driver_claim_interface(struct usb_driver *driver,
return -ENODEV;
dev->driver = &driver->drvwrap.driver;
usb_set_intfdata(iface, priv);
usb_set_intfdata(iface, data);
iface->needs_binding = 0;
iface->condition = USB_INTERFACE_BOUND;
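A minimal probe-time sketch; example_driver and the interface number 1 are
hypothetical:

	static int example_probe(struct usb_interface *intf,
				 const struct usb_device_id *id)
	{
		struct usb_device *udev = interface_to_usbdev(intf);
		struct usb_interface *data_intf;

		data_intf = usb_ifnum_to_if(udev, 1);	/* sibling interface */
		if (!data_intf)
			return -ENODEV;

		/* probe() already holds the device lock */
		return usb_driver_claim_interface(&example_driver, data_intf, NULL);
	}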

View File

@ -84,40 +84,13 @@ static ssize_t interval_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct ep_device *ep = to_ep_device(dev);
unsigned int interval;
char unit;
unsigned interval = 0;
unsigned in;
in = (ep->desc->bEndpointAddress & USB_DIR_IN);
switch (usb_endpoint_type(ep->desc)) {
case USB_ENDPOINT_XFER_CONTROL:
if (ep->udev->speed == USB_SPEED_HIGH)
/* uframes per NAK */
interval = ep->desc->bInterval;
break;
case USB_ENDPOINT_XFER_ISOC:
interval = 1 << (ep->desc->bInterval - 1);
break;
case USB_ENDPOINT_XFER_BULK:
if (ep->udev->speed == USB_SPEED_HIGH && !in)
/* uframes per NAK */
interval = ep->desc->bInterval;
break;
case USB_ENDPOINT_XFER_INT:
if (ep->udev->speed == USB_SPEED_HIGH)
interval = 1 << (ep->desc->bInterval - 1);
else
interval = ep->desc->bInterval;
break;
}
interval *= (ep->udev->speed == USB_SPEED_HIGH) ? 125 : 1000;
if (interval % 1000)
interval = usb_decode_interval(ep->desc, ep->udev->speed);
if (interval % 1000) {
unit = 'u';
else {
} else {
unit = 'm';
interval /= 1000;
}

View File

@ -2721,6 +2721,7 @@ int usb_add_hcd(struct usb_hcd *hcd,
rhdev->rx_lanes = 1;
rhdev->tx_lanes = 1;
rhdev->ssp_rate = USB_SSP_GEN_UNKNOWN;
switch (hcd->speed) {
case HCD_USB11:
@ -2738,8 +2739,11 @@ int usb_add_hcd(struct usb_hcd *hcd,
case HCD_USB32:
rhdev->rx_lanes = 2;
rhdev->tx_lanes = 2;
fallthrough;
rhdev->ssp_rate = USB_SSP_GEN_2x2;
rhdev->speed = USB_SPEED_SUPER_PLUS;
break;
case HCD_USB31:
rhdev->ssp_rate = USB_SSP_GEN_2x1;
rhdev->speed = USB_SPEED_SUPER_PLUS;
break;
default:

View File

@ -31,6 +31,7 @@
#include <linux/pm_qos.h>
#include <linux/kobject.h>
#include <linux/bitfield.h>
#include <linux/uaccess.h>
#include <asm/byteorder.h>
@ -2668,31 +2669,79 @@ int usb_authorize_device(struct usb_device *usb_dev)
return result;
}
/*
* Return 1 if port speed is SuperSpeedPlus, 0 otherwise
* check it from the link protocol field of the current speed ID attribute.
* current speed ID is got from ext port status request. Sublink speed attribute
* table is returned with the hub BOS SSP device capability descriptor
/**
* get_port_ssp_rate - Match the extended port status to SSP rate
* @hdev: The hub device
* @ext_portstatus: extended port status
*
* Match the extended port status speed id to the SuperSpeed Plus sublink speed
* capability attributes. Based on the number of connected lanes and the speed,
* return the corresponding enum usb_ssp_rate.
*/
static int port_speed_is_ssp(struct usb_device *hdev, int speed_id)
static enum usb_ssp_rate get_port_ssp_rate(struct usb_device *hdev,
u32 ext_portstatus)
{
int ssa_count;
u32 ss_attr;
int i;
struct usb_ssp_cap_descriptor *ssp_cap = hdev->bos->ssp_cap;
u32 attr;
u8 speed_id;
u8 ssac;
u8 lanes;
int i;
if (!ssp_cap)
return 0;
goto out;
ssa_count = le32_to_cpu(ssp_cap->bmAttributes) &
speed_id = ext_portstatus & USB_EXT_PORT_STAT_RX_SPEED_ID;
lanes = USB_EXT_PORT_RX_LANES(ext_portstatus) + 1;
ssac = le32_to_cpu(ssp_cap->bmAttributes) &
USB_SSP_SUBLINK_SPEED_ATTRIBS;
for (i = 0; i <= ssa_count; i++) {
ss_attr = le32_to_cpu(ssp_cap->bmSublinkSpeedAttr[i]);
if (speed_id == (ss_attr & USB_SSP_SUBLINK_SPEED_SSID))
return !!(ss_attr & USB_SSP_SUBLINK_SPEED_LP);
for (i = 0; i <= ssac; i++) {
u8 ssid;
attr = le32_to_cpu(ssp_cap->bmSublinkSpeedAttr[i]);
ssid = FIELD_GET(USB_SSP_SUBLINK_SPEED_SSID, attr);
if (speed_id == ssid) {
u16 mantissa;
u8 lse;
u8 type;
/*
* Note: currently asymmetric lane types are only
* applicable to SSIC operating in the SuperSpeed protocol
*/
type = FIELD_GET(USB_SSP_SUBLINK_SPEED_ST, attr);
if (type == USB_SSP_SUBLINK_SPEED_ST_ASYM_RX ||
type == USB_SSP_SUBLINK_SPEED_ST_ASYM_TX)
goto out;
if (FIELD_GET(USB_SSP_SUBLINK_SPEED_LP, attr) !=
USB_SSP_SUBLINK_SPEED_LP_SSP)
goto out;
lse = FIELD_GET(USB_SSP_SUBLINK_SPEED_LSE, attr);
mantissa = FIELD_GET(USB_SSP_SUBLINK_SPEED_LSM, attr);
/* Convert to Gbps */
for (; lse < USB_SSP_SUBLINK_SPEED_LSE_GBPS; lse++)
mantissa /= 1000;
if (mantissa >= 10 && lanes == 1)
return USB_SSP_GEN_2x1;
if (mantissa >= 10 && lanes == 2)
return USB_SSP_GEN_2x2;
if (mantissa >= 5 && lanes == 2)
return USB_SSP_GEN_1x2;
goto out;
}
}
return 0;
out:
return USB_SSP_GEN_UNKNOWN;
}
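A worked example with hypothetical attribute values: a matching sublink speed
attribute whose LSE field is already in Gbps and whose mantissa is 10, seen on
a port with two connected Rx lanes, decodes to 10 Gbps x 2 lanes, i.e.
USB_SSP_GEN_2x2 (20 Gbps); the same attribute with a single lane decodes to
USB_SSP_GEN_2x1.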
/* Returns 1 if @hub is a WUSB root hub, 0 otherwise */
@ -2850,15 +2899,15 @@ static int hub_port_wait_reset(struct usb_hub *hub, int port1,
/* extended portstatus Rx and Tx lane count are zero based */
udev->rx_lanes = USB_EXT_PORT_RX_LANES(ext_portstatus) + 1;
udev->tx_lanes = USB_EXT_PORT_TX_LANES(ext_portstatus) + 1;
udev->ssp_rate = get_port_ssp_rate(hub->hdev, ext_portstatus);
} else {
udev->rx_lanes = 1;
udev->tx_lanes = 1;
udev->ssp_rate = USB_SSP_GEN_UNKNOWN;
}
if (hub_is_wusb(hub))
udev->speed = USB_SPEED_WIRELESS;
else if (hub_is_superspeedplus(hub->hdev) &&
port_speed_is_ssp(hub->hdev, ext_portstatus &
USB_EXT_PORT_STAT_RX_SPEED_ID))
else if (udev->ssp_rate != USB_SSP_GEN_UNKNOWN)
udev->speed = USB_SPEED_SUPER_PLUS;
else if (hub_is_superspeed(hub->hdev))
udev->speed = USB_SPEED_SUPER;
@ -3556,7 +3605,7 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
u16 portchange, portstatus;
if (!test_and_set_bit(port1, hub->child_usage_bits)) {
status = pm_runtime_get_sync(&port_dev->dev);
status = pm_runtime_resume_and_get(&port_dev->dev);
if (status < 0) {
dev_dbg(&udev->dev, "can't resume usb port, status %d\n",
status);
@ -4781,9 +4830,13 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
"%s SuperSpeed%s%s USB device number %d using %s\n",
(udev->config) ? "reset" : "new",
(udev->speed == USB_SPEED_SUPER_PLUS) ?
"Plus Gen 2" : " Gen 1",
(udev->rx_lanes == 2 && udev->tx_lanes == 2) ?
"x2" : "",
" Plus" : "",
(udev->ssp_rate == USB_SSP_GEN_2x2) ?
" Gen 2x2" :
(udev->ssp_rate == USB_SSP_GEN_2x1) ?
" Gen 2x1" :
(udev->ssp_rate == USB_SSP_GEN_1x2) ?
" Gen 1x2" : "",
devnum, driver_name);
}

View File

@ -148,8 +148,10 @@ static inline unsigned hub_power_on_good_delay(struct usb_hub *hub)
{
unsigned delay = hub->descriptor->bPwrOn2PwrGood * 2;
/* Wait at least 100 msec for power to become stable */
return max(delay, 100U);
if (!hub->hdev->parent) /* root hub */
return delay;
else /* Wait at least 100 msec for power to become stable */
return max(delay, 100U);
}
static inline int hub_port_debounce_be_connected(struct usb_hub *hub,

View File

@ -406,6 +406,7 @@ static const struct usb_device_id usb_quirk_list[] = {
/* Realtek hub in Dell WD19 (Type-C) */
{ USB_DEVICE(0x0bda, 0x0487), .driver_info = USB_QUIRK_NO_LPM },
{ USB_DEVICE(0x0bda, 0x5487), .driver_info = USB_QUIRK_RESET_RESUME },
/* Generic RTL8153 based ethernet adapters */
{ USB_DEVICE(0x0bda, 0x8153), .driver_info = USB_QUIRK_NO_LPM },
@ -438,6 +439,9 @@ static const struct usb_device_id usb_quirk_list[] = {
{ USB_DEVICE(0x17ef, 0xa012), .driver_info =
USB_QUIRK_DISCONNECT_SUSPEND },
/* Lenovo ThinkPad USB-C Dock Gen2 Ethernet (RTL8153 GigE) */
{ USB_DEVICE(0x17ef, 0xa387), .driver_info = USB_QUIRK_NO_LPM },
/* BUILDWIN Photo Frame */
{ USB_DEVICE(0x1908, 0x1315), .driver_info =
USB_QUIRK_HONOR_BNUMINTERFACES },

View File

@ -167,7 +167,10 @@ static ssize_t speed_show(struct device *dev, struct device_attribute *attr,
speed = "5000";
break;
case USB_SPEED_SUPER_PLUS:
speed = "10000";
if (udev->ssp_rate == USB_SSP_GEN_2x2)
speed = "20000";
else
speed = "10000";
break;
default:
speed = "unknown";

View File

@ -398,6 +398,52 @@ int usb_for_each_dev(void *data, int (*fn)(struct usb_device *, void *))
}
EXPORT_SYMBOL_GPL(usb_for_each_dev);
struct each_hub_arg {
void *data;
int (*fn)(struct device *, void *);
};
static int __each_hub(struct usb_device *hdev, void *data)
{
struct each_hub_arg *arg = (struct each_hub_arg *)data;
struct usb_hub *hub;
int ret = 0;
int i;
hub = usb_hub_to_struct_hub(hdev);
if (!hub)
return 0;
mutex_lock(&usb_port_peer_mutex);
for (i = 0; i < hdev->maxchild; i++) {
ret = arg->fn(&hub->ports[i]->dev, arg->data);
if (ret)
break;
}
mutex_unlock(&usb_port_peer_mutex);
return ret;
}
/**
* usb_for_each_port - iterate over all USB ports in the system
* @data: data pointer that will be handed to the callback function
* @fn: callback function to be called for each USB port
*
* Iterate over all USB ports and call @fn for each, passing it @data. If it
* returns anything other than 0, we break the iteration prematurely and return
* that value.
*/
int usb_for_each_port(void *data, int (*fn)(struct device *, void *))
{
struct each_hub_arg arg = {data, fn};
return usb_for_each_dev(&arg, __each_hub);
}
EXPORT_SYMBOL_GPL(usb_for_each_port);
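A sketch of a caller; the counting callback is hypothetical:

	static int example_count_port(struct device *dev, void *data)
	{
		(*(int *)data)++;
		return 0;	/* non-zero would stop the iteration */
	}

	int nports = 0;

	usb_for_each_port(&nports, example_count_port);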
/**
* usb_release_dev - free a usb device structure when all users of it are finished.
* @dev: device that's been disconnected
@ -982,17 +1028,15 @@ static struct notifier_block usb_bus_nb = {
.notifier_call = usb_bus_notify,
};
static struct dentry *usb_devices_root;
static void usb_debugfs_init(void)
{
usb_devices_root = debugfs_create_file("devices", 0444, usb_debug_root,
NULL, &usbfs_devices_fops);
debugfs_create_file("devices", 0444, usb_debug_root, NULL,
&usbfs_devices_fops);
}
static void usb_debugfs_cleanup(void)
{
debugfs_remove(usb_devices_root);
debugfs_remove(debugfs_lookup("devices", usb_debug_root));
}
/*

View File

@ -131,54 +131,26 @@ int dwc2_restore_global_registers(struct dwc2_hsotg *hsotg)
* dwc2_exit_partial_power_down() - Exit controller from Partial Power Down.
*
* @hsotg: Programming view of the DWC_otg controller
* @rem_wakeup: indicates whether the resume is initiated by remote wakeup.
* @restore: Controller registers need to be restored
*/
int dwc2_exit_partial_power_down(struct dwc2_hsotg *hsotg, bool restore)
int dwc2_exit_partial_power_down(struct dwc2_hsotg *hsotg, int rem_wakeup,
bool restore)
{
u32 pcgcctl;
int ret = 0;
struct dwc2_gregs_backup *gr;
if (hsotg->params.power_down != DWC2_POWER_DOWN_PARAM_PARTIAL)
return -ENOTSUPP;
gr = &hsotg->gr_backup;
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl &= ~PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl &= ~PCGCTL_PWRCLMP;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl &= ~PCGCTL_RSTPDWNMODULE;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
udelay(100);
if (restore) {
ret = dwc2_restore_global_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to restore registers\n",
__func__);
return ret;
}
if (dwc2_is_host_mode(hsotg)) {
ret = dwc2_restore_host_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to restore host registers\n",
__func__);
return ret;
}
} else {
ret = dwc2_restore_device_registers(hsotg, 0);
if (ret) {
dev_err(hsotg->dev, "%s: failed to restore device registers\n",
__func__);
return ret;
}
}
}
return ret;
/*
* Restore the host or device registers for the same mode in which the
* core entered partial power down, determined from the backed-up
* GOTGCTL_CURMODE_HOST bit of the "gotgctl" register.
*/
if (gr->gotgctl & GOTGCTL_CURMODE_HOST)
return dwc2_host_exit_partial_power_down(hsotg, rem_wakeup,
restore);
else
return dwc2_gadget_exit_partial_power_down(hsotg, restore);
}
/**
@ -188,57 +160,10 @@ int dwc2_exit_partial_power_down(struct dwc2_hsotg *hsotg, bool restore)
*/
int dwc2_enter_partial_power_down(struct dwc2_hsotg *hsotg)
{
u32 pcgcctl;
int ret = 0;
if (!hsotg->params.power_down)
return -ENOTSUPP;
/* Backup all registers */
ret = dwc2_backup_global_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to backup global registers\n",
__func__);
return ret;
}
if (dwc2_is_host_mode(hsotg)) {
ret = dwc2_backup_host_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to backup host registers\n",
__func__);
return ret;
}
} else {
ret = dwc2_backup_device_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to backup device registers\n",
__func__);
return ret;
}
}
/*
* Clear any pending interrupts since dwc2 will not be able to
* clear them after entering partial_power_down.
*/
dwc2_writel(hsotg, 0xffffffff, GINTSTS);
/* Put the controller in low power state */
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl |= PCGCTL_PWRCLMP;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
ndelay(20);
pcgcctl |= PCGCTL_RSTPDWNMODULE;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
ndelay(20);
pcgcctl |= PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
return ret;
if (dwc2_is_host_mode(hsotg))
return dwc2_host_enter_partial_power_down(hsotg);
else
return dwc2_gadget_enter_partial_power_down(hsotg);
}
/**
@ -374,6 +299,12 @@ void dwc2_hib_restore_common(struct dwc2_hsotg *hsotg, int rem_wakeup,
__func__);
} else {
dev_dbg(hsotg->dev, "restore done generated here\n");
/*
* To avoid a restore done interrupt storm after restore is
* generated, clear the GINTSTS_RESTOREDONE bit.
*/
dwc2_writel(hsotg, GINTSTS_RESTOREDONE, GINTSTS);
}
}
@ -460,9 +391,6 @@ static bool dwc2_iddig_filter_enabled(struct dwc2_hsotg *hsotg)
*/
int dwc2_enter_hibernation(struct dwc2_hsotg *hsotg, int is_host)
{
if (hsotg->params.power_down != DWC2_POWER_DOWN_PARAM_HIBERNATION)
return -ENOTSUPP;
if (is_host)
return dwc2_host_enter_hibernation(hsotg);
else
@ -545,6 +473,22 @@ int dwc2_core_reset(struct dwc2_hsotg *hsotg, bool skip_wait)
dwc2_writel(hsotg, greset, GRSTCTL);
}
/*
* When switching from device mode to host mode by disconnecting
* the device cable, the core enters and exits hibernation.
* However, the fifo map is not cleared. This results in a
* WARNING (WARNING: CPU: 5 PID: 0 at drivers/usb/dwc2/
* gadget.c:307 dwc2_hsotg_init_fifo+0x12/0x152 [dwc2])
* if, while in host mode, we disconnect the micro-A-to-B host
* cable, because a core reset occurs.
* To avoid the WARNING, fifo_map should be cleared in
* dwc2_core_reset(), taking the configuration into account:
* fifo_map must be cleared only if the driver is configured in
* "CONFIG_USB_DWC2_PERIPHERAL" or "CONFIG_USB_DWC2_DUAL_ROLE"
* mode.
*/
dwc2_clear_fifo_map(hsotg);
/* Wait for AHB master IDLE state */
if (dwc2_hsotg_wait_bit_set(hsotg, GRSTCTL, GRSTCTL_AHBIDLE, 10000)) {
dev_warn(hsotg->dev, "%s: HANG! AHB Idle timeout GRSTCTL GRSTCTL_AHBIDLE\n",

View File

@ -38,6 +38,7 @@
#ifndef __DWC2_CORE_H__
#define __DWC2_CORE_H__
#include <linux/acpi.h>
#include <linux/phy/phy.h>
#include <linux/regulator/consumer.h>
#include <linux/usb/gadget.h>
@ -426,7 +427,7 @@ enum dwc2_ep0_state {
* @g_tx_fifo_size: An array of TX fifo sizes in dedicated fifo
* mode. Each value corresponds to one EP
* starting from EP1 (max 15 values). Sizes are
* in DWORDS with possible values from from
* in DWORDS with possible values from
* 16-32768 (default: 256, 256, 256, 256, 768,
* 768, 768, 768, 0, 0, 0, 0, 0, 0, 0).
* @change_speed_quirk: Change speed configuration to DWC2_SPEED_PARAM_FULL
@ -865,6 +866,8 @@ struct dwc2_hregs_backup {
* @gadget_enabled: Peripheral mode sub-driver initialization indicator.
* @ll_hw_enabled: Status of low-level hardware resources.
* @hibernated: True if core is hibernated
* @in_ppd: True if core is in partial power down mode.
* @bus_suspended: True if bus is suspended
* @reset_phy_on_wake: Quirk saying that we should assert PHY reset on a
* remote wakeup.
* @phy_off_for_suspend: Status of whether we turned the PHY off at suspend.
@ -1022,7 +1025,6 @@ struct dwc2_hregs_backup {
* a pointer to an array of register definitions, the
* array size and the base address where the register bank
* is to be found.
* @bus_suspended: True if bus is suspended
* @last_frame_num: Number of last frame. Range from 0 to 32768
* @frame_num_array: Used only if CONFIG_USB_DWC2_TRACK_MISSED_SOFS is
* defined, for missed SOFs tracking. Array holds that
@ -1060,6 +1062,8 @@ struct dwc2_hsotg {
unsigned int gadget_enabled:1;
unsigned int ll_hw_enabled:1;
unsigned int hibernated:1;
unsigned int in_ppd:1;
bool bus_suspended;
unsigned int reset_phy_on_wake:1;
unsigned int need_phy_for_wake:1;
unsigned int phy_off_for_suspend:1;
@ -1143,7 +1147,6 @@ struct dwc2_hsotg {
unsigned long hs_periodic_bitmap[
DIV_ROUND_UP(DWC2_HS_SCHEDULE_US, BITS_PER_LONG)];
u16 periodic_qh_count;
bool bus_suspended;
bool new_connection;
u16 last_frame_num;
@ -1301,7 +1304,8 @@ static inline bool dwc2_is_hs_iot(struct dwc2_hsotg *hsotg)
*/
int dwc2_core_reset(struct dwc2_hsotg *hsotg, bool skip_wait);
int dwc2_enter_partial_power_down(struct dwc2_hsotg *hsotg);
int dwc2_exit_partial_power_down(struct dwc2_hsotg *hsotg, bool restore);
int dwc2_exit_partial_power_down(struct dwc2_hsotg *hsotg, int rem_wakeup,
bool restore);
int dwc2_enter_hibernation(struct dwc2_hsotg *hsotg, int is_host);
int dwc2_exit_hibernation(struct dwc2_hsotg *hsotg, int rem_wakeup,
int reset, int is_host);
@ -1339,6 +1343,7 @@ irqreturn_t dwc2_handle_common_intr(int irq, void *dev);
/* The device ID match table */
extern const struct of_device_id dwc2_of_match_table[];
extern const struct acpi_device_id dwc2_acpi_match[];
int dwc2_lowlevel_hw_enable(struct dwc2_hsotg *hsotg);
int dwc2_lowlevel_hw_disable(struct dwc2_hsotg *hsotg);
@ -1409,11 +1414,19 @@ int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, int remote_wakeup);
int dwc2_gadget_enter_hibernation(struct dwc2_hsotg *hsotg);
int dwc2_gadget_exit_hibernation(struct dwc2_hsotg *hsotg,
int rem_wakeup, int reset);
int dwc2_gadget_enter_partial_power_down(struct dwc2_hsotg *hsotg);
int dwc2_gadget_exit_partial_power_down(struct dwc2_hsotg *hsotg,
bool restore);
void dwc2_gadget_enter_clock_gating(struct dwc2_hsotg *hsotg);
void dwc2_gadget_exit_clock_gating(struct dwc2_hsotg *hsotg,
int rem_wakeup);
int dwc2_hsotg_tx_fifo_count(struct dwc2_hsotg *hsotg);
int dwc2_hsotg_tx_fifo_total_depth(struct dwc2_hsotg *hsotg);
int dwc2_hsotg_tx_fifo_average_depth(struct dwc2_hsotg *hsotg);
void dwc2_gadget_init_lpm(struct dwc2_hsotg *hsotg);
void dwc2_gadget_program_ref_clk(struct dwc2_hsotg *hsotg);
static inline void dwc2_clear_fifo_map(struct dwc2_hsotg *hsotg)
{ hsotg->fifo_map = 0; }
#else
static inline int dwc2_hsotg_remove(struct dwc2_hsotg *dwc2)
{ return 0; }
@ -1442,6 +1455,14 @@ static inline int dwc2_gadget_enter_hibernation(struct dwc2_hsotg *hsotg)
static inline int dwc2_gadget_exit_hibernation(struct dwc2_hsotg *hsotg,
int rem_wakeup, int reset)
{ return 0; }
static inline int dwc2_gadget_enter_partial_power_down(struct dwc2_hsotg *hsotg)
{ return 0; }
static inline int dwc2_gadget_exit_partial_power_down(struct dwc2_hsotg *hsotg,
bool restore)
{ return 0; }
static inline void dwc2_gadget_enter_clock_gating(struct dwc2_hsotg *hsotg) {}
static inline void dwc2_gadget_exit_clock_gating(struct dwc2_hsotg *hsotg,
int rem_wakeup) {}
static inline int dwc2_hsotg_tx_fifo_count(struct dwc2_hsotg *hsotg)
{ return 0; }
static inline int dwc2_hsotg_tx_fifo_total_depth(struct dwc2_hsotg *hsotg)
@ -1450,6 +1471,7 @@ static inline int dwc2_hsotg_tx_fifo_average_depth(struct dwc2_hsotg *hsotg)
{ return 0; }
static inline void dwc2_gadget_init_lpm(struct dwc2_hsotg *hsotg) {}
static inline void dwc2_gadget_program_ref_clk(struct dwc2_hsotg *hsotg) {}
static inline void dwc2_clear_fifo_map(struct dwc2_hsotg *hsotg) {}
#endif
#if IS_ENABLED(CONFIG_USB_DWC2_HOST) || IS_ENABLED(CONFIG_USB_DWC2_DUAL_ROLE)
@ -1459,11 +1481,18 @@ void dwc2_hcd_connect(struct dwc2_hsotg *hsotg);
void dwc2_hcd_disconnect(struct dwc2_hsotg *hsotg, bool force);
void dwc2_hcd_start(struct dwc2_hsotg *hsotg);
int dwc2_core_init(struct dwc2_hsotg *hsotg, bool initial_setup);
int dwc2_port_suspend(struct dwc2_hsotg *hsotg, u16 windex);
int dwc2_port_resume(struct dwc2_hsotg *hsotg);
int dwc2_backup_host_registers(struct dwc2_hsotg *hsotg);
int dwc2_restore_host_registers(struct dwc2_hsotg *hsotg);
int dwc2_host_enter_hibernation(struct dwc2_hsotg *hsotg);
int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg,
int rem_wakeup, int reset);
int dwc2_host_enter_partial_power_down(struct dwc2_hsotg *hsotg);
int dwc2_host_exit_partial_power_down(struct dwc2_hsotg *hsotg,
int rem_wakeup, bool restore);
void dwc2_host_enter_clock_gating(struct dwc2_hsotg *hsotg);
void dwc2_host_exit_clock_gating(struct dwc2_hsotg *hsotg, int rem_wakeup);
bool dwc2_host_can_poweroff_phy(struct dwc2_hsotg *dwc2);
static inline void dwc2_host_schedule_phy_reset(struct dwc2_hsotg *hsotg)
{ schedule_work(&hsotg->phy_reset_work); }
@ -1479,6 +1508,10 @@ static inline void dwc2_hcd_start(struct dwc2_hsotg *hsotg) {}
static inline void dwc2_hcd_remove(struct dwc2_hsotg *hsotg) {}
static inline int dwc2_core_init(struct dwc2_hsotg *hsotg, bool initial_setup)
{ return 0; }
static inline int dwc2_port_suspend(struct dwc2_hsotg *hsotg, u16 windex)
{ return 0; }
static inline int dwc2_port_resume(struct dwc2_hsotg *hsotg)
{ return 0; }
static inline int dwc2_hcd_init(struct dwc2_hsotg *hsotg)
{ return 0; }
static inline int dwc2_backup_host_registers(struct dwc2_hsotg *hsotg)
@ -1490,6 +1523,14 @@ static inline int dwc2_host_enter_hibernation(struct dwc2_hsotg *hsotg)
static inline int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg,
int rem_wakeup, int reset)
{ return 0; }
static inline int dwc2_host_enter_partial_power_down(struct dwc2_hsotg *hsotg)
{ return 0; }
static inline int dwc2_host_exit_partial_power_down(struct dwc2_hsotg *hsotg,
int rem_wakeup, bool restore)
{ return 0; }
static inline void dwc2_host_enter_clock_gating(struct dwc2_hsotg *hsotg) {}
static inline void dwc2_host_exit_clock_gating(struct dwc2_hsotg *hsotg,
int rem_wakeup) {}
static inline bool dwc2_host_can_poweroff_phy(struct dwc2_hsotg *dwc2)
{ return false; }
static inline void dwc2_host_schedule_phy_reset(struct dwc2_hsotg *hsotg) {}

View File

@ -307,6 +307,7 @@ static void dwc2_handle_conn_id_status_change_intr(struct dwc2_hsotg *hsotg)
static void dwc2_handle_session_req_intr(struct dwc2_hsotg *hsotg)
{
int ret;
u32 hprt0;
/* Clear interrupt */
dwc2_writel(hsotg, GINTSTS_SESSREQINT, GINTSTS);
@ -316,10 +317,18 @@ static void dwc2_handle_session_req_intr(struct dwc2_hsotg *hsotg)
if (dwc2_is_device_mode(hsotg)) {
if (hsotg->lx_state == DWC2_L2) {
ret = dwc2_exit_partial_power_down(hsotg, true);
if (ret && (ret != -ENOTSUPP))
dev_err(hsotg->dev,
"exit power_down failed\n");
if (hsotg->in_ppd) {
ret = dwc2_exit_partial_power_down(hsotg, 0,
true);
if (ret)
dev_err(hsotg->dev,
"exit power_down failed\n");
}
/* Exit gadget mode clock gating. */
if (hsotg->params.power_down ==
DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended)
dwc2_gadget_exit_clock_gating(hsotg, 0);
}
/*
@ -327,6 +336,13 @@ static void dwc2_handle_session_req_intr(struct dwc2_hsotg *hsotg)
* established
*/
dwc2_hsotg_disconnect(hsotg);
} else {
/* Turn on the port power bit. */
hprt0 = dwc2_read_hprt0(hsotg);
hprt0 |= HPRT0_PWR;
dwc2_writel(hsotg, hprt0, HPRT0);
/* Connect hcd after port power is set. */
dwc2_hcd_connect(hsotg);
}
}
@ -407,32 +423,40 @@ static void dwc2_handle_wakeup_detected_intr(struct dwc2_hsotg *hsotg)
dev_dbg(hsotg->dev, "DSTS=0x%0x\n",
dwc2_readl(hsotg, DSTS));
if (hsotg->lx_state == DWC2_L2) {
u32 dctl = dwc2_readl(hsotg, DCTL);
if (hsotg->in_ppd) {
u32 dctl = dwc2_readl(hsotg, DCTL);
/* Clear Remote Wakeup Signaling */
dctl &= ~DCTL_RMTWKUPSIG;
dwc2_writel(hsotg, dctl, DCTL);
ret = dwc2_exit_partial_power_down(hsotg, 1,
true);
if (ret)
dev_err(hsotg->dev,
"exit partial_power_down failed\n");
call_gadget(hsotg, resume);
}
/* Clear Remote Wakeup Signaling */
dctl &= ~DCTL_RMTWKUPSIG;
dwc2_writel(hsotg, dctl, DCTL);
ret = dwc2_exit_partial_power_down(hsotg, true);
if (ret && (ret != -ENOTSUPP))
dev_err(hsotg->dev, "exit power_down failed\n");
/* Change to L0 state */
hsotg->lx_state = DWC2_L0;
call_gadget(hsotg, resume);
/* Exit gadget mode clock gating. */
if (hsotg->params.power_down ==
DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended)
dwc2_gadget_exit_clock_gating(hsotg, 0);
} else {
/* Change to L0 state */
hsotg->lx_state = DWC2_L0;
}
} else {
if (hsotg->params.power_down)
return;
if (hsotg->lx_state == DWC2_L2) {
if (hsotg->in_ppd) {
ret = dwc2_exit_partial_power_down(hsotg, 1,
true);
if (ret)
dev_err(hsotg->dev,
"exit partial_power_down failed\n");
}
if (hsotg->lx_state != DWC2_L1) {
u32 pcgcctl = dwc2_readl(hsotg, PCGCTL);
/* Restart the Phy Clock */
pcgcctl &= ~PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
if (hsotg->params.power_down ==
DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended)
dwc2_host_exit_clock_gating(hsotg, 1);
/*
* If we've got this quirk then the PHY is stuck upon
@ -508,31 +532,33 @@ static void dwc2_handle_usb_suspend_intr(struct dwc2_hsotg *hsotg)
return;
}
if (dsts & DSTS_SUSPSTS) {
if (hsotg->hw_params.power_optimized) {
switch (hsotg->params.power_down) {
case DWC2_POWER_DOWN_PARAM_PARTIAL:
ret = dwc2_enter_partial_power_down(hsotg);
if (ret) {
if (ret != -ENOTSUPP)
dev_err(hsotg->dev,
"%s: enter partial_power_down failed\n",
__func__);
goto skip_power_saving;
}
if (ret)
dev_err(hsotg->dev,
"enter partial_power_down failed\n");
udelay(100);
/* Ask phy to be suspended */
if (!IS_ERR_OR_NULL(hsotg->uphy))
usb_phy_set_suspend(hsotg->uphy, true);
break;
case DWC2_POWER_DOWN_PARAM_HIBERNATION:
ret = dwc2_enter_hibernation(hsotg, 0);
if (ret)
dev_err(hsotg->dev,
"enter hibernation failed\n");
break;
case DWC2_POWER_DOWN_PARAM_NONE:
/*
* If neither hibernation nor partial power down are supported,
* clock gating is used to save power.
*/
dwc2_gadget_enter_clock_gating(hsotg);
}
if (hsotg->hw_params.hibernation) {
ret = dwc2_enter_hibernation(hsotg, 0);
if (ret && ret != -ENOTSUPP)
dev_err(hsotg->dev,
"%s: enter hibernation failed\n",
__func__);
}
skip_power_saving:
/*
* Change to L2 (suspend) state before releasing
* spinlock
@ -652,16 +678,82 @@ static u32 dwc2_read_common_intr(struct dwc2_hsotg *hsotg)
return 0;
}
/**
* dwc_handle_gpwrdn_disc_det() - Handles the gpwrdn disconnect detect.
* Exits hibernation without restoring registers.
*
* @hsotg: Programming view of DWC_otg controller
* @gpwrdn: GPWRDN register
*/
static inline void dwc_handle_gpwrdn_disc_det(struct dwc2_hsotg *hsotg,
u32 gpwrdn)
{
u32 gpwrdn_tmp;
/* Switch-on voltage to the core */
gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
gpwrdn_tmp &= ~GPWRDN_PWRDNSWTCH;
dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
udelay(5);
/* Reset core */
gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
gpwrdn_tmp &= ~GPWRDN_PWRDNRSTN;
dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
udelay(5);
/* Disable Power Down Clamp */
gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
gpwrdn_tmp &= ~GPWRDN_PWRDNCLMP;
dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
udelay(5);
/* Deassert reset core */
gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
gpwrdn_tmp |= GPWRDN_PWRDNRSTN;
dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
udelay(5);
/* Disable PMU interrupt */
gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
gpwrdn_tmp &= ~GPWRDN_PMUINTSEL;
dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
/* De-assert Wakeup Logic */
gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
gpwrdn_tmp &= ~GPWRDN_PMUACTV;
dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
hsotg->hibernated = 0;
hsotg->bus_suspended = 0;
if (gpwrdn & GPWRDN_IDSTS) {
hsotg->op_state = OTG_STATE_B_PERIPHERAL;
dwc2_core_init(hsotg, false);
dwc2_enable_global_interrupts(hsotg);
dwc2_hsotg_core_init_disconnected(hsotg, false);
dwc2_hsotg_core_connect(hsotg);
} else {
hsotg->op_state = OTG_STATE_A_HOST;
/* Initialize the Core for Host mode */
dwc2_core_init(hsotg, false);
dwc2_enable_global_interrupts(hsotg);
dwc2_hcd_start(hsotg);
}
}
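dwc_handle_gpwrdn_disc_det() repeats one pattern five times: read GPWRDN, flip a single bit, write it back, and let the write settle. A minimal sketch of how that pattern could be factored out, assuming only the dwc2_readl()/dwc2_writel() accessors used above (the helper name is hypothetical, not part of this patch):

static inline void dwc2_gpwrdn_update(struct dwc2_hsotg *hsotg,
				      u32 clear, u32 set, u32 settle_us)
{
	u32 gpwrdn = dwc2_readl(hsotg, GPWRDN);

	/* Read-modify-write GPWRDN, then let the write settle. */
	gpwrdn &= ~clear;
	gpwrdn |= set;
	dwc2_writel(hsotg, gpwrdn, GPWRDN);
	if (settle_us)
		udelay(settle_us);
}

With such a helper, the sequence above would read dwc2_gpwrdn_update(hsotg, GPWRDN_PWRDNSWTCH, 0, 5), and so on for each step.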
/*
* GPWRDN interrupt handler.
*
* The GPWRDN interrupts are those that occur in both Host and
* Device mode while the core is in the hibernated state.
*/
static void dwc2_handle_gpwrdn_intr(struct dwc2_hsotg *hsotg)
static int dwc2_handle_gpwrdn_intr(struct dwc2_hsotg *hsotg)
{
u32 gpwrdn;
int linestate;
int ret = 0;
gpwrdn = dwc2_readl(hsotg, GPWRDN);
/* clear all interrupts */
@ -673,93 +765,52 @@ static void dwc2_handle_gpwrdn_intr(struct dwc2_hsotg *hsotg)
if ((gpwrdn & GPWRDN_DISCONN_DET) &&
(gpwrdn & GPWRDN_DISCONN_DET_MSK) && !linestate) {
u32 gpwrdn_tmp;
dev_dbg(hsotg->dev, "%s: GPWRDN_DISCONN_DET\n", __func__);
/* Switch-on voltage to the core */
gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
gpwrdn_tmp &= ~GPWRDN_PWRDNSWTCH;
dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
udelay(10);
/* Reset core */
gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
gpwrdn_tmp &= ~GPWRDN_PWRDNRSTN;
dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
udelay(10);
/* Disable Power Down Clamp */
gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
gpwrdn_tmp &= ~GPWRDN_PWRDNCLMP;
dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
udelay(10);
/* Deassert reset core */
gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
gpwrdn_tmp |= GPWRDN_PWRDNRSTN;
dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
udelay(10);
/* Disable PMU interrupt */
gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
gpwrdn_tmp &= ~GPWRDN_PMUINTSEL;
dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
/* De-assert Wakeup Logic */
gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
gpwrdn_tmp &= ~GPWRDN_PMUACTV;
dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
hsotg->hibernated = 0;
if (gpwrdn & GPWRDN_IDSTS) {
hsotg->op_state = OTG_STATE_B_PERIPHERAL;
dwc2_core_init(hsotg, false);
dwc2_enable_global_interrupts(hsotg);
dwc2_hsotg_core_init_disconnected(hsotg, false);
dwc2_hsotg_core_connect(hsotg);
} else {
hsotg->op_state = OTG_STATE_A_HOST;
/* Initialize the Core for Host mode */
dwc2_core_init(hsotg, false);
dwc2_enable_global_interrupts(hsotg);
dwc2_hcd_start(hsotg);
}
}
if ((gpwrdn & GPWRDN_LNSTSCHG) &&
(gpwrdn & GPWRDN_LNSTSCHG_MSK) && linestate) {
/*
* Call disconnect detect function to exit from
* hibernation
*/
dwc_handle_gpwrdn_disc_det(hsotg, gpwrdn);
} else if ((gpwrdn & GPWRDN_LNSTSCHG) &&
(gpwrdn & GPWRDN_LNSTSCHG_MSK) && linestate) {
dev_dbg(hsotg->dev, "%s: GPWRDN_LNSTSCHG\n", __func__);
if (hsotg->hw_params.hibernation &&
hsotg->hibernated) {
if (gpwrdn & GPWRDN_IDSTS) {
dwc2_exit_hibernation(hsotg, 0, 0, 0);
ret = dwc2_exit_hibernation(hsotg, 0, 0, 0);
if (ret)
dev_err(hsotg->dev,
"exit hibernation failed.\n");
call_gadget(hsotg, resume);
} else {
dwc2_exit_hibernation(hsotg, 1, 0, 1);
ret = dwc2_exit_hibernation(hsotg, 1, 0, 1);
if (ret)
dev_err(hsotg->dev,
"exit hibernation failed.\n");
}
}
}
if ((gpwrdn & GPWRDN_RST_DET) && (gpwrdn & GPWRDN_RST_DET_MSK)) {
} else if ((gpwrdn & GPWRDN_RST_DET) &&
(gpwrdn & GPWRDN_RST_DET_MSK)) {
dev_dbg(hsotg->dev, "%s: GPWRDN_RST_DET\n", __func__);
if (!linestate && (gpwrdn & GPWRDN_BSESSVLD))
dwc2_exit_hibernation(hsotg, 0, 1, 0);
}
if ((gpwrdn & GPWRDN_STS_CHGINT) &&
(gpwrdn & GPWRDN_STS_CHGINT_MSK) && linestate) {
dev_dbg(hsotg->dev, "%s: GPWRDN_STS_CHGINT\n", __func__);
if (hsotg->hw_params.hibernation &&
hsotg->hibernated) {
if (gpwrdn & GPWRDN_IDSTS) {
dwc2_exit_hibernation(hsotg, 0, 0, 0);
call_gadget(hsotg, resume);
} else {
dwc2_exit_hibernation(hsotg, 1, 0, 1);
}
if (!linestate) {
ret = dwc2_exit_hibernation(hsotg, 0, 1, 0);
if (ret)
dev_err(hsotg->dev,
"exit hibernation failed.\n");
}
} else if ((gpwrdn & GPWRDN_STS_CHGINT) &&
(gpwrdn & GPWRDN_STS_CHGINT_MSK)) {
dev_dbg(hsotg->dev, "%s: GPWRDN_STS_CHGINT\n", __func__);
/*
* The GPWRDN_STS_CHGINT exit-from-hibernation flow is the
* same as the GPWRDN_DISCONN_DET flow, so call the
* disconnect detect helper function to exit from
* hibernation.
*/
dwc_handle_gpwrdn_disc_det(hsotg, gpwrdn);
}
return ret;
}
/*


@ -691,6 +691,8 @@ static int params_show(struct seq_file *seq, void *v)
print_param(seq, p, ulpi_fs_ls);
print_param(seq, p, host_support_fs_ls_low_power);
print_param(seq, p, host_ls_low_power_phy_clk);
print_param(seq, p, activate_stm_fs_transceiver);
print_param(seq, p, activate_stm_id_vb_detection);
print_param(seq, p, ts_dline);
print_param(seq, p, reload_ctl);
print_param_hex(seq, p, ahbcfg);


@ -3689,10 +3689,10 @@ static irqreturn_t dwc2_hsotg_irq(int irq, void *pw)
dwc2_writel(hsotg, GINTSTS_RESETDET, GINTSTS);
/* This event must be used only if the controller is suspended */
if (hsotg->lx_state == DWC2_L2) {
dwc2_exit_partial_power_down(hsotg, true);
hsotg->lx_state = DWC2_L0;
}
if (hsotg->in_ppd && hsotg->lx_state == DWC2_L2)
dwc2_exit_partial_power_down(hsotg, 0, true);
hsotg->lx_state = DWC2_L0;
}
if (gintsts & (GINTSTS_USBRST | GINTSTS_RESETDET)) {
@ -4615,11 +4615,15 @@ static int dwc2_hsotg_vbus_session(struct usb_gadget *gadget, int is_active)
spin_lock_irqsave(&hsotg->lock, flags);
/*
* If controller is hibernated, it must exit from power_down
* before being initialized / de-initialized
* If controller is in partial power down state, it must exit from
* that state before being initialized / de-initialized
*/
if (hsotg->lx_state == DWC2_L2)
dwc2_exit_partial_power_down(hsotg, false);
if (hsotg->lx_state == DWC2_L2 && hsotg->in_ppd)
/*
* No need to check the return value as
* registers are not being restored.
*/
dwc2_exit_partial_power_down(hsotg, 0, false);
if (is_active) {
hsotg->op_state = OTG_STATE_B_PERIPHERAL;
@ -5301,6 +5305,10 @@ int dwc2_gadget_exit_hibernation(struct dwc2_hsotg *hsotg,
dwc2_writel(hsotg, dr->dcfg, DCFG);
dwc2_writel(hsotg, dr->dctl, DCTL);
/* On USB Reset, reset device address to zero */
if (reset)
dwc2_clear_bit(hsotg, DCFG, DCFG_DEVADDR_MASK);
/* De-assert Wakeup Logic */
gpwrdn = dwc2_readl(hsotg, GPWRDN);
gpwrdn &= ~GPWRDN_PMUACTV;
@ -5351,3 +5359,202 @@ int dwc2_gadget_exit_hibernation(struct dwc2_hsotg *hsotg,
return ret;
}
/**
* dwc2_gadget_enter_partial_power_down() - Put controller in partial
* power down.
*
* @hsotg: Programming view of the DWC_otg controller
*
* Return: non-zero if failed to enter device partial power down.
*
* This function is for entering device mode partial power down.
*/
int dwc2_gadget_enter_partial_power_down(struct dwc2_hsotg *hsotg)
{
u32 pcgcctl;
int ret = 0;
dev_dbg(hsotg->dev, "Entering device partial power down started.\n");
/* Backup all registers */
ret = dwc2_backup_global_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to backup global registers\n",
__func__);
return ret;
}
ret = dwc2_backup_device_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to backup device registers\n",
__func__);
return ret;
}
/*
* Clear any pending interrupts since dwc2 will not be able to
* clear them after entering partial_power_down.
*/
dwc2_writel(hsotg, 0xffffffff, GINTSTS);
/* Put the controller in low power state */
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl |= PCGCTL_PWRCLMP;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
udelay(5);
pcgcctl |= PCGCTL_RSTPDWNMODULE;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
udelay(5);
pcgcctl |= PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
/* Set the in_ppd flag to 1 as the core enters suspend here. */
hsotg->in_ppd = 1;
hsotg->lx_state = DWC2_L2;
dev_dbg(hsotg->dev, "Entering device partial power down completed.\n");
return ret;
}
/**
* dwc2_gadget_exit_partial_power_down() - Exit controller from device partial
* power down.
*
* @hsotg: Programming view of the DWC_otg controller
* @restore: indicates whether need to restore the registers or not.
*
* Return: non-zero if failed to exit device partial power down.
*
* This function is for exiting from device mode partial power down.
*/
int dwc2_gadget_exit_partial_power_down(struct dwc2_hsotg *hsotg,
bool restore)
{
u32 pcgcctl;
u32 dctl;
struct dwc2_dregs_backup *dr;
int ret = 0;
dr = &hsotg->dr_backup;
dev_dbg(hsotg->dev, "Exiting device partial Power Down started.\n");
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl &= ~PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl &= ~PCGCTL_PWRCLMP;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl &= ~PCGCTL_RSTPDWNMODULE;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
udelay(100);
if (restore) {
ret = dwc2_restore_global_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to restore registers\n",
__func__);
return ret;
}
/* Restore DCFG */
dwc2_writel(hsotg, dr->dcfg, DCFG);
ret = dwc2_restore_device_registers(hsotg, 0);
if (ret) {
dev_err(hsotg->dev, "%s: failed to restore device registers\n",
__func__);
return ret;
}
}
/* Set the Power-On Programming done bit */
dctl = dwc2_readl(hsotg, DCTL);
dctl |= DCTL_PWRONPRGDONE;
dwc2_writel(hsotg, dctl, DCTL);
/* Set the in_ppd flag to 0 as the core exits suspend here. */
hsotg->in_ppd = 0;
hsotg->lx_state = DWC2_L0;
dev_dbg(hsotg->dev, "Exiting device partial Power Down completed.\n");
return ret;
}
/**
* dwc2_gadget_enter_clock_gating() - Put controller in clock gating.
*
* @hsotg: Programming view of the DWC_otg controller
*
* This function is for entering device mode clock gating.
*/
void dwc2_gadget_enter_clock_gating(struct dwc2_hsotg *hsotg)
{
u32 pcgctl;
dev_dbg(hsotg->dev, "Entering device clock gating.\n");
/* Set the Phy Clock bit as suspend is received. */
pcgctl = dwc2_readl(hsotg, PCGCTL);
pcgctl |= PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgctl, PCGCTL);
udelay(5);
/* Set the Gate hclk as suspend is received. */
pcgctl = dwc2_readl(hsotg, PCGCTL);
pcgctl |= PCGCTL_GATEHCLK;
dwc2_writel(hsotg, pcgctl, PCGCTL);
udelay(5);
hsotg->lx_state = DWC2_L2;
hsotg->bus_suspended = true;
}
/**
* dwc2_gadget_exit_clock_gating() - Exit controller from device clock gating.
*
* @hsotg: Programming view of the DWC_otg controller
* @rem_wakeup: indicates whether remote wake up is enabled.
*
* This function is for exiting from device mode clock gating.
*/
void dwc2_gadget_exit_clock_gating(struct dwc2_hsotg *hsotg, int rem_wakeup)
{
u32 pcgctl;
u32 dctl;
dev_dbg(hsotg->dev, "Exiting device clock gating.\n");
/* Clear the Gate hclk. */
pcgctl = dwc2_readl(hsotg, PCGCTL);
pcgctl &= ~PCGCTL_GATEHCLK;
dwc2_writel(hsotg, pcgctl, PCGCTL);
udelay(5);
/* Clear the Phy Clock bit. */
pcgctl = dwc2_readl(hsotg, PCGCTL);
pcgctl &= ~PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgctl, PCGCTL);
udelay(5);
if (rem_wakeup) {
/* Set Remote Wakeup Signaling */
dctl = dwc2_readl(hsotg, DCTL);
dctl |= DCTL_RMTWKUPSIG;
dwc2_writel(hsotg, dctl, DCTL);
}
/* Change to L0 state */
call_gadget(hsotg, resume);
hsotg->lx_state = DWC2_L0;
hsotg->bus_suspended = false;
}
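The two clock-gating routines above are a symmetric pair: entry sets PCGCTL_STOPPCLK and then PCGCTL_GATEHCLK, and exit clears the same bits in reverse order. A condensed sketch of that ordering, assuming the same register accessors (the function name is hypothetical):

static void example_gadget_clock_gate(struct dwc2_hsotg *hsotg, bool gate)
{
	u32 pcgctl = dwc2_readl(hsotg, PCGCTL);

	if (gate) {
		/* Same order as dwc2_gadget_enter_clock_gating(). */
		pcgctl |= PCGCTL_STOPPCLK;
		dwc2_writel(hsotg, pcgctl, PCGCTL);
		udelay(5);
		pcgctl |= PCGCTL_GATEHCLK;
	} else {
		/* Reverse order, as in dwc2_gadget_exit_clock_gating(). */
		pcgctl &= ~PCGCTL_GATEHCLK;
		dwc2_writel(hsotg, pcgctl, PCGCTL);
		udelay(5);
		pcgctl &= ~PCGCTL_STOPPCLK;
	}
	dwc2_writel(hsotg, pcgctl, PCGCTL);
	udelay(5);
}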


@ -56,8 +56,6 @@
#include "core.h"
#include "hcd.h"
static void dwc2_port_resume(struct dwc2_hsotg *hsotg);
/*
* =========================================================================
* Host Core Layer Functions
@ -3208,6 +3206,15 @@ static void dwc2_conn_id_status_change(struct work_struct *work)
if (count > 250)
dev_err(hsotg->dev,
"Connection id status change timed out\n");
/*
* Exit Partial Power Down without restoring registers.
* No need to check the return value as registers
* are not being restored.
*/
if (hsotg->in_ppd && hsotg->lx_state == DWC2_L2)
dwc2_exit_partial_power_down(hsotg, 0, false);
hsotg->op_state = OTG_STATE_B_PERIPHERAL;
dwc2_core_init(hsotg, false);
dwc2_enable_global_interrupts(hsotg);
@ -3277,13 +3284,23 @@ static int dwc2_host_is_b_hnp_enabled(struct dwc2_hsotg *hsotg)
return hcd->self.b_hnp_enable;
}
/* Must NOT be called with interrupt disabled or spinlock held */
static void dwc2_port_suspend(struct dwc2_hsotg *hsotg, u16 windex)
/**
* dwc2_port_suspend() - Put controller in suspend mode for host.
*
* @hsotg: Programming view of the DWC_otg controller
* @windex: The control request wIndex field
*
* Return: non-zero if failed to enter suspend mode for host.
*
* This function is for entering Host mode suspend.
* Must NOT be called with interrupt disabled or spinlock held.
*/
int dwc2_port_suspend(struct dwc2_hsotg *hsotg, u16 windex)
{
unsigned long flags;
u32 hprt0;
u32 pcgctl;
u32 gotgctl;
int ret = 0;
dev_dbg(hsotg->dev, "%s()\n", __func__);
@ -3296,22 +3313,33 @@ static void dwc2_port_suspend(struct dwc2_hsotg *hsotg, u16 windex)
hsotg->op_state = OTG_STATE_A_SUSPEND;
}
hprt0 = dwc2_read_hprt0(hsotg);
hprt0 |= HPRT0_SUSP;
dwc2_writel(hsotg, hprt0, HPRT0);
hsotg->bus_suspended = true;
/*
* If power_down is supported, the PHY clock will be suspended
* after the registers are backed up.
*/
if (!hsotg->params.power_down) {
/* Suspend the Phy Clock */
pcgctl = dwc2_readl(hsotg, PCGCTL);
pcgctl |= PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgctl, PCGCTL);
udelay(10);
switch (hsotg->params.power_down) {
case DWC2_POWER_DOWN_PARAM_PARTIAL:
ret = dwc2_enter_partial_power_down(hsotg);
if (ret)
dev_err(hsotg->dev,
"enter partial_power_down failed.\n");
break;
case DWC2_POWER_DOWN_PARAM_HIBERNATION:
/*
* Unlock and relock here because the
* "dwc2_host_enter_hibernation()" path contains spinlock
* logic of its own which prevents any IRQ from being
* serviced while entering hibernation.
*/
spin_unlock_irqrestore(&hsotg->lock, flags);
ret = dwc2_enter_hibernation(hsotg, 1);
if (ret)
dev_err(hsotg->dev, "enter hibernation failed.\n");
spin_lock_irqsave(&hsotg->lock, flags);
break;
case DWC2_POWER_DOWN_PARAM_NONE:
/*
* If neither hibernation nor partial power down is supported,
* clock gating is used to save power.
*/
dwc2_host_enter_clock_gating(hsotg);
break;
}
/* For HNP the bus must be suspended for at least 200ms */
@ -3326,44 +3354,54 @@ static void dwc2_port_suspend(struct dwc2_hsotg *hsotg, u16 windex)
} else {
spin_unlock_irqrestore(&hsotg->lock, flags);
}
return ret;
}
/* Must NOT be called with interrupt disabled or spinlock held */
static void dwc2_port_resume(struct dwc2_hsotg *hsotg)
/**
* dwc2_port_resume() - Exit controller from suspend mode for host.
*
* @hsotg: Programming view of the DWC_otg controller
*
* Return: non-zero if failed to exit suspend mode for host.
*
* This function is for exiting Host mode suspend.
* Must NOT be called with interrupt disabled or spinlock held.
*/
int dwc2_port_resume(struct dwc2_hsotg *hsotg)
{
unsigned long flags;
u32 hprt0;
u32 pcgctl;
int ret = 0;
spin_lock_irqsave(&hsotg->lock, flags);
/*
* If power_down is supported, Phy clock is already resumed
* after registers restore.
*/
if (!hsotg->params.power_down) {
pcgctl = dwc2_readl(hsotg, PCGCTL);
pcgctl &= ~PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgctl, PCGCTL);
switch (hsotg->params.power_down) {
case DWC2_POWER_DOWN_PARAM_PARTIAL:
ret = dwc2_exit_partial_power_down(hsotg, 0, true);
if (ret)
dev_err(hsotg->dev,
"exit partial_power_down failed.\n");
break;
case DWC2_POWER_DOWN_PARAM_HIBERNATION:
/* Exit host hibernation. */
ret = dwc2_exit_hibernation(hsotg, 0, 0, 1);
if (ret)
dev_err(hsotg->dev, "exit hibernation failed.\n");
break;
case DWC2_POWER_DOWN_PARAM_NONE:
/*
* If neither hibernation nor partial power down is supported,
* port resume is done using the clock gating programming flow.
*/
spin_unlock_irqrestore(&hsotg->lock, flags);
msleep(20);
dwc2_host_exit_clock_gating(hsotg, 0);
spin_lock_irqsave(&hsotg->lock, flags);
break;
}
hprt0 = dwc2_read_hprt0(hsotg);
hprt0 |= HPRT0_RES;
hprt0 &= ~HPRT0_SUSP;
dwc2_writel(hsotg, hprt0, HPRT0);
spin_unlock_irqrestore(&hsotg->lock, flags);
msleep(USB_RESUME_TIMEOUT);
spin_lock_irqsave(&hsotg->lock, flags);
hprt0 = dwc2_read_hprt0(hsotg);
hprt0 &= ~(HPRT0_RES | HPRT0_SUSP);
dwc2_writel(hsotg, hprt0, HPRT0);
hsotg->bus_suspended = false;
spin_unlock_irqrestore(&hsotg->lock, flags);
return ret;
}
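dwc2_port_suspend() and dwc2_port_resume() now dispatch on the same three-way power_down policy used throughout this series. A small sketch that names the strategy selection in one place (hypothetical helper; the driver open-codes the switch in each path):

static const char *dwc2_power_down_strategy(const struct dwc2_hsotg *hsotg)
{
	/* Mirrors the switch in dwc2_port_suspend()/dwc2_port_resume(). */
	switch (hsotg->params.power_down) {
	case DWC2_POWER_DOWN_PARAM_PARTIAL:
		return "partial power down";
	case DWC2_POWER_DOWN_PARAM_HIBERNATION:
		return "hibernation";
	case DWC2_POWER_DOWN_PARAM_NONE:
	default:
		return "clock gating";
	}
}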
/* Handles hub class-specific requests */
@ -3413,12 +3451,8 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
dev_dbg(hsotg->dev,
"ClearPortFeature USB_PORT_FEAT_SUSPEND\n");
if (hsotg->bus_suspended) {
if (hsotg->hibernated)
dwc2_exit_hibernation(hsotg, 0, 0, 1);
else
dwc2_port_resume(hsotg);
}
if (hsotg->bus_suspended)
retval = dwc2_port_resume(hsotg);
break;
case USB_PORT_FEAT_POWER:
@ -3629,10 +3663,8 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
"SetPortFeature - USB_PORT_FEAT_SUSPEND\n");
if (windex != hsotg->otg_port)
goto error;
if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_HIBERNATION)
dwc2_enter_hibernation(hsotg, 1);
else
dwc2_port_suspend(hsotg, windex);
if (!hsotg->bus_suspended)
retval = dwc2_port_suspend(hsotg, windex);
break;
case USB_PORT_FEAT_POWER:
@ -3647,12 +3679,30 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
break;
case USB_PORT_FEAT_RESET:
if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_HIBERNATION &&
hsotg->hibernated)
dwc2_exit_hibernation(hsotg, 0, 1, 1);
hprt0 = dwc2_read_hprt0(hsotg);
dev_dbg(hsotg->dev,
"SetPortFeature - USB_PORT_FEAT_RESET\n");
hprt0 = dwc2_read_hprt0(hsotg);
if (hsotg->hibernated) {
retval = dwc2_exit_hibernation(hsotg, 0, 1, 1);
if (retval)
dev_err(hsotg->dev,
"exit hibernation failed\n");
}
if (hsotg->in_ppd) {
retval = dwc2_exit_partial_power_down(hsotg, 1,
true);
if (retval)
dev_err(hsotg->dev,
"exit partial_power_down failed\n");
}
if (hsotg->params.power_down ==
DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended)
dwc2_host_exit_clock_gating(hsotg, 0);
pcgctl = dwc2_readl(hsotg, PCGCTL);
pcgctl &= ~(PCGCTL_ENBL_SLEEP_GATING | PCGCTL_STOPPCLK);
dwc2_writel(hsotg, pcgctl, PCGCTL);
@ -4305,8 +4355,6 @@ static int _dwc2_hcd_suspend(struct usb_hcd *hcd)
struct dwc2_hsotg *hsotg = dwc2_hcd_to_hsotg(hcd);
unsigned long flags;
int ret = 0;
u32 hprt0;
u32 pcgctl;
spin_lock_irqsave(&hsotg->lock, flags);
@ -4322,47 +4370,51 @@ static int _dwc2_hcd_suspend(struct usb_hcd *hcd)
if (hsotg->op_state == OTG_STATE_B_PERIPHERAL)
goto unlock;
if (hsotg->params.power_down != DWC2_POWER_DOWN_PARAM_PARTIAL ||
hsotg->flags.b.port_connect_status == 0)
if (hsotg->bus_suspended)
goto skip_power_saving;
/*
* Drive USB suspend and disable port Power
* if usb bus is not suspended.
*/
if (!hsotg->bus_suspended) {
hprt0 = dwc2_read_hprt0(hsotg);
if (hprt0 & HPRT0_CONNSTS) {
hprt0 |= HPRT0_SUSP;
if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_PARTIAL)
hprt0 &= ~HPRT0_PWR;
dwc2_writel(hsotg, hprt0, HPRT0);
}
if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_PARTIAL) {
spin_unlock_irqrestore(&hsotg->lock, flags);
dwc2_vbus_supply_exit(hsotg);
spin_lock_irqsave(&hsotg->lock, flags);
} else {
pcgctl = readl(hsotg->regs + PCGCTL);
pcgctl |= PCGCTL_STOPPCLK;
writel(pcgctl, hsotg->regs + PCGCTL);
}
}
if (hsotg->flags.b.port_connect_status == 0)
goto skip_power_saving;
if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_PARTIAL) {
switch (hsotg->params.power_down) {
case DWC2_POWER_DOWN_PARAM_PARTIAL:
/* Enter partial_power_down */
ret = dwc2_enter_partial_power_down(hsotg);
if (ret) {
if (ret != -ENOTSUPP)
dev_err(hsotg->dev,
"enter partial_power_down failed\n");
goto skip_power_saving;
}
/* After entering partial_power_down, hardware is no longer accessible */
if (ret)
dev_err(hsotg->dev,
"enter partial_power_down failed\n");
/* After entering suspend, hardware is not accessible */
clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
break;
case DWC2_POWER_DOWN_PARAM_HIBERNATION:
/* Enter hibernation */
spin_unlock_irqrestore(&hsotg->lock, flags);
ret = dwc2_enter_hibernation(hsotg, 1);
if (ret)
dev_err(hsotg->dev, "enter hibernation failed\n");
spin_lock_irqsave(&hsotg->lock, flags);
/* After entering suspend, hardware is not accessible */
clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
break;
case DWC2_POWER_DOWN_PARAM_NONE:
/*
* If neither hibernation nor partial power down is supported,
* clock gating is used to save power.
*/
dwc2_host_enter_clock_gating(hsotg);
/* After entering suspend, hardware is not accessible */
clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
break;
default:
goto skip_power_saving;
}
spin_unlock_irqrestore(&hsotg->lock, flags);
dwc2_vbus_supply_exit(hsotg);
spin_lock_irqsave(&hsotg->lock, flags);
/* Ask phy to be suspended */
if (!IS_ERR_OR_NULL(hsotg->uphy)) {
spin_unlock_irqrestore(&hsotg->lock, flags);
@ -4382,7 +4434,7 @@ static int _dwc2_hcd_resume(struct usb_hcd *hcd)
{
struct dwc2_hsotg *hsotg = dwc2_hcd_to_hsotg(hcd);
unsigned long flags;
u32 pcgctl;
u32 hprt0;
int ret = 0;
spin_lock_irqsave(&hsotg->lock, flags);
@ -4393,11 +4445,72 @@ static int _dwc2_hcd_resume(struct usb_hcd *hcd)
if (hsotg->lx_state != DWC2_L2)
goto unlock;
if (hsotg->params.power_down > DWC2_POWER_DOWN_PARAM_PARTIAL) {
hprt0 = dwc2_read_hprt0(hsotg);
/*
* Check the port connection status: a connected port means the
* controller never entered Partial Power Down mode, so skip the
* exit flow in _dwc2_hcd_resume().
*/
if (hprt0 & HPRT0_CONNSTS) {
hsotg->lx_state = DWC2_L0;
goto unlock;
}
switch (hsotg->params.power_down) {
case DWC2_POWER_DOWN_PARAM_PARTIAL:
ret = dwc2_exit_partial_power_down(hsotg, 0, true);
if (ret)
dev_err(hsotg->dev,
"exit partial_power_down failed\n");
/*
* Set HW accessible bit before powering on the controller
* since an interrupt may arise.
*/
set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
break;
case DWC2_POWER_DOWN_PARAM_HIBERNATION:
ret = dwc2_exit_hibernation(hsotg, 0, 0, 1);
if (ret)
dev_err(hsotg->dev, "exit hibernation failed.\n");
/*
* Set HW accessible bit before powering on the controller
* since an interrupt may arise.
*/
set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
break;
case DWC2_POWER_DOWN_PARAM_NONE:
/*
* If neither hibernation nor partial power down is supported,
* port resume is done using the clock gating programming flow.
*/
spin_unlock_irqrestore(&hsotg->lock, flags);
dwc2_host_exit_clock_gating(hsotg, 0);
/*
* Initialize the Core for Host mode, as after system resume
* the global interrupts are disabled.
*/
dwc2_core_init(hsotg, false);
dwc2_enable_global_interrupts(hsotg);
dwc2_hcd_reinit(hsotg);
spin_lock_irqsave(&hsotg->lock, flags);
/*
* Set HW accessible bit before powering on the controller
* since an interrupt may arise.
*/
set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
break;
default:
hsotg->lx_state = DWC2_L0;
goto unlock;
}
/* Change root port status, as a port status change occurred after resume. */
hsotg->flags.b.port_suspend_change = 1;
/*
* Enable power if not already done.
* This must not be spinlocked since duration
@ -4409,52 +4522,25 @@ static int _dwc2_hcd_resume(struct usb_hcd *hcd)
spin_lock_irqsave(&hsotg->lock, flags);
}
if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_PARTIAL) {
/*
* Set HW accessible bit before powering on the controller
* since an interrupt may arise.
*/
set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
/* Exit partial_power_down */
ret = dwc2_exit_partial_power_down(hsotg, true);
if (ret && (ret != -ENOTSUPP))
dev_err(hsotg->dev, "exit partial_power_down failed\n");
} else {
pcgctl = readl(hsotg->regs + PCGCTL);
pcgctl &= ~PCGCTL_STOPPCLK;
writel(pcgctl, hsotg->regs + PCGCTL);
}
hsotg->lx_state = DWC2_L0;
/* Enable external vbus supply after resuming the port. */
spin_unlock_irqrestore(&hsotg->lock, flags);
dwc2_vbus_supply_init(hsotg);
if (hsotg->bus_suspended) {
spin_lock_irqsave(&hsotg->lock, flags);
hsotg->flags.b.port_suspend_change = 1;
spin_unlock_irqrestore(&hsotg->lock, flags);
dwc2_port_resume(hsotg);
} else {
if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_PARTIAL) {
dwc2_vbus_supply_init(hsotg);
/* Wait for controller to correctly update D+/D- level */
usleep_range(3000, 5000);
spin_lock_irqsave(&hsotg->lock, flags);
/* Wait for controller to correctly update D+/D- level */
usleep_range(3000, 5000);
}
/*
* Clear Port Enable and Port Status changes.
* Enable Port Power.
*/
dwc2_writel(hsotg, HPRT0_PWR | HPRT0_CONNDET |
HPRT0_ENACHG, HPRT0);
/*
* Clear Port Enable and Port Status changes.
* Enable Port Power.
*/
dwc2_writel(hsotg, HPRT0_PWR | HPRT0_CONNDET |
HPRT0_ENACHG, HPRT0);
/* Wait for controller to detect Port Connect */
usleep_range(5000, 7000);
}
return ret;
/* Wait for controller to detect Port Connect */
spin_unlock_irqrestore(&hsotg->lock, flags);
usleep_range(5000, 7000);
spin_lock_irqsave(&hsotg->lock, flags);
unlock:
spin_unlock_irqrestore(&hsotg->lock, flags);
@ -4565,12 +4651,41 @@ static int _dwc2_hcd_urb_enqueue(struct usb_hcd *hcd, struct urb *urb,
struct dwc2_qh *qh;
bool qh_allocated = false;
struct dwc2_qtd *qtd;
struct dwc2_gregs_backup *gr;
gr = &hsotg->gr_backup;
if (dbg_urb(urb)) {
dev_vdbg(hsotg->dev, "DWC OTG HCD URB Enqueue\n");
dwc2_dump_urb_info(hcd, urb, "urb_enqueue");
}
if (hsotg->hibernated) {
if (gr->gotgctl & GOTGCTL_CURMODE_HOST)
retval = dwc2_exit_hibernation(hsotg, 0, 0, 1);
else
retval = dwc2_exit_hibernation(hsotg, 0, 0, 0);
if (retval)
dev_err(hsotg->dev,
"exit hibernation failed.\n");
}
if (hsotg->in_ppd) {
retval = dwc2_exit_partial_power_down(hsotg, 0, true);
if (retval)
dev_err(hsotg->dev,
"exit partial_power_down failed\n");
}
if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_NONE &&
hsotg->bus_suspended) {
if (dwc2_is_device_mode(hsotg))
dwc2_gadget_exit_clock_gating(hsotg, 0);
else
dwc2_host_exit_clock_gating(hsotg, 0);
}
if (!ep)
return -EINVAL;
@ -5580,7 +5695,15 @@ int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg, int rem_wakeup,
return ret;
}
dwc2_hcd_rem_wakeup(hsotg);
if (rem_wakeup) {
dwc2_hcd_rem_wakeup(hsotg);
/*
* Change "port_connect_status_change" flag to re-enumerate,
* because after exit from hibernation port connection status
* is not detected.
*/
hsotg->flags.b.port_connect_status_change = 1;
}
hsotg->hibernated = 0;
hsotg->bus_suspended = 0;
@ -5607,3 +5730,249 @@ bool dwc2_host_can_poweroff_phy(struct dwc2_hsotg *dwc2)
/* No reason to keep the PHY powered, so allow poweroff */
return true;
}
/**
* dwc2_host_enter_partial_power_down() - Put controller in partial
* power down.
*
* @hsotg: Programming view of the DWC_otg controller
*
* Return: non-zero if failed to enter host partial power down.
*
* This function is for entering Host mode partial power down.
*/
int dwc2_host_enter_partial_power_down(struct dwc2_hsotg *hsotg)
{
u32 pcgcctl;
u32 hprt0;
int ret = 0;
dev_dbg(hsotg->dev, "Entering host partial power down started.\n");
/* Put this port in suspend mode. */
hprt0 = dwc2_read_hprt0(hsotg);
hprt0 |= HPRT0_SUSP;
dwc2_writel(hsotg, hprt0, HPRT0);
udelay(5);
/* Wait for the HPRT0.PrtSusp register field to be set */
if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 3000))
dev_warn(hsotg->dev, "Suspend wasn't generated\n");
/* Backup all registers */
ret = dwc2_backup_global_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to backup global registers\n",
__func__);
return ret;
}
ret = dwc2_backup_host_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to backup host registers\n",
__func__);
return ret;
}
/*
* Clear any pending interrupts since dwc2 will not be able to
* clear them after entering partial_power_down.
*/
dwc2_writel(hsotg, 0xffffffff, GINTSTS);
/* Put the controller in low power state */
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl |= PCGCTL_PWRCLMP;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
udelay(5);
pcgcctl |= PCGCTL_RSTPDWNMODULE;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
udelay(5);
pcgcctl |= PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
/* Set the in_ppd flag to 1 as the core enters suspend here. */
hsotg->in_ppd = 1;
hsotg->lx_state = DWC2_L2;
hsotg->bus_suspended = true;
dev_dbg(hsotg->dev, "Entering host partial power down completed.\n");
return ret;
}
/**
* dwc2_host_exit_partial_power_down() - Exit controller from host partial
* power down.
*
* @hsotg: Programming view of the DWC_otg controller
* @rem_wakeup: indicates whether resume is initiated by Reset.
* @restore: indicates whether need to restore the registers or not.
*
* Return: non-zero if failed to exit host partial power down.
*
* This function is for exiting from Host mode partial power down.
*/
int dwc2_host_exit_partial_power_down(struct dwc2_hsotg *hsotg,
int rem_wakeup, bool restore)
{
u32 pcgcctl;
int ret = 0;
u32 hprt0;
dev_dbg(hsotg->dev, "Exiting host partial power down started.\n");
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl &= ~PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
udelay(5);
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl &= ~PCGCTL_PWRCLMP;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
udelay(5);
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl &= ~PCGCTL_RSTPDWNMODULE;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
udelay(100);
if (restore) {
ret = dwc2_restore_global_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to restore registers\n",
__func__);
return ret;
}
ret = dwc2_restore_host_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to restore host registers\n",
__func__);
return ret;
}
}
/* Drive resume signaling and exit suspend mode on the port. */
hprt0 = dwc2_read_hprt0(hsotg);
hprt0 |= HPRT0_RES;
hprt0 &= ~HPRT0_SUSP;
dwc2_writel(hsotg, hprt0, HPRT0);
udelay(5);
if (!rem_wakeup) {
/* Stop driving resume signaling on the port. */
hprt0 = dwc2_read_hprt0(hsotg);
hprt0 &= ~HPRT0_RES;
dwc2_writel(hsotg, hprt0, HPRT0);
hsotg->bus_suspended = false;
} else {
/* Turn on the port power bit. */
hprt0 = dwc2_read_hprt0(hsotg);
hprt0 |= HPRT0_PWR;
dwc2_writel(hsotg, hprt0, HPRT0);
/* Connect hcd. */
dwc2_hcd_connect(hsotg);
mod_timer(&hsotg->wkp_timer,
jiffies + msecs_to_jiffies(71));
}
/* Set lx_state to DWC2_L0 and in_ppd to 0 as the core exits suspend. */
hsotg->in_ppd = 0;
hsotg->lx_state = DWC2_L0;
dev_dbg(hsotg->dev, "Exiting host partial power down completed.\n");
return ret;
}
/**
* dwc2_host_enter_clock_gating() - Put controller in clock gating.
*
* @hsotg: Programming view of the DWC_otg controller
*
* This function is for entering Host mode clock gating.
*/
void dwc2_host_enter_clock_gating(struct dwc2_hsotg *hsotg)
{
u32 hprt0;
u32 pcgctl;
dev_dbg(hsotg->dev, "Entering host clock gating.\n");
/* Put this port in suspend mode. */
hprt0 = dwc2_read_hprt0(hsotg);
hprt0 |= HPRT0_SUSP;
dwc2_writel(hsotg, hprt0, HPRT0);
/* Set the Phy Clock bit as suspend is received. */
pcgctl = dwc2_readl(hsotg, PCGCTL);
pcgctl |= PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgctl, PCGCTL);
udelay(5);
/* Set the Gate hclk as suspend is received. */
pcgctl = dwc2_readl(hsotg, PCGCTL);
pcgctl |= PCGCTL_GATEHCLK;
dwc2_writel(hsotg, pcgctl, PCGCTL);
udelay(5);
hsotg->bus_suspended = true;
hsotg->lx_state = DWC2_L2;
}
/**
* dwc2_host_exit_clock_gating() - Exit controller from clock gating.
*
* @hsotg: Programming view of the DWC_otg controller
* @rem_wakeup: indicates whether resume is initiated by remote wakeup
*
* This function is for exiting Host mode clock gating.
*/
void dwc2_host_exit_clock_gating(struct dwc2_hsotg *hsotg, int rem_wakeup)
{
u32 hprt0;
u32 pcgctl;
dev_dbg(hsotg->dev, "Exiting host clock gating.\n");
/* Clear the Gate hclk. */
pcgctl = dwc2_readl(hsotg, PCGCTL);
pcgctl &= ~PCGCTL_GATEHCLK;
dwc2_writel(hsotg, pcgctl, PCGCTL);
udelay(5);
/* Clear the Phy Clock bit. */
pcgctl = dwc2_readl(hsotg, PCGCTL);
pcgctl &= ~PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgctl, PCGCTL);
udelay(5);
/* Drive resume signaling and exit suspend mode on the port. */
hprt0 = dwc2_read_hprt0(hsotg);
hprt0 |= HPRT0_RES;
hprt0 &= ~HPRT0_SUSP;
dwc2_writel(hsotg, hprt0, HPRT0);
udelay(5);
if (!rem_wakeup) {
/* In the case of port resume, wait for 40 ms */
msleep(USB_RESUME_TIMEOUT);
/* Stop driving resume signaling on the port. */
hprt0 = dwc2_read_hprt0(hsotg);
hprt0 &= ~HPRT0_RES;
dwc2_writel(hsotg, hprt0, HPRT0);
hsotg->bus_suspended = false;
hsotg->lx_state = DWC2_L0;
} else {
mod_timer(&hsotg->wkp_timer,
jiffies + msecs_to_jiffies(71));
}
}


@ -59,7 +59,7 @@
#define DWC2_UNRESERVE_DELAY (msecs_to_jiffies(5))
/* If we get a NAK, wait this long before retrying */
#define DWC2_RETRY_WAIT_DELAY 1*1E6L
#define DWC2_RETRY_WAIT_DELAY (1 * 1E6L)
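The added parentheses matter because the macro body splices into larger expressions. A minimal illustration of the precedence bug the old form allows (hypothetical usage, not taken from this driver):

/*
 * With the old form,
 *
 *	delay = total / DWC2_RETRY_WAIT_DELAY;
 *
 * expands to total / 1 * 1E6L, which C evaluates left to right as
 * (total / 1) * 1E6L -- multiplying by a million instead of dividing.
 * The parenthesized form expands to total / (1 * 1E6L) as intended.
 */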
/**
* dwc2_periodic_channel_available() - Checks that a channel is available for a


@ -44,6 +44,7 @@
#define GOTGCTL_CHIRPEN BIT(27)
#define GOTGCTL_MULT_VALID_BC_MASK (0x1f << 22)
#define GOTGCTL_MULT_VALID_BC_SHIFT 22
#define GOTGCTL_CURMODE_HOST BIT(21)
#define GOTGCTL_OTGVER BIT(20)
#define GOTGCTL_BSESVLD BIT(19)
#define GOTGCTL_ASESVLD BIT(18)


@ -232,6 +232,12 @@ const struct of_device_id dwc2_of_match_table[] = {
};
MODULE_DEVICE_TABLE(of, dwc2_of_match_table);
const struct acpi_device_id dwc2_acpi_match[] = {
{ "BCM2848", (kernel_ulong_t)dwc2_set_bcm_params },
{ },
};
MODULE_DEVICE_TABLE(acpi, dwc2_acpi_match);
static void dwc2_set_param_otg_cap(struct dwc2_hsotg *hsotg)
{
u8 val;
@ -866,10 +872,12 @@ int dwc2_get_hwparams(struct dwc2_hsotg *hsotg)
return 0;
}
typedef void (*set_params_cb)(struct dwc2_hsotg *data);
int dwc2_init_params(struct dwc2_hsotg *hsotg)
{
const struct of_device_id *match;
void (*set_params)(struct dwc2_hsotg *data);
set_params_cb set_params;
dwc2_set_default_params(hsotg);
dwc2_get_device_properties(hsotg);
@ -878,6 +886,14 @@ int dwc2_init_params(struct dwc2_hsotg *hsotg)
if (match && match->data) {
set_params = match->data;
set_params(hsotg);
} else {
const struct acpi_device_id *amatch;
amatch = acpi_match_device(dwc2_acpi_match, hsotg->dev);
if (amatch && amatch->driver_data) {
set_params = (set_params_cb)amatch->driver_data;
set_params(hsotg);
}
}
dwc2_check_params(hsotg);
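The ACPI fallback mirrors the existing OF path: the per-platform setup routine is stashed in the match table's driver_data and recovered through the new set_params_cb typedef. A condensed sketch of just that lookup (hypothetical wrapper; names otherwise from this diff):

static void example_apply_acpi_params(struct dwc2_hsotg *hsotg)
{
	const struct acpi_device_id *amatch;
	set_params_cb set_params;

	amatch = acpi_match_device(dwc2_acpi_match, hsotg->dev);
	if (amatch && amatch->driver_data) {
		/* e.g. dwc2_set_bcm_params for the "BCM2848" entry */
		set_params = (set_params_cb)amatch->driver_data;
		set_params(hsotg);
	}
}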


@ -316,6 +316,39 @@ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
static int dwc2_driver_remove(struct platform_device *dev)
{
struct dwc2_hsotg *hsotg = platform_get_drvdata(dev);
struct dwc2_gregs_backup *gr;
int ret = 0;
gr = &hsotg->gr_backup;
/* Exit Hibernation when driver is removed. */
if (hsotg->hibernated) {
if (gr->gotgctl & GOTGCTL_CURMODE_HOST)
ret = dwc2_exit_hibernation(hsotg, 0, 0, 1);
else
ret = dwc2_exit_hibernation(hsotg, 0, 0, 0);
if (ret)
dev_err(hsotg->dev,
"exit hibernation failed.\n");
}
/* Exit Partial Power Down when driver is removed. */
if (hsotg->in_ppd) {
ret = dwc2_exit_partial_power_down(hsotg, 0, true);
if (ret)
dev_err(hsotg->dev,
"exit partial_power_down failed\n");
}
/* Exit clock gating when driver is removed. */
if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_NONE &&
hsotg->bus_suspended) {
if (dwc2_is_device_mode(hsotg))
dwc2_gadget_exit_clock_gating(hsotg, 0);
else
dwc2_host_exit_clock_gating(hsotg, 0);
}
dwc2_debugfs_exit(hsotg);
if (hsotg->hcd_enabled)
@ -334,7 +367,7 @@ static int dwc2_driver_remove(struct platform_device *dev)
reset_control_assert(hsotg->reset);
reset_control_assert(hsotg->reset_ecc);
return 0;
return ret;
}
/**
@ -734,6 +767,7 @@ static struct platform_driver dwc2_platform_driver = {
.driver = {
.name = dwc2_driver_name,
.of_match_table = dwc2_of_match_table,
.acpi_match_table = ACPI_PTR(dwc2_acpi_match),
.pm = &dwc2_dev_pm_ops,
},
.probe = dwc2_driver_probe,


@ -149,4 +149,13 @@ config USB_DWC3_IMX8MP
functionality.
Say 'Y' or 'M' if you have one such device.
config USB_DWC3_XILINX
tristate "Xilinx Platforms"
depends on (ARCH_ZYNQMP || ARCH_VERSAL) && OF
default USB_DWC3
help
Support Xilinx SoCs with DesignWare Core USB3 IP.
This driver handles both ZynqMP and Versal SoC operations.
Say 'Y' or 'M' if you have one such device.
endif


@ -52,3 +52,4 @@ obj-$(CONFIG_USB_DWC3_OF_SIMPLE) += dwc3-of-simple.o
obj-$(CONFIG_USB_DWC3_ST) += dwc3-st.o
obj-$(CONFIG_USB_DWC3_QCOM) += dwc3-qcom.o
obj-$(CONFIG_USB_DWC3_IMX8MP) += dwc3-imx8mp.o
obj-$(CONFIG_USB_DWC3_XILINX) += dwc3-xilinx.o


@ -114,6 +114,8 @@ void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode)
dwc->current_dr_role = mode;
}
static int dwc3_core_soft_reset(struct dwc3 *dwc);
static void __dwc3_set_mode(struct work_struct *work)
{
struct dwc3 *dwc = work_to_dwc(work);
@ -121,6 +123,8 @@ static void __dwc3_set_mode(struct work_struct *work)
int ret;
u32 reg;
mutex_lock(&dwc->mutex);
pm_runtime_get_sync(dwc->dev);
if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_OTG)
@ -154,6 +158,25 @@ static void __dwc3_set_mode(struct work_struct *work)
break;
}
/* For DRD host or device mode only */
if (dwc->desired_dr_role != DWC3_GCTL_PRTCAP_OTG) {
reg = dwc3_readl(dwc->regs, DWC3_GCTL);
reg |= DWC3_GCTL_CORESOFTRESET;
dwc3_writel(dwc->regs, DWC3_GCTL, reg);
/*
* Wait for internal clocks to synchronize. DWC_usb31 and
* DWC_usb32 may need at least 50ms (less for DWC_usb3). To
* keep it consistent across different IPs, let's wait up to
* 100ms before clearing GCTL.CORESOFTRESET.
*/
msleep(100);
reg = dwc3_readl(dwc->regs, DWC3_GCTL);
reg &= ~DWC3_GCTL_CORESOFTRESET;
dwc3_writel(dwc->regs, DWC3_GCTL, reg);
}
spin_lock_irqsave(&dwc->lock, flags);
dwc3_set_prtcap(dwc, dwc->desired_dr_role);
@ -178,6 +201,8 @@ static void __dwc3_set_mode(struct work_struct *work)
}
break;
case DWC3_GCTL_PRTCAP_DEVICE:
dwc3_core_soft_reset(dwc);
dwc3_event_buffers_setup(dwc);
if (dwc->usb2_phy)
@ -200,6 +225,7 @@ static void __dwc3_set_mode(struct work_struct *work)
out:
pm_runtime_mark_last_busy(dwc->dev);
pm_runtime_put_autosuspend(dwc->dev);
mutex_unlock(&dwc->mutex);
}
void dwc3_set_mode(struct dwc3 *dwc, u32 mode)
@ -544,6 +570,9 @@ static void dwc3_cache_hwparams(struct dwc3 *dwc)
parms->hwparams6 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS6);
parms->hwparams7 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS7);
parms->hwparams8 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS8);
if (DWC3_IP_IS(DWC32))
parms->hwparams9 = dwc3_readl(dwc->regs, DWC3_GHWPARAMS9);
}
static int dwc3_core_ulpi_init(struct dwc3 *dwc)
@ -1238,6 +1267,8 @@ static void dwc3_get_properties(struct dwc3 *dwc)
u8 rx_max_burst_prd;
u8 tx_thr_num_pkt_prd;
u8 tx_max_burst_prd;
const char *usb_psy_name;
int ret;
/* default to highest possible threshold */
lpm_nyet_threshold = 0xf;
@ -1263,6 +1294,13 @@ static void dwc3_get_properties(struct dwc3 *dwc)
else
dwc->sysdev = dwc->dev;
ret = device_property_read_string(dev, "usb-psy-name", &usb_psy_name);
if (ret >= 0) {
dwc->usb_psy = power_supply_get_by_name(usb_psy_name);
if (!dwc->usb_psy)
dev_err(dev, "couldn't get usb power supply\n");
}
dwc->has_lpm_erratum = device_property_read_bool(dev,
"snps,has-lpm-erratum");
device_property_read_u8(dev, "snps,lpm-nyet-threshold",
@ -1277,6 +1315,8 @@ static void dwc3_get_properties(struct dwc3 *dwc)
"snps,usb3_lpm_capable");
dwc->usb2_lpm_disable = device_property_read_bool(dev,
"snps,usb2-lpm-disable");
dwc->usb2_gadget_lpm_disable = device_property_read_bool(dev,
"snps,usb2-gadget-lpm-disable");
device_property_read_u8(dev, "snps,rx-thr-num-pkt-prd",
&rx_thr_num_pkt_prd);
device_property_read_u8(dev, "snps,rx-max-burst-prd",
@ -1385,7 +1425,6 @@ static void dwc3_check_params(struct dwc3 *dwc)
/* Check the maximum_speed parameter */
switch (dwc->maximum_speed) {
case USB_SPEED_LOW:
case USB_SPEED_FULL:
case USB_SPEED_HIGH:
break;
@ -1543,6 +1582,7 @@ static int dwc3_probe(struct platform_device *pdev)
dwc3_cache_hwparams(dwc);
spin_lock_init(&dwc->lock);
mutex_init(&dwc->mutex);
pm_runtime_set_active(dev);
pm_runtime_use_autosuspend(dev);
@ -1619,6 +1659,9 @@ static int dwc3_probe(struct platform_device *pdev)
assert_reset:
reset_control_assert(dwc->reset);
if (dwc->usb_psy)
power_supply_put(dwc->usb_psy);
return ret;
}
@ -1641,9 +1684,17 @@ static int dwc3_remove(struct platform_device *pdev)
dwc3_free_event_buffers(dwc);
dwc3_free_scratch_buffers(dwc);
if (dwc->usb_psy)
power_supply_put(dwc->usb_psy);
return 0;
}
static void dwc3_shutdown(struct platform_device *pdev)
{
dwc3_remove(pdev);
}
#ifdef CONFIG_PM
static int dwc3_core_init_for_resume(struct dwc3 *dwc)
{
@ -1961,6 +2012,7 @@ MODULE_DEVICE_TABLE(acpi, dwc3_acpi_match);
static struct platform_driver dwc3_driver = {
.probe = dwc3_probe,
.remove = dwc3_remove,
.shutdown = dwc3_shutdown,
.driver = {
.name = "dwc3",
.of_match_table = of_match_ptr(of_dwc3_match),


@ -13,6 +13,7 @@
#include <linux/device.h>
#include <linux/spinlock.h>
#include <linux/mutex.h>
#include <linux/ioport.h>
#include <linux/list.h>
#include <linux/bitops.h>
@ -30,6 +31,8 @@
#include <linux/phy/phy.h>
#include <linux/power_supply.h>
#define DWC3_MSG_MAX 500
/* Global constants */
@ -140,6 +143,7 @@
#define DWC3_GHWPARAMS8 0xc600
#define DWC3_GUCTL3 0xc60c
#define DWC3_GFLADJ 0xc630
#define DWC3_GHWPARAMS9 0xc680
/* Device Registers */
#define DWC3_DCFG 0xc700
@ -375,6 +379,9 @@
#define DWC3_GHWPARAMS7_RAM1_DEPTH(n) ((n) & 0xffff)
#define DWC3_GHWPARAMS7_RAM2_DEPTH(n) (((n) >> 16) & 0xffff)
/* Global HWPARAMS9 Register */
#define DWC3_GHWPARAMS9_DEV_TXF_FLUSH_BYPASS BIT(0)
/* Global Frame Length Adjustment Register */
#define DWC3_GFLADJ_30MHZ_SDBND_SEL BIT(7)
#define DWC3_GFLADJ_30MHZ_MASK 0x3f
@ -396,12 +403,12 @@
#define DWC3_DCFG_SUPERSPEED (4 << 0)
#define DWC3_DCFG_HIGHSPEED (0 << 0)
#define DWC3_DCFG_FULLSPEED BIT(0)
#define DWC3_DCFG_LOWSPEED (2 << 0)
#define DWC3_DCFG_NUMP_SHIFT 17
#define DWC3_DCFG_NUMP(n) (((n) >> DWC3_DCFG_NUMP_SHIFT) & 0x1f)
#define DWC3_DCFG_NUMP_MASK (0x1f << DWC3_DCFG_NUMP_SHIFT)
#define DWC3_DCFG_LPM_CAP BIT(22)
#define DWC3_DCFG_IGNSTRMPP BIT(23)
/* Device Control Register */
#define DWC3_DCTL_RUN_STOP BIT(31)
@ -490,7 +497,6 @@
#define DWC3_DSTS_SUPERSPEED (4 << 0)
#define DWC3_DSTS_HIGHSPEED (0 << 0)
#define DWC3_DSTS_FULLSPEED BIT(0)
#define DWC3_DSTS_LOWSPEED (2 << 0)
/* Device Generic Command Register */
#define DWC3_DGCMD_SET_LMP 0x01
@ -855,13 +861,12 @@ struct dwc3_hwparams {
u32 hwparams6;
u32 hwparams7;
u32 hwparams8;
u32 hwparams9;
};
/* HWPARAMS0 */
#define DWC3_MODE(n) ((n) & 0x7)
#define DWC3_MDWIDTH(n) (((n) & 0xff00) >> 8)
/* HWPARAMS1 */
#define DWC3_NUM_INT(n) (((n) & (0x3f << 15)) >> 15)
@ -908,11 +913,13 @@ struct dwc3_request {
unsigned int remaining;
unsigned int status;
#define DWC3_REQUEST_STATUS_QUEUED 0
#define DWC3_REQUEST_STATUS_STARTED 1
#define DWC3_REQUEST_STATUS_CANCELLED 2
#define DWC3_REQUEST_STATUS_COMPLETED 3
#define DWC3_REQUEST_STATUS_UNKNOWN -1
#define DWC3_REQUEST_STATUS_QUEUED 0
#define DWC3_REQUEST_STATUS_STARTED 1
#define DWC3_REQUEST_STATUS_DISCONNECTED 2
#define DWC3_REQUEST_STATUS_DEQUEUED 3
#define DWC3_REQUEST_STATUS_STALLED 4
#define DWC3_REQUEST_STATUS_COMPLETED 5
#define DWC3_REQUEST_STATUS_UNKNOWN -1
u8 epnum;
struct dwc3_trb *trb;
@ -946,6 +953,7 @@ struct dwc3_scratchpad_array {
* @scratch_addr: dma address of scratchbuf
* @ep0_in_setup: one control transfer is completed and enter setup phase
* @lock: for synchronizing
* @mutex: for mode switching
* @dev: pointer to our struct device
* @sysdev: pointer to the DMA-capable device
* @xhci: pointer to our xHCI child
@ -986,6 +994,7 @@ struct dwc3_scratchpad_array {
* @role_sw: usb_role_switch handle
* @role_switch_default_mode: default operation mode of controller while
* usb role is USB_ROLE_NONE.
* @usb_psy: pointer to power supply interface.
* @usb2_phy: pointer to USB2 PHY
* @usb3_phy: pointer to USB3 PHY
* @usb2_generic_phy: pointer to USB2 PHY
@ -1034,7 +1043,8 @@ struct dwc3_scratchpad_array {
* @dis_start_transfer_quirk: set if start_transfer failure SW workaround is
* not needed for DWC_usb31 version 1.70a-ea06 and below
* @usb3_lpm_capable: set if hardware supports Link Power Management
* @usb2_lpm_disable: set to disable usb2 lpm
* @usb2_lpm_disable: set to disable usb2 lpm for host
* @usb2_gadget_lpm_disable: set to disable usb2 lpm for gadget
* @disable_scramble_quirk: set if we enable the disable scramble quirk
* @u2exit_lfps_quirk: set if we enable u2exit lfps quirk
* @u2ss_inp3_quirk: set if we enable P3 OK for U2/SS Inactive quirk
@ -1085,6 +1095,9 @@ struct dwc3 {
/* device lock */
spinlock_t lock;
/* mode switching lock */
struct mutex mutex;
struct device *dev;
struct device *sysdev;
@ -1125,6 +1138,8 @@ struct dwc3 {
struct usb_role_switch *role_sw;
enum usb_dr_mode role_switch_default_mode;
struct power_supply *usb_psy;
u32 fladj;
u32 irq_gadget;
u32 otg_irq;
@ -1238,6 +1253,7 @@ struct dwc3 {
unsigned dis_start_transfer_quirk:1;
unsigned usb3_lpm_capable:1;
unsigned usb2_lpm_disable:1;
unsigned usb2_gadget_lpm_disable:1;
unsigned disable_scramble_quirk:1;
unsigned u2exit_lfps_quirk:1;
@ -1455,6 +1471,23 @@ u32 dwc3_core_fifo_space(struct dwc3_ep *dep, u8 type);
(!(_ip##_VERSIONTYPE_##_to) || \
dwc->version_type <= _ip##_VERSIONTYPE_##_to))
/**
* dwc3_mdwidth - get MDWIDTH value in bits
* @dwc: pointer to our context structure
*
* Return MDWIDTH configuration value in bits.
*/
static inline u32 dwc3_mdwidth(struct dwc3 *dwc)
{
u32 mdwidth;
mdwidth = DWC3_GHWPARAMS0_MDWIDTH(dwc->hwparams.hwparams0);
if (DWC3_IP_IS(DWC32))
mdwidth += DWC3_GHWPARAMS6_MDWIDTH(dwc->hwparams.hwparams6);
return mdwidth;
}
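The debugfs hunks earlier in this diff use the new helper exactly this way to convert a FIFO depth from MDWIDTH-bit words into bytes; a short sketch of that conversion (hypothetical wrapper, mirroring dwc3_tx_fifo_size_show()):

static inline u32 dwc3_fifo_words_to_bytes(struct dwc3 *dwc, u32 words)
{
	/* words * bus width in bits, then shift by 3 for bytes */
	return (words * dwc3_mdwidth(dwc)) >> 3;
}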
bool dwc3_has_imod(struct dwc3 *dwc);
int dwc3_event_buffers_setup(struct dwc3 *dwc);


@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/**
/*
* debug.h - DesignWare USB3 DRD Controller Debug Header
*
* Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/**
/*
* debugfs.c - DesignWare USB3 DRD Controller DebugFS file
*
* Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
@ -638,16 +638,14 @@ static int dwc3_tx_fifo_size_show(struct seq_file *s, void *unused)
struct dwc3_ep *dep = s->private;
struct dwc3 *dwc = dep->dwc;
unsigned long flags;
int mdwidth;
u32 mdwidth;
u32 val;
spin_lock_irqsave(&dwc->lock, flags);
val = dwc3_core_fifo_space(dep, DWC3_TXFIFO);
/* Convert to bytes */
mdwidth = DWC3_MDWIDTH(dwc->hwparams.hwparams0);
if (DWC3_IP_IS(DWC32))
mdwidth += DWC3_GHWPARAMS6_MDWIDTH(dwc->hwparams.hwparams6);
mdwidth = dwc3_mdwidth(dwc);
val *= mdwidth;
val >>= 3;
@ -662,16 +660,14 @@ static int dwc3_rx_fifo_size_show(struct seq_file *s, void *unused)
struct dwc3_ep *dep = s->private;
struct dwc3 *dwc = dep->dwc;
unsigned long flags;
int mdwidth;
u32 mdwidth;
u32 val;
spin_lock_irqsave(&dwc->lock, flags);
val = dwc3_core_fifo_space(dep, DWC3_RXFIFO);
/* Convert to bytes */
mdwidth = DWC3_MDWIDTH(dwc->hwparams.hwparams0);
if (DWC3_IP_IS(DWC32))
mdwidth += DWC3_GHWPARAMS6_MDWIDTH(dwc->hwparams.hwparams6);
mdwidth = dwc3_mdwidth(dwc);
val *= mdwidth;
val >>= 3;


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/**
/*
* dwc3-exynos.c - Samsung Exynos DWC3 Specific Glue layer
*
* Copyright (c) 2012 Samsung Electronics Co., Ltd.


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/**
/*
* dwc3-imx8mp.c - NXP imx8mp Specific Glue layer
*
* Copyright (c) 2020 NXP.


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/**
/*
* dwc3-keystone.c - Keystone Specific Glue layer
*
* Copyright (C) 2010-2013 Texas Instruments Incorporated - https://www.ti.com


@ -172,7 +172,6 @@ static const struct dev_pm_ops dwc3_of_simple_dev_pm_ops = {
static const struct of_device_id of_dwc3_simple_match[] = {
{ .compatible = "rockchip,rk3399-dwc3" },
{ .compatible = "xlnx,zynqmp-dwc3" },
{ .compatible = "cavium,octeon-7130-usb-uctl" },
{ .compatible = "sprd,sc9860-dwc3" },
{ .compatible = "allwinner,sun50i-h6-dwc3" },


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/**
/*
* dwc3-pci.c - PCI Specific glue layer
*
* Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
@ -41,6 +41,7 @@
#define PCI_DEVICE_ID_INTEL_TGPH 0x43ee
#define PCI_DEVICE_ID_INTEL_JSP 0x4dee
#define PCI_DEVICE_ID_INTEL_ADLP 0x51ee
#define PCI_DEVICE_ID_INTEL_ADLM 0x54ee
#define PCI_DEVICE_ID_INTEL_ADLS 0x7ae1
#define PCI_DEVICE_ID_INTEL_TGL 0x9a15
@ -388,6 +389,9 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADLP),
(kernel_ulong_t) &dwc3_pci_intel_swnode, },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADLM),
(kernel_ulong_t) &dwc3_pci_intel_swnode, },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADLS),
(kernel_ulong_t) &dwc3_pci_intel_swnode, },


@ -235,7 +235,7 @@ static int dwc3_qcom_interconnect_disable(struct dwc3_qcom *qcom)
/**
* dwc3_qcom_interconnect_init() - Get interconnect path handles
* and set bandwidhth.
* and set bandwidth.
* @qcom: Pointer to the concerned usb core.
*
*/
@ -647,7 +647,7 @@ static int dwc3_qcom_of_register_core(struct platform_device *pdev)
struct device *dev = &pdev->dev;
int ret;
dwc3_np = of_get_child_by_name(np, "dwc3");
dwc3_np = of_get_compatible_child(np, "snps,dwc3");
if (!dwc3_np) {
dev_err(dev, "failed to find dwc3 core child\n");
return -ENODEV;
@ -774,7 +774,6 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
qcom->qscratch_base = devm_ioremap_resource(dev, parent_res);
if (IS_ERR(qcom->qscratch_base)) {
dev_err(dev, "failed to map qscratch, err=%d\n", ret);
ret = PTR_ERR(qcom->qscratch_base);
goto clk_disable;
}


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0+
/**
/*
* dwc3-st.c Support for dwc3 platform devices on ST Microelectronics platforms
*
* This is a small driver for the dwc3 to provide the glue logic


@ -0,0 +1,337 @@
// SPDX-License-Identifier: GPL-2.0
/*
* dwc3-xilinx.c - Xilinx DWC3 controller specific glue driver
*
* Authors: Manish Narani <manish.narani@xilinx.com>
* Anurag Kumar Vulisha <anurag.kumar.vulisha@xilinx.com>
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/clk.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
#include <linux/of_platform.h>
#include <linux/pm_runtime.h>
#include <linux/reset.h>
#include <linux/of_address.h>
#include <linux/delay.h>
#include <linux/firmware/xlnx-zynqmp.h>
#include <linux/io.h>
#include <linux/phy/phy.h>
/* USB phy reset mask register */
#define XLNX_USB_PHY_RST_EN 0x001C
#define XLNX_PHY_RST_MASK 0x1
/* Xilinx USB 3.0 IP Register */
#define XLNX_USB_TRAFFIC_ROUTE_CONFIG 0x005C
#define XLNX_USB_TRAFFIC_ROUTE_FPD 0x1
/* Versal USB Reset ID */
#define VERSAL_USB_RESET_ID 0xC104036
#define XLNX_USB_FPD_PIPE_CLK 0x7c
#define PIPE_CLK_DESELECT 1
#define PIPE_CLK_SELECT 0
#define XLNX_USB_FPD_POWER_PRSNT 0x80
#define FPD_POWER_PRSNT_OPTION BIT(0)
struct dwc3_xlnx {
int num_clocks;
struct clk_bulk_data *clks;
struct device *dev;
void __iomem *regs;
int (*pltfm_init)(struct dwc3_xlnx *data);
};
static void dwc3_xlnx_mask_phy_rst(struct dwc3_xlnx *priv_data, bool mask)
{
u32 reg;
/*
* Enable or disable ULPI PHY reset from USB Controller.
* This does not actually reset the phy, but just controls
* whether the USB controller can or cannot reset the ULPI PHY.
*/
reg = readl(priv_data->regs + XLNX_USB_PHY_RST_EN);
if (mask)
reg &= ~XLNX_PHY_RST_MASK;
else
reg |= XLNX_PHY_RST_MASK;
writel(reg, priv_data->regs + XLNX_USB_PHY_RST_EN);
}
static int dwc3_xlnx_init_versal(struct dwc3_xlnx *priv_data)
{
struct device *dev = priv_data->dev;
int ret;
dwc3_xlnx_mask_phy_rst(priv_data, false);
/* Assert and De-assert reset */
ret = zynqmp_pm_reset_assert(VERSAL_USB_RESET_ID,
PM_RESET_ACTION_ASSERT);
if (ret < 0) {
dev_err_probe(dev, ret, "failed to assert Reset\n");
return ret;
}
ret = zynqmp_pm_reset_assert(VERSAL_USB_RESET_ID,
PM_RESET_ACTION_RELEASE);
if (ret < 0) {
dev_err_probe(dev, ret, "failed to De-assert Reset\n");
return ret;
}
dwc3_xlnx_mask_phy_rst(priv_data, true);
return 0;
}
static int dwc3_xlnx_init_zynqmp(struct dwc3_xlnx *priv_data)
{
struct device *dev = priv_data->dev;
struct reset_control *crst, *hibrst, *apbrst;
struct phy *usb3_phy;
int ret;
u32 reg;
usb3_phy = devm_phy_get(dev, "usb3-phy");
if (PTR_ERR(usb3_phy) == -EPROBE_DEFER) {
ret = -EPROBE_DEFER;
goto err;
} else if (IS_ERR(usb3_phy)) {
usb3_phy = NULL;
}
crst = devm_reset_control_get_exclusive(dev, "usb_crst");
if (IS_ERR(crst)) {
ret = PTR_ERR(crst);
dev_err_probe(dev, ret,
"failed to get core reset signal\n");
goto err;
}
hibrst = devm_reset_control_get_exclusive(dev, "usb_hibrst");
if (IS_ERR(hibrst)) {
ret = PTR_ERR(hibrst);
dev_err_probe(dev, ret,
"failed to get hibernation reset signal\n");
goto err;
}
apbrst = devm_reset_control_get_exclusive(dev, "usb_apbrst");
if (IS_ERR(apbrst)) {
ret = PTR_ERR(apbrst);
dev_err_probe(dev, ret,
"failed to get APB reset signal\n");
goto err;
}
ret = reset_control_assert(crst);
if (ret < 0) {
dev_err(dev, "Failed to assert core reset\n");
goto err;
}
ret = reset_control_assert(hibrst);
if (ret < 0) {
dev_err(dev, "Failed to assert hibernation reset\n");
goto err;
}
ret = reset_control_assert(apbrst);
if (ret < 0) {
dev_err(dev, "Failed to assert APB reset\n");
goto err;
}
ret = phy_init(usb3_phy);
if (ret < 0) {
phy_exit(usb3_phy);
goto err;
}
ret = reset_control_deassert(apbrst);
if (ret < 0) {
dev_err(dev, "Failed to release APB reset\n");
goto err;
}
/* Set PIPE Power Present signal in FPD Power Present Register */
writel(FPD_POWER_PRSNT_OPTION, priv_data->regs + XLNX_USB_FPD_POWER_PRSNT);
/* Set the PIPE Clock Select bit in FPD PIPE Clock register */
writel(PIPE_CLK_SELECT, priv_data->regs + XLNX_USB_FPD_PIPE_CLK);
ret = reset_control_deassert(crst);
if (ret < 0) {
dev_err(dev, "Failed to release core reset\n");
goto err;
}
ret = reset_control_deassert(hibrst);
if (ret < 0) {
dev_err(dev, "Failed to release hibernation reset\n");
goto err;
}
ret = phy_power_on(usb3_phy);
if (ret < 0) {
phy_exit(usb3_phy);
goto err;
}
/*
* This routes the USB DMA traffic to go through FPD path instead
* of reaching DDR directly. This traffic routing is needed to
* make SMMU and CCI work with USB DMA.
*/
if (of_dma_is_coherent(dev->of_node) || device_iommu_mapped(dev)) {
reg = readl(priv_data->regs + XLNX_USB_TRAFFIC_ROUTE_CONFIG);
reg |= XLNX_USB_TRAFFIC_ROUTE_FPD;
writel(reg, priv_data->regs + XLNX_USB_TRAFFIC_ROUTE_CONFIG);
}
err:
return ret;
}
static const struct of_device_id dwc3_xlnx_of_match[] = {
{
.compatible = "xlnx,zynqmp-dwc3",
.data = &dwc3_xlnx_init_zynqmp,
},
{
.compatible = "xlnx,versal-dwc3",
.data = &dwc3_xlnx_init_versal,
},
{ /* Sentinel */ }
};
MODULE_DEVICE_TABLE(of, dwc3_xlnx_of_match);
static int dwc3_xlnx_probe(struct platform_device *pdev)
{
struct dwc3_xlnx *priv_data;
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
const struct of_device_id *match;
void __iomem *regs;
int ret;
priv_data = devm_kzalloc(dev, sizeof(*priv_data), GFP_KERNEL);
if (!priv_data)
return -ENOMEM;
regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(regs)) {
ret = PTR_ERR(regs);
dev_err_probe(dev, ret, "failed to map registers\n");
return ret;
}
match = of_match_node(dwc3_xlnx_of_match, pdev->dev.of_node);
priv_data->pltfm_init = match->data;
priv_data->regs = regs;
priv_data->dev = dev;
platform_set_drvdata(pdev, priv_data);
ret = devm_clk_bulk_get_all(priv_data->dev, &priv_data->clks);
if (ret < 0)
return ret;
priv_data->num_clocks = ret;
ret = clk_bulk_prepare_enable(priv_data->num_clocks, priv_data->clks);
if (ret)
return ret;
ret = priv_data->pltfm_init(priv_data);
if (ret)
goto err_clk_put;
ret = of_platform_populate(np, NULL, NULL, dev);
if (ret)
goto err_clk_put;
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
pm_suspend_ignore_children(dev, false);
pm_runtime_get_sync(dev);
return 0;
err_clk_put:
clk_bulk_disable_unprepare(priv_data->num_clocks, priv_data->clks);
return ret;
}
static int dwc3_xlnx_remove(struct platform_device *pdev)
{
struct dwc3_xlnx *priv_data = platform_get_drvdata(pdev);
struct device *dev = &pdev->dev;
of_platform_depopulate(dev);
clk_bulk_disable_unprepare(priv_data->num_clocks, priv_data->clks);
priv_data->num_clocks = 0;
pm_runtime_disable(dev);
pm_runtime_put_noidle(dev);
pm_runtime_set_suspended(dev);
return 0;
}
static int __maybe_unused dwc3_xlnx_suspend_common(struct device *dev)
{
struct dwc3_xlnx *priv_data = dev_get_drvdata(dev);
clk_bulk_disable(priv_data->num_clocks, priv_data->clks);
return 0;
}
static int __maybe_unused dwc3_xlnx_resume_common(struct device *dev)
{
struct dwc3_xlnx *priv_data = dev_get_drvdata(dev);
return clk_bulk_enable(priv_data->num_clocks, priv_data->clks);
}
static int __maybe_unused dwc3_xlnx_runtime_idle(struct device *dev)
{
pm_runtime_mark_last_busy(dev);
pm_runtime_autosuspend(dev);
return 0;
}
static UNIVERSAL_DEV_PM_OPS(dwc3_xlnx_dev_pm_ops, dwc3_xlnx_suspend_common,
dwc3_xlnx_resume_common, dwc3_xlnx_runtime_idle);
static struct platform_driver dwc3_xlnx_driver = {
.probe = dwc3_xlnx_probe,
.remove = dwc3_xlnx_remove,
.driver = {
.name = "dwc3-xilinx",
.of_match_table = dwc3_xlnx_of_match,
.pm = &dwc3_xlnx_dev_pm_ops,
},
};
module_platform_driver(dwc3_xlnx_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Xilinx DWC3 controller specific glue driver");
MODULE_AUTHOR("Manish Narani <manish.narani@xilinx.com>");
MODULE_AUTHOR("Anurag Kumar Vulisha <anurag.kumar.vulisha@xilinx.com>");


@ -308,13 +308,12 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
}
if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_STARTTRANSFER) {
int needs_wakeup;
int link_state;
needs_wakeup = (dwc->link_state == DWC3_LINK_STATE_U1 ||
dwc->link_state == DWC3_LINK_STATE_U2 ||
dwc->link_state == DWC3_LINK_STATE_U3);
if (unlikely(needs_wakeup)) {
link_state = dwc3_gadget_get_link_state(dwc);
if (link_state == DWC3_LINK_STATE_U1 ||
link_state == DWC3_LINK_STATE_U2 ||
link_state == DWC3_LINK_STATE_U3) {
ret = __dwc3_gadget_wakeup(dwc);
dev_WARN_ONCE(dwc->dev, ret, "wakeup failed --> %d\n",
ret);
@ -608,12 +607,14 @@ static int dwc3_gadget_set_ep_config(struct dwc3_ep *dep, unsigned int action)
u8 bInterval_m1;
/*
* Valid range for DEPCFG.bInterval_m1 is from 0 to 13, and it
* must be set to 0 when the controller operates in full-speed.
* Valid range for DEPCFG.bInterval_m1 is from 0 to 13.
*
* NOTE: The programming guide incorrectly stated bInterval_m1
* must be set to 0 when operating in full speed. Internally the
* controller does not have this limitation. See DWC_usb3x
* programming guide section 3.2.2.1.
*/
bInterval_m1 = min_t(u8, desc->bInterval - 1, 13);
if (dwc->gadget->speed == USB_SPEED_FULL)
bInterval_m1 = 0;
if (usb_endpoint_type(desc) == USB_ENDPOINT_XFER_INT &&
dwc->gadget->speed == USB_SPEED_FULL)
@ -729,8 +730,16 @@ static int __dwc3_gadget_ep_enable(struct dwc3_ep *dep, unsigned int action)
* All stream eps will reinitiate stream on NoStream
* rejection until we can determine that the host can
* prime after the first transfer.
*
* However, if the controller is capable of
* TXF_FLUSH_BYPASS, IN endpoints will restart the
* stream automatically, without driver initiation.
*/
dep->flags |= DWC3_EP_FORCE_RESTART_STREAM;
if (!dep->direction ||
!(dwc->hwparams.hwparams9 &
DWC3_GHWPARAMS9_DEV_TXF_FLUSH_BYPASS))
dep->flags |= DWC3_EP_FORCE_RESTART_STREAM;
}
}
@ -1402,7 +1411,7 @@ static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep)
dwc3_stop_active_transfer(dep, true, true);
list_for_each_entry_safe(req, tmp, &dep->started_list, list)
dwc3_gadget_move_cancelled_request(req);
dwc3_gadget_move_cancelled_request(req, DWC3_REQUEST_STATUS_DEQUEUED);
/* If ep isn't started, then there's no end transfer pending */
if (!(dep->flags & DWC3_EP_END_TRANSFER_PENDING))
@ -1729,10 +1738,25 @@ static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep)
{
struct dwc3_request *req;
struct dwc3_request *tmp;
struct dwc3 *dwc = dep->dwc;
list_for_each_entry_safe(req, tmp, &dep->cancelled_list, list) {
dwc3_gadget_ep_skip_trbs(dep, req);
dwc3_gadget_giveback(dep, req, -ECONNRESET);
switch (req->status) {
case DWC3_REQUEST_STATUS_DISCONNECTED:
dwc3_gadget_giveback(dep, req, -ESHUTDOWN);
break;
case DWC3_REQUEST_STATUS_DEQUEUED:
dwc3_gadget_giveback(dep, req, -ECONNRESET);
break;
case DWC3_REQUEST_STATUS_STALLED:
dwc3_gadget_giveback(dep, req, -EPIPE);
break;
default:
dev_err(dwc->dev, "request cancelled with wrong reason:%d\n", req->status);
dwc3_gadget_giveback(dep, req, -ECONNRESET);
break;
}
}
}
@ -1776,7 +1800,8 @@ static int dwc3_gadget_ep_dequeue(struct usb_ep *ep,
* cancelled.
*/
list_for_each_entry_safe(r, t, &dep->started_list, list)
dwc3_gadget_move_cancelled_request(r);
dwc3_gadget_move_cancelled_request(r,
DWC3_REQUEST_STATUS_DEQUEUED);
dep->flags &= ~DWC3_EP_WAIT_TRANSFER_COMPLETE;
@ -1848,7 +1873,7 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol)
dwc3_stop_active_transfer(dep, true, true);
list_for_each_entry_safe(req, tmp, &dep->started_list, list)
dwc3_gadget_move_cancelled_request(req);
dwc3_gadget_move_cancelled_request(req, DWC3_REQUEST_STATUS_STALLED);
if (dep->flags & DWC3_EP_END_TRANSFER_PENDING) {
dep->flags |= DWC3_EP_PENDING_CLEAR_STALL;
@ -1973,6 +1998,8 @@ static int __dwc3_gadget_wakeup(struct dwc3 *dwc)
case DWC3_LINK_STATE_RESET:
case DWC3_LINK_STATE_RX_DET: /* in HS, means Early Suspend */
case DWC3_LINK_STATE_U3: /* in HS, means SUSPEND */
case DWC3_LINK_STATE_U2: /* in HS, means Sleep (L1) */
case DWC3_LINK_STATE_U1:
case DWC3_LINK_STATE_RESUME:
break;
default:
@ -2113,9 +2140,6 @@ static void __dwc3_gadget_set_speed(struct dwc3 *dwc)
reg |= DWC3_DCFG_SUPERSPEED;
} else {
switch (speed) {
case USB_SPEED_LOW:
reg |= DWC3_DCFG_LOWSPEED;
break;
case USB_SPEED_FULL:
reg |= DWC3_DCFG_FULLSPEED;
break;
@ -2340,9 +2364,7 @@ static void dwc3_gadget_setup_nump(struct dwc3 *dwc)
u32 reg;
ram2_depth = DWC3_GHWPARAMS7_RAM2_DEPTH(dwc->hwparams.hwparams7);
mdwidth = DWC3_GHWPARAMS0_MDWIDTH(dwc->hwparams.hwparams0);
if (DWC3_IP_IS(DWC32))
mdwidth += DWC3_GHWPARAMS6_MDWIDTH(dwc->hwparams.hwparams6);
mdwidth = dwc3_mdwidth(dwc);
nump = ((ram2_depth * mdwidth / 8) - 24 - 16) / 1024;
nump = min_t(u32, nump, 16);
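/*
 * The dwc3_mdwidth() helper used here (and in the endpoint-init hunks
 * below) presumably consolidates the open-coded pattern it replaces,
 * along these lines (a sketch, assuming the helper mirrors the old code):
 *
 *	static inline u32 dwc3_mdwidth(struct dwc3 *dwc)
 *	{
 *		u32 mdwidth = DWC3_GHWPARAMS0_MDWIDTH(dwc->hwparams.hwparams0);
 *
 *		if (DWC3_IP_IS(DWC32))
 *			mdwidth += DWC3_GHWPARAMS6_MDWIDTH(dwc->hwparams.hwparams6);
 *
 *		return mdwidth;
 *	}
 */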
@ -2388,6 +2410,17 @@ static int __dwc3_gadget_start(struct dwc3 *dwc)
dwc3_gadget_setup_nump(dwc);
/*
* Currently the controller handles a single stream only, so ignore
* the Packet Pending bit for stream selection and don't search for
* another stream if the host sends a Data Packet with PP=0 (for the
* OUT direction) or an ACK with NumP=0 and PP=0 (for the IN
* direction). This slightly improves stream performance.
*/
reg = dwc3_readl(dwc->regs, DWC3_DCFG);
reg |= DWC3_DCFG_IGNSTRMPP;
dwc3_writel(dwc->regs, DWC3_DCFG, reg);
/* Start with SuperSpeed Default */
dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512);
@ -2531,11 +2564,19 @@ static void dwc3_gadget_set_ssp_rate(struct usb_gadget *g,
static int dwc3_gadget_vbus_draw(struct usb_gadget *g, unsigned int mA)
{
struct dwc3 *dwc = gadget_to_dwc(g);
union power_supply_propval val = {0};
int ret;
if (dwc->usb2_phy)
return usb_phy_set_power(dwc->usb2_phy, mA);
return 0;
if (!dwc->usb_psy)
return -EOPNOTSUPP;
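/* power-supply current properties are expressed in microamps, hence the factor of 1000 */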
val.intval = 1000 * mA;
ret = power_supply_set_property(dwc->usb_psy, POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT, &val);
return ret;
}
static const struct usb_gadget_ops dwc3_gadget_ops = {
@ -2571,12 +2612,10 @@ static int dwc3_gadget_init_control_endpoint(struct dwc3_ep *dep)
static int dwc3_gadget_init_in_endpoint(struct dwc3_ep *dep)
{
struct dwc3 *dwc = dep->dwc;
int mdwidth;
u32 mdwidth;
int size;
mdwidth = DWC3_MDWIDTH(dwc->hwparams.hwparams0);
if (DWC3_IP_IS(DWC32))
mdwidth += DWC3_GHWPARAMS6_MDWIDTH(dwc->hwparams.hwparams6);
mdwidth = dwc3_mdwidth(dwc);
/* MDWIDTH is represented in bits, we need it in bytes */
mdwidth /= 8;
@ -2618,12 +2657,10 @@ static int dwc3_gadget_init_in_endpoint(struct dwc3_ep *dep)
static int dwc3_gadget_init_out_endpoint(struct dwc3_ep *dep)
{
struct dwc3 *dwc = dep->dwc;
int mdwidth;
u32 mdwidth;
int size;
mdwidth = DWC3_MDWIDTH(dwc->hwparams.hwparams0);
if (DWC3_IP_IS(DWC32))
mdwidth += DWC3_GHWPARAMS6_MDWIDTH(dwc->hwparams.hwparams6);
mdwidth = dwc3_mdwidth(dwc);
/* MDWIDTH is represented in bits, convert to bytes */
mdwidth /= 8;
@ -2913,6 +2950,11 @@ static void dwc3_gadget_ep_cleanup_completed_requests(struct dwc3_ep *dep,
static bool dwc3_gadget_ep_should_continue(struct dwc3_ep *dep)
{
struct dwc3_request *req;
struct dwc3 *dwc = dep->dwc;
if (!dep->endpoint.desc || !dwc->pullups_connected ||
!dwc->connected)
return false;
if (!list_empty(&dep->pending_list))
return true;
@ -3322,6 +3364,15 @@ static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
{
u32 reg;
/*
* Ideally, dwc3_reset_gadget() would trigger the function
* drivers to stop any active transfers through ep disable.
* However, for functions which defer ep disable, such as mass
* storage, we need to rely on the call here to stop active
* transfers and to avoid allowing new requests to be queued.
*/
dwc->connected = false;
/*
* WORKAROUND: DWC3 revisions <1.88a have an issue which
* would cause a missing Disconnect Event if there's a
@ -3448,11 +3499,6 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
dwc->gadget->ep0->maxpacket = 64;
dwc->gadget->speed = USB_SPEED_FULL;
break;
case DWC3_DSTS_LOWSPEED:
dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(8);
dwc->gadget->ep0->maxpacket = 8;
dwc->gadget->speed = USB_SPEED_LOW;
break;
}
dwc->eps[1]->endpoint.maxpacket = dwc->gadget->ep0->maxpacket;
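/*
 * Note: the controller does not support low-speed operation in device
 * mode, so the DCFG low-speed setting removed above and this
 * connect-done case were unreachable; presumably that is why both are
 * dropped.
 */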
@ -3460,6 +3506,7 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
/* Enable USB2 LPM Capability */
if (!DWC3_VER_IS_WITHIN(DWC3, ANY, 194A) &&
!dwc->usb2_gadget_lpm_disable &&
(speed != DWC3_DSTS_SUPERSPEED) &&
(speed != DWC3_DSTS_SUPERSPEED_PLUS)) {
reg = dwc3_readl(dwc->regs, DWC3_DCFG);
@ -3486,6 +3533,12 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
dwc3_gadget_dctl_write_safe(dwc, reg);
} else {
if (dwc->usb2_gadget_lpm_disable) {
reg = dwc3_readl(dwc->regs, DWC3_DCFG);
reg &= ~DWC3_DCFG_LPM_CAP;
dwc3_writel(dwc->regs, DWC3_DCFG, reg);
}
reg = dwc3_readl(dwc->regs, DWC3_DCTL);
reg &= ~DWC3_DCTL_HIRD_THRES_MASK;
dwc3_gadget_dctl_write_safe(dwc, reg);
@ -3934,7 +3987,7 @@ int dwc3_gadget_init(struct dwc3 *dwc)
dwc->gadget->ssp_rate = USB_SSP_GEN_UNKNOWN;
dwc->gadget->sg_supported = true;
dwc->gadget->name = "dwc3-gadget";
dwc->gadget->lpm_capable = true;
dwc->gadget->lpm_capable = !dwc->usb2_gadget_lpm_disable;
/*
* FIXME We might be setting max_speed to <SUPER, however versions

View File

@ -90,15 +90,17 @@ static inline void dwc3_gadget_move_started_request(struct dwc3_request *req)
/**
* dwc3_gadget_move_cancelled_request - move @req to the cancelled_list
* @req: the request to be moved
* @reason: cancellation reason for the dwc3 request
*
* Caller should take care of locking. This function will move @req from its
* current list to the endpoint's cancelled_list.
*/
static inline void dwc3_gadget_move_cancelled_request(struct dwc3_request *req)
static inline void dwc3_gadget_move_cancelled_request(struct dwc3_request *req,
unsigned int reason)
{
struct dwc3_ep *dep = req->dep;
req->status = DWC3_REQUEST_STATUS_CANCELLED;
req->status = reason;
list_move_tail(&req->list, &dep->cancelled_list);
}

View File

@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/**
/*
* io.h - DesignWare USB3 DRD IO Header
*
* Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com

View File

@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/**
/*
* trace.c - DesignWare USB3 DRD Controller Trace Support
*
* Copyright (C) 2014 Texas Instruments Incorporated - https://www.ti.com

View File

@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/**
/*
* trace.h - DesignWare USB3 DRD Controller Trace Support
*
* Copyright (C) 2014 Texas Instruments Incorporated - https://www.ti.com
@ -32,8 +32,10 @@ DECLARE_EVENT_CLASS(dwc3_log_io,
__entry->offset = offset;
__entry->value = value;
),
TP_printk("addr %p value %08x", __entry->base + __entry->offset,
__entry->value)
TP_printk("addr %p offset %04x value %08x",
__entry->base + __entry->offset,
__entry->offset,
__entry->value)
);
DEFINE_EVENT(dwc3_log_io, dwc3_readl,

View File

@ -194,9 +194,13 @@ EXPORT_SYMBOL_GPL(usb_assign_descriptors);
void usb_free_all_descriptors(struct usb_function *f)
{
usb_free_descriptors(f->fs_descriptors);
f->fs_descriptors = NULL;
usb_free_descriptors(f->hs_descriptors);
f->hs_descriptors = NULL;
usb_free_descriptors(f->ss_descriptors);
f->ss_descriptors = NULL;
usb_free_descriptors(f->ssp_descriptors);
f->ssp_descriptors = NULL;
}
EXPORT_SYMBOL_GPL(usb_free_all_descriptors);
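/*
 * Clearing each pointer after freeing lets a function be unbound and
 * bound again without double-freeing stale descriptors; presumably the
 * motivation for this change.
 */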

View File

@ -2640,6 +2640,7 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
do { /* lang_count > 0 so we can use do-while */
unsigned needed = needed_count;
u32 str_per_lang = str_count;
if (len < 3)
goto error_free;
@ -2675,7 +2676,7 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
data += length + 1;
len -= length + 1;
} while (--str_count);
} while (--str_per_lang);
s->id = 0; /* terminator */
s->s = NULL;
@ -3826,14 +3827,9 @@ static char *ffs_prepare_buffer(const char __user *buf, size_t len)
if (!len)
return NULL;
data = kmalloc(len, GFP_KERNEL);
if (!data)
return ERR_PTR(-ENOMEM);
if (copy_from_user(data, buf, len)) {
kfree(data);
return ERR_PTR(-EFAULT);
}
data = memdup_user(buf, len);
if (IS_ERR(data))
return ERR_PTR(PTR_ERR(data));
pr_vdebug("Buffer from user space:\n");
ffs_dump_mem("", data, len);

View File

@ -351,8 +351,6 @@ static inline struct fsg_dev *fsg_from_func(struct usb_function *f)
return container_of(f, struct fsg_dev, function);
}
typedef void (*fsg_routine_t)(struct fsg_dev *);
static int exception_in_progress(struct fsg_common *common)
{
return common->state > FSG_STATE_NORMAL;

View File

@ -825,7 +825,7 @@ set_printer_interface(struct printer_dev *dev)
result = usb_ep_enable(dev->out_ep);
if (result != 0) {
DBG(dev, "enable %s --> %d\n", dev->in_ep->name, result);
DBG(dev, "enable %s --> %d\n", dev->out_ep->name, result);
goto done;
}

View File

@ -19,6 +19,12 @@
#include "u_audio.h"
#include "u_uac1.h"
/* UAC1 spec: 3.7.2.3 Audio Channel Cluster Format */
#define UAC1_CHANNEL_MASK 0x0FFF
#define EPIN_EN(_opts) ((_opts)->p_chmask != 0)
#define EPOUT_EN(_opts) ((_opts)->c_chmask != 0)
struct f_uac1 {
struct g_audio g_audio;
u8 ac_intf, as_in_intf, as_out_intf;
@ -30,6 +36,11 @@ static inline struct f_uac1 *func_to_uac1(struct usb_function *f)
return container_of(f, struct f_uac1, g_audio.func);
}
static inline struct f_uac1_opts *g_audio_to_uac1_opts(struct g_audio *audio)
{
return container_of(audio->func.fi, struct f_uac1_opts, func_inst);
}
/*
* DESCRIPTORS ... most are static, but strings and full
* configuration descriptors are built on demand.
@ -42,11 +53,6 @@ static inline struct f_uac1 *func_to_uac1(struct usb_function *f)
* USB-OUT -> IT_1 -> OT_2 -> ALSA_Capture
* ALSA_Playback -> IT_3 -> OT_4 -> USB-IN
*/
#define F_AUDIO_AC_INTERFACE 0
#define F_AUDIO_AS_OUT_INTERFACE 1
#define F_AUDIO_AS_IN_INTERFACE 2
/* Number of streaming interfaces */
#define F_AUDIO_NUM_INTERFACES 2
/* B.3.1 Standard AC Interface Descriptor */
static struct usb_interface_descriptor ac_interface_desc = {
@ -57,73 +63,47 @@ static struct usb_interface_descriptor ac_interface_desc = {
.bInterfaceSubClass = USB_SUBCLASS_AUDIOCONTROL,
};
/*
* The number of AudioStreaming and MIDIStreaming interfaces
* in the Audio Interface Collection
*/
DECLARE_UAC_AC_HEADER_DESCRIPTOR(2);
#define UAC_DT_AC_HEADER_LENGTH UAC_DT_AC_HEADER_SIZE(F_AUDIO_NUM_INTERFACES)
/* 2 input terminals and 2 output terminals */
#define UAC_DT_TOTAL_LENGTH (UAC_DT_AC_HEADER_LENGTH \
+ 2*UAC_DT_INPUT_TERMINAL_SIZE + 2*UAC_DT_OUTPUT_TERMINAL_SIZE)
/* B.3.2 Class-Specific AC Interface Descriptor */
static struct uac1_ac_header_descriptor_2 ac_header_desc = {
.bLength = UAC_DT_AC_HEADER_LENGTH,
.bDescriptorType = USB_DT_CS_INTERFACE,
.bDescriptorSubtype = UAC_HEADER,
.bcdADC = cpu_to_le16(0x0100),
.wTotalLength = cpu_to_le16(UAC_DT_TOTAL_LENGTH),
.bInCollection = F_AUDIO_NUM_INTERFACES,
.baInterfaceNr = {
/* Interface number of the AudioStream interfaces */
[0] = 1,
[1] = 2,
}
};
static struct uac1_ac_header_descriptor *ac_header_desc;
#define USB_OUT_IT_ID 1
static struct uac_input_terminal_descriptor usb_out_it_desc = {
.bLength = UAC_DT_INPUT_TERMINAL_SIZE,
.bDescriptorType = USB_DT_CS_INTERFACE,
.bDescriptorSubtype = UAC_INPUT_TERMINAL,
.bTerminalID = USB_OUT_IT_ID,
/* .bTerminalID = DYNAMIC */
.wTerminalType = cpu_to_le16(UAC_TERMINAL_STREAMING),
.bAssocTerminal = 0,
.wChannelConfig = cpu_to_le16(0x3),
};
#define IO_OUT_OT_ID 2
static struct uac1_output_terminal_descriptor io_out_ot_desc = {
.bLength = UAC_DT_OUTPUT_TERMINAL_SIZE,
.bDescriptorType = USB_DT_CS_INTERFACE,
.bDescriptorSubtype = UAC_OUTPUT_TERMINAL,
.bTerminalID = IO_OUT_OT_ID,
/* .bTerminalID = DYNAMIC */
.wTerminalType = cpu_to_le16(UAC_OUTPUT_TERMINAL_SPEAKER),
.bAssocTerminal = 0,
.bSourceID = USB_OUT_IT_ID,
/* .bSourceID = DYNAMIC */
};
#define IO_IN_IT_ID 3
static struct uac_input_terminal_descriptor io_in_it_desc = {
.bLength = UAC_DT_INPUT_TERMINAL_SIZE,
.bDescriptorType = USB_DT_CS_INTERFACE,
.bDescriptorSubtype = UAC_INPUT_TERMINAL,
.bTerminalID = IO_IN_IT_ID,
/* .bTerminalID = DYNAMIC */
.wTerminalType = cpu_to_le16(UAC_INPUT_TERMINAL_MICROPHONE),
.bAssocTerminal = 0,
.wChannelConfig = cpu_to_le16(0x3),
};
#define USB_IN_OT_ID 4
static struct uac1_output_terminal_descriptor usb_in_ot_desc = {
.bLength = UAC_DT_OUTPUT_TERMINAL_SIZE,
.bDescriptorType = USB_DT_CS_INTERFACE,
.bDescriptorSubtype = UAC_OUTPUT_TERMINAL,
.bTerminalID = USB_IN_OT_ID,
/* .bTerminalID = DYNAMIC */
.wTerminalType = cpu_to_le16(UAC_TERMINAL_STREAMING),
.bAssocTerminal = 0,
.bSourceID = IO_IN_IT_ID,
/* .bSourceID = DYNAMIC */
};
/* B.4.1 Standard AS Interface Descriptor */
@ -168,7 +148,7 @@ static struct uac1_as_header_descriptor as_out_header_desc = {
.bLength = UAC_DT_AS_HEADER_SIZE,
.bDescriptorType = USB_DT_CS_INTERFACE,
.bDescriptorSubtype = UAC_AS_GENERAL,
.bTerminalLink = USB_OUT_IT_ID,
/* .bTerminalLink = DYNAMIC */
.bDelay = 1,
.wFormatTag = cpu_to_le16(UAC_FORMAT_TYPE_I_PCM),
};
@ -177,7 +157,7 @@ static struct uac1_as_header_descriptor as_in_header_desc = {
.bLength = UAC_DT_AS_HEADER_SIZE,
.bDescriptorType = USB_DT_CS_INTERFACE,
.bDescriptorSubtype = UAC_AS_GENERAL,
.bTerminalLink = USB_IN_OT_ID,
/* .bTerminalLink = DYNAMIC */
.bDelay = 1,
.wFormatTag = cpu_to_le16(UAC_FORMAT_TYPE_I_PCM),
};
@ -505,11 +485,144 @@ static void f_audio_disable(struct usb_function *f)
/*-------------------------------------------------------------------------*/
static struct
uac1_ac_header_descriptor *build_ac_header_desc(struct f_uac1_opts *opts)
{
struct uac1_ac_header_descriptor *ac_desc;
int ac_header_desc_size;
int num_ifaces = 0;
if (EPOUT_EN(opts))
num_ifaces++;
if (EPIN_EN(opts))
num_ifaces++;
ac_header_desc_size = UAC_DT_AC_HEADER_SIZE(num_ifaces);
ac_desc = kzalloc(ac_header_desc_size, GFP_KERNEL);
if (!ac_desc)
return NULL;
ac_desc->bLength = ac_header_desc_size;
ac_desc->bDescriptorType = USB_DT_CS_INTERFACE;
ac_desc->bDescriptorSubtype = UAC_HEADER;
ac_desc->bcdADC = cpu_to_le16(0x0100);
ac_desc->bInCollection = num_ifaces;
/* wTotalLength and baInterfaceNr will be defined later */
return ac_desc;
}
/* Use macro to overcome line length limitation */
#define USBDHDR(p) (struct usb_descriptor_header *)(p)
static void setup_descriptor(struct f_uac1_opts *opts)
{
/* patch descriptors */
int i = 1; /* ID's start with 1 */
if (EPOUT_EN(opts))
usb_out_it_desc.bTerminalID = i++;
if (EPIN_EN(opts))
io_in_it_desc.bTerminalID = i++;
if (EPOUT_EN(opts))
io_out_ot_desc.bTerminalID = i++;
if (EPIN_EN(opts))
usb_in_ot_desc.bTerminalID = i++;
usb_in_ot_desc.bSourceID = io_in_it_desc.bTerminalID;
io_out_ot_desc.bSourceID = usb_out_it_desc.bTerminalID;
as_out_header_desc.bTerminalLink = usb_out_it_desc.bTerminalID;
as_in_header_desc.bTerminalLink = usb_in_ot_desc.bTerminalID;
ac_header_desc->wTotalLength = cpu_to_le16(ac_header_desc->bLength);
if (EPIN_EN(opts)) {
u16 len = le16_to_cpu(ac_header_desc->wTotalLength);
len += sizeof(usb_in_ot_desc);
len += sizeof(io_in_it_desc);
ac_header_desc->wTotalLength = cpu_to_le16(len);
}
if (EPOUT_EN(opts)) {
u16 len = le16_to_cpu(ac_header_desc->wTotalLength);
len += sizeof(usb_out_it_desc);
len += sizeof(io_out_ot_desc);
ac_header_desc->wTotalLength = cpu_to_le16(len);
}
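/*
 * Worked example with both directions enabled: bLength is
 * UAC_DT_AC_HEADER_SIZE(2) = 8 + 2 = 10 bytes, and each direction adds
 * one 12-byte input terminal and one 9-byte output terminal, so
 * wTotalLength = 10 + (12 + 9) + (12 + 9) = 52 bytes.
 */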
i = 0;
f_audio_desc[i++] = USBDHDR(&ac_interface_desc);
f_audio_desc[i++] = USBDHDR(ac_header_desc);
if (EPOUT_EN(opts)) {
f_audio_desc[i++] = USBDHDR(&usb_out_it_desc);
f_audio_desc[i++] = USBDHDR(&io_out_ot_desc);
}
if (EPIN_EN(opts)) {
f_audio_desc[i++] = USBDHDR(&io_in_it_desc);
f_audio_desc[i++] = USBDHDR(&usb_in_ot_desc);
}
if (EPOUT_EN(opts)) {
f_audio_desc[i++] = USBDHDR(&as_out_interface_alt_0_desc);
f_audio_desc[i++] = USBDHDR(&as_out_interface_alt_1_desc);
f_audio_desc[i++] = USBDHDR(&as_out_header_desc);
f_audio_desc[i++] = USBDHDR(&as_out_type_i_desc);
f_audio_desc[i++] = USBDHDR(&as_out_ep_desc);
f_audio_desc[i++] = USBDHDR(&as_iso_out_desc);
}
if (EPIN_EN(opts)) {
f_audio_desc[i++] = USBDHDR(&as_in_interface_alt_0_desc);
f_audio_desc[i++] = USBDHDR(&as_in_interface_alt_1_desc);
f_audio_desc[i++] = USBDHDR(&as_in_header_desc);
f_audio_desc[i++] = USBDHDR(&as_in_type_i_desc);
f_audio_desc[i++] = USBDHDR(&as_in_ep_desc);
f_audio_desc[i++] = USBDHDR(&as_iso_in_desc);
}
f_audio_desc[i] = NULL;
}
static int f_audio_validate_opts(struct g_audio *audio, struct device *dev)
{
struct f_uac1_opts *opts = g_audio_to_uac1_opts(audio);
if (!opts->p_chmask && !opts->c_chmask) {
dev_err(dev, "Error: no playback and capture channels\n");
return -EINVAL;
} else if (opts->p_chmask & ~UAC1_CHANNEL_MASK) {
dev_err(dev, "Error: unsupported playback channels mask\n");
return -EINVAL;
} else if (opts->c_chmask & ~UAC1_CHANNEL_MASK) {
dev_err(dev, "Error: unsupported capture channels mask\n");
return -EINVAL;
} else if ((opts->p_ssize < 1) || (opts->p_ssize > 4)) {
dev_err(dev, "Error: incorrect playback sample size\n");
return -EINVAL;
} else if ((opts->c_ssize < 1) || (opts->c_ssize > 4)) {
dev_err(dev, "Error: incorrect capture sample size\n");
return -EINVAL;
} else if (!opts->p_srate) {
dev_err(dev, "Error: incorrect playback sampling rate\n");
return -EINVAL;
} else if (!opts->c_srate) {
dev_err(dev, "Error: incorrect capture sampling rate\n");
return -EINVAL;
}
return 0;
}
/* audio function driver setup/binding */
static int f_audio_bind(struct usb_configuration *c, struct usb_function *f)
{
struct usb_composite_dev *cdev = c->cdev;
struct usb_gadget *gadget = cdev->gadget;
struct device *dev = &gadget->dev;
struct f_uac1 *uac1 = func_to_uac1(f);
struct g_audio *audio = func_to_g_audio(f);
struct f_uac1_opts *audio_opts;
@ -517,13 +630,23 @@ static int f_audio_bind(struct usb_configuration *c, struct usb_function *f)
struct usb_string *us;
u8 *sam_freq;
int rate;
int ba_iface_id;
int status;
status = f_audio_validate_opts(audio, dev);
if (status)
return status;
audio_opts = container_of(f->fi, struct f_uac1_opts, func_inst);
us = usb_gstrings_attach(cdev, uac1_strings, ARRAY_SIZE(strings_uac1));
if (IS_ERR(us))
return PTR_ERR(us);
ac_header_desc = build_ac_header_desc(audio_opts);
if (!ac_header_desc)
return -ENOMEM;
ac_interface_desc.iInterface = us[STR_AC_IF].id;
usb_out_it_desc.iTerminal = us[STR_USB_OUT_IT].id;
usb_out_it_desc.iChannelNames = us[STR_USB_OUT_IT_CH_NAMES].id;
@ -564,40 +687,52 @@ static int f_audio_bind(struct usb_configuration *c, struct usb_function *f)
uac1->ac_intf = status;
uac1->ac_alt = 0;
status = usb_interface_id(c, f);
if (status < 0)
goto fail;
as_out_interface_alt_0_desc.bInterfaceNumber = status;
as_out_interface_alt_1_desc.bInterfaceNumber = status;
ac_header_desc.baInterfaceNr[0] = status;
uac1->as_out_intf = status;
uac1->as_out_alt = 0;
ba_iface_id = 0;
status = usb_interface_id(c, f);
if (status < 0)
goto fail;
as_in_interface_alt_0_desc.bInterfaceNumber = status;
as_in_interface_alt_1_desc.bInterfaceNumber = status;
ac_header_desc.baInterfaceNr[1] = status;
uac1->as_in_intf = status;
uac1->as_in_alt = 0;
if (EPOUT_EN(audio_opts)) {
status = usb_interface_id(c, f);
if (status < 0)
goto fail;
as_out_interface_alt_0_desc.bInterfaceNumber = status;
as_out_interface_alt_1_desc.bInterfaceNumber = status;
ac_header_desc->baInterfaceNr[ba_iface_id++] = status;
uac1->as_out_intf = status;
uac1->as_out_alt = 0;
}
if (EPIN_EN(audio_opts)) {
status = usb_interface_id(c, f);
if (status < 0)
goto fail;
as_in_interface_alt_0_desc.bInterfaceNumber = status;
as_in_interface_alt_1_desc.bInterfaceNumber = status;
ac_header_desc->baInterfaceNr[ba_iface_id++] = status;
uac1->as_in_intf = status;
uac1->as_in_alt = 0;
}
audio->gadget = gadget;
status = -ENODEV;
/* allocate instance-specific endpoints */
ep = usb_ep_autoconfig(cdev->gadget, &as_out_ep_desc);
if (!ep)
goto fail;
audio->out_ep = ep;
audio->out_ep->desc = &as_out_ep_desc;
if (EPOUT_EN(audio_opts)) {
ep = usb_ep_autoconfig(cdev->gadget, &as_out_ep_desc);
if (!ep)
goto fail;
audio->out_ep = ep;
audio->out_ep->desc = &as_out_ep_desc;
}
ep = usb_ep_autoconfig(cdev->gadget, &as_in_ep_desc);
if (!ep)
goto fail;
audio->in_ep = ep;
audio->in_ep->desc = &as_in_ep_desc;
if (EPIN_EN(audio_opts)) {
ep = usb_ep_autoconfig(cdev->gadget, &as_in_ep_desc);
if (!ep)
goto fail;
audio->in_ep = ep;
audio->in_ep->desc = &as_in_ep_desc;
}
setup_descriptor(audio_opts);
/* copy descriptors, and track endpoint copies */
status = usb_assign_descriptors(f, f_audio_desc, f_audio_desc, NULL,
@ -624,6 +759,8 @@ static int f_audio_bind(struct usb_configuration *c, struct usb_function *f)
err_card_register:
usb_free_all_descriptors(f);
fail:
kfree(ac_header_desc);
ac_header_desc = NULL;
return status;
}
@ -766,6 +903,9 @@ static void f_audio_unbind(struct usb_configuration *c, struct usb_function *f)
g_audio_cleanup(audio);
usb_free_all_descriptors(f);
kfree(ac_header_desc);
ac_header_desc = NULL;
audio->gadget = NULL;
}

View File

@ -14,6 +14,9 @@
#include "u_audio.h"
#include "u_uac2.h"
/* UAC2 spec: 4.1 Audio Channel Cluster Descriptor */
#define UAC2_CHANNEL_MASK 0x07FFFFFF
/*
* The driver implements a simple UAC_2 topology.
* USB-OUT -> IT_1 -> OT_3 -> ALSA_Capture
@ -284,6 +287,24 @@ static struct usb_endpoint_descriptor hs_epout_desc = {
.bInterval = 4,
};
static struct usb_endpoint_descriptor ss_epout_desc = {
.bLength = USB_DT_ENDPOINT_SIZE,
.bDescriptorType = USB_DT_ENDPOINT,
.bEndpointAddress = USB_DIR_OUT,
.bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC,
/* .wMaxPacketSize = DYNAMIC */
.bInterval = 4,
};
static struct usb_ss_ep_comp_descriptor ss_epout_desc_comp = {
.bLength = sizeof(ss_epout_desc_comp),
.bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
.bMaxBurst = 0,
.bmAttributes = 0,
/* wBytesPerInterval = DYNAMIC */
};
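/*
 * USB 3.x requires a SuperSpeed endpoint companion descriptor
 * immediately after each endpoint descriptor (USB 3.2 spec, 9.6.7);
 * hence the new ss_ep*_desc_comp entries.
 */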
/* CS AS ISO OUT Endpoint */
static struct uac2_iso_endpoint_descriptor as_iso_out_desc = {
.bLength = sizeof as_iso_out_desc,
@ -361,6 +382,24 @@ static struct usb_endpoint_descriptor hs_epin_desc = {
.bInterval = 4,
};
static struct usb_endpoint_descriptor ss_epin_desc = {
.bLength = USB_DT_ENDPOINT_SIZE,
.bDescriptorType = USB_DT_ENDPOINT,
.bEndpointAddress = USB_DIR_IN,
.bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC,
/* .wMaxPacketSize = DYNAMIC */
.bInterval = 4,
};
static struct usb_ss_ep_comp_descriptor ss_epin_desc_comp = {
.bLength = sizeof(ss_epin_desc_comp),
.bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
.bMaxBurst = 0,
.bmAttributes = 0,
/* wBytesPerInterval = DYNAMIC */
};
/* CS AS ISO IN Endpoint */
static struct uac2_iso_endpoint_descriptor as_iso_in_desc = {
.bLength = sizeof as_iso_in_desc,
@ -433,6 +472,38 @@ static struct usb_descriptor_header *hs_audio_desc[] = {
NULL,
};
static struct usb_descriptor_header *ss_audio_desc[] = {
(struct usb_descriptor_header *)&iad_desc,
(struct usb_descriptor_header *)&std_ac_if_desc,
(struct usb_descriptor_header *)&ac_hdr_desc,
(struct usb_descriptor_header *)&in_clk_src_desc,
(struct usb_descriptor_header *)&out_clk_src_desc,
(struct usb_descriptor_header *)&usb_out_it_desc,
(struct usb_descriptor_header *)&io_in_it_desc,
(struct usb_descriptor_header *)&usb_in_ot_desc,
(struct usb_descriptor_header *)&io_out_ot_desc,
(struct usb_descriptor_header *)&std_as_out_if0_desc,
(struct usb_descriptor_header *)&std_as_out_if1_desc,
(struct usb_descriptor_header *)&as_out_hdr_desc,
(struct usb_descriptor_header *)&as_out_fmt1_desc,
(struct usb_descriptor_header *)&ss_epout_desc,
(struct usb_descriptor_header *)&ss_epout_desc_comp,
(struct usb_descriptor_header *)&as_iso_out_desc,
(struct usb_descriptor_header *)&std_as_in_if0_desc,
(struct usb_descriptor_header *)&std_as_in_if1_desc,
(struct usb_descriptor_header *)&as_in_hdr_desc,
(struct usb_descriptor_header *)&as_in_fmt1_desc,
(struct usb_descriptor_header *)&ss_epin_desc,
(struct usb_descriptor_header *)&ss_epin_desc_comp,
(struct usb_descriptor_header *)&as_iso_in_desc,
NULL,
};
struct cntrl_cur_lay3 {
__le32 dCUR;
};
@ -459,6 +530,7 @@ static int set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts,
break;
case USB_SPEED_HIGH:
case USB_SPEED_SUPER:
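/* HS and SS use 125 us (micro)frames: 8000 service intervals per second */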
max_size_ep = 1024;
factor = 8000;
break;
@ -488,6 +560,72 @@ static int set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts,
/* Use macro to overcome line length limitation */
#define USBDHDR(p) (struct usb_descriptor_header *)(p)
static void setup_headers(struct f_uac2_opts *opts,
struct usb_descriptor_header **headers,
enum usb_device_speed speed)
{
struct usb_ss_ep_comp_descriptor *epout_desc_comp = NULL;
struct usb_ss_ep_comp_descriptor *epin_desc_comp = NULL;
struct usb_endpoint_descriptor *epout_desc;
struct usb_endpoint_descriptor *epin_desc;
int i;
switch (speed) {
case USB_SPEED_FULL:
epout_desc = &fs_epout_desc;
epin_desc = &fs_epin_desc;
break;
case USB_SPEED_HIGH:
epout_desc = &hs_epout_desc;
epin_desc = &hs_epin_desc;
break;
default:
epout_desc = &ss_epout_desc;
epin_desc = &ss_epin_desc;
epout_desc_comp = &ss_epout_desc_comp;
epin_desc_comp = &ss_epin_desc_comp;
}
i = 0;
headers[i++] = USBDHDR(&iad_desc);
headers[i++] = USBDHDR(&std_ac_if_desc);
headers[i++] = USBDHDR(&ac_hdr_desc);
if (EPIN_EN(opts))
headers[i++] = USBDHDR(&in_clk_src_desc);
if (EPOUT_EN(opts)) {
headers[i++] = USBDHDR(&out_clk_src_desc);
headers[i++] = USBDHDR(&usb_out_it_desc);
}
if (EPIN_EN(opts)) {
headers[i++] = USBDHDR(&io_in_it_desc);
headers[i++] = USBDHDR(&usb_in_ot_desc);
}
if (EPOUT_EN(opts)) {
headers[i++] = USBDHDR(&io_out_ot_desc);
headers[i++] = USBDHDR(&std_as_out_if0_desc);
headers[i++] = USBDHDR(&std_as_out_if1_desc);
headers[i++] = USBDHDR(&as_out_hdr_desc);
headers[i++] = USBDHDR(&as_out_fmt1_desc);
headers[i++] = USBDHDR(epout_desc);
if (epout_desc_comp)
headers[i++] = USBDHDR(epout_desc_comp);
headers[i++] = USBDHDR(&as_iso_out_desc);
}
if (EPIN_EN(opts)) {
headers[i++] = USBDHDR(&std_as_in_if0_desc);
headers[i++] = USBDHDR(&std_as_in_if1_desc);
headers[i++] = USBDHDR(&as_in_hdr_desc);
headers[i++] = USBDHDR(&as_in_fmt1_desc);
headers[i++] = USBDHDR(epin_desc);
if (epin_desc_comp)
headers[i++] = USBDHDR(epin_desc_comp);
headers[i++] = USBDHDR(&as_iso_in_desc);
}
headers[i] = NULL;
}
static void setup_descriptor(struct f_uac2_opts *opts)
{
/* patch descriptors */
@ -537,71 +675,39 @@ static void setup_descriptor(struct f_uac2_opts *opts)
iad_desc.bInterfaceCount++;
}
i = 0;
fs_audio_desc[i++] = USBDHDR(&iad_desc);
fs_audio_desc[i++] = USBDHDR(&std_ac_if_desc);
fs_audio_desc[i++] = USBDHDR(&ac_hdr_desc);
if (EPIN_EN(opts))
fs_audio_desc[i++] = USBDHDR(&in_clk_src_desc);
if (EPOUT_EN(opts)) {
fs_audio_desc[i++] = USBDHDR(&out_clk_src_desc);
fs_audio_desc[i++] = USBDHDR(&usb_out_it_desc);
}
if (EPIN_EN(opts)) {
fs_audio_desc[i++] = USBDHDR(&io_in_it_desc);
fs_audio_desc[i++] = USBDHDR(&usb_in_ot_desc);
}
if (EPOUT_EN(opts)) {
fs_audio_desc[i++] = USBDHDR(&io_out_ot_desc);
fs_audio_desc[i++] = USBDHDR(&std_as_out_if0_desc);
fs_audio_desc[i++] = USBDHDR(&std_as_out_if1_desc);
fs_audio_desc[i++] = USBDHDR(&as_out_hdr_desc);
fs_audio_desc[i++] = USBDHDR(&as_out_fmt1_desc);
fs_audio_desc[i++] = USBDHDR(&fs_epout_desc);
fs_audio_desc[i++] = USBDHDR(&as_iso_out_desc);
}
if (EPIN_EN(opts)) {
fs_audio_desc[i++] = USBDHDR(&std_as_in_if0_desc);
fs_audio_desc[i++] = USBDHDR(&std_as_in_if1_desc);
fs_audio_desc[i++] = USBDHDR(&as_in_hdr_desc);
fs_audio_desc[i++] = USBDHDR(&as_in_fmt1_desc);
fs_audio_desc[i++] = USBDHDR(&fs_epin_desc);
fs_audio_desc[i++] = USBDHDR(&as_iso_in_desc);
}
fs_audio_desc[i] = NULL;
setup_headers(opts, fs_audio_desc, USB_SPEED_FULL);
setup_headers(opts, hs_audio_desc, USB_SPEED_HIGH);
setup_headers(opts, ss_audio_desc, USB_SPEED_SUPER);
}
i = 0;
hs_audio_desc[i++] = USBDHDR(&iad_desc);
hs_audio_desc[i++] = USBDHDR(&std_ac_if_desc);
hs_audio_desc[i++] = USBDHDR(&ac_hdr_desc);
if (EPIN_EN(opts))
hs_audio_desc[i++] = USBDHDR(&in_clk_src_desc);
if (EPOUT_EN(opts)) {
hs_audio_desc[i++] = USBDHDR(&out_clk_src_desc);
hs_audio_desc[i++] = USBDHDR(&usb_out_it_desc);
static int afunc_validate_opts(struct g_audio *agdev, struct device *dev)
{
struct f_uac2_opts *opts = g_audio_to_uac2_opts(agdev);
if (!opts->p_chmask && !opts->c_chmask) {
dev_err(dev, "Error: no playback and capture channels\n");
return -EINVAL;
} else if (opts->p_chmask & ~UAC2_CHANNEL_MASK) {
dev_err(dev, "Error: unsupported playback channels mask\n");
return -EINVAL;
} else if (opts->c_chmask & ~UAC2_CHANNEL_MASK) {
dev_err(dev, "Error: unsupported capture channels mask\n");
return -EINVAL;
} else if ((opts->p_ssize < 1) || (opts->p_ssize > 4)) {
dev_err(dev, "Error: incorrect playback sample size\n");
return -EINVAL;
} else if ((opts->c_ssize < 1) || (opts->c_ssize > 4)) {
dev_err(dev, "Error: incorrect capture sample size\n");
return -EINVAL;
} else if (!opts->p_srate) {
dev_err(dev, "Error: incorrect playback sampling rate\n");
return -EINVAL;
} else if (!opts->c_srate) {
dev_err(dev, "Error: incorrect capture sampling rate\n");
return -EINVAL;
}
if (EPIN_EN(opts)) {
hs_audio_desc[i++] = USBDHDR(&io_in_it_desc);
hs_audio_desc[i++] = USBDHDR(&usb_in_ot_desc);
}
if (EPOUT_EN(opts)) {
hs_audio_desc[i++] = USBDHDR(&io_out_ot_desc);
hs_audio_desc[i++] = USBDHDR(&std_as_out_if0_desc);
hs_audio_desc[i++] = USBDHDR(&std_as_out_if1_desc);
hs_audio_desc[i++] = USBDHDR(&as_out_hdr_desc);
hs_audio_desc[i++] = USBDHDR(&as_out_fmt1_desc);
hs_audio_desc[i++] = USBDHDR(&hs_epout_desc);
hs_audio_desc[i++] = USBDHDR(&as_iso_out_desc);
}
if (EPIN_EN(opts)) {
hs_audio_desc[i++] = USBDHDR(&std_as_in_if0_desc);
hs_audio_desc[i++] = USBDHDR(&std_as_in_if1_desc);
hs_audio_desc[i++] = USBDHDR(&as_in_hdr_desc);
hs_audio_desc[i++] = USBDHDR(&as_in_fmt1_desc);
hs_audio_desc[i++] = USBDHDR(&hs_epin_desc);
hs_audio_desc[i++] = USBDHDR(&as_iso_in_desc);
}
hs_audio_desc[i] = NULL;
return 0;
}
static int
@ -612,11 +718,13 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
struct usb_composite_dev *cdev = cfg->cdev;
struct usb_gadget *gadget = cdev->gadget;
struct device *dev = &gadget->dev;
struct f_uac2_opts *uac2_opts;
struct f_uac2_opts *uac2_opts = g_audio_to_uac2_opts(agdev);
struct usb_string *us;
int ret;
uac2_opts = container_of(fn->fi, struct f_uac2_opts, func_inst);
ret = afunc_validate_opts(agdev, dev);
if (ret)
return ret;
us = usb_gstrings_attach(cdev, fn_strings, ARRAY_SIZE(strings_fn));
if (IS_ERR(us))
@ -716,6 +824,20 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
return ret;
}
ret = set_ep_max_packet_size(uac2_opts, &ss_epin_desc, USB_SPEED_SUPER,
true);
if (ret < 0) {
dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
return ret;
}
ret = set_ep_max_packet_size(uac2_opts, &ss_epout_desc, USB_SPEED_SUPER,
false);
if (ret < 0) {
dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
return ret;
}
if (EPOUT_EN(uac2_opts)) {
agdev->out_ep = usb_ep_autoconfig(gadget, &fs_epout_desc);
if (!agdev->out_ep) {
@ -739,13 +861,20 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
le16_to_cpu(fs_epout_desc.wMaxPacketSize),
le16_to_cpu(hs_epout_desc.wMaxPacketSize));
agdev->in_ep_maxpsize = max_t(u16, agdev->in_ep_maxpsize,
le16_to_cpu(ss_epin_desc.wMaxPacketSize));
agdev->out_ep_maxpsize = max_t(u16, agdev->out_ep_maxpsize,
le16_to_cpu(ss_epout_desc.wMaxPacketSize));
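/*
 * usb_ep_autoconfig() assigned endpoint addresses on the full-speed
 * descriptors only; mirror them so every speed reports the same
 * endpoint address.
 */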
hs_epout_desc.bEndpointAddress = fs_epout_desc.bEndpointAddress;
hs_epin_desc.bEndpointAddress = fs_epin_desc.bEndpointAddress;
ss_epout_desc.bEndpointAddress = fs_epout_desc.bEndpointAddress;
ss_epin_desc.bEndpointAddress = fs_epin_desc.bEndpointAddress;
setup_descriptor(uac2_opts);
ret = usb_assign_descriptors(fn, fs_audio_desc, hs_audio_desc, NULL,
NULL);
ret = usb_assign_descriptors(fn, fs_audio_desc, hs_audio_desc, ss_audio_desc,
ss_audio_desc);
if (ret)
return ret;

View File

@ -633,7 +633,12 @@ uvc_function_bind(struct usb_configuration *c, struct usb_function *f)
uvc_hs_streaming_ep.wMaxPacketSize =
cpu_to_le16(max_packet_size | ((max_packet_mult - 1) << 11));
uvc_hs_streaming_ep.bInterval = opts->streaming_interval;
/* A high-bandwidth endpoint must specify a bInterval value of 1 */
if (max_packet_mult > 1)
uvc_hs_streaming_ep.bInterval = 1;
else
uvc_hs_streaming_ep.bInterval = opts->streaming_interval;
uvc_ss_streaming_ep.wMaxPacketSize = cpu_to_le16(max_packet_size);
uvc_ss_streaming_ep.bInterval = opts->streaming_interval;
@ -817,6 +822,7 @@ static struct usb_function_instance *uvc_alloc_inst(void)
pd->bmControls[0] = 1;
pd->bmControls[1] = 0;
pd->iProcessing = 0;
pd->bmVideoStandards = 0;
od = &opts->uvc_output_terminal;
od->bLength = UVC_DT_OUTPUT_TERMINAL_SIZE;

View File

@ -549,15 +549,15 @@ int g_audio_setup(struct g_audio *g_audio, const char *pcm_name,
if (err < 0)
goto snd_fail;
strlcpy(pcm->name, pcm_name, sizeof(pcm->name));
strscpy(pcm->name, pcm_name, sizeof(pcm->name));
pcm->private_data = uac;
uac->pcm = pcm;
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &uac_pcm_ops);
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &uac_pcm_ops);
strlcpy(card->driver, card_name, sizeof(card->driver));
strlcpy(card->shortname, card_name, sizeof(card->shortname));
strscpy(card->driver, card_name, sizeof(card->driver));
strscpy(card->shortname, card_name, sizeof(card->shortname));
sprintf(card->longname, "%s %i", card_name, card->dev->id);
snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_CONTINUOUS,

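The strlcpy() to strscpy() conversions above follow the kernel-wide move away
from strlcpy(); a minimal sketch of the behavioural difference (illustrative
only, not taken from this diff):

	char name[8];
	ssize_t n;

	/*
	 * strscpy() copies at most sizeof(name) - 1 bytes, always
	 * NUL-terminates, and returns -E2BIG on truncation instead of
	 * the full source length that strlcpy() reports.
	 */
	n = strscpy(name, "dwc3-gadget", sizeof(name));
	if (n == -E2BIG)
		pr_debug("name truncated\n");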
View File

@ -231,7 +231,7 @@ static struct config_item *uvcg_control_header_make(struct config_group *group,
h->desc.bLength = UVC_DT_HEADER_SIZE(1);
h->desc.bDescriptorType = USB_DT_CS_INTERFACE;
h->desc.bDescriptorSubType = UVC_VC_HEADER;
h->desc.bcdUVC = cpu_to_le16(0x0100);
h->desc.bcdUVC = cpu_to_le16(0x0110);
h->desc.dwClockFrequency = cpu_to_le32(48000000);
config_item_init_type_name(&h->item, name, &uvcg_control_header_type);
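/* 0x0110 is UVC 1.1 in binary-coded decimal: the control header now advertises UVC 1.1 rather than 1.0. */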

View File

@ -498,7 +498,8 @@ static void ep_aio_complete(struct usb_ep *ep, struct usb_request *req)
iocb->private = NULL;
/* aio_complete() reports bytes-transferred _and_ faults */
iocb->ki_complete(iocb, req->actual ? req->actual : req->status,
iocb->ki_complete(iocb,
req->actual ? req->actual : (long)req->status,
req->status);
} else {
/* ep_copy_to_user() won't report both; we hide some faults */
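The explicit (long) cast avoids a sign-expansion pitfall in the ternary; a
minimal illustration (not driver code, assuming an LP64 target):

	unsigned int actual = 0;	/* req->actual is unsigned */
	int status = -EIO;		/* req->status is a negative errno */

	/*
	 * Without a cast, the usual arithmetic conversions make the
	 * result unsigned, mangling the negative errno; with one operand
	 * cast to long, the common type is long and -EIO survives.
	 */
	long bad  = actual ? actual : status;		/* becomes a huge positive value */
	long good = actual ? actual : (long)status;	/* stays -EIO */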

View File

@ -175,8 +175,10 @@ static int msg_bind(struct usb_composite_dev *cdev)
struct usb_descriptor_header *usb_desc;
usb_desc = usb_otg_descriptor_alloc(cdev->gadget);
if (!usb_desc)
if (!usb_desc) {
status = -ENOMEM;
goto fail_string_ids;
}
usb_otg_descriptor_init(cdev->gadget, usb_desc);
otg_desc[0] = usb_desc;
otg_desc[1] = NULL;

View File

@ -182,7 +182,7 @@ static int rndis_do_config(struct usb_configuration *c)
return ret;
}
static __ref int rndis_config_register(struct usb_composite_dev *cdev)
static int rndis_config_register(struct usb_composite_dev *cdev)
{
static struct usb_configuration config = {
.bConfigurationValue = MULTI_RNDIS_CONFIG_NUM,
@ -197,7 +197,7 @@ static __ref int rndis_config_register(struct usb_composite_dev *cdev)
#else
static __ref int rndis_config_register(struct usb_composite_dev *cdev)
static int rndis_config_register(struct usb_composite_dev *cdev)
{
return 0;
}
@ -265,7 +265,7 @@ static int cdc_do_config(struct usb_configuration *c)
return ret;
}
static __ref int cdc_config_register(struct usb_composite_dev *cdev)
static int cdc_config_register(struct usb_composite_dev *cdev)
{
static struct usb_configuration config = {
.bConfigurationValue = MULTI_CDC_CONFIG_NUM,
@ -280,7 +280,7 @@ static __ref int cdc_config_register(struct usb_composite_dev *cdev)
#else
static __ref int cdc_config_register(struct usb_composite_dev *cdev)
static int cdc_config_register(struct usb_composite_dev *cdev)
{
return 0;
}
@ -291,7 +291,7 @@ static __ref int cdc_config_register(struct usb_composite_dev *cdev)
/****************************** Gadget Bind ******************************/
static int __ref multi_bind(struct usb_composite_dev *cdev)
static int multi_bind(struct usb_composite_dev *cdev)
{
struct usb_gadget *gadget = cdev->gadget;
#ifdef CONFIG_USB_G_MULTI_CDC
@ -399,8 +399,10 @@ static int __ref multi_bind(struct usb_composite_dev *cdev)
struct usb_descriptor_header *usb_desc;
usb_desc = usb_otg_descriptor_alloc(gadget);
if (!usb_desc)
if (!usb_desc) {
status = -ENOMEM;
goto fail_string_ids;
}
usb_otg_descriptor_init(gadget, usb_desc);
otg_desc[0] = usb_desc;
otg_desc[1] = NULL;

Some files were not shown because too many files have changed in this diff.