USB / Thunderbolt patches for 5.14-rc1

Here is the big set of USB and Thunderbolt patches for 5.14-rc1.
 
 Nothing major here, just lots of little changes for new hardware and
 features.  Highlights are:
 	- more USB 4 support added to the thunderbolt core
 	- build warning fixes all over the place
 	- usb-serial driver updates and new device support
 	- mtu3 driver updates
 	- gadget driver updates
 	- dwc3 driver updates
 	- dwc2 driver updates
 	- isp1760 host driver updates
 	- musb driver updates
 	- lots of other tiny things.
 
 Full details are in the shortlog.
 
 All of these have been in linux-next for a while now with no reported
 issues.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 
 iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCYOM3EA8cZ3JlZ0Brcm9h
 aC5jb20ACgkQMUfUDdst+ynGewCeMg7YvtCnqFBNebC+GfKpFTgWxO4AnAppjSrZ
 RPGQgfZdWmx7daCXWbSK
 =u68a
 -----END PGP SIGNATURE-----

Merge tag 'usb-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB / Thunderbolt updates from Greg KH:
 "Here is the big set of USB and Thunderbolt patches for 5.14-rc1.

  Nothing major here, just lots of little changes for new hardware and
  features. Highlights are:

   - more USB 4 support added to the thunderbolt core

   - build warning fixes all over the place

   - usb-serial driver updates and new device support

   - mtu3 driver updates

   - gadget driver updates

   - dwc3 driver updates

   - dwc2 driver updates

   - isp1760 host driver updates

   - musb driver updates

   - lots of other tiny things.

  Full details are in the shortlog.

  All of these have been in linux-next for a while now with no reported
  issues"

* tag 'usb-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (223 commits)
  phy: qcom-qusb2: Add configuration for SM4250 and SM6115
  dt-bindings: phy: qcom,qusb2: document sm4250/6115 compatible
  dt-bindings: usb: qcom,dwc3: Add bindings for sm6115/4250
  USB: cdc-acm: blacklist Heimann USB Appset device
  usb: xhci-mtk: allow multiple Start-Split in a microframe
  usb: ftdi-elan: remove redundant continue statement in a while-loop
  usb: class: cdc-wdm: return the correct errno code
  xhci: remove redundant continue statement
  usb: dwc3: Fix debugfs creation flow
  usb: gadget: hid: fix error return code in hid_bind()
  usb: gadget: eem: fix echo command packet response issue
  usb: gadget: f_hid: fix endianness issue with descriptors
  Revert "USB: misc: Add onboard_usb_hub driver"
  Revert "of/platform: Add stubs for of_platform_device_create/destroy()"
  Revert "usb: host: xhci-plat: Create platform device for onboard hubs in probe()"
  Revert "arm64: dts: qcom: sc7180-trogdor: Add nodes for onboard USB hub"
  xhci: solve a double free problem while doing s4
  xhci: handle failed buffer copy to URB sg list and fix a W=1 copiler warning
  xhci: Add adaptive interrupt rate for isoch TRBs with XHCI_AVOID_BEI quirk
  xhci: Remove unused defines for ERST_SIZE and ERST_ENTRIES
  ...
This commit is contained in:
Linus Torvalds 2021-07-05 14:16:22 -07:00
commit 79160a603b
215 changed files with 8671 additions and 2712 deletions

View File

@ -8,6 +8,8 @@ Description:
c_chmask capture channel mask
c_srate capture sampling rate
c_ssize capture sample size (bytes)
c_sync capture synchronization type (async/adaptive)
fb_max maximum extra bandwidth in async mode
p_chmask playback channel mask
p_srate playback sampling rate
p_ssize playback sample size (bytes)

View File

@ -1,4 +1,4 @@
What: /sys/bus/thunderbolt/devices/.../domainX/boot_acl
Date: Jun 2018
KernelVersion: 4.17
Contact: thunderbolt-software@lists.01.org
@ -21,7 +21,7 @@ Description: Holds a comma separated list of device unique_ids that
If a device is authorized automatically during boot its
boot attribute is set to 1.
What: /sys/bus/thunderbolt/devices/.../domainX/deauthorization
Date: May 2021
KernelVersion: 5.12
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
@ -30,7 +30,7 @@ Description: This attribute tells whether the system supports
de-authorize PCIe tunnel by writing 0 to authorized
attribute under each device.
What: /sys/bus/thunderbolt/devices/.../domainX/iommu_dma_protection
Date: Mar 2019
KernelVersion: 4.21
Contact: thunderbolt-software@lists.01.org
@ -39,7 +39,7 @@ Description: This attribute tells whether the system uses IOMMU
it is not (DMA protection is solely based on Thunderbolt
security levels).
What: /sys/bus/thunderbolt/devices/.../domainX/security
Date: Sep 2017
KernelVersion: 4.13
Contact: thunderbolt-software@lists.01.org
@ -61,7 +61,7 @@ Description: This attribute holds current Thunderbolt security level
the BIOS.
======= ==================================================
What: /sys/bus/thunderbolt/devices/.../authorized
Date: Sep 2017
KernelVersion: 4.13
Contact: thunderbolt-software@lists.01.org
@ -95,14 +95,14 @@ Description: This attribute is used to authorize Thunderbolt devices
EKEYREJECTED if the challenge response did not match.
== ========================================================
What: /sys/bus/thunderbolt/devices/.../boot
Date: Jun 2018
KernelVersion: 4.17
Contact: thunderbolt-software@lists.01.org
Description: This attribute contains 1 if Thunderbolt device was already
authorized on boot and 0 otherwise.
What: /sys/bus/thunderbolt/devices/.../generation
Date: Jan 2020
KernelVersion: 5.5
Contact: Christian Kellner <christian@kellner.me>
@ -110,7 +110,7 @@ Description: This attribute contains the generation of the Thunderbolt
controller associated with the device. It will contain 4
for USB4.
What: /sys/bus/thunderbolt/devices/.../key
Date: Sep 2017
KernelVersion: 4.13
Contact: thunderbolt-software@lists.01.org
@ -213,12 +213,15 @@ Description: When new NVM image is written to the non-active NVM
restarted with the new NVM firmware. If the image
verification fails an error code is returned instead.
This file will accept writing values "1", "2" or "3".
- Writing "1" will flush the image to the storage
area and authenticate the image in one action.
- Writing "2" will run some basic validation on the image
and flush it to the storage area.
- Writing "3" will authenticate the image that is
currently written in the storage area. This is only
supported with USB4 devices and retimers.
When read holds status of the last authentication
operation if an error occurred during the process. This
@ -226,6 +229,20 @@ Description: When new NVM image is written to the non-active NVM
based mailbox before the device is power cycled. Writing
0 here clears the status.
What: /sys/bus/thunderbolt/devices/.../nvm_authenticate_on_disconnect
Date: Oct 2020
KernelVersion: v5.9
Contact: Mario Limonciello <mario.limonciello@dell.com>
Description: For supported devices, automatically authenticate the new Thunderbolt
image when the device is disconnected from the host system.
This file will accept writing values "1" or "2"
- Writing "1" will flush the image to the storage
area and prepare the device for authentication on disconnect.
- Writing "2" will run some basic validation on the image
and flush it to the storage area.
What: /sys/bus/thunderbolt/devices/<xdomain>.<service>/key
Date: Jan 2018
KernelVersion: 4.15
@ -276,6 +293,39 @@ Contact: thunderbolt-software@lists.01.org
Description: This contains XDomain service specific settings as
bitmask. Format: %x
What: /sys/bus/thunderbolt/devices/usb4_portX/link
Date: Sep 2021
KernelVersion: v5.14
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: Returns the current link mode. Possible values are
"usb4", "tbt" and "none".
What: /sys/bus/thunderbolt/devices/usb4_portX/offline
Date: Sep 2021
KernelVersion: v5.14
Contact: Rajmohan Mani <rajmohan.mani@intel.com>
Description: Writing 1 to this attribute puts the USB4 port into
offline mode. Only allowed when there is nothing
connected to the port (link attribute returns "none").
Once the port is in offline mode it does not receive any
hotplug events. This is used to update NVM firmware of
on-board retimers. Writing 0 puts the port back to
online mode.
This attribute is only visible if the platform supports
powering on retimers when there is no cable connected.
What: /sys/bus/thunderbolt/devices/usb4_portX/rescan
Date: Sep 2021
KernelVersion: v5.14
Contact: Rajmohan Mani <rajmohan.mani@intel.com>
Description: When the USB4 port is in offline mode writing 1 to this
attribute forces rescan of the sideband for on-board
retimers. Each retimer appears under the USB4 port as if
the USB4 link was up. These retimers act in the same way
as if the cable was connected, so upgrading their NVM
firmware can be done the usual way.
What: /sys/bus/thunderbolt/devices/<device>:<port>.<index>/device
Date: Oct 2020
KernelVersion: v5.9
@ -308,17 +358,3 @@ Date: Oct 2020
KernelVersion: v5.9
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: Retimer vendor identifier read from the hardware.

View File

@ -154,17 +154,6 @@ Description:
files hold a string value (enable or disable) indicating whether
or not USB3 hardware LPM U1 or U2 is enabled for the device.
What: /sys/bus/usb/devices/.../removable
Date: February 2012
Contact: Matthew Garrett <mjg@redhat.com>
Description:
Some information about whether a given USB device is
physically fixed to the platform can be inferred from a
combination of hub descriptor bits and platform-specific data
such as ACPI. This file will read either "removable" or
"fixed" if the information is available, and "unknown"
otherwise.
What: /sys/bus/usb/devices/.../ltm_capable
Date: July 2012
Contact: Sarah Sharp <sarah.a.sharp@linux.intel.com>

View File

@ -0,0 +1,18 @@
What: /sys/devices/.../removable
Date: May 2021
Contact: Rajat Jain <rajatxjain@gmail.com>
Description:
Information about whether a given device can be removed from the
platform by the user. This is determined by its subsystem in a
bus / platform-specific way. This attribute is only present for
devices that can support determining such information:
"removable": device can be removed from the platform by the user
"fixed": device is fixed to the platform / cannot be removed
by the user.
"unknown": The information is unavailable / cannot be deduced.
Currently this is only supported by USB (which infers the
information from a combination of hub descriptor bits and
platform-specific data such as ACPI) and PCI (which gets this
from ACPI / device tree).

View File

@ -256,6 +256,35 @@ Note names of the NVMem devices ``nvm_activeN`` and ``nvm_non_activeN``
depend on the order they are registered in the NVMem subsystem. N in
the name is the identifier added by the NVMem subsystem.
Upgrading on-board retimer NVM when there is no cable connected
---------------------------------------------------------------
If the platform supports it, it may be possible to upgrade the retimer
NVM firmware even when there is nothing connected to the USB4
ports. When this is the case the ``usb4_portX`` devices have two special
attributes: ``offline`` and ``rescan``. The way to upgrade the firmware
is to first put the USB4 port into offline mode::
# echo 1 > /sys/bus/thunderbolt/devices/0-0/usb4_port1/offline
This step makes sure the port does not respond to any hotplug events,
and also ensures the retimers are powered on. The next step is to scan
for the retimers::
# echo 1 > /sys/bus/thunderbolt/devices/0-0/usb4_port1/rescan
This enumerates and adds the on-board retimers. Now the retimer NVM can
be upgraded in the same way as with a cable connected (see the previous
section). However, the retimer is not disconnected (as we are in offline
mode), so after writing ``1`` to ``nvm_authenticate`` one should wait for
5 or more seconds before running rescan again::
# echo 1 > /sys/bus/thunderbolt/devices/0-0/usb4_port1/rescan
At this point, if everything went fine, the port can be put back to
the functional state again::
# echo 0 > /sys/bus/thunderbolt/devices/0-0/usb4_port1/offline
Upgrading NVM when host controller is in safe mode
--------------------------------------------------
If the existing NVM is not properly authenticated (or is missing) the

View File

@ -15,7 +15,9 @@ properties:
const: 1
compatible:
enum:
- allwinner,sun8i-h3-usb-phy
- allwinner,sun50i-h616-usb-phy
reg:
items:

View File

@ -23,6 +23,8 @@ properties:
- qcom,msm8998-qusb2-phy
- qcom,sdm660-qusb2-phy
- qcom,ipq6018-qusb2-phy
- qcom,sm4250-qusb2-phy
- qcom,sm6115-qusb2-phy
- items:
- enum:
- qcom,sc7180-qusb2-phy

View File

@ -22,6 +22,9 @@ properties:
- allwinner,sun8i-a83t-musb
- allwinner,sun50i-h6-musb
- const: allwinner,sun8i-a33-musb
- items:
- const: allwinner,sun50i-h616-musb
- const: allwinner,sun8i-h3-musb
reg:
maxItems: 1

View File

@ -75,6 +75,7 @@ required:
- reg
- reg-names
- interrupts
- interrupt-names
additionalProperties: false

View File

@ -24,6 +24,7 @@ properties:
- rockchip,rk3188-usb
- rockchip,rk3228-usb
- rockchip,rk3288-usb
- rockchip,rk3308-usb
- rockchip,rk3328-usb
- rockchip,rk3368-usb
- rockchip,rv1108-usb

View File

@ -0,0 +1,69 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/nxp,isp1760.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: NXP ISP1760 family controller bindings
maintainers:
- Sebastian Siewior <bigeasy@linutronix.de>
- Laurent Pinchart <laurent.pinchart@ideasonboard.com>
description: |
NXP ISP1760 family, which includes ISP1760/1761/1763 devicetree controller
bindings
properties:
compatible:
enum:
- nxp,usb-isp1760
- nxp,usb-isp1761
- nxp,usb-isp1763
reg:
maxItems: 1
interrupts:
minItems: 1
maxItems: 2
items:
- description: Host controller interrupt
- description: Device controller interrupt in isp1761
interrupt-names:
minItems: 1
maxItems: 2
items:
- const: host
- const: peripheral
bus-width:
description:
Number of data lines.
enum: [8, 16, 32]
default: 32
dr_mode:
enum:
- host
- peripheral
required:
- compatible
- reg
- interrupts
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
usb@40200000 {
compatible = "nxp,usb-isp1763";
reg = <0x40200000 0x100000>;
interrupts = <GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>;
bus-width = <16>;
dr_mode = "host";
};
...

View File

@ -19,6 +19,8 @@ properties:
- qcom,sc7280-dwc3
- qcom,sdm845-dwc3
- qcom,sdx55-dwc3
- qcom,sm4250-dwc3
- qcom,sm6115-dwc3
- qcom,sm8150-dwc3
- qcom,sm8250-dwc3
- qcom,sm8350-dwc3

View File

@ -0,0 +1,62 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/realtek,rts5411.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Binding for the Realtek RTS5411 USB 3.0 hub controller
maintainers:
- Matthias Kaehlcke <mka@chromium.org>
allOf:
- $ref: usb-device.yaml#
properties:
compatible:
items:
- enum:
- usbbda,5411
- usbbda,411
reg: true
vdd-supply:
description:
phandle to the regulator that provides power to the hub.
companion-hub:
$ref: '/schemas/types.yaml#/definitions/phandle'
description:
phandle to the companion hub on the controller.
required:
- companion-hub
- compatible
- reg
additionalProperties: false
examples:
- |
usb {
dr_mode = "host";
#address-cells = <1>;
#size-cells = <0>;
/* 2.0 hub on port 1 */
hub_2_0: hub@1 {
compatible = "usbbda,5411";
reg = <1>;
vdd-supply = <&pp3300_hub>;
companion-hub = <&hub_3_0>;
};
/* 3.0 hub on port 2 */
hub_3_0: hub@2 {
compatible = "usbbda,411";
reg = <2>;
vdd-supply = <&pp3300_hub>;
companion-hub = <&hub_2_0>;
};
};

View File

@ -61,6 +61,9 @@ USB-specific:
(c) requested data transfer length is invalid: negative
or too large for the host controller.
``-EBADR`` The wLength value in a control URB's setup packet does
not match the URB's transfer_buffer_length.
``-ENOSPC`` This request would overcommit the usb bandwidth reserved
for periodic transfers (interrupt, isochronous).

View File

@ -728,6 +728,8 @@ The uac2 function provides these attributes in its function directory:
c_chmask capture channel mask
c_srate capture sampling rate
c_ssize capture sample size (bytes)
c_sync capture synchronization type (async/adaptive)
fb_max maximum extra bandwidth in async mode
p_chmask playback channel mask
p_srate playback sampling rate
p_ssize playback sample size (bytes)

View File

@ -148,7 +148,7 @@ ethernet: ethernet@4e000000 {
usb: usb@4f000000 {
compatible = "nxp,usb-isp1761";
reg = <0x4f000000 0x20000>;
dr_mode = "peripheral";
};
bridge {

View File

@ -166,7 +166,7 @@ usb@3b000000 {
reg = <0x3b000000 0x20000>;
interrupt-parent = <&intc_fpga1176>;
interrupts = <0 11 IRQ_TYPE_LEVEL_HIGH>;
dr_mode = "peripheral";
};
bridge {

View File

@ -712,7 +712,7 @@ usb@4f000000 {
reg = <0x4f000000 0x20000>;
interrupt-parent = <&intc_tc11mp>;
interrupts = <0 3 IRQ_TYPE_LEVEL_HIGH>;
dr_mode = "peripheral";
};
};
};

View File

@ -164,7 +164,7 @@ ethernet: ethernet@4e000000 {
usb: usb@4f000000 {
compatible = "nxp,usb-isp1761";
reg = <0x4f000000 0x20000>;
dr_mode = "peripheral";
};
bridge {

View File

@ -144,7 +144,7 @@ usb@203000000 {
compatible = "nxp,usb-isp1761";
reg = <2 0x03000000 0x20000>;
interrupts = <16>;
dr_mode = "peripheral";
};
iofpga-bus@300000000 {

View File

@ -62,7 +62,7 @@ usb@3,03000000 {
compatible = "nxp,usb-isp1761";
reg = <3 0x03000000 0x20000>;
interrupts = <16>;
dr_mode = "peripheral";
};
iofpga@7,00000000 {

View File

@ -2409,6 +2409,25 @@ static ssize_t online_store(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RW(online);
static ssize_t removable_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
const char *loc;
switch (dev->removable) {
case DEVICE_REMOVABLE:
loc = "removable";
break;
case DEVICE_FIXED:
loc = "fixed";
break;
default:
loc = "unknown";
}
return sysfs_emit(buf, "%s\n", loc);
}
static DEVICE_ATTR_RO(removable);
int device_add_groups(struct device *dev, const struct attribute_group **groups)
{
return sysfs_create_groups(&dev->kobj, groups);
@ -2586,8 +2605,16 @@ static int device_add_attrs(struct device *dev)
goto err_remove_dev_online;
}
if (dev_removable_is_valid(dev)) {
error = device_create_file(dev, &dev_attr_removable);
if (error)
goto err_remove_dev_waiting_for_supplier;
}
return 0;
err_remove_dev_waiting_for_supplier:
device_remove_file(dev, &dev_attr_waiting_for_supplier);
err_remove_dev_online:
device_remove_file(dev, &dev_attr_online);
err_remove_dev_groups:
@ -2607,6 +2634,7 @@ static void device_remove_attrs(struct device *dev)
struct class *class = dev->class;
const struct device_type *type = dev->type;
device_remove_file(dev, &dev_attr_removable);
device_remove_file(dev, &dev_attr_waiting_for_supplier);
device_remove_file(dev, &dev_attr_online);
device_remove_groups(dev, dev->groups);

View File

@ -1576,6 +1576,26 @@ static void set_pcie_untrusted(struct pci_dev *dev)
dev->untrusted = true;
}
static void pci_set_removable(struct pci_dev *dev)
{
struct pci_dev *parent = pci_upstream_bridge(dev);
/*
* We (only) consider everything downstream from an external_facing
* device to be removable by the user. We're mainly concerned with
* consumer platforms with user accessible thunderbolt ports that are
* vulnerable to DMA attacks, and we expect those ports to be marked by
* the firmware as external_facing. Devices in traditional hotplug
* slots can technically be removed, but the expectation is that unless
* the port is marked with external_facing, such devices are less
* accessible to user / may not be removed by end user, and thus not
* exposed as "removable" to userspace.
*/
if (parent &&
(parent->external_facing || dev_is_removable(&parent->dev)))
dev_set_removable(&dev->dev, DEVICE_REMOVABLE);
}
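The rule encoded above, that everything downstream of an external_facing bridge is considered user-removable and the property propagates down the hierarchy, can be modelled outside the kernel. A plain-Python sketch (not kernel code; class and field names are analogues, not the real structs):

```python
class Dev:
    """Toy analogue of a pci_dev with an upstream bridge pointer."""
    def __init__(self, parent=None, external_facing=False):
        self.parent = parent
        self.external_facing = external_facing
        self.removable = False  # DEVICE_REMOVABLE analogue

def set_removable(dev: Dev) -> None:
    # Mirrors the logic of pci_set_removable(): a device is removable when
    # its upstream bridge is marked external_facing by firmware, or is
    # itself already removable (so the property flows down the tree as
    # devices are enumerated parent-first).
    p = dev.parent
    if p and (p.external_facing or p.removable):
        dev.removable = True
```

Note that the external_facing bridge itself stays non-removable; only devices below it are marked, matching the comment in the kernel function.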
/**
* pci_ext_cfg_is_aliased - Is ext config space just an alias of std config?
* @dev: PCI device
@ -1823,6 +1843,8 @@ int pci_setup_device(struct pci_dev *dev)
/* Early fixups, before probing the BARs */
pci_fixup_device(pci_fixup_early, dev);
pci_set_removable(dev);
pci_info(dev, "[%04x:%04x] type %02x class %#08x\n",
dev->vendor, dev->device, dev->hdr_type, dev->class);

View File

@ -219,6 +219,22 @@ static const struct qusb2_phy_init_tbl msm8998_init_tbl[] = {
QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_DIGITAL_TIMERS_TWO, 0x19),
};
static const struct qusb2_phy_init_tbl sm6115_init_tbl[] = {
QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE1, 0xf8),
QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE2, 0x53),
QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE3, 0x81),
QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE4, 0x17),
QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_TUNE, 0x30),
QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_USER_CTL1, 0x79),
QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_USER_CTL2, 0x21),
QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TEST2, 0x14),
QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_AUTOPGM_CTL1, 0x9f),
QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_PWR_CTRL, 0x00),
};
static const unsigned int qusb2_v2_regs_layout[] = {
[QUSB2PHY_PLL_CORE_INPUT_OVERRIDE] = 0xa8,
[QUSB2PHY_PLL_STATUS] = 0x1a0,
@ -342,6 +358,18 @@ static const struct qusb2_phy_cfg sdm660_phy_cfg = {
.autoresume_en = BIT(3),
};
static const struct qusb2_phy_cfg sm6115_phy_cfg = {
.tbl = sm6115_init_tbl,
.tbl_num = ARRAY_SIZE(sm6115_init_tbl),
.regs = msm8996_regs_layout,
.has_pll_test = true,
.se_clk_scheme_default = true,
.disable_ctrl = (CLAMP_N_EN | FREEZIO_N | POWER_DOWN),
.mask_core_ready = PLL_LOCKED,
.autoresume_en = BIT(3),
};
static const char * const qusb2_phy_vreg_names[] = {
"vdda-pll", "vdda-phy-dpdm",
};
@ -888,6 +916,12 @@ static const struct of_device_id qusb2_phy_of_match_table[] = {
}, {
.compatible = "qcom,sdm660-qusb2-phy",
.data = &sdm660_phy_cfg,
}, {
.compatible = "qcom,sm4250-qusb2-phy",
.data = &sm6115_phy_cfg,
}, {
.compatible = "qcom,sm6115-qusb2-phy",
.data = &sm6115_phy_cfg,
}, {
/*
* Deprecated. Only here to support legacy device

View File

@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2016-2020, NVIDIA CORPORATION. All rights reserved.
*/
#include <linux/delay.h>
@ -113,6 +113,117 @@
#define ID_OVERRIDE_FLOATING ID_OVERRIDE(8)
#define ID_OVERRIDE_GROUNDED ID_OVERRIDE(0)
/* XUSB AO registers */
#define XUSB_AO_USB_DEBOUNCE_DEL (0x4)
#define UHSIC_LINE_DEB_CNT(x) (((x) & 0xf) << 4)
#define UTMIP_LINE_DEB_CNT(x) ((x) & 0xf)
#define XUSB_AO_UTMIP_TRIGGERS(x) (0x40 + (x) * 4)
#define CLR_WALK_PTR BIT(0)
#define CAP_CFG BIT(1)
#define CLR_WAKE_ALARM BIT(3)
#define XUSB_AO_UHSIC_TRIGGERS(x) (0x60 + (x) * 4)
#define HSIC_CLR_WALK_PTR BIT(0)
#define HSIC_CLR_WAKE_ALARM BIT(3)
#define HSIC_CAP_CFG BIT(4)
#define XUSB_AO_UTMIP_SAVED_STATE(x) (0x70 + (x) * 4)
#define SPEED(x) ((x) & 0x3)
#define UTMI_HS SPEED(0)
#define UTMI_FS SPEED(1)
#define UTMI_LS SPEED(2)
#define UTMI_RST SPEED(3)
#define XUSB_AO_UHSIC_SAVED_STATE(x) (0x90 + (x) * 4)
#define MODE(x) ((x) & 0x1)
#define MODE_HS MODE(0)
#define MODE_RST MODE(1)
#define XUSB_AO_UTMIP_SLEEPWALK_CFG(x) (0xd0 + (x) * 4)
#define XUSB_AO_UHSIC_SLEEPWALK_CFG(x) (0xf0 + (x) * 4)
#define FAKE_USBOP_VAL BIT(0)
#define FAKE_USBON_VAL BIT(1)
#define FAKE_USBOP_EN BIT(2)
#define FAKE_USBON_EN BIT(3)
#define FAKE_STROBE_VAL BIT(0)
#define FAKE_DATA_VAL BIT(1)
#define FAKE_STROBE_EN BIT(2)
#define FAKE_DATA_EN BIT(3)
#define WAKE_WALK_EN BIT(14)
#define MASTER_ENABLE BIT(15)
#define LINEVAL_WALK_EN BIT(16)
#define WAKE_VAL(x) (((x) & 0xf) << 17)
#define WAKE_VAL_NONE WAKE_VAL(12)
#define WAKE_VAL_ANY WAKE_VAL(15)
#define WAKE_VAL_DS10 WAKE_VAL(2)
#define LINE_WAKEUP_EN BIT(21)
#define MASTER_CFG_SEL BIT(22)
#define XUSB_AO_UTMIP_SLEEPWALK(x) (0x100 + (x) * 4)
/* phase A */
#define USBOP_RPD_A BIT(0)
#define USBON_RPD_A BIT(1)
#define AP_A BIT(4)
#define AN_A BIT(5)
#define HIGHZ_A BIT(6)
/* phase B */
#define USBOP_RPD_B BIT(8)
#define USBON_RPD_B BIT(9)
#define AP_B BIT(12)
#define AN_B BIT(13)
#define HIGHZ_B BIT(14)
/* phase C */
#define USBOP_RPD_C BIT(16)
#define USBON_RPD_C BIT(17)
#define AP_C BIT(20)
#define AN_C BIT(21)
#define HIGHZ_C BIT(22)
/* phase D */
#define USBOP_RPD_D BIT(24)
#define USBON_RPD_D BIT(25)
#define AP_D BIT(28)
#define AN_D BIT(29)
#define HIGHZ_D BIT(30)
#define XUSB_AO_UHSIC_SLEEPWALK(x) (0x120 + (x) * 4)
/* phase A */
#define RPD_STROBE_A BIT(0)
#define RPD_DATA0_A BIT(1)
#define RPU_STROBE_A BIT(2)
#define RPU_DATA0_A BIT(3)
/* phase B */
#define RPD_STROBE_B BIT(8)
#define RPD_DATA0_B BIT(9)
#define RPU_STROBE_B BIT(10)
#define RPU_DATA0_B BIT(11)
/* phase C */
#define RPD_STROBE_C BIT(16)
#define RPD_DATA0_C BIT(17)
#define RPU_STROBE_C BIT(18)
#define RPU_DATA0_C BIT(19)
/* phase D */
#define RPD_STROBE_D BIT(24)
#define RPD_DATA0_D BIT(25)
#define RPU_STROBE_D BIT(26)
#define RPU_DATA0_D BIT(27)
#define XUSB_AO_UTMIP_PAD_CFG(x) (0x130 + (x) * 4)
#define FSLS_USE_XUSB_AO BIT(3)
#define TRK_CTRL_USE_XUSB_AO BIT(4)
#define RPD_CTRL_USE_XUSB_AO BIT(5)
#define RPU_USE_XUSB_AO BIT(6)
#define VREG_USE_XUSB_AO BIT(7)
#define USBOP_VAL_PD BIT(8)
#define USBON_VAL_PD BIT(9)
#define E_DPD_OVRD_EN BIT(10)
#define E_DPD_OVRD_VAL BIT(11)
#define XUSB_AO_UHSIC_PAD_CFG(x) (0x150 + (x) * 4)
#define STROBE_VAL_PD BIT(0)
#define DATA0_VAL_PD BIT(1)
#define USE_XUSB_AO BIT(4)
#define TEGRA186_LANE(_name, _offset, _shift, _mask, _type) \
{ \
.name = _name, \
@ -130,16 +241,37 @@ struct tegra_xusb_fuse_calibration {
u32 rpd_ctrl;
};
struct tegra186_xusb_padctl_context {
u32 vbus_id;
u32 usb2_pad_mux;
u32 usb2_port_cap;
u32 ss_port_cap;
};
struct tegra186_xusb_padctl {
struct tegra_xusb_padctl base;
void __iomem *ao_regs;
struct tegra_xusb_fuse_calibration calib;
/* UTMI bias and tracking */
struct clk *usb2_trk_clk;
unsigned int bias_pad_enable;
/* padctl context */
struct tegra186_xusb_padctl_context context;
};
static inline void ao_writel(struct tegra186_xusb_padctl *priv, u32 value, unsigned int offset)
{
writel(value, priv->ao_regs + offset);
}
static inline u32 ao_readl(struct tegra186_xusb_padctl *priv, unsigned int offset)
{
return readl(priv->ao_regs + offset);
}
static inline struct tegra186_xusb_padctl *
to_tegra186_xusb_padctl(struct tegra_xusb_padctl *padctl)
{
@ -180,9 +312,264 @@ static void tegra186_usb2_lane_remove(struct tegra_xusb_lane *lane)
kfree(usb2);
}
static int tegra186_utmi_enable_phy_sleepwalk(struct tegra_xusb_lane *lane,
enum usb_device_speed speed)
{
struct tegra_xusb_padctl *padctl = lane->pad->padctl;
struct tegra186_xusb_padctl *priv = to_tegra186_xusb_padctl(padctl);
unsigned int index = lane->index;
u32 value;
mutex_lock(&padctl->lock);
/* ensure sleepwalk logic is disabled */
value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
value &= ~MASTER_ENABLE;
ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
/* ensure sleepwalk logics are in low power mode */
value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
value |= MASTER_CFG_SEL;
ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
/* set debounce time */
value = ao_readl(priv, XUSB_AO_USB_DEBOUNCE_DEL);
value &= ~UTMIP_LINE_DEB_CNT(~0);
value |= UTMIP_LINE_DEB_CNT(1);
ao_writel(priv, value, XUSB_AO_USB_DEBOUNCE_DEL);
/* ensure fake events of sleepwalk logic are disabled */
value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
value &= ~(FAKE_USBOP_VAL | FAKE_USBON_VAL |
FAKE_USBOP_EN | FAKE_USBON_EN);
ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
/* ensure wake events of sleepwalk logic are not latched */
value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
value &= ~LINE_WAKEUP_EN;
ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
/* disable wake event triggers of sleepwalk logic */
value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
value &= ~WAKE_VAL(~0);
value |= WAKE_VAL_NONE;
ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
/* power down the line state detectors of the pad */
value = ao_readl(priv, XUSB_AO_UTMIP_PAD_CFG(index));
value |= (USBOP_VAL_PD | USBON_VAL_PD);
ao_writel(priv, value, XUSB_AO_UTMIP_PAD_CFG(index));
/* save state per speed */
value = ao_readl(priv, XUSB_AO_UTMIP_SAVED_STATE(index));
value &= ~SPEED(~0);
switch (speed) {
case USB_SPEED_HIGH:
value |= UTMI_HS;
break;
case USB_SPEED_FULL:
value |= UTMI_FS;
break;
case USB_SPEED_LOW:
value |= UTMI_LS;
break;
default:
value |= UTMI_RST;
break;
}
ao_writel(priv, value, XUSB_AO_UTMIP_SAVED_STATE(index));
/* enable the trigger of the sleepwalk logic */
value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
value |= LINEVAL_WALK_EN;
value &= ~WAKE_WALK_EN;
ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
/* reset the walk pointer and clear the alarm of the sleepwalk logic,
* as well as capture the configuration of the USB2.0 pad
*/
value = ao_readl(priv, XUSB_AO_UTMIP_TRIGGERS(index));
value |= (CLR_WALK_PTR | CLR_WAKE_ALARM | CAP_CFG);
ao_writel(priv, value, XUSB_AO_UTMIP_TRIGGERS(index));
/* setup the pull-ups and pull-downs of the signals during the four
* stages of sleepwalk.
* if device is connected, program sleepwalk logic to maintain a J and
* keep driving K upon seeing remote wake.
*/
value = USBOP_RPD_A | USBOP_RPD_B | USBOP_RPD_C | USBOP_RPD_D;
value |= USBON_RPD_A | USBON_RPD_B | USBON_RPD_C | USBON_RPD_D;
switch (speed) {
case USB_SPEED_HIGH:
case USB_SPEED_FULL:
/* J state: D+/D- = high/low, K state: D+/D- = low/high */
value |= HIGHZ_A;
value |= AP_A;
value |= AN_B | AN_C | AN_D;
break;
case USB_SPEED_LOW:
/* J state: D+/D- = low/high, K state: D+/D- = high/low */
value |= HIGHZ_A;
value |= AN_A;
value |= AP_B | AP_C | AP_D;
break;
default:
value |= HIGHZ_A | HIGHZ_B | HIGHZ_C | HIGHZ_D;
break;
}
ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK(index));
/* power up the line state detectors of the pad */
value = ao_readl(priv, XUSB_AO_UTMIP_PAD_CFG(index));
value &= ~(USBOP_VAL_PD | USBON_VAL_PD);
ao_writel(priv, value, XUSB_AO_UTMIP_PAD_CFG(index));
usleep_range(150, 200);
/* switch the electric control of the USB2.0 pad to XUSB_AO */
value = ao_readl(priv, XUSB_AO_UTMIP_PAD_CFG(index));
value |= FSLS_USE_XUSB_AO | TRK_CTRL_USE_XUSB_AO | RPD_CTRL_USE_XUSB_AO |
RPU_USE_XUSB_AO | VREG_USE_XUSB_AO;
ao_writel(priv, value, XUSB_AO_UTMIP_PAD_CFG(index));
/* set the wake signaling trigger events */
value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
value &= ~WAKE_VAL(~0);
value |= WAKE_VAL_ANY;
ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
/* enable the wake detection */
value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
value |= MASTER_ENABLE | LINE_WAKEUP_EN;
ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
mutex_unlock(&padctl->lock);
return 0;
}
static int tegra186_utmi_disable_phy_sleepwalk(struct tegra_xusb_lane *lane)
{
struct tegra_xusb_padctl *padctl = lane->pad->padctl;
struct tegra186_xusb_padctl *priv = to_tegra186_xusb_padctl(padctl);
unsigned int index = lane->index;
u32 value;
mutex_lock(&padctl->lock);
/* disable the wake detection */
value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
value &= ~(MASTER_ENABLE | LINE_WAKEUP_EN);
ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
/* switch the electric control of the USB2.0 pad to XUSB vcore logic */
value = ao_readl(priv, XUSB_AO_UTMIP_PAD_CFG(index));
value &= ~(FSLS_USE_XUSB_AO | TRK_CTRL_USE_XUSB_AO | RPD_CTRL_USE_XUSB_AO |
RPU_USE_XUSB_AO | VREG_USE_XUSB_AO);
ao_writel(priv, value, XUSB_AO_UTMIP_PAD_CFG(index));
/* disable wake event triggers of sleepwalk logic */
value = ao_readl(priv, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
value &= ~WAKE_VAL(~0);
value |= WAKE_VAL_NONE;
ao_writel(priv, value, XUSB_AO_UTMIP_SLEEPWALK_CFG(index));
/* power down the line state detectors of the port */
value = ao_readl(priv, XUSB_AO_UTMIP_PAD_CFG(index));
value |= USBOP_VAL_PD | USBON_VAL_PD;
ao_writel(priv, value, XUSB_AO_UTMIP_PAD_CFG(index));
/* clear alarm of the sleepwalk logic */
value = ao_readl(priv, XUSB_AO_UTMIP_TRIGGERS(index));
value |= CLR_WAKE_ALARM;
ao_writel(priv, value, XUSB_AO_UTMIP_TRIGGERS(index));
mutex_unlock(&padctl->lock);
return 0;
}
static int tegra186_utmi_enable_phy_wake(struct tegra_xusb_lane *lane)
{
struct tegra_xusb_padctl *padctl = lane->pad->padctl;
unsigned int index = lane->index;
u32 value;
mutex_lock(&padctl->lock);
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM);
value &= ~ALL_WAKE_EVENTS;
value |= USB2_PORT_WAKEUP_EVENT(index);
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM);
usleep_range(10, 20);
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM);
value &= ~ALL_WAKE_EVENTS;
value |= USB2_PORT_WAKE_INTERRUPT_ENABLE(index);
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM);
mutex_unlock(&padctl->lock);
return 0;
}
static int tegra186_utmi_disable_phy_wake(struct tegra_xusb_lane *lane)
{
struct tegra_xusb_padctl *padctl = lane->pad->padctl;
unsigned int index = lane->index;
u32 value;
mutex_lock(&padctl->lock);
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM);
value &= ~ALL_WAKE_EVENTS;
value &= ~USB2_PORT_WAKE_INTERRUPT_ENABLE(index);
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM);
usleep_range(10, 20);
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM);
value &= ~ALL_WAKE_EVENTS;
value |= USB2_PORT_WAKEUP_EVENT(index);
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM);
mutex_unlock(&padctl->lock);
return 0;
}
static bool tegra186_utmi_phy_remote_wake_detected(struct tegra_xusb_lane *lane)
{
struct tegra_xusb_padctl *padctl = lane->pad->padctl;
unsigned int index = lane->index;
u32 value;
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM);
if ((value & USB2_PORT_WAKE_INTERRUPT_ENABLE(index)) &&
(value & USB2_PORT_WAKEUP_EVENT(index)))
return true;
return false;
}
static const struct tegra_xusb_lane_ops tegra186_usb2_lane_ops = {
.probe = tegra186_usb2_lane_probe,
.remove = tegra186_usb2_lane_remove,
.enable_phy_sleepwalk = tegra186_utmi_enable_phy_sleepwalk,
.disable_phy_sleepwalk = tegra186_utmi_disable_phy_sleepwalk,
.enable_phy_wake = tegra186_utmi_enable_phy_wake,
.disable_phy_wake = tegra186_utmi_disable_phy_wake,
.remote_wake_detected = tegra186_utmi_phy_remote_wake_detected,
};
static void tegra186_utmi_bias_pad_power_on(struct tegra_xusb_padctl *padctl)
@ -656,10 +1043,128 @@ static void tegra186_usb3_lane_remove(struct tegra_xusb_lane *lane)
kfree(usb3);
}
static int tegra186_usb3_enable_phy_sleepwalk(struct tegra_xusb_lane *lane,
enum usb_device_speed speed)
{
struct tegra_xusb_padctl *padctl = lane->pad->padctl;
unsigned int index = lane->index;
u32 value;
mutex_lock(&padctl->lock);
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_1);
value |= SSPX_ELPG_CLAMP_EN_EARLY(index);
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_1);
usleep_range(100, 200);
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_1);
value |= SSPX_ELPG_CLAMP_EN(index);
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_1);
usleep_range(250, 350);
mutex_unlock(&padctl->lock);
return 0;
}
static int tegra186_usb3_disable_phy_sleepwalk(struct tegra_xusb_lane *lane)
{
struct tegra_xusb_padctl *padctl = lane->pad->padctl;
unsigned int index = lane->index;
u32 value;
mutex_lock(&padctl->lock);
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_1);
value &= ~SSPX_ELPG_CLAMP_EN_EARLY(index);
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_1);
usleep_range(100, 200);
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM_1);
value &= ~SSPX_ELPG_CLAMP_EN(index);
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM_1);
mutex_unlock(&padctl->lock);
return 0;
}
static int tegra186_usb3_enable_phy_wake(struct tegra_xusb_lane *lane)
{
struct tegra_xusb_padctl *padctl = lane->pad->padctl;
unsigned int index = lane->index;
u32 value;
mutex_lock(&padctl->lock);
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM);
value &= ~ALL_WAKE_EVENTS;
value |= SS_PORT_WAKEUP_EVENT(index);
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM);
usleep_range(10, 20);
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM);
value &= ~ALL_WAKE_EVENTS;
value |= SS_PORT_WAKE_INTERRUPT_ENABLE(index);
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM);
mutex_unlock(&padctl->lock);
return 0;
}
static int tegra186_usb3_disable_phy_wake(struct tegra_xusb_lane *lane)
{
struct tegra_xusb_padctl *padctl = lane->pad->padctl;
unsigned int index = lane->index;
u32 value;
mutex_lock(&padctl->lock);
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM);
value &= ~ALL_WAKE_EVENTS;
value &= ~SS_PORT_WAKE_INTERRUPT_ENABLE(index);
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM);
usleep_range(10, 20);
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM);
value &= ~ALL_WAKE_EVENTS;
value |= SS_PORT_WAKEUP_EVENT(index);
padctl_writel(padctl, value, XUSB_PADCTL_ELPG_PROGRAM);
mutex_unlock(&padctl->lock);
return 0;
}
static bool tegra186_usb3_phy_remote_wake_detected(struct tegra_xusb_lane *lane)
{
struct tegra_xusb_padctl *padctl = lane->pad->padctl;
unsigned int index = lane->index;
u32 value;
value = padctl_readl(padctl, XUSB_PADCTL_ELPG_PROGRAM);
if ((value & SS_PORT_WAKE_INTERRUPT_ENABLE(index)) && (value & SS_PORT_WAKEUP_EVENT(index)))
return true;
return false;
}
static const struct tegra_xusb_lane_ops tegra186_usb3_lane_ops = {
.probe = tegra186_usb3_lane_probe,
.remove = tegra186_usb3_lane_remove,
.enable_phy_sleepwalk = tegra186_usb3_enable_phy_sleepwalk,
.disable_phy_sleepwalk = tegra186_usb3_disable_phy_sleepwalk,
.enable_phy_wake = tegra186_usb3_enable_phy_wake,
.disable_phy_wake = tegra186_usb3_disable_phy_wake,
.remote_wake_detected = tegra186_usb3_phy_remote_wake_detected,
};
static int tegra186_usb3_port_enable(struct tegra_xusb_port *port)
{
return 0;
@ -913,7 +1418,9 @@ static struct tegra_xusb_padctl *
tegra186_xusb_padctl_probe(struct device *dev,
const struct tegra_xusb_padctl_soc *soc)
{
struct platform_device *pdev = to_platform_device(dev);
struct tegra186_xusb_padctl *priv;
struct resource *res;
int err;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
@ -923,6 +1430,11 @@ tegra186_xusb_padctl_probe(struct device *dev,
priv->base.dev = dev;
priv->base.soc = soc;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ao");
priv->ao_regs = devm_ioremap_resource(dev, res);
if (IS_ERR(priv->ao_regs))
return ERR_CAST(priv->ao_regs);
err = tegra186_xusb_read_fuse_calibration(priv);
if (err < 0)
return ERR_PTR(err);
@ -930,6 +1442,40 @@ tegra186_xusb_padctl_probe(struct device *dev,
return &priv->base;
}
static void tegra186_xusb_padctl_save(struct tegra_xusb_padctl *padctl)
{
struct tegra186_xusb_padctl *priv = to_tegra186_xusb_padctl(padctl);
priv->context.vbus_id = padctl_readl(padctl, USB2_VBUS_ID);
priv->context.usb2_pad_mux = padctl_readl(padctl, XUSB_PADCTL_USB2_PAD_MUX);
priv->context.usb2_port_cap = padctl_readl(padctl, XUSB_PADCTL_USB2_PORT_CAP);
priv->context.ss_port_cap = padctl_readl(padctl, XUSB_PADCTL_SS_PORT_CAP);
}
static void tegra186_xusb_padctl_restore(struct tegra_xusb_padctl *padctl)
{
struct tegra186_xusb_padctl *priv = to_tegra186_xusb_padctl(padctl);
padctl_writel(padctl, priv->context.usb2_pad_mux, XUSB_PADCTL_USB2_PAD_MUX);
padctl_writel(padctl, priv->context.usb2_port_cap, XUSB_PADCTL_USB2_PORT_CAP);
padctl_writel(padctl, priv->context.ss_port_cap, XUSB_PADCTL_SS_PORT_CAP);
padctl_writel(padctl, priv->context.vbus_id, USB2_VBUS_ID);
}
static int tegra186_xusb_padctl_suspend_noirq(struct tegra_xusb_padctl *padctl)
{
tegra186_xusb_padctl_save(padctl);
return 0;
}
static int tegra186_xusb_padctl_resume_noirq(struct tegra_xusb_padctl *padctl)
{
tegra186_xusb_padctl_restore(padctl);
return 0;
}
static void tegra186_xusb_padctl_remove(struct tegra_xusb_padctl *padctl)
{
}
@ -937,6 +1483,8 @@ static void tegra186_xusb_padctl_remove(struct tegra_xusb_padctl *padctl)
static const struct tegra_xusb_padctl_ops tegra186_xusb_padctl_ops = {
.probe = tegra186_xusb_padctl_probe,
.remove = tegra186_xusb_padctl_remove,
.suspend_noirq = tegra186_xusb_padctl_suspend_noirq,
.resume_noirq = tegra186_xusb_padctl_resume_noirq,
.vbus_override = tegra186_xusb_padctl_vbus_override,
};


@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014-2016, NVIDIA CORPORATION. All rights reserved.
* Copyright (c) 2014-2020, NVIDIA CORPORATION. All rights reserved.
*/
#include <linux/delay.h>
@ -321,11 +321,17 @@ static void tegra_xusb_lane_program(struct tegra_xusb_lane *lane)
if (soc->num_funcs < 2)
return;
if (lane->pad->ops->iddq_enable)
lane->pad->ops->iddq_enable(lane);
/* choose function */
value = padctl_readl(padctl, soc->offset);
value &= ~(soc->mask << soc->shift);
value |= lane->function << soc->shift;
padctl_writel(padctl, value, soc->offset);
if (lane->pad->ops->iddq_disable)
lane->pad->ops->iddq_disable(lane);
}
static void tegra_xusb_pad_program(struct tegra_xusb_pad *pad)
@ -376,7 +382,7 @@ static int tegra_xusb_setup_pads(struct tegra_xusb_padctl *padctl)
return 0;
}
static bool tegra_xusb_lane_check(struct tegra_xusb_lane *lane,
bool tegra_xusb_lane_check(struct tegra_xusb_lane *lane,
const char *function)
{
const char *func = lane->soc->funcs[lane->function];
@ -1267,10 +1273,36 @@ static int tegra_xusb_padctl_remove(struct platform_device *pdev)
return err;
}
static int tegra_xusb_padctl_suspend_noirq(struct device *dev)
{
struct tegra_xusb_padctl *padctl = dev_get_drvdata(dev);
if (padctl->soc && padctl->soc->ops && padctl->soc->ops->suspend_noirq)
return padctl->soc->ops->suspend_noirq(padctl);
return 0;
}
static int tegra_xusb_padctl_resume_noirq(struct device *dev)
{
struct tegra_xusb_padctl *padctl = dev_get_drvdata(dev);
if (padctl->soc && padctl->soc->ops && padctl->soc->ops->resume_noirq)
return padctl->soc->ops->resume_noirq(padctl);
return 0;
}
static const struct dev_pm_ops tegra_xusb_padctl_pm_ops = {
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(tegra_xusb_padctl_suspend_noirq,
tegra_xusb_padctl_resume_noirq)
};
static struct platform_driver tegra_xusb_padctl_driver = {
.driver = {
.name = "tegra-xusb-padctl",
.of_match_table = tegra_xusb_padctl_of_match,
.pm = &tegra_xusb_padctl_pm_ops,
},
.probe = tegra_xusb_padctl_probe,
.remove = tegra_xusb_padctl_remove,
@ -1337,6 +1369,62 @@ int tegra_xusb_padctl_hsic_set_idle(struct tegra_xusb_padctl *padctl,
}
EXPORT_SYMBOL_GPL(tegra_xusb_padctl_hsic_set_idle);
int tegra_xusb_padctl_enable_phy_sleepwalk(struct tegra_xusb_padctl *padctl, struct phy *phy,
enum usb_device_speed speed)
{
struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
if (lane->pad->ops->enable_phy_sleepwalk)
return lane->pad->ops->enable_phy_sleepwalk(lane, speed);
return -EOPNOTSUPP;
}
EXPORT_SYMBOL_GPL(tegra_xusb_padctl_enable_phy_sleepwalk);
int tegra_xusb_padctl_disable_phy_sleepwalk(struct tegra_xusb_padctl *padctl, struct phy *phy)
{
struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
if (lane->pad->ops->disable_phy_sleepwalk)
return lane->pad->ops->disable_phy_sleepwalk(lane);
return -EOPNOTSUPP;
}
EXPORT_SYMBOL_GPL(tegra_xusb_padctl_disable_phy_sleepwalk);
int tegra_xusb_padctl_enable_phy_wake(struct tegra_xusb_padctl *padctl, struct phy *phy)
{
struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
if (lane->pad->ops->enable_phy_wake)
return lane->pad->ops->enable_phy_wake(lane);
return -EOPNOTSUPP;
}
EXPORT_SYMBOL_GPL(tegra_xusb_padctl_enable_phy_wake);
int tegra_xusb_padctl_disable_phy_wake(struct tegra_xusb_padctl *padctl, struct phy *phy)
{
struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
if (lane->pad->ops->disable_phy_wake)
return lane->pad->ops->disable_phy_wake(lane);
return -EOPNOTSUPP;
}
EXPORT_SYMBOL_GPL(tegra_xusb_padctl_disable_phy_wake);
bool tegra_xusb_padctl_remote_wake_detected(struct tegra_xusb_padctl *padctl, struct phy *phy)
{
struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
if (lane->pad->ops->remote_wake_detected)
return lane->pad->ops->remote_wake_detected(lane);
return false;
}
EXPORT_SYMBOL_GPL(tegra_xusb_padctl_remote_wake_detected);
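The exported wrappers above all follow one pattern: fetch the lane from the phy, call the pad's callback if the per-SoC ops table implements it, and otherwise fall back to -EOPNOTSUPP (or false for the boolean predicate). A small userspace sketch of this optional-callback dispatch, with hypothetical names standing in for the tegra_xusb_lane_ops machinery:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

struct lane;

/* Hypothetical ops table mirroring tegra_xusb_lane_ops: every
 * callback is optional and may be left NULL by a given SoC. */
struct lane_ops {
	int (*enable_wake)(struct lane *lane);
	bool (*wake_detected)(struct lane *lane);
};

struct lane {
	const struct lane_ops *ops;
};

/* Unimplemented int-returning ops report -EOPNOTSUPP;
 * unimplemented predicates report false. */
static int lane_enable_wake(struct lane *lane)
{
	if (lane->ops->enable_wake)
		return lane->ops->enable_wake(lane);
	return -EOPNOTSUPP;
}

static bool lane_wake_detected(struct lane *lane)
{
	if (lane->ops->wake_detected)
		return lane->ops->wake_detected(lane);
	return false;
}

/* Trivial callback used to populate an ops table in the test. */
static int always_ok(struct lane *lane) { (void)lane; return 0; }
```

This shape keeps individual pad implementations free to leave any callback unset; the tegra186 lane ops in this series fill in all five sleepwalk/wake hooks, but other SoCs need not.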
int tegra_xusb_padctl_usb3_set_lfps_detect(struct tegra_xusb_padctl *padctl,
unsigned int port, bool enable)
{


@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2014-2015, NVIDIA CORPORATION. All rights reserved.
* Copyright (c) 2014-2020, NVIDIA CORPORATION. All rights reserved.
* Copyright (c) 2015, Google Inc.
*/
@ -11,6 +11,7 @@
#include <linux/mutex.h>
#include <linux/workqueue.h>
#include <linux/usb/ch9.h>
#include <linux/usb/otg.h>
#include <linux/usb/role.h>
@ -35,6 +36,10 @@ struct tegra_xusb_lane_soc {
const char * const *funcs;
unsigned int num_funcs;
struct {
unsigned int misc_ctl2;
} regs;
};
struct tegra_xusb_lane {
@ -126,8 +131,17 @@ struct tegra_xusb_lane_ops {
struct device_node *np,
unsigned int index);
void (*remove)(struct tegra_xusb_lane *lane);
void (*iddq_enable)(struct tegra_xusb_lane *lane);
void (*iddq_disable)(struct tegra_xusb_lane *lane);
int (*enable_phy_sleepwalk)(struct tegra_xusb_lane *lane, enum usb_device_speed speed);
int (*disable_phy_sleepwalk)(struct tegra_xusb_lane *lane);
int (*enable_phy_wake)(struct tegra_xusb_lane *lane);
int (*disable_phy_wake)(struct tegra_xusb_lane *lane);
bool (*remote_wake_detected)(struct tegra_xusb_lane *lane);
};
bool tegra_xusb_lane_check(struct tegra_xusb_lane *lane, const char *function);
/*
* pads
*/
@ -230,7 +244,7 @@ struct tegra_xusb_pcie_pad {
struct reset_control *rst;
struct clk *pll;
unsigned int enable;
bool enable;
};
static inline struct tegra_xusb_pcie_pad *
@ -245,7 +259,7 @@ struct tegra_xusb_sata_pad {
struct reset_control *rst;
struct clk *pll;
unsigned int enable;
bool enable;
};
static inline struct tegra_xusb_sata_pad *
@ -388,6 +402,8 @@ struct tegra_xusb_padctl_ops {
const struct tegra_xusb_padctl_soc *soc);
void (*remove)(struct tegra_xusb_padctl *padctl);
int (*suspend_noirq)(struct tegra_xusb_padctl *padctl);
int (*resume_noirq)(struct tegra_xusb_padctl *padctl);
int (*usb3_save_context)(struct tegra_xusb_padctl *padctl,
unsigned int index);
int (*hsic_set_idle)(struct tegra_xusb_padctl *padctl,


@ -2,7 +2,7 @@
obj-${CONFIG_USB4} := thunderbolt.o
thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o
thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o
thunderbolt-objs += nvm.o retimer.o quirks.o
thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o
thunderbolt-${CONFIG_ACPI} += acpi.o
thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o


@ -180,3 +180,209 @@ bool tb_acpi_is_xdomain_allowed(void)
return osc_sb_native_usb4_control & OSC_USB_XDOMAIN;
return true;
}
/* UUID for retimer _DSM: e0053122-795b-4122-8a5e-57be1d26acb3 */
static const guid_t retimer_dsm_guid =
GUID_INIT(0xe0053122, 0x795b, 0x4122,
0x8a, 0x5e, 0x57, 0xbe, 0x1d, 0x26, 0xac, 0xb3);
#define RETIMER_DSM_QUERY_ONLINE_STATE 1
#define RETIMER_DSM_SET_ONLINE_STATE 2
static int tb_acpi_retimer_set_power(struct tb_port *port, bool power)
{
struct usb4_port *usb4 = port->usb4;
union acpi_object argv4[2];
struct acpi_device *adev;
union acpi_object *obj;
int ret;
if (!usb4->can_offline)
return 0;
adev = ACPI_COMPANION(&usb4->dev);
if (WARN_ON(!adev))
return 0;
/* Check if we are already powered on (and in correct mode) */
obj = acpi_evaluate_dsm_typed(adev->handle, &retimer_dsm_guid, 1,
RETIMER_DSM_QUERY_ONLINE_STATE, NULL,
ACPI_TYPE_INTEGER);
if (!obj) {
tb_port_warn(port, "ACPI: query online _DSM failed\n");
return -EIO;
}
ret = obj->integer.value;
ACPI_FREE(obj);
if (power == ret)
return 0;
tb_port_dbg(port, "ACPI: calling _DSM to power %s retimers\n",
power ? "on" : "off");
argv4[0].type = ACPI_TYPE_PACKAGE;
argv4[0].package.count = 1;
argv4[0].package.elements = &argv4[1];
argv4[1].integer.type = ACPI_TYPE_INTEGER;
argv4[1].integer.value = power;
obj = acpi_evaluate_dsm_typed(adev->handle, &retimer_dsm_guid, 1,
RETIMER_DSM_SET_ONLINE_STATE, argv4,
ACPI_TYPE_INTEGER);
if (!obj) {
tb_port_warn(port,
"ACPI: set online state _DSM evaluation failed\n");
return -EIO;
}
ret = obj->integer.value;
ACPI_FREE(obj);
if (ret >= 0) {
if (power)
return ret == 1 ? 0 : -EBUSY;
return 0;
}
tb_port_warn(port, "ACPI: set online state _DSM failed with error %d\n", ret);
return -EIO;
}
/**
* tb_acpi_power_on_retimers() - Call platform to power on retimers
* @port: USB4 port
*
* Calls platform to turn on power to all retimers behind this USB4
* port. After this function returns successfully the caller can
* continue with the normal retimer flows (as specified in the USB4
* spec). Note that if this returns %-EBUSY it means the Type-C port is
* in non-USB4/TBT mode (there is a non-USB4/TBT device connected).
*
* This should only be called if the USB4/TBT link is not up.
*
* Returns %0 on success.
*/
int tb_acpi_power_on_retimers(struct tb_port *port)
{
return tb_acpi_retimer_set_power(port, true);
}
/**
* tb_acpi_power_off_retimers() - Call platform to power off retimers
* @port: USB4 port
*
* This is the opposite of tb_acpi_power_on_retimers(). After returning
* successfully the normal operations with the @port can continue.
*
* Returns %0 on success.
*/
int tb_acpi_power_off_retimers(struct tb_port *port)
{
return tb_acpi_retimer_set_power(port, false);
}
static bool tb_acpi_bus_match(struct device *dev)
{
return tb_is_switch(dev) || tb_is_usb4_port_device(dev);
}
static struct acpi_device *tb_acpi_find_port(struct acpi_device *adev,
const struct tb_port *port)
{
struct acpi_device *port_adev;
if (!adev)
return NULL;
/*
* Device routers exist under the downstream-facing USB4 port
* of the parent router. Their _ADR is always 0.
*/
list_for_each_entry(port_adev, &adev->children, node) {
if (acpi_device_adr(port_adev) == port->port)
return port_adev;
}
return NULL;
}
static struct acpi_device *tb_acpi_switch_find_companion(struct tb_switch *sw)
{
struct acpi_device *adev = NULL;
struct tb_switch *parent_sw;
parent_sw = tb_switch_parent(sw);
if (parent_sw) {
struct tb_port *port = tb_port_at(tb_route(sw), parent_sw);
struct acpi_device *port_adev;
port_adev = tb_acpi_find_port(ACPI_COMPANION(&parent_sw->dev), port);
if (port_adev)
adev = acpi_find_child_device(port_adev, 0, false);
} else {
struct tb_nhi *nhi = sw->tb->nhi;
struct acpi_device *parent_adev;
parent_adev = ACPI_COMPANION(&nhi->pdev->dev);
if (parent_adev)
adev = acpi_find_child_device(parent_adev, 0, false);
}
return adev;
}
static struct acpi_device *tb_acpi_find_companion(struct device *dev)
{
/*
* The Thunderbolt/USB4 hierarchy looks like the following:
*
* Device (NHI)
* Device (HR) // Host router _ADR == 0
* Device (DFP0) // Downstream port _ADR == lane 0 adapter
* Device (DR) // Device router _ADR == 0
* Device (UFP) // Upstream port _ADR == lane 0 adapter
* Device (DFP1) // Downstream port _ADR == lane 0 adapter number
*
* At the moment we bind the host router to the corresponding
* Linux device.
*/
if (tb_is_switch(dev))
return tb_acpi_switch_find_companion(tb_to_switch(dev));
else if (tb_is_usb4_port_device(dev))
return tb_acpi_find_port(ACPI_COMPANION(dev->parent),
tb_to_usb4_port_device(dev)->port);
return NULL;
}
static void tb_acpi_setup(struct device *dev)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
if (!adev || !usb4)
return;
if (acpi_check_dsm(adev->handle, &retimer_dsm_guid, 1,
BIT(RETIMER_DSM_QUERY_ONLINE_STATE) |
BIT(RETIMER_DSM_SET_ONLINE_STATE)))
usb4->can_offline = true;
}
static struct acpi_bus_type tb_acpi_bus = {
.name = "thunderbolt",
.match = tb_acpi_bus_match,
.find_companion = tb_acpi_find_companion,
.setup = tb_acpi_setup,
};
int tb_acpi_init(void)
{
return register_acpi_bus_type(&tb_acpi_bus);
}
void tb_acpi_exit(void)
{
unregister_acpi_bus_type(&tb_acpi_bus);
}


@ -299,15 +299,13 @@ static int dma_port_request(struct tb_dma_port *dma, u32 in,
return status_to_errno(out);
}
static int dma_port_flash_read_block(struct tb_dma_port *dma, u32 address,
void *buf, u32 size)
static int dma_port_flash_read_block(void *data, unsigned int dwaddress,
void *buf, size_t dwords)
{
struct tb_dma_port *dma = data;
struct tb_switch *sw = dma->sw;
u32 in, dwaddress, dwords;
int ret;
dwaddress = address / 4;
dwords = size / 4;
u32 in;
in = MAIL_IN_CMD_FLASH_READ << MAIL_IN_CMD_SHIFT;
if (dwords < MAIL_DATA_DWORDS)
@ -323,14 +321,13 @@ static int dma_port_flash_read_block(struct tb_dma_port *dma, u32 address,
dma->base + MAIL_DATA, dwords, DMA_PORT_TIMEOUT);
}
static int dma_port_flash_write_block(struct tb_dma_port *dma, u32 address,
const void *buf, u32 size)
static int dma_port_flash_write_block(void *data, unsigned int dwaddress,
const void *buf, size_t dwords)
{
struct tb_dma_port *dma = data;
struct tb_switch *sw = dma->sw;
u32 in, dwaddress, dwords;
int ret;
dwords = size / 4;
u32 in;
/* Write the block to MAIL_DATA registers */
ret = dma_port_write(sw->tb->ctl, buf, tb_route(sw), dma->port,
@ -341,12 +338,8 @@ static int dma_port_flash_write_block(struct tb_dma_port *dma, u32 address,
in = MAIL_IN_CMD_FLASH_WRITE << MAIL_IN_CMD_SHIFT;
/* CSS header write is always done to the same magic address */
if (address >= DMA_PORT_CSS_ADDRESS) {
dwaddress = DMA_PORT_CSS_ADDRESS;
if (dwaddress >= DMA_PORT_CSS_ADDRESS)
in |= MAIL_IN_CSS;
} else {
dwaddress = address / 4;
}
in |= ((dwords - 1) << MAIL_IN_DWORDS_SHIFT) & MAIL_IN_DWORDS_MASK;
in |= (dwaddress << MAIL_IN_ADDRESS_SHIFT) & MAIL_IN_ADDRESS_MASK;
@ -365,36 +358,8 @@ static int dma_port_flash_write_block(struct tb_dma_port *dma, u32 address,
int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
void *buf, size_t size)
{
unsigned int retries = DMA_PORT_RETRIES;
do {
unsigned int offset;
size_t nbytes;
int ret;
offset = address & 3;
nbytes = min_t(size_t, size + offset, MAIL_DATA_DWORDS * 4);
ret = dma_port_flash_read_block(dma, address, dma->buf,
ALIGN(nbytes, 4));
if (ret) {
if (ret == -ETIMEDOUT) {
if (retries--)
continue;
ret = -EIO;
}
return ret;
}
nbytes -= offset;
memcpy(buf, dma->buf + offset, nbytes);
size -= nbytes;
address += nbytes;
buf += nbytes;
} while (size > 0);
return 0;
return tb_nvm_read_data(address, buf, size, DMA_PORT_RETRIES,
dma_port_flash_read_block, dma);
}
/**
@ -411,40 +376,11 @@ int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
int dma_port_flash_write(struct tb_dma_port *dma, unsigned int address,
const void *buf, size_t size)
{
unsigned int retries = DMA_PORT_RETRIES;
unsigned int offset;
if (address >= DMA_PORT_CSS_ADDRESS && size > DMA_PORT_CSS_MAX_SIZE)
return -E2BIG;
if (address >= DMA_PORT_CSS_ADDRESS) {
offset = 0;
if (size > DMA_PORT_CSS_MAX_SIZE)
return -E2BIG;
} else {
offset = address & 3;
address = address & ~3;
}
do {
u32 nbytes = min_t(u32, size, MAIL_DATA_DWORDS * 4);
int ret;
memcpy(dma->buf + offset, buf, nbytes);
ret = dma_port_flash_write_block(dma, address, buf, nbytes);
if (ret) {
if (ret == -ETIMEDOUT) {
if (retries--)
continue;
ret = -EIO;
}
return ret;
}
size -= nbytes;
address += nbytes;
buf += nbytes;
} while (size > 0);
return 0;
return tb_nvm_write_data(address, buf, size, DMA_PORT_RETRIES,
dma_port_flash_write_block, dma);
}
/**


@ -881,11 +881,12 @@ int tb_domain_init(void)
int ret;
tb_test_init();
tb_debugfs_init();
tb_acpi_init();
ret = tb_xdomain_init();
if (ret)
goto err_debugfs;
goto err_acpi;
ret = bus_register(&tb_bus_type);
if (ret)
goto err_xdomain;
@ -894,7 +895,8 @@ int tb_domain_init(void)
err_xdomain:
tb_xdomain_exit();
err_debugfs:
err_acpi:
tb_acpi_exit();
tb_debugfs_exit();
tb_test_exit();
@ -907,6 +909,7 @@ void tb_domain_exit(void)
ida_destroy(&tb_domain_ida);
tb_nvm_exit();
tb_xdomain_exit();
tb_acpi_exit();
tb_debugfs_exit();
tb_test_exit();
}


@ -214,7 +214,10 @@ static u32 tb_crc32(void *data, size_t len)
return ~__crc32c_le(~0, data, len);
}
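tb_crc32() above computes the DROM data checksum as CRC-32C (Castagnoli): an all-ones initial value, the kernel's __crc32c_le(), and a final inversion. For reference, a standalone bitwise equivalent using the reflected polynomial 0x82F63B78 (the kernel uses a table/instruction-accelerated version, but the result is the same):

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
 * Computes the same value as ~__crc32c_le(~0, data, len). */
static uint32_t crc32c(const void *data, size_t len)
{
	const uint8_t *p = data;
	uint32_t crc = ~0u;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			/* -(crc & 1) is all-ones when the LSB is set */
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
	}
	return ~crc;
}
```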
#define TB_DROM_DATA_START 13
#define TB_DROM_HEADER_SIZE 22
#define USB4_DROM_HEADER_SIZE 16
struct tb_drom_header {
/* BYTE 0 */
u8 uid_crc8; /* checksum for uid */
@ -224,9 +227,9 @@ struct tb_drom_header {
u32 data_crc32; /* checksum for data_len bytes starting at byte 13 */
/* BYTE 13 */
u8 device_rom_revision; /* should be <= 1 */
u16 data_len:10;
u8 __unknown1:6;
/* BYTES 16-21 */
u16 data_len:12;
u8 reserved:4;
/* BYTES 16-21 - Only for TBT DROM, nonexistent in USB4 DROM */
u16 vendor_id;
u16 model_id;
u8 model_rev;
@ -401,10 +404,10 @@ static int tb_drom_parse_entry_port(struct tb_switch *sw,
*
* Drom must have been copied to sw->drom.
*/
static int tb_drom_parse_entries(struct tb_switch *sw)
static int tb_drom_parse_entries(struct tb_switch *sw, size_t header_size)
{
struct tb_drom_header *header = (void *) sw->drom;
u16 pos = sizeof(*header);
u16 pos = header_size;
u16 drom_size = header->data_len + TB_DROM_DATA_START;
int res;
@ -566,7 +569,7 @@ static int tb_drom_parse(struct tb_switch *sw)
header->data_crc32, crc);
}
return tb_drom_parse_entries(sw);
return tb_drom_parse_entries(sw, TB_DROM_HEADER_SIZE);
}
static int usb4_drom_parse(struct tb_switch *sw)
@ -583,7 +586,7 @@ static int usb4_drom_parse(struct tb_switch *sw)
return -EINVAL;
}
return tb_drom_parse_entries(sw);
return tb_drom_parse_entries(sw, USB4_DROM_HEADER_SIZE);
}
/**


@ -1677,14 +1677,18 @@ static void icm_icl_rtd3_veto(struct tb *tb, const struct icm_pkg_header *hdr)
static bool icm_tgl_is_supported(struct tb *tb)
{
u32 val;
unsigned long end = jiffies + msecs_to_jiffies(10);
/*
* If the firmware is not running, use the software CM. This platform
* should fully support both.
*/
val = ioread32(tb->nhi->iobase + REG_FW_STS);
return !!(val & REG_FW_STS_NVM_AUTH_DONE);
do {
u32 val;
val = ioread32(tb->nhi->iobase + REG_FW_STS);
if (val & REG_FW_STS_NVM_AUTH_DONE)
return true;
usleep_range(100, 500);
} while (time_before(jiffies, end));
return false;
}
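The rewritten icm_tgl_is_supported() above replaces a single register read with a bounded polling loop: keep re-reading REG_FW_STS until NVM_AUTH_DONE is set or roughly 10 ms of jiffies elapse, sleeping briefly between attempts. The shape of that loop, sketched in userspace with a generic predicate and an attempt budget standing in for the jiffies deadline (names hypothetical):

```c
#include <stdbool.h>

/* Poll a predicate up to max_tries times, mirroring the driver's
 * read -> test -> sleep -> deadline-check loop. */
static bool poll_until(bool (*ready)(void *ctx), void *ctx, int max_tries)
{
	do {
		if (ready(ctx))
			return true;
		/* in the driver: usleep_range(100, 500); */
	} while (--max_tries > 0);
	return false;
}

/* Example predicate: "firmware" becomes ready on the third read. */
static bool ready_on_third(void *ctx)
{
	int *reads = ctx;
	return ++(*reads) >= 3;
}
```

Testing the condition before sleeping, as both the driver and this sketch do, means a firmware that is already ready costs no delay at all.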
static void icm_handle_notification(struct work_struct *work)
@ -2505,6 +2509,8 @@ struct tb *icm_probe(struct tb_nhi *nhi)
case PCI_DEVICE_ID_INTEL_TGL_NHI1:
case PCI_DEVICE_ID_INTEL_TGL_H_NHI0:
case PCI_DEVICE_ID_INTEL_TGL_H_NHI1:
case PCI_DEVICE_ID_INTEL_ADL_NHI0:
case PCI_DEVICE_ID_INTEL_ADL_NHI1:
icm->is_supported = icm_tgl_is_supported;
icm->driver_ready = icm_icl_driver_ready;
icm->set_uuid = icm_icl_set_uuid;


@ -208,8 +208,8 @@ static int tb_lc_set_wake_one(struct tb_switch *sw, unsigned int offset,
if (ret)
return ret;
ctrl &= ~(TB_LC_SX_CTRL_WOC | TB_LC_SX_CTRL_WOD | TB_LC_SX_CTRL_WOP |
TB_LC_SX_CTRL_WOU4);
ctrl &= ~(TB_LC_SX_CTRL_WOC | TB_LC_SX_CTRL_WOD | TB_LC_SX_CTRL_WODPC |
TB_LC_SX_CTRL_WODPD | TB_LC_SX_CTRL_WOP | TB_LC_SX_CTRL_WOU4);
if (flags & TB_WAKE_ON_CONNECT)
ctrl |= TB_LC_SX_CTRL_WOC | TB_LC_SX_CTRL_WOD;
@ -217,6 +217,8 @@ static int tb_lc_set_wake_one(struct tb_switch *sw, unsigned int offset,
ctrl |= TB_LC_SX_CTRL_WOU4;
if (flags & TB_WAKE_ON_PCIE)
ctrl |= TB_LC_SX_CTRL_WOP;
if (flags & TB_WAKE_ON_DP)
ctrl |= TB_LC_SX_CTRL_WODPC | TB_LC_SX_CTRL_WODPD;
return tb_sw_write(sw, &ctrl, TB_CFG_SWITCH, offset + TB_LC_SX_CTRL, 1);
}

View File

@ -17,7 +17,6 @@
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/property.h>
#include <linux/platform_data/x86/apple.h>
#include "nhi.h"
#include "nhi_regs.h"
@ -1127,69 +1126,6 @@ static bool nhi_imr_valid(struct pci_dev *pdev)
return true;
}
/*
* During suspend the Thunderbolt controller is reset and all PCIe
* tunnels are lost. The NHI driver will try to reestablish all tunnels
* during resume. This adds device links between the tunneled PCIe
* downstream ports and the NHI so that the device core will make sure
* NHI is resumed first before the rest.
*/
static void tb_apple_add_links(struct tb_nhi *nhi)
{
struct pci_dev *upstream, *pdev;
if (!x86_apple_machine)
return;
switch (nhi->pdev->device) {
case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
break;
default:
return;
}
upstream = pci_upstream_bridge(nhi->pdev);
while (upstream) {
if (!pci_is_pcie(upstream))
return;
if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM)
break;
upstream = pci_upstream_bridge(upstream);
}
if (!upstream)
return;
/*
* For each hotplug downstream port, create a device link back to the
* NHI so that PCIe tunnels can be re-established after sleep.
*/
for_each_pci_bridge(pdev, upstream->subordinate) {
const struct device_link *link;
if (!pci_is_pcie(pdev))
continue;
if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
!pdev->is_hotplug_bridge)
continue;
link = device_link_add(&pdev->dev, &nhi->pdev->dev,
DL_FLAG_AUTOREMOVE_SUPPLIER |
DL_FLAG_PM_RUNTIME);
if (link) {
dev_dbg(&nhi->pdev->dev, "created link from %s\n",
dev_name(&pdev->dev));
} else {
dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
dev_name(&pdev->dev));
}
}
}
static struct tb *nhi_select_cm(struct tb_nhi *nhi)
{
struct tb *tb;
@ -1278,9 +1214,6 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return res;
}
tb_apple_add_links(nhi);
tb_acpi_add_links(nhi);
tb = nhi_select_cm(nhi);
if (!tb) {
dev_err(&nhi->pdev->dev,
@ -1400,6 +1333,10 @@ static struct pci_device_id nhi_ids[] = {
.driver_data = (kernel_ulong_t)&icl_nhi_ops },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI1),
.driver_data = (kernel_ulong_t)&icl_nhi_ops },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI0),
.driver_data = (kernel_ulong_t)&icl_nhi_ops },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI1),
.driver_data = (kernel_ulong_t)&icl_nhi_ops },
/* Any USB4 compliant host */
{ PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) },


@ -72,6 +72,8 @@ extern const struct tb_nhi_ops icl_nhi_ops;
#define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE 0x15ea
#define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI 0x15eb
#define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE 0x15ef
#define PCI_DEVICE_ID_INTEL_ADL_NHI0 0x463e
#define PCI_DEVICE_ID_INTEL_ADL_NHI1 0x466d
#define PCI_DEVICE_ID_INTEL_ICL_NHI1 0x8a0d
#define PCI_DEVICE_ID_INTEL_ICL_NHI0 0x8a17
#define PCI_DEVICE_ID_INTEL_TGL_NHI0 0x9a1b


@ -164,6 +164,101 @@ void tb_nvm_free(struct tb_nvm *nvm)
kfree(nvm);
}
/**
* tb_nvm_read_data() - Read data from NVM
* @address: Start address on the flash
* @buf: Buffer where the read data is copied
* @size: Size of the buffer in bytes
* @retries: Number of retries if block read fails
* @read_block: Function that reads block from the flash
* @read_block_data: Data passed to @read_block
*
* This is a generic function that reads data from NVM or an NVM-like
* device.
*
* Returns %0 on success and negative errno otherwise.
*/
int tb_nvm_read_data(unsigned int address, void *buf, size_t size,
unsigned int retries, read_block_fn read_block,
void *read_block_data)
{
do {
unsigned int dwaddress, dwords, offset;
u8 data[NVM_DATA_DWORDS * 4];
size_t nbytes;
int ret;
offset = address & 3;
nbytes = min_t(size_t, size + offset, NVM_DATA_DWORDS * 4);
dwaddress = address / 4;
dwords = ALIGN(nbytes, 4) / 4;
ret = read_block(read_block_data, dwaddress, data, dwords);
if (ret) {
if (ret != -ENODEV && retries--)
continue;
return ret;
}
nbytes -= offset;
memcpy(buf, data + offset, nbytes);
size -= nbytes;
address += nbytes;
buf += nbytes;
} while (size > 0);
return 0;
}
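tb_nvm_read_data() above hides the flash's dword addressing from callers: it rounds the byte address down to a dword boundary, reads whole dwords into a bounce buffer, and copies out only the bytes the caller asked for. A userspace sketch of the same windowing against a memory-backed read_block callback (all names are local to this example, and the retry handling is dropped for brevity):

```c
#include <stdint.h>
#include <string.h>

#define DATA_DWORDS 16  /* stand-in for NVM_DATA_DWORDS */

/* Fake flash: read_block copies dwords out of a flat byte array. */
static uint8_t flash[64];

static int read_block(void *data, unsigned int dwaddress, void *buf,
		      size_t dwords)
{
	(void)data;
	memcpy(buf, flash + dwaddress * 4, dwords * 4);
	return 0;
}

/* Same windowing as tb_nvm_read_data(): align the start address down
 * to a dword, fetch enough whole dwords to cover size + offset bytes,
 * then skip the leading offset bytes when copying to the caller. */
static int nvm_read(unsigned int address, void *buf, size_t size)
{
	while (size > 0) {
		uint8_t data[DATA_DWORDS * 4];
		unsigned int offset = address & 3;
		size_t nbytes = size + offset;
		int ret;

		if (nbytes > sizeof(data))
			nbytes = sizeof(data);

		ret = read_block(NULL, address / 4, data, (nbytes + 3) / 4);
		if (ret)
			return ret;

		nbytes -= offset;
		memcpy(buf, data + offset, nbytes);
		size -= nbytes;
		address += nbytes;
		buf = (uint8_t *)buf + nbytes;
	}
	return 0;
}
```

After the first iteration the address is dword-aligned, so the offset handling only ever matters for the initial window.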
/**
* tb_nvm_write_data() - Write data to NVM
* @address: Start address on the flash
* @buf: Buffer where the data is copied from
* @size: Size of the buffer in bytes
* @retries: Number of retries if the block write fails
* @write_block: Function that writes block to the flash
* @write_block_data: Data passed to @write_block
*
* This is a generic function that writes data to NVM or an NVM-like device.
*
* Returns %0 on success and negative errno otherwise.
*/
int tb_nvm_write_data(unsigned int address, const void *buf, size_t size,
unsigned int retries, write_block_fn write_block,
void *write_block_data)
{
do {
unsigned int offset, dwaddress;
u8 data[NVM_DATA_DWORDS * 4];
size_t nbytes;
int ret;
offset = address & 3;
nbytes = min_t(u32, size + offset, NVM_DATA_DWORDS * 4);
memcpy(data + offset, buf, nbytes);
dwaddress = address / 4;
ret = write_block(write_block_data, dwaddress, data, nbytes / 4);
if (ret) {
if (ret == -ETIMEDOUT) {
if (retries--)
continue;
ret = -EIO;
}
return ret;
}
size -= nbytes;
address += nbytes;
buf += nbytes;
} while (size > 0);
return 0;
}
void tb_nvm_exit(void)
{
ida_destroy(&nvm_ida);


@ -367,7 +367,7 @@ static void __tb_path_deallocate_nfc(struct tb_path *path, int first_hop)
int i, res;
for (i = first_hop; i < path->path_length; i++) {
res = tb_port_add_nfc_credits(path->hops[i].in_port,
-path->nfc_credits);
-path->hops[i].nfc_credits);
if (res)
tb_port_warn(path->hops[i].in_port,
"nfc credits deallocation failed for hop %d\n",
@ -502,7 +502,7 @@ int tb_path_activate(struct tb_path *path)
/* Add non flow controlled credits. */
for (i = path->path_length - 1; i >= 0; i--) {
res = tb_port_add_nfc_credits(path->hops[i].in_port,
path->nfc_credits);
path->hops[i].nfc_credits);
if (res) {
__tb_path_deallocate_nfc(path, i);
goto err;


@ -12,7 +12,17 @@ static void quirk_force_power_link(struct tb_switch *sw)
sw->quirks |= QUIRK_FORCE_POWER_LINK_CONTROLLER;
}
static void quirk_dp_credit_allocation(struct tb_switch *sw)
{
if (sw->credit_allocation && sw->min_dp_main_credits == 56) {
sw->min_dp_main_credits = 18;
tb_sw_dbg(sw, "quirked DP main: %u\n", sw->min_dp_main_credits);
}
}
struct tb_quirk {
u16 hw_vendor_id;
u16 hw_device_id;
u16 vendor;
u16 device;
void (*hook)(struct tb_switch *sw);
@ -20,7 +30,13 @@ struct tb_quirk {
static const struct tb_quirk tb_quirks[] = {
/* Dell WD19TB supports self-authentication on unplug */
{ 0x00d4, 0xb070, quirk_force_power_link },
{ 0x0000, 0x0000, 0x00d4, 0xb070, quirk_force_power_link },
{ 0x0000, 0x0000, 0x00d4, 0xb071, quirk_force_power_link },
/*
* Intel Goshen Ridge NVM 27 and before report wrong number of
* DP buffers.
*/
{ 0x8087, 0x0b26, 0x0000, 0x0000, quirk_dp_credit_allocation },
};
/**
@ -36,7 +52,15 @@ void tb_check_quirks(struct tb_switch *sw)
for (i = 0; i < ARRAY_SIZE(tb_quirks); i++) {
const struct tb_quirk *q = &tb_quirks[i];
if (sw->device == q->device && sw->vendor == q->vendor)
q->hook(sw);
if (q->hw_vendor_id && q->hw_vendor_id != sw->config.vendor_id)
continue;
if (q->hw_device_id && q->hw_device_id != sw->config.device_id)
continue;
if (q->vendor && q->vendor != sw->vendor)
continue;
if (q->device && q->device != sw->device)
continue;
q->hook(sw);
}
}


@ -103,6 +103,7 @@ static int tb_retimer_nvm_validate_and_write(struct tb_retimer *rt)
unsigned int image_size, hdr_size;
const u8 *buf = rt->nvm->buf;
u16 ds_size, device;
int ret;
image_size = rt->nvm->buf_data_size;
if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE)
@ -140,8 +141,43 @@ static int tb_retimer_nvm_validate_and_write(struct tb_retimer *rt)
buf += hdr_size;
image_size -= hdr_size;
return usb4_port_retimer_nvm_write(rt->port, rt->index, 0, buf,
image_size);
ret = usb4_port_retimer_nvm_write(rt->port, rt->index, 0, buf,
image_size);
if (!ret)
rt->nvm->flushed = true;
return ret;
}
static int tb_retimer_nvm_authenticate(struct tb_retimer *rt, bool auth_only)
{
u32 status;
int ret;
if (auth_only) {
ret = usb4_port_retimer_nvm_set_offset(rt->port, rt->index, 0);
if (ret)
return ret;
}
ret = usb4_port_retimer_nvm_authenticate(rt->port, rt->index);
if (ret)
return ret;
usleep_range(100, 150);
/*
* Check the status now if we can still access the retimer. It
* is expected that the read below fails.
*/
ret = usb4_port_retimer_nvm_authenticate_status(rt->port, rt->index,
&status);
if (!ret) {
rt->auth_status = status;
return status ? -EINVAL : 0;
}
return 0;
}
static ssize_t device_show(struct device *dev, struct device_attribute *attr,
@ -176,8 +212,7 @@ static ssize_t nvm_authenticate_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct tb_retimer *rt = tb_to_retimer(dev);
bool val;
int ret;
int val, ret;
pm_runtime_get_sync(&rt->dev);
@ -191,7 +226,7 @@ static ssize_t nvm_authenticate_store(struct device *dev,
goto exit_unlock;
}
ret = kstrtobool(buf, &val);
ret = kstrtoint(buf, 10, &val);
if (ret)
goto exit_unlock;
@ -199,16 +234,22 @@ static ssize_t nvm_authenticate_store(struct device *dev,
rt->auth_status = 0;
if (val) {
if (!rt->nvm->buf) {
ret = -EINVAL;
goto exit_unlock;
if (val == AUTHENTICATE_ONLY) {
ret = tb_retimer_nvm_authenticate(rt, true);
} else {
if (!rt->nvm->flushed) {
if (!rt->nvm->buf) {
ret = -EINVAL;
goto exit_unlock;
}
ret = tb_retimer_nvm_validate_and_write(rt);
if (ret || val == WRITE_ONLY)
goto exit_unlock;
}
if (val == WRITE_AND_AUTHENTICATE)
ret = tb_retimer_nvm_authenticate(rt, false);
}
ret = tb_retimer_nvm_validate_and_write(rt);
if (ret)
goto exit_unlock;
ret = usb4_port_retimer_nvm_authenticate(rt->port, rt->index);
}
exit_unlock:
@ -283,11 +324,13 @@ struct device_type tb_retimer_type = {
static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status)
{
struct usb4_port *usb4;
struct tb_retimer *rt;
u32 vendor, device;
int ret;
if (!port->cap_usb4)
usb4 = port->usb4;
if (!usb4)
return -EINVAL;
ret = usb4_port_retimer_read(port, index, USB4_SB_VENDOR_ID, &vendor,
@ -331,7 +374,7 @@ static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status)
rt->port = port;
rt->tb = port->sw->tb;
rt->dev.parent = &port->sw->dev;
rt->dev.parent = &usb4->dev;
rt->dev.bus = &tb_bus_type;
rt->dev.type = &tb_retimer_type;
dev_set_name(&rt->dev, "%s:%u.%u", dev_name(&port->sw->dev),
@ -389,7 +432,7 @@ static struct tb_retimer *tb_port_find_retimer(struct tb_port *port, u8 index)
struct tb_retimer_lookup lookup = { .port = port, .index = index };
struct device *dev;
dev = device_find_child(&port->sw->dev, &lookup, retimer_match);
dev = device_find_child(&port->usb4->dev, &lookup, retimer_match);
if (dev)
return tb_to_retimer(dev);
@ -399,19 +442,18 @@ static struct tb_retimer *tb_port_find_retimer(struct tb_port *port, u8 index)
/**
* tb_retimer_scan() - Scan for on-board retimers under port
* @port: USB4 port to scan
* @add: If true also registers found retimers
*
* Tries to enumerate on-board retimers connected to @port. Found
* retimers are registered as children of @port. Does not scan for cable
* retimers for now.
* Brings the sideband into a state where retimers can be accessed.
* Then tries to enumerate on-board retimers connected to @port. Found
* retimers are registered as children of @port if @add is set. Does
* not scan for cable retimers for now.
*/
int tb_retimer_scan(struct tb_port *port)
int tb_retimer_scan(struct tb_port *port, bool add)
{
u32 status[TB_MAX_RETIMER_INDEX + 1] = {};
int ret, i, last_idx = 0;
if (!port->cap_usb4)
return 0;
/*
* Send broadcast RT to make sure retimer indices facing this
* port are set.
@ -420,6 +462,13 @@ int tb_retimer_scan(struct tb_port *port)
if (ret)
return ret;
/*
* Enable sideband channel for each retimer. We can do this
* regardless of whether there is a device connected or not.
*/
for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++)
usb4_port_retimer_set_inbound_sbtx(port, i);
/*
* Before doing anything else, read the authentication status.
* If the retimer has it set, store it for the new retimer
@ -451,10 +500,10 @@ int tb_retimer_scan(struct tb_port *port)
rt = tb_port_find_retimer(port, i);
if (rt) {
put_device(&rt->dev);
} else {
} else if (add) {
ret = tb_retimer_add(port, i, status[i]);
if (ret && ret != -EOPNOTSUPP)
return ret;
break;
}
}
@ -479,7 +528,10 @@ static int remove_retimer(struct device *dev, void *data)
*/
void tb_retimer_remove_all(struct tb_port *port)
{
if (port->cap_usb4)
device_for_each_child_reverse(&port->sw->dev, port,
struct usb4_port *usb4;
usb4 = port->usb4;
if (usb4)
device_for_each_child_reverse(&usb4->dev, port,
remove_retimer);
}


@ -17,7 +17,9 @@
enum usb4_sb_opcode {
USB4_SB_OPCODE_ERR = 0x20525245, /* "ERR " */
USB4_SB_OPCODE_ONS = 0x444d4321, /* "!CMD" */
USB4_SB_OPCODE_ROUTER_OFFLINE = 0x4e45534c, /* "LSEN" */
USB4_SB_OPCODE_ENUMERATE_RETIMERS = 0x4d554e45, /* "ENUM" */
USB4_SB_OPCODE_SET_INBOUND_SBTX = 0x5055534c, /* "LSUP" */
USB4_SB_OPCODE_QUERY_LAST_RETIMER = 0x5453414c, /* "LAST" */
USB4_SB_OPCODE_GET_NVM_SECTOR_SIZE = 0x53534e47, /* "GNSS" */
USB4_SB_OPCODE_NVM_SET_OFFSET = 0x53504f42, /* "BOPS" */


@ -26,11 +26,6 @@ struct nvm_auth_status {
u32 status;
};
enum nvm_write_ops {
WRITE_AND_AUTHENTICATE = 1,
WRITE_ONLY = 2,
};
/*
* Hold NVM authentication failure status per switch. This information
* needs to stay around even when the switch gets power cycled so we
@ -308,13 +303,23 @@ static inline int nvm_read(struct tb_switch *sw, unsigned int address,
return dma_port_flash_read(sw->dma_port, address, buf, size);
}
static int nvm_authenticate(struct tb_switch *sw)
static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
{
int ret;
if (tb_switch_is_usb4(sw))
if (tb_switch_is_usb4(sw)) {
if (auth_only) {
ret = usb4_switch_nvm_set_offset(sw, 0);
if (ret)
return ret;
}
sw->nvm->authenticating = true;
return usb4_switch_nvm_authenticate(sw);
} else if (auth_only) {
return -EOPNOTSUPP;
}
sw->nvm->authenticating = true;
if (!tb_route(sw)) {
nvm_authenticate_start_dma_port(sw);
ret = nvm_authenticate_host_dma_port(sw);
@ -459,7 +464,7 @@ static void tb_switch_nvm_remove(struct tb_switch *sw)
/* port utility functions */
static const char *tb_port_type(struct tb_regs_port_header *port)
static const char *tb_port_type(const struct tb_regs_port_header *port)
{
switch (port->type >> 16) {
case 0:
@ -488,17 +493,21 @@ static const char *tb_port_type(struct tb_regs_port_header *port)
}
}
static void tb_dump_port(struct tb *tb, struct tb_regs_port_header *port)
static void tb_dump_port(struct tb *tb, const struct tb_port *port)
{
const struct tb_regs_port_header *regs = &port->config;
tb_dbg(tb,
" Port %d: %x:%x (Revision: %d, TB Version: %d, Type: %s (%#x))\n",
port->port_number, port->vendor_id, port->device_id,
port->revision, port->thunderbolt_version, tb_port_type(port),
port->type);
regs->port_number, regs->vendor_id, regs->device_id,
regs->revision, regs->thunderbolt_version, tb_port_type(regs),
regs->type);
tb_dbg(tb, " Max hop id (in/out): %d/%d\n",
port->max_in_hop_id, port->max_out_hop_id);
tb_dbg(tb, " Max counters: %d\n", port->max_counters);
tb_dbg(tb, " NFC Credits: %#x\n", port->nfc_credits);
regs->max_in_hop_id, regs->max_out_hop_id);
tb_dbg(tb, " Max counters: %d\n", regs->max_counters);
tb_dbg(tb, " NFC Credits: %#x\n", regs->nfc_credits);
tb_dbg(tb, " Credits (total/control): %u/%u\n", port->total_credits,
port->ctl_credits);
}
/**
@ -738,13 +747,32 @@ static int tb_init_port(struct tb_port *port)
cap = tb_port_find_cap(port, TB_PORT_CAP_USB4);
if (cap > 0)
port->cap_usb4 = cap;
/*
* For USB4 ports the buffers allocated for the control path
* can be read from the path config space. For legacy
* devices we use a hard-coded value.
*/
if (tb_switch_is_usb4(port->sw)) {
struct tb_regs_hop hop;
if (!tb_port_read(port, &hop, TB_CFG_HOPS, 0, 2))
port->ctl_credits = hop.initial_credits;
}
if (!port->ctl_credits)
port->ctl_credits = 2;
} else if (port->port != 0) {
cap = tb_port_find_cap(port, TB_PORT_CAP_ADAP);
if (cap > 0)
port->cap_adap = cap;
}
tb_dump_port(port->sw->tb, &port->config);
port->total_credits =
(port->config.nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
ADP_CS_4_TOTAL_BUFFERS_SHIFT;
tb_dump_port(port->sw->tb, port);
INIT_LIST_HEAD(&port->list);
return 0;
@ -991,8 +1019,11 @@ static int tb_port_set_link_width(struct tb_port *port, unsigned int width)
* tb_port_lane_bonding_enable() - Enable bonding on port
* @port: port to enable
*
* Enable bonding by setting the link width of the port and the
* other port in case of dual link port.
* Enable bonding by setting the link width of the port and the other
* port in case of dual link port. Does not wait for the link to
* actually reach the bonded state so caller needs to call
* tb_port_wait_for_link_width() before enabling any paths through the
* link to make sure the link is in expected state.
*
* Return: %0 in case of success and negative errno in case of error
*/
@ -1043,6 +1074,79 @@ void tb_port_lane_bonding_disable(struct tb_port *port)
tb_port_set_link_width(port, 1);
}
/**
* tb_port_wait_for_link_width() - Wait until link reaches specific width
* @port: Port to wait for
* @width: Expected link width (%1 or %2)
* @timeout_msec: Timeout in ms how long to wait
*
* Should be used after both ends of the link have been bonded (or
* bonding has been disabled) to wait until the link actually reaches
* the expected state. Returns %-ETIMEDOUT if the @width was not reached
* within the given timeout, %0 if it did.
*/
int tb_port_wait_for_link_width(struct tb_port *port, int width,
int timeout_msec)
{
ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
int ret;
do {
ret = tb_port_get_link_width(port);
if (ret < 0)
return ret;
else if (ret == width)
return 0;
usleep_range(1000, 2000);
} while (ktime_before(ktime_get(), timeout));
return -ETIMEDOUT;
}
static int tb_port_do_update_credits(struct tb_port *port)
{
u32 nfc_credits;
int ret;
ret = tb_port_read(port, &nfc_credits, TB_CFG_PORT, ADP_CS_4, 1);
if (ret)
return ret;
if (nfc_credits != port->config.nfc_credits) {
u32 total;
total = (nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
ADP_CS_4_TOTAL_BUFFERS_SHIFT;
tb_port_dbg(port, "total credits changed %u -> %u\n",
port->total_credits, total);
port->config.nfc_credits = nfc_credits;
port->total_credits = total;
}
return 0;
}
/**
* tb_port_update_credits() - Re-read port total credits
* @port: Port to update
*
* After the link is bonded (or bonding was disabled) the port total
* credits may change, so this function needs to be called to re-read
* the credits. Updates also the second lane adapter.
*/
int tb_port_update_credits(struct tb_port *port)
{
int ret;
ret = tb_port_do_update_credits(port);
if (ret)
return ret;
return tb_port_do_update_credits(port->dual_link_port);
}
static int tb_port_start_lane_initialization(struct tb_port *port)
{
int ret;
@ -1054,6 +1158,33 @@ static int tb_port_start_lane_initialization(struct tb_port *port)
return ret == -EINVAL ? 0 : ret;
}
/*
* Returns true if the port had something (router, XDomain) connected
* before suspend.
*/
static bool tb_port_resume(struct tb_port *port)
{
bool has_remote = tb_port_has_remote(port);
if (port->usb4) {
usb4_port_device_resume(port->usb4);
} else if (!has_remote) {
/*
* For disconnected downstream lane adapters start lane
* initialization now so we detect future connects.
*
* For XDomain start the lane initialization now so the
* link gets re-established.
*
* This is only needed for non-USB4 ports.
*/
if (!tb_is_upstream_port(port) || port->xdomain)
tb_port_start_lane_initialization(port);
}
return has_remote || port->xdomain;
}
/**
* tb_port_is_enabled() - Is the adapter port enabled
* @port: Port to check
@ -1592,8 +1723,7 @@ static ssize_t nvm_authenticate_sysfs(struct device *dev, const char *buf,
bool disconnect)
{
struct tb_switch *sw = tb_to_switch(dev);
int val;
int ret;
int val, ret;
pm_runtime_get_sync(&sw->dev);
@ -1616,22 +1746,27 @@ static ssize_t nvm_authenticate_sysfs(struct device *dev, const char *buf,
nvm_clear_auth_status(sw);
if (val > 0) {
if (!sw->nvm->flushed) {
if (!sw->nvm->buf) {
if (val == AUTHENTICATE_ONLY) {
if (disconnect)
ret = -EINVAL;
goto exit_unlock;
}
else
ret = nvm_authenticate(sw, true);
} else {
if (!sw->nvm->flushed) {
if (!sw->nvm->buf) {
ret = -EINVAL;
goto exit_unlock;
}
ret = nvm_validate_and_write(sw);
if (ret || val == WRITE_ONLY)
goto exit_unlock;
}
if (val == WRITE_AND_AUTHENTICATE) {
if (disconnect) {
ret = tb_lc_force_power(sw);
} else {
sw->nvm->authenticating = true;
ret = nvm_authenticate(sw);
ret = nvm_validate_and_write(sw);
if (ret || val == WRITE_ONLY)
goto exit_unlock;
}
if (val == WRITE_AND_AUTHENTICATE) {
if (disconnect)
ret = tb_lc_force_power(sw);
else
ret = nvm_authenticate(sw, false);
}
}
}
@ -2432,6 +2567,14 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
return ret;
}
ret = tb_port_wait_for_link_width(down, 2, 100);
if (ret) {
tb_port_warn(down, "timeout enabling lane bonding\n");
return ret;
}
tb_port_update_credits(down);
tb_port_update_credits(up);
tb_switch_update_link_attributes(sw);
tb_sw_dbg(sw, "lane bonding enabled\n");
@ -2462,7 +2605,17 @@ void tb_switch_lane_bonding_disable(struct tb_switch *sw)
tb_port_lane_bonding_disable(up);
tb_port_lane_bonding_disable(down);
/*
* It is fine if we get other errors as the router might have
* been unplugged.
*/
if (tb_port_wait_for_link_width(down, 1, 100) == -ETIMEDOUT)
tb_sw_warn(sw, "timeout disabling lane bonding\n");
tb_port_update_credits(down);
tb_port_update_credits(up);
tb_switch_update_link_attributes(sw);
tb_sw_dbg(sw, "lane bonding disabled\n");
}
@ -2529,6 +2682,16 @@ void tb_switch_unconfigure_link(struct tb_switch *sw)
tb_lc_unconfigure_port(down);
}
static void tb_switch_credits_init(struct tb_switch *sw)
{
if (tb_switch_is_icm(sw))
return;
if (!tb_switch_is_usb4(sw))
return;
if (usb4_switch_credits_init(sw))
tb_sw_info(sw, "failed to determine preferred buffer allocation, using defaults\n");
}
/**
* tb_switch_add() - Add a switch to the domain
* @sw: Switch to add
@ -2559,6 +2722,8 @@ int tb_switch_add(struct tb_switch *sw)
}
if (!sw->safe_mode) {
tb_switch_credits_init(sw);
/* read drom */
ret = tb_drom_read(sw);
if (ret) {
@ -2612,11 +2777,16 @@ int tb_switch_add(struct tb_switch *sw)
sw->device_name);
}
ret = usb4_switch_add_ports(sw);
if (ret) {
dev_err(&sw->dev, "failed to add USB4 ports\n");
goto err_del;
}
ret = tb_switch_nvm_add(sw);
if (ret) {
dev_err(&sw->dev, "failed to add NVM devices\n");
device_del(&sw->dev);
return ret;
goto err_ports;
}
/*
@ -2637,6 +2807,13 @@ int tb_switch_add(struct tb_switch *sw)
tb_switch_debugfs_init(sw);
return 0;
err_ports:
usb4_switch_remove_ports(sw);
err_del:
device_del(&sw->dev);
return ret;
}
/**
@ -2676,6 +2853,7 @@ void tb_switch_remove(struct tb_switch *sw)
tb_plug_events_active(sw, false);
tb_switch_nvm_remove(sw);
usb4_switch_remove_ports(sw);
if (tb_route(sw))
dev_info(&sw->dev, "device disconnected\n");
@ -2773,22 +2951,11 @@ int tb_switch_resume(struct tb_switch *sw)
/* check for surviving downstream switches */
tb_switch_for_each_port(sw, port) {
if (!tb_port_has_remote(port) && !port->xdomain) {
/*
* For disconnected downstream lane adapters
* start lane initialization now so we detect
* future connects.
*/
if (!tb_is_upstream_port(port) && tb_port_is_null(port))
tb_port_start_lane_initialization(port);
if (!tb_port_is_null(port))
continue;
if (!tb_port_resume(port))
continue;
} else if (port->xdomain) {
/*
* Start lane initialization for XDomain so the
* link gets re-established.
*/
tb_port_start_lane_initialization(port);
}
if (tb_wait_for_port(port, true) <= 0) {
tb_port_warn(port,
@ -2797,7 +2964,7 @@ int tb_switch_resume(struct tb_switch *sw)
tb_sw_set_unplugged(port->remote->sw);
else if (port->xdomain)
port->xdomain->is_unplugged = true;
} else if (tb_port_has_remote(port) || port->xdomain) {
} else {
/*
* Always unlock the port so the downstream
* switch/domain is accessible.
@ -2844,7 +3011,8 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime)
if (runtime) {
/* Trigger wake when something is plugged in/out */
flags |= TB_WAKE_ON_CONNECT | TB_WAKE_ON_DISCONNECT;
flags |= TB_WAKE_ON_USB4 | TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE;
flags |= TB_WAKE_ON_USB4;
flags |= TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE | TB_WAKE_ON_DP;
} else if (device_may_wakeup(&sw->dev)) {
flags |= TB_WAKE_ON_USB4 | TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE;
}

View File

@ -10,6 +10,7 @@
#include <linux/errno.h>
#include <linux/delay.h>
#include <linux/pm_runtime.h>
#include <linux/platform_data/x86/apple.h>
#include "tb.h"
#include "tb_regs.h"
@ -595,7 +596,7 @@ static void tb_scan_port(struct tb_port *port)
return;
}
tb_retimer_scan(port);
tb_retimer_scan(port, true);
sw = tb_switch_alloc(port->sw->tb, &port->sw->dev,
tb_downstream_route(port));
@ -662,7 +663,7 @@ static void tb_scan_port(struct tb_port *port)
tb_sw_warn(sw, "failed to enable TMU\n");
/* Scan upstream retimers */
tb_retimer_scan(upstream_port);
tb_retimer_scan(upstream_port, true);
/*
* Create USB 3.x tunnels only when the switch is plugged to the
@ -1571,6 +1572,69 @@ static const struct tb_cm_ops tb_cm_ops = {
.disconnect_xdomain_paths = tb_disconnect_xdomain_paths,
};
/*
* During suspend the Thunderbolt controller is reset and all PCIe
* tunnels are lost. The NHI driver will try to reestablish all tunnels
* during resume. This adds device links between the tunneled PCIe
* downstream ports and the NHI so that the device core will make sure
* NHI is resumed first before the rest.
*/
static void tb_apple_add_links(struct tb_nhi *nhi)
{
struct pci_dev *upstream, *pdev;
if (!x86_apple_machine)
return;
switch (nhi->pdev->device) {
case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
break;
default:
return;
}
upstream = pci_upstream_bridge(nhi->pdev);
while (upstream) {
if (!pci_is_pcie(upstream))
return;
if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM)
break;
upstream = pci_upstream_bridge(upstream);
}
if (!upstream)
return;
/*
* For each hotplug downstream port, add a device link
* back to NHI so that PCIe tunnels can be re-established after
* sleep.
*/
for_each_pci_bridge(pdev, upstream->subordinate) {
const struct device_link *link;
if (!pci_is_pcie(pdev))
continue;
if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
!pdev->is_hotplug_bridge)
continue;
link = device_link_add(&pdev->dev, &nhi->pdev->dev,
DL_FLAG_AUTOREMOVE_SUPPLIER |
DL_FLAG_PM_RUNTIME);
if (link) {
dev_dbg(&nhi->pdev->dev, "created link from %s\n",
dev_name(&pdev->dev));
} else {
dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
dev_name(&pdev->dev));
}
}
}
struct tb *tb_probe(struct tb_nhi *nhi)
{
struct tb_cm *tcm;
@ -1594,5 +1658,8 @@ struct tb *tb_probe(struct tb_nhi *nhi)
tb_dbg(tb, "using software connection manager\n");
tb_apple_add_links(nhi);
tb_acpi_add_links(nhi);
return tb;
}


@ -20,6 +20,7 @@
#define NVM_MIN_SIZE SZ_32K
#define NVM_MAX_SIZE SZ_512K
#define NVM_DATA_DWORDS 16
/* Intel specific NVM offsets */
#define NVM_DEVID 0x05
@ -57,6 +58,12 @@ struct tb_nvm {
bool flushed;
};
enum tb_nvm_write_ops {
WRITE_AND_AUTHENTICATE = 1,
WRITE_ONLY = 2,
AUTHENTICATE_ONLY = 3,
};
#define TB_SWITCH_KEY_SIZE 32
#define TB_SWITCH_MAX_DEPTH 6
#define USB4_SWITCH_MAX_DEPTH 5
@ -135,6 +142,12 @@ struct tb_switch_tmu {
* @rpm_complete: Completion used to wait for runtime resume to
* complete (ICM only)
* @quirks: Quirks used for this Thunderbolt switch
* @credit_allocation: Are the below buffer allocation parameters valid
* @max_usb3_credits: Router preferred number of buffers for USB 3.x
* @min_dp_aux_credits: Router preferred minimum number of buffers for DP AUX
* @min_dp_main_credits: Router preferred minimum number of buffers for DP MAIN
* @max_pcie_credits: Router preferred number of buffers for PCIe
* @max_dma_credits: Router preferred number of buffers for DMA/P2P
*
* When the switch is being added or removed to the domain (other
* switches) you need to have domain lock held.
@ -177,6 +190,12 @@ struct tb_switch {
u8 depth;
struct completion rpm_complete;
unsigned long quirks;
bool credit_allocation;
unsigned int max_usb3_credits;
unsigned int min_dp_aux_credits;
unsigned int min_dp_main_credits;
unsigned int max_pcie_credits;
unsigned int max_dma_credits;
};
/**
@ -189,6 +208,7 @@ struct tb_switch {
* @cap_tmu: Offset of the adapter specific TMU capability (%0 if not present)
* @cap_adap: Offset of the adapter specific capability (%0 if not present)
* @cap_usb4: Offset to the USB4 port capability (%0 if not present)
* @usb4: Pointer to the USB4 port structure (only if @cap_usb4 is != %0)
* @port: Port number on switch
* @disabled: Disabled by eeprom or enabled but not implemented
* @bonded: true if the port is bonded (two lanes combined as one)
@ -198,6 +218,10 @@ struct tb_switch {
* @in_hopids: Currently allocated input HopIDs
* @out_hopids: Currently allocated output HopIDs
* @list: Used to link ports to DP resources list
* @total_credits: Total number of buffers available for this port
* @ctl_credits: Buffers reserved for control path
* @dma_credits: Number of credits allocated for DMA tunneling for all
* DMA paths through this port.
*
* In USB4 terminology this structure represents an adapter (protocol or
* lane adapter).
@ -211,6 +235,7 @@ struct tb_port {
int cap_tmu;
int cap_adap;
int cap_usb4;
struct usb4_port *usb4;
u8 port;
bool disabled;
bool bonded;
@ -219,6 +244,24 @@ struct tb_port {
struct ida in_hopids;
struct ida out_hopids;
struct list_head list;
unsigned int total_credits;
unsigned int ctl_credits;
unsigned int dma_credits;
};
/**
* struct usb4_port - USB4 port device
* @dev: Device for the port
* @port: Pointer to the lane 0 adapter
* @can_offline: Does the port have the necessary platform support to move
* it into offline mode and back
* @offline: The port is currently in offline mode
*/
struct usb4_port {
struct device dev;
struct tb_port *port;
bool can_offline;
bool offline;
};
/**
@ -255,6 +298,8 @@ struct tb_retimer {
* @next_hop_index: HopID of the packet when it is routed out from @out_port
* @initial_credits: Number of initial flow control credits allocated for
* the path
* @nfc_credits: Number of non-flow controlled buffers allocated for
* @in_port.
*
* Hop configuration is always done on the IN port of a switch.
* in_port and out_port have to be on the same switch. Packets arriving on
@ -274,6 +319,7 @@ struct tb_path_hop {
int in_counter_index;
int next_hop_index;
unsigned int initial_credits;
unsigned int nfc_credits;
};
/**
@ -296,7 +342,6 @@ enum tb_path_port {
* struct tb_path - a unidirectional path between two ports
* @tb: Pointer to the domain structure
* @name: Name of the path (used for debugging)
* @nfc_credits: Number of non flow controlled credits allocated for the path
* @ingress_shared_buffer: Shared buffering used for ingress ports on the path
* @egress_shared_buffer: Shared buffering used for egress ports on the path
* @ingress_fc_enable: Flow control for ingress ports on the path
@ -317,7 +362,6 @@ enum tb_path_port {
struct tb_path {
struct tb *tb;
const char *name;
int nfc_credits;
enum tb_path_port ingress_shared_buffer;
enum tb_path_port egress_shared_buffer;
enum tb_path_port ingress_fc_enable;
@ -346,6 +390,7 @@ struct tb_path {
#define TB_WAKE_ON_USB4 BIT(2)
#define TB_WAKE_ON_USB3 BIT(3)
#define TB_WAKE_ON_PCIE BIT(4)
#define TB_WAKE_ON_DP BIT(5)
/**
* struct tb_cm_ops - Connection manager specific operations vector
@ -623,6 +668,7 @@ struct tb *tb_probe(struct tb_nhi *nhi);
extern struct device_type tb_domain_type;
extern struct device_type tb_retimer_type;
extern struct device_type tb_switch_type;
extern struct device_type usb4_port_device_type;
int tb_domain_init(void);
void tb_domain_exit(void);
@ -674,6 +720,16 @@ int tb_nvm_add_non_active(struct tb_nvm *nvm, size_t size,
void tb_nvm_free(struct tb_nvm *nvm);
void tb_nvm_exit(void);
typedef int (*read_block_fn)(void *, unsigned int, void *, size_t);
typedef int (*write_block_fn)(void *, unsigned int, const void *, size_t);
int tb_nvm_read_data(unsigned int address, void *buf, size_t size,
unsigned int retries, read_block_fn read_block,
void *read_block_data);
int tb_nvm_write_data(unsigned int address, const void *buf, size_t size,
unsigned int retries, write_block_fn write_next_block,
void *write_block_data);
struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
u64 route);
struct tb_switch *tb_switch_alloc_safe_mode(struct tb *tb,
@ -853,6 +909,11 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid);
struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
struct tb_port *prev);
static inline bool tb_port_use_credit_allocation(const struct tb_port *port)
{
return tb_port_is_null(port) && port->sw->credit_allocation;
}
/**
* tb_for_each_port_on_path() - Iterate over each port on path
* @src: Source port
@ -870,6 +931,9 @@ int tb_port_get_link_width(struct tb_port *port);
int tb_port_state(struct tb_port *port);
int tb_port_lane_bonding_enable(struct tb_port *port);
void tb_port_lane_bonding_disable(struct tb_port *port);
int tb_port_wait_for_link_width(struct tb_port *port, int width,
int timeout_msec);
int tb_port_update_credits(struct tb_port *port);
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
@ -904,6 +968,17 @@ bool tb_path_is_invalid(struct tb_path *path);
bool tb_path_port_on_path(const struct tb_path *path,
const struct tb_port *port);
/**
* tb_path_for_each_hop() - Iterate over each hop on path
* @path: Path whose hops to iterate
* @hop: Hop used as iterator
*
* Iterates over each hop on path.
*/
#define tb_path_for_each_hop(path, hop) \
for ((hop) = &(path)->hops[0]; \
(hop) <= &(path)->hops[(path)->path_length - 1]; (hop)++)
int tb_drom_read(struct tb_switch *sw);
int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid);
@ -950,7 +1025,7 @@ void tb_xdomain_remove(struct tb_xdomain *xd);
struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8 link,
u8 depth);
int tb_retimer_scan(struct tb_port *port);
int tb_retimer_scan(struct tb_port *port, bool add);
void tb_retimer_remove_all(struct tb_port *port);
static inline bool tb_is_retimer(const struct device *dev)
@ -975,10 +1050,12 @@ int usb4_switch_set_sleep(struct tb_switch *sw);
int usb4_switch_nvm_sector_size(struct tb_switch *sw);
int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size);
int usb4_switch_nvm_set_offset(struct tb_switch *sw, unsigned int address);
int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
const void *buf, size_t size);
int usb4_switch_nvm_authenticate(struct tb_switch *sw);
int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status);
int usb4_switch_credits_init(struct tb_switch *sw);
bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in);
int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
int usb4_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
@ -986,20 +1063,27 @@ struct tb_port *usb4_switch_map_pcie_down(struct tb_switch *sw,
const struct tb_port *port);
struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
const struct tb_port *port);
int usb4_switch_add_ports(struct tb_switch *sw);
void usb4_switch_remove_ports(struct tb_switch *sw);
int usb4_port_unlock(struct tb_port *port);
int usb4_port_configure(struct tb_port *port);
void usb4_port_unconfigure(struct tb_port *port);
int usb4_port_configure_xdomain(struct tb_port *port);
void usb4_port_unconfigure_xdomain(struct tb_port *port);
int usb4_port_router_offline(struct tb_port *port);
int usb4_port_router_online(struct tb_port *port);
int usb4_port_enumerate_retimers(struct tb_port *port);
int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index);
int usb4_port_retimer_read(struct tb_port *port, u8 index, u8 reg, void *buf,
u8 size);
int usb4_port_retimer_write(struct tb_port *port, u8 index, u8 reg,
const void *buf, u8 size);
int usb4_port_retimer_is_last(struct tb_port *port, u8 index);
int usb4_port_retimer_nvm_sector_size(struct tb_port *port, u8 index);
int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index,
unsigned int address);
int usb4_port_retimer_nvm_write(struct tb_port *port, u8 index,
unsigned int address, const void *buf,
size_t size);
@@ -1018,6 +1102,22 @@ int usb4_usb3_port_allocate_bandwidth(struct tb_port *port, int *upstream_bw,
int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw,
int *downstream_bw);
static inline bool tb_is_usb4_port_device(const struct device *dev)
{
return dev->type == &usb4_port_device_type;
}
static inline struct usb4_port *tb_to_usb4_port_device(struct device *dev)
{
if (tb_is_usb4_port_device(dev))
return container_of(dev, struct usb4_port, dev);
return NULL;
}
struct usb4_port *usb4_port_device_add(struct tb_port *port);
void usb4_port_device_remove(struct usb4_port *usb4);
int usb4_port_device_resume(struct usb4_port *usb4);
/* Keep link controller awake during update */
#define QUIRK_FORCE_POWER_LINK_CONTROLLER BIT(0)
@@ -1031,6 +1131,11 @@ bool tb_acpi_may_tunnel_usb3(void);
bool tb_acpi_may_tunnel_dp(void);
bool tb_acpi_may_tunnel_pcie(void);
bool tb_acpi_is_xdomain_allowed(void);
int tb_acpi_init(void);
void tb_acpi_exit(void);
int tb_acpi_power_on_retimers(struct tb_port *port);
int tb_acpi_power_off_retimers(struct tb_port *port);
#else
static inline void tb_acpi_add_links(struct tb_nhi *nhi) { }
@@ -1039,6 +1144,11 @@ static inline bool tb_acpi_may_tunnel_usb3(void) { return true; }
static inline bool tb_acpi_may_tunnel_dp(void) { return true; }
static inline bool tb_acpi_may_tunnel_pcie(void) { return true; }
static inline bool tb_acpi_is_xdomain_allowed(void) { return true; }
static inline int tb_acpi_init(void) { return 0; }
static inline void tb_acpi_exit(void) { }
static inline int tb_acpi_power_on_retimers(struct tb_port *port) { return 0; }
static inline int tb_acpi_power_off_retimers(struct tb_port *port) { return 0; }
#endif
#ifdef CONFIG_DEBUG_FS
@@ -195,6 +195,7 @@ struct tb_regs_switch_header {
#define ROUTER_CS_5_SLP BIT(0)
#define ROUTER_CS_5_WOP BIT(1)
#define ROUTER_CS_5_WOU BIT(2)
#define ROUTER_CS_5_WOD BIT(3)
#define ROUTER_CS_5_C3S BIT(23)
#define ROUTER_CS_5_PTO BIT(24)
#define ROUTER_CS_5_UTO BIT(25)
@@ -228,6 +229,7 @@ enum usb4_switch_op {
USB4_SWITCH_OP_NVM_SET_OFFSET = 0x23,
USB4_SWITCH_OP_DROM_READ = 0x24,
USB4_SWITCH_OP_NVM_SECTOR_SIZE = 0x25,
USB4_SWITCH_OP_BUFFER_ALLOC = 0x33,
};
/* Router TMU configuration */
@@ -458,6 +460,8 @@ struct tb_regs_hop {
#define TB_LC_SX_CTRL 0x96
#define TB_LC_SX_CTRL_WOC BIT(1)
#define TB_LC_SX_CTRL_WOD BIT(2)
#define TB_LC_SX_CTRL_WODPC BIT(3)
#define TB_LC_SX_CTRL_WODPD BIT(4)
#define TB_LC_SX_CTRL_WOU4 BIT(5)
#define TB_LC_SX_CTRL_WOP BIT(6)
#define TB_LC_SX_CTRL_L1C BIT(16)
@@ -87,22 +87,30 @@ static struct tb_switch *alloc_host(struct kunit *test)
sw->ports[1].config.type = TB_TYPE_PORT;
sw->ports[1].config.max_in_hop_id = 19;
sw->ports[1].config.max_out_hop_id = 19;
sw->ports[1].total_credits = 60;
sw->ports[1].ctl_credits = 2;
sw->ports[1].dual_link_port = &sw->ports[2];
sw->ports[2].config.type = TB_TYPE_PORT;
sw->ports[2].config.max_in_hop_id = 19;
sw->ports[2].config.max_out_hop_id = 19;
sw->ports[2].total_credits = 60;
sw->ports[2].ctl_credits = 2;
sw->ports[2].dual_link_port = &sw->ports[1];
sw->ports[2].link_nr = 1;
sw->ports[3].config.type = TB_TYPE_PORT;
sw->ports[3].config.max_in_hop_id = 19;
sw->ports[3].config.max_out_hop_id = 19;
sw->ports[3].total_credits = 60;
sw->ports[3].ctl_credits = 2;
sw->ports[3].dual_link_port = &sw->ports[4];
sw->ports[4].config.type = TB_TYPE_PORT;
sw->ports[4].config.max_in_hop_id = 19;
sw->ports[4].config.max_out_hop_id = 19;
sw->ports[4].total_credits = 60;
sw->ports[4].ctl_credits = 2;
sw->ports[4].dual_link_port = &sw->ports[3];
sw->ports[4].link_nr = 1;
@@ -143,6 +151,25 @@ static struct tb_switch *alloc_host(struct kunit *test)
return sw;
}
static struct tb_switch *alloc_host_usb4(struct kunit *test)
{
struct tb_switch *sw;
sw = alloc_host(test);
if (!sw)
return NULL;
sw->generation = 4;
sw->credit_allocation = true;
sw->max_usb3_credits = 32;
sw->min_dp_aux_credits = 1;
sw->min_dp_main_credits = 0;
sw->max_pcie_credits = 64;
sw->max_dma_credits = 14;
return sw;
}
static struct tb_switch *alloc_dev_default(struct kunit *test,
struct tb_switch *parent,
u64 route, bool bonded)
@@ -164,44 +191,60 @@ static struct tb_switch *alloc_dev_default(struct kunit *test,
sw->ports[1].config.type = TB_TYPE_PORT;
sw->ports[1].config.max_in_hop_id = 19;
sw->ports[1].config.max_out_hop_id = 19;
sw->ports[1].total_credits = 60;
sw->ports[1].ctl_credits = 2;
sw->ports[1].dual_link_port = &sw->ports[2];
sw->ports[2].config.type = TB_TYPE_PORT;
sw->ports[2].config.max_in_hop_id = 19;
sw->ports[2].config.max_out_hop_id = 19;
sw->ports[2].total_credits = 60;
sw->ports[2].ctl_credits = 2;
sw->ports[2].dual_link_port = &sw->ports[1];
sw->ports[2].link_nr = 1;
sw->ports[3].config.type = TB_TYPE_PORT;
sw->ports[3].config.max_in_hop_id = 19;
sw->ports[3].config.max_out_hop_id = 19;
sw->ports[3].total_credits = 60;
sw->ports[3].ctl_credits = 2;
sw->ports[3].dual_link_port = &sw->ports[4];
sw->ports[4].config.type = TB_TYPE_PORT;
sw->ports[4].config.max_in_hop_id = 19;
sw->ports[4].config.max_out_hop_id = 19;
sw->ports[4].total_credits = 60;
sw->ports[4].ctl_credits = 2;
sw->ports[4].dual_link_port = &sw->ports[3];
sw->ports[4].link_nr = 1;
sw->ports[5].config.type = TB_TYPE_PORT;
sw->ports[5].config.max_in_hop_id = 19;
sw->ports[5].config.max_out_hop_id = 19;
sw->ports[5].total_credits = 60;
sw->ports[5].ctl_credits = 2;
sw->ports[5].dual_link_port = &sw->ports[6];
sw->ports[6].config.type = TB_TYPE_PORT;
sw->ports[6].config.max_in_hop_id = 19;
sw->ports[6].config.max_out_hop_id = 19;
sw->ports[6].total_credits = 60;
sw->ports[6].ctl_credits = 2;
sw->ports[6].dual_link_port = &sw->ports[5];
sw->ports[6].link_nr = 1;
sw->ports[7].config.type = TB_TYPE_PORT;
sw->ports[7].config.max_in_hop_id = 19;
sw->ports[7].config.max_out_hop_id = 19;
sw->ports[7].total_credits = 60;
sw->ports[7].ctl_credits = 2;
sw->ports[7].dual_link_port = &sw->ports[8];
sw->ports[8].config.type = TB_TYPE_PORT;
sw->ports[8].config.max_in_hop_id = 19;
sw->ports[8].config.max_out_hop_id = 19;
sw->ports[8].total_credits = 60;
sw->ports[8].ctl_credits = 2;
sw->ports[8].dual_link_port = &sw->ports[7];
sw->ports[8].link_nr = 1;
@@ -260,14 +303,18 @@ static struct tb_switch *alloc_dev_default(struct kunit *test,
if (port->dual_link_port && upstream_port->dual_link_port) {
port->dual_link_port->remote = upstream_port->dual_link_port;
upstream_port->dual_link_port->remote = port->dual_link_port;
}
if (bonded) {
/* Bonding is used */
port->bonded = true;
port->dual_link_port->bonded = true;
upstream_port->bonded = true;
upstream_port->dual_link_port->bonded = true;
if (bonded) {
/* Bonding is used */
port->bonded = true;
port->total_credits *= 2;
port->dual_link_port->bonded = true;
port->dual_link_port->total_credits = 0;
upstream_port->bonded = true;
upstream_port->total_credits *= 2;
upstream_port->dual_link_port->bonded = true;
upstream_port->dual_link_port->total_credits = 0;
}
}
return sw;
@@ -294,6 +341,27 @@ static struct tb_switch *alloc_dev_with_dpin(struct kunit *test,
return sw;
}
static struct tb_switch *alloc_dev_usb4(struct kunit *test,
struct tb_switch *parent,
u64 route, bool bonded)
{
struct tb_switch *sw;
sw = alloc_dev_default(test, parent, route, bonded);
if (!sw)
return NULL;
sw->generation = 4;
sw->credit_allocation = true;
sw->max_usb3_credits = 14;
sw->min_dp_aux_credits = 1;
sw->min_dp_main_credits = 18;
sw->max_pcie_credits = 32;
sw->max_dma_credits = 14;
return sw;
}
static void tb_test_path_basic(struct kunit *test)
{
struct tb_port *src_port, *dst_port, *p;
@@ -1829,6 +1897,475 @@ static void tb_test_tunnel_dma_match(struct kunit *test)
tb_tunnel_free(tunnel);
}
static void tb_test_credit_alloc_legacy_not_bonded(struct kunit *test)
{
struct tb_switch *host, *dev;
struct tb_port *up, *down;
struct tb_tunnel *tunnel;
struct tb_path *path;
host = alloc_host(test);
dev = alloc_dev_default(test, host, 0x1, false);
down = &host->ports[8];
up = &dev->ports[9];
tunnel = tb_tunnel_alloc_pci(NULL, up, down);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
path = tunnel->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 16U);
path = tunnel->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 16U);
tb_tunnel_free(tunnel);
}
static void tb_test_credit_alloc_legacy_bonded(struct kunit *test)
{
struct tb_switch *host, *dev;
struct tb_port *up, *down;
struct tb_tunnel *tunnel;
struct tb_path *path;
host = alloc_host(test);
dev = alloc_dev_default(test, host, 0x1, true);
down = &host->ports[8];
up = &dev->ports[9];
tunnel = tb_tunnel_alloc_pci(NULL, up, down);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
path = tunnel->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);
path = tunnel->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);
tb_tunnel_free(tunnel);
}
static void tb_test_credit_alloc_pcie(struct kunit *test)
{
struct tb_switch *host, *dev;
struct tb_port *up, *down;
struct tb_tunnel *tunnel;
struct tb_path *path;
host = alloc_host_usb4(test);
dev = alloc_dev_usb4(test, host, 0x1, true);
down = &host->ports[8];
up = &dev->ports[9];
tunnel = tb_tunnel_alloc_pci(NULL, up, down);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
path = tunnel->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);
path = tunnel->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 64U);
tb_tunnel_free(tunnel);
}
static void tb_test_credit_alloc_dp(struct kunit *test)
{
struct tb_switch *host, *dev;
struct tb_port *in, *out;
struct tb_tunnel *tunnel;
struct tb_path *path;
host = alloc_host_usb4(test);
dev = alloc_dev_usb4(test, host, 0x1, true);
in = &host->ports[5];
out = &dev->ports[14];
tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
/* Video (main) path */
path = tunnel->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 12U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 18U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 0U);
/* AUX TX */
path = tunnel->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
/* AUX RX */
path = tunnel->paths[2];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
tb_tunnel_free(tunnel);
}
static void tb_test_credit_alloc_usb3(struct kunit *test)
{
struct tb_switch *host, *dev;
struct tb_port *up, *down;
struct tb_tunnel *tunnel;
struct tb_path *path;
host = alloc_host_usb4(test);
dev = alloc_dev_usb4(test, host, 0x1, true);
down = &host->ports[12];
up = &dev->ports[16];
tunnel = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
path = tunnel->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
path = tunnel->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);
tb_tunnel_free(tunnel);
}
static void tb_test_credit_alloc_dma(struct kunit *test)
{
struct tb_switch *host, *dev;
struct tb_port *nhi, *port;
struct tb_tunnel *tunnel;
struct tb_path *path;
host = alloc_host_usb4(test);
dev = alloc_dev_usb4(test, host, 0x1, true);
nhi = &host->ports[7];
port = &dev->ports[3];
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
/* DMA RX */
path = tunnel->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
/* DMA TX */
path = tunnel->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
tb_tunnel_free(tunnel);
}
static void tb_test_credit_alloc_dma_multiple(struct kunit *test)
{
struct tb_tunnel *tunnel1, *tunnel2, *tunnel3;
struct tb_switch *host, *dev;
struct tb_port *nhi, *port;
struct tb_path *path;
host = alloc_host_usb4(test);
dev = alloc_dev_usb4(test, host, 0x1, true);
nhi = &host->ports[7];
port = &dev->ports[3];
/*
* Create three DMA tunnels through the same ports. With the
* default buffers we should be able to create two and the last
* one fails.
*
* For the default host we have the following buffers for DMA:
*
* 120 - (2 + 2 * (1 + 0) + 32 + 64 + spare) = 20
*
* For the device we have the following:
*
* 120 - (2 + 2 * (1 + 18) + 14 + 32 + spare) = 34
*
* spare = 14 + 1 = 15
*
* So on host the first tunnel gets 14 and the second gets the
* remaining 1 and then we run out of buffers.
*/
tunnel1 = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
KUNIT_ASSERT_TRUE(test, tunnel1 != NULL);
KUNIT_ASSERT_EQ(test, tunnel1->npaths, (size_t)2);
path = tunnel1->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
path = tunnel1->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
tunnel2 = tb_tunnel_alloc_dma(NULL, nhi, port, 9, 2, 9, 2);
KUNIT_ASSERT_TRUE(test, tunnel2 != NULL);
KUNIT_ASSERT_EQ(test, tunnel2->npaths, (size_t)2);
path = tunnel2->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
path = tunnel2->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
tunnel3 = tb_tunnel_alloc_dma(NULL, nhi, port, 10, 3, 10, 3);
KUNIT_ASSERT_TRUE(test, tunnel3 == NULL);
/*
* Release the first DMA tunnel. That should make 14 buffers
* available for the next tunnel.
*/
tb_tunnel_free(tunnel1);
tunnel3 = tb_tunnel_alloc_dma(NULL, nhi, port, 10, 3, 10, 3);
KUNIT_ASSERT_TRUE(test, tunnel3 != NULL);
path = tunnel3->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
path = tunnel3->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
tb_tunnel_free(tunnel3);
tb_tunnel_free(tunnel2);
}
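The DMA test above exercises a simple reservation pattern: each tunnel asks for 14 credits, is clamped to whatever the lane adapter still has, and allocation fails once less than the 1-credit minimum remains. A toy, compilable model of that pattern (the pool size, constants, and helper names below are illustrative assumptions, not the kernel API):

```c
#include <assert.h>
#include <errno.h>

/* Credits the test leaves for DMA on one lane adapter (assumed) */
#define POOL_TOTAL	15
#define DMA_WANT	14	/* what each tunnel asks for */
#define DMA_MIN		1	/* minimum needed to establish a path */

static int pool_used;

static int reserve_dma(int want, int *got)
{
	int available = POOL_TOTAL - pool_used;

	if (available < DMA_MIN)
		return -ENOSPC;	/* pool exhausted, tunnel creation fails */
	if (want > available)
		want = available;	/* clamp to what is left */
	pool_used += want;
	*got = want;
	return 0;
}

static void release_dma(int credits)
{
	pool_used -= credits;	/* freeing a tunnel returns its credits */
}
```

With a 15-credit pool this reproduces the test's sequence: the first tunnel gets 14, the second gets the remaining 1, the third fails, and after freeing the first a new tunnel gets 14 again.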
static void tb_test_credit_alloc_all(struct kunit *test)
{
struct tb_port *up, *down, *in, *out, *nhi, *port;
struct tb_tunnel *pcie_tunnel, *dp_tunnel1, *dp_tunnel2, *usb3_tunnel;
struct tb_tunnel *dma_tunnel1, *dma_tunnel2;
struct tb_switch *host, *dev;
struct tb_path *path;
/*
* Create PCIe, 2 x DP, USB 3.x and two DMA tunnels from host to
* device. Expectation is that all these can be established with
* the default credit allocation found in Intel hardware.
*/
host = alloc_host_usb4(test);
dev = alloc_dev_usb4(test, host, 0x1, true);
down = &host->ports[8];
up = &dev->ports[9];
pcie_tunnel = tb_tunnel_alloc_pci(NULL, up, down);
KUNIT_ASSERT_TRUE(test, pcie_tunnel != NULL);
KUNIT_ASSERT_EQ(test, pcie_tunnel->npaths, (size_t)2);
path = pcie_tunnel->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);
path = pcie_tunnel->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 64U);
in = &host->ports[5];
out = &dev->ports[13];
dp_tunnel1 = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
KUNIT_ASSERT_TRUE(test, dp_tunnel1 != NULL);
KUNIT_ASSERT_EQ(test, dp_tunnel1->npaths, (size_t)3);
path = dp_tunnel1->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 12U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 18U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 0U);
path = dp_tunnel1->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
path = dp_tunnel1->paths[2];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
in = &host->ports[6];
out = &dev->ports[14];
dp_tunnel2 = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
KUNIT_ASSERT_TRUE(test, dp_tunnel2 != NULL);
KUNIT_ASSERT_EQ(test, dp_tunnel2->npaths, (size_t)3);
path = dp_tunnel2->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 12U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 18U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 0U);
path = dp_tunnel2->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
path = dp_tunnel2->paths[2];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 1U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
down = &host->ports[12];
up = &dev->ports[16];
usb3_tunnel = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0);
KUNIT_ASSERT_TRUE(test, usb3_tunnel != NULL);
KUNIT_ASSERT_EQ(test, usb3_tunnel->npaths, (size_t)2);
path = usb3_tunnel->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
path = usb3_tunnel->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 7U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 32U);
nhi = &host->ports[7];
port = &dev->ports[3];
dma_tunnel1 = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
KUNIT_ASSERT_TRUE(test, dma_tunnel1 != NULL);
KUNIT_ASSERT_EQ(test, dma_tunnel1->npaths, (size_t)2);
path = dma_tunnel1->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
path = dma_tunnel1->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 14U);
dma_tunnel2 = tb_tunnel_alloc_dma(NULL, nhi, port, 9, 2, 9, 2);
KUNIT_ASSERT_TRUE(test, dma_tunnel2 != NULL);
KUNIT_ASSERT_EQ(test, dma_tunnel2->npaths, (size_t)2);
path = dma_tunnel2->paths[0];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 14U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
path = dma_tunnel2->paths[1];
KUNIT_ASSERT_EQ(test, path->path_length, 2);
KUNIT_EXPECT_EQ(test, path->hops[0].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[0].initial_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].nfc_credits, 0U);
KUNIT_EXPECT_EQ(test, path->hops[1].initial_credits, 1U);
tb_tunnel_free(dma_tunnel2);
tb_tunnel_free(dma_tunnel1);
tb_tunnel_free(usb3_tunnel);
tb_tunnel_free(dp_tunnel2);
tb_tunnel_free(dp_tunnel1);
tb_tunnel_free(pcie_tunnel);
}
static const u32 root_directory[] = {
0x55584401, /* "UXD" v1 */
0x00000018, /* Root directory length */
@@ -2105,6 +2642,14 @@ static struct kunit_case tb_test_cases[] = {
KUNIT_CASE(tb_test_tunnel_dma_tx),
KUNIT_CASE(tb_test_tunnel_dma_chain),
KUNIT_CASE(tb_test_tunnel_dma_match),
KUNIT_CASE(tb_test_credit_alloc_legacy_not_bonded),
KUNIT_CASE(tb_test_credit_alloc_legacy_bonded),
KUNIT_CASE(tb_test_credit_alloc_pcie),
KUNIT_CASE(tb_test_credit_alloc_dp),
KUNIT_CASE(tb_test_credit_alloc_usb3),
KUNIT_CASE(tb_test_credit_alloc_dma),
KUNIT_CASE(tb_test_credit_alloc_dma_multiple),
KUNIT_CASE(tb_test_credit_alloc_all),
KUNIT_CASE(tb_test_property_parse),
KUNIT_CASE(tb_test_property_format),
KUNIT_CASE(tb_test_property_copy),
@@ -34,6 +34,16 @@
#define TB_DP_AUX_PATH_OUT 1
#define TB_DP_AUX_PATH_IN 2
/* Minimum number of credits needed for PCIe path */
#define TB_MIN_PCIE_CREDITS 6U
/*
* Number of credits we try to allocate for each DMA path if not limited
* by the host router baMaxHI.
*/
#define TB_DMA_CREDITS 14U
/* Minimum number of credits for DMA path */
#define TB_MIN_DMA_CREDITS 1U
static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" };
#define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \
@@ -57,6 +67,55 @@ static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" };
#define tb_tunnel_dbg(tunnel, fmt, arg...) \
__TB_TUNNEL_PRINT(tb_dbg, tunnel, fmt, ##arg)
static inline unsigned int tb_usable_credits(const struct tb_port *port)
{
return port->total_credits - port->ctl_credits;
}
/**
* tb_available_credits() - Available credits for PCIe and DMA
* @port: Lane adapter to check
* @max_dp_streams: If non-%NULL stores maximum number of simultaneous DP
* streams possible through this lane adapter
*/
static unsigned int tb_available_credits(const struct tb_port *port,
size_t *max_dp_streams)
{
const struct tb_switch *sw = port->sw;
int credits, usb3, pcie, spare;
size_t ndp;
usb3 = tb_acpi_may_tunnel_usb3() ? sw->max_usb3_credits : 0;
pcie = tb_acpi_may_tunnel_pcie() ? sw->max_pcie_credits : 0;
if (tb_acpi_is_xdomain_allowed()) {
spare = min_not_zero(sw->max_dma_credits, TB_DMA_CREDITS);
/* Add some credits for potential second DMA tunnel */
spare += TB_MIN_DMA_CREDITS;
} else {
spare = 0;
}
credits = tb_usable_credits(port);
if (tb_acpi_may_tunnel_dp()) {
/*
* Maximum number of DP streams possible through the
* lane adapter.
*/
ndp = (credits - (usb3 + pcie + spare)) /
(sw->min_dp_aux_credits + sw->min_dp_main_credits);
} else {
ndp = 0;
}
credits -= ndp * (sw->min_dp_aux_credits + sw->min_dp_main_credits);
credits -= usb3;
if (max_dp_streams)
*max_dp_streams = ndp;
return credits > 0 ? credits : 0;
}
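The new tb_available_credits() splits a lane adapter's buffer budget between the tunnel types: USB 3.x and PCIe get fixed budgets, a spare is kept for DMA, and whatever remains bounds the number of DP streams. A standalone sketch of the same arithmetic (the struct, names, and sample figures in the test are assumptions mirroring the KUnit topologies, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

struct sketch_sw {
	int max_usb3_credits;	/* budget for a USB 3.x tunnel */
	int max_pcie_credits;	/* budget for a PCIe tunnel */
	int max_dma_credits;	/* budget for one DMA tunnel */
	int min_dp_aux_credits;	/* per-DP-stream AUX minimum */
	int min_dp_main_credits;/* per-DP-stream video minimum */
};

#define SKETCH_DMA_CREDITS	14	/* mirrors TB_DMA_CREDITS */
#define SKETCH_MIN_DMA_CREDITS	1	/* mirrors TB_MIN_DMA_CREDITS */

static int sketch_available_credits(const struct sketch_sw *sw, int total,
				    int ctl, size_t *max_dp_streams)
{
	int credits = total - ctl;	/* control path is always reserved */
	int per_dp = sw->min_dp_aux_credits + sw->min_dp_main_credits;
	int spare;
	size_t ndp;

	/* min_not_zero(): one full DMA budget plus a minimal second tunnel */
	spare = sw->max_dma_credits && sw->max_dma_credits < SKETCH_DMA_CREDITS ?
		sw->max_dma_credits : SKETCH_DMA_CREDITS;
	spare += SKETCH_MIN_DMA_CREDITS;

	/* What is left after the USB3/PCIe/DMA budgets limits DP streams */
	ndp = (credits - (sw->max_usb3_credits + sw->max_pcie_credits + spare)) /
	      per_dp;
	credits -= ndp * per_dp;
	credits -= sw->max_usb3_credits;

	if (max_dp_streams)
		*max_dp_streams = ndp;
	return credits > 0 ? credits : 0;
}
```

Plugging in the bonded USB4 host from the tests (120 total credits, 2 control, USB3 32, PCIe 64, DMA 14, DP 1+0) yields 79 available credits and room for 7 DP streams; the USB4 device values (USB3 14, PCIe 32, DP 1+18) yield 47 credits and 3 streams.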
static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths,
enum tb_tunnel_type type)
{
@@ -94,24 +153,37 @@ static int tb_pci_activate(struct tb_tunnel *tunnel, bool activate)
return 0;
}
static int tb_initial_credits(const struct tb_switch *sw)
static int tb_pci_init_credits(struct tb_path_hop *hop)
{
/* If the path is complete sw is not NULL */
if (sw) {
/* More credits for faster link */
switch (sw->link_speed * sw->link_width) {
case 40:
return 32;
case 20:
return 24;
}
struct tb_port *port = hop->in_port;
struct tb_switch *sw = port->sw;
unsigned int credits;
if (tb_port_use_credit_allocation(port)) {
unsigned int available;
available = tb_available_credits(port, NULL);
credits = min(sw->max_pcie_credits, available);
if (credits < TB_MIN_PCIE_CREDITS)
return -ENOSPC;
credits = max(TB_MIN_PCIE_CREDITS, credits);
} else {
if (tb_port_is_null(port))
credits = port->bonded ? 32 : 16;
else
credits = 7;
}
return 16;
hop->initial_credits = credits;
return 0;
}
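For USB4 routers, tb_pci_init_credits() grants the PCIe hop min(max_pcie_credits, available) and refuses the path below a 6-credit minimum. A minimal sketch of that clamp (the helper name is an assumption):

```c
#include <assert.h>
#include <errno.h>

#define SKETCH_MIN_PCIE_CREDITS 6	/* mirrors TB_MIN_PCIE_CREDITS */

static int sketch_pcie_credits(int max_pcie, int available, int *credits)
{
	/* Grant the PCIe budget, limited by what the adapter has left */
	int c = max_pcie < available ? max_pcie : available;

	if (c < SKETCH_MIN_PCIE_CREDITS)
		return -ENOSPC;	/* path cannot be established */
	*credits = c;
	return 0;
}
```

For example, a device with a 32-credit PCIe budget and 47 available credits gets 32, a host with a 64-credit budget but only 40 available gets 40, and 5 available credits fails with -ENOSPC.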
static void tb_pci_init_path(struct tb_path *path)
static int tb_pci_init_path(struct tb_path *path)
{
struct tb_path_hop *hop;
path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_fc_enable = TB_PATH_ALL;
@@ -119,11 +191,16 @@ static void tb_pci_init_path(struct tb_path *path)
path->priority = 3;
path->weight = 1;
path->drop_packages = 0;
path->nfc_credits = 0;
path->hops[0].initial_credits = 7;
if (path->path_length > 1)
path->hops[1].initial_credits =
tb_initial_credits(path->hops[1].in_port->sw);
tb_path_for_each_hop(path, hop) {
int ret;
ret = tb_pci_init_credits(hop);
if (ret)
return ret;
}
return 0;
}
/**
@@ -163,14 +240,16 @@ struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down)
goto err_free;
}
tunnel->paths[TB_PCI_PATH_UP] = path;
tb_pci_init_path(tunnel->paths[TB_PCI_PATH_UP]);
if (tb_pci_init_path(tunnel->paths[TB_PCI_PATH_UP]))
goto err_free;
path = tb_path_discover(tunnel->dst_port, -1, down, TB_PCI_HOPID, NULL,
"PCIe Down");
if (!path)
goto err_deactivate;
tunnel->paths[TB_PCI_PATH_DOWN] = path;
tb_pci_init_path(tunnel->paths[TB_PCI_PATH_DOWN]);
if (tb_pci_init_path(tunnel->paths[TB_PCI_PATH_DOWN]))
goto err_deactivate;
/* Validate that the tunnel is complete */
if (!tb_port_is_pcie_up(tunnel->dst_port)) {
@@ -228,23 +307,25 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
path = tb_path_alloc(tb, down, TB_PCI_HOPID, up, TB_PCI_HOPID, 0,
"PCIe Down");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
tb_pci_init_path(path);
if (!path)
goto err_free;
tunnel->paths[TB_PCI_PATH_DOWN] = path;
if (tb_pci_init_path(path))
goto err_free;
path = tb_path_alloc(tb, up, TB_PCI_HOPID, down, TB_PCI_HOPID, 0,
"PCIe Up");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
tb_pci_init_path(path);
if (!path)
goto err_free;
tunnel->paths[TB_PCI_PATH_UP] = path;
if (tb_pci_init_path(path))
goto err_free;
return tunnel;
err_free:
tb_tunnel_free(tunnel);
return NULL;
}
static bool tb_dp_is_usb4(const struct tb_switch *sw)
@@ -599,9 +680,20 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
return 0;
}
static void tb_dp_init_aux_credits(struct tb_path_hop *hop)
{
struct tb_port *port = hop->in_port;
struct tb_switch *sw = port->sw;
if (tb_port_use_credit_allocation(port))
hop->initial_credits = sw->min_dp_aux_credits;
else
hop->initial_credits = 1;
}
static void tb_dp_init_aux_path(struct tb_path *path)
{
int i;
struct tb_path_hop *hop;
path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
path->egress_shared_buffer = TB_PATH_NONE;
@@ -610,13 +702,42 @@ static void tb_dp_init_aux_path(struct tb_path *path)
path->priority = 2;
path->weight = 1;
for (i = 0; i < path->path_length; i++)
path->hops[i].initial_credits = 1;
tb_path_for_each_hop(path, hop)
tb_dp_init_aux_credits(hop);
}
static void tb_dp_init_video_path(struct tb_path *path, bool discover)
static int tb_dp_init_video_credits(struct tb_path_hop *hop)
{
u32 nfc_credits = path->hops[0].in_port->config.nfc_credits;
struct tb_port *port = hop->in_port;
struct tb_switch *sw = port->sw;
if (tb_port_use_credit_allocation(port)) {
unsigned int nfc_credits;
size_t max_dp_streams;
tb_available_credits(port, &max_dp_streams);
/*
* Read the number of currently allocated NFC credits
* from the lane adapter. Since we only use them for DP
* tunneling we can use that to figure out how many DP
* tunnels already go through the lane adapter.
*/
nfc_credits = port->config.nfc_credits &
ADP_CS_4_NFC_BUFFERS_MASK;
if (nfc_credits / sw->min_dp_main_credits > max_dp_streams)
return -ENOSPC;
hop->nfc_credits = sw->min_dp_main_credits;
} else {
hop->nfc_credits = min(port->total_credits - 2, 12U);
}
return 0;
}
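The guard in tb_dp_init_video_credits() counts the DP tunnels a lane adapter already carries by dividing its programmed NFC credits by the per-stream video minimum, and refuses a further tunnel once that exceeds the adapter's stream budget. A hedged sketch of the check (name and numbers are illustrative; min_dp_main must be non-zero here):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

static int sketch_dp_video_check(int nfc_credits, int min_dp_main,
				 size_t max_dp_streams)
{
	/* NFC credits are only used for DP, so this counts existing tunnels */
	size_t existing = (size_t)(nfc_credits / min_dp_main);

	if (existing > max_dp_streams)
		return -ENOSPC;	/* adapter's DP stream budget exhausted */
	return 0;
}
```

With the USB4 device values from the tests (18 main credits per stream, 3 streams possible), 36 NFC credits (2 existing tunnels) still passes, while 72 (4 existing) is refused.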
static int tb_dp_init_video_path(struct tb_path *path)
{
struct tb_path_hop *hop;
path->egress_fc_enable = TB_PATH_NONE;
path->egress_shared_buffer = TB_PATH_NONE;
@@ -625,16 +746,15 @@ static void tb_dp_init_video_path(struct tb_path *path, bool discover)
path->priority = 1;
path->weight = 1;
if (discover) {
path->nfc_credits = nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK;
} else {
u32 max_credits;
tb_path_for_each_hop(path, hop) {
int ret;
max_credits = (nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
ADP_CS_4_TOTAL_BUFFERS_SHIFT;
/* Leave some credits for AUX path */
path->nfc_credits = min(max_credits - 2, 12U);
ret = tb_dp_init_video_credits(hop);
if (ret)
return ret;
}
return 0;
}
/**
@@ -674,7 +794,8 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in)
goto err_free;
}
tunnel->paths[TB_DP_VIDEO_PATH_OUT] = path;
tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT], true);
if (tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT]))
goto err_free;
path = tb_path_discover(in, TB_DP_AUX_TX_HOPID, NULL, -1, NULL, "AUX TX");
if (!path)
@@ -761,7 +882,7 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
1, "Video");
if (!path)
goto err_free;
tb_dp_init_video_path(path, false);
tb_dp_init_video_path(path);
paths[TB_DP_VIDEO_PATH_OUT] = path;
path = tb_path_alloc(tb, in, TB_DP_AUX_TX_HOPID, out,
@@ -785,20 +906,58 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
return NULL;
}
static u32 tb_dma_credits(struct tb_port *nhi)
static unsigned int tb_dma_available_credits(const struct tb_port *port)
{
u32 max_credits;
const struct tb_switch *sw = port->sw;
int credits;
max_credits = (nhi->config.nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
ADP_CS_4_TOTAL_BUFFERS_SHIFT;
return min(max_credits, 13U);
credits = tb_available_credits(port, NULL);
if (tb_acpi_may_tunnel_pcie())
credits -= sw->max_pcie_credits;
credits -= port->dma_credits;
return credits > 0 ? credits : 0;
}
static void tb_dma_init_path(struct tb_path *path, unsigned int efc, u32 credits)
static int tb_dma_reserve_credits(struct tb_path_hop *hop, unsigned int credits)
{
int i;
struct tb_port *port = hop->in_port;
path->egress_fc_enable = efc;
if (tb_port_use_credit_allocation(port)) {
unsigned int available = tb_dma_available_credits(port);
/*
* Need to have at least TB_MIN_DMA_CREDITS, otherwise
* DMA path cannot be established.
*/
if (available < TB_MIN_DMA_CREDITS)
return -ENOSPC;
while (credits > available)
credits--;
tb_port_dbg(port, "reserving %u credits for DMA path\n",
credits);
port->dma_credits += credits;
} else {
if (tb_port_is_null(port))
credits = port->bonded ? 14 : 6;
else
credits = min(port->total_credits, credits);
}
hop->initial_credits = credits;
return 0;
}
/* Path from lane adapter to NHI */
static int tb_dma_init_rx_path(struct tb_path *path, unsigned int credits)
{
struct tb_path_hop *hop;
unsigned int i, tmp;
path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
path->ingress_fc_enable = TB_PATH_ALL;
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_shared_buffer = TB_PATH_NONE;
@@ -806,8 +965,80 @@ static void tb_dma_init_path(struct tb_path *path, unsigned int efc, u32 credits
path->weight = 1;
path->clear_fc = true;
for (i = 0; i < path->path_length; i++)
path->hops[i].initial_credits = credits;
/*
* First lane adapter is the one connected to the remote host.
* We don't tunnel other traffic over this link so we can use all
* the credits (except the ones reserved for control traffic).
*/
hop = &path->hops[0];
tmp = min(tb_usable_credits(hop->in_port), credits);
hop->initial_credits = tmp;
hop->in_port->dma_credits += tmp;
for (i = 1; i < path->path_length; i++) {
int ret;
ret = tb_dma_reserve_credits(&path->hops[i], credits);
if (ret)
return ret;
}
return 0;
}
/* Path from NHI to lane adapter */
static int tb_dma_init_tx_path(struct tb_path *path, unsigned int credits)
{
struct tb_path_hop *hop;
path->egress_fc_enable = TB_PATH_ALL;
path->ingress_fc_enable = TB_PATH_ALL;
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_shared_buffer = TB_PATH_NONE;
path->priority = 5;
path->weight = 1;
path->clear_fc = true;
tb_path_for_each_hop(path, hop) {
int ret;
ret = tb_dma_reserve_credits(hop, credits);
if (ret)
return ret;
}
return 0;
}
static void tb_dma_release_credits(struct tb_path_hop *hop)
{
struct tb_port *port = hop->in_port;
if (tb_port_use_credit_allocation(port)) {
port->dma_credits -= hop->initial_credits;
tb_port_dbg(port, "released %u DMA path credits\n",
hop->initial_credits);
}
}
static void tb_dma_deinit_path(struct tb_path *path)
{
struct tb_path_hop *hop;
tb_path_for_each_hop(path, hop)
tb_dma_release_credits(hop);
}
static void tb_dma_deinit(struct tb_tunnel *tunnel)
{
int i;
for (i = 0; i < tunnel->npaths; i++) {
if (!tunnel->paths[i])
continue;
tb_dma_deinit_path(tunnel->paths[i]);
}
}
/**
@@ -832,7 +1063,7 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
struct tb_tunnel *tunnel;
size_t npaths = 0, i = 0;
struct tb_path *path;
u32 credits;
int credits;
if (receive_ring > 0)
npaths++;
@@ -848,32 +1079,39 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
tunnel->src_port = nhi;
tunnel->dst_port = dst;
tunnel->deinit = tb_dma_deinit;
credits = tb_dma_credits(nhi);
credits = min_not_zero(TB_DMA_CREDITS, nhi->sw->max_dma_credits);
if (receive_ring > 0) {
path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0,
"DMA RX");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
tb_dma_init_path(path, TB_PATH_SOURCE | TB_PATH_INTERNAL, credits);
if (!path)
goto err_free;
tunnel->paths[i++] = path;
if (tb_dma_init_rx_path(path, credits)) {
tb_tunnel_dbg(tunnel, "not enough buffers for RX path\n");
goto err_free;
}
}
if (transmit_ring > 0) {
path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0,
"DMA TX");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
tb_dma_init_path(path, TB_PATH_ALL, credits);
if (!path)
goto err_free;
tunnel->paths[i++] = path;
if (tb_dma_init_tx_path(path, credits)) {
tb_tunnel_dbg(tunnel, "not enough buffers for TX path\n");
goto err_free;
}
}
return tunnel;
err_free:
tb_tunnel_free(tunnel);
return NULL;
}
/**
@@ -1067,8 +1305,28 @@ static void tb_usb3_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
tunnel->allocated_up, tunnel->allocated_down);
}
static void tb_usb3_init_credits(struct tb_path_hop *hop)
{
struct tb_port *port = hop->in_port;
struct tb_switch *sw = port->sw;
unsigned int credits;
if (tb_port_use_credit_allocation(port)) {
credits = sw->max_usb3_credits;
} else {
if (tb_port_is_null(port))
credits = port->bonded ? 32 : 16;
else
credits = 7;
}
hop->initial_credits = credits;
}
static void tb_usb3_init_path(struct tb_path *path)
{
struct tb_path_hop *hop;
path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_fc_enable = TB_PATH_ALL;
@@ -1076,11 +1334,9 @@ static void tb_usb3_init_path(struct tb_path *path)
path->priority = 3;
path->weight = 3;
path->drop_packages = 0;
path->nfc_credits = 0;
path->hops[0].initial_credits = 7;
if (path->path_length > 1)
path->hops[1].initial_credits =
tb_initial_credits(path->hops[1].in_port->sw);
tb_path_for_each_hop(path, hop)
tb_usb3_init_credits(hop);
}
/**
@@ -1280,6 +1536,9 @@ void tb_tunnel_free(struct tb_tunnel *tunnel)
if (!tunnel)
return;
if (tunnel->deinit)
tunnel->deinit(tunnel);
for (i = 0; i < tunnel->npaths; i++) {
if (tunnel->paths[i])
tb_path_free(tunnel->paths[i]);


@@ -27,6 +27,7 @@ enum tb_tunnel_type {
* @paths: All paths required by the tunnel
* @npaths: Number of paths in @paths
* @init: Optional tunnel specific initialization
* @deinit: Optional tunnel specific de-initialization
* @activate: Optional tunnel specific activation/deactivation
* @consumed_bandwidth: Return how much bandwidth the tunnel consumes
* @release_unused_bandwidth: Release all unused bandwidth
@@ -47,6 +48,7 @@ struct tb_tunnel {
struct tb_path **paths;
size_t npaths;
int (*init)(struct tb_tunnel *tunnel);
void (*deinit)(struct tb_tunnel *tunnel);
int (*activate)(struct tb_tunnel *tunnel, bool activate);
int (*consumed_bandwidth)(struct tb_tunnel *tunnel, int *consumed_up,
int *consumed_down);


@@ -13,7 +13,6 @@
#include "sb_regs.h"
#include "tb.h"
#define USB4_DATA_DWORDS 16
#define USB4_DATA_RETRIES 3
enum usb4_sb_target {
@@ -37,8 +36,19 @@ enum usb4_sb_target {
#define USB4_NVM_SECTOR_SIZE_MASK GENMASK(23, 0)
typedef int (*read_block_fn)(void *, unsigned int, void *, size_t);
typedef int (*write_block_fn)(void *, const void *, size_t);
#define USB4_BA_LENGTH_MASK GENMASK(7, 0)
#define USB4_BA_INDEX_MASK GENMASK(15, 0)
enum usb4_ba_index {
USB4_BA_MAX_USB3 = 0x1,
USB4_BA_MIN_DP_AUX = 0x2,
USB4_BA_MIN_DP_MAIN = 0x3,
USB4_BA_MAX_PCIE = 0x4,
USB4_BA_MAX_HI = 0x5,
};
#define USB4_BA_VALUE_MASK GENMASK(31, 16)
#define USB4_BA_VALUE_SHIFT 16
static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
u32 value, int timeout_msec)
@@ -62,76 +72,6 @@ static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
return -ETIMEDOUT;
}
static int usb4_do_read_data(u16 address, void *buf, size_t size,
read_block_fn read_block, void *read_block_data)
{
unsigned int retries = USB4_DATA_RETRIES;
unsigned int offset;
do {
unsigned int dwaddress, dwords;
u8 data[USB4_DATA_DWORDS * 4];
size_t nbytes;
int ret;
offset = address & 3;
nbytes = min_t(size_t, size + offset, USB4_DATA_DWORDS * 4);
dwaddress = address / 4;
dwords = ALIGN(nbytes, 4) / 4;
ret = read_block(read_block_data, dwaddress, data, dwords);
if (ret) {
if (ret != -ENODEV && retries--)
continue;
return ret;
}
nbytes -= offset;
memcpy(buf, data + offset, nbytes);
size -= nbytes;
address += nbytes;
buf += nbytes;
} while (size > 0);
return 0;
}
static int usb4_do_write_data(unsigned int address, const void *buf, size_t size,
write_block_fn write_next_block, void *write_block_data)
{
unsigned int retries = USB4_DATA_RETRIES;
unsigned int offset;
offset = address & 3;
address = address & ~3;
do {
u32 nbytes = min_t(u32, size, USB4_DATA_DWORDS * 4);
u8 data[USB4_DATA_DWORDS * 4];
int ret;
memcpy(data + offset, buf, nbytes);
ret = write_next_block(write_block_data, data, nbytes / 4);
if (ret) {
if (ret == -ETIMEDOUT) {
if (retries--)
continue;
ret = -EIO;
}
return ret;
}
size -= nbytes;
address += nbytes;
buf += nbytes;
} while (size > 0);
return 0;
}
static int usb4_native_switch_op(struct tb_switch *sw, u16 opcode,
u32 *metadata, u8 *status,
const void *tx_data, size_t tx_dwords,
@@ -193,7 +133,7 @@ static int __usb4_switch_op(struct tb_switch *sw, u16 opcode, u32 *metadata,
{
const struct tb_cm_ops *cm_ops = sw->tb->cm_ops;
if (tx_dwords > USB4_DATA_DWORDS || rx_dwords > USB4_DATA_DWORDS)
if (tx_dwords > NVM_DATA_DWORDS || rx_dwords > NVM_DATA_DWORDS)
return -EINVAL;
/*
@@ -320,7 +260,7 @@ int usb4_switch_setup(struct tb_switch *sw)
parent = tb_switch_parent(sw);
downstream_port = tb_port_at(tb_route(sw), parent);
sw->link_usb4 = link_is_usb4(downstream_port);
tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ? "USB4" : "TBT3");
tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ? "USB4" : "TBT");
xhci = val & ROUTER_CS_6_HCI;
tbt3 = !(val & ROUTER_CS_6_TNS);
@@ -414,8 +354,8 @@ static int usb4_switch_drom_read_block(void *data,
int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size)
{
return usb4_do_read_data(address, buf, size,
usb4_switch_drom_read_block, sw);
return tb_nvm_read_data(address, buf, size, USB4_DATA_RETRIES,
usb4_switch_drom_read_block, sw);
}
/**
@@ -473,12 +413,18 @@ int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags)
val &= ~(PORT_CS_19_WOC | PORT_CS_19_WOD | PORT_CS_19_WOU4);
if (flags & TB_WAKE_ON_CONNECT)
val |= PORT_CS_19_WOC;
if (flags & TB_WAKE_ON_DISCONNECT)
val |= PORT_CS_19_WOD;
if (flags & TB_WAKE_ON_USB4)
if (tb_is_upstream_port(port)) {
val |= PORT_CS_19_WOU4;
} else {
bool configured = val & PORT_CS_19_PC;
if ((flags & TB_WAKE_ON_CONNECT) && !configured)
val |= PORT_CS_19_WOC;
if ((flags & TB_WAKE_ON_DISCONNECT) && configured)
val |= PORT_CS_19_WOD;
if ((flags & TB_WAKE_ON_USB4) && configured)
val |= PORT_CS_19_WOU4;
}
ret = tb_port_write(port, &val, TB_CFG_PORT,
port->cap_usb4 + PORT_CS_19, 1);
@@ -487,7 +433,7 @@ int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags)
}
/*
* Enable wakes from PCIe and USB 3.x on this router. Only
* Enable wakes from PCIe, USB 3.x and DP on this router. Only
* needed for device routers.
*/
if (route) {
@@ -495,11 +441,13 @@ int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags)
if (ret)
return ret;
val &= ~(ROUTER_CS_5_WOP | ROUTER_CS_5_WOU);
val &= ~(ROUTER_CS_5_WOP | ROUTER_CS_5_WOU | ROUTER_CS_5_WOD);
if (flags & TB_WAKE_ON_USB3)
val |= ROUTER_CS_5_WOU;
if (flags & TB_WAKE_ON_PCIE)
val |= ROUTER_CS_5_WOP;
if (flags & TB_WAKE_ON_DP)
val |= ROUTER_CS_5_WOD;
ret = tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1);
if (ret)
@@ -595,12 +543,21 @@ static int usb4_switch_nvm_read_block(void *data,
int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size)
{
return usb4_do_read_data(address, buf, size,
usb4_switch_nvm_read_block, sw);
return tb_nvm_read_data(address, buf, size, USB4_DATA_RETRIES,
usb4_switch_nvm_read_block, sw);
}
static int usb4_switch_nvm_set_offset(struct tb_switch *sw,
unsigned int address)
/**
* usb4_switch_nvm_set_offset() - Set NVM write offset
* @sw: USB4 router
* @address: Start offset
*
* Explicitly sets NVM write offset. Normally when writing to NVM this
* is done automatically by usb4_switch_nvm_write().
*
* Returns %0 on success and negative errno if there was a failure.
*/
int usb4_switch_nvm_set_offset(struct tb_switch *sw, unsigned int address)
{
u32 metadata, dwaddress;
u8 status = 0;
@@ -618,8 +575,8 @@ static int usb4_switch_nvm_set_offset(struct tb_switch *sw,
return status ? -EIO : 0;
}
static int usb4_switch_nvm_write_next_block(void *data, const void *buf,
size_t dwords)
static int usb4_switch_nvm_write_next_block(void *data, unsigned int dwaddress,
const void *buf, size_t dwords)
{
struct tb_switch *sw = data;
u8 status;
@@ -652,8 +609,8 @@ int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
if (ret)
return ret;
return usb4_do_write_data(address, buf, size,
usb4_switch_nvm_write_next_block, sw);
return tb_nvm_write_data(address, buf, size, USB4_DATA_RETRIES,
usb4_switch_nvm_write_next_block, sw);
}
/**
@@ -735,6 +692,147 @@ int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status)
return 0;
}
/**
* usb4_switch_credits_init() - Read buffer allocation parameters
* @sw: USB4 router
*
* Reads @sw buffer allocation parameters and initializes @sw buffer
* allocation fields accordingly. Specifically @sw->credit_allocation
* is set to %true if these parameters can be used in tunneling.
*
* Returns %0 on success and negative errno otherwise.
*/
int usb4_switch_credits_init(struct tb_switch *sw)
{
int max_usb3, min_dp_aux, min_dp_main, max_pcie, max_dma;
int ret, length, i, nports;
const struct tb_port *port;
u32 data[NVM_DATA_DWORDS];
u32 metadata = 0;
u8 status = 0;
memset(data, 0, sizeof(data));
ret = usb4_switch_op_data(sw, USB4_SWITCH_OP_BUFFER_ALLOC, &metadata,
&status, NULL, 0, data, ARRAY_SIZE(data));
if (ret)
return ret;
if (status)
return -EIO;
length = metadata & USB4_BA_LENGTH_MASK;
if (WARN_ON(length > ARRAY_SIZE(data)))
return -EMSGSIZE;
max_usb3 = -1;
min_dp_aux = -1;
min_dp_main = -1;
max_pcie = -1;
max_dma = -1;
tb_sw_dbg(sw, "credit allocation parameters:\n");
for (i = 0; i < length; i++) {
u16 index, value;
index = data[i] & USB4_BA_INDEX_MASK;
value = (data[i] & USB4_BA_VALUE_MASK) >> USB4_BA_VALUE_SHIFT;
switch (index) {
case USB4_BA_MAX_USB3:
tb_sw_dbg(sw, " USB3: %u\n", value);
max_usb3 = value;
break;
case USB4_BA_MIN_DP_AUX:
tb_sw_dbg(sw, " DP AUX: %u\n", value);
min_dp_aux = value;
break;
case USB4_BA_MIN_DP_MAIN:
tb_sw_dbg(sw, " DP main: %u\n", value);
min_dp_main = value;
break;
case USB4_BA_MAX_PCIE:
tb_sw_dbg(sw, " PCIe: %u\n", value);
max_pcie = value;
break;
case USB4_BA_MAX_HI:
tb_sw_dbg(sw, " DMA: %u\n", value);
max_dma = value;
break;
default:
tb_sw_dbg(sw, " unknown credit allocation index %#x, skipping\n",
index);
break;
}
}
/*
* Validate the buffer allocation preferences. If we find
* issues, log a warning and fall back to using the hard-coded
* values.
*/
/* Host router must report baMaxHI */
if (!tb_route(sw) && max_dma < 0) {
tb_sw_warn(sw, "host router is missing baMaxHI\n");
goto err_invalid;
}
nports = 0;
tb_switch_for_each_port(sw, port) {
if (tb_port_is_null(port))
nports++;
}
/* Must have DP buffer allocation (multiple USB4 ports) */
if (nports > 2 && (min_dp_aux < 0 || min_dp_main < 0)) {
tb_sw_warn(sw, "multiple USB4 ports require baMinDPaux/baMinDPmain\n");
goto err_invalid;
}
tb_switch_for_each_port(sw, port) {
if (tb_port_is_dpout(port) && min_dp_main < 0) {
tb_sw_warn(sw, "missing baMinDPmain");
goto err_invalid;
}
if ((tb_port_is_dpin(port) || tb_port_is_dpout(port)) &&
min_dp_aux < 0) {
tb_sw_warn(sw, "missing baMinDPaux");
goto err_invalid;
}
if ((tb_port_is_usb3_down(port) || tb_port_is_usb3_up(port)) &&
max_usb3 < 0) {
tb_sw_warn(sw, "missing baMaxUSB3");
goto err_invalid;
}
if ((tb_port_is_pcie_down(port) || tb_port_is_pcie_up(port)) &&
max_pcie < 0) {
tb_sw_warn(sw, "missing baMaxPCIe");
goto err_invalid;
}
}
/*
* Buffer allocation passed the validation so we can use it in
* path creation.
*/
sw->credit_allocation = true;
if (max_usb3 > 0)
sw->max_usb3_credits = max_usb3;
if (min_dp_aux > 0)
sw->min_dp_aux_credits = min_dp_aux;
if (min_dp_main > 0)
sw->min_dp_main_credits = min_dp_main;
if (max_pcie > 0)
sw->max_pcie_credits = max_pcie;
if (max_dma > 0)
sw->max_dma_credits = max_dma;
return 0;
err_invalid:
return -EINVAL;
}
/**
* usb4_switch_query_dp_resource() - Query availability of DP IN resource
* @sw: USB4 router
@@ -896,6 +994,60 @@ struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
return NULL;
}
/**
* usb4_switch_add_ports() - Add USB4 ports for this router
* @sw: USB4 router
*
* For a USB4 router, finds all USB4 ports and registers a device for each.
* Can be called for any router.
*
* Returns %0 in case of success and negative errno in case of failure.
*/
int usb4_switch_add_ports(struct tb_switch *sw)
{
struct tb_port *port;
if (tb_switch_is_icm(sw) || !tb_switch_is_usb4(sw))
return 0;
tb_switch_for_each_port(sw, port) {
struct usb4_port *usb4;
if (!tb_port_is_null(port))
continue;
if (!port->cap_usb4)
continue;
usb4 = usb4_port_device_add(port);
if (IS_ERR(usb4)) {
usb4_switch_remove_ports(sw);
return PTR_ERR(usb4);
}
port->usb4 = usb4;
}
return 0;
}
/**
* usb4_switch_remove_ports() - Removes USB4 ports from this router
* @sw: USB4 router
*
* Unregisters previously registered USB4 ports.
*/
void usb4_switch_remove_ports(struct tb_switch *sw)
{
struct tb_port *port;
tb_switch_for_each_port(sw, port) {
if (port->usb4) {
usb4_port_device_remove(port->usb4);
port->usb4 = NULL;
}
}
}
/**
* usb4_port_unlock() - Unlock USB4 downstream port
* @port: USB4 port to unlock
@@ -1029,7 +1181,7 @@ static int usb4_port_wait_for_bit(struct tb_port *port, u32 offset, u32 bit,
static int usb4_port_read_data(struct tb_port *port, void *data, size_t dwords)
{
if (dwords > USB4_DATA_DWORDS)
if (dwords > NVM_DATA_DWORDS)
return -EINVAL;
return tb_port_read(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2,
@@ -1039,7 +1191,7 @@ static int usb4_port_read_data(struct tb_port *port, void *data, size_t dwords)
static int usb4_port_write_data(struct tb_port *port, const void *data,
size_t dwords)
{
if (dwords > USB4_DATA_DWORDS)
if (dwords > NVM_DATA_DWORDS)
return -EINVAL;
return tb_port_write(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2,
@@ -1175,6 +1327,48 @@ static int usb4_port_sb_op(struct tb_port *port, enum usb4_sb_target target,
return -ETIMEDOUT;
}
static int usb4_port_set_router_offline(struct tb_port *port, bool offline)
{
u32 val = !offline;
int ret;
ret = usb4_port_sb_write(port, USB4_SB_TARGET_ROUTER, 0,
USB4_SB_METADATA, &val, sizeof(val));
if (ret)
return ret;
val = USB4_SB_OPCODE_ROUTER_OFFLINE;
return usb4_port_sb_write(port, USB4_SB_TARGET_ROUTER, 0,
USB4_SB_OPCODE, &val, sizeof(val));
}
/**
* usb4_port_router_offline() - Put the USB4 port to offline mode
* @port: USB4 port
*
* This function puts the USB4 port into offline mode. In this mode the
* port does not react to hotplug events anymore. This needs to be
* called before retimer access is done when the USB4 link is not up.
*
* Returns %0 in case of success and negative errno if there was an
* error.
*/
int usb4_port_router_offline(struct tb_port *port)
{
return usb4_port_set_router_offline(port, true);
}
/**
* usb4_port_router_online() - Put the USB4 port back to online
* @port: USB4 port
*
* Makes the USB4 port functional again.
*/
int usb4_port_router_online(struct tb_port *port)
{
return usb4_port_set_router_offline(port, false);
}
/**
* usb4_port_enumerate_retimers() - Send RT broadcast transaction
* @port: USB4 port
@@ -1200,6 +1394,33 @@ static inline int usb4_port_retimer_op(struct tb_port *port, u8 index,
timeout_msec);
}
/**
* usb4_port_retimer_set_inbound_sbtx() - Enable sideband channel transactions
* @port: USB4 port
* @index: Retimer index
*
* Enables sideband channel transactions on SBTX. Can be used when the
* USB4 link does not go up, for example if there is no device connected.
*/
int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index)
{
int ret;
ret = usb4_port_retimer_op(port, index, USB4_SB_OPCODE_SET_INBOUND_SBTX,
500);
if (ret != -ENODEV)
return ret;
/*
* Per the USB4 retimer spec, the retimer is not required to
* send an RT (Retimer Transaction) response for the first
* SET_INBOUND_SBTX command
*/
return usb4_port_retimer_op(port, index, USB4_SB_OPCODE_SET_INBOUND_SBTX,
500);
}
/**
* usb4_port_retimer_read() - Read from retimer sideband registers
* @port: USB4 port
@@ -1292,8 +1513,19 @@ int usb4_port_retimer_nvm_sector_size(struct tb_port *port, u8 index)
return ret ? ret : metadata & USB4_NVM_SECTOR_SIZE_MASK;
}
static int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index,
unsigned int address)
/**
* usb4_port_retimer_nvm_set_offset() - Set NVM write offset
* @port: USB4 port
* @index: Retimer index
* @address: Start offset
*
* Explicitly sets NVM write offset. Normally when writing to NVM this is
* done automatically by usb4_port_retimer_nvm_write().
*
* Returns %0 on success and negative errno if there was a failure.
*/
int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index,
unsigned int address)
{
u32 metadata, dwaddress;
int ret;
@@ -1316,8 +1548,8 @@ struct retimer_info {
u8 index;
};
static int usb4_port_retimer_nvm_write_next_block(void *data, const void *buf,
size_t dwords)
static int usb4_port_retimer_nvm_write_next_block(void *data,
unsigned int dwaddress, const void *buf, size_t dwords)
{
const struct retimer_info *info = data;
@@ -1357,8 +1589,8 @@ int usb4_port_retimer_nvm_write(struct tb_port *port, u8 index, unsigned int add
if (ret)
return ret;
return usb4_do_write_data(address, buf, size,
usb4_port_retimer_nvm_write_next_block, &info);
return tb_nvm_write_data(address, buf, size, USB4_DATA_RETRIES,
usb4_port_retimer_nvm_write_next_block, &info);
}
/**
@@ -1442,7 +1674,7 @@ static int usb4_port_retimer_nvm_read_block(void *data, unsigned int dwaddress,
int ret;
metadata = dwaddress << USB4_NVM_READ_OFFSET_SHIFT;
if (dwords < USB4_DATA_DWORDS)
if (dwords < NVM_DATA_DWORDS)
metadata |= dwords << USB4_NVM_READ_LENGTH_SHIFT;
ret = usb4_port_retimer_write(port, index, USB4_SB_METADATA, &metadata,
@@ -1475,8 +1707,8 @@ int usb4_port_retimer_nvm_read(struct tb_port *port, u8 index,
{
struct retimer_info info = { .port = port, .index = index };
return usb4_do_read_data(address, buf, size,
usb4_port_retimer_nvm_read_block, &info);
return tb_nvm_read_data(address, buf, size, USB4_DATA_RETRIES,
usb4_port_retimer_nvm_read_block, &info);
}
/**


@@ -0,0 +1,280 @@
// SPDX-License-Identifier: GPL-2.0
/*
* USB4 port device
*
* Copyright (C) 2021, Intel Corporation
* Author: Mika Westerberg <mika.westerberg@linux.intel.com>
*/
#include <linux/pm_runtime.h>
#include "tb.h"
static ssize_t link_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
struct tb_port *port = usb4->port;
struct tb *tb = port->sw->tb;
const char *link;
if (mutex_lock_interruptible(&tb->lock))
return -ERESTARTSYS;
if (tb_is_upstream_port(port))
link = port->sw->link_usb4 ? "usb4" : "tbt";
else if (tb_port_has_remote(port))
link = port->remote->sw->link_usb4 ? "usb4" : "tbt";
else
link = "none";
mutex_unlock(&tb->lock);
return sysfs_emit(buf, "%s\n", link);
}
static DEVICE_ATTR_RO(link);
static struct attribute *common_attrs[] = {
&dev_attr_link.attr,
NULL
};
static const struct attribute_group common_group = {
.attrs = common_attrs,
};
static int usb4_port_offline(struct usb4_port *usb4)
{
struct tb_port *port = usb4->port;
int ret;
ret = tb_acpi_power_on_retimers(port);
if (ret)
return ret;
ret = usb4_port_router_offline(port);
if (ret) {
tb_acpi_power_off_retimers(port);
return ret;
}
ret = tb_retimer_scan(port, false);
if (ret) {
usb4_port_router_online(port);
tb_acpi_power_off_retimers(port);
}
return ret;
}
static void usb4_port_online(struct usb4_port *usb4)
{
struct tb_port *port = usb4->port;
usb4_port_router_online(port);
tb_acpi_power_off_retimers(port);
}
static ssize_t offline_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
return sysfs_emit(buf, "%d\n", usb4->offline);
}
static ssize_t offline_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
struct tb_port *port = usb4->port;
struct tb *tb = port->sw->tb;
bool val;
int ret;
ret = kstrtobool(buf, &val);
if (ret)
return ret;
pm_runtime_get_sync(&usb4->dev);
if (mutex_lock_interruptible(&tb->lock)) {
ret = -ERESTARTSYS;
goto out_rpm;
}
if (val == usb4->offline)
goto out_unlock;
/* Offline mode works only for ports that are not connected */
if (tb_port_has_remote(port)) {
ret = -EBUSY;
goto out_unlock;
}
if (val) {
ret = usb4_port_offline(usb4);
if (ret)
goto out_unlock;
} else {
usb4_port_online(usb4);
tb_retimer_remove_all(port);
}
usb4->offline = val;
tb_port_dbg(port, "%s offline mode\n", val ? "enter" : "exit");
out_unlock:
mutex_unlock(&tb->lock);
out_rpm:
pm_runtime_mark_last_busy(&usb4->dev);
pm_runtime_put_autosuspend(&usb4->dev);
return ret ? ret : count;
}
static DEVICE_ATTR_RW(offline);
static ssize_t rescan_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
struct tb_port *port = usb4->port;
struct tb *tb = port->sw->tb;
bool val;
int ret;
ret = kstrtobool(buf, &val);
if (ret)
return ret;
if (!val)
return count;
pm_runtime_get_sync(&usb4->dev);
if (mutex_lock_interruptible(&tb->lock)) {
ret = -ERESTARTSYS;
goto out_rpm;
}
/* Must be in offline mode already */
if (!usb4->offline) {
ret = -EINVAL;
goto out_unlock;
}
tb_retimer_remove_all(port);
ret = tb_retimer_scan(port, true);
out_unlock:
mutex_unlock(&tb->lock);
out_rpm:
pm_runtime_mark_last_busy(&usb4->dev);
pm_runtime_put_autosuspend(&usb4->dev);
return ret ? ret : count;
}
static DEVICE_ATTR_WO(rescan);
static struct attribute *service_attrs[] = {
&dev_attr_offline.attr,
&dev_attr_rescan.attr,
NULL
};
static umode_t service_attr_is_visible(struct kobject *kobj,
struct attribute *attr, int n)
{
struct device *dev = kobj_to_dev(kobj);
struct usb4_port *usb4 = tb_to_usb4_port_device(dev);
/*
* Always need some platform help to cycle the modes so that
* retimers can be accessed through the sideband.
*/
return usb4->can_offline ? attr->mode : 0;
}
static const struct attribute_group service_group = {
.attrs = service_attrs,
.is_visible = service_attr_is_visible,
};
static const struct attribute_group *usb4_port_device_groups[] = {
&common_group,
&service_group,
NULL
};
static void usb4_port_device_release(struct device *dev)
{
struct usb4_port *usb4 = container_of(dev, struct usb4_port, dev);
kfree(usb4);
}
struct device_type usb4_port_device_type = {
.name = "usb4_port",
.groups = usb4_port_device_groups,
.release = usb4_port_device_release,
};
/**
* usb4_port_device_add() - Add USB4 port device
* @port: Lane 0 adapter port to add the USB4 port
*
* Creates and registers a USB4 port device for @port. Returns the new
* USB4 port device pointer or ERR_PTR() in case of error.
*/
struct usb4_port *usb4_port_device_add(struct tb_port *port)
{
struct usb4_port *usb4;
int ret;
usb4 = kzalloc(sizeof(*usb4), GFP_KERNEL);
if (!usb4)
return ERR_PTR(-ENOMEM);
usb4->port = port;
usb4->dev.type = &usb4_port_device_type;
usb4->dev.parent = &port->sw->dev;
dev_set_name(&usb4->dev, "usb4_port%d", port->port);
ret = device_register(&usb4->dev);
if (ret) {
put_device(&usb4->dev);
return ERR_PTR(ret);
}
pm_runtime_no_callbacks(&usb4->dev);
pm_runtime_set_active(&usb4->dev);
pm_runtime_enable(&usb4->dev);
pm_runtime_set_autosuspend_delay(&usb4->dev, TB_AUTOSUSPEND_DELAY);
pm_runtime_mark_last_busy(&usb4->dev);
pm_runtime_use_autosuspend(&usb4->dev);
return usb4;
}
/**
* usb4_port_device_remove() - Removes USB4 port device
* @usb4: USB4 port device
*
* Unregisters the USB4 port device from the system. The device will be
* released when the last reference is dropped.
*/
void usb4_port_device_remove(struct usb4_port *usb4)
{
device_unregister(&usb4->dev);
}
/**
* usb4_port_device_resume() - Resumes USB4 port device
* @usb4: USB4 port device
*
* Used to resume the USB4 port device after system sleep.
*/
int usb4_port_device_resume(struct usb4_port *usb4)
{
return usb4->offline ? usb4_port_offline(usb4) : 0;
}


@@ -1527,6 +1527,13 @@ int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
return ret;
}
ret = tb_port_wait_for_link_width(port, 2, 100);
if (ret) {
tb_port_warn(port, "timeout enabling lane bonding\n");
return ret;
}
tb_port_update_credits(port);
tb_xdomain_update_link_attributes(xd);
dev_dbg(&xd->dev, "lane bonding enabled\n");
@@ -1548,7 +1555,10 @@ void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd)
port = tb_port_at(xd->route, tb_xdomain_parent(xd));
if (port->dual_link_port) {
tb_port_lane_bonding_disable(port);
if (tb_port_wait_for_link_width(port, 1, 100) == -ETIMEDOUT)
tb_port_warn(port, "timeout disabling lane bonding\n");
tb_port_disable(port->dual_link_port);
tb_port_update_credits(port);
tb_xdomain_update_link_attributes(xd);
dev_dbg(&xd->dev, "lane bonding disabled\n");


@@ -180,7 +180,7 @@ struct cxacru_data {
struct mutex poll_state_serialize;
enum cxacru_poll_state poll_state;
/* contol handles */
/* control handles */
struct mutex cm_serialize;
u8 *rcv_buf;
u8 *snd_buf;


@@ -677,7 +677,7 @@ static int cdns3_gadget_ep0_set_halt(struct usb_ep *ep, int value)
}
/**
* cdns3_gadget_ep0_queue Transfer data on endpoint zero
* cdns3_gadget_ep0_queue - Transfer data on endpoint zero
* @ep: pointer to endpoint zero object
* @request: pointer to request object
* @gfp_flags: gfp flags
@@ -772,7 +772,7 @@ static int cdns3_gadget_ep0_queue(struct usb_ep *ep,
}
/**
* cdns3_gadget_ep_set_wedge Set wedge on selected endpoint
* cdns3_gadget_ep_set_wedge - Set wedge on selected endpoint
* @ep: endpoint object
*
* Returns 0
@@ -865,7 +865,7 @@ void cdns3_ep0_config(struct cdns3_device *priv_dev)
}
/**
* cdns3_init_ep0 Initializes software endpoint 0 of gadget
* cdns3_init_ep0 - Initializes software endpoint 0 of gadget
* @priv_dev: extended gadget object
* @priv_ep: extended endpoint object
*


@@ -155,7 +155,7 @@ static struct cdns3_request *cdns3_next_priv_request(struct list_head *list)
}
/**
* select_ep - selects endpoint
* cdns3_select_ep - selects endpoint
* @priv_dev: extended gadget object
* @ep: endpoint address
*/
@@ -430,9 +430,7 @@ static int cdns3_start_all_request(struct cdns3_device *priv_dev,
if (ret)
return ret;
list_del(&request->list);
list_add_tail(&request->list,
&priv_ep->pending_req_list);
list_move_tail(&request->list, &priv_ep->pending_req_list);
if (request->stream_id != 0 || (priv_ep->flags & EP_TDLCHK_EN))
break;
}
@@ -484,7 +482,7 @@ static void __cdns3_descmiss_copy_data(struct usb_request *request,
}
/**
* cdns3_wa2_descmiss_copy_data copy data from internal requests to
* cdns3_wa2_descmiss_copy_data - copy data from internal requests to
* request queued by class driver.
* @priv_ep: extended endpoint object
* @request: request object
@@ -1835,7 +1833,7 @@ __must_hold(&priv_dev->lock)
}
/**
* cdns3_device_irq_handler- interrupt handler for device part of controller
* cdns3_device_irq_handler - interrupt handler for device part of controller
*
* @irq: irq number for cdns3 core device
* @data: structure of cdns3
@@ -1879,7 +1877,7 @@ static irqreturn_t cdns3_device_irq_handler(int irq, void *data)
}
/**
* cdns3_device_thread_irq_handler- interrupt handler for device part
* cdns3_device_thread_irq_handler - interrupt handler for device part
* of controller
*
* @irq: irq number for cdns3 core device
@@ -2022,7 +2020,7 @@ static void cdns3_configure_dmult(struct cdns3_device *priv_dev,
}
/**
* cdns3_ep_config Configure hardware endpoint
* cdns3_ep_config - Configure hardware endpoint
* @priv_ep: extended endpoint object
* @enable: set EP_CFG_ENABLE bit in ep_cfg register.
*/
@@ -2221,7 +2219,7 @@ usb_ep *cdns3_gadget_match_ep(struct usb_gadget *gadget,
}
/**
* cdns3_gadget_ep_alloc_request Allocates request
* cdns3_gadget_ep_alloc_request - Allocates request
* @ep: endpoint object associated with request
* @gfp_flags: gfp flags
*
@@ -2244,7 +2242,7 @@ struct usb_request *cdns3_gadget_ep_alloc_request(struct usb_ep *ep,
}
/**
* cdns3_gadget_ep_free_request Free memory occupied by request
* cdns3_gadget_ep_free_request - Free memory occupied by request
* @ep: endpoint object associated with request
* @request: request to free memory
*/
@@ -2261,7 +2259,7 @@ void cdns3_gadget_ep_free_request(struct usb_ep *ep,
}
/**
* cdns3_gadget_ep_enable Enable endpoint
* cdns3_gadget_ep_enable - Enable endpoint
* @ep: endpoint object
* @desc: endpoint descriptor
*
@@ -2396,7 +2394,7 @@ static int cdns3_gadget_ep_enable(struct usb_ep *ep,
}
/**
* cdns3_gadget_ep_disable Disable endpoint
* cdns3_gadget_ep_disable - Disable endpoint
* @ep: endpoint object
*
* Returns 0 on success, error code elsewhere
@@ -2486,7 +2484,7 @@ static int cdns3_gadget_ep_disable(struct usb_ep *ep)
}
/**
* cdns3_gadget_ep_queue Transfer data on endpoint
* __cdns3_gadget_ep_queue - Transfer data on endpoint
* @ep: endpoint object
* @request: request object
* @gfp_flags: gfp flags
@@ -2586,7 +2584,7 @@ static int cdns3_gadget_ep_queue(struct usb_ep *ep, struct usb_request *request,
}
/**
* cdns3_gadget_ep_dequeue Remove request from transfer queue
* cdns3_gadget_ep_dequeue - Remove request from transfer queue
* @ep: endpoint object associated with request
* @request: request object
*
@@ -2653,7 +2651,7 @@ int cdns3_gadget_ep_dequeue(struct usb_ep *ep,
}
/**
* __cdns3_gadget_ep_set_halt Sets stall on selected endpoint
* __cdns3_gadget_ep_set_halt - Sets stall on selected endpoint
* Should be called after acquiring spin_lock and selecting ep
* @priv_ep: endpoint object to set stall on.
*/
@@ -2674,7 +2672,7 @@ void __cdns3_gadget_ep_set_halt(struct cdns3_endpoint *priv_ep)
}
/**
* __cdns3_gadget_ep_clear_halt Clears stall on selected endpoint
* __cdns3_gadget_ep_clear_halt - Clears stall on selected endpoint
* Should be called after acquiring spin_lock and selecting ep
* @priv_ep: endpoint object to clear stall on
*/
@@ -2719,7 +2717,7 @@ int __cdns3_gadget_ep_clear_halt(struct cdns3_endpoint *priv_ep)
}
/**
* cdns3_gadget_ep_set_halt Sets/clears stall on selected endpoint
* cdns3_gadget_ep_set_halt - Sets/clears stall on selected endpoint
* @ep: endpoint object to set/clear stall on
* @value: 1 for set stall, 0 for clear stall
*
@@ -2765,7 +2763,7 @@ static const struct usb_ep_ops cdns3_gadget_ep_ops = {
};
/**
* cdns3_gadget_get_frame Returns number of actual ITP frame
* cdns3_gadget_get_frame - Returns number of actual ITP frame
* @gadget: gadget object
*
* Returns number of actual ITP frame
@@ -2874,7 +2872,7 @@ static void cdns3_gadget_config(struct cdns3_device *priv_dev)
}
/**
* cdns3_gadget_udc_start Gadget start
* cdns3_gadget_udc_start - Gadget start
* @gadget: gadget object
* @driver: driver which operates on this gadget
*
@@ -2920,7 +2918,7 @@ static int cdns3_gadget_udc_start(struct usb_gadget *gadget,
}
/**
* cdns3_gadget_udc_stop Stops gadget
* cdns3_gadget_udc_stop - Stops gadget
* @gadget: gadget object
*
* Returns 0
@@ -2983,7 +2981,7 @@ static void cdns3_free_all_eps(struct cdns3_device *priv_dev)
}
/**
* cdns3_init_eps Initializes software endpoints of gadget
* cdns3_init_eps - Initializes software endpoints of gadget
* @priv_dev: extended gadget object
*
* Returns 0 on success, error code elsewhere

View File

@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/**
/*
* cdns3-imx.c - NXP i.MX specific Glue layer for Cadence USB Controller
*
* Copyright (C) 2019 NXP

View File

@@ -170,7 +170,7 @@ static int cdns3_plat_probe(struct platform_device *pdev)
}
/**
* cdns3_remove - unbind drd driver and clean up
* cdns3_plat_remove() - unbind drd driver and clean up
* @pdev: Pointer to Linux platform device
*
* Returns 0 on success otherwise negative errno

View File

@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
/**
/*
* cdns3-ti.c - TI specific Glue layer for Cadence USB Controller
*
* Copyright (C) 2019 Texas Instruments Incorporated - https://www.ti.com

View File

@@ -56,7 +56,8 @@ u32 cdnsp_port_state_to_neutral(u32 state)
}
/**
* Find the offset of the extended capabilities with capability ID id.
* cdnsp_find_next_ext_cap - Find the offset of the extended capabilities
* with capability ID id.
* @base: PCI MMIO registers base address.
* @start: Address at which to start looking, (0 or HCC_PARAMS to start at
* beginning of list)
@@ -1151,7 +1152,7 @@ static int cdnsp_gadget_ep_set_halt(struct usb_ep *ep, int value)
struct cdnsp_ep *pep = to_cdnsp_ep(ep);
struct cdnsp_device *pdev = pep->pdev;
struct cdnsp_request *preq;
unsigned long flags = 0;
unsigned long flags;
int ret;
spin_lock_irqsave(&pdev->lock, flags);
@@ -1176,7 +1177,7 @@ static int cdnsp_gadget_ep_set_wedge(struct usb_ep *ep)
{
struct cdnsp_ep *pep = to_cdnsp_ep(ep);
struct cdnsp_device *pdev = pep->pdev;
unsigned long flags = 0;
unsigned long flags;
int ret;
spin_lock_irqsave(&pdev->lock, flags);

View File

@@ -1082,9 +1082,8 @@ void cdnsp_mem_cleanup(struct cdnsp_device *pdev)
dma_pool_destroy(pdev->device_pool);
pdev->device_pool = NULL;
if (pdev->dcbaa)
dma_free_coherent(dev, sizeof(*pdev->dcbaa),
pdev->dcbaa, pdev->dcbaa->dma);
dma_free_coherent(dev, sizeof(*pdev->dcbaa),
pdev->dcbaa, pdev->dcbaa->dma);
pdev->dcbaa = NULL;

View File

@@ -332,7 +332,7 @@ int cdns_hw_role_switch(struct cdns *cdns)
}
/**
* cdsn3_role_get - get current role of controller.
* cdns_role_get - get current role of controller.
*
* @sw: pointer to USB role switch structure
*
@@ -419,7 +419,7 @@ static irqreturn_t cdns_wakeup_irq(int irq, void *data)
}
/**
* cdns_probe - probe for cdns3/cdnsp core device
* cdns_init - probe for cdns3/cdnsp core device
* @cdns: Pointer to cdns structure.
*
* Returns 0 on success otherwise negative errno

View File

@@ -195,7 +195,6 @@ struct hw_bank {
* @phy: pointer to PHY, if any
* @usb_phy: pointer to USB PHY, if any and if using the USB PHY framework
* @hcd: pointer to usb_hcd for ehci host driver
* @debugfs: root dentry for this controller in debugfs
* @id_event: indicates there is an id event, and handled at ci_otg_work
* @b_sess_valid_event: indicates there is a vbus event, and handled
* at ci_otg_work
@@ -249,7 +248,6 @@ struct ci_hdrc {
/* old usb_phy interface */
struct usb_phy *usb_phy;
struct usb_hcd *hcd;
struct dentry *debugfs;
bool id_event;
bool b_sess_valid_event;
bool imx28_write_fix;

View File

@@ -335,7 +335,7 @@ static int _ci_usb_phy_init(struct ci_hdrc *ci)
}
/**
* _ci_usb_phy_exit: deinitialize phy taking in account both phy and usb_phy
* ci_usb_phy_exit: deinitialize phy taking in account both phy and usb_phy
* interfaces
* @ci: the controller
*/

View File

@@ -342,26 +342,20 @@ DEFINE_SHOW_ATTRIBUTE(ci_registers);
*/
void dbg_create_files(struct ci_hdrc *ci)
{
ci->debugfs = debugfs_create_dir(dev_name(ci->dev), usb_debug_root);
struct dentry *dir;
debugfs_create_file("device", S_IRUGO, ci->debugfs, ci,
&ci_device_fops);
debugfs_create_file("port_test", S_IRUGO | S_IWUSR, ci->debugfs, ci,
&ci_port_test_fops);
debugfs_create_file("qheads", S_IRUGO, ci->debugfs, ci,
&ci_qheads_fops);
debugfs_create_file("requests", S_IRUGO, ci->debugfs, ci,
&ci_requests_fops);
dir = debugfs_create_dir(dev_name(ci->dev), usb_debug_root);
if (ci_otg_is_fsm_mode(ci)) {
debugfs_create_file("otg", S_IRUGO, ci->debugfs, ci,
&ci_otg_fops);
}
debugfs_create_file("device", S_IRUGO, dir, ci, &ci_device_fops);
debugfs_create_file("port_test", S_IRUGO | S_IWUSR, dir, ci, &ci_port_test_fops);
debugfs_create_file("qheads", S_IRUGO, dir, ci, &ci_qheads_fops);
debugfs_create_file("requests", S_IRUGO, dir, ci, &ci_requests_fops);
debugfs_create_file("role", S_IRUGO | S_IWUSR, ci->debugfs, ci,
&ci_role_fops);
debugfs_create_file("registers", S_IRUGO, ci->debugfs, ci,
&ci_registers_fops);
if (ci_otg_is_fsm_mode(ci))
debugfs_create_file("otg", S_IRUGO, dir, ci, &ci_otg_fops);
debugfs_create_file("role", S_IRUGO | S_IWUSR, dir, ci, &ci_role_fops);
debugfs_create_file("registers", S_IRUGO, dir, ci, &ci_registers_fops);
}
/**
@@ -370,5 +364,5 @@ void dbg_create_files(struct ci_hdrc *ci)
*/
void dbg_remove_files(struct ci_hdrc *ci)
{
debugfs_remove_recursive(ci->debugfs);
debugfs_remove(debugfs_lookup(dev_name(ci->dev), usb_debug_root));
}

View File

@@ -22,7 +22,7 @@
#include "otg_fsm.h"
/**
* hw_read_otgsc returns otgsc register bits value.
* hw_read_otgsc - returns otgsc register bits value.
* @ci: the controller
* @mask: bitfield mask
*/
@@ -75,7 +75,7 @@ u32 hw_read_otgsc(struct ci_hdrc *ci, u32 mask)
}
/**
* hw_write_otgsc updates target bits of OTGSC register.
* hw_write_otgsc - updates target bits of OTGSC register.
* @ci: the controller
* @mask: bitfield mask
* @data: to be written
@@ -140,8 +140,9 @@ void ci_handle_vbus_change(struct ci_hdrc *ci)
}
/**
* When we switch to device mode, the vbus value should be lower
* than OTGSC_BSV before connecting to host.
* hw_wait_vbus_lower_bsv - When we switch to device mode, the vbus value
* should be lower than OTGSC_BSV before connecting
* to host.
*
* @ci: the controller
*

View File

@@ -238,7 +238,7 @@ static int hw_ep_set_halt(struct ci_hdrc *ci, int num, int dir, int value)
}
/**
* hw_is_port_high_speed: test if port is high speed
* hw_port_is_high_speed: test if port is high speed
* @ci: the controller
*
* This function returns true if high speed port

View File

@@ -1946,6 +1946,11 @@ static const struct usb_device_id acm_ids[] = {
.driver_info = IGNORE_DEVICE,
},
/* Exclude Heimann Sensor GmbH USB appset demo */
{ USB_DEVICE(0x32a7, 0x0000),
.driver_info = IGNORE_DEVICE,
},
/* control interfaces without any protocol set */
{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
USB_CDC_PROTO_NONE) },

View File

@@ -1035,9 +1035,10 @@ static int wdm_create(struct usb_interface *intf, struct usb_endpoint_descriptor
INIT_WORK(&desc->rxwork, wdm_rxwork);
INIT_WORK(&desc->service_outs_intr, service_interrupt_work);
rv = -EINVAL;
if (!usb_endpoint_is_int_in(ep))
if (!usb_endpoint_is_int_in(ep)) {
rv = -EINVAL;
goto err;
}
desc->wMaxPacketSize = usb_endpoint_maxp(ep);

View File

@@ -141,7 +141,7 @@ static const struct device_type ulpi_dev_type = {
/* -------------------------------------------------------------------------- */
/**
* ulpi_register_driver - register a driver with the ULPI bus
* __ulpi_register_driver - register a driver with the ULPI bus
* @drv: driver being registered
* @module: ends up being THIS_MODULE
*

View File

@@ -83,11 +83,11 @@ static void usb_conn_detect_cable(struct work_struct *work)
else
role = USB_ROLE_NONE;
dev_dbg(info->dev, "role %d/%d, gpios: id %d, vbus %d\n",
info->last_role, role, id, vbus);
dev_dbg(info->dev, "role %s -> %s, gpios: id %d, vbus %d\n",
usb_role_string(info->last_role), usb_role_string(role), id, vbus);
if (info->last_role == role) {
dev_warn(info->dev, "repeated role: %d\n", role);
dev_warn(info->dev, "repeated role: %s\n", usb_role_string(role));
return;
}
@@ -149,14 +149,32 @@ static int usb_charger_get_property(struct power_supply *psy,
return 0;
}
static int usb_conn_probe(struct platform_device *pdev)
static int usb_conn_psy_register(struct usb_conn_info *info)
{
struct device *dev = &pdev->dev;
struct power_supply_desc *desc;
struct usb_conn_info *info;
struct device *dev = info->dev;
struct power_supply_desc *desc = &info->desc;
struct power_supply_config cfg = {
.of_node = dev->of_node,
};
desc->name = "usb-charger";
desc->properties = usb_charger_properties;
desc->num_properties = ARRAY_SIZE(usb_charger_properties);
desc->get_property = usb_charger_get_property;
desc->type = POWER_SUPPLY_TYPE_USB;
cfg.drv_data = info;
info->charger = devm_power_supply_register(dev, desc, &cfg);
if (IS_ERR(info->charger))
dev_err(dev, "Unable to register charger\n");
return PTR_ERR_OR_ZERO(info->charger);
}
static int usb_conn_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct usb_conn_info *info;
bool need_vbus = true;
int ret = 0;
@@ -205,18 +223,18 @@ static int usb_conn_probe(struct platform_device *pdev)
}
if (IS_ERR(info->vbus)) {
if (PTR_ERR(info->vbus) != -EPROBE_DEFER)
dev_err(dev, "failed to get vbus: %ld\n", PTR_ERR(info->vbus));
return PTR_ERR(info->vbus);
ret = PTR_ERR(info->vbus);
return dev_err_probe(dev, ret, "failed to get vbus :%d\n", ret);
}
info->role_sw = usb_role_switch_get(dev);
if (IS_ERR(info->role_sw)) {
if (PTR_ERR(info->role_sw) != -EPROBE_DEFER)
dev_err(dev, "failed to get role switch\n");
if (IS_ERR(info->role_sw))
return dev_err_probe(dev, PTR_ERR(info->role_sw),
"failed to get role switch\n");
return PTR_ERR(info->role_sw);
}
ret = usb_conn_psy_register(info);
if (ret)
goto put_role_sw;
if (info->id_gpiod) {
info->id_irq = gpiod_to_irq(info->id_gpiod);
@@ -252,20 +270,6 @@ static int usb_conn_probe(struct platform_device *pdev)
}
}
desc = &info->desc;
desc->name = "usb-charger";
desc->properties = usb_charger_properties;
desc->num_properties = ARRAY_SIZE(usb_charger_properties);
desc->get_property = usb_charger_get_property;
desc->type = POWER_SUPPLY_TYPE_USB;
cfg.drv_data = info;
info->charger = devm_power_supply_register(dev, desc, &cfg);
if (IS_ERR(info->charger)) {
dev_err(dev, "Unable to register charger\n");
return PTR_ERR(info->charger);
}
platform_set_drvdata(pdev, info);
/* Perform initial detection */

View File

@@ -1162,7 +1162,7 @@ static int do_proc_control(struct usb_dev_state *ps,
tbuf, ctrl->wLength);
usb_unlock_device(dev);
i = usb_control_msg(dev, usb_sndctrlpipe(dev, 0), ctrl->bRequest,
i = usb_control_msg(dev, pipe, ctrl->bRequest,
ctrl->bRequestType, ctrl->wValue, ctrl->wIndex,
tbuf, ctrl->wLength, tmo);
usb_lock_device(dev);

View File

@@ -2110,6 +2110,136 @@ int usb_hcd_get_frame_number (struct usb_device *udev)
return hcd->driver->get_frame_number (hcd);
}
/*-------------------------------------------------------------------------*/
#ifdef CONFIG_USB_HCD_TEST_MODE
static void usb_ehset_completion(struct urb *urb)
{
struct completion *done = urb->context;
complete(done);
}
/*
* Allocate and initialize a control URB. This request will be used by the
* EHSET SINGLE_STEP_SET_FEATURE test in which the DATA and STATUS stages
* of the GetDescriptor request are sent 15 seconds after the SETUP stage.
* Return NULL if failed.
*/
static struct urb *request_single_step_set_feature_urb(
struct usb_device *udev,
void *dr,
void *buf,
struct completion *done)
{
struct urb *urb;
struct usb_hcd *hcd = bus_to_hcd(udev->bus);
struct usb_host_endpoint *ep;
urb = usb_alloc_urb(0, GFP_KERNEL);
if (!urb)
return NULL;
urb->pipe = usb_rcvctrlpipe(udev, 0);
ep = (usb_pipein(urb->pipe) ? udev->ep_in : udev->ep_out)
[usb_pipeendpoint(urb->pipe)];
if (!ep) {
usb_free_urb(urb);
return NULL;
}
urb->ep = ep;
urb->dev = udev;
urb->setup_packet = (void *)dr;
urb->transfer_buffer = buf;
urb->transfer_buffer_length = USB_DT_DEVICE_SIZE;
urb->complete = usb_ehset_completion;
urb->status = -EINPROGRESS;
urb->actual_length = 0;
urb->transfer_flags = URB_DIR_IN;
usb_get_urb(urb);
atomic_inc(&urb->use_count);
atomic_inc(&urb->dev->urbnum);
if (map_urb_for_dma(hcd, urb, GFP_KERNEL)) {
usb_put_urb(urb);
usb_free_urb(urb);
return NULL;
}
urb->context = done;
return urb;
}
int ehset_single_step_set_feature(struct usb_hcd *hcd, int port)
{
int retval = -ENOMEM;
struct usb_ctrlrequest *dr;
struct urb *urb;
struct usb_device *udev;
struct usb_device_descriptor *buf;
DECLARE_COMPLETION_ONSTACK(done);
/* Obtain udev of the rhub's child port */
udev = usb_hub_find_child(hcd->self.root_hub, port);
if (!udev) {
dev_err(hcd->self.controller, "No device attached to the RootHub\n");
return -ENODEV;
}
buf = kmalloc(USB_DT_DEVICE_SIZE, GFP_KERNEL);
if (!buf)
return -ENOMEM;
dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_KERNEL);
if (!dr) {
kfree(buf);
return -ENOMEM;
}
/* Fill Setup packet for GetDescriptor */
dr->bRequestType = USB_DIR_IN;
dr->bRequest = USB_REQ_GET_DESCRIPTOR;
dr->wValue = cpu_to_le16(USB_DT_DEVICE << 8);
dr->wIndex = 0;
dr->wLength = cpu_to_le16(USB_DT_DEVICE_SIZE);
urb = request_single_step_set_feature_urb(udev, dr, buf, &done);
if (!urb)
goto cleanup;
/* Submit just the SETUP stage */
retval = hcd->driver->submit_single_step_set_feature(hcd, urb, 1);
if (retval)
goto out1;
if (!wait_for_completion_timeout(&done, msecs_to_jiffies(2000))) {
usb_kill_urb(urb);
retval = -ETIMEDOUT;
dev_err(hcd->self.controller,
"%s SETUP stage timed out on ep0\n", __func__);
goto out1;
}
msleep(15 * 1000);
/* Complete remaining DATA and STATUS stages using the same URB */
urb->status = -EINPROGRESS;
usb_get_urb(urb);
atomic_inc(&urb->use_count);
atomic_inc(&urb->dev->urbnum);
retval = hcd->driver->submit_single_step_set_feature(hcd, urb, 0);
if (!retval && !wait_for_completion_timeout(&done,
msecs_to_jiffies(2000))) {
usb_kill_urb(urb);
retval = -ETIMEDOUT;
dev_err(hcd->self.controller,
"%s IN stage timed out on ep0\n", __func__);
}
out1:
usb_free_urb(urb);
cleanup:
kfree(dr);
kfree(buf);
return retval;
}
EXPORT_SYMBOL_GPL(ehset_single_step_set_feature);
#endif /* CONFIG_USB_HCD_TEST_MODE */
/*-------------------------------------------------------------------------*/
#ifdef CONFIG_PM

View File

@@ -2434,6 +2434,8 @@ static void set_usb_port_removable(struct usb_device *udev)
u16 wHubCharacteristics;
bool removable = true;
dev_set_removable(&udev->dev, DEVICE_REMOVABLE_UNKNOWN);
if (!hdev)
return;
@@ -2445,11 +2447,11 @@ static void set_usb_port_removable(struct usb_device *udev)
*/
switch (hub->ports[udev->portnum - 1]->connect_type) {
case USB_PORT_CONNECT_TYPE_HOT_PLUG:
udev->removable = USB_DEVICE_REMOVABLE;
dev_set_removable(&udev->dev, DEVICE_REMOVABLE);
return;
case USB_PORT_CONNECT_TYPE_HARD_WIRED:
case USB_PORT_NOT_USED:
udev->removable = USB_DEVICE_FIXED;
dev_set_removable(&udev->dev, DEVICE_FIXED);
return;
default:
break;
@@ -2474,9 +2476,9 @@ static void set_usb_port_removable(struct usb_device *udev)
}
if (removable)
udev->removable = USB_DEVICE_REMOVABLE;
dev_set_removable(&udev->dev, DEVICE_REMOVABLE);
else
udev->removable = USB_DEVICE_FIXED;
dev_set_removable(&udev->dev, DEVICE_FIXED);
}
@@ -2548,8 +2550,7 @@ int usb_new_device(struct usb_device *udev)
device_enable_async_suspend(&udev->dev);
/* check whether the hub or firmware marks this port as non-removable */
if (udev->parent)
set_usb_port_removable(udev);
set_usb_port_removable(udev);
/* Register the device. The device driver is responsible
* for configuring the device and invoking the add-device
@@ -3387,6 +3388,26 @@ int usb_port_suspend(struct usb_device *udev, pm_message_t msg)
status = 0;
}
if (status) {
/* Check if the port has been suspended for the timeout case
* to prevent the suspended port from incorrect handling.
*/
if (status == -ETIMEDOUT) {
int ret;
u16 portstatus, portchange;
portstatus = portchange = 0;
ret = hub_port_status(hub, port1, &portstatus,
&portchange);
dev_dbg(&port_dev->dev,
"suspend timeout, status %04x\n", portstatus);
if (ret == 0 && port_is_suspended(hub, portstatus)) {
status = 0;
goto suspend_done;
}
}
dev_dbg(&port_dev->dev, "can't suspend, status %d\n", status);
/* Try to enable USB3 LTM again */
@@ -3403,6 +3424,7 @@ int usb_port_suspend(struct usb_device *udev, pm_message_t msg)
if (!PMSG_IS_AUTO(msg))
status = 0;
} else {
suspend_done:
dev_dbg(&udev->dev, "usb %ssuspend, wakeup %d\n",
(PMSG_IS_AUTO(msg) ? "auto-" : ""),
udev->do_remote_wakeup);

View File

@@ -783,6 +783,9 @@ int usb_get_descriptor(struct usb_device *dev, unsigned char type,
int i;
int result;
if (size <= 0) /* No point in asking for no data */
return -EINVAL;
memset(buf, 0, size); /* Make sure we parse really received data */
for (i = 0; i < 3; ++i) {
@@ -832,6 +835,9 @@ static int usb_get_string(struct usb_device *dev, unsigned short langid,
int i;
int result;
if (size <= 0) /* No point in asking for no data */
return -EINVAL;
for (i = 0; i < 3; ++i) {
/* retry on length 0 or stall; some devices are flakey */
result = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),

View File

@@ -406,7 +406,6 @@ static const struct usb_device_id usb_quirk_list[] = {
/* Realtek hub in Dell WD19 (Type-C) */
{ USB_DEVICE(0x0bda, 0x0487), .driver_info = USB_QUIRK_NO_LPM },
{ USB_DEVICE(0x0bda, 0x5487), .driver_info = USB_QUIRK_RESET_RESUME },
/* Generic RTL8153 based ethernet adapters */
{ USB_DEVICE(0x0bda, 0x8153), .driver_info = USB_QUIRK_NO_LPM },

View File

@@ -301,29 +301,6 @@ static ssize_t urbnum_show(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RO(urbnum);
static ssize_t removable_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct usb_device *udev;
char *state;
udev = to_usb_device(dev);
switch (udev->removable) {
case USB_DEVICE_REMOVABLE:
state = "removable";
break;
case USB_DEVICE_FIXED:
state = "fixed";
break;
default:
state = "unknown";
}
return sprintf(buf, "%s\n", state);
}
static DEVICE_ATTR_RO(removable);
static ssize_t ltm_capable_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@@ -828,7 +805,6 @@ static struct attribute *dev_attrs[] = {
&dev_attr_avoid_reset_quirk.attr,
&dev_attr_authorized.attr,
&dev_attr_remove.attr,
&dev_attr_removable.attr,
&dev_attr_ltm_capable.attr,
#ifdef CONFIG_OF
&dev_attr_devspec.attr,

View File

@@ -407,6 +407,15 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
return -ENOEXEC;
is_out = !(setup->bRequestType & USB_DIR_IN) ||
!setup->wLength;
dev_WARN_ONCE(&dev->dev, (usb_pipeout(urb->pipe) != is_out),
"BOGUS control dir, pipe %x doesn't match bRequestType %x\n",
urb->pipe, setup->bRequestType);
if (le16_to_cpu(setup->wLength) != urb->transfer_buffer_length) {
dev_dbg(&dev->dev, "BOGUS control len %d doesn't match transfer length %d\n",
le16_to_cpu(setup->wLength),
urb->transfer_buffer_length);
return -EBADR;
}
} else {
is_out = usb_endpoint_dir_out(&ep->desc);
}

View File

@@ -1111,15 +1111,6 @@ static int dwc2_hs_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
usbcfg &= ~(GUSBCFG_ULPI_UTMI_SEL | GUSBCFG_PHYIF16);
if (hsotg->params.phy_utmi_width == 16)
usbcfg |= GUSBCFG_PHYIF16;
/* Set turnaround time */
if (dwc2_is_device_mode(hsotg)) {
usbcfg &= ~GUSBCFG_USBTRDTIM_MASK;
if (hsotg->params.phy_utmi_width == 16)
usbcfg |= 5 << GUSBCFG_USBTRDTIM_SHIFT;
else
usbcfg |= 9 << GUSBCFG_USBTRDTIM_SHIFT;
}
break;
default:
dev_err(hsotg->dev, "FS PHY selected at HS!\n");
@@ -1141,6 +1132,24 @@ static int dwc2_hs_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
return retval;
}
static void dwc2_set_turnaround_time(struct dwc2_hsotg *hsotg)
{
u32 usbcfg;
if (hsotg->params.phy_type != DWC2_PHY_TYPE_PARAM_UTMI)
return;
usbcfg = dwc2_readl(hsotg, GUSBCFG);
usbcfg &= ~GUSBCFG_USBTRDTIM_MASK;
if (hsotg->params.phy_utmi_width == 16)
usbcfg |= 5 << GUSBCFG_USBTRDTIM_SHIFT;
else
usbcfg |= 9 << GUSBCFG_USBTRDTIM_SHIFT;
dwc2_writel(hsotg, usbcfg, GUSBCFG);
}
int dwc2_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
{
u32 usbcfg;
@@ -1158,6 +1167,9 @@ int dwc2_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
retval = dwc2_hs_phy_init(hsotg, select_phy);
if (retval)
return retval;
if (dwc2_is_device_mode(hsotg))
dwc2_set_turnaround_time(hsotg);
}
if (hsotg->hw_params.hs_phy_type == GHWCFG2_HS_PHY_TYPE_ULPI &&

View File

@@ -1496,8 +1496,8 @@ static int dwc2_hsotg_ep_queue_lock(struct usb_ep *ep, struct usb_request *req,
{
struct dwc2_hsotg_ep *hs_ep = our_ep(ep);
struct dwc2_hsotg *hs = hs_ep->parent;
unsigned long flags = 0;
int ret = 0;
unsigned long flags;
int ret;
spin_lock_irqsave(&hs->lock, flags);
ret = dwc2_hsotg_ep_queue(ep, req, gfp_flags);
@@ -3338,7 +3338,7 @@ static void dwc2_hsotg_irq_fifoempty(struct dwc2_hsotg *hsotg, bool periodic)
static int dwc2_hsotg_ep_disable(struct usb_ep *ep);
/**
* dwc2_hsotg_core_init - issue softreset to the core
* dwc2_hsotg_core_init_disconnected - issue softreset to the core
* @hsotg: The device state
* @is_usb_reset: Usb resetting flag
*
@@ -4374,8 +4374,8 @@ static int dwc2_hsotg_ep_sethalt_lock(struct usb_ep *ep, int value)
{
struct dwc2_hsotg_ep *hs_ep = our_ep(ep);
struct dwc2_hsotg *hs = hs_ep->parent;
unsigned long flags = 0;
int ret = 0;
unsigned long flags;
int ret;
spin_lock_irqsave(&hs->lock, flags);
ret = dwc2_hsotg_ep_sethalt(ep, value, false);
@@ -4505,7 +4505,7 @@ static int dwc2_hsotg_udc_start(struct usb_gadget *gadget,
static int dwc2_hsotg_udc_stop(struct usb_gadget *gadget)
{
struct dwc2_hsotg *hsotg = to_hsotg(gadget);
unsigned long flags = 0;
unsigned long flags;
int ep;
if (!hsotg)
@@ -4577,7 +4577,7 @@ static int dwc2_hsotg_set_selfpowered(struct usb_gadget *gadget,
static int dwc2_hsotg_pullup(struct usb_gadget *gadget, int is_on)
{
struct dwc2_hsotg *hsotg = to_hsotg(gadget);
unsigned long flags = 0;
unsigned long flags;
dev_dbg(hsotg->dev, "%s: is_on: %d op_state: %d\n", __func__, is_on,
hsotg->op_state);

View File

@@ -675,7 +675,7 @@ static int dwc2_hs_pmap_schedule(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh,
}
/**
* dwc2_ls_pmap_unschedule() - Undo work done by dwc2_hs_pmap_schedule()
* dwc2_hs_pmap_unschedule() - Undo work done by dwc2_hs_pmap_schedule()
*
* @hsotg: The HCD state structure for the DWC OTG controller.
* @qh: QH for the periodic transfer.

View File

@@ -784,8 +784,8 @@ static void dwc2_get_dev_hwparams(struct dwc2_hsotg *hsotg)
}
/**
* During device initialization, read various hardware configuration
* registers and interpret the contents.
* dwc2_get_hwparams() - During device initialization, read various hardware
* configuration registers and interpret the contents.
*
* @hsotg: Programming view of the DWC_otg controller
*

View File

@@ -64,7 +64,7 @@ struct dwc2_pci_glue {
};
/**
* dwc2_pci_probe() - Provides the cleanup entry points for the DWC_otg PCI
* dwc2_pci_remove() - Provides the cleanup entry points for the DWC_otg PCI
* driver
*
* @pci: The programming view of DWC_otg PCI

View File

@@ -408,7 +408,7 @@ static bool dwc2_check_core_endianness(struct dwc2_hsotg *hsotg)
}
/**
* Check core version
* dwc2_check_core_version() - Check core version
*
* @hsotg: Programming view of the DWC_otg controller
*

View File

@@ -1545,6 +1545,10 @@ static int dwc3_probe(struct platform_device *pdev)
dwc3_get_properties(dwc);
ret = dma_set_mask_and_coherent(dwc->sysdev, DMA_BIT_MASK(64));
if (ret)
return ret;
dwc->reset = devm_reset_control_array_get_optional_shared(dev);
if (IS_ERR(dwc->reset))
return PTR_ERR(dwc->reset);
@@ -1616,17 +1620,18 @@
}
dwc3_check_params(dwc);
dwc3_debugfs_init(dwc);
ret = dwc3_core_init_mode(dwc);
if (ret)
goto err5;
dwc3_debugfs_init(dwc);
pm_runtime_put(dev);
return 0;
err5:
dwc3_debugfs_exit(dwc);
dwc3_event_buffers_cleanup(dwc);
usb_phy_shutdown(dwc->usb2_phy);

View File

@@ -1013,7 +1013,6 @@ struct dwc3_scratchpad_array {
* @link_state: link state
* @speed: device speed (super, high, full, low)
* @hwparams: copy of hwparams registers
* @root: debugfs root folder pointer
* @regset: debugfs pointer to regdump file
* @dbg_lsp_select: current debug lsp mux register selection
* @test_mode: true when we're entering a USB test mode
@@ -1222,7 +1221,6 @@ struct dwc3 {
u8 num_eps;
struct dwc3_hwparams hwparams;
struct dentry *root;
struct debugfs_regset32 *regset;
u32 dbg_lsp_select;

View File

@@ -889,8 +889,10 @@ static void dwc3_debugfs_create_endpoint_files(struct dwc3_ep *dep,
void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep)
{
struct dentry *dir;
struct dentry *root;
dir = debugfs_create_dir(dep->name, dep->dwc->root);
root = debugfs_lookup(dev_name(dep->dwc->dev), usb_debug_root);
dir = debugfs_create_dir(dep->name, root);
dwc3_debugfs_create_endpoint_files(dep, dir);
}
@@ -909,8 +911,6 @@ void dwc3_debugfs_init(struct dwc3 *dwc)
dwc->regset->base = dwc->regs - DWC3_GLOBALS_REGS_START;
root = debugfs_create_dir(dev_name(dwc->dev), usb_debug_root);
dwc->root = root;
debugfs_create_regset32("regdump", 0444, root, dwc->regset);
debugfs_create_file("lsp_dump", 0644, root, dwc, &dwc3_lsp_fops);
@@ -929,6 +929,6 @@ void dwc3_debugfs_init(struct dwc3 *dwc)
void dwc3_debugfs_exit(struct dwc3 *dwc)
{
debugfs_remove_recursive(dwc->root);
debugfs_remove(debugfs_lookup(dev_name(dwc->dev), usb_debug_root));
kfree(dwc->regset);
}

View File

@@ -596,7 +596,6 @@ int dwc3_drd_init(struct dwc3 *dwc)
dwc3_drd_update(dwc);
} else {
dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG);
dwc->current_dr_role = DWC3_GCTL_PRTCAP_OTG;
/* use OTG block to get ID event */
irq = dwc3_otg_get_irq(dwc);

View File

@@ -36,7 +36,7 @@
#define PCI_DEVICE_ID_INTEL_CNPH 0xa36e
#define PCI_DEVICE_ID_INTEL_CNPV 0xa3b0
#define PCI_DEVICE_ID_INTEL_ICLLP 0x34ee
#define PCI_DEVICE_ID_INTEL_EHLLP 0x4b7e
#define PCI_DEVICE_ID_INTEL_EHL 0x4b7e
#define PCI_DEVICE_ID_INTEL_TGPLP 0xa0ee
#define PCI_DEVICE_ID_INTEL_TGPH 0x43ee
#define PCI_DEVICE_ID_INTEL_JSP 0x4dee
@@ -167,7 +167,7 @@ static int dwc3_pci_quirks(struct dwc3_pci *dwc)
if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
if (pdev->device == PCI_DEVICE_ID_INTEL_BXT ||
pdev->device == PCI_DEVICE_ID_INTEL_BXT_M ||
pdev->device == PCI_DEVICE_ID_INTEL_EHLLP) {
pdev->device == PCI_DEVICE_ID_INTEL_EHL) {
guid_parse(PCI_INTEL_BXT_DSM_GUID, &dwc->guid);
dwc->has_dsm_for_pm = true;
}
@@ -375,8 +375,8 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICLLP),
(kernel_ulong_t) &dwc3_pci_intel_swnode, },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_EHLLP),
(kernel_ulong_t) &dwc3_pci_intel_swnode },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_EHL),
(kernel_ulong_t) &dwc3_pci_intel_swnode, },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGPLP),
(kernel_ulong_t) &dwc3_pci_intel_swnode, },

View File

@@ -2798,7 +2798,9 @@ static void dwc3_gadget_free_endpoints(struct dwc3 *dwc)
list_del(&dep->endpoint.ep_list);
}
debugfs_remove_recursive(debugfs_lookup(dep->name, dwc->root));
debugfs_remove_recursive(debugfs_lookup(dep->name,
debugfs_lookup(dev_name(dep->dwc->dev),
usb_debug_root)));
kfree(dep);
}
}

View File

@@ -222,8 +222,6 @@ DECLARE_EVENT_CLASS(dwc3_log_trb,
TP_STRUCT__entry(
__string(name, dep->name)
__field(struct dwc3_trb *, trb)
__field(u32, allocated)
__field(u32, queued)
__field(u32, bpl)
__field(u32, bph)
__field(u32, size)

View File

@@ -30,6 +30,11 @@ struct f_eem {
u8 ctrl_id;
};
struct in_context {
struct sk_buff *skb;
struct usb_ep *ep;
};
static inline struct f_eem *func_to_eem(struct usb_function *f)
{
return container_of(f, struct f_eem, port.func);
@@ -320,9 +325,12 @@ static int eem_bind(struct usb_configuration *c, struct usb_function *f)
static void eem_cmd_complete(struct usb_ep *ep, struct usb_request *req)
{
struct sk_buff *skb = (struct sk_buff *)req->context;
struct in_context *ctx = req->context;
dev_kfree_skb_any(skb);
dev_kfree_skb_any(ctx->skb);
kfree(req->buf);
usb_ep_free_request(ctx->ep, req);
kfree(ctx);
}
/*
@@ -410,7 +418,9 @@ static int eem_unwrap(struct gether *port,
* b15: bmType (0 == data, 1 == command)
*/
if (header & BIT(15)) {
struct usb_request *req = cdev->req;
struct usb_request *req;
struct in_context *ctx;
struct usb_ep *ep;
u16 bmEEMCmd;
/* EEM command packet format:
@@ -439,11 +449,36 @@ static int eem_unwrap(struct gether *port,
skb_trim(skb2, len);
put_unaligned_le16(BIT(15) | BIT(11) | len,
skb_push(skb2, 2));
ep = port->in_ep;
req = usb_ep_alloc_request(ep, GFP_ATOMIC);
if (!req) {
dev_kfree_skb_any(skb2);
goto next;
}
req->buf = kmalloc(skb2->len, GFP_KERNEL);
if (!req->buf) {
usb_ep_free_request(ep, req);
dev_kfree_skb_any(skb2);
goto next;
}
ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx) {
kfree(req->buf);
usb_ep_free_request(ep, req);
dev_kfree_skb_any(skb2);
goto next;
}
ctx->skb = skb2;
ctx->ep = ep;
skb_copy_bits(skb2, 0, req->buf, skb2->len);
req->length = skb2->len;
req->complete = eem_cmd_complete;
req->zero = 1;
req->context = skb2;
req->context = ctx;
if (usb_ep_queue(port->in_ep, req, GFP_ATOMIC))
DBG(cdev, "echo response queue fail\n");
break;


@@ -250,8 +250,8 @@ EXPORT_SYMBOL_GPL(ffs_lock);
static struct ffs_dev *_ffs_find_dev(const char *name);
static struct ffs_dev *_ffs_alloc_dev(void);
static void _ffs_free_dev(struct ffs_dev *dev);
static void *ffs_acquire_dev(const char *dev_name);
static void ffs_release_dev(struct ffs_data *ffs_data);
static int ffs_acquire_dev(const char *dev_name, struct ffs_data *ffs_data);
static void ffs_release_dev(struct ffs_dev *ffs_dev);
static int ffs_ready(struct ffs_data *ffs);
static void ffs_closed(struct ffs_data *ffs);
@@ -1554,8 +1554,8 @@ static int ffs_fs_parse_param(struct fs_context *fc, struct fs_parameter *param)
static int ffs_fs_get_tree(struct fs_context *fc)
{
struct ffs_sb_fill_data *ctx = fc->fs_private;
void *ffs_dev;
struct ffs_data *ffs;
int ret;
ENTER();
@@ -1574,13 +1574,12 @@ static int ffs_fs_get_tree(struct fs_context *fc)
return -ENOMEM;
}
ffs_dev = ffs_acquire_dev(ffs->dev_name);
if (IS_ERR(ffs_dev)) {
ret = ffs_acquire_dev(ffs->dev_name, ffs);
if (ret) {
ffs_data_put(ffs);
return PTR_ERR(ffs_dev);
return ret;
}
ffs->private_data = ffs_dev;
ctx->ffs_data = ffs;
return get_tree_nodev(fc, ffs_sb_fill);
}
@@ -1591,7 +1590,6 @@ static void ffs_fs_free_fc(struct fs_context *fc)
if (ctx) {
if (ctx->ffs_data) {
ffs_release_dev(ctx->ffs_data);
ffs_data_put(ctx->ffs_data);
}
@@ -1630,10 +1628,8 @@ ffs_fs_kill_sb(struct super_block *sb)
ENTER();
kill_litter_super(sb);
if (sb->s_fs_info) {
ffs_release_dev(sb->s_fs_info);
if (sb->s_fs_info)
ffs_data_closed(sb->s_fs_info);
}
}
static struct file_system_type ffs_fs_type = {
@@ -1703,6 +1699,7 @@ static void ffs_data_put(struct ffs_data *ffs)
if (refcount_dec_and_test(&ffs->ref)) {
pr_info("%s(): freeing\n", __func__);
ffs_data_clear(ffs);
ffs_release_dev(ffs->private_data);
BUG_ON(waitqueue_active(&ffs->ev.waitq) ||
swait_active(&ffs->ep0req_completion.wait) ||
waitqueue_active(&ffs->wait));
@@ -3032,6 +3029,7 @@ static inline struct f_fs_opts *ffs_do_functionfs_bind(struct usb_function *f,
struct ffs_function *func = ffs_func_from_usb(f);
struct f_fs_opts *ffs_opts =
container_of(f->fi, struct f_fs_opts, func_inst);
struct ffs_data *ffs_data;
int ret;
ENTER();
@@ -3046,12 +3044,13 @@ static inline struct f_fs_opts *ffs_do_functionfs_bind(struct usb_function *f,
if (!ffs_opts->no_configfs)
ffs_dev_lock();
ret = ffs_opts->dev->desc_ready ? 0 : -ENODEV;
func->ffs = ffs_opts->dev->ffs_data;
ffs_data = ffs_opts->dev->ffs_data;
if (!ffs_opts->no_configfs)
ffs_dev_unlock();
if (ret)
return ERR_PTR(ret);
func->ffs = ffs_data;
func->conf = c;
func->gadget = c->cdev->gadget;
@@ -3506,6 +3505,7 @@ static void ffs_free_inst(struct usb_function_instance *f)
struct f_fs_opts *opts;
opts = to_f_fs_opts(f);
ffs_release_dev(opts->dev);
ffs_dev_lock();
_ffs_free_dev(opts->dev);
ffs_dev_unlock();
@@ -3693,47 +3693,48 @@ static void _ffs_free_dev(struct ffs_dev *dev)
{
list_del(&dev->entry);
/* Clear the private_data pointer to stop incorrect dev access */
if (dev->ffs_data)
dev->ffs_data->private_data = NULL;
kfree(dev);
if (list_empty(&ffs_devices))
functionfs_cleanup();
}
static void *ffs_acquire_dev(const char *dev_name)
static int ffs_acquire_dev(const char *dev_name, struct ffs_data *ffs_data)
{
int ret = 0;
struct ffs_dev *ffs_dev;
ENTER();
ffs_dev_lock();
ffs_dev = _ffs_find_dev(dev_name);
if (!ffs_dev)
ffs_dev = ERR_PTR(-ENOENT);
else if (ffs_dev->mounted)
ffs_dev = ERR_PTR(-EBUSY);
else if (ffs_dev->ffs_acquire_dev_callback &&
ffs_dev->ffs_acquire_dev_callback(ffs_dev))
ffs_dev = ERR_PTR(-ENOENT);
else
if (!ffs_dev) {
ret = -ENOENT;
} else if (ffs_dev->mounted) {
ret = -EBUSY;
} else if (ffs_dev->ffs_acquire_dev_callback &&
ffs_dev->ffs_acquire_dev_callback(ffs_dev)) {
ret = -ENOENT;
} else {
ffs_dev->mounted = true;
ffs_dev->ffs_data = ffs_data;
ffs_data->private_data = ffs_dev;
}
ffs_dev_unlock();
return ffs_dev;
return ret;
}
static void ffs_release_dev(struct ffs_data *ffs_data)
static void ffs_release_dev(struct ffs_dev *ffs_dev)
{
struct ffs_dev *ffs_dev;
ENTER();
ffs_dev_lock();
ffs_dev = ffs_data->private_data;
if (ffs_dev) {
if (ffs_dev && ffs_dev->mounted) {
ffs_dev->mounted = false;
if (ffs_dev->ffs_data) {
ffs_dev->ffs_data->private_data = NULL;
ffs_dev->ffs_data = NULL;
}
if (ffs_dev->ffs_release_dev_callback)
ffs_dev->ffs_release_dev_callback(ffs_dev);
@@ -3761,7 +3762,6 @@ static int ffs_ready(struct ffs_data *ffs)
}
ffs_obj->desc_ready = true;
ffs_obj->ffs_data = ffs;
if (ffs_obj->ffs_ready_callback) {
ret = ffs_obj->ffs_ready_callback(ffs);
@@ -3789,7 +3789,6 @@ static void ffs_closed(struct ffs_data *ffs)
goto done;
ffs_obj->desc_ready = false;
ffs_obj->ffs_data = NULL;
if (test_and_clear_bit(FFS_FL_CALL_CLOSED_CALLBACK, &ffs->flags) &&
ffs_obj->ffs_closed_callback)


@@ -88,7 +88,7 @@ static struct usb_interface_descriptor hidg_interface_desc = {
static struct hid_descriptor hidg_desc = {
.bLength = sizeof hidg_desc,
.bDescriptorType = HID_DT_HID,
.bcdHID = 0x0101,
.bcdHID = cpu_to_le16(0x0101),
.bCountryCode = 0x00,
.bNumDescriptors = 0x1,
/*.desc[0].bDescriptorType = DYNAMIC */
@@ -1118,7 +1118,7 @@ static struct usb_function *hidg_alloc(struct usb_function_instance *fi)
hidg->func.setup = hidg_setup;
hidg->func.free_func = hidg_free;
/* this could me made configurable at some point */
/* this could be made configurable at some point */
hidg->qlen = 4;
return &hidg->func;


@@ -667,8 +667,7 @@ printer_write(struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
value = usb_ep_queue(dev->in_ep, req, GFP_ATOMIC);
spin_lock(&dev->lock);
if (value) {
list_del(&req->list);
list_add(&req->list, &dev->tx_reqs);
list_move(&req->list, &dev->tx_reqs);
spin_unlock_irqrestore(&dev->lock, flags);
mutex_unlock(&dev->lock_printer_io);
return -EAGAIN;


@@ -44,6 +44,7 @@
#define EPIN_EN(_opts) ((_opts)->p_chmask != 0)
#define EPOUT_EN(_opts) ((_opts)->c_chmask != 0)
#define EPOUT_FBACK_IN_EN(_opts) ((_opts)->c_sync == USB_ENDPOINT_SYNC_ASYNC)
struct f_uac2 {
struct g_audio g_audio;
@@ -273,7 +274,7 @@ static struct usb_endpoint_descriptor fs_epout_desc = {
.bDescriptorType = USB_DT_ENDPOINT,
.bEndpointAddress = USB_DIR_OUT,
.bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC,
/* .bmAttributes = DYNAMIC */
/* .wMaxPacketSize = DYNAMIC */
.bInterval = 1,
};
@@ -282,7 +283,7 @@ static struct usb_endpoint_descriptor hs_epout_desc = {
.bLength = USB_DT_ENDPOINT_SIZE,
.bDescriptorType = USB_DT_ENDPOINT,
.bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC,
/* .bmAttributes = DYNAMIC */
/* .wMaxPacketSize = DYNAMIC */
.bInterval = 4,
};
@@ -292,7 +293,7 @@ static struct usb_endpoint_descriptor ss_epout_desc = {
.bDescriptorType = USB_DT_ENDPOINT,
.bEndpointAddress = USB_DIR_OUT,
.bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC,
/* .bmAttributes = DYNAMIC */
/* .wMaxPacketSize = DYNAMIC */
.bInterval = 4,
};
@@ -317,6 +318,37 @@ static struct uac2_iso_endpoint_descriptor as_iso_out_desc = {
.wLockDelay = 0,
};
/* STD AS ISO IN Feedback Endpoint */
static struct usb_endpoint_descriptor fs_epin_fback_desc = {
.bLength = USB_DT_ENDPOINT_SIZE,
.bDescriptorType = USB_DT_ENDPOINT,
.bEndpointAddress = USB_DIR_IN,
.bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_USAGE_FEEDBACK,
.wMaxPacketSize = cpu_to_le16(3),
.bInterval = 1,
};
static struct usb_endpoint_descriptor hs_epin_fback_desc = {
.bLength = USB_DT_ENDPOINT_SIZE,
.bDescriptorType = USB_DT_ENDPOINT,
.bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_USAGE_FEEDBACK,
.wMaxPacketSize = cpu_to_le16(4),
.bInterval = 4,
};
static struct usb_endpoint_descriptor ss_epin_fback_desc = {
.bLength = USB_DT_ENDPOINT_SIZE,
.bDescriptorType = USB_DT_ENDPOINT,
.bEndpointAddress = USB_DIR_IN,
.bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_USAGE_FEEDBACK,
.wMaxPacketSize = cpu_to_le16(4),
.bInterval = 4,
};
/* Audio Streaming IN Interface - Alt0 */
static struct usb_interface_descriptor std_as_in_if0_desc = {
.bLength = sizeof std_as_in_if0_desc,
@@ -431,6 +463,7 @@ static struct usb_descriptor_header *fs_audio_desc[] = {
(struct usb_descriptor_header *)&as_out_fmt1_desc,
(struct usb_descriptor_header *)&fs_epout_desc,
(struct usb_descriptor_header *)&as_iso_out_desc,
(struct usb_descriptor_header *)&fs_epin_fback_desc,
(struct usb_descriptor_header *)&std_as_in_if0_desc,
(struct usb_descriptor_header *)&std_as_in_if1_desc,
@@ -461,6 +494,7 @@ static struct usb_descriptor_header *hs_audio_desc[] = {
(struct usb_descriptor_header *)&as_out_fmt1_desc,
(struct usb_descriptor_header *)&hs_epout_desc,
(struct usb_descriptor_header *)&as_iso_out_desc,
(struct usb_descriptor_header *)&hs_epin_fback_desc,
(struct usb_descriptor_header *)&std_as_in_if0_desc,
(struct usb_descriptor_header *)&std_as_in_if1_desc,
@@ -492,6 +526,7 @@ static struct usb_descriptor_header *ss_audio_desc[] = {
(struct usb_descriptor_header *)&ss_epout_desc,
(struct usb_descriptor_header *)&ss_epout_desc_comp,
(struct usb_descriptor_header *)&as_iso_out_desc,
(struct usb_descriptor_header *)&ss_epin_fback_desc,
(struct usb_descriptor_header *)&std_as_in_if0_desc,
(struct usb_descriptor_header *)&std_as_in_if1_desc,
@@ -549,8 +584,11 @@ static int set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts,
ssize = uac2_opts->c_ssize;
}
if (!is_playback && (uac2_opts->c_sync == USB_ENDPOINT_SYNC_ASYNC))
srate = srate * (1000 + uac2_opts->fb_max) / 1000;
max_size_bw = num_channels(chmask) * ssize *
((srate / (factor / (1 << (ep_desc->bInterval - 1)))) + 1);
DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1)));
ep_desc->wMaxPacketSize = cpu_to_le16(min_t(u16, max_size_bw,
max_size_ep));
@@ -568,22 +606,26 @@ static void setup_headers(struct f_uac2_opts *opts,
struct usb_ss_ep_comp_descriptor *epin_desc_comp = NULL;
struct usb_endpoint_descriptor *epout_desc;
struct usb_endpoint_descriptor *epin_desc;
struct usb_endpoint_descriptor *epin_fback_desc;
int i;
switch (speed) {
case USB_SPEED_FULL:
epout_desc = &fs_epout_desc;
epin_desc = &fs_epin_desc;
epin_fback_desc = &fs_epin_fback_desc;
break;
case USB_SPEED_HIGH:
epout_desc = &hs_epout_desc;
epin_desc = &hs_epin_desc;
epin_fback_desc = &hs_epin_fback_desc;
break;
default:
epout_desc = &ss_epout_desc;
epin_desc = &ss_epin_desc;
epout_desc_comp = &ss_epout_desc_comp;
epin_desc_comp = &ss_epin_desc_comp;
epin_fback_desc = &ss_epin_fback_desc;
}
i = 0;
@@ -611,6 +653,9 @@ static void setup_headers(struct f_uac2_opts *opts,
headers[i++] = USBDHDR(epout_desc_comp);
headers[i++] = USBDHDR(&as_iso_out_desc);
if (EPOUT_FBACK_IN_EN(opts))
headers[i++] = USBDHDR(epin_fback_desc);
}
if (EPIN_EN(opts)) {
headers[i++] = USBDHDR(&std_as_in_if0_desc);
@@ -781,6 +826,23 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
std_as_out_if1_desc.bInterfaceNumber = ret;
uac2->as_out_intf = ret;
uac2->as_out_alt = 0;
if (EPOUT_FBACK_IN_EN(uac2_opts)) {
fs_epout_desc.bmAttributes =
USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC;
hs_epout_desc.bmAttributes =
USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC;
ss_epout_desc.bmAttributes =
USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC;
std_as_out_if1_desc.bNumEndpoints++;
} else {
fs_epout_desc.bmAttributes =
USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ADAPTIVE;
hs_epout_desc.bmAttributes =
USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ADAPTIVE;
ss_epout_desc.bmAttributes =
USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ADAPTIVE;
}
}
if (EPIN_EN(uac2_opts)) {
@@ -844,6 +906,15 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
return -ENODEV;
}
if (EPOUT_FBACK_IN_EN(uac2_opts)) {
agdev->in_ep_fback = usb_ep_autoconfig(gadget,
&fs_epin_fback_desc);
if (!agdev->in_ep_fback) {
dev_err(dev, "%s:%d Error!\n",
__func__, __LINE__);
return -ENODEV;
}
}
}
if (EPIN_EN(uac2_opts)) {
@@ -867,8 +938,10 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
le16_to_cpu(ss_epout_desc.wMaxPacketSize));
hs_epout_desc.bEndpointAddress = fs_epout_desc.bEndpointAddress;
hs_epin_fback_desc.bEndpointAddress = fs_epin_fback_desc.bEndpointAddress;
hs_epin_desc.bEndpointAddress = fs_epin_desc.bEndpointAddress;
ss_epout_desc.bEndpointAddress = fs_epout_desc.bEndpointAddress;
ss_epin_fback_desc.bEndpointAddress = fs_epin_fback_desc.bEndpointAddress;
ss_epin_desc.bEndpointAddress = fs_epin_desc.bEndpointAddress;
setup_descriptor(uac2_opts);
@@ -887,6 +960,7 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
agdev->params.c_srate = uac2_opts->c_srate;
agdev->params.c_ssize = uac2_opts->c_ssize;
agdev->params.req_number = uac2_opts->req_number;
agdev->params.fb_max = uac2_opts->fb_max;
ret = g_audio_setup(agdev, "UAC2 PCM", "UAC2_Gadget");
if (ret)
goto err_free_descs;
@@ -1195,13 +1269,71 @@ end: \
\
CONFIGFS_ATTR(f_uac2_opts_, name)
#define UAC2_ATTRIBUTE_SYNC(name) \
static ssize_t f_uac2_opts_##name##_show(struct config_item *item, \
char *page) \
{ \
struct f_uac2_opts *opts = to_f_uac2_opts(item); \
int result; \
char *str; \
\
mutex_lock(&opts->lock); \
switch (opts->name) { \
case USB_ENDPOINT_SYNC_ASYNC: \
str = "async"; \
break; \
case USB_ENDPOINT_SYNC_ADAPTIVE: \
str = "adaptive"; \
break; \
default: \
str = "unknown"; \
break; \
} \
result = sprintf(page, "%s\n", str); \
mutex_unlock(&opts->lock); \
\
return result; \
} \
\
static ssize_t f_uac2_opts_##name##_store(struct config_item *item, \
const char *page, size_t len) \
{ \
struct f_uac2_opts *opts = to_f_uac2_opts(item); \
int ret = 0; \
\
mutex_lock(&opts->lock); \
if (opts->refcnt) { \
ret = -EBUSY; \
goto end; \
} \
\
if (!strncmp(page, "async", 5)) \
opts->name = USB_ENDPOINT_SYNC_ASYNC; \
else if (!strncmp(page, "adaptive", 8)) \
opts->name = USB_ENDPOINT_SYNC_ADAPTIVE; \
else { \
ret = -EINVAL; \
goto end; \
} \
\
ret = len; \
\
end: \
mutex_unlock(&opts->lock); \
return ret; \
} \
\
CONFIGFS_ATTR(f_uac2_opts_, name)
UAC2_ATTRIBUTE(p_chmask);
UAC2_ATTRIBUTE(p_srate);
UAC2_ATTRIBUTE(p_ssize);
UAC2_ATTRIBUTE(c_chmask);
UAC2_ATTRIBUTE(c_srate);
UAC2_ATTRIBUTE_SYNC(c_sync);
UAC2_ATTRIBUTE(c_ssize);
UAC2_ATTRIBUTE(req_number);
UAC2_ATTRIBUTE(fb_max);
static struct configfs_attribute *f_uac2_attrs[] = {
&f_uac2_opts_attr_p_chmask,
@@ -1210,7 +1342,9 @@ static struct configfs_attribute *f_uac2_attrs[] = {
&f_uac2_opts_attr_c_chmask,
&f_uac2_opts_attr_c_srate,
&f_uac2_opts_attr_c_ssize,
&f_uac2_opts_attr_c_sync,
&f_uac2_opts_attr_req_number,
&f_uac2_opts_attr_fb_max,
NULL,
};
@@ -1248,7 +1382,9 @@ static struct usb_function_instance *afunc_alloc_inst(void)
opts->c_chmask = UAC2_DEF_CCHMASK;
opts->c_srate = UAC2_DEF_CSRATE;
opts->c_ssize = UAC2_DEF_CSSIZE;
opts->c_sync = UAC2_DEF_CSYNC;
opts->req_number = UAC2_DEF_REQ_NUM;
opts->fb_max = UAC2_DEF_FB_MAX;
return &opts->func_inst;
}


@@ -16,6 +16,7 @@
#include <sound/core.h>
#include <sound/pcm.h>
#include <sound/pcm_params.h>
#include <sound/control.h>
#include "u_audio.h"
@@ -35,9 +36,13 @@ struct uac_rtd_params {
void *rbuf;
unsigned int pitch; /* Stream pitch ratio to 1000000 */
unsigned int max_psize; /* MaxPacketSize of endpoint */
struct usb_request **reqs;
struct usb_request *req_fback; /* Feedback endpoint request */
bool fb_ep_enabled; /* if the ep is enabled */
};
struct snd_uac_chip {
@@ -70,6 +75,48 @@ static const struct snd_pcm_hardware uac_pcm_hardware = {
.periods_min = MIN_PERIODS,
};
static void u_audio_set_fback_frequency(enum usb_device_speed speed,
unsigned long long freq,
unsigned int pitch,
void *buf)
{
u32 ff = 0;
/*
* Because the pitch base is 1000000, the final divider here
* will be 1000 * 1000000 = 1953125 << 9
*
* Instead of dealing with big numbers, let's fold in this 9-bit left shift
*/
if (speed == USB_SPEED_FULL) {
/*
* Full-speed feedback endpoints report frequency
* in samples/frame
* Format is encoded in Q10.10 left-justified in the 24 bits,
* so that it has a Q10.14 format.
*
* ff = (freq << 14) / 1000
*/
freq <<= 5;
} else {
/*
* High-speed feedback endpoints report frequency
* in samples/microframe.
* Format is encoded in Q12.13 fitted into four bytes so that
* the binary point is located between the second and the third
* byte (that is, Q16.16 format)
*
* ff = (freq << 16) / 8000
*/
freq <<= 4;
}
ff = DIV_ROUND_CLOSEST_ULL((freq * pitch), 1953125);
*(__le32 *)buf = cpu_to_le32(ff);
}
static void u_audio_iso_complete(struct usb_ep *ep, struct usb_request *req)
{
unsigned int pending;
@@ -173,6 +220,35 @@ static void u_audio_iso_complete(struct usb_ep *ep, struct usb_request *req)
dev_err(uac->card->dev, "%d Error!\n", __LINE__);
}
static void u_audio_iso_fback_complete(struct usb_ep *ep,
struct usb_request *req)
{
struct uac_rtd_params *prm = req->context;
struct snd_uac_chip *uac = prm->uac;
struct g_audio *audio_dev = uac->audio_dev;
struct uac_params *params = &audio_dev->params;
int status = req->status;
/* i/f shutting down */
if (!prm->fb_ep_enabled || req->status == -ESHUTDOWN)
return;
/*
* We can't really do much about bad xfers.
* After all, the ISOCH xfers could fail legitimately.
*/
if (status)
pr_debug("%s: iso_complete status(%d) %d/%d\n",
__func__, status, req->actual, req->length);
u_audio_set_fback_frequency(audio_dev->gadget->speed,
params->c_srate, prm->pitch,
req->buf);
if (usb_ep_queue(ep, req, GFP_ATOMIC))
dev_err(uac->card->dev, "%d Error!\n", __LINE__);
}
static int uac_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
{
struct snd_uac_chip *uac = snd_pcm_substream_chip(substream);
@@ -335,14 +411,33 @@ static inline void free_ep(struct uac_rtd_params *prm, struct usb_ep *ep)
dev_err(uac->card->dev, "%s:%d Error!\n", __func__, __LINE__);
}
static inline void free_ep_fback(struct uac_rtd_params *prm, struct usb_ep *ep)
{
struct snd_uac_chip *uac = prm->uac;
if (!prm->fb_ep_enabled)
return;
prm->fb_ep_enabled = false;
if (prm->req_fback) {
usb_ep_dequeue(ep, prm->req_fback);
kfree(prm->req_fback->buf);
usb_ep_free_request(ep, prm->req_fback);
prm->req_fback = NULL;
}
if (usb_ep_disable(ep))
dev_err(uac->card->dev, "%s:%d Error!\n", __func__, __LINE__);
}
int u_audio_start_capture(struct g_audio *audio_dev)
{
struct snd_uac_chip *uac = audio_dev->uac;
struct usb_gadget *gadget = audio_dev->gadget;
struct device *dev = &gadget->dev;
struct usb_request *req;
struct usb_ep *ep;
struct usb_request *req, *req_fback;
struct usb_ep *ep, *ep_fback;
struct uac_rtd_params *prm;
struct uac_params *params = &audio_dev->params;
int req_len, i;
@@ -374,6 +469,43 @@ int u_audio_start_capture(struct g_audio *audio_dev)
dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
}
ep_fback = audio_dev->in_ep_fback;
if (!ep_fback)
return 0;
/* Setup feedback endpoint */
config_ep_by_speed(gadget, &audio_dev->func, ep_fback);
prm->fb_ep_enabled = true;
usb_ep_enable(ep_fback);
req_len = ep_fback->maxpacket;
req_fback = usb_ep_alloc_request(ep_fback, GFP_ATOMIC);
if (req_fback == NULL)
return -ENOMEM;
prm->req_fback = req_fback;
req_fback->zero = 0;
req_fback->context = prm;
req_fback->length = req_len;
req_fback->complete = u_audio_iso_fback_complete;
req_fback->buf = kzalloc(req_len, GFP_ATOMIC);
if (!req_fback->buf)
return -ENOMEM;
/*
* Configure the feedback endpoint's reported frequency.
* Always start with original frequency since its deviation can't
* be measured at start of playback
*/
prm->pitch = 1000000;
u_audio_set_fback_frequency(audio_dev->gadget->speed,
params->c_srate, prm->pitch,
req_fback->buf);
if (usb_ep_queue(ep_fback, req_fback, GFP_ATOMIC))
dev_err(dev, "%s:%d Error!\n", __func__, __LINE__);
return 0;
}
EXPORT_SYMBOL_GPL(u_audio_start_capture);
@@ -382,6 +514,8 @@ void u_audio_stop_capture(struct g_audio *audio_dev)
{
struct snd_uac_chip *uac = audio_dev->uac;
if (audio_dev->in_ep_fback)
free_ep_fback(&uac->c_prm, audio_dev->in_ep_fback);
free_ep(&uac->c_prm, audio_dev->out_ep);
}
EXPORT_SYMBOL_GPL(u_audio_stop_capture);
@@ -463,12 +597,82 @@ void u_audio_stop_playback(struct g_audio *audio_dev)
}
EXPORT_SYMBOL_GPL(u_audio_stop_playback);
static int u_audio_pitch_info(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_info *uinfo)
{
struct uac_rtd_params *prm = snd_kcontrol_chip(kcontrol);
struct snd_uac_chip *uac = prm->uac;
struct g_audio *audio_dev = uac->audio_dev;
struct uac_params *params = &audio_dev->params;
unsigned int pitch_min, pitch_max;
pitch_min = (1000 - FBACK_SLOW_MAX) * 1000;
pitch_max = (1000 + params->fb_max) * 1000;
uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
uinfo->count = 1;
uinfo->value.integer.min = pitch_min;
uinfo->value.integer.max = pitch_max;
uinfo->value.integer.step = 1;
return 0;
}
static int u_audio_pitch_get(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
{
struct uac_rtd_params *prm = snd_kcontrol_chip(kcontrol);
ucontrol->value.integer.value[0] = prm->pitch;
return 0;
}
static int u_audio_pitch_put(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
{
struct uac_rtd_params *prm = snd_kcontrol_chip(kcontrol);
struct snd_uac_chip *uac = prm->uac;
struct g_audio *audio_dev = uac->audio_dev;
struct uac_params *params = &audio_dev->params;
unsigned int val;
unsigned int pitch_min, pitch_max;
int change = 0;
pitch_min = (1000 - FBACK_SLOW_MAX) * 1000;
pitch_max = (1000 + params->fb_max) * 1000;
val = ucontrol->value.integer.value[0];
if (val < pitch_min)
val = pitch_min;
if (val > pitch_max)
val = pitch_max;
if (prm->pitch != val) {
prm->pitch = val;
change = 1;
}
return change;
}
static const struct snd_kcontrol_new u_audio_controls[] = {
{
.iface = SNDRV_CTL_ELEM_IFACE_PCM,
.name = "Capture Pitch 1000000",
.info = u_audio_pitch_info,
.get = u_audio_pitch_get,
.put = u_audio_pitch_put,
},
};
int g_audio_setup(struct g_audio *g_audio, const char *pcm_name,
const char *card_name)
{
struct snd_uac_chip *uac;
struct snd_card *card;
struct snd_pcm *pcm;
struct snd_kcontrol *kctl;
struct uac_params *params;
int p_chmask, c_chmask;
int err;
@@ -556,6 +760,23 @@ int g_audio_setup(struct g_audio *g_audio, const char *pcm_name,
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &uac_pcm_ops);
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &uac_pcm_ops);
if (c_chmask && g_audio->in_ep_fback) {
strscpy(card->mixername, card_name, sizeof(card->driver));
kctl = snd_ctl_new1(&u_audio_controls[0], &uac->c_prm);
if (!kctl) {
err = -ENOMEM;
goto snd_fail;
}
kctl->id.device = pcm->device;
kctl->id.subdevice = 0;
err = snd_ctl_add(card, kctl);
if (err < 0)
goto snd_fail;
}
strscpy(card->driver, card_name, sizeof(card->driver));
strscpy(card->shortname, card_name, sizeof(card->shortname));
sprintf(card->longname, "%s %i", card_name, card->dev->id);


@@ -11,6 +11,14 @@
#include <linux/usb/composite.h>
/*
* Same maximum frequency deviation on the slower side as in
* sound/usb/endpoint.c. Value is expressed in per-mil deviation.
* The maximum deviation on the faster side will be provided as
* parameter, as it impacts the endpoint required bandwidth.
*/
#define FBACK_SLOW_MAX 250
struct uac_params {
/* playback */
int p_chmask; /* channel mask */
@@ -23,6 +31,7 @@ struct uac_params {
int c_ssize; /* sample size */
int req_number; /* number of preallocated requests */
int fb_max; /* upper frequency drift feedback limit per-mil */
};
struct g_audio {
@@ -30,7 +39,10 @@ struct g_audio {
struct usb_gadget *gadget;
struct usb_ep *in_ep;
struct usb_ep *out_ep;
/* feedback IN endpoint corresponding to out_ep */
struct usb_ep *in_ep_fback;
/* Max packet size for all in_ep possible speeds */
unsigned int in_ep_maxpsize;


@@ -29,8 +29,8 @@ struct f_hid_opts {
* Protect the data from concurrent access by read/write
* and create symlink/remove symlink.
*/
struct mutex lock;
int refcnt;
struct mutex lock;
int refcnt;
};
int ghid_setup(struct usb_gadget *g, int count);


@@ -29,8 +29,8 @@ struct f_midi_opts {
* Protect the data from concurrent access by read/write
* and create symlink/remove symlink.
*/
struct mutex lock;
int refcnt;
struct mutex lock;
int refcnt;
};
#endif /* U_MIDI_H */


@@ -21,7 +21,9 @@
#define UAC2_DEF_CCHMASK 0x3
#define UAC2_DEF_CSRATE 64000
#define UAC2_DEF_CSSIZE 2
#define UAC2_DEF_CSYNC USB_ENDPOINT_SYNC_ASYNC
#define UAC2_DEF_REQ_NUM 2
#define UAC2_DEF_FB_MAX 5
struct f_uac2_opts {
struct usb_function_instance func_inst;
@@ -31,7 +33,9 @@ struct f_uac2_opts {
int c_chmask;
int c_srate;
int c_ssize;
int c_sync;
int req_number;
int fb_max;
bool bound;
struct mutex lock;

Some files were not shown because too many files have changed in this diff.