Titan Ridge needs an additional connection manager handshake in order
to do proper Display Port tunneling, so implement it here.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Lane bonding allows aggregating two 10/20 Gb/s (depending on the
generation) lanes into a single 20/40 Gb/s bonded link. This allows
sharing the full bandwidth more efficiently. In order to establish lane
bonding we need to check that lane bonding is possible through the link
controller and that both ends of the link actually support 2x width.
This also means that all the paths should be established through the
primary port, so update tb_path_alloc() to handle this as well.
Lane bonding is supported starting from Falcon Ridge (2nd generation)
controllers.
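As a rough sketch of the check (the structure and helper name here are
illustrative stand-ins, not the driver's actual API):

  #include <stdbool.h>

  /* Hypothetical capability summary; the real values come from the
   * link controller and the lane adapter registers. */
  struct lane_caps {
          bool lc_bonding_allowed;  /* link controller permits bonding */
          bool supports_2x_width;   /* lane adapter advertises 2x width */
  };

  static bool lane_bonding_possible(const struct lane_caps *local,
                                    const struct lane_caps *remote)
  {
          /* Bonding must be allowed by the link controller... */
          if (!local->lc_bonding_allowed)
                  return false;
          /* ...and both ends of the link must support 2x width. */
          return local->supports_2x_width && remote->supports_2x_width;
  }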
We also expose the current speed and number of lanes under each device
except the host router, following attribute naming similar to that of
the USB bus. Expose the speed and number of lanes for both directions
to allow for the possibility of an asymmetric link in the future.
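For the sysfs side, a minimal sketch of one such attribute (the
tb_to_switch() helper and the link_speed field are assumptions about
the driver's internals, shown only to illustrate the USB-style naming):

  #include <linux/device.h>

  static ssize_t tx_speed_show(struct device *dev,
                               struct device_attribute *attr, char *buf)
  {
          /* assumed helper and field, for illustration only */
          struct tb_switch *sw = tb_to_switch(dev);

          return sprintf(buf, "%u.0 Gb/s\n", sw->link_speed);
  }
  static DEVICE_ATTR_RO(tx_speed);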
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Now that the USB4 spec has names for these DP adapter registers we can
use them instead. This makes it easier to match a certain register to
the spec.
No functional changes.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Now that the USB4 spec has names for these basic registers we can use
them instead. This makes it easier to match a certain register to the
spec.
No functional changes.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
PCIe tunnel path indices got mixed up when we added support for tunnels
between switches that are not adjacent. This did not affect the
functionality, as it is just an index, but fix it now nevertheless to
make the code easier to understand.
Reported-by: Rajmohan Mani <rajmohan.mani@intel.com>
Fixes: 8c7acaaf02 ("thunderbolt: Extend tunnel creation to more than 2 adjacent switches")
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Yehezkel Bernat <YehezkelShB@gmail.com>
Now that the driver can handle every possible tunnel type there is no
point in logging everything at info level, so turn these messages into
debug level messages instead.
While at it, remove the duplicated tunnel activation log message
(tb_tunnel_activate() calls tb_tunnel_restart(), which prints the same
message) and add one missing '\n' termination.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
In addition to PCIe and Display Port tunnels it is also possible to
create tunnels that forward DMA traffic from the host interface adapter
(NHI) to a NULL port that is connected to another domain through a
Thunderbolt cable. These tunnels can be used to carry software messages
such as networking packets.
To support this, we introduce another tunnel type (TB_TUNNEL_DMA) that
supports paths from the NHI to a NULL port and back.
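A minimal sketch of the type addition (the other enumerator names are
illustrative):

  enum tb_tunnel_type {
          TB_TUNNEL_PCI,
          TB_TUNNEL_DP,
          TB_TUNNEL_DMA,  /* NHI <-> NULL port, e.g. networking frames */
  };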
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Now that we have the capability to discover existing tunnels during
driver load there is no point in tearing down tunnels when the driver
gets unloaded. Instead we can just leave them running. If the user
disconnects devices while there is no Thunderbolt driver loaded,
tunneled protocol hotplug happens and is handled by the corresponding
driver (pciehp in the case of a PCIe tunnel, the GFX driver in the case
of a DP tunnel).
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Display Port tunnels are somewhat more complex than PCIe tunnels as
they require 3 paths (AUX Rx/Tx and Video). In addition we are not
supposed to create the tunnels immediately when a DP OUT adapter is
enumerated. Instead we need to wait until we get a hotplug event for
that adapter port, or check whether the port has HPD set, before the
tunnels can be established. This adds Display Port tunneling support to
the software connection manager.
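A minimal sketch of the "wait for HPD" rule, using made-up helper names
rather than the driver's real ones:

  #include <stdbool.h>

  struct tb_dp_port;

  /* hypothetical helpers standing in for the driver's real ones */
  bool dp_port_has_hpd(const struct tb_dp_port *port);
  int dp_tunnel_establish(struct tb_dp_port *port);

  /* Called when a DP OUT adapter is enumerated or a hotplug event
   * arrives for it: only tunnel once a sink is actually present. */
  static int maybe_tunnel_dp(struct tb_dp_port *port)
  {
          if (!dp_port_has_hpd(port))
                  return 0;  /* wait for the hotplug event instead */

          return dp_tunnel_establish(port);
  }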
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
In Apple Macs the boot firmware (EFI) connects all devices automatically
when the system is started, before it hands over to the OS. Instead of
ignoring them, we discover all those PCIe tunnels and record them using
our
internal structures, just like we do when a device is connected after
the OS is already up.
By doing this we can properly tear down tunnels when devices are
disconnected. This also allows us to resume the existing tunnels after a
system suspend/resume cycle.
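The discovery flow boils down to something like the following sketch;
every name in it is a stand-in, not the driver's actual API:

  #include <stdbool.h>

  struct tb;
  struct tb_port;

  struct tb_port *next_pcie_downstream_port(struct tb *tb,
                                            struct tb_port *prev);
  bool pcie_path_is_active(const struct tb_port *port);
  void record_existing_tunnel(struct tb *tb, struct tb_port *port);

  /* At probe time, walk the PCIe downstream adapters and record any
   * tunnel the boot firmware (EFI) already established. */
  static void discover_existing_tunnels(struct tb *tb)
  {
          struct tb_port *port = NULL;

          while ((port = next_pcie_downstream_port(tb, port)))
                  if (pcie_path_is_active(port))
                          record_existing_tunnel(tb, port);
  }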
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
The state of the connected devices and the tunnel configuration is not
known during resume. For example, some paths may not be complete
anymore if the user has unplugged the related devices. So instead of
marking all paths as inactive, we go ahead and deactivate them
explicitly before we restart them.
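In other words, something along these lines (helper names are
hypothetical):

  struct tb_tunnel;

  /* hypothetical helpers; the driver's real functions may differ */
  void tunnel_deactivate_paths(struct tb_tunnel *tunnel);
  int tunnel_restart(struct tb_tunnel *tunnel);

  static int tunnel_resume(struct tb_tunnel *tunnel)
  {
          /*
           * Hardware state after resume is unknown (devices may have
           * been unplugged), so tear the paths down explicitly instead
           * of assuming they are inactive, then set them up again.
           */
          tunnel_deactivate_paths(tunnel);

          return tunnel_restart(tunnel);
  }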
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Now that we can allocate hop IDs per port on a path, we can take
advantage of this and create tunnels covering longer paths than just
between two adjacent switches. PCIe actually does not need this, as it
is typically a daisy chain between two adjacent switches, but this way
we do not need to hard-code creation of the tunnel.
While there, add a name to struct tb_path to make debugging easier, and
update the kernel-doc comments.
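As a simplified illustration (the real struct tb_path has many more
fields), the path now carries a human-readable name that log messages
can use:

  struct tb_path_hop;

  struct tb_path {
          const char *name;          /* e.g. "PCIe Up", used in logs */
          struct tb_path_hop *hops;  /* one hop per switch traversed */
          int path_length;
  };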
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
To be able to tunnel non-PCIe traffic, separate the tunnel
functionality into generic and PCIe-specific parts. Rename struct
tb_pci_tunnel to tb_tunnel, and make it hold an array of paths instead
of just two. Update all the tunneling functions to take this structure
as a parameter.
We also move tb_pci_port_active() to switch.c (and rename it), where we
will be keeping all port and switch related functions.
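Conceptually the generic structure looks something like the sketch
below; the field names are illustrative rather than the exact driver
definitions:

  #include <stdbool.h>
  #include <stddef.h>

  struct tb_path;

  struct tb_tunnel {
          struct tb_path **paths;  /* array instead of two fixed paths */
          size_t npaths;
          int (*activate)(struct tb_tunnel *tunnel, bool active);
  };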
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
In order to tunnel non-PCIe traffic as well, rename tunnel_pci.[ch] to
tunnel.[ch] to reflect this fact. No functional changes.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>