Updates for 4.12 kernel merge window

 - idr usage and locking changes
 - build fix for hns
 - ipoib debug path record file fix
 - hfi1 updates
 - core RDMA netdev addition
 - Intel VNIC driver addition
 - Enhanced accelerators for IPoIB addition
 - Debug cleanups in cxgb3/4
 - Trivial cleanups from SF Markus Elfring
 - Misc rxe fixes from Mellanox
 - Misc ipoib fixes from Mellanox
 - Lots of mlx4/mlx5 changes from Mellanox
 - Misc fixes across the RDMA subsystem
 - ODP paging fixes and improvements
 - qedr updates
 - hfi1 updates
 - OPA port info patches
 - OPA AH patches
 - OPA SA Query patches
 -----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJZCfBsAAoJELgmozMOVy/d9GsP/je5/IyEwQOFVxhLM+BooDWy
 wfH/GWLoT4iSxviWtBzukZzrioxjfyFitZzkTWYxHMj3EIb63i52pDUTpes/soGl
 c3ob0SYv5mPB9b1mBZaIyyTWBWrXfm2pNSfyYryhI1cYxNX5ZLlXG51Xd3YxdB3D
 A8avUsCtH17zSb6Mimm04cT47pn5UIkVkcPKZDCir10hj1JiwLVwrWyC7abxLENp
 jHFw4uKQHOV3IN6jevM/tXfUenjALXwBHHKv+lJsBVijDUPTEmDsBiDXsvO++dmN
 Ph5ElY3KPfUmj4wIWIrY4L56j5Kr13Wxc+U8+MWNC6frbcHYoMCaSz3yaU15NLAd
 UYY5blzZsuNXqhgmudeV89qJpXYleW7KCgJQNiBmLkcQL38+ObdLTP0EmsC02K+W
 YpJbwecjNQtcb3KTJGnKCyMc3+Rs0u6Osz6YKuad4l8cNaxUI8NVujB2ru/wBczg
 fqXEunXjr6tEVM39zqwolImicsSSEzBKfpaFvB3D2Re5O22Eos6DM+DveUnzXAFR
 Hof5NhPURr/1aqNog2ymgGbjlg3tL4JAAG1PRBhvSFYywVMjV/LLBPQOgqaQzIU5
 J72jbSikRJYLCJaLFAeM7nNsTQgAMH58G0vhnrFoAjC7MglYaedcvouLjOs1jrpW
 d5f12NtIBIpC6DvQCNvH
 =pgEL
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma

Pull rdma updates from Doug Ledford:
 "More exchaustive description of primary updates in this release:

   - Lots of driver fixes and misc fixes across the board.

   - I had to base on a net-next tree because the IPoIB Accelerator
     patches needed it.

     Unfortunately, it was known to Mellanox that there would need to be
     an IPoIB accelerator patch to the net tree (which left some
     functions turned off by an #ifdef construct to avoid warnings about
     defined but unused functions), then one to the RDMA tree, then a
     fixup that went back and re-enabled the functions in the net tree
     and enabled their use in the rdma tree.

     Also, a sparse fix was sent to the net tree after I did my pull,
     and the fixup patch conflicts quite directly with that sparse fix,
     so I'm going to submit the fixup patch towards the end of the merge
     window by itself and based upon your master branch at the time.

   - Two separate rounds of hfi1 fixes: one that got dropped from the last
     release because it came in just a day or two before the end of the
     merge window, and the one from this release cycle.

     Of note is that I now have a third series that just landed from
     Intel yesterday. It is not included in this pull request, but I may
     submit it by the end of the week. I'll talk to Intel about
     improving the timing of their submissions for my workflow.

   - Changes to our idr usage in the RDMA subsystem that will tie into
     our cgroup management and also into the upcoming changes for the
     RDMA kernel<->userspace API.

   - Addition of support for a netdev to be tied to an RDMA device at
     the core level.

   - Addition of the VNIC driver from Intel.

     While IPoIB provides IP over InfiniBand (and *only* IP, no lower
     layer protocol headers are allowed or supported), the VNIC driver
     presents a virtual Ethernet device with support for things like
     varying Ethertypes, VLANs, priorities and other features of
     Ethernet.

     The virtual devices are centrally managed by the OPA fabric
     manager, making this (for the time being) a strictly OPA specific
     feature.

   - Improvements to the On-Demand Paging support in the RDMA subsystem.

   - Addition of three significant OPA changes.

     While we added OPA support some time ago (via the hfi1 driver), the
     RDMA subsystem has so far glossed over the areas where OPA and
     InfiniBand differ.

     With this release we are starting to add support for the OPA
     extensions into the RDMA core in the following areas: extended port
     information for OPA is now supported, extended Address Handle
     attributes for OPA are now supported, and extended SA Queries to
     get OPA-specific subnet information are now supported.

  Concise summary from the tag:
   - idr usage and locking changes
   - build fix for hns
   - ipoib debug path record file fix
   - hfi1 updates
   - core RDMA netdev addition
   - Intel VNIC driver addition
   - Enhanced accelerators for IPoIB addition
   - Debug cleanups in cxgb3/4
   - Trivial cleanups from SF Markus Elfring
   - Misc rxe fixes from Mellanox
   - Misc ipoib fixes from Mellanox
   - Lots of mlx4/mlx5 changes from Mellanox
   - Misc fixes across the RDMA subsystem
   - ODP paging fixes and improvements
   - qedr updates
   - hfi1 updates
   - OPA port info patches
   - OPA AH patches
   - OPA SA Query patches"

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma: (191 commits)
  infiniband: avoid dereferencing uninitialized dst on error path
  IB/SA: Add OPA addr header
  IB/mlx5: Add port_xmit_wait to counter registers read
  IB/ocrdma: fix out of bounds access to local buffer
  IB/mlx4: Fix incorrect order of formal and actual parameters
  IB/mlx4: Change flush logic so it adheres to the variable name
  mlx5: Fix mlx5_ib_map_mr_sg mr length
  IB/rxe: Don't clamp residual length to mtu
  IB/SA: Add support to query OPA path records
  IB/SA: Add OPA path record type
  IB/SA: Split struct sa_path_rec based on IB and ROCE specific fields
  IB/SA: Introduce path record specific types
  IB/SA: Rename ib_sa_path_rec to sa_path_rec
  IB/CM: Add braces when using sizeof
  IB/core: Define 'opa' rdma_ah_attr type
  IB/core: Define 'ib' and 'roce' rdma_ah_attr types
  IB/core: Use rdma_ah_attr accessor functions
  IB/core: Add accessor functions for rdma_ah_attr fields
  IB/PVRDMA: Rename ib_ah_attr related functions
  IB/mthca: Rename to_ib_ah_attr to to_rdma_ah_attr
  ...
Linus Torvalds 2017-05-03 12:45:55 -07:00
commit 1684096b1e
242 changed files with 14629 additions and 5535 deletions

@ -0,0 +1,153 @@
The Intel Omni-Path (OPA) Virtual Network Interface Controller (VNIC) feature
supports Ethernet functionality over the Omni-Path fabric by encapsulating
Ethernet packets between HFI nodes.
Architecture
=============
The patterns of exchanges of Omni-Path encapsulated Ethernet packets
involve one or more virtual Ethernet switches overlaid on the Omni-Path
fabric topology. A subset of HFI nodes on the Omni-Path fabric are
permitted to exchange encapsulated Ethernet packets across a particular
virtual Ethernet switch. The virtual Ethernet switches are logical
abstractions achieved by configuring the HFI nodes on the fabric for
header generation and processing. In the simplest configuration all HFI
nodes across the fabric exchange encapsulated Ethernet packets over a
single virtual Ethernet switch. A virtual Ethernet switch is effectively
an independent Ethernet network. The configuration is performed by an
Ethernet Manager (EM) which is part of the trusted Fabric Manager (FM)
application. HFI nodes can have multiple VNICs each connected to a
different virtual Ethernet switch. The diagram below presents a case
of two virtual Ethernet switches with two HFI nodes.
+-------------------+
| Subnet/ |
| Ethernet |
| Manager |
+-------------------+
/ /
/ /
/ /
/ /
+-----------------------------+ +------------------------------+
| Virtual Ethernet Switch | | Virtual Ethernet Switch |
| +---------+ +---------+ | | +---------+ +---------+ |
| | VPORT | | VPORT | | | | VPORT | | VPORT | |
+--+---------+----+---------+-+ +-+---------+----+---------+---+
| \ / |
| \ / |
| \/ |
| / \ |
| / \ |
+-----------+------------+ +-----------+------------+
| VNIC | VNIC | | VNIC | VNIC |
+-----------+------------+ +-----------+------------+
| HFI | | HFI |
+------------------------+ +------------------------+
The Omni-Path encapsulated Ethernet packet format is as described below.
Bits Field
------------------------------------
Quad Word 0:
0-19 SLID (lower 20 bits)
20-30 Length (in Quad Words)
31 BECN bit
32-51 DLID (lower 20 bits)
52-56 SC (Service Class)
57-59 RC (Routing Control)
60 FECN bit
61-62 L2 (=10, 16B format)
63 LT (=1, Link Transfer Head Flit)
Quad Word 1:
0-7 L4 type (=0x78 ETHERNET)
8-11 SLID[23:20]
12-15 DLID[23:20]
16-31 PKEY
32-47 Entropy
48-63 Reserved
Quad Word 2:
0-15 Reserved
16-31 L4 header
32-63 Ethernet Packet
Quad Words 3 to N-1:
0-63 Ethernet packet (pad extended)
Quad Word N (last):
0-23 Ethernet packet (pad extended)
24-55 ICRC
56-61 Tail
62-63 LT (=01, Link Transfer Tail Flit)
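As a concrete illustration of the layout above, the sketch below packs Quad
Word 0 from the documented fields. It assumes bit 0 is the least significant
bit of a little-endian quad word; the helper name and that bit-order
assumption are illustrative only and are not taken from the driver sources.

  #include <stdint.h>

  /*
   * Illustrative only: pack Quad Word 0 of the 16B OPA header from the
   * field layout documented above.  Bit 0 is assumed to be the least
   * significant bit of the quad word; the real driver may differ.
   */
  static uint64_t opa_vnic_pack_qw0(uint32_t slid, uint32_t len_qw,
                                    uint32_t dlid, uint32_t sc, uint32_t rc)
  {
          uint64_t qw0 = 0;

          qw0 |= (uint64_t)(slid & 0xFFFFF);        /* bits 0-19:  SLID (lower 20 bits) */
          qw0 |= (uint64_t)(len_qw & 0x7FF) << 20;  /* bits 20-30: length in quad words */
                                                    /* bit 31:     BECN, left clear     */
          qw0 |= (uint64_t)(dlid & 0xFFFFF) << 32;  /* bits 32-51: DLID (lower 20 bits) */
          qw0 |= (uint64_t)(sc & 0x1F) << 52;       /* bits 52-56: service class        */
          qw0 |= (uint64_t)(rc & 0x7) << 57;        /* bits 57-59: routing control      */
                                                    /* bit 60:     FECN, left clear     */
          qw0 |= (uint64_t)0x2 << 61;               /* bits 61-62: L2 = 10 (16B format) */
          qw0 |= (uint64_t)1 << 63;                 /* bit 63:     LT head flit         */

          return qw0;
  }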
The Ethernet packet is padded on the transmit side to ensure that the VNIC OPA
packet is quad word aligned. The 'Tail' field contains the number of bytes
padded. On the receive side the 'Tail' field is read and the padding is
removed (along with ICRC, Tail and OPA header) before passing the packet up
the network stack.
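A minimal sketch of that padding rule follows; the helper name is
hypothetical and only the quad word size comes from the format above.

  #include <stdint.h>

  #define OPA_VNIC_QW_BYTES 8   /* one quad word */

  /*
   * Illustrative only: number of pad bytes needed on transmit so that the
   * encapsulated packet length becomes a whole number of quad words.  This
   * is the count the 'Tail' field carries so the receiver can strip the
   * padding again.
   */
  static uint32_t opa_vnic_pad_bytes(uint32_t pkt_len)
  {
          return (OPA_VNIC_QW_BYTES - (pkt_len % OPA_VNIC_QW_BYTES)) %
                 OPA_VNIC_QW_BYTES;
  }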
The L4 header field contains the virtual Ethernet switch id the VNIC port
belongs to. On the receive side, this field is used to de-multiplex the
received VNIC packets to different VNIC ports.
Driver Design
==============
The Intel OPA VNIC software design is presented in the diagram below.
OPA VNIC functionality has a HW dependent component and a HW
independent component.
Support has been added for an IB device to allocate and free RDMA netdev
devices. The RDMA netdev supports interfacing with the network stack, thus
creating standard network interfaces. OPA_VNIC is an RDMA netdev device
type.
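As a rough sketch of how a caller might request such a netdev (the
alloc_rdma_netdev verb, the RDMA_NETDEV_OPA_VNIC type and the exact argument
list below are assumptions made for illustration, not quotes from the tree):

  #include <linux/err.h>
  #include <linux/errno.h>
  #include <linux/netdevice.h>
  #include <rdma/ib_verbs.h>

  /*
   * Hypothetical sketch: ask the underlying IB device (e.g. hfi1) to
   * allocate an OPA_VNIC type RDMA netdev.  The callback name, enum value
   * and parameters are assumed; they only illustrate the HW dependent /
   * HW independent split described in this document.
   */
  static struct net_device *example_alloc_vnic_netdev(struct ib_device *ibdev,
                                                      u8 port_num,
                                                      void (*setup)(struct net_device *))
  {
          if (!ibdev->alloc_rdma_netdev)
                  return ERR_PTR(-EOPNOTSUPP);

          return ibdev->alloc_rdma_netdev(ibdev, port_num,
                                          RDMA_NETDEV_OPA_VNIC,
                                          "veth%d", NET_NAME_UNKNOWN,
                                          setup);
  }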
The HW dependent VNIC functionality is part of the HFI1 driver. It
implements the verbs to allocate and free the OPA_VNIC RDMA netdev.
It involves HW resource allocation/management for VNIC functionality.
It interfaces with the network stack and implements the required
net_device_ops functions. It expects Omni-Path encapsulated Ethernet
packets in the transmit path and provides HW access to them. It strips
the Omni-Path header from the received packets before passing them up
the network stack. It also implements the RDMA netdev control operations.
The OPA VNIC module implements the HW independent VNIC functionality.
It consists of two parts. The VNIC Ethernet Management Agent (VEMA)
registers itself with IB core as an IB client and interfaces with the
IB MAD stack. It exchanges the management information with the Ethernet
Manager (EM) and the VNIC netdev. The VNIC netdev part allocates and frees
the OPA_VNIC RDMA netdev devices. It overrides the net_device_ops functions
set by HW dependent VNIC driver where required to accommodate any control
operation. It also handles the encapsulation of Ethernet packets with an
Omni-Path header in the transmit path. For each VNIC interface, the
information required for encapsulation is configured by the EM via the VEMA
MAD interface. It also passes any control information to the HW dependent
driver by invoking the RDMA netdev control operations.
+-------------------+ +----------------------+
| | | Linux |
| IB MAD | | Network |
| | | Stack |
+-------------------+ +----------------------+
| | |
| | |
+----------------------------+ |
| | |
| OPA VNIC Module | |
| (OPA VNIC RDMA Netdev | |
| & EMA functions) | |
| | |
+----------------------------+ |
| |
| |
+------------------+ |
| IB core | |
+------------------+ |
| |
| |
+--------------------------------------------+
| |
| HFI1 Driver with VNIC support |
| |
+--------------------------------------------+
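The VEMA registration mentioned above follows the usual IB client pattern.
The sketch below shows that pattern with placeholder names and empty
callbacks; it is not the actual opa_vnic implementation.

  #include <linux/module.h>
  #include <rdma/ib_verbs.h>

  static void example_vema_add_one(struct ib_device *device)
  {
          /* discover ports and set up per-device VEMA/MAD state here */
  }

  static void example_vema_remove_one(struct ib_device *device,
                                      void *client_data)
  {
          /* tear down per-device state here */
  }

  static struct ib_client example_vema_client = {
          .name   = "example_vema",
          .add    = example_vema_add_one,
          .remove = example_vema_remove_one,
  };

  static int __init example_vema_init(void)
  {
          return ib_register_client(&example_vema_client);
  }

  static void __exit example_vema_exit(void)
  {
          ib_unregister_client(&example_vema_client);
  }

  module_init(example_vema_init);
  module_exit(example_vema_exit);
  MODULE_LICENSE("GPL");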

@ -5911,6 +5911,13 @@ F: drivers/block/cciss*
F: include/linux/cciss_ioctl.h
F: include/uapi/linux/cciss_ioctl.h
OPA-VNIC DRIVER
M: Dennis Dalessandro <dennis.dalessandro@intel.com>
M: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
L: linux-rdma@vger.kernel.org
S: Supported
F: drivers/infiniband/ulp/opa_vnic
HFI1 DRIVER
M: Mike Marciniszyn <mike.marciniszyn@intel.com>
M: Dennis Dalessandro <dennis.dalessandro@intel.com>
@ -6519,6 +6526,7 @@ W: http://www.openfabrics.org/
Q: http://patchwork.kernel.org/project/linux-rdma/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma.git
S: Supported
F: Documentation/devicetree/bindings/infiniband/
F: Documentation/infiniband/
F: drivers/infiniband/
F: include/uapi/linux/if_infiniband.h
@ -11489,11 +11497,11 @@ S: Supported
F: drivers/net/ethernet/emulex/benet/
EMULEX ONECONNECT ROCE DRIVER
M: Selvin Xavier <selvin.xavier@avagotech.com>
M: Devesh Sharma <devesh.sharma@avagotech.com>
M: Selvin Xavier <selvin.xavier@broadcom.com>
M: Devesh Sharma <devesh.sharma@broadcom.com>
L: linux-rdma@vger.kernel.org
W: http://www.emulex.com
S: Supported
W: http://www.broadcom.com
S: Odd Fixes
F: drivers/infiniband/hw/ocrdma/
F: include/uapi/rdma/ocrdma-abi.h

@ -85,6 +85,7 @@ source "drivers/infiniband/ulp/srpt/Kconfig"
source "drivers/infiniband/ulp/iser/Kconfig"
source "drivers/infiniband/ulp/isert/Kconfig"
source "drivers/infiniband/ulp/opa_vnic/Kconfig"
source "drivers/infiniband/sw/rdmavt/Kconfig"
source "drivers/infiniband/sw/rxe/Kconfig"

@ -29,4 +29,5 @@ ib_umad-y := user_mad.o
ib_ucm-y := ucm.o
ib_uverbs-y := uverbs_main.o uverbs_cmd.o uverbs_marshall.o
ib_uverbs-y := uverbs_main.o uverbs_cmd.o uverbs_marshall.o \
rdma_core.o uverbs_std_types.o

@ -444,9 +444,9 @@ static int addr6_resolve(struct sockaddr_in6 *src_in,
fl6.saddr = src_in->sin6_addr;
fl6.flowi6_oif = addr->bound_dev_if;
dst = ip6_route_output(addr->net, NULL, &fl6);
if ((ret = dst->error))
goto put;
ret = ipv6_stub->ipv6_dst_lookup(addr->net, NULL, &dst, &fl6);
if (ret < 0)
return ret;
rt = (struct rt6_info *)dst;
if (ipv6_addr_any(&fl6.saddr)) {

@ -137,13 +137,13 @@ void agent_send_response(const struct ib_mad_hdr *mad_hdr, const struct ib_grh *
err2:
ib_free_send_mad(send_buf);
err1:
ib_destroy_ah(ah);
rdma_destroy_ah(ah);
}
static void agent_send_handler(struct ib_mad_agent *mad_agent,
struct ib_mad_send_wc *mad_send_wc)
{
ib_destroy_ah(mad_send_wc->send_buf->ah);
rdma_destroy_ah(mad_send_wc->send_buf->ah);
ib_free_send_mad(mad_send_wc->send_buf);
}

@ -228,7 +228,7 @@ struct cm_device {
struct cm_av {
struct cm_port *port;
union ib_gid dgid;
struct ib_ah_attr ah_attr;
struct rdma_ah_attr ah_attr;
u16 pkey_index;
u8 timeout;
};
@ -241,7 +241,7 @@ struct cm_work {
__be32 local_id; /* Established / timewait */
__be32 remote_id;
struct ib_cm_event cm_event;
struct ib_sa_path_rec path[0];
struct sa_path_rec path[0];
};
struct cm_timewait_info {
@ -343,7 +343,7 @@ static int cm_alloc_msg(struct cm_id_private *cm_id_priv,
ret = -ENODEV;
goto out;
}
ah = ib_create_ah(mad_agent->qp->pd, &av->ah_attr);
ah = rdma_create_ah(mad_agent->qp->pd, &av->ah_attr);
if (IS_ERR(ah)) {
ret = PTR_ERR(ah);
goto out;
@ -355,7 +355,7 @@ static int cm_alloc_msg(struct cm_id_private *cm_id_priv,
GFP_ATOMIC,
IB_MGMT_BASE_VERSION);
if (IS_ERR(m)) {
ib_destroy_ah(ah);
rdma_destroy_ah(ah);
ret = PTR_ERR(m);
goto out;
}
@ -390,7 +390,7 @@ static int cm_alloc_response_msg(struct cm_port *port,
GFP_ATOMIC,
IB_MGMT_BASE_VERSION);
if (IS_ERR(m)) {
ib_destroy_ah(ah);
rdma_destroy_ah(ah);
return PTR_ERR(m);
}
m->ah = ah;
@ -400,7 +400,7 @@ static int cm_alloc_response_msg(struct cm_port *port,
static void cm_free_msg(struct ib_mad_send_buf *msg)
{
ib_destroy_ah(msg->ah);
rdma_destroy_ah(msg->ah);
if (msg->context[0])
cm_deref_id(msg->context[0]);
ib_free_send_mad(msg);
@ -440,7 +440,7 @@ static void cm_init_av_for_response(struct cm_port *port, struct ib_wc *wc,
grh, &av->ah_attr);
}
static int cm_init_av_by_path(struct ib_sa_path_rec *path, struct cm_av *av,
static int cm_init_av_by_path(struct sa_path_rec *path, struct cm_av *av,
struct cm_id_private *cm_id_priv)
{
struct cm_device *cm_dev;
@ -453,7 +453,8 @@ static int cm_init_av_by_path(struct ib_sa_path_rec *path, struct cm_av *av,
read_lock_irqsave(&cm.device_lock, flags);
list_for_each_entry(cm_dev, &cm.device_list, list) {
if (!ib_find_cached_gid(cm_dev->ib_device, &path->sgid,
path->gid_type, ndev, &p, NULL)) {
sa_conv_pathrec_to_gid_type(path),
ndev, &p, NULL)) {
port = cm_dev->port[p-1];
break;
}
@ -1172,8 +1173,8 @@ static void cm_format_req(struct cm_req_msg *req_msg,
struct cm_id_private *cm_id_priv,
struct ib_cm_req_param *param)
{
struct ib_sa_path_rec *pri_path = param->primary_path;
struct ib_sa_path_rec *alt_path = param->alternate_path;
struct sa_path_rec *pri_path = param->primary_path;
struct sa_path_rec *alt_path = param->alternate_path;
cm_format_mad_hdr(&req_msg->hdr, CM_REQ_ATTR_ID,
cm_form_tid(cm_id_priv, CM_MSG_SEQUENCE_REQ));
@ -1202,8 +1203,10 @@ static void cm_format_req(struct cm_req_msg *req_msg,
}
if (pri_path->hop_limit <= 1) {
req_msg->primary_local_lid = pri_path->slid;
req_msg->primary_remote_lid = pri_path->dlid;
req_msg->primary_local_lid =
htons(ntohl(sa_path_get_slid(pri_path)));
req_msg->primary_remote_lid =
htons(ntohl(sa_path_get_dlid(pri_path)));
} else {
/* Work-around until there's a way to obtain remote LID info */
req_msg->primary_local_lid = IB_LID_PERMISSIVE;
@ -1223,8 +1226,10 @@ static void cm_format_req(struct cm_req_msg *req_msg,
if (alt_path) {
if (alt_path->hop_limit <= 1) {
req_msg->alt_local_lid = alt_path->slid;
req_msg->alt_remote_lid = alt_path->dlid;
req_msg->alt_local_lid =
htons(ntohl(sa_path_get_slid(alt_path)));
req_msg->alt_remote_lid =
htons(ntohl(sa_path_get_dlid(alt_path)));
} else {
req_msg->alt_local_lid = IB_LID_PERMISSIVE;
req_msg->alt_remote_lid = IB_LID_PERMISSIVE;
@ -1401,14 +1406,15 @@ static inline int cm_is_active_peer(__be64 local_ca_guid, __be64 remote_ca_guid,
}
static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
struct ib_sa_path_rec *primary_path,
struct ib_sa_path_rec *alt_path)
struct sa_path_rec *primary_path,
struct sa_path_rec *alt_path)
{
memset(primary_path, 0, sizeof *primary_path);
primary_path->dgid = req_msg->primary_local_gid;
primary_path->sgid = req_msg->primary_remote_gid;
primary_path->dlid = req_msg->primary_local_lid;
primary_path->slid = req_msg->primary_remote_lid;
sa_path_set_dlid(primary_path,
htonl(ntohs(req_msg->primary_local_lid)));
sa_path_set_slid(primary_path,
htonl(ntohs(req_msg->primary_remote_lid)));
primary_path->flow_label = cm_req_get_primary_flow_label(req_msg);
primary_path->hop_limit = req_msg->primary_hop_limit;
primary_path->traffic_class = req_msg->primary_traffic_class;
@ -1423,14 +1429,15 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
primary_path->packet_life_time =
cm_req_get_primary_local_ack_timeout(req_msg);
primary_path->packet_life_time -= (primary_path->packet_life_time > 0);
primary_path->service_id = req_msg->service_id;
sa_path_set_service_id(primary_path, req_msg->service_id);
if (req_msg->alt_local_lid) {
memset(alt_path, 0, sizeof *alt_path);
alt_path->dgid = req_msg->alt_local_gid;
alt_path->sgid = req_msg->alt_remote_gid;
alt_path->dlid = req_msg->alt_local_lid;
alt_path->slid = req_msg->alt_remote_lid;
sa_path_set_dlid(alt_path,
htonl(ntohs(req_msg->alt_local_lid)));
sa_path_set_slid(alt_path,
htonl(ntohs(req_msg->alt_remote_lid)));
alt_path->flow_label = cm_req_get_alt_flow_label(req_msg);
alt_path->hop_limit = req_msg->alt_hop_limit;
alt_path->traffic_class = req_msg->alt_traffic_class;
@ -1445,7 +1452,7 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
alt_path->packet_life_time =
cm_req_get_alt_local_ack_timeout(req_msg);
alt_path->packet_life_time -= (alt_path->packet_life_time > 0);
alt_path->service_id = req_msg->service_id;
sa_path_set_service_id(alt_path, req_msg->service_id);
}
}
@ -1722,6 +1729,7 @@ static int cm_req_handler(struct cm_work *work)
struct cm_req_msg *req_msg;
union ib_gid gid;
struct ib_gid_attr gid_attr;
const struct ib_global_route *grh;
int ret;
req_msg = (struct cm_req_msg *)work->mad_recv_wc->recv_buf.mad;
@ -1758,21 +1766,34 @@ static int cm_req_handler(struct cm_work *work)
cm_id_priv->id.service_mask = ~cpu_to_be64(0);
cm_process_routed_req(req_msg, work->mad_recv_wc->wc);
cm_format_paths_from_req(req_msg, &work->path[0], &work->path[1]);
memcpy(work->path[0].dmac, cm_id_priv->av.ah_attr.dmac, ETH_ALEN);
work->path[0].hop_limit = cm_id_priv->av.ah_attr.grh.hop_limit;
memset(&work->path[0], 0, sizeof(work->path[0]));
memset(&work->path[1], 0, sizeof(work->path[1]));
grh = rdma_ah_read_grh(&cm_id_priv->av.ah_attr);
ret = ib_get_cached_gid(work->port->cm_dev->ib_device,
work->port->port_num,
cm_id_priv->av.ah_attr.grh.sgid_index,
grh->sgid_index,
&gid, &gid_attr);
if (!ret) {
if (gid_attr.ndev) {
work->path[0].ifindex = gid_attr.ndev->ifindex;
work->path[0].net = dev_net(gid_attr.ndev);
work->path[0].rec_type =
sa_conv_gid_to_pathrec_type(gid_attr.gid_type);
sa_path_set_ifindex(&work->path[0],
gid_attr.ndev->ifindex);
sa_path_set_ndev(&work->path[0],
dev_net(gid_attr.ndev));
dev_put(gid_attr.ndev);
} else {
work->path[0].rec_type = SA_PATH_REC_TYPE_IB;
}
work->path[0].gid_type = gid_attr.gid_type;
if (req_msg->alt_local_lid)
work->path[1].rec_type = work->path[0].rec_type;
cm_format_paths_from_req(req_msg, &work->path[0],
&work->path[1]);
if (cm_id_priv->av.ah_attr.type == RDMA_AH_ATTR_TYPE_ROCE)
sa_path_set_dmac(&work->path[0],
cm_id_priv->av.ah_attr.roce.dmac);
work->path[0].hop_limit = grh->hop_limit;
ret = cm_init_av_by_path(&work->path[0], &cm_id_priv->av,
cm_id_priv);
}
@ -1782,11 +1803,18 @@ static int cm_req_handler(struct cm_work *work)
&work->path[0].sgid,
&gid_attr);
if (!err && gid_attr.ndev) {
work->path[0].ifindex = gid_attr.ndev->ifindex;
work->path[0].net = dev_net(gid_attr.ndev);
work->path[0].rec_type =
sa_conv_gid_to_pathrec_type(gid_attr.gid_type);
sa_path_set_ifindex(&work->path[0],
gid_attr.ndev->ifindex);
sa_path_set_ndev(&work->path[0],
dev_net(gid_attr.ndev));
dev_put(gid_attr.ndev);
} else {
work->path[0].rec_type = SA_PATH_REC_TYPE_IB;
}
work->path[0].gid_type = gid_attr.gid_type;
if (req_msg->alt_local_lid)
work->path[1].rec_type = work->path[0].rec_type;
ib_send_cm_rej(cm_id, IB_CM_REJ_INVALID_GID,
&work->path[0].sgid, sizeof work->path[0].sgid,
NULL, 0);
@ -2811,7 +2839,7 @@ static int cm_mra_handler(struct cm_work *work)
static void cm_format_lap(struct cm_lap_msg *lap_msg,
struct cm_id_private *cm_id_priv,
struct ib_sa_path_rec *alternate_path,
struct sa_path_rec *alternate_path,
const void *private_data,
u8 private_data_len)
{
@ -2822,8 +2850,10 @@ static void cm_format_lap(struct cm_lap_msg *lap_msg,
cm_lap_set_remote_qpn(lap_msg, cm_id_priv->remote_qpn);
/* todo: need remote CM response timeout */
cm_lap_set_remote_resp_timeout(lap_msg, 0x1F);
lap_msg->alt_local_lid = alternate_path->slid;
lap_msg->alt_remote_lid = alternate_path->dlid;
lap_msg->alt_local_lid =
htons(ntohl(sa_path_get_slid(alternate_path)));
lap_msg->alt_remote_lid =
htons(ntohl(sa_path_get_dlid(alternate_path)));
lap_msg->alt_local_gid = alternate_path->sgid;
lap_msg->alt_remote_gid = alternate_path->dgid;
cm_lap_set_flow_label(lap_msg, alternate_path->flow_label);
@ -2841,7 +2871,7 @@ static void cm_format_lap(struct cm_lap_msg *lap_msg,
}
int ib_send_cm_lap(struct ib_cm_id *cm_id,
struct ib_sa_path_rec *alternate_path,
struct sa_path_rec *alternate_path,
const void *private_data,
u8 private_data_len)
{
@ -2895,14 +2925,15 @@ out: spin_unlock_irqrestore(&cm_id_priv->lock, flags);
EXPORT_SYMBOL(ib_send_cm_lap);
static void cm_format_path_from_lap(struct cm_id_private *cm_id_priv,
struct ib_sa_path_rec *path,
struct sa_path_rec *path,
struct cm_lap_msg *lap_msg)
{
memset(path, 0, sizeof *path);
path->rec_type = SA_PATH_REC_TYPE_IB;
path->dgid = lap_msg->alt_local_gid;
path->sgid = lap_msg->alt_remote_gid;
path->dlid = lap_msg->alt_local_lid;
path->slid = lap_msg->alt_remote_lid;
sa_path_set_dlid(path, htonl(ntohs(lap_msg->alt_local_lid)));
sa_path_set_slid(path, htonl(ntohs(lap_msg->alt_remote_lid)));
path->flow_label = cm_lap_get_flow_label(lap_msg);
path->hop_limit = lap_msg->alt_hop_limit;
path->traffic_class = cm_lap_get_traffic_class(lap_msg);
@ -3708,7 +3739,7 @@ static void cm_recv_handler(struct ib_mad_agent *mad_agent,
atomic_long_inc(&port->counter_group[CM_RECV].
counter[attr_id - CM_ATTR_ID_OFFSET]);
work = kmalloc(sizeof *work + sizeof(struct ib_sa_path_rec) * paths,
work = kmalloc(sizeof(*work) + sizeof(struct sa_path_rec) * paths,
GFP_KERNEL);
if (!work) {
ib_free_recv_mad(mad_recv_wc);
@ -3800,7 +3831,7 @@ static int cm_init_qp_rtr_attr(struct cm_id_private *cm_id_priv,
cm_id_priv->responder_resources;
qp_attr->min_rnr_timer = 0;
}
if (cm_id_priv->alt_av.ah_attr.dlid) {
if (rdma_ah_get_dlid(&cm_id_priv->alt_av.ah_attr)) {
*qp_attr_mask |= IB_QP_ALT_PATH;
qp_attr->alt_port_num = cm_id_priv->alt_av.port->port_num;
qp_attr->alt_pkey_index = cm_id_priv->alt_av.pkey_index;
@ -3854,7 +3885,7 @@ static int cm_init_qp_rts_attr(struct cm_id_private *cm_id_priv,
default:
break;
}
if (cm_id_priv->alt_av.ah_attr.dlid) {
if (rdma_ah_get_dlid(&cm_id_priv->alt_av.ah_attr)) {
*qp_attr_mask |= IB_QP_PATH_MIG_STATE;
qp_attr->path_mig_state = IB_MIG_REARM;
}

@ -929,7 +929,8 @@ static int cma_modify_qp_rtr(struct rdma_id_private *id_priv,
goto out;
ret = ib_query_gid(id_priv->id.device, id_priv->id.port_num,
qp_attr.ah_attr.grh.sgid_index, &sgid, NULL);
rdma_ah_read_grh(&qp_attr.ah_attr)->sgid_index,
&sgid, NULL);
if (ret)
goto out;
@ -1127,7 +1128,7 @@ static inline int cma_any_port(struct sockaddr *addr)
static void cma_save_ib_info(struct sockaddr *src_addr,
struct sockaddr *dst_addr,
struct rdma_cm_id *listen_id,
struct ib_sa_path_rec *path)
struct sa_path_rec *path)
{
struct sockaddr_ib *listen_ib, *ib;
@ -1139,7 +1140,7 @@ static void cma_save_ib_info(struct sockaddr *src_addr,
ib->sib_pkey = path->pkey;
ib->sib_flowinfo = path->flow_label;
memcpy(&ib->sib_addr, &path->sgid, 16);
ib->sib_sid = path->service_id;
ib->sib_sid = sa_path_get_service_id(path);
ib->sib_scope_id = 0;
} else {
ib->sib_pkey = listen_ib->sib_pkey;
@ -1273,7 +1274,8 @@ static int cma_save_req_info(const struct ib_cm_event *ib_event,
memcpy(&req->local_gid, &req_param->primary_path->sgid,
sizeof(req->local_gid));
req->has_gid = true;
req->service_id = req_param->primary_path->service_id;
req->service_id =
sa_path_get_service_id(req_param->primary_path);
req->pkey = be16_to_cpu(req_param->primary_path->pkey);
if (req->pkey != req_param->bth_pkey)
pr_warn_ratelimited("RDMA CMA: got different BTH P_Key (0x%x) and primary path P_Key (0x%x)\n"
@ -1755,6 +1757,9 @@ static int cma_ib_handler(struct ib_cm_id *cm_id, struct ib_cm_event *ib_event)
event.status = -ETIMEDOUT;
break;
case IB_CM_REP_RECEIVED:
if (cma_comp(id_priv, RDMA_CM_CONNECT) &&
(id_priv->id.qp_type != IB_QPT_UD))
ib_send_cm_mra(cm_id, CMA_CM_MRA_SETTING, NULL, 0);
if (id_priv->id.qp) {
event.status = cma_rep_recv(id_priv);
event.event = event.status ? RDMA_CM_EVENT_CONNECT_ERROR :
@ -1821,8 +1826,8 @@ static struct rdma_id_private *cma_new_conn_id(struct rdma_cm_id *listen_id,
struct rdma_cm_id *id;
struct rdma_route *rt;
const sa_family_t ss_family = listen_id->route.addr.src_addr.ss_family;
const __be64 service_id =
ib_event->param.req_rcvd.primary_path->service_id;
struct sa_path_rec *path = ib_event->param.req_rcvd.primary_path;
const __be64 service_id = sa_path_get_service_id(path);
int ret;
id = rdma_create_id(listen_id->route.addr.dev_addr.net,
@ -1844,7 +1849,7 @@ static struct rdma_id_private *cma_new_conn_id(struct rdma_cm_id *listen_id,
if (!rt->path_rec)
goto err;
rt->path_rec[0] = *ib_event->param.req_rcvd.primary_path;
rt->path_rec[0] = *path;
if (rt->num_paths == 2)
rt->path_rec[1] = *ib_event->param.req_rcvd.alternate_path;
@ -2297,7 +2302,7 @@ void rdma_set_service_type(struct rdma_cm_id *id, int tos)
}
EXPORT_SYMBOL(rdma_set_service_type);
static void cma_query_handler(int status, struct ib_sa_path_rec *path_rec,
static void cma_query_handler(int status, struct sa_path_rec *path_rec,
void *context)
{
struct cma_work *work = context;
@ -2324,18 +2329,25 @@ static int cma_query_ib_route(struct rdma_id_private *id_priv, int timeout_ms,
struct cma_work *work)
{
struct rdma_dev_addr *dev_addr = &id_priv->id.route.addr.dev_addr;
struct ib_sa_path_rec path_rec;
struct sa_path_rec path_rec;
ib_sa_comp_mask comp_mask;
struct sockaddr_in6 *sin6;
struct sockaddr_ib *sib;
memset(&path_rec, 0, sizeof path_rec);
if (rdma_cap_opa_ah(id_priv->id.device, id_priv->id.port_num))
path_rec.rec_type = SA_PATH_REC_TYPE_OPA;
else
path_rec.rec_type = SA_PATH_REC_TYPE_IB;
rdma_addr_get_sgid(dev_addr, &path_rec.sgid);
rdma_addr_get_dgid(dev_addr, &path_rec.dgid);
path_rec.pkey = cpu_to_be16(ib_addr_get_pkey(dev_addr));
path_rec.numb_path = 1;
path_rec.reversible = 1;
path_rec.service_id = rdma_get_service_id(&id_priv->id, cma_dst_addr(id_priv));
sa_path_set_service_id(&path_rec,
rdma_get_service_id(&id_priv->id,
cma_dst_addr(id_priv)));
comp_mask = IB_SA_PATH_REC_DGID | IB_SA_PATH_REC_SGID |
IB_SA_PATH_REC_PKEY | IB_SA_PATH_REC_NUMB_PATH |
@ -2449,7 +2461,7 @@ static int cma_resolve_ib_route(struct rdma_id_private *id_priv, int timeout_ms)
}
int rdma_set_ib_paths(struct rdma_cm_id *id,
struct ib_sa_path_rec *path_rec, int num_paths)
struct sa_path_rec *path_rec, int num_paths)
{
struct rdma_id_private *id_priv;
int ret;
@ -2528,6 +2540,7 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
struct cma_work *work;
int ret;
struct net_device *ndev = NULL;
enum ib_gid_type gid_type = IB_GID_TYPE_IB;
u8 default_roce_tos = id_priv->cma_dev->default_roce_tos[id_priv->id.port_num -
rdma_start_port(id_priv->cma_dev->device)];
u8 tos = id_priv->tos_set ? id_priv->tos : default_roce_tos;
@ -2572,21 +2585,22 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
}
}
route->path_rec->net = &init_net;
route->path_rec->ifindex = ndev->ifindex;
supported_gids = roce_gid_type_mask_support(id_priv->id.device,
id_priv->id.port_num);
route->path_rec->gid_type =
cma_route_gid_type(addr->dev_addr.network,
supported_gids,
id_priv->gid_type);
gid_type = cma_route_gid_type(addr->dev_addr.network,
supported_gids,
id_priv->gid_type);
route->path_rec->rec_type =
sa_conv_gid_to_pathrec_type(gid_type);
sa_path_set_ndev(route->path_rec, &init_net);
sa_path_set_ifindex(route->path_rec, ndev->ifindex);
}
if (!ndev) {
ret = -ENODEV;
goto err2;
}
memcpy(route->path_rec->dmac, addr->dev_addr.dst_dev_addr, ETH_ALEN);
sa_path_set_dmac(route->path_rec, addr->dev_addr.dst_dev_addr);
rdma_ip2gid((struct sockaddr *)&id_priv->id.route.addr.src_addr,
&route->path_rec->sgid);
@ -2594,8 +2608,10 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
&route->path_rec->dgid);
/* Use the hint from IP Stack to select GID Type */
if (route->path_rec->gid_type < ib_network_to_gid_type(addr->dev_addr.network))
route->path_rec->gid_type = ib_network_to_gid_type(addr->dev_addr.network);
if (gid_type < ib_network_to_gid_type(addr->dev_addr.network))
gid_type = ib_network_to_gid_type(addr->dev_addr.network);
route->path_rec->rec_type = sa_conv_gid_to_pathrec_type(gid_type);
if (((struct sockaddr *)&id_priv->id.route.addr.dst_addr)->sa_family != AF_IB)
/* TODO: get the hoplimit from the inet/inet6 device */
route->path_rec->hop_limit = addr->dev_addr.hoplimit;
@ -3941,63 +3957,10 @@ static void cma_set_mgid(struct rdma_id_private *id_priv,
}
}
static void cma_query_sa_classport_info_cb(int status,
struct ib_class_port_info *rec,
void *context)
{
struct class_port_info_context *cb_ctx = context;
WARN_ON(!context);
if (status || !rec) {
pr_debug("RDMA CM: %s port %u failed query ClassPortInfo status: %d\n",
cb_ctx->device->name, cb_ctx->port_num, status);
goto out;
}
memcpy(cb_ctx->class_port_info, rec, sizeof(struct ib_class_port_info));
out:
complete(&cb_ctx->done);
}
static int cma_query_sa_classport_info(struct ib_device *device, u8 port_num,
struct ib_class_port_info *class_port_info)
{
struct class_port_info_context *cb_ctx;
int ret;
cb_ctx = kmalloc(sizeof(*cb_ctx), GFP_KERNEL);
if (!cb_ctx)
return -ENOMEM;
cb_ctx->device = device;
cb_ctx->class_port_info = class_port_info;
cb_ctx->port_num = port_num;
init_completion(&cb_ctx->done);
ret = ib_sa_classport_info_rec_query(&sa_client, device, port_num,
CMA_QUERY_CLASSPORT_INFO_TIMEOUT,
GFP_KERNEL, cma_query_sa_classport_info_cb,
cb_ctx, &cb_ctx->sa_query);
if (ret < 0) {
pr_err("RDMA CM: %s port %u failed to send ClassPortInfo query, ret: %d\n",
device->name, port_num, ret);
goto out;
}
wait_for_completion(&cb_ctx->done);
out:
kfree(cb_ctx);
return ret;
}
static int cma_join_ib_multicast(struct rdma_id_private *id_priv,
struct cma_multicast *mc)
{
struct ib_sa_mcmember_rec rec;
struct ib_class_port_info class_port_info;
struct rdma_dev_addr *dev_addr = &id_priv->id.route.addr.dev_addr;
ib_sa_comp_mask comp_mask;
int ret;
@ -4018,21 +3981,14 @@ static int cma_join_ib_multicast(struct rdma_id_private *id_priv,
rec.pkey = cpu_to_be16(ib_addr_get_pkey(dev_addr));
rec.join_state = mc->join_state;
if (rec.join_state == BIT(SENDONLY_FULLMEMBER_JOIN)) {
ret = cma_query_sa_classport_info(id_priv->id.device,
id_priv->id.port_num,
&class_port_info);
if (ret)
return ret;
if (!(ib_get_cpi_capmask2(&class_port_info) &
IB_SA_CAP_MASK2_SENDONLY_FULL_MEM_SUPPORT)) {
pr_warn("RDMA CM: %s port %u Unable to multicast join\n"
"RDMA CM: SM doesn't support Send Only Full Member option\n",
id_priv->id.device->name, id_priv->id.port_num);
return -EOPNOTSUPP;
}
if ((rec.join_state == BIT(SENDONLY_FULLMEMBER_JOIN)) &&
(!ib_sa_sendonly_fullmem_support(&sa_client,
id_priv->id.device,
id_priv->id.port_num))) {
pr_warn("RDMA CM: %s port %u Unable to multicast join\n"
"RDMA CM: SM doesn't support Send Only Full Member option\n",
id_priv->id.device->name, id_priv->id.port_num);
return -EOPNOTSUPP;
}
comp_mask = IB_SA_MCMEMBER_REC_MGID | IB_SA_MCMEMBER_REC_PORT_GID |

@ -172,8 +172,16 @@ static void ib_device_release(struct device *device)
{
struct ib_device *dev = container_of(device, struct ib_device, dev);
ib_cache_release_one(dev);
kfree(dev->port_immutable);
WARN_ON(dev->reg_state == IB_DEV_REGISTERED);
if (dev->reg_state == IB_DEV_UNREGISTERED) {
/*
* In IB_DEV_UNINITIALIZED state, cache or port table
* is not even created. Free cache and port table only when
* device reaches UNREGISTERED state.
*/
ib_cache_release_one(dev);
kfree(dev->port_immutable);
}
kfree(dev);
}
@ -380,32 +388,27 @@ int ib_register_device(struct ib_device *device,
ret = ib_cache_setup_one(device);
if (ret) {
pr_warn("Couldn't set up InfiniBand P_Key/GID cache\n");
goto out;
goto port_cleanup;
}
ret = ib_device_register_rdmacg(device);
if (ret) {
pr_warn("Couldn't register device with rdma cgroup\n");
ib_cache_cleanup_one(device);
goto out;
goto cache_cleanup;
}
memset(&device->attrs, 0, sizeof(device->attrs));
ret = device->query_device(device, &device->attrs, &uhw);
if (ret) {
pr_warn("Couldn't query the device attributes\n");
ib_device_unregister_rdmacg(device);
ib_cache_cleanup_one(device);
goto out;
goto cache_cleanup;
}
ret = ib_device_register_sysfs(device, port_callback);
if (ret) {
pr_warn("Couldn't register device %s with driver model\n",
device->name);
ib_device_unregister_rdmacg(device);
ib_cache_cleanup_one(device);
goto out;
goto cache_cleanup;
}
device->reg_state = IB_DEV_REGISTERED;
@ -417,6 +420,14 @@ int ib_register_device(struct ib_device *device,
down_write(&lists_rwsem);
list_add_tail(&device->core_list, &device_list);
up_write(&lists_rwsem);
mutex_unlock(&device_mutex);
return 0;
cache_cleanup:
ib_cache_cleanup_one(device);
ib_cache_release_one(device);
port_cleanup:
kfree(device->port_immutable);
out:
mutex_unlock(&device_mutex);
return ret;

@ -96,7 +96,8 @@ struct ib_fmr_pool {
void * arg);
void *flush_arg;
struct task_struct *thread;
struct kthread_worker *worker;
struct kthread_work work;
atomic_t req_ser;
atomic_t flush_ser;
@ -174,29 +175,19 @@ static void ib_fmr_batch_release(struct ib_fmr_pool *pool)
spin_unlock_irq(&pool->pool_lock);
}
static int ib_fmr_cleanup_thread(void *pool_ptr)
static void ib_fmr_cleanup_func(struct kthread_work *work)
{
struct ib_fmr_pool *pool = pool_ptr;
struct ib_fmr_pool *pool = container_of(work, struct ib_fmr_pool, work);
do {
if (atomic_read(&pool->flush_ser) - atomic_read(&pool->req_ser) < 0) {
ib_fmr_batch_release(pool);
ib_fmr_batch_release(pool);
atomic_inc(&pool->flush_ser);
wake_up_interruptible(&pool->force_wait);
atomic_inc(&pool->flush_ser);
wake_up_interruptible(&pool->force_wait);
if (pool->flush_function)
pool->flush_function(pool, pool->flush_arg);
if (pool->flush_function)
pool->flush_function(pool, pool->flush_arg);
}
set_current_state(TASK_INTERRUPTIBLE);
if (atomic_read(&pool->flush_ser) - atomic_read(&pool->req_ser) >= 0 &&
!kthread_should_stop())
schedule();
__set_current_state(TASK_RUNNING);
} while (!kthread_should_stop());
return 0;
if (atomic_read(&pool->flush_ser) - atomic_read(&pool->req_ser) < 0)
kthread_queue_work(pool->worker, &pool->work);
}
/**
@ -265,15 +256,13 @@ struct ib_fmr_pool *ib_create_fmr_pool(struct ib_pd *pd,
atomic_set(&pool->flush_ser, 0);
init_waitqueue_head(&pool->force_wait);
pool->thread = kthread_run(ib_fmr_cleanup_thread,
pool,
"ib_fmr(%s)",
device->name);
if (IS_ERR(pool->thread)) {
pr_warn(PFX "couldn't start cleanup thread\n");
ret = PTR_ERR(pool->thread);
pool->worker = kthread_create_worker(0, "ib_fmr(%s)", device->name);
if (IS_ERR(pool->worker)) {
pr_warn(PFX "couldn't start cleanup kthread worker\n");
ret = PTR_ERR(pool->worker);
goto out_free_pool;
}
kthread_init_work(&pool->work, ib_fmr_cleanup_func);
{
struct ib_pool_fmr *fmr;
@ -338,7 +327,7 @@ void ib_destroy_fmr_pool(struct ib_fmr_pool *pool)
LIST_HEAD(fmr_list);
int i;
kthread_stop(pool->thread);
kthread_destroy_worker(pool->worker);
ib_fmr_batch_release(pool);
i = 0;
@ -388,7 +377,7 @@ int ib_flush_fmr_pool(struct ib_fmr_pool *pool)
spin_unlock_irq(&pool->pool_lock);
serial = atomic_inc_return(&pool->req_ser);
wake_up_process(pool->thread);
kthread_queue_work(pool->worker, &pool->work);
if (wait_event_interruptible(pool->force_wait,
atomic_read(&pool->flush_ser) - serial >= 0))
@ -502,7 +491,7 @@ int ib_fmr_pool_unmap(struct ib_pool_fmr *fmr)
list_add_tail(&fmr->list, &pool->dirty_list);
if (++pool->dirty_len >= pool->dirty_watermark) {
atomic_inc(&pool->req_ser);
wake_up_process(pool->thread);
kthread_queue_work(pool->worker, &pool->work);
}
}
}

@ -605,7 +605,7 @@ static void unregister_mad_snoop(struct ib_mad_snoop_private *mad_snoop_priv)
/*
* ib_unregister_mad_agent - Unregisters a client from using MAD services
*/
int ib_unregister_mad_agent(struct ib_mad_agent *mad_agent)
void ib_unregister_mad_agent(struct ib_mad_agent *mad_agent)
{
struct ib_mad_agent_private *mad_agent_priv;
struct ib_mad_snoop_private *mad_snoop_priv;
@ -622,7 +622,6 @@ int ib_unregister_mad_agent(struct ib_mad_agent *mad_agent)
agent);
unregister_mad_snoop(mad_snoop_priv);
}
return 0;
}
EXPORT_SYMBOL(ib_unregister_mad_agent);
@ -1834,12 +1833,13 @@ static inline int rcv_has_same_gid(const struct ib_mad_agent_private *mad_agent_
const struct ib_mad_send_wr_private *wr,
const struct ib_mad_recv_wc *rwc )
{
struct ib_ah_attr attr;
struct rdma_ah_attr attr;
u8 send_resp, rcv_resp;
union ib_gid sgid;
struct ib_device *device = mad_agent_priv->agent.device;
u8 port_num = mad_agent_priv->agent.port_num;
u8 lmc;
bool has_grh;
send_resp = ib_response_mad((struct ib_mad_hdr *)wr->send_buf.mad);
rcv_resp = ib_response_mad(&rwc->recv_buf.mad->mad_hdr);
@ -1848,36 +1848,40 @@ static inline int rcv_has_same_gid(const struct ib_mad_agent_private *mad_agent_
/* both requests, or both responses. GIDs different */
return 0;
if (ib_query_ah(wr->send_buf.ah, &attr))
if (rdma_query_ah(wr->send_buf.ah, &attr))
/* Assume not equal, to avoid false positives. */
return 0;
if (!!(attr.ah_flags & IB_AH_GRH) !=
!!(rwc->wc->wc_flags & IB_WC_GRH))
has_grh = !!(rdma_ah_get_ah_flags(&attr) & IB_AH_GRH);
if (has_grh != !!(rwc->wc->wc_flags & IB_WC_GRH))
/* one has GID, other does not. Assume different */
return 0;
if (!send_resp && rcv_resp) {
/* is request/response. */
if (!(attr.ah_flags & IB_AH_GRH)) {
if (!has_grh) {
if (ib_get_cached_lmc(device, port_num, &lmc))
return 0;
return (!lmc || !((attr.src_path_bits ^
return (!lmc || !((rdma_ah_get_path_bits(&attr) ^
rwc->wc->dlid_path_bits) &
((1 << lmc) - 1)));
} else {
const struct ib_global_route *grh =
rdma_ah_read_grh(&attr);
if (ib_get_cached_gid(device, port_num,
attr.grh.sgid_index, &sgid, NULL))
grh->sgid_index, &sgid, NULL))
return 0;
return !memcmp(sgid.raw, rwc->recv_buf.grh->dgid.raw,
16);
}
}
if (!(attr.ah_flags & IB_AH_GRH))
return attr.dlid == rwc->wc->slid;
if (!has_grh)
return rdma_ah_get_dlid(&attr) == rwc->wc->slid;
else
return !memcmp(attr.grh.dgid.raw, rwc->recv_buf.grh->sgid.raw,
return !memcmp(rdma_ah_read_grh(&attr)->dgid.raw,
rwc->recv_buf.grh->sgid.raw,
16);
}

@ -81,7 +81,7 @@ static void destroy_rmpp_recv(struct mad_rmpp_recv *rmpp_recv)
{
deref_rmpp_recv(rmpp_recv);
wait_for_completion(&rmpp_recv->comp);
ib_destroy_ah(rmpp_recv->ah);
rdma_destroy_ah(rmpp_recv->ah);
kfree(rmpp_recv);
}
@ -171,7 +171,7 @@ static struct ib_mad_send_buf *alloc_response_msg(struct ib_mad_agent *agent,
hdr_len, 0, GFP_KERNEL,
IB_MGMT_BASE_VERSION);
if (IS_ERR(msg))
ib_destroy_ah(ah);
rdma_destroy_ah(ah);
else {
msg->ah = ah;
msg->context[0] = ah;
@ -201,7 +201,7 @@ static void ack_ds_ack(struct ib_mad_agent_private *agent,
ret = ib_post_send_mad(msg, NULL);
if (ret) {
ib_destroy_ah(msg->ah);
rdma_destroy_ah(msg->ah);
ib_free_send_mad(msg);
}
}
@ -209,7 +209,7 @@ static void ack_ds_ack(struct ib_mad_agent_private *agent,
void ib_rmpp_send_handler(struct ib_mad_send_wc *mad_send_wc)
{
if (mad_send_wc->send_buf->context[0] == mad_send_wc->send_buf->ah)
ib_destroy_ah(mad_send_wc->send_buf->ah);
rdma_destroy_ah(mad_send_wc->send_buf->ah);
ib_free_send_mad(mad_send_wc->send_buf);
}
@ -237,7 +237,7 @@ static void nack_recv(struct ib_mad_agent_private *agent,
ret = ib_post_send_mad(msg, NULL);
if (ret) {
ib_destroy_ah(msg->ah);
rdma_destroy_ah(msg->ah);
ib_free_send_mad(msg);
}
}
@ -852,7 +852,7 @@ static int init_newwin(struct ib_mad_send_wr_private *mad_send_wr)
struct ib_mad_agent_private *agent = mad_send_wr->mad_agent_priv;
struct ib_mad_hdr *mad_hdr = mad_send_wr->send_buf.mad;
struct mad_rmpp_recv *rmpp_recv;
struct ib_ah_attr ah_attr;
struct rdma_ah_attr ah_attr;
unsigned long flags;
int newwin = 1;
@ -867,10 +867,10 @@ static int init_newwin(struct ib_mad_send_wr_private *mad_send_wr)
(rmpp_recv->method & IB_MGMT_METHOD_RESP))
continue;
if (ib_query_ah(mad_send_wr->send_buf.ah, &ah_attr))
if (rdma_query_ah(mad_send_wr->send_buf.ah, &ah_attr))
continue;
if (rmpp_recv->slid == ah_attr.dlid) {
if (rmpp_recv->slid == rdma_ah_get_dlid(&ah_attr)) {
newwin = rmpp_recv->repwin;
break;
}

@ -720,7 +720,7 @@ int ib_init_ah_from_mcmember(struct ib_device *device, u8 port_num,
struct ib_sa_mcmember_rec *rec,
struct net_device *ndev,
enum ib_gid_type gid_type,
struct ib_ah_attr *ah_attr)
struct rdma_ah_attr *ah_attr)
{
int ret;
u16 gid_index;
@ -743,19 +743,18 @@ int ib_init_ah_from_mcmember(struct ib_device *device, u8 port_num,
return ret;
memset(ah_attr, 0, sizeof *ah_attr);
ah_attr->dlid = be16_to_cpu(rec->mlid);
ah_attr->sl = rec->sl;
ah_attr->port_num = port_num;
ah_attr->static_rate = rec->rate;
ah_attr->type = rdma_ah_find_type(device, port_num);
ah_attr->ah_flags = IB_AH_GRH;
ah_attr->grh.dgid = rec->mgid;
ah_attr->grh.sgid_index = (u8) gid_index;
ah_attr->grh.flow_label = be32_to_cpu(rec->flow_label);
ah_attr->grh.hop_limit = rec->hop_limit;
ah_attr->grh.traffic_class = rec->traffic_class;
rdma_ah_set_dlid(ah_attr, be16_to_cpu(rec->mlid));
rdma_ah_set_sl(ah_attr, rec->sl);
rdma_ah_set_port_num(ah_attr, port_num);
rdma_ah_set_static_rate(ah_attr, rec->rate);
rdma_ah_set_grh(ah_attr, &rec->mgid,
be32_to_cpu(rec->flow_label),
(u8)gid_index,
rec->hop_limit,
rec->traffic_class);
return 0;
}
EXPORT_SYMBOL(ib_init_ah_from_mcmember);

@ -0,0 +1,627 @@
/*
* Copyright (c) 2016, Mellanox Technologies inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/file.h>
#include <linux/anon_inodes.h>
#include <rdma/ib_verbs.h>
#include <rdma/uverbs_types.h>
#include <linux/rcupdate.h>
#include "uverbs.h"
#include "core_priv.h"
#include "rdma_core.h"
void uverbs_uobject_get(struct ib_uobject *uobject)
{
kref_get(&uobject->ref);
}
static void uverbs_uobject_free(struct kref *ref)
{
struct ib_uobject *uobj =
container_of(ref, struct ib_uobject, ref);
if (uobj->type->type_class->needs_kfree_rcu)
kfree_rcu(uobj, rcu);
else
kfree(uobj);
}
void uverbs_uobject_put(struct ib_uobject *uobject)
{
kref_put(&uobject->ref, uverbs_uobject_free);
}
static int uverbs_try_lock_object(struct ib_uobject *uobj, bool exclusive)
{
/*
* When a shared access is required, we use a positive counter. Each
* shared access request checks that the value != -1 and increment it.
* Exclusive access is required for operations like write or destroy.
* In exclusive access mode, we check that the counter is zero (nobody
* claimed this object) and we set it to -1. Releasing a shared access
* lock is done simply by decreasing the counter. As for exclusive
* access locks, since only a single one of them is allowed
* concurrently, setting the counter to zero is enough for releasing
* this lock.
*/
if (!exclusive)
return __atomic_add_unless(&uobj->usecnt, 1, -1) == -1 ?
-EBUSY : 0;
/* lock is either WRITE or DESTROY - should be exclusive */
return atomic_cmpxchg(&uobj->usecnt, 0, -1) == 0 ? 0 : -EBUSY;
}
static struct ib_uobject *alloc_uobj(struct ib_ucontext *context,
const struct uverbs_obj_type *type)
{
struct ib_uobject *uobj = kzalloc(type->obj_size, GFP_KERNEL);
if (!uobj)
return ERR_PTR(-ENOMEM);
/*
* user_handle should be filled by the handler,
* The object is added to the list in the commit stage.
*/
uobj->context = context;
uobj->type = type;
atomic_set(&uobj->usecnt, 0);
kref_init(&uobj->ref);
return uobj;
}
static int idr_add_uobj(struct ib_uobject *uobj)
{
int ret;
idr_preload(GFP_KERNEL);
spin_lock(&uobj->context->ufile->idr_lock);
/*
* We start with allocating an idr pointing to NULL. This represents an
* object which isn't initialized yet. We'll replace it later on with
* the real object once we commit.
*/
ret = idr_alloc(&uobj->context->ufile->idr, NULL, 0,
min_t(unsigned long, U32_MAX - 1, INT_MAX), GFP_NOWAIT);
if (ret >= 0)
uobj->id = ret;
spin_unlock(&uobj->context->ufile->idr_lock);
idr_preload_end();
return ret < 0 ? ret : 0;
}
/*
* It only removes it from the uobjects list, uverbs_uobject_put() is still
* required.
*/
static void uverbs_idr_remove_uobj(struct ib_uobject *uobj)
{
spin_lock(&uobj->context->ufile->idr_lock);
idr_remove(&uobj->context->ufile->idr, uobj->id);
spin_unlock(&uobj->context->ufile->idr_lock);
}
/* Returns the ib_uobject or an error. The caller should check for IS_ERR. */
static struct ib_uobject *lookup_get_idr_uobject(const struct uverbs_obj_type *type,
struct ib_ucontext *ucontext,
int id, bool exclusive)
{
struct ib_uobject *uobj;
rcu_read_lock();
/* object won't be released as we're protected in rcu */
uobj = idr_find(&ucontext->ufile->idr, id);
if (!uobj) {
uobj = ERR_PTR(-ENOENT);
goto free;
}
uverbs_uobject_get(uobj);
free:
rcu_read_unlock();
return uobj;
}
static struct ib_uobject *lookup_get_fd_uobject(const struct uverbs_obj_type *type,
struct ib_ucontext *ucontext,
int id, bool exclusive)
{
struct file *f;
struct ib_uobject *uobject;
const struct uverbs_obj_fd_type *fd_type =
container_of(type, struct uverbs_obj_fd_type, type);
if (exclusive)
return ERR_PTR(-EOPNOTSUPP);
f = fget(id);
if (!f)
return ERR_PTR(-EBADF);
uobject = f->private_data;
/*
* fget(id) ensures we are not currently running uverbs_close_fd,
* and the caller is expected to ensure that uverbs_close_fd is never
* done while a call to lookup is possible.
*/
if (f->f_op != fd_type->fops) {
fput(f);
return ERR_PTR(-EBADF);
}
uverbs_uobject_get(uobject);
return uobject;
}
struct ib_uobject *rdma_lookup_get_uobject(const struct uverbs_obj_type *type,
struct ib_ucontext *ucontext,
int id, bool exclusive)
{
struct ib_uobject *uobj;
int ret;
uobj = type->type_class->lookup_get(type, ucontext, id, exclusive);
if (IS_ERR(uobj))
return uobj;
if (uobj->type != type) {
ret = -EINVAL;
goto free;
}
ret = uverbs_try_lock_object(uobj, exclusive);
if (ret) {
WARN(ucontext->cleanup_reason,
"ib_uverbs: Trying to lookup_get while cleanup context\n");
goto free;
}
return uobj;
free:
uobj->type->type_class->lookup_put(uobj, exclusive);
uverbs_uobject_put(uobj);
return ERR_PTR(ret);
}
static struct ib_uobject *alloc_begin_idr_uobject(const struct uverbs_obj_type *type,
struct ib_ucontext *ucontext)
{
int ret;
struct ib_uobject *uobj;
uobj = alloc_uobj(ucontext, type);
if (IS_ERR(uobj))
return uobj;
ret = idr_add_uobj(uobj);
if (ret)
goto uobj_put;
ret = ib_rdmacg_try_charge(&uobj->cg_obj, ucontext->device,
RDMACG_RESOURCE_HCA_OBJECT);
if (ret)
goto idr_remove;
return uobj;
idr_remove:
uverbs_idr_remove_uobj(uobj);
uobj_put:
uverbs_uobject_put(uobj);
return ERR_PTR(ret);
}
static struct ib_uobject *alloc_begin_fd_uobject(const struct uverbs_obj_type *type,
struct ib_ucontext *ucontext)
{
const struct uverbs_obj_fd_type *fd_type =
container_of(type, struct uverbs_obj_fd_type, type);
int new_fd;
struct ib_uobject *uobj;
struct ib_uobject_file *uobj_file;
struct file *filp;
new_fd = get_unused_fd_flags(O_CLOEXEC);
if (new_fd < 0)
return ERR_PTR(new_fd);
uobj = alloc_uobj(ucontext, type);
if (IS_ERR(uobj)) {
put_unused_fd(new_fd);
return uobj;
}
uobj_file = container_of(uobj, struct ib_uobject_file, uobj);
filp = anon_inode_getfile(fd_type->name,
fd_type->fops,
uobj_file,
fd_type->flags);
if (IS_ERR(filp)) {
put_unused_fd(new_fd);
uverbs_uobject_put(uobj);
return (void *)filp;
}
uobj_file->uobj.id = new_fd;
uobj_file->uobj.object = filp;
uobj_file->ufile = ucontext->ufile;
INIT_LIST_HEAD(&uobj->list);
kref_get(&uobj_file->ufile->ref);
return uobj;
}
struct ib_uobject *rdma_alloc_begin_uobject(const struct uverbs_obj_type *type,
struct ib_ucontext *ucontext)
{
return type->type_class->alloc_begin(type, ucontext);
}
static void uverbs_uobject_add(struct ib_uobject *uobject)
{
mutex_lock(&uobject->context->uobjects_lock);
list_add(&uobject->list, &uobject->context->uobjects);
mutex_unlock(&uobject->context->uobjects_lock);
}
static int __must_check remove_commit_idr_uobject(struct ib_uobject *uobj,
enum rdma_remove_reason why)
{
const struct uverbs_obj_idr_type *idr_type =
container_of(uobj->type, struct uverbs_obj_idr_type,
type);
int ret = idr_type->destroy_object(uobj, why);
/*
* We can only fail gracefully if the user requested to destroy the
* object. In the rest of the cases, just remove whatever you can.
*/
if (why == RDMA_REMOVE_DESTROY && ret)
return ret;
ib_rdmacg_uncharge(&uobj->cg_obj, uobj->context->device,
RDMACG_RESOURCE_HCA_OBJECT);
uverbs_idr_remove_uobj(uobj);
return ret;
}
static void alloc_abort_fd_uobject(struct ib_uobject *uobj)
{
struct ib_uobject_file *uobj_file =
container_of(uobj, struct ib_uobject_file, uobj);
struct file *filp = uobj->object;
int id = uobj_file->uobj.id;
/* Unsuccessful NEW */
fput(filp);
put_unused_fd(id);
}
static int __must_check remove_commit_fd_uobject(struct ib_uobject *uobj,
enum rdma_remove_reason why)
{
const struct uverbs_obj_fd_type *fd_type =
container_of(uobj->type, struct uverbs_obj_fd_type, type);
struct ib_uobject_file *uobj_file =
container_of(uobj, struct ib_uobject_file, uobj);
int ret = fd_type->context_closed(uobj_file, why);
if (why == RDMA_REMOVE_DESTROY && ret)
return ret;
if (why == RDMA_REMOVE_DURING_CLEANUP) {
alloc_abort_fd_uobject(uobj);
return ret;
}
uobj_file->uobj.context = NULL;
return ret;
}
static void lockdep_check(struct ib_uobject *uobj, bool exclusive)
{
#ifdef CONFIG_LOCKDEP
if (exclusive)
WARN_ON(atomic_read(&uobj->usecnt) > 0);
else
WARN_ON(atomic_read(&uobj->usecnt) == -1);
#endif
}
static int __must_check _rdma_remove_commit_uobject(struct ib_uobject *uobj,
enum rdma_remove_reason why)
{
int ret;
struct ib_ucontext *ucontext = uobj->context;
ret = uobj->type->type_class->remove_commit(uobj, why);
if (ret && why == RDMA_REMOVE_DESTROY) {
/* We couldn't remove the object, so just unlock the uobject */
atomic_set(&uobj->usecnt, 0);
uobj->type->type_class->lookup_put(uobj, true);
} else {
mutex_lock(&ucontext->uobjects_lock);
list_del(&uobj->list);
mutex_unlock(&ucontext->uobjects_lock);
/* put the ref we took when we created the object */
uverbs_uobject_put(uobj);
}
return ret;
}
/* This is called only for user requested DESTROY reasons */
int __must_check rdma_remove_commit_uobject(struct ib_uobject *uobj)
{
int ret;
struct ib_ucontext *ucontext = uobj->context;
/* put the ref count we took at lookup_get */
uverbs_uobject_put(uobj);
/* Cleanup is running. Calling this should have been impossible */
if (!down_read_trylock(&ucontext->cleanup_rwsem)) {
WARN(true, "ib_uverbs: Cleanup is running while removing an uobject\n");
return 0;
}
lockdep_check(uobj, true);
ret = _rdma_remove_commit_uobject(uobj, RDMA_REMOVE_DESTROY);
up_read(&ucontext->cleanup_rwsem);
return ret;
}
static void alloc_commit_idr_uobject(struct ib_uobject *uobj)
{
uverbs_uobject_add(uobj);
spin_lock(&uobj->context->ufile->idr_lock);
/*
* We already allocated this IDR with a NULL object, so
* this shouldn't fail.
*/
WARN_ON(idr_replace(&uobj->context->ufile->idr,
uobj, uobj->id));
spin_unlock(&uobj->context->ufile->idr_lock);
}
static void alloc_commit_fd_uobject(struct ib_uobject *uobj)
{
struct ib_uobject_file *uobj_file =
container_of(uobj, struct ib_uobject_file, uobj);
uverbs_uobject_add(&uobj_file->uobj);
fd_install(uobj_file->uobj.id, uobj->object);
/* This shouldn't be used anymore. Use the file object instead */
uobj_file->uobj.id = 0;
/* Get another reference as we export this to the fops */
uverbs_uobject_get(&uobj_file->uobj);
}
int rdma_alloc_commit_uobject(struct ib_uobject *uobj)
{
/* Cleanup is running. Calling this should have been impossible */
if (!down_read_trylock(&uobj->context->cleanup_rwsem)) {
int ret;
WARN(true, "ib_uverbs: Cleanup is running while allocating an uobject\n");
ret = uobj->type->type_class->remove_commit(uobj,
RDMA_REMOVE_DURING_CLEANUP);
if (ret)
pr_warn("ib_uverbs: cleanup of idr object %d failed\n",
uobj->id);
return ret;
}
uobj->type->type_class->alloc_commit(uobj);
up_read(&uobj->context->cleanup_rwsem);
return 0;
}
static void alloc_abort_idr_uobject(struct ib_uobject *uobj)
{
uverbs_idr_remove_uobj(uobj);
ib_rdmacg_uncharge(&uobj->cg_obj, uobj->context->device,
RDMACG_RESOURCE_HCA_OBJECT);
uverbs_uobject_put(uobj);
}
void rdma_alloc_abort_uobject(struct ib_uobject *uobj)
{
uobj->type->type_class->alloc_abort(uobj);
}
static void lookup_put_idr_uobject(struct ib_uobject *uobj, bool exclusive)
{
}
static void lookup_put_fd_uobject(struct ib_uobject *uobj, bool exclusive)
{
struct file *filp = uobj->object;
WARN_ON(exclusive);
/* This indirectly calls uverbs_close_fd and free the object */
fput(filp);
}
void rdma_lookup_put_uobject(struct ib_uobject *uobj, bool exclusive)
{
lockdep_check(uobj, exclusive);
uobj->type->type_class->lookup_put(uobj, exclusive);
/*
* In order to unlock an object, either decrease its usecnt for
* read access or zero it in case of exclusive access. See
* uverbs_try_lock_object for locking schema information.
*/
if (!exclusive)
atomic_dec(&uobj->usecnt);
else
atomic_set(&uobj->usecnt, 0);
uverbs_uobject_put(uobj);
}
const struct uverbs_obj_type_class uverbs_idr_class = {
.alloc_begin = alloc_begin_idr_uobject,
.lookup_get = lookup_get_idr_uobject,
.alloc_commit = alloc_commit_idr_uobject,
.alloc_abort = alloc_abort_idr_uobject,
.lookup_put = lookup_put_idr_uobject,
.remove_commit = remove_commit_idr_uobject,
/*
* When we destroy an object, we first just lock it for WRITE and
* actually DESTROY it in the finalize stage. So, the problematic
* scenario is when we just started the finalize stage of the
* destruction (nothing was executed yet). Now, the other thread
* fetched the object for READ access, but it didn't lock it yet.
* The DESTROY thread continues and starts destroying the object.
* When the other thread continues, without RCU it would access
* freed memory. However, rcu_read_lock delays the free until the
* READ operation leaves its RCU read-side critical section. Since the
* exclusive lock of the object is still taken by the DESTROY flow, the
* READ operation will get -EBUSY and it'll just bail out.
*/
.needs_kfree_rcu = true,
};
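To make the comment concrete, the READ side that needs_kfree_rcu protects looks roughly like the sketch below. It is a simplified, hypothetical rendition of the idr lookup path, not the exact lookup_get_idr_uobject() defined earlier in this file; example_lookup is an invented name:
/* Simplified READ-side lookup protected by the kfree_rcu() above. */
static struct ib_uobject *example_lookup(struct ib_uverbs_file *ufile,
					 int id, bool exclusive)
{
	struct ib_uobject *uobj;

	rcu_read_lock();
	/* Even if DESTROY already removed the object logically, its memory
	 * stays valid until we leave this read-side critical section.
	 */
	uobj = idr_find(&ufile->idr, id);
	if (!uobj) {
		rcu_read_unlock();
		return ERR_PTR(-ENOENT);
	}
	uverbs_uobject_get(uobj);
	rcu_read_unlock();

	/* The DESTROY flow still holds the exclusive lock, so we bail out. */
	if (uverbs_try_lock_object(uobj, exclusive)) {
		uverbs_uobject_put(uobj);
		return ERR_PTR(-EBUSY);
	}
	return uobj;
}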
static void _uverbs_close_fd(struct ib_uobject_file *uobj_file)
{
struct ib_ucontext *ucontext;
struct ib_uverbs_file *ufile = uobj_file->ufile;
int ret;
mutex_lock(&uobj_file->ufile->cleanup_mutex);
/* uobject was either already cleaned up or is being cleaned up right now anyway */
if (!uobj_file->uobj.context ||
!down_read_trylock(&uobj_file->uobj.context->cleanup_rwsem))
goto unlock;
ucontext = uobj_file->uobj.context;
ret = _rdma_remove_commit_uobject(&uobj_file->uobj, RDMA_REMOVE_CLOSE);
up_read(&ucontext->cleanup_rwsem);
if (ret)
pr_warn("uverbs: unable to clean up uobject file in uverbs_close_fd.\n");
unlock:
mutex_unlock(&ufile->cleanup_mutex);
}
void uverbs_close_fd(struct file *f)
{
struct ib_uobject_file *uobj_file = f->private_data;
struct kref *uverbs_file_ref = &uobj_file->ufile->ref;
_uverbs_close_fd(uobj_file);
uverbs_uobject_put(&uobj_file->uobj);
kref_put(uverbs_file_ref, ib_uverbs_release_file);
}
void uverbs_cleanup_ucontext(struct ib_ucontext *ucontext, bool device_removed)
{
enum rdma_remove_reason reason = device_removed ?
RDMA_REMOVE_DRIVER_REMOVE : RDMA_REMOVE_CLOSE;
unsigned int cur_order = 0;
ucontext->cleanup_reason = reason;
/*
* Waits for all remove_commit and alloc_commit to finish. Logically, we
* want to hold this forever as the context is going to be destroyed,
* but we'll release it since it causes a "held lock freed" BUG message.
*/
down_write(&ucontext->cleanup_rwsem);
while (!list_empty(&ucontext->uobjects)) {
struct ib_uobject *obj, *next_obj;
unsigned int next_order = UINT_MAX;
/*
* This shouldn't run while executing other commands on this
* context. Thus, the only thing we should take care of is
* an FD being released while we traverse this list. The FD could
* be closed and released from the _release fop of that FD.
* In order to mitigate this, we add a lock.
* We take and release the lock once per destroy-order pass, to
* give other threads (which might still use the FDs) a chance
* to run.
*/
mutex_lock(&ucontext->uobjects_lock);
list_for_each_entry_safe(obj, next_obj, &ucontext->uobjects,
list) {
if (obj->type->destroy_order == cur_order) {
int ret;
/*
* if we hit this WARN_ON, that means we are
* racing with a lookup_get.
*/
WARN_ON(uverbs_try_lock_object(obj, true));
ret = obj->type->type_class->remove_commit(obj,
reason);
list_del(&obj->list);
if (ret)
pr_warn("ib_uverbs: failed to remove uobject id %d order %u\n",
obj->id, cur_order);
/* put the ref we took when we created the object */
uverbs_uobject_put(obj);
} else {
next_order = min(next_order,
obj->type->destroy_order);
}
}
mutex_unlock(&ucontext->uobjects_lock);
cur_order = next_order;
}
up_write(&ucontext->cleanup_rwsem);
}
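A short worked trace may help with the cur_order/next_order logic above; it uses the destroy_order values declared later in uverbs_std_types.c (0 for most objects, 1 for MRs, 2 for PDs) and is only an illustration:
/*
 * Example trace for a context holding a CQ (order 0), an MW (order 0),
 * an MR (order 1) and a PD (order 2):
 *
 *   pass 1: cur_order == 0 -> the CQ and the MW are removed;
 *           next_order = min(1, 2) = 1
 *   pass 2: cur_order == 1 -> the MR is removed after every MW is gone
 *   pass 3: cur_order == 2 -> the PD is removed last, once no object can
 *           still hold a reference to it
 */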
void uverbs_initialize_ucontext(struct ib_ucontext *ucontext)
{
ucontext->cleanup_reason = 0;
mutex_init(&ucontext->uobjects_lock);
INIT_LIST_HEAD(&ucontext->uobjects);
init_rwsem(&ucontext->cleanup_rwsem);
}
const struct uverbs_obj_type_class uverbs_fd_class = {
.alloc_begin = alloc_begin_fd_uobject,
.lookup_get = lookup_get_fd_uobject,
.alloc_commit = alloc_commit_fd_uobject,
.alloc_abort = alloc_abort_fd_uobject,
.lookup_put = lookup_put_fd_uobject,
.remove_commit = remove_commit_fd_uobject,
.needs_kfree_rcu = false,
};


@ -0,0 +1,78 @@
/*
* Copyright (c) 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005, 2006 Cisco Systems. All rights reserved.
* Copyright (c) 2005-2017 Mellanox Technologies. All rights reserved.
* Copyright (c) 2005 Voltaire, Inc. All rights reserved.
* Copyright (c) 2005 PathScale, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef RDMA_CORE_H
#define RDMA_CORE_H
#include <linux/idr.h>
#include <rdma/uverbs_types.h>
#include <rdma/ib_verbs.h>
#include <linux/mutex.h>
/*
* These functions initialize the context and clean up its uobjects.
* The context has a list of objects which is protected by a mutex
* on the context. initialize_ucontext should be called when we create
* a context.
* cleanup_ucontext removes all uobjects from the context and puts them.
*/
void uverbs_cleanup_ucontext(struct ib_ucontext *ucontext, bool device_removed);
void uverbs_initialize_ucontext(struct ib_ucontext *ucontext);
/*
* uverbs_uobject_get is called in order to increase the reference count on
* an uobject. This is useful when a handler wants to keep the uobject's memory
* alive, regardless of whether this uobject is still alive in the context's objects
* repository. Objects are put via uverbs_uobject_put.
*/
void uverbs_uobject_get(struct ib_uobject *uobject);
/*
* In order to indicate we no longer need this uobject, uverbs_uobject_put
* is called. When the reference count is decreased, the uobject is freed.
* For example, this is used when attaching a completion channel to a CQ.
*/
void uverbs_uobject_put(struct ib_uobject *uobject);
/* Indicate this fd is no longer used by this consumer, but its memory isn't
* necessarily released yet. When the last reference is put, we release the
* memory. After this call is executed, calling uverbs_uobject_get isn't
* allowed.
* This must be called from the release file_operations of the file!
*/
void uverbs_close_fd(struct file *f);
#endif /* RDMA_CORE_H */
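As a usage illustration of the contract spelled out in the comments above; all names below (example_consumer, example_fd_release) are hypothetical and not part of the patch:
/* Hypothetical consumer that must outlive the uobject's presence in the
 * context's repository, e.g. a CQ keeping its completion channel around.
 */
struct example_consumer {
	struct ib_uobject *uobj;
};

static void example_consumer_init(struct example_consumer *c,
				  struct ib_uobject *uobj)
{
	c->uobj = uobj;
	uverbs_uobject_get(uobj);	/* memory stays valid after this */
}

static void example_consumer_fini(struct example_consumer *c)
{
	uverbs_uobject_put(c->uobj);	/* freed here on the last reference */
}

/* For fd-based objects, uverbs_close_fd() may only be called from the
 * file's release fop, as the comment above requires.
 */
static int example_fd_release(struct inode *inode, struct file *filp)
{
	uverbs_close_fd(filp);
	return 0;
}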

File diff suppressed because it is too large


@ -253,6 +253,10 @@ static ssize_t rate_show(struct ib_port *p, struct port_attribute *unused,
speed = " EDR";
rate = 250;
break;
case IB_SPEED_HDR:
speed = " HDR";
rate = 500;
break;
case IB_SPEED_SDR:
default: /* default to SDR for invalid rates */
rate = 25;
@ -1301,7 +1305,7 @@ int ib_device_register_sysfs(struct ib_device *device,
free_port_list_attributes(device);
err_unregister:
device_unregister(class_dev);
device_del(class_dev);
err:
return ret;


@ -702,10 +702,10 @@ static int ib_ucm_alloc_data(const void **dest, u64 src, u32 len)
return 0;
}
static int ib_ucm_path_get(struct ib_sa_path_rec **path, u64 src)
static int ib_ucm_path_get(struct sa_path_rec **path, u64 src)
{
struct ib_user_path_rec upath;
struct ib_sa_path_rec *sa_path;
struct sa_path_rec *sa_path;
*path = NULL;
@ -962,7 +962,7 @@ static ssize_t ib_ucm_send_lap(struct ib_ucm_file *file,
int in_len, int out_len)
{
struct ib_ucm_context *ctx;
struct ib_sa_path_rec *path = NULL;
struct sa_path_rec *path = NULL;
struct ib_ucm_lap cmd;
const void *data = NULL;
int result;


@ -898,11 +898,18 @@ static ssize_t ucma_query_path(struct ucma_context *ctx,
for (i = 0, out_len -= sizeof(*resp);
i < resp->num_paths && out_len > sizeof(struct ib_path_rec_data);
i++, out_len -= sizeof(struct ib_path_rec_data)) {
struct sa_path_rec *rec = &ctx->cm_id->route.path_rec[i];
resp->path_data[i].flags = IB_PATH_GMP | IB_PATH_PRIMARY |
IB_PATH_BIDIRECTIONAL;
ib_sa_pack_path(&ctx->cm_id->route.path_rec[i],
&resp->path_data[i].path_rec);
if (rec->rec_type == SA_PATH_REC_TYPE_IB) {
ib_sa_pack_path(rec, &resp->path_data[i].path_rec);
} else {
struct sa_path_rec ib;
sa_convert_path_opa_to_ib(&ib, rec);
ib_sa_pack_path(&ib, &resp->path_data[i].path_rec);
}
}
if (copy_to_user(response, resp,
@ -1197,7 +1204,7 @@ static int ucma_set_option_id(struct ucma_context *ctx, int optname,
static int ucma_set_ib_path(struct ucma_context *ctx,
struct ib_path_rec_data *path_data, size_t optlen)
{
struct ib_sa_path_rec sa_path;
struct sa_path_rec sa_path;
struct rdma_cm_event event;
int ret;
@ -1215,8 +1222,17 @@ static int ucma_set_ib_path(struct ucma_context *ctx,
memset(&sa_path, 0, sizeof(sa_path));
sa_path.rec_type = SA_PATH_REC_TYPE_IB;
ib_sa_unpack_path(path_data->path_rec, &sa_path);
ret = rdma_set_ib_paths(ctx->cm_id, &sa_path, 1);
if (rdma_cap_opa_ah(ctx->cm_id->device, ctx->cm_id->port_num)) {
struct sa_path_rec opa;
sa_convert_path_ib_to_opa(&opa, &sa_path);
ret = rdma_set_ib_paths(ctx->cm_id, &opa, 1);
} else {
ret = rdma_set_ib_paths(ctx->cm_id, &sa_path, 1);
}
if (ret)
return ret;


@ -115,11 +115,11 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
if (!umem)
return ERR_PTR(-ENOMEM);
umem->context = context;
umem->length = size;
umem->address = addr;
umem->page_size = PAGE_SIZE;
umem->pid = get_task_pid(current, PIDTYPE_PID);
umem->context = context;
umem->length = size;
umem->address = addr;
umem->page_shift = PAGE_SHIFT;
umem->pid = get_task_pid(current, PIDTYPE_PID);
/*
* We ask for writable memory if any of the following
* access flags are set. "Local write" and "remote write"
@ -133,7 +133,7 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
if (access & IB_ACCESS_ON_DEMAND) {
put_pid(umem->pid);
ret = ib_umem_odp_get(context, umem);
ret = ib_umem_odp_get(context, umem, access);
if (ret) {
kfree(umem);
return ERR_PTR(ret);
@ -315,7 +315,6 @@ EXPORT_SYMBOL(ib_umem_release);
int ib_umem_page_count(struct ib_umem *umem)
{
int shift;
int i;
int n;
struct scatterlist *sg;
@ -323,11 +322,9 @@ int ib_umem_page_count(struct ib_umem *umem)
if (umem->odp_data)
return ib_umem_num_pages(umem);
shift = ilog2(umem->page_size);
n = 0;
for_each_sg(umem->sg_head.sgl, sg, umem->nmap, i)
n += sg_dma_len(sg) >> shift;
n += sg_dma_len(sg) >> umem->page_shift;
return n;
}


@ -38,6 +38,7 @@
#include <linux/slab.h>
#include <linux/export.h>
#include <linux/vmalloc.h>
#include <linux/hugetlb.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_umem.h>
@ -254,11 +255,11 @@ struct ib_umem *ib_alloc_odp_umem(struct ib_ucontext *context,
if (!umem)
return ERR_PTR(-ENOMEM);
umem->context = context;
umem->length = size;
umem->address = addr;
umem->page_size = PAGE_SIZE;
umem->writable = 1;
umem->context = context;
umem->length = size;
umem->address = addr;
umem->page_shift = PAGE_SHIFT;
umem->writable = 1;
odp_data = kzalloc(sizeof(*odp_data), GFP_KERNEL);
if (!odp_data) {
@ -306,7 +307,8 @@ struct ib_umem *ib_alloc_odp_umem(struct ib_ucontext *context,
}
EXPORT_SYMBOL(ib_alloc_odp_umem);
int ib_umem_odp_get(struct ib_ucontext *context, struct ib_umem *umem)
int ib_umem_odp_get(struct ib_ucontext *context, struct ib_umem *umem,
int access)
{
int ret_val;
struct pid *our_pid;
@ -315,6 +317,20 @@ int ib_umem_odp_get(struct ib_ucontext *context, struct ib_umem *umem)
if (!mm)
return -EINVAL;
if (access & IB_ACCESS_HUGETLB) {
struct vm_area_struct *vma;
struct hstate *h;
vma = find_vma(mm, ib_umem_start(umem));
if (!vma || !is_vm_hugetlb_page(vma))
return -EINVAL;
h = hstate_vma(vma);
umem->page_shift = huge_page_shift(h);
umem->hugetlb = 1;
} else {
umem->hugetlb = 0;
}
/* Prevent creating ODP MRs in child processes */
rcu_read_lock();
our_pid = get_task_pid(current->group_leader, PIDTYPE_PID);
@ -325,7 +341,6 @@ int ib_umem_odp_get(struct ib_ucontext *context, struct ib_umem *umem)
goto out_mm;
}
umem->hugetlb = 0;
umem->odp_data = kzalloc(sizeof(*umem->odp_data), GFP_KERNEL);
if (!umem->odp_data) {
ret_val = -ENOMEM;
@ -504,7 +519,6 @@ void ib_umem_odp_release(struct ib_umem *umem)
static int ib_umem_odp_map_dma_single_page(
struct ib_umem *umem,
int page_index,
u64 base_virt_addr,
struct page *page,
u64 access_mask,
unsigned long current_seq)
@ -527,7 +541,7 @@ static int ib_umem_odp_map_dma_single_page(
if (!(umem->odp_data->dma_list[page_index])) {
dma_addr = ib_dma_map_page(dev,
page,
0, PAGE_SIZE,
0, BIT(umem->page_shift),
DMA_BIDIRECTIONAL);
if (ib_dma_mapping_error(dev, dma_addr)) {
ret = -EFAULT;
@ -555,8 +569,9 @@ static int ib_umem_odp_map_dma_single_page(
if (remove_existing_mapping && umem->context->invalidate_range) {
invalidate_page_trampoline(
umem,
base_virt_addr + (page_index * PAGE_SIZE),
base_virt_addr + ((page_index+1)*PAGE_SIZE),
ib_umem_start(umem) + (page_index >> umem->page_shift),
ib_umem_start(umem) + ((page_index + 1) >>
umem->page_shift),
NULL);
ret = -EAGAIN;
}
@ -595,10 +610,10 @@ int ib_umem_odp_map_dma_pages(struct ib_umem *umem, u64 user_virt, u64 bcnt,
struct task_struct *owning_process = NULL;
struct mm_struct *owning_mm = NULL;
struct page **local_page_list = NULL;
u64 off;
int j, k, ret = 0, start_idx, npages = 0;
u64 base_virt_addr;
u64 page_mask, off;
int j, k, ret = 0, start_idx, npages = 0, page_shift;
unsigned int flags = 0;
phys_addr_t p = 0;
if (access_mask == 0)
return -EINVAL;
@ -611,9 +626,10 @@ int ib_umem_odp_map_dma_pages(struct ib_umem *umem, u64 user_virt, u64 bcnt,
if (!local_page_list)
return -ENOMEM;
off = user_virt & (~PAGE_MASK);
user_virt = user_virt & PAGE_MASK;
base_virt_addr = user_virt;
page_shift = umem->page_shift;
page_mask = ~(BIT(page_shift) - 1);
off = user_virt & (~page_mask);
user_virt = user_virt & page_mask;
bcnt += off; /* Charge for the first page offset as well. */
owning_process = get_pid_task(umem->context->tgid, PIDTYPE_PID);
@ -631,13 +647,13 @@ int ib_umem_odp_map_dma_pages(struct ib_umem *umem, u64 user_virt, u64 bcnt,
if (access_mask & ODP_WRITE_ALLOWED_BIT)
flags |= FOLL_WRITE;
start_idx = (user_virt - ib_umem_start(umem)) >> PAGE_SHIFT;
start_idx = (user_virt - ib_umem_start(umem)) >> page_shift;
k = start_idx;
while (bcnt > 0) {
const size_t gup_num_pages =
min_t(size_t, ALIGN(bcnt, PAGE_SIZE) / PAGE_SIZE,
PAGE_SIZE / sizeof(struct page *));
const size_t gup_num_pages = min_t(size_t,
(bcnt + BIT(page_shift) - 1) >> page_shift,
PAGE_SIZE / sizeof(struct page *));
down_read(&owning_mm->mmap_sem);
/*
@ -656,14 +672,25 @@ int ib_umem_odp_map_dma_pages(struct ib_umem *umem, u64 user_virt, u64 bcnt,
break;
bcnt -= min_t(size_t, npages << PAGE_SHIFT, bcnt);
user_virt += npages << PAGE_SHIFT;
mutex_lock(&umem->odp_data->umem_mutex);
for (j = 0; j < npages; ++j) {
for (j = 0; j < npages; j++, user_virt += PAGE_SIZE) {
if (user_virt & ~page_mask) {
p += PAGE_SIZE;
if (page_to_phys(local_page_list[j]) != p) {
ret = -EFAULT;
break;
}
put_page(local_page_list[j]);
continue;
}
ret = ib_umem_odp_map_dma_single_page(
umem, k, base_virt_addr, local_page_list[j],
access_mask, current_seq);
umem, k, local_page_list[j],
access_mask, current_seq);
if (ret < 0)
break;
p = page_to_phys(local_page_list[j]);
k++;
}
mutex_unlock(&umem->odp_data->umem_mutex);
@ -707,8 +734,8 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem *umem, u64 virt,
* invalidations, so we must make sure we free each page only
* once. */
mutex_lock(&umem->odp_data->umem_mutex);
for (addr = virt; addr < bound; addr += (u64)umem->page_size) {
idx = (addr - ib_umem_start(umem)) / PAGE_SIZE;
for (addr = virt; addr < bound; addr += BIT(umem->page_shift)) {
idx = (addr - ib_umem_start(umem)) >> umem->page_shift;
if (umem->odp_data->page_list[idx]) {
struct page *page = umem->odp_data->page_list[idx];
dma_addr_t dma = umem->odp_data->dma_list[idx];
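The hunks above consistently replace PAGE_SIZE/PAGE_SHIFT arithmetic with the per-umem page_shift, which can come from a hugetlb VMA when IB_ACCESS_HUGETLB is requested. The implied arithmetic is spelled out below as a pair of hypothetical helpers, not code from the patch:
/* Hypothetical helpers illustrating the page_shift arithmetic used above. */
static inline u64 example_umem_page_size(const struct ib_umem *umem)
{
	return BIT(umem->page_shift);	/* 4K by default, 2M/1G for hugetlb */
}

static inline u64 example_umem_page_index(const struct ib_umem *umem, u64 addr)
{
	/* Same expression as in the ODP unmap loop above. */
	return (addr - ib_umem_start(umem)) >> umem->page_shift;
}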


@ -197,7 +197,7 @@ static void send_handler(struct ib_mad_agent *agent,
struct ib_umad_packet *packet = send_wc->send_buf->context[0];
dequeue_send(file, packet);
ib_destroy_ah(packet->msg->ah);
rdma_destroy_ah(packet->msg->ah);
ib_free_send_mad(packet->msg);
if (send_wc->status == IB_WC_RESP_TIMEOUT_ERR) {
@ -235,17 +235,19 @@ static void recv_handler(struct ib_mad_agent *agent,
packet->mad.hdr.pkey_index = mad_recv_wc->wc->pkey_index;
packet->mad.hdr.grh_present = !!(mad_recv_wc->wc->wc_flags & IB_WC_GRH);
if (packet->mad.hdr.grh_present) {
struct ib_ah_attr ah_attr;
struct rdma_ah_attr ah_attr;
const struct ib_global_route *grh;
ib_init_ah_from_wc(agent->device, agent->port_num,
mad_recv_wc->wc, mad_recv_wc->recv_buf.grh,
&ah_attr);
packet->mad.hdr.gid_index = ah_attr.grh.sgid_index;
packet->mad.hdr.hop_limit = ah_attr.grh.hop_limit;
packet->mad.hdr.traffic_class = ah_attr.grh.traffic_class;
memcpy(packet->mad.hdr.gid, &ah_attr.grh.dgid, 16);
packet->mad.hdr.flow_label = cpu_to_be32(ah_attr.grh.flow_label);
grh = rdma_ah_read_grh(&ah_attr);
packet->mad.hdr.gid_index = grh->sgid_index;
packet->mad.hdr.hop_limit = grh->hop_limit;
packet->mad.hdr.traffic_class = grh->traffic_class;
memcpy(packet->mad.hdr.gid, &grh->dgid, 16);
packet->mad.hdr.flow_label = cpu_to_be32(grh->flow_label);
}
if (queue_packet(file, agent, packet))
@ -449,7 +451,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
struct ib_umad_file *file = filp->private_data;
struct ib_umad_packet *packet;
struct ib_mad_agent *agent;
struct ib_ah_attr ah_attr;
struct rdma_ah_attr ah_attr;
struct ib_ah *ah;
struct ib_rmpp_mad *rmpp_mad;
__be64 *tid;
@ -489,20 +491,22 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
}
memset(&ah_attr, 0, sizeof ah_attr);
ah_attr.dlid = be16_to_cpu(packet->mad.hdr.lid);
ah_attr.sl = packet->mad.hdr.sl;
ah_attr.src_path_bits = packet->mad.hdr.path_bits;
ah_attr.port_num = file->port->port_num;
ah_attr.type = rdma_ah_find_type(file->port->ib_dev,
file->port->port_num);
rdma_ah_set_dlid(&ah_attr, be16_to_cpu(packet->mad.hdr.lid));
rdma_ah_set_sl(&ah_attr, packet->mad.hdr.sl);
rdma_ah_set_path_bits(&ah_attr, packet->mad.hdr.path_bits);
rdma_ah_set_port_num(&ah_attr, file->port->port_num);
if (packet->mad.hdr.grh_present) {
ah_attr.ah_flags = IB_AH_GRH;
memcpy(ah_attr.grh.dgid.raw, packet->mad.hdr.gid, 16);
ah_attr.grh.sgid_index = packet->mad.hdr.gid_index;
ah_attr.grh.flow_label = be32_to_cpu(packet->mad.hdr.flow_label);
ah_attr.grh.hop_limit = packet->mad.hdr.hop_limit;
ah_attr.grh.traffic_class = packet->mad.hdr.traffic_class;
rdma_ah_set_grh(&ah_attr, NULL,
be32_to_cpu(packet->mad.hdr.flow_label),
packet->mad.hdr.gid_index,
packet->mad.hdr.hop_limit,
packet->mad.hdr.traffic_class);
rdma_ah_set_dgid_raw(&ah_attr, packet->mad.hdr.gid);
}
ah = ib_create_ah(agent->qp->pd, &ah_attr);
ah = rdma_create_ah(agent->qp->pd, &ah_attr);
if (IS_ERR(ah)) {
ret = PTR_ERR(ah);
goto err_up;
@ -596,7 +600,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
err_msg:
ib_free_send_mad(packet->msg);
err_ah:
ib_destroy_ah(ah);
rdma_destroy_ah(ah);
err_up:
mutex_unlock(&file->mutex);
err:


@ -76,12 +76,13 @@
* an asynchronous event queue file is created and released when the
* event file is closed.
*
* struct ib_uverbs_event_file: One reference is held by the VFS and
* released when the file is closed. For asynchronous event files,
* another reference is held by the corresponding main context file
* and released when that file is closed. For completion event files,
* a reference is taken when a CQ is created that uses the file, and
* released when the CQ is destroyed.
* struct ib_uverbs_event_queue: Base structure for
* struct ib_uverbs_async_event_file and struct ib_uverbs_completion_event_file.
* One reference is held by the VFS and released when the file is closed.
* For asynchronous event files, another reference is held by the corresponding
* main context file and released when that file is closed. For completion
* event files, a reference is taken when a CQ is created that uses the file,
* and released when the CQ is destroyed.
*/
struct ib_uverbs_device {
@ -101,18 +102,26 @@ struct ib_uverbs_device {
struct list_head uverbs_events_file_list;
};
struct ib_uverbs_event_file {
struct kref ref;
int is_async;
struct ib_uverbs_file *uverbs_file;
struct ib_uverbs_event_queue {
spinlock_t lock;
int is_closed;
wait_queue_head_t poll_wait;
struct fasync_struct *async_queue;
struct list_head event_list;
};
struct ib_uverbs_async_event_file {
struct ib_uverbs_event_queue ev_queue;
struct ib_uverbs_file *uverbs_file;
struct kref ref;
struct list_head list;
};
struct ib_uverbs_completion_event_file {
struct ib_uobject_file uobj_file;
struct ib_uverbs_event_queue ev_queue;
};
struct ib_uverbs_file {
struct kref ref;
struct mutex mutex;
@ -120,9 +129,13 @@ struct ib_uverbs_file {
struct ib_uverbs_device *device;
struct ib_ucontext *ucontext;
struct ib_event_handler event_handler;
struct ib_uverbs_event_file *async_file;
struct ib_uverbs_async_event_file *async_file;
struct list_head list;
int is_closed;
struct idr idr;
/* spinlock protects write access to idr */
spinlock_t idr_lock;
};
struct ib_uverbs_event {
@ -159,6 +172,8 @@ struct ib_usrq_object {
struct ib_uqp_object {
struct ib_uevent_object uevent;
/* lock for mcast list */
struct mutex mcast_lock;
struct list_head mcast_list;
struct ib_uxrcd_object *uxrcd;
};
@ -176,32 +191,18 @@ struct ib_ucq_object {
u32 async_events_reported;
};
extern spinlock_t ib_uverbs_idr_lock;
extern struct idr ib_uverbs_pd_idr;
extern struct idr ib_uverbs_mr_idr;
extern struct idr ib_uverbs_mw_idr;
extern struct idr ib_uverbs_ah_idr;
extern struct idr ib_uverbs_cq_idr;
extern struct idr ib_uverbs_qp_idr;
extern struct idr ib_uverbs_srq_idr;
extern struct idr ib_uverbs_xrcd_idr;
extern struct idr ib_uverbs_rule_idr;
extern struct idr ib_uverbs_wq_idr;
extern struct idr ib_uverbs_rwq_ind_tbl_idr;
void idr_remove_uobj(struct idr *idp, struct ib_uobject *uobj);
struct file *ib_uverbs_alloc_event_file(struct ib_uverbs_file *uverbs_file,
struct ib_device *ib_dev,
int is_async);
extern const struct file_operations uverbs_event_fops;
void ib_uverbs_init_event_queue(struct ib_uverbs_event_queue *ev_queue);
struct file *ib_uverbs_alloc_async_event_file(struct ib_uverbs_file *uverbs_file,
struct ib_device *ib_dev);
void ib_uverbs_free_async_event_file(struct ib_uverbs_file *uverbs_file);
struct ib_uverbs_event_file *ib_uverbs_lookup_comp_file(int fd);
void ib_uverbs_release_ucq(struct ib_uverbs_file *file,
struct ib_uverbs_event_file *ev_file,
struct ib_uverbs_completion_event_file *ev_file,
struct ib_ucq_object *uobj);
void ib_uverbs_release_uevent(struct ib_uverbs_file *file,
struct ib_uevent_object *uobj);
void ib_uverbs_release_file(struct kref *ref);
void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context);
void ib_uverbs_cq_event_handler(struct ib_event *event, void *context_ptr);
@ -210,9 +211,12 @@ void ib_uverbs_wq_event_handler(struct ib_event *event, void *context_ptr);
void ib_uverbs_srq_event_handler(struct ib_event *event, void *context_ptr);
void ib_uverbs_event_handler(struct ib_event_handler *handler,
struct ib_event *event);
void ib_uverbs_dealloc_xrcd(struct ib_uverbs_device *dev, struct ib_xrcd *xrcd);
int ib_uverbs_dealloc_xrcd(struct ib_uverbs_device *dev, struct ib_xrcd *xrcd,
enum rdma_remove_reason why);
int uverbs_dealloc_mw(struct ib_mw *mw);
void ib_uverbs_detach_umcast(struct ib_qp *qp,
struct ib_uqp_object *uobj);
struct ib_uverbs_flow_spec {
union {
@ -229,6 +233,7 @@ struct ib_uverbs_flow_spec {
struct ib_uverbs_flow_spec_tcp_udp tcp_udp;
struct ib_uverbs_flow_spec_ipv6 ipv6;
struct ib_uverbs_flow_spec_action_tag flow_tag;
struct ib_uverbs_flow_spec_action_drop drop;
};
};

File diff suppressed because it is too large


@ -52,6 +52,7 @@
#include "uverbs.h"
#include "core_priv.h"
#include "rdma_core.h"
MODULE_AUTHOR("Roland Dreier");
MODULE_DESCRIPTION("InfiniBand userspace verbs access");
@ -67,19 +68,6 @@ enum {
static struct class *uverbs_class;
DEFINE_SPINLOCK(ib_uverbs_idr_lock);
DEFINE_IDR(ib_uverbs_pd_idr);
DEFINE_IDR(ib_uverbs_mr_idr);
DEFINE_IDR(ib_uverbs_mw_idr);
DEFINE_IDR(ib_uverbs_ah_idr);
DEFINE_IDR(ib_uverbs_cq_idr);
DEFINE_IDR(ib_uverbs_qp_idr);
DEFINE_IDR(ib_uverbs_srq_idr);
DEFINE_IDR(ib_uverbs_xrcd_idr);
DEFINE_IDR(ib_uverbs_rule_idr);
DEFINE_IDR(ib_uverbs_wq_idr);
DEFINE_IDR(ib_uverbs_rwq_ind_tbl_idr);
static DEFINE_SPINLOCK(map_lock);
static DECLARE_BITMAP(dev_map, IB_UVERBS_MAX_DEVICES);
@ -168,37 +156,37 @@ static struct kobj_type ib_uverbs_dev_ktype = {
.release = ib_uverbs_release_dev,
};
static void ib_uverbs_release_event_file(struct kref *ref)
static void ib_uverbs_release_async_event_file(struct kref *ref)
{
struct ib_uverbs_event_file *file =
container_of(ref, struct ib_uverbs_event_file, ref);
struct ib_uverbs_async_event_file *file =
container_of(ref, struct ib_uverbs_async_event_file, ref);
kfree(file);
}
void ib_uverbs_release_ucq(struct ib_uverbs_file *file,
struct ib_uverbs_event_file *ev_file,
struct ib_uverbs_completion_event_file *ev_file,
struct ib_ucq_object *uobj)
{
struct ib_uverbs_event *evt, *tmp;
if (ev_file) {
spin_lock_irq(&ev_file->lock);
spin_lock_irq(&ev_file->ev_queue.lock);
list_for_each_entry_safe(evt, tmp, &uobj->comp_list, obj_list) {
list_del(&evt->list);
kfree(evt);
}
spin_unlock_irq(&ev_file->lock);
spin_unlock_irq(&ev_file->ev_queue.lock);
kref_put(&ev_file->ref, ib_uverbs_release_event_file);
uverbs_uobject_put(&ev_file->uobj_file.uobj);
}
spin_lock_irq(&file->async_file->lock);
spin_lock_irq(&file->async_file->ev_queue.lock);
list_for_each_entry_safe(evt, tmp, &uobj->async_list, obj_list) {
list_del(&evt->list);
kfree(evt);
}
spin_unlock_irq(&file->async_file->lock);
spin_unlock_irq(&file->async_file->ev_queue.lock);
}
void ib_uverbs_release_uevent(struct ib_uverbs_file *file,
@ -206,16 +194,16 @@ void ib_uverbs_release_uevent(struct ib_uverbs_file *file,
{
struct ib_uverbs_event *evt, *tmp;
spin_lock_irq(&file->async_file->lock);
spin_lock_irq(&file->async_file->ev_queue.lock);
list_for_each_entry_safe(evt, tmp, &uobj->event_list, obj_list) {
list_del(&evt->list);
kfree(evt);
}
spin_unlock_irq(&file->async_file->lock);
spin_unlock_irq(&file->async_file->ev_queue.lock);
}
static void ib_uverbs_detach_umcast(struct ib_qp *qp,
struct ib_uqp_object *uobj)
void ib_uverbs_detach_umcast(struct ib_qp *qp,
struct ib_uqp_object *uobj)
{
struct ib_uverbs_mcast_entry *mcast, *tmp;
@ -227,138 +215,11 @@ static void ib_uverbs_detach_umcast(struct ib_qp *qp,
}
static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
struct ib_ucontext *context)
struct ib_ucontext *context,
bool device_removed)
{
struct ib_uobject *uobj, *tmp;
context->closing = 1;
list_for_each_entry_safe(uobj, tmp, &context->ah_list, list) {
struct ib_ah *ah = uobj->object;
idr_remove_uobj(&ib_uverbs_ah_idr, uobj);
ib_destroy_ah(ah);
ib_rdmacg_uncharge(&uobj->cg_obj, context->device,
RDMACG_RESOURCE_HCA_OBJECT);
kfree(uobj);
}
/* Remove MWs before QPs, in order to support type 2A MWs. */
list_for_each_entry_safe(uobj, tmp, &context->mw_list, list) {
struct ib_mw *mw = uobj->object;
idr_remove_uobj(&ib_uverbs_mw_idr, uobj);
uverbs_dealloc_mw(mw);
ib_rdmacg_uncharge(&uobj->cg_obj, context->device,
RDMACG_RESOURCE_HCA_OBJECT);
kfree(uobj);
}
list_for_each_entry_safe(uobj, tmp, &context->rule_list, list) {
struct ib_flow *flow_id = uobj->object;
idr_remove_uobj(&ib_uverbs_rule_idr, uobj);
ib_destroy_flow(flow_id);
ib_rdmacg_uncharge(&uobj->cg_obj, context->device,
RDMACG_RESOURCE_HCA_OBJECT);
kfree(uobj);
}
list_for_each_entry_safe(uobj, tmp, &context->qp_list, list) {
struct ib_qp *qp = uobj->object;
struct ib_uqp_object *uqp =
container_of(uobj, struct ib_uqp_object, uevent.uobject);
idr_remove_uobj(&ib_uverbs_qp_idr, uobj);
if (qp == qp->real_qp)
ib_uverbs_detach_umcast(qp, uqp);
ib_destroy_qp(qp);
ib_rdmacg_uncharge(&uobj->cg_obj, context->device,
RDMACG_RESOURCE_HCA_OBJECT);
ib_uverbs_release_uevent(file, &uqp->uevent);
kfree(uqp);
}
list_for_each_entry_safe(uobj, tmp, &context->rwq_ind_tbl_list, list) {
struct ib_rwq_ind_table *rwq_ind_tbl = uobj->object;
struct ib_wq **ind_tbl = rwq_ind_tbl->ind_tbl;
idr_remove_uobj(&ib_uverbs_rwq_ind_tbl_idr, uobj);
ib_destroy_rwq_ind_table(rwq_ind_tbl);
kfree(ind_tbl);
kfree(uobj);
}
list_for_each_entry_safe(uobj, tmp, &context->wq_list, list) {
struct ib_wq *wq = uobj->object;
struct ib_uwq_object *uwq =
container_of(uobj, struct ib_uwq_object, uevent.uobject);
idr_remove_uobj(&ib_uverbs_wq_idr, uobj);
ib_destroy_wq(wq);
ib_uverbs_release_uevent(file, &uwq->uevent);
kfree(uwq);
}
list_for_each_entry_safe(uobj, tmp, &context->srq_list, list) {
struct ib_srq *srq = uobj->object;
struct ib_uevent_object *uevent =
container_of(uobj, struct ib_uevent_object, uobject);
idr_remove_uobj(&ib_uverbs_srq_idr, uobj);
ib_destroy_srq(srq);
ib_rdmacg_uncharge(&uobj->cg_obj, context->device,
RDMACG_RESOURCE_HCA_OBJECT);
ib_uverbs_release_uevent(file, uevent);
kfree(uevent);
}
list_for_each_entry_safe(uobj, tmp, &context->cq_list, list) {
struct ib_cq *cq = uobj->object;
struct ib_uverbs_event_file *ev_file = cq->cq_context;
struct ib_ucq_object *ucq =
container_of(uobj, struct ib_ucq_object, uobject);
idr_remove_uobj(&ib_uverbs_cq_idr, uobj);
ib_destroy_cq(cq);
ib_rdmacg_uncharge(&uobj->cg_obj, context->device,
RDMACG_RESOURCE_HCA_OBJECT);
ib_uverbs_release_ucq(file, ev_file, ucq);
kfree(ucq);
}
list_for_each_entry_safe(uobj, tmp, &context->mr_list, list) {
struct ib_mr *mr = uobj->object;
idr_remove_uobj(&ib_uverbs_mr_idr, uobj);
ib_dereg_mr(mr);
ib_rdmacg_uncharge(&uobj->cg_obj, context->device,
RDMACG_RESOURCE_HCA_OBJECT);
kfree(uobj);
}
mutex_lock(&file->device->xrcd_tree_mutex);
list_for_each_entry_safe(uobj, tmp, &context->xrcd_list, list) {
struct ib_xrcd *xrcd = uobj->object;
struct ib_uxrcd_object *uxrcd =
container_of(uobj, struct ib_uxrcd_object, uobject);
idr_remove_uobj(&ib_uverbs_xrcd_idr, uobj);
ib_uverbs_dealloc_xrcd(file->device, xrcd);
kfree(uxrcd);
}
mutex_unlock(&file->device->xrcd_tree_mutex);
list_for_each_entry_safe(uobj, tmp, &context->pd_list, list) {
struct ib_pd *pd = uobj->object;
idr_remove_uobj(&ib_uverbs_pd_idr, uobj);
ib_dealloc_pd(pd);
ib_rdmacg_uncharge(&uobj->cg_obj, context->device,
RDMACG_RESOURCE_HCA_OBJECT);
kfree(uobj);
}
uverbs_cleanup_ucontext(context, device_removed);
put_pid(context->tgid);
ib_rdmacg_uncharge(&context->cg_obj, context->device,
@ -372,7 +233,7 @@ static void ib_uverbs_comp_dev(struct ib_uverbs_device *dev)
complete(&dev->comp);
}
static void ib_uverbs_release_file(struct kref *ref)
void ib_uverbs_release_file(struct kref *ref)
{
struct ib_uverbs_file *file =
container_of(ref, struct ib_uverbs_file, ref);
@ -392,58 +253,54 @@ static void ib_uverbs_release_file(struct kref *ref)
kfree(file);
}
static ssize_t ib_uverbs_event_read(struct file *filp, char __user *buf,
size_t count, loff_t *pos)
static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_queue *ev_queue,
struct ib_uverbs_file *uverbs_file,
struct file *filp, char __user *buf,
size_t count, loff_t *pos,
size_t eventsz)
{
struct ib_uverbs_event_file *file = filp->private_data;
struct ib_uverbs_event *event;
int eventsz;
int ret = 0;
spin_lock_irq(&file->lock);
spin_lock_irq(&ev_queue->lock);
while (list_empty(&file->event_list)) {
spin_unlock_irq(&file->lock);
while (list_empty(&ev_queue->event_list)) {
spin_unlock_irq(&ev_queue->lock);
if (filp->f_flags & O_NONBLOCK)
return -EAGAIN;
if (wait_event_interruptible(file->poll_wait,
(!list_empty(&file->event_list) ||
if (wait_event_interruptible(ev_queue->poll_wait,
(!list_empty(&ev_queue->event_list) ||
/* The barriers built into wait_event_interruptible()
* and wake_up() guarantee this will see the null set
* without using RCU
*/
!file->uverbs_file->device->ib_dev)))
!uverbs_file->device->ib_dev)))
return -ERESTARTSYS;
/* If the device was disassociated and no event exists, set an error */
if (list_empty(&file->event_list) &&
!file->uverbs_file->device->ib_dev)
if (list_empty(&ev_queue->event_list) &&
!uverbs_file->device->ib_dev)
return -EIO;
spin_lock_irq(&file->lock);
spin_lock_irq(&ev_queue->lock);
}
event = list_entry(file->event_list.next, struct ib_uverbs_event, list);
if (file->is_async)
eventsz = sizeof (struct ib_uverbs_async_event_desc);
else
eventsz = sizeof (struct ib_uverbs_comp_event_desc);
event = list_entry(ev_queue->event_list.next, struct ib_uverbs_event, list);
if (eventsz > count) {
ret = -EINVAL;
event = NULL;
} else {
list_del(file->event_list.next);
list_del(ev_queue->event_list.next);
if (event->counter) {
++(*event->counter);
list_del(&event->obj_list);
}
}
spin_unlock_irq(&file->lock);
spin_unlock_irq(&ev_queue->lock);
if (event) {
if (copy_to_user(buf, event, eventsz))
@ -457,87 +314,158 @@ static ssize_t ib_uverbs_event_read(struct file *filp, char __user *buf,
return ret;
}
static unsigned int ib_uverbs_event_poll(struct file *filp,
static ssize_t ib_uverbs_async_event_read(struct file *filp, char __user *buf,
size_t count, loff_t *pos)
{
struct ib_uverbs_async_event_file *file = filp->private_data;
return ib_uverbs_event_read(&file->ev_queue, file->uverbs_file, filp,
buf, count, pos,
sizeof(struct ib_uverbs_async_event_desc));
}
static ssize_t ib_uverbs_comp_event_read(struct file *filp, char __user *buf,
size_t count, loff_t *pos)
{
struct ib_uverbs_completion_event_file *comp_ev_file =
filp->private_data;
return ib_uverbs_event_read(&comp_ev_file->ev_queue,
comp_ev_file->uobj_file.ufile, filp,
buf, count, pos,
sizeof(struct ib_uverbs_comp_event_desc));
}
static unsigned int ib_uverbs_event_poll(struct ib_uverbs_event_queue *ev_queue,
struct file *filp,
struct poll_table_struct *wait)
{
unsigned int pollflags = 0;
struct ib_uverbs_event_file *file = filp->private_data;
poll_wait(filp, &file->poll_wait, wait);
poll_wait(filp, &ev_queue->poll_wait, wait);
spin_lock_irq(&file->lock);
if (!list_empty(&file->event_list))
spin_lock_irq(&ev_queue->lock);
if (!list_empty(&ev_queue->event_list))
pollflags = POLLIN | POLLRDNORM;
spin_unlock_irq(&file->lock);
spin_unlock_irq(&ev_queue->lock);
return pollflags;
}
static int ib_uverbs_event_fasync(int fd, struct file *filp, int on)
static unsigned int ib_uverbs_async_event_poll(struct file *filp,
struct poll_table_struct *wait)
{
struct ib_uverbs_event_file *file = filp->private_data;
return fasync_helper(fd, filp, on, &file->async_queue);
return ib_uverbs_event_poll(filp->private_data, filp, wait);
}
static int ib_uverbs_event_close(struct inode *inode, struct file *filp)
static unsigned int ib_uverbs_comp_event_poll(struct file *filp,
struct poll_table_struct *wait)
{
struct ib_uverbs_event_file *file = filp->private_data;
struct ib_uverbs_completion_event_file *comp_ev_file =
filp->private_data;
return ib_uverbs_event_poll(&comp_ev_file->ev_queue, filp, wait);
}
static int ib_uverbs_async_event_fasync(int fd, struct file *filp, int on)
{
struct ib_uverbs_event_queue *ev_queue = filp->private_data;
return fasync_helper(fd, filp, on, &ev_queue->async_queue);
}
static int ib_uverbs_comp_event_fasync(int fd, struct file *filp, int on)
{
struct ib_uverbs_completion_event_file *comp_ev_file =
filp->private_data;
return fasync_helper(fd, filp, on, &comp_ev_file->ev_queue.async_queue);
}
static int ib_uverbs_async_event_close(struct inode *inode, struct file *filp)
{
struct ib_uverbs_async_event_file *file = filp->private_data;
struct ib_uverbs_file *uverbs_file = file->uverbs_file;
struct ib_uverbs_event *entry, *tmp;
int closed_already = 0;
mutex_lock(&file->uverbs_file->device->lists_mutex);
spin_lock_irq(&file->lock);
closed_already = file->is_closed;
file->is_closed = 1;
list_for_each_entry_safe(entry, tmp, &file->event_list, list) {
mutex_lock(&uverbs_file->device->lists_mutex);
spin_lock_irq(&file->ev_queue.lock);
closed_already = file->ev_queue.is_closed;
file->ev_queue.is_closed = 1;
list_for_each_entry_safe(entry, tmp, &file->ev_queue.event_list, list) {
if (entry->counter)
list_del(&entry->obj_list);
kfree(entry);
}
spin_unlock_irq(&file->lock);
spin_unlock_irq(&file->ev_queue.lock);
if (!closed_already) {
list_del(&file->list);
if (file->is_async)
ib_unregister_event_handler(&file->uverbs_file->
event_handler);
ib_unregister_event_handler(&uverbs_file->event_handler);
}
mutex_unlock(&file->uverbs_file->device->lists_mutex);
mutex_unlock(&uverbs_file->device->lists_mutex);
kref_put(&file->uverbs_file->ref, ib_uverbs_release_file);
kref_put(&file->ref, ib_uverbs_release_event_file);
kref_put(&uverbs_file->ref, ib_uverbs_release_file);
kref_put(&file->ref, ib_uverbs_release_async_event_file);
return 0;
}
static const struct file_operations uverbs_event_fops = {
static int ib_uverbs_comp_event_close(struct inode *inode, struct file *filp)
{
struct ib_uverbs_completion_event_file *file = filp->private_data;
struct ib_uverbs_event *entry, *tmp;
spin_lock_irq(&file->ev_queue.lock);
list_for_each_entry_safe(entry, tmp, &file->ev_queue.event_list, list) {
if (entry->counter)
list_del(&entry->obj_list);
kfree(entry);
}
spin_unlock_irq(&file->ev_queue.lock);
uverbs_close_fd(filp);
return 0;
}
const struct file_operations uverbs_event_fops = {
.owner = THIS_MODULE,
.read = ib_uverbs_event_read,
.poll = ib_uverbs_event_poll,
.release = ib_uverbs_event_close,
.fasync = ib_uverbs_event_fasync,
.read = ib_uverbs_comp_event_read,
.poll = ib_uverbs_comp_event_poll,
.release = ib_uverbs_comp_event_close,
.fasync = ib_uverbs_comp_event_fasync,
.llseek = no_llseek,
};
static const struct file_operations uverbs_async_event_fops = {
.owner = THIS_MODULE,
.read = ib_uverbs_async_event_read,
.poll = ib_uverbs_async_event_poll,
.release = ib_uverbs_async_event_close,
.fasync = ib_uverbs_async_event_fasync,
.llseek = no_llseek,
};
void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context)
{
struct ib_uverbs_event_file *file = cq_context;
struct ib_uverbs_event_queue *ev_queue = cq_context;
struct ib_ucq_object *uobj;
struct ib_uverbs_event *entry;
unsigned long flags;
if (!file)
if (!ev_queue)
return;
spin_lock_irqsave(&file->lock, flags);
if (file->is_closed) {
spin_unlock_irqrestore(&file->lock, flags);
spin_lock_irqsave(&ev_queue->lock, flags);
if (ev_queue->is_closed) {
spin_unlock_irqrestore(&ev_queue->lock, flags);
return;
}
entry = kmalloc(sizeof *entry, GFP_ATOMIC);
if (!entry) {
spin_unlock_irqrestore(&file->lock, flags);
spin_unlock_irqrestore(&ev_queue->lock, flags);
return;
}
@ -546,12 +474,12 @@ void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context)
entry->desc.comp.cq_handle = cq->uobject->user_handle;
entry->counter = &uobj->comp_events_reported;
list_add_tail(&entry->list, &file->event_list);
list_add_tail(&entry->list, &ev_queue->event_list);
list_add_tail(&entry->obj_list, &uobj->comp_list);
spin_unlock_irqrestore(&file->lock, flags);
spin_unlock_irqrestore(&ev_queue->lock, flags);
wake_up_interruptible(&file->poll_wait);
kill_fasync(&file->async_queue, SIGIO, POLL_IN);
wake_up_interruptible(&ev_queue->poll_wait);
kill_fasync(&ev_queue->async_queue, SIGIO, POLL_IN);
}
static void ib_uverbs_async_handler(struct ib_uverbs_file *file,
@ -562,15 +490,15 @@ static void ib_uverbs_async_handler(struct ib_uverbs_file *file,
struct ib_uverbs_event *entry;
unsigned long flags;
spin_lock_irqsave(&file->async_file->lock, flags);
if (file->async_file->is_closed) {
spin_unlock_irqrestore(&file->async_file->lock, flags);
spin_lock_irqsave(&file->async_file->ev_queue.lock, flags);
if (file->async_file->ev_queue.is_closed) {
spin_unlock_irqrestore(&file->async_file->ev_queue.lock, flags);
return;
}
entry = kmalloc(sizeof *entry, GFP_ATOMIC);
if (!entry) {
spin_unlock_irqrestore(&file->async_file->lock, flags);
spin_unlock_irqrestore(&file->async_file->ev_queue.lock, flags);
return;
}
@ -579,13 +507,13 @@ static void ib_uverbs_async_handler(struct ib_uverbs_file *file,
entry->desc.async.reserved = 0;
entry->counter = counter;
list_add_tail(&entry->list, &file->async_file->event_list);
list_add_tail(&entry->list, &file->async_file->ev_queue.event_list);
if (obj_list)
list_add_tail(&entry->obj_list, obj_list);
spin_unlock_irqrestore(&file->async_file->lock, flags);
spin_unlock_irqrestore(&file->async_file->ev_queue.lock, flags);
wake_up_interruptible(&file->async_file->poll_wait);
kill_fasync(&file->async_file->async_queue, SIGIO, POLL_IN);
wake_up_interruptible(&file->async_file->ev_queue.poll_wait);
kill_fasync(&file->async_file->ev_queue.async_queue, SIGIO, POLL_IN);
}
void ib_uverbs_cq_event_handler(struct ib_event *event, void *context_ptr)
@ -603,7 +531,7 @@ void ib_uverbs_qp_event_handler(struct ib_event *event, void *context_ptr)
struct ib_uevent_object *uobj;
/* for XRC target qp's, check that qp is live */
if (!event->element.qp->uobject || !event->element.qp->uobject->live)
if (!event->element.qp->uobject)
return;
uobj = container_of(event->element.qp->uobject,
@ -648,15 +576,23 @@ void ib_uverbs_event_handler(struct ib_event_handler *handler,
void ib_uverbs_free_async_event_file(struct ib_uverbs_file *file)
{
kref_put(&file->async_file->ref, ib_uverbs_release_event_file);
kref_put(&file->async_file->ref, ib_uverbs_release_async_event_file);
file->async_file = NULL;
}
struct file *ib_uverbs_alloc_event_file(struct ib_uverbs_file *uverbs_file,
struct ib_device *ib_dev,
int is_async)
void ib_uverbs_init_event_queue(struct ib_uverbs_event_queue *ev_queue)
{
struct ib_uverbs_event_file *ev_file;
spin_lock_init(&ev_queue->lock);
INIT_LIST_HEAD(&ev_queue->event_list);
init_waitqueue_head(&ev_queue->poll_wait);
ev_queue->is_closed = 0;
ev_queue->async_queue = NULL;
}
struct file *ib_uverbs_alloc_async_event_file(struct ib_uverbs_file *uverbs_file,
struct ib_device *ib_dev)
{
struct ib_uverbs_async_event_file *ev_file;
struct file *filp;
int ret;
@ -664,16 +600,11 @@ struct file *ib_uverbs_alloc_event_file(struct ib_uverbs_file *uverbs_file,
if (!ev_file)
return ERR_PTR(-ENOMEM);
kref_init(&ev_file->ref);
spin_lock_init(&ev_file->lock);
INIT_LIST_HEAD(&ev_file->event_list);
init_waitqueue_head(&ev_file->poll_wait);
ib_uverbs_init_event_queue(&ev_file->ev_queue);
ev_file->uverbs_file = uverbs_file;
kref_get(&ev_file->uverbs_file->ref);
ev_file->async_queue = NULL;
ev_file->is_closed = 0;
filp = anon_inode_getfile("[infinibandevent]", &uverbs_event_fops,
kref_init(&ev_file->ref);
filp = anon_inode_getfile("[infinibandevent]", &uverbs_async_event_fops,
ev_file, O_RDONLY);
if (IS_ERR(filp))
goto err_put_refs;
@ -683,64 +614,33 @@ struct file *ib_uverbs_alloc_event_file(struct ib_uverbs_file *uverbs_file,
&uverbs_file->device->uverbs_events_file_list);
mutex_unlock(&uverbs_file->device->lists_mutex);
if (is_async) {
WARN_ON(uverbs_file->async_file);
uverbs_file->async_file = ev_file;
kref_get(&uverbs_file->async_file->ref);
INIT_IB_EVENT_HANDLER(&uverbs_file->event_handler,
ib_dev,
ib_uverbs_event_handler);
ret = ib_register_event_handler(&uverbs_file->event_handler);
if (ret)
goto err_put_file;
WARN_ON(uverbs_file->async_file);
uverbs_file->async_file = ev_file;
kref_get(&uverbs_file->async_file->ref);
INIT_IB_EVENT_HANDLER(&uverbs_file->event_handler,
ib_dev,
ib_uverbs_event_handler);
ret = ib_register_event_handler(&uverbs_file->event_handler);
if (ret)
goto err_put_file;
/* At that point async file stuff was fully set */
ev_file->is_async = 1;
}
/* At that point async file stuff was fully set */
return filp;
err_put_file:
fput(filp);
kref_put(&uverbs_file->async_file->ref, ib_uverbs_release_event_file);
kref_put(&uverbs_file->async_file->ref,
ib_uverbs_release_async_event_file);
uverbs_file->async_file = NULL;
return ERR_PTR(ret);
err_put_refs:
kref_put(&ev_file->uverbs_file->ref, ib_uverbs_release_file);
kref_put(&ev_file->ref, ib_uverbs_release_event_file);
kref_put(&ev_file->ref, ib_uverbs_release_async_event_file);
return filp;
}
/*
* Look up a completion event file by FD. If lookup is successful,
* takes a ref to the event file struct that it returns; if
* unsuccessful, returns NULL.
*/
struct ib_uverbs_event_file *ib_uverbs_lookup_comp_file(int fd)
{
struct ib_uverbs_event_file *ev_file = NULL;
struct fd f = fdget(fd);
if (!f.file)
return NULL;
if (f.file->f_op != &uverbs_event_fops)
goto out;
ev_file = f.file->private_data;
if (ev_file->is_async) {
ev_file = NULL;
goto out;
}
kref_get(&ev_file->ref);
out:
fdput(f);
return ev_file;
}
static int verify_command_mask(struct ib_device *ib_dev, __u32 command)
{
u64 mask;
@ -986,6 +886,8 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
}
file->device = dev;
spin_lock_init(&file->idr_lock);
idr_init(&file->idr);
file->ucontext = NULL;
file->async_file = NULL;
kref_init(&file->ref);
@ -1019,10 +921,11 @@ static int ib_uverbs_close(struct inode *inode, struct file *filp)
mutex_lock(&file->cleanup_mutex);
if (file->ucontext) {
ib_uverbs_cleanup_ucontext(file, file->ucontext);
ib_uverbs_cleanup_ucontext(file, file->ucontext, false);
file->ucontext = NULL;
}
mutex_unlock(&file->cleanup_mutex);
idr_destroy(&file->idr);
mutex_lock(&file->device->lists_mutex);
if (!file->is_closed) {
@ -1032,7 +935,8 @@ static int ib_uverbs_close(struct inode *inode, struct file *filp)
mutex_unlock(&file->device->lists_mutex);
if (file->async_file)
kref_put(&file->async_file->ref, ib_uverbs_release_event_file);
kref_put(&file->async_file->ref,
ib_uverbs_release_async_event_file);
kref_put(&file->ref, ib_uverbs_release_file);
kobject_put(&dev->kobj);
@ -1231,7 +1135,7 @@ static void ib_uverbs_free_hw_resources(struct ib_uverbs_device *uverbs_dev,
struct ib_device *ib_dev)
{
struct ib_uverbs_file *file;
struct ib_uverbs_event_file *event_file;
struct ib_uverbs_async_event_file *event_file;
struct ib_event event;
/* Pending running commands to terminate */
@ -1268,7 +1172,9 @@ static void ib_uverbs_free_hw_resources(struct ib_uverbs_device *uverbs_dev,
* (e.g mmput).
*/
ib_dev->disassociate_ucontext(ucontext);
ib_uverbs_cleanup_ucontext(file, ucontext);
mutex_lock(&file->cleanup_mutex);
ib_uverbs_cleanup_ucontext(file, ucontext, true);
mutex_unlock(&file->cleanup_mutex);
}
mutex_lock(&uverbs_dev->lists_mutex);
@ -1278,21 +1184,20 @@ static void ib_uverbs_free_hw_resources(struct ib_uverbs_device *uverbs_dev,
while (!list_empty(&uverbs_dev->uverbs_events_file_list)) {
event_file = list_first_entry(&uverbs_dev->
uverbs_events_file_list,
struct ib_uverbs_event_file,
struct ib_uverbs_async_event_file,
list);
spin_lock_irq(&event_file->lock);
event_file->is_closed = 1;
spin_unlock_irq(&event_file->lock);
spin_lock_irq(&event_file->ev_queue.lock);
event_file->ev_queue.is_closed = 1;
spin_unlock_irq(&event_file->ev_queue.lock);
list_del(&event_file->list);
if (event_file->is_async) {
ib_unregister_event_handler(&event_file->uverbs_file->
event_handler);
event_file->uverbs_file->event_handler.device = NULL;
}
ib_unregister_event_handler(
&event_file->uverbs_file->event_handler);
event_file->uverbs_file->event_handler.device =
NULL;
wake_up_interruptible(&event_file->poll_wait);
kill_fasync(&event_file->async_queue, SIGIO, POLL_IN);
wake_up_interruptible(&event_file->ev_queue.poll_wait);
kill_fasync(&event_file->ev_queue.async_queue, SIGIO, POLL_IN);
}
mutex_unlock(&uverbs_dev->lists_mutex);
}
@ -1396,13 +1301,6 @@ static void __exit ib_uverbs_cleanup(void)
unregister_chrdev_region(IB_UVERBS_BASE_DEV, IB_UVERBS_MAX_DEVICES);
if (overflow_maj)
unregister_chrdev_region(overflow_maj, IB_UVERBS_MAX_DEVICES);
idr_destroy(&ib_uverbs_pd_idr);
idr_destroy(&ib_uverbs_mr_idr);
idr_destroy(&ib_uverbs_mw_idr);
idr_destroy(&ib_uverbs_ah_idr);
idr_destroy(&ib_uverbs_cq_idr);
idr_destroy(&ib_uverbs_qp_idr);
idr_destroy(&ib_uverbs_srq_idr);
}
module_init(ib_uverbs_init);


@ -34,20 +34,25 @@
#include <rdma/ib_marshall.h>
void ib_copy_ah_attr_to_user(struct ib_uverbs_ah_attr *dst,
struct ib_ah_attr *src)
struct rdma_ah_attr *src)
{
memcpy(dst->grh.dgid, src->grh.dgid.raw, sizeof src->grh.dgid);
dst->grh.flow_label = src->grh.flow_label;
dst->grh.sgid_index = src->grh.sgid_index;
dst->grh.hop_limit = src->grh.hop_limit;
dst->grh.traffic_class = src->grh.traffic_class;
memset(&dst->grh.reserved, 0, sizeof(dst->grh.reserved));
dst->dlid = src->dlid;
dst->sl = src->sl;
dst->src_path_bits = src->src_path_bits;
dst->static_rate = src->static_rate;
dst->is_global = src->ah_flags & IB_AH_GRH ? 1 : 0;
dst->port_num = src->port_num;
dst->dlid = rdma_ah_get_dlid(src);
dst->sl = rdma_ah_get_sl(src);
dst->src_path_bits = rdma_ah_get_path_bits(src);
dst->static_rate = rdma_ah_get_static_rate(src);
dst->is_global = rdma_ah_get_ah_flags(src) &
IB_AH_GRH ? 1 : 0;
if (dst->is_global) {
const struct ib_global_route *grh = rdma_ah_read_grh(src);
memcpy(dst->grh.dgid, grh->dgid.raw, sizeof(grh->dgid));
dst->grh.flow_label = grh->flow_label;
dst->grh.sgid_index = grh->sgid_index;
dst->grh.hop_limit = grh->hop_limit;
dst->grh.traffic_class = grh->traffic_class;
}
dst->port_num = rdma_ah_get_port_num(src);
dst->reserved = 0;
}
EXPORT_SYMBOL(ib_copy_ah_attr_to_user);
@ -91,15 +96,15 @@ void ib_copy_qp_attr_to_user(struct ib_uverbs_qp_attr *dst,
}
EXPORT_SYMBOL(ib_copy_qp_attr_to_user);
void ib_copy_path_rec_to_user(struct ib_user_path_rec *dst,
struct ib_sa_path_rec *src)
void __ib_copy_path_rec_to_user(struct ib_user_path_rec *dst,
struct sa_path_rec *src)
{
memcpy(dst->dgid, src->dgid.raw, sizeof src->dgid);
memcpy(dst->sgid, src->sgid.raw, sizeof src->sgid);
dst->dlid = src->dlid;
dst->slid = src->slid;
dst->raw_traffic = src->raw_traffic;
dst->dlid = htons(ntohl(sa_path_get_dlid(src)));
dst->slid = htons(ntohl(sa_path_get_slid(src)));
dst->raw_traffic = sa_path_get_raw_traffic(src);
dst->flow_label = src->flow_label;
dst->hop_limit = src->hop_limit;
dst->traffic_class = src->traffic_class;
@ -115,17 +120,43 @@ void ib_copy_path_rec_to_user(struct ib_user_path_rec *dst,
dst->preference = src->preference;
dst->packet_life_time_selector = src->packet_life_time_selector;
}
void ib_copy_path_rec_to_user(struct ib_user_path_rec *dst,
struct sa_path_rec *src)
{
struct sa_path_rec rec;
if (src->rec_type == SA_PATH_REC_TYPE_OPA) {
sa_convert_path_opa_to_ib(&rec, src);
__ib_copy_path_rec_to_user(dst, &rec);
return;
}
__ib_copy_path_rec_to_user(dst, src);
}
EXPORT_SYMBOL(ib_copy_path_rec_to_user);
void ib_copy_path_rec_from_user(struct ib_sa_path_rec *dst,
void ib_copy_path_rec_from_user(struct sa_path_rec *dst,
struct ib_user_path_rec *src)
{
__be32 slid, dlid;
memset(dst, 0, sizeof(*dst));
if ((ib_is_opa_gid((union ib_gid *)src->sgid)) ||
(ib_is_opa_gid((union ib_gid *)src->dgid))) {
dst->rec_type = SA_PATH_REC_TYPE_OPA;
slid = htonl(opa_get_lid_from_gid((union ib_gid *)src->sgid));
dlid = htonl(opa_get_lid_from_gid((union ib_gid *)src->dgid));
} else {
dst->rec_type = SA_PATH_REC_TYPE_IB;
slid = htonl(ntohs(src->slid));
dlid = htonl(ntohs(src->dlid));
}
memcpy(dst->dgid.raw, src->dgid, sizeof dst->dgid);
memcpy(dst->sgid.raw, src->sgid, sizeof dst->sgid);
dst->dlid = src->dlid;
dst->slid = src->slid;
dst->raw_traffic = src->raw_traffic;
sa_path_set_dlid(dst, dlid);
sa_path_set_slid(dst, slid);
sa_path_set_raw_traffic(dst, src->raw_traffic);
dst->flow_label = src->flow_label;
dst->hop_limit = src->hop_limit;
dst->traffic_class = src->traffic_class;
@ -141,9 +172,9 @@ void ib_copy_path_rec_from_user(struct ib_sa_path_rec *dst,
dst->preference = src->preference;
dst->packet_life_time_selector = src->packet_life_time_selector;
memset(dst->dmac, 0, sizeof(dst->dmac));
dst->net = NULL;
dst->ifindex = 0;
dst->gid_type = IB_GID_TYPE_IB;
/* TODO: No need to set this */
sa_path_set_dmac_zero(dst);
sa_path_set_ndev(dst, NULL);
sa_path_set_ifindex(dst, 0);
}
EXPORT_SYMBOL(ib_copy_path_rec_from_user);


@ -0,0 +1,275 @@
/*
* Copyright (c) 2017, Mellanox Technologies inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <rdma/uverbs_std_types.h>
#include <rdma/ib_user_verbs.h>
#include <rdma/ib_verbs.h>
#include <linux/bug.h>
#include <linux/file.h>
#include "rdma_core.h"
#include "uverbs.h"
static int uverbs_free_ah(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
return rdma_destroy_ah((struct ib_ah *)uobject->object);
}
static int uverbs_free_flow(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
return ib_destroy_flow((struct ib_flow *)uobject->object);
}
static int uverbs_free_mw(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
return uverbs_dealloc_mw((struct ib_mw *)uobject->object);
}
static int uverbs_free_qp(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
struct ib_qp *qp = uobject->object;
struct ib_uqp_object *uqp =
container_of(uobject, struct ib_uqp_object, uevent.uobject);
int ret;
if (why == RDMA_REMOVE_DESTROY) {
if (!list_empty(&uqp->mcast_list))
return -EBUSY;
} else if (qp == qp->real_qp) {
ib_uverbs_detach_umcast(qp, uqp);
}
ret = ib_destroy_qp(qp);
if (ret && why == RDMA_REMOVE_DESTROY)
return ret;
if (uqp->uxrcd)
atomic_dec(&uqp->uxrcd->refcnt);
ib_uverbs_release_uevent(uobject->context->ufile, &uqp->uevent);
return ret;
}
static int uverbs_free_rwq_ind_tbl(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
struct ib_rwq_ind_table *rwq_ind_tbl = uobject->object;
struct ib_wq **ind_tbl = rwq_ind_tbl->ind_tbl;
int ret;
ret = ib_destroy_rwq_ind_table(rwq_ind_tbl);
if (!ret || why != RDMA_REMOVE_DESTROY)
kfree(ind_tbl);
return ret;
}
static int uverbs_free_wq(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
struct ib_wq *wq = uobject->object;
struct ib_uwq_object *uwq =
container_of(uobject, struct ib_uwq_object, uevent.uobject);
int ret;
ret = ib_destroy_wq(wq);
if (!ret || why != RDMA_REMOVE_DESTROY)
ib_uverbs_release_uevent(uobject->context->ufile, &uwq->uevent);
return ret;
}
static int uverbs_free_srq(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
struct ib_srq *srq = uobject->object;
struct ib_uevent_object *uevent =
container_of(uobject, struct ib_uevent_object, uobject);
enum ib_srq_type srq_type = srq->srq_type;
int ret;
ret = ib_destroy_srq(srq);
if (ret && why == RDMA_REMOVE_DESTROY)
return ret;
if (srq_type == IB_SRQT_XRC) {
struct ib_usrq_object *us =
container_of(uevent, struct ib_usrq_object, uevent);
atomic_dec(&us->uxrcd->refcnt);
}
ib_uverbs_release_uevent(uobject->context->ufile, uevent);
return ret;
}
static int uverbs_free_cq(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
struct ib_cq *cq = uobject->object;
struct ib_uverbs_event_queue *ev_queue = cq->cq_context;
struct ib_ucq_object *ucq =
container_of(uobject, struct ib_ucq_object, uobject);
int ret;
ret = ib_destroy_cq(cq);
if (!ret || why != RDMA_REMOVE_DESTROY)
ib_uverbs_release_ucq(uobject->context->ufile, ev_queue ?
container_of(ev_queue,
struct ib_uverbs_completion_event_file,
ev_queue) : NULL,
ucq);
return ret;
}
static int uverbs_free_mr(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
return ib_dereg_mr((struct ib_mr *)uobject->object);
}
static int uverbs_free_xrcd(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
struct ib_xrcd *xrcd = uobject->object;
struct ib_uxrcd_object *uxrcd =
container_of(uobject, struct ib_uxrcd_object, uobject);
int ret;
mutex_lock(&uobject->context->ufile->device->xrcd_tree_mutex);
if (why == RDMA_REMOVE_DESTROY && atomic_read(&uxrcd->refcnt))
ret = -EBUSY;
else
ret = ib_uverbs_dealloc_xrcd(uobject->context->ufile->device,
xrcd, why);
mutex_unlock(&uobject->context->ufile->device->xrcd_tree_mutex);
return ret;
}
static int uverbs_free_pd(struct ib_uobject *uobject,
enum rdma_remove_reason why)
{
struct ib_pd *pd = uobject->object;
if (why == RDMA_REMOVE_DESTROY && atomic_read(&pd->usecnt))
return -EBUSY;
ib_dealloc_pd((struct ib_pd *)uobject->object);
return 0;
}
static int uverbs_hot_unplug_completion_event_file(struct ib_uobject_file *uobj_file,
enum rdma_remove_reason why)
{
struct ib_uverbs_completion_event_file *comp_event_file =
container_of(uobj_file, struct ib_uverbs_completion_event_file,
uobj_file);
struct ib_uverbs_event_queue *event_queue = &comp_event_file->ev_queue;
spin_lock_irq(&event_queue->lock);
event_queue->is_closed = 1;
spin_unlock_irq(&event_queue->lock);
if (why == RDMA_REMOVE_DRIVER_REMOVE) {
wake_up_interruptible(&event_queue->poll_wait);
kill_fasync(&event_queue->async_queue, SIGIO, POLL_IN);
}
return 0;
};
const struct uverbs_obj_fd_type uverbs_type_attrs_comp_channel = {
.type = UVERBS_TYPE_ALLOC_FD(sizeof(struct ib_uverbs_completion_event_file), 0),
.context_closed = uverbs_hot_unplug_completion_event_file,
.fops = &uverbs_event_fops,
.name = "[infinibandevent]",
.flags = O_RDONLY,
};
const struct uverbs_obj_idr_type uverbs_type_attrs_cq = {
.type = UVERBS_TYPE_ALLOC_IDR_SZ(sizeof(struct ib_ucq_object), 0),
.destroy_object = uverbs_free_cq,
};
const struct uverbs_obj_idr_type uverbs_type_attrs_qp = {
.type = UVERBS_TYPE_ALLOC_IDR_SZ(sizeof(struct ib_uqp_object), 0),
.destroy_object = uverbs_free_qp,
};
const struct uverbs_obj_idr_type uverbs_type_attrs_mw = {
.type = UVERBS_TYPE_ALLOC_IDR(0),
.destroy_object = uverbs_free_mw,
};
const struct uverbs_obj_idr_type uverbs_type_attrs_mr = {
/* 1 is used in order to free the MR after all the MWs */
.type = UVERBS_TYPE_ALLOC_IDR(1),
.destroy_object = uverbs_free_mr,
};
const struct uverbs_obj_idr_type uverbs_type_attrs_srq = {
.type = UVERBS_TYPE_ALLOC_IDR_SZ(sizeof(struct ib_usrq_object), 0),
.destroy_object = uverbs_free_srq,
};
const struct uverbs_obj_idr_type uverbs_type_attrs_ah = {
.type = UVERBS_TYPE_ALLOC_IDR(0),
.destroy_object = uverbs_free_ah,
};
const struct uverbs_obj_idr_type uverbs_type_attrs_flow = {
.type = UVERBS_TYPE_ALLOC_IDR(0),
.destroy_object = uverbs_free_flow,
};
const struct uverbs_obj_idr_type uverbs_type_attrs_wq = {
.type = UVERBS_TYPE_ALLOC_IDR_SZ(sizeof(struct ib_uwq_object), 0),
.destroy_object = uverbs_free_wq,
};
const struct uverbs_obj_idr_type uverbs_type_attrs_rwq_ind_table = {
.type = UVERBS_TYPE_ALLOC_IDR(0),
.destroy_object = uverbs_free_rwq_ind_tbl,
};
const struct uverbs_obj_idr_type uverbs_type_attrs_xrcd = {
.type = UVERBS_TYPE_ALLOC_IDR_SZ(sizeof(struct ib_uxrcd_object), 0),
.destroy_object = uverbs_free_xrcd,
};
const struct uverbs_obj_idr_type uverbs_type_attrs_pd = {
/* 2 is used in order to free the PD after MRs */
.type = UVERBS_TYPE_ALLOC_IDR(2),
.destroy_object = uverbs_free_pd,
};
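
The 0, 1 and 2 arguments passed to UVERBS_TYPE_ALLOC_IDR()/UVERBS_TYPE_ALLOC_IDR_SZ() above encode a relative teardown order, as the two comments note: memory windows (0) are destroyed before the memory regions (1) they may be bound to, and protection domains (2) go last. Purely to illustrate that ordering idea (this is a hedged, standalone sketch, not the uverbs implementation), the C program below sorts a table of cleanup callbacks by such an order value before running them; every name in it is invented.

/*
 * Illustration only: tear objects down by an explicit "destruction order"
 * key, so that a lower value is destroyed earlier.  MW goes before the MR
 * it may be bound to, and the PD is freed last.  All names are invented.
 */
#include <stdio.h>
#include <stdlib.h>

struct cleanup_entry {
	const char *name;
	int order;			/* lower value => destroyed earlier */
	void (*destroy)(const char *name);
};

static void destroy_object(const char *name)
{
	printf("destroying %s\n", name);
}

static int cmp_order(const void *a, const void *b)
{
	const struct cleanup_entry *x = a, *y = b;

	return x->order - y->order;
}

int main(void)
{
	struct cleanup_entry table[] = {
		{ "PD", 2, destroy_object },	/* freed after MRs     */
		{ "MW", 0, destroy_object },	/* freed before its MR */
		{ "MR", 1, destroy_object },	/* freed after all MWs */
	};

	qsort(table, 3, sizeof(table[0]), cmp_order);
	for (int i = 0; i < 3; i++)
		table[i].destroy(table[i].name);
	return 0;
}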


@ -311,7 +311,7 @@ EXPORT_SYMBOL(ib_dealloc_pd);
/* Address handles */
struct ib_ah *ib_create_ah(struct ib_pd *pd, struct ib_ah_attr *ah_attr)
struct ib_ah *rdma_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr)
{
struct ib_ah *ah;
@ -321,12 +321,13 @@ struct ib_ah *ib_create_ah(struct ib_pd *pd, struct ib_ah_attr *ah_attr)
ah->device = pd->device;
ah->pd = pd;
ah->uobject = NULL;
ah->type = ah_attr->type;
atomic_inc(&pd->usecnt);
}
return ah;
}
EXPORT_SYMBOL(ib_create_ah);
EXPORT_SYMBOL(rdma_create_ah);
int ib_get_rdma_header_version(const union rdma_network_hdr *hdr)
{
@ -452,7 +453,7 @@ EXPORT_SYMBOL(ib_get_gids_from_rdma_hdr);
int ib_init_ah_from_wc(struct ib_device *device, u8 port_num,
const struct ib_wc *wc, const struct ib_grh *grh,
struct ib_ah_attr *ah_attr)
struct rdma_ah_attr *ah_attr)
{
u32 flow_class;
u16 gid_index;
@ -464,6 +465,7 @@ int ib_init_ah_from_wc(struct ib_device *device, u8 port_num,
union ib_gid sgid;
memset(ah_attr, 0, sizeof *ah_attr);
ah_attr->type = rdma_ah_find_type(device, port_num);
if (rdma_cap_eth_ah(device, port_num)) {
if (wc->wc_flags & IB_WC_WITH_NETWORK_HDR_TYPE)
net_type = wc->network_hdr_type;
@ -494,7 +496,7 @@ int ib_init_ah_from_wc(struct ib_device *device, u8 port_num,
return -ENODEV;
ret = rdma_addr_find_l2_eth_by_grh(&dgid, &sgid,
ah_attr->dmac,
ah_attr->roce.dmac,
wc->wc_flags & IB_WC_WITH_VLAN ?
NULL : &vlan_id,
&if_index, &hoplimit);
@ -525,15 +527,12 @@ int ib_init_ah_from_wc(struct ib_device *device, u8 port_num,
return ret;
}
ah_attr->dlid = wc->slid;
ah_attr->sl = wc->sl;
ah_attr->src_path_bits = wc->dlid_path_bits;
ah_attr->port_num = port_num;
rdma_ah_set_dlid(ah_attr, wc->slid);
rdma_ah_set_sl(ah_attr, wc->sl);
rdma_ah_set_path_bits(ah_attr, wc->dlid_path_bits);
rdma_ah_set_port_num(ah_attr, port_num);
if (wc->wc_flags & IB_WC_GRH) {
ah_attr->ah_flags = IB_AH_GRH;
ah_attr->grh.dgid = sgid;
if (!rdma_cap_eth_ah(device, port_num)) {
if (dgid.global.interface_id != cpu_to_be64(IB_SA_WELL_KNOWN_GUID)) {
ret = ib_find_cached_gid_by_port(device, &dgid,
@ -547,11 +546,12 @@ int ib_init_ah_from_wc(struct ib_device *device, u8 port_num,
}
}
ah_attr->grh.sgid_index = (u8) gid_index;
flow_class = be32_to_cpu(grh->version_tclass_flow);
ah_attr->grh.flow_label = flow_class & 0xFFFFF;
ah_attr->grh.hop_limit = hoplimit;
ah_attr->grh.traffic_class = (flow_class >> 20) & 0xFF;
rdma_ah_set_grh(ah_attr, &sgid,
flow_class & 0xFFFFF,
(u8)gid_index, hoplimit,
(flow_class >> 20) & 0xFF);
}
return 0;
}
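
The hunks above replace direct writes to ib_ah_attr fields (dlid, sl, source path bits, port number and the GRH members) with rdma_ah_set_*() accessors on the new struct rdma_ah_attr. As a rough, standalone sketch of the general pattern (the my_ah_attr and my_ah_set_* names are invented and are not the kernel API), trivial inline setters keep callers independent of the attribute struct's layout, so the layout can later grow per-type members without touching every call site:

#include <stdint.h>
#include <stdio.h>

/* Invented stand-in for an address-handle attribute structure. */
struct my_ah_attr {
	uint32_t dlid;
	uint8_t  sl;
	uint8_t  port_num;
};

static inline void my_ah_set_dlid(struct my_ah_attr *a, uint32_t dlid)
{
	a->dlid = dlid;
}

static inline void my_ah_set_sl(struct my_ah_attr *a, uint8_t sl)
{
	a->sl = sl;
}

static inline void my_ah_set_port_num(struct my_ah_attr *a, uint8_t port)
{
	a->port_num = port;
}

int main(void)
{
	struct my_ah_attr attr = { 0 };

	/* Callers go through setters instead of touching fields directly. */
	my_ah_set_dlid(&attr, 0x1234);
	my_ah_set_sl(&attr, 3);
	my_ah_set_port_num(&attr, 1);
	printf("dlid=0x%x sl=%u port=%u\n", (unsigned)attr.dlid,
	       (unsigned)attr.sl, (unsigned)attr.port_num);
	return 0;
}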
@ -560,34 +560,37 @@ EXPORT_SYMBOL(ib_init_ah_from_wc);
struct ib_ah *ib_create_ah_from_wc(struct ib_pd *pd, const struct ib_wc *wc,
const struct ib_grh *grh, u8 port_num)
{
struct ib_ah_attr ah_attr;
struct rdma_ah_attr ah_attr;
int ret;
ret = ib_init_ah_from_wc(pd->device, port_num, wc, grh, &ah_attr);
if (ret)
return ERR_PTR(ret);
return ib_create_ah(pd, &ah_attr);
return rdma_create_ah(pd, &ah_attr);
}
EXPORT_SYMBOL(ib_create_ah_from_wc);
int ib_modify_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr)
int rdma_modify_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr)
{
if (ah->type != ah_attr->type)
return -EINVAL;
return ah->device->modify_ah ?
ah->device->modify_ah(ah, ah_attr) :
-ENOSYS;
}
EXPORT_SYMBOL(ib_modify_ah);
EXPORT_SYMBOL(rdma_modify_ah);
int ib_query_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr)
int rdma_query_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr)
{
return ah->device->query_ah ?
ah->device->query_ah(ah, ah_attr) :
-ENOSYS;
}
EXPORT_SYMBOL(ib_query_ah);
EXPORT_SYMBOL(rdma_query_ah);
int ib_destroy_ah(struct ib_ah *ah)
int rdma_destroy_ah(struct ib_ah *ah)
{
struct ib_pd *pd;
int ret;
@ -599,7 +602,7 @@ int ib_destroy_ah(struct ib_ah *ah)
return ret;
}
EXPORT_SYMBOL(ib_destroy_ah);
EXPORT_SYMBOL(rdma_destroy_ah);
/* Shared receive queues */
@ -1201,19 +1204,22 @@ int ib_modify_qp_is_ok(enum ib_qp_state cur_state, enum ib_qp_state next_state,
EXPORT_SYMBOL(ib_modify_qp_is_ok);
int ib_resolve_eth_dmac(struct ib_device *device,
struct ib_ah_attr *ah_attr)
struct rdma_ah_attr *ah_attr)
{
int ret = 0;
struct ib_global_route *grh;
if (!rdma_is_port_valid(device, ah_attr->port_num))
if (!rdma_is_port_valid(device, rdma_ah_get_port_num(ah_attr)))
return -EINVAL;
if (!rdma_cap_eth_ah(device, ah_attr->port_num))
if (ah_attr->type != RDMA_AH_ATTR_TYPE_ROCE)
return 0;
if (rdma_link_local_addr((struct in6_addr *)ah_attr->grh.dgid.raw)) {
rdma_get_ll_mac((struct in6_addr *)ah_attr->grh.dgid.raw,
ah_attr->dmac);
grh = rdma_ah_retrieve_grh(ah_attr);
if (rdma_link_local_addr((struct in6_addr *)grh->dgid.raw)) {
rdma_get_ll_mac((struct in6_addr *)grh->dgid.raw,
ah_attr->roce.dmac);
} else {
union ib_gid sgid;
struct ib_gid_attr sgid_attr;
@ -1221,8 +1227,8 @@ int ib_resolve_eth_dmac(struct ib_device *device,
int hop_limit;
ret = ib_query_gid(device,
ah_attr->port_num,
ah_attr->grh.sgid_index,
rdma_ah_get_port_num(ah_attr),
grh->sgid_index,
&sgid, &sgid_attr);
if (ret || !sgid_attr.ndev) {
@ -1233,14 +1239,14 @@ int ib_resolve_eth_dmac(struct ib_device *device,
ifindex = sgid_attr.ndev->ifindex;
ret = rdma_addr_find_l2_eth_by_grh(&sgid,
&ah_attr->grh.dgid,
ah_attr->dmac,
NULL, &ifindex, &hop_limit);
ret =
rdma_addr_find_l2_eth_by_grh(&sgid, &grh->dgid,
ah_attr->roce.dmac,
NULL, &ifindex, &hop_limit);
dev_put(sgid_attr.ndev);
ah_attr->grh.hop_limit = hop_limit;
grh->hop_limit = hop_limit;
}
out:
return ret;
@ -1519,7 +1525,9 @@ int ib_attach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid)
if (!qp->device->attach_mcast)
return -ENOSYS;
if (gid->raw[0] != 0xff || qp->qp_type != IB_QPT_UD)
if (gid->raw[0] != 0xff || qp->qp_type != IB_QPT_UD ||
lid < be16_to_cpu(IB_MULTICAST_LID_BASE) ||
lid == be16_to_cpu(IB_LID_PERMISSIVE))
return -EINVAL;
ret = qp->device->attach_mcast(qp, gid, lid);
@ -1535,7 +1543,9 @@ int ib_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid)
if (!qp->device->detach_mcast)
return -ENOSYS;
if (gid->raw[0] != 0xff || qp->qp_type != IB_QPT_UD)
if (gid->raw[0] != 0xff || qp->qp_type != IB_QPT_UD ||
lid < be16_to_cpu(IB_MULTICAST_LID_BASE) ||
lid == be16_to_cpu(IB_LID_PERMISSIVE))
return -EINVAL;
ret = qp->device->detach_mcast(qp, gid, lid);
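
The two hunks above tighten the UD multicast attach/detach checks: besides requiring a multicast GID and a UD QP, the LID must now lie in the multicast LID range and must not be the permissive LID. Below is a hedged, standalone sketch of that range test; the constants are local stand-ins with illustrative values rather than the kernel's IB_MULTICAST_LID_BASE and IB_LID_PERMISSIVE definitions.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Local stand-ins for the InfiniBand LID constants (illustrative values). */
#define MCAST_LID_BASE	0xC000u
#define LID_PERMISSIVE	0xFFFFu

/* A multicast LID is valid if it is in range and not the permissive LID. */
static bool mcast_lid_is_valid(uint16_t lid)
{
	return lid >= MCAST_LID_BASE && lid != LID_PERMISSIVE;
}

int main(void)
{
	const uint16_t samples[] = { 0x0001, 0xC001, 0xFFFF };

	for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("lid 0x%04x valid: %d\n",
		       (unsigned)samples[i], mcast_lid_is_valid(samples[i]));
	return 0;
}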


@ -524,19 +524,20 @@ int bnxt_re_destroy_ah(struct ib_ah *ib_ah)
}
struct ib_ah *bnxt_re_create_ah(struct ib_pd *ib_pd,
struct ib_ah_attr *ah_attr,
struct rdma_ah_attr *ah_attr,
struct ib_udata *udata)
{
struct bnxt_re_pd *pd = container_of(ib_pd, struct bnxt_re_pd, ib_pd);
struct bnxt_re_dev *rdev = pd->rdev;
struct bnxt_re_ah *ah;
const struct ib_global_route *grh = rdma_ah_read_grh(ah_attr);
int rc;
u16 vlan_tag;
u8 nw_type;
struct ib_gid_attr sgid_attr;
if (!(ah_attr->ah_flags & IB_AH_GRH)) {
if (!(rdma_ah_get_ah_flags(ah_attr) & IB_AH_GRH)) {
dev_err(rdev_to_dev(rdev), "Failed to alloc AH: GRH not set");
return ERR_PTR(-EINVAL);
}
@ -548,33 +549,33 @@ struct ib_ah *bnxt_re_create_ah(struct ib_pd *ib_pd,
ah->qplib_ah.pd = &pd->qplib_pd;
/* Supply the configuration for the HW */
memcpy(ah->qplib_ah.dgid.data, ah_attr->grh.dgid.raw,
memcpy(ah->qplib_ah.dgid.data, grh->dgid.raw,
sizeof(union ib_gid));
/*
* If RoCE V2 is enabled, stack will have two entries for
* each GID entry. Avoiding this duplicate entry in HW. Dividing
* the GID index by 2 for RoCE V2
*/
ah->qplib_ah.sgid_index = ah_attr->grh.sgid_index / 2;
ah->qplib_ah.host_sgid_index = ah_attr->grh.sgid_index;
ah->qplib_ah.traffic_class = ah_attr->grh.traffic_class;
ah->qplib_ah.flow_label = ah_attr->grh.flow_label;
ah->qplib_ah.hop_limit = ah_attr->grh.hop_limit;
ah->qplib_ah.sl = ah_attr->sl;
ah->qplib_ah.sgid_index = grh->sgid_index / 2;
ah->qplib_ah.host_sgid_index = grh->sgid_index;
ah->qplib_ah.traffic_class = grh->traffic_class;
ah->qplib_ah.flow_label = grh->flow_label;
ah->qplib_ah.hop_limit = grh->hop_limit;
ah->qplib_ah.sl = rdma_ah_get_sl(ah_attr);
if (ib_pd->uobject &&
!rdma_is_multicast_addr((struct in6_addr *)
ah_attr->grh.dgid.raw) &&
grh->dgid.raw) &&
!rdma_link_local_addr((struct in6_addr *)
ah_attr->grh.dgid.raw)) {
grh->dgid.raw)) {
union ib_gid sgid;
rc = ib_get_cached_gid(&rdev->ibdev, 1,
ah_attr->grh.sgid_index, &sgid,
grh->sgid_index, &sgid,
&sgid_attr);
if (rc) {
dev_err(rdev_to_dev(rdev),
"Failed to query gid at index %d",
ah_attr->grh.sgid_index);
grh->sgid_index);
goto fail;
}
if (sgid_attr.ndev) {
@ -595,8 +596,8 @@ struct ib_ah *bnxt_re_create_ah(struct ib_pd *ib_pd,
ah->qplib_ah.nw_type = CMDQ_CREATE_AH_TYPE_V1;
break;
}
rc = rdma_addr_find_l2_eth_by_grh(&sgid, &ah_attr->grh.dgid,
ah_attr->dmac, &vlan_tag,
rc = rdma_addr_find_l2_eth_by_grh(&sgid, &grh->dgid,
ah_attr->roce.dmac, &vlan_tag,
&sgid_attr.ndev->ifindex,
NULL);
if (rc) {
@ -605,7 +606,7 @@ struct ib_ah *bnxt_re_create_ah(struct ib_pd *ib_pd,
}
}
memcpy(ah->qplib_ah.dmac, ah_attr->dmac, ETH_ALEN);
memcpy(ah->qplib_ah.dmac, ah_attr->roce.dmac, ETH_ALEN);
rc = bnxt_qplib_create_ah(&rdev->qplib_res, &ah->qplib_ah);
if (rc) {
dev_err(rdev_to_dev(rdev), "Failed to allocate HW AH");
@ -634,24 +635,24 @@ struct ib_ah *bnxt_re_create_ah(struct ib_pd *ib_pd,
return ERR_PTR(rc);
}
int bnxt_re_modify_ah(struct ib_ah *ib_ah, struct ib_ah_attr *ah_attr)
int bnxt_re_modify_ah(struct ib_ah *ib_ah, struct rdma_ah_attr *ah_attr)
{
return 0;
}
int bnxt_re_query_ah(struct ib_ah *ib_ah, struct ib_ah_attr *ah_attr)
int bnxt_re_query_ah(struct ib_ah *ib_ah, struct rdma_ah_attr *ah_attr)
{
struct bnxt_re_ah *ah = container_of(ib_ah, struct bnxt_re_ah, ib_ah);
memcpy(ah_attr->grh.dgid.raw, ah->qplib_ah.dgid.data,
sizeof(union ib_gid));
ah_attr->grh.sgid_index = ah->qplib_ah.host_sgid_index;
ah_attr->grh.traffic_class = ah->qplib_ah.traffic_class;
ah_attr->sl = ah->qplib_ah.sl;
memcpy(ah_attr->dmac, ah->qplib_ah.dmac, ETH_ALEN);
ah_attr->ah_flags = IB_AH_GRH;
ah_attr->port_num = 1;
ah_attr->static_rate = 0;
ah_attr->type = ib_ah->type;
rdma_ah_set_sl(ah_attr, ah->qplib_ah.sl);
memcpy(ah_attr->roce.dmac, ah->qplib_ah.dmac, ETH_ALEN);
rdma_ah_set_grh(ah_attr, NULL, 0,
ah->qplib_ah.host_sgid_index,
0, ah->qplib_ah.traffic_class);
rdma_ah_set_dgid_raw(ah_attr, ah->qplib_ah.dgid.data);
rdma_ah_set_port_num(ah_attr, 1);
rdma_ah_set_static_rate(ah_attr, 0);
return 0;
}
@ -692,9 +693,9 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp)
kfree(rdev->qp1_sqp);
}
if (qp->rumem && !IS_ERR(qp->rumem))
if (!IS_ERR_OR_NULL(qp->rumem))
ib_umem_release(qp->rumem);
if (qp->sumem && !IS_ERR(qp->sumem))
if (!IS_ERR_OR_NULL(qp->sumem))
ib_umem_release(qp->sumem);
mutex_lock(&rdev->qp_lock);
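
This hunk, and the CQ and MR teardown hunks further down, collapse the open-coded "pointer is set and is not an error pointer" test into IS_ERR_OR_NULL(). The snippet below re-creates that kernel idiom in plain userspace C purely for illustration (the real helpers live in linux/err.h): a pointer may be NULL, a real object, or an encoded negative errno, and IS_ERR_OR_NULL() folds the two "nothing to release" cases into one check.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_ERRNO	4095

/* Encode a negative errno value in a pointer (simplified userspace copy). */
static inline void *ERR_PTR(long err)
{
	return (void *)(intptr_t)err;
}

static inline bool IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static inline bool IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || IS_ERR(ptr);
}

int main(void)
{
	int obj = 42;
	void *cases[] = { NULL, &obj, ERR_PTR(-12) /* e.g. -ENOMEM */ };

	for (int i = 0; i < 3; i++)
		printf("case %d: release? %s\n", i,
		       IS_ERR_OR_NULL(cases[i]) ? "no" : "yes");
	return 0;
}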
@ -1258,6 +1259,9 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
qp->qplib_qp.qkey = qp_attr->qkey;
}
if (qp_attr_mask & IB_QP_AV) {
const struct ib_global_route *grh =
rdma_ah_read_grh(&qp_attr->ah_attr);
qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_DGID |
CMDQ_MODIFY_QP_MODIFY_MASK_FLOW_LABEL |
CMDQ_MODIFY_QP_MODIFY_MASK_SGID_INDEX |
@ -1265,25 +1269,23 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
CMDQ_MODIFY_QP_MODIFY_MASK_TRAFFIC_CLASS |
CMDQ_MODIFY_QP_MODIFY_MASK_DEST_MAC |
CMDQ_MODIFY_QP_MODIFY_MASK_VLAN_ID;
memcpy(qp->qplib_qp.ah.dgid.data, qp_attr->ah_attr.grh.dgid.raw,
memcpy(qp->qplib_qp.ah.dgid.data, grh->dgid.raw,
sizeof(qp->qplib_qp.ah.dgid.data));
qp->qplib_qp.ah.flow_label = qp_attr->ah_attr.grh.flow_label;
qp->qplib_qp.ah.flow_label = grh->flow_label;
/* If RoCE V2 is enabled, stack will have two entries for
* each GID entry. Avoiding this duplicate entry in HW. Dividing
* the GID index by 2 for RoCE V2
*/
qp->qplib_qp.ah.sgid_index =
qp_attr->ah_attr.grh.sgid_index / 2;
qp->qplib_qp.ah.host_sgid_index =
qp_attr->ah_attr.grh.sgid_index;
qp->qplib_qp.ah.hop_limit = qp_attr->ah_attr.grh.hop_limit;
qp->qplib_qp.ah.traffic_class =
qp_attr->ah_attr.grh.traffic_class;
qp->qplib_qp.ah.sl = qp_attr->ah_attr.sl;
ether_addr_copy(qp->qplib_qp.ah.dmac, qp_attr->ah_attr.dmac);
qp->qplib_qp.ah.sgid_index = grh->sgid_index / 2;
qp->qplib_qp.ah.host_sgid_index = grh->sgid_index;
qp->qplib_qp.ah.hop_limit = grh->hop_limit;
qp->qplib_qp.ah.traffic_class = grh->traffic_class;
qp->qplib_qp.ah.sl = rdma_ah_get_sl(&qp_attr->ah_attr);
ether_addr_copy(qp->qplib_qp.ah.dmac,
qp_attr->ah_attr.roce.dmac);
status = ib_get_cached_gid(&rdev->ibdev, 1,
qp_attr->ah_attr.grh.sgid_index,
grh->sgid_index,
&sgid, &sgid_attr);
if (!status && sgid_attr.ndev) {
memcpy(qp->qplib_qp.smac, sgid_attr.ndev->dev_addr,
@ -1423,14 +1425,14 @@ int bnxt_re_query_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
qp_attr->qp_access_flags = __to_ib_access_flags(qplib_qp.access);
qp_attr->pkey_index = qplib_qp.pkey_index;
qp_attr->qkey = qplib_qp.qkey;
memcpy(qp_attr->ah_attr.grh.dgid.raw, qplib_qp.ah.dgid.data,
sizeof(qplib_qp.ah.dgid.data));
qp_attr->ah_attr.grh.flow_label = qplib_qp.ah.flow_label;
qp_attr->ah_attr.grh.sgid_index = qplib_qp.ah.host_sgid_index;
qp_attr->ah_attr.grh.hop_limit = qplib_qp.ah.hop_limit;
qp_attr->ah_attr.grh.traffic_class = qplib_qp.ah.traffic_class;
qp_attr->ah_attr.sl = qplib_qp.ah.sl;
ether_addr_copy(qp_attr->ah_attr.dmac, qplib_qp.ah.dmac);
qp_attr->ah_attr.type = RDMA_AH_ATTR_TYPE_ROCE;
rdma_ah_set_grh(&qp_attr->ah_attr, NULL, qplib_qp.ah.flow_label,
qplib_qp.ah.host_sgid_index,
qplib_qp.ah.hop_limit,
qplib_qp.ah.traffic_class);
rdma_ah_set_dgid_raw(&qp_attr->ah_attr, qplib_qp.ah.dgid.data);
rdma_ah_set_sl(&qp_attr->ah_attr, qplib_qp.ah.sl);
ether_addr_copy(qp_attr->ah_attr.roce.dmac, qplib_qp.ah.dmac);
qp_attr->path_mtu = __to_ib_mtu(qplib_qp.path_mtu);
qp_attr->timeout = qplib_qp.timeout;
qp_attr->retry_cnt = qplib_qp.retry_cnt;
@ -2116,7 +2118,7 @@ int bnxt_re_destroy_cq(struct ib_cq *ib_cq)
dev_err(rdev_to_dev(rdev), "Failed to destroy HW CQ");
return rc;
}
if (cq->umem && !IS_ERR(cq->umem))
if (!IS_ERR_OR_NULL(cq->umem))
ib_umem_release(cq->umem);
if (cq) {
@ -2818,7 +2820,7 @@ int bnxt_re_dereg_mr(struct ib_mr *ib_mr)
{
struct bnxt_re_mr *mr = container_of(ib_mr, struct bnxt_re_mr, ib_mr);
struct bnxt_re_dev *rdev = mr->rdev;
int rc = 0;
int rc;
if (mr->npages && mr->pages) {
rc = bnxt_qplib_free_fast_reg_page_list(&rdev->qplib_res,
@ -2829,7 +2831,7 @@ int bnxt_re_dereg_mr(struct ib_mr *ib_mr)
}
rc = bnxt_qplib_free_mrw(&rdev->qplib_res, &mr->qplib_mr);
if (!IS_ERR(mr->ib_umem) && mr->ib_umem)
if (!IS_ERR_OR_NULL(mr->ib_umem))
ib_umem_release(mr->ib_umem);
kfree(mr);
@ -3016,7 +3018,7 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
struct bnxt_re_mr *mr;
struct ib_umem *umem;
u64 *pbl_tbl, *pbl_tbl_orig;
int i, umem_pgs, pages, page_shift, rc;
int i, umem_pgs, pages, rc;
struct scatterlist *sg;
int entry;
@ -3062,22 +3064,22 @@ struct ib_mr *bnxt_re_reg_user_mr(struct ib_pd *ib_pd, u64 start, u64 length,
}
pbl_tbl_orig = pbl_tbl;
page_shift = ilog2(umem->page_size);
if (umem->hugetlb) {
dev_err(rdev_to_dev(rdev), "umem hugetlb not supported!");
rc = -EFAULT;
goto fail;
}
if (umem->page_size != PAGE_SIZE) {
dev_err(rdev_to_dev(rdev), "umem page size unsupported!");
if (umem->page_shift != PAGE_SHIFT) {
dev_err(rdev_to_dev(rdev), "umem page shift unsupported!");
rc = -EFAULT;
goto fail;
}
/* Map umem buf ptrs to the PBL */
for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
pages = sg_dma_len(sg) >> page_shift;
pages = sg_dma_len(sg) >> umem->page_shift;
for (i = 0; i < pages; i++, pbl_tbl++)
*pbl_tbl = sg_dma_address(sg) + (i << page_shift);
*pbl_tbl = sg_dma_address(sg) + (i << umem->page_shift);
}
rc = bnxt_qplib_reg_mr(&rdev->qplib_res, &mr->qplib_mr, pbl_tbl_orig,
umem_pgs, false);
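
The reworked registration path above takes the page shift straight from the umem (umem->page_shift) instead of deriving it from umem->page_size, and uses it to expand every scatter-gather segment into per-page entries of the page buffer list. The standalone sketch below shows only that expansion step; the dma_seg type is invented here and merely stands in for the kernel scatterlist.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct dma_seg {
	uint64_t addr;	/* page-aligned DMA address of the segment     */
	uint32_t len;	/* segment length, a multiple of the page size */
};

/*
 * Split every DMA segment into page-sized chunks and append the address
 * of each chunk to a flat page buffer list (PBL).
 */
static size_t build_pbl(const struct dma_seg *segs, size_t nsegs,
			unsigned int page_shift,
			uint64_t *pbl, size_t max_entries)
{
	size_t n = 0;

	for (size_t s = 0; s < nsegs; s++) {
		uint32_t pages = segs[s].len >> page_shift;

		for (uint32_t i = 0; i < pages && n < max_entries; i++)
			pbl[n++] = segs[s].addr + ((uint64_t)i << page_shift);
	}
	return n;
}

int main(void)
{
	const unsigned int page_shift = 12;	/* 4 KiB pages assumed */
	struct dma_seg segs[] = {
		{ 0x100000, 2u << page_shift },	/* two pages */
		{ 0x800000, 1u << page_shift },	/* one page  */
	};
	uint64_t pbl[8];
	size_t n = build_pbl(segs, 2, page_shift, pbl, 8);

	for (size_t i = 0; i < n; i++)
		printf("pbl[%zu] = 0x%llx\n", i, (unsigned long long)pbl[i]);
	return 0;
}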


@ -150,10 +150,10 @@ struct ib_pd *bnxt_re_alloc_pd(struct ib_device *ibdev,
struct ib_udata *udata);
int bnxt_re_dealloc_pd(struct ib_pd *pd);
struct ib_ah *bnxt_re_create_ah(struct ib_pd *pd,
struct ib_ah_attr *ah_attr,
struct rdma_ah_attr *ah_attr,
struct ib_udata *udata);
int bnxt_re_modify_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr);
int bnxt_re_query_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr);
int bnxt_re_modify_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr);
int bnxt_re_query_ah(struct ib_ah *ah, struct rdma_ah_attr *ah_attr);
int bnxt_re_destroy_ah(struct ib_ah *ah);
struct ib_qp *bnxt_re_create_qp(struct ib_pd *pd,
struct ib_qp_init_attr *qp_init_attr,


@ -51,17 +51,18 @@ void cxio_dump_tpt(struct cxio_rdev *rdev, u32 stag)
m->mem_id = MEM_PMRX;
m->addr = (stag>>8) * 32 + rdev->rnic_info.tpt_base;
m->len = size;
PDBG("%s TPT addr 0x%x len %d\n", __func__, m->addr, m->len);
pr_debug("%s TPT addr 0x%x len %d\n", __func__, m->addr, m->len);
rc = rdev->t3cdev_p->ctl(rdev->t3cdev_p, RDMA_GET_MEM, m);
if (rc) {
PDBG("%s toectl returned error %d\n", __func__, rc);
pr_debug("%s toectl returned error %d\n", __func__, rc);
kfree(m);
return;
}
data = (u64 *)m->buf;
while (size > 0) {
PDBG("TPT %08x: %016llx\n", m->addr, (unsigned long long) *data);
pr_debug("TPT %08x: %016llx\n",
m->addr, (unsigned long long)*data);
size -= 8;
data++;
m->addr += 8;
@ -87,18 +88,19 @@ void cxio_dump_pbl(struct cxio_rdev *rdev, u32 pbl_addr, uint len, u8 shift)
m->mem_id = MEM_PMRX;
m->addr = pbl_addr;
m->len = size;
PDBG("%s PBL addr 0x%x len %d depth %d\n",
__func__, m->addr, m->len, npages);
pr_debug("%s PBL addr 0x%x len %d depth %d\n",
__func__, m->addr, m->len, npages);
rc = rdev->t3cdev_p->ctl(rdev->t3cdev_p, RDMA_GET_MEM, m);
if (rc) {
PDBG("%s toectl returned error %d\n", __func__, rc);
pr_debug("%s toectl returned error %d\n", __func__, rc);
kfree(m);
return;
}
data = (u64 *)m->buf;
while (size > 0) {
PDBG("PBL %08x: %016llx\n", m->addr, (unsigned long long) *data);
pr_debug("PBL %08x: %016llx\n",
m->addr, (unsigned long long)*data);
size -= 8;
data++;
m->addr += 8;
@ -114,8 +116,8 @@ void cxio_dump_wqe(union t3_wr *wqe)
if (size == 0)
size = 8;
while (size > 0) {
PDBG("WQE %p: %016llx\n", data,
(unsigned long long) be64_to_cpu(*data));
pr_debug("WQE %p: %016llx\n",
data, (unsigned long long)be64_to_cpu(*data));
size--;
data++;
}
@ -127,8 +129,8 @@ void cxio_dump_wce(struct t3_cqe *wce)
int size = sizeof(*wce);
while (size > 0) {
PDBG("WCE %p: %016llx\n", data,
(unsigned long long) be64_to_cpu(*data));
pr_debug("WCE %p: %016llx\n",
data, (unsigned long long)be64_to_cpu(*data));
size -= 8;
data++;
}
@ -148,17 +150,18 @@ void cxio_dump_rqt(struct cxio_rdev *rdev, u32 hwtid, int nents)
m->mem_id = MEM_PMRX;
m->addr = ((hwtid)<<10) + rdev->rnic_info.rqt_base;
m->len = size;
PDBG("%s RQT addr 0x%x len %d\n", __func__, m->addr, m->len);
pr_debug("%s RQT addr 0x%x len %d\n", __func__, m->addr, m->len);
rc = rdev->t3cdev_p->ctl(rdev->t3cdev_p, RDMA_GET_MEM, m);
if (rc) {
PDBG("%s toectl returned error %d\n", __func__, rc);
pr_debug("%s toectl returned error %d\n", __func__, rc);
kfree(m);
return;
}
data = (u64 *)m->buf;
while (size > 0) {
PDBG("RQT %08x: %016llx\n", m->addr, (unsigned long long) *data);
pr_debug("RQT %08x: %016llx\n",
m->addr, (unsigned long long)*data);
size -= 8;
data++;
m->addr += 8;
@ -180,10 +183,10 @@ void cxio_dump_tcb(struct cxio_rdev *rdev, u32 hwtid)
m->mem_id = MEM_CM;
m->addr = hwtid * size;
m->len = size;
PDBG("%s TCB %d len %d\n", __func__, m->addr, m->len);
pr_debug("%s TCB %d len %d\n", __func__, m->addr, m->len);
rc = rdev->t3cdev_p->ctl(rdev->t3cdev_p, RDMA_GET_MEM, m);
if (rc) {
PDBG("%s toectl returned error %d\n", __func__, rc);
pr_debug("%s toectl returned error %d\n", __func__, rc);
kfree(m);
return;
}


@ -110,8 +110,7 @@ int cxio_hal_cq_op(struct cxio_rdev *rdev_p, struct t3_cq *cq,
while (!CQ_VLD_ENTRY(rptr, cq->size_log2, cqe)) {
udelay(1);
if (i++ > 1000000) {
printk(KERN_ERR "%s: stalled rnic\n",
rdev_p->dev_name);
pr_err("%s: stalled rnic\n", rdev_p->dev_name);
return -EIO;
}
}
@ -140,7 +139,7 @@ static int cxio_hal_clear_qp_ctx(struct cxio_rdev *rdev_p, u32 qpid)
struct t3_modify_qp_wr *wqe;
struct sk_buff *skb = alloc_skb(sizeof(*wqe), GFP_KERNEL);
if (!skb) {
PDBG("%s alloc_skb failed\n", __func__);
pr_debug("%s alloc_skb failed\n", __func__);
return -ENOMEM;
}
wqe = (struct t3_modify_qp_wr *) skb_put(skb, sizeof(*wqe));
@ -230,7 +229,7 @@ static u32 get_qpid(struct cxio_rdev *rdev_p, struct cxio_ucontext *uctx)
}
out:
mutex_unlock(&uctx->lock);
PDBG("%s qpid 0x%x\n", __func__, qpid);
pr_debug("%s qpid 0x%x\n", __func__, qpid);
return qpid;
}
@ -242,7 +241,7 @@ static void put_qpid(struct cxio_rdev *rdev_p, u32 qpid,
entry = kmalloc(sizeof *entry, GFP_KERNEL);
if (!entry)
return;
PDBG("%s qpid 0x%x\n", __func__, qpid);
pr_debug("%s qpid 0x%x\n", __func__, qpid);
entry->qpid = qpid;
mutex_lock(&uctx->lock);
list_add_tail(&entry->entry, &uctx->qpids);
@ -306,8 +305,8 @@ int cxio_create_qp(struct cxio_rdev *rdev_p, u32 kernel_domain,
wq->udb = (u64)rdev_p->rnic_info.udbell_physbase +
(wq->qpid << rdev_p->qpshift);
wq->rdev = rdev_p;
PDBG("%s qpid 0x%x doorbell 0x%p udb 0x%llx\n", __func__,
wq->qpid, wq->doorbell, (unsigned long long) wq->udb);
pr_debug("%s qpid 0x%x doorbell 0x%p udb 0x%llx\n",
__func__, wq->qpid, wq->doorbell, (unsigned long long)wq->udb);
return 0;
err4:
kfree(wq->sq);
@ -351,8 +350,8 @@ static void insert_recv_cqe(struct t3_wq *wq, struct t3_cq *cq)
{
struct t3_cqe cqe;
PDBG("%s wq %p cq %p sw_rptr 0x%x sw_wptr 0x%x\n", __func__,
wq, cq, cq->sw_rptr, cq->sw_wptr);
pr_debug("%s wq %p cq %p sw_rptr 0x%x sw_wptr 0x%x\n", __func__,
wq, cq, cq->sw_rptr, cq->sw_wptr);
memset(&cqe, 0, sizeof(cqe));
cqe.header = cpu_to_be32(V_CQE_STATUS(TPT_ERR_SWFLUSH) |
V_CQE_OPCODE(T3_SEND) |
@ -370,11 +369,11 @@ int cxio_flush_rq(struct t3_wq *wq, struct t3_cq *cq, int count)
u32 ptr;
int flushed = 0;
PDBG("%s wq %p cq %p\n", __func__, wq, cq);
pr_debug("%s wq %p cq %p\n", __func__, wq, cq);
/* flush RQ */
PDBG("%s rq_rptr %u rq_wptr %u skip count %u\n", __func__,
wq->rq_rptr, wq->rq_wptr, count);
pr_debug("%s rq_rptr %u rq_wptr %u skip count %u\n", __func__,
wq->rq_rptr, wq->rq_wptr, count);
ptr = wq->rq_rptr + count;
while (ptr++ != wq->rq_wptr) {
insert_recv_cqe(wq, cq);
@ -388,8 +387,8 @@ static void insert_sq_cqe(struct t3_wq *wq, struct t3_cq *cq,
{
struct t3_cqe cqe;
PDBG("%s wq %p cq %p sw_rptr 0x%x sw_wptr 0x%x\n", __func__,
wq, cq, cq->sw_rptr, cq->sw_wptr);
pr_debug("%s wq %p cq %p sw_rptr 0x%x sw_wptr 0x%x\n", __func__,
wq, cq, cq->sw_rptr, cq->sw_wptr);
memset(&cqe, 0, sizeof(cqe));
cqe.header = cpu_to_be32(V_CQE_STATUS(TPT_ERR_SWFLUSH) |
V_CQE_OPCODE(sqp->opcode) |
@ -429,11 +428,11 @@ void cxio_flush_hw_cq(struct t3_cq *cq)
{
struct t3_cqe *cqe, *swcqe;
PDBG("%s cq %p cqid 0x%x\n", __func__, cq, cq->cqid);
pr_debug("%s cq %p cqid 0x%x\n", __func__, cq, cq->cqid);
cqe = cxio_next_hw_cqe(cq);
while (cqe) {
PDBG("%s flushing hwcq rptr 0x%x to swcq wptr 0x%x\n",
__func__, cq->rptr, cq->sw_wptr);
pr_debug("%s flushing hwcq rptr 0x%x to swcq wptr 0x%x\n",
__func__, cq->rptr, cq->sw_wptr);
swcqe = cq->sw_queue + Q_PTR2IDX(cq->sw_wptr, cq->size_log2);
*swcqe = *cqe;
swcqe->header |= cpu_to_be32(V_CQE_SWCQE(1));
@ -476,7 +475,7 @@ void cxio_count_scqes(struct t3_cq *cq, struct t3_wq *wq, int *count)
(*count)++;
ptr++;
}
PDBG("%s cq %p count %d\n", __func__, cq, *count);
pr_debug("%s cq %p count %d\n", __func__, cq, *count);
}
void cxio_count_rcqes(struct t3_cq *cq, struct t3_wq *wq, int *count)
@ -485,7 +484,7 @@ void cxio_count_rcqes(struct t3_cq *cq, struct t3_wq *wq, int *count)
u32 ptr;
*count = 0;
PDBG("%s count zero %d\n", __func__, *count);
pr_debug("%s count zero %d\n", __func__, *count);
ptr = cq->sw_rptr;
while (!Q_EMPTY(ptr, cq->sw_wptr)) {
cqe = cq->sw_queue + (Q_PTR2IDX(ptr, cq->size_log2));
@ -494,7 +493,7 @@ void cxio_count_rcqes(struct t3_cq *cq, struct t3_wq *wq, int *count)
(*count)++;
ptr++;
}
PDBG("%s cq %p count %d\n", __func__, cq, *count);
pr_debug("%s cq %p count %d\n", __func__, cq, *count);
}
static int cxio_hal_init_ctrl_cq(struct cxio_rdev *rdev_p)
@ -521,12 +520,12 @@ static int cxio_hal_init_ctrl_qp(struct cxio_rdev *rdev_p)
skb = alloc_skb(sizeof(*wqe), GFP_KERNEL);
if (!skb) {
PDBG("%s alloc_skb failed\n", __func__);
pr_debug("%s alloc_skb failed\n", __func__);
return -ENOMEM;
}
err = cxio_hal_init_ctrl_cq(rdev_p);
if (err) {
PDBG("%s err %d initializing ctrl_cq\n", __func__, err);
pr_debug("%s err %d initializing ctrl_cq\n", __func__, err);
goto err;
}
rdev_p->ctrl_qp.workq = dma_alloc_coherent(
@ -536,7 +535,7 @@ static int cxio_hal_init_ctrl_qp(struct cxio_rdev *rdev_p)
&(rdev_p->ctrl_qp.dma_addr),
GFP_KERNEL);
if (!rdev_p->ctrl_qp.workq) {
PDBG("%s dma_alloc_coherent failed\n", __func__);
pr_debug("%s dma_alloc_coherent failed\n", __func__);
err = -ENOMEM;
goto err;
}
@ -571,9 +570,9 @@ static int cxio_hal_init_ctrl_qp(struct cxio_rdev *rdev_p)
wqe->sge_cmd = cpu_to_be64(sge_cmd);
wqe->ctx1 = cpu_to_be64(ctx1);
wqe->ctx0 = cpu_to_be64(ctx0);
PDBG("CtrlQP dma_addr 0x%llx workq %p size %d\n",
(unsigned long long) rdev_p->ctrl_qp.dma_addr,
rdev_p->ctrl_qp.workq, 1 << T3_CTRL_QP_SIZE_LOG2);
pr_debug("CtrlQP dma_addr 0x%llx workq %p size %d\n",
(unsigned long long)rdev_p->ctrl_qp.dma_addr,
rdev_p->ctrl_qp.workq, 1 << T3_CTRL_QP_SIZE_LOG2);
skb->priority = CPL_PRIORITY_CONTROL;
return iwch_cxgb3_ofld_send(rdev_p->t3cdev_p, skb);
err:
@ -605,26 +604,26 @@ static int cxio_hal_ctrl_qp_write_mem(struct cxio_rdev *rdev_p, u32 addr,
u64 utx_cmd;
addr &= 0x7FFFFFF;
nr_wqe = len % 96 ? len / 96 + 1 : len / 96; /* 96B max per WQE */
PDBG("%s wptr 0x%x rptr 0x%x len %d, nr_wqe %d data %p addr 0x%0x\n",
__func__, rdev_p->ctrl_qp.wptr, rdev_p->ctrl_qp.rptr, len,
nr_wqe, data, addr);
pr_debug("%s wptr 0x%x rptr 0x%x len %d, nr_wqe %d data %p addr 0x%0x\n",
__func__, rdev_p->ctrl_qp.wptr, rdev_p->ctrl_qp.rptr, len,
nr_wqe, data, addr);
utx_len = 3; /* in 32B unit */
for (i = 0; i < nr_wqe; i++) {
if (Q_FULL(rdev_p->ctrl_qp.rptr, rdev_p->ctrl_qp.wptr,
T3_CTRL_QP_SIZE_LOG2)) {
PDBG("%s ctrl_qp full wtpr 0x%0x rptr 0x%0x, "
"wait for more space i %d\n", __func__,
rdev_p->ctrl_qp.wptr, rdev_p->ctrl_qp.rptr, i);
pr_debug("%s ctrl_qp full wtpr 0x%0x rptr 0x%0x, wait for more space i %d\n",
__func__,
rdev_p->ctrl_qp.wptr, rdev_p->ctrl_qp.rptr, i);
if (wait_event_interruptible(rdev_p->ctrl_qp.waitq,
!Q_FULL(rdev_p->ctrl_qp.rptr,
rdev_p->ctrl_qp.wptr,
T3_CTRL_QP_SIZE_LOG2))) {
PDBG("%s ctrl_qp workq interrupted\n",
__func__);
pr_debug("%s ctrl_qp workq interrupted\n",
__func__);
return -ERESTARTSYS;
}
PDBG("%s ctrl_qp wakeup, continue posting work request "
"i %d\n", __func__, i);
pr_debug("%s ctrl_qp wakeup, continue posting work request i %d\n",
__func__, i);
}
wqe = (__be64 *)(rdev_p->ctrl_qp.workq + (rdev_p->ctrl_qp.wptr %
(1 << T3_CTRL_QP_SIZE_LOG2)));
@ -645,7 +644,7 @@ static int cxio_hal_ctrl_qp_write_mem(struct cxio_rdev *rdev_p, u32 addr,
if ((i != 0) &&
(i % (((1 << T3_CTRL_QP_SIZE_LOG2)) >> 1) == 0)) {
flag = T3_COMPLETION_FLAG;
PDBG("%s force completion at i %d\n", __func__, i);
pr_debug("%s force completion at i %d\n", __func__, i);
}
/* build the utx mem command */
@ -717,8 +716,8 @@ static int __cxio_tpt_op(struct cxio_rdev *rdev_p, u32 reset_tpt_entry,
return -ENOMEM;
*stag = (stag_idx << 8) | ((*stag) & 0xFF);
}
PDBG("%s stag_state 0x%0x type 0x%0x pdid 0x%0x, stag_idx 0x%x\n",
__func__, stag_state, type, pdid, stag_idx);
pr_debug("%s stag_state 0x%0x type 0x%0x pdid 0x%0x, stag_idx 0x%x\n",
__func__, stag_state, type, pdid, stag_idx);
mutex_lock(&rdev_p->ctrl_qp.lock);
@ -767,9 +766,9 @@ int cxio_write_pbl(struct cxio_rdev *rdev_p, __be64 *pbl,
u32 wptr;
int err;
PDBG("%s *pdb_addr 0x%x, pbl_base 0x%x, pbl_size %d\n",
__func__, pbl_addr, rdev_p->rnic_info.pbl_base,
pbl_size);
pr_debug("%s *pdb_addr 0x%x, pbl_base 0x%x, pbl_size %d\n",
__func__, pbl_addr, rdev_p->rnic_info.pbl_base,
pbl_size);
mutex_lock(&rdev_p->ctrl_qp.lock);
err = cxio_hal_ctrl_qp_write_mem(rdev_p, pbl_addr >> 5, pbl_size << 3,
@ -837,7 +836,7 @@ int cxio_rdma_init(struct cxio_rdev *rdev_p, struct t3_rdma_init_attr *attr)
struct sk_buff *skb = alloc_skb(sizeof(*wqe), GFP_ATOMIC);
if (!skb)
return -ENOMEM;
PDBG("%s rdev_p %p\n", __func__, rdev_p);
pr_debug("%s rdev_p %p\n", __func__, rdev_p);
wqe = (struct t3_rdma_init_wr *) __skb_put(skb, sizeof(*wqe));
wqe->wrh.op_seop_flags = cpu_to_be32(V_FW_RIWR_OP(T3_WR_INIT));
wqe->wrh.gen_tid_len = cpu_to_be32(V_FW_RIWR_TID(attr->tid) |
@ -880,22 +879,20 @@ static int cxio_hal_ev_handler(struct t3cdev *t3cdev_p, struct sk_buff *skb)
static int cnt;
struct cxio_rdev *rdev_p = NULL;
struct respQ_msg_t *rsp_msg = (struct respQ_msg_t *) skb->data;
PDBG("%d: %s cq_id 0x%x cq_ptr 0x%x genbit %0x overflow %0x an %0x"
" se %0x notify %0x cqbranch %0x creditth %0x\n",
cnt, __func__, RSPQ_CQID(rsp_msg), RSPQ_CQPTR(rsp_msg),
RSPQ_GENBIT(rsp_msg), RSPQ_OVERFLOW(rsp_msg), RSPQ_AN(rsp_msg),
RSPQ_SE(rsp_msg), RSPQ_NOTIFY(rsp_msg), RSPQ_CQBRANCH(rsp_msg),
RSPQ_CREDIT_THRESH(rsp_msg));
PDBG("CQE: QPID 0x%0x genbit %0x type 0x%0x status 0x%0x opcode %d "
"len 0x%0x wrid_hi_stag 0x%x wrid_low_msn 0x%x\n",
CQE_QPID(rsp_msg->cqe), CQE_GENBIT(rsp_msg->cqe),
CQE_TYPE(rsp_msg->cqe), CQE_STATUS(rsp_msg->cqe),
CQE_OPCODE(rsp_msg->cqe), CQE_LEN(rsp_msg->cqe),
CQE_WRID_HI(rsp_msg->cqe), CQE_WRID_LOW(rsp_msg->cqe));
pr_debug("%d: %s cq_id 0x%x cq_ptr 0x%x genbit %0x overflow %0x an %0x se %0x notify %0x cqbranch %0x creditth %0x\n",
cnt, __func__, RSPQ_CQID(rsp_msg), RSPQ_CQPTR(rsp_msg),
RSPQ_GENBIT(rsp_msg), RSPQ_OVERFLOW(rsp_msg), RSPQ_AN(rsp_msg),
RSPQ_SE(rsp_msg), RSPQ_NOTIFY(rsp_msg), RSPQ_CQBRANCH(rsp_msg),
RSPQ_CREDIT_THRESH(rsp_msg));
pr_debug("CQE: QPID 0x%0x genbit %0x type 0x%0x status 0x%0x opcode %d len 0x%0x wrid_hi_stag 0x%x wrid_low_msn 0x%x\n",
CQE_QPID(rsp_msg->cqe), CQE_GENBIT(rsp_msg->cqe),
CQE_TYPE(rsp_msg->cqe), CQE_STATUS(rsp_msg->cqe),
CQE_OPCODE(rsp_msg->cqe), CQE_LEN(rsp_msg->cqe),
CQE_WRID_HI(rsp_msg->cqe), CQE_WRID_LOW(rsp_msg->cqe));
rdev_p = (struct cxio_rdev *)t3cdev_p->ulp;
if (!rdev_p) {
PDBG("%s called by t3cdev %p with null ulp\n", __func__,
t3cdev_p);
pr_debug("%s called by t3cdev %p with null ulp\n", __func__,
t3cdev_p);
return 0;
}
if (CQE_QPID(rsp_msg->cqe) == T3_CTRL_QP_ID) {
@ -934,13 +931,13 @@ int cxio_rdev_open(struct cxio_rdev *rdev_p)
strncpy(rdev_p->dev_name, rdev_p->t3cdev_p->name,
T3_MAX_DEV_NAME_LEN);
} else {
PDBG("%s t3cdev_p or dev_name must be set\n", __func__);
pr_debug("%s t3cdev_p or dev_name must be set\n", __func__);
return -EINVAL;
}
list_add_tail(&rdev_p->entry, &rdev_list);
PDBG("%s opening rnic dev %s\n", __func__, rdev_p->dev_name);
pr_debug("%s opening rnic dev %s\n", __func__, rdev_p->dev_name);
memset(&rdev_p->ctrl_qp, 0, sizeof(rdev_p->ctrl_qp));
if (!rdev_p->t3cdev_p)
rdev_p->t3cdev_p = dev2t3cdev(netdev_p);
@ -949,13 +946,12 @@ int cxio_rdev_open(struct cxio_rdev *rdev_p)
err = rdev_p->t3cdev_p->ctl(rdev_p->t3cdev_p, GET_EMBEDDED_INFO,
&(rdev_p->fw_info));
if (err) {
printk(KERN_ERR "%s t3cdev_p(%p)->ctl returned error %d.\n",
__func__, rdev_p->t3cdev_p, err);
pr_err("%s t3cdev_p(%p)->ctl returned error %d\n",
__func__, rdev_p->t3cdev_p, err);
goto err1;
}
if (G_FW_VERSION_MAJOR(rdev_p->fw_info.fw_vers) != CXIO_FW_MAJ) {
printk(KERN_ERR MOD "fatal firmware version mismatch: "
"need version %u but adapter has version %u\n",
pr_err("fatal firmware version mismatch: need version %u but adapter has version %u\n",
CXIO_FW_MAJ,
G_FW_VERSION_MAJOR(rdev_p->fw_info.fw_vers));
err = -EINVAL;
@ -965,15 +961,15 @@ int cxio_rdev_open(struct cxio_rdev *rdev_p)
err = rdev_p->t3cdev_p->ctl(rdev_p->t3cdev_p, RDMA_GET_PARAMS,
&(rdev_p->rnic_info));
if (err) {
printk(KERN_ERR "%s t3cdev_p(%p)->ctl returned error %d.\n",
__func__, rdev_p->t3cdev_p, err);
pr_err("%s t3cdev_p(%p)->ctl returned error %d\n",
__func__, rdev_p->t3cdev_p, err);
goto err1;
}
err = rdev_p->t3cdev_p->ctl(rdev_p->t3cdev_p, GET_PORTS,
&(rdev_p->port_info));
if (err) {
printk(KERN_ERR "%s t3cdev_p(%p)->ctl returned error %d.\n",
__func__, rdev_p->t3cdev_p, err);
pr_err("%s t3cdev_p(%p)->ctl returned error %d\n",
__func__, rdev_p->t3cdev_p, err);
goto err1;
}
@ -988,42 +984,39 @@ int cxio_rdev_open(struct cxio_rdev *rdev_p)
PAGE_SHIFT));
rdev_p->qpnr = rdev_p->rnic_info.udbell_len >> PAGE_SHIFT;
rdev_p->qpmask = (65536 >> ilog2(rdev_p->qpnr)) - 1;
PDBG("%s rnic %s info: tpt_base 0x%0x tpt_top 0x%0x num stags %d "
"pbl_base 0x%0x pbl_top 0x%0x rqt_base 0x%0x, rqt_top 0x%0x\n",
__func__, rdev_p->dev_name, rdev_p->rnic_info.tpt_base,
rdev_p->rnic_info.tpt_top, cxio_num_stags(rdev_p),
rdev_p->rnic_info.pbl_base,
rdev_p->rnic_info.pbl_top, rdev_p->rnic_info.rqt_base,
rdev_p->rnic_info.rqt_top);
PDBG("udbell_len 0x%0x udbell_physbase 0x%lx kdb_addr %p qpshift %lu "
"qpnr %d qpmask 0x%x\n",
rdev_p->rnic_info.udbell_len,
rdev_p->rnic_info.udbell_physbase, rdev_p->rnic_info.kdb_addr,
rdev_p->qpshift, rdev_p->qpnr, rdev_p->qpmask);
pr_debug("%s rnic %s info: tpt_base 0x%0x tpt_top 0x%0x num stags %d pbl_base 0x%0x pbl_top 0x%0x rqt_base 0x%0x, rqt_top 0x%0x\n",
__func__, rdev_p->dev_name, rdev_p->rnic_info.tpt_base,
rdev_p->rnic_info.tpt_top, cxio_num_stags(rdev_p),
rdev_p->rnic_info.pbl_base,
rdev_p->rnic_info.pbl_top, rdev_p->rnic_info.rqt_base,
rdev_p->rnic_info.rqt_top);
pr_debug("udbell_len 0x%0x udbell_physbase 0x%lx kdb_addr %p qpshift %lu qpnr %d qpmask 0x%x\n",
rdev_p->rnic_info.udbell_len,
rdev_p->rnic_info.udbell_physbase, rdev_p->rnic_info.kdb_addr,
rdev_p->qpshift, rdev_p->qpnr, rdev_p->qpmask);
err = cxio_hal_init_ctrl_qp(rdev_p);
if (err) {
printk(KERN_ERR "%s error %d initializing ctrl_qp.\n",
__func__, err);
pr_err("%s error %d initializing ctrl_qp\n", __func__, err);
goto err1;
}
err = cxio_hal_init_resource(rdev_p, cxio_num_stags(rdev_p), 0,
0, T3_MAX_NUM_QP, T3_MAX_NUM_CQ,
T3_MAX_NUM_PD);
if (err) {
printk(KERN_ERR "%s error %d initializing hal resources.\n",
pr_err("%s error %d initializing hal resources\n",
__func__, err);
goto err2;
}
err = cxio_hal_pblpool_create(rdev_p);
if (err) {
printk(KERN_ERR "%s error %d initializing pbl mem pool.\n",
pr_err("%s error %d initializing pbl mem pool\n",
__func__, err);
goto err3;
}
err = cxio_hal_rqtpool_create(rdev_p);
if (err) {
printk(KERN_ERR "%s error %d initializing rqt mem pool.\n",
pr_err("%s error %d initializing rqt mem pool\n",
__func__, err);
goto err4;
}
@ -1086,9 +1079,9 @@ static void flush_completed_wrs(struct t3_wq *wq, struct t3_cq *cq)
/*
* Insert this completed cqe into the swcq.
*/
PDBG("%s moving cqe into swcq sq idx %ld cq idx %ld\n",
__func__, Q_PTR2IDX(ptr, wq->sq_size_log2),
Q_PTR2IDX(cq->sw_wptr, cq->size_log2));
pr_debug("%s moving cqe into swcq sq idx %ld cq idx %ld\n",
__func__, Q_PTR2IDX(ptr, wq->sq_size_log2),
Q_PTR2IDX(cq->sw_wptr, cq->size_log2));
sqp->cqe.header |= htonl(V_CQE_SWCQE(1));
*(cq->sw_queue + Q_PTR2IDX(cq->sw_wptr, cq->size_log2))
= sqp->cqe;
@ -1154,12 +1147,11 @@ int cxio_poll_cq(struct t3_wq *wq, struct t3_cq *cq, struct t3_cqe *cqe,
*credit = 0;
hw_cqe = cxio_next_cqe(cq);
PDBG("%s CQE OOO %d qpid 0x%0x genbit %d type %d status 0x%0x"
" opcode 0x%0x len 0x%0x wrid_hi_stag 0x%x wrid_low_msn 0x%x\n",
__func__, CQE_OOO(*hw_cqe), CQE_QPID(*hw_cqe),
CQE_GENBIT(*hw_cqe), CQE_TYPE(*hw_cqe), CQE_STATUS(*hw_cqe),
CQE_OPCODE(*hw_cqe), CQE_LEN(*hw_cqe), CQE_WRID_HI(*hw_cqe),
CQE_WRID_LOW(*hw_cqe));
pr_debug("%s CQE OOO %d qpid 0x%0x genbit %d type %d status 0x%0x opcode 0x%0x len 0x%0x wrid_hi_stag 0x%x wrid_low_msn 0x%x\n",
__func__, CQE_OOO(*hw_cqe), CQE_QPID(*hw_cqe),
CQE_GENBIT(*hw_cqe), CQE_TYPE(*hw_cqe), CQE_STATUS(*hw_cqe),
CQE_OPCODE(*hw_cqe), CQE_LEN(*hw_cqe), CQE_WRID_HI(*hw_cqe),
CQE_WRID_LOW(*hw_cqe));
/*
* skip cqe's not affiliated with a QP.
@ -1278,9 +1270,10 @@ int cxio_poll_cq(struct t3_wq *wq, struct t3_cq *cq, struct t3_cqe *cqe,
if (!SW_CQE(*hw_cqe) && (CQE_WRID_SQ_WPTR(*hw_cqe) != wq->sq_rptr)) {
struct t3_swsq *sqp;
PDBG("%s out of order completion going in swsq at idx %ld\n",
__func__,
Q_PTR2IDX(CQE_WRID_SQ_WPTR(*hw_cqe), wq->sq_size_log2));
pr_debug("%s out of order completion going in swsq at idx %ld\n",
__func__,
Q_PTR2IDX(CQE_WRID_SQ_WPTR(*hw_cqe),
wq->sq_size_log2));
sqp = wq->sq +
Q_PTR2IDX(CQE_WRID_SQ_WPTR(*hw_cqe), wq->sq_size_log2);
sqp->cqe = *hw_cqe;
@ -1298,13 +1291,13 @@ int cxio_poll_cq(struct t3_wq *wq, struct t3_cq *cq, struct t3_cqe *cqe,
*/
if (SQ_TYPE(*hw_cqe)) {
wq->sq_rptr = CQE_WRID_SQ_WPTR(*hw_cqe);
PDBG("%s completing sq idx %ld\n", __func__,
Q_PTR2IDX(wq->sq_rptr, wq->sq_size_log2));
pr_debug("%s completing sq idx %ld\n", __func__,
Q_PTR2IDX(wq->sq_rptr, wq->sq_size_log2));
*cookie = wq->sq[Q_PTR2IDX(wq->sq_rptr, wq->sq_size_log2)].wr_id;
wq->sq_rptr++;
} else {
PDBG("%s completing rq idx %ld\n", __func__,
Q_PTR2IDX(wq->rq_rptr, wq->rq_size_log2));
pr_debug("%s completing rq idx %ld\n", __func__,
Q_PTR2IDX(wq->rq_rptr, wq->rq_size_log2));
*cookie = wq->rq[Q_PTR2IDX(wq->rq_rptr, wq->rq_size_log2)].wr_id;
if (wq->rq[Q_PTR2IDX(wq->rq_rptr, wq->rq_size_log2)].pbl_addr)
cxio_hal_pblpool_free(wq->rdev,
@ -1322,12 +1315,12 @@ int cxio_poll_cq(struct t3_wq *wq, struct t3_cq *cq, struct t3_cqe *cqe,
skip_cqe:
if (SW_CQE(*hw_cqe)) {
PDBG("%s cq %p cqid 0x%x skip sw cqe sw_rptr 0x%x\n",
__func__, cq, cq->cqid, cq->sw_rptr);
pr_debug("%s cq %p cqid 0x%x skip sw cqe sw_rptr 0x%x\n",
__func__, cq, cq->cqid, cq->sw_rptr);
++cq->sw_rptr;
} else {
PDBG("%s cq %p cqid 0x%x skip hw cqe rptr 0x%x\n",
__func__, cq, cq->cqid, cq->rptr);
pr_debug("%s cq %p cqid 0x%x skip hw cqe rptr 0x%x\n",
__func__, cq, cq->cqid, cq->rptr);
++cq->rptr;
/*


@ -196,8 +196,11 @@ int cxio_poll_cq(struct t3_wq *wq, struct t3_cq *cq, struct t3_cqe *cqe,
u8 *cqe_flushed, u64 *cookie, u32 *credit);
int iwch_cxgb3_ofld_send(struct t3cdev *tdev, struct sk_buff *skb);
#define MOD "iw_cxgb3: "
#define PDBG(fmt, args...) pr_debug(MOD fmt, ## args)
#ifdef pr_fmt
#undef pr_fmt
#endif
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#ifdef DEBUG
void cxio_dump_tpt(struct cxio_rdev *rev, u32 stag);
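
With the MOD prefix and the PDBG() wrapper removed, the header now defines pr_fmt() so that every pr_debug()/pr_err()/pr_info() call in iw_cxgb3 is automatically prefixed with the module name and no call site needs to carry the prefix string itself. A tiny userspace approximation of that pattern, using printf() in place of the kernel logging helpers, is sketched below for illustration.

#include <stdio.h>

#define MODNAME		"iw_cxgb3"
#define pr_fmt(fmt)	MODNAME ": " fmt
/* Userspace stand-in for the kernel's pr_info(); illustration only. */
#define pr_info(fmt, ...)	printf(pr_fmt(fmt), ##__VA_ARGS__)

int main(void)
{
	pr_info("qpid 0x%x\n", 0x41);	/* prints: iw_cxgb3: qpid 0x41 */
	return 0;
}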


@ -209,13 +209,13 @@ u32 cxio_hal_get_qpid(struct cxio_hal_resource *rscp)
{
u32 qpid = cxio_hal_get_resource(&rscp->qpid_fifo,
&rscp->qpid_fifo_lock);
PDBG("%s qpid 0x%x\n", __func__, qpid);
pr_debug("%s qpid 0x%x\n", __func__, qpid);
return qpid;
}
void cxio_hal_put_qpid(struct cxio_hal_resource *rscp, u32 qpid)
{
PDBG("%s qpid 0x%x\n", __func__, qpid);
pr_debug("%s qpid 0x%x\n", __func__, qpid);
cxio_hal_put_resource(&rscp->qpid_fifo, &rscp->qpid_fifo_lock, qpid);
}
@ -257,13 +257,13 @@ void cxio_hal_destroy_resource(struct cxio_hal_resource *rscp)
u32 cxio_hal_pblpool_alloc(struct cxio_rdev *rdev_p, int size)
{
unsigned long addr = gen_pool_alloc(rdev_p->pbl_pool, size);
PDBG("%s addr 0x%x size %d\n", __func__, (u32)addr, size);
pr_debug("%s addr 0x%x size %d\n", __func__, (u32)addr, size);
return (u32)addr;
}
void cxio_hal_pblpool_free(struct cxio_rdev *rdev_p, u32 addr, int size)
{
PDBG("%s addr 0x%x size %d\n", __func__, addr, size);
pr_debug("%s addr 0x%x size %d\n", __func__, addr, size);
gen_pool_free(rdev_p->pbl_pool, (unsigned long)addr, size);
}
@ -282,17 +282,18 @@ int cxio_hal_pblpool_create(struct cxio_rdev *rdev_p)
pbl_chunk = min(rdev_p->rnic_info.pbl_top - pbl_start + 1,
pbl_chunk);
if (gen_pool_add(rdev_p->pbl_pool, pbl_start, pbl_chunk, -1)) {
PDBG("%s failed to add PBL chunk (%x/%x)\n",
__func__, pbl_start, pbl_chunk);
pr_debug("%s failed to add PBL chunk (%x/%x)\n",
__func__, pbl_start, pbl_chunk);
if (pbl_chunk <= 1024 << MIN_PBL_SHIFT) {
printk(KERN_WARNING MOD "%s: Failed to add all PBL chunks (%x/%x)\n",
__func__, pbl_start, rdev_p->rnic_info.pbl_top - pbl_start);
pr_warn("%s: Failed to add all PBL chunks (%x/%x)\n",
__func__, pbl_start,
rdev_p->rnic_info.pbl_top - pbl_start);
return 0;
}
pbl_chunk >>= 1;
} else {
PDBG("%s added PBL chunk (%x/%x)\n",
__func__, pbl_start, pbl_chunk);
pr_debug("%s added PBL chunk (%x/%x)\n",
__func__, pbl_start, pbl_chunk);
pbl_start += pbl_chunk;
}
}
@ -315,13 +316,13 @@ void cxio_hal_pblpool_destroy(struct cxio_rdev *rdev_p)
u32 cxio_hal_rqtpool_alloc(struct cxio_rdev *rdev_p, int size)
{
unsigned long addr = gen_pool_alloc(rdev_p->rqt_pool, size << 6);
PDBG("%s addr 0x%x size %d\n", __func__, (u32)addr, size << 6);
pr_debug("%s addr 0x%x size %d\n", __func__, (u32)addr, size << 6);
return (u32)addr;
}
void cxio_hal_rqtpool_free(struct cxio_rdev *rdev_p, u32 addr, int size)
{
PDBG("%s addr 0x%x size %d\n", __func__, addr, size << 6);
pr_debug("%s addr 0x%x size %d\n", __func__, addr, size << 6);
gen_pool_free(rdev_p->rqt_pool, (unsigned long)addr, size << 6);
}


@ -105,7 +105,7 @@ static void iwch_db_drop_task(struct work_struct *work)
static void rnic_init(struct iwch_dev *rnicp)
{
PDBG("%s iwch_dev %p\n", __func__, rnicp);
pr_debug("%s iwch_dev %p\n", __func__, rnicp);
idr_init(&rnicp->cqidr);
idr_init(&rnicp->qpidr);
idr_init(&rnicp->mmidr);
@ -145,12 +145,11 @@ static void open_rnic_dev(struct t3cdev *tdev)
{
struct iwch_dev *rnicp;
PDBG("%s t3cdev %p\n", __func__, tdev);
printk_once(KERN_INFO MOD "Chelsio T3 RDMA Driver - version %s\n",
DRV_VERSION);
pr_debug("%s t3cdev %p\n", __func__, tdev);
pr_info_once("Chelsio T3 RDMA Driver - version %s\n", DRV_VERSION);
rnicp = (struct iwch_dev *)ib_alloc_device(sizeof(*rnicp));
if (!rnicp) {
printk(KERN_ERR MOD "Cannot allocate ib device\n");
pr_err("Cannot allocate ib device\n");
return;
}
rnicp->rdev.ulp = rnicp;
@ -160,7 +159,7 @@ static void open_rnic_dev(struct t3cdev *tdev)
if (cxio_rdev_open(&rnicp->rdev)) {
mutex_unlock(&dev_mutex);
printk(KERN_ERR MOD "Unable to open CXIO rdev\n");
pr_err("Unable to open CXIO rdev\n");
ib_dealloc_device(&rnicp->ibdev);
return;
}
@ -171,18 +170,18 @@ static void open_rnic_dev(struct t3cdev *tdev)
mutex_unlock(&dev_mutex);
if (iwch_register_device(rnicp)) {
printk(KERN_ERR MOD "Unable to register device\n");
pr_err("Unable to register device\n");
close_rnic_dev(tdev);
}
printk(KERN_INFO MOD "Initialized device %s\n",
pci_name(rnicp->rdev.rnic_info.pdev));
pr_info("Initialized device %s\n",
pci_name(rnicp->rdev.rnic_info.pdev));
return;
}
static void close_rnic_dev(struct t3cdev *tdev)
{
struct iwch_dev *dev, *tmp;
PDBG("%s t3cdev %p\n", __func__, tdev);
pr_debug("%s t3cdev %p\n", __func__, tdev);
mutex_lock(&dev_mutex);
list_for_each_entry_safe(dev, tmp, &dev_list, entry) {
if (dev->rdev.t3cdev_p == tdev) {


@ -112,9 +112,9 @@ static void connect_reply_upcall(struct iwch_ep *ep, int status);
static void start_ep_timer(struct iwch_ep *ep)
{
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
if (timer_pending(&ep->timer)) {
PDBG("%s stopped / restarted timer ep %p\n", __func__, ep);
pr_debug("%s stopped / restarted timer ep %p\n", __func__, ep);
del_timer_sync(&ep->timer);
} else
get_ep(&ep->com);
@ -126,7 +126,7 @@ static void start_ep_timer(struct iwch_ep *ep)
static void stop_ep_timer(struct iwch_ep *ep)
{
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
if (!timer_pending(&ep->timer)) {
WARN(1, "%s timer stopped when its not running! ep %p state %u\n",
__func__, ep, ep->com.state);
@ -227,13 +227,13 @@ int iwch_resume_tid(struct iwch_ep *ep)
static void set_emss(struct iwch_ep *ep, u16 opt)
{
PDBG("%s ep %p opt %u\n", __func__, ep, opt);
pr_debug("%s ep %p opt %u\n", __func__, ep, opt);
ep->emss = T3C_DATA(ep->com.tdev)->mtus[G_TCPOPT_MSS(opt)] - 40;
if (G_TCPOPT_TSTAMP(opt))
ep->emss -= 12;
if (ep->emss < 128)
ep->emss = 128;
PDBG("emss=%d\n", ep->emss);
pr_debug("emss=%d\n", ep->emss);
}
static enum iwch_ep_state state_read(struct iwch_ep_common *epc)
@ -257,7 +257,7 @@ static void state_set(struct iwch_ep_common *epc, enum iwch_ep_state new)
unsigned long flags;
spin_lock_irqsave(&epc->lock, flags);
PDBG("%s - %s -> %s\n", __func__, states[epc->state], states[new]);
pr_debug("%s - %s -> %s\n", __func__, states[epc->state], states[new]);
__state_set(epc, new);
spin_unlock_irqrestore(&epc->lock, flags);
return;
@ -273,7 +273,7 @@ static void *alloc_ep(int size, gfp_t gfp)
spin_lock_init(&epc->lock);
init_waitqueue_head(&epc->waitq);
}
PDBG("%s alloc ep %p\n", __func__, epc);
pr_debug("%s alloc ep %p\n", __func__, epc);
return epc;
}
@ -282,7 +282,8 @@ void __free_ep(struct kref *kref)
struct iwch_ep *ep;
ep = container_of(container_of(kref, struct iwch_ep_common, kref),
struct iwch_ep, com);
PDBG("%s ep %p state %s\n", __func__, ep, states[state_read(&ep->com)]);
pr_debug("%s ep %p state %s\n",
__func__, ep, states[state_read(&ep->com)]);
if (test_bit(RELEASE_RESOURCES, &ep->com.flags)) {
cxgb3_remove_tid(ep->com.tdev, (void *)ep, ep->hwtid);
dst_release(ep->dst);
@ -293,7 +294,7 @@ void __free_ep(struct kref *kref)
static void release_ep_resources(struct iwch_ep *ep)
{
PDBG("%s ep %p tid %d\n", __func__, ep, ep->hwtid);
pr_debug("%s ep %p tid %d\n", __func__, ep, ep->hwtid);
set_bit(RELEASE_RESOURCES, &ep->com.flags);
put_ep(&ep->com);
}
@ -358,7 +359,7 @@ static unsigned int find_best_mtu(const struct t3c_data *d, unsigned short mtu)
static void arp_failure_discard(struct t3cdev *dev, struct sk_buff *skb)
{
PDBG("%s t3cdev %p\n", __func__, dev);
pr_debug("%s t3cdev %p\n", __func__, dev);
kfree_skb(skb);
}
@ -367,7 +368,7 @@ static void arp_failure_discard(struct t3cdev *dev, struct sk_buff *skb)
*/
static void act_open_req_arp_failure(struct t3cdev *dev, struct sk_buff *skb)
{
printk(KERN_ERR MOD "ARP failure during connect\n");
pr_err("ARP failure during connect\n");
kfree_skb(skb);
}
@ -379,7 +380,7 @@ static void abort_arp_failure(struct t3cdev *dev, struct sk_buff *skb)
{
struct cpl_abort_req *req = cplhdr(skb);
PDBG("%s t3cdev %p\n", __func__, dev);
pr_debug("%s t3cdev %p\n", __func__, dev);
req->cmd = CPL_ABORT_NO_RST;
iwch_cxgb3_ofld_send(dev, skb);
}
@ -389,10 +390,10 @@ static int send_halfclose(struct iwch_ep *ep, gfp_t gfp)
struct cpl_close_con_req *req;
struct sk_buff *skb;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
skb = get_skb(NULL, sizeof(*req), gfp);
if (!skb) {
printk(KERN_ERR MOD "%s - failed to alloc skb\n", __func__);
pr_err("%s - failed to alloc skb\n", __func__);
return -ENOMEM;
}
skb->priority = CPL_PRIORITY_DATA;
@ -408,11 +409,10 @@ static int send_abort(struct iwch_ep *ep, struct sk_buff *skb, gfp_t gfp)
{
struct cpl_abort_req *req;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
skb = get_skb(skb, sizeof(*req), gfp);
if (!skb) {
printk(KERN_ERR MOD "%s - failed to alloc skb.\n",
__func__);
pr_err("%s - failed to alloc skb\n", __func__);
return -ENOMEM;
}
skb->priority = CPL_PRIORITY_DATA;
@ -434,12 +434,11 @@ static int send_connect(struct iwch_ep *ep)
unsigned int mtu_idx;
int wscale;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
skb = get_skb(NULL, sizeof(*req), GFP_KERNEL);
if (!skb) {
printk(KERN_ERR MOD "%s - failed to alloc skb.\n",
__func__);
pr_err("%s - failed to alloc skb\n", __func__);
return -ENOMEM;
}
mtu_idx = find_best_mtu(T3C_DATA(ep->com.tdev), dst_mtu(ep->dst));
@ -478,7 +477,7 @@ static void send_mpa_req(struct iwch_ep *ep, struct sk_buff *skb)
struct mpa_message *mpa;
int len;
PDBG("%s ep %p pd_len %d\n", __func__, ep, ep->plen);
pr_debug("%s ep %p pd_len %d\n", __func__, ep, ep->plen);
BUG_ON(skb_cloned(skb));
@ -538,13 +537,13 @@ static int send_mpa_reject(struct iwch_ep *ep, const void *pdata, u8 plen)
struct mpa_message *mpa;
struct sk_buff *skb;
PDBG("%s ep %p plen %d\n", __func__, ep, plen);
pr_debug("%s ep %p plen %d\n", __func__, ep, plen);
mpalen = sizeof(*mpa) + plen;
skb = get_skb(NULL, mpalen + sizeof(*req), GFP_KERNEL);
if (!skb) {
printk(KERN_ERR MOD "%s - cannot alloc skb!\n", __func__);
pr_err("%s - cannot alloc skb!\n", __func__);
return -ENOMEM;
}
skb_reserve(skb, sizeof(*req));
@ -587,13 +586,13 @@ static int send_mpa_reply(struct iwch_ep *ep, const void *pdata, u8 plen)
int len;
struct sk_buff *skb;
PDBG("%s ep %p plen %d\n", __func__, ep, plen);
pr_debug("%s ep %p plen %d\n", __func__, ep, plen);
mpalen = sizeof(*mpa) + plen;
skb = get_skb(NULL, mpalen + sizeof(*req), GFP_KERNEL);
if (!skb) {
printk(KERN_ERR MOD "%s - cannot alloc skb!\n", __func__);
pr_err("%s - cannot alloc skb!\n", __func__);
return -ENOMEM;
}
skb->priority = CPL_PRIORITY_DATA;
@ -636,7 +635,7 @@ static int act_establish(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
struct cpl_act_establish *req = cplhdr(skb);
unsigned int tid = GET_TID(req);
PDBG("%s ep %p tid %d\n", __func__, ep, tid);
pr_debug("%s ep %p tid %d\n", __func__, ep, tid);
dst_confirm(ep->dst);
@ -660,7 +659,7 @@ static int act_establish(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
static void abort_connection(struct iwch_ep *ep, struct sk_buff *skb, gfp_t gfp)
{
PDBG("%s ep %p\n", __FILE__, ep);
pr_debug("%s ep %p\n", __FILE__, ep);
state_set(&ep->com, ABORTING);
send_abort(ep, skb, gfp);
}
@ -669,12 +668,12 @@ static void close_complete_upcall(struct iwch_ep *ep)
{
struct iw_cm_event event;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
memset(&event, 0, sizeof(event));
event.event = IW_CM_EVENT_CLOSE;
if (ep->com.cm_id) {
PDBG("close complete delivered ep %p cm_id %p tid %d\n",
ep, ep->com.cm_id, ep->hwtid);
pr_debug("close complete delivered ep %p cm_id %p tid %d\n",
ep, ep->com.cm_id, ep->hwtid);
ep->com.cm_id->event_handler(ep->com.cm_id, &event);
ep->com.cm_id->rem_ref(ep->com.cm_id);
ep->com.cm_id = NULL;
@ -686,12 +685,12 @@ static void peer_close_upcall(struct iwch_ep *ep)
{
struct iw_cm_event event;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
memset(&event, 0, sizeof(event));
event.event = IW_CM_EVENT_DISCONNECT;
if (ep->com.cm_id) {
PDBG("peer close delivered ep %p cm_id %p tid %d\n",
ep, ep->com.cm_id, ep->hwtid);
pr_debug("peer close delivered ep %p cm_id %p tid %d\n",
ep, ep->com.cm_id, ep->hwtid);
ep->com.cm_id->event_handler(ep->com.cm_id, &event);
}
}
@ -700,13 +699,13 @@ static void peer_abort_upcall(struct iwch_ep *ep)
{
struct iw_cm_event event;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
memset(&event, 0, sizeof(event));
event.event = IW_CM_EVENT_CLOSE;
event.status = -ECONNRESET;
if (ep->com.cm_id) {
PDBG("abort delivered ep %p cm_id %p tid %d\n", ep,
ep->com.cm_id, ep->hwtid);
pr_debug("abort delivered ep %p cm_id %p tid %d\n", ep,
ep->com.cm_id, ep->hwtid);
ep->com.cm_id->event_handler(ep->com.cm_id, &event);
ep->com.cm_id->rem_ref(ep->com.cm_id);
ep->com.cm_id = NULL;
@ -718,7 +717,7 @@ static void connect_reply_upcall(struct iwch_ep *ep, int status)
{
struct iw_cm_event event;
PDBG("%s ep %p status %d\n", __func__, ep, status);
pr_debug("%s ep %p status %d\n", __func__, ep, status);
memset(&event, 0, sizeof(event));
event.event = IW_CM_EVENT_CONNECT_REPLY;
event.status = status;
@ -732,8 +731,8 @@ static void connect_reply_upcall(struct iwch_ep *ep, int status)
event.private_data = ep->mpa_pkt + sizeof(struct mpa_message);
}
if (ep->com.cm_id) {
PDBG("%s ep %p tid %d status %d\n", __func__, ep,
ep->hwtid, status);
pr_debug("%s ep %p tid %d status %d\n", __func__, ep,
ep->hwtid, status);
ep->com.cm_id->event_handler(ep->com.cm_id, &event);
}
if (status < 0) {
@ -747,7 +746,7 @@ static void connect_request_upcall(struct iwch_ep *ep)
{
struct iw_cm_event event;
PDBG("%s ep %p tid %d\n", __func__, ep, ep->hwtid);
pr_debug("%s ep %p tid %d\n", __func__, ep, ep->hwtid);
memset(&event, 0, sizeof(event));
event.event = IW_CM_EVENT_CONNECT_REQUEST;
memcpy(&event.local_addr, &ep->com.local_addr,
@ -776,7 +775,7 @@ static void established_upcall(struct iwch_ep *ep)
{
struct iw_cm_event event;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
memset(&event, 0, sizeof(event));
event.event = IW_CM_EVENT_ESTABLISHED;
/*
@ -785,7 +784,7 @@ static void established_upcall(struct iwch_ep *ep)
*/
event.ird = event.ord = 8;
if (ep->com.cm_id) {
PDBG("%s ep %p tid %d\n", __func__, ep, ep->hwtid);
pr_debug("%s ep %p tid %d\n", __func__, ep, ep->hwtid);
ep->com.cm_id->event_handler(ep->com.cm_id, &event);
}
}
@ -795,10 +794,10 @@ static int update_rx_credits(struct iwch_ep *ep, u32 credits)
struct cpl_rx_data_ack *req;
struct sk_buff *skb;
PDBG("%s ep %p credits %u\n", __func__, ep, credits);
pr_debug("%s ep %p credits %u\n", __func__, ep, credits);
skb = get_skb(NULL, sizeof(*req), GFP_KERNEL);
if (!skb) {
printk(KERN_ERR MOD "update_rx_credits - cannot alloc skb!\n");
pr_err("update_rx_credits - cannot alloc skb!\n");
return 0;
}
@ -819,7 +818,7 @@ static void process_mpa_reply(struct iwch_ep *ep, struct sk_buff *skb)
enum iwch_qp_attr_mask mask;
int err;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
/*
* Stop mpa timer. If it expired, then the state has
@ -906,10 +905,10 @@ static void process_mpa_reply(struct iwch_ep *ep, struct sk_buff *skb)
ep->mpa_attr.recv_marker_enabled = markers_enabled;
ep->mpa_attr.xmit_marker_enabled = mpa->flags & MPA_MARKERS ? 1 : 0;
ep->mpa_attr.version = mpa_rev;
PDBG("%s - crc_enabled=%d, recv_marker_enabled=%d, "
"xmit_marker_enabled=%d, version=%d\n", __func__,
ep->mpa_attr.crc_enabled, ep->mpa_attr.recv_marker_enabled,
ep->mpa_attr.xmit_marker_enabled, ep->mpa_attr.version);
pr_debug("%s - crc_enabled=%d, recv_marker_enabled=%d, xmit_marker_enabled=%d, version=%d\n",
__func__,
ep->mpa_attr.crc_enabled, ep->mpa_attr.recv_marker_enabled,
ep->mpa_attr.xmit_marker_enabled, ep->mpa_attr.version);
attrs.mpa_attr = ep->mpa_attr;
attrs.max_ird = ep->ird;
@ -944,7 +943,7 @@ static void process_mpa_request(struct iwch_ep *ep, struct sk_buff *skb)
struct mpa_message *mpa;
u16 plen;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
/*
* Stop mpa timer. If it expired, then the state has
@ -964,7 +963,7 @@ static void process_mpa_request(struct iwch_ep *ep, struct sk_buff *skb)
return;
}
PDBG("%s enter (%s line %u)\n", __func__, __FILE__, __LINE__);
pr_debug("%s enter (%s line %u)\n", __func__, __FILE__, __LINE__);
/*
* Copy the new data into our accumulation buffer.
@ -979,7 +978,7 @@ static void process_mpa_request(struct iwch_ep *ep, struct sk_buff *skb)
*/
if (ep->mpa_pkt_len < sizeof(*mpa))
return;
PDBG("%s enter (%s line %u)\n", __func__, __FILE__, __LINE__);
pr_debug("%s enter (%s line %u)\n", __func__, __FILE__, __LINE__);
mpa = (struct mpa_message *) ep->mpa_pkt;
/*
@ -1029,10 +1028,10 @@ static void process_mpa_request(struct iwch_ep *ep, struct sk_buff *skb)
ep->mpa_attr.recv_marker_enabled = markers_enabled;
ep->mpa_attr.xmit_marker_enabled = mpa->flags & MPA_MARKERS ? 1 : 0;
ep->mpa_attr.version = mpa_rev;
PDBG("%s - crc_enabled=%d, recv_marker_enabled=%d, "
"xmit_marker_enabled=%d, version=%d\n", __func__,
ep->mpa_attr.crc_enabled, ep->mpa_attr.recv_marker_enabled,
ep->mpa_attr.xmit_marker_enabled, ep->mpa_attr.version);
pr_debug("%s - crc_enabled=%d, recv_marker_enabled=%d, xmit_marker_enabled=%d, version=%d\n",
__func__,
ep->mpa_attr.crc_enabled, ep->mpa_attr.recv_marker_enabled,
ep->mpa_attr.xmit_marker_enabled, ep->mpa_attr.version);
state_set(&ep->com, MPA_REQ_RCVD);
@ -1047,7 +1046,7 @@ static int rx_data(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
struct cpl_rx_data *hdr = cplhdr(skb);
unsigned int dlen = ntohs(hdr->len);
PDBG("%s ep %p dlen %u\n", __func__, ep, dlen);
pr_debug("%s ep %p dlen %u\n", __func__, ep, dlen);
skb_pull(skb, sizeof(*hdr));
skb_trim(skb, dlen);
@ -1065,8 +1064,7 @@ static int rx_data(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
case MPA_REP_SENT:
break;
default:
printk(KERN_ERR MOD "%s Unexpected streaming data."
" ep %p state %d tid %d\n",
pr_err("%s Unexpected streaming data. ep %p state %d tid %d\n",
__func__, ep, state_read(&ep->com), ep->hwtid);
/*
@ -1095,11 +1093,11 @@ static int tx_ack(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
unsigned long flags;
int post_zb = 0;
PDBG("%s ep %p credits %u\n", __func__, ep, credits);
pr_debug("%s ep %p credits %u\n", __func__, ep, credits);
if (credits == 0) {
PDBG("%s 0 credit ack ep %p state %u\n",
__func__, ep, state_read(&ep->com));
pr_debug("%s 0 credit ack ep %p state %u\n",
__func__, ep, state_read(&ep->com));
return CPL_RET_BUF_DONE;
}
@ -1107,24 +1105,24 @@ static int tx_ack(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
BUG_ON(credits != 1);
dst_confirm(ep->dst);
if (!ep->mpa_skb) {
PDBG("%s rdma_init wr_ack ep %p state %u\n",
__func__, ep, ep->com.state);
pr_debug("%s rdma_init wr_ack ep %p state %u\n",
__func__, ep, ep->com.state);
if (ep->mpa_attr.initiator) {
PDBG("%s initiator ep %p state %u\n",
__func__, ep, ep->com.state);
pr_debug("%s initiator ep %p state %u\n",
__func__, ep, ep->com.state);
if (peer2peer && ep->com.state == FPDU_MODE)
post_zb = 1;
} else {
PDBG("%s responder ep %p state %u\n",
__func__, ep, ep->com.state);
pr_debug("%s responder ep %p state %u\n",
__func__, ep, ep->com.state);
if (ep->com.state == MPA_REQ_RCVD) {
ep->com.rpl_done = 1;
wake_up(&ep->com.waitq);
}
}
} else {
PDBG("%s lsm ack ep %p state %u freeing skb\n",
__func__, ep, ep->com.state);
pr_debug("%s lsm ack ep %p state %u freeing skb\n",
__func__, ep, ep->com.state);
kfree_skb(ep->mpa_skb);
ep->mpa_skb = NULL;
}
@ -1140,7 +1138,7 @@ static int abort_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
unsigned long flags;
int release = 0;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
BUG_ON(!ep);
/*
@ -1159,8 +1157,7 @@ static int abort_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
release = 1;
break;
default:
printk(KERN_ERR "%s ep %p state %d\n",
__func__, ep, ep->com.state);
pr_err("%s ep %p state %d\n", __func__, ep, ep->com.state);
break;
}
spin_unlock_irqrestore(&ep->com.lock, flags);
@ -1184,8 +1181,8 @@ static int act_open_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
struct iwch_ep *ep = ctx;
struct cpl_act_open_rpl *rpl = cplhdr(skb);
PDBG("%s ep %p status %u errno %d\n", __func__, ep, rpl->status,
status2errno(rpl->status));
pr_debug("%s ep %p status %u errno %d\n", __func__, ep, rpl->status,
status2errno(rpl->status));
connect_reply_upcall(ep, status2errno(rpl->status));
state_set(&ep->com, DEAD);
if (ep->com.tdev->type != T3A && act_open_has_tid(rpl->status))
@ -1202,10 +1199,10 @@ static int listen_start(struct iwch_listen_ep *ep)
struct sk_buff *skb;
struct cpl_pass_open_req *req;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
skb = get_skb(NULL, sizeof(*req), GFP_KERNEL);
if (!skb) {
printk(KERN_ERR MOD "t3c_listen_start failed to alloc skb!\n");
pr_err("t3c_listen_start failed to alloc skb!\n");
return -ENOMEM;
}
@ -1230,8 +1227,8 @@ static int pass_open_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
struct iwch_listen_ep *ep = ctx;
struct cpl_pass_open_rpl *rpl = cplhdr(skb);
PDBG("%s ep %p status %d error %d\n", __func__, ep,
rpl->status, status2errno(rpl->status));
pr_debug("%s ep %p status %d error %d\n", __func__, ep,
rpl->status, status2errno(rpl->status));
ep->com.rpl_err = status2errno(rpl->status);
ep->com.rpl_done = 1;
wake_up(&ep->com.waitq);
@ -1244,10 +1241,10 @@ static int listen_stop(struct iwch_listen_ep *ep)
struct sk_buff *skb;
struct cpl_close_listserv_req *req;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
skb = get_skb(NULL, sizeof(*req), GFP_KERNEL);
if (!skb) {
printk(KERN_ERR MOD "%s - failed to alloc skb\n", __func__);
pr_err("%s - failed to alloc skb\n", __func__);
return -ENOMEM;
}
req = (struct cpl_close_listserv_req *) skb_put(skb, sizeof(*req));
@ -1264,7 +1261,7 @@ static int close_listsrv_rpl(struct t3cdev *tdev, struct sk_buff *skb,
struct iwch_listen_ep *ep = ctx;
struct cpl_close_listserv_rpl *rpl = cplhdr(skb);
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
ep->com.rpl_err = status2errno(rpl->status);
ep->com.rpl_done = 1;
wake_up(&ep->com.waitq);
@ -1278,7 +1275,7 @@ static void accept_cr(struct iwch_ep *ep, __be32 peer_ip, struct sk_buff *skb)
u32 opt0h, opt0l, opt2;
int wscale;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
BUG_ON(skb_cloned(skb));
skb_trim(skb, sizeof(*rpl));
skb_get(skb);
@ -1312,8 +1309,8 @@ static void accept_cr(struct iwch_ep *ep, __be32 peer_ip, struct sk_buff *skb)
static void reject_cr(struct t3cdev *tdev, u32 hwtid, __be32 peer_ip,
struct sk_buff *skb)
{
PDBG("%s t3cdev %p tid %u peer_ip %x\n", __func__, tdev, hwtid,
peer_ip);
pr_debug("%s t3cdev %p tid %u peer_ip %x\n", __func__, tdev, hwtid,
peer_ip);
BUG_ON(skb_cloned(skb));
skb_trim(skb, sizeof(struct cpl_tid_release));
skb_get(skb);
@ -1347,11 +1344,10 @@ static int pass_accept_req(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
struct rtable *rt;
struct iff_mac tim;
PDBG("%s parent ep %p tid %u\n", __func__, parent_ep, hwtid);
pr_debug("%s parent ep %p tid %u\n", __func__, parent_ep, hwtid);
if (state_read(&parent_ep->com) != LISTEN) {
printk(KERN_ERR "%s - listening ep not in LISTEN\n",
__func__);
pr_err("%s - listening ep not in LISTEN\n", __func__);
goto reject;
}
@ -1361,8 +1357,7 @@ static int pass_accept_req(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
tim.mac_addr = req->dst_mac;
tim.vlan_tag = ntohs(req->vlan_tag);
if (tdev->ctl(tdev, GET_IFF_FROM_MAC, &tim) < 0 || !tim.dev) {
printk(KERN_ERR "%s bad dst mac %pM\n",
__func__, req->dst_mac);
pr_err("%s bad dst mac %pM\n", __func__, req->dst_mac);
goto reject;
}
@ -1373,22 +1368,19 @@ static int pass_accept_req(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
req->local_port,
req->peer_port, G_PASS_OPEN_TOS(ntohl(req->tos_tid)));
if (!rt) {
printk(KERN_ERR MOD "%s - failed to find dst entry!\n",
__func__);
pr_err("%s - failed to find dst entry!\n", __func__);
goto reject;
}
dst = &rt->dst;
l2t = t3_l2t_get(tdev, dst, NULL, &req->peer_ip);
if (!l2t) {
printk(KERN_ERR MOD "%s - failed to allocate l2t entry!\n",
__func__);
pr_err("%s - failed to allocate l2t entry!\n", __func__);
dst_release(dst);
goto reject;
}
child_ep = alloc_ep(sizeof(*child_ep), GFP_KERNEL);
if (!child_ep) {
printk(KERN_ERR MOD "%s - failed to allocate ep entry!\n",
__func__);
pr_err("%s - failed to allocate ep entry!\n", __func__);
l2t_release(tdev, l2t);
dst_release(dst);
goto reject;
@ -1423,7 +1415,7 @@ static int pass_establish(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
struct iwch_ep *ep = ctx;
struct cpl_pass_establish *req = cplhdr(skb);
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
ep->snd_seq = ntohl(req->snd_isn);
ep->rcv_seq = ntohl(req->rcv_isn);
@ -1444,7 +1436,7 @@ static int peer_close(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
int disconnect = 1;
int release = 0;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
dst_confirm(ep->dst);
spin_lock_irqsave(&ep->com.lock, flags);
@ -1467,14 +1459,14 @@ static int peer_close(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
__state_set(&ep->com, CLOSING);
ep->com.rpl_done = 1;
ep->com.rpl_err = -ECONNRESET;
PDBG("waking up ep %p\n", ep);
pr_debug("waking up ep %p\n", ep);
wake_up(&ep->com.waitq);
break;
case MPA_REP_SENT:
__state_set(&ep->com, CLOSING);
ep->com.rpl_done = 1;
ep->com.rpl_err = -ECONNRESET;
PDBG("waking up ep %p\n", ep);
pr_debug("waking up ep %p\n", ep);
wake_up(&ep->com.waitq);
break;
case FPDU_MODE:
@ -1539,8 +1531,8 @@ static int peer_abort(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
unsigned long flags;
if (is_neg_adv_abort(req->status)) {
PDBG("%s neg_adv_abort ep %p tid %d\n", __func__, ep,
ep->hwtid);
pr_debug("%s neg_adv_abort ep %p tid %d\n", __func__, ep,
ep->hwtid);
t3_l2t_send_event(ep->com.tdev, ep->l2t);
return CPL_RET_BUF_DONE;
}
@ -1554,7 +1546,7 @@ static int peer_abort(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
}
spin_lock_irqsave(&ep->com.lock, flags);
PDBG("%s ep %p state %u\n", __func__, ep, ep->com.state);
pr_debug("%s ep %p state %u\n", __func__, ep, ep->com.state);
switch (ep->com.state) {
case CONNECTING:
break;
@ -1568,7 +1560,7 @@ static int peer_abort(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
case MPA_REP_SENT:
ep->com.rpl_done = 1;
ep->com.rpl_err = -ECONNRESET;
PDBG("waking up ep %p\n", ep);
pr_debug("waking up ep %p\n", ep);
wake_up(&ep->com.waitq);
break;
case MPA_REQ_RCVD:
@ -1581,7 +1573,7 @@ static int peer_abort(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
*/
ep->com.rpl_done = 1;
ep->com.rpl_err = -ECONNRESET;
PDBG("waking up ep %p\n", ep);
pr_debug("waking up ep %p\n", ep);
wake_up(&ep->com.waitq);
break;
case MORIBUND:
@ -1595,16 +1587,14 @@ static int peer_abort(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
ep->com.qp, IWCH_QP_ATTR_NEXT_STATE,
&attrs, 1);
if (ret)
printk(KERN_ERR MOD
"%s - qp <- error failed!\n",
__func__);
pr_err("%s - qp <- error failed!\n", __func__);
}
peer_abort_upcall(ep);
break;
case ABORTING:
break;
case DEAD:
PDBG("%s PEER_ABORT IN DEAD STATE!!!!\n", __func__);
pr_debug("%s PEER_ABORT IN DEAD STATE!!!!\n", __func__);
spin_unlock_irqrestore(&ep->com.lock, flags);
return CPL_RET_BUF_DONE;
default:
@ -1620,8 +1610,7 @@ static int peer_abort(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
rpl_skb = get_skb(skb, sizeof(*rpl), GFP_KERNEL);
if (!rpl_skb) {
printk(KERN_ERR MOD "%s - cannot allocate skb!\n",
__func__);
pr_err("%s - cannot allocate skb!\n", __func__);
release = 1;
goto out;
}
@ -1645,7 +1634,7 @@ static int close_con_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
unsigned long flags;
int release = 0;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
BUG_ON(!ep);
/* The cm_id may be null if we failed to connect */
@ -1699,9 +1688,9 @@ static int terminate(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
if (state_read(&ep->com) != FPDU_MODE)
return CPL_RET_BUF_DONE;
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
skb_pull(skb, sizeof(struct cpl_rdma_terminate));
PDBG("%s saving %d bytes of term msg\n", __func__, skb->len);
pr_debug("%s saving %d bytes of term msg\n", __func__, skb->len);
skb_copy_from_linear_data(skb, ep->com.qp->attr.terminate_buffer,
skb->len);
ep->com.qp->attr.terminate_msg_len = skb->len;
@ -1714,12 +1703,12 @@ static int ec_status(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
struct cpl_rdma_ec_status *rep = cplhdr(skb);
struct iwch_ep *ep = ctx;
PDBG("%s ep %p tid %u status %d\n", __func__, ep, ep->hwtid,
rep->status);
pr_debug("%s ep %p tid %u status %d\n", __func__, ep, ep->hwtid,
rep->status);
if (rep->status) {
struct iwch_qp_attributes attrs;
printk(KERN_ERR MOD "%s BAD CLOSE - Aborting tid %u\n",
pr_err("%s BAD CLOSE - Aborting tid %u\n",
__func__, ep->hwtid);
stop_ep_timer(ep);
attrs.next_state = IWCH_QP_STATE_ERROR;
@ -1739,8 +1728,8 @@ static void ep_timeout(unsigned long arg)
int abort = 1;
spin_lock_irqsave(&ep->com.lock, flags);
PDBG("%s ep %p tid %u state %d\n", __func__, ep, ep->hwtid,
ep->com.state);
pr_debug("%s ep %p tid %u state %d\n", __func__, ep, ep->hwtid,
ep->com.state);
switch (ep->com.state) {
case MPA_REQ_SENT:
__state_set(&ep->com, ABORTING);
@ -1774,7 +1763,7 @@ int iwch_reject_cr(struct iw_cm_id *cm_id, const void *pdata, u8 pdata_len)
{
int err;
struct iwch_ep *ep = to_ep(cm_id);
PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
pr_debug("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
if (state_read(&ep->com) == DEAD) {
put_ep(&ep->com);
@ -1800,7 +1789,7 @@ int iwch_accept_cr(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
struct iwch_dev *h = to_iwch_dev(cm_id->device);
struct iwch_qp *qp = get_qhp(h, conn_param->qpn);
PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
pr_debug("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
if (state_read(&ep->com) == DEAD) {
err = -ECONNRESET;
goto err;
@ -1826,7 +1815,7 @@ int iwch_accept_cr(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
if (peer2peer && ep->ird == 0)
ep->ird = 1;
PDBG("%s %d ird %d ord %d\n", __func__, __LINE__, ep->ird, ep->ord);
pr_debug("%s %d ird %d ord %d\n", __func__, __LINE__, ep->ird, ep->ord);
/* bind QP to EP and move to RTS */
attrs.mpa_attr = ep->mpa_attr;
@ -1907,7 +1896,7 @@ int iwch_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
ep = alloc_ep(sizeof(*ep), GFP_KERNEL);
if (!ep) {
printk(KERN_ERR MOD "%s - cannot alloc ep.\n", __func__);
pr_err("%s - cannot alloc ep\n", __func__);
err = -ENOMEM;
goto out;
}
@ -1928,15 +1917,15 @@ int iwch_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
ep->com.cm_id = cm_id;
ep->com.qp = get_qhp(h, conn_param->qpn);
BUG_ON(!ep->com.qp);
PDBG("%s qpn 0x%x qp %p cm_id %p\n", __func__, conn_param->qpn,
ep->com.qp, cm_id);
pr_debug("%s qpn 0x%x qp %p cm_id %p\n", __func__, conn_param->qpn,
ep->com.qp, cm_id);
/*
* Allocate an active TID to initiate a TCP connection.
*/
ep->atid = cxgb3_alloc_atid(h->rdev.t3cdev_p, &t3c_client, ep);
if (ep->atid == -1) {
printk(KERN_ERR MOD "%s - cannot alloc atid.\n", __func__);
pr_err("%s - cannot alloc atid\n", __func__);
err = -ENOMEM;
goto fail2;
}
@ -1946,7 +1935,7 @@ int iwch_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
raddr->sin_addr.s_addr, laddr->sin_port,
raddr->sin_port, IPTOS_LOWDELAY);
if (!rt) {
printk(KERN_ERR MOD "%s - cannot find route.\n", __func__);
pr_err("%s - cannot find route\n", __func__);
err = -EHOSTUNREACH;
goto fail3;
}
@ -1954,7 +1943,7 @@ int iwch_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
ep->l2t = t3_l2t_get(ep->com.tdev, ep->dst, NULL,
&raddr->sin_addr.s_addr);
if (!ep->l2t) {
printk(KERN_ERR MOD "%s - cannot alloc l2e.\n", __func__);
pr_err("%s - cannot alloc l2e\n", __func__);
err = -ENOMEM;
goto fail4;
}
@ -1999,11 +1988,11 @@ int iwch_create_listen(struct iw_cm_id *cm_id, int backlog)
ep = alloc_ep(sizeof(*ep), GFP_KERNEL);
if (!ep) {
printk(KERN_ERR MOD "%s - cannot alloc ep.\n", __func__);
pr_err("%s - cannot alloc ep\n", __func__);
err = -ENOMEM;
goto fail1;
}
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
ep->com.tdev = h->rdev.t3cdev_p;
cm_id->add_ref(cm_id);
ep->com.cm_id = cm_id;
@ -2016,7 +2005,7 @@ int iwch_create_listen(struct iw_cm_id *cm_id, int backlog)
*/
ep->stid = cxgb3_alloc_stid(h->rdev.t3cdev_p, &t3c_client, ep);
if (ep->stid == -1) {
printk(KERN_ERR MOD "%s - cannot alloc atid.\n", __func__);
pr_err("%s - cannot alloc atid\n", __func__);
err = -ENOMEM;
goto fail2;
}
@ -2048,7 +2037,7 @@ int iwch_destroy_listen(struct iw_cm_id *cm_id)
int err;
struct iwch_listen_ep *ep = to_listen_ep(cm_id);
PDBG("%s ep %p\n", __func__, ep);
pr_debug("%s ep %p\n", __func__, ep);
might_sleep();
state_set(&ep->com, DEAD);
@ -2077,8 +2066,8 @@ int iwch_ep_disconnect(struct iwch_ep *ep, int abrupt, gfp_t gfp)
spin_lock_irqsave(&ep->com.lock, flags);
PDBG("%s ep %p state %s, abrupt %d\n", __func__, ep,
states[ep->com.state], abrupt);
pr_debug("%s ep %p state %s, abrupt %d\n", __func__, ep,
states[ep->com.state], abrupt);
tdev = (struct t3cdev *)ep->com.tdev;
rdev = (struct cxio_rdev *)tdev->ulp;
@ -2115,8 +2104,8 @@ int iwch_ep_disconnect(struct iwch_ep *ep, int abrupt, gfp_t gfp)
case MORIBUND:
case ABORTING:
case DEAD:
PDBG("%s ignoring disconnect ep %p state %u\n",
__func__, ep, ep->com.state);
pr_debug("%s ignoring disconnect ep %p state %u\n",
__func__, ep, ep->com.state);
break;
default:
BUG();
@ -2145,8 +2134,8 @@ int iwch_ep_redirect(void *ctx, struct dst_entry *old, struct dst_entry *new,
if (ep->dst != old)
return 0;
PDBG("%s ep %p redirect to dst %p l2t %p\n", __func__, ep, new,
l2t);
pr_debug("%s ep %p redirect to dst %p l2t %p\n", __func__, ep, new,
l2t);
dst_hold(new);
l2t_release(ep->com.tdev, ep->l2t);
ep->l2t = l2t;
@ -2225,8 +2214,8 @@ static int set_tcb_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
struct cpl_set_tcb_rpl *rpl = cplhdr(skb);
if (rpl->status != CPL_ERR_NONE) {
printk(KERN_ERR MOD "Unexpected SET_TCB_RPL status %u "
"for tid %u\n", rpl->status, GET_TID(rpl));
pr_err("Unexpected SET_TCB_RPL status %u for tid %u\n",
rpl->status, GET_TID(rpl));
}
return CPL_RET_BUF_DONE;
}


@ -53,17 +53,17 @@
#define MPA_MARKERS 0x80
#define MPA_FLAGS_MASK 0xE0
#define put_ep(ep) { \
PDBG("put_ep (via %s:%u) ep %p refcnt %d\n", __func__, __LINE__, \
ep, kref_read(&((ep)->kref))); \
WARN_ON(kref_read(&((ep)->kref)) < 1); \
kref_put(&((ep)->kref), __free_ep); \
#define put_ep(ep) { \
pr_debug("put_ep (via %s:%u) ep %p refcnt %d\n", \
__func__, __LINE__, ep, kref_read(&((ep)->kref))); \
WARN_ON(kref_read(&((ep)->kref)) < 1); \
kref_put(&((ep)->kref), __free_ep); \
}
#define get_ep(ep) { \
PDBG("get_ep (via %s:%u) ep %p, refcnt %d\n", __func__, __LINE__, \
ep, kref_read(&((ep)->kref))); \
kref_get(&((ep)->kref)); \
#define get_ep(ep) { \
pr_debug("get_ep (via %s:%u) ep %p, refcnt %d\n", \
__func__, __LINE__, ep, kref_read(&((ep)->kref))); \
kref_get(&((ep)->kref)); \
}
struct mpa_message {
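
For readers skimming these hunks: every change in this series follows the same shape. The driver-private PDBG() and printk(KERN_ERR MOD ...) calls become the generic pr_debug()/pr_err() helpers, with the module prefix supplied once through pr_fmt(). The sketch below is only an illustration of that pattern; the MOD/PDBG definitions and the "iw_cxgb3" prefix are stand-ins, not copied from the driver headers.

/* Illustrative sketch of the logging conversion, not the driver's real header. */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt	/* must be defined before printk.h is pulled in */
#include <linux/printk.h>

/* old-style private wrappers of the kind this series removes */
#define MOD "iw_cxgb3: "
#define PDBG(fmt, args...) pr_debug(fmt, ##args)

static void logging_example(void *ep)
{
	/* before: PDBG("%s ep %p\n", __func__, ep); */
	pr_debug("%s ep %p\n", __func__, ep);	/* compiled out unless DEBUG or dynamic debug */

	/* before: printk(KERN_ERR MOD "cannot alloc skb!\n"); */
	pr_err("cannot alloc skb!\n");		/* pr_fmt() adds the module prefix */
}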


@ -67,8 +67,8 @@ static int iwch_poll_cq_one(struct iwch_dev *rhp, struct iwch_cq *chp,
ret = cxio_poll_cq(wq, &(chp->cq), &cqe, &cqe_flushed, &cookie,
&credit);
if (t3a_device(chp->rhp) && credit) {
PDBG("%s updating %d cq credits on id %d\n", __func__,
credit, chp->cq.cqid);
pr_debug("%s updating %d cq credits on id %d\n", __func__,
credit, chp->cq.cqid);
cxio_hal_cq_op(&rhp->rdev, &chp->cq, CQ_CREDIT_UPDATE, credit);
}
@ -83,11 +83,11 @@ static int iwch_poll_cq_one(struct iwch_dev *rhp, struct iwch_cq *chp,
wc->vendor_err = CQE_STATUS(cqe);
wc->wc_flags = 0;
PDBG("%s qpid 0x%x type %d opcode %d status 0x%x wrid hi 0x%x "
"lo 0x%x cookie 0x%llx\n", __func__,
CQE_QPID(cqe), CQE_TYPE(cqe),
CQE_OPCODE(cqe), CQE_STATUS(cqe), CQE_WRID_HI(cqe),
CQE_WRID_LOW(cqe), (unsigned long long) cookie);
pr_debug("%s qpid 0x%x type %d opcode %d status 0x%x wrid hi 0x%x lo 0x%x cookie 0x%llx\n",
__func__,
CQE_QPID(cqe), CQE_TYPE(cqe),
CQE_OPCODE(cqe), CQE_STATUS(cqe), CQE_WRID_HI(cqe),
CQE_WRID_LOW(cqe), (unsigned long long)cookie);
if (CQE_TYPE(cqe) == 0) {
if (!CQE_STATUS(cqe))
@ -122,8 +122,7 @@ static int iwch_poll_cq_one(struct iwch_dev *rhp, struct iwch_cq *chp,
wc->opcode = IB_WC_REG_MR;
break;
default:
printk(KERN_ERR MOD "Unexpected opcode %d "
"in the CQE received for QPID=0x%0x\n",
pr_err("Unexpected opcode %d in the CQE received for QPID=0x%0x\n",
CQE_OPCODE(cqe), CQE_QPID(cqe));
ret = -EINVAL;
goto out;
@ -177,8 +176,8 @@ static int iwch_poll_cq_one(struct iwch_dev *rhp, struct iwch_cq *chp,
wc->status = IB_WC_WR_FLUSH_ERR;
break;
default:
printk(KERN_ERR MOD "Unexpected cqe_status 0x%x for "
"QPID=0x%0x\n", CQE_STATUS(cqe), CQE_QPID(cqe));
pr_err("Unexpected cqe_status 0x%x for QPID=0x%0x\n",
CQE_STATUS(cqe), CQE_QPID(cqe));
ret = -EINVAL;
}
}


@ -52,7 +52,7 @@ static void post_qp_event(struct iwch_dev *rnicp, struct iwch_cq *chp,
qhp = get_qhp(rnicp, CQE_QPID(rsp_msg->cqe));
if (!qhp) {
printk(KERN_ERR "%s unaffiliated error 0x%x qpid 0x%x\n",
pr_err("%s unaffiliated error 0x%x qpid 0x%x\n",
__func__, CQE_STATUS(rsp_msg->cqe),
CQE_QPID(rsp_msg->cqe));
spin_unlock(&rnicp->lock);
@ -61,15 +61,16 @@ static void post_qp_event(struct iwch_dev *rnicp, struct iwch_cq *chp,
if ((qhp->attr.state == IWCH_QP_STATE_ERROR) ||
(qhp->attr.state == IWCH_QP_STATE_TERMINATE)) {
PDBG("%s AE received after RTS - "
"qp state %d qpid 0x%x status 0x%x\n", __func__,
qhp->attr.state, qhp->wq.qpid, CQE_STATUS(rsp_msg->cqe));
pr_debug("%s AE received after RTS - qp state %d qpid 0x%x status 0x%x\n",
__func__,
qhp->attr.state, qhp->wq.qpid,
CQE_STATUS(rsp_msg->cqe));
spin_unlock(&rnicp->lock);
return;
}
printk(KERN_ERR "%s - AE qpid 0x%x opcode %d status 0x%x "
"type %d wrid.hi 0x%x wrid.lo 0x%x \n", __func__,
pr_err("%s - AE qpid 0x%x opcode %d status 0x%x type %d wrid.hi 0x%x wrid.lo 0x%x\n",
__func__,
CQE_QPID(rsp_msg->cqe), CQE_OPCODE(rsp_msg->cqe),
CQE_STATUS(rsp_msg->cqe), CQE_TYPE(rsp_msg->cqe),
CQE_WRID_HI(rsp_msg->cqe), CQE_WRID_LOW(rsp_msg->cqe));
@ -117,8 +118,7 @@ void iwch_ev_dispatch(struct cxio_rdev *rdev_p, struct sk_buff *skb)
chp = get_chp(rnicp, cqid);
qhp = get_qhp(rnicp, CQE_QPID(rsp_msg->cqe));
if (!chp || !qhp) {
printk(KERN_ERR MOD "BAD AE cqid 0x%x qpid 0x%x opcode %d "
"status 0x%x type %d wrid.hi 0x%x wrid.lo 0x%x \n",
pr_err("BAD AE cqid 0x%x qpid 0x%x opcode %d status 0x%x type %d wrid.hi 0x%x wrid.lo 0x%x\n",
cqid, CQE_QPID(rsp_msg->cqe),
CQE_OPCODE(rsp_msg->cqe), CQE_STATUS(rsp_msg->cqe),
CQE_TYPE(rsp_msg->cqe), CQE_WRID_HI(rsp_msg->cqe),
@ -137,12 +137,12 @@ void iwch_ev_dispatch(struct cxio_rdev *rdev_p, struct sk_buff *skb)
if ((CQE_OPCODE(rsp_msg->cqe) == T3_TERMINATE) &&
(CQE_STATUS(rsp_msg->cqe) == 0)) {
if (SQ_TYPE(rsp_msg->cqe)) {
PDBG("%s QPID 0x%x ep %p disconnecting\n",
__func__, qhp->wq.qpid, qhp->ep);
pr_debug("%s QPID 0x%x ep %p disconnecting\n",
__func__, qhp->wq.qpid, qhp->ep);
iwch_ep_disconnect(qhp->ep, 0, GFP_ATOMIC);
} else {
PDBG("%s post REQ_ERR AE QPID 0x%x\n", __func__,
qhp->wq.qpid);
pr_debug("%s post REQ_ERR AE QPID 0x%x\n", __func__,
qhp->wq.qpid);
post_qp_event(rnicp, chp, rsp_msg,
IB_EVENT_QP_REQ_ERR, 0);
iwch_ep_disconnect(qhp->ep, 0, GFP_ATOMIC);
@ -218,7 +218,7 @@ void iwch_ev_dispatch(struct cxio_rdev *rdev_p, struct sk_buff *skb)
break;
default:
printk(KERN_ERR MOD "Unknown T3 status 0x%x QPID 0x%x\n",
pr_err("Unknown T3 status 0x%x QPID 0x%x\n",
CQE_STATUS(rsp_msg->cqe), qhp->wq.qpid);
post_qp_event(rnicp, chp, rsp_msg, IB_EVENT_QP_FATAL, 1);
break;


@ -48,7 +48,7 @@ static int iwch_finish_mem_reg(struct iwch_mr *mhp, u32 stag)
mhp->attr.stag = stag;
mmid = stag >> 8;
mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
PDBG("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp);
pr_debug("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp);
return insert_handle(mhp->rhp, &mhp->rhp->mmidr, mhp, mmid);
}


@ -62,7 +62,7 @@
#include "common.h"
static struct ib_ah *iwch_ah_create(struct ib_pd *pd,
struct ib_ah_attr *ah_attr,
struct rdma_ah_attr *ah_attr,
struct ib_udata *udata)
{
return ERR_PTR(-ENOSYS);
@ -103,7 +103,7 @@ static int iwch_dealloc_ucontext(struct ib_ucontext *context)
struct iwch_ucontext *ucontext = to_iwch_ucontext(context);
struct iwch_mm_entry *mm, *tmp;
PDBG("%s context %p\n", __func__, context);
pr_debug("%s context %p\n", __func__, context);
list_for_each_entry_safe(mm, tmp, &ucontext->mmaps, entry)
kfree(mm);
cxio_release_ucontext(&rhp->rdev, &ucontext->uctx);
@ -117,7 +117,7 @@ static struct ib_ucontext *iwch_alloc_ucontext(struct ib_device *ibdev,
struct iwch_ucontext *context;
struct iwch_dev *rhp = to_iwch_dev(ibdev);
PDBG("%s ibdev %p\n", __func__, ibdev);
pr_debug("%s ibdev %p\n", __func__, ibdev);
context = kzalloc(sizeof(*context), GFP_KERNEL);
if (!context)
return ERR_PTR(-ENOMEM);
@ -131,7 +131,7 @@ static int iwch_destroy_cq(struct ib_cq *ib_cq)
{
struct iwch_cq *chp;
PDBG("%s ib_cq %p\n", __func__, ib_cq);
pr_debug("%s ib_cq %p\n", __func__, ib_cq);
chp = to_iwch_cq(ib_cq);
remove_handle(chp->rhp, &chp->rhp->cqidr, chp->cq.cqid);
@ -157,7 +157,7 @@ static struct ib_cq *iwch_create_cq(struct ib_device *ibdev,
static int warned;
size_t resplen;
PDBG("%s ib_dev %p entries %d\n", __func__, ibdev, entries);
pr_debug("%s ib_dev %p entries %d\n", __func__, ibdev, entries);
if (attr->flags)
return ERR_PTR(-EINVAL);
@ -227,8 +227,7 @@ static struct ib_cq *iwch_create_cq(struct ib_device *ibdev,
mm->addr = virt_to_phys(chp->cq.queue);
if (udata->outlen < sizeof uresp) {
if (!warned++)
printk(KERN_WARNING MOD "Warning - "
"downlevel libcxgb3 (non-fatal).\n");
pr_warn("Warning - downlevel libcxgb3 (non-fatal)\n");
mm->len = PAGE_ALIGN((1UL << uresp.size_log2) *
sizeof(struct t3_cqe));
resplen = sizeof(struct iwch_create_cq_resp_v0);
@ -246,9 +245,9 @@ static struct ib_cq *iwch_create_cq(struct ib_device *ibdev,
}
insert_mmap(ucontext, mm);
}
PDBG("created cqid 0x%0x chp %p size 0x%0x, dma_addr 0x%0llx\n",
chp->cq.cqid, chp, (1 << chp->cq.size_log2),
(unsigned long long) chp->cq.dma_addr);
pr_debug("created cqid 0x%0x chp %p size 0x%0x, dma_addr 0x%0llx\n",
chp->cq.cqid, chp, (1 << chp->cq.size_log2),
(unsigned long long)chp->cq.dma_addr);
return &chp->ibcq;
}
@ -259,7 +258,7 @@ static int iwch_resize_cq(struct ib_cq *cq, int cqe, struct ib_udata *udata)
struct t3_cq oldcq, newcq;
int ret;
PDBG("%s ib_cq %p cqe %d\n", __func__, cq, cqe);
pr_debug("%s ib_cq %p cqe %d\n", __func__, cq, cqe);
/* We don't downsize... */
if (cqe <= cq->cqe)
@ -306,8 +305,7 @@ static int iwch_resize_cq(struct ib_cq *cq, int cqe, struct ib_udata *udata)
oldcq.cqid = newcq.cqid;
ret = cxio_destroy_cq(&chp->rhp->rdev, &oldcq);
if (ret) {
printk(KERN_ERR MOD "%s - cxio_destroy_cq failed %d\n",
__func__, ret);
pr_err("%s - cxio_destroy_cq failed %d\n", __func__, ret);
}
/* add user hooks here */
@ -342,12 +340,11 @@ static int iwch_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
chp->cq.rptr = rptr;
} else
spin_lock_irqsave(&chp->lock, flag);
PDBG("%s rptr 0x%x\n", __func__, chp->cq.rptr);
pr_debug("%s rptr 0x%x\n", __func__, chp->cq.rptr);
err = cxio_hal_cq_op(&rhp->rdev, &chp->cq, cq_op, 0);
spin_unlock_irqrestore(&chp->lock, flag);
if (err < 0)
printk(KERN_ERR MOD "Error %d rearming CQID 0x%x\n", err,
chp->cq.cqid);
pr_err("Error %d rearming CQID 0x%x\n", err, chp->cq.cqid);
if (err > 0 && !(flags & IB_CQ_REPORT_MISSED_EVENTS))
err = 0;
return err;
@ -363,8 +360,8 @@ static int iwch_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
struct iwch_ucontext *ucontext;
u64 addr;
PDBG("%s pgoff 0x%lx key 0x%x len %d\n", __func__, vma->vm_pgoff,
key, len);
pr_debug("%s pgoff 0x%lx key 0x%x len %d\n", __func__, vma->vm_pgoff,
key, len);
if (vma->vm_start & (PAGE_SIZE-1)) {
return -EINVAL;
@ -416,7 +413,7 @@ static int iwch_deallocate_pd(struct ib_pd *pd)
php = to_iwch_pd(pd);
rhp = php->rhp;
PDBG("%s ibpd %p pdid 0x%x\n", __func__, pd, php->pdid);
pr_debug("%s ibpd %p pdid 0x%x\n", __func__, pd, php->pdid);
cxio_hal_put_pdid(rhp->rdev.rscp, php->pdid);
kfree(php);
return 0;
@ -430,7 +427,7 @@ static struct ib_pd *iwch_allocate_pd(struct ib_device *ibdev,
u32 pdid;
struct iwch_dev *rhp;
PDBG("%s ibdev %p\n", __func__, ibdev);
pr_debug("%s ibdev %p\n", __func__, ibdev);
rhp = (struct iwch_dev *) ibdev;
pdid = cxio_hal_get_pdid(rhp->rdev.rscp);
if (!pdid)
@ -448,7 +445,7 @@ static struct ib_pd *iwch_allocate_pd(struct ib_device *ibdev,
return ERR_PTR(-EFAULT);
}
}
PDBG("%s pdid 0x%0x ptr 0x%p\n", __func__, pdid, php);
pr_debug("%s pdid 0x%0x ptr 0x%p\n", __func__, pdid, php);
return &php->ibpd;
}
@ -458,7 +455,7 @@ static int iwch_dereg_mr(struct ib_mr *ib_mr)
struct iwch_mr *mhp;
u32 mmid;
PDBG("%s ib_mr %p\n", __func__, ib_mr);
pr_debug("%s ib_mr %p\n", __func__, ib_mr);
mhp = to_iwch_mr(ib_mr);
kfree(mhp->pages);
@ -472,7 +469,7 @@ static int iwch_dereg_mr(struct ib_mr *ib_mr)
kfree((void *) (unsigned long) mhp->kva);
if (mhp->umem)
ib_umem_release(mhp->umem);
PDBG("%s mmid 0x%x ptr %p\n", __func__, mmid, mhp);
pr_debug("%s mmid 0x%x ptr %p\n", __func__, mmid, mhp);
kfree(mhp);
return 0;
}
@ -487,13 +484,13 @@ static struct ib_mr *iwch_get_dma_mr(struct ib_pd *pd, int acc)
__be64 *page_list;
int shift = 26, npages, ret, i;
PDBG("%s ib_pd %p\n", __func__, pd);
pr_debug("%s ib_pd %p\n", __func__, pd);
/*
* T3 only supports 32 bits of size.
*/
if (sizeof(phys_addr_t) > 4) {
pr_warn_once(MOD "Cannot support dma_mrs on this platform.\n");
pr_warn_once("Cannot support dma_mrs on this platform\n");
return ERR_PTR(-ENOTSUPP);
}
@ -518,8 +515,8 @@ static struct ib_mr *iwch_get_dma_mr(struct ib_pd *pd, int acc)
for (i = 0; i < npages; i++)
page_list[i] = cpu_to_be64((u64)i << shift);
PDBG("%s mask 0x%llx shift %d len %lld pbl_size %d\n",
__func__, mask, shift, total_size, npages);
pr_debug("%s mask 0x%llx shift %d len %lld pbl_size %d\n",
__func__, mask, shift, total_size, npages);
ret = iwch_alloc_pbl(mhp, npages);
if (ret) {
@ -567,7 +564,7 @@ static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
struct iwch_mr *mhp;
struct iwch_reg_user_mr_resp uresp;
struct scatterlist *sg;
PDBG("%s ib_pd %p\n", __func__, pd);
pr_debug("%s ib_pd %p\n", __func__, pd);
php = to_iwch_pd(pd);
rhp = php->rhp;
@ -584,7 +581,7 @@ static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
return ERR_PTR(err);
}
shift = ffs(mhp->umem->page_size) - 1;
shift = mhp->umem->page_shift;
n = mhp->umem->nmap;
@ -604,7 +601,7 @@ static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
len = sg_dma_len(sg) >> shift;
for (k = 0; k < len; ++k) {
pages[i++] = cpu_to_be64(sg_dma_address(sg) +
mhp->umem->page_size * k);
(k << shift));
if (i == PAGE_SIZE / sizeof *pages) {
err = iwch_write_pbl(mhp, pages, i, n);
if (err)
@ -637,8 +634,8 @@ static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
if (udata && !t3a_device(rhp)) {
uresp.pbl_addr = (mhp->attr.pbl_addr -
rhp->rdev.rnic_info.pbl_base) >> 3;
PDBG("%s user resp pbl_addr 0x%x\n", __func__,
uresp.pbl_addr);
pr_debug("%s user resp pbl_addr 0x%x\n", __func__,
uresp.pbl_addr);
if (ib_copy_to_udata(udata, &uresp, sizeof (uresp))) {
iwch_dereg_mr(&mhp->ibmr);
@ -692,7 +689,7 @@ static struct ib_mw *iwch_alloc_mw(struct ib_pd *pd, enum ib_mw_type type,
kfree(mhp);
return ERR_PTR(-ENOMEM);
}
PDBG("%s mmid 0x%x mhp %p stag 0x%x\n", __func__, mmid, mhp, stag);
pr_debug("%s mmid 0x%x mhp %p stag 0x%x\n", __func__, mmid, mhp, stag);
return &(mhp->ibmw);
}
@ -707,7 +704,7 @@ static int iwch_dealloc_mw(struct ib_mw *mw)
mmid = (mw->rkey) >> 8;
cxio_deallocate_window(&rhp->rdev, mhp->attr.stag);
remove_handle(rhp, &rhp->mmidr, mmid);
PDBG("%s ib_mw %p mmid 0x%x ptr %p\n", __func__, mw, mmid, mhp);
pr_debug("%s ib_mw %p mmid 0x%x ptr %p\n", __func__, mw, mmid, mhp);
kfree(mhp);
return 0;
}
@ -757,7 +754,7 @@ static struct ib_mr *iwch_alloc_mr(struct ib_pd *pd,
if (insert_handle(rhp, &rhp->mmidr, mhp, mmid))
goto err3;
PDBG("%s mmid 0x%x mhp %p stag 0x%x\n", __func__, mmid, mhp, stag);
pr_debug("%s mmid 0x%x mhp %p stag 0x%x\n", __func__, mmid, mhp, stag);
return &(mhp->ibmr);
err3:
cxio_dereg_mem(&rhp->rdev, stag, mhp->attr.pbl_size,
@ -818,8 +815,8 @@ static int iwch_destroy_qp(struct ib_qp *ib_qp)
cxio_destroy_qp(&rhp->rdev, &qhp->wq,
ucontext ? &ucontext->uctx : &rhp->rdev.uctx);
PDBG("%s ib_qp %p qpid 0x%0x qhp %p\n", __func__,
ib_qp, qhp->wq.qpid, qhp);
pr_debug("%s ib_qp %p qpid 0x%0x qhp %p\n", __func__,
ib_qp, qhp->wq.qpid, qhp);
kfree(qhp);
return 0;
}
@ -837,7 +834,7 @@ static struct ib_qp *iwch_create_qp(struct ib_pd *pd,
int wqsize, sqsize, rqsize;
struct iwch_ucontext *ucontext;
PDBG("%s ib_pd %p\n", __func__, pd);
pr_debug("%s ib_pd %p\n", __func__, pd);
if (attrs->qp_type != IB_QPT_RC)
return ERR_PTR(-EINVAL);
php = to_iwch_pd(pd);
@ -878,8 +875,8 @@ static struct ib_qp *iwch_create_qp(struct ib_pd *pd,
if (!ucontext && wqsize < (rqsize + (2 * sqsize)))
wqsize = roundup_pow_of_two(rqsize +
roundup_pow_of_two(attrs->cap.max_send_wr * 2));
PDBG("%s wqsize %d sqsize %d rqsize %d\n", __func__,
wqsize, sqsize, rqsize);
pr_debug("%s wqsize %d sqsize %d rqsize %d\n", __func__,
wqsize, sqsize, rqsize);
qhp = kzalloc(sizeof(*qhp), GFP_KERNEL);
if (!qhp)
return ERR_PTR(-ENOMEM);
@ -974,11 +971,10 @@ static struct ib_qp *iwch_create_qp(struct ib_pd *pd,
}
qhp->ibqp.qp_num = qhp->wq.qpid;
init_timer(&(qhp->timer));
PDBG("%s sq_num_entries %d, rq_num_entries %d "
"qpid 0x%0x qhp %p dma_addr 0x%llx size %d rq_addr 0x%x\n",
__func__, qhp->attr.sq_num_entries, qhp->attr.rq_num_entries,
qhp->wq.qpid, qhp, (unsigned long long) qhp->wq.dma_addr,
1 << qhp->wq.size_log2, qhp->wq.rq_addr);
pr_debug("%s sq_num_entries %d, rq_num_entries %d qpid 0x%0x qhp %p dma_addr 0x%llx size %d rq_addr 0x%x\n",
__func__, qhp->attr.sq_num_entries, qhp->attr.rq_num_entries,
qhp->wq.qpid, qhp, (unsigned long long)qhp->wq.dma_addr,
1 << qhp->wq.size_log2, qhp->wq.rq_addr);
return &qhp->ibqp;
}
@ -990,7 +986,7 @@ static int iwch_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
enum iwch_qp_attr_mask mask = 0;
struct iwch_qp_attributes attrs;
PDBG("%s ib_qp %p\n", __func__, ibqp);
pr_debug("%s ib_qp %p\n", __func__, ibqp);
/* iwarp does not support the RTR state */
if ((attr_mask & IB_QP_STATE) && (attr->qp_state == IB_QPS_RTR))
@ -1023,20 +1019,20 @@ static int iwch_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
void iwch_qp_add_ref(struct ib_qp *qp)
{
PDBG("%s ib_qp %p\n", __func__, qp);
pr_debug("%s ib_qp %p\n", __func__, qp);
atomic_inc(&(to_iwch_qp(qp)->refcnt));
}
void iwch_qp_rem_ref(struct ib_qp *qp)
{
PDBG("%s ib_qp %p\n", __func__, qp);
pr_debug("%s ib_qp %p\n", __func__, qp);
if (atomic_dec_and_test(&(to_iwch_qp(qp)->refcnt)))
wake_up(&(to_iwch_qp(qp)->wait));
}
static struct ib_qp *iwch_get_qp(struct ib_device *dev, int qpn)
{
PDBG("%s ib_dev %p qpn 0x%x\n", __func__, dev, qpn);
pr_debug("%s ib_dev %p qpn 0x%x\n", __func__, dev, qpn);
return (struct ib_qp *)get_qhp(to_iwch_dev(dev), qpn);
}
@ -1044,7 +1040,7 @@ static struct ib_qp *iwch_get_qp(struct ib_device *dev, int qpn)
static int iwch_query_pkey(struct ib_device *ibdev,
u8 port, u16 index, u16 * pkey)
{
PDBG("%s ibdev %p\n", __func__, ibdev);
pr_debug("%s ibdev %p\n", __func__, ibdev);
*pkey = 0;
return 0;
}
@ -1054,8 +1050,8 @@ static int iwch_query_gid(struct ib_device *ibdev, u8 port,
{
struct iwch_dev *dev;
PDBG("%s ibdev %p, port %d, index %d, gid %p\n",
__func__, ibdev, port, index, gid);
pr_debug("%s ibdev %p, port %d, index %d, gid %p\n",
__func__, ibdev, port, index, gid);
dev = to_iwch_dev(ibdev);
BUG_ON(port == 0 || port > 2);
memset(&(gid->raw[0]), 0, sizeof(gid->raw));
@ -1090,7 +1086,7 @@ static int iwch_query_device(struct ib_device *ibdev, struct ib_device_attr *pro
struct iwch_dev *dev;
PDBG("%s ibdev %p\n", __func__, ibdev);
pr_debug("%s ibdev %p\n", __func__, ibdev);
if (uhw->inlen || uhw->outlen)
return -EINVAL;
@ -1128,7 +1124,7 @@ static int iwch_query_port(struct ib_device *ibdev,
struct net_device *netdev;
struct in_device *inetdev;
PDBG("%s ibdev %p\n", __func__, ibdev);
pr_debug("%s ibdev %p\n", __func__, ibdev);
dev = to_iwch_dev(ibdev);
netdev = dev->rdev.port_info.lldevs[port-1];
@ -1171,7 +1167,7 @@ static ssize_t show_rev(struct device *dev, struct device_attribute *attr,
{
struct iwch_dev *iwch_dev = container_of(dev, struct iwch_dev,
ibdev.dev);
PDBG("%s dev 0x%p\n", __func__, dev);
pr_debug("%s dev 0x%p\n", __func__, dev);
return sprintf(buf, "%d\n", iwch_dev->rdev.t3cdev_p->type);
}
@ -1183,7 +1179,7 @@ static ssize_t show_hca(struct device *dev, struct device_attribute *attr,
struct ethtool_drvinfo info;
struct net_device *lldev = iwch_dev->rdev.t3cdev_p->lldev;
PDBG("%s dev 0x%p\n", __func__, dev);
pr_debug("%s dev 0x%p\n", __func__, dev);
lldev->ethtool_ops->get_drvinfo(lldev, &info);
return sprintf(buf, "%s\n", info.driver);
}
@ -1193,7 +1189,7 @@ static ssize_t show_board(struct device *dev, struct device_attribute *attr,
{
struct iwch_dev *iwch_dev = container_of(dev, struct iwch_dev,
ibdev.dev);
PDBG("%s dev 0x%p\n", __func__, dev);
pr_debug("%s dev 0x%p\n", __func__, dev);
return sprintf(buf, "%x.%x\n", iwch_dev->rdev.rnic_info.pdev->vendor,
iwch_dev->rdev.rnic_info.pdev->device);
}
@ -1278,7 +1274,7 @@ static int iwch_get_mib(struct ib_device *ibdev, struct rdma_hw_stats *stats,
if (port != 0 || !stats)
return -ENOSYS;
PDBG("%s ibdev %p\n", __func__, ibdev);
pr_debug("%s ibdev %p\n", __func__, ibdev);
dev = to_iwch_dev(ibdev);
ret = dev->rdev.t3cdev_p->ctl(dev->rdev.t3cdev_p, RDMA_GET_MIB, &m);
if (ret)
@ -1348,7 +1344,7 @@ static void get_dev_fw_ver_str(struct ib_device *ibdev, char *str,
struct ethtool_drvinfo info;
struct net_device *lldev = iwch_dev->rdev.t3cdev_p->lldev;
PDBG("%s dev 0x%p\n", __func__, iwch_dev);
pr_debug("%s dev 0x%p\n", __func__, iwch_dev);
lldev->ethtool_ops->get_drvinfo(lldev, &info);
snprintf(str, str_len, "%s", info.fw_version);
}
@ -1358,7 +1354,7 @@ int iwch_register_device(struct iwch_dev *dev)
int ret;
int i;
PDBG("%s iwch_dev %p\n", __func__, dev);
pr_debug("%s iwch_dev %p\n", __func__, dev);
strlcpy(dev->ibdev.name, "cxgb3_%d", IB_DEVICE_NAME_MAX);
memset(&dev->ibdev.node_guid, 0, sizeof(dev->ibdev.node_guid));
memcpy(&dev->ibdev.node_guid, dev->rdev.t3cdev_p->lldev->dev_addr, 6);
@ -1469,7 +1465,7 @@ void iwch_unregister_device(struct iwch_dev *dev)
{
int i;
PDBG("%s iwch_dev %p\n", __func__, dev);
pr_debug("%s iwch_dev %p\n", __func__, dev);
for (i = 0; i < ARRAY_SIZE(iwch_class_attributes); ++i)
device_remove_file(&dev->ibdev.dev,
iwch_class_attributes[i]);


@ -217,8 +217,9 @@ static inline struct iwch_mm_entry *remove_mmap(struct iwch_ucontext *ucontext,
if (mm->key == key && mm->len == len) {
list_del_init(&mm->entry);
spin_unlock(&ucontext->mmap_lock);
PDBG("%s key 0x%x addr 0x%llx len %d\n", __func__,
key, (unsigned long long) mm->addr, mm->len);
pr_debug("%s key 0x%x addr 0x%llx len %d\n",
__func__, key,
(unsigned long long)mm->addr, mm->len);
return mm;
}
}
@ -230,8 +231,8 @@ static inline void insert_mmap(struct iwch_ucontext *ucontext,
struct iwch_mm_entry *mm)
{
spin_lock(&ucontext->mmap_lock);
PDBG("%s key 0x%x addr 0x%llx len %d\n", __func__,
mm->key, (unsigned long long) mm->addr, mm->len);
pr_debug("%s key 0x%x addr 0x%llx len %d\n",
__func__, mm->key, (unsigned long long)mm->addr, mm->len);
list_add_tail(&mm->entry, &ucontext->mmaps);
spin_unlock(&ucontext->mmap_lock);
}


@ -208,30 +208,30 @@ static int iwch_sgl2pbl_map(struct iwch_dev *rhp, struct ib_sge *sg_list,
mhp = get_mhp(rhp, (sg_list[i].lkey) >> 8);
if (!mhp) {
PDBG("%s %d\n", __func__, __LINE__);
pr_debug("%s %d\n", __func__, __LINE__);
return -EIO;
}
if (!mhp->attr.state) {
PDBG("%s %d\n", __func__, __LINE__);
pr_debug("%s %d\n", __func__, __LINE__);
return -EIO;
}
if (mhp->attr.zbva) {
PDBG("%s %d\n", __func__, __LINE__);
pr_debug("%s %d\n", __func__, __LINE__);
return -EIO;
}
if (sg_list[i].addr < mhp->attr.va_fbo) {
PDBG("%s %d\n", __func__, __LINE__);
pr_debug("%s %d\n", __func__, __LINE__);
return -EINVAL;
}
if (sg_list[i].addr + ((u64) sg_list[i].length) <
sg_list[i].addr) {
PDBG("%s %d\n", __func__, __LINE__);
pr_debug("%s %d\n", __func__, __LINE__);
return -EINVAL;
}
if (sg_list[i].addr + ((u64) sg_list[i].length) >
mhp->attr.va_fbo + ((u64) mhp->attr.len)) {
PDBG("%s %d\n", __func__, __LINE__);
pr_debug("%s %d\n", __func__, __LINE__);
return -EINVAL;
}
offset = sg_list[i].addr - mhp->attr.va_fbo;
@ -427,8 +427,8 @@ int iwch_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
err = build_inv_stag(wqe, wr, &t3_wr_flit_cnt);
break;
default:
PDBG("%s post of type=%d TBD!\n", __func__,
wr->opcode);
pr_debug("%s post of type=%d TBD!\n", __func__,
wr->opcode);
err = -EINVAL;
}
if (err)
@ -444,10 +444,10 @@ int iwch_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
Q_GENBIT(qhp->wq.wptr, qhp->wq.size_log2),
0, t3_wr_flit_cnt,
(wr_cnt == 1) ? T3_SOPEOP : T3_SOP);
PDBG("%s cookie 0x%llx wq idx 0x%x swsq idx %ld opcode %d\n",
__func__, (unsigned long long) wr->wr_id, idx,
Q_PTR2IDX(qhp->wq.sq_wptr, qhp->wq.sq_size_log2),
sqp->opcode);
pr_debug("%s cookie 0x%llx wq idx 0x%x swsq idx %ld opcode %d\n",
__func__, (unsigned long long)wr->wr_id, idx,
Q_PTR2IDX(qhp->wq.sq_wptr, qhp->wq.sq_size_log2),
sqp->opcode);
wr = wr->next;
num_wrs--;
qhp->wq.wptr += wr_cnt;
@ -508,9 +508,9 @@ int iwch_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr,
build_fw_riwrh((void *) wqe, T3_WR_RCV, T3_COMPLETION_FLAG,
Q_GENBIT(qhp->wq.wptr, qhp->wq.size_log2),
0, sizeof(struct t3_receive_wr) >> 3, T3_SOPEOP);
PDBG("%s cookie 0x%llx idx 0x%x rq_wptr 0x%x rw_rptr 0x%x "
"wqe %p \n", __func__, (unsigned long long) wr->wr_id,
idx, qhp->wq.rq_wptr, qhp->wq.rq_rptr, wqe);
pr_debug("%s cookie 0x%llx idx 0x%x rq_wptr 0x%x rw_rptr 0x%x wqe %p\n",
__func__, (unsigned long long)wr->wr_id,
idx, qhp->wq.rq_wptr, qhp->wq.rq_rptr, wqe);
++(qhp->wq.rq_wptr);
++(qhp->wq.wptr);
wr = wr->next;
@ -664,10 +664,10 @@ int iwch_post_zb_read(struct iwch_ep *ep)
struct sk_buff *skb;
u8 flit_cnt = sizeof(struct t3_rdma_read_wr) >> 3;
PDBG("%s enter\n", __func__);
pr_debug("%s enter\n", __func__);
skb = alloc_skb(40, GFP_KERNEL);
if (!skb) {
printk(KERN_ERR "%s cannot send zb_read!!\n", __func__);
pr_err("%s cannot send zb_read!!\n", __func__);
return -ENOMEM;
}
wqe = (union t3_wr *)skb_put(skb, sizeof(struct t3_rdma_read_wr));
@ -696,10 +696,10 @@ int iwch_post_terminate(struct iwch_qp *qhp, struct respQ_msg_t *rsp_msg)
struct terminate_message *term;
struct sk_buff *skb;
PDBG("%s %d\n", __func__, __LINE__);
pr_debug("%s %d\n", __func__, __LINE__);
skb = alloc_skb(40, GFP_ATOMIC);
if (!skb) {
printk(KERN_ERR "%s cannot send TERMINATE!\n", __func__);
pr_err("%s cannot send TERMINATE!\n", __func__);
return -ENOMEM;
}
wqe = (union t3_wr *)skb_put(skb, 40);
@ -729,7 +729,7 @@ static void __flush_qp(struct iwch_qp *qhp, struct iwch_cq *rchp,
int flushed;
PDBG("%s qhp %p rchp %p schp %p\n", __func__, qhp, rchp, schp);
pr_debug("%s qhp %p rchp %p schp %p\n", __func__, qhp, rchp, schp);
/* take a ref on the qhp since we must release the lock */
atomic_inc(&qhp->refcnt);
spin_unlock(&qhp->lock);
@ -807,7 +807,7 @@ u16 iwch_rqes_posted(struct iwch_qp *qhp)
count++;
wqe++;
}
PDBG("%s qhp %p count %u\n", __func__, qhp, count);
pr_debug("%s qhp %p count %u\n", __func__, qhp, count);
return count;
}
@ -854,12 +854,12 @@ static int rdma_init(struct iwch_dev *rhp, struct iwch_qp *qhp,
} else
init_attr.rtr_type = 0;
init_attr.irs = qhp->ep->rcv_seq;
PDBG("%s init_attr.rq_addr 0x%x init_attr.rq_size = %d "
"flags 0x%x qpcaps 0x%x\n", __func__,
init_attr.rq_addr, init_attr.rq_size,
init_attr.flags, init_attr.qpcaps);
pr_debug("%s init_attr.rq_addr 0x%x init_attr.rq_size = %d flags 0x%x qpcaps 0x%x\n",
__func__,
init_attr.rq_addr, init_attr.rq_size,
init_attr.flags, init_attr.qpcaps);
ret = cxio_rdma_init(&rhp->rdev, &init_attr);
PDBG("%s ret %d\n", __func__, ret);
pr_debug("%s ret %d\n", __func__, ret);
return ret;
}
@ -877,9 +877,9 @@ int iwch_modify_qp(struct iwch_dev *rhp, struct iwch_qp *qhp,
int free = 0;
struct iwch_ep *ep = NULL;
PDBG("%s qhp %p qpid 0x%x ep %p state %d -> %d\n", __func__,
qhp, qhp->wq.qpid, qhp->ep, qhp->attr.state,
(mask & IWCH_QP_ATTR_NEXT_STATE) ? attrs->next_state : -1);
pr_debug("%s qhp %p qpid 0x%x ep %p state %d -> %d\n", __func__,
qhp, qhp->wq.qpid, qhp->ep, qhp->attr.state,
(mask & IWCH_QP_ATTR_NEXT_STATE) ? attrs->next_state : -1);
spin_lock_irqsave(&qhp->lock, flag);
@ -1034,16 +1034,15 @@ int iwch_modify_qp(struct iwch_dev *rhp, struct iwch_qp *qhp,
goto err;
break;
default:
printk(KERN_ERR "%s in a bad state %d\n",
__func__, qhp->attr.state);
pr_err("%s in a bad state %d\n", __func__, qhp->attr.state);
ret = -EINVAL;
goto err;
break;
}
goto out;
err:
PDBG("%s disassociating ep %p qpid 0x%x\n", __func__, qhp->ep,
qhp->wq.qpid);
pr_debug("%s disassociating ep %p qpid 0x%x\n", __func__, qhp->ep,
qhp->wq.qpid);
/* disassociate the LLP connection */
qhp->attr.llp_stream_handle = NULL;
@ -1077,6 +1076,6 @@ int iwch_modify_qp(struct iwch_dev *rhp, struct iwch_qp *qhp,
if (free)
put_ep(&ep->com);
PDBG("%s exit state %d\n", __func__, qhp->attr.state);
pr_debug("%s exit state %d\n", __func__, qhp->attr.state);
return ret;
}
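
One practical note on the result of this conversion: pr_debug() output is compiled out unless DEBUG is defined for the file or CONFIG_DYNAMIC_DEBUG is enabled, in which case individual call sites can be toggled at runtime. A minimal, purely illustrative compile-time sketch follows (the "iw_cxgb4" prefix is a placeholder, not taken from this diff):

/* Illustrative only: force pr_debug() output on for a single source file. */
#define DEBUG				/* must precede the printk.h include */
#define pr_fmt(fmt) "iw_cxgb4: " fmt	/* placeholder prefix */
#include <linux/printk.h>

static void debug_example(void)
{
	pr_debug("emitted because DEBUG is defined for this file\n");
}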

File diff suppressed because it is too large


@ -146,7 +146,7 @@ static int create_cq(struct c4iw_rdev *rdev, struct t4_cq *cq,
ret = c4iw_ofld_send(rdev, skb);
if (ret)
goto err4;
PDBG("%s wait_event wr_wait %p\n", __func__, &wr_wait);
pr_debug("%s wait_event wr_wait %p\n", __func__, &wr_wait);
ret = c4iw_wait_for_reply(rdev, &wr_wait, 0, 0, __func__);
if (ret)
goto err4;
@ -159,7 +159,7 @@ static int create_cq(struct c4iw_rdev *rdev, struct t4_cq *cq,
&cq->bar2_qid,
user ? &cq->bar2_pa : NULL);
if (user && !cq->bar2_pa) {
pr_warn(MOD "%s: cqid %u not in BAR2 range.\n",
pr_warn("%s: cqid %u not in BAR2 range\n",
pci_name(rdev->lldi.pdev), cq->cqid);
ret = -EINVAL;
goto err4;
@ -180,8 +180,8 @@ static void insert_recv_cqe(struct t4_wq *wq, struct t4_cq *cq)
{
struct t4_cqe cqe;
PDBG("%s wq %p cq %p sw_cidx %u sw_pidx %u\n", __func__,
wq, cq, cq->sw_cidx, cq->sw_pidx);
pr_debug("%s wq %p cq %p sw_cidx %u sw_pidx %u\n", __func__,
wq, cq, cq->sw_cidx, cq->sw_pidx);
memset(&cqe, 0, sizeof(cqe));
cqe.header = cpu_to_be32(CQE_STATUS_V(T4_ERR_SWFLUSH) |
CQE_OPCODE_V(FW_RI_SEND) |
@ -199,8 +199,8 @@ int c4iw_flush_rq(struct t4_wq *wq, struct t4_cq *cq, int count)
int in_use = wq->rq.in_use - count;
BUG_ON(in_use < 0);
PDBG("%s wq %p cq %p rq.in_use %u skip count %u\n", __func__,
wq, cq, wq->rq.in_use, count);
pr_debug("%s wq %p cq %p rq.in_use %u skip count %u\n", __func__,
wq, cq, wq->rq.in_use, count);
while (in_use--) {
insert_recv_cqe(wq, cq);
flushed++;
@ -213,8 +213,8 @@ static void insert_sq_cqe(struct t4_wq *wq, struct t4_cq *cq,
{
struct t4_cqe cqe;
PDBG("%s wq %p cq %p sw_cidx %u sw_pidx %u\n", __func__,
wq, cq, cq->sw_cidx, cq->sw_pidx);
pr_debug("%s wq %p cq %p sw_cidx %u sw_pidx %u\n", __func__,
wq, cq, cq->sw_cidx, cq->sw_pidx);
memset(&cqe, 0, sizeof(cqe));
cqe.header = cpu_to_be32(CQE_STATUS_V(T4_ERR_SWFLUSH) |
CQE_OPCODE_V(swcqe->opcode) |
@ -283,8 +283,8 @@ static void flush_completed_wrs(struct t4_wq *wq, struct t4_cq *cq)
/*
* Insert this completed cqe into the swcq.
*/
PDBG("%s moving cqe into swcq sq idx %u cq idx %u\n",
__func__, cidx, cq->sw_pidx);
pr_debug("%s moving cqe into swcq sq idx %u cq idx %u\n",
__func__, cidx, cq->sw_pidx);
swsqe->cqe.header |= htonl(CQE_SWCQE_V(1));
cq->sw_queue[cq->sw_pidx] = swsqe->cqe;
t4_swcq_produce(cq);
@ -339,7 +339,7 @@ void c4iw_flush_hw_cq(struct c4iw_cq *chp)
struct t4_swsqe *swsqe;
int ret;
PDBG("%s cqid 0x%x\n", __func__, chp->cq.cqid);
pr_debug("%s cqid 0x%x\n", __func__, chp->cq.cqid);
ret = t4_next_hw_cqe(&chp->cq, &hw_cqe);
/*
@ -432,7 +432,7 @@ void c4iw_count_rcqes(struct t4_cq *cq, struct t4_wq *wq, int *count)
u32 ptr;
*count = 0;
PDBG("%s count zero %d\n", __func__, *count);
pr_debug("%s count zero %d\n", __func__, *count);
ptr = cq->sw_cidx;
while (ptr != cq->sw_pidx) {
cqe = &cq->sw_queue[ptr];
@ -442,7 +442,7 @@ void c4iw_count_rcqes(struct t4_cq *cq, struct t4_wq *wq, int *count)
if (++ptr == cq->size)
ptr = 0;
}
PDBG("%s cq %p count %d\n", __func__, cq, *count);
pr_debug("%s cq %p count %d\n", __func__, cq, *count);
}
/*
@ -473,12 +473,11 @@ static int poll_cq(struct t4_wq *wq, struct t4_cq *cq, struct t4_cqe *cqe,
if (ret)
return ret;
PDBG("%s CQE OVF %u qpid 0x%0x genbit %u type %u status 0x%0x"
" opcode 0x%0x len 0x%0x wrid_hi_stag 0x%x wrid_low_msn 0x%x\n",
__func__, CQE_OVFBIT(hw_cqe), CQE_QPID(hw_cqe),
CQE_GENBIT(hw_cqe), CQE_TYPE(hw_cqe), CQE_STATUS(hw_cqe),
CQE_OPCODE(hw_cqe), CQE_LEN(hw_cqe), CQE_WRID_HI(hw_cqe),
CQE_WRID_LOW(hw_cqe));
pr_debug("%s CQE OVF %u qpid 0x%0x genbit %u type %u status 0x%0x opcode 0x%0x len 0x%0x wrid_hi_stag 0x%x wrid_low_msn 0x%x\n",
__func__, CQE_OVFBIT(hw_cqe), CQE_QPID(hw_cqe),
CQE_GENBIT(hw_cqe), CQE_TYPE(hw_cqe), CQE_STATUS(hw_cqe),
CQE_OPCODE(hw_cqe), CQE_LEN(hw_cqe), CQE_WRID_HI(hw_cqe),
CQE_WRID_LOW(hw_cqe));
/*
* skip cqe's not affiliated with a QP.
@ -606,8 +605,8 @@ static int poll_cq(struct t4_wq *wq, struct t4_cq *cq, struct t4_cqe *cqe,
if (!SW_CQE(hw_cqe) && (CQE_WRID_SQ_IDX(hw_cqe) != wq->sq.cidx)) {
struct t4_swsqe *swsqe;
PDBG("%s out of order completion going in sw_sq at idx %u\n",
__func__, CQE_WRID_SQ_IDX(hw_cqe));
pr_debug("%s out of order completion going in sw_sq at idx %u\n",
__func__, CQE_WRID_SQ_IDX(hw_cqe));
swsqe = &wq->sq.sw_sq[CQE_WRID_SQ_IDX(hw_cqe)];
swsqe->cqe = *hw_cqe;
swsqe->complete = 1;
@ -641,13 +640,13 @@ static int poll_cq(struct t4_wq *wq, struct t4_cq *cq, struct t4_cqe *cqe,
BUG_ON(wq->sq.in_use <= 0 && wq->sq.in_use >= wq->sq.size);
wq->sq.cidx = (uint16_t)idx;
PDBG("%s completing sq idx %u\n", __func__, wq->sq.cidx);
pr_debug("%s completing sq idx %u\n", __func__, wq->sq.cidx);
*cookie = wq->sq.sw_sq[wq->sq.cidx].wr_id;
if (c4iw_wr_log)
c4iw_log_wr_stats(wq, hw_cqe);
t4_sq_consume(wq);
} else {
PDBG("%s completing rq idx %u\n", __func__, wq->rq.cidx);
pr_debug("%s completing rq idx %u\n", __func__, wq->rq.cidx);
*cookie = wq->rq.sw_rq[wq->rq.cidx].wr_id;
BUG_ON(t4_rq_empty(wq));
if (c4iw_wr_log)
@ -664,12 +663,12 @@ static int poll_cq(struct t4_wq *wq, struct t4_cq *cq, struct t4_cqe *cqe,
skip_cqe:
if (SW_CQE(hw_cqe)) {
PDBG("%s cq %p cqid 0x%x skip sw cqe cidx %u\n",
__func__, cq, cq->cqid, cq->sw_cidx);
pr_debug("%s cq %p cqid 0x%x skip sw cqe cidx %u\n",
__func__, cq, cq->cqid, cq->sw_cidx);
t4_swcq_consume(cq);
} else {
PDBG("%s cq %p cqid 0x%x skip hw cqe cidx %u\n",
__func__, cq, cq->cqid, cq->cidx);
pr_debug("%s cq %p cqid 0x%x skip hw cqe cidx %u\n",
__func__, cq, cq->cqid, cq->cidx);
t4_hwcq_consume(cq);
}
return ret;
@ -715,10 +714,12 @@ static int c4iw_poll_cq_one(struct c4iw_cq *chp, struct ib_wc *wc)
wc->vendor_err = CQE_STATUS(&cqe);
wc->wc_flags = 0;
PDBG("%s qpid 0x%x type %d opcode %d status 0x%x len %u wrid hi 0x%x "
"lo 0x%x cookie 0x%llx\n", __func__, CQE_QPID(&cqe),
CQE_TYPE(&cqe), CQE_OPCODE(&cqe), CQE_STATUS(&cqe), CQE_LEN(&cqe),
CQE_WRID_HI(&cqe), CQE_WRID_LOW(&cqe), (unsigned long long)cookie);
pr_debug("%s qpid 0x%x type %d opcode %d status 0x%x len %u wrid hi 0x%x lo 0x%x cookie 0x%llx\n",
__func__, CQE_QPID(&cqe),
CQE_TYPE(&cqe), CQE_OPCODE(&cqe),
CQE_STATUS(&cqe), CQE_LEN(&cqe),
CQE_WRID_HI(&cqe), CQE_WRID_LOW(&cqe),
(unsigned long long)cookie);
if (CQE_TYPE(&cqe) == 0) {
if (!CQE_STATUS(&cqe))
@ -766,8 +767,7 @@ static int c4iw_poll_cq_one(struct c4iw_cq *chp, struct ib_wc *wc)
wc->opcode = IB_WC_SEND;
break;
default:
printk(KERN_ERR MOD "Unexpected opcode %d "
"in the CQE received for QPID=0x%0x\n",
pr_err("Unexpected opcode %d in the CQE received for QPID=0x%0x\n",
CQE_OPCODE(&cqe), CQE_QPID(&cqe));
ret = -EINVAL;
goto out;
@ -822,8 +822,7 @@ static int c4iw_poll_cq_one(struct c4iw_cq *chp, struct ib_wc *wc)
wc->status = IB_WC_WR_FLUSH_ERR;
break;
default:
printk(KERN_ERR MOD
"Unexpected cqe_status 0x%x for QPID=0x%0x\n",
pr_err("Unexpected cqe_status 0x%x for QPID=0x%0x\n",
CQE_STATUS(&cqe), CQE_QPID(&cqe));
wc->status = IB_WC_FATAL_ERR;
}
@ -860,7 +859,7 @@ int c4iw_destroy_cq(struct ib_cq *ib_cq)
struct c4iw_cq *chp;
struct c4iw_ucontext *ucontext;
PDBG("%s ib_cq %p\n", __func__, ib_cq);
pr_debug("%s ib_cq %p\n", __func__, ib_cq);
chp = to_c4iw_cq(ib_cq);
remove_handle(chp->rhp, &chp->rhp->cqidr, chp->cq.cqid);
@ -892,7 +891,7 @@ struct ib_cq *c4iw_create_cq(struct ib_device *ibdev,
size_t memsize, hwentries;
struct c4iw_mm_entry *mm, *mm2;
PDBG("%s ib_dev %p entries %d\n", __func__, ibdev, entries);
pr_debug("%s ib_dev %p entries %d\n", __func__, ibdev, entries);
if (attr->flags)
return ERR_PTR(-EINVAL);
@ -998,9 +997,9 @@ struct ib_cq *c4iw_create_cq(struct ib_device *ibdev,
mm2->len = PAGE_SIZE;
insert_mmap(ucontext, mm2);
}
PDBG("%s cqid 0x%0x chp %p size %u memsize %zu, dma_addr 0x%0llx\n",
__func__, chp->cq.cqid, chp, chp->cq.size,
chp->cq.memsize, (unsigned long long) chp->cq.dma_addr);
pr_debug("%s cqid 0x%0x chp %p size %u memsize %zu, dma_addr 0x%0llx\n",
__func__, chp->cq.cqid, chp, chp->cq.size,
chp->cq.memsize, (unsigned long long)chp->cq.dma_addr);
return &chp->ibcq;
err6:
kfree(mm2);


@ -334,7 +334,7 @@ static int qp_release(struct inode *inode, struct file *file)
{
struct c4iw_debugfs_data *qpd = file->private_data;
if (!qpd) {
printk(KERN_INFO "%s null qpd?\n", __func__);
pr_info("%s null qpd?\n", __func__);
return 0;
}
vfree(qpd->buf);
@ -422,7 +422,7 @@ static int stag_release(struct inode *inode, struct file *file)
{
struct c4iw_debugfs_data *stagd = file->private_data;
if (!stagd) {
printk(KERN_INFO "%s null stagd?\n", __func__);
pr_info("%s null stagd?\n", __func__);
return 0;
}
vfree(stagd->buf);
@ -796,15 +796,14 @@ static int c4iw_rdev_open(struct c4iw_rdev *rdev)
* cqid and qpid range must match for now.
*/
if (rdev->lldi.udb_density != rdev->lldi.ucq_density) {
pr_err(MOD "%s: unsupported udb/ucq densities %u/%u\n",
pr_err("%s: unsupported udb/ucq densities %u/%u\n",
pci_name(rdev->lldi.pdev), rdev->lldi.udb_density,
rdev->lldi.ucq_density);
return -EINVAL;
}
if (rdev->lldi.vr->qp.start != rdev->lldi.vr->cq.start ||
rdev->lldi.vr->qp.size != rdev->lldi.vr->cq.size) {
pr_err(MOD "%s: unsupported qp and cq id ranges "
"qp start %u size %u cq start %u size %u\n",
pr_err("%s: unsupported qp and cq id ranges qp start %u size %u cq start %u size %u\n",
pci_name(rdev->lldi.pdev), rdev->lldi.vr->qp.start,
rdev->lldi.vr->qp.size, rdev->lldi.vr->cq.size,
rdev->lldi.vr->cq.size);
@ -813,23 +812,20 @@ static int c4iw_rdev_open(struct c4iw_rdev *rdev)
rdev->qpmask = rdev->lldi.udb_density - 1;
rdev->cqmask = rdev->lldi.ucq_density - 1;
PDBG("%s dev %s stag start 0x%0x size 0x%0x num stags %d "
"pbl start 0x%0x size 0x%0x rq start 0x%0x size 0x%0x "
"qp qid start %u size %u cq qid start %u size %u\n",
__func__, pci_name(rdev->lldi.pdev), rdev->lldi.vr->stag.start,
rdev->lldi.vr->stag.size, c4iw_num_stags(rdev),
rdev->lldi.vr->pbl.start,
rdev->lldi.vr->pbl.size, rdev->lldi.vr->rq.start,
rdev->lldi.vr->rq.size,
rdev->lldi.vr->qp.start,
rdev->lldi.vr->qp.size,
rdev->lldi.vr->cq.start,
rdev->lldi.vr->cq.size);
PDBG("udb %pR db_reg %p gts_reg %p "
"qpmask 0x%x cqmask 0x%x\n",
&rdev->lldi.pdev->resource[2],
rdev->lldi.db_reg, rdev->lldi.gts_reg,
rdev->qpmask, rdev->cqmask);
pr_debug("%s dev %s stag start 0x%0x size 0x%0x num stags %d pbl start 0x%0x size 0x%0x rq start 0x%0x size 0x%0x qp qid start %u size %u cq qid start %u size %u\n",
__func__, pci_name(rdev->lldi.pdev), rdev->lldi.vr->stag.start,
rdev->lldi.vr->stag.size, c4iw_num_stags(rdev),
rdev->lldi.vr->pbl.start,
rdev->lldi.vr->pbl.size, rdev->lldi.vr->rq.start,
rdev->lldi.vr->rq.size,
rdev->lldi.vr->qp.start,
rdev->lldi.vr->qp.size,
rdev->lldi.vr->cq.start,
rdev->lldi.vr->cq.size);
pr_debug("udb %pR db_reg %p gts_reg %p qpmask 0x%x cqmask 0x%x\n",
&rdev->lldi.pdev->resource[2],
rdev->lldi.db_reg, rdev->lldi.gts_reg,
rdev->qpmask, rdev->cqmask);
if (c4iw_num_stags(rdev) == 0)
return -EINVAL;
@ -843,22 +839,22 @@ static int c4iw_rdev_open(struct c4iw_rdev *rdev)
err = c4iw_init_resource(rdev, c4iw_num_stags(rdev), T4_MAX_NUM_PD);
if (err) {
printk(KERN_ERR MOD "error %d initializing resources\n", err);
pr_err("error %d initializing resources\n", err);
return err;
}
err = c4iw_pblpool_create(rdev);
if (err) {
printk(KERN_ERR MOD "error %d initializing pbl pool\n", err);
pr_err("error %d initializing pbl pool\n", err);
goto destroy_resource;
}
err = c4iw_rqtpool_create(rdev);
if (err) {
printk(KERN_ERR MOD "error %d initializing rqt pool\n", err);
pr_err("error %d initializing rqt pool\n", err);
goto destroy_pblpool;
}
err = c4iw_ocqp_pool_create(rdev);
if (err) {
printk(KERN_ERR MOD "error %d initializing ocqp pool\n", err);
pr_err("error %d initializing ocqp pool\n", err);
goto destroy_rqtpool;
}
rdev->status_page = (struct t4_dev_status_page *)
@ -936,7 +932,7 @@ static void c4iw_dealloc(struct uld_ctx *ctx)
static void c4iw_remove(struct uld_ctx *ctx)
{
PDBG("%s c4iw_dev %p\n", __func__, ctx->dev);
pr_debug("%s c4iw_dev %p\n", __func__, ctx->dev);
c4iw_unregister_device(ctx->dev);
c4iw_dealloc(ctx);
}
@ -954,25 +950,25 @@ static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop)
int ret;
if (!rdma_supported(infop)) {
printk(KERN_INFO MOD "%s: RDMA not supported on this device.\n",
pci_name(infop->pdev));
pr_info("%s: RDMA not supported on this device\n",
pci_name(infop->pdev));
return ERR_PTR(-ENOSYS);
}
if (!ocqp_supported(infop))
pr_info("%s: On-Chip Queues not supported on this device.\n",
pr_info("%s: On-Chip Queues not supported on this device\n",
pci_name(infop->pdev));
devp = (struct c4iw_dev *)ib_alloc_device(sizeof(*devp));
if (!devp) {
printk(KERN_ERR MOD "Cannot allocate ib device\n");
pr_err("Cannot allocate ib device\n");
return ERR_PTR(-ENOMEM);
}
devp->rdev.lldi = *infop;
/* init various hw-queue params based on lld info */
PDBG("%s: Ing. padding boundary is %d, egrsstatuspagesize = %d\n",
__func__, devp->rdev.lldi.sge_ingpadboundary,
devp->rdev.lldi.sge_egrstatuspagesize);
pr_debug("%s: Ing. padding boundary is %d, egrsstatuspagesize = %d\n",
__func__, devp->rdev.lldi.sge_ingpadboundary,
devp->rdev.lldi.sge_egrstatuspagesize);
devp->rdev.hw_queue.t4_eq_status_entries =
devp->rdev.lldi.sge_ingpadboundary > 64 ? 2 : 1;
@ -1000,7 +996,7 @@ static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop)
devp->rdev.bar2_kva = ioremap_wc(devp->rdev.bar2_pa,
pci_resource_len(devp->rdev.lldi.pdev, 2));
if (!devp->rdev.bar2_kva) {
pr_err(MOD "Unable to ioremap BAR2\n");
pr_err("Unable to ioremap BAR2\n");
ib_dealloc_device(&devp->ibdev);
return ERR_PTR(-EINVAL);
}
@ -1012,20 +1008,19 @@ static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop)
devp->rdev.oc_mw_kva = ioremap_wc(devp->rdev.oc_mw_pa,
devp->rdev.lldi.vr->ocq.size);
if (!devp->rdev.oc_mw_kva) {
pr_err(MOD "Unable to ioremap onchip mem\n");
pr_err("Unable to ioremap onchip mem\n");
ib_dealloc_device(&devp->ibdev);
return ERR_PTR(-EINVAL);
}
}
PDBG(KERN_INFO MOD "ocq memory: "
"hw_start 0x%x size %u mw_pa 0x%lx mw_kva %p\n",
devp->rdev.lldi.vr->ocq.start, devp->rdev.lldi.vr->ocq.size,
devp->rdev.oc_mw_pa, devp->rdev.oc_mw_kva);
pr_debug("ocq memory: hw_start 0x%x size %u mw_pa 0x%lx mw_kva %p\n",
devp->rdev.lldi.vr->ocq.start, devp->rdev.lldi.vr->ocq.size,
devp->rdev.oc_mw_pa, devp->rdev.oc_mw_kva);
ret = c4iw_rdev_open(&devp->rdev);
if (ret) {
printk(KERN_ERR MOD "Unable to open CXIO rdev err %d\n", ret);
pr_err("Unable to open CXIO rdev err %d\n", ret);
ib_dealloc_device(&devp->ibdev);
return ERR_PTR(ret);
}
@ -1071,17 +1066,17 @@ static void *c4iw_uld_add(const struct cxgb4_lld_info *infop)
}
ctx->lldi = *infop;
PDBG("%s found device %s nchan %u nrxq %u ntxq %u nports %u\n",
__func__, pci_name(ctx->lldi.pdev),
ctx->lldi.nchan, ctx->lldi.nrxq,
ctx->lldi.ntxq, ctx->lldi.nports);
pr_debug("%s found device %s nchan %u nrxq %u ntxq %u nports %u\n",
__func__, pci_name(ctx->lldi.pdev),
ctx->lldi.nchan, ctx->lldi.nrxq,
ctx->lldi.ntxq, ctx->lldi.nports);
mutex_lock(&dev_mutex);
list_add_tail(&ctx->entry, &uld_ctx_list);
mutex_unlock(&dev_mutex);
for (i = 0; i < ctx->lldi.nrxq; i++)
PDBG("rxqid[%u] %u\n", i, ctx->lldi.rxq_ids[i]);
pr_debug("rxqid[%u] %u\n", i, ctx->lldi.rxq_ids[i]);
out:
return ctx;
}
@ -1138,8 +1133,7 @@ static inline int recv_rx_pkt(struct c4iw_dev *dev, const struct pkt_gl *gl,
goto out;
if (c4iw_handlers[opcode] == NULL) {
pr_info("%s no handler opcode 0x%x...\n", __func__,
opcode);
pr_info("%s no handler opcode 0x%x...\n", __func__, opcode);
kfree_skb(skb);
goto out;
}
@ -1176,13 +1170,11 @@ static int c4iw_uld_rx_handler(void *handle, const __be64 *rsp,
if (recv_rx_pkt(dev, gl, rsp))
return 0;
pr_info("%s: unexpected FL contents at %p, " \
"RSS %#llx, FL %#llx, len %u\n",
pci_name(ctx->lldi.pdev), gl->va,
(unsigned long long)be64_to_cpu(*rsp),
(unsigned long long)be64_to_cpu(
*(__force __be64 *)gl->va),
gl->tot_len);
pr_info("%s: unexpected FL contents at %p, RSS %#llx, FL %#llx, len %u\n",
pci_name(ctx->lldi.pdev), gl->va,
be64_to_cpu(*rsp),
be64_to_cpu(*(__force __be64 *)gl->va),
gl->tot_len);
return 0;
} else {
@ -1195,8 +1187,7 @@ static int c4iw_uld_rx_handler(void *handle, const __be64 *rsp,
if (c4iw_handlers[opcode]) {
c4iw_handlers[opcode](dev, skb);
} else {
pr_info("%s no handler opcode 0x%x...\n", __func__,
opcode);
pr_info("%s no handler opcode 0x%x...\n", __func__, opcode);
kfree_skb(skb);
}
@ -1209,17 +1200,16 @@ static int c4iw_uld_state_change(void *handle, enum cxgb4_state new_state)
{
struct uld_ctx *ctx = handle;
PDBG("%s new_state %u\n", __func__, new_state);
pr_debug("%s new_state %u\n", __func__, new_state);
switch (new_state) {
case CXGB4_STATE_UP:
printk(KERN_INFO MOD "%s: Up\n", pci_name(ctx->lldi.pdev));
pr_info("%s: Up\n", pci_name(ctx->lldi.pdev));
if (!ctx->dev) {
int ret;
ctx->dev = c4iw_alloc(&ctx->lldi);
if (IS_ERR(ctx->dev)) {
printk(KERN_ERR MOD
"%s: initialization failed: %ld\n",
pr_err("%s: initialization failed: %ld\n",
pci_name(ctx->lldi.pdev),
PTR_ERR(ctx->dev));
ctx->dev = NULL;
@ -1227,22 +1217,19 @@ static int c4iw_uld_state_change(void *handle, enum cxgb4_state new_state)
}
ret = c4iw_register_device(ctx->dev);
if (ret) {
printk(KERN_ERR MOD
"%s: RDMA registration failed: %d\n",
pr_err("%s: RDMA registration failed: %d\n",
pci_name(ctx->lldi.pdev), ret);
c4iw_dealloc(ctx);
}
}
break;
case CXGB4_STATE_DOWN:
printk(KERN_INFO MOD "%s: Down\n",
pci_name(ctx->lldi.pdev));
pr_info("%s: Down\n", pci_name(ctx->lldi.pdev));
if (ctx->dev)
c4iw_remove(ctx);
break;
case CXGB4_STATE_START_RECOVERY:
printk(KERN_INFO MOD "%s: Fatal Error\n",
pci_name(ctx->lldi.pdev));
pr_info("%s: Fatal Error\n", pci_name(ctx->lldi.pdev));
if (ctx->dev) {
struct ib_event event;
@ -1255,8 +1242,7 @@ static int c4iw_uld_state_change(void *handle, enum cxgb4_state new_state)
}
break;
case CXGB4_STATE_DETACH:
printk(KERN_INFO MOD "%s: Detach\n",
pci_name(ctx->lldi.pdev));
pr_info("%s: Detach\n", pci_name(ctx->lldi.pdev));
if (ctx->dev)
c4iw_remove(ctx);
break;
@ -1406,9 +1392,7 @@ static void recover_lost_dbs(struct uld_ctx *ctx, struct qp_list *qp_list)
t4_sq_host_wq_pidx(&qp->wq),
t4_sq_wq_size(&qp->wq));
if (ret) {
pr_err(MOD "%s: Fatal error - "
"DB overflow recovery failed - "
"error syncing SQ qid %u\n",
pr_err("%s: Fatal error - DB overflow recovery failed - error syncing SQ qid %u\n",
pci_name(ctx->lldi.pdev), qp->wq.sq.qid);
spin_unlock(&qp->lock);
spin_unlock_irq(&qp->rhp->lock);
@ -1422,9 +1406,7 @@ static void recover_lost_dbs(struct uld_ctx *ctx, struct qp_list *qp_list)
t4_rq_wq_size(&qp->wq));
if (ret) {
pr_err(MOD "%s: Fatal error - "
"DB overflow recovery failed - "
"error syncing RQ qid %u\n",
pr_err("%s: Fatal error - DB overflow recovery failed - error syncing RQ qid %u\n",
pci_name(ctx->lldi.pdev), qp->wq.rq.qid);
spin_unlock(&qp->lock);
spin_unlock_irq(&qp->rhp->lock);
@ -1455,7 +1437,7 @@ static void recover_queues(struct uld_ctx *ctx)
/* flush the SGE contexts */
ret = cxgb4_flush_eq_cache(ctx->dev->rdev.lldi.ports[0]);
if (ret) {
printk(KERN_ERR MOD "%s: Fatal error - DB overflow recovery failed\n",
pr_err("%s: Fatal error - DB overflow recovery failed\n",
pci_name(ctx->lldi.pdev));
return;
}
@ -1513,8 +1495,8 @@ static int c4iw_uld_control(void *handle, enum cxgb4_control control, ...)
mutex_unlock(&ctx->dev->rdev.stats.lock);
break;
default:
printk(KERN_WARNING MOD "%s: unknown control cmd %u\n",
pci_name(ctx->lldi.pdev), control);
pr_warn("%s: unknown control cmd %u\n",
pci_name(ctx->lldi.pdev), control);
break;
}
return 0;
@ -1543,8 +1525,7 @@ static int __init c4iw_init_module(void)
c4iw_debugfs_root = debugfs_create_dir(DRV_NAME, NULL);
if (!c4iw_debugfs_root)
printk(KERN_WARNING MOD
"could not create debugfs entry, continuing\n");
pr_warn("could not create debugfs entry, continuing\n");
cxgb4_register_uld(CXGB4_ULD_RDMA, &c4iw_uld_info);


@ -47,17 +47,16 @@ static void print_tpte(struct c4iw_dev *dev, u32 stag)
"%s cxgb4_read_tpte err %d\n", __func__, ret);
return;
}
PDBG("stag idx 0x%x valid %d key 0x%x state %d pdid %d "
"perm 0x%x ps %d len 0x%llx va 0x%llx\n",
stag & 0xffffff00,
FW_RI_TPTE_VALID_G(ntohl(tpte.valid_to_pdid)),
FW_RI_TPTE_STAGKEY_G(ntohl(tpte.valid_to_pdid)),
FW_RI_TPTE_STAGSTATE_G(ntohl(tpte.valid_to_pdid)),
FW_RI_TPTE_PDID_G(ntohl(tpte.valid_to_pdid)),
FW_RI_TPTE_PERM_G(ntohl(tpte.locread_to_qpid)),
FW_RI_TPTE_PS_G(ntohl(tpte.locread_to_qpid)),
((u64)ntohl(tpte.len_hi) << 32) | ntohl(tpte.len_lo),
((u64)ntohl(tpte.va_hi) << 32) | ntohl(tpte.va_lo_fbo));
pr_debug("stag idx 0x%x valid %d key 0x%x state %d pdid %d perm 0x%x ps %d len 0x%llx va 0x%llx\n",
stag & 0xffffff00,
FW_RI_TPTE_VALID_G(ntohl(tpte.valid_to_pdid)),
FW_RI_TPTE_STAGKEY_G(ntohl(tpte.valid_to_pdid)),
FW_RI_TPTE_STAGSTATE_G(ntohl(tpte.valid_to_pdid)),
FW_RI_TPTE_PDID_G(ntohl(tpte.valid_to_pdid)),
FW_RI_TPTE_PERM_G(ntohl(tpte.locread_to_qpid)),
FW_RI_TPTE_PS_G(ntohl(tpte.locread_to_qpid)),
((u64)ntohl(tpte.len_hi) << 32) | ntohl(tpte.len_lo),
((u64)ntohl(tpte.va_hi) << 32) | ntohl(tpte.va_lo_fbo));
}
static void dump_err_cqe(struct c4iw_dev *dev, struct t4_cqe *err_cqe)
@ -71,9 +70,9 @@ static void dump_err_cqe(struct c4iw_dev *dev, struct t4_cqe *err_cqe)
CQE_STATUS(err_cqe), CQE_TYPE(err_cqe), ntohl(err_cqe->len),
CQE_WRID_HI(err_cqe), CQE_WRID_LOW(err_cqe));
PDBG("%016llx %016llx %016llx %016llx\n",
be64_to_cpu(p[0]), be64_to_cpu(p[1]), be64_to_cpu(p[2]),
be64_to_cpu(p[3]));
pr_debug("%016llx %016llx %016llx %016llx\n",
be64_to_cpu(p[0]), be64_to_cpu(p[1]), be64_to_cpu(p[2]),
be64_to_cpu(p[3]));
/*
* Ingress WRITE and READ_RESP errors provide
@ -124,8 +123,7 @@ void c4iw_ev_dispatch(struct c4iw_dev *dev, struct t4_cqe *err_cqe)
spin_lock_irq(&dev->lock);
qhp = get_qhp(dev, CQE_QPID(err_cqe));
if (!qhp) {
printk(KERN_ERR MOD "BAD AE qpid 0x%x opcode %d "
"status 0x%x type %d wrid.hi 0x%x wrid.lo 0x%x\n",
pr_err("BAD AE qpid 0x%x opcode %d status 0x%x type %d wrid.hi 0x%x wrid.lo 0x%x\n",
CQE_QPID(err_cqe),
CQE_OPCODE(err_cqe), CQE_STATUS(err_cqe),
CQE_TYPE(err_cqe), CQE_WRID_HI(err_cqe),
@ -140,8 +138,7 @@ void c4iw_ev_dispatch(struct c4iw_dev *dev, struct t4_cqe *err_cqe)
cqid = qhp->attr.rcq;
chp = get_chp(dev, cqid);
if (!chp) {
printk(KERN_ERR MOD "BAD AE cqid 0x%x qpid 0x%x opcode %d "
"status 0x%x type %d wrid.hi 0x%x wrid.lo 0x%x\n",
pr_err("BAD AE cqid 0x%x qpid 0x%x opcode %d status 0x%x type %d wrid.hi 0x%x wrid.lo 0x%x\n",
cqid, CQE_QPID(err_cqe),
CQE_OPCODE(err_cqe), CQE_STATUS(err_cqe),
CQE_TYPE(err_cqe), CQE_WRID_HI(err_cqe),
@ -165,7 +162,7 @@ void c4iw_ev_dispatch(struct c4iw_dev *dev, struct t4_cqe *err_cqe)
/* Completion Events */
case T4_ERR_SUCCESS:
printk(KERN_ERR MOD "AE with status 0!\n");
pr_err("AE with status 0!\n");
break;
case T4_ERR_STAG:
@ -207,7 +204,7 @@ void c4iw_ev_dispatch(struct c4iw_dev *dev, struct t4_cqe *err_cqe)
break;
default:
printk(KERN_ERR MOD "Unknown T4 status 0x%x QPID 0x%x\n",
pr_err("Unknown T4 status 0x%x QPID 0x%x\n",
CQE_STATUS(err_cqe), qhp->wq.sq.qid);
post_qp_event(dev, chp, qhp, err_cqe, IB_EVENT_QP_FATAL);
break;
@ -237,7 +234,7 @@ int c4iw_ev_handler(struct c4iw_dev *dev, u32 qid)
if (atomic_dec_and_test(&chp->refcnt))
wake_up(&chp->wait);
} else {
PDBG("%s unknown cqid 0x%x\n", __func__, qid);
pr_debug("%s unknown cqid 0x%x\n", __func__, qid);
spin_unlock_irqrestore(&dev->lock, flag);
}
return 0;


@ -64,12 +64,11 @@
#define DRV_NAME "iw_cxgb4"
#define MOD DRV_NAME ":"
extern int c4iw_debug;
#define PDBG(fmt, args...) \
do { \
if (c4iw_debug) \
printk(MOD fmt, ## args); \
} while (0)
#ifdef pr_fmt
#undef pr_fmt
#endif
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include "t4.h"
@ -231,15 +230,15 @@ static inline int c4iw_wait_for_reply(struct c4iw_rdev *rdev,
ret = wait_for_completion_timeout(&wr_waitp->completion, C4IW_WR_TO);
if (!ret) {
PDBG("%s - Device %s not responding (disabling device) - tid %u qpid %u\n",
func, pci_name(rdev->lldi.pdev), hwtid, qpid);
pr_debug("%s - Device %s not responding (disabling device) - tid %u qpid %u\n",
func, pci_name(rdev->lldi.pdev), hwtid, qpid);
rdev->flags |= T4_FATAL_ERROR;
wr_waitp->ret = -EIO;
}
out:
if (wr_waitp->ret)
PDBG("%s: FW reply %d tid %u qpid %u\n",
pci_name(rdev->lldi.pdev), wr_waitp->ret, hwtid, qpid);
pr_debug("%s: FW reply %d tid %u qpid %u\n",
pci_name(rdev->lldi.pdev), wr_waitp->ret, hwtid, qpid);
return wr_waitp->ret;
}
@ -538,8 +537,9 @@ static inline struct c4iw_mm_entry *remove_mmap(struct c4iw_ucontext *ucontext,
if (mm->key == key && mm->len == len) {
list_del_init(&mm->entry);
spin_unlock(&ucontext->mmap_lock);
PDBG("%s key 0x%x addr 0x%llx len %d\n", __func__,
key, (unsigned long long) mm->addr, mm->len);
pr_debug("%s key 0x%x addr 0x%llx len %d\n",
__func__, key,
(unsigned long long)mm->addr, mm->len);
return mm;
}
}
@ -551,8 +551,8 @@ static inline void insert_mmap(struct c4iw_ucontext *ucontext,
struct c4iw_mm_entry *mm)
{
spin_lock(&ucontext->mmap_lock);
PDBG("%s key 0x%x addr 0x%llx len %d\n", __func__,
mm->key, (unsigned long long) mm->addr, mm->len);
pr_debug("%s key 0x%x addr 0x%llx len %d\n",
__func__, mm->key, (unsigned long long)mm->addr, mm->len);
list_add_tail(&mm->entry, &ucontext->mmaps);
spin_unlock(&ucontext->mmap_lock);
}
@ -670,17 +670,19 @@ enum c4iw_mmid_state {
#define MPA_V2_RDMA_READ_RTR 0x4000
#define MPA_V2_IRD_ORD_MASK 0x3FFF
#define c4iw_put_ep(ep) { \
PDBG("put_ep (via %s:%u) ep %p refcnt %d\n", __func__, __LINE__, \
ep, kref_read(&((ep)->kref))); \
WARN_ON(kref_read(&((ep)->kref)) < 1); \
kref_put(&((ep)->kref), _c4iw_free_ep); \
#define c4iw_put_ep(ep) { \
pr_debug("put_ep (via %s:%u) ep %p refcnt %d\n", \
__func__, __LINE__, \
ep, kref_read(&((ep)->kref))); \
WARN_ON(kref_read(&((ep)->kref)) < 1); \
kref_put(&((ep)->kref), _c4iw_free_ep); \
}
#define c4iw_get_ep(ep) { \
PDBG("get_ep (via %s:%u) ep %p, refcnt %d\n", __func__, __LINE__, \
ep, kref_read(&((ep)->kref))); \
kref_get(&((ep)->kref)); \
#define c4iw_get_ep(ep) { \
pr_debug("get_ep (via %s:%u) ep %p, refcnt %d\n", \
__func__, __LINE__, \
ep, kref_read(&((ep)->kref))); \
kref_get(&((ep)->kref)); \
}
void _c4iw_free_ep(struct kref *kref);
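
The rewritten c4iw_get_ep()/c4iw_put_ep() macros keep the existing kref-based lifetime rule and only switch the tracing from PDBG() to pr_debug(). For readers unfamiliar with the pattern, a hedged userspace analogue of that get/put discipline follows (obj_get/obj_put and friends are invented for illustration; the driver uses kref_get()/kref_put() with _c4iw_free_ep as the release callback):

/*
 * Userspace analogue of the get/put refcount discipline above.  The
 * real code uses an atomic kref; a plain int is enough for a sketch.
 */
#include <stdio.h>
#include <stdlib.h>

struct obj {
	int refcnt;
};

static struct obj *obj_alloc(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	if (o)
		o->refcnt = 1;		/* like kref_init() */
	return o;
}

static void obj_get(struct obj *o, const char *who)
{
	o->refcnt++;
	printf("get (via %s) refcnt %d\n", who, o->refcnt);
}

static void obj_put(struct obj *o, const char *who)
{
	printf("put (via %s) refcnt %d\n", who, o->refcnt);
	if (--o->refcnt == 0)
		free(o);		/* like the final kref_put() release */
}

int main(void)
{
	struct obj *o = obj_alloc();

	if (!o)
		return 1;
	obj_get(o, __func__);		/* refcnt 2 */
	obj_put(o, __func__);		/* refcnt back to 1 */
	obj_put(o, __func__);		/* last reference: object freed */
	return 0;
}
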


@ -38,9 +38,9 @@
#include "iw_cxgb4.h"
int use_dsgl = 0;
int use_dsgl = 1;
module_param(use_dsgl, int, 0644);
MODULE_PARM_DESC(use_dsgl, "Use DSGL for PBL/FastReg (default=0)");
MODULE_PARM_DESC(use_dsgl, "Use DSGL for PBL/FastReg (default=1) (DEPRECATED)");
#define T4_ULPTX_MIN_IO 32
#define C4IW_MAX_INLINE_SIZE 96
@ -125,7 +125,7 @@ static int _c4iw_write_mem_inline(struct c4iw_rdev *rdev, u32 addr, u32 len,
cmd |= cpu_to_be32(T5_ULP_MEMIO_IMM_F);
addr &= 0x7FFFFFF;
PDBG("%s addr 0x%x len %u\n", __func__, addr, len);
pr_debug("%s addr 0x%x len %u\n", __func__, addr, len);
num_wqe = DIV_ROUND_UP(len, C4IW_MAX_INLINE_SIZE);
c4iw_init_wr_wait(&wr_wait);
for (i = 0; i < num_wqe; i++) {
@ -231,13 +231,11 @@ static int _c4iw_write_mem_dma(struct c4iw_rdev *rdev, u32 addr, u32 len,
static int write_adapter_mem(struct c4iw_rdev *rdev, u32 addr, u32 len,
void *data, struct sk_buff *skb)
{
if (is_t5(rdev->lldi.adapter_type) && use_dsgl) {
if (rdev->lldi.ulptx_memwrite_dsgl && use_dsgl) {
if (len > inline_threshold) {
if (_c4iw_write_mem_dma(rdev, addr, len, data, skb)) {
printk_ratelimited(KERN_WARNING
"%s: dma map"
" failure (non fatal)\n",
pci_name(rdev->lldi.pdev));
pr_warn_ratelimited("%s: dma map failure (non fatal)\n",
pci_name(rdev->lldi.pdev));
return _c4iw_write_mem_inline(rdev, addr, len,
data, skb);
} else {
@ -289,8 +287,8 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry,
mutex_unlock(&rdev->stats.lock);
*stag = (stag_idx << 8) | (atomic_inc_return(&key) & 0xff);
}
PDBG("%s stag_state 0x%0x type 0x%0x pdid 0x%0x, stag_idx 0x%x\n",
__func__, stag_state, type, pdid, stag_idx);
pr_debug("%s stag_state 0x%0x type 0x%0x pdid 0x%0x, stag_idx 0x%x\n",
__func__, stag_state, type, pdid, stag_idx);
/* write TPT entry */
if (reset_tpt_entry)
@ -331,9 +329,9 @@ static int write_pbl(struct c4iw_rdev *rdev, __be64 *pbl,
{
int err;
PDBG("%s *pdb_addr 0x%x, pbl_base 0x%x, pbl_size %d\n",
__func__, pbl_addr, rdev->lldi.vr->pbl.start,
pbl_size);
pr_debug("%s *pdb_addr 0x%x, pbl_base 0x%x, pbl_size %d\n",
__func__, pbl_addr, rdev->lldi.vr->pbl.start,
pbl_size);
err = write_adapter_mem(rdev, pbl_addr >> 5, pbl_size << 3, pbl, NULL);
return err;
@ -376,7 +374,7 @@ static int finish_mem_reg(struct c4iw_mr *mhp, u32 stag)
mhp->attr.stag = stag;
mmid = stag >> 8;
mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
PDBG("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp);
pr_debug("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp);
return insert_handle(mhp->rhp, &mhp->rhp->mmidr, mhp, mmid);
}
@ -426,7 +424,7 @@ struct ib_mr *c4iw_get_dma_mr(struct ib_pd *pd, int acc)
int ret;
u32 stag = T4_STAG_UNSET;
PDBG("%s ib_pd %p\n", __func__, pd);
pr_debug("%s ib_pd %p\n", __func__, pd);
php = to_c4iw_pd(pd);
rhp = php->rhp;
@ -483,7 +481,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
struct c4iw_pd *php;
struct c4iw_mr *mhp;
PDBG("%s ib_pd %p\n", __func__, pd);
pr_debug("%s ib_pd %p\n", __func__, pd);
if (length == ~0ULL)
return ERR_PTR(-EINVAL);
@ -517,7 +515,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
return ERR_PTR(err);
}
shift = ffs(mhp->umem->page_size) - 1;
shift = mhp->umem->page_shift;
n = mhp->umem->nmap;
err = alloc_pbl(mhp, n);
@ -536,7 +534,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
len = sg_dma_len(sg) >> shift;
for (k = 0; k < len; ++k) {
pages[i++] = cpu_to_be64(sg_dma_address(sg) +
mhp->umem->page_size * k);
(k << shift));
if (i == PAGE_SIZE / sizeof *pages) {
err = write_pbl(&mhp->rhp->rdev,
pages,
@ -620,7 +618,7 @@ struct ib_mw *c4iw_alloc_mw(struct ib_pd *pd, enum ib_mw_type type,
ret = -ENOMEM;
goto dealloc_win;
}
PDBG("%s mmid 0x%x mhp %p stag 0x%x\n", __func__, mmid, mhp, stag);
pr_debug("%s mmid 0x%x mhp %p stag 0x%x\n", __func__, mmid, mhp, stag);
return &(mhp->ibmw);
dealloc_win:
@ -645,7 +643,7 @@ int c4iw_dealloc_mw(struct ib_mw *mw)
deallocate_window(&rhp->rdev, mhp->attr.stag, mhp->dereg_skb);
kfree_skb(mhp->dereg_skb);
kfree(mhp);
PDBG("%s ib_mw %p mmid 0x%x ptr %p\n", __func__, mw, mmid, mhp);
pr_debug("%s ib_mw %p mmid 0x%x ptr %p\n", __func__, mw, mmid, mhp);
return 0;
}
@ -703,7 +701,7 @@ struct ib_mr *c4iw_alloc_mr(struct ib_pd *pd,
goto err3;
}
PDBG("%s mmid 0x%x mhp %p stag 0x%x\n", __func__, mmid, mhp, stag);
pr_debug("%s mmid 0x%x mhp %p stag 0x%x\n", __func__, mmid, mhp, stag);
return &(mhp->ibmr);
err3:
dereg_mem(&rhp->rdev, stag, mhp->attr.pbl_size,
@ -748,7 +746,7 @@ int c4iw_dereg_mr(struct ib_mr *ib_mr)
struct c4iw_mr *mhp;
u32 mmid;
PDBG("%s ib_mr %p\n", __func__, ib_mr);
pr_debug("%s ib_mr %p\n", __func__, ib_mr);
mhp = to_c4iw_mr(ib_mr);
rhp = mhp->rhp;
@ -766,7 +764,7 @@ int c4iw_dereg_mr(struct ib_mr *ib_mr)
kfree((void *) (unsigned long) mhp->kva);
if (mhp->umem)
ib_umem_release(mhp->umem);
PDBG("%s mmid 0x%x ptr %p\n", __func__, mmid, mhp);
pr_debug("%s mmid 0x%x ptr %p\n", __func__, mmid, mhp);
kfree(mhp);
return 0;
}


@ -59,7 +59,7 @@ module_param(fastreg_support, int, 0644);
MODULE_PARM_DESC(fastreg_support, "Advertise fastreg support (default=1)");
static struct ib_ah *c4iw_ah_create(struct ib_pd *pd,
struct ib_ah_attr *ah_attr,
struct rdma_ah_attr *ah_attr,
struct ib_udata *udata)
{
@ -102,7 +102,7 @@ void _c4iw_free_ucontext(struct kref *kref)
ucontext = container_of(kref, struct c4iw_ucontext, kref);
rhp = to_c4iw_dev(ucontext->ibucontext.device);
PDBG("%s ucontext %p\n", __func__, ucontext);
pr_debug("%s ucontext %p\n", __func__, ucontext);
list_for_each_entry_safe(mm, tmp, &ucontext->mmaps, entry)
kfree(mm);
c4iw_release_dev_ucontext(&rhp->rdev, &ucontext->uctx);
@ -113,7 +113,7 @@ static int c4iw_dealloc_ucontext(struct ib_ucontext *context)
{
struct c4iw_ucontext *ucontext = to_c4iw_ucontext(context);
PDBG("%s context %p\n", __func__, context);
pr_debug("%s context %p\n", __func__, context);
c4iw_put_ucontext(ucontext);
return 0;
}
@ -123,12 +123,11 @@ static struct ib_ucontext *c4iw_alloc_ucontext(struct ib_device *ibdev,
{
struct c4iw_ucontext *context;
struct c4iw_dev *rhp = to_c4iw_dev(ibdev);
static int warned;
struct c4iw_alloc_ucontext_resp uresp;
int ret = 0;
struct c4iw_mm_entry *mm = NULL;
PDBG("%s ibdev %p\n", __func__, ibdev);
pr_debug("%s ibdev %p\n", __func__, ibdev);
context = kzalloc(sizeof(*context), GFP_KERNEL);
if (!context) {
ret = -ENOMEM;
@ -141,8 +140,7 @@ static struct ib_ucontext *c4iw_alloc_ucontext(struct ib_device *ibdev,
kref_init(&context->kref);
if (udata->outlen < sizeof(uresp) - sizeof(uresp.reserved)) {
if (!warned++)
pr_err(MOD "Warning - downlevel libcxgb4 (non-fatal), device status page disabled.");
pr_err_once("Warning - downlevel libcxgb4 (non-fatal), device status page disabled\n");
rhp->rdev.flags |= T4_STATUS_PAGE_DISABLED;
} else {
mm = kmalloc(sizeof(*mm), GFP_KERNEL);
@ -187,8 +185,8 @@ static int c4iw_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
struct c4iw_ucontext *ucontext;
u64 addr;
PDBG("%s pgoff 0x%lx key 0x%x len %d\n", __func__, vma->vm_pgoff,
key, len);
pr_debug("%s pgoff 0x%lx key 0x%x len %d\n", __func__, vma->vm_pgoff,
key, len);
if (vma->vm_start & (PAGE_SIZE-1))
return -EINVAL;
@ -253,7 +251,7 @@ static int c4iw_deallocate_pd(struct ib_pd *pd)
php = to_c4iw_pd(pd);
rhp = php->rhp;
PDBG("%s ibpd %p pdid 0x%x\n", __func__, pd, php->pdid);
pr_debug("%s ibpd %p pdid 0x%x\n", __func__, pd, php->pdid);
c4iw_put_resource(&rhp->rdev.resource.pdid_table, php->pdid);
mutex_lock(&rhp->rdev.stats.lock);
rhp->rdev.stats.pd.cur--;
@ -270,7 +268,7 @@ static struct ib_pd *c4iw_allocate_pd(struct ib_device *ibdev,
u32 pdid;
struct c4iw_dev *rhp;
PDBG("%s ibdev %p\n", __func__, ibdev);
pr_debug("%s ibdev %p\n", __func__, ibdev);
rhp = (struct c4iw_dev *) ibdev;
pdid = c4iw_get_resource(&rhp->rdev.resource.pdid_table);
if (!pdid)
@ -293,14 +291,14 @@ static struct ib_pd *c4iw_allocate_pd(struct ib_device *ibdev,
if (rhp->rdev.stats.pd.cur > rhp->rdev.stats.pd.max)
rhp->rdev.stats.pd.max = rhp->rdev.stats.pd.cur;
mutex_unlock(&rhp->rdev.stats.lock);
PDBG("%s pdid 0x%0x ptr 0x%p\n", __func__, pdid, php);
pr_debug("%s pdid 0x%0x ptr 0x%p\n", __func__, pdid, php);
return &php->ibpd;
}
static int c4iw_query_pkey(struct ib_device *ibdev, u8 port, u16 index,
u16 *pkey)
{
PDBG("%s ibdev %p\n", __func__, ibdev);
pr_debug("%s ibdev %p\n", __func__, ibdev);
*pkey = 0;
return 0;
}
@ -310,8 +308,8 @@ static int c4iw_query_gid(struct ib_device *ibdev, u8 port, int index,
{
struct c4iw_dev *dev;
PDBG("%s ibdev %p, port %d, index %d, gid %p\n",
__func__, ibdev, port, index, gid);
pr_debug("%s ibdev %p, port %d, index %d, gid %p\n",
__func__, ibdev, port, index, gid);
dev = to_c4iw_dev(ibdev);
BUG_ON(port == 0);
memset(&(gid->raw[0]), 0, sizeof(gid->raw));
@ -325,7 +323,7 @@ static int c4iw_query_device(struct ib_device *ibdev, struct ib_device_attr *pro
struct c4iw_dev *dev;
PDBG("%s ibdev %p\n", __func__, ibdev);
pr_debug("%s ibdev %p\n", __func__, ibdev);
if (uhw->inlen || uhw->outlen)
return -EINVAL;
@ -366,7 +364,7 @@ static int c4iw_query_port(struct ib_device *ibdev, u8 port,
struct net_device *netdev;
struct in_device *inetdev;
PDBG("%s ibdev %p\n", __func__, ibdev);
pr_debug("%s ibdev %p\n", __func__, ibdev);
dev = to_c4iw_dev(ibdev);
netdev = dev->rdev.lldi.ports[port-1];
@ -408,7 +406,7 @@ static ssize_t show_rev(struct device *dev, struct device_attribute *attr,
{
struct c4iw_dev *c4iw_dev = container_of(dev, struct c4iw_dev,
ibdev.dev);
PDBG("%s dev 0x%p\n", __func__, dev);
pr_debug("%s dev 0x%p\n", __func__, dev);
return sprintf(buf, "%d\n",
CHELSIO_CHIP_RELEASE(c4iw_dev->rdev.lldi.adapter_type));
}
@ -421,7 +419,7 @@ static ssize_t show_hca(struct device *dev, struct device_attribute *attr,
struct ethtool_drvinfo info;
struct net_device *lldev = c4iw_dev->rdev.lldi.ports[0];
PDBG("%s dev 0x%p\n", __func__, dev);
pr_debug("%s dev 0x%p\n", __func__, dev);
lldev->ethtool_ops->get_drvinfo(lldev, &info);
return sprintf(buf, "%s\n", info.driver);
}
@ -431,7 +429,7 @@ static ssize_t show_board(struct device *dev, struct device_attribute *attr,
{
struct c4iw_dev *c4iw_dev = container_of(dev, struct c4iw_dev,
ibdev.dev);
PDBG("%s dev 0x%p\n", __func__, dev);
pr_debug("%s dev 0x%p\n", __func__, dev);
return sprintf(buf, "%x.%x\n", c4iw_dev->rdev.lldi.pdev->vendor,
c4iw_dev->rdev.lldi.pdev->device);
}
@ -524,7 +522,7 @@ static void get_dev_fw_str(struct ib_device *dev, char *str,
{
struct c4iw_dev *c4iw_dev = container_of(dev, struct c4iw_dev,
ibdev);
PDBG("%s dev 0x%p\n", __func__, dev);
pr_debug("%s dev 0x%p\n", __func__, dev);
snprintf(str, str_len, "%u.%u.%u.%u",
FW_HDR_FW_VER_MAJOR_G(c4iw_dev->rdev.lldi.fw_vers),
@ -538,7 +536,7 @@ int c4iw_register_device(struct c4iw_dev *dev)
int ret;
int i;
PDBG("%s c4iw_dev %p\n", __func__, dev);
pr_debug("%s c4iw_dev %p\n", __func__, dev);
BUG_ON(!dev->rdev.lldi.ports[0]);
strlcpy(dev->ibdev.name, "cxgb4_%d", IB_DEVICE_NAME_MAX);
memset(&dev->ibdev.node_guid, 0, sizeof(dev->ibdev.node_guid));
@ -648,7 +646,7 @@ void c4iw_unregister_device(struct c4iw_dev *dev)
{
int i;
PDBG("%s c4iw_dev %p\n", __func__, dev);
pr_debug("%s c4iw_dev %p\n", __func__, dev);
for (i = 0; i < ARRAY_SIZE(c4iw_class_attributes); ++i)
device_remove_file(&dev->ibdev.dev,
c4iw_class_attributes[i]);


@ -254,11 +254,11 @@ static int create_qp(struct c4iw_rdev *rdev, struct t4_wq *wq,
ret = -ENOMEM;
goto free_sq;
}
PDBG("%s sq base va 0x%p pa 0x%llx rq base va 0x%p pa 0x%llx\n",
__func__, wq->sq.queue,
(unsigned long long)virt_to_phys(wq->sq.queue),
wq->rq.queue,
(unsigned long long)virt_to_phys(wq->rq.queue));
pr_debug("%s sq base va 0x%p pa 0x%llx rq base va 0x%p pa 0x%llx\n",
__func__, wq->sq.queue,
(unsigned long long)virt_to_phys(wq->sq.queue),
wq->rq.queue,
(unsigned long long)virt_to_phys(wq->rq.queue));
memset(wq->rq.queue, 0, wq->rq.memsize);
dma_unmap_addr_set(&wq->rq, mapping, wq->rq.dma_addr);
@ -275,7 +275,7 @@ static int create_qp(struct c4iw_rdev *rdev, struct t4_wq *wq,
* User mode must have bar2 access.
*/
if (user && (!wq->sq.bar2_pa || !wq->rq.bar2_pa)) {
pr_warn(MOD "%s: sqid %u or rqid %u not in BAR2 range.\n",
pr_warn("%s: sqid %u or rqid %u not in BAR2 range\n",
pci_name(rdev->lldi.pdev), wq->sq.qid, wq->rq.qid);
goto free_dma;
}
@ -362,9 +362,9 @@ static int create_qp(struct c4iw_rdev *rdev, struct t4_wq *wq,
if (ret)
goto free_dma;
PDBG("%s sqid 0x%x rqid 0x%x kdb 0x%p sq_bar2_addr %p rq_bar2_addr %p\n",
__func__, wq->sq.qid, wq->rq.qid, wq->db,
wq->sq.bar2_va, wq->rq.bar2_va);
pr_debug("%s sqid 0x%x rqid 0x%x kdb 0x%p sq_bar2_addr %p rq_bar2_addr %p\n",
__func__, wq->sq.qid, wq->rq.qid, wq->db,
wq->sq.bar2_va, wq->rq.bar2_va);
return 0;
free_dma:
@ -725,7 +725,7 @@ static void free_qp_work(struct work_struct *work)
ucontext = qhp->ucontext;
rhp = qhp->rhp;
PDBG("%s qhp %p ucontext %p\n", __func__, qhp, ucontext);
pr_debug("%s qhp %p ucontext %p\n", __func__, qhp, ucontext);
destroy_qp(&rhp->rdev, &qhp->wq,
ucontext ? &ucontext->uctx : &rhp->rdev.uctx);
@ -739,19 +739,19 @@ static void queue_qp_free(struct kref *kref)
struct c4iw_qp *qhp;
qhp = container_of(kref, struct c4iw_qp, kref);
PDBG("%s qhp %p\n", __func__, qhp);
pr_debug("%s qhp %p\n", __func__, qhp);
queue_work(qhp->rhp->rdev.free_workq, &qhp->free_work);
}
void c4iw_qp_add_ref(struct ib_qp *qp)
{
PDBG("%s ib_qp %p\n", __func__, qp);
pr_debug("%s ib_qp %p\n", __func__, qp);
kref_get(&to_c4iw_qp(qp)->kref);
}
void c4iw_qp_rem_ref(struct ib_qp *qp)
{
PDBG("%s ib_qp %p\n", __func__, qp);
pr_debug("%s ib_qp %p\n", __func__, qp);
kref_put(&to_c4iw_qp(qp)->kref, queue_qp_free);
}
@ -959,8 +959,8 @@ int c4iw_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
c4iw_invalidate_mr(qhp->rhp, wr->ex.invalidate_rkey);
break;
default:
PDBG("%s post of type=%d TBD!\n", __func__,
wr->opcode);
pr_debug("%s post of type=%d TBD!\n", __func__,
wr->opcode);
err = -EINVAL;
}
if (err) {
@ -981,9 +981,10 @@ int c4iw_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
init_wr_hdr(wqe, qhp->wq.sq.pidx, fw_opcode, fw_flags, len16);
PDBG("%s cookie 0x%llx pidx 0x%x opcode 0x%x read_len %u\n",
__func__, (unsigned long long)wr->wr_id, qhp->wq.sq.pidx,
swsqe->opcode, swsqe->read_len);
pr_debug("%s cookie 0x%llx pidx 0x%x opcode 0x%x read_len %u\n",
__func__,
(unsigned long long)wr->wr_id, qhp->wq.sq.pidx,
swsqe->opcode, swsqe->read_len);
wr = wr->next;
num_wrs--;
t4_sq_produce(&qhp->wq, len16);
@ -1057,8 +1058,9 @@ int c4iw_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr,
wqe->recv.r2[1] = 0;
wqe->recv.r2[2] = 0;
wqe->recv.len16 = len16;
PDBG("%s cookie 0x%llx pidx %u\n", __func__,
(unsigned long long) wr->wr_id, qhp->wq.rq.pidx);
pr_debug("%s cookie 0x%llx pidx %u\n",
__func__,
(unsigned long long)wr->wr_id, qhp->wq.rq.pidx);
t4_rq_produce(&qhp->wq, len16);
idx += DIV_ROUND_UP(len16*16, T4_EQ_ENTRY_SIZE);
wr = wr->next;
@ -1217,8 +1219,8 @@ static void post_terminate(struct c4iw_qp *qhp, struct t4_cqe *err_cqe,
struct sk_buff *skb;
struct terminate_message *term;
PDBG("%s qhp %p qid 0x%x tid %u\n", __func__, qhp, qhp->wq.sq.qid,
qhp->ep->hwtid);
pr_debug("%s qhp %p qid 0x%x tid %u\n", __func__, qhp, qhp->wq.sq.qid,
qhp->ep->hwtid);
skb = skb_dequeue(&qhp->ep->com.ep_skb_list);
if (WARN_ON(!skb))
@ -1254,7 +1256,7 @@ static void __flush_qp(struct c4iw_qp *qhp, struct c4iw_cq *rchp,
int rq_flushed, sq_flushed;
unsigned long flag;
PDBG("%s qhp %p rchp %p schp %p\n", __func__, qhp, rchp, schp);
pr_debug("%s qhp %p rchp %p schp %p\n", __func__, qhp, rchp, schp);
/* locking hierarchy: cq lock first, then qp lock. */
spin_lock_irqsave(&rchp->lock, flag);
@ -1339,8 +1341,8 @@ static int rdma_fini(struct c4iw_dev *rhp, struct c4iw_qp *qhp,
int ret;
struct sk_buff *skb;
PDBG("%s qhp %p qid 0x%x tid %u\n", __func__, qhp, qhp->wq.sq.qid,
ep->hwtid);
pr_debug("%s qhp %p qid 0x%x tid %u\n", __func__, qhp, qhp->wq.sq.qid,
ep->hwtid);
skb = skb_dequeue(&ep->com.ep_skb_list);
if (WARN_ON(!skb))
@ -1366,13 +1368,13 @@ static int rdma_fini(struct c4iw_dev *rhp, struct c4iw_qp *qhp,
ret = c4iw_wait_for_reply(&rhp->rdev, &ep->com.wr_wait, qhp->ep->hwtid,
qhp->wq.sq.qid, __func__);
out:
PDBG("%s ret %d\n", __func__, ret);
pr_debug("%s ret %d\n", __func__, ret);
return ret;
}
static void build_rtr_msg(u8 p2p_type, struct fw_ri_init *init)
{
PDBG("%s p2p_type = %d\n", __func__, p2p_type);
pr_debug("%s p2p_type = %d\n", __func__, p2p_type);
memset(&init->u, 0, sizeof init->u);
switch (p2p_type) {
case FW_RI_INIT_P2PTYPE_RDMA_WRITE:
@ -1401,8 +1403,8 @@ static int rdma_init(struct c4iw_dev *rhp, struct c4iw_qp *qhp)
int ret;
struct sk_buff *skb;
PDBG("%s qhp %p qid 0x%x tid %u ird %u ord %u\n", __func__, qhp,
qhp->wq.sq.qid, qhp->ep->hwtid, qhp->ep->ird, qhp->ep->ord);
pr_debug("%s qhp %p qid 0x%x tid %u ird %u ord %u\n", __func__, qhp,
qhp->wq.sq.qid, qhp->ep->hwtid, qhp->ep->ird, qhp->ep->ord);
skb = alloc_skb(sizeof *wqe, GFP_KERNEL);
if (!skb) {
@ -1474,7 +1476,7 @@ static int rdma_init(struct c4iw_dev *rhp, struct c4iw_qp *qhp)
err1:
free_ird(rhp, qhp->attr.max_ird);
out:
PDBG("%s ret %d\n", __func__, ret);
pr_debug("%s ret %d\n", __func__, ret);
return ret;
}
@ -1491,9 +1493,10 @@ int c4iw_modify_qp(struct c4iw_dev *rhp, struct c4iw_qp *qhp,
int free = 0;
struct c4iw_ep *ep = NULL;
PDBG("%s qhp %p sqid 0x%x rqid 0x%x ep %p state %d -> %d\n", __func__,
qhp, qhp->wq.sq.qid, qhp->wq.rq.qid, qhp->ep, qhp->attr.state,
(mask & C4IW_QP_ATTR_NEXT_STATE) ? attrs->next_state : -1);
pr_debug("%s qhp %p sqid 0x%x rqid 0x%x ep %p state %d -> %d\n",
__func__,
qhp, qhp->wq.sq.qid, qhp->wq.rq.qid, qhp->ep, qhp->attr.state,
(mask & C4IW_QP_ATTR_NEXT_STATE) ? attrs->next_state : -1);
mutex_lock(&qhp->mutex);
@ -1671,16 +1674,15 @@ int c4iw_modify_qp(struct c4iw_dev *rhp, struct c4iw_qp *qhp,
goto err;
break;
default:
printk(KERN_ERR "%s in a bad state %d\n",
__func__, qhp->attr.state);
pr_err("%s in a bad state %d\n", __func__, qhp->attr.state);
ret = -EINVAL;
goto err;
break;
}
goto out;
err:
PDBG("%s disassociating ep %p qpid 0x%x\n", __func__, qhp->ep,
qhp->wq.sq.qid);
pr_debug("%s disassociating ep %p qpid 0x%x\n", __func__, qhp->ep,
qhp->wq.sq.qid);
/* disassociate the LLP connection */
qhp->attr.llp_stream_handle = NULL;
@ -1716,7 +1718,7 @@ int c4iw_modify_qp(struct c4iw_dev *rhp, struct c4iw_qp *qhp,
*/
if (free)
c4iw_put_ep(&ep->com);
PDBG("%s exit state %d\n", __func__, qhp->attr.state);
pr_debug("%s exit state %d\n", __func__, qhp->attr.state);
return ret;
}
@ -1746,7 +1748,7 @@ int c4iw_destroy_qp(struct ib_qp *ib_qp)
c4iw_qp_rem_ref(ib_qp);
PDBG("%s ib_qp %p qpid 0x%0x\n", __func__, ib_qp, qhp->wq.sq.qid);
pr_debug("%s ib_qp %p qpid 0x%0x\n", __func__, ib_qp, qhp->wq.sq.qid);
return 0;
}
@ -1765,7 +1767,7 @@ struct ib_qp *c4iw_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attrs,
struct c4iw_mm_entry *sq_key_mm, *rq_key_mm = NULL, *sq_db_key_mm;
struct c4iw_mm_entry *rq_db_key_mm = NULL, *ma_sync_key_mm = NULL;
PDBG("%s ib_pd %p\n", __func__, pd);
pr_debug("%s ib_pd %p\n", __func__, pd);
if (attrs->qp_type != IB_QPT_RC)
return ERR_PTR(-EINVAL);
@ -1936,11 +1938,11 @@ struct ib_qp *c4iw_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attrs,
qhp->ibqp.qp_num = qhp->wq.sq.qid;
init_timer(&(qhp->timer));
INIT_LIST_HEAD(&qhp->db_fc_entry);
PDBG("%s sq id %u size %u memsize %zu num_entries %u "
"rq id %u size %u memsize %zu num_entries %u\n", __func__,
qhp->wq.sq.qid, qhp->wq.sq.size, qhp->wq.sq.memsize,
attrs->cap.max_send_wr, qhp->wq.rq.qid, qhp->wq.rq.size,
qhp->wq.rq.memsize, attrs->cap.max_recv_wr);
pr_debug("%s sq id %u size %u memsize %zu num_entries %u rq id %u size %u memsize %zu num_entries %u\n",
__func__,
qhp->wq.sq.qid, qhp->wq.sq.size, qhp->wq.sq.memsize,
attrs->cap.max_send_wr, qhp->wq.rq.qid, qhp->wq.rq.size,
qhp->wq.rq.memsize, attrs->cap.max_recv_wr);
return &qhp->ibqp;
err8:
kfree(ma_sync_key_mm);
@ -1970,7 +1972,7 @@ int c4iw_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
enum c4iw_qp_attr_mask mask = 0;
struct c4iw_qp_attributes attrs;
PDBG("%s ib_qp %p\n", __func__, ibqp);
pr_debug("%s ib_qp %p\n", __func__, ibqp);
/* iwarp does not support the RTR state */
if ((attr_mask & IB_QP_STATE) && (attr->qp_state == IB_QPS_RTR))
@ -2016,7 +2018,7 @@ int c4iw_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
struct ib_qp *c4iw_get_qp(struct ib_device *dev, int qpn)
{
PDBG("%s ib_dev %p qpn 0x%x\n", __func__, dev, qpn);
pr_debug("%s ib_dev %p qpn 0x%x\n", __func__, dev, qpn);
return (struct ib_qp *)get_qhp(to_c4iw_dev(dev), qpn);
}


@ -90,7 +90,7 @@ u32 c4iw_get_resource(struct c4iw_id_table *id_table)
void c4iw_put_resource(struct c4iw_id_table *id_table, u32 entry)
{
PDBG("%s entry 0x%x\n", __func__, entry);
pr_debug("%s entry 0x%x\n", __func__, entry);
c4iw_id_free(id_table, entry);
}
@ -141,7 +141,7 @@ u32 c4iw_get_cqid(struct c4iw_rdev *rdev, struct c4iw_dev_ucontext *uctx)
}
out:
mutex_unlock(&uctx->lock);
PDBG("%s qid 0x%x\n", __func__, qid);
pr_debug("%s qid 0x%x\n", __func__, qid);
mutex_lock(&rdev->stats.lock);
if (rdev->stats.qid.cur > rdev->stats.qid.max)
rdev->stats.qid.max = rdev->stats.qid.cur;
@ -157,7 +157,7 @@ void c4iw_put_cqid(struct c4iw_rdev *rdev, u32 qid,
entry = kmalloc(sizeof *entry, GFP_KERNEL);
if (!entry)
return;
PDBG("%s qid 0x%x\n", __func__, qid);
pr_debug("%s qid 0x%x\n", __func__, qid);
entry->qid = qid;
mutex_lock(&uctx->lock);
list_add_tail(&entry->entry, &uctx->cqids);
@ -215,7 +215,7 @@ u32 c4iw_get_qpid(struct c4iw_rdev *rdev, struct c4iw_dev_ucontext *uctx)
}
out:
mutex_unlock(&uctx->lock);
PDBG("%s qid 0x%x\n", __func__, qid);
pr_debug("%s qid 0x%x\n", __func__, qid);
mutex_lock(&rdev->stats.lock);
if (rdev->stats.qid.cur > rdev->stats.qid.max)
rdev->stats.qid.max = rdev->stats.qid.cur;
@ -231,7 +231,7 @@ void c4iw_put_qpid(struct c4iw_rdev *rdev, u32 qid,
entry = kmalloc(sizeof *entry, GFP_KERNEL);
if (!entry)
return;
PDBG("%s qid 0x%x\n", __func__, qid);
pr_debug("%s qid 0x%x\n", __func__, qid);
entry->qid = qid;
mutex_lock(&uctx->lock);
list_add_tail(&entry->entry, &uctx->qpids);
@ -254,7 +254,7 @@ void c4iw_destroy_resource(struct c4iw_resource *rscp)
u32 c4iw_pblpool_alloc(struct c4iw_rdev *rdev, int size)
{
unsigned long addr = gen_pool_alloc(rdev->pbl_pool, size);
PDBG("%s addr 0x%x size %d\n", __func__, (u32)addr, size);
pr_debug("%s addr 0x%x size %d\n", __func__, (u32)addr, size);
mutex_lock(&rdev->stats.lock);
if (addr) {
rdev->stats.pbl.cur += roundup(size, 1 << MIN_PBL_SHIFT);
@ -268,7 +268,7 @@ u32 c4iw_pblpool_alloc(struct c4iw_rdev *rdev, int size)
void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size)
{
PDBG("%s addr 0x%x size %d\n", __func__, addr, size);
pr_debug("%s addr 0x%x size %d\n", __func__, addr, size);
mutex_lock(&rdev->stats.lock);
rdev->stats.pbl.cur -= roundup(size, 1 << MIN_PBL_SHIFT);
mutex_unlock(&rdev->stats.lock);
@ -290,19 +290,17 @@ int c4iw_pblpool_create(struct c4iw_rdev *rdev)
while (pbl_start < pbl_top) {
pbl_chunk = min(pbl_top - pbl_start + 1, pbl_chunk);
if (gen_pool_add(rdev->pbl_pool, pbl_start, pbl_chunk, -1)) {
PDBG("%s failed to add PBL chunk (%x/%x)\n",
__func__, pbl_start, pbl_chunk);
pr_debug("%s failed to add PBL chunk (%x/%x)\n",
__func__, pbl_start, pbl_chunk);
if (pbl_chunk <= 1024 << MIN_PBL_SHIFT) {
printk(KERN_WARNING MOD
"Failed to add all PBL chunks (%x/%x)\n",
pbl_start,
pbl_top - pbl_start);
pr_warn("Failed to add all PBL chunks (%x/%x)\n",
pbl_start, pbl_top - pbl_start);
return 0;
}
pbl_chunk >>= 1;
} else {
PDBG("%s added PBL chunk (%x/%x)\n",
__func__, pbl_start, pbl_chunk);
pr_debug("%s added PBL chunk (%x/%x)\n",
__func__, pbl_start, pbl_chunk);
pbl_start += pbl_chunk;
}
}
@ -324,9 +322,9 @@ void c4iw_pblpool_destroy(struct c4iw_rdev *rdev)
u32 c4iw_rqtpool_alloc(struct c4iw_rdev *rdev, int size)
{
unsigned long addr = gen_pool_alloc(rdev->rqt_pool, size << 6);
PDBG("%s addr 0x%x size %d\n", __func__, (u32)addr, size << 6);
pr_debug("%s addr 0x%x size %d\n", __func__, (u32)addr, size << 6);
if (!addr)
pr_warn_ratelimited(MOD "%s: Out of RQT memory\n",
pr_warn_ratelimited("%s: Out of RQT memory\n",
pci_name(rdev->lldi.pdev));
mutex_lock(&rdev->stats.lock);
if (addr) {
@ -341,7 +339,7 @@ u32 c4iw_rqtpool_alloc(struct c4iw_rdev *rdev, int size)
void c4iw_rqtpool_free(struct c4iw_rdev *rdev, u32 addr, int size)
{
PDBG("%s addr 0x%x size %d\n", __func__, addr, size << 6);
pr_debug("%s addr 0x%x size %d\n", __func__, addr, size << 6);
mutex_lock(&rdev->stats.lock);
rdev->stats.rqt.cur -= roundup(size << 6, 1 << MIN_RQT_SHIFT);
mutex_unlock(&rdev->stats.lock);
@ -363,18 +361,17 @@ int c4iw_rqtpool_create(struct c4iw_rdev *rdev)
while (rqt_start < rqt_top) {
rqt_chunk = min(rqt_top - rqt_start + 1, rqt_chunk);
if (gen_pool_add(rdev->rqt_pool, rqt_start, rqt_chunk, -1)) {
PDBG("%s failed to add RQT chunk (%x/%x)\n",
__func__, rqt_start, rqt_chunk);
pr_debug("%s failed to add RQT chunk (%x/%x)\n",
__func__, rqt_start, rqt_chunk);
if (rqt_chunk <= 1024 << MIN_RQT_SHIFT) {
printk(KERN_WARNING MOD
"Failed to add all RQT chunks (%x/%x)\n",
rqt_start, rqt_top - rqt_start);
pr_warn("Failed to add all RQT chunks (%x/%x)\n",
rqt_start, rqt_top - rqt_start);
return 0;
}
rqt_chunk >>= 1;
} else {
PDBG("%s added RQT chunk (%x/%x)\n",
__func__, rqt_start, rqt_chunk);
pr_debug("%s added RQT chunk (%x/%x)\n",
__func__, rqt_start, rqt_chunk);
rqt_start += rqt_chunk;
}
}
@ -394,7 +391,7 @@ void c4iw_rqtpool_destroy(struct c4iw_rdev *rdev)
u32 c4iw_ocqp_pool_alloc(struct c4iw_rdev *rdev, int size)
{
unsigned long addr = gen_pool_alloc(rdev->ocqp_pool, size);
PDBG("%s addr 0x%x size %d\n", __func__, (u32)addr, size);
pr_debug("%s addr 0x%x size %d\n", __func__, (u32)addr, size);
if (addr) {
mutex_lock(&rdev->stats.lock);
rdev->stats.ocqp.cur += roundup(size, 1 << MIN_OCQP_SHIFT);
@ -407,7 +404,7 @@ u32 c4iw_ocqp_pool_alloc(struct c4iw_rdev *rdev, int size)
void c4iw_ocqp_pool_free(struct c4iw_rdev *rdev, u32 addr, int size)
{
PDBG("%s addr 0x%x size %d\n", __func__, addr, size);
pr_debug("%s addr 0x%x size %d\n", __func__, addr, size);
mutex_lock(&rdev->stats.lock);
rdev->stats.ocqp.cur -= roundup(size, 1 << MIN_OCQP_SHIFT);
mutex_unlock(&rdev->stats.lock);
@ -429,18 +426,17 @@ int c4iw_ocqp_pool_create(struct c4iw_rdev *rdev)
while (start < top) {
chunk = min(top - start + 1, chunk);
if (gen_pool_add(rdev->ocqp_pool, start, chunk, -1)) {
PDBG("%s failed to add OCQP chunk (%x/%x)\n",
__func__, start, chunk);
pr_debug("%s failed to add OCQP chunk (%x/%x)\n",
__func__, start, chunk);
if (chunk <= 1024 << MIN_OCQP_SHIFT) {
printk(KERN_WARNING MOD
"Failed to add all OCQP chunks (%x/%x)\n",
start, top - start);
pr_warn("Failed to add all OCQP chunks (%x/%x)\n",
start, top - start);
return 0;
}
chunk >>= 1;
} else {
PDBG("%s added OCQP chunk (%x/%x)\n",
__func__, start, chunk);
pr_debug("%s added OCQP chunk (%x/%x)\n",
__func__, start, chunk);
start += chunk;
}
}


@ -466,14 +466,14 @@ static inline void t4_ring_sq_db(struct t4_wq *wq, u16 inc, union t4_wr *wqe)
wmb();
if (wq->sq.bar2_va) {
if (inc == 1 && wq->sq.bar2_qid == 0 && wqe) {
PDBG("%s: WC wq->sq.pidx = %d\n",
__func__, wq->sq.pidx);
pr_debug("%s: WC wq->sq.pidx = %d\n",
__func__, wq->sq.pidx);
pio_copy((u64 __iomem *)
(wq->sq.bar2_va + SGE_UDB_WCDOORBELL),
(u64 *)wqe);
} else {
PDBG("%s: DB wq->sq.pidx = %d\n",
__func__, wq->sq.pidx);
pr_debug("%s: DB wq->sq.pidx = %d\n",
__func__, wq->sq.pidx);
writel(PIDX_T5_V(inc) | QID_V(wq->sq.bar2_qid),
wq->sq.bar2_va + SGE_UDB_KDOORBELL);
}
@ -493,14 +493,14 @@ static inline void t4_ring_rq_db(struct t4_wq *wq, u16 inc,
wmb();
if (wq->rq.bar2_va) {
if (inc == 1 && wq->rq.bar2_qid == 0 && wqe) {
PDBG("%s: WC wq->rq.pidx = %d\n",
__func__, wq->rq.pidx);
pr_debug("%s: WC wq->rq.pidx = %d\n",
__func__, wq->rq.pidx);
pio_copy((u64 __iomem *)
(wq->rq.bar2_va + SGE_UDB_WCDOORBELL),
(void *)wqe);
} else {
PDBG("%s: DB wq->rq.pidx = %d\n",
__func__, wq->rq.pidx);
pr_debug("%s: DB wq->rq.pidx = %d\n",
__func__, wq->rq.pidx);
writel(PIDX_T5_V(inc) | QID_V(wq->rq.bar2_qid),
wq->rq.bar2_va + SGE_UDB_KDOORBELL);
}
@ -601,7 +601,8 @@ static inline void t4_swcq_produce(struct t4_cq *cq)
{
cq->sw_in_use++;
if (cq->sw_in_use == cq->size) {
PDBG("%s cxgb4 sw cq overflow cqid %u\n", __func__, cq->cqid);
pr_debug("%s cxgb4 sw cq overflow cqid %u\n",
__func__, cq->cqid);
cq->error = 1;
BUG_ON(1);
}
@ -656,7 +657,7 @@ static inline int t4_next_hw_cqe(struct t4_cq *cq, struct t4_cqe **cqe)
if (cq->queue[prev_cidx].bits_type_ts != cq->bits_type_ts) {
ret = -EOVERFLOW;
cq->error = 1;
printk(KERN_ERR MOD "cq overflow cqid %u\n", cq->cqid);
pr_err("cq overflow cqid %u\n", cq->cqid);
BUG_ON(1);
} else if (t4_valid_cqe(cq, &cq->queue[cq->cidx])) {
@ -672,7 +673,8 @@ static inline int t4_next_hw_cqe(struct t4_cq *cq, struct t4_cqe **cqe)
static inline struct t4_cqe *t4_next_sw_cqe(struct t4_cq *cq)
{
if (cq->sw_in_use == cq->size) {
PDBG("%s cxgb4 sw cq overflow cqid %u\n", __func__, cq->cqid);
pr_debug("%s cxgb4 sw cq overflow cqid %u\n",
__func__, cq->cqid);
cq->error = 1;
BUG_ON(1);
return NULL;


@ -12,7 +12,7 @@ hfi1-y := affinity.o chip.o device.o driver.o efivar.o \
init.o intr.o mad.o mmu_rb.o pcie.o pio.o pio_copy.o platform.o \
qp.o qsfp.o rc.o ruc.o sdma.o sysfs.o trace.o \
uc.o ud.o user_exp_rcv.o user_pages.o user_sdma.o verbs.o \
verbs_txreq.o
verbs_txreq.o vnic_main.o vnic_sdma.o
hfi1-$(CONFIG_DEBUG_FS) += debugfs.o
CFLAGS_trace.o = -I$(src)


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015-2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -229,14 +229,17 @@ static inline void aspm_ctx_timer_function(unsigned long data)
spin_unlock_irqrestore(&rcd->aspm_lock, flags);
}
/* Disable interrupt processing for verbs contexts when PSM contexts are open */
/*
* Disable interrupt processing for verbs contexts when PSM or VNIC contexts
* are open.
*/
static inline void aspm_disable_all(struct hfi1_devdata *dd)
{
struct hfi1_ctxtdata *rcd;
unsigned long flags;
unsigned i;
for (i = 0; i < dd->first_user_ctxt; i++) {
for (i = 0; i < dd->first_dyn_alloc_ctxt; i++) {
rcd = dd->rcd[i];
del_timer_sync(&rcd->aspm_timer);
spin_lock_irqsave(&rcd->aspm_lock, flags);
@ -260,7 +263,7 @@ static inline void aspm_enable_all(struct hfi1_devdata *dd)
if (aspm_mode != ASPM_MODE_DYNAMIC)
return;
for (i = 0; i < dd->first_user_ctxt; i++) {
for (i = 0; i < dd->first_dyn_alloc_ctxt; i++) {
rcd = dd->rcd[i];
spin_lock_irqsave(&rcd->aspm_lock, flags);
rcd->aspm_intr_enable = true;
@ -276,7 +279,7 @@ static inline void aspm_ctx_init(struct hfi1_ctxtdata *rcd)
(unsigned long)rcd);
rcd->aspm_intr_supported = rcd->dd->aspm_supported &&
aspm_mode == ASPM_MODE_DYNAMIC &&
rcd->ctxt < rcd->dd->first_user_ctxt;
rcd->ctxt < rcd->dd->first_dyn_alloc_ctxt;
}
static inline void aspm_init(struct hfi1_devdata *dd)
@ -286,7 +289,7 @@ static inline void aspm_init(struct hfi1_devdata *dd)
spin_lock_init(&dd->aspm_lock);
dd->aspm_supported = aspm_hw_l1_supported(dd);
for (i = 0; i < dd->first_user_ctxt; i++)
for (i = 0; i < dd->first_dyn_alloc_ctxt; i++)
aspm_ctx_init(dd->rcd[i]);
/* Start with ASPM disabled */


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015 - 2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -64,6 +64,7 @@
#include "platform.h"
#include "aspm.h"
#include "affinity.h"
#include "debugfs.h"
#define NUM_IB_PORTS 1
@ -125,9 +126,16 @@ struct flag_table {
#define DEFAULT_KRCVQS 2
#define MIN_KERNEL_KCTXTS 2
#define FIRST_KERNEL_KCTXT 1
/* sizes for both the QP and RSM map tables */
#define NUM_MAP_ENTRIES 256
#define NUM_MAP_REGS 32
/*
* RSM instance allocation
* 0 - Verbs
* 1 - User Fecn Handling
* 2 - Vnic
*/
#define RSM_INS_VERBS 0
#define RSM_INS_FECN 1
#define RSM_INS_VNIC 2
/* Bit offset into the GUID which carries HFI id information */
#define GUID_HFI_INDEX_SHIFT 39
@ -138,8 +146,7 @@ struct flag_table {
#define is_emulator_p(dd) ((((dd)->irev) & 0xf) == 3)
#define is_emulator_s(dd) ((((dd)->irev) & 0xf) == 4)
/* RSM fields */
/* RSM fields for Verbs */
/* packet type */
#define IB_PACKET_TYPE 2ull
#define QW_SHIFT 6ull
@ -169,6 +176,28 @@ struct flag_table {
/* QPN[m+n:1] QW 1, OFFSET 1 */
#define QPN_SELECT_OFFSET ((1ull << QW_SHIFT) | (1ull))
/* RSM fields for Vnic */
/* L2_TYPE: QW 0, OFFSET 61 - for match */
#define L2_TYPE_QW 0ull
#define L2_TYPE_BIT_OFFSET 61ull
#define L2_TYPE_OFFSET(off) ((L2_TYPE_QW << QW_SHIFT) | (off))
#define L2_TYPE_MATCH_OFFSET L2_TYPE_OFFSET(L2_TYPE_BIT_OFFSET)
#define L2_TYPE_MASK 3ull
#define L2_16B_VALUE 2ull
/* L4_TYPE QW 1, OFFSET 0 - for match */
#define L4_TYPE_QW 1ull
#define L4_TYPE_BIT_OFFSET 0ull
#define L4_TYPE_OFFSET(off) ((L4_TYPE_QW << QW_SHIFT) | (off))
#define L4_TYPE_MATCH_OFFSET L4_TYPE_OFFSET(L4_TYPE_BIT_OFFSET)
#define L4_16B_TYPE_MASK 0xFFull
#define L4_16B_ETH_VALUE 0x78ull
/* 16B VESWID - for select */
#define L4_16B_HDR_VESWID_OFFSET ((2 << QW_SHIFT) | (16ull))
/* 16B ENTROPY - for select */
#define L2_16B_ENTROPY_OFFSET ((1 << QW_SHIFT) | (32ull))
/* defines to build power on SC2VL table */
#define SC2VL_VAL( \
num, \
@ -1045,6 +1074,8 @@ static void dc_start(struct hfi1_devdata *);
static int qos_rmt_entries(struct hfi1_devdata *dd, unsigned int *mp,
unsigned int *np);
static void clear_full_mgmt_pkey(struct hfi1_pportdata *ppd);
static int wait_link_transfer_active(struct hfi1_devdata *dd, int wait_ms);
static void clear_rsm_rule(struct hfi1_devdata *dd, u8 rule_index);
/*
* Error interrupt table entry. This is used as input to the interrupt
@ -6379,18 +6410,17 @@ static void lcb_shutdown(struct hfi1_devdata *dd, int abort)
*
* The expectation is that the caller of this routine would have taken
* care of properly transitioning the link into the correct state.
* NOTE: the caller needs to acquire the dd->dc8051_lock lock
* before calling this function.
*/
static void dc_shutdown(struct hfi1_devdata *dd)
static void _dc_shutdown(struct hfi1_devdata *dd)
{
unsigned long flags;
lockdep_assert_held(&dd->dc8051_lock);
spin_lock_irqsave(&dd->dc8051_lock, flags);
if (dd->dc_shutdown) {
spin_unlock_irqrestore(&dd->dc8051_lock, flags);
if (dd->dc_shutdown)
return;
}
dd->dc_shutdown = 1;
spin_unlock_irqrestore(&dd->dc8051_lock, flags);
/* Shutdown the LCB */
lcb_shutdown(dd, 1);
/*
@ -6401,35 +6431,45 @@ static void dc_shutdown(struct hfi1_devdata *dd)
write_csr(dd, DC_DC8051_CFG_RST, 0x1);
}
static void dc_shutdown(struct hfi1_devdata *dd)
{
mutex_lock(&dd->dc8051_lock);
_dc_shutdown(dd);
mutex_unlock(&dd->dc8051_lock);
}
/*
* Calling this after the DC has been brought out of reset should not
* do any damage.
* NOTE: the caller needs to acquire the dd->dc8051_lock lock
* before calling this function.
*/
static void dc_start(struct hfi1_devdata *dd)
static void _dc_start(struct hfi1_devdata *dd)
{
unsigned long flags;
int ret;
lockdep_assert_held(&dd->dc8051_lock);
spin_lock_irqsave(&dd->dc8051_lock, flags);
if (!dd->dc_shutdown)
goto done;
spin_unlock_irqrestore(&dd->dc8051_lock, flags);
return;
/* Take the 8051 out of reset */
write_csr(dd, DC_DC8051_CFG_RST, 0ull);
/* Wait until 8051 is ready */
ret = wait_fm_ready(dd, TIMEOUT_8051_START);
if (ret) {
if (wait_fm_ready(dd, TIMEOUT_8051_START))
dd_dev_err(dd, "%s: timeout starting 8051 firmware\n",
__func__);
}
/* Take away reset for LCB and RX FPE (set in lcb_shutdown). */
write_csr(dd, DCC_CFG_RESET, 0x10);
/* lcb_shutdown() with abort=1 does not restore these */
write_csr(dd, DC_LCB_ERR_EN, dd->lcb_err_en);
spin_lock_irqsave(&dd->dc8051_lock, flags);
dd->dc_shutdown = 0;
done:
spin_unlock_irqrestore(&dd->dc8051_lock, flags);
}
static void dc_start(struct hfi1_devdata *dd)
{
mutex_lock(&dd->dc8051_lock);
_dc_start(dd);
mutex_unlock(&dd->dc8051_lock);
}
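
The hfi1 change above converts dd->dc8051_lock from a spinlock into a mutex and splits dc_shutdown()/dc_start() into lock-free _dc_shutdown()/_dc_start() helpers plus thin locking wrappers, so that do_8051_command() later in this file can restart the 8051 while already holding the lock. A small sketch of that locked/unlocked split, assuming an ordinary pthread mutex in place of dc8051_lock (all names invented):

/*
 * Sketch of the locked/unlocked helper split.  _do_shutdown() must be
 * called with the lock already held; do_shutdown() is the public entry
 * point that takes and releases it.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static int shut_down;

static void _do_shutdown(void)		/* caller holds state_lock */
{
	if (shut_down)
		return;
	shut_down = 1;
	puts("shutting down");
}

static void do_shutdown(void)		/* takes and releases the lock */
{
	pthread_mutex_lock(&state_lock);
	_do_shutdown();
	pthread_mutex_unlock(&state_lock);
}

int main(void)
{
	do_shutdown();
	do_shutdown();			/* second call is a no-op */
	return 0;
}

A path that already owns the mutex (the command handler in the hunks below) calls the underscore variant directly, which is exactly what the lockdep_assert_held() annotations are guarding.
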
/*
@ -6701,7 +6741,13 @@ static void rxe_kernel_unfreeze(struct hfi1_devdata *dd)
int i;
/* enable all kernel contexts */
for (i = 0; i < dd->n_krcv_queues; i++) {
for (i = 0; i < dd->num_rcv_contexts; i++) {
struct hfi1_ctxtdata *rcd = dd->rcd[i];
/* Ensure all non-user contexts(including vnic) are enabled */
if (!rcd || !rcd->sc || (rcd->sc->type == SC_USER))
continue;
rcvmask = HFI1_RCVCTRL_CTXT_ENB;
/* HFI1_RCVCTRL_TAILUPD_[ENB|DIS] needs to be set explicitly */
rcvmask |= HFI1_CAP_KGET_MASK(dd->rcd[i]->flags, DMA_RTAIL) ?
@ -7077,7 +7123,7 @@ static void add_full_mgmt_pkey(struct hfi1_pportdata *ppd)
{
struct hfi1_devdata *dd = ppd->dd;
/* Sanity check - ppd->pkeys[2] should be 0, or already initalized */
/* Sanity check - ppd->pkeys[2] should be 0, or already initialized */
if (!((ppd->pkeys[2] == 0) || (ppd->pkeys[2] == FULL_MGMT_P_KEY)))
dd_dev_warn(dd, "%s pkey[2] already set to 0x%x, resetting it to 0x%x\n",
__func__, ppd->pkeys[2], FULL_MGMT_P_KEY);
@ -7165,7 +7211,7 @@ static void get_link_widths(struct hfi1_devdata *dd, u16 *tx_width,
* set the max_rate field in handle_verify_cap until v0.19.
*/
if ((dd->icode == ICODE_RTL_SILICON) &&
(dd->dc8051_ver < dc8051_ver(0, 19))) {
(dd->dc8051_ver < dc8051_ver(0, 19, 0))) {
/* max_rate: 0 = 12.5G, 1 = 25G */
switch (max_rate) {
case 0:
@ -7277,15 +7323,6 @@ void handle_verify_cap(struct work_struct *work)
lcb_shutdown(dd, 0);
adjust_lcb_for_fpga_serdes(dd);
/*
* These are now valid:
* remote VerifyCap fields in the general LNI config
* CSR DC8051_STS_REMOTE_GUID
* CSR DC8051_STS_REMOTE_NODE_TYPE
* CSR DC8051_STS_REMOTE_FM_SECURITY
* CSR DC8051_STS_REMOTE_PORT_NO
*/
read_vc_remote_phy(dd, &power_management, &continious);
read_vc_remote_fabric(dd, &vau, &z, &vcu, &vl15buf,
&partner_supported_crc);
@ -7350,7 +7387,7 @@ void handle_verify_cap(struct work_struct *work)
}
ppd->link_speed_active = 0; /* invalid value */
if (dd->dc8051_ver < dc8051_ver(0, 20)) {
if (dd->dc8051_ver < dc8051_ver(0, 20, 0)) {
/* remote_tx_rate: 0 = 12.5G, 1 = 25G */
switch (remote_tx_rate) {
case 0:
@ -7416,20 +7453,6 @@ void handle_verify_cap(struct work_struct *work)
write_csr(dd, DC_LCB_ERR_EN, 0); /* mask LCB errors */
set_8051_lcb_access(dd);
ppd->neighbor_guid =
read_csr(dd, DC_DC8051_STS_REMOTE_GUID);
ppd->neighbor_port_number = read_csr(dd, DC_DC8051_STS_REMOTE_PORT_NO) &
DC_DC8051_STS_REMOTE_PORT_NO_VAL_SMASK;
ppd->neighbor_type =
read_csr(dd, DC_DC8051_STS_REMOTE_NODE_TYPE) &
DC_DC8051_STS_REMOTE_NODE_TYPE_VAL_MASK;
ppd->neighbor_fm_security =
read_csr(dd, DC_DC8051_STS_REMOTE_FM_SECURITY) &
DC_DC8051_STS_LOCAL_FM_SECURITY_DISABLED_MASK;
dd_dev_info(dd,
"Neighbor Guid: %llx Neighbor type %d MgmtAllowed %d FM security bypass %d\n",
ppd->neighbor_guid, ppd->neighbor_type,
ppd->mgmt_allowed, ppd->neighbor_fm_security);
if (ppd->mgmt_allowed)
add_full_mgmt_pkey(ppd);
@ -7897,6 +7920,9 @@ static void handle_dcc_err(struct hfi1_devdata *dd, u32 unused, u64 reg)
reg &= ~DCC_ERR_FLG_EN_CSR_ACCESS_BLOCKED_HOST_SMASK;
}
if (unlikely(hfi1_dbg_fault_suppress_err(&dd->verbs_dev)))
reg &= ~DCC_ERR_FLG_LATE_EBP_ERR_SMASK;
/* report any remaining errors */
if (reg)
dd_dev_info_ratelimited(dd, "DCC Error: %s\n",
@ -7995,7 +8021,9 @@ static void is_rcv_avail_int(struct hfi1_devdata *dd, unsigned int source)
if (likely(source < dd->num_rcv_contexts)) {
rcd = dd->rcd[source];
if (rcd) {
if (source < dd->first_user_ctxt)
/* Check for non-user contexts, including vnic */
if ((source < dd->first_dyn_alloc_ctxt) ||
(rcd->sc && (rcd->sc->type == SC_KERNEL)))
rcd->do_interrupt(rcd, 0);
else
handle_user_interrupt(rcd);
@ -8023,7 +8051,8 @@ static void is_rcv_urgent_int(struct hfi1_devdata *dd, unsigned int source)
rcd = dd->rcd[source];
if (rcd) {
/* only pay attention to user urgent interrupts */
if (source >= dd->first_user_ctxt)
if ((source >= dd->first_dyn_alloc_ctxt) &&
(!rcd->sc || (rcd->sc->type == SC_USER)))
handle_user_interrupt(rcd);
return; /* OK */
}
@ -8156,10 +8185,10 @@ static irqreturn_t sdma_interrupt(int irq, void *data)
/* handle the interrupt(s) */
sdma_engine_interrupt(sde, status);
} else
} else {
dd_dev_err(dd, "SDMA engine %u interrupt, but no status bits set\n",
sde->this_idx);
}
return IRQ_HANDLED;
}
@ -8343,6 +8372,52 @@ static int read_lcb_via_8051(struct hfi1_devdata *dd, u32 addr, u64 *data)
return 0;
}
/*
* Provide a cache for some of the LCB registers in case the LCB is
* unavailable.
* (The LCB is unavailable in certain link states, for example.)
*/
struct lcb_datum {
u32 off;
u64 val;
};
static struct lcb_datum lcb_cache[] = {
{ DC_LCB_ERR_INFO_RX_REPLAY_CNT, 0},
{ DC_LCB_ERR_INFO_SEQ_CRC_CNT, 0 },
{ DC_LCB_ERR_INFO_REINIT_FROM_PEER_CNT, 0 },
};
static void update_lcb_cache(struct hfi1_devdata *dd)
{
int i;
int ret;
u64 val;
for (i = 0; i < ARRAY_SIZE(lcb_cache); i++) {
ret = read_lcb_csr(dd, lcb_cache[i].off, &val);
/* Update if we get good data */
if (likely(ret != -EBUSY))
lcb_cache[i].val = val;
}
}
static int read_lcb_cache(u32 off, u64 *val)
{
int i;
for (i = 0; i < ARRAY_SIZE(lcb_cache); i++) {
if (lcb_cache[i].off == off) {
*val = lcb_cache[i].val;
return 0;
}
}
pr_warn("%s bad offset 0x%x\n", __func__, off);
return -1;
}
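
read_lcb_cache() is a plain linear lookup over the small lcb_cache[] table; the point of the cache is that a caller asking for one of those counters while the link is transitioning gets the last good value instead of -EBUSY. A compact sketch of the same read-through-with-fallback idea (live_read()/cached_read() are invented names, not driver API):

/*
 * Sketch of the read-through fallback: live_read() can fail while the
 * hardware block is unavailable, in which case the last value captured
 * by the cache is returned instead of an error.
 */
#include <stdio.h>

struct cached_reg {
	unsigned int off;
	unsigned long long val;
};

static struct cached_reg cache[] = {
	{ 0x10, 42 },
	{ 0x18, 7 },
};

static int live_read(unsigned int off, unsigned long long *val)
{
	(void)off; (void)val;
	return -1;		/* pretend the block is busy right now */
}

static int cached_read(unsigned int off, unsigned long long *val)
{
	unsigned int i;

	if (!live_read(off, val))
		return 0;	/* fresh value */
	for (i = 0; i < sizeof(cache) / sizeof(cache[0]); i++) {
		if (cache[i].off == off) {
			*val = cache[i].val;
			return 0;	/* possibly stale, but usable */
		}
	}
	return -1;
}

int main(void)
{
	unsigned long long v;

	if (!cached_read(0x10, &v))
		printf("reg 0x10 = %llu\n", v);
	return 0;
}
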
/*
* Read an LCB CSR. Access may not be in host control, so check.
* Return 0 on success, -EBUSY on failure.
@ -8354,9 +8429,13 @@ int read_lcb_csr(struct hfi1_devdata *dd, u32 addr, u64 *data)
/* if up, go through the 8051 for the value */
if (ppd->host_link_state & HLS_UP)
return read_lcb_via_8051(dd, addr, data);
/* if going up or down, no access */
if (ppd->host_link_state & (HLS_GOING_UP | HLS_GOING_OFFLINE))
return -EBUSY;
/* if going up or down, check the cache, otherwise, no access */
if (ppd->host_link_state & (HLS_GOING_UP | HLS_GOING_OFFLINE)) {
if (read_lcb_cache(addr, data))
return -EBUSY;
return 0;
}
/* otherwise, host has access */
*data = read_csr(dd, addr);
return 0;
@ -8371,7 +8450,7 @@ static int write_lcb_via_8051(struct hfi1_devdata *dd, u32 addr, u64 data)
int ret;
if (dd->icode == ICODE_FUNCTIONAL_SIMULATOR ||
(dd->dc8051_ver < dc8051_ver(0, 20))) {
(dd->dc8051_ver < dc8051_ver(0, 20, 0))) {
if (acquire_lcb_access(dd, 0) == 0) {
write_csr(dd, addr, data);
release_lcb_access(dd, 0);
@ -8420,16 +8499,11 @@ static int do_8051_command(
{
u64 reg, completed;
int return_code;
unsigned long flags;
unsigned long timeout;
hfi1_cdbg(DC8051, "type %d, data 0x%012llx", type, in_data);
/*
* Alternative to holding the lock for a long time:
* - keep busy wait - have other users bounce off
*/
spin_lock_irqsave(&dd->dc8051_lock, flags);
mutex_lock(&dd->dc8051_lock);
/* We can't send any commands to the 8051 if it's in reset */
if (dd->dc_shutdown) {
@ -8455,10 +8529,8 @@ static int do_8051_command(
return_code = -ENXIO;
goto fail;
}
spin_unlock_irqrestore(&dd->dc8051_lock, flags);
dc_shutdown(dd);
dc_start(dd);
spin_lock_irqsave(&dd->dc8051_lock, flags);
_dc_shutdown(dd);
_dc_start(dd);
}
/*
@ -8539,8 +8611,7 @@ static int do_8051_command(
write_csr(dd, DC_DC8051_CFG_HOST_CMD_0, 0);
fail:
spin_unlock_irqrestore(&dd->dc8051_lock, flags);
mutex_unlock(&dd->dc8051_lock);
return return_code;
}
@ -8677,13 +8748,20 @@ static void read_remote_device_id(struct hfi1_devdata *dd, u16 *device_id,
& REMOTE_DEVICE_REV_MASK;
}
void read_misc_status(struct hfi1_devdata *dd, u8 *ver_a, u8 *ver_b)
void read_misc_status(struct hfi1_devdata *dd, u8 *ver_major, u8 *ver_minor,
u8 *ver_patch)
{
u32 frame;
read_8051_config(dd, MISC_STATUS, GENERAL_CONFIG, &frame);
*ver_a = (frame >> STS_FM_VERSION_A_SHIFT) & STS_FM_VERSION_A_MASK;
*ver_b = (frame >> STS_FM_VERSION_B_SHIFT) & STS_FM_VERSION_B_MASK;
*ver_major = (frame >> STS_FM_VERSION_MAJOR_SHIFT) &
STS_FM_VERSION_MAJOR_MASK;
*ver_minor = (frame >> STS_FM_VERSION_MINOR_SHIFT) &
STS_FM_VERSION_MINOR_MASK;
read_8051_config(dd, VERSION_PATCH, GENERAL_CONFIG, &frame);
*ver_patch = (frame >> STS_FM_VERSION_PATCH_SHIFT) &
STS_FM_VERSION_PATCH_MASK;
}
static void read_vc_remote_phy(struct hfi1_devdata *dd, u8 *power_management,
@ -8891,8 +8969,6 @@ int send_idle_sma(struct hfi1_devdata *dd, u64 message)
*/
static int do_quick_linkup(struct hfi1_devdata *dd)
{
u64 reg;
unsigned long timeout;
int ret;
lcb_shutdown(dd, 0);
@ -8915,19 +8991,9 @@ static int do_quick_linkup(struct hfi1_devdata *dd)
write_csr(dd, DC_LCB_CFG_RUN,
1ull << DC_LCB_CFG_RUN_EN_SHIFT);
/* watch LCB_STS_LINK_TRANSFER_ACTIVE */
timeout = jiffies + msecs_to_jiffies(10);
while (1) {
reg = read_csr(dd, DC_LCB_STS_LINK_TRANSFER_ACTIVE);
if (reg)
break;
if (time_after(jiffies, timeout)) {
dd_dev_err(dd,
"timeout waiting for LINK_TRANSFER_ACTIVE\n");
return -ETIMEDOUT;
}
udelay(2);
}
ret = wait_link_transfer_active(dd, 10);
if (ret)
return ret;
write_csr(dd, DC_LCB_CFG_ALLOW_LINK_UP,
1ull << DC_LCB_CFG_ALLOW_LINK_UP_VAL_SHIFT);
@ -9091,7 +9157,7 @@ static int set_local_link_attributes(struct hfi1_pportdata *ppd)
if (ret)
goto set_local_link_attributes_fail;
if (dd->dc8051_ver < dc8051_ver(0, 20)) {
if (dd->dc8051_ver < dc8051_ver(0, 20, 0)) {
/* set the tx rate to the fastest enabled */
if (ppd->link_speed_enabled & OPA_LINK_SPEED_25G)
ppd->local_tx_rate = 1;
@ -9274,7 +9340,7 @@ static int handle_qsfp_error_conditions(struct hfi1_pportdata *ppd,
if ((qsfp_interrupt_status[0] & QSFP_HIGH_TEMP_ALARM) ||
(qsfp_interrupt_status[0] & QSFP_HIGH_TEMP_WARNING))
dd_dev_info(dd, "%s: QSFP cable on fire\n",
dd_dev_info(dd, "%s: QSFP cable temperature too high\n",
__func__);
if ((qsfp_interrupt_status[0] & QSFP_LOW_TEMP_ALARM) ||
@ -9494,8 +9560,11 @@ static int test_qsfp_read(struct hfi1_pportdata *ppd)
int ret;
u8 status;
/* report success if not a QSFP */
if (ppd->port_type != PORT_TYPE_QSFP)
/*
* Report success if not a QSFP or, if it is a QSFP, the cable is
* not present
*/
if (ppd->port_type != PORT_TYPE_QSFP || !qsfp_mod_present(ppd))
return 0;
/* read byte 2, the status byte */
@ -10082,6 +10151,64 @@ static void check_lni_states(struct hfi1_pportdata *ppd)
decode_state_complete(ppd, last_remote_state, "received");
}
/* wait for wait_ms for LINK_TRANSFER_ACTIVE to go to 1 */
static int wait_link_transfer_active(struct hfi1_devdata *dd, int wait_ms)
{
u64 reg;
unsigned long timeout;
/* watch LCB_STS_LINK_TRANSFER_ACTIVE */
timeout = jiffies + msecs_to_jiffies(wait_ms);
while (1) {
reg = read_csr(dd, DC_LCB_STS_LINK_TRANSFER_ACTIVE);
if (reg)
break;
if (time_after(jiffies, timeout)) {
dd_dev_err(dd,
"timeout waiting for LINK_TRANSFER_ACTIVE\n");
return -ETIMEDOUT;
}
udelay(2);
}
return 0;
}
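
wait_link_transfer_active() is the usual poll-until-set-or-timeout idiom: read the status register, back off a couple of microseconds, and give up once the deadline passes. A self-contained userspace sketch of the same idiom, with clock_gettime() in place of jiffies (helper names are invented for the example):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* poll cond() every poll_us microseconds until it returns true or
 * timeout_ms elapses; 0 on success, -ETIMEDOUT otherwise */
static int poll_until(bool (*cond)(void), long timeout_ms, long poll_us)
{
	struct timespec start, now;
	struct timespec nap = { 0, poll_us * 1000L };

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (;;) {
		long elapsed_ms;

		if (cond())
			return 0;
		clock_gettime(CLOCK_MONOTONIC, &now);
		elapsed_ms = (now.tv_sec - start.tv_sec) * 1000L +
			     (now.tv_nsec - start.tv_nsec) / 1000000L;
		if (elapsed_ms > timeout_ms)
			return -ETIMEDOUT;
		nanosleep(&nap, NULL);
	}
}

static int calls;

static bool becomes_true(void)
{
	return ++calls > 5;	/* stand-in for reading a status register */
}

int main(void)
{
	int ret = poll_until(becomes_true, 10, 2);

	printf("poll_until returned %d after %d reads\n", ret, calls);
	return 0;
}

The driver busy-waits with udelay() because the expected wait is tiny; the sketch sleeps between reads instead, which is the friendlier choice in ordinary userspace code.
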
/* called when the logical link state is not down as it should be */
static void force_logical_link_state_down(struct hfi1_pportdata *ppd)
{
struct hfi1_devdata *dd = ppd->dd;
/*
* Bring link up in LCB loopback
*/
write_csr(dd, DC_LCB_CFG_TX_FIFOS_RESET, 1);
write_csr(dd, DC_LCB_CFG_IGNORE_LOST_RCLK,
DC_LCB_CFG_IGNORE_LOST_RCLK_EN_SMASK);
write_csr(dd, DC_LCB_CFG_LANE_WIDTH, 0);
write_csr(dd, DC_LCB_CFG_REINIT_AS_SLAVE, 0);
write_csr(dd, DC_LCB_CFG_CNT_FOR_SKIP_STALL, 0x110);
write_csr(dd, DC_LCB_CFG_LOOPBACK, 0x2);
write_csr(dd, DC_LCB_CFG_TX_FIFOS_RESET, 0);
(void)read_csr(dd, DC_LCB_CFG_TX_FIFOS_RESET);
udelay(3);
write_csr(dd, DC_LCB_CFG_ALLOW_LINK_UP, 1);
write_csr(dd, DC_LCB_CFG_RUN, 1ull << DC_LCB_CFG_RUN_EN_SHIFT);
wait_link_transfer_active(dd, 100);
/*
* Bring the link down again.
*/
write_csr(dd, DC_LCB_CFG_TX_FIFOS_RESET, 1);
write_csr(dd, DC_LCB_CFG_ALLOW_LINK_UP, 0);
write_csr(dd, DC_LCB_CFG_IGNORE_LOST_RCLK, 0);
/* call again to adjust ppd->statusp, if needed */
get_logical_state(ppd);
}
/*
* Helper for set_link_state(). Do not call except from that routine.
* Expects ppd->hls_mutex to be held.
@ -10098,6 +10225,8 @@ static int goto_offline(struct hfi1_pportdata *ppd, u8 rem_reason)
int do_transition;
int do_wait;
update_lcb_cache(dd);
previous_state = ppd->host_link_state;
ppd->host_link_state = HLS_GOING_OFFLINE;
pstate = read_physical_state(dd);
@ -10135,15 +10264,18 @@ static int goto_offline(struct hfi1_pportdata *ppd, u8 rem_reason)
return ret;
}
/* make sure the logical state is also down */
wait_logical_linkstate(ppd, IB_PORT_DOWN, 1000);
/*
* Now in charge of LCB - must be after the physical state is
* offline.quiet and before host_link_state is changed.
*/
set_host_lcb_access(dd);
write_csr(dd, DC_LCB_ERR_EN, ~0ull); /* watch LCB errors */
/* make sure the logical state is also down */
ret = wait_logical_linkstate(ppd, IB_PORT_DOWN, 1000);
if (ret)
force_logical_link_state_down(ppd);
ppd->host_link_state = HLS_LINK_COOLDOWN; /* LCB access allowed */
if (ppd->port_type == PORT_TYPE_QSFP &&
@ -10380,11 +10512,8 @@ int set_link_state(struct hfi1_pportdata *ppd, u32 state)
goto unexpected;
}
ppd->host_link_state = HLS_UP_INIT;
ret = wait_logical_linkstate(ppd, IB_PORT_INIT, 1000);
if (ret) {
/* logical state didn't change, stay at going_up */
ppd->host_link_state = HLS_GOING_UP;
dd_dev_err(dd,
"%s: logical state did not change to INIT\n",
__func__);
@ -10398,6 +10527,7 @@ int set_link_state(struct hfi1_pportdata *ppd, u32 state)
add_rcvctrl(dd, RCV_CTRL_RCV_PORT_ENABLE_SMASK);
handle_linkup_change(dd, 1);
ppd->host_link_state = HLS_UP_INIT;
}
break;
case HLS_UP_ARMED:
@ -11853,6 +11983,10 @@ static void free_cntrs(struct hfi1_devdata *dd)
dd->scntrs = NULL;
kfree(dd->cntrnames);
dd->cntrnames = NULL;
if (dd->update_cntr_wq) {
destroy_workqueue(dd->update_cntr_wq);
dd->update_cntr_wq = NULL;
}
}
static u64 read_dev_port_cntr(struct hfi1_devdata *dd, struct cntr_entry *entry,
@ -12008,7 +12142,7 @@ u64 write_port_cntr(struct hfi1_pportdata *ppd, int index, int vl, u64 data)
return write_dev_port_cntr(ppd->dd, entry, sval, ppd, vl, data);
}
static void update_synth_timer(unsigned long opaque)
static void do_update_synth_timer(struct work_struct *work)
{
u64 cur_tx;
u64 cur_rx;
@ -12017,8 +12151,8 @@ static void update_synth_timer(unsigned long opaque)
int i, j, vl;
struct hfi1_pportdata *ppd;
struct cntr_entry *entry;
struct hfi1_devdata *dd = (struct hfi1_devdata *)opaque;
struct hfi1_devdata *dd = container_of(work, struct hfi1_devdata,
update_cntr_work);
/*
* Rather than keep beating on the CSRs pick a minimal set that we can
@ -12101,7 +12235,13 @@ static void update_synth_timer(unsigned long opaque)
} else {
hfi1_cdbg(CNTR, "[%d] No update necessary", dd->unit);
}
}
static void update_synth_timer(unsigned long opaque)
{
struct hfi1_devdata *dd = (struct hfi1_devdata *)opaque;
queue_work(dd->update_cntr_wq, &dd->update_cntr_work);
mod_timer(&dd->synth_stats_timer, jiffies + HZ * SYNTH_CNT_TIME);
}
@ -12337,6 +12477,13 @@ static int init_cntrs(struct hfi1_devdata *dd)
if (init_cpu_counters(dd))
goto bail;
dd->update_cntr_wq = alloc_ordered_workqueue("hfi1_update_cntr_%d",
WQ_MEM_RECLAIM, dd->unit);
if (!dd->update_cntr_wq)
goto bail;
INIT_WORK(&dd->update_cntr_work, do_update_synth_timer);
mod_timer(&dd->synth_stats_timer, jiffies + HZ * SYNTH_CNT_TIME);
return 0;
bail:
@ -12726,7 +12873,10 @@ static int request_msix_irqs(struct hfi1_devdata *dd)
first_sdma = last_general;
last_sdma = first_sdma + dd->num_sdma;
first_rx = last_sdma;
last_rx = first_rx + dd->n_krcv_queues;
last_rx = first_rx + dd->n_krcv_queues + HFI1_NUM_VNIC_CTXT;
/* VNIC MSIx interrupts get mapped when VNIC contexts are created */
dd->first_dyn_msix_idx = first_rx + dd->n_krcv_queues;
/*
* Sanity check - the code expects all SDMA chip source
@ -12740,7 +12890,7 @@ static int request_msix_irqs(struct hfi1_devdata *dd)
const char *err_info;
irq_handler_t handler;
irq_handler_t thread = NULL;
void *arg;
void *arg = NULL;
int idx;
struct hfi1_ctxtdata *rcd = NULL;
struct sdma_engine *sde = NULL;
@ -12767,24 +12917,25 @@ static int request_msix_irqs(struct hfi1_devdata *dd)
} else if (first_rx <= i && i < last_rx) {
idx = i - first_rx;
rcd = dd->rcd[idx];
/* no interrupt if no rcd */
if (!rcd)
continue;
/*
* Set the interrupt register and mask for this
* context's interrupt.
*/
rcd->ireg = (IS_RCVAVAIL_START + idx) / 64;
rcd->imask = ((u64)1) <<
((IS_RCVAVAIL_START + idx) % 64);
handler = receive_context_interrupt;
thread = receive_context_thread;
arg = rcd;
snprintf(me->name, sizeof(me->name),
DRIVER_NAME "_%d kctxt%d", dd->unit, idx);
err_info = "receive context";
remap_intr(dd, IS_RCVAVAIL_START + idx, i);
me->type = IRQ_RCVCTXT;
if (rcd) {
/*
* Set the interrupt register and mask for this
* context's interrupt.
*/
rcd->ireg = (IS_RCVAVAIL_START + idx) / 64;
rcd->imask = ((u64)1) <<
((IS_RCVAVAIL_START + idx) % 64);
handler = receive_context_interrupt;
thread = receive_context_thread;
arg = rcd;
snprintf(me->name, sizeof(me->name),
DRIVER_NAME "_%d kctxt%d",
dd->unit, idx);
err_info = "receive context";
remap_intr(dd, IS_RCVAVAIL_START + idx, i);
me->type = IRQ_RCVCTXT;
rcd->msix_intr = i;
}
} else {
/* not in our expected range - complain, then
* ignore it
@ -12822,6 +12973,84 @@ static int request_msix_irqs(struct hfi1_devdata *dd)
return ret;
}
void hfi1_vnic_synchronize_irq(struct hfi1_devdata *dd)
{
int i;
if (!dd->num_msix_entries) {
synchronize_irq(dd->pcidev->irq);
return;
}
for (i = 0; i < dd->vnic.num_ctxt; i++) {
struct hfi1_ctxtdata *rcd = dd->vnic.ctxt[i];
struct hfi1_msix_entry *me = &dd->msix_entries[rcd->msix_intr];
synchronize_irq(me->msix.vector);
}
}
void hfi1_reset_vnic_msix_info(struct hfi1_ctxtdata *rcd)
{
struct hfi1_devdata *dd = rcd->dd;
struct hfi1_msix_entry *me = &dd->msix_entries[rcd->msix_intr];
if (!me->arg) /* => no irq, no affinity */
return;
hfi1_put_irq_affinity(dd, me);
free_irq(me->msix.vector, me->arg);
me->arg = NULL;
}
void hfi1_set_vnic_msix_info(struct hfi1_ctxtdata *rcd)
{
struct hfi1_devdata *dd = rcd->dd;
struct hfi1_msix_entry *me;
int idx = rcd->ctxt;
void *arg = rcd;
int ret;
rcd->msix_intr = dd->vnic.msix_idx++;
me = &dd->msix_entries[rcd->msix_intr];
/*
* Set the interrupt register and mask for this
* context's interrupt.
*/
rcd->ireg = (IS_RCVAVAIL_START + idx) / 64;
rcd->imask = ((u64)1) <<
((IS_RCVAVAIL_START + idx) % 64);
snprintf(me->name, sizeof(me->name),
DRIVER_NAME "_%d kctxt%d", dd->unit, idx);
me->name[sizeof(me->name) - 1] = 0;
me->type = IRQ_RCVCTXT;
remap_intr(dd, IS_RCVAVAIL_START + idx, rcd->msix_intr);
ret = request_threaded_irq(me->msix.vector, receive_context_interrupt,
receive_context_thread, 0, me->name, arg);
if (ret) {
dd_dev_err(dd, "vnic irq request (vector %d, idx %d) fail %d\n",
me->msix.vector, idx, ret);
return;
}
/*
* assign arg after request_irq call, so it will be
* cleaned up
*/
me->arg = arg;
ret = hfi1_get_irq_affinity(dd, me);
if (ret) {
dd_dev_err(dd,
"unable to pin IRQ %d\n", ret);
free_irq(me->msix.vector, me->arg);
}
}
/*
* Set the general handler to accept all interrupts, remap all
* chip interrupts back to MSI-X 0.
@ -12853,7 +13082,7 @@ static int set_up_interrupts(struct hfi1_devdata *dd)
* N interrupts - one per used SDMA engine
* M interrupt - one per kernel receive context
*/
total = 1 + dd->num_sdma + dd->n_krcv_queues;
total = 1 + dd->num_sdma + dd->n_krcv_queues + HFI1_NUM_VNIC_CTXT;
entries = kcalloc(total, sizeof(*entries), GFP_KERNEL);
if (!entries) {
@ -12918,7 +13147,8 @@ static int set_up_interrupts(struct hfi1_devdata *dd)
*
* num_rcv_contexts - number of contexts being used
* n_krcv_queues - number of kernel contexts
* first_user_ctxt - first non-kernel context in array of contexts
* first_dyn_alloc_ctxt - first dynamically allocated context
* in array of contexts
* freectxts - number of free user contexts
* num_send_contexts - number of PIO send contexts being used
*/
@ -12995,10 +13225,14 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
total_contexts = num_kernel_contexts + num_user_contexts;
}
/* the first N are kernel contexts, the rest are user contexts */
/* Accommodate VNIC contexts */
if ((total_contexts + HFI1_NUM_VNIC_CTXT) <= dd->chip_rcv_contexts)
total_contexts += HFI1_NUM_VNIC_CTXT;
/* the first N are kernel contexts, the rest are user/vnic contexts */
dd->num_rcv_contexts = total_contexts;
dd->n_krcv_queues = num_kernel_contexts;
dd->first_user_ctxt = num_kernel_contexts;
dd->first_dyn_alloc_ctxt = num_kernel_contexts;
dd->num_user_contexts = num_user_contexts;
dd->freectxts = num_user_contexts;
dd_dev_info(dd,
@ -13454,11 +13688,8 @@ static void reset_rxe_csrs(struct hfi1_devdata *dd)
write_csr(dd, RCV_COUNTER_ARRAY32 + (8 * i), 0);
for (i = 0; i < RXE_NUM_64_BIT_COUNTERS; i++)
write_csr(dd, RCV_COUNTER_ARRAY64 + (8 * i), 0);
for (i = 0; i < RXE_NUM_RSM_INSTANCES; i++) {
write_csr(dd, RCV_RSM_CFG + (8 * i), 0);
write_csr(dd, RCV_RSM_SELECT + (8 * i), 0);
write_csr(dd, RCV_RSM_MATCH + (8 * i), 0);
}
for (i = 0; i < RXE_NUM_RSM_INSTANCES; i++)
clear_rsm_rule(dd, i);
for (i = 0; i < 32; i++)
write_csr(dd, RCV_RSM_MAP_TABLE + (8 * i), 0);
@ -13817,6 +14048,16 @@ static void add_rsm_rule(struct hfi1_devdata *dd, u8 rule_index,
(u64)rrd->value2 << RCV_RSM_MATCH_VALUE2_SHIFT);
}
/*
* Clear a receive side mapping rule.
*/
static void clear_rsm_rule(struct hfi1_devdata *dd, u8 rule_index)
{
write_csr(dd, RCV_RSM_CFG + (8 * rule_index), 0);
write_csr(dd, RCV_RSM_SELECT + (8 * rule_index), 0);
write_csr(dd, RCV_RSM_MATCH + (8 * rule_index), 0);
}
/* return the number of RSM map table entries that will be used for QOS */
static int qos_rmt_entries(struct hfi1_devdata *dd, unsigned int *mp,
unsigned int *np)
@ -13932,7 +14173,7 @@ static void init_qos(struct hfi1_devdata *dd, struct rsm_map_table *rmt)
rrd.value2 = LRH_SC_VALUE;
/* add rule 0 */
add_rsm_rule(dd, 0, &rrd);
add_rsm_rule(dd, RSM_INS_VERBS, &rrd);
/* mark RSM map entries as used */
rmt->used += rmt_entries;
@ -13962,7 +14203,7 @@ static void init_user_fecn_handling(struct hfi1_devdata *dd,
/*
* RSM will extract the destination context as an index into the
* map table. The destination contexts are a sequential block
* in the range first_user_ctxt...num_rcv_contexts-1 (inclusive).
* in the range first_dyn_alloc_ctxt...num_rcv_contexts-1 (inclusive).
* Map entries are accessed as offset + extracted value. Adjust
* the added offset so this sequence can be placed anywhere in
* the table - as long as the entries themselves do not wrap.
@ -13970,9 +14211,9 @@ static void init_user_fecn_handling(struct hfi1_devdata *dd,
* start with that to allow for a "negative" offset.
*/
offset = (u8)(NUM_MAP_ENTRIES + (int)rmt->used -
(int)dd->first_user_ctxt);
(int)dd->first_dyn_alloc_ctxt);
for (i = dd->first_user_ctxt, idx = rmt->used;
for (i = dd->first_dyn_alloc_ctxt, idx = rmt->used;
i < dd->num_rcv_contexts; i++, idx++) {
/* replace with identity mapping */
regoff = (idx % 8) * 8;
@ -14006,11 +14247,84 @@ static void init_user_fecn_handling(struct hfi1_devdata *dd,
rrd.value2 = 1;
/* add rule 1 */
add_rsm_rule(dd, 1, &rrd);
add_rsm_rule(dd, RSM_INS_FECN, &rrd);
rmt->used += dd->num_user_contexts;
}
/* Initialize RSM for VNIC */
void hfi1_init_vnic_rsm(struct hfi1_devdata *dd)
{
u8 i, j;
u8 ctx_id = 0;
u64 reg;
u32 regoff;
struct rsm_rule_data rrd;
if (hfi1_vnic_is_rsm_full(dd, NUM_VNIC_MAP_ENTRIES)) {
dd_dev_err(dd, "Vnic RSM disabled, rmt entries used = %d\n",
dd->vnic.rmt_start);
return;
}
dev_dbg(&(dd)->pcidev->dev, "Vnic rsm start = %d, end %d\n",
dd->vnic.rmt_start,
dd->vnic.rmt_start + NUM_VNIC_MAP_ENTRIES);
/* Update RSM mapping table, 32 regs, 256 entries - 1 ctx per byte */
regoff = RCV_RSM_MAP_TABLE + (dd->vnic.rmt_start / 8) * 8;
reg = read_csr(dd, regoff);
for (i = 0; i < NUM_VNIC_MAP_ENTRIES; i++) {
/* Update map register with vnic context */
j = (dd->vnic.rmt_start + i) % 8;
reg &= ~(0xffllu << (j * 8));
reg |= (u64)dd->vnic.ctxt[ctx_id++]->ctxt << (j * 8);
/* Wrap up vnic ctx index */
ctx_id %= dd->vnic.num_ctxt;
/* Write back map register */
if (j == 7 || ((i + 1) == NUM_VNIC_MAP_ENTRIES)) {
dev_dbg(&(dd)->pcidev->dev,
"Vnic rsm map reg[%d] =0x%llx\n",
regoff - RCV_RSM_MAP_TABLE, reg);
write_csr(dd, regoff, reg);
regoff += 8;
if (i < (NUM_VNIC_MAP_ENTRIES - 1))
reg = read_csr(dd, regoff);
}
}
/* Add rule for vnic */
rrd.offset = dd->vnic.rmt_start;
rrd.pkt_type = 4;
/* Match 16B packets */
rrd.field1_off = L2_TYPE_MATCH_OFFSET;
rrd.mask1 = L2_TYPE_MASK;
rrd.value1 = L2_16B_VALUE;
/* Match ETH L4 packets */
rrd.field2_off = L4_TYPE_MATCH_OFFSET;
rrd.mask2 = L4_16B_TYPE_MASK;
rrd.value2 = L4_16B_ETH_VALUE;
/* Calc context from veswid and entropy */
rrd.index1_off = L4_16B_HDR_VESWID_OFFSET;
rrd.index1_width = ilog2(NUM_VNIC_MAP_ENTRIES);
rrd.index2_off = L2_16B_ENTROPY_OFFSET;
rrd.index2_width = ilog2(NUM_VNIC_MAP_ENTRIES);
add_rsm_rule(dd, RSM_INS_VNIC, &rrd);
/* Enable RSM if not already enabled */
add_rcvctrl(dd, RCV_CTRL_RCV_RSM_ENABLE_SMASK);
}
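
The map-table loop in hfi1_init_vnic_rsm() above packs one receive-context number per byte of a 64-bit map register, clearing the old byte with a mask before OR-ing in the new value and writing the register back after every eighth slot. A standalone sketch of just that byte-replacement arithmetic (invented names, no CSR access):

#include <stdio.h>
#include <stdint.h>

/* replace byte 'idx' (0 = least significant) of a 64-bit map register */
static uint64_t set_map_byte(uint64_t reg, unsigned int idx, uint8_t ctxt)
{
	unsigned int shift = (idx % 8) * 8;

	reg &= ~(0xffULL << shift);
	reg |= (uint64_t)ctxt << shift;
	return reg;
}

int main(void)
{
	uint64_t reg = 0;
	unsigned int i;

	/* spread three context numbers (10, 11, 12) round-robin over 8 slots */
	for (i = 0; i < 8; i++)
		reg = set_map_byte(reg, i, 10 + i % 3);

	printf("map register = 0x%016llx\n", (unsigned long long)reg);
	return 0;
}
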
void hfi1_deinit_vnic_rsm(struct hfi1_devdata *dd)
{
clear_rsm_rule(dd, RSM_INS_VNIC);
/* Disable RSM if used only by vnic */
if (dd->vnic.rmt_start == 0)
clear_rcvctrl(dd, RCV_CTRL_RCV_RSM_ENABLE_SMASK);
}
static void init_rxe(struct hfi1_devdata *dd)
{
struct rsm_map_table *rmt;
@ -14023,6 +14337,8 @@ static void init_rxe(struct hfi1_devdata *dd)
init_qos(dd, rmt);
init_user_fecn_handling(dd, rmt);
complete_rsm_map_table(dd, rmt);
/* record number of used rsm map entries for vnic */
dd->vnic.rmt_start = rmt->used;
kfree(rmt);
/*


@ -1,7 +1,7 @@
#ifndef _CHIP_H
#define _CHIP_H
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015 - 2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -394,7 +394,8 @@
#define LAST_REMOTE_STATE_COMPLETE 0x13
#define LINK_QUALITY_INFO 0x14
#define REMOTE_DEVICE_ID 0x15
#define LINK_DOWN_REASON 0x16
#define LINK_DOWN_REASON 0x16 /* first byte of offset 0x16 */
#define VERSION_PATCH 0x16 /* last byte of offset 0x16 */
/* 8051 lane specific register field IDs */
#define TX_EQ_SETTINGS 0x00
@ -524,10 +525,12 @@ enum {
#define SUPPORTED_CRCS (CAP_CRC_14B | CAP_CRC_48B)
/* misc status version fields */
#define STS_FM_VERSION_A_SHIFT 16
#define STS_FM_VERSION_A_MASK 0xff
#define STS_FM_VERSION_B_SHIFT 24
#define STS_FM_VERSION_B_MASK 0xff
#define STS_FM_VERSION_MINOR_SHIFT 16
#define STS_FM_VERSION_MINOR_MASK 0xff
#define STS_FM_VERSION_MAJOR_SHIFT 24
#define STS_FM_VERSION_MAJOR_MASK 0xff
#define STS_FM_VERSION_PATCH_SHIFT 24
#define STS_FM_VERSION_PATCH_MASK 0xff
/* LCB_CFG_CRC_MODE TX_VAL and RX_VAL CRC mode values */
#define LCB_CRC_16B 0x0 /* 16b CRC */
@ -698,7 +701,8 @@ void fabric_serdes_reset(struct hfi1_devdata *dd);
int read_8051_data(struct hfi1_devdata *dd, u32 addr, u32 len, u64 *result);
/* chip.c */
void read_misc_status(struct hfi1_devdata *dd, u8 *ver_a, u8 *ver_b);
void read_misc_status(struct hfi1_devdata *dd, u8 *ver_major, u8 *ver_minor,
u8 *ver_patch);
void read_guid(struct hfi1_devdata *dd);
int wait_fm_ready(struct hfi1_devdata *dd, u32 mstimeout);
void set_link_down_reason(struct hfi1_pportdata *ppd, u8 lcl_reason,
@ -1358,6 +1362,8 @@ int hfi1_clear_ctxt_jkey(struct hfi1_devdata *dd, unsigned ctxt);
int hfi1_set_ctxt_pkey(struct hfi1_devdata *dd, unsigned ctxt, u16 pkey);
int hfi1_clear_ctxt_pkey(struct hfi1_devdata *dd, unsigned ctxt);
void hfi1_read_link_quality(struct hfi1_devdata *dd, u8 *link_quality);
void hfi1_init_vnic_rsm(struct hfi1_devdata *dd);
void hfi1_deinit_vnic_rsm(struct hfi1_devdata *dd);
/*
* Interrupt source table.


@ -331,12 +331,15 @@ struct diag_pkt {
#define FULL_MGMT_P_KEY 0xFFFF
#define DEFAULT_P_KEY LIM_MGMT_P_KEY
#define HFI1_FECN_SHIFT 31
#define HFI1_FECN_MASK 1
#define HFI1_FECN_SMASK BIT(HFI1_FECN_SHIFT)
#define HFI1_BECN_SHIFT 30
#define HFI1_BECN_MASK 1
#define HFI1_BECN_SMASK BIT(HFI1_BECN_SHIFT)
/**
* 0xF8 - 4 bits of multicast range and 1 bit for collective range
* Example: For 24 bit LID space,
* Multicast range: 0xF00000 to 0xF7FFFF
* Collective range: 0xF80000 to 0xFFFFFE
*/
#define HFI1_MCAST_NR 0x4 /* Number of top bits set */
#define HFI1_COLLECTIVE_NR 0x1 /* Number of bits after MCAST_NR */
#define HFI1_PSM_IOC_BASE_SEQ 0x0
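
The HFI1_MCAST_NR / HFI1_COLLECTIVE_NR defines above carve up the top of a 24-bit LID space: the top four bits select the multicast region and the next bit splits off the collective region, which is where the 0xF00000-0xF7FFFF and 0xF80000-0xFFFFFE bounds in the comment come from. A short C program that recomputes those bounds from the two counts (illustrative arithmetic only):

#include <stdio.h>

int main(void)
{
	const unsigned int lid_bits = 24;
	const unsigned int mcast_nr = 4;	/* HFI1_MCAST_NR */
	const unsigned int coll_nr = 1;		/* HFI1_COLLECTIVE_NR */

	unsigned int lid_max = (1u << lid_bits) - 1;		/* 0xFFFFFF */
	unsigned int mcast_start = ((1u << mcast_nr) - 1)
				   << (lid_bits - mcast_nr);	/* 0xF00000 */
	unsigned int coll_start = ((1u << (mcast_nr + coll_nr)) - 1)
				  << (lid_bits - mcast_nr - coll_nr); /* 0xF80000 */

	printf("multicast:  0x%06X - 0x%06X\n", mcast_start, coll_start - 1);
	printf("collective: 0x%06X - 0x%06X\n", coll_start, lid_max - 1);
	return 0;
}

The permissive LID (0xFFFFFF here) is excluded, which is why the collective range ends at 0xFFFFFE.
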


@ -1,6 +1,6 @@
#ifdef CONFIG_DEBUG_FS
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015-2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -51,8 +51,12 @@
#include <linux/export.h>
#include <linux/module.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/ratelimit.h>
#include <linux/fault-inject.h>
#include "hfi.h"
#include "trace.h"
#include "debugfs.h"
#include "device.h"
#include "qp.h"
@ -170,7 +174,7 @@ static int _opcode_stats_seq_show(struct seq_file *s, void *v)
struct hfi1_ibdev *ibd = (struct hfi1_ibdev *)s->private;
struct hfi1_devdata *dd = dd_from_dev(ibd);
for (j = 0; j < dd->first_user_ctxt; j++) {
for (j = 0; j < dd->first_dyn_alloc_ctxt; j++) {
if (!dd->rcd[j])
continue;
n_packets += dd->rcd[j]->opstats->stats[i].n_packets;
@ -196,7 +200,7 @@ static void *_ctx_stats_seq_start(struct seq_file *s, loff_t *pos)
if (!*pos)
return SEQ_START_TOKEN;
if (*pos >= dd->first_user_ctxt)
if (*pos >= dd->first_dyn_alloc_ctxt)
return NULL;
return pos;
}
@ -210,7 +214,7 @@ static void *_ctx_stats_seq_next(struct seq_file *s, void *v, loff_t *pos)
return pos;
++*pos;
if (*pos >= dd->first_user_ctxt)
if (*pos >= dd->first_dyn_alloc_ctxt)
return NULL;
return pos;
}
@ -1063,6 +1067,222 @@ DEBUGFS_SEQ_FILE_OPS(sdma_cpu_list);
DEBUGFS_SEQ_FILE_OPEN(sdma_cpu_list)
DEBUGFS_FILE_OPS(sdma_cpu_list);
#ifdef CONFIG_FAULT_INJECTION
static void *_fault_stats_seq_start(struct seq_file *s, loff_t *pos)
{
struct hfi1_opcode_stats_perctx *opstats;
if (*pos >= ARRAY_SIZE(opstats->stats))
return NULL;
return pos;
}
static void *_fault_stats_seq_next(struct seq_file *s, void *v, loff_t *pos)
{
struct hfi1_opcode_stats_perctx *opstats;
++*pos;
if (*pos >= ARRAY_SIZE(opstats->stats))
return NULL;
return pos;
}
static void _fault_stats_seq_stop(struct seq_file *s, void *v)
{
}
static int _fault_stats_seq_show(struct seq_file *s, void *v)
{
loff_t *spos = v;
loff_t i = *spos, j;
u64 n_packets = 0, n_bytes = 0;
struct hfi1_ibdev *ibd = (struct hfi1_ibdev *)s->private;
struct hfi1_devdata *dd = dd_from_dev(ibd);
for (j = 0; j < dd->first_dyn_alloc_ctxt; j++) {
if (!dd->rcd[j])
continue;
n_packets += dd->rcd[j]->opstats->stats[i].n_packets;
n_bytes += dd->rcd[j]->opstats->stats[i].n_bytes;
}
if (!n_packets && !n_bytes)
return SEQ_SKIP;
if (!ibd->fault_opcode->n_rxfaults[i] &&
!ibd->fault_opcode->n_txfaults[i])
return SEQ_SKIP;
seq_printf(s, "%02llx %llu/%llu (faults rx:%llu faults: tx:%llu)\n", i,
(unsigned long long)n_packets,
(unsigned long long)n_bytes,
(unsigned long long)ibd->fault_opcode->n_rxfaults[i],
(unsigned long long)ibd->fault_opcode->n_txfaults[i]);
return 0;
}
DEBUGFS_SEQ_FILE_OPS(fault_stats);
DEBUGFS_SEQ_FILE_OPEN(fault_stats);
DEBUGFS_FILE_OPS(fault_stats);
static void fault_exit_opcode_debugfs(struct hfi1_ibdev *ibd)
{
debugfs_remove_recursive(ibd->fault_opcode->dir);
kfree(ibd->fault_opcode);
ibd->fault_opcode = NULL;
}
static int fault_init_opcode_debugfs(struct hfi1_ibdev *ibd)
{
struct dentry *parent = ibd->hfi1_ibdev_dbg;
ibd->fault_opcode = kzalloc(sizeof(*ibd->fault_opcode), GFP_KERNEL);
if (!ibd->fault_opcode)
return -ENOMEM;
ibd->fault_opcode->attr.interval = 1;
ibd->fault_opcode->attr.require_end = ULONG_MAX;
ibd->fault_opcode->attr.stacktrace_depth = 32;
ibd->fault_opcode->attr.dname = NULL;
ibd->fault_opcode->attr.verbose = 0;
ibd->fault_opcode->fault_by_opcode = false;
ibd->fault_opcode->opcode = 0;
ibd->fault_opcode->mask = 0xff;
ibd->fault_opcode->dir =
fault_create_debugfs_attr("fault_opcode",
parent,
&ibd->fault_opcode->attr);
if (IS_ERR(ibd->fault_opcode->dir)) {
kfree(ibd->fault_opcode);
return -ENOENT;
}
DEBUGFS_SEQ_FILE_CREATE(fault_stats, ibd->fault_opcode->dir, ibd);
if (!debugfs_create_bool("fault_by_opcode", 0600,
ibd->fault_opcode->dir,
&ibd->fault_opcode->fault_by_opcode))
goto fail;
if (!debugfs_create_x8("opcode", 0600, ibd->fault_opcode->dir,
&ibd->fault_opcode->opcode))
goto fail;
if (!debugfs_create_x8("mask", 0600, ibd->fault_opcode->dir,
&ibd->fault_opcode->mask))
goto fail;
return 0;
fail:
fault_exit_opcode_debugfs(ibd);
return -ENOMEM;
}
static void fault_exit_packet_debugfs(struct hfi1_ibdev *ibd)
{
debugfs_remove_recursive(ibd->fault_packet->dir);
kfree(ibd->fault_packet);
ibd->fault_packet = NULL;
}
static int fault_init_packet_debugfs(struct hfi1_ibdev *ibd)
{
struct dentry *parent = ibd->hfi1_ibdev_dbg;
ibd->fault_packet = kzalloc(sizeof(*ibd->fault_packet), GFP_KERNEL);
if (!ibd->fault_packet)
return -ENOMEM;
ibd->fault_packet->attr.interval = 1;
ibd->fault_packet->attr.require_end = ULONG_MAX;
ibd->fault_packet->attr.stacktrace_depth = 32;
ibd->fault_packet->attr.dname = NULL;
ibd->fault_packet->attr.verbose = 0;
ibd->fault_packet->fault_by_packet = false;
ibd->fault_packet->dir =
fault_create_debugfs_attr("fault_packet",
parent,
&ibd->fault_opcode->attr);
if (IS_ERR(ibd->fault_packet->dir)) {
kfree(ibd->fault_packet);
return -ENOENT;
}
if (!debugfs_create_bool("fault_by_packet", 0600,
ibd->fault_packet->dir,
&ibd->fault_packet->fault_by_packet))
goto fail;
if (!debugfs_create_u64("fault_stats", 0400,
ibd->fault_packet->dir,
&ibd->fault_packet->n_faults))
goto fail;
return 0;
fail:
fault_exit_packet_debugfs(ibd);
return -ENOMEM;
}
static void fault_exit_debugfs(struct hfi1_ibdev *ibd)
{
fault_exit_opcode_debugfs(ibd);
fault_exit_packet_debugfs(ibd);
}
static int fault_init_debugfs(struct hfi1_ibdev *ibd)
{
int ret = 0;
ret = fault_init_opcode_debugfs(ibd);
if (ret)
return ret;
ret = fault_init_packet_debugfs(ibd);
if (ret)
fault_exit_opcode_debugfs(ibd);
return ret;
}
bool hfi1_dbg_fault_suppress_err(struct hfi1_ibdev *ibd)
{
return ibd->fault_suppress_err;
}
bool hfi1_dbg_fault_opcode(struct rvt_qp *qp, u32 opcode, bool rx)
{
bool ret = false;
struct hfi1_ibdev *ibd = to_idev(qp->ibqp.device);
if (!ibd->fault_opcode || !ibd->fault_opcode->fault_by_opcode)
return false;
if (ibd->fault_opcode->opcode != (opcode & ibd->fault_opcode->mask))
return false;
ret = should_fail(&ibd->fault_opcode->attr, 1);
if (ret) {
trace_hfi1_fault_opcode(qp, opcode);
if (rx)
ibd->fault_opcode->n_rxfaults[opcode]++;
else
ibd->fault_opcode->n_txfaults[opcode]++;
}
return ret;
}
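
hfi1_dbg_fault_opcode() above first matches the packet opcode against the configurable opcode/mask pair and only then consults the fault-injection core, bumping a per-opcode rx or tx counter when a drop is taken. A rough userspace approximation of that flow, with rand() standing in for should_fail() and all names invented for the example:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* toy fault injector: match (opcode & mask) against a target opcode,
 * then drop roughly one matching packet in 'interval' */
struct fault_opcode {
	uint8_t opcode;
	uint8_t mask;
	unsigned int interval;
	unsigned long n_rxfaults[256];
	unsigned long n_txfaults[256];
};

static bool fault_opcode(struct fault_opcode *f, uint8_t opcode, bool rx)
{
	if (f->opcode != (opcode & f->mask))
		return false;
	if (rand() % f->interval)	/* stand-in for should_fail() */
		return false;
	if (rx)
		f->n_rxfaults[opcode]++;
	else
		f->n_txfaults[opcode]++;
	return true;
}

int main(void)
{
	struct fault_opcode f = { .opcode = 0x64, .mask = 0xff, .interval = 4 };
	int i, dropped = 0;

	srand(1);
	for (i = 0; i < 1000; i++)
		dropped += fault_opcode(&f, 0x64, true);
	printf("dropped %d of 1000 (rx faults for 0x64 = %lu)\n",
	       dropped, f.n_rxfaults[0x64]);
	return 0;
}
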
bool hfi1_dbg_fault_packet(struct hfi1_packet *packet)
{
struct rvt_dev_info *rdi = &packet->rcd->ppd->dd->verbs_dev.rdi;
struct hfi1_ibdev *ibd = dev_from_rdi(rdi);
bool ret = false;
if (!ibd->fault_packet || !ibd->fault_packet->fault_by_packet)
return false;
ret = should_fail(&ibd->fault_packet->attr, 1);
if (ret) {
++ibd->fault_packet->n_faults;
trace_hfi1_fault_packet(packet);
}
return ret;
}
#endif
void hfi1_dbg_ibdev_init(struct hfi1_ibdev *ibd)
{
char name[sizeof("port0counters") + 1];
@ -1112,12 +1332,22 @@ void hfi1_dbg_ibdev_init(struct hfi1_ibdev *ibd)
!port_cntr_ops[i].ops.write ?
S_IRUGO : S_IRUGO | S_IWUSR);
}
#ifdef CONFIG_FAULT_INJECTION
debugfs_create_bool("fault_suppress_err", 0600,
ibd->hfi1_ibdev_dbg,
&ibd->fault_suppress_err);
fault_init_debugfs(ibd);
#endif
}
void hfi1_dbg_ibdev_exit(struct hfi1_ibdev *ibd)
{
if (!hfi1_dbg_root)
goto out;
#ifdef CONFIG_FAULT_INJECTION
fault_exit_debugfs(ibd);
#endif
debugfs_remove(ibd->hfi1_ibdev_link);
debugfs_remove_recursive(ibd->hfi1_ibdev_dbg);
out:


@ -53,23 +53,79 @@ void hfi1_dbg_ibdev_init(struct hfi1_ibdev *ibd);
void hfi1_dbg_ibdev_exit(struct hfi1_ibdev *ibd);
void hfi1_dbg_init(void);
void hfi1_dbg_exit(void);
#ifdef CONFIG_FAULT_INJECTION
#include <linux/fault-inject.h>
struct fault_opcode {
struct fault_attr attr;
struct dentry *dir;
bool fault_by_opcode;
u64 n_rxfaults[256];
u64 n_txfaults[256];
u8 opcode;
u8 mask;
};
struct fault_packet {
struct fault_attr attr;
struct dentry *dir;
bool fault_by_packet;
u64 n_faults;
};
bool hfi1_dbg_fault_opcode(struct rvt_qp *qp, u32 opcode, bool rx);
bool hfi1_dbg_fault_packet(struct hfi1_packet *packet);
bool hfi1_dbg_fault_suppress_err(struct hfi1_ibdev *ibd);
#else
static inline bool hfi1_dbg_fault_packet(struct hfi1_packet *packet)
{
return false;
}
static inline bool hfi1_dbg_fault_opcode(struct rvt_qp *qp,
u32 opcode, bool rx)
{
return false;
}
static inline bool hfi1_dbg_fault_suppress_err(struct hfi1_ibdev *ibd)
{
return false;
}
#endif
#else
static inline void hfi1_dbg_ibdev_init(struct hfi1_ibdev *ibd)
{
}
void hfi1_dbg_ibdev_exit(struct hfi1_ibdev *ibd)
static inline void hfi1_dbg_ibdev_exit(struct hfi1_ibdev *ibd)
{
}
void hfi1_dbg_init(void)
static inline void hfi1_dbg_init(void)
{
}
void hfi1_dbg_exit(void)
static inline void hfi1_dbg_exit(void)
{
}
static inline bool hfi1_dbg_fault_packet(struct hfi1_packet *packet)
{
return false;
}
static inline bool hfi1_dbg_fault_opcode(struct rvt_qp *qp,
u32 opcode, bool rx)
{
return false;
}
static inline bool hfi1_dbg_fault_suppress_err(struct hfi1_ibdev *ibd)
{
return false;
}
#endif
#endif /* _HFI1_DEBUGFS_H */


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015-2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -59,6 +59,8 @@
#include "trace.h"
#include "qp.h"
#include "sdma.h"
#include "debugfs.h"
#include "vnic.h"
#undef pr_fmt
#define pr_fmt(fmt) DRIVER_NAME ": " fmt
@ -283,7 +285,7 @@ static void rcv_hdrerr(struct hfi1_ctxtdata *rcd, struct hfi1_pportdata *ppd,
{
struct ib_header *rhdr = packet->hdr;
u32 rte = rhf_rcv_type_err(packet->rhf);
int lnh = be16_to_cpu(rhdr->lrh[0]) & 3;
int lnh = ib_get_lnh(rhdr);
struct hfi1_ibport *ibp = rcd_to_iport(rcd);
struct hfi1_devdata *dd = ppd->dd;
struct rvt_dev_info *rdi = &dd->verbs_dev.rdi;
@ -295,7 +297,7 @@ static void rcv_hdrerr(struct hfi1_ctxtdata *rcd, struct hfi1_pportdata *ppd,
/* For TIDERR and RC QPs preemptively schedule a NAK */
struct ib_other_headers *ohdr = NULL;
u32 tlen = rhf_pkt_len(packet->rhf); /* in bytes */
u16 lid = be16_to_cpu(rhdr->lrh[1]);
u16 lid = ib_get_dlid(rhdr);
u32 qp_num;
u32 rcv_flags = 0;
@ -396,7 +398,7 @@ static void rcv_hdrerr(struct hfi1_ctxtdata *rcd, struct hfi1_pportdata *ppd,
u16 rlid;
u8 svc_type, sl, sc5;
sc5 = hdr2sc(rhdr, packet->rhf);
sc5 = hfi1_9B_get_sc5(rhdr, packet->rhf);
sl = ibp->sc_to_sl[sc5];
lqpn = be32_to_cpu(bth[1]) & RVT_QPN_MASK;
@ -414,7 +416,7 @@ static void rcv_hdrerr(struct hfi1_ctxtdata *rcd, struct hfi1_pportdata *ppd,
svc_type = IB_CC_SVCTYPE_UD;
break;
case IB_QPT_UC:
rlid = be16_to_cpu(rhdr->lrh[3]);
rlid = ib_get_slid(rhdr);
rqpn = qp->remote_qpn;
svc_type = IB_CC_SVCTYPE_UC;
break;
@ -460,7 +462,7 @@ void hfi1_process_ecn_slowpath(struct rvt_qp *qp, struct hfi1_packet *pkt,
struct ib_other_headers *ohdr = pkt->ohdr;
struct ib_grh *grh = NULL;
u32 rqpn = 0, bth1;
u16 rlid, dlid = be16_to_cpu(hdr->lrh[1]);
u16 rlid, dlid = ib_get_dlid(hdr);
u8 sc, svc_type;
bool is_mcast = false;
@ -471,19 +473,19 @@ void hfi1_process_ecn_slowpath(struct rvt_qp *qp, struct hfi1_packet *pkt,
case IB_QPT_SMI:
case IB_QPT_GSI:
case IB_QPT_UD:
rlid = be16_to_cpu(hdr->lrh[3]);
rlid = ib_get_slid(hdr);
rqpn = be32_to_cpu(ohdr->u.ud.deth[1]) & RVT_QPN_MASK;
svc_type = IB_CC_SVCTYPE_UD;
is_mcast = (dlid > be16_to_cpu(IB_MULTICAST_LID_BASE)) &&
(dlid != be16_to_cpu(IB_LID_PERMISSIVE));
break;
case IB_QPT_UC:
rlid = qp->remote_ah_attr.dlid;
rlid = rdma_ah_get_dlid(&qp->remote_ah_attr);
rqpn = qp->remote_qpn;
svc_type = IB_CC_SVCTYPE_UC;
break;
case IB_QPT_RC:
rlid = qp->remote_ah_attr.dlid;
rlid = rdma_ah_get_dlid(&qp->remote_ah_attr);
rqpn = qp->remote_qpn;
svc_type = IB_CC_SVCTYPE_RC;
break;
@ -491,16 +493,16 @@ void hfi1_process_ecn_slowpath(struct rvt_qp *qp, struct hfi1_packet *pkt,
return;
}
sc = hdr2sc(hdr, pkt->rhf);
sc = hfi1_9B_get_sc5(hdr, pkt->rhf);
bth1 = be32_to_cpu(ohdr->bth[1]);
if (do_cnp && (bth1 & HFI1_FECN_SMASK)) {
if (do_cnp && (bth1 & IB_FECN_SMASK)) {
u16 pkey = (u16)be32_to_cpu(ohdr->bth[0]);
return_cnp(ibp, qp, rqpn, pkey, dlid, rlid, sc, grh);
}
if (!is_mcast && (bth1 & HFI1_BECN_SMASK)) {
if (!is_mcast && (bth1 & IB_BECN_SMASK)) {
struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
u32 lqpn = bth1 & RVT_QPN_MASK;
u8 sl = ibp->sc_to_sl[sc];
@ -621,8 +623,7 @@ static void __prescan_rxq(struct hfi1_packet *packet)
packet->hdr = hfi1_get_msgheader(dd, rhf_addr);
hdr = packet->hdr;
lnh = be16_to_cpu(hdr->lrh[0]) & 3;
lnh = ib_get_lnh(hdr);
if (lnh == HFI1_LRH_BTH) {
packet->ohdr = &hdr->u.oth;
@ -634,7 +635,7 @@ static void __prescan_rxq(struct hfi1_packet *packet)
}
bth1 = be32_to_cpu(packet->ohdr->bth[1]);
is_ecn = !!(bth1 & (HFI1_FECN_SMASK | HFI1_BECN_SMASK));
is_ecn = !!(bth1 & (IB_FECN_SMASK | IB_BECN_SMASK));
if (!is_ecn)
goto next;
@ -652,7 +653,7 @@ static void __prescan_rxq(struct hfi1_packet *packet)
rcu_read_unlock();
/* turn off BECN, FECN */
bth1 &= ~(HFI1_FECN_SMASK | HFI1_BECN_SMASK);
bth1 &= ~(IB_FECN_SMASK | IB_BECN_SMASK);
packet->ohdr->bth[1] = cpu_to_be32(bth1);
next:
update_ps_mdata(&mdata, rcd);
@ -872,20 +873,42 @@ int handle_receive_interrupt_dma_rtail(struct hfi1_ctxtdata *rcd, int thread)
return last;
}
static inline void set_all_nodma_rtail(struct hfi1_devdata *dd)
static inline void set_nodma_rtail(struct hfi1_devdata *dd, u8 ctxt)
{
int i;
for (i = HFI1_CTRL_CTXT + 1; i < dd->first_user_ctxt; i++)
/*
* For dynamically allocated kernel contexts (like vnic) switch
* interrupt handler only for that context. Otherwise, switch
* interrupt handler for all statically allocated kernel contexts.
*/
if (ctxt >= dd->first_dyn_alloc_ctxt) {
dd->rcd[ctxt]->do_interrupt =
&handle_receive_interrupt_nodma_rtail;
return;
}
for (i = HFI1_CTRL_CTXT + 1; i < dd->first_dyn_alloc_ctxt; i++)
dd->rcd[i]->do_interrupt =
&handle_receive_interrupt_nodma_rtail;
}
static inline void set_all_dma_rtail(struct hfi1_devdata *dd)
static inline void set_dma_rtail(struct hfi1_devdata *dd, u8 ctxt)
{
int i;
for (i = HFI1_CTRL_CTXT + 1; i < dd->first_user_ctxt; i++)
/*
* For dynamically allocated kernel contexts (like vnic) switch
* interrupt handler only for that context. Otherwise, switch
* interrupt handler for all statically allocated kernel contexts.
*/
if (ctxt >= dd->first_dyn_alloc_ctxt) {
dd->rcd[ctxt]->do_interrupt =
&handle_receive_interrupt_dma_rtail;
return;
}
for (i = HFI1_CTRL_CTXT + 1; i < dd->first_dyn_alloc_ctxt; i++)
dd->rcd[i]->do_interrupt =
&handle_receive_interrupt_dma_rtail;
}
@ -895,8 +918,13 @@ void set_all_slowpath(struct hfi1_devdata *dd)
int i;
/* HFI1_CTRL_CTXT must always use the slow path interrupt handler */
for (i = HFI1_CTRL_CTXT + 1; i < dd->first_user_ctxt; i++)
dd->rcd[i]->do_interrupt = &handle_receive_interrupt;
for (i = HFI1_CTRL_CTXT + 1; i < dd->num_rcv_contexts; i++) {
struct hfi1_ctxtdata *rcd = dd->rcd[i];
if ((i < dd->first_dyn_alloc_ctxt) ||
(rcd && rcd->sc && (rcd->sc->type == SC_KERNEL)))
rcd->do_interrupt = &handle_receive_interrupt;
}
}
static inline int set_armed_to_active(struct hfi1_ctxtdata *rcd,
@ -908,7 +936,8 @@ static inline int set_armed_to_active(struct hfi1_ctxtdata *rcd,
packet->rhf_addr);
u8 etype = rhf_rcv_type(packet->rhf);
if (etype == RHF_RCV_TYPE_IB && hdr2sc(hdr, packet->rhf) != 0xf) {
if (etype == RHF_RCV_TYPE_IB &&
hfi1_9B_get_sc5(hdr, packet->rhf) != 0xf) {
int hwstate = read_logical_state(dd);
if (hwstate != LSTATE_ACTIVE) {
@ -1006,7 +1035,7 @@ int handle_receive_interrupt(struct hfi1_ctxtdata *rcd, int thread)
last = RCV_PKT_DONE;
if (needset) {
dd_dev_info(dd, "Switching to NO_DMA_RTAIL\n");
set_all_nodma_rtail(dd);
set_nodma_rtail(dd, rcd->ctxt);
needset = 0;
}
} else {
@ -1028,7 +1057,7 @@ int handle_receive_interrupt(struct hfi1_ctxtdata *rcd, int thread)
if (needset) {
dd_dev_info(dd,
"Switching to DMA_RTAIL\n");
set_all_dma_rtail(dd);
set_dma_rtail(dd, rcd->ctxt);
needset = 0;
}
}
@ -1077,10 +1106,10 @@ void receive_interrupt_work(struct work_struct *work)
set_link_state(ppd, HLS_UP_ACTIVE);
/*
* Interrupt all kernel contexts that could have had an
* interrupt during auto activation.
* Interrupt all statically allocated kernel contexts that could
* have had an interrupt during auto activation.
*/
for (i = HFI1_CTRL_CTXT; i < dd->first_user_ctxt; i++)
for (i = HFI1_CTRL_CTXT; i < dd->first_dyn_alloc_ctxt; i++)
force_recv_intr(dd->rcd[i]);
}
@ -1294,7 +1323,8 @@ int hfi1_reset_device(int unit)
spin_lock_irqsave(&dd->uctxt_lock, flags);
if (dd->rcd)
for (i = dd->first_user_ctxt; i < dd->num_rcv_contexts; i++) {
for (i = dd->first_dyn_alloc_ctxt;
i < dd->num_rcv_contexts; i++) {
if (!dd->rcd[i] || !dd->rcd[i]->cnt)
continue;
spin_unlock_irqrestore(&dd->uctxt_lock, flags);
@ -1354,6 +1384,9 @@ void handle_eflags(struct hfi1_packet *packet)
*/
int process_receive_ib(struct hfi1_packet *packet)
{
if (unlikely(hfi1_dbg_fault_packet(packet)))
return RHF_RCV_CONTINUE;
trace_hfi1_rcvhdr(packet->rcd->ppd->dd,
packet->rcd->ctxt,
rhf_err_flags(packet->rhf),
@ -1363,6 +1396,11 @@ int process_receive_ib(struct hfi1_packet *packet)
packet->updegr,
rhf_egr_index(packet->rhf));
if (unlikely(
(hfi1_dbg_fault_suppress_err(&packet->rcd->dd->verbs_dev) &&
(packet->rhf & RHF_DC_ERR))))
return RHF_RCV_CONTINUE;
if (unlikely(rhf_err_flags(packet->rhf))) {
handle_eflags(packet);
return RHF_RCV_CONTINUE;
@ -1372,15 +1410,31 @@ int process_receive_ib(struct hfi1_packet *packet)
return RHF_RCV_CONTINUE;
}
static inline bool hfi1_is_vnic_packet(struct hfi1_packet *packet)
{
/* Packet received in VNIC context via RSM */
if (packet->rcd->is_vnic)
return true;
if ((HFI1_GET_L2_TYPE(packet->ebuf) == OPA_VNIC_L2_TYPE) &&
(HFI1_GET_L4_TYPE(packet->ebuf) == OPA_VNIC_L4_ETHR))
return true;
return false;
}
int process_receive_bypass(struct hfi1_packet *packet)
{
struct hfi1_devdata *dd = packet->rcd->dd;
if (unlikely(rhf_err_flags(packet->rhf)))
if (unlikely(rhf_err_flags(packet->rhf))) {
handle_eflags(packet);
} else if (hfi1_is_vnic_packet(packet)) {
hfi1_vnic_bypass_rcv(packet);
return RHF_RCV_CONTINUE;
}
dd_dev_err(dd,
"Bypass packets are not supported in normal operation. Dropping\n");
dd_dev_err(dd, "Unsupported bypass packet. Dropping\n");
incr_cntr64(&dd->sw_rcv_bypass_packet_errors);
if (!(dd->err_info_rcvport.status_and_code & OPA_EI_STATUS_SMASK)) {
u64 *flits = packet->ebuf;
@ -1398,6 +1452,12 @@ int process_receive_bypass(struct hfi1_packet *packet)
int process_receive_error(struct hfi1_packet *packet)
{
/* KHdrHCRCErr -- KDETH packet with a bad HCRC */
if (unlikely(
hfi1_dbg_fault_suppress_err(&packet->rcd->dd->verbs_dev) &&
rhf_rcv_type_err(packet->rhf) == 3))
return RHF_RCV_CONTINUE;
handle_eflags(packet);
if (unlikely(rhf_err_flags(packet->rhf)))
@ -1409,6 +1469,8 @@ int process_receive_error(struct hfi1_packet *packet)
int kdeth_process_expected(struct hfi1_packet *packet)
{
if (unlikely(hfi1_dbg_fault_packet(packet)))
return RHF_RCV_CONTINUE;
if (unlikely(rhf_err_flags(packet->rhf)))
handle_eflags(packet);
@ -1421,6 +1483,8 @@ int kdeth_process_eager(struct hfi1_packet *packet)
{
if (unlikely(rhf_err_flags(packet->rhf)))
handle_eflags(packet);
if (unlikely(hfi1_dbg_fault_packet(packet)))
return RHF_RCV_CONTINUE;
dd_dev_err(packet->rcd->dd,
"Unhandled eager packet received. Dropping.\n");


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015-2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -586,8 +586,8 @@ static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma)
* knows where it's own bitmap is within the page.
*/
memaddr = (unsigned long)(dd->events +
((uctxt->ctxt - dd->first_user_ctxt) *
HFI1_MAX_SHARED_CTXTS)) & PAGE_MASK;
((uctxt->ctxt - dd->first_dyn_alloc_ctxt) *
HFI1_MAX_SHARED_CTXTS)) & PAGE_MASK;
memlen = PAGE_SIZE;
/*
* v3.7 removes VM_RESERVED but the effect is kept by
@ -597,6 +597,10 @@ static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma)
vmf = 1;
break;
case STATUS:
if (flags & (unsigned long)(VM_WRITE | VM_EXEC)) {
ret = -EPERM;
goto done;
}
memaddr = kvirt_to_phys((void *)dd->status);
memlen = PAGE_SIZE;
flags |= VM_IO | VM_DONTEXPAND;
@ -756,7 +760,7 @@ static int hfi1_file_close(struct inode *inode, struct file *fp)
* Clear any left over, unhandled events so the next process that
* gets this context doesn't get confused.
*/
ev = dd->events + ((uctxt->ctxt - dd->first_user_ctxt) *
ev = dd->events + ((uctxt->ctxt - dd->first_dyn_alloc_ctxt) *
HFI1_MAX_SHARED_CTXTS) + fdata->subctxt;
*ev = 0;
@ -909,12 +913,18 @@ static int find_shared_ctxt(struct file *fp,
if (!(dd && (dd->flags & HFI1_PRESENT) && dd->kregbase))
continue;
for (i = dd->first_user_ctxt; i < dd->num_rcv_contexts; i++) {
for (i = dd->first_dyn_alloc_ctxt;
i < dd->num_rcv_contexts; i++) {
struct hfi1_ctxtdata *uctxt = dd->rcd[i];
/* Skip ctxts which are not yet open */
if (!uctxt || !uctxt->cnt)
continue;
/* Skip dynamically allocated kernel contexts */
if (uctxt->sc && (uctxt->sc->type == SC_KERNEL))
continue;
/* Skip ctxt if it doesn't match the requested one */
if (memcmp(uctxt->uuid, uinfo->uuid,
sizeof(uctxt->uuid)) ||
@ -960,7 +970,8 @@ static int allocate_ctxt(struct file *fp, struct hfi1_devdata *dd,
return -EIO;
}
for (ctxt = dd->first_user_ctxt; ctxt < dd->num_rcv_contexts; ctxt++)
for (ctxt = dd->first_dyn_alloc_ctxt;
ctxt < dd->num_rcv_contexts; ctxt++)
if (!dd->rcd[ctxt])
break;
@ -1306,7 +1317,7 @@ static int get_base_info(struct file *fp, void __user *ubase, __u32 len)
*/
binfo.user_regbase = HFI1_MMAP_TOKEN(UREGS, uctxt->ctxt,
fd->subctxt, 0);
offset = offset_in_page((((uctxt->ctxt - dd->first_user_ctxt) *
offset = offset_in_page((((uctxt->ctxt - dd->first_dyn_alloc_ctxt) *
HFI1_MAX_SHARED_CTXTS) + fd->subctxt) *
sizeof(*dd->events));
binfo.events_bufbase = HFI1_MMAP_TOKEN(EVENTS, uctxt->ctxt,
@ -1400,12 +1411,12 @@ int hfi1_set_uevent_bits(struct hfi1_pportdata *ppd, const int evtbit)
}
spin_lock_irqsave(&dd->uctxt_lock, flags);
for (ctxt = dd->first_user_ctxt; ctxt < dd->num_rcv_contexts;
for (ctxt = dd->first_dyn_alloc_ctxt; ctxt < dd->num_rcv_contexts;
ctxt++) {
uctxt = dd->rcd[ctxt];
if (uctxt) {
unsigned long *evs = dd->events +
(uctxt->ctxt - dd->first_user_ctxt) *
(uctxt->ctxt - dd->first_dyn_alloc_ctxt) *
HFI1_MAX_SHARED_CTXTS;
int i;
/*
@ -1477,7 +1488,7 @@ static int user_event_ack(struct hfi1_ctxtdata *uctxt, int subctxt,
if (!dd->events)
return 0;
evs = dd->events + ((uctxt->ctxt - dd->first_user_ctxt) *
evs = dd->events + ((uctxt->ctxt - dd->first_dyn_alloc_ctxt) *
HFI1_MAX_SHARED_CTXTS) + subctxt;
for (i = 0; i <= _HFI1_MAX_EVENT_BIT; i++) {


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015 - 2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -1004,7 +1004,9 @@ static int load_8051_firmware(struct hfi1_devdata *dd,
{
u64 reg;
int ret;
u8 ver_a, ver_b;
u8 ver_major;
u8 ver_minor;
u8 ver_patch;
/*
* DC Reset sequence
@ -1073,10 +1075,10 @@ static int load_8051_firmware(struct hfi1_devdata *dd,
return -ETIMEDOUT;
}
read_misc_status(dd, &ver_a, &ver_b);
dd_dev_info(dd, "8051 firmware version %d.%d\n",
(int)ver_b, (int)ver_a);
dd->dc8051_ver = dc8051_ver(ver_b, ver_a);
read_misc_status(dd, &ver_major, &ver_minor, &ver_patch);
dd_dev_info(dd, "8051 firmware version %d.%d.%d\n",
(int)ver_major, (int)ver_minor, (int)ver_patch);
dd->dc8051_ver = dc8051_ver(ver_major, ver_minor, ver_patch);
return 0;
}


@ -1,7 +1,7 @@
#ifndef _HFI1_KERNEL_H
#define _HFI1_KERNEL_H
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015-2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -54,6 +54,7 @@
#include <linux/list.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/idr.h>
#include <linux/io.h>
#include <linux/fs.h>
#include <linux/completion.h>
@ -66,6 +67,7 @@
#include <linux/i2c-algo-bit.h>
#include <rdma/ib_hdrs.h>
#include <linux/rhashtable.h>
#include <linux/netdevice.h>
#include <rdma/rdma_vt.h>
#include "chip_registers.h"
@ -278,6 +280,8 @@ struct hfi1_ctxtdata {
struct hfi1_devdata *dd;
/* so functions that need physical port can get it easily */
struct hfi1_pportdata *ppd;
/* associated msix interrupt */
u32 msix_intr;
/* A page of memory for rcvhdrhead, rcvegrhead, rcvegrtail * N */
void *subctxt_uregbase;
/* An array of pages for the eager receive buffers * N */
@ -337,6 +341,12 @@ struct hfi1_ctxtdata {
* packets with the wrong interrupt handler.
*/
int (*do_interrupt)(struct hfi1_ctxtdata *rcd, int threaded);
/* Indicates that this is vnic context */
bool is_vnic;
/* vnic queue index this context is mapped to */
u8 vnic_q_idx;
};
/*
@ -474,7 +484,7 @@ struct rvt_sge_state;
#define HFI1_PART_ENFORCE_OUT 0x2
/* how often we check for synthetic counter wrap around */
#define SYNTH_CNT_TIME 2
#define SYNTH_CNT_TIME 3
/* Counter flags */
#define CNTR_NORMAL 0x0 /* Normal counters, just read register */
@ -808,6 +818,32 @@ struct hfi1_asic_data {
struct hfi1_i2c_bus *i2c_bus1;
};
/* sizes for both the QP and RSM map tables */
#define NUM_MAP_ENTRIES 256
#define NUM_MAP_REGS 32
/*
* Number of VNIC contexts used. Ensure it is less than or equal to
* max queues supported by VNIC (HFI1_VNIC_MAX_QUEUE).
*/
#define HFI1_NUM_VNIC_CTXT 8
/* Number of VNIC RSM entries */
#define NUM_VNIC_MAP_ENTRIES 8
/* Virtual NIC information */
struct hfi1_vnic_data {
struct hfi1_ctxtdata *ctxt[HFI1_NUM_VNIC_CTXT];
struct kmem_cache *txreq_cache;
u8 num_vports;
struct idr vesw_idr;
u8 rmt_start;
u8 num_ctxt;
u32 msix_idx;
};
struct hfi1_vnic_vport_info;
/* device data struct now contains only "general per-device" info.
* fields related to a physical IB port are in a hfi1_pportdata struct.
*/
@ -926,8 +962,9 @@ struct hfi1_devdata {
spinlock_t rcvctrl_lock; /* protect changes to RcvCtrl */
/* around rcd and (user ctxts) ctxt_cnt use (intr vs free) */
spinlock_t uctxt_lock; /* rcd and user context changes */
/* exclusive access to 8051 */
spinlock_t dc8051_lock;
struct mutex dc8051_lock; /* exclusive access to 8051 */
struct workqueue_struct *update_cntr_wq;
struct work_struct update_cntr_work;
/* exclusive access to 8051 memory */
spinlock_t dc8051_memlock;
int dc8051_timed_out; /* remember if the 8051 timed out */
@ -1020,7 +1057,7 @@ struct hfi1_devdata {
u8 qos_shift;
u16 irev; /* implementation revision */
u16 dc8051_ver; /* 8051 firmware version */
u32 dc8051_ver; /* 8051 firmware version */
spinlock_t hfi1_diag_trans_lock; /* protect diag observer ops */
struct platform_config platform_config;
@ -1031,6 +1068,7 @@ struct hfi1_devdata {
/* MSI-X information */
struct hfi1_msix_entry *msix_entries;
u32 num_msix_entries;
u32 first_dyn_msix_idx;
/* INTx information */
u32 requested_intx_irq; /* did we request one? */
@ -1115,6 +1153,9 @@ struct hfi1_devdata {
send_routine process_dma_send;
void (*pio_inline_send)(struct hfi1_devdata *dd, struct pio_buf *pbuf,
u64 pbc, const void *from, size_t count);
int (*process_vnic_dma_send)(struct hfi1_devdata *dd, u8 q_idx,
struct hfi1_vnic_vport_info *vinfo,
struct sk_buff *skb, u64 pbc, u8 plen);
/* hfi1_pportdata, points to array of (physical) port-specific
* data structs, indexed by pidx (0..n-1)
*/
@ -1126,8 +1167,8 @@ struct hfi1_devdata {
u16 flags;
/* Number of physical ports available */
u8 num_pports;
/* Lowest context number which can be used by user processes */
u8 first_user_ctxt;
/* Lowest context number which can be used by user processes or VNIC */
u8 first_dyn_alloc_ctxt;
/* adding a new field here would make it part of this cacheline */
/* seqlock for sc2vl */
@ -1167,15 +1208,24 @@ struct hfi1_devdata {
bool eprom_available; /* true if EPROM is available for this device */
bool aspm_supported; /* Does HW support ASPM */
bool aspm_enabled; /* ASPM state: enabled/disabled */
struct rhashtable sdma_rht;
struct rhashtable *sdma_rht;
struct kobject kobj;
/* vnic data */
struct hfi1_vnic_data vnic;
};
static inline bool hfi1_vnic_is_rsm_full(struct hfi1_devdata *dd, int spare)
{
return (dd->vnic.rmt_start + spare) > NUM_MAP_ENTRIES;
}
/* 8051 firmware version helper */
#define dc8051_ver(a, b) ((a) << 8 | (b))
#define dc8051_ver_maj(a) ((a & 0xff00) >> 8)
#define dc8051_ver_min(a) (a & 0x00ff)
#define dc8051_ver(a, b, c) ((a) << 16 | (b) << 8 | (c))
#define dc8051_ver_maj(a) (((a) & 0xff0000) >> 16)
#define dc8051_ver_min(a) (((a) & 0x00ff00) >> 8)
#define dc8051_ver_patch(a) ((a) & 0x0000ff)
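
The reworked dc8051_ver() macros pack major, minor and patch into one 32-bit value so firmware versions can be compared with a single integer comparison, as in the dc8051_ver(0, 20, 0) checks elsewhere in this diff. A self-contained check of the same packing (the version numbers here are made up):

#include <assert.h>
#include <stdio.h>
#include <stdint.h>

/* same scheme as the new dc8051_ver() macros: 8 bits each of
 * major, minor and patch in the low 24 bits of a 32-bit value */
#define VER(a, b, c)	((uint32_t)(a) << 16 | (uint32_t)(b) << 8 | (uint32_t)(c))
#define VER_MAJ(v)	(((v) & 0xff0000) >> 16)
#define VER_MIN(v)	(((v) & 0x00ff00) >> 8)
#define VER_PATCH(v)	((v) & 0x0000ff)

int main(void)
{
	uint32_t v = VER(1, 27, 0);	/* made-up firmware version */

	assert(VER_MAJ(v) == 1 && VER_MIN(v) == 27 && VER_PATCH(v) == 0);

	/* packed versions compare correctly as plain integers */
	printf("0x%06x, newer than 0.20.0: %s\n", (unsigned)v,
	       v >= VER(0, 20, 0) ? "yes" : "no");
	return 0;
}
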
/* f_put_tid types */
#define PT_EXPECTED 0
@ -1235,6 +1285,9 @@ int handle_receive_interrupt(struct hfi1_ctxtdata *, int);
int handle_receive_interrupt_nodma_rtail(struct hfi1_ctxtdata *, int);
int handle_receive_interrupt_dma_rtail(struct hfi1_ctxtdata *, int);
void set_all_slowpath(struct hfi1_devdata *dd);
void hfi1_vnic_synchronize_irq(struct hfi1_devdata *dd);
void hfi1_set_vnic_msix_info(struct hfi1_ctxtdata *rcd);
void hfi1_reset_vnic_msix_info(struct hfi1_ctxtdata *rcd);
extern const struct pci_device_id hfi1_pci_tbl[];
@ -1254,16 +1307,24 @@ int hfi1_reset_device(int);
/* return the driver's idea of the logical OPA port state */
static inline u32 driver_lstate(struct hfi1_pportdata *ppd)
{
return ppd->lstate; /* use the cached value */
/*
* The driver does some processing from the time the logical
* link state is at INIT to the time the SM can be notified
* as such. Return IB_PORT_DOWN until the software state
* is ready.
*/
if (ppd->lstate == IB_PORT_INIT && !(ppd->host_link_state & HLS_UP))
return IB_PORT_DOWN;
else
return ppd->lstate;
}
void receive_interrupt_work(struct work_struct *work);
/* extract service channel from header and rhf */
static inline int hdr2sc(struct ib_header *hdr, u64 rhf)
static inline int hfi1_9B_get_sc5(struct ib_header *hdr, u64 rhf)
{
return ((be16_to_cpu(hdr->lrh[0]) >> 12) & 0xf) |
((!!(rhf_dc_info(rhf))) << 4);
return ib_get_sc(hdr) | ((!!(rhf_dc_info(rhf))) << 4);
}
#define HFI1_JKEY_WIDTH 16
@ -1597,9 +1658,9 @@ static inline bool process_ecn(struct rvt_qp *qp, struct hfi1_packet *pkt,
u32 bth1;
bth1 = be32_to_cpu(ohdr->bth[1]);
if (unlikely(bth1 & (HFI1_BECN_SMASK | HFI1_FECN_SMASK))) {
if (unlikely(bth1 & (IB_BECN_SMASK | IB_FECN_SMASK))) {
hfi1_process_ecn_slowpath(qp, pkt, do_cnp);
return bth1 & HFI1_FECN_SMASK;
return !!(bth1 & IB_FECN_SMASK);
}
return false;
}
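
process_ecn() above only takes the slow path when either congestion bit is set in bth[1], and the prescan loop earlier in the diff clears both bits once they have been handled. A minimal sketch of the bit test and clear at the same 31/30 bit positions (local macro names, not the ones from ib_hdrs.h):

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* FECN/BECN sit in the top two bits of the BTH[1] dword (host order) */
#define FECN_SMASK	(1u << 31)
#define BECN_SMASK	(1u << 30)
#define QPN_MASK	0xffffffu

int main(void)
{
	uint32_t bth1 = 0x80000123u;	/* FECN set, BECN clear, QPN 0x123 */
	bool fecn = bth1 & FECN_SMASK;
	bool becn = bth1 & BECN_SMASK;

	printf("fecn=%d becn=%d qpn=0x%x\n", fecn, becn,
	       (unsigned)(bth1 & QPN_MASK));

	/* strip both congestion bits before passing the header on, as the
	 * prescan path does once the bits have been acted on */
	bth1 &= ~(FECN_SMASK | BECN_SMASK);
	printf("bth1 after clearing = 0x%08x\n", (unsigned)bth1);
	return 0;
}
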


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015-2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -65,6 +65,7 @@
#include "verbs.h"
#include "aspm.h"
#include "affinity.h"
#include "vnic.h"
#undef pr_fmt
#define pr_fmt(fmt) DRIVER_NAME ": " fmt
@ -139,7 +140,7 @@ int hfi1_create_ctxts(struct hfi1_devdata *dd)
goto nomem;
/* create one or more kernel contexts */
for (i = 0; i < dd->first_user_ctxt; ++i) {
for (i = 0; i < dd->first_dyn_alloc_ctxt; ++i) {
struct hfi1_pportdata *ppd;
struct hfi1_ctxtdata *rcd;
@ -214,9 +215,9 @@ struct hfi1_ctxtdata *hfi1_create_ctxtdata(struct hfi1_pportdata *ppd, u32 ctxt,
u32 base;
if (dd->rcv_entries.nctxt_extra >
dd->num_rcv_contexts - dd->first_user_ctxt)
dd->num_rcv_contexts - dd->first_dyn_alloc_ctxt)
kctxt_ngroups = (dd->rcv_entries.nctxt_extra -
(dd->num_rcv_contexts - dd->first_user_ctxt));
(dd->num_rcv_contexts - dd->first_dyn_alloc_ctxt));
rcd = kzalloc_node(sizeof(*rcd), GFP_KERNEL, numa);
if (rcd) {
u32 rcvtids, max_entries;
@ -238,27 +239,29 @@ struct hfi1_ctxtdata *hfi1_create_ctxtdata(struct hfi1_pportdata *ppd, u32 ctxt,
* Calculate the context's RcvArray entry starting point.
* We do this here because we have to take into account all
* the RcvArray entries that previous context would have
* taken and we have to account for any extra groups
* assigned to the kernel or user contexts.
* taken and we have to account for any extra groups assigned
* to the static (kernel) or dynamic (vnic/user) contexts.
*/
if (ctxt < dd->first_user_ctxt) {
if (ctxt < dd->first_dyn_alloc_ctxt) {
if (ctxt < kctxt_ngroups) {
base = ctxt * (dd->rcv_entries.ngroups + 1);
rcd->rcv_array_groups++;
} else
} else {
base = kctxt_ngroups +
(ctxt * dd->rcv_entries.ngroups);
}
} else {
u16 ct = ctxt - dd->first_user_ctxt;
u16 ct = ctxt - dd->first_dyn_alloc_ctxt;
base = ((dd->n_krcv_queues * dd->rcv_entries.ngroups) +
kctxt_ngroups);
if (ct < dd->rcv_entries.nctxt_extra) {
base += ct * (dd->rcv_entries.ngroups + 1);
rcd->rcv_array_groups++;
} else
} else {
base += dd->rcv_entries.nctxt_extra +
(ct * dd->rcv_entries.ngroups);
}
}
rcd->eager_base = base * dd->rcv_entries.group_size;
@ -322,7 +325,8 @@ struct hfi1_ctxtdata *hfi1_create_ctxtdata(struct hfi1_pportdata *ppd, u32 ctxt,
}
rcd->egrbufs.rcvtid_size = HFI1_MAX_EAGER_BUFFER_SIZE;
if (ctxt < dd->first_user_ctxt) { /* N/A for PSM contexts */
/* Applicable only for statically created kernel contexts */
if (ctxt < dd->first_dyn_alloc_ctxt) {
rcd->opstats = kzalloc_node(sizeof(*rcd->opstats),
GFP_KERNEL, numa);
if (!rcd->opstats)
@ -482,6 +486,9 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd,
default_pkey_idx = 1;
ppd->pkeys[default_pkey_idx] = DEFAULT_P_KEY;
ppd->part_enforce |= HFI1_PART_ENFORCE_IN;
ppd->part_enforce |= HFI1_PART_ENFORCE_OUT;
if (loopback) {
hfi1_early_err(&pdev->dev,
"Faking data partition 0x8001 in idx %u\n",
@ -585,7 +592,7 @@ static void enable_chip(struct hfi1_devdata *dd)
* Enable kernel ctxts' receive and receive interrupt.
* Other ctxts done as user opens and initializes them.
*/
for (i = 0; i < dd->first_user_ctxt; ++i) {
for (i = 0; i < dd->first_dyn_alloc_ctxt; ++i) {
rcvmask = HFI1_RCVCTRL_CTXT_ENB | HFI1_RCVCTRL_INTRAVAIL_ENB;
rcvmask |= HFI1_CAP_KGET_MASK(dd->rcd[i]->flags, DMA_RTAIL) ?
HFI1_RCVCTRL_TAILUPD_ENB : HFI1_RCVCTRL_TAILUPD_DIS;
@ -679,6 +686,7 @@ int hfi1_init(struct hfi1_devdata *dd, int reinit)
dd->process_pio_send = hfi1_verbs_send_pio;
dd->process_dma_send = hfi1_verbs_send_dma;
dd->pio_inline_send = pio_copy;
dd->process_vnic_dma_send = hfi1_vnic_send_dma;
if (is_ax(dd)) {
atomic_set(&dd->drop_packet, DROP_PACKET_ON);
@ -714,7 +722,7 @@ int hfi1_init(struct hfi1_devdata *dd, int reinit)
}
/* dd->rcd can be NULL if early initialization failed */
for (i = 0; dd->rcd && i < dd->first_user_ctxt; ++i) {
for (i = 0; dd->rcd && i < dd->first_dyn_alloc_ctxt; ++i) {
/*
* Set up the (kernel) rcvhdr queue and egr TIDs. If doing
* re-init, the simplest way to handle this is to free
@ -1078,11 +1086,11 @@ struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev, size_t extra)
spin_lock_init(&dd->uctxt_lock);
spin_lock_init(&dd->hfi1_diag_trans_lock);
spin_lock_init(&dd->sc_init_lock);
spin_lock_init(&dd->dc8051_lock);
spin_lock_init(&dd->dc8051_memlock);
seqlock_init(&dd->sc2vl_lock);
spin_lock_init(&dd->sde_map_lock);
spin_lock_init(&dd->pio_map_lock);
mutex_init(&dd->dc8051_lock);
init_waitqueue_head(&dd->event_queue);
dd->int_counter = alloc_percpu(u64);
@ -1425,6 +1433,16 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
/* First, lock the non-writable module parameters */
HFI1_CAP_LOCK();
/* Validate dev ids */
if (!(ent->device == PCI_DEVICE_ID_INTEL0 ||
ent->device == PCI_DEVICE_ID_INTEL1)) {
hfi1_early_err(&pdev->dev,
"Failing on unknown Intel deviceid 0x%x\n",
ent->device);
ret = -ENODEV;
goto bail;
}
/* Validate some global module parameters */
ret = init_validate_rcvhdrcnt(&pdev->dev, rcvhdrcnt);
if (ret)
@ -1470,15 +1488,6 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ret)
goto bail;
if (!(ent->device == PCI_DEVICE_ID_INTEL0 ||
ent->device == PCI_DEVICE_ID_INTEL1)) {
hfi1_early_err(&pdev->dev,
"Failing on unknown Intel deviceid 0x%x\n",
ent->device);
ret = -ENODEV;
goto clean_bail;
}
/*
* Do device-specific initialization, function table setup, dd
* allocation, etc.
@ -1497,6 +1506,9 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
/* do the generic initialization */
initfail = hfi1_init(dd, 0);
/* setup vnic */
hfi1_vnic_setup(dd);
ret = hfi1_register_ib_device(dd);
/*
@ -1530,6 +1542,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
hfi1_device_remove(dd);
if (!ret)
hfi1_unregister_ib_device(dd);
hfi1_vnic_cleanup(dd);
postinit_cleanup(dd);
if (initfail)
ret = initfail;
@ -1574,6 +1587,9 @@ static void remove_one(struct pci_dev *pdev)
/* unregister from IB core */
hfi1_unregister_ib_device(dd);
/* cleanup vnic */
hfi1_vnic_cleanup(dd);
/*
* Disable the IB link, disable interrupts on the device,
* clear dma engines, etc.
@ -1613,8 +1629,11 @@ int hfi1_create_rcvhdrq(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd)
amt = PAGE_ALIGN(rcd->rcvhdrq_cnt * rcd->rcvhdrqentsize *
sizeof(u32));
gfp_flags = (rcd->ctxt >= dd->first_user_ctxt) ?
GFP_USER : GFP_KERNEL;
if ((rcd->ctxt < dd->first_dyn_alloc_ctxt) ||
(rcd->sc && (rcd->sc->type == SC_KERNEL)))
gfp_flags = GFP_KERNEL;
else
gfp_flags = GFP_USER;
rcd->rcvhdrq = dma_zalloc_coherent(
&dd->pcidev->dev, amt, &rcd->rcvhdrq_dma,
gfp_flags | __GFP_COMP);
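
The gfp_flags hunk above changes the allocation policy: kernel-owned contexts, including dynamically allocated VNIC contexts that sit on a kernel send context, now get GFP_KERNEL, while only true user contexts get GFP_USER. A minimal user-space sketch of that decision follows; the helper name and the stand-in enums are illustrative, not driver symbols.

/*
 * Minimal user-space model of the gfp_flags selection above.
 * pick_rcvhdrq_flags() and the enums are stand-ins for the real
 * GFP_* flags and send-context types.
 */
#include <stdbool.h>
#include <stdio.h>

enum alloc_class { ALLOC_KERNEL, ALLOC_USER }; /* stands in for GFP_KERNEL/GFP_USER */
enum sc_type { SC_TYPE_KERNEL, SC_TYPE_USER }; /* stands in for SC_KERNEL/SC_USER */

static enum alloc_class pick_rcvhdrq_flags(unsigned int ctxt,
                                           unsigned int first_dyn_alloc_ctxt,
                                           bool has_sc, enum sc_type sc_type)
{
    /* statically allocated kernel contexts, and dynamically allocated
     * contexts backed by a kernel send context (e.g. VNIC), use kernel
     * memory; everything else is a user context */
    if (ctxt < first_dyn_alloc_ctxt || (has_sc && sc_type == SC_TYPE_KERNEL))
        return ALLOC_KERNEL;
    return ALLOC_USER;
}

int main(void)
{
    printf("%d\n", pick_rcvhdrq_flags(2, 4, true, SC_TYPE_KERNEL)); /* 0: kernel */
    printf("%d\n", pick_rcvhdrq_flags(6, 4, true, SC_TYPE_USER));   /* 1: user   */
    return 0;
}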


@ -131,19 +131,24 @@ void handle_linkup_change(struct hfi1_devdata *dd, u32 linkup)
if (quick_linkup || dd->icode == ICODE_FUNCTIONAL_SIMULATOR) {
set_up_vl15(dd, dd->vau, dd->vl15_init);
assign_remote_cm_au_table(dd, dd->vcu);
ppd->neighbor_guid =
read_csr(dd, DC_DC8051_STS_REMOTE_GUID);
ppd->neighbor_type =
read_csr(dd, DC_DC8051_STS_REMOTE_NODE_TYPE) &
DC_DC8051_STS_REMOTE_NODE_TYPE_VAL_MASK;
ppd->neighbor_port_number =
read_csr(dd, DC_DC8051_STS_REMOTE_PORT_NO) &
DC_DC8051_STS_REMOTE_PORT_NO_VAL_SMASK;
dd_dev_info(dd, "Neighbor GUID: %llx Neighbor type %d\n",
ppd->neighbor_guid,
ppd->neighbor_type);
}
ppd->neighbor_guid =
read_csr(dd, DC_DC8051_STS_REMOTE_GUID);
ppd->neighbor_type =
read_csr(dd, DC_DC8051_STS_REMOTE_NODE_TYPE) &
DC_DC8051_STS_REMOTE_NODE_TYPE_VAL_MASK;
ppd->neighbor_port_number =
read_csr(dd, DC_DC8051_STS_REMOTE_PORT_NO) &
DC_DC8051_STS_REMOTE_PORT_NO_VAL_SMASK;
ppd->neighbor_fm_security =
read_csr(dd, DC_DC8051_STS_REMOTE_FM_SECURITY) &
DC_DC8051_STS_LOCAL_FM_SECURITY_DISABLED_MASK;
dd_dev_info(dd,
"Neighbor Guid %llx, Type %d, Port Num %d\n",
ppd->neighbor_guid, ppd->neighbor_type,
ppd->neighbor_port_number);
/* physical link went up */
ppd->linkup = 1;
ppd->offline_disabled_reason =


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015-2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -53,6 +53,7 @@
#include "mad.h"
#include "trace.h"
#include "qp.h"
#include "vnic.h"
/* the reset value from the FM is supposed to be 0xffff, handle both */
#define OPA_LINK_WIDTH_RESET_OLD 0x0fff
@ -650,9 +651,11 @@ static int __subn_get_opa_portinfo(struct opa_smp *smp, u32 am, u8 *data,
OPA_PI_MASK_PORT_ACTIVE_OPTOMIZE : 0);
pi->port_packet_format.supported =
cpu_to_be16(OPA_PORT_PACKET_FORMAT_9B);
cpu_to_be16(OPA_PORT_PACKET_FORMAT_9B |
OPA_PORT_PACKET_FORMAT_16B);
pi->port_packet_format.enabled =
cpu_to_be16(OPA_PORT_PACKET_FORMAT_9B);
cpu_to_be16(OPA_PORT_PACKET_FORMAT_9B |
OPA_PORT_PACKET_FORMAT_16B);
/* flit_control.interleave is (OPA V1, version .76):
* bits use
@ -701,7 +704,13 @@ static int __subn_get_opa_portinfo(struct opa_smp *smp, u32 am, u8 *data,
buffer_units |= (dd->vl15_init << 11) & OPA_PI_MASK_BUF_UNIT_VL15_INIT;
pi->buffer_units = cpu_to_be32(buffer_units);
pi->opa_cap_mask = cpu_to_be16(OPA_CAP_MASK3_IsSharedSpaceSupported);
pi->opa_cap_mask = cpu_to_be16(OPA_CAP_MASK3_IsSharedSpaceSupported |
OPA_CAP_MASK3_IsEthOnFabricSupported);
/* Driver does not support mcast/collective configuration */
pi->opa_cap_mask &=
cpu_to_be16(~OPA_CAP_MASK3_IsAddrRangeConfigSupported);
pi->collectivemask_multicastmask = ((HFI1_COLLECTIVE_NR & 0x7)
<< 3 | (HFI1_MCAST_NR & 0x7));
/* HFI supports a replay buffer 128 LTPs in size */
pi->replay_depth.buffer = 0x80;
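
The capability-mask manipulation a few lines up advertises Ethernet-on-fabric support while explicitly masking out the address-range-configuration bit, and packs two 3-bit counters into one byte. A small user-space sketch of the same bit handling is below; the CAP_* values are made up for illustration, only the pattern (cpu_to_be16 of an OR, then AND with the complement) mirrors the hunk.

/*
 * User-space sketch of the capability-mask / collective-mask packing
 * above. htons() stands in for cpu_to_be16(); the CAP_* values are
 * illustrative, not the real OPA_CAP_MASK3_* constants.
 */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

#define CAP_SHARED_SPACE   0x0001  /* illustrative values only */
#define CAP_ETH_ON_FABRIC  0x0002
#define CAP_ADDR_RANGE_CFG 0x0004

int main(void)
{
    uint16_t cap = htons(CAP_SHARED_SPACE | CAP_ETH_ON_FABRIC);

    /* driver does not support mcast/collective address-range config */
    cap &= htons((uint16_t)~CAP_ADDR_RANGE_CFG);

    /* two 3-bit fields share one byte: collective count in bits 5:3,
     * multicast count in bits 2:0 */
    unsigned int collective_nr = 0, mcast_nr = 0;
    uint8_t coll_mcast = ((collective_nr & 0x7) << 3) | (mcast_nr & 0x7);

    printf("cap=0x%04x coll_mcast=0x%02x\n", cap, coll_mcast);
    return 0;
}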
@ -1146,16 +1155,6 @@ static int __subn_set_opa_portinfo(struct opa_smp *smp, u32 am, u8 *data,
ppd->linkinit_reason =
(pi->partenforce_filterraw &
OPA_PI_MASK_LINKINIT_REASON);
/* enable/disable SW pkey checking as per FM control */
if (pi->partenforce_filterraw & OPA_PI_MASK_PARTITION_ENFORCE_IN)
ppd->part_enforce |= HFI1_PART_ENFORCE_IN;
else
ppd->part_enforce &= ~HFI1_PART_ENFORCE_IN;
if (pi->partenforce_filterraw & OPA_PI_MASK_PARTITION_ENFORCE_OUT)
ppd->part_enforce |= HFI1_PART_ENFORCE_OUT;
else
ppd->part_enforce &= ~HFI1_PART_ENFORCE_OUT;
/* Must be a valid unicast LID address. */
if ((smlid == 0 && ls_old > IB_PORT_INIT) ||
@ -1167,9 +1166,9 @@ static int __subn_set_opa_portinfo(struct opa_smp *smp, u32 am, u8 *data,
spin_lock_irqsave(&ibp->rvp.lock, flags);
if (ibp->rvp.sm_ah) {
if (smlid != ibp->rvp.sm_lid)
ibp->rvp.sm_ah->attr.dlid = smlid;
rdma_ah_set_dlid(&ibp->rvp.sm_ah->attr, smlid);
if (msl != ibp->rvp.sm_sl)
ibp->rvp.sm_ah->attr.sl = msl;
rdma_ah_set_sl(&ibp->rvp.sm_ah->attr, msl);
}
spin_unlock_irqrestore(&ibp->rvp.lock, flags);
if (smlid != ibp->rvp.sm_lid)
@ -1465,25 +1464,15 @@ static int __subn_set_opa_pkeytable(struct opa_smp *smp, u32 am, u8 *data,
return __subn_get_opa_pkeytable(smp, am, data, ibdev, port, resp_len);
}
static int get_sc2vlt_tables(struct hfi1_devdata *dd, void *data)
{
u64 *val = data;
*val++ = read_csr(dd, SEND_SC2VLT0);
*val++ = read_csr(dd, SEND_SC2VLT1);
*val++ = read_csr(dd, SEND_SC2VLT2);
*val++ = read_csr(dd, SEND_SC2VLT3);
return 0;
}
#define ILLEGAL_VL 12
/*
* filter_sc2vlt changes mappings to VL15 to ILLEGAL_VL (except
* for SC15, which must map to VL15). If we don't remap things this
* way it is possible for VL15 counters to increment when we try to
* send on a SC which is mapped to an invalid VL.
* When getting the table convert ILLEGAL_VL back to VL15.
*/
static void filter_sc2vlt(void *data)
static void filter_sc2vlt(void *data, bool set)
{
int i;
u8 *pd = data;
@ -1491,8 +1480,14 @@ static void filter_sc2vlt(void *data)
for (i = 0; i < OPA_MAX_SCS; i++) {
if (i == 15)
continue;
if ((pd[i] & 0x1f) == 0xf)
pd[i] = ILLEGAL_VL;
if (set) {
if ((pd[i] & 0x1f) == 0xf)
pd[i] = ILLEGAL_VL;
} else {
if ((pd[i] & 0x1f) == ILLEGAL_VL)
pd[i] = 0xf;
}
}
}
@ -1500,7 +1495,7 @@ static int set_sc2vlt_tables(struct hfi1_devdata *dd, void *data)
{
u64 *val = data;
filter_sc2vlt(data);
filter_sc2vlt(data, true);
write_csr(dd, SEND_SC2VLT0, *val++);
write_csr(dd, SEND_SC2VLT1, *val++);
@ -1512,6 +1507,19 @@ static int set_sc2vlt_tables(struct hfi1_devdata *dd, void *data)
return 0;
}
static int get_sc2vlt_tables(struct hfi1_devdata *dd, void *data)
{
u64 *val = (u64 *)data;
*val++ = read_csr(dd, SEND_SC2VLT0);
*val++ = read_csr(dd, SEND_SC2VLT1);
*val++ = read_csr(dd, SEND_SC2VLT2);
*val++ = read_csr(dd, SEND_SC2VLT3);
filter_sc2vlt((u64 *)data, false);
return 0;
}
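
The set/get pair above is symmetric: on the set path any SC (other than SC15) mapped to VL15 is redirected to an unused "illegal" VL so stray traffic cannot bump VL15 counters, and on the get path the substitution is undone so the FM reads back the table it wrote. A self-contained sketch of the round trip, same logic with user-space types:

/*
 * Stand-alone model of filter_sc2vlt() above; OPA_MAX_SCS and
 * ILLEGAL_VL are redefined locally so the example compiles on its own.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define OPA_MAX_SCS 32
#define ILLEGAL_VL  12

static void filter_sc2vlt(uint8_t *pd, bool set)
{
    for (int i = 0; i < OPA_MAX_SCS; i++) {
        if (i == 15)            /* SC15 must stay on VL15 */
            continue;
        if (set) {
            if ((pd[i] & 0x1f) == 0xf)
                pd[i] = ILLEGAL_VL;
        } else {
            if ((pd[i] & 0x1f) == ILLEGAL_VL)
                pd[i] = 0xf;
        }
    }
}

int main(void)
{
    uint8_t tbl[OPA_MAX_SCS] = { [3] = 0xf, [15] = 0xf };

    filter_sc2vlt(tbl, true);   /* what set_sc2vlt_tables() writes    */
    printf("sc3 -> vl %d, sc15 -> vl %d\n", tbl[3], tbl[15]); /* 12, 15 */
    filter_sc2vlt(tbl, false);  /* what get_sc2vlt_tables() returns   */
    printf("sc3 -> vl %d\n", tbl[3]);                         /* 15     */
    return 0;
}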
static int __subn_get_opa_sl_to_sc(struct opa_smp *smp, u32 am, u8 *data,
struct ib_device *ibdev, u8 port,
u32 *resp_len)
@ -1986,31 +1994,6 @@ struct opa_pma_mad {
u8 data[2024];
} __packed;
struct opa_class_port_info {
u8 base_version;
u8 class_version;
__be16 cap_mask;
__be32 cap_mask2_resp_time;
u8 redirect_gid[16];
__be32 redirect_tc_fl;
__be32 redirect_lid;
__be32 redirect_sl_qp;
__be32 redirect_qkey;
u8 trap_gid[16];
__be32 trap_tc_fl;
__be32 trap_lid;
__be32 trap_hl_qp;
__be32 trap_qkey;
__be16 trap_pkey;
__be16 redirect_pkey;
u8 trap_sl_rsvd;
u8 reserved[3];
} __packed;
struct opa_port_status_req {
__u8 port_num;
__u8 reserved[3];


@ -583,7 +583,7 @@ pci_mmio_enabled(struct pci_dev *pdev)
if (words == ~0ULL)
ret = PCI_ERS_RESULT_NEED_RESET;
dd_dev_info(dd,
"HFI1 mmio_enabled function called, read wordscntr %Lx, returning %d\n",
"HFI1 mmio_enabled function called, read wordscntr %llx, returning %d\n",
words, ret);
}
return ret;


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015-2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -703,6 +703,7 @@ struct send_context *sc_alloc(struct hfi1_devdata *dd, int type,
{
struct send_context_info *sci;
struct send_context *sc = NULL;
int req_type = type;
dma_addr_t dma;
unsigned long flags;
u64 reg;
@ -729,6 +730,13 @@ struct send_context *sc_alloc(struct hfi1_devdata *dd, int type,
return NULL;
}
/*
* VNIC contexts are dynamically allocated.
* Hence, pick a user context for VNIC.
*/
if (type == SC_VNIC)
type = SC_USER;
spin_lock_irqsave(&dd->sc_lock, flags);
ret = sc_hw_alloc(dd, type, &sw_index, &hw_context);
if (ret) {
@ -738,6 +746,15 @@ struct send_context *sc_alloc(struct hfi1_devdata *dd, int type,
return NULL;
}
/*
* VNIC contexts are used by kernel driver.
* Hence, mark them as kernel contexts.
*/
if (req_type == SC_VNIC) {
dd->send_contexts[sw_index].type = SC_KERNEL;
type = SC_KERNEL;
}
sci = &dd->send_contexts[sw_index];
sci->sc = sc;
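
The sc_alloc() hunks above implement a two-step mapping for VNIC: the hardware context is carved from the SC_USER pool, but the software bookkeeping records it as SC_KERNEL so the rest of the driver treats it as a kernel context. A tiny user-space model of that mapping is below; SC_KERNEL is assumed to be 0 here, the other values follow the pio.h hunk further down.

/*
 * Illustrative model of the SC_VNIC request handling: draw from the
 * user pool, record as a kernel context. The struct is a stand-in.
 */
#include <stdio.h>

#define SC_KERNEL 0   /* assumed 0 for illustration */
#define SC_USER   3   /* per pio.h */
#define SC_MAX    4
#define SC_VNIC   SC_MAX

struct sc_choice {
    int hw_pool;        /* which pool sc_hw_alloc() carves from */
    int recorded_type;  /* what dd->send_contexts[] remembers   */
};

static struct sc_choice classify_sc(int req_type)
{
    struct sc_choice c;

    c.hw_pool = (req_type == SC_VNIC) ? SC_USER : req_type;
    c.recorded_type = (req_type == SC_VNIC) ? SC_KERNEL : req_type;
    return c;
}

int main(void)
{
    struct sc_choice c = classify_sc(SC_VNIC);

    printf("pool=%d recorded=%d\n", c.hw_pool, c.recorded_type); /* 3 0 */
    return 0;
}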


@ -1,7 +1,7 @@
#ifndef _PIO_H
#define _PIO_H
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015-2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -54,6 +54,12 @@
#define SC_USER 3 /* must be the last one: it may take all left */
#define SC_MAX 4 /* count of send context types */
/*
* SC_VNIC types are allocated (dynamically) from the user context pool,
* (SC_USER) and used by kernel driver as kernel contexts (SC_KERNEL).
*/
#define SC_VNIC SC_MAX
/* invalid send context index */
#define INVALID_SCI 0xff
@ -195,7 +201,7 @@ struct sc_config_sizes {
* | mask | --/ |--------------------|
* |--------------------------| -/ | * |
* | actual_vls (max 8) | -/ |--------------------|
* |--------------------------| --/ | ksc[n] -> sc n |
* |--------------------------| --/ | ksc[n-1] -> sc n |
* | vls (max 8) | -/ +--------------------+
* |--------------------------| --/
* | map[0] |-/
@ -208,21 +214,21 @@ struct sc_config_sizes {
* |--------------------------| |--------------------|
* | map[vls - 1] |- | * |
* +--------------------------+ \- |--------------------|
* \- | ksc[m] -> sc m+n |
* \- | ksc[m-1] -> sc m+n |
* \ +--------------------+
* \-
* \
* \- +--------------------+
* \- | mask |
* \ |--------------------|
* \- | ksc[0] -> sc 1+m+n |
* \- |--------------------|
* >| ksc[1] -> sc 2+m+n |
* |--------------------|
* | * |
* |--------------------|
* | ksc[o] -> sc o+m+n |
* +--------------------+
* \- +----------------------+
* \- | mask |
* \ |----------------------|
* \- | ksc[0] -> sc 1+m+n |
* \- |----------------------|
* >| ksc[1] -> sc 2+m+n |
* |----------------------|
* | * |
* |----------------------|
* | ksc[o-1] -> sc o+m+n |
* +----------------------+
*
*/


@ -294,7 +294,7 @@ int hfi1_check_send_wqe(struct rvt_qp *qp,
ah = ibah_to_rvtah(wqe->ud_wr.ah);
if (wqe->length > (1 << ah->log_pmtu))
return -EINVAL;
if (ibp->sl_to_sc[ah->attr.sl] == 0xf)
if (ibp->sl_to_sc[rdma_ah_get_sl(&ah->attr)] == 0xf)
return -EINVAL;
default:
break;
@ -631,8 +631,8 @@ void qp_iter_print(struct seq_file *s, struct qp_iter *iter)
qp->s_tail, qp->s_head, qp->s_size,
qp->s_avail,
qp->remote_qpn,
qp->remote_ah_attr.dlid,
qp->remote_ah_attr.sl,
rdma_ah_get_dlid(&qp->remote_ah_attr),
rdma_ah_get_sl(&qp->remote_ah_attr),
qp->pmtu,
qp->s_retry,
qp->s_retry_cnt,
@ -748,7 +748,7 @@ void hfi1_migrate_qp(struct rvt_qp *qp)
qp->s_mig_state = IB_MIG_MIGRATED;
qp->remote_ah_attr = qp->alt_ah_attr;
qp->port_num = qp->alt_ah_attr.port_num;
qp->port_num = rdma_ah_get_port_num(&qp->alt_ah_attr);
qp->s_pkey_index = qp->s_alt_pkey_index;
qp->s_flags |= RVT_S_AHG_CLEAR;
priv->s_sc = ah_to_sc(qp->ibqp.device, &qp->remote_ah_attr);
@ -778,7 +778,7 @@ u32 mtu_from_qp(struct rvt_dev_info *rdi, struct rvt_qp *qp, u32 pmtu)
u8 sc, vl;
ibp = &dd->pport[qp->port_num - 1].ibport_data;
sc = ibp->sl_to_sc[qp->remote_ah_attr.sl];
sc = ibp->sl_to_sc[rdma_ah_get_sl(&qp->remote_ah_attr)];
vl = sc_to_vlt(dd, sc);
mtu = verbs_mtu_enum_to_int(qp->ibqp.device, pmtu);
@ -861,7 +861,7 @@ void hfi1_error_port_qps(struct hfi1_ibport *ibp, u8 sl)
if (qp->port_num == ppd->port &&
(qp->ibqp.qp_type == IB_QPT_UC ||
qp->ibqp.qp_type == IB_QPT_RC) &&
qp->remote_ah_attr.sl == sl &&
rdma_ah_get_sl(&qp->remote_ah_attr) == sl &&
(ib_rvt_state_ops[qp->state] &
RVT_POST_SEND_OK)) {
spin_lock_irq(&qp->r_lock);


@ -274,7 +274,7 @@ int hfi1_make_rc_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
goto bail_no_tx;
ohdr = &ps->s_txreq->phdr.hdr.u.oth;
if (qp->remote_ah_attr.ah_flags & IB_AH_GRH)
if (rdma_ah_get_ah_flags(&qp->remote_ah_attr) & IB_AH_GRH)
ohdr = &ps->s_txreq->phdr.hdr.u.l.oth;
/* Sending responses has higher priority over sending requests. */
@ -744,9 +744,10 @@ void hfi1_send_rc_ack(struct hfi1_ctxtdata *rcd, struct rvt_qp *qp,
/* Construct the header */
/* header size in 32-bit words LRH+BTH+AETH = (8+12+4)/4 */
hwords = 6;
if (unlikely(qp->remote_ah_attr.ah_flags & IB_AH_GRH)) {
if (unlikely(rdma_ah_get_ah_flags(&qp->remote_ah_attr) & IB_AH_GRH)) {
hwords += hfi1_make_grh(ibp, &hdr.u.l.grh,
&qp->remote_ah_attr.grh, hwords, 0);
rdma_ah_read_grh(&qp->remote_ah_attr),
hwords, 0);
ohdr = &hdr.u.l.oth;
lrh0 = HFI1_LRH_GRH;
} else {
@ -763,17 +764,19 @@ void hfi1_send_rc_ack(struct hfi1_ctxtdata *rcd, struct rvt_qp *qp,
IB_AETH_CREDIT_SHIFT));
else
ohdr->u.aeth = rvt_compute_aeth(qp);
sc5 = ibp->sl_to_sc[qp->remote_ah_attr.sl];
sc5 = ibp->sl_to_sc[rdma_ah_get_sl(&qp->remote_ah_attr)];
/* set PBC_DC_INFO bit (aka SC[4]) in pbc_flags */
pbc_flags |= ((!!(sc5 & 0x10)) << PBC_DC_INFO_SHIFT);
lrh0 |= (sc5 & 0xf) << 12 | (qp->remote_ah_attr.sl & 0xf) << 4;
lrh0 |= (sc5 & 0xf) << 12 | (rdma_ah_get_sl(&qp->remote_ah_attr)
& 0xf) << 4;
hdr.lrh[0] = cpu_to_be16(lrh0);
hdr.lrh[1] = cpu_to_be16(qp->remote_ah_attr.dlid);
hdr.lrh[1] = cpu_to_be16(rdma_ah_get_dlid(&qp->remote_ah_attr));
hdr.lrh[2] = cpu_to_be16(hwords + SIZE_OF_CRC);
hdr.lrh[3] = cpu_to_be16(ppd->lid | qp->remote_ah_attr.src_path_bits);
hdr.lrh[3] = cpu_to_be16(ppd->lid |
rdma_ah_get_path_bits(&qp->remote_ah_attr));
ohdr->bth[0] = cpu_to_be32(bth0);
ohdr->bth[1] = cpu_to_be32(qp->remote_qpn);
ohdr->bth[1] |= cpu_to_be32((!!is_fecn) << HFI1_BECN_SHIFT);
ohdr->bth[1] |= cpu_to_be32((!!is_fecn) << IB_BECN_SHIFT);
ohdr->bth[2] = cpu_to_be32(mask_psn(qp->r_ack_psn));
/* Don't try to send ACKs if the link isn't ACTIVE */
@ -994,12 +997,12 @@ void hfi1_rc_send_complete(struct rvt_qp *qp, struct ib_header *hdr)
return;
/* Find out where the BTH is */
if ((be16_to_cpu(hdr->lrh[0]) & 3) == HFI1_LRH_BTH)
if (ib_get_lnh(hdr) == HFI1_LRH_BTH)
ohdr = &hdr->u.oth;
else
ohdr = &hdr->u.l.oth;
opcode = be32_to_cpu(ohdr->bth[0]) >> 24;
opcode = ib_bth_get_opcode(ohdr);
if (opcode >= OP(RDMA_READ_RESPONSE_FIRST) &&
opcode <= OP(ATOMIC_ACKNOWLEDGE)) {
WARN_ON(!qp->s_rdma_ack_cnt);
@ -1028,13 +1031,17 @@ void hfi1_rc_send_complete(struct rvt_qp *qp, struct ib_header *hdr)
cmp_psn(qp->s_sending_psn, qp->s_sending_hpsn) <= 0)
break;
s_last = qp->s_last;
trace_hfi1_qp_send_completion(qp, wqe, s_last);
if (++s_last >= qp->s_size)
s_last = 0;
qp->s_last = s_last;
/* see post_send() */
barrier();
rvt_put_swqe(wqe);
rvt_qp_swqe_complete(qp, wqe, IB_WC_SUCCESS);
rvt_qp_swqe_complete(qp,
wqe,
ib_hfi1_wc_opcode[wqe->wr.opcode],
IB_WC_SUCCESS);
}
/*
* If we were waiting for sends to complete before re-sending,
@ -1076,12 +1083,16 @@ static struct rvt_swqe *do_rc_completion(struct rvt_qp *qp,
rvt_put_swqe(wqe);
s_last = qp->s_last;
trace_hfi1_qp_send_completion(qp, wqe, s_last);
if (++s_last >= qp->s_size)
s_last = 0;
qp->s_last = s_last;
/* see post_send() */
barrier();
rvt_qp_swqe_complete(qp, wqe, IB_WC_SUCCESS);
rvt_qp_swqe_complete(qp,
wqe,
ib_hfi1_wc_opcode[wqe->wr.opcode],
IB_WC_SUCCESS);
} else {
struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
@ -1092,10 +1103,11 @@ static struct rvt_swqe *do_rc_completion(struct rvt_qp *qp,
*/
if (ppd->dd->flags & HFI1_HAS_SEND_DMA) {
struct sdma_engine *engine;
u8 sl = rdma_ah_get_sl(&qp->remote_ah_attr);
u8 sc5;
/* For now use sc to find engine */
sc5 = ibp->sl_to_sc[qp->remote_ah_attr.sl];
sc5 = ibp->sl_to_sc[sl];
engine = qp_to_sdma_engine(qp, sc5);
sdma_engine_progress_schedule(engine);
}
@ -1516,7 +1528,7 @@ static void rc_rcv_resp(struct hfi1_ibport *ibp,
if (!do_rc_ack(qp, aeth, psn, opcode, 0, rcd))
goto ack_done;
/* Get the number of bytes the message was padded by. */
pad = (be32_to_cpu(ohdr->bth[0]) >> 20) & 3;
pad = ib_bth_get_pad(ohdr);
/*
* Check that the data size is >= 0 && <= pmtu.
* Remember to account for ICRC (4).
@ -1540,7 +1552,7 @@ static void rc_rcv_resp(struct hfi1_ibport *ibp,
if (unlikely(wqe->wr.opcode != IB_WR_RDMA_READ))
goto ack_op_err;
/* Get the number of bytes the message was padded by. */
pad = (be32_to_cpu(ohdr->bth[0]) >> 20) & 3;
pad = ib_bth_get_pad(ohdr);
/*
* Check that the data size is >= 1 && <= pmtu.
* Remember to account for ICRC (4).
@ -1922,7 +1934,8 @@ void hfi1_rc_rcv(struct hfi1_packet *packet)
int diff;
struct ib_reth *reth;
unsigned long flags;
int ret, is_fecn = 0;
int ret;
bool is_fecn = false;
bool copy_last = false;
u32 rkey;
@ -1934,7 +1947,7 @@ void hfi1_rc_rcv(struct hfi1_packet *packet)
is_fecn = process_ecn(qp, packet, false);
psn = be32_to_cpu(ohdr->bth[2]);
opcode = (bth0 >> 24) & 0xff;
opcode = ib_bth_get_opcode(ohdr);
/*
* Process responses (ACKs) before anything else. Note that the
@ -2065,7 +2078,7 @@ void hfi1_rc_rcv(struct hfi1_packet *packet)
wc.ex.imm_data = 0;
send_last:
/* Get the number of bytes the message was padded by. */
pad = (bth0 >> 20) & 3;
pad = ib_bth_get_pad(ohdr);
/* Check for invalid length. */
/* LAST len should be >= 1 */
if (unlikely(tlen < (hdrsize + pad + 4)))
@ -2089,7 +2102,7 @@ void hfi1_rc_rcv(struct hfi1_packet *packet)
wc.opcode = IB_WC_RECV;
wc.qp = &qp->ibqp;
wc.src_qp = qp->remote_qpn;
wc.slid = qp->remote_ah_attr.dlid;
wc.slid = rdma_ah_get_dlid(&qp->remote_ah_attr);
/*
* It seems that IB mandates the presence of an SL in a
* work completion only for the UD transport (see section
@ -2101,7 +2114,7 @@ void hfi1_rc_rcv(struct hfi1_packet *packet)
*
* See also OPA Vol. 1, section 9.7.6, and table 9-17.
*/
wc.sl = qp->remote_ah_attr.sl;
wc.sl = rdma_ah_get_sl(&qp->remote_ah_attr);
/* zero fields that are N/A */
wc.vendor_err = 0;
wc.pkey_index = 0;
@ -2378,7 +2391,7 @@ void hfi1_rc_hdrerr(
return;
psn = be32_to_cpu(ohdr->bth[2]);
opcode = (bth0 >> 24) & 0xff;
opcode = ib_bth_get_opcode(ohdr);
/* Only deal with RDMA Writes for now */
if (opcode < IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST) {


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015 - 2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -219,72 +219,84 @@ int hfi1_ruc_check_hdr(struct hfi1_ibport *ibp, struct ib_header *hdr,
{
__be64 guid;
unsigned long flags;
u8 sc5 = ibp->sl_to_sc[qp->remote_ah_attr.sl];
u8 sc5 = ibp->sl_to_sc[rdma_ah_get_sl(&qp->remote_ah_attr)];
if (qp->s_mig_state == IB_MIG_ARMED && (bth0 & IB_BTH_MIG_REQ)) {
if (!has_grh) {
if (qp->alt_ah_attr.ah_flags & IB_AH_GRH)
if (rdma_ah_get_ah_flags(&qp->alt_ah_attr) &
IB_AH_GRH)
goto err;
} else {
if (!(qp->alt_ah_attr.ah_flags & IB_AH_GRH))
const struct ib_global_route *grh;
if (!(rdma_ah_get_ah_flags(&qp->alt_ah_attr) &
IB_AH_GRH))
goto err;
guid = get_sguid(ibp, qp->alt_ah_attr.grh.sgid_index);
grh = rdma_ah_read_grh(&qp->alt_ah_attr);
guid = get_sguid(ibp, grh->sgid_index);
if (!gid_ok(&hdr->u.l.grh.dgid, ibp->rvp.gid_prefix,
guid))
goto err;
if (!gid_ok(
&hdr->u.l.grh.sgid,
qp->alt_ah_attr.grh.dgid.global.subnet_prefix,
qp->alt_ah_attr.grh.dgid.global.interface_id))
grh->dgid.global.subnet_prefix,
grh->dgid.global.interface_id))
goto err;
}
if (unlikely(rcv_pkey_check(ppd_from_ibp(ibp), (u16)bth0,
sc5, be16_to_cpu(hdr->lrh[3])))) {
if (unlikely(rcv_pkey_check(ppd_from_ibp(ibp), (u16)bth0, sc5,
ib_get_slid(hdr)))) {
hfi1_bad_pqkey(ibp, OPA_TRAP_BAD_P_KEY,
(u16)bth0,
(be16_to_cpu(hdr->lrh[0]) >> 4) & 0xF,
ib_get_sl(hdr),
0, qp->ibqp.qp_num,
be16_to_cpu(hdr->lrh[3]),
be16_to_cpu(hdr->lrh[1]));
ib_get_slid(hdr),
ib_get_dlid(hdr));
goto err;
}
/* Validate the SLID. See Ch. 9.6.1.5 and 17.2.8 */
if (be16_to_cpu(hdr->lrh[3]) != qp->alt_ah_attr.dlid ||
ppd_from_ibp(ibp)->port != qp->alt_ah_attr.port_num)
if (ib_get_slid(hdr) !=
rdma_ah_get_dlid(&qp->alt_ah_attr) ||
ppd_from_ibp(ibp)->port !=
rdma_ah_get_port_num(&qp->alt_ah_attr))
goto err;
spin_lock_irqsave(&qp->s_lock, flags);
hfi1_migrate_qp(qp);
spin_unlock_irqrestore(&qp->s_lock, flags);
} else {
if (!has_grh) {
if (qp->remote_ah_attr.ah_flags & IB_AH_GRH)
if (rdma_ah_get_ah_flags(&qp->remote_ah_attr) &
IB_AH_GRH)
goto err;
} else {
if (!(qp->remote_ah_attr.ah_flags & IB_AH_GRH))
const struct ib_global_route *grh;
if (!(rdma_ah_get_ah_flags(&qp->remote_ah_attr) &
IB_AH_GRH))
goto err;
guid = get_sguid(ibp,
qp->remote_ah_attr.grh.sgid_index);
grh = rdma_ah_read_grh(&qp->remote_ah_attr);
guid = get_sguid(ibp, grh->sgid_index);
if (!gid_ok(&hdr->u.l.grh.dgid, ibp->rvp.gid_prefix,
guid))
goto err;
if (!gid_ok(
&hdr->u.l.grh.sgid,
qp->remote_ah_attr.grh.dgid.global.subnet_prefix,
qp->remote_ah_attr.grh.dgid.global.interface_id))
grh->dgid.global.subnet_prefix,
grh->dgid.global.interface_id))
goto err;
}
if (unlikely(rcv_pkey_check(ppd_from_ibp(ibp), (u16)bth0,
sc5, be16_to_cpu(hdr->lrh[3])))) {
if (unlikely(rcv_pkey_check(ppd_from_ibp(ibp), (u16)bth0, sc5,
ib_get_slid(hdr)))) {
hfi1_bad_pqkey(ibp, OPA_TRAP_BAD_P_KEY,
(u16)bth0,
(be16_to_cpu(hdr->lrh[0]) >> 4) & 0xF,
ib_get_sl(hdr),
0, qp->ibqp.qp_num,
be16_to_cpu(hdr->lrh[3]),
be16_to_cpu(hdr->lrh[1]));
ib_get_slid(hdr),
ib_get_dlid(hdr));
goto err;
}
/* Validate the SLID. See Ch. 9.6.1.5 */
if (be16_to_cpu(hdr->lrh[3]) != qp->remote_ah_attr.dlid ||
if (ib_get_slid(hdr) !=
rdma_ah_get_dlid(&qp->remote_ah_attr) ||
ppd_from_ibp(ibp)->port != qp->port_num)
goto err;
if (qp->s_mig_state == IB_MIG_REARM &&
@ -542,8 +554,8 @@ static void ruc_loopback(struct rvt_qp *sqp)
wc.byte_len = wqe->length;
wc.qp = &qp->ibqp;
wc.src_qp = qp->remote_qpn;
wc.slid = qp->remote_ah_attr.dlid;
wc.sl = qp->remote_ah_attr.sl;
wc.slid = rdma_ah_get_dlid(&qp->remote_ah_attr);
wc.sl = rdma_ah_get_sl(&qp->remote_ah_attr);
wc.port_num = 1;
/* Signal completion event if the solicited bit is set. */
rvt_cq_enter(ibcq_to_rvtcq(qp->ibqp.recv_cq), &wc,
@ -637,7 +649,7 @@ static void ruc_loopback(struct rvt_qp *sqp)
* Return the size of the header in 32 bit words.
*/
u32 hfi1_make_grh(struct hfi1_ibport *ibp, struct ib_grh *hdr,
struct ib_global_route *grh, u32 hwords, u32 nwords)
const struct ib_global_route *grh, u32 hwords, u32 nwords)
{
hdr->version_tclass_flow =
cpu_to_be32((IB_GRH_VERSION << IB_GRH_VERSION_SHIFT) |
@ -731,15 +743,17 @@ void hfi1_make_ruc_header(struct rvt_qp *qp, struct ib_other_headers *ohdr,
extra_bytes = -ps->s_txreq->s_cur_size & 3;
nwords = (ps->s_txreq->s_cur_size + extra_bytes) >> 2;
lrh0 = HFI1_LRH_BTH;
if (unlikely(qp->remote_ah_attr.ah_flags & IB_AH_GRH)) {
qp->s_hdrwords += hfi1_make_grh(ibp,
&ps->s_txreq->phdr.hdr.u.l.grh,
&qp->remote_ah_attr.grh,
qp->s_hdrwords, nwords);
if (unlikely(rdma_ah_get_ah_flags(&qp->remote_ah_attr) & IB_AH_GRH)) {
qp->s_hdrwords +=
hfi1_make_grh(ibp,
&ps->s_txreq->phdr.hdr.u.l.grh,
rdma_ah_read_grh(&qp->remote_ah_attr),
qp->s_hdrwords, nwords);
lrh0 = HFI1_LRH_GRH;
middle = 0;
}
lrh0 |= (priv->s_sc & 0xf) << 12 | (qp->remote_ah_attr.sl & 0xf) << 4;
lrh0 |= (priv->s_sc & 0xf) << 12 |
(rdma_ah_get_sl(&qp->remote_ah_attr) & 0xf) << 4;
/*
* reset s_ahg/AHG fields
*
@ -763,11 +777,13 @@ void hfi1_make_ruc_header(struct rvt_qp *qp, struct ib_other_headers *ohdr,
else
qp->s_flags &= ~RVT_S_AHG_VALID;
ps->s_txreq->phdr.hdr.lrh[0] = cpu_to_be16(lrh0);
ps->s_txreq->phdr.hdr.lrh[1] = cpu_to_be16(qp->remote_ah_attr.dlid);
ps->s_txreq->phdr.hdr.lrh[1] =
cpu_to_be16(rdma_ah_get_dlid(&qp->remote_ah_attr));
ps->s_txreq->phdr.hdr.lrh[2] =
cpu_to_be16(qp->s_hdrwords + nwords + SIZE_OF_CRC);
ps->s_txreq->phdr.hdr.lrh[3] = cpu_to_be16(ppd_from_ibp(ibp)->lid |
qp->remote_ah_attr.src_path_bits);
ps->s_txreq->phdr.hdr.lrh[3] =
cpu_to_be16(ppd_from_ibp(ibp)->lid |
rdma_ah_get_path_bits(&qp->remote_ah_attr));
bth0 |= hfi1_get_pkey(ibp, qp->s_pkey_index);
bth0 |= extra_bytes << 20;
ohdr->bth[0] = cpu_to_be32(bth0);
@ -775,7 +791,7 @@ void hfi1_make_ruc_header(struct rvt_qp *qp, struct ib_other_headers *ohdr,
if (qp->s_flags & RVT_S_ECN) {
qp->s_flags &= ~RVT_S_ECN;
/* we recently received a FECN, so return a BECN */
bth1 |= (HFI1_BECN_MASK << HFI1_BECN_SHIFT);
bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT);
}
ohdr->bth[1] = cpu_to_be32(bth1);
ohdr->bth[2] = cpu_to_be32(bth2);
@ -784,23 +800,29 @@ void hfi1_make_ruc_header(struct rvt_qp *qp, struct ib_other_headers *ohdr,
/* when sending, force a reschedule every one of these periods */
#define SEND_RESCHED_TIMEOUT (5 * HZ) /* 5s in jiffies */
void hfi1_do_send_from_rvt(struct rvt_qp *qp)
{
hfi1_do_send(qp, false);
}
void _hfi1_do_send(struct work_struct *work)
{
struct iowait *wait = container_of(work, struct iowait, iowork);
struct rvt_qp *qp = iowait_to_qp(wait);
hfi1_do_send(qp);
hfi1_do_send(qp, true);
}
/**
* hfi1_do_send - perform a send on a QP
* @work: contains a pointer to the QP
* @in_thread: true if in a workqueue thread
*
* Process entries in the send work queue until credit or queue is
* exhausted. Only allow one CPU to send a packet per QP.
* Otherwise, two threads could send packets out of order.
*/
void hfi1_do_send(struct rvt_qp *qp)
void hfi1_do_send(struct rvt_qp *qp, bool in_thread)
{
struct hfi1_pkt_state ps;
struct hfi1_qp_priv *priv = qp->priv;
@ -815,9 +837,9 @@ void hfi1_do_send(struct rvt_qp *qp)
switch (qp->ibqp.qp_type) {
case IB_QPT_RC:
if (!loopback && ((qp->remote_ah_attr.dlid & ~((1 << ps.ppd->lmc
) - 1)) ==
ps.ppd->lid)) {
if (!loopback && ((rdma_ah_get_dlid(&qp->remote_ah_attr) &
~((1 << ps.ppd->lmc) - 1)) ==
ps.ppd->lid)) {
ruc_loopback(qp);
return;
}
@ -825,9 +847,9 @@ void hfi1_do_send(struct rvt_qp *qp)
timeout_int = (qp->timeout_jiffies);
break;
case IB_QPT_UC:
if (!loopback && ((qp->remote_ah_attr.dlid & ~((1 << ps.ppd->lmc
) - 1)) ==
ps.ppd->lid)) {
if (!loopback && ((rdma_ah_get_dlid(&qp->remote_ah_attr) &
~((1 << ps.ppd->lmc) - 1)) ==
ps.ppd->lid)) {
ruc_loopback(qp);
return;
}
@ -868,8 +890,10 @@ void hfi1_do_send(struct rvt_qp *qp)
qp->s_hdrwords = 0;
/* allow other tasks to run */
if (unlikely(time_after(jiffies, timeout))) {
if (workqueue_congested(cpu,
ps.ppd->hfi1_wq)) {
if (!in_thread ||
workqueue_congested(
cpu,
ps.ppd->hfi1_wq)) {
spin_lock_irqsave(
&qp->s_lock,
ps.flags);
@ -882,11 +906,9 @@ void hfi1_do_send(struct rvt_qp *qp)
*ps.ppd->dd->send_schedule);
return;
}
if (!irqs_disabled()) {
cond_resched();
this_cpu_inc(
*ps.ppd->dd->send_schedule);
}
cond_resched();
this_cpu_inc(
*ps.ppd->dd->send_schedule);
timeout = jiffies + (timeout_int) / 8;
}
spin_lock_irqsave(&qp->s_lock, ps.flags);
@ -909,8 +931,10 @@ void hfi1_send_complete(struct rvt_qp *qp, struct rvt_swqe *wqe,
last = qp->s_last;
old_last = last;
trace_hfi1_qp_send_completion(qp, wqe, last);
if (++last >= qp->s_size)
last = 0;
trace_hfi1_qp_send_completion(qp, wqe, last);
qp->s_last = last;
/* See post_send() */
barrier();
@ -920,7 +944,10 @@ void hfi1_send_complete(struct rvt_qp *qp, struct rvt_swqe *wqe,
qp->ibqp.qp_type == IB_QPT_GSI)
atomic_dec(&ibah_to_rvtah(wqe->ud_wr.ah)->refcount);
rvt_qp_swqe_complete(qp, wqe, status);
rvt_qp_swqe_complete(qp,
wqe,
ib_hfi1_wc_opcode[wqe->wr.opcode],
status);
if (qp->s_acked == old_last)
qp->s_acked = last;


@ -868,7 +868,7 @@ struct sdma_engine *sdma_select_user_engine(struct hfi1_devdata *dd,
cpu_id = smp_processor_id();
rcu_read_lock();
rht_node = rhashtable_lookup_fast(&dd->sdma_rht, &cpu_id,
rht_node = rhashtable_lookup_fast(dd->sdma_rht, &cpu_id,
sdma_rht_params);
if (rht_node && rht_node->map[vl]) {
@ -962,7 +962,12 @@ ssize_t sdma_set_cpu_to_sde_map(struct sdma_engine *sde, const char *buf,
continue;
}
rht_node = rhashtable_lookup_fast(&dd->sdma_rht, &cpu,
if (vl >= ARRAY_SIZE(rht_node->map)) {
ret = -EINVAL;
goto out;
}
rht_node = rhashtable_lookup_fast(dd->sdma_rht, &cpu,
sdma_rht_params);
if (!rht_node) {
rht_node = kzalloc(sizeof(*rht_node), GFP_KERNEL);
@ -982,7 +987,7 @@ ssize_t sdma_set_cpu_to_sde_map(struct sdma_engine *sde, const char *buf,
rht_node->map[vl]->ctr = 1;
rht_node->map[vl]->sde[0] = sde;
ret = rhashtable_insert_fast(&dd->sdma_rht,
ret = rhashtable_insert_fast(dd->sdma_rht,
&rht_node->node,
sdma_rht_params);
if (ret) {
@ -1025,7 +1030,7 @@ ssize_t sdma_set_cpu_to_sde_map(struct sdma_engine *sde, const char *buf,
if (cpumask_test_cpu(cpu, mask))
continue;
rht_node = rhashtable_lookup_fast(&dd->sdma_rht, &cpu,
rht_node = rhashtable_lookup_fast(dd->sdma_rht, &cpu,
sdma_rht_params);
if (rht_node) {
bool empty = true;
@ -1049,7 +1054,7 @@ ssize_t sdma_set_cpu_to_sde_map(struct sdma_engine *sde, const char *buf,
}
if (empty) {
ret = rhashtable_remove_fast(&dd->sdma_rht,
ret = rhashtable_remove_fast(dd->sdma_rht,
&rht_node->node,
sdma_rht_params);
WARN_ON(ret);
@ -1108,7 +1113,7 @@ void sdma_seqfile_dump_cpu_list(struct seq_file *s,
struct sdma_rht_node *rht_node;
int i, j;
rht_node = rhashtable_lookup_fast(&dd->sdma_rht, &cpuid,
rht_node = rhashtable_lookup_fast(dd->sdma_rht, &cpuid,
sdma_rht_params);
if (!rht_node)
return;
@ -1322,6 +1327,12 @@ static void sdma_clean(struct hfi1_devdata *dd, size_t num_engines)
synchronize_rcu();
kfree(dd->per_sdma);
dd->per_sdma = NULL;
if (dd->sdma_rht) {
rhashtable_free_and_destroy(dd->sdma_rht, sdma_rht_free, NULL);
kfree(dd->sdma_rht);
dd->sdma_rht = NULL;
}
}
/**
@ -1341,12 +1352,14 @@ int sdma_init(struct hfi1_devdata *dd, u8 port)
{
unsigned this_idx;
struct sdma_engine *sde;
struct rhashtable *tmp_sdma_rht;
u16 descq_cnt;
void *curr_head;
struct hfi1_pportdata *ppd = dd->pport + port;
u32 per_sdma_credits;
uint idle_cnt = sdma_idle_cnt;
size_t num_engines = dd->chip_sdma_engines;
int ret = -ENOMEM;
if (!HFI1_CAP_IS_KSET(SDMA)) {
HFI1_CAP_CLEAR(SDMA_AHG);
@ -1378,7 +1391,7 @@ int sdma_init(struct hfi1_devdata *dd, u8 port)
/* alloc memory for array of send engines */
dd->per_sdma = kcalloc(num_engines, sizeof(*dd->per_sdma), GFP_KERNEL);
if (!dd->per_sdma)
return -ENOMEM;
return ret;
idle_cnt = ns_to_cclock(dd, idle_cnt);
if (!sdma_desct_intr)
@ -1507,18 +1520,27 @@ int sdma_init(struct hfi1_devdata *dd, u8 port)
dd->flags |= HFI1_HAS_SEND_DMA;
dd->flags |= idle_cnt ? HFI1_HAS_SDMA_TIMEOUT : 0;
dd->num_sdma = num_engines;
if (sdma_map_init(dd, port, ppd->vls_operational, NULL))
ret = sdma_map_init(dd, port, ppd->vls_operational, NULL);
if (ret < 0)
goto bail;
if (rhashtable_init(&dd->sdma_rht, &sdma_rht_params))
tmp_sdma_rht = kzalloc(sizeof(*tmp_sdma_rht), GFP_KERNEL);
if (!tmp_sdma_rht) {
ret = -ENOMEM;
goto bail;
}
ret = rhashtable_init(tmp_sdma_rht, &sdma_rht_params);
if (ret < 0)
goto bail;
dd->sdma_rht = tmp_sdma_rht;
dd_dev_info(dd, "SDMA num_sdma: %u\n", dd->num_sdma);
return 0;
bail:
sdma_clean(dd, num_engines);
return -ENOMEM;
return ret;
}
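
The sdma_init() changes above convert dd->sdma_rht from an embedded table to a pointer that is built in a local variable and only published once rhashtable_init() has succeeded, which lets the cleanup path simply test for NULL. A user-space sketch of that allocate/init/publish pattern, with plain malloc/free standing in for the rhashtable calls:

/*
 * Sketch of the publish-after-init lifecycle used for dd->sdma_rht.
 * "struct table" and the helpers are stand-ins, not driver code.
 */
#include <stdlib.h>

struct table { int dummy; };
struct dev { struct table *sdma_rht; };

static int table_init(struct table *t) { t->dummy = 0; return 0; }

static int dev_sdma_init(struct dev *dd)
{
    struct table *tmp = calloc(1, sizeof(*tmp));

    if (!tmp)
        return -1;              /* -ENOMEM in the driver */
    if (table_init(tmp)) {
        free(tmp);
        return -1;
    }
    dd->sdma_rht = tmp;         /* publish only on success */
    return 0;
}

static void dev_sdma_clean(struct dev *dd)
{
    if (dd->sdma_rht) {         /* safe even after a partial init */
        free(dd->sdma_rht);
        dd->sdma_rht = NULL;
    }
}

int main(void)
{
    struct dev dd = { 0 };

    if (!dev_sdma_init(&dd))
        dev_sdma_clean(&dd);
    return 0;
}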
/**
@ -1604,7 +1626,6 @@ void sdma_exit(struct hfi1_devdata *dd)
sdma_finalput(&sde->state);
}
sdma_clean(dd, dd->num_sdma);
rhashtable_free_and_destroy(&dd->sdma_rht, sdma_rht_free, NULL);
}
/*


@ -966,34 +966,34 @@ void sdma_engine_interrupt(struct sdma_engine *sde, u64 status);
* | mask | --/ |--------------------|
* |--------------------------| -/ | * |
* | actual_vls (max 8) | -/ |--------------------|
* |--------------------------| --/ | sde[n] -> eng n |
* |--------------------------| --/ | sde[n-1] -> eng n |
* | vls (max 8) | -/ +--------------------+
* |--------------------------| --/
* | map[0] |-/
* |--------------------------| +--------------------+
* | map[1] |--- | mask |
* |--------------------------| \---- |--------------------|
* | * | \-- | sde[0] -> eng 1+n |
* | * | \---- |--------------------|
* | * | \->| sde[1] -> eng 2+n |
* |--------------------------| |--------------------|
* | map[vls - 1] |- | * |
* +--------------------------+ \- |--------------------|
* \- | sde[m] -> eng m+n |
* \ +--------------------+
* |--------------------------| +---------------------+
* | map[1] |--- | mask |
* |--------------------------| \---- |---------------------|
* | * | \-- | sde[0] -> eng 1+n |
* | * | \---- |---------------------|
* | * | \->| sde[1] -> eng 2+n |
* |--------------------------| |---------------------|
* | map[vls - 1] |- | * |
* +--------------------------+ \- |---------------------|
* \- | sde[m-1] -> eng m+n |
* \ +---------------------+
* \-
* \
* \- +--------------------+
* \- | mask |
* \ |--------------------|
* \- | sde[0] -> eng 1+m+n|
* \- |--------------------|
* >| sde[1] -> eng 2+m+n|
* |--------------------|
* | * |
* |--------------------|
* | sde[o] -> eng o+m+n|
* +--------------------+
* \- +----------------------+
* \- | mask |
* \ |----------------------|
* \- | sde[0] -> eng 1+m+n |
* \- |----------------------|
* >| sde[1] -> eng 2+m+n |
* |----------------------|
* | * |
* |----------------------|
* | sde[o-1] -> eng o+m+n|
* +----------------------+
*
*/


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015-2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -542,7 +542,7 @@ static ssize_t show_nctxts(struct device *device,
* give a more accurate picture of total contexts available.
*/
return scnprintf(buf, PAGE_SIZE, "%u\n",
min(dd->num_rcv_contexts - dd->first_user_ctxt,
min(dd->num_rcv_contexts - dd->first_dyn_alloc_ctxt,
(u32)dd->sc_sizes[SC_USER].count));
}


@ -51,13 +51,12 @@ u8 ibhdr_exhdr_len(struct ib_header *hdr)
{
struct ib_other_headers *ohdr;
u8 opcode;
u8 lnh = (u8)(be16_to_cpu(hdr->lrh[0]) & 3);
if (lnh == HFI1_LRH_BTH)
if (ib_get_lnh(hdr) == HFI1_LRH_BTH)
ohdr = &hdr->u.oth;
else
ohdr = &hdr->u.l.oth;
opcode = be32_to_cpu(ohdr->bth[0]) >> 24;
opcode = ib_bth_get_opcode(ohdr);
return hdr_len_by_opcode[opcode] == 0 ?
0 : hdr_len_by_opcode[opcode] - (12 + 8);
}


@ -139,11 +139,11 @@ DECLARE_EVENT_CLASS(hfi1_ibhdr_template,
__entry->pkey =
be32_to_cpu(ohdr->bth[0]) & 0xffff;
__entry->f =
(be32_to_cpu(ohdr->bth[1]) >> HFI1_FECN_SHIFT) &
HFI1_FECN_MASK;
(be32_to_cpu(ohdr->bth[1]) >> IB_FECN_SHIFT) &
IB_FECN_MASK;
__entry->b =
(be32_to_cpu(ohdr->bth[1]) >> HFI1_BECN_SHIFT) &
HFI1_BECN_MASK;
(be32_to_cpu(ohdr->bth[1]) >> IB_BECN_SHIFT) &
IB_BECN_MASK;
__entry->qpn =
be32_to_cpu(ohdr->bth[1]) & RVT_QPN_MASK;
__entry->a =


@ -72,6 +72,54 @@ TRACE_EVENT(hfi1_interrupt,
__entry->src)
);
#ifdef CONFIG_FAULT_INJECTION
TRACE_EVENT(hfi1_fault_opcode,
TP_PROTO(struct rvt_qp *qp, u8 opcode),
TP_ARGS(qp, opcode),
TP_STRUCT__entry(DD_DEV_ENTRY(dd_from_ibdev(qp->ibqp.device))
__field(u32, qpn)
__field(u8, opcode)
),
TP_fast_assign(DD_DEV_ASSIGN(dd_from_ibdev(qp->ibqp.device))
__entry->qpn = qp->ibqp.qp_num;
__entry->opcode = opcode;
),
TP_printk("[%s] qpn 0x%x opcode 0x%x",
__get_str(dev), __entry->qpn, __entry->opcode)
);
TRACE_EVENT(hfi1_fault_packet,
TP_PROTO(struct hfi1_packet *packet),
TP_ARGS(packet),
TP_STRUCT__entry(DD_DEV_ENTRY(packet->rcd->ppd->dd)
__field(u64, eflags)
__field(u32, ctxt)
__field(u32, hlen)
__field(u32, tlen)
__field(u32, updegr)
__field(u32, etail)
),
TP_fast_assign(DD_DEV_ASSIGN(packet->rcd->ppd->dd);
__entry->eflags = rhf_err_flags(packet->rhf);
__entry->ctxt = packet->rcd->ctxt;
__entry->hlen = packet->hlen;
__entry->tlen = packet->tlen;
__entry->updegr = packet->updegr;
__entry->etail = rhf_egr_index(packet->rhf);
),
TP_printk(
"[%s] ctxt %d eflags 0x%llx hlen %d tlen %d updegr %d etail %d",
__get_str(dev),
__entry->ctxt,
__entry->eflags,
__entry->hlen,
__entry->tlen,
__entry->updegr,
__entry->etail
)
);
#endif
#endif /* __HFI1_TRACE_MISC_H */
#undef TRACE_INCLUDE_PATH


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015, 2016, 2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -104,11 +104,6 @@ DEFINE_EVENT(hfi1_rc_template, hfi1_ack,
TP_ARGS(qp, psn)
);
DEFINE_EVENT(hfi1_rc_template, hfi1_timeout,
TP_PROTO(struct rvt_qp *qp, u32 psn),
TP_ARGS(qp, psn)
);
DEFINE_EVENT(hfi1_rc_template, hfi1_rcv_error,
TP_PROTO(struct rvt_qp *qp, u32 psn),
TP_ARGS(qp, psn)


@ -633,6 +633,49 @@ DEFINE_EVENT(hfi1_bct_template, bct_get,
TP_PROTO(struct hfi1_devdata *dd, struct buffer_control *bc),
TP_ARGS(dd, bc));
TRACE_EVENT(
hfi1_qp_send_completion,
TP_PROTO(struct rvt_qp *qp, struct rvt_swqe *wqe, u32 idx),
TP_ARGS(qp, wqe, idx),
TP_STRUCT__entry(
DD_DEV_ENTRY(dd_from_ibdev(qp->ibqp.device))
__field(struct rvt_swqe *, wqe)
__field(u64, wr_id)
__field(u32, qpn)
__field(u32, qpt)
__field(u32, length)
__field(u32, idx)
__field(u32, ssn)
__field(enum ib_wr_opcode, opcode)
__field(int, send_flags)
),
TP_fast_assign(
DD_DEV_ASSIGN(dd_from_ibdev(qp->ibqp.device))
__entry->wqe = wqe;
__entry->wr_id = wqe->wr.wr_id;
__entry->qpn = qp->ibqp.qp_num;
__entry->qpt = qp->ibqp.qp_type;
__entry->length = wqe->length;
__entry->idx = idx;
__entry->ssn = wqe->ssn;
__entry->opcode = wqe->wr.opcode;
__entry->send_flags = wqe->wr.send_flags;
),
TP_printk(
"[%s] qpn 0x%x qpt %u wqe %p idx %u wr_id %llx length %u ssn %u opcode %x send_flags %x",
__get_str(dev),
__entry->qpn,
__entry->qpt,
__entry->wqe,
__entry->idx,
__entry->wr_id,
__entry->length,
__entry->ssn,
__entry->opcode,
__entry->send_flags
)
);
#endif /* __HFI1_TRACE_TX_H */
#undef TRACE_INCLUDE_PATH


@ -94,7 +94,7 @@ int hfi1_make_uc_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
}
ohdr = &ps->s_txreq->phdr.hdr.u.oth;
if (qp->remote_ah_attr.ah_flags & IB_AH_GRH)
if (rdma_ah_get_ah_flags(&qp->remote_ah_attr) & IB_AH_GRH)
ohdr = &ps->s_txreq->phdr.hdr.u.l.oth;
/* Get the next send request. */
@ -320,7 +320,7 @@ void hfi1_uc_rcv(struct hfi1_packet *packet)
process_ecn(qp, packet, true);
psn = be32_to_cpu(ohdr->bth[2]);
opcode = (bth0 >> 24) & 0xff;
opcode = ib_bth_get_opcode(ohdr);
/* Compare the PSN verses the expected PSN. */
if (unlikely(cmp_psn(psn, qp->r_psn) != 0)) {
@ -433,7 +433,7 @@ void hfi1_uc_rcv(struct hfi1_packet *packet)
wc.wc_flags = 0;
send_last:
/* Get the number of bytes the message was padded by. */
pad = (be32_to_cpu(ohdr->bth[0]) >> 20) & 3;
pad = ib_bth_get_pad(ohdr);
/* Check for invalid length. */
/* LAST len should be >= 1 */
if (unlikely(tlen < (hdrsize + pad + 4)))
@ -451,7 +451,7 @@ void hfi1_uc_rcv(struct hfi1_packet *packet)
wc.status = IB_WC_SUCCESS;
wc.qp = &qp->ibqp;
wc.src_qp = qp->remote_qpn;
wc.slid = qp->remote_ah_attr.dlid;
wc.slid = rdma_ah_get_dlid(&qp->remote_ah_attr);
/*
* It seems that IB mandates the presence of an SL in a
* work completion only for the UD transport (see section
@ -463,7 +463,7 @@ void hfi1_uc_rcv(struct hfi1_packet *packet)
*
* See also OPA Vol. 1, section 9.7.6, and table 9-17.
*/
wc.sl = qp->remote_ah_attr.sl;
wc.sl = rdma_ah_get_sl(&qp->remote_ah_attr);
/* zero fields that are N/A */
wc.vendor_err = 0;
wc.pkey_index = 0;
@ -528,7 +528,7 @@ void hfi1_uc_rcv(struct hfi1_packet *packet)
wc.wc_flags = IB_WC_WITH_IMM;
/* Get the number of bytes the message was padded by. */
pad = (be32_to_cpu(ohdr->bth[0]) >> 20) & 3;
pad = ib_bth_get_pad(ohdr);
/* Check for invalid length. */
/* LAST len should be >= 1 */
if (unlikely(tlen < (hdrsize + pad + 4)))
@ -555,7 +555,7 @@ void hfi1_uc_rcv(struct hfi1_packet *packet)
case OP(RDMA_WRITE_LAST):
rdma_last:
/* Get the number of bytes the message was padded by. */
pad = (be32_to_cpu(ohdr->bth[0]) >> 20) & 3;
pad = ib_bth_get_pad(ohdr);
/* Check for invalid length. */
/* LAST len should be >= 1 */
if (unlikely(tlen < (hdrsize + pad + 4)))


@ -68,7 +68,7 @@ static void ud_loopback(struct rvt_qp *sqp, struct rvt_swqe *swqe)
struct hfi1_ibport *ibp = to_iport(sqp->ibqp.device, sqp->port_num);
struct hfi1_pportdata *ppd;
struct rvt_qp *qp;
struct ib_ah_attr *ah_attr;
struct rdma_ah_attr *ah_attr;
unsigned long flags;
struct rvt_sge_state ssge;
struct rvt_sge *sge;
@ -103,17 +103,17 @@ static void ud_loopback(struct rvt_qp *sqp, struct rvt_swqe *swqe)
if (qp->ibqp.qp_num > 1) {
u16 pkey;
u16 slid;
u8 sc5 = ibp->sl_to_sc[ah_attr->sl];
u8 sc5 = ibp->sl_to_sc[rdma_ah_get_sl(ah_attr)];
pkey = hfi1_get_pkey(ibp, sqp->s_pkey_index);
slid = ppd->lid | (ah_attr->src_path_bits &
slid = ppd->lid | (rdma_ah_get_path_bits(ah_attr) &
((1 << ppd->lmc) - 1));
if (unlikely(ingress_pkey_check(ppd, pkey, sc5,
qp->s_pkey_index, slid))) {
hfi1_bad_pqkey(ibp, OPA_TRAP_BAD_P_KEY, pkey,
ah_attr->sl,
rdma_ah_get_sl(ah_attr),
sqp->ibqp.qp_num, qp->ibqp.qp_num,
slid, ah_attr->dlid);
slid, rdma_ah_get_dlid(ah_attr));
goto drop;
}
}
@ -131,13 +131,13 @@ static void ud_loopback(struct rvt_qp *sqp, struct rvt_swqe *swqe)
if (unlikely(qkey != qp->qkey)) {
u16 lid;
lid = ppd->lid | (ah_attr->src_path_bits &
lid = ppd->lid | (rdma_ah_get_path_bits(ah_attr) &
((1 << ppd->lmc) - 1));
hfi1_bad_pqkey(ibp, OPA_TRAP_BAD_Q_KEY, qkey,
ah_attr->sl,
rdma_ah_get_sl(ah_attr),
sqp->ibqp.qp_num, qp->ibqp.qp_num,
lid,
ah_attr->dlid);
rdma_ah_get_dlid(ah_attr));
goto drop;
}
}
@ -183,11 +183,11 @@ static void ud_loopback(struct rvt_qp *sqp, struct rvt_swqe *swqe)
goto bail_unlock;
}
if (ah_attr->ah_flags & IB_AH_GRH) {
if (rdma_ah_get_ah_flags(ah_attr) & IB_AH_GRH) {
struct ib_grh grh;
struct ib_global_route grd = ah_attr->grh;
const struct ib_global_route *grd = rdma_ah_read_grh(ah_attr);
hfi1_make_grh(ibp, &grh, &grd, 0, 0);
hfi1_make_grh(ibp, &grh, grd, 0, 0);
hfi1_copy_sge(&qp->r_sge, &grh,
sizeof(grh), true, false);
wc.wc_flags |= IB_WC_GRH;
@ -243,12 +243,13 @@ static void ud_loopback(struct rvt_qp *sqp, struct rvt_swqe *swqe)
} else {
wc.pkey_index = 0;
}
wc.slid = ppd->lid | (ah_attr->src_path_bits & ((1 << ppd->lmc) - 1));
wc.slid = ppd->lid | (rdma_ah_get_path_bits(ah_attr) &
((1 << ppd->lmc) - 1));
/* Check for loopback when the port lid is not set */
if (wc.slid == 0 && sqp->ibqp.qp_type == IB_QPT_GSI)
wc.slid = be16_to_cpu(IB_LID_PERMISSIVE);
wc.sl = ah_attr->sl;
wc.dlid_path_bits = ah_attr->dlid & ((1 << ppd->lmc) - 1);
wc.sl = rdma_ah_get_sl(ah_attr);
wc.dlid_path_bits = rdma_ah_get_dlid(ah_attr) & ((1 << ppd->lmc) - 1);
wc.port_num = qp->port_num;
/* Signal completion event if the solicited bit is set. */
rvt_cq_enter(ibcq_to_rvtcq(qp->ibqp.recv_cq), &wc,
@ -272,7 +273,7 @@ int hfi1_make_ud_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
{
struct hfi1_qp_priv *priv = qp->priv;
struct ib_other_headers *ohdr;
struct ib_ah_attr *ah_attr;
struct rdma_ah_attr *ah_attr;
struct hfi1_pportdata *ppd;
struct hfi1_ibport *ibp;
struct rvt_swqe *wqe;
@ -319,9 +320,9 @@ int hfi1_make_ud_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
ibp = to_iport(qp->ibqp.device, qp->port_num);
ppd = ppd_from_ibp(ibp);
ah_attr = &ibah_to_rvtah(wqe->ud_wr.ah)->attr;
if (ah_attr->dlid < be16_to_cpu(IB_MULTICAST_LID_BASE) ||
ah_attr->dlid == be16_to_cpu(IB_LID_PERMISSIVE)) {
lid = ah_attr->dlid & ~((1 << ppd->lmc) - 1);
if (rdma_ah_get_dlid(ah_attr) < be16_to_cpu(IB_MULTICAST_LID_BASE) ||
rdma_ah_get_dlid(ah_attr) == be16_to_cpu(IB_LID_PERMISSIVE)) {
lid = rdma_ah_get_dlid(ah_attr) & ~((1 << ppd->lmc) - 1);
if (unlikely(!loopback &&
(lid == ppd->lid ||
(lid == be16_to_cpu(IB_LID_PERMISSIVE) &&
@ -356,7 +357,7 @@ int hfi1_make_ud_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
qp->s_hdrwords = 7;
ps->s_txreq->s_cur_size = wqe->length;
ps->s_txreq->ss = &qp->s_sge;
qp->s_srate = ah_attr->static_rate;
qp->s_srate = rdma_ah_get_static_rate(ah_attr);
qp->srate_mbps = ib_rate_to_mbps(qp->s_srate);
qp->s_wqe = wqe;
qp->s_sge.sge = wqe->sg_list[0];
@ -364,11 +365,11 @@ int hfi1_make_ud_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
qp->s_sge.num_sge = wqe->wr.num_sge;
qp->s_sge.total_len = wqe->length;
if (ah_attr->ah_flags & IB_AH_GRH) {
if (rdma_ah_get_ah_flags(ah_attr) & IB_AH_GRH) {
/* Header size in 32-bit words. */
qp->s_hdrwords += hfi1_make_grh(ibp,
&ps->s_txreq->phdr.hdr.u.l.grh,
&ah_attr->grh,
rdma_ah_read_grh(ah_attr),
qp->s_hdrwords, nwords);
lrh0 = HFI1_LRH_GRH;
ohdr = &ps->s_txreq->phdr.hdr.u.l.oth;
@ -388,8 +389,8 @@ int hfi1_make_ud_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
} else {
bth0 = IB_OPCODE_UD_SEND_ONLY << 24;
}
sc5 = ibp->sl_to_sc[ah_attr->sl];
lrh0 |= (ah_attr->sl & 0xf) << 4;
sc5 = ibp->sl_to_sc[rdma_ah_get_sl(ah_attr)];
lrh0 |= (rdma_ah_get_sl(ah_attr) & 0xf) << 4;
if (qp->ibqp.qp_type == IB_QPT_SMI) {
lrh0 |= 0xF000; /* Set VL (see ch. 13.5.3.1) */
priv->s_sc = 0xf;
@ -402,15 +403,17 @@ int hfi1_make_ud_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
priv->s_sendcontext = qp_to_send_context(qp, priv->s_sc);
ps->s_txreq->psc = priv->s_sendcontext;
ps->s_txreq->phdr.hdr.lrh[0] = cpu_to_be16(lrh0);
ps->s_txreq->phdr.hdr.lrh[1] = cpu_to_be16(ah_attr->dlid);
ps->s_txreq->phdr.hdr.lrh[1] =
cpu_to_be16(rdma_ah_get_dlid(ah_attr));
ps->s_txreq->phdr.hdr.lrh[2] =
cpu_to_be16(qp->s_hdrwords + nwords + SIZE_OF_CRC);
if (ah_attr->dlid == be16_to_cpu(IB_LID_PERMISSIVE)) {
if (rdma_ah_get_dlid(ah_attr) == be16_to_cpu(IB_LID_PERMISSIVE)) {
ps->s_txreq->phdr.hdr.lrh[3] = IB_LID_PERMISSIVE;
} else {
lid = ppd->lid;
if (lid) {
lid |= ah_attr->src_path_bits & ((1 << ppd->lmc) - 1);
lid |= rdma_ah_get_path_bits(ah_attr) &
((1 << ppd->lmc) - 1);
ps->s_txreq->phdr.hdr.lrh[3] = cpu_to_be16(lid);
} else {
ps->s_txreq->phdr.hdr.lrh[3] = IB_LID_PERMISSIVE;
@ -537,7 +540,7 @@ void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn,
bth0 = pkey | (IB_OPCODE_CNP << 24);
ohdr->bth[0] = cpu_to_be32(bth0);
ohdr->bth[1] = cpu_to_be32(remote_qpn | (1 << HFI1_BECN_SHIFT));
ohdr->bth[1] = cpu_to_be32(remote_qpn | (1 << IB_BECN_SHIFT));
ohdr->bth[2] = 0; /* PSN 0 */
hdr.lrh[0] = cpu_to_be16(lrh0);
@ -680,7 +683,7 @@ void hfi1_ud_rcv(struct hfi1_packet *packet)
u32 tlen = packet->tlen;
struct rvt_qp *qp = packet->qp;
bool has_grh = rcv_flags & HFI1_HAS_GRH;
u8 sc5 = hdr2sc(hdr, packet->rhf);
u8 sc5 = hfi1_9B_get_sc5(hdr, packet->rhf);
u32 bth1;
u8 sl_from_sc, sl;
u16 slid;
@ -688,18 +691,16 @@ void hfi1_ud_rcv(struct hfi1_packet *packet)
qkey = be32_to_cpu(ohdr->u.ud.deth[0]);
src_qp = be32_to_cpu(ohdr->u.ud.deth[1]) & RVT_QPN_MASK;
dlid = be16_to_cpu(hdr->lrh[1]);
dlid = ib_get_dlid(hdr);
bth1 = be32_to_cpu(ohdr->bth[1]);
slid = be16_to_cpu(hdr->lrh[3]);
pkey = (u16)be32_to_cpu(ohdr->bth[0]);
sl = (be16_to_cpu(hdr->lrh[0]) >> 4) & 0xf;
extra_bytes = (be32_to_cpu(ohdr->bth[0]) >> 20) & 3;
slid = ib_get_slid(hdr);
pkey = ib_bth_get_pkey(ohdr);
opcode = ib_bth_get_opcode(ohdr);
sl = ib_get_sl(hdr);
extra_bytes = ib_bth_get_pad(ohdr);
extra_bytes += (SIZE_OF_CRC << 2);
sl_from_sc = ibp->sc_to_sl[sc5];
opcode = be32_to_cpu(ohdr->bth[0]) >> 24;
opcode &= 0xff;
process_ecn(qp, packet, (opcode != IB_OPCODE_CNP));
/*
* Get the number of bytes the message was padded by


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015-2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -200,8 +200,9 @@ int hfi1_user_exp_rcv_init(struct file *fp)
if (!HFI1_CAP_UGET_MASK(uctxt->flags, TID_UNMAP)) {
fd->invalid_tid_idx = 0;
fd->invalid_tids = kzalloc(uctxt->expected_count *
sizeof(u32), GFP_KERNEL);
fd->invalid_tids = kcalloc(uctxt->expected_count,
sizeof(*fd->invalid_tids),
GFP_KERNEL);
if (!fd->invalid_tids) {
ret = -ENOMEM;
goto done;
@ -578,6 +579,9 @@ int hfi1_user_exp_rcv_clear(struct file *fp, struct hfi1_tid_info *tinfo)
u32 *tidinfo;
unsigned tididx;
if (unlikely(tinfo->tidcnt > fd->tid_used))
return -EINVAL;
tidinfo = memdup_user((void __user *)(unsigned long)tinfo->tidlist,
sizeof(tidinfo[0]) * tinfo->tidcnt);
if (IS_ERR(tidinfo))
@ -607,7 +611,7 @@ int hfi1_user_exp_rcv_invalid(struct file *fp, struct hfi1_tid_info *tinfo)
struct hfi1_filedata *fd = fp->private_data;
struct hfi1_ctxtdata *uctxt = fd->uctxt;
unsigned long *ev = uctxt->dd->events +
(((uctxt->ctxt - uctxt->dd->first_user_ctxt) *
(((uctxt->ctxt - uctxt->dd->first_dyn_alloc_ctxt) *
HFI1_MAX_SHARED_CTXTS) + fd->subctxt);
u32 *array;
int ret = 0;
@ -1011,8 +1015,8 @@ static int tid_rb_invalidate(void *arg, struct mmu_rb_node *mnode)
* process in question.
*/
ev = uctxt->dd->events +
(((uctxt->ctxt - uctxt->dd->first_user_ctxt) *
HFI1_MAX_SHARED_CTXTS) + fdata->subctxt);
(((uctxt->ctxt - uctxt->dd->first_dyn_alloc_ctxt) *
HFI1_MAX_SHARED_CTXTS) + fdata->subctxt);
set_bit(_HFI1_EVENT_TID_MMU_NOTIFY_BIT, ev);
}
fdata->invalid_tid_idx++;


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015-2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -73,7 +73,8 @@ bool hfi1_can_pin_pages(struct hfi1_devdata *dd, struct mm_struct *mm,
{
unsigned long ulimit = rlimit(RLIMIT_MEMLOCK), pinned, cache_limit,
size = (cache_size * (1UL << 20)); /* convert to bytes */
unsigned usr_ctxts = dd->num_rcv_contexts - dd->first_user_ctxt;
unsigned int usr_ctxts =
dd->num_rcv_contexts - dd->first_dyn_alloc_ctxt;
bool can_lock = capable(CAP_IPC_LOCK);
/*


@ -376,7 +376,6 @@ int hfi1_user_sdma_alloc_queues(struct hfi1_ctxtdata *uctxt, struct file *fp)
{
struct hfi1_filedata *fd;
int ret = 0;
unsigned memsize;
char buf[64];
struct hfi1_devdata *dd;
struct hfi1_user_sdma_comp_q *cq;
@ -401,13 +400,15 @@ int hfi1_user_sdma_alloc_queues(struct hfi1_ctxtdata *uctxt, struct file *fp)
if (!pq)
goto pq_nomem;
memsize = sizeof(*pq->reqs) * hfi1_sdma_comp_ring_size;
pq->reqs = kzalloc(memsize, GFP_KERNEL);
pq->reqs = kcalloc(hfi1_sdma_comp_ring_size,
sizeof(*pq->reqs),
GFP_KERNEL);
if (!pq->reqs)
goto pq_reqs_nomem;
memsize = BITS_TO_LONGS(hfi1_sdma_comp_ring_size) * sizeof(long);
pq->req_in_use = kzalloc(memsize, GFP_KERNEL);
pq->req_in_use = kcalloc(BITS_TO_LONGS(hfi1_sdma_comp_ring_size),
sizeof(*pq->req_in_use),
GFP_KERNEL);
if (!pq->req_in_use)
goto pq_reqs_no_in_use;
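
The hunk above swaps kzalloc(n * size) for kcalloc(n, size) so the element-count multiplication is overflow-checked inside the allocator instead of silently wrapping in the caller. The user-space analogue is malloc(n * size) versus calloc(n, size):

/*
 * Why the hunk prefers kcalloc(): calloc() (like kcalloc()) rejects a
 * count/size product that would overflow, while the hand-written
 * multiplication wraps to a tiny allocation.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = SIZE_MAX / 4 + 2;   /* pathological request */
    size_t sz = 4;

    void *a = malloc(n * sz);      /* n * sz wraps to 4 bytes           */
    void *b = calloc(n, sz);       /* overflow detected, returns NULL   */

    printf("malloc: %p (wrapped size %zu)\ncalloc: %p\n", a, n * sz, b);
    free(a);
    free(b);
    return 0;
}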
@ -442,8 +443,8 @@ int hfi1_user_sdma_alloc_queues(struct hfi1_ctxtdata *uctxt, struct file *fp)
if (!cq)
goto cq_nomem;
memsize = PAGE_ALIGN(sizeof(*cq->comps) * hfi1_sdma_comp_ring_size);
cq->comps = vmalloc_user(memsize);
cq->comps = vmalloc_user(PAGE_ALIGN(sizeof(*cq->comps)
* hfi1_sdma_comp_ring_size));
if (!cq->comps)
goto cq_comps_nomem;
@ -704,7 +705,9 @@ int hfi1_user_sdma_process_request(struct file *fp, struct iovec *iovec,
/* Save all the IO vector structures */
for (i = 0; i < req->data_iovs; i++) {
INIT_LIST_HEAD(&req->iovs[i].list);
memcpy(&req->iovs[i].iov, iovec + idx++, sizeof(struct iovec));
memcpy(&req->iovs[i].iov,
iovec + idx++,
sizeof(req->iovs[i].iov));
ret = pin_vector_pages(req, &req->iovs[i]);
if (ret) {
req->status = ret;
@ -1615,9 +1618,10 @@ static inline void set_comp_state(struct hfi1_user_sdma_pkt_q *pq,
{
hfi1_cdbg(SDMA, "[%u:%u:%u:%u] Setting completion status %u %d",
pq->dd->unit, pq->ctxt, pq->subctxt, idx, state, ret);
cq->comps[idx].status = state;
if (state == ERROR)
cq->comps[idx].errcode = -ret;
smp_wmb(); /* make sure errcode is visible first */
cq->comps[idx].status = state;
trace_hfi1_sdma_user_completion(pq->dd, pq->ctxt, pq->subctxt,
idx, state, ret);
}
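
The set_comp_state() reordering above makes the error code globally visible before the status word that tells user space "this slot is complete", which is what the new smp_wmb() enforces. A rough user-space analogue using C11 release/acquire in place of the kernel barrier pairing:

/*
 * Release/acquire model of the ordering fix: a consumer that observes
 * the completed status is guaranteed to observe the matching errcode.
 */
#include <stdatomic.h>
#include <stdio.h>

struct comp {
    int errcode;
    _Atomic int status;     /* 0 = free, 1 = complete, 2 = error */
};

static void set_comp_state(struct comp *c, int state, int err)
{
    if (state == 2)
        c->errcode = err;
    /* make errcode visible before status (smp_wmb() in the driver) */
    atomic_store_explicit(&c->status, state, memory_order_release);
}

static int read_comp_state(struct comp *c, int *err)
{
    int state = atomic_load_explicit(&c->status, memory_order_acquire);

    if (state == 2)
        *err = c->errcode;  /* guaranteed to be the matching value */
    return state;
}

int main(void)
{
    struct comp c = { 0 };
    int err = 0;

    set_comp_state(&c, 2, 22);
    printf("state=%d err=%d\n", read_comp_state(&c, &err), err);
    return 0;
}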


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015 - 2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -60,6 +60,8 @@
#include "trace.h"
#include "qp.h"
#include "verbs_txreq.h"
#include "debugfs.h"
#include "vnic.h"
static unsigned int hfi1_lkey_table_size = 16;
module_param_named(lkey_table_size, hfi1_lkey_table_size, uint,
@ -296,6 +298,22 @@ static inline bool wss_exceeds_threshold(void)
return atomic_read(&wss.total_count) >= wss.threshold;
}
/*
* Translate ib_wr_opcode into ib_wc_opcode.
*/
const enum ib_wc_opcode ib_hfi1_wc_opcode[] = {
[IB_WR_RDMA_WRITE] = IB_WC_RDMA_WRITE,
[IB_WR_RDMA_WRITE_WITH_IMM] = IB_WC_RDMA_WRITE,
[IB_WR_SEND] = IB_WC_SEND,
[IB_WR_SEND_WITH_IMM] = IB_WC_SEND,
[IB_WR_RDMA_READ] = IB_WC_RDMA_READ,
[IB_WR_ATOMIC_CMP_AND_SWP] = IB_WC_COMP_SWAP,
[IB_WR_ATOMIC_FETCH_AND_ADD] = IB_WC_FETCH_ADD,
[IB_WR_SEND_WITH_INV] = IB_WC_SEND,
[IB_WR_LOCAL_INV] = IB_WC_LOCAL_INV,
[IB_WR_REG_MR] = IB_WC_REG_MR
};
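
This table lets the completion paths derive the work-completion opcode directly from the posted work-request opcode (rvt_qp_swqe_complete() is now passed ib_hfi1_wc_opcode[wqe->wr.opcode] in the hunks above). A minimal stand-alone sketch of that lookup, with made-up enum values in place of the ib_verbs.h definitions:

/*
 * Sketch of a WR -> WC opcode translation table and its use at
 * completion time. Enums are illustrative stand-ins.
 */
#include <stdio.h>

enum wr_opcode { WR_RDMA_WRITE, WR_SEND, WR_RDMA_READ, WR_OPCODE_MAX };
enum wc_opcode { WC_RDMA_WRITE, WC_SEND, WC_RDMA_READ };

static const enum wc_opcode wr_to_wc[WR_OPCODE_MAX] = {
    [WR_RDMA_WRITE] = WC_RDMA_WRITE,
    [WR_SEND]       = WC_SEND,
    [WR_RDMA_READ]  = WC_RDMA_READ,
};

int main(void)
{
    /* completion code indexes the table with the posted WR opcode */
    enum wr_opcode posted = WR_SEND;

    printf("wc opcode = %d\n", wr_to_wc[posted]);
    return 0;
}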
/*
* Length of header by opcode, 0 --> not supported
*/
@ -501,6 +519,35 @@ static inline opcode_handler qp_ok(int opcode, struct hfi1_packet *packet)
return NULL;
}
static u64 hfi1_fault_tx(struct rvt_qp *qp, u8 opcode, u64 pbc)
{
#ifdef CONFIG_FAULT_INJECTION
if ((opcode & IB_OPCODE_MSP) == IB_OPCODE_MSP)
/*
* In order to drop non-IB traffic we
* set PbcInsertHrc to NONE (0x2).
* The packet will still be delivered
* to the receiving node but a
* KHdrHCRCErr (KDETH packet with a bad
* HCRC) will be triggered and the
* packet will not be delivered to the
* correct context.
*/
pbc |= (u64)PBC_IHCRC_NONE << PBC_INSERT_HCRC_SHIFT;
else
/*
* In order to drop regular verbs
* traffic we set the PbcTestEbp
* flag. The packet will still be
* delivered to the receiving node but
* a 'late ebp error' will be
* triggered and will be dropped.
*/
pbc |= PBC_TEST_EBP;
#endif
return pbc;
}
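
hfi1_fault_tx() above injects transmit faults by abusing two PBC controls: KDETH-type packets get their HCRC insertion disabled so the receiver sees a bad HCRC, and ordinary verbs packets get the test-EBP flag so they are dropped as a late EBP error. The sketch below shows only the branch structure; the opcode mask and PBC shift/field values are placeholders, not the chip definitions.

/*
 * User-space sketch of the fault-injection PBC manipulation. All
 * numeric constants here are placeholders for illustration.
 */
#include <stdint.h>
#include <stdio.h>

#define OPCODE_MSP            0xC0        /* placeholder for IB_OPCODE_MSP   */
#define PBC_INSERT_HCRC_SHIFT 26          /* placeholder field position      */
#define PBC_IHCRC_NONE        0x2
#define PBC_TEST_EBP          (1ull << 24) /* placeholder bit                */

static uint64_t fault_tx(uint8_t opcode, uint64_t pbc)
{
    if ((opcode & OPCODE_MSP) == OPCODE_MSP)
        /* KDETH-type packet: force a bad HCRC at the receiver */
        pbc |= (uint64_t)PBC_IHCRC_NONE << PBC_INSERT_HCRC_SHIFT;
    else
        /* verbs packet: trigger a "late EBP" drop */
        pbc |= PBC_TEST_EBP;
    return pbc;
}

int main(void)
{
    printf("0x%llx\n", (unsigned long long)fault_tx(0xC4, 0));
    printf("0x%llx\n", (unsigned long long)fault_tx(0x04, 0));
    return 0;
}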
/**
* hfi1_ib_rcv - process an incoming packet
* @packet: data packet information
@ -525,7 +572,7 @@ void hfi1_ib_rcv(struct hfi1_packet *packet)
u16 lid;
/* Check for GRH */
lnh = be16_to_cpu(hdr->lrh[0]) & 3;
lnh = ib_get_lnh(hdr);
if (lnh == HFI1_LRH_BTH) {
packet->ohdr = &hdr->u.oth;
} else if (lnh == HFI1_LRH_GRH) {
@ -544,12 +591,12 @@ void hfi1_ib_rcv(struct hfi1_packet *packet)
trace_input_ibhdr(rcd->dd, hdr);
opcode = (be32_to_cpu(packet->ohdr->bth[0]) >> 24);
opcode = ib_bth_get_opcode(packet->ohdr);
inc_opstats(tlen, &rcd->opstats->stats[opcode]);
/* Get the destination QP number. */
qp_num = be32_to_cpu(packet->ohdr->bth[1]) & RVT_QPN_MASK;
lid = be16_to_cpu(hdr->lrh[1]);
lid = ib_get_dlid(hdr);
if (unlikely((lid >= be16_to_cpu(IB_MULTICAST_LID_BASE)) &&
(lid != be16_to_cpu(IB_LID_PERMISSIVE)))) {
struct rvt_mcast *mcast;
@ -557,7 +604,7 @@ void hfi1_ib_rcv(struct hfi1_packet *packet)
if (lnh != HFI1_LRH_GRH)
goto drop;
mcast = rvt_mcast_find(&ibp->rvp, &hdr->u.l.grh.dgid);
mcast = rvt_mcast_find(&ibp->rvp, &hdr->u.l.grh.dgid, lid);
if (!mcast)
goto drop;
list_for_each_entry_rcu(p, &mcast->qp_list, list) {
@ -583,6 +630,11 @@ void hfi1_ib_rcv(struct hfi1_packet *packet)
rcu_read_unlock();
goto drop;
}
if (unlikely(hfi1_dbg_fault_opcode(packet->qp, opcode,
true))) {
rcu_read_unlock();
goto drop;
}
spin_lock_irqsave(&packet->qp->r_lock, flags);
packet_handler = qp_ok(opcode, packet);
if (likely(packet_handler))
@ -781,7 +833,6 @@ static int build_verbs_tx_desc(
if (ret)
goto bail_txadd;
}
/* add the ulp payload - if any. tx->ss can be NULL for acks */
if (tx->ss)
ret = build_verbs_ulp_payload(sde, length, tx);
@ -800,7 +851,6 @@ int hfi1_verbs_send_dma(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
struct hfi1_ibdev *dev = ps->dev;
struct hfi1_pportdata *ppd = ps->ppd;
struct verbs_txreq *tx;
u64 pbc_flags = 0;
u8 sc5 = priv->s_sc;
int ret;
@ -809,12 +859,16 @@ int hfi1_verbs_send_dma(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
if (!sdma_txreq_built(&tx->txreq)) {
if (likely(pbc == 0)) {
u32 vl = sc_to_vlt(dd_from_ibdev(qp->ibqp.device), sc5);
u8 opcode = get_opcode(&tx->phdr.hdr);
/* No vl15 here */
/* set PBC_DC_INFO bit (aka SC[4]) in pbc_flags */
pbc_flags |= (!!(sc5 & 0x10)) << PBC_DC_INFO_SHIFT;
pbc |= (!!(sc5 & 0x10)) << PBC_DC_INFO_SHIFT;
if (unlikely(hfi1_dbg_fault_opcode(qp, opcode, false)))
pbc = hfi1_fault_tx(qp, opcode, pbc);
pbc = create_pbc(ppd,
pbc_flags,
pbc,
qp->srate_mbps,
vl,
plen);
@ -917,7 +971,6 @@ int hfi1_verbs_send_pio(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
u32 plen = hdrwords + dwords + 2; /* includes pbc */
struct hfi1_pportdata *ppd = ps->ppd;
u32 *hdr = (u32 *)&ps->s_txreq->phdr.hdr;
u64 pbc_flags = 0;
u8 sc5;
unsigned long flags = 0;
struct send_context *sc;
@ -942,9 +995,14 @@ int hfi1_verbs_send_pio(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
if (likely(pbc == 0)) {
u8 vl = sc_to_vlt(dd_from_ibdev(qp->ibqp.device), sc5);
struct verbs_txreq *tx = ps->s_txreq;
u8 opcode = get_opcode(&tx->phdr.hdr);
/* set PBC_DC_INFO bit (aka SC[4]) in pbc_flags */
pbc_flags |= (!!(sc5 & 0x10)) << PBC_DC_INFO_SHIFT;
pbc = create_pbc(ppd, pbc_flags, qp->srate_mbps, vl, plen);
pbc |= (!!(sc5 & 0x10)) << PBC_DC_INFO_SHIFT;
if (unlikely(hfi1_dbg_fault_opcode(qp, opcode, false)))
pbc = hfi1_fault_tx(qp, opcode, pbc);
pbc = create_pbc(ppd, pbc, qp->srate_mbps, vl, plen);
}
if (cb)
iowait_pio_inc(&priv->s_iowait);
@ -1173,7 +1231,7 @@ int hfi1_verbs_send(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
hdr = &ps->s_txreq->phdr.hdr;
/* locate the pkey within the headers */
lnh = be16_to_cpu(hdr->lrh[0]) & 3;
lnh = ib_get_lnh(hdr);
if (lnh == HFI1_LRH_GRH)
ohdr = &hdr->u.l.oth;
else
@ -1220,17 +1278,20 @@ int hfi1_verbs_send(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
static void hfi1_fill_device_attr(struct hfi1_devdata *dd)
{
struct rvt_dev_info *rdi = &dd->verbs_dev.rdi;
u16 ver = dd->dc8051_ver;
u32 ver = dd->dc8051_ver;
memset(&rdi->dparms.props, 0, sizeof(rdi->dparms.props));
rdi->dparms.props.fw_ver = ((u64)(dc8051_ver_maj(ver)) << 16) |
(u64)dc8051_ver_min(ver);
rdi->dparms.props.fw_ver = ((u64)(dc8051_ver_maj(ver)) << 32) |
((u64)(dc8051_ver_min(ver)) << 16) |
(u64)dc8051_ver_patch(ver);
rdi->dparms.props.device_cap_flags = IB_DEVICE_BAD_PKEY_CNTR |
IB_DEVICE_BAD_QKEY_CNTR | IB_DEVICE_SHUTDOWN_PORT |
IB_DEVICE_SYS_IMAGE_GUID | IB_DEVICE_RC_RNR_NAK_GEN |
IB_DEVICE_PORT_ACTIVE_EVENT | IB_DEVICE_SRQ_RESIZE |
IB_DEVICE_MEM_MGT_EXTENSIONS;
IB_DEVICE_MEM_MGT_EXTENSIONS |
IB_DEVICE_RDMA_NETDEV_OPA_VNIC;
rdi->dparms.props.page_size_cap = PAGE_SIZE;
rdi->dparms.props.vendor_id = dd->oui1 << 16 | dd->oui2 << 8 | dd->oui3;
rdi->dparms.props.vendor_part_id = dd->pcidev->device;
@ -1398,14 +1459,14 @@ static int hfi1_get_guid_be(struct rvt_dev_info *rdi, struct rvt_ibport *rvp,
/*
* convert ah port,sl to sc
*/
u8 ah_to_sc(struct ib_device *ibdev, struct ib_ah_attr *ah)
u8 ah_to_sc(struct ib_device *ibdev, struct rdma_ah_attr *ah)
{
struct hfi1_ibport *ibp = to_iport(ibdev, ah->port_num);
struct hfi1_ibport *ibp = to_iport(ibdev, rdma_ah_get_port_num(ah));
return ibp->sl_to_sc[ah->sl];
return ibp->sl_to_sc[rdma_ah_get_sl(ah)];
}
static int hfi1_check_ah(struct ib_device *ibdev, struct ib_ah_attr *ah_attr)
static int hfi1_check_ah(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr)
{
struct hfi1_ibport *ibp;
struct hfi1_pportdata *ppd;
@ -1413,9 +1474,9 @@ static int hfi1_check_ah(struct ib_device *ibdev, struct ib_ah_attr *ah_attr)
u8 sc5;
/* test the mapping for validity */
ibp = to_iport(ibdev, ah_attr->port_num);
ibp = to_iport(ibdev, rdma_ah_get_port_num(ah_attr));
ppd = ppd_from_ibp(ibp);
sc5 = ibp->sl_to_sc[ah_attr->sl];
sc5 = ibp->sl_to_sc[rdma_ah_get_sl(ah_attr)];
dd = dd_from_ppd(ppd);
if (sc_to_vlt(dd, sc5) > num_vls && sc_to_vlt(dd, sc5) != 0xf)
return -EINVAL;
@ -1423,7 +1484,7 @@ static int hfi1_check_ah(struct ib_device *ibdev, struct ib_ah_attr *ah_attr)
}
static void hfi1_notify_new_ah(struct ib_device *ibdev,
struct ib_ah_attr *ah_attr,
struct rdma_ah_attr *ah_attr,
struct rvt_ah *ah)
{
struct hfi1_ibport *ibp;
@ -1436,9 +1497,9 @@ static void hfi1_notify_new_ah(struct ib_device *ibdev,
* done being set up. We can, however, modify things which we need to set.
*/
ibp = to_iport(ibdev, ah_attr->port_num);
ibp = to_iport(ibdev, rdma_ah_get_port_num(ah_attr));
ppd = ppd_from_ibp(ibp);
sc5 = ibp->sl_to_sc[ah->attr.sl];
sc5 = ibp->sl_to_sc[rdma_ah_get_sl(&ah->attr)];
dd = dd_from_ppd(ppd);
ah->vl = sc_to_vlt(dd, sc5);
if (ah->vl < num_vls || ah->vl == 15)
@ -1447,17 +1508,21 @@ static void hfi1_notify_new_ah(struct ib_device *ibdev,
struct ib_ah *hfi1_create_qp0_ah(struct hfi1_ibport *ibp, u16 dlid)
{
struct ib_ah_attr attr;
struct rdma_ah_attr attr;
struct ib_ah *ah = ERR_PTR(-EINVAL);
struct rvt_qp *qp0;
struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
struct hfi1_devdata *dd = dd_from_ppd(ppd);
u8 port_num = ppd->port;
memset(&attr, 0, sizeof(attr));
attr.dlid = dlid;
attr.port_num = ppd_from_ibp(ibp)->port;
attr.type = rdma_ah_find_type(&dd->verbs_dev.rdi.ibdev, port_num);
rdma_ah_set_dlid(&attr, dlid);
rdma_ah_set_port_num(&attr, ppd_from_ibp(ibp)->port);
rcu_read_lock();
qp0 = rcu_dereference(ibp->rvp.qp[0]);
if (qp0)
ah = ib_create_ah(qp0->ibqp.pd, &attr);
ah = rdma_create_ah(qp0->ibqp.pd, &attr);
rcu_read_unlock();
return ah;
}
@ -1504,10 +1569,10 @@ static void hfi1_get_dev_fw_str(struct ib_device *ibdev, char *str,
{
struct rvt_dev_info *rdi = ib_to_rvt(ibdev);
struct hfi1_ibdev *dev = dev_from_rdi(rdi);
u16 ver = dd_from_dev(dev)->dc8051_ver;
u32 ver = dd_from_dev(dev)->dc8051_ver;
snprintf(str, str_len, "%u.%u", dc8051_ver_maj(ver),
dc8051_ver_min(ver));
snprintf(str, str_len, "%u.%u.%u", dc8051_ver_maj(ver),
dc8051_ver_min(ver), dc8051_ver_patch(ver));
}
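The 8051 firmware version now carries a patch level: dc8051_ver becomes a u32, fw_ver packs major/minor/patch at bit offsets 32/16/0, and the reported string gains a third component. A standalone sketch of that packing and formatting (the 8-bit width of each component is an assumption for illustration):

#include <stdint.h>
#include <stdio.h>

static uint64_t pack_fw_ver(uint8_t maj, uint8_t min, uint8_t patch)
{
        return ((uint64_t)maj << 32) | ((uint64_t)min << 16) | patch;
}

int main(void)
{
        uint64_t fw_ver = pack_fw_ver(1, 27, 0);
        unsigned int maj = (fw_ver >> 32) & 0xff;
        unsigned int min = (fw_ver >> 16) & 0xff;
        unsigned int patch = fw_ver & 0xff;

        printf("%u.%u.%u\n", maj, min, patch);  /* prints 1.27.0 */
        return 0;
}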
static const char * const driver_cntr_names[] = {
@ -1524,6 +1589,7 @@ static const char * const driver_cntr_names[] = {
"DRIVER_EgrHdrFull"
};
static DEFINE_MUTEX(cntr_names_lock); /* protects the *_cntr_names buffers */
static const char **dev_cntr_names;
static const char **port_cntr_names;
static int num_driver_cntrs = ARRAY_SIZE(driver_cntr_names);
@ -1578,6 +1644,7 @@ static struct rdma_hw_stats *alloc_hw_stats(struct ib_device *ibdev,
{
int i, err;
mutex_lock(&cntr_names_lock);
if (!cntr_names_initialized) {
struct hfi1_devdata *dd = dd_from_ibdev(ibdev);
@ -1586,8 +1653,10 @@ static struct rdma_hw_stats *alloc_hw_stats(struct ib_device *ibdev,
num_driver_cntrs,
&num_dev_cntrs,
&dev_cntr_names);
if (err)
if (err) {
mutex_unlock(&cntr_names_lock);
return NULL;
}
for (i = 0; i < num_driver_cntrs; i++)
dev_cntr_names[num_dev_cntrs + i] =
@ -1601,10 +1670,12 @@ static struct rdma_hw_stats *alloc_hw_stats(struct ib_device *ibdev,
if (err) {
kfree(dev_cntr_names);
dev_cntr_names = NULL;
mutex_unlock(&cntr_names_lock);
return NULL;
}
cntr_names_initialized = 1;
}
mutex_unlock(&cntr_names_lock);
if (!port_num)
return rdma_alloc_hw_stats_struct(
@ -1707,6 +1778,8 @@ int hfi1_register_ib_device(struct hfi1_devdata *dd)
ibdev->modify_device = modify_device;
ibdev->alloc_hw_stats = alloc_hw_stats;
ibdev->get_hw_stats = get_hw_stats;
ibdev->alloc_rdma_netdev = hfi1_vnic_alloc_rn;
ibdev->free_rdma_netdev = hfi1_vnic_free_rn;
/* keep process mad in the driver */
ibdev->process_mad = hfi1_process_mad;
@ -1751,7 +1824,7 @@ int hfi1_register_ib_device(struct hfi1_devdata *dd)
dd->verbs_dev.rdi.driver_f.qp_priv_free = qp_priv_free;
dd->verbs_dev.rdi.driver_f.free_all_qps = free_all_qps;
dd->verbs_dev.rdi.driver_f.notify_qp_reset = notify_qp_reset;
dd->verbs_dev.rdi.driver_f.do_send = hfi1_do_send;
dd->verbs_dev.rdi.driver_f.do_send = hfi1_do_send_from_rvt;
dd->verbs_dev.rdi.driver_f.schedule_send = hfi1_schedule_send;
dd->verbs_dev.rdi.driver_f.schedule_send_no_lock = _hfi1_schedule_send;
dd->verbs_dev.rdi.driver_f.get_pmtu_from_attr = get_pmtu_from_attr;
@ -1823,9 +1896,13 @@ void hfi1_unregister_ib_device(struct hfi1_devdata *dd)
del_timer_sync(&dev->mem_timer);
verbs_txreq_exit(dev);
mutex_lock(&cntr_names_lock);
kfree(dev_cntr_names);
kfree(port_cntr_names);
dev_cntr_names = NULL;
port_cntr_names = NULL;
cntr_names_initialized = 0;
mutex_unlock(&cntr_names_lock);
}
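The counter-name buffers are now lazily initialized and torn down under cntr_names_lock, with the initialized flag cleared on unregister so a re-probe starts clean. A minimal userspace sketch of the same pattern (names here are illustrative, not the driver's):

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t names_lock = PTHREAD_MUTEX_INITIALIZER;
static char **names;
static int names_initialized;

static int names_get(void)
{
        int err = 0;

        pthread_mutex_lock(&names_lock);
        if (!names_initialized) {
                names = calloc(16, sizeof(*names));
                if (!names) {
                        err = -1;               /* unlock before bailing out */
                        goto out;
                }
                names_initialized = 1;
        }
out:
        pthread_mutex_unlock(&names_lock);
        return err;
}

static void names_put(void)
{
        pthread_mutex_lock(&names_lock);
        free(names);
        names = NULL;
        names_initialized = 0;
        pthread_mutex_unlock(&names_lock);
}

int main(void)
{
        if (!names_get())
                names_put();
        return 0;
}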
void hfi1_cnp_rcv(struct hfi1_packet *packet)
@ -1840,12 +1917,12 @@ void hfi1_cnp_rcv(struct hfi1_packet *packet)
switch (packet->qp->ibqp.qp_type) {
case IB_QPT_UC:
rlid = qp->remote_ah_attr.dlid;
rlid = rdma_ah_get_dlid(&qp->remote_ah_attr);
rqpn = qp->remote_qpn;
svc_type = IB_CC_SVCTYPE_UC;
break;
case IB_QPT_RC:
rlid = qp->remote_ah_attr.dlid;
rlid = rdma_ah_get_dlid(&qp->remote_ah_attr);
rqpn = qp->remote_qpn;
svc_type = IB_CC_SVCTYPE_RC;
break;
@ -1859,7 +1936,7 @@ void hfi1_cnp_rcv(struct hfi1_packet *packet)
return;
}
sc5 = hdr2sc(hdr, packet->rhf);
sc5 = hfi1_9B_get_sc5(hdr, packet->rhf);
sl = ibp->sc_to_sl[sc5];
lqpn = qp->ibqp.qp_num;


@ -1,5 +1,5 @@
/*
* Copyright(c) 2015, 2016 Intel Corporation.
* Copyright(c) 2015 - 2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
@ -195,6 +195,11 @@ struct hfi1_ibdev {
struct dentry *hfi1_ibdev_dbg;
/* per HFI symlinks to above */
struct dentry *hfi1_ibdev_link;
#ifdef CONFIG_FAULT_INJECTION
struct fault_opcode *fault_opcode;
struct fault_packet *fault_packet;
bool fault_suppress_err;
#endif
#endif
};
@ -303,7 +308,7 @@ void hfi1_rc_hdrerr(
u32 rcv_flags,
struct rvt_qp *qp);
u8 ah_to_sc(struct ib_device *ibdev, struct ib_ah_attr *ah_attr);
u8 ah_to_sc(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr);
struct ib_ah *hfi1_create_qp0_ah(struct hfi1_ibport *ibp, u16 dlid);
@ -342,7 +347,7 @@ int hfi1_ruc_check_hdr(struct hfi1_ibport *ibp, struct ib_header *hdr,
int has_grh, struct rvt_qp *qp, u32 bth0);
u32 hfi1_make_grh(struct hfi1_ibport *ibp, struct ib_grh *hdr,
struct ib_global_route *grh, u32 hwords, u32 nwords);
const struct ib_global_route *grh, u32 hwords, u32 nwords);
void hfi1_make_ruc_header(struct rvt_qp *qp, struct ib_other_headers *ohdr,
u32 bth0, u32 bth2, int middle,
@ -350,7 +355,9 @@ void hfi1_make_ruc_header(struct rvt_qp *qp, struct ib_other_headers *ohdr,
void _hfi1_do_send(struct work_struct *work);
void hfi1_do_send(struct rvt_qp *qp);
void hfi1_do_send_from_rvt(struct rvt_qp *qp);
void hfi1_do_send(struct rvt_qp *qp, bool in_thread);
void hfi1_send_complete(struct rvt_qp *qp, struct rvt_swqe *wqe,
enum ib_wc_status status);


@ -0,0 +1,184 @@
#ifndef _HFI1_VNIC_H
#define _HFI1_VNIC_H
/*
* Copyright(c) 2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
*
* GPL LICENSE SUMMARY
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* BSD LICENSE
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* - Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* - Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* - Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*/
#include <rdma/opa_vnic.h>
#include "hfi.h"
#include "sdma.h"
#define HFI1_VNIC_MAX_TXQ 16
#define HFI1_VNIC_MAX_PAD 12
/* L2 header definitions */
#define HFI1_L2_TYPE_OFFSET 0x7
#define HFI1_L2_TYPE_SHFT 0x5
#define HFI1_L2_TYPE_MASK 0x3
#define HFI1_GET_L2_TYPE(hdr) \
((*((u8 *)(hdr) + HFI1_L2_TYPE_OFFSET) >> HFI1_L2_TYPE_SHFT) & \
HFI1_L2_TYPE_MASK)
/* L4 type definitions */
#define HFI1_L4_TYPE_OFFSET 8
#define HFI1_GET_L4_TYPE(data) \
(*((u8 *)(data) + HFI1_L4_TYPE_OFFSET))
/* L4 header definitions */
#define HFI1_VNIC_L4_HDR_OFFSET OPA_VNIC_L2_HDR_LEN
#define HFI1_VNIC_GET_L4_HDR(data) \
(*((u16 *)((u8 *)(data) + HFI1_VNIC_L4_HDR_OFFSET)))
#define HFI1_VNIC_GET_VESWID(data) \
(HFI1_VNIC_GET_L4_HDR(data) & 0xFFF)
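These macros locate the packet type and the virtual Ethernet switch id by peeking at fixed offsets in the OPA VNIC headers. A standalone sketch of the VESW id extraction (the offset below is an illustrative stand-in for OPA_VNIC_L2_HDR_LEN; the 12-bit mask is from the macro above):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define EXAMPLE_L4_HDR_OFFSET 2 /* illustrative stand-in for OPA_VNIC_L2_HDR_LEN */

static uint16_t example_get_vesw_id(const uint8_t *pkt)
{
        uint16_t l4_hdr;

        /* raw 16-bit load at the L4 header offset, as the macro does */
        memcpy(&l4_hdr, pkt + EXAMPLE_L4_HDR_OFFSET, sizeof(l4_hdr));
        return l4_hdr & 0xFFF;          /* VESW id is the low 12 bits */
}

int main(void)
{
        uint8_t pkt[8] = { 0, 0, 0x34, 0x12, 0, 0, 0, 0 };

        printf("vesw id 0x%x\n", example_get_vesw_id(pkt));
        return 0;
}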
/* Service class */
#define HFI1_VNIC_SC_OFFSET_LOW 6
#define HFI1_VNIC_SC_OFFSET_HI 7
#define HFI1_VNIC_SC_SHIFT 4
#define HFI1_VNIC_MAX_QUEUE 16
/**
* struct hfi1_vnic_sdma - VNIC per Tx ring SDMA information
* @dd: device data pointer
* @sde: sdma engine
* @vinfo: vnic info pointer
* @wait: iowait structure
* @stx: sdma tx request
* @state: vnic Tx ring SDMA state
* @q_idx: vnic Tx queue index
*/
struct hfi1_vnic_sdma {
struct hfi1_devdata *dd;
struct sdma_engine *sde;
struct hfi1_vnic_vport_info *vinfo;
struct iowait wait;
struct sdma_txreq stx;
unsigned int state;
u8 q_idx;
};
/**
* struct hfi1_vnic_rx_queue - HFI1 VNIC receive queue
* @idx: queue index
* @vinfo: pointer to vport information
* @netdev: network device
* @napi: netdev napi structure
* @skbq: queue of received socket buffers
*/
struct hfi1_vnic_rx_queue {
u8 idx;
struct hfi1_vnic_vport_info *vinfo;
struct net_device *netdev;
struct napi_struct napi;
struct sk_buff_head skbq;
};
/**
* struct hfi1_vnic_vport_info - HFI1 VNIC virtual port information
* @dd: device data pointer
* @netdev: net device pointer
* @flags: state flags
* @lock: vport lock
* @num_tx_q: number of transmit queues
* @num_rx_q: number of receive queues
* @vesw_id: virtual switch id
* @rxq: Array of receive queues
* @stats: per queue stats
* @sdma: VNIC SDMA structure per TXQ
*/
struct hfi1_vnic_vport_info {
struct hfi1_devdata *dd;
struct net_device *netdev;
unsigned long flags;
/* Lock used around state updates */
struct mutex lock;
u8 num_tx_q;
u8 num_rx_q;
u16 vesw_id;
struct hfi1_vnic_rx_queue rxq[HFI1_NUM_VNIC_CTXT];
struct opa_vnic_stats stats[HFI1_VNIC_MAX_QUEUE];
struct hfi1_vnic_sdma sdma[HFI1_VNIC_MAX_TXQ];
};
#define v_dbg(format, arg...) \
netdev_dbg(vinfo->netdev, format, ## arg)
#define v_err(format, arg...) \
netdev_err(vinfo->netdev, format, ## arg)
#define v_info(format, arg...) \
netdev_info(vinfo->netdev, format, ## arg)
/* vnic hfi1 internal functions */
void hfi1_vnic_setup(struct hfi1_devdata *dd);
void hfi1_vnic_cleanup(struct hfi1_devdata *dd);
int hfi1_vnic_txreq_init(struct hfi1_devdata *dd);
void hfi1_vnic_txreq_deinit(struct hfi1_devdata *dd);
void hfi1_vnic_bypass_rcv(struct hfi1_packet *packet);
void hfi1_vnic_sdma_init(struct hfi1_vnic_vport_info *vinfo);
bool hfi1_vnic_sdma_write_avail(struct hfi1_vnic_vport_info *vinfo,
u8 q_idx);
/* vnic rdma netdev operations */
struct net_device *hfi1_vnic_alloc_rn(struct ib_device *device,
u8 port_num,
enum rdma_netdev_t type,
const char *name,
unsigned char name_assign_type,
void (*setup)(struct net_device *));
void hfi1_vnic_free_rn(struct net_device *netdev);
int hfi1_vnic_send_dma(struct hfi1_devdata *dd, u8 q_idx,
struct hfi1_vnic_vport_info *vinfo,
struct sk_buff *skb, u64 pbc, u8 plen);
#endif /* _HFI1_VNIC_H */


@ -0,0 +1,907 @@
/*
* Copyright(c) 2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
*
* GPL LICENSE SUMMARY
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* BSD LICENSE
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* - Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* - Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* - Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*/
/*
* This file contains HFI1 support for VNIC functionality
*/
#include <linux/io.h>
#include <linux/if_vlan.h>
#include "vnic.h"
#define HFI_TX_TIMEOUT_MS 1000
#define HFI1_VNIC_RCV_Q_SIZE 1024
#define HFI1_VNIC_UP 0
static DEFINE_SPINLOCK(vport_cntr_lock);
static int setup_vnic_ctxt(struct hfi1_devdata *dd, struct hfi1_ctxtdata *uctxt)
{
unsigned int rcvctrl_ops = 0;
int ret;
ret = hfi1_init_ctxt(uctxt->sc);
if (ret)
goto done;
uctxt->do_interrupt = &handle_receive_interrupt;
/* Now allocate the RcvHdr queue and eager buffers. */
ret = hfi1_create_rcvhdrq(dd, uctxt);
if (ret)
goto done;
ret = hfi1_setup_eagerbufs(uctxt);
if (ret)
goto done;
set_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags);
if (uctxt->rcvhdrtail_kvaddr)
clear_rcvhdrtail(uctxt);
rcvctrl_ops = HFI1_RCVCTRL_CTXT_ENB;
rcvctrl_ops |= HFI1_RCVCTRL_INTRAVAIL_ENB;
if (!HFI1_CAP_KGET_MASK(uctxt->flags, MULTI_PKT_EGR))
rcvctrl_ops |= HFI1_RCVCTRL_ONE_PKT_EGR_ENB;
if (HFI1_CAP_KGET_MASK(uctxt->flags, NODROP_EGR_FULL))
rcvctrl_ops |= HFI1_RCVCTRL_NO_EGR_DROP_ENB;
if (HFI1_CAP_KGET_MASK(uctxt->flags, NODROP_RHQ_FULL))
rcvctrl_ops |= HFI1_RCVCTRL_NO_RHQ_DROP_ENB;
if (HFI1_CAP_KGET_MASK(uctxt->flags, DMA_RTAIL))
rcvctrl_ops |= HFI1_RCVCTRL_TAILUPD_ENB;
hfi1_rcvctrl(uctxt->dd, rcvctrl_ops, uctxt->ctxt);
uctxt->is_vnic = true;
done:
return ret;
}
static int allocate_vnic_ctxt(struct hfi1_devdata *dd,
struct hfi1_ctxtdata **vnic_ctxt)
{
struct hfi1_ctxtdata *uctxt;
unsigned int ctxt;
int ret;
if (dd->flags & HFI1_FROZEN)
return -EIO;
for (ctxt = dd->first_dyn_alloc_ctxt;
ctxt < dd->num_rcv_contexts; ctxt++)
if (!dd->rcd[ctxt])
break;
if (ctxt == dd->num_rcv_contexts)
return -EBUSY;
uctxt = hfi1_create_ctxtdata(dd->pport, ctxt, dd->node);
if (!uctxt) {
dd_dev_err(dd, "Unable to create ctxtdata, failing open\n");
return -ENOMEM;
}
uctxt->flags = HFI1_CAP_KGET(MULTI_PKT_EGR) |
HFI1_CAP_KGET(NODROP_RHQ_FULL) |
HFI1_CAP_KGET(NODROP_EGR_FULL) |
HFI1_CAP_KGET(DMA_RTAIL);
uctxt->seq_cnt = 1;
/* Allocate and enable a PIO send context */
uctxt->sc = sc_alloc(dd, SC_VNIC, uctxt->rcvhdrqentsize,
uctxt->numa_id);
ret = uctxt->sc ? 0 : -ENOMEM;
if (ret)
goto bail;
dd_dev_dbg(dd, "allocated vnic send context %u(%u)\n",
uctxt->sc->sw_index, uctxt->sc->hw_context);
ret = sc_enable(uctxt->sc);
if (ret)
goto bail;
if (dd->num_msix_entries)
hfi1_set_vnic_msix_info(uctxt);
hfi1_stats.sps_ctxts++;
dd_dev_dbg(dd, "created vnic context %d\n", uctxt->ctxt);
*vnic_ctxt = uctxt;
return ret;
bail:
/*
* hfi1_free_ctxtdata() also releases send_context
* structure if uctxt->sc is not null
*/
dd->rcd[uctxt->ctxt] = NULL;
hfi1_free_ctxtdata(dd, uctxt);
dd_dev_dbg(dd, "vnic allocation failed. rc %d\n", ret);
return ret;
}
static void deallocate_vnic_ctxt(struct hfi1_devdata *dd,
struct hfi1_ctxtdata *uctxt)
{
unsigned long flags;
dd_dev_dbg(dd, "closing vnic context %d\n", uctxt->ctxt);
flush_wc();
if (dd->num_msix_entries)
hfi1_reset_vnic_msix_info(uctxt);
spin_lock_irqsave(&dd->uctxt_lock, flags);
/*
* Disable receive context and interrupt available, reset all
* RcvCtxtCtrl bits to default values.
*/
hfi1_rcvctrl(dd, HFI1_RCVCTRL_CTXT_DIS |
HFI1_RCVCTRL_TIDFLOW_DIS |
HFI1_RCVCTRL_INTRAVAIL_DIS |
HFI1_RCVCTRL_ONE_PKT_EGR_DIS |
HFI1_RCVCTRL_NO_RHQ_DROP_DIS |
HFI1_RCVCTRL_NO_EGR_DROP_DIS, uctxt->ctxt);
/*
* VNIC contexts are allocated from user context pool.
* Release them back to user context pool.
*
* Reset context integrity checks to default.
* (writes to CSRs probably belong in chip.c)
*/
write_kctxt_csr(dd, uctxt->sc->hw_context, SEND_CTXT_CHECK_ENABLE,
hfi1_pkt_default_send_ctxt_mask(dd, SC_USER));
sc_disable(uctxt->sc);
dd->send_contexts[uctxt->sc->sw_index].type = SC_USER;
spin_unlock_irqrestore(&dd->uctxt_lock, flags);
dd->rcd[uctxt->ctxt] = NULL;
uctxt->event_flags = 0;
hfi1_clear_tids(uctxt);
hfi1_clear_ctxt_pkey(dd, uctxt->ctxt);
hfi1_stats.sps_ctxts--;
hfi1_free_ctxtdata(dd, uctxt);
}
void hfi1_vnic_setup(struct hfi1_devdata *dd)
{
idr_init(&dd->vnic.vesw_idr);
}
void hfi1_vnic_cleanup(struct hfi1_devdata *dd)
{
idr_destroy(&dd->vnic.vesw_idr);
}
#define SUM_GRP_COUNTERS(stats, qstats, x_grp) do { \
u64 *src64, *dst64; \
for (src64 = &qstats->x_grp.unicast, \
dst64 = &stats->x_grp.unicast; \
dst64 <= &stats->x_grp.s_1519_max;) { \
*dst64++ += *src64++; \
} \
} while (0)
/* hfi1_vnic_update_stats - update statistics */
static void hfi1_vnic_update_stats(struct hfi1_vnic_vport_info *vinfo,
struct opa_vnic_stats *stats)
{
struct net_device *netdev = vinfo->netdev;
u8 i;
/* add tx counters on different queues */
for (i = 0; i < vinfo->num_tx_q; i++) {
struct opa_vnic_stats *qstats = &vinfo->stats[i];
struct rtnl_link_stats64 *qnstats = &vinfo->stats[i].netstats;
stats->netstats.tx_fifo_errors += qnstats->tx_fifo_errors;
stats->netstats.tx_carrier_errors += qnstats->tx_carrier_errors;
stats->tx_drop_state += qstats->tx_drop_state;
stats->tx_dlid_zero += qstats->tx_dlid_zero;
SUM_GRP_COUNTERS(stats, qstats, tx_grp);
stats->netstats.tx_packets += qnstats->tx_packets;
stats->netstats.tx_bytes += qnstats->tx_bytes;
}
/* add rx counters on different queues */
for (i = 0; i < vinfo->num_rx_q; i++) {
struct opa_vnic_stats *qstats = &vinfo->stats[i];
struct rtnl_link_stats64 *qnstats = &vinfo->stats[i].netstats;
stats->netstats.rx_fifo_errors += qnstats->rx_fifo_errors;
stats->netstats.rx_nohandler += qnstats->rx_nohandler;
stats->rx_drop_state += qstats->rx_drop_state;
stats->rx_oversize += qstats->rx_oversize;
stats->rx_runt += qstats->rx_runt;
SUM_GRP_COUNTERS(stats, qstats, rx_grp);
stats->netstats.rx_packets += qnstats->rx_packets;
stats->netstats.rx_bytes += qnstats->rx_bytes;
}
stats->netstats.tx_errors = stats->netstats.tx_fifo_errors +
stats->netstats.tx_carrier_errors +
stats->tx_drop_state + stats->tx_dlid_zero;
stats->netstats.tx_dropped = stats->netstats.tx_errors;
stats->netstats.rx_errors = stats->netstats.rx_fifo_errors +
stats->netstats.rx_nohandler +
stats->rx_drop_state + stats->rx_oversize +
stats->rx_runt;
stats->netstats.rx_dropped = stats->netstats.rx_errors;
netdev->stats.tx_packets = stats->netstats.tx_packets;
netdev->stats.tx_bytes = stats->netstats.tx_bytes;
netdev->stats.tx_fifo_errors = stats->netstats.tx_fifo_errors;
netdev->stats.tx_carrier_errors = stats->netstats.tx_carrier_errors;
netdev->stats.tx_errors = stats->netstats.tx_errors;
netdev->stats.tx_dropped = stats->netstats.tx_dropped;
netdev->stats.rx_packets = stats->netstats.rx_packets;
netdev->stats.rx_bytes = stats->netstats.rx_bytes;
netdev->stats.rx_fifo_errors = stats->netstats.rx_fifo_errors;
netdev->stats.multicast = stats->rx_grp.mcastbcast;
netdev->stats.rx_length_errors = stats->rx_oversize + stats->rx_runt;
netdev->stats.rx_errors = stats->netstats.rx_errors;
netdev->stats.rx_dropped = stats->netstats.rx_dropped;
}
/* update_len_counters - update pkt's len histogram counters */
static inline void update_len_counters(struct opa_vnic_grp_stats *grp,
int len)
{
/* account for 4 byte FCS */
if (len >= 1515)
grp->s_1519_max++;
else if (len >= 1020)
grp->s_1024_1518++;
else if (len >= 508)
grp->s_512_1023++;
else if (len >= 252)
grp->s_256_511++;
else if (len >= 124)
grp->s_128_255++;
else if (len >= 61)
grp->s_65_127++;
else
grp->s_64++;
}
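Because skb->len excludes the 4-byte FCS, every bucket boundary above is shifted down by four: e.g. a 1514-byte frame (1518 on the wire) lands in the 1024-1518 bucket, while 1515 bytes and up count as 1519+. A standalone version of the same bucketing:

#include <stdio.h>

static const char *len_bucket(int len)
{
        if (len >= 1515)
                return "1519_max";
        else if (len >= 1020)
                return "1024_1518";
        else if (len >= 508)
                return "512_1023";
        else if (len >= 252)
                return "256_511";
        else if (len >= 124)
                return "128_255";
        else if (len >= 61)
                return "65_127";
        return "64";
}

int main(void)
{
        /* prints: 64 1024_1518 1519_max */
        printf("%s %s %s\n", len_bucket(60), len_bucket(1514), len_bucket(1515));
        return 0;
}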
/* hfi1_vnic_update_tx_counters - update transmit counters */
static void hfi1_vnic_update_tx_counters(struct hfi1_vnic_vport_info *vinfo,
u8 q_idx, struct sk_buff *skb, int err)
{
struct ethhdr *mac_hdr = (struct ethhdr *)skb_mac_header(skb);
struct opa_vnic_stats *stats = &vinfo->stats[q_idx];
struct opa_vnic_grp_stats *tx_grp = &stats->tx_grp;
u16 vlan_tci;
stats->netstats.tx_packets++;
stats->netstats.tx_bytes += skb->len + ETH_FCS_LEN;
update_len_counters(tx_grp, skb->len);
/* rest of the counts are for good packets only */
if (unlikely(err))
return;
if (is_multicast_ether_addr(mac_hdr->h_dest))
tx_grp->mcastbcast++;
else
tx_grp->unicast++;
if (!__vlan_get_tag(skb, &vlan_tci))
tx_grp->vlan++;
else
tx_grp->untagged++;
}
/* hfi1_vnic_update_rx_counters - update receive counters */
static void hfi1_vnic_update_rx_counters(struct hfi1_vnic_vport_info *vinfo,
u8 q_idx, struct sk_buff *skb, int err)
{
struct ethhdr *mac_hdr = (struct ethhdr *)skb->data;
struct opa_vnic_stats *stats = &vinfo->stats[q_idx];
struct opa_vnic_grp_stats *rx_grp = &stats->rx_grp;
u16 vlan_tci;
stats->netstats.rx_packets++;
stats->netstats.rx_bytes += skb->len + ETH_FCS_LEN;
update_len_counters(rx_grp, skb->len);
/* rest of the counts are for good packets only */
if (unlikely(err))
return;
if (is_multicast_ether_addr(mac_hdr->h_dest))
rx_grp->mcastbcast++;
else
rx_grp->unicast++;
if (!__vlan_get_tag(skb, &vlan_tci))
rx_grp->vlan++;
else
rx_grp->untagged++;
}
/* This function is overloaded for opa_vnic specific implementation */
static void hfi1_vnic_get_stats64(struct net_device *netdev,
struct rtnl_link_stats64 *stats)
{
struct opa_vnic_stats *vstats = (struct opa_vnic_stats *)stats;
struct hfi1_vnic_vport_info *vinfo = opa_vnic_dev_priv(netdev);
hfi1_vnic_update_stats(vinfo, vstats);
}
static u64 create_bypass_pbc(u32 vl, u32 dw_len)
{
u64 pbc;
pbc = ((u64)PBC_IHCRC_NONE << PBC_INSERT_HCRC_SHIFT)
| PBC_INSERT_BYPASS_ICRC | PBC_CREDIT_RETURN
| PBC_PACKET_BYPASS
| ((vl & PBC_VL_MASK) << PBC_VL_SHIFT)
| (dw_len & PBC_LENGTH_DWS_MASK) << PBC_LENGTH_DWS_SHIFT;
return pbc;
}
/* hfi1_vnic_maybe_stop_tx - stop tx queue if required */
static void hfi1_vnic_maybe_stop_tx(struct hfi1_vnic_vport_info *vinfo,
u8 q_idx)
{
netif_stop_subqueue(vinfo->netdev, q_idx);
if (!hfi1_vnic_sdma_write_avail(vinfo, q_idx))
return;
netif_start_subqueue(vinfo->netdev, q_idx);
}
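Stopping the subqueue before re-checking descriptor space is the usual defence against a lost wakeup: if an SDMA completion frees descriptors between the check and the stop, the queue is restarted immediately instead of waiting for a wakeup that already fired. A self-contained toy of the ordering (the real code uses netif_stop_subqueue()/netif_start_subqueue() and hfi1_vnic_sdma_write_avail()):

#include <stdbool.h>
#include <stdio.h>

static bool queue_stopped;
static int free_descs = 1;

static void example_maybe_stop_tx(void)
{
        queue_stopped = true;           /* stop first... */
        if (free_descs == 0)
                return;                 /* ...stay stopped if still full */
        queue_stopped = false;          /* a completion freed space meanwhile */
}

int main(void)
{
        example_maybe_stop_tx();
        printf("stopped=%d\n", queue_stopped);
        return 0;
}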
static netdev_tx_t hfi1_netdev_start_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
struct hfi1_vnic_vport_info *vinfo = opa_vnic_dev_priv(netdev);
u8 pad_len, q_idx = skb->queue_mapping;
struct hfi1_devdata *dd = vinfo->dd;
struct opa_vnic_skb_mdata *mdata;
u32 pkt_len, total_len;
int err = -EINVAL;
u64 pbc;
v_dbg("xmit: queue %d skb len %d\n", q_idx, skb->len);
if (unlikely(!netif_oper_up(netdev))) {
vinfo->stats[q_idx].tx_drop_state++;
goto tx_finish;
}
/* take out meta data */
mdata = (struct opa_vnic_skb_mdata *)skb->data;
skb_pull(skb, sizeof(*mdata));
if (unlikely(mdata->flags & OPA_VNIC_SKB_MDATA_ENCAP_ERR)) {
vinfo->stats[q_idx].tx_dlid_zero++;
goto tx_finish;
}
/* add tail padding (for 8 bytes size alignment) and icrc */
pad_len = -(skb->len + OPA_VNIC_ICRC_TAIL_LEN) & 0x7;
pad_len += OPA_VNIC_ICRC_TAIL_LEN;
/*
* pkt_len is how much data we have to write; it includes the header and
* data. total_len is the length of the packet in dwords plus the PBC,
* and should not include the CRC.
*/
pkt_len = (skb->len + pad_len) >> 2;
total_len = pkt_len + 2; /* PBC + packet */
pbc = create_bypass_pbc(mdata->vl, total_len);
skb_get(skb);
v_dbg("pbc 0x%016llX len %d pad_len %d\n", pbc, skb->len, pad_len);
err = dd->process_vnic_dma_send(dd, q_idx, vinfo, skb, pbc, pad_len);
if (unlikely(err)) {
if (err == -ENOMEM)
vinfo->stats[q_idx].netstats.tx_fifo_errors++;
else if (err != -EBUSY)
vinfo->stats[q_idx].netstats.tx_carrier_errors++;
}
/* remove the header before updating tx counters */
skb_pull(skb, OPA_VNIC_HDR_LEN);
if (unlikely(err == -EBUSY)) {
hfi1_vnic_maybe_stop_tx(vinfo, q_idx);
dev_kfree_skb_any(skb);
return NETDEV_TX_BUSY;
}
tx_finish:
/* update tx counters */
hfi1_vnic_update_tx_counters(vinfo, q_idx, skb, err);
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
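The length bookkeeping in the transmit path: pad the frame so that payload plus ICRC/tail is 8-byte aligned, convert to dwords, and add two dwords of PBC. A standalone walk-through of that arithmetic (the ICRC/tail length below is an assumed value; the driver uses OPA_VNIC_ICRC_TAIL_LEN):

#include <stdio.h>

#define EXAMPLE_ICRC_TAIL_LEN 8 /* illustrative only */

int main(void)
{
        unsigned int skb_len = 100;
        unsigned int pad_len, pkt_len, total_len;

        pad_len = -(skb_len + EXAMPLE_ICRC_TAIL_LEN) & 0x7;
        pad_len += EXAMPLE_ICRC_TAIL_LEN;
        pkt_len = (skb_len + pad_len) >> 2;     /* dwords, header + data + pad */
        total_len = pkt_len + 2;                /* plus 2 dwords of PBC */

        printf("pad %u, pkt %u dw, total %u dw\n", pad_len, pkt_len, total_len);
        return 0;
}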
static u16 hfi1_vnic_select_queue(struct net_device *netdev,
struct sk_buff *skb,
void *accel_priv,
select_queue_fallback_t fallback)
{
struct hfi1_vnic_vport_info *vinfo = opa_vnic_dev_priv(netdev);
struct opa_vnic_skb_mdata *mdata;
struct sdma_engine *sde;
mdata = (struct opa_vnic_skb_mdata *)skb->data;
sde = sdma_select_engine_vl(vinfo->dd, mdata->entropy, mdata->vl);
return sde->this_idx;
}
/* hfi1_vnic_decap_skb - strip OPA header from the skb (ethernet) packet */
static inline int hfi1_vnic_decap_skb(struct hfi1_vnic_rx_queue *rxq,
struct sk_buff *skb)
{
struct hfi1_vnic_vport_info *vinfo = rxq->vinfo;
int max_len = vinfo->netdev->mtu + VLAN_ETH_HLEN;
int rc = -EFAULT;
skb_pull(skb, OPA_VNIC_HDR_LEN);
/* Validate Packet length */
if (unlikely(skb->len > max_len))
vinfo->stats[rxq->idx].rx_oversize++;
else if (unlikely(skb->len < ETH_ZLEN))
vinfo->stats[rxq->idx].rx_runt++;
else
rc = 0;
return rc;
}
static inline struct sk_buff *hfi1_vnic_get_skb(struct hfi1_vnic_rx_queue *rxq)
{
unsigned char *pad_info;
struct sk_buff *skb;
skb = skb_dequeue(&rxq->skbq);
if (unlikely(!skb))
return NULL;
/* remove tail padding and icrc */
pad_info = skb->data + skb->len - 1;
skb_trim(skb, (skb->len - OPA_VNIC_ICRC_TAIL_LEN -
((*pad_info) & 0x7)));
return skb;
}
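Receive-side trimming relies on the transmitter having stashed the alignment pad count in the last byte of the frame (see hfi1_vnic_update_pad() in the SDMA file below). A standalone round trip of that encode/decode, again with an assumed ICRC/tail length:

#include <assert.h>
#include <stdint.h>

#define EXAMPLE_ICRC_TAIL_LEN 8 /* illustrative only */

int main(void)
{
        uint8_t frame[128] = { 0 };
        unsigned int data_len = 100;
        unsigned int pad_len = (-(data_len + EXAMPLE_ICRC_TAIL_LEN) & 0x7) +
                               EXAMPLE_ICRC_TAIL_LEN;
        unsigned int wire_len = data_len + pad_len;
        unsigned int trimmed;

        /* encode: last byte carries the alignment pad count */
        frame[wire_len - 1] = pad_len - EXAMPLE_ICRC_TAIL_LEN;

        /* decode: trim ICRC/tail plus the alignment pad */
        trimmed = wire_len - EXAMPLE_ICRC_TAIL_LEN - (frame[wire_len - 1] & 0x7);

        assert(trimmed == data_len);
        return 0;
}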
/* hfi1_vnic_handle_rx - handle skb receive */
static void hfi1_vnic_handle_rx(struct hfi1_vnic_rx_queue *rxq,
int *work_done, int work_to_do)
{
struct hfi1_vnic_vport_info *vinfo = rxq->vinfo;
struct sk_buff *skb;
int rc;
while (1) {
if (*work_done >= work_to_do)
break;
skb = hfi1_vnic_get_skb(rxq);
if (unlikely(!skb))
break;
rc = hfi1_vnic_decap_skb(rxq, skb);
/* update rx counters */
hfi1_vnic_update_rx_counters(vinfo, rxq->idx, skb, rc);
if (unlikely(rc)) {
dev_kfree_skb_any(skb);
continue;
}
skb_checksum_none_assert(skb);
skb->protocol = eth_type_trans(skb, rxq->netdev);
napi_gro_receive(&rxq->napi, skb);
(*work_done)++;
}
}
/* hfi1_vnic_napi - napi receive polling callback function */
static int hfi1_vnic_napi(struct napi_struct *napi, int budget)
{
struct hfi1_vnic_rx_queue *rxq = container_of(napi,
struct hfi1_vnic_rx_queue, napi);
struct hfi1_vnic_vport_info *vinfo = rxq->vinfo;
int work_done = 0;
v_dbg("napi %d budget %d\n", rxq->idx, budget);
hfi1_vnic_handle_rx(rxq, &work_done, budget);
v_dbg("napi %d work_done %d\n", rxq->idx, work_done);
if (work_done < budget)
napi_complete(napi);
return work_done;
}
void hfi1_vnic_bypass_rcv(struct hfi1_packet *packet)
{
struct hfi1_devdata *dd = packet->rcd->dd;
struct hfi1_vnic_vport_info *vinfo = NULL;
struct hfi1_vnic_rx_queue *rxq;
struct sk_buff *skb;
int l4_type, vesw_id = -1;
u8 q_idx;
l4_type = HFI1_GET_L4_TYPE(packet->ebuf);
if (likely(l4_type == OPA_VNIC_L4_ETHR)) {
vesw_id = HFI1_VNIC_GET_VESWID(packet->ebuf);
vinfo = idr_find(&dd->vnic.vesw_idr, vesw_id);
/*
* In case of invalid vesw id, count the error on
* the first available vport.
*/
if (unlikely(!vinfo)) {
struct hfi1_vnic_vport_info *vinfo_tmp;
int id_tmp = 0;
vinfo_tmp = idr_get_next(&dd->vnic.vesw_idr, &id_tmp);
if (vinfo_tmp) {
spin_lock(&vport_cntr_lock);
vinfo_tmp->stats[0].netstats.rx_nohandler++;
spin_unlock(&vport_cntr_lock);
}
}
}
if (unlikely(!vinfo)) {
dd_dev_warn(dd, "vnic rcv err: l4 %d vesw id %d ctx %d\n",
l4_type, vesw_id, packet->rcd->ctxt);
return;
}
q_idx = packet->rcd->vnic_q_idx;
rxq = &vinfo->rxq[q_idx];
if (unlikely(!netif_oper_up(vinfo->netdev))) {
vinfo->stats[q_idx].rx_drop_state++;
skb_queue_purge(&rxq->skbq);
return;
}
if (unlikely(skb_queue_len(&rxq->skbq) > HFI1_VNIC_RCV_Q_SIZE)) {
vinfo->stats[q_idx].netstats.rx_fifo_errors++;
return;
}
skb = netdev_alloc_skb(vinfo->netdev, packet->tlen);
if (unlikely(!skb)) {
vinfo->stats[q_idx].netstats.rx_fifo_errors++;
return;
}
memcpy(skb->data, packet->ebuf, packet->tlen);
skb_put(skb, packet->tlen);
skb_queue_tail(&rxq->skbq, skb);
if (napi_schedule_prep(&rxq->napi)) {
v_dbg("napi %d scheduling\n", q_idx);
__napi_schedule(&rxq->napi);
}
}
static int hfi1_vnic_up(struct hfi1_vnic_vport_info *vinfo)
{
struct hfi1_devdata *dd = vinfo->dd;
struct net_device *netdev = vinfo->netdev;
int i, rc;
/* ensure virtual eth switch id is valid */
if (!vinfo->vesw_id)
return -EINVAL;
rc = idr_alloc(&dd->vnic.vesw_idr, vinfo, vinfo->vesw_id,
vinfo->vesw_id + 1, GFP_NOWAIT);
if (rc < 0)
return rc;
for (i = 0; i < vinfo->num_rx_q; i++) {
struct hfi1_vnic_rx_queue *rxq = &vinfo->rxq[i];
skb_queue_head_init(&rxq->skbq);
napi_enable(&rxq->napi);
}
netif_carrier_on(netdev);
netif_tx_start_all_queues(netdev);
set_bit(HFI1_VNIC_UP, &vinfo->flags);
return 0;
}
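The vesw_id registration depends on idr_alloc()'s exact-id semantics: asking for the range [vesw_id, vesw_id + 1) either installs the vport at precisely that id or fails, so duplicate switch ids are rejected, and hfi1_vnic_bypass_rcv() can later resolve frames with a plain idr_find(). A kernel-context sketch of that registration helper (not a standalone program):

static int example_register_vesw(struct idr *idr, void *vport, u16 vesw_id)
{
        int rc = idr_alloc(idr, vport, vesw_id, vesw_id + 1, GFP_NOWAIT);

        return (rc < 0) ? rc : 0;       /* rc equals vesw_id on success */
}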
static void hfi1_vnic_down(struct hfi1_vnic_vport_info *vinfo)
{
struct hfi1_devdata *dd = vinfo->dd;
u8 i;
clear_bit(HFI1_VNIC_UP, &vinfo->flags);
netif_carrier_off(vinfo->netdev);
netif_tx_disable(vinfo->netdev);
idr_remove(&dd->vnic.vesw_idr, vinfo->vesw_id);
/* ensure irqs see the change */
hfi1_vnic_synchronize_irq(dd);
/* remove unread skbs */
for (i = 0; i < vinfo->num_rx_q; i++) {
struct hfi1_vnic_rx_queue *rxq = &vinfo->rxq[i];
napi_disable(&rxq->napi);
skb_queue_purge(&rxq->skbq);
}
}
static int hfi1_netdev_open(struct net_device *netdev)
{
struct hfi1_vnic_vport_info *vinfo = opa_vnic_dev_priv(netdev);
int rc;
mutex_lock(&vinfo->lock);
rc = hfi1_vnic_up(vinfo);
mutex_unlock(&vinfo->lock);
return rc;
}
static int hfi1_netdev_close(struct net_device *netdev)
{
struct hfi1_vnic_vport_info *vinfo = opa_vnic_dev_priv(netdev);
mutex_lock(&vinfo->lock);
if (test_bit(HFI1_VNIC_UP, &vinfo->flags))
hfi1_vnic_down(vinfo);
mutex_unlock(&vinfo->lock);
return 0;
}
static int hfi1_vnic_allot_ctxt(struct hfi1_devdata *dd,
struct hfi1_ctxtdata **vnic_ctxt)
{
int rc;
rc = allocate_vnic_ctxt(dd, vnic_ctxt);
if (rc) {
dd_dev_err(dd, "vnic ctxt alloc failed %d\n", rc);
return rc;
}
rc = setup_vnic_ctxt(dd, *vnic_ctxt);
if (rc) {
dd_dev_err(dd, "vnic ctxt setup failed %d\n", rc);
deallocate_vnic_ctxt(dd, *vnic_ctxt);
*vnic_ctxt = NULL;
}
return rc;
}
static int hfi1_vnic_init(struct hfi1_vnic_vport_info *vinfo)
{
struct hfi1_devdata *dd = vinfo->dd;
int i, rc = 0;
mutex_lock(&hfi1_mutex);
if (!dd->vnic.num_vports) {
rc = hfi1_vnic_txreq_init(dd);
if (rc)
goto txreq_fail;
dd->vnic.msix_idx = dd->first_dyn_msix_idx;
}
for (i = dd->vnic.num_ctxt; i < vinfo->num_rx_q; i++) {
rc = hfi1_vnic_allot_ctxt(dd, &dd->vnic.ctxt[i]);
if (rc)
break;
dd->vnic.ctxt[i]->vnic_q_idx = i;
}
if (i < vinfo->num_rx_q) {
/*
* If the required number of contexts could not be
* allocated, release the contexts that were
* successfully allocated.
*/
while (i-- > dd->vnic.num_ctxt) {
deallocate_vnic_ctxt(dd, dd->vnic.ctxt[i]);
dd->vnic.ctxt[i] = NULL;
}
goto alloc_fail;
}
if (dd->vnic.num_ctxt != i) {
dd->vnic.num_ctxt = i;
hfi1_init_vnic_rsm(dd);
}
dd->vnic.num_vports++;
hfi1_vnic_sdma_init(vinfo);
alloc_fail:
if (!dd->vnic.num_vports)
hfi1_vnic_txreq_deinit(dd);
txreq_fail:
mutex_unlock(&hfi1_mutex);
return rc;
}
static void hfi1_vnic_deinit(struct hfi1_vnic_vport_info *vinfo)
{
struct hfi1_devdata *dd = vinfo->dd;
int i;
mutex_lock(&hfi1_mutex);
if (--dd->vnic.num_vports == 0) {
for (i = 0; i < dd->vnic.num_ctxt; i++) {
deallocate_vnic_ctxt(dd, dd->vnic.ctxt[i]);
dd->vnic.ctxt[i] = NULL;
}
hfi1_deinit_vnic_rsm(dd);
dd->vnic.num_ctxt = 0;
hfi1_vnic_txreq_deinit(dd);
}
mutex_unlock(&hfi1_mutex);
}
static void hfi1_vnic_set_vesw_id(struct net_device *netdev, int id)
{
struct hfi1_vnic_vport_info *vinfo = opa_vnic_dev_priv(netdev);
bool reopen = false;
/*
* If vesw_id is being changed, and if the vnic port is up,
* reset the vnic port to ensure the new vesw_id gets picked up.
*/
if (id != vinfo->vesw_id) {
mutex_lock(&vinfo->lock);
if (test_bit(HFI1_VNIC_UP, &vinfo->flags)) {
hfi1_vnic_down(vinfo);
reopen = true;
}
vinfo->vesw_id = id;
if (reopen)
hfi1_vnic_up(vinfo);
mutex_unlock(&vinfo->lock);
}
}
/* netdev ops */
static const struct net_device_ops hfi1_netdev_ops = {
.ndo_open = hfi1_netdev_open,
.ndo_stop = hfi1_netdev_close,
.ndo_start_xmit = hfi1_netdev_start_xmit,
.ndo_select_queue = hfi1_vnic_select_queue,
.ndo_get_stats64 = hfi1_vnic_get_stats64,
};
struct net_device *hfi1_vnic_alloc_rn(struct ib_device *device,
u8 port_num,
enum rdma_netdev_t type,
const char *name,
unsigned char name_assign_type,
void (*setup)(struct net_device *))
{
struct hfi1_devdata *dd = dd_from_ibdev(device);
struct hfi1_vnic_vport_info *vinfo;
struct net_device *netdev;
struct rdma_netdev *rn;
int i, size, rc;
if (!port_num || (port_num > dd->num_pports))
return ERR_PTR(-EINVAL);
if (type != RDMA_NETDEV_OPA_VNIC)
return ERR_PTR(-EOPNOTSUPP);
size = sizeof(struct opa_vnic_rdma_netdev) + sizeof(*vinfo);
netdev = alloc_netdev_mqs(size, name, name_assign_type, setup,
dd->chip_sdma_engines, HFI1_NUM_VNIC_CTXT);
if (!netdev)
return ERR_PTR(-ENOMEM);
rn = netdev_priv(netdev);
vinfo = opa_vnic_dev_priv(netdev);
vinfo->dd = dd;
vinfo->num_tx_q = dd->chip_sdma_engines;
vinfo->num_rx_q = HFI1_NUM_VNIC_CTXT;
vinfo->netdev = netdev;
rn->set_id = hfi1_vnic_set_vesw_id;
netdev->features = NETIF_F_HIGHDMA | NETIF_F_SG;
netdev->hw_features = netdev->features;
netdev->vlan_features = netdev->features;
netdev->watchdog_timeo = msecs_to_jiffies(HFI_TX_TIMEOUT_MS);
netdev->netdev_ops = &hfi1_netdev_ops;
mutex_init(&vinfo->lock);
for (i = 0; i < vinfo->num_rx_q; i++) {
struct hfi1_vnic_rx_queue *rxq = &vinfo->rxq[i];
rxq->idx = i;
rxq->vinfo = vinfo;
rxq->netdev = netdev;
netif_napi_add(netdev, &rxq->napi, hfi1_vnic_napi, 64);
}
rc = hfi1_vnic_init(vinfo);
if (rc)
goto init_fail;
return netdev;
init_fail:
mutex_destroy(&vinfo->lock);
free_netdev(netdev);
return ERR_PTR(rc);
}
void hfi1_vnic_free_rn(struct net_device *netdev)
{
struct hfi1_vnic_vport_info *vinfo = opa_vnic_dev_priv(netdev);
hfi1_vnic_deinit(vinfo);
mutex_destroy(&vinfo->lock);
free_netdev(netdev);
}


@ -0,0 +1,323 @@
/*
* Copyright(c) 2017 Intel Corporation.
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
*
* GPL LICENSE SUMMARY
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* BSD LICENSE
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* - Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* - Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* - Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*/
/*
* This file contains HFI1 support for VNIC SDMA functionality
*/
#include "sdma.h"
#include "vnic.h"
#define HFI1_VNIC_SDMA_Q_ACTIVE BIT(0)
#define HFI1_VNIC_SDMA_Q_DEFERRED BIT(1)
#define HFI1_VNIC_TXREQ_NAME_LEN 32
#define HFI1_VNIC_SDMA_DESC_WTRMRK 64
#define HFI1_VNIC_SDMA_RETRY_COUNT 1
/*
* struct vnic_txreq - VNIC transmit descriptor
* @txreq: sdma transmit request
* @sdma: vnic sdma pointer
* @skb: skb to send
* @pad: pad buffer
* @plen: pad length
* @pbc_val: pbc value
* @retry_count: tx retry count
*/
struct vnic_txreq {
struct sdma_txreq txreq;
struct hfi1_vnic_sdma *sdma;
struct sk_buff *skb;
unsigned char pad[HFI1_VNIC_MAX_PAD];
u16 plen;
__le64 pbc_val;
u32 retry_count;
};
static void vnic_sdma_complete(struct sdma_txreq *txreq,
int status)
{
struct vnic_txreq *tx = container_of(txreq, struct vnic_txreq, txreq);
struct hfi1_vnic_sdma *vnic_sdma = tx->sdma;
sdma_txclean(vnic_sdma->dd, txreq);
dev_kfree_skb_any(tx->skb);
kmem_cache_free(vnic_sdma->dd->vnic.txreq_cache, tx);
}
static noinline int build_vnic_ulp_payload(struct sdma_engine *sde,
struct vnic_txreq *tx)
{
int i, ret = 0;
ret = sdma_txadd_kvaddr(
sde->dd,
&tx->txreq,
tx->skb->data,
skb_headlen(tx->skb));
if (unlikely(ret))
goto bail_txadd;
for (i = 0; i < skb_shinfo(tx->skb)->nr_frags; i++) {
struct skb_frag_struct *frag = &skb_shinfo(tx->skb)->frags[i];
/* combine physically continuous fragments later? */
ret = sdma_txadd_page(sde->dd,
&tx->txreq,
skb_frag_page(frag),
frag->page_offset,
skb_frag_size(frag));
if (unlikely(ret))
goto bail_txadd;
}
if (tx->plen)
ret = sdma_txadd_kvaddr(sde->dd, &tx->txreq,
tx->pad + HFI1_VNIC_MAX_PAD - tx->plen,
tx->plen);
bail_txadd:
return ret;
}
static int build_vnic_tx_desc(struct sdma_engine *sde,
struct vnic_txreq *tx,
u64 pbc)
{
int ret = 0;
u16 hdrbytes = 2 << 2; /* PBC */
ret = sdma_txinit_ahg(
&tx->txreq,
0,
hdrbytes + tx->skb->len + tx->plen,
0,
0,
NULL,
0,
vnic_sdma_complete);
if (unlikely(ret))
goto bail_txadd;
/* add pbc */
tx->pbc_val = cpu_to_le64(pbc);
ret = sdma_txadd_kvaddr(
sde->dd,
&tx->txreq,
&tx->pbc_val,
hdrbytes);
if (unlikely(ret))
goto bail_txadd;
/* add the ulp payload */
ret = build_vnic_ulp_payload(sde, tx);
bail_txadd:
return ret;
}
/* set up the last plen bytes of pad */
static inline void hfi1_vnic_update_pad(unsigned char *pad, u8 plen)
{
pad[HFI1_VNIC_MAX_PAD - 1] = plen - OPA_VNIC_ICRC_TAIL_LEN;
}
int hfi1_vnic_send_dma(struct hfi1_devdata *dd, u8 q_idx,
struct hfi1_vnic_vport_info *vinfo,
struct sk_buff *skb, u64 pbc, u8 plen)
{
struct hfi1_vnic_sdma *vnic_sdma = &vinfo->sdma[q_idx];
struct sdma_engine *sde = vnic_sdma->sde;
struct vnic_txreq *tx;
int ret = -ECOMM;
if (unlikely(READ_ONCE(vnic_sdma->state) != HFI1_VNIC_SDMA_Q_ACTIVE))
goto tx_err;
if (unlikely(!sde || !sdma_running(sde)))
goto tx_err;
tx = kmem_cache_alloc(dd->vnic.txreq_cache, GFP_ATOMIC);
if (unlikely(!tx)) {
ret = -ENOMEM;
goto tx_err;
}
tx->sdma = vnic_sdma;
tx->skb = skb;
hfi1_vnic_update_pad(tx->pad, plen);
tx->plen = plen;
ret = build_vnic_tx_desc(sde, tx, pbc);
if (unlikely(ret))
goto free_desc;
tx->retry_count = 0;
ret = sdma_send_txreq(sde, &vnic_sdma->wait, &tx->txreq);
/* When -ECOMM, sdma callback will be called with ABORT status */
if (unlikely(ret && unlikely(ret != -ECOMM)))
goto free_desc;
return ret;
free_desc:
sdma_txclean(dd, &tx->txreq);
kmem_cache_free(dd->vnic.txreq_cache, tx);
tx_err:
if (ret != -EBUSY)
dev_kfree_skb_any(skb);
return ret;
}
/*
* hfi1_vnic_sdma_sleep - vnic sdma sleep function
*
* This function gets called from sdma_send_txreq() when there are not enough
* sdma descriptors available to send the packet. It adds the Tx queue's
* wait structure to the sdma engine's dmawait list so it is woken up when
* descriptors become available.
*/
static int hfi1_vnic_sdma_sleep(struct sdma_engine *sde,
struct iowait *wait,
struct sdma_txreq *txreq,
unsigned int seq)
{
struct hfi1_vnic_sdma *vnic_sdma =
container_of(wait, struct hfi1_vnic_sdma, wait);
struct hfi1_ibdev *dev = &vnic_sdma->dd->verbs_dev;
struct vnic_txreq *tx = container_of(txreq, struct vnic_txreq, txreq);
if (sdma_progress(sde, seq, txreq))
if (tx->retry_count++ < HFI1_VNIC_SDMA_RETRY_COUNT)
return -EAGAIN;
vnic_sdma->state = HFI1_VNIC_SDMA_Q_DEFERRED;
write_seqlock(&dev->iowait_lock);
if (list_empty(&vnic_sdma->wait.list))
list_add_tail(&vnic_sdma->wait.list, &sde->dmawait);
write_sequnlock(&dev->iowait_lock);
return -EBUSY;
}
/*
* hfi1_vnic_sdma_wakeup - vnic sdma wakeup function
*
* This function gets called when SDMA descriptors become available and the
* Tx queue's wait structure was previously added to the sdma engine's
* dmawait list.
* It notifies the upper driver about Tx queue wakeup.
*/
static void hfi1_vnic_sdma_wakeup(struct iowait *wait, int reason)
{
struct hfi1_vnic_sdma *vnic_sdma =
container_of(wait, struct hfi1_vnic_sdma, wait);
struct hfi1_vnic_vport_info *vinfo = vnic_sdma->vinfo;
vnic_sdma->state = HFI1_VNIC_SDMA_Q_ACTIVE;
if (__netif_subqueue_stopped(vinfo->netdev, vnic_sdma->q_idx))
netif_wake_subqueue(vinfo->netdev, vnic_sdma->q_idx);
};
inline bool hfi1_vnic_sdma_write_avail(struct hfi1_vnic_vport_info *vinfo,
u8 q_idx)
{
struct hfi1_vnic_sdma *vnic_sdma = &vinfo->sdma[q_idx];
return (READ_ONCE(vnic_sdma->state) == HFI1_VNIC_SDMA_Q_ACTIVE);
}
void hfi1_vnic_sdma_init(struct hfi1_vnic_vport_info *vinfo)
{
int i;
for (i = 0; i < vinfo->num_tx_q; i++) {
struct hfi1_vnic_sdma *vnic_sdma = &vinfo->sdma[i];
iowait_init(&vnic_sdma->wait, 0, NULL, hfi1_vnic_sdma_sleep,
hfi1_vnic_sdma_wakeup, NULL);
vnic_sdma->sde = &vinfo->dd->per_sdma[i];
vnic_sdma->dd = vinfo->dd;
vnic_sdma->vinfo = vinfo;
vnic_sdma->q_idx = i;
vnic_sdma->state = HFI1_VNIC_SDMA_Q_ACTIVE;
/* Add a free descriptor watermark for wakeups */
if (vnic_sdma->sde->descq_cnt > HFI1_VNIC_SDMA_DESC_WTRMRK) {
INIT_LIST_HEAD(&vnic_sdma->stx.list);
vnic_sdma->stx.num_desc = HFI1_VNIC_SDMA_DESC_WTRMRK;
list_add_tail(&vnic_sdma->stx.list,
&vnic_sdma->wait.tx_head);
}
}
}
static void hfi1_vnic_txreq_kmem_cache_ctor(void *obj)
{
struct vnic_txreq *tx = (struct vnic_txreq *)obj;
memset(tx, 0, sizeof(*tx));
}
int hfi1_vnic_txreq_init(struct hfi1_devdata *dd)
{
char buf[HFI1_VNIC_TXREQ_NAME_LEN];
snprintf(buf, sizeof(buf), "hfi1_%u_vnic_txreq_cache", dd->unit);
dd->vnic.txreq_cache = kmem_cache_create(buf,
sizeof(struct vnic_txreq),
0, SLAB_HWCACHE_ALIGN,
hfi1_vnic_txreq_kmem_cache_ctor);
if (!dd->vnic.txreq_cache)
return -ENOMEM;
return 0;
}
void hfi1_vnic_txreq_deinit(struct hfi1_devdata *dd)
{
kmem_cache_destroy(dd->vnic.txreq_cache);
dd->vnic.txreq_cache = NULL;
}


@ -39,7 +39,8 @@
#define HNS_ROCE_VLAN_SL_BIT_MASK 7
#define HNS_ROCE_VLAN_SL_SHIFT 13
struct ib_ah *hns_roce_create_ah(struct ib_pd *ibpd, struct ib_ah_attr *ah_attr,
struct ib_ah *hns_roce_create_ah(struct ib_pd *ibpd,
struct rdma_ah_attr *ah_attr,
struct ib_udata *udata)
{
struct hns_roce_dev *hr_dev = to_hr_dev(ibpd->device);
@ -48,6 +49,7 @@ struct ib_ah *hns_roce_create_ah(struct ib_pd *ibpd, struct ib_ah_attr *ah_attr,
struct hns_roce_ah *ah;
u16 vlan_tag = 0xffff;
struct in6_addr in6;
const struct ib_global_route *grh = rdma_ah_read_grh(ah_attr);
union ib_gid sgid;
int ret;
@ -56,15 +58,20 @@ struct ib_ah *hns_roce_create_ah(struct ib_pd *ibpd, struct ib_ah_attr *ah_attr,
return ERR_PTR(-ENOMEM);
/* Get mac address */
memcpy(&in6, ah_attr->grh.dgid.raw, sizeof(ah_attr->grh.dgid.raw));
if (rdma_is_multicast_addr(&in6))
memcpy(&in6, grh->dgid.raw, sizeof(grh->dgid.raw));
if (rdma_is_multicast_addr(&in6)) {
rdma_get_mcast_mac(&in6, ah->av.mac);
else
memcpy(ah->av.mac, ah_attr->dmac, sizeof(ah_attr->dmac));
} else {
u8 *dmac = rdma_ah_retrieve_dmac(ah_attr);
if (!dmac)
return ERR_PTR(-EINVAL);
memcpy(ah->av.mac, dmac, ETH_ALEN);
}
/* Get source gid */
ret = ib_get_cached_gid(ibpd->device, ah_attr->port_num,
ah_attr->grh.sgid_index, &sgid, &gid_attr);
ret = ib_get_cached_gid(ibpd->device, rdma_ah_get_port_num(ah_attr),
grh->sgid_index, &sgid, &gid_attr);
if (ret) {
dev_err(dev, "get sgid failed! ret = %d\n", ret);
kfree(ah);
@ -78,45 +85,46 @@ struct ib_ah *hns_roce_create_ah(struct ib_pd *ibpd, struct ib_ah_attr *ah_attr,
}
if (vlan_tag < 0x1000)
vlan_tag |= (ah_attr->sl & HNS_ROCE_VLAN_SL_BIT_MASK) <<
vlan_tag |= (rdma_ah_get_sl(ah_attr) &
HNS_ROCE_VLAN_SL_BIT_MASK) <<
HNS_ROCE_VLAN_SL_SHIFT;
ah->av.port_pd = cpu_to_be32(to_hr_pd(ibpd)->pdn | (ah_attr->port_num <<
ah->av.port_pd = cpu_to_be32(to_hr_pd(ibpd)->pdn |
(rdma_ah_get_port_num(ah_attr) <<
HNS_ROCE_PORT_NUM_SHIFT));
ah->av.gid_index = ah_attr->grh.sgid_index;
ah->av.gid_index = grh->sgid_index;
ah->av.vlan = cpu_to_le16(vlan_tag);
dev_dbg(dev, "gid_index = 0x%x,vlan = 0x%x\n", ah->av.gid_index,
ah->av.vlan);
if (ah_attr->static_rate)
if (rdma_ah_get_static_rate(ah_attr))
ah->av.stat_rate = IB_RATE_10_GBPS;
memcpy(ah->av.dgid, ah_attr->grh.dgid.raw, HNS_ROCE_GID_SIZE);
ah->av.sl_tclass_flowlabel = cpu_to_le32(ah_attr->sl <<
memcpy(ah->av.dgid, grh->dgid.raw, HNS_ROCE_GID_SIZE);
ah->av.sl_tclass_flowlabel = cpu_to_le32(rdma_ah_get_sl(ah_attr) <<
HNS_ROCE_SL_SHIFT);
return &ah->ibah;
}
int hns_roce_query_ah(struct ib_ah *ibah, struct ib_ah_attr *ah_attr)
int hns_roce_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr)
{
struct hns_roce_ah *ah = to_hr_ah(ibah);
memset(ah_attr, 0, sizeof(*ah_attr));
ah_attr->sl = le32_to_cpu(ah->av.sl_tclass_flowlabel) >>
HNS_ROCE_SL_SHIFT;
ah_attr->port_num = le32_to_cpu(ah->av.port_pd) >>
HNS_ROCE_PORT_NUM_SHIFT;
ah_attr->static_rate = ah->av.stat_rate;
ah_attr->ah_flags = IB_AH_GRH;
ah_attr->grh.traffic_class = le32_to_cpu(ah->av.sl_tclass_flowlabel) >>
HNS_ROCE_TCLASS_SHIFT;
ah_attr->grh.flow_label = le32_to_cpu(ah->av.sl_tclass_flowlabel) &
HNS_ROCE_FLOW_LABLE_MASK;
ah_attr->grh.hop_limit = ah->av.hop_limit;
ah_attr->grh.sgid_index = ah->av.gid_index;
memcpy(ah_attr->grh.dgid.raw, ah->av.dgid, HNS_ROCE_GID_SIZE);
rdma_ah_set_sl(ah_attr, (le32_to_cpu(ah->av.sl_tclass_flowlabel) >>
HNS_ROCE_SL_SHIFT));
rdma_ah_set_port_num(ah_attr, (le32_to_cpu(ah->av.port_pd) >>
HNS_ROCE_PORT_NUM_SHIFT));
rdma_ah_set_static_rate(ah_attr, ah->av.stat_rate);
rdma_ah_set_grh(ah_attr, NULL,
(le32_to_cpu(ah->av.sl_tclass_flowlabel) &
HNS_ROCE_FLOW_LABLE_MASK), ah->av.gid_index,
ah->av.hop_limit,
(le32_to_cpu(ah->av.sl_tclass_flowlabel) >>
HNS_ROCE_TCLASS_SHIFT));
rdma_ah_set_dgid_raw(ah_attr, ah->av.dgid);
return 0;
}
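hns_roce keeps SL, traffic class and flow label packed into one 32-bit av.sl_tclass_flowlabel word; the rewrite above simply unpacks it through the rdma_ah_set_*() helpers. A standalone sketch of such an unpacking (the 20/8/4-bit layout is an assumption for illustration; the driver's real constants are HNS_ROCE_SL_SHIFT, HNS_ROCE_TCLASS_SHIFT and HNS_ROCE_FLOW_LABLE_MASK):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* sl = 5, tclass = 0x20, flow label = 0x12345 */
        uint32_t w = (5u << 28) | (0x20u << 20) | 0x12345u;

        printf("sl %u tclass 0x%x flow 0x%x\n",
               w >> 28, (w >> 20) & 0xff, w & 0xfffff);
        return 0;
}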


@ -299,9 +299,9 @@ int hns_roce_cmd_use_events(struct hns_roce_dev *hr_dev)
struct hns_roce_cmdq *hr_cmd = &hr_dev->cmd;
int i;
hr_cmd->context = kmalloc(hr_cmd->max_cmds *
sizeof(struct hns_roce_cmd_context),
GFP_KERNEL);
hr_cmd->context = kmalloc_array(hr_cmd->max_cmds,
sizeof(*hr_cmd->context),
GFP_KERNEL);
if (!hr_cmd->context)
return -ENOMEM;
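Switching from an open-coded kmalloc(n * size) to kmalloc_array() (and kcalloc() in the buddy allocator further down) closes a multiplication-overflow hole: the helpers return NULL instead of silently under-allocating when n * size wraps. A kernel-context sketch of the safe form:

static void *example_alloc_table(size_t n, size_t elem_size)
{
        /* returns NULL if n * elem_size would overflow */
        return kmalloc_array(n, elem_size, GFP_KERNEL);
}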


@ -219,8 +219,7 @@ static int hns_roce_ib_get_cq_umem(struct hns_roce_dev *hr_dev,
return PTR_ERR(*umem);
ret = hns_roce_mtt_init(hr_dev, ib_umem_page_count(*umem),
ilog2((unsigned int)(*umem)->page_size),
&buf->hr_mtt);
(*umem)->page_shift, &buf->hr_mtt);
if (ret)
goto err_buf;


@ -687,9 +687,10 @@ void hns_roce_bitmap_free_range(struct hns_roce_bitmap *bitmap,
unsigned long obj, int cnt,
int rr);
struct ib_ah *hns_roce_create_ah(struct ib_pd *pd, struct ib_ah_attr *ah_attr,
struct ib_ah *hns_roce_create_ah(struct ib_pd *pd,
struct rdma_ah_attr *ah_attr,
struct ib_udata *udata);
int hns_roce_query_ah(struct ib_ah *ibah, struct ib_ah_attr *ah_attr);
int hns_roce_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr);
int hns_roce_destroy_ah(struct ib_ah *ah);
struct ib_pd *hns_roce_alloc_pd(struct ib_device *ib_dev,


@ -33,6 +33,7 @@
#include <linux/platform_device.h>
#include <linux/acpi.h>
#include <linux/etherdevice.h>
#include <linux/of.h>
#include <rdma/ib_umem.h>
#include "hns_roce_common.h"
#include "hns_roce_device.h"
@ -657,6 +658,7 @@ static int hns_roce_v1_rsv_lp_qp(struct hns_roce_dev *hr_dev)
struct hns_roce_qp *hr_qp;
struct ib_cq *cq;
struct ib_pd *pd;
union ib_gid dgid;
u64 subnet_prefix;
int attr_mask = 0;
int i;
@ -707,12 +709,8 @@ static int hns_roce_v1_rsv_lp_qp(struct hns_roce_dev *hr_dev)
attr.rnr_retry = 7;
attr.timeout = 0x12;
attr.path_mtu = IB_MTU_256;
attr.ah_attr.ah_flags = 1;
attr.ah_attr.static_rate = 3;
attr.ah_attr.grh.sgid_index = 0;
attr.ah_attr.grh.hop_limit = 1;
attr.ah_attr.grh.flow_label = 0;
attr.ah_attr.grh.traffic_class = 0;
rdma_ah_set_grh(&attr.ah_attr, NULL, 0, 0, 1, 0);
rdma_ah_set_static_rate(&attr.ah_attr, 3);
subnet_prefix = cpu_to_be64(0xfe80000000000000LL);
for (i = 0; i < HNS_ROCE_V1_RESV_QP; i++) {
@ -741,24 +739,22 @@ static int hns_roce_v1_rsv_lp_qp(struct hns_roce_dev *hr_dev)
hr_qp->ibqp.recv_cq = cq;
hr_qp->ibqp.send_cq = cq;
attr.ah_attr.port_num = phy_port + 1;
attr.ah_attr.sl = sl;
rdma_ah_set_port_num(&attr.ah_attr, phy_port + 1);
rdma_ah_set_sl(&attr.ah_attr, phy_port + 1);
attr.port_num = phy_port + 1;
attr.dest_qp_num = hr_qp->qpn;
memcpy(attr.ah_attr.dmac, hr_dev->dev_addr[phy_port],
memcpy(rdma_ah_retrieve_dmac(&attr.ah_attr),
hr_dev->dev_addr[phy_port],
MAC_ADDR_OCTET_NUM);
memcpy(attr.ah_attr.grh.dgid.raw,
&subnet_prefix, sizeof(u64));
memcpy(&attr.ah_attr.grh.dgid.raw[8],
hr_dev->dev_addr[phy_port], 3);
memcpy(&attr.ah_attr.grh.dgid.raw[13],
hr_dev->dev_addr[phy_port] + 3, 3);
attr.ah_attr.grh.dgid.raw[11] = 0xff;
attr.ah_attr.grh.dgid.raw[12] = 0xfe;
attr.ah_attr.grh.dgid.raw[8] ^= 2;
memcpy(&dgid.raw, &subnet_prefix, sizeof(u64));
memcpy(&dgid.raw[8], hr_dev->dev_addr[phy_port], 3);
memcpy(&dgid.raw[13], hr_dev->dev_addr[phy_port] + 3, 3);
dgid.raw[11] = 0xff;
dgid.raw[12] = 0xfe;
dgid.raw[8] ^= 2;
rdma_ah_set_dgid_raw(&attr.ah_attr, dgid.raw);
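The destination GID built here is a link-local address whose interface id is the port MAC in modified EUI-64 form: the MAC is split around an inserted 0xff,0xfe pair and the universal/local bit of the first byte is flipped. A standalone sketch of that conversion:

#include <stdint.h>
#include <stdio.h>

static void mac_to_eui64(const uint8_t mac[6], uint8_t eui[8])
{
        eui[0] = mac[0] ^ 2;    /* flip the universal/local bit */
        eui[1] = mac[1];
        eui[2] = mac[2];
        eui[3] = 0xff;
        eui[4] = 0xfe;
        eui[5] = mac[3];
        eui[6] = mac[4];
        eui[7] = mac[5];
}

int main(void)
{
        uint8_t mac[6] = { 0x00, 0x1b, 0x21, 0xaa, 0xbb, 0xcc };
        uint8_t eui[8];
        int i;

        mac_to_eui64(mac, eui);
        for (i = 0; i < 8; i++)
                printf("%02x%s", eui[i], i == 7 ? "\n" : ":");
        return 0;
}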
attr_mask |= IB_QP_PORT;
ret = hr_dev->hw->modify_qp(&hr_qp->ibqp, &attr, attr_mask,
@ -1851,6 +1847,7 @@ void hns_roce_v1_cq_set_ci(struct hns_roce_cq *hr_cq, u32 cons_index)
u32 doorbell[2];
doorbell[0] = cons_index & ((hr_cq->cq_depth << 1) - 1);
doorbell[1] = 0;
roce_set_bit(doorbell[1], ROCEE_DB_OTHERS_H_ROCEE_DB_OTH_HW_SYNS_S, 1);
roce_set_field(doorbell[1], ROCEE_DB_OTHERS_H_ROCEE_DB_OTH_CMD_M,
ROCEE_DB_OTHERS_H_ROCEE_DB_OTH_CMD_S, 3);
@ -2565,6 +2562,7 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
struct device *dev = &hr_dev->pdev->dev;
struct hns_roce_qp_context *context;
const struct ib_global_route *grh = rdma_ah_read_grh(&attr->ah_attr);
dma_addr_t dma_handle_2 = 0;
dma_addr_t dma_handle = 0;
uint32_t doorbell[2] = {0};
@ -2573,6 +2571,7 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
int ret = -EINVAL;
u64 *mtts = NULL;
int port;
u8 port_num;
u8 *dmac;
u8 *smac;
@ -2739,7 +2738,7 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
goto out;
}
dmac = (u8 *)attr->ah_attr.dmac;
dmac = (u8 *)attr->ah_attr.roce.dmac;
context->sq_rq_bt_l = (u32)(dma_handle);
roce_set_field(context->qpc_bytes_24,
@ -2780,7 +2779,7 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
roce_set_bit(context->qpc_bytes_32,
QP_CONTEXT_QPC_BYTE_32_GLOBAL_HEADER_S,
attr->ah_attr.ah_flags);
rdma_ah_get_ah_flags(&attr->ah_attr));
roce_set_field(context->qpc_bytes_32,
QP_CONTEXT_QPC_BYTES_32_RESPONDER_RESOURCES_M,
QP_CONTEXT_QPC_BYTES_32_RESPONDER_RESOURCES_S,
@ -2792,12 +2791,13 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
attr->dest_qp_num);
/* Configure GID index */
port_num = rdma_ah_get_port_num(&attr->ah_attr);
roce_set_field(context->qpc_bytes_36,
QP_CONTEXT_QPC_BYTES_36_SGID_INDEX_M,
QP_CONTEXT_QPC_BYTES_36_SGID_INDEX_S,
hns_get_gid_index(hr_dev,
attr->ah_attr.port_num - 1,
attr->ah_attr.grh.sgid_index));
hns_get_gid_index(hr_dev,
port_num - 1,
grh->sgid_index));
memcpy(&(context->dmac_l), dmac, 4);
@ -2808,26 +2808,26 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
roce_set_field(context->qpc_bytes_44,
QP_CONTEXT_QPC_BYTES_44_MAXIMUM_STATIC_RATE_M,
QP_CONTEXT_QPC_BYTES_44_MAXIMUM_STATIC_RATE_S,
attr->ah_attr.static_rate);
rdma_ah_get_static_rate(&attr->ah_attr));
roce_set_field(context->qpc_bytes_44,
QP_CONTEXT_QPC_BYTES_44_HOPLMT_M,
QP_CONTEXT_QPC_BYTES_44_HOPLMT_S,
attr->ah_attr.grh.hop_limit);
grh->hop_limit);
roce_set_field(context->qpc_bytes_48,
QP_CONTEXT_QPC_BYTES_48_FLOWLABEL_M,
QP_CONTEXT_QPC_BYTES_48_FLOWLABEL_S,
attr->ah_attr.grh.flow_label);
grh->flow_label);
roce_set_field(context->qpc_bytes_48,
QP_CONTEXT_QPC_BYTES_48_TCLASS_M,
QP_CONTEXT_QPC_BYTES_48_TCLASS_S,
attr->ah_attr.grh.traffic_class);
grh->traffic_class);
roce_set_field(context->qpc_bytes_48,
QP_CONTEXT_QPC_BYTES_48_MTU_M,
QP_CONTEXT_QPC_BYTES_48_MTU_S, attr->path_mtu);
memcpy(context->dgid, attr->ah_attr.grh.dgid.raw,
sizeof(attr->ah_attr.grh.dgid.raw));
memcpy(context->dgid, grh->dgid.raw,
sizeof(grh->dgid.raw));
dev_dbg(dev, "dmac:%x :%lx\n", context->dmac_l,
roce_get_field(context->qpc_bytes_44,
@ -2907,8 +2907,9 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
hr_qp->phy_port);
roce_set_field(context->qpc_bytes_156,
QP_CONTEXT_QPC_BYTES_156_SL_M,
QP_CONTEXT_QPC_BYTES_156_SL_S, attr->ah_attr.sl);
hr_qp->sl = attr->ah_attr.sl;
QP_CONTEXT_QPC_BYTES_156_SL_S,
rdma_ah_get_sl(&attr->ah_attr));
hr_qp->sl = rdma_ah_get_sl(&attr->ah_attr);
} else if (cur_state == IB_QPS_RTR &&
new_state == IB_QPS_RTS) {
/* If exist optional param, return error */
@@ -3019,8 +3020,9 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
                         hr_qp->phy_port);
                 roce_set_field(context->qpc_bytes_156,
                         QP_CONTEXT_QPC_BYTES_156_SL_M,
-                        QP_CONTEXT_QPC_BYTES_156_SL_S, attr->ah_attr.sl);
-                hr_qp->sl = attr->ah_attr.sl;
+                        QP_CONTEXT_QPC_BYTES_156_SL_S,
+                        rdma_ah_get_sl(&attr->ah_attr));
+                hr_qp->sl = rdma_ah_get_sl(&attr->ah_attr);
                 roce_set_field(context->qpc_bytes_156,
                         QP_CONTEXT_QPC_BYTES_156_INITIATOR_DEPTH_M,
                         QP_CONTEXT_QPC_BYTES_156_INITIATOR_DEPTH_S,
@@ -3355,28 +3357,33 @@ static int hns_roce_v1_q_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
         if (hr_qp->ibqp.qp_type == IB_QPT_RC ||
             hr_qp->ibqp.qp_type == IB_QPT_UC) {
-                qp_attr->ah_attr.sl = roce_get_field(context->qpc_bytes_156,
-                        QP_CONTEXT_QPC_BYTES_156_SL_M,
-                        QP_CONTEXT_QPC_BYTES_156_SL_S);
-                qp_attr->ah_attr.grh.flow_label = roce_get_field(
-                        context->qpc_bytes_48,
-                        QP_CONTEXT_QPC_BYTES_48_FLOWLABEL_M,
-                        QP_CONTEXT_QPC_BYTES_48_FLOWLABEL_S);
-                qp_attr->ah_attr.grh.sgid_index = roce_get_field(
-                        context->qpc_bytes_36,
-                        QP_CONTEXT_QPC_BYTES_36_SGID_INDEX_M,
-                        QP_CONTEXT_QPC_BYTES_36_SGID_INDEX_S);
-                qp_attr->ah_attr.grh.hop_limit = roce_get_field(
-                        context->qpc_bytes_44,
-                        QP_CONTEXT_QPC_BYTES_44_HOPLMT_M,
-                        QP_CONTEXT_QPC_BYTES_44_HOPLMT_S);
-                qp_attr->ah_attr.grh.traffic_class = roce_get_field(
-                        context->qpc_bytes_48,
-                        QP_CONTEXT_QPC_BYTES_48_TCLASS_M,
-                        QP_CONTEXT_QPC_BYTES_48_TCLASS_S);
+                struct ib_global_route *grh =
+                        rdma_ah_retrieve_grh(&qp_attr->ah_attr);
-                memcpy(qp_attr->ah_attr.grh.dgid.raw, context->dgid,
-                        sizeof(qp_attr->ah_attr.grh.dgid.raw));
+                rdma_ah_set_sl(&qp_attr->ah_attr,
+                        roce_get_field(context->qpc_bytes_156,
+                                QP_CONTEXT_QPC_BYTES_156_SL_M,
+                                QP_CONTEXT_QPC_BYTES_156_SL_S));
+                rdma_ah_set_ah_flags(&qp_attr->ah_attr, IB_AH_GRH);
+                grh->flow_label =
+                        roce_get_field(context->qpc_bytes_48,
+                                QP_CONTEXT_QPC_BYTES_48_FLOWLABEL_M,
+                                QP_CONTEXT_QPC_BYTES_48_FLOWLABEL_S);
+                grh->sgid_index =
+                        roce_get_field(context->qpc_bytes_36,
+                                QP_CONTEXT_QPC_BYTES_36_SGID_INDEX_M,
+                                QP_CONTEXT_QPC_BYTES_36_SGID_INDEX_S);
+                grh->hop_limit =
+                        roce_get_field(context->qpc_bytes_44,
+                                QP_CONTEXT_QPC_BYTES_44_HOPLMT_M,
+                                QP_CONTEXT_QPC_BYTES_44_HOPLMT_S);
+                grh->traffic_class =
+                        roce_get_field(context->qpc_bytes_48,
+                                QP_CONTEXT_QPC_BYTES_48_TCLASS_M,
+                                QP_CONTEXT_QPC_BYTES_48_TCLASS_S);
+                memcpy(grh->dgid.raw, context->dgid,
+                        sizeof(grh->dgid.raw));
         }
         qp_attr->pkey_index = roce_get_field(context->qpc_bytes_12,
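
The hns hunks above are the driver side of the rdma_ah_attr accessor conversion from the OPA AH series: instead of poking ah_attr.sl, ah_attr.ah_flags and ah_attr.grh.* directly, drivers now go through rdma_ah_get_*()/rdma_ah_set_*() and the rdma_ah_read_grh()/rdma_ah_retrieve_grh() helpers, and the link-layer specific bits move into a per-type union (which is why ah_attr.dmac becomes ah_attr.roce.dmac). That lets the same code later describe IB, RoCE or OPA addresses without caring about the layout underneath. A minimal sketch of the pattern, using only accessors that appear in this diff; the fill_ah()/dump_ah() helpers are hypothetical:

    #include <linux/printk.h>
    #include <rdma/ib_verbs.h>

    /* Hypothetical helper: build an AH attribute through accessors only. */
    static void fill_ah(struct rdma_ah_attr *ah, u8 port_num, u8 sl,
                        u8 sgid_index, u8 hop_limit)
    {
            struct ib_global_route *grh = rdma_ah_retrieve_grh(ah);

            rdma_ah_set_port_num(ah, port_num);
            rdma_ah_set_sl(ah, sl);
            rdma_ah_set_ah_flags(ah, IB_AH_GRH);    /* GRH fields are valid */
            grh->sgid_index = sgid_index;
            grh->hop_limit = hop_limit;
    }

    /* Read side: const accessors, as in hns_roce_v1_m_qp()/_q_qp() above. */
    static void dump_ah(const struct rdma_ah_attr *ah)
    {
            const struct ib_global_route *grh = rdma_ah_read_grh(ah);

            pr_debug("port %u sl %u rate %u tclass %u hoplmt %u\n",
                     rdma_ah_get_port_num(ah), rdma_ah_get_sl(ah),
                     rdma_ah_get_static_rate(ah), grh->traffic_class,
                     grh->hop_limit);
    }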


@@ -127,11 +127,12 @@ static int hns_roce_buddy_init(struct hns_roce_buddy *buddy, int max_order)
         buddy->max_order = max_order;
         spin_lock_init(&buddy->lock);
-        buddy->bits = kzalloc((buddy->max_order + 1) * sizeof(long *),
-                              GFP_KERNEL);
-        buddy->num_free = kzalloc((buddy->max_order + 1) * sizeof(int *),
-                                  GFP_KERNEL);
+        buddy->bits = kcalloc(buddy->max_order + 1,
+                              sizeof(*buddy->bits),
+                              GFP_KERNEL);
+        buddy->num_free = kcalloc(buddy->max_order + 1,
+                                  sizeof(*buddy->num_free),
+                                  GFP_KERNEL);
         if (!buddy->bits || !buddy->num_free)
                 goto err_out;
@@ -503,7 +504,8 @@ int hns_roce_ib_umem_write_mtt(struct hns_roce_dev *hr_dev,
         for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
                 len = sg_dma_len(sg) >> mtt->page_shift;
                 for (k = 0; k < len; ++k) {
-                        pages[i++] = sg_dma_address(sg) + umem->page_size * k;
+                        pages[i++] = sg_dma_address(sg) +
+                                     (k << umem->page_shift);
                         if (i == PAGE_SIZE / sizeof(u64)) {
                                 ret = hns_roce_write_mtt(hr_dev, mtt, n, i,
                                                          pages);
@@ -563,9 +565,9 @@ struct ib_mr *hns_roce_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
         }
         n = ib_umem_page_count(mr->umem);
-        if (mr->umem->page_size != HNS_ROCE_HEM_PAGE_SIZE) {
-                dev_err(dev, "Just support 4K page size but is 0x%x now!\n",
-                        mr->umem->page_size);
+        if (mr->umem->page_shift != HNS_ROCE_HEM_PAGE_SHIFT) {
+                dev_err(dev, "Just support 4K page size but is 0x%lx now!\n",
+                        BIT(mr->umem->page_shift));
                 ret = -EINVAL;
                 goto err_umem;
         }
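
The hns_roce_buddy_init() hunk above replaces open-coded kzalloc(n * size) calls with kcalloc(), which zeroes the allocation and checks the n * size multiplication for overflow; sizing with sizeof(*ptr) instead of sizeof(long *) / sizeof(int *) also keeps the element size tied to the array's actual type (the old num_free allocation appears to have used a pointer size for an array of ints). A minimal sketch of the idiom, using a made-up order_table structure:

    #include <linux/errno.h>
    #include <linux/slab.h>

    /* Hypothetical table holding one bitmap pointer and one counter per order. */
    struct order_table {
            unsigned long **bits;
            unsigned int *num_free;
    };

    static int order_table_alloc(struct order_table *t, unsigned int max_order)
    {
            /* kcalloc(): overflow-checked n * size allocation, zero-filled. */
            t->bits = kcalloc(max_order + 1, sizeof(*t->bits), GFP_KERNEL);
            t->num_free = kcalloc(max_order + 1, sizeof(*t->num_free), GFP_KERNEL);
            if (!t->bits || !t->num_free) {
                    kfree(t->bits);
                    kfree(t->num_free);
                    return -ENOMEM;
            }
            return 0;
    }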


@@ -437,8 +437,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
                 }
                 ret = hns_roce_mtt_init(hr_dev, ib_umem_page_count(hr_qp->umem),
-                        ilog2((unsigned int)hr_qp->umem->page_size),
-                        &hr_qp->mtt);
+                        hr_qp->umem->page_shift, &hr_qp->mtt);
                 if (ret) {
                         dev_err(dev, "hns_roce_mtt_init error for create qp\n");
                         goto err_buf;
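
The remaining hns_roce memory-registration and QP-creation hunks track the ib_umem change in this cycle that dropped umem->page_size in favour of umem->page_shift: byte offsets become shifts (k << page_shift rather than page_size * k), the "4K only" check compares shifts, and hns_roce_mtt_init() is passed the shift directly instead of ilog2(page_size). A small sketch of the resulting address walk, with a hypothetical walk_umem_pages() helper standing in for the write-MTT loop (assumes the 4.12-era ib_umem layout with sg_head/nmap/page_shift):

    #include <linux/scatterlist.h>
    #include <rdma/ib_umem.h>

    /* Hypothetical helper: hand every DMA page address in a umem to a callback. */
    static void walk_umem_pages(struct ib_umem *umem,
                                void (*cb)(u64 addr, void *arg), void *arg)
    {
            struct scatterlist *sg;
            unsigned int len, k;
            int entry;

            for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
                    /* sg_dma_len() >> page_shift = number of pages in this entry */
                    len = sg_dma_len(sg) >> umem->page_shift;
                    for (k = 0; k < len; k++)
                            /* addr + (k << shift) is the old addr + k * page_size */
                            cb(sg_dma_address(sg) + ((u64)k << umem->page_shift), arg);
            }
    }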


@@ -3184,9 +3184,8 @@ void i40iw_setup_cm_core(struct i40iw_device *iwdev)
         INIT_LIST_HEAD(&cm_core->connected_nodes);
         INIT_LIST_HEAD(&cm_core->listen_nodes);
-        init_timer(&cm_core->tcp_timer);
-        cm_core->tcp_timer.function = i40iw_cm_timer_tick;
-        cm_core->tcp_timer.data = (unsigned long)cm_core;
+        setup_timer(&cm_core->tcp_timer, i40iw_cm_timer_tick,
+                    (unsigned long)cm_core);
         spin_lock_init(&cm_core->ht_lock);
         spin_lock_init(&cm_core->listen_list_lock);


@@ -844,10 +844,9 @@ void i40iw_terminate_start_timer(struct i40iw_sc_qp *qp)
         iwqp = (struct i40iw_qp *)qp->back_qp;
         i40iw_add_ref(&iwqp->ibqp);
-        init_timer(&iwqp->terminate_timer);
-        iwqp->terminate_timer.function = i40iw_terminate_timeout;
+        setup_timer(&iwqp->terminate_timer, i40iw_terminate_timeout,
+                    (unsigned long)iwqp);
         iwqp->terminate_timer.expires = jiffies + HZ;
-        iwqp->terminate_timer.data = (unsigned long)iwqp;
         add_timer(&iwqp->terminate_timer);
 }
@@ -1436,9 +1435,8 @@ void i40iw_hw_stats_start_timer(struct i40iw_sc_vsi *vsi)
 {
         struct i40iw_vsi_pestat *devstat = vsi->pestat;
-        init_timer(&devstat->stats_timer);
-        devstat->stats_timer.function = i40iw_hw_stats_timeout;
-        devstat->stats_timer.data = (unsigned long)vsi;
+        setup_timer(&devstat->stats_timer, i40iw_hw_stats_timeout,
+                    (unsigned long)vsi);
         mod_timer(&devstat->stats_timer,
                   jiffies + msecs_to_jiffies(STATS_TIMER_DELAY));
 }
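
The i40iw hunks are a mechanical cleanup: an init_timer() call followed by manual assignment of the timer's .function and .data fields collapses into a single setup_timer(timer, fn, data) call (the pre-timer_setup() API, where callbacks still take an unsigned long cookie). A minimal sketch of the new form, around a hypothetical demo_ctx structure:

    #include <linux/jiffies.h>
    #include <linux/timer.h>

    struct demo_ctx {
            struct timer_list watchdog;     /* hypothetical periodic timer */
    };

    static void demo_timeout(unsigned long data)
    {
            struct demo_ctx *ctx = (struct demo_ctx *)data;

            /* ... periodic work on ctx ..., then re-arm one second out */
            mod_timer(&ctx->watchdog, jiffies + HZ);
    }

    static void demo_start(struct demo_ctx *ctx)
    {
            /* One call replaces init_timer() plus .function/.data assignments. */
            setup_timer(&ctx->watchdog, demo_timeout, (unsigned long)ctx);
            mod_timer(&ctx->watchdog, jiffies + HZ);
    }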

(Some files were not shown because too many files have changed in this diff.)