RDMA v5.14 merge window Pull Request

This PR contains a replacement driver for Intel iWARP hardware. This new
 driver supports the old Ethernet hardware and also newer chips that can do
 RoCE. Otherwise this contains the typical mix of patches:
 
 - Driver updates and cleanups for bnxt_re, cxgb4, mlx4, and mlx5
 
 - Many static-checker-driven code cleanups, including a wide refcount_t
   conversion
 
 - Several series for the hns driver: more HIP09 HW capabilities, migration
   to new HW register manipulators, and code cleanups
 
 - Minor fixes and improvements in srp, rtrs, and cm
 
 - Improvements throughout for sysfs-related code to use DEVICE_ATTR_*,
   make the ib_port sysfs first-class, and overall use sysfs APIs properly
 
 - Intel's new irdma driver replacing i40iw
 
 - rxe general cleanups and Memory Window support
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEfB7FMLh+8QxL+6i3OG33FX4gmxoFAmDcunQACgkQOG33FX4g
 mxqSBA//dsZi/UzpzgU+YqyMFmUp04wd2/iCYzOcCViNPQZCyCARbGaMXI4kMa4s
 8dM5xU76OnCuNSnXHaIwvHC3CdN9GUm08j9eWY7syvAiKtXCjzv7qmCVfBw35UyK
 IXKfXh57toTSSAIfxw8yKc97QgaDSJ2zQ34fXkoE0AvTlfyN6pHQe9ef/Ca0ejS4
 awUGYVG/oilLXrEHcSSAv5UoX6hOUje6jqqRgp5jmZTI3g7SlIPL8mWgXBkHAYmd
 kDX7lBd09CKo2bmR071/kF6xUzvbCg1tmeE6lZze7gE+aKlBkZcvCBe1RAh3sBzK
 ysLfON5GGw5qnkMaY8j5h3sgWvi3qTTEW+jCAmmVi/6z4PF47mvmVVn+/pZc3y2e
 PqH43cunhwS0KuoUJ5Sd48J/UvabrvdbCNZrjCGCpt45EF4VwKxYMh74Bf0ABEQS
 i9eKR/+wyHG6Uv1U37fIXsqa8yUttl9aV/s88s8irn4xhG8ygBLZgeVQNeGUfvdV
 1W0XLEjRmKFezC1FhiPOz7CLIgL3BfSU1V+S7p0Gvb6ijZqyZTfRUaWbaD3KJpRT
 1kwzE4qp6IbJMEqgQH/lq0xBzvzF48FPvBslX5kwlm0phQRrMCwMVIafutpu395q
 ySeStEvsTVfz/JUHL3ZaEJyTRjAvPL0lXLH80XUpgWk9GzsksOM=
 =wyqt
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma

Pull rdma updates from Jason Gunthorpe:
 "This contains a replacement driver for Intel iWARP hardware. This new
  driver supports the old Ethernet hardware and also newer chips that
  can do RoCE.

  Other than that, this contains the typical mix of patches:

   - Driver updates and cleanups for bnxt_re, cxgb4, mlx4, and mlx5

   - Many static-checker-driven code cleanups, including a wide
     refcount_t conversion

   - Several series for the hns driver: more HIP09 HW capabilities,
     migration to new HW register manipulators, and code cleanups

   - Minor fixes and improvements in srp, rtrs, and cm

   - Improvements throughout for sysfs-related code to use
     DEVICE_ATTR_*, make the ib_port sysfs first-class, and overall use
     sysfs APIs properly

   - Intel's new irdma driver replacing i40iw

   - rxe general cleanups and Memory Window support"
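
The "wide refcount_t conversion" called out above shows up in many of the hunks
below (iwcm, multicast, iwpm, uverbs, cxgb4): plain atomic_t reference counts
become refcount_t, whose helpers saturate and warn on overflow or underflow
instead of silently wrapping. A minimal sketch of the pattern, using a
hypothetical object rather than any struct actually touched by this series:

#include <linux/completion.h>
#include <linux/refcount.h>
#include <linux/slab.h>

/* Hypothetical ref-counted object; the series applies the same change
 * to iwcm_id_private, mcast_member, ib_uverbs_device and others.
 */
struct demo_obj {
	refcount_t refcount;		/* was: atomic_t refcount; */
	struct completion comp;
};

static struct demo_obj *demo_alloc(void)
{
	struct demo_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (!obj)
		return NULL;
	refcount_set(&obj->refcount, 1);	/* was: atomic_set() */
	init_completion(&obj->comp);
	return obj;
}

static void demo_get(struct demo_obj *obj)
{
	refcount_inc(&obj->refcount);		/* was: atomic_inc() */
}

static void demo_put(struct demo_obj *obj)
{
	/* was: atomic_dec_and_test(); dropping below zero now warns */
	if (refcount_dec_and_test(&obj->refcount))
		complete(&obj->comp);
}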

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: (211 commits)
  RDMA/core: Always release restrack object
  RDMA/mlx5: Don't access NULL-cleared mpi pointer
  RDMA/irdma: Fix potential overflow expression in irdma_prm_get_pbles
  RDMA/irdma: Check contents of user-space irdma_mem_reg_req object
  RDMA/rxe: Missing unlock on error in get_srq_wqe()
  RDMA/cma: Fix rdma_resolve_route() memory leak
  RDMA/core/sa_query: Remove unused argument
  RDMA/cma: Fix incorrect Packet Lifetime calculation
  RDMA/cma: Protect RMW with qp_mutex
  RDMA/cma: Remove unnecessary INIT->INIT transition
  RDMA/hns: Add window selection field of congestion control
  RDMA/hfi1: Remove use of kmap()
  RDMA/irdma: Remove use of kmap()
  RDMA/bnxt_re: Fix uninitialized struct bit field rsvd1
  IB/isert: Align target max I/O size to initiator size
  RDMA/hns: Fix incorrect vlan enable bit in QPC
  MAINTAINERS: Update Broadcom RDMA maintainers
  RDMA/irdma: Use the queried port attributes
  RDMA/rxe: Fix redundant skb_put_zero
  RDMA/rxe: Fix extra copy in prepare_ack_packet
  ...
Commit e04360a2ea by Linus Torvalds, 2021-07-01 14:54:03 -07:00
233 changed files with 38062 additions and 34467 deletions


@ -731,26 +731,6 @@ Description:
is the irq number of "sdma3", and M is irq number of "sdma4" in
the /proc/interrupts file.
sysfs interface for Intel(R) X722 iWARP i40iw driver
----------------------------------------------------
What: /sys/class/infiniband/i40iwX/hw_rev
What: /sys/class/infiniband/i40iwX/hca_type
What: /sys/class/infiniband/i40iwX/board_id
Date: Jan, 2016
KernelVersion: v4.10
Contact: linux-rdma@vger.kernel.org
Description:
=============== ==== ========================
hw_rev: (RO) Hardware revision number
hca_type: (RO) Show HCA type (I40IW)
board_id: (RO) I40IW board ID
=============== ==== ========================
sysfs interface for QLogic qedr NIC Driver
------------------------------------------


@ -3737,9 +3737,6 @@ F: drivers/gpio/gpio-bcm-kona.c
BROADCOM NETXTREME-E ROCE DRIVER
M: Selvin Xavier <selvin.xavier@broadcom.com>
M: Devesh Sharma <devesh.sharma@broadcom.com>
M: Somnath Kotur <somnath.kotur@broadcom.com>
M: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
M: Naresh Kumar PBS <nareshkumar.pbs@broadcom.com>
L: linux-rdma@vger.kernel.org
S: Supported
@ -6779,7 +6776,6 @@ F: drivers/net/ethernet/emulex/benet/
EMULEX ONECONNECT ROCE DRIVER
M: Selvin Xavier <selvin.xavier@broadcom.com>
M: Devesh Sharma <devesh.sharma@broadcom.com>
L: linux-rdma@vger.kernel.org
S: Odd Fixes
W: http://www.broadcom.com
@ -9182,6 +9178,14 @@ F: drivers/net/ethernet/intel/*/
F: include/linux/avf/virtchnl.h
F: include/linux/net/intel/iidc.h
INTEL ETHERNET PROTOCOL DRIVER FOR RDMA
M: Mustafa Ismail <mustafa.ismail@intel.com>
M: Shiraz Saleem <shiraz.saleem@intel.com>
L: linux-rdma@vger.kernel.org
S: Supported
F: drivers/infiniband/hw/irdma/
F: include/uapi/rdma/irdma-abi.h
INTEL FRAMEBUFFER DRIVER (excluding 810 and 815)
M: Maik Broemme <mbroemme@libmpq.org>
L: linux-fbdev@vger.kernel.org
@ -9421,14 +9425,6 @@ L: linux-pm@vger.kernel.org
S: Supported
F: drivers/cpufreq/intel_pstate.c
INTEL RDMA RNIC DRIVER
M: Faisal Latif <faisal.latif@intel.com>
M: Shiraz Saleem <shiraz.saleem@intel.com>
L: linux-rdma@vger.kernel.org
S: Supported
F: drivers/infiniband/hw/i40iw/
F: include/uapi/rdma/i40iw-abi.h
INTEL SCU DRIVERS
M: Mika Westerberg <mika.westerberg@linux.intel.com>
S: Maintained


@ -92,7 +92,7 @@ static int rnbd_clt_set_dev_attr(struct rnbd_clt_dev *dev,
dev->fua = !!(rsp->cache_policy & RNBD_FUA);
dev->max_hw_sectors = sess->max_io_size / SECTOR_SIZE;
dev->max_segments = BMAX_SEGMENTS;
dev->max_segments = sess->max_segments;
return 0;
}
@ -1292,7 +1292,7 @@ find_and_get_or_create_sess(const char *sessname,
sess->rtrs = rtrs_clt_open(&rtrs_ops, sessname,
paths, path_cnt, port_nr,
0, /* Do not use pdu of rtrs */
RECONNECT_DELAY, BMAX_SEGMENTS,
RECONNECT_DELAY,
MAX_RECONNECTS, nr_poll_queues);
if (IS_ERR(sess->rtrs)) {
err = PTR_ERR(sess->rtrs);
@ -1306,6 +1306,7 @@ find_and_get_or_create_sess(const char *sessname,
sess->max_io_size = attrs.max_io_size;
sess->queue_depth = attrs.queue_depth;
sess->nr_poll_queues = nr_poll_queues;
sess->max_segments = attrs.max_segments;
err = setup_mq_tags(sess);
if (err)


@ -20,10 +20,6 @@
#include "rnbd-proto.h"
#include "rnbd-log.h"
/* Max. number of segments per IO request, Mellanox Connect X ~ Connect X5,
* choose minimial 30 for all, minus 1 for internal protocol, so 29.
*/
#define BMAX_SEGMENTS 29
/* time in seconds between reconnect tries, default to 30 s */
#define RECONNECT_DELAY 30
/*
@ -89,6 +85,7 @@ struct rnbd_clt_session {
atomic_t busy;
size_t queue_depth;
u32 max_io_size;
u32 max_segments;
struct blk_mq_tag_set tag_set;
u32 nr_poll_queues;
struct mutex lock; /* protects state and devs_list */


@ -82,7 +82,7 @@ source "drivers/infiniband/hw/mthca/Kconfig"
source "drivers/infiniband/hw/qib/Kconfig"
source "drivers/infiniband/hw/cxgb4/Kconfig"
source "drivers/infiniband/hw/efa/Kconfig"
source "drivers/infiniband/hw/i40iw/Kconfig"
source "drivers/infiniband/hw/irdma/Kconfig"
source "drivers/infiniband/hw/mlx4/Kconfig"
source "drivers/infiniband/hw/mlx5/Kconfig"
source "drivers/infiniband/hw/ocrdma/Kconfig"


@ -240,7 +240,7 @@ static void free_gid_entry_locked(struct ib_gid_table_entry *entry)
u32 port_num = entry->attr.port_num;
struct ib_gid_table *table = rdma_gid_table(device, port_num);
dev_dbg(&device->dev, "%s port=%u index=%d gid %pI6\n", __func__,
dev_dbg(&device->dev, "%s port=%u index=%u gid %pI6\n", __func__,
port_num, entry->attr.index, entry->attr.gid.raw);
write_lock_irq(&table->rwlock);
@ -323,7 +323,7 @@ static void store_gid_entry(struct ib_gid_table *table,
{
entry->state = GID_TABLE_ENTRY_VALID;
dev_dbg(&entry->attr.device->dev, "%s port=%d index=%d gid %pI6\n",
dev_dbg(&entry->attr.device->dev, "%s port=%u index=%u gid %pI6\n",
__func__, entry->attr.port_num, entry->attr.index,
entry->attr.gid.raw);
@ -354,7 +354,7 @@ static int add_roce_gid(struct ib_gid_table_entry *entry)
int ret;
if (!attr->ndev) {
dev_err(&attr->device->dev, "%s NULL netdev port=%d index=%d\n",
dev_err(&attr->device->dev, "%s NULL netdev port=%u index=%u\n",
__func__, attr->port_num, attr->index);
return -EINVAL;
}
@ -362,7 +362,7 @@ static int add_roce_gid(struct ib_gid_table_entry *entry)
ret = attr->device->ops.add_gid(attr, &entry->context);
if (ret) {
dev_err(&attr->device->dev,
"%s GID add failed port=%d index=%d\n",
"%s GID add failed port=%u index=%u\n",
__func__, attr->port_num, attr->index);
return ret;
}
@ -805,7 +805,7 @@ static void release_gid_table(struct ib_device *device,
continue;
if (kref_read(&table->data_vec[i]->kref) > 1) {
dev_err(&device->dev,
"GID entry ref leak for index %d ref=%d\n", i,
"GID entry ref leak for index %d ref=%u\n", i,
kref_read(&table->data_vec[i]->kref));
leak = true;
}
@ -1069,19 +1069,14 @@ int ib_get_cached_pkey(struct ib_device *device,
}
EXPORT_SYMBOL(ib_get_cached_pkey);
int ib_get_cached_subnet_prefix(struct ib_device *device, u32 port_num,
void ib_get_cached_subnet_prefix(struct ib_device *device, u32 port_num,
u64 *sn_pfx)
{
unsigned long flags;
if (!rdma_is_port_valid(device, port_num))
return -EINVAL;
read_lock_irqsave(&device->cache_lock, flags);
*sn_pfx = device->port_data[port_num].cache.subnet_prefix;
read_unlock_irqrestore(&device->cache_lock, flags);
return 0;
}
EXPORT_SYMBOL(ib_get_cached_subnet_prefix);
@ -1465,10 +1460,12 @@ static int config_non_roce_gid_cache(struct ib_device *device,
}
static int
ib_cache_update(struct ib_device *device, u32 port, bool enforce_security)
ib_cache_update(struct ib_device *device, u32 port, bool update_gids,
bool update_pkeys, bool enforce_security)
{
struct ib_port_attr *tprops = NULL;
struct ib_pkey_cache *pkey_cache = NULL, *old_pkey_cache;
struct ib_pkey_cache *pkey_cache = NULL;
struct ib_pkey_cache *old_pkey_cache = NULL;
int i;
int ret;
@ -1485,14 +1482,16 @@ ib_cache_update(struct ib_device *device, u32 port, bool enforce_security)
goto err;
}
if (!rdma_protocol_roce(device, port)) {
if (!rdma_protocol_roce(device, port) && update_gids) {
ret = config_non_roce_gid_cache(device, port,
tprops->gid_tbl_len);
if (ret)
goto err;
}
if (tprops->pkey_tbl_len) {
update_pkeys &= !!tprops->pkey_tbl_len;
if (update_pkeys) {
pkey_cache = kmalloc(struct_size(pkey_cache, table,
tprops->pkey_tbl_len),
GFP_KERNEL);
@ -1517,9 +1516,10 @@ ib_cache_update(struct ib_device *device, u32 port, bool enforce_security)
write_lock_irq(&device->cache_lock);
old_pkey_cache = device->port_data[port].cache.pkey;
device->port_data[port].cache.pkey = pkey_cache;
if (update_pkeys) {
old_pkey_cache = device->port_data[port].cache.pkey;
device->port_data[port].cache.pkey = pkey_cache;
}
device->port_data[port].cache.lmc = tprops->lmc;
device->port_data[port].cache.port_state = tprops->state;
@ -1551,6 +1551,8 @@ static void ib_cache_event_task(struct work_struct *_work)
* the cache.
*/
ret = ib_cache_update(work->event.device, work->event.element.port_num,
work->event.event == IB_EVENT_GID_CHANGE,
work->event.event == IB_EVENT_PKEY_CHANGE,
work->enforce_security);
/* GID event is notified already for individual GID entries by
@ -1624,7 +1626,7 @@ int ib_cache_setup_one(struct ib_device *device)
return err;
rdma_for_each_port (device, p) {
err = ib_cache_update(device, p, true);
err = ib_cache_update(device, p, true, true, true);
if (err)
return err;
}

File diff suppressed because it is too large


@ -926,25 +926,12 @@ static int cma_init_ud_qp(struct rdma_id_private *id_priv, struct ib_qp *qp)
return ret;
}
static int cma_init_conn_qp(struct rdma_id_private *id_priv, struct ib_qp *qp)
{
struct ib_qp_attr qp_attr;
int qp_attr_mask, ret;
qp_attr.qp_state = IB_QPS_INIT;
ret = rdma_init_qp_attr(&id_priv->id, &qp_attr, &qp_attr_mask);
if (ret)
return ret;
return ib_modify_qp(qp, &qp_attr, qp_attr_mask);
}
int rdma_create_qp(struct rdma_cm_id *id, struct ib_pd *pd,
struct ib_qp_init_attr *qp_init_attr)
{
struct rdma_id_private *id_priv;
struct ib_qp *qp;
int ret;
int ret = 0;
id_priv = container_of(id, struct rdma_id_private, id);
if (id->device != pd->device) {
@ -961,8 +948,6 @@ int rdma_create_qp(struct rdma_cm_id *id, struct ib_pd *pd,
if (id->qp_type == IB_QPT_UD)
ret = cma_init_ud_qp(id_priv, qp);
else
ret = cma_init_conn_qp(id_priv, qp);
if (ret)
goto out_destroy;
@ -1852,6 +1837,7 @@ static void _destroy_id(struct rdma_id_private *id_priv,
{
cma_cancel_operation(id_priv, state);
rdma_restrack_del(&id_priv->res);
if (id_priv->cma_dev) {
if (rdma_cap_ib_cm(id_priv->id.device, 1)) {
if (id_priv->cm_id.ib)
@ -1861,7 +1847,6 @@ static void _destroy_id(struct rdma_id_private *id_priv,
iw_destroy_cm_id(id_priv->cm_id.iw);
}
cma_leave_mc_groups(id_priv);
rdma_restrack_del(&id_priv->res);
cma_release_dev(id_priv);
}
@ -2472,8 +2457,10 @@ static int cma_iw_listen(struct rdma_id_private *id_priv, int backlog)
if (IS_ERR(id))
return PTR_ERR(id);
mutex_lock(&id_priv->qp_mutex);
id->tos = id_priv->tos;
id->tos_set = id_priv->tos_set;
mutex_unlock(&id_priv->qp_mutex);
id->afonly = id_priv->afonly;
id_priv->cm_id.iw = id;
@ -2534,8 +2521,10 @@ static int cma_listen_on_dev(struct rdma_id_private *id_priv,
cma_id_get(id_priv);
dev_id_priv->internal_id = 1;
dev_id_priv->afonly = id_priv->afonly;
mutex_lock(&id_priv->qp_mutex);
dev_id_priv->tos_set = id_priv->tos_set;
dev_id_priv->tos = id_priv->tos;
mutex_unlock(&id_priv->qp_mutex);
ret = rdma_listen(&dev_id_priv->id, id_priv->backlog);
if (ret)
@ -2582,8 +2571,10 @@ void rdma_set_service_type(struct rdma_cm_id *id, int tos)
struct rdma_id_private *id_priv;
id_priv = container_of(id, struct rdma_id_private, id);
mutex_lock(&id_priv->qp_mutex);
id_priv->tos = (u8) tos;
id_priv->tos_set = true;
mutex_unlock(&id_priv->qp_mutex);
}
EXPORT_SYMBOL(rdma_set_service_type);
@ -2610,8 +2601,10 @@ int rdma_set_ack_timeout(struct rdma_cm_id *id, u8 timeout)
return -EINVAL;
id_priv = container_of(id, struct rdma_id_private, id);
mutex_lock(&id_priv->qp_mutex);
id_priv->timeout = timeout;
id_priv->timeout_set = true;
mutex_unlock(&id_priv->qp_mutex);
return 0;
}
@ -2647,8 +2640,10 @@ int rdma_set_min_rnr_timer(struct rdma_cm_id *id, u8 min_rnr_timer)
return -EINVAL;
id_priv = container_of(id, struct rdma_id_private, id);
mutex_lock(&id_priv->qp_mutex);
id_priv->min_rnr_timer = min_rnr_timer;
id_priv->min_rnr_timer_set = true;
mutex_unlock(&id_priv->qp_mutex);
return 0;
}
@ -2819,7 +2814,8 @@ static int cma_resolve_ib_route(struct rdma_id_private *id_priv,
cma_init_resolve_route_work(work, id_priv);
route->path_rec = kmalloc(sizeof *route->path_rec, GFP_KERNEL);
if (!route->path_rec)
route->path_rec = kmalloc(sizeof *route->path_rec, GFP_KERNEL);
if (!route->path_rec) {
ret = -ENOMEM;
goto err1;
@ -3034,8 +3030,11 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
u8 default_roce_tos = id_priv->cma_dev->default_roce_tos[id_priv->id.port_num -
rdma_start_port(id_priv->cma_dev->device)];
u8 tos = id_priv->tos_set ? id_priv->tos : default_roce_tos;
u8 tos;
mutex_lock(&id_priv->qp_mutex);
tos = id_priv->tos_set ? id_priv->tos : default_roce_tos;
mutex_unlock(&id_priv->qp_mutex);
work = kzalloc(sizeof *work, GFP_KERNEL);
if (!work)
@ -3082,8 +3081,12 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
* PacketLifeTime = local ACK timeout/2
* as a reasonable approximation for RoCE networks.
*/
route->path_rec->packet_life_time = id_priv->timeout_set ?
id_priv->timeout - 1 : CMA_IBOE_PACKET_LIFETIME;
mutex_lock(&id_priv->qp_mutex);
if (id_priv->timeout_set && id_priv->timeout)
route->path_rec->packet_life_time = id_priv->timeout - 1;
else
route->path_rec->packet_life_time = CMA_IBOE_PACKET_LIFETIME;
mutex_unlock(&id_priv->qp_mutex);
if (!route->path_rec->mtu) {
ret = -EINVAL;
@ -4107,8 +4110,11 @@ static int cma_connect_iw(struct rdma_id_private *id_priv,
if (IS_ERR(cm_id))
return PTR_ERR(cm_id);
mutex_lock(&id_priv->qp_mutex);
cm_id->tos = id_priv->tos;
cm_id->tos_set = id_priv->tos_set;
mutex_unlock(&id_priv->qp_mutex);
id_priv->cm_id.iw = cm_id;
memcpy(&cm_id->local_addr, cma_src_addr(id_priv),


@ -78,8 +78,6 @@ static inline struct rdma_dev_net *rdma_net_to_dev_net(struct net *net)
return net_generic(net, rdma_dev_net_id);
}
int ib_device_register_sysfs(struct ib_device *device);
void ib_device_unregister_sysfs(struct ib_device *device);
int ib_device_rename(struct ib_device *ibdev, const char *name);
int ib_device_set_dim(struct ib_device *ibdev, u8 use_dim);
@ -214,7 +212,7 @@ int ib_nl_handle_ip_res_resp(struct sk_buff *skb,
struct nlmsghdr *nlh,
struct netlink_ext_ack *extack);
int ib_get_cached_subnet_prefix(struct ib_device *device,
void ib_get_cached_subnet_prefix(struct ib_device *device,
u32 port_num,
u64 *sn_pfx);
@ -378,13 +376,16 @@ struct net_device *rdma_read_gid_attr_ndev_rcu(const struct ib_gid_attr *attr);
void ib_free_port_attrs(struct ib_core_device *coredev);
int ib_setup_port_attrs(struct ib_core_device *coredev);
struct rdma_hw_stats *ib_get_hw_stats_port(struct ib_device *ibdev, u32 port_num);
void ib_device_release_hw_stats(struct hw_stats_device_data *data);
int ib_setup_device_attrs(struct ib_device *ibdev);
int rdma_compatdev_set(u8 enable);
int ib_port_register_module_stat(struct ib_device *device, u32 port_num,
struct kobject *kobj, struct kobj_type *ktype,
const char *name);
void ib_port_unregister_module_stat(struct kobject *kobj);
int ib_port_register_client_groups(struct ib_device *ibdev, u32 port_num,
const struct attribute_group **groups);
void ib_port_unregister_client_groups(struct ib_device *ibdev, u32 port_num,
const struct attribute_group **groups);
int ib_device_set_netns_put(struct sk_buff *skb,
struct ib_device *dev, u32 ns_fd);


@ -605,10 +605,10 @@ void rdma_counter_init(struct ib_device *dev)
port_counter->mode.mode = RDMA_COUNTER_MODE_NONE;
mutex_init(&port_counter->lock);
if (!dev->ops.alloc_hw_stats)
if (!dev->ops.alloc_hw_port_stats)
continue;
port_counter->hstats = dev->ops.alloc_hw_stats(dev, port);
port_counter->hstats = dev->ops.alloc_hw_port_stats(dev, port);
if (!port_counter->hstats)
goto fail;
}


@ -491,6 +491,8 @@ static void ib_device_release(struct device *device)
free_netdevs(dev);
WARN_ON(refcount_read(&dev->refcount));
if (dev->hw_stats_data)
ib_device_release_hw_stats(dev->hw_stats_data);
if (dev->port_data) {
ib_cache_release_one(dev);
ib_security_release_port_pkey_list(dev);
@ -584,7 +586,6 @@ struct ib_device *_ib_alloc_device(size_t size)
return NULL;
}
device->groups[0] = &ib_dev_attr_group;
rdma_init_coredev(&device->coredev, device, &init_net);
INIT_LIST_HEAD(&device->event_handler_list);
@ -886,15 +887,8 @@ static void ib_policy_change_task(struct work_struct *work)
rdma_for_each_port (dev, i) {
u64 sp;
int ret = ib_get_cached_subnet_prefix(dev,
i,
&sp);
WARN_ONCE(ret,
"ib_get_cached_subnet_prefix err: %d, this should never happen here\n",
ret);
if (!ret)
ib_security_cache_change(dev, i, sp);
ib_get_cached_subnet_prefix(dev, i, &sp);
ib_security_cache_change(dev, i, sp);
}
}
up_read(&devices_rwsem);
@ -1394,6 +1388,12 @@ int ib_register_device(struct ib_device *device, const char *name,
return ret;
}
device->groups[0] = &ib_dev_attr_group;
device->groups[1] = device->ops.device_group;
ret = ib_setup_device_attrs(device);
if (ret)
goto cache_cleanup;
ib_device_register_rdmacg(device);
rdma_counter_init(device);
@ -1407,7 +1407,7 @@ int ib_register_device(struct ib_device *device, const char *name,
if (ret)
goto cg_cleanup;
ret = ib_device_register_sysfs(device);
ret = ib_setup_port_attrs(&device->coredev);
if (ret) {
dev_warn(&device->dev,
"Couldn't register device with driver model\n");
@ -1449,6 +1449,7 @@ int ib_register_device(struct ib_device *device, const char *name,
cg_cleanup:
dev_set_uevent_suppress(&device->dev, false);
ib_device_unregister_rdmacg(device);
cache_cleanup:
ib_cache_cleanup_one(device);
return ret;
}
@ -1473,7 +1474,7 @@ static void __ib_unregister_device(struct ib_device *ib_dev)
/* Expedite removing unregistered pointers from the hash table */
free_netdevs(ib_dev);
ib_device_unregister_sysfs(ib_dev);
ib_free_port_attrs(&ib_dev->coredev);
device_del(&ib_dev->dev);
ib_device_unregister_rdmacg(ib_dev);
ib_cache_cleanup_one(ib_dev);
@ -1691,13 +1692,11 @@ int ib_device_set_netns_put(struct sk_buff *skb,
}
/*
* Currently supported only for those providers which support
* disassociation and don't do port specific sysfs init. Once a
* port_cleanup infrastructure is implemented, this limitation will be
* removed.
* All the ib_clients, including uverbs, are reset when the namespace is
* changed and this cannot be blocked waiting for userspace to do
* something, so disassociation is mandatory.
*/
if (!dev->ops.disassociate_ucontext || dev->ops.init_port ||
ib_devices_shared_netns) {
if (!dev->ops.disassociate_ucontext || ib_devices_shared_netns) {
ret = -EOPNOTSUPP;
goto ns_err;
}
@ -2595,7 +2594,8 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
SET_DEVICE_OP(dev_ops, add_gid);
SET_DEVICE_OP(dev_ops, advise_mr);
SET_DEVICE_OP(dev_ops, alloc_dm);
SET_DEVICE_OP(dev_ops, alloc_hw_stats);
SET_DEVICE_OP(dev_ops, alloc_hw_device_stats);
SET_DEVICE_OP(dev_ops, alloc_hw_port_stats);
SET_DEVICE_OP(dev_ops, alloc_mr);
SET_DEVICE_OP(dev_ops, alloc_mr_integrity);
SET_DEVICE_OP(dev_ops, alloc_mw);
@ -2637,6 +2637,7 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
SET_DEVICE_OP(dev_ops, destroy_rwq_ind_table);
SET_DEVICE_OP(dev_ops, destroy_srq);
SET_DEVICE_OP(dev_ops, destroy_wq);
SET_DEVICE_OP(dev_ops, device_group);
SET_DEVICE_OP(dev_ops, detach_mcast);
SET_DEVICE_OP(dev_ops, disassociate_ucontext);
SET_DEVICE_OP(dev_ops, drain_rq);
@ -2660,7 +2661,6 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
SET_DEVICE_OP(dev_ops, get_vf_config);
SET_DEVICE_OP(dev_ops, get_vf_guid);
SET_DEVICE_OP(dev_ops, get_vf_stats);
SET_DEVICE_OP(dev_ops, init_port);
SET_DEVICE_OP(dev_ops, iw_accept);
SET_DEVICE_OP(dev_ops, iw_add_ref);
SET_DEVICE_OP(dev_ops, iw_connect);
@ -2683,6 +2683,7 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops)
SET_DEVICE_OP(dev_ops, modify_wq);
SET_DEVICE_OP(dev_ops, peek_cq);
SET_DEVICE_OP(dev_ops, poll_cq);
SET_DEVICE_OP(dev_ops, port_groups);
SET_DEVICE_OP(dev_ops, post_recv);
SET_DEVICE_OP(dev_ops, post_send);
SET_DEVICE_OP(dev_ops, post_srq_recv);


@ -211,8 +211,7 @@ static void free_cm_id(struct iwcm_id_private *cm_id_priv)
*/
static int iwcm_deref_id(struct iwcm_id_private *cm_id_priv)
{
BUG_ON(atomic_read(&cm_id_priv->refcount)==0);
if (atomic_dec_and_test(&cm_id_priv->refcount)) {
if (refcount_dec_and_test(&cm_id_priv->refcount)) {
BUG_ON(!list_empty(&cm_id_priv->work_list));
free_cm_id(cm_id_priv);
return 1;
@ -225,7 +224,7 @@ static void add_ref(struct iw_cm_id *cm_id)
{
struct iwcm_id_private *cm_id_priv;
cm_id_priv = container_of(cm_id, struct iwcm_id_private, id);
atomic_inc(&cm_id_priv->refcount);
refcount_inc(&cm_id_priv->refcount);
}
static void rem_ref(struct iw_cm_id *cm_id)
@ -257,7 +256,7 @@ struct iw_cm_id *iw_create_cm_id(struct ib_device *device,
cm_id_priv->id.add_ref = add_ref;
cm_id_priv->id.rem_ref = rem_ref;
spin_lock_init(&cm_id_priv->lock);
atomic_set(&cm_id_priv->refcount, 1);
refcount_set(&cm_id_priv->refcount, 1);
init_waitqueue_head(&cm_id_priv->connect_wait);
init_completion(&cm_id_priv->destroy_comp);
INIT_LIST_HEAD(&cm_id_priv->work_list);
@ -1094,7 +1093,7 @@ static int cm_event_handler(struct iw_cm_id *cm_id,
}
}
atomic_inc(&cm_id_priv->refcount);
refcount_inc(&cm_id_priv->refcount);
if (list_empty(&cm_id_priv->work_list)) {
list_add_tail(&work->list, &cm_id_priv->work_list);
queue_work(iwcm_wq, &work->work);


@ -52,7 +52,7 @@ struct iwcm_id_private {
wait_queue_head_t connect_wait;
struct list_head work_list;
spinlock_t lock;
atomic_t refcount;
refcount_t refcount;
struct list_head work_free_list;
};


@ -123,7 +123,7 @@ int iwpm_register_pid(struct iwpm_dev_data *pm_msg, u8 nl_client)
ret = iwpm_wait_complete_req(nlmsg_request);
return ret;
pid_query_error:
pr_info("%s: %s (client = %d)\n", __func__, err_str, nl_client);
pr_info("%s: %s (client = %u)\n", __func__, err_str, nl_client);
dev_kfree_skb(skb);
if (nlmsg_request)
iwpm_free_nlmsg_request(&nlmsg_request->kref);
@ -211,7 +211,7 @@ int iwpm_add_mapping(struct iwpm_sa_data *pm_msg, u8 nl_client)
ret = iwpm_wait_complete_req(nlmsg_request);
return ret;
add_mapping_error:
pr_info("%s: %s (client = %d)\n", __func__, err_str, nl_client);
pr_info("%s: %s (client = %u)\n", __func__, err_str, nl_client);
add_mapping_error_nowarn:
dev_kfree_skb(skb);
if (nlmsg_request)
@ -304,7 +304,7 @@ int iwpm_add_and_query_mapping(struct iwpm_sa_data *pm_msg, u8 nl_client)
ret = iwpm_wait_complete_req(nlmsg_request);
return ret;
query_mapping_error:
pr_info("%s: %s (client = %d)\n", __func__, err_str, nl_client);
pr_info("%s: %s (client = %u)\n", __func__, err_str, nl_client);
query_mapping_error_nowarn:
dev_kfree_skb(skb);
if (nlmsg_request)
@ -372,7 +372,7 @@ int iwpm_remove_mapping(struct sockaddr_storage *local_addr, u8 nl_client)
"remove_mapping: Local sockaddr:");
return 0;
remove_mapping_error:
pr_info("%s: %s (client = %d)\n", __func__, err_str, nl_client);
pr_info("%s: %s (client = %u)\n", __func__, err_str, nl_client);
if (skb)
dev_kfree_skb_any(skb);
return ret;
@ -431,7 +431,7 @@ int iwpm_register_pid_cb(struct sk_buff *skb, struct netlink_callback *cb)
strcmp(iwpm_ulib_name, iwpm_name) ||
iwpm_version < IWPM_UABI_VERSION_MIN) {
pr_info("%s: Incorrect info (dev = %s name = %s version = %d)\n",
pr_info("%s: Incorrect info (dev = %s name = %s version = %u)\n",
__func__, dev_name, iwpm_name, iwpm_version);
nlmsg_request->err_code = IWPM_USER_LIB_INFO_ERR;
goto register_pid_response_exit;
@ -439,7 +439,7 @@ int iwpm_register_pid_cb(struct sk_buff *skb, struct netlink_callback *cb)
iwpm_user_pid = cb->nlh->nlmsg_pid;
iwpm_ulib_version = iwpm_version;
if (iwpm_ulib_version < IWPM_UABI_VERSION)
pr_warn_once("%s: Down level iwpmd/pid %u. Continuing...",
pr_warn_once("%s: Down level iwpmd/pid %d. Continuing...",
__func__, iwpm_user_pid);
atomic_set(&echo_nlmsg_seq, cb->nlh->nlmsg_seq);
pr_debug("%s: iWarp Port Mapper (pid = %d) is available!\n",
@ -650,7 +650,7 @@ int iwpm_remote_info_cb(struct sk_buff *skb, struct netlink_callback *cb)
nl_client = RDMA_NL_GET_CLIENT(cb->nlh->nlmsg_type);
if (!iwpm_valid_client(nl_client)) {
pr_info("%s: Invalid port mapper client = %d\n",
pr_info("%s: Invalid port mapper client = %u\n",
__func__, nl_client);
return ret;
}
@ -731,13 +731,13 @@ int iwpm_mapping_info_cb(struct sk_buff *skb, struct netlink_callback *cb)
iwpm_version = nla_get_u16(nltb[IWPM_NLA_MAPINFO_ULIB_VER]);
if (strcmp(iwpm_ulib_name, iwpm_name) ||
iwpm_version < IWPM_UABI_VERSION_MIN) {
pr_info("%s: Invalid port mapper name = %s version = %d\n",
pr_info("%s: Invalid port mapper name = %s version = %u\n",
__func__, iwpm_name, iwpm_version);
return ret;
}
nl_client = RDMA_NL_GET_CLIENT(cb->nlh->nlmsg_type);
if (!iwpm_valid_client(nl_client)) {
pr_info("%s: Invalid port mapper client = %d\n",
pr_info("%s: Invalid port mapper client = %u\n",
__func__, nl_client);
return ret;
}
@ -746,7 +746,7 @@ int iwpm_mapping_info_cb(struct sk_buff *skb, struct netlink_callback *cb)
iwpm_user_pid = cb->nlh->nlmsg_pid;
if (iwpm_ulib_version < IWPM_UABI_VERSION)
pr_warn_once("%s: Down level iwpmd/pid %u. Continuing...",
pr_warn_once("%s: Down level iwpmd/pid %d. Continuing...",
__func__, iwpm_user_pid);
if (!iwpm_mapinfo_available())
@ -864,7 +864,7 @@ int iwpm_hello_cb(struct sk_buff *skb, struct netlink_callback *cb)
abi_version = nla_get_u16(nltb[IWPM_NLA_HELLO_ABI_VERSION]);
nl_client = RDMA_NL_GET_CLIENT(cb->nlh->nlmsg_type);
if (!iwpm_valid_client(nl_client)) {
pr_info("%s: Invalid port mapper client = %d\n",
pr_info("%s: Invalid port mapper client = %u\n",
__func__, nl_client);
return ret;
}


@ -61,7 +61,7 @@ int iwpm_init(u8 nl_client)
{
int ret = 0;
mutex_lock(&iwpm_admin_lock);
if (atomic_read(&iwpm_admin.refcount) == 0) {
if (!refcount_read(&iwpm_admin.refcount)) {
iwpm_hash_bucket = kcalloc(IWPM_MAPINFO_HASH_SIZE,
sizeof(struct hlist_head),
GFP_KERNEL);
@ -77,8 +77,12 @@ int iwpm_init(u8 nl_client)
ret = -ENOMEM;
goto init_exit;
}
refcount_set(&iwpm_admin.refcount, 1);
} else {
refcount_inc(&iwpm_admin.refcount);
}
atomic_inc(&iwpm_admin.refcount);
init_exit:
mutex_unlock(&iwpm_admin_lock);
if (!ret) {
@ -105,12 +109,12 @@ int iwpm_exit(u8 nl_client)
if (!iwpm_valid_client(nl_client))
return -EINVAL;
mutex_lock(&iwpm_admin_lock);
if (atomic_read(&iwpm_admin.refcount) == 0) {
if (!refcount_read(&iwpm_admin.refcount)) {
mutex_unlock(&iwpm_admin_lock);
pr_err("%s Incorrect usage - negative refcount\n", __func__);
return -EINVAL;
}
if (atomic_dec_and_test(&iwpm_admin.refcount)) {
if (refcount_dec_and_test(&iwpm_admin.refcount)) {
free_hash_bucket();
free_reminfo_bucket();
pr_debug("%s: Resources are destroyed\n", __func__);
@ -303,7 +307,7 @@ int iwpm_get_remote_info(struct sockaddr_storage *mapped_loc_addr,
int ret = -EINVAL;
if (!iwpm_valid_client(nl_client)) {
pr_info("%s: Invalid client = %d\n", __func__, nl_client);
pr_info("%s: Invalid client = %u\n", __func__, nl_client);
return ret;
}
spin_lock_irqsave(&iwpm_reminfo_lock, flags);
@ -651,7 +655,7 @@ static int send_mapinfo_num(u32 mapping_num, u8 nl_client, int iwpm_pid)
err_str = "Unable to send a nlmsg";
goto mapinfo_num_error;
}
pr_debug("%s: Sent mapping number = %d\n", __func__, mapping_num);
pr_debug("%s: Sent mapping number = %u\n", __func__, mapping_num);
return 0;
mapinfo_num_error:
pr_info("%s: %s\n", __func__, err_str);


@ -90,7 +90,7 @@ struct iwpm_remote_info {
};
struct iwpm_admin_data {
atomic_t refcount;
refcount_t refcount;
atomic_t nlmsg_seq;
int client_list[RDMA_NL_NUM_CLIENTS];
u32 reg_list[RDMA_NL_NUM_CLIENTS];
@ -183,7 +183,7 @@ u32 iwpm_check_registration(u8 nl_client, u32 reg);
void iwpm_set_registration(u8 nl_client, u32 reg);
/**
* iwpm_get_registration
* iwpm_get_registration - Get the client registration
* @nl_client: The index of the netlink client
*
* Returns the client registration type


@ -351,7 +351,7 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
/* Validate device and port */
port_priv = ib_get_mad_port(device, port_num);
if (!port_priv) {
dev_dbg_ratelimited(&device->dev, "%s: Invalid port %d\n",
dev_dbg_ratelimited(&device->dev, "%s: Invalid port %u\n",
__func__, port_num);
ret = ERR_PTR(-ENODEV);
goto error1;
@ -1626,7 +1626,7 @@ static int validate_mad(const struct ib_mad_hdr *mad_hdr,
/* Make sure MAD base version is understood */
if (mad_hdr->base_version != IB_MGMT_BASE_VERSION &&
(!opa || mad_hdr->base_version != OPA_MGMT_BASE_VERSION)) {
pr_err("MAD received with unsupported base version %d %s\n",
pr_err("MAD received with unsupported base version %u %s\n",
mad_hdr->base_version, opa ? "(opa)" : "");
goto out;
}
@ -2459,16 +2459,18 @@ find_send_wr(struct ib_mad_agent_private *mad_agent_priv,
return NULL;
}
int ib_modify_mad(struct ib_mad_agent *mad_agent,
struct ib_mad_send_buf *send_buf, u32 timeout_ms)
int ib_modify_mad(struct ib_mad_send_buf *send_buf, u32 timeout_ms)
{
struct ib_mad_agent_private *mad_agent_priv;
struct ib_mad_send_wr_private *mad_send_wr;
unsigned long flags;
int active;
mad_agent_priv = container_of(mad_agent, struct ib_mad_agent_private,
agent);
if (!send_buf)
return -EINVAL;
mad_agent_priv = container_of(send_buf->mad_agent,
struct ib_mad_agent_private, agent);
spin_lock_irqsave(&mad_agent_priv->lock, flags);
mad_send_wr = find_send_wr(mad_agent_priv, send_buf);
if (!mad_send_wr || mad_send_wr->status != IB_WC_SUCCESS) {
@ -2493,13 +2495,6 @@ int ib_modify_mad(struct ib_mad_agent *mad_agent,
}
EXPORT_SYMBOL(ib_modify_mad);
void ib_cancel_mad(struct ib_mad_agent *mad_agent,
struct ib_mad_send_buf *send_buf)
{
ib_modify_mad(mad_agent, send_buf, 0);
}
EXPORT_SYMBOL(ib_cancel_mad);
static void local_completions(struct work_struct *work)
{
struct ib_mad_agent_private *mad_agent_priv;
@ -2872,7 +2867,7 @@ static void qp_event_handler(struct ib_event *event, void *qp_context)
/* It's worse than that! He's dead, Jim! */
dev_err(&qp_info->port_priv->device->dev,
"Fatal error (%d) on MAD QP (%d)\n",
"Fatal error (%d) on MAD QP (%u)\n",
event->event, qp_info->qp->qp_num);
}
@ -3130,9 +3125,9 @@ static void ib_mad_remove_device(struct ib_device *device, void *client_data)
if (ib_agent_port_close(device, i))
dev_err(&device->dev,
"Couldn't close port %d for agents\n", i);
"Couldn't close port %u for agents\n", i);
if (ib_mad_port_close(device, i))
dev_err(&device->dev, "Couldn't close port %d\n", i);
dev_err(&device->dev, "Couldn't close port %u\n", i);
}
}


@ -115,7 +115,6 @@ struct ib_mad_snoop_private {
struct ib_mad_qp_info *qp_info;
int snoop_index;
int mad_snoop_flags;
atomic_t refcount;
struct completion comp;
};


@ -61,7 +61,7 @@ struct mcast_port {
struct mcast_device *dev;
spinlock_t lock;
struct rb_root table;
atomic_t refcount;
refcount_t refcount;
struct completion comp;
u32 port_num;
};
@ -117,7 +117,7 @@ struct mcast_member {
struct mcast_group *group;
struct list_head list;
enum mcast_state state;
atomic_t refcount;
refcount_t refcount;
struct completion comp;
};
@ -178,7 +178,7 @@ static struct mcast_group *mcast_insert(struct mcast_port *port,
static void deref_port(struct mcast_port *port)
{
if (atomic_dec_and_test(&port->refcount))
if (refcount_dec_and_test(&port->refcount))
complete(&port->comp);
}
@ -199,7 +199,7 @@ static void release_group(struct mcast_group *group)
static void deref_member(struct mcast_member *member)
{
if (atomic_dec_and_test(&member->refcount))
if (refcount_dec_and_test(&member->refcount))
complete(&member->comp);
}
@ -401,7 +401,7 @@ static void process_group_error(struct mcast_group *group)
while (!list_empty(&group->active_list)) {
member = list_entry(group->active_list.next,
struct mcast_member, list);
atomic_inc(&member->refcount);
refcount_inc(&member->refcount);
list_del_init(&member->list);
adjust_membership(group, member->multicast.rec.join_state, -1);
member->state = MCAST_ERROR;
@ -445,7 +445,7 @@ static void mcast_work_handler(struct work_struct *work)
struct mcast_member, list);
multicast = &member->multicast;
join_state = multicast->rec.join_state;
atomic_inc(&member->refcount);
refcount_inc(&member->refcount);
if (join_state == (group->rec.join_state & join_state)) {
status = cmp_rec(&group->rec, &multicast->rec,
@ -497,7 +497,7 @@ static void process_join_error(struct mcast_group *group, int status)
member = list_entry(group->pending_list.next,
struct mcast_member, list);
if (group->last_join == member) {
atomic_inc(&member->refcount);
refcount_inc(&member->refcount);
list_del_init(&member->list);
spin_unlock_irq(&group->lock);
ret = member->multicast.callback(status, &member->multicast);
@ -589,7 +589,7 @@ static struct mcast_group *acquire_group(struct mcast_port *port,
kfree(group);
group = cur_group;
} else
atomic_inc(&port->refcount);
refcount_inc(&port->refcount);
found:
atomic_inc(&group->refcount);
spin_unlock_irqrestore(&port->lock, flags);
@ -632,7 +632,7 @@ ib_sa_join_multicast(struct ib_sa_client *client,
member->multicast.callback = callback;
member->multicast.context = context;
init_completion(&member->comp);
atomic_set(&member->refcount, 1);
refcount_set(&member->refcount, 1);
member->state = MCAST_JOINING;
member->group = acquire_group(&dev->port[port_num - dev->start_port],
@ -840,7 +840,7 @@ static int mcast_add_one(struct ib_device *device)
spin_lock_init(&port->lock);
port->table = RB_ROOT;
init_completion(&port->comp);
atomic_set(&port->refcount, 1);
refcount_set(&port->refcount, 1);
++count;
}


@ -98,7 +98,7 @@ get_cb_table(const struct sk_buff *skb, unsigned int type, unsigned int op)
*/
up_read(&rdma_nl_types[type].sem);
request_module("rdma-netlink-subsys-%d", type);
request_module("rdma-netlink-subsys-%u", type);
down_read(&rdma_nl_types[type].sem);
cb_table = READ_ONCE(rdma_nl_types[type].cb_table);


@ -2060,13 +2060,14 @@ static int stat_get_doit_default_counter(struct sk_buff *skb,
if (!device)
return -EINVAL;
if (!device->ops.alloc_hw_stats || !device->ops.get_hw_stats) {
if (!device->ops.alloc_hw_port_stats || !device->ops.get_hw_stats) {
ret = -EINVAL;
goto err;
}
port = nla_get_u32(tb[RDMA_NLDEV_ATTR_PORT_INDEX]);
if (!rdma_is_port_valid(device, port)) {
stats = ib_get_hw_stats_port(device, port);
if (!stats) {
ret = -EINVAL;
goto err;
}
@ -2088,11 +2089,6 @@ static int stat_get_doit_default_counter(struct sk_buff *skb,
goto err_msg;
}
stats = device->port_data ? device->port_data[port].hw_stats : NULL;
if (stats == NULL) {
ret = -EINVAL;
goto err_msg;
}
mutex_lock(&stats->lock);
num_cnts = device->ops.get_hw_stats(device, stats, port, 0);


@ -186,12 +186,13 @@ is_eth_port_inactive_slave_filter(struct ib_device *ib_dev, u32 port,
return res;
}
/** is_ndev_for_default_gid_filter - Check if a given netdevice
/**
* is_ndev_for_default_gid_filter - Check if a given netdevice
* can be considered for default GIDs or not.
* @ib_dev: IB device to check
* @port: Port to consider for adding default GID
* @rdma_ndev: rdma netdevice pointer
* @cookie_ndev: Netdevice to consider to form a default GID
* @cookie: Netdevice to consider to form a default GID
*
* is_ndev_for_default_gid_filter() returns true if a given netdevice can be
* considered for deriving default RoCE GID, returns false otherwise.


@ -389,7 +389,7 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
int count = 0, ret;
if (sg_cnt > pages_per_mr || prot_sg_cnt > pages_per_mr) {
pr_err("SG count too large: sg_cnt=%d, prot_sg_cnt=%d, pages_per_mr=%d\n",
pr_err("SG count too large: sg_cnt=%u, prot_sg_cnt=%u, pages_per_mr=%u\n",
sg_cnt, prot_sg_cnt, pages_per_mr);
return -EINVAL;
}
@ -429,7 +429,7 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
ret = ib_map_mr_sg_pi(ctx->reg->mr, sg, sg_cnt, NULL, prot_sg,
prot_sg_cnt, NULL, SZ_4K);
if (unlikely(ret)) {
pr_err("failed to map PI sg (%d)\n", sg_cnt + prot_sg_cnt);
pr_err("failed to map PI sg (%u)\n", sg_cnt + prot_sg_cnt);
goto out_destroy_sig_mr;
}
@ -714,7 +714,7 @@ int rdma_rw_init_mrs(struct ib_qp *qp, struct ib_qp_init_attr *attr)
IB_MR_TYPE_MEM_REG,
max_num_sg, 0);
if (ret) {
pr_err("%s: failed to allocated %d MRs\n",
pr_err("%s: failed to allocated %u MRs\n",
__func__, nr_mrs);
return ret;
}
@ -724,7 +724,7 @@ int rdma_rw_init_mrs(struct ib_qp *qp, struct ib_qp_init_attr *attr)
ret = ib_mr_pool_init(qp, &qp->sig_mrs, nr_sig_mrs,
IB_MR_TYPE_INTEGRITY, max_num_sg, max_num_sg);
if (ret) {
pr_err("%s: failed to allocated %d SIG MRs\n",
pr_err("%s: failed to allocated %u SIG MRs\n",
__func__, nr_sig_mrs);
goto out_free_rdma_mrs;
}


@ -1172,7 +1172,6 @@ EXPORT_SYMBOL(ib_sa_unregister_client);
void ib_sa_cancel_query(int id, struct ib_sa_query *query)
{
unsigned long flags;
struct ib_mad_agent *agent;
struct ib_mad_send_buf *mad_buf;
xa_lock_irqsave(&queries, flags);
@ -1180,7 +1179,6 @@ void ib_sa_cancel_query(int id, struct ib_sa_query *query)
xa_unlock_irqrestore(&queries, flags);
return;
}
agent = query->port->agent;
mad_buf = query->mad_buf;
xa_unlock_irqrestore(&queries, flags);
@ -1190,7 +1188,7 @@ void ib_sa_cancel_query(int id, struct ib_sa_query *query)
* sent to the MAD layer and has to be cancelled from there.
*/
if (!ib_nl_cancel_request(query))
ib_cancel_mad(agent, mad_buf);
ib_cancel_mad(mad_buf);
}
EXPORT_SYMBOL(ib_sa_cancel_query);
@ -1444,8 +1442,7 @@ enum opa_pr_supported {
*/
static int opa_pr_query_possible(struct ib_sa_client *client,
struct ib_sa_device *sa_dev,
struct ib_device *device, u32 port_num,
struct sa_path_rec *rec)
struct ib_device *device, u32 port_num)
{
struct ib_port_attr port_attr;
@ -1567,8 +1564,7 @@ int ib_sa_path_rec_get(struct ib_sa_client *client,
query->sa_query.port = port;
if (rec->rec_type == SA_PATH_REC_TYPE_OPA) {
status = opa_pr_query_possible(client, sa_dev, device, port_num,
rec);
status = opa_pr_query_possible(client, sa_dev, device, port_num);
if (status == PR_NOT_SUPPORTED) {
ret = -EINVAL;
goto err1;


@ -72,7 +72,7 @@ static int get_pkey_and_subnet_prefix(struct ib_port_pkey *pp,
if (ret)
return ret;
ret = ib_get_cached_subnet_prefix(dev, pp->port_num, subnet_prefix);
ib_get_cached_subnet_prefix(dev, pp->port_num, subnet_prefix);
return ret;
}
@ -586,7 +586,7 @@ int ib_security_modify_qp(struct ib_qp *qp,
WARN_ONCE((qp_attr_mask & IB_QP_PORT &&
rdma_protocol_ib(real_qp->device, qp_attr->port_num) &&
!real_qp->qp_sec),
"%s: QP security is not initialized for IB QP: %d\n",
"%s: QP security is not initialized for IB QP: %u\n",
__func__, real_qp->qp_num);
/* The port/pkey settings are maintained only for the real QP. Open
@ -664,10 +664,7 @@ static int ib_security_pkey_access(struct ib_device *dev,
if (ret)
return ret;
ret = ib_get_cached_subnet_prefix(dev, port_num, &subnet_prefix);
if (ret)
return ret;
ib_get_cached_subnet_prefix(dev, port_num, &subnet_prefix);
return security_ib_pkey_access(sec, subnet_prefix, pkey);
}

File diff suppressed because it is too large


@ -468,8 +468,8 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf,
resp.id = ctx->id;
if (copy_to_user(u64_to_user_ptr(cmd.response),
&resp, sizeof(resp))) {
ucma_destroy_private_ctx(ctx);
return -EFAULT;
ret = -EFAULT;
goto err1;
}
mutex_lock(&file->mut);
@ -1830,13 +1830,12 @@ static struct ib_client rdma_cma_client = {
};
MODULE_ALIAS_RDMA_CLIENT("rdma_cm");
static ssize_t show_abi_version(struct device *dev,
struct device_attribute *attr,
char *buf)
static ssize_t abi_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return sysfs_emit(buf, "%d\n", RDMA_USER_CM_ABI_VERSION);
}
static DEVICE_ATTR(abi_version, S_IRUGO, show_abi_version, NULL);
static DEVICE_ATTR_RO(abi_version);
static int __init ucma_init(void)
{


@ -479,7 +479,7 @@ int ib_ud_header_unpack(void *buf,
buf += IB_LRH_BYTES;
if (header->lrh.link_version != 0) {
pr_warn("Invalid LRH.link_version %d\n",
pr_warn("Invalid LRH.link_version %u\n",
header->lrh.link_version);
return -EINVAL;
}
@ -496,7 +496,7 @@ int ib_ud_header_unpack(void *buf,
buf += IB_GRH_BYTES;
if (header->grh.ip_version != 6) {
pr_warn("Invalid GRH.ip_version %d\n",
pr_warn("Invalid GRH.ip_version %u\n",
header->grh.ip_version);
return -EINVAL;
}
@ -508,7 +508,7 @@ int ib_ud_header_unpack(void *buf,
break;
default:
pr_warn("Invalid LRH.link_next_header %d\n",
pr_warn("Invalid LRH.link_next_header %u\n",
header->lrh.link_next_header);
return -EINVAL;
}
@ -530,7 +530,7 @@ int ib_ud_header_unpack(void *buf,
}
if (header->bth.transport_header_version != 0) {
pr_warn("Invalid BTH.transport_header_version %d\n",
pr_warn("Invalid BTH.transport_header_version %u\n",
header->bth.transport_header_version);
return -EINVAL;
}


@ -445,7 +445,7 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt,
if (hmm_order + PAGE_SHIFT < page_shift) {
ret = -EINVAL;
ibdev_dbg(umem_odp->umem.ibdev,
"%s: un-expected hmm_order %d, page_shift %d\n",
"%s: un-expected hmm_order %u, page_shift %u\n",
__func__, hmm_order, page_shift);
break;
}


@ -700,7 +700,7 @@ static int ib_umad_reg_agent(struct ib_umad_file *file, void __user *arg,
if (ureq.qpn != 0 && ureq.qpn != 1) {
dev_notice(&file->port->dev,
"%s: invalid QPN %d specified\n", __func__,
"%s: invalid QPN %u specified\n", __func__,
ureq.qpn);
ret = -EINVAL;
goto out;
@ -800,7 +800,7 @@ static int ib_umad_reg_agent2(struct ib_umad_file *file, void __user *arg)
}
if (ureq.qpn != 0 && ureq.qpn != 1) {
dev_notice(&file->port->dev, "%s: invalid QPN %d specified\n",
dev_notice(&file->port->dev, "%s: invalid QPN %u specified\n",
__func__, ureq.qpn);
ret = -EINVAL;
goto out;


@ -97,7 +97,7 @@ ib_uverbs_init_udata_buf_or_null(struct ib_udata *udata,
*/
struct ib_uverbs_device {
atomic_t refcount;
refcount_t refcount;
u32 num_comp_vectors;
struct completion comp;
struct device dev;


@ -3034,12 +3034,29 @@ static int ib_uverbs_ex_modify_wq(struct uverbs_attr_bundle *attrs)
if (!wq)
return -EINVAL;
wq_attr.curr_wq_state = cmd.curr_wq_state;
wq_attr.wq_state = cmd.wq_state;
if (cmd.attr_mask & IB_WQ_FLAGS) {
wq_attr.flags = cmd.flags;
wq_attr.flags_mask = cmd.flags_mask;
}
if (cmd.attr_mask & IB_WQ_CUR_STATE) {
if (cmd.curr_wq_state > IB_WQS_ERR)
return -EINVAL;
wq_attr.curr_wq_state = cmd.curr_wq_state;
} else {
wq_attr.curr_wq_state = wq->state;
}
if (cmd.attr_mask & IB_WQ_STATE) {
if (cmd.wq_state > IB_WQS_ERR)
return -EINVAL;
wq_attr.wq_state = cmd.wq_state;
} else {
wq_attr.wq_state = wq_attr.curr_wq_state;
}
ret = wq->device->ops.modify_wq(wq, &wq_attr, cmd.attr_mask,
&attrs->driver_udata);
rdma_lookup_put_uobject(&wq->uobject->uevent.uobject,
@ -3302,7 +3319,7 @@ static int ib_uverbs_ex_create_flow(struct uverbs_attr_bundle *attrs)
ib_spec += ((union ib_flow_spec *) ib_spec)->size;
}
if (cmd.flow_attr.size || (i != flow_attr->num_of_specs)) {
pr_warn("create flow failed, flow %d: %d bytes left from uverb cmd\n",
pr_warn("create flow failed, flow %d: %u bytes left from uverb cmd\n",
i, cmd.flow_attr.size);
err = -EINVAL;
goto err_free;


@ -197,7 +197,7 @@ void ib_uverbs_release_file(struct kref *ref)
module_put(ib_dev->ops.owner);
srcu_read_unlock(&file->device->disassociate_srcu, srcu_key);
if (atomic_dec_and_test(&file->device->refcount))
if (refcount_dec_and_test(&file->device->refcount))
ib_uverbs_comp_dev(file->device);
if (file->default_async_file)
@ -891,7 +891,7 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
int srcu_key;
dev = container_of(inode->i_cdev, struct ib_uverbs_device, cdev);
if (!atomic_inc_not_zero(&dev->refcount))
if (!refcount_inc_not_zero(&dev->refcount))
return -ENXIO;
get_device(&dev->dev);
@ -955,7 +955,7 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
err:
mutex_unlock(&dev->lists_mutex);
srcu_read_unlock(&dev->disassociate_srcu, srcu_key);
if (atomic_dec_and_test(&dev->refcount))
if (refcount_dec_and_test(&dev->refcount))
ib_uverbs_comp_dev(dev);
put_device(&dev->dev);
@ -1124,7 +1124,7 @@ static int ib_uverbs_add_one(struct ib_device *device)
uverbs_dev->dev.release = ib_uverbs_release_dev;
uverbs_dev->groups[0] = &dev_attr_group;
uverbs_dev->dev.groups = uverbs_dev->groups;
atomic_set(&uverbs_dev->refcount, 1);
refcount_set(&uverbs_dev->refcount, 1);
init_completion(&uverbs_dev->comp);
uverbs_dev->xrcd_tree = RB_ROOT;
mutex_init(&uverbs_dev->xrcd_tree_mutex);
@ -1166,7 +1166,7 @@ static int ib_uverbs_add_one(struct ib_device *device)
err_uapi:
ida_free(&uverbs_ida, devnum);
err:
if (atomic_dec_and_test(&uverbs_dev->refcount))
if (refcount_dec_and_test(&uverbs_dev->refcount))
ib_uverbs_comp_dev(uverbs_dev);
wait_for_completion(&uverbs_dev->comp);
put_device(&uverbs_dev->dev);
@ -1229,7 +1229,7 @@ static void ib_uverbs_remove_one(struct ib_device *device, void *client_data)
wait_clients = 0;
}
if (atomic_dec_and_test(&uverbs_dev->refcount))
if (refcount_dec_and_test(&uverbs_dev->refcount))
ib_uverbs_comp_dev(uverbs_dev);
if (wait_clients)
wait_for_completion(&uverbs_dev->comp);


@ -517,7 +517,7 @@ static void uapi_key_okay(u32 key)
count++;
if (uapi_key_is_attr(key))
count++;
WARN(count != 1, "Bad count %d key=%x", count, key);
WARN(count != 1, "Bad count %u key=%x", count, key);
}
static void uapi_finalize_disable(struct uverbs_api *uapi)


@ -1834,7 +1834,7 @@ int ib_get_eth_speed(struct ib_device *dev, u32 port_num, u16 *speed, u8 *width)
netdev_speed = lksettings.base.speed;
} else {
netdev_speed = SPEED_1000;
pr_warn("%s speed is unknown, defaulting to %d\n", netdev->name,
pr_warn("%s speed is unknown, defaulting to %u\n", netdev->name,
netdev_speed);
}
@ -2445,27 +2445,6 @@ int ib_destroy_wq_user(struct ib_wq *wq, struct ib_udata *udata)
}
EXPORT_SYMBOL(ib_destroy_wq_user);
/**
* ib_modify_wq - Modifies the specified WQ.
* @wq: The WQ to modify.
* @wq_attr: On input, specifies the WQ attributes to modify.
* @wq_attr_mask: A bit-mask used to specify which attributes of the WQ
* are being modified.
* On output, the current values of selected WQ attributes are returned.
*/
int ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr,
u32 wq_attr_mask)
{
int err;
if (!wq->device->ops.modify_wq)
return -EOPNOTSUPP;
err = wq->device->ops.modify_wq(wq, wq_attr, wq_attr_mask, NULL);
return err;
}
EXPORT_SYMBOL(ib_modify_wq);
int ib_check_mr_status(struct ib_mr *mr, u32 check_mask,
struct ib_mr_status *mr_status)
{


@ -3,7 +3,7 @@ obj-$(CONFIG_INFINIBAND_MTHCA) += mthca/
obj-$(CONFIG_INFINIBAND_QIB) += qib/
obj-$(CONFIG_INFINIBAND_CXGB4) += cxgb4/
obj-$(CONFIG_INFINIBAND_EFA) += efa/
obj-$(CONFIG_INFINIBAND_I40IW) += i40iw/
obj-$(CONFIG_INFINIBAND_IRDMA) += irdma/
obj-$(CONFIG_MLX4_INFINIBAND) += mlx4/
obj-$(CONFIG_MLX5_INFINIBAND) += mlx5/
obj-$(CONFIG_INFINIBAND_OCRDMA) += ocrdma/


@ -234,13 +234,10 @@ int bnxt_re_ib_get_hw_stats(struct ib_device *ibdev,
return ARRAY_SIZE(bnxt_re_stat_name);
}
struct rdma_hw_stats *bnxt_re_ib_alloc_hw_stats(struct ib_device *ibdev,
u32 port_num)
struct rdma_hw_stats *bnxt_re_ib_alloc_hw_port_stats(struct ib_device *ibdev,
u32 port_num)
{
BUILD_BUG_ON(ARRAY_SIZE(bnxt_re_stat_name) != BNXT_RE_NUM_COUNTERS);
/* We support only per port stats */
if (!port_num)
return NULL;
return rdma_alloc_hw_stats_struct(bnxt_re_stat_name,
ARRAY_SIZE(bnxt_re_stat_name),


@ -96,8 +96,8 @@ enum bnxt_re_hw_stats {
BNXT_RE_NUM_COUNTERS
};
struct rdma_hw_stats *bnxt_re_ib_alloc_hw_stats(struct ib_device *ibdev,
u32 port_num);
struct rdma_hw_stats *bnxt_re_ib_alloc_hw_port_stats(struct ib_device *ibdev,
u32 port_num);
int bnxt_re_ib_get_hw_stats(struct ib_device *ibdev,
struct rdma_hw_stats *stats,
u32 port, int index);


@ -163,6 +163,10 @@ int bnxt_re_query_device(struct ib_device *ibdev,
ib_attr->max_qp_init_rd_atom = dev_attr->max_qp_init_rd_atom;
ib_attr->atomic_cap = IB_ATOMIC_NONE;
ib_attr->masked_atomic_cap = IB_ATOMIC_NONE;
if (dev_attr->is_atomic) {
ib_attr->atomic_cap = IB_ATOMIC_GLOB;
ib_attr->masked_atomic_cap = IB_ATOMIC_GLOB;
}
ib_attr->max_ee_rd_atom = 0;
ib_attr->max_res_rd_atom = 0;
@ -1098,10 +1102,6 @@ static int bnxt_re_init_rq_attr(struct bnxt_re_qp *qp,
struct bnxt_re_srq *srq;
srq = container_of(init_attr->srq, struct bnxt_re_srq, ib_srq);
if (!srq) {
ibdev_err(&rdev->ibdev, "SRQ not found");
return -EINVAL;
}
qplqp->srq = &srq->qplib_srq;
rq->max_wqe = 0;
} else {
@ -1279,22 +1279,12 @@ static int bnxt_re_init_qp_attr(struct bnxt_re_qp *qp, struct bnxt_re_pd *pd,
/* Setup CQs */
if (init_attr->send_cq) {
cq = container_of(init_attr->send_cq, struct bnxt_re_cq, ib_cq);
if (!cq) {
ibdev_err(&rdev->ibdev, "Send CQ not found");
rc = -EINVAL;
goto out;
}
qplqp->scq = &cq->qplib_cq;
qp->scq = cq;
}
if (init_attr->recv_cq) {
cq = container_of(init_attr->recv_cq, struct bnxt_re_cq, ib_cq);
if (!cq) {
ibdev_err(&rdev->ibdev, "Receive CQ not found");
rc = -EINVAL;
goto out;
}
qplqp->rcq = &cq->qplib_cq;
qp->rcq = cq;
}
@ -3473,10 +3463,6 @@ int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
((struct bnxt_qplib_qp *)
(unsigned long)(cqe->qp_handle),
struct bnxt_re_qp, qplib_qp);
if (!qp) {
ibdev_err(&cq->rdev->ibdev, "POLL CQ : bad QP handle");
continue;
}
wc->qp = &qp->ib_qp;
wc->ex.imm_data = cqe->immdata;
wc->src_qp = cqe->src_qp;
@ -3858,7 +3844,7 @@ int bnxt_re_alloc_ucontext(struct ib_ucontext *ctx, struct ib_udata *udata)
container_of(ctx, struct bnxt_re_ucontext, ib_uctx);
struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
struct bnxt_re_uctx_resp resp;
struct bnxt_re_uctx_resp resp = {};
u32 chip_met_rev_num = 0;
int rc;
@ -3886,15 +3872,15 @@ int bnxt_re_alloc_ucontext(struct ib_ucontext *ctx, struct ib_udata *udata)
chip_met_rev_num |= ((u32)rdev->chip_ctx->chip_metal & 0xFF) <<
BNXT_RE_CHIP_ID0_CHIP_MET_SFT;
resp.chip_id0 = chip_met_rev_num;
/* Future extension of chip info */
resp.chip_id1 = 0;
/*Temp, Use xa_alloc instead */
resp.dev_id = rdev->en_dev->pdev->devfn;
resp.max_qp = rdev->qplib_ctx.qpc_count;
resp.pg_size = PAGE_SIZE;
resp.cqe_sz = sizeof(struct cq_base);
resp.max_cqd = dev_attr->max_cq_wqes;
resp.rsvd = 0;
resp.comp_mask |= BNXT_RE_UCNTX_CMASK_HAVE_MODE;
resp.mode = rdev->chip_ctx->modes.wqe_mode;
rc = ib_copy_to_udata(udata, &resp, min(udata->outlen, sizeof(resp)));
if (rc) {


@ -128,6 +128,9 @@ static int bnxt_re_setup_chip_ctx(struct bnxt_re_dev *rdev, u8 wqe_mode)
rdev->rcfw.res = &rdev->qplib_res;
bnxt_re_set_drv_mode(rdev, wqe_mode);
if (bnxt_qplib_determine_atomics(en_dev->pdev))
ibdev_info(&rdev->ibdev,
"platform doesn't support global atomics.");
return 0;
}
@ -662,7 +665,7 @@ static const struct ib_device_ops bnxt_re_dev_ops = {
.uverbs_abi_ver = BNXT_RE_ABI_VERSION,
.add_gid = bnxt_re_add_gid,
.alloc_hw_stats = bnxt_re_ib_alloc_hw_stats,
.alloc_hw_port_stats = bnxt_re_ib_alloc_hw_port_stats,
.alloc_mr = bnxt_re_alloc_mr,
.alloc_pd = bnxt_re_alloc_pd,
.alloc_ucontext = bnxt_re_alloc_ucontext,
@ -680,6 +683,7 @@ static const struct ib_device_ops bnxt_re_dev_ops = {
.destroy_cq = bnxt_re_destroy_cq,
.destroy_qp = bnxt_re_destroy_qp,
.destroy_srq = bnxt_re_destroy_srq,
.device_group = &bnxt_re_dev_attr_group,
.get_dev_fw_str = bnxt_re_query_fw_str,
.get_dma_mr = bnxt_re_get_dma_mr,
.get_hw_stats = bnxt_re_ib_get_hw_stats,
@ -726,7 +730,6 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
ibdev->dev.parent = &rdev->en_dev->pdev->dev;
ibdev->local_dma_lkey = BNXT_QPLIB_RSVD_LKEY;
rdma_set_device_sysfs_group(ibdev, &bnxt_re_dev_attr_group);
ib_set_device_ops(ibdev, &bnxt_re_dev_ops);
ret = ib_device_set_netdev(&rdev->ibdev, rdev->netdev, 1);
if (ret)
@ -885,12 +888,6 @@ static int bnxt_re_srqn_handler(struct bnxt_qplib_nq *nq,
struct ib_event ib_event;
int rc = 0;
if (!srq) {
ibdev_err(NULL, "%s: SRQ is NULL, SRQN not handled",
ROCE_DRV_MODULE_NAME);
rc = -EINVAL;
goto done;
}
ib_event.device = &srq->rdev->ibdev;
ib_event.element.srq = &srq->ib_srq;
if (event == NQ_SRQ_EVENT_EVENT_SRQ_THRESHOLD_EVENT)
@ -903,7 +900,6 @@ static int bnxt_re_srqn_handler(struct bnxt_qplib_nq *nq,
(*srq->ib_srq.event_handler)(&ib_event,
srq->ib_srq.srq_context);
}
done:
return rc;
}
@ -913,11 +909,6 @@ static int bnxt_re_cqn_handler(struct bnxt_qplib_nq *nq,
struct bnxt_re_cq *cq = container_of(handle, struct bnxt_re_cq,
qplib_cq);
if (!cq) {
ibdev_err(NULL, "%s: CQ is NULL, CQN not handled",
ROCE_DRV_MODULE_NAME);
return -EINVAL;
}
if (cq->ib_cq.comp_handler) {
/* Lock comp_handler? */
(*cq->ib_cq.comp_handler)(&cq->ib_cq, cq->ib_cq.cq_context);


@ -39,6 +39,8 @@
#ifndef __BNXT_QPLIB_FP_H__
#define __BNXT_QPLIB_FP_H__
#include <rdma/bnxt_re-abi.h>
/* Few helper structures temporarily defined here
* should get rid of these when roce_hsi.h is updated
* in original code base


@ -959,3 +959,20 @@ int bnxt_qplib_alloc_res(struct bnxt_qplib_res *res, struct pci_dev *pdev,
bnxt_qplib_free_res(res);
return rc;
}
int bnxt_qplib_determine_atomics(struct pci_dev *dev)
{
int comp;
u16 ctl2;
comp = pci_enable_atomic_ops_to_root(dev,
PCI_EXP_DEVCAP2_ATOMIC_COMP32);
if (comp)
return -EOPNOTSUPP;
comp = pci_enable_atomic_ops_to_root(dev,
PCI_EXP_DEVCAP2_ATOMIC_COMP64);
if (comp)
return -EOPNOTSUPP;
pcie_capability_read_word(dev, PCI_EXP_DEVCTL2, &ctl2);
return !(ctl2 & PCI_EXP_DEVCTL2_ATOMIC_REQ);
}


@ -45,12 +45,6 @@ extern const struct bnxt_qplib_gid bnxt_qplib_gid_zero;
#define CHIP_NUM_57504 0x1751
#define CHIP_NUM_57502 0x1752
enum bnxt_qplib_wqe_mode {
BNXT_QPLIB_WQE_MODE_STATIC = 0x00,
BNXT_QPLIB_WQE_MODE_VARIABLE = 0x01,
BNXT_QPLIB_WQE_MODE_INVALID = 0x02
};
struct bnxt_qplib_drv_modes {
u8 wqe_mode;
/* Other modes to follow here */
@ -373,6 +367,7 @@ void bnxt_qplib_free_ctx(struct bnxt_qplib_res *res,
int bnxt_qplib_alloc_ctx(struct bnxt_qplib_res *res,
struct bnxt_qplib_ctx *ctx,
bool virt_fn, bool is_p5);
int bnxt_qplib_determine_atomics(struct pci_dev *dev);
static inline void bnxt_qplib_hwq_incr_prod(struct bnxt_qplib_hwq *hwq, u32 cnt)
{


@ -54,6 +54,17 @@ const struct bnxt_qplib_gid bnxt_qplib_gid_zero = {{ 0, 0, 0, 0, 0, 0, 0, 0,
/* Device */
static bool bnxt_qplib_is_atomic_cap(struct bnxt_qplib_rcfw *rcfw)
{
u16 pcie_ctl2 = 0;
if (!bnxt_qplib_is_chip_gen_p5(rcfw->res->cctx))
return false;
pcie_capability_read_word(rcfw->pdev, PCI_EXP_DEVCTL2, &pcie_ctl2);
return (pcie_ctl2 & PCI_EXP_DEVCTL2_ATOMIC_REQ);
}
static void bnxt_qplib_query_version(struct bnxt_qplib_rcfw *rcfw,
char *fw_ver)
{
@ -162,7 +173,7 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
attr->tqm_alloc_reqs[i * 4 + 3] = *(++tqm_alloc);
}
attr->is_atomic = false;
attr->is_atomic = bnxt_qplib_is_atomic_cap(rcfw);
bail:
bnxt_qplib_rcfw_free_sbuf(rcfw, sbuf);
return rc;


@ -42,8 +42,6 @@
#define BNXT_QPLIB_RESERVED_QP_WRS 128
#define PCI_EXP_DEVCTL2_ATOMIC_REQ 0x0040
struct bnxt_qplib_dev_attr {
#define FW_VER_ARR_LEN 4
u8 fw_ver[FW_VER_ARR_LEN];


@ -976,8 +976,8 @@ int c4iw_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
chp = to_c4iw_cq(ib_cq);
xa_erase_irq(&chp->rhp->cqs, chp->cq.cqid);
atomic_dec(&chp->refcnt);
wait_event(chp->wait, !atomic_read(&chp->refcnt));
refcount_dec(&chp->refcnt);
wait_event(chp->wait, !refcount_read(&chp->refcnt));
ucontext = rdma_udata_to_drv_context(udata, struct c4iw_ucontext,
ibucontext);
@ -1080,7 +1080,7 @@ int c4iw_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
chp->ibcq.cqe = entries - 2;
spin_lock_init(&chp->lock);
spin_lock_init(&chp->comp_handler_lock);
atomic_set(&chp->refcnt, 1);
refcount_set(&chp->refcnt, 1);
init_waitqueue_head(&chp->wait);
ret = xa_insert_irq(&rhp->cqs, chp->cq.cqid, chp, GFP_KERNEL);
if (ret)


@ -151,7 +151,7 @@ void c4iw_ev_dispatch(struct c4iw_dev *dev, struct t4_cqe *err_cqe)
}
c4iw_qp_add_ref(&qhp->ibqp);
atomic_inc(&chp->refcnt);
refcount_inc(&chp->refcnt);
xa_unlock_irq(&dev->qps);
/* Bad incoming write */
@ -213,7 +213,7 @@ void c4iw_ev_dispatch(struct c4iw_dev *dev, struct t4_cqe *err_cqe)
break;
}
done:
if (atomic_dec_and_test(&chp->refcnt))
if (refcount_dec_and_test(&chp->refcnt))
wake_up(&chp->wait);
c4iw_qp_rem_ref(&qhp->ibqp);
out:
@ -228,13 +228,13 @@ int c4iw_ev_handler(struct c4iw_dev *dev, u32 qid)
xa_lock_irqsave(&dev->cqs, flag);
chp = xa_load(&dev->cqs, qid);
if (chp) {
atomic_inc(&chp->refcnt);
refcount_inc(&chp->refcnt);
xa_unlock_irqrestore(&dev->cqs, flag);
t4_clear_cq_armed(&chp->cq);
spin_lock_irqsave(&chp->comp_handler_lock, flag);
(*chp->ibcq.comp_handler)(&chp->ibcq, chp->ibcq.cq_context);
spin_unlock_irqrestore(&chp->comp_handler_lock, flag);
if (atomic_dec_and_test(&chp->refcnt))
if (refcount_dec_and_test(&chp->refcnt))
wake_up(&chp->wait);
} else {
pr_debug("unknown cqid 0x%x\n", qid);


@ -427,7 +427,7 @@ struct c4iw_cq {
struct t4_cq cq;
spinlock_t lock;
spinlock_t comp_handler_lock;
atomic_t refcnt;
refcount_t refcnt;
wait_queue_head_t wait;
struct c4iw_wr_wait *wr_waitp;
};
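
The cxgb4 hunks above are one instance of the tree-wide atomic_t to refcount_t conversion called out in the cover letter. A minimal sketch of the pattern, using a hypothetical object rather than c4iw_cq:

    #include <linux/refcount.h>
    #include <linux/wait.h>

    struct foo {
            refcount_t refcnt;
            wait_queue_head_t wait;
    };

    static void foo_init(struct foo *f)
    {
            refcount_set(&f->refcnt, 1);            /* initial reference */
            init_waitqueue_head(&f->wait);
    }

    static void foo_get(struct foo *f)
    {
            refcount_inc(&f->refcnt);               /* saturates instead of wrapping */
    }

    static void foo_put(struct foo *f)
    {
            if (refcount_dec_and_test(&f->refcnt))
                    wake_up(&f->wait);              /* last user is gone */
    }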


@ -377,14 +377,11 @@ static const char * const names[] = {
[IP6OUTRSTS] = "ip6OutRsts"
};
static struct rdma_hw_stats *c4iw_alloc_stats(struct ib_device *ibdev,
u32 port_num)
static struct rdma_hw_stats *c4iw_alloc_device_stats(struct ib_device *ibdev)
{
BUILD_BUG_ON(ARRAY_SIZE(names) != NR_COUNTERS);
if (port_num != 0)
return NULL;
/* FIXME: these look like port stats */
return rdma_alloc_hw_stats_struct(names, NR_COUNTERS,
RDMA_HW_STATS_DEFAULT_LIFESPAN);
}
@ -455,7 +452,7 @@ static const struct ib_device_ops c4iw_dev_ops = {
.driver_id = RDMA_DRIVER_CXGB4,
.uverbs_abi_ver = C4IW_UVERBS_ABI_VERSION,
.alloc_hw_stats = c4iw_alloc_stats,
.alloc_hw_device_stats = c4iw_alloc_device_stats,
.alloc_mr = c4iw_alloc_mr,
.alloc_pd = c4iw_allocate_pd,
.alloc_ucontext = c4iw_alloc_ucontext,
@ -468,6 +465,7 @@ static const struct ib_device_ops c4iw_dev_ops = {
.destroy_cq = c4iw_destroy_cq,
.destroy_qp = c4iw_destroy_qp,
.destroy_srq = c4iw_destroy_srq,
.device_group = &c4iw_attr_group,
.fill_res_cq_entry = c4iw_fill_res_cq_entry,
.fill_res_cm_id_entry = c4iw_fill_res_cm_id_entry,
.fill_res_mr_entry = c4iw_fill_res_mr_entry,
@ -542,7 +540,6 @@ void c4iw_register_device(struct work_struct *work)
memcpy(dev->ibdev.iw_ifname, dev->rdev.lldi.ports[0]->name,
sizeof(dev->ibdev.iw_ifname));
rdma_set_device_sysfs_group(&dev->ibdev, &c4iw_attr_group);
ib_set_device_ops(&dev->ibdev, &c4iw_dev_ops);
ret = set_netdevs(&dev->ibdev, &dev->rdev);
if (ret)


@ -295,6 +295,7 @@ static int create_qp(struct c4iw_rdev *rdev, struct t4_wq *wq,
if (user && (!wq->sq.bar2_pa || (need_rq && !wq->rq.bar2_pa))) {
pr_warn("%s: sqid %u or rqid %u not in BAR2 range\n",
pci_name(rdev->lldi.pdev), wq->sq.qid, wq->rq.qid);
ret = -EINVAL;
goto free_dma;
}
@ -1963,7 +1964,6 @@ int c4iw_modify_qp(struct c4iw_dev *rhp, struct c4iw_qp *qhp,
t4_set_wq_in_error(&qhp->wq, 0);
set_state(qhp, C4IW_QP_STATE_ERROR);
if (!internal) {
abort = 1;
disconnect = 1;
ep = qhp->ep;
c4iw_get_ep(&qhp->ep->com);


@ -157,7 +157,8 @@ int efa_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
int qp_attr_mask, struct ib_udata *udata);
enum rdma_link_layer efa_port_link_layer(struct ib_device *ibdev,
u32 port_num);
struct rdma_hw_stats *efa_alloc_hw_stats(struct ib_device *ibdev, u32 port_num);
struct rdma_hw_stats *efa_alloc_hw_port_stats(struct ib_device *ibdev, u32 port_num);
struct rdma_hw_stats *efa_alloc_hw_device_stats(struct ib_device *ibdev);
int efa_get_hw_stats(struct ib_device *ibdev, struct rdma_hw_stats *stats,
u32 port_num, int index);


@ -242,7 +242,8 @@ static const struct ib_device_ops efa_dev_ops = {
.driver_id = RDMA_DRIVER_EFA,
.uverbs_abi_ver = EFA_UVERBS_ABI_VERSION,
.alloc_hw_stats = efa_alloc_hw_stats,
.alloc_hw_port_stats = efa_alloc_hw_port_stats,
.alloc_hw_device_stats = efa_alloc_hw_device_stats,
.alloc_pd = efa_alloc_pd,
.alloc_ucontext = efa_alloc_ucontext,
.create_cq = efa_create_cq,


@ -1904,13 +1904,22 @@ int efa_destroy_ah(struct ib_ah *ibah, u32 flags)
return 0;
}
struct rdma_hw_stats *efa_alloc_hw_stats(struct ib_device *ibdev, u32 port_num)
struct rdma_hw_stats *efa_alloc_hw_port_stats(struct ib_device *ibdev, u32 port_num)
{
return rdma_alloc_hw_stats_struct(efa_stats_names,
ARRAY_SIZE(efa_stats_names),
RDMA_HW_STATS_DEFAULT_LIFESPAN);
}
struct rdma_hw_stats *efa_alloc_hw_device_stats(struct ib_device *ibdev)
{
/*
* It is probably a bug that efa reports its port stats as device
* stats
*/
return efa_alloc_hw_port_stats(ibdev, 0);
}
int efa_get_hw_stats(struct ib_device *ibdev, struct rdma_hw_stats *stats,
u32 port_num, int index)
{


@ -14186,7 +14186,7 @@ static void init_kdeth_qp(struct hfi1_devdata *dd)
}
/**
* hfi1_get_qp_map
* hfi1_get_qp_map - get qp map
* @dd: device data
* @idx: index to read
*/
@ -14199,7 +14199,7 @@ u8 hfi1_get_qp_map(struct hfi1_devdata *dd, u8 idx)
}
/**
* init_qpmap_table
* init_qpmap_table - init qp map
* @dd: device data
* @first_ctxt: first context
* @last_ctxt: last context


@ -736,7 +736,7 @@ static u64 kvirt_to_phys(void *addr)
}
/**
* complete_subctxt
* complete_subctxt - complete sub-context info
* @fd: valid filedata pointer
*
* Sub-context info can only be set up after the base context
@ -841,7 +841,7 @@ static int assign_ctxt(struct hfi1_filedata *fd, unsigned long arg, u32 len)
}
/**
* match_ctxt
* match_ctxt - match context
* @fd: valid filedata pointer
* @uinfo: user info to compare base context with
* @uctxt: context to compare uinfo to.
@ -898,7 +898,7 @@ static int match_ctxt(struct hfi1_filedata *fd,
}
/**
* find_sub_ctxt
* find_sub_ctxt - find sub-context
* @fd: valid filedata pointer
* @uinfo: matching info to use to find a possible context to share.
*


@ -772,10 +772,6 @@ struct hfi1_pportdata {
struct hfi1_ibport ibport_data;
struct hfi1_devdata *dd;
struct kobject pport_cc_kobj;
struct kobject sc2vl_kobj;
struct kobject sl2sc_kobj;
struct kobject vl2mtu_kobj;
/* PHY support */
struct qsfp_data qsfp_info;
@ -1764,7 +1760,7 @@ static inline void pause_for_credit_return(struct hfi1_devdata *dd)
}
/**
* sc_to_vlt() reverse lookup sc to vl
* sc_to_vlt() - reverse lookup sc to vl
* @dd - devdata
* @sc5 - 5 bit sc
*/
@ -2188,12 +2184,11 @@ static inline bool hfi1_packet_present(struct hfi1_ctxtdata *rcd)
extern const char ib_hfi1_version[];
extern const struct attribute_group ib_hfi1_attr_group;
extern const struct attribute_group *hfi1_attr_port_groups[];
int hfi1_device_create(struct hfi1_devdata *dd);
void hfi1_device_remove(struct hfi1_devdata *dd);
int hfi1_create_port_files(struct ib_device *ibdev, u32 port_num,
struct kobject *kobj);
int hfi1_verbs_register_sysfs(struct hfi1_devdata *dd);
void hfi1_verbs_unregister_sysfs(struct hfi1_devdata *dd);
/* Hook for sysfs read of QSFP */


@ -312,7 +312,7 @@ struct hfi1_ctxtdata *hfi1_rcd_get_by_index_safe(struct hfi1_devdata *dd,
}
/**
* hfi1_rcd_get_by_index
* hfi1_rcd_get_by_index - get by index
* @dd: pointer to a valid devdata structure
* @ctxt: the index of a possible rcd
*
@ -499,7 +499,7 @@ int hfi1_create_ctxtdata(struct hfi1_pportdata *ppd, int numa,
}
/**
* hfi1_free_ctxt
* hfi1_free_ctxt - free context
* @rcd: pointer to an initialized rcd data structure
*
* This wrapper is the free function that matches hfi1_create_ctxtdata().


@ -993,7 +993,7 @@ static bool is_sc_halted(struct hfi1_devdata *dd, u32 hw_context)
}
/**
* sc_wait_for_packet_egress
* sc_wait_for_packet_egress - wait for packet
* @sc: valid send context
* @pause: wait for credit return
*


@ -279,7 +279,6 @@ int init_credit_return(struct hfi1_devdata *dd);
void free_credit_return(struct hfi1_devdata *dd);
int init_sc_pools_and_sizes(struct hfi1_devdata *dd);
int init_send_contexts(struct hfi1_devdata *dd);
int init_credit_return(struct hfi1_devdata *dd);
int init_pervl_scs(struct hfi1_devdata *dd);
struct send_context *sc_alloc(struct hfi1_devdata *dd, int type,
uint hdrqentsize, int numa);
@ -294,7 +293,6 @@ void sc_stop(struct send_context *sc, int bit);
struct pio_buf *sc_buffer_alloc(struct send_context *sc, u32 dw_len,
pio_release_cb cb, void *arg);
void sc_release_update(struct send_context *sc);
void sc_return_credits(struct send_context *sc);
void sc_group_release_update(struct hfi1_devdata *dd, u32 hw_context);
void sc_add_credit_return_intr(struct send_context *sc);
void sc_del_credit_return_intr(struct send_context *sc);


@ -3130,7 +3130,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
}
if (type == SDMA_MAP_PAGE) {
kvaddr = kmap(page);
kvaddr = kmap_local_page(page);
kvaddr += offset;
} else if (WARN_ON(!kvaddr)) {
__sdma_txclean(dd, tx);
@ -3140,7 +3140,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
memcpy(tx->coalesce_buf + tx->coalesce_idx, kvaddr, len);
tx->coalesce_idx += len;
if (type == SDMA_MAP_PAGE)
kunmap(page);
kunmap_local(kvaddr);
/* If there is more data, return */
if (tx->tlen - tx->coalesce_idx)
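
The hunk above replaces kmap()/kunmap() with the local-mapping API. A short sketch of the idiom with a hypothetical helper, not the hfi1 code; kunmap_local() accepts any address within the mapping, so an offset pointer such as the one used above works as well:

    #include <linux/highmem.h>
    #include <linux/string.h>

    static void copy_from_page(void *dst, struct page *page,
                               unsigned int offset, size_t len)
    {
            void *kvaddr = kmap_local_page(page);   /* CPU-local, cheap mapping */

            memcpy(dst, kvaddr + offset, len);
            kunmap_local(kvaddr);                   /* unmap in reverse order */
    }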


@ -45,11 +45,21 @@
*
*/
#include <linux/ctype.h>
#include <rdma/ib_sysfs.h>
#include "hfi.h"
#include "mad.h"
#include "trace.h"
static struct hfi1_pportdata *hfi1_get_pportdata_kobj(struct kobject *kobj)
{
u32 port_num;
struct ib_device *ibdev = ib_port_sysfs_get_ibdev_kobj(kobj, &port_num);
struct hfi1_devdata *dd = dd_from_ibdev(ibdev);
return &dd->pport[port_num - 1];
}
/*
* Start of per-port congestion control structures and support code
*/
@ -57,13 +67,12 @@
/*
* Congestion control table size followed by table entries
*/
static ssize_t read_cc_table_bin(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t pos, size_t count)
static ssize_t cc_table_bin_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf,
loff_t pos, size_t count)
{
int ret;
struct hfi1_pportdata *ppd =
container_of(kobj, struct hfi1_pportdata, pport_cc_kobj);
struct hfi1_pportdata *ppd = hfi1_get_pportdata_kobj(kobj);
struct cc_state *cc_state;
ret = ppd->total_cct_entry * sizeof(struct ib_cc_table_entry_shadow)
@ -89,30 +98,19 @@ static ssize_t read_cc_table_bin(struct file *filp, struct kobject *kobj,
return count;
}
static void port_release(struct kobject *kobj)
{
/* nothing to do since memory is freed by hfi1_free_devdata() */
}
static const struct bin_attribute cc_table_bin_attr = {
.attr = {.name = "cc_table_bin", .mode = 0444},
.read = read_cc_table_bin,
.size = PAGE_SIZE,
};
static BIN_ATTR_RO(cc_table_bin, PAGE_SIZE);
/*
* Congestion settings: port control, control map and an array of 16
* entries for the congestion entries - increase, timer, event log
* trigger threshold and the minimum injection rate delay.
*/
static ssize_t read_cc_setting_bin(struct file *filp, struct kobject *kobj,
static ssize_t cc_setting_bin_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t pos, size_t count)
{
struct hfi1_pportdata *ppd = hfi1_get_pportdata_kobj(kobj);
int ret;
struct hfi1_pportdata *ppd =
container_of(kobj, struct hfi1_pportdata, pport_cc_kobj);
struct cc_state *cc_state;
ret = sizeof(struct opa_congestion_setting_attr_shadow);
@ -136,27 +134,30 @@ static ssize_t read_cc_setting_bin(struct file *filp, struct kobject *kobj,
return count;
}
static BIN_ATTR_RO(cc_setting_bin, PAGE_SIZE);
static const struct bin_attribute cc_setting_bin_attr = {
.attr = {.name = "cc_settings_bin", .mode = 0444},
.read = read_cc_setting_bin,
.size = PAGE_SIZE,
static struct bin_attribute *port_cc_bin_attributes[] = {
&bin_attr_cc_setting_bin,
&bin_attr_cc_table_bin,
NULL
};
struct hfi1_port_attr {
struct attribute attr;
ssize_t (*show)(struct hfi1_pportdata *, char *);
ssize_t (*store)(struct hfi1_pportdata *, const char *, size_t);
};
static ssize_t cc_prescan_show(struct hfi1_pportdata *ppd, char *buf)
static ssize_t cc_prescan_show(struct ib_device *ibdev, u32 port_num,
struct ib_port_attribute *attr, char *buf)
{
struct hfi1_devdata *dd = dd_from_ibdev(ibdev);
struct hfi1_pportdata *ppd = &dd->pport[port_num - 1];
return sysfs_emit(buf, "%s\n", ppd->cc_prescan ? "on" : "off");
}
static ssize_t cc_prescan_store(struct hfi1_pportdata *ppd, const char *buf,
static ssize_t cc_prescan_store(struct ib_device *ibdev, u32 port_num,
struct ib_port_attribute *attr, const char *buf,
size_t count)
{
struct hfi1_devdata *dd = dd_from_ibdev(ibdev);
struct hfi1_pportdata *ppd = &dd->pport[port_num - 1];
if (!memcmp(buf, "on", 2))
ppd->cc_prescan = true;
else if (!memcmp(buf, "off", 3))
@ -164,60 +165,41 @@ static ssize_t cc_prescan_store(struct hfi1_pportdata *ppd, const char *buf,
return count;
}
static IB_PORT_ATTR_ADMIN_RW(cc_prescan);
static struct hfi1_port_attr cc_prescan_attr =
__ATTR(cc_prescan, 0600, cc_prescan_show, cc_prescan_store);
static ssize_t cc_attr_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct hfi1_port_attr *port_attr =
container_of(attr, struct hfi1_port_attr, attr);
struct hfi1_pportdata *ppd =
container_of(kobj, struct hfi1_pportdata, pport_cc_kobj);
return port_attr->show(ppd, buf);
}
static ssize_t cc_attr_store(struct kobject *kobj, struct attribute *attr,
const char *buf, size_t count)
{
struct hfi1_port_attr *port_attr =
container_of(attr, struct hfi1_port_attr, attr);
struct hfi1_pportdata *ppd =
container_of(kobj, struct hfi1_pportdata, pport_cc_kobj);
return port_attr->store(ppd, buf, count);
}
static const struct sysfs_ops port_cc_sysfs_ops = {
.show = cc_attr_show,
.store = cc_attr_store
};
static struct attribute *port_cc_default_attributes[] = {
&cc_prescan_attr.attr,
static struct attribute *port_cc_attributes[] = {
&ib_port_attr_cc_prescan.attr,
NULL
};
static struct kobj_type port_cc_ktype = {
.release = port_release,
.sysfs_ops = &port_cc_sysfs_ops,
.default_attrs = port_cc_default_attributes
static const struct attribute_group port_cc_group = {
.name = "CCMgtA",
.attrs = port_cc_attributes,
.bin_attrs = port_cc_bin_attributes,
};
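
The congestion-control files above now use the stock sysfs helpers instead of a private kobj_type. For reference, a self-contained sketch of that shape with hypothetical names (not the hfi1 attributes):

    #include <linux/mm.h>
    #include <linux/sysfs.h>

    static ssize_t example_table_read(struct file *filp, struct kobject *kobj,
                                      struct bin_attribute *bin_attr,
                                      char *buf, loff_t pos, size_t count)
    {
            return 0;       /* copy up to 'count' bytes starting at 'pos' into 'buf' */
    }
    static BIN_ATTR_RO(example_table, PAGE_SIZE);   /* creates bin_attr_example_table */

    static struct bin_attribute *example_bin_attrs[] = {
            &bin_attr_example_table,
            NULL
    };

    static const struct attribute_group example_group = {
            .name      = "example",
            .bin_attrs = example_bin_attrs,
    };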
/* Start sc2vl */
#define HFI1_SC2VL_ATTR(N) \
static struct hfi1_sc2vl_attr hfi1_sc2vl_attr_##N = { \
.attr = { .name = __stringify(N), .mode = 0444 }, \
.sc = N \
}
struct hfi1_sc2vl_attr {
struct attribute attr;
struct ib_port_attribute attr;
int sc;
};
static ssize_t sc2vl_attr_show(struct ib_device *ibdev, u32 port_num,
struct ib_port_attribute *attr, char *buf)
{
struct hfi1_sc2vl_attr *sattr =
container_of(attr, struct hfi1_sc2vl_attr, attr);
struct hfi1_devdata *dd = dd_from_ibdev(ibdev);
return sysfs_emit(buf, "%u\n", *((u8 *)dd->sc2vl + sattr->sc));
}
#define HFI1_SC2VL_ATTR(N) \
static struct hfi1_sc2vl_attr hfi1_sc2vl_attr_##N = { \
.attr = __ATTR(N, 0444, sc2vl_attr_show, NULL), \
.sc = N, \
}
HFI1_SC2VL_ATTR(0);
HFI1_SC2VL_ATTR(1);
HFI1_SC2VL_ATTR(2);
@ -251,78 +233,70 @@ HFI1_SC2VL_ATTR(29);
HFI1_SC2VL_ATTR(30);
HFI1_SC2VL_ATTR(31);
static struct attribute *sc2vl_default_attributes[] = {
&hfi1_sc2vl_attr_0.attr,
&hfi1_sc2vl_attr_1.attr,
&hfi1_sc2vl_attr_2.attr,
&hfi1_sc2vl_attr_3.attr,
&hfi1_sc2vl_attr_4.attr,
&hfi1_sc2vl_attr_5.attr,
&hfi1_sc2vl_attr_6.attr,
&hfi1_sc2vl_attr_7.attr,
&hfi1_sc2vl_attr_8.attr,
&hfi1_sc2vl_attr_9.attr,
&hfi1_sc2vl_attr_10.attr,
&hfi1_sc2vl_attr_11.attr,
&hfi1_sc2vl_attr_12.attr,
&hfi1_sc2vl_attr_13.attr,
&hfi1_sc2vl_attr_14.attr,
&hfi1_sc2vl_attr_15.attr,
&hfi1_sc2vl_attr_16.attr,
&hfi1_sc2vl_attr_17.attr,
&hfi1_sc2vl_attr_18.attr,
&hfi1_sc2vl_attr_19.attr,
&hfi1_sc2vl_attr_20.attr,
&hfi1_sc2vl_attr_21.attr,
&hfi1_sc2vl_attr_22.attr,
&hfi1_sc2vl_attr_23.attr,
&hfi1_sc2vl_attr_24.attr,
&hfi1_sc2vl_attr_25.attr,
&hfi1_sc2vl_attr_26.attr,
&hfi1_sc2vl_attr_27.attr,
&hfi1_sc2vl_attr_28.attr,
&hfi1_sc2vl_attr_29.attr,
&hfi1_sc2vl_attr_30.attr,
&hfi1_sc2vl_attr_31.attr,
static struct attribute *port_sc2vl_attributes[] = {
&hfi1_sc2vl_attr_0.attr.attr,
&hfi1_sc2vl_attr_1.attr.attr,
&hfi1_sc2vl_attr_2.attr.attr,
&hfi1_sc2vl_attr_3.attr.attr,
&hfi1_sc2vl_attr_4.attr.attr,
&hfi1_sc2vl_attr_5.attr.attr,
&hfi1_sc2vl_attr_6.attr.attr,
&hfi1_sc2vl_attr_7.attr.attr,
&hfi1_sc2vl_attr_8.attr.attr,
&hfi1_sc2vl_attr_9.attr.attr,
&hfi1_sc2vl_attr_10.attr.attr,
&hfi1_sc2vl_attr_11.attr.attr,
&hfi1_sc2vl_attr_12.attr.attr,
&hfi1_sc2vl_attr_13.attr.attr,
&hfi1_sc2vl_attr_14.attr.attr,
&hfi1_sc2vl_attr_15.attr.attr,
&hfi1_sc2vl_attr_16.attr.attr,
&hfi1_sc2vl_attr_17.attr.attr,
&hfi1_sc2vl_attr_18.attr.attr,
&hfi1_sc2vl_attr_19.attr.attr,
&hfi1_sc2vl_attr_20.attr.attr,
&hfi1_sc2vl_attr_21.attr.attr,
&hfi1_sc2vl_attr_22.attr.attr,
&hfi1_sc2vl_attr_23.attr.attr,
&hfi1_sc2vl_attr_24.attr.attr,
&hfi1_sc2vl_attr_25.attr.attr,
&hfi1_sc2vl_attr_26.attr.attr,
&hfi1_sc2vl_attr_27.attr.attr,
&hfi1_sc2vl_attr_28.attr.attr,
&hfi1_sc2vl_attr_29.attr.attr,
&hfi1_sc2vl_attr_30.attr.attr,
&hfi1_sc2vl_attr_31.attr.attr,
NULL
};
static ssize_t sc2vl_attr_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct hfi1_sc2vl_attr *sattr =
container_of(attr, struct hfi1_sc2vl_attr, attr);
struct hfi1_pportdata *ppd =
container_of(kobj, struct hfi1_pportdata, sc2vl_kobj);
struct hfi1_devdata *dd = ppd->dd;
return sysfs_emit(buf, "%u\n", *((u8 *)dd->sc2vl + sattr->sc));
}
static const struct sysfs_ops hfi1_sc2vl_ops = {
.show = sc2vl_attr_show,
static const struct attribute_group port_sc2vl_group = {
.name = "sc2vl",
.attrs = port_sc2vl_attributes,
};
static struct kobj_type hfi1_sc2vl_ktype = {
.release = port_release,
.sysfs_ops = &hfi1_sc2vl_ops,
.default_attrs = sc2vl_default_attributes
};
/* End sc2vl */
/* Start sl2sc */
#define HFI1_SL2SC_ATTR(N) \
static struct hfi1_sl2sc_attr hfi1_sl2sc_attr_##N = { \
.attr = { .name = __stringify(N), .mode = 0444 }, \
.sl = N \
}
struct hfi1_sl2sc_attr {
struct attribute attr;
struct ib_port_attribute attr;
int sl;
};
static ssize_t sl2sc_attr_show(struct ib_device *ibdev, u32 port_num,
struct ib_port_attribute *attr, char *buf)
{
struct hfi1_sl2sc_attr *sattr =
container_of(attr, struct hfi1_sl2sc_attr, attr);
struct hfi1_devdata *dd = dd_from_ibdev(ibdev);
struct hfi1_ibport *ibp = &dd->pport[port_num - 1].ibport_data;
return sysfs_emit(buf, "%u\n", ibp->sl_to_sc[sattr->sl]);
}
#define HFI1_SL2SC_ATTR(N) \
static struct hfi1_sl2sc_attr hfi1_sl2sc_attr_##N = { \
.attr = __ATTR(N, 0444, sl2sc_attr_show, NULL), .sl = N \
}
HFI1_SL2SC_ATTR(0);
HFI1_SL2SC_ATTR(1);
HFI1_SL2SC_ATTR(2);
@ -356,79 +330,72 @@ HFI1_SL2SC_ATTR(29);
HFI1_SL2SC_ATTR(30);
HFI1_SL2SC_ATTR(31);
static struct attribute *sl2sc_default_attributes[] = {
&hfi1_sl2sc_attr_0.attr,
&hfi1_sl2sc_attr_1.attr,
&hfi1_sl2sc_attr_2.attr,
&hfi1_sl2sc_attr_3.attr,
&hfi1_sl2sc_attr_4.attr,
&hfi1_sl2sc_attr_5.attr,
&hfi1_sl2sc_attr_6.attr,
&hfi1_sl2sc_attr_7.attr,
&hfi1_sl2sc_attr_8.attr,
&hfi1_sl2sc_attr_9.attr,
&hfi1_sl2sc_attr_10.attr,
&hfi1_sl2sc_attr_11.attr,
&hfi1_sl2sc_attr_12.attr,
&hfi1_sl2sc_attr_13.attr,
&hfi1_sl2sc_attr_14.attr,
&hfi1_sl2sc_attr_15.attr,
&hfi1_sl2sc_attr_16.attr,
&hfi1_sl2sc_attr_17.attr,
&hfi1_sl2sc_attr_18.attr,
&hfi1_sl2sc_attr_19.attr,
&hfi1_sl2sc_attr_20.attr,
&hfi1_sl2sc_attr_21.attr,
&hfi1_sl2sc_attr_22.attr,
&hfi1_sl2sc_attr_23.attr,
&hfi1_sl2sc_attr_24.attr,
&hfi1_sl2sc_attr_25.attr,
&hfi1_sl2sc_attr_26.attr,
&hfi1_sl2sc_attr_27.attr,
&hfi1_sl2sc_attr_28.attr,
&hfi1_sl2sc_attr_29.attr,
&hfi1_sl2sc_attr_30.attr,
&hfi1_sl2sc_attr_31.attr,
static struct attribute *port_sl2sc_attributes[] = {
&hfi1_sl2sc_attr_0.attr.attr,
&hfi1_sl2sc_attr_1.attr.attr,
&hfi1_sl2sc_attr_2.attr.attr,
&hfi1_sl2sc_attr_3.attr.attr,
&hfi1_sl2sc_attr_4.attr.attr,
&hfi1_sl2sc_attr_5.attr.attr,
&hfi1_sl2sc_attr_6.attr.attr,
&hfi1_sl2sc_attr_7.attr.attr,
&hfi1_sl2sc_attr_8.attr.attr,
&hfi1_sl2sc_attr_9.attr.attr,
&hfi1_sl2sc_attr_10.attr.attr,
&hfi1_sl2sc_attr_11.attr.attr,
&hfi1_sl2sc_attr_12.attr.attr,
&hfi1_sl2sc_attr_13.attr.attr,
&hfi1_sl2sc_attr_14.attr.attr,
&hfi1_sl2sc_attr_15.attr.attr,
&hfi1_sl2sc_attr_16.attr.attr,
&hfi1_sl2sc_attr_17.attr.attr,
&hfi1_sl2sc_attr_18.attr.attr,
&hfi1_sl2sc_attr_19.attr.attr,
&hfi1_sl2sc_attr_20.attr.attr,
&hfi1_sl2sc_attr_21.attr.attr,
&hfi1_sl2sc_attr_22.attr.attr,
&hfi1_sl2sc_attr_23.attr.attr,
&hfi1_sl2sc_attr_24.attr.attr,
&hfi1_sl2sc_attr_25.attr.attr,
&hfi1_sl2sc_attr_26.attr.attr,
&hfi1_sl2sc_attr_27.attr.attr,
&hfi1_sl2sc_attr_28.attr.attr,
&hfi1_sl2sc_attr_29.attr.attr,
&hfi1_sl2sc_attr_30.attr.attr,
&hfi1_sl2sc_attr_31.attr.attr,
NULL
};
static ssize_t sl2sc_attr_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct hfi1_sl2sc_attr *sattr =
container_of(attr, struct hfi1_sl2sc_attr, attr);
struct hfi1_pportdata *ppd =
container_of(kobj, struct hfi1_pportdata, sl2sc_kobj);
struct hfi1_ibport *ibp = &ppd->ibport_data;
return sysfs_emit(buf, "%u\n", ibp->sl_to_sc[sattr->sl]);
}
static const struct sysfs_ops hfi1_sl2sc_ops = {
.show = sl2sc_attr_show,
};
static struct kobj_type hfi1_sl2sc_ktype = {
.release = port_release,
.sysfs_ops = &hfi1_sl2sc_ops,
.default_attrs = sl2sc_default_attributes
static const struct attribute_group port_sl2sc_group = {
.name = "sl2sc",
.attrs = port_sl2sc_attributes,
};
/* End sl2sc */
/* Start vl2mtu */
#define HFI1_VL2MTU_ATTR(N) \
static struct hfi1_vl2mtu_attr hfi1_vl2mtu_attr_##N = { \
.attr = { .name = __stringify(N), .mode = 0444 }, \
.vl = N \
}
struct hfi1_vl2mtu_attr {
struct attribute attr;
struct ib_port_attribute attr;
int vl;
};
static ssize_t vl2mtu_attr_show(struct ib_device *ibdev, u32 port_num,
struct ib_port_attribute *attr, char *buf)
{
struct hfi1_vl2mtu_attr *vlattr =
container_of(attr, struct hfi1_vl2mtu_attr, attr);
struct hfi1_devdata *dd = dd_from_ibdev(ibdev);
return sysfs_emit(buf, "%u\n", dd->vld[vlattr->vl].mtu);
}
#define HFI1_VL2MTU_ATTR(N) \
static struct hfi1_vl2mtu_attr hfi1_vl2mtu_attr_##N = { \
.attr = __ATTR(N, 0444, vl2mtu_attr_show, NULL), \
.vl = N, \
}
HFI1_VL2MTU_ATTR(0);
HFI1_VL2MTU_ATTR(1);
HFI1_VL2MTU_ATTR(2);
@ -446,46 +413,29 @@ HFI1_VL2MTU_ATTR(13);
HFI1_VL2MTU_ATTR(14);
HFI1_VL2MTU_ATTR(15);
static struct attribute *vl2mtu_default_attributes[] = {
&hfi1_vl2mtu_attr_0.attr,
&hfi1_vl2mtu_attr_1.attr,
&hfi1_vl2mtu_attr_2.attr,
&hfi1_vl2mtu_attr_3.attr,
&hfi1_vl2mtu_attr_4.attr,
&hfi1_vl2mtu_attr_5.attr,
&hfi1_vl2mtu_attr_6.attr,
&hfi1_vl2mtu_attr_7.attr,
&hfi1_vl2mtu_attr_8.attr,
&hfi1_vl2mtu_attr_9.attr,
&hfi1_vl2mtu_attr_10.attr,
&hfi1_vl2mtu_attr_11.attr,
&hfi1_vl2mtu_attr_12.attr,
&hfi1_vl2mtu_attr_13.attr,
&hfi1_vl2mtu_attr_14.attr,
&hfi1_vl2mtu_attr_15.attr,
static struct attribute *port_vl2mtu_attributes[] = {
&hfi1_vl2mtu_attr_0.attr.attr,
&hfi1_vl2mtu_attr_1.attr.attr,
&hfi1_vl2mtu_attr_2.attr.attr,
&hfi1_vl2mtu_attr_3.attr.attr,
&hfi1_vl2mtu_attr_4.attr.attr,
&hfi1_vl2mtu_attr_5.attr.attr,
&hfi1_vl2mtu_attr_6.attr.attr,
&hfi1_vl2mtu_attr_7.attr.attr,
&hfi1_vl2mtu_attr_8.attr.attr,
&hfi1_vl2mtu_attr_9.attr.attr,
&hfi1_vl2mtu_attr_10.attr.attr,
&hfi1_vl2mtu_attr_11.attr.attr,
&hfi1_vl2mtu_attr_12.attr.attr,
&hfi1_vl2mtu_attr_13.attr.attr,
&hfi1_vl2mtu_attr_14.attr.attr,
&hfi1_vl2mtu_attr_15.attr.attr,
NULL
};
static ssize_t vl2mtu_attr_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct hfi1_vl2mtu_attr *vlattr =
container_of(attr, struct hfi1_vl2mtu_attr, attr);
struct hfi1_pportdata *ppd =
container_of(kobj, struct hfi1_pportdata, vl2mtu_kobj);
struct hfi1_devdata *dd = ppd->dd;
return sysfs_emit(buf, "%u\n", dd->vld[vlattr->vl].mtu);
}
static const struct sysfs_ops hfi1_vl2mtu_ops = {
.show = vl2mtu_attr_show,
};
static struct kobj_type hfi1_vl2mtu_ktype = {
.release = port_release,
.sysfs_ops = &hfi1_vl2mtu_ops,
.default_attrs = vl2mtu_default_attributes
static const struct attribute_group port_vl2mtu_group = {
.name = "vl2mtu",
.attrs = port_vl2mtu_attributes,
};
/* end of per-port file structures and support code */
@ -649,101 +599,13 @@ const struct attribute_group ib_hfi1_attr_group = {
.attrs = hfi1_attributes,
};
int hfi1_create_port_files(struct ib_device *ibdev, u32 port_num,
struct kobject *kobj)
{
struct hfi1_pportdata *ppd;
struct hfi1_devdata *dd = dd_from_ibdev(ibdev);
int ret;
if (!port_num || port_num > dd->num_pports) {
dd_dev_err(dd,
"Skipping infiniband class with invalid port %u\n",
port_num);
return -ENODEV;
}
ppd = &dd->pport[port_num - 1];
ret = kobject_init_and_add(&ppd->sc2vl_kobj, &hfi1_sc2vl_ktype, kobj,
"sc2vl");
if (ret) {
dd_dev_err(dd,
"Skipping sc2vl sysfs info, (err %d) port %u\n",
ret, port_num);
/*
* Based on the documentation for kobject_init_and_add(), the
* caller should call kobject_put even if this call fails.
*/
goto bail_sc2vl;
}
kobject_uevent(&ppd->sc2vl_kobj, KOBJ_ADD);
ret = kobject_init_and_add(&ppd->sl2sc_kobj, &hfi1_sl2sc_ktype, kobj,
"sl2sc");
if (ret) {
dd_dev_err(dd,
"Skipping sl2sc sysfs info, (err %d) port %u\n",
ret, port_num);
goto bail_sl2sc;
}
kobject_uevent(&ppd->sl2sc_kobj, KOBJ_ADD);
ret = kobject_init_and_add(&ppd->vl2mtu_kobj, &hfi1_vl2mtu_ktype, kobj,
"vl2mtu");
if (ret) {
dd_dev_err(dd,
"Skipping vl2mtu sysfs info, (err %d) port %u\n",
ret, port_num);
goto bail_vl2mtu;
}
kobject_uevent(&ppd->vl2mtu_kobj, KOBJ_ADD);
ret = kobject_init_and_add(&ppd->pport_cc_kobj, &port_cc_ktype,
kobj, "CCMgtA");
if (ret) {
dd_dev_err(dd,
"Skipping Congestion Control sysfs info, (err %d) port %u\n",
ret, port_num);
goto bail_cc;
}
kobject_uevent(&ppd->pport_cc_kobj, KOBJ_ADD);
ret = sysfs_create_bin_file(&ppd->pport_cc_kobj, &cc_setting_bin_attr);
if (ret) {
dd_dev_err(dd,
"Skipping Congestion Control setting sysfs info, (err %d) port %u\n",
ret, port_num);
goto bail_cc;
}
ret = sysfs_create_bin_file(&ppd->pport_cc_kobj, &cc_table_bin_attr);
if (ret) {
dd_dev_err(dd,
"Skipping Congestion Control table sysfs info, (err %d) port %u\n",
ret, port_num);
goto bail_cc_entry_bin;
}
dd_dev_info(dd,
"Congestion Control Agent enabled for port %d\n",
port_num);
return 0;
bail_cc_entry_bin:
sysfs_remove_bin_file(&ppd->pport_cc_kobj,
&cc_setting_bin_attr);
bail_cc:
kobject_put(&ppd->pport_cc_kobj);
bail_vl2mtu:
kobject_put(&ppd->vl2mtu_kobj);
bail_sl2sc:
kobject_put(&ppd->sl2sc_kobj);
bail_sc2vl:
kobject_put(&ppd->sc2vl_kobj);
return ret;
}
const struct attribute_group *hfi1_attr_port_groups[] = {
&port_cc_group,
&port_sc2vl_group,
&port_sl2sc_group,
&port_vl2mtu_group,
NULL,
};
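
With the port groups exported as a plain NULL-terminated array, the driver no longer creates these kobjects by hand; it only points the device ops at its groups and the RDMA core creates and removes the files. A sketch of that hookup for a hypothetical "foo" driver, showing just the two sysfs-related members:

    #include <rdma/ib_verbs.h>

    static struct attribute *foo_device_attrs[] = {
            NULL,   /* DEVICE_ATTR_* entries would go here */
    };

    static const struct attribute_group foo_device_group = {
            .attrs = foo_device_attrs,
    };

    static const struct attribute_group *foo_port_groups[] = {
            &foo_device_group,      /* reused here only to keep the sketch short */
            NULL,
    };

    static const struct ib_device_ops foo_dev_ops = {
            .device_group = &foo_device_group,
            .port_groups  = foo_port_groups,
    };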
struct sde_attribute {
struct attribute attr;
@ -868,23 +730,9 @@ int hfi1_verbs_register_sysfs(struct hfi1_devdata *dd)
*/
void hfi1_verbs_unregister_sysfs(struct hfi1_devdata *dd)
{
struct hfi1_pportdata *ppd;
int i;
/* Unwind operations in hfi1_verbs_register_sysfs() */
for (i = 0; i < dd->num_sdma; i++)
kobject_put(&dd->per_sdma[i].kobj);
for (i = 0; i < dd->num_pports; i++) {
ppd = &dd->pport[i];
sysfs_remove_bin_file(&ppd->pport_cc_kobj,
&cc_setting_bin_attr);
sysfs_remove_bin_file(&ppd->pport_cc_kobj,
&cc_table_bin_attr);
kobject_put(&ppd->pport_cc_kobj);
kobject_put(&ppd->vl2mtu_kobj);
kobject_put(&ppd->sl2sc_kobj);
kobject_put(&ppd->sc2vl_kobj);
}
}


@ -1115,7 +1115,7 @@ static u32 kern_find_pages(struct tid_rdma_flow *flow,
}
flow->length = flow->req->seg_len - length;
*last = req->isge == ss->num_sge ? false : true;
*last = req->isge != ss->num_sge;
return i;
}


@ -189,6 +189,11 @@ void hfi1_trace_parse_16b_bth(struct ib_other_headers *ohdr,
*qpn = ib_bth_get_qpn(ohdr);
}
static u16 ib_get_len(const struct ib_header *hdr)
{
return be16_to_cpu(hdr->lrh[2]);
}
void hfi1_trace_parse_9b_hdr(struct ib_header *hdr, bool sc5,
u8 *lnh, u8 *lver, u8 *sl, u8 *sc,
u16 *len, u32 *dlid, u32 *slid)


@ -1693,54 +1693,53 @@ static int init_cntr_names(const char *names_in,
return 0;
}
static struct rdma_hw_stats *alloc_hw_stats(struct ib_device *ibdev,
u32 port_num)
static int init_counters(struct ib_device *ibdev)
{
int i, err;
struct hfi1_devdata *dd = dd_from_ibdev(ibdev);
int i, err = 0;
mutex_lock(&cntr_names_lock);
if (!cntr_names_initialized) {
struct hfi1_devdata *dd = dd_from_ibdev(ibdev);
if (cntr_names_initialized)
goto out_unlock;
err = init_cntr_names(dd->cntrnames,
dd->cntrnameslen,
num_driver_cntrs,
&num_dev_cntrs,
&dev_cntr_names);
if (err) {
mutex_unlock(&cntr_names_lock);
return NULL;
}
err = init_cntr_names(dd->cntrnames, dd->cntrnameslen, num_driver_cntrs,
&num_dev_cntrs, &dev_cntr_names);
if (err)
goto out_unlock;
for (i = 0; i < num_driver_cntrs; i++)
dev_cntr_names[num_dev_cntrs + i] =
driver_cntr_names[i];
for (i = 0; i < num_driver_cntrs; i++)
dev_cntr_names[num_dev_cntrs + i] = driver_cntr_names[i];
err = init_cntr_names(dd->portcntrnames,
dd->portcntrnameslen,
0,
&num_port_cntrs,
&port_cntr_names);
if (err) {
kfree(dev_cntr_names);
dev_cntr_names = NULL;
mutex_unlock(&cntr_names_lock);
return NULL;
}
cntr_names_initialized = 1;
err = init_cntr_names(dd->portcntrnames, dd->portcntrnameslen, 0,
&num_port_cntrs, &port_cntr_names);
if (err) {
kfree(dev_cntr_names);
dev_cntr_names = NULL;
goto out_unlock;
}
mutex_unlock(&cntr_names_lock);
cntr_names_initialized = 1;
if (!port_num)
return rdma_alloc_hw_stats_struct(
dev_cntr_names,
num_dev_cntrs + num_driver_cntrs,
RDMA_HW_STATS_DEFAULT_LIFESPAN);
else
return rdma_alloc_hw_stats_struct(
port_cntr_names,
num_port_cntrs,
RDMA_HW_STATS_DEFAULT_LIFESPAN);
out_unlock:
mutex_unlock(&cntr_names_lock);
return err;
}
static struct rdma_hw_stats *hfi1_alloc_hw_device_stats(struct ib_device *ibdev)
{
if (init_counters(ibdev))
return NULL;
return rdma_alloc_hw_stats_struct(dev_cntr_names,
num_dev_cntrs + num_driver_cntrs,
RDMA_HW_STATS_DEFAULT_LIFESPAN);
}
static struct rdma_hw_stats *hfi_alloc_hw_port_stats(struct ib_device *ibdev,
u32 port_num)
{
if (init_counters(ibdev))
return NULL;
return rdma_alloc_hw_stats_struct(port_cntr_names, num_port_cntrs,
RDMA_HW_STATS_DEFAULT_LIFESPAN);
}
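
Both allocators above reduce to sizing a rdma_hw_stats object from a name table; only the table differs between the device and port variants. A minimal sketch with a hypothetical counter list:

    #include <rdma/ib_verbs.h>

    static const char * const example_cntr_names[] = {
            "example_tx_pkts",
            "example_rx_pkts",
    };

    static struct rdma_hw_stats *example_alloc_stats(void)
    {
            return rdma_alloc_hw_stats_struct(example_cntr_names,
                                              ARRAY_SIZE(example_cntr_names),
                                              RDMA_HW_STATS_DEFAULT_LIFESPAN);
    }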
static u64 hfi1_sps_ints(void)
@ -1787,12 +1786,14 @@ static const struct ib_device_ops hfi1_dev_ops = {
.owner = THIS_MODULE,
.driver_id = RDMA_DRIVER_HFI1,
.alloc_hw_stats = alloc_hw_stats,
.alloc_hw_device_stats = hfi1_alloc_hw_device_stats,
.alloc_hw_port_stats = hfi_alloc_hw_port_stats,
.alloc_rdma_netdev = hfi1_vnic_alloc_rn,
.device_group = &ib_hfi1_attr_group,
.get_dev_fw_str = hfi1_get_dev_fw_str,
.get_hw_stats = get_hw_stats,
.init_port = hfi1_create_port_files,
.modify_device = modify_device,
.port_groups = hfi1_attr_port_groups,
/* keep process mad in the driver */
.process_mad = hfi1_process_mad,
.rdma_netdev_get_params = hfi1_ipoib_rn_get_params,
@ -1927,9 +1928,6 @@ int hfi1_register_ib_device(struct hfi1_devdata *dd)
i,
ppd->pkeys);
rdma_set_device_sysfs_group(&dd->verbs_dev.rdi.ibdev,
&ib_hfi1_attr_group);
ret = rvt_register_device(&dd->verbs_dev.rdi);
if (ret)
goto err_verbs_txreq;


@ -63,65 +63,14 @@ int hns_roce_bitmap_alloc(struct hns_roce_bitmap *bitmap, unsigned long *obj)
return ret;
}
void hns_roce_bitmap_free(struct hns_roce_bitmap *bitmap, unsigned long obj,
int rr)
void hns_roce_bitmap_free(struct hns_roce_bitmap *bitmap, unsigned long obj)
{
hns_roce_bitmap_free_range(bitmap, obj, 1, rr);
}
int hns_roce_bitmap_alloc_range(struct hns_roce_bitmap *bitmap, int cnt,
int align, unsigned long *obj)
{
int ret = 0;
int i;
if (likely(cnt == 1 && align == 1))
return hns_roce_bitmap_alloc(bitmap, obj);
spin_lock(&bitmap->lock);
*obj = bitmap_find_next_zero_area(bitmap->table, bitmap->max,
bitmap->last, cnt, align - 1);
if (*obj >= bitmap->max) {
bitmap->top = (bitmap->top + bitmap->max + bitmap->reserved_top)
& bitmap->mask;
*obj = bitmap_find_next_zero_area(bitmap->table, bitmap->max, 0,
cnt, align - 1);
}
if (*obj < bitmap->max) {
for (i = 0; i < cnt; i++)
set_bit(*obj + i, bitmap->table);
if (*obj == bitmap->last) {
bitmap->last = (*obj + cnt);
if (bitmap->last >= bitmap->max)
bitmap->last = 0;
}
*obj |= bitmap->top;
} else {
ret = -EINVAL;
}
spin_unlock(&bitmap->lock);
return ret;
}
void hns_roce_bitmap_free_range(struct hns_roce_bitmap *bitmap,
unsigned long obj, int cnt,
int rr)
{
int i;
obj &= bitmap->max + bitmap->reserved_top - 1;
spin_lock(&bitmap->lock);
for (i = 0; i < cnt; i++)
clear_bit(obj + i, bitmap->table);
clear_bit(obj, bitmap->table);
if (!rr)
bitmap->last = min(bitmap->last, obj);
bitmap->last = min(bitmap->last, obj);
bitmap->top = (bitmap->top + bitmap->max + bitmap->reserved_top)
& bitmap->mask;
spin_unlock(&bitmap->lock);
@ -208,10 +157,10 @@ struct hns_roce_buf *hns_roce_buf_alloc(struct hns_roce_dev *hr_dev, u32 size,
/* Calc the trunk size and num by required size and page_shift */
if (flags & HNS_ROCE_BUF_DIRECT) {
buf->trunk_shift = ilog2(ALIGN(size, PAGE_SIZE));
buf->trunk_shift = order_base_2(ALIGN(size, PAGE_SIZE));
ntrunk = 1;
} else {
buf->trunk_shift = ilog2(ALIGN(page_size, PAGE_SIZE));
buf->trunk_shift = order_base_2(ALIGN(page_size, PAGE_SIZE));
ntrunk = DIV_ROUND_UP(size, 1 << buf->trunk_shift);
}
@ -252,50 +201,41 @@ struct hns_roce_buf *hns_roce_buf_alloc(struct hns_roce_dev *hr_dev, u32 size,
}
int hns_roce_get_kmem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
int buf_cnt, int start, struct hns_roce_buf *buf)
int buf_cnt, struct hns_roce_buf *buf,
unsigned int page_shift)
{
int i, end;
int total;
unsigned int offset, max_size;
int total = 0;
int i;
end = start + buf_cnt;
if (end > buf->npages) {
dev_err(hr_dev->dev,
"failed to check kmem bufs, end %d + %d total %u!\n",
start, buf_cnt, buf->npages);
if (page_shift > buf->trunk_shift) {
dev_err(hr_dev->dev, "failed to check kmem buf shift %u > %u\n",
page_shift, buf->trunk_shift);
return -EINVAL;
}
total = 0;
for (i = start; i < end; i++)
bufs[total++] = hns_roce_buf_page(buf, i);
offset = 0;
max_size = buf->ntrunks << buf->trunk_shift;
for (i = 0; i < buf_cnt && offset < max_size; i++) {
bufs[total++] = hns_roce_buf_dma_addr(buf, offset);
offset += (1 << page_shift);
}
return total;
}
int hns_roce_get_umem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
int buf_cnt, int start, struct ib_umem *umem,
int buf_cnt, struct ib_umem *umem,
unsigned int page_shift)
{
struct ib_block_iter biter;
int total = 0;
int idx = 0;
u64 addr;
if (page_shift < HNS_HW_PAGE_SHIFT) {
dev_err(hr_dev->dev, "failed to check umem page shift %u!\n",
page_shift);
return -EINVAL;
}
/* convert system page cnt to hw page cnt */
rdma_umem_for_each_dma_block(umem, &biter, 1 << page_shift) {
addr = rdma_block_iter_dma_address(&biter);
if (idx >= start) {
bufs[total++] = addr;
if (total >= buf_cnt)
goto done;
}
idx++;
bufs[total++] = rdma_block_iter_dma_address(&biter);
if (total >= buf_cnt)
goto done;
}
done:
@ -305,13 +245,13 @@ int hns_roce_get_umem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
void hns_roce_cleanup_bitmap(struct hns_roce_dev *hr_dev)
{
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_XRC)
hns_roce_cleanup_xrcd_table(hr_dev);
ida_destroy(&hr_dev->xrcd_ida.ida);
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ)
hns_roce_cleanup_srq_table(hr_dev);
hns_roce_cleanup_qp_table(hr_dev);
hns_roce_cleanup_cq_table(hr_dev);
hns_roce_cleanup_mr_table(hr_dev);
hns_roce_cleanup_pd_table(hr_dev);
ida_destroy(&hr_dev->mr_table.mtpt_ida.ida);
ida_destroy(&hr_dev->pd_ida.ida);
hns_roce_cleanup_uar_table(hr_dev);
}
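
The reworked umem helper above leans on the core block iterator rather than counting system pages by hand. A sketch of that iterator pattern with a hypothetical helper:

    #include <rdma/ib_umem.h>
    #include <rdma/ib_verbs.h>

    static int collect_dma_blocks(struct ib_umem *umem, dma_addr_t *pages,
                                  int max_pages, unsigned int page_shift)
    {
            struct ib_block_iter biter;
            int n = 0;

            /* walk the MR in fixed-size HW pages and record each DMA address */
            rdma_umem_for_each_dma_block(umem, &biter, 1UL << page_shift) {
                    if (n >= max_pages)
                            break;
                    pages[n++] = rdma_block_iter_dma_address(&biter);
            }

            return n;
    }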


@ -77,6 +77,14 @@
#define hr_reg_clear(ptr, field) _hr_reg_clear(ptr, field)
#define _hr_reg_write_bool(ptr, field_type, field_h, field_l, val) \
({ \
(val) ? _hr_reg_enable(ptr, field_type, field_h, field_l) : \
_hr_reg_clear(ptr, field_type, field_h, field_l); \
})
#define hr_reg_write_bool(ptr, field, val) _hr_reg_write_bool(ptr, field, val)
#define _hr_reg_write(ptr, field_type, field_h, field_l, val) \
({ \
_hr_reg_clear(ptr, field_type, field_h, field_l); \
@ -373,8 +381,8 @@
#define ROCEE_TX_CMQ_BASEADDR_L_REG 0x07000
#define ROCEE_TX_CMQ_BASEADDR_H_REG 0x07004
#define ROCEE_TX_CMQ_DEPTH_REG 0x07008
#define ROCEE_TX_CMQ_HEAD_REG 0x07010
#define ROCEE_TX_CMQ_TAIL_REG 0x07014
#define ROCEE_TX_CMQ_PI_REG 0x07010
#define ROCEE_TX_CMQ_CI_REG 0x07014
#define ROCEE_RX_CMQ_BASEADDR_L_REG 0x07018
#define ROCEE_RX_CMQ_BASEADDR_H_REG 0x0701c


@ -154,7 +154,7 @@ static int alloc_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
hr_cq->cons_index = 0;
hr_cq->arm_sn = 1;
atomic_set(&hr_cq->refcount, 1);
refcount_set(&hr_cq->refcount, 1);
init_completion(&hr_cq->free);
return 0;
@ -188,7 +188,7 @@ static void free_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
synchronize_irq(hr_dev->eq_table.eq[hr_cq->vector].irq);
/* wait for all interrupt processed */
if (atomic_dec_and_test(&hr_cq->refcount))
if (refcount_dec_and_test(&hr_cq->refcount))
complete(&hr_cq->free);
wait_for_completion(&hr_cq->free);
@ -202,13 +202,13 @@ static int alloc_cq_buf(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq,
struct hns_roce_buf_attr buf_attr = {};
int ret;
buf_attr.page_shift = hr_dev->caps.cqe_buf_pg_sz + HNS_HW_PAGE_SHIFT;
buf_attr.page_shift = hr_dev->caps.cqe_buf_pg_sz + PAGE_SHIFT;
buf_attr.region[0].size = hr_cq->cq_depth * hr_cq->cqe_size;
buf_attr.region[0].hopnum = hr_dev->caps.cqe_hop_num;
buf_attr.region_count = 1;
ret = hns_roce_mtr_create(hr_dev, &hr_cq->mtr, &buf_attr,
hr_dev->caps.cqe_ba_pg_sz + HNS_HW_PAGE_SHIFT,
hr_dev->caps.cqe_ba_pg_sz + PAGE_SHIFT,
udata, addr);
if (ret)
ibdev_err(ibdev, "failed to alloc CQ mtr, ret = %d.\n", ret);
@ -234,8 +234,7 @@ static int alloc_cq_db(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq,
udata->outlen >= offsetofend(typeof(*resp), cap_flags)) {
uctx = rdma_udata_to_drv_context(udata,
struct hns_roce_ucontext, ibucontext);
err = hns_roce_db_map_user(uctx, udata, addr,
&hr_cq->db);
err = hns_roce_db_map_user(uctx, addr, &hr_cq->db);
if (err)
return err;
hr_cq->flags |= HNS_ROCE_CQ_FLAG_RECORD_DB;
@ -481,7 +480,7 @@ void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type)
return;
}
atomic_inc(&hr_cq->refcount);
refcount_inc(&hr_cq->refcount);
ibcq = &hr_cq->ib_cq;
if (ibcq->event_handler) {
@ -491,7 +490,7 @@ void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type)
ibcq->event_handler(&event, ibcq->cq_context);
}
if (atomic_dec_and_test(&hr_cq->refcount))
if (refcount_dec_and_test(&hr_cq->refcount))
complete(&hr_cq->free);
}


@ -8,8 +8,7 @@
#include <rdma/ib_umem.h>
#include "hns_roce_device.h"
int hns_roce_db_map_user(struct hns_roce_ucontext *context,
struct ib_udata *udata, unsigned long virt,
int hns_roce_db_map_user(struct hns_roce_ucontext *context, unsigned long virt,
struct hns_roce_db *db)
{
unsigned long page_addr = virt & PAGE_MASK;


@ -47,8 +47,6 @@
#define HNS_ROCE_IB_MIN_SQ_STRIDE 6
#define HNS_ROCE_BA_SIZE (32 * 4096)
#define BA_BYTE_LEN 8
/* Hardware specification only for v1 engine */
@ -97,9 +95,6 @@
#define HNS_ROCE_HOP_NUM_0 0xff
#define BITMAP_NO_RR 0
#define BITMAP_RR 1
#define MR_TYPE_MR 0x00
#define MR_TYPE_FRMR 0x01
#define MR_TYPE_DMA 0x03
@ -258,14 +253,18 @@ struct hns_roce_bitmap {
unsigned long *table;
};
struct hns_roce_ida {
struct ida ida;
u32 min; /* Lowest ID to allocate. */
u32 max; /* Highest ID to allocate. */
};
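
This structure backs the conversion of several hns_roce bitmap allocators to the core IDA API. A sketch of the intended usage; the function names are hypothetical:

    #include <linux/idr.h>

    static int example_alloc_id(struct hns_roce_ida *id_table, u32 *idp)
    {
            int id = ida_alloc_range(&id_table->ida, id_table->min,
                                     id_table->max, GFP_KERNEL);

            if (id < 0)
                    return id;
            *idp = id;
            return 0;
    }

    static void example_free_id(struct hns_roce_ida *id_table, u32 id)
    {
            ida_free(&id_table->ida, id);
    }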
/* For Hardware Entry Memory */
struct hns_roce_hem_table {
/* HEM type: 0 = qpc, 1 = mtt, 2 = cqc, 3 = srq, 4 = other */
u32 type;
/* HEM array elment num */
unsigned long num_hem;
/* HEM entry record obj total num */
unsigned long num_obj;
/* Single obj size */
unsigned long obj_size;
unsigned long table_chunk_size;
@ -338,7 +337,7 @@ struct hns_roce_mw {
struct hns_roce_mr {
struct ib_mr ibmr;
u64 iova; /* MR's virtual orignal addr */
u64 iova; /* MR's virtual original addr */
u64 size; /* Address range of MR */
u32 key; /* Key of MR */
u32 pd; /* PD num of MR */
@ -352,7 +351,7 @@ struct hns_roce_mr {
};
struct hns_roce_mr_table {
struct hns_roce_bitmap mtpt_bitmap;
struct hns_roce_ida mtpt_ida;
struct hns_roce_hem_table mtpt_table;
};
@ -446,7 +445,7 @@ struct hns_roce_cq {
int cqe_size;
unsigned long cqn;
u32 vector;
atomic_t refcount;
refcount_t refcount;
struct completion free;
struct list_head sq_list; /* all qps on this send cq */
struct list_head rq_list; /* all qps on this recv cq */
@ -473,7 +472,7 @@ struct hns_roce_srq {
u32 xrcdn;
void __iomem *db_reg;
atomic_t refcount;
refcount_t refcount;
struct completion free;
struct hns_roce_mtr buf_mtr;
@ -555,7 +554,6 @@ struct hns_roce_cmd_context {
struct hns_roce_cmdq {
struct dma_pool *pool;
struct mutex hcr_mutex;
struct semaphore poll_sem;
/*
* Event mode: cmd register mutex protection,
@ -642,7 +640,7 @@ struct hns_roce_qp {
u32 xrcdn;
atomic_t refcount;
refcount_t refcount;
struct completion free;
struct hns_roce_sge sge;
@ -745,6 +743,7 @@ struct hns_roce_caps {
u32 max_rq_sg;
u32 max_extend_sg;
u32 num_qps;
u32 num_pi_qps;
u32 reserved_qps;
int num_qpc_timer;
int num_cqc_timer;
@ -854,8 +853,7 @@ struct hns_roce_caps {
u32 gmv_buf_pg_sz;
u32 gmv_hop_num;
u32 sl_num;
u32 tsq_buf_pg_sz;
u32 tpq_buf_pg_sz;
u32 llm_buf_pg_sz;
u32 chunk_sz; /* chunk size in non multihop mode */
u64 flags;
u16 default_ceq_max_cnt;
@ -963,8 +961,8 @@ struct hns_roce_dev {
void __iomem *priv_addr;
struct hns_roce_cmdq cmd;
struct hns_roce_bitmap pd_bitmap;
struct hns_roce_bitmap xrcd_bitmap;
struct hns_roce_ida pd_ida;
struct hns_roce_ida xrcd_ida;
struct hns_roce_uar_table uar_table;
struct hns_roce_mr_table mr_table;
struct hns_roce_cq_table cq_table;
@ -1052,7 +1050,7 @@ static inline void hns_roce_write64_k(__le32 val[2], void __iomem *dest)
static inline struct hns_roce_qp
*__hns_roce_qp_lookup(struct hns_roce_dev *hr_dev, u32 qpn)
{
return xa_load(&hr_dev->qp_table_xa, qpn & (hr_dev->caps.num_qps - 1));
return xa_load(&hr_dev->qp_table_xa, qpn);
}
static inline void *hns_roce_buf_offset(struct hns_roce_buf *buf,
@ -1062,14 +1060,18 @@ static inline void *hns_roce_buf_offset(struct hns_roce_buf *buf,
(offset & ((1 << buf->trunk_shift) - 1));
}
static inline dma_addr_t hns_roce_buf_page(struct hns_roce_buf *buf, u32 idx)
static inline dma_addr_t hns_roce_buf_dma_addr(struct hns_roce_buf *buf,
unsigned int offset)
{
unsigned int offset = idx << buf->page_shift;
return buf->trunk_list[offset >> buf->trunk_shift].map +
(offset & ((1 << buf->trunk_shift) - 1));
}
static inline dma_addr_t hns_roce_buf_page(struct hns_roce_buf *buf, u32 idx)
{
return hns_roce_buf_dma_addr(buf, idx << buf->page_shift);
}
#define hr_hw_page_align(x) ALIGN(x, 1 << HNS_HW_PAGE_SHIFT)
static inline u64 to_hr_hw_page_addr(u64 addr)
@ -1141,33 +1143,24 @@ void hns_roce_mtr_destroy(struct hns_roce_dev *hr_dev,
int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
dma_addr_t *pages, unsigned int page_cnt);
int hns_roce_init_pd_table(struct hns_roce_dev *hr_dev);
int hns_roce_init_mr_table(struct hns_roce_dev *hr_dev);
void hns_roce_init_pd_table(struct hns_roce_dev *hr_dev);
void hns_roce_init_mr_table(struct hns_roce_dev *hr_dev);
void hns_roce_init_cq_table(struct hns_roce_dev *hr_dev);
int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev);
void hns_roce_init_qp_table(struct hns_roce_dev *hr_dev);
int hns_roce_init_srq_table(struct hns_roce_dev *hr_dev);
int hns_roce_init_xrcd_table(struct hns_roce_dev *hr_dev);
void hns_roce_init_xrcd_table(struct hns_roce_dev *hr_dev);
void hns_roce_cleanup_pd_table(struct hns_roce_dev *hr_dev);
void hns_roce_cleanup_mr_table(struct hns_roce_dev *hr_dev);
void hns_roce_cleanup_eq_table(struct hns_roce_dev *hr_dev);
void hns_roce_cleanup_cq_table(struct hns_roce_dev *hr_dev);
void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev);
void hns_roce_cleanup_srq_table(struct hns_roce_dev *hr_dev);
void hns_roce_cleanup_xrcd_table(struct hns_roce_dev *hr_dev);
int hns_roce_bitmap_alloc(struct hns_roce_bitmap *bitmap, unsigned long *obj);
void hns_roce_bitmap_free(struct hns_roce_bitmap *bitmap, unsigned long obj,
int rr);
void hns_roce_bitmap_free(struct hns_roce_bitmap *bitmap, unsigned long obj);
int hns_roce_bitmap_init(struct hns_roce_bitmap *bitmap, u32 num, u32 mask,
u32 reserved_bot, u32 resetrved_top);
void hns_roce_bitmap_cleanup(struct hns_roce_bitmap *bitmap);
void hns_roce_cleanup_bitmap(struct hns_roce_dev *hr_dev);
int hns_roce_bitmap_alloc_range(struct hns_roce_bitmap *bitmap, int cnt,
int align, unsigned long *obj);
void hns_roce_bitmap_free_range(struct hns_roce_bitmap *bitmap,
unsigned long obj, int cnt,
int rr);
int hns_roce_create_ah(struct ib_ah *ah, struct rdma_ah_init_attr *init_attr,
struct ib_udata *udata);
@ -1206,9 +1199,10 @@ struct hns_roce_buf *hns_roce_buf_alloc(struct hns_roce_dev *hr_dev, u32 size,
u32 page_shift, u32 flags);
int hns_roce_get_kmem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
int buf_cnt, int start, struct hns_roce_buf *buf);
int buf_cnt, struct hns_roce_buf *buf,
unsigned int page_shift);
int hns_roce_get_umem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
int buf_cnt, int start, struct ib_umem *umem,
int buf_cnt, struct ib_umem *umem,
unsigned int page_shift);
int hns_roce_create_srq(struct ib_srq *srq,
@ -1248,8 +1242,7 @@ int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
struct ib_udata *udata);
int hns_roce_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata);
int hns_roce_db_map_user(struct hns_roce_ucontext *context,
struct ib_udata *udata, unsigned long virt,
int hns_roce_db_map_user(struct hns_roce_ucontext *context, unsigned long virt,
struct hns_roce_db *db);
void hns_roce_db_unmap_user(struct hns_roce_ucontext *context,
struct hns_roce_db *db);
@ -1259,6 +1252,7 @@ void hns_roce_free_db(struct hns_roce_dev *hr_dev, struct hns_roce_db *db);
void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn);
void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type);
void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp);
void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type);
void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type);
u8 hns_get_gid_index(struct hns_roce_dev *hr_dev, u32 port, int gid_index);

View File

@ -36,9 +36,6 @@
#include "hns_roce_hem.h"
#include "hns_roce_common.h"
#define DMA_ADDR_T_SHIFT 12
#define BT_BA_SHIFT 32
#define HEM_INDEX_BUF BIT(0)
#define HEM_INDEX_L0 BIT(1)
#define HEM_INDEX_L1 BIT(2)
@ -227,8 +224,7 @@ int hns_roce_calc_hem_mhop(struct hns_roce_dev *hr_dev,
chunk_ba_num = mhop->bt_chunk_size / BA_BYTE_LEN;
chunk_size = table->type < HEM_TYPE_MTT ? mhop->buf_chunk_size :
mhop->bt_chunk_size;
table_idx = (*obj & (table->num_obj - 1)) /
(chunk_size / table->obj_size);
table_idx = *obj / (chunk_size / table->obj_size);
switch (bt_num) {
case 3:
mhop->l2_idx = table_idx & (chunk_ba_num - 1);
@ -271,7 +267,6 @@ static struct hns_roce_hem *hns_roce_alloc_hem(struct hns_roce_dev *hr_dev,
if (!hem)
return NULL;
hem->refcount = 0;
INIT_LIST_HEAD(&hem->chunk_list);
order = get_order(hem_alloc_size);
@ -338,81 +333,6 @@ void hns_roce_free_hem(struct hns_roce_dev *hr_dev, struct hns_roce_hem *hem)
kfree(hem);
}
static int hns_roce_set_hem(struct hns_roce_dev *hr_dev,
struct hns_roce_hem_table *table, unsigned long obj)
{
spinlock_t *lock = &hr_dev->bt_cmd_lock;
struct device *dev = hr_dev->dev;
struct hns_roce_hem_iter iter;
void __iomem *bt_cmd;
__le32 bt_cmd_val[2];
__le32 bt_cmd_h = 0;
unsigned long flags;
__le32 bt_cmd_l;
int ret = 0;
u64 bt_ba;
long end;
/* Find the HEM(Hardware Entry Memory) entry */
unsigned long i = (obj & (table->num_obj - 1)) /
(table->table_chunk_size / table->obj_size);
switch (table->type) {
case HEM_TYPE_QPC:
case HEM_TYPE_MTPT:
case HEM_TYPE_CQC:
case HEM_TYPE_SRQC:
roce_set_field(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_MDF_M,
ROCEE_BT_CMD_H_ROCEE_BT_CMD_MDF_S, table->type);
break;
default:
return ret;
}
roce_set_field(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_IN_MDF_M,
ROCEE_BT_CMD_H_ROCEE_BT_CMD_IN_MDF_S, obj);
roce_set_bit(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_S, 0);
roce_set_bit(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_HW_SYNS_S, 1);
/* Currently iter only a chunk */
for (hns_roce_hem_first(table->hem[i], &iter);
!hns_roce_hem_last(&iter); hns_roce_hem_next(&iter)) {
bt_ba = hns_roce_hem_addr(&iter) >> DMA_ADDR_T_SHIFT;
spin_lock_irqsave(lock, flags);
bt_cmd = hr_dev->reg_base + ROCEE_BT_CMD_H_REG;
end = HW_SYNC_TIMEOUT_MSECS;
while (end > 0) {
if (!(readl(bt_cmd) >> BT_CMD_SYNC_SHIFT))
break;
mdelay(HW_SYNC_SLEEP_TIME_INTERVAL);
end -= HW_SYNC_SLEEP_TIME_INTERVAL;
}
if (end <= 0) {
dev_err(dev, "Write bt_cmd err,hw_sync is not zero.\n");
spin_unlock_irqrestore(lock, flags);
return -EBUSY;
}
bt_cmd_l = cpu_to_le32(bt_ba);
roce_set_field(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_BA_H_M,
ROCEE_BT_CMD_H_ROCEE_BT_CMD_BA_H_S,
bt_ba >> BT_BA_SHIFT);
bt_cmd_val[0] = bt_cmd_l;
bt_cmd_val[1] = bt_cmd_h;
hns_roce_write64_k(bt_cmd_val,
hr_dev->reg_base + ROCEE_BT_CMD_L_REG);
spin_unlock_irqrestore(lock, flags);
}
return ret;
}
static int calc_hem_config(struct hns_roce_dev *hr_dev,
struct hns_roce_hem_table *table, unsigned long obj,
struct hns_roce_hem_mhop *mhop,
@ -618,7 +538,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
mutex_lock(&table->mutex);
if (table->hem[index.buf]) {
++table->hem[index.buf]->refcount;
refcount_inc(&table->hem[index.buf]->refcount);
goto out;
}
@ -637,7 +557,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
}
}
++table->hem[index.buf]->refcount;
refcount_set(&table->hem[index.buf]->refcount, 1);
goto out;
err_alloc:
@ -657,13 +577,12 @@ int hns_roce_table_get(struct hns_roce_dev *hr_dev,
if (hns_roce_check_whether_mhop(hr_dev, table->type))
return hns_roce_table_mhop_get(hr_dev, table, obj);
i = (obj & (table->num_obj - 1)) / (table->table_chunk_size /
table->obj_size);
i = obj / (table->table_chunk_size / table->obj_size);
mutex_lock(&table->mutex);
if (table->hem[i]) {
++table->hem[i]->refcount;
refcount_inc(&table->hem[i]->refcount);
goto out;
}
@ -678,7 +597,7 @@ int hns_roce_table_get(struct hns_roce_dev *hr_dev,
}
/* Set HEM base address(128K/page, pa) to Hardware */
if (hns_roce_set_hem(hr_dev, table, obj)) {
if (hr_dev->hw->set_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT)) {
hns_roce_free_hem(hr_dev, table->hem[i]);
table->hem[i] = NULL;
ret = -ENODEV;
@ -686,7 +605,7 @@ int hns_roce_table_get(struct hns_roce_dev *hr_dev,
goto out;
}
++table->hem[i]->refcount;
refcount_set(&table->hem[i]->refcount, 1);
out:
mutex_unlock(&table->mutex);
return ret;
@ -753,11 +672,11 @@ static void hns_roce_table_mhop_put(struct hns_roce_dev *hr_dev,
return;
}
mutex_lock(&table->mutex);
if (check_refcount && (--table->hem[index.buf]->refcount > 0)) {
mutex_unlock(&table->mutex);
if (!check_refcount)
mutex_lock(&table->mutex);
else if (!refcount_dec_and_mutex_lock(&table->hem[index.buf]->refcount,
&table->mutex))
return;
}
clear_mhop_hem(hr_dev, table, obj, &mhop, &index);
free_mhop_hem(hr_dev, table, &mhop, &index);
@ -776,19 +695,17 @@ void hns_roce_table_put(struct hns_roce_dev *hr_dev,
return;
}
i = (obj & (table->num_obj - 1)) /
(table->table_chunk_size / table->obj_size);
i = obj / (table->table_chunk_size / table->obj_size);
mutex_lock(&table->mutex);
if (!refcount_dec_and_mutex_lock(&table->hem[i]->refcount,
&table->mutex))
return;
if (--table->hem[i]->refcount == 0) {
/* Clear HEM base address */
if (hr_dev->hw->clear_hem(hr_dev, table, obj, 0))
dev_warn(dev, "Clear HEM base address failed.\n");
if (hr_dev->hw->clear_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT))
dev_warn(dev, "failed to clear HEM base address.\n");
hns_roce_free_hem(hr_dev, table->hem[i]);
table->hem[i] = NULL;
}
hns_roce_free_hem(hr_dev, table->hem[i]);
table->hem[i] = NULL;
mutex_unlock(&table->mutex);
}
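
The put path above uses refcount_dec_and_mutex_lock(), which only takes the mutex when it drops the last reference, so the teardown runs exactly once and already holds the lock. The generic shape, with a hypothetical cached object:

    #include <linux/mutex.h>
    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct cached_obj {
            refcount_t refs;
            struct mutex lock;
            void *payload;
    };

    static void cached_obj_put(struct cached_obj *obj)
    {
            if (!refcount_dec_and_mutex_lock(&obj->refs, &obj->lock))
                    return;                 /* other users remain */

            kfree(obj->payload);            /* last reference: tear down */
            obj->payload = NULL;
            mutex_unlock(&obj->lock);
    }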
@ -816,8 +733,8 @@ void *hns_roce_table_find(struct hns_roce_dev *hr_dev,
if (!hns_roce_check_whether_mhop(hr_dev, table->type)) {
obj_per_chunk = table->table_chunk_size / table->obj_size;
hem = table->hem[(obj & (table->num_obj - 1)) / obj_per_chunk];
idx_offset = (obj & (table->num_obj - 1)) % obj_per_chunk;
hem = table->hem[obj / obj_per_chunk];
idx_offset = obj % obj_per_chunk;
dma_offset = offset = idx_offset * table->obj_size;
} else {
u32 seg_size = 64; /* 8 bytes per BA and 8 BA per segment */
@ -834,8 +751,7 @@ void *hns_roce_table_find(struct hns_roce_dev *hr_dev,
hem_idx = i;
hem = table->hem[hem_idx];
dma_offset = offset = (obj & (table->num_obj - 1)) * seg_size %
mhop.bt_chunk_size;
dma_offset = offset = obj * seg_size % mhop.bt_chunk_size;
if (mhop.hop_num == 2)
dma_offset = offset = 0;
}
@ -877,7 +793,7 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
if (!hns_roce_check_whether_mhop(hr_dev, type)) {
table->table_chunk_size = hr_dev->caps.chunk_sz;
obj_per_chunk = table->table_chunk_size / obj_size;
num_hem = (nobj + obj_per_chunk - 1) / obj_per_chunk;
num_hem = DIV_ROUND_UP(nobj, obj_per_chunk);
table->hem = kcalloc(num_hem, sizeof(*table->hem), GFP_KERNEL);
if (!table->hem)
@ -899,8 +815,9 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
hop_num = mhop.hop_num;
obj_per_chunk = buf_chunk_size / obj_size;
num_hem = (nobj + obj_per_chunk - 1) / obj_per_chunk;
num_hem = DIV_ROUND_UP(nobj, obj_per_chunk);
bt_chunk_num = bt_chunk_size / BA_BYTE_LEN;
if (type >= HEM_TYPE_MTT)
num_bt_l0 = bt_chunk_num;
@ -912,8 +829,7 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
if (check_whether_bt_num_3(type, hop_num)) {
unsigned long num_bt_l1;
num_bt_l1 = (num_hem + bt_chunk_num - 1) /
bt_chunk_num;
num_bt_l1 = DIV_ROUND_UP(num_hem, bt_chunk_num);
table->bt_l1 = kcalloc(num_bt_l1,
sizeof(*table->bt_l1),
GFP_KERNEL);
@ -945,7 +861,6 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
table->type = type;
table->num_hem = num_hem;
table->num_obj = nobj;
table->obj_size = obj_size;
table->lowmem = use_lowmem;
mutex_init(&table->mutex);
@ -1053,7 +968,7 @@ void hns_roce_cleanup_hem(struct hns_roce_dev *hr_dev)
hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtpt_table);
}
struct roce_hem_item {
struct hns_roce_hem_item {
struct list_head list; /* link all hems in the same bt level */
struct list_head sibling; /* link all hems in last hop for mtt */
void *addr;
@ -1063,12 +978,18 @@ struct roce_hem_item {
int end; /* end buf offset in this hem */
};
static struct roce_hem_item *hem_list_alloc_item(struct hns_roce_dev *hr_dev,
int start, int end,
int count, bool exist_bt,
int bt_level)
/* All HEM items are linked in a tree structure */
struct hns_roce_hem_head {
struct list_head branch[HNS_ROCE_MAX_BT_REGION];
struct list_head root;
struct list_head leaf;
};
static struct hns_roce_hem_item *
hem_list_alloc_item(struct hns_roce_dev *hr_dev, int start, int end, int count,
bool exist_bt, int bt_level)
{
struct roce_hem_item *hem;
struct hns_roce_hem_item *hem;
hem = kzalloc(sizeof(*hem), GFP_KERNEL);
if (!hem)
@ -1093,7 +1014,7 @@ static struct roce_hem_item *hem_list_alloc_item(struct hns_roce_dev *hr_dev,
}
static void hem_list_free_item(struct hns_roce_dev *hr_dev,
struct roce_hem_item *hem, bool exist_bt)
struct hns_roce_hem_item *hem, bool exist_bt)
{
if (exist_bt)
dma_free_coherent(hr_dev->dev, hem->count * BA_BYTE_LEN,
@ -1104,7 +1025,7 @@ static void hem_list_free_item(struct hns_roce_dev *hr_dev,
static void hem_list_free_all(struct hns_roce_dev *hr_dev,
struct list_head *head, bool exist_bt)
{
struct roce_hem_item *hem, *temp_hem;
struct hns_roce_hem_item *hem, *temp_hem;
list_for_each_entry_safe(hem, temp_hem, head, list) {
list_del(&hem->list);
@ -1120,24 +1041,24 @@ static void hem_list_link_bt(struct hns_roce_dev *hr_dev, void *base_addr,
/* assign L0 table address to hem from root bt */
static void hem_list_assign_bt(struct hns_roce_dev *hr_dev,
struct roce_hem_item *hem, void *cpu_addr,
struct hns_roce_hem_item *hem, void *cpu_addr,
u64 phy_addr)
{
hem->addr = cpu_addr;
hem->dma_addr = (dma_addr_t)phy_addr;
}
static inline bool hem_list_page_is_in_range(struct roce_hem_item *hem,
static inline bool hem_list_page_is_in_range(struct hns_roce_hem_item *hem,
int offset)
{
return (hem->start <= offset && offset <= hem->end);
}
static struct roce_hem_item *hem_list_search_item(struct list_head *ba_list,
int page_offset)
static struct hns_roce_hem_item *hem_list_search_item(struct list_head *ba_list,
int page_offset)
{
struct roce_hem_item *hem, *temp_hem;
struct roce_hem_item *found = NULL;
struct hns_roce_hem_item *hem, *temp_hem;
struct hns_roce_hem_item *found = NULL;
list_for_each_entry_safe(hem, temp_hem, ba_list, list) {
if (hem_list_page_is_in_range(hem, page_offset)) {
@ -1161,7 +1082,7 @@ static bool hem_list_is_bottom_bt(int hopnum, int bt_level)
return bt_level >= (hopnum ? hopnum - 1 : hopnum);
}
/**
/*
* calc base address entries num
* @hopnum: num of mutihop addressing
* @bt_level: base address table level
@ -1194,7 +1115,7 @@ static u32 hem_list_calc_ba_range(int hopnum, int bt_level, int unit)
return step;
}
/**
/*
* calc the root ba entries which could cover all regions
* @regions: buf region array
* @region_cnt: array size of @regions
@ -1227,9 +1148,9 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
int offset, struct list_head *mid_bt,
struct list_head *btm_bt)
{
struct roce_hem_item *hem_ptrs[HNS_ROCE_MAX_BT_LEVEL] = { NULL };
struct hns_roce_hem_item *hem_ptrs[HNS_ROCE_MAX_BT_LEVEL] = { NULL };
struct list_head temp_list[HNS_ROCE_MAX_BT_LEVEL];
struct roce_hem_item *cur, *pre;
struct hns_roce_hem_item *cur, *pre;
const int hopnum = r->hopnum;
int start_aligned;
int distance;
@ -1307,56 +1228,96 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
return ret;
}
static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
struct hns_roce_hem_list *hem_list, int unit,
const struct hns_roce_buf_region *regions,
int region_cnt)
static struct hns_roce_hem_item *
alloc_root_hem(struct hns_roce_dev *hr_dev, int unit, int *max_ba_num,
const struct hns_roce_buf_region *regions, int region_cnt)
{
struct list_head temp_list[HNS_ROCE_MAX_BT_REGION];
struct roce_hem_item *hem, *temp_hem, *root_hem;
const struct hns_roce_buf_region *r;
struct list_head temp_root;
struct list_head temp_btm;
void *cpu_base;
u64 phy_base;
int ret = 0;
struct hns_roce_hem_item *hem;
int ba_num;
int offset;
int total;
int step;
int i;
r = &regions[0];
root_hem = hem_list_search_item(&hem_list->root_bt, r->offset);
if (root_hem)
return 0;
ba_num = hns_roce_hem_list_calc_root_ba(regions, region_cnt, unit);
if (ba_num < 1)
return -ENOMEM;
return ERR_PTR(-ENOMEM);
if (ba_num > unit)
return -ENOBUFS;
return ERR_PTR(-ENOBUFS);
ba_num = min_t(int, ba_num, unit);
INIT_LIST_HEAD(&temp_root);
offset = r->offset;
offset = regions[0].offset;
/* indicate to last region */
r = &regions[region_cnt - 1];
root_hem = hem_list_alloc_item(hr_dev, offset, r->offset + r->count - 1,
ba_num, true, 0);
hem = hem_list_alloc_item(hr_dev, offset, r->offset + r->count - 1,
ba_num, true, 0);
if (!hem)
return ERR_PTR(-ENOMEM);
*max_ba_num = ba_num;
return hem;
}
static int alloc_fake_root_bt(struct hns_roce_dev *hr_dev, void *cpu_base,
u64 phy_base, const struct hns_roce_buf_region *r,
struct list_head *branch_head,
struct list_head *leaf_head)
{
struct hns_roce_hem_item *hem;
hem = hem_list_alloc_item(hr_dev, r->offset, r->offset + r->count - 1,
r->count, false, 0);
if (!hem)
return -ENOMEM;
hem_list_assign_bt(hr_dev, hem, cpu_base, phy_base);
list_add(&hem->list, branch_head);
list_add(&hem->sibling, leaf_head);
return r->count;
}
static int setup_middle_bt(struct hns_roce_dev *hr_dev, void *cpu_base,
int unit, const struct hns_roce_buf_region *r,
const struct list_head *branch_head)
{
struct hns_roce_hem_item *hem, *temp_hem;
int total = 0;
int offset;
int step;
step = hem_list_calc_ba_range(r->hopnum, 1, unit);
if (step < 1)
return -EINVAL;
/* if exist mid bt, link L1 to L0 */
list_for_each_entry_safe(hem, temp_hem, branch_head, list) {
offset = (hem->start - r->offset) / step * BA_BYTE_LEN;
hem_list_link_bt(hr_dev, cpu_base + offset, hem->dma_addr);
total++;
}
return total;
}
static int
setup_root_hem(struct hns_roce_dev *hr_dev, struct hns_roce_hem_list *hem_list,
int unit, int max_ba_num, struct hns_roce_hem_head *head,
const struct hns_roce_buf_region *regions, int region_cnt)
{
const struct hns_roce_buf_region *r;
struct hns_roce_hem_item *root_hem;
void *cpu_base;
u64 phy_base;
int i, total;
int ret;
root_hem = list_first_entry(&head->root,
struct hns_roce_hem_item, list);
if (!root_hem)
return -ENOMEM;
list_add(&root_hem->list, &temp_root);
hem_list->root_ba = root_hem->dma_addr;
INIT_LIST_HEAD(&temp_btm);
for (i = 0; i < region_cnt; i++)
INIT_LIST_HEAD(&temp_list[i]);
total = 0;
for (i = 0; i < region_cnt && total < ba_num; i++) {
for (i = 0; i < region_cnt && total < max_ba_num; i++) {
r = &regions[i];
if (!r->count)
continue;
@ -1368,48 +1329,64 @@ static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
/* if hopnum is 0 or 1, cut a new fake hem from the root bt
* which's address share to all regions.
*/
if (hem_list_is_bottom_bt(r->hopnum, 0)) {
hem = hem_list_alloc_item(hr_dev, r->offset,
r->offset + r->count - 1,
r->count, false, 0);
if (!hem) {
ret = -ENOMEM;
goto err_exit;
}
hem_list_assign_bt(hr_dev, hem, cpu_base, phy_base);
list_add(&hem->list, &temp_list[i]);
list_add(&hem->sibling, &temp_btm);
total += r->count;
} else {
step = hem_list_calc_ba_range(r->hopnum, 1, unit);
if (step < 1) {
ret = -EINVAL;
goto err_exit;
}
/* if exist mid bt, link L1 to L0 */
list_for_each_entry_safe(hem, temp_hem,
&hem_list->mid_bt[i][1], list) {
offset = (hem->start - r->offset) / step *
BA_BYTE_LEN;
hem_list_link_bt(hr_dev, cpu_base + offset,
hem->dma_addr);
total++;
}
}
if (hem_list_is_bottom_bt(r->hopnum, 0))
ret = alloc_fake_root_bt(hr_dev, cpu_base, phy_base, r,
&head->branch[i], &head->leaf);
else
ret = setup_middle_bt(hr_dev, cpu_base, unit, r,
&hem_list->mid_bt[i][1]);
if (ret < 0)
return ret;
total += ret;
}
list_splice(&temp_btm, &hem_list->btm_bt);
list_splice(&temp_root, &hem_list->root_bt);
list_splice(&head->leaf, &hem_list->btm_bt);
list_splice(&head->root, &hem_list->root_bt);
for (i = 0; i < region_cnt; i++)
list_splice(&temp_list[i], &hem_list->mid_bt[i][0]);
list_splice(&head->branch[i], &hem_list->mid_bt[i][0]);
return 0;
}
err_exit:
static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
struct hns_roce_hem_list *hem_list, int unit,
const struct hns_roce_buf_region *regions,
int region_cnt)
{
struct hns_roce_hem_item *root_hem;
struct hns_roce_hem_head head;
int max_ba_num;
int ret;
int i;
root_hem = hem_list_search_item(&hem_list->root_bt, regions[0].offset);
if (root_hem)
return 0;
max_ba_num = 0;
root_hem = alloc_root_hem(hr_dev, unit, &max_ba_num, regions,
region_cnt);
if (IS_ERR(root_hem))
return PTR_ERR(root_hem);
/* List head for storing all allocated HEM items */
INIT_LIST_HEAD(&head.root);
INIT_LIST_HEAD(&head.leaf);
for (i = 0; i < region_cnt; i++)
hem_list_free_all(hr_dev, &temp_list[i], false);
INIT_LIST_HEAD(&head.branch[i]);
hem_list_free_all(hr_dev, &temp_root, true);
hem_list->root_ba = root_hem->dma_addr;
list_add(&root_hem->list, &head.root);
ret = setup_root_hem(hr_dev, hem_list, unit, max_ba_num, &head, regions,
region_cnt);
if (ret) {
for (i = 0; i < region_cnt; i++)
hem_list_free_all(hr_dev, &head.branch[i], false);
hem_list_free_all(hr_dev, &head.root, true);
}
return ret;
}
@ -1495,7 +1472,7 @@ void *hns_roce_hem_list_find_mtt(struct hns_roce_dev *hr_dev,
int offset, int *mtt_cnt, u64 *phy_addr)
{
struct list_head *head = &hem_list->btm_bt;
struct roce_hem_item *hem, *temp_hem;
struct hns_roce_hem_item *hem, *temp_hem;
void *cpu_base = NULL;
u64 phy_base = 0;
int nr = 0;
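The bulk of this file's changes rename struct roce_hem_item to hns_roce_hem_item and split the old hem_list_alloc_root_bt() into alloc_root_hem(), alloc_fake_root_bt(), setup_middle_bt() and setup_root_hem(), with the temporary lists gathered into struct hns_roce_hem_head so the error path only has to walk head.branch[] and head.root. A hedged kernel-context sketch of that staging pattern, using made-up demo types rather than the driver's structures:

#include <linux/list.h>

#define DEMO_MAX_REGION	4	/* stands in for HNS_ROCE_MAX_BT_REGION */

struct demo_hem_head {
	struct list_head branch[DEMO_MAX_REGION];
	struct list_head root;
	struct list_head leaf;
};

static void demo_head_init(struct demo_hem_head *head, int region_cnt)
{
	int i;

	INIT_LIST_HEAD(&head->root);
	INIT_LIST_HEAD(&head->leaf);
	for (i = 0; i < region_cnt; i++)
		INIT_LIST_HEAD(&head->branch[i]);
}

/* success path: splice the staged items into the long-lived lists in one shot */
static void demo_head_commit(struct demo_hem_head *head,
			     struct list_head *root_bt, struct list_head *btm_bt,
			     struct list_head *mid_bt, int region_cnt)
{
	int i;

	list_splice(&head->leaf, btm_bt);
	list_splice(&head->root, root_bt);
	for (i = 0; i < region_cnt; i++)
		list_splice(&head->branch[i], &mid_bt[i]);
}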


@ -34,9 +34,7 @@
#ifndef _HNS_ROCE_HEM_H
#define _HNS_ROCE_HEM_H
#define HW_SYNC_SLEEP_TIME_INTERVAL 20
#define HW_SYNC_TIMEOUT_MSECS (25 * HW_SYNC_SLEEP_TIME_INTERVAL)
#define BT_CMD_SYNC_SHIFT 31
#define HEM_HOP_STEP_DIRECT 0xff
enum {
/* MAP HEM(Hardware Entry Memory) */
@ -74,11 +72,6 @@ enum {
(type >= HEM_TYPE_MTT && hop_num == 1) || \
(type >= HEM_TYPE_MTT && hop_num == HNS_ROCE_HOP_NUM_0))
enum {
HNS_ROCE_HEM_PAGE_SHIFT = 12,
HNS_ROCE_HEM_PAGE_SIZE = 1 << HNS_ROCE_HEM_PAGE_SHIFT,
};
struct hns_roce_hem_chunk {
struct list_head list;
int npages;
@ -88,8 +81,8 @@ struct hns_roce_hem_chunk {
};
struct hns_roce_hem {
struct list_head chunk_list;
int refcount;
struct list_head chunk_list;
refcount_t refcount;
};
struct hns_roce_hem_iter {


@ -462,6 +462,81 @@ static void hns_roce_set_db_event_mode(struct hns_roce_dev *hr_dev,
roce_write(hr_dev, ROCEE_GLB_CFG_REG, val);
}
static int hns_roce_v1_set_hem(struct hns_roce_dev *hr_dev,
struct hns_roce_hem_table *table, int obj,
int step_idx)
{
spinlock_t *lock = &hr_dev->bt_cmd_lock;
struct device *dev = hr_dev->dev;
struct hns_roce_hem_iter iter;
void __iomem *bt_cmd;
__le32 bt_cmd_val[2];
__le32 bt_cmd_h = 0;
unsigned long flags;
__le32 bt_cmd_l;
int ret = 0;
u64 bt_ba;
long end;
/* Find the HEM(Hardware Entry Memory) entry */
unsigned long i = obj / (table->table_chunk_size / table->obj_size);
switch (table->type) {
case HEM_TYPE_QPC:
case HEM_TYPE_MTPT:
case HEM_TYPE_CQC:
case HEM_TYPE_SRQC:
roce_set_field(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_MDF_M,
ROCEE_BT_CMD_H_ROCEE_BT_CMD_MDF_S, table->type);
break;
default:
return ret;
}
roce_set_field(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_IN_MDF_M,
ROCEE_BT_CMD_H_ROCEE_BT_CMD_IN_MDF_S, obj);
roce_set_bit(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_S, 0);
roce_set_bit(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_HW_SYNS_S, 1);
/* Currently iter only a chunk */
for (hns_roce_hem_first(table->hem[i], &iter);
!hns_roce_hem_last(&iter); hns_roce_hem_next(&iter)) {
bt_ba = hns_roce_hem_addr(&iter) >> HNS_HW_PAGE_SHIFT;
spin_lock_irqsave(lock, flags);
bt_cmd = hr_dev->reg_base + ROCEE_BT_CMD_H_REG;
end = HW_SYNC_TIMEOUT_MSECS;
while (end > 0) {
if (!(readl(bt_cmd) >> BT_CMD_SYNC_SHIFT))
break;
mdelay(HW_SYNC_SLEEP_TIME_INTERVAL);
end -= HW_SYNC_SLEEP_TIME_INTERVAL;
}
if (end <= 0) {
dev_err(dev, "Write bt_cmd err,hw_sync is not zero.\n");
spin_unlock_irqrestore(lock, flags);
return -EBUSY;
}
bt_cmd_l = cpu_to_le32(bt_ba);
roce_set_field(bt_cmd_h, ROCEE_BT_CMD_H_ROCEE_BT_CMD_BA_H_M,
ROCEE_BT_CMD_H_ROCEE_BT_CMD_BA_H_S,
upper_32_bits(bt_ba));
bt_cmd_val[0] = bt_cmd_l;
bt_cmd_val[1] = bt_cmd_h;
hns_roce_write64_k(bt_cmd_val,
hr_dev->reg_base + ROCEE_BT_CMD_L_REG);
spin_unlock_irqrestore(lock, flags);
}
return ret;
}
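The new hns_roce_v1_set_hem() callback above polls the BT command register until the hardware clears the sync bit, then writes the base address. The poll-until-clear pattern in isolation, as a hedged kernel-context sketch (the names below are stand-ins for the driver's HW_SYNC_* and BT_CMD_* constants):

#include <linux/io.h>
#include <linux/delay.h>
#include <linux/errno.h>

#define DEMO_SYNC_SHIFT		31			/* BT_CMD_SYNC_SHIFT */
#define DEMO_SLEEP_MS		20			/* HW_SYNC_SLEEP_TIME_INTERVAL */
#define DEMO_TIMEOUT_MS		(25 * DEMO_SLEEP_MS)	/* HW_SYNC_TIMEOUT_MSECS */

/* wait for the hardware sync bit to drop before issuing the next BT command */
static int demo_wait_hw_sync(void __iomem *bt_cmd)
{
	long remaining = DEMO_TIMEOUT_MS;

	while (remaining > 0) {
		if (!(readl(bt_cmd) >> DEMO_SYNC_SHIFT))
			return 0;		/* bit 31 cleared: HW is ready */
		mdelay(DEMO_SLEEP_MS);
		remaining -= DEMO_SLEEP_MS;
	}
	return -EBUSY;				/* hardware never acknowledged */
}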
static void hns_roce_set_db_ext_mode(struct hns_roce_dev *hr_dev, u32 sdb_mode,
u32 odb_mode)
{
@ -1123,8 +1198,7 @@ static int hns_roce_v1_dereg_mr(struct hns_roce_dev *hr_dev,
dev_dbg(dev, "Free mr 0x%x use 0x%x us.\n",
mr->key, jiffies_to_usecs(jiffies) - jiffies_to_usecs(start));
hns_roce_bitmap_free(&hr_dev->mr_table.mtpt_bitmap,
key_to_hw_index(mr->key), 0);
ida_free(&hr_dev->mr_table.mtpt_ida.ida, (int)key_to_hw_index(mr->key));
hns_roce_mtr_destroy(hr_dev, &mr->pbl_mtr);
kfree(mr);
@ -4352,6 +4426,7 @@ static const struct hns_roce_hw hns_roce_hw_v1 = {
.set_mtu = hns_roce_v1_set_mtu,
.write_mtpt = hns_roce_v1_write_mtpt,
.write_cqc = hns_roce_v1_write_cqc,
.set_hem = hns_roce_v1_set_hem,
.clear_hem = hns_roce_v1_clear_hem,
.modify_qp = hns_roce_v1_modify_qp,
.dereg_mr = hns_roce_v1_dereg_mr,


@ -1085,6 +1085,11 @@ struct hns_roce_db_table {
struct hns_roce_ext_db *ext_db;
};
#define HW_SYNC_SLEEP_TIME_INTERVAL 20
#define HW_SYNC_TIMEOUT_MSECS (25 * HW_SYNC_SLEEP_TIME_INTERVAL)
#define BT_CMD_SYNC_SHIFT 31
#define HNS_ROCE_BA_SIZE (32 * 4096)
struct hns_roce_bt_table {
struct hns_roce_buf_list qpc_buf;
struct hns_roce_buf_list mtpt_buf;

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@ -748,34 +748,16 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
goto err_uar_table_free;
}
ret = hns_roce_init_pd_table(hr_dev);
if (ret) {
dev_err(dev, "Failed to init protected domain table.\n");
goto err_uar_alloc_free;
}
hns_roce_init_pd_table(hr_dev);
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_XRC) {
ret = hns_roce_init_xrcd_table(hr_dev);
if (ret) {
dev_err(dev, "failed to init xrcd table, ret = %d.\n",
ret);
goto err_pd_table_free;
}
}
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_XRC)
hns_roce_init_xrcd_table(hr_dev);
ret = hns_roce_init_mr_table(hr_dev);
if (ret) {
dev_err(dev, "Failed to init memory region table.\n");
goto err_xrcd_table_free;
}
hns_roce_init_mr_table(hr_dev);
hns_roce_init_cq_table(hr_dev);
ret = hns_roce_init_qp_table(hr_dev);
if (ret) {
dev_err(dev, "Failed to init queue pair table.\n");
goto err_cq_table_free;
}
hns_roce_init_qp_table(hr_dev);
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) {
ret = hns_roce_init_srq_table(hr_dev);
@ -790,19 +772,13 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
err_qp_table_free:
hns_roce_cleanup_qp_table(hr_dev);
err_cq_table_free:
hns_roce_cleanup_cq_table(hr_dev);
hns_roce_cleanup_mr_table(hr_dev);
ida_destroy(&hr_dev->mr_table.mtpt_ida.ida);
err_xrcd_table_free:
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_XRC)
hns_roce_cleanup_xrcd_table(hr_dev);
ida_destroy(&hr_dev->xrcd_ida.ida);
err_pd_table_free:
hns_roce_cleanup_pd_table(hr_dev);
err_uar_alloc_free:
ida_destroy(&hr_dev->pd_ida.ida);
hns_roce_uar_free(hr_dev, &hr_dev->priv_uar);
err_uar_table_free:


@ -38,9 +38,9 @@
#include "hns_roce_cmd.h"
#include "hns_roce_hem.h"
static u32 hw_index_to_key(unsigned long ind)
static u32 hw_index_to_key(int ind)
{
return (u32)(ind >> 24) | (ind << 8);
return ((u32)ind >> 24) | ((u32)ind << 8);
}
unsigned long key_to_hw_index(u32 key)
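hw_index_to_key() now takes the int returned by the IDA and makes the 32-bit arithmetic explicit; the result is simply the index rotated left by eight bits, so the key maps back to the index losslessly. A userspace sketch of the round trip (rot_r8() below is an illustrative inverse, not necessarily the exact body of the driver's key_to_hw_index()):

#include <assert.h>
#include <stdint.h>

static uint32_t hw_index_to_key(int ind)
{
	return ((uint32_t)ind >> 24) | ((uint32_t)ind << 8);	/* rotate left by 8 */
}

static uint32_t rot_r8(uint32_t key)
{
	return (key << 24) | (key >> 8);			/* rotate right by 8 */
}

int main(void)
{
	for (int id = 0; id < (1 << 20); id++)
		assert(rot_r8(hw_index_to_key(id)) == (uint32_t)id);
	return 0;
}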
@ -68,22 +68,23 @@ int hns_roce_hw_destroy_mpt(struct hns_roce_dev *hr_dev,
static int alloc_mr_key(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr)
{
struct hns_roce_ida *mtpt_ida = &hr_dev->mr_table.mtpt_ida;
struct ib_device *ibdev = &hr_dev->ib_dev;
unsigned long obj = 0;
int err;
int id;
/* Allocate a key for mr from mr_table */
err = hns_roce_bitmap_alloc(&hr_dev->mr_table.mtpt_bitmap, &obj);
if (err) {
ibdev_err(ibdev,
"failed to alloc bitmap for MR key, ret = %d.\n",
err);
id = ida_alloc_range(&mtpt_ida->ida, mtpt_ida->min, mtpt_ida->max,
GFP_KERNEL);
if (id < 0) {
ibdev_err(ibdev, "failed to alloc id for MR key, id(%d)\n", id);
return -ENOMEM;
}
mr->key = hw_index_to_key(obj); /* MR key */
mr->key = hw_index_to_key(id); /* MR key */
err = hns_roce_table_get(hr_dev, &hr_dev->mr_table.mtpt_table, obj);
err = hns_roce_table_get(hr_dev, &hr_dev->mr_table.mtpt_table,
(unsigned long)id);
if (err) {
ibdev_err(ibdev, "failed to alloc mtpt, ret = %d.\n", err);
goto err_free_bitmap;
@ -91,7 +92,7 @@ static int alloc_mr_key(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr)
return 0;
err_free_bitmap:
hns_roce_bitmap_free(&hr_dev->mr_table.mtpt_bitmap, obj, BITMAP_NO_RR);
ida_free(&mtpt_ida->ida, id);
return err;
}
@ -100,7 +101,7 @@ static void free_mr_key(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr)
unsigned long obj = key_to_hw_index(mr->key);
hns_roce_table_put(hr_dev, &hr_dev->mr_table.mtpt_table, obj);
hns_roce_bitmap_free(&hr_dev->mr_table.mtpt_bitmap, obj, BITMAP_NO_RR);
ida_free(&hr_dev->mr_table.mtpt_ida.ida, (int)obj);
}
static int alloc_mr_pbl(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr,
@ -122,7 +123,7 @@ static int alloc_mr_pbl(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr,
buf_attr.mtt_only = is_fast;
err = hns_roce_mtr_create(hr_dev, &mr->pbl_mtr, &buf_attr,
hr_dev->caps.pbl_ba_pg_sz + HNS_HW_PAGE_SHIFT,
hr_dev->caps.pbl_ba_pg_sz + PAGE_SHIFT,
udata, start);
if (err)
ibdev_err(ibdev, "failed to alloc pbl mtr, ret = %d.\n", err);
@ -196,23 +197,13 @@ static int hns_roce_mr_enable(struct hns_roce_dev *hr_dev,
return ret;
}
int hns_roce_init_mr_table(struct hns_roce_dev *hr_dev)
void hns_roce_init_mr_table(struct hns_roce_dev *hr_dev)
{
struct hns_roce_mr_table *mr_table = &hr_dev->mr_table;
int ret;
struct hns_roce_ida *mtpt_ida = &hr_dev->mr_table.mtpt_ida;
ret = hns_roce_bitmap_init(&mr_table->mtpt_bitmap,
hr_dev->caps.num_mtpts,
hr_dev->caps.num_mtpts - 1,
hr_dev->caps.reserved_mrws, 0);
return ret;
}
void hns_roce_cleanup_mr_table(struct hns_roce_dev *hr_dev)
{
struct hns_roce_mr_table *mr_table = &hr_dev->mr_table;
hns_roce_bitmap_cleanup(&mr_table->mtpt_bitmap);
ida_init(&mtpt_ida->ida);
mtpt_ida->max = hr_dev->caps.num_mtpts - 1;
mtpt_ida->min = hr_dev->caps.reserved_mrws;
}
struct ib_mr *hns_roce_get_dma_mr(struct ib_pd *pd, int acc)
@ -503,8 +494,8 @@ static void hns_roce_mw_free(struct hns_roce_dev *hr_dev,
key_to_hw_index(mw->rkey));
}
hns_roce_bitmap_free(&hr_dev->mr_table.mtpt_bitmap,
key_to_hw_index(mw->rkey), BITMAP_NO_RR);
ida_free(&hr_dev->mr_table.mtpt_ida.ida,
(int)key_to_hw_index(mw->rkey));
}
static int hns_roce_mw_enable(struct hns_roce_dev *hr_dev,
@ -558,16 +549,21 @@ static int hns_roce_mw_enable(struct hns_roce_dev *hr_dev,
int hns_roce_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
{
struct hns_roce_dev *hr_dev = to_hr_dev(ibmw->device);
struct hns_roce_ida *mtpt_ida = &hr_dev->mr_table.mtpt_ida;
struct ib_device *ibdev = &hr_dev->ib_dev;
struct hns_roce_mw *mw = to_hr_mw(ibmw);
unsigned long index = 0;
int ret;
int id;
/* Allocate a key for mw from bitmap */
ret = hns_roce_bitmap_alloc(&hr_dev->mr_table.mtpt_bitmap, &index);
if (ret)
return ret;
/* Allocate a key for mw from mr_table */
id = ida_alloc_range(&mtpt_ida->ida, mtpt_ida->min, mtpt_ida->max,
GFP_KERNEL);
if (id < 0) {
ibdev_err(ibdev, "failed to alloc id for MW key, id(%d)\n", id);
return -ENOMEM;
}
mw->rkey = hw_index_to_key(index);
mw->rkey = hw_index_to_key(id);
ibmw->rkey = mw->rkey;
mw->pdn = to_hr_pd(ibmw->pd)->pdn;
@ -737,11 +733,11 @@ static int mtr_map_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
return -ENOMEM;
if (mtr->umem)
npage = hns_roce_get_umem_bufs(hr_dev, pages, page_count, 0,
npage = hns_roce_get_umem_bufs(hr_dev, pages, page_count,
mtr->umem, page_shift);
else
npage = hns_roce_get_kmem_bufs(hr_dev, pages, page_count, 0,
mtr->kmem);
npage = hns_roce_get_kmem_bufs(hr_dev, pages, page_count,
mtr->kmem, page_shift);
if (npage != page_count) {
ibdev_err(ibdev, "failed to get mtr page %d != %d.\n", npage,
@ -753,8 +749,8 @@ static int mtr_map_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
if (mtr->hem_cfg.is_direct && npage > 1) {
ret = mtr_check_direct_pages(pages, npage, page_shift);
if (ret) {
ibdev_err(ibdev, "failed to check %s mtr, idx = %d.\n",
mtr->umem ? "user" : "kernel", ret);
ibdev_err(ibdev, "failed to check %s page: %d / %d.\n",
mtr->umem ? "umtr" : "kmtr", ret, npage);
ret = -ENOBUFS;
goto err_alloc_list;
}
@ -776,7 +772,7 @@ int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
struct ib_device *ibdev = &hr_dev->ib_dev;
struct hns_roce_buf_region *r;
unsigned int i, mapped_cnt;
int ret;
int ret = 0;
/*
* Only use the first page address as root ba when hopnum is 0, this
@ -799,7 +795,7 @@ int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
if (r->offset + r->count > page_cnt) {
ret = -EINVAL;
ibdev_err(ibdev,
"failed to check mtr%u end %u + %u, max %u.\n",
"failed to check mtr%u count %u + %u > %u.\n",
i, r->offset, r->count, page_cnt);
return ret;
}
@ -992,7 +988,7 @@ int hns_roce_mtr_create(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
&buf_page_shift,
udata ? user_addr & ~PAGE_MASK : 0);
if (buf_page_cnt < 1 || buf_page_shift < HNS_HW_PAGE_SHIFT) {
ibdev_err(ibdev, "failed to init mtr cfg, count %d shift %d.\n",
ibdev_err(ibdev, "failed to init mtr cfg, count %d shift %u.\n",
buf_page_cnt, buf_page_shift);
return -EINVAL;
}


@ -34,39 +34,31 @@
#include <linux/pci.h>
#include "hns_roce_device.h"
static int hns_roce_pd_alloc(struct hns_roce_dev *hr_dev, unsigned long *pdn)
void hns_roce_init_pd_table(struct hns_roce_dev *hr_dev)
{
return hns_roce_bitmap_alloc(&hr_dev->pd_bitmap, pdn) ? -ENOMEM : 0;
}
struct hns_roce_ida *pd_ida = &hr_dev->pd_ida;
static void hns_roce_pd_free(struct hns_roce_dev *hr_dev, unsigned long pdn)
{
hns_roce_bitmap_free(&hr_dev->pd_bitmap, pdn, BITMAP_NO_RR);
}
int hns_roce_init_pd_table(struct hns_roce_dev *hr_dev)
{
return hns_roce_bitmap_init(&hr_dev->pd_bitmap, hr_dev->caps.num_pds,
hr_dev->caps.num_pds - 1,
hr_dev->caps.reserved_pds, 0);
}
void hns_roce_cleanup_pd_table(struct hns_roce_dev *hr_dev)
{
hns_roce_bitmap_cleanup(&hr_dev->pd_bitmap);
ida_init(&pd_ida->ida);
pd_ida->max = hr_dev->caps.num_pds - 1;
pd_ida->min = hr_dev->caps.reserved_pds;
}
int hns_roce_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
{
struct ib_device *ib_dev = ibpd->device;
struct hns_roce_dev *hr_dev = to_hr_dev(ib_dev);
struct hns_roce_ida *pd_ida = &hr_dev->pd_ida;
struct hns_roce_pd *pd = to_hr_pd(ibpd);
int ret;
int ret = 0;
int id;
ret = hns_roce_pd_alloc(to_hr_dev(ib_dev), &pd->pdn);
if (ret) {
ibdev_err(ib_dev, "failed to alloc pd, ret = %d.\n", ret);
return ret;
id = ida_alloc_range(&pd_ida->ida, pd_ida->min, pd_ida->max,
GFP_KERNEL);
if (id < 0) {
ibdev_err(ib_dev, "failed to alloc pd, id = %d.\n", id);
return -ENOMEM;
}
pd->pdn = (unsigned long)id;
if (udata) {
struct hns_roce_ib_alloc_pd_resp resp = {.pdn = pd->pdn};
@ -74,7 +66,7 @@ int hns_roce_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
ret = ib_copy_to_udata(udata, &resp,
min(udata->outlen, sizeof(resp)));
if (ret) {
hns_roce_pd_free(to_hr_dev(ib_dev), pd->pdn);
ida_free(&pd_ida->ida, id);
ibdev_err(ib_dev, "failed to copy to udata, ret = %d\n", ret);
}
}
@ -84,7 +76,10 @@ int hns_roce_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
int hns_roce_dealloc_pd(struct ib_pd *pd, struct ib_udata *udata)
{
hns_roce_pd_free(to_hr_dev(pd->device), to_hr_pd(pd)->pdn);
struct hns_roce_dev *hr_dev = to_hr_dev(pd->device);
ida_free(&hr_dev->pd_ida.ida, (int)to_hr_pd(pd)->pdn);
return 0;
}
@ -121,8 +116,7 @@ int hns_roce_uar_alloc(struct hns_roce_dev *hr_dev, struct hns_roce_uar *uar)
void hns_roce_uar_free(struct hns_roce_dev *hr_dev, struct hns_roce_uar *uar)
{
hns_roce_bitmap_free(&hr_dev->uar_table.bitmap, uar->logic_idx,
BITMAP_NO_RR);
hns_roce_bitmap_free(&hr_dev->uar_table.bitmap, uar->logic_idx);
}
int hns_roce_init_uar_table(struct hns_roce_dev *hr_dev)
@ -140,35 +134,27 @@ void hns_roce_cleanup_uar_table(struct hns_roce_dev *hr_dev)
static int hns_roce_xrcd_alloc(struct hns_roce_dev *hr_dev, u32 *xrcdn)
{
unsigned long obj;
int ret;
struct hns_roce_ida *xrcd_ida = &hr_dev->xrcd_ida;
int id;
ret = hns_roce_bitmap_alloc(&hr_dev->xrcd_bitmap, &obj);
if (ret)
return ret;
*xrcdn = obj;
id = ida_alloc_range(&xrcd_ida->ida, xrcd_ida->min, xrcd_ida->max,
GFP_KERNEL);
if (id < 0) {
ibdev_err(&hr_dev->ib_dev, "failed to alloc xrcdn(%d).\n", id);
return -ENOMEM;
}
*xrcdn = (u32)id;
return 0;
}
static void hns_roce_xrcd_free(struct hns_roce_dev *hr_dev,
u32 xrcdn)
void hns_roce_init_xrcd_table(struct hns_roce_dev *hr_dev)
{
hns_roce_bitmap_free(&hr_dev->xrcd_bitmap, xrcdn, BITMAP_NO_RR);
}
struct hns_roce_ida *xrcd_ida = &hr_dev->xrcd_ida;
int hns_roce_init_xrcd_table(struct hns_roce_dev *hr_dev)
{
return hns_roce_bitmap_init(&hr_dev->xrcd_bitmap,
hr_dev->caps.num_xrcds,
hr_dev->caps.num_xrcds - 1,
hr_dev->caps.reserved_xrcds, 0);
}
void hns_roce_cleanup_xrcd_table(struct hns_roce_dev *hr_dev)
{
hns_roce_bitmap_cleanup(&hr_dev->xrcd_bitmap);
ida_init(&xrcd_ida->ida);
xrcd_ida->max = hr_dev->caps.num_xrcds - 1;
xrcd_ida->min = hr_dev->caps.reserved_xrcds;
}
int hns_roce_alloc_xrcd(struct ib_xrcd *ib_xrcd, struct ib_udata *udata)
@ -181,18 +167,18 @@ int hns_roce_alloc_xrcd(struct ib_xrcd *ib_xrcd, struct ib_udata *udata)
return -EINVAL;
ret = hns_roce_xrcd_alloc(hr_dev, &xrcd->xrcdn);
if (ret) {
dev_err(hr_dev->dev, "failed to alloc xrcdn, ret = %d.\n", ret);
if (ret)
return ret;
}
return 0;
}
int hns_roce_dealloc_xrcd(struct ib_xrcd *ib_xrcd, struct ib_udata *udata)
{
hns_roce_xrcd_free(to_hr_dev(ib_xrcd->device),
to_hr_xrcd(ib_xrcd)->xrcdn);
struct hns_roce_dev *hr_dev = to_hr_dev(ib_xrcd->device);
u32 xrcdn = to_hr_xrcd(ib_xrcd)->xrcdn;
ida_free(&hr_dev->xrcd_ida.ida, (int)xrcdn);
return 0;
}
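The PD and XRCD allocators above (like the MR key allocator earlier) drop the driver-private bitmap helpers in favour of the generic IDA, and because an IDA needs no up-front sizing the *_init_*_table() helpers can now return void, with ida_destroy() handling teardown. A hedged kernel-context sketch of the pattern, with 'reserved' and 'max' standing in for the caps-derived bounds (e.g. caps.reserved_pds and caps.num_pds - 1):

#include <linux/idr.h>
#include <linux/gfp.h>
#include <linux/errno.h>

static DEFINE_IDA(demo_ida);

static int demo_alloc(unsigned int reserved, unsigned int max)
{
	int id;

	/* IDs below 'reserved' are never handed out */
	id = ida_alloc_range(&demo_ida, reserved, max, GFP_KERNEL);
	if (id < 0)
		return -ENOMEM;		/* matches the driver's error mapping */
	return id;
}

static void demo_free(int id)
{
	ida_free(&demo_ida, id);	/* ida_destroy() later reclaims leftover state */
}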


@ -65,7 +65,7 @@ static void flush_work_handle(struct work_struct *work)
* make sure we signal QP destroy leg that flush QP was completed
* so that it can safely proceed ahead now and destroy QP
*/
if (atomic_dec_and_test(&hr_qp->refcount))
if (refcount_dec_and_test(&hr_qp->refcount))
complete(&hr_qp->free);
}
@ -75,10 +75,25 @@ void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
flush_work->hr_dev = hr_dev;
INIT_WORK(&flush_work->work, flush_work_handle);
atomic_inc(&hr_qp->refcount);
refcount_inc(&hr_qp->refcount);
queue_work(hr_dev->irq_workq, &flush_work->work);
}
void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp)
{
/*
* Hip08 hardware cannot flush the WQEs in SQ/RQ if the QP state
* gets into errored mode. Hence, as a workaround to this
* hardware limitation, driver needs to assist in flushing. But
* the flushing operation uses mailbox to convey the QP state to
* the hardware and which can sleep due to the mutex protection
* around the mailbox calls. Hence, use the deferred flush for
* now.
*/
if (!test_and_set_bit(HNS_ROCE_FLUSH_FLAG, &qp->flush_flag))
init_flush_work(dev, qp);
}
void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
{
struct device *dev = hr_dev->dev;
@ -87,7 +102,7 @@ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
xa_lock(&hr_dev->qp_table_xa);
qp = __hns_roce_qp_lookup(hr_dev, qpn);
if (qp)
atomic_inc(&qp->refcount);
refcount_inc(&qp->refcount);
xa_unlock(&hr_dev->qp_table_xa);
if (!qp) {
@ -102,13 +117,13 @@ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
event_type == HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION ||
event_type == HNS_ROCE_EVENT_TYPE_INVALID_XRCETH)) {
qp->state = IB_QPS_ERR;
if (!test_and_set_bit(HNS_ROCE_FLUSH_FLAG, &qp->flush_flag))
init_flush_work(hr_dev, qp);
flush_cqe(hr_dev, qp);
}
qp->event(qp, (enum hns_roce_event)event_type);
if (atomic_dec_and_test(&qp->refcount))
if (refcount_dec_and_test(&qp->refcount))
complete(&qp->free);
}
@ -379,7 +394,7 @@ void hns_roce_qp_remove(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
list_del(&hr_qp->rq_node);
xa_lock_irqsave(xa, flags);
__xa_erase(xa, hr_qp->qpn & (hr_dev->caps.num_qps - 1));
__xa_erase(xa, hr_qp->qpn);
xa_unlock_irqrestore(xa, flags);
}
@ -648,9 +663,7 @@ static int set_kernel_sq_size(struct hns_roce_dev *hr_dev,
if (!cap->max_send_wr || cap->max_send_wr > hr_dev->caps.max_wqes ||
cap->max_send_sge > hr_dev->caps.max_sq_sg) {
ibdev_err(ibdev,
"failed to check SQ WR or SGE num, ret = %d.\n",
-EINVAL);
ibdev_err(ibdev, "failed to check SQ WR or SGE num.\n");
return -EINVAL;
}
@ -761,7 +774,7 @@ static int alloc_qp_buf(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
goto err_inline;
}
ret = hns_roce_mtr_create(hr_dev, &hr_qp->mtr, &buf_attr,
HNS_HW_PAGE_SHIFT + hr_dev->caps.mtt_ba_pg_sz,
PAGE_SHIFT + hr_dev->caps.mtt_ba_pg_sz,
udata, addr);
if (ret) {
ibdev_err(ibdev, "failed to create WQE mtr, ret = %d.\n", ret);
@ -826,7 +839,7 @@ static int alloc_qp_db(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
if (udata) {
if (user_qp_has_sdb(hr_dev, init_attr, udata, resp, ucmd)) {
ret = hns_roce_db_map_user(uctx, udata, ucmd->sdb_addr,
ret = hns_roce_db_map_user(uctx, ucmd->sdb_addr,
&hr_qp->sdb);
if (ret) {
ibdev_err(ibdev,
@ -839,7 +852,7 @@ static int alloc_qp_db(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
}
if (user_qp_has_rdb(hr_dev, init_attr, udata, resp)) {
ret = hns_roce_db_map_user(uctx, udata, ucmd->db_addr,
ret = hns_roce_db_map_user(uctx, ucmd->db_addr,
&hr_qp->rdb);
if (ret) {
ibdev_err(ibdev,
@ -1076,7 +1089,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
hr_qp->ibqp.qp_num = hr_qp->qpn;
hr_qp->event = hns_roce_ib_qp_event;
atomic_set(&hr_qp->refcount, 1);
refcount_set(&hr_qp->refcount, 1);
init_completion(&hr_qp->free);
return 0;
@ -1099,7 +1112,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
void hns_roce_qp_destroy(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
struct ib_udata *udata)
{
if (atomic_dec_and_test(&hr_qp->refcount))
if (refcount_dec_and_test(&hr_qp->refcount))
complete(&hr_qp->free);
wait_for_completion(&hr_qp->free);
@ -1416,7 +1429,7 @@ bool hns_roce_wq_overflow(struct hns_roce_wq *hr_wq, u32 nreq,
return cur + nreq >= hr_wq->wqe_cnt;
}
int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
void hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
{
struct hns_roce_qp_table *qp_table = &hr_dev->qp_table;
unsigned int reserved_from_bot;
@ -1439,8 +1452,6 @@ int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
HNS_ROCE_QP_BANK_NUM - 1;
hr_dev->qp_table.bank[i].next = hr_dev->qp_table.bank[i].min;
}
return 0;
}
void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev)
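The QP reference counting above (and the SRQ counting in the next file) moves from atomic_t to refcount_t, which saturates and warns on overflow or underflow instead of silently wrapping. A minimal sketch of the lifetime pattern on a stand-in object rather than struct hns_roce_qp:

#include <linux/refcount.h>
#include <linux/completion.h>

struct demo_obj {
	refcount_t refcount;
	struct completion free;
};

static void demo_obj_init(struct demo_obj *obj)
{
	refcount_set(&obj->refcount, 1);	/* creator holds one reference */
	init_completion(&obj->free);
}

static void demo_obj_get(struct demo_obj *obj)
{
	refcount_inc(&obj->refcount);		/* e.g. event and flush-work users */
}

static void demo_obj_put(struct demo_obj *obj)
{
	/* last put signals the destroy path waiting in wait_for_completion() */
	if (refcount_dec_and_test(&obj->refcount))
		complete(&obj->free);
}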


@ -17,7 +17,7 @@ void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type)
xa_lock(&srq_table->xa);
srq = xa_load(&srq_table->xa, srqn & (hr_dev->caps.num_srqs - 1));
if (srq)
atomic_inc(&srq->refcount);
refcount_inc(&srq->refcount);
xa_unlock(&srq_table->xa);
if (!srq) {
@ -27,7 +27,7 @@ void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type)
srq->event(srq, event_type);
if (atomic_dec_and_test(&srq->refcount))
if (refcount_dec_and_test(&srq->refcount))
complete(&srq->free);
}
@ -132,7 +132,7 @@ static int alloc_srqc(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq)
err_put:
hns_roce_table_put(hr_dev, &srq_table->table, srq->srqn);
err_out:
hns_roce_bitmap_free(&srq_table->bitmap, srq->srqn, BITMAP_NO_RR);
hns_roce_bitmap_free(&srq_table->bitmap, srq->srqn);
return ret;
}
@ -149,12 +149,12 @@ static void free_srqc(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq)
xa_erase(&srq_table->xa, srq->srqn);
if (atomic_dec_and_test(&srq->refcount))
if (refcount_dec_and_test(&srq->refcount))
complete(&srq->free);
wait_for_completion(&srq->free);
hns_roce_table_put(hr_dev, &srq_table->table, srq->srqn);
hns_roce_bitmap_free(&srq_table->bitmap, srq->srqn, BITMAP_NO_RR);
hns_roce_bitmap_free(&srq_table->bitmap, srq->srqn);
}
static int alloc_srq_idx(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
@ -167,14 +167,14 @@ static int alloc_srq_idx(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
srq->idx_que.entry_shift = ilog2(HNS_ROCE_IDX_QUE_ENTRY_SZ);
buf_attr.page_shift = hr_dev->caps.idx_buf_pg_sz + HNS_HW_PAGE_SHIFT;
buf_attr.page_shift = hr_dev->caps.idx_buf_pg_sz + PAGE_SHIFT;
buf_attr.region[0].size = to_hr_hem_entries_size(srq->wqe_cnt,
srq->idx_que.entry_shift);
buf_attr.region[0].hopnum = hr_dev->caps.idx_hop_num;
buf_attr.region_count = 1;
ret = hns_roce_mtr_create(hr_dev, &idx_que->mtr, &buf_attr,
hr_dev->caps.idx_ba_pg_sz + HNS_HW_PAGE_SHIFT,
hr_dev->caps.idx_ba_pg_sz + PAGE_SHIFT,
udata, addr);
if (ret) {
ibdev_err(ibdev,
@ -222,15 +222,15 @@ static int alloc_srq_wqe_buf(struct hns_roce_dev *hr_dev,
HNS_ROCE_SGE_SIZE *
srq->max_gs)));
buf_attr.page_shift = hr_dev->caps.srqwqe_buf_pg_sz + HNS_HW_PAGE_SHIFT;
buf_attr.page_shift = hr_dev->caps.srqwqe_buf_pg_sz + PAGE_SHIFT;
buf_attr.region[0].size = to_hr_hem_entries_size(srq->wqe_cnt,
srq->wqe_shift);
buf_attr.region[0].hopnum = hr_dev->caps.srqwqe_hop_num;
buf_attr.region_count = 1;
ret = hns_roce_mtr_create(hr_dev, &srq->buf_mtr, &buf_attr,
hr_dev->caps.srqwqe_ba_pg_sz +
HNS_HW_PAGE_SHIFT, udata, addr);
hr_dev->caps.srqwqe_ba_pg_sz + PAGE_SHIFT,
udata, addr);
if (ret)
ibdev_err(ibdev,
"failed to alloc SRQ buf mtr, ret = %d.\n", ret);
@ -417,7 +417,7 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
srq->db_reg = hr_dev->reg_base + SRQ_DB_REG;
srq->event = hns_roce_ib_srq_event;
atomic_set(&srq->refcount, 1);
refcount_set(&srq->refcount, 1);
init_completion(&srq->free);
return 0;
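Several of the mtr and buffer setups in this series build their page-shift argument from PAGE_SHIFT rather than the fixed HNS_HW_PAGE_SHIFT; the *_pg_sz capability fields act as exponents added on top of the base page shift. A small userspace illustration with made-up values:

#include <stdio.h>

int main(void)
{
	unsigned int page_shift = 12;		/* PAGE_SHIFT on a 4 KiB-page kernel */
	unsigned int idx_buf_pg_sz = 0;		/* illustrative capability exponent */
	unsigned long buf_page_size = 1UL << (idx_buf_pg_sz + page_shift);

	/* a pg_sz of 0 means one base page per buffer page; each +1 doubles it */
	printf("buffer page size = %lu bytes\n", buf_page_size);	/* 4096 */
	return 0;
}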


@ -1,9 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
config INFINIBAND_I40IW
tristate "Intel(R) Ethernet X722 iWARP Driver"
depends on INET && I40E
depends on IPV6 || !IPV6
depends on PCI
select GENERIC_ALLOCATOR
help
Intel(R) Ethernet X722 iWARP Driver


@ -1,9 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_INFINIBAND_I40IW) += i40iw.o
i40iw-objs :=\
i40iw_cm.o i40iw_ctrl.o \
i40iw_hmc.o i40iw_hw.o i40iw_main.o \
i40iw_pble.o i40iw_puda.o i40iw_uk.o i40iw_utils.o \
i40iw_verbs.o i40iw_virtchnl.o i40iw_vf.o


@ -1,602 +0,0 @@
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#ifndef I40IW_IW_H
#define I40IW_IW_H
#include <linux/netdevice.h>
#include <linux/inetdevice.h>
#include <linux/spinlock.h>
#include <linux/kernel.h>
#include <linux/delay.h>
#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/workqueue.h>
#include <linux/slab.h>
#include <linux/io.h>
#include <linux/crc32c.h>
#include <linux/net/intel/i40e_client.h>
#include <rdma/ib_smi.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_pack.h>
#include <rdma/rdma_cm.h>
#include <rdma/iw_cm.h>
#include <crypto/hash.h>
#include "i40iw_status.h"
#include "i40iw_osdep.h"
#include "i40iw_d.h"
#include "i40iw_hmc.h"
#include "i40iw_type.h"
#include "i40iw_p.h"
#include <rdma/i40iw-abi.h>
#include "i40iw_pble.h"
#include "i40iw_verbs.h"
#include "i40iw_cm.h"
#include "i40iw_user.h"
#include "i40iw_puda.h"
#define I40IW_FW_VER_DEFAULT 2
#define I40IW_HW_VERSION 2
#define I40IW_ARP_ADD 1
#define I40IW_ARP_DELETE 2
#define I40IW_ARP_RESOLVE 3
#define I40IW_MACIP_ADD 1
#define I40IW_MACIP_DELETE 2
#define IW_CCQ_SIZE (I40IW_CQP_SW_SQSIZE_2048 + 1)
#define IW_CEQ_SIZE 2048
#define IW_AEQ_SIZE 2048
#define RX_BUF_SIZE (1536 + 8)
#define IW_REG0_SIZE (4 * 1024)
#define IW_TX_TIMEOUT (6 * HZ)
#define IW_FIRST_QPN 1
#define IW_SW_CONTEXT_ALIGN 1024
#define MAX_DPC_ITERATIONS 128
#define I40IW_EVENT_TIMEOUT 100000
#define I40IW_VCHNL_EVENT_TIMEOUT 100000
#define I40IW_NO_VLAN 0xffff
#define I40IW_NO_QSET 0xffff
/* access to mcast filter list */
#define IW_ADD_MCAST false
#define IW_DEL_MCAST true
#define I40IW_DRV_OPT_ENABLE_MPA_VER_0 0x00000001
#define I40IW_DRV_OPT_DISABLE_MPA_CRC 0x00000002
#define I40IW_DRV_OPT_DISABLE_FIRST_WRITE 0x00000004
#define I40IW_DRV_OPT_DISABLE_INTF 0x00000008
#define I40IW_DRV_OPT_ENABLE_MSI 0x00000010
#define I40IW_DRV_OPT_DUAL_LOGICAL_PORT 0x00000020
#define I40IW_DRV_OPT_NO_INLINE_DATA 0x00000080
#define I40IW_DRV_OPT_DISABLE_INT_MOD 0x00000100
#define I40IW_DRV_OPT_DISABLE_VIRT_WQ 0x00000200
#define I40IW_DRV_OPT_ENABLE_PAU 0x00000400
#define I40IW_DRV_OPT_MCAST_LOGPORT_MAP 0x00000800
#define IW_HMC_OBJ_TYPE_NUM ARRAY_SIZE(iw_hmc_obj_types)
#define IW_CFG_FPM_QP_COUNT 32768
#define I40IW_MAX_PAGES_PER_FMR 512
#define I40IW_MIN_PAGES_PER_FMR 1
#define I40IW_CQP_COMPL_RQ_WQE_FLUSHED 2
#define I40IW_CQP_COMPL_SQ_WQE_FLUSHED 3
#define I40IW_CQP_COMPL_RQ_SQ_WQE_FLUSHED 4
struct i40iw_cqp_compl_info {
u32 op_ret_val;
u16 maj_err_code;
u16 min_err_code;
bool error;
u8 op_code;
};
#define i40iw_pr_err(fmt, args ...) pr_err("%s: "fmt, __func__, ## args)
#define i40iw_pr_info(fmt, args ...) pr_info("%s: " fmt, __func__, ## args)
#define i40iw_pr_warn(fmt, args ...) pr_warn("%s: " fmt, __func__, ## args)
struct i40iw_cqp_request {
struct cqp_commands_info info;
wait_queue_head_t waitq;
struct list_head list;
atomic_t refcount;
void (*callback_fcn)(struct i40iw_cqp_request*, u32);
void *param;
struct i40iw_cqp_compl_info compl_info;
bool waiting;
bool request_done;
bool dynamic;
};
struct i40iw_cqp {
struct i40iw_sc_cqp sc_cqp;
spinlock_t req_lock; /*cqp request list */
wait_queue_head_t waitq;
struct i40iw_dma_mem sq;
struct i40iw_dma_mem host_ctx;
u64 *scratch_array;
struct i40iw_cqp_request *cqp_requests;
struct list_head cqp_avail_reqs;
struct list_head cqp_pending_reqs;
};
struct i40iw_device;
struct i40iw_ccq {
struct i40iw_sc_cq sc_cq;
spinlock_t lock; /* ccq control */
wait_queue_head_t waitq;
struct i40iw_dma_mem mem_cq;
struct i40iw_dma_mem shadow_area;
};
struct i40iw_ceq {
struct i40iw_sc_ceq sc_ceq;
struct i40iw_dma_mem mem;
u32 irq;
u32 msix_idx;
struct i40iw_device *iwdev;
struct tasklet_struct dpc_tasklet;
};
struct i40iw_aeq {
struct i40iw_sc_aeq sc_aeq;
struct i40iw_dma_mem mem;
};
struct i40iw_arp_entry {
u32 ip_addr[4];
u8 mac_addr[ETH_ALEN];
};
enum init_completion_state {
INVALID_STATE = 0,
INITIAL_STATE,
CQP_CREATED,
HMC_OBJS_CREATED,
PBLE_CHUNK_MEM,
CCQ_CREATED,
AEQ_CREATED,
CEQ_CREATED,
ILQ_CREATED,
IEQ_CREATED,
IP_ADDR_REGISTERED,
RDMA_DEV_REGISTERED
};
struct i40iw_msix_vector {
u32 idx;
u32 irq;
u32 cpu_affinity;
u32 ceq_id;
cpumask_t mask;
};
struct l2params_work {
struct work_struct work;
struct i40iw_device *iwdev;
struct i40iw_l2params l2params;
};
#define I40IW_MSIX_TABLE_SIZE 65
struct virtchnl_work {
struct work_struct work;
union {
struct i40iw_cqp_request *cqp_request;
struct i40iw_virtchnl_work_info work_info;
};
};
struct i40e_qvlist_info;
struct i40iw_device {
struct i40iw_ib_device *iwibdev;
struct net_device *netdev;
wait_queue_head_t vchnl_waitq;
struct i40iw_sc_dev sc_dev;
struct i40iw_sc_vsi vsi;
struct i40iw_handler *hdl;
struct i40e_info *ldev;
struct i40e_client *client;
struct i40iw_hw hw;
struct i40iw_cm_core cm_core;
u8 *mem_resources;
unsigned long *allocated_qps;
unsigned long *allocated_cqs;
unsigned long *allocated_mrs;
unsigned long *allocated_pds;
unsigned long *allocated_arps;
struct i40iw_qp **qp_table;
bool msix_shared;
u32 msix_count;
struct i40iw_msix_vector *iw_msixtbl;
struct i40e_qvlist_info *iw_qvlist;
struct i40iw_hmc_pble_rsrc *pble_rsrc;
struct i40iw_arp_entry *arp_table;
struct i40iw_cqp cqp;
struct i40iw_ccq ccq;
u32 ceqs_count;
struct i40iw_ceq *ceqlist;
struct i40iw_aeq aeq;
u32 arp_table_size;
u32 next_arp_index;
spinlock_t resource_lock; /* hw resource access */
spinlock_t qptable_lock;
u32 vendor_id;
u32 vendor_part_id;
u32 of_device_registered;
u32 device_cap_flags;
unsigned long db_start;
u8 resource_profile;
u8 max_rdma_vfs;
u8 max_enabled_vfs;
u8 max_sge;
u8 iw_status;
u8 send_term_ok;
/* x710 specific */
struct mutex pbl_mutex;
struct tasklet_struct dpc_tasklet;
struct workqueue_struct *virtchnl_wq;
struct virtchnl_work virtchnl_w[I40IW_MAX_PE_ENABLED_VF_COUNT];
struct i40iw_dma_mem obj_mem;
struct i40iw_dma_mem obj_next;
u8 *hmc_info_mem;
u32 sd_type;
struct workqueue_struct *param_wq;
atomic_t params_busy;
enum init_completion_state init_state;
u16 mac_ip_table_idx;
atomic_t vchnl_msgs;
u32 max_mr;
u32 max_qp;
u32 max_cq;
u32 max_pd;
u32 next_qp;
u32 next_cq;
u32 next_pd;
u32 max_mr_size;
u32 max_qp_wr;
u32 max_cqe;
u32 mr_stagmask;
u32 mpa_version;
bool dcb;
bool closing;
bool reset;
u32 used_pds;
u32 used_cqs;
u32 used_mrs;
u32 used_qps;
wait_queue_head_t close_wq;
atomic64_t use_count;
};
struct i40iw_ib_device {
struct ib_device ibdev;
struct i40iw_device *iwdev;
};
struct i40iw_handler {
struct list_head list;
struct i40e_client *client;
struct i40iw_device device;
struct i40e_info ldev;
};
/**
* i40iw_fw_major_ver - get firmware major version
* @dev: iwarp device
**/
static inline u64 i40iw_fw_major_ver(struct i40iw_sc_dev *dev)
{
return RS_64(dev->feature_info[I40IW_FEATURE_FW_INFO],
I40IW_FW_VER_MAJOR);
}
/**
* i40iw_fw_minor_ver - get firmware minor version
* @dev: iwarp device
**/
static inline u64 i40iw_fw_minor_ver(struct i40iw_sc_dev *dev)
{
return RS_64(dev->feature_info[I40IW_FEATURE_FW_INFO],
I40IW_FW_VER_MINOR);
}
/**
* to_iwdev - get device
* @ibdev: ib device
**/
static inline struct i40iw_device *to_iwdev(struct ib_device *ibdev)
{
return container_of(ibdev, struct i40iw_ib_device, ibdev)->iwdev;
}
/**
* to_ucontext - get user context
* @ibucontext: ib user context
**/
static inline struct i40iw_ucontext *to_ucontext(struct ib_ucontext *ibucontext)
{
return container_of(ibucontext, struct i40iw_ucontext, ibucontext);
}
/**
* to_iwpd - get protection domain
* @ibpd: ib pd
**/
static inline struct i40iw_pd *to_iwpd(struct ib_pd *ibpd)
{
return container_of(ibpd, struct i40iw_pd, ibpd);
}
/**
* to_iwmr - get device memory region
* @ibdev: ib memory region
**/
static inline struct i40iw_mr *to_iwmr(struct ib_mr *ibmr)
{
return container_of(ibmr, struct i40iw_mr, ibmr);
}
/**
* to_iwmw - get device memory window
* @ibmw: ib memory window
**/
static inline struct i40iw_mr *to_iwmw(struct ib_mw *ibmw)
{
return container_of(ibmw, struct i40iw_mr, ibmw);
}
/**
* to_iwcq - get completion queue
* @ibcq: ib cqdevice
**/
static inline struct i40iw_cq *to_iwcq(struct ib_cq *ibcq)
{
return container_of(ibcq, struct i40iw_cq, ibcq);
}
/**
* to_iwqp - get device qp
* @ibqp: ib qp
**/
static inline struct i40iw_qp *to_iwqp(struct ib_qp *ibqp)
{
return container_of(ibqp, struct i40iw_qp, ibqp);
}
/* i40iw.c */
void i40iw_qp_add_ref(struct ib_qp *ibqp);
void i40iw_qp_rem_ref(struct ib_qp *ibqp);
struct ib_qp *i40iw_get_qp(struct ib_device *, int);
void i40iw_flush_wqes(struct i40iw_device *iwdev,
struct i40iw_qp *qp);
void i40iw_manage_arp_cache(struct i40iw_device *iwdev,
unsigned char *mac_addr,
u32 *ip_addr,
bool ipv4,
u32 action);
int i40iw_manage_apbvt(struct i40iw_device *iwdev,
u16 accel_local_port,
bool add_port);
struct i40iw_cqp_request *i40iw_get_cqp_request(struct i40iw_cqp *cqp, bool wait);
void i40iw_free_cqp_request(struct i40iw_cqp *cqp, struct i40iw_cqp_request *cqp_request);
void i40iw_put_cqp_request(struct i40iw_cqp *cqp, struct i40iw_cqp_request *cqp_request);
/**
* i40iw_alloc_resource - allocate a resource
* @iwdev: device pointer
* @resource_array: resource bit array:
* @max_resources: maximum resource number
* @req_resources_num: Allocated resource number
* @next: next free id
**/
static inline int i40iw_alloc_resource(struct i40iw_device *iwdev,
unsigned long *resource_array,
u32 max_resources,
u32 *req_resource_num,
u32 *next)
{
u32 resource_num;
unsigned long flags;
spin_lock_irqsave(&iwdev->resource_lock, flags);
resource_num = find_next_zero_bit(resource_array, max_resources, *next);
if (resource_num >= max_resources) {
resource_num = find_first_zero_bit(resource_array, max_resources);
if (resource_num >= max_resources) {
spin_unlock_irqrestore(&iwdev->resource_lock, flags);
return -EOVERFLOW;
}
}
set_bit(resource_num, resource_array);
*next = resource_num + 1;
if (*next == max_resources)
*next = 0;
*req_resource_num = resource_num;
spin_unlock_irqrestore(&iwdev->resource_lock, flags);
return 0;
}
/**
* i40iw_is_resource_allocated - detrmine if resource is
* allocated
* @iwdev: device pointer
* @resource_array: resource array for the resource_num
* @resource_num: resource number to check
**/
static inline bool i40iw_is_resource_allocated(struct i40iw_device *iwdev,
unsigned long *resource_array,
u32 resource_num)
{
bool bit_is_set;
unsigned long flags;
spin_lock_irqsave(&iwdev->resource_lock, flags);
bit_is_set = test_bit(resource_num, resource_array);
spin_unlock_irqrestore(&iwdev->resource_lock, flags);
return bit_is_set;
}
/**
* i40iw_free_resource - free a resource
* @iwdev: device pointer
* @resource_array: resource array for the resource_num
* @resource_num: resource number to free
**/
static inline void i40iw_free_resource(struct i40iw_device *iwdev,
unsigned long *resource_array,
u32 resource_num)
{
unsigned long flags;
spin_lock_irqsave(&iwdev->resource_lock, flags);
clear_bit(resource_num, resource_array);
spin_unlock_irqrestore(&iwdev->resource_lock, flags);
}
struct i40iw_handler *i40iw_find_netdev(struct net_device *netdev);
/**
* iw_init_resources -
*/
u32 i40iw_initialize_hw_resources(struct i40iw_device *iwdev);
int i40iw_register_rdma_device(struct i40iw_device *iwdev);
void i40iw_port_ibevent(struct i40iw_device *iwdev);
void i40iw_cm_disconn(struct i40iw_qp *iwqp);
void i40iw_cm_disconn_worker(void *);
int mini_cm_recv_pkt(struct i40iw_cm_core *, struct i40iw_device *,
struct sk_buff *);
enum i40iw_status_code i40iw_handle_cqp_op(struct i40iw_device *iwdev,
struct i40iw_cqp_request *cqp_request);
enum i40iw_status_code i40iw_add_mac_addr(struct i40iw_device *iwdev,
u8 *mac_addr, u8 *mac_index);
int i40iw_modify_qp(struct ib_qp *, struct ib_qp_attr *, int, struct ib_udata *);
void i40iw_cq_wq_destroy(struct i40iw_device *iwdev, struct i40iw_sc_cq *cq);
void i40iw_cleanup_pending_cqp_op(struct i40iw_device *iwdev);
void i40iw_rem_pdusecount(struct i40iw_pd *iwpd, struct i40iw_device *iwdev);
void i40iw_add_pdusecount(struct i40iw_pd *iwpd);
void i40iw_rem_devusecount(struct i40iw_device *iwdev);
void i40iw_add_devusecount(struct i40iw_device *iwdev);
void i40iw_hw_modify_qp(struct i40iw_device *iwdev, struct i40iw_qp *iwqp,
struct i40iw_modify_qp_info *info, bool wait);
void i40iw_qp_suspend_resume(struct i40iw_sc_dev *dev,
struct i40iw_sc_qp *qp,
bool suspend);
enum i40iw_status_code i40iw_manage_qhash(struct i40iw_device *iwdev,
struct i40iw_cm_info *cminfo,
enum i40iw_quad_entry_type etype,
enum i40iw_quad_hash_manage_type mtype,
void *cmnode,
bool wait);
void i40iw_receive_ilq(struct i40iw_sc_vsi *vsi, struct i40iw_puda_buf *rbuf);
void i40iw_free_sqbuf(struct i40iw_sc_vsi *vsi, void *bufp);
void i40iw_free_qp_resources(struct i40iw_qp *iwqp);
enum i40iw_status_code i40iw_obj_aligned_mem(struct i40iw_device *iwdev,
struct i40iw_dma_mem *memptr,
u32 size, u32 mask);
void i40iw_request_reset(struct i40iw_device *iwdev);
void i40iw_destroy_rdma_device(struct i40iw_ib_device *iwibdev);
int i40iw_setup_cm_core(struct i40iw_device *iwdev);
void i40iw_cleanup_cm_core(struct i40iw_cm_core *cm_core);
void i40iw_process_ceq(struct i40iw_device *, struct i40iw_ceq *iwceq);
void i40iw_process_aeq(struct i40iw_device *);
void i40iw_next_iw_state(struct i40iw_qp *iwqp,
u8 state, u8 del_hash,
u8 term, u8 term_len);
int i40iw_send_syn(struct i40iw_cm_node *cm_node, u32 sendack);
int i40iw_send_reset(struct i40iw_cm_node *cm_node);
struct i40iw_cm_node *i40iw_find_node(struct i40iw_cm_core *cm_core,
u16 rem_port,
u32 *rem_addr,
u16 loc_port,
u32 *loc_addr,
bool add_refcnt,
bool accelerated_list);
enum i40iw_status_code i40iw_hw_flush_wqes(struct i40iw_device *iwdev,
struct i40iw_sc_qp *qp,
struct i40iw_qp_flush_info *info,
bool wait);
void i40iw_gen_ae(struct i40iw_device *iwdev,
struct i40iw_sc_qp *qp,
struct i40iw_gen_ae_info *info,
bool wait);
void i40iw_copy_ip_ntohl(u32 *dst, __be32 *src);
struct ib_mr *i40iw_reg_phys_mr(struct ib_pd *ib_pd,
u64 addr,
u64 size,
int acc,
u64 *iova_start);
int i40iw_inetaddr_event(struct notifier_block *notifier,
unsigned long event,
void *ptr);
int i40iw_inet6addr_event(struct notifier_block *notifier,
unsigned long event,
void *ptr);
int i40iw_net_event(struct notifier_block *notifier,
unsigned long event,
void *ptr);
int i40iw_netdevice_event(struct notifier_block *notifier,
unsigned long event,
void *ptr);
#endif

File diff suppressed because it is too large.


@ -1,462 +0,0 @@
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#ifndef I40IW_CM_H
#define I40IW_CM_H
#define QUEUE_EVENTS
#define I40IW_MANAGE_APBVT_DEL 0
#define I40IW_MANAGE_APBVT_ADD 1
#define I40IW_MPA_REQUEST_ACCEPT 1
#define I40IW_MPA_REQUEST_REJECT 2
/* IETF MPA -- defines, enums, structs */
#define IEFT_MPA_KEY_REQ "MPA ID Req Frame"
#define IEFT_MPA_KEY_REP "MPA ID Rep Frame"
#define IETF_MPA_KEY_SIZE 16
#define IETF_MPA_VERSION 1
#define IETF_MAX_PRIV_DATA_LEN 512
#define IETF_MPA_FRAME_SIZE 20
#define IETF_RTR_MSG_SIZE 4
#define IETF_MPA_V2_FLAG 0x10
#define SNDMARKER_SEQNMASK 0x000001FF
#define I40IW_MAX_IETF_SIZE 32
/* IETF RTR MSG Fields */
#define IETF_PEER_TO_PEER 0x8000
#define IETF_FLPDU_ZERO_LEN 0x4000
#define IETF_RDMA0_WRITE 0x8000
#define IETF_RDMA0_READ 0x4000
#define IETF_NO_IRD_ORD 0x3FFF
/* HW-supported IRD sizes*/
#define I40IW_HW_IRD_SETTING_2 2
#define I40IW_HW_IRD_SETTING_4 4
#define I40IW_HW_IRD_SETTING_8 8
#define I40IW_HW_IRD_SETTING_16 16
#define I40IW_HW_IRD_SETTING_32 32
#define I40IW_HW_IRD_SETTING_64 64
#define MAX_PORTS 65536
#define I40IW_VLAN_PRIO_SHIFT 13
enum ietf_mpa_flags {
IETF_MPA_FLAGS_MARKERS = 0x80, /* receive Markers */
IETF_MPA_FLAGS_CRC = 0x40, /* receive Markers */
IETF_MPA_FLAGS_REJECT = 0x20, /* Reject */
};
struct ietf_mpa_v1 {
u8 key[IETF_MPA_KEY_SIZE];
u8 flags;
u8 rev;
__be16 priv_data_len;
u8 priv_data[];
};
#define ietf_mpa_req_resp_frame ietf_mpa_frame
struct ietf_rtr_msg {
__be16 ctrl_ird;
__be16 ctrl_ord;
};
struct ietf_mpa_v2 {
u8 key[IETF_MPA_KEY_SIZE];
u8 flags;
u8 rev;
__be16 priv_data_len;
struct ietf_rtr_msg rtr_msg;
u8 priv_data[];
};
struct i40iw_cm_node;
enum i40iw_timer_type {
I40IW_TIMER_TYPE_SEND,
I40IW_TIMER_TYPE_RECV,
I40IW_TIMER_NODE_CLEANUP,
I40IW_TIMER_TYPE_CLOSE,
};
#define I40IW_PASSIVE_STATE_INDICATED 0
#define I40IW_DO_NOT_SEND_RESET_EVENT 1
#define I40IW_SEND_RESET_EVENT 2
#define MAX_I40IW_IFS 4
#define SET_ACK 0x1
#define SET_SYN 0x2
#define SET_FIN 0x4
#define SET_RST 0x8
#define TCP_OPTIONS_PADDING 3
struct option_base {
u8 optionnum;
u8 length;
};
enum option_numbers {
OPTION_NUMBER_END,
OPTION_NUMBER_NONE,
OPTION_NUMBER_MSS,
OPTION_NUMBER_WINDOW_SCALE,
OPTION_NUMBER_SACK_PERM,
OPTION_NUMBER_SACK,
OPTION_NUMBER_WRITE0 = 0xbc
};
struct option_mss {
u8 optionnum;
u8 length;
__be16 mss;
};
struct option_windowscale {
u8 optionnum;
u8 length;
u8 shiftcount;
};
union all_known_options {
char as_end;
struct option_base as_base;
struct option_mss as_mss;
struct option_windowscale as_windowscale;
};
struct i40iw_timer_entry {
struct list_head list;
unsigned long timetosend; /* jiffies */
struct i40iw_puda_buf *sqbuf;
u32 type;
u32 retrycount;
u32 retranscount;
u32 context;
u32 send_retrans;
int close_when_complete;
};
#define I40IW_DEFAULT_RETRYS 64
#define I40IW_DEFAULT_RETRANS 8
#define I40IW_DEFAULT_TTL 0x40
#define I40IW_DEFAULT_RTT_VAR 0x6
#define I40IW_DEFAULT_SS_THRESH 0x3FFFFFFF
#define I40IW_DEFAULT_REXMIT_THRESH 8
#define I40IW_RETRY_TIMEOUT HZ
#define I40IW_SHORT_TIME 10
#define I40IW_LONG_TIME (2 * HZ)
#define I40IW_MAX_TIMEOUT ((unsigned long)(12 * HZ))
#define I40IW_CM_HASHTABLE_SIZE 1024
#define I40IW_CM_TCP_TIMER_INTERVAL 3000
#define I40IW_CM_DEFAULT_MTU 1540
#define I40IW_CM_DEFAULT_FRAME_CNT 10
#define I40IW_CM_THREAD_STACK_SIZE 256
#define I40IW_CM_DEFAULT_RCV_WND 64240
#define I40IW_CM_DEFAULT_RCV_WND_SCALED 0x3fffc
#define I40IW_CM_DEFAULT_RCV_WND_SCALE 2
#define I40IW_CM_DEFAULT_FREE_PKTS 0x000A
#define I40IW_CM_FREE_PKT_LO_WATERMARK 2
#define I40IW_CM_DEFAULT_MSS 536
#define I40IW_CM_DEF_SEQ 0x159bf75f
#define I40IW_CM_DEF_LOCAL_ID 0x3b47
#define I40IW_CM_DEF_SEQ2 0x18ed5740
#define I40IW_CM_DEF_LOCAL_ID2 0xb807
#define MAX_CM_BUFFER (I40IW_MAX_IETF_SIZE + IETF_MAX_PRIV_DATA_LEN)
typedef u32 i40iw_addr_t;
#define i40iw_cm_tsa_context i40iw_qp_context
struct i40iw_qp;
/* cm node transition states */
enum i40iw_cm_node_state {
I40IW_CM_STATE_UNKNOWN,
I40IW_CM_STATE_INITED,
I40IW_CM_STATE_LISTENING,
I40IW_CM_STATE_SYN_RCVD,
I40IW_CM_STATE_SYN_SENT,
I40IW_CM_STATE_ONE_SIDE_ESTABLISHED,
I40IW_CM_STATE_ESTABLISHED,
I40IW_CM_STATE_ACCEPTING,
I40IW_CM_STATE_MPAREQ_SENT,
I40IW_CM_STATE_MPAREQ_RCVD,
I40IW_CM_STATE_MPAREJ_RCVD,
I40IW_CM_STATE_OFFLOADED,
I40IW_CM_STATE_FIN_WAIT1,
I40IW_CM_STATE_FIN_WAIT2,
I40IW_CM_STATE_CLOSE_WAIT,
I40IW_CM_STATE_TIME_WAIT,
I40IW_CM_STATE_LAST_ACK,
I40IW_CM_STATE_CLOSING,
I40IW_CM_STATE_LISTENER_DESTROYED,
I40IW_CM_STATE_CLOSED
};
enum mpa_frame_version {
IETF_MPA_V1 = 1,
IETF_MPA_V2 = 2
};
enum mpa_frame_key {
MPA_KEY_REQUEST,
MPA_KEY_REPLY
};
enum send_rdma0 {
SEND_RDMA_READ_ZERO = 1,
SEND_RDMA_WRITE_ZERO = 2
};
enum i40iw_tcpip_pkt_type {
I40IW_PKT_TYPE_UNKNOWN,
I40IW_PKT_TYPE_SYN,
I40IW_PKT_TYPE_SYNACK,
I40IW_PKT_TYPE_ACK,
I40IW_PKT_TYPE_FIN,
I40IW_PKT_TYPE_RST
};
/* CM context params */
struct i40iw_cm_tcp_context {
u8 client;
u32 loc_seq_num;
u32 loc_ack_num;
u32 rem_ack_num;
u32 rcv_nxt;
u32 loc_id;
u32 rem_id;
u32 snd_wnd;
u32 max_snd_wnd;
u32 rcv_wnd;
u32 mss;
u8 snd_wscale;
u8 rcv_wscale;
};
enum i40iw_cm_listener_state {
I40IW_CM_LISTENER_PASSIVE_STATE = 1,
I40IW_CM_LISTENER_ACTIVE_STATE = 2,
I40IW_CM_LISTENER_EITHER_STATE = 3
};
struct i40iw_cm_listener {
struct list_head list;
struct i40iw_cm_core *cm_core;
u8 loc_mac[ETH_ALEN];
u32 loc_addr[4];
u16 loc_port;
struct iw_cm_id *cm_id;
atomic_t ref_count;
struct i40iw_device *iwdev;
atomic_t pend_accepts_cnt;
int backlog;
enum i40iw_cm_listener_state listener_state;
u32 reused_node;
u8 user_pri;
u8 tos;
u16 vlan_id;
bool qhash_set;
bool ipv4;
struct list_head child_listen_list;
};
struct i40iw_kmem_info {
void *addr;
u32 size;
};
/* per connection node and node state information */
struct i40iw_cm_node {
u32 loc_addr[4], rem_addr[4];
u16 loc_port, rem_port;
u16 vlan_id;
enum i40iw_cm_node_state state;
u8 loc_mac[ETH_ALEN];
u8 rem_mac[ETH_ALEN];
atomic_t ref_count;
struct i40iw_qp *iwqp;
struct i40iw_device *iwdev;
struct i40iw_sc_dev *dev;
struct i40iw_cm_tcp_context tcp_cntxt;
struct i40iw_cm_core *cm_core;
struct i40iw_cm_node *loopbackpartner;
struct i40iw_timer_entry *send_entry;
struct i40iw_timer_entry *close_entry;
spinlock_t retrans_list_lock; /* cm transmit packet */
enum send_rdma0 send_rdma0_op;
u16 ird_size;
u16 ord_size;
u16 mpav2_ird_ord;
struct iw_cm_id *cm_id;
struct list_head list;
bool accelerated;
struct i40iw_cm_listener *listener;
int apbvt_set;
int accept_pend;
struct list_head timer_entry;
struct list_head reset_entry;
struct list_head teardown_entry;
atomic_t passive_state;
bool qhash_set;
u8 user_pri;
u8 tos;
bool ipv4;
bool snd_mark_en;
u16 lsmm_size;
enum mpa_frame_version mpa_frame_rev;
struct i40iw_kmem_info pdata;
union {
struct ietf_mpa_v1 mpa_frame;
struct ietf_mpa_v2 mpa_v2_frame;
};
u8 pdata_buf[IETF_MAX_PRIV_DATA_LEN];
struct i40iw_kmem_info mpa_hdr;
bool ack_rcvd;
};
/* structure for client or CM to fill when making CM api calls. */
/* - only need to set relevant data, based on op. */
struct i40iw_cm_info {
struct iw_cm_id *cm_id;
u16 loc_port;
u16 rem_port;
u32 loc_addr[4];
u32 rem_addr[4];
u16 vlan_id;
int backlog;
u8 user_pri;
u8 tos;
bool ipv4;
};
/* CM event codes */
enum i40iw_cm_event_type {
I40IW_CM_EVENT_UNKNOWN,
I40IW_CM_EVENT_ESTABLISHED,
I40IW_CM_EVENT_MPA_REQ,
I40IW_CM_EVENT_MPA_CONNECT,
I40IW_CM_EVENT_MPA_ACCEPT,
I40IW_CM_EVENT_MPA_REJECT,
I40IW_CM_EVENT_MPA_ESTABLISHED,
I40IW_CM_EVENT_CONNECTED,
I40IW_CM_EVENT_RESET,
I40IW_CM_EVENT_ABORTED
};
/* event to post to CM event handler */
struct i40iw_cm_event {
enum i40iw_cm_event_type type;
struct i40iw_cm_info cm_info;
struct work_struct event_work;
struct i40iw_cm_node *cm_node;
};
struct i40iw_cm_core {
struct i40iw_device *iwdev;
struct i40iw_sc_dev *dev;
struct list_head listen_nodes;
struct list_head accelerated_list;
struct list_head non_accelerated_list;
struct timer_list tcp_timer;
struct workqueue_struct *event_wq;
struct workqueue_struct *disconn_wq;
spinlock_t ht_lock; /* manage hash table */
spinlock_t listen_list_lock; /* listen list */
spinlock_t apbvt_lock; /* manage apbvt entries */
unsigned long ports_in_use[BITS_TO_LONGS(MAX_PORTS)];
u64 stats_nodes_created;
u64 stats_nodes_destroyed;
u64 stats_listen_created;
u64 stats_listen_destroyed;
u64 stats_listen_nodes_created;
u64 stats_listen_nodes_destroyed;
u64 stats_loopbacks;
u64 stats_accepts;
u64 stats_rejects;
u64 stats_connect_errs;
u64 stats_passive_errs;
u64 stats_pkt_retrans;
u64 stats_backlog_drops;
};
int i40iw_schedule_cm_timer(struct i40iw_cm_node *cm_node,
struct i40iw_puda_buf *sqbuf,
enum i40iw_timer_type type,
int send_retrans,
int close_when_complete);
int i40iw_accept(struct iw_cm_id *, struct iw_cm_conn_param *);
int i40iw_reject(struct iw_cm_id *, const void *, u8);
int i40iw_connect(struct iw_cm_id *, struct iw_cm_conn_param *);
int i40iw_create_listen(struct iw_cm_id *, int);
int i40iw_destroy_listen(struct iw_cm_id *);
int i40iw_cm_start(struct i40iw_device *);
int i40iw_cm_stop(struct i40iw_device *);
int i40iw_arp_table(struct i40iw_device *iwdev,
u32 *ip_addr,
bool ipv4,
u8 *mac_addr,
u32 action);
void i40iw_if_notify(struct i40iw_device *iwdev, struct net_device *netdev,
u32 *ipaddr, bool ipv4, bool ifup);
void i40iw_cm_teardown_connections(struct i40iw_device *iwdev, u32 *ipaddr,
struct i40iw_cm_info *nfo,
bool disconnect_all);
bool i40iw_port_in_use(struct i40iw_cm_core *cm_core, u16 port);
#endif /* I40IW_CM_H */
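For context, the prototypes above (i40iw_accept(), i40iw_reject(), i40iw_connect(), i40iw_create_listen(), i40iw_destroy_listen()) are the connection-manager entry points an iWarp provider hands to the RDMA core. A minimal sketch of that hookup follows, assuming the usual ib_device_ops registration path; the ops-table name and the commented registration call are illustrative, not taken from this diff.

#include <rdma/ib_verbs.h>

/* Sketch only: exposing the CM callbacks above through the core's
 * iWarp CM hooks in struct ib_device_ops. The table name below is
 * made up for this example.
 */
static const struct ib_device_ops i40iw_cm_ops_sketch = {
	.iw_accept = i40iw_accept,
	.iw_reject = i40iw_reject,
	.iw_connect = i40iw_connect,
	.iw_create_listen = i40iw_create_listen,
	.iw_destroy_listen = i40iw_destroy_listen,
};

/* Registered once per device, e.g.:
 * ib_set_device_ops(&iwibdev->ibdev, &i40iw_cm_ops_sketch);
 */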

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,821 +0,0 @@
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#include "i40iw_osdep.h"
#include "i40iw_register.h"
#include "i40iw_status.h"
#include "i40iw_hmc.h"
#include "i40iw_d.h"
#include "i40iw_type.h"
#include "i40iw_p.h"
#include "i40iw_vf.h"
#include "i40iw_virtchnl.h"
/**
* i40iw_find_sd_index_limit - finds segment descriptor index limit
* @hmc_info: pointer to the HMC configuration information structure
* @type: type of HMC resources we're searching
* @idx: starting index for the object
* @cnt: number of objects we're trying to create
* @sd_idx: pointer to return index of the segment descriptor in question
* @sd_limit: pointer to return the maximum number of segment descriptors
*
* This function calculates the segment descriptor index and index limit
* for the resource defined by i40iw_hmc_rsrc_type.
*/
static inline void i40iw_find_sd_index_limit(struct i40iw_hmc_info *hmc_info,
u32 type,
u32 idx,
u32 cnt,
u32 *sd_idx,
u32 *sd_limit)
{
u64 fpm_addr, fpm_limit;
fpm_addr = hmc_info->hmc_obj[(type)].base +
hmc_info->hmc_obj[type].size * idx;
fpm_limit = fpm_addr + hmc_info->hmc_obj[type].size * cnt;
*sd_idx = (u32)(fpm_addr / I40IW_HMC_DIRECT_BP_SIZE);
*sd_limit = (u32)((fpm_limit - 1) / I40IW_HMC_DIRECT_BP_SIZE);
*sd_limit += 1;
}
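/*
 * Illustrative example (values assumed, not taken from the driver): with
 * I40IW_HMC_DIRECT_BP_SIZE = 0x200000 (2 MB), an object type whose FPM
 * base is 0 and whose size is 256 bytes, idx = 10000 and cnt = 1 give
 * fpm_addr = 2,560,000 and fpm_limit = 2,560,256, so *sd_idx = 1 and
 * *sd_limit = 2 (the limit is exclusive, hence the final "+ 1").
 */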
/**
* i40iw_find_pd_index_limit - finds page descriptor index limit
* @hmc_info: pointer to the HMC configuration information struct
* @type: HMC resource type we're examining
* @idx: starting index for the object
* @cnt: number of objects we're trying to create
* @pd_idx: pointer to return page descriptor index
* @pd_limit: pointer to return page descriptor index limit
*
* Calculates the page descriptor index and index limit for the resource
* defined by i40iw_hmc_rsrc_type.
*/
static inline void i40iw_find_pd_index_limit(struct i40iw_hmc_info *hmc_info,
u32 type,
u32 idx,
u32 cnt,
u32 *pd_idx,
u32 *pd_limit)
{
u64 fpm_adr, fpm_limit;
fpm_adr = hmc_info->hmc_obj[type].base +
hmc_info->hmc_obj[type].size * idx;
fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt);
*(pd_idx) = (u32)(fpm_adr / I40IW_HMC_PAGED_BP_SIZE);
*(pd_limit) = (u32)((fpm_limit - 1) / I40IW_HMC_PAGED_BP_SIZE);
*(pd_limit) += 1;
}
/**
* i40iw_set_sd_entry - setup entry for sd programming
* @pa: physical addr
* @idx: sd index
* @type: paged or direct sd
* @entry: sd entry ptr
*/
static inline void i40iw_set_sd_entry(u64 pa,
u32 idx,
enum i40iw_sd_entry_type type,
struct update_sd_entry *entry)
{
entry->data = pa | (I40IW_HMC_MAX_BP_COUNT << I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |
(((type == I40IW_SD_TYPE_PAGED) ? 0 : 1) <<
I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) |
(1 << I40E_PFHMC_SDDATALOW_PMSDVALID_SHIFT);
entry->cmd = (idx | (1 << I40E_PFHMC_SDCMD_PMSDWR_SHIFT) | (1 << 15));
}
/**
* i40iw_clr_sd_entry - setup entry for sd clear
* @idx: sd index
* @type: paged or direct sd
* @entry: sd entry ptr
*/
static inline void i40iw_clr_sd_entry(u32 idx, enum i40iw_sd_entry_type type,
struct update_sd_entry *entry)
{
entry->data = (I40IW_HMC_MAX_BP_COUNT <<
I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |
(((type == I40IW_SD_TYPE_PAGED) ? 0 : 1) <<
I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT);
entry->cmd = (idx | (1 << I40E_PFHMC_SDCMD_PMSDWR_SHIFT) | (1 << 15));
}
/**
* i40iw_hmc_sd_one - setup 1 sd entry for cqp
* @dev: pointer to the device structure
* @hmc_fn_id: hmc's function id
* @pa: physical addr
* @sd_idx: sd index
* @type: paged or direct sd
* @setsd: flag to set or clear sd
*/
enum i40iw_status_code i40iw_hmc_sd_one(struct i40iw_sc_dev *dev,
u8 hmc_fn_id,
u64 pa, u32 sd_idx,
enum i40iw_sd_entry_type type,
bool setsd)
{
struct i40iw_update_sds_info sdinfo;
sdinfo.cnt = 1;
sdinfo.hmc_fn_id = hmc_fn_id;
if (setsd)
i40iw_set_sd_entry(pa, sd_idx, type, sdinfo.entry);
else
i40iw_clr_sd_entry(sd_idx, type, sdinfo.entry);
return dev->cqp->process_cqp_sds(dev, &sdinfo);
}
/**
* i40iw_hmc_sd_grp - setup group of sd entries for cqp
* @dev: pointer to the device structure
* @hmc_info: pointer to the HMC configuration information struct
* @sd_index: sd index
* @sd_cnt: number of sd entries
* @setsd: flag to set or clear sd
*/
static enum i40iw_status_code i40iw_hmc_sd_grp(struct i40iw_sc_dev *dev,
struct i40iw_hmc_info *hmc_info,
u32 sd_index,
u32 sd_cnt,
bool setsd)
{
struct i40iw_hmc_sd_entry *sd_entry;
struct i40iw_update_sds_info sdinfo;
u64 pa;
u32 i;
enum i40iw_status_code ret_code = 0;
memset(&sdinfo, 0, sizeof(sdinfo));
sdinfo.hmc_fn_id = hmc_info->hmc_fn_id;
for (i = sd_index; i < sd_index + sd_cnt; i++) {
sd_entry = &hmc_info->sd_table.sd_entry[i];
if (!sd_entry ||
(!sd_entry->valid && setsd) ||
(sd_entry->valid && !setsd))
continue;
if (setsd) {
pa = (sd_entry->entry_type == I40IW_SD_TYPE_PAGED) ?
sd_entry->u.pd_table.pd_page_addr.pa :
sd_entry->u.bp.addr.pa;
i40iw_set_sd_entry(pa, i, sd_entry->entry_type,
&sdinfo.entry[sdinfo.cnt]);
} else {
i40iw_clr_sd_entry(i, sd_entry->entry_type,
&sdinfo.entry[sdinfo.cnt]);
}
sdinfo.cnt++;
if (sdinfo.cnt == I40IW_MAX_SD_ENTRIES) {
ret_code = dev->cqp->process_cqp_sds(dev, &sdinfo);
if (ret_code) {
i40iw_debug(dev, I40IW_DEBUG_HMC,
"i40iw_hmc_sd_grp: sd_programming failed err=%d\n",
ret_code);
return ret_code;
}
sdinfo.cnt = 0;
}
}
if (sdinfo.cnt)
ret_code = dev->cqp->process_cqp_sds(dev, &sdinfo);
return ret_code;
}
/**
* i40iw_vfdev_from_fpm - return vf dev ptr for hmc function id
* @dev: pointer to the device structure
* @hmc_fn_id: hmc's function id
*/
struct i40iw_vfdev *i40iw_vfdev_from_fpm(struct i40iw_sc_dev *dev, u8 hmc_fn_id)
{
struct i40iw_vfdev *vf_dev = NULL;
u16 idx;
for (idx = 0; idx < I40IW_MAX_PE_ENABLED_VF_COUNT; idx++) {
if (dev->vf_dev[idx] &&
((u8)dev->vf_dev[idx]->pmf_index == hmc_fn_id)) {
vf_dev = dev->vf_dev[idx];
break;
}
}
return vf_dev;
}
/**
* i40iw_vf_hmcinfo_from_fpm - get ptr to hmc for func_id
* @dev: pointer to the device structure
* @hmc_fn_id: hmc's function id
*/
struct i40iw_hmc_info *i40iw_vf_hmcinfo_from_fpm(struct i40iw_sc_dev *dev,
u8 hmc_fn_id)
{
struct i40iw_hmc_info *hmc_info = NULL;
u16 idx;
for (idx = 0; idx < I40IW_MAX_PE_ENABLED_VF_COUNT; idx++) {
if (dev->vf_dev[idx] &&
((u8)dev->vf_dev[idx]->pmf_index == hmc_fn_id)) {
hmc_info = &dev->vf_dev[idx]->hmc_info;
break;
}
}
return hmc_info;
}
/**
* i40iw_hmc_finish_add_sd_reg - program sd entries for objects
* @dev: pointer to the device structure
* @info: create obj info
*/
static enum i40iw_status_code i40iw_hmc_finish_add_sd_reg(struct i40iw_sc_dev *dev,
struct i40iw_hmc_create_obj_info *info)
{
if (info->start_idx >= info->hmc_info->hmc_obj[info->rsrc_type].cnt)
return I40IW_ERR_INVALID_HMC_OBJ_INDEX;
if ((info->start_idx + info->count) >
info->hmc_info->hmc_obj[info->rsrc_type].cnt)
return I40IW_ERR_INVALID_HMC_OBJ_COUNT;
if (!info->add_sd_cnt)
return 0;
return i40iw_hmc_sd_grp(dev, info->hmc_info,
info->hmc_info->sd_indexes[0],
info->add_sd_cnt, true);
}
/**
* i40iw_sc_create_hmc_obj - allocate backing store for hmc objects
* @dev: pointer to the device structure
* @info: pointer to i40iw_hmc_iw_create_obj_info struct
*
* This will allocate memory for PDs and backing pages and populate
* the sd and pd entries.
*/
enum i40iw_status_code i40iw_sc_create_hmc_obj(struct i40iw_sc_dev *dev,
struct i40iw_hmc_create_obj_info *info)
{
struct i40iw_hmc_sd_entry *sd_entry;
u32 sd_idx, sd_lmt;
u32 pd_idx = 0, pd_lmt = 0;
u32 pd_idx1 = 0, pd_lmt1 = 0;
u32 i, j;
bool pd_error = false;
enum i40iw_status_code ret_code = 0;
if (info->start_idx >= info->hmc_info->hmc_obj[info->rsrc_type].cnt)
return I40IW_ERR_INVALID_HMC_OBJ_INDEX;
if ((info->start_idx + info->count) >
info->hmc_info->hmc_obj[info->rsrc_type].cnt) {
i40iw_debug(dev, I40IW_DEBUG_HMC,
"%s: error type %u, start = %u, req cnt %u, cnt = %u\n",
__func__, info->rsrc_type, info->start_idx, info->count,
info->hmc_info->hmc_obj[info->rsrc_type].cnt);
return I40IW_ERR_INVALID_HMC_OBJ_COUNT;
}
if (!dev->is_pf)
return i40iw_vchnl_vf_add_hmc_objs(dev, info->rsrc_type, 0, info->count);
i40iw_find_sd_index_limit(info->hmc_info, info->rsrc_type,
info->start_idx, info->count,
&sd_idx, &sd_lmt);
if (sd_idx >= info->hmc_info->sd_table.sd_cnt ||
sd_lmt > info->hmc_info->sd_table.sd_cnt) {
return I40IW_ERR_INVALID_SD_INDEX;
}
i40iw_find_pd_index_limit(info->hmc_info, info->rsrc_type,
info->start_idx, info->count, &pd_idx, &pd_lmt);
for (j = sd_idx; j < sd_lmt; j++) {
ret_code = i40iw_add_sd_table_entry(dev->hw, info->hmc_info,
j,
info->entry_type,
I40IW_HMC_DIRECT_BP_SIZE);
if (ret_code)
goto exit_sd_error;
sd_entry = &info->hmc_info->sd_table.sd_entry[j];
if ((sd_entry->entry_type == I40IW_SD_TYPE_PAGED) &&
((dev->hmc_info == info->hmc_info) &&
(info->rsrc_type != I40IW_HMC_IW_PBLE))) {
pd_idx1 = max(pd_idx, (j * I40IW_HMC_MAX_BP_COUNT));
pd_lmt1 = min(pd_lmt,
(j + 1) * I40IW_HMC_MAX_BP_COUNT);
for (i = pd_idx1; i < pd_lmt1; i++) {
/* update the pd table entry */
ret_code = i40iw_add_pd_table_entry(dev->hw, info->hmc_info,
i, NULL);
if (ret_code) {
pd_error = true;
break;
}
}
if (pd_error) {
while (i && (i > pd_idx1)) {
i40iw_remove_pd_bp(dev->hw, info->hmc_info, (i - 1),
info->is_pf);
i--;
}
}
}
if (sd_entry->valid)
continue;
info->hmc_info->sd_indexes[info->add_sd_cnt] = (u16)j;
info->add_sd_cnt++;
sd_entry->valid = true;
}
return i40iw_hmc_finish_add_sd_reg(dev, info);
exit_sd_error:
while (j && (j > sd_idx)) {
sd_entry = &info->hmc_info->sd_table.sd_entry[j - 1];
switch (sd_entry->entry_type) {
case I40IW_SD_TYPE_PAGED:
pd_idx1 = max(pd_idx,
(j - 1) * I40IW_HMC_MAX_BP_COUNT);
pd_lmt1 = min(pd_lmt, (j * I40IW_HMC_MAX_BP_COUNT));
for (i = pd_idx1; i < pd_lmt1; i++)
i40iw_prep_remove_pd_page(info->hmc_info, i);
break;
case I40IW_SD_TYPE_DIRECT:
i40iw_prep_remove_pd_page(info->hmc_info, (j - 1));
break;
default:
ret_code = I40IW_ERR_INVALID_SD_TYPE;
break;
}
j--;
}
return ret_code;
}
/**
* i40iw_finish_del_sd_reg - delete sd entries for objects
* @dev: pointer to the device structure
* @info: del obj info
* @reset: true if called before reset
*/
static enum i40iw_status_code i40iw_finish_del_sd_reg(struct i40iw_sc_dev *dev,
struct i40iw_hmc_del_obj_info *info,
bool reset)
{
struct i40iw_hmc_sd_entry *sd_entry;
enum i40iw_status_code ret_code = 0;
u32 i, sd_idx;
struct i40iw_dma_mem *mem;
if (dev->is_pf && !reset)
ret_code = i40iw_hmc_sd_grp(dev, info->hmc_info,
info->hmc_info->sd_indexes[0],
info->del_sd_cnt, false);
if (ret_code)
i40iw_debug(dev, I40IW_DEBUG_HMC, "%s: error cqp sd sd_grp\n", __func__);
for (i = 0; i < info->del_sd_cnt; i++) {
sd_idx = info->hmc_info->sd_indexes[i];
sd_entry = &info->hmc_info->sd_table.sd_entry[sd_idx];
if (!sd_entry)
continue;
mem = (sd_entry->entry_type == I40IW_SD_TYPE_PAGED) ?
&sd_entry->u.pd_table.pd_page_addr :
&sd_entry->u.bp.addr;
if (!mem || !mem->va)
i40iw_debug(dev, I40IW_DEBUG_HMC, "%s: error cqp sd mem\n", __func__);
else
i40iw_free_dma_mem(dev->hw, mem);
}
return ret_code;
}
/**
* i40iw_sc_del_hmc_obj - remove pe hmc objects
* @dev: pointer to the device structure
* @info: pointer to i40iw_hmc_del_obj_info struct
* @reset: true if called before reset
*
* This will de-populate the SDs and PDs. It frees
* the memory for PDs and backing storage. After this function returns,
* the caller should deallocate memory allocated previously for
* book-keeping information about PDs and backing storage.
*/
enum i40iw_status_code i40iw_sc_del_hmc_obj(struct i40iw_sc_dev *dev,
struct i40iw_hmc_del_obj_info *info,
bool reset)
{
struct i40iw_hmc_pd_table *pd_table;
u32 sd_idx, sd_lmt;
u32 pd_idx, pd_lmt, rel_pd_idx;
u32 i, j;
enum i40iw_status_code ret_code = 0;
if (info->start_idx >= info->hmc_info->hmc_obj[info->rsrc_type].cnt) {
i40iw_debug(dev, I40IW_DEBUG_HMC,
"%s: error start_idx[%04d] >= [type %04d].cnt[%04d]\n",
__func__, info->start_idx, info->rsrc_type,
info->hmc_info->hmc_obj[info->rsrc_type].cnt);
return I40IW_ERR_INVALID_HMC_OBJ_INDEX;
}
if ((info->start_idx + info->count) >
info->hmc_info->hmc_obj[info->rsrc_type].cnt) {
i40iw_debug(dev, I40IW_DEBUG_HMC,
"%s: error start_idx[%04d] + count %04d >= [type %04d].cnt[%04d]\n",
__func__, info->start_idx, info->count,
info->rsrc_type,
info->hmc_info->hmc_obj[info->rsrc_type].cnt);
return I40IW_ERR_INVALID_HMC_OBJ_COUNT;
}
if (!dev->is_pf) {
ret_code = i40iw_vchnl_vf_del_hmc_obj(dev, info->rsrc_type, 0,
info->count);
if (info->rsrc_type != I40IW_HMC_IW_PBLE)
return ret_code;
}
i40iw_find_pd_index_limit(info->hmc_info, info->rsrc_type,
info->start_idx, info->count, &pd_idx, &pd_lmt);
for (j = pd_idx; j < pd_lmt; j++) {
sd_idx = j / I40IW_HMC_PD_CNT_IN_SD;
if (info->hmc_info->sd_table.sd_entry[sd_idx].entry_type !=
I40IW_SD_TYPE_PAGED)
continue;
rel_pd_idx = j % I40IW_HMC_PD_CNT_IN_SD;
pd_table = &info->hmc_info->sd_table.sd_entry[sd_idx].u.pd_table;
if (pd_table->pd_entry[rel_pd_idx].valid) {
ret_code = i40iw_remove_pd_bp(dev->hw, info->hmc_info, j,
info->is_pf);
if (ret_code) {
i40iw_debug(dev, I40IW_DEBUG_HMC, "%s: error\n", __func__);
return ret_code;
}
}
}
i40iw_find_sd_index_limit(info->hmc_info, info->rsrc_type,
info->start_idx, info->count, &sd_idx, &sd_lmt);
if (sd_idx >= info->hmc_info->sd_table.sd_cnt ||
sd_lmt > info->hmc_info->sd_table.sd_cnt) {
i40iw_debug(dev, I40IW_DEBUG_HMC, "%s: error invalid sd_idx\n", __func__);
return I40IW_ERR_INVALID_SD_INDEX;
}
for (i = sd_idx; i < sd_lmt; i++) {
if (!info->hmc_info->sd_table.sd_entry[i].valid)
continue;
switch (info->hmc_info->sd_table.sd_entry[i].entry_type) {
case I40IW_SD_TYPE_DIRECT:
ret_code = i40iw_prep_remove_sd_bp(info->hmc_info, i);
if (!ret_code) {
info->hmc_info->sd_indexes[info->del_sd_cnt] = (u16)i;
info->del_sd_cnt++;
}
break;
case I40IW_SD_TYPE_PAGED:
ret_code = i40iw_prep_remove_pd_page(info->hmc_info, i);
if (!ret_code) {
info->hmc_info->sd_indexes[info->del_sd_cnt] = (u16)i;
info->del_sd_cnt++;
}
break;
default:
break;
}
}
return i40iw_finish_del_sd_reg(dev, info, reset);
}
/**
* i40iw_add_sd_table_entry - Adds a segment descriptor to the table
* @hw: pointer to our hw struct
* @hmc_info: pointer to the HMC configuration information struct
* @sd_index: segment descriptor index to manipulate
* @type: what type of segment descriptor we're manipulating
* @direct_mode_sz: size to alloc in direct mode
*/
enum i40iw_status_code i40iw_add_sd_table_entry(struct i40iw_hw *hw,
struct i40iw_hmc_info *hmc_info,
u32 sd_index,
enum i40iw_sd_entry_type type,
u64 direct_mode_sz)
{
enum i40iw_status_code ret_code = 0;
struct i40iw_hmc_sd_entry *sd_entry;
bool dma_mem_alloc_done = false;
struct i40iw_dma_mem mem;
u64 alloc_len;
sd_entry = &hmc_info->sd_table.sd_entry[sd_index];
if (!sd_entry->valid) {
if (type == I40IW_SD_TYPE_PAGED)
alloc_len = I40IW_HMC_PAGED_BP_SIZE;
else
alloc_len = direct_mode_sz;
/* allocate a 4K pd page or 2M backing page */
ret_code = i40iw_allocate_dma_mem(hw, &mem, alloc_len,
I40IW_HMC_PD_BP_BUF_ALIGNMENT);
if (ret_code)
goto exit;
dma_mem_alloc_done = true;
if (type == I40IW_SD_TYPE_PAGED) {
ret_code = i40iw_allocate_virt_mem(hw,
&sd_entry->u.pd_table.pd_entry_virt_mem,
sizeof(struct i40iw_hmc_pd_entry) * 512);
if (ret_code)
goto exit;
sd_entry->u.pd_table.pd_entry = (struct i40iw_hmc_pd_entry *)
sd_entry->u.pd_table.pd_entry_virt_mem.va;
memcpy(&sd_entry->u.pd_table.pd_page_addr, &mem, sizeof(struct i40iw_dma_mem));
} else {
memcpy(&sd_entry->u.bp.addr, &mem, sizeof(struct i40iw_dma_mem));
sd_entry->u.bp.sd_pd_index = sd_index;
}
hmc_info->sd_table.sd_entry[sd_index].entry_type = type;
I40IW_INC_SD_REFCNT(&hmc_info->sd_table);
}
if (sd_entry->entry_type == I40IW_SD_TYPE_DIRECT)
I40IW_INC_BP_REFCNT(&sd_entry->u.bp);
exit:
if (ret_code)
if (dma_mem_alloc_done)
i40iw_free_dma_mem(hw, &mem);
return ret_code;
}
/**
* i40iw_add_pd_table_entry - Adds page descriptor to the specified table
* @hw: pointer to our HW structure
* @hmc_info: pointer to the HMC configuration information structure
* @pd_index: which page descriptor index to manipulate
* @rsrc_pg: if not NULL, use preallocated page instead of allocating new one.
*
* This function:
* 1. Initializes the pd entry
* 2. Adds pd_entry in the pd_table
* 3. Marks the entry valid in i40iw_hmc_pd_entry structure
* 4. Initializes the pd_entry's ref count to 1
* assumptions:
* 1. The memory for pd should be pinned down, physically contiguous and
* aligned on a 4K boundary, and zeroed.
* 2. It should be 4K in size.
*/
enum i40iw_status_code i40iw_add_pd_table_entry(struct i40iw_hw *hw,
struct i40iw_hmc_info *hmc_info,
u32 pd_index,
struct i40iw_dma_mem *rsrc_pg)
{
enum i40iw_status_code ret_code = 0;
struct i40iw_hmc_pd_table *pd_table;
struct i40iw_hmc_pd_entry *pd_entry;
struct i40iw_dma_mem mem;
struct i40iw_dma_mem *page = &mem;
u32 sd_idx, rel_pd_idx;
u64 *pd_addr;
u64 page_desc;
if (pd_index / I40IW_HMC_PD_CNT_IN_SD >= hmc_info->sd_table.sd_cnt)
return I40IW_ERR_INVALID_PAGE_DESC_INDEX;
sd_idx = (pd_index / I40IW_HMC_PD_CNT_IN_SD);
if (hmc_info->sd_table.sd_entry[sd_idx].entry_type != I40IW_SD_TYPE_PAGED)
return 0;
rel_pd_idx = (pd_index % I40IW_HMC_PD_CNT_IN_SD);
pd_table = &hmc_info->sd_table.sd_entry[sd_idx].u.pd_table;
pd_entry = &pd_table->pd_entry[rel_pd_idx];
if (!pd_entry->valid) {
if (rsrc_pg) {
pd_entry->rsrc_pg = true;
page = rsrc_pg;
} else {
ret_code = i40iw_allocate_dma_mem(hw, page,
I40IW_HMC_PAGED_BP_SIZE,
I40IW_HMC_PD_BP_BUF_ALIGNMENT);
if (ret_code)
return ret_code;
pd_entry->rsrc_pg = false;
}
memcpy(&pd_entry->bp.addr, page, sizeof(struct i40iw_dma_mem));
pd_entry->bp.sd_pd_index = pd_index;
pd_entry->bp.entry_type = I40IW_SD_TYPE_PAGED;
page_desc = page->pa | 0x1;
pd_addr = (u64 *)pd_table->pd_page_addr.va;
pd_addr += rel_pd_idx;
memcpy(pd_addr, &page_desc, sizeof(*pd_addr));
pd_entry->sd_index = sd_idx;
pd_entry->valid = true;
I40IW_INC_PD_REFCNT(pd_table);
if (hmc_info->hmc_fn_id < I40IW_FIRST_VF_FPM_ID)
I40IW_INVALIDATE_PF_HMC_PD(hw, sd_idx, rel_pd_idx);
else if (hw->hmc.hmc_fn_id != hmc_info->hmc_fn_id)
I40IW_INVALIDATE_VF_HMC_PD(hw, sd_idx, rel_pd_idx,
hmc_info->hmc_fn_id);
}
I40IW_INC_BP_REFCNT(&pd_entry->bp);
return 0;
}
/**
* i40iw_remove_pd_bp - remove a backing page from a page descriptor
* @hw: pointer to our HW structure
* @hmc_info: pointer to the HMC configuration information structure
* @idx: the page index
* @is_pf: distinguishes a VF from a PF
*
* This function:
* 1. Marks the entry in pd table (for paged address mode) or in sd table
* (for direct address mode) invalid.
* 2. Writes to register PMPDINV to invalidate the backing page in FV cache
* 3. Decrements the ref count for the pd_entry
* assumptions:
* 1. Caller can deallocate the memory used by backing storage after this
* function returns.
*/
enum i40iw_status_code i40iw_remove_pd_bp(struct i40iw_hw *hw,
struct i40iw_hmc_info *hmc_info,
u32 idx,
bool is_pf)
{
struct i40iw_hmc_pd_entry *pd_entry;
struct i40iw_hmc_pd_table *pd_table;
struct i40iw_hmc_sd_entry *sd_entry;
u32 sd_idx, rel_pd_idx;
struct i40iw_dma_mem *mem;
u64 *pd_addr;
sd_idx = idx / I40IW_HMC_PD_CNT_IN_SD;
rel_pd_idx = idx % I40IW_HMC_PD_CNT_IN_SD;
if (sd_idx >= hmc_info->sd_table.sd_cnt)
return I40IW_ERR_INVALID_PAGE_DESC_INDEX;
sd_entry = &hmc_info->sd_table.sd_entry[sd_idx];
if (sd_entry->entry_type != I40IW_SD_TYPE_PAGED)
return I40IW_ERR_INVALID_SD_TYPE;
pd_table = &hmc_info->sd_table.sd_entry[sd_idx].u.pd_table;
pd_entry = &pd_table->pd_entry[rel_pd_idx];
I40IW_DEC_BP_REFCNT(&pd_entry->bp);
if (pd_entry->bp.ref_cnt)
return 0;
pd_entry->valid = false;
I40IW_DEC_PD_REFCNT(pd_table);
pd_addr = (u64 *)pd_table->pd_page_addr.va;
pd_addr += rel_pd_idx;
memset(pd_addr, 0, sizeof(u64));
if (is_pf)
I40IW_INVALIDATE_PF_HMC_PD(hw, sd_idx, idx);
else
I40IW_INVALIDATE_VF_HMC_PD(hw, sd_idx, idx,
hmc_info->hmc_fn_id);
if (!pd_entry->rsrc_pg) {
mem = &pd_entry->bp.addr;
if (!mem || !mem->va)
return I40IW_ERR_PARAM;
i40iw_free_dma_mem(hw, mem);
}
if (!pd_table->ref_cnt)
i40iw_free_virt_mem(hw, &pd_table->pd_entry_virt_mem);
return 0;
}
/**
* i40iw_prep_remove_sd_bp - Prepares to remove a backing page from a sd entry
* @hmc_info: pointer to the HMC configuration information structure
* @idx: the page index
*/
enum i40iw_status_code i40iw_prep_remove_sd_bp(struct i40iw_hmc_info *hmc_info, u32 idx)
{
struct i40iw_hmc_sd_entry *sd_entry;
sd_entry = &hmc_info->sd_table.sd_entry[idx];
I40IW_DEC_BP_REFCNT(&sd_entry->u.bp);
if (sd_entry->u.bp.ref_cnt)
return I40IW_ERR_NOT_READY;
I40IW_DEC_SD_REFCNT(&hmc_info->sd_table);
sd_entry->valid = false;
return 0;
}
/**
* i40iw_prep_remove_pd_page - Prepares to remove a PD page from sd entry.
* @hmc_info: pointer to the HMC configuration information structure
* @idx: segment descriptor index to find the relevant page descriptor
*/
enum i40iw_status_code i40iw_prep_remove_pd_page(struct i40iw_hmc_info *hmc_info,
u32 idx)
{
struct i40iw_hmc_sd_entry *sd_entry;
sd_entry = &hmc_info->sd_table.sd_entry[idx];
if (sd_entry->u.pd_table.ref_cnt)
return I40IW_ERR_NOT_READY;
sd_entry->valid = false;
I40IW_DEC_SD_REFCNT(&hmc_info->sd_table);
return 0;
}
/**
* i40iw_pf_init_vfhmc - initialize hmc_info for vf driver instance
* @dev: pointer to i40iw_dev struct
* @vf_hmc_fn_id: hmc function id for vf driver
* @vf_cnt_array: array of cnt values of iwarp hmc objects
*
* Called by pf driver to initialize hmc_info for vf driver instance.
*/
enum i40iw_status_code i40iw_pf_init_vfhmc(struct i40iw_sc_dev *dev,
u8 vf_hmc_fn_id,
u32 *vf_cnt_array)
{
struct i40iw_hmc_info *hmc_info;
enum i40iw_status_code ret_code = 0;
u32 i;
if ((vf_hmc_fn_id < I40IW_FIRST_VF_FPM_ID) ||
(vf_hmc_fn_id >= I40IW_FIRST_VF_FPM_ID +
I40IW_MAX_PE_ENABLED_VF_COUNT)) {
i40iw_debug(dev, I40IW_DEBUG_HMC, "%s: invalid vf_hmc_fn_id 0x%x\n",
__func__, vf_hmc_fn_id);
return I40IW_ERR_INVALID_HMCFN_ID;
}
ret_code = i40iw_sc_init_iw_hmc(dev, vf_hmc_fn_id);
if (ret_code)
return ret_code;
hmc_info = i40iw_vf_hmcinfo_from_fpm(dev, vf_hmc_fn_id);
for (i = I40IW_HMC_IW_QP; i < I40IW_HMC_IW_MAX; i++)
if (vf_cnt_array)
hmc_info->hmc_obj[i].cnt =
vf_cnt_array[i - I40IW_HMC_IW_QP];
else
hmc_info->hmc_obj[i].cnt = hmc_info->hmc_obj[i].max_cnt;
return 0;
}


@@ -1,241 +0,0 @@
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#ifndef I40IW_HMC_H
#define I40IW_HMC_H
#include "i40iw_d.h"
struct i40iw_hw;
enum i40iw_status_code;
#define I40IW_HMC_MAX_BP_COUNT 512
#define I40IW_MAX_SD_ENTRIES 11
#define I40IW_HW_DBG_HMC_INVALID_BP_MARK 0xCA
#define I40IW_HMC_INFO_SIGNATURE 0x484D5347
#define I40IW_HMC_PD_CNT_IN_SD 512
#define I40IW_HMC_DIRECT_BP_SIZE 0x200000
#define I40IW_HMC_MAX_SD_COUNT 4096
#define I40IW_HMC_PAGED_BP_SIZE 4096
#define I40IW_HMC_PD_BP_BUF_ALIGNMENT 4096
#define I40IW_FIRST_VF_FPM_ID 16
#define FPM_MULTIPLIER 1024
#define I40IW_INC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt++)
#define I40IW_INC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt++)
#define I40IW_INC_BP_REFCNT(bp) ((bp)->ref_cnt++)
#define I40IW_DEC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt--)
#define I40IW_DEC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt--)
#define I40IW_DEC_BP_REFCNT(bp) ((bp)->ref_cnt--)
/**
* I40IW_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware
* @hw: pointer to our hw struct
* @sd_idx: segment descriptor index
* @pd_idx: page descriptor index
*/
#define I40IW_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx) \
i40iw_wr32((hw), I40E_PFHMC_PDINV, \
(((sd_idx) << I40E_PFHMC_PDINV_PMSDIDX_SHIFT) | \
(0x1 << I40E_PFHMC_PDINV_PMSDPARTSEL_SHIFT) | \
((pd_idx) << I40E_PFHMC_PDINV_PMPDIDX_SHIFT)))
/**
* I40IW_INVALIDATE_VF_HMC_PD - Invalidates the pd cache in the hardware
* @hw: pointer to our hw struct
* @sd_idx: segment descriptor index
* @pd_idx: page descriptor index
* @hmc_fn_id: VF's function id
*/
#define I40IW_INVALIDATE_VF_HMC_PD(hw, sd_idx, pd_idx, hmc_fn_id) \
i40iw_wr32(hw, I40E_GLHMC_VFPDINV(hmc_fn_id - I40IW_FIRST_VF_FPM_ID), \
((sd_idx << I40E_PFHMC_PDINV_PMSDIDX_SHIFT) | \
(pd_idx << I40E_PFHMC_PDINV_PMPDIDX_SHIFT)))
struct i40iw_hmc_obj_info {
u64 base;
u32 max_cnt;
u32 cnt;
u64 size;
};
enum i40iw_sd_entry_type {
I40IW_SD_TYPE_INVALID = 0,
I40IW_SD_TYPE_PAGED = 1,
I40IW_SD_TYPE_DIRECT = 2
};
struct i40iw_hmc_bp {
enum i40iw_sd_entry_type entry_type;
struct i40iw_dma_mem addr;
u32 sd_pd_index;
u32 ref_cnt;
};
struct i40iw_hmc_pd_entry {
struct i40iw_hmc_bp bp;
u32 sd_index;
bool rsrc_pg;
bool valid;
};
struct i40iw_hmc_pd_table {
struct i40iw_dma_mem pd_page_addr;
struct i40iw_hmc_pd_entry *pd_entry;
struct i40iw_virt_mem pd_entry_virt_mem;
u32 ref_cnt;
u32 sd_index;
};
struct i40iw_hmc_sd_entry {
enum i40iw_sd_entry_type entry_type;
bool valid;
union {
struct i40iw_hmc_pd_table pd_table;
struct i40iw_hmc_bp bp;
} u;
};
struct i40iw_hmc_sd_table {
struct i40iw_virt_mem addr;
u32 sd_cnt;
u32 ref_cnt;
struct i40iw_hmc_sd_entry *sd_entry;
};
struct i40iw_hmc_info {
u32 signature;
u8 hmc_fn_id;
u16 first_sd_index;
struct i40iw_hmc_obj_info *hmc_obj;
struct i40iw_virt_mem hmc_obj_virt_mem;
struct i40iw_hmc_sd_table sd_table;
u16 sd_indexes[I40IW_HMC_MAX_SD_COUNT];
};
struct update_sd_entry {
u64 cmd;
u64 data;
};
struct i40iw_update_sds_info {
u32 cnt;
u8 hmc_fn_id;
struct update_sd_entry entry[I40IW_MAX_SD_ENTRIES];
};
struct i40iw_ccq_cqe_info;
struct i40iw_hmc_fcn_info {
void (*callback_fcn)(struct i40iw_sc_dev *, void *,
struct i40iw_ccq_cqe_info *);
void *cqp_callback_param;
u32 vf_id;
u16 iw_vf_idx;
bool free_fcn;
};
enum i40iw_hmc_rsrc_type {
I40IW_HMC_IW_QP = 0,
I40IW_HMC_IW_CQ = 1,
I40IW_HMC_IW_SRQ = 2,
I40IW_HMC_IW_HTE = 3,
I40IW_HMC_IW_ARP = 4,
I40IW_HMC_IW_APBVT_ENTRY = 5,
I40IW_HMC_IW_MR = 6,
I40IW_HMC_IW_XF = 7,
I40IW_HMC_IW_XFFL = 8,
I40IW_HMC_IW_Q1 = 9,
I40IW_HMC_IW_Q1FL = 10,
I40IW_HMC_IW_TIMER = 11,
I40IW_HMC_IW_FSIMC = 12,
I40IW_HMC_IW_FSIAV = 13,
I40IW_HMC_IW_PBLE = 14,
I40IW_HMC_IW_MAX = 15,
};
struct i40iw_hmc_create_obj_info {
struct i40iw_hmc_info *hmc_info;
struct i40iw_virt_mem add_sd_virt_mem;
u32 rsrc_type;
u32 start_idx;
u32 count;
u32 add_sd_cnt;
enum i40iw_sd_entry_type entry_type;
bool is_pf;
};
struct i40iw_hmc_del_obj_info {
struct i40iw_hmc_info *hmc_info;
struct i40iw_virt_mem del_sd_virt_mem;
u32 rsrc_type;
u32 start_idx;
u32 count;
u32 del_sd_cnt;
bool is_pf;
};
enum i40iw_status_code i40iw_copy_dma_mem(struct i40iw_hw *hw, void *dest_buf,
struct i40iw_dma_mem *src_mem, u64 src_offset, u64 size);
enum i40iw_status_code i40iw_sc_create_hmc_obj(struct i40iw_sc_dev *dev,
struct i40iw_hmc_create_obj_info *info);
enum i40iw_status_code i40iw_sc_del_hmc_obj(struct i40iw_sc_dev *dev,
struct i40iw_hmc_del_obj_info *info,
bool reset);
enum i40iw_status_code i40iw_hmc_sd_one(struct i40iw_sc_dev *dev, u8 hmc_fn_id,
u64 pa, u32 sd_idx, enum i40iw_sd_entry_type type,
bool setsd);
enum i40iw_status_code i40iw_update_sds_noccq(struct i40iw_sc_dev *dev,
struct i40iw_update_sds_info *info);
struct i40iw_vfdev *i40iw_vfdev_from_fpm(struct i40iw_sc_dev *dev, u8 hmc_fn_id);
struct i40iw_hmc_info *i40iw_vf_hmcinfo_from_fpm(struct i40iw_sc_dev *dev,
u8 hmc_fn_id);
enum i40iw_status_code i40iw_add_sd_table_entry(struct i40iw_hw *hw,
struct i40iw_hmc_info *hmc_info, u32 sd_index,
enum i40iw_sd_entry_type type, u64 direct_mode_sz);
enum i40iw_status_code i40iw_add_pd_table_entry(struct i40iw_hw *hw,
struct i40iw_hmc_info *hmc_info, u32 pd_index,
struct i40iw_dma_mem *rsrc_pg);
enum i40iw_status_code i40iw_remove_pd_bp(struct i40iw_hw *hw,
struct i40iw_hmc_info *hmc_info, u32 idx, bool is_pf);
enum i40iw_status_code i40iw_prep_remove_sd_bp(struct i40iw_hmc_info *hmc_info, u32 idx);
enum i40iw_status_code i40iw_prep_remove_pd_page(struct i40iw_hmc_info *hmc_info, u32 idx);
#define ENTER_SHARED_FUNCTION()
#define EXIT_SHARED_FUNCTION()
#endif /* I40IW_HMC_H */


@@ -1,851 +0,0 @@
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/if_vlan.h>
#include "i40iw.h"
/**
* i40iw_initialize_hw_resources - initialize hw resource during open
* @iwdev: iwarp device
*/
u32 i40iw_initialize_hw_resources(struct i40iw_device *iwdev)
{
unsigned long num_pds;
u32 resources_size;
u32 max_mr;
u32 max_qp;
u32 max_cq;
u32 arp_table_size;
u32 mrdrvbits;
void *resource_ptr;
max_qp = iwdev->sc_dev.hmc_info->hmc_obj[I40IW_HMC_IW_QP].cnt;
max_cq = iwdev->sc_dev.hmc_info->hmc_obj[I40IW_HMC_IW_CQ].cnt;
max_mr = iwdev->sc_dev.hmc_info->hmc_obj[I40IW_HMC_IW_MR].cnt;
arp_table_size = iwdev->sc_dev.hmc_info->hmc_obj[I40IW_HMC_IW_ARP].cnt;
iwdev->max_cqe = 0xFFFFF;
num_pds = I40IW_MAX_PDS;
resources_size = sizeof(struct i40iw_arp_entry) * arp_table_size;
resources_size += sizeof(unsigned long) * BITS_TO_LONGS(max_qp);
resources_size += sizeof(unsigned long) * BITS_TO_LONGS(max_mr);
resources_size += sizeof(unsigned long) * BITS_TO_LONGS(max_cq);
resources_size += sizeof(unsigned long) * BITS_TO_LONGS(num_pds);
resources_size += sizeof(unsigned long) * BITS_TO_LONGS(arp_table_size);
resources_size += sizeof(struct i40iw_qp **) * max_qp;
iwdev->mem_resources = kzalloc(resources_size, GFP_KERNEL);
if (!iwdev->mem_resources)
return -ENOMEM;
iwdev->max_qp = max_qp;
iwdev->max_mr = max_mr;
iwdev->max_cq = max_cq;
iwdev->max_pd = num_pds;
iwdev->arp_table_size = arp_table_size;
iwdev->arp_table = (struct i40iw_arp_entry *)iwdev->mem_resources;
resource_ptr = iwdev->mem_resources + (sizeof(struct i40iw_arp_entry) * arp_table_size);
iwdev->device_cap_flags = IB_DEVICE_LOCAL_DMA_LKEY |
IB_DEVICE_MEM_WINDOW | IB_DEVICE_MEM_MGT_EXTENSIONS;
iwdev->allocated_qps = resource_ptr;
iwdev->allocated_cqs = &iwdev->allocated_qps[BITS_TO_LONGS(max_qp)];
iwdev->allocated_mrs = &iwdev->allocated_cqs[BITS_TO_LONGS(max_cq)];
iwdev->allocated_pds = &iwdev->allocated_mrs[BITS_TO_LONGS(max_mr)];
iwdev->allocated_arps = &iwdev->allocated_pds[BITS_TO_LONGS(num_pds)];
iwdev->qp_table = (struct i40iw_qp **)(&iwdev->allocated_arps[BITS_TO_LONGS(arp_table_size)]);
set_bit(0, iwdev->allocated_mrs);
set_bit(0, iwdev->allocated_qps);
set_bit(0, iwdev->allocated_cqs);
set_bit(0, iwdev->allocated_pds);
set_bit(0, iwdev->allocated_arps);
/* Following for ILQ/IEQ */
set_bit(1, iwdev->allocated_qps);
set_bit(1, iwdev->allocated_cqs);
set_bit(1, iwdev->allocated_pds);
set_bit(2, iwdev->allocated_cqs);
set_bit(2, iwdev->allocated_pds);
spin_lock_init(&iwdev->resource_lock);
spin_lock_init(&iwdev->qptable_lock);
/* stag index mask has a minimum of 14 bits */
mrdrvbits = 24 - max(get_count_order(iwdev->max_mr), 14);
iwdev->mr_stagmask = ~(((1 << mrdrvbits) - 1) << (32 - mrdrvbits));
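/*
 * Illustrative arithmetic (max_mr value assumed): with max_mr = 65536,
 * get_count_order() returns 16, so mrdrvbits = 24 - 16 = 8 and
 * mr_stagmask = ~(0xff << 24) = 0x00ffffff, i.e. the top 8 bits of the
 * STag index are masked off.
 */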
return 0;
}
/**
* i40iw_cqp_ce_handler - handle cqp completions
* @iwdev: iwarp device
* @cq: cq for cqp completions
* @arm: flag to arm after completions
*/
static void i40iw_cqp_ce_handler(struct i40iw_device *iwdev, struct i40iw_sc_cq *cq, bool arm)
{
struct i40iw_cqp_request *cqp_request;
struct i40iw_sc_dev *dev = &iwdev->sc_dev;
u32 cqe_count = 0;
struct i40iw_ccq_cqe_info info;
int ret;
do {
memset(&info, 0, sizeof(info));
ret = dev->ccq_ops->ccq_get_cqe_info(cq, &info);
if (ret)
break;
cqp_request = (struct i40iw_cqp_request *)(unsigned long)info.scratch;
if (info.error)
i40iw_pr_err("opcode = 0x%x maj_err_code = 0x%x min_err_code = 0x%x\n",
info.op_code, info.maj_err_code, info.min_err_code);
if (cqp_request) {
cqp_request->compl_info.maj_err_code = info.maj_err_code;
cqp_request->compl_info.min_err_code = info.min_err_code;
cqp_request->compl_info.op_ret_val = info.op_ret_val;
cqp_request->compl_info.error = info.error;
if (cqp_request->waiting) {
cqp_request->request_done = true;
wake_up(&cqp_request->waitq);
i40iw_put_cqp_request(&iwdev->cqp, cqp_request);
} else {
if (cqp_request->callback_fcn)
cqp_request->callback_fcn(cqp_request, 1);
i40iw_put_cqp_request(&iwdev->cqp, cqp_request);
}
}
cqe_count++;
} while (1);
if (arm && cqe_count) {
i40iw_process_bh(dev);
dev->ccq_ops->ccq_arm(cq);
}
}
/**
* i40iw_iwarp_ce_handler - handle iwarp completions
* @iwdev: iwarp device
* @iwcq: iwarp cq receiving event
*/
static void i40iw_iwarp_ce_handler(struct i40iw_device *iwdev,
struct i40iw_sc_cq *iwcq)
{
struct i40iw_cq *i40iwcq = iwcq->back_cq;
if (i40iwcq->ibcq.comp_handler)
i40iwcq->ibcq.comp_handler(&i40iwcq->ibcq,
i40iwcq->ibcq.cq_context);
}
/**
* i40iw_puda_ce_handler - handle puda completion events
* @iwdev: iwarp device
* @cq: puda completion q for event
*/
static void i40iw_puda_ce_handler(struct i40iw_device *iwdev,
struct i40iw_sc_cq *cq)
{
struct i40iw_sc_dev *dev = (struct i40iw_sc_dev *)&iwdev->sc_dev;
enum i40iw_status_code status;
u32 compl_error;
do {
status = i40iw_puda_poll_completion(dev, cq, &compl_error);
if (status == I40IW_ERR_QUEUE_EMPTY)
break;
if (status) {
i40iw_pr_err("puda status = %d\n", status);
break;
}
if (compl_error) {
i40iw_pr_err("puda compl_err =0x%x\n", compl_error);
break;
}
} while (1);
dev->ccq_ops->ccq_arm(cq);
}
/**
* i40iw_process_ceq - handle ceq for completions
* @iwdev: iwarp device
* @ceq: ceq having cq for completion
*/
void i40iw_process_ceq(struct i40iw_device *iwdev, struct i40iw_ceq *ceq)
{
struct i40iw_sc_dev *dev = &iwdev->sc_dev;
struct i40iw_sc_ceq *sc_ceq;
struct i40iw_sc_cq *cq;
bool arm = true;
sc_ceq = &ceq->sc_ceq;
do {
cq = dev->ceq_ops->process_ceq(dev, sc_ceq);
if (!cq)
break;
if (cq->cq_type == I40IW_CQ_TYPE_CQP)
i40iw_cqp_ce_handler(iwdev, cq, arm);
else if (cq->cq_type == I40IW_CQ_TYPE_IWARP)
i40iw_iwarp_ce_handler(iwdev, cq);
else if ((cq->cq_type == I40IW_CQ_TYPE_ILQ) ||
(cq->cq_type == I40IW_CQ_TYPE_IEQ))
i40iw_puda_ce_handler(iwdev, cq);
} while (1);
}
/**
* i40iw_next_iw_state - modify qp state
* @iwqp: iwarp qp to modify
* @state: next state for qp
* @del_hash: del hash
* @term: term message
* @termlen: length of term message
*/
void i40iw_next_iw_state(struct i40iw_qp *iwqp,
u8 state,
u8 del_hash,
u8 term,
u8 termlen)
{
struct i40iw_modify_qp_info info;
memset(&info, 0, sizeof(info));
info.next_iwarp_state = state;
info.remove_hash_idx = del_hash;
info.cq_num_valid = true;
info.arp_cache_idx_valid = true;
info.dont_send_term = true;
info.dont_send_fin = true;
info.termlen = termlen;
if (term & I40IWQP_TERM_SEND_TERM_ONLY)
info.dont_send_term = false;
if (term & I40IWQP_TERM_SEND_FIN_ONLY)
info.dont_send_fin = false;
if (iwqp->sc_qp.term_flags && (state == I40IW_QP_STATE_ERROR))
info.reset_tcp_conn = true;
iwqp->hw_iwarp_state = state;
i40iw_hw_modify_qp(iwqp->iwdev, iwqp, &info, 0);
}
/**
* i40iw_process_aeq - handle aeq events
* @iwdev: iwarp device
*/
void i40iw_process_aeq(struct i40iw_device *iwdev)
{
struct i40iw_sc_dev *dev = &iwdev->sc_dev;
struct i40iw_aeq *aeq = &iwdev->aeq;
struct i40iw_sc_aeq *sc_aeq = &aeq->sc_aeq;
struct i40iw_aeqe_info aeinfo;
struct i40iw_aeqe_info *info = &aeinfo;
int ret;
struct i40iw_qp *iwqp = NULL;
struct i40iw_sc_cq *cq = NULL;
struct i40iw_cq *iwcq = NULL;
struct i40iw_sc_qp *qp = NULL;
struct i40iw_qp_host_ctx_info *ctx_info = NULL;
unsigned long flags;
u32 aeqcnt = 0;
if (!sc_aeq->size)
return;
do {
memset(info, 0, sizeof(*info));
ret = dev->aeq_ops->get_next_aeqe(sc_aeq, info);
if (ret)
break;
aeqcnt++;
i40iw_debug(dev, I40IW_DEBUG_AEQ,
"%s ae_id = 0x%x bool qp=%d qp_id = %d\n",
__func__, info->ae_id, info->qp, info->qp_cq_id);
if (info->qp) {
spin_lock_irqsave(&iwdev->qptable_lock, flags);
iwqp = iwdev->qp_table[info->qp_cq_id];
if (!iwqp) {
spin_unlock_irqrestore(&iwdev->qptable_lock, flags);
i40iw_debug(dev, I40IW_DEBUG_AEQ,
"%s qp_id %d is already freed\n",
__func__, info->qp_cq_id);
continue;
}
i40iw_qp_add_ref(&iwqp->ibqp);
spin_unlock_irqrestore(&iwdev->qptable_lock, flags);
qp = &iwqp->sc_qp;
spin_lock_irqsave(&iwqp->lock, flags);
iwqp->hw_tcp_state = info->tcp_state;
iwqp->hw_iwarp_state = info->iwarp_state;
iwqp->last_aeq = info->ae_id;
spin_unlock_irqrestore(&iwqp->lock, flags);
ctx_info = &iwqp->ctx_info;
ctx_info->err_rq_idx_valid = true;
} else {
if (info->ae_id != I40IW_AE_CQ_OPERATION_ERROR)
continue;
}
switch (info->ae_id) {
case I40IW_AE_LLP_FIN_RECEIVED:
if (qp->term_flags)
break;
if (atomic_inc_return(&iwqp->close_timer_started) == 1) {
iwqp->hw_tcp_state = I40IW_TCP_STATE_CLOSE_WAIT;
if ((iwqp->hw_tcp_state == I40IW_TCP_STATE_CLOSE_WAIT) &&
(iwqp->ibqp_state == IB_QPS_RTS)) {
i40iw_next_iw_state(iwqp,
I40IW_QP_STATE_CLOSING, 0, 0, 0);
i40iw_cm_disconn(iwqp);
}
iwqp->cm_id->add_ref(iwqp->cm_id);
i40iw_schedule_cm_timer(iwqp->cm_node,
(struct i40iw_puda_buf *)iwqp,
I40IW_TIMER_TYPE_CLOSE, 1, 0);
}
break;
case I40IW_AE_LLP_CLOSE_COMPLETE:
if (qp->term_flags)
i40iw_terminate_done(qp, 0);
else
i40iw_cm_disconn(iwqp);
break;
case I40IW_AE_BAD_CLOSE:
case I40IW_AE_RESET_SENT:
i40iw_next_iw_state(iwqp, I40IW_QP_STATE_ERROR, 1, 0, 0);
i40iw_cm_disconn(iwqp);
break;
case I40IW_AE_LLP_CONNECTION_RESET:
if (atomic_read(&iwqp->close_timer_started))
break;
i40iw_cm_disconn(iwqp);
break;
case I40IW_AE_QP_SUSPEND_COMPLETE:
i40iw_qp_suspend_resume(dev, &iwqp->sc_qp, false);
break;
case I40IW_AE_TERMINATE_SENT:
i40iw_terminate_send_fin(qp);
break;
case I40IW_AE_LLP_TERMINATE_RECEIVED:
i40iw_terminate_received(qp, info);
break;
case I40IW_AE_CQ_OPERATION_ERROR:
i40iw_pr_err("Processing an iWARP related AE for CQ misc = 0x%04X\n",
info->ae_id);
cq = (struct i40iw_sc_cq *)(unsigned long)info->compl_ctx;
iwcq = (struct i40iw_cq *)cq->back_cq;
if (iwcq->ibcq.event_handler) {
struct ib_event ibevent;
ibevent.device = iwcq->ibcq.device;
ibevent.event = IB_EVENT_CQ_ERR;
ibevent.element.cq = &iwcq->ibcq;
iwcq->ibcq.event_handler(&ibevent, iwcq->ibcq.cq_context);
}
break;
case I40IW_AE_LLP_DOUBT_REACHABILITY:
break;
case I40IW_AE_PRIV_OPERATION_DENIED:
case I40IW_AE_STAG_ZERO_INVALID:
case I40IW_AE_IB_RREQ_AND_Q1_FULL:
case I40IW_AE_DDP_UBE_INVALID_DDP_VERSION:
case I40IW_AE_DDP_UBE_INVALID_MO:
case I40IW_AE_DDP_UBE_INVALID_QN:
case I40IW_AE_DDP_NO_L_BIT:
case I40IW_AE_RDMAP_ROE_INVALID_RDMAP_VERSION:
case I40IW_AE_RDMAP_ROE_UNEXPECTED_OPCODE:
case I40IW_AE_ROE_INVALID_RDMA_READ_REQUEST:
case I40IW_AE_ROE_INVALID_RDMA_WRITE_OR_READ_RESP:
case I40IW_AE_INVALID_ARP_ENTRY:
case I40IW_AE_INVALID_TCP_OPTION_RCVD:
case I40IW_AE_STALE_ARP_ENTRY:
case I40IW_AE_LLP_RECEIVED_MPA_CRC_ERROR:
case I40IW_AE_LLP_SEGMENT_TOO_SMALL:
case I40IW_AE_LLP_SYN_RECEIVED:
case I40IW_AE_LLP_TOO_MANY_RETRIES:
case I40IW_AE_LCE_QP_CATASTROPHIC:
case I40IW_AE_LCE_FUNCTION_CATASTROPHIC:
case I40IW_AE_LCE_CQ_CATASTROPHIC:
case I40IW_AE_UDA_XMIT_DGRAM_TOO_LONG:
case I40IW_AE_UDA_XMIT_DGRAM_TOO_SHORT:
ctx_info->err_rq_idx_valid = false;
fallthrough;
default:
if (!info->sq && ctx_info->err_rq_idx_valid) {
ctx_info->err_rq_idx = info->wqe_idx;
ctx_info->tcp_info_valid = false;
ctx_info->iwarp_info_valid = false;
ret = dev->iw_priv_qp_ops->qp_setctx(&iwqp->sc_qp,
iwqp->host_ctx.va,
ctx_info);
}
i40iw_terminate_connection(qp, info);
break;
}
if (info->qp)
i40iw_qp_rem_ref(&iwqp->ibqp);
} while (1);
if (aeqcnt)
dev->aeq_ops->repost_aeq_entries(dev, aeqcnt);
}
/**
* i40iw_cqp_manage_abvpt_cmd - send cqp command to manage apbvt entry
* @iwdev: iwarp device
* @accel_local_port: port for apbvt
* @add_port: add or delete port
*/
static enum i40iw_status_code
i40iw_cqp_manage_abvpt_cmd(struct i40iw_device *iwdev,
u16 accel_local_port,
bool add_port)
{
struct i40iw_apbvt_info *info;
struct i40iw_cqp_request *cqp_request;
struct cqp_commands_info *cqp_info;
enum i40iw_status_code status;
cqp_request = i40iw_get_cqp_request(&iwdev->cqp, add_port);
if (!cqp_request)
return I40IW_ERR_NO_MEMORY;
cqp_info = &cqp_request->info;
info = &cqp_info->in.u.manage_apbvt_entry.info;
memset(info, 0, sizeof(*info));
info->add = add_port;
info->port = cpu_to_le16(accel_local_port);
cqp_info->cqp_cmd = OP_MANAGE_APBVT_ENTRY;
cqp_info->post_sq = 1;
cqp_info->in.u.manage_apbvt_entry.cqp = &iwdev->cqp.sc_cqp;
cqp_info->in.u.manage_apbvt_entry.scratch = (uintptr_t)cqp_request;
status = i40iw_handle_cqp_op(iwdev, cqp_request);
if (status)
i40iw_pr_err("CQP-OP Manage APBVT entry fail");
return status;
}
/**
* i40iw_manage_apbvt - add or delete tcp port
* @iwdev: iwarp device
* @accel_local_port: port for apbvt
* @add_port: add or delete port
*/
enum i40iw_status_code i40iw_manage_apbvt(struct i40iw_device *iwdev,
u16 accel_local_port,
bool add_port)
{
struct i40iw_cm_core *cm_core = &iwdev->cm_core;
enum i40iw_status_code status;
unsigned long flags;
bool in_use;
/* apbvt_lock is held across CQP delete APBVT OP (non-waiting) to
* protect against race where add APBVT CQP can race ahead of the delete
* APBVT for same port.
*/
if (add_port) {
spin_lock_irqsave(&cm_core->apbvt_lock, flags);
in_use = __test_and_set_bit(accel_local_port,
cm_core->ports_in_use);
spin_unlock_irqrestore(&cm_core->apbvt_lock, flags);
if (in_use)
return 0;
return i40iw_cqp_manage_abvpt_cmd(iwdev, accel_local_port,
true);
} else {
spin_lock_irqsave(&cm_core->apbvt_lock, flags);
in_use = i40iw_port_in_use(cm_core, accel_local_port);
if (in_use) {
spin_unlock_irqrestore(&cm_core->apbvt_lock, flags);
return 0;
}
__clear_bit(accel_local_port, cm_core->ports_in_use);
status = i40iw_cqp_manage_abvpt_cmd(iwdev, accel_local_port,
false);
spin_unlock_irqrestore(&cm_core->apbvt_lock, flags);
return status;
}
}
/**
* i40iw_manage_arp_cache - manage hw arp cache
* @iwdev: iwarp device
* @mac_addr: mac address ptr
* @ip_addr: ip addr for arp cache
* @ipv4: flag indicating IPv4 when true
* @action: add, delete or modify
*/
void i40iw_manage_arp_cache(struct i40iw_device *iwdev,
unsigned char *mac_addr,
u32 *ip_addr,
bool ipv4,
u32 action)
{
struct i40iw_add_arp_cache_entry_info *info;
struct i40iw_cqp_request *cqp_request;
struct cqp_commands_info *cqp_info;
int arp_index;
arp_index = i40iw_arp_table(iwdev, ip_addr, ipv4, mac_addr, action);
if (arp_index < 0)
return;
cqp_request = i40iw_get_cqp_request(&iwdev->cqp, false);
if (!cqp_request)
return;
cqp_info = &cqp_request->info;
if (action == I40IW_ARP_ADD) {
cqp_info->cqp_cmd = OP_ADD_ARP_CACHE_ENTRY;
info = &cqp_info->in.u.add_arp_cache_entry.info;
memset(info, 0, sizeof(*info));
info->arp_index = cpu_to_le16((u16)arp_index);
info->permanent = true;
ether_addr_copy(info->mac_addr, mac_addr);
cqp_info->in.u.add_arp_cache_entry.scratch = (uintptr_t)cqp_request;
cqp_info->in.u.add_arp_cache_entry.cqp = &iwdev->cqp.sc_cqp;
} else {
cqp_info->cqp_cmd = OP_DELETE_ARP_CACHE_ENTRY;
cqp_info->in.u.del_arp_cache_entry.scratch = (uintptr_t)cqp_request;
cqp_info->in.u.del_arp_cache_entry.cqp = &iwdev->cqp.sc_cqp;
cqp_info->in.u.del_arp_cache_entry.arp_index = arp_index;
}
cqp_info->in.u.add_arp_cache_entry.cqp = &iwdev->cqp.sc_cqp;
cqp_info->in.u.add_arp_cache_entry.scratch = (uintptr_t)cqp_request;
cqp_info->post_sq = 1;
if (i40iw_handle_cqp_op(iwdev, cqp_request))
i40iw_pr_err("CQP-OP Add/Del Arp Cache entry fail");
}
/**
* i40iw_send_syn_cqp_callback - do syn/ack after qhash
* @cqp_request: qhash cqp completion
* @send_ack: flag send ack
*/
static void i40iw_send_syn_cqp_callback(struct i40iw_cqp_request *cqp_request, u32 send_ack)
{
i40iw_send_syn(cqp_request->param, send_ack);
}
/**
* i40iw_manage_qhash - add or modify qhash
* @iwdev: iwarp device
* @cminfo: cm info for qhash
* @etype: type (syn or quad)
* @mtype: type of qhash
* @cmnode: cmnode associated with connection
* @wait: wait for completion
*/
enum i40iw_status_code i40iw_manage_qhash(struct i40iw_device *iwdev,
struct i40iw_cm_info *cminfo,
enum i40iw_quad_entry_type etype,
enum i40iw_quad_hash_manage_type mtype,
void *cmnode,
bool wait)
{
struct i40iw_qhash_table_info *info;
struct i40iw_sc_dev *dev = &iwdev->sc_dev;
struct i40iw_sc_vsi *vsi = &iwdev->vsi;
enum i40iw_status_code status;
struct i40iw_cqp *iwcqp = &iwdev->cqp;
struct i40iw_cqp_request *cqp_request;
struct cqp_commands_info *cqp_info;
cqp_request = i40iw_get_cqp_request(iwcqp, wait);
if (!cqp_request)
return I40IW_ERR_NO_MEMORY;
cqp_info = &cqp_request->info;
info = &cqp_info->in.u.manage_qhash_table_entry.info;
memset(info, 0, sizeof(*info));
info->vsi = &iwdev->vsi;
info->manage = mtype;
info->entry_type = etype;
if (cminfo->vlan_id != 0xFFFF) {
info->vlan_valid = true;
info->vlan_id = cpu_to_le16(cminfo->vlan_id);
} else {
info->vlan_valid = false;
}
info->ipv4_valid = cminfo->ipv4;
info->user_pri = cminfo->user_pri;
ether_addr_copy(info->mac_addr, iwdev->netdev->dev_addr);
info->qp_num = cpu_to_le32(vsi->ilq->qp_id);
info->dest_port = cpu_to_le16(cminfo->loc_port);
info->dest_ip[0] = cpu_to_le32(cminfo->loc_addr[0]);
info->dest_ip[1] = cpu_to_le32(cminfo->loc_addr[1]);
info->dest_ip[2] = cpu_to_le32(cminfo->loc_addr[2]);
info->dest_ip[3] = cpu_to_le32(cminfo->loc_addr[3]);
if (etype == I40IW_QHASH_TYPE_TCP_ESTABLISHED) {
info->src_port = cpu_to_le16(cminfo->rem_port);
info->src_ip[0] = cpu_to_le32(cminfo->rem_addr[0]);
info->src_ip[1] = cpu_to_le32(cminfo->rem_addr[1]);
info->src_ip[2] = cpu_to_le32(cminfo->rem_addr[2]);
info->src_ip[3] = cpu_to_le32(cminfo->rem_addr[3]);
}
if (cmnode) {
cqp_request->callback_fcn = i40iw_send_syn_cqp_callback;
cqp_request->param = (void *)cmnode;
}
if (info->ipv4_valid)
i40iw_debug(dev, I40IW_DEBUG_CM,
"%s:%s IP=%pI4, port=%d, mac=%pM, vlan_id=%d\n",
__func__, (!mtype) ? "DELETE" : "ADD",
info->dest_ip,
info->dest_port, info->mac_addr, cminfo->vlan_id);
else
i40iw_debug(dev, I40IW_DEBUG_CM,
"%s:%s IP=%pI6, port=%d, mac=%pM, vlan_id=%d\n",
__func__, (!mtype) ? "DELETE" : "ADD",
info->dest_ip,
info->dest_port, info->mac_addr, cminfo->vlan_id);
cqp_info->in.u.manage_qhash_table_entry.cqp = &iwdev->cqp.sc_cqp;
cqp_info->in.u.manage_qhash_table_entry.scratch = (uintptr_t)cqp_request;
cqp_info->cqp_cmd = OP_MANAGE_QHASH_TABLE_ENTRY;
cqp_info->post_sq = 1;
status = i40iw_handle_cqp_op(iwdev, cqp_request);
if (status)
i40iw_pr_err("CQP-OP Manage Qhash Entry fail");
return status;
}
/**
* i40iw_hw_flush_wqes - flush qp's wqe
* @iwdev: iwarp device
* @qp: hardware control qp
* @info: info for flush
* @wait: flag wait for completion
*/
enum i40iw_status_code i40iw_hw_flush_wqes(struct i40iw_device *iwdev,
struct i40iw_sc_qp *qp,
struct i40iw_qp_flush_info *info,
bool wait)
{
enum i40iw_status_code status;
struct i40iw_qp_flush_info *hw_info;
struct i40iw_cqp_request *cqp_request;
struct cqp_commands_info *cqp_info;
struct i40iw_qp *iwqp = (struct i40iw_qp *)qp->back_qp;
cqp_request = i40iw_get_cqp_request(&iwdev->cqp, wait);
if (!cqp_request)
return I40IW_ERR_NO_MEMORY;
cqp_info = &cqp_request->info;
hw_info = &cqp_request->info.in.u.qp_flush_wqes.info;
memcpy(hw_info, info, sizeof(*hw_info));
cqp_info->cqp_cmd = OP_QP_FLUSH_WQES;
cqp_info->post_sq = 1;
cqp_info->in.u.qp_flush_wqes.qp = qp;
cqp_info->in.u.qp_flush_wqes.scratch = (uintptr_t)cqp_request;
status = i40iw_handle_cqp_op(iwdev, cqp_request);
if (status) {
i40iw_pr_err("CQP-OP Flush WQE's fail");
complete(&iwqp->sq_drained);
complete(&iwqp->rq_drained);
return status;
}
if (!cqp_request->compl_info.maj_err_code) {
switch (cqp_request->compl_info.min_err_code) {
case I40IW_CQP_COMPL_RQ_WQE_FLUSHED:
complete(&iwqp->sq_drained);
break;
case I40IW_CQP_COMPL_SQ_WQE_FLUSHED:
complete(&iwqp->rq_drained);
break;
case I40IW_CQP_COMPL_RQ_SQ_WQE_FLUSHED:
break;
default:
complete(&iwqp->sq_drained);
complete(&iwqp->rq_drained);
break;
}
}
return 0;
}
/**
* i40iw_gen_ae - generate AE
* @iwdev: iwarp device
* @qp: qp associated with AE
* @info: info for ae
* @wait: wait for completion
*/
void i40iw_gen_ae(struct i40iw_device *iwdev,
struct i40iw_sc_qp *qp,
struct i40iw_gen_ae_info *info,
bool wait)
{
struct i40iw_gen_ae_info *ae_info;
struct i40iw_cqp_request *cqp_request;
struct cqp_commands_info *cqp_info;
cqp_request = i40iw_get_cqp_request(&iwdev->cqp, wait);
if (!cqp_request)
return;
cqp_info = &cqp_request->info;
ae_info = &cqp_request->info.in.u.gen_ae.info;
memcpy(ae_info, info, sizeof(*ae_info));
cqp_info->cqp_cmd = OP_GEN_AE;
cqp_info->post_sq = 1;
cqp_info->in.u.gen_ae.qp = qp;
cqp_info->in.u.gen_ae.scratch = (uintptr_t)cqp_request;
if (i40iw_handle_cqp_op(iwdev, cqp_request))
i40iw_pr_err("CQP OP failed attempting to generate ae_code=0x%x\n",
info->ae_code);
}
/**
* i40iw_hw_manage_vf_pble_bp - manage vf pbles
* @iwdev: iwarp device
* @info: info for managing pble
* @wait: flag wait for completion
*/
enum i40iw_status_code i40iw_hw_manage_vf_pble_bp(struct i40iw_device *iwdev,
struct i40iw_manage_vf_pble_info *info,
bool wait)
{
enum i40iw_status_code status;
struct i40iw_manage_vf_pble_info *hw_info;
struct i40iw_cqp_request *cqp_request;
struct cqp_commands_info *cqp_info;
if ((iwdev->init_state < CCQ_CREATED) && wait)
wait = false;
cqp_request = i40iw_get_cqp_request(&iwdev->cqp, wait);
if (!cqp_request)
return I40IW_ERR_NO_MEMORY;
cqp_info = &cqp_request->info;
hw_info = &cqp_request->info.in.u.manage_vf_pble_bp.info;
memcpy(hw_info, info, sizeof(*hw_info));
cqp_info->cqp_cmd = OP_MANAGE_VF_PBLE_BP;
cqp_info->post_sq = 1;
cqp_info->in.u.manage_vf_pble_bp.cqp = &iwdev->cqp.sc_cqp;
cqp_info->in.u.manage_vf_pble_bp.scratch = (uintptr_t)cqp_request;
status = i40iw_handle_cqp_op(iwdev, cqp_request);
if (status)
i40iw_pr_err("CQP-OP Manage VF pble_bp fail");
return status;
}
/**
* i40iw_get_ib_wc - convert iwarp flush code to ib_wc_status
* @opcode: iwarp flush code
*/
static enum ib_wc_status i40iw_get_ib_wc(enum i40iw_flush_opcode opcode)
{
switch (opcode) {
case FLUSH_PROT_ERR:
return IB_WC_LOC_PROT_ERR;
case FLUSH_REM_ACCESS_ERR:
return IB_WC_REM_ACCESS_ERR;
case FLUSH_LOC_QP_OP_ERR:
return IB_WC_LOC_QP_OP_ERR;
case FLUSH_REM_OP_ERR:
return IB_WC_REM_OP_ERR;
case FLUSH_LOC_LEN_ERR:
return IB_WC_LOC_LEN_ERR;
case FLUSH_GENERAL_ERR:
return IB_WC_GENERAL_ERR;
case FLUSH_FATAL_ERR:
default:
return IB_WC_FATAL_ERR;
}
}
/**
* i40iw_set_flush_info - set flush info
* @pinfo: qp flush info to fill
* @min: minor err
* @maj: major err
* @opcode: flush error code
*/
static void i40iw_set_flush_info(struct i40iw_qp_flush_info *pinfo,
u16 *min,
u16 *maj,
enum i40iw_flush_opcode opcode)
{
*min = (u16)i40iw_get_ib_wc(opcode);
*maj = CQE_MAJOR_DRV;
pinfo->userflushcode = true;
}
/**
* i40iw_flush_wqes - flush wqe for qp
* @iwdev: iwarp device
* @iwqp: qp to flush wqes
*/
void i40iw_flush_wqes(struct i40iw_device *iwdev, struct i40iw_qp *iwqp)
{
struct i40iw_qp_flush_info info;
struct i40iw_qp_flush_info *pinfo = &info;
struct i40iw_sc_qp *qp = &iwqp->sc_qp;
memset(pinfo, 0, sizeof(*pinfo));
info.sq = true;
info.rq = true;
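/* a terminated qp carries its termination cause in flush_code; report it through user-defined flush codes */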
if (qp->term_flags) {
i40iw_set_flush_info(pinfo, &pinfo->sq_minor_code,
&pinfo->sq_major_code, qp->flush_code);
i40iw_set_flush_info(pinfo, &pinfo->rq_minor_code,
&pinfo->rq_major_code, qp->flush_code);
}
(void)i40iw_hw_flush_wqes(iwdev, &iwqp->sc_qp, &info, true);
}

File diff suppressed because it is too large


@@ -1,195 +0,0 @@
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#ifndef I40IW_OSDEP_H
#define I40IW_OSDEP_H
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/bitops.h>
#include <net/tcp.h>
#include <crypto/hash.h>
/* get readq/writeq support for 32 bit kernels, use the low-first version */
#include <linux/io-64-nonatomic-lo-hi.h>
#define STATS_TIMER_DELAY 1000
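/**
 * set_64bit_val - write 64 bit value to wqe
 * @wqe_words: wqe addr to write to
 * @byte_index: index to write to
 * @value: value to write
 **/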
static inline void set_64bit_val(u64 *wqe_words, u32 byte_index, u64 value)
{
wqe_words[byte_index >> 3] = value;
}
/**
* get_64bit_val - read 64 bit value from wqe
* @wqe_words: wqe addr
* @byte_index: index to read from
* @value: read value
**/
static inline void get_64bit_val(u64 *wqe_words, u32 byte_index, u64 *value)
{
*value = wqe_words[byte_index >> 3];
}
struct i40iw_dma_mem {
void *va;
dma_addr_t pa;
u32 size;
} __packed;
struct i40iw_virt_mem {
void *va;
u32 size;
} __packed;
#define i40iw_debug(h, m, s, ...) \
do { \
if (((m) & (h)->debug_mask)) \
pr_info("i40iw " s, ##__VA_ARGS__); \
} while (0)
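/* read back a status register to flush any pending posted writes */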
#define i40iw_flush(a) readl((a)->hw_addr + I40E_GLGEN_STAT)
#define I40E_GLHMC_VFSDCMD(_i) (0x000C8000 + ((_i) * 4)) \
/* _i=0...31 */
#define I40E_GLHMC_VFSDCMD_MAX_INDEX 31
#define I40E_GLHMC_VFSDCMD_PMSDIDX_SHIFT 0
#define I40E_GLHMC_VFSDCMD_PMSDIDX_MASK (0xFFF \
<< I40E_GLHMC_VFSDCMD_PMSDIDX_SHIFT)
#define I40E_GLHMC_VFSDCMD_PF_SHIFT 16
#define I40E_GLHMC_VFSDCMD_PF_MASK (0xF << I40E_GLHMC_VFSDCMD_PF_SHIFT)
#define I40E_GLHMC_VFSDCMD_VF_SHIFT 20
#define I40E_GLHMC_VFSDCMD_VF_MASK (0x1FF << I40E_GLHMC_VFSDCMD_VF_SHIFT)
#define I40E_GLHMC_VFSDCMD_PMF_TYPE_SHIFT 29
#define I40E_GLHMC_VFSDCMD_PMF_TYPE_MASK (0x3 \
<< I40E_GLHMC_VFSDCMD_PMF_TYPE_SHIFT)
#define I40E_GLHMC_VFSDCMD_PMSDWR_SHIFT 31
#define I40E_GLHMC_VFSDCMD_PMSDWR_MASK (0x1 << I40E_GLHMC_VFSDCMD_PMSDWR_SHIFT)
#define I40E_GLHMC_VFSDDATAHIGH(_i) (0x000C8200 + ((_i) * 4)) \
/* _i=0...31 */
#define I40E_GLHMC_VFSDDATAHIGH_MAX_INDEX 31
#define I40E_GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_SHIFT 0
#define I40E_GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_MASK (0xFFFFFFFF \
<< I40E_GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_SHIFT)
#define I40E_GLHMC_VFSDDATALOW(_i) (0x000C8100 + ((_i) * 4)) \
/* _i=0...31 */
#define I40E_GLHMC_VFSDDATALOW_MAX_INDEX 31
#define I40E_GLHMC_VFSDDATALOW_PMSDVALID_SHIFT 0
#define I40E_GLHMC_VFSDDATALOW_PMSDVALID_MASK (0x1 \
<< I40E_GLHMC_VFSDDATALOW_PMSDVALID_SHIFT)
#define I40E_GLHMC_VFSDDATALOW_PMSDTYPE_SHIFT 1
#define I40E_GLHMC_VFSDDATALOW_PMSDTYPE_MASK (0x1 \
<< I40E_GLHMC_VFSDDATALOW_PMSDTYPE_SHIFT)
#define I40E_GLHMC_VFSDDATALOW_PMSDBPCOUNT_SHIFT 2
#define I40E_GLHMC_VFSDDATALOW_PMSDBPCOUNT_MASK (0x3FF \
<< I40E_GLHMC_VFSDDATALOW_PMSDBPCOUNT_SHIFT)
#define I40E_GLHMC_VFSDDATALOW_PMSDDATALOW_SHIFT 12
#define I40E_GLHMC_VFSDDATALOW_PMSDDATALOW_MASK (0xFFFFF \
<< I40E_GLHMC_VFSDDATALOW_PMSDDATALOW_SHIFT)
#define I40E_GLPE_FWLDSTATUS 0x0000D200
#define I40E_GLPE_FWLDSTATUS_LOAD_REQUESTED_SHIFT 0
#define I40E_GLPE_FWLDSTATUS_LOAD_REQUESTED_MASK (0x1 \
<< I40E_GLPE_FWLDSTATUS_LOAD_REQUESTED_SHIFT)
#define I40E_GLPE_FWLDSTATUS_DONE_SHIFT 1
#define I40E_GLPE_FWLDSTATUS_DONE_MASK (0x1 << I40E_GLPE_FWLDSTATUS_DONE_SHIFT)
#define I40E_GLPE_FWLDSTATUS_CQP_FAIL_SHIFT 2
#define I40E_GLPE_FWLDSTATUS_CQP_FAIL_MASK (0x1 \
<< I40E_GLPE_FWLDSTATUS_CQP_FAIL_SHIFT)
#define I40E_GLPE_FWLDSTATUS_TEP_FAIL_SHIFT 3
#define I40E_GLPE_FWLDSTATUS_TEP_FAIL_MASK (0x1 \
<< I40E_GLPE_FWLDSTATUS_TEP_FAIL_SHIFT)
#define I40E_GLPE_FWLDSTATUS_OOP_FAIL_SHIFT 4
#define I40E_GLPE_FWLDSTATUS_OOP_FAIL_MASK (0x1 \
<< I40E_GLPE_FWLDSTATUS_OOP_FAIL_SHIFT)
struct i40iw_sc_dev;
struct i40iw_sc_qp;
struct i40iw_puda_buf;
struct i40iw_puda_completion_info;
struct i40iw_update_sds_info;
struct i40iw_hmc_fcn_info;
struct i40iw_virtchnl_work_info;
struct i40iw_manage_vf_pble_info;
struct i40iw_device;
struct i40iw_hmc_info;
struct i40iw_hw;
u8 __iomem *i40iw_get_hw_addr(void *dev);
void i40iw_ieq_mpa_crc_ae(struct i40iw_sc_dev *dev, struct i40iw_sc_qp *qp);
enum i40iw_status_code i40iw_vf_wait_vchnl_resp(struct i40iw_sc_dev *dev);
bool i40iw_vf_clear_to_send(struct i40iw_sc_dev *dev);
enum i40iw_status_code i40iw_ieq_check_mpacrc(struct shash_desc *desc, void *addr,
u32 length, u32 value);
struct i40iw_sc_qp *i40iw_ieq_get_qp(struct i40iw_sc_dev *dev, struct i40iw_puda_buf *buf);
void i40iw_ieq_update_tcpip_info(struct i40iw_puda_buf *buf, u16 length, u32 seqnum);
void i40iw_free_hash_desc(struct shash_desc *);
enum i40iw_status_code i40iw_init_hash_desc(struct shash_desc **);
enum i40iw_status_code i40iw_puda_get_tcpip_info(struct i40iw_puda_completion_info *info,
struct i40iw_puda_buf *buf);
enum i40iw_status_code i40iw_cqp_sds_cmd(struct i40iw_sc_dev *dev,
struct i40iw_update_sds_info *info);
enum i40iw_status_code i40iw_cqp_manage_hmc_fcn_cmd(struct i40iw_sc_dev *dev,
struct i40iw_hmc_fcn_info *hmcfcninfo);
enum i40iw_status_code i40iw_cqp_query_fpm_values_cmd(struct i40iw_sc_dev *dev,
struct i40iw_dma_mem *values_mem,
u8 hmc_fn_id);
enum i40iw_status_code i40iw_cqp_commit_fpm_values_cmd(struct i40iw_sc_dev *dev,
struct i40iw_dma_mem *values_mem,
u8 hmc_fn_id);
enum i40iw_status_code i40iw_alloc_query_fpm_buf(struct i40iw_sc_dev *dev,
struct i40iw_dma_mem *mem);
enum i40iw_status_code i40iw_cqp_manage_vf_pble_bp(struct i40iw_sc_dev *dev,
struct i40iw_manage_vf_pble_info *info);
void i40iw_cqp_spawn_worker(struct i40iw_sc_dev *dev,
struct i40iw_virtchnl_work_info *work_info, u32 iw_vf_idx);
void *i40iw_remove_head(struct list_head *list);
void i40iw_qp_suspend_resume(struct i40iw_sc_dev *dev, struct i40iw_sc_qp *qp, bool suspend);
void i40iw_term_modify_qp(struct i40iw_sc_qp *qp, u8 next_state, u8 term, u8 term_len);
void i40iw_terminate_done(struct i40iw_sc_qp *qp, int timeout_occurred);
void i40iw_terminate_start_timer(struct i40iw_sc_qp *qp);
void i40iw_terminate_del_timer(struct i40iw_sc_qp *qp);
enum i40iw_status_code i40iw_hw_manage_vf_pble_bp(struct i40iw_device *iwdev,
struct i40iw_manage_vf_pble_info *info,
bool wait);
struct i40iw_sc_vsi;
void i40iw_hw_stats_start_timer(struct i40iw_sc_vsi *vsi);
void i40iw_hw_stats_stop_timer(struct i40iw_sc_vsi *vsi);
#define i40iw_mmiowb() do { } while (0)
void i40iw_wr32(struct i40iw_hw *hw, u32 reg, u32 value);
u32 i40iw_rd32(struct i40iw_hw *hw, u32 reg);
#endif /* I40IW_OSDEP_H */


@@ -1,129 +0,0 @@
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#ifndef I40IW_P_H
#define I40IW_P_H
#define PAUSE_TIMER_VALUE 0xFFFF
#define REFRESH_THRESHOLD 0x7FFF
#define HIGH_THRESHOLD 0x800
#define LOW_THRESHOLD 0x200
#define ALL_TC2PFC 0xFF
#define CQP_COMPL_WAIT_TIME 0x3E8
#define CQP_TIMEOUT_THRESHOLD 5
void i40iw_debug_buf(struct i40iw_sc_dev *dev, enum i40iw_debug_flag mask,
char *desc, u64 *buf, u32 size);
/* init operations */
enum i40iw_status_code i40iw_device_init(struct i40iw_sc_dev *dev,
struct i40iw_device_init_info *info);
void i40iw_sc_cqp_post_sq(struct i40iw_sc_cqp *cqp);
u64 *i40iw_sc_cqp_get_next_send_wqe(struct i40iw_sc_cqp *cqp, u64 scratch);
void i40iw_check_cqp_progress(struct i40iw_cqp_timeout *cqp_timeout, struct i40iw_sc_dev *dev);
enum i40iw_status_code i40iw_sc_mr_fast_register(struct i40iw_sc_qp *qp,
struct i40iw_fast_reg_stag_info *info,
bool post_sq);
void i40iw_insert_wqe_hdr(u64 *wqe, u64 header);
/* HMC/FPM functions */
enum i40iw_status_code i40iw_sc_init_iw_hmc(struct i40iw_sc_dev *dev,
u8 hmc_fn_id);
enum i40iw_status_code i40iw_pf_init_vfhmc(struct i40iw_sc_dev *dev, u8 vf_hmc_fn_id,
u32 *vf_cnt_array);
/* stats functions */
void i40iw_hw_stats_refresh_all(struct i40iw_vsi_pestat *stats);
void i40iw_hw_stats_read_all(struct i40iw_vsi_pestat *stats, struct i40iw_dev_hw_stats *stats_values);
void i40iw_hw_stats_read_32(struct i40iw_vsi_pestat *stats,
enum i40iw_hw_stats_index_32b index,
u64 *value);
void i40iw_hw_stats_read_64(struct i40iw_vsi_pestat *stats,
enum i40iw_hw_stats_index_64b index,
u64 *value);
void i40iw_hw_stats_init(struct i40iw_vsi_pestat *stats, u8 index, bool is_pf);
/* vsi misc functions */
enum i40iw_status_code i40iw_vsi_stats_init(struct i40iw_sc_vsi *vsi, struct i40iw_vsi_stats_info *info);
void i40iw_vsi_stats_free(struct i40iw_sc_vsi *vsi);
void i40iw_sc_vsi_init(struct i40iw_sc_vsi *vsi, struct i40iw_vsi_init_info *info);
void i40iw_change_l2params(struct i40iw_sc_vsi *vsi, struct i40iw_l2params *l2params);
void i40iw_qp_add_qos(struct i40iw_sc_qp *qp);
void i40iw_qp_rem_qos(struct i40iw_sc_qp *qp);
void i40iw_terminate_send_fin(struct i40iw_sc_qp *qp);
void i40iw_terminate_connection(struct i40iw_sc_qp *qp, struct i40iw_aeqe_info *info);
void i40iw_terminate_received(struct i40iw_sc_qp *qp, struct i40iw_aeqe_info *info);
enum i40iw_status_code i40iw_sc_suspend_qp(struct i40iw_sc_cqp *cqp,
struct i40iw_sc_qp *qp, u64 scratch);
enum i40iw_status_code i40iw_sc_resume_qp(struct i40iw_sc_cqp *cqp,
struct i40iw_sc_qp *qp, u64 scratch);
enum i40iw_status_code i40iw_sc_static_hmc_pages_allocated(struct i40iw_sc_cqp *cqp,
u64 scratch, u8 hmc_fn_id,
bool post_sq,
bool poll_registers);
enum i40iw_status_code i40iw_config_fpm_values(struct i40iw_sc_dev *dev, u32 qp_count);
enum i40iw_status_code i40iw_get_rdma_features(struct i40iw_sc_dev *dev);
void free_sd_mem(struct i40iw_sc_dev *dev);
enum i40iw_status_code i40iw_process_cqp_cmd(struct i40iw_sc_dev *dev,
struct cqp_commands_info *pcmdinfo);
enum i40iw_status_code i40iw_process_bh(struct i40iw_sc_dev *dev);
/* prototype for functions used for dynamic memory allocation */
enum i40iw_status_code i40iw_allocate_dma_mem(struct i40iw_hw *hw,
struct i40iw_dma_mem *mem, u64 size,
u32 alignment);
void i40iw_free_dma_mem(struct i40iw_hw *hw, struct i40iw_dma_mem *mem);
enum i40iw_status_code i40iw_allocate_virt_mem(struct i40iw_hw *hw,
struct i40iw_virt_mem *mem, u32 size);
enum i40iw_status_code i40iw_free_virt_mem(struct i40iw_hw *hw,
struct i40iw_virt_mem *mem);
u8 i40iw_get_encoded_wqe_size(u32 wqsize, bool cqpsq);
void i40iw_reinitialize_ieq(struct i40iw_sc_dev *dev);
#endif


@@ -1,611 +0,0 @@
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#include "i40iw_status.h"
#include "i40iw_osdep.h"
#include "i40iw_register.h"
#include "i40iw_hmc.h"
#include "i40iw_d.h"
#include "i40iw_type.h"
#include "i40iw_p.h"
#include <linux/pci.h>
#include <linux/genalloc.h>
#include <linux/vmalloc.h>
#include "i40iw_pble.h"
#include "i40iw.h"
struct i40iw_device;
static enum i40iw_status_code add_pble_pool(struct i40iw_sc_dev *dev,
struct i40iw_hmc_pble_rsrc *pble_rsrc);
static void i40iw_free_vmalloc_mem(struct i40iw_hw *hw, struct i40iw_chunk *chunk);
/**
* i40iw_destroy_pble_pool - destroy pool during module unload
* @dev: i40iw_sc_dev struct
* @pble_rsrc: pble resources
*/
void i40iw_destroy_pble_pool(struct i40iw_sc_dev *dev, struct i40iw_hmc_pble_rsrc *pble_rsrc)
{
struct list_head *clist;
struct list_head *tlist;
struct i40iw_chunk *chunk;
struct i40iw_pble_pool *pinfo = &pble_rsrc->pinfo;
if (pinfo->pool) {
list_for_each_safe(clist, tlist, &pinfo->clist) {
chunk = list_entry(clist, struct i40iw_chunk, list);
if (chunk->type == I40IW_VMALLOC)
i40iw_free_vmalloc_mem(dev->hw, chunk);
kfree(chunk);
}
gen_pool_destroy(pinfo->pool);
}
}
/**
* i40iw_hmc_init_pble - Initialize pble resources during module load
* @dev: i40iw_sc_dev struct
* @pble_rsrc: pble resources
*/
enum i40iw_status_code i40iw_hmc_init_pble(struct i40iw_sc_dev *dev,
struct i40iw_hmc_pble_rsrc *pble_rsrc)
{
struct i40iw_hmc_info *hmc_info;
u32 fpm_idx = 0;
hmc_info = dev->hmc_info;
pble_rsrc->fpm_base_addr = hmc_info->hmc_obj[I40IW_HMC_IW_PBLE].base;
/* Now start the pbles on a 4k boundary */
if (pble_rsrc->fpm_base_addr & 0xfff)
fpm_idx = (PAGE_SIZE - (pble_rsrc->fpm_base_addr & 0xfff)) >> 3;
pble_rsrc->unallocated_pble =
hmc_info->hmc_obj[I40IW_HMC_IW_PBLE].cnt - fpm_idx;
pble_rsrc->next_fpm_addr = pble_rsrc->fpm_base_addr + (fpm_idx << 3);
pble_rsrc->pinfo.pool_shift = POOL_SHIFT;
pble_rsrc->pinfo.pool = gen_pool_create(pble_rsrc->pinfo.pool_shift, -1);
INIT_LIST_HEAD(&pble_rsrc->pinfo.clist);
if (!pble_rsrc->pinfo.pool)
goto error;
if (add_pble_pool(dev, pble_rsrc))
goto error;
return 0;
error:
i40iw_destroy_pble_pool(dev, pble_rsrc);
return I40IW_ERR_NO_MEMORY;
}
/**
* get_sd_pd_idx - Returns sd index, pd index and rel_pd_idx from fpm address
* @pble_rsrc: structure containing fpm address
* @idx: where to return indexes
*/
static inline void get_sd_pd_idx(struct i40iw_hmc_pble_rsrc *pble_rsrc,
struct sd_pd_idx *idx)
{
idx->sd_idx = (u32)(pble_rsrc->next_fpm_addr) / I40IW_HMC_DIRECT_BP_SIZE;
idx->pd_idx = (u32)(pble_rsrc->next_fpm_addr) / I40IW_HMC_PAGED_BP_SIZE;
idx->rel_pd_idx = (idx->pd_idx % I40IW_HMC_PD_CNT_IN_SD);
}
/**
* add_sd_direct - add sd direct for pble
* @dev: hardware control device structure
* @pble_rsrc: pble resource ptr
* @info: page info for sd
*/
static enum i40iw_status_code add_sd_direct(struct i40iw_sc_dev *dev,
struct i40iw_hmc_pble_rsrc *pble_rsrc,
struct i40iw_add_page_info *info)
{
enum i40iw_status_code ret_code = 0;
struct sd_pd_idx *idx = &info->idx;
struct i40iw_chunk *chunk = info->chunk;
struct i40iw_hmc_info *hmc_info = info->hmc_info;
struct i40iw_hmc_sd_entry *sd_entry = info->sd_entry;
u32 offset = 0;
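/* a direct sd is backed by a single 2MB DMA allocation set up by the PF */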
if (!sd_entry->valid) {
if (dev->is_pf) {
ret_code = i40iw_add_sd_table_entry(dev->hw, hmc_info,
info->idx.sd_idx,
I40IW_SD_TYPE_DIRECT,
I40IW_HMC_DIRECT_BP_SIZE);
if (ret_code)
return ret_code;
chunk->type = I40IW_DMA_COHERENT;
}
}
offset = idx->rel_pd_idx << I40IW_HMC_PAGED_BP_SHIFT;
chunk->size = info->pages << I40IW_HMC_PAGED_BP_SHIFT;
chunk->vaddr = ((u8 *)sd_entry->u.bp.addr.va + offset);
chunk->fpm_addr = pble_rsrc->next_fpm_addr;
i40iw_debug(dev, I40IW_DEBUG_PBLE, "chunk_size[%d] = 0x%x vaddr=%p fpm_addr = %llx\n",
chunk->size, chunk->size, chunk->vaddr, chunk->fpm_addr);
return 0;
}
/**
* i40iw_free_vmalloc_mem - free vmalloc during close
* @hw: hw struct
* @chunk: chunk information for vmalloc
*/
static void i40iw_free_vmalloc_mem(struct i40iw_hw *hw, struct i40iw_chunk *chunk)
{
struct pci_dev *pcidev = hw->pcidev;
int i;
if (!chunk->pg_cnt)
goto done;
for (i = 0; i < chunk->pg_cnt; i++)
dma_unmap_page(&pcidev->dev, chunk->dmaaddrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
done:
kfree(chunk->dmaaddrs);
chunk->dmaaddrs = NULL;
vfree(chunk->vaddr);
chunk->vaddr = NULL;
chunk->type = 0;
}
/**
* i40iw_get_vmalloc_mem - get 2M page for sd
* @hw: hardware address
* @chunk: chunk to add
* @pg_cnt: # of 4K pages
*/
static enum i40iw_status_code i40iw_get_vmalloc_mem(struct i40iw_hw *hw,
struct i40iw_chunk *chunk,
int pg_cnt)
{
struct pci_dev *pcidev = hw->pcidev;
struct page *page;
u8 *addr;
u32 size;
int i;
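/* one DMA address per 4K page backing the vmalloc'ed chunk */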
chunk->dmaaddrs = kzalloc(pg_cnt << 3, GFP_KERNEL);
if (!chunk->dmaaddrs)
return I40IW_ERR_NO_MEMORY;
size = PAGE_SIZE * pg_cnt;
chunk->vaddr = vmalloc(size);
if (!chunk->vaddr) {
kfree(chunk->dmaaddrs);
chunk->dmaaddrs = NULL;
return I40IW_ERR_NO_MEMORY;
}
chunk->size = size;
addr = (u8 *)chunk->vaddr;
for (i = 0; i < pg_cnt; i++) {
page = vmalloc_to_page((void *)addr);
if (!page)
break;
chunk->dmaaddrs[i] = dma_map_page(&pcidev->dev, page, 0,
PAGE_SIZE, DMA_BIDIRECTIONAL);
if (dma_mapping_error(&pcidev->dev, chunk->dmaaddrs[i]))
break;
addr += PAGE_SIZE;
}
chunk->pg_cnt = i;
chunk->type = I40IW_VMALLOC;
if (i == pg_cnt)
return 0;
i40iw_free_vmalloc_mem(hw, chunk);
return I40IW_ERR_NO_MEMORY;
}
/**
* fpm_to_idx - given fpm address, get pble index
* @pble_rsrc: pble resource management
* @addr: fpm address for index
*/
static inline u32 fpm_to_idx(struct i40iw_hmc_pble_rsrc *pble_rsrc, u64 addr)
{
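/* each pble entry is 8 bytes, so the byte offset >> 3 gives the pble index */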
return (addr - (pble_rsrc->fpm_base_addr)) >> 3;
}
/**
* add_bp_pages - add backing pages for sd
* @dev: hardware control device structure
* @pble_rsrc: pble resource management
* @info: page info for sd
*/
static enum i40iw_status_code add_bp_pages(struct i40iw_sc_dev *dev,
struct i40iw_hmc_pble_rsrc *pble_rsrc,
struct i40iw_add_page_info *info)
{
u8 *addr;
struct i40iw_dma_mem mem;
struct i40iw_hmc_pd_entry *pd_entry;
struct i40iw_hmc_sd_entry *sd_entry = info->sd_entry;
struct i40iw_hmc_info *hmc_info = info->hmc_info;
struct i40iw_chunk *chunk = info->chunk;
struct i40iw_manage_vf_pble_info vf_pble_info;
enum i40iw_status_code status = 0;
u32 rel_pd_idx = info->idx.rel_pd_idx;
u32 pd_idx = info->idx.pd_idx;
u32 i;
status = i40iw_get_vmalloc_mem(dev->hw, chunk, info->pages);
if (status)
return I40IW_ERR_NO_MEMORY;
status = i40iw_add_sd_table_entry(dev->hw, hmc_info,
info->idx.sd_idx, I40IW_SD_TYPE_PAGED,
I40IW_HMC_DIRECT_BP_SIZE);
if (status)
goto error;
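/* a VF cannot program HMC objects itself; request the pble backing pages from the PF over the virtual channel */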
if (!dev->is_pf) {
status = i40iw_vchnl_vf_add_hmc_objs(dev, I40IW_HMC_IW_PBLE,
fpm_to_idx(pble_rsrc,
pble_rsrc->next_fpm_addr),
(info->pages << PBLE_512_SHIFT));
if (status) {
i40iw_pr_err("allocate PBLEs in the PF. Error %i\n", status);
goto error;
}
}
addr = chunk->vaddr;
for (i = 0; i < info->pages; i++) {
mem.pa = chunk->dmaaddrs[i];
mem.size = PAGE_SIZE;
mem.va = (void *)(addr);
pd_entry = &sd_entry->u.pd_table.pd_entry[rel_pd_idx++];
if (!pd_entry->valid) {
status = i40iw_add_pd_table_entry(dev->hw, hmc_info, pd_idx++, &mem);
if (status)
goto error;
addr += PAGE_SIZE;
} else {
i40iw_pr_err("pd entry is valid expecting to be invalid\n");
}
}
if (!dev->is_pf) {
vf_pble_info.first_pd_index = info->idx.rel_pd_idx;
vf_pble_info.inv_pd_ent = false;
vf_pble_info.pd_entry_cnt = PBLE_PER_PAGE;
vf_pble_info.pd_pl_pba = sd_entry->u.pd_table.pd_page_addr.pa;
vf_pble_info.sd_index = info->idx.sd_idx;
status = i40iw_hw_manage_vf_pble_bp(dev->back_dev,
&vf_pble_info, true);
if (status) {
i40iw_pr_err("CQP manage VF PBLE BP failed. %i\n", status);
goto error;
}
}
chunk->fpm_addr = pble_rsrc->next_fpm_addr;
return 0;
error:
i40iw_free_vmalloc_mem(dev->hw, chunk);
return status;
}
/**
* add_pble_pool - add an sd entry for pble resource
* @dev: hardware control device structure
* @pble_rsrc: pble resource management
*/
static enum i40iw_status_code add_pble_pool(struct i40iw_sc_dev *dev,
struct i40iw_hmc_pble_rsrc *pble_rsrc)
{
struct i40iw_hmc_sd_entry *sd_entry;
struct i40iw_hmc_info *hmc_info;
struct i40iw_chunk *chunk;
struct i40iw_add_page_info info;
struct sd_pd_idx *idx = &info.idx;
enum i40iw_status_code ret_code = 0;
enum i40iw_sd_entry_type sd_entry_type;
u64 sd_reg_val = 0;
u32 pages;
if (pble_rsrc->unallocated_pble < PBLE_PER_PAGE)
return I40IW_ERR_NO_MEMORY;
if (pble_rsrc->next_fpm_addr & 0xfff) {
i40iw_pr_err("next fpm_addr %llx\n", pble_rsrc->next_fpm_addr);
return I40IW_ERR_INVALID_PAGE_DESC_INDEX;
}
chunk = kzalloc(sizeof(*chunk), GFP_KERNEL);
if (!chunk)
return I40IW_ERR_NO_MEMORY;
hmc_info = dev->hmc_info;
chunk->fpm_addr = pble_rsrc->next_fpm_addr;
get_sd_pd_idx(pble_rsrc, idx);
sd_entry = &hmc_info->sd_table.sd_entry[idx->sd_idx];
pages = (idx->rel_pd_idx) ? (I40IW_HMC_PD_CNT_IN_SD -
idx->rel_pd_idx) : I40IW_HMC_PD_CNT_IN_SD;
pages = min(pages, pble_rsrc->unallocated_pble >> PBLE_512_SHIFT);
info.chunk = chunk;
info.hmc_info = hmc_info;
info.pages = pages;
info.sd_entry = sd_entry;
if (!sd_entry->valid) {
sd_entry_type = (!idx->rel_pd_idx &&
(pages == I40IW_HMC_PD_CNT_IN_SD) &&
dev->is_pf) ? I40IW_SD_TYPE_DIRECT : I40IW_SD_TYPE_PAGED;
} else {
sd_entry_type = sd_entry->entry_type;
}
i40iw_debug(dev, I40IW_DEBUG_PBLE,
"pages = %d, unallocated_pble[%u] current_fpm_addr = %llx\n",
pages, pble_rsrc->unallocated_pble, pble_rsrc->next_fpm_addr);
i40iw_debug(dev, I40IW_DEBUG_PBLE, "sd_entry_type = %d sd_entry valid = %d\n",
sd_entry_type, sd_entry->valid);
if (sd_entry_type == I40IW_SD_TYPE_DIRECT)
ret_code = add_sd_direct(dev, pble_rsrc, &info);
if (ret_code)
sd_entry_type = I40IW_SD_TYPE_PAGED;
else
pble_rsrc->stats_direct_sds++;
if (sd_entry_type == I40IW_SD_TYPE_PAGED) {
ret_code = add_bp_pages(dev, pble_rsrc, &info);
if (ret_code)
goto error;
else
pble_rsrc->stats_paged_sds++;
}
if (gen_pool_add_virt(pble_rsrc->pinfo.pool, (unsigned long)chunk->vaddr,
(phys_addr_t)chunk->fpm_addr, chunk->size, -1)) {
i40iw_pr_err("could not allocate memory by gen_pool_addr_virt()\n");
ret_code = I40IW_ERR_NO_MEMORY;
goto error;
}
pble_rsrc->next_fpm_addr += chunk->size;
i40iw_debug(dev, I40IW_DEBUG_PBLE, "next_fpm_addr = %llx chunk_size[%u] = 0x%x\n",
pble_rsrc->next_fpm_addr, chunk->size, chunk->size);
pble_rsrc->unallocated_pble -= (chunk->size >> 3);
sd_reg_val = (sd_entry_type == I40IW_SD_TYPE_PAGED) ?
sd_entry->u.pd_table.pd_page_addr.pa : sd_entry->u.bp.addr.pa;
if (dev->is_pf && !sd_entry->valid) {
ret_code = i40iw_hmc_sd_one(dev, hmc_info->hmc_fn_id,
sd_reg_val, idx->sd_idx,
sd_entry->entry_type, true);
if (ret_code) {
i40iw_pr_err("cqp cmd failed for sd (pbles)\n");
goto error;
}
}
sd_entry->valid = true;
list_add(&chunk->list, &pble_rsrc->pinfo.clist);
return 0;
error:
kfree(chunk);
return ret_code;
}
/**
* free_lvl2 - free level 2 pble
* @pble_rsrc: pble resource management
* @palloc: level 2 pble allocation
*/
static void free_lvl2(struct i40iw_hmc_pble_rsrc *pble_rsrc,
struct i40iw_pble_alloc *palloc)
{
u32 i;
struct gen_pool *pool;
struct i40iw_pble_level2 *lvl2 = &palloc->level2;
struct i40iw_pble_info *root = &lvl2->root;
struct i40iw_pble_info *leaf = lvl2->leaf;
pool = pble_rsrc->pinfo.pool;
for (i = 0; i < lvl2->leaf_cnt; i++, leaf++) {
if (leaf->addr)
gen_pool_free(pool, leaf->addr, (leaf->cnt << 3));
else
break;
}
if (root->addr)
gen_pool_free(pool, root->addr, (root->cnt << 3));
kfree(lvl2->leaf);
lvl2->leaf = NULL;
}
/**
* get_lvl2_pble - get level 2 pble resource
* @pble_rsrc: pble resource management
* @palloc: level 2 pble allocation
* @pool: pool pointer
*/
static enum i40iw_status_code get_lvl2_pble(struct i40iw_hmc_pble_rsrc *pble_rsrc,
struct i40iw_pble_alloc *palloc,
struct gen_pool *pool)
{
u32 lf4k, lflast, total, i;
u32 pblcnt = PBLE_PER_PAGE;
u64 *addr;
struct i40iw_pble_level2 *lvl2 = &palloc->level2;
struct i40iw_pble_info *root = &lvl2->root;
struct i40iw_pble_info *leaf;
/* number of full 512-pble (4K) leaves */
lf4k = palloc->total_cnt >> 9;
lflast = palloc->total_cnt % PBLE_PER_PAGE;
total = (lflast == 0) ? lf4k : lf4k + 1;
lvl2->leaf_cnt = total;
leaf = kzalloc((sizeof(*leaf) * total), GFP_ATOMIC);
if (!leaf)
return I40IW_ERR_NO_MEMORY;
lvl2->leaf = leaf;
/* allocate pbles for the root */
root->addr = gen_pool_alloc(pool, (total << 3));
if (!root->addr) {
kfree(lvl2->leaf);
lvl2->leaf = NULL;
return I40IW_ERR_NO_MEMORY;
}
root->idx = fpm_to_idx(pble_rsrc,
(u64)gen_pool_virt_to_phys(pool, root->addr));
root->cnt = total;
addr = (u64 *)root->addr;
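/* the root page holds one 64-bit entry per leaf: the fpm index of that leaf */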
for (i = 0; i < total; i++, leaf++) {
pblcnt = (lflast && ((i + 1) == total)) ? lflast : PBLE_PER_PAGE;
leaf->addr = gen_pool_alloc(pool, (pblcnt << 3));
if (!leaf->addr)
goto error;
leaf->idx = fpm_to_idx(pble_rsrc, (u64)gen_pool_virt_to_phys(pool, leaf->addr));
leaf->cnt = pblcnt;
*addr = (u64)leaf->idx;
addr++;
}
palloc->level = I40IW_LEVEL_2;
pble_rsrc->stats_lvl2++;
return 0;
error:
free_lvl2(pble_rsrc, palloc);
return I40IW_ERR_NO_MEMORY;
}
/**
* get_lvl1_pble - get level 1 pble resource
* @dev: hardware control device structure
* @pble_rsrc: pble resource management
* @palloc: level 1 pble allocation
*/
static enum i40iw_status_code get_lvl1_pble(struct i40iw_sc_dev *dev,
struct i40iw_hmc_pble_rsrc *pble_rsrc,
struct i40iw_pble_alloc *palloc)
{
u64 *addr;
struct gen_pool *pool;
struct i40iw_pble_info *lvl1 = &palloc->level1;
pool = pble_rsrc->pinfo.pool;
addr = (u64 *)gen_pool_alloc(pool, (palloc->total_cnt << 3));
if (!addr)
return I40IW_ERR_NO_MEMORY;
palloc->level = I40IW_LEVEL_1;
lvl1->addr = (unsigned long)addr;
lvl1->idx = fpm_to_idx(pble_rsrc, (u64)gen_pool_virt_to_phys(pool,
(unsigned long)addr));
lvl1->cnt = palloc->total_cnt;
pble_rsrc->stats_lvl1++;
return 0;
}
/**
* get_lvl1_lvl2_pble - calls get_lvl1 and get_lvl2 pble routine
* @dev: i40iw_sc_dev struct
* @pble_rsrc: pble resources
* @palloc: contains all information regarding pble (idx + pble addr)
* @pool: pointer to general purpose special memory pool descriptor
*/
static inline enum i40iw_status_code get_lvl1_lvl2_pble(struct i40iw_sc_dev *dev,
struct i40iw_hmc_pble_rsrc *pble_rsrc,
struct i40iw_pble_alloc *palloc,
struct gen_pool *pool)
{
enum i40iw_status_code status = 0;
status = get_lvl1_pble(dev, pble_rsrc, palloc);
if (status && (palloc->total_cnt > PBLE_PER_PAGE))
status = get_lvl2_pble(pble_rsrc, palloc, pool);
return status;
}
/**
* i40iw_get_pble - allocate pbles from the pool
* @dev: i40iw_sc_dev struct
* @pble_rsrc: pble resources
* @palloc: contains all information regarding pble (idx + pble addr)
* @pble_cnt: #of pbles requested
*/
enum i40iw_status_code i40iw_get_pble(struct i40iw_sc_dev *dev,
struct i40iw_hmc_pble_rsrc *pble_rsrc,
struct i40iw_pble_alloc *palloc,
u32 pble_cnt)
{
struct gen_pool *pool;
enum i40iw_status_code status = 0;
u32 max_sds = 0;
int i;
pool = pble_rsrc->pinfo.pool;
palloc->total_cnt = pble_cnt;
palloc->level = I40IW_LEVEL_0;
/* check first to see if we can get pbles without acquiring additional sds */
status = get_lvl1_lvl2_pble(dev, pble_rsrc, palloc, pool);
if (!status)
goto exit;
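/* each additional sd provides 2MB of pbles (1 << 18 entries); this bounds how many sds could be needed */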
max_sds = (palloc->total_cnt >> 18) + 1;
for (i = 0; i < max_sds; i++) {
status = add_pble_pool(dev, pble_rsrc);
if (status)
break;
status = get_lvl1_lvl2_pble(dev, pble_rsrc, palloc, pool);
if (!status)
break;
}
exit:
if (!status)
pble_rsrc->stats_alloc_ok++;
else
pble_rsrc->stats_alloc_fail++;
return status;
}
/**
* i40iw_free_pble - put pbles back into pool
* @pble_rsrc: pble resources
* @palloc: contains all information regarding pble resource being freed
*/
void i40iw_free_pble(struct i40iw_hmc_pble_rsrc *pble_rsrc,
struct i40iw_pble_alloc *palloc)
{
struct gen_pool *pool;
pool = pble_rsrc->pinfo.pool;
if (palloc->level == I40IW_LEVEL_2)
free_lvl2(pble_rsrc, palloc);
else
gen_pool_free(pool, palloc->level1.addr,
(palloc->level1.cnt << 3));
pble_rsrc->stats_alloc_freed++;
}


@@ -1,131 +0,0 @@
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#ifndef I40IW_PBLE_H
#define I40IW_PBLE_H
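/*
 * Each pble (physical buffer list entry) is 8 bytes, so 512 of them
 * (PBLE_PER_PAGE = 1 << PBLE_512_SHIFT) fill one 4K backing page.
 * POOL_SHIFT is the minimum allocation order of the gen_pool.
 */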
#define POOL_SHIFT 6
#define PBLE_PER_PAGE 512
#define I40IW_HMC_PAGED_BP_SHIFT 12
#define PBLE_512_SHIFT 9
enum i40iw_pble_level {
I40IW_LEVEL_0 = 0,
I40IW_LEVEL_1 = 1,
I40IW_LEVEL_2 = 2
};
enum i40iw_alloc_type {
I40IW_NO_ALLOC = 0,
I40IW_DMA_COHERENT = 1,
I40IW_VMALLOC = 2
};
struct i40iw_pble_info {
unsigned long addr;
u32 idx;
u32 cnt;
};
struct i40iw_pble_level2 {
struct i40iw_pble_info root;
struct i40iw_pble_info *leaf;
u32 leaf_cnt;
};
struct i40iw_pble_alloc {
u32 total_cnt;
enum i40iw_pble_level level;
union {
struct i40iw_pble_info level1;
struct i40iw_pble_level2 level2;
};
};
struct sd_pd_idx {
u32 sd_idx;
u32 pd_idx;
u32 rel_pd_idx;
};
struct i40iw_add_page_info {
struct i40iw_chunk *chunk;
struct i40iw_hmc_sd_entry *sd_entry;
struct i40iw_hmc_info *hmc_info;
struct sd_pd_idx idx;
u32 pages;
};
struct i40iw_chunk {
struct list_head list;
u32 size;
void *vaddr;
u64 fpm_addr;
u32 pg_cnt;
dma_addr_t *dmaaddrs;
enum i40iw_alloc_type type;
};
struct i40iw_pble_pool {
struct gen_pool *pool;
struct list_head clist;
u32 total_pble_alloc;
u32 free_pble_cnt;
u32 pool_shift;
};
struct i40iw_hmc_pble_rsrc {
u32 unallocated_pble;
u64 fpm_base_addr;
u64 next_fpm_addr;
struct i40iw_pble_pool pinfo;
u32 stats_direct_sds;
u32 stats_paged_sds;
u64 stats_alloc_ok;
u64 stats_alloc_fail;
u64 stats_alloc_freed;
u64 stats_lvl1;
u64 stats_lvl2;
};
void i40iw_destroy_pble_pool(struct i40iw_sc_dev *dev, struct i40iw_hmc_pble_rsrc *pble_rsrc);
enum i40iw_status_code i40iw_hmc_init_pble(struct i40iw_sc_dev *dev,
struct i40iw_hmc_pble_rsrc *pble_rsrc);
void i40iw_free_pble(struct i40iw_hmc_pble_rsrc *pble_rsrc, struct i40iw_pble_alloc *palloc);
enum i40iw_status_code i40iw_get_pble(struct i40iw_sc_dev *dev,
struct i40iw_hmc_pble_rsrc *pble_rsrc,
struct i40iw_pble_alloc *palloc,
u32 pble_cnt);
#endif

File diff suppressed because it is too large


@@ -1,188 +0,0 @@
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#ifndef I40IW_PUDA_H
#define I40IW_PUDA_H
#define I40IW_IEQ_MPA_FRAMING 6
struct i40iw_sc_dev;
struct i40iw_sc_qp;
struct i40iw_sc_cq;
enum puda_resource_type {
I40IW_PUDA_RSRC_TYPE_ILQ = 1,
I40IW_PUDA_RSRC_TYPE_IEQ
};
enum puda_rsrc_complete {
PUDA_CQ_CREATED = 1,
PUDA_QP_CREATED,
PUDA_TX_COMPLETE,
PUDA_RX_COMPLETE,
PUDA_HASH_CRC_COMPLETE
};
struct i40iw_puda_completion_info {
struct i40iw_qp_uk *qp;
u8 q_type;
u8 vlan_valid;
u8 l3proto;
u8 l4proto;
u16 payload_len;
u32 compl_error; /* No_err=0, else major and minor err code */
u32 qp_id;
u32 wqe_idx;
};
struct i40iw_puda_send_info {
u64 paddr; /* Physical address */
u32 len;
u8 tcplen;
u8 maclen;
bool ipv4;
bool doloopback;
void *scratch;
};
struct i40iw_puda_buf {
struct list_head list; /* MUST be first entry */
struct i40iw_dma_mem mem; /* DMA memory for the buffer */
struct i40iw_puda_buf *next; /* for alloclist in rsrc struct */
struct i40iw_virt_mem buf_mem; /* Buffer memory for this buffer */
void *scratch;
u8 *iph;
u8 *tcph;
u8 *data;
u16 datalen;
u16 vlan_id;
u8 tcphlen; /* tcp length in bytes */
u8 maclen; /* mac length in bytes */
u32 totallen; /* maclen+iphlen+tcphlen+datalen */
atomic_t refcount;
u8 hdrlen;
bool ipv4;
u32 seqnum;
};
struct i40iw_puda_rsrc_info {
enum puda_resource_type type; /* ILQ or IEQ */
u32 count;
u16 pd_id;
u32 cq_id;
u32 qp_id;
u32 sq_size;
u32 rq_size;
u16 buf_size;
u16 mss;
u32 tx_buf_cnt; /* total bufs allocated will be rq_size + tx_buf_cnt */
void (*receive)(struct i40iw_sc_vsi *, struct i40iw_puda_buf *);
void (*xmit_complete)(struct i40iw_sc_vsi *, void *);
};
struct i40iw_puda_rsrc {
struct i40iw_sc_cq cq;
struct i40iw_sc_qp qp;
struct i40iw_sc_pd sc_pd;
struct i40iw_sc_dev *dev;
struct i40iw_sc_vsi *vsi;
struct i40iw_dma_mem cqmem;
struct i40iw_dma_mem qpmem;
struct i40iw_virt_mem ilq_mem;
enum puda_rsrc_complete completion;
enum puda_resource_type type;
u16 buf_size; /* buffer must be max datalen + tcpip hdr + mac */
u16 mss;
u32 cq_id;
u32 qp_id;
u32 sq_size;
u32 rq_size;
u32 cq_size;
struct i40iw_sq_uk_wr_trk_info *sq_wrtrk_array;
u64 *rq_wrid_array;
u32 compl_rxwqe_idx;
u32 rx_wqe_idx;
u32 rxq_invalid_cnt;
u32 tx_wqe_avail_cnt;
bool check_crc;
struct shash_desc *hash_desc;
struct list_head txpend;
struct list_head bufpool; /* free buffers pool list for recv and xmit */
u32 alloc_buf_count;
u32 avail_buf_count; /* snapshot of currently available buffers */
spinlock_t bufpool_lock;
struct i40iw_puda_buf *alloclist;
void (*receive)(struct i40iw_sc_vsi *, struct i40iw_puda_buf *);
void (*xmit_complete)(struct i40iw_sc_vsi *, void *);
/* puda stats */
u64 stats_buf_alloc_fail;
u64 stats_pkt_rcvd;
u64 stats_pkt_sent;
u64 stats_rcvd_pkt_err;
u64 stats_sent_pkt_q;
u64 stats_bad_qp_id;
};
struct i40iw_puda_buf *i40iw_puda_get_bufpool(struct i40iw_puda_rsrc *rsrc);
void i40iw_puda_ret_bufpool(struct i40iw_puda_rsrc *rsrc,
struct i40iw_puda_buf *buf);
void i40iw_puda_send_buf(struct i40iw_puda_rsrc *rsrc,
struct i40iw_puda_buf *buf);
enum i40iw_status_code i40iw_puda_send(struct i40iw_sc_qp *qp,
struct i40iw_puda_send_info *info);
enum i40iw_status_code i40iw_puda_create_rsrc(struct i40iw_sc_vsi *vsi,
struct i40iw_puda_rsrc_info *info);
void i40iw_puda_dele_resources(struct i40iw_sc_vsi *vsi,
enum puda_resource_type type,
bool reset);
enum i40iw_status_code i40iw_puda_poll_completion(struct i40iw_sc_dev *dev,
struct i40iw_sc_cq *cq, u32 *compl_err);
struct i40iw_sc_qp *i40iw_ieq_get_qp(struct i40iw_sc_dev *dev,
struct i40iw_puda_buf *buf);
enum i40iw_status_code i40iw_puda_get_tcpip_info(struct i40iw_puda_completion_info *info,
struct i40iw_puda_buf *buf);
enum i40iw_status_code i40iw_ieq_check_mpacrc(struct shash_desc *desc,
void *addr, u32 length, u32 value);
enum i40iw_status_code i40iw_init_hash_desc(struct shash_desc **desc);
void i40iw_ieq_mpa_crc_ae(struct i40iw_sc_dev *dev, struct i40iw_sc_qp *qp);
void i40iw_free_hash_desc(struct shash_desc *desc);
void i40iw_ieq_update_tcpip_info(struct i40iw_puda_buf *buf, u16 length,
u32 seqnum);
enum i40iw_status_code i40iw_cqp_qp_create_cmd(struct i40iw_sc_dev *dev, struct i40iw_sc_qp *qp);
enum i40iw_status_code i40iw_cqp_cq_create_cmd(struct i40iw_sc_dev *dev, struct i40iw_sc_cq *cq);
void i40iw_cqp_qp_destroy_cmd(struct i40iw_sc_dev *dev, struct i40iw_sc_qp *qp);
void i40iw_cqp_cq_destroy_cmd(struct i40iw_sc_dev *dev, struct i40iw_sc_cq *cq);
void i40iw_ieq_cleanup_qp(struct i40iw_puda_rsrc *ieq, struct i40iw_sc_qp *qp);
#endif

File diff suppressed because it is too large


@@ -1,101 +0,0 @@
/*******************************************************************************
*
* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenFabrics.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*******************************************************************************/
#ifndef I40IW_STATUS_H
#define I40IW_STATUS_H
/* Error Codes */
enum i40iw_status_code {
I40IW_SUCCESS = 0,
I40IW_ERR_NVM = -1,
I40IW_ERR_NVM_CHECKSUM = -2,
I40IW_ERR_CONFIG = -4,
I40IW_ERR_PARAM = -5,
I40IW_ERR_DEVICE_NOT_SUPPORTED = -6,
I40IW_ERR_RESET_FAILED = -7,
I40IW_ERR_SWFW_SYNC = -8,
I40IW_ERR_NO_MEMORY = -9,
I40IW_ERR_BAD_PTR = -10,
I40IW_ERR_INVALID_PD_ID = -11,
I40IW_ERR_INVALID_QP_ID = -12,
I40IW_ERR_INVALID_CQ_ID = -13,
I40IW_ERR_INVALID_CEQ_ID = -14,
I40IW_ERR_INVALID_AEQ_ID = -15,
I40IW_ERR_INVALID_SIZE = -16,
I40IW_ERR_INVALID_ARP_INDEX = -17,
I40IW_ERR_INVALID_FPM_FUNC_ID = -18,
I40IW_ERR_QP_INVALID_MSG_SIZE = -19,
I40IW_ERR_QP_TOOMANY_WRS_POSTED = -20,
I40IW_ERR_INVALID_FRAG_COUNT = -21,
I40IW_ERR_QUEUE_EMPTY = -22,
I40IW_ERR_INVALID_ALIGNMENT = -23,
I40IW_ERR_FLUSHED_QUEUE = -24,
I40IW_ERR_INVALID_INLINE_DATA_SIZE = -26,
I40IW_ERR_TIMEOUT = -27,
I40IW_ERR_OPCODE_MISMATCH = -28,
I40IW_ERR_CQP_COMPL_ERROR = -29,
I40IW_ERR_INVALID_VF_ID = -30,
I40IW_ERR_INVALID_HMCFN_ID = -31,
I40IW_ERR_BACKING_PAGE_ERROR = -32,
I40IW_ERR_NO_PBLCHUNKS_AVAILABLE = -33,
I40IW_ERR_INVALID_PBLE_INDEX = -34,
I40IW_ERR_INVALID_SD_INDEX = -35,
I40IW_ERR_INVALID_PAGE_DESC_INDEX = -36,
I40IW_ERR_INVALID_SD_TYPE = -37,
I40IW_ERR_MEMCPY_FAILED = -38,
I40IW_ERR_INVALID_HMC_OBJ_INDEX = -39,
I40IW_ERR_INVALID_HMC_OBJ_COUNT = -40,
I40IW_ERR_INVALID_SRQ_ARM_LIMIT = -41,
I40IW_ERR_SRQ_ENABLED = -42,
I40IW_ERR_BUF_TOO_SHORT = -43,
I40IW_ERR_BAD_IWARP_CQE = -44,
I40IW_ERR_NVM_BLANK_MODE = -45,
I40IW_ERR_NOT_IMPLEMENTED = -46,
I40IW_ERR_PE_DOORBELL_NOT_ENABLED = -47,
I40IW_ERR_NOT_READY = -48,
I40IW_NOT_SUPPORTED = -49,
I40IW_ERR_FIRMWARE_API_VERSION = -50,
I40IW_ERR_RING_FULL = -51,
I40IW_ERR_MPA_CRC = -61,
I40IW_ERR_NO_TXBUFS = -62,
I40IW_ERR_SEQ_NUM = -63,
I40IW_ERR_list_empty = -64,
I40IW_ERR_INVALID_MAC_ADDR = -65,
I40IW_ERR_BAD_STAG = -66,
I40IW_ERR_CQ_COMPL_ERROR = -67,
I40IW_ERR_QUEUE_DESTROYED = -68,
I40IW_ERR_INVALID_FEAT_CNT = -69
};
#endif

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff