linux/include/uapi/rdma/vmw_pvrdma-abi.h

IB: Add vmw_pvrdma driver
2016-10-03 10:10:22 +08:00

This patch series adds a driver for a paravirtual RDMA device. The
device is developed for VMware's virtual machines and allows existing
RDMA applications to continue to use the existing Verbs API when
deployed in VMs on ESXi. We recently gave a presentation on this
device at the OFA Workshop [1].

Description and RDMA Support
============================
The virtual device is exposed as a dual-function PCIe device. One part
is a virtual network device (VMXNet3) which provides networking
properties like MAC and IP addresses to the RDMA part of the device.
The networking properties are used to register GIDs required by RDMA
applications to communicate.

These patches add support, and all the required infrastructure, for
letting applications use such a device. We support the mandatory Verbs
API as well as the base memory management extensions (Local
Invalidate, Send with Invalidate and Fast Register Work Requests). We
currently support both Reliable Connected and Unreliable Datagram QPs,
but do not support Shared Receive Queues (SRQs).

We also support the following types of Work Requests:
 o Send/Receive (with or without Immediate Data)
 o RDMA Write (with or without Immediate Data)
 o RDMA Read
 o Local Invalidate
 o Send with Invalidate
 o Fast Register Work Requests

This version only adds support for version 1 of RoCE; we will add
RoCEv2 support in a future patch. We support registration of both
MAC-based and IP-based GIDs. I have also created a git tree for our
user-level driver [2].

Testing
=======
We have tested this internally on various guest operating systems -
Red Hat, CentOS, Ubuntu 12.04/14.04/16.04, Oracle Enterprise Linux and
SLES 12 - using backported versions of this driver. The tests included
several runs of the performance tests included with OFED, the Intel
MPI PingPong benchmark on OpenMPI, and krping for FRWRs. Mellanox has
been kind enough to test a backported version of the driver internally
on their hardware, using a VMware-provided ESX build.

I have also applied and tested this with Doug's k.o/for-4.9 branch
(commit 5603910b). Note that this patch series should be applied as a
whole; I split out the commits so that it may be easier to review.

PVRDMA Resources
================
[1] OFA Workshop Presentation -
    https://openfabrics.org/images/eventpresos/2016presentations/102parardma.pdf
[2] Libpvrdma User-level library -
    http://git.openfabrics.org/?p=~aditr/libpvrdma.git;a=summary

Reviewed-by: Jorgen Hansen <jhansen@vmware.com>
Reviewed-by: George Zhang <georgezhang@vmware.com>
Reviewed-by: Aditya Sarwade <asarwade@vmware.com>
Reviewed-by: Bryan Tan <bryantan@vmware.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Adit Ranadive <aditr@vmware.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
/*
* Copyright (c) 2012-2016 VMware, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of EITHER the GNU General Public License
* version 2 as published by the Free Software Foundation or the BSD
* 2-Clause License. This program is distributed in the hope that it
* will be useful, but WITHOUT ANY WARRANTY; WITHOUT EVEN THE IMPLIED
* WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
* See the GNU General Public License version 2 for more details at
* http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html.
*
* You should have received a copy of the GNU General Public License
* along with this program available in the file COPYING in the main
* directory of this source tree.
*
* The BSD 2-Clause License
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
* INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __VMW_PVRDMA_ABI_H__
#define __VMW_PVRDMA_ABI_H__
#include <linux/types.h>
#define PVRDMA_UVERBS_ABI_VERSION 3 /* ABI Version. */
#define PVRDMA_UAR_HANDLE_MASK 0x00FFFFFF /* Bottom 24 bits. */
#define PVRDMA_UAR_QP_OFFSET 0 /* QP doorbell. */
/* BIT() is not visible to userspace, so the doorbell bits are spelled out. */
#define PVRDMA_UAR_QP_SEND (1 << 30) /* Send bit. */
#define PVRDMA_UAR_QP_RECV (1 << 31) /* Recv bit. */
#define PVRDMA_UAR_CQ_OFFSET 4 /* CQ doorbell. */
#define PVRDMA_UAR_CQ_ARM_SOL (1 << 29) /* Arm solicited bit. */
#define PVRDMA_UAR_CQ_ARM (1 << 30) /* Arm bit. */
#define PVRDMA_UAR_CQ_POLL (1 << 31) /* Poll bit. */
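/*
 * Doorbell sketch (illustrative, not part of the ABI): a user-level
 * driver rings a doorbell by writing a single 32-bit word to the
 * mmap'ed UAR page, with the object handle in the bottom 24 bits and
 * the operation encoded in the high bits. For example, to notify the
 * device of a posted send:
 *
 *	*(volatile __u32 *)(uar + PVRDMA_UAR_QP_OFFSET) =
 *		PVRDMA_UAR_QP_SEND | (qp_handle & PVRDMA_UAR_HANDLE_MASK);
 *
 * 'uar' and 'qp_handle' are assumed names for the UAR mapping and the
 * QP handle returned by the kernel driver.
 */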
enum pvrdma_wr_opcode {
PVRDMA_WR_RDMA_WRITE,
PVRDMA_WR_RDMA_WRITE_WITH_IMM,
PVRDMA_WR_SEND,
PVRDMA_WR_SEND_WITH_IMM,
PVRDMA_WR_RDMA_READ,
PVRDMA_WR_ATOMIC_CMP_AND_SWP,
PVRDMA_WR_ATOMIC_FETCH_AND_ADD,
PVRDMA_WR_LSO,
PVRDMA_WR_SEND_WITH_INV,
PVRDMA_WR_RDMA_READ_WITH_INV,
PVRDMA_WR_LOCAL_INV,
PVRDMA_WR_FAST_REG_MR,
PVRDMA_WR_MASKED_ATOMIC_CMP_AND_SWP,
PVRDMA_WR_MASKED_ATOMIC_FETCH_AND_ADD,
PVRDMA_WR_BIND_MW,
PVRDMA_WR_REG_SIG_MR,
};
enum pvrdma_wc_status {
PVRDMA_WC_SUCCESS,
PVRDMA_WC_LOC_LEN_ERR,
PVRDMA_WC_LOC_QP_OP_ERR,
PVRDMA_WC_LOC_EEC_OP_ERR,
PVRDMA_WC_LOC_PROT_ERR,
PVRDMA_WC_WR_FLUSH_ERR,
PVRDMA_WC_MW_BIND_ERR,
PVRDMA_WC_BAD_RESP_ERR,
PVRDMA_WC_LOC_ACCESS_ERR,
PVRDMA_WC_REM_INV_REQ_ERR,
PVRDMA_WC_REM_ACCESS_ERR,
PVRDMA_WC_REM_OP_ERR,
PVRDMA_WC_RETRY_EXC_ERR,
PVRDMA_WC_RNR_RETRY_EXC_ERR,
PVRDMA_WC_LOC_RDD_VIOL_ERR,
PVRDMA_WC_REM_INV_RD_REQ_ERR,
PVRDMA_WC_REM_ABORT_ERR,
PVRDMA_WC_INV_EECN_ERR,
PVRDMA_WC_INV_EEC_STATE_ERR,
PVRDMA_WC_FATAL_ERR,
PVRDMA_WC_RESP_TIMEOUT_ERR,
PVRDMA_WC_GENERAL_ERR,
};
enum pvrdma_wc_opcode {
PVRDMA_WC_SEND,
PVRDMA_WC_RDMA_WRITE,
PVRDMA_WC_RDMA_READ,
PVRDMA_WC_COMP_SWAP,
PVRDMA_WC_FETCH_ADD,
PVRDMA_WC_BIND_MW,
PVRDMA_WC_LSO,
PVRDMA_WC_LOCAL_INV,
PVRDMA_WC_FAST_REG_MR,
PVRDMA_WC_MASKED_COMP_SWAP,
PVRDMA_WC_MASKED_FETCH_ADD,
PVRDMA_WC_RECV = 1 << 7,
PVRDMA_WC_RECV_RDMA_WITH_IMM,
};
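/*
 * Note that PVRDMA_WC_RECV is deliberately 1 << 7 (the same convention
 * as IB_WC_RECV in the kernel verbs layer) so that receive completions
 * can be separated from send-side ones with a single bit test in
 * consumer code, e.g. (illustrative):
 *
 *	if (cqe->opcode & PVRDMA_WC_RECV)
 *		handle_recv(cqe);
 *
 * 'handle_recv' is a hypothetical consumer function.
 */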
enum pvrdma_wc_flags {
PVRDMA_WC_GRH = 1 << 0,
PVRDMA_WC_WITH_IMM = 1 << 1,
PVRDMA_WC_WITH_INVALIDATE = 1 << 2,
PVRDMA_WC_IP_CSUM_OK = 1 << 3,
PVRDMA_WC_WITH_SMAC = 1 << 4,
PVRDMA_WC_WITH_VLAN = 1 << 5,
PVRDMA_WC_FLAGS_MAX = PVRDMA_WC_WITH_VLAN,
};
struct pvrdma_alloc_ucontext_resp {
__u32 qp_tab_size;
__u32 reserved;
};
struct pvrdma_alloc_pd_resp {
__u32 pdn;
__u32 reserved;
};
struct pvrdma_create_cq {
__u64 buf_addr;
__u32 buf_size;
__u32 reserved;
};
struct pvrdma_create_cq_resp {
__u32 cqn;
__u32 reserved;
};
struct pvrdma_resize_cq {
__u64 buf_addr;
__u32 buf_size;
__u32 reserved;
};
struct pvrdma_create_srq {
__u64 buf_addr;
};
struct pvrdma_create_srq_resp {
__u32 srqn;
__u32 reserved;
};
struct pvrdma_create_qp {
__u64 rbuf_addr;
__u64 sbuf_addr;
__u32 rbuf_size;
__u32 sbuf_size;
__u64 qp_addr;
};
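/*
 * Illustrative flow (assumed, not defined by this header): the
 * user-level driver allocates the send and receive rings itself and
 * passes their addresses and sizes in the create-QP command, e.g.:
 *
 *	struct pvrdma_create_qp cmd = {
 *		.rbuf_addr = (__u64)(uintptr_t)rq_ring,
 *		.sbuf_addr = (__u64)(uintptr_t)sq_ring,
 *		.rbuf_size = rq_bytes,
 *		.sbuf_size = sq_bytes,
 *	};
 *
 * 'rq_ring', 'sq_ring', 'rq_bytes' and 'sq_bytes' are hypothetical
 * names for the library's ring allocations.
 */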
/* PVRDMA masked atomic compare and swap */
struct pvrdma_ex_cmp_swap {
__u64 swap_val;
__u64 compare_val;
__u64 swap_mask;
__u64 compare_mask;
};
/* PVRDMA masked atomic fetch and add */
struct pvrdma_ex_fetch_add {
__u64 add_val;
__u64 field_boundary;
};
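/*
 * Masked-atomic semantics, as summarized from the IB extended atomics
 * these structures model (a sketch, not normative text): for masked
 * compare-and-swap, only bits set in compare_mask take part in the
 * compare, and only bits set in swap_mask are replaced by the
 * corresponding bits of swap_val. For masked fetch-and-add, set bits
 * in field_boundary split the 64-bit operand into independent fields,
 * so a carry produced by add_val does not propagate across a boundary
 * bit.
 */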
/* PVRDMA address vector. */
struct pvrdma_av {
__u32 port_pd;
__u32 sl_tclass_flowlabel;
__u8 dgid[16];
__u8 src_path_bits;
__u8 gid_index;
__u8 stat_rate;
__u8 hop_limit;
__u8 dmac[6];
__u8 reserved[6];
};
/* PVRDMA scatter/gather entry */
struct pvrdma_sge {
__u64 addr;
__u32 length;
__u32 lkey;
};
/* PVRDMA receive queue work request */
struct pvrdma_rq_wqe_hdr {
__u64 wr_id; /* wr id */
__u32 num_sge; /* size of s/g array */
__u32 total_len; /* reserved */
};
/* Use pvrdma_sge (ib_sge) for receive queue s/g array elements. */
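/*
 * Layout sketch (illustrative): each receive WQE in the ring is the
 * header above immediately followed by its s/g array, so its logical
 * size is:
 *
 *	sizeof(struct pvrdma_rq_wqe_hdr) +
 *		num_sge * sizeof(struct pvrdma_sge)
 *
 * WQE slots are assumed to be rounded up to a fixed power-of-two
 * stride by the driver.
 */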
/* PVRDMA send queue work request */
struct pvrdma_sq_wqe_hdr {
__u64 wr_id; /* wr id */
__u32 num_sge; /* size of s/g array */
__u32 total_len; /* reserved */
__u32 opcode; /* operation type */
__u32 send_flags; /* wr flags */
union {
__be32 imm_data;
__u32 invalidate_rkey;
} ex;
__u32 reserved;
union {
struct {
__u64 remote_addr;
__u32 rkey;
__u8 reserved[4];
} rdma;
struct {
__u64 remote_addr;
__u64 compare_add;
__u64 swap;
__u32 rkey;
__u32 reserved;
} atomic;
struct {
__u64 remote_addr;
__u32 log_arg_sz;
__u32 rkey;
union {
struct pvrdma_ex_cmp_swap cmp_swap;
struct pvrdma_ex_fetch_add fetch_add;
} wr_data;
} masked_atomics;
struct {
__u64 iova_start;
__u64 pl_pdir_dma;
__u32 page_shift;
__u32 page_list_len;
__u32 length;
__u32 access_flags;
__u32 rkey;
} fast_reg;
struct {
__u32 remote_qpn;
__u32 remote_qkey;
struct pvrdma_av av;
} ud;
} wr;
};
/* Use pvrdma_sge (ib_sge) for send queue s/g array elements. */
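/*
 * Send WQEs are assumed to follow the same layout sketch as receive
 * WQEs: the fixed header above, immediately followed by num_sge
 * struct pvrdma_sge entries.
 */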
/* Completion queue element. */
struct pvrdma_cqe {
__u64 wr_id;
__u64 qp;
__u32 opcode;
__u32 status;
__u32 byte_len;
__be32 imm_data;
__u32 src_qp;
__u32 wc_flags;
__u32 vendor_err;
__u16 pkey_index;
__u16 slid;
__u8 sl;
__u8 dlid_path_bits;
__u8 port_num;
__u8 smac[6];
__u8 reserved2[7]; /* Pad to next power of 2 (64). */
};
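/*
 * As the comment on reserved2 notes, the padding brings the CQE to
 * exactly 64 bytes: 2 * 8 (wr_id, qp) + 7 * 4 (the __u32/__be32
 * fields) + 2 * 2 (pkey_index, slid) + 3 * 1 (sl, dlid_path_bits,
 * port_num) + 6 (smac) + 7 (reserved2) = 64. A build-time check such
 * as the following (illustrative, not part of this header) keeps that
 * invariant honest:
 *
 *	_Static_assert(sizeof(struct pvrdma_cqe) == 64,
 *		       "pvrdma_cqe must be 64 bytes");
 */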
#endif /* __VMW_PVRDMA_ABI_H__ */